Search results for: scale invariant feature
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7534


5014 PointNetLK-OBB: A Point Cloud Registration Algorithm with High Accuracy

Authors: Wenhao Lan, Ning Li, Qiang Tong

Abstract:

To improve the registration accuracy of a source point cloud and template point cloud when the initial relative deflection angle is large, a PointNetLK algorithm combined with an oriented bounding box (PointNetLK-OBB) is proposed. In this algorithm, the OBB of a 3D point cloud is used to represent the macro feature of the source and template point clouds. Under the guidance of the iterative closest point algorithm, the OBBs of the source and template point clouds are aligned, and a mirror-symmetry effect is produced between them. According to the fitting degree of the source and template point clouds, the mirror-symmetry plane is detected, and the optimal rotation and translation of the source point cloud are obtained to complete the 3D point cloud registration task. To verify the effectiveness of the proposed algorithm, a comparative experiment was performed using the publicly available ModelNet40 dataset. The experimental results demonstrate that, compared with PointNetLK, PointNetLK-OBB improves the registration accuracy of the source and template point clouds when the initial relative deflection angle is large, and reduces the sensitivity to the initial relative position between the source and template point clouds. The primary contributions of this paper are the use of PointNetLK to avoid the non-convexity of traditional point cloud registration and the use of the OBB's regularity to avoid the local-optimum problem in the PointNetLK context.
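The abstract does not specify how the OBB is computed; a common construction uses the principal axes of the point distribution. The sketch below assumes that PCA-based approach and is an illustration, not necessarily the authors' method:

```python
import numpy as np

def oriented_bounding_box(points):
    """PCA-based oriented bounding box (OBB) for an N x 3 point cloud.

    Returns the box center, a 3x3 rotation whose columns are the box axes
    (sorted by decreasing variance), and the half-extents along each axis.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Principal axes of the point distribution give the OBB orientation.
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axes = eigvecs[:, ::-1]            # eigh sorts ascending; reverse columns
    local = centered @ axes            # express points in the box frame
    mins, maxs = local.min(axis=0), local.max(axis=0)
    center = centroid + axes @ ((mins + maxs) / 2)
    half_extents = (maxs - mins) / 2
    return center, axes, half_extents
```

Aligning the two clouds' box frames then gives a coarse initial pose that is insensitive to a large initial deflection angle.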

Keywords: mirror symmetry, oriented bounding box, point cloud registration, PointNetLK-OBB

Procedia PDF Downloads 150
5013 Remote Sensing of Urban Land Cover Change: Trends, Driving Forces, and Indicators

Authors: Wei Ji

Abstract:

This study was conducted in the Kansas City metropolitan area of the United States, which has experienced significant urban sprawl in recent decades. The remote sensing of land cover changes in this area spanned four decades, from 1972 through 2010. The project was implemented in two stages: the first stage focused on detection of long-term trends of urban land cover change, while the second examined how to detect the coupled effects of human impact and climate change on urban landscapes. For the first-stage study, six Landsat images were used with a time interval of about five years for the period from 1972 through 2001. Four major land cover types, built-up land, forestland, non-forest vegetation land, and surface water, were mapped using supervised image classification techniques. The study found that over the three decades the built-up land in the study area more than doubled, mainly at the expense of non-forest vegetation land. Surprisingly and interestingly, the area also saw a significant gain in surface water coverage. This observation raised questions: How have human activities and precipitation variation jointly impacted surface water cover during recent decades? How can we detect such coupled impacts through remote sensing analysis? These questions led to the second stage of the study, in which we designed and developed approaches to detecting fine-scale surface waters and analyzing the coupled effects of human impact and precipitation variation on the waters. To effectively detect urban landscape changes that might be jointly shaped by precipitation variation, our study proposed “urban wetscapes” (loosely defined urban wetlands) as a new indicator for remote sensing detection. The study examined whether urban wetscape dynamics were a sensitive indicator of the coupled effects of the two driving forces.
To better detect this indicator, a rule-based classification algorithm was developed to identify fine-scale, hidden wetlands that could not be appropriately detected by a traditional image classification based on their spectral differentiability. Three SPOT images, for the years 1992, 2008, and 2010, respectively, were classified with this technique to generate the four types of land cover described above. The spatial analyses of remotely sensed wetscape changes were implemented at the metropolitan, watershed, and sub-watershed scales, as well as by the size of surface water bodies, in order to accurately reveal urban wetscape change trends in relation to the driving forces. The study identified that urban wetscape dynamics varied in trend and magnitude from the metropolitan scale through watersheds to sub-watersheds in response to human impacts at different scales. The study also found that increased precipitation in the region in the past decades swelled larger wetlands in particular, while smaller wetlands generally shrank, mainly due to human development activities. These results confirm that wetscape dynamics can effectively reveal the coupled effects of human impact and climate change on urban landscapes. As such, remote sensing of this indicator provides new insights into the relationships between urban land cover changes and driving forces.
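The actual rules of the study's classifier are not given in the abstract. As a hedged illustration of the rule-based idea, the sketch below classifies pixels with common spectral indices (NDWI for water, NDVI for vegetation); the index choice and thresholds are purely illustrative assumptions:

```python
import numpy as np

def classify_land_cover(green, nir, red):
    """Toy rule-based per-pixel classifier over band reflectances.

    Uses NDWI (water) and NDVI (vegetation) with illustrative thresholds;
    the study's real rules are not published in the abstract.
    """
    ndwi = (green - nir) / (green + nir + 1e-9)
    ndvi = (nir - red) / (nir + red + 1e-9)
    labels = np.full(green.shape, "built-up", dtype=object)
    labels[ndvi > 0.5] = "forest"
    labels[(ndvi > 0.2) & (ndvi <= 0.5)] = "non-forest vegetation"
    labels[ndwi > 0.3] = "water"   # water rule applied last, so it overrides
    return labels
```

Rule order matters here: applying the water rule last lets dark, low-NDVI water pixels override the vegetation and built-up rules.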

Keywords: urban land cover, human impact, climate change, rule-based classification, across-scale analysis

Procedia PDF Downloads 308
5012 Pilot Scale Production and Compatibility Criteria of New Self-Cleaning Materials

Authors: Jonjaua Ranogajec, Ognjen Rudic, Snezana Pasalic, Snezana Vucetic, Damir Cjepa

Abstract:

The paper covers a chain of activities, from synthesis and the establishment of a methodology for characterization and testing of novel protective materials, through pilot production and application on model supports. It summarizes the results regarding the development of the pilot production protocol for newly developed self-cleaning materials. The production parameters were optimized in order to improve the most important functional properties (mineralogical characteristics, particle size, self-cleaning properties, and photocatalytic activity) of the newly designed nanocomposite material.

Keywords: pilot production, self-cleaning materials, compatibility, cultural heritage

Procedia PDF Downloads 395
5011 A Group Setting of IED in Microgrid Protection Management System

Authors: Jyh-Cherng Gu, Ming-Ta Yang, Chao-Fong Yan, Hsin-Yung Chung, Yung-Ruei Chang, Yih-Der Lee, Chen-Min Chan, Chia-Hao Hsu

Abstract:

There are a number of distributed generations (DGs) installed in a microgrid, which may produce diverse paths and directions of power flow or fault current. The overcurrent protection scheme for the traditional radial-type distribution system no longer meets the needs of microgrid protection. Integrating intelligent electronic devices (IEDs) and a supervisory control and data acquisition (SCADA) system with the IEC 61850 communication protocol, this paper proposes a microgrid protection management system (MPMS) to protect the power system from faults. In the proposed method, the MPMS performs logic programming of each IED to coordinate their tripping sequence. The GOOSE message defined in IEC 61850 is used as the transmission medium among IEDs. Moreover, to cope with the difference in microgrid fault current between grid-connected mode and islanded mode, the proposed MPMS applies the group-setting feature of the IEDs to maintain protection and robust adaptability. Once the microgrid topology varies, the MPMS recalculates the fault current and updates the group setting of each IED. When a fault occurs, the IEDs isolate it at once. Finally, the Matlab/Simulink and Elipse Power Studio software are used to simulate and demonstrate the feasibility of the proposed method.
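The group-setting idea can be caricatured in a few lines: one setting group per operating mode, with the MPMS switching groups when the topology changes. The two-group split and all numeric settings below are hypothetical placeholders, not the paper's values:

```python
# Hypothetical setting groups: islanded mode sees much lower fault current
# from inverter-based DGs, so its pickup must be far more sensitive.
SETTING_GROUPS = {
    "grid-connected": {"pickup_current_a": 800.0, "time_dial": 0.30},
    "islanded":       {"pickup_current_a": 250.0, "time_dial": 0.10},
}

def update_ied_settings(mode, fault_current_a):
    """Select the active setting group for the current topology and flag a
    trip when the recalculated fault current exceeds the group's pickup."""
    group = SETTING_GROUPS[mode]
    trip = fault_current_a > group["pickup_current_a"]
    return group, trip
```

A 300 A fault would thus trip the islanded-mode group but sit below the grid-connected pickup, which is exactly the mismatch the group-setting mechanism addresses.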

Keywords: IEC 61850, IED, group setting, microgrid

Procedia PDF Downloads 463
5010 A Web-Based Systems Immunology Toolkit Allowing the Visualization and Comparative Analysis of Publicly Available Collective Data to Decipher Immune Regulation in Early Life

Authors: Mahbuba Rahman, Sabri Boughorbel, Scott Presnell, Charlie Quinn, Darawan Rinchai, Damien Chaussabel, Nico Marr

Abstract:

Collections of large-scale datasets made available in public repositories can be used to identify and fill gaps in biomedical knowledge. But first, these data need to be made readily accessible to researchers for analysis and interpretation. Here, a collection of transcriptome datasets was made available to investigate the functional programming of human hematopoietic cells in early life. Thirty-two datasets were retrieved from the NCBI Gene Expression Omnibus (GEO) and loaded in a custom, interactive web application called the Gene Expression Browser (GXB), designed for visualization and query of integrated large-scale data. Multiple sample groupings and gene rank lists were created based on the study design and variables in each dataset. Web links to customized graphical views can be generated by users and subsequently be used to graphically present data in manuscripts for publication. The GXB tool also enables browsing of a single gene across datasets, which can provide information on the role of a given molecule across biological systems. The dataset collection is available online. As a proof of principle, one of the datasets (GSE25087) was re-analyzed to identify genes that are differentially expressed by regulatory T cells in early life. Re-analysis of this dataset and a cross-study comparison using multiple other datasets in the above-mentioned collection revealed that PMCH, a gene encoding a precursor of melanin-concentrating hormone (MCH), a cyclic neuropeptide, is highly expressed in a variety of other hematopoietic cell types, including neonatal erythroid cells as well as plasmacytoid dendritic cells upon viral infection. Our findings suggest an as yet unrecognized role of MCH in immune regulation, thereby highlighting the unique potential of the curated dataset collection and systems biology approach to generate new hypotheses which can be tested in future mechanistic studies.

Keywords: early-life, GEO datasets, PMCH, interactive query, systems biology

Procedia PDF Downloads 296
5009 Urban Heat Islands Analysis of Matera, Italy Based on the Change of Land Cover Using Satellite Landsat Images from 2000 to 2017

Authors: Giuseppina Anna Giorgio, Angela Lorusso, Maria Ragosta, Vito Telesca

Abstract:

Climate change is a major public health threat due to the effects of extreme weather events on human health and on quality of life in general. In this context, mean temperatures are increasing, and in particular extreme temperatures, with heat waves becoming more frequent, more intense, and longer lasting. In many cities, extreme heat waves have drastically increased, giving rise to the so-called Urban Heat Island (UHI) phenomenon. In an urban centre, maximum temperatures may be up to 10 °C warmer, due to different local atmospheric conditions. UHI occurs in metropolitan areas as a function of the population size and density of a city. It consists of a significant difference in temperature compared to the rural/suburban areas. Increasing industrialization and urbanization have intensified this phenomenon, and it has recently also been detected in small cities. Weather conditions and land use are among the key parameters in the formation of UHI. In particular, the surface urban heat island is directly related to temperatures, to land surface types, and to surface modifications. The present study concerns a UHI analysis of the city of Matera (Italy) based on the analysis of temperature and of changes in land use and land cover, using Corine Land Cover maps and satellite Landsat images. Matera, located in Southern Italy, has a typical Mediterranean climate with mild winters and hot and humid summers. Moreover, Matera has been awarded the international title of the 2019 European Capital of Culture. Matera represents a significant example of vernacular architecture. The structure of the city is articulated by a vertical succession of dug layers, sometimes excavated or partly excavated and partly built, according to the original shape and height of the calcarenitic slope. In this study, two meteorological stations were selected: MTA (MaTera Alsia, in the industrial zone) and MTCP (MaTera Civil Protection, a suburban area located in a green zone).
In order to evaluate the increase in temperatures (in terms of UHI occurrences) over time and the effect of land use on weather conditions, the climate variability of temperatures at both stations was explored. Results show that the UHI phenomenon is growing in the city of Matera, with an increase of maximum temperature values at the local scale. Subsequently, spatial analysis was conducted with Landsat satellite images. Four summer dates were selected (27/08/2000, 27/07/2006, 11/07/2012, and 02/08/2017): Landsat 7 ETM+ for 2000, 2006, and 2012; Landsat 8 OLI/TIRS for 2017. In order to estimate the LST, the Mono-Window Algorithm was applied. The increasing trend of LST values at the spatial scale was thereby verified, in agreement with the results obtained at the local scale. Finally, the analysis of land use maps over the years, together with the LST and/or the maximum temperatures measured, shows that the development of the industrialized area produces a corresponding increase in temperatures and consequently a growth in UHI.
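The Mono-Window Algorithm the study applies (Qin et al., 2001) retrieves LST from a single thermal band given emissivity, atmospheric transmittance, and an effective mean atmospheric temperature. A minimal sketch follows; the coefficients a and b are the values commonly cited for the Landsat TM/ETM+ thermal band and would need refitting for other sensors:

```python
def mono_window_lst(t_sensor_k, t_air_k, emissivity, transmittance):
    """Mono-Window Algorithm land-surface temperature (K).

    t_sensor_k: at-sensor brightness temperature (K)
    t_air_k: effective mean atmospheric temperature (K)
    a, b: linearization coefficients for the Landsat TM band-6 Planck
    function (Qin et al., 2001); sensor-specific in general.
    """
    a, b = -67.355351, 0.458606
    c = emissivity * transmittance
    d = (1 - transmittance) * (1 + (1 - emissivity) * transmittance)
    return (a * (1 - c - d) + (b * (1 - c - d) + c + d) * t_sensor_k
            - d * t_air_k) / c
```

As a sanity check, with a perfectly transparent atmosphere and unit emissivity (c = 1, d = 0) the retrieved LST reduces to the brightness temperature itself.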

Keywords: climate variability, land surface temperature, LANDSAT images, urban heat island

Procedia PDF Downloads 125
5008 Structural Characterization and Application of TiO2 Nanoparticles

Authors: Maru Chetan, Desai Abhilash

Abstract:

The structural characteristics and applications of TiO2 powder with different phases are studied by various techniques in this paper. For the sol-gel synthesis of TiO2 powder, TTIP was used as the Ti source, with EG and citric acid as catalysts. To replace the sol-gel method, we developed a new method of producing TiO2 nanoparticles: a two-route method, with one physical and one chemical route. The specific aim of this process is to minimize the production cost and enable large-scale production of nanoparticles. The synthesized product was characterized by EDAX, SEM, and XRD.

Keywords: mortar and pestle, nanoparticle, TiO2, TTIP

Procedia PDF Downloads 323
5007 Integration of a Microbial Electrolysis Cell and an Oxy-Combustion Boiler

Authors: Ruth Diego, Luis M. Romeo, Antonio Morán

Abstract:

In the present work, a study of the coupling of a bioelectrochemical system with an oxy-combustion boiler is carried out; specifically, it proposes to connect the combustion gas outlet of a boiler to a microbial electrolysis cell (MEC), where the CO2 from the gases is transformed into methane in the cathode chamber, and the oxygen produced in the anode chamber is recirculated to the oxy-combustion boiler. The MEC mainly consists of two electrodes (anode and cathode) immersed in an aqueous electrolyte; these electrodes are separated by a proton exchange membrane (PEM). In this case, the anode is abiotic (where oxygen is produced), and it is at the cathode that an electroactive biofilm is formed with microorganisms that catalyze the CO2 reduction reactions. Real data from an oxy-combustion process in a boiler of around 20 MW thermal have been used for this study and are combined with data obtained on a smaller scale (laboratory-pilot scale) to determine the yields that could be obtained, considering the system as environmentally sustainable energy storage. In this way, an attempt is made to integrate a relatively conventional energy production system (oxy-combustion) with a biological system (microbial electrolysis cell), which is a challenge to be addressed in this type of new hybrid scheme. A novel concept is thus presented, with the basic dimensioning of the necessary equipment and the efficiency of the global process. In this work, it has been calculated that the efficiency of this power-to-gas system based on MEC cells, when coupled to industrial processes, is of the same order of magnitude as the most promising equivalent routes. The proposed process has two main limitations: the overpotentials at the electrodes, which penalize the overall efficiency, and the need for storage tanks for the process gases. The results of the calculations carried out in this work show that, at realistic electrode potentials, an acceptable performance is achieved.
Regarding the tanks, with adequate dimensioning it is possible to achieve complete autonomy. The proposed system, called OxyMES, provides energy storage without energetically penalizing the process when compared to an oxy-combustion plant with conventional CO2 capture. According to the results obtained, this system can be applied as a measure to decarbonize an industry, changing the original fuel of the oxy-combustion boiler to the biogas generated in the MEC cell. It could also be used to neutralize CO2 emissions from industry by converting them to methane and then injecting it into the natural gas grid.
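The power-to-gas efficiency discussed above can be bookkept as the chemical energy stored in the produced methane over the electrical energy fed to the cell. The sketch below is only an accounting illustration, not the paper's model; the methane higher heating value (~890 kJ/mol) is an assumption of this sketch:

```python
def power_to_gas_efficiency(ch4_mol, cell_voltage_v, current_a, time_s):
    """Rough energy efficiency of a MEC-based power-to-gas step:
    chemical energy in the produced CH4 over the electrical input.
    Assumes an HHV of methane of ~890 kJ/mol (sketch assumption)."""
    e_chemical_j = ch4_mol * 890e3          # energy stored in methane (J)
    e_electric_j = cell_voltage_v * current_a * time_s  # energy fed in (J)
    return e_chemical_j / e_electric_j
```

Lower electrode overpotentials raise the denominator's useful fraction, which is why they dominate the overall efficiency in the abstract's analysis.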

Keywords: microbial electrolysis cells, oxy-combustion, CO2, power-to-gas

Procedia PDF Downloads 108
5006 Motivation and Self-Concept in Language Learning: An Exploratory Study of English Language Learners

Authors: A. van Staden, M. M. Coetzee

Abstract:

Despite numerous efforts to increase the literacy level of South African learners, for example, through the implementation of educational policies such as the Revised National Curriculum Statement, which advocates mother-tongue instruction during a child's formative years, in reality the majority of South African children are still being educated in a second language (in most cases English). Moreover, despite the fact that a significant percentage of our country's budget is spent on the education sector and that both policy makers and educationalists have emphasized the importance of learning English in this globalized world, the poor overall academic performance and English literacy level of a large number of school leavers are still a major concern. As we move forward in an attempt to comprehend the nuances of English language and literacy development in our country, it is imperative to explore both extrinsic and intrinsic factors that contribute to or impede the effective development of English as a second language. In the present study, the researchers set out to investigate how intrinsic factors such as motivation and self-concept contribute to or affect English language learning amongst high school learners in South Africa. Emanating from the above, the main research question that guided this research is the following: Is there a significant relationship between high school learners' self-concept, motivation, and English second-language performance? In order to investigate this question, the study utilized quantitative research methodology to examine the interplay of self-concept and motivation in English language learning. For this purpose, we sampled 201 high school learners from various schools in South Africa. Methods of data gathering included, inter alia, the following: a biographical questionnaire, the Academic Motivational Scale, and the Piers-Harris Self-Concept Scale.
Pearson product-moment correlation analyses yielded significant correlations between L2 learners' motivation and their English language proficiency, and demonstrated positive correlations between L2 learners' self-concept and their achievement in English. Accordingly, researchers have argued that the learning context in which students learn English as a second language has a crucial influence on students' motivational levels. This emphasizes the important role the teacher has to play in creating learning environments that will enhance L2 learners' motivation and improve their self-concepts.
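For reference, the Pearson product-moment coefficient used in these analyses is a standard formula and can be computed directly:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Values near +1 indicate the kind of positive motivation-proficiency relationship the study reports; significance would additionally require a t-test against the sample size (n = 201 here).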

Keywords: motivation, self-concept, language learning, English second language learners (L2)

Procedia PDF Downloads 268
5005 Using Daily Light Integral Concept to Construct the Ecological Plant Design Strategy of Urban Landscape

Authors: Chuang-Hung Lin, Cheng-Yuan Hsu, Jia-Yan Lin

Abstract:

It is an indispensable strategy to adopt a greenery approach on architectural bases so as to improve ecological habitats, decrease the heat-island effect, purify air quality, and relieve surface runoff as well as noise pollution, all in an attempt to achieve a sustainable environment. What we can do with plant design to attain the best visual quality and ideal carbon dioxide fixation depends on whether or not we can appropriately make use of greenery according to the nature of the architectural base. To achieve this goal, architects and landscape architects need to be provided with sufficient local references. Current greenery studies focus mainly on the heat-island effect of cities at a large scale. Most architects still rely on people with years of expertise regarding the selection and disposition of plantings at the microclimate scale. Therefore, environmental design, which integrates science and aesthetics, requires fundamental research on landscape environment technology, distinguished from building environment technology. By doing so, we can create mutual benefits between green buildings and the environment. This issue is extremely important for the greening design of the bases of green buildings in cities and of various open spaces. The purpose of this study is to establish plant selection and allocation strategies under different building sunshade levels. Initially, with the shading of sunshine on the greening bases as the starting point, the effects of the shadows produced by different building types on the greening strategies were analyzed. Then, by measuring the PAR (photosynthetically active radiation), the relative DLI (daily light integral) was calculated, and a DLI map was established in order to evaluate the effects of building shading on the established environmental greening, thereby serving as a reference for plant selection and allocation.
The results were applied in the evaluation of the environmental greening of green buildings, establishing a multi-level, ecological “right plant, right place” design strategy for use in urban design and landscape design development, as well as greening criteria to feed back into eco-city green buildings.
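The conversion from measured PAR to DLI is a simple time integration: instantaneous PPFD readings in µmol m⁻² s⁻¹ are summed over the day and scaled to mol m⁻² day⁻¹. A minimal sketch, assuming evenly spaced logger readings:

```python
def daily_light_integral(ppfd_readings, interval_s):
    """Integrate PAR readings (µmol m⁻² s⁻¹), taken every interval_s
    seconds, into a daily light integral in mol m⁻² per day."""
    return sum(ppfd_readings) * interval_s / 1e6  # µmol -> mol
```

For example, a constant 500 µmol m⁻² s⁻¹ over a 12-hour photoperiod integrates to 21.6 mol m⁻² day⁻¹, a value a DLI map would assign to a lightly shaded base.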

Keywords: daily light integral, plant design, urban open space

Procedia PDF Downloads 511
5004 Geology, Geomorphology and Genesis of Andarokh Karstic Cave, North-East Iran

Authors: Mojtaba Heydarizad

Abstract:

The Andarokh basin is one of the main karstic regions in Khorasan Razavi province, NE Iran. This basin is part of the Kopeh-Dagh megazone, extending from the Caspian Sea in the west to northern Afghanistan in the east. The basin is covered by the Mozdooran Formation, the Ngr evaporative formation, and Quaternary alluvium deposits, in descending order of age. The Mozdooran carbonate formation is notably karstified. The main surface karstic features in the Mozdooran Formation are groove karren, cleft karren, rain pits, rill karren, tritt karren, kamenitza, domes, and table karren. In addition to surface features, a deep karstic feature, Andarokh Cave, also exists in the region. Studying the Ca, Mg, Mn, Sr, and Fe concentrations and the Sr/Mn ratio in Mozdooran Formation samples against distance to the main fault and joint systems using PCA demonstrates the intense role of meteoric diagenesis in controlling carbonate rock geochemistry. The karst evolution in the Andarokh basin ranges from an early-stage 'deep-seated karst' in the Mesozoic to a mature karstic system, 'exhumed karst', in the Quaternary period. Andarokh Cave (the main cave in the basin) is a rudimentary branchwork cave consisting of three passages (A, B, and C) and two entrances (Andarokh and Sky).
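The PCA step on the geochemical variables (Ca, Mg, Mn, Sr, Fe, Sr/Mn, distance to fault) can be sketched as a standard SVD on standardized columns; this is a generic illustration, not the study's specific workflow:

```python
import numpy as np

def pca_scores(data):
    """Principal component analysis via SVD on standardized variables.

    data: samples x variables matrix (e.g. one column per element
    concentration). Returns component scores and the fraction of total
    variance explained by each component.
    """
    z = (data - data.mean(axis=0)) / data.std(axis=0)  # standardize columns
    u, s, vt = np.linalg.svd(z, full_matrices=False)
    scores = u * s                       # sample coordinates on the PCs
    explained = s**2 / (s**2).sum()      # variance fraction per component
    return scores, explained
```

A first component loading strongly on Mn and Fe near faults, for instance, would be the PCA signature of fault-guided meteoric diagenesis the abstract describes.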

Keywords: Andarokh basin, Andarokh cave, geochemical analyses, karst evaluation

Procedia PDF Downloads 155
5003 Applications of AFM in 4D to Optimize the Design of Genetic Nanoparticles

Authors: Hosam Abdelhady

Abstract:

Filming the behavior of individual DNA molecules in their environment as they interact with individual medicinal nano-polymers at the molecular scale has opened the door to understanding the effects of molecular shape, size, and incubation time with nanocarriers on the design of robust genetic nanomolecules able to resist enzymatic degradation, enter the cell, reach the nucleus, and kill individual cancer cells in their environment. To this end, we show how we applied 4D AFM as a guide to fine-tune the design of genetic nanoparticles and to film the effects of these nanoparticles on the nanomechanical and morphological profiles of individual cancer cells.

Keywords: AFM, dendrimers, nanoparticles, DNA, gene therapy, imaging

Procedia PDF Downloads 73
5002 Reed: An Approach Towards Quickly Bootstrapping Multilingual Acoustic Models

Authors: Bipasha Sen, Aditya Agarwal

Abstract:

A multilingual automatic speech recognition (ASR) system is a single entity capable of transcribing multiple languages sharing a common phone space. The performance of such a system is highly dependent on the compatibility of the languages. State-of-the-art speech recognition systems are built using sequential architectures based on recurrent neural networks (RNNs), limiting computational parallelization in training. This poses a significant challenge in terms of the time taken to bootstrap and validate the compatibility of multiple languages for building a robust multilingual system. Complex architectural choices based on self-attention networks are made to improve parallelization, thereby reducing training time. In this work, we propose Reed, a simple system based on 1D convolutions that uses very short context to improve training time. To improve the performance of our system, we use raw time-domain speech signals directly as input. This enables the convolutional layers to learn feature representations rather than relying on handcrafted features such as MFCCs. We report improvements in training and inference times by at least factors of 4x and 7.4x, respectively, with comparable WERs against standard RNN-based baseline systems on SpeechOcean's multilingual low-resource dataset.
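The raw-waveform 1D-convolution front end can be sketched as a strided filter bank: unlike an RNN, every output frame is computed independently, so all frames parallelize trivially. This is an illustrative toy, not Reed's actual architecture:

```python
import numpy as np

def conv1d_frontend(signal, kernels, stride):
    """Strided 1-D convolution bank over a raw waveform.

    signal: 1-D array of audio samples.
    kernels: (n_filters, kernel_len) learned filters standing in for
    handcrafted features such as MFCCs.
    Returns a (frames, n_filters) feature matrix.
    """
    k = kernels.shape[1]
    starts = range(0, len(signal) - k + 1, stride)
    # Each window is independent of the others, so this map parallelizes,
    # unlike the sequential state updates of an RNN.
    windows = np.stack([signal[s:s + k] for s in starts])  # frames x k
    return windows @ kernels.T                             # frames x n_filters
```

Short kernels (very short context, as in the abstract) keep both the per-frame cost and the receptive field small, which is what buys the training-time speedup.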

Keywords: convolutional neural networks, language compatibility, low resource languages, multilingual automatic speech recognition

Procedia PDF Downloads 123
5001 A Re-Evaluation of Green Architecture and Its Contributions to Environmental Sustainability

Authors: Po-Ching Wang

Abstract:

Considering the notable effects of natural resource consumption and the impacts on fragile ecosystems, reflection on contemporary sustainable design is critical. Nevertheless, the idea of ‘green’ has been misapplied and even abused, and, in fact, much damage to the environment has been done in its name. In the 1996 popular science-fiction film Independence Day, an alien species, having exhausted the natural resources of one planet, moves on to another, a fairly obvious irony on contemporary human beings' irresponsible use of the Earth's natural resources in modern times. In fact, the human ambition to master nature and freely access the world's resources has long been inherent in manifestos evinced by productions of the environmental design professions. Ron Herron's Walking City, an experimental architectural piece of 1964, is one example that comes to mind here. For this design concept, the architect imagined a gigantic nomadic urban aggregate that, by way of an insect-like robotic carrier, would move all over the world, on land and sea, to wherever its inhabitants wanted. Given the contemporary crisis regarding natural resources, ideas pertinent to structuring a sustainable environment have recently been attracting much interest in architecture, a field that has been accused of significantly contributing to ecosystem degradation. Great architecture, such as the Fallingwater building, has been regarded as nature-friendly, but its notion of ‘green’ might be inadequate in the face of the resource demands made by human populations today. This research suggests a more conservative and scrupulous attitude toward attempting to modify nature for architectural settings. Designs that pursue spiritual or metaphysical interconnections through anthropocentric aesthetics are not sufficient to benefit ecosystem integrity; though high-tech energy-saving processes may contribute to fine-scale sustainability, they may ultimately cause catastrophe on the global scale.
Design with frugality is proposed in order to actively reduce the environmental load. The aesthetic taste and ecological sensibility of design professionals and the public alike may have to be reshaped in order to make the goals of environmental sustainability viable.

Keywords: anthropocentric aesthetic, aquarium sustainability, biosphere 2, ecological aesthetic, ecological footprint, frugal design

Procedia PDF Downloads 209
5000 Mechanism of Action of Troxerutin in Reducing Oxidative Stress

Authors: Nasrin Hosseinzad

Abstract:

Troxerutin, a trihydroxyethylated derivative of rutin, is a flavonoid found in tea, coffee, cereal grains, and various fruits and vegetables, and has been reported to display radioprotective, antithrombotic, nephroprotective, and hepatoprotective properties. Troxerutin has been well proven to exert hepatoprotective effects. Troxerutin could increase the resistance of hippocampal neurons against apoptosis by lessening the activity of AChE and oxidative stress. Consequently, troxerutin may have beneficial properties in the management of Alzheimer's disease and cancer. Troxerutin has been reported to have several benefits and medicinal properties. It could protect the mouse kidney against d-gal-induced damage by improving renal function, decreasing histopathologic changes, reducing ROS production, restoring the activities of antioxidant enzymes, and reducing oxidative DNA damage. A DNA cleavage study clarified that troxerutin protects DNA against hydroxyl radical-induced damage. Troxerutin exerts an anti-cancer effect in HuH-7 hepatocarcinoma cells, conceivably through coordinated regulation of the molecular signalling pathways Nrf2 and NF-κB. DNA binding at the minor groove by troxerutin may have contributed to strand breaks, leading to enhanced radiation-induced cell death. Furthermore, the mechanism underlying the observed difference in the antioxidant activities of troxerutin and its esters was attributed to both their free radical scavenging capabilities and their distribution on the cell membrane surface.

Keywords: troxerutin, DNA, oxidative stress, antioxidant, free radical

Procedia PDF Downloads 160
4999 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely developed in the deep learning framework. It is generally considered a challenging problem to derive visual interpretation from high-dimensional imagery data. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift-invariant due to its shared weights and translation-invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to a training set that generally has to be large enough for the model under consideration to generalize effectively. It is also necessary to limit the size of convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of uniformly small convolution kernels throughout deep CNN architectures. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model in which convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters with varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows us to use a large number of random filters at the cost of one scalar unknown per filter.
The computational cost of the back-propagation procedure does not increase with larger filters, although additional computation is required for the convolutions in the feed-forward procedure. The use of random kernels with varying sizes makes it possible to effectively analyze image features at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments, in which well-known CNN architectures are quantitatively compared with our models that simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application in a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP, and by NRF-2014R1A2A1A11051941 and NRF-2017R1A2B4006023.
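The core idea of the abstract, frozen random filters of several sizes, each contributing only one trainable scalar weight, can be sketched in plain numpy. This is a toy illustration under our own assumptions (filter sizes, the 1/s normalization, and the naive convolution loop are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_same(img, kernel):
    """Naive 2D convolution with zero padding ('same' output size)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Frozen random filters at several scales; only the scalar weight per
# filter would be learned, so larger filters add no extra unknowns.
sizes = [3, 5, 7]
filters = [rng.standard_normal((s, s)) / s for s in sizes]
scales = np.array([0.5, 1.0, 1.5])  # the trainable scalars

def random_kernel_layer(img, scales):
    """Multi-scale response: scalar-weighted sum of fixed random filter outputs."""
    return sum(w * conv2d_same(img, f) for w, f in zip(scales, filters))

img = rng.standard_normal((8, 8))
response = random_kernel_layer(img, scales)
```

Because the layer is linear in the scalar weights, back-propagation only needs gradients with respect to the few scalars, which is the cost saving the abstract describes.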

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 290
4998 Modeling Optimal Lipophilicity and Drug Performance in Ligand-Receptor Interactions: A Machine Learning Approach to Drug Discovery

Authors: Jay Ananth

Abstract:

The drug discovery process currently requires many years of clinical testing, as well as substantial funding, for a single drug to earn FDA approval. Even for drugs that make it this far in the process, there is a very slim chance of receiving FDA approval, creating detrimental hurdles to drug accessibility. To minimize these inefficiencies, numerous studies have implemented computational methods, although few computational investigations have focused on a crucial feature of drugs: lipophilicity. Lipophilicity is a physical attribute of a compound that measures its solubility in lipids and is a determinant of drug efficacy. This project leverages artificial intelligence to predict the impact of a drug’s lipophilicity on its performance by accounting for factors such as binding affinity and toxicity. The model predicted lipophilicity and binding affinity in the validation set with R² scores of 0.921 and 0.788, respectively, while also being applicable to a variety of target receptors. The results expressed a strong positive correlation between lipophilicity and both binding affinity and toxicity. The model helps in both drug development and discovery, providing pharmaceutical companies with recommended lipophilicity levels for drug candidates as well as a rapid assessment of early-stage drugs prior to any testing, eliminating significant amounts of the time and resources currently restricting drug accessibility.
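The R² metric reported above can be computed for any regression model as one minus the ratio of residual to total sum of squares. The following sketch uses ordinary least squares on synthetic descriptors (the paper's actual features, model, and data are not specified here, so everything below is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for molecular descriptors and a logP-like target.
X = rng.standard_normal((200, 5))
true_w = np.array([0.8, -0.5, 0.3, 0.0, 1.2])
y = X @ true_w + 0.1 * rng.standard_normal(200)

# Ordinary least squares with an intercept term.
A = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ w

# Coefficient of determination, the metric quoted in the abstract.
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```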

Keywords: drug discovery, lipophilicity, ligand-receptor interactions, machine learning, drug development

Procedia PDF Downloads 111
4997 Fused Structure and Texture (FST) Features for Improved Pedestrian Detection

Authors: Hussin K. Ragb, Vijayan K. Asari

Abstract:

In this paper, we present a pedestrian detection descriptor called Fused Structure and Texture (FST) features, based on the combination of local phase information with texture features. Since the phase of a signal conveys more structural information than its magnitude, the phase congruency concept is used to capture structural features. On the other hand, the Center-Symmetric Local Binary Pattern (CSLBP) approach is used to capture the texture information of the image. The dimensionless nature of phase congruency and the robustness of the CSLBP operator on flat image regions, as well as under blur and illumination changes, make the proposed descriptor more robust and less sensitive to light variations. The descriptor is formed by extracting the phase congruency and CSLBP values of each pixel of the image with respect to its neighborhood. The histogram of the oriented phase and the histogram of the CSLBP values for local regions in the image are computed and concatenated to construct the FST descriptor. Several experiments were conducted on the INRIA and low-resolution DaimlerChrysler datasets to evaluate the detection performance of the pedestrian detection system based on the FST descriptor. A linear Support Vector Machine (SVM) is used to train the pedestrian classifier. These experiments showed that the proposed FST descriptor outperforms a set of state-of-the-art feature extraction methodologies.
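The CSLBP half of the descriptor is simple to state: on a 3x3 neighbourhood, the four center-symmetric pixel pairs are compared, giving a 4-bit code per pixel whose histograms over local regions form the texture part. A minimal numpy sketch (the threshold value and the single 3x3 scale are our assumptions, not the paper's settings):

```python
import numpy as np

def cslbp(img, threshold=0.01):
    """Center-Symmetric LBP on a 3x3 neighbourhood: four symmetric
    pixel pairs give a 4-bit code (0..15) at each interior pixel."""
    img = img.astype(float)
    # Eight neighbours of each interior pixel, in circular order.
    n = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
         img[1:-1, 2:],   img[2:,   2:],   img[2:,   1:-1],
         img[2:,   0:-2], img[1:-1, 0:-2]]
    code = np.zeros_like(n[0], dtype=int)
    for i in range(4):  # compare each neighbour with its opposite
        code += (n[i] - n[i + 4] > threshold).astype(int) << i
    return code

img = np.zeros((5, 5))
img[:, 2:] = 1.0                      # vertical step edge
codes = cslbp(img)
hist = np.bincount(codes.ravel(), minlength=16)  # region histograms build the descriptor
```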

Keywords: pedestrian detection, phase congruency, local phase, LBP features, CSLBP features, FST descriptor

Procedia PDF Downloads 488
4996 Taylor’s Law and Relationship between Life Expectancy at Birth and Variance in Age at Death in Period Life Table

Authors: David A. Swanson, Lucky M. Tedrow

Abstract:

Taylor’s Law is a widely observed empirical pattern that relates variances to means in sets of non-negative measurements via an approximate power function, and it has found application to human mortality. This study adds to this research by showing that Taylor’s Law leads to a model that reasonably describes the relationship between life expectancy at birth (e0, which is also equal to the mean age at death in a life table) and variance in age at death in seven World Bank regional life tables measured at two points in time, 1970 and 2000. Using as a benchmark a non-random sample of four Japanese female life tables covering the period from 1950 to 2004, the study finds that the simple linear model provides reasonably accurate estimates of variance in age at death in a life table from e0, where the latter ranges from 60.9 to 85.59 years. Employing 2017 life tables from the Human Mortality Database, the simple linear model is used to provide estimates of variance in age at death for six countries, three of which have high e0 values and three of which have lower e0 values. The paper provides a substantive interpretation of Taylor’s Law relative to e0 and concludes by arguing that reasonably accurate estimates of variance in age at death in a period life table can be calculated using this approach, which can also be used where e0 itself is estimated rather than generated through the construction of a life table, a useful feature of the model.
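Taylor's Law states that variance scales with the mean as a power function, V = a·mᵇ, which becomes linear in log-log space: log V = log a + b·log m. A sketch of fitting such a model and then predicting variance in age at death from e0 follows; the data are synthetic and the exponent is invented, since the abstract does not report the fitted coefficients:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic illustration: variances generated from a power law V = a * m^b,
# over the e0 range (60.9 to 85.59 years) cited in the abstract.
a, b = 2.0, 1.5
mean_age = np.linspace(60.9, 85.6, 20)
variance = a * mean_age ** b * np.exp(0.02 * rng.standard_normal(20))

# Taylor's Law is linear in log-log space: log V = log a + b * log m.
slope, intercept = np.polyfit(np.log(mean_age), np.log(variance), 1)
a_hat, b_hat = np.exp(intercept), slope

# Predict variance in age at death from a given e0.
def predict(e0):
    return a_hat * e0 ** b_hat
```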

Keywords: empirical pattern, mean age at death in a life table, mean age of a stationary population, stationary population

Procedia PDF Downloads 330
4995 Optoelectronic Hardware Architecture for Recurrent Learning Algorithm in Image Processing

Authors: Abdullah Bal, Sevdenur Bal

Abstract:

This paper proposes a new type of hardware application for training cellular neural networks (CNN) using an optical joint transform correlation (JTC) architecture for image feature extraction. CNNs require much more computation during the training stage than during testing. Since optoelectronic hardware offers parallel, high-speed processing for 2D data, the CNN training algorithm can be realized using Fourier optics techniques. JTC employs lenses and CCD cameras with a laser beam to realize 2D matrix multiplication and summation at the speed of light. Therefore, in each training iteration, JTC inherently carries most of the computational burden, and the rest of the mathematical computation is realized digitally. Bipolar data are encoded by phase, and the summation of correlation operations is realized using multi-object input joint images. The overlapping property of JTC is then utilized to sum two cross-correlations, reducing the computation required in the training stage. Phase-only JTC does not require data rearrangement, electronic pre-calculation, or strict system alignment. The proposed system can be incorporated simultaneously with various optical image processing or optical pattern recognition techniques in the same optical system.

Keywords: CNN training, image processing, joint transform correlation, optoelectronic hardware

Procedia PDF Downloads 506
4994 Effect of Motor Imagery of Truncal Exercises on Trunk Function and Balance in Early Stroke: A Randomized Controlled Trial

Authors: Elsa Reethu, S. Karthik Babu, N. Syed

Abstract:

Background: Studies in the past focused on the additional benefits of action observation in improving upper and lower limb functions and activities of daily living when administered alongside conventional therapy. Nevertheless, there is a paucity of literature on the effects of motor imagery of truncal exercises in improving trunk control in patients with stroke. Aims/purpose: To study the effect of motor imagery of truncal exercises on trunk function and balance in early stroke. Methods: A total of 24 patients were included in the study: 12 in the experimental group and 12 in the control group. Trunk function was measured using the Trunk Control Test (TCT), the Trunk Impairment Scale of Verheyden (TIS Verheyden), and the Trunk Impairment Scale of Fujiwara (TIS Fujiwara). Balance was assessed using the Brunel Balance Assessment (BBA) and the Tinetti POMA. For the experimental group, each session consisted of 30 minutes of physical exercises and 15 minutes of motor imagery, once a day, six times a week for 3 weeks; prior to each exercise session, patients viewed a 15-minute videotape of all the trunk exercises to be performed. The control group practiced the trunk exercises alone for the same duration. Measurements were taken before, after, and 4 weeks after the intervention. Results: When measured after 3 weeks, the motor imagery group showed better improvement than the control group in static sitting balance, dynamic balance, total TIS (Verheyden) score, BBA, and Tinetti balance and gait, with large effect sizes of 0.86, 1.99, 1.69, 1.06, 1.63, and 0.97, respectively. A moderate effect size was seen for TIS Fujiwara (0.58), and small effect sizes were seen for TCT (0.12) and the TIS coordination component (0.13). At the end of 4 weeks after the intervention, large effect sizes were identified for dynamic balance (2.06), total TIS score (1.59), and Tinetti balance (1.24), and moderate effect sizes for BBA (0.62) and Tinetti gait (0.72). Conclusion: Trunk motor imagery is effective in improving trunk function and balance in patients with stroke and has a carryover effect on mobility. The therapy gains observed at discharge were maintained at follow-up.
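The effect sizes quoted above follow the usual convention (roughly 0.2 small, 0.5 moderate, 0.8 or more large). A common way to compute such a standardized effect size for two independent groups is Cohen's d with the pooled standard deviation; the scores below are hypothetical, not the study's data:

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference using the pooled SD."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical outcome scores for 12 patients per arm.
imagery = [14, 15, 16, 17, 18, 15, 16, 17, 14, 18, 16, 15]
control = [12, 13, 14, 13, 12, 14, 13, 12, 14, 13, 12, 13]
d = cohens_d(imagery, control)  # a large effect by the usual benchmarks
```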

Keywords: stroke, trunk rehabilitation, trunk function, balance, motor imagery

Procedia PDF Downloads 300
4993 Classifying and Predicting Efficiencies Using Interval DEA Grid Setting

Authors: Yiannis G. Smirlis

Abstract:

The classification and prediction of efficiencies in Data Envelopment Analysis (DEA) is an important issue, especially in large-scale problems or when new units frequently enter the set under assessment. In this paper, we contribute to the subject by proposing a grid structure based on interval segmentations of the range of values for the inputs and outputs. Such intervals, combined, define hyper-rectangles that partition the space of the problem. This structure, exploited by interval DEA models and a dominance relation, acts as a DEA pre-processor, enabling the classification and prediction of efficiency scores without solving any DEA models.
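The dominance relation at the heart of such a pre-processor is easy to state: a decision-making unit dominates another if it uses no more of every input and produces no less of every output, with at least one strict inequality; a dominated unit cannot be DEA-efficient and can be classified without solving any model. A minimal sketch (the units and values are invented, and the interval-grid layer of the paper is not reproduced here):

```python
import numpy as np

def dominates(a, b):
    """True if unit a uses no more of every input and produces no less
    of every output than unit b, with at least one strict inequality."""
    (xa, ya), (xb, yb) = a, b
    no_worse = np.all(xa <= xb) and np.all(ya >= yb)
    strictly = np.any(xa < xb) or np.any(ya > yb)
    return no_worse and strictly

# Each unit: (inputs, outputs).
units = {
    "A": (np.array([2.0, 3.0]), np.array([10.0])),
    "B": (np.array([4.0, 3.0]), np.array([8.0])),   # dominated by A
    "C": (np.array([1.0, 5.0]), np.array([9.0])),
}

# Units dominated by any other unit are classified as inefficient
# up front, before any DEA model is run.
dominated = {k for k in units for j in units
             if j != k and dominates(units[j], units[k])}
```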

Keywords: data envelopment analysis, interval DEA, efficiency classification, efficiency prediction

Procedia PDF Downloads 164
4992 Decentralised Edge Authentication in the Industrial Enterprise IoT Space

Authors: C. P. Autry, A.W. Roscoe

Abstract:

Authentication protocols based on public key infrastructure (PKI) and trusted third parties (TTP) are no longer adequate for industrial-scale IoT networks, owing to issues such as low compute and power availability, the use of widely distributed and commercial off-the-shelf (COTS) systems, and the increasingly sophisticated attackers and attacks we now have to counter. For example, there is increasing concern about nation-state interference and future quantum computing capability. We have examined this space from first principles and have developed several approaches to group and point-to-point authentication for IoT that do not depend on a centralised client-server model. We emphasise the use of quantum-resistant primitives such as strong cryptographic hashing and the use of multi-factor authentication.
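One classic family of hash-only, PKI-free authentication primitives of the kind alluded to is the Lamport-style hash chain: a device publishes the chain tip once at commissioning, and each released preimage then authenticates one message with nothing but hashing on the verifier's side. The sketch below is a generic illustration of that primitive, not the authors' protocol:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def make_chain(seed: bytes, length: int):
    """Hash chain seed -> h(seed) -> h(h(seed)) -> ...; the last element
    is the public anchor shared with verifiers at commissioning."""
    chain = [seed]
    for _ in range(length):
        chain.append(h(chain[-1]))
    return chain

chain = make_chain(b"device-secret", 10)
anchor = chain[-1]

def verify(token: bytes, anchor: bytes, max_steps: int = 10):
    """Accept token if some number of hash applications reaches the anchor.
    No PKI, TTP, or public-key operation is involved."""
    cur = token
    for _ in range(max_steps):
        cur = h(cur)
        if cur == anchor:
            return True
    return False
```

The device releases chain elements in reverse order, one per authentication event, so each token is a one-time credential verifiable against the anchor.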

Keywords: authentication, enterprise IoT cybersecurity, PKI/TTP, IoT space

Procedia PDF Downloads 169
4991 Characteristics and Key Exploration Directions of Gold Deposits in China

Authors: Bin Wang, Yong Xu, Honggang Qu, Rongmei Liu, Zhenji Gao

Abstract:

Based on the geodynamic environment and the basic geological characteristics of the deposits, gold deposits in China are divided into 11 categories, of which the tectonically fractured and altered rock, mid-depth intrusive and contact zone, micro-fine disseminated, and continental volcanic types are the main prospecting kinds. The metallogenic ages of gold deposits in China are dominated by the Mesozoic and Cenozoic. According to the geotectonic units, geological evolution, geological conditions, spatial distribution, deposit types, metallogenic factors, etc., 42 gold concentration areas are initially delineated, showing a clustered distribution. On the basis of gold exploration density, the concentration areas are divided into high-, medium-, and low-level areas. The high-level areas are mainly distributed in the central and eastern regions. 93.04% of gold exploration drill holes are shallower than 500 meters, but there are problems such as limited and shallow drilling verification. The paper discusses the resource potential of gold deposits and proposes future prospecting directions and suggestions. The deep parts and peripheries of old mines in the central and eastern regions, together with the western area, especially Xinjiang and Qinghai, will be the key future prospecting targets and hold huge potential gold reserves. If the exploration depth is extended to 2,000 meters, the gold resources will double.

Keywords: gold deposits, gold deposits types, gold concentration areas, prospecting, resource potentiality

Procedia PDF Downloads 77
4990 Outcome of Comparison between Partial Thickness Skin Graft Harvesting from Scalp and Lower Limb for Scalp Defect: A Clinical Trial Study

Authors: Mahdi Eskandarlou, Mehrdad Taghipour

Abstract:

Background: Partial-thickness skin grafting is the cornerstone of scalp defect repair. Routine donor sites include the abdomen, thighs, and buttocks. Given the potential side effects of harvesting from these sites and the potential advantages of harvesting from the scalp (broad surface, rapid healing, and better cosmetic results), this study compares the outcomes of graft harvesting from the scalp and the lower limb. Methods: This clinical trial was conducted among 40 partial-thickness graft candidates (20 in the case group and 20 in the control group) with scalp defects presenting to the plastic surgery clinic at Besat Hospital between 2018 and 2019. Sampling was done by simple randomization using a random digit table. Data gathering was performed using a designated checklist. The donor site was the scalp in the case group and the lower limb in the control group. The resultant data were analyzed using the chi-squared test and t-test in SPSS version 21 (SPSS Statistics for Windows, Version 21.0, Armonk, NY: IBM Corp). Results: Of the 40 patients participating in this study, 28 (70%) were male and 12 (30%) were female, with a mean age of 63.62 ± 9.73 years. Hypertension and diabetes mellitus were the most common comorbidities, and basal cell carcinoma (BCC) and trauma were the most common etiologies for the defects. There was a statistically significant difference between the two groups regarding the etiology of the defect (P=0.02). The most common anatomic location of the defect was temporal in the case group and parietal in the control group. Most of the defects were deep to the galea. The mean diameter of the defects was 24.28 ± 45.37 mm for all patients. The difference in defect diameter between the two groups was statistically significant, while no such difference was seen in graft diameter. Graft take was complete in both groups. The level of postoperative pain was lower in the case group than in the control group according to the VAS scale, and satisfaction was higher per the Likert scale. Conclusion: The scalp can safely be used as a donor site for skin grafts for scalp defects, with better results and lower complication rates compared to other donor sites.

Keywords: donor site, leg, partial-thickness graft, scalp

Procedia PDF Downloads 150
4989 Traverse Surveying Table Simple and Sure

Authors: Hamid Fallah

Abstract:

Establishing surveying stations is among the first things a surveyor learns; stations are used for control and stake-out in projects such as buildings, roads, tunnels, and monitoring, and in anything else related to the preparation of maps. In this article, we present the method of calculation through the traverse table and, by examining several errors found in the traverse-table calculations of surveying textbooks from different publishers, we also verify the results of several software packages in a simple way. Surveyors measure angles and lengths when creating surveying stations, so the most important task of a surveyor is to correctly remove the error in angles and lengths from the calculations and to determine whether the amount of error is within the permissible limit before distributing it.
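The checks a traverse table encodes can be stated in a few lines: in a closed loop, the interior angles must sum to (n − 2)·180 degrees, and the latitudes (northings) and departures (eastings) of the legs must each sum to zero; the residuals give the angular and linear misclosures. A numeric sketch with invented measurements:

```python
import math

# Closed-loop traverse with four stations (measured interior angles, degrees).
interior = [89.9990, 90.0010, 90.0005, 89.9995]
angular_misclosure = sum(interior) - (len(interior) - 2) * 180.0

# Legs of the traverse: (length in metres, azimuth in degrees).
legs = [(100.0, 0.0), (100.0, 90.0), (100.0, 180.0), (100.02, 270.0)]
dlat = sum(L * math.cos(math.radians(az)) for L, az in legs)  # sum of latitudes
ddep = sum(L * math.sin(math.radians(az)) for L, az in legs)  # sum of departures
linear_misclosure = math.hypot(dlat, ddep)

# Relative precision of the traverse, e.g. about 1:20000 here.
precision = sum(L for L, _ in legs) / linear_misclosure
```

If the misclosures fall within the permissible limits, they are distributed back over the angles and legs (e.g. by the compass rule); otherwise the measurements must be repeated.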

Keywords: UTM, localization, scale factor, cartesian, traverse

Procedia PDF Downloads 82
4988 Effective Parameter Selection for Audio-Based Music Mood Classification for Christian Kokborok Song: A Regression-Based Approach

Authors: Sanchali Das, Swapan Debbarma

Abstract:

Music mood classification is developing in both music information retrieval (MIR) and natural language processing (NLP). Some languages used in India, such as Hindi and English, have considerable exposure in MIR, but research on mood classification in regional languages is scarce. In this paper, powerful audio-based features for Christian Kokborok songs are identified, and the mood classification task is performed. Kokborok is an Indo-Burman language spoken mainly in the northeastern part of India and also in other countries such as Bangladesh and Myanmar. For the audio-based classification task, useful audio features are extracted with the jMIR software. Standard audio parameters exist for audio-based tasks, but every language has its own unique characteristics, so here the most significant features that best fit the database of Kokborok songs are analysed. A regression-based model is used to find the independent parameters that act as predictors; it captures the dependencies among parameters and shows how they impact the overall classification result. For classification, WEKA 3.5 is used, and the selected parameters form a classification model. Another model is developed using all the standard audio features used by most researchers. In this experiment, the essential parameters responsible for effective audio-based mood classification, and the parameters that do not change significantly across the Christian Kokborok songs, are analysed, and a comparison is shown between the two models above.
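One simple form of regression-based feature screening of the kind described is to fit a linear model and rank features by the magnitude of their coefficients, keeping the strongest predictors for the classifier. A sketch on synthetic data (the feature count and coefficients are invented; the paper's jMIR features are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in: 6 audio features, only features 0 and 3 predictive.
X = rng.standard_normal((300, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.standard_normal(300)

# Regression-based screening: fit OLS, rank features by |coefficient|.
A = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
ranking = np.argsort(-np.abs(w[:-1]))  # most predictive features first
```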

Keywords: Christian Kokborok song, mood classification, music information retrieval, regression

Procedia PDF Downloads 222
4987 Design and Development of 5-DOF Color Sorting Manipulator for Industrial Applications

Authors: Atef A. Ata, Sohair F. Rezeka, Ahmed El-Shenawy, Mohammed Diab

Abstract:

Image processing in today’s world attracts massive attention, as it opens possibilities for broad applications in many fields of high technology. The real challenge is how to improve existing sorting system applications, which consist of two integrated stations of processing and handling, with a new image processing feature. Existing color sorting techniques use a set of inductive, capacitive, and optical sensors to differentiate object color. This research presents a mechatronic color sorting system solution with the application of image processing. A 5-DOF robot arm with a pick-and-place operation is designed and developed as the main part of the color sorting system. The image processing procedure senses the circular objects in an image captured in real time by a webcam attached at the end-effector, then extracts color and position information from it. This information is passed as a sequence of sorting commands to the manipulator, which has a pick-and-place mechanism. Performance analysis proves that this color-based object sorting system works very accurately under ideal conditions in terms of adequate illumination and circular object shape and color. The circular objects tested for sorting are red, green, and blue. Under non-ideal conditions, such as an unspecified color, the accuracy is reduced to 80%.
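The color-decision step of such a pipeline can be sketched as a mean-RGB dominance test: average the pixels of a detected object and pick the dominant channel, falling back to "unknown" when no channel clearly dominates (the non-ideal case). The margin and pixel values below are our own assumptions, not the authors' thresholds:

```python
import numpy as np

def classify_color(pixels, margin=30):
    """Classify an object's mean RGB colour as red/green/blue; return
    'unknown' when no channel clearly dominates (non-ideal case)."""
    mean = np.asarray(pixels, float).reshape(-1, 3).mean(axis=0)
    order = np.argsort(mean)[::-1]
    if mean[order[0]] - mean[order[1]] < margin:
        return "unknown"
    return ("red", "green", "blue")[order[0]]

# Synthetic pixel patches standing in for webcam detections.
red_blob   = np.tile([200,  40,  35], (50, 1))
green_blob = np.tile([ 30, 180,  50], (50, 1))
grey_blob  = np.tile([120, 115, 118], (50, 1))
```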

Keywords: robotics manipulator, 5-DOF manipulator, image processing, color sorting, pick-and-place

Procedia PDF Downloads 374
4986 Establishment of a Classifier Model for Early Prediction of Acute Delirium in Adult Intensive Care Unit Using Machine Learning

Authors: Pei Yi Lin

Abstract:

Objective: The objective of this study is to use machine learning methods to build an early-prediction classifier model for acute delirium, to improve the quality of medical care for intensive care patients. Background: Delirium is a common acute and sudden disturbance of consciousness in critically ill patients. Once it occurs, it tends to prolong the length of hospital stay and increase medical costs and mortality. In 2021, the incidence of delirium in the internal medicine intensive care unit was as high as 59.78%, which indirectly prolonged the average length of hospital stay by 8.28 days, and the associated mortality rate has been about 2.22% over the past three years. We therefore aim to build a delirium prediction classifier through big data analysis and machine learning to detect delirium early. Method: This is a retrospective study using an artificial intelligence big data database to extract the characteristic factors related to delirium in intensive care unit patients for machine learning. The study included patients aged over 20 years who were admitted to the intensive care unit between May 1, 2022, and December 31, 2022, excluding those with a GCS assessment < 4 points, an ICU stay of less than 24 hours, or no CAM-ICU evaluation. The CAM-ICU delirium assessment results recorded every 8 hours within 30 days of hospitalization are regarded as events, and the cumulative data from ICU admission to the prediction time point are extracted to predict the possibility of delirium occurring in the next 8 hours. A total of 63,754 case records were collected, and 12 features were selected to train the model: age, sex, average ICU stay hours, visual and auditory abnormalities, RASS score, APACHE-II score, number of indwelling invasive catheters, restraint use, and sedative and hypnotic drugs. After feature data cleaning, processing, and supplementation by KNN interpolation, a total of 54,595 events were available for model building. Events from May 1 to November 30, 2022, served as the model training data, of which 80% formed the training set and 20% the internal validation set; events from December 1 to December 31, 2022, served as the external validation set. Model inference and performance evaluation were then performed, and the model was retrained with adjusted parameters. Results: Four machine learning models were analyzed and compared: XGBoost, Random Forest, Logistic Regression, and Decision Tree. The internal validation accuracy was highest for Random Forest (AUC = 0.86); the external validation accuracy was highest for Random Forest and XGBoost (AUC = 0.86); and the cross-validation accuracy was highest for Random Forest (ACC = 0.77). Conclusion: Clinically, medical staff usually conduct CAM-ICU assessments at the bedside of critically ill patients, but there is a lack of machine learning classification methods to assist with real-time assessment of ICU patients, so clinical staff cannot draw on objective, continuous monitoring data to accurately identify and predict the occurrence of delirium. It is hoped that predictive models developed through machine learning can predict delirium early and immediately, support clinical decisions at the best time, and, combined with PADIS delirium care measures, provide individualized non-pharmacological interventions to maintain patient safety and thereby improve the quality of care.
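The KNN interpolation step used to fill missing feature values can be sketched as follows: for each row with gaps, find the k complete rows nearest on the observed columns and impute the gaps with their mean. This is a generic illustration with toy data, not the study's implementation:

```python
import numpy as np

def knn_impute(X, k=2):
    """Fill NaNs with the mean of the k nearest complete rows, measuring
    distance only on the columns observed in the incomplete row."""
    X = np.array(X, float)
    complete = X[~np.isnan(X).any(axis=1)]
    out = X.copy()
    for i, row in enumerate(X):
        miss = np.isnan(row)
        if not miss.any():
            continue
        d = np.sqrt(((complete[:, ~miss] - row[~miss]) ** 2).sum(axis=1))
        nearest = complete[np.argsort(d)[:k]]
        out[i, miss] = nearest[:, miss].mean(axis=0)
    return out

X = [[1.0, 2.0, 3.0],
     [1.1, 2.1, 2.9],
     [5.0, 6.0, 7.0],
     [1.0, np.nan, 3.0]]   # the gap is filled from the two nearest rows
filled = knn_impute(X, k=2)
```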

Keywords: critically ill patients, machine learning methods, delirium prediction, classifier model

Procedia PDF Downloads 76
4985 Research on the Spatial Organization and Collaborative Innovation of Innovation Corridors from the Perspective of Ecological Niche: A Case Study of Seven Municipal Districts in Jiangsu Province, China

Authors: Weikang Peng

Abstract:

The innovation corridor is an important spatial carrier for promoting regional collaborative innovation, and its development is a process of spatial re-organization of regional innovation resources. This paper takes the Nanjing-Zhenjiang G312 Industrial Innovation Corridor, which spans seven municipal districts in Jiangsu Province, as empirical evidence. Based on multi-source spatial big data for 2010, 2016, and 2022, this paper applies triangulated irregular networks (TIN), head/tail breaks, a regional innovation ecosystem (RIE) niche fitness evaluation model, and social network analysis to study the evolution of the corridor's spatial organization and functional structure and their correlation with the structural evolution of the collaborative innovation network. The results show that, first, the development of innovation patches in the corridor has fractal characteristics in time and space and tends toward a multi-center, clustered layout along the Nanjing Bypass Highway and National Highway G312. Second, there are large differences in the spatial distribution of niche fitness across dimensions, and the niche fitness of innovation patches along the highway has increased significantly. Third, the collaborative innovation network in the corridor is expanding rapidly. The core of the network is shifting from the main urban area to the urban periphery along the highway, exhibiting small-world and hierarchical properties with a pronounced core-edge structure. As the corridor develops, the dominant collaborative mode is changing from collaboration within innovation patches to collaboration between innovation patches, and patches with high ecological fitness tend to be the active areas of collaborative innovation. Overall, a polycentric spatial layout, a graded functional structure, diversified innovation clusters, and differentiated environmental support play an important role in effectively constructing collaborative innovation linkages and steadily expanding the scale of collaborative innovation within the corridor.
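Head/tail breaks, one of the methods named above, is a classification scheme for heavy-tailed data: split the values at their mean, keep the "head" (values above the mean), and recurse while the head remains a minority. A compact sketch on synthetic heavy-tailed sizes (the 40% head threshold is the conventional choice, assumed here rather than taken from the paper):

```python
import numpy as np

def head_tail_breaks(values, head_frac=0.4):
    """Head/tail breaks: split at the mean and keep recursing into the
    head while it stays a minority (< head_frac) of the current part."""
    part = np.asarray(values, float)
    breaks = []
    while len(part) > 1:
        m = part.mean()
        head = part[part > m]
        if len(head) == 0 or len(head) / len(part) > head_frac:
            break
        breaks.append(m)    # each mean is a class boundary
        part = head
    return breaks

# Heavy-tailed sizes (e.g. innovation-patch areas) yield several classes.
rng = np.random.default_rng(4)
sizes = rng.pareto(1.5, 1000) + 1
classes = head_tail_breaks(sizes)
```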

Keywords: innovation corridor development, spatial structure, niche fitness evaluation model, head/tail breaks, innovation network

Procedia PDF Downloads 20