Search results for: binary vector quantization (BVQ)

298 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores

Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan

Abstract:

Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract a feature vector of algorithm-specific dimensionality from a facial image using a pre-trained neural network. However, to improve the efficiency of parameter calculation, these algorithms generally reduce image detail by pooling. This operation discards the fine details that forensic experts rely on most. In our experiment, we adopted a variety of deep learning face recognition algorithms and compared a large number of naturally collected face images against known frontal ID photos of the same persons. Downscaling and manual manipulation were performed on the test images. The results indicated that deep learning face recognition algorithms detect structural and morphological information and rarely focus on specific markers such as stains and moles. Overall performance, the distributions of genuine and impostor scores, and likelihood ratios were examined to evaluate the accuracy of the biometric systems and the forensic experts. The experiments showed that the biometric systems were skilled at distinguishing category features, whereas the forensic experts were better at discovering the individual features of human faces. In the proposed approach, fusion is performed at the score level. At a specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of objective facial comparison and provides a novel method for human-machine collaboration in this field.
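As a rough, self-contained sketch of the score-level fusion idea (not the paper's exact procedure), the snippet below combines machine and expert scores as log-likelihood ratios under Gaussian genuine/impostor score models; all distribution parameters and scores are hypothetical, and summing log-LRs assumes the two sources are independent.

```python
import numpy as np
from scipy.stats import norm

def log_lr(scores, genuine_mean, genuine_std, impostor_mean, impostor_std):
    """Approximate log-likelihood ratio of a comparison score under
    Gaussian genuine/impostor score models (a common simplification)."""
    ll_gen = norm.logpdf(scores, genuine_mean, genuine_std)
    ll_imp = norm.logpdf(scores, impostor_mean, impostor_std)
    return ll_gen - ll_imp

# Hypothetical calibrated score distributions for machine and expert.
machine_llr = log_lr(np.array([0.81, 0.42]), 0.75, 0.10, 0.30, 0.12)
expert_llr = log_lr(np.array([0.66, 0.58]), 0.70, 0.15, 0.40, 0.15)

# Score-level fusion: under an independence assumption, log-LRs simply add.
fused_llr = machine_llr + expert_llr
decision = fused_llr > 0.0  # accept same-source hypothesis if evidence favors it
print(fused_llr, decision)
```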

Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics

Procedia PDF Downloads 113
297 Investigation of Different Machine Learning Algorithms in Large-Scale Land Cover Mapping within the Google Earth Engine

Authors: Amin Naboureh, Ainong Li, Jinhu Bian, Guangbin Lei, Hamid Ebrahimy

Abstract:

Large-scale land cover mapping has become a new challenge in the land change and remote sensing fields because it involves a large volume of data. Moreover, selecting the right classification method is difficult, especially when the study area contains different types of landscapes. This paper compares the performance of different machine learning (ML) algorithms for generating a land cover map of the China-Central Asia-West Asia Corridor, one of the main corridors of the Belt and Road Initiative (BRI). The cloud-based Google Earth Engine (GEE) platform was used to generate a land cover map of the study area from Landsat-8 images (2017) by applying three frequently used ML algorithms: random forest (RF), support vector machine (SVM), and artificial neural network (ANN). The three algorithms were trained and tested using reference data obtained from the MODIS yearly land cover product and very high-resolution satellite images. The findings showed that, among the three algorithms, RF produced the best land cover map for the corridor, with 91% overall accuracy, whereas ANN showed the worst result, with 85% overall accuracy. The strong performance of GEE in applying different ML algorithms and handling the huge volume of remotely sensed data in this study suggests that it could also help researchers generate reliable long-term land cover change maps. These findings are of great importance for decision-makers and the BRI's authorities in strategic land use planning.
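The GEE workflow itself is not reproduced here; as an illustration of the three-way comparison pattern, a scikit-learn analogue on synthetic per-pixel features might look like the following (band values, labels, and hyperparameters are placeholders, not the study's settings).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in for per-pixel Landsat-8 band values with land cover labels.
X, y = make_classification(n_samples=3000, n_features=7, n_informative=5,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf", C=10.0, gamma="scale"),
    "ANN": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "overall accuracy:", accuracy_score(y_te, model.predict(X_te)))
```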

Keywords: land cover, google earth engine, machine learning, remote sensing

Procedia PDF Downloads 104
296 A Computational Approach for the Prediction of Relevant Olfactory Receptors in Insects

Authors: Zaide Montes Ortiz, Jorge Alberto Molina, Alejandro Reyes

Abstract:

Insects are extremely successful organisms. A sophisticated olfactory system is in part responsible for their survival and reproduction. The detection of volatile organic compounds can positively or negatively affect many behaviors in insects. Compounds such as carbon dioxide (CO2), ammonium, indole, and lactic acid are essential for many mosquito species, such as Anopheles gambiae, to locate vertebrate hosts. For instance, in A. gambiae, the olfactory receptor AgOR2 is strongly activated by indole, which accounts for almost 30% of the volatile compounds in human sweat. On the other hand, in insects of agricultural importance, the detection and identification of pheromone receptors (PRs) in lepidopteran species has become a promising field for integrated pest management. For example, when the pheromone receptor BmOR1 was disrupted using transcription activator-like effector nucleases (TALENs), sensitivity to bombykol was completely removed, affecting the pheromone-source searching behavior of male moths. The detection and identification of olfactory receptors in insect genomes is therefore fundamental to improving our understanding of ecological interactions and to providing alternatives for integrated pest and vector management. Hence, the objective of this study is to propose a bioinformatic workflow to enhance the detection and identification of potential olfactory receptors in the genomes of relevant insects. Applying hidden Markov models (HMMs) and different computational tools, potential candidate pheromone receptors were obtained in Tuta absoluta, as well as potential carbon dioxide receptors in Rhodnius prolixus, the main vector of Chagas disease. This study showed the validity of a bioinformatic workflow with the potential to improve the identification of certain olfactory receptors in different orders of insects.
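The abstract does not spell out the workflow, but one plausible core step, sketched below under the assumption that the HMMER suite (hmmbuild/hmmsearch) is installed and using hypothetical file names, is a profile-HMM search of a predicted proteome for receptor candidates.

```python
import subprocess
from pathlib import Path

# Hypothetical file names; HMMER must be on the PATH.
alignment = Path("olfactory_receptors.sto")   # curated seed alignment of known ORs
profile = Path("olfactory_receptors.hmm")
proteome = Path("tuta_absoluta_proteins.fasta")
hits_tbl = Path("or_candidates.tbl")

# Build a profile HMM from the seed alignment of known receptors.
subprocess.run(["hmmbuild", str(profile), str(alignment)], check=True)

# Scan the predicted proteome; --tblout writes a parseable per-target table.
subprocess.run(["hmmsearch", "--tblout", str(hits_tbl), "-E", "1e-5",
                str(profile), str(proteome)], check=True)

# Target names on non-comment lines are the candidate olfactory receptors.
candidates = [line.split()[0] for line in hits_tbl.read_text().splitlines()
              if line and not line.startswith("#")]
print(f"{len(candidates)} candidate receptors")
```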

Keywords: bioinformatic workflow, insects, olfactory receptors, protein prediction

Procedia PDF Downloads 134
295 An Unsupervised Domain-Knowledge Discovery Framework for Fake News Detection

Authors: Yulan Wu

Abstract:

With the rapid development of social media, the issue of fake news has gained considerable prominence, drawing the attention of both the public and governments. The widespread dissemination of false information poses a tangible threat across multiple domains of society, including politics, the economy, and health. However, much research has concentrated on supervised models trained within specific domains, and their effectiveness diminishes when applied to identify fake news across multiple domains. To solve this problem, some approaches based on domain labels have been proposed: by assigning news to its specific area in advance, a judge specialized in the corresponding field can assess fake news more accurately. However, these approaches disregard the fact that news records can pertain to multiple domains, resulting in a significant loss of valuable information. In addition, the datasets used for training must all be domain-labeled, which creates unnecessary complexity. To solve these problems, an unsupervised domain knowledge discovery framework for fake news detection is proposed. First, to effectively retain the multi-domain knowledge of the text, a low-dimensional domain embedding vector is generated for each news text. Subsequently, a feature extraction module that utilizes these unsupervisedly discovered domain embeddings extracts comprehensive features of the news. Finally, a classifier is employed to determine the authenticity of the news. To verify the proposed framework, tests were conducted on existing, widely used datasets, and the experimental results demonstrate that the method improves detection performance for fake news across multiple domains. Moreover, even on datasets that lack domain labels, the method can still effectively transfer domain knowledge, which can reduce the time consumed by tagging without sacrificing detection accuracy.
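The abstract does not specify how the domain embeddings are learned; the sketch below stands in with LDA topic proportions as a soft, unsupervised multi-domain representation, concatenated with text features before classification. The data, labels, and the choice of LDA are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

texts = ["stock markets rally after rate cut", "new vaccine trial shows promise",
         "candidate leads in latest poll", "miracle cure doctors won't tell you"]
labels = np.array([0, 0, 0, 1])  # 0 = real, 1 = fake (toy labels)

# Unsupervised "domain embedding": topic proportions act as a soft,
# low-dimensional multi-domain representation of each news item.
counts = CountVectorizer().fit_transform(texts)
domain_emb = LatentDirichletAllocation(n_components=3,
                                       random_state=0).fit_transform(counts)

# Text features plus domain embedding, then a simple authenticity classifier.
tfidf = TfidfVectorizer().fit_transform(texts).toarray()
features = np.hstack([tfidf, domain_emb])
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.predict(features))
```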

Keywords: fake news, deep learning, natural language processing, multiple domains

Procedia PDF Downloads 69
294 Factors Associated with Involvement in Physical Activity among Children (Aged 6-18 Years) Training at Excel Soccer Academy in Uganda

Authors: Syrus Zimaze, George Nsimbe, Valley Mugwanya, Matiya Lule, Edgar Watson, Patrick Gwayambadde

Abstract:

Physical inactivity is a growing global epidemic, recognised as a major public health challenge. Globally, children are reported with cardiovascular disease and obesity at alarming rates, and interventions are limited. In Sub-Saharan Africa, there is limited information about involvement in physical activity, especially among children aged 6 to 18 years. The aim of this study was to explore factors associated with involvement in physical activity among children in Uganda. Methods: We included all parents with children aged 6 to 18 years training with the Excel Soccer Academy between January 2017 and June 2018. Physical activity was defined as time spent participating in routine soccer training at the academy for more than 30 days. Each child's attendance was recorded, and parents provided demographic and socioeconomic data. Data on predictors of physical activity involvement were collected using a standardized questionnaire. Descriptive statistics and frequencies were used. Binary logistic regression was used at the multivariable level, adjusting for education, residence, means of transport, and access to information technology. Results: Overall, 356 parents were interviewed. Boys (318, 89.3%) engaged more in physical activity than girls. The median age was 13 years (IQR: 6-18) among children and 42 years (IQR: 37-49) among parents. The median time spent at the Excel Soccer Academy was 13.4 months (IQR: 4.6-35.7). The majority of the children attended formal education (p < 0.001). Factors associated with involvement in physical activity included owning a permanent house compared to a rented house (odds ratio [OR]: 2.84, 95% CI: 2.09-3.86, p < 0.0001), owning a car compared to using public transport (OR: 5.64, CI: 4.80-6.63, p < 0.0001), a parent having received formal education compared to non-formal education (OR: 2.93, CI: 2.47-3.46, p < 0.0001), and daily access to information technology (OR: 0.40, CI: 0.25-0.66, p < 0.001). Parents' age and gender were not associated with involvement in physical activity. Conclusions: Socioeconomic factors were positively associated with involvement in physical activity, with boys participating more than girls in soccer activities. More interventions are required geared towards increasing girls' participation in physical activity, together with interventions targeting children from less privileged homes.
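A minimal sketch of a multivariable binary logistic regression of this kind, on simulated stand-in data (the variable names mirror the study's predictors, but all values and coefficients are illustrative only):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy records mirroring the study's predictors (values are invented).
rng = np.random.default_rng(0)
n = 356
df = pd.DataFrame({
    "owns_house": rng.integers(0, 2, n),
    "owns_car": rng.integers(0, 2, n),
    "formal_edu": rng.integers(0, 2, n),
    "daily_it_access": rng.integers(0, 2, n),
})
logit_p = (-1 + 1.0 * df.owns_house + 1.7 * df.owns_car
           + 1.1 * df.formal_edu - 0.9 * df.daily_it_access)
df["active"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = sm.Logit(df["active"], sm.add_constant(df.drop(columns="active"))).fit()
print(np.exp(model.params))      # odds ratios, e.g. the OR for owning a car
print(np.exp(model.conf_int()))  # 95% confidence intervals on the ORs
```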

Keywords: physical activity, Sub-Saharan Africa, social economic factors, children

Procedia PDF Downloads 143
293 Solving LWE by Progressive Pumps and Its Optimization

Authors: Leizhang Wang, Baocang Wang

Abstract:

The General Sieve Kernel (G6K) is currently considered the fastest algorithm for the shortest vector problem (SVP) and holds the records of the open SVP challenge. We study the lattice basis quality improvement effects of the Workout proposed in G6K, which is composed of a series of pumps to solve SVP. First, we use low-dimensional pump output bases to build a predictor of the quality of high-dimensional pump output bases. Both theoretical analysis and experimental tests illustrate that it is more computationally expensive to solve LWE problems using the default SVP solving strategy of G6K (the Workout) than using lattice reduction algorithms (e.g., BKZ 2.0, progressive BKZ, Pump, and Jump BKZ) with sieving as their SVP oracle. Second, the default Workout in G6K is optimized to achieve a stronger reduction at a lower computational cost. Third, we combine the optimized Workout and the pump output basis quality predictor to further reduce the computational cost by optimizing the LWE instance selection strategy. In fact, we can solve the TU Darmstadt LWE challenge (n = 65, q = 4225, α = 0.005) 13.6 times faster than with the default G6K Workout. Fourth, we consider a combined two-stage LWE solving strategy (preprocessing by BKZ followed by a big pump), in which both stages use the dimensions-for-free technique, to give new theoretical security estimations of several LWE-based cryptographic schemes. The security estimations show that the security of these schemes under the conservative NewHope core-SVP model is somewhat overestimated. In addition, in the case of the LAC scheme, the LWE instance selection strategy can be optimized to further improve the LWE solving efficiency, by 15% and 57%. Finally, experiments are carried out to examine the effects of our strategies on Normal Form LWE problems, and the results demonstrate that the combined strategy is four times faster than that of NewHope.
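G6K itself is not shown here; as a loose analogue, the fpylll sketch below applies progressively stronger BKZ reduction to a toy q-ary lattice, mirroring the step-by-step strengthening idea behind pumps and Workouts. It assumes fpylll is installed, and all parameters are illustrative.

```python
# A minimal lattice-reduction sketch with fpylll (not G6K): a progressively
# increasing block size loosely mirrors the progressive pump/Workout idea.
from fpylll import IntegerMatrix, LLL, BKZ

A = IntegerMatrix.random(60, "qary", k=30, q=727)  # toy q-ary lattice basis
LLL.reduction(A)                                    # cheap preprocessing

for beta in range(10, 31, 5):                       # progressive strengthening
    BKZ.reduction(A, BKZ.Param(block_size=beta))
    norm0 = sum(A[0, j] ** 2 for j in range(A.ncols)) ** 0.5
    print(f"block size {beta}: |b1| = {norm0:.1f}")
```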

Keywords: LWE, G6K, pump estimator, LWE instances selection strategy, dimension for free

Procedia PDF Downloads 47
292 Pro Life-Pro Choice Debate: Looking through the Prism of Abortion Right in the Indian Context

Authors: Satabdi Das

Abstract:

Background: The abortion debate has polarized women, pitting them against each other in the binary of pro-choice and pro-life. While followers of pro-choice see the right to an abortion as inherent to a woman's right to sovereignty, the latter believe it is unethical to kill an unborn baby, as doing so in a way denies the foetus' right to life. There are innumerable arguments and counter-arguments on both sides, and the dilemma remains which is more significant: the mother's right to terminate a pregnancy or the foetus' right to life. This pro-life/pro-choice debate has Western roots and is framed largely around reproductive freedom. But the Western way of looking at the abortion debate is not fully relevant in the Indian context, where the situation is entirely different. Sex-selective foeticide is a social ill in India that cannot be explained through the prism of the abortion debate alone; it must take into account the problem of forced female foeticide. Objectives: Against this backdrop, the study sheds light on the following issues: How has the reproductive rights debate evolved? How is it relevant in the Indian context, where female foeticide is a harsh reality? How should one address the dilemma between life and death in the context of the pro-life/pro-choice debate? Methodology: The study employs historical-analytical and descriptive-analytical methods and uses primary documents, such as governmental documents, and secondary sources, such as analytical articles in books, journals, and relevant websites. Findings: Fertility control is not a modern-day phenomenon; it has roots throughout the ancient, medieval, and present epochs. However, debates have long existed over the rights of the foetus and the ethics of abortion. Prenatal sex determination for sex-selective abortion is a common phenomenon in India because of the wish for male heirs; the cultural preference for male children over female ones has resulted in the disappearance of girl children. No law has settled the question of when life begins. In the Indian case, the pro-life/pro-choice framing is not as relevant as it is in the US. Here, women are often denied basic human rights; in many places they are killed in the womb, and their right to life is jeopardised in that way. In India's liberal abortion regime, a woman's choice to end a pregnancy is exercised freely only in a few enlightened families; in many cases, it is the family's decision to end a pregnancy out of preference for a boy, for which prenatal sex determination plays a crucial role. Conclusion: In India, we can be pro-life only when the right to life of the unborn is secured irrespective of its sex. Similarly, we belong to the pro-choice group only when the choice to terminate a pregnancy is made entirely by the mother for her own reasons.

Keywords: female foeticide, India, prolife/pro choice, right to abortion

Procedia PDF Downloads 174
291 Assessment of Genetic Fidelity of Micro-Clones of an Aromatic Medicinal Plant Murraya koenigii (L.) Spreng

Authors: Ramesh Joshi, Nisha Khatik

Abstract:

Murraya koenigii (L.) Spreng, locally known as "curry patta" or "meetha neem", belongs to the family Rutaceae and grows wild in Southern Asia. Its aromatic leaves are commonly used as raw material for traditional medicinal formulations in India. The leaves contain essential oil and are also used as a condiment. Several monomeric and binary carbazole alkaloids are present in the various plant parts. These alkaloids have been reported to possess antimicrobial, mosquitocidal, topoisomerase-inhibiting, and antioxidant properties, and some have shown anticarcinogenic and antidiabetic properties. The conventional method of propagation of this tree is limited to seeds, which retain their viability for only a short period. Hence, a biotechnological approach may have an edge over traditional breeding for the genetic improvement of M. koenigii within a short period. The development of a reproducible regeneration protocol is a prerequisite for ex situ conservation and micropropagation. An efficient protocol for high-frequency regeneration of in vitro plants of Murraya koenigii from different explants, such as nodal segments, internodal segments, leaves, root segments, hypocotyls, cotyledons, and cotyledonary nodes, is described. In the present investigation, the clonal fidelity of the micropropagated plantlets of Murraya koenigii was assessed using RAPD and ISSR markers at different stages of the tissue culture pathway. Twenty ISSR and forty RAPD primers were used for all samples. Genomic DNA was extracted by the CTAB method. ISSR primers were found to be more suitable than RAPD primers for the analysis of clonal fidelity of M. koenigii. The amplifications were finally performed using both RAPD and ISSR markers owing to their better performance in terms of generation of amplification products. Among the RAPD primers, the maximum polymorphism (75%) was recorded with the OPU-2 primer, for which three of four scorable bands were polymorphic, in the size range of 600-1500 bp. Among the ISSR primers, UBC 857 showed 50% polymorphism, with two polymorphic bands in the size range of 400-1000 bp.
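Scoring clonal fidelity from such marker profiles reduces to counting polymorphic bands in a presence/absence matrix; a minimal sketch with hypothetical band scores:

```python
import numpy as np

# Hypothetical 0/1 band-scoring matrix: rows = plantlets (mother plant plus
# micropropagated clones), columns = scorable bands for one primer.
bands = np.array([
    [1, 1, 0, 1],   # mother plant
    [1, 1, 0, 1],   # clone 1
    [1, 0, 1, 1],   # clone 2
    [1, 1, 0, 1],   # clone 3
])

# A band is monomorphic if every sample shows the same state; otherwise it
# is polymorphic, which flags possible somaclonal variation among clones.
polymorphic = bands.min(axis=0) != bands.max(axis=0)
print(f"{polymorphic.sum()}/{bands.shape[1]} bands polymorphic "
      f"({100 * polymorphic.mean():.0f}%)")
```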

Keywords: genetic fidelity, Murraya koenigii, aromatic plants, ISSR primers

Procedia PDF Downloads 480
290 Insights into Insect Vectors: Liberibacter Interactions

Authors: Murad Ghanim

Abstract:

The citrus greening disease, also known as huanglongbing, caused by the phloem-limited bacterium Candidatus Liberibacter asiaticus (CLas), has resulted in tremendous losses and the death of millions of citrus trees worldwide. CLas is transmitted by the Asian citrus psyllid (ACP) Diaphorina citri. The closely related bacterium Candidatus Liberibacter solanacearum (CLso), which is associated with vegetative disorders in carrots and zebra chip disease in potatoes, is transmitted by other psyllid species, including Bactericera trigonica in carrots and B. cockerelli in potatoes. Chemical sprays that limit psyllid populations are currently the prevailing method for managing these diseases; however, their effectiveness is limited. A promising approach to preventing the transmission of these pathogens is to interfere with the vector-pathogen interactions, but our understanding of these processes is very limited. CLas induces changes in the nuclear architecture of the ACP midgut and activates programmed cell death (apoptosis) in this organ. Strikingly, CLso displays an opposite effect in the gut of B. trigonica, showing limited apoptosis but widespread necrosis. Electron and fluorescence microscopy further showed that CLas induces the formation of endoplasmic reticulum (ER) inclusion- and replication-like bodies, in which it multiplies. ER involvement in bacterial replication is hypothesized to be the first stage of an immune response leading to the apoptotic and necrotic responses. ER exploitation and the subsequent events that lead to these cellular and stress responses might activate a cascade of molecular responses ending in apoptosis and necrosis. Understanding the molecular interactions that underlie the necrotic and apoptotic responses to the bacteria will increase our knowledge of ACP-CLas and B. trigonica-CLso interactions and will set the foundation for developing novel, efficient strategies to disturb these interactions and inhibit transmission.

Keywords: Liberibacter, psyllid, transmission, apoptosis, necrosis

Procedia PDF Downloads 135
289 Geomatic Techniques to Filter Vegetation from Point Clouds

Authors: M. Amparo Núñez-Andrés, Felipe Buill, Albert Prades

Abstract:

More and more frequently, geomatic techniques such as terrestrial laser scanning and digital photogrammetry, either terrestrial or from drones, are being used to obtain digital terrain models (DTMs) for monitoring the geological phenomena that cause natural disasters, such as landslides, rockfalls, and debris flows. One of the main multitemporal analyses performed on these models is the quantification of volume changes on slopes and hillsides, whether caused by erosion, falls, or land movement in the source area or by sedimentation in the deposition zone. To carry out this task, it is necessary to filter from the point clouds all elements that do not belong to the slope. Among these elements, vegetation stands out, as it is the most frequently present and changes constantly, both seasonally and daily, being affected by factors such as wind. One of the best-known indices to detect vegetation in an image is the NDVI (Normalized Difference Vegetation Index), which is obtained from the combination of the infrared and red channels and therefore requires a multispectral camera. These cameras generally have lower resolution than conventional RGB cameras, while their cost is much higher. We therefore have to look for alternative indices based on RGB. In this communication, we present the results obtained in the Georisk project (PID2019-103974RB-I00/MCIN/AEI/10.13039/501100011033) using the GLI (Green Leaf Index) and ExG (Excess Greenness), as well as the transformation to the hue-saturation-value (HSV) color space, in which the H coordinate provides the most information for vegetation filtering. These filters are applied both to the images, creating binary masks to be used when applying the SfM algorithms, and to the point clouds obtained either directly by the photogrammetric process without any previous filtering or by TLS (terrestrial laser scanning). In the latter case, we have also worked with a Riegl VZ400i sensor that, as in aerial LiDAR, can receive several returns of the signal, information that can be used for classification of the point cloud. After applying all the techniques in different locations, the results show that the color-based filters allow correct filtering in those areas where the presence of shadows is not excessive and there is contrast between the color of the slope lithology and the vegetation. As noted above, when using the HSV color space, it is the H coordinate that responds best for this filtering. Finally, the use of the multiple returns of the TLS signal allows filtering with some limitations.
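A minimal sketch of the RGB-based filters named above (ExG, GLI, and the HSV hue channel); the thresholds are illustrative assumptions that would need tuning per site, not values from the project.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def vegetation_masks(rgb):
    """Binary vegetation masks from an RGB image in [0, 1] using the
    greenness indices and HSV hue named in the text."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2 * g - r - b                              # Excess Greenness
    gli = (2 * g - r - b) / (2 * g + r + b + 1e-9)   # Green Leaf Index
    hue = rgb_to_hsv(rgb)[..., 0]                    # H channel in [0, 1]
    return {
        "ExG": exg > 0.05,
        "GLI": gli > 0.02,
        "HSV": (hue > 0.17) & (hue < 0.45),          # roughly green hues
    }

img = np.random.rand(64, 64, 3)      # stand-in for an orthophoto patch
for name, mask in vegetation_masks(img).items():
    print(name, "vegetation fraction:", mask.mean())
```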

Keywords: RGB index, TLS, photogrammetry, multispectral camera, point cloud

Procedia PDF Downloads 122
288 Phenotypic and Genotypic Expression of Hyalomma Anatolicum Ticks Silenced for Ferritin Genes through RNA Interference Technology

Authors: Muhammad Sohail Sajid, Mahvish Maqbool, Hafiz Muhammad Rizwan, Muhammad Saqib, Haroon Ahmad

Abstract:

Ticks are blood-sucking ectoparasites that cause production decreases and economic losses and affect mammals, reptiles, and birds. Hyalomma anatolicum is the main vector for Crimean-Congo haemorrhagic fever (CCHF) transmission, and Pakistan has faced several outbreaks of CCHF in the recent past. Ferritin (fer) is a highly conserved molecule that is ubiquitous in most tick tissues and is responsible for iron metabolism and storage. It was hypothesized that the development of acaricide resistance and the residual effects of commercially used acaricides could be countered by alternative control methods, including RNA interference. The current study aimed to evaluate the effects of fer silencing on tick feeding, average body weight, egg mass index, and mortality. Ticks collected through standard collection protocols were subjected to RNA isolation using the TRIzol method. Commercially available kit procedures were followed for cDNA and dsRNA synthesis. The soaking/immersion method was used for dsRNA delivery. Our findings showed a 27% reduction in the body weight of the fer-silenced group and a significant association between fer and body weight. Silencing of fer had a significant effect on the engorgement percentage (p = 0.0007), oviposition (p = 0.008), egg mass (p = 0.004), and hatching (p = 0.001). For immersion, 15°C was found to be the optimum temperature for inducing gene silencing in ticks, as maximum survivability after immersion was attained at this temperature. This study, along with previous studies, suggests that iron toxicity due to the silencing of fer could play an important role in the control of ticks and that fer can be used as a potent candidate for vaccine development.

Keywords: ticks, iron, ferritin, engorgement, oviposition, immersion, RNA interference

Procedia PDF Downloads 81
287 An Assessment of Floodplain Vegetation Response to Groundwater Changes Using the Soil & Water Assessment Tool Hydrological Model, Geographic Information System, and Machine Learning in the Southeast Australian River Basin

Authors: Newton Muhury, Armando A. Apan, Tek N. Marasani, Gebiaw T. Ayele

Abstract:

The changing climate has degraded freshwater availability in Australia, which influences vegetation growth to a great extent. This study assessed vegetation responses to groundwater using the Normalised Difference Vegetation Index (NDVI) from Terra's Moderate Resolution Imaging Spectroradiometer (MODIS) and soil water content (SWC). A hydrological model, SWAT, was set up in a southeast Australian river catchment for groundwater analysis. The model was calibrated and validated against monthly streamflow from 2001 to 2006 and from 2007 to 2010, respectively. The SWAT-simulated soil water content for 43 sub-basins and monthly MODIS NDVI data for three vegetation types (forest, shrub, and grass) were applied in the machine learning tool Waikato Environment for Knowledge Analysis (WEKA) using two supervised machine learning algorithms, support vector machine (SVM) and random forest (RF). The assessment shows that the responses of the different vegetation types and the soil water content vary between the dry and wet seasons. During the dry season, the WEKA models yielded strong positive relationships (r = 0.76, 0.73, and 0.81) between the NDVI values of all vegetation types in the sub-basins and soil water content (SWC), groundwater flow (GW), and the combination of these two variables, respectively. However, these responses were reduced by 36.8% (r = 0.48) and 13.6% (r = 0.63) against GW and SWC, respectively, in the wet season. Although the rainfall pattern is highly variable in the study area, summer rainfall is very effective for the growth of the grass vegetation type. This study has enriched our knowledge of vegetation responses to groundwater in each season, which will facilitate better floodplain vegetation management.
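As a stand-in for the WEKA workflow (not a reproduction of it), the sketch below fits a random forest regressor to toy SWC/GW records and reports the correlation between observed and predicted NDVI; all data and coefficients are synthetic placeholders.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor

# Toy monthly records per sub-basin: soil water content (SWC), groundwater
# flow (GW), and NDVI (illustrative stand-ins for SWAT/MODIS data).
rng = np.random.default_rng(1)
swc = rng.uniform(5, 40, 200)
gw = rng.uniform(0, 15, 200)
ndvi = 0.25 + 0.01 * swc + 0.008 * gw + rng.normal(0, 0.03, 200)

X = np.column_stack([swc, gw])
rf = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=1)
rf.fit(X, ndvi)

# Correlation between observed and predicted NDVI, analogous in spirit
# to the r values reported for the WEKA models.
r, _ = pearsonr(ndvi, rf.predict(X))
print("r (fit):", round(r, 2), " OOB R^2:", round(rf.oob_score_, 2))
```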

Keywords: ArcSWAT, machine learning, floodplain vegetation, MODIS NDVI, groundwater

Procedia PDF Downloads 82
286 Classifying Affective States in Virtual Reality Environments Using Physiological Signals

Authors: Apostolos Kalatzis, Ashish Teotia, Vishnunarayan Girishan Prabhu, Laura Stanley

Abstract:

Emotions are functional behaviors influenced by thoughts, stimuli, and other factors that induce neurophysiological changes in the human body. Understanding and classifying emotions is challenging, as individuals have varying perceptions of their environments. It is therefore crucial to have publicly available databases and virtual reality (VR) environments that have been scientifically validated for assessing emotion classification. This study utilized two commercially available VR applications (Guided Meditation VR™ and Richie's Plank Experience™) to induce acute stress and a calm state among participants. Subjective and objective measures were collected to create a validated multimodal dataset and classification scheme for affective state classification. Participants' subjective measures included the Self-Assessment Manikin, emotional cards, and a 9-point Visual Analogue Scale for perceived stress, collected using a Virtual Reality Assessment Tool developed by our team. Participants' objective measures included electrocardiogram and respiration data collected from 25 participants (15 M, 10 F, mean age = 22.28 ± 4.92). The features extracted from these data included heart rate variability components and respiration rate, both of which were used to train two machine learning models. Subjective responses validated the efficacy of the VR applications in eliciting the two desired affective states. For classifying the affective states, a logistic regression (LR) and a support vector machine (SVM) with a linear kernel were developed. The LR outperformed the SVM, achieving 93.8%, 96.2%, and 93.8% leave-one-subject-out cross-validation accuracy, precision, and recall, respectively. The VR assessment tool and the data collected in this study are publicly available for other researchers.
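A minimal sketch of leave-one-subject-out evaluation for the two classifiers, using synthetic stand-ins for the HRV and respiration features (the grouping by subject is the essential part; data and scores are placeholders).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy feature rows; `groups` holds the subject ID of each row so that
# every fold tests on one entirely unseen participant.
rng = np.random.default_rng(0)
X = rng.normal(size=(250, 6))           # e.g., HRV components + resp. rate
y = rng.integers(0, 2, 250)             # 0 = calm, 1 = stressed
groups = np.repeat(np.arange(25), 10)   # 25 participants, 10 windows each

logo = LeaveOneGroupOut()
for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("SVM", SVC(kernel="linear"))]:
    pipe = make_pipeline(StandardScaler(), clf)
    acc = cross_val_score(pipe, X, y, groups=groups, cv=logo)
    print(name, "LOSO accuracy:", acc.mean().round(3))
```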

Keywords: affective computing, biosignals, machine learning, stress database

Procedia PDF Downloads 127
285 Transportation Mode Choice Analysis for Accessibility of the Mehrabad International Airport by Statistical Models

Authors: Navid Mirzaei Varzeghani, Mahmoud Saffarzadeh, Ali Naderan, Amirhossein Taheri

Abstract:

Countries are progressing, and the world's busiest airports see year-on-year increases in travel demand. Passengers' acceptance of an airport depends on its appeal, which includes the routes between the city and the airport as well as the facilities for reaching it. One of the critical roles of transportation planners is to predict future transportation demand so that an integrated, multi-purpose system can be provided and diverse modes of transportation (rail, air, and land) can be delivered to a destination such as an airport. In this study, 356 questionnaires were filled out in person over six days. First, the attractiveness of business and non-business trips was studied using the data and a linear regression model. Lower travel costs, an age range above 55, and other factors are significant for business trips. Non-business travelers, on the other hand, prioritized using personal vehicles to get to the airport and having convenient access to it. Business travelers are also less price-sensitive than non-business travelers regarding airport travel. Furthermore, carrying additional luggage (for example, more than one suitcase per person) clearly decreases the attractiveness of public transit. Afterward, the locations with the highest trip generation to the airport were identified by mode and trip purpose. The district generating the most trips was District 2 of Tehran, with 23 trips, and the most popular mode of transportation from it was online taxi, with 12 trips. Then, the variables significant for mode split and travel behavior in airport access were investigated for all systems. In this regard, the most crucial factor is the time it takes to get to the airport, followed by the user-friendliness of the mode as a component of passenger preference. It was also demonstrated that improving public transportation travel times reduces the market share of private transportation, including taxicabs. Based on the responses of private and semi-public vehicle users, passengers' willingness to approach the airport via public transportation systems was explored in order to enhance present services and develop new strategies for providing the most efficient modes of transportation. The binary model made clear that business travelers and people who had already driven to the airport were the least likely to change modes.
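A minimal sketch of a discrete mode choice model in this spirit, using a multinomial logit on simulated records (the study's binary model and actual variables are not reproduced; names and coefficients are illustrative).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy airport-access records: chosen mode coded as
# 0 = private car, 1 = online taxi, 2 = public transit.
rng = np.random.default_rng(2)
n = 356
df = pd.DataFrame({
    "access_time_min": rng.uniform(20, 90, n),
    "business_trip": rng.integers(0, 2, n),
    "bags_over_one": rng.integers(0, 2, n),
})
utility = (0.04 * df.access_time_min - 0.8 * df.business_trip
           - 1.0 * df.bags_over_one)
p_transit = 1 / (1 + np.exp(-(utility - 2)))
mode = np.where(rng.random(n) < p_transit, 2, rng.integers(0, 2, n))

# Multinomial logit: coefficients are log-odds of each mode vs. the base mode.
res = sm.MNLogit(mode, sm.add_constant(df)).fit(disp=False)
print(res.summary())
```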

Keywords: multimodal transportation, demand modeling, travel behavior, statistical models

Procedia PDF Downloads 153
284 Modelling and Optimization of a Combined Sorption Enhanced Biomass Gasification with Hydrothermal Carbonization, Hot Gas Cleaning and Dielectric Barrier Discharge Plasma Reactor to Produce Pure H₂ and Methanol Synthesis

Authors: Vera Marcantonio, Marcello De Falco, Mauro Capocelli, Álvaro Amado-Fierro, Teresa A. Centeno, Enrico Bocci

Abstract:

Concerns about energy security, energy prices, and climate change have led scientific research towards sustainable alternatives to fossil fuels, such as renewable energy sources coupled with hydrogen as an energy vector and with carbon capture and conversion technologies. Among the technologies investigated in recent decades, biomass gasification has acquired great interest owing to the possibility of obtaining low-cost, CO₂-negative hydrogen production from a large variety of widely available organic wastes. Upstream and downstream treatments have been studied in order to maximize the hydrogen yield, reduce the content of organic and inorganic contaminants below the admissible levels of the coupled technologies, and capture and convert the carbon dioxide. However, studies that analyse a whole process comprising all of these technologies are still missing. To fill this gap, the present paper investigates the combination of hydrothermal carbonization (HTC), sorption-enhanced gasification (SEG), hot gas cleaning (HGC), and CO₂ conversion by a dielectric barrier discharge (DBD) plasma reactor for H₂ production from biomass waste by means of the Aspen Plus software. The proposed model aims to identify and optimise the performance of the plant by varying operating parameters (such as temperature, CaO/biomass ratio, and separation efficiency). The carbon footprint of the overall plant is 2.3 kg CO₂/kg H₂, lower than the most recent limit set by the European Commission for hydrogen to be considered "clean", which is 3 kg CO₂/kg H₂. The hydrogen yield of the whole plant is 250 g H₂/kg of biomass.

Keywords: biomass gasification, hydrogen, aspen plus, sorption enhance gasification

Procedia PDF Downloads 59
283 dynr.mi: An R Program for Multiple Imputation in Dynamic Modeling

Authors: Yanling Li, Linying Ji, Zita Oravecz, Timothy R. Brick, Michael D. Hunter, Sy-Miin Chow

Abstract:

Assessing several individuals intensively over time yields intensive longitudinal data (ILD). Even though ILD provide rich information, they also bring data analytic challenges. One of these is the increased occurrence of missingness as study length increases, possibly under non-ignorable missingness scenarios. Multiple imputation (MI) handles missing data by creating several imputed data sets and pooling the estimation results across them to yield final estimates for inferential purposes. In this article, we introduce dynr.mi(), a function in the R package Dynamic Modeling in R (dynr). The package dynr provides a suite of fast and accessible functions for estimating and visualizing the results from fitting linear and nonlinear dynamic systems models in discrete as well as continuous time. By integrating the estimation functions in dynr and the MI procedures available from the R package Multivariate Imputation by Chained Equations (MICE), the dynr.mi() routine is designed to handle possibly non-ignorable missingness in the dependent variables and/or covariates in a user-specified dynamic systems model via MI, with convergence diagnostic checks. We utilized dynr.mi() to examine, in the context of a vector autoregressive model, the relationships among individuals' ambulatory physiological measures and self-reported affect valence and arousal. The results from MI were compared to those from listwise deletion of entries with missingness in the covariates. When we determined the number of iterations based on the convergence diagnostics available from dynr.mi(), differences in the statistical significance of the covariate parameters were observed between the listwise deletion and MI approaches. These results underscore the importance of considering diagnostic information in the implementation of MI procedures.

Keywords: dynamic modeling, missing data, mobility, multiple imputation

Procedia PDF Downloads 154
282 Mathematics as the Foundation for the STEM Disciplines: Different Pedagogical Strategies Addressed

Authors: Marion G. Ben-Jacob, David Wang

Abstract:

There is a mathematics requirement for entry-level college and university students, especially those who plan to study STEM (science, technology, engineering, and mathematics). Most of them take College Algebra, and to continue their studies, they need to succeed in this course. Different pedagogical strategies are employed to promote the success of our students. There is, of course, the traditional method of teaching: lecture, examples, and problems for students to solve. The Emporium Model, another pedagogical approach, replaces traditional lectures with a learning resource center model featuring interactive software and on-demand personalized assistance. This presentation will compare these two methods of pedagogy and report a study on this comparison with its results. Math is the foundation of science, technology, and engineering. In STEM it is generally used to find patterns in data; these patterns can be used to test relationships, draw general conclusions about data, and model the real world. In STEM, solutions to problems are analyzed, reasoned about, and interpreted using math abilities in an assortment of real-world scenarios. This presentation will examine specific examples of how math is used in the different STEM disciplines. Math becomes practical in science when it is used to model natural and artificial experiments to identify a problem and develop a solution for it. As we analyze data, we are using math to find the statistical correlation between a cause and its effect. Scientists who use math include data scientists, biologists, and geologists. Without math, most technology would not be possible: math is the basis of binary, and without programming, you just have the hardware. Addition, subtraction, multiplication, and division are used in almost every program written, and mathematical algorithms are inherent in software as well. Mechanical engineers analyze scientific data to design robots by applying math and using software. Electrical engineers use math to help design and test electrical equipment; they also use math when creating computer simulations and designing new products. Chemical engineers often use mathematics in the lab, where advanced computer software aids their research and production processes by modeling theoretical synthesis techniques and properties of chemical compounds. Mastery of mathematics is crucial for success in the STEM disciplines, and pedagogical research on formative strategies and the topics that must be covered is essential.

Keywords: emporium model, mathematics, pedagogy, STEM

Procedia PDF Downloads 55
281 Spirits and Social Agency: A Critical Review of Studies from Africa

Authors: Sanaa Riaz

Abstract:

Spirits occupy a world that dwells between the binary of the divine and the earthly while speaking to all forces of nature, marginality, and extremity in between. This paper examines conceptualizations of, interactions with, and experiences of spiritual beings in relation to the concept of self and social agency, defined as a continuum of cooperation leaving those involved with an enhanced or diminished perception of self-agency. To do justice to the diverse mythological and popular interpretations of spirit entities, ethnographic examples from Africa, in particular, are used. An examination of the nature and role of spirits in Africa allows one to understand the ways in which colonial influences brought by Catholicism and Islam added to the pre-colonial repertoire and syncretic imaginations of spirits. A comprehensive framework for analyzing spirits requires situating them as a cognitive configuration through which humans communicate with other humans and with forces of nature to receive knowledge about the normative in social roles, conduct, and action. Understanding spirits also requires rethinking the concept of self, not as one encapsulated in the individual, but as one representing positionalities in collective negotiations, adversity, and alliances. Following the postmodern understanding of identity as a far-from-coherent collection of selves fluidly moving between and dialoguing with gravitational and contradictory social forces, benevolent and maleficent spirit forces represent how people make sense of their origin, physiological and ecological changes, subsistence, political environment, and social relations. A discussion of spirits requires examining the rituals, the mediational forces, and their performance, which allow participants to tackle adversity and voicelessness and to continue to work safely and morally for the collective good. Moreover, it is important to see the conceptualization of spirits in unison with sorcery and spirit possession, central to voodoo practices, not least because they speak volumes about the experiences of slavery and marginalization. This paper has two aims: first, it presents a critical literature review of ethnographic accounts of spirit entities in African spiritual experience to examine the ways in which spirits become mediums through which the self is conceptualized and asserted. Second, the paper highlights the ways in which spirits become a medium for representing political and sociocultural ambiguities and desires along a spectrum of social agencies, including joint agency, vicarious agency, and interfered agency.

Keywords: spirits, social agency, self, ethnographic case studies

Procedia PDF Downloads 44
280 Hedgerow Detection and Characterization Using Very High Spatial Resolution SAR Data

Authors: Saeid Gharechelou, Stuart Green, Fiona Cawkwell

Abstract:

Hedgerows play an important role in a wide range of ecological habitats, in the landscape, and in agricultural management, carbon sequestration, and wood production. Detecting hedgerows accurately using satellite imagery is a challenging remote sensing problem because, spatially, a hedgerow resembles a linear object such as a road, while, spectrally, it is very similar to a forest. Remote sensors with very high spatial resolution (VHR) have recently made the automatic detection of hedges possible through the acquisition of images with sufficient spectral and spatial resolution. Indeed, VHR remote sensing data have provided the opportunity to detect hedgerows as line features, but difficulties remain in monitoring their characterization at the landscape scale. In this research, TerraSAR-X Spotlight and Staring mode data with 3-5 m resolution, acquired in the wet and dry seasons of 2014-2015, are used to detect hedgerows in the Fermoy test site, Ireland. Dual-polarization (HH/VV) Spotlight data are used for hedgerow detection. Various SAR image techniques, explored in a trial-and-error fashion and integrated with classification algorithms such as texture analysis, support vector machines, k-means, and random forests, are used to detect and characterize hedgerows. We apply Shannon entropy (ShE) and backscattering analysis of single and double bounce in a polarimetric analysis to drive the object-oriented classification and finally extract the hedgerow network. The work is still in progress, and other methods remain to be applied to find the best approach for the study area. The preliminary results presented here indicate that polarimetric TerraSAR-X imagery can potentially detect hedgerows.
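One building block of such a pipeline, sketched with illustrative placeholders (a random patch instead of real TerraSAR-X data, and random training labels instead of digitised hedgerows), is a Shannon entropy texture feature feeding a random forest classifier.

```python
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk
from sklearn.ensemble import RandomForestClassifier

# Toy 8-bit backscatter patch standing in for a TerraSAR-X HH channel.
rng = np.random.default_rng(3)
hh = (rng.random((128, 128)) * 255).astype(np.uint8)

# Shannon entropy in a 5-pixel neighbourhood as a texture feature.
she = entropy(hh, disk(5))

# Stack per-pixel features and classify; the labels here are random
# placeholders for digitised hedgerow/non-hedgerow training pixels.
X = np.column_stack([hh.ravel(), she.ravel()])
y_train = rng.integers(0, 2, X.shape[0])
rf = RandomForestClassifier(n_estimators=100, random_state=3).fit(X, y_train)
hedgerow_map = rf.predict(X).reshape(hh.shape)
print("predicted hedgerow fraction:", hedgerow_map.mean())
```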

Keywords: TerraSAR-X, hedgerow detection, high resolution SAR image, dual polarization, polarimetric analysis

Procedia PDF Downloads 222
279 Tool for Maxillary Sinus Quantification in Computed Tomography Exams

Authors: Guilherme Giacomini, Ana Luiza Menegatti Pavan, Allan Felipe Fattori Alves, Marcela de Oliveira, Fernando Antonio Bacchim Neto, José Ricardo de Arruda Miranda, Seizo Yamashita, Diana Rodrigues de Pina

Abstract:

The maxillary sinus (MS), part of the paranasal sinus complex, is one of the most enigmatic structures in modern humans. The literature has suggested that MSs function as olfaction accessories, heat or humidify inspired air, aid thermoregulation, impart resonance to the voice, and more; thus, the real function of the MS is still uncertain. Furthermore, MS anatomy is complex and varies from person to person, and many diseases may affect the development of the sinuses. The incidence of rhinosinusitis and other pathoses in the MS is comparatively high, so volume analysis has clinical value. Providing volume values for the MS could be helpful in evaluating the presence of any abnormality and could be used for treatment planning and evaluation of outcomes. Computed tomography (CT) allows a more exact assessment of this structure, which enables quantitative analysis. However, this is not always possible in the clinical routine, and when it is possible, it involves much effort and/or time. It is therefore necessary to have a convenient, robust, and practical tool correlated with MS volume that is clinically applicable. The methods currently available for MS segmentation are manual or semi-automatic, and manual methods present inter- and intra-individual variability. Thus, the aim of this study was to develop an automatic tool to quantify the MS volume in CT scans of the paranasal sinuses. This study was developed with ethical approval from the authors' institutions and national review panels. The research involved 30 retrospective exams from the University Hospital, Botucatu Medical School, São Paulo State University, Brazil. The tool for automatic MS quantification, developed in Matlab®, uses a hybrid method combining different image processing techniques. For MS detection, the algorithm uses a support vector machine (SVM) with features such as pixel value, spatial distribution, and shape. The detected pixels are used as seed points for a region growing (RG) segmentation. Then, morphological operators are applied to reduce false-positive pixels, improving the segmentation accuracy. These steps are applied to all slices of the CT exam, yielding the MS volume. To evaluate the accuracy of the developed tool, the automatic method was compared with manual segmentation performed by an experienced radiologist. For the comparison, we used Bland-Altman statistics, linear regression, and the Jaccard similarity coefficient. In the statistical comparison of the two methods, linear regression showed a strong association and low dispersion between variables, the Bland-Altman analysis showed no significant differences between the methods, and the Jaccard similarity coefficient was > 0.90 in all exams. In conclusion, the developed tool to automatically quantify MS volume proved to be robust, fast, and efficient compared with manual segmentation. Furthermore, it avoids the intra- and inter-observer variation of manual and semi-automatic methods. As future work, the tool will be applied in clinical practice, where it may be useful in the diagnosis and treatment determination of MS diseases.
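A minimal sketch of the region-growing step on a toy CT slice, using scikit-image's flood fill from a hypothetical SVM-detected seed; the HU values, tolerance, and voxel spacing are illustrative assumptions, not the tool's settings.

```python
import numpy as np
from skimage.segmentation import flood

# Toy axial CT slice in Hounsfield units: an air-filled cavity (about
# -1000 HU) inside soft tissue, standing in for the maxillary sinus.
slice_hu = np.full((128, 128), 40.0)   # soft tissue background
slice_hu[40:90, 30:70] = -950.0        # air cavity standing in for the MS
seed = (60, 50)                        # hypothetical SVM-detected pixel

# Region growing from the seed: include connected pixels within a HU
# tolerance, the same idea as the RG step described in the text.
mask = flood(slice_hu, seed, tolerance=150.0)

# Volume contribution of this slice = voxel count * voxel volume (mm^3).
voxel_mm3 = 0.5 * 0.5 * 1.0            # illustrative voxel spacing
print("slice volume contribution:", mask.sum() * voxel_mm3, "mm^3")
```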

Keywords: maxillary sinus, support vector machine, region growing, volume quantification

Procedia PDF Downloads 492
278 Assessments of Some Environment Variables on Fisheries at Two Levels: Global and FAO Major Fishing Areas

Authors: Hyelim Park, Juan Martin Zorrilla

Abstract:

Climate change influences ocean ecosystem functioning widely and in various ways. Its consequences for marine ecosystems include an increase in temperature and irregular behavior of some solute concentrations. These changes can affect fisheries catches in several ways. Our aim is to assess quantitative changes in fishery catches over time and to express them through four environmental variables: sea surface temperature (SST4) and the concentrations of chlorophyll (CHL), particulate inorganic carbon (PIC), and particulate organic carbon (POC), at two spatial scales: global and the nineteen FAO Major Fishing Areas. Data collection was based on the FAO FishStatJ 2014 database as well as MODIS Aqua satellite observations from 2002 to 2012; some data had to be corrected and interpolated using existing methods. As a result, a multivariable regression model for average global fisheries catches included the temporal mean of SST4, the standard deviation of SST4, the standard deviation of CHL, and the standard deviation of PIC. A global vector autoregressive (VAR) model showed that SST4 was a statistical (Granger) cause of global fishery catches. To accommodate the varying fishery conditions and climate influences across regions, a model was constructed for each FAO Major Fishing Area. From a management perspective, some limitations of the FAO marine area division should be recognized, which opens the discussion of subdividing the areas into smaller units. Furthermore, changes in the contributions of individual fishery species, and the environmental factors possibly affecting specific species, should be examined at various scale levels.
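A minimal sketch of the Granger-causality test that underlies such a VAR analysis, on synthetic monthly series standing in for SST4 and catches (statsmodels' convention: the test asks whether the second column Granger-causes the first).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Toy monthly series standing in for SST4 and total catch anomalies.
rng = np.random.default_rng(4)
n = 120                                 # ten years of monthly data
sst = rng.normal(size=n).cumsum() * 0.1
catch = np.roll(sst, 3) * -0.5 + rng.normal(scale=0.3, size=n)  # lagged response

# Test whether SST4 Granger-causes catch: column order is (effect, cause).
data = pd.DataFrame({"catch": catch, "sst4": sst})
results = grangercausalitytests(data[["catch", "sst4"]], maxlag=6, verbose=False)
for lag, res in results.items():
    pval = res[0]["ssr_ftest"][1]
    print(f"lag {lag}: p = {pval:.4f}")
```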

Keywords: fisheries-catch, FAO FishStatJ, MODIS Aqua, sea surface temperature (SST), chlorophyll, particulate inorganic carbon (PIC), particulate organic carbon (POC), VAR, granger causality

Procedia PDF Downloads 466
277 Improving Fingerprinting-Based Localization System Using Generative AI

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities, including traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Owing to non-line-of-sight conditions, multipath, and weather conditions, GNSS does not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. It also employs a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and Long-Term Evolution (LTE) fingerprints. The proposed scheme reduced the workload of the site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and significantly reduces radio map construction costs compared to traditional methods.
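The S-DCGAN pipeline is not reproduced here; as a baseline illustration of the fingerprinting idea itself, the sketch below builds a toy radio map with a log-distance path-loss model and matches an online RSS reading by weighted k-nearest neighbours. All geometry and signal parameters are invented.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Toy offline fingerprint database: rows are received-signal-strength (RSS)
# vectors from five transmitters, targets are the (x, y) survey positions.
rng = np.random.default_rng(5)
positions = rng.uniform(0, 50, size=(400, 2))            # surveyed points (m)
aps = np.array([[0, 0], [50, 0], [0, 50], [50, 50], [25, 25]])
dists = np.linalg.norm(positions[:, None] - aps[None], axis=2)
rss = -30 - 20 * np.log10(dists + 1) + rng.normal(0, 2, dists.shape)

# Online phase: weighted k-nearest-neighbour matching against the radio map.
knn = KNeighborsRegressor(n_neighbors=5, weights="distance").fit(rss, positions)
test_pos = np.array([[12.0, 34.0]])
test_dists = np.linalg.norm(test_pos[:, None] - aps[None], axis=2)
test_rss = -30 - 20 * np.log10(test_dists + 1)
print("estimated position:", knn.predict(test_rss))
```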

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 34
276 Assessment of Multi-Domain Energy Systems Modelling Methods

Authors: M. Stewart, Ameer Al-Khaykan, J. M. Counsell

Abstract:

Emissions are a consequence of electricity generation. As a major option for low-carbon generation, local energy systems (LES) featuring combined heat and power with solar PV (CHPV) have significant potential to increase energy performance, increase resilience, and offer greater control of local energy prices while complementing the UK's emissions standards and targets. Recent advances in the dynamic modelling and simulation of buildings and clusters of buildings using the IDEAS framework have successfully validated a novel multi-vector approach (simultaneous control of both heat and electricity) to integrating the wide range of primary and secondary plant typical of local energy system designs, including CHP, solar PV, gas boilers, absorption chillers, and thermal energy storage, together with the associated electrical and hot water networks, all operating under a single unified control strategy. Results from this work indicate through simulation that integrated control of thermal storage can play a pivotal role in optimizing system performance, well beyond present expectations. Environmental impact analysis and reporting for all energy systems, including CHPV LES, presently employ a static annual average carbon emissions intensity for grid-supplied electricity. This paper focuses on establishing and validating CHPV environmental performance against conventional emissions values and assessment benchmarks, analyzing emissions performance with and without an active thermal store in a notional group of non-domestic buildings. The results of this analysis are presented and discussed in the context of performance validation and of quantifying the reduced environmental impact of CHPV systems with active energy storage in comparison with conventional LES designs.

Keywords: CHPV, thermal storage, control, dynamic simulation

Procedia PDF Downloads 222
275 Short Text Classification Using Part of Speech Feature to Analyze Students' Feedback of Assessment Components

Authors: Zainab Mutlaq Ibrahim, Mohamed Bader-El-Den, Mihaela Cocea

Abstract:

Students' textual feedback can hold unique patterns and useful information about the learning process: it can reveal the advantages and disadvantages of teaching methods, assessment components, facilities, and other aspects of teaching. The results of analysing such feedback can form a key input for institutions’ decision makers to advance and update their systems accordingly. This paper proposes a data mining framework for analysing end-of-unit general textual feedback using a part-of-speech (PoS) feature with four machine learning algorithms: support vector machines, decision trees, random forests, and naive Bayes. The proposed framework has two tasks: first, to use the above algorithms to build an optimal model that automatically classifies the whole data set into two subsets, one tailored to assessment practices (assessment-related) and the other containing the non-assessment-related data; second, to use the same algorithms to build an optimal model, for the whole data set and for the new data subsets, that automatically detects their sentiment. The significance of this paper lies in comparing the performance of the above four algorithms using the part-of-speech feature against the performance of the same algorithms using an n-gram feature, as illustrated in the sketch that follows. The paper follows the Knowledge Discovery and Data Mining (KDDM) framework to construct the classification and sentiment analysis models: understanding the assessment domain, cleaning and pre-processing the data set, selecting and running the data mining algorithms, interpreting the mined patterns, and consolidating the discovered knowledge. The experiments show that models using either feature performed very well on the first task. On the second task, however, models that used the part-of-speech feature underperformed in comparison with models that used unigram and bigram features.
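
As an illustration of the part-of-speech feature the paper relies on (a minimal sketch assuming NLTK and scikit-learn; the toy feedback sentences and labels are invented), each comment is replaced by its PoS tag sequence before a conventional bag-of-words classifier is trained:

    # Sketch: PoS-sequence features for short-text classification.
    import nltk
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    feedback = ["The exam questions were far too long",
                "Great lecture slides and friendly staff",
                "Coursework deadlines clashed with other units",
                "The labs were well organised"]
    labels = [1, 0, 1, 0]  # 1 = assessment-related, 0 = non-assessment-related

    # Replace each sentence by its PoS tag sequence, e.g. "DT NN NNS VBD ..."
    pos_docs = [" ".join(tag for _, tag in nltk.pos_tag(nltk.word_tokenize(s)))
                for s in feedback]

    X = CountVectorizer().fit_transform(pos_docs)
    model = MultinomialNB().fit(X, labels)
    print(model.predict(X))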

Keywords: assessment, part of speech, sentiment analysis, student feedback

Procedia PDF Downloads 125
274 Depth Camera Aided Dead-Reckoning Localization of Autonomous Mobile Robots in Unstructured GNSS-Denied Environments

Authors: David L. Olson, Stephen B. H. Bruder, Adam S. Watkins, Cleon E. Davis

Abstract:

In global navigation satellite system (GNSS)-denied settings, such as indoor environments, autonomous mobile robots are often limited to dead-reckoning navigation techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished by employing an inertial measurement unit (IMU), which, while precise in nature, accumulates errors rapidly and severely degrades the localization solution. Standard sensor fusion methods, such as Kalman filtering, aim to fuse precise IMU measurements with accurate aiding sensors to establish a precise and accurate solution. In indoor environments, where neither GNSS nor any other a priori information about the environment is available, effective sensor fusion is difficult to achieve, as accurate aiding sensor choices are sparse. However, an opportunity arises from employing a depth camera in the indoor environment. A depth camera can capture point clouds of the surrounding floors and walls. Extracting attitude from these surfaces can serve as an accurate aiding source, which directly combats errors that arise due to gyroscope imperfections. This configuration for sensor fusion leads to a dramatic reduction of PVA error compared to traditional aiding sensor configurations. This paper provides the theoretical basis for the depth camera aiding method, initial expectations of the performance benefit via simulation, and a hardware implementation verifying its effectiveness. The hardware implementation is performed on the Quanser Qbot 2™ mobile robot, with a VectorNav VN-200™ IMU and a Microsoft Kinect™ camera.
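
The aiding idea can be shown in a very small filter. The Python sketch below is a deliberately simplified one-dimensional analogue (not the paper's filter): roll is dead-reckoned from a drifting gyro and periodically corrected by a roll measurement of the kind a depth camera would supply from a floor-plane normal; all noise levels are invented.

    # Sketch: gyro dead reckoning aided by sparse depth-camera attitude fixes.
    import numpy as np

    dt, n = 0.02, 500
    rng = np.random.default_rng(2)
    true_roll = 0.1 * np.sin(0.5 * dt * np.arange(n))                   # rad
    gyro = np.gradient(true_roll, dt) + 0.002 + rng.normal(0, 0.01, n)  # rate + bias

    x, P = 0.0, 1e-4               # roll estimate and its variance
    Q, R = (0.01 * dt) ** 2, 0.005 ** 2
    for k in range(n):
        x += gyro[k] * dt          # propagate with the gyro (dead reckoning)
        P += Q
        if k % 25 == 0:            # depth-camera fix from the floor-plane normal
            z = true_roll[k] + rng.normal(0.0, 0.005)
            K = P / (P + R)
            x += K * (z - x)       # Kalman correction combats gyro drift
            P *= 1 - K
    print(f"final roll error: {abs(x - true_roll[-1]):.4f} rad")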

Keywords: autonomous mobile robotics, dead reckoning, depth camera, inertial navigation, Kalman filtering, localization, sensor fusion

Procedia PDF Downloads 192
273 The Concept of Decentralization: Modern Challenges for the EU Countries, Prospects for Further Implementation in Ukraine

Authors: Alina Murtishcheva

Abstract:

The tendency of globalization, and the challenges to democracy and peace posed by the Russian invasion of Ukraine and other global conflicts, require a search for general orientations in governmental development, including local government. The formation of a common theoretical framework for local government not only guarantees the harmonization of European legislation but also creates prerequisites for the integration of new members into the European Union. One of the most important milestones of such a theoretical framework is the concept of decentralization. Decentralization as a phenomenon is characteristic of most European Union countries at different historical stages. For Ukraine, as a country that has clearly defined a European integration vector of development, understanding not only the legal but also the theoretical basis of decentralization processes in European countries is an important prerequisite for further reforms. Decentralization takes different forms, which leads to a variety of understandings in doctrine and, consequently, different interpretations in national legislation. Despite this, decentralization is based on common ideas and values, such as democracy, participation, the rule of law, and proximity of government, that are shared by all EU member states. Nevertheless, not all EU countries are currently implementing broad decentralization in their political and legal practices. Some countries are gradually moving in this direction, while others remain quite centralized. There is also a new, insufficiently studied trend: recentralization, which can be broadly defined as the strengthening of centralizing tendencies in countries that were considered decentralized. Consequently, an exploratory theoretical study is needed to identify how the concept of decentralization is combined with the recentralization tendency in EU member states. The purpose of this study is to analyse scientific approaches to the concept of “decentralization”, to highlight the tendency of recentralization and its consequences, to analyse Ukraine's experience in the field of decentralization of public power, and to outline the prospects for further development of Ukrainian legislation in this area.

Keywords: centralization, decentralization, local government, recentralization, reforms

Procedia PDF Downloads 59
272 Plant Mediated RNAi Approach to Knock Down Ecdysone Receptor Gene of Colorado Potato Beetle

Authors: Tahira Hussain, Ilhom Rahamkulov, Muhammad Aasim, Ugur Pirlak, Emre Aksoy, Mehmet Emin Caliskan, Allah Bakhsh

Abstract:

RNA interference (RNAi) has recently proved its usefulness in functional genomic research on insects and is considered a potential strategy in crop improvement for the control of insect pests. Various insect pests inflict significant losses on potato yields worldwide, the Colorado potato beetle (CPB) being the most notorious. The present study focuses on knocking down the highly specific 20-hydroxyecdysone hormone-receptor complex interaction by using an RNAi approach to silence the ecdysone receptor (EcR) gene of CPB in transgenic potato plants expressing dsRNA of the EcR gene. Partial cDNA of the ecdysone receptor gene of CPB was amplified using specific primers in sense and anti-sense orientation and cloned into the pRNAi-GG vector, flanking an intronic sequence (pdk). Leaf and internodal explants of the Lady Olympia, Agria, and Granola cultivars of potato were infected with Agrobacterium strain LBA4404 harboring the plasmid pRNAi-CPB or pRNAi-GFP (used as control). The neomycin phosphotransferase (nptII) gene was used as a plant selectable marker at a concentration of 100 mg L⁻¹. The primary transformants obtained showed proper integration of the T-DNA into the plant genome, as confirmed by standard molecular analyses such as polymerase chain reaction (PCR), real-time PCR, and Southern blotting. The transgenic plants developed from these cultivars are being evaluated for their efficacy against larvae as well as adults of CPB. The transgenic lines are expected to inhibit expression of the EcR gene, hindering the insects' molting process and hence leading to increased potato yield.

Keywords: plant mediated RNAi, molecular strategy, ecdysone receptor, insect metamorphosis

Procedia PDF Downloads 154
271 Obtainment of Systems with Efavirenz and Lamellar Double Hydroxide as an Alternative for Solubility Improvement of the Drug

Authors: Danilo A. F. Fontes, Magaly A. M. Lyra, Maria L. C. Moura, Leslie R. M. Ferraz, Salvana P. M. Costa, Amanda C. Q. M. Vieira, Larissa A. Rolim, Giovanna C. R. M. Schver, Ping I. Lee, Severino Alves-Júnior, José L. Soares-Sobrinho, Pedro J. Rolim-Neto

Abstract:

Efavirenz (EFV) is a first-choice drug in antiretroviral therapy, with high efficacy in the treatment of infection by the human immunodeficiency virus (HIV), which causes acquired immune deficiency syndrome (AIDS). EFV has low solubility in water, resulting in a decrease in its dissolution rate and, consequently, in its bioavailability. Among the technological alternatives for increasing solubility, lamellar double hydroxides (LDH) have been applied in the development of systems with poorly water-soluble drugs. Analytical techniques such as X-ray diffraction (XRD), infrared spectroscopy (IR), and differential scanning calorimetry (DSC) allow the elucidation of drug interactions with lamellar compounds. The objective of this work was to develop and characterize binary systems of EFV and LDH in order to increase the solubility of the drug. LDH-CaAl was synthesized by co-precipitation from solutions of calcium nitrate and aluminum nitrate in a basic medium. The EFV-LDH systems and their corresponding physical mixtures (PM) were obtained at different concentrations (5-60% of EFV) using the solvent technique described by Takahashi & Yamaguchi (1991). The systems and the PMs were characterized by XRD, IR, DSC, and a dissolution test under non-sink conditions. The results showed improvements in the solubility of EFV when associated with LDH, due to a possible change in its crystal structure and the formation of an amorphous material. The DSC results showed that the endothermic peak at 173°C, which corresponds to the melting of EFV in its crystalline form, was present in the PMs but absent in the EFV-LDH systems with 5, 10, and 30% drug loading. The XRD profiles of the PMs showed well-defined peaks for EFV, whereas the profiles of all the EFV-LDH systems showed complete attenuation of the characteristic peaks of the crystalline form of EFV. The IR results of the PMs showed the appearance of one band and the overlap of other bands, while those of the systems with 5, 10, and 30% drug loading showed the disappearance of some bands and reduced intensity of a few others. The dissolution test under non-sink conditions showed that the systems with 5, 10, and 30% drug loading greatly increased the solubility of EFV, but only the system with 10% drug loading kept a substantial amount of drug in solution at different pHs.

Keywords: efavirenz, lamellar double hydroxides, pharmaceutical technology, solubility

Procedia PDF Downloads 562
270 The Relationships between Energy Consumption, Carbon Dioxide (CO2) Emissions, and GDP for Egypt: Time Series Analysis, 1980-2010

Authors: Jinhoa Lee

Abstract:

The relationships between environmental quality, energy use, and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of carbon dioxide (CO2) emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive country-level case study using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, and electricity), CO2 emissions, and gross domestic product (GDP) for Egypt, using time series analysis for the period 1980-2010. To investigate the relationships between the variables, this paper employs the augmented Dickey-Fuller (ADF) test for stationarity, the Johansen maximum likelihood method for co-integration, and a vector error correction model (VECM) for both short-run and long-run causality among the research variables. The long-run equilibrium in the VECM suggests some negative impacts of CO2 emissions and of coal and natural gas use on GDP. Conversely, a positive long-run causality from electricity consumption to GDP is found to be significant in Egypt during the period. In the short run, some positive unidirectional causalities exist, running from coal consumption to GDP, CO2 emissions, and natural gas use. Further, GDP and electricity use are positively influenced by the consumption of petroleum products and the direct combustion of crude oil. Overall, the results support the argument that there are relationships among environmental quality, energy use, and economic output in both the short term and the long term; however, the effects may differ across energy sources, as in the case of Egypt for the period 1980-2010.
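
For readers unfamiliar with this workflow, the Python sketch below runs the same three steps (ADF unit-root tests, Johansen co-integration, VECM estimation) with statsmodels on synthetic random-walk series standing in for the Egyptian data; the numbers it produces are meaningless beyond illustrating the calls.

    # Sketch: ADF -> Johansen -> VECM pipeline, assuming statsmodels.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.stattools import adfuller
    from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

    rng = np.random.default_rng(3)
    n = 31                                         # annual data, 1980-2010
    trend = rng.normal(0, 1, n).cumsum()           # shared stochastic trend
    data = pd.DataFrame({
        "gdp": trend + rng.normal(0, 0.3, n),
        "co2": trend + rng.normal(0, 0.3, n),
        "energy": trend + rng.normal(0, 0.3, n),
    })

    for col in data:                               # step 1: unit-root tests
        stat, pval = adfuller(data[col])[:2]
        print(f"{col}: ADF p-value = {pval:.3f}")

    jres = coint_johansen(data, det_order=0, k_ar_diff=1)   # step 2
    print("trace statistics:", jres.lr1)           # compare against jres.cvt

    vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
    print(vecm.alpha)                              # step 3: adjustment coefficients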

Keywords: CO2 emissions, Egypt, energy consumption, GDP, time series analysis

Procedia PDF Downloads 606
269 Quality Assessment of New Zealand Mānuka Honeys Using Hyperspectral Imaging Combined with Deep 1D-Convolutional Neural Networks

Authors: Hien Thi Dieu Truong, Mahmoud Al-Sarayreh, Pullanagari Reddy, Marlon M. Reis, Richard Archer

Abstract:

New Zealand mānuka honey is a honeybee product derived mainly from Leptospermum scoparium nectar. The potent antibacterial activity of mānuka honey derives principally from methylglyoxal (MGO), in addition to the hydrogen peroxide and other lesser activities present in all honey. MGO is formed from dihydroxyacetone (DHA), which is unique to L. scoparium nectar. Mānuka honey also has an idiosyncratic phenolic profile that is useful as a chemical marker. Authentic mānuka honey is highly valuable, but almost all honey is formed from natural mixtures of nectars harvested by a hive over a period of time. Once diluted by other nectars, mānuka honey irrevocably loses value. We aimed to apply hyperspectral imaging to honey frames before bulk extraction to minimise the dilution of genuine mānuka honey by other honey and to ensure authenticity at the source. This technology is non-destructive and suitable for an industrial setting. Chemometrics using linear partial least squares (PLS) and support vector machine (SVM) models showed limited efficacy in interpreting the chemical footprints, owing to large non-linear relationships between predictors and predictands in a large sample set, likely caused by honey quality variability across geographic regions. Therefore, an advanced modelling approach, one-dimensional convolutional neural networks (1D-CNN), was investigated for analysing the hyperspectral data and extracting biochemical information from honey. The 1D-CNN model showed superior prediction of honey quality (R² = 0.73, RMSE = 2.346, RPD = 2.56) compared to PLS (R² = 0.66, RMSE = 2.607, RPD = 1.91) and SVM (R² = 0.67, RMSE = 2.559, RPD = 1.98). Classification of mono-floral mānuka honey from multi-floral and non-mānuka honey exceeded 90% accuracy for all models tried. Overall, this study reveals the potential of HSI and deep learning modelling for automating the evaluation of honey quality in frames.
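
To indicate the general shape of such a model (a minimal Keras sketch under an invented band count and synthetic spectra, not the architecture used in the study), a per-pixel 1D-CNN regressor over spectral bands can be as small as:

    # Sketch: 1D-CNN regression over per-pixel spectra, assuming Keras.
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    rng = np.random.default_rng(4)
    n_samples, n_bands = 800, 224                  # band count is illustrative
    X = rng.normal(0, 1, (n_samples, n_bands, 1)).astype("float32")
    y = X[:, 40:60, 0].mean(axis=1)                # stand-in quality attribute

    model = keras.Sequential([
        layers.Input(shape=(n_bands, 1)),
        layers.Conv1D(32, 7, activation="relu"),   # learn local spectral motifs
        layers.MaxPooling1D(2),
        layers.Conv1D(64, 5, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(32, activation="relu"),
        layers.Dense(1),                           # regress e.g. an MGO-linked score
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=3, batch_size=32, verbose=0)
    print(model.evaluate(X, y, verbose=0))         # MSE on the toy data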

Keywords: mānuka honey, quality, purity, potency, deep learning, 1D-CNN, chemometrics

Procedia PDF Downloads 119