Search results for: bayesian inference
86 Assessment of Genetic Diversity and Population Structure of Goldstripe Sardinella, Sardinella gibbosa in the Transboundary Area of Kenya and Tanzania Using mtDNA and msDNA Markers
Authors: Sammy Kibor, Filip Huyghe, Marc Kochzius, James Kairo
Abstract:
Goldstripe Sardinella, Sardinella gibbosa (Bleeker, 1849), is a commercially and ecologically important small pelagic fish common in the Western Indian Ocean region. The present study aimed to assess the genetic diversity and population structure of the species in the Kenya-Tanzania transboundary area using mtDNA and msDNA markers. A 630 bp sequence of the mitochondrial DNA (mtDNA) cytochrome c oxidase I (COI) gene and five polymorphic microsatellite DNA loci were analyzed. Fin clips of 309 individuals from eight locations within the transboundary area were collected between July and December 2018. The S. gibbosa individuals from the different locations were distinguishable from one another based on the mtDNA variation, as demonstrated with a neighbor-joining tree and minimum spanning network analysis. None of the 22 identified haplotypes were shared between Kenya and Tanzania. Gene diversity per locus was relatively high (0.271-0.751); the highest Fis was 0.391. The structure analysis, discriminant analysis of principal components (DAPC), and the pairwise values (FST = 0.136, P < 0.001) after Bonferroni correction using five microsatellite loci provided clear inference on genetic differentiation and thus evidence of population structure of S. gibbosa along the Kenya-Tanzania coast. This study shows a high level of genetic diversity and the presence of population structure (ΦST = 0.078, P < 0.001), resulting in the existence of four populations and giving a clear indication of minimal gene flow among them. This information has application in the design of marine protected areas, an important tool for marine conservation. Keywords: marine connectivity, microsatellites, population genetics, transboundary
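The per-locus gene diversity reported above is conventionally computed as expected heterozygosity. A minimal sketch of that calculation, using hypothetical allele counts rather than the study's microsatellite data:

```python
# Gene diversity (expected heterozygosity) per locus: H = 1 - sum(p_i^2),
# optionally with Nei's small-sample correction n/(n-1).
# The allele counts below are hypothetical, not taken from the study.

def gene_diversity(allele_counts, corrected=False):
    n = sum(allele_counts)                        # total gene copies sampled
    h = 1.0 - sum((c / n) ** 2 for c in allele_counts)
    if corrected and n > 1:
        h *= n / (n - 1)                          # unbiased estimator
    return h

counts = [40, 35, 15, 10]                         # hypothetical 4-allele locus
print(round(gene_diversity(counts), 3))           # 0.685
```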
Procedia PDF Downloads 124
85 A Bayesian Classification System for Facilitating an Institutional Risk Profile Definition
Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan
Abstract:
This paper presents an approach for the easy creation and classification of institutional risk profiles supporting endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the setup of the most important risk factors. Subsequently, risk profiles employ a risk factor classifier and associated configurations to support digital preservation experts with a semi-automatic estimation of the endangerment group for file format risk profiles. Our goal is to make use of an expert knowledge base, acquired through a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation of risk factors for a required dimension of analysis. Using the naive Bayes method, the decision support system recommends to an expert the matching risk profile group for the previously selected institutional risk profile. The proposed methods improve the visibility of risk factor values and the quality of a digital preservation process. The presented approach is designed to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and the values of file format risk profiles. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert and to define its profile group. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section. Keywords: linked open data, information integration, digital libraries, data mining
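The naive Bayes recommendation step can be sketched as follows; the risk factors, values, and endangerment groups here are hypothetical placeholders, not those from the survey:

```python
import math
from collections import Counter, defaultdict

# Minimal sketch: match an institutional risk profile to an endangerment
# group with naive Bayes. Factors, values, and groups are hypothetical.

def train(profiles, labels):
    prior = Counter(labels)                       # P(group) counts
    cond = defaultdict(Counter)                   # counts for P(value | factor, group)
    for profile, label in zip(profiles, labels):
        for factor, value in profile.items():
            cond[(factor, label)][value] += 1
    return prior, cond

def recommend(profile, prior, cond):
    best, best_score = None, float("-inf")
    for label, count in prior.items():
        score = math.log(count)
        for factor, value in profile.items():
            c = cond[(factor, label)]
            # Laplace smoothing avoids zero probabilities for unseen values
            score += math.log((c[value] + 1) / (sum(c.values()) + 2))
        if score > best_score:
            best, best_score = label, score
    return best

profiles = [
    {"format_obsolete": "yes", "vendor_support": "no"},
    {"format_obsolete": "yes", "vendor_support": "no"},
    {"format_obsolete": "no", "vendor_support": "yes"},
]
labels = ["endangered", "endangered", "safe"]
prior, cond = train(profiles, labels)
print(recommend({"format_obsolete": "yes", "vendor_support": "no"}, prior, cond))  # endangered
```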
Procedia PDF Downloads 426
84 Improving 99mTc-tetrofosmin Myocardial Perfusion Images by Time Subtraction Technique
Authors: Yasuyuki Takahashi, Hayato Ishimura, Masao Miyagawa, Teruhito Mochizuki
Abstract:
Quantitative measurement of myocardial perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make it difficult to assess perfusion accurately in the inferior myocardium. Our idea is to reduce the high accumulation in the liver by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semiconductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient. From the data, ten SPECT images were reconstructed, one for every minute of acquired data. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained by reconstructing an image from the early projections, during which the liver accumulation dominates (0.5~2.5 minutes SPECT image minus 5~10 minutes SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (5~10 minutes SPECT image minus liver-only image). The time subtraction of the liver was possible in both the phantom and the clinical study, and visualization of the inferior myocardium was improved. In past reports, the inferior myocardium was often un-diagnosable because of the overlapping high liver accumulation. Using our time subtraction method, the image quality of the 99mTc-tetrofosmin myocardial SPECT image is considerably improved. Keywords: 99mTc-tetrofosmin, dynamic SPECT, time subtraction, semiconductor detector
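The time-subtraction arithmetic can be sketched on tiny synthetic 2x2 "images" of counts (not clinical data): an approximate liver-only image is extracted from the early frame and then subtracted from a late frame containing both liver and myocardial uptake.

```python
# Minimal sketch of the time-subtraction step on synthetic pixel counts.

def subtract(a, b, floor=0.0):
    """Pixel-wise a - b, clamped at `floor` so counts stay non-negative."""
    return [[max(x - y, floor) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

early = [[0.0, 0.0], [8.0, 2.0]]   # 0.5-2.5 min frame: liver dominates
late  = [[5.0, 6.0], [3.0, 10.0]]  # 5-10 min frame: liver + myocardium

liver_only = subtract(early, late)       # approximation to a liver-only image
myocardium = subtract(late, liver_only)  # late frame minus liver contribution
print(myocardium)
```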
Procedia PDF Downloads 335
83 MAGNI Dynamics: A Vision-Based Kinematic and Dynamic Upper-Limb Model for Intelligent Robotic Rehabilitation
Authors: Alexandros Lioulemes, Michail Theofanidis, Varun Kanal, Konstantinos Tsiakas, Maher Abujelala, Chris Collander, William B. Townsend, Angie Boisselle, Fillia Makedon
Abstract:
This paper presents a home-based robot-rehabilitation instrument, called “MAGNI Dynamics”, that utilizes a vision-based kinematic/dynamic module and an adaptive haptic feedback controller. The system is expected to provide personalized rehabilitation by adjusting its resistive and supportive behavior according to a fuzzy intelligence controller that acts as an inference system, which correlates the user’s performance to different stiffness factors. The vision module uses the Kinect’s skeletal tracking to monitor the user’s effort in an unobtrusive and safe way, by estimating the torque that affects the user’s arm. The system’s torque estimations are validated by capturing electromyographic data from primitive hand motions (shoulder abduction and shoulder forward flexion). Moreover, we present and analyze how the Barrett WAM generates a force field with a haptic controller to support or challenge the users. Experiments show that shifting the proportional value, which corresponds to different stiffness factors of the haptic path, can potentially help the user to improve his/her motor skills. Finally, potential areas for future research are discussed that address how a rehabilitation robotic framework may include multisensing data to improve the user’s recovery process. Keywords: human-robot interaction, Kinect, kinematics, dynamics, haptic control, rehabilitation robotics, artificial intelligence
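The proportional force-field idea can be sketched with assumed numbers: the restoring force pulling the arm back toward the reference path scales with a gain Kp that plays the role of the stiffness factor, so raising Kp gives more support and lowering it challenges the user.

```python
# Minimal sketch of a proportional haptic force field; values are assumed,
# not taken from the Barrett WAM controller in the paper.

def force_field(deviation_m, kp):
    """Restoring force (N) for a deviation (m) from the haptic path."""
    return -kp * deviation_m

for kp in (10.0, 50.0):                  # hypothetical stiffness factors (N/m)
    print(kp, force_field(0.02, kp))     # force for a 2 cm deviation
```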
Procedia PDF Downloads 329
82 Comparison of Different Artificial Intelligence-Based Protein Secondary Structure Prediction Methods
Authors: Jamerson Felipe Pereira Lima, Jeane Cecília Bezerra de Melo
Abstract:
The difficulty and cost of obtaining protein tertiary structure information through experimental methods, such as X-ray crystallography or NMR spectroscopy, have driven the development of computational methods. One such approach is the prediction of the three-dimensional structure from the residue chain; however, this has been proven to be an NP-hard problem, owing to the complexity of the process, as explained by the Levinthal paradox. An alternative solution is the prediction of intermediary structures, such as the secondary structure of the protein. Artificial intelligence methods, such as Bayesian statistics, artificial neural networks (ANN), and support vector machines (SVM), among others, have been used to predict protein secondary structure. Due to their good results, artificial neural networks have been used as a standard method to predict protein secondary structure. Recently published methods that use this technique, in general, achieve a Q3 accuracy between 75% and 83%, whereas the theoretical accuracy limit for protein secondary structure prediction is 88%. Alternatively, to achieve better results, support vector machine prediction methods have been developed. The statistical evaluation of methods that use different AI techniques, such as ANNs and SVMs, is not a trivial problem, since different training sets, validation techniques, and other variables can influence the behavior of a prediction method. In this study, we propose a prediction method based on artificial neural networks, which is then compared with a selected SVM method. The chosen SVM protein secondary structure prediction method is the one proposed by Huang in his work Extracting Physicochemical Features to Predict Protein Secondary Structure (2013). 
The developed ANN method follows the same training and testing process used by Huang to validate his method, which comprises the use of the CB513 protein data set and three-fold cross-validation, so that the statistical results of the two methods can be compared directly. Keywords: artificial neural networks, protein secondary structure, protein structure prediction, support vector machines
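The Q3 score used throughout this comparison is simply the fraction of residues whose predicted secondary-structure state matches the observed one; a minimal sketch, with made-up sequences:

```python
# Q3 accuracy: fraction of residues whose predicted state (H: helix,
# E: strand, C: coil) matches the observed state. Sequences are illustrative.

def q3(predicted, observed):
    assert len(predicted) == len(observed)
    hits = sum(p == o for p, o in zip(predicted, observed))
    return hits / len(observed)

print(q3("HHHECCCE", "HHHECCCC"))  # 7 of 8 residues correct -> 0.875
```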
Procedia PDF Downloads 621
81 Developing a DNN Model for the Production of Biogas From a Hybrid BO-TPE System in an Anaerobic Wastewater Treatment Plant
Authors: Hadjer Sadoune, Liza Lamini, Scherazade Krim, Amel Djouadi, Rachida Rihani
Abstract:
Deep neural networks are highly regarded for their accuracy in predicting intricate fermentation processes. Their ability to learn from large datasets through artificial intelligence makes them particularly effective models. The primary obstacle in improving the performance of these models is carefully choosing suitable hyperparameters, including the neural network architecture (number of hidden layers and hidden units), activation function, optimizer, learning rate, and other relevant factors. This study predicts biogas production from real wastewater treatment plant data using a sophisticated approach: hybrid Bayesian optimization with a tree-structured Parzen estimator (BO-TPE) for an optimised deep neural network (DNN) model. The plant utilizes an Upflow Anaerobic Sludge Blanket (UASB) digester that treats industrial wastewater from soft drinks and breweries. The digester has a working volume of 1574 m3 and a total volume of 1914 m3; its internal diameter and height are 19 m and 7.14 m, respectively. The data preprocessing was conducted with meticulous attention to preserving data quality while avoiding data reduction. Three normalization techniques (MinMaxScaler, RobustScaler and StandardScaler) were applied to the pre-processed data and compared with the non-normalized data. The RobustScaler approach showed the strongest predictive ability for estimating the volume of biogas produced. The highest predicted biogas volume was 2236.105 Nm³/d, with coefficient of determination (R2), mean absolute error (MAE), and root mean square error (RMSE) values of 0.712, 164.610, and 223.429, respectively. Keywords: anaerobic digestion, biogas production, deep neural network, hybrid BO-TPE, hyperparameters tuning
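The three evaluation metrics reported above (R2, MAE, RMSE) can be sketched directly from their definitions; the y values below are synthetic placeholders, not the plant's measurements:

```python
import math

# Minimal sketch of the regression metrics used to evaluate the DNN:
# R2 = 1 - SS_res/SS_tot, MAE = mean |error|, RMSE = sqrt(mean error^2).

def regression_metrics(y_true, y_pred):
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(ss_res / n)
    return r2, mae, rmse

r2, mae, rmse = regression_metrics([2000.0, 2100.0, 2200.0],
                                   [1990.0, 2120.0, 2180.0])
print(round(r2, 3), round(mae, 1), round(rmse, 1))  # 0.955 16.7 17.3
```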
Procedia PDF Downloads 38
80 Recurrent Neural Networks for Complex Survival Models
Authors: Pius Marthin, Nihal Ata Tutkun
Abstract:
Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, due to strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on the deep learning approach to survival modeling; however, its application to complex survival problems still needs improvement. In addition, the existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that overcomes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF), and an external auto-encoder (ExternalAE) as a feature selector to extract complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets. Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayer perceptrons (MLPs)
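The cumulative incidence function at the heart of the model can be sketched in its simplest empirical form; this form is valid only because the toy data has no censoring, whereas the model in the abstract targets far more general recurrent-event settings:

```python
# Minimal sketch: empirical cumulative incidence for competing risks,
# CIF_k(t) = (1/n) * #{i : T_i <= t and cause_i = k}, assuming no censoring.
# The event times and causes below are synthetic.

def cif(times, causes, cause, t):
    n = len(times)
    return sum(1 for ti, ci in zip(times, causes) if ti <= t and ci == cause) / n

times  = [2, 3, 5, 7, 8, 11]       # synthetic event times
causes = [1, 2, 1, 1, 2, 1]        # competing event types 1 and 2
print(cif(times, causes, cause=1, t=7))   # 3 of 6 subjects -> 0.5
```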
Procedia PDF Downloads 89
79 Design of a Fuzzy Expert System for the Impact of Diabetes Mellitus on Cardiac and Renal Impediments
Authors: E. Rama Devi Jothilingam
Abstract:
Diabetes mellitus is now one of the most common non-communicable diseases globally. India leads the world with the largest number of diabetic subjects, earning the title "diabetes capital of the world". In order to reduce the mortality rate, a fuzzy expert system is designed to predict the severity of cardiac and renal problems of diabetic patients using fuzzy logic. Since uncertainty is inherent in medicine, fuzzy logic is used in this research work to remove the inherent fuzziness of linguistic concepts and the uncertain status in diabetes mellitus, which is the prime cause of cardiac arrest and renal failure. In this work, the controllable risk factors "blood sugar, insulin, ketones, lipids, obesity, blood pressure and protein/creatinine ratio" are considered as input parameters, and the "stages of cardiac" (SOC) and "stages of renal" (SORD) are considered as the output parameters. Triangular membership functions are used to model the input and output parameters. The rule base is constructed for the proposed expert system based on knowledge from medical experts. A Mamdani inference engine is used to infer information based on the rule base to take major decisions in diagnosis. Mean of maximum is used to obtain a non-fuzzy control action that best represents the possibility distribution of an inferred fuzzy control action. The proposed system also classifies patients into high risk and low risk using fuzzy c-means clustering techniques, so that patients with high risk are treated immediately. The system is validated with MATLAB and is used as a tracking system with accuracy and robustness. Keywords: diabetes mellitus, fuzzy expert system, Mamdani, MATLAB
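Two of the building blocks named above, triangular membership and mean-of-maximum defuzzification, can be sketched as follows; the severity universe and rule-firing level are illustrative, not the validated clinical rule base:

```python
# Triangular membership function and mean-of-maximum (MoM) defuzzification
# over a discretized output universe. Parameters are illustrative only.

def tri(x, a, b, c):
    """Triangular membership with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mean_of_maximum(universe, membership):
    peak = max(membership)
    xs = [x for x, m in zip(universe, membership) if m == peak]
    return sum(xs) / len(xs)

universe = [i / 10 for i in range(11)]            # output severity in [0, 1]
# A Mamdani rule firing at strength 0.7 clips the consequent's membership:
membership = [min(tri(x, 0.2, 0.5, 0.8), 0.7) for x in universe]
print(mean_of_maximum(universe, membership))      # 0.5
```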
Procedia PDF Downloads 290
78 Frank Norris’ McTeague: An Entropic Melodrama
Authors: Mohsen Masoomi, Fazel Asadi Amjad, Monireh Arvin
Abstract:
According to Naturalistic principles, human destiny, in the form of blind chance and determinism, entraps the individual, so man is a defenceless creature unable to escape from the ruthless paws of a stoical universe. In Naturalism, nonetheless, melodrama mirrors a conscious alternative with a peculiar function. A typical American Naturalistic character thus cannot be a subject for social criticism of American society, since such characters are not victims of the ongoing virtual slavery, the capitalist system, or a ruined milieu, but of their own volition and, more importantly, their character frailty. From a Postmodern viewpoint, each Naturalistic work can encompass entropic trends and changes culminating in total failure and devastation. Frank Norris in McTeague displays the futile struggles of ordinary men and how they end up brutes. McTeague encompasses intoxication, abuse, violation, and ruthless homicides. Norris’ depictions of the falling individual as a demon represent the entropic dimension of Naturalistic novels. McTeague’s defeat is somewhat his own fault, the result of his own blunders and resolution, not the result of sheer accident. Throughout the novel, each character is a kind of insane quester indicating McTeague’s decadence and, by inference, the decadence of Western civilisation. McTeague seems to designate Norris’ solicitude for a community fabricated by the elements of negative human demeanours and conducts, hauling acute symptoms of infectious dehumanisation. The aim of this article is to illustrate how one specific negative human disposition can gradually, like a running fire, spread everywhere and burn everything in its path. The author applies the concept of entropy metaphorically to describe the individual devolutions that necessarily comprise community entropy in McTeague, a dying universe. Keywords: animal imagery, entropy, Gypsy, melodrama
Procedia PDF Downloads 278
77 Development of a Fuzzy Logic Based Model for Monitoring Child Pornography
Authors: Mariam Ismail, Kazeem Rufai, Jeremiah Balogun
Abstract:
A study was conducted to apply fuzzy logic to the development of a monitoring model for child pornography based on associated risk factors, which can be used by forensic experts or integrated into forensic systems for the early detection of child pornographic activities. A number of methods were adopted in the study: an extensive review of related works was conducted to identify the factors associated with child pornography, after which the factors were validated by an expert sex psychologist and guidance counselor, and relevant data were collected. Fuzzy membership functions were used to fuzzify the identified variables alongside the risk of the occurrence of child pornography, based on the inference rules provided by the experts consulted, and the fuzzy logic expert system was simulated using the Fuzzy Logic Toolbox available in MATLAB Release 2016. The results of the study showed that there were 4 categories of risk factors required for assessing the risk of a suspect committing child pornography offenses. Either 2 or 3 triangular membership functions were used to formulate each risk factor, depending on the 2 or 3 labels assigned to it. Five fuzzy logic models were formulated: the first 4 were used to assess the impact of each category on child pornography, while the last one takes the outputs of those 4 models as the inputs required for assessing the overall risk of child pornography. The following conclusion was made: factors related to personal traits, social traits, history of child pornography crimes, and self-regulatory deficiency traits of the suspect are required for the assessment of the risk of child pornography crimes committed by a suspect. 
Using the values of the identified risk factors selected for this study, the risk of child pornography can be easily assessed in order to determine the likelihood of a suspect perpetrating the crime. Keywords: fuzzy, membership functions, pornography, risk factors
Procedia PDF Downloads 129
76 Socio-Demographic Factors and Testing Practices Are Associated with Spatial Patterns of Clostridium difficile Infection in the Australian Capital Territory, 2004-2014
Authors: Aparna Lal, Ashwin Swaminathan, Teisa Holani
Abstract:
Background: Clostridium difficile infections (CDIs) have been on the rise globally. In Australia, rates of CDI in all States and Territories have increased significantly since mid-2011. Identifying risk factors for CDI in the community can help inform targeted interventions to reduce infection. Methods: We examine the role of neighbourhood socio-economic status, demography, testing practices and the number of residential aged care facilities on spatial patterns in CDI incidence in the Australian Capital Territory. Data on all tests conducted for CDI were obtained from ACT Pathology by postcode for the period 1 January 2004 through 31 December 2014. The distribution of age groups and the neighbourhood Index of Relative Socio-economic Advantage Disadvantage (IRSAD) were obtained from the Australian Bureau of Statistics 2011 National Census data. A Bayesian spatial conditional autoregressive model was fitted at the postcode level to quantify the relationship between CDI and socio-demographic factors. To identify CDI hotspots, exceedance probabilities were set at a threshold of twice the estimated relative risk. Results: CDI showed a positive spatial association with the number of tests (RR = 1.01, 95% CI 1.00, 1.02) and the resident population over 65 years (RR = 1.00, 95% CI 1.00, 1.01). The standardized IRSAD was significantly negatively associated with CDI (RR = 0.74, 95% CI 0.56, 0.94). We identified three postcodes with a high probability (0.8-1.0) of excess risk. Conclusions: We demonstrate geographic variations in CDI in the ACT, with a positive association of CDI with socioeconomic disadvantage, and identify areas with a high probability of elevated risk compared with surrounding communities. These findings highlight community-based risk factors for CDI. Keywords: spatial, socio-demographic, infection, Clostridium difficile
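The hotspot rule can be sketched directly: the exceedance probability Pr(RR > 2 | data) is the fraction of posterior samples of a postcode's relative risk above the threshold. The samples here are simulated from a placeholder lognormal, not drawn from the fitted CAR model:

```python
import random

# Minimal sketch: Monte Carlo exceedance probability for a hotspot flag.
# Posterior samples are simulated placeholders, not model output.
random.seed(1)
posterior_rr = [random.lognormvariate(0.8, 0.3) for _ in range(5000)]

threshold = 2.0
exceedance = sum(rr > threshold for rr in posterior_rr) / len(posterior_rr)
print(round(exceedance, 3))   # flag the postcode if this is high (e.g. > 0.8)
```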
Procedia PDF Downloads 320
75 Moral Reasoning among Croatian Adolescents with Different Levels of Education
Authors: Nataša Šimić, Ljiljana Gregov, Matilda Nikolić, Andrea Tokić, Ana Proroković
Abstract:
Moral development takes place in six phases, which can be grouped into a pre-conventional, a conventional and a post-conventional level. Moral reasoning, a key concept of moral development theories, involves a process of discernment/inference in doubtful situations. In research to date, education has proved to be a significant predictor of moral reasoning. The aim of this study was to investigate differences in moral reasoning and Kohlberg's phases of moral development between Croatian adolescents with different levels of education. In Study 1, comparisons were made between a group of secondary school students aged 17-18 (N=192) and a group of university students aged 21-25 (N=383). Study 2 included a comparison between a university students group (N=69) and a non-students group (N=43) aged 21 to 24 (these two groups did not differ in age). In both studies, the Croatian Test of Moral Reasoning by Proroković was applied. As a measure of moral reasoning, the Index of Moral Reasoning (IMR) was calculated. This measure has some advantages compared to other measures of moral reasoning, and includes individual assessments of deviations from the ‘optimal profile’. Results of Study 1 did not show differences in the IMR between secondary school students and university students. Both groups gave higher assessments to the arguments that correspond to higher phases of moral development. However, group differences were found for the pre-conventional and conventional phases. As expected, secondary school students gave significantly higher assessments to the arguments that correspond to lower phases of moral development. Results of Study 2 showed that university students, relative to non-students, have a higher IMR. With respect to the phases of moral development, both groups of participants gave higher assessments to the arguments that correspond to the post-conventional phase. 
Consistent with expectations and previous findings, the results of both studies did not confirm gender differences in moral reasoning. Keywords: education, index of moral reasoning, Kohlberg's theory of moral development, moral reasoning
Procedia PDF Downloads 249
74 Applications of the Morphological Variability in River Management: A Study of West Rapti River
Authors: Partha Sarathi Mondal, Srabani Sanyal
Abstract:
Different geomorphic agents produce different landform patterns. Rivers, likewise, have distinct and diverse landform patterns, and even within a single river course, different and distinct assemblages of landforms, i.e. morphological variability, are seen. This morphological variability is produced by different river processes. Channel and floodplain morphology helps to interpret river processes. Consequently, morphological variability can be used as an important tool for assessing river processes, hydrological connectivity and river health, which will help us to draw inferences about river processes and, therefore, the management of river health. The present study documents the West Rapti river, a trans-boundary river flowing through Nepal and India, from its source to its confluence with the Ghaghra river in India. The river shows significant morphological variability throughout its course. The present study tries to find out the factors and processes responsible for the morphological variability of the river and the ways it can be applied in river management practices. For this purpose, the channel and floodplain morphology of the West Rapti river was mapped as accurately as possible, and then, on the basis of process-form interactions, inferences were drawn to understand the factors of morphological variability. The study shows that the valley setting of the West Rapti river in the Himalayan region is confined, and in places partly confined, whereas the channel is single-thread in most of the Himalayan region and braided in the valley region. In the foothill region the valley is unconfined and the channel is braided; in the middle part the channel is meandering and the valley is unconfined; and the channel is anthropogenically altered in the lower part of the course. Due to this, the morphology of the West Rapti river is highly diverse. This morphological variability is produced by different geomorphic processes. 
Therefore, for any river management it is essential to sustain this morphological variability so that the river does not cross the geomorphic threshold, and the environmental flow of the river, along with the biodiversity of the riparian region, is maintained. Keywords: channel morphology, environmental flow, floodplain morphology, geomorphic threshold
Procedia PDF Downloads 373
73 Security of Database Using Chaotic Systems
Authors: Eman W. Boghdady, A. R. Shehata, M. A. Azem
Abstract:
Database (DB) security demands permitting authorized users' actions and prohibiting actions by non-authorized users and intruders on the DB and the objects inside it. Organizations that are running successfully demand the confidentiality of their DBs. They do not allow unauthorized access to their data/information, and they demand assurance that their data is protected against any malicious or accidental modification. DB protection and confidentiality are the security concerns. There are four types of controls to obtain DB protection: access control, information flow control, inference control, and cryptographic control. Cryptographic control is considered the backbone of DB security; it secures the DB by encryption during storage and communications. Current cryptographic techniques are classified into two types: traditional classical cryptography using standard algorithms (DES, AES, IDEA, etc.) and chaos cryptography using continuous (Chua, Rossler, Lorenz, etc.) or discrete (Logistic, Henon, etc.) algorithms. The important characteristic of chaos is its extreme sensitivity to the initial conditions of the system. In this paper, DB-security systems based on chaotic algorithms are described. The Pseudo Random Number Generators (PRNGs) from the different chaotic algorithms are implemented using MATLAB, and their statistical properties are evaluated using NIST and other statistical test suites. These algorithms are then used to secure a conventional DB (plaintext), where the statistical properties of the ciphertext are also tested. To increase the complexity of the PRNGs and to pass all the NIST statistical tests, we propose two hybrid PRNGs: one based on two chaotic Logistic maps and another based on two chaotic Henon maps, where each chaotic algorithm runs side-by-side, starting from random independent initial conditions and parameters (the encryption keys). 
The resulting hybrid PRNGs passed the NIST statistical test suite. Keywords: algorithms and data structure, DB security, encryption, chaotic algorithms, MATLAB, NIST
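The chaotic-PRNG idea behind these generators can be sketched with a single logistic map: iterate x_{n+1} = r * x_n * (1 - x_n) in its chaotic regime (r = 4) and emit one bit per iterate by thresholding. The seed below is an illustrative key; the paper's hybrids run two such maps side by side from independent keys.

```python
# Minimal sketch of a logistic-map bit generator (not the paper's hybrid).

def logistic_bits(x0, n, r=4.0, burn_in=100):
    x = x0
    for _ in range(burn_in):           # discard the transient
        x = r * x * (1 - x)
    bits = []
    for _ in range(n):
        x = r * x * (1 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits

stream = logistic_bits(x0=0.123456, n=16)   # x0 acts as the (illustrative) key
print(stream)
```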
Procedia PDF Downloads 265
72 Class Size Effects on Reading Achievement in Europe: Evidence from Progress in International Reading Literacy Study
Authors: Ting Shen, Spyros Konstantopoulos
Abstract:
During the past three decades, class size effects have been a focal debate in education. The idea of having smaller classes is enormously popular among parents, teachers and policy makers. The rationale for its popularity is that smaller classrooms could provide a better learning environment, in which there would be more teacher-pupil interaction and more individualized instruction, and these early-stage benefits would also have a long-term positive effect. It is a common belief that reducing class size may result in increases in student achievement. However, the empirical evidence about class-size effects from experimental or quasi-experimental studies has been mixed overall. This study sheds more light on whether class size reduction impacts reading achievement in eight European countries: Bulgaria, Germany, Hungary, Italy, Lithuania, Romania, Slovakia, and Slovenia. We examine class size effects on reading achievement using national probability samples of fourth graders. All eight European countries participated in the Progress in International Reading Literacy Study (PIRLS) in 2001, 2006 and 2011. Methodologically, the quasi-experimental method of instrumental variables (IV) is utilized to facilitate causal inference of class size effects. Overall, the results indicate that class size effects on reading achievement are not significant across countries and years. However, class size effects are evident in Romania, where reducing class size increases reading achievement. In contrast, in Germany, increasing class size seems to increase reading achievement. In future work, it would be valuable to evaluate differential class size effects for minority or economically disadvantaged student groups, or for low- and high-achievers. Replication studies with different samples and in various settings would also be informative. 
Future research should continue examining class size effects in different age groups and countries using rich international databases. Keywords: class size, reading achievement, instrumental variables, PIRLS
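The IV logic can be sketched with one instrument: the IV slope is cov(z, y) / cov(z, x). The data below are simulated with an unobserved confounder, so ordinary least squares is biased while IV recovers the (made-up) true effect of -0.3; the instrument and coefficients are illustrative, not the PIRLS variables.

```python
import random

# Minimal sketch of instrumental-variables estimation on simulated data.
random.seed(7)
n = 20000
z = [random.gauss(0, 1) for _ in range(n)]       # instrument
u = [random.gauss(0, 1) for _ in range(n)]       # unobserved confounder
x = [0.8 * zi + 0.5 * ui + random.gauss(0, 1) for zi, ui in zip(z, u)]
y = [-0.3 * xi + 0.7 * ui + random.gauss(0, 1) for xi, ui in zip(x, u)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

beta_iv = cov(z, y) / cov(z, x)        # consistent for the causal effect
beta_ols = cov(x, y) / cov(x, x)       # biased upward by the confounder
print(round(beta_iv, 2), round(beta_ols, 2))
```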
Procedia PDF Downloads 290
71 Quality of Service Based Routing Algorithm for Real Time Applications in MANETs Using Ant Colony and Fuzzy Logic
Authors: Farahnaz Karami
Abstract:
Routing is an important, challenging task in mobile ad hoc networks due to node mobility, lack of central control, unstable links, and limited resources. Ant colony optimization has been found to be an attractive technique for routing in Mobile Ad Hoc Networks (MANETs). However, existing swarm intelligence based routing protocols find an optimal path by considering only one or two route selection metrics, without considering correlations among such parameters, making them unsuitable on their own for routing real-time applications. Fuzzy logic can combine multiple route selection parameters containing uncertain information or imprecise data, but it does not naturally provide multipath routing for load balancing. The objective of this paper is to design a routing algorithm using fuzzy logic and an ant colony that can solve some of the routing problems in mobile ad hoc networks, such as optimizing node energy consumption to increase network lifetime, reducing the link failure rate to increase packet delivery reliability, and providing load balancing to optimize available bandwidth. In the proposed algorithm, path information is gathered by ants and given to a fuzzy inference system. Based on the available path information and the parameters required for quality of service (QoS), the fuzzy cost of each path is calculated and the optimal paths are selected. NS2.35 simulation tools are used for simulation, and the results are compared and evaluated against the newest QoS-based algorithms in MANETs according to packet delivery ratio, end-to-end delay and routing overhead ratio criteria. The simulation results show significant improvement in the performance of these networks in terms of decreasing end-to-end delay and routing overhead ratio, and increasing packet delivery ratio. Keywords: mobile ad hoc networks, routing, quality of service, ant colony, fuzzy logic
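The interaction just described can be sketched minimally: ants report each candidate path's state, a stand-in "fuzzy cost" scores it (the paper derives this from a fuzzy inference system over QoS parameters), and the pheromone update reinforces low-cost paths while evaporation keeps the choice adaptive. Weights and values are illustrative.

```python
# Minimal sketch of fuzzy-cost-driven pheromone reinforcement.

def fuzzy_cost(delay, energy, load, w=(0.4, 0.3, 0.3)):
    """Placeholder for the FIS output: inputs in [0, 1], lower is better."""
    return w[0] * delay + w[1] * energy + w[2] * load

def update_pheromone(tau, costs, rho=0.1):
    """Evaporate every path, then deposit in proportion to 1 / cost."""
    return {p: (1 - rho) * tau[p] + 1.0 / costs[p] for p in tau}

tau = {"A": 1.0, "B": 1.0}                       # initial pheromone
costs = {"A": fuzzy_cost(0.2, 0.3, 0.1),         # fast, fresh, lightly loaded
         "B": fuzzy_cost(0.8, 0.7, 0.9)}         # slow, drained, congested
tau = update_pheromone(tau, costs)
print(max(tau, key=tau.get))                     # preferred next path: A
```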
Procedia PDF Downloads 64
70 Companies’ Internationalization: Multi-Criteria-Based Prioritization Using Fuzzy Logic
Authors: Jorge Anibal Restrepo Morales, Sonia Martín Gómez
Abstract:
A model based on a logical framework was developed to quantify SMEs' internationalization capacity. To do so, linguistic variables, such as human talent, infrastructure, innovation strategies, FTAs, marketing strategies, and finance, were integrated. It is argued that a company’s management of international markets depends on internal factors, especially the capabilities and resources available. This study considers internal factors the biggest business challenge because they force companies to develop an adequate set of capabilities. At this stage, importance and strategic relevance have to be defined in order to build competitive advantages. A fuzzy inference system is proposed to model the resources, skills, and capabilities that determine the success of internationalization. Data: 157 linguistic variables were used. These variables were defined by international trade entrepreneurs, experts, consultants, and researchers. Using expert judgment, the variables were condensed into 18 factors that explain SMEs’ export capacity. The proposed model is applied by means of a case study of the textile and clothing cluster in Medellin, Colombia. In the model implementation, a general index of 28.2 was obtained for internationalization capabilities. The result confirms that the sector’s current capabilities and resources are not sufficient for a successful integration into the international market. The model specifies the factors and variables which need to be worked on in order to improve export capability. In the case of textile companies, the lack of continuous recording of information stands out. Likewise, there are very few studies directed towards developing long-term plans, and there is little consistency in export criteria. This method emerges as an innovative management tool linked to internal organizational spheres and their different abilities.
Keywords: business strategy, exports, internationalization, fuzzy set methods
Procedia PDF Downloads 294
69 Phylogenetic Analysis Based On the Internal Transcribed Spacer-2 (ITS2) Sequences of Diadegma semiclausum (Hymenoptera: Ichneumonidae) Populations Reveals Significant Adaptive Evolution
Authors: Ebraheem Al-Jouri, Youssef Abu-Ahmad, Ramasamy Srinivasan
Abstract:
The parasitoid Diadegma semiclausum (Hymenoptera: Ichneumonidae) is one of the most effective exotic parasitoids of the diamondback moth (DBM), Plutella xylostella, in the lowland areas of Homs, Syria. Molecular evolution studies are useful tools to shed light on the molecular bases of insect geographical spread and adaptation to new hosts and environments, and for designing better control strategies. In this study, molecular evolution analysis was performed based on 42 nuclear internal transcribed spacer-2 (ITS2) sequences representing D. semiclausum and eight other Diadegma spp. from Syria and worldwide. Possible recombination events were identified with the RDP4 program. Four potential recombinants of the American D. insulare and D. fenestrale (Jeju) were detected. After detecting and removing recombinant sequences, the ratio of non-synonymous (dN) to synonymous (dS) substitutions per site (dN/dS = ɷ) was used to identify codon positions involved in adaptive processes. Bayesian techniques were applied to detect selective pressures at the codon level using five different approaches: fixed effects likelihood (FEL), internal fixed effects likelihood (IFEL), random effects likelihood (REL), the mixed effects model of evolution (MEME), and phylogenetic analysis by maximum likelihood (PAML). Among the 40 positively selected amino acids (aa) that differed significantly between clades of Diadegma species, three aa under positive selection were identified only in D. semiclausum. Additionally, all branches of the D. semiclausum tree were found to be under episodic diversifying selection (EDS) at p≤0.05. Our study provides evidence that both recombination and positive selection have contributed to the molecular diversity of Diadegma spp. and highlights the significant contribution of adaptive evolution to fitness in the DBM parasitoid D. semiclausum.
Keywords: Diadegma sp., DBM, ITS2, phylogeny, recombination, dN/dS, evolution, positive selection
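The dN/dS ratio compares two classes of codon substitution between aligned sequences. As a deliberately simplified illustration (a crude counting scheme, not the Bayesian site models FEL/IFEL/REL/MEME the study actually uses), one can classify each single-nucleotide codon difference as synonymous or nonsynonymous with the standard genetic code:

```python
# Standard genetic code, codon -> amino acid ('*' = stop)
BASES = "TCAG"
AMINO = ("FFLLSSSSYY**CC*W" "LLLLPPPPHHQQRRRR"
         "IIIMTTTTNNKKSSRR" "VVVVAAAADDEEGGGG")
CODON_TABLE = {a + b + c: AMINO[i * 16 + j * 4 + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def dn_ds(seq1, seq2):
    """Crude substitution counts between two aligned coding sequences:
    each single-nucleotide codon difference is classified as synonymous
    (same amino acid) or nonsynonymous, one position at a time and
    ignoring substitution pathway order for multi-hit codons."""
    syn = nonsyn = 0
    for i in range(0, min(len(seq1), len(seq2)) - 2, 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        for p in range(3):
            if c1[p] != c2[p]:
                mutated = c1[:p] + c2[p] + c1[p + 1:]
                if CODON_TABLE[mutated] == CODON_TABLE[c1]:
                    syn += 1
                else:
                    nonsyn += 1
    return nonsyn, syn
```

Site models go further by normalizing these counts per available site and estimating ɷ per codon position, which is what lets positively selected positions (ɷ > 1) be flagged.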
Procedia PDF Downloads 416
68 Exploring Affordable Care Practs in Nigeria’s Health Insurance Discourse
Authors: Emmanuel Chinaguh, Kehinde Adeosun
Abstract:
Nigerians die untimely, with 55.75 years of life expectancy, which is 17.45 below the world average of 73.2 (Worldometer, 2020). This is due, among other factors, to the country's limited access to high-quality healthcare. To increase access to good and affordable healthcare services, the National Health Insurance Authority (NHIA) Bill 2022 – which repealed the National Health Insurance Scheme Act 2004 – was passed into law. Applying Jacob Mey’s (2001) pragmatic act (pract) theory, this study explores how the NHIA seeks to actualise these healthcare goals by characterising the general situational prototypes, or pragmemes, and pragmatic acts in institutional communications. Data was sourced from the NHIA operational guidelines, which have 147 pages and four sections, and from posters shared on the NHIA Nigeria Twitter handle, which has 14,200 followers. Digital humanities tools, like AntConc and Voyant, were engaged in the data analysis for text encoding and data visualisation. This study identifies these discourse tokens in the data: advertisement and programmes, standards and accreditation, records and information, and offences and penalties. Advertisement and programmes pract facilitating, propagating, prospecting, advising and informing; standards and accreditation, and records and information pract stating, informing and instructing; and offences and penalties pract stating and sanctioning. These practs combine to advance the goals of affordable care and universal accessibility to quality healthcare services. The pragmatic acts were marked by these pragmatic tools: shared situational knowledge (SSK), relevance (REL), reference (REF) and inference (INF). This paper adds to the understanding of health insurance discourse in Nigeria as a mediated social practice that promotes the health of Nigerians.
Keywords: affordable care, NHIA, Nigeria’s health insurance discourse, pragmatic acts
Procedia PDF Downloads 85
67 Detecting Memory-Related Gene Modules in sc/snRNA-seq Data by Deep-Learning
Authors: Yong Chen
Abstract:
Understanding the detailed molecular mechanisms of memory formation in engram cells is one of the most fundamental questions in neuroscience. Recent single-cell RNA-seq (scRNA-seq) and single-nucleus RNA-seq (snRNA-seq) techniques have allowed us to explore the sparsely activated engram ensembles, enabling access to the molecular mechanisms that underlie experience-dependent memory formation and consolidation. However, the absence of specific and powerful computational methods to detect memory-related genes (modules) and their regulatory relationships in sc/snRNA-seq datasets has strictly limited the analysis of underlying mechanisms and memory coding principles in mammalian brains. Here, we present a deep-learning method named SCENTBOX to detect memory-related gene modules and causal regulatory relationships among them from sc/snRNA-seq datasets. SCENTBOX first constructs a co-differential expression gene network (CEGN) from case versus control sc/snRNA-seq datasets. It then detects the highly correlated modules of differentially expressed genes (DEGs) in the CEGN. Deep network embedding and attention-based convolutional neural network strategies are employed to precisely detect regulatory relationships among DEG genes in a module. We applied these methods to scRNA-seq datasets of TRAP;Ai14 mouse neurons with fear memory and detected not only known memory-related genes but also the modules and potential causal regulations. Our results provided novel regulations within an interesting module, including Arc, Bdnf, Creb, Dusp1, Rgs4, and Btg2. Overall, our methods provide a general computational tool for processing sc/snRNA-seq data from case versus control studies and a systematic investigation of fear-memory-related gene modules.
Keywords: sc/snRNA-seq, memory formation, deep learning, gene module, causal inference
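The module-detection step, grouping highly correlated differentially expressed genes, can be sketched with a plain correlation threshold and single-linkage grouping. This is only a minimal stand-in for SCENTBOX's network construction, not the method itself; the gene names and expression values below are toy data:

```python
import math

def pearson(x, y):
    """Sample Pearson correlation of two equal-length expression vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def correlation_modules(expr, threshold=0.8):
    """Greedy single-linkage grouping (union-find) of genes whose
    absolute pairwise expression correlation meets the threshold."""
    genes = list(expr)
    parent = {g: g for g in genes}
    def find(g):
        while parent[g] != g:
            parent[g] = parent[parent[g]]
            g = parent[g]
        return g
    for i, g1 in enumerate(genes):
        for g2 in genes[i + 1:]:
            if abs(pearson(expr[g1], expr[g2])) >= threshold:
                parent[find(g1)] = find(g2)
    modules = {}
    for g in genes:
        modules.setdefault(find(g), []).append(g)
    return sorted(sorted(m) for m in modules.values())

# Toy expression vectors across four cells (values invented)
expr = {"Arc":  [1, 2, 3, 4], "Bdnf": [2, 4, 6, 8],
        "Btg2": [5, 3, 1, -1], "Rgs4": [9, 1, 8, 2]}
```

Using the absolute correlation lets strongly anti-correlated genes join the same module, which matters for co-differential (case vs. control) signals.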
Procedia PDF Downloads 120
66 A Distributed Mobile Agent Based on Intrusion Detection System for MANET
Authors: Maad Kamal Al-Anni
Abstract:
This study concerns an algorithmic application of an Artificial Neural Network, specifically a Multilayer Perceptron (MLP), to the classification and clustering of Mobile Ad Hoc Network vulnerabilities. A mobile ad hoc network (MANET) is a set of ubiquitous, intelligent internetworking devices that can detect their environment using an autonomous system of mobile nodes connected via wireless links. Security is the most important subject in MANETs due to the easy penetration scenarios that occur in such an auto-configuring network. One of the powerful techniques used for inspecting network packets is the Intrusion Detection System (IDS); in this article, we show the effectiveness of artificial neural networks used as machine learning, along with a stochastic approach (information gain), to classify malicious behaviors in a simulated network with respect to different IDS techniques. The monitoring agent is responsible for the detection inference engine; the audit data is collected from the collecting agent by simulating node attacks, and the outputs are contrasted with the normal behaviors of the framework. Whenever there is any deviation from the ordinary behaviors, the monitoring agent considers the event an attack. In this article, we demonstrate the signature-based IDS approach in a MANET by implementing the back propagation algorithm over an ensemble-based Traffic Table (TT), so that the signatures of malicious behaviors or undesirable activities are significantly prognosticated and efficiently figured out. By tuning the parametric set-up of the back propagation algorithm, the experimental results empirically show its effectiveness, with a detection index of up to 98.6 percent.
Consequently, the empirical results in this article also include performance metrics, shown in Xgraph plots of measures such as Packet Delivery Ratio (PDR), Throughput (TP), and Average Delay (AD).
Keywords: Intrusion Detection System (IDS), Mobile Adhoc Networks (MANET), Back Propagation Algorithm (BPA), Neural Networks (NN)
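The information gain measure mentioned above ranks audit-data features by how much knowing them reduces the entropy of the attack/normal label. A minimal sketch, with hypothetical packet features standing in for the simulated MANET audit data:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label list."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG(feature) = H(labels) - sum_v p(v) * H(labels | feature = v)."""
    n = len(labels)
    by_value = {}
    for v, y in zip(feature_values, labels):
        by_value.setdefault(v, []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in by_value.values())
    return entropy(labels) - remainder

# Toy audit records: (packet_rate, route_changes) -> label
records = [("high", "many", "attack"), ("high", "few", "attack"),
           ("low", "many", "normal"), ("low", "few", "normal")]
labels = [r[2] for r in records]
ig_rate = information_gain([r[0] for r in records], labels)
ig_route = information_gain([r[1] for r in records], labels)
# Here packet_rate fully determines the label (IG = 1 bit), while
# route_changes is uninformative (IG = 0), so it would be pruned
# before training the back propagation classifier.
```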
Procedia PDF Downloads 194
65 Introducing Two Species of Parastagonospora (Phaeosphaeriaceae) on Grasses from Italy and Russia, Based on Morphology and Phylogeny
Authors: Ishani D. Goonasekara, Erio Camporesi, Timur Bulgakov, Rungtiwa Phookamsak, Kevin D. Hyde
Abstract:
Phaeosphaeriaceae comprises a large number of species occurring mainly on grasses and cereal crops as endophytes, saprobes and especially pathogens. Parastagonospora is an important genus in Phaeosphaeriaceae that includes pathogens causing leaf and glume blotch on cereal crops. Currently, there are fifteen Parastagonospora species described, including both pathogens and saprobes. In this study, one sexual morph species and one asexual morph species, occurring as saprobes on members of Poaceae, are introduced based on morphology and a combined molecular analysis of LSU, SSU, ITS, and RPB2 gene sequence data. The sexual morph species Parastagonospora elymi was isolated from a Russian sample of Elymus repens, a grass commonly known as couch grass, which is important for grazing animals, as a weed, and in traditional Austrian medicine. P. elymi is similar to the sexual morph of P. avenae in having cylindrical asci bearing 8 overlapping biseriate, fusiform ascospores, but can be distinguished by its subglobose to conical, wider ascomata. In addition, no sheath was observed surrounding the ascospores. The asexual morph species was isolated from a specimen from Italy on Dactylis glomerata, a common grass distributed in temperate regions. It is introduced as Parastagonospora macrouniseptata, a coelomycete, and bears a close resemblance to P. allouniseptata and P. uniseptata in having globose to subglobose, pycnidial conidiomata and hyaline, cylindrical, 1-septate conidia. However, the new species can be distinguished by its much larger conidiomata. In the phylogenetic analysis, which consisted of maximum likelihood and Bayesian analyses, P. elymi showed low bootstrap support but segregated well from other strains within the Parastagonospora clade. P. neoallouniseptata formed a sister clade with P. allouniseptata with high statistical support.
Keywords: dothideomycetes, multi-gene analysis, Poaceae, saprobes, taxonomy
Procedia PDF Downloads 116
64 Lexical Semantic Analysis to Support Ontology Modeling of Maintenance Activities – Case Study of Offshore Riser Integrity
Authors: Vahid Ebrahimipour
Abstract:
Word representation and the context meaning of text-based documents play an essential role in knowledge modeling. Business procedures written in natural language are meant to store technical and engineering information, management decisions, and operation experience during the production system life cycle. Context meaning representation is highly dependent upon word sense, lexical relativity, and semantic features of the argument. This paper proposes a method for lexical semantic analysis and context meaning representation of maintenance activity in a mass production system. Our approach constructs a straightforward lexical semantic analysis of the semantic and syntactic features of the context structure of maintenance reports, to facilitate the translation, interpretation, and conversion of human-readable text into a computer-readable representation with less heterogeneity and ambiguity. The methodology enables users to obtain a representation format that maximizes shareability and accessibility for multi-purpose usage. It provides a contextualized structure from which to obtain a generic context model that can be utilized during the system life cycle. First, it employs a co-occurrence-based clustering framework to recognize a group of highly frequent contextual features that correspond to a maintenance report text. Then the keywords are identified for syntactic and semantic extraction analysis. The analysis exercises causality-driven logic of keywords’ senses to divulge the structural and meaning dependency relationships between the words in a context. The output is a contextualized word representation of maintenance activity accommodating computer-based representation and inference using OWL/RDF.
Keywords: lexical semantic analysis, metadata modeling, contextual meaning extraction, ontology modeling, knowledge representation
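The co-occurrence-based clustering step can be illustrated by counting how often word pairs appear together in maintenance reports; frequent pairs become candidate contextual features. The report texts below are invented examples, not data from the study:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(reports):
    """Count how often a pair of distinct words appears in the same
    maintenance report. Pairs are stored in sorted order so that
    ('riser', 'seal') and ('seal', 'riser') are counted together."""
    pairs = Counter()
    for report in reports:
        words = sorted(set(report.lower().split()))
        pairs.update(combinations(words, 2))
    return pairs

# Invented maintenance report snippets
reports = ["Replace riser seal after pressure drop",
           "Inspect riser seal for corrosion",
           "Pressure drop detected near pump"]
pairs = cooccurrence_counts(reports)
```

A real pipeline would first tokenize and lemmatize the reports; the resulting high-count pairs seed the clusters from which keywords are drawn for the semantic/syntactic analysis.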
Procedia PDF Downloads 105
63 Use of SUDOKU Design to Assess the Implications of the Block Size and Testing Order on Efficiency and Precision of Dulce De Leche Preference Estimation
Authors: Jéssica Ferreira Rodrigues, Júlio Silvio De Sousa Bueno Filho, Vanessa Rios De Souza, Ana Carla Marques Pinheiro
Abstract:
This study aimed to evaluate the implications of block size and testing order on the efficiency and precision of preference estimation for Dulce de leche samples. Efficiency was defined as the inverse of the average variance of pairwise comparisons among treatments. Precision was defined as the inverse of the variance of treatment mean (or effect) estimates. The experiment was originally designed to test 16 treatments as a series of 8 Sudoku 16x16 designs, 4 randomized independently and 4 others in the reverse order, to yield balance in testing order. Linear mixed models were fitted to the whole experiment with 112 testers and all their grades, as well as to their partially balanced subgroups, namely: a) the experiment with the four initial EU; b) the experiment with EU 5 to 8; c) the experiment with EU 9 to 12; and d) the experiment with EU 13 to 16. To record responses, we used a nine-point hedonic scale. A mixed linear model analysis was assumed, with random tester and treatment effects and a fixed test order effect. Analysis with a cumulative random effects probit link model was very similar, with essentially no different conclusions, so for simplicity we present the results under the Gaussian assumption. The R-CRAN library lme4 and its function lmer (Fit Linear Mixed-Effects Models) was used for the mixed models, and the libraries Bayesthresh (default Gaussian threshold function) and ordinal, with the function clmm (Cumulative Link Mixed Model), were used to check the Bayesian analysis of threshold models and cumulative link probit models. It was noted that the number of samples tested in the same session can influence the acceptance level, underestimating acceptance. However, providing a large number of samples can help to improve sample discrimination.
Keywords: acceptance, block size, mixed linear model, testing order
Procedia PDF Downloads 321
62 Exploring the Applications of Neural Networks in the Adaptive Learning Environment
Authors: Baladitya Swaika, Rahul Khatry
Abstract:
Computer Adaptive Tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), which handles item selection and ability estimation using the statistical methods of maximum information selection/selection from the posterior and maximum-likelihood (ML)/maximum a posteriori (MAP) estimators, respectively. This study aims at combining both classical and Bayesian approaches to IRT to create a dataset which is then fed to a neural network that automates the process of ability estimation, and then comparing it to traditional CAT models designed using IRT. This study uses Python as the base coding language, pymc for statistical modelling of the IRT, and scikit-learn for the neural network implementations. On creating the model and comparing, it is found that the neural network based model performs 7-10% worse than the IRT model for score estimation. Although it performs poorly compared to the IRT model, the neural network model can be beneficially used in back-ends for reducing time complexity, as the IRT model would have to re-calculate the ability every time it gets a request, whereas the prediction from a neural network can be done in a single step by an existing trained regressor. This study also proposes a new kind of framework whereby the neural network model could incorporate feature sets other than the normal IRT feature set and use a neural network’s capacity for learning unknown functions to give rise to better CAT models. Categorical features, like test type, could be learnt and incorporated in IRT functions with the help of techniques like logistic regression, and used to learn functions expressed as models that may not be trivial to express via equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments.
This study gives a brief overview of how neural networks can be used in adaptive testing, not only by reducing time complexity but also by being able to incorporate newer and better datasets, which would eventually lead to higher quality testing.
Keywords: computer adaptive tests, item response theory, machine learning, neural networks
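The per-request IRT ability re-calculation that the neural network is meant to shortcut can be sketched as maximum-likelihood estimation under a 2PL model. The item parameters and the grid-search estimator below are illustrative assumptions, not the study's actual setup:

```python
import math

def p_correct(theta, a, b):
    """2PL IRT: probability of a correct response for ability theta,
    item discrimination a and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def ml_ability(responses, items, grid=None):
    """Maximum-likelihood ability estimate by grid search over theta.
    `responses` are 0/1 outcomes aligned with the (a, b) item tuples."""
    grid = grid or [t / 100 for t in range(-400, 401)]
    def loglik(theta):
        ll = 0.0
        for u, (a, b) in zip(responses, items):
            p = p_correct(theta, a, b)
            ll += math.log(p) if u else math.log(1 - p)
        return ll
    return max(grid, key=loglik)

# Hypothetical (a, b) parameters for three administered items
items = [(1.2, -1.0), (1.0, 0.0), (0.8, 1.0)]
theta_hat = ml_ability([1, 1, 0], items)
```

This is the computation a trained regressor replaces with one forward pass: the response pattern goes in, an ability estimate comes out, with no per-request optimization.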
Procedia PDF Downloads 175
61 Application of Host Factors as Biomarker in Early Diagnosis of Pulmonary Tuberculosis
Authors: Ambrish Tiwari, Sudhasini Panda, Archana Singh, Kalpana Luthra, S. K. Sharma
Abstract:
Introduction: On the basis of the available literature, we know that various host factors play a role in the outcome of tuberculosis (TB) infection by modulating innate immunity. One such factor is the inducible nitric oxide synthase (iNOS) enzyme, which helps in the production of nitric oxide (NO), an antimicrobial agent. Expression of iNOS is controlled by various host factors, among which is vitamin D along with its nuclear receptor, the vitamin D receptor (VDR). Vitamin D along with its receptor also produces cathelicidin (an antimicrobial agent). With this background, we attempted to investigate the levels of vitamin D and NO, along with their associated molecules, in tuberculosis patients and household contacts as compared to healthy controls, and to assess the implications of these findings for susceptibility to tuberculosis (TB). Study subjects and methods: 100 active TB patients, 75 household contacts, and 70 healthy controls were enrolled. VDR and iNOS mRNA levels were studied using real-time PCR. Serum VDR, cathelicidin, and iNOS levels were measured using ELISA. Serum vitamin D levels were measured using a chemiluminescence-based immunoassay. NO was measured using a colorimetry-based kit. Results: VDR and iNOS mRNA levels were found to be lower in the active TB group compared to household contacts and healthy controls (P=0.0001 and 0.005, respectively). The serum levels of vitamin D were also found to be lower in the active TB group as compared to healthy controls (P=0.001). Levels of cathelicidin and NO were higher in the patient group as compared to the other groups (P=0.01 and 0.5, respectively). However, the expression of VDR and iNOS and the levels of vitamin D were significantly (P < 0.05) higher in household contacts compared to both the active TB and healthy control groups.
Inference: Higher levels of vitamin D, along with VDR and iNOS expression, in household contacts as compared to patients suggest that vitamin D might have a protective role against TB, preventing activation of the disease. From our data, we can conclude that decreased vitamin D levels could be implicated in disease progression, and that cathelicidin and NO can be used as biomarkers for early diagnosis of pulmonary tuberculosis.
Keywords: vitamin D, VDR, iNOS, tuberculosis
Procedia PDF Downloads 303
60 Modelling Volatility Spillovers and Cross Hedging among Major Agricultural Commodity Futures
Authors: Roengchai Tansuchat, Woraphon Yamaka, Paravee Maneejuk
Abstract:
In the recent past, the global financial crisis, economic instability, and large fluctuations in agricultural commodity prices have led to increased concern about volatility transmission among them. The problem is further exacerbated by commodity volatility caused by other commodity price fluctuations, making decisions on hedging strategy both costly and ineffective. Thus, this paper analyzes the volatility spillover effects among major agricultural commodities, including corn, soybeans, wheat and rice, to help commodity suppliers hedge their portfolios and manage their risk and co-volatility. We provide a switching regime approach to analyzing the issue of volatility spillovers under different economic conditions, namely economic upturns and downturns. In particular, we investigate relationships and volatility transmissions between these commodities under different economic conditions. We propose a copula-based multivariate Markov switching GARCH model with two regimes that depend on economic conditions, and perform a simulation study to check the accuracy of our proposed model. In this study, the correlation term in the cross-hedge ratio is obtained from six copula families: two elliptical copulas (Gaussian and Student-t) and four Archimedean copulas (Clayton, Gumbel, Frank, and Joe). We use one-step maximum likelihood estimation techniques to estimate our models and compare the performance of these copulas using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). In the application study of agricultural commodities, the weekly data used cover the period from 4 January 2005 to 1 September 2016, comprising 612 observations. The empirical results indicate that the volatility spillover effects among cereal futures differ in response to different economic conditions.
In addition, the results on hedge effectiveness suggest optimal cross-hedge strategies under different economic conditions, especially economic upturns and downturns.
Keywords: agricultural commodity futures, cereal, cross-hedge, spillover effect, switching regime approach
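The cross-hedge ratio whose correlation term the copulas supply is, in its minimum-variance form, h* = ρ·σ_s/σ_f. The sketch below uses a plain sample correlation in place of the regime-dependent copula correlation the paper estimates, and the return series are toy numbers:

```python
import math

def stats(x):
    """Sample mean and standard deviation of a return series."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / (n - 1)
    return m, math.sqrt(var)

def cross_hedge_ratio(spot_returns, futures_returns):
    """Minimum-variance hedge ratio h* = rho * sigma_s / sigma_f,
    i.e. futures position per unit of spot exposure. Here rho is the
    plain sample correlation; in the paper it comes from the fitted
    copula in each Markov regime."""
    ms, ss = stats(spot_returns)
    mf, sf = stats(futures_returns)
    n = len(spot_returns)
    cov = sum((s - ms) * (f - mf)
              for s, f in zip(spot_returns, futures_returns)) / (n - 1)
    rho = cov / (ss * sf)
    return rho * ss / sf

# Toy weekly returns: hedging a rice position with corn futures
rice = [0.010, -0.004, 0.007, -0.012, 0.003]
corn = [0.008, -0.002, 0.009, -0.010, 0.001]
h = cross_hedge_ratio(rice, corn)
```

A regime-switching version would compute one h* per regime and weight them by the regime probabilities at each date.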
Procedia PDF Downloads 202
59 Collaboration-Based Islamic Financial Services: Case Study of Islamic Fintech in Indonesia
Authors: Erika Takidah, Salina Kassim
Abstract:
Digital transformation has accelerated in the new millennium. It is reshaping the financial services industry from a traditional system to financial technology. Moreover, the financial inclusion rate in Indonesia is less than 60%. An innovative model is needed to address this national problem. On the other hand, the Islamic financial services industry and financial technology are growing fast as new aspirations in economic development. Islamic banks, takaful, Islamic microfinance, Islamic financial technology, and Islamic social finance institutions could collaborate to raise the financial inclusion rate in Indonesia. The primary motive of this paper is to examine the strategy of collaboration-based Islamic financial services to enhance financial inclusion in Indonesia, particularly in facing the digital era. The fundamental findings concern the foundations and key ecosystem aspects involved in the development of collaboration-based Islamic financial services. Using the Interpretive Structural Model (ISM) approach, the core problems faced in the development of the models are found to be the lack of policy instruments guarding collaboration-based Islamic financial services with fintech work processes, and the availability of human resources for fintech. The core strategy or foundation needed in the framework of collaboration-based Islamic financial services is the ability to manage and analyze data in the big data era. As for the ecosystem or actors involved in the development of this model, the important actors are the government or regulator, educational institutions, and the existing industries (Islamic financial services). The outcome of the study indicates that a collaboration strategy for Islamic financial services institutions requires robust technology, the legal and regulatory commitment of the regulators and policymakers of Islamic financial institutions, and extensive public awareness of financial inclusion in Indonesia.
The study limited itself to financial inclusion, particularly in Islamic finance development in Indonesia. The study has implications for the concerned professional bodies, regulators, policymakers, stakeholders, and practitioners of Islamic financial service institutions.
Keywords: collaboration, financial inclusion, Islamic financial services, Islamic fintech
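The ISM approach used above builds a reachability matrix from a binary influence matrix by transitive closure, then ranks factors by driving power and dependence (the MICMAC step). A minimal sketch with three hypothetical factors; the influence relations are invented for illustration:

```python
def reachability_matrix(adj):
    """ISM step: transitive closure (Warshall's algorithm) of the
    initial binary influence matrix, with the diagonal set to 1
    (every factor reaches itself)."""
    n = len(adj)
    r = [row[:] for row in adj]
    for i in range(n):
        r[i][i] = 1
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

def driving_and_dependence(r):
    """MICMAC: driving power = row sums, dependence = column sums."""
    driving = [sum(row) for row in r]
    dependence = [sum(col) for col in zip(*r)]
    return driving, dependence

# Hypothetical factors: 0 = regulation, 1 = data capability,
# 2 = collaboration model; adj[i][j] = 1 means factor i drives factor j.
adj = [[0, 1, 0],
       [0, 0, 1],
       [0, 0, 0]]
R = reachability_matrix(adj)
```

Factors with high driving power and low dependence (here, regulation) sit at the base of the ISM hierarchy, which is why policy instruments emerge as the root-level problem.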
Procedia PDF Downloads 142
58 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification
Authors: Hung-Sheng Lin, Cheng-Hsuan Li
Abstract:
Over the past few years, kernel-based algorithms have been widely used to extend linear feature extraction methods such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE) to their nonlinear versions: kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function. Moreover, they have been applied to improve target detection and image classification for hyperspectral images. Double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and performs well in hyperspectral image classification. The DNP structure is an extension of the k-nearest neighbor technique. For each sample, there are two corresponding nearest proportions of samples: the self-class nearest proportion and the other-class nearest proportion. The term “nearest proportion” used here considers both local information and more global information. With these settings, the effect of the overlap between the sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high dimensional inference problems, particularly in small data-size situations. Hence, an improved estimator based on shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to having the advantages of DNP, KDNP surpasses DNP in the experimental results.
According to the experiments on real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, NWFE, and their kernel versions, KPCA, GDA, and KNWFE.
Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest feature extraction
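The kernel extension works because distances in the implicit feature space can be computed from kernel evaluations alone, so the nearest-proportion neighborhoods can be ranked without ever forming the mapped vectors. A minimal sketch (an RBF kernel with toy 2-D samples standing in for hyperspectral pixels, not the KDNP algorithm itself):

```python
import math

def rbf(x, y, gamma=1.0):
    """Gaussian RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

def kernel_distance(x, y, gamma=1.0):
    """Squared distance in the implicit feature space, obtained purely
    from kernel values: ||phi(x) - phi(y)||^2 = k(x,x) - 2k(x,y) + k(y,y)."""
    return rbf(x, x, gamma) - 2 * rbf(x, y, gamma) + rbf(y, y, gamma)

def nearest_proportion(x, samples, frac=0.5, gamma=1.0):
    """The nearest `frac` proportion of `samples` to x, ranked by the
    kernel-induced distance; this is the kind of neighborhood from
    which DNP forms its self-class and other-class local means."""
    ranked = sorted(samples, key=lambda s: kernel_distance(x, s, gamma))
    k = max(1, int(len(ranked) * frac))
    return ranked[:k]

# Toy self-class samples around two clusters
same_class = [(0.1, 0.1), (0.2, 0.0), (3.0, 3.0), (2.9, 3.1)]
near = nearest_proportion((0.0, 0.0), same_class, frac=0.5)
```

Averaging over a proportion of neighbors, rather than a fixed k, is what lets the structure blend local information with more global information.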
Procedia PDF Downloads 344
57 Machine Learning in Agriculture: A Brief Review
Authors: Aishi Kundu, Elhan Raza
Abstract:
"Necessity is the mother of invention" - the rapid increase in the global human population has directed the agricultural domain toward machine learning. The basic need of human beings is considered to be food, which can be satisfied through farming. Farming is one of the major revenue generators for the Indian economy. Agriculture is not only considered a source of employment but also fulfils humans’ basic needs. So, agriculture is considered to be a source of employment and a pillar of the economy in developing countries like India. This paper provides a brief review of the progress made in implementing machine learning in the agricultural sector. Accurate predictions are necessary at the right time to boost production and to aid the timely and systematic distribution of agricultural commodities, making their availability in the market faster and more effective. This paper includes a thorough analysis of various machine learning algorithms applied in different aspects of agriculture (crop management, soil management, water management, yield tracking, livestock management, etc.). Due to climate change, crop production is affected. Machine learning can analyse the changing patterns and come up with a suitable approach to minimize loss and maximize yield. Machine learning algorithms/models (regression, support vector machines, Bayesian models, artificial neural networks, decision trees, etc.) are used in smart agriculture to analyze and predict specific outcomes, which can be vital in increasing the productivity of the agricultural food industry. The paper vividly demonstrates agricultural work under machine learning applied to sensor data. Machine learning is an ongoing technology benefitting farmers by improving gains in agriculture and minimizing losses. This paper also discusses how irrigation and farming management systems evolve efficiently in real time.
Artificial Intelligence (AI) enabled programs are emerging with rich comprehension to support farmers through immense examination of data.
Keywords: machine learning, artificial intelligence, crop management, precision farming, smart farming, pre-harvesting, harvesting, post-harvesting
Procedia PDF Downloads 104