Search results for: Extended Park's vector approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15957

12507 Application of KL Divergence for Estimation of Each Gene's Metabolic Pathway

Authors: Shohei Maruyama, Yasuo Matsuyama, Sachiyo Aburatani

Abstract:

The development of methods to annotate unknown gene functions is an important task in bioinformatics. One approach to annotation is the identification of the metabolic pathway that a gene is involved in. Gene expression data have been utilized for this identification, since they reflect various intracellular phenomena. However, it has been difficult to estimate gene function with high accuracy, largely because gene expression is difficult to measure precisely: even when measured under the same conditions, expression levels usually vary between replicates. In this study, we propose a feature extraction method that focuses on this variability of gene expression in order to estimate a gene's metabolic pathway accurately. First, we estimated the distribution of each gene's expression from replicate data. Next, we calculated the similarity between all gene pairs using the KL divergence, a measure of similarity between distributions. Finally, we used the similarity vectors as feature vectors and trained a multiclass SVM to identify the genes' metabolic pathways. To evaluate the method, we applied it to budding yeast and trained the multiclass SVM to identify seven metabolic pathways. The resulting accuracy was higher than that obtained from the raw gene expression data. Thus, our feature extraction method combined with KL divergence is useful for identifying the metabolic pathway a gene belongs to.
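
A minimal sketch of this pipeline, assuming each gene's replicate expressions are summarized by a univariate Gaussian (the abstract does not specify the distribution family) and using a symmetrized closed-form KL divergence together with scikit-learn's SVC; the toy data and all names are illustrative stand-ins, not the authors' yeast dataset.

```python
import numpy as np
from sklearn.svm import SVC

def gaussian_kl(mu1, var1, mu2, var2):
    # Closed-form KL divergence between two univariate Gaussians.
    return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

# Toy replicate data: rows = genes, columns = replicate measurements.
rng = np.random.default_rng(0)
expr = rng.normal(size=(50, 8))          # 50 genes, 8 replicates each
labels = rng.integers(0, 7, size=50)     # 7 metabolic pathways (toy labels)

mu, var = expr.mean(axis=1), expr.var(axis=1) + 1e-9

# Feature vector of gene i = its (symmetrized) KL divergence to every gene.
n = len(mu)
features = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        features[i, j] = gaussian_kl(mu[i], var[i], mu[j], var[j]) + \
                         gaussian_kl(mu[j], var[j], mu[i], var[i])

clf = SVC(kernel="rbf").fit(features, labels)  # multiclass SVM (one-vs-one)
print(clf.predict(features[:5]))
```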

Keywords: metabolic pathways, gene expression data, microarray, Kullback–Leibler divergence, KL divergence, support vector machines, SVM, machine learning

Procedia PDF Downloads 393
12506 Logistic Regression Based Model for Predicting Students’ Academic Performance in Higher Institutions

Authors: Emmanuel Osaze Oshoiribhor, Adetokunbo MacGregor John-Otumu

Abstract:

In recent years, there has been a desire to forecast students' academic achievement prior to graduation, in order to help them improve their grades, particularly individuals with poor performance. The goal of this study is to employ supervised learning techniques to construct a predictive model of student academic achievement. Many academics have already constructed models that predict academic achievement from factors such as smoking, demography, culture, social media, parents' educational background, parents' finances, and family background, to name a few; those features and the models employed may not have correctly classified students in terms of their academic performance. Our model is built using a logistic regression classifier with basic features such as the previous semester's course score, class attendance, class participation, and the total number of course materials or resources the student covers per semester, in order to predict whether the student will perform well in related courses in the future. The model outperformed other classifiers such as Naive Bayes, support vector machine (SVM), decision tree, random forest, and AdaBoost, returning 96.7% accuracy. The model is available as a desktop application, allowing both instructors and students to benefit from user-friendly interfaces for predicting student academic achievement. As a result, it is recommended that both students and professors use this tool to better forecast outcomes.
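
A minimal sketch of such a classifier with scikit-learn, using the four features named in the abstract; the synthetic data and the label rule are invented stand-ins, not the authors' dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 500
# Columns: previous semester score, attendance rate, participation, materials covered.
X = np.column_stack([
    rng.uniform(0, 100, n),    # previous course score
    rng.uniform(0, 1, n),      # attendance rate
    rng.uniform(0, 1, n),      # class participation
    rng.integers(0, 30, n),    # course materials covered
])
# Synthetic label: "performs well" loosely driven by score and attendance.
y = ((0.6 * X[:, 0] / 100 + 0.4 * X[:, 1]) > 0.55).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```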

Keywords: artificial intelligence, ML, logistic regression, performance, prediction

Procedia PDF Downloads 88
12505 Advancing the Analysis of Physical Activity Behaviour in Diverse, Rapidly Evolving Populations: Using Unsupervised Machine Learning to Segment and Cluster Accelerometer Data

Authors: Christopher Thornton, Niina Kolehmainen, Kianoush Nazarpour

Abstract:

Background: Accelerometers are widely used to measure physical activity behaviour, including in children. The traditional method for processing acceleration data uses cut points, relying on calibration studies that relate the quantity of acceleration to energy expenditure. As these relationships do not generalise across diverse populations, they must be parametrised for each subpopulation, including different age groups, which is costly and makes studies across diverse populations difficult. A data-driven approach that allows physical activity intensity states to emerge from the data under study, without relying on parameters derived from external populations, offers a new perspective on this problem and potentially improved results. We evaluated the data-driven approach in a diverse population with a range of rapidly evolving physical and mental capabilities, namely very young children (9-38 months old), where this new approach may be particularly appropriate. Methods: We applied an unsupervised machine learning approach (a hidden semi-Markov model, HSMM) to segment and cluster the accelerometer data recorded from 275 children with a diverse range of physical and cognitive abilities. The HSMM was configured to identify a maximum of six physical activity intensity states, and the output of the model was the time spent by each child in each of the states. For comparison, we also processed the accelerometer data using published cut points with available thresholds for the population. This provided us with time estimates for each child's sedentary time (SED), light physical activity (LPA), and moderate-to-vigorous physical activity (MVPA). Data on the children's physical and cognitive abilities were collected using the Paediatric Evaluation of Disability Inventory (PEDI-CAT). Results: The HSMM identified two inactive states (INS, comparable to SED), two lightly active long-duration states (LAS, comparable to LPA), and two short-duration high-intensity states (HIS, comparable to MVPA). Overall, the children spent on average 237/392 minutes per day in INS/SED, 211/129 minutes per day in LAS/LPA, and 178/168 minutes per day in HIS/MVPA. We found that INS overlapped with 53% of SED, LAS overlapped with 37% of LPA, and HIS overlapped with 60% of MVPA. We also looked at the correlation between the time spent by a child in either HIS or MVPA and their physical and cognitive abilities. We found that HIS was more strongly correlated with physical mobility (R² = 0.50 for HIS vs. 0.28 for MVPA), cognitive ability (R² = 0.31 vs. 0.15), and age (R² = 0.15 vs. 0.09), indicating increased sensitivity to key attributes associated with a child's mobility. Conclusion: An unsupervised machine learning technique can segment and cluster accelerometer data according to the intensity of movement at a given time. It provides a potentially more sensitive, appropriate, and cost-effective approach to analysing physical activity behaviour in diverse populations, compared to the current cut points approach. This, in turn, supports research that is more inclusive across diverse populations.
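
A minimal sketch of the segmentation idea, using hmmlearn's GaussianHMM as a stand-in for the hidden semi-Markov model the authors used (a plain HMM lacks explicit state-duration modelling); the six states and the synthetic acceleration trace are illustrative only.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

rng = np.random.default_rng(2)
# Synthetic vector-magnitude acceleration trace for one child (1 Hz epochs).
segments = [rng.normal(m, 0.1, 300) for m in (0.1, 0.8, 0.3, 1.5, 0.1, 0.6)]
signal = np.concatenate(segments).reshape(-1, 1)

# Let up to six intensity states emerge from the data, as in the abstract.
model = GaussianHMM(n_components=6, covariance_type="diag", n_iter=100,
                    random_state=0).fit(signal)
states = model.predict(signal)

# Time spent in each state (epochs) -- the model output used in the study.
print(np.bincount(states, minlength=6))
```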

Keywords: physical activity, machine learning, under 5s, disability, accelerometer

Procedia PDF Downloads 197
12504 Offshore Power Transition Project

Authors: Kashmir Johal

Abstract:

Within the wider context of improving the whole-life effectiveness of gas and oil fields, we have been researching how to generate power locally at the wellhead. (Providing external power to a subsea wellhead can be prohibitively expensive and can render fields uneconomic; this has been an oil and gas industry challenge for many years.) We have been developing a possible approach to local power generation and have been conducting technical, environmental, and economic research to develop a viable design. We sought to create a workable design for a new type of power generation system that makes use of the differential pressure that can exist between the sea surface and a gas or oil reservoir. The challenge has not just been to design a system capable of generating power from this potential energy, but also to design it in such a way that it anticipates and deals with the wide range of technological, environmental, and chemical constraints faced in such environments. We believe this project shows the enormous opportunity in deriving clean, economic, zero-emission renewable energy from offshore sources. Since this technology is not currently available, a patent has been filed to protect its advancement.

Keywords: renewable, energy, power, offshore

Procedia PDF Downloads 60
12503 Assessing Proteomic Variations Due to Genetic Modification of Tomatoes Using Three Complementary Approaches

Authors: Hanaa A. S. Oraby, Amal A. M. Hassan, Mahmoud M. Sakr, Atef A. A. Haiba

Abstract:

Applying a profiling approach to the assessment of proteomic variations due to genetic modification of the Egyptian tomato cultivar "Edkawy", three complementary methods were used: amino acid analysis, gel electrophoresis, and gas chromatography coupled with mass spectrometry (GC/MS). The results of the present study show evidence of proteomic variations between the modified tomato and its non-modified counterpart. Amino acid concentrations and the protein patterns separated on 1D SDS-PAGE differed between the transformed tomato and its non-transformed counterpart. These detected differences are most likely derived from the process of transformation. The results also revealed that the efficiency of the GC/MS approach in identifying a mixture of unknown proteins is limited: GC/MS analysis was only able to identify a few protein molecules. Therefore, more advanced and specific technologies, such as MALDI-TOF-MS, are recommended.

Keywords: GMOs, unintended effects, proteomic variations, 1D SDS-PAGE, GC/MS

Procedia PDF Downloads 441
12502 The Effects of Garlic (Allium sativum) in the Diet on Some Serum Biochemical Parameters of Oscar Fish (Astronotus ocellatus)

Authors: Ali Saghaei, Negar Ghotbeddin, Ebrahim Rajabzadeh Ghatrami, Milad Maniat

Abstract:

Herbs are used as natural additives in fish diets to enhance efficiency and the safety of production systems. Garlic, owing to its structure and composition, plays a beneficial role in both human and animal nutrition. This study was conducted to evaluate the effect of different levels of garlic (Allium sativum) powder on some serum biochemical parameters of Oscar fish (Astronotus ocellatus). Fish were divided into groups fed diets containing garlic at four levels (5 g kg⁻¹, 10 g kg⁻¹, 20 g kg⁻¹, and 30 g kg⁻¹ of diet), plus a control group fed a diet without garlic. A total of 300 fish was used; triplicate groups of Oscar fish with an initial weight of 12.43±0.24 g were hand-fed to visual satiation in three meals per day. The experiment lasted two months. Total protein (TP), albumin (ALB), globulin (GLB), and the albumin/globulin (A/G) ratio were determined. Based on the results, no significant differences were seen between treatments and the control group during the experimental period for TP, ALB, GLB, or the A/G ratio (p > 0.05). However, the highest serum total protein and globulin levels were observed with the diet containing 10 g kg⁻¹ of garlic, and the highest albumin and A/G values with the diet containing 20 g kg⁻¹, though with no significant difference from the other treatments. The results of this study show that the addition of garlic (Allium sativum) to the fish diet can improve fish health.

Keywords: garlic (Allium sativum), serum, Oscar fish (Astronotus ocellatus), Iran

Procedia PDF Downloads 472
12501 Recognition of Noisy Words Using the Time Delay Neural Networks Approach

Authors: Khenfer-Koummich Fatima, Mesbahi Larbi, Hendel Fatiha

Abstract:

This paper presents a recognition system for isolated words, such as robot commands, carried out using Time Delay Neural Networks (TDNN). The aim is to teleoperate a robot for specific tasks such as "turn" and "close" in an industrial environment, taking into account the noise coming from the machines. The choice of TDNN is based on its generalization in terms of accuracy; moreover, it acts as a filter that passes certain desirable frequency characteristics of speech. The goal is to determine the parameters of this filter so as to make the system adaptable to the variability of the speech signal and, especially, to noise; to this end, the backpropagation technique was used in the learning phase. The approach was applied to commands pronounced in two languages separately: French and Arabic. The results, for two test sets of 300 spoken words each, are 87% and 97.6% in a neutral environment, and 77.67% and 92.67% when white Gaussian noise was added at an SNR of 35 dB.
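
A TDNN over speech features is essentially a stack of dilated 1D convolutions across time frames. The sketch below, in PyTorch, shows this structure for classifying a fixed-length window of MFCC frames into command words; the layer sizes, the 13 MFCC coefficients, and the 10-word vocabulary are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn

class TDNN(nn.Module):
    """Time Delay Neural Network: dilated 1D convolutions over time frames."""
    def __init__(self, n_feats=13, n_words=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_feats, 64, kernel_size=5, dilation=1), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, dilation=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, dilation=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over time -> one vector per utterance
        )
        self.out = nn.Linear(64, n_words)

    def forward(self, x):              # x: (batch, n_feats, n_frames)
        return self.out(self.net(x).squeeze(-1))

model = TDNN()
mfcc = torch.randn(8, 13, 100)         # batch of 8 utterances, 100 frames each
logits = model(mfcc)                   # (8, 10) scores, one per command word
print(logits.shape)
```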

Keywords: TDNN, neural networks, noise, speech recognition

Procedia PDF Downloads 281
12500 Simo-syl: A Meta-Phonological Intervention to Support Italian Pre-Schoolers’ Emergent Literacy Skills

Authors: Tamara Bastianello, Rachele Ferrari, Marinella Majorano

Abstract:

The adoption of the syllabic approach in preschool programmes could support and reinforce meta-phonological awareness and literacy skills in children, and the introduction of a meta-phonological intervention in preschool could facilitate the transition to primary school, especially for children with learning fragilities. In the present contribution, we investigate the efficacy of the "Simo-syl" intervention in enhancing emergent literacy skills in children (especially reading). Simo-syl is a 12-week multimedia programme developed to improve children's language and communication skills and later literacy development in preschool. During the intervention, Simo-syl, an invented character, leads children through a series of meta-phonological games. Forty-six Italian preschool children (the Simo-syl group) participated in the programme; seventeen preschool children (the control group) did not. Children in the two groups were between 4;10 and 5;9 years old. They were assessed twice on their vocabulary, morpho-syntactical, meta-phonological, phonological, and phono-articulatory skills: (1) at the beginning of the last year of preschool, through standardised paper-based assessment tools, and (2) one week after the intervention. All children in the Simo-syl group took part in the meta-phonological programme based on the syllabic approach. The intervention lasted 12 weeks (three activities per week; week 1: activities focused on syllable blending and spelling and a first approach to the written code; weeks 2-11: activities focused on syllable recognition; week 12: activities focused on vowel recognition). Only a subset of the children (Simo-syl group = 21, control group = 9) was tested again (post-test) one week after the intervention. Before the intervention programme started, the Simo-syl and control groups had similar meta-phonological, phonological, and lexical skills (all ps > .05). One week after the intervention, significant differences emerged between the two groups in their meta-phonological skills (syllable blending, p = .029; syllable spelling, p = .032), their vowel recognition ability (p = .032), and their word reading skills (p = .05). An ANOVA confirmed the effect of group membership on developmental growth for the word reading task (F(1,28) = 6.83, p = .014, ηp² = .196). Taking part in the Simo-syl intervention thus has a positive effect on the ability to read in preschool children.

Keywords: intervention programme, literacy skills, meta-phonological skills, syllabic approach

Procedia PDF Downloads 155
12499 A Micro-Electromechanical System (MEMS) Micro-Sensor Resonator Based on a UNO Microcontroller for Low Magnetic Field Detection

Authors: Waddah Abdelbagi Talha, Mohammed Abdullah Elmaleeh, John Ojur Dennis

Abstract:

This paper focuses on the simulation and implementation of a resonator micro-sensor for low magnetic field sensing, based on a U-shaped cantilever in a piezoresistive configuration, which operates on the Lorentz force phenomenon. The resonance frequency is an important parameter: the highest response and sensitivity in the frequency domain (frequency response) of any vibrating micro-electromechanical system (MEMS) device occur at it, and it is important for determining the direction of the detected magnetic field. The deflection of the cantilever is considered in the vibration mode at different frequencies in the range of 0 Hz to 7000 Hz, for the purpose of observing the frequency response. A simple electronic circuit based on polysilicon piezoresistors in a Wheatstone bridge configuration is used to transduce the response of the cantilever into electrical measurements at various voltages. An Arduino microcontroller program and the PROTEUS electronics software are used to analyze the output signals from the sensor. The highest output voltage amplitude, of about 4.7 mV, is observed at about 3 kHz in the frequency domain, indicating the highest sensitivity, which can be called the resonant sensitivity. Based on the resonant frequency value, the mode of vibration is determined (up-down vibration), and based on that, the vector of the magnetic field is also determined.
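
A small sketch of the frequency-sweep analysis described here: locating the resonance as the peak of the measured amplitude response. The second-order resonator model and the 3 kHz / quality-factor values are illustrative assumptions, not the device's actual parameters.

```python
import numpy as np

# Simulated amplitude response of a second-order resonator (f0 = 3 kHz, Q = 40).
f = np.linspace(1.0, 7000.0, 2000)          # swept drive frequency, Hz
f0, Q = 3000.0, 40.0
amplitude = 1.0 / np.sqrt((1 - (f / f0) ** 2) ** 2 + (f / (f0 * Q)) ** 2)
amplitude_mV = 4.7 * amplitude / amplitude.max()   # scale to the reported 4.7 mV peak

# Resonant frequency = frequency of maximum bridge output.
f_res = f[np.argmax(amplitude_mV)]
print(f"resonant frequency ~ {f_res:.0f} Hz, peak ~ {amplitude_mV.max():.1f} mV")
```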

Keywords: resonant frequency, sensitivity, Wheatstone bridge, UNO-microcontroller

Procedia PDF Downloads 109
12498 Real-Time Episodic Memory Construction for Optimal Action Selection in Cognitive Robotics

Authors: Deon de Jager, Yahya Zweiri, Dimitrios Makris

Abstract:

The three most important components in a cognitive architecture for cognitive robotics are memory representation, memory recall, and the action selection performed by the executive. In this paper, action selection, performed by the executive, is defined as a memory quantification and optimization process. The methodology describes the real-time construction of episodic memory through semantic memory optimization. The optimization is performed by set-based particle swarm optimization, using an adaptive entropy memory quantification approach for fitness evaluation. The performance of the approach is evaluated experimentally in simulation, where a UAV is tasked with the collection and delivery of a medical package. The experiments show that the UAV dynamically uses its episodic memory to autonomously control its velocity while successfully completing its mission.

Keywords: cognitive robotics, semantic memory, episodic memory, maximum entropy principle, particle swarm optimization

Procedia PDF Downloads 143
12497 Infodemic Detection on Social Media with a Multi-Dimensional Deep Learning Framework

Authors: Raymond Xu, Cindy Jingru Wang

Abstract:

Social media has become a globally connected and influential platform. Social media data, such as tweets, can help predict the spread of pandemics and provide individuals and healthcare providers with early warnings. Public psychological reactions and opinions on the progression of dominant topics on Twitter can be efficiently monitored by AI models. However, statistics show that as the coronavirus spreads, so does an infodemic of misinformation, driven by pandemic-related factors such as unemployment and lockdowns. Social media algorithms are often biased toward outrage, promoting content that people react to emotionally and are likely to engage with, which can influence users' attitudes and cause confusion. Social media is therefore a double-edged sword, and combating fake news and biased content has become an essential task. This research analyzes the variety of methods used for fake news detection, covering random forest, logistic regression, support vector machines, decision trees, naive Bayes, BoW, TF-IDF, LDA, CNN, RNN, LSTM, DeepFake, and hierarchical attention networks, and analyzes the performance of each. Based on these models' achievements and limitations, a multi-dimensional AI framework is proposed to achieve higher accuracy in infodemic detection, especially for pandemic-related news. The model is trained on contextual content, images, and news metadata.
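
As a point of reference for the text-based methods surveyed here, a minimal TF-IDF plus random forest baseline in scikit-learn might look as follows; the toy headlines and labels are invented for the demo, not taken from any dataset in the paper.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Toy corpus: 1 = misinformation, 0 = reliable (labels invented for the demo).
texts = [
    "miracle cure stops virus overnight, doctors shocked",
    "health agency reports weekly vaccination statistics",
    "secret lab leak confirmed by anonymous insider",
    "study published in peer-reviewed journal on mask efficacy",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(texts, labels)
print(model.predict(["insider reveals shocking overnight cure"]))
```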

Keywords: artificial intelligence, fake news detection, infodemic detection, image recognition, sentiment analysis

Procedia PDF Downloads 233
12496 Risk Assessment of Oil Spill Pollution by Integration of GNOME, ALOHA and GIS in Bandar Abbas Coast, Iran

Authors: Mehrnaz Farzingohar, Mehran Yasemi, Ahmad Savari

Abstract:

Oil products are imported and exported via the Rajaee tanker terminal. During loading and discharging, oil has in several cases been released at the berths, creating spills. The spills spread within a short time, seriously affecting the Rajaee port environment and even more distant areas. The trajectory and fate of the oil spills were investigated by modeling and classified into three risk levels based on the modeling results. First, GNOME (General NOAA Operational Modeling Environment) was applied to model the trajectory of the liquid oil. Second, the ALOHA (Areal Locations of Hazardous Atmospheres) air quality model was integrated to predict the path of the evaporated oil in the air. Based on the identified zones, the high-risk areas are marked by colored dots, whose densities were calculated and displayed on a map of the affected places. Wind and water circulation moved the pollution to the east of Rajaee Port, where it accumulated along about 12 km of coastline. Approximately 20 km of the northeast shore of Qeshm Island is covered by the three risk levels. Since the main wind direction is SSW, the pollution is pushed to the east; the highest-risk zones formed on the edges of crests, whereas low risk appeared in the concavities. This assessment helps management and emergency systems monitor the exposed places based on priority factors and find the best approaches to protect the environment.

Keywords: oil spill, modeling, pollution, risk assessment

Procedia PDF Downloads 377
12495 Impact Assessment of Lean Practices on Social Sustainability Indicators: An Approach Using ISM Method

Authors: Aline F. Marcon, Eduardo F. da Silva, Marina Bouzon

Abstract:

The impact of lean management on environmental sustainability is the research line that receives the most attention from academics; by contrast, the social dimension of sustainable development has so far received less. This paper aims to evaluate the impact of intra-plant lean manufacturing practices on social sustainability indicators extracted from the Global Reporting Initiative (GRI) parameters. The method is two-phased, combining an MCDM approach to uncover the most relevant practices regarding social performance with the Interpretive Structural Modeling (ISM) method to reveal the structural relationships among lean practices. Professionals from academia and industry answered the questionnaires. The results show that practices such as "Safety Improvement Programs", "Total Quality Management" and "Cross-functional Workforce" have the most positive influence on the set of GRI social indicators.
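
The core of ISM is mechanical: from a binary influence matrix, compute the transitive reachability matrix and peel off hierarchy levels. A minimal sketch, with a made-up 4-practice influence matrix standing in for the questionnaire results:

```python
import numpy as np

# Adjacency matrix: A[i][j] = 1 if practice i influences practice j (toy data).
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])

# Reachability matrix: transitive closure of (A + I) via Warshall's algorithm.
n = len(A)
R = (A + np.eye(n, dtype=int)) > 0
for k in range(n):
    R = R | (R[:, [k]] & R[[k], :])

# Level partitioning: an element belongs to the current level when its
# reachability set is contained in its antecedent set.
levels, remaining = [], set(range(n))
while remaining:
    level = {i for i in remaining
             if {j for j in remaining if R[i, j]} <=
                {j for j in remaining if R[j, i]}}
    levels.append(sorted(level))
    remaining -= level
print(levels)   # first-extracted level = top of the ISM digraph
```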

Keywords: indicators, ISM, lean, social, sustainability

Procedia PDF Downloads 136
12494 Development of Industry Sector Specific Factory Standards

Authors: Peter Burggräf, Moritz Krunke, Hanno Voet

Abstract:

Due to shortening product and technology lifecycles, many companies use standardization approaches in product development and factory planning to reduce costs and time to market. Unlike large companies, where modular systems are already widely used, small and medium-sized companies often show a much lower degree of standardization, due to lower scale effects and missing capacities for developing such standards. To overcome these challenges, the development of industry-sector-specific standards in cooperations or by third parties is an interesting approach. This paper analyzes which sectors, being mainly dominated by small or medium-sized companies, might be especially interesting for the development of factory standards, using the German industry as an example. For this purpose, a key-performance-indicator-based approach was developed, which is presented in detail together with its specific results for the German industry structure.

Keywords: factory planning, factory standards, industry sector specific standardization, production planning

Procedia PDF Downloads 385
12493 Effect of Grafting and Rain Shelter Technologies on Performance of Tomato (Lycopersicum esculentum Mill.)

Authors: Evy Latifah, Eli Korlina, Hanik Anggraeni, Kuntoro Boga, Joko Mariyono

Abstract:

During the rainy season, tomato plants are vulnerable to various diseases: foliar diseases that attack the leaves, such as late blight (Phytophthora infestans) and bacterial spot (Xanthomonas sp.), as well as diseases that attack the roots, such as Fusarium wilt and bacterial wilt. If not anticipated promptly, these diseases decrease the quality and quantity of crop yields and can even lead to crop failure. The aim of this research is to assess the production of grafted tomatoes, using the Timoty and CLN 3024 cultivars, under rain shelters during the rainy season in the lowlands. Data were analyzed using analysis of variance and tested further by Least Significant Difference (LSD) at the 5% level. The parameters measured were plant height (cm), stem diameter (cm), number of fruits, canopy extension, number of branches, number of productive branches, and number of stem segments. The results show that, from the beginning of growth to the end of the experiment, the non-grafted treatment under a rain shelter displayed the greatest plant height, followed by the largest canopy extension. Both grafted and non-grafted tomatoes grown under rain shelters produced the largest numbers of branches and of productive branches, with similar numbers of productive branches at the end of growth. The highest tomato production was obtained from grafted tomatoes grown under rain shelters.

Keywords: field trial, wet and dry season, production, diseases, rain shelter

Procedia PDF Downloads 215
12492 Design Optimization of Miniature Mechanical Drive Systems Using Tolerance Analysis Approach

Authors: Eric Mxolisi Mkhondo

Abstract:

Geometrical deviations and the interaction of mechanical parts influence the performance of miniature systems. These deviations tend to cause costly problems during assembly, due to imperfections of components that are invisible to the naked eye, and unsatisfactory performance during operation, due to deformation caused by environmental conditions. One of the effective tools to manage the deviations and interaction of parts in a system is tolerance analysis, a quantitative tool for predicting the tolerance variations defined during the design process. Traditional tolerance analysis assumes that the assembly is static and that deviations come from manufacturing discrepancies, overlooking the functionality of the whole system and the deformation of parts under environmental conditions. This paper presents an integrated tolerance analysis approach for a miniature system in operation. In this approach, a computer-aided design (CAD) model is developed from the system's specification. The CAD model is then used to specify the geometrical and dimensional tolerance limits (upper and lower) that vary the components' geometries and sizes while conforming to functional requirements. Worst-case tolerances are analyzed to determine the influence of dimensional changes due to operating temperatures. The method is used to evaluate the nominal condition and the worst-case conditions at the maximum and minimum dimensions of the assembled components. These three conditions are evaluated at specific operating temperatures (-40°C, -18°C, 4°C, 26°C, 48°C, and 70°C). A case study on the mechanism of a zoom lens system is used to illustrate the effectiveness of the methodology.
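
A back-of-the-envelope sketch of a worst-case stack-up including thermal expansion, for a chain of parts such as a lens barrel; the dimensions, tolerances, and expansion coefficients below are invented for illustration, not the case-study values.

```python
# Worst-case 1D tolerance stack with thermal expansion (illustrative values).
# Each part: (nominal length mm, +/- tolerance mm, CTE 1/degC).
parts = [
    (12.000, 0.010, 23e-6),   # aluminium spacer
    ( 4.500, 0.005,  7e-6),   # glass lens cell
    ( 8.250, 0.008, 23e-6),   # aluminium barrel section
]
T_REF = 20.0                   # reference temperature, degC

def stack(temp_c, worst="nominal"):
    total = 0.0
    for nominal, tol, cte in parts:
        length = nominal * (1.0 + cte * (temp_c - T_REF))  # thermal growth
        if worst == "max":
            length += tol
        elif worst == "min":
            length -= tol
        total += length
    return total

for t in (-40, -18, 4, 26, 48, 70):
    print(f"{t:+4d} degC  nominal={stack(t):.4f}  "
          f"min={stack(t, 'min'):.4f}  max={stack(t, 'max'):.4f} mm")
```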

Keywords: geometric dimensioning, tolerance analysis, worst-case analysis, zoom lens mechanism

Procedia PDF Downloads 158
12491 Clustered Regularly Interspaced Short Palindromic Repeats Interference (CRISPRi): An Approach to Inhibit Microbial Biofilm

Authors: Azna Zuberi

Abstract:

Biofilm is a sessile bacterial accretion in which bacteria adopt physiological and morphological behavior different from their planktonic form. It is the root cause of about 80% of microbial infections in humans; among these, E. coli biofilms are most prevalent in nosocomial infections associated with medical devices. The objective of this study was to inhibit biofilm formation by targeting the luxS gene, which is involved in quorum sensing, using CRISPRi. luxS encodes a synthase involved in the synthesis of autoinducer-2 (AI-2), which in turn guides the initial stage of biofilm formation. To implement the CRISPRi system, we synthesized sgRNA complementary to the target gene sequence and co-expressed it with dCas9. Suppression of luxS was confirmed through qRT-PCR. The effect of the luxS knockdown on biofilm inhibition was studied through crystal violet assay, XTT reduction assay, and scanning electron microscopy. We conclude that the CRISPRi system could be a potential strategy to inhibit bacterial biofilms through a mechanism-based approach.

Keywords: biofilm, CRISPRi, luxS, microbial

Procedia PDF Downloads 173
12490 Community Forest Management Practice in Nepal: Public Understanding of Forest Benefit

Authors: Chandralal Shrestha

Abstract:

In developing countries like Nepal, the community-based forest management approach has often been glorified as one of the best forest management alternatives for maximizing forest benefits. Though the approach has succeeded in constructing local-level institutions and conserving forest biodiversity, the question of how local communities perceive forest benefits has remained largely silent among researchers and policy makers. This paper aims to explore the understanding of forest benefits from the perspective of the local communities that use the forests, in terms of institutional stability, equity and livelihood opportunity, and ecological stability. The paper reveals that local communities have a mixed understanding of forest benefits. The institutional and ecological activities carried out by local communities indicate a good understanding of forest benefits. However, inequality in sharing forest benefits, a low pricing strategy with negative consequences for the valuation of forest products, and limited livelihood opportunities indicate a poor understanding.

Keywords: community based forest management, forest benefits, lowland, Nepal

Procedia PDF Downloads 302
12489 An Algorithm of Set-Based Particle Swarm Optimization with Status Memory for Traveling Salesman Problem

Authors: Takahiro Hino, Michiharu Maeda

Abstract:

Particle swarm optimization (PSO) is an optimization approach modelled on the social behavior of bird flocking and fish schooling. PSO works in continuous space and can solve continuous optimization problems with high quality. Set-based particle swarm optimization (SPSO) functions in discrete space by using sets; it can solve combinatorial optimization problems with high quality and has been successfully applied to large-scale problems. In this paper, we present an algorithm of SPSO with status memory (SPSOSM), which decides the next position based on the previous position, for solving the traveling salesman problem (TSP). To show the effectiveness of our approach, we compare SPSOSM on TSP instances with the existing algorithms.
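
For readers unfamiliar with discrete PSO, the sketch below adapts the PSO update to permutations using swap sequences as "velocities" (a common discretization, not the authors' SPSOSM algorithm); the random city coordinates and parameters are invented for illustration.

```python
import random
random.seed(0)

cities = [(random.random(), random.random()) for _ in range(15)]

def tour_length(tour):
    return sum(((cities[tour[i]][0] - cities[tour[i - 1]][0]) ** 2 +
                (cities[tour[i]][1] - cities[tour[i - 1]][1]) ** 2) ** 0.5
               for i in range(len(tour)))

def swaps_toward(current, target):
    """Swap sequence that transforms `current` into `target`."""
    cur, swaps = list(current), []
    for i in range(len(cur)):
        if cur[i] != target[i]:
            j = cur.index(target[i])
            cur[i], cur[j] = cur[j], cur[i]
            swaps.append((i, j))
    return swaps

n_particles, n_iter = 20, 200
swarm = [random.sample(range(len(cities)), len(cities)) for _ in range(n_particles)]
pbest = [p[:] for p in swarm]
gbest = min(pbest, key=tour_length)

for _ in range(n_iter):
    for k, pos in enumerate(swarm):
        # Probabilistically apply swaps pulling the particle toward pbest and gbest.
        for (i, j) in swaps_toward(pos, pbest[k]) + swaps_toward(pos, gbest):
            if random.random() < 0.5:
                pos[i], pos[j] = pos[j], pos[i]
        if tour_length(pos) < tour_length(pbest[k]):
            pbest[k] = pos[:]
    gbest = min(pbest, key=tour_length)

print(f"best tour length: {tour_length(gbest):.3f}")
```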

Keywords: combinatorial optimization problems, particle swarm optimization, set-based particle swarm optimization, traveling salesman problem

Procedia PDF Downloads 538
12488 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists amongst accuracy, computing resources, and the explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
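
A minimal sketch of the k-mer feature extraction underlying this approach: count all overlapping k-mers in each sequence and assemble a count matrix for a downstream classifier. The sequences are toy stand-ins, and k = 4 is used for brevity instead of the study's k = 10.

```python
from collections import Counter
from itertools import product

def kmer_counts(seq, k):
    """Count all overlapping k-mers in a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

K = 4  # the study found k = 10 best; 4 keeps this demo small
sequences = ["ATGCGTACGTTAGC", "ATGCGTACGTAGCA", "TTGACGGATCCGTA"]  # toy genomes

# Fixed vocabulary: all 4^K possible k-mers, so every row has the same columns.
vocab = ["".join(p) for p in product("ACGT", repeat=K)]
matrix = [[kmer_counts(s, K).get(kmer, 0) for kmer in vocab] for s in sequences]

print(len(vocab), "features per genome; row 0 nonzeros:",
      sum(1 for v in matrix[0] if v))
```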

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 157
12486 Precision Pest Management by the Use of Pheromone Traps and Forecasting Module in Mobile App

Authors: Muhammad Saad Aslam

Abstract:

In 2021, our organization launched our proprietary mobile app, the Farm Intelligence platform, an industry-first precision agriculture solution, in Pakistan. It was piloted at 47 locations (spanning around 1,200 hectares of land), addressing growers' pain points by bringing the benefits of precision agriculture to their doorsteps. This year, we have extended its reach by more than ten times (to nearly 130,000 hectares of land) across almost 600 locations in the country. The project team selected highly infested areas to set up pheromone traps, which then enabled the sales team to initiate evidence-based conversations with the grower community about preventive crop protection products, including pesticides and insecticides. Mega farmer meetings, field visits, and demonstration plots, coupled with extensive marketing activities, were set up to engage the farmer community. With the app's real-time pest monitoring (using heat maps and infestation prediction through predictive analytics), we have equipped our growers with on-the-spot insights that help them optimize pesticide applications. Heat maps allow growers to identify infestation hot spots to fine-tune pesticide delivery, while predictive analytics enable preventive application of pesticides before the situation escalates. Ultimately, they empower growers to keep their crops safe for a healthy harvest.

Keywords: precision pest management, precision agriculture, real time pest tracking, pest forecasting

Procedia PDF Downloads 81
12485 Dynamic Simulation of IC Engine Bearings for Fault Detection and Wear Prediction

Authors: M. D. Haneef, R. B. Randall, Z. Peng

Abstract:

Journal bearings used in IC engines are prone to premature failure and are likely to fail earlier than their rated life due to highly impulsive and unstable operating conditions and frequent starts/stops. Vibration signature extraction and wear debris analysis techniques are prevalent in industry for condition monitoring of rotating machinery. However, both techniques involve a great deal of technical expertise, time, and cost. Limited literature is available on the application of these techniques to fault detection in reciprocating machinery, due to the complex nature of the impact forces, which confounds the extraction of fault signals for vibration-based analysis and wear prediction. This work is an extension of a previous study, in which an engine simulation model was developed in a MATLAB/SIMULINK program, with the engine parameters used in the simulation obtained experimentally from a Toyota 3SFE 2.0-litre petrol engine. Simulated hydrodynamic bearing forces were used to estimate vibration signals, and envelope analysis was carried out to analyze the effect of speed, load, and clearance on the vibration response. Three loads (50/80/110 N-m), three speeds (1500/2000/3000 rpm), and three clearances (normal, 2 times, and 4 times the normal clearance) were simulated to examine the effect of wear on bearing forces. The magnitude of the squared envelope of the generated vibration signals was not affected by load but rose significantly with increasing speed and clearance, indicating the likelihood of augmented wear. In the present study, the simulation model was extended further to investigate the bearing wear behavior resulting from different operating conditions, to complement the vibration analysis. In the current simulation, the dynamics of the engine were established first, based on which the hydrodynamic journal bearing forces were evaluated by numerical solution of the Reynolds equation. The essential outputs of interest in this study, critical for determining wear rates, are the tangential velocity and the oil film thickness between the journal and the bearing sleeve, which, if not maintained appropriately, have a detrimental effect on bearing performance. Archard's wear prediction model was used in the simulation to calculate the wear rate of the bearings with specific location information, as all determinative parameters were obtained with reference to crank rotation. The oil film thickness obtained from the model was used as a criterion to determine whether the lubrication is sufficient to prevent contact between the journal and the bearing, which would cause accelerated wear; a limiting value of 1 µm was used as the minimum oil film thickness needed to prevent contact. The increase in wear rate with growing severity of operating conditions is analogous and comparable to the rise in amplitude of the squared envelope of the aforementioned vibration signals. Thus, on the one hand, the developed model demonstrates its capability to explain wear behavior, and on the other hand, it helps to establish a correlation between wear-based and vibration-based analysis. The model therefore provides a cost-effective and quick approach to predicting impending wear in IC engine bearings under various operating conditions.
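
The wear step of such a simulation reduces to Archard's law, V = k·F·s/H, accumulated over crank rotation wherever the oil film is thinner than the limiting value. A toy sketch with invented bearing parameters (not the Toyota 3SFE values):

```python
import numpy as np

# Archard's law: wear volume V = k * F * s / H (applied only on metal contact).
K_WEAR = 1e-7        # dimensionless wear coefficient (assumed)
HARDNESS = 1.5e9     # bearing surface hardness, Pa (assumed)
H_LIMIT = 1e-6       # minimum oil film thickness before contact, m (1 um)

theta = np.linspace(0, 4 * np.pi, 1440)             # two crank revolutions
force = 4000 + 3000 * np.cos(theta)                 # toy bearing load, N
film = 2e-6 - 1.3e-6 * np.maximum(np.cos(theta), 0) # toy film thickness, m
slide = np.full_like(theta, 2.5e-5)                 # sliding distance per step, m

contact = film < H_LIMIT                            # lubrication breakdown
wear_volume = np.sum(K_WEAR * force[contact] * slide[contact] / HARDNESS)
print(f"contact during {contact.mean():.0%} of cycle, "
      f"wear volume = {wear_volume:.3e} m^3")
```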

Keywords: condition monitoring, IC engine, journal bearings, vibration analysis, wear prediction

Procedia PDF Downloads 302
12484 Looking for a Connection between Oceanic Regions with Trends in Evaporation and Continental Ones with Trends in Precipitation through a Lagrangian Approach

Authors: Raquel Nieto, Marta Vázquez, Anita Drumond, Luis Gimeno

Abstract:

One of the hot spots of climate change is the increase in ocean evaporation. The best estimate of evaporation, the OAFlux dataset, shows strong increasing trends in evaporation from the oceans since 1978, with peaks during the hemispheric winter, strongest along the paths of the global western boundary currents and in inland seas. The transport of moisture from oceanic sources to the continents is the connection between evaporation from the ocean and precipitation over the continents. A key question is whether evaporative source regions over the oceans where trends have occurred in the last decades can be related to their sinks over the continents, to check whether there have also been trends in precipitation amount or its characteristics. A Lagrangian approach based on FLEXPART and ERA-Interim data is used to establish this connection. The analyzed period is 1980 to 2012. Results show that there is no general pattern, but significant agreement was found in important areas of climate interest.

Keywords: ocean evaporation, Lagrangian approaches, continental precipitation, Europe

Procedia PDF Downloads 247
12483 Constructivism and Situational Analysis as Background for Researching Complex Phenomena: Example of Inclusion

Authors: Radim Sip, Denisa Denglerova

Abstract:

It is impossible to capture complex phenomena, such as inclusion, with reductionism. The most common form of reductionism is the objectivist approach, in which processes and relationships are reduced to entities and clearly outlined phases, with a consequent search for relationships between them. Constructivism as a paradigm and situational analysis as a methodological research portfolio represent a way to avoid the dominant objectivist approach. They work with a situation, i.e., with the essential blending of actors and their environment: primary transactions take place between actors and their surroundings. Researchers create constructs based on their need to solve a problem. Concepts therefore do not describe reality, but rather a complex of real needs in relation to the available options for meeting such needs. Examining a complex problem requires corresponding methodological tools and an appropriate overall research design. Using original research on inclusion in the Czech Republic as an example, this contribution demonstrates that inclusion is not a substance easily described, but rather a relationship field changing its forms in response to its actors' behaviour and current circumstances. Inclusion consists of a dynamic relationship between an ideal, real circumstances, and ways to achieve that ideal under the given circumstances. Such achievement takes many shapes and thus cannot be captured by a description of objects; it can be expressed in relationships within a situation defined by time and space. Situational analysis offers tools to examine such phenomena. It understands a situation as a complex of dynamically changing aspects and prefers relationships and positions in the given situation over clear and final definitions of actors, entities, etc. Situational analysis assumes the creation of constructs as tools for solving the problem at hand. It emphasizes the meanings that arise in the process of coordinating human actions, and the discourses through which these meanings are negotiated. Finally, it offers "cartographic tools" (situational maps, social worlds/arenas maps, positional maps) that are able to capture complexity in other than linear-analytical ways. This approach allows inclusion to be described as a complex of phenomena occurring with certain historical preferences, a complex that can be overlooked in a more traditional analysis.

Keywords: constructivism, situational analysis, objective realism, reductionism, inclusion

Procedia PDF Downloads 140
12482 Characteristics of Tremella fuciformis and Annulohypoxylon stygium for Optimal Cultivation Conditions

Authors: Eun-Ji Lee, Hye-Sung Park, Chan-Jung Lee, Won-Sik Kong

Abstract:

We analyzed the DNA sequence of the ITS (internal transcribed spacer) region of the 18S ribosomal gene and compared it with the gene sequences of T. fuciformis and Hypoxylon sp. in the BLAST database. The sequences of the collected T. fuciformis and Hypoxylon sp. show over 99% homology with the corresponding T. fuciformis and Hypoxylon sp. sequences in the BLAST database. To select the optimal medium for T. fuciformis, five media were tested: Potato Dextrose Agar (PDA), Mushroom Complete Medium (MCM), Malt Extract Agar (MEA), Yeast extract (YM), and Compost Extract Dextrose Agar (CDA). T. fuciformis showed the best growth on PDA, and Hypoxylon sp. on MCM. To investigate the optimum pH and temperature, the pH range was set to pH 4 to pH 8 and the temperature range to 15°C to 35°C (at 5°C intervals). The optimum culture conditions for T. fuciformis growth were pH 5 at 25°C, and for Hypoxylon sp., pH 6 at 25°C. To identify the most suitable carbon source, we tested fructose, galactose, saccharose, soluble starch, inositol, glycerol, xylose, dextrose, lactose, dextrin, Na-CMC, adonitol, mannitol, mannose, maltose, raffinose, cellobiose, ethanol, salicin, glucose, and arabinose. The optimum carbon source was xylose for T. fuciformis and arabinose for Hypoxylon sp. Using a column test, we identified sawdust substrates suitable for T. fuciformis, since the composition of the sawdust affects the growth of T. fuciformis fruiting bodies. The substrates tested were oak, pine, poplar, and birch sawdust, cottonseed meal, and cottonseed hull. In artificial cultivation of T. fuciformis on sawdust medium, T. fuciformis and Hypoxylon sp. showed fast mycelial growth on a mixture of oak sawdust, cottonseed hull, and wheat bran.

Keywords: cultivation, optimal condition, tremella fuciformis, nutritional source

Procedia PDF Downloads 198
12481 Research on the Conservation Strategy of Territorial Landscape Based on Characteristics: The Case of Fujian, China

Authors: Tingting Huang, Sha Li, Geoffrey Griffiths, Martin Lukac, Jianning Zhu

Abstract:

Territorial landscapes have experienced a gradual loss of their typical characteristics during long-term human activities. In order to protect the integrity of regional landscapes, it is necessary to characterize, evaluate, and protect them in a graded manner. The study takes Fujian, China, as an example and classifies the landscape characters of the site at the regional, middle, and detailed scales. A multi-scale approach combining parametric and holistic methods is used to classify and partition the landscape character types (LCTs) and landscape character areas (LCAs) at different scales, and a multi-element landscape assessment approach is adopted to explore conservation strategies for landscape character. First, multiple fields and elements of geography, nature, and the humanities were selected as the basis of assessment, according to scale. Second, the study takes a parametric approach to the classification and partitioning of landscape character, applying Principal Component Analysis and two-stage cluster analysis (K-means and GMM) in MATLAB to obtain the LCTs, combined with the Canny edge detection algorithm to obtain landscape character contours, and corrects the LCTs and LCAs through field survey and manual identification. Finally, the study adopts the Landscape Sensitivity Assessment method to perform landscape character conservation analysis and formulates five strategies for the different LCAs: conservation, enhancement, restoration, creation, and combination. This multi-scale identification approach can efficiently integrate multiple types of landscape character elements, reduce the difficulty of broad-scale operations in the process of landscape character conservation, and provide a basis for landscape character conservation strategies. Based on the natural background and the restoration of regional characteristics, the results of the landscape character assessment are scientific and objective and can provide a strong reference for regional- and national-scale territorial spatial planning.
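
The parametric step described here (PCA followed by two-stage K-means/GMM clustering) was done in MATLAB; an equivalent sketch in Python with scikit-learn, on random stand-ins for the per-cell landscape attributes, might look like this:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
cells = rng.normal(size=(1000, 12))   # 1000 grid cells x 12 landscape attributes

# Stage 0: standardize and reduce dimensionality with PCA.
scores = PCA(n_components=4).fit_transform(StandardScaler().fit_transform(cells))

# Stage 1: K-means provides an initial coarse partition.
coarse = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(scores)

# Stage 2: a GMM seeded by the K-means cluster centres refines the partition.
means = np.array([scores[coarse == k].mean(axis=0) for k in range(8)])
gmm = GaussianMixture(n_components=8, means_init=means, random_state=0)
lct = gmm.fit_predict(scores)          # landscape character type per cell

print(np.bincount(lct))
```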

Keywords: parameterization, multi-scale, landscape character identification, landscape character assessment

Procedia PDF Downloads 82
12480 Virtualization of Biomass Colonization: Potential of Application in Precision Medicine

Authors: Maria Valeria De Bonis, Gianpaolo Ruocco

Abstract:

Nowadays, computational modeling is paving new design and verification ways in a number of industrial sectors. The technology is ripe to challenge some cases in the bioengineering and medicine frameworks: for example, given the strategic and ethical importance of oncology research, efforts should be made to yield new and powerful resources for tumor knowledge and understanding. With these driving motivations, we approach this gigantic problem by using standard engineering tools, such as the mathematics behind biomass transfer. We present here some bacterial colonization studies in complex structures. As strong analogies hold with some forms of tumor proliferation, we extend our study to a benchmark case of a solid tumor. By means of a commercial software package, we model biomass and energy evolution in arbitrary media. The approach will be useful for casting virtualization cases of cancer growth in human organs, while augmented reality tools will be used to provide realistic aid to informed decisions in treatment and surgery.
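
As a flavor of the biomass-transfer mathematics invoked here, a one-dimensional Fisher-KPP reaction-diffusion model (a standard textbook model for proliferating biomass fronts, not the authors' commercial solver) can be integrated with a few lines of finite differences; all parameter values are illustrative.

```python
import numpy as np

# Fisher-KPP: du/dt = D * d2u/dx2 + r * u * (1 - u), u = normalized biomass.
D, r = 1e-3, 1.0           # diffusivity and growth rate (illustrative)
nx, dx, dt = 200, 0.01, 1e-3
u = np.zeros(nx)
u[:10] = 1.0                # initial colony seeded at the left boundary

for _ in range(5000):       # explicit Euler in time, central differences in space
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    lap[0] = lap[-1] = 0.0                 # crude no-flux boundaries
    u += dt * (D * lap + r * u * (1 - u))

front = np.argmax(u < 0.5)                 # position of the invading front
print(f"front has advanced to x = {front * dx:.2f} after t = 5.0")
```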

Keywords: bacteria, simulation, tumor, precision medicine

Procedia PDF Downloads 328
12479 Learning Dynamic Representations of Nodes in Temporally Variant Graphs

Authors: Sandra Mitrovic, Gaurav Singh

Abstract:

In many industries, including telecommunications, churn prediction has been a topic of active research. A lot of attention has been devoted to devising the most informative features, and this area of research has gained even more focus with the spread of (social) network analytics. Call detail records (CDRs) have been used to construct customer networks and extract potentially useful features. However, to the best of our knowledge, no study including network features has yet proposed a generic way of representing network information; instead, ad-hoc and dataset-dependent solutions have been suggested. In this work, we build upon a recently presented method (node2vec) to obtain representations for nodes in an observed network. The proposed approach is generic and applicable to any network and domain. Unlike node2vec, which assumes a static network, we consider a dynamic, time-evolving network. To account for this, we propose an approach that constructs the feature representation of each node by generating its node2vec representations at different timestamps, concatenating them, and finally compressing them using an auto-encoder-like method in order to retain reasonably long and informative feature vectors. We test the proposed method on a churn prediction task in the telco domain. To predict churners at timestamp ts+1, we construct training and testing datasets consisting of feature vectors from the time intervals [t1, ts-1] and [t2, ts], respectively, and use traditional supervised classification models such as SVM and logistic regression. The observed results show the effectiveness of the proposed approach compared to ad-hoc feature-selection-based approaches and static node2vec.
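
A minimal sketch of the concatenate-then-compress step in PyTorch, with random vectors standing in for the per-timestamp node2vec embeddings (computing the embeddings themselves is out of scope here); the dimensions and the single-hidden-layer auto-encoder are assumptions for illustration.

```python
import torch
import torch.nn as nn

n_nodes, n_timestamps, dim = 500, 6, 64
# Stand-ins for node2vec embeddings generated at each timestamp.
per_t = [torch.randn(n_nodes, dim) for _ in range(n_timestamps)]
concat = torch.cat(per_t, dim=1)            # (500, 384) per-node feature vector

class AutoEncoder(nn.Module):
    def __init__(self, d_in, d_code):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_code), nn.ReLU())
        self.dec = nn.Linear(d_code, d_in)
    def forward(self, x):
        return self.dec(self.enc(x))

ae = AutoEncoder(concat.shape[1], 128)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(200):                        # train to reconstruct the concatenation
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(concat), concat)
    loss.backward()
    opt.step()

features = ae.enc(concat).detach()          # compressed per-node features (500, 128)
print(features.shape)                       # feed these to SVM / logistic regression
```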

Keywords: churn prediction, dynamic networks, node2vec, auto-encoders

Procedia PDF Downloads 304
12478 Production of New Hadron States in Effective Field Theory

Authors: Qi Wu, Dian-Yong Chen, Feng-Kun Guo, Gang Li

Abstract:

In the past decade, a growing number of new hadron states have been observed; in the heavy quarkonium mass regions these are dubbed XYZ states. In this work, we present our study of the production of some of these new hadron states. In particular, we investigate the processes Υ(5S,6S)→ Zb(10610)/Zb(10650)π, Bc→ Zc(3900)/Zc(4020)π and Λb→ Pc(4312)/Pc(4440)/Pc(4457)K. (1) For the production of Zb(10610)/Zb(10650) from Υ(5S,6S) decay, two types of bottom-meson loops were discussed within a nonrelativistic effective field theory. We found that the loop contributions with all intermediate states being S-wave ground-state bottom mesons are negligible, while loops with one bottom meson being the broad B₀* or B₁' resonance could provide the dominant contributions to Υ(5S)→ Zb⁽'⁾π. (2) For the production of Zc(3900)/Zc(4020) from Bc decay, the branching ratios of Bc⁺→ Zc(3900)⁺π⁰ and Bc⁺→ Zc(4020)⁺π⁰ are estimated to be of order 10⁻⁴ and 10⁻⁷, respectively, in an effective Lagrangian approach. The large production rate of Zc(3900) could provide an important source of the Zc(3900) production from the semi-exclusive decay of b-flavored hadrons reported by the D0 Collaboration, which can be tested by exclusive measurements at LHCb. (3) For the production of Pc(4312), Pc(4440), and Pc(4457) from Λb decay, the ratios of the branching fractions of Λb→ PcK were predicted in a molecular scenario using an effective Lagrangian approach, and depend only weakly on our model parameter. We also find that the ratios of the branching fractions of Λb→ PcK and Pc→ J/ψp can be well interpreted in the molecular scenario. Moreover, the estimated branching fractions of Λb→ PcK are of order 10⁻⁶, which could be tested by further measurements by the LHCb Collaboration.

Keywords: effective Lagrangian approach, hadron loops, molecular states, new hadron states

Procedia PDF Downloads 121