Search results for: quantum algorithms

401 Conservation Planning of Paris Polyphylla Smith, an Important Medicinal Herb of the Indian Himalayan Region Using Predictive Distribution Modelling

Authors: Mohd Tariq, Shyamal K. Nandi, Indra D. Bhatt

Abstract:

Paris polyphylla Smith (family Liliaceae; English name: love apple; local name: Satuwa) is an important folk medicinal herb of the Indian subcontinent, being a source of a number of bioactive compounds for drug formulation. The rhizomes are widely used as an anthelmintic, antispasmodic, digestive stomachic, expectorant, vermifuge, antimicrobial, anti-inflammatory, anti-fertility and sedative agent, and for heart and vascular maladies. In view of this, the species is constantly being removed from nature for trade and various pharmaceutical purposes; as a result, its availability in its natural habitat is decreasing. In this context, it is pertinent to conserve this species and reintroduce it into its natural habitat. Predictive distribution modelling of the species was therefore performed in the Western Himalayan Region. One such method is Ecological Niche Modelling, also popularly known as species distribution modelling, which uses computer algorithms to generate predictive maps of species distributions in geographic space by correlating point distribution data with a set of environmental raster data. For P. polyphylla, such modelling helps to understand its potential distribution zones, to set up artificial introductions or select conservation sites, and to guide conservation and management of its native habitat. Among the different districts of Uttarakhand (28°05ˈ-31°25ˈ N and 77°45ˈ-81°45ˈ E), 'Maximum Entropy' (Maxent) modelling predicted a wider potential distribution of P. polyphylla Smith in Uttarkashi, Rudraprayag, Chamoli, Pauri Garhwal and some parts of Bageshwar. The distribution of P. polyphylla is governed mainly by Precipitation of the Driest Quarter and Mean Diurnal Range, with contributions of 27.08% and 18.99% respectively, which indicates that humidity (around 27%) and an average temperature of around 19°C might be suitable for the better growth of Paris polyphylla.
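
The correlative modelling step described above can be illustrated with a short sketch. The snippet below is a hedged stand-in only: it uses synthetic environmental rasters, hypothetical presence points and a scikit-learn logistic classifier over random background points in place of the Maxent software actually used in the study.

```python
# Sketch of a species-distribution-modelling step: correlate presence points
# with environmental raster layers and map predicted suitability.
# Hypothetical data; logistic regression stands in for Maxent here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
H, W = 100, 100                                   # raster grid size (hypothetical)
precip_driest = rng.uniform(0, 100, (H, W))       # e.g. precipitation of driest quarter, mm
mean_diurnal_range = rng.uniform(5, 15, (H, W))   # e.g. mean diurnal range, degrees C
layers = np.stack([precip_driest, mean_diurnal_range], axis=-1)

# Presence records (row, col) and random background points
presence = rng.integers(0, 100, size=(50, 2))
background = rng.integers(0, 100, size=(500, 2))

X = np.vstack([layers[presence[:, 0], presence[:, 1]],
               layers[background[:, 0], background[:, 1]]])
y = np.concatenate([np.ones(len(presence)), np.zeros(len(background))])

model = LogisticRegression().fit(X, y)

# Predicted habitat-suitability surface over the whole grid
suitability = model.predict_proba(layers.reshape(-1, 2))[:, 1].reshape(H, W)
print("max predicted suitability:", suitability.max().round(3))
```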

Keywords: biodiversity conservation, Indian Himalayan region, Paris polyphylla, predictive distribution modelling

Procedia PDF Downloads 330
400 Modified Weibull Approach for Bridge Deterioration Modelling

Authors: Niroshan K. Walgama Wellalage, Tieling Zhang, Richard Dwight

Abstract:

State-based Markov deterioration models (SMDM) sometimes fail to find accurate transition probability matrix (TPM) values and hence lead to invalid future condition predictions or incorrect average deterioration rates, mainly due to drawbacks of existing nonlinear optimization-based algorithms and/or the subjective function types used for regression analysis. Furthermore, a set of separate functions for each condition state with age cannot be directly derived by using a Markov model for a given bridge element group, which, however, is of interest to industrial partners. This paper presents a new approach for generating homogeneous SMDM model output, namely the Modified Weibull approach, which consists of a set of appropriate functions to describe the percentage condition prediction of bridge elements in each state. These functions are combined with a Bayesian approach and a Metropolis-Hastings Algorithm (MHA) based Markov Chain Monte Carlo (MCMC) simulation technique for quantifying the uncertainty in model parameter estimates. In this study, factors contributing to rail bridge deterioration were identified. The inspection data for 1,000 Australian railway bridges over 15 years were reviewed and filtered accordingly based on real operational experience. A network-level deterioration model for a typical bridge element group was developed using the proposed Modified Weibull approach. The condition state predictions obtained from this method were validated using statistical hypothesis tests with a test data set. Results show that the proposed model is able not only to predict conditions at the network level accurately but also to capture the model uncertainties within a given confidence interval.
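
The Bayesian/MCMC step mentioned above can be sketched with a random-walk Metropolis-Hastings sampler. The snippet below is a minimal illustration rather than the authors' model: it fits the shape and scale of a Weibull time-in-state distribution to synthetic inspection data under flat priors.

```python
# Minimal Metropolis-Hastings sampler for Weibull shape/scale parameters,
# as a stand-in for the Bayesian parameter-uncertainty step described above.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(1)
ages = weibull_min.rvs(c=2.0, scale=30.0, size=200, random_state=1)  # synthetic "time in state" data

def log_post(shape, scale):
    if shape <= 0 or scale <= 0:
        return -np.inf
    # flat (improper) priors; log-likelihood of the Weibull model
    return weibull_min.logpdf(ages, c=shape, scale=scale).sum()

samples, theta = [], np.array([1.0, 10.0])     # initial shape, scale
lp = log_post(*theta)
for _ in range(20000):
    prop = theta + rng.normal(0, [0.05, 0.5])  # random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:    # accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta)

burned = np.array(samples[5000:])              # discard burn-in
print("posterior mean shape, scale:", burned.mean(axis=0).round(2))
```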

Keywords: bridge deterioration modelling, modified Weibull approach, MCMC, Metropolis-Hastings algorithm, Bayesian approach, Markov deterioration models

Procedia PDF Downloads 727
399 Solving LWE by Progressive Pumps and Its Optimization

Authors: Leizhang Wang, Baocang Wang

Abstract:

General Sieve Kernel (G6K) is currently considered the fastest algorithm for the shortest vector problem (SVP) and is the record holder of the open SVP challenge. We study the lattice basis quality improvement effects of the Workout proposed in G6K, which is composed of a series of pumps to solve SVP. Firstly, we use a low-dimensional pump output basis to propose a predictor of the quality of the high-dimensional Pump output basis. Both theoretical analysis and experimental tests are performed to illustrate that it is more computationally expensive to solve LWE problems by using the G6K default SVP solving strategy (Workout) than by using lattice reduction algorithms (e.g., BKZ 2.0, Progressive BKZ, Pump, and Jump BKZ) with sieving as their SVP oracle. Secondly, the default Workout in G6K is optimized to achieve a stronger reduction at lower computational cost. Thirdly, we combine the optimized Workout and the Pump output basis quality predictor to further reduce the computational cost by optimizing the LWE instance selection strategy. In fact, we can solve the TU LWE challenge (n = 65, q = 4225, α = 0.005) 13.6 times faster than with the G6K default Workout. Fourthly, we consider a combined two-stage (preprocessing by BKZ and a big Pump) LWE solving strategy. Both stages use the dimension-for-free technique to give new theoretical security estimations of several LWE-based cryptographic schemes. The security estimations show that the securities of these schemes under the conservative NewHope core-SVP model are somewhat overestimated. In addition, in the case of the LAC scheme, the LWE instance selection strategy can be optimized to further improve the LWE-solving efficiency by 15% and 57%. Finally, some experiments are implemented to examine the effects of our strategies on Normal Form LWE problems, and the results demonstrate that the combined strategy is four times faster than that of NewHope.

Keywords: LWE, G6K, pump estimator, LWE instances selection strategy, dimension for free

Procedia PDF Downloads 60
398 Effect of Concentration Level and Moisture Content on the Detection and Quantification of Nickel in Clay Agricultural Soil in Lebanon

Authors: Layan Moussa, Darine Salam, Samir Mustapha

Abstract:

Heavy metal contamination in agricultural soils in Lebanon poses serious environmental and health problems. Intensive efforts are being made to improve existing methods for quantifying heavy metals in contaminated environments, since conventional detection techniques have been shown to be time-consuming, tedious, and costly. The application of hyperspectral remote sensing in this field is possible and promising. However, the factors impacting the efficiency of hyperspectral imaging in detecting and quantifying heavy metals in agricultural soils have not been thoroughly studied. This study proposes to assess the use of hyperspectral imaging for the detection of Ni in agricultural clay soil collected from the Bekaa Valley, a major agricultural area in Lebanon, under different contamination levels and soil moisture contents. Soil samples were contaminated with Ni at concentrations ranging from 150 mg/kg to 4000 mg/kg. In addition, soil with background contamination was subjected to increased moisture levels varying from 5 to 75%. Hyperspectral imaging was used to detect and quantify Ni contamination in the soil at the different contamination levels and moisture contents. IBM SPSS statistical software was used to develop models that predict the concentration of Ni and the moisture content in agricultural soil. The models were constructed using linear regression algorithms. The spectral curves obtained showed an inverse correlation of reflectance with both Ni concentration and moisture content. The models developed achieved high predicted R² values of 0.763 for Ni concentration and 0.854 for moisture content, and indicated that the Ni signal was best expressed near 2200 nm and that of moisture at 1900 nm. The results from this study would allow us to define the potential of the hyperspectral imaging (HSI) technique as a reliable and cost-effective alternative for heavy metal pollution detection in contaminated soils and for soil moisture prediction.
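
The regression step described above can be sketched as follows. The snippet is a hedged illustration, not the authors' SPSS models: the synthetic spectra, the absorption dip placed near 2200 nm and the scikit-learn pipeline are all assumptions made for demonstration.

```python
# Sketch: predict Ni concentration from hyperspectral reflectance with linear regression.
# Synthetic spectra; in the study, reflectance near 2200 nm carried the Ni signal.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
wavelengths = np.arange(400, 2501, 10)              # nm, hypothetical band centres
n_samples = 120
ni_mg_kg = rng.uniform(150, 4000, n_samples)        # contamination levels as in the study

# Synthetic reflectance: a baseline minus a dip around 2200 nm that deepens with Ni
baseline = rng.uniform(0.3, 0.5, (n_samples, 1))
dip = np.exp(-((wavelengths - 2200) ** 2) / (2 * 50 ** 2))
reflectance = (baseline
               - (ni_mg_kg[:, None] / 4000) * 0.1 * dip
               + rng.normal(0, 0.005, (n_samples, len(wavelengths))))

X_train, X_test, y_train, y_test = train_test_split(reflectance, ni_mg_kg, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print("R2 on held-out spectra:", round(r2_score(y_test, model.predict(X_test)), 3))
```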

Keywords: heavy metals, hyperspectral imaging, moisture content, soil contamination

Procedia PDF Downloads 101
397 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the choice of loss functions and optimizers. The CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. We validate our approach and compare generalization power using a subset of the LivDet 2017 database. It is important to note that we use a subset of LivDet and that the database is the same across all training and testing for each model. This way, we can compare the performance, in terms of generalization, on unseen data across all the different models. The best CNN (AlexNet) with the appropriate loss function and optimizer results in a performance gain of more than 3% over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports parameter counts and mean average error rates to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied to our final model.
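
A minimal sketch of the loss-function and optimizer sweep described above is given below. It uses TensorFlow/Keras with a tiny CNN and random stand-in images rather than the AlexNet/VGGNet/ResNet models and LivDet data used in the paper, and it omits Center Loss and Cosine Proximity; every name and value shown is illustrative.

```python
# Sketch: compare loss functions and optimizers for a small CNN anti-spoofing classifier.
# Random stand-in images; a LivDet-style dataset would replace them in practice.
import numpy as np
import tensorflow as tf

def make_cnn():
    return tf.keras.Sequential([
        tf.keras.Input(shape=(96, 96, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPool2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # live (1) vs spoof (0)
    ])

rng = np.random.default_rng(0)
x = rng.random((200, 96, 96, 1)).astype("float32")        # placeholder fingerprint patches
y = rng.integers(0, 2, 200).astype("float32")

for loss in ["binary_crossentropy", "hinge"]:
    for opt in ["adam", "sgd", "rmsprop"]:
        model = make_cnn()
        model.compile(optimizer=opt, loss=loss, metrics=["accuracy"])
        hist = model.fit(x, y, epochs=2, batch_size=32, verbose=0)
        print(f"loss={loss:20s} opt={opt:8s} acc={hist.history['accuracy'][-1]:.3f}")
```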

Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer

Procedia PDF Downloads 136
396 Quantitative Evaluation of Supported Catalysts Key Properties from Electron Tomography Studies: Assessing Accuracy Using Material-Realistic 3D-Models

Authors: Ainouna Bouziane

Abstract:

The ability of Electron Tomography to recover the 3D structure of catalysts, with spatial resolution at the subnanometer scale, has been widely explored and reviewed in the last decades. A variety of experimental techniques, based either on Transmission Electron Microscopy (TEM) or Scanning Transmission Electron Microscopy (STEM), have been used to reveal different features of nanostructured catalysts in 3D, but High Angle Annular Dark Field imaging in STEM mode (HAADF-STEM) stands out as the most frequently used, given its chemical sensitivity and avoidance of imaging artifacts related to diffraction phenomena when dealing with crystalline materials. In this regard, our group has developed a methodology that combines image denoising by undecimated wavelet transforms (UWT) with automated, advanced segmentation procedures and parameter selection methods using CS-TVM (Compressed Sensing - Total Variation Minimization) algorithms to extract more reliable quantitative information from 3D characterization studies. However, evaluating the accuracy of the magnitudes estimated from the segmented volumes is also an important issue that has not yet been properly addressed, because a perfectly known reference is needed. The problem becomes particularly complicated in the case of multicomponent material systems. To tackle this key question, we have developed a methodology that incorporates volume reconstruction and segmentation methods. In particular, we have established an approach to evaluate, in quantitative terms, the accuracy of TVM reconstructions, which considers the influence of relevant experimental parameters such as the range of tilt angles, the image noise level and the object orientation. The approach is based on the analysis of material-realistic 3D phantoms, which include the most relevant features of the system under analysis.

Keywords: electron tomography, supported catalysts, nanometrology, error assessment

Procedia PDF Downloads 88
395 A Novel Epitope Prediction for Vaccine Designing against Ebola Viral Envelope Proteins

Authors: Manju Kanu, Subrata Sinha, Surabhi Johari

Abstract:

The viral proteins of Ebola virus belong to one of the best-studied viruses; however, no effective prevention against EBOV has been developed. Epitope-based vaccines provide a new strategy for prophylactic and therapeutic application of pathogen-specific immunity. A critical requirement of this strategy is the identification and selection of T-cell epitopes that act as vaccine targets. This study describes current methodologies for the selection process, with Ebola virus as a model system. A great challenge in the field of Ebola virus research is hence to design a universal vaccine. A combination of publicly available bioinformatics algorithms and computational tools was used to screen and select antigen sequences as potential T-cell epitopes of supertype Human Leukocyte Antigen (HLA) alleles. The MUSCLE and MOTIF tools were used to find the most conserved peptide sequences of the viral proteins. Immunoinformatics tools were used for the prediction of immunogenic peptides of viral proteins in Zaire strains of Ebola virus. Putative epitopes for the viral proteins (VP) were predicted from their conserved peptide sequences. Three tools, NetCTL 1.2, BIMAS and SYFPEITHI, were used to predict the Class I putative epitopes, while three tools, ProPred, IEDB-SMM-align and NetMHCII 2.2, were used to predict the Class II putative epitopes. B-cell epitopes were predicted with BCPREDS 1.0. Immunogenic peptides were identified and selected manually from the putative epitopes predicted by the online tools individually for both MHC classes. Finally, the sequences of the predicted peptides for both MHC classes were examined for a common region, which was selected as the common immunogenic peptide. Immunogenic peptides were found for the viral proteins of Ebola virus: the epitopes FLESGAVKY and SSLAKHGEY. These predicted peptides could be promising candidates to be used as targets for vaccine design.

Keywords: epitope, B cell, immunogenicity, Ebola

Procedia PDF Downloads 314
394 Assessing Building Rooftop Potential for Solar Photovoltaic Energy and Rainwater Harvesting: A Sustainable Urban Plan for Atlantis, Western Cape

Authors: Adedayo Adeleke, Dineo Pule

Abstract:

The ongoing load-shedding in most parts of South Africa, combined with climate change causing severe drought conditions in Cape Town, has left electricity consumers seeking alternative sources of power and water. Solar energy, which is abundant in most parts of South Africa and is regarded as a clean and renewable source of energy, allows for the generation of electricity via solar photovoltaic systems. Rainwater harvesting is the collection and storage of rainwater from building rooftops, allowing people without access to water to collect it. The lack of dependable energy and water sources must be addressed by shifting to solar energy via solar photovoltaic systems and to rainwater harvesting. Before this can be done, the potential of building rooftops must be assessed to determine whether solar energy and rainwater harvesting will be able to meet or significantly contribute to the electricity and water demands of the Atlantis industrial areas. This research project presents methods and approaches for automatically extracting building rooftops in the Atlantis industrial areas and evaluating their potential for solar photovoltaic and rainwater harvesting systems using Light Detection and Ranging (LiDAR) data and aerial imagery. The four objectives were to: (1) identify an optimal method of extracting building rooftops from aerial imagery and LiDAR data; (2) identify a suitable solar radiation model that can provide a global solar radiation estimate of the study area; (3) estimate solar photovoltaic potential over building rooftops; and (4) estimate the amount of rainwater that can be harvested from the building rooftops in the study area. Mapflow, a plugin for QGIS (Quantum Geographic Information System), was used to automatically extract building rooftops from the aerial imagery. The mean annual rainfall in Cape Town was obtained for a 29-year rainfall period (1991-2020) and used to calculate the amount of rainwater that can be harvested from building rooftops. The potential for rainwater harvesting and solar photovoltaic systems was assessed, and it can be concluded that there is potential for these systems, but only to supplement the existing resource supply and offer relief in times of drought and load-shedding.
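
Objective (4) reduces to a simple yield calculation once rooftop areas are known: harvested volume is rainfall depth times roof area times a runoff coefficient. The sketch below illustrates this; the rainfall figure, roof areas and runoff coefficient are assumptions, not values from the study.

```python
# Sketch: potential annual rainwater harvest from extracted rooftop areas.
# Harvest = rainfall depth x roof area x runoff coefficient.
mean_annual_rainfall_mm = 515.0              # hypothetical Cape Town figure, mm/year
runoff_coefficient = 0.85                    # assumed value for sheet-metal roofs

rooftop_areas_m2 = [1200.0, 850.0, 2300.0]   # areas extracted from imagery (hypothetical)

for area in rooftop_areas_m2:
    harvest_litres = mean_annual_rainfall_mm * area * runoff_coefficient  # 1 mm over 1 m^2 = 1 litre
    print(f"roof {area:7.1f} m^2 -> {harvest_litres / 1000:8.1f} kilolitres/year")

total_kl = sum(mean_annual_rainfall_mm * a * runoff_coefficient for a in rooftop_areas_m2) / 1000
print("total potential harvest:", round(total_kl, 1), "kilolitres/year")
```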

Keywords: roof potential, rainwater harvesting, urban plan, roof extraction

Procedia PDF Downloads 115
393 An Assessment of Floodplain Vegetation Response to Groundwater Changes Using the Soil & Water Assessment Tool Hydrological Model, Geographic Information System, and Machine Learning in the Southeast Australian River Basin

Authors: Newton Muhury, Armando A. Apan, Tek N. Marasani, Gebiaw T. Ayele

Abstract:

The changing climate has degraded freshwater availability in Australia, influencing vegetation growth to a great extent. This study assessed vegetation responses to groundwater using the Normalised Difference Vegetation Index (NDVI) from Terra's Moderate Resolution Imaging Spectroradiometer (MODIS) and soil water content (SWC). A hydrological model, SWAT, was set up in a southeast Australian river catchment for groundwater analysis. The model was calibrated and validated against monthly streamflow from 2001 to 2006 and from 2007 to 2010, respectively. The SWAT-simulated soil water content for 43 sub-basins and monthly MODIS NDVI data for three different vegetation types (forest, shrub, and grass) were applied in the machine learning tool Waikato Environment for Knowledge Analysis (WEKA), using two supervised machine learning algorithms, i.e., support vector machine (SVM) and random forest (RF). The assessment shows that the responses of the different vegetation types and the soil water content vary between the dry and wet seasons. The WEKA model generated high positive relationships (r = 0.76, 0.73, and 0.81) between the NDVI values of all vegetation in the sub-basins and soil water content (SWC), groundwater flow (GW), and the combination of these two variables, respectively, during the dry season. However, these responses were reduced by 36.8% (r = 0.48) and 13.6% (r = 0.63) against GW and SWC, respectively, in the wet season. Although the rainfall pattern is highly variable in the study area, the summer rainfall is very effective for the growth of the grass vegetation type. This study has enriched our knowledge of vegetation responses to groundwater in each season, which will facilitate better floodplain vegetation management.
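
The regression step described above can be sketched in Python with a random-forest regressor standing in for the WEKA workflow; the synthetic SWC, groundwater and NDVI values below are assumptions for illustration only.

```python
# Sketch: relate vegetation NDVI to soil water content (SWC) and groundwater flow (GW)
# with a random-forest regressor, as a Python stand-in for the WEKA workflow above.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 43 * 12                                   # sub-basins x months (hypothetical)
swc = rng.uniform(0.05, 0.40, n)              # soil water content fraction
gw = rng.uniform(0.0, 5.0, n)                 # groundwater flow, mm
ndvi = 0.2 + 1.2 * swc + 0.04 * gw + rng.normal(0, 0.05, n)   # synthetic response

X = np.column_stack([swc, gw])
rf = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(rf, X, ndvi, cv=5, scoring="r2")
print("cross-validated R2:", scores.mean().round(2))
```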

Keywords: ArcSWAT, machine learning, floodplain vegetation, MODIS NDVI, groundwater

Procedia PDF Downloads 101
392 Software Development to Empowering Digital Libraries with Effortless Digital Cataloging and Access

Authors: Abdul Basit Kiani

Abstract:

The software for the digital library system is a cutting-edge solution designed to revolutionize the way libraries manage and provide access to their vast collections of digital content. This advanced software leverages the power of technology to offer a seamless and user-friendly experience for both library staff and patrons. By implementing this software, libraries can efficiently organize, store, and retrieve digital resources, including e-books, audiobooks, journals, articles, and multimedia content. Its intuitive interface allows library staff to effortlessly manage cataloging, metadata extraction, and content enrichment, ensuring accurate and comprehensive access to digital materials. For patrons, the software offers a personalized and immersive digital library experience. They can easily browse the digital catalog, search for specific items, and explore related content through intelligent recommendation algorithms. The software also facilitates seamless borrowing, lending, and preservation of digital items, enabling users to access their favorite resources anytime, anywhere, on multiple devices. With robust security features, the software ensures the protection of intellectual property rights and enforces access controls to safeguard sensitive content. Integration with external authentication systems and user management tools streamlines the library's administration processes, while advanced analytics provide valuable insights into patron behavior and content usage. Overall, this software for the digital library system empowers libraries to embrace the digital era, offering enhanced access, convenience, and discoverability of their vast collections. It paves the way for a more inclusive and engaging library experience, catering to the evolving needs of tech-savvy patrons.

Keywords: software development, empowering digital libraries, digital cataloging and access, management system

Procedia PDF Downloads 83
391 A Cross-Cultural Approach for Communication with Biological and Non-Biological Intelligences

Authors: Thomas Schalow

Abstract:

This paper posits the need to take a cross-cultural approach to communication with non-human cultures and intelligences in order to meet the following three imminent contingencies: communicating with sentient biological intelligences, communicating with extraterrestrial intelligences, and communicating with artificial super-intelligences. The paper begins with a discussion of how intelligence emerges. It disputes some common assumptions we maintain about consciousness, intention, and language. The paper next explores cross-cultural communication among humans, including non-sapiens species. The next argument made is that we need to become much more serious about communicating with the non-human, intelligent life forms that already exist around us here on Earth. There is an urgent need to broaden our definition of communication and reach out to the other sentient life forms that inhabit our world. The paper next examines the science and philosophy behind CETI (communication with extraterrestrial intelligences) and how it has proven useful, even in the absence of contact with alien life. However, CETI’s assumptions and methodology need to be revised and based on the cross-cultural approach to communication proposed in this paper if we are truly serious about finding and communicating with life beyond Earth. The final theme explored in this paper is communication with non-biological super-intelligences using a cross-cultural communication approach. This will present a serious challenge for humanity, as we have never been truly compelled to converse with other species, and our failure to seriously consider such intercourse has left us largely unprepared to deal with communication in a future that will be mediated and controlled by computer algorithms. Fortunately, our experience dealing with other human cultures can provide us with a framework for this communication. The basic assumptions behind intercultural communication can be applied to the many types of communication envisioned in this paper if we are willing to recognize that we are in fact dealing with other cultures when we interact with other species, alien life, and artificial super-intelligence. The ideas considered in this paper will require a new mindset for humanity, but a new disposition will prepare us to face the challenges posed by a future dominated by artificial intelligence.

Keywords: artificial intelligence, CETI, communication, culture, language

Procedia PDF Downloads 358
390 Experimental Activity on the Photovoltaic Effect

Authors: Salomão Manuel Francisco, Manuel António Salgueiro Da Silva, Bento Filipe Barreiras Pinto Cavadas, Teresa Monteiro Seixas

Abstract:

Within the framework of bachelor's degrees in Physics Education in Angola and, to a certain extent, within the community of Portuguese language countries (CPLP), teaching methodologies rely heavily on theoretical memorization and mathematical demonstrations. This approach often discourages students, particularly the female population, as the reliance on theoretical mathematical demonstrations generates the perception of Physics as an arduous, challenging discipline. To address this challenge and recognize the value of practical application as an evaluative criterion of material truth, we propose a practical activity in Environmental Physics that will be shared with Angolan higher education teachers, who will receive full scaffolding and support from the authors. These teachers, by adopting and developing similar activities in a classroom setting, will contribute to the environmental education framework as well. Additionally, this work aligns with different goals of the UNESCO 2030 agenda, specifically goals 4, 5, 7, 11, 13, and 17. The experimental activity developed in this work is centered around the demonstration of the photovoltaic effect and its application to renewable energy production. The first objective of the activity is to study the variation of the electrical power supplied by a photovoltaic system (PV) to an electrical circuit as the angle of light incidence changes. Students can observe that the power supplied to the circuit is greatest when light rays fall perpendicularly on the PV. However, as the angle of incidence increases, resulting in a larger area covered by the light rays, the power supplied to the circuit decreases due to lower irradiance. The second objective is to demonstrate that the power output can be maximized by adjusting the circuit load resistance at each irradiance value. In these two parts of the activity, students can analyze experimental data taking into account the irradiance law and the equivalent-circuit description of a PV cell. Through detailed data analysis, students are also expected to assess the effects of temperature on PV efficiency degradation and the efficiency enhancement provided by light concentration mechanisms. As a third objective, students can explore how the color of incident light affects the PV output power, considering the quantum nature of light and its interaction with the PV system.
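
As a hedged numerical companion to the first two objectives (not the authors' experimental data), the sketch below models the photocurrent as proportional to the cosine of the incidence angle and sweeps the operating point of an ideal single-diode PV cell to locate the maximum power and the corresponding load resistance; all parameter values are assumed.

```python
# Sketch: PV output power vs. angle of incidence, and the maximum-power point found by
# sweeping the operating voltage (equivalent to adjusting the load resistance).
# Ideal single-diode model with assumed parameters; values are illustrative only.
import numpy as np

I_sc = 0.5            # short-circuit current at normal incidence, A (assumed)
I_0 = 1e-9            # diode saturation current, A (assumed)
n, Vt = 1.3, 0.02585  # ideality factor and thermal voltage at ~300 K

V = np.linspace(0, 0.7, 500)                     # operating voltage sweep, V
for angle_deg in (0, 30, 60, 80):
    I_ph = I_sc * np.cos(np.radians(angle_deg))  # irradiance (and photocurrent) ~ cos(theta)
    I = np.clip(I_ph - I_0 * (np.exp(V / (n * Vt)) - 1), 0, None)
    P = V * I
    k = P.argmax()
    R_load = V[k] / I[k] if I[k] > 0 else float("inf")
    print(f"angle {angle_deg:2d} deg: P_max = {P[k]*1000:6.1f} mW at R_load ~ {R_load:5.2f} ohm")
```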

Keywords: experiments, irradiance law, physics teaching, photovoltaic effect

Procedia PDF Downloads 83
389 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite

Authors: F. Lazzeri, I. Reiter

Abstract:

Energy production optimization has traditionally been very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there are a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R and on web services built and deployed with different components of Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that enables users to easily build, deploy, and share predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and Power BI, a suite of business analytics tools to analyze data and share insights. Our results show that Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful for predicting hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity and dew point) can play a crucial role in improving the accuracy of the forecasting solution. Data cleaning and feature engineering methods performed in R and different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile and ARIMA) will be presented, and results and performance metrics discussed.
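
The forecasting setup described above can be sketched with a gradient-boosted regression tree on lagged load and weather features. The snippet below is a Python stand-in for the R and Azure Machine Learning pipeline used in the paper, with a synthetic hourly series and assumed features.

```python
# Sketch: short-term hourly load forecasting with a gradient-boosted tree on lagged
# load and weather features. Synthetic series; feature choices are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
n_hours = 24 * 90
hour = np.arange(n_hours) % 24
dow = (np.arange(n_hours) // 24) % 7
temp = 15 + 8 * np.sin(2 * np.pi * hour / 24) + rng.normal(0, 1, n_hours)
load = 50 + 10 * np.sin(2 * np.pi * (hour - 6) / 24) + 0.5 * temp + rng.normal(0, 2, n_hours)

lag24 = np.roll(load, 24)                              # same hour on the previous day
X = np.column_stack([temp, hour, dow, lag24])[24:]     # drop the first day (invalid lag)
y = load[24:]

split = len(y) - 24 * 7                                # hold out the last week
model = GradientBoostingRegressor().fit(X[:split], y[:split])
mape = mean_absolute_percentage_error(y[split:], model.predict(X[split:]))
print(f"held-out MAPE: {mape:.2%}")
```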

Keywords: time-series, features engineering methods for forecasting, energy demand forecasting, Azure Machine Learning

Procedia PDF Downloads 297
388 Proposed Algorithms to Assess Concussion Potential in Rear-End Motor Vehicle Collisions: A Meta-Analysis

Authors: Rami Hashish, Manon Limousis-Gayda, Caitlin McCleery

Abstract:

Introduction: Mild traumatic brain injuries, also referred to as concussions, represent an increasing burden to society. Due to limited objective diagnostic measures, concussions are diagnosed by assessing subjective symptoms, often leading to disputes over their presence. Common biomechanical measures associated with concussion are high linear and/or angular acceleration of the head. With regard to linear acceleration, approximately 80 g has previously been shown to equate to a 50% probability of concussion. Motor vehicle collisions (MVCs) are a leading cause of concussion, due to the high head accelerations experienced. The change in velocity (delta-V) of a vehicle in an MVC is an established metric for impact severity. As acceleration is the rate of change of delta-V with respect to time, the purpose of this paper is to determine the relation between delta-V (and occupant parameters) and linear head acceleration. Methods: A meta-analysis was conducted on manuscripts collected using the following keywords: head acceleration, concussion, brain injury, head kinematics, delta-V, change in velocity, motor vehicle collision, and rear-end. Ultimately, 280 studies were surveyed, 14 of which fulfilled the inclusion criteria as studies investigating the human response to impacts and reporting head acceleration and the delta-V of the occupant's vehicle. Statistical analysis was conducted with SPSS and R. A best-fit line analysis allowed for an initial understanding of the relation between head acceleration and delta-V. To further investigate the effect of occupant parameters on head acceleration, a quadratic model and a full linear mixed model were developed. Results: From the 14 selected studies, 139 crashes were analyzed, with head accelerations and delta-V values ranging from 0.6 to 17.2 g and 1.3 to 11.1 km/h, respectively. Initial analysis indicated that the best line of fit (Model 1) was defined as Head Acceleration = 0.465

Keywords: acceleration, brain injury, change in velocity, Delta-V, TBI

Procedia PDF Downloads 233
387 Predicting Low Birth Weight Using Machine Learning: A Study on 53,637 Ethiopian Birth Data

Authors: Kehabtimer Shiferaw Kotiso, Getachew Hailemariam, Abiy Seifu Estifanos

Abstract:

Introduction: Despite low birth weight (LBW) accounting for the highest share of neonatal mortality and morbidity, predicting births with LBW for better intervention preparation is challenging. This study aims to predict LBW using a dataset encompassing 53,637 birth cohorts collected from 36 primary hospitals across seven regions in Ethiopia from February 2022 to June 2024. Methods: We identified ten explanatory variables related to maternal and neonatal characteristics, including maternal education, age, residence, history of miscarriage or abortion, history of preterm birth, type of pregnancy, number of livebirths, number of stillbirths, antenatal care frequency, and sex of the fetus, to predict LBW. Using WEKA 3.8.2, we developed and compared seven machine learning algorithms. Data preprocessing included handling missing values, outlier detection, and ensuring data integrity in birth weight records. Model performance was evaluated through metrics such as accuracy, precision, recall, F1-score, and area under the Receiver Operating Characteristic curve (ROC AUC) using 10-fold cross-validation. Results: The results demonstrated that the decision tree, J48, logistic regression, and gradient boosted trees models achieved the highest accuracy (94.5% to 94.6%), with a precision of 93.1% to 93.3%, F1-score of 92.7% to 93.1%, and ROC AUC of 71.8% to 76.6%. Conclusion: This study demonstrates the effectiveness of machine learning models in predicting LBW. The high accuracy and recall rates achieved indicate that these models can serve as valuable tools for healthcare policymakers and providers in identifying at-risk newborns and implementing timely interventions to achieve the Sustainable Development Goal (SDG) target related to neonatal mortality.
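
The model-comparison step described above can be sketched with scikit-learn in place of WEKA 3.8.2. The snippet below uses synthetic maternal and neonatal features and 10-fold cross-validated ROC AUC; the feature names, label rule and classifier choices are assumptions for illustration.

```python
# Sketch: compare a few classifiers for low-birth-weight prediction with 10-fold
# cross-validation. Synthetic features; field names are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(0, 4, n),        # maternal education level
    rng.uniform(15, 45, n),       # maternal age
    rng.integers(0, 2, n),        # history of preterm birth
    rng.integers(0, 9, n),        # antenatal care visits
])
# Synthetic LBW label loosely tied to preterm history and maternal age
y = (0.3 * X[:, 2] + 0.02 * (45 - X[:, 1]) + rng.normal(0, 0.2, n) > 0.45).astype(int)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("logistic regression", LogisticRegression(max_iter=1000)),
                  ("gradient boosted trees", GradientBoostingClassifier(random_state=0))]:
    auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc").mean()
    print(f"{name:22s} ROC AUC = {auc:.3f}")
```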

Keywords: low birth weight, machine learning, classification, neonatal mortality, Ethiopia

Procedia PDF Downloads 22
386 Research and Implementation of Cross-domain Data Sharing System in Net-centric Environment

Authors: Xiaoqing Wang, Jianjian Zong, Li Li, Yanxing Zheng, Jinrong Tong, Mao Zhan

Abstract:

With the rapid development of network and communication technology, a great deal of data has been generated in different domains of a network. These data show a trend of increasing scale and more complex structure. Therefore, an effective and flexible cross-domain data-sharing system is needed. The Cross-domain Data Sharing System (CDSS) in a net-centric environment is composed of three sub-systems. The data distribution sub-system provides a data exchange service through publish-subscribe technology that supports asynchronous and multi-to-multi communication, which suits the needs of a dynamic and large-scale distributed computing environment. The access control sub-system adopts Attribute-Based Access Control (ABAC) technology to uniformly model data attributes such as subject, object, permission and environment, which effectively monitors the activities of users accessing resources and ensures that legitimate users get effective access control rights within a legal time. The cross-domain access security negotiation sub-system automatically determines the access rights between different security domains during the interactive disclosure of digital certificates and access control policies, through trust policy management and negotiation algorithms, which provides an effective means for establishing cross-domain trust relationships and access control in a distributed environment. The CDSS's asynchronous, multi-to-multi and loosely coupled communication features can adapt well to data exchange and sharing in dynamic, distributed and large-scale network environments. Next, we will give the CDSS new features to support the mobile computing environment.
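
The ABAC decision logic of the access control sub-system can be illustrated with a small sketch that models subject, object, permission and environment attributes; the policy rules and attribute names below are assumptions, not the CDSS implementation.

```python
# Sketch: an attribute-based access-control (ABAC) decision of the kind described above,
# combining subject, object, permission and environment attributes.
from datetime import datetime, time

POLICIES = [
    {   # analysts in domain A may read sensor data during working hours
        "subject": {"role": "analyst", "domain": "A"},
        "object": {"type": "sensor_data"},
        "permission": "read",
        "environment": lambda env: time(8, 0) <= env["now"].time() <= time(18, 0),
    },
]

def is_permitted(subject, obj, permission, environment):
    """Return True if any policy matches all attribute constraints."""
    for p in POLICIES:
        if permission != p["permission"]:
            continue
        if all(subject.get(k) == v for k, v in p["subject"].items()) \
           and all(obj.get(k) == v for k, v in p["object"].items()) \
           and p["environment"](environment):
            return True
    return False

print(is_permitted({"role": "analyst", "domain": "A"},
                   {"type": "sensor_data"}, "read",
                   {"now": datetime(2024, 5, 6, 10, 30)}))   # True
print(is_permitted({"role": "guest", "domain": "B"},
                   {"type": "sensor_data"}, "read",
                   {"now": datetime(2024, 5, 6, 10, 30)}))   # False
```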

Keywords: data sharing, cross-domain, data exchange, publish-subscribe

Procedia PDF Downloads 124
385 Personality Profiles, Emotional Disturbance and Health-Related Quality of Life in Patients with Epilepsy

Authors: Usha Barahmand, Ruhollah Heydari Sheikh Ahmad, Sara Alaie Khoraem

Abstract:

Introduction: The association of epilepsy with several psychological disorders and reduced quality of life has long been recognized. The present study aimed at comparing the personality profiles, quality of life and symptomatology of anxiety and depression in patients with epilepsy and healthy controls. Materials and Methods: Forty-seven patients (29 men and 18 women) with diagnosed epilepsy participated in this study. Forty-seven healthy controls who matched the patients in age and gender were also recruited. The participants' personality and psychological profiles were assessed using the Depression, Anxiety, and Stress Scale (DASS-21), the Short-Form Health Survey (SF-36) and the HEXACO Personality Inventory (HEXACO-PI). Scoring algorithms were applied to the SF-36 to produce the physical and mental component scores (PCS and MCS). Results: There were statistically significant differences in the total SF-36 score and in the anxiety, depression and stress scores of the DASS-21 between patients and controls. Anxiety, stress and depression scores correlated significantly and inversely with the PCS and MCS. Data analysis showed that females had higher depression scores than males in both patients and controls, while males in both groups scored higher on stress. Patients' personality scores also differed from those of controls on emotionality, agreeableness and extraversion: patients scored higher on emotionality, and lower on agreeableness and extraversion. Patients also scored lower on indices of quality of life. Regression analysis revealed that emotionality, anxiety, stress and the MCS accounted for a significant proportion of the variance in the severity of epileptic seizures. Conclusion: Stressful situations and psychological conditions, as well as the personality trait of neuroticism, were related to the occurrence of recurrent epileptic seizures.

Keywords: anxiety, depression, epilepsy, neuroticism, personality, quality of life, stress

Procedia PDF Downloads 370
384 Applying Multiplicative Weight Update to Skin Cancer Classifiers

Authors: Animish Jain

Abstract:

This study deals with using Multiplicative Weight Update within artificial intelligence and machine learning to create models that can diagnose skin cancer using microscopic images of cancer samples. In this study, the multiplicative weight update method is used to combine the predictions of multiple models in order to obtain more accurate results. Logistic Regression, Convolutional Neural Network (CNN), and Support Vector Machine Classifier (SVMC) models are employed within the Multiplicative Weight Update system. These models are trained on pictures of skin cancer from the ISIC Archive to look for patterns that label unseen scans as either benign or malignant. The models are utilized in a multiplicative weight update algorithm which takes into account the precision and accuracy of each model on each successive guess in order to apply weights to its guesses. These guesses and weights are then analyzed together to try to obtain the correct predictions. The research hypothesis for this study stated that there would be a significant difference in the accuracy of the three models and the Multiplicative Weight Update system. The SVMC model had an accuracy of 77.88%. The CNN model had an accuracy of 85.30%. The Logistic Regression model had an accuracy of 79.09%. Using Multiplicative Weight Update, the algorithm achieved an accuracy of 72.27%. The final conclusion drawn was that there was a significant difference in the accuracy of the three models and the Multiplicative Weight Update system. The conclusion was made that using a CNN model would be the best option for this problem rather than a Multiplicative Weight Update system. This may be because Multiplicative Weight Update is not effective in a binary setting, where there are only two possible classifications. In a categorical setting with multiple classes and groupings, a Multiplicative Weight Update system might become more proficient, as it takes into account the strengths of multiple different models to classify images into multiple categories rather than only two, as shown in this study. This experimentation and computer science project can help to create better algorithms and models for the future of artificial intelligence in the medical imaging field.
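
The multiplicative weight update step described above can be sketched as a weighted-majority ensemble: each expert keeps a weight that is multiplied down whenever it errs. The snippet below simulates three experts with error rates loosely inspired by the reported accuracies; the learning rate and simulated predictions are assumptions.

```python
# Sketch of a multiplicative-weight-update (MWU) ensemble: each expert's weight is
# multiplied by (1 - eta) whenever it is wrong, and the ensemble predicts by weighted vote.
import numpy as np

rng = np.random.default_rng(0)
n_rounds, eta = 500, 0.3                       # rounds and learning rate (assumed)
truth = rng.integers(0, 2, n_rounds)           # 0 = benign, 1 = malignant (synthetic labels)

# Three simulated experts (stand-ins for the SVMC, CNN and logistic regression models)
error_rates = [0.22, 0.15, 0.21]
experts = np.array([np.where(rng.random(n_rounds) < e, 1 - truth, truth) for e in error_rates])

weights = np.ones(len(experts))
mistakes = 0
for t in range(n_rounds):
    votes = experts[:, t]
    prediction = int(weights[votes == 1].sum() >= weights[votes == 0].sum())  # weighted majority
    mistakes += int(prediction != truth[t])
    weights *= np.where(votes != truth[t], 1 - eta, 1.0)  # penalize experts that were wrong

print("ensemble accuracy:", round(1 - mistakes / n_rounds, 3))
print("final expert weights:", weights.round(4))
```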

Keywords: artificial intelligence, machine learning, multiplicative weight update, skin cancer

Procedia PDF Downloads 79
383 The Emergence of Memory at the Nanoscale

Authors: Victor Lopez-Richard, Rafael Schio Wengenroth Silva, Fabian Hartmann

Abstract:

Memcomputing is a computational paradigm that combines information processing and storage on the same physical platform. Key elements for this topic are devices with an inherent memory, such as memristors, memcapacitors, and meminductors. Despite the widespread emergence of memory effects in various solid systems, a clear understanding of the basic microscopic mechanisms that trigger them is still a puzzling task. We report basic ingredients of the theory of solid-state transport, intrinsic to a wide range of mechanisms, as sufficient conditions for a memristive response that points to the natural emergence of memory. This emergence should be discernible under an adequate set of driving inputs, as highlighted by our theoretical prediction, and general common trends can thus be listed that become the rule and not the exception, with contrasting signatures according to symmetry constraints, either built in or induced by external factors at the microscopic level. Explicit analytical figures of merit for the memory modulation of the conductance are presented, unveiling very concise and accessible correlations between general intrinsic microscopic parameters such as relaxation times, activation energies, and efficiencies (encountered throughout various fields in Physics) and external drives: voltage pulses, temperature, illumination, etc. These building blocks of memory can be extended to a vast universe of materials and devices, with combinations of parallel and independent transport channels, providing an efficient and unified physical explanation for a wide class of resistive memory devices that have emerged in recent years. Its simplicity and practicality have also allowed a direct correlation with reported experimental observations, with the potential of pointing out the optimal driving configurations. The main methodological tools combine three quantum transport approaches, a Drude-like model, the Landauer-Büttiker formalism, and field-effect transistor emulators, with the microscopic characterization of nonequilibrium dynamics. Both qualitative and quantitative agreement with available experimental responses is provided to validate the main hypothesis. This analysis also sheds light on the basic universality of complex natural impedances of systems out of equilibrium and might help pave the way for new trends in the area of memory formation as well as in its technological applications.

Keywords: memories, memdevices, memristors, nonequilibrium states

Procedia PDF Downloads 97
382 Nanoparticles Modification by Grafting Strategies for the Development of Hybrid Nanocomposites

Authors: Irati Barandiaran, Xabier Velasco-Iza, Galder Kortaberria

Abstract:

Hybrid inorganic/organic nanostructured materials based on block copolymers are of considerable interest in the field of Nanotechnology, taking into account that these nanocomposites combine the properties of the polymer matrix and the unique properties of the added nanoparticles. The use of block copolymers as templates offers the opportunity to control the size and distribution of the inorganic nanoparticles. This research is focused on the surface modification of inorganic nanoparticles to achieve a good interface between nanoparticles and polymer matrices, which hinders nanoparticle aggregation. The aim of this work is to obtain a good and selective dispersion of Fe3O4 magnetic nanoparticles in different types of block copolymers, such as poly(styrene-b-methyl methacrylate) (PS-b-PMMA), poly(styrene-b-ε-caprolactone) (PS-b-PCL), poly(isoprene-b-methyl methacrylate) (PI-b-PMMA) or poly(styrene-b-butadiene-b-methyl methacrylate) (SBM), by using different grafting strategies. Fe3O4 magnetic nanoparticles have been surface-modified with polymer or block copolymer brushes following different grafting methods (grafting to, grafting from and grafting through) to achieve a selective location of the nanoparticles in the desired domains of the block copolymers. The morphology of the fabricated hybrid nanocomposites was studied by means of atomic force microscopy (AFM), and with the aim of reaching well-ordered nanostructured composites, different annealing methods were used. Additionally, the nanoparticle amount was also varied in order to investigate the effect of nanoparticle content on the morphology of the block copolymer. Nowadays, different characterization methods are used to investigate the magnetic properties of nanometer-scale electronic devices. In particular, two different techniques have been used with the aim of characterizing the synthesized nanocomposites. First, magnetic force microscopy (MFM) was used to investigate the magnetic properties qualitatively, taking into account that this technique allows magnetic domains on the sample surface to be distinguished. Second, magnetic characterization was performed by vibrating sample magnetometry and a superconducting quantum interference device. These measurements demonstrated that the magnetic properties of the nanoparticles have been transferred to the nanocomposites, which exhibit superparamagnetic behavior similar to that of the maghemite nanoparticles at room temperature. The obtained advanced nanostructured materials could find possible applications in the field of dye-sensitized solar cells and electronic nanodevices.

Keywords: atomic force microscopy, block copolymers, grafting techniques, iron oxide nanoparticles

Procedia PDF Downloads 262
381 Isotope Effects on Inhibitors Binding to HIV Reverse Transcriptase

Authors: Agnieszka Krzemińska, Katarzyna Świderek, Vicente Molinier, Piotr Paneth

Abstract:

In order to understand in detail the interactions between ligands and the enzyme, isotope effects were studied for clinically used drugs that bind in the active site of Human Immunodeficiency Virus Reverse Transcriptase, HIV-1 RT, as well as for a triazole-based inhibitor that binds in the allosteric pocket of this enzyme. The magnitudes and origins of the resulting binding isotope effects were analyzed. Subsequently, binding isotope effects of the same triazole-based inhibitor bound in the active site were analyzed and compared. Together, these results show differences in binding origins at the two sites of the enzyme and allow the binding mode and binding site of newly synthesized inhibitors to be analyzed. A typical protocol is described below using the example of the triazole ligand in the allosteric pocket. The triazole was docked into the allosteric cavity of HIV-1 RT with Glide using the extra-precision mode as implemented in the Schroedinger software. The structure of HIV-1 RT was obtained from the Protein Data Bank (PDB ID 2RKI). The pKa values of the titratable amino acids were calculated using the PROPKA software, and in order to neutralize the system, 15 Cl- ions were added using the tLEaP package implemented in AMBERTools ver. 1.5. The N-termini and C-termini were also built using tLEaP. The system was placed in a 144x160x144 Å3 orthorhombic box of water molecules using the NAMD program. Missing parameters for the triazole were obtained at the AM1 level using the Antechamber software implemented in AMBERTools. The energy minimizations were carried out by means of a conjugate gradient algorithm using NAMD. The system was then heated from 0 to 300 K with a temperature increment of 0.001 K. Subsequently, a 2 ns Langevin-Verlet (NVT) MM MD simulation with the AMBER force field implemented in NAMD was carried out. Periodic Boundary Conditions and cut-offs for the nonbonding interactions, with a switching range from 14.5 to 16 Å, were used. After 2 ns of relaxation, 200 ps of QM/MM MD at 300 K were simulated. The triazole was treated quantum mechanically at the AM1 level, the protein was described using AMBER, and the water molecules were described using TIP3P, as implemented in the fDynamo library. Molecules 20 Å away from the triazole were kept frozen, with cut-offs established on a range from 14.5 to 16 Å. In order to describe the interactions between the triazole and RT, the free energy of binding was computed using the Free Energy Perturbation method. The change in frequencies from the ligand in solution to the ligand bound in the enzyme was used to calculate the binding isotope effects.

Keywords: binding isotope effects, molecular dynamics, HIV, reverse transcriptase

Procedia PDF Downloads 432
380 Assimilating Remote Sensing Data Into Crop Models: A Global Systematic Review

Authors: Luleka Dlamini, Olivier Crespo, Jos van Dam

Abstract:

Accurately estimating crop growth and yield is pivotal for timely, sustainable agricultural management and for ensuring food security. Crop models and remote sensing can complement each other and, when combined, form a robust analysis tool to improve crop growth and yield estimations. This study thus aims to systematically evaluate how research that exclusively focuses on assimilating RS data into crop models varies among countries, crops, data assimilation methods, and farming conditions. A strict search string was applied in the Scopus and Web of Science databases, and 497 potential publications were obtained. After screening for relevance with predefined inclusion/exclusion criteria, 123 publications were considered in the final review. Results indicate that over 81% of the studies were conducted in countries associated with high socio-economic and technological advancement, mainly China, the United States of America, France, Germany, and Italy. Many of these studies integrated MODIS or Landsat data into WOFOST to improve crop growth and yield estimation of staple crops at the field and regional scales. Most studies use recalibration or updating methods alongside various algorithms to assimilate remotely sensed leaf area index into crop models. However, these methods cannot account for the uncertainties in the remote sensing observations and in the crop model itself. Over 85% of the studies were based on commercial and irrigated farming systems. Despite great global interest in data assimilation into crop models, limited research has been conducted in resource- and data-limited regions like Africa. We foresee great potential for such applications in those conditions and hence encourage facilitating and expanding the use of such an approach, from which developing farming communities could benefit.

Keywords: crop models, remote sensing, data assimilation, crop yield estimation

Procedia PDF Downloads 131
378 Static Application Security Testing Approach for Non-Standard Smart Contracts

Authors: Antonio Horta, Renato Marinho, Raimir Holanda

Abstract:

Considered an evolution of the blockchain, the Ethereum platform, besides allowing transactions of its cryptocurrency, named Ether, allows the programming of decentralised applications (DApps) and smart contracts. However, bringing this functionality into blockchains has raised other types of threats, and the exploitation of smart contract vulnerabilities has led companies to experience big losses. This research intends to figure out the number of contracts that are at risk of being drained. Through a deep investigation, more than two hundred thousand smart contracts currently available on the Ethereum platform were scanned, and it was estimated how much money is at risk. The experiment was based on a query run on Google BigQuery in July 2022, which returned 50,707,133 contracts published on the Ethereum platform. After applying the filtering criteria, the experiment got 430,584 smart contracts to download and analyse. The filtering criteria consisted of filtering out ERC20 and ERC721 contracts, contracts without transactions, and contracts without balance. Of the 430,584 smart contracts selected, only 268,103 had source codes published on Etherscan; however, we discovered, using a hashing process, that there was contract duplication. Removing the duplicated contracts, the process ended up with 20,417 source codes, which were analysed using the open-source SAST tool SmartBugs with the Oyente and Securify algorithms. In the end, there was nearly $100,000 at risk of being drained from the potentially vulnerable smart contracts. It is important to note that the tools used in this study may generate false positives, which may interfere with the number of vulnerable contracts. To address this point, our next step in this research is to develop an application to test the contracts in a parallel environment to verify the vulnerabilities. Finally, this study aims to alert users and companies to the risk of not properly creating and analysing their smart contracts before publishing them on the platform. Like any other application, smart contracts are at risk of having vulnerabilities which, in this case, may result in direct financial losses.

Keywords: blockchain, reentrancy, static application security testing, smart contracts

Procedia PDF Downloads 88
377 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks

Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar

Abstract:

A DNA barcode is a short mitochondrial DNA fragment made up of nucleotides, each composed of three subunits: a phosphate group, a sugar and a nucleic base (A, T, C or G). DNA barcodes provide a good source of the information needed to classify living species, an intuition that has been confirmed by many experimental results. Species classification with DNA barcode sequences has been studied by several researchers. The classification problem assigns unknown species to known ones by analyzing their barcodes. This task has to be supported by reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be compared simultaneously using Multiple Sequence Alignment, which is known to be NP-complete. To make this type of analysis feasible, heuristics, like progressive alignment, have been developed. Another tool for similarity search against a database of sequences is BLAST, which outputs shorter regions of high similarity between a query sequence and matched sequences in the database. However, all these methods are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable. This method makes it possible to avoid the complex problem of form and structure in different classes of organisms. The models are evaluated on empirical data, and their classification performance is compared with that of other methods. Our system consists of three phases. The first is called transformation and is composed of three steps: Electron-Ion Interaction Pseudopotential (EIIP) for the codification of DNA barcodes, Fourier Transform, and Power Spectrum Signal Processing. The second is called approximation and is empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). The third is the classification of DNA barcodes, which is realized by applying a hierarchical classification algorithm.
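
The transformation phase described above can be sketched in a few lines: map each base to its EIIP value, then take the Fourier power spectrum as a numerical feature vector. The example sequence below is made up, and the downstream wavelet-network approximation and hierarchical classification are not shown.

```python
# Sketch of the "transformation" phase: encode a DNA barcode with Electron-Ion
# Interaction Pseudopotential (EIIP) values, then take the Fourier power spectrum.
import numpy as np

EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def power_spectrum(barcode: str) -> np.ndarray:
    signal = np.array([EIIP[b] for b in barcode if b in EIIP])
    signal = signal - signal.mean()           # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return spectrum / spectrum.sum()          # normalized power spectrum as a feature vector

barcode = "ACTGGTCATAGCTGTTTCCTGTGTGAAATTGTTATCCGCT"  # hypothetical fragment
features = power_spectrum(barcode)
print("feature vector length:", len(features))
print("dominant frequency bin:", int(features.argmax()))
```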

Keywords: DNA barcode, electron-ion interaction pseudopotential, Multi Library Wavelet Neural Networks (MLWNN)

Procedia PDF Downloads 318
376 Object Detection in Digital Images under Non-Standardized Conditions Using Illumination and Shadow Filtering

Authors: Waqqas-ur-Rehman Butt, Martin Servin, Marion Pause

Abstract:

In recent years, object detection has gained much attention and has become a very active research area in the field of computer vision. Robust detection of object boundaries in an image is required in numerous applications of human-computer interaction and automated surveillance systems. Many methods and approaches have been developed for automatic object detection in various fields, such as automotive, quality control management and environmental services. Unfortunately, to the best of our knowledge, object detection under varying illumination with shadow consideration has not been well solved yet. Furthermore, this problem is one of the major hurdles keeping object detection methods from practical application. This paper presents an approach to automatic object detection in images under non-standardized environmental conditions. A key challenge is how to detect the object, particularly under uneven illumination: because image capturing conditions vary, the algorithms need to consider a variety of possible environmental factors, as the colour information, lighting and shadows vary from image to image. Existing methods mostly fail to produce the appropriate result due to variation in colour information, lighting effects, threshold specifications, histogram dependencies and colour ranges. To overcome these limitations, we propose an object detection algorithm, with pre-processing methods, that reduces the interference caused by shadow and illumination effects without fixed parameters. We use the YCrCb colour model without any specific colour ranges or predefined threshold values. The segmented object regions are further classified using morphological operations (erosion and dilation) and contours. The proposed approach was applied to a large image data set, acquired under various environmental conditions, for wood stack detection. Experiments show promising results for the proposed approach in comparison with existing methods.
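A minimal OpenCV sketch of the kind of pipeline described (YCrCb conversion, illumination equalisation of the luma channel, morphological clean-up and contour extraction); thresholding via Otsu and the 5x5 kernel are assumptions for illustration, not parameters from the paper.

```python
import cv2
import numpy as np

def detect_objects(image_path: str):
    """Illustrative segmentation: YCrCb conversion, luma equalisation to
    soften illumination/shadow effects, opening, and contour extraction."""
    bgr = cv2.imread(image_path)
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)

    # Equalise the luma channel instead of relying on fixed thresholds.
    y_eq = cv2.equalizeHist(y)
    _, mask = cv2.threshold(y_eq, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Erosion followed by dilation (opening) removes shadow speckle.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=1)
    mask = cv2.dilate(mask, kernel, iterations=1)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]
```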

Keywords: image processing, illumination equalization, shadow filtering, object detection

Procedia PDF Downloads 216
375 Genetic Programming: Principles, Applications and Opportunities for Hydrological Modelling

Authors: Oluwaseun K. Oyebode, Josiah A. Adeyemo

Abstract:

Hydrological modelling plays a crucial role in the planning and management of water resources, especially in water-stressed regions where the need to manage the available water resources effectively is of critical importance. However, due to the complex, nonlinear and dynamic behaviour of hydro-climatic interactions, achieving reliable modelling of water resource systems and accurate projection of hydrological parameters is extremely challenging. Although a significant number of modelling techniques (process-based and data-driven) have been developed and adopted in that regard, the field of hydrological modelling is still considered to have progressed sluggishly over the past decades, largely because of the uncertainty identified in the methodologies and results of the techniques adopted. In recent times, evolutionary computation (EC) techniques have been developed and introduced in response to the search for efficient and reliable means of providing accurate solutions to hydrological problems. This paper presents a comprehensive review of the underlying principles, methodological needs and applications of a promising evolutionary computation modelling technique: genetic programming (GP). It examines the specific characteristics of the technique that make it suitable for solving hydrological modelling problems. It discusses the opportunities inherent in the application of GP in water-related studies such as rainfall estimation, rainfall-runoff modelling, streamflow forecasting, sediment transport modelling, water quality modelling and groundwater modelling, among others. Furthermore, the means by which such opportunities could be harnessed in the near future are discussed. In all, a case is made for the full embracement of GP and its variants in hydrological modelling studies, so as to put in place strategies that would translate into meaningful progress in the modelling of water resource systems and positively influence decision-making by relevant stakeholders.
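A minimal sketch of GP-based symbolic regression of the kind used in rainfall-runoff studies, here using the gplearn library on synthetic data; the predictors, target relationship and hyperparameters are assumptions made purely to illustrate the workflow, not results or settings from the paper.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

# Synthetic rainfall/temperature predictors and a runoff target, only to
# illustrate the GP workflow; real studies would use observed series.
rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(200, 2))      # [rainfall, temperature]
y = 0.6 * X[:, 0] - 0.1 * X[:, 1] + rng.normal(0, 2, 200)

gp = SymbolicRegressor(
    population_size=500,
    generations=20,
    function_set=("add", "sub", "mul", "div"),
    metric="rmse",
    parsimony_coefficient=0.01,   # penalise overly long expressions
    random_state=0,
)
gp.fit(X, y)
print(gp._program)   # evolved symbolic expression for runoff
```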

Keywords: computational modelling, evolutionary algorithms, genetic programming, hydrological modelling

Procedia PDF Downloads 298
374 Analysis of an IncResU-Net Model for R-Peak Detection in ECG Signals

Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar

Abstract:

Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology: a non-invasive, pain-free procedure that measures the heart’s electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG’s form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged in time, which further complicates visual diagnosis and can considerably delay disease detection. In this context, deep learning methods have risen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis; they facilitate the computation of large sets of data and can provide early and precise diagnoses. Therefore, cardiology is one of the fields that can benefit most from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to detect R-peaks in ECG signals. Its performance is further evaluated on ECG signals of different origins and characteristics to test the model’s ability to generalise its outcomes. Performance of the model for detection of R-peaks in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise and when presented with data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences in the normal cardiac activity of their patients.
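A toy stand-in for the IncResU-Net studied here, assuming a Keras/TensorFlow setting: a small fully convolutional 1-D network that outputs, for every sample of an ECG window, the probability of being an R-peak. Layer sizes and the window length are illustrative assumptions, not the paper's architecture.

```python
import tensorflow as tf

def build_rpeak_detector(window: int = 1024) -> tf.keras.Model:
    """Minimal 1-D fully convolutional network: per-sample R-peak
    probability for an ECG window of `window` samples."""
    inputs = tf.keras.Input(shape=(window, 1))
    x = tf.keras.layers.Conv1D(16, 9, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv1D(32, 9, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv1D(32, 9, padding="same", activation="relu")(x)
    outputs = tf.keras.layers.Conv1D(1, 1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# Training would pair ECG windows with binary masks marking R-peak
# positions, pooled from several databases (e.g. MIT-BIH) as in the study.
model = build_rpeak_detector()
model.summary()
```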

Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks

Procedia PDF Downloads 186
373 Analysis and Identification of Different Factors Affecting Students’ Performance Using a Correlation-Based Network Approach

Authors: Jeff Chak-Fu Wong, Tony Chun Yin Yip

Abstract:

The transition from secondary school to university seems exciting for many first-year students but can be more challenging than expected. Enabling instructors to know students’ learning habits and styles enhances their understanding of the students’ learning backgrounds, allows teachers to provide better support for their students, and therefore has high potential to improve teaching quality and learning, especially in mathematics-related courses. The aim of this research is to collect students’ data using online surveys, to analyse student factors using learning analytics and educational data mining, and to discover the characteristics of students at risk of falling behind in their studies based on their previous academic backgrounds and the collected data. In this paper, we use correlation-based distance methods and mutual information for measuring relationships between student factors. We then develop a factor network using the Minimum Spanning Tree method and consider, as further study, analysing the topological properties of these networks using social network analysis tools. Under the framework of mutual information, two graph-based feature filtering methods, i.e., unsupervised and supervised infinite feature selection algorithms, are used to analyse the students’ data, to rank and select appropriate subsets of features, and to yield effective results in identifying the factors affecting students at risk of failing. This discovered knowledge may help students as well as instructors enhance educational quality by identifying possible under-performers at the beginning of the first semester and giving them special attention in order to support their learning process and improve their learning outcomes.
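A brief sketch of the correlation-based factor network construction, assuming the standard distance d_ij = sqrt(2 * (1 - rho_ij)) and a pandas data frame of survey factors; the column names in the usage comment are hypothetical.

```python
import numpy as np
import pandas as pd
import networkx as nx

def factor_mst(df: pd.DataFrame) -> nx.Graph:
    """Build a factor network: correlation-based distances between
    student factors, then keep only the Minimum Spanning Tree of the
    fully connected factor graph."""
    corr = df.corr()
    dist = np.sqrt(2.0 * (1.0 - corr))
    g = nx.Graph()
    cols = list(df.columns)
    for i, a in enumerate(cols):
        for b in cols[i + 1:]:
            g.add_edge(a, b, weight=float(dist.loc[a, b]))
    return nx.minimum_spanning_tree(g)

# Usage (hypothetical factor names from a survey data frame):
# mst = factor_mst(survey_df[["gpa", "study_hours", "attendance", "sleep"]])
# print(sorted(mst.edges(data="weight")))
```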

Keywords: students' academic performance, correlation-based distance method, social network analysis, feature selection, graph-based feature filtering method

Procedia PDF Downloads 129
372 A Convolutional Neural Network Based Vehicle Theft Detection, Location, and Reporting System

Authors: Michael Moeti, Khuliso Sigama, Thapelo Samuel Matlala

Abstract:

One of the principal challenges the world is confronted with is insecurity. The crime rate is increasing exponentially, and protecting our physical assets, especially in the motoring industry, is becoming impossible through our own strength alone. The need to develop technological solutions that detect and report theft without any human interference is inevitable. This is critical, especially for vehicle owners, to ensure theft detection and speedy identification of the vehicle for recovery efforts in cases where a vehicle is missing or attempted theft is taking place. The vehicle theft detection system uses a Convolutional Neural Network (CNN) to recognise the driver's face, captured using an installed mobile phone device. The location identification function uses the Global Positioning System (GPS) to determine the real-time location of the vehicle. Upon identification of the location, Global System for Mobile Communications (GSM) technology is used to report or notify the vehicle owner about the whereabouts of the vehicle. The mobile app was implemented in Python, which is well suited to machine learning as it allows easy access to machine learning algorithms through its widely developed library ecosystem; the graphical user interface was developed in Java, which is better suited to mobile development. Google's online database (Firebase) was used as the storage layer for the application. The system integration test was performed using a simple percentage analysis. Sixty (60) vehicle owners participated in this study as a sample, and questionnaires were used in order to establish the acceptability of the developed system. The results indicate the efficiency of the proposed system; consequently, the paper proposes that the system can effectively monitor the vehicle anywhere, even if it is driven outside its normal jurisdiction. Moreover, the system can be used as a database to detect, locate and report missing vehicles to different security agencies.
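A sketch of the detect-locate-report control flow described above; recognise_driver, read_gps and send_sms are hypothetical stand-ins for the CNN classifier, the GPS module and the GSM notification, and the authorised labels and confidence threshold are illustrative assumptions.

```python
# Illustrative control flow only; the helpers are passed in as callables
# so no particular face-recognition, GPS or GSM library is implied.

AUTHORISED = {"owner", "spouse"}

def monitor(frame, recognise_driver, read_gps, send_sms, owner_number):
    """If the CNN does not recognise the driver as an authorised user,
    report the vehicle's current position to the owner via SMS."""
    label, confidence = recognise_driver(frame)   # CNN face recognition
    if label in AUTHORISED and confidence > 0.9:
        return "authorised"
    lat, lon = read_gps()                         # GPS module
    send_sms(owner_number,
             f"Possible theft: unrecognised driver at {lat:.5f},{lon:.5f}")
    return "alert-sent"
```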

Keywords: CNN, location identification, tracking, GPS, GSM

Procedia PDF Downloads 167