Search results for: hybrid genetic algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4833

1413 A Lifetime-Enhancing Monitoring Node Distribution Using Minimum Spanning Tree in Mobile Ad Hoc Networks

Authors: Sungchul Ha, Hyunwoo Kim

Abstract:

In mobile ad hoc networks, all nodes have only limited resources and computational ability, so a communication topology with a long lifetime benefits every node. There is a variety of research on security problems in wireless ad hoc networks, much of it aimed at efficient security schemes that reduce network power consumption and extend network lifetime. Because a new node can join the network at any time, wireless ad hoc networks are exposed to various threats and can be destroyed by attacks. Resource consumption is unavoidable when securing a network, but excessive consumption critically shortens network lifetime. This paper focuses on efficient monitoring node distribution to enhance network lifetime in wireless ad hoc networks. Since wireless ad hoc networks cannot use the centralized infrastructure and security systems of wired networks, a new, specialized IDS scheme is necessary; it should not only cover all nodes in a network but also enhance the network lifetime. In this paper, we propose an efficient IDS node distribution scheme using the minimum spanning tree (MST) method. Simulation results show that the proposed algorithm outperforms existing algorithms.
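The MST-based placement idea can be illustrated with a short sketch. Assuming the network is given as a weighted link graph (the topology and weights below are hypothetical, not from the paper), Prim's algorithm builds the minimum spanning tree, and the tree's internal (non-leaf) nodes form a natural candidate set for monitoring duty, since together they touch every tree link:

```python
import heapq
from collections import defaultdict

def minimum_spanning_tree(n, edges):
    """Prim's algorithm. Nodes are 0..n-1; edges are (weight, u, v).
    Returns the MST as a list of (u, v, weight)."""
    adj = defaultdict(list)
    for w, u, v in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    visited, tree = {0}, []
    heap = [(w, 0, v) for w, v in adj[0]]
    heapq.heapify(heap)
    while heap and len(visited) < n:
        w, u, v = heapq.heappop(heap)
        if v in visited:
            continue
        visited.add(v)
        tree.append((u, v, w))
        for w2, x in adj[v]:
            if x not in visited:
                heapq.heappush(heap, (w2, v, x))
    return tree

def monitor_candidates(tree):
    """Internal MST nodes: every tree link has at least one endpoint here."""
    degree = defaultdict(int)
    for u, v, _ in tree:
        degree[u] += 1
        degree[v] += 1
    return {node for node, d in degree.items() if d > 1}

# Toy 4-node topology; link weights could encode transmission cost.
edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3), (5, 0, 2)]
tree = minimum_spanning_tree(4, edges)
monitors = monitor_candidates(tree)
```

How monitors are then rotated to balance battery drain is a separate scheduling question that the sketch leaves open.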

Keywords: MANETs, IDS, power control, minimum spanning tree

Procedia PDF Downloads 351
1412 Characterization of Bacteriophage for Biocontrol of Pseudomonas syringae, Causative Agent of Canker in Prunus spp.

Authors: Mojgan Rabiey, Shyamali Roy, Billy Quilty, Ryan Creeth, George Sundin, Robert W. Jackson

Abstract:

Bacterial canker is a major disease of Prunus species such as cherry (Prunus avium). It is caused by Pseudomonas syringae species including P. syringae pv. syringae (Pss) and P. syringae pv. morsprunorum race 1 (Psm1) and race 2 (Psm2). Concerns over the environmental impact of, and developing resistance to, copper controls call for alternative approaches to disease management. One method of control could be achieved using naturally occurring bacteriophage (phage) infective to the bacterial pathogens. Phages were isolated from soil, leaf, and bark of cherry trees in five locations in the South East of England. The phages were assessed for their host range against strains of Pss, Psm1, and Psm2. The phages exhibited a differential ability to infect and lyse different Pss and Psm isolates as well as some other P. syringae pathovars. However, the phages were unable to infect beneficial bacteria such as Pseudomonas fluorescens. A subset of 18 of these phages were further characterised genetically (Random Amplification of Polymorphic DNA-PCR fingerprinting and sequencing) and using electron microscopy. The phages are tentatively identified as belonging to the order Caudovirales and the families Myoviridae, Podoviridae, and Siphoviridae, with genetic material being dsDNA. Future research will fully sequence the phage genomes. The efficacy of the phage, both individually and in cocktails, to reduce disease progression in vivo will be investigated to understand the potential for practical use of these phages as biocontrol agents.

Keywords: bacteriophage, pseudomonas, bacterial canker, biological control

Procedia PDF Downloads 134
1411 Impressions of HyFlex in an Engineering Technology Program in an Undergraduate Urban Commuter Institution

Authors: Zory Marantz

Abstract:

Hybrid flexible (HyFlex) is a pedagogical methodology whereby an instructor delivers content in three modalities: live in-person (LIP), live online synchronous (LOS), and non-live online asynchronous (nLOaS). HyFlex aims to provide the flexibility needed to achieve a cohesive environment across all modalities while incorporating four basic principles: learner's choice, reusability, accessibility, and equivalency. Much of the literature has focused on the advantages of this methodology in letting students choose the learning modality that best suits their schedules and learning styles. Initially geared toward graduate-level students, the concept has been applied to undergraduate studies, particularly during the national pedagogical response to the COVID-19 pandemic. There is still little literature on the practicality and feasibility of HyFlex for hardware-laboratory-intensive engineering technology programs, particularly in dense, urban commuter institutions of higher learning. A lab-based engineering course was taught in the HyFlex modality for a semester, and students were asked to complete a survey about their experience. The data showed that no single mode is preferred by a majority of students, and that the usefulness of any modality is limited by how familiar the student and instructor are with the technology being applied; the technology is only as effective as our understanding of, and comfort with, its functionality. For HyFlex to succeed in an engineering technology environment within an urban commuter institution, faculty and students must be properly introduced to the technology being used.

Keywords: education, HyFlex, technology, urban, commuter, pedagogy

Procedia PDF Downloads 78
1410 On the Influence of the Metric Space in the Critical Behavior of Magnetic Temperature

Authors: J. C. Riaño-Rojas, J. D. Alzate-Cardona, E. Restrepo-Parra

Abstract:

In this work, a study of generic magnetic nanoparticles with varying metric spaces is presented. As the metric space changes, the nanoparticle shape and the inner product also vary, since the energy scale is not conserved. The study is carried out using Monte Carlo simulations combining the Wolff embedding and Metropolis algorithms. The Metropolis algorithm is used in high-temperature regions to reach equilibrium quickly, while the Wolff embedding algorithm is used in the low-temperature and critical regions to reduce the critical slowing-down phenomenon. The number of ions is kept constant across the different shapes, and the critical temperatures are found using finite-size scaling. We observed that the critical temperatures do not exhibit significant changes when the metric space is varied. Additionally, the effective dimension according to the metric space was determined, and a study of static behavior was carried out to obtain the static critical exponents. The objective of this work is to observe the behavior of thermodynamic quantities such as energy, magnetization, specific heat, susceptibility, and Binder cumulants in the critical region, in order to determine whether the magnetic nanoparticles describe their magnetic interactions in Euclidean space or whether there is a correspondence in other metric spaces.
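As a hedged illustration of the Metropolis part of such a simulation, the sketch below runs single-spin Metropolis updates on a small 2-D Ising lattice. This is a stand-in model, not the authors' Wolff embedding code, and the lattice size and temperature are arbitrary:

```python
import math
import random

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep over an L x L Ising lattice with periodic
    boundaries: propose L*L single-spin flips, accepting each with
    probability min(1, exp(-beta * dE))."""
    L = len(spins)
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nb  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] *= -1

rng = random.Random(0)
lattice = [[1] * 4 for _ in range(4)]          # fully ordered start
metropolis_sweep(lattice, beta=10.0, rng=rng)  # deep in the ordered phase
magnetization = sum(sum(row) for row in lattice)
```

Deep in the ordered phase flips are essentially never accepted, so the magnetization stays saturated; near the critical temperature this single-spin dynamics is exactly where cluster algorithms such as Wolff are needed.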

Keywords: nanoparticles, metric, Monte Carlo, critical behaviour

Procedia PDF Downloads 502
1409 System for Electromyography Signal Emulation Through the Use of Embedded Systems

Authors: Valentina Narvaez Gaitan, Laura Valentina Rodriguez Leguizamon, Ruben Dario Hernandez B.

Abstract:

This work describes a physiological signal emulation system based on electromyography (EMG) signals obtained from muscle sensors. These signals are used to extract characteristics with which to model and emulate specific arm movements. The main objective is to develop a new biomedical software system capable of generating physiological signals on embedded systems by establishing the characteristics of the acquired signals. The acquisition system used was Biosignals, with two EMG electrodes placed on the extensor and flexor muscles of the forearm. Processing algorithms were implemented to classify the signals generated by the arm muscles when performing specific movements such as wrist flexion-extension, palmar grip, and wrist pronation-supination. MATLAB was used to condition and preprocess the signals for subsequent classification. Each signal was then modeled mathematically so that it could be generated by the embedded system, and the accuracy of the generated signal was validated using the cross-correlation percentage, obtaining a precision of 96%. The equations were then discretized for emulation on the embedded system, yielding a system capable of generating physiological signals that match the characteristics required for medical analysis.
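The validation step, comparing an emulated waveform against the acquired one via a cross-correlation percentage, can be sketched as a zero-lag normalized cross-correlation (a minimal stand-in for the authors' MATLAB validation; the toy signals are illustrative):

```python
def ncc_percent(x, y):
    """Zero-lag normalized cross-correlation of two equal-length signals,
    expressed as a percentage (100 = perfectly correlated)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return 100.0 * num / den

acquired = [0.0, 0.4, 1.0, 0.4, 0.0, -0.3]   # toy EMG burst
emulated = [0.0, 0.8, 2.0, 0.8, 0.0, -0.6]   # scaled copy of it
score = ncc_percent(acquired, emulated)
```

Because the measure is normalized, a perfectly shaped but rescaled emulation still scores 100%; amplitude calibration has to be checked separately.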

Keywords: classification, electromyography, embedded system, emulation, physiological signals

Procedia PDF Downloads 85
1408 Real-Time Pedestrian Detection Method Based on Improved YOLOv3

Authors: Jingting Luo, Yong Wang, Ying Wang

Abstract:

Pedestrian detection in image or video data is an important and challenging task in security surveillance. The difficulty lies in accurately locating and detecting pedestrians of different scales in complex scenes. To address this, a deep neural network (RT-YOLOv3) is proposed to perform real-time pedestrian detection at different scales in security monitoring. RT-YOLOv3 improves the traditional YOLOv3 algorithm. First, a deep residual network is added to extract pedestrian features. Then, six convolutional neural networks with different scales are designed and fused with the corresponding scale feature maps in the residual network to form the final feature pyramid for the pedestrian detection task; this better characterizes pedestrians. To further improve the accuracy and generalization ability of the model, a hybrid pedestrian dataset training method is used: pedestrian data extracted from the VOC dataset are combined with the INRIA pedestrian dataset for training. Experiments show that the proposed RT-YOLOv3 method achieves 93.57% mAP (mean average precision) at 46.52 frames per second. In terms of accuracy, RT-YOLOv3 performs better than Fast R-CNN, Faster R-CNN, YOLO, SSD, YOLOv2, and YOLOv3. The method reduces the missed detection rate and the false detection rate, improves localization accuracy, and meets the requirements of real-time pedestrian detection.
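Detectors in this family share a common post-processing step, non-maximum suppression over the predicted boxes, which can be sketched independently of the network. The boxes, scores, and 0.5 overlap threshold below are illustrative, not from the paper:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop any remaining box that overlaps it by more than thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)
```

The two near-duplicate boxes collapse to the higher-scoring one, while the distant detection survives.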

Keywords: pedestrian detection, feature detection, convolutional neural network, real-time detection, YOLOv3

Procedia PDF Downloads 124
1407 Metaheuristic Bat Algorithm in Training of Feed-Forward Neural Network for Stock Price Prediction

Authors: Marjan Golmaryami, Marzieh Behzadi

Abstract:

Recent developments in stock exchanges highlight the need for an efficient and accurate method that helps stockholders make better decisions. Since stock markets fluctuate over time and depend on many interacting parameters, good decisions are difficult to make. The purpose of this study is to employ an artificial neural network (ANN), which can handle time-series data and nonlinear relations among variables, to forecast the next day's stock price. Unlike other evolutionary algorithms previously utilized in stock exchange prediction, we trained our proposed neural network with the metaheuristic bat algorithm, which offers fast and powerful convergence, and applied it to stock price prediction for the first time. To assess the performance of the proposed method, this research selected a seven-year dataset of Parsian Bank stock and, after data preprocessing, used three types of ANN (back-propagation ANN, particle swarm optimization ANN, and bat ANN) to predict the closing price of the stock. MATLAB was used to simulate the three ANNs, with mean absolute percentage error (MAPE) as the scoring target. The approach may be adapted to other companies' stocks as well.
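A minimal sketch of the bat algorithm itself, here minimizing a toy sphere function rather than a network's MAPE loss. The population size, loudness A, pulse rate r, frequency range, and the velocity sign convention (which pulls bats toward the current best, a common variant) are all illustrative assumptions:

```python
import random

def bat_optimize(loss, dim, n=20, iters=300, lo=-5.0, hi=5.0, seed=0):
    """Metaheuristic bat algorithm (minimization). Each bat carries a
    position and velocity; frequency f scales the pull toward the best
    solution, and a local random walk around the best refines it."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    best = min(pop, key=loss)[:]
    A, r = 0.9, 0.5  # loudness and pulse emission rate (kept fixed here)
    clamp = lambda v: min(hi, max(lo, v))
    for _ in range(iters):
        for i in range(n):
            f = rng.uniform(0.0, 2.0)
            for d in range(dim):
                vel[i][d] += (best[d] - pop[i][d]) * f
                pop[i][d] = clamp(pop[i][d] + vel[i][d])
            if rng.random() > r:  # local search near the current best
                cand = [clamp(b + 0.1 * rng.gauss(0, 1)) for b in best]
                if loss(cand) < loss(pop[i]) and rng.random() < A:
                    pop[i] = cand
            if loss(pop[i]) < loss(best):
                best = pop[i][:]
    return best

sphere = lambda x: sum(v * v for v in x)
best = bat_optimize(sphere, dim=2)
```

To train a network, `loss` would evaluate the MAPE of a feed-forward net whose weights are the position vector, so `dim` equals the weight count.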

Keywords: artificial neural network (ANN), bat algorithm, particle swarm optimization algorithm (PSO), stock exchange

Procedia PDF Downloads 530
1406 Design and Implementation a Platform for Adaptive Online Learning Based on Fuzzy Logic

Authors: Budoor Al Abid

Abstract:

Educational systems are increasingly provided as open online services, offering guidance and support for individual learners. To adapt such learning systems, a proper evaluation must be made. This paper builds an evaluation model, the Fuzzy C-Means Adaptive System (FCMAS), based on data mining techniques to assess the difficulty of questions. The following steps are implemented. First, a dataset from an international online learning system (slepemapy.cz) is used; it contains over 1,300,000 records with 9 features covering student, question, and answer information along with feedback evaluation. Next, normalization is applied as a preprocessing step. Then, fuzzy c-means (FCM) clustering is used to adapt the difficulty of the questions; the result is data labeled into three clusters according to weight (easy, intermediate, difficult), with the FCM algorithm assigning a label to every question. Finally, a Random Forest (RF) classifier is constructed on the clustered dataset, using 70% of the data for training and 30% for testing; the model achieves a 99.9% accuracy rate. This approach improves the adaptive e-learning system because it depends on student behavior and gives more accurate results in the evaluation process than an evaluation system that depends on feedback alone.
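A minimal 1-D fuzzy c-means sketch. The real system clusters 9-feature records; the toy "difficulty" scores, the m = 2 fuzzifier, and the iteration count here are illustrative:

```python
import random

def fuzzy_c_means(xs, c=3, m=2.0, iters=50, seed=0):
    """1-D fuzzy c-means. Returns (centers, u) where u[i][j] is the
    membership of point i in cluster j; each row of u sums to 1."""
    rng = random.Random(seed)
    centers = rng.sample(xs, c)
    for _ in range(iters):
        u = []
        for x in xs:
            d = [abs(x - cj) or 1e-12 for cj in centers]  # avoid /0
            u.append([1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                                for k in range(c)) for j in range(c)])
        centers = [sum(u[i][j] ** m * xs[i] for i in range(len(xs)))
                   / sum(u[i][j] ** m for i in range(len(xs)))
                   for j in range(c)]
    return centers, u

# Toy per-question difficulty scores forming three rough groups.
scores = [0.10, 0.15, 0.20, 0.50, 0.55, 0.90, 0.95, 1.00]
centers, u = fuzzy_c_means(scores)
```

Each question's label (easy, intermediate, difficult) is then the cluster with the highest membership in its row of `u`.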

Keywords: machine learning, adaptive, fuzzy logic, data mining

Procedia PDF Downloads 174
1405 Impact of Integrated Signals for Doing Human Activity Recognition Using Deep Learning Models

Authors: Milagros Jaén-Vargas, Javier García Martínez, Karla Miriam Reyes Leiva, María Fernanda Trujillo-Guerrero, Francisco Fernandes, Sérgio Barroso Gonçalves, Miguel Tavares Silva, Daniel Simões Lopes, José Javier Serrano Olmedo

Abstract:

Human Activity Recognition (HAR) is having a growing impact on new applications and is driving emerging technologies. Wearable sensors are a key tool for exploring the behavior of the human body when performing activities, since these devices are minimally invasive and comfortable for the wearer. In this study, a database covering three activities is used; the activities were acquired from inertial measurement unit (IMU) sensors and motion capture (MOCAP) systems. The main objective is to compare the performance of four deep learning (DL) models: deep neural network (DNN), convolutional neural network (CNN), recurrent neural network (RNN), and the hybrid convolutional neural network-long short-term memory model (CNN-LSTM), when considering acceleration, velocity, and position, and to evaluate whether integrating the IMU acceleration to obtain velocity and position increases performance when these serve as input to the DL models, compared with the same type of data provided by the MOCAP system. Although the acceleration data are cleaned before integration, results show only a minimal increase in accuracy for the integrated signals.
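The integration step, acceleration to velocity to position, can be sketched with cumulative trapezoidal integration (a common choice, though not necessarily the authors' exact method; the constant-acceleration sample below is illustrative):

```python
def integrate(samples, dt):
    """Cumulative trapezoidal integration of a uniformly sampled signal,
    starting from zero initial conditions."""
    out = [0.0]
    for a, b in zip(samples, samples[1:]):
        out.append(out[-1] + 0.5 * (a + b) * dt)
    return out

dt = 0.1
accel = [2.0] * 11             # 2 m/s^2 held for one second
vel = integrate(accel, dt)     # ends at a*t = 2.0 m/s
pos = integrate(vel, dt)       # ends at a*t^2/2 = 1.0 m
```

In practice any sensor bias is integrated twice and makes position drift quadratically, which is one reason integrated IMU signals may add little accuracy over direct MOCAP positions.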

Keywords: HAR, IMU, MOCAP, acceleration, velocity, position, feature maps

Procedia PDF Downloads 83
1404 Factors Affecting Employee Decision Making in an AI Environment

Authors: Yogesh C. Sharma, A. Seetharaman

Abstract:

The decision-making process in humans is a complicated system influenced by a variety of intrinsic and extrinsic factors. Human decisions have a ripple effect on subsequent decisions. In this study, the scope of human decision making is limited to employees. In an organisation, a person makes a variety of decisions from the time they are hired to the time they retire. The goal of this research is to identify various elements that influence decision-making. In addition, the environment in which a decision is made is a significant aspect of the decision-making process. Employees in today's workplace use artificial intelligence (AI) systems for automation and decision augmentation. The impact of AI systems on the decision-making process is examined in this study. This research is designed based on a systematic literature review. Based on gaps in the literature, limitations and the scope of future research have been identified. Based on these findings, a research framework has been designed to identify various factors affecting employee decision making. Employee decision making is influenced by technological advancement, data-driven culture, human trust, decision automation-augmentation, and workplace motivation. Hybrid human-AI systems require the development of new skill sets and organisational design. Employee psychological safety and supportive leadership influences overall job satisfaction.

Keywords: employee decision making, artificial intelligence (AI) environment, human trust, technology innovation, psychological safety

Procedia PDF Downloads 92
1403 Proxisch: An Optimization Approach of Large-Scale Unstable Proxy Servers Scheduling

Authors: Xiaoming Jiang, Jinqiao Shi, Qingfeng Tan, Wentao Zhang, Xuebin Wang, Muqian Chen

Abstract:

Nowadays, big companies such as Google and Microsoft, which have adequate proxy servers, run their web crawlers against a given website in parallel. But for researchers who lack expensive proxy servers, crawling large amounts of information from a single website in parallel remains a puzzle. In this case, a good choice is to use free public proxy servers harvested from the Internet. To improve the efficiency of such a web crawler, two issues must be considered: (1) tasks may fail owing to the instability of free proxy servers, and (2) a proxy server will be blocked if it visits a single website too frequently. In this paper, we propose Proxisch, an optimization approach for scheduling large numbers of unstable proxy servers, which allows anyone to run a web crawler efficiently at extremely low cost. Proxisch works efficiently by making maximum use of reliable proxy servers. To solve the second problem, it establishes a frequency control mechanism that keeps the visiting frequency of any chosen proxy server below the website's limit. The results show that our approach performs better than other scheduling algorithms.
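The frequency-control idea can be sketched with a priority queue keyed on each proxy's next-available time, breaking ties toward the more reliable proxy. This is a simplified sketch, not the Proxisch algorithm itself; proxy names, reliability scores, and the interval are hypothetical:

```python
import heapq

def schedule(request_times, proxies, min_interval):
    """Assign each request (timestamps in ascending order) to a proxy.
    The heap key (next_free, -reliability, name) guarantees no proxy
    visits the target site more often than once per min_interval."""
    heap = [(0, -rel, name) for name, rel in proxies]
    heapq.heapify(heap)
    plan = []
    for t in request_times:
        next_free, neg_rel, name = heapq.heappop(heap)
        start = max(t, next_free)  # wait out the rate limit if needed
        plan.append((start, name))
        heapq.heappush(heap, (start + min_interval, neg_rel, name))
    return plan

proxies = [("proxy-a", 0.9), ("proxy-b", 0.5)]  # (name, reliability)
plan = schedule([0, 0, 0], proxies, min_interval=10)
```

With both proxies busy, the third request is deliberately delayed until the more reliable proxy is allowed to revisit the site.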

Keywords: proxy server, priority queue, optimization algorithm, distributed web crawling

Procedia PDF Downloads 200
1402 Adaptive Strategies of Maize in Leaf Traits to N Deficiency

Authors: Panpan Fan, Bo Ming, Niels Anten, Jochem Evers, Yaoyao Li, Shaokun Li, Ruizhi Xie

Abstract:

Nitrogen (N) utilization for crop production under N-deficient conditions is subject to a trade-off between maintaining specific leaf N content (SLN), important for radiation-use efficiency (RUE), and maintaining leaf area (LA) development, important for light capture. This paper explores how maize handles this trade-off through responses in SLN, LA, and their underlying traits during the vegetative and reproductive growth stages. In a ten-year N fertilization trial in Jilin province, Northeast China, three N fertilizer levels were maintained: N deficiency (N0), low N supply (N1), and high N supply (N2). We analyzed data from years 8 and 10 of this experiment for two common hybrids. Under N deficiency, maize plants maintained LA and decreased SLN during the vegetative stages, while both LA and SLN decreased comparably during the reproductive stages. Canopy-average specific leaf area (SLA) decreased sharply during the vegetative stages and slightly during the reproductive stages, mainly because senesced leaves in the lower canopy had a higher SLA. In the vegetative stage, maize maintained leaf area at low N by maintaining leaf biomass (albeit with lower N content per unit mass) and slightly increasing SLA. These responses to N deficiency were stronger in the hybrid XY335 than in ZD958. We conclude that the main strategy of maize to cope with low N is to maintain plant growth, mainly by increasing SLA throughout the plant during early growth; N was too limiting for either strategy to be followed during the later growth stages.

Keywords: leaf N content per unit leaf area, N deficiency, specific leaf area, maize strategy

Procedia PDF Downloads 75
1401 Full Characterization of Heterogeneous Antibody Samples under Denaturing and Native Conditions on a Hybrid Quadrupole-Orbitrap Mass Spectrometer

Authors: Rowan Moore, Kai Scheffler, Eugen Damoc, Jennifer Sutton, Aaron Bailey, Stephane Houel, Simon Cubbon, Jonathan Josephs

Abstract:

Purpose: MS analysis of monoclonal antibodies (mAbs) at the protein and peptide levels is critical during the development and production of biopharmaceuticals. The compositions of current-generation therapeutic proteins are often complex due to various modifications which may affect efficacy. Intact proteins analyzed by MS are detected in higher charge states that also add complexity to mass spectra. Protein analysis under native or native-like conditions, with zero or minimal organic solvent and neutral or weakly acidic pH, decreases the charge states, so mAbs are detected at higher m/z with more spatial resolution. Methods: Three commercially available mAbs were used for all experiments. Intact proteins were desalted online using size exclusion chromatography (SEC) or reversed-phase chromatography coupled online with a mass spectrometer. For streamlined use of the LC-MS platform we used a single SEC column and alternately selected specific mobile phases to perform separations under either denaturing or native-like conditions: buffer A (20% ACN, 0.1% FA) with buffer B (100 mM ammonium acetate). For peptide analysis, mAbs were proteolytically digested with and without prior reduction and alkylation. The mass spectrometer used for all experiments was a commercially available Thermo Scientific™ hybrid Quadrupole-Orbitrap™ mass spectrometer equipped with the new BioPharma option, which includes a High Mass Range (HMR) mode allowing improved high-mass transmission and mass detection up to 8000 m/z. Results: We analyzed the profiles of three mAbs under denaturing and native conditions by direct infusion with offline desalting and with online desalting via size exclusion and reversed-phase columns. The presence of high salt under denaturing conditions was found to influence the observed charge-state envelope and to impact mass accuracy after spectral deconvolution.
The significantly lower charge states observed under native conditions improve the spatial resolution of protein signals and bring significant benefits for the analysis of antibody mixtures, e.g. lysine variants, degradants, or sequence variants. This type of analysis requires the detection of masses beyond the standard range, up to 6000 m/z, and hence the extended capabilities of the HMR mode. We compared each antibody sample analyzed individually with mixtures in various relative concentrations. For this type of analysis, we observed that apparent native structures persist and that ESI benefits from the addition of low amounts of acetonitrile and formic acid in combination with the ammonium acetate-buffered mobile phase. For analyses at the peptide level, we analyzed reduced/alkylated and non-reduced proteolytic digests of the individual antibodies, separated via reversed-phase chromatography, aiming to retrieve as much information as possible regarding sequence coverage, disulfide bridges, post-translational modifications such as various glycans, and sequence variants, together with their relative quantification. All acquired data were submitted to a single software package aiming to obtain a complete picture of the molecules analyzed. Here we demonstrate the capabilities of the mass spectrometer to fully characterize homogeneous and heterogeneous therapeutic proteins on a single platform. Conclusion: Full characterization of heterogeneous intact protein mixtures by improved mass separation on a quadrupole-Orbitrap™ mass spectrometer with extended capabilities has been demonstrated.
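The charge-state arithmetic behind the extended mass range can be sketched directly. For an intact mAb of roughly 148 kDa (an illustrative mass, not one of the paper's samples), native ESI charge states around 25+ already sit near 6000 m/z, whereas denatured charge states around 50+ stay below 3000 m/z:

```python
PROTON = 1.00728  # proton mass in Da

def mz(neutral_mass, charge):
    """m/z of a protonated ion: (M + z * mH) / z."""
    return (neutral_mass + charge * PROTON) / charge

mab_mass = 148000.0            # illustrative intact mAb mass, Da
native = mz(mab_mass, 25)      # low charge -> high m/z
denatured = mz(mab_mass, 50)   # high charge -> lower m/z
```

This is why a detector limited to a conventional ~4000 m/z range sees denatured envelopes comfortably but clips native ones, motivating the 8000 m/z HMR mode.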

Keywords: disulfide bond analysis, intact analysis, native analysis, mass spectrometry, monoclonal antibodies, peptide mapping, post-translational modifications, sequence variants, size exclusion chromatography, therapeutic protein analysis, UHPLC

Procedia PDF Downloads 350
1400 An Enhanced Distributed Weighted Clustering Algorithm for Intra and Inter Cluster Routing in MANET

Authors: K. Gomathi

Abstract:

A Mobile Ad hoc Network (MANET) is defined as a collection of routable wireless mobile nodes with no centralized administration that communicate with each other using radio signals. MANETs are often deployed in hostile environments, where attackers try to disturb secure data transfer and drain valuable network resources. Since a MANET is a battery-operated network, preserving network resources is essential. For resource-constrained computation, efficient routing, and increased network stability, the network is divided into smaller groups called clusters. The clustering architecture consists of cluster heads (CH), ordinary nodes, and gateways; the CH is responsible for inter- and intra-cluster routing. CH election is a prominent research area, and many algorithms have been developed using different metrics. A CH with a longer life sustains the network lifetime; for this purpose a secondary cluster head (SCH) is also elected, which is more economical. To nominate an efficient CH, an Enhanced Distributed Weighted Clustering Algorithm (EDWCA) is proposed. This approach considers metrics such as battery power, degree difference, and node speed for CH election. The proficiency of the proposed approach is evaluated and compared with an existing algorithm using Network Simulator (NS-2).
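The weighted election step can be sketched as a scoring function over the three metrics the abstract names; the metric weights, the ideal degree, and the node values below are hypothetical, not the EDWCA parameters:

```python
def elect_cluster_heads(nodes, ideal_degree=3, w=(0.5, 0.3, 0.2)):
    """Weighted CH election sketch. nodes maps name -> (battery, degree,
    speed). Higher battery, degree close to the ideal, and lower speed
    score better; the runner-up becomes the secondary cluster head."""
    def score(name):
        battery, degree, speed = nodes[name]
        return (w[0] * battery
                - w[1] * abs(degree - ideal_degree)
                - w[2] * speed)
    ranked = sorted(nodes, key=score, reverse=True)
    return ranked[0], ranked[1]  # (CH, SCH)

nodes = {
    "a": (0.9, 3, 1.0),  # strong battery, ideal degree, slow mover
    "b": (0.9, 3, 5.0),  # fast-moving node
    "c": (0.2, 3, 1.0),  # weak battery
}
ch, sch = elect_cluster_heads(nodes)
```

In a distributed setting each node would compute its own score and broadcast it within the neighborhood rather than relying on this centralized sort.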

Keywords: MANET, EDWCA, clustering, cluster head

Procedia PDF Downloads 380
1399 A Novel Breast Cancer Detection Algorithm Using Point Region Growing Segmentation and Pseudo-Zernike Moments

Authors: Aileen F. Wang

Abstract:

Mammography has been one of the most reliable methods for early detection and diagnosis of breast cancer. However, mammography misses about 17% and up to 30% of breast cancers because of the subtle and unstable appearance of breast cancer in its early stages. Recent computer-aided diagnosis (CADx) technology using Zernike moments has improved detection accuracy, but it has several drawbacks: it uses manual segmentation, Zernike moments are not robust, and it still has a relatively high false negative rate (FNR) of 17.6%. This project focuses on the development of a novel breast cancer detection algorithm that automatically segments the breast mass and further reduces the FNR. The algorithm consists of automatic segmentation of a single breast mass using point region growing segmentation, reconstruction of the segmented breast mass using pseudo-Zernike moments, and classification of the breast mass using the root mean square (RMS). A comparative study of the various algorithms for segmentation and reconstruction of breast masses was performed on randomly selected mammographic images. The results demonstrate that the newly developed algorithm is the best in terms of accuracy and cost-effectiveness. More importantly, the new RMS classifier has the lowest FNR, at 6%.
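Point region growing can be sketched as a breadth-first flood fill from a seed pixel, admitting 4-connected neighbours whose intensity stays within a tolerance of the seed. The tiny image and tolerance are illustrative; real mammograms would also need seed selection and stopping criteria:

```python
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from seed = (row, col): repeatedly add 4-connected
    neighbours whose intensity is within tol of the seed intensity."""
    h, w = len(img), len(img[0])
    base = img[seed[0]][seed[1]]
    region = {seed}
    frontier = deque([seed])
    while frontier:
        i, j = frontier.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < h and 0 <= nj < w and (ni, nj) not in region
                    and abs(img[ni][nj] - base) <= tol):
                region.add((ni, nj))
                frontier.append((ni, nj))
    return region

image = [
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 9, 9, 0],
    [0, 0, 0, 0],
]
mass = region_grow(image, seed=(1, 1), tol=1)
```

The returned pixel set is what would then be reconstructed with pseudo-Zernike moments for classification.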

Keywords: computer aided diagnosis, mammography, point region growing segmentation, pseudo-zernike moments, root mean square

Procedia PDF Downloads 438
1398 A Conv-Long Short-term Memory Deep Learning Model for Traffic Flow Prediction

Authors: Ali Reza Sattarzadeh, Ronny J. Kutadinata, Pubudu N. Pathirana, Van Thanh Huynh

Abstract:

Traffic congestion has become a severe worldwide problem, affecting everyday life, fuel consumption, time, and air pollution. The primary causes of these issues are inadequate transportation infrastructure, poor traffic signal management, and a rising population. Traffic flow forecasting is one of the essential and effective methods in urban congestion and traffic management, and it has attracted the attention of researchers. With the development of technology, undeniable progress has been achieved by existing methods; however, there is still room for improvement in how temporal and spatial features are extracted and weighted in traffic flow sequences. In the proposed model, we combine convolutional neural network (CNN) and long short-term memory (LSTM) deep learning models to mine nonlinear correlations and evaluate their effectiveness in increasing the accuracy of traffic flow prediction on a real dataset. The experimental results indicate that Conv-LSTM networks increase the productivity and accuracy of deep learning models for traffic flow prediction.
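Whatever the network architecture, traffic series are typically fed to it as supervised (window, next value) pairs. A sketch of that windowing step, with an illustrative lookback of 3 and toy flow counts (not the paper's dataset):

```python
def make_windows(series, lookback, horizon=1):
    """Turn a univariate traffic series into (input window, target) pairs
    for supervised training of a sequence model such as Conv-LSTM."""
    xs, ys = [], []
    for i in range(len(series) - lookback - horizon + 1):
        xs.append(series[i:i + lookback])
        ys.append(series[i + lookback + horizon - 1])
    return xs, ys

flow = [120, 135, 150, 160, 155, 140]  # vehicles per 5-minute interval
X, y = make_windows(flow, lookback=3)
```

For a spatiotemporal model, each window element would be a vector of counts over several road sensors rather than a single scalar.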

Keywords: deep learning algorithms, intelligent transportation systems, spatiotemporal features, traffic flow prediction

Procedia PDF Downloads 150
1397 Effects of Chemicals in Elderly

Authors: Ali Kuzu

Abstract:

There are about 800 thousand chemicals in our environment, and the number increases by more than a thousand every year. While most of these chemicals are used as components in various consumer products, some are encountered as industrial waste in the environment. Unfortunately, many of these chemicals are hazardous to humans. According to the International Programme on Chemical Safety of the World Health Organization, cancer is of major concern among the chronic health effects of chemicals; many substances have been found in recent years to be carcinogenic in one or more species of laboratory animals. Especially with respect to long-term effects, the response to a chemical may vary, quantitatively or qualitatively, across groups of individuals depending on predisposing conditions such as nutritional status, disease status, current infection, climatic extremes, genetic features, sex, and age. Understanding the response of such specific risk groups is an important area of toxicology research. People aged 65 and over are defined as elderly. The elderly population in the world is about 600 million, corresponding to roughly 8 percent of the world population. While one in four people in Japan is elderly, the elderly population is close to 20 percent in many developed countries, and in these countries it is growing more rapidly than the total population. The negative effects of chemicals on the elderly have become an important health-care issue in recent decades. The aged population is more susceptible to the harmful effects of environmental chemicals: as organ systems decline with age, the ability of the body to eliminate harmful chemical substances is also reduced. With increasing life expectancy, more and more people will face problems associated with chemical residues.

Keywords: elderly, chemicals’ effects, aged care, care need

Procedia PDF Downloads 435
1396 On Enabling Miner Self-Rescue with In-Mine Robots using Real-Time Object Detection with Thermal Images

Authors: Cyrus Addy, Venkata Sriram Siddhardh Nadendla, Kwame Awuah-Offei

Abstract:

Surface robots in modern underground mine rescue operations suffer from several limitations in enabling a prompt self-rescue. Therefore, the possibility of designing and deploying in-mine robots to expedite miner self-rescue can have a transformative impact on miner safety. These in-mine robots for miner self-rescue can be envisioned to carry out diverse tasks such as object detection, autonomous navigation, and payload delivery. Specifically, this paper investigates the challenges in the design of object detection algorithms for in-mine robots using thermal images, especially to detect people in real-time. A total of 125 thermal images were collected in the Missouri S&T Experimental Mine with the help of student volunteers using the FLIR TG 297 infrared camera, which were pre-processed into training and validation datasets with 100 and 25 images, respectively. Three state-of-the-art, pre-trained real-time object detection models, namely YOLOv5, YOLO-FIRI, and YOLOv8, were considered and re-trained using transfer learning techniques on the training dataset. On the validation dataset, the re-trained YOLOv8 outperforms the re-trained versions of both YOLOv5, and YOLO-FIRI.

Keywords: miner self-rescue, object detection, underground mine, YOLO

Procedia PDF Downloads 58
1395 Heavy Metal Contamination in Soils: Detection and Assessment Using Machine Learning Algorithms Based on Hyperspectral Images

Authors: Reem El Chakik

Abstract:

The levels of heavy metals in agricultural lands in Lebanon have witnessed a noticeable increase in the past few years due to increased anthropogenic pollution sources. Heavy metals pose a serious threat to the environment because they are non-biodegradable and persistent, and thus accumulate to dangerous levels in the soil. Besides the traditional laboratory and chemical analysis methods, Hyperspectral Imaging (HSI) has proven its efficiency in the rapid detection of HM contamination. In Lebanon, continuous environmental monitoring, including monitoring of HM levels in agricultural soils, is lacking, due in part to the high cost of analysis. Hence, this proposed research aims to define the current national status of HM contamination in agricultural soil and to evaluate the effectiveness of using HSI to detect HMs in contaminated agricultural fields. To achieve the two main objectives of this study, soil samples were collected from different areas throughout the country and analyzed for HMs using Atomic Absorption Spectrophotometry (AAS). The results were compared to those obtained with the HSI technique, applied using a Hyspex SWIR-384 camera. The results showed that Lebanese agricultural soils contain high contamination levels of Zn and that the more clayey the soil is, the lower its reflectance.

Keywords: agricultural soils in Lebanon, atomic absorption spectrophotometer, hyperspectral imaging, heavy metals contamination

Procedia PDF Downloads 94
1394 FPGA Implementation of a Marginalized Particle Filter for Delineation of P and T Waves of ECG Signal

Authors: Jugal Bhandari, K. Hari Priya

Abstract:

The ECG signal provides important clinical information that can be used to predict heart-related diseases. Accordingly, delineation of the ECG signal is an important task, and delineation of the P and T waves in particular is a complex one. This paper deals with the study and analysis of the ECG signal by means of efficient filters designed in Verilog together with the MATLAB tool. It covers the generation and simulation of the ECG signal from real-time ECG data, and ECG signal filtering and processing through the analysis of different algorithms and techniques. We design a basic particle filter that generates a dynamic model depending on the present and past input samples and then produces the desired output. Afterwards, the output is processed in MATLAB to obtain the actual shape and accurate values of the ranges of the P-wave and T-wave of the ECG signal. Questasim, a Mentor Graphics tool, is used for simulation and functional verification. The same design is verified again using Xilinx ISE, which is also used for synthesis, mapping, and bit-file generation, and the system is implemented on a Xilinx FPGA board. The final FPGA results are verified with ChipScope Pro, where the output data can be observed.
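As a rough illustration of the particle-filter idea described above (propagate particles, weight them by the observation likelihood, resample), here is a minimal bootstrap particle filter for a one-dimensional random-walk state. The state model, noise levels, and particle count are illustrative assumptions, not the paper's Verilog design:

```python
import math
import random

def particle_filter(observations, n=500, proc_std=1.0, obs_std=1.0):
    """Minimal bootstrap particle filter for a 1-D random-walk state."""
    particles = [random.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for z in observations:
        # propagate each particle through the random-walk dynamics
        particles = [x + random.gauss(0.0, proc_std) for x in particles]
        # weight particles by the Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((z - x) / obs_std) ** 2) for x in particles]
        total = sum(weights)
        if total == 0.0:                       # guard against weight underflow
            weights = [1.0 / n] * n
        else:
            weights = [w / total for w in weights]
        # posterior-mean state estimate
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # multinomial resampling to concentrate on likely states
        particles = random.choices(particles, weights=weights, k=n)
    return estimates

random.seed(1)
track = particle_filter([5.0] * 20)  # constant observation at level 5.0
```

After a few iterations the posterior-mean estimate settles near the observed level, which is the behavior a hardware implementation would reproduce sample by sample.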

Keywords: ECG, MATLAB, Bayesian filtering, particle filter, Verilog hardware description language

Procedia PDF Downloads 352
1393 Apolipoprotein A1 -75 G to a Substitution and Its Relationship with Serum ApoA1 Levels among Indian Punjabi Population

Authors: Savjot Kaur, Mridula Mahajan, AJS Bhanwer, Santokh Singh, Kawaljit Matharoo

Abstract:

Background: Disorders of lipid metabolism and genetic predisposition are CAD risk factors. ApoA1 is the apolipoprotein component of anti-atherogenic high-density lipoprotein (HDL) particles. The protective action of HDL and ApoA1 is attributed to their central role in reverse cholesterol transport (RCT). Aim: This study aimed to identify sequence variation in ApoA1 (-75G>A) and its association with serum ApoA1 levels. Methods: A total of 300 CAD patients and 300 normal individuals (controls) were analyzed. The PCR-RFLP method was used to determine the DNA polymorphism in the ApoA1 gene: PCR products were digested with the restriction enzyme MspI, followed by agarose gel electrophoresis. Serum apolipoprotein A1 concentration was estimated with an immunoturbidimetric method. Results: Deviation from Hardy-Weinberg equilibrium (HWE) was observed for this gene variant. The A allele frequency was higher among coronary artery disease patients (53.8%) than among controls (45.5%), p = 0.004, OR = 1.38 (1.11-1.75). Under recessive model analysis (AA vs. GG+GA), the AA genotype of the ApoA1 G>A substitution conferred an approximately 1.7-fold increased risk of CAD susceptibility (p = 0.002, OR = 1.72 (1.2-2.43)). Among individuals with serum ApoA1 levels < 107, the A allele frequency was higher in CAD cases (50%) than in controls (43.4%) [p = 0.23, OR = 1.2 (0.84-2)], and the A allele did not occur at all in individuals with ApoA1 levels > 177. Conclusion: Serum ApoA1 levels were associated with the ApoA1 promoter region variation and influence CAD risk. Individuals with the ApoA1 -75 A allele carry an excess risk of developing CAD as a result of the allele's effect in lowering serum ApoA1 concentrations.
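The allele-level odds ratio quoted above can be approximately reproduced from the reported frequencies. The sketch below is illustrative arithmetic only; the study presumably computed the OR from raw allele counts, so a small rounding difference from the reported 1.38 is expected:

```python
def odds_ratio(p_case, p_control):
    """Odds ratio of carrying an allele, from its frequency in cases vs. controls."""
    odds_case = p_case / (1 - p_case)
    odds_control = p_control / (1 - p_control)
    return odds_case / odds_control

# A-allele frequencies from the abstract: 53.8% in CAD cases, 45.5% in controls
or_a = odds_ratio(0.538, 0.455)  # ~1.39, close to the reported OR of 1.38
```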

Keywords: apolipoprotein A1 (G>A) gene polymorphism, coronary artery disease (CAD), reverse cholesterol transport (RCT)

Procedia PDF Downloads 303
1392 Left to Right-Right Most Parsing Algorithm with Lookahead

Authors: Jamil Ahmed

Abstract:

The Left to Right-Right Most (LR) parsing algorithm is a widely used syntax analysis algorithm. It relies on a parsing table, which is extracted from the grammar. The parsing table specifies the actions to be taken during parsing, and it must have no action conflicts for the same input symbol. This requirement imposes a condition on the class of grammars the LR algorithm can handle. However, there are grammars whose parsing tables hold action conflicts. In such cases, the algorithm needs the capability of scanning (looking ahead at) input symbols beyond the current input symbol. In this paper, a 'Left to Right'-'Right Most' parsing algorithm with lookahead capability is introduced; this lookahead capability is the major contribution of the paper. The practicality of the proposed algorithm is substantiated by a parser implementation of the context-free grammar (CFG) of a previously proposed programming language, 'State Controlled Object Oriented Programming' (SCOOP). SCOOP's context-free grammar has 125 productions and 192 item sets. The algorithm parses SCOOP even though the grammar requires looking ahead at input symbols due to action conflicts in its parsing table. The proposed LR parsing algorithm with lookahead capability can be viewed as an optimization of the 'Simple Left to Right'-'Right Most' (SLR) parsing algorithm.
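To illustrate the kind of problem the paper addresses: an LR parsing table maps a (state, symbol) cell to one action, and a conflict means a cell holds two. The toy table below (hypothetical states, grammar, and resolution rule, not SCOOP's actual table) shows how consulting one lookahead symbol can choose between a conflicting shift and reduce:

```python
# Toy LR action table; state 3 holds a shift/reduce conflict on '+'.
ACTIONS = {
    (3, '+'): [('shift', 5), ('reduce', 'E -> E + T')],  # conflict cell
    (3, ')'): [('reduce', 'E -> E + T')],                # no conflict
}

def choose_action(state, symbol, lookahead):
    """Resolve a conflict cell by peeking one symbol past the current input."""
    candidates = ACTIONS[(state, symbol)]
    if len(candidates) == 1:
        return candidates[0]
    # hypothetical rule: shift when the lookahead can start a new operand
    if lookahead in ('id', '('):
        return next(a for a in candidates if a[0] == 'shift')
    return next(a for a in candidates if a[0] == 'reduce')
```

A plain LR(0) driver would be stuck at the conflict cell; the one-symbol peek is what the paper's lookahead extension generalizes.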

Keywords: left to right-right most parsing, syntax analysis, bottom-up parsing algorithm

Procedia PDF Downloads 109
1391 PEINS: A Generic Compression Scheme Using Probabilistic Encoding and Irrational Number Storage

Authors: P. Jayashree, S. Rajkumar

Abstract:

With social networks and smart devices generating a multitude of data, effective data management is the need of the hour for network and cloud applications. Some applications need effective storage while others need effective communication over networks, and data reduction is a handy solution that meets both requirements. Most data compression techniques are based on data statistics and result in either lossy or lossless data reduction. Though lossy reduction produces better compression ratios than lossless methods, many applications require data accuracy and minute details to be preserved. A variety of data compression algorithms exist in the literature for different forms of data, such as text, image, and multimedia data. In the proposed work, a generic progressive compression algorithm based on probabilistic encoding, called PEINS, is presented as an enhancement of the irrational number storage coding technique, addressing the storage issues of increasing data volumes as a cost-effective solution that also offers a degree of data security as a secondary outcome. The proposed work demonstrates cost effectiveness in terms of a better compression ratio with no deterioration in compression time.
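As a minimal illustration of the probabilistic-encoding idea PEINS builds on (not the PEINS algorithm itself), the arithmetic-coding sketch below narrows a sub-interval of [0, 1) according to symbol probabilities; the three-symbol model is an assumption for the example:

```python
from fractions import Fraction

# assumed symbol model for the example
PROBS = {'a': Fraction(1, 2), 'b': Fraction(1, 4), 'c': Fraction(1, 4)}

def encode(message):
    """Return the [low, high) interval that uniquely identifies the message."""
    low, width = Fraction(0), Fraction(1)
    for sym in message:
        start = Fraction(0)
        for s, p in PROBS.items():   # cumulative probability before sym
            if s == sym:
                break
            start += p
        low += start * width         # shift into the symbol's slot
        width *= PROBS[sym]          # narrow by the symbol's probability
    return low, low + width

interval = encode('ab')  # -> (Fraction(1, 4), Fraction(3, 8))
```

More probable symbols shrink the interval less, so they cost fewer bits to pin down, which is the statistical basis of probabilistic compression schemes.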

Keywords: compression ratio, generic compression, irrational number storage, probabilistic encoding

Procedia PDF Downloads 274
1390 Development of Geo-computational Model for Analysis of Lassa Fever Dynamics and Lassa Fever Outbreak Prediction

Authors: Adekunle Taiwo Adenike, I. K. Ogundoyin

Abstract:

Lassa fever is a neglected tropical viral disease that has become a significant public health issue in Nigeria, the country with the greatest burden in Africa. This paper presents a geo-computational model for the analysis and prediction of Lassa fever dynamics and outbreaks in Nigeria. The model investigates the dynamics of the virus with respect to environmental factors and human populations. It confirms the role of the rodent host in virus transmission and identifies how transmission is affected by climate and human population. The proposed methodology is carried out on a Linux operating system using the OSGeoLive virtual machine for geographical computing, which serves as a base for spatial ecology computing. The model design uses the Unified Modeling Language (UML), and performance is evaluated with machine learning algorithms such as random forests, fuzzy logic, and neural networks. The study aims to contribute to the control of Lassa fever, which is achievable through the combined efforts of public health professionals and geocomputational and machine learning tools. The research findings will potentially be more readily accepted and utilized by decision-makers for the attainment of Lassa fever elimination.

Keywords: geo-computational model, Lassa fever dynamics, Lassa fever outbreak prediction, Nigeria

Procedia PDF Downloads 78
1389 Air Quality Analysis Using Machine Learning Models Under Python Environment

Authors: Salahaeddine Sbai

Abstract:

Air quality analysis using machine learning models is a method employed to assess and predict air pollution levels. This approach leverages the capabilities of machine learning algorithms to analyze vast amounts of air quality data and extract valuable insights. By training these models on historical air quality data, they can learn patterns and relationships between factors such as weather conditions, pollutant emissions, and geographical features. The trained models can then be used to predict air quality levels in real time or to forecast future pollution levels. This application of machine learning in air quality analysis enables policymakers, environmental agencies, and the general public to make informed decisions regarding health, environmental impact, and mitigation strategies. By understanding the factors influencing air quality, interventions can be implemented to reduce pollution levels, mitigate health risks, and enhance overall air quality management. Climate change is having significant impacts on Morocco, affecting various aspects of the country's environment, economy, and society. In this study, we use several machine learning models under the Python environment to predict and analyze air quality change over northern Morocco and thereby evaluate the impact of climate change on agriculture.
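A minimal example of the kind of relationship such models learn: an ordinary-least-squares trend fit in plain Python. The pollutant values below are invented toy numbers, not Moroccan measurements:

```python
def fit_trend(xs, ys):
    """Ordinary least-squares line y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

# toy series: a pollutant concentration rising ~2 units per year
years = [0, 1, 2, 3, 4]
pm25 = [10.0, 12.0, 14.0, 16.0, 18.0]
a, b = fit_trend(years, pm25)  # a == 10.0, b == 2.0
```

Real air-quality models add many predictors (meteorology, emissions, terrain) and nonlinear learners, but the fit-then-predict workflow is the same.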

Keywords: air quality, machine learning models, pollution, pollutant emissions

Procedia PDF Downloads 76
1388 Applying Neural Networks for Solving Record Linkage Problem via Fuzzy Description Logics

Authors: Mikheil Kalmakhelidze

Abstract:

The record linkage (RL) problem has become more and more important in recent years due to the growing interest in big data analysis. The problem can be formulated in a very simple way: given two entries a and b of a database, decide whether they represent the same object or not. There are two classical ways of solving the RL problem, deterministic and probabilistic. Using a simple Bayes classifier produces useful results in many cases, but sometimes the results prove to be poor. In recent years, several successful approaches have been made towards solving specific RL problems with neural network algorithms, including the single-layer perceptron, the multilayer back-propagation network, etc. In our work, we model the RL problem for a specific dataset of student applications in fuzzy description logic (FDL), where the linkage of a specific pair (a,b) depends on the truth value of the corresponding formula A(a,b) in a canonical FDL model. As a main result, we build a neural network for deciding the truth value of FDL formulas in a canonical model and thus link the RL problem to machine learning. We apply the approach to a dataset with 10,000 entries and compare it to classical RL approaches. The results prove to be more accurate than those of the standard probabilistic approach.
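A toy version of the single-layer perceptron mentioned above, applied to record-pair features. The similarity features and data are invented for illustration; the paper's FDL-based network is more involved:

```python
def train_perceptron(pairs, epochs=20, lr=0.1):
    """Train a single-layer perceptron on (features, 0/1 label) pairs."""
    w, b = [0.0] * len(pairs[0][0]), 0.0
    for _ in range(epochs):
        for x, y in pairs:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                  # classic perceptron update rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# invented record-pair features: (name similarity, date-of-birth match)
pairs = [((0.9, 1.0), 1), ((0.95, 1.0), 1), ((0.2, 0.0), 0), ((0.1, 0.0), 0)]
w, b = train_perceptron(pairs)

def is_match(x):
    """Predict whether a feature vector describes a matching record pair."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0
```

On linearly separable feature vectors like these, the perceptron converges to a separating rule in a few epochs, which is why it served as an early baseline for learned record linkage.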

Keywords: description logic, fuzzy logic, neural networks, record linkage

Procedia PDF Downloads 261
1387 An Efficient Hybrid Feedstock Pretreatment Technique for the Release of Fermentable Sugar from Cassava Peels for Biofuel Production

Authors: Gabriel Sanjo Aruwajoye, E. B. Gueguim Kana

Abstract:

Agricultural residues present a low-cost feedstock for bioenergy production around the world. Cassava peel waste is rich in organic molecules that can be readily converted to value-added products such as biomaterials and biofuels. However, due to the high proportion of structural carbohydrates and lignin, hydrolysis of this feedstock is imperative to achieve maximum substrate utilization and energy yield. This study models and optimises the release of fermentable sugar (FS) from cassava peel waste using response surface methodology. The investigated pretreatment input parameters were soaking temperature (°C), soaking time (hours), autoclave duration (minutes), acid concentration (% v/v), and substrate solid loading (% w/v), within the ranges of 30 to 70, 0 to 24, 5 to 20, 0 to 5, and 2 to 10, respectively. The Box-Behnken design was used to generate 46 experimental runs, which were investigated for FS release. The obtained data were used to fit a quadratic model. A coefficient of determination of 0.87 and an F value of 8.73 were obtained, indicating the good fit of the model. The predicted optimum pretreatment conditions were 69.62 °C soaking temperature, 2.57 hours soaking duration, 5 minutes autoclave duration, 3.68% v/v HCl, and 9.65% w/v solid loading, corresponding to an FS yield of 91.83 g/l (0.92 g/g cassava peels), a 58% improvement over the non-optimised pretreatment. Our findings demonstrate an efficient pretreatment model for fermentable sugar release from cassava peel waste for various bioprocesses.
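The 46-run count follows from the Box-Behnken construction for five factors: each pair of factors is varied at its ±1 coded levels while the rest sit at the centre, plus replicated centre points. A sketch, assuming six centre-point replicates (which is what makes the count come to 46):

```python
from itertools import combinations, product

def box_behnken(k, center_runs):
    """Coded-level (-1/0/+1) Box-Behnken design for k factors."""
    runs = []
    for i, j in combinations(range(k), 2):       # each pair of factors
        for a, b in product((-1, 1), repeat=2):  # four corner combinations
            run = [0] * k
            run[i], run[j] = a, b
            runs.append(run)
    runs += [[0] * k for _ in range(center_runs)]
    return runs

design = box_behnken(5, 6)
# 10 factor pairs x 4 combinations + 6 centre points = 46 runs
```

Each coded run is then mapped back to the physical ranges (e.g. -1/0/+1 soaking temperature becomes 30/50/70 °C) before the experiments are performed.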

Keywords: feedstock pretreatment, cassava peels, fermentable sugar, response surface methodology

Procedia PDF Downloads 346
1386 Designing Supplier Partnership Success Factors in the Coal Mining Industry

Authors: Ahmad Afif, Teuku Yuri M. Zagloel

Abstract:

Sustainable supply chain management is a pattern that has emerged recently in industry and companies. The procurement process is one of the key factors for efficiency in supply chain management practices, and partnership is one of the procurement strategies for strategic items. The success factors of a partnership must be determined to avoid endangering the financial and operational status of the company. Current supplier partnership research focuses on the selection of general criteria and sustainable supplier selection; there is still limited research on supplier partnership success factors that focus on strategic items in the coal mining industry. Meanwhile, procurement in coal mining has its own characteristics, and there are regulations related to the procurement of goods. Therefore, this research was conducted to determine which categories of goods count as strategic items and to design the success factors of supplier partnerships. The main factors studied are general, financial, production, reputation, synergy, and sustainability factors. The Kraljic method is used to determine the categories of goods that are strategic items, and a hybrid multi-criteria decision-making method is used to design the supplier partnership success factors: integrated fuzzy AHP-fuzzy TOPSIS determines the weights of the success factors and ranks suppliers on the factors used.
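The TOPSIS step of the hybrid method ranks alternatives by relative closeness to an ideal solution. A crisp (non-fuzzy) sketch with invented supplier scores, to show the mechanics only:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) by relative closeness to the ideal solution."""
    m = len(weights)
    # vector-normalize each criterion column, then apply the weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(m)]
    v = [[w * row[j] / n for j, (w, n) in enumerate(zip(weights, norms))]
         for row in matrix]
    cols = list(zip(*v))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    anti = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    # closeness coefficient: distance from anti-ideal over total distance
    return [math.dist(row, anti) / (math.dist(row, ideal) + math.dist(row, anti))
            for row in v]

# invented scores: columns = (financial strength, defect rate); defects are a cost
scores = topsis([[7.0, 2.0], [5.0, 1.0], [9.0, 4.0]],
                weights=[0.6, 0.4], benefit=[True, False])
best = scores.index(max(scores))
```

The fuzzy variant in the study replaces the crisp scores and weights with fuzzy numbers (here the AHP-derived weights are simply assumed), but the ideal/anti-ideal distance logic is the same.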

Keywords: supplier, partnership, strategic item, success factors, coal mining industry

Procedia PDF Downloads 116
1385 Detection of Atrial Fibrillation Using Wearables via Attentional Two-Stream Heterogeneous Networks

Authors: Huawei Bai, Jianguo Yao (Fellow, IEEE)

Abstract:

Atrial fibrillation (AF) is the most common form of heart arrhythmia and is closely associated with mortality and morbidity in heart failure, stroke, and coronary artery disease. The development of single-spot optical sensors enables widespread photoplethysmography (PPG) screening, especially for AF, since it represents a more convenient and noninvasive approach. To our knowledge, most existing studies, based on public and unbalanced datasets, can barely handle the multiple noise sources of the real world and also lack interpretability. In this paper, we construct a large-scale PPG dataset using measurements collected from PPG wristwatch devices worn by volunteers and propose an attention-based two-stream heterogeneous neural network (TSHNN). The first stream is a hybrid neural network consisting of a three-layer one-dimensional convolutional neural network (1D-CNN) and a two-layer attention-based bidirectional long short-term memory (Bi-LSTM) network to learn representations from temporally sampled signals. The second stream extracts latent representations from the PPG time-frequency spectrogram using a five-layer CNN. The outputs from both streams are fed into a fusion layer to produce the outcome. Visualization of the learned attention weights demonstrates the effectiveness of the attention mechanism against noise. The experimental results show that the TSHNN outperforms all competitive baseline approaches and, with 98.09% accuracy, achieves state-of-the-art performance.

Keywords: PPG wearables, atrial fibrillation, feature fusion, attention mechanism, hybrid network

Procedia PDF Downloads 101
1384 Inherited Eye Diseases in Africa: A Scoping Review and Strategy for an African Longitudinal Eye Study

Authors: Bawa Yusuf Muhammad, Musa Abubakar Kana, Aminatu Abdulrahman, Kerry Goetz

Abstract:

Background: Inherited eye diseases are disorders that affect, globally, 1 in 1000 people. The six main world populations have created databases containing information on eye genotypes. Aim: The aim of this scoping review was to mine and present the information available to date on the genetics of inherited eye diseases within the African continent. Method: The literature search was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). PubMed and Google Scholar were searched for articles on inherited eye diseases from inception to 20th June 2022. Both original and review articles reporting on inherited, genetic, or developmental/congenital eye diseases within the African continent were included. Results: A total of 1162 citations were obtained, but only 37 articles were reviewed based on the inclusion and exclusion criteria. The highest output of publications on inherited eye diseases comes from South Africa and Tunisia (about 43%), followed by Morocco and Egypt (27%), then Sub-Saharan Africa and North Africa (13.5%), while the remaining articles (16.5%) originated from Nigeria, Ghana, Mauritania, Cameroon, Zimbabwe, and a combined article between Zimbabwe and Cameroon. Glaucoma and inherited retinal disorders are the most studied diseases, followed by albinism and congenital cataracts, respectively. Conclusion: Despite the growing research from Tunisia, Morocco, Egypt, and South Africa, Sub-Saharan Africa remains an almost unexplored region for the genetics of eye diseases.

Keywords: inherited eye diseases, Africa, scoping review, longitudinal eye study

Procedia PDF Downloads 38