Search results for: maximum likelihood classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6556

5476 Assessment of Climate Change Impacts on the Hydrology of Upper Guder Catchment, Upper Blue Nile

Authors: Fikru Fentaw Abera

Abstract:

Climate change alters regional hydrologic conditions and results in a variety of impacts on water resource systems. Such hydrologic changes will affect almost every aspect of human well-being. The goal of this paper is to assess the impact of climate change on the hydrology of the Upper Guder catchment, located in the northwest of Ethiopia. GCM-derived scenarios (HadCM3 A2a and B2a SRES emission scenarios) were used for the climate projections. The statistical downscaling model (SDSM) was used to generate possible future local meteorological variables in the study area. The downscaled data were then used as input to the Soil and Water Assessment Tool (SWAT) model to simulate the corresponding future streamflow regime in the Upper Guder catchment of the Abay River Basin. A semi-distributed hydrological model, SWAT, was developed, and Generalized Likelihood Uncertainty Estimation (GLUE) was utilized for uncertainty analysis; GLUE is linked with SWAT in the calibration and uncertainty program known as SWAT-CUP. Three benchmark periods were simulated for this study: the 2020s, 2050s, and 2080s. The time series generated by the HadCM3 GCM (A2a and B2a) and the Statistical Downscaling Model (SDSM) indicate a significant increasing trend in maximum and minimum temperature values and a slight increasing trend in precipitation for both A2a and B2a emission scenarios at both the Gedo and Tikur Inch stations for all three benchmark periods. The hydrologic impact analysis was made with the downscaled temperature and precipitation time series as input to the hydrological model SWAT for both A2a and B2a emission scenarios. The model output shows that there may be an annual increase in flow volume of up to 35% for both emission scenarios in the three benchmark periods. All seasons show an increase in flow volume for both A2a and B2a emission scenarios for all time horizons. Potential evapotranspiration in the catchment will also increase annually, on average by 3-15% for the 2020s and by 7-25% for the 2050s and 2080s, for both A2a and B2a emission scenarios.
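
The GLUE step described above can be illustrated with a minimal sketch: sample parameter sets, score each simulation with a likelihood measure (here the Nash-Sutcliffe efficiency), and keep only the "behavioral" sets to form uncertainty bounds. The toy runoff function, parameter ranges, and threshold below are assumptions for illustration, not the authors' actual SWAT-CUP setup.

```python
import numpy as np

rng = np.random.default_rng(42)
rain = rng.gamma(2.0, 3.0, size=120)                     # synthetic rainfall forcing

def run_model(cn2_scale, baseflow):
    """Stand-in for a SWAT run: returns a simulated flow series for one parameter set."""
    return 0.1 * cn2_scale * rain + baseflow

observed = run_model(0.8, 1.5) + rng.normal(0, 0.2, size=rain.size)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, used here as the GLUE likelihood measure."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# GLUE: Monte Carlo sampling, likelihood scoring, behavioral threshold, prediction bounds.
samples = rng.uniform([0.4, 0.5], [1.2, 2.5], size=(5000, 2))
sims = np.array([run_model(*p) for p in samples])
scores = np.array([nse(s, observed) for s in sims])
behavioral = sims[scores > 0.7]                          # behavioral threshold (assumed)

lower = np.percentile(behavioral, 2.5, axis=0)
upper = np.percentile(behavioral, 97.5, axis=0)
print("Behavioral sets:", len(behavioral), "| mean 95% band width:", (upper - lower).mean())
```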

Keywords: climate change, Guder sub-basin, GCM, SDSM, SWAT, SWAT-CUP, GLUE

Procedia PDF Downloads 359
5475 Electricity Production from Vermicompost Liquid Using Microbial Fuel Cell

Authors: Pratthana Ammaraphitak, Piyachon Ketsuwan, Rattapoom Prommana

Abstract:

Electricity production from vermicompost liquid was investigated in microbial fuel cells (MFCs). The aim of this study was to determine the performance of vermicompost liquid as a biocatalyst for electricity production by MFCs. Chemical and physical parameters of the vermicompost liquid, such as total nitrogen, ammonia-nitrogen, nitrate, nitrite, total phosphorus, potassium, organic matter, C:N ratio, pH, and electrical conductivity in the MFCs, were studied. The MFCs were operated in open-circuit mode for 7 days. The maximum open-circuit voltage (OCV) was 0.45 V. The maximum power density of 5.29 ± 0.75 W/m², corresponding to a current density of 0.0242 ± 0.0017 A/m², was achieved with a 1000 Ω external resistance on day 2. Vermicompost liquid therefore shows potential for generating electricity from organic waste.
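
For context, power and current densities of this kind follow from Ohm's law applied to the voltage measured across the external resistance, normalized by electrode area; the sketch below uses illustrative numbers (the electrode area in particular is an assumption), not the study's raw measurements.

```python
# Hypothetical MFC measurement: voltage across a known external resistor.
voltage = 0.45            # V, measured across the external load
resistance = 1000.0       # ohm, external resistance
electrode_area = 2.5e-4   # m^2, projected anode area (assumed)

current = voltage / resistance                # A, Ohm's law
power = voltage * current                     # W
current_density = current / electrode_area    # A/m^2
power_density = power / electrode_area        # W/m^2
print(f"I = {current_density:.4f} A/m^2, P = {power_density:.3f} W/m^2")
```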

Keywords: vermicompost liquid, microbial fuel cell, nutrient, electricity production

Procedia PDF Downloads 175
5474 A Technique for Image Segmentation Using K-Means Clustering Classification

Authors: Sadia Basar, Naila Habib, Awais Adnan

Abstract:

The paper presents a technique for image segmentation using K-means clustering classification. Existing algorithms are often application-specific, miss neighboring information, and require high-speed machines to run the segmentation. Clustering is the process of partitioning a group of data points into a small number of clusters. The proposed method is a content-aware, feature-extraction approach that can run on low-end machines, uses a simple algorithm, requires only low-quality streaming, is efficient, and can be used for security purposes. It has the capability to highlight boundaries and objects. First, the user enters the data as the input representation. Next, the digital image is converted into clusters, and the clusters are divided into regions: pixels with the same features are assembled within one cluster, while different pixels are placed in other clusters. Finally, the clusters are combined with respect to similar features and represented in the form of segments. The clustered image gives a clear representation of the digital image, highlighting its regions and boundaries. The final image is presented in the form of segments, with all colors of the image separated into clusters.
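
A minimal sketch of the kind of K-means color segmentation described above, using scikit-learn; the image path and the choice of k are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from PIL import Image

k = 4                                              # assumed number of clusters
img = np.asarray(Image.open("input.jpg").convert("RGB"), dtype=float)
h, w, _ = img.shape

pixels = img.reshape(-1, 3)                        # each pixel is a point in RGB space
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)

# Replace every pixel with its cluster's mean color to obtain the segmented image.
centers = np.array([pixels[labels == i].mean(axis=0) for i in range(k)])
segmented = centers[labels].reshape(h, w, 3).astype(np.uint8)
Image.fromarray(segmented).save("segmented.jpg")
```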

Keywords: clustering, image segmentation, K-means function, local and global minimum, region

Procedia PDF Downloads 369
5473 Study of Three-Dimensional Computed Tomography of Frontoethmoidal Cells Using International Frontal Sinus Anatomy Classification

Authors: Prabesh Karki, Shyam Thapa Chettri, Bajarang Prasad Sah, Manoj Bhattarai, Sudeep Mishra

Abstract:

Introduction: The frontal sinus is frequently described as the most difficult sinus to access surgically due to its proximity to the cribriform plate, orbit, and anterior ethmoid artery. Frontal sinus surgery requires a detailed understanding of the cellular structure and FSDP unique to each patient, making high-resolution CT scans an indispensable tool to assess the difficulty of a planned sinus surgery. The International Frontal Sinus Anatomy Classification (IFAC) was developed to provide a more precise nomenclature for cells in the frontal recess, classifying cells based on their anatomic origin. Objectives: To assess the proportion of frontal cell variants defined by IFAC and their variation with respect to age and gender. Methods: 54 cases were enrolled after a detailed clinical history, thorough general and physical examinations, and a CT scan reported on film. The presence of frontal cells according to IFAC was assessed and tabulated. The prevalence of each cell type was calculated, and data were entered in MS Excel and analyzed using the Statistical Package for the Social Sciences (SPSS). Descriptive statistics and frequencies were defined for categorical and numerical variables; frequency, percentage, mean, and standard deviation were calculated. Results: Among the 54 patients, 30 (55.6%) were male and 24 (44.4%) were female. The patients enrolled ranged from 18 to 78 years, and the majority, 33.3% (n=18), were in the age group of >50 years. According to IFAC, Agger nasi cells (92.6%) were the most common, whereas supraorbital ethmoidal cells were the least common at 16 (29.6%). The prevalence of the other frontoethmoidal cells among the 54 cases was SAC 57.4%, SAFC 38.9%, SBC 74.1%, SBFC 33.3%, and FSC 38.9%. Conclusion: IFAC is an international consensus document that provides an anatomically precise nomenclature for classifying frontoethmoidal cell anatomy. This study has defined the prevalence, symmetry, and reliability of frontoethmoidal cells as established by the IFAC system, as in other parts of the world.

Keywords: frontal sinus, frontoethmoidal cells, international frontal sinus anatomy classification

Procedia PDF Downloads 95
5472 Radar on Bike: Coarse Classification based on Multi-Level Clustering for Cyclist Safety Enhancement

Authors: Asma Omri, Noureddine Benothman, Sofiane Sayahi, Fethi Tlili, Hichem Besbes

Abstract:

Cycling, a popular mode of transportation, can also be perilous due to cyclists' vulnerability to collisions with vehicles and obstacles. This paper presents an innovative cyclist safety system based on radar technology designed to offer real-time collision risk warnings to cyclists. The system incorporates a low-power radar sensor affixed to the bicycle and connected to a microcontroller. It leverages radar point cloud detections, a clustering algorithm, and a supervised classifier. These algorithms are optimized for efficiency to run on TI's AWR 1843 BOOST radar, utilizing a coarse classification approach that distinguishes between cars, trucks, two-wheeled vehicles, and other objects. To enhance the performance of the clustering stage, we propose a 2-level clustering approach that builds on the state-of-the-art density-based spatial clustering of applications with noise (DBSCAN). The objective is to first cluster objects based on their velocity and then refine the analysis by clustering based on position. The initial level identifies groups of objects with similar velocities and movement patterns; the subsequent level refines the analysis by considering the spatial distribution of these objects, with the clusters obtained from the first level serving as input to the second level of clustering. Our proposed technique surpasses the classical DBSCAN algorithm in terms of clustering quality metrics, including homogeneity, completeness, and V-measure. Relevant cluster features are extracted and used to classify objects with an SVM classifier. Potential obstacles are identified based on their velocity and proximity to the cyclist. To optimize the system, we used the View of Delft dataset for hyperparameter selection and SVM classifier training. The system's performance was assessed using our collected dataset of radar point clouds synchronized with a camera on an Nvidia Jetson Nano board. The radar-based cyclist safety system is a practical solution that can be easily installed on any bicycle and connected to smartphones or other devices, offering real-time feedback and navigation assistance to cyclists. We conducted experiments to validate the system's feasibility, achieving 85% accuracy in the classification task. This system has the potential to significantly reduce the number of accidents involving cyclists and enhance their safety on the road.
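
A minimal sketch of the two-level idea described above: DBSCAN is first run on velocity alone, and then, within each velocity cluster, a second DBSCAN is run on position. The point-cloud array and the eps/min_samples values are illustrative assumptions, not the parameters tuned on the View of Delft dataset.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical radar detections: columns are x [m], y [m], radial velocity [m/s].
rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal([5, 2, -8], 0.3, size=(30, 3)),    # an approaching car
    rng.normal([8, -1, -2], 0.3, size=(20, 3)),   # a slower two-wheeler
    rng.normal([3, 4, 0], 0.5, size=(15, 3)),     # static clutter
])

# Level 1: cluster on velocity only.
vel_labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(points[:, 2:3])

# Level 2: within each velocity cluster, refine on spatial position.
final_labels = np.full(len(points), -1)
next_id = 0
for v in set(vel_labels) - {-1}:
    idx = np.where(vel_labels == v)[0]
    pos_labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(points[idx, :2])
    for p in set(pos_labels) - {-1}:
        final_labels[idx[pos_labels == p]] = next_id
        next_id += 1

print("Objects found:", next_id)
```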

Keywords: 2-level clustering, coarse classification, cyclist safety, warning system based on radar technology

Procedia PDF Downloads 75
5471 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation

Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke

Abstract:

Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems. Difficulty in finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g., MIKE URBAN. This is partly due to the need for a large number of parameters and large datasets in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed in the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, namely initial loss, reduction factor, time of concentration, and time lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments in the Gold Coast region. For comparison, simulation outcomes for the same three catchments obtained with the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement of the MIKE URBAN results with the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE), and maximum error (ME) was found reasonable for the three study catchments. The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, and therefore the associated uncertainty in predictions can be obtained; in contrast, MIKE URBAN provides only a point estimate. Based on the results of the analysis, it appears that the developed ABC framework performs well for automatic calibration.
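
The core of ABC can be illustrated with a rejection-sampling sketch: draw candidate parameters from a prior, simulate runoff, and keep parameter sets whose simulated output falls within a tolerance of the observations. The toy runoff model, priors, and tolerance below are assumptions for illustration, not the paper's time-area model.

```python
import numpy as np

rng = np.random.default_rng(7)
rain = rng.gamma(2.0, 2.0, size=60)                       # synthetic rainfall series

def runoff_model(rain, initial_loss, reduction_factor):
    """Toy stand-in for the time-area runoff model."""
    return np.clip(rain - initial_loss, 0, None) * reduction_factor

observed = runoff_model(rain, 1.0, 0.6) + rng.normal(0, 0.1, size=rain.size)

# ABC rejection sampling: keep draws whose simulated output is close to the observations.
accepted = []
for _ in range(20000):
    theta = (rng.uniform(0, 3), rng.uniform(0.1, 1.0))    # priors (assumed)
    sim = runoff_model(rain, *theta)
    distance = np.sqrt(np.mean((sim - observed) ** 2))    # RMSE as the ABC distance
    if distance < 0.2:                                    # tolerance (assumed)
        accepted.append(theta)

accepted = np.array(accepted)
print("Accepted draws:", len(accepted))
print("Approximate posterior means:", accepted.mean(axis=0))
```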

Keywords: automatic calibration framework, approximate bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform

Procedia PDF Downloads 301
5470 Effectiveness of the Lacey Assessment of Preterm Infants to Predict Neuromotor Outcomes of Premature Babies at 12 Months Corrected Age

Authors: Thanooja Naushad, Meena Natarajan, Tushar Vasant Kulkarni

Abstract:

Background: The Lacey Assessment of Preterm Infants (LAPI) is used in clinical practice to identify premature babies at risk of neuromotor impairments, especially cerebral palsy. This study assessed the validity of the Lacey assessment for predicting the neuromotor outcomes of premature babies at 12 months corrected age and compared its predictive ability with that of brain ultrasound. Methods: This prospective cohort study included 89 preterm infants (45 females and 44 males) born below 35 weeks gestation who were admitted to the neonatal intensive care unit of a government hospital in Dubai. The initial assessment was done using the Lacey assessment after the babies reached 33 weeks postmenstrual age. Follow-up assessment of neuromotor outcomes was done at 12 months (± 1 week) corrected age using two standardized outcome measures, i.e., the Infant Neurological International Battery and the Alberta Infant Motor Scale. Brain ultrasound data were collected retrospectively. Data were statistically analyzed, and the diagnostic accuracy of the Lacey Assessment of Preterm Infants (LAPI) was calculated when used alone and in combination with brain ultrasound. Results: In comparison with brain ultrasound, the Lacey assessment showed superior specificity (96% vs. 77%), a higher positive predictive value (57% vs. 22%), and a higher positive likelihood ratio (18 vs. 3) for predicting neuromotor outcomes at one year of age. The sensitivity of the Lacey assessment was lower than that of brain ultrasound (66% vs. 83%), whereas specificity was similar (97% vs. 98%). A combination of the Lacey assessment and brain ultrasound results showed higher sensitivity (80%), positive (66%) and negative (98%) predictive values, positive likelihood ratio (24), and test accuracy (95%) than the Lacey assessment alone in predicting neurological outcomes. The negative predictive value of the Lacey assessment was similar to that of its combination with brain ultrasound (96%). Conclusion: The results of this study suggest that the Lacey Assessment of Preterm Infants can be used as a supplementary assessment tool for premature babies in the neonatal intensive care unit. Due to its high specificity, the Lacey assessment can be used to identify babies at low risk of abnormal neuromotor outcomes at a later age. When used along with the findings of brain ultrasound, the Lacey assessment has better sensitivity to identify preterm babies at particular risk. These findings have applications in identifying premature babies who may benefit from early intervention services.
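
The diagnostic accuracy figures quoted above (sensitivity, specificity, predictive values, likelihood ratios) all derive from a 2x2 table of test result versus outcome; the sketch below shows the standard formulas on hypothetical counts, not the study's actual data.

```python
# Hypothetical 2x2 table: rows = test (positive/negative), columns = outcome (impaired/normal).
tp, fp = 8, 6      # test positive
fn, tn = 4, 71     # test negative

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                      # positive predictive value
npv = tn / (tn + fn)                      # negative predictive value
lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio
accuracy = (tp + tn) / (tp + fp + fn + tn)

print(f"Sens {sensitivity:.2f}, Spec {specificity:.2f}, PPV {ppv:.2f}, "
      f"NPV {npv:.2f}, LR+ {lr_pos:.1f}, Acc {accuracy:.2f}")
```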

Keywords: brain ultrasound, lacey assessment of preterm infants, neuromotor outcomes, preterm

Procedia PDF Downloads 136
5469 System for Electromyography Signal Emulation Through the Use of Embedded Systems

Authors: Valentina Narvaez Gaitan, Laura Valentina Rodriguez Leguizamon, Ruben Dario Hernandez B.

Abstract:

This work describes a physiological signal emulation system based on electromyography (EMG) signals obtained from muscle sensors. These signals are used to extract characteristics with which specific arm movements are modeled and emulated. The main objective of this effort is to develop a new biomedical software system capable of generating physiological signals with embedded systems, based on the characteristics of the acquired signals. The acquisition system used was Biosignals, which contains two EMG electrodes used to acquire signals from the forearm, placed on the extensor and flexor muscles. Processing algorithms were implemented to classify the signals generated by the arm muscles when performing specific movements such as wrist flexion-extension, palmar grip, and wrist pronation-supination. Matlab software was used to condition and preprocess the signals for subsequent classification. Subsequently, the mathematical modeling of each signal was performed so that it could be generated by the embedded system, and the accuracy of the obtained signal was validated using the cross-correlation percentage, obtaining a precision of 96%. The equations were then discretized to be emulated in the embedded system, yielding a system capable of generating physiological signals according to the characteristics required for medical analysis.
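
The validation step mentioned above compares the emulated signal against the acquired EMG by normalized cross-correlation; a minimal sketch with synthetic signals is shown below (the 96% figure in the abstract refers to the authors' real data, not this toy example).

```python
import numpy as np

fs = 1000                                    # Hz, assumed sampling rate
t = np.arange(0, 1, 1 / fs)
acquired = np.sin(2 * np.pi * 10 * t) * np.exp(-t)   # stand-in for a real EMG envelope
emulated = acquired + 0.05 * np.random.default_rng(0).standard_normal(t.size)

# Normalized cross-correlation at zero lag, expressed as a percentage.
similarity = np.corrcoef(acquired, emulated)[0, 1] * 100
print(f"Cross-correlation similarity: {similarity:.1f}%")
```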

Keywords: classification, electromyography, embedded system, emulation, physiological signals

Procedia PDF Downloads 102
5468 Constructing Orthogonal De Bruijn and Kautz Sequences and Applications

Authors: Yaw-Ling Lin

Abstract:

A de Bruijn graph of order k is a graph whose vertices represent all length-k sequences, with edges joining pairs of vertices whose sequences have the maximum possible overlap (length k−1). Every Hamiltonian cycle of this graph defines a distinct, minimum-length de Bruijn sequence containing all k-mers exactly once. A Kautz sequence is the minimal generating sequence, i.e., the sequence of minimal length that produces all possible length-k sequences under the restriction that every two consecutive symbols in the sequence must be different. A collection of de Bruijn/Kautz sequences is orthogonal if any two sequences differ maximally in sequence composition; that is, the maximum length of their common substrings is k. In this paper, we discuss how such a collection of (maximal) orthogonal de Bruijn/Kautz sequences can be constructed, and we use the algorithm to build a web application service for synthesized DNA and other related biomolecular sequences.
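
For reference, a single de Bruijn sequence over a k-letter alphabet can be generated with the classical Lyndon-word (FKM) recursion; the sketch below, mapped to the DNA alphabet, is a generic construction and not the orthogonal-collection algorithm proposed in the paper.

```python
def de_bruijn(alphabet_size, n):
    """Classical FKM construction of a de Bruijn sequence B(alphabet_size, n)."""
    a = [0] * (alphabet_size * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, alphabet_size):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

dna = "ACGT"
order = 3
indices = de_bruijn(len(dna), order)
sequence = "".join(dna[i] for i in indices)
print(len(sequence), sequence)   # 64 symbols; contains each 3-mer exactly once (cyclically)
```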

Keywords: biomolecular sequence synthesis, de Bruijn sequences, Eulerian cycle, Hamiltonian cycle, Kautz sequences, orthogonal sequences

Procedia PDF Downloads 162
5467 Structural Analysis of Hydro-Turbine Spiral Casing and Stay Ring Using Ansys

Authors: Surjit Angra, Pooja Rani, Vinod Kumar

Abstract:

In a hydro power plant, the spiral casing and stay ring guide the water flow to the guide vanes and runner. The spiral casing and stay ring are subjected to static (pressure) loads as well as fluctuating loads acting on the structure due to the water hammer effect in the water conductor system. The finite element method has been used to calculate the stresses on the spiral casing and stay ring. These calculations were done for the maximum possible loading under the operating condition "LC1 Quick Shut Down". The design load for the spiral casing and stay ring is reached during the emergency closure of the guide apparatus ("LC1 Quick Shut Down"); during this operation, the forces transmitted from the head cover to the stay ring also reach their maximum.

Keywords: hydro-turbine, spiral casing, stay ring, structural analysis

Procedia PDF Downloads 512
5466 Healthcare-SignNet: Advanced Video Classification for Medical Sign Language Recognition Using CNN and RNN Models

Authors: Chithra A. V., Somoshree Datta, Sandeep Nithyanandan

Abstract:

Sign Language Recognition (SLR) is the process of interpreting and translating sign language into spoken or written language using technological systems. It involves recognizing the hand gestures, facial expressions, and body movements that make up sign language communication. The primary goal of SLR is to facilitate communication between the hearing- and speech-impaired communities and those who do not understand sign language. Due to increased awareness and greater recognition of the rights and needs of the hearing- and speech-impaired community, sign language recognition has gained significant importance over the past 10 years. Technological advancements in the fields of Artificial Intelligence and Machine Learning have made it more practical and feasible to create accurate SLR systems. This paper presents a distinct approach to SLR by framing it as a video classification problem using Deep Learning (DL), whereby a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) is used. This research targets the integration of sign language recognition into healthcare settings, aiming to improve communication between medical professionals and patients with hearing impairments. The spatial features of each video frame are extracted using a CNN, which captures essential elements such as hand shapes, movements, and facial expressions. These features are then fed into an RNN that learns the temporal dependencies and patterns inherent in sign language sequences. The INCLUDE dataset has been enhanced with more videos from the healthcare domain, and the model is evaluated on this enhanced dataset. Our model achieves 91% accuracy, representing state-of-the-art performance in this domain. The results highlight the effectiveness of treating SLR as a video classification task with the CNN-RNN architecture. This approach not only improves recognition accuracy but also offers a scalable solution for real-time SLR applications, significantly advancing the field of accessible communication technologies.
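
A minimal sketch of a CNN-RNN video classifier of the kind described above, written with Keras: a small CNN is applied to every frame via TimeDistributed, and an LSTM models the temporal sequence. The clip shape, number of sign classes, and layer sizes are illustrative assumptions, not the paper's architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

num_frames, h, w, c = 30, 112, 112, 3   # hypothetical clip shape
num_classes = 50                        # hypothetical number of signs

# Per-frame CNN feature extractor (spatial features: hand shape, pose, expression).
frame_cnn = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(h, w, c)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
])

# Wrap the CNN so it runs on every frame, then model temporal structure with an RNN.
model = models.Sequential([
    layers.TimeDistributed(frame_cnn, input_shape=(num_frames, h, w, c)),
    layers.LSTM(128),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```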

Keywords: sign language recognition, deep learning, convolution neural network, recurrent neural network

Procedia PDF Downloads 21
5465 Variability of Climatic Elements in Nigeria Over Recent 100 Years

Authors: T. Salami, O. S. Idowu, N. J. Bello

Abstract:

Climatic variability is an essential consideration when dealing with climate change. The variability of key climate parameters helps to determine how the climatic condition of a region will behave, and the most important of these variables are temperature and precipitation. This research deals with long-term climatic variability in Nigeria. Variables examined in this analysis include near-surface temperature, near-surface minimum temperature, maximum temperature, relative humidity, vapour pressure, precipitation, wet-day frequency, and cloud cover, using data ranging between 1901 and 2010. Regression and empirical orthogonal function (EOF) analyses were carried out. Results show that the annual average, minimum, and maximum near-surface temperatures all gradually increase from 1901 to 2010, and the same holds for both the wet season and the dry season. The linear trends of minimum near-surface temperature are significant for the annual, wet-season, and dry-season means. Moreover, the decrease in the diurnal temperature range over the recent 100 years implies that the minimum near-surface temperature has increased more than the maximum. Both precipitation and wet-day frequency decline over the analysis period, demonstrating that Nigeria has become drier in terms of rainfall. Temperature and precipitation variability has become very high during these periods, especially in the northern areas. Areas with excessive rainfall were confronted with flooding and other related issues, while areas with less precipitation were confronted with drought. More practical issues will be presented.
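
EOF analysis, as used above, is typically computed as a principal-component decomposition (SVD) of the space-time anomaly matrix; the sketch below uses a random stand-in field rather than the Nigerian station data.

```python
import numpy as np

# Hypothetical anomaly field: time x grid points (e.g., monthly temperature anomalies).
rng = np.random.default_rng(0)
data = rng.normal(size=(1320, 200))          # 110 years x 12 months, 200 grid cells

anomalies = data - data.mean(axis=0)         # remove the time mean at each grid point
U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)

eofs = Vt                                    # spatial patterns (EOFs)
pcs = U * s                                  # principal-component time series
explained = s**2 / np.sum(s**2)              # fraction of variance per mode
print("Variance explained by the first 3 EOFs:", explained[:3])
```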

Keywords: climate, variability, flooding, excessive rainfall

Procedia PDF Downloads 380
5464 Bioremediation Effect on Shear Strength of Contaminated Soils

Authors: Samira Abbaspour

Abstract:

Soil contamination by the oil industry is an unavoidable issue, irrespective of the environmental impact that occurs during the processes of contaminating and remediating soil. The effect of this phenomenon on the geotechnical properties of the soil has not been investigated thoroughly; some researchers have studied the environmental aspects of these phenomena more than the geotechnical point of view. In this research, compaction and unconfined compression tests were conducted on samples of natural, contaminated, and treated soil after 50 days of bio-treatment. The results show that increasing the amount of crude oil leads to decreased values of maximum dry density and optimum water content and increased values of unconfined compression strength (UCS). However, almost 65% of this contamination was eliminated by using Bioremer as a bioremediation agent. Furthermore, as bioremediation takes place, the values of maximum dry density, unconfined compression strength, and failure strain increase.

Keywords: contamination, shear strength, compaction, oil contamination

Procedia PDF Downloads 180
5463 FracXpert: Ensemble Machine Learning Approach for Localization and Classification of Bone Fractures in Cricket Athletes

Authors: Madushani Rodrigo, Banuka Athuraliya

Abstract:

In today's world of medical diagnosis and prediction, machine learning stands out as a strong tool, transforming traditional approaches to healthcare. This study analyzes the use of machine learning in the specialized domain of sports medicine, with a focus on the timely and accurate detection of bone fractures in cricket athletes. Failure to identify bone fractures in real time can result in malunion or non-union conditions. To ensure proper treatment and enhance the bone healing process, accurately identifying fracture locations and types is necessary. Interpretation of X-ray images relies on the expertise and experience of medical professionals, and radiographic images are sometimes of low quality, leading to potential issues. Therefore, a proper approach is needed to accurately localize and classify fractures in real time. The research revealed that the optimal approach must address the stated problem by employing appropriate radiographic image processing techniques and object detection algorithms that can effectively localize and accurately classify all types of fractures with high precision and in a timely manner. In order to overcome the challenges of misidentifying fractures, a distinct model for fracture localization and classification has been implemented. The research also incorporates radiographic image enhancement and preprocessing techniques to overcome the limitations posed by low-quality images. A classification ensemble model has been implemented using ResNet18 and VGG16. In parallel, a fracture segmentation model has been implemented using an enhanced U-Net architecture. Combining the results of these two models, the FracXpert system can accurately localize exact fracture locations along with fracture types from the available 12 fracture patterns, which include avulsion, comminuted, compressed, dislocation, greenstick, hairline, impacted, intraarticular, longitudinal, oblique, pathological, and spiral. The system generates a confidence score indicating the degree of confidence in the predicted result. The implemented fracture segmentation model, based on the U-Net architecture, achieved a high accuracy level of 99.94%, demonstrating its precision in identifying fracture locations. Simultaneously, the classification ensemble model using the ResNet18 and VGG16 architectures achieved an accuracy of 81.0%, showcasing its ability to categorize various fracture patterns, which is instrumental in the fracture treatment process. In conclusion, FracXpert is a promising ML application in sports medicine, demonstrating the potential to revolutionize fracture detection processes. By leveraging the power of ML algorithms, this study contributes to the advancement of diagnostic capabilities in cricket athlete healthcare, ensuring timely and accurate identification of bone fractures for the best treatment outcomes.
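
The classification ensemble described above can be sketched as soft voting over ResNet18 and VGG16 outputs; the PyTorch snippet below shows the idea with untrained backbones and a dummy input (the class count simply follows the 12 fracture patterns listed), so it is illustrative rather than the authors' trained FracXpert model.

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 12  # the 12 fracture patterns mentioned in the abstract

# Two backbones with replaced classification heads (torchvision >= 0.13 API).
resnet = models.resnet18(weights=None)
resnet.fc = nn.Linear(resnet.fc.in_features, num_classes)

vgg = models.vgg16(weights=None)
vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, num_classes)

def ensemble_predict(x):
    """Average the softmax outputs of both backbones (soft voting)."""
    with torch.no_grad():
        p1 = torch.softmax(resnet(x), dim=1)
        p2 = torch.softmax(vgg(x), dim=1)
    probs = (p1 + p2) / 2
    conf, pred = probs.max(dim=1)          # confidence score and predicted class
    return pred, conf

x = torch.randn(1, 3, 224, 224)            # one dummy radiograph batch
pred, conf = ensemble_predict(x)
print(pred.item(), float(conf))
```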

Keywords: multiclass classification, object detection, ResNet18, U-Net, VGG16

Procedia PDF Downloads 104
5462 Multilocus Phylogenetic Approach Reveals Informative DNA Barcodes for Studying Evolution and Taxonomy of Fusarium Fungi

Authors: Alexander A. Stakheev, Larisa V. Samokhvalova, Sergey K. Zavriev

Abstract:

Fusarium fungi are among the most devastating plant pathogens distributed all over the world. The significant reduction of grain yield and quality caused by Fusarium leads to multi-billion dollar annual losses to world agricultural production. These organisms can also cause infections in immunocompromised persons and produce a wide range of mycotoxins, such as trichothecenes, fumonisins, and zearalenone, which are hazardous to human and animal health. Identification of Fusarium fungi based on the morphology of spores and spore-forming structures, colony color, and appearance on specific culture media is often very complicated due to the high similarity of these features for closely related species. Modern Fusarium taxonomy increasingly uses data from crossing experiments (biological species concept) and genetic polymorphism analysis (phylogenetic species concept). A number of novel Fusarium sibling species have been established using DNA barcoding techniques. Species recognition is best made with the combined phylogeny of intron-rich protein-coding genes and ribosomal DNA sequences. However, the internal transcribed spacer (ITS), which is considered to be the universal DNA barcode for Fungi, is not suitable for the genus Fusarium because of its insufficient variability between closely related species and the presence of non-orthologous copies in the genome. Nowadays, the translation elongation factor 1 alpha (TEF1α) gene is the “gold standard” of Fusarium taxonomy, but the search for novel informative markers is still needed. In this study, we used two novel DNA markers, frataxin (FXN) and heat shock protein 90 (HSP90), to discover phylogenetic relationships between Fusarium species. Multilocus phylogenetic analysis based on partial sequences of TEF1α, FXN, and HSP90, as well as the intergenic spacer of ribosomal DNA (IGS), beta-tubulin (β-TUB), and phosphate permease (PHO) genes, has been conducted for 120 isolates of 19 Fusarium species from different climatic zones of Russia and neighboring countries using maximum likelihood (ML) and maximum parsimony (MP) algorithms. Our analyses revealed that the FXN and HSP90 genes can be considered informative phylogenetic markers suitable for evolutionary and taxonomic studies of the Fusarium genus. It has been shown that the PHO gene possesses more variable (22%) and parsimony-informative (19%) characters than other markers, including TEF1α (12% and 9%, respectively), when used for elucidating the phylogenetic relationships between F. avenaceum and its closest relatives, F. tricinctum, F. acuminatum, and F. torulosum. Application of the novel DNA barcodes confirmed that F. arthrosporioides does not represent a separate species but only a subspecies of F. avenaceum. Phylogeny based on partial PHO and FXN sequences revealed the presence of a separate cluster of four F. avenaceum strains which were closer to F. torulosum than to the major F. avenaceum clade. The strain F-846 from Moldova, morphologically identified as F. poae, formed a separate lineage in all the constructed dendrograms and could potentially be considered a separate species, but more information is needed to confirm this conclusion. Variable sites in PHO sequences were used for the first time to develop specific qPCR-based diagnostic assays for F. acuminatum and F. torulosum. This work was supported by the Russian Foundation for Basic Research (grant № 15-29-02527).
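
As a simplified illustration of building a tree from one of these markers, the sketch below uses Biopython to compute a distance matrix from an aligned FASTA file and build a neighbor-joining tree; this is a distance-based stand-in, not the maximum likelihood/maximum parsimony analysis performed in the study, and the file name is hypothetical.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical pre-aligned FASTA of partial TEF1-alpha sequences from several isolates.
alignment = AlignIO.read("tef1a_aligned.fasta", "fasta")

calculator = DistanceCalculator("identity")            # simple p-distance model
constructor = DistanceTreeConstructor(calculator, "nj")
tree = constructor.build_tree(alignment)               # neighbor-joining tree

Phylo.draw_ascii(tree)                                 # quick text rendering of the topology
```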

Keywords: DNA barcode, fusarium, identification, phylogenetics, taxonomy

Procedia PDF Downloads 321
5461 An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model

Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier

Abstract:

Human motion recognition has received increasing attention in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, content-based video compression and retrieval, etc. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem which requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis (LMA) technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) for training and classifying motions. We improve the classification algorithm by proposing two DHMMs for each motion class, processing the motion sequence in two different directions, forward and backward. This modification avoids the misclassification that can happen when recognizing similar motions. Two experiments are conducted. In the first one, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action/gesture recognition methods. In the second experiment, we build a dataset composed of 10 gestures (introduce yourself, waving, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experimental results demonstrate that our method outperforms most existing methods that used the MSRC-12 dataset and achieves a near-perfect classification rate on our dataset.
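
For a discrete HMM classifier of the kind described, each motion class gets its own model and a new sequence is assigned to the class with the highest log-likelihood, computed with the scaled forward algorithm; the transition/emission matrices below are toy values, and the bidirectional refinement would simply add a second model per class scored on the reversed sequence.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | model) for a discrete HMM."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        log_p += np.log(c)
        alpha /= c
    return log_p

# Toy 2-state, 3-symbol models for two gesture classes (values assumed).
models = {
    "wave": (np.array([0.6, 0.4]),
             np.array([[0.7, 0.3], [0.4, 0.6]]),
             np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])),
    "stop": (np.array([0.5, 0.5]),
             np.array([[0.9, 0.1], [0.2, 0.8]]),
             np.array([[0.2, 0.2, 0.6], [0.6, 0.3, 0.1]])),
}

observation = [0, 1, 1, 2, 2, 2, 1]   # quantized LMA descriptor symbols (hypothetical)
scores = {name: forward_log_likelihood(observation, *m) for name, m in models.items()}
print(max(scores, key=scores.get), scores)
```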

Keywords: human motion recognition, motion representation, Laban Movement Analysis, Discrete Hidden Markov Model

Procedia PDF Downloads 204
5460 Investigating the Dynamic Plantar Pressure Distribution in Individuals with Multiple Sclerosis

Authors: Hilal Keklicek, Baris Cetin, Yeliz Salci, Ayla Fil, Umut Altinkaynak, Kadriye Armutlu

Abstract:

Objectives and Goals: Spasticity is a common symptom characterized by a velocity-dependent increase in tonic stretch reflexes (muscle tone) in patients with multiple sclerosis (MS). Hypertonic muscles affect normal plantigrade contact by disturbing the accommodation of the foot to the ground while walking. It is important to know the differences between healthy and neurologic foot features for the management of spasticity-related deformities and/or the determination of rehabilitation goals and content. This study was planned with the aim of investigating the dynamic plantar pressure distribution in individuals with MS and determining the differences from healthy individuals (HI). Methods: Fifty-five individuals with MS (108 feet with spasticity according to the Modified Ashworth Scale) and 20 HI (40 feet) participated in the study. A dynamic pedobarograph was utilized for the evaluation of dynamic loading parameters. Participants were instructed to walk at their self-selected speed seven times to eliminate the learning effect. The parameters were divided into two categories: maximum loading pressure (N/cm²) and time of maximum pressure (ms), collected from the heel medial, heel lateral, midfoot, and the heads of the first, second, third, fourth, and fifth metatarsal bones. Results: There were differences between the groups in the maximum loading pressure of the heel medial (p < .001), heel lateral (p < .001), midfoot (p = .041), and fifth metatarsal areas (p = .036). There were also differences between the groups in the time of maximum pressure of all metatarsal areas, the midfoot, the heel medial, and the heel lateral (p < .001), in favor of the HI. Conclusions: The study provides basic data about foot pressure distribution in individuals with MS. The results primarily show that spasticity of the lower extremity muscles disrupts posteromedial foot loading. Secondarily, spasticity leads to inappropriate timing during load transfer from the hindfoot to the forefoot.

Keywords: multiple sclerosis, plantar pressure distribution, gait, norm values

Procedia PDF Downloads 317
5459 Normal and Peaberry Coffee Beans Classification from Green Coffee Bean Images Using Convolutional Neural Networks and Support Vector Machine

Authors: Hira Lal Gope, Hidekazu Fukai

Abstract:

The aim of this study is to develop a system which can identify and sort peaberries automatically at low cost for coffee producers in developing countries. In this paper, the focus is on the classification of peaberries and normal coffee beans using image processing and machine learning techniques. The peaberry is not a defective bean, but it is not a normal bean either: it develops as a single, relatively round seed in a coffee cherry instead of the usual flat-sided pair of beans, and it has its own value and flavor. To improve the taste of the coffee, it is necessary to separate the peaberries from normal beans before green coffee beans are roasted; otherwise, the tastes of all the beans will be mixed and degraded. During roasting, all beans should be uniform in shape, size, and weight; otherwise, the larger beans take more time to roast through. The peaberry has a different size and shape even though it has the same weight as normal beans, and it roasts more slowly than normal beans; therefore, sorting by size or weight alone does not provide a good option for selecting peaberries. Defective beans, e.g., sour, broken, black, and faded beans, are easy to check and pick out manually by hand. On the other hand, picking out peaberries is very difficult even for trained specialists because the shape and color of the peaberry are similar to normal beans. In this study, we use image processing and machine learning techniques to discriminate normal and peaberry beans as part of the sorting system. As the first step, we applied deep Convolutional Neural Networks (CNN) and a Support Vector Machine (SVM) to discriminate peaberries from normal beans. As a result, better performance was obtained with the CNN than with the SVM for the discrimination of the peaberry. The artificial neural network trained on a high-performance CPU and GPU in this work will then be installed on the inexpensive and computationally limited Raspberry Pi system, which we assume will be used in developing countries. The study evaluates and compares the feasibility of the methods in terms of classification accuracy and processing speed.
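
As a minimal sketch of the SVM side of the comparison, the snippet below trains a scikit-learn SVM on flattened, resized grayscale bean images; the folder layout, image size, and labels are assumptions, and the CNN branch in the paper would replace the raw pixels with learned features.

```python
import numpy as np
from pathlib import Path
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def load_folder(folder, label, size=(64, 64)):
    """Load all JPEGs in a folder as flattened grayscale vectors."""
    X, y = [], []
    for path in Path(folder).glob("*.jpg"):
        img = Image.open(path).convert("L").resize(size)
        X.append(np.asarray(img, dtype=float).ravel() / 255.0)
        y.append(label)
    return X, y

Xn, yn = load_folder("beans/normal", 0)      # hypothetical directory of normal beans
Xp, yp = load_folder("beans/peaberry", 1)    # hypothetical directory of peaberries
X, y = np.array(Xn + Xp), np.array(yn + yp)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X_tr, y_tr)
print("SVM accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```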

Keywords: convolutional neural networks, coffee bean, peaberry, sorting, support vector machine

Procedia PDF Downloads 142
5458 Laser Powder Bed Fusion Awareness for Engineering Students in France and Qatar

Authors: Hiba Naccache, Rima Hleiss

Abstract:

Additive manufacturing (AM), or 3D printing, is one of the pillars of Industry 4.0. Compared to traditional manufacturing, AM provides a prototype before production in order to optimize the design and avoid holding stock, and it uses only the strictly necessary material, which can be recyclable, favoring local production and saving money, time, and resources. Different types of AM exist, and it has a broad range of applications across several industries such as aerospace, automotive, medicine, education, and others. Laser Powder Bed Fusion (LPBF) is a metal AM technique that uses a laser to melt metal powder, layer by layer, to build a three-dimensional (3D) object. In Industry 4.0, and in line with Goals 9 (Industry, Innovation and Infrastructure) and 12 (Responsible Production and Consumption) of the Sustainable Development Goals of the 2030 Agenda, AM manufacturers are committed to minimizing environmental impact by being sustainable in every production process. LPBF has several environmental advantages, such as reduced waste production, lower energy consumption, and greater flexibility in creating lightweight components with complex geometries. However, LPBF also has environmental drawbacks, such as energy consumption, gas consumption, and emissions. It is critical to recognize the environmental impacts of LPBF in order to mitigate them. To increase awareness and promote sustainable practices regarding LPBF, the researchers use the Elaboration Likelihood Model (ELM) theory, in which people from multiple universities in France and Qatar process information in two ways: peripherally and centrally. Peripheral campaigns use superficial cues to get attention, while central campaigns provide clear and concise information. The authors created a seminar including a video showing LPBF production and a website with educational resources. Data were collected using a questionnaire testing attitudes and awareness before and after the seminar. The results reflected a great shift in awareness of LPBF and its impact on the environment. To the best of our knowledge, no similar research exists, so this study will add to the literature on the sustainability of the LPBF production technique.

Keywords: additive manufacturing, laser powder bed fusion, elaboration likelihood model theory, sustainable development goals, education-awareness, France, Qatar, specific energy consumption, environmental impact, lightweight components

Procedia PDF Downloads 84
5457 A Cohesive Zone Model with Parameters Determined by Uniaxial Stress-Strain Curve

Authors: Y.J. Wang, C. Q. Ru

Abstract:

A key issue of cohesive zone models is how to determine the cohesive zone model parameters based on real material test data. In this paper, the uniaxial nominal stress-strain curve (SS curve) is used to determine two key parameters of a cohesive zone model (CZM): the maximum traction and the area under the curve of the traction-separation law (TSL). To this end, the true SS curve is obtained from the nominal SS curve, and the relationship between the nominal SS curve and the TSL is derived based on the assumption that the stress for cracking should be the same in both the CZM and the real material. In particular, the true SS curve after necking is derived from the nominal SS curve by taking the average of a power-law extrapolation and a linear extrapolation, and a damage factor is introduced to offset the true stress reduction caused by the voids generated in the necking zone. The maximum traction of the TSL is set equal to the maximum true stress calculated with the damage factor at the end of hardening. In addition, a simple specimen is modeled in Abaqus/Standard to calculate the critical J-integral, and the fracture energy computed from the critical J-integral represents the stored strain energy in the necking zone calculated from the true SS curve. Finally, the CZM parameters obtained by the present method are compared to those used in a previous related work for a simulation of the drop-weight tear test.
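
A minimal numerical sketch of the nominal-to-true conversion and the averaged extrapolation described above, on made-up data; the hardening law, necking strain, and fit range are assumptions, and the damage-factor correction is only noted in a comment.

```python
import numpy as np

# Illustrative nominal (engineering) stress-strain data up to the onset of necking.
eps_nom = np.linspace(0.0, 0.20, 50)                      # nominal strain
sig_nom = 400.0 * (1.0 - np.exp(-eps_nom / 0.05))         # nominal stress in MPa (made up)

# Standard conversion, valid up to necking.
eps_true = np.log(1.0 + eps_nom)
sig_true = sig_nom * (1.0 + eps_nom)

# Power-law fit sigma = K * eps^n on the plastic branch, for extrapolation past necking.
mask = eps_true > 0.02
n, logK = np.polyfit(np.log(eps_true[mask]), np.log(sig_true[mask]), 1)
K = np.exp(logK)

eps_ext = np.linspace(eps_true[-1], 0.8, 30)
sig_power = K * eps_ext ** n                              # power-law extrapolation
slope = (sig_true[-1] - sig_true[-2]) / (eps_true[-1] - eps_true[-2])
sig_linear = sig_true[-1] + slope * (eps_ext - eps_true[-1])
sig_post = 0.5 * (sig_power + sig_linear)                 # average of the two extrapolations
# A damage factor (omitted here) would further reduce sig_post to account for voids.
print("Estimated maximum true stress [MPa]:", round(float(sig_post.max()), 1))
```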

Keywords: dynamic fracture, cohesive zone model, traction-separation law, stress-strain curve, J-integral

Procedia PDF Downloads 467
5456 Variability of L-Band GPS Scintillation over Auroral Region, Maitri, Antarctica

Authors: Prakash Khatarkar, P. A. Khan, Shweta Mukherjee, Roshni Atulkar, P. K. Purohit, A. K. Gwal

Abstract:

We have investigated the occurrence characteristics of ionospheric scintillations using a dual-frequency GPS receiver installed and operated at the Indian scientific base station Maitri (71.45°S, 11.45°E), Antarctica, during December 2009 to December 2010. The scintillation morphology is described in terms of the S4 index, and the scintillations are classified into four main categories based on S4 thresholds between 0.2 and 1.0: weak, moderate, strong, and saturated (S4 > 1.0). From the analysis, we found that the percentages of weak, moderate, strong, and saturated scintillations were 96%, 80%, 58%, and 7%, respectively. The maximum percentage of all types of scintillation was observed in the summer season, followed by the equinoxes, with the least in the winter season. As 2010 was a period of low solar activity, the occurrences were dominated by weak and moderate scintillations, and only four cases of saturated scintillation were observed.
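
For reference, the S4 index used above is the normalized standard deviation of the received signal intensity over a short window; the sketch below computes it on synthetic intensity samples rather than the Maitri observations.

```python
import numpy as np

def s4_index(intensity):
    """Amplitude scintillation index: normalized standard deviation of signal intensity."""
    i = np.asarray(intensity, dtype=float)
    return np.sqrt((np.mean(i**2) - np.mean(i)**2) / np.mean(i)**2)

# Hypothetical 1-minute window of detrended signal intensity samples at 50 Hz.
rng = np.random.default_rng(1)
intensity = 1.0 + 0.3 * rng.standard_normal(3000)
print("S4 =", round(s4_index(intensity), 3))
```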

Keywords: L-band scintillation, GPS, auroral region, low solar activity

Procedia PDF Downloads 643
5455 Platelet Indices among the Cases of Vivax Malaria

Authors: Mirza Sultan Ahmad, Mubashra Ahmad, Ramlah Mehmood, Nazia Mahboob, Waqar Nasir

Abstract:

Objective: To ascertain the prevalence of thrombocytopenia and study changes in MPV and PDW among cases of vivax malaria. Design: Descriptive analytic study. Place and duration of study: Department of Pediatrics, Fazle Omar Hospital, from January to December 2012. Methodology: All patients with vivax malaria from birth to 16 years of age who presented to Fazle Omar Hospital, Rabwah, from January to December 2012 were included in this study. One hundred patients with other febrile illnesses were taken as controls. Full blood counts were checked with a Madonic CA 620 analyzer. Name, age, sex, weight, platelet count, MPV, PDW, any evidence of bleeding, and the outcome of the cases and controls were recorded on data sheets. Results: One hundred and forty-two patients were included in this study. There was no incidence of death or active bleeding. The median platelet count was 109,000/mm³. Thrombocytopenia was present in 108 (76.1%) patients, and severe thrombocytopenia in 10 (7%). The minimum count was 27,000/mm³ and the maximum was 341,000/mm³. Platelet counts of the control group were significantly higher than those of the study group (p < .001). The median MPV was 8.70, with a minimum of 6.40 and a maximum of 11.90; the MPV of the study group was significantly higher than that of the control group (p < .001). The median PDW was 11.30, with a minimum of 8.5 and a maximum of 16.70; there was no difference between the PDW of the study and control groups (p = 0.246). Conclusions: Thrombocytopenia is a common complication among pediatric cases of vivax malaria. The MPV of cases of vivax malaria is higher than that of the control group.

Keywords: malaria vivax, platelet, mean platelet volume, thrombocytopenia

Procedia PDF Downloads 391
5454 Triangular Geometric Feature for Offline Signature Verification

Authors: Zuraidasahana Zulkarnain, Mohd Shafry Mohd Rahim, Nor Anita Fairos Ismail, Mohd Azhar M. Arsad

Abstract:

Handwritten signature is widely accepted as a biometric characteristic for personal authentication. The use of appropriate features plays an important role in determining the accuracy of signature verification; therefore, this paper presents a feature based on a geometrical concept. To achieve this aim, triangle attributes are exploited to design a new feature, since a triangle possesses orientation, angle, and transformation properties that can improve accuracy. The proposed feature uses a triangulation geometric set comprising the sides, angles, and perimeter of a triangle derived from the center of gravity of a signature image. For classification, a Euclidean classifier along with a voting-based classifier is used to verify the likelihood of forgery. This classification process is evaluated using the triangular geometric feature and selected global features. Based on an experiment validated using the Grupo de Senales 960 (GPDS-960) signature database, the proposed triangular geometric feature achieves a lower Average Error Rate (AER) of 34% compared to 43% for the selected global features. In conclusion, the proposed triangular geometric feature proves to be a more reliable feature for accurate signature verification.
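
A minimal sketch of extracting such triangle attributes: given the signature image's center of gravity and two reference points, compute the sides (Euclidean distances), interior angles (law of cosines), and perimeter. The specific reference points used below are assumptions for illustration, not the construction defined in the paper.

```python
import numpy as np

def triangle_features(p1, p2, p3):
    """Sides, interior angles (degrees), and perimeter of a triangle."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    a = np.linalg.norm(p2 - p3)   # side opposite p1
    b = np.linalg.norm(p1 - p3)   # side opposite p2
    c = np.linalg.norm(p1 - p2)   # side opposite p3

    def angle(opposite, s1, s2):
        # Law of cosines: the angle facing the "opposite" side.
        return np.degrees(np.arccos((s1**2 + s2**2 - opposite**2) / (2 * s1 * s2)))

    A, B, C = angle(a, b, c), angle(b, a, c), angle(c, a, b)
    return [a, b, c, A, B, C, a + b + c]

# Hypothetical: centroid of the signature image plus two corners of its bounding box.
centroid = (120.4, 63.2)
corner1, corner2 = (0, 0), (300, 0)
print(triangle_features(centroid, corner1, corner2))
```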

Keywords: biometrics, euclidean classifier, features extraction, offline signature verification, voting-based classifier

Procedia PDF Downloads 375
5453 The Impacts of Negative Moral Characters on Health: An Article Review

Authors: Mansoor Aslamzai, Delaqa Del, Sayed Azam Sajid

Abstract:

Introduction: Though moral disorders carry a high burden, there is no separate topic for this problem in the International Classification of Diseases (ICD). Alongside the revision of the WHO ICD-11, spirituality may also help prevent the rapid progression of such derangements. Objective: This study evaluated the effects of bad moral characters on health and examined the role of spirituality in the improvement of immorality. Method: This narrative article review was conducted in 2020-2021, and the articles were searched through Web of Science, PubMed, BMC, and Google Scholar. Results: Based on the current review, most experimental and observational studies revealed significant negative effects of negative moral characters on all aspects of health and well-being. Many studies have now established the positive role of spirituality in the improvement of health and moral disorders. The studies concluded that facilities must be available within schools, universities, and communities for everyone to learn about spirituality and improve their moral character. Conclusion: Considering the negative relationship between negative moral characters and well-being, the current study proposes the addition of moral disorder as a separate topic in the WHO International Classification of Diseases. Based on this literature review, spirituality can improve moral disorders and help establish excellent moral traits.

Keywords: bad moral characters, effect, health, spirituality and well-being

Procedia PDF Downloads 181
5452 Determination of Cohesive Zone Model’s Parameters Based On the Uniaxial Stress-Strain Curve

Authors: Y. J. Wang, C. Q. Ru

Abstract:

A key issue of cohesive zone models is how to determine the cohesive zone model (CZM) parameters based on real material test data. In this paper, uniaxial nominal stress-strain curve (SS curve) is used to determine two key parameters of a cohesive zone model: the maximum traction and the area under the curve of traction-separation law (TSL). To this end, the true SS curve is obtained based on the nominal SS curve, and the relationship between the nominal SS curve and TSL is derived based on an assumption that the stress for cracking should be the same in both CZM and the real material. In particular, the true SS curve after necking is derived from the nominal SS curve by taking the average of the power law extrapolation and the linear extrapolation, and a damage factor is introduced to offset the true stress reduction caused by the voids generated at the necking zone. The maximum traction of the TSL is equal to the maximum true stress calculated based on the damage factor at the end of hardening. In addition, a simple specimen is simulated by Abaqus/Standard to calculate the critical J-integral, and the fracture energy calculated by the critical J-integral represents the stored strain energy in the necking zone calculated by the true SS curve. Finally, the CZM parameters obtained by the present method are compared to those used in a previous related work for a simulation of the drop-weight tear test.

Keywords: dynamic fracture, cohesive zone model, traction-separation law, stress-strain curve, J-integral

Procedia PDF Downloads 508
5451 Electrocardiogram-Based Heartbeat Classification Using Convolutional Neural Networks

Authors: Jacqueline Rose T. Alipo-on, Francesca Isabelle F. Escobar, Myles Joshua T. Tan, Hezerul Abdul Karim, Nouar Al Dahoul

Abstract:

Electrocardiogram (ECG) signal analysis and processing are crucial in the diagnosis of cardiovascular diseases, which are considered one of the leading causes of mortality worldwide. However, traditional rule-based analysis of large volumes of ECG data is time-consuming, labor-intensive, and prone to human error. With the advancement of programming paradigms, machine learning algorithms have been increasingly used to analyze ECG signals. In this paper, various deep learning algorithms were adapted to classify five classes of heartbeat types. The dataset used in this work is a synthetic MIT-BIH Arrhythmia dataset produced with generative adversarial networks (GANs). Various deep learning models, such as a ResNet-50 convolutional neural network (CNN), a 1-D CNN, and a long short-term memory (LSTM) network, were evaluated and compared. ResNet-50 was found to outperform the other models in terms of recall and F1 score, with five-fold average scores of 98.88% and 98.87%, respectively. The 1-D CNN, on the other hand, was found to have the highest average precision of 98.93%.
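
The recall, precision, and F1 comparisons above are standard multi-class metrics; the sketch below shows how such per-model scores would be computed with scikit-learn on hypothetical predictions for the five heartbeat classes.

```python
import numpy as np
from sklearn.metrics import classification_report, precision_score, recall_score, f1_score

# Hypothetical true labels and one model's predictions for the 5 heartbeat classes (0-4).
y_true = np.array([0, 0, 1, 2, 3, 4, 4, 1, 2, 0, 3, 4])
y_pred = np.array([0, 0, 1, 2, 3, 4, 3, 1, 2, 0, 3, 4])

print(classification_report(y_true, y_pred, digits=4))
print("Macro precision:", precision_score(y_true, y_pred, average="macro"))
print("Macro recall   :", recall_score(y_true, y_pred, average="macro"))
print("Macro F1       :", f1_score(y_true, y_pred, average="macro"))
```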

Keywords: heartbeat classification, convolutional neural network, electrocardiogram signals, generative adversarial networks, long short-term memory, ResNet-50

Procedia PDF Downloads 123
5450 Analyzing the Risk Based Approach in General Data Protection Regulation: Basic Challenges Connected with Adapting the Regulation

Authors: Natalia Kalinowska

Abstract:

The adoption of the General Data Protection Regulation (GDPR) concluded four years of work by the European Commission in this area in the European Union. Considering the far-reaching changes that the GDPR will introduce, the European legislator envisaged a two-year transitional period: member states and companies have to prepare for the new regulation by 25 May 2018. The idea that represents a new attitude to data protection in the European Union is the risk-based approach. So far, as a result of the implementation of Directive 95/46/WE, many European countries (including Poland) have adopted very specific regulations prescribing technical and organisational security measures; the Polish implementing rules, for example, even specify how long a password should be. Under the new approach, from May 2018, controllers and processors will be obliged to apply security measures adequate to the level of risk associated with the specific data processing. Risk in the GDPR should be interpreted as the likelihood of a breach of the rights and freedoms of the data subject. According to Recital 76, the likelihood and severity of the risk to the rights and freedoms of the data subject should be determined by reference to the nature, scope, context, and purposes of the processing. The GDPR does not prescribe specific security measures; the recitals only give examples such as anonymization or encryption. It is the controller's decision which security measures are considered sufficient, and the controller will be responsible if these measures are not sufficient or if the identification of the risk level is incorrect. The regulation indicates several levels of risk: Recital 76 mentions risk and high risk, but some lawyers argue that there is one more category, low risk/no risk. Low-risk/no-risk data processing is a situation in which the processing is unlikely to result in a risk to the rights and freedoms of natural persons. The GDPR also lists types of data processing for which a controller does not have to evaluate the level of risk because they are already classified as "high risk" processing, e.g., processing on a large scale of special categories of data or processing using new technologies. The methodology includes analysis of legal regulations, e.g., the GDPR and the Polish Act on the Protection of Personal Data, as well as ICO guidelines and articles concerning the risk-based approach in the GDPR. The main conclusion is that an appropriate risk assessment is the key to keeping data safe and avoiding financial penalties. On the one hand, this approach seems more equitable, not only for controllers and processors but also for data subjects; on the other hand, it increases controllers' uncertainty in the assessment, which could have a direct impact on inadequate data protection and potential responsibility for infringement of the regulation.

Keywords: general data protection regulation, personal data protection, privacy protection, risk based approach

Procedia PDF Downloads 250
5449 An Improved Method to Compute Sparse Graphs for Traveling Salesman Problem

Authors: Y. Wang

Abstract:

The traveling salesman problem (TSP) is an NP-hard problem in combinatorial optimization. Research shows that algorithms for the TSP on sparse graphs have shorter computation times than those working on complete graphs. We present an improved iterative algorithm to compute sparse graphs for the TSP from frequency graphs computed with frequency quadrilaterals. The iterative algorithm is enhanced by adjusting two of its parameters. The computation time of the algorithm is O(C·Nmax·n²), where C is the number of iterations, Nmax is the maximum number of frequency quadrilaterals containing each edge, and n is the scale of the TSP. The experimental results showed that the computed sparse graphs generally have fewer than 5n edges for most of these Euclidean instances. Moreover, the maximum and minimum vertex degrees in the sparse graphs do not differ much. Thus, the computation time of methods that resolve the TSP on these sparse graphs will be greatly reduced.
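
To illustrate why sparse candidate graphs help, the sketch below builds a simple k-nearest-neighbour edge set for a Euclidean instance; this is a generic sparsification stand-in, not the frequency-quadrilateral algorithm proposed in the paper.

```python
import numpy as np

def knn_candidate_edges(points, k=5):
    """Keep only each city's k nearest neighbours as candidate TSP edges."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    edges = set()
    for i in range(n):
        for j in np.argsort(dist[i])[1:k + 1]:   # skip the city itself at index 0
            edges.add((min(i, int(j)), max(i, int(j))))
    return edges

rng = np.random.default_rng(0)
cities = rng.random((100, 2))                    # a random Euclidean instance
edges = knn_candidate_edges(cities, k=5)
print(len(edges), "candidate edges instead of", 100 * 99 // 2, "in the complete graph")
```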

Keywords: frequency quadrilateral, iterative algorithm, sparse graph, traveling salesman problem

Procedia PDF Downloads 228
5448 The Effect of Treated Waste-Water on Compaction and Compression of Fine Soil

Authors: M. Attom, F. Abed, M. Elemam, M. Nazal, N. ElMessalami

Abstract:

The main objective of this paper is to study the effect of treated waste-water (TWW) on the compaction and compressibility properties of fine soil. Two types of fine (clayey) soils were selected for this study, classified as CH and CL soils. Compaction and compressibility properties such as optimum water content, maximum dry unit weight, consolidation index, swell index, maximum past pressure, and volume change were evaluated using both tap water and treated waste-water. It was found that the use of treated waste-water affects all of these properties. The maximum dry unit weight increased for both soils, and the optimum water content decreased by as much as 13.6% for the highly plastic soil. The most significant effect was observed in the swell index and swelling pressure of the soils: the swell index decreased by as much as 42% and 33% for the highly plastic and low-plasticity soils, respectively, when TWW was used, and the swelling pressure decreased by as much as 16% for both soil types. The results of this research point out that the use of treated waste-water has a positive effect on the compaction and compression properties of clay soil and show promise for the potential use of this water in engineering applications.

Keywords: consolidation, proctor compaction, swell index, treated waste-water, volume change

Procedia PDF Downloads 257
5447 Application of Italian Guidelines for Existing Bridge Management

Authors: Giovanni Menichini, Salvatore Giacomo Morano, Gloria Terenzi, Luca Salvatori, Maurizio Orlando

Abstract:

The “Guidelines for Risk Classification, Safety Assessment, and Structural Health Monitoring of Existing Bridges” were recently approved by the Italian Government to define technical standards for managing the national network of existing bridges. These guidelines provide a framework for risk mitigation and safety assessment of bridges, which are essential elements of the built environment and form the basis for the operation of transport systems. Within the guideline framework, a workflow based on three main points was proposed: (1) risk-based, i.e., based on typical parameters of hazard, vulnerability, and exposure; (2) multi-level, i.e., including six assessment levels of increasing complexity; and (3) multirisk, i.e., assessing structural/foundational, seismic, hydrological, and landslide risks. The paper focuses on applying the Italian Guidelines to specific case studies, aiming to identify the parameters that predominantly influence the determination of the “class of attention”. The significance of each parameter is determined via sensitivity analysis. Additionally, recommendations for enhancing the process of assigning the class of attention are proposed.

Keywords: bridge safety assessment, Italian guidelines implementation, risk classification, structural health monitoring

Procedia PDF Downloads 53