Search results for: artificial Bee colony algorithm
3744 Algorithm for Recognizing Trees along Power Grid Using Multispectral Imagery
Authors: C. Hamamura, V. Gialluca
Abstract:
Many electricity distributors have about 70% of their electricity interruptions arising from the cause "trees", alone or in combination with wind and rain, and with or without falling branches and/or trees. This contributes significantly to outages, resulting in high compensation costs in addition to operation and maintenance costs. At the same time, there is little structured data and few solutions to organize tree pruning plans effectively, minimizing costs in an environmentally friendly way. This work describes the development of an algorithm that provides data on trees associated with the power grid. The method proceeds in several steps using satellite imagery and a geographically vectorized grid. A sliding-window-like approach is used to search the area around the grid. The proposed method counted 764 trees on a patch of the grid, which was very close to the 738 trees counted manually. The tree data were used as part of a larger project that implements a system to optimize the tree pruning plan.
Keywords: image pattern recognition, trees pruning, trees recognition, neural network
Procedia PDF Downloads 499
3743 Crack Growth Life Prediction of a Fighter Aircraft Wing Splice Joint Under Spectrum Loading Using Random Forest Regression and Artificial Neural Networks with Hyperparameter Optimization
Authors: Zafer Yüce, Paşa Yayla, Alev Taşkın
Abstract:
There are numerous analytical methods to estimate the crack growth life of a component. Soft computing methods show an increasing trend in predicting fatigue life. Their ability to build complex relationships and their capability to handle huge amounts of data are motivating researchers and industry professionals to employ them for challenging problems. This study focuses on soft computing methods, especially random forest regressors and artificial neural networks with hyperparameter optimization algorithms such as grid search and random grid search, to estimate the crack growth life of an aircraft wing splice joint under variable amplitude loading. The TensorFlow and Scikit-learn libraries of Python are used to build the machine learning models for this study. The material considered in this work is 7050-T7451 aluminum, which is commonly preferred as a structural element in the aerospace industry, and regarding the crack type, a corner crack is used. A finite element model is built for the joint to calculate fastener loads and stresses on the structure. After the finite element model results are validated with analytical calculations, the findings of the finite element model are fed to the AFGROW software to calculate analytical crack growth lives. Based on the Fighter Aircraft Loading Standard for Fatigue (FALSTAFF), 90 unique fatigue loading spectra are developed for various load levels, and these spectra are then utilized as inputs to the artificial neural network and random forest regression models for predicting crack growth life. Finally, the crack growth life predictions of the machine learning models are compared with the analytical calculations. According to the findings, a good correlation is observed between analytical and predicted crack growth lives.
Keywords: aircraft, fatigue, joint, life, optimization, prediction
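For illustration, a minimal scikit-learn sketch (not taken from the paper) of a random forest regressor tuned by grid search for crack growth life prediction; the feature matrix, target values, and parameter grid below are placeholder assumptions:

```python
# Illustrative sketch only: a random forest regressor with grid-search hyperparameter
# optimization for crack growth life prediction. The feature matrix X (spectrum
# descriptors) and target y (analytical crack growth lives) are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
X = rng.random((90, 6))          # e.g. 90 FALSTAFF-based spectra, 6 load-level features (assumed)
y = rng.random(90) * 1e5         # crack growth life in cycles (placeholder)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

param_grid = {
    "n_estimators": [100, 300, 500],
    "max_depth": [None, 5, 10],
    "min_samples_leaf": [1, 2, 4],
}
search = GridSearchCV(RandomForestRegressor(random_state=0), param_grid, cv=5,
                      scoring="neg_mean_absolute_error")
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("test R^2:", search.best_estimator_.score(X_test, y_test))
```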
Procedia PDF Downloads 175
3742 Value-Based Argumentation Frameworks and Judicial Moral Reasoning
Authors: Sonia Anand Knowlton
Abstract:
As the use of artificial intelligence becomes increasingly integrated into virtually every area of life, the need and interest to logically formalize the law and judicial reasoning is growing tremendously. The study of argumentation frameworks (AFs) provides promise in this respect. AFs provide a way of structuring human reasoning using a formal system of non-monotonic logic. P. M. Dung first introduced this framework and demonstrated that certain arguments must prevail and certain arguments must perish based on whether they are logically "attacked" by other arguments. Dung labelled the set of prevailing arguments the "preferred extension" of the given argumentation framework. Trevor Bench-Capon's value-based argumentation frameworks extended Dung's AF system by allowing arguments to derive their force from the promotion of "preferred" values. In VAF systems, the success of an attack from argument A on argument B (i.e., the triumph of argument A) requires that argument B does not promote a value that is preferred to that of argument A. There has been thorough discussion of the application of VAFs to the law within the computer science literature, mainly demonstrating that legal cases can be effectively mapped out using VAFs. This article analyses VAFs from a jurisprudential standpoint to provide a philosophical and theoretical analysis of what VAFs tell the legal community about judicial reasoning, specifically distinguishing between legal and moral reasoning. It highlights the limitations of using VAFs to account for judicial moral reasoning in theory and in practice.
Keywords: nonmonotonic logic, legal formalization, computer science, artificial intelligence, morality
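For illustration, a minimal sketch of the value-based attack-success rule described above: an attack from argument A on argument B succeeds unless B promotes a value that the audience prefers to A's value. The audience ordering and example values are illustrative assumptions, not taken from the article:

```python
# Minimal sketch of the VAF attack-success rule: an attack from argument A on argument B
# succeeds (defeats B) unless the value promoted by B is preferred, by the audience, to the
# value promoted by A. The audience ordering and example values are illustrative.

def attack_succeeds(attacker_value, target_value, preference_order):
    """preference_order lists values from most to least preferred."""
    rank = {v: i for i, v in enumerate(preference_order)}
    # The attack fails only when the target's value is strictly preferred.
    return not rank[target_value] < rank[attacker_value]

audience = ["life", "property", "efficiency"]          # most preferred first (assumed)
print(attack_succeeds("property", "life", audience))   # False: B's value preferred, attack fails
print(attack_succeeds("life", "efficiency", audience)) # True: attack succeeds
```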
Procedia PDF Downloads 74
3741 Transfer Knowledge From Multiple Source Problems to a Target Problem in Genetic Algorithm
Authors: Terence Soule, Tami Al Ghamdi
Abstract:
To study how to transfer knowledge from multiple source problems to a target problem, we modeled the Transfer Learning (TL) process using Genetic Algorithms as the model solver. TL is the process of transferring learned data from one problem to another in order to help Machine Learning (ML) algorithms find a solution. Genetic Algorithms (GAs) give researchers access to the information we have about how the old problems were solved. In this paper, we have five different source problems, and we transfer the knowledge to the target problem. We studied different scenarios of the target problem. The results showed that combining knowledge from multiple source problems improves GA performance. Also, the process of combining knowledge from several problems promotes diversity in the transferred population.
Keywords: transfer learning, genetic algorithm, evolutionary computation, source and target
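For illustration, a minimal sketch (not the authors' implementation) of the transfer idea: the initial population for the target problem is seeded with the best individuals evolved on several source problems instead of being generated at random. The toy bit-string problems and GA settings below are assumptions:

```python
# Toy sketch: seed the target problem's initial GA population with individuals evolved
# on source problems. Problems, operators and parameters are illustrative only.
import random

random.seed(0)
LENGTH = 20

def evolve(fitness, population, generations=50):
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]                       # simple truncation selection
        children = []
        while len(children) < len(population) - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, LENGTH)
            child = a[:cut] + b[cut:]                   # one-point crossover
            i = random.randrange(LENGTH)
            child = child[:i] + str(1 - int(child[i])) + child[i + 1:]   # bit-flip mutation
            children.append(child)
        population = parents + children
    return sorted(population, key=fitness, reverse=True)

def random_pop(n):
    return ["".join(random.choice("01") for _ in range(LENGTH)) for _ in range(n)]

# Source problems (placeholders for real source tasks): maximize ones / maximize zeros.
source_pops = [evolve(lambda s: s.count("1"), random_pop(30)),
               evolve(lambda s: s.count("0"), random_pop(30))]

# Target problem: match an alternating pattern; seed its population with best source individuals.
target_fitness = lambda s: sum(1 for i, c in enumerate(s) if c == "01"[i % 2])
seeded = [p for pop in source_pops for p in pop[:15]]
result = evolve(target_fitness, seeded)
print(max(result, key=target_fitness), max(map(target_fitness, result)))
```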
Procedia PDF Downloads 140
3740 Disease Level Assessment in Wheat Plots Using a Residual Deep Learning Algorithm
Authors: Felipe A. Guth, Shane Ward, Kevin McDonnell
Abstract:
The assessment of disease levels in crop fields is an important and time-consuming task that generally relies on the expert knowledge of trained individuals. Image classification for agricultural problems has historically been based on classical machine learning strategies that make use of hand-engineered features on top of a classification algorithm. This approach tends not to produce results with high accuracy and generalization when the elements to be classified exhibit significant variability. The advent of deep convolutional neural networks has revolutionized the field of machine learning, especially in computer vision tasks. These networks have great learning capacity and have been applied successfully to image classification and object detection tasks in recent years. The objective of this work was to propose a new method based on deep convolutional neural networks for the task of disease level monitoring. Common RGB images of winter wheat were obtained during a growing season. Five categories of disease level presence were produced, in collaboration with agronomists, for the algorithm to classify. Disease level assessments performed by experts provided ground truth data for the disease score of the same winter wheat plots where the RGB images were acquired. The system had an overall accuracy of 84% on the discrimination of the disease level classes.
Keywords: crop disease assessment, deep learning, precision agriculture, residual neural networks
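For illustration, a hedged sketch (not the authors' model) of a residual convolutional classifier for five disease-level classes, built on a tf.keras ResNet50 backbone; the input size and training settings are assumptions:

```python
# Illustrative sketch only: a residual convolutional network (ResNet50 backbone from
# tf.keras) adapted to classify RGB wheat-plot images into five disease-level classes.
# Input size, optimizer and training settings are assumptions, not the authors' setup.
import tensorflow as tf

base = tf.keras.applications.ResNet50(include_top=False, weights=None,
                                      input_shape=(224, 224, 3), pooling="avg")
outputs = tf.keras.layers.Dense(5, activation="softmax")(base.output)  # 5 disease levels
model = tf.keras.Model(inputs=base.input, outputs=outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_disease_levels, validation_split=0.2, epochs=30)
```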
Procedia PDF Downloads 332
3739 Integrating AI in Education: Enhancing Learning Processes and Personalization
Authors: Waleed Afandi
Abstract:
Artificial intelligence (AI) has rapidly transformed various sectors, including education. This paper explores the integration of AI in education, emphasizing its potential to revolutionize learning processes, enhance teaching methodologies, and personalize education. We examine the historical context of AI in education, current applications, and the potential challenges and ethical considerations associated with its implementation. By reviewing a wide range of literature, this study aims to provide a comprehensive understanding of how AI can be leveraged to improve educational outcomes and the future directions of AI-driven educational innovations. Additionally, the paper discusses the impact of AI on student engagement, teacher support, and administrative efficiency. Case studies highlighting successful AI applications in diverse educational settings are presented, showcasing the practical benefits and real-world implications. The analysis also addresses potential disparities in access to AI technologies and suggests strategies to ensure equitable implementation. Through a balanced examination of the promises and pitfalls of AI in education, this study seeks to inform educators, policymakers, and technologists about the optimal pathways for integrating AI to foster an inclusive, effective, and innovative educational environment.
Keywords: artificial intelligence, education, personalized learning, teaching methodologies, educational outcomes, AI applications, student engagement, teacher support, administrative efficiency, equity in education
Procedia PDF Downloads 32
3738 Application of Smplify-X Algorithm with Enhanced Gender Classifier in 3D Human Pose Estimation
Authors: Jiahe Liu, Hongyang Yu, Miao Luo, Feng Qian
Abstract:
The widespread application of 3D human body reconstruction spans various fields. Smplify-X, an algorithm reliant on single-image input, employs three distinct body parameter templates, necessitating gender classification of individuals within the input image. Researchers employed a ResNet18 network to train a gender classifier within the Smplify-X framework, setting the threshold at 0.9, designating images falling below this threshold as having neutral gender. This model achieved 62.38% accurate predictions and 7.54% incorrect predictions. Our improvement involved refining the MobileNet network, resulting in a raised threshold of 0.97. Consequently, we attained 78.89% accurate predictions and a mere 0.2% incorrect predictions, markedly enhancing prediction precision and enabling more precise 3D human body reconstruction.
Keywords: SMPLX, mobileNet, gender classification, 3D human reconstruction
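For illustration, a small sketch of the thresholding rule described above: if the classifier's top softmax score falls below the threshold, the neutral body template is used. The scores below are illustrative:

```python
# Small sketch of the thresholding rule: if the classifier's top softmax score is below
# the threshold, the image is treated as gender-neutral and the neutral SMPL-X body
# template is used. The probabilities shown are illustrative.

def select_template(male_prob, female_prob, threshold=0.97):
    top = max(male_prob, female_prob)
    if top < threshold:
        return "neutral"
    return "male" if male_prob >= female_prob else "female"

print(select_template(0.99, 0.01))   # 'male'
print(select_template(0.60, 0.40))   # 'neutral' (below threshold)
```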
Procedia PDF Downloads 100
3737 Mammographic Multi-View Cancer Identification Using Siamese Neural Networks
Authors: Alisher Ibragimov, Sofya Senotrusova, Aleksandra Beliaeva, Egor Ushakov, Yuri Markin
Abstract:
Mammography plays a critical role in screening for breast cancer in women, and artificial intelligence has enabled the automatic detection of diseases in medical images. Many of the current techniques used for mammogram analysis focus on a single view (mediolateral or craniocaudal view), while in clinical practice, radiologists consider multiple views of mammograms from both breasts to make a correct decision. Consequently, computer-aided diagnosis (CAD) systems could benefit from incorporating information gathered from multiple views. In this study, we introduce a method based on a Siamese neural network (SNN) model that simultaneously analyzes mammographic images from three views: bilateral and ipsilateral. In this way, when a decision is made on a single image of one breast, attention is also paid to two other images: a view of the same breast in a different projection and an image of the other breast. Consequently, the algorithm closely mimics the radiologist's practice of paying attention to the entire examination of a patient rather than to a single image. Additionally, to the best of our knowledge, this research represents the first experiments conducted using the recently released Vietnamese dataset of digital mammography (VinDr-Mammo). On an independent test set of images from this dataset, the best model achieved an AUC of 0.87 per image. This suggests that the method can provide a valuable automated second opinion in the interpretation of mammograms and breast cancer diagnosis, which in the future may help to alleviate the burden on radiologists and serve as an additional layer of verification.
Keywords: breast cancer, computer-aided diagnosis, deep learning, multi-view mammogram, siamese neural network
Procedia PDF Downloads 138
3736 An Ensemble-based Method for Vehicle Color Recognition
Authors: Saeedeh Barzegar Khalilsaraei, Manoocheher Kelarestaghi, Farshad Eshghi
Abstract:
The vehicle color, as a prominent and stable feature, helps to identify a vehicle more accurately. As a result, vehicle color recognition is of great importance in intelligent transportation systems. Unlike conventional methods, which use only a single Convolutional Neural Network (CNN) for feature extraction or classification, in this paper four CNNs with different architectures, each performing well on different classes, are trained to extract various features from the input image. To take advantage of the distinct capability of each network, the multiple outputs are combined using a stack generalization algorithm as an ensemble technique. As a result, the final model performs better than each CNN individually in vehicle color identification. The evaluation results, in terms of overall average accuracy and accuracy variance, show that the proposed method outperforms state-of-the-art rivals.
Keywords: vehicle color recognition, ensemble algorithm, stack generalization, convolutional neural network
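For illustration, a hedged sketch of stack generalization: the class-probability outputs of several base CNNs are concatenated and a meta-classifier is trained on top. Using logistic regression as the meta-learner, and the placeholder outputs below, are assumptions, not details from the paper:

```python
# Illustrative sketch of stack generalization: class-probability outputs of several
# pre-trained base CNNs are concatenated into a meta-feature vector and a meta-classifier
# is trained on top. The logistic-regression combiner and the placeholder data are assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression

def stack_predictions(cnn_probability_outputs):
    """cnn_probability_outputs: list of (n_samples, n_classes) arrays, one per CNN."""
    return np.hstack(cnn_probability_outputs)

# Placeholder outputs of four CNNs on 200 images over, say, 8 vehicle colors.
rng = np.random.default_rng(1)
base_outputs = [rng.dirichlet(np.ones(8), size=200) for _ in range(4)]
meta_features = stack_predictions(base_outputs)           # shape (200, 32)
labels = rng.integers(0, 8, size=200)                     # placeholder color labels

meta_clf = LogisticRegression(max_iter=1000).fit(meta_features, labels)
print(meta_clf.predict(meta_features[:5]))
```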
Procedia PDF Downloads 85
3735 Iterative Segmentation and Application of Hausdorff Dilation Distance in Defect Detection
Authors: S. Shankar Bharathi
Abstract:
Inspection of surface defects on metallic components has always been challenging due to their specular property. Defects such as scratches, rust, and pitting are very common on metallic surfaces during the manufacturing process. These defects, if unchecked, can hamper performance and reduce the lifetime of such components. Many of the conventional image processing algorithms for detecting surface defects involve segmentation techniques based on thresholding, edge detection, watershed segmentation, and textural segmentation. They later employ other suitable algorithms based on morphology, region growing, shape analysis, or neural networks for classification purposes. In this paper, the work is focused only on detecting scratches. Global and other thresholding techniques were used to extract the defects, but they proved to be inaccurate in extracting the defects alone. However, this paper does not focus on a comparison of different segmentation techniques, but rather describes a novel approach towards segmentation combined with the Hausdorff dilation distance. The proposed algorithm is based on the distribution of the intensity levels, that is, whether a certain gray level is concentrated or evenly distributed, and works by extracting such concentrated pixels. Defective images showed a higher concentration of some gray levels, whereas in non-defective images the gray levels were evenly distributed, with no concentration. This formed the basis for detecting the defects in the proposed algorithm. The Hausdorff dilation distance, based on mathematical morphology, was used to strengthen the segmentation of the defects.
Keywords: metallic surface, scratches, segmentation, Hausdorff dilation distance, machine vision
Procedia PDF Downloads 428
3734 Experimental Investigation of Interfacial Bond Strength of Concrete Layers
Authors: Rajkamal Kumar, Sudhir Mishra
Abstract:
The connections between the various elements of concrete structures play a vital role in determining the durability of the structures. These connections produce discontinuities, and to ensure the monolithic behavior of structures, they should be carefully designed. Connections between concrete layers may occur in various situations, such as structural repair and rehabilitation or the construction of large structures with cast-in-situ or pre-cast elements. The bond strength at the interface of these concrete layers should be able to prevent progressive slip from taking place and should ensure satisfactory performance of the structure. Different approaches to enhance the bond strength at the interface have been a major area of research. Nowadays, micro-concrete is becoming popular as a repair material. In this context, this paper presents experimental results for connections between concrete layers of different ages with artificial indentation at the interface and two types of repair material: concrete with the same composition as the parent concrete, and ready-mix mortar (micro-concrete). Artificial indentations (grooves and holes) were made on the old layer of concrete to increase the bond strength. Curing plays an important role in determining the bond strength, and the optimum curing duration is also discussed for each type of repair material. Different types of failure patterns are also described.
Keywords: adhesion, cohesion, compressive stress, micro-concrete, shear stress, slant shear test
Procedia PDF Downloads 334
3733 Hit-Or-Miss Transform as a Tool for Similar Shape Detection
Authors: Osama Mohamed Elrajubi, Idris El-Feghi, Mohamed Abu Baker Saghayer
Abstract:
This paper describes the identification of specific shapes within binary images using the morphological Hit-or-Miss Transform (HMT). The Hit-or-Miss transform is a general binary morphological operation that can be used to search for particular patterns of foreground and background pixels in an image. It is actually a basic operation of binary morphology, since almost all other binary morphological operators are derived from it. The input of this method is a binary image and a structuring element (a template to be searched for in the binary image), while the output is another binary image. In this paper, a modification of the Hit-or-Miss transform is proposed in which the accuracy of the algorithm is adjusted according to the similarity between the template and the sought pattern. The method has been implemented in C. The algorithm has been tested on several images, and the results have shown that this new method can be used for similar shape detection.
Keywords: hit-or-miss operator transform, HMT, binary morphological operation, shape detection, binary images processing
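For illustration, a sketch of the standard exact-match Hit-or-Miss transform using SciPy, here locating isolated foreground pixels; the paper's similarity-based relaxation is not reproduced, and the test image is illustrative:

```python
# Sketch of the standard (exact-match) Hit-or-Miss transform using SciPy, locating
# isolated foreground pixels (foreground centre, all eight neighbours background).
# The paper's modification, which relaxes exact matching, is not shown here.
import numpy as np
from scipy import ndimage

image = np.zeros((9, 9), dtype=bool)
image[2, 2] = True            # an isolated pixel -> should be detected
image[5:8, 5:8] = True        # a 3x3 blob -> should not be detected

hit = np.array([[0, 0, 0],
                [0, 1, 0],
                [0, 0, 0]], dtype=bool)   # required foreground pattern
miss = np.array([[1, 1, 1],
                 [1, 0, 1],
                 [1, 1, 1]], dtype=bool)  # required background pattern

result = ndimage.binary_hit_or_miss(image, structure1=hit, structure2=miss)
print(np.argwhere(result))    # -> [[2 2]]
```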
Procedia PDF Downloads 333
3732 Generating 3D Anisotropic Centroidal Voronoi Tessellations
Authors: Alexandre Marin, Alexandra Bac, Laurent Astart
Abstract:
New numerical methods for PDE resolution (such as Finite Volumes (FV) or Virtual Elements Method (VEM)) open new needs in terms of meshing of domains of interest, and in particular, polyhedral meshes have many advantages. One way to build such meshes consists of constructing Restricted Voronoi Diagrams (RVDs) whose boundaries respect the domain of interest. By minimizing a function defined for RVDs, the shapes of cells can be controlled, e.g., elongated according to user-defined directions or adjusted to comply with given aspect ratios (anisotropy) and density variations. In this paper, our contribution is threefold: First, we introduce a new gradient formula for the Voronoi tessellation energy under a continuous anisotropy field. Second, we describe a meshing algorithm based on the optimisation of this function that we validate against state-of-the-art approaches. Finally, we propose a hierarchical approach to speed up our meshing algorithm.
Keywords: anisotropic Voronoi diagrams, meshes for numerical simulations, optimisation, volumic polyhedral meshing
Procedia PDF Downloads 116
3731 Short Answer Grading Using Multi-Context Features
Authors: S. Sharan Sundar, Nithish B. Moudhgalya, Nidhi Bhandari, Vineeth Vijayaraghavan
Abstract:
Automatic short answer grading is one of the prime applications of artificial intelligence in education. Several approaches involving the use of selective handcrafted features, graphical matching techniques, concept identification and mapping, complex deep frameworks, sentence embeddings, etc., have been explored over the years. However, keeping in mind the real-world application of the task, these solutions present a slight overhead in terms of computation and resources in achieving high performance. In this work, a simple and effective solution is proposed that makes use of elemental features based on statistical and linguistic properties and word-based similarity measures, in conjunction with tree-based classifiers and regressors. The results for classification tasks show improvements ranging from 1% to 30%, while the regression task shows a stark improvement of 35%. The authors attribute these improvements to the addition of multiple similarity scores, which provide an ensemble of scoring criteria to the models. The authors also believe the work reaffirms that classical natural language processing techniques and simple machine learning models can be used to achieve high results for short answer grading.
Keywords: artificial intelligence, intelligent systems, natural language processing, text mining
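For illustration, a simplified sketch (not the authors' feature set or model) of the general approach: a few elemental statistical and word-overlap features per reference/answer pair, fed to a tree-based regressor that predicts a grade:

```python
# Simplified sketch of the general approach: elemental statistical and word-overlap
# similarity features per (reference answer, student answer) pair, fed to a tree-based
# regressor that predicts a grade. Features, data and scores below are placeholders.
from sklearn.ensemble import GradientBoostingRegressor

def features(reference, answer):
    ref, ans = set(reference.lower().split()), set(answer.lower().split())
    overlap = len(ref & ans) / max(len(ref | ans), 1)        # Jaccard word overlap
    return [len(ans), len(ans - ref), overlap]

pairs = [("gravity pulls objects toward earth", "objects fall because gravity pulls them", 4.0),
         ("gravity pulls objects toward earth", "magnets attract metal", 0.5),
         ("gravity pulls objects toward earth", "earth's gravity pulls objects down", 5.0)]

X = [features(r, a) for r, a, _ in pairs]
y = [score for _, _, score in pairs]

model = GradientBoostingRegressor().fit(X, y)
print(model.predict([features("gravity pulls objects toward earth", "gravity makes things fall")]))
```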
Procedia PDF Downloads 133
3730 Human Posture Estimation Based on Multiple Viewpoints
Authors: Jiahe Liu, Hongyang Yu, Feng Qian, Miao Luo
Abstract:
This study aimed to address the problem of improving the confidence of key points by fusing multi-view information, thereby estimating human posture more accurately. We first obtained multi-view image information and then used the MvP algorithm to fuse this multi-view information together to obtain a set of high-confidence human key points. We used these as the input for the Spatio-Temporal Graph Convolution (ST-GCN). ST-GCN is a deep learning model used for processing spatio-temporal data, which can effectively capture spatio-temporal relationships in video sequences. By using the MvP algorithm to fuse multi-view information and inputting it into the spatio-temporal graph convolution model, this study provides an effective method to improve the accuracy of human posture estimation and provides strong support for further research and application in related fields.
Keywords: multi-view, pose estimation, ST-GCN, joint fusion
Procedia PDF Downloads 70
3729 The Use of Support Vector Machine and Back Propagation Neural Network for Prediction of Daily Tidal Levels along the Jeddah Coast, Saudi Arabia
Authors: E. A. Mlybari, M. S. Elbisy, A. H. Alshahri, O. M. Albarakati
Abstract:
Sea level rise threatens to increase the impact of future storms and hurricanes on coastal communities. Accurate prediction of sea level change is an important task for planning constructions and human activities in coastal and oceanic areas. In this study, support vector machines (SVM) are proposed to predict daily tidal levels along the Jeddah Coast, Saudi Arabia. The optimal parameter values of the kernel function are determined using a genetic algorithm. The SVM results are compared with the field data and with a back-propagation neural network (BPNN). Among the models, the SVM is superior to the BPNN and has better generalization performance.
Keywords: tides, prediction, support vector machines, genetic algorithm, back-propagation neural network, risk, hazards
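For illustration, a hedged sketch of support vector regression of the daily tidal level from lagged observations; the paper tunes the kernel parameters with a genetic algorithm, for which a plain grid search is used here as a stand-in, and the data are synthetic:

```python
# Illustrative sketch: support vector regression of the daily tidal level from lagged
# past levels. The paper tunes the kernel parameters with a genetic algorithm; a plain
# grid search is used here as a simple stand-in, and the series is synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

t = np.arange(400)
level = np.sin(2 * np.pi * t / 14.0) + 0.1 * np.random.default_rng(0).normal(size=t.size)

lags = 3
X = np.column_stack([level[i:len(level) - lags + i] for i in range(lags)])  # past 3 days
y = level[lags:]

param_grid = {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0], "epsilon": [0.01, 0.1]}
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5).fit(X, y)

print("best kernel parameters:", search.best_params_)
print("next-day prediction:", search.best_estimator_.predict(X[-1:]))
```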
Procedia PDF Downloads 468
3728 Variable Tree Structure QR Decomposition-M Algorithm (QRD-M) in Multiple Input Multiple Output-Orthogonal Frequency Division Multiplexing (MIMO-OFDM) Systems
Authors: Jae-Hyun Ro, Jong-Kwang Kim, Chang-Hee Kang, Hyoung-Kyu Song
Abstract:
In multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) systems, the QR decomposition-M algorithm (QRD-M) has suboptimal error performance. However, the QRD-M still has high complexity due to the many calculations at each layer of the tree structure. To reduce the complexity of the QRD-M, the proposed QRD-M modifies the existing tree structure by eliminating unnecessary candidates at almost all layers. The elimination method discards the candidates whose accumulated squared Euclidean distances are larger than a calculated threshold. The simulation results show that the proposed QRD-M has the same bit error rate (BER) performance as the conventional QRD-M with lower complexity.
Keywords: complexity, MIMO-OFDM, QRD-M, squared Euclidean distance
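For illustration, a simplified sketch (not the authors' code) of QRD-M detection for a small MIMO system with the threshold-based discarding of candidates described above; the system size, QPSK constellation, M, and threshold value are illustrative choices:

```python
# Simplified QRD-M detection sketch with the idea of discarding candidates whose
# accumulated squared Euclidean distance exceeds a threshold. System size, constellation
# (QPSK), M and the threshold are illustrative assumptions.
import numpy as np

def qrdm_detect(H, y, constellation, M=4, threshold=np.inf):
    nt = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    candidates = [((), 0.0)]                       # (partial symbol tuple, accumulated metric)
    for layer in range(nt - 1, -1, -1):            # detect from the last layer upwards
        extended = []
        for symbols, metric in candidates:
            for s in constellation:
                x_part = (s,) + symbols            # symbols for layers layer..nt-1
                interference = sum(R[layer, layer + 1 + k] * x_part[1 + k]
                                   for k in range(len(symbols)))
                err = abs(z[layer] - R[layer, layer] * s - interference) ** 2
                extended.append((x_part, metric + err))
        # keep the M best candidates, then discard those above the threshold
        extended.sort(key=lambda c: c[1])
        candidates = [c for c in extended[:M] if c[1] <= threshold]
        if not candidates:                         # safeguard: keep at least the best one
            candidates = extended[:1]
    best_symbols, _ = min(candidates, key=lambda c: c[1])
    return np.array(best_symbols)

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
rng = np.random.default_rng(0)
H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
x = rng.choice(qpsk, size=4)
y = H @ x + 0.05 * (rng.normal(size=4) + 1j * rng.normal(size=4))
print(np.allclose(qrdm_detect(H, y, qpsk, M=4, threshold=10.0), x))
```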
Procedia PDF Downloads 333
3727 Data Clustering Algorithm Based on Multi-Objective Periodic Bacterial Foraging Optimization with Two Learning Archives
Authors: Chen Guo, Heng Tang, Ben Niu
Abstract:
Clustering splits objects into different groups based on similarity, so that objects in the same group have higher similarity and objects in different groups have lower similarity. Thus, clustering can be treated as an optimization problem to maximize the intra-cluster similarity or the inter-cluster dissimilarity. In real-world applications, datasets often have complex characteristics: sparsity, overlap, high dimensionality, etc. When facing these datasets, simultaneously optimizing two or more objectives can obtain better clustering results than optimizing one objective. However, apart from objective-weighting methods, traditional clustering approaches have difficulty in solving multi-objective data clustering problems. For this reason, evolutionary multi-objective optimization algorithms have been investigated by researchers to optimize multiple clustering objectives. In this paper, the Data Clustering algorithm based on Multi-objective Periodic Bacterial Foraging Optimization with two Learning Archives (DC-MPBFOLA) is proposed. Specifically, first, to reduce the high computational complexity of the original BFO, periodic BFO is employed as the basic algorithmic framework and is then extended to a multi-objective form. Second, two learning strategies are proposed based on the two learning archives to guide the bacterial swarm to move in a better direction. On the one hand, the global best is selected from the global learning archive according to the convergence index and diversity index. On the other hand, the personal best is selected from the personal learning archive according to the sum of weighted objectives. Based on these learning strategies, a chemotaxis operation is designed. Third, an elite learning strategy is designed to provide fresh power to the objects in the two learning archives: when the objects in these two archives do not change for two consecutive iterations, randomly re-initializing one dimension of the objects can prevent the proposed algorithm from falling into local optima. Fourth, to validate the performance of the proposed algorithm, DC-MPBFOLA is compared with four state-of-the-art evolutionary multi-objective optimization algorithms and one classical clustering algorithm on several datasets using multiple evaluation indexes. To further verify the effectiveness and feasibility of the strategies designed in DC-MPBFOLA, variants of DC-MPBFOLA are also proposed. Experimental results demonstrate that DC-MPBFOLA outperforms its competitors on all evaluation indexes and clustering partitions. These results also indicate that the designed strategies positively influence the performance improvement of the original BFO.
Keywords: data clustering, multi-objective optimization, bacterial foraging optimization, learning archives
Procedia PDF Downloads 140
3726 Parallel Evaluation of Sommerfeld Integrals for Multilayer Dyadic Green's Function
Authors: Duygu Kan, Mehmet Cayoren
Abstract:
Sommerfeld integrals (SIs) are commonly encountered in electromagnetics problems involving the analysis of antennas and scatterers embedded in planar multilayered media. Generally speaking, the analytical solution of SIs is unavailable, and it is well known that numerical evaluation of SIs is very time consuming and computationally expensive due to the highly oscillating and slowly decaying nature of the integrands. Therefore, fast computation of SIs is of paramount importance. In this paper, a parallel code has been developed to speed up the computation of SIs in the framework of calculating the dyadic Green's function in multilayered media. An OpenMP shared-memory approach is used to parallelize the SI algorithm, resulting in significant time savings. Moreover, accelerating the computation of the dyadic Green's function based on the developed parallel SI algorithm is discussed.
Keywords: Sommerfeld integrals, multilayer dyadic Green's function, OpenMP, shared memory parallel programming
Procedia PDF Downloads 247
3725 A Comparative Study on a Tilt-Integral-Derivative Controller with Proportional-Integral-Derivative Controller for a Pacemaker
Authors: Aysan Esgandanian, Sabalan Daneshvar
Abstract:
This study compares a proportional-integral-derivative (PID) controller with a tilt-integral-derivative (TID) controller for cardiac pacemaker systems, which can automatically control the heart rate to accurately track a desired preset profile. The controller offers good adaptation of the heart to the physiological needs of the patient. The parameters of both controllers are tuned by a particle swarm optimization (PSO) algorithm, which uses the integral of time square error as the fitness function to be minimized. Simulations are performed on a developed model of the human cardiovascular system, and the results demonstrate that the TID controller produces superior control performance compared to the PID controller. All simulations were performed in MATLAB.
Keywords: integral of time square error, pacemaker systems, proportional-integral-derivative controller, PSO algorithm, tilt-integral-derivative controller
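For reference, the standard textbook forms of the two controllers and of the integral-of-time-square-error criterion minimized by the PSO tuning loop; the gains below are generic symbols, not the paper's tuned values:

```latex
% Standard forms (not the paper's tuned values): PID and TID controller transfer
% functions and the time-weighted squared-error criterion used as the PSO fitness.
\[
C_{\mathrm{PID}}(s) = K_p + \frac{K_i}{s} + K_d s,
\qquad
C_{\mathrm{TID}}(s) = \frac{K_t}{s^{1/n}} + \frac{K_i}{s} + K_d s,
\qquad
\mathrm{ITSE} = \int_0^{T} t\, e^2(t)\, dt .
\]
```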
Procedia PDF Downloads 463
3724 Clustering for Detection of the Population at Risk of Anticholinergic Medication
Authors: A. Shirazibeheshti, T. Radwan, A. Ettefaghian, G. Wilson, C. Luca, Farbod Khanizadeh
Abstract:
Anticholinergic medication has been associated with events such as falls, delirium, and cognitive impairment in older patients. To further assess this, anticholinergic burden scores have been developed to quantify risk. A risk model based on clustering was deployed in a healthcare management system to cluster patients into multiple risk groups according to the anticholinergic burden scores of the multiple medicines prescribed to them, in order to facilitate clinical decision-making. To do so, anticholinergic burden scores of drugs were extracted from the literature, which categorizes the risk on a scale of 1 to 3. Given the patients' prescription data in the healthcare database, a weighted anticholinergic risk score was derived per patient based on the prescription of multiple anticholinergic drugs. This study was conducted on over 300,000 records of patients currently registered with a major regional UK-based healthcare provider. The weighted risk scores were used as inputs to an unsupervised learning algorithm (mean-shift clustering) that groups patients into clusters representing different levels of anticholinergic risk. To further evaluate the performance of the model, any association between the average risk score within each group and other factors, such as socioeconomic status (i.e., Index of Multiple Deprivation) and an index of health and disability, was investigated. The clustering identifies a group of 15 patients at the highest risk from multiple anticholinergic medications. Our findings also show that this group of patients is located within more deprived areas of London compared to the population of the other risk groups. Furthermore, the prescription of anticholinergic medicines is more skewed towards female than male patients, indicating that females are more at risk from this kind of multiple medication. The risk may be monitored and controlled in healthcare management systems that are well equipped with artificial intelligence.
Keywords: anticholinergic medicines, clustering, deprivation, socioeconomic status
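For illustration, a hedged sketch of the scoring and clustering steps described above: a per-patient weighted burden score is derived from literature scores of the prescribed drugs and grouped with mean-shift clustering; the drug scores and prescriptions below are made up for illustration, not taken from the study's database:

```python
# Illustrative sketch: a per-patient anticholinergic burden score is computed by summing
# the literature scores (1-3) of the prescribed drugs, and patients are grouped with
# mean-shift clustering. The drug scores and prescriptions below are illustrative only.
import numpy as np
from sklearn.cluster import MeanShift

burden_scores = {"amitriptyline": 3, "oxybutynin": 3, "loratadine": 1, "ranitidine": 1}

patients = [["loratadine"],
            ["amitriptyline", "oxybutynin"],
            ["ranitidine", "loratadine"],
            ["amitriptyline", "oxybutynin", "loratadine"],
            []]

risk = np.array([[sum(burden_scores[d] for d in drugs)] for drugs in patients], dtype=float)

labels = MeanShift().fit_predict(risk)      # clusters patients into risk groups
for drugs, score, label in zip(patients, risk.ravel(), labels):
    print(f"score={score:.0f} group={label} drugs={drugs}")
```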
Procedia PDF Downloads 212
3723 Developing an ANN Model to Predict Anthropometric Dimensions Based on Real Anthropometric Database
Authors: Waleed A. Basuliman, Khalid S. AlSaleh, Mohamed Z. Ramadan
Abstract:
Applying anthropometric dimensions is considered one of the important factors when designing any human-machine system. In this study, the estimation of anthropometric dimensions has been improved by developing an artificial neural network that aims to predict the anthropometric measurements of males in Saudi Arabia. A total of 1427 Saudi males aged 6 to 60 participated in measuring twenty anthropometric dimensions. These anthropometric measurements are important for designing the majority of work and life applications in Saudi Arabia. The data were collected over 8 months from different locations in Riyadh City. Five of these dimensions were used as predictor variables (inputs) of the model, and the remaining fifteen dimensions were set to be the measured variables (outcomes). The hidden layers were varied during the structuring stage, and the best performance was achieved with the network structure 6-25-15. The results showed that the developed neural network model was able to predict the body dimensions of the population of Saudi Arabia significantly well. The network mean absolute percentage error (MAPE) and root mean squared error (RMSE) were found to be 0.0348 and 3.225, respectively. The accuracy of the developed neural network was evaluated by comparing the predicted outcomes with those of a multiple regression model. The ANN model performed better and resulted in excellent correlation coefficients between the predicted and actual dimensions.
Keywords: artificial neural network, anthropometric measurements, backpropagation, real anthropometric database
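For illustration, a hedged sketch (not the study's 6-25-15 network or its real database) of a multi-output feed-forward network that maps a few measured dimensions to the remaining ones, with MAPE and RMSE computed as in the abstract; the data below are synthetic:

```python
# Illustrative sketch: a multi-output feed-forward network mapping 5 measured dimensions
# to 15 predicted ones, evaluated with MAPE and RMSE. Hidden-layer size and synthetic
# data are assumptions, not the study's configuration or database.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
predictors = rng.normal(size=(500, 5))                              # 5 measured dimensions (placeholder)
W = rng.normal(size=(5, 15))
targets = 100 + predictors @ W + 0.1 * rng.normal(size=(500, 15))   # 15 dimensions to predict (placeholder)

X_train, X_test, y_train, y_test = train_test_split(predictors, targets, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(25,), max_iter=2000, random_state=0).fit(X_train, y_train)

pred = model.predict(X_test)
mape = np.mean(np.abs((y_test - pred) / y_test))
rmse = np.sqrt(np.mean((y_test - pred) ** 2))
print(f"MAPE={mape:.4f}  RMSE={rmse:.3f}")
```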
Procedia PDF Downloads 576
3722 Development of a Solar Energy Based Prototype, CyanoClean, for Arsenic Removal from Water with the Use of a Cyanobacterial Consortium in Field Conditions of India
Authors: Anurakti Shukla, Sudhakar Srivastava
Abstract:
Cyanobacteria are known for rapid growth rates, high biomass, and the ability to accumulate potentially toxic elements and contaminants. The present work was planned to develop a low-cost, feasible prototype, CyanoClean, for the growth of a cyanobacterial consortium for the removal of arsenic (As) from water. A cyanobacterial consortium consisting of Oscillatoria, Phormidium, and Gloeotrichia was used, and the conditions for optimal growth of the consortium were standardized. A pH of 7.6, an initial cyanobacterial biomass of 10 g/L, and arsenite [As(III)] and arsenate [As(V)] concentrations of 400 μM and 600 μM, respectively, were found to be suitable. The CyanoClean prototype was designed with acrylic sheet and had arrangements for optimal cyanobacterial growth in natural sunlight and also in artificial light. The As removal experiments, conducted in a concentration- and duration-dependent manner, demonstrated removal of up to 39-69% and 9-33% of As from As(III)- and As(V)-contaminated water, respectively. In field testing of CyanoClean, natural As-contaminated groundwater was used, and As reduction was monitored while a flow rate of 3 L/h was maintained. In a field experiment, the As concentration in groundwater was found to be reduced from 102.43 μg L⁻¹ to <10 μg L⁻¹ after 6 h in natural sunlight. However, in shaded conditions under artificial light, the same result was achieved after 9 h. The CyanoClean prototype is of simple design, can easily be scaled up for application on small to medium plots of land, and should be affordable even for low- to middle-income farmers.
Keywords: cyanoclean, gloeotrichia, oscillatoria, phormidium, phycoremediation
Procedia PDF Downloads 143
3721 Comparison between Continuous Genetic Algorithms and Particle Swarm Optimization for Distribution Network Reconfiguration
Authors: Linh Nguyen Tung, Anh Truong Viet, Nghien Nguyen Ba, Chuong Trinh Trong
Abstract:
This paper proposes a reconfiguration methodology based on a continuous genetic algorithm (CGA) and particle swarm optimization (PSO) for minimizing active power loss and voltage deviation. Both algorithms are adapted using graph theory to generate feasible individuals, and a modified crossover is used for the continuous variables of the CGA. To demonstrate the performance and effectiveness of the proposed methods, a comparative analysis of the CGA and PSO for network reconfiguration on 33-node and 119-bus radial distribution systems is presented. The simulation results show that both the CGA and PSO can be used for distribution network reconfiguration, and that the CGA outperformed PSO with a significantly higher success rate in finding the optimal distribution network configuration.
Keywords: distribution network reconfiguration, particle swarm optimization, continuous genetic algorithm, power loss reduction, voltage deviation
Procedia PDF Downloads 189
3720 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering
Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott
Abstract:
Advances in technology now make it possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in single cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within the sample, to better understand the tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. In-depth analysis of state-of-the-art filtering methods for single cell data showed that, in some cases, they do not sufficiently separate noisy and normal cells. We introduced an algorithm that filters and clusters single cell data simultaneously without relying on particular genes or thresholds chosen by eye. It detects communities in a shared nearest neighbor similarity network, which captures the similarities and dissimilarities of the cells, by optimizing the modularity, and then identifies and removes vertices with weak clustering membership. This strategy is based on the fact that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics at the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results. We were able to replicate the results on a Peripheral Blood Mononuclear Cells dataset. Furthermore, we applied the algorithm to two samples of ovarian cancer from the same patient, taken before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the ovarian post-chemotherapy data with interesting marker genes that are potentially relevant for medical research.
Keywords: cancer research, graph theory, machine learning, single cell analysis
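For illustration, a hedged sketch of the graph construction described above: cells are linked in a shared nearest neighbor graph weighted by neighborhood overlap, communities are found by modularity optimization, and cells with weak within-community connectivity are flagged as possible noise; the data and threshold below are placeholders, not the paper's pipeline:

```python
# Illustrative sketch: shared-nearest-neighbour (SNN) graph over cells, community detection
# by modularity optimization, and flagging of cells with weak within-community connectivity.
# The synthetic expression data and the 0.5 threshold are assumptions.
import numpy as np
import networkx as nx
from networkx.algorithms import community
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
cells = np.vstack([rng.normal(0, 1, (30, 10)), rng.normal(5, 1, (30, 10))])  # fake expression data

k = 10
neighbours = NearestNeighbors(n_neighbors=k).fit(cells).kneighbors(return_distance=False)

G = nx.Graph()
for i in range(len(cells)):
    for j in neighbours[i]:
        shared = len(set(neighbours[i]) & set(neighbours[j]))
        if shared > 0:
            G.add_edge(i, int(j), weight=shared / k)          # SNN similarity weight

communities = community.greedy_modularity_communities(G, weight="weight")
for c, members in enumerate(communities):
    for node in members:
        within = sum(G[node][nb]["weight"] for nb in G[node] if nb in members)
        if within < 0.5:                                      # weak clustering belonging (assumed)
            print(f"cell {node} flagged as possible noise in community {c}")
```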
Procedia PDF Downloads 113
3719 Business Intelligence for Profiling of Telecommunication Customer
Authors: Rokhmatul Insani, Hira Laksmiwati Soemitro
Abstract:
Business intelligence is a methodology that systematically exploits data to produce information and knowledge, and it can support the decision-making process. Two methods in business intelligence are data warehousing and data mining. A data warehouse can store historical data derived from transactional data. For data modelling in the data warehouse, we apply Kimball's dimensional modelling. Data mining is used to extract patterns from the data and gain insight from it. Data mining has many techniques, one of which is segmentation. For profiling telecommunication customers, we use customer segmentation according to customers' usage of services, customer invoices, and customer payments. Customers can be grouped according to their characteristics, and the profitable customers can be identified. We apply the K-Means clustering algorithm for segmentation, with the RFM (Recency, Frequency, and Monetary) model providing the input variables. All data mining processes are carried out with IBM SPSS Modeler.
Keywords: business intelligence, customer segmentation, data warehouse, data mining
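For illustration, a minimal sketch of the segmentation step: per-customer RFM values are standardized and clustered with K-Means. The customer records below are placeholders; in the paper this is done inside IBM SPSS Modeler on warehouse data:

```python
# Illustrative sketch of the segmentation step: per-customer RFM (Recency, Frequency,
# Monetary) values are standardized and clustered with K-Means. The records and the
# number of clusters are placeholders, not the study's data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# columns: recency (days since last use), frequency (calls/month), monetary (invoice amount)
rfm = np.array([[ 3, 120, 45.0],
                [45,  10,  8.0],
                [ 7,  90, 60.0],
                [60,   5,  5.0],
                [ 2, 150, 80.0],
                [30,  20, 12.0]])

X = StandardScaler().fit_transform(rfm)
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(segments)   # segment label per customer, e.g. heavy users vs. low-value customers
```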
Procedia PDF Downloads 484
3718 Forecasting Nokoué Lake Water Levels Using Long Short-Term Memory Network
Authors: Namwinwelbere Dabire, Eugene C. Ezin, Adandedji M. Firmin
Abstract:
The prediction of hydrological flows (rainfall-depth or rainfall-discharge) is becoming increasingly important in the management of hydrological risks such as floods. In this study, the Long Short-Term Memory (LSTM) network, a state-of-the-art algorithm dedicated to time series, is applied to predict the daily water level of Nokoué Lake in Benin. This paper aims to provide an effective and reliable method capable of reproducing the future daily water level of Nokoué Lake, which is influenced by a combination of two phenomena: rainfall and river flow (runoff from the Ouémé River, the Sô River, the Porto-Novo lagoon, and the Atlantic Ocean). Performance analysis based on the forecasting horizon indicates that the LSTM can predict the water level of Nokoué Lake up to a forecast horizon of t+10 days. Performance metrics such as the Root Mean Square Error (RMSE), coefficient of correlation (R²), Nash-Sutcliffe Efficiency (NSE), and Mean Absolute Error (MAE) agree on a forecast horizon of up to t+3 days. The values of these metrics remain stable for forecast horizons of t+1, t+2, and t+3 days, and the values of R² and NSE are greater than 0.97 during the training and testing phases in the Nokoué Lake basin. Based on the evaluation indices used to assess the model's performance for the appropriate water level forecast horizon in the Nokoué Lake basin, a forecast horizon of t+3 days is chosen for predicting future daily water levels.
Keywords: forecasting, long short-term memory cell, recurrent artificial neural network, Nokoué Lake
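For illustration, a hedged sketch of a small LSTM that maps a window of past daily water levels to the level three days ahead; the window length, layer sizes, and synthetic series are assumptions, not the study's configuration:

```python
# Illustrative sketch: a small LSTM that maps a window of past daily water levels (rainfall
# and runoff inputs could be added as extra features) to the level t+3 days ahead.
# Window length, layer sizes and the synthetic series are assumptions.
import numpy as np
import tensorflow as tf

window, horizon = 30, 3
series = np.sin(np.arange(1000) * 2 * np.pi / 365.0).astype("float32")  # synthetic daily levels

X = np.stack([series[i:i + window] for i in range(len(series) - window - horizon)])
y = series[window + horizon - 1 + np.arange(len(X))]
X = X[..., np.newaxis]                          # shape (samples, window, 1 feature)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[-1:], verbose=0))         # forecast of the level 3 days ahead
```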
Procedia PDF Downloads 64
3717 Fabrication of a Continuous Flow System for Biofilm Studies
Authors: Mohammed Jibrin Ndejiko
Abstract:
Modern models such as flow cell technology, which enable non-destructive growth and inspection of sessile microbial communities, have greatly advanced the understanding of biofilms. A continuous flow system was designed to evaluate the possibility of biofilm formation by Escherichia coli DH5α on stainless steel (type 304) under continuous nutrient supply. The colony forming unit (CFU) counts show that bacterial attachment and subsequent biofilm formation on stainless steel coupons with average surface roughness of 1.5 ± 1.8 µm and 2.0 ± 0.09 µm were both significantly higher (p ≤ 0.05) than on the stainless steel coupon with a lower surface roughness of 0.38 ± 1.5 µm. These observations support the hypothesis that the surface profile is one of the factors that influence biofilm formation on stainless steel surfaces. The SEM and FESEM micrographs of the stainless steel coupons also revealed the attached Escherichia coli DH5α biofilm and dehydrated extracellular polymeric substance on the stainless steel surfaces. Thus, the fabricated flow system represents a very useful tool to study biofilm formation under continuous nutrient supply.
Keywords: biofilm, flowcell, stainless steel, coupon
Procedia PDF Downloads 318
3716 Learning Grammars for Detection of Disaster-Related Micro Events
Authors: Josef Steinberger, Vanni Zavarella, Hristo Tanev
Abstract:
Natural disasters cause tens of thousands of victims and massive material damage. We refer to all those events caused by natural disasters, such as damage to people, infrastructure, vehicles, services, and resource supply, as micro events. This paper addresses the problem of micro-event detection in online media sources. We present a natural language grammar learning algorithm and apply it to online news. The algorithm in question is based on distributional clustering and the detection of word collocations. We also explore the extraction of micro events from social media and describe a Twitter mining robot, which uses combinations of keywords to detect tweets that talk about the effects of disasters.
Keywords: online news, natural language processing, machine learning, event extraction, crisis computing, disaster effects, Twitter
Procedia PDF Downloads 478
3715 Prevention of Road Accidents by Computerized Drowsiness Detection System
Authors: Ujjal Chattaraj, P. C. Dasbebartta, S. Bhuyan
Abstract:
This paper proposes a method to detect the state of the driver's eyes using the concept of face detection. There are three major contributing methods that can rapidly process the facial image and hence produce results which can then trigger pre-programmed vehicle reactions for traffic safety. This paper compares and analyses the methods on the basis of their reaction time and their ability to deal with fluctuating images of the driver. The program used in this study is simple and efficient, built using the AdaBoost learning algorithm. Through this program, the system is able to discard background regions and focus on the face-like regions. The results are analyzed on a common computer, which makes it feasible for end users. The application domain of this experiment is quite wide, covering, for example, the detection of drowsiness or of the influence of alcohol on drivers, as well as detection for identification purposes.
Keywords: AdaBoost learning algorithm, face detection, framework, traffic safety
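For illustration, a hedged sketch using OpenCV's Haar cascade classifiers (trained with AdaBoost) to locate the face and eyes in camera frames and flag possible drowsiness when the eyes are not detected for several consecutive frames; the cascade files are standard OpenCV assets, and the frame threshold is an assumption:

```python
# Illustrative sketch: Haar cascade classifiers (AdaBoost-trained) locate the face, then the
# eyes inside the face region; frames with no detected eyes are counted as "closed", and an
# alert is raised after a run of such frames. The threshold of 15 frames is an assumption.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def eyes_visible(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        roi = gray[y:y + h, x:x + w]                 # search for eyes only inside the face
        if len(eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)) >= 1:
            return True
    return False

closed_frames, ALERT_AFTER = 0, 15                   # assumed drowsiness threshold in frames
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    closed_frames = 0 if eyes_visible(frame) else closed_frames + 1
    if closed_frames >= ALERT_AFTER:
        print("Drowsiness alert")
        closed_frames = 0
cap.release()
```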
Procedia PDF Downloads 157