Search results for: artificial bee colony algorithm

4401 HR MRI CS Based Image Reconstruction

Authors: Krzysztof Malczewski

Abstract:

A Magnetic Resonance Imaging (MRI) reconstruction algorithm using compressed sensing is presented in this paper. The proposed approach is shown to improve MR image spatial resolution in circumstances where highly undersampled k-space trajectories are applied. Compressed Sensing (CS) aims at reconstructing signals and images from significantly fewer measurements than were conventionally assumed necessary. MRI is a fundamental medical imaging method that struggles with an inherently slow data acquisition process. Applying CS to MRI has the potential for significant scan time reductions, with visible benefits for patients and health care economics. The objective of this study is to combine a super-resolution image enhancement algorithm with the benefits of the CS framework to achieve a high-resolution MR output image. Both methods emphasize maximizing image sparsity in a known sparse transform domain while minimizing the data-fidelity error. The presented algorithm also accounts for cardiac and respiratory movements.
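
No implementation is given in the abstract; purely as a hedged illustration of the CS principle it builds on, the following Python sketch runs an ISTA loop that reconstructs an image from undersampled k-space, assuming sparsity directly in the image domain (a real CS-MRI pipeline would use a wavelet or similar transform, and the paper's super-resolution and motion handling are omitted).

```python
import numpy as np

def ista_cs_mri(y, mask, lam=0.02, step=1.0, n_iter=200):
    """Minimal ISTA loop for min_x 0.5*||M F x - y||^2 + lam*||x||_1,
    where F is the 2D Fourier transform and M the k-space sampling mask."""
    x = np.zeros(y.shape, dtype=complex)
    for _ in range(n_iter):
        # Gradient step on the data-fidelity term: F^H M (M F x - y)
        residual = mask * np.fft.fft2(x, norm="ortho") - y
        x = x - step * np.fft.ifft2(mask * residual, norm="ortho")
        # Soft-thresholding promotes sparsity (here in the image domain)
        mag = np.abs(x)
        x = np.where(mag > lam, (1 - lam / np.maximum(mag, 1e-12)) * x, 0)
    return x

# Usage: 25% random k-space sampling of a synthetic sparse image
img = np.zeros((64, 64)); img[20:24, 30:34] = 1.0
mask = np.random.default_rng(0).random((64, 64)) < 0.25
y = mask * np.fft.fft2(img, norm="ortho")
recon = np.abs(ista_cs_mri(y, mask))
```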

Keywords: super-resolution, MRI, compressed sensing, sparse-sense, image enhancement

Procedia PDF Downloads 430
4400 Active Control Improvement of Smart Cantilever Beam by Piezoelectric Materials and On-Line Differential Artificial Neural Networks

Authors: P. Karimi, A. H. Khedmati Bazkiaei

Abstract:

The main goal of this study is to test a differential neural network as a controller for a smart structure and to enumerate its advantages and disadvantages in comparison with other controllers. The smart structure is modeled as a Euler-Bernoulli cantilever beam, and the aim is to control it using a differential neural network driven by the vibration resulting from movement. A linear observer is also considered as a reference controller, and its results are compared. The vibration charts and the controlled states considered are reported in the final part of this text. The obtained results show that the neural observer performs better than the implemented linear observer.

Keywords: smart material, on-line differential artificial neural network, active control, finite element method

Procedia PDF Downloads 210
4399 Detection of High Fructose Corn Syrup in Honey by Near Infrared Spectroscopy and Chemometrics

Authors: Mercedes Bertotto, Marcelo Bello, Hector Goicoechea, Veronica Fusca

Abstract:

The National Service of Agri-Food Health and Quality (SENASA) controls honey to detect contamination by synthetic or natural chemical substances and establishes and controls the traceability of the product. The utility of near-infrared spectroscopy for the detection of adulteration of honey with high fructose corn syrup (HFCS) was investigated. First, a mixture of different authentic artisanal Argentinian honeys was prepared to cover as much heterogeneity as possible. Mixtures were then prepared by adding different concentrations of HFCS to samples of the honey pool. A total of 237 samples were used: 108 were authentic honey and 129 corresponded to honey adulterated with between 1 and 10% HFCS. They were stored unrefrigerated from the time of production until scanning and were not filtered after receipt in the laboratory. Immediately prior to spectral collection, the honey was incubated at 40°C overnight to dissolve any crystalline material, manually stirred to achieve homogeneity, and adjusted to a standard solids content (70° Brix) with distilled water. Adulterant solutions were also adjusted to 70° Brix. Samples were measured by NIR spectroscopy in the range of 650 to 7000 cm⁻¹. The technique of specular reflectance was used, with a lens aperture range of 150 mm. Pretreatment of the spectra was performed by Standard Normal Variate (SNV). The ant colony optimization genetic algorithm sample selection (ACOGASS) graphical interface, running under MATLAB version 5.3, was used to select the variables with the greatest discriminating power. The data set was divided into a validation set and a calibration set using the Kennard-Stone (KS) algorithm. A method combining Potential Functions (PF) with Partial Least Squares Linear Discriminant Analysis (PLS-DA) was chosen. Different estimators of the predictive capacity of the model were compared; these were obtained using a decreasing number of groups, which implies more demanding validation conditions. The optimal number of latent variables was selected as the number associated with the minimum error and the smallest number of unassigned samples. Once the optimal number of latent variables was defined, the model was applied to the training samples and then used to study the validation samples. The calibrated model combining potential functions and PLS-DA can be considered reliable and stable, since its performance on future samples is expected to be comparable to that achieved for the training samples. Using PF and PLS-DA classification, authentic honey and honey adulterated with HFCS could be identified with a correct classification rate of 97.9%. The results showed that NIR in combination with the PF and PLS-DA methods can be a simple, fast and low-cost technique for the detection of HFCS in honey with high sensitivity and power of discrimination.
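
As a hedged sketch of the PLS-DA core of such a pipeline (random data stands in for the NIR spectra; the authors' actual workflow adds SNV pretreatment, ACOGASS variable selection, Kennard-Stone splitting, and potential functions):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins for the NIR spectra: 237 samples x 500 wavenumbers
rng = np.random.default_rng(0)
X = rng.normal(size=(237, 500))
y = np.array([0] * 108 + [1] * 129)       # 0 = authentic, 1 = HFCS-adulterated

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# PLS-DA: regress a dummy 0/1 response on the spectra, threshold at 0.5
pls = PLSRegression(n_components=5)       # latent variables chosen by validation
pls.fit(X_tr, y_tr)
y_pred = (pls.predict(X_te).ravel() > 0.5).astype(int)
print("correct classification rate:", (y_pred == y_te).mean())
```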

Keywords: adulteration, multivariate analysis, potential functions, regression

Procedia PDF Downloads 125
4398 Triangulations via Iterated Largest Angle Bisection

Authors: Yeonjune Kang

Abstract:

A triangulation of a planar region is a partition of that region into triangles. In the finite element method, triangulations are often used as the grid underlying a computation. In order to be suitable as a finite element mesh, a triangulation must have well-shaped triangles, according to criteria that depend on the details of the particular problem. For instance, most methods require that all triangles be small and as close to the equilateral shape as possible. Stated differently, one wants to avoid having either thin or flat triangles in the triangulation. There are many triangulation procedures; a particular one is the longest edge bisection algorithm, described below. Starting with a given triangle, locate the midpoint of the longest edge and join it to the opposite vertex of the triangle. Two smaller triangles are formed; apply the same bisection procedure to each of these triangles. Continuing in this manner, after n steps one obtains a triangulation of the initial triangle into 2ⁿ smaller triangles. The longest edge algorithm was first considered in the late 1970s. It was shown by various authors that this triangulation has the desirable properties for the finite element method: independently of the number of iterations, the angles of these triangles cannot get too small; moreover, the size of the triangles decays exponentially. In the present paper we consider a related triangulation algorithm we refer to as the largest angle bisection procedure. As the name suggests, rather than bisecting the longest edge, at each step we bisect the largest angle. We study the properties of the resulting triangulation and prove that, while the general behavior resembles that of the longest edge bisection algorithm, there are several notable differences as well.
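
As an illustrative sketch only (not the authors' code), one step of largest-angle bisection can be written with the angle bisector theorem, which places the cut point on the opposite edge in the ratio of the adjacent side lengths:

```python
import numpy as np

def bisect_largest_angle(tri):
    """Split a triangle (3x2 array of vertices) into two by bisecting
    its largest angle."""
    a, b, c = np.asarray(tri, dtype=float)
    def angle(p, q, r):                      # interior angle at vertex p
        u, v = q - p, r - p
        cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.arccos(np.clip(cosang, -1.0, 1.0))
    verts = [a, b, c]
    angles = [angle(a, b, c), angle(b, c, a), angle(c, a, b)]
    i = int(np.argmax(angles))               # vertex with the largest angle
    p, q, r = verts[i], verts[(i + 1) % 3], verts[(i + 2) % 3]
    # The internal bisector from p meets qr at the point dividing it
    # in the ratio |pq| : |pr| (angle bisector theorem)
    t = np.linalg.norm(q - p) / (np.linalg.norm(q - p) + np.linalg.norm(r - p))
    m = q + t * (r - q)
    return [np.array([p, q, m]), np.array([p, m, r])]

# Usage: iterating the bisection n times yields 2**n triangles
tris = [np.array([[0.0, 0.0], [1.0, 0.0], [0.2, 0.8]])]
for _ in range(3):
    tris = [child for t in tris for child in bisect_largest_angle(t)]
print(len(tris))  # 8 = 2**3
```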

Keywords: angle bisectors, geometry, triangulation, applied mathematics

Procedia PDF Downloads 401
4397 Traditional Drawing, BIM and Erudite Design Process

Authors: Maryam Kalkatechi

Abstract:

Nowadays, parametric design, scientific analysis, and digital fabrication are dominant. Many architectural practices increasingly seek to incorporate advanced digital software and fabrication in their projects. The proposal of an erudite design process that combines digital and practical aspects in a strong frame within the method resulted from the dissertation research. The digital aspects are the progressive advancements in algorithm design and simulation software. These aspects have helped firms develop more holistic concepts at the early stage and maintain collaboration among disciplines during the design process. The erudite design process enhances current design processes by encouraging the designer to implement construction and architecture knowledge within the algorithm to make design processes successful. The erudite design process also involves the ongoing improvements of applying the new method of 3D printing in construction. This is achieved through 'data-sketches'. The term 'data-sketch' was developed by the author in the recently completed dissertation; it accommodates the decisions of the architect on the algorithm. This paper introduces the erudite design process and its components and summarizes the application of this process in the development of the '3D printed construction unit'. This paper contributes to bridging academia and practice with advanced technology by presenting a design process that transfers the dominance of the tool to the learned architect and encourages innovation in design processes.

Keywords: erudite, data-sketch, algorithm design in architecture, design process

Procedia PDF Downloads 275
4396 Facial Biometric Privacy Using Visual Cryptography: A Fundamental Approach to Enhance the Security of Facial Biometric Data

Authors: Devika Tanna

Abstract:

'Biometrics' means 'life measurement', but the term is usually associated with the use of unique physiological characteristics to identify an individual. It is important to secure the privacy of digital face images stored in a central database. To impart privacy to such biometric face images, the digital face image is first split into two host face images such that neither reveals the existence of the original face image; each cover image is then stored in one of two geographically separate databases. Only when both cover images are simultaneously available can the original image be accessed. This is achieved using the XM2VTS and IMM face databases and an adaptive algorithm for spatial greyscale. The algorithm helps to select the appropriate host images, which are most likely to be compatible with the secret image stored in the central database, based on its geometry and appearance. The encryption is done using GEVCS, which results in a reconstructed image identical to the original private image.
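
As a hedged illustration of the visual-cryptography idea underlying the scheme (a basic (2,2) scheme for a binary image; the GEVCS used in the paper extends this to greyscale host images, which is not reproduced here):

```python
import numpy as np

def make_shares(secret):
    """(2,2) visual-cryptography sketch for a binary image: each secret
    pixel becomes a 1x2 block; stacking (OR-ing) the two shares reveals
    the secret, while each share alone is indistinguishable from noise."""
    rng = np.random.default_rng()
    h, w = secret.shape
    s1 = np.zeros((h, 2 * w), dtype=np.uint8)
    s2 = np.zeros_like(s1)
    for i in range(h):
        for j in range(w):
            pattern = rng.permutation([0, 1])        # random 1x2 pattern
            s1[i, 2*j:2*j+2] = pattern
            # white pixel: identical patterns; black pixel: complementary
            s2[i, 2*j:2*j+2] = pattern if secret[i, j] == 0 else 1 - pattern
    return s1, s2

secret = (np.random.rand(32, 32) > 0.5).astype(np.uint8)
s1, s2 = make_shares(secret)
stacked = np.maximum(s1, s2)       # stacking transparencies = pixelwise OR
# Black secret pixels become fully black blocks; white ones stay half-black
assert all(stacked[i, 2*j:2*j+2].sum() == 2
           for i in range(32) for j in range(32) if secret[i, j] == 1)
```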

Keywords: adaptive algorithm, database, host images, privacy, visual cryptography

Procedia PDF Downloads 130
4395 Machine Learning Techniques to Develop Traffic Accident Frequency Prediction Models

Authors: Rodrigo Aguiar, Adelino Ferreira

Abstract:

Road traffic accidents are the leading cause of unnatural death and injury worldwide, representing a significant road safety problem. In this context, the use of artificial intelligence with advanced machine learning techniques has gained prominence as a promising approach to predicting traffic accidents. This article investigates the application of machine learning algorithms to the development of traffic accident frequency prediction models. Models are evaluated based on performance metrics, enabling a comparative analysis with traditional prediction approaches. The results suggest that machine learning can provide a powerful tool for accident prediction, which will contribute to more informed decisions regarding road safety.
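
As a hedged, generic sketch of such a frequency model (synthetic stand-in data; the paper does not specify its algorithms or features, so the random forest and Poisson counts below are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data: road-segment features -> yearly accident counts
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 6))   # e.g. traffic volume, speed limit, curvature
y = rng.poisson(lam=np.exp(0.5 * X[:, 0] + 0.3 * X[:, 1] + 1.0))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```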

Keywords: machine learning, artificial intelligence, frequency of accidents, road safety

Procedia PDF Downloads 89
4394 Hand Gesture Detection via EmguCV Canny Pruning

Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae

Abstract:

Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI). AI concepts are applicable in Human-Computer Interaction (HCI), Expert Systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool, used mostly by deaf communities and people with speech disorders. Communication barriers exist when these communities interact with others. This research aims to build a hand recognition system for Lesotho's Sesotho and English language interpretation. The system will help to bridge the communication problems encountered by the mentioned communities. The system has various processing modules: a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is a process of identifying an object. The proposed system uses Haar cascade detection with Canny pruning, which applies Canny edge detection, an optimal image processing algorithm, to discard image regions unlikely to contain the object. The system also employs a skin detection algorithm, which performs background subtraction and computes the convex hull and the centroid to assist in the detection process. Recognition is a process of gesture classification: template matching classifies each hand gesture in real time. The system was tested in various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and, ultimately, recognition. The detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were also considered: the higher the light intensity, the faster the detection. Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system which can be used for sign language interpretation.
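
The paper works in EmguCV, the C# wrapper around OpenCV; the following OpenCV-Python sketch shows the equivalent Haar cascade call with the Canny-pruning flag, assuming a hypothetical trained hand cascade file (OpenCV ships no stock hand cascade):

```python
import cv2

# Hypothetical cascade file: a trained hand/palm cascade XML is assumed
cascade = cv2.CascadeClassifier("hand_cascade.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # CASCADE_DO_CANNY_PRUNING skips regions with too few or too many
    # edges, which is the "Canny pruning" speed-up the abstract refers to
    hands = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                     flags=cv2.CASCADE_DO_CANNY_PRUNING,
                                     minSize=(60, 60))
    for (x, y, w, h) in hands:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("hands", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```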

Keywords: canny pruning, hand recognition, machine learning, skin tracking

Procedia PDF Downloads 185
4393 Human Resource Management Challenges in Age of Artificial Intelligence: Methodology of Case Analysis

Authors: Olga Leontjeva

Abstract:

In the age of Artificial Intelligence (AI), some organization management approaches need to be adapted or changed. Human Resource Management (HRM) is a part of organization management that is under managers' focus nowadays, because AI integration into organization activities brings HRM-connected challenges. The topic became more significant during the worldwide organizational crises caused by the coronavirus pandemic (COVID-19). The paper presents the approach that will be used for a study focused on multiple-case analysis. The author of the future study will analyze the cases of organizations from Latvia and Spain, grouped by size, type of activity, and area of business. The information for the cases will be collected through structured interviews and online surveys. The main results presented are the questionnaire developed for the study and the definition and description of the sampling. The first round of the survey will be based on convenience sampling, which is the main limitation of the study. To conclude, the approach developed will help to collect valid data if the organizations participating in the survey are ready to share their cases in depth, so that the researchers can draw the right conclusions and generalize across the compared organizations' cases. The questionnaire developed for the survey is applicable both to written online data collection and to interviews. The case analysis will help to identify HRM challenges connected to AI integration into organization activities, such as the management of different generations of employees and their training peculiarities.

Keywords: age of artificial intelligence, case analysis, generation Y and Z employees, human resource management

Procedia PDF Downloads 169
4392 A Non-Parametric Based Mapping Algorithm for Use in Audio Fingerprinting

Authors: Analise Borg, Paul Micallef

Abstract:

Over the past few years, online multimedia collections have grown at a fast pace. Several companies have shown interest in studying ways to organize this amount of audio information without the need for human intervention to generate metadata. In the past few years, many applications capable of identifying a piece of music in a short time have emerged on the market. Different audio effects and degradations make it much harder to identify the unknown piece. In this paper, an audio fingerprinting system which makes use of a non-parametric based algorithm is presented. Parametric analysis is also performed using Gaussian Mixture Models (GMMs). The feature extraction methods employed are the Mel Spectrum Coefficients and the MPEG-7 basic descriptors. Bin numbers replaced the extracted feature coefficients during the non-parametric modelling. The results show that non-parametric analysis offers promising results, comparable to those reported in the literature.
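
As a hedged sketch of the parametric (GMM) branch only, with random frames standing in for Mel/MPEG-7 features; the paper's non-parametric bin-number modelling is not reproduced here:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical stand-ins for feature frames of two reference tracks
rng = np.random.default_rng(2)
track_a = rng.normal(0.0, 1.0, size=(500, 13))   # 500 frames x 13 coefficients
track_b = rng.normal(1.5, 1.0, size=(500, 13))

# Parametric branch: one GMM fingerprint per reference track
models = {name: GaussianMixture(n_components=8, covariance_type="diag",
                                random_state=0).fit(frames)
          for name, frames in {"a": track_a, "b": track_b}.items()}

# Identify an unknown (possibly degraded) excerpt by maximum likelihood
query = track_a[:100] + rng.normal(0, 0.3, size=(100, 13))   # noisy excerpt
scores = {name: gmm.score(query) for name, gmm in models.items()}
print(max(scores, key=scores.get))   # expected: "a"
```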

Keywords: audio fingerprinting, mapping algorithm, Gaussian Mixture Models, MFCC, MPEG-7

Procedia PDF Downloads 421
4391 Evaluation of Re-mineralization Ability of Nanohydroxyapatite and Coral Calcium with Different Concentrations on Initial Enamel Carious Lesions

Authors: Ali Abdelnabi, Nermeen Hamza

Abstract:

Coral calcium is a widely promoted natural product and dietary supplement that is considered a source of alkaline calcium carbonate. This comparative study compares the remineralization effect of the new coral calcium product with that of nano-hydroxyapatite. Methodology: a total of 35 extracted molars were collected, examined, and sectioned to obtain 70 sound enamel discs. All discs were numbered and examined by scanning electron microscopy coupled with Energy Dispersive Analysis of X-rays (EDAX) for mineral content, subjected to artificial caries, and the mineral content was re-measured. The discs were divided into seven groups according to the remineralizing agent used: groups 1 to 3 used 10%, 20%, and 30% nanohydroxyapatite gel, respectively; groups 4 to 6 used 10%, 20%, and 30% coral calcium gel; and group 7 received no remineralizing agent (control group). All groups were re-examined by EDAX after remineralization; data were calculated and tabulated. Results: all groups showed a statistically significant drop in calcium level after artificial caries; all groups except the control group showed a statistically significant rise in calcium content after remineralization; groups 1 and 5 showed the highest increase in calcium level after remineralization. Conclusion: coral calcium can be considered a product comparable to nano-hydroxyapatite regarding the remineralization of initial enamel carious lesions.

Keywords: artificial caries, coral calcium, nanohydroxyapatite, re-mineralization

Procedia PDF Downloads 123
4390 Environment and Health Quality in Urban Slums of Chandigarh: A Case Study

Authors: Ritu Sarsoha

Abstract:

According to the World Summit 2002, health is an integral component of sustainable development. Due to overpopulation and the lack of employment opportunities in villages and small towns, rural youth tend to migrate to big cities, causing the mushrooming of slums. These slums lack most of the basic necessities of life, particularly with regard to environmental pollution and an appropriate health care system. The present paper deals with the socio-economic and environmental status of people living in a slum area of Chandigarh, which has now grown into a big city and become a hub for migrants from U.P. and Bihar. It presents a case study of Colony No. 5 of Chandigarh, which is divided into more than one block.

Keywords: slum, socio-economic, environment pollution, health

Procedia PDF Downloads 305
4389 Distribution System Planning with Distributed Generation and Capacitor Placements

Authors: Nattachote Rugthaicharoencheep

Abstract:

This paper presents a feeder reconfiguration problem in distribution systems. The objective is to minimize the system power loss and to improve the bus voltage profile. The optimization problem is subject to system constraints consisting of load-point voltage limits, radial configuration format, no load-point interruption, and feeder capability limits. A method based on a genetic algorithm, a search algorithm based on the mechanics of natural selection and natural genetics, is proposed to determine the optimal pattern of configuration. The developed methodology is demonstrated on a 33-bus radial distribution system with distributed generation and feeder capacitors. The study results show that the optimal on/off patterns of the switches can be identified to give the minimum power loss while respecting all the constraints.
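
As a hedged sketch of the genetic-algorithm loop over switch on/off patterns (the power_loss function below is a hypothetical stand-in: a real fitness evaluation would run a load flow and penalize voltage, radiality, and capability violations):

```python
import numpy as np

rng = np.random.default_rng(3)
N_SWITCHES = 10
WEIGHTS = rng.random(N_SWITCHES)     # fixed pseudo 'network' for the demo

def power_loss(state):
    # Hypothetical stand-in for a load-flow evaluation with a penalty term
    return float(WEIGHTS @ state) + 5.0 * (state.sum() < 3)

def genetic_algorithm(pop_size=40, gens=100, p_mut=0.05):
    pop = rng.integers(0, 2, size=(pop_size, N_SWITCHES))
    for _ in range(gens):
        fit = np.array([power_loss(ind) for ind in pop])
        # Tournament selection: the lower-loss individual of each pair wins
        pairs = rng.integers(0, pop_size, size=(pop_size, 2))
        winners = np.where(fit[pairs[:, 0]] < fit[pairs[:, 1]],
                           pairs[:, 0], pairs[:, 1])
        parents = pop[winners]
        # One-point crossover between consecutive parents
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, N_SWITCHES)
            children[i, cut:] = parents[i + 1, cut:]
            children[i + 1, cut:] = parents[i, cut:]
        # Bit-flip mutation
        flip = rng.random(children.shape) < p_mut
        pop = np.where(flip, 1 - children, children)
    fit = np.array([power_loss(ind) for ind in pop])
    return pop[fit.argmin()], fit.min()

best_state, best_loss = genetic_algorithm()
print(best_state, best_loss)
```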

Keywords: network reconfiguration, distributed generation, capacitor placement, loss reduction, genetic algorithm

Procedia PDF Downloads 177
4388 A Comparison of Sequential Quadratic Programming, Genetic Algorithm, Simulated Annealing, and Particle Swarm Optimization for the Design and Optimization of a Beam Column

Authors: Nima Khosravi

Abstract:

This paper describes an integrated optimization technique with concurrent use of sequential quadratic programming (SQP), a genetic algorithm, simulated annealing, and particle swarm optimization for the design and optimization of a beam column. This research compares these four types of optimization methods. The comparison shows that all the methods meet the required constraints, and the lowest value of the objective function is achieved by SQP, which was also the fastest optimizer to produce the results. SQP is a gradient-based optimizer, hence its results are usually the same after every run; the only thing that affects its results is the initial conditions given. The initial conditions differed considerably between the various test runs; hence, the values converged at different points. The remaining methods are heuristics, which provide different values for different runs even if every parameter is kept constant.
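
As a hedged sketch of the SQP branch of such a comparison, using SciPy's SLSQP on an assumed toy beam-column model (minimize section area under a stress constraint; not the paper's formulation):

```python
from scipy.optimize import minimize

# Assumed toy model: rectangular section (width b, depth h) under axial load
P, SIGMA_MAX = 2.0e5, 250e6          # axial load [N], allowable stress [Pa]

def area(x):                         # objective: cross-sectional area b*h
    return x[0] * x[1]

constraints = [
    {"type": "ineq", "fun": lambda x: SIGMA_MAX - P / (x[0] * x[1])},  # stress
    {"type": "ineq", "fun": lambda x: x[1] - x[0]},                    # h >= b
]
bounds = [(0.01, 0.5), (0.01, 0.5)]

res = minimize(area, x0=[0.1, 0.2], method="SLSQP",
               bounds=bounds, constraints=constraints)
print(res.x, res.fun)   # SQP is gradient-based: the same x0 gives the same result
```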

Keywords: beam column, genetic algorithm, particle swarm optimization, sequential quadratic programming, simulated annealing

Procedia PDF Downloads 386
4387 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation

Authors: Aicha Majda, Abdelhamid El Hassani

Abstract:

Lung CT image segmentation is a prerequisite in lung CT image analysis. Most conventional methods need post-processing to deal with abnormal lung CT scans, such as those with lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm consists of directly comparing the pixel values of the two neighboring regions, which is not accurate because this kind of metric is extremely sensitive to minor transformations such as noise or other artifact problems. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cut algorithm is defined based on patch-based similarity measurement instead of the simple intensity measurement of the standard method. The weights between each pixel and its neighboring pixels are based on the obtained new term, and the graph is then created using these weights between its nodes. Finally, the segmentation is completed with the min-cut/max-flow algorithm. Experimental results show that the proposed method is accurate and efficient, and can directly provide explicit lung regions without any post-processing operations, in contrast to the standard method.
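
As a hedged sketch of the patch-based boundary term (illustrative parameter choices; the full pipeline builds the graph from these weights and runs min-cut/max-flow, omitted here):

```python
import numpy as np

def patch_weight(img, p, q, radius=2, sigma=10.0):
    """Patch-based boundary term: instead of comparing the two pixel
    intensities img[p], img[q] directly (standard graph cuts), compare
    the patches around p and q, which is more robust to noise."""
    def patch(c):
        r0, c0 = c
        return img[r0-radius:r0+radius+1, c0-radius:c0+radius+1].astype(float)
    d2 = np.mean((patch(p) - patch(q)) ** 2)   # mean squared patch distance
    return np.exp(-d2 / (2 * sigma ** 2))      # high weight = likely same region

# Usage on a synthetic two-region image with noise
img = np.zeros((32, 32)); img[:, 16:] = 100
img += np.random.default_rng(4).normal(0, 5, img.shape)
print(patch_weight(img, (10, 5), (10, 6)))    # same region: weight near 1
print(patch_weight(img, (10, 15), (10, 16)))  # across boundary: near 0
```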

Keywords: graph cuts, lung CT scan, lung parenchyma segmentation, patch-based similarity metric

Procedia PDF Downloads 169
4386 Artificial Membrane Comparison for Skin Permeation in Skin PAMPA

Authors: Aurea C. L. Lacerda, Paulo R. H. Moreno, Bruna M. P. Vianna, Cristina H. R. Serra, Airton Martin, André R. Baby, Vladi O. Consiglieri, Telma M. Kaneko

Abstract:

The modified Franz cell is the most widely used model for in vitro permeation studies; however, it still presents some disadvantages. Thus, alternative methods have been developed, such as Skin PAMPA, a bio-artificial membrane that has been applied to estimate skin penetration of xenobiotics based on an HT permeability model. Skin PAMPA's greatest advantage is that it allows more tests to be carried out in a fast and inexpensive way. The membrane system mimics the characteristics of the stratum corneum, which is the primary skin barrier. The barrier properties are given by corneocytes embedded in a multilamellar lipid matrix. This layer is the main penetration route through the paracellular permeation pathway, and it consists of a mixture of cholesterol, ceramides, and fatty acids as the dominant components. However, there is no consensus on the membrane composition. The objective of this work was to compare the performance of different bio-artificial membranes for studying permeation in the skin PAMPA system. Material and methods: in order to mimic the lipid composition present in the human stratum corneum, six membranes were developed. The membrane composition was an equimolar mixture of cholesterol, ceramides 1-O-C18:1, C22, and C20, plus fatty acids C20 and C24. The membrane integrity assay was based on the transport of Brilliant Cresyl Blue, which has a low permeability, and Lucifer Yellow, which has very poor permeability and should effectively be completely rejected. The membrane characterization was performed using confocal laser Raman spectroscopy, with a laser stabilized at 785 nm, a 10-second integration time, and 2 accumulations. The membrane behaviour results in the PAMPA system were statistically evaluated, and all of the compositions showed integrity and permeability. The confocal Raman spectra, obtained in the region of 800-1200 cm⁻¹ that is associated with the C-C stretches of the carbon scaffold of the stratum corneum lipids, showed a similar pattern for all the membranes. The equimolar ratio of ceramides, long-chain fatty acids, and cholesterol made it possible to obtain lipid mixtures with self-organization capability, similar to that occurring in the stratum corneum. Conclusion: the artificial biological membranes studied for skin PAMPA proved to be similar to each other and to have properties comparable to the stratum corneum.

Keywords: bio-artificial membranes, comparison, confocal Raman, skin PAMPA

Procedia PDF Downloads 509
4385 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and will cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution's detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California, using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework's predictions and real-life observations, with an overall 92% model accuracy. The combined model predicts more accurately than any of the individual models, and it reliably forecasts season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
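
As a hedged sketch of the framework's averaging idea, with synthetic stand-in features; the three model families match those named in the abstract, but the hyperparameters and data handling are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical stand-in features: season, weekday/weekend, weather, past AQI;
# binary target: healthy vs. unhealthy air quality
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [LogisticRegression(max_iter=1000),
          RandomForestClassifier(n_estimators=200, random_state=0),
          MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)]
for m in models:
    m.fit(X_tr, y_tr)

# Average the three models' predicted probabilities, as the framework does
avg_proba = np.mean([m.predict_proba(X_te)[:, 1] for m in models], axis=0)
y_pred = (avg_proba > 0.5).astype(int)
print("ensemble accuracy:", (y_pred == y_te).mean())
```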

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 127
4384 Interpretation and Clustering Framework for Analyzing ECG Survey Data

Authors: Irum Matloob, Shoab Ahmad Khan, Fahim Arif

Abstract:

The Indo-Pak region has been a victim of heart disease for many decades. Many surveys have shown that the percentage of cardiac patients in Pakistan is increasing day by day, and special attention needs to be paid to this issue. A framework is proposed for performing detailed analysis of ECG survey data collected to measure the prevalence of heart disease in Pakistan. The ECG survey data is evaluated and filtered using automated Minnesota codes, and only those ECGs that fulfil the standardized conditions of the Minnesota codes are used for further analysis. Feature selection is then performed by applying a proposed algorithm based on a discernibility matrix to select relevant features from the database. Clustering is performed to expose natural clusters in the ECG survey data by applying a spectral clustering algorithm with fuzzy c-means. The hidden patterns and interesting relationships exposed by this analysis are useful for further detailed analysis and for many other purposes.
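
As a hedged sketch of the clustering step (scikit-learn's spectral clustering uses k-means on the spectral embedding, whereas the paper couples the embedding with fuzzy c-means; the stand-in data is synthetic):

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_blobs

# Hypothetical stand-in for selected ECG survey features (after the
# discernibility-matrix feature selection described in the abstract)
X, _ = make_blobs(n_samples=300, centers=4, n_features=8, random_state=0)

labels = SpectralClustering(n_clusters=4, affinity="nearest_neighbors",
                            random_state=0).fit_predict(X)
print(np.bincount(labels))   # cluster sizes, i.e. the exposed natural groups
```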

Keywords: arrhythmias, centroids, ECG, clustering, discernibility matrix

Procedia PDF Downloads 470
4383 New Segmentation of Piecewise Linear Regression Models Using Reversible Jump MCMC Algorithm

Authors: Suparman

Abstract:

Piecewise linear regression models are very flexible models for modeling data. When piecewise linear regression models are fitted to data, the parameters are generally not known. This paper studies the problem of parameter estimation for piecewise linear regression models. The method used to estimate the parameters is the Bayesian method, but the Bayes estimator cannot be found analytically. To overcome this problem, the reversible jump MCMC algorithm is proposed. The reversible jump MCMC algorithm generates a Markov chain that converges to the limit distribution of the posterior distribution of the parameters of the piecewise linear regression models. The resulting Markov chain is used to calculate the Bayes estimator for these parameters.

Keywords: regression, piecewise, Bayesian, reversible jump MCMC

Procedia PDF Downloads 521
4382 A Comparison of Neural Network and DOE-Regression Analysis for Predicting Resource Consumption of Manufacturing Processes

Authors: Frank Kuebler, Rolf Steinhilper

Abstract:

Artificial neural networks (ANN) as well as Design of Experiments (DOE) based regression analysis (RA) are mainly used for the modeling of complex systems. Both methodologies are commonly applied in process and quality control of manufacturing processes. Because resource efficiency has become a critical concern for manufacturing companies, these models need to be extended to predict the resource consumption of manufacturing processes. This paper describes an approach that uses neural networks as well as DOE-based regression analysis for predicting the resource consumption of manufacturing processes and gives a comparison of the achievable results based on an industrial case study of a turning process.

Keywords: artificial neural network, design of experiments, regression analysis, resource efficiency, manufacturing process

Procedia PDF Downloads 524
4381 The Comparison of Chromium Ions Release for Stainless Steel between Artificial Saliva and Breadfruit Leaf Extracts

Authors: Mirna Febriani

Abstract:

Stainless steel wires are widely used in the field of dentistry, especially for orthodontic and prosthodontic treatment. The oral cavity is an ideal environment for corrosion, which can be caused by saliva. Corrosion of stainless steel wires can be prevented by using an organic or inorganic corrosion inhibitor. One of the organic inhibitors that can be used to prevent corrosion is breadfruit leaf extract. This research used the atomic absorption spectrophotometric test. The results showed differences in chromium ion release between soaking in artificial saliva and in breadfruit leaf extract on days 1, 3, 7, and 14. Statistical analysis with an independent t-test (p < 0.05) showed a significant difference. This study concludes that breadfruit leaf extract can inhibit the corrosion rate of stainless steel wires.

Keywords: chromium ion, stainless steel, artificial saliva, breadfruit leaf

Procedia PDF Downloads 170
4380 Genetic Algorithm Optimization of a Small Scale Natural Gas Liquefaction Process

Authors: M. I. Abdelhamid, A. O. Ghallab, R. S. Ettouney, M. A. El-Rifai

Abstract:

An optimization scheme based on a COM server is suggested for communication between the Genetic Algorithm (GA) toolbox of MATLAB and Aspen HYSYS. The structure and details of the proposed framework are discussed. The power of the developed scheme is illustrated by its application to the optimization of a recently developed natural gas liquefaction process, in which Aspen HYSYS was used to minimize the power consumption by optimizing the values of five operating variables. In this work, coupling the GA in MATLAB with the Aspen HYSYS model of the same process, using the same five decision variables, enabled a 3.3% improvement in power consumption when 77% of the natural gas feed is liquefied. On inclusion of the flow rates of the nitrogen and carbon dioxide refrigerants as two additional decision variables, the power consumption decreased by 6.5% for 78% liquefaction of the natural gas feed.

Keywords: stranded gas liquefaction, genetic algorithm, COM server, single nitrogen expansion, carbon dioxide pre-cooling

Procedia PDF Downloads 449
4379 Reliable Soup: Reliable-Driven Model Weight Fusion on Ultrasound Imaging Classification

Authors: Shuge Lei, Haonan Hu, Dasheng Sun, Huabin Zhang, Kehong Yuan, Jian Dai, Yan Tong

Abstract:

It remains challenging to measure the reliability of classification results from different machine learning models. This paper proposes a reliable soup optimization algorithm based on the model weight fusion algorithm Model Soup, aiming to improve reliability by using dual-channel reliability as the objective function to fuse a series of weights in breast ultrasound classification models. Experimental results on clinical breast ultrasound datasets demonstrate that reliable soup significantly enhances the reliability of breast ultrasound image classification tasks. The effectiveness of the proposed approach was verified via multicenter trials: the results from five centers indicate that the reliability optimization algorithm can enhance the reliability of the breast ultrasound image classification model and exhibits low multicenter correlation.
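
As a hedged sketch of the Model Soup building block the paper starts from, i.e. uniform weight averaging of same-architecture checkpoints; the paper's reliable soup replaces the uniform rule with selection and weighting driven by a dual-channel reliability objective, which is not reproduced here:

```python
import torch

def uniform_soup(state_dicts):
    """Average the weights of several fine-tuned models that share one
    architecture, producing a single fused 'soup' model."""
    soup = {}
    for key in state_dicts[0]:
        soup[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(0)
    return soup

# Usage with hypothetical checkpoints of one stand-in architecture
model = torch.nn.Linear(16, 2)
candidates = []
for seed in range(3):                 # pretend these are fine-tuning runs
    torch.manual_seed(seed)
    candidates.append(torch.nn.Linear(16, 2).state_dict())
model.load_state_dict(uniform_soup(candidates))
```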

Keywords: breast ultrasound image classification, feature attribution, reliability assessment, reliability optimization

Procedia PDF Downloads 85
4378 A Memetic Algorithm for an Energy-Costs-Aware Flexible Job-Shop Scheduling Problem

Authors: Christian Böning, Henrik Prinzhorn, Eric C. Hund, Malte Stonis

Abstract:

In this article, the flexible job-shop scheduling problem is extended by considering energy costs, which arise owing to the power peak; further decision variables, such as work in process and throughput time, are incorporated into the objective function. This enables a production plan to be simultaneously optimized with respect to the actually arising energy and logistics costs. The resulting energy-costs-aware flexible job-shop scheduling problem (EFJSP) is described mathematically, and a memetic algorithm (MA) is presented as a solution. In the MA, the evolutionary process is supplemented with a local search. Furthermore, repair procedures are used to rectify any infeasible solutions that arise in the evolutionary process. The potential for lowering the actually arising costs of a production plan through consideration of energy consumption levels is highlighted.
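
As a hedged, generic skeleton of a memetic algorithm (evolutionary operators plus a local-search refinement; the toy bit-string problem and all operators below are illustrative, and the paper's repair procedures are omitted):

```python
import random

def memetic(init_pop, fitness, crossover, mutate, local_search,
            generations=100):
    """Memetic algorithm skeleton: an evolutionary loop whose offspring
    are refined by a local search before re-entering the population."""
    pop = [local_search(ind) for ind in init_pop]
    for _ in range(generations):
        parents = random.sample(pop, 2)
        child = mutate(crossover(*parents))
        child = local_search(child)          # the 'memetic' refinement step
        worst = max(pop, key=fitness)        # minimization: replace the worst
        if fitness(child) < fitness(worst):
            pop[pop.index(worst)] = child
    return min(pop, key=fitness)

# Toy usage: minimize the sum of a bit-string 'schedule cost'
f = lambda s: sum(s)
xo = lambda a, b: a[: len(a) // 2] + b[len(b) // 2 :]
mut = lambda s: [b ^ (random.random() < 0.1) for b in s]
ls = lambda s: [0 if random.random() < 0.3 else b for b in s]  # toy improvement
pop0 = [[random.randint(0, 1) for _ in range(20)] for _ in range(10)]
print(f(memetic(pop0, f, xo, mut, ls)))
```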

Keywords: energy costs, flexible job-shop scheduling, memetic algorithm, power peak

Procedia PDF Downloads 345
4377 Bidirectional Dynamic Time Warping Algorithm for the Recognition of Isolated Words Impacted by Transient Noise Pulses

Authors: G. Tamulevičius, A. Serackis, T. Sledevič, D. Navakauskas

Abstract:

We consider the biggest challenge in speech recognition: noise reduction. Traditionally, detected transient noise pulses are removed together with the corrupted speech using pulse models. In this paper, we propose to cope with the problem directly in the Dynamic Time Warping domain. A bidirectional Dynamic Time Warping algorithm for the recognition of isolated words impacted by transient noise pulses is proposed. It uses a simple transient noise pulse detector, employs bidirectional computation of dynamic time warping, and directly manipulates the warping results. Experimental investigation with several alternative solutions confirms the effectiveness of the proposed algorithm in reducing the impact of noise on the recognition process: a 3.9% increase in noisy speech recognition is achieved.
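
As a hedged sketch of the standard, unidirectional DTW recursion the method builds on (the bidirectional variant additionally runs the computation from both sequence ends and fuses the warping results, which is not reproduced here):

```python
import numpy as np

def dtw_distance(x, y):
    """Classic DTW between two 1-D feature sequences via dynamic
    programming over the cumulative cost matrix D."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])          # local distance
            D[i, j] = cost + min(D[i-1, j], D[i, j-1], D[i-1, j-1])
    return D[n, m]

# Usage: a time-shifted copy should be much closer than an unrelated signal
t = np.linspace(0, 2 * np.pi, 80)
print(dtw_distance(np.sin(t), np.sin(t + 0.4)))   # small
print(dtw_distance(np.sin(t), np.cos(3 * t)))     # large
```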

Keywords: transient noise pulses, noise reduction, dynamic time warping, speech recognition

Procedia PDF Downloads 559
4376 Anticipation of Bending Reinforcement Based on Iranian Concrete Code Using Meta-Heuristic Tools

Authors: Seyed Sadegh Naseralavi, Najmeh Bemani

Abstract:

In this paper, different concrete codes, including those of America, New Zealand, Mexico, Italy, India, Canada, Hong Kong, the Euro Code, and Britain, are compared with the Iranian concrete design code. First, by using an Adaptive Neuro-Fuzzy Inference System (ANFIS), the codes having the highest correlation with the ninth issue of the Iranian national regulation are determined. Subsequently, two prediction methods are used for comparing the codes: an Artificial Neural Network (ANN) and multi-variable regression. The results show that the ANN performs better. Prediction is done using only the tensile steel ratio, ignoring the compression steel ratio.

Keywords: adaptive neuro fuzzy inference system, anticipate method, artificial neural network, concrete design code, multi-variable regression

Procedia PDF Downloads 284
4375 Study of Surface Water Quality in the Wadi El Harrach for Its Use in the Artificial Groundwater Recharge of the Mitidja, North Algeria

Authors: M. Meddi, A. Boufekane

Abstract:

The Mitidja coastal groundwater, which extends over an area of 1450 km², is a strategic resource in the Algiers region. The high dependence of the regional economy on the use of this groundwater forces us to have recourse to its artificial recharge from the Wadi El Harrach in its upstream part. This system of artificial recharge has shown its effectiveness in water resource development in several regions of the world. The objective of this study is to increase water reserves by infiltration, raise the water level and quality in wells and boreholes, reduce losses to the sea, and address seawater intrusion by maintaining the balance of the freshwater-saltwater interface in the downstream part of the groundwater basin. After analyzing the situation, it was noted that qualitative monitoring of the wadi water had to be carried out before groundwater recharge. For this purpose, during three successive years (2010, 2011, and 2012), we carried out monthly sampling of the water in the upstream part of the Wadi El Harrach for chemical analysis. The variation of the sediment concentration was also measured. This monitoring aims to characterize the water quality and avoid clogging in the proposed recharge area. The analyses we performed in the laboratory during the three years showed good chemical quality, but the waters are too loaded with suspended matter. We noted that these fine particles come from the grinding of limestone at a sandpit located upstream of the proposed recharge area. This problem can be solved by taking the water supply upstream of the sandpit. For the recharge, we propose the method of dual-use wells, which can be used both for water supply and for extraction. This solution is inexpensive in our case and could easily be implemented, as wells are already drilled in the upstream part. Over time, this solution increases the piezometric level and also reduces saltwater contamination of the groundwater in the downstream part.

Keywords: water quality, artificial groundwater recharge, Mitidja, North Algeria

Procedia PDF Downloads 287
4374 Problem of Services Selection in Ubiquitous Systems

Authors: Malika Yaici, Assia Arab, Betitra Yakouben, Samia Zermani

Abstract:

Ubiquitous computing is nowadays a reality through the networking of a growing number of computing devices. It allows users to be provided with context-aware information and services in a heterogeneous environment, anywhere and anytime. Selecting the best context-aware service among many available services and providers is a tedious problem. In this paper, a service selection method based on the Constraint Satisfaction Problem (CSP) formalism is proposed. The services are modeled as variables and domains, and the user context, preferences, and provider characteristics are modeled as constraints. The backtracking algorithm is used to solve the problem and find the service and provider that best match the user requirements. Even though this algorithm has exponential complexity, its use guarantees that the service that best matches the user requirements will be found. A comparison of the proposed method with existing solutions concludes the paper.
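
As a hedged sketch of backtracking search over such a CSP (the services, providers, and the all-different constraint below are illustrative placeholders for the paper's context and preference constraints):

```python
def backtrack(assignment, variables, domains, consistent):
    """CSP backtracking search: 'variables' are services to select,
    'domains' their candidate providers, and 'consistent' encodes the
    constraints (all names here are illustrative)."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):
            result = backtrack(assignment, variables, domains, consistent)
            if result is not None:
                return result
        del assignment[var]                 # undo and backtrack
    return None

# Toy usage: pick providers so that no two services share a provider
variables = ["maps", "weather", "news"]
domains = {v: ["p1", "p2", "p3"] for v in variables}
consistent = lambda a: len(set(a.values())) == len(a)
print(backtrack({}, variables, domains, consistent))
```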

Keywords: ubiquitous computing, services selection, constraint satisfaction problem, backtrack algorithm

Procedia PDF Downloads 245
4373 MCDM Spectrum Handover Models for Cognitive Wireless Networks

Authors: Cesar Hernández, Diego Giral, Fernando Santa

Abstract:

Spectrum handoff is important in cognitive wireless networks to ensure adequate quality of service and performance for secondary user communications. This work proposes a performance benchmarking of three spectrum handoff models: VIKOR, SAW, and MEW. Four evaluation metrics are used: the cumulative average of failed handoffs, the cumulative average of handoffs performed, the cumulative average of transmission bandwidth, and the cumulative average of transmission delay. In contrast to related work, the performance of the three spectrum handoff models was validated with spectral occupancy data captured in experiments carried out in the GSM frequency band (824 MHz-849 MHz). These data represent the actual behavior of the licensed users of this wireless frequency band. The results of the comparison show that the VIKOR algorithm provides a 15.8% performance improvement over the SAW algorithm and a 12.1% improvement over the MEW algorithm.
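
As a hedged sketch of the simplest of the three models, SAW, on illustrative numbers (VIKOR and MEW differ in their normalization and aggregation rules):

```python
import numpy as np

# Hedged SAW (Simple Additive Weighting) sketch for ranking candidate
# channels; the metric values and weights below are illustrative only.
# Rows: candidate channels. Columns: bandwidth, delay, failure probability.
channels = np.array([[5.0, 20.0, 0.10],
                     [3.0,  8.0, 0.05],
                     [4.0, 15.0, 0.20]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, False])  # bandwidth up; delay/failures down

# Normalize each criterion, inverting cost criteria so higher is better
norm = np.where(benefit,
                channels / channels.max(axis=0),
                channels.min(axis=0) / channels)
scores = norm @ weights                   # SAW: weighted sum of normalized values
print("best channel:", int(scores.argmax()), scores)
```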

Keywords: cognitive radio, decision making, MEW, SAW, spectrum handoff, VIKOR

Procedia PDF Downloads 437
4372 An Improved Face Recognition Algorithm Using Histogram-Based Features in Spatial and Frequency Domains

Authors: Qiu Chen, Koji Kotani, Feifei Lee, Tadahiro Ohmi

Abstract:

In this paper, we propose an improved face recognition algorithm using histogram-based features in the spatial and frequency domains. To add spatial information of the face and improve recognition performance, a region-division (RD) method is utilized. The facial area is first divided into several regions; feature vectors of each facial part are then generated from a Binary Vector Quantization (BVQ) histogram of DCT coefficients in the low-frequency domain, as well as a Local Binary Pattern (LBP) histogram in the spatial domain. Recognition results for the different regions are first obtained separately and then fused by weighted averaging. The publicly available ORL database, consisting of 40 subjects with 10 images per subject containing variations in lighting, pose, and expression, is used for the evaluation of the proposed algorithm. It is demonstrated that face recognition using the RD method can achieve a much higher recognition rate.
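
As a hedged sketch of the basic 8-neighbour LBP histogram used as the spatial-domain feature (the paper additionally computes BVQ histograms of low-frequency DCT coefficients per region and fuses region results by weighted averaging, omitted here):

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 8-neighbour LBP: each interior pixel gets an 8-bit code from
    thresholding its neighbours against it; return the 256-bin histogram."""
    g = gray.astype(int)
    center = g[1:-1, 1:-1]
    codes = np.zeros_like(center)
    # 8 neighbours in a fixed order, each contributing one bit
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1+dy : g.shape[0]-1+dy, 1+dx : g.shape[1]-1+dx]
        codes += (neighbour >= center).astype(int) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()              # normalized 256-bin LBP histogram

# Usage: per-region histograms would be computed and fused as in the paper
img = np.random.default_rng(5).integers(0, 256, size=(64, 64))
print(lbp_histogram(img).shape)           # (256,)
```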

Keywords: binary vector quantization (BVQ), DCT coefficients, face recognition, local binary patterns (LBP)

Procedia PDF Downloads 349