Search results for: real-coded genetic algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4766

3236 Optimal Energy Management and Environmental Index Optimization of a Microgrid Operating by Renewable and Sustainable Generation Systems

Authors: Nabil Mezhoud

Abstract:

The economic operation of electric energy generating systems is one of the predominant problems in energy systems. Due to the need for better reliability, high energy quality, lower losses, lower cost and a clean environment, the application of renewable and sustainable energy sources, such as wind energy and solar energy, has become more widespread in recent years. In this work, a bio-inspired meta-heuristic algorithm inspired by the flashing behavior of fireflies at night, the Firefly Algorithm (FFA), is applied to solve the Optimal Energy Management (OEM) and environmental index (EI) problems of a micro-grid (MG) operating with Renewable and Sustainable Generation Systems (RSGS). Our main goal is to minimize the nonlinear objective function of an electrical microgrid, taking into account equality and inequality constraints. The FFA approach was examined and tested on a standard MG system composed of different types of RSGS, such as wind turbines (WT) and photovoltaic systems (PV), non-renewable sources, such as fuel cells (FC), micro turbines (MT) and diesel generators (DEG), and loads with energy storage systems (ESS). The results are promising and show the effectiveness and robustness of the proposed approach in solving the OEM and EI problems. The results of the proposed method have been compared and validated against recently published references.
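
As an illustration of the optimizer described above, the sketch below applies the core firefly update (attractiveness decaying with distance plus a small random walk) to a generic penalized dispatch cost. The toy fuel-cost curve, the power-balance penalty and all parameter values are assumptions for this example, not the paper's microgrid model.

```python
import numpy as np

def firefly_minimize(cost, dim, bounds, n_fireflies=25, n_iter=200,
                     alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
    """Minimal firefly algorithm for a penalized cost function.

    `cost` is assumed to fold the OEM equality/inequality constraints in as
    penalty terms, which is one common way to handle them."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_fireflies, dim))     # candidate dispatch vectors
    f = np.array([cost(xi) for xi in x])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if f[j] < f[i]:                          # firefly j is "brighter" (lower cost)
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    x[i] = np.clip(x[i], lo, hi)         # respect generator limits
                    f[i] = cost(x[i])
    best = int(np.argmin(f))
    return x[best], f[best]

# Illustrative use: toy quadratic fuel-cost curves with a power-balance penalty.
demand = 100.0
def toy_oem_cost(p):
    fuel = np.sum(0.01 * p ** 2 + 2.0 * p)
    balance_penalty = 1e3 * (np.sum(p) - demand) ** 2
    return fuel + balance_penalty

best_p, best_cost = firefly_minimize(toy_oem_cost, dim=5, bounds=(0.0, 40.0))
```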

Keywords: renewable energy sources, energy management, distributed generator, micro-grids, firefly algorithm

Procedia PDF Downloads 78
3235 Off-Line Parameter Estimation for the Induction Motor Drive System

Authors: Han-Woong Ahn, In-Gun Kim, Hyun-Seok Hong, Dong-Woo Kang, Ju Lee

Abstract:

It is important to accurately identify machine parameters for direct vector control. To obtain the parameter values, traditional methods such as the no-load and locked-rotor tests can be used. However, there are many differences between the values obtained from the traditional tests and the actual values. In addition, these tests have the drawback of requiring additional equipment and cost, which makes it difficult to estimate induction motor parameters with a temporary test operation. Therefore, this paper deals with an estimation algorithm for induction motor parameters that requires neither motor operation nor measurements from additional equipment such as sensors and a dynamometer. The validity and usefulness of the estimation algorithm, which takes inverter nonlinearity into account, are verified by comparing the conventional method with the proposed method.

Keywords: induction motor, parameter, off-line estimation, inverter nonlinearity

Procedia PDF Downloads 532
3234 Comparison of Phenotypic Traits of Three Arabian Horse Strains

Authors: Saria Almarzook, Monika Reissmann, Gudrun Brockmann

Abstract:

Due to its history, its occurrence in different ecosystems and its diverse uses, the modern horse (Equus caballus) shows large variability in size, appearance, behavior and habits. At all times, breeders have tried to create groups (breeds, strains) showing high homogeneity within the group but clear differences from other groups. There is great interest in analyzing the phenotypic and genetic traits of Arabian horses in Syria in search of real diversity and genetic uniqueness. Ninety Arabian horses from the governmental research center of Arabian horses in Damascus were included. The horses represent three strains (Kahlawi, Saklawi, Hamdani) originating from different geographical zones. They were raised on the same farm, under stable conditions. Twelve phenotypic traits were measured: wither height (WH), croup width (CW), croup height (CH), neck girth (NG), thorax girth (TG), chest girth (ChG), chest depth (ChD), chest width (ChW), back line length (BLL), body length (BL), fore cannon length (FCL) and hind cannon length (HCL). The horses were divided into groups according to age (less than 2 years, 2-4 years, 4-9 years, over 9 years) and sex (male, female). The statistical analyses show that age has a significant influence on WH, while the strain has only a very limited effect. CW, NG, BLL, FCL and HCL are significantly influenced only by sex. Age has a significant effect on CH and BL. All classification factors have a significant effect on TG, ChG, ChD and ChW. Strain has a significant effect on BL. These results provide first information on the real biodiversity within and between the strains and can be used to develop the breeding work in the Arabian horse breed.

Keywords: Arabian horse, phenotypic traits, strains, Syria

Procedia PDF Downloads 391
3233 Symbiotic Organism Search (SOS) for Solving the Capacitated Vehicle Routing Problem

Authors: Eki Ruskartina, Vincent F. Yu, Budi Santosa, A. A. N. Perwira Redi

Abstract:

This paper introduces symbiotic organism search (SOS) for solving the capacitated vehicle routing problem (CVRP). SOS is a new approach in the metaheuristics field and has never been used to solve discrete problems. A sophisticated decoding method to deal with the discrete problem setting of the CVRP is applied within the basic symbiotic organism search (SOS) framework. The performance of the algorithm was evaluated on a set of benchmark instances, and the results were compared with the best known solutions. The computational results show that the proposed algorithm can produce good solutions in preliminary testing. These results indicate that the proposed SOS can be applied as an alternative for solving the capacitated vehicle routing problem.
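
The decoding step mentioned above can be realized in several ways; a common one for continuous metaheuristics is ranked-order-value decoding, sketched below under the assumption of a single depot and a homogeneous fleet. It is a generic scheme, not necessarily the authors' exact decoder.

```python
import numpy as np

def decode_to_routes(keys, demands, capacity):
    """Ranked-order-value decoding: one continuous key per customer is sorted to
    give a visiting order, which is then split greedily into capacity-feasible
    routes. A common decoding scheme, not necessarily the paper's exact one."""
    order = np.argsort(keys)                  # customer permutation from the SOS organism
    routes, current, load = [], [], 0.0
    for c in order:
        if load + demands[c] > capacity:      # open a new vehicle when capacity would be exceeded
            routes.append(current)
            current, load = [], 0.0
        current.append(int(c))
        load += demands[c]
    if current:
        routes.append(current)
    return routes

def total_distance(routes, dist, depot=0):
    """Sum depot -> customers -> depot travel per route; customers are indexed
    1..n in the distance matrix, keys 0..n-1."""
    total = 0.0
    for r in routes:
        path = [depot] + [c + 1 for c in r] + [depot]
        total += sum(dist[path[i]][path[i + 1]] for i in range(len(path) - 1))
    return total

# Toy instance: 5 customers, vehicle capacity 10.
rng = np.random.default_rng(0)
demands = np.array([4.0, 3.0, 5.0, 2.0, 6.0])
coords = rng.random((6, 2)) * 10.0            # index 0 is the depot
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
organism = rng.random(5)                      # one continuous SOS candidate
routes = decode_to_routes(organism, demands, capacity=10.0)
print(routes, total_distance(routes, dist))
```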

Keywords: symbiotic organism search, capacitated vehicle routing problem, metaheuristic

Procedia PDF Downloads 635
3232 Multi-Objective Optimal Design of a Cascade Control System for a Class of Underactuated Mechanical Systems

Authors: Yuekun Chen, Yousef Sardahi, Salam Hajjar, Christopher Greer

Abstract:

This paper presents a multi-objective optimal design of a cascade control system for an underactuated mechanical system. Cascade control structures usually include two control algorithms (inner and outer). To design such a control system properly, the following conflicting objectives should be considered at the same time: 1) the inner closed-loop control must be faster than the outer one, 2) the inner loop should quickly reject any disturbance and prevent it from propagating to the outer loop, 3) the controlled system should be insensitive to measurement noise, and 4) the controlled system should be driven by optimal energy. Such a control problem can be formulated as a multi-objective optimization problem such that the optimal trade-offs among these design goals are found. To the authors' best knowledge, such a problem has not been studied in multi-objective settings so far. In this work, an underactuated mechanical system consisting of a rotary servo motor and a ball and beam is used for the computer simulations, the setup parameters of the inner and outer control systems are tuned by NSGA-II (Non-dominated Sorting Genetic Algorithm II), and the dominance concept is used to find the optimal design points. The solution of this problem is not a single optimal cascade controller, but rather a set of optimal cascade controllers (called the Pareto set) which represent the optimal trade-offs among the selected design criteria. The function evaluation of the Pareto set is called the Pareto front. The solution set is introduced to the decision-maker, who can choose any point to implement. The simulation results, in terms of the Pareto front and time responses to external signals, show the competing nature of the design objectives. The presented study may become the basis for multi-objective optimal design of multi-loop control systems.
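
The dominance filtering mentioned above is easy to make concrete. The sketch below extracts the Pareto set from a table of hypothetical objective evaluations of candidate inner/outer gain sets (settling time, disturbance-rejection index, noise sensitivity, control energy, all minimized); the numbers are placeholders, not results from the paper.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def pareto_front(F):
    """Indices of non-dominated rows in an (n_designs x n_objectives) array."""
    idx = []
    for i in range(len(F)):
        if not any(dominates(F[j], F[i]) for j in range(len(F)) if j != i):
            idx.append(i)
    return idx

# Hypothetical evaluations of cascade-controller gain sets: each row is
# (settling time, disturbance-rejection index, noise sensitivity, control energy).
F = np.array([[1.2, 0.4, 0.9, 3.1],
              [0.8, 0.5, 1.1, 4.0],
              [1.5, 0.3, 0.8, 2.7],
              [1.3, 0.5, 1.0, 3.6]])
print(pareto_front(F))   # non-dominated gain sets form the Pareto set, e.g. [0, 1, 2]
```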

Keywords: cascade control, multi-loop control systems, multi-objective optimization, optimal control

Procedia PDF Downloads 154
3231 Computer Aided Design Solution Based on Genetic Algorithms for FMEA and Control Plan in Automotive Industry

Authors: Nadia Belu, Laurenţiu Mihai Ionescu, Agnieszka Misztal

Abstract:

The automotive industry is one of the most important industries in the world, concerning not only the economy but also the world culture. In the present financial and economic context, this field faces new challenges posed by the current crisis: companies must maintain product quality, deliver on time and at a competitive price in order to achieve customer satisfaction. Two of the techniques most recommended by the specific standards of the automotive industry for quality management in product development are Failure Mode and Effects Analysis (FMEA) and the Control Plan. FMEA is a methodology for risk management and quality improvement aimed at identifying potential causes of failure of products and processes, quantifying them by risk assessment, ranking the problems identified according to their importance, and determining and implementing the related corrective actions. Companies use Control Plans, built from the FMEA results, to evaluate a process or product for strengths and weaknesses and to prevent problems before they occur. Control Plans are written descriptions of the systems used to control and minimize product and process variation. In addition, Control Plans specify the process monitoring and control methods (for example, Special Controls) used to control Special Characteristics. In this paper we propose a computer-aided solution based on Genetic Algorithms that reduces the effort of drafting the FMEA analysis and Control Plan reports required for product launch and improves the knowledge of development teams for future projects. The solution allows the design team to enter the data required for the FMEA. The actual analysis is performed using Genetic Algorithms to find the optimum between the RPN risk factor and the production cost. A feature of Genetic Algorithms is that they can be used to find solutions for multi-criteria optimization problems. In our case, the three specific FMEA risk factors are considered together with the production cost. The analysis tool generates final reports for all FMEA processes. The data obtained in the FMEA reports are automatically integrated with the other parameters entered in the Control Plan. The solution is implemented as an application running on an intranet on two servers: one containing the analysis and plan-generation engine and the other containing the database where the initial parameters and results are stored. The results can then be used as starting solutions in the synthesis of other projects. The solution was applied to the welding, laser cutting and bending processes used to manufacture bus chassis. The advantages of the solution are the efficient elaboration of documents in the current project, by automatically generating the FMEA and Control Plan reports using multi-criteria optimization of production, and the building of a solid knowledge base for future projects. The proposed solution, implemented with open-source tools, is a cheap alternative to other solutions on the market.
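
As a rough illustration of the optimization step, the sketch below runs a small genetic algorithm over candidate corrective actions, trading the RPN (severity x occurrence x detection) against an added production cost through a weighted sum. The action catalogue, weights and encoding are illustrative assumptions, not the paper's data or exact fitness function.

```python
import random

# Hypothetical catalogue of corrective actions per failure mode:
# each option changes (severity, occurrence, detection) and adds a cost.
OPTIONS = [
    [((8, 6, 7), 0.0), ((8, 3, 4), 120.0), ((8, 2, 3), 300.0)],   # failure mode 1
    [((6, 5, 5), 0.0), ((6, 4, 3), 80.0),  ((6, 2, 2), 250.0)],   # failure mode 2
    [((9, 4, 6), 0.0), ((9, 3, 3), 150.0), ((9, 2, 2), 400.0)],   # failure mode 3
]
W_RPN, W_COST = 1.0, 0.5     # weighted-sum trade-off (illustrative)

def fitness(chromosome):
    chosen = [OPTIONS[i][g] for i, g in enumerate(chromosome)]
    rpn = sum(s * o * d for (s, o, d), _ in chosen)
    cost = sum(c for _, c in chosen)
    return W_RPN * rpn + W_COST * cost          # lower is better

def ga(pop_size=30, generations=100, p_mut=0.1):
    pop = [[random.randrange(len(o)) for o in OPTIONS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(OPTIONS))
            child = a[:cut] + b[cut:]                       # one-point crossover
            if random.random() < p_mut:                     # mutation
                i = random.randrange(len(OPTIONS))
                child[i] = random.randrange(len(OPTIONS[i]))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

print(ga())
```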

Keywords: automotive industry, FMEA, control plan, automotive technology

Procedia PDF Downloads 407
3230 Synthesis of a Model Predictive Controller for Artificial Pancreas

Authors: Mohamed El Hachimi, Abdelhakim Ballouk, Ilyas Khelafa, Abdelaziz Mouhou

Abstract:

Introduction: Type 1 diabetes occurs when beta cells are destroyed by the body's own immune system. Treatment of type 1 diabetes mellitus could be greatly improved by applying a closed-loop control strategy to insulin delivery, also known as an Artificial Pancreas (AP). Method: In this paper, we present a new formulation of the cost function for a Model Predictive Controller (MPC) using a technique that accelerates the speed of control of the AP and tackles the nonlinearity of the control problem via asymmetric objective functions. Finding: The outcome of this work is a new Model Predictive Control algorithm that achieves good performance, such as reducing the time spent in hyperglycaemia and avoiding hypoglycaemia. Conclusion: These performances are validated in in silico trials.
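
The sketch below illustrates the idea of an asymmetric MPC objective: glucose excursions below the set-point (hypoglycaemia risk) are penalized much more heavily than excursions above it, and the controller re-optimizes the insulin sequence at each step. The glucose model, weights and bounds are placeholders, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

G_TARGET = 110.0   # mg/dL set-point (illustrative)

def asymmetric_penalty(g):
    """Penalize deviations below the target (hypoglycaemia risk) much more
    heavily than deviations above it, the kind of asymmetry discussed above."""
    dev = g - G_TARGET
    return np.where(dev < 0.0, 10.0 * dev ** 2, 1.0 * dev ** 2)

def predict(g0, insulin, horizon):
    """Placeholder linear glucose response; a real AP controller would use an
    identified physiological model here."""
    g, g_prev = np.empty(horizon), g0
    for k in range(horizon):
        g_prev = g_prev + 1.5 - 4.0 * insulin[k]   # glucose drifts up, insulin pulls it down
        g[k] = g_prev
    return g

def mpc_step(g0, horizon=10):
    def cost(u):
        tracking = np.sum(asymmetric_penalty(predict(g0, u, horizon)))
        return float(tracking + 0.1 * np.sum(u ** 2))      # small insulin-effort term
    res = minimize(cost, x0=np.zeros(horizon), bounds=[(0.0, 5.0)] * horizon)
    return res.x[0]   # receding horizon: apply only the first insulin dose

print(mpc_step(180.0))
```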

Keywords: artificial pancreas, control algorithm, biomedical control, MPC, objective function, nonlinearity

Procedia PDF Downloads 308
3229 Development of Microsatellite Markers for Dalmatian Pyrethrum Using Next-Generation Sequencing

Authors: Ante Turudic, Filip Varga, Zlatko Liber, Jernej Jakse, Zlatko Satovic, Ivan Radosavljevic, Martina Grdisa

Abstract:

Microsatellites (SSRs) are highly informative repetitive sequences of 2-6 base pairs, which are the most used molecular markers in assessing the genetic diversity of plant species. Dalmatian pyrethrum (Tanacetum cinerariifolium /Trevir./ Sch. Bip) is an outcrossing diploid (2n = 18) endemic to the eastern Adriatic coast and the source of the natural insecticide pyrethrin. Due to the high repetitiveness and large size of the genome (haploid genome size of 9.58 pg), previous attempts to develop microsatellite markers using standard methods were unsuccessful. A next-generation sequencing (NGS) approach was applied to genomic DNA extracted from fresh leaves of Dalmatian pyrethrum. The sequencing was conducted using a NovaSeq6000 Illumina sequencer, after which almost 400 million high-quality paired-end reads were obtained, with a read length of 150 base pairs. Short reads were assembled by combining two approaches: (1) de-novo assembly and (2) joining of overlapping paired-end reads. In total, 6,909,675 contigs were obtained, with an average contig length of 249 base pairs. Of the resulting contigs, 31,380 contained one or multiple microsatellite sequences; in total, 35,556 microsatellite loci were identified. Of the detected microsatellites, dinucleotide repeats were the most frequent, accounting for more than half of all microsatellites identified (21,212; 59.7%), followed by trinucleotide repeats (9,204; 25.9%). Tetra-, penta- and hexanucleotide repeats had similar frequencies of 1,822 (5.1%), 1,472 (4.1%), and 1,846 (5.2%), respectively. Contigs containing microsatellites were further filtered by SSR pattern type, transposon occurrences, assembly characteristics, GC content, and the number of occurrences against the draft genome of T. cinerariifolium published previously. After the selection process, 50 microsatellite loci were used for primer design. The designed primers were tested on samples from five distinct populations, and 25 of them showed a high degree of polymorphism. The selected loci were then genotyped on 20 samples belonging to one population, resulting in 17 microsatellite markers. The availability of codominant SSR markers will significantly improve knowledge of the population genetic diversity and structure as well as the complex genetics and biochemistry of this species. Acknowledgment: This work has been fully supported by the Croatian Science Foundation under the project ‘Genetic background of Dalmatian pyrethrum (Tanacetum cinerariifolium /Trevir/ Sch. Bip.) insecticidal potential’ - (PyrDiv) (IP-06-2016-9034).
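
For illustration, the microsatellite detection step can be sketched as a regular-expression scan over assembled contigs for perfect 2-6 bp repeats; the minimum repeat counts below are common defaults and only an assumption, and the real pipeline applies further filters (transposons, GC content, primer design constraints).

```python
import re

# Minimum number of repeat units per motif length (illustrative thresholds).
MIN_REPEATS = {2: 6, 3: 5, 4: 4, 5: 4, 6: 4}

def find_ssrs(seq):
    """Scan one contig for perfect 2-6 bp microsatellites.

    Returns (start position, motif, number of repeat units) per hit."""
    hits = []
    for motif_len, min_rep in MIN_REPEATS.items():
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (motif_len, min_rep - 1))
        for m in pattern.finditer(seq):
            motif = m.group(1)
            if len(set(motif)) > 1:   # skip homopolymer-like motifs such as "AA"
                hits.append((m.start(), motif, len(m.group(0)) // motif_len))
    return hits

contig = "TTGCACACACACACACACGGTAGCAGCAGCAGCAGCTTA"
print(find_ssrs(contig))   # e.g. [(3, 'CA', 7), (21, 'AGC', 5)]
```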

Keywords: genome assembly, NGS, SSR, Tanacetum cinerariifolium

Procedia PDF Downloads 132
3228 Implementation of Algorithm K-Means for Grouping District/City in Central Java Based on Macro Economic Indicators

Authors: Nur Aziza Luxfiati

Abstract:

Clustering is partitioning a data set into sub-sets or groups in such a way that elements have a high level of similarity within one group and a low level of similarity between groups. The K-Means algorithm is one of the most widely used clustering algorithms in scientific and industrial applications because its basic idea is very simple. In this research, the k-means clustering technique is applied as a method of addressing the problem of national development imbalance between regions in Central Java Province, based on macroeconomic indicators. The data sample used is secondary data obtained from the Central Java Provincial Statistics Agency, consisting of macroeconomic indicator data that is part of the publication of the 2019 National Socio-Economic Survey (Susenas). Outliers are detected using the z-score, and the number of clusters (k) is determined using the elbow method. After the clustering process is carried out, the validity is tested using the Between-Class Variation (BCV) and Within-Class Variation (WCV) methods. The results showed that outlier detection using z-score normalization revealed no outliers. In addition, the clustering test obtained a ratio value that was not high, namely 0.011%. There are two district/city clusters in Central Java Province which have economic similarities based on the variables used: the first cluster, with a high economic level, consists of 13 districts/cities, and the second cluster, with a low economic level, consists of 22 districts/cities. Within the second, low-economy cluster, the districts/cities were further grouped by similarity on individual macroeconomic indicators: 20 districts by Gross Regional Domestic Product, 19 districts by Poverty Depth Index, 5 districts by Human Development, and 10 districts by Open Unemployment Rate.
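
A minimal version of the workflow described above (z-score standardization, elbow inspection, k-means, and a WCV/BCV-style validity ratio) can be sketched with scikit-learn as follows; the indicator matrix is random stand-in data, not the Susenas figures.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical macroeconomic indicator matrix: one row per district/city,
# columns such as GRDP, poverty depth index, HDI, open unemployment rate.
X = np.random.default_rng(0).normal(size=(35, 4))

X_std = StandardScaler().fit_transform(X)          # z-score normalization

# Elbow method: inspect within-cluster sum of squares (inertia) against k.
inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_std).inertia_
            for k in range(1, 8)}

k = 2                                              # chosen at the "elbow"
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_std)

# Simple validity ratio in the spirit of WCV/BCV: lower is better.
overall_centroid = X_std.mean(axis=0)
centers = np.array([X_std[labels == c].mean(axis=0) for c in range(k)])
wcv = sum(np.sum((X_std[labels == c] - centers[c]) ** 2) for c in range(k))
bcv = sum(np.sum((centers[c] - overall_centroid) ** 2) * np.sum(labels == c) for c in range(k))
print(inertias, wcv / bcv)
```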

Keywords: clustering, K-Means algorithm, macroeconomic indicators, inequality, national development

Procedia PDF Downloads 159
3227 An Efficient Subcarrier Scheduling Algorithm for Downlink OFDMA-Based Wireless Broadband Networks

Authors: Hassen Hamouda, Mohamed Ouwais Kabaou, Med Salim Bouhlel

Abstract:

The growth of wireless technology has made opportunistic scheduling a widespread theme in recent research. Providing high system throughput without reducing fairness of allocation is becoming a very challenging task. A suitable policy for resource allocation among users is of crucial importance. This study focuses on scheduling multiple streaming flows on the downlink of a WiMAX system based on orthogonal frequency division multiple access (OFDMA). In this paper, we take the first step in formulating and analyzing this problem scrupulously. As a result, we propose a new scheduling scheme based on the Round Robin (RR) algorithm. Because of its non-opportunistic process, RR does not take radio conditions into account and consequently penalizes both system throughput and multi-user diversity. Our contribution, called MORRA (Modified Round Robin Opportunistic Algorithm), proposes a solution to this issue. MORRA not only exploits the concept of an opportunistic scheduler but also takes other parameters into account in the allocation process. The first parameter is called the courtesy coefficient (CC) and the second is called buffer occupancy (BO). Performance evaluation shows that this well-balanced scheme outperforms both the RR and MaxSNR schedulers and demonstrates that choosing between system throughput and fairness is not required.
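
As an illustration of how the allocation metric might combine channel quality with the courtesy coefficient and buffer occupancy, the sketch below assigns each subcarrier to the user maximizing a weighted product. The multiplicative form of the metric is an assumption; the abstract only states that CC and BO enter the allocation process.

```python
import numpy as np

def morra_like_allocation(snr, buffer_occupancy, courtesy, n_subcarriers):
    """Allocate each downlink subcarrier to the user maximizing a weighted metric.

    snr:              (n_users, n_subcarriers) instantaneous channel quality
    buffer_occupancy: (n_users,) queued bytes per user
    courtesy:         (n_users,) coefficient boosting recently starved users
    """
    allocation = np.full(n_subcarriers, -1, dtype=int)
    for s in range(n_subcarriers):
        metric = snr[:, s] * courtesy * buffer_occupancy   # illustrative combination
        allocation[s] = int(np.argmax(metric))
    return allocation

rng = np.random.default_rng(1)
alloc = morra_like_allocation(rng.rayleigh(size=(4, 16)),
                              rng.integers(1, 100, size=4).astype(float),
                              np.ones(4), 16)
print(alloc)
```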

Keywords: OFDMA, opportunistic scheduling, fairness hierarchy, courtesy coefficient, buffer occupancy

Procedia PDF Downloads 302
3226 Supervised Learning for Cyber Threat Intelligence

Authors: Jihen Bennaceur, Wissem Zouaghi, Ali Mabrouk

Abstract:

The major aim of cyber threat intelligence (CTI) is to provide sophisticated knowledge about cybersecurity threats to ensure internal and external safeguards against modern cyberattacks. Inaccurate, incomplete, outdated, and low-value threat intelligence is the main problem. Therefore, data analysis based on AI algorithms is one of the emergent solutions to overcome threat information-sharing issues. In this paper, we propose a supervised machine learning-based algorithm to improve threat information sharing by providing a sophisticated classification of cyber threats and data. Extensive simulations investigate the overall accuracy, precision, recall, F1-score, and support to validate the designed algorithm and to compare it with several supervised machine learning algorithms.
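
A generic version of the evaluation described above can be sketched with scikit-learn: a supervised classifier trained on numeric threat features and scored with per-class precision, recall, F1-score and support. The synthetic features, the class count and the choice of a random forest are stand-ins, since they are not fixed in the abstract.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Stand-in for engineered CTI features (e.g., indicators of compromise encoded
# numerically) and threat-class labels.
X, y = make_classification(n_samples=2000, n_features=30, n_informative=12,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# classification_report gives precision, recall, f1-score and support per class.
print(classification_report(y_te, clf.predict(X_te)))
```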

Keywords: threat information sharing, supervised learning, data classification, performance evaluation

Procedia PDF Downloads 151
3225 Using Scale Invariant Feature Transform Features to Recognize Characters in Natural Scene Images

Authors: Belaynesh Chekol, Numan Çelebi

Abstract:

The main purpose of this work is to recognize individual characters extracted from natural scene images using scale invariant feature transform (SIFT) features as input to a K-nearest neighbor (KNN) classification learner. For this task, 1,068 and 78 images of English alphabet characters taken from the Chars74k data set are used to train and test the classifier, respectively. For each character image, descriptive features were generated using the SIFT algorithm. This set of features is fed to the learner so that it can recognize and label new images of English characters. Two types of KNN (fine KNN and weighted KNN) were trained, and the resulting classification accuracies are 56.9% and 56.5%, respectively. The training time taken was the same for both fine and weighted KNN.
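
The feature-extraction step can be sketched with OpenCV and scikit-learn as below, where each character image is summarized by the mean of its SIFT descriptors before being fed to a (weighted) KNN; averaging descriptors is a simplification of a full bag-of-visual-words encoding and is only assumed here.

```python
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

sift = cv2.SIFT_create()

def sift_feature(img_gray, dim=128):
    """Mean SIFT descriptor of one character image (zero vector if no keypoints)."""
    _, desc = sift.detectAndCompute(img_gray, None)
    return desc.mean(axis=0) if desc is not None else np.zeros(dim)

def train_knn(train_images, train_labels, weighted=True):
    X = np.vstack([sift_feature(im) for im in train_images])
    knn = KNeighborsClassifier(n_neighbors=5,
                               weights="distance" if weighted else "uniform")
    return knn.fit(X, train_labels)

# Usage (assuming lists of grayscale uint8 arrays loaded from Chars74k):
# model = train_knn(train_images, train_labels, weighted=True)
# pred = model.predict(np.vstack([sift_feature(im) for im in test_images]))
```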

Keywords: character recognition, KNN, natural scene image, SIFT

Procedia PDF Downloads 282
3224 Prediction of Energy Storage Areas for Static Photovoltaic System Using Irradiation and Regression Modelling

Authors: Kisan Sarda, Bhavika Shingote

Abstract:

This paper aims to evaluate regression modelling for the prediction of the energy storage of a solar photovoltaic (PV) system using semi-parametric regression techniques, because some parameters are known while others, such as humidity and dust, are unknown. Solar irradiation differs from place to place depending on latitude, so by finding the areas that yield more storage, PV systems can be implemented at those places and the energy need can be fulfilled. The regression modelling is done for daily, monthly and seasonal prediction of solar energy storage. R modules are used to design the algorithm. This algorithm gives better comparative results than other regression models for solar PV cell energy storage.

Keywords: semi parametric regression, photovoltaic (PV) system, regression modelling, irradiation

Procedia PDF Downloads 384
3223 An Improvement of Multi-Label Image Classification Method Based on Histogram of Oriented Gradient

Authors: Ziad Abdallah, Mohamad Oueidat, Ali El-Zaart

Abstract:

Image Multi-label Classification (IMC) assigns a label or a set of labels to an image. The big demand for image annotation and archiving on the web has attracted researchers to develop many algorithms for this application domain. The existing techniques for IMC have two drawbacks: the description of the elementary characteristics of the image and the correlation between labels are not taken into account. In this paper, we present an algorithm (MIML-HOGLPP) which handles both limitations simultaneously. The algorithm uses the histogram of oriented gradients as the feature descriptor. It applies the Label Priority Power-set as the multi-label transformation to solve the problem of label correlation. The experiment shows that the results of MIML-HOGLPP are better in terms of several of the evaluation metrics compared with the two existing techniques.
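
A simplified version of the pipeline can be sketched with scikit-image and scikit-learn: HOG descriptors per image plus a plain label power-set transformation in which every distinct label combination becomes one class. The priority weighting of the paper's Label Priority Power-set and its exact HOG settings are not reproduced here.

```python
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestClassifier

def hog_features(images):
    """HOG descriptor per image (grayscale arrays of identical size)."""
    return np.vstack([hog(im, orientations=9, pixels_per_cell=(8, 8),
                          cells_per_block=(2, 2)) for im in images])

def powerset_encode(label_sets):
    """Label power-set transformation: each distinct set of labels becomes one class.
    (The Label Priority Power-set adds a priority weighting not shown here.)"""
    keys = [tuple(sorted(s)) for s in label_sets]
    classes = {k: i for i, k in enumerate(sorted(set(keys)))}
    return np.array([classes[k] for k in keys]), classes

# Usage with hypothetical data:
# X = hog_features(train_images)
# y, mapping = powerset_encode(train_label_sets)    # e.g. [{"sky", "sea"}, {"sky"}, ...]
# clf = RandomForestClassifier(n_estimators=300).fit(X, y)
```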

Keywords: data mining, information retrieval system, multi-label, problem transformation, histogram of gradients

Procedia PDF Downloads 376
3222 Forecasting Unusual Infection of Patient Used by Irregular Weighted Point Set

Authors: Seema Vaidya

Abstract:

Mining association rules is a key issue in data mining. However, standard models ignore the differences among transactions, and weighted association rule mining does not work on databases with only binary attributes. This paper proposes a frequent pattern tree (FP-tree) structure, an extended prefix-tree structure for storing compressed, discriminating information about patterns, and develops an FP-tree-based mining system in which an enhanced FP-growth algorithm is used for mining the complete set of patterns by pattern fragment growth. The paper then addresses the problem of mining rare and weighted item sets, i.e., the infrequent weighted item set (IWI) mining problem. Two novel quality measures are proposed for the infrequent weighted item set mining problem, and algorithms are presented that perform IWI and minimal IWI mining. Moreover, the rare item sets are used in a decision-based framework. The general problem of inducing reliable diagnostic rules is difficult because, in theory, no induction technique on its own can guarantee the correctness of the induced hypotheses; therefore, this framework predicts disorders from rare signs. An experimental study demonstrates that the proposed algorithm is effective and scalable for mining both long and short diagnostic rules, and the framework improves the results of predicting rare diseases of patients.
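
For orientation, the frequent-pattern baseline underlying this approach can be reproduced with the FP-growth implementation in mlxtend, as sketched below on toy transactions of patient signs; the infrequent weighted item set (IWI) extension itself is not available in the library and would replace the minimum-support filter with the proposed quality measures.

```python
import pandas as pd
from mlxtend.frequent_patterns import fpgrowth
from mlxtend.preprocessing import TransactionEncoder

# Toy transactions of patient signs/symptoms (hypothetical).
transactions = [["fever", "cough", "fatigue"],
                ["fever", "rash"],
                ["cough", "fatigue"],
                ["fever", "cough", "rash", "fatigue"],
                ["rash", "fatigue"]]

# One-hot encode the transactions, the input format fpgrowth expects.
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

# Baseline FP-growth over the compressed FP-tree; the IWI variant described above
# would instead keep low-support, weighted item sets rather than frequent ones.
frequent = fpgrowth(onehot, min_support=0.4, use_colnames=True)
print(frequent)
```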

Keywords: association rule, data mining, IWI mining, infrequent item set, frequent pattern growth

Procedia PDF Downloads 401
3221 Land Use Dynamics of Ikere Forest Reserve, Nigeria Using Geographic Information System

Authors: Akintunde Alo

Abstract:

The incessant encroachment into the forest ecosystem by farmers and local contractors constitutes a major threat to the conservation of genetic resources and biodiversity in Nigeria. To propose a viable monitoring system, this study employed Geographic Information System (GIS) technology to assess the changes that occurred over a period of five years (between 2011 and 2016) in the Ikere forest reserve. Landsat imagery of the forest reserve was obtained, and ground-truth coordinates of benchmark places within the reserve were used to geo-reference the acquired satellite imagery. Supervised classification, image processing, vectorization and map production were carried out using ArcGIS. The various land-use systems within the forest ecosystem were digitized into polygons of different types and colours for 2011 and 2016, and roads were represented with lines of different thickness and colours. Of the six land uses delineated, grassland increased from 26.50% of the total land area in 2011 to 45.53% in 2016, a percentage change of 71.81%. Plantations of Gmelina arborea and Tectona grandis, on the other hand, reduced from 62.16% in 2011 to 27.41% in 2016. Farmland and degraded land recorded percentage changes of about 176.80% and 8.70%, respectively, from 2011 to 2016. Overall, the rate of deforestation in the study area is increasing and becoming severe. About 72.59% of the total land area has been converted to non-forestry uses, while the remaining 27.41% is occupied by plantations of Gmelina arborea and Tectona grandis. Interestingly, over 55% of the plantation area in 2011 had changed to grassland or been converted to farmland and degraded land by 2016. The rate of change over time was about 9.79% annually. Based on the results, rapid action to prevail on the encroachers to stop deforestation and to encourage re-afforestation in the study area is recommended.

Keywords: land use change, forest reserve, satellite imagery, geographical information system

Procedia PDF Downloads 358
3220 Virtual Dimension Analysis of Hyperspectral Imaging to Characterize a Mining Sample

Authors: L. Chevez, A. Apaza, J. Rodriguez, R. Puga, H. Loro, Juan Z. Davalos

Abstract:

The Virtual Dimension (VD) procedure is used to analyze Hyperspectral Image (HSI) data in order to estimate the abundance of the mineral components of a mining sample. Hyperspectral images derived from reflectance spectra (NIR region) are pre-treated using the Standard Normal Variate (SNV) and Minimum Noise Fraction (MNF) methodologies. The endmember components are identified by the Simplex Growing Algorithm (SGA) and then matched to the reflectance spectra of reference databases using the Simulated Annealing (SA) methodology. The mineral abundances obtained for the studied sample are very close to those obtained using XRD, with a total relative error of 2%.
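
The SNV pre-treatment step is simple enough to show directly: each spectrum is centered on its own mean and scaled by its own standard deviation before MNF and endmember extraction. The synthetic spectra below are placeholders.

```python
import numpy as np

def snv(spectra):
    """Standard Normal Variate pretreatment: each spectrum (row) is centered on
    its own mean and scaled by its own standard deviation, removing
    multiplicative scatter effects before MNF / endmember extraction."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, ddof=1, keepdims=True)
    return (spectra - mean) / std

# Example: 3 synthetic reflectance spectra with 200 NIR bands and different scatter levels.
raw = np.random.default_rng(0).random((3, 200)) * np.array([[1.0], [1.5], [0.7]])
print(snv(raw).mean(axis=1), snv(raw).std(axis=1, ddof=1))   # ~0 and ~1 per spectrum
```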

Keywords: hyperspectral imaging, minimum noise fraction, MNF, simplex growing algorithm, SGA, standard normal variate, SNV, virtual dimension, XRD

Procedia PDF Downloads 159
3219 Fitness Action Recognition Based on MediaPipe

Authors: Zixuan Xu, Yichun Lou, Yang Song, Zihuai Lin

Abstract:

MediaPipe is an open-source machine learning computer vision framework that can be ported to multi-platform environments, which makes it easier to use for recognizing human activity. Many human recognition systems have been created based on this framework, but the fundamental issue remains the recognition of human behavior and posture. In this paper, two methods are proposed to recognize human gestures based on MediaPipe: the first uses the Adaptive Boosting algorithm to recognize a series of fitness gestures, and the second uses the Fast Dynamic Time Warping algorithm to recognize 413 continuous fitness actions. These two methods are also applicable to any human posture movement recognition.
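
A sketch of the first method is given below: MediaPipe pose landmarks are flattened into a per-frame feature vector and classified with AdaBoost. The per-frame (static) treatment and the handling of undetected poses are simplifying assumptions; the gesture labels and frames are hypothetical.

```python
import numpy as np
import mediapipe as mp
from sklearn.ensemble import AdaBoostClassifier

mp_pose = mp.solutions.pose
N_LANDMARKS = 33                      # MediaPipe pose returns 33 body landmarks

def landmark_vector(rgb_frame, pose):
    """Flatten the pose landmarks of one RGB frame into a feature vector
    (zeros if no pose is detected, to keep frames and labels aligned)."""
    result = pose.process(rgb_frame)
    if result.pose_landmarks is None:
        return np.zeros(N_LANDMARKS * 4)
    return np.array([[lm.x, lm.y, lm.z, lm.visibility]
                     for lm in result.pose_landmarks.landmark]).ravel()

def extract_features(frames):
    with mp_pose.Pose(static_image_mode=True) as pose:
        return np.vstack([landmark_vector(f, pose) for f in frames])

# Usage with hypothetical labelled frames (RGB uint8 arrays) of fitness poses:
# X = extract_features(train_frames)
# clf = AdaBoostClassifier(n_estimators=100).fit(X, train_labels)
# predictions = clf.predict(extract_features(test_frames))
```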

Keywords: computer vision, MediaPipe, adaptive boosting, fast dynamic time warping

Procedia PDF Downloads 123
3218 Cytotoxicity Studies of Sachet Beverages Using Allium Cepa Test

Authors: Ja’Afar Umar, Naziru Salisu

Abstract:

The consumption of powdered or industrialized juices has increased globally due to the fast pace of city life. These foods, with their attractive color, odor, and taste, are easily diluted in water and can lead to obesity, diabetes, hypertension, and cardiovascular problems. In this study, 80 purple-variety onion bulbs were used to evaluate the cytotoxicity of the Tiara (TR) and Bevi mix (BM) beverage powders. The viability of the bulbs was tested using the A. cepa toxicity test. The bulbs were divided into five groups, and root growth was recorded. Root tips were then squashed in a 45% acetic acid solution and examined for chromosomal abnormalities. The chromosomal abnormalities were classified as bridges, c-mitoses, vagrants, fragments, stickiness, bi-nuclei, and multi-polar. The study found that the highest number of dividing cells was in the negative control group, followed by the group treated with the BM beverage. The highest number of aberrant cells was in the group treated with the TR beverage, followed by BM at 5%. Stickiness of cells was observed at the 5% concentration of both the BM and TR beverages. No lagging chromosome was present in the negative control group. The highest mitotic index was in the negative control group, and bridge fragments were observed in the groups treated with the different beverages. This study highlights the usefulness of Allium cepa L. in genotoxic substance testing, revealing chromosomal and mitotic abnormalities in root tip cells. The study also shows that at 5% concentrations root growth decreases, indicating potential damage to Allium cepa's genetic material.

Keywords: cytotoxicity, Allium cepa, beverages, chromosome

Procedia PDF Downloads 18
3217 Optimal and Critical Path Analysis of State Transportation Network Using Neo4J

Authors: Pallavi Bhogaram, Xiaolong Wu, Min He, Onyedikachi Okenwa

Abstract:

A transportation network is a realization of a spatial network, describing a structure which permits either vehicular movement or the flow of some commodity. Examples include road networks, railways, air routes, pipelines, and many more. The transportation network plays a vital role in maintaining the vigor of the nation's economy. Hence, ensuring that the network stays resilient at all times, especially in the face of challenges such as heavy traffic loads and large-scale natural disasters, is of utmost importance. In this paper, we used the Neo4j application to develop the graph. Neo4j is the world's leading open-source NoSQL native graph database, which implements an ACID-compliant transactional backend for applications. The Southern California network model is developed using the Neo4j application, and the most critical and optimal nodes and paths in the network are obtained using centrality algorithms. The edge betweenness centrality algorithm calculates the critical or optimal paths using Yen's k-shortest paths algorithm, and the node betweenness centrality algorithm calculates the amount of influence a node has over the network. The preliminary study results confirm that the Neo4j application can be a suitable tool for studying the important nodes and the critical paths of a major congested metropolitan area.
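
Inside Neo4j these centralities are computed with the Graph Data Science library; as a self-contained illustration of the same ideas, the sketch below uses networkx on a toy weighted road graph to compute edge/node betweenness and to enumerate Yen-style alternative shortest paths. The node names and travel times are invented.

```python
import networkx as nx

# Toy road network: nodes are interchanges, edge weights are travel times (minutes).
G = nx.Graph()
G.add_weighted_edges_from([
    ("LAX", "Downtown", 25), ("Downtown", "Pasadena", 18),
    ("Downtown", "Long Beach", 30), ("LAX", "Long Beach", 22),
    ("Pasadena", "San Bernardino", 55), ("Long Beach", "Anaheim", 20),
    ("Anaheim", "San Bernardino", 45), ("Downtown", "Anaheim", 35),
])

# Betweenness: how many shortest paths cross each road segment / interchange,
# the critical-link and critical-node notion computed in the study.
edge_bc = nx.edge_betweenness_centrality(G, weight="weight")
node_bc = nx.betweenness_centrality(G, weight="weight")

# Yen-style alternatives: shortest_simple_paths enumerates simple paths in
# order of increasing length between an origin and a destination.
k_paths = []
for i, path in enumerate(nx.shortest_simple_paths(G, "LAX", "San Bernardino", weight="weight")):
    if i == 3:
        break
    k_paths.append(path)

print(max(edge_bc, key=edge_bc.get), max(node_bc, key=node_bc.get), k_paths)
```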

Keywords: critical path, transportation network, connectivity reliability, network model, Neo4j application, edge betweenness centrality index

Procedia PDF Downloads 135
3216 Umbrella Reinforcement Learning – A Tool for Hard Problems

Authors: Egor E. Nuzhin, Nikolay V. Brilliantov

Abstract:

We propose an approach for addressing Reinforcement Learning (RL) problems. It combines the idea of umbrella sampling, borrowed from the Monte Carlo techniques of computational physics and chemistry, with optimal control methods, and is realized on the basis of neural networks. This results in a powerful algorithm designed to solve hard RL problems – problems with long-delayed rewards, sticking in state traps, and a lack of terminal states. It outperforms prominent algorithms such as PPO, RND, iLQR and VI, which are among the most efficient for hard problems. The new algorithm deals with a continuous ensemble of agents and an expected return that includes the ensemble entropy. This results in a quick and efficient search for the optimal policy in terms of the 'exploration-exploitation trade-off' in the state-action space.

Keywords: umbrella sampling, reinforcement learning, policy gradient, dynamic programming

Procedia PDF Downloads 26
3215 Detection of High Fructose Corn Syrup in Honey by Near Infrared Spectroscopy and Chemometrics

Authors: Mercedes Bertotto, Marcelo Bello, Hector Goicoechea, Veronica Fusca

Abstract:

The National Service of Agri-Food Health and Quality (SENASA) controls honey to detect contamination by synthetic or natural chemical substances and establishes and controls the traceability of the product. The utility of near-infrared spectroscopy for the detection of adulteration of honey with high fructose corn syrup (HFCS) was investigated. First of all, a mixture of different authentic artisanal Argentinian honeys was prepared to cover as much heterogeneity as possible. Then, mixtures were prepared by adding different concentrations of high fructose corn syrup (HFCS) to samples of the honey pool. In total, 237 samples were used; 108 of them were authentic honey and 129 corresponded to honey adulterated with HFCS at between 1 and 10%. They were stored unrefrigerated from the time of production until scanning and were not filtered after receipt in the laboratory. Immediately prior to spectral collection, the honey was incubated at 40°C overnight to dissolve any crystalline material, manually stirred to achieve homogeneity and adjusted to a standard solids content (70° Brix) with distilled water. Adulterant solutions were also adjusted to 70° Brix. Samples were measured by NIR spectroscopy in the range of 650 to 7000 cm⁻¹. The technique of specular reflectance was used, with a lens aperture range of 150 mm. Pretreatment of the spectra was performed by Standard Normal Variate (SNV). The ant colony optimization genetic algorithm sample selection (ACOGASS) graphical interface, running in MATLAB version 5.3, was used to select the variables with the greatest discriminating power. The data set was divided into a validation set and a calibration set using the Kennard-Stone (KS) algorithm. A method combining Potential Functions (PF) with Partial Least Squares Discriminant Analysis (PLS-DA) was chosen. Different estimators of the predictive capacity of the model were compared; they were obtained using a decreasing number of groups, which implies more demanding validation conditions. The optimal number of latent variables was selected as the number associated with the minimum error and the smallest number of unassigned samples. Once the optimal number of latent variables was defined, the model was applied to the training samples, and the calibrated model was then used to study the validation samples. The calibrated model combining the potential function method with PLS-DA can be considered reliable and stable, since its performance on future samples is expected to be comparable to that achieved for the training samples. By use of Potential Functions (PF) and Partial Least Squares Discriminant Analysis (PLS-DA) classification, authentic honey and honey adulterated with HFCS could be identified with a correct classification rate of 97.9%. The results showed that NIR in combination with the PF and PLS-DA methods can be a simple, fast and low-cost technique for the detection of HFCS in honey with high sensitivity and power of discrimination.
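
The classification stage can be sketched as follows: SNV-pretreated spectra are fed to a PLS regression against 0/1 class codes and thresholded at 0.5, which is a common way to realize PLS-DA with scikit-learn. The synthetic spectra, the shift applied to the adulterated class, the number of latent variables, and the omission of the potential-function step are all assumptions of this illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

def snv(spectra):
    m = spectra.mean(axis=1, keepdims=True)
    s = spectra.std(axis=1, keepdims=True)
    return (spectra - m) / s

# Synthetic stand-ins for NIR spectra: class 0 = authentic honey, 1 = HFCS-adulterated.
rng = np.random.default_rng(0)
X = rng.normal(size=(237, 500))
y = np.concatenate([np.zeros(108), np.ones(129)])
X[y == 1] += 0.05                      # small systematic shift for the adulterated class

X_tr, X_te, y_tr, y_te = train_test_split(snv(X), y, test_size=0.3,
                                          random_state=0, stratify=y)

# PLS-DA realized as PLS regression on class codes, thresholded at 0.5.
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
y_hat = (pls.predict(X_te).ravel() > 0.5).astype(int)
print("correct classification rate:", (y_hat == y_te).mean())
```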

Keywords: adulteration, multivariate analysis, potential functions, regression

Procedia PDF Downloads 127
3214 Optimization of Topology-Aware Job Allocation on a High-Performance Computing Cluster by Neural Simulated Annealing

Authors: Zekang Lan, Yan Xu, Yingkun Huang, Dian Huang, Shengzhong Feng

Abstract:

Jobs on high-performance computing (HPC) clusters can suffer significant performance degradation due to inter-job network interference. The topology-aware job allocation problem (TJAP) is the problem of deciding how to dedicate nodes to specific applications so as to mitigate inter-job network interference. In this paper, we study the window-based TJAP on a fat-tree network, aiming at minimizing the communication hop cost, a defined inter-job interference metric. The window-based approach to scheduling repeats periodically, taking the jobs in the queue and solving an assignment problem that maps jobs to the available nodes. Two special allocation strategies are considered, i.e., the static continuity assignment strategy (SCAS) and the dynamic continuity assignment strategy (DCAS). For the SCAS, a 0-1 integer programming model is developed. For the DCAS, an approach called neural simulated annealing (NSA) is proposed, which is an extension of simulated annealing (SA) that learns a repair operator and employs it in a guided heuristic search. The efficacy of NSA is demonstrated with a computational study against SA and SCIP. The results of the numerical experiments indicate that both the model and the algorithm proposed in this paper are effective.
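
As a plain SA baseline for the assignment step (without the learned repair operator of NSA), the sketch below swaps nodes between queued jobs to reduce a simple hop-cost proxy on a fat-tree; the leaf-spread cost and all parameters are illustrative assumptions.

```python
import math
import random

def hop_cost(assignment, node_pos):
    """Proxy for communication-hop cost: the leaf-index spread of the nodes given
    to each job on a fat-tree (wider spread means traffic crosses more switches)."""
    cost = 0
    for nodes in assignment:
        leaves = sorted(node_pos[n] for n in nodes)
        cost += leaves[-1] - leaves[0]
    return cost

def sa_allocate(job_sizes, free_nodes, node_pos, t0=5.0, cooling=0.995, steps=5000, seed=0):
    rng = random.Random(seed)
    nodes = list(free_nodes)
    rng.shuffle(nodes)
    assignment, i = [], 0
    for need in job_sizes:                        # random initial window assignment
        assignment.append(nodes[i:i + need]); i += need
    best = cur = hop_cost(assignment, node_pos)
    temp = t0
    for _ in range(steps):
        a, b = rng.randrange(len(job_sizes)), rng.randrange(len(job_sizes))
        ia, ib = rng.randrange(len(assignment[a])), rng.randrange(len(assignment[b]))
        assignment[a][ia], assignment[b][ib] = assignment[b][ib], assignment[a][ia]
        new = hop_cost(assignment, node_pos)
        if new <= cur or rng.random() < math.exp((cur - new) / temp):
            cur = new                             # accept: always if better, sometimes if worse
            best = min(best, cur)
        else:                                     # undo the swap
            assignment[a][ia], assignment[b][ib] = assignment[b][ib], assignment[a][ia]
        temp *= cooling
    return assignment, best

# 3 queued jobs needing 4, 2 and 6 nodes on 12 free leaves of a fat-tree.
node_pos = {f"n{k}": k for k in range(12)}
alloc, cost = sa_allocate([4, 2, 6], list(node_pos), node_pos)
print(alloc, cost)
```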

Keywords: high-performance computing, job allocation, neural simulated annealing, topology-aware

Procedia PDF Downloads 121
3213 A Similarity/Dissimilarity Measure to Biological Sequence Alignment

Authors: Muhammad A. Khan, Waseem Shahzad

Abstract:

Analysis of protein sequences is carried out to discover their structural and ancestral relationships. Sequence similarity determines similar protein structures and similar functions and enables homology detection. Biological sequences composed of amino acid residues or nucleotides provide significant information through sequence alignment. In this paper, we present a new similarity/dissimilarity measure for sequence alignment based on the primary structure of a protein. The approach finds the distance between two given sequences using the novel sequence alignment algorithm and a mathematical model. The algorithm runs with a time complexity of O(n²). A distance matrix is generated to construct a phylogenetic tree of different species. The new similarity/dissimilarity measure outperforms other existing methods.
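
The downstream tree-construction step can be sketched with SciPy: given a pairwise distance matrix such as the proposed measure would produce, average-linkage (UPGMA-style) clustering yields a phylogenetic tree. The distance values and species below are placeholders, not results from the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import average, dendrogram
from scipy.spatial.distance import squareform

# Placeholder pairwise distance matrix between 5 species (symmetric, zero diagonal),
# standing in for the output of the alignment-based measure.
species = ["human", "chimp", "mouse", "rat", "chicken"]
D = np.array([
    [0.00, 0.02, 0.35, 0.36, 0.60],
    [0.02, 0.00, 0.34, 0.35, 0.61],
    [0.35, 0.34, 0.00, 0.10, 0.58],
    [0.36, 0.35, 0.10, 0.00, 0.59],
    [0.60, 0.61, 0.58, 0.59, 0.00],
])

# UPGMA-style tree: average-linkage hierarchical clustering on the condensed matrix.
linkage = average(squareform(D))
tree = dendrogram(linkage, labels=species, no_plot=True)
print(tree["ivl"])    # leaf order of the resulting tree
```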

Keywords: alignment, distance, homology, mathematical model, phylogenetic tree

Procedia PDF Downloads 179
3212 DEA-Based Variable Structure Position Control of DC Servo Motor

Authors: Ladan Maijama’a, Jibril D. Jiya, Ejike C. Anene

Abstract:

This paper presents Differential Evolution Algorithm (DEA) based Variable Structure Position Control (VSPC) of a laboratory DC servomotor (LDCSM). DEA is employed for the optimal tuning of the Variable Structure Control (VSC) parameters for position control of a DC servomotor. The VSC combines the techniques of Sliding Mode Control (SMC), which gives the advantages of small overshoot, improved step-response characteristics, faster dynamic response, adaptability to plant parameter variations, and suppressed influence of disturbances and uncertainties on system behavior. The simulations of the VSC parameter adjustment by DEA were performed on the MATLAB R2010a platform and yield better dynamic performance compared with the untuned VSC design.
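
A minimal version of the tuning loop can be sketched with SciPy's differential evolution: candidate sliding-mode gains are scored by an ITAE criterion on the simulated step response of a toy second-order DC-servomotor model. The plant parameters, the smoothed switching law and the gain bounds are illustrative assumptions, not the laboratory setup.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

# Toy DC-servomotor position model: J*theta'' + B*theta' = K*u (illustrative parameters).
J, B, K = 0.01, 0.1, 0.05
REF = 1.0                                                # 1 rad step reference

def step_response_error(gains, t_end=2.0):
    lam, eta = gains                                     # sliding-surface slope, reaching gain
    def dyn(t, x):
        theta, omega = x
        e, edot = REF - theta, -omega
        s = edot + lam * e                               # sliding surface
        u = np.clip(eta * np.tanh(5.0 * s), -10.0, 10.0)  # smoothed switching control
        return [omega, (K * u - B * omega) / J]
    sol = solve_ivp(dyn, (0.0, t_end), [0.0, 0.0], max_step=1e-3)
    return sol.t, REF - sol.y[0]

def itae_cost(gains):
    t, err = step_response_error(gains)
    w = t * np.abs(err)                                  # ITAE integrand
    return float(np.sum(np.diff(t) * (w[1:] + w[:-1]) / 2.0))

result = differential_evolution(itae_cost, bounds=[(1.0, 50.0), (0.5, 10.0)],
                                seed=0, maxiter=20, popsize=10)
print(result.x, result.fun)
```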

Keywords: differential evolution algorithm, laboratory DC servomotor, sliding mode control, variable structure control

Procedia PDF Downloads 417
3211 Scintigraphic Image Coding of Region of Interest Based on SPIHT Algorithm Using Global Thresholding and Huffman Coding

Authors: A. Seddiki, M. Djebbouri, D. Guerchi

Abstract:

Medical imaging produces human body pictures in digital form. Since these imaging techniques produce prohibitive amounts of data, compression is necessary for storage and communication purposes. Many current compression schemes provide a very high compression rate but with considerable loss of quality. On the other hand, in some areas of medicine, it may be sufficient to maintain high image quality only in the region of interest (ROI). This paper discusses a contribution to lossless compression in the region of interest of scintigraphic images based on the SPIHT algorithm and global transform thresholding using Huffman coding.
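
The entropy-coding stage is the easiest piece to illustrate: the sketch below builds a Huffman table over a toy stream of output symbols such as would be emitted after the SPIHT and thresholding passes (which are not reproduced here).

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code table {symbol: bitstring} from a symbol stream."""
    freq = Counter(symbols)
    if len(freq) == 1:                      # degenerate single-symbol stream
        return {next(iter(freq)): "0"}
    heap = [[w, i, [s, ""]] for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)                     # tie-breaker for equal weights
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]         # prepend a bit as the tree grows upward
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], next_id] + lo[2:] + hi[2:])
        next_id += 1
    return {s: code for s, code in heap[0][2:]}

# Toy stream of coder output symbols (e.g. significance / sign / refinement flags):
stream = list("AAAAABBBCCD")
table = huffman_code(stream)
encoded = "".join(table[s] for s in stream)
print(table, len(encoded), "bits vs", len(stream) * 2, "bits fixed-length")
```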

Keywords: global thresholding transform, huffman coding, region of interest, SPIHT coding, scintigraphic images

Procedia PDF Downloads 369
3210 New Test Algorithm to Detect Acute and Chronic HIV Infection Using a 4th Generation Combo Test

Authors: Barun K. De

Abstract:

Acquired immunodeficiency syndrome (AIDS) is caused by two types of human immunodeficiency viruses, collectively designated HIV. HIV infection is spreading globally, particularly in developing countries. Before an individual is diagnosed with HIV, the disease goes through different phases. First there is an acute early phase, which is followed by an established or chronic phase. Subsequently, there is a latency period, after which the individual becomes immunodeficient. It is in the acute phase that an individual is highly infectious due to a high viral load. Presently, HIV diagnosis involves the use of tests that do not detect the acute phase of infection, during which both the viral RNA and the p24 antigen are expressed. Instead, these less sensitive tests detect antibodies to viral antigens, which typically seroconvert later in the disease process following acute infection. These antibodies are detected in both asymptomatic HIV-infected individuals and AIDS patients. Studies indicate that early diagnosis and treatment of HIV infection can reduce medical costs, improve survival, and reduce the spread of infection to uninfected partners. Newer 4th-generation combination antigen/antibody tests are highly sensitive and specific for the detection of acute and established HIV infection (HIV-1 and HIV-2), enabling immediate linkage to care. The CDC (Centers for Disease Control and Prevention, USA) recently recommended an algorithm involving three different tests to screen for and diagnose acute and established infections of HIV-1 and HIV-2 in a general population. Initially, a 4th-generation combo test detects the viral antigen p24 and specific antibodies against HIV-1 and HIV-2 envelope proteins. If the test is positive, it is followed by a second test, known as a differentiation assay, which detects antibodies against specific HIV-1 and HIV-2 envelope proteins, confirming an established infection with HIV-1 or HIV-2. However, if the differentiation assay is negative, another test is performed that measures the viral load, confirming an acute HIV-1 infection. Screening of a Phoenix-area population detected new HIV infections in 0.3% of individuals, among which 32.4% were acute cases. Studies in the U.S. indicate that this algorithm effectively reduces HIV infection through immediate treatment and education following diagnosis.
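
The three-step decision flow described above can be written down directly; the sketch below encodes it as a small function whose result strings are merely illustrative labels.

```python
def hiv_testing_algorithm(combo_reactive, differentiation_result, nat_detects_rna=None):
    """Decision flow of the three-test algorithm described above.

    combo_reactive:          4th-generation antigen/antibody combo test result
    differentiation_result:  'HIV-1', 'HIV-2', or 'negative' from the antibody
                             differentiation assay (run only if the combo test is reactive)
    nat_detects_rna:         viral-load / nucleic acid test result (run only if the
                             differentiation assay is negative)
    """
    if not combo_reactive:
        return "HIV negative (no acute or established infection detected)"
    if differentiation_result in ("HIV-1", "HIV-2"):
        return f"Established {differentiation_result} infection"
    if nat_detects_rna:
        return "Acute HIV-1 infection (antigen/RNA positive, antibodies not yet developed)"
    return "Combo test likely false-positive; no infection confirmed"

print(hiv_testing_algorithm(True, "negative", nat_detects_rna=True))
```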

Keywords: new algorithm, HIV, diagnosis, infection

Procedia PDF Downloads 416
3209 A Carrier Phase High Precision Ranging Theory Based on Frequency Hopping

Authors: Jie Xu, Zengshan Tian, Ze Li

Abstract:

Previous indoor ranging or localization systems achieving high accuracy time of flight (ToF) estimation relied on two key points. One is to do strict time and frequency synchronization between the transmitter and receiver to eliminate equipment asynchronous errors such as carrier frequency offset (CFO), but this is difficult to achieve in a practical communication system. The other one is to extend the total bandwidth of the communication because the accuracy of ToF estimation is proportional to the bandwidth, and the larger the total bandwidth, the higher the accuracy of ToF estimation obtained. For example, ultra-wideband (UWB) technology is implemented based on this theory, but high precision ToF estimation is difficult to achieve in common WiFi or Bluetooth systems with lower bandwidth compared to UWB. Therefore, it is meaningful to study how to achieve high-precision ranging with lower bandwidth when the transmitter and receiver are asynchronous. To tackle the above problems, we propose a two-way channel error elimination theory and a frequency hopping-based carrier phase ranging algorithm to achieve high accuracy ranging under asynchronous conditions. The two-way channel error elimination theory uses the symmetry property of the two-way channel to solve the asynchronous phase error caused by the asynchronous transmitter and receiver, and we also study the effect of the two-way channel generation time difference on the phase according to the characteristics of different hardware devices. The frequency hopping-based carrier phase ranging algorithm uses frequency hopping to extend the equivalent bandwidth and incorporates a carrier phase ranging algorithm with multipath resolution to achieve a ranging accuracy comparable to that of UWB at 400 MHz bandwidth in the typical 80 MHz bandwidth of commercial WiFi. Finally, to verify the validity of the algorithm, we implement this theory using a software radio platform, and the actual experimental results show that the method proposed in this paper has a median ranging error of 5.4 cm in the 5 m range, 7 cm in the 10 m range, and 10.8 cm in the 20 m range for a total bandwidth of 80 MHz.
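
The core relationship behind the frequency-hopping carrier-phase ranging can be illustrated with a synthetic single-path example: after the two-way error elimination, the residual phase at hop frequency f is approximately 2πfτ, so the slope of the unwrapped phase across the hopped frequencies yields the time of flight and hence the range. The hop plan, noise level and single-path assumption below are illustrative; the paper's multipath-resolution step is not reproduced.

```python
import numpy as np

C = 3e8                                   # speed of light, m/s
true_range = 12.4                         # metres, synthetic single-path case
tof = true_range / C

# 80 MHz of spectrum visited in 5 MHz hops (illustrative hop plan around 5.18 GHz).
freqs = 5.18e9 + np.arange(0, 80e6, 5e6)

# After two-way channel error elimination the residual phase at each hop is ~ 2*pi*f*tof.
rng = np.random.default_rng(0)
phase = (2 * np.pi * freqs * tof + rng.normal(0, 0.05, freqs.size)) % (2 * np.pi)

# Unwrap across hops and fit the slope d(phi)/d(f) = 2*pi*tof.
unwrapped = np.unwrap(phase)
slope, _ = np.polyfit(freqs, unwrapped, 1)
est_range = slope / (2 * np.pi) * C
print(f"estimated range: {est_range:.2f} m (true {true_range} m)")
```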

Keywords: frequency hopping, phase error elimination, carrier phase, ranging

Procedia PDF Downloads 125
3208 Multi-Objective Optimization in Carbon Abatement Technology Cycles (CAT) and Related Areas: Survey, Developments and Prospects

Authors: Hameed Rukayat Opeyemi, Pericles Pilidis, Pagone Emanuele

Abstract:

An infinitesimal increase in performance can yield an immense reduction in the operating and capital expenses of a power generation system. Therefore, studies are constantly being carried out to improve both conventional and novel power cycles. Globally, power producers are constantly researching ways to minimize emissions and to collectively reduce the total cost rate of power plants. A substantial number of developmental low-carbon cycle technologies have been suggested and studied; however, they all have their limitations and financial implications. In the area of carbon abatement in power plants, three major objectives conflict: the cost rate of the plant, the power output and the environmental impact, since an increase in one of these parameters directly affects the others. This poses a multi-objective problem. It is paramount to be able to discern the point at which improving one objective affects another; hence the need for a Pareto-based optimization algorithm, which helps to find the points where improving one objective negatively influences another and stops there. The application of a Pareto-based optimization algorithm helps the user/operator/designer make an informed decision. This paper sheds more light on the areas in which multi-objective optimization has been applied to carbon abatement technologies in the last five years, together with developments and prospects.

Keywords: gas turbine, low carbon technology, pareto optimal, multi-objective optimization

Procedia PDF Downloads 792
3207 Computer-Aided Detection of Simultaneous Abdominal Organ CT Images by Iterative Watershed Transform

Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid

Abstract:

Interpretation of medical images benefits from anatomical and physiological priors to optimize computer-aided diagnosis applications. Segmentation of the liver, spleen and kidneys is regarded as a major primary step in the computer-aided diagnosis of abdominal organ diseases. In this paper, a semi-automated method for abdominal organ segmentation from medical image data is presented, based on mathematical morphology. Our proposed method relies on hierarchical segmentation and the watershed algorithm. In our approach, a powerful technique has been designed to suppress over-segmentation, based on the mosaic image and on the computation of the watershed transform. Our algorithm proceeds in two parts. In the first, we seek to improve the quality of the gradient-mosaic image by applying an anisotropic diffusion filter followed by morphological filters. Thereafter, we proceed to the hierarchical segmentation of the liver, spleen and kidney. To validate the proposed segmentation technique, we tested it on several images. Our segmentation approach is evaluated by comparing our results with a manual segmentation performed by an expert. The experimental results are described in the last part of this work.
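
The main stages of the pipeline can be sketched with scikit-image on any grayscale slice: smoothing (a Gaussian is used below as a simple stand-in for the anisotropic diffusion filter), a morphological gradient, marker extraction to suppress over-segmentation, and the marker-controlled watershed. The organ-specific hierarchical merging is not reproduced.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import data, filters, morphology, segmentation

# Stand-in for one abdominal CT slice (any 2-D grayscale array works here).
image = data.coins().astype(float)

# 1) Denoise (Gaussian as a stand-in for anisotropic diffusion), then a
#    morphological (dilation minus erosion) gradient, as in the gradient-mosaic step.
smooth = filters.gaussian(image, sigma=2)
footprint = morphology.disk(3)
gradient = morphology.dilation(smooth, footprint) - morphology.erosion(smooth, footprint)

# 2) Markers from thresholding the smoothed image, to suppress over-segmentation.
markers = ndi.label(smooth > filters.threshold_otsu(smooth))[0]

# 3) Marker-controlled watershed on the gradient image.
labels = segmentation.watershed(gradient, markers, mask=image > 0)
print(labels.max(), "regions")
```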

Keywords: anisotropic diffusion filter, CT images, morphological filter, mosaic image, simultaneous organ segmentation, the watershed algorithm

Procedia PDF Downloads 442