Search results for: reversed roulette wheel selection algorithms.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2544

1494 An Adaptive Memetic Algorithm With Dynamic Population Management for Designing HIV Multidrug Therapies

Authors: Hassan Zarei, Ali Vahidian Kamyad, Sohrab Effati

Abstract:

In this paper, a mathematical model of human immunodeficiency virus (HIV) is utilized and an optimization problem is proposed, with the final goal of implementing an optimal 900-day structured treatment interruption (STI) protocol. Two types of drugs commonly used in highly active antiretroviral therapy (HAART), reverse transcriptase inhibitors (RTI) and protease inhibitors (PI), are considered. In order to solve the proposed optimization problem, an adaptive memetic algorithm with population management (AMAPM) is proposed. The AMAPM uses a distance measure to control the diversity of the population in genotype space, thus preventing stagnation and premature convergence. Moreover, the AMAPM uses a diversity parameter in phenotype space to dynamically set the population size and the number of crossovers during the search process. Three crossover operators simultaneously diversify the population, and the progress of each operator is used to set the number of crossovers of that type per generation. In order to escape local optima and introduce new search directions toward the global optimum, two local searchers assist the evolutionary process. In contrast to traditional memetic algorithms, the activation of these local searchers is not random and depends on the diversity parameters in both genotype space and phenotype space. The capability of the AMAPM in finding optimal solutions is compared with that of three popular metaheuristics.
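The abstract does not give the exact rule for distributing the crossover budget among the three operators, so the following is only a minimal sketch, assuming a progress-proportional (roulette-wheel-style) allocation in which each operator's share grows with the number of improving offspring it produced recently; the function name and the success-count bookkeeping are illustrative, not taken from the paper.

```python
def allocate_crossovers(success_counts, total_crossovers, floor=1):
    """Split a per-generation crossover budget among operators in proportion
    to their recent success counts (hypothetical progress measure)."""
    weights = {op: count + 1e-9 for op, count in success_counts.items()}  # avoid a zero denominator
    total = sum(weights.values())
    return {op: max(floor, round(total_crossovers * w / total))
            for op, w in weights.items()}

# Example: the two-point operator produced the most improving offspring,
# so it receives the largest share of the 30 crossovers in the next generation.
print(allocate_crossovers({"one_point": 3, "two_point": 7, "uniform": 1}, 30))
```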

Keywords: HIV therapy design, memetic algorithms, adaptive algorithms, nonlinear integer programming.

1493 Integrated Drunken Driving Prevention System

Authors: T. Shyam Ramanath, A. Sudharsan, A. Kavitha

Abstract:

Needless to say, a majority of road accidents are due to drunk driving, and there is as yet no effective mechanism to prevent it. Here we have designed an integrated system for this purpose. Alcohol content in the driver's body is detected by means of an infrared breath analyzer placed at the steering wheel. An infrared cell directs infrared energy through the sample, and any unabsorbed energy at the other side is detected. The higher the concentration of ethanol, the more infrared absorption occurs (in much the same way that a sunglass lens absorbs visible light, alcohol absorbs infrared light). The alcohol level of the driver is thus continuously monitored and calibrated on a scale, and when it exceeds a particular limit the fuel supply is cut off. If the device is removed, the fuel supply is likewise cut off automatically, or an alarm is sounded, depending on the requirement. This does not happen abruptly, and special indicators fixed at the rear warn other drivers on the highway to avoid inconvenience. A framework for the integration of the sensors and the control module in a scalable multi-agent system is provided. An SMS containing the current GPS location of the vehicle is sent via a GSM module to the police control room to alert the police. The system is difficult to tamper with, and thus provides an effective and cost-effective solution to the problem of drunk driving.
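As a rough illustration of the control flow described above (threshold check, gradual fuel cut-off, tamper handling and SMS alert), here is a minimal sketch; the sensor and actuator hooks, the threshold value and the polling interval are all hypothetical placeholders, not details from the paper.

```python
import time

ALCOHOL_LIMIT = 0.08      # hypothetical threshold on the calibrated scale
POLL_INTERVAL_S = 5.0

def monitor(read_breath_level, warn_rear, cut_fuel, get_gps_fix, send_sms):
    """Poll the breath analyzer; warn, cut fuel and alert police when the
    limit is exceeded or the sensor appears to have been removed."""
    while True:
        level = read_breath_level()
        if level is None or level > ALCOHOL_LIMIT:   # None = sensor removed/tampered
            warn_rear()                              # gradual, not abrupt, shutdown
            cut_fuel()
            send_sms(f"Drunk-driving alert at {get_gps_fix()}")
        time.sleep(POLL_INTERVAL_S)
```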

Keywords: Global system monitoring, global positioning system.

1492 A Growing Neural Gas Approach for Evaluating Quality of Software Modules

Authors: Parvinder S. Sandhu, Sandeep Khimta, Kiranpreet Kaur

Abstract:

Predicting software quality during the development life cycle of a software project helps the development organization make efficient use of available resources to produce a product of the highest quality. A "whether a module is faulty or not" approach can be used to predict the quality of a software module. A number of software quality prediction models based on genetic algorithms, artificial neural networks and other data mining algorithms are described in the literature. One of the promising approaches to quality prediction is based on clustering techniques. Most quality prediction models based on clustering use the K-means, Mixture-of-Gaussians, Self-Organizing Map, Neural Gas or fuzzy K-means algorithm. All of these techniques require a predefined structure, i.e., the number of neurons or clusters must be known before the clustering process starts. With Growing Neural Gas, in contrast, there is no need to predetermine the number of neurons or the topology of the structure: it starts with a minimal neuron structure that grows during training until it reaches a user-defined upper limit on the number of clusters. Hence, in this work Growing Neural Gas is used as the underlying clustering algorithm: it produces an initial set of labeled clusters from the training data set, and this set of clusters is then used to predict the quality of the software modules in the test data set. The best testing results show 80% accuracy in evaluating the quality of software modules. Hence, the proposed technique can be used by programmers to evaluate the quality of modules during software development.
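The Growing Neural Gas training itself is not detailed in the abstract; the sketch below only illustrates the final step it describes, in which labeled prototypes (assumed here to come from a GNG or any other prototype-based clusterer) are used to predict whether a test module is faulty. The majority-vote labeling rule is an assumption for illustration.

```python
import numpy as np

def label_prototypes(prototypes, X_train, y_train):
    """Give each prototype the majority fault label (0/1) of the training
    modules closest to it."""
    assign = np.argmin(np.linalg.norm(X_train[:, None] - prototypes[None], axis=2), axis=1)
    labels = np.zeros(len(prototypes), dtype=int)
    for k in range(len(prototypes)):
        members = y_train[assign == k]
        labels[k] = int(round(members.mean())) if len(members) else 0
    return labels

def predict_quality(prototypes, proto_labels, X_test):
    """Predict module quality from the label of the nearest prototype."""
    nearest = np.argmin(np.linalg.norm(X_test[:, None] - prototypes[None], axis=2), axis=1)
    return proto_labels[nearest]
```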

Keywords: Growing Neural Gas, data clustering, fault prediction.

1491 The Effects of Weather Anomalies on the Quantitative and Qualitative Parameters of Maize Hybrids of Different Genetic Traits in Hungary

Authors: Zs. J. Becze, Á. Krivián, M. Sárvári

Abstract:

Hybrid selection and the application of hybrid-specific production technologies are important for increasing the yield and crop safety of maize. The main reason for this is climate change, since weather extremes are occurring and seem to be accelerating in Hungary as well.

The biological basis, i.e., the selection of appropriate hybrids, will be of greater importance in the future, and the adaptability of hybrids will be considerably more appreciated. Good agronomic traits and stress tolerance against climatic factors and agrotechnical elements (e.g. different types of herbicides) will be important. There have been examples of 3-4 consecutive droughty years in the past decades, e.g. 1992-1993-1994 or 2009-2011-2012, which made the results of crop production critical. Irrigation cannot solve the problem, since currently only 2% of the arable land is irrigated. Temperatures exceeding the multi-year average are characteristic mainly of July and August in Hungary; they significantly increase soil surface evaporation and thus further aggravate the water shortage. In terms of the yield and crop safety of maize, the weather of these two months is crucial, since extremely high temperatures in July decrease the viability of the pollen and pistils of maize, reduce the extent of fertilization and slow grain filling. Consequently, yield and crop safety decrease.

Keywords: Abiotic factors, drought, nutrition content, yield.

1490 Image Ranking to Assist Object Labeling for Training Detection Models

Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman

Abstract:

Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation where a person would proceed sequentially through a list of images labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on the U-shaped architecture, which quantifies the presence of unseen data in each image in order to find images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data, curated by this algorithm, resulted in a model with a better performance than a model produced from sequentially labeling the same amount of data. Also, similar performance is achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.
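A minimal sketch of the iterative select-and-label loop described above; the novelty-scoring network (a U-shaped model in the paper), the annotation function, the batch size and the number of rounds are all assumed callables and parameters supplied by the surrounding pipeline, not details taken from the paper.

```python
def active_labeling_loop(pool, novelty_score, annotate, train, rounds=5, batch=50):
    """Iteratively label the images the current model finds most novel.

    pool: image identifiers (e.g. file paths);
    novelty_score(model, image) -> scalar amount of unseen content (assumed);
    annotate(images) -> human-provided labels; train(labeled) -> new model.
    """
    labeled, model = {}, None
    pool = list(pool)
    for _ in range(rounds):
        if model is not None:                      # after the initial seed round,
            pool.sort(key=lambda im: novelty_score(model, im), reverse=True)
        picked = pool[:batch]                      # take the most novel images
        for image, label in zip(picked, annotate(picked)):
            labeled[image] = label
        pool = pool[batch:]
        model = train(labeled)                     # retrain the detector on the grown set
    return model, labeled
```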

Keywords: Computer vision, deep learning, object detection, semiconductor.

1489 Characterisation of Wind-Driven Ventilation in Complex Terrain Conditions

Authors: Daniel Micallef, Damien Bounaudet, Robert N. Farrugia, Simon P. Borg, Vincent Buhagiar, Tonio Sant

Abstract:

The physical effects of upstream flow obstructions such as vegetation on the cross-ventilation phenomena of a building are important for issues such as indoor thermal comfort. Modelling such effects in Computational Fluid Dynamics simulations may also be challenging. The aim of this work is to establish the cross-ventilation jet behaviour in such complex terrain conditions, as well as to provide guidelines on the implementation of CFD numerical simulations in order to model complex terrain features such as vegetation in an efficient manner. The methodology consists of on-site measurements on a test cell coupled with numerical simulations. It was found that the cross-ventilation flow is highly turbulent despite the very low velocities encountered within the test cells. While no direct measurement of the jet direction was made, the measurements indicate that the flow tends to be reversed from the leeward to the windward side. Modelling such a phenomenon proves challenging and is strongly influenced by how the vegetation is modelled. A solid vegetation model tends to predict the direction and magnitude of the flow better than a porous vegetation approach. A simplified terrain model was also shown to compare well with observations. The findings have important implications for the study of cross-ventilation in complex terrain conditions, since the flow direction does not remain trivial, as it does in the traditional isolated building case.

Keywords: Complex terrain, cross-ventilation, wind driven ventilation, Computational Fluid Dynamics (CFD), wind resource.

1488 Iterative Methods for An Inverse Problem

Authors: Minghui Wang, Shanrui Hu

Abstract:

An inverse problem involving doubly centered matrices is discussed. By translating the constrained problem into an unconstrained one, two iterative methods are proposed. A numerical example illustrates the algorithms.
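For context, a doubly centered matrix is commonly understood as one whose row means and column means are all zero; the abstract does not define the term, so the following projection sketch rests on that assumed definition.

```python
import numpy as np

def double_center(A):
    """Project A onto the subspace of doubly centered matrices, i.e. matrices
    whose row means and column means all vanish (assumed definition)."""
    return (A
            - A.mean(axis=1, keepdims=True)   # remove row means
            - A.mean(axis=0, keepdims=True)   # remove column means
            + A.mean())                       # add back the grand mean

A = np.random.rand(4, 5)
B = double_center(A)
print(np.allclose(B.mean(axis=0), 0), np.allclose(B.mean(axis=1), 0))   # True True
```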

Keywords: Doubly centered matrix, electric network theory, iterative methods, least-squares problem.

1487 A Study on Algorithm Fusion for Recognition and Tracking of Moving Robot

Authors: Jungho Choi, Youngwan Cho

Abstract:

This paper presents an algorithm for the recognition and tracking of moving objects; a 1/10-scale model car is used to verify its performance. The presented algorithm merges the SURF algorithm with the Lucas-Kanade algorithm. SURF is robust to changes in contrast, size and rotation and can recognize objects, but it is slow due to its computational complexity. The Lucas-Kanade algorithm is fast but cannot recognize objects; its optical flow compares the previous and current frames so that the movement of a pixel can be tracked. A Kalman filter is used to complement the problems that occur when fusing the two algorithms: it estimates the next location and compensates for the accumulated error. The resolution of the camera (vision sensor) is fixed at 640x480. To verify the performance of the fusion algorithm, it is compared with the SURF algorithm alone in three situations: driving straight, driving on a curve, and recognizing cars behind obstacles. Situations similar to real driving can be reproduced with the model vehicle. The proposed fusion algorithm showed better performance and accuracy than existing object recognition and tracking algorithms. Future work will improve the algorithm so that it can be tested on images of actual road environments.
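A minimal OpenCV sketch of the fusion idea (feature keypoints propagated by Lucas-Kanade optical flow, with the object centre smoothed by a constant-velocity Kalman filter). The noise covariances and state model are illustrative choices, not the paper's; SURF itself requires the opencv-contrib build, and ORB can be swapped in where it is unavailable.

```python
import cv2
import numpy as np

def make_kalman():
    """Constant-velocity Kalman filter over (x, y, vx, vy)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                    [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2      # illustrative tuning
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    return kf

def track_step(prev_gray, gray, keypoints, kf):
    """Propagate keypoints with Lucas-Kanade flow and return the Kalman-smoothed
    object centre for the current frame."""
    pts = np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    centre = nxt[status.ravel() == 1].reshape(-1, 2).mean(axis=0)
    kf.predict()
    state = kf.correct(centre.astype(np.float32).reshape(2, 1))
    return state[:2].ravel()

# detector = cv2.xfeatures2d.SURF_create()   # needs opencv-contrib; cv2.ORB_create() also works
```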

Keywords: SURF, Optical Flow Lucas-Kanade, Kalman Filter, object recognition, object tracking.

1486 Molecular Dynamics and Circular Dichroism Studies on Aurein 1.2 and Retro Analog

Authors: Safyeh Soufian, Hoosein Naderi-Manesh, Abdoali Alizadeh, Mohammad Nabi Sarbolouki

Abstract:

Aurein 1.2 is a 13-residue amphipathic peptide with antibacterial and anticancer activity. Aurein 1.2 and its retro analog were synthesized to study the activity of the peptides in relation to their structure. The antibacterial test results showed that the retro analog is inactive. Secondary structural analysis by CD spectra indicated that both peptides adopt an alpha-helical conformation in TFE/water. MD simulations were performed on aurein 1.2 and the retro analog in water and TFE in order to analyse the factors involved in the activity difference between the retro and the native peptide. The simulation results are discussed and validated in the light of the experimental data from the CD experiment. Both peptides showed a relatively similar pattern in their hydrophobicity, hydrophilicity, solvent accessible surfaces, and solvent accessible hydrophobic surfaces. However, they differed in the direction of their dipole moments. Our results further indicate that reversing the amino acid sequence affects flexibility, and the data also show that factors causing structural rigidity may decrease the activity. Consequently, our findings suggest that, when following a sequence-reversed peptide strategy, one has to pay attention to the role of amino acid sequence order in conferring flexibility and to the role of dipole moment direction in peptide activity.

Keywords: Antimicrobial peptides, retro, molecular dynamic, circular dichroism.

1485 NANCY: Combining Adversarial Networks with Cycle-Consistency for Robust Multi-Modal Image Registration

Authors: Mirjana Ruppel, Rajendra Persad, Amit Bahl, Sanja Dogramadzi, Chris Melhuish, Lyndon Smith

Abstract:

Multimodal image registration is a profoundly complex task, which is why deep learning has been used widely to address it in recent years. However, two main challenges remain: firstly, the lack of ground truth data calls for an unsupervised learning approach, which leads to the second challenge of defining a feasible loss function that can compare two images of different modalities to judge their level of alignment. To avoid this issue altogether, we implement a generative adversarial network consisting of two registration networks G_AB, G_BA and two discrimination networks D_A, D_B connected by spatial transformation layers. G_AB learns to generate a deformation field which registers an image of modality B to an image of modality A. To do that, it uses the feedback of the discriminator D_B, which is learning to judge the quality of alignment of the registered image B. G_BA and D_A learn a mapping from modality A to modality B. Additionally, a cycle-consistency loss is implemented. For this, both registration networks are employed twice, resulting in images Â, B̂ which were registered to B̃, Ã, which in turn were registered to the initial image pair A, B. Thus the resulting and initial images of the same modality can be easily compared. A dataset of liver CT and MRI was used to evaluate the quality of our approach and to compare it against learning-based and non-learning-based registration algorithms. Our approach leads to Dice scores of up to 0.80 ± 0.01 and is therefore comparable to, and slightly more successful than, algorithms like SimpleElastix and VoxelMorph.
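Purely as an illustration of the cycle-consistency idea (compose the two registrations and compare the round-trip result with the original image), here is a sketch in PyTorch-style Python; register_AB(fixed, moving) is assumed to wrap a registration network plus spatial transformer and to return the moving image resampled into the space of the fixed one. The exact pairing of networks and images in the paper differs in detail, so treat this only as the generic loss structure.

```python
import torch.nn.functional as F

def cycle_consistency_loss(register_AB, register_BA, A, B):
    """Round-trip (cycle) loss: registering an image forward and then back
    should approximately reproduce the original image."""
    B_in_A = register_AB(A, B)           # B registered onto A
    A_in_B = register_BA(B, A)           # A registered onto B
    B_cycle = register_BA(B, B_in_A)     # ... and brought back to B's space
    A_cycle = register_AB(A, A_in_B)     # ... and brought back to A's space
    return F.l1_loss(A_cycle, A) + F.l1_loss(B_cycle, B)
```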

Keywords: Multimodal image registration, GAN, cycle consistency, deep learning.

1484 Performance Evaluation of Parallel Surface Modeling and Generation on Actual and Virtual Multicore Systems

Authors: Nyeng P. Gyang

Abstract:

Even though past, current and future trends suggest that multicore and cloud computing systems are increasingly ubiquitous, this class of parallel systems is nonetheless underutilized in general, and barely used for research on employing parallel Delaunay triangulation for parallel surface modeling and generation in particular. The performance of actual (physical) and virtual (cloud) multicore systems at executing various algorithms, which implement various parallelization strategies of the incremental insertion technique of the Delaunay triangulation algorithm, was evaluated. T-tests were run on the data collected in order to determine whether differences in various performance metrics (including execution time, speedup and efficiency) were statistically significant. Results show that the actual machine is approximately twice as fast as the virtual machine at executing the same programs for the various parallelization strategies. Results, which furnish the scalability behaviors of the various parallelization strategies, also show that some of the differences between the performances of these systems, during different runs of the algorithms, were statistically significant. A few pseudo-superlinear speedup results, which were computed from the raw data collected, are not true superlinear speedup values. These pseudo-superlinear speedups, which arise from one way of computing speedups, disappear and give way to asymmetric speedups, which are the kind of speedups that actually occur in the experiments performed.
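A small sketch of the kind of comparison described: computing speedup and efficiency from repeated timings and testing whether the physical-versus-virtual difference is significant with Welch's t-test. The run times below are hypothetical placeholders, not data from the paper.

```python
import numpy as np
from scipy import stats

def speedup_efficiency(serial_times, parallel_times, cores):
    """Speedup = serial/parallel time; efficiency = speedup per core."""
    speedup = np.asarray(serial_times, float) / np.asarray(parallel_times, float)
    return speedup, speedup / cores

# Hypothetical execution times (seconds) of one strategy on both machines.
physical = np.array([12.1, 11.8, 12.4, 12.0, 12.2])
virtual = np.array([23.9, 24.5, 23.6, 24.1, 24.8])
t_stat, p_value = stats.ttest_ind(physical, virtual, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> significant difference
```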

Keywords: Cloud computing systems, multicore systems, parallel delaunay triangulation, parallel surface modeling and generation.

1483 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis

Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya

Abstract:

In this study, our goal was to perform tumor staging and subtype determination automatically using different texture analysis approaches for a very common cancer type, i.e., non-small cell lung carcinoma (NSCLC). In particular, we introduced a texture analysis approach, Laws' texture filters, to be used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated. The number of patients for each tumor stage, i.e., I-II, III or IV, was 14. The patients had ~45% adenocarcinoma (ADC) and ~55% squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed in the extraction of 51 features using first order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws' texture filters. The feature selection method employed was sequential forward selection (SFS). Selected textural features were used in automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with the SVM (one-versus-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with the one-vs-one SVM. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and in the automatic classification of tumor stage and subtype.
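A compact scikit-image/scikit-learn sketch of the same kind of pipeline (GLCM texture descriptors, sequential forward selection, SVM with cross-validation); the paper used MATLAB and a larger 51-feature set, so the feature list, kernel and fold count here are illustrative only.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # 'greyco...' in older scikit-image
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def glcm_features(img_u8):
    """A few GLCM descriptors for one quantized (uint8) PET tumor region."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")]

def sfs_svm_accuracy(X, y, n_features=5):
    """Sequential forward selection of textural features followed by an SVM."""
    svm = SVC(kernel="rbf", decision_function_shape="ovo")
    sfs = SequentialFeatureSelector(svm, n_features_to_select=n_features, direction="forward")
    return cross_val_score(svm, sfs.fit_transform(X, y), y, cv=5).mean()
```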

Keywords: Cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis.

1482 Analyzing the Effect of Materials’ Selection on Energy Saving and Carbon Footprint: A Case Study Simulation of Concrete Structure Building

Authors: M. Kouhirostamkolaei, M. Kouhirostami, M. Sam, J. Woo, A. T. Asutosh, J. Li, C. Kibert

Abstract:

Construction is one of the most energy-consuming activities in the urban environment and results in a significant amount of greenhouse gas emissions around the world; the impact of the construction industry on global warming is therefore undeniable. Reducing building energy consumption and mitigating carbon production can slow the rate of global warming. The purpose of this study is to determine the amount of energy consumption and carbon dioxide production during the operation phase and the impact of using new shells on energy saving and carbon footprint. A residential building with a reinforced concrete structure was selected in Babolsar, Iran. DesignBuilder software was used to simulate one year of building operation and calculate the amount of carbon dioxide production and energy consumption in the operation phase of the building. The primary results show that the building uses 61,750 kWh of energy each year. The computer simulation analyzes the effect of changing the building shell (using XPS polystyrene and new electrochromic windows) as well as changing the type of lighting on the reduction of energy consumption and the subsequent carbon dioxide production. The results show that the amount of energy and carbon production during building operation is reduced by approximately 70% by applying the proposed changes, bringing emissions down to 11,345 kg CO2e/yr. The results of this study help designers and engineers consider the material selection process as one of the most important stages of design for improving the energy performance of buildings.

Keywords: Construction materials, green construction, energy simulation, carbon footprint, energy saving, concrete structure, DesignBuilder.

1481 Methods and Algorithms of Ensuring Data Privacy in AI-Based Healthcare Systems and Technologies

Authors: Omar Farshad Jeelani, Makaire Njie, Viktoriia M. Korzhuk

Abstract:

Recently, the application of AI-powered algorithms in healthcare has continued to flourish. In particular, access to healthcare information, including patient health history, diagnostic data, and PII (Personally Identifiable Information), is paramount in the delivery of efficient patient outcomes. However, as the exchange of healthcare information between patients and healthcare providers through AI-powered solutions increases, protecting a person's information and privacy has become even more important. Arguably, the increased adoption of healthcare AI has concentrated significant attention on the risks to the security and privacy of healthcare data and on the corresponding protection measures, leading to intensified analysis and enforcement. Since these challenges arise from the use of AI-based healthcare solutions to manage healthcare data, AI-based data protection measures are used to resolve the underlying problems. Consequently, such projects propose AI-powered safeguards and policies/laws to protect the privacy of healthcare data. This paper presents the leading techniques used to preserve the data privacy of AI-powered healthcare applications. Popular privacy-protecting methods such as federated learning, cryptographic techniques, differential privacy, and hybrid methods are discussed, together with potential cyber threats, data security concerns, and prospects. The paper also discusses some of the relevant data security acts/laws that govern the collection, storage, and processing of healthcare data to guarantee that owners' privacy is preserved. This inquiry discusses various gaps and uncertainties associated with healthcare AI data collection procedures and identifies potential correction/mitigation measures.
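Of the privacy-preserving methods listed above, differential privacy is the easiest to show in a few lines; the sketch below adds Laplace noise to a statistic before release, with the count, sensitivity and privacy budget chosen purely for illustration.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a noisy statistic satisfying epsilon-differential privacy."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

# e.g. a count query over patient records: sensitivity 1, privacy budget 0.5
print(laplace_mechanism(true_value=1284, sensitivity=1, epsilon=0.5))
```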

Keywords: Data privacy, artificial intelligence, healthcare AI, data sharing, healthcare organizations.

1480 Resource Matching and a Matchmaking Service for an Intelligent Grid

Authors: Xin Bai, Han Yu, Yongchang Ji, Dan C. Marinescu

Abstract:

We discuss the application of matching in the area of resource discovery and resource allocation in grid computing. We present a formal definition of matchmaking, overview algorithms to evaluate different matchmaking expressions, and develop a matchmaking service for an intelligent grid environment.
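As a toy illustration of the matchmaking idea (checking a resource request against advertised resource attributes), here is a deliberately simplified sketch; the formal matchmaking expressions and ontology support described in the paper go well beyond this attribute comparison.

```python
def matches(request, resource):
    """A resource satisfies a request if every requested attribute meets or
    exceeds the requested amount (simplified matchmaking rule)."""
    return all(resource.get(attr, 0) >= need for attr, need in request.items())

resources = [{"name": "nodeA", "cpus": 8, "memory_gb": 16},
             {"name": "nodeB", "cpus": 2, "memory_gb": 4}]
request = {"cpus": 4, "memory_gb": 8}
print([r["name"] for r in resources if matches(request, r)])   # -> ['nodeA']
```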

Keywords: Grid, Matchmaking, Ontology

1479 Artifacts in Spiral X-ray CT Scanners: Problems and Solutions

Authors: Mehran Yazdi, Luc Beaulieu

Abstract:

Artifacts are among the most important factors degrading CT image quality and play an important role in diagnostic accuracy. In this paper, some artifacts that typically appear in spiral CT are introduced. The different factors that cause these artifacts, such as the patient, the equipment and the interpolation algorithm, are discussed, and new developments and image processing algorithms to prevent or reduce them are presented.

Keywords: CT artifacts, Spiral CT, Artifact removal.

1478 Assessment of Downy mildew Resistance (Peronospora farinosa) in a Quinoa (Chenopodium quinoa Willd.) Germplasm

Authors: Manal Mhada, Brahim Ezzahiri, Ouafae Benlhabib

Abstract:

Seventy-nine accessions, including two local wild species (Chenopodium album and C. murale) and several cultivated quinoa lines developed through recurrent selection in Morocco, were screened for their resistance against Peronospora farinosa, the causal agent of downy mildew disease. The method of artificial inoculation on detached healthy leaves taken from the middle stage of the plant was used. The screened accessions showed different levels of quantitative resistance to downy mildew, as scored through the calculation of their area under the disease progress curve (AUDPC) and their two resistance components, the incubation period and the latent period. Significant differences were found between accessions for the three criteria (incubation period, latent period and area under the disease progress curve). Accessions M2a and S938/1 were ranked resistant as they showed the longest incubation period (7 days) and latent period (12 days) and the lowest area under the disease progress curve (4). M24 was the most susceptible accession, as it presented the highest area under the disease progress curve (34.5) and the shortest incubation period (1 day) and latent period (3 days). In parallel to this evaluation approach, the resistance of the accessions was confirmed under field conditions through natural infection using the tree-leaf method. The high correlation found between the detached leaf inoculation method and field screening under natural infection allows this laboratory technique to be used with confidence in further selection work.
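The AUDPC used to rank accessions is the standard trapezoidal integral of disease severity over the assessment dates; a minimal sketch follows, with the dates and severities below being hypothetical values rather than data from the study.

```python
import numpy as np

def audpc(days, severity):
    """Area under the disease progress curve via the trapezoidal rule."""
    d = np.asarray(days, float)
    s = np.asarray(severity, float)
    return float(np.sum((s[1:] + s[:-1]) / 2.0 * np.diff(d)))

# Hypothetical severity scores of one accession over five assessment dates.
print(audpc(days=[0, 3, 6, 9, 12], severity=[0, 5, 15, 30, 45]))
```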

Keywords: Detached leaf inoculation, Downy mildew, Field screening, Quinoa.

1477 Grading and Sequencing Tasks in Task-Based Syllabus: A Critical Look at Criterion Selection

Authors: Hossein Ahmadi, Ogholgol Nazari

Abstract:

The necessity of grading and sequencing tasks has led to the development of different criteria for doing so. However, the appropriateness of these criteria in different situations is less often discussed. This paper attempts to shed more light on the priority of different criteria in relation to various factors, including learner, teacher, educational, and cultural factors.

Keywords: Criteria, Grading, Sequencing, Language learning tasks.

1476 Comparative Study Using Weka for Red Blood Cells Classification

Authors: Jameela Ali Alkrimi, Hamid A. Jalab, Loay E. George, Abdul Rahim Ahmad, Azizah Suliman, Karim Al-Jashamy

Abstract:

Red blood cells (RBCs) are the most common type of blood cell and are the most intensively studied in cell biology. A lack of RBCs is a condition in which the hemoglobin level is lower than normal and is referred to as "anemia". Abnormalities in RBCs will affect the exchange of oxygen. This paper presents a comparative study of various techniques for classifying RBCs as normal or abnormal (anemic) using WEKA, an open-source collection of machine learning algorithms for data mining applications. The algorithms tested are the Radial Basis Function neural network, the support vector machine, and the k-nearest neighbors algorithm. Two sets of combined features were utilized for the classification of blood cell images. The first set, consisting exclusively of geometrical features, was used to identify whether the tested blood cell is spherical or non-spherical, while the second set, consisting mainly of textural features, was used to recognize the types of the spherical cells. We provide an evaluation based on applying these classification methods to our RBC image dataset, obtained from Serdang Hospital, Malaysia, and measuring the accuracy of the test results. The best achieved classification rates are 97%, 98%, and 79% for the support vector machine, the Radial Basis Function neural network, and the k-nearest neighbors algorithm, respectively.
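The study itself runs inside WEKA; the sketch below only mirrors the comparison with rough scikit-learn stand-ins, where an MLP is used in place of WEKA's RBF network (which has no direct scikit-learn equivalent), so the scores it produces are not comparable to the paper's.

```python
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def compare_classifiers(X, y):
    """Cross-validated accuracy of three classifiers on RBC feature vectors."""
    models = {
        "SVM": SVC(kernel="rbf"),
        "RBF-like network": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000),  # stand-in
        "k-NN": KNeighborsClassifier(n_neighbors=5),
    }
    return {name: cross_val_score(make_pipeline(StandardScaler(), model), X, y, cv=5).mean()
            for name, model in models.items()}
```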

Keywords: K-Nearest Neighbors, Neural Network, Radial Basis Function, Red blood cells, Support vector machine.

1475 Prediction Modeling of Alzheimer’s Disease and Its Prodromal Stages from Multimodal Data with Missing Values

Authors: M. Aghili, S. Tabarestani, C. Freytes, M. Shojaie, M. Cabrerizo, A. Barreto, N. Rishe, R. E. Curiel, D. Loewenstein, R. Duara, M. Adjouadi

Abstract:

A major challenge in medical studies, especially longitudinal ones, is the problem of missing measurements, which hinders the effective application of many machine learning algorithms. Furthermore, recent Alzheimer's disease studies have focused on the delineation of Early Mild Cognitive Impairment (EMCI) and Late Mild Cognitive Impairment (LMCI) from cognitively normal controls (CN), which is essential for developing effective and early treatment methods. To address the aforementioned challenges, this paper explores the potential of the eXtreme Gradient Boosting (XGBoost) algorithm for handling missing values in multiclass classification. We seek a generalized classification scheme where all prodromal stages of the disease are considered simultaneously in the classification and decision-making processes. Given the large number of subjects (1631) included in this study and the presence of almost 28% missing values, we investigated the performance of XGBoost on the classification of the four classes AD, CN, EMCI, and LMCI. Using a 10-fold cross-validation technique, XGBoost is shown to outperform other state-of-the-art classification algorithms by 3% in terms of accuracy and F-score. Our model achieved an accuracy of 80.52%, a precision of 80.62% and a recall of 80.51%, supporting the more natural and promising multiclass classification.
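A minimal sketch of the evaluation described above using the xgboost scikit-learn wrapper; the hyperparameters are assumptions for illustration, and NaN entries in the feature matrix are handled natively by the tree learner, as exploited in the paper.

```python
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

def evaluate_xgboost(X, y):
    """Multiclass XGBoost scored with 10-fold cross-validation; np.nan entries
    in X are left in place for the learner's native missing-value handling.
    Hyperparameters below are illustrative, not the paper's settings."""
    model = XGBClassifier(objective="multi:softprob", n_estimators=300,
                          max_depth=4, learning_rate=0.1)
    return cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()

# y is expected to encode the four classes CN, EMCI, LMCI, AD as integers 0..3.
```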

Keywords: eXtreme Gradient Boosting, missing data, Alzheimer disease, early mild cognitive impairment, late mild cognitive impairment, multiclass classification, ADNI, support vector machine, random forest.

1474 An Optimal Unsupervised Satellite image Segmentation Approach Based on Pearson System and k-Means Clustering Algorithm Initialization

Authors: Ahmed Rekik, Mourad Zribi, Ahmed Ben Hamida, Mohamed Benjelloun

Abstract:

This paper presents an optimal and unsupervised satellite image segmentation approach based on the Pearson system and k-means clustering algorithm initialization. The method can be considered original in that it utilises the k-means clustering algorithm for an optimal initialisation of the number of image classes on the one hand, and exploits the Pearson system for an optimal assignment of statistical distributions to each considered class on the other. Satellite image exploitation requires the use of different approaches, especially those founded on the unsupervised statistical segmentation principle. Such approaches necessitate the definition of several parameters, such as the number of image classes, the estimation of class variables and generalised mixture distributions. The use of statistical image attributes gives convincing and promising results, provided the initialisation step is optimal and appropriate statistical distributions are assigned. The Pearson system, associated with a k-means clustering algorithm and the Stochastic Expectation-Maximization (SEM) algorithm, can be adapted to this problem. For each image class, the Pearson system assigns one distribution type according to different parameters, especially the skewness (β1) and the kurtosis (β2). The adapted algorithms (the k-means clustering algorithm, the SEM algorithm and the Pearson system algorithm) are then applied to the satellite image segmentation problem. The efficiency of these combined algorithms was validated firstly with the Mean Quadratic Error (MQE) and secondly with visual inspection through several comparisons of the unsupervised image segmentations.
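A small sketch of the initialisation step described above: a k-means pass assigns pixels to classes, and the per-class skewness (β1 = skew²) and kurtosis (β2) that the Pearson system uses to choose a distribution family are computed. The mapping from (β1, β2) to a Pearson type, and the SEM refinement, are omitted here.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.cluster import KMeans

def kmeans_init_with_shape_stats(pixels, n_classes):
    """K-means initialisation of the class map plus the per-class moments
    (beta1, beta2) used by the Pearson system to pick a distribution type."""
    flat = np.asarray(pixels, float).reshape(-1, 1)
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(flat)
    stats = {}
    for k in range(n_classes):
        values = flat[labels == k, 0]
        stats[k] = {"beta1": skew(values) ** 2,
                    "beta2": kurtosis(values, fisher=False)}
    return labels, stats
```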

Keywords: Unsupervised classification, Pearson system, Satellite image, Segmentation.

1473 Similarity Measures and Weighted Fuzzy C-Mean Clustering Algorithm

Authors: Bainian Li, Kongsheng Zhang, Jian Xu

Abstract:

In this paper we study the fuzzy c-means clustering algorithm combined with the principal components method. Demonstrative analysis indicates that the new clustering method performs better than several existing clustering algorithms. We also consider the validity of the clustering method.
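For reference, here is a plain fuzzy c-means iteration (the standard membership and centre updates); the paper's additions, namely the principal components preprocessing and the feature weighting, are not included, so this only shows the base algorithm being extended.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, eps=1e-9):
    """Standard fuzzy c-means on an (n_samples, n_features) array X."""
    U = np.random.dirichlet(np.ones(c), size=len(X))            # fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]          # weighted centroids
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + eps
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)                # membership update
    return centers, U
```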

Keywords: FCM algorithm, Principal Components Analysis, Cluster validity.

1472 Elliptical Features Extraction Using Eigen Values of Covariance Matrices, Hough Transform and Raster Scan Algorithms

Authors: J. Prakash, K. Rajesh

Abstract:

In this paper, we introduce a new method for elliptical object identification. The proposed method adopts a hybrid scheme which consists of the eigenvalues of covariance matrices, the circular Hough transform and Bresenham's raster scan algorithm. In this approach we use the fact that the large and small eigenvalues of the covariance matrix are associated with the major and minor axial lengths of the ellipse. The centre location of the ellipse can be identified using the circular Hough transform (CHT). A sparse matrix technique is used to perform the CHT: since sparse matrices squeeze out zero elements and contain only a small number of nonzero elements, they provide advantages in storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate position of the circumference pixels is identified using a raster scan algorithm which exploits the geometrical symmetry property. This method does not require the evaluation of tangents or the curvature of edge contours, which are generally very sensitive to noisy working conditions. The proposed method has the advantages of small storage, high speed and accuracy in identifying the feature. The new method has been tested on both synthetic and real images. Several experiments have been conducted on various images with considerable background noise to reveal its efficacy and robustness. Experimental results on the accuracy of the proposed method, and comparisons with the Hough transform, its variants and other tangent-based methods, are reported.
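The eigenvalue-to-axis relation mentioned above can be sketched directly: for edge points spread roughly uniformly in angle along the boundary, the covariance eigenvalues λ give semi-axes of about sqrt(2λ), with the eigenvectors as axis directions. The quick check below uses a synthetic ellipse; the constant and the sampling assumption are ours, not stated in the paper.

```python
import numpy as np

def ellipse_axes_from_points(points):
    """Estimate semi-axis lengths and axis directions from boundary points."""
    pts = np.asarray(points, float)
    eigvals, eigvecs = np.linalg.eigh(np.cov((pts - pts.mean(axis=0)).T))
    order = np.argsort(eigvals)[::-1]             # largest eigenvalue = major axis
    return np.sqrt(2.0 * eigvals[order]), eigvecs[:, order]

# Synthetic ellipse with semi-axes a=5, b=2: the estimate is close to [5, 2].
t = np.linspace(0.0, 2.0 * np.pi, 400)
print(ellipse_axes_from_points(np.c_[5 * np.cos(t), 2 * np.sin(t)])[0])
```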

Keywords: Circular Hough transform, covariance matrix, Eigen values, ellipse detection, raster scan algorithm.

1471 Architecture, Implementation and Application of Tools for Experimental Analysis

Authors: Tom Dowling, Adam Duffy

Abstract:

This paper presents an architecture to assist in the development of tools to perform experimental analysis. Existing implementations of tools based on this architecture are also described in this paper. These tools are applied to the real world problem of fault attack emulation and detection in cryptographic algorithms.

Keywords: Software Architectures and Design, Software Components and Reuse, Engineering Secure Software.

1470 Uncertainty Multiple Criteria Decision Making Analysis for Stealth Combat Aircraft Selection

Authors: C. Ardil

Abstract:

Fuzzy set theory and its extensions (intuitionistic fuzzy sets, picture fuzzy sets, and neutrosophic sets) have been widely used to address imprecision and uncertainty in complex decision-making. However, they may struggle with inherent indeterminacy and inconsistency in real-world situations. This study introduces uncertainty sets as a promising alternative, offering a structured framework for incorporating both types of uncertainty into decision-making processes. This work explores the theoretical foundations and applications of uncertainty sets. A novel decision-making algorithm based on uncertainty set-based proximity measures is developed and demonstrated through a practical application: selecting the most suitable stealth combat aircraft.

The results highlight the effectiveness of uncertainty sets in ranking alternatives under uncertainty. Uncertainty sets offer several advantages, including structured uncertainty representation, robust ranking mechanisms, and enhanced decision-making capabilities due to their ability to account for ambiguity. Future research directions are also outlined, including comparative analysis with existing MCDM methods under uncertainty, sensitivity analysis to assess the robustness of rankings, and broader application to various MCDM problems with diverse complexities. By exploring these avenues, uncertainty sets can be further established as a valuable tool for navigating uncertainty in complex decision-making scenarios.
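The paper's uncertainty-set proximity measure is not specified in the abstract, so the snippet below shows only a generic closeness-to-ideal ranking (a TOPSIS-style stand-in) to illustrate how proximity-based ranking of aircraft alternatives works; the scores, weights and benefit-criteria assumption are all hypothetical.

```python
import numpy as np

def rank_by_closeness(scores, weights):
    """Rank alternatives by relative closeness to the best observed profile
    (all criteria treated as benefit criteria for simplicity)."""
    S = np.asarray(scores, float) * np.asarray(weights, float)
    d_best = np.linalg.norm(S - S.max(axis=0), axis=1)
    d_worst = np.linalg.norm(S - S.min(axis=0), axis=1)
    closeness = d_worst / (d_best + d_worst)
    return np.argsort(-closeness)                 # indices, best alternative first

# Three hypothetical aircraft scored on four criteria.
print(rank_by_closeness([[7, 8, 6, 9], [9, 6, 7, 8], [8, 9, 9, 7]],
                        weights=[0.3, 0.2, 0.3, 0.2]))
```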

Keywords: Uncertainty set, stealth combat aircraft selection, multiple criteria decision-making analysis, MCDM, uncertainty proximity analysis.

1469 Some Physiological Effects of Momordica charantia and Trigonella foenum-graecum Extracts in Diabetic Rats as Compared with Cidophage®

Authors: Wehash, F. E., Ismail I. Abo-Ghanema, Rasha Mohamed Saleh

Abstract:

This study was conducted to evaluate the anti-diabetic properties of the ethanolic extracts of two plants commonly used in folk medicine, Momordica charantia (bitter melon) and Trigonella foenum-graecum (fenugreek). The study was performed on STZ-induced diabetic rats (type I DM). Extracts of the two plants were given to STZ-diabetic rats at doses of 500 mg/kg and 50 mg/kg body weight, respectively. Cidophage® (metformin HCl) was administered to another group at a dose of 500 mg/kg body weight to support the results. The ethanolic extracts and Cidophage® were administered orally once a day for four weeks using a stomach tube, and serum samples were obtained for biochemical analysis. The extracts caused significant decreases in glucose levels compared with diabetic control rats. Insulin secretion was increased after 4 weeks of treatment with Cidophage® compared with the non-diabetic control rats. Levels of the liver enzymes AST and ALT were normalized by all treatments. Decreases in liver cholesterol, triglycerides, and LDL in diabetic rats were observed with all treatments. HDL levels were increased by the treatments in the following order: bitter melon, Cidophage®, and fenugreek. Creatinine levels were reduced by all treatments. Serum nitric oxide and malonaldehyde levels were reduced by all extracts. GSH levels were increased by all extracts. Extravasation, as measured by the Evans Blue test, increased significantly in STZ-induced diabetic animals; this effect was reversed by the ethanolic extracts of bitter melon or fenugreek.

Keywords: Cidophage®, diabetic rats, Momordica charantia, Trigonella foenum-graecum.

1468 Survey of Potato Viral Infection Using Das-Elisa Method in Georgia

Authors: Maia Kukhaleishvili, Ekaterine Bulauri, Iveta Megrelishvili, Tamar Shamatava, Tamar Chipashvili

Abstract:

Plant viruses can cause losses of yield and quality in many important crops. Symptoms vary depending on the cultivar and the virus strain. Selection of resistant potato varieties would reduce the risk of virus transmission and its significant economic impact. Another way to avoid reduced harvest yields is regular sampling and testing of seed potato production for viral infection. The aim of this study was to determine the occurrence and distribution of viral diseases across potato cultivars for the further selection of virus-free material in Georgia. During the summers of 2015-2016, 5 potato cultivars (Sante, Laura, Jelly, Red Sonia, Anushka) at 5 different farms located in Akhalkalaki were tested for 6 different potato viruses: Potato virus A (PVA), Potato virus M (PVM), Potato virus S (PVS), Potato virus X (PVX), Potato virus Y (PVY) and Potato leafroll virus (PLRV). A serological method, the Double Antibody Sandwich Enzyme-Linked Immunosorbent Assay (DAS-ELISA), was used in the laboratory to analyze the results. The results showed that the presence of PVY (21.4%) and PLRV (19.7%) in the collected samples was relatively high compared to the other viruses. The researched potato cultivars, except Jelly and Laura, were infected by PVY at different concentrations. PLRV was found only in three potato cultivars (Sante, Jelly, Red Sonia), and the PVM virus (3.12%) was characterized by low prevalence. PVX, PVA and PVS infections were not detected. It should be noted that 7.9% of the samples contained a mixed PVY/PLRV infection. Based on the results, it can be concluded that PVY and PLRV infections are dominant in all researched cultivars; therefore, significant yield losses are expected. Systematic, long-term control of potato viral infection, especially of seed potatoes, must be regarded as the most important factor in increasing seed productivity.

Keywords: Diseases, infection, potato, virus.

1467 Development of an Ensemble Classification Model Based on Hybrid Filter-Wrapper Feature Selection for Email Phishing Detection

Authors: R. B. Ibrahim, M. S. Argungu, I. M. Mungadi

Abstract:

It is obvious that, at the present time, the Internet has become an indispensable part of human life. The Internet provides diverse opportunities to make life easier for human beings through the adoption of various channels, among them email, internet banking, and video conferencing. Email is one of the easiest means of communication and is widely accepted among individuals and organizations globally. But over the decades the security integrity of this platform has been challenged by malicious activities such as phishing. Email phishing is designed by phishers to fool the recipient into handing over sensitive personal information such as passwords, credit card numbers, account credentials, and social security numbers. This activity has caused a great deal of financial damage to email users globally, which has resulted in bankruptcy, sudden death of victims, and other health-related harm. Although many methods have been proposed to detect email phishing, in this research the results of multiple machine-learning methods for predicting email phishing are compared, with the use of filter-wrapper feature selection. It is worth noting that all three models performed substantially well, but one outperformed the others. The dataset used for these models was obtained from the Kaggle online data repository, and three classifiers (decision tree, Naïve Bayes, and logistic regression) were each ensembled with bagging. Results from the study show that the decision tree (CART) bagging ensemble recorded the highest accuracy of 98.13% using PEF (Phishing Essential Features). This result further demonstrates the dependability of the proposed model.
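A minimal scikit-learn sketch of the bagging comparison described above (the dataset loading and the filter-wrapper feature selection step are omitted); the estimator settings are illustrative, and the `estimator=` argument is named `base_estimator=` in scikit-learn versions before 1.2.

```python
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def compare_bagged_models(X, y):
    """10-fold accuracy of bagging ensembles over the three base learners."""
    bases = {
        "CART": DecisionTreeClassifier(),
        "Naive Bayes": GaussianNB(),
        "Logistic regression": LogisticRegression(max_iter=1000),
    }
    return {name: cross_val_score(BaggingClassifier(estimator=base, n_estimators=50),
                                  X, y, cv=10, scoring="accuracy").mean()
            for name, base in bases.items()}
```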

Keywords: Ensemble, hybrid, filter-wrapper, phishing.

1466 The Benefits of End-To-End Integrated Planning from the Mine to Client Supply for Minimizing Penalties

Authors: G. Martino, F. Silva, E. Marchal

Abstract:

Control over the characteristics of the delivered iron ore blend is one of the most important aspects of the mining business. The iron ore price is a function of its composition, which is the outcome of the beneficiation process, so end-to-end integrated planning of mine operations can reduce the risk of penalties on the iron ore price. In a standard iron mining company, the production chain is composed of mining, ore beneficiation, and client supply. When mine planning and client supply decisions are made without coordination, the beneficiation plant struggles to deliver the best blend possible. Technological improvements in several fields have made it possible to bridge the gap between departments and boost integrated decision-making processes. Clusterization and classification algorithms over historical production data generate reasonable predictions of the quality and volume of iron ore produced for each pile of run-of-mine (ROM) processed. Mathematical modeling can use those deterministic relations to propose iron ore blends that better fit specifications within a delivery schedule. Additionally, a model capable of representing the whole production chain can clearly compare the overall impact of different decisions in the process. This study shows how flexibilization combined with a planning optimization model spanning the mine and the ore beneficiation processes can reduce the risk of out-of-specification deliveries. The model's capabilities are illustrated on a hypothetical iron ore mine with a magnetic separation process. Finally, this study shows ways of reducing costs or increasing profit by optimizing process indicators across the production chain and integrating the different plans with sales decisions.
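To make the blend-optimization idea concrete, here is a tiny linear-programming sketch: given predicted grades per ROM pile, choose blend proportions that meet a client specification at minimum cost. The pile grades, costs and specification limits are hypothetical numbers, and the paper's actual model covers far more of the production chain.

```python
import numpy as np
from scipy.optimize import linprog

fe = np.array([0.62, 0.58, 0.66])      # hypothetical Fe fraction predicted per pile
si = np.array([0.06, 0.09, 0.03])      # hypothetical silica fraction per pile
cost = np.array([10.0, 7.0, 14.0])     # relative processing cost per tonne

# Minimise cost subject to: blend Fe >= 62%, blend silica <= 5.5%, proportions sum to 1.
res = linprog(c=cost,
              A_ub=np.vstack([-fe, si]), b_ub=[-0.62, 0.055],
              A_eq=np.ones((1, 3)), b_eq=[1.0],
              bounds=[(0, 1)] * 3)
print(res.x if res.success else "specification cannot be met with these piles")
```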

Keywords: Clusterization and classification algorithms, integrated planning, optimization, mathematical modeling, penalty minimization.

1465 Driving Behaviors at Intersections (Case Study- Tehran-Zone 3-Region 3)

Authors: A. Mansour Khaki, A. E. Forouhid, S. Hemmati, M. Rahnamay-Naeini

Abstract:

In this article we study drivers' behavior at intersections. Some significant behaviors were chosen, and a questionnaire of about two pages was designed in which respondents were asked to answer by checking a box, with answers ranging from "always" to "never". The questionnaire related to the selected behaviors. The results showed that most of the aggressive behaviors were common among the respondents, and some solutions are suggested for each of them.

Keywords: Driver, behavior, intersection, study.
