Search results for: Clustering Algorithms.
1241 Comparative Study of Tensile Properties of Cast and Hot Forged Alumina Nanoparticle Reinforced Composites
Authors: S. Ghanaraja, Subrata Ray, S. K. Nath
Abstract:
Particle reinforced Metal Matrix Composite (MMC) succeeds in synergizing the metallic matrix with ceramic particle reinforcements to result in improved strength, particularly at elevated temperatures, but it adversely affects the ductility of the matrix because of agglomeration and porosity. The present study investigates the tensile properties of a cast and hot forged composite reinforced simultaneously with coarse and fine particles. Nano-sized alumina particles have been generated by milling a mixture of aluminum and manganese dioxide powders. After drying, the milled particles are added to molten metal and the resulting slurry is cast. The microstructure of the composites shows good distribution of both size categories of particles without significant clustering. The presence of nanoparticles along with coarser particles in a composite improves both strength and ductility considerably. The delay in debonding of coarser particles to higher stress is due to the reduced mismatch in extension caused by increased strain hardening in the presence of the nanoparticles. However, addition of the powder mix beyond a limit results in deterioration of mechanical properties, possibly due to clustering of nanoparticles. The porosity of the cast composite generally increases with increasing addition of the powder mix, but it is reduced on forging. The base alloy and the nanocomposites show an improvement in flow stress after forging, which could be attributed to the reduction in porosity and to grain refinement.
Keywords: Aluminum, alumina, nanoparticle reinforced composites, porosity.
1240 PSO-based Possibilistic Portfolio Model with Transaction Costs
Authors: Wei Chen, Cui-you Yao, Yue Qiu
Abstract:
This paper deals with a portfolio selection problem based on possibility theory, under the assumption that the returns of assets are LR-type fuzzy numbers. A possibilistic portfolio model with transaction costs is proposed, in which the possibilistic mean value of the return is taken as the measure of investment return and the possibilistic variance of the return as the measure of investment risk. Because of the transaction costs, traditional optimization algorithms usually fail to find the optimal solution efficiently, so heuristic algorithms are a better choice. Therefore, a particle swarm optimization is designed to solve the corresponding optimization problem. Finally, a numerical example is given to illustrate the proposed approach.
Keywords: Possibility theory, portfolio selection, transaction costs, particle swarm optimization.
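To make the approach concrete, here is a minimal particle swarm optimization sketch in Python (not the authors' implementation): the fitness combines a possibilistic mean return, a variance-based risk term weighted by an assumed risk-aversion parameter lam, and proportional transaction costs relative to current holdings w_old. All names and parameter values are illustrative assumptions.

```python
import numpy as np

def pso_portfolio(mean_ret, var_ret, cost, w_old, lam=0.5,
                  n_particles=30, n_iter=200, seed=0):
    """Maximize possibilistic mean return minus risk and transaction
    costs. Fitness, normalisation and cost model are illustrative."""
    rng = np.random.default_rng(seed)
    n = len(mean_ret)
    x = rng.random((n_particles, n))
    x /= x.sum(axis=1, keepdims=True)      # feasible: weights sum to 1
    v = np.zeros_like(x)

    def fitness(w):
        tc = np.sum(cost * np.abs(w - w_old), axis=1)  # proportional costs
        return w @ mean_ret - lam * (w**2) @ var_ret - tc

    pbest, pbest_f = x.copy(), fitness(x)
    gbest = pbest[np.argmax(pbest_f)]
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, n))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, 0, 1)
        x /= x.sum(axis=1, keepdims=True)  # re-project onto the simplex
        f = fitness(x)
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmax(pbest_f)]
    return gbest

# Illustrative three-asset example with invented figures.
mu = np.array([0.10, 0.12, 0.08])
var = np.array([0.04, 0.09, 0.02])
print(pso_portfolio(mu, var, np.full(3, 0.01), np.full(3, 1 / 3)))
```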
1239 Nuclear Medical Image Treatment System Based On FPGA in Real Time
Authors: B. Mahmoud, M.H. Bedoui, R. Raychev, H. Essabbah
Abstract:
We present in this paper an acquisition and treatment system designed for semi-analog Gamma-cameras. It consists of a nuclear medical Image Acquisition, Treatment and Display (IATD) chain ensuring the acquisition, the treatment of the signals resulting from the Gamma-camera detection head (DH) and the scintigraphic image construction in real time. The chain interfaces with the semi-analog cameras of Sopha Medical Vision (SMVi), taking the SOPHY DS7 as an example, and is composed of an analog treatment board and a digital treatment board; an earlier version of the digital board was designed around a DSP [2]. We describe the designed systems and the digital treatment algorithms, whose performance and flexibility we have improved. In the new version of the IATD chain presented here, the digital treatment algorithms are implemented in a specific reprogrammable circuit, an FPGA (Field Programmable Gate Array).
Keywords: Nuclear medical image, scintigraphic image, digital treatment, linearity, spectrometry, FPGA.
1238 Using Genetic Algorithms to Outline Crop Rotations and a Cropping-System Model
Authors: Nicolae Bold, Daniel Nijloveanu
Abstract:
The cropping system is a method used by farmers. It is an environmentally friendly method that protects natural resources (soil, water, air, nutritive substances) and increases production at the same time, taking some crop particularities into account. Combining this powerful method with the concepts of genetic algorithms makes it possible to generate sequences of crops that form a rotation. This type of algorithm has proven efficient in solving optimization problems, and its polynomial complexity allows it to be applied to more difficult and varied problems. In our case, the optimization consists in finding the most profitable rotation of cultures. One of the expected results is to optimize the usage of resources, in order to minimize the costs and maximize the profit. To achieve these goals, a genetic algorithm was designed. This algorithm finds several optimized cropping-system possibilities which have the highest profit and thus minimize the costs. The algorithm uses genetic-based methods (mutation, crossover) and structures (genes, chromosomes): a cropping-system possibility is considered a chromosome, and a crop within the rotation is a gene within that chromosome. Results about the efficiency of this method are presented in a dedicated section. The implementation of this method would benefit the activity of farmers by giving them hints and helping them use resources efficiently.
Keywords: Genetic algorithm, chromosomes, genes, cropping, agriculture.
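A minimal Python sketch of the chromosome/gene encoding described above might look as follows; the crop set, profit figures and the consecutive-repeat penalty are invented for illustration and do not come from the paper.

```python
import random

CROPS = ["wheat", "maize", "soy", "potato"]          # hypothetical crop set
PROFIT = {"wheat": 3, "maize": 4, "soy": 5, "potato": 6}

def fitness(rotation):
    # Illustrative rule: profit per crop, penalised when a crop repeats
    # in consecutive years (a common agronomic constraint).
    score = sum(PROFIT[c] for c in rotation)
    score -= sum(4 for a, b in zip(rotation, rotation[1:]) if a == b)
    return score

def crossover(a, b):
    cut = random.randint(1, len(a) - 1)   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(rotation, rate=0.1):
    return [random.choice(CROPS) if random.random() < rate else c
            for c in rotation]

def evolve(years=6, pop_size=40, generations=100):
    pop = [[random.choice(CROPS) for _ in range(years)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]     # elitist selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```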
1237 A Matching Algorithm of Minutiae for Real Time Fingerprint Identification System
Authors: Shahram Mohammadi, Ali Frajzadeh
Abstract:
Many matching algorithms with different characteristics have been introduced in recent years. For real-time systems these algorithms are usually based on minutiae features. In this paper we introduce a novel approach for feature extraction in which the extracted features are independent of shift and rotation of the fingerprint, while the matching operation is performed much more easily and with higher speed and accuracy. In this new approach, a reference point and a reference orientation are first determined for any fingerprint, and the features are then converted into polar coordinates based on this information. Due to its high speed and accuracy, the small volume of extracted features and the easy execution of the matching operation, this approach is highly appropriate for real-time applications.
Keywords: Matching, Minutiae, Reference point, Reference orientation.
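A minimal Python sketch of such a shift- and rotation-invariant conversion follows; the tolerance values, the canonical sort and the naive pairwise scoring loop are illustrative assumptions, not the matching procedure of the paper.

```python
import math

def to_polar(minutiae, ref_point, ref_angle):
    """Convert (x, y, theta) minutiae into coordinates relative to a
    reference point and orientation, so the result is invariant to
    shift and rotation of the fingerprint (illustrative sketch)."""
    rx, ry = ref_point
    polar = []
    for x, y, theta in minutiae:
        r = math.hypot(x - rx, y - ry)                  # radial distance
        phi = (math.atan2(y - ry, x - rx) - ref_angle) % (2 * math.pi)
        orient = (theta - ref_angle) % (2 * math.pi)    # relative angle
        polar.append((r, phi, orient))
    return sorted(polar)   # canonical order simplifies later matching

def match_score(a, b, tol_r=8.0, tol_a=0.15):
    """Fraction of minutiae in a that have a close partner in b
    (naive O(n*m) comparison, tolerances are assumptions)."""
    hits = 0
    for ra, pa, oa in a:
        if any(abs(ra - rb) < tol_r and abs(pa - pb) < tol_a
               and abs(oa - ob) < tol_a for rb, pb, ob in b):
            hits += 1
    return hits / max(len(a), 1)
```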
1236 An Integrated Framework for the Realtime Investigation of State Space Exploration
Authors: Jörg Lassig, Stefanie Thiem
Abstract:
The objective of this paper is the introduction of a unified optimization framework for research and education. The OPTILIB framework implements different general-purpose algorithms for combinatorial optimization and for minimum search on standard continuous test functions. The strengths of this library are the straightforward integration of new optimization algorithms and problems, as well as the visualization of the optimization process, either for different methods exploring the search space exclusively or for the real-time visualization of different methods in parallel. Further, the usage of several implemented methods is presented on the basis of two use cases, with the focus especially on algorithm visualization. First it is demonstrated how different methods can be compared conveniently using OPTILIB on the example of different iterative improvement schemes for the TRAVELING SALESMAN PROBLEM. A second study emphasizes how the framework can be used to find global minima in the continuous domain.
Keywords: Global Optimization Heuristics, Particle Swarm Optimization, Ensemble Based Threshold Accepting, Ruin and Recreate.
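As an example of the kind of iterative improvement scheme for the TSP that such a framework would compare, here is a plain 2-opt sketch in Python. It deliberately does not use OPTILIB's actual API, which is not documented here; the naive full-length recomputation per candidate move is kept for clarity.

```python
import math, random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(pts):
    """Plain 2-opt iterative improvement: reverse any segment whose
    reversal shortens the tour, until no improving move remains."""
    tour = list(range(len(pts)))
    random.shuffle(tour)
    best = tour_length(tour, pts)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                cand_len = tour_length(cand, pts)
                if cand_len < best:
                    tour, best, improved = cand, cand_len, True
    return tour, best

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(25)]
print(two_opt(cities)[1])    # length of the locally optimal tour
```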
1235 Using Genetic Algorithms in Closed Loop Identification of the Systems with Variable Structure Controller
Authors: O.M. Mohamed vall, M. Radhi
Abstract:
This work presents a recursive algorithm for the identification of closed-loop systems with a Variable Structure Controller. The suggested approach includes two stages. In the first stage a genetic algorithm is used to obtain the parameters of the switching function which give a control signal rich in commutations (i.e., a control signal whose spectral characteristics are as close as possible to those of a white noise signal). The second stage consists in identifying the system parameters by the instrumental variable method, using the optimal switching function parameters obtained with the genetic algorithm. A simulation example is presented to test the validity of this algorithm.
Keywords: Closed loop identification, variable structure controller, pseudo-random binary sequence, genetic algorithms.
1234 A Robust Visual Tracking Algorithm with Low-Rank Region Covariance
Authors: Songtao Wu, Yuesheng Zhu, Ziqiang Sun
Abstract:
The region covariance (RC) descriptor is an effective and efficient feature for visual tracking. Current RC-based tracking algorithms use the whole RC matrix to track the target in video directly. However, such whole-RC-based algorithms have some issues. If some features are contaminated, the whole RC becomes unreliable, which results in losing the tracked object. In addition, if some features are highly discriminative against the background, the remaining features are still processed, which reduces efficiency. In this paper a new robust tracking method is proposed, in which the whole RC matrix is decomposed into several low-rank matrices. Those matrices are dynamically chosen and processed so as to achieve a good trade-off between discriminability and complexity. Experimental results have shown that, compared to other RC-based methods, our method is more robust to complex environment changes, especially when occlusion happens or when the background is similar to the target.
Keywords: Visual tracking, region covariance descriptor, low-rank region covariance.
1233 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring
Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti
Abstract:
Autonomous structural health monitoring (SHM) of many structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (intelligent multiplexer), which tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., entropy, variance, kurtosis), and feature extraction (auto-associative neural network (ANN)), which combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classification (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with both normal and anomalous ones. In particular, principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN) are presented and their performances are compared. It is also shown that, by evaluating the correct features, the anomaly can be detected with accuracy and an F1 score greater than 95%.
Keywords: Anomaly detection, dimensionality reduction, frequencies selection, modal analysis, neural network, structural health monitoring, vibration measurement.
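As a rough illustration of the PCA-based one-class classification step, here is a self-contained Python sketch (not the authors' pipeline): it fits a principal subspace on synthetic "standard-condition" frequency vectors and flags points whose reconstruction error exceeds a threshold. The four frequencies, the noise level and the simulated frequency drop are invented for the example.

```python
import numpy as np

def fit_pca(X_train, k=2):
    """Fit PCA on standard-condition feature vectors (e.g., the four
    tracked fundamental frequencies); keep k principal components."""
    mu = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    return mu, Vt[:k]

def anomaly_score(X, mu, components):
    """Reconstruction error: distance between a point and its
    projection onto the standard-condition subspace."""
    Z = (X - mu) @ components.T
    X_hat = Z @ components + mu
    return np.linalg.norm(X - X_hat, axis=1)

# Synthetic frequencies (Hz); in practice the threshold would come
# from a validation split of standard-condition data.
rng = np.random.default_rng(0)
healthy = rng.normal([3.9, 5.0, 9.8, 10.3], 0.05, size=(500, 4))
mu, comps = fit_pca(healthy)
threshold = np.percentile(anomaly_score(healthy, mu, comps), 99)
damaged = healthy[:10] - [0.2, 0.0, 0.3, 0.1]   # simulated frequency drop
print((anomaly_score(damaged, mu, comps) > threshold).mean())
```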
1232 Multiuser Detection in CDMA Fast Fading Multipath Channel using Heuristic Genetic Algorithms
Authors: Muhammad Naeem, Syed Ismail Shah, Habibullah Jamal
Abstract:
In this paper, a simple heuristic genetic algorithm is used for multistage multiuser detection in fast fading environments. Multipath channels, multiple access interference (MAI) and the near-far effect cause the performance of the conventional detector to degrade. Heuristic genetic algorithms, a rapidly growing area of artificial intelligence, use evolutionary programming for the initial search, which not only helps the solution converge efficiently towards near-optimal performance but also does so at a very low complexity compared with the optimal detector. This holds true for both Additive White Gaussian Noise (AWGN) and multipath fading channels. Experimental results are presented to show the superior performance of the proposed technique over the existing methods.
Keywords: Genetic Algorithm (GA), Multiple Access Interference (MAI), Multistage Detectors (MSD), Successive Interference Cancellation.
1231 Comparing Machine Learning Estimation of Fuel Consumption of Heavy-Duty Vehicles
Authors: Victor Bodell, Lukas Ekstrom, Somayeh Aghanavesi
Abstract:
Fuel consumption (FC) is one of the key factors in determining the expenses of operating a heavy-duty vehicle. A customer may therefore request an estimate of the FC of a desired vehicle. The modular design of heavy-duty vehicles allows their construction by specifying the building blocks, such as gear box, engine and chassis type. If the combination of building blocks is unprecedented, it is unfeasible to measure the FC, since this would first require the construction of the vehicle. This paper proposes a machine learning approach to predict FC. The study uses information on around 40,000 vehicles, covering their specifications and operational environmental conditions such as road slopes and driver profiles. All vehicles have diesel engines and a mileage of more than 20,000 km. The data is used to investigate the accuracy of the machine learning algorithms linear regression (LR), k-nearest neighbor (KNN) and artificial neural networks (ANN) in predicting fuel consumption for heavy-duty vehicles. The performance of the algorithms is evaluated by reporting the prediction error on both simulated data and operational measurements, and the algorithms are compared using nested cross-validation and statistical hypothesis testing. The statistical evaluation procedure finds that ANNs have the lowest prediction error compared to LR and KNN in estimating fuel consumption on both simulated and operational data. The models have a mean relative prediction error of 0.3% on simulated data, and 4.2% on operational data.
Keywords: Artificial neural networks, fuel consumption, machine learning, regression, statistical tests.
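A scikit-learn sketch of the nested cross-validation comparison described above might look as follows. This is a Python analogue under stated assumptions, not the paper's code: the synthetic feature matrix stands in for the vehicle specification and operating-condition data, and the hyperparameter grids are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for vehicle spec + operating-condition features.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))            # e.g. mass, gearing, slope, ...
y = 20 + X @ rng.normal(size=6) + rng.normal(scale=0.5, size=400)

models = {
    "LR": (make_pipeline(StandardScaler(), LinearRegression()), {}),
    "KNN": (make_pipeline(StandardScaler(), KNeighborsRegressor()),
            {"kneighborsregressor__n_neighbors": [3, 5, 9]}),
    "ANN": (make_pipeline(StandardScaler(),
                          MLPRegressor(max_iter=2000, random_state=0)),
            {"mlpregressor__hidden_layer_sizes": [(16,), (32, 16)]}),
}

outer = KFold(5, shuffle=True, random_state=0)
for name, (pipe, grid) in models.items():
    # Inner loop tunes hyperparameters; outer loop estimates error.
    inner = GridSearchCV(pipe, grid, cv=3,
                         scoring="neg_mean_absolute_percentage_error")
    scores = cross_val_score(inner, X, y, cv=outer,
                             scoring="neg_mean_absolute_percentage_error")
    print(f"{name}: {-scores.mean():.3%} mean relative error")
```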
1230 Computing Maximum Uniquely Restricted Matchings in Restricted Interval Graphs
Authors: Swapnil Gupta, C. Pandu Rangan
Abstract:
A uniquely restricted matching is defined to be a matching M whose matched vertices induce a subgraph which has only one perfect matching. In this paper, we make progress on the open question of the status of this problem on interval graphs (graphs obtained as the intersection graphs of intervals on a line). We give an algorithm to compute maximum cardinality uniquely restricted matchings on certain subclasses of interval graphs. We consider two subclasses of interval graphs, the former contained in the latter, and give O(|E|^2) time algorithms for both of them. It is to be noted that both subclasses are incomparable to proper interval graphs (graphs obtained as the intersection graphs of intervals in which no interval completely contains another interval), on which the problem can be solved in polynomial time.
Keywords: Uniquely restricted matching, interval graph, design and analysis of algorithms, matching, induced matching, witness counting.
1229 Optimizing Mobile Agents Migration Based on Decision Tree Learning
Authors: Yasser k. Ali, Hesham N. Elmahdy, Sanaa El Olla Hanfy Ahmed
Abstract:
Mobile agents are a powerful approach to developing distributed systems, since they migrate to hosts on which they have the resources to execute individual tasks. In a dynamic environment like a peer-to-peer network, agents have to be generated frequently and dispatched to the network, so they will certainly consume a certain amount of bandwidth on each link. If too many agents migrate through one or several links at the same time, they will introduce too much transfer overhead to those links; eventually these links will become busy and indirectly block the network traffic. Therefore, there is a need to develop routing algorithms that take the traffic load into account. In this paper we seek to create cooperation between a probabilistic quality measure of the network traffic situation and the agent's decision, based on decision tree learning algorithms, about migrating to the next hop.
Keywords: Agent Migration, Decision Tree learning, ID3 algorithm, Naive Bayes Classifier.
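A minimal Python sketch of a decision-tree-driven next-hop choice follows. The feature set, the toy training rows and the success labels are hypothetical stand-ins for the traffic measurements the paper discusses; scikit-learn's entropy criterion is used as an ID3-style splitting rule.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: one row per candidate next hop, with
# link-quality features observed at migration time.
# Features: [link_load, latency_ms, hops_to_target, queue_length]
X = np.array([
    [0.2, 10, 2, 1],
    [0.9, 80, 2, 7],
    [0.4, 25, 3, 2],
    [0.7, 60, 1, 5],
    [0.1,  8, 4, 0],
    [0.8, 70, 3, 6],
])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = migration succeeded quickly

tree = DecisionTreeClassifier(criterion="entropy",  # ID3-style splits
                              max_depth=3).fit(X, y)

def choose_next_hop(candidates):
    """Pick the candidate hop with the highest predicted probability
    of a fast, low-overhead migration."""
    probs = tree.predict_proba(candidates)[:, 1]
    return int(np.argmax(probs))

print(choose_next_hop(np.array([[0.5, 30, 2, 3], [0.3, 15, 3, 1]])))
```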
1228 Partial 3D Reconstruction using Evolutionary Algorithms
Authors: Mónica Pérez-Meza, Rodrigo Montúfar-Chaveznava
Abstract:
When reconstructing a scenario, it is necessary to know the structure of the elements present in the scene in order to interpret it. In this work we link 3D scene reconstruction to evolutionary algorithms through stereo vision theory. We consider stereo vision as a method that provides the reconstruction of a scene using only a couple of images of the scene and performing some computation. Through several images of a scene, captured from different positions, stereo vision can give us an idea of the three-dimensional characteristics of the world. Stereo vision usually requires two cameras, in analogy to the mammalian vision system. In this work we employ only one camera, which is translated along a path, capturing images at regular distances. As we cannot perform all the computations required for an exhaustive reconstruction, we employ an evolutionary algorithm to partially reconstruct the scene in real time. The algorithm employed is the fly algorithm, which uses "flies" to reconstruct the principal characteristics of the world following certain evolutionary rules.
Keywords: 3D Reconstruction, Computer Vision, Evolutionary Algorithms, Stereo Vision.
1227 Reformulations of Big Bang-Big Crunch Algorithm for Discrete Structural Design Optimization
Authors: O. Hasançebi, S. Kazemzadeh Azad
Abstract:
In the present study the efficiency of the Big Bang-Big Crunch (BB-BC) algorithm is investigated in discrete structural design optimization. It is shown that a standard version of the BB-BC algorithm is sometimes unable to produce reasonable solutions to problems from discrete structural design optimization. Two reformulations of the algorithm, referred to as modified BB-BC (MBB-BC) and exponential BB-BC (EBB-BC), are introduced to enhance the capability of the standard algorithm in locating good solutions for steel truss and frame type structures, respectively. The performances of the proposed algorithms are tested and compared to the standard version, as well as to some other algorithms, over several practical design examples. In these examples, steel structures are sized for minimum weight subject to stress, stability and displacement limitations according to the provisions of AISC-ASD.
Keywords: Structural optimization, discrete optimization, metaheuristics, big bang-big crunch (BB-BC) algorithm, design optimization of steel trusses and frames.
1226 Dynamic Decompression for Text Files
Authors: Ananth Kamath, Ankit Kant, Aravind Srivatsa, Harisha J.A
Abstract:
Compression algorithms reduce the redundancy in data representation to decrease the storage required for that data. Lossless compression researchers have developed highly sophisticated approaches, such as Huffman encoding, arithmetic encoding, the Lempel-Ziv (LZ) family, Dynamic Markov Compression (DMC), Prediction by Partial Matching (PPM), and Burrows-Wheeler Transform (BWT) based algorithms. Decompression is then required to retrieve the original data by lossless means. This paper presents a compression scheme for text files coupled with the principle of dynamic decompression, which decompresses only the section of the compressed text file required by the user instead of decompressing the entire file. Dynamically decompressed files offer better disk space utilization due to higher compression ratios compared to most of the currently available text file formats.
Keywords: Compression, Dynamic Decompression, Text file format, Portable Document Format, Compression Ratio.
1225 Equity Risk Premiums and Risk Free Rates in Modelling and Prediction of Financial Markets
Authors: Mohammad Ghavami, Reza S. Dilmaghani
Abstract:
This paper presents an adaptive framework for modelling financial markets using equity risk premiums, risk-free rates and volatilities. The recorded economic factors are initially used to train four adaptive filters over a certain limited period of time in the past. Once the systems are trained, the adjusted coefficients are used for modelling and prediction of an important financial market index. Two different approaches based on the least mean squares (LMS) and recursive least squares (RLS) algorithms are investigated. A performance analysis of each method in terms of the mean squared error (MSE) is presented and the results are discussed. Computer simulations carried out using recorded data show MSEs of 4% and 3.4% for the next-month prediction using the LMS and RLS adaptive algorithms, respectively. For twelve-month prediction, the RLS method shows better trend estimation than the LMS algorithm.
Keywords: Prediction of financial markets, Adaptive methods, MSE, LSE.
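To illustrate the LMS half of the comparison, here is a minimal adaptive-filter sketch in Python. The synthetic data and the step size mu are assumptions for the example; the paper's actual factor series and filter configuration are not reproduced.

```python
import numpy as np

def lms_predict(X, d, mu=0.01):
    """Least-mean-squares adaptive filter: at each step predict d[n]
    from the feature vector X[n] (e.g., risk premium, risk-free rate,
    volatility), then update the weights from the prediction error."""
    w = np.zeros(X.shape[1])
    preds = np.empty(len(d))
    for n in range(len(d)):
        preds[n] = w @ X[n]
        e = d[n] - preds[n]            # prediction error
        w += 2 * mu * e * X[n]         # LMS weight update
    return preds, w

# Synthetic illustration: the "index" is a noisy linear mix of factors.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 3))
d = X @ np.array([0.5, -0.2, 0.8]) + 0.05 * rng.normal(size=600)
preds, w = lms_predict(X, d)
print(np.mean((d[300:] - preds[300:])**2))   # MSE after convergence
```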
1224 A New Method for Contour Approximation Using Basic Ramer Idea
Authors: Ali Abdrhman Ukasha
Abstract:
This paper presents two new efficient algorithms for contour approximation. The proposed algorithms are compared with the Ramer (good quality), Triangle (faster) and Trapezoid (fastest) methods, which are briefly described. The Cartesian coordinates of an input contour are processed in such a manner that the contour is finally represented by a set of selected vertices of its edge. The paper presents the main idea of the analyzed procedures for contour compression. For comparison, the mean square error and signal-to-noise ratio criteria are used. The computational time of the analyzed methods is estimated from the number of numerical operations. Experimental results are reported in terms of image quality, compression ratio, and speed. The main advantage of the analyzed algorithms is the small number of arithmetic operations compared to the existing algorithms.
Keywords: Polygonal approximation, Ramer, Triangle and Trapezoid methods.
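The basic Ramer idea the paper builds on can be sketched in a few lines of Python: keep the contour point farthest from the chord between the endpoints whenever it deviates by more than a tolerance eps, and recurse on both halves. The example contour is invented.

```python
import math

def point_line_dist(p, a, b):
    """Perpendicular distance from p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    den = math.hypot(by - ay, bx - ax)
    return num / den if den else math.dist(p, a)

def ramer(points, eps):
    """Basic Ramer polygonal approximation: split at the farthest
    point from the chord if it deviates more than eps, else keep
    only the two endpoints."""
    if len(points) < 3:
        return points
    dists = [point_line_dist(p, points[0], points[-1])
             for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] > eps:
        # Recurse on both halves; drop the duplicated split point.
        return ramer(points[:i + 1], eps)[:-1] + ramer(points[i:], eps)
    return [points[0], points[-1]]

pts = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6),
       (5, 7), (6, 8.1), (7, 9), (8, 9), (9, 9)]
print(ramer(pts, eps=1.0))
```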
1223 Contact Drying Simulation of Particulate Materials: A Comprehensive Approach
Authors: Marco Intelvi, Apolinar Picado, Joaquín Martínez
Abstract:
In this work, simulation algorithms for contact drying of agitated particulate materials under vacuum and at atmospheric pressure were developed. The implementation of the algorithms gives a predictive estimation of drying rate curves and bulk bed temperature during contact drying. The calculations are based on the penetration model to describe the drying process, where all process parameters such as heat and mass transfer coefficients, effective bed properties, and gas and liquid phase properties are estimated with proper correlations. Simulation results were compared with experimental data from the literature. In both cases, the simulation results were in good agreement with the experimental data. A few deviations were identified, and the limitations of the predictive capabilities of the models are discussed. The programs give a good insight into the drying behaviour of the analysed powders.
Keywords: Agitated bed, Atmospheric pressure, Penetration model, Vacuum.
1222 Documents Emotions Classification Model Based on TF-IDF Weighting Measure
Authors: Amr Mansour Mohsen, Hesham Ahmed Hassan, Amira M. Idrees
Abstract:
Emotion classification of text documents is applied to reveal whether a document expresses a determined emotion of its writer. While different supervised methods have previously been used for classifying emotion documents, in this research we present a novel model that supports the classification algorithms for more accurate results with the aid of the TF-IDF measure. Different experiments have been applied to demonstrate the applicability of the proposed model. The model succeeds in raising the accuracy percentage according to the determined metrics (precision, recall, and f-measure) by refining the lexicon, integrating lexicons from different perspectives, and applying the TF-IDF weighting measure over the classifying features. The proposed model has also been compared with other research to prove its competence in raising the results' accuracy.
Keywords: Emotion detection, TF-IDF, WEKA tool, classification algorithms.
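The paper works with WEKA; as a compact Python analogue of the TF-IDF-weighted classification step, the following scikit-learn sketch trains a Naive Bayes classifier on TF-IDF features. The tiny corpus and labels are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny hypothetical emotion-labelled corpus.
docs = ["I am thrilled about the results",
        "This delay makes me furious",
        "What a wonderful surprise",
        "I am so angry about the service"]
labels = ["joy", "anger", "joy", "anger"]

# TF-IDF weighting over the classifying features, then Naive Bayes.
model = make_pipeline(TfidfVectorizer(sublinear_tf=True), MultinomialNB())
model.fit(docs, labels)
print(model.predict(["the surprise was wonderful"]))   # -> ['joy']
```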
1221 A New Biologically Inspired Pattern Recognition Approach for Face Recognition
Authors: V. Kabeer, N. K. Narayanan
Abstract:
This paper reports a new pattern recognition approach for face recognition. The biological model of the light receptors (cones and rods) in human eyes, and the way they are associated with pattern vision, forms the basis of this approach. The functional model is simulated using CWD and WPD. The paper also discusses the experiments performed for face recognition using the features extracted from images in the AT&T face database. Artificial neural network and k-nearest neighbour classifier algorithms are employed for the recognition purpose. A feature vector is formed for each of the face images in the database, and recognition accuracies are computed and compared using the classifiers. Simulation results show that, in terms of recognition accuracy for face images with pose and illumination variations, the proposed method outperforms the traditional feature extraction methods prevailing in pattern recognition.
Keywords: Face recognition, Image analysis, Wavelet feature extraction, Pattern recognition, Classifier algorithms.
1220 Capacitor Placement in Distribution Systems Using Simulated Annealing (SA)
Authors: Esmail Limouzade, Mahmood Joorabian, Najaf Hedayat
Abstract:
This paper undertakes the problem of optimal capacitor placement in a distribution system. The problem is how to optimally determine the locations at which to install capacitors, the types and sizes of the capacitors to be installed and, during each load level, the control settings of these capacitors, so that a desired objective function is minimized while the load constraints, network constraints and operational constraints (e.g. voltage profile) at different load levels are satisfied. The problem is formulated as a combinatorial optimization problem with a non-differentiable objective function. Four solution methodologies based on genetic algorithms (GA), tabu search (TS), simulated annealing (SA) and hybrid GA-SA algorithms are presented. The solution methodologies are preceded by a sensitivity analysis to select the candidate capacitor installation locations.
Keywords: Genetic Algorithm (GA), capacitor placement, voltage profile, network losses, Simulated Annealing, distribution network.
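A minimal simulated annealing sketch for this kind of placement problem is given below. It is illustrative only: the capacitor size options, prices, cooling schedule and the toy quadratic loss function (which stands in for a proper load-flow evaluation) are all assumptions, not the paper's formulation.

```python
import math, random

SIZES = [0, 150, 300, 450, 600]          # kvar options per candidate bus

def cost(placement, loss_model, price_per_kvar=0.35):
    """Objective: loss cost plus capacitor cost (illustrative model)."""
    return loss_model(placement) + price_per_kvar * sum(placement)

def anneal(n_buses, loss_model, t0=100.0, alpha=0.98, steps=5000):
    cur = [random.choice(SIZES) for _ in range(n_buses)]
    cur_c, t = cost(cur, loss_model), t0
    best, best_c = cur[:], cur_c
    for _ in range(steps):
        cand = cur[:]
        cand[random.randrange(n_buses)] = random.choice(SIZES)  # neighbour
        cand_c = cost(cand, loss_model)
        # Accept improvements always, worse moves with Boltzmann odds.
        if cand_c < cur_c or random.random() < math.exp((cur_c - cand_c) / t):
            cur, cur_c = cand, cand_c
            if cur_c < best_c:
                best, best_c = cur[:], cur_c
        t *= alpha                       # geometric cooling schedule
    return best, best_c

# Toy stand-in for a load-flow-based loss evaluation.
target = [300, 0, 450, 150]
toy_loss = lambda p: sum((a - b)**2 for a, b in zip(p, target)) / 100
print(anneal(4, toy_loss))
```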
1219 EEG-Based Screening Tool for School Student’s Brain Disorders Using Machine Learning Algorithms
Authors: Abdelrahman A. Ramzy, Bassel S. Abdallah, Mohamed E. Bahgat, Sarah M. Abdelkader, Sherif H. ElGohary
Abstract:
Attention-Deficit/Hyperactivity Disorder (ADHD), epilepsy, and autism affect millions of children worldwide, many of whom remain undiagnosed despite the fact that all of these disorders are detectable in early childhood. Late diagnosis can cause severe problems, due to the late treatment as well as to the misconceptions and lack of awareness surrounding these disorders. Moreover, electroencephalography (EEG) has played a vital role in the assessment of neural function in children. Therefore, quantitative EEG measurement will be utilized as a tool in the evaluation of patients who may have ADHD, epilepsy, or autism. We propose a screening tool that uses EEG signals and machine learning algorithms to detect these disorders at an early age in an automated manner. As a first step, the proposed classifiers were applied to epilepsy and provided an accuracy of approximately 97% using SVM, Naïve Bayes and decision trees, and 98% using KNN, which gives hope for the work yet to be conducted.
Keywords: ADHD, autism, epilepsy, EEG, SVM.
1218 Detection of Action Potentials in the Presence of Noise Using Phase-Space Techniques
Authors: Christopher Paterson, Richard Curry, Alan Purvis, Simon Johnson
Abstract:
Emerging bio-engineering fields such as brain-computer interfaces, neuroprosthesis devices, and the modeling and simulation of neural networks have led to increased research activity in algorithms for the detection, isolation and classification of action potentials (AP) from noisy data trains. Current techniques in the field of 'unsupervised no-prior-knowledge' biosignal processing include energy operators, wavelet detection and adaptive thresholding. These tend to bias towards larger AP waveforms; APs may be missed due to deviations in spike shape and frequency, and correlated noise spectrums can cause false detection. Such algorithms also tend to suffer from large computational expense. A new signal detection technique based upon the ideas of phase-space diagrams and trajectories is proposed, using a delayed copy of the AP to highlight discontinuities relative to background noise. This idea has been used to create algorithms that are computationally inexpensive and address the above problems. Distinct APs have been picked out and manually classified from real physiological data recorded from a cockroach. To facilitate testing of the new technique, an auto-regressive moving average (ARMA) noise model has been constructed based upon the background noise of the recordings. Along with the AP classification, this model enables the generation of realistic neuronal data sets at arbitrary signal-to-noise ratios (SNR).
Keywords: Action potential detection, Low SNR, Phase space diagrams/trajectories, Unsupervised/no-prior knowledge.
1217 Optimized Delay Constrained QoS Routing
Authors: P. S. Prakash, S. Selvan
Abstract:
QoS routing aims to find paths between senders and receivers that satisfy the QoS requirements of the application while using the network resources efficiently; the underlying routing algorithm must be able to find low-cost paths that satisfy given QoS constraints. The problem of finding least-cost routing under such constraints is known to be NP-complete, and some algorithms have been proposed to find a near-optimal solution. But these heuristics or algorithms either impose relationships among the link metrics to reduce the complexity of the problem, which may limit the general applicability of the heuristic, or are too costly in terms of execution time to be applicable to large networks. In this paper, we concentrate on an algorithm that finds a near-optimal solution quickly, which we name Optimized Delay Constrained Routing (ODCR). It uses an adaptive path weight function together with an additional constraint imposed on the path cost to restrict the search space, and hence finds a near-optimal solution in much less time.
Keywords: QoS, Delay, Routing, Optimization.
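ODCR's weight function is not reproduced here, but the general shape of delay-constrained least-cost routing with an adaptive path weight can be sketched in Python: run Dijkstra on a mixed weight cost + lam * delay and adapt the multiplier lam until the delay bound is met (a LARAC-style relaxation, offered here as an assumption-laden illustration, not the paper's algorithm).

```python
import heapq

def dijkstra(graph, src, dst, weight):
    """graph: {u: [(v, cost, delay), ...]}; weight(cost, delay) -> float.
    Returns (cost, delay, path) minimizing the mixed weight, or None."""
    pq = [(0.0, src, 0.0, 0.0, [src])]
    seen = set()
    while pq:
        w, u, c, d, path = heapq.heappop(pq)
        if u == dst:
            return c, d, path
        if u in seen:
            continue
        seen.add(u)
        for v, ec, ed in graph.get(u, []):
            if v not in seen:
                heapq.heappush(pq, (w + weight(ec, ed), v,
                                    c + ec, d + ed, path + [v]))
    return None

def delay_constrained_route(graph, src, dst, max_delay, iters=30):
    """Grow the delay multiplier until feasible, then binary-search it
    to recover a cheaper path that still meets the delay bound."""
    lam_hi = 1.0
    res = dijkstra(graph, src, dst, lambda c, d: c + lam_hi * d)
    while res and res[1] > max_delay and lam_hi < 1e6:
        lam_hi *= 2
        res = dijkstra(graph, src, dst, lambda c, d: c + lam_hi * d)
    if not res or res[1] > max_delay:
        return None                      # no feasible path found
    best, lam_lo = res, 0.0
    for _ in range(iters):
        lam = (lam_lo + lam_hi) / 2
        cand = dijkstra(graph, src, dst, lambda c, d: c + lam * d)
        if cand and cand[1] <= max_delay:
            best, lam_hi = cand, lam     # feasible: try smaller multiplier
        else:
            lam_lo = lam
    return best

g = {"s": [("a", 1, 5), ("b", 3, 1)], "a": [("t", 1, 5)], "b": [("t", 3, 1)]}
print(delay_constrained_route(g, "s", "t", max_delay=4))  # cost 6, delay 2
```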
1216 Performance Analysis of Reconstruction Algorithms in Diffuse Optical Tomography
Authors: K. Uma Maheswari, S. Sathiyamoorthy, G. Lakshmi
Abstract:
Diffuse Optical Tomography (DOT) is a non-invasive imaging modality used in clinical diagnosis for earlier detection of carcinoma cells in brain tissue. It is a form of optical tomography which produces a reconstructed image of human soft tissue by using near-infrared light. It comprises two steps, called the forward model and the inverse model. The forward model describes the light propagation in a biological medium. The inverse model uses the scattered light to recover the optical parameters of human tissue. DOT suffers from severe ill-posedness due to its incomplete measurement data, so the accurate analysis of this modality is very complicated. To overcome this problem, optical properties of the soft tissue such as the absorption coefficient, scattering coefficient and optical flux are processed by the standard regularization technique called Levenberg-Marquardt regularization. The reconstruction algorithms Split Bregman and Gradient Projection for Sparse Reconstruction (GPSR) are used to reconstruct the image of human soft tissue for tumour detection. Among these algorithms, the Split Bregman method provides better performance than the GPSR algorithm. Parameters such as the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), relative error (RE) and CPU time for reconstructing images are analyzed to assess performance.
Keywords: Diffuse optical tomography, ill-posedness, Levenberg-Marquardt method, Split Bregman, Gradient projection for sparse reconstruction.
1215 Optimizing the Project Delivery Time with Time Cost Trade-offs
Authors: Wei Lo, Ming-En Kuo
Abstract:
While minimizing the overall project cost is always one of the objectives of construction managers, obtaining the maximum economic return is definitely one of the ultimate goals of project investors. As there is a trade-off relationship between the project time and cost, and the project delivery time directly affects the timing of economic recovery of an investment project, providing a method that can quantify the relationship between the project delivery time and cost, and identify the optimal delivery time to maximize the economic return, has always been a focus of researchers and industrial practitioners. Using genetic algorithms, this study introduces an optimization model that can quantify the relationship between the project delivery time and cost and, furthermore, determine the optimal delivery time to maximize the economic return of the project. The results provide an objective quantification for accurately evaluating the project delivery time and cost, and facilitate the analysis of the economic return of a project.
Keywords: Time-Cost Trade-Off, Genetic Algorithms, Resource Integration, Economic return.
1214 Hybrid Method Using Wavelets and Predictive Method for Compression of Speech Signal
Authors: Karima Siham Aoubid, Mohamed Boulemden
Abstract:
The development of signal compression algorithms is making impressive progress. These algorithms are continuously improved by new tools and aim to reduce, on average, the number of bits necessary for the signal representation by minimizing the reconstruction error. This article proposes the compression of the Arabic speech signal by a hybrid method combining the wavelet transform and linear prediction. The adopted approach rests, on one hand, on the decomposition of the original signal by means of analysis filters, which is followed by the compression stage, and, on the other hand, on the application of linear prediction of order 5 to the compressed signal coefficients. The aim of this approach is the estimation of the prediction error, which will be coded and transmitted. The decoding operation is then used to reconstitute the original signal. Thus, an adequate choice of the filter bank used in the transform is necessary to increase the compression rate while keeping the distortion imperceptible from an auditive point of view.
Keywords: Compression, linear prediction analysis, multiresolution analysis, speech signal.
1213 Global Security Using Human Face Understanding under Vision Ubiquitous Architecture System
Abstract:
Different methods containing biometric algorithms are presented for eigenface-based face detection, recognition, identification and verification. The theme of this research is to manage the critical processing stages (accuracy, speed, security and monitoring) of face activities with the flexibility of searching and editing the secure authorized database. In this paper we implement different techniques, such as eigenface vector reduction using texture and shape vector phenomena for complexity removal, while the density matching score with Face Boundary Fixation (FBF) extracts the most likely characteristics in this media processing content. We examine the development and performance efficiency of the database by applying our algorithms in both the recognition and detection phases. Our results show gains in accuracy and security, with better achievement than a number of previous approaches in all the above processes, in an encouraging manner.
Keywords: Ubiquitous architecture, verification, identification, recognition.
1212 Decentralized Handoff for Microcellular Mobile Communication System using Fuzzy Logic
Authors: G. M. Mir, N. A. Shah, Moinuddin
Abstract:
Efficient handoff algorithms are a cost-effective way of enhancing the capacity and QoS of a cellular system. A higher hysteresis value effectively prevents unnecessary handoffs but causes undesired cell dragging, which leads to interference or even dropped calls in a microcellular environment. The problems are further exacerbated by the corner effect phenomenon, which causes the signal level to drop by 20-30 dB over 10-20 meters. Thus, in order to maintain reliable communication in a microcellular system, new and better handoff algorithms must be developed. A fuzzy-based handoff algorithm is proposed in this paper as a solution to this problem: handoff is decided on the basis of the ratio of the slope of normal signal loss to that of the actual signal loss. The fuzzy-based solution is supported by comparing its results with those obtained analytically.
Keywords: Slope ratio, handoff, corner effect, fuzzy logic.
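A toy Python sketch of a fuzzy slope-ratio handoff decision follows. The triangular membership functions, the rule base and the second input (signal margin) are illustrative assumptions and not the rule base of the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def handoff_degree(slope_ratio, margin_db):
    """Degree (0..1) to which a handoff should be triggered.
    slope_ratio: normal path-loss slope divided by the observed one;
    values well below 1 indicate an abrupt extra loss (corner effect).
    margin_db: serving-cell level minus candidate-cell level.
    Membership functions and rules are illustrative assumptions."""
    ratio_low   = tri(slope_ratio, 0.0, 0.3, 0.8)   # abrupt signal loss
    ratio_ok    = tri(slope_ratio, 0.6, 1.0, 1.4)   # normal propagation
    margin_low  = tri(margin_db, -5.0, 0.0, 8.0)
    margin_high = tri(margin_db, 5.0, 15.0, 30.0)
    fire_handoff = min(ratio_low, margin_low)       # rule 1: hand off now
    fire_stay    = min(ratio_ok, margin_high)       # rule 2: stay
    total = fire_handoff + fire_stay
    return fire_handoff / total if total else 0.0

print(handoff_degree(0.35, 2.0))   # corner-like drop, small margin -> 1.0
```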