Search results for: Adaptive Neural Controller
59 Parametric Analysis and Optimal Design of Functionally Graded Plates Using Particle Swarm Optimization Algorithm and a Hybrid Meshless Method
Authors: Foad Nazari, Seyed Mahmood Hosseini, Mohammad Hossein Abolbashari, Mohammad Hassan Abolbashari
Abstract:
The present study is concerned with the optimal design of functionally graded plates using the particle swarm optimization (PSO) algorithm. The meshless local Petrov-Galerkin (MLPG) method is employed to obtain the natural frequencies of the functionally graded (FG) plate. The effects of two parameters, the thickness-to-height ratio and the volume fraction index, on the natural frequencies and the total mass of the plate are studied using the MLPG results. The first natural frequency of the plate, for conditions where MLPG data are not available, is then predicted by an artificial neural network (ANN) trained with the back-error propagation (BEP) technique. The ANN results show that the predicted data are in good agreement with the actual ones. To simultaneously maximize the first natural frequency and minimize the mass of the FG plate, the weighted sum optimization approach and the PSO algorithm are used. The proposed optimization process can provide designers of FG plates with useful data.
Keywords: Optimal design, natural frequency, FG plate, hybrid meshless method, MLPG method, ANN approach, particle swarm optimization.
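A minimal, hypothetical sketch of the optimization stage described in this abstract: a weighted-sum objective combining a surrogate first natural frequency and plate mass, minimized with a basic particle swarm loop. The surrogate functions, bounds and constants below are placeholders, not the trained ANN or MLPG results from the study.

```python
import numpy as np

def first_frequency(x):            # stand-in for the ANN surrogate: f(thickness ratio, vf index)
    return 100.0 + 40.0 * x[0] - 15.0 * x[1]

def plate_mass(x):                 # stand-in for the mass model
    return 5.0 + 20.0 * x[0] + 3.0 * x[1]

def objective(x, w=0.5):           # weighted sum: maximize frequency, minimize mass
    return -w * first_frequency(x) + (1.0 - w) * plate_mass(x)

rng = np.random.default_rng(0)
lo, hi = np.array([0.05, 0.0]), np.array([0.2, 10.0])      # assumed design-variable bounds
pos = rng.uniform(lo, hi, size=(30, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()]

for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()]

print("optimal design (thickness ratio, volume fraction index):", gbest)
```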
58 The Mouth and Gastrointestinal Tract of the African Lung Fish Protopterus annectens in River Niger at Agenebode, Nigeria
Authors: Marian Agbugui
Abstract:
The West African lungfishes are rich in protein and serve as an important source of food for man. The kind of food ingested by this group of fishes depends on the alimentary canal as well as the fish’s digestive processes, which provide suitable modifications for maximum utilization of the food taken. A study of the alimentary canal of P. annectens will provide the best information on the anatomy and histology of the fish. Samples of P. annectens were dissected to reveal the liver, pancreas and entire gut wall. Digital pictures of the mouth, jaws and the gastrointestinal tract (GIT) were taken. The entire gut was identified, sectioned and micrographed. P. annectens was observed to possess a terminal mouth that opens up to 10% of its total body length, an adaptive feature that enables the fish to swallow its prey whole. Its dentition is made up of incisor-like, scissor-shaped teeth which also help to firmly grip, seize and tear through the skin of prey before swallowing. A short, straight and longitudinal GIT was observed in P. annectens, which is known to be a common feature in lungfishes, though it is thought to be a primitive characteristic similar to the lamprey. The oesophagus is short and distensible, similar to that of other predatory and carnivorous species. Food is temporarily stored in the stomach before it is passed down into the intestine. A pyloric aperture is seen at the end of the double-folded pyloric valve, leading into an intestine that makes up 75% of the whole GIT. The intestine begins at the posterior end of the pyloric aperture, winds down in six coils through its whole length and ends at the cloaca. From this study it is concluded that P. annectens possesses a composite GIT with organs similar to those of other lungfishes; it is a detritivore with carnivorous abilities.
Keywords: Gastrointestinal tract, incisors scissor-like teeth, intestine, mucus, Protopterus annectens, serosa.
57 Comparison of Different Techniques to Estimate Surface Soil Moisture
Authors: S. Farid F. Mojtahedi, Ali Khosravi, Behnaz Naeimian, S. Adel A. Hosseini
Abstract:
Land subsidence is a gradual settling or sudden sinking of the land surface caused by changes that take place underground. There are different causes of land subsidence; most notably, groundwater overdraft and severe weather conditions. Subsidence of the land surface due to groundwater overdraft is caused by an increase in the intergranular pressure in unconsolidated aquifers, which results in a loss of buoyancy of solid particles in the zone dewatered by the falling water table and, accordingly, compaction of the aquifer. On the other hand, exploitation of underground water may result in significant changes in the degree of saturation of soil layers above the water table, increasing the effective stress in these layers and causing considerable soil settlements. This study focuses on the estimation of soil moisture at the surface using different methods. Specifically, different methods for the estimation of moisture content at the soil surface, an important term in solving Richards’ equation and estimating the soil moisture profile, are presented, and their results are discussed through comparison with field measurements obtained from the Yanco1 station in south-eastern Australia. Surface soil moisture is not easy to measure at the spatial scale of a catchment: due to the heterogeneity of soil type, land use, and topography, it may change considerably in space and time.
Keywords: Artificial neural network, empirical method, remote sensing, surface soil moisture, unsaturated soil.
56 Design, Fabrication and Evaluation of MR Damper
Authors: A. Ashfak, A. Saheed, K. K. Abdul Rasheed, J. Abdul Jaleel
Abstract:
This paper presents the design, fabrication and evaluation of a magneto-rheological damper. Semi-active control devices have received significant attention in recent years because they offer the adaptability of active control devices without requiring the associated large power sources. Magneto-rheological (MR) dampers are semi-active control devices that use MR fluids to produce controllable damping. They potentially offer highly reliable operation and can be viewed as fail-safe in that they become passive dampers if the control hardware malfunctions. The advantages of MR dampers over conventional dampers are that they are simple in construction, offer a compromise between high frequency isolation and natural frequency isolation, provide semi-active control, use very little power, have a very quick response, have few moving parts, allow relaxed tolerances and interface directly with electronics. Magneto-rheological (MR) fluids are controllable fluids belonging to the class of active materials that have the unique ability to change their dynamic yield stress when acted upon by an electric or magnetic field, while maintaining a relatively constant viscosity. This property can be utilized in an MR damper, where the damping force is changed by changing the rheological properties of the fluid magnetically. MR fluids have a higher dynamic yield stress than electro-rheological (ER) fluids and a broader operational temperature range. The objective of this paper was to study the application of an MR damper to vibration control, design a vibration damper using MR fluids, and test and evaluate its performance. The rheology and the theory behind MR fluids and their use in vibration control were studied. Then an MR vibration damper suitable for vehicle suspension was designed and fabricated using the MR fluid. The MR damper was tested using a dynamic test rig and the results were obtained in the form of force vs. velocity and force vs. displacement plots. The results were encouraging and greatly inspire further research on the topic.
Keywords: Magneto-rheological fluid, MR damper, semi-active controller, electro-rheological fluid.
55 Reading against the Grain: Transcodifying Stimulus Meaning
Authors: Aba-Carina Pârlog
Abstract:
The paper shows that on transferring sense from the SL to the TL, the translator’s reading against the grain determines the creation of a faulty pattern of rendering the original meaning in the receiving culture which reflects the use of misleading transformative codes. In this case, the translator is a writer per se who decides what goes in and out of the book, how the style is to be ciphered and what elements of ideology are to be highlighted. The paper also proves that figurative language must not be flattened for the sake of clarity or naturalness. The missing figurative elements make the translated text less interesting, less challenging and less vivid which reflects poorly on the writer. There is a close connection between style and the writer’s person. If the writer’s style is very much altered in a translation, the translation is useless as the original writer and his / her imaginative world can no longer be discovered. The purpose of the paper is to prove that adaptation is a dangerous tool which leads to variants that sometimes reflect the original less than the reader would wish to. It contradicts the very essence of the process of translation which is that of making an original work available in a foreign language. If the adaptive transformative codes are so flexible that they encourage the translator to repeatedly leave out parts of the original work, then a subversive pattern emerges which changes the entire book. In conclusion, as a result of using adaptation, manipulative or subversive effects are created in the translated work. This is generally achieved by adding new words or connotations, creating new figures of speech or using explicitations. The additional meanings of the original work are neglected and the translator creates new meanings, implications, emphases and contexts. Again s/he turns into a new author who enjoys the freedom of expressing his / her own ideas without the constraints of the original text. Reading against the grain is unadvisable during the process of translation and consequently, following personal common sense becomes essential in the field of translation as well as everywhere else, so that translation should not become a source of fantasy.
Keywords: Speculative aesthetics, substance of expression, transformative code, translation.
54 Fuzzy Wavelet Packet based Feature Extraction Method for Multifunction Myoelectric Control
Authors: Rami N. Khushaba, Adel Al-Jumaily
Abstract:
The myoelectric signal (MES) is one of the biosignals utilized in helping humans to control equipment. Recent approaches to MES classification for controlling prosthetic devices using pattern recognition techniques have revealed two problems: first, the classification performance of the system starts degrading when the number of motion classes to be classified increases; second, the additional, complicated methods used to solve the first problem increase the computational cost of a multifunction myoelectric control system. In an effort to solve these problems and to achieve a feasible design for real-time implementation with high overall accuracy, this paper presents a new method for feature extraction in MES recognition systems. The method works by extracting features using the Wavelet Packet Transform (WPT) applied to the MES from multiple channels, and then employs the fuzzy c-means (FCM) algorithm to generate a measure that judges the suitability of features for classification. Finally, Principal Component Analysis (PCA) is utilized to reduce the size of the data before computing the classification accuracy with a multilayer perceptron neural network. The proposed system produces powerful classification results (99% accuracy) by using only a small portion of the original feature set.
Keywords: Biomedical signal processing, data mining and information extraction, machine learning, rehabilitation.
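The pipeline described above (WPT features, an FCM-based suitability measure, PCA, then a multilayer perceptron) is not published as code here; the following is a minimal Python sketch of the WPT-energy/PCA/MLP portion only, with the FCM feature-scoring step omitted and all data, channel counts and network sizes chosen purely for illustration.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def wpt_energy_features(window, wavelet="db4", level=3):
    """Energy of each terminal wavelet-packet node for one single-channel window."""
    wp = pywt.WaveletPacket(data=window, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    return np.array([np.sum(node.data ** 2) for node in nodes])

def extract_features(windows):
    """windows: (n_samples, n_channels, n_points) -> feature matrix."""
    return np.array([
        np.concatenate([wpt_energy_features(ch) for ch in sample])
        for sample in windows
    ])

# Placeholder segmented multi-channel MES windows and motion-class labels.
X_raw = np.random.randn(200, 4, 256)
y = np.random.randint(0, 6, size=200)

X = extract_features(X_raw)
clf = make_pipeline(PCA(n_components=10),
                    MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```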
53 Assisted Prediction of Hypertension Based on Heart Rate Variability and Improved Residual Networks
Authors: Yong Zhao, Jian He, Cheng Zhang
Abstract:
Cardiovascular disease resulting from hypertension poses a significant threat to human health, and early detection of hypertension can potentially save numerous lives. Traditional methods for detecting hypertension require specialized equipment and are often incapable of capturing continuous blood pressure fluctuations. To address this issue, this study starts by analyzing the principle of heart rate variability (HRV) and introduces the use of sliding-window and power spectral density (PSD) techniques to analyze both temporal and frequency domain features of HRV. Subsequently, a hypertension prediction network that relies on HRV is proposed, combining ResNet, attention mechanisms, and a multi-layer perceptron. The network leverages a modified ResNet18 to extract frequency domain features, while employing an attention mechanism to integrate temporal domain features, thus enabling auxiliary hypertension prediction through the multi-layer perceptron. The proposed network is trained and tested using the publicly available SHAREE dataset from PhysioNet. The results demonstrate that the network achieves a high prediction accuracy of 92.06% for hypertension, surpassing traditional models such as K-Nearest Neighbor (KNN), Bayes, logistic regression, and a traditional Convolutional Neural Network (CNN).
Keywords: Feature extraction, heart rate variability, hypertension, residual networks.
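An illustrative sketch (not this paper's code) of the HRV feature-extraction step it describes: sliding windows over an RR-interval series with simple time-domain statistics and Welch PSD band powers. The window length, step, sampling assumption and synthetic data are all placeholders.

```python
import numpy as np
from scipy.signal import welch

def hrv_features(rr_ms, win=300, step=150, fs=4.0):
    """rr_ms: RR intervals in milliseconds; windows are taken over samples, not seconds."""
    feats = []
    for start in range(0, len(rr_ms) - win + 1, step):
        seg = np.asarray(rr_ms[start:start + win], dtype=float)
        sdnn = seg.std()                                   # time-domain feature
        rmssd = np.sqrt(np.mean(np.diff(seg) ** 2))        # time-domain feature
        # Simplification: treat the RR series as evenly sampled at fs Hz for the PSD.
        f, pxx = welch(seg - seg.mean(), fs=fs, nperseg=min(256, len(seg)))
        lf_band = (f >= 0.04) & (f < 0.15)
        hf_band = (f >= 0.15) & (f < 0.4)
        lf = np.trapz(pxx[lf_band], f[lf_band])
        hf = np.trapz(pxx[hf_band], f[hf_band])
        feats.append([seg.mean(), sdnn, rmssd, lf, hf, lf / (hf + 1e-9)])
    return np.array(feats)

rr = 800 + 50 * np.random.randn(2000)   # synthetic RR intervals in ms
print(hrv_features(rr).shape)
```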
52 Spatiotemporal Analysis of Visual Evoked Responses Using Dense EEG
Authors: Rima Hleiss, Elie Bitar, Mahmoud Hassan, Mohamad Khalil
Abstract:
A comprehensive study of object recognition in the human brain requires combining both spatial and temporal analysis of brain activity. Here, we are mainly interested in three issues: the time perception of visual objects, the ability to discriminate between two particular categories (objects vs. animals), and the possibility of identifying a particular spatial representation of visual objects. Our experiment consisted of acquiring dense electroencephalographic (EEG) signals during a picture-naming task comprising a set of object and animal images. These EEG responses were recorded from nine participants. In order to determine the time perception of the presented visual stimulus, we analyzed the Event Related Potentials (ERPs) derived from the recorded EEG signals. The analysis of these signals showed that the brain perceives animals and objects at different time instants. Concerning the discrimination of the two categories, a support vector machine (SVM) was applied to the instantaneous EEG (excellent temporal resolution: on the order of a millisecond) to categorize the visual stimuli into two different classes. The spatial differences between the evoked responses of the two categories were also investigated. The results showed a variation of the neural activity with the properties of the visual input. The results also showed the existence of a spatial pattern of electrodes over particular regions of the scalp corresponding to their responses to the visual inputs.
Keywords: Brain activity, dense EEG, evoked responses, spatiotemporal analysis, SVM, perception.
51 Hand Gesture Interpretation Using Sensing Glove Integrated with Machine Learning Algorithms
Authors: Aqsa Ali, Aleem Mushtaq, Attaullah Memon, Monna
Abstract:
In this paper, we present a low-cost design for a smart glove that can perform sign language recognition to assist speech-impaired people. Specifically, we have designed and developed an Assistive Hand Gesture Interpreter that recognizes hand movements relevant to American Sign Language (ASL) and translates them into text for display on a Thin-Film-Transistor Liquid Crystal Display (TFT LCD) screen as well as into synthetic speech. Linear Bayes classifiers and multilayer neural networks have been used to classify 11-element feature vectors obtained from the sensors on the glove into one of the 27 ASL alphabets and a predefined gesture for space. Three types of features are used: bending, using six bend sensors; orientation in three dimensions, using accelerometers; and contacts at vital points, using contact sensors. To gauge the performance of the presented design, the training database was prepared using five volunteers. The accuracy of the current version on the prepared dataset was found to be up to 99.3% for the target user. The solution combines electronics, e-textile technology, sensor technology, embedded systems and machine learning techniques to build a low-cost wearable glove that is scrupulous, elegant and portable.
Keywords: American Sign Language, assistive hand gesture interpreter, human-machine interface, machine learning, sensing glove.
50 Isolation and Classification of Red Blood Cells in Anemic Microscopic Images
Authors: Jameela Ali Alkrimi, Loay E. George, Azizah Suliman, Abdul Rahim Ahmad, Karim Al-Jashamy
Abstract:
Red blood cells (RBCs) are among the most commonly and intensively studied types of blood cells in cell biology. Anemia, a lack of RBCs, is characterized by a hemoglobin level below the normal level. In this study, an image processing based methodology was developed to localize and extract RBCs from microscopic images. A machine learning approach is also adopted to classify the localized anemic RBC images. Several textural and geometrical features are calculated for each extracted RBC. The training set of features was analyzed using principal component analysis (PCA). With the proposed method, RBCs were isolated in 4.3 seconds from an image containing 18 to 27 cells. The reasons for using PCA are its low computational complexity and its suitability for finding the most discriminating features, which can lead to accurate classification decisions. Our classification algorithms yielded accuracy rates of 100%, 99.99%, and 96.50% for the K-nearest neighbor (K-NN) algorithm, the support vector machine (SVM), and a radial basis function neural network (RBFNN), respectively. Classification was evaluated using sensitivity, specificity, and kappa statistical parameters. In conclusion, the classification results were obtained within a short time period, and the results improved when PCA was used.
Keywords: Red blood cells, pre-processing image algorithms, classification algorithms, principal component analysis PCA, confusion matrix, kappa statistical parameters, ROC.
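A minimal sketch (not the authors' code) of the classification stage outlined above: PCA on texture/geometry features followed by K-NN, evaluated with a confusion matrix and Cohen's kappa. The feature values and class labels below are random placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score

X = np.random.rand(300, 24)            # 24 textural/geometrical features per cell (assumed count)
y = np.random.randint(0, 2, size=300)  # 0 = normal RBC, 1 = anemic RBC

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pca = PCA(n_components=8).fit(X_tr)
knn = KNeighborsClassifier(n_neighbors=5).fit(pca.transform(X_tr), y_tr)
y_hat = knn.predict(pca.transform(X_te))

print(confusion_matrix(y_te, y_hat))
print("kappa:", cohen_kappa_score(y_te, y_hat))
```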
49 Milling Simulations with a 3-DOF Flexible Planar Robot
Authors: Hoai Nam Huynh, Edouard Rivière-Lorphèvre, Olivier Verlinden
Abstract:
Manufacturing technologies are becoming continuously more diversified over the years. The increasing use of robots for various applications such as assembling, painting and welding has also affected the field of machining. Machining robots can deal with larger workspaces than conventional machine tools at a lower cost and thus represent a very promising alternative for machining applications. Furthermore, their inherent structure ensures them a great flexibility of motion to reach any location on the workpiece with the desired orientation. Nevertheless, machining robots suffer from a lack of stiffness at their joints, restricting their use to applications involving low cutting forces, especially finishing operations. Vibratory instabilities may also occur while machining and deteriorate the precision, leading to scrap parts. Some researchers are therefore concerned with the identification of optimal parameters in robotic machining. This paper continues the development of a virtual robotic machining simulator in order to find optimized cutting parameters in terms of, for example, depth of cut or feed per tooth. The simulation environment combines an in-house milling routine (DyStaMill), which computes cutting forces and material removal, with an in-house multibody library (EasyDyn), which is used to build a dynamic model of a 3-DOF planar robot with flexible links. The position of the robot end-effector subjected to milling forces is controlled through an inverse kinematics scheme while the positions of its joints are controlled separately. Each joint is actuated by a servomotor whose transfer function has been computed in order to tune the corresponding controller. The output results feature the evolution of the cutting forces, with and without a deformable robot structure, and the tracking errors of the end-effector. Illustrations of the resulting machined surfaces are also presented. The consideration of link flexibility has highlighted an increase in the magnitude of the cutting forces. This proof of concept aims to enrich the database of results in robotic machining for potential improvements in production.
Keywords: Control, machining, multibody, robotic, simulation.
48 Locating Center Points for Radial Basis Function Networks Using Instance Reduction Techniques
Authors: Rana Yousef, Khalil el Hindi
Abstract:
The behavior of Radial Basis Function (RBF) networks greatly depends on how the center points of the basis functions are selected. In this work we investigate the use of instance reduction techniques, originally developed to reduce the storage requirements of instance-based learners, for this purpose. Five instance-based reduction techniques were used to determine the set of center points, and RBF networks were trained using these sets of centers. The performance of the RBF networks is studied in terms of classification accuracy and training time. The results obtained were compared with two other radial basis function networks: RBF networks that use all instances of the training set as center points (RBF-ALL) and Probabilistic Neural Networks (PNN). The former achieves high classification accuracies and the latter requires less training time. Results showed that RBF networks trained using sets of centers located by noise-filtering techniques (ALLKNN and ENN), rather than pure reduction techniques, produce the best results in terms of classification accuracy. These networks require less training time than RBF-ALL and achieve higher classification accuracy than PNN. Thus, using ALLKNN and ENN to select center points gives a better combination of classification accuracy and training time. Our experiments also show that using the reduced sets to train the networks is beneficial especially in the presence of noise in the original training sets.
Keywords: Radial basis function networks, Instance-based reduction, PNN.
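A hedged sketch (not the paper's implementation) of the ENN noise-filtering idea it relies on: Wilson's Edited Nearest Neighbour keeps only instances whose class agrees with the majority of their k neighbours, and the surviving instances serve as RBF center points. The dataset and k are illustrative.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def enn_filter(X, y, k=3):
    """Return indices of instances retained by ENN noise filtering."""
    keep = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i                  # leave the instance itself out
        knn = KNeighborsClassifier(n_neighbors=k).fit(X[mask], y[mask])
        if knn.predict(X[i:i + 1])[0] == y[i]:
            keep.append(i)
    return np.array(keep)

X = np.random.rand(150, 4)
y = np.random.randint(0, 3, size=150)
centers = X[enn_filter(X, y)]          # retained instances become RBF basis-function centers
print(len(centers), "centers selected out of", len(X))
```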
47 Simulation of Solar Assisted Absorption Cooling and Electricity Generation along with Thermal Storage
Authors: Faezeh Mosallat, Eric L. Bibeau, Tarek El Mekkawy
Abstract:
Parabolic solar trough systems have seen limited deployment in cold northern climates as they are more suitable for electricity production in southern latitudes. A numerical dynamic model is developed to simulate troughs installed in cold climates and validated using a parabolic solar trough facility in Winnipeg. The model is developed in Simulink and is utilized to simulate a trigeneration system for heating, cooling and electricity generation in remote northern communities. The main objective of this simulation is to obtain operational data of solar troughs in cold climates and use the model to determine ways to improve the economics and address cold weather issues. In this paper the validated Simulink model is applied to simulate a solar assisted absorption cooling system along with electricity generation using an Organic Rankine Cycle (ORC) and thermal storage. A control strategy is employed to distribute the heated oil from the solar collectors among the above three systems considering the temperature requirements. This modelling provides dynamic performance results using measured meteorological data recorded every minute at the solar facility location. The purpose of this modeling approach is to accurately predict system performance at each time step considering the solar radiation fluctuations due to passing clouds. Optimization of the controller at cold temperatures is another goal of the simulation, for example to minimize heat losses in winter when energy demand is high and solar resources are low. The solar absorption cooling system is modeled to use the heat generated by the solar trough system and provide cooling in summer for a greenhouse located next to the solar field. The results of the simulation are presented for a summer day in Winnipeg and include a comparison of performance parameters of the absorption cooling and ORC systems at different heat transfer fluid (HTF) temperatures.
Keywords: Absorption cooling, parabolic solar trough, remote community, organic Rankine cycle.
46 Design of an Intelligent Location Identification Scheme Based On LANDMARC and BPNs
Authors: S. Chaisit, H.Y. Kung, N.T. Phuong
Abstract:
Radio frequency identification (RFID) applications have grown rapidly in many industries, especially in indoor location identification. The advantage of using received signal strength indicator (RSSI) values as an indoor location measurement method is that it is a cost-effective approach that does not require installing extra hardware. Because the accuracy of many positioning schemes using RSSI values is limited by interference factors and the environment, it is challenging to design RFID location techniques based on an integrated positioning algorithm. This study proposes a location estimation approach and analyzes a scheme relying on RSSI values to minimize location errors. In addition, this paper examines different factors that affect location accuracy by integrating the backpropagation neural network (BPN) with the LANDMARC algorithm in a training phase and an online phase. First, the training phase computes coordinates obtained from the LANDMARC algorithm, which uses RSSI values and the real coordinates of reference tags as training data for constructing an appropriate BPN architecture and training length. Second, in the online phase, the LANDMARC algorithm calculates the coordinates of tracking tags, which are then used as BPN inputs to obtain location estimates. The results show that the proposed scheme can estimate locations more accurately compared to LANDMARC without extra devices.
Keywords: BPNs, indoor location, location estimation, intelligent location identification.
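An illustrative sketch of the two-phase idea described above (not the paper's code): a LANDMARC-style k-nearest-reference estimate from RSSI vectors, whose output coordinates feed a small back-propagation network trained on reference-tag ground truth. All readings, coordinates and network sizes are invented.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def landmarc_estimate(rssi_track, rssi_refs, ref_xy, k=4):
    """rssi_track: (n_readers,), rssi_refs: (n_refs, n_readers), ref_xy: (n_refs, 2)."""
    e = np.linalg.norm(rssi_refs - rssi_track, axis=1)       # distances in RSSI space
    idx = np.argsort(e)[:k]
    w = 1.0 / (e[idx] ** 2 + 1e-9)
    w /= w.sum()
    return w @ ref_xy[idx]                                   # weighted coordinate estimate

# Training phase (assumed data): LANDMARC outputs for noisy reference readings vs. true positions.
rssi_refs = np.random.rand(16, 4) * -60
ref_xy = np.random.rand(16, 2) * 10
coarse = np.array([landmarc_estimate(r + np.random.randn(4), rssi_refs, ref_xy)
                   for r in rssi_refs])                      # noise simulates separate readings
bpn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000).fit(coarse, ref_xy)

# Online phase: refine a tracking tag's LANDMARC estimate with the trained BPN.
track = np.random.rand(4) * -60
print(bpn.predict(landmarc_estimate(track, rssi_refs, ref_xy).reshape(1, -1)))
```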
45 Unstructured-Data Content Search Based on Optimized EEG Signal Processing and Multi-Objective Feature Extraction
Authors: Qais M. Yousef, Yasmeen A. Alshaer
Abstract:
Over the last few years, the amount of data available around the globe has increased rapidly. This has led to the emergence of concepts such as big data and the Internet of Things, which provide a suitable solution for the availability of data all over the world. However, managing this massive amount of data remains a challenge due to the large variety of data types and their distribution. Therefore, locating the required file, particularly on the first trial, has become a difficult task due to the large similarity of names for different files distributed on the web. Consequently, the accuracy and speed of search have been negatively affected. This work presents a method that uses electroencephalography signals to locate files based on their contents. Building on the concept of natural mind-wave processing, this work analyzes the mind-wave signals of different people, extracting their most appropriate features using a multi-objective metaheuristic algorithm, and then classifying them using an artificial neural network to distinguish among files with similar names. The aim of this work is to provide the ability to find files based on their contents using human thoughts only. Implementing this approach and testing it on real people proved its ability to find the desired files accurately within a noticeably shorter time and retrieve them as the first choice for the user.
Keywords: Artificial intelligence, data contents search, human active memory, mind wave, multi-objective optimization.
44 Customer Need Type Classification Model using Data Mining Techniques for Recommender Systems
Authors: Kyoung-jae Kim
Abstract:
Recommender systems are usually regarded as an important marketing tool in e-commerce. They use important information about users to facilitate accurate recommendation. This information includes user context such as location, time and interest for the personalization of mobile users. We can easily collect information about location and time because mobile devices communicate with the base station of the service provider. However, information about user interest cannot be easily collected because user interest cannot be captured automatically without the user's approval process. User interest is usually represented as a need. In this study, we classify needs into two types according to prior research. This study investigates the usefulness of data mining techniques for classifying user need type for recommendation systems. We employ several data mining techniques including artificial neural networks, decision trees, case-based reasoning, and multivariate discriminant analysis. Experimental results show that the CHAID algorithm outperforms the other models in classifying user need type. This study performs McNemar's test to examine the statistical significance of the differences in classification results. The results of McNemar's test also show that CHAID performs better than the other models with statistical significance.
Keywords: Customer need type, data mining techniques, recommender system, personalization, mobile user.
43 Flow Discharge Determination in Straight Compound Channels Using ANNs
Authors: A. Zahiri, A. A. Dehghani
Abstract:
Although many researchers have studied the flow hydraulics of compound channels, there are still many complicated problems in the determination of their flow rating curves. Many different methods have been presented for these channels, but extending them to all types of compound channels with different geometrical and hydraulic conditions is certainly difficult. In this study, with the aid of nearly 400 laboratory and field data sets of geometry and flow rating curves from 30 different straight compound sections, and using artificial neural networks (ANNs), flow discharge in compound channels was estimated. Thirteen dimensionless input variables, including relative depth, relative roughness, relative width, aspect ratio, bed slope, main channel side slopes, flood plain side slopes and berm inclination, and one output variable (flow discharge) were used in the ANNs. Comparison of the ANN model and the traditional method (divided channel method, DCM) shows the high accuracy of the ANN model results. The results of the sensitivity analysis showed that relative depth, with a 47.6 percent contribution, is the most effective input parameter for flow discharge prediction. Relative width and relative roughness have 19.3 and 12.2 percent importance, respectively. On the other hand, the shape parameter, main channel side slopes and flood plain side slopes, with 2.1, 3.8 and 3.8 percent contributions, have the least importance.
Keywords: ANN model, compound channels, divided channel method (DCM), flow rating curve.
42 ECG Based Reliable User Identification Using Deep Learning
Authors: R. N. Begum, Ambalika Sharma, G. K. Singh
Abstract:
Identity theft has serious ramifications beyond data and personal information loss. This necessitates the implementation of robust and efficient user identification systems. Therefore, automatic biometric recognition systems are the need of the hour, and electrocardiogram (ECG)-based systems are unquestionably the best choice due to their appealing inherent characteristics. Convolutional Neural Networks (CNNs) are the recent state-of-the-art techniques for ECG-based user identification systems. However, the results obtained are significantly below standards, and the situation worsens as the number of users and types of heartbeats in the dataset grows. As a result, this study proposes a highly accurate and resilient ECG-based person identification system using the dense learning framework of CNNs. The proposed research explicitly explores the caliber of dense CNNs in the field of ECG-based human recognition. The study tests four different configurations of dense CNNs which are trained on a dataset of recordings collected from eight popular ECG databases. With a highest False Acceptance Rate (FAR) of 0.04% and a highest False Rejection Rate (FRR) of 5%, the best-performing network achieved an identification accuracy of 99.94%. The best network is also tested with various train/test split ratios. The findings show that DenseNets are not only extremely reliable, but also highly efficient. Thus, they might also be implemented in real-time ECG-based human recognition systems.
Keywords: Biometrics, dense networks, identification rate, train/test split ratio.
41 Multi-Level Air Quality Classification in China Using Information Gain and Support Vector Machine
Authors: Bingchun Liu, Pei-Chann Chang, Natasha Huang, Dun Li
Abstract:
Machine learning and data mining are two important tools for extracting useful information and knowledge from large datasets. In machine learning, classification is a widely used technique to predict qualitative variables and is generally preferred over regression from an operational point of view. Due to the enormous increase in air pollution in various countries, especially China, air quality classification has become one of the most important topics in air quality research and modelling. This study aims at introducing a hybrid classification model based on information theory and the Support Vector Machine (SVM), using the air quality data of four cities in China, namely Beijing, Guangzhou, Shanghai and Tianjin, from Jan 1, 2014 to April 30, 2016. China's Ministry of Environmental Protection has classified daily air quality into six levels, namely Serious Pollution, Severe Pollution, Moderate Pollution, Light Pollution, Good and Excellent, based on their respective Air Quality Index (AQI) values. Using information theory, the information gain (IG) is calculated and feature selection is performed for both categorical features and continuous numeric features. Then the SVM machine learning algorithm is applied to the selected features with cross-validation. The final evaluation reveals that the IG and SVM hybrid model performs better than SVM (alone), Artificial Neural Network (ANN) and K-Nearest Neighbours (KNN) models in terms of accuracy as well as complexity.
Keywords: Machine learning, air quality classification, air quality index, information gain, support vector machine, cross-validation.
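A rough illustration of the hybrid scheme described above (assumptions throughout): information-gain-style feature selection via mutual information, followed by an SVM evaluated with cross-validation. The data, feature count and selection threshold are placeholders, not the study's AQI dataset.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X = np.random.rand(500, 12)               # 12 candidate meteorological/pollutant features
y = np.random.randint(0, 6, size=500)     # six air-quality levels

ig = mutual_info_classif(X, y, random_state=0)
selected = ig > np.median(ig)              # keep features with above-median information gain
scores = cross_val_score(SVC(kernel="rbf", C=10), X[:, selected], y, cv=5)
print("selected features:", np.flatnonzero(selected), "CV accuracy:", scores.mean())
```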
40 Text-independent Speaker Identification Based on MAP Channel Compensation and Pitch-dependent Features
Authors: Jiqing Han, Rongchun Gao
Abstract:
One major source of performance decline in speaker recognition systems is channel mismatch between training and testing. This paper focuses on improving the channel robustness of a speaker recognition system in two respects: the channel compensation technique and channel-robust features. The system is a text-independent speaker identification system based on two-stage recognition. Regarding the channel compensation technique, this paper applies the MAP (maximum a posteriori probability) channel compensation technique, which has been used in speech recognition, to the speaker recognition system. Regarding channel-robust features, this paper introduces pitch-dependent features and a pitch-dependent speaker model for the second-stage recognition. Based on the first-stage recognition of the test speech using a GMM (Gaussian Mixture Model), the system uses the GMM scores to decide whether the speech needs to be recognized again. If it does, the system selects a few speakers, from all of the speakers who participate in the first-stage recognition, for the second-stage recognition. For each selected speaker, the system obtains three pitch-dependent results from his pitch-dependent speaker model, and then uses an ANN (Artificial Neural Network) to combine the three pitch-dependent results and one GMM score into a fused result. The system makes the second-stage recognition based on these fused results. The experiments show that the correct rate of the two-stage recognition system based on the MAP channel compensation technique and pitch-dependent features is 41.7% better than the baseline system for the closed-set test.
Keywords: Channel compensation, channel robustness, MAP, speaker identification.
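A hedged sketch of the two-stage idea only (data, dimensions, thresholds and the pitch-dependent scores below are invented placeholders): per-speaker GMMs provide first-stage scores; for ambiguous trials a small neural network fuses the GMM score with pitch-dependent scores to produce the final decision.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neural_network import MLPClassifier

n_speakers, dim = 5, 13
train = {s: np.random.randn(300, dim) + s for s in range(n_speakers)}   # MFCC-like features
gmms = {s: GaussianMixture(n_components=4, random_state=0).fit(x) for s, x in train.items()}

def first_stage(utt):
    scores = np.array([gmms[s].score(utt) for s in range(n_speakers)])
    return scores, int(scores.argmax())

# Second stage: fuse [GMM score, 3 pitch-dependent scores] per candidate speaker.
fusion_X = np.random.randn(400, 4)                 # placeholder fused-score training vectors
fusion_y = np.random.randint(0, 2, size=400)       # 1 = target speaker, 0 = impostor
fuser = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000).fit(fusion_X, fusion_y)

utt = np.random.randn(200, dim) + 2
scores, best = first_stage(utt)
if scores.max() - np.sort(scores)[-2] < 0.5:       # ambiguous margin -> run second stage
    pitch_scores = np.random.randn(3)              # stand-in for pitch-dependent model outputs
    print(fuser.predict_proba(np.r_[scores[best], pitch_scores].reshape(1, -1)))
```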
39 Simulation Data Management Approach for Developing Adaptronic Systems – The W-Model Methodology
Authors: Roland S. Nattermann, Reiner Anderl
Abstract:
Existing process models for the development of mechatronic systems prescribe largely parallel work in the detailed development phase, carried out largely independently in the various disciplines involved. The approach for a new process model presented here extends existing models for the development of adaptronic systems. It is based on an intermediate integration step and an abstract model of the adaptronic system. Based on this system model, a simulation of the global system behavior, in response to external and internal factors or forces, is developed. For the intermediate integration a special data management system is used. According to the presented approach, this data management system has a number of functions that are not part of the "normal" PDM functionality. Therefore, a concept for a new data management system for the development of adaptronic systems is presented in this paper. This concept divides the functions into six layers. In the first layer a system model is created, which decomposes the adaptronic system according to its components and the various technical disciplines involved. Moreover, the parameters and properties of the system are modeled and linked together with the requirements and the system model. The modeled parameters and properties result in a network, which is analyzed in the second layer. From this analysis, necessary adjustments to individual components for specific manipulation of the system behavior can be determined. The third layer contains an automatic abstract simulation of the system behavior. This simulation is a precursor to the network analysis and serves as a filter. Through the network analysis and simulation, changes to system components are examined and necessary adjustments to other components are calculated. The other layers of the concept cover the automatic calculation of system reliability, the "normal" PDM functionality and the integration of discipline-specific data into the system model. A prototypical implementation of such a data management system, extended by automatic system development, is being realized using the data management system ENOVIA SmarTeam V5 and the simulation system MATLAB.
Keywords: Adaptronic, data management, LOEWE-Centre AdRIA.
38 Comparing Machine Learning Estimation of Fuel Consumption of Heavy-Duty Vehicles
Authors: Victor Bodell, Lukas Ekstrom, Somayeh Aghanavesi
Abstract:
Fuel consumption (FC) is one of the key factors in determining the expenses of operating a heavy-duty vehicle. A customer may therefore request an estimate of the FC of a desired vehicle. The modular design of heavy-duty vehicles allows their construction by specifying the building blocks, such as gear box, engine and chassis type. If the combination of building blocks is unprecedented, it is unfeasible to measure the FC, since this would first require the construction of the vehicle. This paper proposes a machine learning approach to predict FC. This study uses information on around 40,000 vehicles' specifications and operational environmental conditions, such as road slopes and driver profiles. All vehicles have diesel engines and a mileage of more than 20,000 km. The data is used to investigate the accuracy of the machine learning algorithms linear regression (LR), K-nearest neighbor (KNN) and artificial neural networks (ANN) in predicting fuel consumption for heavy-duty vehicles. The performance of the algorithms is evaluated by reporting the prediction error on both simulated data and operational measurements. The performance of the algorithms is compared using nested cross-validation and statistical hypothesis testing. The statistical evaluation procedure finds that ANNs have the lowest prediction error compared to LR and KNN in estimating fuel consumption on both simulated and operational data. The models have a mean relative prediction error of 0.3% on simulated data, and 4.2% on operational data.
Keywords: Artificial neural networks, fuel consumption, machine learning, regression, statistical tests.
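A sketch only (synthetic data, illustrative hyper-parameter grids) of the evaluation setup the abstract describes: comparing LR, KNN and an ANN regressor with nested cross-validation, plus a paired t-test on the outer-fold errors.

```python
import numpy as np
from scipy.stats import ttest_rel
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import GridSearchCV, cross_val_score, KFold

X = np.random.rand(400, 10)                                # vehicle spec + environment features
y = X @ np.random.rand(10) + 0.1 * np.random.randn(400)    # stand-in FC target

outer = KFold(n_splits=5, shuffle=True, random_state=0)
models = {
    "LR": LinearRegression(),
    "KNN": GridSearchCV(KNeighborsRegressor(), {"n_neighbors": [3, 5, 9]}, cv=3),
    "ANN": GridSearchCV(MLPRegressor(max_iter=5000),
                        {"hidden_layer_sizes": [(16,), (32,)]}, cv=3),
}
# Inner grid search runs inside each outer fold -> nested cross-validation.
errors = {name: -cross_val_score(m, X, y, cv=outer, scoring="neg_mean_absolute_error")
          for name, m in models.items()}
print({k: v.mean() for k, v in errors.items()})
print("ANN vs LR paired t-test:", ttest_rel(errors["ANN"], errors["LR"]))
```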
37 Financing Decision and Productivity Growth for the Venture Capital Industry Using High-Order Fuzzy Time Series
Authors: Shang-En Yu
Abstract:
Human society involves many uncertainties, such as forecasting the economic growth rate during a financial crisis. Since Song and Chissom introduced the concept of fuzzy time series in 1993, many scholars have used different fuzzy time series models to deal with such problems. Previous studies, however, usually do not consider the selection of relevant variables, and base the fuzzification process solely on subjective opinions for discretizing the fuzzy semantics, so they cannot objectively reflect the characteristics of the data set. In addition, when carrying out forecasts, the fuzzy rules are often treated as equally important, failing to consider the importance of each fuzzy rule. For these reasons, this study performs variable selection (factor selection) through a self-organizing map (SOM) and proposes a high-order weighted multivariate fuzzy time series model based on a fuzzy back-propagation neural network (Fuzzy-BPN), using the ordered weighted averaging (OWA) operator for weighted prediction. To verify the proposed method, the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) of the Taiwan Stock Exchange Corporation is used as the experimental forecast target in order to filter the appropriate variables. Finally, the proposed model is compared with models from other recent studies, and the results show that its predictive ability is further improved.
Keywords: Heterogeneity, residential mortgage loans, foreclosure.
36 A Vehicular Visual Tracking System Incorporating Global Positioning System
Authors: Hsien-Chou Liao, Yu-Shiang Wang
Abstract:
Surveillance systems are widely used in traffic monitoring. The deployment of cameras is moving toward a ubiquitous camera (UbiCam) environment. In our previous study, a novel service, called GPS-VT, was first proposed by incorporating global positioning system (GPS) and visual tracking techniques for the UbiCam environment. The first prototype is called GODTA (GPS-based Moving Object Detection and Tracking Approach). A moving person carrying a GPS-enabled mobile device can be tracked when he enters the field-of-view (FOV) of a camera according to his real-time GPS coordinate. In this paper, the GPS-VT service is applied to the tracking of vehicles. The moving speed of a vehicle is much faster than that of a person, which means that the time spent passing through the FOV is much shorter. Besides, the update interval of the GPS coordinate is once per second, which is asynchronous with the frame rate of the real-time image. This asynchrony is worsened by the network transmission delay. These factors are the main challenges in fulfilling the GPS-VT service for a vehicle. In order to overcome the influence of the above factors, a back-propagation neural network (BPNN) is used to predict the possible lane before the vehicle enters the FOV of a camera. Then, a template matching technique is used for the visual tracking of the target vehicle. The experimental results show that the target vehicle can be located and tracked successfully. The successful location rate of the implemented prototype is higher than that of the previous GODTA.
Keywords: Visual surveillance, visual tracking, global positioning system, intelligent transportation system.
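A minimal sketch of the template-matching step mentioned above (not the authors' code): normalized cross-correlation with OpenCV to locate a vehicle template inside a camera frame. The file names and acceptance threshold are placeholders.

```python
import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)           # current camera frame
template = cv2.imread("vehicle_template.png", cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)                   # best-match score and position
h, w = template.shape
if max_val > 0.7:                                                # illustrative acceptance threshold
    top_left = max_loc
    bottom_right = (top_left[0] + w, top_left[1] + h)
    cv2.rectangle(frame, top_left, bottom_right, 255, 2)
    print("vehicle located at", top_left, "score", max_val)
```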
35 On the Parameter Optimization of Fuzzy Inference Systems
Authors: Erika Martinez Ramirez, Rene V. Mayorga
Abstract:
Nowadays, more engineering systems are using some kind of Artificial Intelligence (AI) in the development of their processes. Some well-known AI techniques include artificial neural nets, fuzzy inference systems, and neuro-fuzzy inference systems, among others. Furthermore, many decision-making applications base their intelligent processes on fuzzy logic, due to the capability of Fuzzy Inference Systems (FIS) to deal with problems that are based on user knowledge and experience. Also, knowing that users have a wide variety of characteristics and generally provide uncertain data, this information can be used and properly processed by a FIS. To properly consider uncertainty and inexact system input values, FIS normally use membership functions (MF) that represent a degree of user satisfaction with certain conditions and/or constraints. In order to define the parameters of the MFs, knowledge from experts in the field is very important. This knowledge defines the MF shape used to process the user inputs, and through fuzzy reasoning and inference mechanisms, the FIS can provide an "appropriate" output. However, an important issue immediately arises: How can it be assured that the obtained output is the optimum solution? How can it be guaranteed that each MF has an optimum shape? A viable solution to these questions is the optimization of the MF parameters. In this paper a novel parameter optimization process is presented. The process for FIS parameter optimization consists of five simple steps that can be easily realized off-line. Here the proposed FIS parameter optimization process is demonstrated by its implementation in an intelligent interface section dealing with the on-line customization / personalization of internet portals applied to e-commerce.
Keywords: Artificial intelligence, fuzzy logic, fuzzy inference systems, nonlinear optimization.
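A hypothetical illustration of the general idea of MF parameter optimization (not the paper's five-step procedure): tuning the parameters of a triangular membership function so that a single-rule FIS output better matches a set of desired input/output pairs, using an off-line optimizer. All data and the single-rule assumption are invented.

```python
import numpy as np
from scipy.optimize import minimize

def tri_mf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

# Assumed single-input, single-rule system: the output is the membership degree of the input.
x_data = np.linspace(0, 10, 50)
y_target = tri_mf(x_data, 2.0, 5.0, 8.0) + 0.02 * np.random.randn(50)   # synthetic target

def loss(params):
    a, b, c = np.sort(params)                 # keep a <= b <= c
    return np.mean((tri_mf(x_data, a, b, c) - y_target) ** 2)

result = minimize(loss, x0=[1.0, 4.0, 9.0], method="Nelder-Mead")
print("optimized MF parameters (a, b, c):", np.sort(result.x))
```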
34 Offline Parameter Identification and State-of-Charge Estimation for Healthy and Aged Electric Vehicle Batteries Based on the Combined Model
Authors: Xiaowei Zhang, Min Xu, Saeid Habibi, Fengjun Yan, Ryan Ahmed
Abstract:
Recently, Electric Vehicles (EVs) have received extensive consideration since they offer a more sustainable and greener transportation alternative compared to fossil-fuel-propelled vehicles. Lithium-ion (Li-ion) batteries are increasingly being deployed in EVs because of their high energy density, high cell-level voltage, and low rate of self-discharge. Since Li-ion batteries represent the most expensive component in the EV powertrain, accurate monitoring and control strategies must be executed to ensure their prolonged lifespan. The Battery Management System (BMS) has to accurately estimate parameters such as the battery State-of-Charge (SOC), State-of-Health (SOH), and Remaining Useful Life (RUL). In order for the BMS to estimate these parameters, an accurate and control-oriented battery model has to work collaboratively with a robust state and parameter estimation strategy. Since battery physical parameters, such as the internal resistance and the diffusion coefficient, change depending on the battery state-of-life (SOL), the BMS has to be adaptive to accommodate this change. In this paper, an extensive battery aging study has been conducted over a 12-month period on 5.4 Ah, 3.7 V lithium polymer cells. Instead of using fixed charging/discharging aging cycles at a fixed C-rate, a set of real-world driving scenarios has been used to age the cells. The test has been interrupted at every 5% of capacity degradation by a set of reference performance tests to assess the battery degradation and track model parameters. As the battery ages, the combined model parameters are optimized and tracked in an offline mode over the entire battery lifespan. Based on the optimized model, a state and parameter estimation strategy based on the Extended Kalman Filter (EKF) and the relatively new Smooth Variable Structure Filter (SVSF) has been applied to estimate the SOC at various states of life.
Keywords: Lithium-Ion batteries, genetic algorithm optimization, battery aging test, and parameter identification.
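A hedged, simplified sketch of EKF-based SOC estimation with a combined-model-style voltage equation. The coefficients K0..K4 and R, the noise variances and the measurements below are invented placeholders, not the parameters identified in this study.

```python
import numpy as np

CAP_AS, DT = 5.4 * 3600.0, 1.0                     # cell capacity [A*s], sample time [s]
K0, K1, K2, K3, K4, R = 3.6, 0.01, 0.05, 0.03, 0.02, 0.05

def v_model(soc, i):
    """Combined-model terminal voltage as a function of SOC and current."""
    return K0 - R * i - K1 / soc - K2 * soc + K3 * np.log(soc) + K4 * np.log(1 - soc)

def dv_dsoc(soc):
    """Measurement Jacobian: derivative of the voltage model w.r.t. SOC."""
    return K1 / soc**2 - K2 + K3 / soc - K4 / (1 - soc)

soc_est, P = 0.9, 0.01                             # initial SOC estimate and covariance
Qn, Rn = 1e-7, 1e-3                                # process / measurement noise variances
for v_meas, i_meas in [(3.55, 2.0), (3.54, 2.0), (3.53, 2.0)]:   # toy (voltage, current) pairs
    # Predict: Coulomb counting as the state equation (state Jacobian = 1).
    soc_est = np.clip(soc_est - i_meas * DT / CAP_AS, 1e-3, 1 - 1e-3)
    P += Qn
    # Update with the measured terminal voltage.
    H = dv_dsoc(soc_est)
    K = P * H / (H * P * H + Rn)
    soc_est = np.clip(soc_est + K * (v_meas - v_model(soc_est, i_meas)), 1e-3, 1 - 1e-3)
    P *= (1 - K * H)
    print(f"SOC estimate: {soc_est:.4f}")
```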
33 Route Training in Mobile Robotics through System Identification
Authors: Roberto Iglesias, Theocharis Kyriacou, Ulrich Nehmzow, Steve Billings
Abstract:
Fundamental sensor-motor couplings form the backbone of most mobile robot control tasks, and often need to be implemented fast, efficiently and nevertheless reliably. Machine learning techniques are therefore often used to obtain the desired sensor-motor competences. In this paper we present an alternative to established machine learning methods such as artificial neural networks that is very fast, easy to implement, and has the distinct advantage that it generates transparent, analysable sensor-motor couplings: system identification through nonlinear polynomial mapping. This work, which is part of the RobotMODIC project at the universities of Essex and Sheffield, aims to develop a theoretical understanding of the interaction between the robot and its environment. One of the purposes of this research is to enable the principled design of robot control programs. As a first step towards this aim we model the behaviour of the robot, as it emerges from its interaction with the environment, with the NARMAX modelling method (Nonlinear Auto-Regressive Moving Average models with eXogenous inputs). This method produces explicit polynomial functions that can subsequently be analysed using established mathematical methods. In this paper we demonstrate the fidelity of the obtained NARMAX models in the challenging task of robot route learning; we present a set of experiments in which a Magellan Pro mobile robot was taught to follow four different routes, always using the same mechanism to obtain the required control law.
Keywords: Mobile robotics, system identification, non-linear modelling, NARMAX.
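An illustrative sketch of fitting a polynomial NARX-style model by least squares, as a simplified stand-in for the full NARMAX identification used in the paper; the data, lag orders and polynomial degree are invented.

```python
import numpy as np
from itertools import combinations_with_replacement

def polynomial_regressors(y, u, ny=2, nu=2, degree=2):
    """Build polynomial terms of lagged outputs y and inputs u up to the given degree."""
    n0 = max(ny, nu)
    lagged = np.column_stack([y[n0 - k:len(y) - k] for k in range(1, ny + 1)] +
                             [u[n0 - k:len(u) - k] for k in range(1, nu + 1)])
    cols = [np.ones(len(lagged))]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(lagged.shape[1]), d):
            cols.append(np.prod(lagged[:, idx], axis=1))
    return np.column_stack(cols), y[n0:]

u = np.random.randn(500)                               # e.g. a range-sensor reading (input)
y = np.zeros(500)                                      # e.g. a steering command (output)
for t in range(2, 500):                                # synthetic nonlinear system for the demo
    y[t] = 0.5 * y[t - 1] + 0.3 * u[t - 1] - 0.1 * y[t - 1] * u[t - 2]

Phi, target = polynomial_regressors(y, u)
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)   # transparent polynomial coefficients
print("identified coefficients:", np.round(theta, 3))
```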
32 Statistical Feature Extraction Method for Wood Species Recognition System
Authors: Mohd Iz'aan Paiz Bin Zamri, Anis Salwa Mohd Khairuddin, Norrima Mokhtar, Rubiyah Yusof
Abstract:
Effective statistical feature extraction and classification are important in image-based automatic inspection and analysis. An automatic wood species recognition system is designed to perform wood inspection at customs checkpoints to avoid the mislabeling of timber, which results in loss of income to the timber industry. The system focuses on analyzing the statistical pore properties of wood images. This paper proposes a fuzzy-based feature extractor which mimics the experts’ knowledge of wood texture to extract the properties of the pore distribution from the wood surface texture. The proposed feature extractor consists of two steps, namely pore extraction and fuzzy pore management. The total number of statistical features extracted from each wood image is 38. Then, a backpropagation neural network is used to classify the wood species based on the statistical features. A comprehensive set of experiments on a database composed of 5200 macroscopic images from 52 tropical wood species was used to evaluate the performance of the proposed feature extractor. The advantage of the proposed feature extraction technique is that it mimics the experts’ interpretation of wood texture, which allows human involvement when analyzing the wood texture. Experimental results show the efficiency of the proposed method.
Keywords: Classification, fuzzy, inspection system, image analysis.
31 Localizing and Recognizing Integral Pitches of Cheque Document Images
Authors: Bremananth R., Veerabadran C. S., Andy W. H. Khong
Abstract:
Automatic reading of handwritten cheques is a computationally complex process and plays an important role in financial risk management. Machine vision and learning provide a viable solution to this problem. Research effort has mostly been focused on recognizing diverse pitches of cheques and demand drafts with an identical outline. However, most of these methods employ template matching to localize the pitches, and such schemes could potentially fail when applied to different types of outline maintained by the bank. In this paper, the so-called outline problem is resolved by a cheque information tree (CIT), which generalizes the localizing method to extract active regions of entities. In addition, a weight-based density plot (WBDP) is performed to isolate text entities and read complete pitches. Recognition is based on texture features using neural classifiers. The legal amount is subsequently recognized using both texture and perceptual features. A post-processing phase is invoked to detect incorrect readings by means of a Type-2 grammar using a Turing machine. The performance of the proposed system was evaluated using cheques and demand drafts of 22 different banks. The test data consist of a collection of 1540 leafs obtained from 10 different account holders from each bank. Results show that this approach can easily be deployed without significant design amendments.
Keywords: Cheque reading, connectivity checking, text localization, texture analysis, Turing machine, signature verification.
30 Intelligent Assistive Methods for Diagnosis of Rheumatoid Arthritis Using Histogram Smoothing and Feature Extraction of Bone Images
Authors: SP. Chokkalingam, K. Komathy
Abstract:
Advances in the field of image processing envision a new era of evaluation techniques and application of procedures in various fields. One such field is the biomedical field, for the prognosis as well as the diagnosis of diseases. This plethora of methods, though it provides a wide range of options to select from, also creates confusion in selecting the apt process and in finding which one is more suitable. Our objective is to use a series of techniques on bone scans so as to detect the occurrence of rheumatoid arthritis (RA) as accurately as possible. Amongst other techniques existing in the field, our proposed system tends to be more effective as it depends on new methodologies that have been proved to be better and more consistent than others. Computer-aided diagnosis provides a more accurate and reliable rate of consistency that helps to improve the efficiency of the system. The image first undergoes histogram smoothing and specification, a morphing operation, boundary detection by an edge-following algorithm and finally image subtraction to determine the presence of rheumatoid arthritis in a more efficient and effective way. Using preprocessing, noise is removed from the images, and using segmentation, the region of interest is found; histogram smoothing is then applied to a specific portion of the images. Gray-level co-occurrence matrix (GLCM) features such as mean, median, energy, correlation and bone mineral density (BMD) are computed. After all the features are found, they are stored in a database. This dataset is trained with inflamed and non-inflamed values, and with the help of a neural network all new images are checked properly for their status; a rough set approach is implemented for further reduction.
Keywords: Computer Aided Diagnosis, Edge Detection, Histogram Smoothing, Rheumatoid Arthritis.
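A hedged example of GLCM-style texture features (mean, energy, correlation and similar) for a segmented bone-image region, using scikit-image. The image, GLCM distances and angles are placeholders, and the BMD-related measurements from the abstract are not reproduced here.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

roi = (np.random.rand(64, 64) * 255).astype(np.uint8)     # stand-in for a segmented bone ROI

glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2], levels=256,
                    symmetric=True, normed=True)
features = {
    "mean": roi.mean(),
    "median": np.median(roi),
    "energy": graycoprops(glcm, "energy").mean(),
    "correlation": graycoprops(glcm, "correlation").mean(),
    "contrast": graycoprops(glcm, "contrast").mean(),
}
print(features)   # feature vector to be stored in the database for classifier training
```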