Search results for: Number of neurons in the hidden layer.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4589

4589 The Impact of the Number of Neurons in the Hidden Layer on the Performance of MLP Neural Network: Application to the Fast Identification of Toxic Gases

Authors: Slimane Ouhmad, Abdellah Halimi

Abstract:

In this work, neural network methods of the MLP type were applied to a database from an array of six sensors for the detection of three toxic gases. The choice of the number of hidden layers and the weight values influences the convergence of the learning algorithm. We propose, in this article, a mathematical formula to determine the optimal number of hidden layers and good weight values, based on the method of back-propagation of errors. The results of this modeling improved the discrimination of these gases and optimized the computation time. The model presented here has proven to be an effective application for the fast identification of toxic gases.
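
The abstract does not reproduce the proposed formula, so the following is only a minimal sketch of how the hidden-layer size of such an MLP might be swept and compared, assuming a generic 6-sensor input and 3-gas output and scikit-learn's MLPClassifier; the candidate sizes are common heuristics, not the authors' formula, and the data is a placeholder.

```python
# Hypothetical sketch: sweeping hidden-layer sizes for a 6-sensor, 3-gas MLP.
# Not the authors' formula; the candidate sizes below are common heuristics only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))          # placeholder sensor-array readings
y = rng.integers(0, 3, size=300)       # placeholder gas labels (3 classes)

n_in, n_out = 6, 3
candidates = sorted({int(np.sqrt(n_in * n_out)),   # geometric-mean heuristic
                     (n_in + n_out) // 2,          # arithmetic-mean heuristic
                     2 * n_in + 1})                # Kolmogorov-style upper bound

for h in candidates:
    clf = MLPClassifier(hidden_layer_sizes=(h,), max_iter=2000, random_state=0)
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{h:3d} hidden neurons -> CV accuracy {score:.3f}")
```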

Keywords: Back-propagation, Computing time, Fast identification, MLP neural network, Number of neurons in the hidden layer.

Downloads: 2205
4588 Parameter Sensitivity Analysis of Artificial Neural Network for Predicting Water Turbidity

Authors: Chia-Ling Chang, Chung-Sheng Liao

Abstract:

The present study focuses on the parameters of an Artificial Neural Network (ANN). Sensitivity analysis is applied to assess the effect of the ANN parameters on the prediction of the turbidity of raw water in a water treatment plant. The results show that the transfer function of the hidden layer is a critical parameter of the ANN: when the transfer function changes, the reliability of the water turbidity prediction differs greatly. Moreover, the estimated water turbidity is less sensitive to the number of training iterations and the learning rate than to the number of neurons in the hidden layer. Therefore, it is important to select an appropriate transfer function and a suitable number of neurons in the hidden layer in the process of parameter training and validation.
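
A minimal sketch of the kind of sensitivity check described, varying the hidden-layer transfer function and the hidden-layer size of an MLP regressor and comparing held-out error; the dataset, feature dimensions, and parameter grid are placeholders, not the study's data.

```python
# Hypothetical sensitivity check: vary the hidden-layer transfer function and
# hidden-layer size, and compare prediction error on a held-out set.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))                     # placeholder raw-water features
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=400)  # placeholder turbidity target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for act in ("logistic", "tanh", "relu"):          # candidate transfer functions
    for n_hidden in (5, 10, 20):                  # candidate hidden-layer sizes
        model = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation=act,
                             max_iter=3000, random_state=0).fit(X_tr, y_tr)
        mse = mean_squared_error(y_te, model.predict(X_te))
        print(f"activation={act:8s} hidden={n_hidden:2d}  test MSE={mse:.4f}")
```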

Keywords: Artificial Neural Network (ANN), sensitivity analysis, turbidity.

Downloads: 2740
4587 Artificial Neural Network with Steepest Descent Backpropagation Training Algorithm for Modeling Inverse Kinematics of Manipulator

Authors: Thiang, Handry Khoswanto, Rendy Pangaldus

Abstract:

Inverse kinematics analysis plays an important role in developing a robot manipulator. However, it is not easy to derive the inverse kinematic equations of a robot manipulator, especially one with numerous degrees of freedom. This paper describes an application of an Artificial Neural Network for modeling the inverse kinematics equations of a robot manipulator. In this case, the robot has three degrees of freedom and was implemented for drilling a printed circuit board. The artificial neural network architecture used for modeling is a multilayer perceptron network with the steepest descent backpropagation training algorithm. The designed artificial neural network has 2 inputs, 2 outputs, and a varying number of hidden layers. Experiments were carried out with varying numbers of hidden layers and learning rates. Experimental results show that the best artificial neural network architecture for modeling the inverse kinematics is a multilayer perceptron with 1 hidden layer and 38 neurons per hidden layer. This network yielded an RMSE value of 0.01474.
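
A minimal sketch of the reported setup: a 2-input, 2-output MLP with one hidden layer of 38 neurons trained by plain gradient descent (scikit-learn's 'sgd' solver as a stand-in for steepest-descent backpropagation) on synthetic inverse-kinematics data for a hypothetical two-link planar arm. The link lengths, joint ranges, and learning settings are assumptions, not the paper's values.

```python
# Hypothetical sketch: 2-38-2 MLP for the inverse kinematics of a two-link
# planar arm.  Link lengths, joint ranges and the SGD settings are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

L1, L2 = 0.10, 0.08                                      # assumed link lengths [m]
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, np.pi / 2, size=(2000, 2))      # joint angles (targets)

# Forward kinematics gives the (x, y) tool position used as network input.
x = L1 * np.cos(theta[:, 0]) + L2 * np.cos(theta[:, 0] + theta[:, 1])
y = L1 * np.sin(theta[:, 0]) + L2 * np.sin(theta[:, 0] + theta[:, 1])
X = np.column_stack([x, y])

net = MLPRegressor(hidden_layer_sizes=(38,), solver="sgd",
                   learning_rate_init=0.01, max_iter=5000, random_state=0)
net.fit(X[:1500], theta[:1500])
rmse = np.sqrt(mean_squared_error(theta[1500:], net.predict(X[1500:])))
print(f"test RMSE on joint angles: {rmse:.5f}")
```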

Keywords: Artificial neural network, back propagation, inverse kinematics, manipulator, robot.

Downloads: 2220
4586 Inferring the Dynamics of “Hidden” Neurons from Electrophysiological Recordings

Authors: Valeri A. Makarov, Nazareth P. Castellanos

Abstract:

Statistical analysis of electrophysiological recordings obtained under, e.g., tactile stimulation frequently suggests the participation in the network dynamics of experimentally unobserved “hidden” neurons. Such interneurons making synapses onto experimentally recorded neurons may strongly alter their dynamical responses to the stimuli. We propose a mathematical method that formalizes this possibility and provides an algorithm for inferring the presence and dynamics of hidden neurons based on fitting the experimental data to spike trains generated by the network model. The model makes use of Integrate and Fire neurons “chemically” coupled through exponentially decaying synaptic currents. We test the method on simulated data and also provide an example of its application to an experimental recording from Dorsal Column Nuclei neurons of the rat under tactile stimulation of a hind limb.
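
A minimal numpy sketch of the building block named in the abstract: a leaky integrate-and-fire neuron driven by exponentially decaying synaptic currents from presynaptic spike trains. All constants, weights, and input trains are illustrative, not the authors' fitted model.

```python
# Hypothetical sketch: leaky integrate-and-fire neuron with exponentially
# decaying synaptic current.  Time constants, weights and inputs are illustrative.
import numpy as np

dt, T = 0.1, 500.0                     # time step and duration [ms]
steps = int(T / dt)
tau_m, tau_s = 20.0, 5.0               # membrane and synaptic time constants [ms]
v_th, v_reset = 1.0, 0.0               # threshold and reset (arbitrary units)

rng = np.random.default_rng(0)
pre_rate = 20.0 / 1000.0               # presynaptic rate per ms
pre_spikes = rng.random((steps, 5)) < pre_rate * dt   # 5 presynaptic spike trains
w = np.array([0.6, 0.4, 0.5, -0.3, 0.5])              # synaptic weights (one inhibitory)

v, I = 0.0, 0.0
out_spikes = []
for t in range(steps):
    I += dt * (-I / tau_s) + w @ pre_spikes[t]        # exponential synaptic current
    v += dt * (-v / tau_m + I)                        # leaky membrane integration
    if v >= v_th:                                     # fire and reset
        out_spikes.append(t * dt)
        v = v_reset

print(f"{len(out_spikes)} output spikes in {T:.0f} ms")
```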

Keywords: Integrate and fire neuron, neural network models, spike trains.

Downloads: 1287
4585 Optimizing the Probabilistic Neural Network Training Algorithm for Multi-Class Identification

Authors: Abdelhadi Lotfi, Abdelkader Benyettou

Abstract:

In this work, a training algorithm for probabilistic neural networks (PNN) is presented. The algorithm addresses one of the major drawbacks of PNN, which is the size of the hidden layer in the network. By using a cross-validation training algorithm, the number of hidden neurons is reduced to a smaller set consisting of the most representative samples of the training set. This is done without affecting the overall architecture of the network. Performance of the network is compared against that of the standard PNN for different databases from the UCI repository. Results show an important gain in network size and performance.
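
A minimal numpy sketch of a standard PNN (Parzen-window) classifier, in which every training sample becomes one hidden (pattern) neuron; shrinking that layer to a representative subset, as the paper does via cross-validation, amounts to passing a pruned training set to the same function. The data and smoothing parameter are placeholders.

```python
# Hypothetical sketch of a standard probabilistic neural network (PNN):
# every training sample is one hidden (pattern) neuron with a Gaussian kernel.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        # Gaussian kernel activation of every pattern neuron for this input.
        d2 = np.sum((X_train - x) ** 2, axis=1)
        k = np.exp(-d2 / (2.0 * sigma ** 2))
        # Summation layer: average kernel response per class, pick the largest.
        scores = [k[y_train == c].mean() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

rng = np.random.default_rng(0)
X0 = rng.normal(loc=-1.0, size=(50, 2))
X1 = rng.normal(loc=+1.0, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

acc = (pnn_predict(X, y, X) == y).mean()
print(f"training accuracy with all {len(X)} pattern neurons: {acc:.2f}")
# Shrinking the hidden layer = calling pnn_predict with a representative subset
# of (X, y), e.g. one chosen by cross-validation as in the paper.
```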

Keywords: Classification, probabilistic neural networks, network optimization, pattern recognition.

Downloads: 1140
4584 Application of Neural Network in User Authentication for Smart Home System

Authors: A. Joseph, D.B.L. Bong, D.A.A. Mat

Abstract:

Security has been an important issue and concern in smart home systems. Because smart home networks consist of a wide range of wired and wireless devices, there is a possibility that illegal access to restricted data or devices may occur. Password-based authentication is widely used to identify authorized users because this method is cheap, easy, and quite accurate. In this paper, a neural network is trained to store the passwords instead of using a verification table. This method is useful in solving security problems that arise in some authentication systems. The conventional way to train the network using Backpropagation (BPN) requires a long training time. Hence, a faster training algorithm, Resilient Backpropagation (RPROP), is embedded in the MLP neural network to accelerate the training process. For the data, 200 sets of user IDs and passwords were created and encoded into binary as the input. Simulations were carried out to evaluate the performance for different numbers of hidden neurons and combinations of transfer functions. Mean Square Error (MSE), training time, and number of epochs are used to determine the network performance. From the results obtained, using Tansig and Purelin in the hidden and output layers with 250 hidden neurons gave the best performance. As a result, a password-based user authentication system for smart homes using a neural network has been developed successfully.

Keywords: Neural Network, User Authentication, Smart Home, Security

Downloads: 1986
4583 Application of Extreme Learning Machine Method for Time Series Analysis

Authors: Rampal Singh, S. Balasundaram

Abstract:

In this paper, we study the application of the Extreme Learning Machine (ELM) algorithm for single-hidden-layer feedforward neural networks to non-linear chaotic time series problems. In this algorithm the input weights and the hidden layer biases are randomly chosen. The ELM formulation leads to solving a system of linear equations in terms of the unknown weights connecting the hidden layer to the output layer. The solution of this general system of linear equations is obtained using the Moore-Penrose generalized pseudoinverse. For the study of the application of the method, we consider the time series generated by the Mackey-Glass delay differential equation with different time delays, and the Santa Fe A and UCR heart beat rate ECG time series. For the choice of sigmoid, sin, and hardlim activation functions, the optimal values of the memory order and the number of hidden neurons which give the best prediction performance in terms of root mean square error are determined. It is observed that the results obtained are in close agreement with the exact solutions of the problems considered, which clearly shows that ELM is a very promising alternative method for time series prediction.
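
The ELM procedure described (random input weights and biases, a fixed nonlinear hidden layer, output weights solved directly via the Moore-Penrose pseudoinverse) in a minimal numpy sketch for one-step-ahead prediction of a scalar time series; the series, memory order, and hidden size are placeholders, not the datasets of the paper.

```python
# Minimal ELM sketch: random input weights, sigmoid hidden layer, output
# weights from the Moore-Penrose pseudoinverse.  Series and sizes are placeholders.
import numpy as np

def make_lagged(series, memory_order):
    """Stack [x(t-m+1) ... x(t)] rows as inputs, x(t+1) as the target."""
    X = np.array([series[i:i + memory_order]
                  for i in range(len(series) - memory_order)])
    y = series[memory_order:]
    return X, y

rng = np.random.default_rng(0)
t = np.arange(2000)
series = np.sin(0.3 * t) + 0.5 * np.sin(0.05 * t)       # placeholder series

memory_order, n_hidden = 8, 100
X, y = make_lagged(series, memory_order)

W = rng.normal(size=(memory_order, n_hidden))           # random input weights (fixed)
b = rng.normal(size=n_hidden)                           # random hidden biases (fixed)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                  # sigmoid hidden activations
beta = np.linalg.pinv(H) @ y                            # output weights, solved directly

y_hat = H @ beta
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
print(f"training RMSE: {rmse:.4f}")
```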

Keywords: Chaotic time series, Extreme learning machine, Generalization performance.

Downloads: 3454
4582 Convergence Analysis of Training Two-Hidden-Layer Partially Over-Parameterized ReLU Networks via Gradient Descent

Authors: Zhifeng Kong

Abstract:

Over-parameterized neural networks have attracted a great deal of attention in recent deep learning theory research, as they challenge the classic perspective of over-fitting when the model has excessive parameters and have gained empirical success in various settings. While a number of theoretical works have been presented to demystify the properties of such models, their convergence properties are still far from being thoroughly understood. In this work, we study the convergence properties of training two-hidden-layer partially over-parameterized fully connected networks with the Rectified Linear Unit activation via gradient descent. To our knowledge, this is the first theoretical work to understand the convergence properties of deep over-parameterized networks without the equally-wide-hidden-layer assumption and other unrealistic assumptions. We provide a probabilistic lower bound on the widths of the hidden layers and prove a linear convergence rate for gradient descent. We also conduct experiments on synthetic and real-world datasets to validate our theory.

Keywords: Over-parameterization, Rectified Linear Units (ReLU), convergence, gradient descent, neural networks.

Downloads: 795
4581 Optimization of the Input Layer Structure for Feed-Forward NARX Neural Networks

Authors: Zongyan Li, Matt Best

Abstract:

This paper presents an optimization method for reducing the number of input channels and the complexity of a feed-forward NARX neural network (NN) without compromising the accuracy of the NN model. By utilizing the correlation analysis method, the most significant regressors are selected to form the input layer of the NN structure. An application to vehicle dynamic model identification is also presented to demonstrate the optimization technique, and the optimal input layer structure and number of neurons for the neural network are investigated.
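
A minimal sketch of the kind of correlation-based regressor screening described: candidate lagged inputs and outputs of a NARX-style model are ranked by their absolute correlation with the target, and the strongest ones form the input layer. The signals, lag range, and toy system are placeholders.

```python
# Hypothetical sketch: rank candidate NARX regressors (lagged inputs/outputs)
# by absolute correlation with the target and keep the most significant ones.
import numpy as np

rng = np.random.default_rng(0)
N = 1000
u = rng.normal(size=N)                          # placeholder input channel
y = np.zeros(N)
for t in range(2, N):                           # toy system: y depends on y[t-1], u[t-2]
    y[t] = 0.7 * y[t - 1] + 0.5 * u[t - 2] + 0.05 * rng.normal()

max_lag = 4
candidates = {}
for lag in range(1, max_lag + 1):
    candidates[f"y[t-{lag}]"] = y[max_lag - lag: N - lag]
    candidates[f"u[t-{lag}]"] = u[max_lag - lag: N - lag]
target = y[max_lag:]

ranked = sorted(candidates.items(),
                key=lambda kv: abs(np.corrcoef(kv[1], target)[0, 1]),
                reverse=True)
for name, reg in ranked:
    r = np.corrcoef(reg, target)[0, 1]
    print(f"{name}: |r| = {abs(r):.3f}")
# The top-ranked regressors would then form the NN input layer.
```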

Keywords: Correlation analysis, F-ratio, Levenberg-Marquardt, MSE, NARX, neural network, optimisation.

Downloads: 2145
4580 Modeling and Prediction of Zinc Extraction Efficiency from Concentrate by Operating Condition and Using Artificial Neural Networks

Authors: S. Mousavian, D. Ashouri, F. Mousavian, V. Nikkhah Rashidabad, N. Ghazinia

Abstract:

The pH, temperature, and extraction time of each stage, the agitation speed, and the delay time between stages affect the efficiency of zinc extraction from concentrate. In this research, the efficiency of zinc extraction was predicted as a function of the mentioned variables using artificial neural networks (ANN). ANNs with different layers were employed, and the results show that the network with 8 neurons in the hidden layer is in good agreement with the experimental data.

Keywords: Zinc extraction, Efficiency, Neural networks, Operating condition.

Downloads: 1537
4579 Binary Mixture of Copper-Cobalt Ions Uptake by Zeolite using Neural Network

Authors: John Kabuba, Antoine Mulaba-Bafubiandi, Kim Battle

Abstract:

In this study a neural network (NN) was proposed to predict the sorption of a binary mixture of copper-cobalt ions onto clinoptilolite as an ion-exchanger. The configuration of the backpropagation neural network giving the smallest mean square error was a three-layer NN with a tangent sigmoid transfer function at the hidden layer with 10 neurons, a linear transfer function at the output layer, and the Levenberg-Marquardt backpropagation training algorithm. Experiments were carried out in a batch reactor to obtain equilibrium data for the individual sorption and for the mixture of copper-cobalt ions. The modeling results show that the neural network fits the equilibrium data of the binary system better than the conventional sorption isotherm models.

Keywords: Adsorption isotherm, binary system, neural network, sorption.

Downloads: 1986
4578 A Critical Study of Neural Networks Applied to the Ion-Exchange Process

Authors: John Kabuba, Antoine Mulaba-Bafubiandi, Kim Battle

Abstract:

This paper presents a critical study of the application of Neural Networks to the ion-exchange process. Ion exchange is a complex non-linear process involving many factors that influence the ion uptake mechanisms from the pregnant solution. The following step involves elution. Published data present empirical isotherm equations with definite shortcomings, resulting in unreliable predictions. Although the Neural Network simulation technique has a number of disadvantages, including its “black box” nature and a limited ability to explicitly identify possible causal relationships, it has the advantage of implicitly handling complex nonlinear relationships between dependent and independent variables. In the present paper, a Neural Network model based on the Levenberg-Marquardt back-propagation algorithm was developed using a three-layer approach with a tangent sigmoid transfer function (tansig) at the hidden layer with 11 neurons and a linear transfer function (purelin) at the output layer. This approach has been used to test the effectiveness in simulating ion-exchange processes. The modeling results showed excellent agreement between the experimental data and the predicted values of copper ions removed from aqueous solutions.

Keywords: Copper, ion-exchange process, neural networks, simulation

Downloads: 1577
4577 Javanese Character Recognition Using Hidden Markov Model

Authors: Anastasia Rita Widiarti, Phalita Nari Wastu

Abstract:

The Hidden Markov Model (HMM) is a stochastic method which has been used in various signal processing and character recognition applications. This study proposes to use HMM to recognize Javanese characters from a number of different handwritings, whereby HMM is used to optimize the number of states and the feature extraction. An 85.7% accuracy is obtained as the best result with a 16-state vertical model using pure HMM. This initial result is satisfactory for prompting further research.

Keywords: Character recognition, off-line handwriting recognition, Hidden Markov Model.

Downloads: 1935
4576 Pattern Classification of Back-Propagation Algorithm Using Exclusive Connecting Network

Authors: Insung Jung, Gi-Nam Wang

Abstract:

The objective of this paper is the design of a pattern classification model based on the back-propagation (BP) algorithm for a decision support system. The standard BP model fully connects every node across the layers from input to output. Therefore, it requires a lot of computing time and iterations to achieve good performance and an acceptable error rate when generating patterns or training the network. In contrast, the proposed model uses exclusive connections between the hidden layer nodes and the output nodes. The advantages of this model are a smaller number of iterations and better performance compared with the standard back-propagation model. We simulated several classification datasets and different settings of network factors (e.g., number of hidden layers and nodes, number of classes, and iterations). In most simulation cases, the BP model using the exclusive connection network outperformed standard BP. We expect that this algorithm can be applied to user face identification, data analysis, and mapping between environmental data and information.

Keywords: Neural network, Back-propagation, classification.

Downloads: 1605
4575 Synthesis of Wavelet Filters using Wavelet Neural Networks

Authors: Wajdi Bellil, Chokri Ben Amar, Adel M. Alimi

Abstract:

An application of Beta wavelet networks to synthesize high-pass and low-pass wavelet filters is investigated in this work. A Beta wavelet network is constructed using a parametric function called the Beta function in order to solve a nonlinear approximation problem. We combine filter design theory with wavelet network approximation to synthesize perfect filter reconstruction. The filter order is given by the number of neurons in the hidden layer of the neural network. In this paper we use only the first derivative of the Beta function to illustrate the proposed design procedures and exhibit its performance.

Keywords: Beta wavelets, Wavenet, multiresolution analysis, perfect filter reconstruction, salient point detect, repeatability.

Downloads: 1613
4574 A Self Supervised Bi-directional Neural Network (BDSONN) Architecture for Object Extraction Guided by Beta Activation Function and Adaptive Fuzzy Context Sensitive Thresholding

Authors: Siddhartha Bhattacharyya, Paramartha Dutta, Ujjwal Maulik, Prashanta Kumar Nandi

Abstract:

A multilayer self-organizing neural network (MLSONN) architecture for binary object extraction, guided by a beta activation function and characterized by backpropagation of errors estimated from the linear indices of fuzziness of the network output states, is discussed. Since the MLSONN architecture is designed to operate in a single-point fixed/uniform thresholding scenario, it does not take into account the heterogeneity of image information in the extraction process. The performance of the MLSONN architecture with representative values of the threshold parameters of the beta activation function employed is also studied. A three-layer bidirectional self-organizing neural network (BDSONN) architecture comprising fully connected neurons, for the extraction of objects from a noisy background and capable of incorporating the underlying image context heterogeneity through variable and adaptive thresholding, is proposed in this article. The input layer of the network architecture represents the fuzzy membership information of the image scene to be extracted. The second layer (the intermediate layer) and the final layer (the output layer) of the network architecture deal with the self-supervised object extraction task by bi-directional propagation of the network states. Each layer except the output layer is connected to the next layer following a neighborhood-based topology. The output layer neurons are, in turn, connected to the intermediate layer following a similar topology, thus forming a counter-propagating architecture with the intermediate layer. The novelty of the proposed architecture is that the assignment/updating of the inter-layer connection weights is done using the relative fuzzy membership values at the constituent neurons in the different network layers. Another interesting feature of the network lies in the fact that the processing capabilities of the intermediate and the output layer neurons are guided by a beta activation function, which uses image-context-sensitive adaptive thresholding arising out of the fuzzy cardinality estimates of the different network neighborhood fuzzy subsets, rather than resorting to fixed and single-point thresholding. An application of the proposed architecture for object extraction is demonstrated using a synthetic and a real-life image. The extraction efficiency of the proposed network architecture is evaluated by a proposed system transfer index characteristic of the network.

Keywords: Beta activation function, fuzzy cardinality, multilayer self organizing neural network, object extraction,

Downloads: 1505
4573 Investigation of Artificial Neural Networks Performance to Predict Net Heating Value of Crude Oil by Its Properties

Authors: Mousavian, M. Moghimi Mofrad, M. H. Vakili, D. Ashouri, R. Alizadeh

Abstract:

The aim of this research is to use artificial neural network computing technology to estimate the net heating value (NHV) of crude oil from its properties. The approach is based on training a neural network simulator that uses back-propagation as the learning algorithm for a predefined range of analytically generated well test responses. A network with 8 neurons in one hidden layer was selected, and its predictions are in good agreement with the experimental data.

Keywords: Neural Network, Net Heating Value, Crude Oil, Experimental, Modeling.

Downloads: 1523
4572 A New Self-Adaptive EP Approach for ANN Weights Training

Authors: Kristina Davoian, Wolfram-M. Lippe

Abstract:

Evolutionary Programming (EP) represents a methodology of Evolutionary Algorithms (EA) in which mutation is considered the main reproduction operator. This paper presents a novel EP approach for Artificial Neural Network (ANN) learning. The proposed strategy consists of two components: the self-adaptive, which contains phenotype information, and the dynamic, which is described by the genotype. Self-adaptation is achieved by the addition of a value, called the network weight, which depends on the total number of hidden layers and the average number of neurons in the hidden layers. The dynamic component changes its value depending on the fitness of the chromosome exposed to mutation. Thus, the mutation step size is controlled by two components, encapsulated in the algorithm, which adjust it according to the characteristics of a predefined ANN architecture and the fitness of a particular chromosome. The comparative analysis of the proposed approach and classical EP (Gaussian mutation) showed that significant acceleration of the evolution process is achieved by using both phenotype and genotype information in the mutation strategy.

Keywords: Artificial Neural Networks (ANN), Learning Theory, Evolutionary Programming (EP), Mutation, Self-Adaptation.

Downloads: 1775
4571 Neural Network Based Speech to Text in Malay Language

Authors: H. F. A. Abdul Ghani, R. R. Porle

Abstract:

Speech to text in the Malay language is a system that converts Malay speech into text. Malay language recognition systems are still limited; thus, this paper aims to investigate the recognition performance for ten Malay words obtained from online Malay news. The methodology consists of three stages: preprocessing, feature extraction, and speech classification. In the preprocessing stage, the speech samples are filtered using pre-emphasis. After that, a feature extraction method is applied to the samples using the Mel Frequency Cepstrum Coefficient (MFCC). Lastly, speech classification is performed using a Feedforward Neural Network (FFNN). The accuracy of the classification is further investigated based on the hidden layer size. From experimentation, the classifier with 40 hidden neurons shows the highest classification rate, which is 94%.
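
A minimal sketch of the classification stage only, assuming MFCC feature vectors have already been extracted for each utterance (e.g., one averaged 13-dimensional MFCC vector per clip) and using a feed-forward network with the reported 40 hidden neurons via scikit-learn; the data here is a synthetic placeholder, not the Malay word recordings.

```python
# Hypothetical sketch of the classification stage: a feed-forward network with
# 40 hidden neurons on (already extracted) MFCC feature vectors for 10 words.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, n_per_word, n_mfcc = 10, 30, 13
# Placeholder: one averaged 13-dimensional MFCC vector per utterance.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(n_per_word, n_mfcc))
               for i in range(n_words)])
y = np.repeat(np.arange(n_words), n_per_word)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(40,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"classification rate: {clf.score(X_te, y_te):.2%}")
```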

Keywords: Feed-Forward Neural Network, FFNN, Malay speech recognition, Mel Frequency Cepstrum Coefficient, MFCC, speech-to-text.

Downloads: 652
4570 Metabolic Predictive Model for PMV Control Based on Deep Learning

Authors: Eunji Choi, Borang Park, Youngjae Choi, Jinwoo Moon

Abstract:

In this study, a predictive model for estimating the metabolism (MET) of the human body was developed for the optimal control of the indoor thermal environment. Human body images for indoor activities and human body joint coordinate values were collected as data sets, which are used in the predictive model. A deep learning algorithm was used for an initial model, and its numbers of hidden layers and hidden neurons were optimized. Lastly, the model prediction performance was analyzed after the model was trained on the collected data. In conclusion, the possibility of MET prediction was confirmed, and future work was proposed on developing more varied data and refining the predictive model.

Keywords: Deep learning, indoor quality, metabolism, predictive model.

Downloads: 1129
4569 Performance Analysis of Evolutionary ANN for Output Prediction of a Grid-Connected Photovoltaic System

Authors: S.I Sulaiman, T.K Abdul Rahman, I. Musirin, S. Shaari

Abstract:

This paper presents a performance analysis of the Evolutionary Programming-Artificial Neural Network (EPANN) based technique to optimize the architecture and training parameters of a one-hidden-layer feedforward ANN model for the prediction of the energy output from a grid-connected photovoltaic system. The ANN utilizes solar radiation and ambient temperature as its inputs, while the output is the total watt-hour energy produced by the grid-connected PV system. EP is used to optimize the regression performance of the ANN model by determining the optimum values for the number of nodes in the hidden layer as well as the optimal momentum rate and learning rate for the training. The EPANN model is tested using two types of transfer function for the hidden layer, namely the tangent sigmoid and the logarithmic sigmoid. The best transfer function, neural topology, and learning parameters were selected based on the highest regression performance obtained during the ANN training and testing process. It is observed that the best transfer function configuration for the prediction model is [logarithmic sigmoid, purely linear].

Keywords: Artificial neural network (ANN), Correlation coefficient (R), Evolutionary programming-ANN (EPANN), Photovoltaic (PV), logarithmic sigmoid and tangent sigmoid.

Downloads: 1848
4568 The Application of a Neural Network in the Reworking of Accu-Chek to Wrist Bands to Monitor Blood Glucose in the Human Body

Authors: J. K. Adedeji, O. H. Olowomofe, C. O. Alo, S. T. Ijatuyi

Abstract:

The issue of high blood sugar levels, the effects of which might end up as diabetes mellitus, is now becoming a rampant cardiovascular disorder in our community. In recent times, a lack of awareness among most people makes this disease a silent killer. The situation calls for urgency, hence the need to design a device that serves as a monitoring tool, such as a wrist watch, to give an alert of the danger ahead of time to those living with high blood glucose, as well as to introduce a mechanism for checks and balances. The neural network architecture assumed an 8-15-10 configuration with eight neurons at the input stage including a bias, 15 neurons at the hidden layer at the processing stage, and 10 neurons at the output stage indicating likely symptom cases. The inputs are formed using the exclusive OR (XOR), with the expectation of getting an XOR output as the threshold value for diabetic symptom cases. The neural algorithm is coded in the Java language with 1000 epoch runs to bring the errors to the barest minimum. The internal circuitry of the device comprises the compatible hardware requirements that match the nature of each of the input neurons. Light-emitting diodes (LEDs) of red, green, and yellow colors are used as the output of the neural network to show pattern recognition for severe cases, pre-hypertensive cases, and normal cases without traces of diabetes mellitus. The research concluded that the neural network is a more efficient Accu-Chek design tool for the proper monitoring of high glucose levels than the conventional methods of carrying out blood tests.

Keywords: Accu-Chek, diabetes, neural network, pattern recognition.

Downloads: 1531
4567 Classification of Prostate Cell Nuclei using Artificial Neural Network Methods

Authors: M. Sinecen, M. Makinacı

Abstract:

The purpose of this paper is to assess the value of neural networks for the classification of cancer and noncancer prostate cells. Gauss Markov Random Field, Fourier entropy, and wavelet average deviation features are calculated from 80 noncancer and 80 cancer prostate cell nuclei. For classification, artificial neural network techniques, namely multilayer perceptron, radial basis function, and learning vector quantization, are used. Two configurations are utilized for the multilayer perceptron: the first has a single hidden layer with between 3 and 15 nodes, and the second has two hidden layers, each with between 3 and 15 nodes. An overall classification rate of 86.88% is achieved.

Keywords: Artificial neural networks, texture classification, cancer diagnosis.

Downloads: 1524
4566 Prediction of Vapor Liquid Equilibrium for Dilute Solutions of Components in Ionic Liquid by Neural Networks

Authors: S. Mousavian, A. Abedianpour, A. Khanmohammadi, S. Hematian, Gh. Eidi Veisi

Abstract:

Ionic liquids are finding a wide range of applications, from reaction media to separations and materials processing. In these applications, vapor-liquid equilibrium (VLE) is the most important property. VLE data for six systems at 353 K and activity coefficients at infinite dilution (γi∞) for various solutes (alkanes, alkenes, cycloalkanes, cycloalkenes, aromatics, alcohols, ketones, esters, ethers, and water) in the ionic liquids 1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide [EMIM][BTI], 1-hexyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide [HMIM][BTI], 1-octyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide [OMIM][BTI], and 1-butyl-1-methylpyrrolidinium bis(trifluoromethylsulfonyl)imide [BMPYR][BTI] have been used to train neural networks in the temperature range from 303 to 333 K. Densities of the ionic liquids, Hildebrand constants of the substances, and temperature were selected as inputs of the neural networks. Networks with different hidden layers were examined; the network with seven neurons in one hidden layer has the minimum error and good agreement with the experimental data.

Keywords: Ionic liquid, Neural networks, VLE, Dilute solution.

Downloads: 1314
4565 Optimal Multilayer Perceptron Structure For Classification of HIV Sub-Type Viruses

Authors: Zeyneb Kurt, Oguzhan Yavuz

Abstract:

The features of the HIV genome vary over a wide range because it is highly heterogeneous. Hence, the infection ability of the virus changes in relation to different chemokine receptors. From this point of view, R5 and X4 HIV viruses use the CCR5 and CXCR4 coreceptors respectively, while R5X4 viruses can utilize both coreceptors. Recently, in bioinformatics, the classification of R5X4 viruses using the coreceptors of the HIV genome has been studied. The aim of this study is to develop the optimal Multilayer Perceptron (MLP) for high classification accuracy of HIV sub-type viruses. To accomplish this purpose, the number of units in the hidden layer was incremented one by one, from one up to a particular number. The statistical data of R5X4, R5, and X4 viruses were preprocessed by signal processing methods. Accessible residues of these virus sequences were extracted and modeled by an Auto-Regressive (AR) model, because the residue dimensions are large and differ from each other. Finally, the pre-processed dataset was used to evolve MLPs with various numbers of hidden units to identify R5X4 viruses. Furthermore, ROC analysis was used to determine the optimal MLP structure.

Keywords: Multilayer Perceptron, Auto-Regressive Model, HIV, ROC Analysis

Downloads: 1387
4564 Validation Testing for Temporal Neural Networks for RBF Recognition

Authors: Khaled E. A. Negm

Abstract:

A neuron can emit spikes on an irregular time basis, and by averaging over a certain time window one would ignore a lot of information. It is known that in the context of fast information processing there is not sufficient time to sample an average firing rate of the spiking neurons. The present work shows that spiking neurons are capable of computing radial basis functions by storing the relevant information in the neurons' delays. One of the fundamental findings of this research is that using overlapping receptive fields to encode the data patterns increases the network's clustering capacity. The clustering algorithm discussed here is interesting from both a computer science and a neuroscience point of view.

Keywords: Temporal Neurons, RBF Recognition, Perturbation, On Line Recognition.

Downloads: 1429
4563 Tuning Neurons to Interaural Intensity Differences Using Spike Timing-Dependent Plasticity

Authors: Bertrand Fontaine, Herbert Peremans

Abstract:

Mammals are known to use the Interaural Intensity Difference (IID) to determine the azimuthal position of high-frequency sounds. In the Lateral Superior Olive (LSO), neurons have firing behaviours which vary systematically with IID. Those neurons receive excitatory inputs from the ipsilateral ear and inhibitory inputs from the contralateral one. The IID sensitivity of an LSO neuron is thought to be due to delay differences between the two ears, delays arising from different synaptic delays and from intensity-dependent delays. In this paper we model the auditory pathway up to the LSO. Inputs to LSO neurons are at first numerous and differ in their relative delays. Spike Timing-Dependent Plasticity is then used to prune those connections. We compare the pruned neuron responses with physiological data and analyse the relationship between the IIDs of teacher stimuli and the IID sensitivities of trained LSO neurons.
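
A minimal sketch of the pair-based STDP rule commonly used for this kind of connection pruning: a weight is potentiated when a presynaptic spike precedes the postsynaptic spike and depressed otherwise, with exponential dependence on the timing difference. The constants, delays, and toy pruning loop are illustrative, not the paper's fitted values.

```python
# Hypothetical pair-based STDP rule: potentiate when pre precedes post,
# depress otherwise, with exponential dependence on the spike-time difference.
import numpy as np

A_plus, A_minus = 0.01, 0.012      # illustrative learning amplitudes
tau_plus, tau_minus = 20.0, 20.0   # illustrative time constants [ms]

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:                                   # pre before post -> potentiation
        return A_plus * np.exp(-dt / tau_plus)
    return -A_minus * np.exp(dt / tau_minus)      # post before pre -> depression

# Toy pruning loop: synapses whose pre-spikes consistently arrive late are weakened.
w = np.full(5, 0.5)                               # 5 candidate synapses
delays = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # relative input delays [ms]
for _ in range(200):
    t_post = 10.0                                 # assumed postsynaptic spike time
    for i, d in enumerate(delays):
        w[i] = np.clip(w[i] + stdp_dw(t_pre=t_post - 3.0 + d, t_post=t_post), 0.0, 1.0)
print(np.round(w, 3))                             # short-delay inputs end up strongest
```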

Keywords: Interaural difference, lateral superior olive, spike time-dependent plasticity.

Downloads: 1426
4562 Identifying a Drug Addict Person Using Artificial Neural Networks

Authors: Mustafa Al Sukar, Azzam Sleit, Abdullatif Abu-Dalhoum, Bassam Al-Kasasbeh

Abstract:

Use and abuse of drugs by teens is very common and can have dangerous consequences. Drugs contribute to physical and sexual aggression such as assault or rape. Some teenagers regularly use drugs to compensate for depression, anxiety, or a lack of positive social skills. Teen resort to smoking should not be minimized, because smoking can be a "gateway drug" to other drugs (marijuana, cocaine, hallucinogens, inhalants, and heroin). The combination of teenagers' curiosity, risk-taking behavior, and social pressure makes it very difficult to say no. This leads most teenagers to the question: "Will it hurt to try once?" Nowadays, technological advances are changing our lives very rapidly and adding many technologies that help us to track the risk of drug abuse, such as smart phones, Wireless Sensor Networks (WSNs), the Internet of Things (IoT), etc. These techniques may help with the early discovery of drug abuse in order to prevent an aggravation of the influence of drugs on the abuser. In this paper, we have developed a Decision Support System (DSS) for detecting drug abuse using an Artificial Neural Network (ANN); we used a Multilayer Perceptron (MLP) feed-forward neural network in developing the system. The input layer includes 50 variables, while the output layer contains one neuron which indicates whether the person is a drug addict. An iterative process is used to determine the number of hidden layers and the number of neurons in each one. We evaluated multiple experimental models with the log-sigmoid transfer function. In particular, a 10-fold cross-validation scheme is used to assess the generalization of the proposed system. The experimental results show 98.42% classification accuracy for correct diagnosis in our system. The data were taken from 184 cases in Jordan, according to a set of questions compiled by specialists, and were obtained through the families of drug abusers.
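
A minimal sketch of the evaluation protocol described: an MLP with a logistic (log-sigmoid) activation assessed by 10-fold cross-validation. The dataset dimensions mirror the abstract (184 cases, 50 input variables, binary output), but the data itself is a synthetic placeholder and the hidden-layer size is an assumption, since the abstract only says it was chosen iteratively.

```python
# Hypothetical sketch: MLP with logistic (log-sigmoid) activation evaluated by
# 10-fold cross-validation.  Data is a synthetic placeholder (50 inputs, 1 output);
# the hidden-layer size is an assumption, chosen iteratively in the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_cases, n_vars = 184, 50                       # sizes taken from the abstract
X = rng.normal(size=(n_cases, n_vars))
y = (X[:, :5].sum(axis=1) > 0).astype(int)      # placeholder "addict / not addict" label

clf = MLPClassifier(hidden_layer_sizes=(20,), activation="logistic",
                    max_iter=3000, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)      # 10-fold cross-validation
print(f"mean CV accuracy: {scores.mean():.2%} (+/- {scores.std():.2%})")
```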

Keywords: Artificial Neural Network, Decision Support System, drug abuse, drug addiction, Multilayer Perceptron.

Downloads: 1600
4561 Development of Single Layer of WO3 on Large Spatial Resolution by Atomic Layer Deposition Technique

Authors: S. Zhuiykov, Zh. Hai, H. Xu, C. Xue

Abstract:

Unique and distinctive properties can be obtained in a two-dimensional (2D) semiconductor such as tungsten trioxide (WO3) when the reduction from multi-layer to one fundamental layer thickness takes place. Achieving this transition without damaging the single layer over a large spatial resolution remained elusive until the atomic layer deposition (ALD) technique was utilized. Here we report the ALD-enabled atomic-layer-precision development of single-layer WO3 with a thickness of 0.77±0.07 nm over a large spatial resolution, using (tBuN)2W(NMe2)2 as the tungsten precursor and H2O as the oxygen precursor, without affecting the underlying SiO2/Si substrate. The versatility of ALD lies in tuning the recipe in order to achieve complete WO3 with the desired number of WO3 layers, including a monolayer. Governed by self-limiting surface reactions, the ALD-enabled approach is versatile, scalable, and applicable to a broader range of 2D semiconductors and various device applications.

Keywords: Atomic layer deposition, tungsten oxide, WO3, two-dimensional semiconductors, single fundamental layer.

Downloads: 1557
4560 Application of BP Neural Network Model in Sports Aerobics Performance Evaluation

Authors: Shuhe Shao

Abstract:

This article provides a partial evaluation index system and its standards for sports aerobics, including the following 12 indexes: health vitality, coordination, flexibility, accuracy, pace, endurance, elasticity, self-confidence, form, control, uniformity, and musicality. A three-layer BP artificial neural network model comprising an input layer, a hidden layer, and an output layer is established. The results show that the model reflects well the non-linear relationship between the performance on the 12 indexes and the overall performance. The predicted value of each sample is very close to the true value, with a relative error fluctuating around 5%, and the network training is successful. This shows that the BP network has high prediction accuracy and good generalization capacity when applied to sports aerobics performance evaluation after effective training.

Keywords: BP neural network, sports aerobics, performance, evaluation.

Downloads: 1563