Search results for: neural activation
1391 Multilevel Activation Functions For True Color Image Segmentation Using a Self Supervised Parallel Self Organizing Neural Network (PSONN) Architecture: A Comparative Study
Authors: Siddhartha Bhattacharyya, Paramartha Dutta, Ujjwal Maulik, Prashanta Kumar Nandi
Abstract:
The paper describes a self supervised parallel self organizing neural network (PSONN) architecture for true color image segmentation. The proposed architecture is a parallel extension of the standard single self organizing neural network architecture (SONN) and comprises an input (source) layer of image information, three single self organizing neural network architectures for segmentation of the different primary color components in a color image scene and one final output (sink) layer for fusion of the segmented color component images. Responses to the different shades of color components are induced in each of the three single network architectures (meant for component level processing) by applying a multilevel version of the characteristic activation function, which maps the input color information into different shades of color components, thereby yielding a processed component color image segmented on the basis of the different shades of component colors. The number of target classes in the segmented image corresponds to the number of levels in the multilevel activation function. Since the multilevel version of the activation function exhibits several subnormal responses to the input color image scene information, the system errors of the three component network architectures are computed from some subnormal linear index of fuzziness of the component color image scenes at the individual level. Several multilevel activation functions are employed for segmentation of the input color image scene using the proposed network architecture. Results of the application of the multilevel activation functions to the PSONN architecture are reported on three real life true color images. The results are substantiated empirically with the correlation coefficients between the segmented images and the original images.
Keywords: Colour image segmentation, fuzzy set theory, multi-level activation functions, parallel self-organizing neural network.
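For illustration, a minimal sketch of a multilevel activation of the kind described above, assuming a sum of shifted sigmoids; the paper's exact functional form and parameter values are not given in the abstract:

```python
import numpy as np

def multilevel_sigmoid(x, levels=4, steepness=20.0):
    """Staircase-like activation: a sum of shifted sigmoids maps inputs in
    [0, 1] onto `levels` graded responses, so the component network's output
    is a color channel segmented into `levels` shades (target classes)."""
    out = np.zeros_like(x, dtype=float)
    for k in range(1, levels):
        out += 1.0 / (1.0 + np.exp(-steepness * (x - k / levels)))
    return out / (levels - 1)  # normalize the response back to [0, 1]

# Example: normalized red-channel intensities quantized into 4 shades
shades = multilevel_sigmoid(np.array([0.05, 0.30, 0.55, 0.90]))
```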
1390 Application of Wavelet Neural Networks in Optimization of Skeletal Buildings under Frequency Constraints
Authors: Mohammad Reza Ghasemi, Amin Ghorbani
Abstract:
The main goal of the present work is to decrease the computational burden of the optimum design of steel frames with frequency constraints by using a new type of neural network called the Wavelet Neural Network (WNN). The idea is to train a suitable neural network to take over the frequency approximation work of the analysis program. The combination of wavelet theory and Neural Networks (NN) has led to the development of wavelet neural networks: feed-forward networks that use wavelets as activation functions. Wavelets are mathematical functions with suitable inner parameters, which help them approximate arbitrary functions. The WNN was used to predict the frequency of the structures. In the WNN, a RAtional function with Second order Poles (RASP) wavelet was used as the transfer function. It is shown that the convergence speed was faster than that of other neural networks. Comparisons of the WNN with the embedded Artificial Neural Network (ANN), with approximate techniques, and with analytical solutions available in the literature are also presented.
Keywords: Weight Minimization, Frequency Constraints, Steel Frames, ANN, WNN, RASP Function.
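A sketch of a wavelet-network hidden layer with one commonly cited RASP mother wavelet, phi(u) = u / (u^2 + 1)^2; the abstract does not specify which member of the RASP family the paper uses, so this particular form is an assumption:

```python
import numpy as np

def rasp_wavelet(x, translation=0.0, dilation=1.0):
    # RASP-style rational wavelet with second-order poles (assumed form)
    u = (x - translation) / dilation
    return u / (u**2 + 1.0) ** 2

def wnn_forward(x, out_weights, translations, dilations):
    """Feed-forward wavelet network: hidden units are dilated/translated
    wavelets whose responses are linearly combined into the frequency
    estimate."""
    hidden = rasp_wavelet(x[:, None], translations, dilations)  # (n, units)
    return hidden @ out_weights                                 # (n,)
```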
1389 Hidden Markov Model for the Simulation Study of Neural States and Intentionality
Authors: R. B. Mishra
Abstract:
The Hidden Markov Model (HMM) has been used in the prediction and determination of states that generate different neural activations as well as mental working conditions. This paper addresses two applications of the HMM. The first determines the optimal sequence over two neural states, Active (AC) and Inactive (IA), given three emissions (observations): the No Working (NW), Waiting (WT) and Working (W) conditions of human beings. The second determines the optimal sequence of intentionality, with Belief (B), Desire (D), and Intention (I) as the states and the same three observations: NW, WT and W. The computational results are encouraging and useful.
Keywords: BDI, HMM, neural activation, optimal states, working conditions.
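As an illustration of the first application, a minimal Viterbi decoder over the AC/IA states and NW/WT/W observations; all probability values below are hypothetical placeholders, since the abstract reports no parameters:

```python
import numpy as np

states = ["AC", "IA"]              # Active / Inactive neural states
obs_names = ["NW", "WT", "W"]      # No Working / Waiting / Working
start = np.array([0.6, 0.4])       # hypothetical initial probabilities
trans = np.array([[0.7, 0.3],      # hypothetical P(next | current)
                  [0.4, 0.6]])
emit = np.array([[0.1, 0.3, 0.6],  # hypothetical P(obs | AC)
                 [0.6, 0.3, 0.1]]) # hypothetical P(obs | IA)

def viterbi(obs):
    """Most probable state sequence for a list of observation indices."""
    v = start * emit[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = v[:, None] * trans * emit[None, :, o]
        back.append(scores.argmax(axis=0))   # best predecessor per state
        v = scores.max(axis=0)
    path = [int(v.argmax())]
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    return [states[i] for i in reversed(path)]

print(viterbi([0, 1, 2, 2]))  # decode the sequence NW, WT, W, W
```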
1388 An Empirical Study on Switching Activation Functions in Shallow and Deep Neural Networks
Authors: Apoorva Vinod, Archana Mathur, Snehanshu Saha
Abstract:
Though there exists a plethora of Activation Functions (AFs) used in single and multiple hidden layer Neural Networks (NN), their behavior has always raised curiosity, whether used in combination or singly. The popular AFs – Sigmoid, ReLU, and Tanh – have performed prominently well for shallow and deep architectures. Most of the time, AFs are used singly in multi-layered NN, and, to the best of our knowledge, their performance has never been studied and analyzed deeply when used in combination. In this manuscript, we experiment on multi-layered NN architectures (both shallow and deep: Convolutional NN and VGG16) and investigate how well the network responds to two different AFs (Sigmoid-Tanh, Tanh-ReLU, ReLU-Sigmoid) used alternately, against a traditional single-AF combination (Sigmoid-Sigmoid, Tanh-Tanh, ReLU-ReLU). Our results show that on using two different AFs, the network achieves better accuracy, substantially lower loss, and faster convergence on 4 computer vision (CV) and 15 non-CV (NCV) datasets. When using different AFs, not only was the accuracy greater by 6-7%, but convergence was also accomplished twice as fast. We present a case study investigating the probability of networks suffering vanishing and exploding gradients when using two different AFs. Additionally, we show theoretically that a composition of two or more AFs satisfies the Universal Approximation Theorem (UAT).
Keywords: Activation Function, Universal Approximation function, Neural Networks, convergence.
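A minimal sketch of the alternating-AF idea in PyTorch; the layer sizes below are illustrative, not the paper's:

```python
import torch.nn as nn

def alternating_mlp(sizes, acts=(nn.Tanh, nn.ReLU)):
    """MLP whose hidden layers alternate between two activation functions
    (here Tanh-ReLU) instead of repeating a single AF throughout."""
    layers = []
    for i in range(len(sizes) - 2):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        layers.append(acts[i % len(acts)]())  # switch the AF layer by layer
    layers.append(nn.Linear(sizes[-2], sizes[-1]))
    return nn.Sequential(*layers)

model = alternating_mlp([784, 256, 128, 64, 10])
```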
1387 Existence and Stability Analysis of Discrete-time Fuzzy BAM Neural Networks with Delays and Impulses
Authors: Chao Wang, Yongkun Li
Abstract:
In this paper, the discrete-time fuzzy BAM neural network with delays and impulses is studied. Sufficient conditions are obtained for the existence and global stability of a unique equilibrium of this class of fuzzy BAM neural networks with Lipschitzian activation functions without assuming their boundedness, monotonicity or differentiability and subjected to impulsive state displacements at fixed instants of time. Some numerical examples are given to demonstrate the effectiveness of the obtained results.
Keywords: Discrete-time fuzzy BAM neural networks, impulses, global exponential stability, global asymptotical stability, equilibrium point.
1386 Facial Emotion Recognition with Convolutional Neural Network Based Architecture
Authors: Koray U. Erbas
Abstract:
Neural networks are appealing for many applications since they are able to learn complex non-linear relationships between input and output data. As the number of neurons and layers in a neural network increases, it becomes possible to represent more complex relationships with automatically extracted features. Nowadays, Deep Neural Networks (DNNs) are widely used in Computer Vision problems such as classification, object detection, segmentation and image editing. In this work, the Facial Emotion Recognition task is performed with a proposed Convolutional Neural Network (CNN)-based DNN architecture using the FER2013 dataset. Moreover, the effects of different hyperparameters (activation function, kernel size, initializer, batch size and network size) are investigated, and ablation study results for the Pooling Layer, Dropout and Batch Normalization are presented.
Keywords: Convolutional Neural Network, Deep Learning, Deep Learning Based FER, Facial Emotion Recognition.
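A small CNN of the kind such a study varies; the depth, kernel sizes and other hyperparameters below are illustrative assumptions, not the reported architecture (FER2013 images are 48x48 grayscale, with 7 emotion classes):

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
    nn.MaxPool2d(2),                         # 48 -> 24
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.MaxPool2d(2),                         # 24 -> 12
    nn.Flatten(),
    nn.Dropout(0.5),                         # one of the ablated components
    nn.Linear(64 * 12 * 12, 7),              # 7 emotion classes
)
```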
1385 A New Robust Stability Criterion for Dynamical Neural Networks with Mixed Time Delays
Authors: Guang Zhou, Shouming Zhong
Abstract:
In this paper, we investigate the problem of the existence, uniqueness and global asymptotic stability of the equilibrium point for a class of neural networks whose neutral system has mixed time delays and parameter uncertainties. Under the assumption that the activation functions are globally Lipschitz continuous, we derive a new criterion for the robust stability of this class of neural networks with time delays by utilizing the Lyapunov stability theorems and the homomorphic mapping theorem. Numerical examples are given to illustrate the effectiveness and the advantage of the proposed main results.
Keywords: Neural networks, Delayed systems, Lyapunov function, Stability analysis.
1384 Complex-Valued Neural Network in Signal Processing: A Study on the Effectiveness of Complex Valued Generalized Mean Neuron Model
Authors: Anupama Pande, Ashok Kumar Thakur, Swapnoneel Roy
Abstract:
A complex valued neural network is a neural network which consists of complex valued input and/or weights and/or thresholds and/or activation functions. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of the complex valued neural network is in signal processing. In neural networks, the generalized mean neuron model (GMN) is often discussed and studied. The GMN includes a new aggregation function based on the concept of the generalized mean of all the inputs to the neuron. This paper aims to present exhaustive results of using the Generalized Mean Neuron model in a complex-valued neural network model that uses the back-propagation algorithm (called 'Complex-BP') for learning. Our experimental results demonstrate the effectiveness of a Generalized Mean Neuron model in the complex plane for signal processing over a real valued neural network. We have studied and stated various observations, such as the effect of learning rates, the ranges of the randomly selected initial weights, the error functions used and the number of iterations required for error convergence, on a Generalized Mean neural network model. Some inherent properties of this complex back propagation algorithm are also studied and discussed.
Keywords: Complex valued neural network, Generalized Mean neuron model, Signal processing.
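A sketch of the generalized-mean aggregation that the GMN substitutes for the plain weighted sum; the exact parameterization in the paper may differ, so this form is an assumption:

```python
import numpy as np

def gmn_aggregate(x, w, p=2.0):
    """Generalized mean neuron aggregation: (sum_i w_i * x_i**p)**(1/p).
    p = 1 recovers the ordinary weighted sum; passing complex x and w
    gives the complex-valued variant studied in the paper."""
    return np.sum(w * np.power(x, p)) ** (1.0 / p)

# e.g. a complex-valued neuron with two inputs
y = gmn_aggregate(np.array([0.2 + 0.1j, 0.5 - 0.3j]), np.array([0.6, 0.4]))
```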
1383 Complex-Valued Neural Network in Image Recognition: A Study on the Effectiveness of Radial Basis Function
Authors: Anupama Pande, Vishik Goel
Abstract:
A complex valued neural network is a neural network which consists of complex valued input and/or weights and/or thresholds and/or activation functions. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of the complex valued neural network is in image and vision processing. In neural networks, radial basis functions are often used for interpolation in multidimensional space. A radial basis function is a function which has built into it a distance criterion with respect to a centre. Radial basis functions have often been applied in the area of neural networks, where they may be used as a replacement for the sigmoid hidden layer transfer characteristic in multi-layer perceptrons. This paper aims to present exhaustive results of using RBF units in a complex-valued neural network model that uses the back-propagation algorithm (called 'Complex-BP') for learning. Our experimental results demonstrate the effectiveness of a radial basis function in a complex valued neural network for image recognition over a real valued neural network. We have studied and stated various observations, such as the effect of learning rates, the ranges of the randomly selected initial weights, the error functions used and the number of iterations required for error convergence, on a neural network model with RBF units. Some inherent properties of this complex back propagation algorithm are also studied and discussed.
Keywords: Complex valued neural network, Radial Basis Function, Image recognition.
1382 Ultrasound-Assisted Pd Activation Process for Electroless Silver Plating
Authors: Chang-Myeon Lee, Min-Hyung Lee, Jin-Young Hur, Ho-Nyun Lee, Hong-Kee Lee
Abstract:
An ultrasound-assisted activation method for electroless silver plating is presented in this study. When ultrasound was applied during the activation step, the amount of Pd species adsorbed on the substrate surfaces was higher than that of a sample pretreated with a conventional activation process without ultrasound irradiation. With this activation method, it was also shown that the adsorbed Pd species, with a size of about 5 nm, were uniformly distributed on the surfaces; thus a smooth and uniform coating was obtained by the subsequent electroless silver plating. The samples after each step were characterized by AFM, XPS, FIB, and SEM.
Keywords: Cavitation, Electroless silver, Pd activation, Ultrasonic.
1381 Complex-Valued Neural Networks for Blind Equalization of Time-Varying Channels
Authors: Rajoo Pandey
Abstract:
Most of the commonly used blind equalization algorithms are based on the minimization of a nonconvex and nonlinear cost function, and a neural network gives a smaller residual error as compared to a linear structure. The efficacy of complex valued feedforward neural networks for blind equalization of linear and nonlinear communication channels has been confirmed by many studies. In this paper we present two neural network models for blind equalization of time-varying channels, for M-ary QAM and PSK signals. Complex valued activation functions, suitable for these signal constellations in a time-varying environment, are introduced, and learning algorithms based on the CMA cost function are derived. The improved performance of the proposed models is confirmed through computer simulations.
Keywords: Blind Equalization, Neural Networks, Constant Modulus Algorithm, Time-varying channels.
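A minimal sketch of the Constant Modulus Algorithm cost that such blind equalizers minimize, here on a plain linear FIR filter rather than the paper's neural models; the step size, tap count and modulus constant are illustrative:

```python
import numpy as np

def cma_equalizer(received, n_taps=11, mu=1e-3, modulus=1.0):
    """Blind equalization by stochastic-gradient minimization of the CMA
    cost J = E[(|y|^2 - R2)^2]; no training symbols are needed."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                      # center-spike initialization
    out = np.empty(len(received) - n_taps, dtype=complex)
    for n in range(len(out)):
        x = received[n:n + n_taps][::-1]      # regressor, most recent first
        y = np.dot(w, x)
        e = y * (np.abs(y) ** 2 - modulus)    # CMA error term
        w -= mu * e * np.conj(x)              # stochastic gradient step
        out[n] = y
    return out, w
```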
1380 Performance Evaluation of Complex Valued Neural Networks Using Various Error Functions
Authors: Anita S. Gangal, P. K. Kalra, D. S. Chauhan
Abstract:
The backpropagation algorithm in general employs a quadratic error function; in fact, most problems that involve minimization employ the quadratic error function. With alternative error functions, the performance of the optimization scheme can be improved. The new error functions help in suppressing the ill-effects of outliers and have shown good robustness to noise. In this paper we evaluate and compare the relative performance of complex valued neural networks using different error functions. In the first simulation, for the complex XOR gate, it is observed that some error functions, such as the Absolute and Cauchy error functions, can replace the quadratic error function. In the second simulation it is observed that for some error functions the performance of the complex valued neural network depends on the architecture of the network, whereas with a few other error functions the convergence speed of the network is independent of the architecture of the neural network.
Keywords: Complex backpropagation algorithm, complex error functions, complex valued neural network, split activation function.
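Sketches of three error-function families of the kind compared, applied to a (possibly complex) output error e; the exact definitions and scale constants used in the paper may differ:

```python
import numpy as np

def quadratic_loss(e):
    return 0.5 * np.abs(e) ** 2       # the standard backpropagation cost

def absolute_loss(e):
    return np.abs(e)                  # linear growth, milder on outliers

def cauchy_loss(e, c=1.0):
    # grows only logarithmically, strongly suppressing outliers;
    # the scale constant c is an assumption
    return 0.5 * c**2 * np.log(1.0 + (np.abs(e) / c) ** 2)
```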
1379 A Combined Neural Network Approach to Soccer Player Prediction
Authors: Wenbin Zhang, Hantian Wu, Jian Tang
Abstract:
An artificial neural network is a mathematical model inspired by biological neural networks. There are several kinds of neural networks, and they are widely used in many areas, such as prediction, detection, and classification. Meanwhile, in day-to-day life, people always have to make many difficult decisions. For example, the coach of a soccer club has to decide which offensive player to select to play in a certain game. This work describes a novel neural network that combines the General Regression Neural Network and the Probabilistic Neural Network to help a soccer coach make an informed decision.
Keywords: General Regression Neural Network, Probabilistic Neural Networks, Neural function.
1378 A Self Supervised Bi-directional Neural Network (BDSONN) Architecture for Object Extraction Guided by Beta Activation Function and Adaptive Fuzzy Context Sensitive Thresholding
Authors: Siddhartha Bhattacharyya, Paramartha Dutta, Ujjwal Maulik, Prashanta Kumar Nandi
Abstract:
A multilayer self organizing neural network (MLSONN) architecture for binary object extraction, guided by a beta activation function and characterized by backpropagation of errors estimated from the linear indices of fuzziness of the network output states, is discussed. Since the MLSONN architecture is designed to operate in a single point fixed/uniform thresholding scenario, it does not take into cognizance the heterogeneity of image information in the extraction process. The performance of the MLSONN architecture with representative values of the threshold parameters of the beta activation function employed is also studied. A three layer bidirectional self organizing neural network (BDSONN) architecture comprising fully connected neurons, for the extraction of objects from a noisy background and capable of incorporating the underlying image context heterogeneity through variable and adaptive thresholding, is proposed in this article. The input layer of the network architecture represents the fuzzy membership information of the image scene to be extracted. The second (intermediate) layer and the final (output) layer of the network architecture deal with the self supervised object extraction task by bi-directional propagation of the network states. Each layer except the output layer is connected to the next layer following a neighborhood based topology. The output layer neurons are, in turn, connected to the intermediate layer following a similar topology, thus forming a counter-propagating architecture with the intermediate layer. The novelty of the proposed architecture is that the assignment/updating of the inter-layer connection weights is done using the relative fuzzy membership values at the constituent neurons in the different network layers. Another interesting feature of the network lies in the fact that the processing capabilities of the intermediate and the output layer neurons are guided by a beta activation function, which uses image context sensitive adaptive thresholding arising out of the fuzzy cardinality estimates of the different network neighborhood fuzzy subsets, rather than resorting to fixed and single point thresholding. An application of the proposed architecture for object extraction is demonstrated using a synthetic and a real life image. The extraction efficiency of the proposed network architecture is evaluated by a proposed system transfer index characteristic of the network.
Keywords: Beta activation function, fuzzy cardinality, multilayer self organizing neural network, object extraction.
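A minimal sketch of the linear index of fuzziness that drives the error estimates in both the MLSONN and BDSONN schemes, in its standard form:

```python
import numpy as np

def linear_index_of_fuzziness(mu):
    """Linear index of fuzziness of a fuzzy set with membership values mu
    in [0, 1]: the normalized distance from its nearest crisp set. Values
    near 0.5 contribute most; it serves here as the system error guiding
    the self-supervised updates."""
    mu = np.asarray(mu, dtype=float)
    return (2.0 / mu.size) * np.minimum(mu, 1.0 - mu).sum()

# A nearly crisp output has a small index, an ambiguous one a large index
print(linear_index_of_fuzziness([0.95, 0.05, 0.9]))  # small
print(linear_index_of_fuzziness([0.5, 0.4, 0.6]))    # large
```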
1377 An Improved Learning Algorithm based on the Conjugate Gradient Method for Back Propagation Neural Networks
Authors: N. M. Nawi, M. R. Ransing, R. S. Ransing
Abstract:
The conjugate gradient optimization algorithm, usually used for nonlinear least squares, is presented and combined with the modified back propagation algorithm, yielding a new fast training multilayer perceptron (MLP) algorithm (CGFR/AG). The approach presented in the paper consists of three steps: (1) modification of the standard back propagation algorithm by introducing a gain variation term in the activation function, (2) calculation of the gradient descent of the error with respect to the weights and gain values, and (3) determination of the new search direction by exploiting the information calculated by gradient descent in step (2) as well as the previous search direction. The proposed method improves the training efficiency of the back propagation algorithm by adaptively modifying the initial search direction. The performance of the proposed method is demonstrated by comparison with the conjugate gradient algorithm from the neural network toolbox on the chosen benchmark. The results show that the number of iterations required by the proposed method to converge is less than 20% of what is required by the standard conjugate gradient and neural network toolbox algorithms.
Keywords: Back-propagation, activation function, conjugate gradient, search direction, gain variation.
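A sketch of step (3), the conjugate search-direction update, using the Fletcher-Reeves beta (the 'FR' in CGFR); the adaptive-gain rescaling from steps (1) and (2) is omitted here:

```python
import numpy as np

def cg_direction(grad, prev_grad, prev_dir):
    """d_new = -g + beta * d_prev with the Fletcher-Reeves beta.
    grad and prev_grad are the error gradients with respect to all
    weights and gains, flattened into vectors."""
    beta = (grad @ grad) / (prev_grad @ prev_grad)
    return -grad + beta * prev_dir

d = cg_direction(np.array([0.2, -0.1]),
                 np.array([0.3, 0.1]),
                 np.array([-0.3, -0.1]))
```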
1376 Periodic Solutions of Recurrent Neural Networks with Distributed Delays and Impulses on Time Scales
Authors: Yaping Ren, Yongkun Li
Abstract:
In this paper, by using the continuation theorem of coincidence degree theory and M-matrix theory, and by constructing some suitable Lyapunov functions, some sufficient conditions are obtained for the existence and global exponential stability of periodic solutions of recurrent neural networks with distributed delays and impulses on time scales. Without assuming the boundedness of the activation functions gj and hj, these results are less restrictive than those given in the earlier references.
Keywords: Recurrent neural networks, global exponential stability, periodic solutions, distributed delays, impulses, time scales.
1375 An Improved Conjugate Gradient Based Learning Algorithm for Back Propagation Neural Networks
Authors: N. M. Nawi, R. S. Ransing, M. R. Ransing
Abstract:
The conjugate gradient optimization algorithm is combined with the modified back propagation algorithm to yield a computationally efficient algorithm for training multilayer perceptron (MLP) networks (CGFR/AG). The computational efficiency is enhanced by adaptively modifying the initial search direction as described in the following steps: (1) modification of the standard back propagation algorithm by introducing a gain variation term in the activation function, (2) calculation of the gradient descent of the error with respect to the weights and gain values, and (3) determination of a new search direction by using the information calculated in step (2). The performance of the proposed method is demonstrated by comparing accuracy and computation time with those of the conjugate gradient algorithm used in the MATLAB neural network toolbox. The results show that the computational efficiency of the proposed method is better than that of the standard conjugate gradient algorithm.
Keywords: Adaptive gain variation, back-propagation, activation function, conjugate gradient, search direction.
1374 Investigation of Improved Chaotic Signal Tracking by Echo State Neural Networks and Multilayer Perceptron via Training of Extended Kalman Filter Approach
Authors: Farhad Asadi, S. Hossein Sadati
Abstract:
This paper presents the prediction performance of the feedforward Multilayer Perceptron (MLP) and Echo State Networks (ESN) trained with the extended Kalman filter. Feedforward neural networks and ESNs are powerful neural networks which can track and predict nonlinear signals. However, their tracking performance depends on the specific signals or data sets, with a risk of instability accompanied by large errors. In this study we explore this process by applying different network sizes and leaking rates for the prediction of nonlinear or chaotic signals in MLP neural networks. Major problems of ESN training, such as the initialization of the network and improvement of the prediction performance, are tackled. The influence of the coefficient of the activation function in the hidden layer and of other key parameters is investigated through simulation results. The extended Kalman filter is employed in order to improve the sequential and regulation learning rate of the feedforward neural networks. This training approach has vital features for training the network when the signals have a chaotic or non-stationary sequential pattern. Examination of the results showed minimization of the variance in each step of the computation and hence smoother tracking, indicating satisfactory tracking characteristics under certain conditions. In addition, the simulation results confirmed the satisfactory performance of both neural networks with modified parameterization in tracking the nonlinear signals.
Keywords: Feedforward neural networks, nonlinear signal prediction, echo state neural networks approach, leaking rates, capacity of neural networks.
1373 Evaluating Generative Neural Attention Weights-Based Chatbot on Customer Support Twitter Dataset
Authors: Sinarwati Mohamad Suhaili, Naomie Salim, Mohamad Nazim Jambli
Abstract:
Sequence-to-sequence (seq2seq) models augmented with attention mechanisms are increasingly important in automated customer service. These models, adept at recognizing complex relationships between input and output sequences, are essential for optimizing chatbot responses. Central to these mechanisms are neural attention weights that determine the model’s focus during sequence generation. Despite their widespread use, there remains a gap in the comparative analysis of different attention weighting functions within seq2seq models, particularly in the context of chatbots utilizing the Customer Support Twitter (CST) dataset. This study addresses this gap by evaluating four distinct attention-scoring functions—dot, multiplicative/general, additive, and an extended multiplicative function with a tanh activation parameter — in neural generative seq2seq models. Using the CST dataset, these models were trained and evaluated over 10 epochs with the AdamW optimizer. Evaluation criteria included validation loss and BLEU scores implemented under both greedy and beam search strategies with a beam size of k = 3. Results indicate that the model with the tanh-augmented multiplicative function significantly outperforms its counterparts, achieving the lowest validation loss (1.136484) and the highest BLEU scores (0.438926 under greedy search, 0.443000 under beam search, k = 3). These findings emphasize the crucial influence of selecting an appropriate attention-scoring function to enhance the performance of seq2seq models for chatbots, particularly highlighting the model integrating tanh activation as a promising approach to improving chatbot quality in customer support contexts.
Keywords: Attention weight, chatbot, encoder-decoder, neural generative attention, score function, sequence-to-sequence.
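The four attention-scoring variants in minimal form; the tensor shapes and the reading of the "extended multiplicative function with a tanh activation parameter" as tanh applied to the general score are assumptions based on the abstract:

```python
import torch

def attention_scores(dec_h, enc_h, variant, W=None, v=None, Wq=None, Wk=None):
    """dec_h: (d,) decoder state; enc_h: (T, d) encoder states.
    W, v, Wq, Wk are trainable parameters with illustrative shapes:
    W: (d, d), Wq: (a, d), Wk: (a, d), v: (a,)."""
    if variant == "dot":
        return enc_h @ dec_h                              # (T,)
    if variant == "general":                              # multiplicative
        return enc_h @ (W @ dec_h)
    if variant == "additive":                             # Bahdanau-style
        return torch.tanh(Wq @ dec_h + enc_h @ Wk.T) @ v
    if variant == "general_tanh":                         # extended multiplicative
        return torch.tanh(enc_h @ (W @ dec_h))
    raise ValueError(variant)
```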
1372 Detection of Actuator Faults for an Attitude Control System using Neural Network
Authors: S. Montenegro, W. Hu
Abstract:
The objective of this paper is to develop a neural network-based residual generator to detect faults in the actuators of a specific communication satellite's attitude control system (ACS). First, a dynamic multilayer perceptron network with dynamic neurons is used; each of these neurons corresponds to a second order linear Infinite Impulse Response (IIR) filter and a nonlinear activation function with adjustable parameters. Second, the parameters of the network are adjusted to minimize a performance index specified by the output estimation error, with the given input-output data collected from the specific ACS. The proposed dynamic neural network is then trained and applied to detect faults injected into the wheel, which is the main actuator in the normal mode of the communication satellite. The performance and capabilities of the proposed network were tested and compared with a conventional model-based observer residual, showing the differences between these two methods and indicating the benefit of the proposed algorithm in knowing the real status of the momentum wheel. Finally, the application of the methods in a satellite ground station is discussed.
Keywords: Satellite, Attitude Control, Momentum Wheel, Neural Network, Fault Detection.
1371 Reactive Neural Control for Phototaxis and Obstacle Avoidance Behavior of Walking Machines
Authors: Poramate Manoonpong, Frank Pasemann, Florentin Wörgötter
Abstract:
This paper describes reactive neural control used to generate phototaxis and obstacle avoidance behavior of walking machines. It utilizes discrete-time neurodynamics and consists of two main neural modules: neural preprocessing and modular neural control. The neural preprocessing network acts as a sensory fusion unit. It filters sensory noise and shapes sensory data to drive the corresponding reactive behavior. On the other hand, modular neural control based on a central pattern generator is applied for locomotion of walking machines. It coordinates leg movements and can generate omnidirectional walking. As a result, through a sensorimotor loop this reactive neural controller enables the machines to explore a dynamic environment by avoiding obstacles, turning toward a light source, and then stopping near it.
Keywords: Recurrent neural networks, Walking robots, Modular neural control, Phototaxis, Obstacle avoidance behavior.
1370 The Effect of Deformation Activation Volume, Strain Rate Sensitivity and Processing Temperature of Grain Size Variants
Authors: P. B. Sob, A. A. Alugongo, T. B. Tengen
Abstract:
The activation volume of 6082T6 aluminum is investigated at different temperatures for grain size variants. The deformation activation volume was computed on the basis of the relationship between the Boltzmann constant k, the testing temperature, the material strain rate sensitivity and the material yield stress for the grain size variants. The material strain rate sensitivity is computed as a function of the yield stress and strain rate for the grain size variants. The effects of the material strain rate sensitivity and the deformation activation volume of 6082T6 aluminum at different temperatures for 3-D grains are discussed. It is shown that the strain rate sensitivities and activation volumes are negative for the grain size variants during the deformation of nanostructured materials. It is also observed that the activation volume varies in different ways with the equivalent radius, the semi-minor axis radius, the semi-major axis radius and the major axis radius. The obtained results show that the variation of the activation volume both increases and decreases with the testing temperature. It was revealed that an increase in strain rate sensitivity led to a decrease in activation volume, whereas an increase in activation volume led to a decrease in strain rate sensitivity.
Keywords: Nanostructured materials, grain size variants, temperature, yield stress, strain rate sensitivity, activation volume.
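A commonly used form of the relation the abstract invokes between these quantities is, in LaTeX notation (an assumption; the paper's exact prefactor may differ):

```latex
m = \frac{\partial \ln \sigma_y}{\partial \ln \dot{\varepsilon}}\Big|_{T},
\qquad
V^{*} = \frac{\sqrt{3}\, k T}{m\, \sigma_y}
```

where m is the strain rate sensitivity, k the Boltzmann constant, T the testing temperature, sigma_y the yield stress and V* the apparent activation volume; a negative m then yields a negative V*, consistent with the inverse relationship reported above.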
1369 A Fast Neural Algorithm for Serial Code Detection in a Stream of Sequential Data
Authors: Hazem M. El-Bakry, Qiangfu Zhao
Abstract:
In recent years, fast neural networks for object/face detection have been introduced, based on cross correlation in the frequency domain between the input matrix and the hidden weights of the neural networks. In our previous papers [3,4], fast neural networks for certain code detection were introduced. It was proved in [10] that for fast neural networks to give the same correct results as conventional neural networks, both the weights of the neural networks and the input matrix must be symmetric. This condition made those fast neural networks slower than conventional neural networks. Another symmetric form for the input matrix was introduced in [1-9] to speed up the operation of these fast neural networks. Here, corrections to the cross correlation equations (given in [13,15,16]) to compensate for the symmetry condition are presented. After these corrections, it is proved mathematically that the number of computation steps required by fast neural networks is less than that needed by classical neural networks. Furthermore, there is no need to convert the input data into symmetric form. Moreover, this new idea is applied to increase the speed of neural networks when processing complex values. Simulation results after these corrections using MATLAB confirm the theoretical computations.
Keywords: Fast Code/Data Detection, Neural Networks, Cross Correlation, real/complex values.
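The core frequency-domain speed-up, sketched with NumPy: correlating an input stream of length N with a neuron's weight template of length M via the FFT costs O(N log N) rather than the O(N*M) of direct sliding correlation:

```python
import numpy as np

def xcorr_fft(signal, template):
    """Cross-correlation via the frequency domain: multiply the signal
    spectrum by the conjugate template spectrum and transform back."""
    n = len(signal) + len(template) - 1
    S = np.fft.fft(signal, n)
    T = np.fft.fft(template, n)
    # conjugation turns circular convolution into correlation;
    # real part taken assuming real-valued inputs
    return np.fft.ifft(S * np.conj(T)).real
```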
1368 The Multi-Layered Perceptrons Neural Networks for the Prediction of Daily Solar Radiation
Authors: Radouane Iqdour, Abdelouhab Zeroual
Abstract:
Multi-Layered Perceptron (MLP) neural networks have been very successful in a number of signal processing applications. In this work we study the possibilities and the difficulties encountered in applying MLP neural networks to the prediction of daily solar radiation data. We have used the Polak-Ribière algorithm for training the neural networks. A comparison, in terms of statistical indicators, with a linear model most used in the literature is also performed, and the obtained results show that the neural networks are more efficient and give the best results.
Keywords: Daily solar radiation, Prediction, MLP neural networks, linear model.
1367 Increasing The Speed of Convergence of an Artificial Neural Network based ARMA Coefficients Determination Technique
Authors: Abiodun M. Aibinu, Momoh J. E. Salami, Amir A. Shafie, Athaur Rahman Najeeb
Abstract:
In this paper, novel techniques for increasing the accuracy and speed of convergence of a Feedforward Back propagation Artificial Neural Network (FFBPNN) with a polynomial activation function, as reported in the literature, are presented. These techniques were subsequently used to determine the coefficients of Autoregressive Moving Average (ARMA) and Autoregressive (AR) systems. The results obtained by introducing sequential and batch methods of weight initialization, a batch method of weight and coefficient update, and adaptive momentum and learning rate techniques give more accurate results and a significant reduction in convergence time when compared to the traditional back propagation algorithm, thereby making the FFBPNN an appropriate technique for online ARMA coefficient determination.
Keywords: Adaptive Learning rate, Adaptive momentum, Autoregressive, Modeling, Neural Network.
1366 2^n Almost Periodic Attractors for Cohen-Grossberg Neural Networks with Variable and Distributed Delays
Abstract:
In this paper, we investigate the dynamics of 2^n almost periodic attractors for Cohen-Grossberg neural networks (CGNNs) with variable and distributed time delays. By imposing some new assumptions on the activation functions and system parameters, we split the invariant basin of the CGNNs into 2^n compact convex subsets. The existence of 2^n almost periodic solutions lying in these compact convex subsets is then attained by employing the theory of exponential dichotomy and Schauder's fixed point theorem. Meanwhile, we derive some new criteria for the networks to converge toward these 2^n almost periodic solutions, and exponentially attracting domains are also given correspondingly.
Keywords: CGNNs, almost periodic solution, invariant basins, attracting domains.
1365 Estimation of the Moisture Diffusivity and Activation Energy in Thin Layer Drying of Ginger Slices
Authors: Ebru Kavak Akpinar, Seda Toraman
Abstract:
In the present work, the effective moisture diffusivity and activation energy were calculated using an infinite series solution of Fick's diffusion equation. The results showed that increasing the drying temperature accelerated the drying process. All drying experiments had only a falling rate period. The average effective moisture diffusivity values varied from 2.807×10^-10 to 6.977×10^-10 m^2 s^-1 over the temperature and velocity range. The temperature dependence of the effective moisture diffusivity for the thin layer drying of the ginger slices was satisfactorily described by an Arrhenius-type relationship, with activation energy values of 19.313-22.722 kJ mol^-1 within the 40-70 °C temperature and 0.8-3 m s^-1 velocity range.
Keywords: Ginger, Drying, Activation energy, Moisture diffusivity.
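A sketch of the Arrhenius-type fit; the two diffusivity endpoints come from the abstract, but their pairing with particular temperatures is an assumption (the reported range also spans air velocities), so the fitted value only illustrates the method:

```python
import numpy as np

R = 8.314                             # gas constant, J mol^-1 K^-1
T = np.array([313.15, 343.15])        # 40 and 70 deg C in K (assumed pairing)
D = np.array([2.807e-10, 6.977e-10])  # m^2 s^-1, from the reported range

# Arrhenius form: D_eff = D0 * exp(-Ea / (R*T)),
# so the slope of ln(D_eff) versus 1/T equals -Ea/R
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
Ea = -slope * R                       # activation energy, J mol^-1
D0 = np.exp(intercept)                # pre-exponential factor, m^2 s^-1
print(f"Ea ~ {Ea / 1000:.1f} kJ/mol, D0 ~ {D0:.2e} m^2/s")
```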
1364 Odor Discrimination Using Neural Decoding of Olfactory Bulbs in Rats
Authors: K.-J. You, H.J. Lee, Y. Lang, C. Im, C.S. Koh, H.-C. Shin
Abstract:
This paper presents a novel method for inferring odors based on neural activities observed from rats' main olfactory bulbs. Multi-channel extra-cellular single unit recordings were made with micro-wire electrodes (tungsten, 50 μm, 32 channels) implanted in the mitral/tufted cell layers of the main olfactory bulb of anesthetized rats to obtain neural responses to various odors. The neural response used as the key feature was measured by subtracting the neural firing rate before the stimulus from that after it. For odor inference, we developed a decoding method based on maximum likelihood (ML) estimation. The results show that the average decoding accuracies are about 100.0%, 96.0%, 84.0%, and 100.0% for the four rats, respectively. This work has profound implications for a novel brain-machine interface system for odor inference.
Keywords: biomedical signal processing, neural engineering, olfactory, neural decoding, BMI.
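A sketch of ML decoding from firing-rate changes; the Gaussian response model and the independence across units are assumptions, as the abstract does not state the likelihood family used:

```python
import numpy as np

def ml_decode(response, means, stds):
    """Pick the odor whose model assigns the response vector the highest
    likelihood. `response` holds one firing-rate change per recorded unit
    (post-stimulus minus pre-stimulus rate); `means` and `stds` are
    (n_odors, n_units) statistics estimated from training trials."""
    # per-odor Gaussian log-likelihood, summed over independent units
    log_lik = (-0.5 * ((response - means) / stds) ** 2
               - np.log(stds)).sum(axis=1)
    return int(np.argmax(log_lik))  # index of the most likely odor
```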
1363 A Cognitive Model for Frequency Signal Classification
Authors: Rui Antunes, Fernando V. Coito
Abstract:
This article presents the development of a neural network cognitive model for the classification and detection of different frequency signals. The basic structure of the implemented neural network was inspired by the perception process that humans generally use to visually distinguish between high and low frequency signals. It is based on the dynamic neural network concept, with delays. A special two-layer feedforward neural net structure was successfully implemented, trained and validated to achieve a minimum target error. Training confirmed that this neural net structure descends and converges to a human perception classification solution, even when starting far away from the target.
Keywords: Neural Networks, Signal Classification, Adaptive Filters, Cognitive Neuroscience.
1362 Applications of Cascade Correlation Neural Networks for Cipher System Identification
Authors: B. Chandra, P. Paul Varghese
Abstract:
Crypto system identification is one of the challenging tasks in cryptanalysis. The paper discusses the possibility of employing neural networks for the identification of cipher systems from cipher texts. A Cascade Correlation Neural Network and a Back Propagation Network have been employed for the identification of cipher systems. Very large collections of cipher texts were generated using a block cipher (Enhanced RC6) and a stream cipher (SEAL). Promising results were obtained in terms of accuracy using both neural network models, but it was observed that the Cascade Correlation Neural Network model performed better than the Back Propagation Network.
Keywords: Back Propagation Neural Networks, Cascade Correlation Neural Network, Crypto systems, Block Cipher, Stream Cipher.