Continuous Functions Modeling with Artificial Neural Network: An Improvement Technique to Feed the Input-Output Mapping

Authors: A. Belayadi, A. Mougari, L. Ait-Gougam, F. Mekideche-Chafa

Abstract:

Artificial neural networks are among the techniques most widely and successfully used for modeling problems. In this study, computing with an artificial neural network (CANN) is proposed. The model is applied to the information processing of a one-dimensional task. We aim to integrate a new method based on a new coding approach for generating the input-output mapping, which relies on increasing the number of neuron units in the last layer. To show the efficiency of the approach under study, a comparison is made between the proposed method of generating the input-output set and the conventional method. The results show that increasing the number of neuron units in the last layer makes it possible to find the optimal network parameters that fit the mapping data. Moreover, it reduces the training time during the computation process, avoiding the need for computers with high memory usage.
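
As a purely illustrative sketch (not the authors' exact coding scheme, which the abstract does not specify), the following Python example fits a one-dimensional continuous function with a one-hidden-layer feedforward network and compares a conventional single output neuron (K = 1) with an enlarged last layer (K > 1), in which the target value is spread over K output units and recovered as their sum. The test function sin(x), the network sizes, the learning rate, and the target-spreading encoding are assumptions made only for illustration.

    # Minimal sketch, not the authors' scheme: a one-hidden-layer feedforward
    # network fit to a 1-D continuous function, where the target value is
    # distributed over K output neurons and recovered as their sum.
    # K = 1 reproduces the conventional single-output mapping; K > 1
    # illustrates the idea of enlarging the last layer.
    import numpy as np

    rng = np.random.default_rng(0)

    def train(K, hidden=20, epochs=5000, lr=0.05):
        # Training data: the 1-D function f(x) = sin(x) on [0, 2*pi].
        x = np.linspace(0.0, 2.0 * np.pi, 50).reshape(-1, 1)
        y = np.sin(x)                              # (N, 1) target values
        t = np.repeat(y / K, K, axis=1)            # spread each target over K units

        # One hidden layer with tanh units and a linear output layer of K units.
        W1 = rng.normal(scale=0.5, size=(1, hidden)); b1 = np.zeros(hidden)
        W2 = rng.normal(scale=0.5, size=(hidden, K)); b2 = np.zeros(K)

        for _ in range(epochs):
            h = np.tanh(x @ W1 + b1)               # hidden activations
            out = h @ W2 + b2                      # K output units
            err = out - t                          # per-unit error
            # Backpropagation of the mean-squared error (full-batch gradient descent).
            gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
            dh = (err @ W2.T) * (1.0 - h ** 2)     # tanh derivative
            gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
            W2 -= lr * gW2; b2 -= lr * gb2
            W1 -= lr * gW1; b1 -= lr * gb1

        # Reconstruct f(x) as the sum of the K output units and report the fit.
        pred = (np.tanh(x @ W1 + b1) @ W2 + b2).sum(axis=1, keepdims=True)
        return np.mean((pred - y) ** 2)

    for K in (1, 4):
        print(f"output units K={K}: final MSE = {train(K):.2e}")

Running the script prints the final mean-squared error for K = 1 and K = 4. The sketch only shows where the extra output units enter the input-output mapping; it does not reproduce the training-time and memory results reported in the paper.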

Keywords: Neural network computing, information processing, input-output mapping, training time, computers with high memory.

Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1126449

