Dynamic Threshold Adjustment Approach For Neural Networks

Authors: Hamza A. Ali, Waleed A. J. Rasheed

Abstract:

The use of neural networks in recognition applications is generally constrained by the inflexibility of their parameters after the training phase: no adaptation is accommodated for input variations that influence the network parameters. This work attempts to design a neural network that includes an additional mechanism for adjusting the threshold values according to input pattern variations. The new approach is based on splitting the whole network into two subnets: a main traditional net and a supportive net. The first produces the required output for trained patterns using predefined settings, while the second generates output dynamically, with the capability of tuning the thresholds for any newly applied input. Two levels of supportive net were studied: one implements an extended additional layer with an adjustable neuronal threshold-setting mechanism, while the other implements an auxiliary net of traditional architecture that dynamically adjusts the threshold values of the main net, which is built in a dual-layer architecture. Experimental results and analysis of the proposed designs are quite satisfactory: the supportive-layer approach achieved a recognition rate of over 90%, while the multiple-network technique reached a more effective and acceptable level of recognition, albeit at the price of greater network complexity and computation time. Recognition generalization may be improved further by combining all the innate structures with intelligence capabilities, together with the need for further, more advanced learning phases.
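
The following is a minimal sketch (not the authors' implementation) of the dual-subnet idea described above: a main net whose trained weights and base thresholds are frozen, paired with a supportive mechanism that derives a threshold shift from how far a new input pattern deviates from the training statistics. All names (MainNet, supportive_shift) and the mean-deviation heuristic standing in for the supportive net are illustrative assumptions.

```python
import numpy as np

class MainNet:
    """Traditional two-layer net; weights are frozen after training.
    Hypothetical stand-in for the paper's 'main traditional net'."""
    def __init__(self, w_hidden, w_out, thresholds):
        self.w_hidden = w_hidden           # (n_in, n_hidden) trained weights
        self.w_out = w_out                 # (n_hidden, n_out) trained weights
        self.base_thresholds = thresholds  # per-neuron firing thresholds

    def forward(self, x, threshold_shift=0.0):
        # Hidden neurons fire when their net input exceeds the
        # dynamically shifted threshold (base + supportive adjustment).
        h = x @ self.w_hidden
        h = (h > self.base_thresholds + threshold_shift).astype(float)
        return h @ self.w_out

def supportive_shift(x, reference_mean):
    """Supportive-net stand-in: derive a threshold adjustment from how far
    the input pattern's statistics deviate from those seen in training."""
    return float(np.mean(x) - reference_mean)

# Usage: classify a drifted pattern with the dynamically adjusted threshold.
rng = np.random.default_rng(0)
net = MainNet(rng.normal(size=(16, 8)), rng.normal(size=(8, 4)),
              thresholds=np.zeros(8))
x = rng.normal(loc=0.3, size=16)           # input drifted from training mean
shift = supportive_shift(x, reference_mean=0.0)
y = net.forward(x, threshold_shift=shift)
```

Under this sketch, the main net's trained parameters stay fixed, and only the effective firing thresholds move with the input, which is the adaptation the two supportive designs aim to provide.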

Keywords: Classification, Recognition, Neural Networks, Pattern Recognition, Generalization.

Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1085744

