Novel Approach for Promoting the Generalization Ability of Neural Networks

Authors: Naiqin Feng, Fang Wang, Yuhui Qiu

Abstract:

A new approach for promoting the generalization ability of neural networks is presented, based on the viewpoint of fuzzy theory. The approach works by shrinking or magnifying the input vector, thereby reducing the difference between the training set and the testing set; it is therefore called the "shrinking-magnifying approach" (SMA). In addition, a new algorithm, the α-algorithm, is presented to find an appropriate shrinking-magnifying factor (SMF) α and thus obtain better generalization ability. A series of simulation experiments is used to study the effect of SMA and the α-algorithm. The experimental results are discussed in detail, and the working principle of SMA is analyzed theoretically. The experiments and analyses show that the new approach is not only simple and easy to apply, but also effective for many neural networks and many classification problems. In our experiments, the proportion of cases in which the generalization ability of neural networks was improved reached as high as 90%.
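
The abstract does not reproduce the α-algorithm itself, so the following Python fragment is only a minimal sketch of the SMA idea: scale the input vectors by a candidate SMF α (α < 1 shrinks, α > 1 magnifies), retrain, and keep the α with the lowest validation misclassification rate. The grid search over α, the network (scikit-learn's MLPClassifier), and the synthetic data are assumptions made for illustration and are not the paper's actual method.

    # Hedged sketch of the shrinking-magnifying approach (SMA).
    # Assumptions: a plain grid search stands in for the paper's
    # alpha-algorithm; network, data, and alpha range are illustrative.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    def misclassification_rate(model, X, y):
        # Fraction of samples the trained network labels incorrectly.
        return float(np.mean(model.predict(X) != y))

    def sma_search(X_train, y_train, X_val, y_val, alphas):
        # Scale inputs by each candidate SMF alpha, retrain, and keep
        # the alpha giving the lowest validation misclassification rate.
        best_alpha, best_err = 1.0, np.inf
        for alpha in alphas:
            net = MLPClassifier(hidden_layer_sizes=(10,),
                                max_iter=2000, random_state=0)
            net.fit(alpha * X_train, y_train)
            err = misclassification_rate(net, alpha * X_val, y_val)
            if err < best_err:
                best_alpha, best_err = alpha, err
        return best_alpha, best_err

    # Usage on synthetic data (illustrative only).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
    alpha, err = sma_search(X_tr, y_tr, X_va, y_va,
                            alphas=np.linspace(0.2, 2.0, 10))
    print(f"selected SMF alpha = {alpha:.2f}, validation error = {err:.3f}")

The sketch mirrors only the goal stated in the abstract, namely choosing the SMF α that best reduces the mismatch between training and testing data as measured by the misclassification rate.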

Keywords: Fuzzy theory, generalization, misclassification rate, neural network.

Digital Object Identifier (DOI): doi.org/10.5281/zenodo.1082593

