An Extension of Multi-Layer Perceptron Based on Layer-Topology
Author: Jānis Zuters
Abstract:
Many extensions have been made to the classic multi-layer perceptron (MLP) model. A notable number of them are designed to speed up learning without considering the quality of generalization. This paper proposes a new MLP extension that exploits the topology of the network's input layer. Experimental results show that the extended model improves generalization capability in certain cases. The new model requires additional computational resources compared to the classic model; nevertheless, the loss in efficiency is not regarded as significant.
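For reference, the following is a minimal sketch of the classic fully connected MLP forward pass that such extensions build on. It is plain NumPy; the layer sizes, activation function, and parameter initialization are illustrative, and the paper's topology-based input-layer modification itself is not specified in this abstract, so it is not reproduced here.

import numpy as np

def sigmoid(x):
    # Logistic activation, standard in the classic MLP
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(x, weights, biases):
    # Classic MLP forward pass: each layer is fully connected,
    # with no use of input-layer topology.
    # x       -- input vector
    # weights -- list of weight matrices, one per layer
    # biases  -- list of bias vectors, one per layer
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

# Illustrative 2-4-1 network with random parameters (hypothetical sizes)
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 2)), rng.standard_normal((1, 4))]
biases = [rng.standard_normal(4), rng.standard_normal(1)]
print(mlp_forward(np.array([0.5, -0.2]), weights, biases))

In this baseline, every input unit is treated identically; an extension exploiting input-layer topology would additionally encode spatial relationships between input units.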
Keywords: Learning algorithm, multi-layer perceptron, topology.
Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1080452