Unsupervised Feature Learning by Pre-Route Simulation of Auto-Encoder Behavior Model

Authors: Youngjae Jin, Daeshik Kim


This paper presents cycle-accurate pre-route simulation results for the weight values learned by an auto-encoder behavior model, and visualizes the resulting first-layer representations on natural images. Much deep learning research has focused on learning high-level abstractions from unlabeled raw data through unsupervised feature learning. However, when such algorithms process huge amounts of data on a single-core CPU, their computational complexity and running time limit further research. Parallel hardware such as an FPGA is therefore seen as a possible way to overcome these limitations. Before designing hardware, we adopted a ready-made auto-encoder and implemented it as a behavior model in Verilog HDL. Through pre-route simulation of this behavior model in ModelSim, we obtained cycle-accurate values of the parameters of each hidden layer; cycle-accurate results are an essential factor in designing parallel digital hardware. Finally, this paper demonstrates correct operation of the behavior model under pre-route simulation, and visualizes the latent representations learned by the first hidden layer on the Kyoto natural image dataset.

Keywords: Auto-encoder, Behavior model simulation, Digital hardware design, Pre-route simulation, Unsupervised feature learning.
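The learning procedure whose weights the behavior model computes can be sketched in plain NumPy. This is a generic tied-size autoencoder trained by gradient descent on toy data, not the authors' Verilog model or their training code; the patch size, hidden-layer width, learning rate, and epoch count are all illustrative assumptions. Each column of W1 after training plays the role of one first-layer feature of the kind the paper visualizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 256 patches of 8x8 "pixels" (stand-ins for natural-image patches).
X = rng.standard_normal((256, 64))

n_hidden = 25          # number of hidden units (learned feature detectors)
lr = 0.1               # learning rate (illustrative)
W1 = rng.standard_normal((64, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, 64)) * 0.1
b2 = np.zeros(64)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for epoch in range(200):
    # Forward pass: encode the input, then reconstruct it.
    H = sigmoid(X @ W1 + b1)
    X_hat = H @ W2 + b2
    err = X_hat - X
    losses.append(float(np.mean(err ** 2)))

    # Backward pass: gradients of the mean squared reconstruction error.
    dX_hat = 2.0 * err / err.size
    dW2 = H.T @ dX_hat
    db2 = dX_hat.sum(axis=0)
    dH = dX_hat @ W2.T
    dZ = dH * H * (1.0 - H)      # derivative of the sigmoid activation
    dW1 = X.T @ dZ
    db1 = dZ.sum(axis=0)

    # Gradient-descent parameter update.
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= lr * g

# Reconstruction error should fall as the features are learned; reshaping
# each column of W1 to 8x8 gives a filter image like those visualized
# for the first hidden layer.
print(losses[0] > losses[-1])
```

A hardware behavior model performs the same multiply-accumulate and activation steps, but fixes the arithmetic to a cycle-by-cycle schedule, which is what the pre-route simulation verifies.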



