**Commenced:** January 2007

**Frequency:** Monthly

**Edition:** International

**Paper Count:** 30840

##### Incremental Learning of Independent Topic Analysis

**Authors:**
Takahiro Nishigaki,
Katsumi Nitta,
Takashi Onoda

**Abstract:**

**Keywords:**
text mining,
independent component analysis,
independent,
incremental,
topic extraction

**Digital Object Identifier (DOI):**
https://doi.org/10.5281/zenodo.1128933

**References:**

[1] Akhtar, M. T., Jung, T.-P., Makeig, S., and Cauwenberghs, G., 2012. Recursive independent component analysis for online blind source separation, IEEE International Symposium on Circuits and Systems.

[2] Amari, S., Cichocki, A., and Yang, H.-H., 1996. A new learning algorithm for blind signal separation, In Advances in Neural Information Processing Systems, D.S. Touretzky, M.C. Mozer, and M.E. Hasselmo, Editors, Vol.8, pp.757–763, The MIT Press.

[3] Azoury, K. S., and Warmuth, M. K., 2001. Relative loss bounds for on-line density estimation with the exponential family of distributions, Machine Learning, Vol.42, No.3, pp.211–246.

[4] Banerjee, A., Merugu, S., Dhillon, I., and Ghosh, J., 2005. Clustering with Bregman divergences, Journal of Machine Learning Research, Vol. 6, pp.1705–1749.

[5] Banerjee, A., and Basu, S. 2007. Topic Models over Text Streams: A Study of Batch and Online Unsupervised Learning, SIAM International Conference on Data Mining.

[6] Bassiou, N. K., and Kotropoulos, C. L., 2014. Online PLSA: Batch Updating Techniques Including Out-of-Vocabulary Words, IEEE Transactions on Neural Networks and Learning Systems, Vol.25, Issue.11, pp.1953–1966.

[7] Bell, A. J., and Sejnowski, T. J., 1995. An information-maximization approach to blind separation and blind deconvolution, Neural Computation, Vol.7, pp.1129–1159.

[8] Blei, D. M., Ng, A. Y., and Jordan, M. I. 2003. Latent dirichlet allocation, The Journal of Machine Learning Research, Vol. 3, pp. 993–1022.

[9] Blei, D. M. 2012. Probabilistic topic models, Commun. ACM, Vol. 55, No. 4, pp. 77–84.

[10] Brown, G., Pocock, A., Zhao, M.-J., and Luján, M. 2012. Conditional likelihood maximisation: A unifying framework for information theoretic feature selection, Journal of Machine Learning Research (JMLR), Vol. 13, pp. 27–66.

[11] Chien, J.-T., and Wu, M.-S., 2007. Adaptive Bayesian Latent Semantic Analysis, IEEE Transactions on Audio, Speech and Language Processing, Vol.16, Issue.1, pp.198–207.

[12] Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., and Harshman, R. 1990. Indexing by latent semantic analysis, Journal of the American Society of Information Science, Vol. 41, No. 6, pp. 391–407.

[13] Elkan, C., 2006. Clustering documents with an exponential-family approximation of the Dirichlet compound multinomial distribution, In Proceedings of the 23rd International Conference on Machine Learning.

[14] Hertz, J., Krogh, A., and Palmer, R. G., 1991. Introduction To the Theory of Neural Computation, Addison-Wesley, Reading, MA.

[15] Hofmann, T. 1999. Probabilistic latent semantic analysis, Proceedings of the 15th Conference on Uncertainty in Artificial Intelligence (UAI'99), pp. 289–296, Morgan Kaufmann Publishers Inc.

[16] Hyvärinen, A. 1999. Fast and robust fixed-point algorithms for independent component analysis, IEEE Trans. on Neural Networks, Vol. 10, No. 3.

[17] Hyvärinen, A., Karhunen, J., and Oja, E. 2001. Independent Component Analysis, John Wiley & Sons.

[18] Lichman, M. 2013. UCI machine learning repository, http://archive.ics.uci.edu/ml , Accessed on 11/11/2016.

[19] Madsen, R., Kauchak, D., and Elkan, C., 2005. Modeling word burstiness using the Dirichlet distribution, In Proceedings of the 22nd International Conference on Machine Learning.

[20] Neal, R.M., and Hinton, G.E., 1998. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M.I. Jordan, editor, Learning in Graphical Model, pp.355–368, MIT Press.

[21] Oja, E., and Karhunen, J., 1985. On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix, Journal of Mathematical Analysis and Applications, Vol.106, pp.69–84.

[22] Salton, G., Fox, E. A., and Wu, H. 1983. Extended boolean information retrieval, Commun. ACM, Vol. 26, No. 11, pp. 1022–1036.

[23] Sanger, T. D., 1989. Optimal unsupervised learning in a single-layer linear feedforward neural network, IEEE Transactions on Neural Networks, Vol.2, pp. 459–473.

[24] Schraudolph, N. N., and Giannakopoulos, X., 1999. Online Independent Component Analysis with Local Learning Rate Adaptation, NIPS, pp. 789–795.

[25] Shinohara, Y. 1999. Independent Topic Analysis: Extraction of Characteristic Topics by Maximization of Independence, Technical report of IEICE.

[26] Shinohara, Y. 2000. Development of Browsing Assistance System for finding Primary Topics and Tracking their Changes in a Document Database, CRIEPI Research Report.

[27] Sirovich, L., and Kirby, M., 1987. Low-Dimensional Procedure for the Characterization of Human Faces, Journal of the Optical Society of America A, Vol.4, No.3, pp.519–524.

[28] Song, X., Lin, C.-Y., Tseng, B. L., and Sun, M.-T., 2005. Modeling and predicting personal information dissemination behavior, In Proceedings of the 11th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.

[29] Tanaka, M., and Shinohara, Y. 2003. Topic-Based Dynamic Document Management System for Discovering Important and New Topics, CRIEPI Research Report.

[30] Weng, J., Zhang, Y., and Hwang, W.-S., 2003. Candid Covariance-free Incremental Principal Component Analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.25, No.8, pp.1034–1040.

[31] Zhao, Y., and Karypis, G. 2002. Evaluation of hierarchical clustering algorithms for document datasets, Conference on Information and Knowledge Management (CIKM), pp. 515–524, ACM.

[32] Zhong, S., and Ghosh, J. 2003. A comparative study of generative models for document clustering, Data Mining Workshop on Clustering High Dimensional Data and Its Applications.