Multivariate Output-Associative RVM for Multi-Dimensional Affect Predictions

Authors: Achut Manandhar, Kenneth D. Morton, Peter A. Torrione, Leslie M. Collins

Abstract:

The current trends in affect recognition research are to consider continuous observations from people's spontaneous, natural interactions using multiple feature modalities, to represent affect in terms of continuous dimensions, to incorporate spatio-temporal correlation among affect dimensions, and to provide fast affect predictions. These research efforts have been propelled by a growing push to develop affect recognition systems that can be deployed to enable seamless real-time human-computer interaction in a wide variety of applications. Motivated by these desired attributes of an affect recognition system, this work proposes a multi-dimensional affect prediction approach that integrates the multivariate Relevance Vector Machine (MVRVM) with the recently developed Output-associative Relevance Vector Machine (OARVM). The resulting approach provides fast continuous affect predictions by jointly modeling the multiple affect dimensions and their correlations. Experiments on the RECOLA database show that the proposed approach performs competitively with the OARVM while providing faster predictions during testing.
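The approach summarized above lends itself to a simple two-stage view: an initial regressor estimates every affect dimension from the input features, and a second regressor is then trained on the features augmented with those initial estimates, so each dimension's final prediction can draw on the others. The sketch below illustrates only that output-associative structure, not the authors' MVRVM-OARVM implementation: scikit-learn ships no RVM, so KernelRidge is assumed as a stand-in kernel regressor, and the function names, kernel parameters, and placeholder data are illustrative assumptions.

```python
# Minimal sketch of a two-stage, output-associative, multi-output regression
# pipeline in the spirit of the paper's MVRVM + OARVM combination.
# NOTE: KernelRidge is a stand-in for the (MV)RVM; it does not reproduce the
# sparsity or shared relevance vectors of the Bayesian model.
import numpy as np
from sklearn.kernel_ridge import KernelRidge


def fit_output_associative(X_train, Y_train, gamma=0.1, alpha=1.0):
    """Stage 1 maps features to initial estimates of all affect dimensions;
    stage 2 refits on [features, initial estimates], so each final output can
    exploit the other dimensions' estimates (the output-associative step)."""
    stage1 = KernelRidge(kernel="rbf", gamma=gamma, alpha=alpha)
    stage1.fit(X_train, Y_train)
    Y_init = stage1.predict(X_train)               # initial arousal/valence estimates
    stage2 = KernelRidge(kernel="rbf", gamma=gamma, alpha=alpha)
    stage2.fit(np.hstack([X_train, Y_init]), Y_train)
    return stage1, stage2


def predict_output_associative(stage1, stage2, X_test):
    Y_init = stage1.predict(X_test)
    return stage2.predict(np.hstack([X_test, Y_init]))


if __name__ == "__main__":
    # Random placeholder data: 20-dimensional features, 2 affect dimensions.
    rng = np.random.default_rng(0)
    X, Y = rng.normal(size=(200, 20)), rng.normal(size=(200, 2))
    s1, s2 = fit_output_associative(X[:150], Y[:150])
    print(predict_output_associative(s1, s2, X[150:]).shape)  # -> (50, 2)
```

A single-stage multi-output regressor would treat the dimensions as independent given the features; feeding the initial estimates back in is what lets correlations between, e.g., arousal and valence influence the final predictions.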

Keywords: Dimensional affect prediction, Output-associative RVM, Multivariate regression.

Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1111901

