A Method for Music Classification Based on Perceived Mood Detection for Indian Bollywood Music
Author: Vallabha Hampiholi
Abstract:
A great deal of research over the past decade has focused on audio content analysis for extracting information from audio signals. One significant piece of such information is the "perceived mood", or the emotions associated with a music or audio clip. This information is extremely useful in applications such as creating or adapting a playlist to match the listener's mood, and it can also support better classification of a music database. In this paper we present a method that classifies music not only by the meta-data of the audio clip but also by its "mood", to improve music classification. We propose an automated and efficient way of classifying music samples based on mood detected from the audio data, focusing in particular on Indian Bollywood music. The proposed method addresses the following problem: genre information (usually part of the audio meta-data) alone does not suffice for good music classification. For example, the acoustic version of "Nothing Else Matters" by Metallica could be classified as mellow, melodic music that a listener in a relaxed or chill-out mood might want to hear. More often than not, however, this track is tagged with the metal / heavy-rock genre, so a listener who builds a playlist for that mood from genre information alone will miss out on it. Methods currently exist to detect mood in Western and similar kinds of music; this paper addresses the problem for Indian Bollywood music from an Indian cultural context.
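The problem statement above can be made concrete with a small sketch. It uses the two-dimensional arousal/valence mood model (Thayer's model, which the paper builds on), with its four commonly used quadrants: exuberance, anxiety, contentment, and depression. All track data, scores, and thresholds here are hypothetical illustrations, not the paper's actual feature set or classifier.

```python
# Hypothetical illustration: why genre metadata alone misclassifies, while a
# mood label recovers the right tracks. Mood quadrants follow Thayer's
# arousal/valence model; all track data and scores below are made up.

def mood_quadrant(arousal: float, valence: float) -> str:
    """Map normalized arousal/valence scores in [-1, 1] to a mood quadrant."""
    if arousal >= 0:
        return "exuberance" if valence >= 0 else "anxiety"
    return "contentment" if valence >= 0 else "depression"

tracks = [
    # Acoustic, calm rendition, but tagged with its artist's usual genre.
    {"title": "Nothing Else Matters (acoustic)", "genre": "metal",
     "arousal": -0.4, "valence": 0.3},
    {"title": "High-energy dance number", "genre": "pop",
     "arousal": 0.8, "valence": 0.7},
]

# Genre-only filtering for a relaxing playlist misses the acoustic track...
relaxing_by_genre = [t for t in tracks
                     if t["genre"] in ("ambient", "classical")]

# ...while mood-based filtering recovers it.
relaxing_by_mood = [t for t in tracks
                    if mood_quadrant(t["arousal"], t["valence"]) == "contentment"]
```

Here `relaxing_by_genre` comes back empty while `relaxing_by_mood` contains the acoustic track, which is exactly the gap the mood label is meant to close.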
Keywords: Mood, music classification, music genre, rhythm, music analysis.
Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1084718
References:
[1] C. J. Murrock, "Music and Mood," in Psychology of Moods, 2005.
[2] D. Huron, "Perceptual and cognitive applications in music information retrieval," in Proc. Int. Symp. Music Information Retrieval (ISMIR), 2000.
[3] D. P. W. Ellis and G. E. Poliner, "Identifying 'Cover Songs' with Chroma Features and Dynamic Programming Beat Tracking," in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP 2007), vol. 4, pp. IV-1429–IV-1432, Apr. 2007.
[4] E. Scheirer, "Tempo and beat analysis of acoustic musical signals," J. Acoust. Soc. Amer., vol. 103, no. 1, pp. 588–601, 1998.
[5] G. Peeters, "Spectral and Temporal Periodicity Representations of Rhythm for the Automatic Classification of Music Audio Signal," IEEE Trans. Audio, Speech, Lang. Process., vol. 19, no. 5, pp. 1242–1252, Jul. 2011.
[6] E. Tsunoo, G. Tzanetakis, N. Ono, and S. Sagayama, "Beyond Timbral Statistics: Improving Music Classification Using Percussive Patterns and Bass Lines," IEEE Trans. Audio, Speech, Lang. Process., vol. 19, no. 4, pp. 1003–1014, May 2011.
[7] M. Mancini, R. Bresin, and C. Pelachaud, "A Virtual Head Driven by Music Expressivity," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 6, pp. 1833–1841, Aug. 2007.
[8] L. Lu, D. Liu, and H.-J. Zhang, "Automatic mood detection and tracking of music audio signals," IEEE Trans. Audio, Speech, Lang. Process., vol. 14, no. 1, pp. 5–18, Jan. 2006.
[9] Proc. ISMIR: Int. Symp. Music Information Retrieval. [Online]. Available: http://www.ismir.net/
[10] R. E. Thayer, The Biopsychology of Mood and Arousal. New York, NY: Oxford University Press, 1998.
[11] R. Picard, "Affective Computing," MIT Technical Report #321, 1995.
[12] P. N. Juslin and P. Laukka, "Communication of emotions in vocal expression and music performance: Different channels, same code?," Psychol. Bull., vol. 129, no. 5, pp. 770–814, 2003.
[13] F. Pachet and P. Roy, "Improving Multilabel Analysis of Music Titles: A Large-Scale Validation of the Correction Approach," IEEE Trans. Audio, Speech, Lang. Process., vol. 17, no. 2, pp. 335–343, Feb. 2009.
[14] X. Hu and J. S. Downie, "Exploring mood metadata: Relationships with genre, artist and usage metadata," in Proc. Int. Conf. Music Information Retrieval (ISMIR 2007), Vienna, Sep. 23–27, 2007.
[15] L.-L. Balkwill, W. F. Thompson, and R. Matsunaga, "Recognition of emotion in Japanese, Western, and Hindustani music by Japanese listeners," Japanese Psychological Research, vol. 46, no. 4, pp. 337–349, 2004.
[16] P. N. Juslin, "Cue utilization in communication of emotion in music performance: relating performance to perception," J. Exper. Psychol.: Human Percept. Perf., vol. 16, no. 6, pp. 1797–1813, 2000.
[17] P. N. Juslin and J. A. Sloboda, Handbook of Music and Emotion: Theory, Research, Applications.
[18] O. Lartillot and P. Toiviainen, "A Matlab Toolbox for Musical Feature Extraction from Audio," in Proc. 10th Int. Conf. Digital Audio Effects (DAFx-07), Bordeaux, France, Sep. 10–15, 2007.
[19] M. Siemer, "Moods as multiple-object directed and as objectless affective states: An examination of the dispositional theory of moods," Cognition & Emotion, vol. 22, no. 1, pp. 815–845, Jan. 2008.
[20] J. A. Russell, "A circumplex model of affect," J. Personality Social Psychology, vol. 39, pp. 1161–1178, 1980.
[21] G. Tzanetakis and P. Cook, "Multifeature audio segmentation for browsing and annotation," in Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 1999.
[22] W. Moylan, The Art of Recording: Understanding and Crafting the Mix.
[23] R. Setchi, Knowledge-Based and Intelligent Information and Engineering Systems: 14th Int. Conf., KES 2010, Cardiff, UK, Sep. 8–10, 2010.
[24] T. Eerola, O. Lartillot, and P. Toiviainen, "Prediction of multidimensional emotional ratings in music from audio using multivariate regression models," in Proc. 10th Int. Conf. Music Information Retrieval (ISMIR 2009), pp. 621–626.
[25] M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten, "The WEKA Data Mining Software: An Update," SIGKDD Explorations, vol. 11, no. 1, 2009.
[26] J. R. Quinlan, C4.5: Programs for Machine Learning. San Mateo, CA: Morgan Kaufmann, 1993.
[27] A. Meng, P. Ahrendt, J. Larsen, and L. K. Hansen, "Temporal Feature Integration for Music Genre Classification," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 5, pp. 1654–1664, Jul. 2007.
[28] A. Hanjalic, "Extracting moods from pictures and sounds: towards truly personalized TV," IEEE Signal Processing Magazine, vol. 23, no. 2, pp. 90–100, Mar. 2006.
[29] C. Xu, N. C. Maddage, and X. Shao, "Automatic music classification and summarization," IEEE Trans. Speech Audio Process., vol. 13, no. 3, pp. 441–450, May 2005.