Spatial Audio Player Using Musical Genre Classification
Authors: Jun-Yong Lee, Hyoung-Gook Kim
Abstract:
In this paper, we propose a smart music player that combines musical genre classification with spatial audio processing. The musical genre is classified by content analysis of the music segment detected in the audio stream. In parallel with the classification, spatial audio quality is achieved by adding artificial reverberation, modeled in a virtual acoustic space, to the input mono sound. At playback, the spatial sound is then boosted with frequency gains selected according to the classified genre. Experiments measured the accuracy of music segment detection in the audio stream and of the subsequent genre classification, and a listening test evaluated the spatial audio processing based on the virtual acoustic space.
Keywords: Automatic equalization, genre classification, music segment detection, spatial audio processing.
Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1094235
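
The following is a minimal, self-contained Python/NumPy sketch of the processing chain the abstract describes (music segment detection, genre classification, reverberation, genre-based equalization at playback). The energy-based detector, spectral-centroid "classifier", multi-tap delay reverb, and genre-to-gain presets are illustrative assumptions for the sketch only; they are not the authors' actual models or parameters.

```python
# Sketch of the playback chain: detect music -> classify genre -> add reverb -> apply genre EQ.
# All components below are simplified placeholders (assumptions), not the paper's methods.
import numpy as np

SR = 44100  # assumed sample rate

# Hypothetical genre -> (band edges in Hz, per-band linear gains) equalization presets.
EQ_PRESETS = {
    "classical": ([0, 250, 4000, SR // 2], [1.0, 1.0, 1.2]),
    "rock":      ([0, 250, 4000, SR // 2], [1.4, 1.0, 1.3]),
    "jazz":      ([0, 250, 4000, SR // 2], [1.2, 1.0, 1.1]),
}

def detect_music_segments(x, frame=2048, hop=1024, threshold=0.01):
    """Toy energy-based detector: frames whose RMS exceeds a threshold count as music."""
    rms = np.array([np.sqrt(np.mean(x[i:i + frame] ** 2))
                    for i in range(0, len(x) - frame, hop)])
    return rms > threshold  # boolean mask, one value per frame

def classify_genre(segment):
    """Placeholder classifier: map crude spectral-centroid ranges to genre labels."""
    spectrum = np.abs(np.fft.rfft(segment))
    freqs = np.fft.rfftfreq(len(segment), 1.0 / SR)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    if centroid < 1500:
        return "classical"
    return "jazz" if centroid < 3000 else "rock"

def add_reverb(x, delay_s=0.03, decay=0.4, taps=5):
    """Very simple multi-tap feedback-free delay standing in for room reverberation."""
    y = np.copy(x).astype(np.float64)
    d = int(delay_s * SR)
    for k in range(1, taps + 1):
        y[k * d:] += (decay ** k) * x[:len(x) - k * d]
    return y

def equalize(x, genre):
    """Boost frequency bands with the genre-specific gains (FFT-domain, zero-phase)."""
    edges, gains = EQ_PRESETS[genre]
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / SR)
    for lo, hi, g in zip(edges[:-1], edges[1:], gains):
        X[(freqs >= lo) & (freqs < hi)] *= g
    return np.fft.irfft(X, n=len(x))

def process(mono):
    """Full chain on a mono buffer; genre is classified once on the whole input for brevity."""
    mask = detect_music_segments(mono)
    if not mask.any():
        return mono  # no music detected; pass the signal through unchanged
    genre = classify_genre(mono)
    return equalize(add_reverb(mono), genre)

if __name__ == "__main__":
    test = 0.1 * np.sin(2 * np.pi * 440 * np.arange(SR) / SR)  # 1 s test tone
    out = process(test)
    print("output samples:", len(out), "peak:", np.max(np.abs(out)))
```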