Laban Movement Analysis Using Kinect

Authors: Ran Bernstein, Tal Shafir, Rachelle Tsachor, Karen Studd, Assaf Schuster

Abstract:

Laban Movement Analysis (LMA), developed in the dance community over the past seventy years, is an effective method for observing, describing, notating, and interpreting human movement to enhance communication and expression in everyday and professional life. Many applications that use motion capture data could be significantly enhanced if Laban qualities were recognized automatically. This paper presents an automated method for recognizing Laban qualities from motion capture skeletal recordings, demonstrated on the output of Microsoft’s Kinect V2 sensor.

Keywords: Laban Movement Analysis, Kinect, Machine Learning.

Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1108751
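
As an illustration of the kind of pipeline the abstract describes, recognizing Laban qualities from skeletal recordings, the following is a minimal sketch in Python. The joint layout, kinematic feature set, frame rate, and the choice of an SVM classifier are assumptions made for illustration only and are not taken from the paper.

# Hypothetical sketch (not the authors' implementation): joint layout,
# feature choices, and classifier are assumptions for illustration only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def skeletal_features(frames, fps=30.0):
    """Summarize one recording as a fixed-length feature vector.

    frames: array of shape (T, J, 3) -- T time steps, J joints, xyz positions
    (e.g., the 25 joints reported by the Kinect V2 body tracker).
    """
    vel = np.diff(frames, axis=0) * fps              # per-joint velocity, (T-1, J, 3)
    acc = np.diff(vel, axis=0) * fps                 # per-joint acceleration, (T-2, J, 3)
    jerk = np.diff(acc, axis=0) * fps                # per-joint jerk, (T-3, J, 3)
    speed = np.linalg.norm(vel, axis=2)              # speed magnitude per joint
    return np.concatenate([
        speed.mean(axis=0),                          # rough proxy for sustained movement
        speed.max(axis=0),                           # rough proxy for sudden movement
        np.linalg.norm(acc, axis=2).mean(axis=0),    # rough proxy for strong vs. light
        np.linalg.norm(jerk, axis=2).mean(axis=0),   # smoothness of the trajectory
    ])

def train_quality_classifier(recordings, labels):
    """recordings: list of (T_i, J, 3) arrays; labels: 1 if the quality is present."""
    X = np.stack([skeletal_features(r) for r in recordings])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, labels)
    return clf

Given a set of labeled recordings, train_quality_classifier would fit one binary classifier for a single Laban quality; training one such classifier per quality is a natural design when several qualities can co-occur in the same movement.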

