
Reviewed documents

Table of conferences

  1. MOCO’22: Movement and Computing
    1. Optimized Motion Capture System for Full Body Human Motion Capturing Case Study of Educational Institution and Small Animation Production
    2. Experimental Creation of Contemporary Dance Works Using a Body-part Motion Synthesis System
    3. Sensor-based Activity Recognition using Deep Learning: A Comparative Study
    4. Machine Art: Exploring Abstract Human Animation Through Machine Learning Methods
  2. MOCO’20
    1. MoViz: A Visualization Tool for Comparing Motion Capture Data Clustering Algorithms
  3. MOCO’19
    1. Evaluating movement qualities with visual feedback for real-time motion capture
  4. Others
    1. A webcam-based machine learning approach for the three-dimensional range of motion evaluation
    2. A low cost real-time motion tracking approach using webcam technology
    3. VIBE: Video Inference for Human Body Pose and Shape Estimation

The papers are grouped by conference; I focused on a few main conferences.

MOCO’22: Movement and Computing

I specifically focused on Session 4: Movement Recognition and Analysis.

Optimized Motion Capture System for Full Body Human Motion Capturing Case Study of Educational Institution and Small Animation Production

  • A motion capture (MOCAP) system is a set of devices for capturing moving objects. Beyond its uses in science, medicine, and engineering, MOCAP is now used extensively in the film and animation industry to create realistic movement for characters and cartoons. A popular variant is optical motion capture (optical MOCAP), which can capture a wide variety of object motions. However, MOCAP systems are expensive, and a full system often produces erroneous movement data that takes considerable effort to correct. The high price and cleanup effort make it difficult for institutions and small studios to decide whether to adopt MOCAP. This research studies how to configure a minimal MOCAP system suitable for basic full-body human movements (walking, running, and jumping) by adjusting the number of cameras, the number of reflective markers, and camera placement, capturing a 10-square-meter display area with movement-data review and real-time playback on an animated character. The results show that 4–6 Eagle Digital cameras, placed at different positions, suffice to capture actors in a 10-square-meter area with a height of 2.5 meters, using a minimum of 29 reflective markers on the actor for walking, running, and jumping. The markers comprise key moving markers and reference markers; the essential placements are on the actor's head and along the spine. This configuration also reduces the volume of movement data and helps users choose MOCAP equipment suited to their needs.
  • Notes: Too expensive, needs too many cameras.

Experimental Creation of Contemporary Dance Works Using a Body-part Motion Synthesis System

  • We developed a body-part motion synthesis system (BMSS) that synthesizes 3D motion data captured from performances of professional dancers to support the creation of contemporary dance works. To evaluate the usefulness of the system, three professional choreographers created their original dance works experimentally using the BMSS three times, and dancers performed their works in theaters. By analyzing the sequence data created by the BMSS obtained through an interview with the choreographers, we found that the choreographers could discover a variety of uses for the BMSS by becoming proficient in its use. The characteristics of the choreography created by each choreographer were also clarified.
  • Notes: Not very related to movement analysis; focuses more on the creation side.
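The core idea of body-part synthesis can be illustrated with a small sketch (my own illustration under assumed data layout, not the authors' BMSS code): given two captured clips stored as per-frame joint-channel arrays, a new sequence is assembled by taking the upper-body channels from one clip and the lower-body channels from another.

```python
import numpy as np

def synthesize_body_parts(clip_a, clip_b, upper_idx):
    """Combine upper-body channels of clip_a with lower-body channels of clip_b.

    clip_a, clip_b: (frames, channels) arrays of joint data.
    upper_idx: channel indices treated as the upper body.
    """
    frames = min(len(clip_a), len(clip_b))
    out = clip_b[:frames].copy()                    # start from clip_b (lower body)
    out[:, upper_idx] = clip_a[:frames, upper_idx]  # overwrite upper-body channels
    return out

# Toy example: channels 0-2 = upper body, 3-5 = lower body.
a = np.zeros((4, 6))  # clip A: all zeros
b = np.ones((5, 6))   # clip B: all ones
mixed = synthesize_body_parts(a, b, upper_idx=[0, 1, 2])
print(mixed[0])  # → [0. 0. 0. 1. 1. 1.]
```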

Sensor-based Activity Recognition using Deep Learning: A Comparative Study

  • With the wide availability of inertial sensors in smartphones and connected objects, interest in sensor-based activity recognition has risen. Yet, recognizing human actions from inertial data remains a challenging task because of the complexity of human movements and of inter-individual differences in movement execution. Recently, approaches based on deep neural networks have shown success on standardized activity recognition datasets, yet few works investigate systematically how these models generalize to other protocols for data collection. We present a study that evaluates the performance of various deep learning architectures for activity recognition from a single inertial measurement unit, on a recognition task combining data from six publicly available datasets. We found that the best performance on this combined dataset is obtained with an approach combining the continuous wavelet transform and 2D convolutional neural networks.
  • Notes: Sensor-based recognition; not precise enough for recognizing dance movements.
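The best-performing pipeline (continuous wavelet transform feeding a 2D CNN) hinges on turning a 1D inertial signal into a time-frequency image. A numpy-only sketch of that first step, using a Ricker (Mexican-hat) wavelet of my own choosing rather than the paper's exact wavelet:

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet of width a, sampled at `points` positions."""
    t = np.arange(points) - (points - 1) / 2.0
    norm = 2 / (np.sqrt(3 * a) * np.pi ** 0.25)
    return norm * (1 - (t / a) ** 2) * np.exp(-(t ** 2) / (2 * a ** 2))

def cwt_scalogram(signal, widths):
    """Continuous wavelet transform: one row per wavelet width."""
    rows = []
    for w in widths:
        wavelet = ricker(min(10 * int(w), len(signal)), w)
        rows.append(np.convolve(signal, wavelet, mode="same"))
    # (len(widths), len(signal)) magnitude "image", the input a 2D CNN would see
    return np.abs(np.stack(rows))

# Synthetic accelerometer trace: 2 Hz oscillation, 100 samples.
t = np.linspace(0, 2, 100)
sig = np.sin(2 * np.pi * 2 * t)
img = cwt_scalogram(sig, widths=np.arange(1, 17))
print(img.shape)  # → (16, 100)
```

The resulting 2D array is what the convolutional network consumes; the CNN itself is omitted here.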

Machine Art: Exploring Abstract Human Animation Through Machine Learning Methods

  • Visual media and performance art have a symbiotic relationship. They support one another and engage the audience by providing an experience or telling a story. This comparative study explores the accuracy, efficiency, and cost factors of using machine learning based motion capture methods in performance art. There is extensive research in the field of machine learning methods for human pose estimation, but the outputs of such work are rarely used as inputs for performance art. In this paper we present a practice-based research project that involves producing animations that match a performer’s movements using machine learning based motion capture methods. We use human poses derived from low-cost video capture as an input into high-resolution abstract forms that accompany and synchronise with dance performances. A single-camera approach is examined and compared to existing methods. We find that compared with existing motion capture methods the machine learning based methods require less setup time, and less equipment is required resulting in considerably lower cost. This research suggests that machine learning has considerable potential to improve the quality of human pose estimation in performance art, visual effects and motion capture, and make it more accessible for arts companies with limited resources.
  • Notes: Interesting, machine-learning based low-cost model to recognize movements.
  • VIBE (Video Inference for Human Body Pose and Shape Estimation)
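Single-camera pose estimates are jittery, so driving visuals from them typically requires temporal smoothing first. A generic sketch of that step (my own illustration with an exponential moving average; the paper does not specify its smoothing):

```python
import numpy as np

def smooth_keypoints(frames, alpha=0.5):
    """Exponential moving average over per-frame 2D keypoints (frames, joints, 2).

    Damps frame-to-frame jitter in single-camera pose estimates before
    mapping them onto abstract animated forms.
    """
    out = np.empty_like(frames, dtype=float)
    out[0] = frames[0]
    for i in range(1, len(frames)):
        out[i] = alpha * frames[i] + (1 - alpha) * out[i - 1]
    return out

# Two frames, one joint: with alpha=0.5 the smoothed joint lands halfway.
kp = np.array([[[0.0, 0.0]], [[2.0, 2.0]]])
print(smooth_keypoints(kp)[1])  # → [[1. 1.]]
```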

MOCO’20

MoViz: A Visualization Tool for Comparing Motion Capture Data Clustering Algorithms

  • Motion capture data is useful for machine learning applications in a variety of domains (e.g. movement improvisation, physical therapy, character animation in games), but many of these domains require large, diverse datasets with data that is difficult to label. This has precipitated the use of unsupervised learning algorithms for analyzing motion capture datasets. However, there is a distinct lack of tools that aid in the qualitative evaluation of these unsupervised algorithms. In this paper, we present the design of MoViz, a novel visualization tool that enables comparative qualitative evaluation of otherwise “black-box” algorithms for pre-processing and clustering large and diverse motion capture datasets. We applied MoViz to the evaluation of three different gesture clustering pipelines used in the LuminAI improvisational dance system. This evaluation revealed features of the pipelines that may not otherwise have been apparent, suggesting directions for iterative design improvements. This use case demonstrates the potential for this tool to be used by researchers and designers in the field of movement and computing seeking to better understand and evaluate the algorithms they are using to make sense of otherwise intractably large and complex datasets.
  • Notes: Might be helpful in the movement analysis step.
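MoViz compares clustering pipelines for mocap data. To make the underlying step concrete, here is a toy version: a minimal k-means over per-segment pose feature vectors (illustrative only; MoViz's actual gesture pipelines are more involved):

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Minimal k-means over motion-segment feature vectors of shape (n, d)."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)].copy()
    for _ in range(iters):
        # Assign each segment to its nearest center, then recompute centers.
        labels = np.argmin(((features[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Two obviously separable groups of "gesture" features.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = kmeans(feats, k=2)
print(labels[0] == labels[1], labels[2] == labels[3])  # → True True
```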

MOCO’19

I did not read papers published before 2019 closely.

Evaluating movement qualities with visual feedback for real-time motion capture

  • The focus of this paper is to investigate how the design of visual feedback on full body movement affects the quality of the movements. Informed by the theory of embodiment in interaction design and media technology, as well as by the Laban theory of effort, a computer application was implemented in which users are able to project their movements onto two visuals (‘Particle’ and ‘Metal’). We investigated whether the visual designs influenced movers through an experiment where participants were randomly assigned to one of the visuals while performing a set of simple tasks. Qualitative analysis of participants’ verbal movement descriptions as well as analysis of quantitative movement features combine several perspectives with respect to describing the differences and the change in the movement qualities. The qualitative data shows clear differences between the groups. The quantitative data indicates that all groups move differently when visual feedback is provided. Our results contribute to the design effort of visual modality in movement-focused design of extended realities.
  • Notes: Not directly related, but potentially inspiring for the movement analysis step.
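Laban-effort-inspired analyses usually reduce joint trajectories to quantitative movement features. One common feature (my simplified illustration, not the paper's exact metric) is a kinetic-energy-like "weight" estimate from joint velocities:

```python
import numpy as np

def weight_effort(positions, dt=1 / 30):
    """Kinetic-energy-like feature from joint positions (frames, joints, 3).

    Sums squared joint speeds per frame transition; larger values suggest
    'stronger', more forceful movement in Laban terms.
    """
    vel = np.diff(positions, axis=0) / dt
    return (vel ** 2).sum(axis=(1, 2))

# One joint moving 0.1 units per frame at 30 fps: speed = 3 units/s.
pos = np.array([[[0.0, 0, 0]], [[0.1, 0, 0]], [[0.2, 0, 0]]])
print(weight_effort(pos))  # → [9. 9.]
```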

Others

A webcam-based machine learning approach for the three-dimensional range of motion evaluation

  • Joint range of motion (ROM) is an important quantitative measure for physical therapy. Commonly relying on a goniometer, accurate and reliable ROM measurement requires extensive training and practice. This, in turn, imposes a significant barrier for those who have limited in-person access to healthcare. The current study presents and evaluates an alternative machine learning-based ROM evaluation method that could be remotely accessed via a webcam. To evaluate its reliability, the ROM measurements for a diverse set of joints (neck, spine, and upper and lower extremities) derived using this method were compared to those obtained from a state-of-the-art marker-based, optical motion capture system. Results showed that the webcam-based solution provides high test-retest reliability and inter-rater reliability at a fraction of the cost of the marker-based system. More importantly, the machine-learning-based method has been shown to be more consistent in tracking joint positions during movements, making it more reliable than the optical motion capture system. The proposed webcam-based ROM evaluation method could be easily adapted for clinical practice and shows tremendous potential for the tele-implementation of physical therapy and rehabilitation.
  • Notes: Raises a question: how important is 3D information in dance movement analysis?
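The geometric core of webcam-based ROM estimation is measuring a joint angle from three pose keypoints (a generic sketch, not the paper's implementation):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at keypoint b, in degrees, formed by segments b->a and b->c."""
    a, b, c = map(np.asarray, (a, b, c))
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Elbow fully extended vs. bent at a right angle (2D keypoints).
print(joint_angle([0, 1], [0, 0], [0, -1]))  # → 180.0
print(joint_angle([0, 1], [0, 0], [1, 0]))   # → 90.0
```

With 3D keypoints the same formula applies unchanged, which is exactly where the 3D-vs-2D question in the note becomes relevant.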

A low cost real-time motion tracking approach using webcam technology

  • Physical therapy is an important component of gait recovery for individuals with locomotor dysfunction. There is a growing body of evidence that suggests that incorporating a motor learning task through visual feedback of movement trajectory is a useful approach to facilitate therapeutic outcomes. Visual feedback is typically provided by recording the subject’s limb movement patterns using a three-dimensional motion capture system and displaying it in real-time using customized software. However, this approach can seldom be used in the clinic because of the technical expertise required to operate this device and the cost involved in procuring a three-dimensional motion capture system. In this paper, we describe a low cost two-dimensional real-time motion tracking approach using a simple webcam and an image processing algorithm in LabVIEW Vision Assistant. We also evaluated the accuracy of this approach using a high precision robotic device (Lokomat) across various walking speeds. Further, the reliability and feasibility of real-time motion-tracking were evaluated in healthy human participants. The results indicated that the measurements from the webcam tracking approach were reliable and accurate. Experiments on human subjects also showed that participants could utilize the real-time kinematic feedback generated from this device to successfully perform a motor learning task while walking on a treadmill. These findings suggest that the webcam motion tracking approach is a feasible low cost solution to perform real-time movement analysis and training.
  • Notes: Focuses on physical therapy rather than dance analysis.
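The essential operation in this kind of 2D webcam tracking is thresholding a bright marker and taking its centroid; a numpy sketch of that step (illustrative, not the paper's LabVIEW code):

```python
import numpy as np

def track_marker(frame, threshold=200):
    """Centroid (row, col) of pixels brighter than threshold, or None if absent."""
    rows, cols = np.nonzero(frame > threshold)
    if rows.size == 0:
        return None
    return float(rows.mean()), float(cols.mean())

# 8x8 grayscale frame with a bright 2x2 "marker" at rows 2-3, cols 5-6.
frame = np.zeros((8, 8), dtype=np.uint8)
frame[2:4, 5:7] = 255
print(track_marker(frame))  # → (2.5, 5.5)
```

Running this per frame yields the marker trajectory used for real-time kinematic feedback.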

VIBE: Video Inference for Human Body Pose and Shape Estimation

  • Human motion is fundamental to understanding behavior. Despite progress on single-image 3D pose and shape estimation, existing video-based state-of-the-art methods fail to produce accurate and natural motion sequences due to a lack of ground-truth 3D motion data for training. To address this problem, we propose Video Inference for Body Pose and Shape Estimation (VIBE), which makes use of an existing large-scale motion capture dataset (AMASS) together with unpaired, in-the-wild, 2D keypoint annotations. Our key novelty is an adversarial learning framework that leverages AMASS to discriminate between real human motions and those produced by our temporal pose and shape regression networks. We define a temporal network architecture and show that adversarial training, at the sequence level, produces kinematically plausible motion sequences without in-the-wild ground-truth 3D labels. We perform extensive experimentation to analyze the importance of motion and demonstrate the effectiveness of VIBE on challenging 3D pose estimation datasets, achieving state-of-the-art performance. Code and pretrained models are available at this https URL.
  • Github page
  • Notes: Needs further reading.
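VIBE's adversarial training rewards kinematically plausible sequences. A crude, illustrative proxy for plausibility (my own toy metric, not VIBE's discriminator) is mean squared joint acceleration: jittery, unnatural sequences score much higher than smooth ones.

```python
import numpy as np

def jitter_score(seq, fps=30):
    """Mean squared joint acceleration for a pose sequence (frames, joints, 3)."""
    acc = np.diff(seq, n=2, axis=0) * fps ** 2  # second finite difference
    return float((acc ** 2).mean())

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 30)[:, None, None]
smooth = np.sin(2 * np.pi * t) * np.ones((1, 17, 3))  # smooth 17-joint motion
noisy = smooth + rng.normal(0, 0.05, smooth.shape)    # the same motion plus jitter
print(jitter_score(noisy) > jitter_score(smooth))  # → True
```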