Huawei/3DLife Challenge

Realistic Interaction in Online Virtual Environments

This challenge calls for demonstrations of technologies that support real-time, realistic interaction between humans in online virtual environments. Relevant areas include 3D signal processing, computer graphics, human-computer interaction, and human factors. To this end, we propose a scenario for online interaction and provide a data set built around it to support the investigation and demonstration of the required technical components.

Consider an online dance class delivered via the web by an expert Salsa dance teacher. The teacher will perform the class with all movements captured by a state-of-the-art optical motion capture system. The resulting motion data will be used to animate a realistic avatar of the teacher in an online virtual dance studio. Students will attend the online master class by manifesting their own individual avatars in the virtual studio. The real-time animation of each student's avatar will be driven by whatever 3D capture technology is available to him or her: visual sensing techniques using a single camera or a camera network, wearable inertial motion sensors, or recent gaming controllers such as the Nintendo Wii or the Microsoft Kinect. The animation of the student's avatar in the virtual space will be rendered in real time and as realistically as the granularity of representation and interaction afforded by each capture mechanism allows.
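For illustration only, the following minimal Python sketch shows one way a captured joint stream could drive an avatar joint in real time, with simple exponential smoothing to hide sensor jitter. The joint name, capture frames, and smoothing factor are invented for this example; a real client would poll the SDK of the chosen capture device.

    def lerp(a, b, t):
        """Linear interpolation between two scalars."""
        return a + (b - a) * t

    class AvatarJoint:
        """One joint of the avatar skeleton, smoothed toward captured input."""

        def __init__(self, name):
            self.name = name
            self.position = (0.0, 0.0, 0.0)

        def update(self, captured_position, smoothing=0.3):
            # Blend toward the newly captured position instead of snapping
            # to it, which hides sensor jitter at the cost of a small latency.
            self.position = tuple(
                lerp(p, c, smoothing)
                for p, c in zip(self.position, captured_position)
            )

    # Stand-in capture frames; a real client would read these from the device.
    elbow = AvatarJoint("left_elbow")
    for frame in [(0.10, 1.20, 0.00), (0.12, 1.25, 0.01)]:
        elbow.update(frame)
        print(elbow.name, elbow.position)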

Of course, we do not expect participants in this challenge to recreate this scenario in full, but rather to work with the provided data set to illustrate key technical components that would be required to realize this kind of online interaction and communication. These could include, but are not limited to:

  • 3D data acquisition and processing from multiple sensor data sources;
  • Realistic (optionally real-time) rendering of 3D data from noisy or incomplete sources (see the gap-filling sketch after this list);
  • Realistic and naturalistic marker-less motion capture;
  • Human factors around interaction modalities in virtual worlds.
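As a concrete illustration of the "noisy or incomplete sources" item above, the sketch below fills dropped samples in one coordinate of a marker trajectory by linear interpolation before rendering. The trajectory values are invented, and this is one simple strategy among many, not a prescribed approach.

    import numpy as np

    def fill_gaps(trajectory):
        """Replace NaN samples (dropped markers/frames) by linear interpolation."""
        t = np.arange(len(trajectory))
        valid = ~np.isnan(trajectory)
        return np.interp(t, t[valid], trajectory[valid])

    # One coordinate of a marker trajectory with two dropped samples.
    noisy = np.array([0.0, 0.1, np.nan, np.nan, 0.4, 0.5])
    print(fill_gaps(noisy))  # -> [0.  0.1 0.2 0.3 0.4 0.5]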

A data set is provided that consists of a dance teacher and a student performing a series of movements. It includes:

  • Motion capture data for the teacher’s movements via an optical motion capture rig;
  • Synchronized audio and video capture of the student from multiple calibrated sources;
  • Original music excerpts consisting of a few tracks at tempos varying from slow to fast;
  • Inertial (accelerometer + gyroscope + magnetometer) sensor data captured from multiple sensors on the student’s body (a time-alignment sketch follows this list);
  • Depth maps of the student’s performance captured using a Microsoft Kinect;
  • Ratings of the student performances by the teacher;
  • Annotations of the performed choreographies (mostly basic steps and movements for Salsa beginners).
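Since the modalities above arrive at different rates, any use of the data set will require some form of temporal alignment. The sketch below resamples an inertial channel onto video frame timestamps; the 200 Hz and 25 fps rates and the signal itself are assumptions for illustration, not the data set's documented specifications.

    import numpy as np

    def resample_to_video(sensor_t, sensor_vals, video_t):
        """Linearly resample a sensor channel onto video frame timestamps."""
        return np.interp(video_t, sensor_t, sensor_vals)

    sensor_t = np.arange(0.0, 1.0, 1 / 200.0)         # assumed 200 Hz sensor clock
    sensor_vals = np.sin(2 * np.pi * 2.0 * sensor_t)  # stand-in accelerometer axis
    video_t = np.arange(0.0, 1.0, 1 / 25.0)           # assumed 25 fps video clock
    aligned = resample_to_video(sensor_t, sensor_vals, video_t)
    print(aligned.shape)  # one sensor value per video frame: (25,)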

The data set is available at http://perso.telecom-paristech.fr/~essid/3dlife-gc-11.

ACM Multimedia 2011

Nov 28 - Dec 1, 2011, Scottsdale, Arizona, USA
