
SIGGRAPH 2014 News: Advancements in Kinect Based Motion Capture
Today at SIGGRAPH 2014, presenters from the USC Institute for Creative Technologies, Mixamo, Inc., and the Vancouver Institute of Media Arts gave in-depth presentations on new motion capture technologies that will reduce costs and ultimately speed up the process for game developers.
Ari Shapiro, Andrew Feng and Evan Suma from the USC Institute for Creative Technologies gave a very compelling presentation on their system for creating a 3D model and automatically rigging, skinning and simulating it in a 3D environment in a matter of minutes. All of this is done by capturing the human figure with Microsoft's Kinect sensor.
The process starts by scanning the actor in four bind poses: left, front, right and back. The scan itself takes about five minutes, and the resulting model can be imported into any animation application, where mo-cap data can be added. During the presentation, the entire pipeline, from the Kinect scan to applying animation to the model in the 3D application, took about ten minutes.
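As a rough illustration of the idea behind the four-pose capture (a minimal sketch, not ICT's actual pipeline), merging scans taken while the actor turns 90 degrees between poses amounts to rotating each scan's points back into a common frame before they are fused into one mesh:

```python
import math

def rotate_y(point, degrees):
    """Rotate a 3D point about the vertical (Y) axis."""
    x, y, z = point
    r = math.radians(degrees)
    c, s = math.cos(r), math.sin(r)
    return (c * x + s * z, y, -s * x + c * z)

def merge_scans(scans):
    """Merge four point-cloud scans (front, left, back, right) captured
    while the actor turns 90 degrees between poses, by undoing each
    turn so all points land in the frame of the first scan."""
    merged = []
    for i, scan in enumerate(scans):  # i = number of 90-degree turns
        merged.extend(rotate_y(p, -90 * i) for p in scan)
    return merged
```

A real system would additionally refine the alignment (e.g. with ICP) and reconstruct a watertight surface from the merged cloud, but the rotation bookkeeping above is the starting point.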
The model quality is serviceable: the coloring, body shape and likeness of the actor are all there, but the mesh is not high resolution. Home-based scanning with the Kinect lets game developers and designers quickly create rigged 3D models for animation at very low cost. This type of avatar creation is best suited to characters seen at a distance or as part of a crowd.
In the future they hope to extend the system to create a deformable face rig by moving the actor's face closer to the Kinect during the scanning process.
Charles Pina and Emiliano Gambaretto from Mixamo, Inc. demonstrated Face Plus, their facial motion capture technology, which uses a simple webcam and the Unity game engine to create facial animation for a 3D character, displayed in real time. The system detects the user's face through the webcam and then identifies its appearance and expressions. To demonstrate the technology, they used Face Plus in the 3D short "Unplugged," which is rendered in real time in Unity.
Facial animation in games is typically done either with keyframe animation or facial mocap; here, the mocap is done through a simple webcam. Face Plus currently supports Unity and MotionBuilder and is designed to let animators work right on their own computers, using a webcam, for extremely fast facial animation.
The benefits of a system like this are huge: extremely low cost and virtually no setup time, because the facial capture is completely markerless. You also see in real time how your facial performance drives the 3D mesh. Face Plus achieves this through GPU acceleration, running at 50 FPS on higher-end graphics cards and 24 FPS on lower-end ones.
To use Face Plus you need a character rig that is Face Plus ready. You can upload your character to the Mixamo site, where the required facial rig is applied; you can then download it as an FBX with blendshapes, ready for Face Plus!
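A hedged sketch of the idea behind blendshape-driven animation (the standard blendshape model, not Mixamo's actual implementation): the tracker produces a weight per expression each frame, and the mesh is deformed by a weighted sum of each blendshape's offset from the neutral pose:

```python
def apply_blendshapes(neutral, shapes, weights):
    """Deform a neutral mesh by a weighted sum of blendshape deltas.

    neutral: list of (x, y, z) vertices
    shapes:  dict of name -> vertex list with the same topology as neutral
    weights: dict of name -> float in [0, 1], e.g. from a face tracker
    """
    result = []
    for i, (nx, ny, nz) in enumerate(neutral):
        dx = dy = dz = 0.0
        for name, shape in shapes.items():
            w = weights.get(name, 0.0)
            sx, sy, sz = shape[i]
            # accumulate this shape's weighted offset from neutral
            dx += w * (sx - nx)
            dy += w * (sy - ny)
            dz += w * (sz - nz)
        result.append((nx + dx, ny + dy, nz + dz))
    return result
```

Because each frame is just this weighted sum per vertex, the work parallelizes trivially across vertices, which is why a GPU-accelerated implementation can sustain real-time frame rates.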
Rapid Avatar Capture and Simulation Using the Kinect

Real-Time Facial Animation with Just a Webcam
