Today at SIGGRAPH 2014, presenters from the USC Institute for Creative Technologies, Mixamo, Inc., and the Vancouver Institute of Media Arts gave in-depth presentations on new motion capture technologies that will reduce costs and ultimately speed up the process for game developers.
Rapid Avatar Capture and Simulation Using the Kinect
Ari Shapiro, Andrew Feng, and Evan Suma from the USC Institute for Creative Technologies gave a very compelling presentation on their system for creating a 3D model and automatically rigging, skinning, and simulating it in a 3D environment in a matter of minutes. All of this is done by capturing the human figure with Microsoft's Kinect.
The process starts by scanning the actor in four bind poses: left, front, right, and back. The capture and processing take about five minutes, and the resulting model can be imported into any animation application, where mo-cap data can be added. During the presentation, the full workflow, from the Kinect scan to applying animation to the model in a 3D application, took about ten minutes.
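The automatic skinning step mentioned above can be illustrated with linear blend skinning, the standard deformation model that auto-rigging pipelines typically target. The function and toy data below are illustrative, not part of the ICT system:

```python
import numpy as np

def skin_vertices(vertices, weights, bone_transforms):
    """Deform a mesh with linear blend skinning (LBS).

    vertices:        (V, 3) rest-pose positions
    weights:         (V, B) per-vertex bone weights, each row summing to 1
    bone_transforms: (B, 4, 4) matrices mapping rest pose to current pose
    """
    V = vertices.shape[0]
    homo = np.hstack([vertices, np.ones((V, 1))])           # (V, 4) homogeneous
    # Transform every vertex by every bone: (B, V, 4)
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, homo)
    # Blend the per-bone results by the vertex weights: (V, 4)
    blended = np.einsum('vb,bvi->vi', weights, per_bone)
    return blended[:, :3]

# Toy rig: two vertices, two bones. Bone 0 stays put; bone 1 moves +1 in x.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
w = np.array([[1.0, 0.0], [0.5, 0.5]])
bones = np.stack([np.eye(4), np.eye(4)])
bones[1, 0, 3] = 1.0
print(skin_vertices(verts, w, bones))  # second vertex lands halfway, at x=1.5
```

The value of an automatic pipeline is that the `weights` and `bone_transforms` inputs, normally authored by hand, are produced directly from the scan.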
The model quality is serviceable: the coloring, body shape, and likeness of the actor are all there, but the result is not a high-resolution mesh. Home-based scanning with the Kinect lets game developers and designers create rigged 3D models for animation quickly and at very low cost. This type of avatar creation is best suited to games where the character is in the distance or part of a crowd.
In the future they hope to extend the system to create a deformable face rig by moving the actor's face closer to the Kinect during the scanning process.
Real-Time Facial Animation with Just a Webcam
Charles Pina and Emiliano Gambaretto from Mixamo, Inc. demonstrated their facial motion capture technology, Face Plus, which takes a simple webcam and the Unity game engine and creates facial animation for a 3D character, displayed in real time. The system works by detecting the user's face through the webcam, then identifying its appearance and expressions. To demonstrate the technology, they used Face Plus in the 3D short "Unplugged," which is rendered in real time in Unity.
Typically, facial animation in games is done either with keyframe animation or with facial mocap; here, the mocap is done through a simple webcam. Face Plus currently supports Unity and MotionBuilder and is designed to let animators capture facial performances right at their own computers, using a webcam, for extremely fast facial animation.
The benefits of a system like this are huge: extremely low cost and virtually no setup time, because the facial capture is completely markerless. You also see real-time results from your facial performance and its effect on your 3D mesh. Face Plus achieves this by using GPU acceleration, running at 50 FPS on higher-end graphics cards and 24 FPS on lower-end ones.
To use Face Plus, you need a character rig that is Face Plus ready. You can upload your character to the Mixamo site and they will apply the required facial rig; you can then download it as an FBX with blendshapes applied, ready for Face Plus.
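Face Plus's internals are not public, but the blendshape model its FBX output plugs into is standard: each frame, the tracker emits a weight per expression shape, and the deformed face is the neutral mesh plus a weighted sum of per-shape offsets. A minimal sketch, with made-up toy meshes:

```python
import numpy as np

def apply_blendshapes(neutral, shapes, weights):
    """neutral: (V, 3) rest face; shapes: (S, V, 3) sculpted target meshes;
    weights: (S,) per-frame activation of each shape, typically in [0, 1]."""
    deltas = shapes - neutral                      # (S, V, 3) per-shape offsets
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy face with 3 vertices and two shapes (hypothetical names).
neutral = np.zeros((3, 3))
smile = neutral.copy(); smile[0] = [0.0, 1.0, 0.0]    # mouth corner raised
jaw_open = neutral.copy(); jaw_open[2] = [0.0, -1.0, 0.0]
mesh = apply_blendshapes(neutral, np.stack([smile, jaw_open]),
                         np.array([0.5, 1.0]))        # half smile, jaw fully open
print(mesh)
```

In a real pipeline the weight vector would be updated from the webcam tracker every frame and applied to the downloaded FBX's blendshape channels.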
Facial Motion Capture Using Kinect
Izmeth Siddeek from the Vancouver Institute of Media Arts used to be the lead character artist at Capcom. During his presentation he spoke about his work on games like Dead Rising and Mass Effect. On those productions he saw that an overwhelming number of animations were needed, which is what sparked his interest in low-cost motion capture.
Izmeth stated that the industry standard for motion capture is the Vicon capture system. It is very powerful, with extremely high-quality output, but it is also very expensive and ill-suited to smaller budgets. He also noted that many studios are now embracing the Kinect, with animators keeping one on their desk to quickly capture mo-cap.
The Kinect has gained a lot of traction in the game industry and is far more than a motion-sensing gadget for virtual tennis. It works by pairing an infrared projector with an optical camera: the Kinect matches the projected infrared pattern against what the camera sees to build a depth map, generates a point cloud from that depth data, and the depth information can then be mapped back onto a skeleton. Izmeth uses the software "Face Shift" to create the motion capture directly from the Kinect and then streams the animation into Unity.
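The depth-map-to-point-cloud step described above is a standard pinhole back-projection. A sketch, using illustrative intrinsics in the ballpark of the original Kinect's 640x480 depth camera rather than calibrated values:

```python
import numpy as np

FX, FY = 580.0, 580.0   # focal lengths in pixels (assumed, not calibrated)
CX, CY = 320.0, 240.0   # principal point for a 640x480 depth map

def depth_to_point_cloud(depth):
    """depth: (H, W) array of depths in meters; returns (N, 3) camera-space
    points. Zero-depth pixels (no infrared return) are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - CX) * z / FX   # back-project each pixel through the pinhole model
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

depth = np.zeros((480, 640))
depth[240, 320] = 2.0                # one valid sample on the optical axis
print(depth_to_point_cloud(depth))   # -> [[0. 0. 2.]]
```

Skeleton fitting then works on this point cloud, which is the part handled by the Kinect SDK or by tools like the one Izmeth demonstrated.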
These advancements in motion capture open the door for smaller-budget studios to create great performances using off-the-shelf motion-sensing cameras like the Kinect. The capabilities of these programs and devices are only going to grow in the future.