In this course we learn how to use the Kinect and the Kinect SDK to build a real application that takes advantage of the Kinect's color camera and depth detection, and that uses gestures and audio commands to control the application.
Have you played with the Microsoft Kinect? Do you think it's pretty cool? Wouldn't it be awesome if you could write your own Kinect-enabled applications to take advantage of this amazing piece of technology? In this course I'll walk you through the creation of a real application using the Kinect. We'll be building a Fruit Ninja clone called Shape Ninja, which will be capable of detecting chopping gestures and responding to audio commands. We'll start off by learning a little bit about the Kinect itself and the Kinect SDK. Then, I'll show you just how easy it is to get color image and depth data from the Kinect. We'll make our application detect and respond to a chopping gesture. And we'll wrap things up by learning how to use the Microsoft Speech Platform SDK in combination with the Kinect's audio sensors to implement real voice commands for our application. If you've been waiting for the opportunity to check out the Kinect, but you didn't know where to start, or perhaps you thought it would be difficult to learn, this course will get you up and running with the Kinect in no time.
Meet The Kinect Hi, this is John Sonmez from Pluralsight, and welcome to this course on building a real application with the Kinect. The Kinect is an exciting piece of hardware that has had a huge impact on the future of not just gaming, but human-computer interaction in general. Did you know that as of February 2013, there were over 24 million Kinects sold? The Kinect actually claimed the Guinness World Record for being the fastest-selling consumer electronics device when it sold over 8 million units in the first 60 days after it was released. So it's no surprise that you're interested in this course on the Kinect, and that I'm interested in making it. The course will be pretty fast-paced as we build an entire real working application with the Kinect. We'll be using the Kinect for Windows SDK to gain access to the Kinect's powerful hardware and sensors. So stay tuned, and get ready to have some fun as we explore this amazing device.
Capturing Image Data Hi, this is John Sonmez from Pluralsight, and in this module we'll be learning how to use the Kinect's color camera to capture image data and display it in our application. Capturing image data from the Kinect is pretty easy to do with the Kinect SDK, but understanding how it works takes a bit more effort. In this module, we'll explore not only the Kinect API, but also learn exactly how the image data that comes from the Kinect is represented, and how we can transform that data into a bitmap image. By the end of this module, you should know how to get image data from the Kinect's color camera, and how to transform that image data into a bitmap that can be displayed on the screen. You'll also learn enough about how images and bitmaps work to experiment with creating your own filters and transformations to create different effects.
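To give you a feel for the kind of pixel manipulation this module covers: the Kinect's color stream hands you a flat byte array of 4-byte pixels (blue, green, red, plus an unused byte), laid out row by row with a stride of width times bytes-per-pixel. The course's actual code works with this buffer in C# through the Kinect SDK; the sketch below is a hypothetical Python illustration of the same layout, implementing the sort of simple color-inversion filter you could experiment with.

```python
def invert_colors(frame, width, height, bpp=4):
    """Invert the B, G, R channels of a raw BGR32 frame buffer,
    leaving each pixel's 4th (unused) byte alone.

    frame is a flat bytes object laid out row by row; the stride
    (bytes per row) is width * bpp, mirroring how a bitmap is packed.
    """
    out = bytearray(frame)
    stride = width * bpp          # bytes per row of the bitmap
    for y in range(height):
        for x in range(width):
            i = y * stride + x * bpp
            for c in range(3):    # the B, G, and R bytes
                out[i + c] = 255 - out[i + c]
    return bytes(out)
```

Swapping the inversion for any other per-pixel computation (grayscale, thresholding, tinting) gives you the custom filters and effects mentioned above.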
Capturing Depth Data Hi, this is John Sonmez from Pluralsight, and in this module we'll be learning how to capture and process data from the depth stream provided by the Kinect sensor. Depth data is really what the Kinect is all about. Without the ability to detect depth, the Kinect wouldn't be much more than a fancy camera and a microphone. The Kinect gives us access to a raw stream of depth data for its field of vision. Using this depth data, we can figure out exactly how far away each detected pixel in our image frame is from the Kinect. We can also determine whether a particular pixel from that data has been identified by the Kinect as part of a human. Everything happens blazingly fast on the actual Kinect hardware, giving us access to all the data at 30 FPS.
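Both pieces of information mentioned above, distance and player identification, are packed into a single 16-bit value per pixel in the Kinect v1 depth stream: the low 3 bits hold a player index (0 means no player, 1 through 6 identify a tracked person) and the upper 13 bits hold the distance in millimetres. As a rough sketch of the unpacking step (the course's actual code does this in C#; the function name here is my own):

```python
PLAYER_INDEX_BITS = 3  # the Kinect v1 depth stream reserves the low 3 bits

def unpack_depth_pixel(raw):
    """Split one 16-bit depth-stream value into (depth_mm, player_index).

    depth_mm:     distance from the sensor in millimetres (upper 13 bits)
    player_index: 1-6 if the pixel was identified as a human, 0 otherwise
    """
    depth_mm = raw >> PLAYER_INDEX_BITS
    player_index = raw & ((1 << PLAYER_INDEX_BITS) - 1)
    return depth_mm, player_index
```

Shifting and masking like this for every pixel in a 640x480 frame, 30 times a second, is exactly the kind of work the Kinect hardware and SDK make cheap for us.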
Tracking Gestures Hi, this is John Sonmez from Pluralsight, and in this module we'll be learning how to use the skeleton data stream generated by the Kinect to track gestures. The Kinect is a powerful piece of hardware that can perform complex calculations in a very short amount of time to determine, pretty accurately, the position of human joints from the depth data it receives. Unfortunately, the Kinect and the Kinect SDK don't give us any default gestures that are automatically recognized, but it isn't that hard to build that capability into our own application. In this module, we'll add the ability to chop shapes by learning how to track where a human hand is and follow its movement. By the end of this module, our game will actually start to take shape, as we'll be able to do something more than just display an image.
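Since the SDK gives us joint positions but no built-in gestures, a chop detector boils down to watching the hand joint's position over a short window of frames and firing when it travels far enough, fast enough. The sketch below is a minimal, hypothetical version of that idea (the class name and the window and distance thresholds are my own illustrative choices, not the course's code), assuming hand positions arrive at roughly 30 FPS in metres:

```python
from collections import deque

class ChopDetector:
    """Detect a fast horizontal hand swipe ("chop") from a stream of
    hand-joint X coordinates sampled at ~30 FPS."""

    def __init__(self, window=10, min_distance=0.4):
        self.window = window              # frames to look back (~1/3 s at 30 FPS)
        self.min_distance = min_distance  # metres the hand must travel in that window
        self.history = deque(maxlen=window)

    def update(self, hand_x):
        """Feed the latest hand X coordinate; return True when a chop is seen."""
        self.history.append(hand_x)
        if len(self.history) < self.window:
            return False                  # not enough samples yet
        travelled = abs(self.history[-1] - self.history[0])
        if travelled >= self.min_distance:
            self.history.clear()          # don't re-trigger on the same swing
            return True
        return False
```

A slow drifting hand never covers the threshold distance inside the window, so only a quick deliberate swipe registers; tuning the window and distance trades sensitivity against false positives.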
Adding Audio Commands Hi, this is John Sonmez from Pluralsight, and in this module we'll wrap up the creation of our Shape Ninja application as we utilize the Kinect's microphone array to add some voice commands. Many developers are aware of the Kinect's powerful capability to detect human skeletons and allow for user interaction using gestures and movements, but the Kinect is also well suited for voice-enabled applications because of its powerful microphones, which are able to remove background noise and focus on human speech. The only problem is that, just like with gestures, the Kinect doesn't have built-in voice recognition capabilities. But, using the Microsoft Speech Platform SDK, we can easily use the Kinect's microphones to add complex speech recognition capabilities to our applications. By the end of this module, you should have a good understanding of how to implement speech recognition in your application, and even how to create your own grammars.