The Google Play Services API provides a breadth of tools to help build great features natively in Android applications. This course will introduce you to the basics of implementing two of these features: Nearby Messaging and Mobile Vision.
In the past, developers had to rely upon third-party online services to offer features such as text detection or barcode scanning in their apps. In this course, Getting Started with the Google Play Services API, you'll be introduced to these features and learn how to integrate them into your apps. First, you'll learn how to implement Nearby Messaging natively in your apps and be introduced to the code needed to send messages between Android apps that are in close proximity to each other. Then, you'll be introduced to Mobile Vision, and you'll explore how to use three of its features available in the Google Play Services API: text detection, barcode detection, and face detection. When you're finished with this course, you'll have the fundamental knowledge needed to offer these advanced features in the apps you build or in apps you've built already.
Javon enjoys developing Android applications and has published four Android apps along with two open source Android libraries, available through JCenter. He also has a strong interest in numerous areas of mathematics and enjoys playing basketball and online chess in his spare time.
Section Introduction Transcripts
Course Overview Hi everyone. My name is Javon Davis, and welcome to my course, Getting Started with Google Play Services. I'm a mobile automation engineer at Quality Works Consulting Group, and I've been developing mobile applications for over four years. Something not well known is that Google Play Services provides powerful features like mobile vision and proximity-based messaging natively on Android. In this course, we're going to introduce you to all these exciting features and more. Some of the major topics that we'll cover include barcode scanning, text detection and text recognition, face detection and face image processing, and proximity-based messaging through the Nearby Messages API. By the end of this course, you'll know how to get started building these awesome features into your Android apps. Before beginning this course, you should be familiar with the fundamentals of Android development. Topics such as activities, fragments, and Gradle should all sound familiar to you. The course will not focus on building Android apps, but rather on how to implement these features within Android apps. I hope you'll join me on this journey to learn more about mobile vision and nearby messaging in Android with the Getting Started with Google Play Services course at Pluralsight.
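To give a flavor of the proximity-based messaging covered in the course, here is a minimal sketch of publishing and subscribing with the Nearby Messages API. The class name and payload string are illustrative; this assumes the play-services-nearby dependency is added in Gradle and an API key is configured in the manifest, as the course setup describes.

```java
import android.app.Activity;
import android.util.Log;
import com.google.android.gms.nearby.Nearby;
import com.google.android.gms.nearby.messages.Message;
import com.google.android.gms.nearby.messages.MessageListener;

public class NearbySketch {
    private static final String TAG = "NearbySketch";

    // Called as messages from nearby devices running the same app appear and disappear.
    private final MessageListener listener = new MessageListener() {
        @Override
        public void onFound(Message message) {
            Log.d(TAG, "Found: " + new String(message.getContent()));
        }

        @Override
        public void onLost(Message message) {
            Log.d(TAG, "Lost: " + new String(message.getContent()));
        }
    };

    // Publish a small byte payload and start listening for nearby messages.
    public void start(Activity activity) {
        Message message = new Message("Hello, Nearby!".getBytes());
        Nearby.getMessagesClient(activity).publish(message);
        Nearby.getMessagesClient(activity).subscribe(listener);
    }
}
```

In a real app you would also unpublish and unsubscribe in the activity's stop lifecycle callback, which the course walks through in the Nearby Messaging module.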
Intro to Mobile Vision with Text Detection Hi, my name is Javon. Welcome to the first mobile vision module. Before I start showing you how to detect text in your applications, I'll take this opportunity to explain what the term mobile vision means. When we as mobile developers use the term mobile vision, we're referring to the ability of mobile applications to process the world visually, similar to how we, as humans, do. When I show you this picture of a group of people, you can clearly identify the faces. If I showed you an image with this street address, you'd be able to process it, know that words were present, and then identify it as a street address, right? This kind of ability in mobile devices has been coined mobile vision. Google has provided us with an API through Google Play Services that we can use to implement mobile vision in our apps. The API provides three useful mobile vision features: text detection, barcode detection, and face detection. And we'll be going through all three of these in this course. So let's get right to it.
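As a sketch of where this module is headed, text detection on a static image boils down to building a TextRecognizer, wrapping a Bitmap in a Frame, and reading back the detected text blocks. The class and method names below are illustrative; the Bitmap is assumed to come from elsewhere (a photo or a resource), and the play-services-vision dependency is assumed to be in place.

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.util.Log;
import android.util.SparseArray;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.text.TextBlock;
import com.google.android.gms.vision.text.TextRecognizer;

public class TextDetectionSketch {
    public static void detectText(Context context, Bitmap bitmap) {
        TextRecognizer recognizer = new TextRecognizer.Builder(context).build();
        if (!recognizer.isOperational()) {
            // The native library may still be downloading on first use.
            Log.w("TextDetection", "Text recognizer dependencies not yet available");
            return;
        }
        // Wrap the image in a Frame, the common input type for all vision detectors.
        Frame frame = new Frame.Builder().setBitmap(bitmap).build();
        SparseArray<TextBlock> blocks = recognizer.detect(frame);
        for (int i = 0; i < blocks.size(); i++) {
            TextBlock block = blocks.valueAt(i);
            Log.d("TextDetection", "Found text: " + block.getValue());
        }
        recognizer.release(); // free the underlying native resources
    }
}
```

The isOperational check matters in practice: the detector's native dependencies are downloaded on demand, so the first run on a fresh device may not be ready immediately.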
Mobile Vision: Barcode Detection My name is Javon. Welcome to the module on using mobile vision for barcode detection. If you missed it, there's a short discussion of what mobile vision is at the beginning of the previous module, where we started with text detection. Otherwise, let's get to it. The Google Play Services library gives us another cool API to work with, called the Barcode API. The Barcode API detects barcodes in real time on any device in any orientation. It can also detect multiple barcodes at once. The best part about all of this is that we don't need an internet connection to use it. Everything is done locally on the device, so we won't have to worry about our users needing to connect to the internet to use this feature. Barcodes and QR codes hold an amazing amount of data, and having a mobile device be able to access this data can be greatly empowering to your users. Most people might think there are only two types of barcodes, 1-dimensional and 2-dimensional. But within these, there's a number of different formats that can be handled. The Play Services API makes its best effort to support a large number of these formats. It reads QR codes, Code-128 (one of the most popular 1D barcode formats, the one we're most used to), Code-93, Aztec, PDF417, and a number of others. Believe me, that's a lot more than most alternatives offer, so it's impressive that they handle so many. And the best part is that it automatically parses the barcode, determines what kind of barcode it is, and can extract a large variety of data, such as URLs, contact information, calendar events, phone numbers, and much more.
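The detection flow described above can be sketched as follows for a static image. This is a minimal, illustrative example (class name and log tags are my own); restricting the formats passed to the builder is optional but speeds up detection, and the structured-value fields like rawValue and url come from the automatic parsing mentioned above.

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.util.Log;
import android.util.SparseArray;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.barcode.Barcode;
import com.google.android.gms.vision.barcode.BarcodeDetector;

public class BarcodeSketch {
    public static void scan(Context context, Bitmap bitmap) {
        // Restricting formats speeds up detection; omitting setBarcodeFormats
        // means the detector looks for all supported formats.
        BarcodeDetector detector = new BarcodeDetector.Builder(context)
                .setBarcodeFormats(Barcode.QR_CODE | Barcode.CODE_128)
                .build();
        if (!detector.isOperational()) {
            Log.w("BarcodeSketch", "Detector dependencies not yet downloaded");
            return;
        }
        Frame frame = new Frame.Builder().setBitmap(bitmap).build();
        SparseArray<Barcode> barcodes = detector.detect(frame);
        for (int i = 0; i < barcodes.size(); i++) {
            Barcode barcode = barcodes.valueAt(i);
            Log.d("BarcodeSketch", "Raw value: " + barcode.rawValue);
            // The API also exposes parsed, structured content, e.g. URLs:
            if (barcode.valueFormat == Barcode.URL && barcode.url != null) {
                Log.d("BarcodeSketch", "URL: " + barcode.url.url);
            }
        }
        detector.release();
    }
}
```

No network call happens anywhere in this flow; once the detector is operational, everything runs on the device.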
Mobile Vision: Face Detection Hi, I'm Javon, and this is the final module of this course, where we'll talk about how you can build rock star apps that can detect faces. Let me start by clarifying what face detection is. Face detection is simply identifying the presence of a human face. Nothing more and nothing less. This is different from what is known as face recognition, which is the ability to determine whether two faces belong to the same person. We also have what is known as face tracking, which is basically face detection extended and applied to a number of sequential frames. Throughout the module, you'll hear me use terms such as landmarks and classifications, so let me fill you in on what these are. Landmarks are well-known parts of the human face. The left eye, right eye, and nose base are all examples of landmarks. The Face API provides the ability to find landmarks on a detected face. Classification is determining whether a certain facial characteristic is present. For example, a face can be classified by whether its eyes are open or closed. Another example is whether the face is smiling or not. Alright, so let's get into what we have to work with through the Google Play Services API. The Play Services API gives us what's known as the Face API. This API gives us the ability to perform face detection, and once a face is detected, it can be searched for landmarks such as the eyes and nose, and the face can then be classified based on various characteristics. I think being able to detect faces and their activities opens up a whole new realm of possibilities for what we can do with our applications. We can do things like build avatars based on a user's expression or use facial activity as a way of interacting with the app, like blinking to fire in a shooter game. I'm excited to go through it with you and see what you'll build.
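The detection, landmark, and classification ideas above can be sketched together in one pass over a static image. This is an illustrative sketch (the class name and log tags are my own): the builder opts in to all landmarks and all classifications, and the classification getters return probabilities rather than booleans.

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.util.Log;
import android.util.SparseArray;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;
import com.google.android.gms.vision.face.Landmark;

public class FaceSketch {
    public static void detectFaces(Context context, Bitmap bitmap) {
        // Opt in to landmark search and classification; both are off by default
        // because they make detection slower.
        FaceDetector detector = new FaceDetector.Builder(context)
                .setLandmarkType(FaceDetector.ALL_LANDMARKS)
                .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
                .build();
        if (!detector.isOperational()) {
            Log.w("FaceSketch", "Face detector not yet operational");
            return;
        }
        Frame frame = new Frame.Builder().setBitmap(bitmap).build();
        SparseArray<Face> faces = detector.detect(frame);
        for (int i = 0; i < faces.size(); i++) {
            Face face = faces.valueAt(i);
            // Classifications are probabilities in [0, 1]; a negative value
            // means the classification was not computed for this face.
            Log.d("FaceSketch", "Smiling: " + face.getIsSmilingProbability());
            Log.d("FaceSketch", "Left eye open: " + face.getIsLeftEyeOpenProbability());
            for (Landmark landmark : face.getLandmarks()) {
                Log.d("FaceSketch", "Landmark " + landmark.getType()
                        + " at " + landmark.getPosition());
            }
        }
        detector.release();
    }
}
```

For the blink-to-fire idea, you would watch getIsLeftEyeOpenProbability and getIsRightEyeOpenProbability drop and recover across sequential frames, which is where face tracking comes in.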