Wouldn't it be great if an app user could simply ask to use one of your app's features directly from the Siri interface? This course, An Introduction to SiriKit, will show you how to create voice commands using the SiriKit framework. First, you'll go over what SiriKit can and cannot do, along with a preview of what you will be creating in the course. Next, you'll see what Intents Extensions are and how an app handles them when communicating with SiriKit, as well as how to develop a UI that is displayed after Siri receives a vocal instruction. Finally, you'll see how to record and store audio, then pass it to Siri for transcription. By the end of this course, you'll be ready to create fully-featured voice dictation using SiriKit. Software required: Xcode 8.
About the author
Grant Klimaytys is a developer, educator, and entrepreneur. But mostly, he would describe himself as obsessed with efficiency. That efficiency extends across many coding fields, such as Swift, .NET, Java, and multiple web development frameworks. He is the principal author on his app development site, blogging regularly whenever he discovers something that may benefit other developers.
Course Overview Hello everyone. My name is Grant Klimaytys, and welcome to my course, An Introduction to SiriKit. I run a software development business specializing in cross-platform mobile apps. One of the most important tasks we have is staying on the absolute cutting edge of mobile ecosystems, the most prominent of which is iOS. This course is all about showing you the brand new SiriKit features that have finally been opened up with the release of iOS 10. Some of the major topics we will cover include: creating voice controls for your app right from the Siri interface, customizing the Siri interface to make it a seamless experience, and using the power of Siri to turn speech into text effortlessly for your users. By the end of this course you'll know how to really wow your users with amazing voice control features. Before beginning the course, you should be familiar with basic iOS and Swift or Objective-C concepts. I hope you'll join me on this journey of putting your apps ahead of the competition, with the An Introduction to SiriKit course at Pluralsight.
Introduction to SiriKit Hello. My name is Grant Klimaytys, and I'm a consultant and trainer for several popular app development platforms. I'd like to welcome you to my course, An Introduction to SiriKit. This course contains instructions on implementing a highly anticipated feature in iOS 10 and beyond: SiriKit. To start with, we should first understand what Siri is. Siri is the voice recognition technology that powers voice commands and speech-to-text transcription on iOS. So far this technology has been limited to Apple-produced apps only, some functions of which you may be familiar with, for example, pressing and holding the home button to launch a voice-activated command screen, or using dictation within apps that normally accept input through the keyboard. Now, as of iOS 10, Apple has finally opened up Siri for use by us developers. That means we now get access to all those great features I mentioned, as well as the ability to customize our own Siri implementation. Throughout this course you will learn what SiriKit is, what it can and cannot do, how to implement voice commands, UI extensions in SiriKit, and speech-to-text transcription. So if the prospect of creating voice-command-enabled apps excites you, then let's get going.
Using SiriKit Intents Extensions Intents Extensions are the most widely used part of SiriKit. They perform an action based on a vocal command issued by a user. With the introduction of SiriKit to developers, we can now use the Siri functionality ourselves by registering custom commands for our apps, and these can be called directly from the Siri interface. For example, to send a message you would say the following into Siri: Use Chat App to send a message to Grant with the content, Where are you? This is a pretty neat feature, but be warned that this initial release of SiriKit is limited to a select set of app types, which I will go through; however, regardless of which domain you wish to use, the architecture for implementing SiriKit is the same. This section is going to walk you through that implementation in the following manner. I'll first show you the intent types SiriKit allows, how to set up the SiriKit frameworks for use within your app, how to create a basic Intents Extension, and finally, how to use authentication checks within your app so that Siri asks your user to open the app when Siri can't quite handle what they need to do. I'm excited to show you all SiriKit has to offer, so let's get started.
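To make that architecture concrete before we dive in, here is a minimal sketch of an Intents Extension handler for the message-sending example above. It uses the iOS 10 / Xcode 8 (Swift 3) signatures of the INSendMessageIntentHandling protocol; the actual send inside handle is left as a comment, because that part is your app's own logic, not SiriKit's.

```swift
import Intents

// Principal class of the Intents Extension target. Siri asks it for a
// handler object, then drives the resolve -> handle flow on that object.
class IntentHandler: INExtension, INSendMessageIntentHandling {

    override func handler(for intent: INIntent) -> Any {
        return self
    }

    // Resolve the recipients Siri heard. A real app would match them
    // against its own contact list; here we accept whatever Siri found.
    func resolveRecipients(forSendMessage intent: INSendMessageIntent,
                           with completion: @escaping ([INPersonResolutionResult]) -> Void) {
        guard let recipients = intent.recipients, !recipients.isEmpty else {
            completion([INPersonResolutionResult.needsValue()])
            return
        }
        completion(recipients.map { INPersonResolutionResult.success(with: $0) })
    }

    // Resolve the message content, prompting the user if it is missing.
    func resolveContent(forSendMessage intent: INSendMessageIntent,
                        with completion: @escaping (INStringResolutionResult) -> Void) {
        if let text = intent.content, !text.isEmpty {
            completion(INStringResolutionResult.success(with: text))
        } else {
            completion(INStringResolutionResult.needsValue())
        }
    }

    // Perform the action once everything has been resolved.
    func handle(sendMessage intent: INSendMessageIntent,
                completion: @escaping (INSendMessageIntentResponse) -> Void) {
        // Your app's own message-sending code would go here.
        completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
    }
}
```

For this to be reachable from Siri, the extension's Info.plist must list INSendMessageIntent under IntentsSupported, and the host app needs an NSSiriUsageDescription entry and must request Siri permission via INPreferences.requestSiriAuthorization. We'll set all of that up step by step in this section.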
SiriKit Intents UI Extensions Welcome to this module on SiriKit Intents UI Extensions. Now these are a fantastic way to pass information to your user without them having to open the app. They allow your app to display a custom view instead of, or in conjunction with, the information that Siri provides. This is a good thing because your app experience can start outside the app and can continue to reinforce your brand. UI Extensions are unique to your app and customizable. In this module you'll be creating a very simple custom interface for Siri that looks like this. This is known as a Siri snippet, and you'll get all the information you need to create your own, and yours will probably look a little better than mine. In this module you're going to learn all about setting up your UI Extension for first use, how to customize the view controllers that sit behind it, how to display those view controllers inside of the Siri interface, and finally, an optional step of removing the default Siri UI to give your snippet an overall look and feel that matches the in-app experience. This is going to be a great module, so let's get started.
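As a preview of where this module is heading, the sketch below shows the core of an Intents UI Extension as of iOS 10: a view controller that adopts INUIHostedViewControlling and fills in its view from the INInteraction Siri hands it. The messageLabel outlet is a placeholder for whatever your own storyboard contains.

```swift
import IntentsUI

class IntentViewController: UIViewController, INUIHostedViewControlling {

    // Placeholder outlet; wire it to a label in MainInterface.storyboard.
    @IBOutlet weak var messageLabel: UILabel!

    // Siri calls this with the interaction being displayed. We pull the
    // message text out of the intent, then tell Siri what size our
    // snippet needs via the completion handler.
    func configure(with interaction: INInteraction,
                   context: INUIHostedViewContext,
                   completion: ((CGSize) -> Void)?) {
        if let intent = interaction.intent as? INSendMessageIntent {
            messageLabel.text = intent.content
        }
        let maximum = extensionContext?.hostedViewMaximumAllowedSize ?? CGSize.zero
        completion?(CGSize(width: maximum.width, height: 120))
    }
}
```

Returning a height smaller than the maximum keeps the snippet compact. For the optional last step mentioned above, suppressing Siri's own duplicate of the message, the same class can additionally adopt INUIHostedViewSiriProviding and return true from its displaysMessage property.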
Transcribing Speech to Text with SiriKit Transcription of speech to text with SiriKit is one of the most exciting new features of iOS 10. The process is very simple. Speech is given to Siri either in prerecorded format or via live speech, the data is sent off in blocks to the Apple servers for processing, and the transcription is sent back to the device in segments. Now from that description you've probably figured out that this is an asynchronous operation, so there's a little threading to consider. On the whole, though, the implementation pattern in Swift takes care of this for us. In this module you're going to learn how to set up and ask for the required permissions for transcription, how to set up a recording instance in order to record an audio file, and finally, how to implement and handle a transcription request. Okay, let's get going.
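The pattern just described, permission first, then a request, then transcription segments arriving asynchronously in a callback, can be sketched as follows. Note that this dictation capability is exposed through the Speech framework (SFSpeechRecognizer), the companion API Apple shipped alongside SiriKit in iOS 10, and the app's Info.plist needs an NSSpeechRecognitionUsageDescription entry (plus NSMicrophoneUsageDescription if you record live audio). The file URL here is a placeholder for the recording you'll create in this module.

```swift
import Speech

// Transcribe a prerecorded audio file. Results are delivered in
// segments, on a background queue, as Apple's servers process each
// block of audio.
func transcribe(fileAt url: URL) {
    // Ask the user's permission first; requests are refused without it.
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized,
              let recognizer = SFSpeechRecognizer(),
              recognizer.isAvailable else {
            print("Speech recognition is not authorized or not available.")
            return
        }

        let request = SFSpeechURLRecognitionRequest(url: url)
        recognizer.recognitionTask(with: request) { result, error in
            if let result = result {
                // Partial transcriptions arrive repeatedly; isFinal
                // marks the last, complete one.
                print(result.bestTranscription.formattedString)
                if result.isFinal {
                    print("Transcription complete.")
                }
            } else if let error = error {
                print("Transcription failed: \(error)")
            }
        }
    }
}
```

Because the result handler runs off the main thread, any UI update you make with the transcribed text should be dispatched back to the main queue, which is the small piece of threading mentioned above.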