SageMaker is a fully managed machine learning (ML) platform on AWS that makes prototyping, building, training, and hosting ML models very simple indeed. In this course, Deep Learning Using TensorFlow and Apache MXNet on AWS SageMaker, you'll be shown how to use the built-in algorithms, such as the linear learner and PCA, hosted on SageMaker containers; the only code you need to write is to prepare your data. You'll then see the three different ways in which you can build your own custom model on SageMaker. You'll bring your own pre-trained model and host it on SageMaker's first-party containers. You'll then work on building your model using Apache MXNet, and finally bring a custom container to be trained on SageMaker.
When you have finished this course, you will also know how to connect to other AWS services, such as S3 and Redshift, to access your training data, run training in a distributed manner, and autoscale your model variants.
A problem solver at heart, Janani has a master's degree from Stanford and worked for 7+ years at Google. She was one of the original engineers on Google Docs and holds four patents for its real-time collaborative editing framework.
Course Overview Hi, my name is Janani Ravi, and welcome to this course on Deep Learning Using TensorFlow and Apache MXNet on AWS SageMaker. A little about myself, I have a master's degree in electrical engineering from Stanford and have worked at companies such as Microsoft, Google, and Flipkart. At Google, I was one of the first engineers working on real-time collaborative editing in Google Docs, and I hold four patents for its underlying technologies. I currently work on my own startup, Loonycorn, a studio for high-quality video content. SageMaker is a fully-managed machine-learning platform on AWS, which makes prototyping, building, training, and hosting ML models very simple indeed. SageMaker uses notebook instances, which host Jupyter Notebooks where you prototype and prepare your data for training your model. There are a number of different ways in which you can build and train models on AWS. SageMaker offers built-in algorithms for which the only code you need to write is code to prepare your data. The actual code for the model is hosted in AWS containers. If you prefer to build your own custom model, you can do it using the TensorFlow or the Apache MXNet deep-learning frameworks. You can also bring your own pre-trained model and host it on AWS's first-party containers. SageMaker also gives you the option to bring your custom code in your own custom container. It has built-in support for Docker containers, which can host your ML model. The examples in this course also cover how you can connect to other AWS services such as S3 buckets and the Redshift data warehouse. At the end of this course, you should be very comfortable building, training, and hosting your ML models on the SageMaker platform.
Machine Learning on the Cloud with AWS SageMaker Hi, and welcome to this course on Deep Learning Using TensorFlow and Apache MXNet on AWS SageMaker. SageMaker is Amazon's offering to allow you to build and train machine-learning models on the cloud. It's a fully-managed machine-learning service. This means that you do not have to worry about the nitty-gritty details of installing the correct libraries for your machine-learning models to run, or managing the distribution of your training algorithm across workers. SageMaker abstracts all of this away from you, so that you can focus on building the right machine-learning model, and training it with the appropriate data for your use case. SageMaker is on AWS, which means it's entirely cloud based, so if you want to scale your training or your deployment, that's as easy as adding additional instances. SageMaker provides an integrated Jupyter Notebook instance where you can develop and prototype your machine-learning models. Jupyter Notebooks are browser-based Python shells, which allow for interactive programming. You write your code, and see the results right there on screen. There are a number of different ways in which you can build your machine-learning models in SageMaker. You can develop custom models from scratch, or use built-in models on your training data. SageMaker also allows you to bring in your pre-trained model to host on the cloud, or if your custom code is complex with many dependencies, you can choose to wrap it up in a container, and train and host this container at scale.
Using Built-in Algorithms in SageMaker Hi, and welcome to this module on Using Built-in Algorithms in SageMaker. SageMaker provides a number of machine-learning models out of the box. There are a variety of built-in models that deal with different ML problem types. The idea behind these built-in algorithms is that the developer needs to write no code for the actual machine-learning model. All she has to do is to specify the training dataset in the right format for the built-in algorithm. These built-in algorithms are not pre-trained machine-learning models that are ready to use; they have to be explicitly trained using your dataset. The format of the training data is model dependent. Based on the built-in algorithm that you want to use, the format will change. You'll have to look up the documentation for that specific model to understand how the input is to be specified. SageMaker offers a wide range of supervised and unsupervised learning models. Examples are the Linear Learner, Principal Component Analysis (PCA), K-means Clustering, XGBoost, and so on.
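To make the data-formatting step concrete, here is a minimal sketch of preparing training data for the CSV input mode that several built-in algorithms (such as Linear Learner and XGBoost) accept: header-less rows with the target label in the first column. The helper name is mine, not part of the SageMaker SDK, and as noted above you should check the documentation of your chosen algorithm for its exact input format.

```python
import io

import numpy as np


def to_sagemaker_csv(features, labels):
    """Serialize a dataset as header-less CSV with the label in the
    first column -- the layout expected by the CSV training input of
    several SageMaker built-in algorithms."""
    rows = np.column_stack([np.asarray(labels).reshape(-1, 1),
                            np.asarray(features)])
    buf = io.StringIO()
    np.savetxt(buf, rows, delimiter=",", fmt="%g")
    return buf.getvalue()


# Two training examples with three features each, labels 1 and 0.
csv_data = to_sagemaker_csv([[0.5, 1.2, 3.0], [0.1, 0.4, 2.2]], [1, 0])
print(csv_data)
```

In a real workflow, the resulting CSV would be uploaded to an S3 bucket that the training job reads from.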
Using Custom Code, Models, and Containers in SageMaker Hi, and welcome to this module where we'll talk about how we can build, train, and deploy custom code, custom models, and custom containers on SageMaker. SageMaker provides support for deep-learning frameworks such as Apache MXNet and TensorFlow. You can build your machine-learning models and neural networks using these frameworks, and train and deploy them on SageMaker. SageMaker also allows you to bring your own model. Let's say you have a custom model that you have trained elsewhere. You can bring your model parameters to SageMaker, and inject them into the first-party containers that SageMaker provides. The built-in algorithms that SageMaker provides can be injected with your own model parameters. Or if you've gone a step further and packaged your machine-learning custom code, along with all its dependencies, as a Docker container, you can build this container on SageMaker, register it with the Amazon Elastic Container Registry (ECR) that AWS offers, and train and run this model on the cloud. In this module, we'll also integrate with other Amazon services: we'll connect to a Redshift cluster, and use it to retrieve data for training.
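A custom container works because SageMaker invokes it with a standard filesystem contract: hyperparameters arrive as a JSON file of strings, each data channel is mounted as a directory under /opt/ml/input/data, and whatever the container writes to /opt/ml/model is uploaded to S3 as the trained model artifact. The sketch below illustrates that contract with the actual model-fitting step elided; the base directory is parameterized (rather than hard-coded to /opt/ml) purely so the contract can be demonstrated locally.

```python
import json
import pathlib
import tempfile


def train(base="/opt/ml"):
    """Sketch of a custom-container training entry point following the
    SageMaker directory contract: read hyperparameters and channel data
    from base/input, write model artifacts to base/model."""
    base = pathlib.Path(base)

    # Hyperparameters are passed as a JSON object of string values.
    hp_file = base / "input" / "config" / "hyperparameters.json"
    hyperparams = json.loads(hp_file.read_text()) if hp_file.exists() else {}

    # Each named data channel is a directory of input files.
    train_dir = base / "input" / "data" / "train"
    files = sorted(p.name for p in train_dir.glob("*")) if train_dir.exists() else []

    # ... fit your model on the channel data here (elided) ...

    # Everything written under base/model becomes the model artifact.
    model_dir = base / "model"
    model_dir.mkdir(parents=True, exist_ok=True)
    artifact = model_dir / "model.json"
    artifact.write_text(json.dumps(
        {"hyperparameters": hyperparams, "trained_on": files}))
    return artifact


# Local demo of the contract, using a temp directory in place of /opt/ml.
demo = pathlib.Path(tempfile.mkdtemp())
(demo / "input" / "config").mkdir(parents=True)
(demo / "input" / "config" / "hyperparameters.json").write_text('{"epochs": "10"}')
(demo / "input" / "data" / "train").mkdir(parents=True)
(demo / "input" / "data" / "train" / "part-0.csv").write_text("1,2,3\n")
artifact = train(demo)
print(artifact.read_text())
```

Packaged in a Docker image, pushed to ECR, and referenced from a SageMaker training job, an entry point like this is what lets SageMaker train arbitrary custom code at scale.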
Implementing Distributed Training and Autoscaling on SageMaker Hi, and welcome to this module where we'll learn how we can implement Distributed Training and Autoscaling on SageMaker. We'll train a custom model for which code has been written using the TensorFlow framework. We saw Apache MXNet in an earlier module. In this module, we'll see custom code in TensorFlow. We'll distribute the training of our machine-learning model across multiple ML-compute instances. Once this model has been deployed and hosted on SageMaker, we'll see how you can configure this particular model variant so that it is auto-scaled. SageMaker will increase the number of deployed instances of the model if the traffic to the model increases.
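Under the hood, variant autoscaling is configured through the Application Auto Scaling service: you register the variant's desired instance count as a scalable target, then attach a target-tracking policy on the invocations-per-instance metric. The sketch below builds the two request payloads as plain dictionaries; the endpoint and variant names are hypothetical, and in practice you would pass these payloads to boto3's application-autoscaling client via register_scalable_target and put_scaling_policy.

```python
def variant_autoscaling_config(endpoint_name, variant_name,
                               min_instances=1, max_instances=4,
                               invocations_per_instance=70.0):
    """Build the two Application Auto Scaling payloads that autoscale a
    SageMaker endpoint variant on traffic: a scalable target bounding
    the instance count, and a target-tracking policy that scales so
    each instance serves roughly the target invocations per minute."""
    resource_id = f"endpoint/{endpoint_name}/variant/{variant_name}"
    target = {
        "ServiceNamespace": "sagemaker",
        "ResourceId": resource_id,
        "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
        "MinCapacity": min_instances,
        "MaxCapacity": max_instances,
    }
    policy = {
        "PolicyName": f"{variant_name}-invocations-tracking",
        "ServiceNamespace": "sagemaker",
        "ResourceId": resource_id,
        "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            "TargetValue": invocations_per_instance,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance",
            },
        },
    }
    return target, policy


target, policy = variant_autoscaling_config("my-endpoint", "variant-1")
```

With a policy like this attached, SageMaker adds instances when traffic pushes the per-instance invocation rate above the target, and removes them (down to MinCapacity) when traffic subsides.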