Deploying TensorFlow Models to AWS, Azure, and the GCP

This course will help the data scientist or engineer with a great ML model, built in TensorFlow, deploy that model to production locally or on any of the three major cloud platforms: Azure, AWS, or the GCP.
Course info
Level
Intermediate
Updated
Apr 30, 2018
Duration
2h 11m
Table of contents
Course Overview
Using TensorFlow Serving
Containerizing TensorFlow Models Using Docker on Microsoft Azure
Deploying TensorFlow Models on Amazon AWS
Deploying TensorFlow Models on the Google Cloud Platform
Description

Deploying and hosting your trained TensorFlow model locally or on your cloud platform of choice - Azure, AWS, or the GCP - can be challenging. In this course, Deploying TensorFlow Models to AWS, Azure, and the GCP, you will learn how to take your model to production on the platform of your choice. The course starts off by focusing on how you can save the parameters of a trained model using the SavedModel interface, a universal serialization format for TensorFlow models. You will then learn how to scale the locally hosted model by packaging all of its dependencies in a Docker container. Next, you will be introduced to AWS SageMaker, the fully managed ML service offered by Amazon. Finally, you will deploy your model on the Google Cloud Platform using the Cloud ML Engine. At the end of the course, you will be familiar with how a production-ready TensorFlow model is set up, as well as how to build, train, and deploy your models end to end on your local machine and on the three major cloud platforms. Software required: TensorFlow, Python.
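
To give a flavor of the first step the course describes, here is a minimal sketch of exporting a trained model through the SavedModel interface, using the TensorFlow 1.x API that was current when this course was published. The tiny graph, the tensor names, and the export path are illustrative placeholders, not the course's own code.

import tensorflow as tf

# Illustrative placeholder: a trivial graph standing in for a trained model.
x = tf.placeholder(tf.float32, shape=[None, 4], name="inputs")
weights = tf.Variable(tf.ones([4, 1]), name="weights")
predictions = tf.matmul(x, weights, name="predictions")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # simple_save writes the graph, the variable values, and a serving
    # signature into an export directory that TensorFlow Serving can load.
    tf.saved_model.simple_save(
        sess,
        export_dir="export/my_model/1",  # hypothetical path; Serving expects a numeric version subdirectory
        inputs={"inputs": x},
        outputs={"predictions": predictions},
    )

The versioned subdirectory ("1" above) is what lets the model server pick up newer exports of the same model without being restarted.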

About the author

A problem solver at heart, Janani has a Master's degree from Stanford and worked for 7+ years at Google. She was one of the original engineers on Google Docs and holds four patents for its real-time collaborative editing framework.

More from the author
Building Features from Image Data
Advanced
2h 10m
Aug 13, 2019
Designing a Machine Learning Model
Intermediate
3h 25m
Aug 13, 2019
Section Introduction Transcripts

Course Overview
Hi, my name is Janani Ravi, and welcome to this course on deploying TensorFlow models to AWS, Azure, and the GCP. A little about myself: I have a master's in electrical engineering from Stanford, and have worked at companies such as Microsoft, Google, and Flipkart. At Google, I was one of the first engineers working on real-time collaborative editing in Google Docs, and I hold four patents for its underlying technologies. I currently work on my own startup, Loonycorn, a studio for high-quality video content. This course will help you deploy and host your trained TensorFlow model locally, or on the cloud platform of your choice: Azure, AWS, or the GCP. The course starts off by focusing on how you can save the parameters of a trained model using the SavedModel interface, a universal interface for all TensorFlow models. Saved models can then be deployed in an on-premise data center using the TensorFlow Model Server, which deploys and hosts the model locally. You'll then learn how to scale the locally hosted model by packaging all dependencies in a Docker container. But you might want to work with TensorFlow on AWS. You'll then get introduced to the AWS SageMaker service, the fully managed ML service offered by Amazon. SageMaker makes it very simple to run distributed training on the cloud and deploy your model on multiple instances. Or you might want to work with the GCP. You'll then study how you can deploy your model on the Google Cloud Platform using the Cloud ML Engine. Cloud MLE abstracts away the process of distributed training and deployment behind the very simple gcloud command-line tools. At the end of this course, you will have learned how a production-ready TensorFlow model is set up, and you'll be able to build, train, and deploy your models end to end, on your local machine and on the three major cloud platforms.
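
To make the serving step concrete, here is a minimal sketch of querying a model hosted by TensorFlow Serving over its REST API, which works the same whether the model server runs directly on your machine or inside a Docker container. The model name, port, and input values are assumptions for illustration, not taken from the course.

import json

import requests  # third-party HTTP client, installed separately

# Assumes tensorflow_model_server is running locally with its REST API on
# port 8501, serving a model named "my_model" exported as shown earlier,
# e.g. started with:
#   tensorflow_model_server --rest_api_port=8501 \
#       --model_name=my_model --model_base_path=/path/to/export/my_model
payload = {"instances": [[1.0, 2.0, 3.0, 4.0]]}  # one 4-feature example

response = requests.post(
    "http://localhost:8501/v1/models/my_model:predict",
    data=json.dumps(payload),
)
response.raise_for_status()
print(response.json()["predictions"])

The same request shape carries over to the cloud platforms: SageMaker and Cloud ML Engine both front the deployed model with an HTTPS prediction endpoint, differing mainly in authentication and URL.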