Deploying TensorFlow Models to AWS, Azure, and the GCP
This course will help data scientists and engineers who have a trained TensorFlow model take that model to production, either locally or on one of the three major cloud platforms: Azure, AWS, or GCP.
What you'll learn
Deploying and hosting your trained TensorFlow model, whether locally or on your cloud platform of choice (Azure, AWS, or GCP), can be challenging. In this course, Deploying TensorFlow Models to AWS, Azure, and the GCP, you will learn how to take your model to production on the platform of your choice. The course starts by showing how to save the parameters of a trained model using the SavedModel format, the universal serialization format for TensorFlow models. You will then learn how to scale the locally hosted model by packaging all of its dependencies into a Docker container. Next, you will be introduced to AWS SageMaker, Amazon's fully managed machine learning service. Finally, you will deploy your model on the Google Cloud Platform using the Cloud ML Engine. By the end of the course, you will understand how a production-ready TensorFlow model is set up, and how to build, train, and deploy your models end to end on your local machine and on the three major cloud platforms. Software required: TensorFlow, Python.
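The first step the course covers, exporting a trained model as a SavedModel so that the TensorFlow Model Server (or a managed service such as SageMaker or Cloud ML Engine) can host it, looks roughly like the sketch below. This is a minimal illustration, not the course's exact code: the Keras architecture, input shape, and export path are placeholders, and the course's churn-prediction model will differ.

```python
import tensorflow as tf

# Minimal sketch (assumed example, not the course's code): define and train
# a small model, then export it in the SavedModel format that the TensorFlow
# Model Server, SageMaker, and Cloud ML Engine all consume.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(20,)),  # placeholder feature width
    tf.keras.layers.Dense(1, activation="sigmoid"),                   # binary churn prediction
])
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(train_features, train_labels, epochs=10)  # train on your own data

# TensorFlow Model Server expects a versioned directory layout:
#   <model_base_path>/<version>/saved_model.pb + variables/
export_dir = "serving/churn_model/1"  # hypothetical path
tf.saved_model.save(model, export_dir)

# The exported directory can then be hosted locally, for example:
#   tensorflow_model_server --rest_api_port=8501 \
#     --model_name=churn_model --model_base_path="$(pwd)/serving/churn_model"
```

The same SavedModel directory is what gets uploaded to an S3 bucket for SageMaker or a GCS bucket for Cloud ML Engine in the later modules.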
Table of contents
- Module Overview 1m
- Prerequisites and Course Overview 3m
- The Machine Learning Workflow: Local Serving 3m
- Demo: Exploring the Churn Prediction Dataset 4m
- Demo: Training and the Experiment Function 3m
- The Saved Model 2m
- The TensorFlow Model Server 2m
- gRPC and Protocol Buffers 2m
- Demo: Setting up the Azure VM 3m
- Demo: Installing TensorFlow, gRPC, Serving APIs and the Model Server 3m
- Demo: Deploying and Hosting the MNIST Classification Model 3m
- Demo: Setting up the Churn Model 3m
- Demo: Training and Saving the Model 4m
- Demo: Making Predictions from a Saved Model 6m
- Module Overview 1m
- Azure ML IaaS and PaaS Options 6m
- Containers and VMs 3m
- Demo: Docker CE Install 2m
- Demo: Building the Docker Image 4m
- Demo: Running a Docker Container for Predictions 2m
- Demo: Registering the Image with Docker Hub 3m
- Demo: Running Docker Using the Docker Hub Image 2m
- Demo: Making Predictions from a Saved Model Using a Docker Container 3m
- Module Overview 1m
- The Machine Learning Workflow: SageMaker 4m
- Training the Model 2m
- Deploying the Model 3m
- Training and Inference Code Interface 2m
- Demo: Setting up an S3 Bucket 2m
- Demo: Setting up a Notebook Instance 2m
- Demo: Data Preparation 2m
- Demo: Setting up the TensorFlow Model 3m
- Demo: Training and Deploying the Model 3m
- Demo: Models and Endpoints 2m
- Module Overview 2m
- Cloud ML Engine vs. SageMaker 4m
- The Machine Learning Workflow: Cloud ML Engine 4m
- Training the Model 4m
- Deploying the Model 2m
- Demo: Connecting to Datalab 3m
- Demo: Creating a GCS Bucket 1m
- Demo: Data Preparation 2m
- Demo: Setting up Bucket Permissions 3m
- Demo: Python Package Contents 4m
- Demo: Local Training and Prediction 2m
- Demo: Distributed Training and Deployment 3m
- Demo: Making Predictions Using Cloud ML Endpoints 1m
- Summary and Further Study 2m