Deploying Containerized Workloads Using Google Cloud Kubernetes Engine

This course covers Google Kubernetes Engine (GKE), a robust and seamless way to run containerized workloads on GCP. Cluster creation, the use of volume storage abstractions, and ingress and service objects are all covered in this course.
Course info
Rating
(21)
Level
Beginner
Updated
Jan 11, 2019
Duration
2h 51m
Table of contents
Course Overview
Introducing Google Kubernetes Engine (GKE)
Creating and Administering GKE Clusters
Deploying Containerized Workloads to GKE Clusters
Monitoring GKE Clusters Using Stackdriver
Description

Running Kubernetes clusters on the cloud involves working with a variety of technologies, including Docker, Kubernetes, and Google Compute Engine (GCE) virtual machine instances. This can sometimes get quite involved. In this course, Deploying Containerized Workloads Using Google Cloud Kubernetes Engine, you will learn how to deploy and configure clusters of VM instances running your Docker containers on the Google Cloud Platform using Google Kubernetes Engine. First, you will learn where GKE fits relative to other GCP compute options such as GCE VMs, App Engine, and Cloud Functions. You will understand fundamental building blocks in Kubernetes, such as pods, nodes, and node pools, and how these relate to the fundamental building block of Docker, namely containers. Pods, ReplicaSets, and Deployments are core Kubernetes concepts, and you will understand each of these in detail. Next, you will discover how to create, manage, and scale clusters using the Horizontal Pod Autoscaler (HPA). You will also learn about StatefulSets and DaemonSets on GKE. Finally, you will explore how to share state using volume abstractions, and field user requests using service and ingress objects. You will see how custom Docker images are built and placed in the Google Container Registry, and learn a new and advanced feature, binary authorization. When you're finished with this course, you will have the skills and knowledge of Google Kubernetes Engine needed to construct scalable clusters running Docker containers on GCP.
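
To make the scaling workflow mentioned above concrete, here is a minimal sketch of the Horizontal Pod Autoscaler in action; the deployment name my-app and the thresholds are illustrative assumptions, not values from the course.

    # Autoscale the (hypothetical) my-app Deployment between 2 and 10
    # replicas, targeting 60% average CPU utilization across its pods.
    kubectl autoscale deployment my-app --cpu-percent=60 --min=2 --max=10

    # Inspect the resulting HorizontalPodAutoscaler object.
    kubectl get hpa my-app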

About the author

A problem solver at heart, Janani has a master's degree from Stanford and worked for 7+ years at Google. She was one of the original engineers on Google Docs and holds 4 patents for its real-time collaborative editing framework.

Section Introduction Transcripts

Course Overview
Hi. My name is Janani Ravi, and welcome to this course on Deploying Containerized Workloads Using Google Cloud Kubernetes Engine. A little about myself. I have a master's in electrical engineering from Stanford, and have worked with companies such as Microsoft, Google, and Flipkart. At Google, I was one of the first engineers working on real-time collaborative editing in Google Docs, and I hold four patents for its underlying technologies. I currently work on my own startup, Loonycorn, a studio for high-quality video content. In this course, you will gain the ability to deploy and configure clusters of VM instances running your Docker containers on the Google Cloud Platform using Google Kubernetes Engine. First, you'll learn where GKE fits relative to other GCP compute options, such as GCE VMs, App Engine, and Cloud Functions. You will understand the fundamental building blocks in Kubernetes, such as pods, nodes, and node pools, and how these relate to the fundamental building block of Docker, namely containers. Kubernetes has several powerful abstractions, such as ReplicaSets, which add horizontal scaling to pods, and Deployments, which provide rollout and rollback functionality to ReplicaSets. These are core Kubernetes concepts, and you'll understand each of these in detail. Next, you will discover how to create, manage, and scale clusters. This involves the use of the Horizontal Pod Autoscaler, or HPA, which is a different scaling mechanism than that used with GCP VM instance groups. Finally, you will explore how to share state using volume abstractions and field user requests using service and ingress objects. You'll see how custom Docker images are built and placed in the Google Container Registry, providing seamless integration between GCP and Kubernetes. You will also learn a new and advanced feature, binary authorization, which can be used to ensure that only signed, verified container images are deployed to your cluster. When you are finished with this course, you will have the skills and knowledge of Google Kubernetes Engine needed to construct scalable clusters running Docker containers on GCP.
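
To make the relationship between pods, ReplicaSets, and Deployments concrete, here is a minimal Deployment manifest; the name hello-web is an illustrative assumption, and the image is Google's public hello-app sample, not an artifact from the course.

    # deployment.yaml -- a Deployment that manages a ReplicaSet, which in
    # turn keeps three identical pods running.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: hello-web
      template:
        metadata:
          labels:
            app: hello-web
        spec:
          containers:
          - name: hello-web
            image: gcr.io/google-samples/hello-app:1.0  # public sample image
            ports:
            - containerPort: 8080

Apply it with kubectl apply -f deployment.yaml, and Kubernetes creates the ReplicaSet and its pods on your behalf.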

Introducing Google Kubernetes Engine (GKE)
Hi, and welcome to this course on Deploying Containerized Workloads Using Google Cloud Kubernetes Engine. Containers, specifically Docker containers, are becoming very popular as a way to package up your application code and all of its dependencies into a single unit and deploy that unit either in an on-premises data center or on the cloud. Containers are an ideal deployment mechanism for the hybrid, multi-cloud world that we are moving towards. Organizations with large on-premises data centers are contemplating a move to the cloud, but they don't want to be tied to one cloud provider. This is where containers are so useful. Once you have many containers running, typically thousands, you'll need some way to manage and orchestrate them, and this is where Kubernetes is fast gaining popularity. Kubernetes, or K8s, as it's popularly called, is a container orchestration technology that adds a layer of abstraction, making it easy for you to deploy and manage thousands of containers. Other container orchestration technologies exist, but Kubernetes is fast becoming the industry standard. Kubernetes was originally developed at Google, which is why it has a very special relationship with the GCP. If you're working on the GCP, you'll find that GKE, the Google Kubernetes Engine, offers you powerful features and flexibility in how you deploy and manage your applications on the cloud.

Creating and Administering GKE Clusters
Hi, and welcome to this module, where we'll see how we can create and administer Kubernetes clusters on the Google Cloud Platform. GKE clusters run Kubernetes on Google Compute Engine virtual machines; the nodes of your clusters are Compute Engine instances. When you start working with the hands-on demos in this module, you'll find that creating and provisioning a Kubernetes cluster on the GCP is very straightforward. You can use the web console, the gcloud command-line utility, or kubectl. The first two options are, of course, Google-specific. The third option, kube control, or kubectl, is the command-line utility that you might use on any Kubernetes cluster, even on your on-premises machines. Along with other nitty-gritty details of cluster configuration, you'll also see how you can resize, expand, and autoscale your cluster nodes. We'll also see how you can configure your nodes so that they are auto-upgrading and auto-repairing, both really useful features when you're working in a production environment that requires high availability.
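
As a sketch of what that provisioning looks like from the command line, the following gcloud commands create an autoscaling, auto-upgrading, auto-repairing cluster; the cluster name, zone, and node counts are illustrative assumptions.

    # Create a three-node GKE cluster with node autoscaling,
    # auto-upgrade, and auto-repair enabled.
    gcloud container clusters create demo-cluster \
        --zone us-central1-a \
        --num-nodes 3 \
        --enable-autoscaling --min-nodes 1 --max-nodes 5 \
        --enable-autoupgrade \
        --enable-autorepair

    # Fetch credentials so that kubectl talks to the new cluster.
    gcloud container clusters get-credentials demo-cluster --zone us-central1-a

    # Resize the default node pool to five nodes.
    gcloud container clusters resize demo-cluster --zone us-central1-a --num-nodes 5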

Deploying Containerized Workloads to GKE Clusters
Hi, and welcome to this module on Deploying Containerized Workloads to GKE Clusters. We've already seen how a simple deployment works, and we've also exposed that deployment as a service. In this module, we'll see how we can set up a custom image for deployment; our custom application will be deployed to containers. We'll also set up services and ingress objects on the GKE. We'll also see an example of setting up a container cluster that has multiple services running; these services use volume abstractions for shared state. We'll see how we can use persistent volume claims to access persistent volumes. And finally, we'll round this module off by working with a brand-new feature that is available on the GKE: the ability to deploy attested containers to our GKE cluster using binary authorization. This is a great new feature that ensures that only trusted containers can be deployed to your production environments.
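
As a sketch of the building blocks this module works with, here is how a custom image might be pushed to the Google Container Registry, along with minimal persistent volume claim and service manifests; PROJECT_ID, the object names, and the disk size are illustrative assumptions.

    # Build a custom Docker image and push it to the Google Container
    # Registry (PROJECT_ID is a placeholder for your GCP project ID).
    docker build -t gcr.io/PROJECT_ID/my-app:v1 .
    docker push gcr.io/PROJECT_ID/my-app:v1

    # pvc.yaml -- a persistent volume claim; GKE dynamically provisions
    # a persistent disk to satisfy it.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi

    # service.yaml -- a LoadBalancer service that fields user requests
    # and forwards them to pods labeled app: hello-web.
    apiVersion: v1
    kind: Service
    metadata:
      name: hello-web
    spec:
      type: LoadBalancer
      selector:
        app: hello-web
      ports:
      - port: 80
        targetPort: 8080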

Monitoring GKE Clusters Using Stackdriver
Hi, and welcome to this module where we'll see how we can use the Stackdriver suite of tools to monitor GKE clusters on the Google Cloud. Stackdriver offers a suite of tools on the GCP for monitoring, logging, error reporting, handling traces, debugging, you name it. Stackdriver works with all of GCP services, it can also be configured to monitor resources on AWS. Stackdriver, in fact, offers a special monitoring service that integrates closely with Kubernetes called Stackdriver Kubernetes Monitoring. Kubernetes is also integrated with the open source Prometheus monitoring and support tool. If you're used to working with Prometheus, the GCP has made things very simple for you. You can integrate your Prometheus metrics and view them in Stackdriver.