Autoscaling in Kubernetes
One of the most powerful features of orchestration tools such as Kubernetes is the ability to automatically scale resource allocation in response to real-time changes in resource usage. In the context of continuous deployment, this provides a great deal of stability with less need for human intervention. In this lesson, you will learn the basics of autoscaling in Kubernetes by creating a simple Horizontal Pod Autoscaler that creates and destroys pod replicas in response to CPU utilization.

Challenge
Install the Kubernetes Metrics API in the cluster.
To accomplish this, do the following:
- Clone the Kubernetes metrics-server repo:
git clone https://github.com/kubernetes-incubator/metrics-server.git
- Check out a known-good commit and apply the standard configurations to install the Metrics API:
cd metrics-server/
git checkout ed0663b3b4ddbfab5afea166dfd68c677930d22e
kubectl create -f deploy/1.8+/
- Wait a few seconds for the metrics server pods to start. You can check their status with:
kubectl get pods -n kube-system
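Once the pods are running, you can confirm that the Metrics API is actually serving data. These commands assume a working kubectl context pointed at the lab cluster:

```shell
# Verify the metrics-server deployment is available
kubectl get deployment metrics-server -n kube-system

# After a minute or so, resource metrics should be queryable
kubectl top nodes
kubectl top pods --all-namespaces
```

If `kubectl top` returns usage figures rather than an error, the Horizontal Pod Autoscaler will be able to read CPU metrics.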
Challenge
Configure a Horizontal Pod Autoscaler to autoscale the train schedule app.
Check the example-solution branch of the source code repo for an example of the code changes needed in the train-schedule-kube.yml file: https://github.com/linuxacademy/cicd-pipeline-train-schedule-autoscaling/blob/example-solution/train-schedule-kube.yml. To complete this task, you will need to do the following:
- Create a fork of the source code at https://github.com/linuxacademy/cicd-pipeline-train-schedule-autoscaling.
- Add a CPU resource request in train-schedule-kube.yml for the pods created by the train-schedule deployment. Check the example solution if you need to know where to add this.
resources:
  requests:
    cpu: 200m
- Define a HorizontalPodAutoscaler in train-schedule-kube.yml to autoscale in response to CPU load.
---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: train-schedule
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: train-schedule-deployment
  minReplicas: 1
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50
- Generate some load on the app to see the autoscaler in action!
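To see why generating load matters, note that the HPA decides how many replicas to run by comparing the observed average CPU utilization against the target (50% here). A minimal Python sketch of that scaling rule, clamped to the min/max replica counts from the manifest above (the function name is illustrative, not part of Kubernetes):

```python
import math

def desired_replicas(current_replicas, current_utilization,
                     target_utilization, min_replicas=1, max_replicas=4):
    """Approximate the HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric),
    clamped to the [minReplicas, maxReplicas] range."""
    desired = math.ceil(current_replicas * (current_utilization / target_utilization))
    return max(min_replicas, min(max_replicas, desired))

# With a 50% target: one replica running at 120% average CPU scales to 3
print(desired_replicas(1, 120, 50))  # → 3
```

When the load you generate pushes average utilization above 50%, the controller scales out toward maxReplicas; when load subsides, it scales back in toward minReplicas.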