Podcasts

010 - Why 2020 is the year of Kubernetes with Kelsey Hightower

January 07, 2020

Kelsey Hightower, Developer Advocate at Google, chats with Jeremy about his journey as a developer, what makes Kubernetes so powerful and the projects he loves working on.


If you enjoy this episode, please consider leaving a review on Apple Podcasts or wherever you listen.

Please send any questions or comments to podcast@pluralsight.com.

Transcript

Jeremy M.:
Hello and welcome to All Hands on Tech, where today's leaders talk tomorrow's technology. I'm Jeremy Morgan. Kubernetes has turned traditional development on its head with the rise of containers and microservices. We're developing software in completely different ways than we did five years ago. Now when I think about the thought leaders in the Kubernetes space, one of the first people to come to mind is Kelsey Hightower. He's a principal developer advocate at Google, an author, a keynote speaker at KubeCon and a tech guy to the core. We talk about the state of Kubernetes and where it's headed, as well as his roots, from running his own computer shop to becoming a world-famous Kubernetes expert. So let's welcome Kelsey Hightower. And how are you doing today, Kelsey?

Kelsey H.:
Awesome. It's gray skies, but I'm happy to be here.

Jeremy M.:
Yes, very, very gray skies out here today. So tell us a little bit about yourself and what you do.

Kelsey H.:
I work in developer relations at Google, which is this interesting role where we work with people out in the community, and with customers, on a whole range of things, from open source Kubernetes to things on Google Cloud Platform. So I'm part of that feedback loop that helps product engineering build products that hopefully you not only like, but are willing to pay for.

Jeremy M.:
Awesome. So what was the journey like to get where you are? Did you start out as kind of a tech person, or did you start out somewhere else and kind of evolve into it? How did you get to where you are now?

Kelsey H.:
Yeah, so out of high school I opened a computer store near where I grew up in Atlanta, Georgia. And this was during that time when people used to build their own machines, right? Like pick your modem, pick your graphics card.

Kelsey H.:
And in addition to that little computer store, I also had a couple of customers for whom we would do, I guess, on-demand tech support, right? They'd order a new printer, or they'd order a few machines from Dell and need to get them set up.

Kelsey H.:
And I did that for a number of years before landing my first job working in a data center at Google. And I guess over time, from becoming a system administrator, I've had every job you can think of. I've worked tech support where you answer the phone. I've been an engineering manager, I've managed dev teams. And along the way there's been a lot of open source work, whether working at CoreOS or Puppet Labs, and now here at Google.

Jeremy M.:
That's awesome. Pretty cool. So for somebody who's not really familiar with Kubernetes, what is it at a high level?

Kelsey H.:
It's a really long name to describe what we'd consider today a container management platform, right? So at the very highest level, if you come across the name Kubernetes ... it's Greek for helmsman, someone who steers the ship.

Kelsey H.:
And the idea here is that when you have lots of applications, you can decide to package them into containers. So this is kind of an up-level from the world of Docker. Ideally, as an application developer, you can now package your app in this universal format we call containers, or container images. And once you pair that up with, like, thousands of machines, you need some way to orchestrate that. So how do you decide what applications run on which machines? At a very high level, that's the problem that Kubernetes is trying to solve.

Jeremy M.:
Why would an organization want to use Kubernetes?

Kelsey H.:
That is the million dollar question that gets asked over and over again. So if you're an organization, it depends on your size. If you're a startup, you may decide that maybe you don't need Kubernetes on day one, and you can just use, like, a platform as a service. Think something like Heroku, where you just take your source code and let them run it for you. Maybe as your team grows a bit more, or maybe your application stack gets a little more complex, meaning lots of microservices, 10 to 20 different services, tens or maybe even hundreds of machines, then at that point you kind of want a common language or common tool to describe your deployment. So the ability to say, I need three copies of this application in this particular part of the world, backed by a load balancer. Kubernetes gives you that ability to articulate what your deployment needs and then keep that running.
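
To make that concrete, here is a rough sketch of how that sentence translates into Kubernetes manifests: a Deployment asking for three replicas, and a Service of type LoadBalancer in front of them. The hello-app name, labels and image are hypothetical placeholders, not anything from the conversation:

```yaml
# Sketch only: "I need three copies of this application ... backed by a load balancer."
# The hello-app name and image are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3                  # three copies of the application
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          image: example.com/hello-app:1.0   # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-app
spec:
  type: LoadBalancer           # fronted by a cloud load balancer
  selector:
    app: hello-app
  ports:
    - port: 80
      targetPort: 8080
```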

Kelsey H.:
Now as you get a little bit bigger, let's say you're at a typical enterprise with thousands of employees, thousands of applications, and you've been around for a while. When you really peel back the covers, Kubernetes looks like all of those things you've been trying to build over time. If you have more than a handful of machines, then you've already dealt with the ideas of auto scaling, load balancing, failover. All of these are the things you would build if you had enough time and experience. Kubernetes comes with a lot of that stuff baked in, plus big contributions from a community of people with the same problems as you.
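
The auto scaling he mentions is one of those baked-in pieces. As a sketch, assuming the hypothetical hello-app Deployment above, a HorizontalPodAutoscaler scaling on CPU might look like this (the exact API version depends on your cluster, and the numbers are purely illustrative):

```yaml
# Illustrative sketch of built-in autoscaling; all numbers are made up.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-app            # the hypothetical Deployment above
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas past 70% average CPU
```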

Jeremy M.:
What are some of the big obstacles for an organization in the beginning when they're first adopting Kubernetes?

Kelsey H.:
I think all new technology has a learning curve. If you were to start with Linux for the very first time, your biggest learning curve would probably be things like how to even get it installed, which Linux distro to choose. And then once you've got it installed, how do you lock it down and secure it?

Kelsey H.:
How do you patch it, how do you update it? And then once you figure all of that out, how do you deploy your applications to it? If you just take that from the standard operating system world, those same problems apply to Kubernetes. And I think where it gets a little more complex is that Kubernetes is much newer than a single operating system running on a single machine. Like, people know how to do that. There's lots of talent out there that can do that. But once you fast forward to distributed systems, now you have a whole new set of concepts to master and understand.

Kelsey H.:
And it may take you a little while longer, but to me it resembles the complexity that you find in any system. I can think back to my first days trying to edit a file with vi. I couldn't exit for a long time, so I just shut down the computer. I think most systems have an initial learning curve, and Kubernetes is no different.

Jeremy M.:
Is there any time an organization wouldn't want to use Kubernetes?

Kelsey H.:
Yeah, I think a lot of people run to Kubernetes to solve problems that maybe Kubernetes is a good fit for, right? Like all the things we mentioned earlier: this idea of having this tool that can manage multiple machines, keep things running and do basic failover. But some people think they can ignore the basics, like the way you write your applications. They need to have things like health checks, and metrics would be great.

Kelsey H.:
They need the ability to go down at any moment and then recover their own state. Kubernetes just can't do that for you automatically. It can meet you halfway and do some of the infrastructure basics, but when it comes to application design, you can't just rub Kubernetes on your applications and make their problems go away. It doesn't work that way.
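
The health checks Kelsey keeps coming back to show up in Kubernetes as liveness and readiness probes on the container spec. A minimal sketch, assuming the app itself exposes HTTP health endpoints (the paths, port and image are placeholders):

```yaml
# Sketch: health checks Kubernetes can act on, assuming the app
# exposes /healthz and /readyz endpoints (placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: hello-app
spec:
  containers:
    - name: hello-app
      image: example.com/hello-app:1.0   # placeholder image
      livenessProbe:                     # restart the container if this fails
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:                    # withhold traffic until this passes
        httpGet:
          path: /readyz
          port: 8080
        periodSeconds: 5
```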

Kelsey H.:
There's another area where you have to be a bit careful, and that's if you don't have a lot of experience with databases, and most people don't. Most people create a database on a static set of servers and never touch it until there's a problem. If you try to move that into Kubernetes, you may not have the operational expertise to manage a database in such a dynamic environment.

Jeremy M.:
And so are there some other challenges with the tenancy model of it? Because I know that when you're developing applications, some applications can run with a soft tenancy model, where an app can just run in a container with other ones next to it, no big deal. But some of them do need to be isolated and need a little more OS-level architecture. Is that a big challenge in Kubernetes right now?

Kelsey H.:
Well, I would say Kubernetes lets you make the decision that you think is best for you. So if your applications really need to be one application per virtual machine or per physical machine, then Kubernetes will definitely let you do that. You can definitely say anti-affinity, or you can say only one app per machine, if that's really what you want to do.

Kelsey H.:
The drawback there is, of course, you're going to lose some benefits around density. It's a little easier in the cloud because you can create virtual machines a little closer to the memory and CPU requirements of your workloads. So you don't have to use Kubernetes in what most people think of as its default configuration, which is this idea of packing as many containers as possible in order to leverage as many resources as possible. And that bin packing does have the trade-off of maybe less security if you're just running with something like Docker or a container runtime. So it's kind of a trade-off, but whichever choice you make, Kubernetes can help you enforce it.
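
As a sketch of that one-app-per-machine choice, a pod anti-affinity rule can keep two replicas of the same app from sharing a node (names and labels are placeholders):

```yaml
# Sketch: anti-affinity that forbids two hello-app replicas on one node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: hello-app
              topologyKey: kubernetes.io/hostname   # one replica per node
      containers:
        - name: hello-app
          image: example.com/hello-app:1.0          # placeholder image
```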

Jeremy M.:
It seems like there are two camps out there right now as far as people opining on it. Some say that VMs are going to replace containers completely, and then of course there are others saying no, containers are going to replace VMs. And then there's the middle ground, which probably makes more sense, that they're both going to exist in this world, and since Kubernetes can manage both of them, it's well positioned for that. But which camp would you say that you're in, as far as the future goes?

Kelsey H.:
Yeah, I mean that argument is like saying nails will replace hammers. It's like ... they're complementary technologies. So if you have a bare metal machine, that's a physical machine, and you have a kernel on there, one way to partition up that machine, or to leverage all of its capabilities, is to use containers, just like you use processes today, right? I can put 25 different processes on a single machine, and they will all take the amount of CPU and RAM that they need. That's been a thing we've been able to do forever.

Kelsey H.:
So in this case, we're just going to leverage container images as a packaging technology. This is not a virtualization technology at all; this is just processes on steroids, right? So that's what containers are. Now, the decision to virtualize is independent of the decision to containerize. If you have a bare metal machine that you want to carve up into smaller resource chunks or different security boundaries, you pick virtualization for that. I mean, I guess at some point you could take the container packaging format and couple it with these lightweight virtual machines. You hear about things like Firecracker and so forth ... but again, that's virtualization just being used to run a process. These are just two separate things.

Jeremy M.:
How is Kubernetes different from something like Docker Swarm?

Kelsey H.:
Kubernetes and Docker Swarm aim for the same things in many ways. You take a group of machines and you expose an API that allows people to deploy containers to them. So at a very high level, they look very similar. When you start diving down into the API, that's where things start to differ pretty quickly. In the Docker Swarm world, you have this concept of a container, plus volumes and secrets, load balancers and service discovery. But in the Kubernetes world, you have the concept of a pod, and this is probably the biggest difference most people would see on the surface. So a pod is one or more containers deployed as a set. The way to think about this is, let's say you have an app written in the Go programming language. It's a little web server, it takes web requests, but let's say you don't want to add any authentication to it or anything fancy; you just want to take web requests.

Kelsey H.:
Well, with the pod, what you can do is have a dedicated instance of, let's say, Nginx, Apache, any web server you like, Envoy if you will, and you can deploy those things together as a logical unit. And then the pod becomes a place where you can actually attach a bunch of sidecars that do additional functionality. So think about logging. You may want to use a logging agent such as Splunk. So the combination of Apache, my application and Splunk, those three things equal one logical app. And I can manage those pods using something like a deployment and maybe stamp out five of those.
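
A sketch of the pod Kelsey describes, with the Go app, a web server in front, and a logging sidecar grouped as one logical unit. The images here are placeholders, and a real Splunk or other logging agent would need its own configuration:

```yaml
# Sketch: one pod, three containers deployed as a set.
# All images below are hypothetical placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: hello-app
spec:
  containers:
    - name: app                      # the Go web server
      image: example.com/hello-app:1.0
      ports:
        - containerPort: 8080
    - name: web                      # Nginx/Apache/Envoy front end
      image: nginx:1.17
      ports:
        - containerPort: 80
    - name: log-agent                # logging sidecar (e.g. a log forwarder)
      image: example.com/log-agent:1.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/app    # reads what the app writes here
  volumes:
    - name: logs
      emptyDir: {}                   # scratch volume shared within the pod
```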

Jeremy M.:
So what's new in the Kubernetes world right now? What do you see on the horizon that's coming that we should have an eye on?

Kelsey H.:
Well, I think if you're in the Windows community, you're probably on the lookout for more stability around Windows support. So early on, Kubernetes was designed mainly for Linux VMs running in the cloud, right? Just to kind of put it that way. And there's been a lot of work from Microsoft to add support for native container technology: dealing with Windows containers in terms of how you package your app in images that require the Windows userland or runtime, and making that work so it gets stable over time.

Kelsey H.:
And I would say the other big one is just better support for stateful applications. These are apps that need their own data stores, volumes, snapshots, data replication and backups. There's a lot of work going on in what we call the storage SIG. So CSI is the Container Storage Interface, and the goal is just to make that a little more robust so we can start to abstract away some of the complexity that you see around storage.

Jeremy M.:
With the Google Cloud Platform, are there any types of orchestration tools being built around Kubernetes to host Kubernetes? Like, say, with AWS, they're starting to have a lot of tools where they're bringing in Kubernetes clusters and managing them, things like that. There's a little bit of that going on in Azure. Is there anything going on like that with the Google Cloud Platform?

Kelsey H.:
Yeah. I would say for years now, almost three or four years, we've had this thing called Google Container Engine, or Google Kubernetes Engine, and the goal there is that for those looking to leverage Kubernetes but who may want some help curating and managing those clusters, what you find in GCP, like you find in some of the other cloud providers, is this idea of a dedicated cluster management API. We just call it GKE for short. And the idea there is you can stamp out as many clusters as you want across various zones and regions and then use things like load balancers and the other cloud provider integrations to tie them back together.

Jeremy M.:
So how many nodes can you reasonably fit into a single cluster?

Kelsey H.:
Man, that question comes up a bunch. And that one's tricky, because the scale factor in Kubernetes really has a lot to do with the API server. So without going into too much detail, just know that there's a central point of control called the API server, and the API server is responsible for taking in all of the cluster state. So run these 50,000 pods, connect to this number of nodes, and then the scheduler and the various components need to read from the API to make decisions. Let's say you have a three-node cluster. Then what's going to happen is you'll probably have something like 16 CPUs per node, and the scalability factor there is probably just going to be the amount of compute resources you have across those three nodes. So maybe you're going to get 20 or 30 pods maximum per node, because you're just going to run out of CPU resources.

Kelsey H.:
Now let's start to think about the limit we kind of publish, which is like 5,000 nodes. And the reason why we put it at 5,000 nodes is because once you start adding lots of these pods/containers across these nodes, each of the worker agents on each node needs to update the API server and say things like, hey, this pod is running, or hey, this pod isn't running anymore, you may have to reschedule it. When you multiply that chatter times 50 or 5,000, you start to put a lot of tax on the API server, so you're going to need more of them. And if you do that long enough, you're just going to run out of bandwidth to handle any more nodes or API calls. And that's when we start pushing people towards another cluster, to partition off and get more resources to manage more nodes.

Jeremy M.:
Okay, that makes sense. Yeah, I would never have thought about network bandwidth being a possible bottleneck, but the way you say it makes perfect sense.

Kelsey H.:
Yeah. The database can only take so much change so fast. There's a cache involved; there are lots of moving pieces just outside of the raw Kubernetes components themselves, which are quite scalable, and you still have the API server to contend with.

Jeremy M.:
As you mentioned, in the old days we'd have a server for everything. So you'd spin up a server, or maybe even a server full of VMs, for a website or an application at an organization. And then another group says, well, we need our own server for our own special things, so we'll spin that up. And another group says, well, we don't want your server. And so you have all these servers that are underutilized, spread all over the place. Kubernetes is one of the things that can help solve that problem, of course, all that wasted CPU and hard drive space from each one. But do you see that happening in the Kubernetes world with clusters, where each developer wants their own cluster and each team wants their own cluster? And is there a good way to deal with that, where you start to have a big sprawl of a whole bunch of clusters all over the place?

Kelsey H.:
Yeah. So this gets to the human problem. All of these tools aside, they're only here to assist what we're doing. So if you think about your question, why are people asking for servers? Why are people asking for clusters? They ask for these things because we expose them to them. We say, hey, these servers have this much CPU and memory. These servers can get access to these other things. This cluster has all this stuff you need in it. So once you start to talk in those terms, then people will ask for those things. What we really want is the ability to say, if I deploy your app, what things does it need to get to and what are its requirements? And the Kubernetes API, for the most part, does essentially that, right? So it asks things like, how much CPU and memory do you need? What's the maximum amount of CPU and memory you should have access to? And you can use other API components, such as pod security, network security and other policies, to control how you talk to other things.
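
Those two questions, what do you need and what's your maximum, map directly onto resource requests and limits in the pod spec. A sketch, with purely illustrative numbers and a placeholder image:

```yaml
# Sketch: "how much CPU and memory do you need" (requests)
# and "what's the maximum you should have access to" (limits).
apiVersion: v1
kind: Pod
metadata:
  name: hello-app
spec:
  containers:
    - name: hello-app
      image: example.com/hello-app:1.0   # placeholder image
      resources:
        requests:
          cpu: 250m          # scheduler reserves a quarter of a CPU
          memory: 256Mi
        limits:
          cpu: "1"           # hard ceiling of one CPU
          memory: 512Mi      # killed if it exceeds this
```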

Kelsey H.:
Ideally this should be enough for most people to deploy, whoever's managing their compute resources, right? So if you do it well, you don't even have to tell people they're using Kubernetes; you can just let the API kind of carve up these things. The problem, though, is when a team says, hey, we've set up this cluster for you in Germany, and this is the only place where you can actually do things or talk to the things in Germany. What happens is, now you've leaked that kind of detail. So it's cool that people can have an affinity to a particular zone or region. But once you start saying that it's this cluster, and the way you deploy things is via this command, and you start giving everyone kubectl, the Kubernetes command line tool, then you're going to end up in a world where, yeah, people may start asking for their own clusters so they don't make mistakes in someone else's cluster.

Kelsey H.:
So the goal, to help with that, really is more about CI/CD and the abstractions that you layer on top. Kubernetes can help with the whole getting-more-density thing, if multitenancy is something your application is fine with, but you're going to have to look at other tools if you want to abstract away this idea of give me a server, give me a cluster.

Jeremy M.:
Okay. Yeah. That brings me to another question. You know, there's a lot of growth in the Kubernetes community and ecosystem, but there's also this huge ecosystem of add-ons and applications surrounding it. Are there any cool ones that you can think of that we should know about?

Kelsey H.:
Oh man, there's lots of cool ones. Like, if you want a workflow engine, something that can say, do this step, then do this other step, then do this other step if the other one fails, there's one called Argo. And this comes from the general community, and what they've done is they've leveraged the Kubernetes declarative-style API to allow you to build a workflow engine right on top of your existing Kubernetes cluster.
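
For flavor, an Argo Workflow of the do-this-step-then-that-step kind is itself just another Kubernetes resource. A rough sketch, with made-up step names and a stock Alpine image standing in for real work:

```yaml
# Sketch of a sequential two-step Argo Workflow; steps are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: two-step-
spec:
  entrypoint: pipeline
  templates:
    - name: pipeline
      steps:
        - - name: step-one           # runs first
            template: say
            arguments:
              parameters:
                - name: message
                  value: "step one"
        - - name: step-two           # runs after step-one succeeds
            template: say
            arguments:
              parameters:
                - name: message
                  value: "step two"
    - name: say
      inputs:
        parameters:
          - name: message
      container:
        image: alpine:3.11
        command: [echo, "{{inputs.parameters.message}}"]
```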

Kelsey H.:
There are other ones that are a little simpler, like the external-dns add-on. And what that does is look for annotations in your deployments or services, take those annotations and automate the configuration of your DNS provider. So it takes an IP address and maps it to a domain name for you. And we have a bunch of these little utilities kind of running around the Kubernetes community, and the goal is, just like when we were sharing our Bash scripts and our configuration management modules, we expect a whole ecosystem of people sharing add-ons, small and large, with the rest of the community.
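
That annotation-driven pattern looks roughly like this: a Service carries a hostname annotation, and the external-dns add-on creates the matching record at your DNS provider. The domain below is a placeholder:

```yaml
# Sketch: external-dns watches for this annotation and publishes the
# Service's load balancer IP under the given (placeholder) hostname.
apiVersion: v1
kind: Service
metadata:
  name: hello-app
  annotations:
    external-dns.alpha.kubernetes.io/hostname: hello.example.com
spec:
  type: LoadBalancer
  selector:
    app: hello-app
  ports:
    - port: 80
      targetPort: 8080
```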

Jeremy M.:
Okay, so how would somebody go about learning Kubernetes? Where should they start?

Kelsey H.:
Yeah. If you're in operations, then the ideal place to start would be setting up your own cluster and getting a feel for how things fit together. I wrote a guide called Kubernetes The Hard Way, and the goal is that you'll spend a couple of hours basically piecing together a Kubernetes cluster so you understand how it all fits together, how it works.

Kelsey H.:
If you're in operations, really the goal is how you set up a cluster. There are lots of other resources out there for thinking about cluster design, security and performance tuning, and your goal there really is to provide a platform for other people in the organization. In operations, you can also go one step further and start to build your own higher-level APIs on top of Kubernetes, so you don't have to expose so much of the raw infrastructure to other people. Now, if you're a developer, I mean, this would be someone that traditionally has a focus on writing applications and maybe even supporting the applications that they deploy.

Kelsey H.:
And in that world, maybe you're less concerned about how to install and maintain a raw Kubernetes cluster, and your goal would be learning more about the Kubernetes API: how to do things like health checks for your application, how to do things like pod affinity, how to make sure that your app has the right number of instances. There are things like the vertical pod autoscaler, so when your app runs out of memory, we can resize that application and have it rescheduled to a machine that's a better fit. All of those things relate to the Kubernetes API. And for some developers, those that are building platforms or building SaaS appliances that may need to leverage some of the platform features of Kubernetes, then you start looking at things like admission controllers, the Kubernetes API and custom resource definitions, often referred to as CRDs.
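
A custom resource definition is how you teach the API server a brand-new object type. A minimal sketch, with an invented Backup kind purely for illustration:

```yaml
# Sketch: a CRD adding a made-up "Backup" type to the cluster API.
# The group, kind and schema are invented for illustration.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string     # e.g. a cron expression
```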

Jeremy M.:
So is there a way that somebody could run a small Kubernetes cluster like on a local machine, kind of in a virtual environment just to test out how it works, how to put it together, how to maybe write a few things and throw it in there?

Kelsey H.:
Yeah. So if you're running a distro like Ubuntu, you can use MicroK8s, and that's a one-command install, apt-get install microk8s, and then you have a one-node Kubernetes cluster kind of ready to go. There's another take on that, which is Minikube, one of the original single-node local developer tools that a lot of people have been using for a long time. And there's a new one on the scene called K3s, and what it does is it toys with the implementation of Kubernetes just a little bit.

Kelsey H.:
It replaces things like etcd, the key-value database where Kubernetes stores all of its configuration data, with something a little more lightweight like SQLite. And then it puts everything in a single binary. So now you can just deploy with a simple command and there you go, you have a single-node cluster. And if you need to add additional nodes, maybe you're on your laptop and you want to toy around with having multiple machines in your cluster, you can always add another virtual machine and connect it as a node and have that participate in the cluster.

Jeremy M.:
Nice. Yeah, that's pretty cool. So is there anything coming out in the next year or so with Kubernetes or Google Cloud Platform that you're really excited about?

Kelsey H.:
Yeah, I think for a lot of things in the Google Cloud Platform world, you know, it's all about stability, right? That's getting Kubernetes on-prem. Some people want GKE on other cloud providers like Amazon or Azure. And what you'll start to see over time is just better multi-cluster management tools. So we have tools like Anthos Config Management and Anthos Service Mesh, this whole Anthos brand of products, which are a commercial offering to help people manage multiple Kubernetes clusters, whether that's on GCP, on-prem or on another cloud provider.

Kelsey H.:
In the open source world, what you're starting to see a lot more of is just people coming around to this idea of Kubernetes as a platform for building other platforms. And that means you're going to get things like Kubeflow, which is a machine learning platform that's built on top of Kubernetes, kind of centered around TensorFlow, which is where it gets its name from. And you're going to see a lot more of those, whether it's service meshes like Istio and so forth.

Jeremy M.:
How do you tackle learning something new, personally, what does your learning plan look like?

Kelsey H.:
Yeah, I guess it's a little bit skewed now that I've been around for a while, but I like to focus just on the fundamentals. I like to make sure things are simple to understand for myself.

Kelsey H.:
If I see a new thing come out, let's take Service Mesh for instance. So Envoy comes out and there's this thing called Service Mesh, supposed to help multiple applications talk to each other, with some other features in between, like policy management, deciding what applications can talk to each other: A can talk to B but not to C. Then it also helps you with things like metrics and logging and monitoring.

Kelsey H.:
There's all kinds of features that go into that. Now, you may think this is all just brand new, wrapped up into this idea of Service Mesh, but when I peel back the covers, I look at the fundamentals. Some of the security aspects are just TLS mutual auth. I could do that with existing tools, maybe not as conveniently, but it definitely works. Other things like distributed tracing, you could do that with something like a library from OpenTracing. So there are fundamentals baked into a lot of these things we call new, and if you understand the fundamentals, it makes it so much easier to do the mental mapping when new things come out.

Jeremy M.:
You mentioned earlier where they say, "rub some Kubernetes on it," and so people focus on containers and orchestration and management and things like that. But at the same time, exactly like you said, if you want to keep getting better and more skilled, learning better Linux administration or Windows system administration could probably help you a lot in the decisions you make as you're scaling things out and building things.

Kelsey H.:
Yeah, I always try to go and find something that's similar. So if I look at Kubernetes, I try to find something that's similar. When I look at containers and container images, I go find something similar. Something similar in that regard would be RPMs and Yum repositories, right? In the Red Hat world, you can package your application using an RPM spec file. It's going to create this package that you can then push to a Yum repository. Then you can go to any machine, do a yum install of your application, and verify its signature if you want. These are all very similar to what people are doing with container images.

Jeremy M.:
I've never looked at it that way, but that's very interesting. So what cool projects are you working on right now?

Kelsey H.:
Oh my goodness, there's too many things to work on. Some of them I can't tell you about until you see me ship them. A lot of what I'm working on is really trying to make this stuff disappear, so that would be, aka, serverless. How many compute platforms can we hide, and level up the user experience to the point where you can say, give me my app, or give me my job, or give me my process, and run it for me?

Kelsey H.:
To me that's the most interesting work, because it's a very hard problem to abstract away some of these tools without losing the potential and the power that those tools offer. And the other thing that I've been working on recently is just figuring out how to help people where technology isn't their core. For example, I was just helping my wife, she's a vice principal at a local middle school, automate the process of taking all of the students that have won an award.

Kelsey H.:
Think about 500 to a thousand kids who have won an achievement award. So you would have a database where all of their names and their GPAs are broken out by grade level. And then I was able to learn Google's G Suite.

Kelsey H.:
So this would be Google Docs, Google Sheets, Google Slides, and I was able to kind of glue all of those things together to create a workflow where they can create a template in Google Slides, and I can just pull the data off of a spreadsheet and then generate all 1,000 certificates, which basically saves them three or four days' worth of work, because they used to do this kind of thing manually.

Jeremy M.:
Oh wow.

Kelsey H.:
Yeah. So finding more places where I can teach people who are outside of this field, how to get more out of their existing tools. That's super interesting to me.

Jeremy M.:
For our final question, since I'm a Portland area person also, what's your favorite part of living in the Portland area?

Kelsey H.:
I think the people; that's what made me move here in the first place. So I grew up in Long Beach, California, for almost a third of my life, then Atlanta, Georgia, where all of my family was from. And then I came to the Portland area about six or seven years ago. And I came mainly because, you know, Portland has this weird, I guess weird is the exact word, culture, and the people seemed to be fairly nice. The weather isn't that great, but I think the people make up for it, and the food options are amazing, and it's just a different way of life. There's trees, there's outdoor stuff, and I've taken to a lot of those things, so I really enjoy this area. It just gives me a way to get away from it all.

Jeremy M.:
Cool. Thank you very much for doing this.

Kelsey H.:
Awesome. I was glad to be here.

Jeremy M.:
Thank you for listening to All Hands on Tech. If you like it, please rate us. You can see episode transcripts and more at pluralsight.com/podcast.