Go allows applications to run extremely quickly and efficiently. However, eventually, a single instance of your application isn't enough. This course will teach you how to refactor your application to prepare it to scale across multiple servers.
Go allows highly performant applications to be built easily due to its focus on simplicity and speed. However, there comes a point where an application simply can't keep up when operating in a single process. This course, Scaling Go Applications Horizontally, will offer a series of techniques that teach you how to spread the load out among many different processes. You'll also learn how to create distributed applications that can scale dynamically to meet changing demands. And, in order to gain the best possible understanding of the challenges and potential solutions, you'll learn how to write everything in pure Go, relying only on the standard library with no third-party libraries for assistance. Armed with this knowledge, you'll be better equipped to understand what a third-party library can do to help you, or whether you can get by with a custom solution. By the end of this course, you'll be able to refactor and scale your Go applications across multiple servers so that your apps can run faster and scale more smoothly than ever.
Michael Van Sickle is an application architect in Akron, Ohio.
He is a mechanical engineer by training and a software engineer by choice.
He is passionate about learning new programming languages and user experience design.
Course Overview Hi everyone. My name is Michael Van Sickle, and welcome to my course, Scaling Go Applications Horizontally. I'm a software engineer at SitePen. Go allows applications to run extremely quickly and efficiently; however, there comes a point when a single instance of your application isn't enough. In this course, we're going to learn how to refactor your application to prepare it to scale across multiple servers. Some of the major topics that we'll cover include how to serve multiple application instances with a load balancer, how to add caching in a scalable manner, and how to manage logs in a distributed application. By the end of this course, you'll know the fundamentals of what it means to scale an application horizontally and how you can implement your own scalable services. Before beginning the course, you should be familiar with the Go language and how to work with Docker containers. I hope you'll join me on this journey to learn how to scale Go applications horizontally with this course at Pluralsight.
Initial Optimizations At this point, we have our game plan in place for where we want to take our application, but I want to make sure we don't get ahead of ourselves. One of the most important things that you can do when you're looking at scaling your application is to find a way to not need to scale it at all, and we're going to do that by making sure that we're using the existing resources that we have available as efficiently as possible. This is something that a lot of companies have found as they've moved from other programming languages over to Go. The inherent efficiency of the Go language has allowed many companies to dramatically reduce the number of servers they have and simplify their infrastructure. Now you've already made that optimization, but there are a couple of others that I think we should look at. For this module, I really only want to focus on two. There are a lot of optimizations that we can make in our applications, but I just want to give you a flavor for the kinds of things that you might want to look at and how you can add them to your Go application. The first is content compression. Now you've probably noticed that all of our traffic in this application is using HTTP as the protocol. Right now we're just sending that as a raw text stream across the network, and that's actually a pretty inefficient way to send data, so we're going to look at some of the compression technologies that are available and then apply one to our demo application. After that, we're going to layer on HTTP/2. Right now we're using HTTP/1.1, which is the default protocol whenever you're using HTTP in a Go application. However, by making a couple of very simple changes, we're going to be able to switch over to HTTP/2, and by doing that we're going to reap quite a few benefits. So let's get started by talking about content compression, why it's a good idea, and how to apply it to our Go application.
Scaling the Web Service Tier via Load Balancing Hello, and welcome back to this course where we're learning how to horizontally scale our Go web applications. In the last module, we took a little bit of time to make sure that the resources we have were being used as efficiently as possible. In this module, we're going to make the assumption that our best efforts were in vain and we're simply no longer able to host our application in the single process that it's in right now, so it's time for us to figure out how to actually scale our application across multiple processes that are potentially running across multiple servers. Now we're going to go about this in a couple of steps. The first thing that I'd like to do is review the target architecture and how I intend to map it onto our demo application. We'll do that in theory first, and then we'll drop into the code and spend quite a bit of time refactoring our application and building the entire load balancer. After we're done with that, we'll have the load balancer in place, and the web applications that we have right now will turn into the service providers that the load balancer relies on. However, we're not going to have a way for them to get acquainted with one another, so in the bottom half of this module I'd like to talk about provider registration. That discussion will cover how the web applications inform the load balancer when they start up, and, once they have, how the load balancer can get in touch with them when it has a request that it needs to have honored. We'll also cover how the web applications signal the load balancer when they shut down, and how the load balancer can find out if one of the service providers disappears without properly shutting down. So let's get started by talking about the target architecture that we're going to be moving toward in this module.
Adding a Caching Service Hello, and welcome to this course where we're talking about how to horizontally scale our applications in Go. In the last module, we covered a lot of the topics that you might think of when we talk about horizontally scaling an application. We now have load balancers and application servers in place, so we can delegate the primary task of the application, calculating results and generating responses, to those application servers, and the load balancer can then utilize those application servers to increase its capacity. However, while we've increased the total number of requests that we can handle, each individual request has actually gotten slower. So in this module, I'd like to talk about how to add a caching service into the application architecture that we've been working with so that we can reduce some of that overhead we've added. This module is going to be pretty simple. We're just going to break it down into two parts. First, I'd like to talk about the current state of the application: where we started, where we've ended up now, and where we need to go; that'll be covered in the architectural options. We've already decided that we're going to add caching in this module, but that's not a trivial decision to make. We have to talk about what caching strategies are available and then select the one that's most appropriate for our application. So let's get started and talk about the current state of our application and why caching is going to be important for us going forward.
Centralized Logging Hello, my name is Michael Van Sickle. Welcome back to this course where we're talking about how to create horizontally scaled web applications with Go. Back in module 3, we handled the heavy lifting of actually horizontally scaling our application. We did that by introducing a load balancer that was served by a bank of application servers that were really just a refactored version of our existing application. However, in the last module, we discovered that scaling out the capacity of our application doesn't solve all of our problems. A decentralized architecture actually runs into additional problems, and we need additional tools to mitigate them. In the last module, what we had to deal with was the fact that each request that our application served was slowed down by the additional communication that had to happen when we introduced the load balancer in-between the requestor and the web applications. In this module, we're going to tackle another challenge that can come up with a decentralized architecture, and that is logging. As we go through this, we're going to break this module down into three parts. First, why do we do logging in the first place? Now for some of you that may be really trivial, and this may just be a review, but for others, maybe you haven't actually been introduced to the value of logs in a production environment, and it's always good to have a review. Then we'll move on to talk about the challenge of logging in a decentralized architecture and some of the frustrations that that kind of architecture can bring. And then finally, as you might expect, we'll introduce a possible solution that can help deal with the challenges that we introduce earlier in the module. Okay, so let's dive in and do a review about why logs are a good idea in the first place.