What's a container? What IT Ops and devs need to know
According to my better half, a container is something that food goes into when you're done, but haven't finished everything. Still struggling with the concept, as tinfoil always seemed fine. But in the IT world, containers are the new shiny.
My problem, however, is that everyone seems to try and explain containers by comparing them to virtualization. Or, they'll even refer to containers as a "kind of virtualization." I'm not sure that's a good way to understand them. While there are certainly some workloads that we've done in the past with a virtual machine, and should maybe do in the future in a container, they're really two different beasts. So, just for this post, let's not use the word virtualization at all. Instead, let's start by looking at the actors in a container environment.
Container environments: the main players
Host operating system
You start with the host operating system. Historically, this was a Linux distribution, because containers as we know them today, by and large, derived from features of the Linux kernel. Today, much of the robust containerization tooling on Linux comes from libcontainer, created by Docker, but Linux has long been the host OS of choice for containers. Microsoft jumped into the mix with Windows Server 2016, which features native Windows container capabilities.
Now, in the bad old days, you just installed applications directly onto the host operating system. The application usually had a number of dependencies and prerequisites - so, in Linux terms, you may have had to install a bunch of prerequisite packages first. The application also dropped a lot of its own stuff all over the place. This made application deployment a hassle, because making an application work was often a trial-and-error effort. Whole installer and packaging technologies were created to better bundle an application with its prerequisites, specifically to avoid those problems. But you still ran into issues: a package that expects to be installed on CentOS 6.1 might not install so well onto RHEL 7, just because of the differences between the distributions.
Sometimes you also ran into problems with multiple apps on a single host. Application A wants version 2.3 of Package 1, which might not be able to live alongside the version 2.2 that Application B wants.
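To make that conflict concrete, here's a toy sketch (my own illustration, with made-up package names) of why a single shared package store breaks one of the two applications:

```python
# Toy model of a host's single, flat package store - only one version
# of a package can be installed at a time, as in a shared site-packages.
shared_packages = {}

def install(package, version):
    # Installing clobbers whatever version was there before.
    shared_packages[package] = version

def requirement_met(package, wanted_version):
    # Does the installed version match what this application pinned?
    return shared_packages.get(package) == wanted_version

install("libfoo", "2.2")   # Application B installs its dependency
install("libfoo", "2.3")   # Application A's install overwrites it

requirement_met("libfoo", "2.3")   # True - Application A is happy
requirement_met("libfoo", "2.2")   # False - Application B is now broken
```

Containers dodge this entirely: each application carries its own copy of version 2.2 or 2.3 inside its own container, so the two never meet.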
The container
The next actor in the system is the container itself. This is - I'm oversimplifying a bit, but bear with me - basically a folder that contains everything the application needs above and beyond the base OS. And the base OS, in the container world, is rapidly becoming a smaller and smaller thing. Initially, the base OS was a full Linux distribution, which packs quite a lot of stuff into the box. Nowadays, a container host OS may be a very lightweight thing indeed, such as CoreOS, VMware's Photon or Microsoft's Nano Server. So rather than assuming a fairly well-equipped base OS, a container assumes a fairly minimal base OS and brings along whatever it needs.
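You can see this "minimal base OS, bring your own everything" idea in a typical Dockerfile. This one is a hypothetical example (the app and file names are mine, not from any real project), but the pattern is standard:

```dockerfile
# Start from a deliberately small base OS image.
FROM alpine:3.19

# Bring the application's prerequisites along inside the container,
# instead of assuming the host already has them installed.
RUN apk add --no-cache python3

# Copy the application's own files into the container's filesystem.
COPY app.py /opt/app/app.py

# What to run when the container starts.
CMD ["python3", "/opt/app/app.py"]
```

Everything above the base image travels with the container, so the "works on CentOS, breaks on RHEL" problem largely disappears.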
The containerization layer
The magic all happens in the third actor, which is the middleman that actually does containerization. Docker, if you will, or the container layer of Windows Server. This middleman layer essentially wraps a blanket around the container, making the container - and the application therein - believe it is the only thing running on the host OS. The application sees only its files, only its processes and so on. The actual level of isolation depends a bit on circumstances, and isn't as total as the isolation between, say, virtual machines, so containers aren't perfect for every application scenario.
This middleman does some pretty cool stuff. For example, and not unlike Microsoft's App-V, the containerization layer can intercept application read and write requests, so that while an application may think it's writing to a system-level folder, and later reading data from that location, it is in fact reading and writing data to its own folder, maintaining the "container" around the application.
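Here's a minimal sketch of that redirection idea in Python - my own illustration of the concept, not how Docker or App-V is actually implemented. The application asks for a "system" path, and the layer quietly maps it into the container's private folder:

```python
import os
import tempfile

class ContainerFS:
    """Toy write/read interceptor: every 'system' path the app touches
    is silently redirected under the container's own private folder."""

    def __init__(self, container_root):
        self.root = container_root

    def _redirect(self, path):
        # Map an absolute "system" path under the container's folder.
        return os.path.join(self.root, path.lstrip("/"))

    def write(self, path, data):
        real = self._redirect(path)
        os.makedirs(os.path.dirname(real), exist_ok=True)
        with open(real, "w") as f:
            f.write(data)

    def read(self, path):
        with open(self._redirect(path)) as f:
            return f.read()

root = tempfile.mkdtemp()          # stands in for the container's folder
fs = ContainerFS(root)
fs.write("/etc/myapp.conf", "color=blue")  # app "writes to /etc"
config = fs.read("/etc/myapp.conf")        # and reads back its own copy
# The host's real /etc is never touched.
```

The application believes it owns `/etc`; in reality its writes land in its own folder, which is exactly the "blanket" the containerization layer wraps around it.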
Containers: How virtualization fits in
So where does virtualization come in? Well, with containers, you get some of the same isolation between applications on the same host, as I've indicated. But you don't have to have a host that's pretending to be several different pieces of hardware - so you tend to get more applications per host, meaning better density in the data center. You can also mix and match: running one virtual machine, for example, as a container host that in turn runs multiple application containers. Docker offers clustering solutions for its stack, giving you a VM-like ability to move containers across hosts, without the use of traditional virtual machines.
Containers are another tool in the arsenal. They're especially well suited to tightly-scoped applications, like Web applications, and so right now you see them running in those contexts a lot. As developers learn to write applications that are better "containerized," we'll likely see them used in more situations. Containers offer advantages - spinning up a new one is markedly faster than spinning up a new VM, its guest OS and its applications, so in a rapid test-dev-test-dev cycle, containers can mean less waiting. But containers aren't the end-all, be-all - as I've noted, they have a different concept of isolation (and Microsoft's Hyper-V Containers muddle the picture by offering a third isolation model), so in some security-sensitive scenarios, containers might not work.
Overall, containers are an exciting and rapidly-evolving field, which means whatever you know about them today might be different a year from now. So it's important to keep up.
Check out this course: Docker & Containers: The Big Picture