Pretty much everyone and their pet is trying to put their applications inside containers. There are talks upon talks about them and how they are changing the face of technology forever. You are probably containerizing some application right this minute, aren’t you? Probably using Docker, right?
But, what is a container? What does Docker do?
According to a diagram I found on some random website, container engines like Docker basically provide 4 capabilities:
- Control Groups — the ability to partition resources such as compute and memory so that the container cannot monopolize a machine’s resources.
- Namespaces — the ability to isolate processes so that they get their own view of the world and generally can’t modify or view another one.
- Layer Capabilities — the ability to snapshot filesystem state and isolate writes from the base image.
Pro Tip: Always end your lists with “Other” — that makes it exhaustive.
These capabilities can be boiled down into two categories:
- Constraints — Limits on machine resources it can use
- Isolation — Limits on what it has access to view and modify
Applications used to get these properties from being inside a Virtual Machine. A modern container system such as Docker attempts to give applications both of these properties with minimal overhead as compared to VMs — replacing them for this use-case.
Jet runs many hundreds of .NET executables on Windows. So we asked — what are the options for containing applications there? In 2016, Microsoft partnered with Docker to create a container engine implementing the Docker spec. This makes it easy to run containers natively on Windows with tools you are already using:
docker run -it microsoft/windowsservercore cmd
and you are done*.
In fact, Microsoft liked containers so much, they made them twice:
- Process Containers (Also known as Windows Server Containers — WSC)
- Hyper-V Containers
If you are like me, you are probably used to running containers under Linux. But Windows is very different from Linux.
The Windows Operating System is highly integrated. It exposes its API via DLLs, not syscalls. The internal workings of how DLLs actually interact with the OS are undocumented, but tightly coupled with the OS services that are running, which in turn have their own coupling with other services and DLLs, ad nauseam.
This means that while you can share the kernel, you can’t isolate an application completely from the system services and DLLs. WSCs need a copy of the critical system services and all OS DLLs required to make Windows API calls. It does not matter what language you write your application in; eventually something needs to make an API call to the OS — this coupling is inescapable.
Because of this, a WSC container looks like:
In this diagram, each container has a set of System Processes (and DLLs, etc…) as well as Application Processes (your app). This has some impact on their portability.
Windows Server Containers will not run on different versions of Windows
Remember that asterisk after docker run above? It’s not quite that simple.
If you want to run a WSC on Server 2016, you must have built the container from a Server 2016 base. If you run it on Windows Server 1709 it will not work: it is blocked from starting. This also means, for example, if you run Nano Server as your host OS, you can only run Nano Server base images on that host. This is simply not a limitation that you’ll find with Linux containers.
**WARNING: Speculation** Why does this limitation exist? If the Docker engine on Windows did allow such containers to run, they might work (they might not crash immediately). However, Microsoft cannot guarantee that the undocumented behavior the integrated components (OS services, DLLs, etc.) rely on remains stable between builds — so these containers are simply blocked from starting, to save you from undefined and potentially undesirable behavior.
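To make the build/host coupling concrete, here is a minimal Dockerfile sketch. The image tag and application name are illustrative assumptions; the point is that with process isolation, the base image’s Windows build must match the host’s build.

```dockerfile
# Built from a Server 2016 base image (tag is illustrative).
# With process (WSC) isolation, this image runs only on a
# Server 2016 host; on Windows Server 1709 it is blocked from starting.
FROM microsoft/windowsservercore:ltsc2016

# Copy in a hypothetical application.
COPY app/ C:/app/
CMD ["C:\\app\\MyApp.exe"]
```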
Windows Containers are (currently) Big
Windows containers are larger, on average, than Linux containers. There is no FROM scratch option. You must carry the base image with you everywhere. To Microsoft and the OS team’s credit, they are doing a lot of work to bring image sizes down and make them more practical; but it comes at a cost. In the strip-down, features of Windows are lost. Notably, the .NET Framework is not available, and neither is WMI. A fine trade-off, though, if you are concerned about image size and you compiled a stand-alone native or .NET Core binary to run in the container.
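One way to live with that trade-off is a multi-stage build: compile with the full SDK image, then copy a self-contained .NET Core app onto the much smaller Nano Server base. A sketch, where the image tags and project name are assumptions:

```dockerfile
# Build stage: the full SDK image (large, but discarded after the build).
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: a Nano Server base. No .NET Framework, no WMI,
# but enough to run a .NET Core application.
FROM microsoft/dotnet:2.1-runtime-nanoserver-1803
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```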
Windows Hyper-V Containers
In order to address one of these limitations, Microsoft added a new type of container isolation mode called hyperv which (unsurprisingly) uses Hyper-V Virtualization to run the container images.
In the Hyper-V version, we no longer have a shared kernel — hence we no longer need to match Windows build versions between the host OS and the container. This obviously comes with an additional cost: each Hyper-V container runs within its own Virtual Machine. In a cloud environment, this typically requires support for Nested Virtualization.
Now, that sounds crazy, right? Didn’t I just say earlier that containers were supposed to replace VMs!? Well, it’s a bit less crazy than you’d think at first glance. Microsoft did a ton of work to make the VM as small as possible, install and run as little as possible, and share as much memory as possible between VMs during start-up, so that start-up times didn’t suffer from each container having to “boot up” every time to reconstruct effectively the same initial memory state.
So, this mode makes containers portable again! — Sort of... Hyper-V Virtualization ensures that older images will continue to work on newer instances of Windows, but it does not allow newer Windows container images to run on older versions of a Windows host OS. If you try to do that, you’ll encounter an error.
There is a Windows Container Version Compatibility matrix available for you to see exactly what versions of which containers work on which host OS and under what isolation modes.
Hyper-V doesn’t address the image size part of the portability problem either. You run the same container image regardless of the isolation mode you specify in your docker run command. Some Windows host images — on Azure, for example — come pre-loaded with the container base image, so this may not be such a big deal.
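You choose the mode per run with the --isolation flag (for example, docker run --isolation=hyperv ...). If I recall Docker’s Windows documentation correctly, you can also make Hyper-V isolation the daemon-wide default with a daemon.json fragment like this:

```json
{
  "exec-opts": ["isolation=hyperv"]
}
```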
Now you know a little bit about Windows Containers, how they work, and their limitations. If you still want more, I recommend watching this great talk from DockerCon16 called “Windows Server & Docker - The Internals Behind Bringing Docker & Containers to Windows” by John Starks & Taylor Brown.
Should you use Windows Containers Today?
One use-case I’d say an emphatic yes to right now is build environments. They make a lot of sense for creating disposable containers where you need to bundle a set of tools together to run builds and tests of your application. They can also be used to run supporting infrastructure, such as MSSQL databases for your integration tests. In these use-cases, the benefits far outweigh the trade-offs. Containerization provides an excellent alternative to constructing bespoke machine types with lots of software tools pre-installed.
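As a sketch of the build-environment use-case, you might bake a toolchain into a disposable image like the one below. The base tag and Chocolatey package names are assumptions for illustration.

```dockerfile
# Hypothetical build-environment image: tools are installed once at
# image-build time; each CI build then runs in a throwaway container.
FROM microsoft/windowsservercore:ltsc2016
SHELL ["powershell", "-Command"]

# Install the Chocolatey package manager, then the build tools.
RUN iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
RUN choco install -y git dotnetcore-sdk nuget.commandline
CMD ["powershell"]
```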
However, containers on Windows are not like Linux. There are caveats that will hopefully get ironed out over the next year or two and make them more usable. They are currently large, and not as portable as Linux containers. They have additional overhead due to the nature of the Windows Operating System, as well as added virtualization if you are using Hyper-V isolation. If all this strikes you as just a tad unreasonable, check out my follow-up post on Containing Windows Executables with Damon.
If you like the challenges of building complex & reliable systems and are interested in solving complex problems, check out our job openings.
2018–12–18: This post originally claimed that .NET Core and PowerShell were not available in Nano Server. In fact, Nano Server does contain PowerShell and .NET Core; it does not have the .NET Framework. Thanks to Thomas Zühlke for the correction.