Getting Started with Docker for Windows


By Christopher Dunne, DevOps Platform Engineer

How many times have you heard “It works on my machine”? Docker eradicates the issues that occur due to environmental differences.

What is Docker?

Docker is a tool which can be utilised to package up an application and its dependencies into a single deployable container, which houses everything needed to run the application successfully. This containerisation isolates the application from the host system it runs on, providing a consistent experience throughout the development, testing and deployment process.

It allows you to build your environment, in the form of an image, and run it anywhere. Machines no longer need provisioning; if the machine has Docker installed the application will run, as the image has everything you need baked into it.

Docker is open source, and has a number of high-profile contributors, such as Microsoft, IBM, and Red Hat.

Prerequisites

This guide assumes no prior knowledge of Docker, and aims to help you get started on your containerisation journey. You will need an application to work with, and the following steps contain everything required to get it up and running in a container using Docker. I encourage you to experiment along the way, and customise the setup to align with your own requirements.

Getting Set Up

To get started with Docker you will need to download Docker Desktop for Windows. Once you have downloaded it and run through the installation, you will have Docker set up on your machine. Some installation steps may require you to restart your machine, and possibly to enable virtualisation in your BIOS menu if it isn’t already enabled.

If you are using the Home edition of Windows, it will not support Hyper-V and you will need to download Docker Toolbox instead.

I recommend the use of Visual Studio Code alongside the Docker extension; this will provide syntax highlighting and a GUI for interacting with Images, Containers and Registries.

Docker Images

Docker images are created using a Dockerfile; within this file you build up the layers which define your image.

Docker images generally rely on a base image, which is declared at the top of your Dockerfile:

FROM mcr.microsoft.com/dotnet/framework/aspnet:4.7.2-windowsservercore-ltsc2019

The image declared after the FROM statement is pulled from the Microsoft Container Registry; the available Microsoft images are listed on Docker Hub.

Once you have a base image, everything else is built on top of it. Each instruction defined in your Dockerfile adds a new layer to your Docker image, with the base image at the bottom of the stack and each subsequent instruction layered on top; in that sense the layers stack in the opposite direction to the order they are declared in your Dockerfile.

Creating an Image

If you want to experiment with Docker outside of an existing application, you can create a simple sample project or use a Visual Studio template.

Within the root of your solution add a directory called docker. Within this directory create a file called Dockerfile, with no file extension, and add the base image defined in the previous section to the first line. That image is for .NET Framework; you will need to find the base image which corresponds to the type of application you are working with. You can now start building on top of this base to create your own image.

You can set a default shell for RUN commands; we’ll add this directly after the base image declaration:

SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]

Now we can execute PowerShell commands from our Dockerfile by utilising the RUN command. I recommend keeping larger scripts in separate files, to keep the Dockerfile clean and easy to maintain.
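As a minimal sketch of that approach (install-dependencies.ps1 is a hypothetical script name, not one used elsewhere in this guide), you would copy the script into the image and invoke it with RUN:

COPY ./docker/install-dependencies.ps1 .
RUN ./install-dependencies.ps1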

Create an environment variable for the application root using the ENV command; you can also use ARG to pass in parameters, but ARG values are only available at build time.

ENV APP_ROOT="c:\\application"
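If you do need a build-time parameter, a minimal sketch looks like the following; BUILD_CONFIG is a hypothetical argument name and isn’t used elsewhere in this guide:

ARG BUILD_CONFIG=Release
RUN Write-Host "Build configuration: $env:BUILD_CONFIG"

The default can then be overridden when building the image with docker build --build-arg BUILD_CONFIG=Debug.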

Add the directory to our image using PowerShell:

RUN New-Item -Type Directory -Path $env:APP_ROOT

Now set the working directory of our Dockerfile using the WORKDIR command. You can then use the COPY command to add the application to a directory on the image. You will generally use COPY for copying local files into your image; there is also an ADD command, which is usually reserved for adding external resources. The following commands copy your src directory to the application directory on the C drive of your image; we also take advantage of COPY accepting multiple sources to add a PowerShell script, which is used later by the ENTRYPOINT described below:

WORKDIR $APP_ROOT
COPY ./src ./docker/Start.ps1 ./
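For completeness, an ADD sketch for an external resource might look like the line below; the URL is purely illustrative, and in most cases COPY with local files is the better choice:

ADD https://example.com/tools/healthcheck.zip ./tools/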

You can use the EXPOSE keyword to document the ports your application listens on; it acts as an instruction to the user of the image about which ports are intended to be published. If you wish to publish the ports defined by EXPOSE, you can use -p when running your image in a container.

EXPOSE 80
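For example, once the image is built you could publish the exposed port when running it in a container; mapping it to host port 8080 is just an illustrative choice:

docker run -p 8080:80 <image id>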

Add an ENTRYPOINT, and define a script file to run when the container is instantiated. This can be used for things like health checks, configuration, and keeping the container running.

ENTRYPOINT ["powershell.exe", ".\\Start.ps1"]

Our start script can contain any functionality that will be executed during the docker run command.

We’ll keep the start script basic for the sake of getting set up quickly; you can extend this in any way you wish. Create the Start.ps1 file within the docker directory, and add the following inside:

# Create the IIS site for the application and bind it to port 80
Import-Module WebAdministration
$SiteName = "application"
$HostName = "localhost"
$SiteFolder = "$ENV:APP_ROOT"
$IISSite = "IIS:\Sites\$SiteName"
New-WebSite -Name $SiteName -PhysicalPath $SiteFolder -Force
Set-ItemProperty $IISSite -name Bindings -value @{protocol="http";bindingInformation="*:80:$HostName"}
# Grant the IIS worker processes access to the application folder
$accessRule = New-Object System.Security.AccessControl.FileSystemAccessRule("IIS_IUSRS", "FullControl", "ContainerInherit,ObjectInherit", "None", "Allow")
$acl = Get-ACL $SiteFolder
$acl.AddAccessRule($accessRule)
Set-ACL -Path $SiteFolder -ACLObject $acl
Start-Service W3SVC
Start-Website -Name $SiteName
# Simple health check loop; when it fails the script exits and the container stops
$healthy = $true
while ($healthy)
{
    try
    {
        $Request = Invoke-WebRequest -Uri 'http://localhost/' -UseBasicParsing
        $Status = $Request.StatusCode
        Write-Host "Status: $Status"
    }
    catch
    {
        Write-Output $_
        $healthy = $false
    }
    Start-Sleep -Seconds 120
}

This script sets up IIS for our application, and then loops through a health check. The container only stops if the health check fails. You will need to update the $SiteName variable if you haven’t used “application”.

Our complete Dockerfile:

FROM mcr.microsoft.com/dotnet/framework/aspnet:4.7.2-windowsservercore-ltsc2019
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]
ENV APP_ROOT="c:\\application"
RUN New-Item -Type Directory -Path $env:APP_ROOT
WORKDIR $APP_ROOT
COPY ./src ./docker/Start.ps1 ./
EXPOSE 80
ENTRYPOINT ["powershell.exe", ".\\Start.ps1"]

Caching

The layers of your Docker image are individually cached, which, when leveraged correctly, can speed up your builds significantly. This is especially beneficial when working with Windows containers.

Once a layer has changed, every layer on top of it can no longer use the previously cached version. It’s therefore best to order your Dockerfile so that the parts which change less frequently are at the top, and the parts that change more often are at the bottom.
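To make that concrete with the Dockerfile from this guide, the layers that rarely change (the base image, SHELL, ENV and the directory creation) sit at the top, while the COPY of your source, which changes on almost every build, sits as late as possible:

# Rarely changes, so its cached layer is reused on most builds
RUN New-Item -Type Directory -Path $env:APP_ROOT
# Changes on almost every build, so it is declared as late as possible
COPY ./src ./docker/Start.ps1 ./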

Docker Ignore

In the root directory of your application add a file called .dockerignore. In this file you will add the paths and files you don’t want to include in your build. By excluding unnecessary files you will greatly reduce the time it takes Docker to scan and send the build context.

This is an important step, as you’ll be running docker build quite frequently while you get things set up, and it will greatly reduce your build times. When getting started I was caught out by not having a .dockerignore; Docker was scanning through thousands of node package files, which made each build painfully slow.

You can ignore specific files within your docker ignore file, or ignore everything and include only the stuff you want to use:

**
!/docker
!/src
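Alternatively, if you would rather list exclusions explicitly, a sketch for a typical .NET solution might look like this; the exact paths depend on your project layout:

**/bin
**/obj
**/node_modules
.git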

Docker Containers

Containers are simply the isolated environments in which your Docker images are run.

Open up your terminal, and build your docker image:

docker build --file "C:\application\docker\Dockerfile" "C:\application"

The --file parameter points to the location of your Dockerfile, whereas the second argument is the build context the Dockerfile is run against.

Output:

Sending build context to Docker daemon  97.04MB
Step 1/8 : FROM mcr.microsoft.com/windows/servercore:ltsc2019
---> e43347a4426d
Step 2/8 : SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]
---> Using cache
---> 73fbd46fd406
Step 3/8 : ENV APP_ROOT="c:/application"
---> Using cache
---> 4d6d191bc23f
Step 4/8 : RUN New-Item -Type Directory -Path $env:APP_ROOT
---> Using cache
---> 88d88747fedc
Step 5/8 : WORKDIR $APP_ROOT
---> Running in 4b088fd84b0b
Removing intermediate container 4b088fd84b0b
---> 599fab8491f0
Step 6/8 : COPY ./src ./docker/Start.ps1 ./
---> 230ce3dcae9c
Step 7/8 : EXPOSE 443
---> Running in b2f713a5ecda
Removing intermediate container b2f713a5ecda
---> c5ba3d36527d
Step 8/8 : ENTRYPOINT ["powershell.exe", ".\\Start.ps1"]
---> Running in fae82c28a6a0
Removing intermediate container fae82c28a6a0
---> d91e30fdac0b
Successfully built d91e30fdac0b

The end of the build will output your image id; you can find the image again later using:

docker image ls
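Rather than keeping track of image ids, you may find it easier to tag the image when you build it; my-application is just an illustrative name:

docker build -t my-application:latest --file "C:\application\docker\Dockerfile" "C:\application"
docker run my-application:latest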

Now that we have an image with your application and all of its dependencies baked in, we can simply run it in a container:

docker run d91e30fdac0b

This will run the image in a container, and hit the entry point we defined earlier to set up IIS for the application and perform a simple health check. The health check should output a status of 200 to the terminal.
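If you would like your terminal back while the container keeps running, the -d parameter from the cheat sheet below runs it detached; you can then follow the health check output with docker logs <container id>:

docker run -d d91e30fdac0b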

To get the id of the running container use:

docker ps
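The container id can also be used to open an interactive PowerShell session inside the running container, which is handy for checking that your files and the IIS site ended up where you expect:

docker exec -it <container id> powershell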

You can now use the container id to grab the IP address of your container:

docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" 0b3307372853

Now that you have the IP address, you can use it to reach your application and load it up in a browser.
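If you would rather not copy container ids around, a small PowerShell convenience (my own addition, not part of the original walkthrough) pipes the running container ids straight into docker inspect:

docker ps -q | ForEach-Object { docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" $_ }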

If you have followed along you should now have a working application running in a Docker container. Hopefully you’ll have gained enough insight to understand what Docker is and how it could benefit you. Working with Windows containers can be bulky, especially with .NET Framework, but it still has some amazing benefits. If you experiment further with Linux containers and .NET Core, you will see how lightning fast building and deploying with Docker can be.

What Next?

You can now build your Docker images and run them inside a container, but where do you go from here? Docker alone will not provide huge benefits to your deployments; you will need to look into a container orchestration tool to automate application deployment, scaling, and management. Some popular tools for this are:

  • Kubernetes — By far the most popular tool; it was created by Google and is now maintained by the Cloud Native Computing Foundation. This is the best option for portability.
  • ECS — Amazon’s Elastic Container Service, for orchestrating containers on the AWS platform.
  • ACI — Microsoft’s Azure Container Instances, for orchestrating containers on the Azure platform.
  • Marathon — A container orchestration platform for long-running applications, built on Apache Mesos.

Cheat Sheet

Commands:

  • docker build --file <path to Dockerfile> <context> — Build your image; --file points to your Dockerfile and the final argument is the build context
  • docker run <image> — Run your image in a container
  • docker image ls — List all of your Docker images
  • docker ps — List running containers
  • docker rmi <image> — Remove an image; space separate the ids to remove multiple at once
  • docker kill <container> — Stop a running container
  • docker rm -f $(docker ps -aq) — Remove all containers
  • docker image prune — Remove dangling images
  • docker container prune — Remove stopped containers
  • docker system prune — Remove stopped containers, unused networks and dangling images
  • docker inspect -f “{{ .NetworkSettings.Networks.nat.IPAddress }}” <container> — Get the IP address of your container
  • docker exec -it <container> powershell — Access PowerShell on a running container

Parameters:

  • --rm — Use with docker run to remove the container once it exits
  • -d — Run the container detached from the terminal
  • -a — Add to docker ps to show all containers, including stopped ones
