Docker on Windows 10 using Windows Subsystem for Linux

Jared Middleton
9 min read · Feb 23, 2019


I don’t know much about Docker. So, I recently tried to learn more about it. I figured a good place to start would be the official Docker “Get Started” walk-through.

The computer I used runs Windows 10 Pro (Version 10.0.17763 Build 17763, but the version at the time would’ve been a few updates back), though I prefer to use Linux distros. I figured I might as well try Docker through Windows Subsystem for Linux (WSL), because then I don’t need to go through the pain of setting up a dual-boot or something, but I still get to use my preferred Linux commands. Why not, right?

It turned out not to be quite as straightforward as I anticipated. As I was figuring it all out, I wasn’t really finding any resources on how to do what I was trying to do (none that were working for me anyways). So I kept notes and figured I’d later post a recap of the process, and this is it. :P

Disclaimer: I’m not talking about installing Docker directly into WSL — rather, I installed it through Windows 10, and then I was just trying to interface with it through WSL (because as far as I could tell, the former doesn’t work).

Note: I don’t have full explanations for everything here. I’m no Docker expert, nor an expert on WSL. I changed the terminal program I was using part-way because I didn’t want to waste hours figuring out how to configure the first one better. I’m sure I could’ve done a lot of things better — so you’re absolutely welcome to leave comments sharing info on how to improve this process.

Summary

I think this process was a bit all-over-the-place because of how much I over-complicated it. At the same time, I feel like I got a much better understanding of Docker and its surrounding technologies than if everything had just worked out perfectly the first time. I still really enjoyed going through this walk-through, so I’m perfectly happy.

The sections align with each Docker walk-through ‘Part’, and I’ve omitted a lot of information which I felt was already sufficiently covered in the walk-through.

This is just a shallow overview of the actual process, with some little dives into detail where things went wrong.

Versions

Docker

At the time of writing this article, I’m on Docker Desktop for Windows version 2.0.0.3. But just note that I may have updated in the 6 weeks or so since I went through the process described in this article, though I imagine it’s still pretty much the same.

Terminal

In terms of my terminal emulator, I was using hyper.js to start. However, about half-way through you have to run some docker-machine ssh commands, and I found they wouldn’t work properly because docker-machine was generating Windows-style commands like SET to apply environment variables. If I used Git Bash instead, the generated script appropriately used export, which meant docker-machine ssh would actually work.

For expediency’s sake, I haven’t bothered to go back and update the first half of the notes with Git Bash screenshots, etc., but I’d recommend the reader start with a terminal which appropriately interprets being in WSL as needing Linux commands rather than Windows ones. (Though there may be issues connecting file systems; see the “Deploying App to Swarm Cluster” section for my solution to that issue.)

Part 1: Getting Started

First of all, I figured I’d try installing Docker into WSL through the official Docker install steps for docker-ce on Ubuntu.

That was immediately an issue once I tried running docker, because Ubuntu couldn’t find a daemon on port 2375, and I didn’t manage to figure out how to get Docker pointed at one as I intended. I saw an article which detailed how to get it working, but that solution wasn’t working for me. So from there, I decided to uninstall Docker from the subsystem, install it in Windows, and then just point the bash commands at the Windows processes.

For reference, here is the post I found which detailed why it’s not possible to get Docker running properly within the subsystem itself.

So I swapped over to the docker installation steps for Windows 10, went into BIOS to enable Virtualization, then enabled Hyper-V.

I added these lines to ~/.bashrc:

export PATH="$HOME/bin:$HOME/.local/bin:$PATH"
export PATH="$PATH:/mnt/c/Program Files/Docker/Docker/resources/bin"
alias docker=docker.exe
alias docker-compose=docker-compose.exe

In the Docker settings, I enabled the relevant options, and then bash docker commands were working (in WSL).

I verified by running some images as per the suggestion of the walk-through. The hello-world and ubuntu images both worked perfectly. The nginx web server, however, didn’t work initially because it couldn’t bind to port 80 (which IIS runs on, and which I need running for work). I was able to get it working by mapping an alternative host port (e.g. --publish 4000:80).
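The remap is just host:container ordering on --publish. A tiny sketch (the port choice is arbitrary, and the run command is echoed rather than executed here):

```shell
# --publish takes host_port:container_port; nginx listens on 80 inside
# the container, so pick any free host port (4000 here) to avoid IIS.
HOST_PORT=4000
CONTAINER_PORT=80
PUBLISH_ARG="--publish ${HOST_PORT}:${CONTAINER_PORT}"

# The actual run command, shown rather than run:
echo "docker run -d $PUBLISH_ARG nginx"
# then browse to http://localhost:4000
```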

Part 2: Containers

As the walk-through said, I created a basic little Python app (it just serves a page showing its hostname), wrote the Dockerfile, and then ran docker build to create the image.

Issue: docker couldn’t find the file using a relative path or any sort of path which was based in the subsystem. To get it to build, I added another variable to ~/.bashrc which pointed to my WSL home dir.

In my case, it was: C:/Users/jared/AppData/Local/Packages/CanonicalGroupLimited.Ubuntu16.04onWindows_79rhkp1fndgsc/LocalState/rootfs/home/jared

Then I ran build as docker build -t buildname $WSLHOME"/path/to/dir" which worked!

Note: on Windows, you must explicitly run docker stop <container_id> to get the container to stop.

Part 3: Services

Next was services.

Following the walk-through, I created the docker-compose.yml file and ran docker swarm init,

then deployed the service with docker stack deploy -c $WSLHOME"/path/to/docker-compose.yml" <service_name>,

and then verified it with docker service ls and docker service ps <service_name>.
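For reference, the compose file from this part is roughly this shape (the image name is a placeholder for whatever you pushed; details may differ from the walk-through’s exact file), written out via a heredoc:

```shell
# Minimal docker-compose.yml in roughly the shape the walk-through
# uses; username/repo:tag is a placeholder for your own pushed image.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: username/repo:tag
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
    ports:
      - "4000:80"
    networks:
      - webnet
networks:
  webnet:
EOF
```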

The tear-down for this step was just docker stack rm <service_name> followed by docker swarm leave --force. Nice and easy; no issues in this step.

Part 4: Swarms

This is where most of the issues arose.

Creating a Cluster

I did what the walk-through said: Hyper-V Manager > Virtual Switch Manager > Create (External) Virtual Switch.

I then had network connectivity issues for about 15 minutes: I could ping Google, but nothing would load in my browser. So I swapped the switch to be an Internal switch instead, and my browser connectivity came back. I tried moving forward using the internal switch, but when it came time to connect the swarm workers, they couldn’t find each other.

In the end, I had to use the default switch which already existed for it all to work for me. I imagine this was something specific to already having a switch in place and that there would’ve been some way to get it working… but, again, it wasn’t really my priority for this exercise.

I then had to add an alias in ~/.bashrc for the docker-machine executable: alias docker-machine=docker-machine.exe (in retrospect, I suppose I also could’ve just added something to PATH instead; maybe try that first if you’re working on this right now!).

After that, I tried to create a VM with docker-machine: docker-machine create -d hyperv --hyperv-virtual-switch "myswitch" myvm1, but it gave me an error:

$ docker-machine create -d hyperv --hyperv-virtual-switch "myswitch" myvm1
Creating CA: C:\Users\jared\.docker\machine\certs\ca.pem
Creating client certificate: C:\Users\jared\.docker\machine\certs\cert.pem
Running pre-create checks...
Error with pre-create check: "Hyper-v commands have to be run as an Administrator"
You can further specify your shell with either 'cmd' or 'powershell' with the --shell flag.

As you can see, it was important to run hyper.js (or whatever terminal emulator you’re using) with admin privileges for docker-machine create.

Next was to initialize the swarm & nodes: docker-machine ssh myvm1 "docker swarm init --advertise-addr <vm_ip>"… aaand it didn’t work. (Yes, I was sure that I was using the right IP.)

I also tried the suggested alternative: eval $(docker-machine env myvm1). It also didn’t work. So I tried checking what the output of docker-machine env myvm1 even was…

Therein lay the problem. It was using SET (Windows-appropriate) to create environment variables rather than export (Ubuntu-appropriate).
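For contrast, here’s what the two flavours of docker-machine env output look like (values are illustrative, not from a real machine):

```shell
# cmd-flavoured output (what I was getting; shown as comments, since
# SET isn't a bash builtin):
#   SET DOCKER_TLS_VERIFY=1
#   SET DOCKER_HOST=tcp://172.17.180.71:2376
#   SET DOCKER_MACHINE_NAME=myvm1

# bash-flavoured output, which eval can actually apply:
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://172.17.180.71:2376"
export DOCKER_MACHINE_NAME="myvm1"
# i.e. `eval $(docker-machine env myvm1)` only works when the generated
# script matches the shell you're actually running in.
```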

I found that by passing --shell bash I could get it to use export, but even when I ran eval $(docker-machine env myvm1 --shell bash) I still wouldn’t be able to connect.

After some messing around, I found docker-machine env would work properly in Git Bash, so from this point I switched and the VM-related commands started working.

I scrapped the old machines and started fresh because things had gotten messy. Then I went ahead and set the swarm up:

  • docker-machine create -d hyperv --hyperv-virtual-switch "Default Switch" myvm1
  • docker-machine ssh myvm1 "docker swarm init --advertise-addr 172.17.180.71"
  • docker-machine create -d hyperv --hyperv-virtual-switch "Default Switch" myvm2
  • docker-machine ssh myvm2 "docker swarm join --token <token> <ip>:2377"
  • docker-machine ssh myvm1 "docker node ls"
  • docker-machine ssh myvm1
  • (create .yml file)
  • docker stack deploy -c docker-compose.yml getstartedlab
  • docker service ls
  • docker service ps getstartedlab_web

Deploy app to the swarm cluster

At this point, I was supposed to deploy to the stack using docker stack deploy -c path/to/file.yml. However, I couldn’t find any way to reference the file path I needed from within Git Bash. I expected there to be a /c or /mnt/c directory which would mount to the Windows filesystem. No such luck.

So since I needed a mount point, I added one. To do so, I had to add a line C: /c to the end of /etc/fstab in Git Bash. Then, after restarting the bash session, I was able to find a path through the Windows filesystem into the WSL filesystem.

I first added another variable for storing the location of the WSL home dir: export WSLHOME="C:/Users/jared/AppData/Local/Packages/CanonicalGroupLimited.Ubuntu16.04onWindows_79rhkp1fndgsc/LocalState/rootfs/home/jared" and once I had that I was able to deploy with docker stack deploy -c $WSLHOME"/path/to/file.yml" getstartedlab.

Which finally worked for deploying it!

From here on out, there weren’t really any more issues (thankfully)! I’ll still just summarize the remaining steps I took through the end of the walk-through.

Iterating and Scaling the App

Next was to change the app behaviour, rebuild the image, and push it.

Steps:

  • Modified app.py
  • docker build -t jnotelddim/get-started:part4 $WSLHOME"/dev/docker-container"
  • docker push jnotelddim/get-started:part4
  • update .yml file to reference :part4
  • Re-deploy image (same as other deployment step)

For cleanup I just removed the stack and unset the docker-machine variables as instructed in the walk-through.

Part 5: Stacks

Working with stacks worked smoothly once I’d had the rest of the configuration set up.

Add New Service

Adding a service just meant adding a section to the .yml file.

I then deployed it to the swarm, and verified that it worked via the visualizer in the browser.

Persist the Data

Persisting the app data just required adding the Redis service, i.e., a new section in the .yml file and a ./data directory on myvm1, followed by another re-deploy.
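That Redis section is roughly this shape (details mirror the walk-through as I remember it; the volume path assumes a /home/docker/data directory on the manager VM), appended to the compose file:

```shell
# Redis service section, roughly as in the walk-through: pinned to the
# manager node, with the VM's data dir mounted as Redis's /data dir.
cat >> docker-compose.yml <<'EOF'
  redis:
    image: redis
    ports:
      - "6379:6379"
    volumes:
      - "/home/docker/data:/data"
    deploy:
      placement:
        constraints: [node.role == manager]
    command: redis-server --appendonly yes
    networks:
      - webnet
EOF
```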

Conclusion

Overall, I absolutely overcomplicated the process and made it waaaayyyy harder for myself than I needed to… However, I’d also argue that I learned a lot more about how all the new tools I was using actually work. If I were to do it again, I think I’d do it just the same. Though if time permitted, it might also be nice to work out some of those kinks I didn’t have time to resolve.

Anyways, I hope this helps someone at some point.
