Upgrading a Dev Container to .NET 5.0

Manfred Lange
Dec 22, 2020 · 17 min read

Changing the target framework and other dev environment improvements

Source: https://devblogs.microsoft.com/dotnet/introducing-net-5/

In this article:

  • Changing the .NET SDK to .NET 5.0
  • Adding a missing tool required by some VS Code extensions
  • Adding an entrypoint script to improve running as non-root
  • Installing dotnet-outdated-tool for keeping nuget packages up-to-date
  • Switching the target framework to .NET 5.0

Introduction

Previously, we looked at how to create a dev container for .NET Core 3.1. Since then .NET 5.0 has been released. Therefore, in this article we’ll look at how to upgrade the dev container to reflect those changes.

As always, we’ll again be using as a running example a fictitious product named “Mahi”. The word Mahi means “task” in Te Reo Māori, the language spoken by the native people of Aotearoa, the country also known as New Zealand.

Mahi will be a very simple task manager. We won’t have a commercially viable product at the end. But we will learn new concepts as we work through new features. Keep in mind, that the code base is not meant for production. You are welcome to use it as inspiration for your own work, commercial or otherwise. The responsibility is entirely yours, though, if you do.

The complete source code for this article is available at https://github.com/mahi-app/CmdLine in branch “article-2020-12-22”. This branch has the code base as of the end of this article.

For this article we’ll need the following:

  • VS Code
  • VS Code Extension Pack “Remote Development” (identifier: ms-vscode-remote.vscode-remote-extensionpack)
  • Git client
  • Docker Desktop (Windows, MacOS) or Docker Engine (Linux)

To get the code this article starts from, clone https://github.com/mahi-app/CmdLine and then switch to branch “article-2020-11-15”. This branch gives you the starting line for this article.

To set up the starting position for this article, follow these steps (the full command sequence is also shown after the list):

  1. Open a bash terminal. Windows only: if you are using Ubuntu as your distro, open an instance of the “Ubuntu App” which is effectively a bash terminal. Also see the note below regarding Windows with WSL2.
  2. Navigate to your home directory by executing command “cd ~”
  3. Create a new directory with command “mkdir projects”
  4. Switch to the new directory with “cd projects”
  5. Clone the repo with “git clone https://github.com/mahi-app/CmdLine”
  6. Switch to the correct branch with “git checkout article-2020-11-15”
  7. You are ready to start
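Put together, and assuming you keep the defaults above, the command sequence looks like this:

cd ~
mkdir projects
cd projects
git clone https://github.com/mahi-app/CmdLine
cd CmdLine
git checkout article-2020-11-15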

Note regarding Windows with WSL2 only: If you clone the repo on Windows, make sure you clone it into the Linux file system, i.e. the file system of your distro. Otherwise, VS Code and its extensions will start showing weird behaviors. This issue is caused by missing inotify messages for file changes. You can find out more about this issue in the article “Docker Desktop on WSL2: The Problem with Mixing File Systems”. If you use “cd ~” in the bash terminal, in general you will be fine. If you run “pwd” in the bash terminal and the output starts with something like “/mnt/c/Users”, you are in the wrong place, as that would be a mounted NTFS folder. Use “cd ~” to resolve this.

Improving the Dev Container

As a first step for improving the dev container, we’ll be upgrading the .NET SDK to version 5.0. This is relatively easy.

To get started, open a bash shell and switch to the local clone of the repository. For example, if you used the steps in “Prerequisites”, then the following command should bring you there:

cd ~/projects/CmdLine

Once there, execute

code .

This will launch VS Code. If prompted to re-open the directory in the dev container, accept the offer and do so. Wait until Docker is finished and VS Code has installed all extensions in the dev container.

Our first step is to confirm that we are on the correct branch. Execute the following command:

git branch

This should output “* article-2020-11-15” (note the asterisk). If this is not what you are seeing, then switch to the branch with the starting position using this command:

git checkout article-2020-11-15

Then confirm with “git branch” that the switch was successful.

If you like, you can create a new branch in your local clone and commit to it as you read this article. Be aware, though, that you won’t be able to push to the clone on GitHub.

Our next step is replacing .NET Core 3.1 with .NET 5.0. In fact, we will be even more specific: we are going to use the .NET SDK version 5.0.101. This is the latest stable release as of writing. Because we are using a dev container, this upgrade does not require installing the new framework version. Instead, we will just use a new base image for our dev container. The new base image has the SDK pre-installed.

Within VS Code open the file “dev/Dockerfile”. In it, replace the first line with the following code:

FROM mcr.microsoft.com/dotnet/sdk:5.0.101

With this, one would think, everything is fine. However, that is not the case as we will see shortly. To see what the problem is, we’ll rebuild the container by clicking on the green corner at the left end of the status bar:

Rebuilding the Dev Container in VS Code

Wait until the container has been re-built. This may take a while longer if the base container image needs to be downloaded. Note that we will be rebuilding the dev container a few times. You can always refer to this section in case you forget how to do that.

Once VS Code has rebuilt the container, open a terminal in VS Code and execute the following command:

ps

This will produce an error similar to the following:

Command ‘ps’ not found

While we may not need that command for development tasks, some VS Code extensions depend on it and don’t work correctly, or don’t work at all, if “ps” is not available. Therefore, let’s fix this problem, which is easy to do.

Again, open the Dockerfile and, just after the “FROM” directive, add the following two lines. Make sure to include the backslash at the end of the first of these two lines.

RUN apt-get update && \
apt-get install -y procps

The first few lines of the Dockerfile should now look as follows:

Adding missing “ps” command in Dockerfile
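For reference, the top of the Dockerfile should now read roughly like this:

FROM mcr.microsoft.com/dotnet/sdk:5.0.101

# "ps" (provided by the procps package) is required by some VS Code extensions:
RUN apt-get update && \
    apt-get install -y procps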

Again, click in the status bar in the bottom left corner and choose “Remote-Containers: Rebuild Container” once more. Wait until everything is finished and then try the “ps” command in a terminal window inside of VS Code to confirm this now works as expected:

Command “ps” successful

To confirm that we are in fact now using the .NET 5 SDK, run “dotnet --info” in the terminal window in VS Code. You should see output similar to the following:

dotnet --info for .NET 5.0

We have successfully changed the .NET SDK to .NET 5.0. Well done!

There are a couple more things that we should improve with our dev container. One of them is reviewing how we configure the non-root user for the dev container. Remember: for security reasons it’s a best practice to use the dev container as a non-root user only. This ensures that processes and commands running inside of the dev container do not have root permissions. More importantly, this also applies to all mounted directories (which in turn may allow access to even more commands, directories and files). In fact, the non-root user won’t have sudo available either, which plugs that potential hole as well.

In Linux, each user is a member of a group. Both the user and the group have a number by which they are identified, for example 1000. Give this a try by running the command “id” in a terminal window in VS Code:

Running “id” in the dev container
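The output should look roughly like this (the exact ids may differ):

uid=1000(mahi) gid=1000(mahi) groups=1000(mahi)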

This command gives you information about the current user, i.e. “mahi” in this case. The id for the group and the user are both 1000 in this case. We didn’t control this. This number just happened to be the next one to use when we added the user using “useradd” in the Dockerfile. Also, the value for “groups” lists the groups the user is a member of. In this case this is just “mahi” (but not “sudo”).

We want to be specific about the user and group id. For this to happen, replace the line

RUN useradd -m -s $(which bash) mahi

with the following two lines:

RUN groupadd -g 1001 -r mahi && \
useradd -u 1001 -r -g mahi -m -s $(which bash) mahi

In this instance we specify the user id and the group id as the number 1001. The first few lines of the Dockerfile should now look as follows:
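As a rough sketch, the first few lines would now read:

FROM mcr.microsoft.com/dotnet/sdk:5.0.101

# "ps" (provided by the procps package) is required by some VS Code extensions:
RUN apt-get update && \
    apt-get install -y procps

# Create non-root user "mahi" with explicit group and user id:
RUN groupadd -g 1001 -r mahi && \
    useradd -u 1001 -r -g mahi -m -s $(which bash) mahi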

Rebuild the container for this change to take effect, then check the output for “OmniSharp Log”. You should find an error similar to the following:

Issues with Permissions

This particular issue is related to the user VS Code is using to connect with the remote vscode-server. In this case VS Code is using the user and group id of the user that launched VS Code. Let’s check that out.

In the bash terminal that we used to start VS Code with command “code .” execute the command “id”. In my case the output looks as follows:

Output of “id” on the host (or in WSL2)

As you can see, the user id (“uid”) is 1000. Therefore, if VS Code uses this user to attach to the dev container, then “access denied” errors are not really a surprise. Let’s be specific about what user VS Code connects as. Reopen the folder locally (Windows only: re-open in WSL if you are using WSL).

Open the file “.devcontainer/devcontainer.json” in VS Code and add the following line

"remoteUser": "mahi",

to the configuration as follows:

Specifying the user for the dev container
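As a sketch, “.devcontainer/devcontainer.json” would then contain something like the following. Only “remoteUser” is the line added in this step; the other property values shown here are assumptions and will differ in your repo:

{
  // Assumed values for illustration only:
  "dockerComposeFile": "../dev/docker-compose.yml",
  "service": "dev",
  "workspaceFolder": "/app",

  // The line added in this step:
  "remoteUser": "mahi"
}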

Then reopen the folder in the dev container. When we check the “OmniSharp Log” we notice that the error is still present. So, this was not enough just yet. We need one more modification: we need to ensure that the user we connect as owns the directory with the clone.

Reopen the folder locally again (Windows only: re-open in WSL if using WSL). Next, we’ll move changing the file ownership using “chown” from the Dockerfile to a script named “entrypoint.sh”. Also, we’ll remove the “USER” directive from the Dockerfile.

The reason for moving “chown” has to do with the fact that the container is first built and started using the Dockerfile and docker-compose. Only then is mounting the volume complete, i.e., mounting of the clone into the dev container. We can “chown” the directory and its contents only after the mounting has completed. Therefore, executing “chown” in the Dockerfile is too early in the process. By moving it to a later stage we’ll make this more robust.

Add the following lines

# Create working directory. Ownership will be changed in entrypoint.sh which
# executes *after* the volume has been mounted.
RUN mkdir /app
# Copy entrypoint script into container, make it executable, then execute it:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

to the Dockerfile:

What we do here is this: In line 16 we create a directory “/app”. This will become the mount point for the repository clone. Then in line 19 we copy a file named entrypoint.sh into /entrypoint.sh. This is at the root of the file system so anything we do with “/app” won’t interfere.

In line 20 we change the permissions with “chmod” to make the script “/entrypoint.sh” executable. Note that at this point of the Dockerfile, commands are still executed as the root user. The “ENTRYPOINT” directive in line 21 tells Docker to execute “/entrypoint.sh” once the container is up and running.

Next, we’ll create the script “/entrypoint.sh”. We’ll create this file at “dev/entrypoint.sh”. It has the following content:

#!/bin/sh

# Change ownership of all directories and files in the mounted volume, i.e.
# what has been mapped from the host:
chown -R mahi:mahi /app

# Finally invoke what has been specified as CMD in Dockerfile or command in docker-compose:
"$@"

Or in VS Code:

There is not much content in this file. In line 5 all we do is change the ownership of the directory “/app” and all of its content to “mahi:mahi” (the first being the user name, the second the group name). In line 8 we just execute whatever has been specified with the CMD directive in the Dockerfile or as the value for the “command” parameter in the docker-compose file.

Let’s revisit the reason why we moved the chown command from the Dockerfile to the entrypoint.sh script. As mentioned before, the Dockerfile is used to create the container image. Then docker-compose.yml tells Docker to mount the host directory at “/app”. Only once this has completed can we reliably chown the directory “/app”.

Another option would be to use the “USER” directive in the Dockerfile to switch to user “mahi”. However, if we did, we couldn’t execute “chown” in the script “/entrypoint.sh” as “chown” requires root privileges. User “mahi” does not have those privileges by design.

So, to clarify the chronological order of events, here are the main steps as they are executed when the dev container is started:

  1. If the container image doesn’t exist just yet, the instructions in Dockerfile are executed as “root” to create the docker image.
  2. When we bring up the dev container, the content of “docker-compose.yml” is used to orchestrate the containers we want, two in this case. This includes mounting the parent directory on the host, i.e. the root of the repository clone, at “/app” inside the dev container (see lines 8 and 9 in the docker-compose.yml file; a sketch of the relevant part follows this list). This part is still executed as root.
  3. Still running as root, and once the container is up and running, Docker executes the script specified by the “ENTRYPOINT” directive in the Dockerfile. This script changes the ownership of the directory “/app” inside the dev container to “mahi:mahi”.
  4. Next, the “command” specified in docker-compose.yml is executed. We still run as root.
  5. Once all of the preceding steps are complete, VS Code considers the container to be up and running. At that point VS Code connects as user “mahi” (configured via “remoteUser”, see above), installs “vscode-server” and then all other extensions that may be configured for the dev container. “vscode-server” and everything it spawns inside the dev container runs as “mahi”, including any terminals that may be open, just as we wanted.
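For orientation, here is a rough, incomplete sketch of what the volume mount mentioned in step 2 corresponds to in “dev/docker-compose.yml”. The service name, the build settings and the “sleep infinity” command are assumptions, the second container is omitted, and line numbers will differ from the ones referenced above:

version: "3"
services:
  dev:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      # Mount the repository root (the parent directory on the host) at /app:
      - ..:/app
    # Keep the container running so that VS Code can attach to it:
    command: sleep infinity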

In a VS Code terminal you can experiment in the dev container. Try using “sudo” or accessing directories or files that are owned by “root”. You will find that user “mahi” has no root privileges.

Choosing a Better User Id and Group Id

As described above, we used the number 1001 for the user id and for the group id for non-root user “mahi”. There is one problem, though, with this. To see that problem, go to the bash terminal from which we started VS Code and run the following commands:

cd ~/projects/CmdLine
ls -lart

This should give you an output similar to the following:

Notice how the group is listed as 1001 and the user as “docker”. If we were to edit any of the files listed, or any files in those directories, we would notice that we don’t have permission to do so. The reason is that our own user id and group id are different. Execute the command “id” in this bash terminal. This should give you output similar to the following (your user name will be different):

Notice that the user id (“uid”) and the group id (“gid”) are 1000 (the number may be different in your case). Because this number is different to the number 1001 that we assigned to user “mahi” and to group “mahi”, Linux will prevent us from editing these files.

We could fix this by changing the ownership back to our own user with “sudo chown” (see the sketch below). However, we wouldn’t want to run this all the time. Also, as soon as we re-open the folder in the dev container, ownership will be changed back to 1001. There is a better option: we will change mahi’s user id and group id to match the id of the user we run as on Linux outside of the container.
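A one-off fix would look something like this, executed on the host in the root of the clone:

sudo chown -R $(id -u):$(id -g) .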

In my case, it’s the user “manfred” and the group “manfred” and both have the number 1000. Let’s use these numbers in the Dockerfile when we add the group and the user:
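In other words, the two lines in the Dockerfile now become:

RUN groupadd -g 1000 -r mahi && \
    useradd -u 1000 -r -g mahi -m -s $(which bash) mahi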

If in your environment the command “id” shows a different number for the group and/or the user id on the host, then use those numbers instead.

With this change in place, rebuild the dev container once more. Then, in the bash terminal on the host (the terminal from where you launched VS Code), run “ls -lart” which gives you something similar to:

Running the same in the dev container gives us the following:

Even though we see different user and group names, the underlying ids for user and group are the same, as we have mounted the host directory into the dev container. We wouldn’t need the “chown” command in this case. However, the “chown” command in script “/entrypoint.sh” ensures that we have sufficient permissions in the dev container. If you want to confirm that the ids are the same, just run the “id” command both in the dev container and in the host’s bash terminal.

Note that in the last picture the parent directory (“..”) is still owned by root. This is exactly what we wanted. We use the dev container as user “mahi” and have permissions only in the directory “/app” (and mahi’s home directory) but nowhere else. This means if any malicious code should manage to execute within the dev container, it will be limited to what is accessible inside of “/app” of the dev container and to what is available in the home directory (“~”). We haven’t mounted anything elsewhere, so the home directory is limited to the dev container as well.

Furthermore, sudo is not available in the dev container, and even if it were, the non-root user is not a member of the sudo group. And even if all of that were available, you’d still have to manually enter a password to execute sudo.

Now that we have strengthened the security of our dev container by using it as a non-root user, let’s look at another improvement that helps us maintain the NuGet packages we use in our project.

Keeping NuGet Packages Up-To-Date

Often it’s a bit tedious to keep track of all the NuGet packages that we use across a solution, which in turn may consist of several if not dozens of projects. To make this a bit simpler, it’d be nice to run something like “dotnet outdated” to create a list of the NuGet packages we use that are not yet on the latest stable release.

Such a tool exists. It is called “dotnet-outdated”. It is open source and its installation is quite simple. We want to do better than that, though. We want to automatically install the tool when we build the container image, so we don’t have to worry about it anymore.

Once you’ve opened the repo in the dev container, open the file “dev/Dockerfile”. Then add the following lines at the end of the file:

# Install dotnet-outdated (see
# https://github.com/dotnet-outdated/dotnet-outdated)
RUN runuser -l mahi -c "dotnet tool install --global dotnet-outdated-tool"
# runuser installs it as if the non-root user was installing it.
# This makes it available to that non-root user.
ENV PATH "$PATH:/home/mahi/.dotnet/tools"

The file should now look as follows:

In line 24 we execute the command as user “mahi” to make sure that user has access to it. In line 27 we add the location of the tool to the environment variable “PATH”. This allows running the tool from any directory.

Again, rebuild the container. Then go to directory “/app/src/CmdLine” and run the following command:

dotnet outdated

In my case this gave me the following output:

You can see that the tool found one NuGet package that is not on the most recent stable version. The patch level for FluentMigrator was at “.9” while the most recent stable release was “.10”. dotnet-outdated will use different colors depending on whether there is a new major, minor or patch level release.

To update the outdated NuGet package, we can rerun the command with the “-u” option. Here is the output from my environment:
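For reference, the full update command is simply:

dotnet outdated -u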

This tool has now become one of the default tools I add to the dev containers I use. Running this tool could even become an automated task in a build pipeline, obviously followed by running a comprehensive suite of automated tests. Perhaps a topic for a future article.

Changing the Target Framework to .NET 5.0

We have changed the base image for the dev container from .NET Core 3.1 to .NET 5.0.101 which is the latest as of writing. However, the project is still targeting netcoreapp3.1. As the final item in this article let’s see if we can target .NET 5.0 without too many changes.

Open the file “/app/src/CmdLine/CmdLine.csproj”. Then change the target framework from “netcoreapp3.1” to “net5.0”. The content should then look something like this:
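As a rough sketch, “CmdLine.csproj” would then contain something like the following (the “OutputType” shown here is an assumption for a console app, and any package references in your file are omitted):

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net5.0</TargetFramework>
  </PropertyGroup>

</Project>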

Now switch to directory “/app/src/CmdLine” in the dev container and execute the command

dotnet build 

The output should be something like the following:

Note how the output directory path now contains “net5.0”. Previously it was “netcoreapp3.1”. At least the compilation was successful. We can’t do much more at this point as we don’t have tests yet. These will be added in one of the next articles.

Summary

In this article we improved the setup of a non-root user for the dev container to follow best security practice. It’s a one-off, so once the change has been applied, we won’t have to think about it for the time being.

We also switched the dev container from .NET Core 3.1 to .NET 5.0 by essentially replacing the base image in the Dockerfile. The new base image comes with the .NET 5.0 SDK pre-installed. We added installation of “ps” (via the procps package) to ensure that VS Code extensions that depend on it work correctly.

Next, we changed the target framework for our project from “netcoreapp3.1” to “net5.0”, a single-line change in our case as well. If there were any breaking changes, none of them affected our source code at this point.

And finally, we added dotnet-outdated, which will make it easier to keep the NuGet packages we reference on the most recent stable version.

In the next few articles, we’ll return our focus to expanding the functionality of the Mahi-app project.

Thank you for reading! If you have questions or suggestions, please make use of the comments below.

References and Additional Material

The following references and suggestions for additional material might be helpful if you’d like to explore the topics in this article further:
