Dockerized SQL Server for Development

Manfred Lange
Nov 1, 2020 · 13 min read

How to quickly spin up and use a SQL Server instance cross-platform.

Compose file with database container

In this article:

  • Containerizing the Database Server
  • Extending the dev container with a database server container
  • Automatically starting the database server
  • Automatically creating the database
  • Connecting to the database from the host


In my previous article I described how to create a full-time development container for .NET Core 3.1 with VS Code. The key point is that the development container (or "dev container" for short) does not run only for specific tasks such as compiling or debugging. The idea is to keep the dev container running all the time.

In that configuration VS Code’s frontend is running on the host while VS Code’s backend — VS Code Server — is magically installed and running in the dev container. The result is that we, as developers, work seamlessly within the dev container. This includes VS Code extensions, for which we can choose whether we want them to be automatically installed in the dev container.

In that article we got to the point where we ran the command “dotnet --info” to display information about the .NET Core runtimes and SDKs available in the container. In this article we will extend our container configuration with a container for the database server and a script that automatically creates a database for us.

In this article we’ll be using as a running example a fictitious product named “Mahi”. The word Mahi means “task” in Te Reo Māori, the language spoken by the native people of Aotearoa, the country also known as New Zealand.

Mahi is a very simple task manager. We won’t have a commercially viable product at the end, but we will learn new concepts as we work through new features. Keep in mind that the code base is not meant for production. You are welcome to use it for your own work, commercial or otherwise. The responsibility is entirely yours, though.

The complete source code for this article is available at

The list of prerequisites is quite short:

  • Current version of a git client
  • Recent stable version of VS Code
  • Docker Desktop (Windows, MacOS) or Docker Engine (Linux)
  • VS Code Extension pack “Remote Development” (identifier: ms-vscode-remote.vscode-remote-extensionpack)

Either follow the instructions in the previous article or grab the starting code from

Container Configuration

In my line of work, I am privileged to get first-hand insights into the development environments of a lot of different companies, large and small, and across a wide range of industries. In some cases, I observe developers using remote databases, e.g. on a server or in the cloud. In other cases, I observe that the database server instance is installed on the developer’s computer. While all of these setups work to some degree, they also have disadvantages:

  • Remote database server instances or remote databases may limit what a developer is allowed to do. At times the IT department needs to get involved, negatively impacting the developer’s productivity. In some really bad cases, the database server or a database is shared between developers.
  • Locally installed database servers give the developer more autonomy. However, the disadvantage is that with patches, updates, service packs, versions, instance names, and possibly different host operating systems it’s easy to introduce differences between any two development computers. While most of the time this may not be an issue, any development environment benefits from minimizing such variations as much as possible.

Containers offer an alternative approach that avoids these disadvantages by using pre-built container images. All developers using the same Docker image will have a database server instance that is configured identically. Each developer has full control of that instance. And furthermore, at any time, the database container can be removed and a new one created. In other words: you can always start from scratch at a well-known stable position. Furthermore, the developer can spin up additional server instances if and when needed.

Note that in this description, I didn’t use the term “SQL Server” as this applies to a range of database servers. SQL Server is just one of them. Other images are available with pre-installed database servers such as Postgres, MySQL, MariaDB and many more. Just head over to Docker Hub and search for your favorite database server.

As I work with many different clients, I try to avoid installing software on my computer that may be used by one client only. Instead I use containers wherever possible, also to avoid conflicts between different versions, patch levels, etc.

For Mahi we’ll be using SQL Server. With small adjustments we could use a different database server just as well.

In the previous article (see link above) we used a docker-compose file to configure the dev container. Docker-compose files, or “compose files” for short, are intended to be used for configuring a group of containers. In that other article we had just one such container. This time we will add a database running in a second container. For that we change the content of “dev/docker-compose.yml” as follows:

Source code for this file available as Gist
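The gist itself isn’t embedded here, but based on the line references in the following paragraphs the compose file looks roughly like this. Treat it as a sketch: the exact image tag, build context and volume mount are assumptions, while the service names, environment variables and port mapping are taken from the article’s description.

```yaml
version: '3.7'

services:
  cmdline-dev:
    build:                 # build instructions for the dev container
      context: ..
      dockerfile: dev/Dockerfile
    volumes:
      - ..:/app            # source tree mounted at /app inside the container
    command: sleep infinity

  database:
    image: mcr.microsoft.com/mssql/server:2019-latest  # pre-built SQL Server image
    environment:
      ACCEPT_EULA: "Y"             # required: accept the SQL Server EULA
      SA_PASSWORD: "PassWord42"    # SA password; must meet SQL Server's password rules
    ports:
      - "7610:1433"                # host port 7610 mapped to container port 1433
```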

In Docker terms we are adding a second “service”. This service is referred to as “database” in this docker compose file. Each service will get its own container. Here we have two services, namely “cmdline-dev” (see line 4) and “database” (see line 12).

In line 13 we specify the image we want to use for the database container. Microsoft is kind enough to provide images with SQL Server pre-installed, which is a big time-saver. Also, we don’t want to become experts in installing or managing a SQL Server instance. We just want to use it. And since we are using a pre-built image, we don’t need the “build” instructions that we used for the dev container (see lines 5 and 6).

Lines 14 to 16 specify a couple of environment variables required to run up the database container. SQL Server needs acceptance of its end-user license agreement (EULA) and a password for the Server Administrator (SA) account of the server instance. Which password we use isn’t critical here as long as it follows the default rules for SQL Server passwords: a minimum length of eight characters, one or more lower case letters, one or more upper case letters and at least one digit. If your password doesn’t meet those requirements, the container won’t be built correctly or won’t start at all.

More interesting are lines 17 and 18. SQL Server expects incoming requests at network port 1433 and will listen on that port for any connection attempts. This would be a problem if you were to start multiple instances of the container or if you already had an instance of SQL Server installed on your host (Windows or Linux): each network port can be used by one process only. Not coincidentally, Docker offers a solution for exactly that. We’ll look into it in the next section.

Using a compose file essentially defines one or more containers that together form what Docker calls a “project”. This activity is also referred to as “orchestration”. It is possible to use more than one compose file, and it is also possible to orchestrate a group of Docker containers with other tools. One tool providing orchestration is Kubernetes. We won’t go into more detail in this article, though.

To avoid network port conflicts Docker offers the mapping of network ports. A port number available on the host is mapped to a port number as seen from inside the container. In line 18 of the docker compose file above, network port 7610 is mapped to network port 1433. This means that a request made to localhost:7610 from outside Docker will be received by the container as if the request was made to localhost:1433. The process inside the container and listening at port 1433 is oblivious of this. Equally a client outside Docker sending the request is unaffected by this as well. Docker handles this mapping transparently.

In practical terms this means that the SQL Server instance inside of the container listens at port 1433 for all incoming requests. A client, e.g. SQL Server Management Studio (SSMS), would connect at port 7610, i.e. the one that Docker exposes for the container to the outside world. The following screenshot shows the connection dialog of SSMS running outside Docker:

Note the syntax for entering the network port number: hostname and port number separated by a comma, e.g. “localhost,7610”.

Still, we cannot connect just yet. Although we have updated the compose file “dev/docker-compose.yml”, we haven’t rebuilt the containers yet. The container with the database server is not running yet.

Before we move on, though, one more comment on port numbers. Which port number should we use? How do we know whether a specific port number is available? For ports that are currently in use, the operating system of the host offers some way of finding out. However, the challenge is that there is no way to find out which port number may be used by any program that is currently not running (please leave a comment in case you know of a way of how to do that).

I have used the following heuristic that worked quite well so far. I simply calculate the number of days since 01 Jan 2000. That number becomes the port number. That’s how I came up with port number 7610 because when I started writing this article, 7610 days had passed since 01 January 2000. I’m sure there are other options, so use whichever works best for you. If a port is in use, you’ll get an error telling you so. In that case just try another port.
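The heuristic is easy to reproduce. Here is a quick check in Python (not part of the article’s code base, just an illustration of the arithmetic):

```python
from datetime import date

def port_from_date(today: date) -> int:
    """Port heuristic: number of days elapsed since 01 Jan 2000."""
    return (today - date(2000, 1, 1)).days

# Around the article's publication date this yields the port used in the article.
print(port_from_date(date(2020, 11, 1)))  # → 7610
```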

With the updated file “dev/docker-compose.yml” we can now rebuild our containers, or in Docker terms the “project” which now has two “services”, i.e. containers. Rebuilding is easy. In VS Code we open the command palette (On Windows: Ctrl+Shift+P) and type “remote-containers” in the search textbox. Then we select “Remote-Containers: Rebuild Container”:

All going well, this should then start up not only the dev container but also the container with the database instance. In VS Code we can use the Docker tab to see our group of containers, grouped under “dev”:

From this view, however, we cannot see the port mapping. Therefore, we open a terminal on the host and then type the command

docker ps

which should then display something similar to the following (you may need to double click to zoom into the image):

In this screenshot, the first two entries are our containers. You should have those two entries as well. The container IDs will be different. The very first entry is our database container. We can see how port 7610 is mapped to 1433. We can also see that for the dev container we do not expose any ports at the moment. The entry for “PORTS” is empty.

There is another entry that uses Microsoft’s SQL Server image. In other words, the screenshot is showing two instances of SQL Server running. Both listen at port 1433 — inside their respective container. However, one instance is exposing port 7610 to the outside (that is our container) and the other is exposing port 7605. The latter instance is from a project I started a few days earlier, hence the lower port number.

All of that looks fine. But there is a final piece missing: We have a SQL Server instance but at this point we don’t have a database yet. Automatically creating the database is our next task.

We need to decide when and how to create the database. In practice, in most cloud deployments the database itself is typically taken care of by the devops team. Databases are usually considered to be part of the cloud infrastructure. So, we need a solution that works in our development environment without causing any issues elsewhere. And we prefer a solution that automatically creates a database when we open the dev container in VS Code and the database doesn’t exist yet.

The solution that I typically choose is an initialization script that is executed when the dev container is built. For that script we create a new file at “dev/init.ps1” and give it the following content:

File content available on Github
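The file itself isn’t embedded here, but from the description that follows, the script is along these lines. This is a sketch only: the module name “SqlServer”, the cmdlet parameters, the database name “Mahi” and the retry timing are assumptions inferred from the surrounding text.

```powershell
# dev/init.ps1 (sketch; names and timings are assumptions)
# Install a SQL Server CLI in the form of a PowerShell module, without prompting:
Install-Module -Name SqlServer -Confirm:$False -Force

# Retry loop: the database server needs some time to become operational
# after the container is started. Try for up to one minute.
$maxAttempts = 12
$attempt = 0
$done = $False
while (-not $done -and $attempt -lt $maxAttempts) {
    $attempt++
    try {
        # "database" is the service name from the compose file; port 1433
        # because we are connecting inside Docker's virtual network.
        Invoke-Sqlcmd -ServerInstance "database,1433" `
            -Username "sa" -Password "PassWord42" `
            -Query "IF DB_ID('Mahi') IS NULL CREATE DATABASE Mahi" `
            -ErrorAction Stop
        $done = $True
        Write-Host "Database is ready."
    }
    catch {
        Write-Host "Database server not ready yet. Retrying in 5 seconds..."
        Start-Sleep -Seconds 5
    }
}
```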

In this case we are using PowerShell. PowerShell Core is built on .NET Core, which is pre-installed in the dev container, so it is available out of the box. This makes it easy to install a SQL Server command line interface (CLI) in the form of a PowerShell module, see line 5. We don’t want to be prompted during execution, so we specify “-Confirm:$False” and also “-Force”.

Lines 10 to 27 implement a retry loop. The database server needs some time to become operational after the container is started. In my environment I observe that it takes somewhere between 2 and 6 retries. It may be different in your environment as many factors can influence the speed. As is, the script tries creating a database for up to one minute.

The actual work is done in line 16 where we create the database. Note that we use port 1433 because we are inside of Docker. This script is not trying to connect from the outside world, e.g. from the host, to the database container. Therefore, we don’t use port 7610. The correct port in this instance is 1433. This relates to the concept of virtual networks that Docker creates. We won’t cover this topic in this article.

Also, the script is referring to the database container as “database” rather than “localhost”. This is the name we chose in the compose file for that service. Docker makes this service available at hostname “database” to all containers that are part of the same Docker “project”. When this script is executed, the dev container can contact the database server using the hostname “database”. From a dev container perspective, “localhost” would be the dev container itself.

There is one other aspect we need to consider. The database container must be up and running by the time the initialization script is executed. The database container is a dependency of the dev container. To reflect this dependency and to add execution of the init script, we change the content of “dev/docker-compose.yml” as follows:

File content available on Github
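Again as a sketch of what the updated file likely contains, the relevant changes affect only the “cmdline-dev” service: a “depends_on” entry and a new “command” that runs the init script before going to sleep. The build context and volume mount are assumptions; service “database” stays unchanged.

```yaml
services:
  cmdline-dev:
    build:
      context: ..
      dockerfile: dev/Dockerfile
    volumes:
      - ..:/app
    depends_on:          # start the database container before this one
      - database
    command: >           # folded scalar: the following lines form one command
      bash -c
      "pwsh /app/dev/init.ps1 &&
      sleep infinity"
```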

In lines 10 and 11 we tell Docker that the container (aka service) “cmdline-dev” depends on “database”. It is important to remember that the name “database” in line 11 has to match a service name listed elsewhere.

Lines 15 to 21 list service “database” and instructions for how to create the container for that service. We covered the service “database” earlier in this article.

In line 12 we use YAML’s folded style indicator “>” so that the subsequent indented lines are treated as if they were part of line 12.

In line 13 we execute command

bash -c

which instructs bash to execute a command. The command we want bash to execute consists of two parts. The first part is

pwsh /app/dev/init.ps1

and the second part is

sleep infinity

which we already had.

The syntax using “&&” instructs bash to execute the two parts sequentially. As before, we still need “sleep infinity” to ensure that the dev container keeps running. See the previous article for more details.
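A quick illustration of that behavior in any terminal:

```shell
# '&&' runs the second command only if the first one succeeded:
bash -c "echo 'init done' && echo 'container keeps running'"
```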

Now we are good to go, and we can again rebuild the container using the palette command “Remote-Containers: Rebuild Container”.

Accessing the Database

After we have rebuilt the container, we can access the database container with our favorite tool to see if the script was executed successfully. We should be able to see the newly created database in the dockerized SQL Server.

Tools to connect include SSMS, DBeaver, RazorSQL and more. Whatever your preference, you should be able to connect from your development computer (the one hosting Docker) with the following parameters:

  • Server: localhost
  • Port: 7610
  • User: sa
  • Password: PassWord42

Then you should be able to see something similar to the following (RazorSQL in this case):

Or with SSMS it should look something like this:
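For connecting from code rather than a GUI tool, the same parameters translate into an ADO.NET-style connection string. The database name “Mahi” is an assumption here; use whatever name your init script created.

```
Server=localhost,7610;Database=Mahi;User Id=sa;Password=PassWord42;
```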

Obviously, this approach works not only for SQL Server. So, if your preference is Postgres or something else, perhaps experiment with that, too.

Equally, if you don’t want to use a dev container, you can still benefit from using a dockerized database server, SQL Server or otherwise. Running up a new instance is easy. Trying out different database servers is easy. Running multiple instances in parallel is easy (if ever needed).

And beyond that you could also consider pre-populated databases in a docker container. Then, you could store that container in a file to be restored where and when needed, e.g. for select tests.

All of these topics and many more are beyond the scope of this article. They are intended to create enough food for thought and to encourage exploring more of what you can do with dockerized database servers.

Thank you for reading!

Suggested next article: “Easy Schema Migrations in .NET Core”



Manfred Lange

Written by

I’m a Principal Consultant at boutique firm HYPR Innovation in New Zealand. Currently, my main focus is helping clients to build scalable SaaS products.

