ASP.NET Core and MySQL with Docker — Part 3

Connecting the web service container to a MySQL container

This article is the final part of a series of 3 posts:

  1. Creating and containerizing the web API using an in-memory db
  2. Building the code inside a container and using docker compose
  3. Connecting the web service to a MySQL container (this)

The full code can be found here.

In the previous posts we created an ASP.NET Core Web API using an in-memory database. We then learnt how to build and run the solution with Docker and docker-compose. In this post we will finally create a MySQL container with persistent storage, and point the web API container at it.

Starting a MySQL container

The official MySQL image is available on Docker Hub. We will use the stable version 5.7. To start a simple instance of the container, run:

> docker run --name db -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql/mysql-server:5.7

We can connect to the container and run the mysql client using:

> docker exec -it db mysql -u root -p
Enter password: my-secret-pw
mysql> show databases;

MySQL environment variables

The MySQL image accepts a number of environment variables, all described on its Docker Hub page. Here we’ll explore some of them:

  1. MYSQL_ROOT_PASSWORD sets the root password for the MySQL server. The root user has access to all databases on the server, and can create/modify databases. Passing passwords on the command line is insecure, so this option should be avoided for non-toy applications.
  2. MYSQL_RANDOM_ROOT_PASSWORD asks the container to create a random root password for the server. You can read the generated password by looking at the container logs using docker logs <container-name>. We will be using this option in the future.
  3. MYSQL_ROOT_HOST specifies the IP address from which the root user may connect. By default, the container does not allow anyone to log into the server as root from outside the container. This prevents other containers (and the host) from connecting to the db using root credentials. You can use this variable to pass the IP of the container or host that should be allowed to connect to the server as root. E.g. to allow the host to connect, you would set -e MYSQL_ROOT_HOST="".
  4. MYSQL_DATABASE creates a database on the server on container startup.
  5. MYSQL_USER and MYSQL_PASSWORD create a new (non-root) user with superuser privileges over MYSQL_DATABASE. Note that unlike root, these credentials can be used to connect to the db from outside the container too.

We will not connect to the MySQL container (from our web service container) using root credentials, because in order to connect as root we would need to start the MySQL container with -e MYSQL_ROOT_HOST=<container_IP>. This creates an ugly dependency loop:

  • The MySQL container depends on the web service container for its IP address.
  • The web service container depends on the MySQL container to connect to the db.

This raises a problem: which container starts first? There are ways to make it work, but they involve ugly hacks. Instead, we will use the MYSQL_USER credentials to connect to our db container. This of course requires us to specify MYSQL_DATABASE too.

Persistent storage

A container encapsulates its own file system, so when the container is destroyed we lose all db data. To persist data we need to map a Docker-managed volume to /var/lib/mysql — the default location where the MySQL server stores db data. The MySQL container will create an anonymous volume if none is specified, but it is better to pass our own named volume.

Finally, let’s start a proper MySQL container to use with our web service.

# Stop and remove the container created previously.
> docker stop db; docker rm db;
# Create a db volume
> docker volume create dbvol
# Start the container
# Let us also forward the MySQL server port 3306 to the host so that the host can connect using MYSQL_USER credentials to the db from localhost:3306.
> docker run --name db -e MYSQL_RANDOM_ROOT_PASSWORD=yes -e MYSQL_DATABASE=Students -e MYSQL_USER=user_name_1 -e MYSQL_PASSWORD=my-secret-pw -v dbvol:/var/lib/mysql -p 3306:3306 -d mysql/mysql-server:5.7

Configuring the Web API

In order to connect to the MySQL db, we need to change our ASP.NET application code. First, we need to install an Entity Framework Core provider for MySQL. We will be using the Pomelo Foundation provider — at the time of writing this article, the official MySQL provider doesn’t work with ASP.NET Core 2.0.

# Install the EF provider
> dotnet add package Pomelo.EntityFrameworkCore.MySql --version 2.0.0

Creating EF migrations

To create the initial db, we need an EF migration for our DB schema. Creating migrations requires the Entity Framework dotnet CLI tools, which unfortunately can’t be installed directly from the command line.

Add the following ItemGroup to your .csproj:

<ItemGroup>
  <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools" Version="2.0.0" />
  <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="2.0.0" />
</ItemGroup>

Now run dotnet restore to install the required packages. To create a migration, we need to be connected to a db. Our web app is currently configured to connect to an in-memory db, so modify Startup.cs.
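The original post embeds the modified Startup.cs as a gist; here is a minimal sketch of the relevant changes, assuming the DbContext class is called StudentContext (the class and method layout are assumptions, not the exact code from the repo):

```csharp
// Startup.cs (sketch): swap the in-memory provider for Pomelo's MySQL provider.
public void ConfigureServices(IServiceCollection services)
{
    // Connection string for connecting from the host to the forwarded port.
    var connection = "Server=localhost;Port=3306;Database=Students;Uid=user_name_1;Pwd=my-secret-pw;";
    services.AddDbContext<StudentContext>(options => options.UseMySql(connection));
    services.AddMvc();
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env, StudentContext context)
{
    // Apply any pending EF migrations on startup so the connected db
    // always has the required tables.
    context.Database.Migrate();
    app.UseMvc();
}
```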

With the Pomelo provider we are connecting from the host to the db in the MySQL container. Notice that the Configure function now has a context.Database.Migrate() call. This ensures that when the application starts, the connected db has all the required tables and migrations applied.

Create a migration using dotnet ef migrations add Initial. This creates a migration named Initial; context.Database.Migrate will apply it when we run the app, creating the initial tables in the empty database.

Running the web service

Docker allows us to connect two containers using the --link option with docker run. Eg:

> docker run --name mybash --link db -it bash

This links the db container to the mybash container. When container A is linked to container B, all the environment variables declared in A are exported to B, all of A’s exposed ports are opened to B, and A’s IP address is exported under A’s container name. E.g. in mybash, run:

# All env vars from 'db' will be prefixed with 'db'
> env
# Ping container A
> ping db

Using this option, we can easily connect to the MySQL db from within our container. We do, however, need a small change in Startup.cs to make the MySQL connection work: the Server element of the connection string is currently localhost; to connect from within our container it needs to be the MySQL container’s name, i.e. db, because the MySQL server’s IP is exported under the name of the MySQL container.
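Assuming the connection string used earlier on the host, the only change is the Server element (a sketch):

```csharp
// Inside the container, reach MySQL via the linked container's name, not localhost.
var connection = "Server=db;Port=3306;Database=Students;Uid=user_name_1;Pwd=my-secret-pw;";
```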

Finally, let’s run the service:

# Build the image
> docker-compose -f .\Docker\docker-compose.yml build --no-cache
> docker-compose -f .\Docker\docker-compose.yml up builder
# Note we can't yet start our service using docker-compose since it doesn't link to the MySQL container yet.
# The compose up operation would have created a volume called docker_appbuild.
> docker run --name webapp.test -p 5000:7909 -v docker_appbuild:/app --link db -dit webapp:0.1.0

You can test the service using Postman, or the cURL commands mentioned in the 1st article. E.g.:

# Add a new department
> curl -X POST \
http://localhost:5000/api/Department/ \
-H 'cache-control: no-cache' \
-H 'content-type: application/json' \
-H 'postman-token: ce786dd0-5d17-bc94-81d3-8191203649cb' \
-d '{
"Name": "CSE"
}'

# Get a department by id
> curl -X GET \
http://localhost:5000/api/Department/1 \
-H 'cache-control: no-cache' \
-H 'content-type: application/json' \
-H 'postman-token: 9402340e-d28b-8ae7-4265-8d2ce3ca33aa'

Running with docker-compose

So far we have manually created the MySQL and service containers. We had to run multiple commands to create, link, and run each container. As if that wasn’t bad enough, if we change the credentials, database name, or container name of the MySQL server, we also need to change the connection string in the web app container.

Docker-compose can help us reduce the number of steps needed to spawn the containers. Using environment variables in the docker-compose.yml file we can create a single point of control for the db credentials, and the connection string.

Modify the existing compose file to add a new MySQL service, and link it to webapp.
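The compose file itself is embedded as a gist in the original post; a sketch of the relevant additions might look like the following (the service names and the webapp details are assumptions based on the containers created earlier):

```yaml
# Sketch of the additions to docker-compose.yml; details may differ from the repo.
version: '3'
services:
  mysql:
    image: mysql/mysql-server:5.7
    container_name: ${MYSQL_SERVER_NAME}   # substituted from .env
    environment:
      MYSQL_RANDOM_ROOT_PASSWORD: !!str yes
    env_file:
      - ./secrets.env                      # db credentials and database name
    volumes:
      - dbvol:/var/lib/mysql
  webapp:
    # ...existing build/image/ports configuration...
    environment:
      MYSQL_SERVER_NAME: ${MYSQL_SERVER_NAME}
    env_file:
      - ./secrets.env
    links:
      - mysql
volumes:
  dbvol:
```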

The MySQL service is pretty straightforward: we use the mysql-server image from Docker Hub and map persistent storage. What is new are the environment and env_file sections. These sections allow us to define environment variables inside our container — akin to using docker run with the -e option.

  • The environment tag takes a dictionary with each entry describing a <key>: <value> pair. Here we pass MYSQL_RANDOM_ROOT_PASSWORD as !!str yes. "yes" would also have worked, but a bare yes would not, as YAML would parse it as a Boolean value.
  • The env_file tag takes a list of file paths (relative to the compose file). These files are similar to the .env file introduced in the previous section. They contain a list of <key>=<value> pairs.

secrets.env will contain our db credentials and database name.

# Ideally, this should not be in the source control
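Using the values from the manual docker run earlier, secrets.env might contain:

```shell
# secrets.env: db credentials and database name
MYSQL_DATABASE=Students
MYSQL_USER=user_name_1
MYSQL_PASSWORD=my-secret-pw
```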

A note on environment variables again

Unlike the .env file, the environment and env_file tags do not substitute variables in the compose file. They are used solely to populate environment variables inside the container, so they can’t be used to substitute values in the build, image, and container_name tags.

That is why the variable MYSQL_SERVER_NAME is not defined in secrets.env; we need to add an entry for it in .env.

# MySQL vars in .env
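The .env file then only needs the container name. The value db below matches the container name used in the manual runs; the actual compose file in the repo may use a different name:

```shell
# .env: variables substituted into the compose file itself
MYSQL_SERVER_NAME=db
```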

Linking to web app

The webapp section of the compose file has also been modified to include secrets.env and pass in MYSQL_SERVER_NAME to denote the MySQL container name.

The link between webapp and mysql is described using the links tag. A link is an implicit dependency, so we don’t need to add mysql to the depends_on tag.

Note: The links and depends_on tags only ensure that the dependency container is started before the webapp container. They do not ensure that the service inside the container is ready: in this case, just because the db container has started doesn’t mean the MySQL server has finished initializing. It is our responsibility to wait for the db to be ready before connecting to it.

Running the service

Because we store all our db configuration — including container name, credentials, database name, etc. — in secrets.env and .env and pass it to the containers as environment variables, we don’t need to hard-code the connection string in the app.

We create the connection string by accessing the values of the environment variables we passed into our container.

To ensure that the MySQL container is ready before we connect, we use exponential back-off in the WaitForDBInit function.
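A sketch of what the connection-string construction and WaitForDBInit could look like; the exact code is in the repo, and everything here except the WaitForDBInit name and the environment variable names is an assumption:

```csharp
// Build the connection string from the environment variables that
// docker-compose passed into the container.
public static string GetConnectionString()
{
    var server = Environment.GetEnvironmentVariable("MYSQL_SERVER_NAME");
    var database = Environment.GetEnvironmentVariable("MYSQL_DATABASE");
    var user = Environment.GetEnvironmentVariable("MYSQL_USER");
    var password = Environment.GetEnvironmentVariable("MYSQL_PASSWORD");
    return $"Server={server};Database={database};Uid={user};Pwd={password};";
}

// Wait for the MySQL server to finish initializing, doubling the
// delay after every failed attempt (exponential back-off).
private static void WaitForDBInit(StudentContext context)
{
    var delay = TimeSpan.FromSeconds(2);
    const int maxRetries = 6;
    for (var retry = 0; retry < maxRetries; retry++)
    {
        try
        {
            // Opening a connection throws while the server is still initializing.
            context.Database.OpenConnection();
            context.Database.CloseConnection();
            return;
        }
        catch (Exception)
        {
            Thread.Sleep(delay);
            delay += delay;  // back-off: 2s, 4s, 8s, ...
        }
    }
    throw new InvalidOperationException("MySQL did not become ready in time.");
}
```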

We can finally run our service using

# Clean up
> docker stop webapp.test; docker stop db; docker container prune; docker volume prune;
# Build new images
> docker-compose -f .\Docker\docker-compose.yml build --no-cache
> docker-compose -f .\Docker\docker-compose.yml up -d

The MySQL container takes approximately a minute to finish initializing. The service will be ready when it is able to connect to the db. We can check the stdout of webapp using docker logs webapp.test.


We have successfully connected to a MySQL container from another container. But there is little point in using containers if we can’t use them in distributed settings. In the future, I hope to explore how to run and manage db clusters easily using container orchestration tools.