Understanding Docker Container Memory Limit Behavior

Bao Nguyen
Nov 26, 2019 · 4 min read

Docker allows you to set limits on memory, CPU, and (more recently) GPU for your containers. It is not as simple as it sounds.

Let’s go ahead and try this:

docker run --rm --memory 50mb busybox free -m

The above command creates a container limited to 50mb of memory and runs free to report the available memory. If you run this on a Mac, free reports roughly 2gb of total memory rather than 50mb.

Why doesn’t it show the 50mb from the --memory parameter? Why does it show about 2gb, and where does that 2gb come from?

This is the first catch of container memory limits. The --memory parameter caps the container’s memory usage, and Docker will kill the container if it tries to use more than that. But inside the container, you still see the whole system’s available memory: free reports the host’s memory, not the allowed memory. The same is true for os.totalmem (Node.js) or psutil.virtual_memory (Python).

To see the allowed memory inside the container, you need to look at /sys/fs/cgroup/memory/memory.limit_in_bytes.
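As a minimal sketch, reading that file from Python might look like the helper below. The function name read_memory_limit is my own illustrative choice, and the path assumes cgroup v1 (on cgroup v2 hosts the equivalent file is /sys/fs/cgroup/memory.max):

```python
def read_memory_limit(path="/sys/fs/cgroup/memory/memory.limit_in_bytes"):
    """Return the cgroup v1 memory limit in bytes, or None if unlimited.

    A cgroup with no limit reports a huge sentinel value (close to
    2**63 on 64-bit kernels), so treat anything that large as "no limit".
    """
    with open(path) as f:
        limit = int(f.read().strip())
    if limit >= 2**62:  # effectively-unlimited sentinel
        return None
    return limit
```

Inside a container started with --memory 50mb, this should return 52428800 (50 × 1024 × 1024 bytes).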

This is important if your container contains logic based on available memory, such as sizing a buffer or a cache. As of right now, not many Docker images are memory-limit-aware, so you have to configure the application explicitly to tell it the container’s memory limit. For Redis, for example, you need to repeat the allowed memory in the maxmemory setting.
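A minimal sketch of that idea: derive the application’s memory setting from the container limit, leaving some headroom. Both the helper name redis_maxmemory_arg and the 80% headroom fraction are illustrative assumptions, not a Redis recommendation:

```python
def redis_maxmemory_arg(limit_bytes, fraction=0.8):
    """Build a redis-server maxmemory argument from a cgroup limit.

    Leaves (1 - fraction) of the limit as headroom for Redis overhead,
    so the process stays safely below the container's OOM threshold.
    """
    return f"--maxmemory {int(limit_bytes * fraction)}"
```

For a 50mb container limit (52428800 bytes), this yields "--maxmemory 41943040", which you would pass to redis-server at startup.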

So where does the 2gb come from? If you run Docker on a Mac, 2gb is the default memory allocation of the Docker Desktop VM. If you run on Linux, you would see the host’s available memory instead. You can read more about why free reports the wrong memory in the linked article.

Second question: if you set --memory 50m on your container, how much memory is the container actually allowed to use? Let’s try this:

docker run --memory 50m --rm -it progrium/stress --vm 1 --vm-bytes 62914560 --timeout 1s

The above command uses the stress utility to allocate 60mb of memory inside a container with only 50mb of memory. If you run this on a Mac, you will see the stress worker finish successfully instead of being killed.

It looks like the container actually uses more than 50mb. Shouldn’t it be killed for OOM?

This is the second catch. Depending on the configuration, the container may use more than the allowed memory. When you only set --memory=50mb, Docker by default allows the container an equal amount of swap, so it can use up to 100mb in total: 50mb of memory plus 50mb of swap. And swapping is super slow. The proper way is to set --memory-swap equal to --memory, as stated in the Docker documentation (--memory-swap is the total of memory plus swap, so equal values mean no swap at all). Run the below command to see the container killed due to OOM.

docker run --memory 50m --memory-swap 50m --rm -it progrium/stress --vm 1 --vm-bytes 62914560 --timeout 1s
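Per the Docker documentation, --memory-swap is the total of memory plus swap, and it defaults to twice --memory when unset. That rule can be sketched as a tiny model (effective_limits is an illustrative helper, not a Docker API; values are in mb):

```python
def effective_limits(memory_mb, memory_swap_mb=None):
    """Model Docker's --memory / --memory-swap semantics.

    --memory-swap is the TOTAL of memory plus swap. When it is unset,
    Docker defaults it to twice --memory, i.e. swap equal to memory.
    Returns (total_mb, swap_mb).
    """
    if memory_swap_mb is None:
        memory_swap_mb = 2 * memory_mb
    return memory_swap_mb, memory_swap_mb - memory_mb
```

So effective_limits(50) gives a 100mb total with 50mb of swap, while effective_limits(50, 50) gives a 50mb total with no swap, which is why the second command above is killed for OOM.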

If you use Kubernetes, the first catch applies there as well. Luckily, since Kubernetes has required swap to be disabled since 1.8, you don’t have to worry much about the second catch.

Knowing all of this is important because it helps you configure memory limits properly. I hope this post is helpful, and feel free to leave your questions in the comments 😄.
