How to choose Docker container CPU properties for our Java service?

Kostiantyn Ivanov
5 min read · Sep 24, 2023


What we will learn

  • How to check the available number of CPUs for a Docker container
  • How to limit the number of CPUs for a specific container
  • What throttling is and how a low CPU limit can affect our application
  • Some hints about choosing CPU limits for containerised Java applications

How to check the available number of CPUs for a Docker container

Different OSs provide commands that return the number of available CPUs:
Linux:

nproc

Windows:

msinfo32

MacOS:

system_profiler SPHardwareDataType

All the commands above show the number of CPU cores available to the OS. Docker also has a command that shows this information:

docker info

It shows various information about the current Docker server, including this:

OSType: linux
Architecture: aarch64
CPUs: 5

This actually gives us a hint about which architecture we should use for our images and how many CPUs we can use.
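If you only need the CPU count, the docker info output can be filtered with a Go template (a small sketch; the NCPU field should be available on recent Docker versions):

# print only the number of CPUs available to the Docker engine
docker info --format '{{.NCPU}}'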

Another interesting point is that the number of CPUs reported by the OS command and by docker info may differ. We never faced this on Linux, but on Windows and macOS the root cause was usually the resource settings of Docker Desktop.

How to limit the number of CPUs for a specific container

NOTE: We will cover the options for docker-compose; the plain docker run command supports all of them as well (see the example after the flag descriptions below).

services:
  app:
    ...
    cpus: 4
    cpuset: "1,4,5,6"
    # cpu_shares: 2048
    ...

--cpus=<value>: Specify how much of the available CPU resources a container can use. For instance, if the host machine has two CPUs and you set --cpus="1.5", the container is guaranteed at most one and a half of the CPUs. This is the equivalent of setting --cpu-period="100000" and --cpu-quota="150000".

--cpuset-cpus (cpuset in docker-compose): Limit the specific CPUs or cores a container can use. A comma-separated list or hyphen-separated range of CPUs a container can use, if you have more than one CPU. The first CPU is numbered 0. A valid value might be 0-3 (to use the first, second, third, and fourth CPU) or 1,3 (to use the second and fourth CPU).

--cpu-shares: Set this flag to a value greater or less than the default of 1024 to increase or reduce the container’s weight, and give it access to a greater or lesser proportion of the host machine’s CPU cycles. This is only enforced when CPU cycles are constrained. When plenty of CPU cycles are available, all containers use as much CPU as they need. In that way, this is a soft limit. --cpu-shares does not prevent containers from being scheduled in swarm mode. It prioritizes container CPU resources for the available CPU cycles. It does not guarantee or reserve any specific CPU access.

In our practice we limit CPUs using the “cpus” and “cpuset” constraints, dividing this resource between the containers on the node. It works really well when you know how many containers are going to run on the host.
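For reference, a rough docker run equivalent of the compose snippet above could look like this (a sketch; my-java-app is just a placeholder image name):

docker run -d \
  --cpus=4 \
  --cpuset-cpus="1,4,5,6" \
  my-java-app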

What throttling is and how a low CPU limit can affect our application

So, we are already aware of how to limit CPU resources for our container, and we have a Java application that responds with acceptable latency.

Let’s try to save some electricity and set a strict CPU limit of 0.5 for this application’s container:

services:
  app:
    ...
    cpus: 0.5
    ...
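Once the container is recreated, we can double-check that the limit was really applied (a sketch; app is a placeholder container name, and a limit of 0.5 CPUs corresponds to 500000000 in the NanoCpus field):

docker inspect --format '{{.HostConfig.NanoCpus}}' app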

Let’s take a look at the bootstrap log of this application:

The first weird thing is that the bootstrap time increased dramatically.

Let’s try to run the same calls as above:

Our latency became a few times worse. But why?

Let’s take a look at the CPU load in our container using the docker stats command.
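A focused view of just the CPU percentage can be requested with a custom format (output omitted here; the exact numbers depend on your host):

docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}"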

We can notice that the full 50% of a CPU (which, according to our limit, is 100% of the available CPU) is constantly loaded. We are facing throttling.

CPU throttling is the intentional reduction of a computer processor’s performance to limit its power consumption, heat generation, or to manage system resources efficiently.
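Throttling can also be confirmed from the CPU controller statistics inside the container (a sketch assuming cgroup v2; on cgroup v1 the file is /sys/fs/cgroup/cpu/cpu.stat instead, and app is a placeholder container name):

# nr_throttled and throttled_usec growing over time mean the container is being throttled
docker exec app cat /sys/fs/cgroup/cpu.stat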

If we don’t want our constraints to affect latency, we have to choose them properly.

Hints about choosing CPU limits for containerised Java applications

We already know that very low CPU limits may affect our application’s latency, but what values should we choose (at least as a minimum)?

We would say that any Java service that uses a modern garbage collector needs at least two CPUs. Otherwise you will not benefit from parallel garbage collection, and the GC will compete for the same CPU as the main application code, so each garbage collection will affect latency. How much higher the CPU limit should go really depends on your application’s load. Let’s test whether 2 CPUs will be enough for our test application:

services:
  app:
    ...
    cpus: 2
    ...

Bootstrap time:

Calls:

Looks better, doesn’t it? =)
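As a final sanity check, it can be useful to verify how many processors the JVM itself detects under a given limit. A minimal sketch, assuming a JDK 11+ image such as eclipse-temurin:17 (the image name is only an example):

# the os+container log tag prints the detected CPU quota, shares and active processor count
docker run --rm --cpus=2 eclipse-temurin:17 java -Xlog:os+container=trace -version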

Summary

We explored CPU limits for Docker containers, and we hope it is now clearer why it can be a bad idea to leave your containers without limits or to apply limits that are too strict.

Links

test application sources with docker-compose file: https://github.com/sIvanovKonstantyn/frameworks-comparation/tree/javamem

The article from the same series about memory limitations: https://medium.com/@svosh2/how-to-choose-jvm-and-docker-container-properties-for-our-java-service-a04bb9e2c855
