Unthrottled: How a Valid Fix Becomes a Regression

Dave Chiluk
Dec 19, 2019 · 7 min read

This post is the second in a two-part series.

In a previous post, I outlined how we recognized a major throttling issue involving CFS-Cgroup bandwidth control. To uncover the problem, we created a reproducer and used git bisect to identify the causal commit. But that commit appeared completely valid, which added even more complications. In this post, I’ll explain how we uncovered the root of the throttling problem and how we solved it.

A busy highway at night (photo by Jake Givens on Unsplash)

Scheduling on multiple CPUs with many threads

For each cgroup, the kernel scheduler keeps a global quota bucket in cfs_bandwidth->quota. It allocates slices of this quota to each core (cfs_rq->runtime_remaining) on an as-needed basis. The slice size defaults to 5ms, but it can be tuned via the kernel.sched_cfs_bandwidth_slice_us sysctl.
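As a quick illustration, here's a minimal C sketch that reads the current slice setting. It assumes the standard sysctl-to-procfs mapping (/proc/sys/kernel/sched_cfs_bandwidth_slice_us) and is not part of the kernel change discussed in this post.

#include <stdio.h>

/* Minimal sketch: print the current CFS bandwidth slice in microseconds.
 * Assumes the usual sysctl-to-procfs mapping; equivalent to running
 * `sysctl kernel.sched_cfs_bandwidth_slice_us`. */
int main(void)
{
    const char *path = "/proc/sys/kernel/sched_cfs_bandwidth_slice_us";
    FILE *f = fopen(path, "r");
    unsigned long slice_us;

    if (!f) {
        perror(path);
        return 1;
    }
    if (fscanf(f, "%lu", &slice_us) == 1)
        printf("slice = %lu us\n", slice_us);
    fclose(f);
    return 0;
}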

If all threads in a cgroup stop being runnable on a particular CPU, such as blocking on IO, the kernel returns all but 1ms of this slack quota to the global bucket. The kernel leaves 1ms behind, because this decreases global bucket lock contention for many high performance computing applications. At the end of the period, the scheduler expires any remaining core-local time slice and refills the global quota bucket.
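Conceptually, the slack return looks something like the following simplified sketch. The types here are made up for illustration, not the kernel's cfs_rq and cfs_bandwidth structures; the 1ms kept behind corresponds to the kernel's min_cfs_rq_runtime constant.

/* Simplified model of returning slack quota when a per-CPU queue goes
 * idle. Illustrative types only, not the kernel implementation. */
#define NSEC_PER_MSEC      1000000LL
#define MIN_LOCAL_RUNTIME  (1 * NSEC_PER_MSEC)   /* 1ms stays on the CPU */

struct percpu_queue  { long long runtime_remaining; };  /* ns */
struct global_bucket { long long runtime; };            /* ns */

static void return_slack(struct percpu_queue *rq, struct global_bucket *b)
{
    long long slack = rq->runtime_remaining - MIN_LOCAL_RUNTIME;

    if (slack > 0) {
        b->runtime += slack;                      /* hand the excess back */
        rq->runtime_remaining = MIN_LOCAL_RUNTIME;
    }
}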

That’s at least how it has worked since commit 512ac999 and v4.18 of the kernel.

To clarify, here's an example of a multi-threaded daemon with two worker threads, each pinned to its own core. The top graph shows the cgroup's global quota over time. It starts with 20ms of quota, which corresponds to .2 CPU. The middle graph shows the quota assigned to the per-CPU queues, and the bottom graph shows when the workers were actually running on their CPUs.

Multi-threaded daemon with two worker threads

At 10ms:

  • A request comes in for worker 1.
  • A slice of quota is transferred from the global quota to the per-CPU queue for CPU 1.
  • Worker 1 takes exactly 5ms to process and respond to the request.

At 17ms:

  • A request comes in for worker 2.
  • A slice of quota is transferred from the global quota to the per-CPU queue for CPU 2.

Of course, it's wildly unrealistic for worker 1 to take precisely 5ms to respond to a request. What happens if the request requires some other amount of processing time?

Multi-threaded daemon with two worker threads

At 30ms:

  • A request comes in for worker 1.
  • Worker 1 needs only 1ms to process the request, leaving 4ms remaining on the per-CPU bucket for CPU 1.
  • Since there is time remaining on the per-CPU run queue, but there are no more runnable threads on CPU 1, a timer is set to return the slack quota back to the global bucket. This timer is set for 7ms after worker 1 stops running.

At 38ms:

  • The slack timer set on CPU 1 triggers and returns all but 1 ms of quota back to the global quota pool.
  • This leaves 1 ms of quota on CPU 1.

At 41ms:

  • Worker 2 receives a long request.
  • All the remaining time is transferred from the global bucket to CPU 2’s per-CPU bucket, and worker 2 uses all of it.

At 49ms:

  • Worker 2 on CPU 2 is now throttled without completing the request.
  • This occurs in spite of the fact that CPU 1 still has 1ms of quota.

While 1ms might not have much impact on a two-core machine, those milliseconds add up on high-core-count machines. If we hit this behavior on an 88-core (n) machine, we could potentially strand 87 (n-1) milliseconds per period. That's 87ms, or 870 millicores, or .87 CPU that could potentially be unusable. That's how we hit low-quota usage with excessive throttling. Aha!

Back when 8- and 10-core machines were considered huge, this issue went largely unnoticed. Now that core count is all the rage, this problem has become much more apparent. This is why we noticed an increase in throttling for the same application when run on higher core count machines.

Note: If an application only has 100ms of quota (1 CPU), and the kernel uses 5ms slices, the application can only use 20 cores before running out of quota (100 ms / 5 ms slice = 20 slices). Any threads scheduled on the other 68 cores in an 88-core behemoth are then throttled and must wait for slack time to be returned to the global bucket before running.
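To make the arithmetic concrete, here's a small sketch using the numbers from this example. All of the values (the default 100ms period, a quota of 1 CPU, the default 5ms slice, and an 88-core machine) are assumptions carried over from the text above, not measurements.

#include <stdio.h>

/* Back-of-the-envelope arithmetic from the example above. */
int main(void)
{
    const double period_ms = 100.0;
    const double quota_ms  = 100.0;   /* 1 CPU worth of quota */
    const double slice_ms  = 5.0;
    const int    ncpus     = 88;

    /* How many CPUs can hold a slice at once before the global bucket runs dry. */
    double slices = quota_ms / slice_ms;                 /* 20 */

    /* Worst case: every other CPU queue strands 1ms of unexpired quota. */
    double stranded_ms  = ncpus - 1;                     /* 87ms per period */
    double stranded_cpu = stranded_ms / period_ms;       /* .87 CPU */

    printf("usable slices per period: %.0f\n", slices);
    printf("potentially stranded: %.0fms = %.2f CPU (%.0f millicores)\n",
           stranded_ms, stranded_cpu, stranded_cpu * 1000);
    return 0;
}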

Resolving a long-standing bug

Commit 512ac999 changed the expiration check in the kernel scheduler as follows:

-        if (cfs_rq->runtime_expires != cfs_b->runtime_expires) {
+        if (cfs_rq->expires_seq == cfs_b->expires_seq) {
                 /* extend local deadline, drift is bounded above by 2 ticks */
                 cfs_rq->runtime_expires += TICK_NSEC;
         } else {
                 /* global deadline is ahead, expiration has passed */
                 cfs_rq->runtime_remaining = 0;
         }

The pre-patch code expired runtime only when the per-CPU expiration time matched the global expiration time (cfs_rq->runtime_expires == cfs_b->runtime_expires). By instrumenting the kernel, I proved that this condition was almost never true on my nodes, so those leftover milliseconds never expired. The patch changed this logic from being based on clock time to a per-period sequence count, resolving a long-standing bug in the kernel.

The original intention of that code was to expire any remaining CPU-local time at the end of the period. Commit 512ac999 actually fixed this so the quota properly expired. This results in quota being strictly limited for each period.
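To see why a sequence count works where the timestamp comparison didn't, here's a simplified illustration with hypothetical types (not the kernel's actual cfs_rq and cfs_bandwidth structures): every refill of the global bucket bumps a counter, and a per-CPU queue whose copy of the counter is out of date knows its leftover runtime belongs to an old period.

/* Simplified illustration of the sequence-count idea; hypothetical types,
 * not the kernel implementation. */
struct global_bucket {
    unsigned long long expires_seq;  /* bumped once per period refill */
};

struct percpu_queue {
    unsigned long long expires_seq;  /* copied when runtime was handed out */
    long long runtime_remaining;     /* ns of locally cached quota */
};

static void expire_if_stale(struct percpu_queue *rq, const struct global_bucket *b)
{
    /* Sequence numbers are exact copies, so equality is immune to clock
     * drift between CPUs, unlike comparing two timestamps. */
    if (rq->expires_seq != b->expires_seq) {
        rq->runtime_remaining = 0;          /* quota from an old period is stale */
        rq->expires_seq = b->expires_seq;
    }
}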

When CFS-Cgroup bandwidth control was initially created, time-sharing on supercomputers was one of the key use cases. This strict enforcement worked well for those CPU-bound applications since they used all their quota in each period anyway, and none of it ever expired. For Java web applications with tons of tiny worker threads, however, it meant a lot of quota expiring each period, 1ms at a time.

The solution

First, we tried implementing "rollover minutes" that banked expiring quota and made it usable in the next period. This created a thundering herd problem on the global bucket lock at the period boundary. Then, we tried making quota expiration configurable separately from the period. This led to other issues where bursty applications could consume far more quota in some periods. We also tried returning all the slack quota when threads stopped being runnable, but this caused a ton of lock contention and some performance issues. Ben Segall, one of the CFS bandwidth control maintainers, suggested tracking the core-local slack and reclaiming it only when needed. That solution had performance issues of its own on high-core-count machines.

As it turns out, the solution was staring us in the face the whole time: quota expiration had effectively not been happening since 2014, and in all that time no one had noticed any issues with CFS CPU bandwidth constraints. Then the expiration bug was fixed in commit 512ac999, and lots of people started reporting the throttling problem.

So, why not remove the expiration logic altogether? That's the solution we ended up pushing back into the mainline kernel. Instead of strictly limiting usage to the quota within each period, the scheduler now strictly enforces average CPU usage over a longer time window. Additionally, the amount an application can burst is limited to 1ms per CPU queue. You can read the whole conversation and see the five subsequent patch revisions on the Linux kernel mailing list archives.

These changes are now a part of the 5.4+ mainline kernels. They have been backported onto many available kernels:

  • Linux-stable: 4.14.154+, 4.19.84+, 5.3.9+
  • Ubuntu: 4.15.0-67+, 5.3.0-24+
  • Red Hat Enterprise Linux:
    RHEL 7: 3.10.0-1062.8.1.el7+
    RHEL 8: 4.18.0-147.2.1.el8_1+
  • CoreOS: v4.19.84+

The results

Graph showing decrease in required CPU load

How to mitigate the issue

  • Monitor your throttled percentage (see the sketch after this list)
  • Upgrade your kernels
  • If you are using Kubernetes, use whole CPU quotas, as this decreases the number of schedulable CPUs available to the cgroup
  • Increase quota where necessary
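
For the monitoring item above, here's a minimal sketch that computes the throttled percentage from a cgroup's cpu.stat counters. The path is an example and assumes a cgroup v1 cpu controller mounted at /sys/fs/cgroup/cpu; point it at your own cgroup's cpu.stat (cgroup v2 exposes similar counters in its own cpu.stat format).

#include <stdio.h>
#include <string.h>

/* Minimal sketch: report what fraction of periods a cgroup was throttled. */
int main(void)
{
    const char *path = "/sys/fs/cgroup/cpu/cpu.stat";  /* example path, adjust */
    FILE *f = fopen(path, "r");
    char key[64];
    long long val, nr_periods = 0, nr_throttled = 0;

    if (!f) {
        perror(path);
        return 1;
    }
    while (fscanf(f, "%63s %lld", key, &val) == 2) {
        if (strcmp(key, "nr_periods") == 0)
            nr_periods = val;
        else if (strcmp(key, "nr_throttled") == 0)
            nr_throttled = val;
    }
    fclose(f);

    if (nr_periods > 0)
        printf("throttled in %.1f%% of periods (%lld of %lld)\n",
               100.0 * nr_throttled / nr_periods, nr_throttled, nr_periods);
    return 0;
}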

Ongoing scheduler developments

To read more about kernel scheduler bugs in Kubernetes, see these interesting GitHub issues:

Please also feel free to tweet your questions to me @dchiluk.

Cross-posted on Indeed Engineering Blog.
