The Hidden CPU Throttling Crisis in Your Kubernetes Cluster

How a 100-millisecond decision is silently killing your application performance

Sharon Sahadevan
Aug 10, 2025

Picture this: You're a diligent platform engineer. You've carefully allocated CPU resources to your applications: 1 CPU requested, 5 CPUs as the limit.

Your monitoring dashboard shows your app cruising along at a mere 140 millicores, sometimes peaking at 400 millicores. Life is good, right?

Wrong.

Your application is being throttled. Hard. Users are experiencing random slowdowns. API calls that should take 50ms are suddenly taking 500ms. And your monitoring is lying to you about why.

Welcome to one of Kubernetes' most misunderstood problems: CPU throttling in the age of microservices.

The 100-Millisecond Problem Nobody Talks About

Here's what your Kubernetes vendor doesn't advertise: every 100 milliseconds, your application's fate is decided. Not every second, not every minute, but every 100 milliseconds. This seemingly arbitrary number, buried deep in the Linux kernel's CFS (Completely Fair Scheduler), is the source of more production incidents than most teams realize.

Let me show you why.

A Tale of Two Timescales

When you set a CPU limit in Kubernetes, you're actually setting up a fascinating mathematical constraint that operates on a completely different timescale than your monitoring:

resources:
  requests:
    cpu: 1000m  # "I need at least 1 CPU"
  limits:
    cpu: 5000m  # "Never give me more than 5 CPUs"

What Kubernetes actually does with this is remarkable in its simplicity and devastating in its implications:

  • Your 5 CPU limit becomes a quota: 500 milliseconds of CPU time per 100-millisecond period

  • Every 100ms, the kernel resets your allowance

  • Use it all up in 20ms? Too bad—you're frozen for the next 80ms
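
The arithmetic is easy to reproduce. Below is a minimal Go sketch of that conversion (the 100ms period is the kernel default; the cgroup file names in the comments are the standard v1/v2 locations, not anything specific to your cluster):

package main

import "fmt"

// cfsPeriodUs is the kernel's default CFS enforcement window: 100ms.
const cfsPeriodUs = 100_000

// quotaFromLimit converts a Kubernetes CPU limit in millicores into the CFS
// quota: microseconds of CPU time the cgroup may consume per period. The
// result is what lands in the container's cgroup (cpu.cfs_quota_us on
// cgroup v1, the first field of cpu.max on cgroup v2).
func quotaFromLimit(limitMillicores int64) int64 {
    return limitMillicores * cfsPeriodUs / 1000
}

func main() {
    // limits.cpu: 5000m  ->  500000us of CPU time per 100000us period
    fmt.Println(quotaFromLimit(5000), "us per", cfsPeriodUs, "us period")
}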

The Monitoring Illusion

Here's where it gets interesting. Your Prometheus metrics, your Datadog dashboard, your Grafana panels—they're all showing you averages over seconds or minutes. It's like trying to catch lightning with a long-exposure camera.

Real scenario from a production system:

Milliseconds 0-20:    App bursts to 25 CPUs (processing a batch)
Milliseconds 20-100:  App is throttled (hit 5 CPU limit)
Milliseconds 100-200: App uses 0.5 CPU (waiting for I/O)
Milliseconds 200-300: App uses 0.3 CPU (idle)
...
Minute average: 140 millicores 🎭
Reality: 80% of requests were delayed by throttling
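
To see how a minute-level average can stay tiny while the quota is being slammed, here is a small Go sketch with purely illustrative numbers (600 CFS periods per minute, ten bursts that exhaust the full 500ms allowance, near-idle the rest of the time; none of these figures are measurements from a real system):

package main

import "fmt"

func main() {
    // Illustrative numbers only: one minute is 600 CFS periods of 100ms.
    // Suppose the app exhausts its full 500ms quota (the 5-CPU limit) in
    // 10 latency-critical bursts and uses ~6ms of CPU in every other period.
    const (
        periods        = 600
        burstPeriods   = 10
        quotaMs        = 500.0 // CPU-ms consumed in a burst period
        idleMs         = 6.0   // CPU-ms consumed in a quiet period
        periodLengthMs = 100.0
    )

    totalCPUms := burstPeriods*quotaMs + (periods-burstPeriods)*idleMs
    avgMillicores := totalCPUms / (periods * periodLengthMs) * 1000

    fmt.Printf("minute average: %.0f millicores\n", avgMillicores)                 // ~142m
    fmt.Printf("throttled periods: %.1f%%\n", 100.0*burstPeriods/float64(periods)) // ~1.7%
}

Ten throttled periods out of six hundred barely registers on a dashboard, yet each of those bursts froze every runnable thread in the pod for most of a period.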

The Burst Pattern Crisis

Modern applications don't use CPU like factories use electricity—as a steady, predictable flow. They use it like your teenager uses your credit card—nothing for days, then suddenly everything at once.

Consider a typical web service handling requests:

  1. Request arrives → Sudden CPU spike for JSON parsing

  2. Database query → CPU idle, waiting for I/O

  3. Response processing → Another CPU burst for serialization

  4. Encryption → Massive CPU spike for TLS

  5. Logging → Small CPU burst for structured logging

All of this happens in milliseconds. To the CFS scheduler, your well-behaved service looks like a NASCAR driver in a school zone—constantly slamming into the speed limit.
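
If you want to see that shape in code, here is a minimal, hypothetical Go handler (the names handleOrder and order are made up for illustration, and the time.Sleep stands in for the database round trip) annotated with where the CPU bursts and where it sits idle:

package main

import (
    "encoding/json"
    "log"
    "net/http"
    "time"
)

type order struct {
    ID    string `json:"id"`
    Items []int  `json:"items"`
}

func handleOrder(w http.ResponseWriter, r *http.Request) {
    // 1. CPU burst: JSON parsing is pure computation on the request goroutine.
    var o order
    if err := json.NewDecoder(r.Body).Decode(&o); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }

    // 2. CPU idle: stands in for the database round trip; the goroutine
    // blocks on I/O and burns none of the CFS quota while it waits.
    time.Sleep(20 * time.Millisecond)

    // 3. CPU burst: serializing the response. (Steps 4-5, TLS and structured
    // logging, add further short bursts inside the HTTP server and logger.)
    _ = json.NewEncoder(w).Encode(map[string]string{"status": "ok", "id": o.ID})
}

func main() {
    http.HandleFunc("/order", handleOrder)
    log.Fatal(http.ListenAndServe(":8080", nil))
}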

The Multi-Threading Multiplication Effect

But wait, it gets worse. If you're running a modern application, you're probably using multiple threads or goroutines. Here's where the math becomes truly painful:

Scenario: Node.js Application with Worker Threads

  • 10 worker threads

  • Each can use 100% of a CPU core

  • Theoretical peak: 10 CPUs

  • Your limit: 5 CPUs

  • Result: Constant throttling, even at "14% average utilization"

The kernel doesn't care that your average is low. For those few milliseconds when all your threads wake up simultaneously (like when processing a batch of messages from a queue), you're hitting that ceiling hard.
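
A quick way to convince yourself is the toy Go program below (my illustration, not taken from any real workload): it wakes 10 goroutines at once, each spinning a core for 500ms. Unconstrained on a node with 10 or more cores, the batch finishes in about half a second of wall time. Under a 5-CPU limit the pod may draw only 500ms of CPU per 100ms period, so the group runs for roughly 50ms and then sits frozen for roughly 50ms of every period, and the same batch takes about twice as long.

package main

import (
    "fmt"
    "sync"
    "time"
)

// busyFor spins one goroutine at ~100% of a core for roughly d.
func busyFor(d time.Duration) {
    deadline := time.Now().Add(d)
    for time.Now().Before(deadline) {
        // burn CPU
    }
}

func main() {
    const (
        workers   = 10                     // e.g. a worker pool draining a message queue
        burstTime = 500 * time.Millisecond // CPU work per worker in one batch
    )

    start := time.Now()
    var wg sync.WaitGroup
    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            busyFor(burstTime) // all workers wake up together
        }()
    }
    wg.Wait()

    // 10 workers x 500ms = 5000ms of CPU demanded in one batch.
    // Unlimited (>=10 cores): about 500ms of wall time.
    // With cpu: 5000m (500ms of quota per 100ms period): the group runs
    // ~50ms and freezes ~50ms of every period, finishing in roughly 1s.
    // A few batches like this per minute still average out to only a few
    // hundred millicores on a dashboard.
    fmt.Printf("batch took %v of wall time\n", time.Since(start))
}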

The Real-World Impact

Let me share some actual numbers from production systems I've analyzed:

E-commerce API Service

  • CPU Limit: 4 cores

  • Average Usage: 0.8 cores (20%)

  • Throttling: 45% of periods

  • P99 Latency Impact: +240ms

Data Processing Pipeline

  • CPU Limit: 8 cores

  • Average Usage: 2.1 cores (26%)

  • Throttling: 62% of periods

  • Batch Processing Time: 4x slower

Real-time Analytics Service

  • CPU Limit: 2 cores

  • Average Usage: 0.4 cores (20%)

  • Throttling: 31% of periods

  • User Experience: "Random" freezes lasting 50-100ms

Detecting the Hidden Throttle

Most teams don't even know they're being throttled. Here's your detection toolkit:

1. The Kernel Never Lies

SSH into your node and check the cgroup stats:

cat /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/pod<pod-uid>/cpu.stat

nr_periods 521948
nr_throttled 162847
throttled_time 84762138272

If nr_throttled is more than 1% of nr_periods, you have a problem.
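
If you'd rather not do the division by hand, a tiny Go helper like the one below (my sketch, not an official tool) parses a cpu.stat file and prints the throttle ratio; run against the sample output above it reports roughly 31% of periods throttled. On cgroup v2 the file sits directly in the container's cgroup directory and reports throttled_usec instead of throttled_time, but nr_periods and nr_throttled keep the same names.

package main

import (
    "bufio"
    "fmt"
    "os"
    "strconv"
    "strings"
)

// Usage: go run main.go /sys/fs/cgroup/.../cpu.stat
func main() {
    f, err := os.Open(os.Args[1])
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    defer f.Close()

    stats := map[string]float64{}
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        fields := strings.Fields(sc.Text())
        if len(fields) == 2 {
            if v, err := strconv.ParseFloat(fields[1], 64); err == nil {
                stats[fields[0]] = v
            }
        }
    }

    // e.g. 162847 / 521948 ≈ 31% for the sample output above
    pct := stats["nr_throttled"] / stats["nr_periods"] * 100
    fmt.Printf("throttled in %.1f%% of periods (%.0f of %.0f)\n",
        pct, stats["nr_throttled"], stats["nr_periods"])
}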

2. The Prometheus Queries That Matter

# Throttling percentage
rate(container_cpu_cfs_throttled_periods_total[5m]) 
/ rate(container_cpu_cfs_periods_total[5m]) 
* 100

# Time spent throttled
rate(container_cpu_cfs_throttled_seconds_total[5m])

# The shocking truth - time spent throttled vs CPU time actually used
rate(container_cpu_cfs_throttled_seconds_total[5m])
/ rate(container_cpu_usage_seconds_total[5m])

3. The Application Symptoms

  • Random latency spikes that don't correlate with load

  • Timeout errors during "low usage" periods

  • Worker threads that seem to "freeze"

  • Batch jobs taking inconsistent time

  • WebSocket connections dropping mysteriously
