Coding Challenge #5: Kubernetes Autoscaling Simulator
A practical coding challenge for DevOps engineers.
The Problem
Your Kubernetes cluster is running critical workloads, but you’re struggling to optimize resource allocation. The Horizontal Pod Autoscaler (HPA) and the Cluster Autoscaler are powerful, but their behavior is hard to predict under varying loads. Misconfigured autoscaling can lead to overprovisioning (wasted cost) or underprovisioning (downtime).
You need a tool to simulate how HPA and Cluster Autoscaler would respond to different workload patterns, helping you tune scaling policies before applying them in production.
Your Objective
Write a Python or Golang script that:
Accepts Input Parameters
Namespace (default: default)
Simulated workload file (JSON or YAML) specifying pod resource requests and a time-series of CPU/memory load
HPA configuration (min/max replicas, target CPU/memory utilization)
Cluster Autoscaler parameters (node pool size, min/max nodes)
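If you go the Python route, the parameters above map naturally onto command-line flags. A minimal argument-parsing sketch follows; the flag names and defaults are assumptions, not part of the challenge spec.

import argparse

def parse_args():
    # Flag names and defaults below are illustrative; adjust them to your own design.
    parser = argparse.ArgumentParser(description="Kubernetes autoscaling simulator")
    parser.add_argument("--namespace", default="default")
    parser.add_argument("--workload-file", required=True, help="JSON or YAML workload definition")
    parser.add_argument("--min-replicas", type=int, default=1)
    parser.add_argument("--max-replicas", type=int, default=10)
    parser.add_argument("--target-cpu-utilization", type=float, default=0.8, help="Target CPU utilization ratio")
    parser.add_argument("--min-nodes", type=int, default=1)
    parser.add_argument("--max-nodes", type=int, default=5)
    parser.add_argument("--node-type", default="cpu:4,memory:16Gi", help="Node capacity, e.g. cpu:16,memory:64Gi")
    return parser.parse_args()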
Core Functionality
Parse the workload file to simulate pod resource demands over time (e.g., CPU/memory spikes at specific intervals)
Model HPA behavior:
Calculate desired replicas using the Kubernetes HPA formula (see the sketch after this list):
desired_replicas = ceil(current_usage / target_usage * current_replicas)
Respect min and max replica constraints
Model Cluster Autoscaler behavior:
Estimate node requirements based on pod resource requests and node capacity
Simulate node provisioning/deprovisioning within min and max node limits
Track and log:
Number of replicas over time
Number of nodes over time
Resource utilization (CPU/memory) per pod and node
Scaling events (upscale/downscale, node add/remove)
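As a reference point, here is a minimal sketch of the HPA step in Python, assuming current usage and target usage are expressed in the same unit (e.g., millicores); the function name is illustrative.

import math

def desired_replica_count(current_usage, target_usage, current_replicas,
                          min_replicas, max_replicas):
    # Kubernetes HPA formula: ceil(currentReplicas * currentMetric / targetMetric),
    # clamped to the configured min/max replica bounds.
    desired = math.ceil(current_replicas * current_usage / target_usage)
    return max(min_replicas, min(desired, max_replicas))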
Output
Print a time-series summary for each simulation step (e.g., every minute), including:
Timestamp (simulated)
Pod count (current and desired replicas)
Node count (current and desired)
Average CPU/memory utilization across pods
Scaling events (e.g., "Scaled up to 5 replicas", "Added 1 node")
Visualize results using a simple ASCII chart (e.g., replica count over time)
Save detailed results to a CSV file (e.g., simulation_results.csv) with columns for timestamp, replicas, nodes, CPU/memory usage, and events
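A sketch of the output step using only the standard library; the column names mirror the list above and can be adjusted to taste.

import csv

def write_results(rows, path="simulation_results.csv"):
    # Each row is a dict produced per simulation step; missing fields become empty cells.
    fields = ["timestamp", "replicas", "desired_replicas", "nodes",
              "cpu_utilization", "memory_utilization", "events"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields, restval="")
        writer.writeheader()
        writer.writerows(rows)

def ascii_chart(values, label="replicas"):
    # One line per simulation step; bar length equals the value at that step.
    for step, value in enumerate(values):
        print(f"{label} t={step:03d} | " + "#" * int(value))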
Bonus Features
Add --all-namespaces to simulate workloads across the entire cluster
Support custom node types (e.g., CPU and memory capacity via --node-type cpu:16,memory:64Gi); a parsing sketch follows this list
Simulate latency in scaling events (e.g., --scale-delay 30s for HPA and --node-provision-delay 5m for Cluster Autoscaler)
Generate graphical plots using matplotlib (Python) or Go plotting libraries instead of ASCII charts
Add validation for input files and warn about unrealistic configurations (e.g., resource requests exceeding node capacity)
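If you take on the custom node type bonus, the flag value can be split in a couple of lines; this assumes the cpu:...,memory:... format shown above.

def parse_node_type(spec):
    # "cpu:16,memory:64Gi" -> {"cpu": "16", "memory": "64Gi"}
    return dict(item.split(":", 1) for item in spec.split(","))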
Requirements
Use standard or well-established libraries for parsing JSON/YAML and handling time-series data
Implement HPA logic based on the Kubernetes HPA algorithm
Model scheduling simplistically by assuming a pod fits if total resource requests do not exceed node capacity
Handle edge cases like insufficient capacity and invalid HPA configs
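Under that simplified scheduling model, the node count can be estimated per resource and the larger of the two taken. A sketch, assuming CPU and memory values have already been normalized to plain numbers:

import math

def required_node_count(replicas, pod_cpu, pod_memory, node_cpu, node_memory,
                        min_nodes, max_nodes):
    # Whichever resource is the tighter constraint drives the node count,
    # clamped to the Cluster Autoscaler's min/max node limits.
    nodes_for_cpu = math.ceil(replicas * pod_cpu / node_cpu)
    nodes_for_memory = math.ceil(replicas * pod_memory / node_memory)
    return max(min_nodes, min(max(nodes_for_cpu, nodes_for_memory), max_nodes))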
Tips
Use a loop to iterate over the time steps in the workload file and update the replica and node state (see the sketch after these tips)
For HPA:
desired = ceil(current_metric / target_metric * current_replicas)
For Cluster Autoscaler:
Sum pod requests and divide by node capacity to estimate required nodes
Use libraries like tabulate (Python) or tablewriter (Go) for clean output
Test with various workload patterns like steady load, spikes, or cyclical behavior
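Putting the tips together, the main loop might look like the sketch below, reusing the desired_replica_count and required_node_count helpers sketched earlier; the config dictionaries and field names are assumptions.

def simulate(load_series, pod_request, hpa_cfg, ca_cfg):
    # load_series: list of (timestamp, cpu_usage) pairs taken from the workload file.
    # hpa_cfg["target_cpu"] must be in the same unit as cpu_usage (e.g., cores).
    replicas = hpa_cfg["min_replicas"]
    nodes = ca_cfg["min_nodes"]
    rows = []
    for timestamp, cpu_usage in load_series:
        events = []
        desired = desired_replica_count(cpu_usage, hpa_cfg["target_cpu"], replicas,
                                        hpa_cfg["min_replicas"], hpa_cfg["max_replicas"])
        if desired != replicas:
            events.append(f"Scaled {'up' if desired > replicas else 'down'} to {desired} replicas")
            replicas = desired
        needed = required_node_count(replicas, pod_request["cpu"], pod_request["memory"],
                                     ca_cfg["node_cpu"], ca_cfg["node_memory"],
                                     ca_cfg["min_nodes"], ca_cfg["max_nodes"])
        if needed != nodes:
            events.append(f"{'Added' if needed > nodes else 'Removed'} {abs(needed - nodes)} node(s)")
            nodes = needed
        rows.append({"timestamp": timestamp, "replicas": replicas,
                     "nodes": nodes, "events": "; ".join(events)})
    return rows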
Example Workload File (workload.json)
{
  "pods": [
    {
      "name": "app-pod",
      "requests": { "cpu": "500m", "memory": "512Mi" },
      "load": [
        { "time": "2025-05-10T00:00:00Z", "cpu": "200m", "memory": "300Mi" },
        { "time": "2025-05-10T00:01:00Z", "cpu": "800m", "memory": "700Mi" }
      ]
    }
  ]
}
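The requests and load values above use Kubernetes quantity strings, so a small conversion helper is handy. This sketch covers only the suffixes appearing in the example.

def parse_quantity(value):
    # "500m" -> 0.5 cores, "512Mi" -> bytes; plain numbers pass through unchanged.
    suffixes = {"m": 1e-3, "Ki": 2**10, "Mi": 2**20, "Gi": 2**30}
    for suffix, factor in suffixes.items():
        if value.endswith(suffix):
            return float(value[:-len(suffix)]) * factor
    return float(value)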
Why This Is Useful
Helps predict and visualize autoscaling behavior
Avoids costly overprovisioning and prevents outages due to underprovisioning
Provides a safe environment to test scaling configurations
Valuable for capacity planning and DevOps training
Submission
Follow @sharonsahadevan on X and LinkedIn for weekly Kubernetes challenges and insights.