When it comes to allocating CPU and memory resources to workloads in Kubernetes, there’s broad agreement on the importance of setting request values.
There's little controversy around setting requests because it's generally understood that a workload without a guaranteed minimum of CPU and memory is among the first candidates for eviction when a node comes under resource pressure.
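As a concrete illustration, requests are declared per container in the pod spec; the scheduler only places the pod on a node with that much unreserved capacity. The workload name and image below are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app                       # hypothetical workload
spec:
  containers:
  - name: web-app
    image: example.com/web-app:1.0    # hypothetical image
    resources:
      requests:
        cpu: "250m"      # a quarter of a core, reserved at scheduling time
        memory: "256Mi"  # minimum memory the scheduler sets aside on the node
```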
When it comes to limits, however, the internet is full of opinions.
Considerations for Setting CPU Limits
While those are all logical reasons not to set CPU limits, skipping them across the board doesn't satisfy every use case.
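For cases where a cap does make sense (for example, containing a noisy neighbor in a multi-tenant cluster), a CPU limit is declared alongside the request. Note that, unlike a memory limit, exceeding a CPU limit does not kill the container; the kernel throttles it instead. The values below are illustrative:

```yaml
resources:
  requests:
    cpu: "500m"   # reserved for scheduling
  limits:
    cpu: "1"      # usage above one core is throttled (CFS quota), not killed
```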
Considerations for Setting Memory Limits
If an application's memory usage doesn't stay within its limit, you end up with an out-of-memory (OOM) kill; worse, with no limit in place, a memory leak can provoke failure across the entire node.
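A common pattern is to set the memory request equal to the memory limit, which places the pod in a higher QoS class and makes its behavior under pressure predictable: the container is OOM-killed when it crosses the limit, but the damage is contained to that pod rather than the node. The sizing below is illustrative:

```yaml
resources:
  requests:
    memory: "512Mi"
  limits:
    memory: "512Mi"  # request == limit; exceeding it OOM-kills this container only
```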
To further complicate things, internal resource configuration in the JVM (most notably heap sizing) is tied to the limits set in Kubernetes.
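Modern JVMs (8u191+ and 10+) detect the container's memory limit and size the default heap from it; `-XX:MaxRAMPercentage` controls what fraction of that limit the heap may use. One way to wire this up is through the `JAVA_TOOL_OPTIONS` environment variable, which the JVM reads at startup. The image name and the 75% figure below are illustrative assumptions, not recommendations:

```yaml
containers:
- name: jvm-app                           # hypothetical workload
  image: example.com/jvm-app:1.0          # hypothetical image
  env:
  - name: JAVA_TOOL_OPTIONS               # picked up automatically by the JVM
    value: "-XX:MaxRAMPercentage=75.0"    # cap heap at 75% of the container limit
  resources:
    limits:
      memory: "1Gi"   # the JVM derives its max heap from this Kubernetes limit
```

Leaving headroom below the limit matters because the heap is not the JVM's only memory consumer: metaspace, thread stacks, and native allocations all count against the same cgroup limit.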
The reality is that there is no single axiomatic truth when it comes to setting Kubernetes limits.
Automation informed by both engineering expertise and the real-time needs of an application is the only combination suited to address the challenges we've discussed.