Story with Horizontal Pod Autoscaler

Cristian Glavan
2 min read · May 23, 2021
Photo by Tri Eptaroka Mardiana on Unsplash

Say you want to set up a Horizontal Pod Autoscaler for your k8s deployment so it can scale nice and easy.
Everything is set up, and your deployment is waiting for the HPA it needs in its life. You know by now, from many debugging sessions, that for the HPA to work, every container in your pods needs to have resources defined, like so:

- name: log-aggregator
  image: images.my-company.example/log-aggregator:v6
  resources:
    requests:
      memory: "64Mi"
      cpu: "250m"
    limits:
      memory: "128Mi"
      cpu: "500m"

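For completeness, the HPA object itself targets the deployment and says when to scale. A minimal sketch, assuming an apps/v1 deployment called "my-app" and a 70% CPU target (the names and numbers here are placeholders, not my real setup):

apiVersion: autoscaling/v2   # autoscaling/v2beta2 on older clusters
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app             # the deployment you want to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization           # percentage of the *requested* CPU
          averageUtilization: 70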
Sounds nice, but will it actually scale?

Well, if you are me, you used “kubectl apply -f” to deploy your manifests, and at some point, prior to adding your HPA, you had a sidecar container (for logging, say) that you later decided to remove. Guess what: “apply” does not remove elements from your deployment, so I found myself staring at “cannot read metric value”.
Yes, my sidecar container did not have any resources defined. I removed the clandestine sidecar container from my deployment and the HPA worked like a charm.
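If you want to see what the cluster actually thinks your deployment looks like, container names included, something along these lines does the trick (“my-app” is just a placeholder name):

# list the containers of the live deployment, not the manifest on disk
kubectl get deployment my-app -o jsonpath='{.spec.template.spec.containers[*].name}'

# the HPA's events show the "cannot read metric value" style errors
kubectl describe hpa my-app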

It’s things like this that eat you up inside. Lesson learned: apply is not my friend, and if I do use it, I always check objects as they are inside the cluster and not as they are in my manifests as code.
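A small habit that would have caught the stowaway sidecar: diff the live object against the manifest before trusting the code. The file name below is a placeholder:

# shows what differs between the live object and the manifest as code
kubectl diff -f deployment.yaml

# or just dump the live object and look at it with your own eyes
kubectl get deployment my-app -o yaml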
