GKE cluster - difference in resource requests/limits seen in pod manifest vs. on node

I've observed some odd behavior on a GKE cluster: resource requests/limits set on a pod/deployment don't seem to be respected (or are interpreted differently) by the node.

Any idea what the reason for this behaviour may be, and how to solve it? (It causes a lot of issues with resource allocation in the cluster.)

Example of a pod that runs with a `50m` CPU request, which the node sees as `250m`:

$ kubectl get pod core-worker-6bcf9d4877-5wqpb -n austria -o=jsonpath='{range .spec.containers[*]}{"Container Name: "}{.name}{"\n Requests:\n CPU: "}{.resources.requests.cpu}{"\n Memory: "}{.resources.requests.memory}{"\n Limits:\n CPU: "}{.resources.limits.cpu}{"\n Memory: "}{.resources.limits.memory}{"\n"}{end}'

Container Name: core-worker
 Requests:
  CPU: 50m
  Memory: 256Mi
 Limits:
  CPU:
  Memory: 1Gi

And now the node's perspective:

$ kubectl describe node $(kubectl get pod core-worker-6bcf9d4877-5wqpb -n austria -o=custom-columns=NODE:.spec.nodeName)

...
Non-terminated Pods:          (12 in total)
  Namespace  Name                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------  ----                          ------------  ----------  ---------------  -------------  ---
  austria    core-worker-6bcf9d4877-5wqpb  250m (26%)    500m (53%)  256Mi (5%)       1Gi (21%)      18m

Control plane version: 1.27.11-gke.1062003
Node version: 1.26.5-gke.2700

ACCEPTED SOLUTION

It turns out the values shown by `kubectl describe node` were the ones from the pod's `initContainer`. Kubernetes reports a pod's effective request/limit as the higher of the sum across all app containers and the largest value among init containers, so the init container's `250m` CPU request and `500m` limit are what the node accounts for, not the app container's `50m`.
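
A quick way to confirm this is to run the same jsonpath query against `.spec.initContainers` instead of `.spec.containers`. A minimal sketch, reusing the pod from the question (the output labels are illustrative, not from the original post):

$ kubectl get pod core-worker-6bcf9d4877-5wqpb -n austria -o=jsonpath='{range .spec.initContainers[*]}{"Init Container: "}{.name}{"\n Requests:\n  CPU: "}{.resources.requests.cpu}{"\n Limits:\n  CPU: "}{.resources.limits.cpu}{"\n"}{end}'

If this prints a `250m` request and a `500m` limit, it accounts exactly for the numbers in the `kubectl describe node` output above.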
