Using the K8s proxy has a significant performance penalty.
Here are some benchmarks:
Context: a simple role granting everything.
Getting resources from a brand new empty kind cluster:
no proxy: kubectl get all -A 0.06s user 0.02s system 119% cpu 0.066 total
direct proxy: kubectl get all -A 0.11s user 0.06s system 7% cpu 2.466 total
joined proxy: kubectl get all -A 0.12s user 0.06s system 1% cpu 18.366 total
Creating 6 namespaces:
no proxy: kubectl apply -f foo2.yaml 0.17s user 0.04s system 35% cpu 0.582 total
direct proxy: kubectl apply -f ns.yaml 0.27s user 0.08s system 11% cpu 2.926 total
joined proxy: kubectl apply -f foo2.yaml 0.25s user 0.09s system 1% cpu 25.428 total
Creating 6 pods:
no proxy: kubectl apply -f foo3.yaml 0.18s user 0.03s system 31% cpu 0.667 total
direct proxy: kubectl apply -f foo3.yaml 0.21s user 0.11s system 8% cpu 3.873 total
joined proxy: kubectl apply -f foo3.yaml 0.26s user 0.08s system 0% cpu 37.165 total
Getting resources from a populated cluster (6 namespaces, 200 pods per namespace):
no proxy: kubectl get all -A 0.16s user 0.05s system 108% cpu 0.189 total
direct proxy: kubectl get all -A 0.29s user 0.10s system 12% cpu 3.168 total
joined proxy: kubectl get all -A 0.29s user 0.13s system 1% cpu 22.754 total
A more complex role with pattern matching yields similar results, so role complexity doesn't seem to be the cause.
At first glance the number of resources only slows the lookup down slightly, but after digging further, the slowdown does grow with the number of resources.
I suspect the causes are among the following:
- we force an upgrade to HTTP/2 pretty much everywhere
- we don't reuse sockets (a transport sketch follows below)
- there is no cache around permission checks (a cache sketch follows below)
- we do double the work checking permissions when hopping from agent to agent
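For the socket-reuse point, here is a minimal Go sketch (not Teleport's actual code) of an http.Transport configured to keep idle connections warm so consecutive requests reuse the same TCP/TLS socket instead of paying a new dial and handshake each time. The pool sizes, timeouts, and the placeholder API server address are illustrative assumptions.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// newReusableTransport returns a transport that pools idle connections so
// repeated kubectl-style requests can reuse an existing socket.
// The concrete values are illustrative, not what Teleport ships.
func newReusableTransport(tlsConfig *tls.Config) *http.Transport {
	return &http.Transport{
		TLSClientConfig:     tlsConfig,
		ForceAttemptHTTP2:   true,             // negotiate h2 via ALPN rather than a separate upgrade round trip
		MaxIdleConns:        100,              // overall idle connection pool
		MaxIdleConnsPerHost: 10,               // keep several sockets warm per upstream
		IdleConnTimeout:     90 * time.Second, // drop sockets that stay unused
	}
}

func main() {
	client := &http.Client{Transport: newReusableTransport(&tls.Config{})}
	// Two sequential requests: with the pool above, the second one should
	// reuse the connection opened by the first instead of re-dialing.
	for i := 0; i < 2; i++ {
		resp, err := client.Get("https://127.0.0.1:6443/api") // placeholder API server address
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		resp.Body.Close()
	}
}
```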
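For the permission-related points, a hedged sketch of what a cache around access decisions could look like: memoize the result per (user, verb, resource) key with a short TTL so that a single kubectl call fanning out into many API requests, or a request hopping through several agents, doesn't re-evaluate the role set every time. The types and function names here are hypothetical, not Teleport's API.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// accessKey identifies one authorization question.
type accessKey struct {
	User     string
	Verb     string
	Resource string
}

type cachedDecision struct {
	allowed bool
	expires time.Time
}

// decisionCache memoizes permission checks so repeated requests within a
// short window skip re-evaluating the full role set.
type decisionCache struct {
	mu      sync.Mutex
	ttl     time.Duration
	entries map[accessKey]cachedDecision
	check   func(accessKey) bool // the expensive role evaluation
}

func newDecisionCache(ttl time.Duration, check func(accessKey) bool) *decisionCache {
	return &decisionCache{
		ttl:     ttl,
		entries: make(map[accessKey]cachedDecision),
		check:   check,
	}
}

func (c *decisionCache) Allowed(k accessKey) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if e, ok := c.entries[k]; ok && time.Now().Before(e.expires) {
		return e.allowed // cache hit: no role evaluation
	}
	allowed := c.check(k) // cache miss: do the expensive check once
	c.entries[k] = cachedDecision{allowed: allowed, expires: time.Now().Add(c.ttl)}
	return allowed
}

func main() {
	evaluations := 0
	cache := newDecisionCache(5*time.Second, func(k accessKey) bool {
		evaluations++ // stands in for walking the role set
		return true
	})
	k := accessKey{User: "alice", Verb: "get", Resource: "pods"}
	for i := 0; i < 100; i++ {
		cache.Allowed(k)
	}
	fmt.Println("role evaluations for 100 requests:", evaluations) // prints 1
}
```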