Replies: 2 comments 6 replies
-
@unmade can you share some examples of what the degradation looks like? Does it just take longer? Consume more memory? CPU? All of the above? cc @afdesk
-
@unmade thanks for the report.

update: @unmade Could you share more details about the timings? For comparison, here is what I get with both versions:

```
$ time trivy k8s --scanners vuln -f json --report all --exclude-kinds replicationcontrollers,daemonsets,cronjobs,jobs,services,configmaps,roles,rolebindings,networkpolicies,ingresses,resourcequotas,limitranges,clusterroles,clusterrolebindings
...
trivy k8s --scanners vuln -f json --report all --exclude-kinds  0.65s user 0.51s system 8% cpu 14.132 total

$ time ~/.opt/trivy-58-2 k8s --scanners vuln -f json --report all --exclude-kinds replicationcontrollers,daemonsets,cronjobs,jobs,services,configmaps,roles,rolebindings,networkpolicies,ingresses,resourcequotas,limitranges,clusterroles,clusterrolebindings
...
~/.opt/trivy-58-2 k8s --scanners vuln -f json --report all --exclude-kinds  0.64s user 0.38s system 9% cpu 10.910 total
```

The way kinds are included/excluded was changed, so I believe it may contain some issues. Also, in your case, why do you use the long exclude list? You could instead use:

```
$ trivy k8s --scanners vuln --include-kinds pods,deployments,replicasets,statefulsets,serviceaccounts --disable-node-collector --report summary
```
-
Description
I noticed a significant difference in performance for k8s cluster scanning between `v0.58` and `v0.59`. `v0.59` takes approx. 10x more time to complete. Here is the command I'm running:
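(The command block itself is missing from this capture; the invocation below is a reconstruction based on the command quoted in the maintainer's reply above.)

```sh
trivy k8s --scanners vuln -f json --report all --exclude-kinds replicationcontrollers,daemonsets,cronjobs,jobs,services,configmaps,roles,rolebindings,networkpolicies,ingresses,resourcequotas,limitranges,clusterroles,clusterrolebindings
```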
Trivy is granted the following role:
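(The Role manifest is likewise missing from this capture. As a sketch only, a minimal read-only grant of the kind typically used for `trivy k8s` could be created as below; the role name, verbs, resource list, and service account are all assumptions, not the reporter's actual Role.)

```sh
# Hypothetical names and scope; not the actual Role from the report.
kubectl create clusterrole trivy-scan \
  --verb=get,list \
  --resource=pods,deployments,replicasets,statefulsets,serviceaccounts
kubectl create clusterrolebinding trivy-scan \
  --clusterrole=trivy-scan \
  --serviceaccount=trivy-system:trivy
```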
I tried local runs as well as runs from within the k8s cluster.
I don't get much information from the output other than that on v0.59 the waiting time is significantly longer than on v0.58. Any help on how to debug/profile where the bottleneck is would be appreciated!
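As a starting point (a sketch, not from the original thread): Trivy's global `--debug` flag prints timestamped log lines, which makes it possible to see which phase the extra time goes into.

```sh
# Logs go to stderr, the JSON report to stdout; comparing debug.log
# between v0.58 and v0.59 shows where the two runs diverge.
trivy k8s --scanners vuln --debug -f json --report all > scan.json 2> debug.log
```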
Desired Behavior
Performance remains the same for k8s cluster scanning in version 0.59
Actual Behavior
Performance degraded significantly (approx. 10x slower) for k8s cluster scanning in version 0.59
Reproduction Steps
Have a k8s cluster with a few deployments, replicasets, and statefulsets. Grant Trivy the Role as in the example above. Then (see the sketch below):
1. Run the scan with Trivy v0.58.3
2. Run the scan with Trivy v0.59.1
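A minimal sketch of the comparison, assuming the two release binaries sit side by side (the paths are illustrative):

```sh
# Illustrative paths to the two release binaries.
time ~/.opt/trivy-0.58.3 k8s --scanners vuln -f json --report all > v058.json
time ~/.opt/trivy-0.59.1 k8s --scanners vuln -f json --report all > v059.json
```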
Target
Kubernetes
Scanner
Vulnerability
Output Format
JSON
Mode
Standalone
Debug Output
There is not much difference in the output between 0.58 and 0.59, other than that 0.59 takes much longer to complete.
Operating System
macOS / Docker container deployed to k8s
Version
Checklist
`trivy clean --all`