CRO admission plugin: never mutate below namespace minimums #18553
Conversation
/cc @derekwaynecarr PTAL
@frobware: GitHub didn't allow me to request PR reviews from the following users: PTAL. Note that only openshift members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@@ -45,10 +44,12 @@ func Register(plugins *admission.Plugins) {
		glog.Infof("Admission plugin %q is not configured so it will be disabled.", api.PluginName)
		return nil, nil
	}
-	return newClusterResourceOverride(pluginConfig)
+	return newClusterResourceOverride(pluginConfig, defaultGetNamespaceLimitRanges)
this is an odd injection pattern... why not have a LimitRangeLister in the plugin and populate it like this:
// SetInternalKubeInformerFactory implements the WantsInternalKubeInformerFactory interface.
func (p *clusterResourceOverridePlugin) SetInternalKubeInformerFactory(f informers.SharedInformerFactory) {
p.limitRangesLister = f.Core().InternalVersion().LimitRanges().Lister()
}
for _, limitRange := range limitRanges {
	for _, limit := range limitRange.Spec.Limits {
		if limit.Type != kapi.LimitTypeContainer {
I'm not familiar with what this means... if there's a pod limitrange that says min is 256MB, and a container limit range that says the min is 384MB, does that not mean the min on a container is 384? if so, don't we need to include this type of limit?
I also expected opting into specifically recognized types (pod and maybe container), rather than negative matching (this logic would include PVC limit ranges, for example)
I also expected opting into specifically recognized types (pod and maybe container), rather than negative matching (this logic would include PVC limit ranges, for example)
Doesn't this only consider container types?
There's an explicit test case for checking against non-container types:
e76cb4d#diff-62f7f5e71423c1d5dfa6407e3984804bR306
Happy to drop the continue and do the appends iff it's a container type.
This looks fine to me. You are only modifying pod objects and not pvcs in this scenario. Checking for limit type of Container only is fine.
One structural nit so it's clearer. Also not clear why we have the integration changes in this PR, but no big deal there. Update if you agree, then lgtm.
// minResourceLimits finds the minimum CPU and minimum Memory resource
// values across all the limits in limitRanges. Nil is returned if
// there is no CPU or Memory resource, respectively.
func minResourceLimits(limitRanges []*kapi.LimitRange) (*resource.Quantity, *resource.Quantity) {
nit: I think this is clearer as minResourceLimits(limits, resourceName) quantity. Then on the calling side you can just call it twice, once for cpu and once for memory.
@@ -123,7 +123,7 @@ func TestRegistryClientConnectPulpRegistry(t *testing.T) {
	}, imageNotFoundErrorPatterns...)
	if err != nil {
		if strings.Contains(err.Error(), "x509: certificate has expired or is not yet valid") {
-			t.Skip("SKIPPING: due to expired certificate of %s: %v", pulpRegistryName, err)
+			t.Skipf("SKIPPING: due to expired certificate of %s: %v", pulpRegistryName, err)
Why is this in this PR?
This was a drive-by fix; I noticed it when building and running the unit tests for this change. Can drop it and follow up with a discrete change.
We call limitranger.NewLimitRanger(nil) where the nil means use default actions. Also remove type limitRangerActions.
963196d to ca4d3d6
Commit b886341 is my change to accommodate this new behaviour. I think it's worthwhile taking a look at that to see if a) the change makes sense and b) the test overall makes sense now that we clamp to the floor of the minimums.
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: derekwaynecarr, frobware
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these OWNERS Files:
You can indicate your approval by writing
Automatic merge from submit-queue (batch tested with PRs 18576, 18553).
Update the ClusterResourceOverride admission plugin to never mutate
container resource requirements below the floor specified by any
LimitRange from the same namespace.
Reason:
Web console stripped out cluster resource override config awareness,
but the user experience is broken right now when users select a
memory limit that is the floor of the LimitRange value.
You can see this on free-int by setting a memory limit of 100Mi: your deployment will not work because we allowed ClusterResourceOverride to go below the LimitRange floor.
The implication of this change is the over-commit ratio is skewed at
the lower end of the resource consumption scale (but honestly, that
makes sense).