add a clusterquota reconciliation controller #9658
Conversation
&kapi.DeleteOptions{},
&kapi.ListOptions{},
Why is ListOptions registered twice here?
Also shouldn't kapi objects be registered upstream?
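For context, a minimal sketch of the kind of registration being discussed (not this PR's actual code; the import paths assume the Kubernetes 1.3-era vendoring this repo used at the time): each kapi option type only needs to be added to a scheme once per group/version.

```go
package main

import (
	kapi "k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/runtime"
)

func main() {
	scheme := runtime.NewScheme()
	// Register each kapi option type exactly once for the group/version;
	// listing ListOptions a second time is redundant.
	scheme.AddKnownTypes(kapi.SchemeGroupVersion,
		&kapi.DeleteOptions{},
		&kapi.ListOptions{},
	)
}
```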
Updated, test added, ready for review. [test]
#9701 re[test]
@liggitt ptal
@@ -48,6 +49,13 @@ func (c *clusterResourceQuotas) Create(resourceQuota *quotaapi.ClusterResourceQu
return
}

// Update updates an existing deploymentConfig
Fix or remove
kapi.Kind("ReplicationController"), | ||
kapi.Kind("PersistentVolumeClaim"), | ||
kapi.Kind("Secret"), | ||
kapi.Kind("ConfigMap"), |
Are these all we can replenish? Image streams? Deployment configs?
Are these all we can replenish? Image streams? Deployment configs?
for now, there's an item on the card: https://trello.com/c/qJUVjQnj/713-13-multi-project-quota
I will add a utility method upstream to pull this list so we can pull from a common set?
Opened tracking issue: kubernetes/kubernetes#28601
I will add a utility method upstream to pull this list so we can pull from a common set?
It's more structurally flawed. The choice should be based on GroupResource, and that needs to be built from an intersection with enabled resources. It doesn't actually have to be limited to this server's hosting, since it could observe changes in federated servers.
In addition, we need to extend our InformerFactory to transparently handle generic listener requests to properly feed this beast.
We should probably restructure the replenishment controller to work as a listener/notifier and upstream the bucketer I built here to be able to handle partial updates.
I haven't prioritized the list of TODOs, but it's a significant endeavor.
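A rough illustration of the GroupResource intersection idea (hypothetical helper, not code from this PR; the unversioned import path assumes the 1.3-era tree): the replenishable set would be a candidate list filtered down to whatever the server actually has enabled.

```go
package main

import (
	"fmt"

	"k8s.io/kubernetes/pkg/api/unversioned"
)

// intersectEnabled is a hypothetical helper: it keeps only the candidate
// GroupResources that are enabled on this (or an observed) server.
func intersectEnabled(candidates []unversioned.GroupResource, enabled map[unversioned.GroupResource]bool) []unversioned.GroupResource {
	kept := []unversioned.GroupResource{}
	for _, gr := range candidates {
		if enabled[gr] {
			kept = append(kept, gr)
		}
	}
	return kept
}

func main() {
	candidates := []unversioned.GroupResource{
		{Resource: "pods"},
		{Resource: "services"},
		{Resource: "replicationcontrollers"},
	}
	enabled := map[unversioned.GroupResource]bool{
		{Resource: "pods"}:     true,
		{Resource: "services"}: true,
	}
	// Only pods and services survive the intersection.
	fmt.Println(intersectEnabled(candidates, enabled))
}
```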
I left some comments but @derekwaynecarr or @liggitt should have a look as well
}

// MappingChangeListener is notified of changes to the mapping. It must not block.
type MappingChangeListener interface { |
The listener is new, the rest of this is a straight move. Don't fear.
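As a sketch of the "must not block" contract (the method name and signature below are hypothetical, not the interface added in this PR): an implementation should only record the notification under a short-lived lock and return, leaving the real work to its own worker.

```go
package main

import (
	"fmt"
	"sync"
)

// queueingListener is a hypothetical MappingChangeListener-style implementation:
// a notification only appends a key and returns, so the mapper that calls it
// never blocks on the listener's real work.
type queueingListener struct {
	lock    sync.Mutex
	pending []string
}

// mappingChanged is a stand-in for whatever notification method the real
// interface declares; it must return quickly.
func (l *queueingListener) mappingChanged(quotaName, namespaceName string) {
	l.lock.Lock()
	defer l.lock.Unlock()
	l.pending = append(l.pending, quotaName+"/"+namespaceName)
}

func main() {
	l := &queueingListener{}
	l.mappingChanged("cluster-quota-1", "ns-1")
	fmt.Println(l.pending) // a separate worker goroutine would drain this
}
```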
@derekwaynecarr @liggitt bump
m.lock.RUnlock()

if !selectorMatches(selector, selectorExists, quota) {
return false, false, true |
how do we know the namespace matches? don't we need to evaluate labelsMatch?
how do we know the namespace matches? don't we need to evaluate labelsMatch?
added
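For the labelsMatch part, a minimal illustration (not the PR's code, and assuming the quota's selector is expressed as an unversioned.LabelSelector, which the actual quotaapi types may not match) of evaluating a namespace's labels against a quota's selector with the 1.3-era helpers:

```go
package main

import (
	"fmt"

	"k8s.io/kubernetes/pkg/api/unversioned"
	"k8s.io/kubernetes/pkg/labels"
)

func main() {
	// Hypothetical inputs: the quota carries a label selector, the namespace plain labels.
	quotaSelector := &unversioned.LabelSelector{MatchLabels: map[string]string{"team": "payments"}}
	namespaceLabels := map[string]string{"team": "payments", "env": "prod"}

	selector, err := unversioned.LabelSelectorAsSelector(quotaSelector)
	if err != nil {
		panic(err)
	}
	// labelsMatch-style check: true only if the namespace's labels satisfy the selector.
	fmt.Println(selector.Matches(labels.Set(namespaceLabels)))
}
```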
Comments addressed. Did you make it all the way through? This upstream has permission to [merge].
Evaluated for origin test up to cca0e89
continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/5881/)
continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/5881/) (Image: devenv-rhel7_4543)
Evaluated for origin merge up to cca0e89
delete(e.dirtyWork, key)

if len(work) != 0 {
e.inProgress[key] = true |
need to set this unconditionally, otherwise any work enqueued before Done() is called will be put into work instead of dirtyWork, and lost when Done() is called and we do e.work[key] = e.dirtyWork[key]
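A self-contained sketch of the pattern being described (a simplified stand-in, not the PR's actual type; field names follow the snippet above): inProgress is set unconditionally when work is handed out, so items added before done() are routed to dirtyWork and survive the copy back.

```go
package main

import "sync"

// bucketQueue is a simplified stand-in for the bucketing queue under review.
type bucketQueue struct {
	lock       sync.Mutex
	work       map[string][]string
	dirtyWork  map[string][]string
	inProgress map[string]bool
}

// get hands back all pending work for key and marks the key in progress.
func (e *bucketQueue) get(key string) []string {
	e.lock.Lock()
	defer e.lock.Unlock()
	work := e.work[key]
	delete(e.work, key)
	delete(e.dirtyWork, key)
	// Set unconditionally: guarding this with `if len(work) != 0` would let
	// items added before done() land in work and be overwritten below.
	e.inProgress[key] = true
	return work
}

// add queues an item, routing it to dirtyWork while the key is being processed.
func (e *bucketQueue) add(key, item string) {
	e.lock.Lock()
	defer e.lock.Unlock()
	if e.inProgress[key] {
		e.dirtyWork[key] = append(e.dirtyWork[key], item)
		return
	}
	e.work[key] = append(e.work[key], item)
}

// done moves anything that accumulated while the key was in progress back onto the main queue.
func (e *bucketQueue) done(key string) {
	e.lock.Lock()
	defer e.lock.Unlock()
	e.work[key] = e.dirtyWork[key]
	delete(e.dirtyWork, key)
	delete(e.inProgress, key)
}

func main() {
	q := &bucketQueue{work: map[string][]string{}, dirtyWork: map[string][]string{}, inProgress: map[string]bool{}}
	q.add("ns1", "a")
	_ = q.get("ns1") // ns1 is now in progress
	q.add("ns1", "b") // routed to dirtyWork, not work
	q.done("ns1")     // "b" is preserved in work for the next get
}
```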
Great work, this LGTM.
This adds the controller to do the priming, handle namespaces coming and going, and handle replenishment.
It's still short on tests, but an algorithmic check is welcome.
@Kargakis