apps: use scale client to scale replication controllers #19310
Conversation
decoder:      decoder,
hookExecutor: stratsupport.NewHookExecutor(client.Core(), tagClient, client.Core(), os.Stdout, decoder),
retryPeriod:  1 * time.Second,
@tnozicka this removes the retryPeriod: we hardcoded it to 1 second anyway, and I don't see any benefit in keeping it customizable since we don't exercise it in tests. FYI
_, scaleErr := s.scaleClient.Scales(deployment.Namespace).Update(kapi.Resource("replicationcontrollers"), curScaleCopy)
	return scaleErr
})

// FIXME: The error admission returns here should be 503 (come back later) or similar.
/cc @sttts
Does the API server report that it is healthy at that point? We are waiting for the informer caches to warm up. This message suggests that the shared informers are not synced yet.
Not sure if the server is reported healthy when the caches are not synced yet, @deads2k?
> not sure if the server is reported healthy when the caches are not synced yet, @deads2k?
Yeah, it's healthy before a sync.
/retest

1 similar comment

/retest
Force-pushed from 724bfc1 to 7edb59b
if !alreadyScaled {
	// FIXME: This should really be a watch, however the scaler client does not implement the watch interface atm.
	err = wait.PollImmediate(1*time.Second, retryTimeout, func() (bool, error) {
		curScale, err := s.scaleClient.Scales(deployment.Namespace).Get(kapi.Resource("replicationcontrollers"), deployment.Name)
If you get to it before this merges, you want k8s.io/api/core/v1.Resource.
/lgtm

That test client is a bit icky.
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: deads2k, mfojtik

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
This switches the kubectl scaler in the deployer pod to use the scale client to scale replication controllers.

It builds on @deads2k's #19275 (only the last commit is new) and replaces #19299 and #19296.

There was a discussion under #19299 about mutating the spec.replicas field on the RC directly, which has the benefit of not needing a new client and of allowing atomic updates if other fields must change during the scaling operation. The scale client has the benefit of not needing to encode the entire RC when updating a single field, and it is the preferred way to execute the "scale" operation upstream (the HPA uses the same mechanism).
I will squash this after @deads2k's PR merges.
/cc @deads2k @tnozicka