
apps: replace kubectl scaler in deployer with direct client call and polling #19299

Closed
wants to merge 1 commit

Conversation

@mfojtik
Contributor

mfojtik commented Apr 10, 2018

Alternative to #19296

@deads2k I'm not sure I did the wiring for the client right; it seems to require more setup than the usual client does ;-)

Also, fixing the unit tests might be hell :)

openshift-ci-robot added the size/L label (Denotes a PR that changes 100-499 lines, ignoring generated files.) on Apr 10, 2018
@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: mfojtik

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

openshift-ci-robot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files.) on Apr 10, 2018
@mfojtik
Contributor Author

mfojtik commented Apr 10, 2018

@deads2k it feels like a little overkill to use this client, but if it gets me through the broken conversion of autoscaling in RC UpdateScale(), I don't care ;)

```go
	if err != nil {
		return err
	}
	cachedDiscovery := discocache.NewMemCacheClient(kubeExternalClient.Discovery())
```
mfojtik (Contributor, Author) commented on this diff:

@deads2k this is what I'm talking about...
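
For context, a rough sketch of the kind of wiring the scale client needs, modeled on how kube-controller-manager wires it for the HPA around Kubernetes 1.10 (exact package paths and signatures differ across client-go versions, and the names here are illustrative rather than the code in this PR):

```go
import (
	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/client-go/discovery"
	discocache "k8s.io/client-go/discovery/cached"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/scale"
)

// newScaleClient builds a ScalesGetter on top of an existing external clientset
// and its rest.Config. Discovery results are cached in memory so the RESTMapper
// and scale-kind resolution don't hammer the API server.
func newScaleClient(cfg *rest.Config, kubeExternalClient kubernetes.Interface) (scale.ScalesGetter, error) {
	cachedDiscovery := discocache.NewMemCacheClient(kubeExternalClient.Discovery())
	restMapper := discovery.NewDeferredDiscoveryRESTMapper(cachedDiscovery, meta.InterfacesForUnstructured)
	scaleKindResolver := scale.NewDiscoveryScaleKindResolver(kubeExternalClient.Discovery())
	return scale.NewForConfig(cfg, restMapper, dynamic.LegacyAPIPathResolverFunc, scaleKindResolver)
}
```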

@mfojtik
Contributor Author

mfojtik commented Apr 10, 2018

(tested manually and this works ;-)

```diff
 // This error is returned when the lifecycle admission plugin cache is not fully
 // synchronized. In that case the scaling should be retried.
 //
 // FIXME: The error returned from admission should not be forbidden but come-back-later error.
-if errors.IsForbidden(scaleErr) && strings.Contains(scaleErr.Error(), "not yet ready to handle request") {
+if errors.IsForbidden(updateScaleErr) && strings.Contains(updateScaleErr.Error(), "not yet ready to handle request") {
```
mfojtik (Contributor, Author) commented on this diff:

@deads2k not sure if we need this with the scale client
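
If this retry does turn out to still be needed with the scale client, one way to sketch it is with retry.OnError from k8s.io/client-go/util/retry (present in newer client-go versions; the backoff and the shape of the update closure are illustrative, not what this PR does):

```go
import (
	"strings"

	"k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/util/retry"
)

// isCacheNotReady matches the forbidden error returned while the lifecycle
// admission plugin cache is still synchronizing; such errors are retriable.
func isCacheNotReady(err error) bool {
	return errors.IsForbidden(err) && strings.Contains(err.Error(), "not yet ready to handle request")
}

// updateScaleWithRetry retries the scale update only for the "cache not yet
// ready" case; any other error is returned immediately.
func updateScaleWithRetry(updateScale func() error) error {
	return retry.OnError(retry.DefaultBackoff, isCacheNotReady, updateScale)
}
```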

mfojtik (Contributor, Author) commented on this diff:
@deads2k saw your pkg/apps/util client, will move to that when your PR merges.

@tnozicka
Contributor

@mfojtik @deads2k I am starting to be in favor of not using the /scale subresource at all, because we don't need it. We have an RC client, we need it (and the permissions) for other stuff, and it's already plumbed in - just use it to edit RC.spec.replicas and be done with it. We know it's an RC, so we don't need to go the long way.
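
For comparison, the direct-edit path described above is roughly this against the core v1 client of that era (no context argument yet; a sketch with illustrative names, not code from this PR):

```go
import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
)

// setReplicasDirectly edits RC.spec.replicas with the full RC client,
// exactly as suggested above: get, mutate, update.
func setReplicasDirectly(rcClient corev1client.ReplicationControllersGetter, namespace, name string, replicas int32) error {
	rc, err := rcClient.ReplicationControllers(namespace).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	rc.Spec.Replicas = &replicas
	_, err = rcClient.ReplicationControllers(namespace).Update(rc)
	return err
}
```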

@mfojtik
Contributor Author

mfojtik commented Apr 10, 2018

@tnozicka I'm for the scale client because:

  1. It is the preferred way upstream to scale resources that support scaling.
  2. The HPA/autoscaler uses it, so this proves it works for them as well.
  3. We only scale here; we are not updating other RC fields.

@deads2k are there any other things the scale client does better than editing the replicas directly?
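
For reference, going through the scale client looks roughly like this against the client-go scale API of that era (newer versions add context and options arguments; the GroupResource and names here are illustrative):

```go
import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/scale"
)

// scaleRC updates only the /scale subresource of the replication controller,
// without decoding or encoding the rest of the RC object.
func scaleRC(scales scale.ScalesGetter, namespace, name string, replicas int32) error {
	rcGR := schema.GroupResource{Group: "", Resource: "replicationcontrollers"}
	s, err := scales.Scales(namespace).Get(rcGR, name)
	if err != nil {
		return err
	}
	s.Spec.Replicas = replicas
	_, err = scales.Scales(namespace).Update(rcGR, s)
	return err
}
```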

@tnozicka
Contributor

The point of the /scale subresource (and client) is to replace full clients for callers that don't need to read or edit the rest - to strip down permissions and allow discoverability. If you already have the full client, there is no point in creating the restricted one.

@openshift-ci-robot

@mfojtik: The following tests failed, say /retest to rerun them all:

| Test name | Commit | Details | Rerun command |
| --- | --- | --- | --- |
| ci/openshift-jenkins/unit | 69dd78c | link | /test unit |
| ci/openshift-jenkins/extended_conformance_install | 69dd78c | link | /test extended_conformance_install |
| ci/openshift-jenkins/gcp | 69dd78c | link | /test gcp |

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@deads2k
Contributor

deads2k commented Apr 10, 2018

> The point of the /scale subresource (and client) is to replace full clients for callers that don't need to read or edit the rest - to strip down permissions and allow discoverability. If you already have the full client, there is no point in creating the restricted one.

The point is to avoid encoding knowledge of struct types when you only need to encode knowledge of "scale this thing". It's not about which clients you have access to.

@tnozicka
Contributor

> The point is to avoid encoding knowledge of struct types when you only need to encode knowledge of "scale this thing". It's not about which clients you have access to.

If you are working with a particular known resource, knowing that scaling is done via spec.replicas is equivalent to knowing it is done via /scale. (Both are actually conventions.)

My point is that when you have the full client and all the knowledge about the resource, creating a client that is a strict subset of the one you already have seems weird to me.

@tnozicka
Contributor

Also, if you need to update an annotation, e.g. like upstream does in
https://github.com/kubernetes/kubernetes/blob/1fa06a6bd43e91b679e81202160190c0e7c7881b/pkg/controller/deployment/sync.go#L404-L423
what would be the benefit of doing 2 API calls with 2 clients and losing atomicity?

I see no benefit in using a scale client, only downsides; upstream doesn't use it.
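
To illustrate the atomicity point: with the full RC client, the replica count and an annotation can land in a single Update call (a sketch only; the annotation key is made up for illustration and is not something this PR or upstream uses):

```go
import (
	"strconv"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
)

// scaleWithAnnotation bumps spec.replicas and records the desired count in an
// annotation in one Update call, so both either land together or not at all.
// "example.openshift.io/desired-replicas" is a hypothetical key used only here.
func scaleWithAnnotation(rcClient corev1client.ReplicationControllersGetter, namespace, name string, replicas int32) error {
	rc, err := rcClient.ReplicationControllers(namespace).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if rc.Annotations == nil {
		rc.Annotations = map[string]string{}
	}
	rc.Annotations["example.openshift.io/desired-replicas"] = strconv.Itoa(int(replicas))
	rc.Spec.Replicas = &replicas
	_, err = rcClient.ReplicationControllers(namespace).Update(rc)
	return err
}
```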

@mfojtik
Contributor Author

mfojtik commented Apr 10, 2018

@tnozicka we don't set any annotations in the deployer during scaling, right? For the deployer, the scale is usually "I want to get from N to M". The scaleAndWait (used by both strategies we have) performs just the scale and then waits for the scale to take effect.

I would say that even upstream, they should scale first, then wait for the scale to succeed (i.e., the RS gets the replicas it should have) and then update the annotation to reflect the new state. I don't think the upstream operation must be atomic (in this case?).
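
The "wait for the scale to take effect" half can be sketched as a simple poll (illustrative only; the real scaleAndWait in the deployer checks more than this, and the interval and timeout here are made up):

```go
import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
)

// waitForReplicas polls the RC until its status reports the desired number of
// replicas (and the controller has observed the latest spec), or the timeout expires.
func waitForReplicas(rcClient corev1client.ReplicationControllersGetter, namespace, name string, desired int32) error {
	return wait.PollImmediate(1*time.Second, 2*time.Minute, func() (bool, error) {
		rc, err := rcClient.ReplicationControllers(namespace).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return rc.Status.ObservedGeneration >= rc.Generation && rc.Status.Replicas == desired, nil
	})
}
```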

@mfojtik
Contributor Author

mfojtik commented Apr 10, 2018

I don't have a strong opinion about the mechanics of how the deployer should scale, since the DC is technical debt and we are just moving from one scaler mechanism that no longer works for us to a new one and plumbing it in... If the scale client offers a way to scale without needing to decode/encode the entire RC just to get one field updated, I see that as a benefit. If we are ever going to switch DC to use RS (for example, if we decide to migrate existing DC to D?), then this will work nicely.

Again, I don't have strong opinions either way.

@deads2k
Contributor

deads2k commented Apr 10, 2018

> I don't have a strong opinion about the mechanics of how the deployer should scale, since the DC is technical debt and we are just moving from one scaler mechanism that no longer works for us to a new one and plumbing it in... If the scale client offers a way to scale without needing to decode/encode the entire RC just to get one field updated, I see that as a benefit. If we are ever going to switch DC to use RS (for example, if we decide to migrate existing DC to D?), then this will work nicely.

Unless we have a strong technical reason to avoid it, we should use the scale client to scale a workload object.
