
apps: replace kubectl scaler in deployer with direct client call and polling #19296

Closed
wants to merge 1 commit

Conversation

@mfojtik (Contributor) commented Apr 10, 2018

This replaces the kubectl generic scaler in the deployer with direct client calls and adds a retry mechanism for conflict cases and cold caches.

/cc @tnozicka

Waiting for @deads2k's #19275 to merge; this will then be rebased on top.
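Roughly, the new pattern looks like the following minimal sketch (not the exact code in this PR; the `scaleRC` name, the `retryTimeout` parameter, and the package wiring are illustrative, with types assumed from the internal clientset imports in the diff):

```go
package deployer

import (
	"time"

	kerrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"

	kcoreclient "k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset/typed/core/internalversion"
)

// scaleRC scales the replication controller to the desired replica count by
// calling the typed client directly, polling so that conflicts and a cold
// cache (NotFound right after creation) are retried instead of failing hard.
func scaleRC(rcClient kcoreclient.ReplicationControllerInterface, name string, replicas int32, retryTimeout time.Duration) error {
	return wait.PollImmediate(1*time.Second, retryTimeout, func() (bool, error) {
		rc, err := rcClient.Get(name, metav1.GetOptions{})
		if err != nil {
			if kerrors.IsNotFound(err) {
				// cold cache: the RC may not be visible yet, keep polling
				return false, nil
			}
			return false, err
		}
		rc.Spec.Replicas = replicas
		if _, err := rcClient.Update(rc); err != nil {
			if kerrors.IsConflict(err) {
				// someone else updated the RC in the meantime; re-get and retry
				return false, nil
			}
			return false, err
		}
		return true, nil
	})
}
```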

@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: mfojtik

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot openshift-ci-robot added approved Indicates a PR has been approved by an approver from all required OWNERS files. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Apr 10, 2018
kapi "k8s.io/kubernetes/pkg/apis/core"
kclientset "k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset"
kcoreclient "k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset/typed/core/internalversion"
"k8s.io/kubernetes/pkg/kubectl"
"k8s.io/kubernetes/staging/src/k8s.io/apimachinery/pkg/api/errors"
Contributor

bad import

Contributor Author

Seems like the new GoLand has been screwing these up recently.

}
if scaler.RetryCount != 2 {
Contributor Author

@tnozicka @deads2k Not sure how to test this case... I can add a stub into the strategy, but I find it ugly to have a counter there ;(

Contributor

> @tnozicka @deads2k Not sure how to test this case... I can add a stub into the strategy, but I find it ugly to have a counter there ;(

Testing the client actions like you're doing seems fair.
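For example, a rough sketch along those lines (assumed names; it drives the hypothetical scaleRC helper sketched in the description above, whereas a real test would call the PR's scaler, and then asserts on the fake client's recorded actions instead of a counter in the strategy):

```go
package deployer

import (
	"fmt"
	"testing"
	"time"

	kerrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	clientgotesting "k8s.io/client-go/testing"

	kapi "k8s.io/kubernetes/pkg/apis/core"
	"k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset/fake"
)

func TestScaleRetriesOnConflict(t *testing.T) {
	rc := &kapi.ReplicationController{ObjectMeta: metav1.ObjectMeta{Name: "config-1", Namespace: "test"}}
	client := fake.NewSimpleClientset(rc)

	// Fail the first two updates with a conflict, then fall through to the
	// default object tracker so the third attempt succeeds.
	conflicts := 2
	client.PrependReactor("update", "replicationcontrollers", func(action clientgotesting.Action) (bool, runtime.Object, error) {
		if conflicts > 0 {
			conflicts--
			gr := schema.GroupResource{Resource: "replicationcontrollers"}
			return true, nil, kerrors.NewConflict(gr, rc.Name, fmt.Errorf("try again"))
		}
		return false, nil, nil
	})

	if err := scaleRC(client.Core().ReplicationControllers("test"), rc.Name, 1, 30*time.Second); err != nil {
		t.Fatalf("unexpected error: %v", err)
	}

	// Assert on the recorded client actions: two conflicted updates plus the
	// final successful one.
	updates := 0
	for _, action := range client.Actions() {
		if action.Matches("update", "replicationcontrollers") {
			updates++
		}
	}
	if updates != 3 {
		t.Errorf("expected 3 update attempts, got %d", updates)
	}
}
```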

return true, nil
}
// Update replication controller scale
err := wait.PollImmediate(1*time.Second, retryTimeout, func() (bool, error) {
Contributor

If you're already here, how about a scaleClient? It doesn't support watch, but you aren't watching anyway.

Contributor Author

@deads2k I want to add watch once @tnozicka's upstream PR fixing watch merges. I think we agreed with Tomas that this is just temporary.

Contributor

> @deads2k I want to add watch once @tnozicka's upstream PR fixing watch merges. I think we agreed with Tomas that this is just temporary.

We don't have watch on subresources. You really want to avoid using the generic scale client?

Contributor Author

@deads2k What difference does it make? Are GetScale() and UpdateScale() going away?

Contributor Author

Actually, I might consider the generic scale client... something tells me this:

error: couldn't scale router-1 to 1: autoscaling.Scale is not suitable for converting to "v1"

might be a bug in the generated client?
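For reference, a rough sketch of what the generic scale client variant could look like (the ScalesGetter wiring, names, and error handling here are assumptions, not code from this PR):

```go
package deployer

import (
	"time"

	kerrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/util/wait"
	scaleclient "k8s.io/client-go/scale"
)

// scaleWithScaleClient goes through the polymorphic scale subresource instead
// of updating the replication controller object directly.
func scaleWithScaleClient(scales scaleclient.ScalesGetter, namespace, name string, replicas int32, retryTimeout time.Duration) error {
	gr := schema.GroupResource{Group: "", Resource: "replicationcontrollers"}
	return wait.PollImmediate(1*time.Second, retryTimeout, func() (bool, error) {
		scale, err := scales.Scales(namespace).Get(gr, name)
		if err != nil {
			if kerrors.IsNotFound(err) {
				// cold cache: keep polling until the object is visible
				return false, nil
			}
			return false, err
		}
		scale.Spec.Replicas = replicas
		if _, err := scales.Scales(namespace).Update(gr, scale); err != nil {
			if kerrors.IsConflict(err) {
				// concurrent update; re-read the scale and retry
				return false, nil
			}
			return false, err
		}
		return true, nil
	})
}
```

Note that the conversion error quoted above suggests the scale client path still needs the right scheme/converter wired in, so this is only the shape of the call, not a drop-in fix.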

@openshift-ci-robot commented Apr 10, 2018

@mfojtik: The following tests failed, say /retest to rerun them all:

| Test name | Commit | Details | Rerun command |
| --- | --- | --- | --- |
| ci/openshift-jenkins/extended_conformance_install | b6ee713 | link | /test extended_conformance_install |
| ci/openshift-jenkins/end_to_end | b6ee713 | link | /test end_to_end |
| ci/openshift-jenkins/gcp | b6ee713 | link | /test gcp |

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.

3 participants