1.12.2 rebase #67
Conversation
Origin-commit: 7331c6412a9ef1b23155d7fd928f4ddc6961a05b
Origin-commit: a5fade4cb1bb90919a356defa541a4f8ec7d5bb8
…m to allow multiple containers to union for swagger
:100644 100644 b32534e... 3e694fc... M pkg/controller/serviceaccount/tokens_controller.go
Doesn't offer enough use, complicates creation and setup.
:000000 100644 0000000000... 748bed00fb... A staging/src/k8s.io/apiserver/pkg/admission/patch.go
:100644 100644 dd1368d4dd... 0e422f990c... M staging/src/k8s.io/apiserver/pkg/admission/plugins.go
:100644 100644 9fa52a0021... dab9e7233d... M staging/src/k8s.io/apiserver/pkg/server/config.go
:100644 100644 9be725d4e2... 65e2ed1670... M staging/src/k8s.io/apiserver/pkg/server/genericapiserver.go
:100644 100644 08e342ef56... 9218a1e185... M staging/src/k8s.io/apiserver/pkg/server/routes/swagger.go
…ore than one is present
:100644 100644 6dab988e52... 74b9cce53a... M staging/src/k8s.io/apiserver/pkg/server/config.go
:100644 100644 dd9ab0dcfe... fcaec9e519... M staging/src/k8s.io/apiserver/pkg/server/routes/version.go
:100644 100644 4e0ce16e85... 68d79a7878... M test/e2e/common/downwardapi_volume.go
Origin-commit: 495b8f4f7563cfee27824bb98bc73fabc4add064
Origin-commit: 33a71aff9bb4e204bf2e15af4cdfb5bd0525ce4e
Force-pushed from d6eaeb8 to c5e2820
Yes, looking at the other one...
After we land the rebase, we should consider reverting:
lgtm
And openshift/origin#20158 removed the need for
The feature gate is not yet enabled and may not be for several releases. Pod team owns allowing this to be used.
We are still keeping RBAC policies and service accounts installed. This is just to avoid duplicating that work in ansible for now.
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1602793
Origin-commit: f9d325875a67ddc89f20fd2597199859e864d2a6
Origin-commit: 557317b91e4032718f1b8d1f60a307f28403afe8
…ility Origin-commit: 183457c62577085c312f9f694a241bbe18aa0ae8
…njob e2e Origin-commit: fcc740f1c4a6b50b953921287259e6afc0b88f4a
openshift-io/node-selector if scheduler.alpha.kubernetes.io/node-selector is set. Origin-commit: f2d078606421a611f377038149dd161a6263e04f
Origin-commit: d6648903cd21a8fe333c2572dda003ac78760b12
Origin-commit: 8b9969bab1311082afd90a0018376b2731e758f1
Origin-commit: 653bec41e858a8086f9b3b45d1f9d7f9ce703a9a
Signed-off-by: Mrunal Patel <[email protected]> Origin-commit: 075640e111e0316b775bb4a6d0dcae5d9fc8f389
We are seeing flakes where the pod event isn't yet visible when we check for it, leading to test failure.
Signed-off-by: Mrunal Patel <[email protected]>
Origin-commit: 1f7577f947464fd386981f770437f9461aa1bee3
Origin-commit: 10523f8f8001565ac4de1c4c0b0fdb241ffe0b37
With CRI-O we've been hitting a lot of flakes with the following test:

[sig-apps] CronJob should remove from active list jobs that have been deleted

The events shown in the test failures in both kube and openshift were the following:

STEP: Found 13 events.
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:14:02 +0000 UTC - event for forbid: {cronjob-controller } SuccessfulCreate: Created job forbid-1540412040
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:14:02 +0000 UTC - event for forbid-1540412040: {job-controller } SuccessfulCreate: Created pod: forbid-1540412040-z7n7t
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:14:02 +0000 UTC - event for forbid-1540412040-z7n7t: {default-scheduler } Scheduled: Successfully assigned e2e-tests-cronjob-rjr2m/forbid-1540412040-z7n7t to 127.0.0.1
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:14:03 +0000 UTC - event for forbid-1540412040-z7n7t: {kubelet 127.0.0.1} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:14:03 +0000 UTC - event for forbid-1540412040-z7n7t: {kubelet 127.0.0.1} Created: Created container
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:14:03 +0000 UTC - event for forbid-1540412040-z7n7t: {kubelet 127.0.0.1} Started: Started container
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:14:12 +0000 UTC - event for forbid: {cronjob-controller } MissingJob: Active job went missing: forbid-1540412040
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:15:02 +0000 UTC - event for forbid: {cronjob-controller } SuccessfulCreate: Created job forbid-1540412100
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:15:02 +0000 UTC - event for forbid-1540412100: {job-controller } SuccessfulCreate: Created pod: forbid-1540412100-rq89l
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:15:02 +0000 UTC - event for forbid-1540412100-rq89l: {default-scheduler } Scheduled: Successfully assigned e2e-tests-cronjob-rjr2m/forbid-1540412100-rq89l to 127.0.0.1
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:15:06 +0000 UTC - event for forbid-1540412100-rq89l: {kubelet 127.0.0.1} Started: Started container
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:15:06 +0000 UTC - event for forbid-1540412100-rq89l: {kubelet 127.0.0.1} Created: Created container
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:15:06 +0000 UTC - event for forbid-1540412100-rq89l: {kubelet 127.0.0.1} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine

The code in the test is racy because the Forbid policy can still let the controller create a new pod for the cronjob. CRI-O is fast at re-creating the pod, and by the time the test code reaches the check, it fails. The test steps are as follows:

[It] should remove from active list jobs that have been deleted
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:192
STEP: Creating a ForbidConcurrent cronjob
STEP: Ensuring a job is scheduled
STEP: Ensuring exactly one is scheduled
STEP: Deleting the job
STEP: deleting Job.batch forbid-1540412040 in namespace e2e-tests-cronjob-rjr2m, will wait for the garbage collector to delete the pods
Oct 24 20:14:02.533: INFO: Deleting Job.batch forbid-1540412040 took: 2.699182ms
Oct 24 20:14:02.634: INFO: Terminating Job.batch forbid-1540412040 pods took: 100.223228ms
STEP: Ensuring job was deleted
STEP: Ensuring there are no active jobs in the cronjob
[AfterEach] [sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:148

It looks clear that by the time we're ensuring that there are no more active jobs, a new job could _already_ have been spun up, making the test flaky. This PR fixes the above by making sure that the _deleted_ job is no longer in the Active list; another pod may already be running with a different UID, which is fine for the purpose of the test.
Signed-off-by: Antonio Murdaca <[email protected]>
Perform bootstrapping in the background when client cert rotation is on, enabling static pods to start before a control plane is reachable.
Force-pushed from c5e2820 to c94ff00
SGTM. I've fixed the unit tests, so this is currently green in units. I'll leave it open until I get reasonable proof it's working as it should in origin.
The following commits were dropped and need careful examination:
UPSTREAM: <carry>: patch in a non-standard location for apiservices (@deads2k)
UPSTREAM: <drop>: make RootFsInfo error non-fatal on start (@sjenning)
Additionally, do we still need these commits:
UPSTREAM: <drop>: hack to "fix" period problem. (@sjenning @deads2k)
UPSTREAM: <carry>: coerce string->int, empty object -> slice for backwards compatibility (@deads2k)
The full pick list: https://docs.google.com/spreadsheets/d/1xi5SNL96wqBlIpuIB7d4vRlhBNawhoe2-MWDZMbL6Ig/edit?usp=sharing
@openshift/sig-master
@deads2k
@smarterclayton for c8921588f8068ca61e9703da7782fd5b95452080