
UPSTREAM: Attach/detach PV bug fixes #14119

Merged: 6 commits merged into openshift:master, May 13, 2017

Conversation

@wongma7 (Contributor) commented May 9, 2017:

Not all of the commits applied cleanly; notably, most of 39732's changes to attach_detach_controller.go had to be applied manually, presumably because git couldn't figure out the offset. It wasn't tricky or anything, but it still needs to be reviewed.

@wongma7 (Contributor, author) commented May 9, 2017:

@childsb

@childsb (Contributor) commented May 9, 2017:

[test]

Review thread on the following snippet:

```go
if !kcache.WaitForCacheSync(stopCh, adc.podsSynced, adc.nodesSynced, adc.pvcsSynced, adc.pvsSynced) {
	return
}
```
Reviewer (Member):

I thought there should be some error logged here if this fails. no?

wongma7 (author):

No, it's absent from upstream too. There will at least be an unexpected "Shutting down attach detach controller" log line. Anyway, I think if this failed, something catastrophic would have had to happen to the controller-manager.

wongma7 (author), May 10, 2017:

Shoot, I just realized you're right about this: it wouldn't hurt to have runtime.HandleError(fmt.Errorf("timed out waiting for caches to sync")) here. Should I add it now, since the merge failed? :)

edit: nvm, like I said earlier it would have to be some catastrophic stuff happening for it to matter...I only realized this because of the upstream 1.6 cherry-pick

wongma7 (author):

OK, I just went ahead and added it since tests keep failing; it's in the ose backport, so may as well do it here.

@wongma7 (Contributor, author) commented May 9, 2017:

From scripts/verify-generated-swagger-spec/logs/openshift.log, it fataled with:

F0509 19:02:25.563534 30049 auth.go:161] etcdserver: not capable

#13224

edit: correct issue #14063

@wongma7 (Contributor, author) commented May 9, 2017:

[test]

1 similar comment
@gnufied (Member) commented May 9, 2017:

[test]

@wongma7 (Contributor, author) commented May 10, 2017:

[test] needed again; flake here: #14110

@childsb (Contributor) commented May 10, 2017:

[merge]

@openshift-bot (Contributor) commented May 10, 2017:

continuous-integration/openshift-jenkins/merge Waiting: You are in the build queue at position: 6

@wongma7 (Contributor, author) commented May 10, 2017:

2017-05-10T21:01:17.042425000Z Error removing intermediate container c98a0f6d1a9d: Driver devicemapper failed to remove root filesystem c98a0f6d1a9dc4262e41fd7ee496262f2a47d834704f62023dda669fdb82305f: failed to remove device 4c10136014d710097d4fd458e6b1d530b70d255b1614a8187544633023bc263d:Device is Busy

@jsafrane (Contributor) commented:

[test]

1 similar comment
@jsafrane (Contributor) commented:

[test]

@wongma7 (Contributor, author) commented May 11, 2017:

[merge]

1 similar comment
@jsafrane (Contributor) commented:

[merge]

@openshift-bot (Contributor):

Evaluated for origin merge up to 0ade7d0

@wongma7 (Contributor, author) commented May 11, 2017:

The test failure revealed a PR we missed, kubernetes/kubernetes#43289

The issue is that it's possible for nodeAdd to fire before the dswp (desired state of world populator) is initialized, causing a nil pointer exception.

We ought to port kubernetes/kubernetes#44760 over too; working on it.

@wongma7 (Contributor, author) commented May 11, 2017:

[test] *

@openshift-bot (Contributor):

Evaluated for origin test up to 8b7ba77

@openshift-bot (Contributor):

continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin/1377/) (Base Commit: f24a57f)

@jsafrane (Contributor) commented:

[merge]

@jsafrane (Contributor) commented:

[merge]!

@gnufied (Member) commented May 13, 2017:

This is still at #7 in merge-queue. :(

@eparis (Member) commented May 13, 2017:

This already went into 3.5, so I'm merging it by hand.

@eparis eparis merged commit c3ffdb0 into openshift:master May 13, 2017