
"oc cluster up --version v3.6.1" is failing with oc v3.6.0 binary #17821

Closed
LalatenduMohanty opened this issue Dec 15, 2017 · 4 comments
Assignees
Labels
component/cluster-up kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. priority/P3

Comments

@LalatenduMohanty
Member

The v3.6.0 oc binary cannot provision an OpenShift v3.6.1 instance via oc cluster up --version v3.6.1.

Version

[provide output of the openshift version or oc version command]

Steps To Reproduce
  1. Get oc binary of v3.6.0
  2. Run oc cluster up --version v3.6.1
Current Result
$ oc cluster up --version v3.6.1
Starting OpenShift using openshift/origin:v3.6.1 ...
Pulling image openshift/origin:v3.6.1
Pulled 1/4 layers, 26% complete
Pulled 2/4 layers, 71% complete
Pulled 3/4 layers, 85% complete
Pulled 4/4 layers, 100% complete
Extracting
Image pull complete
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... OK
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v3.6.1 image ...
   Pulling image openshift/origin:v3.6.1
   Pulled 1/4 layers, 26% complete
   Pulled 2/4 layers, 71% complete
   Pulled 3/4 layers, 85% complete
   Pulled 4/4 layers, 100% complete
   Extracting
   Image pull complete
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ... OK
-- Checking type of volume mount ...
   Using Docker shared volumes for OpenShift volumes
-- Creating host directories ... OK
-- Finding server IP ...
   Using 127.0.0.1 as the server IP
-- Starting OpenShift container ...
   Creating initial OpenShift configuration
   Starting OpenShift using container 'origin'
FAIL
   Error: could not start OpenShift container "origin"
   Details:
     No log available from "origin" container
Expected Result

oc cluster up --version v3.6.1 should complete without any failure in its output.
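As the comment below notes, the failure comes down to running an older oc client against a newer origin image. A minimal pre-flight sketch of that check, with both version strings hard-coded from this report (in practice CLIENT_VERSION would be parsed from the oc version output):

```shell
# Hypothetical pre-flight guard: both version strings are taken from this
# report; a real script would obtain CLIENT_VERSION from `oc version`.
CLIENT_VERSION="v3.6.0"
REQUESTED_VERSION="v3.6.1"
if [ "$CLIENT_VERSION" != "$REQUESTED_VERSION" ]; then
  echo "warning: oc client $CLIENT_VERSION does not match requested image $REQUESTED_VERSION"
  echo "cluster up may fail; consider downloading the matching oc release"
fi
```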

@praveenkumar
Contributor

After oc cluster up fails, we got the following logs from the origin container. The underlying error was already fixed in #15868 and the corresponding BZ https://bugzilla.redhat.com/show_bug.cgi?id=1481801; with the newer oc binary it works as expected.

E1215 06:51:37.188306    3087 controllermanager.go:337] Server isn't healthy yet.  Waiting a little while.
2017-12-15 06:51:37.193513 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:4001: getsockopt: connection refused"; Reconnecting to {127.0.0.1:4001 <nil>}
F1215 06:51:37.197571    3087 start_allinone.go:99] Server could not start: Couldn't init admission plugin "openshift.io/ImagePolicy": [openshift.io/ImagePolicy.resolutionRules[0].policy: Required value: a policy must be specified for this resource, openshift.io/ImagePolicy.resolutionRules[1].policy: Required value: a policy must be specified for this resource, openshift.io/ImagePolicy.resolutionRules[2].policy: Required value: a policy must be specified for this resource, openshift.io/ImagePolicy.resolutionRules[3].policy: Required value: a policy must be specified for this resource, openshift.io/ImagePolicy.resolutionRules[4].policy: Required value: a policy must be specified for this resource]
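For scripting around this mismatch, the client version can be extracted from oc version-style output and compared against the tag passed to --version. A sketch, assuming the v3.6-era output format (oc vX.Y.Z+commit); the sample text and commit hashes below are illustrative, not from this report:

```shell
# Parse the client version from `oc version`-style output.
# The sample output (including the + commit suffixes) is illustrative.
sample_output="oc v3.6.0+c4dd4cf
kubernetes v1.6.1+5115d708d7"
# Take the second field of the "oc ..." line and strip the commit suffix.
client_version=$(printf '%s\n' "$sample_output" | awk '/^oc /{print $2}' | cut -d+ -f1)
echo "client version: $client_version"
```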

@pweil- pweil- added component/cluster-up kind/bug Categorizes issue or PR as related to a bug. priority/P3 labels Jan 3, 2018
@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot openshift-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 3, 2018
@openshift-bot
Contributor

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot openshift-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 3, 2018
@openshift-bot
Contributor

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close


6 participants