unable to mount volume in router pod with "oc cluster up" on mac #15951

Closed

andreabattaglia opened this issue Aug 24, 2017 · 18 comments

Labels: component/cluster-up, component/storage, kind/bug, lifecycle/rotten, priority/P2

@andreabattaglia

To get a local installation of OpenShift on my MacBook Pro I've used the "oc cluster up" feature.
The Docker version is 1.12.3 (I've also tried the latest one, 17.06-ce, but nothing changed).
The oc client version (downloaded from developers.redhat.com) is 3.6.173.0.5.
The macOS version is OS X El Capitan 10.11.6 (15G1611).

To simplify the setup, I'm using the oc-cluster-wrapper script (this is not relevant, but I'll try to provide the whole set of info I've got).

Version

oc v3.6.173.0.5
kubernetes v1.6.1+5115d708d7
features: Basic-Auth

Server https://127.0.0.1:8443
openshift v3.6.173.0.5
kubernetes v1.6.1+5115d708d7

Steps To Reproduce
  1. Install Docker
  2. Download the oc client and copy it to /usr/local/bin
  3. Set up an insecure registry in Docker
  4. Download oc-cluster-wrapper from GitHub (https://github.com/openshift-evangelists/oc-cluster-wrapper)
  5. Run "oc-cluster up" in your terminal (a shell sketch of these steps follows the list)
Current Result

Running the "oc-cluster up" command into my terminal I get the following output:

----- BEGIN
$ oc-cluster up
Using client for ocp v3.6.173.0.5
Using default profile
[INFO] Created self signed certs. You can avoid self signed certificates warnings by trusting this certificate: /Users/andreabattaglia/.oc/certs/master.server.crt
[INFO] Running a new cluster
[INFO] Shared certificates copied into the cluster
oc cluster up --version v3.6.173.0.5 --image registry.access.redhat.com/openshift3/ose --public-hostname 127.0.0.1 --routing-suffix apps.127.0.0.1.nip.io --host-data-dir /Users/andreabattaglia/.oc/profiles/default/data --host-config-dir /Users/andreabattaglia/.oc/profiles/default/config --host-pv-dir /Users/andreabattaglia/.oc/profiles/default/pv --use-existing-config -e TZ=CEST
Starting OpenShift using registry.access.redhat.com/openshift3/ose:v3.6.173.0.5 ...
Pulling image registry.access.redhat.com/openshift3/ose:v3.6.173.0.5
Pulled 1/4 layers, 25% complete
Pulled 1/4 layers, 25% complete
Pulled 2/4 layers, 50% complete
Pulled 3/4 layers, 75% complete
Pulled 3/4 layers, 75% complete
Pulled 4/4 layers, 100% complete
Extracting
Image pull complete
OpenShift server started.

The server is accessible via web console at:
https://127.0.0.1:8443

You are logged in as:
User: developer
Password:

To login as administrator:
oc login -u system:admin

-- Any user is sudoer. They can execute commands with '--as=system:admin'
-- 10 Persistent Volumes are available for use
-- User admin has been set as cluster administrator
Switched to context "default".
-- Adding an oc-profile=default label to every generated image so they can be later removed
[INFO] Cluster created succesfully
Restarting openshift. Done
----- END

After more than 10 minutes, the status of the cluster is the following:

----- BEGIN
oc-cluster status
Using client for ocp v3.6.173.0.5
oc cluster running. Current profile
Web console URL: https://127.0.0.1:8443

Config is at host directory /Users/andreabattaglia/.oc/profiles/default/config
Volumes are at host directory /var/lib/origin/openshift.local.volumes
Persistent volumes are at host directory /Users/andreabattaglia/.oc/profiles/default/pv
Data is at host directory /Users/andreabattaglia/.oc/profiles/default/data

Notice: Router is not yet ready

Notice: 1 OpenShift component(s) are not yet ready (see above)
----- END

In the Events section of OpenShift I get the following error (please see the attached image for the full event stack):

----- BEGIN
Unable to mount volumes for pod "docker-registry-1-deploy_default(a369cf7a-8899-11e7-8ae3-62f757198d90)": timeout expired waiting for volumes to attach/mount for pod "default"/"docker-registry-1-deploy". list of unattached/unmounted volumes=[deployer-token-k8j7l]

(screenshot: screen shot 2017-08-24 at 09 35 19)

----- END
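
A quick way to pin down the failing mount from the CLI (a sketch; the pod names come from the error above and the resource dump below):

$ oc login -u system:admin
$ oc get events -n default
$ oc describe pod router-1-deploy -n default
$ oc describe pod docker-registry-1-deploy -n default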
Expected Result

The expectation is to get a working instance of OpenShift in a Docker container.

Additional Information

$ oc adm diagnostics
[Note] Determining if client configuration exists for client/cluster diagnostics
Info: Successfully read a client config file at '/Users/andreabattaglia/.kube/config'
[Note] Could not configure a client with cluster-admin permissions for the current server, so cluster diagnostics will be skipped

[Note] Running diagnostic: ConfigContexts[myproject/192-168-64-3:8443/system:admin]
Description: Validate client config context is complete and has connectivity

ERROR: [DCli0010 from diagnostic ConfigContexts@openshift/origin/pkg/diagnostics/client/config_contexts.go:285]
For client config context 'myproject/192-168-64-3:8443/system:admin':
The server URL is 'https://192.168.64.3:8443'
The user authentication is 'system:admin/192-168-64-3:8443'
The current project is 'myproject'
(*url.Error) Get https://192.168.64.3:8443/api: dial tcp 192.168.64.3:8443: i/o timeout

   This means that when we tried to connect to the master API server,
   we could not reach the host at all.
   * You may have specified the wrong host address.
   * This could mean the host is completely unavailable (down).
   * This could indicate a routing problem or a firewall that simply
     drops requests rather than responding by resetting the connection.
   * It does not generally mean that DNS name resolution failed (which
     would be a different error) though the problem could be that it
     gave the wrong address.

[Note] Running diagnostic: ConfigContexts[myproject/192-168-64-3:8443/developer]
Description: Validate client config context is complete and has connectivity

ERROR: [DCli0010 from diagnostic ConfigContexts@openshift/origin/pkg/diagnostics/client/config_contexts.go:285]
For client config context 'myproject/192-168-64-3:8443/developer':
The server URL is 'https://192.168.64.3:8443'
The user authentication is 'developer/192-168-64-3:8443'
The current project is 'myproject'
(*url.Error) Get https://192.168.64.3:8443/api: dial tcp 192.168.64.3:8443: i/o timeout

   This means that when we tried to connect to the master API server,
   we could not reach the host at all.
   * You may have specified the wrong host address.
   * This could mean the host is completely unavailable (down).
   * This could indicate a routing problem or a firewall that simply
     drops requests rather than responding by resetting the connection.
   * It does not generally mean that DNS name resolution failed (which
     would be a different error) though the problem could be that it
     gave the wrong address.

[Note] Running diagnostic: ConfigContexts[/127-0-0-1:8443/developer]
Description: Validate client config context is complete and has connectivity

Info: For client config context '/127-0-0-1:8443/developer':
The server URL is 'https://127.0.0.1:8443'
The user authentication is 'developer/127-0-0-1:8443'
The current project is 'default'
Successfully requested project list; has access to project(s):
[myproject]

[Note] Running diagnostic: ConfigContexts[default/127-0-0-1:8443/system:admin]
Description: Validate client config context is complete and has connectivity

Info: For client config context 'default/127-0-0-1:8443/system:admin':
The server URL is 'https://127.0.0.1:8443'
The user authentication is 'system:admin/127-0-0-1:8443'
The current project is 'default'
Successfully requested project list; has access to project(s):
[default kube-public kube-system myproject openshift openshift-infra]

[Note] Running diagnostic: ConfigContexts[default/192-168-64-3:8443/system:admin]
Description: Validate client config context is complete and has connectivity

ERROR: [DCli0010 from diagnostic ConfigContexts@openshift/origin/pkg/diagnostics/client/config_contexts.go:285]
For client config context 'default/192-168-64-3:8443/system:admin':
The server URL is 'https://192.168.64.3:8443'
The user authentication is 'system:admin/127-0-0-1:8443'
The current project is 'default'
(*url.Error) Get https://192.168.64.3:8443/api: dial tcp 192.168.64.3:8443: i/o timeout

   This means that when we tried to connect to the master API server,
   we could not reach the host at all.
   * You may have specified the wrong host address.
   * This could mean the host is completely unavailable (down).
   * This could indicate a routing problem or a firewall that simply
     drops requests rather than responding by resetting the connection.
   * It does not generally mean that DNS name resolution failed (which
     would be a different error) though the problem could be that it
     gave the wrong address.

[Note] Running diagnostic: ConfigContexts[default]
Description: Validate client config context is complete and has connectivity

Info: The current client config context is 'default':
The server URL is 'https://127.0.0.1:8443'
The user authentication is 'developer/default'
The current project is 'myproject'
Successfully requested project list; has access to project(s):
[myproject]

[Note] Running diagnostic: DiagnosticPod
Description: Create a pod to run diagnostics from the application standpoint

Info: Output from the diagnostic pod (image registry.access.redhat.com/openshift3/ose-deployer:v3.6.173.0.5):
[Note] Running diagnostic: PodCheckAuth
Description: Check that service account credentials authenticate as expected

   Info:  Service account token successfully authenticated to master
   Info:  Service account token was authenticated by the integrated registry.
   
   [Note] Running diagnostic: PodCheckDns
          Description: Check that DNS within a pod works as expected
          
   [Note] Summary of diagnostics execution (version v3.6.173.0.5):
   [Note] Completed with no errors or warnings seen.

[Note] Running diagnostic: NetworkCheck
Description: Create a pod on all schedulable nodes and run network diagnostics from the application standpoint

ERROR: [DNet2001 from diagnostic NetworkCheck@openshift/origin/pkg/diagnostics/network/run_pod.go:83]
Checking network plugin failed. Error: User "developer" cannot get clusternetworks at the cluster scope

[Note] Summary of diagnostics execution (version v3.6.173.0.5):
[Note] Errors seen: 4

oc get all -o json -n default
{
"apiVersion": "v1",
"items": [
{
"apiVersion": "v1",
"kind": "DeploymentConfig",
"metadata": {
"creationTimestamp": "2017-08-24T06:58:30Z",
"generation": 2,
"labels": {
"docker-registry": "default"
},
"name": "docker-registry",
"namespace": "default",
"resourceVersion": "1412",
"selfLink": "/oapi/v1/namespaces/default/deploymentconfigs/docker-registry",
"uid": "a3518bb4-8899-11e7-8ae3-62f757198d90"
},
"spec": {
"replicas": 1,
"selector": {
"docker-registry": "default"
},
"strategy": {
"activeDeadlineSeconds": 21600,
"resources": {},
"rollingParams": {
"intervalSeconds": 1,
"maxSurge": "25%",
"maxUnavailable": "25%",
"timeoutSeconds": 600,
"updatePeriodSeconds": 1
},
"type": "Rolling"
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"docker-registry": "default"
}
},
"spec": {
"containers": [
{
"env": [
{
"name": "REGISTRY_HTTP_ADDR",
"value": ":5000"
},
{
"name": "REGISTRY_HTTP_NET",
"value": "tcp"
},
{
"name": "REGISTRY_HTTP_SECRET",
"value": "U/3HePKLV39PnWSn8k8CgZ3Yy23DYp3gXgvDozyU+Kk="
},
{
"name": "REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA",
"value": "false"
}
],
"image": "registry.access.redhat.com/openshift3/ose-docker-registry:v3.6.173.0.5",
"imagePullPolicy": "IfNotPresent",
"livenessProbe": {
"failureThreshold": 3,
"httpGet": {
"path": "/healthz",
"port": 5000,
"scheme": "HTTP"
},
"initialDelaySeconds": 10,
"periodSeconds": 10,
"successThreshold": 1,
"timeoutSeconds": 5
},
"name": "registry",
"ports": [
{
"containerPort": 5000,
"protocol": "TCP"
}
],
"readinessProbe": {
"failureThreshold": 3,
"httpGet": {
"path": "/healthz",
"port": 5000,
"scheme": "HTTP"
},
"periodSeconds": 10,
"successThreshold": 1,
"timeoutSeconds": 5
},
"resources": {
"requests": {
"cpu": "100m",
"memory": "256Mi"
}
},
"securityContext": {
"privileged": true
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/registry",
"name": "registry-storage"
}
]
}
],
"dnsPolicy": "ClusterFirst",
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {},
"serviceAccount": "registry",
"serviceAccountName": "registry",
"terminationGracePeriodSeconds": 30,
"volumes": [
{
"hostPath": {
"path": "/Users/andreabattaglia/.oc/profiles/default/pv/registry"
},
"name": "registry-storage"
}
]
}
},
"test": false,
"triggers": [
{
"type": "ConfigChange"
}
]
},
"status": {
"availableReplicas": 1,
"conditions": [
{
"lastTransitionTime": "2017-08-24T07:01:12Z",
"lastUpdateTime": "2017-08-24T07:01:12Z",
"message": "Deployment config has minimum availability.",
"status": "True",
"type": "Available"
},
{
"lastTransitionTime": "2017-08-24T06:59:02Z",
"lastUpdateTime": "2017-08-24T07:01:13Z",
"message": "replication controller "docker-registry-1" successfully rolled out",
"reason": "NewReplicationControllerAvailable",
"status": "True",
"type": "Progressing"
}
],
"details": {
"causes": [
{
"type": "ConfigChange"
}
],
"message": "config change"
},
"latestVersion": 1,
"observedGeneration": 2,
"readyReplicas": 1,
"replicas": 1,
"unavailableReplicas": 0,
"updatedReplicas": 1
}
},
{
"apiVersion": "v1",
"kind": "DeploymentConfig",
"metadata": {
"creationTimestamp": "2017-08-24T06:58:31Z",
"generation": 2,
"labels": {
"router": "router"
},
"name": "router",
"namespace": "default",
"resourceVersion": "1527",
"selfLink": "/oapi/v1/namespaces/default/deploymentconfigs/router",
"uid": "a3e883e9-8899-11e7-8ae3-62f757198d90"
},
"spec": {
"replicas": 1,
"selector": {
"router": "router"
},
"strategy": {
"activeDeadlineSeconds": 21600,
"resources": {},
"rollingParams": {
"intervalSeconds": 1,
"maxSurge": 0,
"maxUnavailable": "25%",
"timeoutSeconds": 600,
"updatePeriodSeconds": 1
},
"type": "Rolling"
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"router": "router"
}
},
"spec": {
"containers": [
{
"env": [
{
"name": "DEFAULT_CERTIFICATE_DIR",
"value": "/etc/pki/tls/private"
},
{
"name": "DEFAULT_CERTIFICATE_PATH",
"value": "/etc/pki/tls/private/tls.crt"
},
{
"name": "ROUTER_CIPHERS"
},
{
"name": "ROUTER_EXTERNAL_HOST_HOSTNAME"
},
{
"name": "ROUTER_EXTERNAL_HOST_HTTPS_VSERVER"
},
{
"name": "ROUTER_EXTERNAL_HOST_HTTP_VSERVER"
},
{
"name": "ROUTER_EXTERNAL_HOST_INSECURE",
"value": "false"
},
{
"name": "ROUTER_EXTERNAL_HOST_INTERNAL_ADDRESS"
},
{
"name": "ROUTER_EXTERNAL_HOST_PARTITION_PATH"
},
{
"name": "ROUTER_EXTERNAL_HOST_PASSWORD"
},
{
"name": "ROUTER_EXTERNAL_HOST_PRIVKEY",
"value": "/etc/secret-volume/router.pem"
},
{
"name": "ROUTER_EXTERNAL_HOST_USERNAME"
},
{
"name": "ROUTER_EXTERNAL_HOST_VXLAN_GW_CIDR"
},
{
"name": "ROUTER_LISTEN_ADDR",
"value": "0.0.0.0:1936"
},
{
"name": "ROUTER_METRICS_TYPE",
"value": "haproxy"
},
{
"name": "ROUTER_SERVICE_HTTPS_PORT",
"value": "443"
},
{
"name": "ROUTER_SERVICE_HTTP_PORT",
"value": "80"
},
{
"name": "ROUTER_SERVICE_NAME",
"value": "router"
},
{
"name": "ROUTER_SERVICE_NAMESPACE",
"value": "default"
},
{
"name": "ROUTER_SUBDOMAIN"
},
{
"name": "STATS_PASSWORD",
"value": "xNjYnbiahe"
},
{
"name": "STATS_PORT",
"value": "1936"
},
{
"name": "STATS_USERNAME",
"value": "admin"
}
],
"image": "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.6.173.0.5",
"imagePullPolicy": "IfNotPresent",
"livenessProbe": {
"failureThreshold": 3,
"httpGet": {
"path": "/healthz",
"port": 1936,
"scheme": "HTTP"
},
"initialDelaySeconds": 10,
"periodSeconds": 10,
"successThreshold": 1,
"timeoutSeconds": 1
},
"name": "router",
"ports": [
{
"containerPort": 80,
"hostPort": 80,
"protocol": "TCP"
},
{
"containerPort": 443,
"hostPort": 443,
"protocol": "TCP"
},
{
"containerPort": 1936,
"hostPort": 1936,
"name": "stats",
"protocol": "TCP"
}
],
"readinessProbe": {
"failureThreshold": 3,
"httpGet": {
"path": "/healthz",
"port": 1936,
"scheme": "HTTP"
},
"initialDelaySeconds": 10,
"periodSeconds": 10,
"successThreshold": 1,
"timeoutSeconds": 1
},
"resources": {
"requests": {
"cpu": "100m",
"memory": "256Mi"
}
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/etc/pki/tls/private",
"name": "server-certificate",
"readOnly": true
}
]
}
],
"dnsPolicy": "ClusterFirst",
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {},
"serviceAccount": "router",
"serviceAccountName": "router",
"terminationGracePeriodSeconds": 30,
"volumes": [
{
"name": "server-certificate",
"secret": {
"defaultMode": 420,
"secretName": "router-certs"
}
}
]
}
},
"test": false,
"triggers": [
{
"type": "ConfigChange"
}
]
},
"status": {
"availableReplicas": 0,
"conditions": [
{
"lastTransitionTime": "2017-08-24T06:58:31Z",
"lastUpdateTime": "2017-08-24T06:58:31Z",
"message": "Deployment config does not have minimum availability.",
"status": "False",
"type": "Available"
},
{
"lastTransitionTime": "2017-08-24T07:09:01Z",
"lastUpdateTime": "2017-08-24T07:09:01Z",
"message": "replication controller "router-1" has failed progressing",
"reason": "ProgressDeadlineExceeded",
"status": "False",
"type": "Progressing"
}
],
"details": {
"causes": [
{
"type": "ConfigChange"
}
],
"message": "config change"
},
"latestVersion": 1,
"observedGeneration": 2,
"replicas": 0,
"unavailableReplicas": 0,
"updatedReplicas": 0
}
},
{
"apiVersion": "v1",
"kind": "ReplicationController",
"metadata": {
"annotations": {
"openshift.io/deployer-pod.name": "docker-registry-1-deploy",
"openshift.io/deployment-config.latest-version": "1",
"openshift.io/deployment-config.name": "docker-registry",
"openshift.io/deployment.phase": "Complete",
"openshift.io/deployment.replicas": "1",
"openshift.io/deployment.status-reason": "config change",
"openshift.io/encoded-deployment-config": "{"kind":"DeploymentConfig","apiVersion":"v1","metadata":{"name":"docker-registry","namespace":"default","selfLink":"/apis/apps.openshift.io/v1/namespaces/default/deploymentconfigs/docker-registry","uid":"a3518bb4-8899-11e7-8ae3-62f757198d90","resourceVersion":"807","generation":2,"creationTimestamp":"2017-08-24T06:58:30Z","labels":{"docker-registry":"default"}},"spec":{"strategy":{"type":"Rolling","rollingParams":{"updatePeriodSeconds":1,"intervalSeconds":1,"timeoutSeconds":600,"maxUnavailable":"25%","maxSurge":"25%"},"resources":{},"activeDeadlineSeconds":21600},"triggers":[{"type":"ConfigChange"}],"replicas":1,"test":false,"selector":{"docker-registry":"default"},"template":{"metadata":{"creationTimestamp":null,"labels":{"docker-registry":"default"}},"spec":{"volumes":[{"name":"registry-storage","hostPath":{"path":"/Users/andreabattaglia/.oc/profiles/default/pv/registry"}}],"containers":[{"name":"registry","image":"registry.access.redhat.com/openshift3/ose-docker-registry:v3.6.173.0.5","ports":[{"containerPort":5000,"protocol":"TCP"}],"env":[{"name":"REGISTRY_HTTP_ADDR","value":":5000"},{"name":"REGISTRY_HTTP_NET","value":"tcp"},{"name":"REGISTRY_HTTP_SECRET","value":"U/3HePKLV39PnWSn8k8CgZ3Yy23DYp3gXgvDozyU+Kk="},{"name":"REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA","value":"false"}],"resources":{"requests":{"cpu":"100m","memory":"256Mi"}},"volumeMounts":[{"name":"registry-storage","mountPath":"/registry"}],"livenessProbe":{"httpGet":{"path":"/healthz","port":5000,"scheme":"HTTP"},"initialDelaySeconds":10,"timeoutSeconds":5,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"/healthz","port":5000,"scheme":"HTTP"},"timeoutSeconds":5,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"registry","serviceAccount":"registry","securityContext":{},"schedulerName":"default-scheduler"}}},"status":{"latestVersion":1,"observedGeneration":1,"replicas":0,"updatedReplicas":0,"availableReplicas":0,"unavailableReplicas":0,"details":{"message":"config change","causes":[{"type":"ConfigChange"}]},"conditions":[{"type":"Available","status":"False","lastUpdateTime":"2017-08-24T06:58:30Z","lastTransitionTime":"2017-08-24T06:58:30Z","message":"Deployment config does not have minimum availability."}]}}\n"
},
"creationTimestamp": "2017-08-24T06:58:31Z",
"generation": 2,
"labels": {
"docker-registry": "default",
"openshift.io/deployment-config.name": "docker-registry"
},
"name": "docker-registry-1",
"namespace": "default",
"ownerReferences": [
{
"apiVersion": "apps.openshift.io/v1",
"blockOwnerDeletion": true,
"controller": true,
"kind": "DeploymentConfig",
"name": "docker-registry",
"uid": "a3518bb4-8899-11e7-8ae3-62f757198d90"
}
],
"resourceVersion": "1411",
"selfLink": "/api/v1/namespaces/default/replicationcontrollers/docker-registry-1",
"uid": "a36005e9-8899-11e7-8ae3-62f757198d90"
},
"spec": {
"replicas": 1,
"selector": {
"deployment": "docker-registry-1",
"deploymentconfig": "docker-registry",
"docker-registry": "default"
},
"template": {
"metadata": {
"annotations": {
"openshift.io/deployment-config.latest-version": "1",
"openshift.io/deployment-config.name": "docker-registry",
"openshift.io/deployment.name": "docker-registry-1"
},
"creationTimestamp": null,
"labels": {
"deployment": "docker-registry-1",
"deploymentconfig": "docker-registry",
"docker-registry": "default"
}
},
"spec": {
"containers": [
{
"env": [
{
"name": "REGISTRY_HTTP_ADDR",
"value": ":5000"
},
{
"name": "REGISTRY_HTTP_NET",
"value": "tcp"
},
{
"name": "REGISTRY_HTTP_SECRET",
"value": "U/3HePKLV39PnWSn8k8CgZ3Yy23DYp3gXgvDozyU+Kk="
},
{
"name": "REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA",
"value": "false"
}
],
"image": "registry.access.redhat.com/openshift3/ose-docker-registry:v3.6.173.0.5",
"imagePullPolicy": "IfNotPresent",
"livenessProbe": {
"failureThreshold": 3,
"httpGet": {
"path": "/healthz",
"port": 5000,
"scheme": "HTTP"
},
"initialDelaySeconds": 10,
"periodSeconds": 10,
"successThreshold": 1,
"timeoutSeconds": 5
},
"name": "registry",
"ports": [
{
"containerPort": 5000,
"protocol": "TCP"
}
],
"readinessProbe": {
"failureThreshold": 3,
"httpGet": {
"path": "/healthz",
"port": 5000,
"scheme": "HTTP"
},
"periodSeconds": 10,
"successThreshold": 1,
"timeoutSeconds": 5
},
"resources": {
"requests": {
"cpu": "100m",
"memory": "256Mi"
}
},
"securityContext": {
"privileged": true
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/registry",
"name": "registry-storage"
}
]
}
],
"dnsPolicy": "ClusterFirst",
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {},
"serviceAccount": "registry",
"serviceAccountName": "registry",
"terminationGracePeriodSeconds": 30,
"volumes": [
{
"hostPath": {
"path": "/Users/andreabattaglia/.oc/profiles/default/pv/registry"
},
"name": "registry-storage"
}
]
}
}
},
"status": {
"availableReplicas": 1,
"fullyLabeledReplicas": 1,
"observedGeneration": 2,
"readyReplicas": 1,
"replicas": 1
}
},
{
"apiVersion": "v1",
"kind": "ReplicationController",
"metadata": {
"annotations": {
"kubectl.kubernetes.io/desired-replicas": "1",
"openshift.io/deployer-pod.name": "router-1-deploy",
"openshift.io/deployment-config.latest-version": "1",
"openshift.io/deployment-config.name": "router",
"openshift.io/deployment.phase": "Failed",
"openshift.io/deployment.replicas": "0",
"openshift.io/deployment.status-reason": "config change",
"openshift.io/encoded-deployment-config": "{"kind":"DeploymentConfig","apiVersion":"v1","metadata":{"name":"router","namespace":"default","selfLink":"/apis/apps.openshift.io/v1/namespaces/default/deploymentconfigs/router","uid":"a3e883e9-8899-11e7-8ae3-62f757198d90","resourceVersion":"861","generation":2,"creationTimestamp":"2017-08-24T06:58:31Z","labels":{"router":"router"}},"spec":{"strategy":{"type":"Rolling","rollingParams":{"updatePeriodSeconds":1,"intervalSeconds":1,"timeoutSeconds":600,"maxUnavailable":"25%","maxSurge":0},"resources":{},"activeDeadlineSeconds":21600},"triggers":[{"type":"ConfigChange"}],"replicas":1,"test":false,"selector":{"router":"router"},"template":{"metadata":{"creationTimestamp":null,"labels":{"router":"router"}},"spec":{"volumes":[{"name":"server-certificate","secret":{"secretName":"router-certs","defaultMode":420}}],"containers":[{"name":"router","image":"registry.access.redhat.com/openshift3/ose-haproxy-router:v3.6.173.0.5","ports":[{"hostPort":80,"containerPort":80,"protocol":"TCP"},{"hostPort":443,"containerPort":443,"protocol":"TCP"},{"name":"stats","hostPort":1936,"containerPort":1936,"protocol":"TCP"}],"env":[{"name":"DEFAULT_CERTIFICATE_DIR","value":"/etc/pki/tls/private"},{"name":"DEFAULT_CERTIFICATE_PATH","value":"/etc/pki/tls/private/tls.crt"},{"name":"ROUTER_CIPHERS"},{"name":"ROUTER_EXTERNAL_HOST_HOSTNAME"},{"name":"ROUTER_EXTERNAL_HOST_HTTPS_VSERVER"},{"name":"ROUTER_EXTERNAL_HOST_HTTP_VSERVER"},{"name":"ROUTER_EXTERNAL_HOST_INSECURE","value":"false"},{"name":"ROUTER_EXTERNAL_HOST_INTERNAL_ADDRESS"},{"name":"ROUTER_EXTERNAL_HOST_PARTITION_PATH"},{"name":"ROUTER_EXTERNAL_HOST_PASSWORD"},{"name":"ROUTER_EXTERNAL_HOST_PRIVKEY","value":"/etc/secret-volume/router.pem"},{"name":"ROUTER_EXTERNAL_HOST_USERNAME"},{"name":"ROUTER_EXTERNAL_HOST_VXLAN_GW_CIDR"},{"name":"ROUTER_LISTEN_ADDR","value":"0.0.0.0:1936"},{"name":"ROUTER_METRICS_TYPE","value":"haproxy"},{"name":"ROUTER_SERVICE_HTTPS_PORT","value":"443"},{"name":"ROUTER_SERVICE_HTTP_PORT","value":"80"},{"name":"ROUTER_SERVICE_NAME","value":"router"},{"name":"ROUTER_SERVICE_NAMESPACE","value":"default"},{"name":"ROUTER_SUBDOMAIN"},{"name":"STATS_PASSWORD","value":"xNjYnbiahe"},{"name":"STATS_PORT","value":"1936"},{"name":"STATS_USERNAME","value":"admin"}],"resources":{"requests":{"cpu":"100m","memory":"256Mi"}},"volumeMounts":[{"name":"server-certificate","readOnly":true,"mountPath":"/etc/pki/tls/private"}],"livenessProbe":{"httpGet":{"path":"/healthz","port":1936,"scheme":"HTTP"},"initialDelaySeconds":10,"timeoutSeconds":1,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"/healthz","port":1936,"scheme":"HTTP"},"initialDelaySeconds":10,"timeoutSeconds":1,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"router","serviceAccount":"router","securityContext":{},"schedulerName":"default-scheduler"}}},"status":{"latestVersion":1,"observedGeneration":1,"replicas":0,"updatedReplicas":0,"availableReplicas":0,"unavailableReplicas":0,"details":{"message":"config change","causes":[{"type":"ConfigChange"}]},"conditions":[{"type":"Available","status":"False","lastUpdateTime":"2017-08-24T06:58:31Z","lastTransitionTime":"2017-08-24T06:58:31Z","message":"Deployment config does not have minimum availability."}]}}\n"
},
"creationTimestamp": "2017-08-24T06:58:31Z",
"generation": 3,
"labels": {
"openshift.io/deployment-config.name": "router",
"router": "router"
},
"name": "router-1",
"namespace": "default",
"ownerReferences": [
{
"apiVersion": "apps.openshift.io/v1",
"blockOwnerDeletion": true,
"controller": true,
"kind": "DeploymentConfig",
"name": "router",
"uid": "a3e883e9-8899-11e7-8ae3-62f757198d90"
}
],
"resourceVersion": "1526",
"selfLink": "/api/v1/namespaces/default/replicationcontrollers/router-1",
"uid": "a3edbe2b-8899-11e7-8ae3-62f757198d90"
},
"spec": {
"replicas": 0,
"selector": {
"deployment": "router-1",
"deploymentconfig": "router",
"router": "router"
},
"template": {
"metadata": {
"annotations": {
"openshift.io/deployment-config.latest-version": "1",
"openshift.io/deployment-config.name": "router",
"openshift.io/deployment.name": "router-1"
},
"creationTimestamp": null,
"labels": {
"deployment": "router-1",
"deploymentconfig": "router",
"router": "router"
}
},
"spec": {
"containers": [
{
"env": [
{
"name": "DEFAULT_CERTIFICATE_DIR",
"value": "/etc/pki/tls/private"
},
{
"name": "DEFAULT_CERTIFICATE_PATH",
"value": "/etc/pki/tls/private/tls.crt"
},
{
"name": "ROUTER_CIPHERS"
},
{
"name": "ROUTER_EXTERNAL_HOST_HOSTNAME"
},
{
"name": "ROUTER_EXTERNAL_HOST_HTTPS_VSERVER"
},
{
"name": "ROUTER_EXTERNAL_HOST_HTTP_VSERVER"
},
{
"name": "ROUTER_EXTERNAL_HOST_INSECURE",
"value": "false"
},
{
"name": "ROUTER_EXTERNAL_HOST_INTERNAL_ADDRESS"
},
{
"name": "ROUTER_EXTERNAL_HOST_PARTITION_PATH"
},
{
"name": "ROUTER_EXTERNAL_HOST_PASSWORD"
},
{
"name": "ROUTER_EXTERNAL_HOST_PRIVKEY",
"value": "/etc/secret-volume/router.pem"
},
{
"name": "ROUTER_EXTERNAL_HOST_USERNAME"
},
{
"name": "ROUTER_EXTERNAL_HOST_VXLAN_GW_CIDR"
},
{
"name": "ROUTER_LISTEN_ADDR",
"value": "0.0.0.0:1936"
},
{
"name": "ROUTER_METRICS_TYPE",
"value": "haproxy"
},
{
"name": "ROUTER_SERVICE_HTTPS_PORT",
"value": "443"
},
{
"name": "ROUTER_SERVICE_HTTP_PORT",
"value": "80"
},
{
"name": "ROUTER_SERVICE_NAME",
"value": "router"
},
{
"name": "ROUTER_SERVICE_NAMESPACE",
"value": "default"
},
{
"name": "ROUTER_SUBDOMAIN"
},
{
"name": "STATS_PASSWORD",
"value": "xNjYnbiahe"
},
{
"name": "STATS_PORT",
"value": "1936"
},
{
"name": "STATS_USERNAME",
"value": "admin"
}
],
"image": "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.6.173.0.5",
"imagePullPolicy": "IfNotPresent",
"livenessProbe": {
"failureThreshold": 3,
"httpGet": {
"path": "/healthz",
"port": 1936,
"scheme": "HTTP"
},
"initialDelaySeconds": 10,
"periodSeconds": 10,
"successThreshold": 1,
"timeoutSeconds": 1
},
"name": "router",
"ports": [
{
"containerPort": 80,
"hostPort": 80,
"protocol": "TCP"
},
{
"containerPort": 443,
"hostPort": 443,
"protocol": "TCP"
},
{
"containerPort": 1936,
"hostPort": 1936,
"name": "stats",
"protocol": "TCP"
}
],
"readinessProbe": {
"failureThreshold": 3,
"httpGet": {
"path": "/healthz",
"port": 1936,
"scheme": "HTTP"
},
"initialDelaySeconds": 10,
"periodSeconds": 10,
"successThreshold": 1,
"timeoutSeconds": 1
},
"resources": {
"requests": {
"cpu": "100m",
"memory": "256Mi"
}
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/etc/pki/tls/private",
"name": "server-certificate",
"readOnly": true
}
]
}
],
"dnsPolicy": "ClusterFirst",
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {},
"serviceAccount": "router",
"serviceAccountName": "router",
"terminationGracePeriodSeconds": 30,
"volumes": [
{
"name": "server-certificate",
"secret": {
"defaultMode": 420,
"secretName": "router-certs"
}
}
]
}
}
},
"status": {
"observedGeneration": 3,
"replicas": 0
}
},
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"creationTimestamp": "2017-08-24T06:58:30Z",
"labels": {
"docker-registry": "default"
},
"name": "docker-registry",
"namespace": "default",
"resourceVersion": "805",
"selfLink": "/api/v1/namespaces/default/services/docker-registry",
"uid": "a352d002-8899-11e7-8ae3-62f757198d90"
},
"spec": {
"clusterIP": "172.30.1.1",
"ports": [
{
"name": "5000-tcp",
"port": 5000,
"protocol": "TCP",
"targetPort": 5000
}
],
"selector": {
"docker-registry": "default"
},
"sessionAffinity": "ClientIP",
"type": "ClusterIP"
},
"status": {
"loadBalancer": {}
}
},
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"creationTimestamp": "2017-08-24T06:58:25Z",
"labels": {
"component": "apiserver",
"provider": "kubernetes"
},
"name": "kubernetes",
"namespace": "default",
"resourceVersion": "9",
"selfLink": "/api/v1/namespaces/default/services/kubernetes",
"uid": "a034e612-8899-11e7-8ae3-62f757198d90"
},
"spec": {
"clusterIP": "172.30.0.1",
"ports": [
{
"name": "https",
"port": 443,
"protocol": "TCP",
"targetPort": 8443
},
{
"name": "dns",
"port": 53,
"protocol": "UDP",
"targetPort": 8053
},
{
"name": "dns-tcp",
"port": 53,
"protocol": "TCP",
"targetPort": 8053
}
],
"sessionAffinity": "ClientIP",
"type": "ClusterIP"
},
"status": {
"loadBalancer": {}
}
},
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"annotations": {
"prometheus.io/port": "1936",
"prometheus.io/scrape": "true",
"prometheus.openshift.io/password": "xNjYnbiahe",
"prometheus.openshift.io/username": "admin"
},
"creationTimestamp": "2017-08-24T06:58:31Z",
"labels": {
"router": "router"
},
"name": "router",
"namespace": "default",
"resourceVersion": "860",
"selfLink": "/api/v1/namespaces/default/services/router",
"uid": "a3e9c47c-8899-11e7-8ae3-62f757198d90"
},
"spec": {
"clusterIP": "172.30.65.194",
"ports": [
{
"name": "80-tcp",
"port": 80,
"protocol": "TCP",
"targetPort": 80
},
{
"name": "443-tcp",
"port": 443,
"protocol": "TCP",
"targetPort": 443
},
{
"name": "1936-tcp",
"port": 1936,
"protocol": "TCP",
"targetPort": 1936
}
],
"selector": {
"router": "router"
},
"sessionAffinity": "None",
"type": "ClusterIP"
},
"status": {
"loadBalancer": {}
}
},
{
"apiVersion": "batch/v1",
"kind": "Job",
"metadata": {
"creationTimestamp": "2017-08-24T06:58:30Z",
"labels": {
"controller-uid": "a2d8ea5c-8899-11e7-8ae3-62f757198d90",
"job-name": "persistent-volume-setup"
},
"name": "persistent-volume-setup",
"namespace": "default",
"resourceVersion": "1377",
"selfLink": "/apis/batch/v1/namespaces/default/jobs/persistent-volume-setup",
"uid": "a2d8ea5c-8899-11e7-8ae3-62f757198d90"
},
"spec": {
"activeDeadlineSeconds": 1200,
"completions": 1,
"parallelism": 1,
"selector": {
"matchLabels": {
"controller-uid": "a2d8ea5c-8899-11e7-8ae3-62f757198d90"
}
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"controller-uid": "a2d8ea5c-8899-11e7-8ae3-62f757198d90",
"job-name": "persistent-volume-setup"
}
},
"spec": {
"containers": [
{
"command": [
"/bin/bash",
"-c",
"#/bin/bash\n\nset -e\n\nfunction generate_pv() {\n local basedir="${1}"\n local name="${2}"\ncat \u003c\u003cEOF\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: ${name}\n labels:\n volume: ${name}\nspec:\n capacity:\n storage: 100Gi\n accessModes:\n - ReadWriteOnce\n - ReadWriteMany\n - ReadOnlyMany\n hostPath:\n path: ${basedir}/${name}\n persistentVolumeReclaimPolicy: Recycle\nEOF\n}\n\nfunction setup_pv_dir() {\n local dir="${1}"\n if [[ ! -d "${dir}" ]]; then\n mkdir -p "${dir}"\n fi\n if ! chcon -t svirt_sandbox_file_t "${dir}" \u0026\u003e /dev/null; then\n echo "Not setting SELinux content for ${dir}"\n fi\n chmod 770 "${dir}"\n}\n\nfunction create_pv() {\n local basedir="${1}"\n local name="${2}"\n\n setup_pv_dir "${basedir}/${name}"\n if ! oc get pv "${name}" \u0026\u003e /dev/null; then \n generate_pv "${basedir}" "${name}" | oc create -f -\n else\n echo "persistentvolume ${name} already exists"\n fi\n}\n\nbasedir="/Users/andreabattaglia/.oc/profiles/default/pv"\nsetup_pv_dir "${basedir}/registry"\n\nfor i in $(seq -f "%04g" 1 100); do\n create_pv "${basedir}" "pv${i}"\ndone\n"
],
"image": "registry.access.redhat.com/openshift3/ose:v3.6.173.0.5",
"imagePullPolicy": "IfNotPresent",
"name": "storage-setup-job",
"resources": {},
"securityContext": {
"privileged": true
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/Users/andreabattaglia/.oc/profiles/default/pv",
"name": "pvdir"
}
]
}
],
"dnsPolicy": "ClusterFirst",
"restartPolicy": "Never",
"schedulerName": "default-scheduler",
"securityContext": {},
"serviceAccount": "pvinstaller",
"serviceAccountName": "pvinstaller",
"terminationGracePeriodSeconds": 30,
"volumes": [
{
"hostPath": {
"path": "/Users/andreabattaglia/.oc/profiles/default/pv"
},
"name": "pvdir"
}
]
}
}
},
"status": {
"completionTime": "2017-08-24T06:59:40Z",
"conditions": [
{
"lastProbeTime": "2017-08-24T06:59:40Z",
"lastTransitionTime": "2017-08-24T06:59:40Z",
"status": "True",
"type": "Complete"
}
],
"startTime": "2017-08-24T06:58:30Z",
"succeeded": 1
}
},
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"annotations": {
"kubernetes.io/created-by": "{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"docker-registry-1","uid":"a36005e9-8899-11e7-8ae3-62f757198d90","apiVersion":"v1","resourceVersion":"1129"}}\n",
"openshift.io/deployment-config.latest-version": "1",
"openshift.io/deployment-config.name": "docker-registry",
"openshift.io/deployment.name": "docker-registry-1",
"openshift.io/scc": "privileged"
},
"creationTimestamp": "2017-08-24T06:59:02Z",
"generateName": "docker-registry-1-",
"labels": {
"deployment": "docker-registry-1",
"deploymentconfig": "docker-registry",
"docker-registry": "default"
},
"name": "docker-registry-1-v2vtl",
"namespace": "default",
"ownerReferences": [
{
"apiVersion": "v1",
"blockOwnerDeletion": true,
"controller": true,
"kind": "ReplicationController",
"name": "docker-registry-1",
"uid": "a36005e9-8899-11e7-8ae3-62f757198d90"
}
],
"resourceVersion": "1406",
"selfLink": "/api/v1/namespaces/default/pods/docker-registry-1-v2vtl",
"uid": "b640fb4c-8899-11e7-8f48-62f757198d90"
},
"spec": {
"containers": [
{
"env": [
{
"name": "REGISTRY_HTTP_ADDR",
"value": ":5000"
},
{
"name": "REGISTRY_HTTP_NET",
"value": "tcp"
},
{
"name": "REGISTRY_HTTP_SECRET",
"value": "U/3HePKLV39PnWSn8k8CgZ3Yy23DYp3gXgvDozyU+Kk="
},
{
"name": "REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA",
"value": "false"
}
],
"image": "registry.access.redhat.com/openshift3/ose-docker-registry:v3.6.173.0.5",
"imagePullPolicy": "IfNotPresent",
"livenessProbe": {
"failureThreshold": 3,
"httpGet": {
"path": "/healthz",
"port": 5000,
"scheme": "HTTP"
},
"initialDelaySeconds": 10,
"periodSeconds": 10,
"successThreshold": 1,
"timeoutSeconds": 5
},
"name": "registry",
"ports": [
{
"containerPort": 5000,
"protocol": "TCP"
}
],
"readinessProbe": {
"failureThreshold": 3,
"httpGet": {
"path": "/healthz",
"port": 5000,
"scheme": "HTTP"
},
"periodSeconds": 10,
"successThreshold": 1,
"timeoutSeconds": 5
},
"resources": {
"requests": {
"cpu": "100m",
"memory": "256Mi"
}
},
"securityContext": {
"privileged": true
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/registry",
"name": "registry-storage"
},
{
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
"name": "registry-token-mwg26",
"readOnly": true
}
]
}
],
"dnsPolicy": "ClusterFirst",
"imagePullSecrets": [
{
"name": "registry-dockercfg-vh4q5"
}
],
"nodeName": "localhost",
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {},
"serviceAccount": "registry",
"serviceAccountName": "registry",
"terminationGracePeriodSeconds": 30,
"volumes": [
{
"hostPath": {
"path": "/Users/andreabattaglia/.oc/profiles/default/pv/registry"
},
"name": "registry-storage"
},
{
"name": "registry-token-mwg26",
"secret": {
"defaultMode": 420,
"secretName": "registry-token-mwg26"
}
}
]
},
"status": {
"conditions": [
{
"lastProbeTime": null,
"lastTransitionTime": "2017-08-24T06:59:02Z",
"status": "True",
"type": "Initialized"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2017-08-24T07:01:12Z",
"status": "True",
"type": "Ready"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2017-08-24T06:59:02Z",
"status": "True",
"type": "PodScheduled"
}
],
"containerStatuses": [
{
"containerID": "docker://d0396ca73df14d3aa443c55fd56ae32a8603a05661f92d4c1be0c5e2aafdd036",
"image": "registry.access.redhat.com/openshift3/ose-docker-registry:v3.6.173.0.5",
"imageID": "docker-pullable://registry.access.redhat.com/openshift3/ose-docker-registry@sha256:cb0a9039b7f037a3cca476f3b0f2fe32b9f21a37f15670e270bd01db21b66c19",
"lastState": {},
"name": "registry",
"ready": true,
"restartCount": 0,
"state": {
"running": {
"startedAt": "2017-08-24T07:01:07Z"
}
}
}
],
"hostIP": "192.168.65.2",
"phase": "Running",
"podIP": "172.17.0.6",
"qosClass": "Burstable",
"startTime": "2017-08-24T06:59:02Z"
}
},
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"annotations": {
"kubernetes.io/created-by": "{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"Job","namespace":"default","name":"persistent-volume-setup","uid":"a2d8ea5c-8899-11e7-8ae3-62f757198d90","apiVersion":"batch","resourceVersion":"752"}}\n",
"openshift.io/scc": "privileged"
},
"creationTimestamp": "2017-08-24T06:58:30Z",
"generateName": "persistent-volume-setup-",
"labels": {
"controller-uid": "a2d8ea5c-8899-11e7-8ae3-62f757198d90",
"job-name": "persistent-volume-setup"
},
"name": "persistent-volume-setup-vvnnh",
"namespace": "default",
"resourceVersion": "1376",
"selfLink": "/api/v1/namespaces/default/pods/persistent-volume-setup-vvnnh",
"uid": "a2da2e57-8899-11e7-8ae3-62f757198d90"
},
"spec": {
"containers": [
{
"command": [
"/bin/bash",
"-c",
"#/bin/bash\n\nset -e\n\nfunction generate_pv() {\n local basedir="${1}"\n local name="${2}"\ncat \u003c\u003cEOF\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: ${name}\n labels:\n volume: ${name}\nspec:\n capacity:\n storage: 100Gi\n accessModes:\n - ReadWriteOnce\n - ReadWriteMany\n - ReadOnlyMany\n hostPath:\n path: ${basedir}/${name}\n persistentVolumeReclaimPolicy: Recycle\nEOF\n}\n\nfunction setup_pv_dir() {\n local dir="${1}"\n if [[ ! -d "${dir}" ]]; then\n mkdir -p "${dir}"\n fi\n if ! chcon -t svirt_sandbox_file_t "${dir}" \u0026\u003e /dev/null; then\n echo "Not setting SELinux content for ${dir}"\n fi\n chmod 770 "${dir}"\n}\n\nfunction create_pv() {\n local basedir="${1}"\n local name="${2}"\n\n setup_pv_dir "${basedir}/${name}"\n if ! oc get pv "${name}" \u0026\u003e /dev/null; then \n generate_pv "${basedir}" "${name}" | oc create -f -\n else\n echo "persistentvolume ${name} already exists"\n fi\n}\n\nbasedir="/Users/andreabattaglia/.oc/profiles/default/pv"\nsetup_pv_dir "${basedir}/registry"\n\nfor i in $(seq -f "%04g" 1 100); do\n create_pv "${basedir}" "pv${i}"\ndone\n"
],
"image": "registry.access.redhat.com/openshift3/ose:v3.6.173.0.5",
"imagePullPolicy": "IfNotPresent",
"name": "storage-setup-job",
"resources": {},
"securityContext": {
"privileged": true
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/Users/andreabattaglia/.oc/profiles/default/pv",
"name": "pvdir"
},
{
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
"name": "pvinstaller-token-hcjll",
"readOnly": true
}
]
}
],
"dnsPolicy": "ClusterFirst",
"nodeName": "localhost",
"restartPolicy": "Never",
"schedulerName": "default-scheduler",
"securityContext": {},
"serviceAccount": "pvinstaller",
"serviceAccountName": "pvinstaller",
"terminationGracePeriodSeconds": 30,
"volumes": [
{
"hostPath": {
"path": "/Users/andreabattaglia/.oc/profiles/default/pv"
},
"name": "pvdir"
},
{
"name": "pvinstaller-token-hcjll",
"secret": {
"defaultMode": 420,
"secretName": "pvinstaller-token-hcjll"
}
}
]
},
"status": {
"conditions": [
{
"lastProbeTime": null,
"lastTransitionTime": "2017-08-24T06:58:33Z",
"reason": "PodCompleted",
"status": "True",
"type": "Initialized"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2017-08-24T06:59:40Z",
"reason": "PodCompleted",
"status": "False",
"type": "Ready"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2017-08-24T06:58:30Z",
"status": "True",
"type": "PodScheduled"
}
],
"containerStatuses": [
{
"containerID": "docker://046a44bd2ae0b82ffa9303af8ea0d65eaeb17d878ce3553603c01005075c001d",
"image": "registry.access.redhat.com/openshift3/ose:v3.6.173.0.5",
"imageID": "docker-pullable://registry.access.redhat.com/openshift3/ose@sha256:807e2cca5196358e0163d9dc3a9e8d4c935677e9ea5fe7607b5c50978352bc9e",
"lastState": {},
"name": "storage-setup-job",
"ready": false,
"restartCount": 0,
"state": {
"terminated": {
"containerID": "docker://046a44bd2ae0b82ffa9303af8ea0d65eaeb17d878ce3553603c01005075c001d",
"exitCode": 0,
"finishedAt": "2017-08-24T06:59:39Z",
"reason": "Completed",
"startedAt": "2017-08-24T06:58:55Z"
}
}
}
],
"hostIP": "192.168.65.2",
"phase": "Succeeded",
"podIP": "172.17.0.2",
"qosClass": "BestEffort",
"startTime": "2017-08-24T06:58:33Z"
}
},
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"annotations": {
"openshift.io/deployment.name": "router-1",
"openshift.io/scc": "restricted"
},
"creationTimestamp": "2017-08-24T06:58:32Z",
"labels": {
"openshift.io/deployer-pod-for.name": "router-1"
},
"name": "router-1-deploy",
"namespace": "default",
"ownerReferences": [
{
"apiVersion": "v1",
"kind": "ReplicationController",
"name": "router-1",
"uid": "a3edbe2b-8899-11e7-8ae3-62f757198d90"
}
],
"resourceVersion": "1517",
"selfLink": "/api/v1/namespaces/default/pods/router-1-deploy",
"uid": "a3f5d9aa-8899-11e7-8ae3-62f757198d90"
},
"spec": {
"activeDeadlineSeconds": 21600,
"containers": [
{
"env": [
{
"name": "KUBERNETES_MASTER",
"value": "https://127.0.0.1:8443"
},
{
"name": "OPENSHIFT_MASTER",
"value": "https://127.0.0.1:8443"
},
{
"name": "BEARER_TOKEN_FILE",
"value": "/var/run/secrets/kubernetes.io/serviceaccount/token"
},
{
"name": "OPENSHIFT_CA_DATA",
"value": "-----BEGIN CERTIFICATE-----\nMIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu\nc2hpZnQtc2lnbmVyQDE1MDM1MzAwMTkwHhcNMTcwODIzMjMxMzM5WhcNMjIwODIy\nMjMxMzQwWjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE1MDM1MzAwMTkw\nggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxGqrwrpE52l34qstxWyPq\nbbK0+qfjKrXBhgtKu7JlSpNuvidhW0MHZoQP2i1lSTPD2VbPwJBMy0FElsIKhNbk\nHyIDQNOJ4YP/hbXx+xb077mUohR0eOgMDB7DrUsd6DD/uzEkYIKxE+BBLyFFuphW\nVB7SwV3d0KUkfJZqZTQGeFqzUavX5w53biqZOms69bvwz85JaH6FoHpuOBJtycay\nu4O4+lvDfur+ExW80dMQ5iMkbxzAZnA28rZuT8F+SjiOX/F4Ik0umR6PVRZ5+E62\nWV2DvTHH6s7j7jICsuusaTo14KlZ2aaYt+eMRR6VfMecX/L+0NelRBCkfufyuGGH\nAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG\nSIb3DQEBCwUAA4IBAQBXYC0famMZfFNYRxfAgfdFpoudiKMT17j1K2oBJ0ETbyIS\nEhm9endcPYjocHIhJh1pwUwj+X19dpfqwMLk3iXij8IKON34XI2q+ooFswOBYAD6\n0steSJH5eLyLEtycIBy+39tTxVt5XsNchLIcV9PuutZGw9u3QQYVdZXW2CE7NNAD\nAm7xG3e5tEMhFTIAM70+waAd+9rzX+C20yi9brR32mgTI987xRr1tc51gn1UdXK8\nJE0h8Ke6DxYoAiBz7vd2s0BVhxTTqjCx0ElM5UU1vtdmzyiNlHjoDyeChSpibpVo\nUGYAMZExdql88RfhHhyBEdPLsNO80t8j3FoaHeLW\n-----END CERTIFICATE-----\n"
},
{
"name": "OPENSHIFT_DEPLOYMENT_NAME",
"value": "router-1"
},
{
"name": "OPENSHIFT_DEPLOYMENT_NAMESPACE",
"value": "default"
}
],
"image": "registry.access.redhat.com/openshift3/ose-deployer:v3.6.173.0.5",
"imagePullPolicy": "IfNotPresent",
"name": "deployment",
"resources": {},
"securityContext": {
"capabilities": {
"drop": [
"KILL",
"MKNOD",
"SETGID",
"SETUID",
"SYS_CHROOT"
]
},
"privileged": false,
"runAsUser": 1000020000,
"seLinuxOptions": {
"level": "s0:c5,c0"
}
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
"name": "deployer-token-k8j7l",
"readOnly": true
}
]
}
],
"dnsPolicy": "ClusterFirst",
"nodeName": "localhost",
"restartPolicy": "Never",
"schedulerName": "default-scheduler",
"securityContext": {
"fsGroup": 1000020000,
"seLinuxOptions": {
"level": "s0:c5,c0"
}
},
"serviceAccount": "deployer",
"serviceAccountName": "deployer",
"terminationGracePeriodSeconds": 10,
"volumes": [
{
"name": "deployer-token-k8j7l",
"secret": {
"defaultMode": 420,
"secretName": "deployer-token-k8j7l"
}
}
]
},
"status": {
"conditions": [
{
"lastProbeTime": null,
"lastTransitionTime": "2017-08-24T06:58:33Z",
"status": "True",
"type": "Initialized"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2017-08-24T07:09:01Z",
"message": "containers with unready status: [deployment]",
"reason": "ContainersNotReady",
"status": "False",
"type": "Ready"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2017-08-24T06:58:32Z",
"status": "True",
"type": "PodScheduled"
}
],
"containerStatuses": [
{
"containerID": "docker://f601e7e8207a0f362d93ef241a26d833400d77c061e25824f666c1f10c39b63d",
"image": "registry.access.redhat.com/openshift3/ose-deployer:v3.6.173.0.5",
"imageID": "docker-pullable://registry.access.redhat.com/openshift3/ose-deployer@sha256:623c7b20215e67cfaf879b906eb3acf1696a8e89fdd57b44a6c929ec62ae0d44",
"lastState": {},
"name": "deployment",
"ready": false,
"restartCount": 0,
"state": {
"terminated": {
"containerID": "docker://f601e7e8207a0f362d93ef241a26d833400d77c061e25824f666c1f10c39b63d",
"exitCode": 1,
"finishedAt": "2017-08-24T07:09:00Z",
"reason": "Error",
"startedAt": "2017-08-24T06:58:59Z"
}
}
}
],
"hostIP": "192.168.65.2",
"phase": "Failed",
"podIP": "172.17.0.4",
"qosClass": "BestEffort",
"startTime": "2017-08-24T06:58:33Z"
}
}
],
"kind": "List",
"metadata": {},
"resourceVersion": "",
"selfLink": ""
}
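
For readability, the storage-setup script embedded in the persistent-volume-setup job above decodes to the following bash (reconstructed from the escaped command string; the job invokes it via /bin/bash -c, so the malformed shebang on the first line is harmless). Note that every PersistentVolume is a hostPath under the user's Mac home directory, exactly the kind of path Docker for Mac has to share into its VM:

#/bin/bash

set -e

function generate_pv() {
  local basedir="${1}"
  local name="${2}"
cat <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ${name}
  labels:
    volume: ${name}
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  - ReadOnlyMany
  hostPath:
    path: ${basedir}/${name}
  persistentVolumeReclaimPolicy: Recycle
EOF
}

function setup_pv_dir() {
  local dir="${1}"
  if [[ ! -d "${dir}" ]]; then
    mkdir -p "${dir}"
  fi
  if ! chcon -t svirt_sandbox_file_t "${dir}" &> /dev/null; then
    echo "Not setting SELinux content for ${dir}"
  fi
  chmod 770 "${dir}"
}

function create_pv() {
  local basedir="${1}"
  local name="${2}"

  setup_pv_dir "${basedir}/${name}"
  if ! oc get pv "${name}" &> /dev/null; then
    generate_pv "${basedir}" "${name}" | oc create -f -
  else
    echo "persistentvolume ${name} already exists"
  fi
}

basedir="/Users/andreabattaglia/.oc/profiles/default/pv"
setup_pv_dir "${basedir}/registry"

for i in $(seq -f "%04g" 1 100); do
  create_pv "${basedir}" "pv${i}"
done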

@pweil- added the kind/bug label Aug 28, 2017
@csrwng (Contributor) commented Aug 28, 2017

@andreabattaglia How much CPU/memory are you giving Docker? Do you get a different result if you increase the amount of memory?

@andreabattaglia (Author)

@csrwng The Docker machine was started with 4 cores and 4 GB of memory. After the first failure, I tried starting from scratch, allocating 6 GB and then 8 GB, without success.

@csrwng (Contributor) commented Aug 29, 2017

@andreabattaglia is it the same failure every time? (failure to mount the token on the router deployer pod)

What happens if you login as admin and manually redeploy the router?

oc login -u system:admin
oc rollout latest dc/router -n default

@andreabattaglia (Author)

@csrwng It fails in exactly the same way every time. I've tried redeploying the router pod manually as well, and I get the same error.

@csrwng (Contributor) commented Sep 8, 2017

@andreabattaglia Sorry for the delay in getting back to you. What happens if you try running 'oc cluster up' without the wrapper (and without host-mounted directories)?
I'm not able to reproduce this locally, so I'm wondering if it's an issue with host mounting of directories. It would also be good to reset your Docker, just to rule out any issues with the Docker host file system.
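
For reference, the bare invocation drops the wrapper's --host-*-dir flags; a sketch built from the flags shown earlier in this issue (resetting Docker itself is done from the Docker for Mac Preferences):

$ oc cluster down
$ oc cluster up --version v3.6.173.0.5 \
    --image registry.access.redhat.com/openshift3/ose \
    --public-hostname 127.0.0.1 \
    --routing-suffix apps.127.0.0.1.nip.io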

@andreabattaglia (Author)

@csrwng Thanks for your reply. That's one of my concerns, indeed. I'll try again to reset everything locally and reinstall Docker. I'll keep you updated.

@andreabattaglia (Author)

@csrwng No good news. I've updated both my macOS and Docker to the latest versions, and I noticed this during the OCP setup (which, by the way, keeps failing).
Please have a look at the attached image.

(screenshot: screen shot 2017-09-20 at 11 26 28 am)

The command used to provision and start the local OCP cluster is:

$ oc cluster up --version v3.6.173.0.5 --image registry.access.redhat.com/openshift3/ose --public-hostname 127.0.0.1 --routing-suffix apps.127.0.0.1.nip.io --host-data-dir /Users/andreabattaglia/.oc/profiles/miaw/data --host-config-dir /Users/andreabattaglia/.oc/profiles/miaw/config --host-pv-dir /Users/andreabattaglia/.oc/profiles/miaw/pv --use-existing-config -e TZ=CEST

@xman-berlin commented Oct 29, 2017

Did you solve your issue in the meantime? I'm facing almost the same issue and have already been trying to fix it for days, with no success yet. I created a question on Stack Overflow.

@xman-berlin

In the end, I installed Minishift and everything works as expected.

@openshift-bot (Contributor)

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot added the lifecycle/stale label Feb 24, 2018
@DaleBingham

I am running 'oc cluster up' with Docker Engine 17.12.0-ce and the latest Mac OS X, 10.13.3 (17D102). I got OpenShift Origin to work; however, on a brand new 'oc cluster up', deploying httpd after logging in with the system login gives me the error below. I am running with --public-hostname set to my 192.168.x.x IP. I can log in and see things within OpenShift, but deploying always hits this problem.

I also gave 'developer' rights to see the 'default' project, and the Docker registry gives me this error:
MountVolume.SetUp failed for volume "deployer-token-pgjw6" : exit status 1

I get that for the Docker registry and the router in the Default project.

@csrwng (Contributor) commented Mar 15, 2018

@DaleBingham There is an issue with the latest Docker for Mac: see #18596. The only sane workaround at the moment is to downgrade, as mentioned in that issue.

@DaleBingham

@csrwng Thank you. I have a CentOS VM on my Mac that I am running this on for now, so I can use that. It just stinks. I am subscribed to #18569, so I will see when they fix it. Or I'll download the source and see what is causing it.

@openshift-bot (Contributor)

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Apr 14, 2018
@openshift-bot (Contributor)

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@kalote commented Jan 23, 2019

Hello,

I still have this issue:
Docker 2.0.0.2 / Engine 18.09.1
oc v3.7.0+7ed6862
kubernetes v1.7.6+a08f5eeb62

When I run oc cluster up, it starts the cluster, but I get Failed Mount: MountVolume.SetUp failed everywhere (router, registry, ...).

Note: I have the proxy issue when starting Docker, but I'm not sure it's related.

WARNING: An HTTP proxy (gateway.docker.internal:3128) is configured for the Docker daemon, but you did not specify one for cluster up
WARNING: An HTTPS proxy (gateway.docker.internal:3129) is configured for the Docker daemon, but you did not specify one for cluster up
WARNING: A proxy is configured for Docker, however 172.30.1.1 is not included in its NO_PROXY list.
   172.30.1.1 needs to be included in the Docker daemon's NO_PROXY environment variable so pushes to the local OpenShift registry can succeed.

I tried resetting Docker, running with --host-data-dir and other configs, and running with sudo, but I still get the same result.

Thanks for your support.
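
If the proxy is a factor, oc cluster up in this era accepted proxy flags mirroring the daemon settings; a hedged sketch using the values from the warnings above (exact flag availability depends on the oc version):

$ oc cluster up \
    --http-proxy=http://gateway.docker.internal:3128 \
    --https-proxy=http://gateway.docker.internal:3129 \
    --no-proxy=172.30.1.1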

@kalote commented Jan 23, 2019

/reopen

@openshift-ci-robot

@kalote: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
