
Image resource quota should deny a push of built image exceeding openshift.io/imagestreams quota #17786

Open
dmage opened this issue Dec 14, 2017 · 15 comments
Labels: component/image, kind/bug, kind/test-flake, lifecycle/frozen, priority/P0

Comments


dmage commented Dec 14, 2017

[Feature:ImageQuota][registry][Serial] Image resource quota 
  should deny a push of built image exceeding openshift.io/imagestreams quota [Suite:openshift]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/imageapis/quota_admission.go:48
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [Feature:ImageQuota][registry][Serial] Image resource quota
  /tmp/openshift/build-rpm-release/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 14 14:08:04.438: INFO: >>> kubeConfig: /etc/origin/master/admin.kubeconfig
STEP: Building a namespace api object
Dec 14 14:08:04.483: INFO: configPath is now "/tmp/extended-test-resourcequota-admission-4ghrs-x88xg-user.kubeconfig"
Dec 14 14:08:04.483: INFO: The user is now "extended-test-resourcequota-admission-4ghrs-x88xg-user"
Dec 14 14:08:04.483: INFO: Creating project "extended-test-resourcequota-admission-4ghrs-x88xg"
Dec 14 14:08:04.567: INFO: Waiting on permissions in project "extended-test-resourcequota-admission-4ghrs-x88xg" ...
STEP: Waiting for a default service account to be provisioned in namespace
[JustBeforeEach] [Feature:ImageQuota][registry][Serial] Image resource quota
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/imageapis/quota_admission.go:33
STEP: Waiting for builder service account
[It] should deny a push of built image exceeding openshift.io/imagestreams quota [Suite:openshift]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/imageapis/quota_admission.go:48
STEP: creating resource quota with a limit map[openshift.io/imagestreams:{{0 0} {<nil>} 0 DecimalSI}]
STEP: waiting for resource quota isquota to get updated
STEP: trying to push image exceeding quota map[openshift.io/imagestreams:{{0 0} {<nil>} 0 DecimalSI}]
Step 1 : FROM scratch
 ---> 
Step 2 : COPY data1 /data1
 ---> 6b4258c5956a
Removing intermediate container 5988180cdc24
Successfully built 6b4258c5956a
Dec 14 14:08:06.031: INFO: Running 'oc whoami --config=/tmp/extended-test-resourcequota-admission-4ghrs-x88xg-user.kubeconfig --namespace=extended-test-resourcequota-admission-4ghrs-x88xg -t'
The push refers to a repository [172.30.40.79:5000/extended-test-resourcequota-admission-4ghrs-x88xg/first]
Preparing
Pushing [==================================================>]    512 B
Pushing
Pushing [==================================================>]    637 B
Pushing
Pushing [==================================================>] 2.048 kB
Pushing
Pushing [==================================================>] 2.048 kB
Pushing
Pushed
STEP: bump the quota to openshift.io/imagestreams=1
STEP: waiting for resource quota isquota to get updated
STEP: trying to push image below quota map[openshift.io/imagestreams:{{1 0} {<nil>}  DecimalSI}]
Step 1 : FROM scratch
 ---> 
Step 2 : COPY data1 /data1
 ---> cf7e50787b3e
Removing intermediate container 656e04f8bf40
Successfully built cf7e50787b3e
Dec 14 14:08:07.856: INFO: Running 'oc whoami --config=/tmp/extended-test-resourcequota-admission-4ghrs-x88xg-user.kubeconfig --namespace=extended-test-resourcequota-admission-4ghrs-x88xg -t'
The push refers to a repository [172.30.40.79:5000/extended-test-resourcequota-admission-4ghrs-x88xg/first]
Preparing
Pushing [==================================================>]    512 B
Pushing
Pushing [==================================================>]    637 B
Pushing
Pushing [==================================================>] 2.048 kB
Pushing
Pushing [==================================================>] 2.048 kB
Pushing
Pushed
tag1: digest: sha256:2a74f58d7593b8103c73261aa200c723320289fc016588836c3fe21d2c6fcd73 size: 524
STEP: waiting for resource quota isquota to get updated
STEP: trying to push image to existing image stream map[openshift.io/imagestreams:{{1 0} {<nil>}  DecimalSI}]
Step 1 : FROM scratch
 ---> 
Step 2 : COPY data1 /data1
 ---> 1c1680a12eb8
Removing intermediate container c2a3611a136a
Successfully built 1c1680a12eb8
Dec 14 14:08:09.701: INFO: Running 'oc whoami --config=/tmp/extended-test-resourcequota-admission-4ghrs-x88xg-user.kubeconfig --namespace=extended-test-resourcequota-admission-4ghrs-x88xg -t'
The push refers to a repository [172.30.40.79:5000/extended-test-resourcequota-admission-4ghrs-x88xg/first]
Preparing
Pushing [==================================================>]    512 B
Pushing
Pushing [==================================================>]    637 B
Pushing
Pushing [==================================================>] 2.048 kB
Pushing
Pushing [==================================================>] 2.048 kB
Pushing
Pushed
tag2: digest: sha256:2dada2d420a69fe90d77567725224e55abd997319ba13111b04041f7392745fe size: 524
STEP: trying to push image exceeding quota map[openshift.io/imagestreams:{{1 0} {<nil>}  DecimalSI}]
Step 1 : FROM scratch
 ---> 
Step 2 : COPY data1 /data1
 ---> b0d0fda0371d
Removing intermediate container 59dab0e06572
Successfully built b0d0fda0371d
Dec 14 14:08:11.381: INFO: Running 'oc whoami --config=/tmp/extended-test-resourcequota-admission-4ghrs-x88xg-user.kubeconfig --namespace=extended-test-resourcequota-admission-4ghrs-x88xg -t'
The push refers to a repository [172.30.40.79:5000/extended-test-resourcequota-admission-4ghrs-x88xg/second]
Preparing
Pushing [==================================================>]    512 B
Pushing
Pushing [==================================================>]    637 B
Pushing
Pushing [==================================================>] 2.048 kB
Pushing
Pushing [==================================================>] 2.048 kB
Pushing
Pushed
STEP: bump the quota to openshift.io/imagestreams=2
STEP: waiting for resource quota isquota to get updated
STEP: waiting for resource quota isquota to get updated
STEP: trying to push image below quota map[openshift.io/imagestreams:{{2 0} {<nil>}  DecimalSI}]
Step 1 : FROM scratch
 ---> 
Step 2 : COPY data1 /data1
 ---> 05bc73bd6744
Removing intermediate container 0b2417cd4c6e
Successfully built 05bc73bd6744
Dec 14 14:08:13.077: INFO: Running 'oc whoami --config=/tmp/extended-test-resourcequota-admission-4ghrs-x88xg-user.kubeconfig --namespace=extended-test-resourcequota-admission-4ghrs-x88xg -t'
The push refers to a repository [172.30.40.79:5000/extended-test-resourcequota-admission-4ghrs-x88xg/second]
Preparing
Pushing [==================================================>]    512 B
Pushing
Pushing [==================================================>]    637 B
Pushing
Pushing [==================================================>] 2.048 kB
Pushing
Pushing [==================================================>] 2.048 kB
Pushing
Pushed
tag1: digest: sha256:2ea2c3c42e1ca708089a3c1ca47f2e5fd13af7103de4f77ccd6f91a38adb0b5e size: 524
STEP: waiting for resource quota isquota to get updated
STEP: trying to push image exceeding quota map[openshift.io/imagestreams:{{2 0} {<nil>}  DecimalSI}]
Step 1 : FROM scratch
 ---> 
Step 2 : COPY data1 /data1
 ---> 82e6a6c83365
Removing intermediate container d85c25b3a3fe
Successfully built 82e6a6c83365
Dec 14 14:08:14.825: INFO: Running 'oc whoami --config=/tmp/extended-test-resourcequota-admission-4ghrs-x88xg-user.kubeconfig --namespace=extended-test-resourcequota-admission-4ghrs-x88xg -t'
The push refers to a repository [172.30.40.79:5000/extended-test-resourcequota-admission-4ghrs-x88xg/third]
Preparing
Pushing [==================================================>]    512 B
Pushing
Pushing [==================================================>]    637 B
Pushing
Pushing [==================================================>] 2.048 kB
Pushing
Pushing [==================================================>] 2.048 kB
Pushing
Pushed
STEP: deleting first image stream
STEP: Deleting quota isquota
STEP: Deleting images and image streams in project "extended-test-resourcequota-admission-4ghrs-x88xg-s2"
STEP: Deleting project "extended-test-resourcequota-admission-4ghrs-x88xg-s2"
STEP: Deleting images and image streams in project "extended-test-resourcequota-admission-4ghrs-x88xg-s1"
STEP: Deleting project "extended-test-resourcequota-admission-4ghrs-x88xg-s1"
STEP: Deleting images and image streams in project "extended-test-resourcequota-admission-4ghrs-x88xg-shared"
STEP: Deleting project "extended-test-resourcequota-admission-4ghrs-x88xg-shared"
STEP: Deleting images and image streams in project "extended-test-resourcequota-admission-4ghrs-x88xg"
[AfterEach] [Feature:ImageQuota][registry][Serial] Image resource quota
  /tmp/openshift/build-rpm-release/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
STEP: Collecting events from namespace "extended-test-resourcequota-admission-4ghrs-x88xg".
STEP: Found 0 events.
Dec 14 14:08:45.407: INFO: POD                       NODE                           PHASE    GRACE  CONDITIONS
Dec 14 14:08:45.407: INFO: docker-registry-4-xpgfp   ip-172-18-13-128.ec2.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-12-14 14:07:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-12-14 14:07:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-12-14 14:07:13 +0000 UTC  }]
Dec 14 14:08:45.407: INFO: registry-console-1-56p2l  ip-172-18-13-128.ec2.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-12-14 13:55:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-12-14 13:56:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-12-14 13:55:01 +0000 UTC  }]
Dec 14 14:08:45.407: INFO: router-1-d54zc            ip-172-18-13-128.ec2.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-12-14 13:53:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-12-14 13:54:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-12-14 13:53:50 +0000 UTC  }]
Dec 14 14:08:45.407: INFO: apiserver-vn2zr           ip-172-18-13-128.ec2.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-12-14 13:55:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-12-14 13:56:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-12-14 13:56:00 +0000 UTC  }]
Dec 14 14:08:45.407: INFO: controller-manager-sqxh2  ip-172-18-13-128.ec2.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-12-14 13:55:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-12-14 13:56:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-12-14 13:56:00 +0000 UTC  }]
Dec 14 14:08:45.407: INFO: asb-1-deploy              ip-172-18-13-128.ec2.internal  Failed          [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-12-14 13:56:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-12-14 14:06:26 +0000 UTC ContainersNotReady containers with unready status: [deployment]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-12-14 13:56:20 +0000 UTC  }]
Dec 14 14:08:45.407: INFO: asb-etcd-1-deploy         ip-172-18-13-128.ec2.internal  Failed          [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-12-14 13:56:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-12-14 14:06:28 +0000 UTC ContainersNotReady containers with unready status: [deployment]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-12-14 13:56:22 +0000 UTC  }]
Dec 14 14:08:45.407: INFO: apiserver-8z6s4           ip-172-18-13-128.ec2.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-12-14 13:56:30 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-12-14 13:57:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-12-14 13:57:00 +0000 UTC  }]
Dec 14 14:08:45.407: INFO: 
Dec 14 14:08:45.409: INFO: 
Logging node info for node ip-172-18-13-128.ec2.internal
Dec 14 14:08:45.411: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-172-18-13-128.ec2.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-172-18-13-128.ec2.internal,UID:07d0d39d-e0d6-11e7-8e83-0ea5a274e0d2,ResourceVersion:3803,Generation:0,CreationTimestamp:2017-12-14 13:52:31 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: ip-172-18-13-128.ec2.internal,openshift-infra: apiserver,region: infra,zone: default,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:,ExternalID:ip-172-18-13-128.ec2.internal,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},memory: {{16657121280 0} {<nil>}  BinarySI},pods: {{40 0} {<nil>} 40 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},memory: {{16552263680 0} {<nil>}  BinarySI},pods: {{40 0} {<nil>} 40 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2017-12-14 14:08:43 +0000 UTC 2017-12-14 13:52:31 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2017-12-14 14:08:43 +0000 UTC 2017-12-14 13:52:31 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2017-12-14 14:08:43 +0000 UTC 2017-12-14 13:52:31 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {Ready True 2017-12-14 14:08:43 +0000 UTC 2017-12-14 13:53:11 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.18.13.128} {Hostname 
ip-172-18-13-128.ec2.internal}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4372f1e2f8c642d3a2f3ed11aa3fe654,SystemUUID:EC2696C2-D133-3E7B-8CCE-D70E5FB3CF95,BootID:fd324466-7dff-47ab-ae86-a10065ef9e68,KernelVersion:3.10.0-693.11.1.el7.x86_64,OSImage:Red Hat Enterprise Linux Server 7.4 (Maipo),ContainerRuntimeVersion:docker://1.12.6,KubeletVersion:v1.8.1+0d5291c,KubeProxyVersion:v1.8.1+0d5291c,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/openvswitch:d3cdfec openshift/openvswitch:latest] 1449562143} {[openshift/node:d3cdfec openshift/node:latest] 1447782422} {[openshift/origin-keepalived-ipfailover:d3cdfec openshift/origin-keepalived-ipfailover:latest] 1302704262} {[openshift/origin-haproxy-router:d3cdfec openshift/origin-haproxy-router:latest] 1297290633} {[openshift/origin-recycler:d3cdfec openshift/origin-recycler:latest] 1275767348} {[openshift/origin-docker-builder:d3cdfec openshift/origin-docker-builder:latest] 1275767348} {[openshift/origin-f5-router:d3cdfec openshift/origin-f5-router:latest] 1275767348} {[openshift/origin-deployer:d3cdfec openshift/origin-deployer:latest] 1275767348} {[openshift/origin:d3cdfec openshift/origin:latest] 1275767348} {[openshift/origin-sti-builder:d3cdfec openshift/origin-sti-builder:latest] 1275767348} {[docker.io/openshift/origin@sha256:57071030fd1eb41646085a3bd8cac7c078438f23f60170aafc6d59623b5b22ed docker.io/openshift/origin:latest] 1275556871} {[docker.io/openshift/origin-gce@sha256:b2e2940f2855ef4ddcfe4cc3ae89f308b341c68e32f1786014303dce0c7865fe docker.io/openshift/origin-gce:latest] 956874595} {[docker.io/openshift/origin-release@sha256:095c6cda79411532fc27555c676a9b75d04cdde3c7e3fbc73e61210a3c19630f docker.io/openshift/origin-release:golang-1.9] 899635805} {[openshift/origin-logging-eventrouter:b737acc openshift/origin-logging-eventrouter:latest] 840408370} 
{[docker.io/openshift/origin-release@sha256:da2d208b42de3a6c34d64b861e0013d790a1f8e6da1b854e3bd306e8ab745b36 docker.io/openshift/origin-release:golang-1.8] 834331811} {[docker.io/openshift/origin-release@sha256:b82c4cd9dc5b1bd947017ef0c232951b13d19730543326ba218ce6491d7d60ad docker.io/openshift/origin-release:golang-1.7] 828020628} {[openshift/origin-logging-elasticsearch:b737acc openshift/origin-logging-elasticsearch:latest] 702838374} {[openshift/origin-logging-fluentd:b737acc openshift/origin-logging-fluentd:latest] 677776311} {[openshift/origin-federation:d3cdfec openshift/origin-federation:latest] 676846604} {[openshift/origin-logging-auth-proxy:b737acc openshift/origin-logging-auth-proxy:latest] 668306902} {[docker.io/node@sha256:5757581a8ff7e08041512a54aa3f573d33fecdce81d603e48a759956cd99bdd3 docker.io/node:4.7.2] 650142332} {[docker.io/centos/ruby-24-centos7@sha256:fedc5fb6a8084fd062aa85f50a3fbf15f856cc01ef7cb1461de7060b2e8128d5 docker.io/centos/ruby-24-centos7:latest] 554698786} {[openshift/origin-logging-kibana:b737acc openshift/origin-logging-kibana:latest] 548422783} {[openshift/origin-docker-registry:d3cdfec openshift/origin-docker-registry:da151d9 openshift/origin-docker-registry:latest] 485055056} {[openshift/origin-docker-registry:c540fa0] 484992540} {[openshift/origin-egress-http-proxy:d3cdfec openshift/origin-egress-http-proxy:latest] 428528862} {[openshift/origin-egress-router:d3cdfec openshift/origin-egress-router:latest] 396688755} {[openshift/origin-base:d3cdfec openshift/origin-base:latest] 394914146} {[docker.io/openshift/base-centos7@sha256:aea292a3bddba020cde0ee83e6a45807931eb607c164ec6a3674f67039d8cd7c docker.io/openshift/base-centos7:latest] 383049978} {[docker.io/cockpit/kubernetes@sha256:a8e58cd5e6f5a4d12d1e2dfd339686b74f3c22586952ca7aa184dc254ab49714 docker.io/cockpit/kubernetes:latest] 375914047} {[openshift/origin-cluster-capacity:d3cdfec openshift/origin-cluster-capacity:latest] 316141233} 
{[openshift/origin-template-service-broker:d3cdfec openshift/origin-template-service-broker:latest] 309569586} {[openshift/origin-service-catalog:d3cdfec openshift/origin-service-catalog:latest] 283942819} {[docker.io/openshift/origin-service-catalog@sha256:957934537721da33362693d4f1590dc79dc5da7438799bf14d645165768e53ef docker.io/openshift/origin-service-catalog:latest] 283922314} {[openshift/origin-logging-curator:b737acc openshift/origin-logging-curator:latest] 236625906} {[openshift/origin-pod:d3cdfec openshift/origin-pod:latest] 224883243} {[openshift/origin-source:d3cdfec openshift/origin-source:latest] 203538613} {[docker.io/centos@sha256:3b1a65e9a05f0a77b5e8a698d3359459904c2a354dc3b25ae2e2f5c95f0b3667 docker.io/centos:7 docker.io/centos:centos7] 203538471} {[openshift/hello-openshift:d3cdfec openshift/hello-openshift:latest] 6089990}],VolumesInUse:[],VolumesAttached:[],},}
Dec 14 14:08:45.413: INFO: 
Logging kubelet events for node ip-172-18-13-128.ec2.internal
Dec 14 14:08:45.420: INFO: 
Logging pods the kubelet thinks is on node ip-172-18-13-128.ec2.internal
Dec 14 14:08:45.442: INFO: router-1-d54zc started at 2017-12-14 13:53:50 +0000 UTC (0+1 container statuses recorded)
Dec 14 14:08:45.442: INFO: 	Container router ready: true, restart count 0
Dec 14 14:08:45.442: INFO: registry-console-1-56p2l started at 2017-12-14 13:55:01 +0000 UTC (0+1 container statuses recorded)
Dec 14 14:08:45.442: INFO: 	Container registry-console ready: true, restart count 0
Dec 14 14:08:45.442: INFO: apiserver-vn2zr started at 2017-12-14 13:55:36 +0000 UTC (0+1 container statuses recorded)
Dec 14 14:08:45.442: INFO: 	Container apiserver ready: true, restart count 0
Dec 14 14:08:45.442: INFO: asb-etcd-1-deploy started at 2017-12-14 13:56:22 +0000 UTC (0+1 container statuses recorded)
Dec 14 14:08:45.442: INFO: 	Container deployment ready: false, restart count 0
Dec 14 14:08:45.442: INFO: apiserver-8z6s4 started at 2017-12-14 13:56:30 +0000 UTC (0+1 container statuses recorded)
Dec 14 14:08:45.442: INFO: 	Container c ready: true, restart count 0
Dec 14 14:08:45.442: INFO: asb-1-deploy started at 2017-12-14 13:56:20 +0000 UTC (0+1 container statuses recorded)
Dec 14 14:08:45.442: INFO: 	Container deployment ready: false, restart count 0
Dec 14 14:08:45.442: INFO: controller-manager-sqxh2 started at 2017-12-14 13:55:36 +0000 UTC (0+1 container statuses recorded)
Dec 14 14:08:45.442: INFO: 	Container controller-manager ready: true, restart count 2
Dec 14 14:08:45.442: INFO: docker-registry-4-xpgfp started at 2017-12-14 14:07:13 +0000 UTC (0+1 container statuses recorded)
Dec 14 14:08:45.442: INFO: 	Container registry ready: true, restart count 0
W1214 14:08:45.444234   50971 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 14 14:08:45.465: INFO: 
Latency metrics for node ip-172-18-13-128.ec2.internal
Dec 14 14:08:45.465: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:2m3.018246s}
Dec 14 14:08:45.465: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:2m3.018246s}
Dec 14 14:08:45.465: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:2m3.016118s}
Dec 14 14:08:45.465: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.99 Latency:2m3.001254s}
STEP: Dumping a list of prepulled images on each node...
Dec 14 14:08:45.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-resourcequota-admission-4ghrs-x88xg" for this suite.
Dec 14 14:08:51.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 14 14:08:51.528: INFO: namespace: extended-test-resourcequota-admission-4ghrs-x88xg, resource: bindings, ignored listing per whitelist
Dec 14 14:08:51.623: INFO: namespace extended-test-resourcequota-admission-4ghrs-x88xg deletion completed in 6.154258186s

• Failure [47.185 seconds]
[Feature:ImageQuota][registry][Serial] Image resource quota
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/imageapis/quota_admission.go:29
  should deny a push of built image exceeding openshift.io/imagestreams quota [Suite:openshift] [It]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/imageapis/quota_admission.go:48

  Expected error:
      <*errors.errorString | 0xc4202645c0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  not to have occurred

  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/imageapis/quota_admission.go:109

/kind test-flake
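For reference, the quota object the test log calls "isquota" can be expressed as a ResourceQuota along these lines (a sketch, not the actual test fixture; the name comes from the log, and the hard limit mirrors the test's second phase, where the quota is bumped to 1):

```yaml
# Sketch of the ResourceQuota the test log refers to as "isquota".
# openshift.io/imagestreams caps the number of image streams in the
# namespace; the test starts at 0, then bumps the limit to 1, then 2.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: isquota
spec:
  hard:
    openshift.io/imagestreams: "1"
```

With the limit at 1, a push that would create a second image stream in the namespace should be denied by quota admission, which is exactly what the failing assertion checks.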

@openshift-ci-robot openshift-ci-robot added the kind/test-flake Categorizes issue or PR as related to test flakes. label Dec 14, 2017
@dmage dmage added kind/bug Categorizes issue or PR as related to a bug. priority/P1 labels Dec 14, 2017

dmage commented Dec 14, 2017

Looks like the rebase broke quota for image streams.


bparees commented Dec 14, 2017

Well, that didn't take long (for an origin change to break image-registry while passing all origin tests). @stevekuznetsov @smarterclayton, what is our plan for having cohesive CI across repos, such that origin PRs actually have to pass the tests of their dependent repos?

@stevekuznetsov

What job did this break in? We are running the registry tests in the Origin e2e.

@stevekuznetsov

And which PR in Origin broke the registry but passed the tests?

@stevekuznetsov

Like, if the registry built and deployed from master doesn't pass e2e in Origin, I don't see how that would work.


bparees commented Dec 14, 2017

@stevekuznetsov #17786 (comment)

So are we really running the image-registry extended tests against origin repo PRs?


dmage commented Dec 14, 2017

@stevekuznetsov In #17576, the ci/openshift-jenkins/extended_image_registry job was Skipped.


bparees commented Dec 14, 2017

@deads2k This is the broken quota issue I just mentioned to you.

@stevekuznetsov

IIRC, some of the registry tests were in the normal conformance e2e bucket, no? Other than that, unless we want to make the image-registry job mandatory, it should have been on the author of the rebase PR to run the other tests as well.

@dmage dmage assigned pweil- and unassigned dmage Jan 3, 2018

bparees commented Jan 3, 2018

@dmage Can you give a reason for assigning this back to @pweil-? I know the master team broke this in the rebase (and maybe owns the fix for it?), but I'm not sure they know that unless you explain it.


dmage commented Jan 3, 2018

@bparees Because I know nothing about quota.

@bparees bparees assigned mfojtik and deads2k and unassigned bparees and pweil- Jan 3, 2018

bparees commented Jan 3, 2018

@mfojtik @deads2k: my understanding is that @dmage worked around this in the test by making it wait longer (10 minutes, to get a relist), but that @deads2k was looking into fixing the quota behavior that was broken in the rebase.

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now, please do so with /close.

/lifecycle stale

@openshift-ci-robot openshift-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 3, 2018

bparees commented Apr 3, 2018

/lifecycle frozen
/remove-lifecycle stale

@openshift-ci-robot openshift-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 3, 2018