
We're still vetting Docker RPMs ourselves #18294

Closed
stevekuznetsov opened this issue Jan 25, 2018 · 9 comments
Labels
area/tests lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. priority/P0

Comments

@stevekuznetsov
Contributor

Symptom:

  1. Hosts:    localhost
     Play:     OpenShift Health Checks
     Task:     Run health checks (install) - EL
     Message:  One or more checks failed
     Details:  check "docker_image_availability":
               Some dependencies are required in order to check Docker image availability.
               Unable to install required packages on this host:
                   python-docker-py,
                   skopeo
               Error: Package: 1:skopeo-0.1.26-2.dev.git2e8377a.el7.x86_64 (oso-rhui-rhel-server-extras)
                          Requires: skopeo-containers = 1:0.1.26-2.dev.git2e8377a.el7
                          Installed: 1:skopeo-containers-0.1.27-3.dev.git14245f2.el7.x86_64 (@httpsmirroropenshiftcomenterpriserheldockertestedx8664os)
                              skopeo-containers = 1:0.1.27-3.dev.git14245f2.el7
                          Available: 1:skopeo-containers-0.1.17-0.7.git1f655f3.el7.x86_64 (oso-rhui-rhel-server-extras)
                              skopeo-containers = 1:0.1.17-0.7.git1f655f3.el7
                          Available: 1:skopeo-containers-0.1.17-1.el7.x86_64 (oso-rhui-rhel-server-extras)
                              skopeo-containers = 1:0.1.17-1.el7
                          Available: 1:skopeo-containers-0.1.18-1.el7.x86_64 (oso-rhui-rhel-server-extras)
                              skopeo-containers = 1:0.1.18-1.el7
                          Available: 1:skopeo-containers-0.1.19-1.el7.x86_64 (oso-rhui-rhel-server-extras)
                              skopeo-containers = 1:0.1.19-1.el7
                          Available: 1:skopeo-containers-0.1.20-2.el7.x86_64 (oso-rhui-rhel-server-extras)
                              skopeo-containers = 1:0.1.20-2.el7
                          Available: 1:skopeo-containers-0.1.20-2.1.gite802625.el7.x86_64 (oso-rhui-rhel-server-extras)
                              skopeo-containers = 1:0.1.20-2.1.gite802625.el7
                          Available: 1:skopeo-containers-0.1.23-1.git1bbd87f.el7.x86_64 (oso-rhui-rhel-server-extras)
                              skopeo-containers = 1:0.1.23-1.git1bbd87f.el7
                          Available: 1:skopeo-containers-0.1.24-1.dev.git28d4e08.el7.x86_64 (oso-rhui-rhel-server-extras)
                              skopeo-containers = 1:0.1.24-1.dev.git28d4e08.el7
                          Available: 1:skopeo-containers-0.1.26-2.dev.git2e8377a.el7.x86_64 (oso-rhui-rhel-server-extras)
                              skopeo-containers = 1:0.1.26-2.dev.git2e8377a.el7
               

The repo has skopeo-0.1.27-3.dev.git14245f2.el7.x86_64.rpm, FWIW; no clue how an older skopeo was installed.
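The failure above is yum's strict versioned-dependency check: the extras repo's skopeo carries `Requires: skopeo-containers = <exact epoch:version-release>`, but a newer skopeo-containers was already installed from the dockertested mirror, and yum will not implicitly downgrade it. A minimal sketch of why the transaction is rejected (plain string equality stands in for real RPM EVR comparison, which is more involved):

```python
# Sketch: why yum rejects the skopeo install in the log above.
# An "=" dependency pins the exact epoch:version-release (EVR);
# an already-installed *different* EVR cannot satisfy it.

REQUIRED = "1:0.1.26-2.dev.git2e8377a.el7"    # what extras' skopeo demands
INSTALLED = "1:0.1.27-3.dev.git14245f2.el7"   # from the dockertested mirror

def satisfies_exact_requires(required_evr, installed_evr):
    """An '=' versioned Requires only matches the identical EVR."""
    return required_evr == installed_evr

if not satisfies_exact_requires(REQUIRED, INSTALLED):
    print(f"conflict: need skopeo-containers = {REQUIRED}, have {INSTALLED}")
```

The key point is that the newer installed skopeo-containers is not "good enough" for the older skopeo; the pin is exact in both directions.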

@stevekuznetsov stevekuznetsov self-assigned this Jan 25, 2018
@stevekuznetsov stevekuznetsov added kind/bug Categorizes issue or PR as related to a bug. priority/P0 area/tests kind/failing-test Categorizes issue or PR as related to a consistently or frequently failing test. labels Jan 25, 2018
@stevekuznetsov
Contributor Author

Should be fixed by openshift-eng/aos-cd-jobs@38e99d4?

Basic idea is:

  • we gather the full list of RPMs needed to install bleeding-edge Docker
  • when we validate a bleeding-edge Docker build, we publish those RPMs to the dockertested repo
  • when we install Docker, we turn that repo on and then off again
  • the Docker install pulled in skopeo-containers but not skopeo
  • when openshift-ansible later went to install skopeo, it couldn't reach the now-disabled dockertested repo
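The sequence above can be sketched as a toy model (hypothetical package names and versions; the real flow lives in aos-cd-jobs and openshift-ansible): once the dockertested repo is disabled again, the only skopeo still visible comes from extras, and its exact versioned requirement conflicts with the newer skopeo-containers the Docker install already pulled in.

```python
# Toy model of the enable -> install -> disable flow (hypothetical versions).
repos = {
    # vetted bleeding-edge builds, toggled on only while installing Docker
    "dockertested": {"enabled": False,
                     "pkgs": {"docker": "1.13", "skopeo": "0.1.27",
                              "skopeo-containers": "0.1.27"}},
    # always-enabled repo carrying older builds
    "extras":       {"enabled": True,
                     "pkgs": {"skopeo": "0.1.26", "skopeo-containers": "0.1.26"}},
}
installed = {}  # name -> version

def install(pkg):
    """Install pkg from any enabled repo. skopeo pins skopeo-containers to
    the same version, mirroring the 'Requires: ... =' error in the log."""
    for repo in repos.values():
        if not repo["enabled"] or pkg not in repo["pkgs"]:
            continue
        ver = repo["pkgs"][pkg]
        if pkg == "skopeo" and installed.get("skopeo-containers") not in (None, ver):
            continue  # versioned Requires unsatisfiable, like yum's refusal
        installed[pkg] = ver
        return True
    return False

repos["dockertested"]["enabled"] = True    # turn the repo on
install("docker")
install("skopeo-containers")               # 0.1.27 comes along; skopeo does not
repos["dockertested"]["enabled"] = False   # turn the repo off again
ok = install("skopeo")                     # extras' 0.1.26 conflicts with 0.1.27
print(ok, installed)
```

Here `install("skopeo")` returns `False`: extras still offers a skopeo, but its pin can no longer be satisfied without the disabled repo.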

@stevekuznetsov
Contributor Author

The whole approach to this needs to be 100000% rethought. As an aside, this whole process was a crutch until the Docker team could run Origin e2es as a test before they push out new RPMs. @jtligon did that happen?

@jtligon

jtligon commented Jan 25, 2018 via email

@stevekuznetsov stevekuznetsov changed the title Wrong skopeo RPM installed, breaking health-checks We're still vetting Docker RPMs ourselves Jan 25, 2018
@stevekuznetsov
Contributor Author

/assign @jtligon
/unassign

We need to get rid of our Docker testing jobs and just pull from RHEL 7 Next. We have delivered all the bits that are necessary to get Origin conformance to be simple to run.

@stevekuznetsov stevekuznetsov removed kind/bug Categorizes issue or PR as related to a bug. kind/failing-test Categorizes issue or PR as related to a consistently or frequently failing test. labels Jan 25, 2018
@runcom
Member

runcom commented Jan 31, 2018

/unassign @jtligon
/assign @runcom

@openshift-ci-robot openshift-ci-robot assigned runcom and unassigned jtligon Jan 31, 2018
@runcom
Member

runcom commented Jan 31, 2018

> Basic idea is:
>
> • we gather full list of necessary RPMs to install bleeding edge Docker
> • when we validate bleeding edge Docker, we publish those in the dockertested repo
> • when we install Docker, we turn that repo on and then off again
> • when we install Docker, it installed skopeo-containers but not skopeo
> • when openshift-ansible went to install skopeo, it couldn't reach the disabled dockertested repo

we can definitely rework this as:

I'm not sure we can actually do better than that. It's a chicken-and-egg situation: we can test Docker and it works for us with Origin, but when we hand the RPM out to you it breaks. With what I've proposed above, we should be able to catch at least issues like the one in this issue. Any other Docker issues will actually be caught by the Origin CI.

The container team is going to own the jobs needed so we can offload Steve from this.

@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot openshift-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 1, 2018
@openshift-bot
Contributor

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot openshift-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 1, 2018
@openshift-bot
Contributor

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close
