
'oc apply' consistently fails to update deploymentconfigs after creation with origin 3.7.0 #17998

Closed
mc-meta opened this issue Jan 5, 2018 · 7 comments
Labels
component/apps · kind/bug · lifecycle/rotten · priority/P2

Comments


mc-meta commented Jan 5, 2018

When running a simple (silly) teaching demo workflow at:

  https://github.com/michelebariani/minishift-demo-fortune

on minishift (with 3.7.0) and on OpenShift Origin 3.7.0 we are facing problems that were not there in previous versions (e.g. 3.6.1).

We are able to initially deploy the applications and create deployment configs via 'oc apply -f'.
App components come up just fine and everything works as expected, but subsequent invocations to update the deploymentconfigs fail with ImagePullBackOff, and at some point the dc deployment is rolled back.
Judging from the relevant errors, this seems to be related to the use of relative image references:

met [05:49pm]  ~/git/minishift-demo-fortune/s06/app-fortune> grep image: fortune-dc.yaml
          image: fortune/quote:latest
          image: fortune/db:latest

If we prefix the image references with the default registry docker-registry.default.svc:5000, 'oc apply' works as expected and the problem seems to disappear:

met [07:11pm]  ~/git/minishift-demo-fortune/s06/app-fortune> grep image: fortune-dc.yaml
          image: docker-registry.default.svc:5000/fortune/quote:latest
          image: docker-registry.default.svc:5000/fortune/db:latest
met [07:11pm]  ~/git/minishift-demo-fortune/s06/app-fortune> oc apply -f fortune-dc.yaml
deploymentconfig "fortune" configured

met [07:14pm]  ~/git/minishift-demo-fortune/s06/app-fortune> oc get events |grep -i pull|tail -n 4

2m         2m          1         fortune-26-wwgfj    Pod                     spec.containers{db}           Normal    Pulling                          {kubelet mid1-s101orig-5.xxxx.com}    pulling image "docker-registry.default.svc:5000/fortune/db:latest"
1m         1m          1         fortune-26-wwgfj    Pod                     spec.containers{db}           Normal    Pulled                           {kubelet mid1-s101orig-5.xxxx.com}    Successfully pulled image "docker-registry.default.svc:5000/fortune/db:latest"
1m         1m          1         fortune-26-wwgfj    Pod                     spec.containers{quote}        Normal    Pulling                          {kubelet mid1-s101orig-5.xxxx.com}    pulling image "docker-registry.default.svc:5000/fortune/quote:latest"
1m         1m          1         fortune-26-wwgfj    Pod                     spec.containers{quote}        Normal    Pulled                           {kubelet mid1-s101orig-5.xxxx.com}    Successfully pulled image "docker-registry.default.svc:5000/fortune/quote:latest"
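
In dc terms, the workaround simply amounts to fully qualifying the image field of each container in the pod template. A minimal sketch of the relevant portion of fortune-dc.yaml (only the image fields are shown; the container names come from the events above, everything else is omitted):

spec:
  template:
    spec:
      containers:
      - name: quote
        # fully qualified reference pointing at the integrated registry
        image: docker-registry.default.svc:5000/fortune/quote:latest
      - name: db
        image: docker-registry.default.svc:5000/fortune/db:latest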

As mentioned above, this behaviour is not present on OpenShift versions <= 3.6.1, where 'oc apply' consistently updates existing objects even when they contain relative image references.

Version

[root@mid1-s101orig-1 ~]# oc version
oc v3.7.0+7ed6862
kubernetes v1.7.6+a08f5eeb62
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://origin-cluster-stg.dodi.tech:8443
openshift v3.7.0+7ed6862
kubernetes v1.7.6+a08f5eeb62

Steps To Reproduce
  1. Clone https://github.com/michelebariani/minishift-demo-fortune
  2. Pick one of the OpenShift-related stages (s0[3-6])
  3. Create the 'fortune' project, build and push the needed images, and create the objects with 'oc apply'
  4. Verify that everything works as expected
  5. Update a dc (e.g. change the number of replicas) and redeploy with 'oc apply' (see the sketch below)
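
For illustration, a minimal sketch of the kind of edit made in step 5 (the replica value is an assumption, chosen only for illustration; everything else, in particular the relative image references, stays untouched between the two 'oc apply' invocations):

spec:
  replicas: 2                       # only this value differs from the initially applied file
  template:
    spec:
      containers:
      - name: quote
        image: fortune/quote:latest   # unchanged relative reference
      - name: db
        image: fortune/db:latest      # unchanged relative reference
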
Current Result

Pods do not come up (ImagePullBackOff) and at some point the dc deployment is rolled back:

met [05:49pm]  ~/git/minishift-demo-fortune/s06/app-fortune> oc apply -f fortune-dc.yaml
deploymentconfig "fortune" configured
met [05:50pm]  ~/git/minishift-demo-fortune/s06/app-fortune> oc get events |grep -i pull
15s        28s         2         fortune-24-6v7rr    Pod                     spec.containers{db}           Normal    Pulling                 {kubelet mid1-s101orig-4.xxxx.com}    pulling image "fortune/db:latest"
13s        26s         2         fortune-24-6v7rr    Pod                     spec.containers{db}           Warning   Failed                  {kubelet mid1-s101orig-4.xxxx.com}    Failed to pull image "fortune/db:latest": rpc error: code = 2 desc = Error: image fortune/db:latest not found
13s        26s         2         fortune-24-6v7rr    Pod                     spec.containers{db}           Warning   Failed                  {kubelet mid1-s101orig-4.xxxx.com}    Error: ErrImagePull
26s        26s         1         fortune-24-6v7rr    Pod                     spec.containers{quote}        Normal    Pulling                 {kubelet mid1-s101orig-4.xxxx.com}    pulling image "fortune/quote:latest"
21s        21s         1         fortune-24-6v7rr    Pod                     spec.containers{quote}        Warning   Failed                  {kubelet mid1-s101orig-4.xxxx.com}    Failed to pull image "fortune/quote:latest": rpc error: code = 2 desc = Error: image fortune/quote:latest not found
21s        21s         1         fortune-24-6v7rr    Pod                     spec.containers{quote}        Warning   Failed                  {kubelet mid1-s101orig-4.xxxx.com}    Error: ErrImagePull
12s        20s         3         fortune-24-6v7rr    Pod                     spec.containers{db}           Normal    BackOff                 {kubelet mid1-s101orig-4.xxxx.com}    Back-off pulling image "fortune/db:latest"
Expected Result

Be able to make simple updates to deploymentconfigs with 'oc apply' even when relative image references are present (as during initial creation).

Additional Information

The same behaviour seems to be present on minishift with 3.7.0. A similar report has already been filed as minishift/minishift#1821; the relevant origin issue is #17705.

Thanks for any insight.

pweil- added the component/apps, kind/bug, and priority/P1 labels on Jan 8, 2018
mfojtik (Contributor) commented Jan 15, 2018

@mc-meta how do you create the apps? Does the DC have a trigger? Also, can you gist the result of oc apply --loglevel=10? Why do you need to change the image manually and add the prefix? The trigger should take care of that when the image is deployed.


michelebariani commented Jan 15, 2018

Hi @mfojtik, as I'm working with @mc-meta on this, I can provide some initial feedback.

The apps are created with oc apply and yes, there are triggers; see e.g. https://github.com/michelebariani/minishift-demo-fortune/blob/master/s06/app-fortune/fortune-dc.yaml.j2 from the GitHub link provided by @mc-meta (this template then gets rendered as a .yaml).
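
To give an idea of the shape of the rendered dc: the following is only a sketch under those assumptions, not a verbatim copy of fortune-dc.yaml.j2; the trigger layout, selector/labels and image stream tag names are assumed from the container names above.

apiVersion: v1
kind: DeploymentConfig
metadata:
  name: fortune
spec:
  replicas: 1
  selector:
    app: fortune
  triggers:
  - type: ConfigChange
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
      - quote
      from:
        kind: ImageStreamTag
        name: quote:latest
  # an analogous ImageChange trigger for the db container is omitted here
  template:
    metadata:
      labels:
        app: fortune
    spec:
      containers:
      - name: quote
        image: fortune/quote:latest   # relative reference, as in the original report
      - name: db
        image: fortune/db:latest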

The "add the prefix" part was not something we wanted/needed to do, it was just a workaround that seems to actually work, we mentioned it to help pinpointing the root cause.

I'll have a go at oc apply --loglevel=10 and get back to you.


michelebariani commented Jan 15, 2018

Hi again @mfojtik, I'm attaching the log files as requested.

output-0.log refers to the first oc apply --loglevel=10, which creates the app.

output-1.log refers to a second oc apply --loglevel=10, whose purpose is to change the number of replicas with everything else untouched (in particular the container images), but which does not produce the desired result in the end (as per the initial description of the issue).

output-0.log
output-1.log


oesah commented Mar 13, 2018

I have the same issue. Usually I would expect apply to update the configs, but my changes are not applied to the configs I defined in a template. Any updates on this? Basically, what I'm trying to do is apply a template to the project after a build, in case something changed in the setup. Bad idea?

openshift-bot (Contributor) commented

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci-robot added the lifecycle/stale label on Jun 11, 2018
openshift-bot (Contributor) commented

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

openshift-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jul 11, 2018
openshift-bot (Contributor) commented

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close
