cluster up: do not re-initialize a cluster that already has been initialized #11146
Conversation
@bparees ptal
One nit and LGTM, but what's the use case? If I've got a running cluster and I run oc cluster up again, what am I expecting to have happen here?
cfg, _, err := c.OpenShiftHelper().GetConfig(c.LocalConfigDir)
if err != nil {
	glog.V(2).Infof("error reeading config: %v", err)
reading.
fixed
@bparees the use case is that across different runs of cluster up you may be preserving your data and config, and may not want cluster up to undo changes you made. For example, you could have deleted the default project, replaced default imagestreams, or changed the user provider.
@bparees this is needed
Only if we want to encourage using oc cluster up for maintaining "long term" environments. I think we should at least stop and discuss that before plowing ahead. I'm not saying it's not valid, but I wonder what sort of a slippery slope we're on before people start asking for oc cluster up to launch multiple nodes, or run multi-master, or otherwise supplant ansible for managing a cluster. @smarterclayton wdyt?
If the origin container exists, we should stop immediately. Is this trying to reapply config?
@smarterclayton this is for the use case when you are reusing config or the data directory (starting cluster up again). If you've already initialized your data, there's no need to re-initialize it.
Force-pushed from 8c379ed to 7bbe7da
@bparees I wouldn't necessarily call it a long-term use case to change the default config and then start cluster up again to try out your config change. I would think that's something we need for the developer use case. Also, this change is not adding any function to cluster up, simply preventing the blind initialization of data when you already have an existing data/config you want to use.
If you have a container already, you should use docker start origin.
You don't have a container. All you have is config and data. |
I am extremely skeptical of this use case. Let's talk tomorrow.
Force-pushed from 7bbe7da to f1a126f
@smarterclayton @bparees @cswong this does not add any additional use case. There is already a --keep-config switch; this PR just makes that switch consistent. If you keep configuration between restarts of the cluster, cluster up shouldn't be initializing things again: a) it shouldn't be needed, and b) if a user has deleted or changed some config, why should bringing the cluster back up override those decisions?
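To make that concrete, here is a minimal, hypothetical sketch of the check this implies: before re-initializing, look for an existing master configuration on disk and skip initialization if one is found. The directory layout and file name below are assumptions for illustration, not the exact paths cluster up uses.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// configAlreadyInitialized reports whether a previously written master
// configuration exists under configDir (path layout assumed for illustration).
func configAlreadyInitialized(configDir string) bool {
	_, err := os.Stat(filepath.Join(configDir, "master", "master-config.yaml"))
	return err == nil
}

func main() {
	configDir := "openshift.local.config" // hypothetical local config dir
	if configAlreadyInitialized(configDir) {
		fmt.Println("existing configuration found; skipping re-initialization")
		return
	}
	fmt.Println("no configuration found; writing defaults")
}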
@jorgemoralespou yes, I guess I didn't explain that well enough... will talk with @smarterclayton later today. And here I thought this was the least controversial out of any of the 'cluster up' fixes :-)
@cswong, @smarterclayton should change his nick to fightingclayton or opinionatedclayton. :-D
[merge]
Evaluated for origin merge up to f1a126f
[Test]ing while waiting on the merge queue
Flake #11094
Evaluated for origin test up to f1a126f
continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/9525/)
continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/9533/) (Image: devenv-rhel7_5106)
A condition function can now be attached to a bootstrap task so that if the condition evaluates to false, the task is skipped altogether.
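A minimal sketch of that mechanism, with type and field names invented for illustration (they are not the exact origin API): each bootstrap task may carry an optional condition function, and the runner skips any task whose condition evaluates to false, which is how already-initialized setup steps can be bypassed.

package main

import "fmt"

// task pairs a unit of bootstrap work with an optional gate.
type task struct {
	Name      string
	Condition func() bool // nil means "always run"
	Run       func() error
}

// runTasks executes tasks in order, skipping any whose condition is false.
func runTasks(tasks []task) error {
	for _, t := range tasks {
		if t.Condition != nil && !t.Condition() {
			fmt.Printf("skipping task %q: condition not met\n", t.Name)
			continue
		}
		if err := t.Run(); err != nil {
			return fmt.Errorf("task %q failed: %v", t.Name, err)
		}
	}
	return nil
}

func main() {
	alreadyInitialized := true // e.g. derived from an existing config/data dir
	tasks := []task{
		{
			Name:      "Install default image streams",
			Condition: func() bool { return !alreadyInitialized },
			Run:       func() error { fmt.Println("installing image streams"); return nil },
		},
		{
			Name: "Start server",
			Run:  func() error { fmt.Println("starting server"); return nil },
		},
	}
	if err := runTasks(tasks); err != nil {
		fmt.Println("error:", err)
	}
}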