cluster up: add persistent volumes on startup #12456
Conversation
@bparees ptal - I marked this PR as WIP because I'd like it to be tried out and tested to make sure it meets our use cases for persistence.
@jorgemoralespou @GrahamDumpleton your feedback is greatly appreciated.
lgtm
Are you doing auto volume provisioning? If not, how do you specify the number of volumes to pre-create? How do you override the size they are created as? There is also still the question of how to deal with volumes when they are released. The persistent volumes need to be set to 'Retain', as it is too risky to delete them automatically. Plus you can't necessarily delete them anyway, depending on file ownership.

FWIW, in our latest wrappers we pre-create 10 persistent volumes of 10Gi. In my version of the wrapper I allow the volume count and volume size to be overridden. Our wrappers also provide commands for manually adding volumes later. These can be additional anonymous volumes, or volumes you associate with a specific directory and set up with a claim, so they can be associated with a specific application more easily when the directory is not empty and already has data for that application.
No. Auto provisioning was investigated, but it doesn't handle permissions properly on the directories it creates.
It creates 1000.
You don't; they are all created as 100Gi volumes. OpenShift will assign you the smallest available volume that is greater than or equal to your request size, so all the volumes should be viable for any PVC. For any reasonable use of oc cluster up, that should be sufficient for all needs.
Our assumption is that 1000 ought to be enough for the intended purposes of oc cluster up (which does not include long-running/managed clusters), but there is a plan to document how to create additional PVs if desired (hopefully still on @csrwng's checklist, as part of the doc for this feature).
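For illustration of the binding behaviour described above (the claim name and sizes here are made up, not taken from the PR), a small claim like this would still bind to one of the pre-created 100Gi volumes, since the smallest available PV that satisfies the request is chosen:

```yaml
# Hypothetical claim: any request up to 100Gi can be satisfied by one of
# the pre-created hostPath volumes, so a 1Gi request binds to a 100Gi PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data            # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```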
  - ReadWriteOnce
  hostPath:
    path: ${BASEDIR}/${NAME}
  persistentVolumeReclaimPolicy: Retain
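For context, a complete volume rendered from a template like the fragment above would look roughly as follows; the capacity, volume name, and expanded path are assumptions based on the discussion, not the literal template in this PR:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001                           # ${NAME} expanded; illustrative
spec:
  capacity:
    storage: 100Gi                       # size mentioned in the thread
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /var/lib/origin/pv/pv0001      # ${BASEDIR}/${NAME}; path is assumed
  persistentVolumeReclaimPolicy: Retain  # keep data after the claim is deleted
```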
does recycle not work for hostpath volumes?
@bparees I tested this and it does work. Once you delete a PVC, the contents of the pv directory are wiped out. Trying to understand if this is what we should be using ('Recycle' instead of 'Retain')
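If the policy were switched as discussed here, only one field of the template changes (a sketch of the alternative, not what the PR finally merged):

```yaml
# Recycle scrubs the hostPath contents once the bound PVC is deleted;
# Retain leaves them in place for manual cleanup.
persistentVolumeReclaimPolicy: Recycle
```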
Is the volume created on the origin container or on the host?
Closed by accident.
The actual storage is a directory on the host filesystem.
It should be noted that mounting volumes from the real underlying host (not the VM) on MacOS X will not work properly if using VirtualBox. This is why we only support Docker for Mac with our wrappers, and not older Docker versions based on boot2docker. When using docker-machine, the volume path isn't on the local host; it can only be inside whatever docker-machine instance was created to run Docker.
Correct, however even then you get the benefit of persistence, even though it's not accessible on your host machine.
@GrahamDumpleton I want to better understand the risk. If you get rid of the PVC, should the contents of the directory not go away?
@csrwng In the case of trying to have 2 clusters sharing the same volumes on the host, Retain makes sense, as otherwise one cluster could wipe data from the other cluster. This is where it gets complicated to give one solution that works for every developer use case; hence the wrappers we create provide more meaningful commands and options (e.g. https://github.com/openshift-evangelists/oc-cluster-wrapper/blob/master/plugins.d/volumes.global.plugin).

When we're talking about a cluster for development, these "admin" tasks should be properly delegated to the "developer", but in a developer-friendly way. This only makes sense when the intended user of the tool is a developer. Again, I never know what the end goal of "cluster up" is :-D
it's definitely not to run 2 clusters on the same host.
@jorgemoralespou in such a use case (where you have a specific pv you want to reuse across different clusters) I would consider the pv more like a "pet" rather than "cattle". In that case, you're better off creating your own pv with the appropriate retention policy. At the very least we can have documentation to help you do that. However, the purpose of this pull is to add cattle-type PVs to cluster up environments.
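As a rough sketch of that "pet" approach (every name and path below is hypothetical), a hand-made PV can keep Retain and pin itself to one claim via claimRef, so no cluster ever recycles or rebinds it:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-db-pv                      # hypothetical pet volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain  # never wiped automatically
  hostPath:
    path: /data/myapp-db                 # pre-populated host directory
  claimRef:                              # only this claim may bind it
    namespace: myproject
    name: myapp-db
```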
Force-pushed 0d35314 to e885542
@bparees @jorgemoralespou there are 2 concerns that I still have regarding this pull:
Why not? I'm not saying at the same time (concurrently), but I definitely see many people (developers) creating multiple clusters with different information and starting/stopping the appropriate cluster at will. @csrwng Why not use a Job to do the PV creation, so if it fails, it will get launched again on the next start? Also, PV creation (the Job) should be idempotent, so it doesn't recreate PVs that already exist. Does it make sense?
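A minimal sketch of that Job idea (the image, service account, and script are assumptions, not the PR's actual implementation): a failed pod is retried because of the restart policy, and PVs that already exist are skipped, so reruns are harmless:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: persistent-volume-setup          # hypothetical name
spec:
  template:
    spec:
      serviceAccountName: pvinstaller    # assumed SA with rights to create PVs
      restartPolicy: OnFailure           # rerun the pod until it succeeds
      containers:
      - name: setup
        image: openshift/origin          # assumes the oc client is in the image
        command:
        - /bin/bash
        - -c
        # Idempotent loop: creating a PV that already exists simply fails
        # and is ignored, so only the missing ones are added.
        - |
          for i in $(seq -f "%04g" 1 100); do
            oc create -f /pv-templates/pv${i}.yaml 2>/dev/null || true
          done
```

The /pv-templates directory holding one definition per PV is also an assumption made for this sketch.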
oc cluster up is not for maintaining and managing long-lived clusters. If you're using it to stop and restart clusters, you're already off the use cases we are targeting.
@csrwng --host-pv-dir is my vote but I don't have strong feelings.
Then why are we providing keep-config? What are long-lived clusters for you? I don't agree with your assessment here.
@csrwng tested. Errors with the following configuration:
Included the flag:
@csrwng knowing the time it takes to provision, and that when volumes are shared on the host this will create 1000 directories, I would ask for a flag to specify the number of PVs, with perhaps a minimum of 10. Otherwise it's not friendly.
When I talk about risk, my opinion may be clouded by what may have been incorrect behaviour seen in early versions of oc cluster up. I will have to test again, but my recollection was that when it would recycle the directories, it wouldn't always delete everything that was in them. For example, if the application had deliberately made files read-only, then the cleanup process wouldn't remove them. This meant those files carried over to the next use of the directory as a volume and could cause problems.

As far as use cases for oc cluster up go, if the official stance is that there is only interest in the specific use case engineering has for testing, then it shouldn't be promoted to end users at all as something they can use. To me that is the wrong attitude, though. There should be acceptance that other people will want to use this differently from what you have in mind, and whatever you do to make it work for your use case, you simply need to keep in mind that what other people do will be different, and not do things that specifically block other use cases or make them much harder.

An example of this is being flexible and changing things when necessary, as was done with the developer account logging in with a fixed password on every up. That was relaxed, which meant we could set up the identity provider as htpasswd and use real passwords. So always make things optional, easily disabled, or reconfigurable. Don't force a particular way of thinking or doing things that then makes it unusable to others.

If this is seen as just too hard, then a variant of the oc cluster concept should be broken out as a distinct project and the community allowed to guide its direction as to what it should do. To that end, it perhaps should be a separate project under the MiniShift umbrella, given that MiniShift will likely also want to drive this down a path more useful to developers of applications, rather than testers.
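For reference, the htpasswd arrangement mentioned above is the standard identity provider stanza in the master configuration; the provider name and file path below are assumptions, not values from this PR:

```yaml
oauthConfig:
  identityProviders:
  - name: htpasswd_auth                  # illustrative provider name
    challenge: true
    login: true
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: HTPasswdPasswordIdentityProvider
      file: /var/lib/origin/openshift.local.config/master/users.htpasswd  # assumed path
```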
The challenge is we don't want oc cluster up to turn into an alternative cluster management tool. We don't want to dilute our dev or test resources in that way. If you need a highly configurable/manageable cluster, that's what ansible is for; possibly it's what the CDK/minishift are for too.
We aren't expecting all the separate things we are doing in wrappers to be pushed down into oc cluster up. We are happy simply if things are done in oc cluster up in a way which still enables us to do extra things, and doesn't block us. MiniShift will no doubt have the same concerns. So I wouldn't necessarily be rushing to add extra features just because we have them in our wrappers.
For v1.4.0-rc1 on MacOS X, I have also added a
Force-pushed e885542 to 1fa88b8
I've now updated the job to be restartable and changed the name of the host dir flag to be more consistent with the other hostdir flags.
@csrwng I still think 1000 PVs is quite a lot.
@jorgemoralespou what do we gain by reducing the number?
Force-pushed 1fa88b8 to b37416f
Usability:
- "oc get pv" gives me a list that is too long (even with 100).
- "ls $host-pv-dir" gives me a list that is too long, and in my case with multiple profiles, I end up creating 1000 directories per profile.
- Provisioning would be much faster. My laptop fan is making noise while creating these PVs.

Also, I would set the access modes to at least RWO,RWX. Right now if I claim for a RWX it will remain pending :-(
Force-pushed b37416f to b255e6f
ok, now creating 100 PVs with all modes (RWO,RWX,ROX)
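Presumably the generated volumes now advertise every mode along these lines (a sketch of the spec fragment, not the merged template), which lets RWX and ROX claims bind as well as RWO:

```yaml
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  - ReadOnlyMany
```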
Tested on Mac, Win, Linux
lgtm
[merge]
I assume this will be in 1.5 and not in 1.4. Is that correct?
@jorgemoralespou yes
Flake #12530
Evaluated for origin merge up to b255e6f
continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/12965/) (Base Commit: eaa36ed) (Image: devenv-rhel7_5703)
Adds persistence support to 'cluster up'