
UPSTREAM: 30145: Add PVC storage to Limit Range #11396

Merged: 1 commit, Oct 22, 2016

Conversation

@markturansky (Member Author)

@pweil- @liggitt PTAL? Not sure who to ping for this cherry-pick.

Docs added here: openshift/openshift-docs#3084

@pweil- commented Oct 20, 2016

@markturansky since this is going into Kube 1.5 and 3.4 is based on 1.4, is there justification for the merge? I.e., is this a critical feature for 3.4?

@smarterclayton thoughts on the above? At this point I think we should be holding off on cherry-picks as much as possible.

@markturansky (Member Author)

Our Dedicated offering can get by with ClusterResourceQuota, which quotas storage by project; that works at Dedicated's limited scale.

In Online, CRQ won't scale as needed, so we can't use it there. We need to limit the size of individual PVCs in a project in Online; we can already limit the count. Together, those limits bound overall consumption.

@abhgupta please confirm
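For concreteness, here is a minimal sketch of the per-project LimitRange object this feature enables. It is not part of this PR: the name, namespace, and sizes are illustrative, and the import paths assume the Kubernetes tree vendored at this time.

```go
// A sketch, not part of this PR: the kind of LimitRange an admin could
// create per project once PVC storage limits are supported.
package main

import (
	"fmt"

	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/api/resource"
)

func main() {
	lr := &api.LimitRange{
		ObjectMeta: api.ObjectMeta{Name: "pvc-storage-limits", Namespace: "online-user-project"},
		Spec: api.LimitRangeSpec{
			Limits: []api.LimitRangeItem{
				{
					// Bound the storage requested by any single PVC in the project.
					Type: api.LimitTypePersistentVolumeClaim,
					Min:  api.ResourceList{api.ResourceStorage: resource.MustParse("1Gi")},
					Max:  api.ResourceList{api.ResourceStorage: resource.MustParse("10Gi")},
				},
			},
		},
	}

	minQty := lr.Spec.Limits[0].Min[api.ResourceStorage]
	maxQty := lr.Spec.Limits[0].Max[api.ResourceStorage]
	fmt.Printf("per-PVC storage bounds: min=%s max=%s\n", minQty.String(), maxQty.String())
}
```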

@markturansky (Member Author) commented Oct 20, 2016

Also worth noting: the Ops teams really want multi-zone volumes in Online. 3.3 supports multi-zone, but we can't use it because our custom provisioner is what provides our current storage consumption limits. So we're looking to kill the custom provisioner in 3.4, but that requires this cherry-pick.

@abhgupta (Member)

@pweil- this is required for Online DevPreview and the upcoming paid tier that will be based on OCP 3.4. We have been hit with multiple issues maintaining the custom dynamic provisioner in Online and really need to get rid of it and just leverage the product to provide the quota/limit restrictions.

If this does not become part of OCP 3.4, then we will need to rebase our custom provisioner to ensure it works well with the stock PVC controllers in OCP 3.4, which poses additional challenges and delays on our end.

@derekwaynecarr self-assigned this Oct 21, 2016
@@ -2544,6 +2544,17 @@ func ValidateLimitRange(limitRange *api.LimitRange) field.ErrorList {
}
}

if limit.Type == api.LimitTypePersistentVolumeClaim {
_, minQuantityFound := limit.Min[api.ResourceStorage]
@derekwaynecarr (Member)

@markturansky - I must have missed this in the upstream review. I should be able to set just a min and not a max, or a max and not a min, disagree?

@markturansky (Member Author)

Allowing either to be empty opens the limit range to whatever the underlying infrastructure wants to enforce. I don't know if that's good or bad, but AWS, for example, would enforce a 1Gi minimum and some large maximum.

0 is an effective "no min".

@derekwaynecarr (Member)

This is a confusing special case and I would prefer to get rid of it.

Can we get rid of it here and make a separate PR to upstream that I can merge?

@derekwaynecarr (Member)

We can loosen the validation upstream and merge this now.
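To make the suggested loosening concrete, the sketch below shows a check that requires at least one of min or max storage on a PersistentVolumeClaim limit rather than both. The helper name and error wording are illustrative, not the merged upstream diff, and it assumes the api and util/validation/field packages already imported by validation.go.

```go
// Illustrative only; not the merged upstream diff. The loosened rule: a
// PersistentVolumeClaim limit must carry min storage, max storage, or both;
// setting only one of them is no longer an error.
func validatePVCStorageBounds(limit api.LimitRangeItem, fldPath *field.Path) field.ErrorList {
	allErrs := field.ErrorList{}
	if limit.Type != api.LimitTypePersistentVolumeClaim {
		return allErrs
	}
	_, minQuantityFound := limit.Min[api.ResourceStorage]
	_, maxQuantityFound := limit.Max[api.ResourceStorage]
	if !minQuantityFound && !maxQuantityFound {
		allErrs = append(allErrs, field.Required(fldPath,
			"either a minimum or a maximum storage value is required, but neither was provided"))
	}
	return allErrs
}
```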

@derekwaynecarr (Member)

[merge]

@abhgupta (Member)

re-[merge]. The failure in TestRegistryClientAPIv1 seems like a fluke.

@openshift-bot (Contributor)

Evaluated for origin merge up to c0b3b9d

@openshift-bot (Contributor)

[Test]ing while waiting on the merge queue

@openshift-bot (Contributor)

Evaluated for origin test up to c0b3b9d

@openshift-bot (Contributor) commented Oct 21, 2016

continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/10451/) (Base Commit: bdf48b1) (Image: devenv-rhel7_5218)

@openshift-bot (Contributor)

continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/10450/) (Base Commit: 2016d68)

@derekwaynecarr (Member)

FYI - 3.4 has the ability to quota cumulative requests.storage and not just the number of claims, but I assume you want this to control the size of an individual claim. If it lets us get rid of other tech debt, I am fine merging this now. I agree that we should challenge all things like this in the future, though.
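For contrast with the per-claim bounds above, the sketch below shows the 3.4-style namespace quota being referred to: a cap on cumulative requests.storage and on claim count. It is illustrative only; the constant names are assumed to match the Kubernetes API of this era, and it reuses the imports from the earlier LimitRange sketch.

```go
// Sketch only: caps a project's aggregate storage consumption, while a
// LimitRange bounds each individual claim. Constant names are an assumption
// about the API of this era; values are made up.
rq := &api.ResourceQuota{
	ObjectMeta: api.ObjectMeta{Name: "storage-quota", Namespace: "online-user-project"},
	Spec: api.ResourceQuotaSpec{
		Hard: api.ResourceList{
			api.ResourceRequestsStorage:        resource.MustParse("100Gi"), // cumulative requests.storage
			api.ResourcePersistentVolumeClaims: resource.MustParse("10"),    // claim count
		},
	},
}
_ = rq
```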


k8s-github-robot pushed a commit to kubernetes/kubernetes that referenced this pull request Nov 9, 2016
…lidation

Automatic merge from submit-queue

Loosened validation on PVC LimitRanger

This PR loosens validation on the PVC LimitRanger so that at least one of Min or Max is required, rather than both.

Per @derekwaynecarr  openshift/origin#11396 (comment)
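As a usage note, once the upstream validation is loosened, an item like the following becomes valid: a ceiling with no floor, leaving the minimum to whatever the infrastructure enforces. The value is illustrative and the snippet assumes the same imports as the first sketch.

```go
// Illustrative: valid once the validation is loosened, max-only, no min.
maxOnly := api.LimitRangeItem{
	Type: api.LimitTypePersistentVolumeClaim,
	Max:  api.ResourceList{api.ResourceStorage: resource.MustParse("10Gi")},
}
_ = maxOnly // the floor is left to whatever the underlying storage enforces
```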