UPSTREAM: <carry>: pods in openshift-* namespace can be marked critical #19104
Conversation
/hold pending review.
@derekwaynecarr where are we using
@derekwaynecarr is it for avoiding eviction by the kubelet when an eviction threshold is reached? Or is there any other reason? Just trying to understand.
Using
@derekwaynecarr it is not clear to me why we need this for a static pod based deployment topology?
@aveshagarwal see details here: kubernetes/kubernetes#40573. Static pods, once evicted, are never restarted, so we need to ensure they are never evicted.
Wondering what would evict those pods in the context we are discussing here? If the eviction happens due to reaching any eviction thresholds on a machine, we should make sure such conditions don't occur on the machines where infra static pods are running (should not be difficult if we don't allow any other pods on those machines). If it's about drain, drain by default does not evict static pods unless --force is used.
There will be lots of workloads on masters with varying CPU load. It is not safe to assume that eviction will not happen on masters.
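For readers following along, here is a minimal sketch of what the critical-pod marking discussed above looks like; the pod name and namespace below are illustrative assumptions, not taken from this PR:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Alpha annotation the kubelet consults (behind the
// ExperimentalCriticalPodAnnotation feature gate) to exempt a pod from eviction.
const criticalPodAnnotation = "scheduler.alpha.kubernetes.io/critical-pod"

func main() {
	// Illustrative static-pod-style manifest for a control-plane component,
	// marked critical so the kubelet does not evict it under resource pressure.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "apiserver",           // hypothetical name
			Namespace:   "openshift-apiserver", // hypothetical namespace
			Annotations: map[string]string{criticalPodAnnotation: ""},
		},
	}
	_, marked := pod.Annotations[criticalPodAnnotation]
	fmt.Printf("pod %s/%s marked critical: %v\n", pod.Namespace, pod.Name, marked)
}
```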
@aveshagarwal independent of masters, openshift-logging would benefit as well.
I agree with that. Was just wondering if there is an option to not use For
This doesn't build right now.
Force-pushed 183ee02 to 04cf5c8
current list of critical pods:
should be critical (but do not appear to be as of 4/20)
up for debate:
fyi @smarterclayton
Force-pushed 04cf5c8 to 3ac1753
I think this should pass all checkers now...
Force-pushed 3ac1753 to ee1fddd
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: derekwaynecarr. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Force-pushed 63e003e to b4fd35b
/lgtm. Will drop this once we get priority in 1.11.
/test gcp
/retest Please review the full test history for this PR and help us cut down flakes.
2 similar comments
/retest Please review the full test history for this PR and help us cut down flakes.
/retest Please review the full test history for this PR and help us cut down flakes.
@@ -287,7 +287,7 @@ var defaultKubernetesFeatureGates = map[utilfeature.Feature]utilfeature.FeatureSpec{
 	AppArmor: {Default: true, PreRelease: utilfeature.Beta},
 	DynamicKubeletConfig: {Default: false, PreRelease: utilfeature.Alpha},
 	ExperimentalHostUserNamespaceDefaultingGate: {Default: false, PreRelease: utilfeature.Beta},
-	ExperimentalCriticalPodAnnotation: {Default: false, PreRelease: utilfeature.Alpha},
+	ExperimentalCriticalPodAnnotation: {Default: true, PreRelease: utilfeature.Alpha},
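A rough sketch of why flipping this default changes behavior out of the box, assuming the kubelet guards its critical-pod handling with the standard feature-gate helper from k8s.io/apiserver; this is illustrative, not code from this PR:

```go
package main

import (
	"fmt"

	utilfeature "k8s.io/apiserver/pkg/util/feature"
	"k8s.io/kubernetes/pkg/features"
)

func main() {
	// With the carry flipping the default to true, this check passes without
	// any extra kubelet configuration, so the critical-pod annotation is honored.
	if utilfeature.DefaultFeatureGate.Enabled(features.ExperimentalCriticalPodAnnotation) {
		fmt.Println("critical-pod annotation will be honored")
	}
}
```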
I know that I am responding late, but yesterday I realized that a side effect of enabling this feature is that the kubelet will start preempting pods (https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/preemption/preemption.go) when there is a resource crunch on the node. This happens when the kubelet starts, or when static pods are admitted onto the node, and there is a very good chance that some 'high priority' (but not critical) pods get evicted from the node, since the preemption logic there is not priority-aware (except for critical pods).
If we pursue the static pod based deployment topology, we should extend the critical pod support to openshift-* namespaces pending priority/preemption graduation in Kubernetes.
Input from @smarterclayton @liggitt @sjenning @aveshagarwal is appreciated.
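To make that concrete, a minimal sketch of the shape such a check could take; the helper name isCriticalCandidate is hypothetical (not the actual function in pkg/kubelet/types), and only the namespace-prefix idea comes from this discussion:

```go
package main

import (
	"fmt"
	"strings"
)

// Alpha annotation consulted by the kubelet for critical pods.
const criticalPodAnnotation = "scheduler.alpha.kubernetes.io/critical-pod"

// isCriticalCandidate is a hypothetical helper sketching the carried check:
// upstream restricts the annotation to kube-system; the carry's idea is to
// additionally honor it in any openshift-* namespace.
func isCriticalCandidate(namespace string, annotations map[string]string) bool {
	if namespace != "kube-system" && !strings.HasPrefix(namespace, "openshift-") {
		return false
	}
	_, marked := annotations[criticalPodAnnotation]
	return marked
}

func main() {
	ann := map[string]string{criticalPodAnnotation: ""}
	fmt.Println(isCriticalCandidate("openshift-sdn", ann)) // true
	fmt.Println(isCriticalCandidate("default", ann))       // false
}
```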