deploy: add conditions when creating replication controllers #11412
Conversation
[test]
#11353 [test]
// If the condition that we are about to add already exists and has the same status
// then all we need is to update its LastProbe time.
currentCond := GetDeploymentCondition(*status, condition.Type)
if currentCond != nil && currentCond.Status == condition.Status && currentCond.Reason == condition.Reason {
Doesn't this mean we are updating all deployments frequently? How many extra writes does this add per deployment? How frequently will we write now (since all condition changes now result in writes to etcd)?
For Progressing conditions that's fine, because once we transition to a terminal state we don't update them anymore. I tested locally by adding some sleeps to simulate load in the queue, and as I expected we hotloop between updates for the Available condition because we always update its probe time. I now think lastProbeTime is a bad idea for deployment conditions. For Available it's obvious. Progressing is trickier because 1) this condition should originally be added in the rolling updater, but having it in the controller is a nice abstraction (the DC is reconciled because of an RC update, and the condition is updated as part of the status in the controller), and 2) since we update it in the controller, we will hotloop for as long as the deployment is progressing, and I can imagine long-running deployments saturating the queue. That being said, I am removing lastProbeTime from the equation.
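To illustrate the fix being discussed, here is a minimal sketch of a condition setter that skips the write entirely when the incoming condition has the same status and reason as the existing one. The types and function names are simplified stand-ins for the real deployapi types, not the actual implementation.

```go
package main

import (
	"fmt"
	"time"
)

// DeploymentCondition is a simplified stand-in for the deployapi type;
// note there is no LastProbeTime field, per the discussion above.
type DeploymentCondition struct {
	Type               string
	Status             string
	Reason             string
	Message            string
	LastTransitionTime time.Time
}

// GetDeploymentCondition returns the condition with the given type, or nil.
func GetDeploymentCondition(conds []DeploymentCondition, condType string) *DeploymentCondition {
	for i := range conds {
		if conds[i].Type == condType {
			return &conds[i]
		}
	}
	return nil
}

// SetDeploymentCondition adds or updates a condition. If an existing
// condition already has the same status and reason, it returns without
// touching the slice, so the controller does not issue a status write
// (and hence an etcd write) on every sync.
func SetDeploymentCondition(conds *[]DeploymentCondition, cond DeploymentCondition) {
	current := GetDeploymentCondition(*conds, cond.Type)
	if current != nil && current.Status == cond.Status && current.Reason == cond.Reason {
		return
	}
	// Replace the old condition of this type, if any.
	filtered := (*conds)[:0:0]
	for _, c := range *conds {
		if c.Type != cond.Type {
			filtered = append(filtered, c)
		}
	}
	*conds = append(filtered, cond)
}

func main() {
	conds := []DeploymentCondition{}
	SetDeploymentCondition(&conds, DeploymentCondition{Type: "Available", Status: "True"})
	// A second identical call is a no-op: no duplicate, no spurious update.
	SetDeploymentCondition(&conds, DeploymentCondition{Type: "Available", Status: "True"})
	fmt.Println(len(conds))
}
```

With lastProbeTime gone, two identical calls leave the slice untouched, which is what breaks the hotloop described above.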
selector := labels.Set(config.Spec.Selector).AsSelector()
// TODO: Replace with using rc.status.availableReplicas that comes with the next rebase.
open an issue for this
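For context on what the TODO replaces: until rc.status.availableReplicas is available, the controller counts available pods by matching the config's selector against pod labels. The sketch below uses a toy map-based selector in place of labels.Set(...).AsSelector(); all names here are hypothetical, not the real API.

```go
package main

import "fmt"

// Selector is a toy stand-in for the label selector built by
// labels.Set(config.Spec.Selector).AsSelector(): it matches a pod whose
// labels contain every key/value pair in the selector.
type Selector map[string]string

func (s Selector) Matches(podLabels map[string]string) bool {
	for k, v := range s {
		if podLabels[k] != v {
			return false
		}
	}
	return true
}

// availableBySelector counts ready pods matched by the selector -- the
// interim approach the TODO wants to replace with a direct read of
// rc.status.availableReplicas once the rebase lands.
func availableBySelector(sel Selector, podLabels []map[string]string, ready []bool) int {
	n := 0
	for i, p := range podLabels {
		if sel.Matches(p) && ready[i] {
			n++
		}
	}
	return n
}

func main() {
	sel := Selector{"app": "frontend"}
	pods := []map[string]string{
		{"app": "frontend"}, // matches, ready
		{"app": "backend"},  // no match
		{"app": "frontend"}, // matches, not ready
	}
	fmt.Println(availableBySelector(sel, pods, []bool{true, true, false})) // 1
}
```

Reading the count from the RC status instead removes this per-sync pod listing and label matching, which is why the TODO exists.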
LGTM
continuous-integration/openshift-jenkins/test Waiting: Determining build queue position
Pushed two new small updates:
PTAL
#11442 [test]
msg := fmt.Sprintf("Replication controller %q has completed progressing", latestRC.Name)
condition := deployutil.NewDeploymentCondition(deployapi.DeploymentProgressing, kapi.ConditionTrue, deployutil.NewRcAvailableReason, msg)
deployutil.SetDeploymentCondition(newStatus, *condition)
}
} else if config.Status.LatestVersion == 0 && deployutil.HasImageChangeTrigger(&config) {
condition := deployutil.NewDeploymentCondition(deployapi.DeploymentProgressing, kapi.ConditionUnknown, "", "Waiting on an image update")
If I have a configChange trigger then it can also be waiting for a config update, right? (so I can oc set image). I don't think we should set this condition.
We don't wait for config changes; the trigger controller bumps latestVersion as soon as it notices a config change. We have this message in oc status and I figured it would be more discoverable to have it here, but I should put it in a separate PR.
That being said, I am splitting it out of this PR and will open a follow-up.
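The branch under discussion can be summarized as a small decision function: a completed latest RC yields Progressing=True, while a never-deployed config waiting on an image trigger would have yielded Progressing=Unknown (the part being split into the follow-up). This is a hedged sketch with simplified hypothetical types, not the controller code itself.

```go
package main

import "fmt"

// DeploymentConfig is a hypothetical, pared-down stand-in for the
// deployapi type; the real controller inspects much more state.
type DeploymentConfig struct {
	LatestVersion   int64
	HasImageTrigger bool
}

// progressingCondition mirrors the branch quoted above: it returns the
// status and message for the Progressing condition, or empty strings
// when no condition applies.
func progressingCondition(config DeploymentConfig, latestRCName string, latestRCComplete bool) (status, message string) {
	switch {
	case latestRCComplete:
		// Matches the fmt.Sprintf in the diff hunk above.
		return "True", fmt.Sprintf("Replication controller %q has completed progressing", latestRCName)
	case config.LatestVersion == 0 && config.HasImageTrigger:
		// This arm is the one being removed to a follow-up PR.
		return "Unknown", "Waiting on an image update"
	default:
		return "", ""
	}
}

func main() {
	status, msg := progressingCondition(DeploymentConfig{LatestVersion: 1}, "frontend-1", true)
	fmt.Println(status, msg)
}
```

Keeping this logic in the controller (rather than the rolling updater) is the abstraction trade-off discussed earlier in the thread.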
Adds conditions to the deployment config when a replication controller is created, or when an error occurs while trying to create one.
LGTM
[merge]
#11499 [test]
Evaluated for origin test up to e343373
continuous-integration/openshift-jenkins/test FAILURE (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/10465/) (Base Commit: 11a00ce)
Evaluated for origin merge up to e343373
continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/10467/) (Base Commit: 47fd7d6)
@mfojtik @smarterclayton
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1386018
Part of https://trello.com/c/OIggCmzo/699-5-deployments-downstream-conditions