[kube|etcd|origin] Update plugins for static pod deployments #1383
Conversation
sos/plugins/etcd.py (Outdated)
if path.exists('/etc/origin/node/pods/etcd.yaml'):
    self.cmd = 'master-exec etcd etcd etcdctl'
else:
    self.cmd = 'etcdctl'
In theory this should just work; however, some RHEL versions use etcd v2 and others use etcd v3, and this command relies on an environment variable to determine which subcommand (etcdctl2 or etcdctl3) to use.

I wonder if/how we could better determine which db schema etcd is using (outside of these commands) so we know which command to properly call? Without that, we might need to try calling both to see which one works.
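One possible probe, sketched outside of sos (the etcdctl2/etcdctl3 wrapper names come from the discussion above; the specific health subcommands are assumptions about the two API versions, not anything from this PR):

import subprocess

def detect_etcdctl():
    # Try the v3 wrapper first, then fall back to v2; whichever
    # health check succeeds tells us which schema is in use.
    probes = [
        ('etcdctl3', ['etcdctl3', 'endpoint', 'health']),
        ('etcdctl2', ['etcdctl2', 'cluster-health']),
    ]
    for name, cmd in probes:
        try:
            if subprocess.run(cmd, capture_output=True,
                              timeout=15).returncode == 0:
                return name
        except (OSError, subprocess.TimeoutExpired):
            continue
    return None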
I don't think we need to version-control the command we use, since the etcd package drops /usr/bin/etcdctl regardless of which version is installed. The package is responsible for this, so sos shouldn't attempt to circumvent it, even in an attempt to ensure versioning.
The self.cmd member seems to be used in just one place, in exactly one method (setup()): let's ditch it and just use a local variable instead? etcd_cmd would be a bit more readable, and no great burden since we lose the self. prefix, for example.
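A minimal sketch of the suggested refactor (surrounding plugin code elided):

def setup(self):
    # Local variable instead of a self.cmd member, since the value
    # is only consumed inside setup().
    if path.exists('/etc/origin/node/pods/etcd.yaml'):
        etcd_cmd = 'master-exec etcd etcd etcdctl'
    else:
        etcd_cmd = 'etcdctl'
    # ...later calls in setup() would interpolate etcd_cmd...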
ack.
Generally looks good but a couple of small points need some attention before we can merge this.
-    ]):
-        return True
-    return False
+    return any([path.exists(f) for f in self.files])
The logic here looks a bit weird to me: we have a plugin files list which, if any of the files exist, causes the plugin to run (the plugin will also run if those files do not exist but one or more package matches are present).

Then we have a separate check_is_master() method that used to check two specific "master" paths (the API server key and the master-config YML file). Now it just does the same thing the default check_enabled() does for the files list: it returns True if any of the Kubernetes.files paths exist - so if the plugin is running (at least from a file trigger), then check_is_master() is always True.

Is there something subtle I've missed here? If the logic is correct then we should be able to dispense with the calls to check_is_master(); if it's not, then it seems like there's some false sharing between the plugin files list and the list of files that check_is_master() should be operating on.
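To illustrate the overlap being described (a minimal sketch; the class layout and the path are assumptions, not the actual sos code):

from os import path
from sos.plugins import Plugin

class Kubernetes(Plugin):
    # If any of these exist, the default check_enabled() already
    # enables the plugin via a file trigger.
    files = ('/etc/origin/master/master-config.yaml',)  # hypothetical

    def check_is_master(self):
        # Same test again, so this is True whenever the plugin was
        # enabled by a file trigger.
        return any([path.exists(f) for f in self.files])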
Right, the files list is strictly locations that a kubernetes master would have that a worker node would not - there are no files (at least that I'm aware of) that a node would have that a master wouldn't. For worker node collection we're relying on just the package check.

The reason for this is that we still want to collect the kube* journals, but literally everything else in the plugin would not be available on the worker nodes.
The 3.10 version of OCP changes the deployment configurations for etcd and kubernetes components, and additionally changes the way the etcdctl command is called when running in a static pod. Update these plugins to support this new deployment style. Signed-off-by: Jake Hunsaker <[email protected]>
Versions of kubernetes after 1.5 use --kubeconfig instead of --config to specify a configuration file to use for kubectl commands. Update the kubernetes plugin to use the proper syntax. Signed-off-by: Jake Hunsaker <[email protected]>
Adds collection of 'oc adm top' output for images and imagestreams. Resolves: sosreport#1165 Signed-off-by: Jake Hunsaker <[email protected]>
Per Origin 3.10 release notes the static pod configuration for Origin/OCP changes the way we need to collect etcd information and potentially how the kubernetes plugin should be enabled.
Also, correct the kubernetes syntax for specifying a config file override.
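A minimal sketch of the corrected invocation described in the commit message above (the kubeconfig path here is a hypothetical example, not the plugin's actual path):

kube_cmd = 'kubectl'
if path.exists('/etc/kubernetes/admin.conf'):
    # kubernetes after 1.5 expects --kubeconfig rather than --config
    kube_cmd += ' --kubeconfig=/etc/kubernetes/admin.conf'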
Finally, resolve #1165 by including oc adm top output.
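The new collection amounts to something like the following inside the plugin's setup() (a sketch; add_cmd_output() is the standard sos Plugin helper for capturing command output, and the two commands come from the commit message above):

self.add_cmd_output([
    'oc adm top images',
    'oc adm top imagestreams',
])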