
wire in aggregator #14285

Merged
merged 7 commits into from
May 28, 2017

Conversation

deads2k
Contributor

@deads2k deads2k commented May 22, 2017

I think there are more picks that are going to be required and I'm sorting out some bad behavior around hangs, but this is the direction I'm going in.

@openshift/api-review for the aggregator config commit.

Right now, I've only tested this manually (no test image is yet pushed and it requires custom master-config.yaml and active nodes).

  1. Generated front-proxy-ca cert/key and aggregator-front-proxy client cert/key. I did this with
oadm ca create-signer-cert --cert=openshift.local.config/master/front-proxy-ca.crt --key=openshift.local.config/master/front-proxy-ca.key
oadm create-api-client-config --certificate-authority=openshift.local.config/master/front-proxy-ca.crt --signer-cert=openshift.local.config/master/front-proxy-ca.crt --signer-key=openshift.local.config/master/front-proxy-ca.key --user aggregator-front-proxy --client-dir=openshift.local.config/master

and then deleted everything but the files I wanted.
2. Generated a fresh master-config.yaml.
3. Replaced the stanzas in master-config.yaml with

aggregatorConfig:
  proxyClientInfo:
    certFile: aggregator-front-proxy.crt
    keyFile: aggregator-front-proxy.key
authConfig:
  requestHeader:
    clientCA: front-proxy-ca.crt
    clientCommonNames: 
    - aggregator-front-proxy
    usernameHeaders:
    - X-Remote-User
    groupHeaders:
    - X-Remote-Group
    extraHeaderPrefixes:
    - X-Remote-Extra-
  4. sudo $(which openshift) start --master-config openshift.local.config/master/master-config.yaml --node-config openshift.local.config/node-deads-dev-01/node-config.yaml
  5. Created the image to use by cloning kube and building: nice make WHAT=vendor/k8s.io/sample-apiserver/ && vendor/k8s.io/sample-apiserver/hack/build-image.sh
  6. oc new-project wardle - new project to start with
  7. oadm policy add-scc-to-user privileged -z apiserver - etcd doesn't like running as non-root and I didn't feel like trying to deal with it
  8. oc create policybinding kube-system -n kube-system
  9. oc create -f test/extended/testdata/aggregator/ - create the resources
  10. oc get flunders - proof!

@deads2k
Contributor Author

deads2k commented May 22, 2017

[test]

@deads2k
Contributor Author

deads2k commented May 22, 2017

Error from server (InternalError): an error on the server ("Error: 'net/http: invalid header field name \"X-Remote-Extra-authorization.openshift.io/scopes\"'\nTrying to reach: 'https://172.30.134.150/apis/wardle.k8s.io/v1alpha1'") has prevented the request from succeeding

Well, that's a problem. @liggitt got suggestions?

@deads2k
Contributor Author

deads2k commented May 22, 2017

Well, it can proxy certificate-based users at the moment, so it's a strict improvement. Let's get some reviews on this much.

@deads2k changed the title from [wip] wire in aggregator to wire in aggregator May 22, 2017
@deads2k
Contributor Author

deads2k commented May 22, 2017

@bparees you were asking.

@liggitt
Contributor

liggitt commented May 23, 2017

Error from server (InternalError): an error on the server ("Error: 'net/http: invalid header field name "X-Remote-Extra-authorization.openshift.io/scopes"'\nTrying to reach: 'https://172.30.134.150/apis/wardle.k8s.io/v1alpha1'") has prevented the request from succeeding

Well, that's a problem. @liggitt got suggestions?

...

@deads2k
Contributor Author

deads2k commented May 23, 2017

Well, that's a problem. @liggitt got suggestions?

Yeah. Technically it's never worked, so changing the name can work. scopes.authorization.openshift.io, I guess.

@deads2k
Contributor Author

deads2k commented May 23, 2017

I won't claim it's turnkey, but I've added directions if @bparees or @pmorie want to try it.

@bparees
Contributor

bparees commented May 23, 2017

Jessica actually gave me a path to console SC usage that doesn't require the aggregator after all, so I'm going to try that first. I'm still bleeding from my experience as an API server guinea pig.

@deads2k
Contributor Author

deads2k commented May 23, 2017

@sdodson I'm guessing that @pmorie and @derekwaynecarr have already engaged you regarding how to wire up the service catalog for its beta installation, but the instructions I've included in the description show the particular fields used for wiring the API server to enable the aggregator to route traffic to the service catalog.

@sdodson
Member

sdodson commented May 23, 2017

@ewolinetz and @jpeeler are collaborating (or about to) on that work.

@liggitt
Contributor

liggitt commented May 24, 2017

add aggregator config commit LGTM

// TODO this is probably an indication that we need explicit and precise control over the discovery chain
// but for now its a special case
// apps has to come last for compatibility with 1.5 kubectl clients
if apiService.Spec.Group == "apps" {
Contributor

just to confirm, this won't conflict with apps.openshift.io?

Contributor

nvmd.


aggregatorConfig, err := c.createAggregatorConfig(*kc.Master.GenericConfig)
if err != nil {
glog.Fatalf("Failed to launch master: %v", err)
Contributor

Failed to create aggregator config

@@ -22,7 +22,7 @@ const (
VerbAll = "*"
NonResourceAll = "*"

ScopesKey = "authorization.openshift.io/scopes"
ScopesKey = "scopes.authorization.openshift.io"
Contributor

do we need to update/migrate something after this change?

Contributor Author

do we need to update/migrate something after this change?

No. I reasoned through this with Jordan. It's merging in a separate pull.

@mfojtik
Contributor

mfojtik commented May 24, 2017

LGTM, but will prefer @enj to have second look before merging.

@deads2k
Contributor Author

deads2k commented May 24, 2017

@smarterclayton I seem to be failing on

===== Verifying Generated Bindata =====
Generating bindata...
FAILURE: Generation of fresh bindata failed:
error: extended bindata is 656537 bytes, reduce the size of the import

Any idea what's up?

@deads2k
Contributor Author

deads2k commented May 24, 2017

comments addressed [merge]

Contributor

@enj enj left a comment

Minor comments.

if err != nil {
t.Fatalf("error starting server: %#v", err)
}
kubeConfigFile := masterConfig.MasterClients.OpenShiftLoopbackKubeConfig
Contributor

I assume you will drop this commit so the comment gets added.

@@ -22,7 +22,7 @@ const (
VerbAll = "*"
NonResourceAll = "*"

ScopesKey = "authorization.openshift.io/scopes"
ScopesKey = "scopes.authorization.openshift.io"
Contributor

er... so based on comments this never worked? How did we not notice?

go apiserver.Run(utilwait.NeverStop)

// Attempt to verify the server came up for 20 seconds (100 tries * 100ms, 100ms timeout per try)
cmdutil.WaitForSuccessfulDial(c.TLS, c.Options.ServingInfo.BindNetwork, c.Options.ServingInfo.BindAddress, 100*time.Millisecond, 100*time.Millisecond, 100)
Contributor

@enj enj May 24, 2017

Do we not care if we fail here?

Contributor Author

Do we not care if we fail here?

pre-existing. Not really sure what it does

Contributor

pre-existing. Not really sure what it does

Tries a bunch of times and returns an error on failure (which we ignore and just keep going).

if err != nil {
glog.Fatalf("Failed to create aggregator config: %v", err)
}
aggregatorServer, err := createAggregatorServer(aggregatorConfig, apiserver.GenericAPIServer, kc.Informers.InternalKubernetesInformers(), stopCh)
Contributor

Do you plan to use kc.Informers.InternalKubernetesInformers() later?

Contributor Author

Do you plan to use kc.Informers.InternalKubernetesInformers() later?

it gets started elsewhere to drive this.

Contributor

it gets started elsewhere to drive this.

I do not understand what you mean. All I was saying is that createAggregatorServer does not use the sharedInformers parameter, so why bother passing it?

@deads2k
Contributor Author

deads2k commented May 26, 2017

Looks like I have some verifies to sort through.

@smarterclayton
Contributor

This is a blocker for the 3.6 release, so I'm putting this at [severity:blocker]. Will review.

@deads2k
Contributor Author

deads2k commented May 26, 2017

Rebased and fixed up the verifies, I think.

@deads2k
Contributor Author

deads2k commented May 26, 2017

@smarterclayton I seem to be unable to make protobuf verification happy. Can you try and see?

@openshift-bot
Contributor

Evaluated for origin test up to c57c9c8

@openshift-bot
Contributor

continuous-integration/openshift-jenkins/test FAILURE (https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin/1808/) (Base Commit: 69de7d3)

@smarterclayton
Contributor

Flake from journalctl output.

@openshift-bot
Contributor

Evaluated for origin merge up to c57c9c8

@openshift-bot
Contributor

openshift-bot commented May 28, 2017

continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/merge_pull_request_origin/832/) (Base Commit: 63fe34a) (Extended Tests: blocker) (Image: devenv-rhel7_6280)
