
OpenShift 3.10 Release Notes Tracker #8651

Closed
liggitt opened this issue Apr 9, 2018 · 35 comments

@liggitt

liggitt commented Apr 9, 2018

No description provided.

@liggitt
Author

liggitt commented Apr 9, 2018

cc @openshift/team-documentation

@liggitt
Author

liggitt commented Apr 9, 2018

Removals:

  • The deprecated -p <POD> flag to oc port-forward is removed. Use oc port-forward pod/<POD> instead
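    For example (the pod name and port numbers below are illustrative):
    $ oc port-forward pod/my-pod 8080:8080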

Deprecations:

  • When enabling or disabling API groups with the --runtime-config flag in kubernetesMasterConfig.apiServerArguments, specify <group>/<version> without the apis/ prefix (in future releases, the apis/ prefix will be disallowed). For example:
    kubernetesMasterConfig:
      apiServerArguments:
        runtime-config:
        - apps/v1beta1=false
        - apps/v1beta2=false
    ...

Changes:

  • The output format of -o name now includes the API group and singular kind. For example:
    $ oc get imagestream/my-image-stream -o name
    imagestream.image.openshift.io/my-image-stream

@ahardin-rh
Contributor

We're going to deprecate web console support for IE 11 in 3.10, to be removed in 3.12. Edge browser will still be supported.

@ghost

ghost commented May 2, 2018

Add information on changes to the local provisioner configuration:

"Adding a new device is semi-automatic. The provisioner periodically checks for new mounts in the configured directories. The administrator needs to create a new subdirectory there, mount a device there, and allow the pods to use the device by applying the SELinux label."

PR: #8899

BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1569911

@sdodson
Member

sdodson commented May 9, 2018

When configuring the OpenStack cloud provider, the node's hostname must match the instance name in OpenStack in order to ensure that the registered name conforms to the DNS-1123 spec. https://bugzilla.redhat.com/show_bug.cgi?id=1566455#c10

@mrogers950
Contributor

Removals: The deprecated openshift-namespace flag has been removed from the oc adm create-bootstrap-policy-file command.
Issue: openshift/origin#15825

@ahardin-rh
Contributor

@sdodson
Member

sdodson commented Jun 6, 2018

#9875 It's no longer possible to configure the dnsIP value of the node, which could previously be set via openshift_dns_ip.

@deads2k
Contributor

deads2k commented Jun 12, 2018

The "openshift-infra" namespace is reserved for system components. It does not run openshift admission plugins for kubernetes resources. SCC admission will not run for pods in the "openshift-infra" namespace. This can cause pods to fail, especially if they make use of persistent volume claims and rely on SCC-assigned uid/fsGroup/supplementalGroup/seLinux settings.

Per @liggitt's comment in openshift/origin#19889 (comment)

@soltysh

soltysh commented Jun 12, 2018

  1. Groups pruning: Add groups pruning section #9453
  2. oc edit now respects KUBE_EDITOR. OC_EDITOR support will be removed in a future release; switch to KUBE_EDITOR.
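     For example (the resource name is illustrative):
     $ KUBE_EDITOR="vim" oc edit dc/my-app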

@gaurav-nelson
Contributor

Service Catalog CLI #9653

The Service Catalog command-line interface (CLI) utility called svcat is available for easier interaction with Service Catalog resources. svcat communicates with the Service Catalog API by using the aggregated API endpoint on an OpenShift cluster.
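
A few illustrative invocations (the class, plan, and instance names are made up; see svcat --help for the full command set):

$ svcat get classes
$ svcat provision my-instance --class my-class --plan default
$ svcat bind my-instance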

@liggitt
Author

liggitt commented Jun 14, 2018

The batch/v2alpha1 API version is no longer served by default. If required, it can be re-enabled with this config:

kubernetesMasterConfig:
  apiServerArguments:
    ...
    runtime-config:
    - apis/batch/v2alpha1=true

@kalexand-rh
Contributor

https://bugzilla.redhat.com/show_bug.cgi?id=1535585 added the openshift_additional_ca parameter, but I don't see it covered in the docs.

@mburke5678
Contributor

Node Problem Detector (Tech Preview)
The Node Problem Detector monitors the health of your nodes by detecting certain problems and reporting them to the API server, where external controllers can take action.

Descheduler
The descheduler moves pods from less desirable nodes to new nodes for various reasons, such as:

  • Node utilization
  • Changed scheduling decisions, for example when taints or labels are added to or removed from nodes and pod/node affinity requirements are no longer satisfied
  • Node failure
  • New nodes added to the cluster

Hugepages support
Applications in a pod can allocate and consume pre-allocated huge pages (a memory page that is larger than 4Ki) to more efficiently manage memory.
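
A minimal pod sketch that requests pre-allocated huge pages (the names, image, and sizes are illustrative; the node must have 2Mi huge pages pre-allocated):

apiVersion: v1
kind: Pod
metadata:
  name: hugepages-demo
spec:
  containers:
  - name: app
    image: example.com/app:latest
    resources:
      limits:
        hugepages-2Mi: 100Mi
        memory: 100Mi
    volumeMounts:
    - name: hugepage
      mountPath: /dev/hugepages
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages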

System services now hosted on pods
The system services (API, controllers, and etcd) previously ran as systemd services on the master. These services now run as static pods in the cluster. As a result, there are new commands to restart these services: master-restart api, master-restart controllers, and master-restart etcd. To view log information for these services, use master-logs api api, master-logs controllers controllers, and master-logs etcd etcd.

New node configuration process
You can modify existing nodes through a configuration map rather than node-config.yaml. The installation creates three node configuration groups, node-config-master, node-config-infra, and node-config-compute, and creates a configuration map for each group. A sync pod watches for changes to these configuration maps. When a change is detected, the sync pod updates the node-config.yaml file on all of the nodes.
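
For example, to change settings for compute nodes you would edit that group's configuration map instead of node-config.yaml on each host (a sketch; the openshift-node namespace is my assumption, confirm where the maps live in your cluster):

$ oc edit configmap node-config-compute -n openshift-node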

@sdodson
Member

sdodson commented Jun 25, 2018

openshift/openshift-ansible#8955
#10396

We should be discouraging reliance on openshift_docker_additional_registries

@bmcelvee
Contributor

Prometheus was updated, so we can update the following from previous release notes:

Prometheus (Technology Preview)
Prometheus remains in Technology Preview and is not for production workloads. Prometheus, AlertManager, and AlertBuffer versions are now updated and node-exporter is now included:

  • prometheus 2.2.1
  • Alertmanager 0.14.0
  • AlertBuffer 0.2
  • node_exporter 0.15.2

You can deploy Prometheus on an OpenShift Container Platform cluster, collect Kubernetes and infrastructure metrics, and get alerts. You can see and query metrics and alerts on the Prometheus web dashboard. Alternatively, you can bring your own Grafana and hook it up to Prometheus.

See Prometheus on OpenShift for more information.

@deads2k
Contributor

deads2k commented Jun 27, 2018

subjectaccessreviews.authorization.openshift.io and resourceaccessreviews.authorization.openshift.io will be cluster-scoped only in a future release. Use localsubjectaccessreviews.authorization.openshift.io and localresourceaccessreviews.authorization.openshift.io if you need namespace-scoped requests.

@bparees
Contributor

bparees commented Jun 28, 2018

The default imagestreams now use pullthrough. This means that the internal registry will pull these images on behalf of the user. If you modify the upstream location of the images in the imagestream, the registry will pull from that location, which means the registry must be able to trust the upstream location. If your upstream location uses a certificate that is not part of the standard system trust store, pulls will fail. In this case, you need to mount the appropriate trust store into the docker-registry pod, under the /etc/tls directory path, to provide the required certificates.

The imageimport process now runs inside a pod (the apiserver pod). Imageimport needs to trust the registries it is importing from. If the source registry uses a certificate that is not signed by a CA in the standard system store, you will need to provide appropriate trust store information to the apiserver pod. This can be done by mounting content into the pod's /etc/tls directory.
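
A rough sketch of mounting additional CA content into the registry pod with oc set volume (the secret name, source file path, mount subdirectory, and default project are assumptions):

$ oc create secret generic registry-upstream-ca --from-file=upstream-ca.crt=/path/to/ca.crt -n default
$ oc set volume dc/docker-registry --add --type=secret --secret-name=registry-upstream-ca --mount-path=/etc/tls/upstream-ca -n default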

@bmcelvee fyi

@soltysh

soltysh commented Jun 29, 2018

This release note has to be in 3.10 to announce a future change:
In 3.11, when invoking oc commands against a local file, you will have to use the --local flag when you don't want the client to contact the server.
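
For example, something like the following would edit the file's content locally without contacting the server (the file name and variable are illustrative):

$ oc set env --local -f deployment.yaml -o yaml MY_VAR=value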

@enj

enj commented Jun 29, 2018

The use of self-hosted versions of GitLab older than v11.1.0 is deprecated as of OCP v3.10. Users of self-hosted versions should upgrade their GitLab installation as soon as possible. No action is required if the hosted version at gitlab.com is used, as that environment always runs the latest version.

@kalexand-rh @openshift/sig-security

@gnufied
Member

gnufied commented Jun 29, 2018

When using FlexVolume for performing attach/detach, the flex binary must not have external dependencies and should be self-contained. The FlexVolume plugin path on Atomic hosts has changed to /etc/origin/kubelet-plugins, which applies to both master and compute nodes.
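
A sketch of installing a flex driver under the new path, assuming the usual <vendor>~<driver> plugin layout (the volume/exec subdirectory and the vendor/driver names are my assumptions; confirm against the storage docs):

$ mkdir -p /etc/origin/kubelet-plugins/volume/exec/example.com~my-driver
$ cp my-driver /etc/origin/kubelet-plugins/volume/exec/example.com~my-driver/my-driver
$ chmod +x /etc/origin/kubelet-plugins/volume/exec/example.com~my-driver/my-driver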

cc @openshift/sig-storage @knobunc

@sferich888
Contributor

TLSV1.2 is the only supported security version in OpenShift Container Platform version 3.4 and later. You must update if you are using TLSV1.0 or TLSV1.1.

Did we push an update for 3.4+ to denote this? How is this a 3.10 release note item, unless it is purely related to 3.10?

@sferich888
Contributor

http://file.rdu.redhat.com/~ahardin/06272018/3-10-release-notes/release_notes/ocp_3_10_release_notes.html#ocp-310-tenant-driven-storage-snapshotting

We should keep the example name in sync.
You create an example with name: snapshot-demo but then have a restore example with name: snapshot-pv-provisioning-demo. If my assumptions are correct, the name needs to be the same for the saved snapshot to be restored.

This section (primarily a link to where a user can learn more) is missing information on how long it will take to take the snapshot, how to see if a snapshot was taken, how long it takes to restore a snapshot, and how to know if a snapshot is being restored or has completed restoring. What happens to the objects in the system after each of the denoted operations?

@sferich888
Contributor

@jeremyeder http://file.rdu.redhat.com/~ahardin/06272018/3-10-release-notes/release_notes/ocp_3_10_release_notes.html#ocp-310-scale-cluster-limits quietly hides the fact that from 3.9 to 3.10 we decrease the number of pods per namespace from 15,000 to 3,000; however, http://file.rdu.redhat.com/~ahardin/06272018/3-10-release-notes/scaling_performance/cluster_limits.html#scaling-performance-current-cluster-limits offers no reason for this drop, other than what a user can deduce from http://file.rdu.redhat.com/~ahardin/06272018/3-10-release-notes/scaling_performance/cluster_limits.html#_footnote_3 (and assume that we have added or separated controllers, and thus lost the capability to run as many pods per namespace).

Can/should we have someone explain this better in the links denoted?

@sferich888
Contributor

Format issue:

http://file.rdu.redhat.com/~ahardin/06272018/3-10-release-notes/release_notes/ocp_3_10_release_notes.html#ocp-310-IP-failover-management-limited-to-254-groups

By default, [product-title] assigns one IP address to each group.

@sferich888
Contributor

Format issue:

http://file.rdu.redhat.com/~ahardin/06272018/3-10-release-notes/release_notes/ocp_3_10_release_notes.html#ocp-310-system-services-now-hosted-on-pods

master-restart etcd.

@sferich888
Contributor

sferich888 commented Jun 29, 2018

It's not clear whether http://file.rdu.redhat.com/~ahardin/06272018/3-10-release-notes/release_notes/ocp_3_10_release_notes.html#ocp-310-system-services-now-hosted-on-pods provides host-level commands or whether these are part of the oc / openshift binary.

This might be answered by http://file.rdu.redhat.com/~ahardin/06272018/3-10-release-notes/release_notes/ocp_3_10_release_notes.html#ocp-310-important-installation-changes (so a callout to that section might be needed).

@sferich888
Contributor

http://file.rdu.redhat.com/~ahardin/06272018/3-10-release-notes/release_notes/ocp_3_10_release_notes.html#ocp-310-group-pruning

Can we change the title of this to LDAP Group Pruning?

@sferich888
Contributor

sferich888 commented Jun 29, 2018

All Feedback is from section:
http://file.rdu.redhat.com/~ahardin/06272018/3-10-release-notes/release_notes/ocp_3_10_release_notes.html#ocp-310-important-installation-changes


The control plane components (etcd, API server, and controllers) are now run as static pods by the kubelet on the masters.

Provide a link to http://file.rdu.redhat.com/~ahardin/06272018/3-10-release-notes/release_notes/ocp_3_10_release_notes.html#ocp-310-system-services-now-hosted-on-pods


  • The containerized mode for OpenShift Container Platform is no longer supported (where components run in docker containers) and the 3.10 upgrade will require you to move to RPMs or system containers for the kubelet by setting openshift_use_system_containers=true.
  • System containers for the control plane components are no longer supported. Those components run as static pods instead and the upgrade will automatically make this conversion.

These statements are confusing!

  • With the first two bullet points we explain that services (api, controllers, etcd) are now containers, then explain that running containers is not supported. Are we saying these services are not docker containers, or are we saying these services are not meant to be started directly by docker?
  • Should/do we need to explain that the 'kubelet' is the only system container? The system container concept (for the control plane vs. the system) compared to the static pod concept in these notes gets confusing without a clear definition of what everything is.

Suggested Changes (will need edits for container names - IDK what they all are):

The containerized mode (starting containers directly from docker) for OpenShift Container Platform is no longer supported. The 3.10 upgrade will require you to move to RPMs or the kubelet system containers (by setting openshift_use_system_containers=true) to start or run the platform.

  • We have removed support for the following system containers (etcd, master-api, master-controllers, etc.); as these components now run as static pods, the upgrade will automatically make this conversion.

If links to CONTAINERIZED INSTALLATION METHOD REMOVED and CONTROL PLANE AS STATIC PODS are added, you could also fix this issue by pointing customers to these notable changes.


Node bootstrapping (controlled by the inventory variable openshift_node_bootstrap) defaults to True instead of False, which means nodes will pull their configuration and client and server certificates from the master.

  • Remove and, so this reads: configuration, client and server certificates

limited (proxy and log levels)

This designation makes it sound/feel like these are the only two things that can be changed. This might drive questions from customers about other configuration they might have in place (should we link to a section where these configuration options are explained better?).


  • usr/local/bin/master-logs (etcd etcd|api api|controllers)
  • usr/local/bin/master-exec (etcd etcd|api api|controllers)

Should be:

  • usr/local/bin/master-logs (etcd etcd|api api|controllers controllers)
  • usr/local/bin/master-exec (etcd etcd|api api|controllers controllers)

@sferich888
Contributor

All Feedback is from section:

http://file.rdu.redhat.com/~ahardin/06272018/3-10-release-notes/release_notes/ocp_3_10_release_notes.html#ocp-310-notable-technical-changes


While previously run as systemd services, the control plane components

This should read:

While previously run as systemd services or system containers, the control plane components


system containers are no longer supported, with the exception of the node service RHEL Atomic Host.

This should read:

system containers are no longer supported (sans the kubelet), with the exception of the node service on RHEL Atomic Host.

Key here is that we call out which system container is still supported/used.


@sferich888
Contributor

The TP table needs to be reviewed:

oc CLI Plug-ins was TP in 3.7 as well

^^ makes me wonder if the table was moved properly.

@sferich888
Contributor

http://file.rdu.redhat.com/~ahardin/06272018/3-10-release-notes/release_notes/ocp_3_10_release_notes.html#ocp-310-known-issues

Should list:

There is one known Kubelet wedge state that will be fixed in the 1.10 rebase where the Kubelet will display messages like system:anonymous cannot access resource foo. This means that the certificates expired before the kubelet could refresh them. If restarting the kubelet does not fix the issue, delete the contents of /etc/origin/node/certificates/, and then restart the kubelet.

From an above install section.
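
A sketch of that remediation on a node (the atomic-openshift-node service name is my assumption; use whatever node service applies to your installation):

$ systemctl restart atomic-openshift-node
# if the errors persist, clear the certificates and restart again:
$ rm -rf /etc/origin/node/certificates/
$ systemctl restart atomic-openshift-node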

@reestr

reestr commented Jul 3, 2018

We've stated in the release notes that quick installation is deprecated, and I can see that there is a docs update to remove references to the quick installer. However, the getting started guide still lists the quick install method 'atomic-openshift-installer install'. This needs to be removed or updated.

@soltysh

soltysh commented Jul 5, 2018

In 3.10, oc rollout latest ... --output=revision will be deprecated; use oc rollout latest ... --output jsonpath={.status.latestVersion} or oc rollout latest ... --output go-template={{.status.latestVersion}} instead.

@jeremyeder
Contributor

@sferich888 we're doing more concentrated testing on these limits. Long story short, they were never really tested upstream or downstream. So far the new numbers are low, but the original numbers are high if you have services in your deployments.
