
CronJob

The release blog post says that CronJob has been declared stable, but there is a small clarification: it is the API version that is stable, i.e. the manifest structure with kind: CronJob. With the controller that actually implements the logic, things are much more interesting.

In version 1.20 the CronJob controller version 2 was added. In the new 1.21 it was promoted to beta and enabled by default. In 1.22 the old CronJob controller code is planned to be removed. Very, very fast changes, a pace not typical for Kubernetes release cycles.

Why make a new controller if all the problems with CronJobs remain unresolved? It turns out that the old controller put unnecessary load on the Kubernetes API and could not keep up with creating Jobs when there were more than 1000 CronJob manifests in the cluster. The new version of the controller is written according to the latest guidelines and is much faster.
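As for the manifests themselves, essentially only the apiVersion changes. A minimal sketch of a CronJob on the now-stable batch/v1 API (the name, schedule and image are just placeholders):

apiVersion: batch/v1        # stable as of 1.21; previously batch/v1beta1
kind: CronJob
metadata:
  name: nightly-report      # hypothetical name
spec:
  schedule: "0 3 * * *"     # standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: busybox
            command: ["sh", "-c", "echo hello"]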

Immutable Secret and ConfigMap

Added the ability to create Secrets and ConfigMaps that are protected from changes. Apparently, this is protection from juniors who push bad configuration. ConfigMaps should be deployed via Helm charts, and secrets should be stored in Vault, where there is a history of changes, and your CI/CD should not allow rolling out broken configs to production.
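If you still want the feature, it is a single field on the object. A minimal sketch with an illustrative name:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config          # hypothetical name
data:
  LOG_LEVEL: info
immutable: true             # the API server will reject further changes to data; delete and recreate instead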

IPv4 / IPv6 Dual-Stack support

IPv6 support is now enabled by default; the only subtlety is that your CNI must also support dual-stack. Calico can.
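On the Service side, dual-stack is requested through the ipFamilyPolicy and ipFamilies fields. A rough sketch, assuming the cluster and CNI are actually configured for both families (names are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: demo                        # hypothetical name
spec:
  selector:
    app: demo
  ipFamilyPolicy: PreferDualStack   # ask for both families if the cluster can provide them
  ipFamilies:
  - IPv4
  - IPv6
  ports:
  - port: 80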

Graceful Node Shutdown

Kubelet has learned to detect when a node is being shut down with the shutdown command and now terminates the pods gracefully, sending them SIGTERM. TODO: test what happens if the container runtime shuts down faster than kubelet, and what a simple systemctl stop of kubelet does.
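The grace periods are set in the kubelet configuration; a rough sketch (the durations here are arbitrary):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
shutdownGracePeriod: 60s              # total time kubelet reserves before the node powers off
shutdownGracePeriodCriticalPods: 20s  # part of that time reserved for critical pods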

PodSecurityPolicy Deprecation

More controversial news. PSPs have been declared deprecated and are scheduled for removal in version 1.25. At the same time, the PSP replacement policy is still at the design stage, and an alpha version is promised only in Kubernetes 1.22. For a brief overview of what is being designed there, see KEP #2582. The strangest suggestion in it is to use a namespace label to decide which rules pod manifests are validated against. It turns out that by giving someone the rights to edit a namespace, you also give them an easy way to obtain the rights of a cluster administrator.
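To illustrate the idea being criticized here: the namespace carries a label, and the admission plugin picks the validation mode from it. The label key below is only an illustration of where the design is heading, not something that ships in 1.21:

apiVersion: v1
kind: Namespace
metadata:
  name: team-a                                    # hypothetical namespace
  labels:
    # Illustrative label: the admission plugin would read it to decide how strictly
    # to validate pods here, so whoever can edit the namespace can also loosen the policy.
    pod-security.kubernetes.io/enforce: restricted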

Let's see how it turns out in the end, but for now we are advised to smoothly switch to the standard PSPs, analogs of which will be hardcoded as built-in profiles into the new PSPv2 admission plugin.

Or switch to third-party solutions such as the Open Policy Agent Gatekeeper.

Urgent Upgrade Notes

The default cgroupDriver is now systemd. Remember to check your containerd settings when installing a new cluster or adding nodes. But that's not all: in version 1.22 they promise to force the kubelet cgroup driver to systemd when upgrading the cluster, so it's time to read the migration guide and start changing the driver.
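On the kubelet side it is one line in the config; a minimal sketch (the container runtime must be set to the same driver, e.g. SystemdCgroup = true for runc in the containerd config):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # must match the cgroup driver of the container runtime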

Many small changes in the CSI and PV area: old labels, flags, and metrics are no longer used. In principle, nothing scary; most likely you just need to update the CSI drivers you use.

The kubeadm kubeconfig user, certs, and debug commands have graduated from experimental to permanent and must now be invoked without the alpha word.

The functionality of the kubectl run command continues to be cut back. A whole set of flags for creating a Service and a CronJob has been removed, and the flags for setting requests and limits, the service account, and hostPort are now deprecated. We are being actively pushed towards creating cluster objects only from ready-made YAML manifests.

They finally removed support for the kubectl --export flag. And how convenient it was to use it to dump the manifest of an existing cluster object in order to create a copy of it, for example, to copy a Secret with a TLS certificate into another namespace.

Everyone who uses vSphere versions below 67u3 is advised to upgrade; there is still time before Kubernetes 1.24 is released.

Interesting little innovations

The endPort field has been added to NetworkPolicy to support port ranges. Rejoice, lovers of running Asterisk in a cluster.
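A rough sketch of a policy opening a whole UDP range (names and ranges are illustrative; in 1.21 the field is alpha and sits behind the NetworkPolicyEndPort feature gate):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-rtp            # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: asterisk
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: UDP
      port: 10000
      endPort: 20000         # the whole RTP range in one rule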

The maxSurge field has been added to DaemonSets: during an update you can now specify that a new pod is first created on the node, and the old one is deleted only after the new one is up.
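A sketch of such an update strategy, with an illustrative DaemonSet:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent           # hypothetical name
spec:
  selector:
    matchLabels:
      app: node-agent
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # start the new pod first...
      maxUnavailable: 0      # ...and only then remove the old one
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: busybox
        command: ["sleep", "infinity"]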

Keepalive pings were added to the kubectl exec and port-forward commands, so intermediate HTTP balancers will no longer drop the connection when there is no activity in it.

A suspend field was added to Job, and a whole blog article was written about it. It is just not clear what the point is: imitating the work of Kafka or RabbitMQ?
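The mechanics are simple: while suspend is true, the Job controller does not create pods. A minimal sketch with placeholder names (the field is alpha in 1.21, behind the SuspendJob feature gate):

apiVersion: batch/v1
kind: Job
metadata:
  name: deferred-task        # hypothetical name
spec:
  suspend: true              # no pods are created until this is flipped to false
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "echo run me later"]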

Now you can select namespaces by name: the kubernetes.io/metadata.name label is automatically added to every namespace manifest.
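This lets you, for example, write a namespaceSelector against a specific namespace without inventing your own labels. A rough sketch with illustrative names:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-monitoring   # hypothetical name
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring   # select the namespace by its name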

The internalTrafficPolicy field has been added to Service. If you set it to Local, traffic will be directed only to pods located on the same cluster node as the pod that sent the request. While the feature is in alpha, the ServiceInternalTrafficPolicy feature gate must be enabled to use it.
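A sketch of such a Service, with illustrative names:

apiVersion: v1
kind: Service
metadata:
  name: node-local-dns             # hypothetical name
spec:
  selector:
    app: node-local-dns
  internalTrafficPolicy: Local     # in-cluster clients are routed only to pods on their own node
  ports:
  - port: 53
    protocol: UDP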

Finally, the TTL controller has been enabled, which allows you to automatically delete the manifests of completed Jobs.
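The cleanup delay is set per Job via ttlSecondsAfterFinished; a minimal sketch with placeholder names:

apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup-demo               # hypothetical name
spec:
  ttlSecondsAfterFinished: 300     # the Job object is deleted 5 minutes after it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: task
        image: busybox
        command: ["true"]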

The kubectl.kubernetes.io/default-container annotation was added to the pod manifest; with it you can specify which container kubectl should exec into, show logs from, and so on, when the -c flag is not given.
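A sketch of a pod with a sidecar where kubectl defaults to the main container (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar                            # hypothetical name
  annotations:
    kubectl.kubernetes.io/default-container: app    # kubectl exec/logs use this container when -c is omitted
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]
  - name: sidecar
    image: busybox
    command: ["sleep", "infinity"]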