Launching the first Kubernetes master

First of all, we need to generate a config for kubeadm
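
What exactly goes into the config depends on your environment. As a rough sketch, assuming kubeadm 1.15+ (config API v1beta2), a minimal ClusterConfiguration could look like this; the control-plane endpoint and pod CIDR below are placeholders, not values from our setup:

    # kubeadm-config.yaml (sketch)
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: stable
    # placeholder: address or DNS name under which the API servers will be reachable
    controlPlaneEndpoint: "master01.example.local:6443"
    networking:
      # pod CIDR expected by the default Calico manifest
      podSubnet: "192.168.0.0/16"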

Initialize the first master
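
Assuming the config file sketched above, the initialization boils down to running kubeadm init with it on master01:

    sudo kubeadm init --config=kubeadm-config.yaml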

If kubeadm completes without errors, it prints a summary that ends with the kubeadm join command you will later use to add nodes to the cluster.

CNI Calico Installation

The time has come to set up the network in which our pods will run. We use Calico, and we will now launch it.

But first, let’s configure access for kubectl. We execute all the commands on master01

If you are working as root
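
For root it is enough to point kubectl at the admin kubeconfig that kubeadm generated:

    export KUBECONFIG=/etc/kubernetes/admin.conf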

If you are working as a regular user
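
For a regular user, copy the admin kubeconfig into the home directory, as the kubeadm output suggests:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config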

You can also manage the cluster from your laptop or any other local machine. To do this, copy the /etc/kubernetes/admin.conf file to $HOME/.kube/config on your laptop or any other machine.
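
For example (the user and host name here are placeholders for your own):

    mkdir -p ~/.kube
    scp root@master01:/etc/kubernetes/admin.conf ~/.kube/config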

Now install the CNI according to the Kubernetes documentation
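
The exact manifest URL depends on the Calico version, so check the Calico documentation; at the time of writing, the generic manifest could be applied like this:

    kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml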

Wait until all the pods are running
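
For example, by watching the kube-system namespace until everything is in the Running state:

    kubectl get pods -n kube-system -w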

Launching the second and third Kubernetes masters

Before starting master02 and master03, you need to copy the certificates that kubeadm generated on master01 when creating the cluster. We will copy them via scp.

On master01
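
A sketch of the copy step, following the list of certificates from the kubeadm HA documentation (run as root on master01; the host names are placeholders):

    for host in master02 master03; do
      scp /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/ca.key \
          /etc/kubernetes/pki/sa.key /etc/kubernetes/pki/sa.pub \
          /etc/kubernetes/pki/front-proxy-ca.crt /etc/kubernetes/pki/front-proxy-ca.key \
          root@$host:~/
      scp /etc/kubernetes/pki/etcd/ca.crt root@$host:~/etcd-ca.crt
      scp /etc/kubernetes/pki/etcd/ca.key root@$host:~/etcd-ca.key
    done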

On master02 and master03
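
Move the copied files into the place where kubeadm expects them (run as root):

    mkdir -p /etc/kubernetes/pki/etcd
    mv ~/ca.crt ~/ca.key ~/sa.key ~/sa.pub \
       ~/front-proxy-ca.crt ~/front-proxy-ca.key /etc/kubernetes/pki/
    mv ~/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
    mv ~/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key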

Create a config for kubeadm
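
One way to do this (a sketch, assuming kubeadm 1.15+ and the v1beta2 config API) is a JoinConfiguration that marks the node as a control-plane member; the endpoint, token, hash, and addresses below are placeholders taken from the kubeadm init output on master01:

    # kubeadm-join.yaml (sketch)
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: JoinConfiguration
    discovery:
      bootstrapToken:
        # placeholders: endpoint, token and CA hash printed by kubeadm init
        apiServerEndpoint: "master01.example.local:6443"
        token: "abcdef.0123456789abcdef"
        caCertHashes:
          - "sha256:<hash>"
    controlPlane:
      localAPIEndpoint:
        # placeholder: the address of this master node
        advertiseAddress: "10.73.71.22"
        bindPort: 6443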

And add master02 and master03 to the cluster
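
With the certificates in place and the join config from the previous step, each of the two masters can be joined like this (on older kubeadm releases the control-plane role is requested with the --experimental-control-plane flag instead):

    sudo kubeadm join --config kubeadm-join.yaml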

Adding worker nodes to the cluster

At the moment, we have a cluster with three master nodes running. But the master nodes are the machines that run the API server, the scheduler, and the other services of the Kubernetes cluster. To be able to run our pods, we need so-called worker nodes.

If you are limited in resources, then you can run pods on master nodes, but we do not recommend doing this.

Install kubelet, kubeadm, kubectl, and Docker on the worker nodes the same way as on the master nodes
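
For reference, on Ubuntu/Debian the installation looked roughly like this at the time of writing (the Kubernetes package repository has changed over time, so check the current documentation):

    sudo apt-get update && sudo apt-get install -y apt-transport-https curl
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt-get update
    sudo apt-get install -y docker.io kubelet kubeadm kubectl
    sudo apt-mark hold kubelet kubeadm kubectl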

Now it’s time to return to the join command that kubeadm generated when we initialized the first master node.

It looks like this.
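
The actual endpoint, token, and hash are the ones printed on your master01; in general form:

    kubeadm join <control-plane-endpoint>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>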

It is necessary to execute this command on each worker node.

If you have not saved the token, you can generate a new one
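
Run this on master01; it prints a ready-to-use join command with a fresh token:

    kubeadm token create --print-join-command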

Once kubeadm finishes, your new node has joined the cluster and is ready for work

Now let’s look at the result
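
For example:

    kubectl get nodes -o wide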

Installation of HAProxy on worker nodes

Now we have a working cluster with three master nodes and three worker nodes.

The problem is that now our worker nodes do not have HA mode.

If we look at the kubelet config file, we will see that our worker nodes access only one of the three master nodes.
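
For example, on a worker node:

    grep 'server:' /etc/kubernetes/kubelet.conf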

In our case, this is master03. With this configuration, if master03 crashes, the worker node loses communication with the cluster API server. To make our cluster fully HA, we will install a load balancer (HAProxy) on each of the workers, which will distribute requests across the three master nodes in round-robin fashion, and in the kubelet config on the worker nodes we will change the server address to 127.0.0.1:6443

First of all, install HAProxy on each worker node.
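
On Ubuntu/Debian or CentOS respectively:

    sudo apt-get install -y haproxy
    # or
    sudo yum install -y haproxy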

After HAProxy is installed, we need to create a config for it.
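
The config script from our repository (next step) generates it, but conceptually the result looks roughly like this: a TCP frontend on 127.0.0.1:6443 balanced round-robin across the three masters (the master addresses here are placeholders):

    # /etc/haproxy/haproxy.cfg (sketch)
    global
        daemon
        maxconn 256

    defaults
        mode tcp
        timeout connect 5s
        timeout client  1h
        timeout server  1h

    frontend kube-apiserver
        bind 127.0.0.1:6443
        default_backend kube-masters

    backend kube-masters
        balance roundrobin
        option tcp-check
        server master01 <master01-ip>:6443 check
        server master02 <master02-ip>:6443 check
        server master03 <master03-ip>:6443 check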

If the directory with the config files is not present on the worker nodes, clone it again

And run the config script with the haproxy flag

The script will configure and restart haproxy.

Check that HAProxy has started listening on port 6443.
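
For example:

    sudo ss -tnlp | grep 6443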

Now we need to tell kubelet to access localhost instead of the master node. To do this, edit the server value in the /etc/kubernetes/kubelet.conf and /etc/kubernetes/bootstrap-kubelet.conf files on all worker nodes.

The server value should look like this:
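
    server: https://127.0.0.1:6443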

After making the changes, restart the kubelet and docker services
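
On each worker node:

    sudo systemctl restart kubelet docker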

Check if all nodes are working properly.

So far, we have no applications in the cluster to test HA with. But we can stop the kubelet on the first master node and make sure that our cluster keeps working.
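
For example, on master01:

    sudo systemctl stop kubelet docker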

Check from the second master node
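
For example, from master02:

    kubectl get nodes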

All nodes are working normally, except the one on which we stopped the services.

Don’t forget to start the Kubernetes services on the first master node again
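
For example:

    sudo systemctl start kubelet docker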

Installing the Ingress Controller

An Ingress controller is a Kubernetes add-on that lets us access our applications from outside the cluster. A detailed description can be found in the Kubernetes documentation. There are quite a lot of Ingress controllers; we use the controller from Nginx and will cover its installation. The documentation on the operation, configuration, and installation of the Nginx Ingress controller is available on its official website.

Let’s start the installation, all commands can be executed from master01.

Install the controller itself
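
At the time of writing, the official installation guide applied the so-called mandatory manifest; the URL may have changed since, so check the current command on the official site:

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml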

And now, a service through which the Ingress controller will be available

To do this, prepare the config
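
A sketch of such a config, assuming a NodePort service in the ingress-nginx namespace with the selector labels used by the official manifests (the node ports here are arbitrary placeholders):

    # ingress-nginx-svc.yaml (sketch)
    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx
      namespace: ingress-nginx
    spec:
      type: NodePort
      selector:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      ports:
        - name: http
          port: 80
          targetPort: 80
          nodePort: 31080
        - name: https
          port: 443
          targetPort: 443
          nodePort: 31443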

And send it to our cluster
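
Assuming the file name from the sketch above:

    kubectl apply -f ingress-nginx-svc.yaml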

Check that our Ingress controller works on the right addresses and listens on the right ports.
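
For example:

    kubectl get pods,svc -n ingress-nginx -o wide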

Web UI (Dashboard) Installation

Kubernetes has a standard Web UI, through which it is sometimes convenient to take a quick look at the state of the cluster or its individual parts. In our work, we often use the dashboard for the initial diagnosis of deployments or of the state of cluster components.

Installation. We are using the stable version and haven’t tried 2.0 yet.
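
Assuming v1.10.1, which was the stable release at the time, the installation is a single kubectl apply of the recommended manifest (for other versions, check the dashboard project page):

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml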

After we installed the panel in our cluster, it became available at
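
With kubectl proxy running on the local machine, this is the standard proxy URL for the dashboard deployed in kube-system:

    http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/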

But in order to reach it, we need to forward ports from the local machine with the help of kubectl proxy. For us, this scheme is not very convenient. Therefore, we will change the service of the control panel so that the dashboard becomes available on the address of any cluster node on port 30443. There are other ways to access the dashboard as well, for example through Ingress. Perhaps we will cover that method in future publications.

To change the service, apply the already modified service manifest
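
A sketch of such a modified service, assuming the v1.10.x dashboard in the kube-system namespace; port 30443 is the node port mentioned above:

    # kubernetes-dashboard-svc.yaml (sketch)
    apiVersion: v1
    kind: Service
    metadata:
      name: kubernetes-dashboard
      namespace: kube-system
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      type: NodePort
      selector:
        k8s-app: kubernetes-dashboard
      ports:
        - port: 443
          targetPort: 8443
          nodePort: 30443

    kubectl apply -f kubernetes-dashboard-svc.yaml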

Now we need to create an admin user and a token to access the cluster through the dashboard
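
A common way to do this (the account name admin-user is just a name we pick here) is a ServiceAccount bound to the cluster-admin role, after which the token can be read from the automatically created secret:

    # dashboard-admin.yaml (sketch)
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: admin-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: ServiceAccount
        name: admin-user
        namespace: kube-system

    kubectl apply -f dashboard-admin.yaml
    # print the token to paste into the dashboard login form
    kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')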

After that, you can log in to the control panel at https://10.73.71.25:30443

Dashboard home screen

Congratulations! If you have reached this step, then you have a working HA Kubernetes cluster that is ready for the deployment of your applications.

Kubernetes is the key component of a microservice infrastructure, and various add-ons are required on top of it. We plan to talk about some of them in future publications.

We will try to answer all questions in the comments, or you can drop us a line at [email protected]