Sergio Peris

Posted on • Originally published at sertxu.dev

Setting up a Kubernetes cluster with Kubeadm and Containerd

In this tutorial, we'll go through the step-by-step process of installing a Kubernetes cluster using Kubeadm and Containerd.

I'm going to use a machine with Ubuntu Server 24.04 LTS, so if you're using a different OS, you might have to adapt some commands.

This tutorial was written using Kubernetes version 1.32; if you're using a newer version, some steps might have changed.

Also, keep in mind that all of the following steps are executed as root unless otherwise stated, so no command will start with sudo.

Set up nodes

The following steps should be run on all nodes.

Enable IPv4 packet forwarding

We need to enable IPv4 packet forwarding so traffic can be routed between pods on different nodes; kubeadm's preflight checks will also complain if it's disabled.

sysctl net.ipv4.ip_forward=1

To make this change persist across reboots, we should modify the /etc/sysctl.conf file.

# Uncomment the next line to enable packet forwarding for IPv4
- #net.ipv4.ip_forward=1
+ net.ipv4.ip_forward=1
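
Alternatively, instead of editing /etc/sysctl.conf, you can drop the setting into its own file under /etc/sysctl.d/ and reload. A minimal sketch (the k8s.conf filename is just a convention):

# Persist the setting in a dedicated drop-in file
echo "net.ipv4.ip_forward = 1" | tee /etc/sysctl.d/k8s.conf

# Reload all sysctl configuration files and apply the value immediately
sysctl --system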

Install dependencies

There are some common dependencies we need to make sure are installed on our system.

apt update
apt install ca-certificates curl apt-transport-https gpg

Install containerd.io

To download containerd.io, we should add the Docker repository to our system.

First, we add Docker's official GPG key:

install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
chmod a+r /etc/apt/keyrings/docker.asc

Next, we add the Docker repository:

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
  tee /etc/apt/sources.list.d/docker.list > /dev/null

Now we can install containerd.io:

apt update
apt install containerd.io

Configure systemd cgroup driver for containerd

First, we need to replace the containerd configuration file at /etc/containerd/config.toml. The stock file shipped by containerd.io disables the CRI plugin, which Kubernetes requires, so we regenerate it with containerd's full defaults.

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml > /dev/null

Now, we should enable the systemd cgroup driver for the CRI at /etc/containerd/config.toml.

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
-           SystemdCgroup = false
+           SystemdCgroup = true

Instead of editing the file manually, you can run the following command:

sed -i "s/SystemdCgroup = false/SystemdCgroup = true/g" "/etc/containerd/config.toml"
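
To confirm the change actually landed, you can grep the file; the runc options block should now read SystemdCgroup = true:

# Print every occurrence with its line number; expect "SystemdCgroup = true"
grep -n "SystemdCgroup" /etc/containerd/config.toml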

Now we restart containerd and check its status to make sure it is working.

systemctl restart containerd
systemctl status containerd
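
As an extra sanity check, you can query the daemon directly over its socket (assuming the default socket path /run/containerd/containerd.sock):

# Prints client and server versions; the server section only appears
# if the daemon is actually reachable
ctr --address /run/containerd/containerd.sock version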

Install kubeadm, kubelet and kubectl

First, we should download Kubernetes' official GPG key:

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | \
gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

Next, we add the Kubernetes repository to our system:

echo \
    "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /" | \
    tee /etc/apt/sources.list.d/kubernetes.list

We can now install the Kubernetes packages and hold them at the installed version so a routine apt upgrade doesn't unexpectedly bump them.

apt update
apt install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
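
A quick way to verify the installation is to print each binary's version; this also confirms the hold is pinning what you expect:

kubeadm version
kubectl version --client
kubelet --version

# apt-mark showhold lists packages currently held back from upgrades
apt-mark showhold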

Finally, we can enable the kubelet service (it will restart in a crash loop until kubeadm initializes or joins the node, which is expected):

systemctl enable --now kubelet

Set up Kubernetes cluster

The following steps should only be run on the control plane node.

The control plane node is the one in charge of supervising and administering the k8s cluster.

Initialize the k8s control plane

To initialize the control plane, we're going to specify the network CIDR for our pods. 192.168.0.0/16 is Calico's default; unless you have a good reason to change it, it's better to keep it as it is.

kubeadm init --pod-network-cidr=192.168.0.0/16

Once you run this command, it will output a kubeadm join command; keep it safe, as we will use it later.
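
If you lose the join command, you don't need to re-initialize the cluster; you can generate a fresh token and print a new join command from the control plane node:

# Creates a new bootstrap token and prints the matching kubeadm join command
kubeadm token create --print-join-command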

Prepare non-root user

To operate with our Kubernetes cluster, it's better to use a non-root user.

If you already have one with sudo permission, log in with it; if you don't have one, run the following commands to create it:

adduser kubernetes
usermod -aG sudo kubernetes
su - kubernetes

To manage the cluster, the user must have the k8s config file at ~/.kube/config.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
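
Alternatively, if you're staying as root for a quick test, you can point kubectl at the admin config directly instead of copying it:

# Valid for the current shell session only
export KUBECONFIG=/etc/kubernetes/admin.conf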

To check that the user can manage the cluster, we can run the following command:

kubectl get nodes

It should output something like the following:

NAME                           STATUS     ROLES           AGE   VERSION
eu-central-1.binarycomet.net   NotReady   control-plane   15m   v1.32.3

The following steps should be run using this user.

Deploy a Container Network Interface (CNI)

The pods require a CNI plugin to communicate with each other. There are a few options; we'll use the Calico operator.

First, we download the configuration files:

wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.4/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.4/manifests/custom-resources.yaml

If you're installing a newer Kubernetes version, check the Calico releases at https://github.com/projectcalico/calico/releases

If you changed the pod-network-cidr when initializing the cluster, you should update the CIDR in the custom-resources.yaml configuration file.
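
For example, if you had initialized the cluster with a hypothetical 10.244.0.0/16 instead, a one-line sed would keep the manifest in sync (adjust the CIDR to whatever you actually passed to kubeadm init):

# Replace Calico's default pod CIDR with the one used at kubeadm init
# (10.244.0.0/16 here is just an example value)
sed -i "s|192.168.0.0/16|10.244.0.0/16|" custom-resources.yaml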

Next, we create the resources from both manifests in our cluster.

kubectl create -f ./tigera-operator.yaml
kubectl create -f ./custom-resources.yaml

After a few seconds, check the pod status to make sure everything is working:

kubectl get pods -n calico-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-88ff6f9d5-v4pp2   1/1     Running   0          102s
calico-node-mh8gd                         1/1     Running   0          102s
calico-typha-6fc55bd49d-s62bq             1/1     Running   0          102s
csi-node-driver-2r96g                     2/2     Running   0          102s
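
If the pods are still coming up, you can watch them until they settle:

# Streams status updates until interrupted with Ctrl+C
kubectl get pods -n calico-system --watch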

Join the worker nodes to the cluster

The following steps should be run only on worker nodes.

Prepare non-root user

As we did on the control plane node, we're going to use a non-root user.

If you already have one with sudo permission, log in with it; if you don't have one, run the following commands to create it:

adduser kubernetes
usermod -aG sudo kubernetes
su - kubernetes

Join cluster

Using the non-root user, we should run the kubeadm join command we kept safe previously. Joining requires root privileges, so we prefix it with sudo:

sudo kubeadm join <IP>:<port> --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>
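
Since worker nodes don't have a kubeconfig by default, the quickest local check that the join worked is the kubelet service itself:

# The kubelet should be active (running) once the node has joined
systemctl status kubelet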

Check worker node status

To check the worker node status, we should run the following command on the control plane node.

kubectl get nodes
NAME                           STATUS   ROLES           AGE   VERSION
eu-central-1.binarycomet.net   Ready    control-plane   78m   v1.32.3
eu-central-2.binarycomet.net   Ready    <none>          53m   v1.32.3
eu-central-3.binarycomet.net   Ready    <none>          25m   v1.32.3

Assign the worker role to the worker node

The worker node is already part of the cluster, but it has no role yet. To give it the worker role, we should run the following command:

kubectl label node <node-name> node-role.kubernetes.io/worker=worker
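
The role shown by kubectl get nodes is just this label; if you ever need to undo it, kubectl removes a label when you append a dash to its key:

# Appending "-" to the label key removes it from the node
kubectl label node <node-name> node-role.kubernetes.io/worker-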

We check the status again to make sure the node has obtained the worker role.

kubectl get nodes
NAME                           STATUS   ROLES           AGE   VERSION
eu-central-1.binarycomet.net   Ready    control-plane   78m   v1.32.3
eu-central-2.binarycomet.net   Ready    worker          53m   v1.32.3
eu-central-3.binarycomet.net   Ready    worker          25m   v1.32.3
