Installing Kubernetes from scratch can seem intimidating, especially with all the moving parts involved—but using kubeadm makes it much more approachable. In this step-by-step tutorial, we'll walk you through the complete process of setting up a Kubernetes cluster using kubeadm, explaining each configuration and decision along the way.

Prerequisites
We’ll do a minimal installation for a lower environment, with one master and two worker nodes.

For the master we’ll use:
- Ubuntu 24.04 LTS
- 2 vCPU
- 8 GiB Memory

For each worker we’ll use:
- Ubuntu 24.04 LTS
- 2 vCPU
- 4 GiB Memory
System Preparation (All nodes)
Update Repositories
Run these commands on all nodes. We’ll start by updating our repositories:

sudo apt update && sudo apt upgrade -y
Provide hostnames for the nodes
For the master:

sudo hostnamectl set-hostname k8s-master

For the workers:

sudo hostnamectl set-hostname k8s-worker-01
sudo hostnamectl set-hostname k8s-worker-02

Then, so the nodes can resolve each other by name, add an entry for every node to /etc/hosts on each machine:

sudo vi /etc/hosts
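For example, the entries could look like this (a sketch: 192.168.1.100 is the master IP used later in this post, while the worker IPs are placeholders; replace all three with your nodes' real addresses):

cat <<EOF | sudo tee -a /etc/hosts
# k8s cluster nodes (example IPs)
192.168.1.100 k8s-master
192.168.1.101 k8s-worker-01
192.168.1.102 k8s-worker-02
EOF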
Disable Swap
To install K8s, it is required to disable swap. Swap is a portion of the disk that acts as virtual memory when our system runs out of physical RAM. It’s a mechanism of our OS to keep running when the demand for memory is higher than the physical memory available. Kubernetes does not like this: it expects memory usage to be accurate and tight. If swap is enabled:
- It can’t tell how much actual RAM is available.
- It may overschedule workloads.
- It won’t handle memory pressure correctly.
- Worst case: pods crash and nodes get marked NotReady.

You can check whether swap is currently active with:

free -h
To disable swap now, and keep it disabled across reboots by commenting out the swap entry in /etc/fstab, run the following commands:

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
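If swap was disabled correctly, swapon --show should print nothing, and free -h should report 0B of swap:

swapon --show
free -h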
Enable required kernel modules
Kernel modules are pieces of code that can be loaded into the Linux kernel on demand to add functionality — like networking protocols, file systems, or container support — without needing to reboot or recompile the kernel. For K8s to work we need to enable two kernel modules:
- overlay - A filesystem module that supports overlay filesystems. An overlay filesystem (specifically, OverlayFS in Linux) is a union filesystem that lets you combine multiple directories (layers) into a single virtual view. This is one of the most characteristic features of containers: it allows a container to define two layers, the lowerdir, which is the read-only base (e.g. a container image), and the upperdir, which is the writable layer where changes made in the container go. This is crucial for containers because it lets the container runtime reuse base images without duplicating data while still making containers writable.
- br_netfilter - A module that allows Linux bridges (like the ones used in container networking) to pass traffic through iptables/netfilter, ensuring bridged network traffic is visible to iptables. Kubernetes uses virtual bridges (like cni0, docker0, flannel.1, etc.) to connect containers, and it relies on iptables to enforce NetworkPolicies, NAT, and firewall rules, so bridged traffic must pass through iptables.
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
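To confirm both modules are loaded, check the kernel’s module table:

lsmod | grep -E 'overlay|br_netfilter'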
Add sysctl settings
We’ll need to modify some networking settings on our nodes:

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
- net.bridge.bridge-nf-call-iptables = 1 - Tells the Linux kernel to pass bridged IPv4 traffic (e.g. container traffic) through the iptables firewall; by default, bridged traffic bypasses iptables. Kubernetes uses bridged networks (via CNI plugins such as Flannel or Calico) and needs iptables to see container traffic in order to apply NetworkPolicies and control the traffic.
- net.bridge.bridge-nf-call-ip6tables = 1 - Same as the previous one, but for IPv6 traffic, in case our K8s cluster runs dual stack (IPv4 and IPv6).
- net.ipv4.ip_forward = 1 - Enables packet forwarding at the kernel level, allowing Linux to forward packets between different network interfaces. In our K8s cluster the nodes need to route packets between pods, between pods and services, and across nodes. Without this setting, pod-to-pod and pod-to-service communication, especially across nodes, would not work.
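You can verify that the settings took effect without a reboot:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward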
Install Container Runtime (containerd) (All nodes)
We’ll use containerd as the container runtime for our K8s cluster. containerd is a lightweight container runtime. It doesn’t include a build system or developer tooling (as Docker does) — it’s just what Kubernetes needs to run containers. It is a CNCF project and is considered the standard runtime under the hood of many container tools (Docker, CRI-O, etc.). To install it, run the following command:
sudo apt install -y containerd
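Before moving on, it’s worth checking that the containerd service is up and which version was installed:

systemctl status containerd --no-pager
containerd --version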
Set up containerd to use systemd cgroups
Kubernetes expects the container runtime to use systemd cgroups. Systemd handles resource isolation and limits more reliably than the legacy cgroupfs driver. To make K8s work properly we need to make sure that the kubelet and the container runtime use the same cgroup driver (systemd). For that, containerd’s config.toml needs to be patched so containerd and the kubelet speak the same language. We’ll follow these steps:

By default, containerd ships without a config file. So, right after installing containerd, we’ll generate one with the config default command, which writes all the default options with which containerd is currently running to /etc/containerd/config.toml:

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
With that file we can tune the containerd config. This TOML file tells containerd:
- How to manage images and containers
- What plugins to use
- Which runtime to run containers with
- Whether to enable/disable features like TLS, sandboxing, or systemd cgroups
In /etc/containerd/config.toml, find this section:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = false

and change it to:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
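If you prefer to make the change non-interactively, a sed one-liner does the same edit (a minimal sketch: it assumes SystemdCgroup appears only once in the generated default config, so review the file afterwards):

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml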
Then restart containerd and enable it to start on boot:

sudo systemctl restart containerd
sudo systemctl enable containerd
Install kubeadm, kubelet, kubectl (All nodes)
In this tutorial, we’ll be installing the latest K8s version. At the time of writing this post, that is version 1.31. In the future, if you need a newer version, check out the official docs.

First, we’ll add the K8s repo. For that, we’ll install the packages required to use the K8s apt repository:

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
- apt-transport-https - Enables the apt package manager to download packages over HTTPS (secure HTTP). The Kubernetes apt repository (https://apt.kubernetes.io) serves its packages over HTTPS.
- ca-certificates - Provides a set of trusted Certificate Authority (CA) certificates used to verify SSL connections. When apt connects to https://apt.kubernetes.io, it uses these certificates to verify the authenticity of the server. This ensures you’re downloading real Kubernetes packages, not malicious fakes, and prevents man-in-the-middle (MITM) attacks.
- curl - In case it is not already installed on our servers; we’ll use curl to download files from remote URLs.
- gpg - GNU Privacy Guard, a tool to verify and sign files using cryptographic signatures. GPG ensures the packages haven’t been tampered with between the package repository and our server.
Now download the public signing key for the Kubernetes apt repository and add the repository itself:

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Then update the package index, install the three packages, and pin their versions:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

The apt-mark hold pins kubelet, kubeadm and kubectl so a routine apt upgrade doesn’t move the cluster to a new Kubernetes version unintentionally.
Finally, enable and start the kubelet service:

sudo systemctl enable --now kubelet
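A quick version check confirms everything is installed:

kubeadm version
kubelet --version
kubectl version --client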
Create the cluster (Master Node only)
To create our K8s cluster we just need to run one command with kubeadm:

sudo kubeadm init --apiserver-advertise-address [YOUR_MASTER_NODE_IP] --pod-network-cidr "[YOUR_POD_CIDR]" --upload-certs

Let’s break down these flags:
- --apiserver-advertise-address - Specifies the IP address the Kubernetes API server should advertise to other nodes in the cluster. The API server binds to this address, and it's the one worker nodes will use to connect to the control plane when joining. If our control plane has only one node, that is the master node's IP address. However, in a production cluster (minimum three master nodes) this should be the virtual IP from which a load balancer distributes requests to the individual master nodes.
- --pod-network-cidr - Specifies the CIDR block for the pod network, i.e., the range of IPs that will be assigned to pods. A common value is 10.244.0.0/16 (the default for Flannel, the network add-on we'll install later), but we can choose a different one. Just make sure, if you make it smaller, that you'll have enough IPs for all your pods.
- --upload-certs - Automatically uploads the control-plane TLS certificates to the cluster. This is needed when you plan to join additional control-plane nodes using kubeadm join --control-plane. Without it, we would need to manually copy certs to each new control-plane node. A filled-in example of the full command follows this list.
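As a concrete sketch, using the master IP that appears later in this post and Flannel's default pod CIDR (both assumptions; substitute your own values), the command would look like this:

sudo kubeadm init --apiserver-advertise-address 192.168.1.100 --pod-network-cidr "10.244.0.0/16" --upload-certs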
Once kubeadm init completes, its output gives us two things. First, the join command we'll need to use to add a node to the cluster (copy that, we'll use it in further steps). Secondly, it will generate a kube-config file to be used with kubectl, along with the commands to run to use kubectl with this kube-config as a non-root user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
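If you're working as root, the kubeadm output suggests an alternative: point kubectl straight at the admin kubeconfig instead of copying it:

export KUBECONFIG=/etc/kubernetes/admin.conf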
We can now check that the API server responds (the master will show as NotReady until we install a pod network in the next step):

kubectl get nodes
Installing a Pod network add-on
Kubernetes does not come with a built-in networking implementation. Instead, it delegates all networking to CNI plugins, which implement the logic to:
- Provide each pod with its own unique IP address
- Enable pod-to-pod communication, even between nodes
- Handle DNS resolution within the cluster
- Manage network policies (with some CNIs)
There are quite a few CNI plugins that we can use, the most popular ones being Flannel, Calico, or Cilium. In this tutorial, we’ll use Flannel (see the docs in the Flannel GitHub repo at https://github.com/flannel-io/flannel). To deploy it we just need to run one command:
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
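Flannel runs as a DaemonSet, so you can watch its pods come up; in recent releases of this manifest they live in the kube-flannel namespace (older versions used kube-system):

kubectl get pods -n kube-flannel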
Verify now that the master node is in Ready status:

kubectl get nodes
Join Worker Nodes
Run the command output from kubeadm init, like this:

sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
If you didn’t save that command, or the token has expired (by default tokens are valid for 24 hours), you can generate a fresh one on the master:

kubeadm token create --print-join-command

The join command looks like this:

kubeadm join 192.168.1.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
- 192.168.1.100:6443 - IP and port of your control plane node
- --token - Auth token to join
- --discovery-token-ca-cert-hash - Ensures the node is connecting to a trusted cluster
Verify Cluster
Back on the control plane:

kubectl get nodes

All three nodes should now be in the Ready state.
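As a final smoke test, you can schedule a workload and confirm it lands on a worker:

kubectl create deployment nginx --image=nginx
kubectl get pods -o wide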