
Kubeadm

Estimated time to read: 5 minutes

System Requirements

Before we begin the installation process, ensure you have the following prerequisites:

  • A minimum of three nodes (one master and two worker nodes) running either Red Hat Enterprise Linux 9 or CentOS 9.
  • Each node should have a minimum of 2GB RAM and 2 CPU cores.

Prerequisites Configuration

  • If you do not have DNS set up, add the following entries to the **/etc/hosts** file on each node for local name resolution.

    /etc/hosts
    # Kubernetes Cluster
    192.168.1.26    ofl-kubemaster
    192.168.1.27    ofl-kube-node-1
    192.168.1.28    ofl-kube-node-2
    

Replace these with your actual hostnames and IP addresses.
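
A quick loop can confirm that each name resolves on the node; this is a sketch using the example hostnames above, so substitute your own:

```shell
# Check that every cluster hostname resolves locally (run on each node).
# Hostnames here match the example /etc/hosts entries; replace with yours.
for host in ofl-kubemaster ofl-kube-node-1 ofl-kube-node-2; do
    getent hosts "$host" || echo "WARNING: $host does not resolve"
done
```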

  • Install Kernel Headers on each node

    • First, ensure that you have the appropriate kernel headers installed on your system (on each node).

      sudo dnf install kernel-devel-$(uname -r)
      
    • To load the kernel modules required by Kubernetes, use the **modprobe** command followed by the module names (on each node):

      sudo modprobe br_netfilter
      sudo modprobe ip_vs
      sudo modprobe ip_vs_rr
      sudo modprobe ip_vs_wrr
      sudo modprobe ip_vs_sh
      sudo modprobe overlay
      
    • Create a configuration file (as the root user on each node) to ensure these modules load at system boot

      cat > /etc/modules-load.d/kubernetes.conf << EOF
      br_netfilter
      ip_vs
      ip_vs_rr
      ip_vs_wrr
      ip_vs_sh
      overlay
      EOF
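
To confirm the modules are actually loaded (after the modprobe commands, or after a reboot), you can check **/proc/modules**; note that functionality built directly into the kernel will not appear there:

```shell
# Report the load status of each kernel module required by Kubernetes.
for mod in br_netfilter ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh overlay; do
    if grep -q "^${mod} " /proc/modules; then
        echo "${mod}: loaded"
    else
        echo "${mod}: not loaded"
    fi
done
```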
      
  • Configure Sysctl

    • Kubernetes relies on specific kernel parameters for pod networking and traffic forwarding. Set them (as the root user on each node) and apply them immediately with **sysctl --system**:

      cat > /etc/sysctl.d/kubernetes.conf << EOF
      net.ipv4.ip_forward = 1
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1
      EOF
      
      sysctl --system
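
To verify the parameters took effect, query them back; the fallback message below covers the case where a key is unavailable (the net.bridge.* keys only exist while br_netfilter is loaded):

```shell
# Each key should report 1 once `sysctl --system` has been applied.
for key in net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables; do
    val=$(sysctl -n "$key" 2>/dev/null) \
        && echo "$key = $val" \
        || echo "$key: unavailable (is br_netfilter loaded?)"
done
```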
      
  • Disabling Swap

    • Turn swap off, as Kubernetes does not support it (this takes effect immediately).

      sudo swapoff -a
      
    • Comment out the swap entry in **/etc/fstab** so the change persists across reboots.

      sudo sed -e '/swap/s/^/#/g' -i /etc/fstab
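
To double-check that swap is fully disabled, inspect /proc/meminfo; SwapTotal should read 0 kB:

```shell
# "SwapTotal: 0 kB" confirms swap is disabled on this node.
grep SwapTotal /proc/meminfo
```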
      
  • Install Container Runtime

    We’ll install containerd on each node. containerd is the container runtime responsible for managing and executing containers, the fundamental units of Kubernetes applications.

    • Add the Docker CE Repository

      sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
      
      sudo dnf makecache
      
    • Install the containerd.io package:

      sudo dnf install -y containerd.io
      
    • View the current containerd configuration

      cat /etc/containerd/config.toml
      
    • Run the following command to build out the containerd configuration file

      sudo sh -c "containerd config default > /etc/containerd/config.toml" ; cat /etc/containerd/config.toml
      
    • Using your preferred text editor, open the **/etc/containerd/config.toml** file and set the SystemdCgroup variable to true (SystemdCgroup = true):

      sudo vim /etc/containerd/config.toml
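
If you prefer a non-interactive edit, the same change can be made with sed; this is a sketch assuming the default layout generated by containerd config default, where the flag starts out as false:

```shell
# Flip SystemdCgroup to true in place, then show the resulting line.
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
grep 'SystemdCgroup' /etc/containerd/config.toml
```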
      
      sudo systemctl enable --now containerd.service
      
      sudo systemctl reboot
      
      sudo systemctl status containerd.service
      
  • Firewall rules

    • To allow the ports used by Kubernetes components through the firewall, execute the following commands (on each node):

      sudo firewall-cmd --zone=public --permanent --add-port=6443/tcp
      sudo firewall-cmd --zone=public --permanent --add-port=2379-2380/tcp
      sudo firewall-cmd --zone=public --permanent --add-port=10250/tcp
      sudo firewall-cmd --zone=public --permanent --add-port=10251/tcp
      sudo firewall-cmd --zone=public --permanent --add-port=10252/tcp
      sudo firewall-cmd --zone=public --permanent --add-port=10255/tcp
      sudo firewall-cmd --zone=public --permanent --add-port=5473/tcp
      

      sudo firewall-cmd --reload
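
After the reload, you can list the open ports to confirm the rules applied (the fallback message covers the case where firewalld is not running):

```shell
# Should list 6443/tcp, 2379-2380/tcp, 10250/tcp, and the rest.
sudo firewall-cmd --zone=public --list-ports 2>/dev/null \
    || echo "firewalld is not running"
```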
      
  • Kubernetes Repository

    • Add the Kubernetes repository (as the root user) to your package manager

      cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
      [kubernetes]
      name=Kubernetes
      baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
      enabled=1
      gpgcheck=1
      gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
      exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
      EOF
      
      dnf repolist
      
  • Install Kubernetes Packages

    sudo dnf makecache; sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
    
    sudo systemctl enable --now kubelet.service
    
  • Initializing Kubernetes Control Plane

    • Initialize the Kubernetes cluster (on the master node only) by running the "kubeadm init" command.

      sudo kubeadm init --pod-network-cidr=192.168.0.0/16 | tee bootstrap.txt
      
    • Run the following commands (as a regular user on the master node) to configure kubectl access to the cluster.

      mkdir -p $HOME/.kube
      
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
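
Once the kubeconfig is in place, a quick query confirms kubectl can reach the new control plane:

```shell
# Prints the control-plane endpoint; the fallback fires if the kubeconfig
# is missing or the API server is unreachable.
kubectl cluster-info 2>/dev/null || echo "kubectl cannot reach the cluster yet"
```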
      
  • Deploy Pod Network

    • To enable networking between pods across the cluster, deploy a pod network (on the master node). For example, install Calico by applying the Tigera operator manifest and then the Calico custom resources, whose default pod CIDR (192.168.0.0/16) matches the one passed to kubeadm init:

      kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
      kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml
    
  • Join Worker Nodes

    On the master node, print the join command, then run its output (with sudo) on each worker node:

    sudo kubeadm token create --print-join-command
    
  • List the nodes (on the master node) to confirm all nodes have joined; they may take a minute or two to report Ready

    kubectl get nodes