This document provides a step-by-step guide to deploying a Kubernetes cluster using Kubeadm (version 1.21.11). The deployment process includes setting up the control plane node and worker nodes, and verifying the cluster’s functionality.
Preparation
Servers
HOST      IP             SYSTEM
master1   192.168.1.10   centos7
master2   192.168.1.11   centos7
node1     192.168.1.20   centos7
node2     192.168.1.21   centos7
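Each node should be able to resolve the hostnames of the others. Assuming the table above, a matching /etc/hosts fragment on every node would be:

```
192.168.1.10 master1
192.168.1.11 master2
192.168.1.20 node1
192.168.1.21 node2
```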
Server Port
Open the relevant firewall ports: on a cloud server, modify the security group settings; on a local server, update the firewall rules. For testing, you can allow all traffic or disable the firewall entirely.
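As a sketch, on CentOS 7 with firewalld the ports used by a kubeadm cluster could be opened like this (the port numbers follow the upstream Kubernetes defaults; adjust them to your environment):

```shell
# Control-plane node: API server, etcd, kubelet, scheduler, controller-manager
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250-10252/tcp
# Worker nodes: kubelet and the NodePort service range
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --reload
```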
To run Kubernetes, container runtime support is required, so we need to install a container runtime on the server. The most commonly used container runtime is Docker, but the officially recommended container runtime is Containerd.
Note: In Kubernetes version 1.24, support for dockershim was removed. If you are installing Kubernetes 1.24 or later, it is recommended to install Containerd.
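A minimal Containerd setup on CentOS 7 might look like the following (a sketch; the containerd.io package name assumes the Docker CE yum repository is configured, and enabling SystemdCgroup matches the kubelet's recommended systemd cgroup driver):

```shell
# Install containerd and generate its default configuration
yum install -y containerd.io
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
# Switch the runc runtime to the systemd cgroup driver
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl enable --now containerd
```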
Disable SELinux
# permanent (takes effect after reboot)
sed -i 's/enforcing/disabled/' /etc/selinux/config
# temporary
setenforce 0
Disable SWAP
# Some cloud images have swap turned off by default; check with the free command or by looking at /etc/fstab
# temporary
swapoff -a
# permanent
sed -ri 's/.*swap.*/#&/' /etc/fstab
In Kubernetes, there are two service proxy models: one based on iptables and another based on IPVS. When comparing the two, the performance of IPVS is significantly higher. However, to use IPVS, you need to manually load the IPVS modules.
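A common way to load the IPVS modules is sketched below. The module list is an assumption for CentOS 7's 3.10 kernel; on kernels 4.19 and later, nf_conntrack_ipv4 was renamed to nf_conntrack.

```shell
# Install the IPVS userland tools
yum install -y ipset ipvsadm
# Load the modules now and on every boot
cat > /etc/sysconfig/modules/ipvs.modules <<'EOF'
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod +x /etc/sysconfig/modules/ipvs.modules
/etc/sysconfig/modules/ipvs.modules
# Verify the modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
```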
systemctl start kubelet
systemctl enable kubelet
# It is normal for kubelet to report errors at this point, because the cluster has not been initialized yet.
systemctl status kubelet
Images
Get Image List
kubeadm config images list
kubeadm config images list --kubernetes-version=v1.21.11
kubeadm config images list --kubernetes-version=v1.21.11 --v=5
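Once you have the list, the images can be pulled in advance. The mirror repository shown below is an assumption for environments that cannot reach k8s.gcr.io directly:

```shell
kubeadm config images pull --kubernetes-version=v1.21.11
# Or, pull through a mirror registry:
kubeadm config images pull --kubernetes-version=v1.21.11 \
  --image-repository=registry.aliyuncs.com/google_containers
```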
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
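For a non-root user, the equivalent steps suggested by kubeadm's init output are:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```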
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
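For example, a CNI plugin such as Flannel can be applied with a single manifest (the URL below is illustrative; check the project's documentation for the version compatible with Kubernetes 1.21):

```shell
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# Watch the nodes transition to Ready once the network pods are running
kubectl get nodes -w
```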
You can now join any number of control-plane nodes by running the following command on each as root:
Please note that the certificate-key gives access to cluster sensitive data, keep it secret! As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
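The join command printed by kubeadm init has the following shape (the token and hash below are placeholders, not real values):

```shell
kubeadm join 192.168.1.10:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
# If the original token has expired, a new join command can be generated with:
kubeadm token create --print-join-command
```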
# Restart kube-proxy by deleting its pods; the DaemonSet recreates them automatically
kubectl get pod -n kube-system -o wide | grep kube-proxy
kubectl delete pod -n kube-system -l k8s-app=kube-proxy
# Verify that the IPVS rules are in place
ipvsadm -L -n
At this point the K8S cluster has been built. For cloud hosts, external access to the apiserver can be provided through an NLB; for local servers, Keepalived + HAProxy can be configured to proxy the apiserver.
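A minimal HAProxy TCP frontend/backend for the apiserver might look like this (a sketch; the IPs come from the server table above, while the bind port and balance algorithm are assumptions — run this on a dedicated load-balancer node, or bind a different port, to avoid conflicting with a local apiserver on 6443):

```
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend kube-apiserver-backend

backend kube-apiserver-backend
    mode tcp
    balance roundrobin
    server master1 192.168.1.10:6443 check
    server master2 192.168.1.11:6443 check
```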