Please note that the certificate key gives access to cluster-sensitive data. As a safeguard, the uploaded certificates are deleted after two hours. If you want to add another control plane node more than two hours after initializing the first control plane, run the command below to re-upload the certificates and generate a new decryption key.
```bash
sudo kubeadm init phase upload-certs --upload-certs
```

Sample output:
```
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
458b0e87a28080c4792333e2d1fdbe7c28ea216e72016998c0b04326a75579c8
```
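As the output shows, the certificates are staged in the **kubeadm-certs** Secret in the **kube-system** namespace. If you want to confirm that the Secret exists (kubeadm deletes it automatically once its associated token expires), you can inspect it from any node with kubectl access:

```bash
# Show the Secret that stages the encrypted control-plane certificates.
kubectl -n kube-system get secret kubeadm-certs
```

Print the join command: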
```bash
kubeadm token create --print-join-command
```

Sample output:
```
kubeadm join 192.168.122.254:6443 --token q7sc7n.snwhru3n8e3o9lsq --discovery-token-ca-cert-hash sha256:ac08ef4c66538dfbf86a9cd554399c3d979ff370dfc9ca9119ac4ec45fdd0691
```

The command to join another control plane node to the cluster then becomes:
```bash
sudo kubeadm join 192.168.122.254:6443 --token q7sc7n.snwhru3n8e3o9lsq --discovery-token-ca-cert-hash sha256:ac08ef4c66538dfbf86a9cd554399c3d979ff370dfc9ca9119ac4ec45fdd0691 --control-plane --certificate-key XXX
```

Where XXX is the certificate key printed by the **sudo kubeadm init phase upload-certs --upload-certs** command.
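If you would rather assemble the full control plane join command in one step, the two commands above can be combined. This is a minimal sketch, not an official kubeadm feature; it assumes the certificate key is the last line of the `upload-certs` output, as in the sample above:

```bash
# Capture the certificate key (last line of the upload-certs output) and the
# base join command, then print the complete control-plane join command.
CERT_KEY=$(sudo kubeadm init phase upload-certs --upload-certs | tail -n 1)
JOIN_CMD=$(kubeadm token create --print-join-command)
echo "sudo ${JOIN_CMD} --control-plane --certificate-key ${CERT_KEY}"
```

Sample cluster join command output: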
```
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[download-certs] Saving the certificates to the folder: "/etc/kubernetes/pki"
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master-02] and IPs [10.96.0.1 192.168.122.59 192.168.122.254]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-02] and IPs [192.168.122.59 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-02] and IPs [192.168.122.59 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.537305ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
{"level":"warn","ts":"2024-06-08T05:37:40.997002Z","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0006b9180/192.168.122.58:2379","attempt":0,"error":"rpc error: code = FailedPrecondition desc = etcdserver: can only promote a learner member which is in sync with leader"}
{"level":"warn","ts":"2024-06-08T05:37:41.494954Z","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0006b9180/192.168.122.58:2379","attempt":0,"error":"rpc error: code = FailedPrecondition desc = etcdserver: can only promote a learner member which is in sync with leader"}
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node master-02 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master-02 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
```

You can see that the type of cluster was auto-detected: **A new etcd member was added to the local/stacked etcd cluster**.

To start administering your cluster from the other control plane nodes, run the following as a regular user:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Run the same join command on each remaining control plane node, then install the kubeconfig so that you can administer the cluster as a regular user.
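Alternatively, if you are the root user, you can point kubectl at the admin kubeconfig directly instead of copying it:

```bash
# Root only: admin.conf is readable solely by root.
export KUBECONFIG=/etc/kubernetes/admin.conf
```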
Then run the command below to confirm that the nodes have been added to the cluster.
```bash
kubectl get nodes
```

Sample output:
```
NAME        STATUS   ROLES           AGE     VERSION
master-01   Ready    control-plane   13m     v1.30.1
master-02   Ready    control-plane   5m39s   v1.30.1
master-03   Ready    control-plane   18s     v1.30.1
```

As you can see, we now have three control plane nodes in the cluster.
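You can also verify that the API server answers through the shared control plane endpoint, **192.168.122.254:6443** in this setup. Note that this quick probe assumes the default kubeadm RBAC settings, which allow anonymous access to the health endpoints; your cluster may differ:

```bash
# Probe the load-balanced API endpoint; a healthy control plane returns "ok".
# -k skips TLS verification for this quick check only.
curl -k https://192.168.122.254:6443/healthz
```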
Similarly, check the control plane Pods in the **kube-system** namespace.

```bash
kubectl get pods -n kube-system
```
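Since the join output reported that a new etcd member was added to the stacked etcd cluster, you may also want to confirm that all three members are registered. The sketch below assumes the first control plane node is named **master-01**, so its etcd static Pod is **etcd-master-01**, and uses the standard kubeadm certificate paths:

```bash
# List the stacked etcd members from inside the etcd static Pod.
kubectl -n kube-system exec etcd-master-01 -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list
```

All three control plane nodes should appear in the list as started members.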