
Install and configure Kubernetes cluster master nodes using kubeadm - Part 2

After preparing the NGINX load balancer for the Kubernetes API server in Configure Nginx Load Balancer for the Kubernetes API Server - Part 1, it is time to use this load balancer in the Kubernetes cluster. I will set up 3 master nodes in the control plane. Before deploying the master nodes with kubeadm, I prepare the servers and install the required packages with the command set below.

You can view step-by-step screenshots and descriptions of the commands below in another article - How to install kubernetes master control-plane on ubuntu Part 1.

#Become root on the system
sudo su -

#Update and upgrade Ubuntu OS
apt-get update -y && apt-get upgrade -y

#Disable swap (Kubernetes reports errors and warnings if swap is enabled)
swapoff -a
#vim /etc/fstab
sed -i 's/.* none.* swap.* sw.*/#&/' /etc/fstab
#sudo sed -i '/.* none.* swap.* sw.*/s/^#//' /etc/fstab
cat /etc/fstab

#Install required packages
sudo apt-get install curl apt-transport-https vim wget ca-certificates gnupg lsb-release -y

#Make bridged network traffic visible to Kubernetes (br_netfilter)
lsmod | grep br_netfilter
modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables=1
lsmod | grep br_netfilter
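
#Optional extra step (not in the original command set): persist the module load
#and sysctl setting across reboots
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system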

#Docker Installation and configuration
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update -y
sudo apt-get install docker-ce docker-ce-cli containerd.io -y

mkdir -p /etc/docker

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

systemctl enable docker
systemctl restart docker
systemctl status docker | cat
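
#Optional check (not in the original command set): confirm Docker is using the systemd cgroup driver
docker info 2>/dev/null | grep -i "cgroup driver"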

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl restart containerd
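
#Optional extra step (not in the original command set, verify against your containerd
#version): switch the generated containerd config to the systemd cgroup driver so it
#matches the Docker/kubelet configuration above
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd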

#Kubernetes k8s Installation
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
apt-get update -y 
apt-get install kubectl kubeadm kubelet kubernetes-cni -y

#Allow the Kubernetes API server port through the firewall
sudo ufw allow 6443
sudo ufw allow 6443/tcp
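
#Optional extra step (not in the original command set, ports taken from the Kubernetes
#documentation): open the remaining control-plane ports if ufw is enforcing
sudo ufw allow 2379:2380/tcp   #etcd server client API
sudo ufw allow 10250/tcp       #kubelet API
sudo ufw allow 10257/tcp       #kube-controller-manager
sudo ufw allow 10259/tcp       #kube-scheduler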

Complete articles in this series:
Configure Nginx Load Balancer for the Kubernetes API Server - Part 1
Install and configure Kubernetes cluster master nodes using kubeadm - Part 2
Install and configure Kubernetes cluster worker nodes using kubeadm - Part 3

Once the above settings are configured and the packages are installed, initialize the first master node in the Kubernetes cluster with the kubeadm init command.

Define the --pod-network-cidr parameter with the argument 10.244.0.0/16 (this block is commonly used with the flannel CNI network plugin; you can use your own network segment, but you must then use the same IP space later in the CNI plugin configuration). Specify a stable IP address or DNS name for the control plane in the --control-plane-endpoint parameter. In my case the argument 192.168.34.60:6443 is the load balancer I configured for Kubernetes in my earlier article Configure Nginx Load Balancer for the Kubernetes API Server - Part 1.

--upload-certs uploads the control-plane certificates to the kubeadm-certs Secret. If you omit this parameter, the certificates will not be available when you later try to join other master nodes to the control plane. The --apiserver-cert-extra-sans parameter specifies optional extra Subject Alternative Names (SANs) for the API server serving certificate; these can be both IP addresses and DNS names. When you have multiple master nodes in the control plane, you should include the other master nodes' hostnames, DNS names, and IP addresses, as shown below.
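
If you prefer a configuration file over command-line flags, the same settings can be expressed as a kubeadm ClusterConfiguration. Below is a minimal sketch of an equivalent kubeadm-config.yaml (a like-for-like mapping of the flags used here; verify it against your kubeadm version), which you would pass with kubeadm init --config kubeadm-config.yaml --upload-certs.

cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.25.2
controlPlaneEndpoint: "192.168.34.60:6443"
networking:
  podSubnet: "10.244.0.0/16"
apiServer:
  certSANs:
  - "*.vcloud-lab.com"
  - "127.0.0.1"
  - "k8smaster01"
  - "192.168.34.61"
  - "k8smaster02"
  - "192.168.34.62"
  - "k8smaster03"
  - "192.168.34.63"
EOF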

[Screenshot: kubeadm init output on the first master node]

root@k8smaster01:~#
root@k8smaster01:~# kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint 192.168.34.60:6443 --upload-certs \
>     --apiserver-cert-extra-sans=*.vcloud-lab.com,127.0.0.1,k8smaster01,192.168.34.61,k8smaster02,192.168.34.62,k8smaster03,192.168.34.63
[init] Using Kubernetes version: v1.25.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [*.vcloud-lab.com k8smaster01 k8smaster02 k8smaster03 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.34.61 192.168.34.60 127.0.0.1 192.168.34.62 192.168.34.63]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8smaster01 localhost] and IPs [192.168.34.61 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8smaster01 localhost] and IPs [192.168.34.61 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.027646 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
abafd79e05f1089152ea4921eb38d5ccaf8c020adf3ce5bb0b3a9969962c9545
[mark-control-plane] Marking the node k8smaster01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8smaster01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: tpxbcw.sk0f43qcxzm5ky61
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.34.60:6443 --token tpxbcw.sk0f43qcxzm5ky61 \
        --discovery-token-ca-cert-hash sha256:031b7d3401ab07e651f93403b5364db16ea323cebf34a7772a473009ac5b1de3 \
        --control-plane --certificate-key abafd79e05f1089152ea4921eb38d5ccaf8c020adf3ce5bb0b3a9969962c9545

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.34.60:6443 --token tpxbcw.sk0f43qcxzm5ky61 \
        --discovery-token-ca-cert-hash sha256:031b7d3401ab07e651f93403b5364db16ea323cebf34a7772a473009ac5b1de3

This is my Kubernetes cluster architecture diagram. I have set up the first master node in the Kubernetes cluster; next I will set up the remaining master nodes in the control plane.

[Diagram: Kubernetes infrastructure in my home lab - architectural diagram]

Once the first master node is set up, I can verify the Kubernetes version through the REST API with curl, using the load balancer IP and port.

[Screenshot: Kubernetes version check through the load balancer with curl]

root@k8smaster01:~#
root@k8smaster01:~# curl https://192.168.34.60:6443/version -k
{
  "major": "1",
  "minor": "25",
  "gitVersion": "v1.25.2",
  "gitCommit": "5835544ca568b757a8ecae5c153f317e5736700e",
  "gitTreeState": "clean",
  "buildDate": "2022-09-21T14:27:13Z",
  "goVersion": "go1.19.1",
  "compiler": "gc",
  "platform": "linux/amd64"
}root@k8smaster01:~#
root@k8smaster01:~#

Here I am adding the second master node to the Kubernetes cluster control plane using the kubeadm join command. You can find the join command details in the output shown when you initialize the first master node, as in the screenshot above. The join token expires after 24 hours.

The --token parameter is used for both the discovery token and the TLS bootstrap token when those values are not provided separately; you can generate a token with the kubeadm token create command. --discovery-token-ca-cert-hash is used for token-based discovery to validate that the root CA public key matches this hash (format: "<type>:<value>"). The --control-plane parameter creates a new control-plane instance on this node, i.e. it adds the master node to the control plane. --certificate-key is used to decrypt the certificate secrets uploaded by init.
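
If you only need the CA certificate hash (for example to assemble a join command by hand), it can be recomputed on an existing master node from the cluster CA certificate with the standard openssl pipeline from the kubeadm documentation:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'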

[Screenshot: kubeadm join output on the second master node]

root@k8smaster02:~#   kubeadm join 192.168.34.60:6443 --token tpxbcw.sk0f43qcxzm5ky61 \
>         --discovery-token-ca-cert-hash sha256:031b7d3401ab07e651f93403b5364db16ea323cebf34a7772a473009ac5b1de3 \
>         --control-plane --certificate-key abafd79e05f1089152ea4921eb38d5ccaf8c020adf3ce5bb0b3a9969962c9545
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8smaster02 localhost] and IPs [192.168.34.62 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8smaster02 localhost] and IPs [192.168.34.62 127.0.0.1 ::1]
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [*.vcloud-lab.com k8smaster01 k8smaster02 k8smaster03 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.34.62 192.168.34.60 127.0.0.1 192.168.34.61 192.168.34.63]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node k8smaster02 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8smaster02 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

root@k8smaster02:~#

This is the third master node I am adding, as shown in the architecture diagram. I have prepared the system with the same settings and installed the same packages, but this time I created a bash script containing the preparation commands from the top of this article. You can download this bash script k8sinstall.sh here, or it is also available on github.com/janviudapi. To give the owner all permissions and everyone else read and execute, use chmod 755 k8sinstall.sh, then run it with sudo ./k8sinstall.sh.
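
Below is a condensed sketch of what such a preparation script can look like, assembled from the command set at the top of this article (the downloadable k8sinstall.sh may differ slightly):

#!/bin/bash
#k8sinstall.sh - prepare an Ubuntu host for kubeadm (sketch of the commands above)
apt-get update -y && apt-get upgrade -y
swapoff -a
sed -i 's/.* none.* swap.* sw.*/#&/' /etc/fstab
apt-get install -y curl apt-transport-https vim wget ca-certificates gnupg lsb-release
modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables=1
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get update -y
apt-get install -y docker-ce docker-ce-cli containerd.io
mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
systemctl enable docker && systemctl restart docker
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl restart containerd
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list
apt-get update -y
apt-get install -y kubectl kubeadm kubelet kubernetes-cni
ufw allow 6443/tcp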

[Screenshot: running the k8sinstall.sh preparation script]

In case the kubeadm join token has expired (it is valid for 24 hours), you can generate a new one with the command below to join another master node.

[Screenshot: regenerating the kubeadm join command with certificate key]

root@k8smaster02:~#
root@k8smaster02:~# echo $(kubeadm token create --print-join-command) --control-plane --certificate-key $(kubeadm init phase upload-certs --upload-certs | grep -vw -e certificate -e Namespace)
kubeadm join 192.168.34.60:6443 --token x0ef25.63d1tmqcrndsdta9 --discovery-token-ca-cert-hash sha256:031b7d3401ab07e651f93403b5364db16ea323cebf34a7772a473009ac5b1de3 --control-plane --certificate-key dc8e94c70d6708ffc960cc7f6a13e1155a4958040408a3cfdf0ceca7a94e86aa
root@k8smaster02:~#

Use the newly created kubeadm join command to add the third master node to the cluster.

[Screenshot: kubeadm join on the third master node]
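
On the third master node the join command then looks like this (token and certificate key taken from the regenerated output above; your values will differ):

kubeadm join 192.168.34.60:6443 --token x0ef25.63d1tmqcrndsdta9 \
        --discovery-token-ca-cert-hash sha256:031b7d3401ab07e651f93403b5364db16ea323cebf34a7772a473009ac5b1de3 \
        --control-plane --certificate-key dc8e94c70d6708ffc960cc7f6a13e1155a4958040408a3cfdf0ceca7a94e86aa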

Use any of the master nodes in the Kubernetes cluster to export the KUBECONFIG configuration. kubeconfig files are used to organize information about clusters, users, namespaces, and authentication mechanisms. View all the nodes with the kubectl get nodes command; all the nodes are in NotReady status. To fix this, you must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.

I am choosing the tigera-operator from Project Calico as the Kubernetes CNI plugin. I will create its resources with the kubectl tool, and then download the custom-resources.yaml file from the Calico project.

[Screenshot: kubectl get nodes and tigera-operator resource creation]

root@k8smaster03:~# export KUBECONFIG=/etc/kubernetes/admin.conf
root@k8smaster03:~#
root@k8smaster03:~# kubectl get nodes
NAME          STATUS     ROLES           AGE     VERSION
k8smaster01   NotReady   control-plane   3h11m   v1.25.2
k8smaster02   NotReady   control-plane   3h8m    v1.25.2
k8smaster03   NotReady   control-plane   20m     v1.25.2
root@k8smaster03:~#
root@k8smaster03:~# kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
root@k8smaster03:~#
root@k8smaster03:~# wget https://docs.projectcalico.org/manifests/custom-resources.yaml
--2022-10-08 17:13:10--  https://docs.projectcalico.org/manifests/custom-resources.yaml
Resolving docs.projectcalico.org (docs.projectcalico.org)... 2600:1f18:2489:8201:d278:9378:2114:f6e5, 2600:1f18:2489:8200:a007:6646:1f31:908c, 35.231.210.182, ...
Connecting to docs.projectcalico.org (docs.projectcalico.org)|2600:1f18:2489:8201:d278:9378:2114:f6e5|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 827 [text/yaml]
Saving to: ‘custom-resources.yaml’

custom-resources.yaml                           100%[=====================================================================================================>]     827  --.-KB/s    in 0s

2022-10-08 17:13:10 (32.4 MB/s) - ‘custom-resources.yaml’ saved [827/827]

root@k8smaster03:~#

Modify the custom-resources.yaml file to change the CIDR IP block (use the same Pod CIDR block we used while initiating kubeadm). I have automated this task with a sed command that changes the CIDR IP and saves the result to another YAML file.

[Screenshot: changing the CIDR in custom-resources.yaml with sed]

root@k8smaster03:~# cat custom-resources.yaml
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 192.168.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

---

# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}

root@k8smaster03:~#
root@k8smaster03:~# sed 's/\([0-9]\{1,\}\.\)\{3\}[0-9]\{1,\}/10.244.0.0/g' custom-resources.yaml > custom-resourcesconf.yaml
root@k8smaster03:~#
root@k8smaster03:~# cat custom-resources
custom-resourcesconf.yaml  custom-resources.yaml
root@k8smaster03:~# cat custom-resourcesconf.yaml
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

---

# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}

root@k8smaster03:~# 

Apply the new custom-resourcesconf.yaml file with the kubectl command tool. My three-master control-plane setup is now ready. Check the status of the nodes with kubectl get nodes; it takes some time for the nodes to reach Ready status. Additionally, you can check the status of all Pods in the Calico and kube-system namespaces. All pods are running and ready (it takes some time for all pods to become ready); note in particular that the coredns pods are now running with all containers ready.

[Screenshot: kubectl apply and node/pod status checks]

root@k8smaster03:~# kubectl apply -f custom-resourcesconf.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
root@k8smaster03:~#
root@k8smaster03:~# kubectl get nodes
NAME          STATUS     ROLES           AGE     VERSION
k8smaster01   NotReady   control-plane   3h49m   v1.25.2
k8smaster02   NotReady   control-plane   3h46m   v1.25.2
k8smaster03   NotReady   control-plane   58m     v1.25.2
root@k8smaster03:~# kubectl get nodes
NAME          STATUS   ROLES           AGE     VERSION
k8smaster01   Ready    control-plane   3h50m   v1.25.2
k8smaster02   Ready    control-plane   3h46m   v1.25.2
k8smaster03   Ready    control-plane   59m     v1.25.2
root@k8smaster03:~#
root@k8smaster03:~# kubectl get pods -A
NAMESPACE          NAME                                       READY   STATUS    RESTARTS       AGE
calico-apiserver   calico-apiserver-577c9b8b8-bd8fb           1/1     Running   0              23s
calico-apiserver   calico-apiserver-577c9b8b8-dhfvp           0/1     Running   0              23s
calico-system      calico-kube-controllers-85666c5b94-dmrxt   1/1     Running   0              80s
calico-system      calico-node-9p9r8                          1/1     Running   0              80s
calico-system      calico-node-ckg9c                          1/1     Running   0              80s
calico-system      calico-node-jqgh6                          1/1     Running   0              80s
calico-system      calico-typha-6d744588c9-m27t4              1/1     Running   0              80s
calico-system      calico-typha-6d744588c9-xw8vc              1/1     Running   0              79s
calico-system      csi-node-driver-58d4g                      2/2     Running   0              53s
calico-system      csi-node-driver-cmtjw                      2/2     Running   0              56s
calico-system      csi-node-driver-lkrnv                      2/2     Running   0              43s
kube-system        coredns-565d847f94-ggw2q                   1/1     Running   0              3h50m
kube-system        coredns-565d847f94-wbd6p                   1/1     Running   0              3h50m
kube-system        etcd-k8smaster01                           1/1     Running   2              3h50m
kube-system        etcd-k8smaster02                           1/1     Running   1              157m
kube-system        etcd-k8smaster03                           1/1     Running   0              59m
kube-system        kube-apiserver-k8smaster01                 1/1     Running   2              3h50m
kube-system        kube-apiserver-k8smaster02                 1/1     Running   1              3h46m
kube-system        kube-apiserver-k8smaster03                 1/1     Running   0              59m
kube-system        kube-controller-manager-k8smaster01        1/1     Running   3 (157m ago)   3h50m
kube-system        kube-controller-manager-k8smaster02        1/1     Running   1              3h47m
kube-system        kube-controller-manager-k8smaster03        1/1     Running   0              59m
kube-system        kube-proxy-fv49j                           1/1     Running   1              3h47m
kube-system        kube-proxy-m5l9b                           1/1     Running   0              3h50m
kube-system        kube-proxy-zfr5l                           1/1     Running   0              59m
kube-system        kube-scheduler-k8smaster01                 1/1     Running   3 (157m ago)   3h50m
kube-system        kube-scheduler-k8smaster02                 1/1     Running   1              3h47m
kube-system        kube-scheduler-k8smaster03                 1/1     Running   0              59m
tigera-operator    tigera-operator-6675dc47f4-4f87g           1/1     Running   0              36m
root@k8smaster03:~#
