The purpose of a load balancer is to spread the workload among different servers or applications. It can be set up on both physical and virtual infrastructure. The load balancer tracks the accessibility and availability of pods through the Kubernetes Endpoints API. When it receives a request for a particular Kubernetes Service, the Kubernetes load balancer distributes the request, sequentially or round-robin, among the appropriate Kubernetes pods for that Service.
There are two kinds of load balancers:
L4 load balancers, also known as network load balancers
These handle layer 4 data at the network and transport (TCP/UDP) level. They have no awareness of application details such as content type, cookies, or header location, so they forward traffic based only on network-layer data.
L7 load balancers, also known as application load balancers
Unlike L4 load balancers, this kind of load balancer routes traffic using application-layer information. These load balancers inspect more of the data and make decisions based on more information, for example the HTTP, HTTPS and SSL protocols.
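As a rough illustration of the difference in Kubernetes terms (a minimal sketch, not part of my lab configuration): a Service of type LoadBalancer works at L4 and simply forwards TCP/UDP traffic, while an Ingress rule works at L7 and routes on HTTP host and path. The names web-svc, web-ingress and example.local below are hypothetical.

# L4: a Service of type LoadBalancer forwards TCP traffic on port 80 to matching pods
apiVersion: v1
kind: Service
metadata:
  name: web-svc            # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
# L7: an Ingress routes requests based on HTTP host and path to a backend Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress        # hypothetical name
spec:
  rules:
  - host: example.local    # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80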
In an earlier article I covered the Kubernetes Ingress Controller setup and deployed an Ingress controller for Kubernetes on bare metal servers.
At the time of writing this blog, MetalLB is the only load balancer implementation supported for a bare metal Kubernetes cluster. When you use a LoadBalancer type Kubernetes Service in the cloud, the cloud provider deploys its own load balancer resource for Kubernetes services.
To deploy the MetalLB load balancer in a Kubernetes cluster, you can find instructions at https://metallb.universe.tf/. Use the GitHub URL below to deploy the Kubernetes resources and custom resource definitions (CRDs) in the cluster.
ubuntu@k8smaster01:~$ sudo su -
root@k8smaster01:~#
root@k8smaster01:~# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
namespace/metallb-system created
customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
serviceaccount/controller created
serviceaccount/speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
secret/webhook-server-cert created
service/webhook-service created
deployment.apps/controller created
daemonset.apps/speaker created
validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created
root@k8smaster01:~#
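Before moving on, it is worth confirming that the MetalLB components started correctly. A quick check (the pod names and ages will differ in your cluster) is to list the pods in the metallb-system namespace; you should see one controller pod and one speaker pod per node in Running state.

root@k8smaster01:~# kubectl get pods -n metallb-system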

Next, create an IPAddressPool resource. For the IP addresses, choose and assign an unassigned IP address range in the yaml file configuration. In my lab environment I am using the 192.168.34.0/24 IP address block. From this block I will carve out and use the 192.168.34.230 to 192.168.34.240 range, and I will make sure it does not get used or assigned to other systems in my network. Apply the configuration.
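One way to sanity-check that the chosen range is really unused (an optional step, assuming nmap is available on a machine in the same subnet) is a ping scan of the range; any host that answers is already using one of the addresses.

nmap -sn 192.168.34.230-240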
root@k8smaster01:~# cat ippool.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: metallb
  namespace: metallb-system
spec:
  addresses:
  - 192.168.34.230-192.168.34.240
root@k8smaster01:~#
root@k8smaster01:~# kubectl apply -f ippool.yaml
ipaddresspool.metallb.io/metallb created
root@k8smaster01:~#
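To verify that MetalLB accepted the pool, you can query the new custom resource (the output columns may vary between MetalLB versions):

root@k8smaster01:~# kubectl get ipaddresspools.metallb.io -n metallb-system
root@k8smaster01:~# kubectl describe ipaddresspool metallb -n metallb-system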

Next, use the IP address pool name in an L2Advertisement resource yaml file to advertise the IP addresses on the network infrastructure. Apply this yaml file configuration and create the resource with the kubectl tool, as shown below.
For reference, below is the Kubernetes cluster configuration set up in my lab:
Configure Nginx Load Balancer for the Kubernetes API Server - Part 1
Install and configure Kubernetes cluster master nodes using kubeadm - Part 2
Install and configure Kubernetes cluster worker nodes using kubeadm - Part 3
root@k8smaster01:~# cat lbl2adv.yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2lbadv
  namespace: metallb-system
spec:
  ipAddressPools:
  - metallb
root@k8smaster01:~#
root@k8smaster01:~# kubectl apply -f lbl2adv.yaml
l2advertisement.metallb.io/l2lbadv created
root@k8smaster01:~#
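With the advertisement in place, MetalLB's speaker pods answer ARP requests for the pool addresses on the local network (layer 2 mode). You can confirm the resource exists with:

root@k8smaster01:~# kubectl get l2advertisements.metallb.io -n metallb-system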

The MetalLB load balancer is now ready in my bare metal Kubernetes cluster environment. It's time to test it. I will deploy a few resources: a Namespace, a Deployment (an Nginx image with custom static web pages), and a Service of type LoadBalancer pointing to the deployment.
Download this project deployment yaml here; it is also available at github.com/janviudapi.
root@k8smaster01:~# kubectl apply -f lbtest.yaml
namespace/green-project created
deployment.apps/green created
service/green-svc created
root@k8smaster01:~#

#lbtest.yaml with Kubernetes resources
#Namespace, Deployment and Services
#This will create a new Namespace, Deployment and lb Service will be created under this
apiVersion: v1
kind: Namespace
metadata:
  name: green-project
  labels:
    app: green
---
#This will create Deployment with 2 pod replicas of NGINX images - Labels: Green
apiVersion: apps/v1
kind: Deployment
metadata:
  name: green
  labels:
    app: green
  namespace: green-project
spec:
  replicas: 2
  selector:
    matchLabels:
      app: green
  template:
    metadata:
      labels:
        app: green
    spec:
      volumes:
      - name: websitedata
        emptyDir: {}
      initContainers:
      - name: webcontent
        image: busybox
        volumeMounts:
        - name: websitedata
          mountPath: /websitedata
        command: ["/bin/sh"]
        args: ["-c", 'echo "Welcome to http://vcloud-lab.com Green_Application" > /websitedata/index.html']
      containers:
      - name: green
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: websitedata
          mountPath: "/usr/share/nginx/html"   #For nginx image path - "/usr/share/nginx/html"
          readOnly: true
---
#This will create a service with LoadBalancer type with label selector green
apiVersion: v1
kind: Service
metadata:
  name: green-svc
  namespace: green-project
spec:
  selector:
    app: green
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Once you get the Service information using the kubectl tool, you can see that an External-IP is assigned to the service and the TYPE is LoadBalancer. In the absence of a load balancer such as MetalLB on bare metal, the External-IP would show pending or none. The load balancer assigns the first IP address from the address pool mentioned above.
root@k8smaster01:~#
root@k8smaster01:~# kubectl get all -n green-project
NAME                         READY   STATUS    RESTARTS   AGE
pod/green-7dd977478c-nrcbp   1/1     Running   0          89s
pod/green-7dd977478c-xpvtz   1/1     Running   0          89s

NAME                TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
service/green-svc   LoadBalancer   10.110.135.177   192.168.34.230   80:32193/TCP   89s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/green   2/2     2            2           89s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/green-7dd977478c   2         2         2       89s
root@k8smaster01:~#
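MetalLB picked 192.168.34.230, the first free address in the pool. If a particular address from the pool is required, the Service can request it explicitly; a minimal sketch, assuming 192.168.34.235 is free in the pool (recent MetalLB releases support the metallb.universe.tf/loadBalancerIPs annotation, and the older spec.loadBalancerIP field also works):

apiVersion: v1
kind: Service
metadata:
  name: green-svc
  namespace: green-project
  annotations:
    metallb.universe.tf/loadBalancerIPs: 192.168.34.235   # assumption: this address is free in the pool
spec:
  selector:
    app: green
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80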

Test the External-IP in a browser from any other system. You will get the response, which confirms the load balancer is working correctly.
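You can also test it from the command line; a request to the external IP assigned above should return the custom page written by the busybox init container:

root@k8smaster01:~# curl http://192.168.34.230
Welcome to http://vcloud-lab.com Green_Application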