
Setup HAProxy for Ingress Controller Kubernetes Cluster

In an earlier article, Setup and deploy Ingress controller for Kubernetes on Bare Metal servers, I configured an Ingress controller. To get the web URLs working in the browser I had added host entries to the local /etc/hosts file, but every time I tried one of the DNS URLs in the browser it took me to the first entry in the hosts file (a Kubernetes worker node). I wanted a proper load balancer plus reverse proxy in front of my web servers to serve the requests, so I set up an HAProxy server as a reverse proxy.

I added a host entry for the HAProxy server in the DNS server for multiple web URLs (blue and green). When a user connects to those URLs from a browser, HAProxy forwards the request to the backend worker nodes, where the Ingress controller intercepts it and applies the Ingress rules that serve the correct webpage in the user's browser.

[Diagram: Kubernetes Ingress controller with HAProxy reverse proxy configuration]

Below is the Kubernetes cluster configuration set up in my lab:
Configure Nginx Load Balancer for the Kubernetes API Server - Part 1
Install and configure Kubernetes cluster master nodes using kubeadm - Part 2
Install and configure Kubernetes cluster worker nodes using kubeadm - Part 3

Download the Kubernetes YAML manifest files below for deploying the sample blue and green projects.

deployment.yaml | service.yaml | ingress.yaml

You can download the complete project zip here, or it is also available at github.com/janviudapi.

For the demo I am deploying two Kubernetes Deployment resources named green and blue. The deployment pods run the NGINX image as web servers and serve two colored webpages that correspond to the deployment/pod names.
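
For reference, a deployment.yaml roughly along these lines produces the two Deployments used here. It is only a sketch (the replica count and plain nginx image tag are assumptions), but it matches the nginx image and the app=blue/app=green selectors visible in the kubectl output further down:

# deployment.yaml (sketch) - two NGINX Deployments, one per color
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blue
  template:
    metadata:
      labels:
        app: blue
    spec:
      containers:
      - name: blue
        image: nginx    # the blue webpage content is assumed to be added separately
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: green
spec:
  replicas: 1
  selector:
    matchLabels:
      app: green
  template:
    metadata:
      labels:
        app: green
    spec:
      containers:
      - name: green
        image: nginx    # same image; only the served page content differs
        ports:
        - containerPort: 80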


root@k8smaster01:~/project#
root@k8smaster01:~/project# ls
deployment.yaml ingress.yaml service.yaml
root@k8smaster01:~/project#
root@k8smaster01:~/project# kubectl apply -f deployment.yaml
deployment.apps/green created
deployment.apps/blue created
root@k8smaster01:~/project#

Once the deployment pods are created in the Kubernetes environment, verify them and create a service for each deployment with the corresponding name. Finally, create the Ingress resource, as you can see in the output below. There are two Ingress rules, for the URLs green.example.com and blue.example.com, which point to the respective services, and the services in turn point to the respective deployment pods.
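
For reference, service.yaml and ingress.yaml roughly along these lines produce the objects shown in the output below. This is a sketch reconstructed from that output (ClusterIP services blue-svc/green-svc on port 80, an Ingress named ingress with the kubernetes.io/ingress.class: nginx annotation and one host rule per URL); the downloaded files are authoritative:

# service.yaml (sketch) - one ClusterIP service per deployment
apiVersion: v1
kind: Service
metadata:
  name: blue-svc
spec:
  selector:
    app: blue
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: green-svc
spec:
  selector:
    app: green
  ports:
  - port: 80
    targetPort: 80

# ingress.yaml (sketch) - one host rule per URL, handled by the NGINX Ingress controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  labels:
    name: color-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: green.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: green-svc
            port:
              number: 80
  - host: blue.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blue-svc
            port:
              number: 80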


root@k8smaster01:~/project# kubectl get deployment -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
blue    1/1     1            1           18s   blue         nginx    app=blue
green   1/1     1            1           18s   green        nginx    app=green
root@k8smaster01:~/project#
root@k8smaster01:~/project# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP              NODE          NOMINATED NODE   READINESS GATES
blue-74d688555d-kz575    1/1     Running   0          30s   10.244.218.43   k8sworker01   <none>           <none>
green-7dd977478c-gvtsw   1/1     Running   0          30s   10.244.53.187   k8sworker03   <none>           <none>
root@k8smaster01:~/project#
root@k8smaster01:~/project# kubectl apply -f service.yaml
service/green-svc created
service/blue-svc created
root@k8smaster01:~/project#
root@k8smaster01:~/project# kubectl get service -o wide
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE   SELECTOR
blue-svc     ClusterIP   10.103.37.43   <none>        80/TCP    13s   app=blue
green-svc    ClusterIP   10.109.11.76   <none>        80/TCP    13s   app=green
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   18d   <none>
root@k8smaster01:~/project#
root@k8smaster01:~/project# kubectl apply -f ingress.yaml
ingress.networking.k8s.io/ingress configured
root@k8smaster01:~/project#
root@k8smaster01:~/project# kubectl describe ingress ingress
Name:             ingress
Labels:           name=color-ingress
Namespace:        default
Address:
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host               Path  Backends
  ----               ----  --------
  green.example.com
                     /   green-svc:80 (10.244.53.187:80)
  blue.example.com
                     /   blue-svc:80 (10.244.218.43:80)
Annotations:         kubernetes.io/ingress.class: nginx
Events:
  Type    Reason          Age                From                      Message
  ----    ------          ----               ----                      -------
  Normal  AddedOrUpdated  13s (x4 over 13m)  nginx-ingress-controller  Configuration for default/ingress was added or updated
  Normal  AddedOrUpdated  13s (x4 over 13m)  nginx-ingress-controller  Configuration for default/ingress was added or updated
  Normal  AddedOrUpdated  13s (x4 over 13m)  nginx-ingress-controller  Configuration for default/ingress was added or updated
root@k8smaster01:~/project#
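
Before putting HAProxy in front, the Ingress routing can be spot-checked directly against any worker node where the NGINX Ingress controller listens on port 80 (the worker IPs are the same ones used later in the HAProxy backend; curl's Host header stands in for DNS). This is just an optional verification step:

# run from any machine that can reach the worker nodes
curl -H "Host: green.example.com" http://192.168.34.66/
curl -H "Host: blue.example.com"  http://192.168.34.66/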

I have configured my project on the Kubernetes cluster. Now I will log in to the HAProxy server, which I will configure as a reverse proxy. I updated and upgraded the server using apt-get commands. Once the upgrade completed, I installed HAProxy using apt.


ubuntu@haproxy:~$
ubuntu@haproxy:~$ sudo su -
root@haproxy:~#
root@haproxy:~# apt-get update
Hit:1 http://us.archive.ubuntu.com/ubuntu focal InRelease
Hit:2 http://us.archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:3 http://us.archive.ubuntu.com/ubuntu focal-backports InRelease
Hit:4 http://security.ubuntu.com/ubuntu focal-security InRelease
Reading package lists... Done
root@haproxy:~#
root@haproxy:~# apt-get upgrade -y
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
Try Ubuntu Pro beta with a free personal subscription on up to 5 machines.
Learn more at https://ubuntu.com/pro
The following packages have been kept back:
  fwupd libfwupd2 libfwupdplugin5
0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.
root@haproxy:~#
root@haproxy:~# apt-get install haproxy -y
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
  vim-haproxy haproxy-doc
The following NEW packages will be installed:
  haproxy
0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded.
Need to get 1,538 kB of archives.
After this operation, 3,320 kB of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu focal-updates/main amd64 haproxy amd64 2.0.29-0ubuntu1 [1,538 kB]
Fetched 1,538 kB in 0s (5,655 kB/s)
Selecting previously unselected package haproxy.
(Reading database ... 184135 files and directories currently installed.)
Preparing to unpack .../haproxy_2.0.29-0ubuntu1_amd64.deb ...
Unpacking haproxy (2.0.29-0ubuntu1) ...
Setting up haproxy (2.0.29-0ubuntu1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/haproxy.service → /lib/systemd/system/haproxy.service.
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for rsyslog (8.2001.0-1ubuntu1.3) ...
Processing triggers for systemd (245.4-4ubuntu3.18) ...
root@haproxy:~#

Once HAProxy is installed, edit /etc/haproxy/haproxy.cfg and add the worker node IP/hostname information in frontend and backend sections at the bottom of the configuration file, as shown below.


root@haproxy:~# vim /etc/haproxy/haproxy.cfg
root@haproxy:~# cat /etc/haproxy/haproxy.cfg
global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

frontend Local_Server
    bind *:80
    mode http
    default_backend ingress

backend ingress
    mode http
    balance roundrobin
    server ingress1 192.168.34.66:80
    server ingress2 192.168.34.67:80
    server ingress3 192.168.34.68:80
root@haproxy:~#

Once the haproxy.cfg file is updated, restart the haproxy daemon service and check its status. If there is a misspelling or an incorrect configuration in the cfg file, the service will not start (Error: Job for haproxy.service failed because the control process exited with error code. See "systemctl status haproxy.service" and "journalctl -xe" for details.). The ifconfig command that follows shows the server's IP address, which I will add to the DNS server.
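
A quick way to catch such mistakes before restarting is to run HAProxy in check-only mode against the configuration file; it parses haproxy.cfg and reports any errors without touching the running service:

# validate the configuration file syntax before restarting the service
root@haproxy:~# haproxy -c -f /etc/haproxy/haproxy.cfg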


root@haproxy:~#
root@haproxy:~# systemctl restart haproxy
root@haproxy:~#
root@haproxy:~# systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
     Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2022-10-26 17:25:47 IST; 20s ago
       Docs: man:haproxy(1)
             file:/usr/share/doc/haproxy/configuration.txt.gz
    Process: 40376 ExecStartPre=/usr/sbin/haproxy -Ws -f $CONFIG -c -q $EXTRAOPTS (code=exited, status=0/SUCCESS)
   Main PID: 40378 (haproxy)
      Tasks: 3 (limit: 4572)
     Memory: 34.2M
     CGroup: /system.slice/haproxy.service
             ├─40378 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
             └─40381 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock

Oct 26 17:25:47 haproxy systemd[1]: Starting HAProxy Load Balancer...
Oct 26 17:25:47 haproxy haproxy[40378]: Proxy Local_Server started.
Oct 26 17:25:47 haproxy haproxy[40378]: Proxy Local_Server started.
Oct 26 17:25:47 haproxy haproxy[40378]: Proxy ingress started.
Oct 26 17:25:47 haproxy haproxy[40378]: Proxy ingress started.
Oct 26 17:25:47 haproxy haproxy[40378]: [NOTICE] 298/172547 (40378) : New worker #1 (40381) forked
Oct 26 17:25:47 haproxy systemd[1]: Started HAProxy Load Balancer.
root@haproxy:~#
root@haproxy:~# ifconfig ens33
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.34.70  netmask 255.255.255.0  broadcast 192.168.34.255
        inet6 fe80::250:56ff:fe82:f872  prefixlen 64  scopeid 0x20<link>

I added two entries, for blue.example.com and green.example.com, to my DNS server and tested the URLs in the browser; they work as expected.
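
If a DNS server is not available, the same test can be done by pointing both hostnames at the HAProxy address (192.168.34.70 in this lab) in the client's hosts file, or by bypassing name resolution with curl's Host header; this is only a client-side substitute for the DNS entries described above:

# temporary name resolution on the test machine (e.g. /etc/hosts)
192.168.34.70   blue.example.com   green.example.com

# or test without touching DNS/hosts at all
curl -H "Host: blue.example.com"  http://192.168.34.70/
curl -H "Host: green.example.com" http://192.168.34.70/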

[Screenshot: DNS server entries for blue.example.com and green.example.com and the webpages loading in the browser]
