# Load Balancer
Red Hat OpenShift Container Platform 4.9 uses an external load balancer to communicate from outside the cluster with services running inside the cluster. This section assumes that a load balancer is available for use within the deployment environment. This solution was developed using HAProxy, an open source load balancer, deployed on a single virtual machine.
This section covers its configuration. In a production environment, Hewlett Packard Enterprise recommends the use of an enterprise load balancing solution such as F5 Networks BIG-IP and its associated products.
# Install and Configure HAProxy
This section covers configuring HAProxy by using an Ansible playbook.
# Procedure
Copy the SSH keys from the installer machine to the HAProxy machine by running the following command:
> ssh-copy-id root@haproxy_machine_ip
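To confirm that key-based authentication works, run a simple remote command; it should return the HAProxy machine's hostname without prompting for a password:
> ssh root@haproxy_machine_ip hostname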
Navigate to the $BASE_DIR/haproxy directory.
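For example, assuming the $BASE_DIR variable is set for your installation:
> cd $BASE_DIR/haproxy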
Update the hosts file with the bootstrap, master, and worker details:
# Change the FQDNs to match your environment
[lb]
lb01 ansible_host=20.x.x.x ansible_user=<installermachine_username>
[boot]
ocpboot ansible_host=<fqdn>
# e.g., ocpboot.tennet.com
[master]
ocpmaster1 ansible_host=<fqdn>
ocpmaster2 ansible_host=<fqdn>
ocpmaster3 ansible_host=<fqdn>
# e.g., ocpmaster.tennet.com
[worker]
ocpworker1 ansible_host=<fqdn>
ocpworker2 ansible_host=<fqdn>
ocpworker3 ansible_host=<fqdn>
# e.g., ocpworker.tennet.com
[cluster:children]
lb
boot
master
worker
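Before executing the playbook, connectivity from the installer machine to the load balancer host can be verified with the Ansible ping module:
> ansible -i hosts lb -m ping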
Execute the Ansible playbook to configure HAProxy:
> ansible-playbook -i hosts haproxy.yaml
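After the playbook completes, the HAProxy service can be checked on the load balancer machine to confirm that it is running:
> systemctl status haproxy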
The playbook makes the following entries in the haproxy.cfg file.
Sample haproxy.cfg file:
#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
#backend static
# balance roundrobin
# server static 127.0.0.1:4331 check
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
frontend ocp4-kubernetes-api-server
    bind *:6443
    default_backend ocp4-kubernetes-api-server
    mode tcp
    option tcplog

backend ocp4-kubernetes-api-server
    mode tcp
    balance source
    server bootstrap bootstrap.ocp.pxelocal.local:6443 check
    server master01 master01.ocp.pxelocal.local:6443 check
    server master02 master02.ocp.pxelocal.local:6443 check
    server master03 master03.ocp.pxelocal.local:6443 check
frontend ocp4-machine-config-server
    bind *:22623
    default_backend ocp4-machine-config-server
    mode tcp
    option tcplog

backend ocp4-machine-config-server
    balance source
    mode tcp
    server bootstrap bootstrap.ocp.pxelocal.local:22623 check
    server master01 master01.ocp.pxelocal.local:22623 check
    server master02 master02.ocp.pxelocal.local:22623 check
    server master03 master03.ocp.pxelocal.local:22623 check
frontend ocp4-router-http
    bind *:80
    default_backend ocp4-router-http
    mode tcp
    option tcplog

backend ocp4-router-http
    balance source
    mode tcp
    server worker01 worker01.ocp.pxelocal.local:80 check
    server worker02 worker02.ocp.pxelocal.local:80 check
    server worker03 worker03.ocp.pxelocal.local:80 check
    server worker04 worker04.ocp.pxelocal.local:80 check
    # If the cluster is initially created with only master nodes and the
    # worker nodes are added later, list the master nodes in this section
    # instead of the worker nodes. Once all the worker nodes have joined the
    # cluster, update this configuration to use the worker nodes.
    # server master01 master01.ocp.pxelocal.local:80 check
    # server master02 master02.ocp.pxelocal.local:80 check
    # server master03 master03.ocp.pxelocal.local:80 check
frontend ocp4-router-https
    bind *:443
    default_backend ocp4-router-https
    mode tcp
    option tcplog

backend ocp4-router-https
    balance source
    mode tcp
    server worker01 worker01.ocp.pxelocal.local:443 check
    server worker02 worker02.ocp.pxelocal.local:443 check
    server worker03 worker03.ocp.pxelocal.local:443 check
    server worker04 worker04.ocp.pxelocal.local:443 check
    # If the cluster is initially created with only master nodes and the
    # worker nodes are added later, list the master nodes in this section
    # instead of the worker nodes. Once all the worker nodes have joined the
    # cluster, update this configuration to use the worker nodes.
    # server master01 master01.ocp.pxelocal.local:443 check
    # server master02 master02.ocp.pxelocal.local:443 check
    # server master03 master03.ocp.pxelocal.local:443 check
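When haproxy.cfg is edited by hand later (for example, to swap the commented master entries for worker entries in the router backends), the syntax can be validated and the service reloaded. The file path below assumes the default HAProxy location:
> haproxy -c -f /etc/haproxy/haproxy.cfg
> systemctl reload haproxy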
NOTE
The load balancer configuration must contain values that match the installation environment.
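As a final check, confirm on the load balancer machine that HAProxy is listening on the expected frontend ports (6443, 22623, 80, and 443):
> ss -tlnp | grep haproxy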