# Adding RHEL 8.9 worker nodes to RHOCP cluster using Ansible playbooks

The Lite Touch Installation (LTI) package includes Ansible playbooks with scripts to add the RHEL 8.9 worker nodes to the RHOCP cluster. You can use one of the following two methods to add the RHEL 8.9 worker nodes:

  • Run a consolidated playbook: This method includes a single playbook, site.yml, that contains a script to perform all the tasks for adding the RHEL 8.9 worker nodes to the existing RHOCP cluster. To run LTI using a consolidated playbook:
$ ansible-playbook -i inventory/hosts site.yml --ask-vault-pass

NOTE

The default password for the Ansible vault file is changeme. To replace it, see the rekey example after this list.

  • Run individual playbooks: This method includes multiple playbooks with scripts that enable you to deploy specific tasks for adding the RHEL 8.9 worker nodes to the existing RHOCP cluster. The playbooks in this method must be executed in a specific sequence to add the worker nodes.
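To replace the default vault password, you can rekey the vaulted file with the standard ansible-vault command. The file name below assumes the input.yaml file that is edited later in this section; adjust it if your vault file differs:

$ ansible-vault rekey input.yaml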

The following table includes the purpose of each playbook required for the deployment:

TABLE 9. Add RHEL 8.9 nodes using Ansible playbooks

| Playbook | Description |
| --- | --- |
| rhel8_os_deployment.yml | Contains the scripts to deploy the RHEL 8.9 OS on the worker nodes. |
| copy_ssh.yml | Contains the script to copy the SSH public key to the RHEL 8.9 worker nodes. |
| prepare_worker_nodes.yml | Contains the script to prepare the RHEL 8.9 worker nodes for addition to the cluster. |
| ntp.yml | Contains the script to configure NTP for time synchronization on the worker nodes. |
| openshift-ansible/playbooks/scaleup.yml | Contains the script to add the worker nodes to the RHOCP cluster. This playbook queries the master, generates and distributes new certificates for the new hosts, and then runs the configuration playbooks on the new hosts. |

To run individual playbooks, do one of the following:

  1. Edit the site.yml file and comment out all the playbooks except the ones that you want to execute.

For example, comment out the following lines in the site.yml file to deploy only the RHEL 8.9 OS on the worker nodes:

- import_playbook: playbooks/rhel8_os_deployment.yml

# - import_playbook: playbooks/copy_ssh.yml

# - import_playbook: playbooks/prepare_worker_nodes.yml

# - import_playbook: playbooks/ntp.yml

# - import_playbook: openshift-ansible/playbooks/scaleup.yml

OR

Run the individual YAML files using the following command:

$ ansible-playbook -i hosts playbooks/<yaml_filename>.yml --ask-vault-pass

For example, run the following YAML file to deploy RHEL 8.9 OS on the worker nodes:

$ ansible-playbook -i hosts playbooks/rhel8_os_deployment.yml --ask-vault-pass

For more information on executing individual playbooks, see the subsequent sections.

# Adding RHEL 8.9 worker nodes

This section describes how to add RHEL 8.9 worker nodes to an existing RHOCP cluster.

To add RHEL 8.9 worker nodes to the RHOCP cluster:

  1. Log in to the installer VM.

This installer VM was created as a KVM VM on one of the head nodes using the rhel8_installerVM.yml playbook. For more information, see the Creating RHEL 8 installer machine section.
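If you need to confirm that the installer VM is running before logging in, standard libvirt tooling on the head node can be used; this check is not part of the LTI playbooks:

$ virsh list --all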

  2. Navigate to the $BASE_DIR/worker_nodes/ directory:
cd $BASE_DIR/worker_nodes/

NOTE

$BASE_DIR refers to /opt/hpe-solutions-openshift/DL-LTI-Openshift/
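If $BASE_DIR is not already defined in your shell session, you can set it explicitly; the path is taken from the note above:

$ export BASE_DIR=/opt/hpe-solutions-openshift/DL-LTI-Openshift/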

Run the following command on the RHEL 8 installer VM to edit the vault input file:

ansible-vault edit input.yaml

The installation user should review the hosts file (located on the installer VM at $BASE_DIR/inventory/hosts):

vi inventory/hosts
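Optionally, you can also inspect the parsed inventory structure with the standard ansible-inventory command; this is generic Ansible tooling and not specific to the LTI package:

$ ansible-inventory -i inventory/hosts --graph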
  3. Copy the RHEL 8.9 DVD ISO to /usr/share/nginx/html/.
  4. Navigate to the $BASE_DIR/worker_nodes/ directory and run the following command:
$ sh setup.sh
  5. Add the worker nodes to the cluster using one of the following methods:
  • Run the following sequence of playbooks:
ansible-playbook -i inventory/hosts playbooks/rhel8_os_deployment.yml --ask-vault-pass

ansible-playbook -i inventory/hosts playbooks/copy_ssh.yml --ask-vault-pass

ansible-playbook -i inventory/hosts playbooks/prepare_worker_nodes.yml --ask-vault-pass

ansible-playbook -i inventory/hosts playbooks/ntp.yml --ask-vault-pass

ansible-playbook -i inventory/hosts openshift-ansible/playbooks/scaleup.yml --ask-vault-pass

OR

  • If you want to deploy the entire solution to add the worker nodes to the cluster, execute the following playbook:
$ ansible-playbook -i inventory/hosts site.yml --ask-vault-pass
  6. Once all the playbooks are executed successfully, check the status of the nodes using the following command:
$ oc get nodes

The following output is displayed:

NAME                    STATUS   ROLES           AGE   VERSION
master0.ocp.ngs.local   Ready    master,worker   3d    v1.28.9+8ca71f7
master1.ocp.ngs.local   Ready    master,worker   3d    v1.28.9+8ca71f7
master2.ocp.ngs.local   Ready    master,worker   3d    v1.28.9+8ca71f7
worker1.ocp.ngs.local   Ready    worker          1d    v1.28.9+8ca71f7
worker2.ocp.ngs.local   Ready    worker          1d    v1.28.9+8ca71f7
worker3.ocp.ngs.local   Ready    worker          1d    v1.28.9+8ca71f7
  7. Once the worker nodes are added to the cluster, set the mastersSchedulable parameter to false to ensure that the master nodes are not used to schedule pods.
  8. Edit the schedulers.config.openshift.io resource:
$ oc edit schedulers.config.openshift.io cluster

Configure the mastersSchedulable field.

apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  creationTimestamp: "2024-05-20T13:03:23Z"
  generation: 2
  name: cluster
  resourceVersion: "67299"
  selfLink: /apis/config.openshift.io/v1/schedulers/cluster
  uid: a636d30a-d377-11e9-88d4-0a60097bee62
spec:
  mastersSchedulable: false
  policy:
    name: ""
status: {}

NOTE

Set mastersSchedulable to true to allow the control plane (master) nodes to be schedulable, or to false to prevent workloads from being scheduled on them.
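If you prefer a non-interactive change, the same field can be set with a merge patch; this is standard oc usage rather than part of the LTI playbooks:

$ oc patch schedulers.config.openshift.io cluster --type merge -p '{"spec":{"mastersSchedulable":false}}'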

  9. Save the file to apply the changes, and verify the node roles:
$ oc get nodes

The following output is displayed:

NAME                    STATUS   ROLES    AGE   VERSION
master0.ocp.ngs.local   Ready    master   3d    v1.28.9+8ca71f7
master1.ocp.ngs.local   Ready    master   3d    v1.28.9+8ca71f7
master2.ocp.ngs.local   Ready    master   3d    v1.28.9+8ca71f7
worker1.ocp.ngs.local   Ready    worker   1d    v1.28.9+8ca71f7
worker2.ocp.ngs.local   Ready    worker   1d    v1.28.9+8ca71f7
worker3.ocp.ngs.local   Ready    worker   1d    v1.28.9+8ca71f7
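To confirm the scheduler setting directly, you can also query the field from the cluster resource; again, this is standard oc usage and not part of the LTI playbooks:

$ oc get schedulers.config.openshift.io cluster -o jsonpath='{.spec.mastersSchedulable}{"\n"}'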

NOTE

To add more worker nodes, update the worker node details in the HAProxy and binddns configurations on the head nodes, and then add the RHEL 8.9 worker nodes to the RHOCP cluster as described in this section.