# Adding RHCOS VM worker nodes to RHOCP cluster using Ansible playbooks

The Lite Touch Installation (LTI) package includes Ansible playbooks with scripts to add the RHCOS worker nodes to the RHOCP cluster. You can use one of the following two methods to add the RHCOS worker nodes:

  • Run a consolidated playbook: This method uses a single playbook, site.yml, that performs all the tasks for adding the RHCOS worker nodes to the existing RHOCP cluster. To run LTI using the consolidated playbook:
$ ansible-playbook -i hosts site.yml --ask-vault-pass

NOTE

The default password for the Ansible vault file is changeme.

  • Run individual playbooks: This method includes multiple playbooks with scripts that enable you to deploy specific tasks for adding the RHCOS worker nodes to the existing RHOCP cluster. The playbooks in this method must be executed in a specific sequence to add the worker nodes.

The following table includes the purpose of each playbook required for the deployment:

TABLE 9. Playbook descriptions

| Playbook | Description |
| --- | --- |
| rhel8_os_deployment.yml | Deploys the RHEL 8.9 OS on bare metal servers. |
| copy_ssh_workernode.yml | Copies the SSH public key from the installer machine to the KVM worker nodes. |
| prepare_rhel_hosts.yml | Prepares the KVM worker nodes with the required packages and subscription. |
| ntp.yml | Sets up NTP on the KVM worker nodes to ensure time synchronization. |
| binddns.yml | Deploys BIND DNS on the three head nodes; it works as both active and passive. |
| haproxy.yml | Deploys HAProxy on the head nodes; it acts as active. |
| storage_pool.yml | Creates the storage pools on the KVM worker nodes. |
| deploy_ipxe_ocp.yml | Deploys the iPXE code on the installer machine. |
| ocp_rhcosworkervm.yml | Adds the KVM-based CoreOS nodes to the existing OpenShift cluster. |

To run individual playbooks, do one of the following:

  1. Edit the site.yml file and comment out all the playbooks except the ones that you want to execute.

For example, comment out the following lines in the site.yml file to deploy only the RHEL 8.9 OS on the worker nodes:

- import_playbook: playbooks/rhel8_os_deployment.yml
# - import_playbook: playbooks/copy_ssh_workernode.yml
# - import_playbook: playbooks/prepare_rhel_hosts.yml
# - import_playbook: playbooks/ntp.yml
# - import_playbook: playbooks/binddns.yml
# - import_playbook: playbooks/haproxy.yml
# - import_playbook: playbooks/storage_pool.yml
# - import_playbook: playbooks/deploy_ipxe_ocp.yml
# - import_playbook: playbooks/ocp_rhcosworkervm.yml

OR

Run the individual YAML files using the following command:

$ ansible-playbook -i hosts playbooks/<yaml_filename>.yml --ask-vault-pass

For example, run the following YAML file to deploy RHEL 8.9 OS on the worker nodes:

$ ansible-playbook -i hosts playbooks/rhel8_os_deployment.yml --ask-vault-pass
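Both invocation styles prompt for the vault password on every run. As an optional convenience, ansible-playbook also accepts a `--vault-password-file` option in place of `--ask-vault-pass`; a minimal sketch, assuming you accept storing the vault password on disk (the `vault.pass` filename is illustrative):

```shell
# Store the vault password once, then reference the file on each run.
# "changeme" is the documented default; change it for production use.
printf '%s' 'changeme' > vault.pass
chmod 600 vault.pass   # restrict the file to the current user

# Example invocation (commented out; requires the hosts file and playbooks):
# ansible-playbook -i hosts playbooks/rhel8_os_deployment.yml --vault-password-file vault.pass
```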

For more information on executing individual playbooks, see the subsequent sections.

# Adding RHCOS worker nodes

This section covers the steps to enable the KVM hypervisor on the worker nodes and add RHCOS worker VM nodes to an existing Red Hat OpenShift Container Platform cluster.

  1. Log in to the installer VM.

This installer VM was created as a KVM VM on one of the head nodes using the rhel8_installerVM.yml playbook. For more information, see the Creating RHEL 8 installer machine section.

  2. Navigate to the $BASE_DIR (/opt/hpe-solutions-openshift/DL-LTI-Openshift/) directory, copy the input file and the hosts file to $BASE_DIR/coreos_kvmworker_nodes/, and then update the OCP worker details in the input file and the kvm_workernodes group in the hosts file, as shown in the following sample:
ansible-vault edit input.yaml
vi hosts
	[kvm_workernodes]
	KVMworker1 IP
	KVMworker2 IP
	KVMworker3 IP

NOTE

The default password for the Ansible vault file is changeme.

  3. Copy the RHEL 8.9 DVD ISO to the /usr/share/nginx/html/ directory.
  4. Navigate to the /opt/hpe-solutions-openshift/DL-LTI-Openshift/coreos_kvmworker_nodes/ directory and add the worker nodes to the cluster using one of the following methods:
  • Run the following sequence of playbooks:
	ansible-playbook -i hosts playbooks/rhel8_os_deployment.yml --ask-vault-pass
	ansible-playbook -i hosts playbooks/copy_ssh_workernode.yml --ask-vault-pass
	ansible-playbook -i hosts playbooks/prepare_rhel_hosts.yml --ask-vault-pass
	ansible-playbook -i hosts playbooks/ntp.yml --ask-vault-pass
	ansible-playbook -i hosts playbooks/binddns.yml --ask-vault-pass
	ansible-playbook -i hosts playbooks/haproxy.yml --ask-vault-pass
	ansible-playbook -i hosts playbooks/storage_pool.yml --ask-vault-pass
	ansible-playbook -i hosts playbooks/deploy_ipxe_ocp.yml --ask-vault-pass
	ansible-playbook -i hosts playbooks/ocp_rhcosworkervm.yml --ask-vault-pass

OR

  • If you want to deploy the entire solution to add the RHCOS worker nodes to the cluster, execute the following playbook:
$ ansible-playbook -i hosts site.yml --ask-vault-pass
  5. After all the playbooks execute successfully, check the node status as described below.

Approving server certificates (CSRs) for newly added nodes

The administrator must approve the CSRs generated by the kubelet on each new node.

You can approve all pending CSRs using the following command:

$ oc get csr -o json | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs oc adm certificate approve
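Newly added workers typically raise two rounds of CSRs (client certificates first, then serving certificates), so a single approval pass can miss the second round. A hedged sketch that re-checks for pending requests by parsing the plain `oc get csr` listing instead of JSON (the `pending_csrs` and `approve_pending_csrs` helper names are illustrative):

```shell
# List the names of CSRs still in Pending state. CONDITION is the last
# column of "oc get csr", so awk can filter on it without jq.
pending_csrs() {
  oc get csr --no-headers 2>/dev/null | awk '$NF == "Pending" {print $1}'
}

# Approve every pending CSR found above.
approve_pending_csrs() {
  for csr in $(pending_csrs); do
    oc adm certificate approve "$csr"
  done
}

# Usage: run twice with a pause, since serving CSRs only appear after
# the client CSRs are approved:
#   approve_pending_csrs; sleep 60; approve_pending_csrs
```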
  6. Verify the node status using the following command:
$ oc get nodes
  7. Run the following command to set the mastersSchedulable parameter to false, so that master nodes are not used to schedule pods:
$ oc edit scheduler
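The `oc edit scheduler` command opens the cluster-scoped Scheduler resource in an editor; only the `mastersSchedulable` field needs changing. The relevant fragment looks like this:

```yaml
# Fragment of the Scheduler resource (named "cluster") opened by
# "oc edit scheduler"; set mastersSchedulable to false:
spec:
  mastersSchedulable: false
```

As a non-interactive alternative, the same change can be applied with a merge patch, for example `oc patch scheduler cluster --type merge -p '{"spec":{"mastersSchedulable":false}}'`.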