# Adding bare metal CoreOS worker nodes to the RHOCP cluster using Ansible playbooks
The Lite Touch Installation (LTI) package includes Ansible playbooks with scripts to add the bare metal CoreOS worker nodes to the RHOCP cluster. You can use one of the following two methods to add the CoreOS worker nodes:
- Run a consolidated playbook: This method includes a single playbook, site.yml, that contains a script to perform all the tasks for adding the CoreOS worker nodes to the existing RHOCP cluster. To run LTI using a consolidated playbook:
$ ansible-playbook -i hosts site.yml --ask-vault-pass
NOTE
The default password for the Ansible vault file is changeme.
If network interface bonding is required on the bare metal worker nodes, follow Step 4 and then proceed with the CSR certificate verification.
- Run individual playbooks: This method includes multiple playbooks with scripts that enable you to perform specific tasks for adding the CoreOS worker nodes to the existing RHOCP cluster. The playbooks in this method must be executed in a specific sequence to add the worker nodes.
The following table includes the purpose of each playbook required for the deployment:
TABLE 9. Playbook Description
| Playbook | Description |
|---|---|
| binddns.yml | Contains the script to deploy BIND DNS on the three worker nodes; it works in both active and passive modes. |
| haproxy.yml | Contains the script to deploy HAProxy on the worker nodes; it acts as active. |
| deploy_ipxe_ocp.yml | Contains the script to deploy the iPXE code on the worker machine. |
To run individual playbooks, do one of the following:
- Edit the site.yml file and comment out all the playbooks except the ones you want to execute.
For example, comment out the playbooks as follows in the site.yml file to deploy only BIND DNS on the worker nodes:
import_playbook: playbooks/binddns.yml
# import_playbook: playbooks/haproxy.yml
# import_playbook: playbooks/deploy_ipxe_ocp.yml
OR
Run the individual YAML files using the following command:
$ ansible-playbook -i hosts playbooks/<yaml_filename>.yml --ask-vault-pass
For example, run the following playbook to deploy BIND DNS on the worker nodes:
$ ansible-playbook -i hosts playbooks/binddns.yml --ask-vault-pass
For more information on executing individual playbooks, see the subsequent sections.
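All of the ansible-playbook commands in this section prompt for the vault password because of the --ask-vault-pass option. If you prefer a non-interactive run, ansible-playbook also accepts a vault password file; the ~/.vault_pass path below is only an illustration and is not part of the LTI package:
$ echo 'changeme' > ~/.vault_pass && chmod 600 ~/.vault_pass
$ ansible-playbook -i hosts site.yml --vault-password-file ~/.vault_pass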
# Adding CoreOS worker nodes
This section covers the steps to add RHCOS worker nodes to an existing Red Hat OpenShift Container Platform cluster.
- Log in to the installer VM.
This installer VM was created as a KVM VM on one of the head nodes using the rhel8_installerVM.yml playbook. For more information, see the Creating RHEL 8 installer machine section.
- Navigate to the $BASE_DIR (/opt/hpe-solutions-openshift/DL-LTI-Openshift/) directory, copy the input file and hosts file to $BASE_DIR/coreos_BareMetalworker_nodes/, and then update the OCP worker details in the input file:
ansible-vault edit input.yaml
------------------------------------------------------------------------------------------------------------
ocp_workers:
  - name: worker1
    ip: 172.28.xx.xxx
    fqdn: xxx.ocp.isv.local  # e.g., mworker1.ocp.isv.local
    mac_address: XX:XX:XX:XX:XX:XX  # For a bare metal CoreOS worker, use the MAC address of the server NIC
  - name: worker2
    ip: 172.28.xx.xxx
    fqdn: xxx.ocp.isv.local  # e.g., mworker2.ocp.isv.local
    mac_address: XX:XX:XX:XX:XX:XX  # For a bare metal CoreOS worker, use the MAC address of the server NIC
  - name: worker3
    ip: 172.28.xx.xxx
    fqdn: xxx.ocp.isv.local  # e.g., mworker3.ocp.isv.local
    mac_address: XX:XX:XX:XX:XX:XX  # For a bare metal CoreOS worker, use the MAC address of the server NIC
------------------------------------------------------------------------------------------------------------
NOTE
Import the hosts file from the $BASE_DIR.
The Ansible vault password is changeme.
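The hosts inventory shipped with the LTI package defines the target groups expected by these playbooks; copy it unchanged from $BASE_DIR. Purely as an illustration of the Ansible inventory format (the group and host entries below are placeholders, not the actual file contents), such an inventory typically looks like this:
[workers]
worker1 ansible_host=172.28.xx.xxx
worker2 ansible_host=172.28.xx.xxx
worker3 ansible_host=172.28.xx.xxx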
- Navigate to the /opt/hpe-solutions-openshift/DL-LTI-Openshift/coreos_BareMetalworker_nodes/ directory and add the worker nodes to the cluster using one of the following methods:
- Run the following sequence of playbooks:
$ ansible-playbook -i hosts playbooks/binddns.yml --ask-vault-pass
$ ansible-playbook -i hosts playbooks/haproxy.yml --ask-vault-pass
$ ansible-playbook -i hosts playbooks/deploy_ipxe_ocp.yml --ask-vault-pass
OR
- If you want to deploy the entire solution to add the RHCOS worker nodes to the cluster, execute the following playbook:
$ ansible-playbook -i hosts site.yml --ask-vault-pass
- Execute the following commands to create a bond on the network interfaces of the bare metal CoreOS worker nodes:
$ ssh core@<CoreOS workerIP>
$ ip -o link show | grep 'state UP' | awk -F ': ' '{print $2}' ### retrieve only the names of the network interfaces that are currently UP
Sample output of the above command:
ens1f0np0
ens1f1np1
$ sudo nmcli connection add type bond con-name "bond0" ifname bond0
$ sudo nmcli connection modify bond0 bond.options "mode=active-backup,downdelay=0,miimon=100,updelay=0"
$ sudo nmcli connection add type ethernet slave-type bond con-name bond0-if1 ifname ens1f0np0 master bond0 ### ens1f0np0 is the first interface name from the sample output
$ sudo nmcli connection add type ethernet slave-type bond con-name bond0-if2 ifname ens1f1np1 master bond0 ### ens1f1np1 is the second interface name from the sample output
$ sudo nmcli connection up bond0
$ sudo nmcli connection modify "bond0" ipv4.addresses '<<CoreOS IP with netmask>>' ipv4.gateway '<<gateway IP>>' ipv4.dns '<<DNS server IPs (all the head node IPs)>>' ipv4.dns-search '<<domain name>>' ipv4.method manual
example:
sudo nmcli connection modify "bond0" ipv4.addresses '172.28.*.*/24' ipv4.gateway '172.28.*.*' ipv4.dns '172.28.*.*,172.28.*.*,172.28.*.*' ipv4.dns-search 'isv.local' ipv4.method manual
$ sudo reboot
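After the node reboots, you can optionally confirm that the bond came up with both member interfaces and the expected IP address. These are standard nmcli and kernel checks, not part of the playbooks:
$ ssh core@<CoreOS workerIP>
$ nmcli connection show --active ### bond0 and both bond0-if* member connections should be listed
$ cat /proc/net/bonding/bond0 ### shows the active-backup mode and the state of each member interface
$ ip addr show bond0 ### confirms the static IPv4 address configured above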
- After successful execution of all the playbooks, check the node status as described below.
Approving server certificates (CSRs) for newly added nodes
The administrator needs to approve the CSR requests generated by each kubelet.
You can approve all pending CSR requests using the following command:
$ oc get csr -o json | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs oc adm certificate approve
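A newly added node normally raises a client CSR first and a serving CSR only after the first one is approved, so you may need to repeat the approval command. To confirm that no requests remain in the Pending state:
$ oc get csr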
- Verify the node status using the following command:
$ oc get nodes
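The new workers should appear with the worker role and reach the Ready state once their CSRs are approved. The output below is only an illustration; node names and versions depend on your environment:
NAME                     STATUS   ROLES    AGE   VERSION
mworker1.ocp.isv.local   Ready    worker   12m   v1.xx.x
mworker2.ocp.isv.local   Ready    worker   12m   v1.xx.x
mworker3.ocp.isv.local   Ready    worker   12m   v1.xx.x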
- Execute the following command to set the mastersSchedulable parameter to false, so that the master nodes are not used to schedule pods.
$ oc edit scheduler
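This command opens the cluster-wide Scheduler resource in an editor; the change to make is in its spec. A minimal sketch of the relevant part of the resource, with other fields omitted:
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  mastersSchedulable: false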