# Red Hat OCP worker nodes
This section covers the steps to add RHCOS worker nodes to an existing OpenShift Container Platform 4 cluster.
# Prerequisites
A Red Hat OpenShift Container Platform 4 cluster is available within the deployment environment.
A worker node ignition file is generated along with the bootstrap and master ignition files. Refer to the section Kubernetes manifests and ignition files in this document for details on generating manifest and ignition files.
NOTE
It is important to use the worker.ign ignition file that was generated along with the master.ign file used to create the OpenShift cluster. If the files come from different installation runs, the certificate signing requests submitted by these worker nodes will not be accepted by the cluster. A quick way to verify this is shown at the end of these prerequisites.
A new node is available to be attached to the existing OpenShift cluster as a worker node.
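To confirm that worker.ign and master.ign originate from the same installation run, compare the cluster CA embedded in each file. This is a minimal check, assuming the ignition files reside under $BASE_DIR/installer/ignitions and that jq is installed; the two outputs must be identical.
> # Paths below are assumptions based on the kubeconfig location used later in this procedure
> jq -r '.ignition.security.tls.certificateAuthorities[0].source' $BASE_DIR/installer/ignitions/worker.ign
> jq -r '.ignition.security.tls.certificateAuthorities[0].source' $BASE_DIR/installer/ignitions/master.ign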
# Procedure
Log in to the installer VM as a non-root user.
Execute the following commands to set up the CLI environment and list the nodes in the OpenShift cluster.
> export KUBECONFIG=$BASE_DIR/installer/ignitions/auth/kubeconfig
> export PATH=$PATH:$BASE_DIR/installer/library/openshift_components
> oc get nodes
The output is as follows.
NAME                          STATUS   ROLES           AGE   VERSION
master1.ocp.twentynet.local   Ready    master,worker   5d    v1.14.6+888f9c630
master2.ocp.twentynet.local   Ready    master,worker   5d    v1.14.6+888f9c630
master3.ocp.twentynet.local   Ready    master,worker   5d    v1.14.6+888f9c630
Because the worker node was booted with the existing worker ignition file, it is recognized by the current OpenShift cluster; however, its certificate signing requests are pending. Execute the following command to list the certificate signing requests.
> oc get csr
The output is as follows.
NAME        AGE   REQUESTOR                                 CONDITION
csr-8pj6k   28m   system:node:worker1.ocp.twentynet.local   Pending
csr-9pj7c   28m   system:node:worker2.ocp.twentynet.local   Pending
csr-9pj1s   28m   system:node:worker3.ocp.twentynet.local   Pending
Execute the following command to approve all of the pending certificate signing requests and to add the worker nodes to the cluster.
> oc get csr --no-headers | awk '{print $1}' | while read line; do oc adm certificate approve $line; done
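If some CSRs are already approved and only the pending ones should be touched, a filtered variant can be used instead. This is an optional alternative, not part of the original procedure; it selects CSRs whose status is still empty, which is how pending requests appear.
> # approve only CSRs that have no status yet (i.e., Pending)
> oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve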
Verify that the certificate signing requests for the worker nodes are approved using the following command.
> oc get csr
The output is as follows.
NAME        AGE   REQUESTOR                                 CONDITION
csr-8pj6k   28m   system:node:worker1.ocp.twentynet.local   Approved,Issued
csr-9pj7c   28m   system:node:worker2.ocp.twentynet.local   Approved,Issued
csr-9pj1s   28m   system:node:worker3.ocp.twentynet.local   Approved,Issued
Verify that the worker nodes are added to the cluster using the following command.
> oc get nodes
The output is as follows.
NAME                          STATUS   ROLES           AGE   VERSION
master1.ocp.twentynet.local   Ready    master,worker   5d    v1.14.6+888f9c630
master2.ocp.twentynet.local   Ready    master,worker   5d    v1.14.6+888f9c630
master3.ocp.twentynet.local   Ready    master,worker   5d    v1.14.6+888f9c630
worker1.ocp.twentynet.local   Ready    worker          5d    v1.14.6+888f9c630
worker2.ocp.twentynet.local   Ready    worker          5d    v1.14.6+888f9c630
worker3.ocp.twentynet.local   Ready    worker          5d    v1.14.6+888f9c630
Execute the following command to edit the scheduler resource and set the mastersSchedulable parameter to false, so that master nodes will not be used to schedule pods.
> oc edit scheduler
In the editor, set mastersSchedulable to false and save. The resulting resource is as follows.
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  creationTimestamp: "2019-12-13T10:34:48Z"
  generation: 2
  name: cluster
  resourceVersion: "1748652"
  selfLink: /apis/config.openshift.io/v1/schedulers/cluster
  uid: 30245db9-1d94-11ea-8066-000c29c3ee8e
spec:
  mastersSchedulable: false
  policy:
    name: ""
status: {}
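Note that oc edit opens the resource in an interactive editor. As a non-interactive alternative, for example when scripting this step, the same change can be applied with oc patch (an equivalent command, not part of the original procedure):
> # merge-patch the cluster scheduler so master nodes stop scheduling pods
> oc patch scheduler cluster --type merge -p '{"spec":{"mastersSchedulable":false}}'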
Execute the following command to verify that the worker role has been removed from the master nodes.
> oc get nodes
The output is as follows.
NAME                          STATUS   ROLES    AGE   VERSION
master1.ocp.twentynet.local   Ready    master   5d    v1.14.6+888f9c630
master2.ocp.twentynet.local   Ready    master   5d    v1.14.6+888f9c630
master3.ocp.twentynet.local   Ready    master   5d    v1.14.6+888f9c630
worker1.ocp.twentynet.local   Ready    worker   5d    v1.14.6+888f9c630
worker2.ocp.twentynet.local   Ready    worker   5d    v1.14.6+888f9c630
worker3.ocp.twentynet.local   Ready    worker   5d    v1.14.6+888f9c630
# Adding RHEL 7.6 worker nodes
This section covers the steps to add RHEL 7.6 worker nodes to an existing Red Hat OpenShift Container Platform 4 cluster.
# Prerequisites
Ensure the required packages are installed and any necessary configuration is performed on the installer VM. Refer to the section Installer machine of this document for details on the prerequisites and configuration steps.
RHEL nodes are prepared for installation. Refer to the section Preparing worker nodes with RHEL of this document for details on preparing the RHEL 7.6 worker nodes.
# Procedure
Perform the following steps on the installer VM:
- Download the Red Hat OpenShift Container Platform 4 Ansible package to enable the addition of RHEL 7.6 worker nodes to an existing OpenShift cluster.
> sudo yum install openshift-ansible openshift-clients jq
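Optionally, confirm that the packages were installed using standard RPM tooling (a quick check, not part of the original procedure):
> rpm -q openshift-ansible openshift-clients jq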
- Create an Ansible inventory file, named for example <path>/inventory/hosts, exclusively for adding RHEL worker nodes. It defines your compute machine nodes and the required variables, as listed below.
[all:vars]
# Username that runs the Ansible tasks on the remote compute machines
# ansible_user=root
# If you do not specify root for the ansible_user,
# you must set ansible_become to True and assign the user sudo permissions.
ansible_become=True

# Path to the kubeconfig file for your cluster
openshift_kubeconfig_path="~/.kube/config"

# FQDN of each RHEL machine to add to the cluster
[new_workers]
worker-0.example.com
worker-1.example.com
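Optionally, verify that Ansible can reach the new nodes over SSH before running the playbook (a quick connectivity check, not part of the original procedure; use the same inventory path placeholder as in the next step):
> ansible -i <path to inventory hosts file> new_workers -m ping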
- Execute the following commands to run the playbook that adds RHEL 7.6 worker nodes to the existing cluster.
> cd /usr/share/ansible/openshift-ansible
> ansible-playbook -i <path to inventory hosts file> playbooks/scaleup.yml
# Approving the certificate signing requests for your machines
When new machines are added to the cluster, pending certificate signing requests (CSRs) are generated for each machine that is added. Confirm that these CSRs are approved or, if necessary, approve them yourself.
# Prerequisites
- jq package is installed.
# Procedure
Confirm that the cluster recognizes the machines by executing the following command. The output lists all of the machines that have been created.
> oc get nodes
Output is as follows:
NAME       STATUS     ROLES    AGE   VERSION
master-0   Ready      master   63m   v1.14.6+c4799753c
master-1   Ready      master   63m   v1.14.6+c4799753c
master-2   Ready      master   64m   v1.14.6+c4799753c
worker-0   NotReady   worker   76s   v1.14.6+c4799753c
worker-1   NotReady   worker   70s   v1.14.6+c4799753c
Review the pending CSRs and ensure that there is a client and server request with Pending or Approved status for each machine added to the cluster.
> oc get csr
Output is as follows:
NAME        AGE     REQUESTOR                                                                   CONDITION
csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
Because the CSRs rotate automatically, it is important to approve them within an hour of adding the machines to the cluster. If the CSRs are not approved within an hour, the certificates rotate, and more than two certificates will be present for each node; all of them must be approved. After the initial CSRs are approved, subsequent node client CSRs are approved automatically by the cluster kube-controller-manager.
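To keep an eye on incoming CSRs during this window, a simple polling loop can help (an optional convenience, assuming the standard watch utility is available on the installer VM):
> watch -n 30 oc get csr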
To approve CSRs individually, run the following command for each valid CSR. In this example, <csr_name> is the name of a CSR from the list of current CSRs.
> oc adm certificate approve <csr_name>
If all CSRs are valid, approve all of them by running the following command.
> oc get csr -o json | jq -r '.items[] | select(.status == {}) | .metadata.name' | xargs oc adm certificate approve
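As noted above, each machine produces both a client and a server request. After the client CSRs (requestor system:serviceaccount:openshift-machine-config-operator:node-bootstrapper) are approved, the kubelet on each new node submits an additional server CSR (requestor system:node:<node_name>). Re-run the following commands and repeat the approval until no Pending requests remain and the new worker nodes report Ready.
> oc get csr
> oc get nodes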