# Deploying RHOCP cluster using Ansible playbooks

The Lite Touch Installation (LTI) package includes Ansible playbooks with scripts to deploy the RHOCP cluster. You can use one of the following two methods to deploy the RHOCP cluster:

  • Run a consolidated playbook: This method uses a single playbook, site.yml, to deploy the entire solution. It performs all the tasks from the OS deployment through to a successfully installed and running RHOCP cluster. To run LTI using the consolidated playbook:
$ ansible-playbook -i hosts site.yml --ask-vault-pass

NOTE

The default password for the Ansible vault file is changeme.

  • Run individual playbooks: This method includes multiple playbooks, each of which deploys a specific part of the solution. The playbooks must be executed in a specific sequence to deploy the complete solution. The following table describes the purpose of each playbook required for the deployment:

TABLE 8. RHOCP cluster deployment using Ansible playbooks

| Playbook | Description |
|----------|-------------|
| rhel8_os_deployment.yml | Deploys the RHEL 8.9 OS on the bare metal servers. |
| copy_ssh_headnode.yml | Copies the SSH public key from the installer machine to the head nodes. |
| prepare_rhel_hosts.yml | Prepares the RHEL hosts that serve as RHOCP head nodes. |
| ntp.yml | Sets up NTP to enable time synchronization on the head nodes. |
| binddns.yml | Deploys Bind DNS on the three head nodes in an active-passive cluster configuration. |
| haproxy.yml | Deploys HAProxy on the head nodes in an active-active cluster configuration. |
| squid_proxy.yml | Deploys the Squid proxy on the head nodes to provide web access. |
| storage_pool.yml | Creates the storage pools on the head nodes. |
| rhel8_installerVM.yml | Creates the RHEL 8 installer machine, which is also used as the installer at a later stage. |
| copy_ssh_installerVM.yml | Copies the SSH public key to the RHEL 8 installer machine. |
| prepare_rhel8_installer.yml | Prepares the RHEL 8 installer. |
| copy_scripts.yml | Copies the Ansible code to the RHEL 8 installer and the head nodes. |
| download_ocp_packages.yml | Downloads the required RHOCP packages. |
| generate_manifest.yml | Generates the manifest files. |
| copy_ocp_tool.yml | Copies the RHOCP tools from the current installer to the head nodes and the RHEL 8 installer. |
| deploy_ipxe_ocp.yml | Deploys the iPXE server on the head nodes. |
| ocp_vm.yml | Creates the bootstrap and master nodes. |

To run individual playbooks:

  • Do one of the following:
  1. Edit the site.yml file and comment out all the playbooks that you do not want to execute.

For example, to deploy only the RHEL 8.9 OS, comment out every playbook except rhel8_os_deployment.yml in the site.yml file:

- import_playbook: playbooks/rhel8_os_deployment.yml
# - import_playbook: playbooks/copy_ssh_headnode.yml
# - import_playbook: playbooks/prepare_rhel_hosts.yml
# - import_playbook: playbooks/ntp.yml
# - import_playbook: playbooks/binddns.yml
# - import_playbook: playbooks/haproxy.yml
# - import_playbook: playbooks/squid_proxy.yml
# - import_playbook: playbooks/storage_pool.yml
# - import_playbook: playbooks/rhel8_installerVM.yml
# - import_playbook: playbooks/copy_ssh_installerVM.yml
# - import_playbook: playbooks/prepare_rhel8_installer.yml
# - import_playbook: playbooks/download_ocp_packages.yml
# - import_playbook: playbooks/generate_manifest.yml
# - import_playbook: playbooks/copy_ocp_tool.yml
# - import_playbook: playbooks/deploy_ipxe_ocp.yml
# - import_playbook: playbooks/ocp_vm.yml

Then run the site.yml playbook with the same command shown above.
  2. Run the individual YAML files using the following command:
$ ansible-playbook -i hosts playbooks/<yaml_filename>.yml --ask-vault-pass

For example, run the following YAML file to deploy the RHEL 8.9 OS:

$ ansible-playbook -i hosts playbooks/rhel8_os_deployment.yml --ask-vault-pass

For more information on executing individual playbooks, see the following sections.
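
The commands above all reference a hosts inventory file in the top-level directory, which maps the target machines to the groups the playbooks operate on. The layout below is only a sketch with hypothetical group and host names; use the groups already defined in the hosts file shipped with the LTI package.

# Hypothetical inventory layout; adapt the group names to the shipped hosts file
[head_nodes]
headnode1.example.local
headnode2.example.local
headnode3.example.local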

# Deploying RHEL 8 OS on bare metal servers

This section describes how to run the playbook that contains the script for deploying the RHEL 8.9 OS on bare metal servers. To deploy the RHEL 8.9 OS on the head nodes:

  1. Navigate to the $BASE_DIR (/opt/hpe-solutions-openshift/DL-LTI-Openshift/) directory on the installer.
  2. Run the following playbook:
$ ansible-playbook -i hosts playbooks/rhel8_os_deployment.yml --ask-vault-pass

# Copying SSH key to head nodes

Once the OS is installed on the head nodes, copy the SSH public key from the installer machine to the head nodes using the copy_ssh_headnode.yml playbook.

To copy the SSH key to the head nodes, run the following playbook:

$ ansible-playbook -i hosts playbooks/copy_ssh_headnode.yml --ask-vault-pass
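
To confirm that key-based SSH works before continuing, you can open a session from the installer to a head node and check that no password prompt appears; the host name below is illustrative:

$ ssh root@headnode1.example.local hostname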

# Setting up head nodes

This section describes how to run the playbook that prepares the hosts that serve as RHOCP head nodes.

To register the head nodes with Red Hat Subscription Manager and to download and install the KVM virtualization packages, run the following playbook:

$ ansible-playbook -i hosts playbooks/prepare_rhel_hosts.yml --ask-vault-pass
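
Assuming shell access to a head node, you can verify the registration and the KVM setup with standard RHEL commands:

$ subscription-manager status
$ virsh list --all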

# Setting up NTP server on head nodes

This section describes how to run the playbook that sets up the NTP server and enables time synchronization on all head nodes.

To set up the NTP server on the head nodes, run the following playbook:

$ ansible-playbook -i hosts playbooks/ntp.yml --ask-vault-pass
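
You can spot-check time synchronization on a head node afterwards. Assuming the playbook configures chrony, the default NTP implementation on RHEL 8:

$ chronyc sources -v
$ timedatectl status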

# Deploying Bind DNS on head nodes

This section describes how to deploy the Bind DNS service on all three head nodes in an active-passive cluster configuration.

To deploy the Bind DNS service on the head nodes, run the following playbook:

$ ansible-playbook -i hosts playbooks/binddns.yml --ask-vault-pass
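
A quick way to validate the DNS setup is to query a head node directly for a record the RHOCP installation depends on, such as the cluster API name. The names below are placeholders for your environment:

$ dig +short @<head_node_ip> api.<cluster_name>.<base_domain>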

# Deploying HAProxy on head nodes

RHOCP 4.15 uses an external load balancer to communicate from outside the cluster with services running inside the cluster. This section describes how to deploy HAProxy on all three head nodes in an active-active cluster configuration.

To deploy the HAProxy configuration on the head nodes, run the following playbook:

$ ansible-playbook -i hosts playbooks/haproxy.yml --ask-vault-pass
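
RHOCP expects the load balancer to listen on the Kubernetes API port (6443), the machine config server port (22623), and the ingress ports (80 and 443). Assuming shell access to a head node, you can confirm that HAProxy is listening:

$ ss -tlnp | grep haproxy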

# Deploying Squid proxy on head nodes

Squid is a proxy server that caches content to reduce bandwidth usage and load web pages more quickly. This section describes how to set up Squid as a proxy for the HTTP, HTTPS, and FTP protocols, including authentication and access restriction. It uses the squid_proxy.yml playbook, which contains the script to deploy the Squid proxy on the head nodes to provide web access.

To deploy the Squid proxy server on the head nodes, run the following playbook:

$ ansible-playbook -i hosts playbooks/squid_proxy.yml --ask-vault-pass
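
To verify the proxy, request a page through it from a machine that should have access. Squid listens on port 3128 by default; adjust the host and port to match your configuration:

$ curl -I -x http://<head_node_ip>:3128 https://www.redhat.com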

# Creating storage pools on head nodes

This section describes how to use the storage_pool.yml playbook that contains the script to create the storage pools on the head nodes.

To create the storage pools, run the following playbook:

$ ansible-playbook -i hosts playbooks/storage_pool.yml --ask-vault-pass
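
You can list the resulting pools on a head node with the standard libvirt tooling:

$ virsh pool-list --all --details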

# Creating RHEL 8 installer machine

This section describes how to create a RHEL 8 installer machine using the rhel8_installerVM.yml playbook. This installer machine is also used as an installer for deploying the RHOCP cluster and adding RHEL 8.9 worker nodes.

To create the RHEL 8 installer machine, run the following playbook:

$ ansible-playbook -i hosts playbooks/rhel8_installerVM.yml --ask-vault-pass
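
To confirm that the installer VM was created and is running, query libvirt on the head node that hosts it (the VM name depends on the LTI defaults):

$ virsh list --all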

# Copying SSH key to RHEL 8 installer machine

This section describes how to copy the SSH public key to the RHEL 8 installer machine using the copy_ssh_installerVM.yml playbook.

To copy the SSH public key to the RHEL 8 installer machine, run the following playbook:

$ ansible-playbook -i hosts playbooks/copy_ssh_installerVM.yml --ask-vault-pass

# Setting up RHEL 8 installer

This section describes how to set up the RHEL 8 installer using the prepare_rhel8_installer.yml playbook.

To set up the RHEL 8 installer, run the following playbook:

$ ansible-playbook -i hosts playbooks/prepare_rhel8_installer.yml --ask-vault-pass

# Downloading RHOCP packages

This section provides details about downloading the required RHOCP 4.15 packages using the download_ocp_packages.yml playbook.

To download the RHOCP 4.15 packages on the installer VM, run the following playbook:

$ ansible-playbook -i hosts playbooks/download_ocp_packages.yml --ask-vault-pass
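
After the download completes, you can sanity-check the client tooling on the installer VM, assuming the playbook places the binaries in the PATH:

$ openshift-install version
$ oc version --client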

# Generating Kubernetes manifest files

The manifests and ignition files define the master node and worker node configuration and are key components of the RHOCP 4.15 installation. This section describes how to use the generate_manifest.yml playbook that contains the script to generate the manifest files.

To generate the Kubernetes manifest files, run the following playbook:

$ ansible-playbook -i hosts playbooks/generate_manifest.yml --ask-vault-pass
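
If generation succeeds, the ignition directory referenced by the later steps typically contains the bootstrap.ign, master.ign, and worker.ign files, a metadata.json file, and the auth/ directory with the kubeconfig and kubeadmin password. You can verify with:

$ ls /opt/hpe-solutions-openshift/DL-LTI-Openshift/playbooks/roles/generate_ignition_files/ignitions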

# Copying RHOCP tools

This section describes how to copy the RHOCP tools from the current installer to the head nodes and the RHEL 8 installer using the copy_ocp_tool.yml playbook.

To copy the RHOCP tools to the head nodes and the RHEL 8 installer, run the following playbook:

$ ansible-playbook -i hosts playbooks/copy_ocp_tool.yml --ask-vault-pass

# Deploying iPXE server on head nodes

This section describes how to deploy the iPXE server on the head nodes using the deploy_ipxe_ocp.yml playbook.

To deploy the iPXE server, run the following playbook:

$ ansible-playbook -i hosts playbooks/deploy_ipxe_ocp.yml --ask-vault-pass

# Creating bootstrap and master nodes

This section describes how to create the bootstrap and master nodes using the scripts in the ocp_vm.yml playbook.

To create the bootstrap and master VMs on Kernel-based Virtual Machine (KVM), run the following playbook:

$ ansible-playbook -i hosts playbooks/ocp_vm.yml --ask-vault-pass

# Deploying RHOCP cluster

Once the playbooks are executed successfully and the bootstrap and master nodes are deployed with RHCOS, deploy the RHOCP cluster.

To deploy the RHOCP cluster:

  1. Log in to the installer VM.

This installer VM was created as a KVM VM on one of the head nodes using the rhel8_installerVM.yml playbook. For more information, see the Creating RHEL 8 installer machine section.

  2. Add the kubeconfig path to the environment variables using the following command:
$ export KUBECONFIG=/opt/hpe-solutions-openshift/DL-LTI-Openshift/playbooks/roles/generate_ignition_files/ignitions/auth/kubeconfig
  3. Run the following command:
$ openshift-install wait-for bootstrap-complete --dir=/opt/hpe-solutions-openshift/DL-LTI-Openshift/playbooks/roles/generate_ignition_files/ignitions --log-level debug
  4. Complete the RHOCP 4.15 cluster installation with the following command:
$ openshift-install wait-for install-complete --dir=/opt/hpe-solutions-openshift/DL-LTI-Openshift/playbooks/roles/generate_ignition_files/ignitions --log-level=debug

The following output is displayed:

DEBUG OpenShift Installer v4.15
DEBUG Built from commit 6ed04f65b0f6a1e11f10afe658465ba8195ac459
INFO Waiting up to 30m0s for the cluster at https://api.rrocp.pxelocal.local:6443 to initialize...
DEBUG Still waiting for the cluster to initialize: Working towards 4.15: 99% complete
DEBUG Still waiting for the cluster to initialize: Working towards 4.15: 99% complete, waiting on authentication, console, image-registry
DEBUG Still waiting for the cluster to initialize: Working towards 4.15: 99% complete
DEBUG Still waiting for the cluster to initialize: Working towards 4.15: 100% complete, waiting on image-registry
DEBUG Still waiting for the cluster to initialize: Cluster operator image-registry is still updating
DEBUG Still waiting for the cluster to initialize: Cluster operator image-registry is still updating
DEBUG Cluster is initialized
INFO Waiting up to 10m0s for the openshift-console route to be created...
DEBUG Route found in openshift-console namespace: console
DEBUG Route found in openshift-console namespace: downloads
DEBUG OpenShift console route is created
INFO Install complete!
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.ocp.ngs.local
INFO Login to the console with user: kubeadmin, password: a6hKv-okLUA-Q9p3q-UXLc3

The RHOCP cluster is successfully installed.

  5. After the installation is complete, check the status of the created cluster:
$ oc get nodes
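
All nodes should report a Ready status. As an additional health check, you can confirm that every cluster operator has finished rolling out:

$ oc get clusteroperators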

# Running Red Hat OpenShift Container Platform Console

Prerequisites:

The RHOCP cluster installation must be complete.

NOTE

The installer machine provides the Red Hat OpenShift Container Platform Console link and login details when the RHOCP cluster installation is complete.

To access the Red Hat OpenShift Container Platform Console:

  1. Open a web browser and enter the following link:
https://console-openshift-console.apps.<customer.defined.domain>

For example: https://console-openshift-console.apps.ocp.ngs.local

  2. Log in to the Red Hat OpenShift Container Platform Console with the following credentials:

- Username: kubeadmin
- Password: <password>

NOTE

If the password is lost or forgotten, see the kubeadmin-password file located at /opt/hpe-solutions-openshift/DL-LTI-Openshift/playbooks/roles/generate_ignition_files/ignitions/auth/kubeadmin-password on the installer machine.

The following figure shows the Red Hat OpenShift Container Platform Console after successful deployment:

FIGURE 8. Red Hat OpenShift Container Platform Console login screen