# Solution Deployment Flow
Figure 25 shows the flow of the installation process and aligns with this document.
Figure 25: Solution Deployment Flow
# Setup iPXE, TFTP, and DHCP for RHCOS
In this setup, the machines are booted by leveraging the iPXE server. In this step we prepare the iPXE and TFTP servers to be able to boot RHCOS. This is the initial stage, and because DHCP is an integral part of the PXE boot process, configuring DHCP is equally important. This configuration can be done using sudo access. The details for configuring the iPXE setup are listed in the deployment guide at https://github.com/HewlettPackard/hpe-solutions-openshift/tree/master/synergy/scalable (opens new window).
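As an illustration, the DHCP side of this step can be sketched with an ISC dhcpd fragment. The subnet, addresses, and file names below are examples only and must be adapted to your environment; the deployment guide linked above contains the exact configuration used in this solution.

```
# /etc/dhcp/dhcpd.conf -- illustrative fragment (subnet, IPs, and file names are examples)
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.150;
  option routers 192.168.1.1;
  option domain-name-servers 192.168.1.2;
  next-server 192.168.1.2;                       # TFTP server hosting the boot files
  if exists user-class and option user-class = "iPXE" {
    filename "http://192.168.1.2/boot.ipxe";     # client already runs iPXE: chain to boot script
  } else {
    filename "undionly.kpxe";                    # legacy PXE firmware: load iPXE first
  }
}
```

The two-stage `filename` logic avoids an infinite loop: a plain PXE client is handed the iPXE binary over TFTP, and once it reboots into iPXE it is handed the HTTP boot script instead.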
# Configure a load balancer for Red Hat OpenShift 4 nodes
In a multi-node OpenShift cluster deployment, a load balancer is mandatory. In this solution, Hewlett Packard Enterprise uses HAProxy to load balance the required traffic. This configuration can be done using sudo access. For a commercial load balancer such as F5 BIG-IP, or any other load balancer supported by OpenShift Container Platform 4, refer to the manufacturer's website. For more details on configuring sudo to allow non-root users to execute root-level commands, and for information on the HAProxy configuration, see the HPE solutions for Red Hat OpenShift Container Platform GitHub at https://github.com/HewlettPackard/hpe-solutions-openshift/tree/master/synergy/scalable (opens new window).
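The shape of the HAProxy configuration can be sketched as below for the two control-plane ports OpenShift 4 requires (6443 for the Kubernetes API and 22623 for the machine config server). Host names and addresses are placeholders, not the values used in this solution; the GitHub repository above holds the authoritative configuration.

```
# /etc/haproxy/haproxy.cfg -- illustrative fragment; names and IPs are examples
frontend ocp4-api
    bind *:6443
    mode tcp
    default_backend ocp4-api-be
backend ocp4-api-be
    mode tcp
    balance source
    server bootstrap 192.168.1.9:6443  check   # remove after bootstrap completes
    server master0   192.168.1.10:6443 check
    server master1   192.168.1.11:6443 check
    server master2   192.168.1.12:6443 check

frontend ocp4-machine-config
    bind *:22623
    mode tcp
    default_backend ocp4-machine-config-be
backend ocp4-machine-config-be
    mode tcp
    server bootstrap 192.168.1.9:22623  check  # remove after bootstrap completes
    server master0   192.168.1.10:22623 check
    server master1   192.168.1.11:22623 check
    server master2   192.168.1.12:22623 check

# Application ingress (ports 80 and 443) is balanced across the worker nodes
# with an analogous frontend/backend pair.
```

Note that the bootstrap node is included in both backends only during installation and is removed once the bootstrap process completes.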
# Configure DNS
In user-provisioned infrastructure (UPI), DNS records are required for each machine. These records must be able to resolve the hostnames of all other machines in a Red Hat OpenShift Container Platform cluster. This component can also be configured using sudo access for a Linux-based or Windows-based DNS solution. For more information on role-based access control, see Windows Role based Access (opens new window). For third-party DNS solutions, refer to the manufacturer's website. For details on configuring sudo to allow non-root users to execute root-level commands, and for the full list of required records, see section User-provisioned DNS requirements (opens new window).
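For a Linux-based DNS solution, the required records can be sketched as a BIND zone fragment. The cluster name `ocp4`, the domain `example.com`, and all addresses below are placeholders; the authoritative list of required records is in the linked User-provisioned DNS requirements section.

```
; illustrative zone fragment for cluster "ocp4" in base domain "example.com"
api.ocp4.example.com.        IN A 192.168.1.5    ; external API, load balancer VIP
api-int.ocp4.example.com.    IN A 192.168.1.5    ; internal API, same VIP in this sketch
*.apps.ocp4.example.com.     IN A 192.168.1.5    ; wildcard for application ingress
bootstrap.ocp4.example.com.  IN A 192.168.1.9
master0.ocp4.example.com.    IN A 192.168.1.10
master1.ocp4.example.com.    IN A 192.168.1.11
master2.ocp4.example.com.    IN A 192.168.1.12
worker0.ocp4.example.com.    IN A 192.168.1.20
worker1.ocp4.example.com.    IN A 192.168.1.21
```

Matching reverse (PTR) records for the node addresses are also expected, since RHCOS uses reverse lookup to set node hostnames.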
# Configure firewall ports
In user-provisioned infrastructure (UPI), network connectivity between machines allows cluster components to communicate within the Red Hat OpenShift Container Platform cluster. Hence, the required ports must be open between the Red Hat OpenShift cluster nodes. This component can also be configured using sudo access for a Linux-based firewall. For third-party firewall solutions, refer to the manufacturer's website. For details on configuring sudo to allow non-root users to execute root-level commands, and for the full list of required ports, see section networking requirements for user-provisioned infrastructure (opens new window).
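On a Linux host using firewalld, opening the core OpenShift 4 ports can be sketched as below. This is not the exhaustive list (node ports, host services, and SDN ports vary by configuration); the linked networking requirements section is authoritative.

```
# run with sudo; ports shown are the core OpenShift 4 control-plane/ingress ports
sudo firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API
sudo firewall-cmd --permanent --add-port=22623/tcp       # machine config server
sudo firewall-cmd --permanent --add-port=80/tcp --add-port=443/tcp   # ingress
sudo firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd (control plane only)
sudo firewall-cmd --permanent --add-port=10250/tcp       # kubelet
sudo firewall-cmd --permanent --add-port=4789/udp        # VXLAN overlay (SDN)
sudo firewall-cmd --reload
```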
# Start Red Hat OpenShift Container Platform 4 user provisioned infrastructure setup
The user-provisioned infrastructure (UPI) setup begins with installing a bastion host. This setup uses a RHEL 7.6 virtual machine as the bastion host, which is used for deploying and managing the Red Hat OpenShift Container Platform 4 clusters. The setup and configuration can be completed using sudo user access. For more information, see section Generating an SSH private key and adding it to the agent (opens new window).
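The SSH key step referenced above can be sketched as follows on the bastion host. The key path `~/.ssh/ocp4_id` is an example name, not mandated by the installer; any key referenced later in `install-config.yaml` will work.

```shell
# generate a dedicated, passphrase-less SSH key for the cluster nodes
# (path and key name are examples)
mkdir -p "${HOME}/.ssh"
ssh-keygen -t ed25519 -N '' -f "${HOME}/.ssh/ocp4_id"

# start the ssh-agent and add the key so that the installer and
# later "ssh core@<node>" debug sessions can use it
eval "$(ssh-agent -s)"
ssh-add "${HOME}/.ssh/ocp4_id"
```

The public half of this key is pasted into the `sshKey` field of `install-config.yaml` later, which is what allows logging in to RHCOS nodes as the `core` user.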
# Download Red Hat OpenShift Container Platform 4 software version
Download the OpenShift Container Platform 4 software, obtain the access token (pull secret) for your cluster, and install the tools on the bastion host. The bastion host is used for deploying and managing the OpenShift Container Platform 4 clusters. The setup and configuration can be completed using sudo user access. For more information, see section Obtaining the installation program (opens new window).
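A typical download-and-install sequence on the bastion host looks like the sketch below. The release channel `stable-4.6` and the install path `/usr/local/bin` are examples; pick the release that matches your deployment and fetch the pull secret from the Red Hat cluster manager console.

```
# fetch the client and installer from the public mirror (version is an example)
curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.6/openshift-client-linux.tar.gz
curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.6/openshift-install-linux.tar.gz

# unpack into the PATH (requires sudo)
sudo tar -xzf openshift-client-linux.tar.gz  -C /usr/local/bin oc kubectl
sudo tar -xzf openshift-install-linux.tar.gz -C /usr/local/bin openshift-install

# sanity check
oc version --client
openshift-install version
```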
# Create ignition config files
This step begins with the creation of the install-config.yaml file in a new folder. Use the OpenShift installation tool to convert the YAML file into the ignition config files required to install Red Hat OpenShift Container Platform 4. No system modification is made on the bastion host or the provisioning server. This step can be completed using sudo access. For more information, see https://docs.openshift.com/container-platform/4.6/installing/installing_bare_metal/installing-bare-metal.html#installation-initializing-manual_installing-bare-metal (opens new window).
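A minimal install-config.yaml skeleton for a bare-metal UPI cluster is sketched below. The cluster name, domain, network CIDRs, and secrets are placeholders; note that the installer consumes (deletes) this file, so keep a backup copy.

```yaml
# install-config.yaml -- illustrative skeleton (names, counts, secrets are placeholders)
apiVersion: v1
baseDomain: example.com
metadata:
  name: ocp4
compute:
- name: worker
  replicas: 0            # workers join after the control plane is up, typical for UPI
controlPlane:
  name: master
  replicas: 3
networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
pullSecret: '<paste your pull secret here>'
sshKey: '<paste your public SSH key here>'
```

With this file in place, `openshift-install create ignition-configs --dir=<folder>` produces `bootstrap.ign`, `master.ign`, and `worker.ign` plus the `auth` directory used later for login.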
# Upload ignition config files to the web
This step involves uploading the ignition config files to an internal website that allows anonymous access during the PXE boot process. Update the PXE default file to point to the website location of the ignition files. The actions required in this step can be performed as a sudo user. For more information, see section Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines by PXE or iPXE booting (opens new window).
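The boot-file update can be sketched as an iPXE script entry. The web server address, RHCOS image names, and target disk below are examples; each node type substitutes its own ignition file (`bootstrap.ign`, `master.ign`, or `worker.ign`).

```
#!ipxe
# illustrative iPXE entry for the bootstrap node (URLs, file names, and disk are examples)
kernel http://192.168.1.2/rhcos/rhcos-live-kernel-x86_64 initrd=main \
  coreos.live.rootfs_url=http://192.168.1.2/rhcos/rhcos-live-rootfs.x86_64.img \
  coreos.inst.install_dev=/dev/sda \
  coreos.inst.ignition_url=http://192.168.1.2/ignition/bootstrap.ign
initrd --name main http://192.168.1.2/rhcos/rhcos-live-initramfs.x86_64.img
boot
```

The `coreos.inst.ignition_url` parameter is how the anonymously served ignition file reaches the node, which is why the web location must be reachable without authentication during boot.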
In a virtualized setup for deploying OpenShift Container Platform 4, a template is created from the OVA image. This template is used for creating the nodes of the cluster. The ignition config files are provided to each node while provisioning the VMs. For more information on creating a template from the OVA image, see https://docs.openshift.com/container-platform/4.6/installing/installing_vsphere/installing-vsphere.html (opens new window).
# Configure the HPE Synergy Compute for iPXE boot
The configuration involves setting up the server profile in HPE Synergy Composer for iPXE boot and for the required storage. Hewlett Packard Enterprise uses HPE Synergy Composer to create the server profiles and templates. Access to the Composer UI is as a non-root user; hence, from a security standpoint, no root access is used for HPE Synergy Composer access. For more information, see the HPE OneView 4.2 User Guide, Server profiles and server profile templates section.
# Bootstrap Node
The bootstrap node is a temporary node that is used to bring up the OpenShift cluster. After the cluster is up, this machine can be decommissioned and the hardware reused. The iPXE boot process must pass the bootstrap ignition information as part of the iPXE boot parameters to install RHCOS on this node.
# Master Node
The master nodes boot the RHCOS iPXE image after the bootstrap node. The iPXE boot process must pass the master.ign information as part of the iPXE boot parameters to install RHCOS on these nodes. The root user is not active by default in RHCOS, so root login is not available; instead, log in as the core user.
# Create the cluster
The four nodes (one bootstrap and three master nodes) boot up and are available at the RHCOS login prompt. Use the OpenShift installation tool to complete the bootstrap process. For more information, see Creating the cluster (opens new window). This action is performed as the sudo user logged in on the bastion host or provisioning server.
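Completing the bootstrap process from the bastion host can be sketched as below; the directory path is an example and must be the folder that holds the ignition files.

```
# wait for the control plane to form; returns when the bootstrap node can be removed
openshift-install wait-for bootstrap-complete --dir=/opt/ocp4 --log-level=info
```

On success the installer logs that it is safe to remove the bootstrap resources, at which point the bootstrap entries are also removed from the load balancer backends.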
# Log in to the cluster
After the bootstrap process has completed successfully, log in to the cluster. The kubeconfig file is present in the auth directory, where the ignition files were created on the bastion host. Export the cluster kubeconfig file and log in to your cluster as a default system user. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during the Red Hat OpenShift Container Platform installation. After logging in, approve the pending OpenShift CSRs for the nodes. For more information, see section Logging in to the cluster (opens new window).
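The login and CSR approval can be sketched as follows on the bastion host; the directory path is an example.

```
# point the CLI at the cluster created by the installer
export KUBECONFIG=/opt/ocp4/auth/kubeconfig
oc whoami                       # should report system:admin

# approve any pending certificate signing requests for the nodes
oc get csr
oc get csr -o name | xargs -r oc adm certificate approve
```

Approving in bulk with `xargs` is a convenience for installation time; in an operating cluster, review each CSR before approving it.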
# Initial operator configuration
After the control plane initializes, you must immediately configure the operators that are not yet available (for example, image-registry) to ensure their availability. For more information, see section Initial Operator Configuration (opens new window). This action is performed as the sudo user logged in on the bastion host or provisioning server.
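For the image-registry example, the operator stays unavailable on bare metal until it is given storage. A minimal sketch, suitable only for non-production clusters because emptyDir storage is lost on pod restart:

```
# give the registry ephemeral storage and set it to Managed
oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
  --patch '{"spec":{"storage":{"emptyDir":{}}}}'
oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
  --patch '{"spec":{"managementState":"Managed"}}'

# watch until every operator reports Available=True
oc get clusteroperators
```

For production, back the registry with persistent storage instead of `emptyDir`, as described in the linked Initial Operator Configuration section.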
# Worker node
This step involves decommissioning the bootstrap node and deleting the associated HPE Synergy server profile. Using iPXE, boot the compute nodes associated with the worker node profile, which has a second volume for the local storage setup. The root user is not active by default in RHCOS, so root login is not available; instead, log in as the core user.
# Complete the installation of Red Hat OpenShift Container Platform 4 and higher versions for user provisioned infrastructure
After the worker nodes boot up successfully, run oc get nodes from the bastion host. The admin can see the worker nodes as part of the OpenShift cluster. Run the OpenShift installation tool to complete the installation. For more information, see section Completing installation on user-provisioned infrastructure (opens new window). After this process completes, it provides the URL for the Red Hat OpenShift Container Platform 4 console along with the temporary kubeadmin user and a temporary password for login.
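The final check-and-complete sequence can be sketched as below; the directory path is an example and matches the folder used for the ignition files.

```
# confirm the workers have joined (approve any pending CSRs first)
oc get nodes

# wait for the installation to finish; prints the console URL and
# the temporary kubeadmin password on success
openshift-install wait-for install-complete --dir=/opt/ocp4 --log-level=info
```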
# Log in to the Red Hat OpenShift Container Platform 4 and higher versions of the console
Log in to the OpenShift Container Platform 4 console using the URL, username, and password provided on completion of the Red Hat OpenShift Container Platform 4 user-provisioned infrastructure installation. Set up a new user with cluster-admin privileges. For more information, see section Understanding authentication (opens new window).
# Standards used in this document
This document makes use of the following standard terms:
Installation user -- Individual or individuals responsible for carrying out the installation tasks to produce a functional Red Hat OpenShift Container Platform 4.6 solution on HPE Synergy.
Installer machine -- The system that is capable of connecting to various components within the solution and is used to run most of the key commands. In this solution, this machine also serves as the Ansible Engine host. For more information, see the section Installer machine in this document.