# Virtual node configuration

This section describes the process to deploy virtualization hosts for OpenShift and to configure the virtual machine master and worker nodes. At a high level, the steps are as follows:

  • Deploying the vSphere hosts

  • Creating the data center, cluster, and adding hosts into the cluster

  • Creating a datastore in vCenter

  • Deploying virtual master nodes

  • Deploying virtual worker nodes

NOTE

Hewlett Packard Enterprise utilized a consistent deployment method that allows for mixed deployments of virtual and physical master and worker nodes, and built this solution on bare metal using the Red Hat OpenShift Container Platform user-provisioned infrastructure. For more details on the bare metal provisioner, refer to https://cloud.redhat.com/openshift/install/metal/user-provisioned. If the intent is an entirely virtual environment, it is recommended that the installation user utilize Red Hat's virtual provisioning methods found at https://docs.openshift.com/container-platform/4.6/installing/installing_vsphere/installing-vsphere.html#installing-vsphere.

# Deploying vSphere hosts

Refer to the section Server Profiles in this document to create the server profile for the vSphere hosts.

After the server profile is successfully created, install the hypervisor. The following steps describe the process:

  1. From the HPE OneView interface, navigate to Server Profiles, select the ESXi-empty-volume server profile, and then select Actions > Launch Console.

  2. From the Remote Console window, choose Virtual Drives -> Image File CD-ROM/DVD from the iLO options menu bar.

  3. Navigate to the VMware ESXi 6.7 ISO file located on the installation system. Select the ISO file and click Open.

  4. If the server is powered off, power it on by selecting Power Switch -> Momentary Press.

  5. During boot, press F11 to open the Boot Menu and select iLO Virtual USB 3: iLO Virtual CD-ROM.

  6. When the VMware ESXi installation media has finished loading, proceed through the VMware user prompts. For the storage device, select the 40 GiB OS volume created on the HPE Image Streamer during server profile creation, and set the root password.

  7. Wait until the vSphere installation is complete.

  8. After the installation is complete, press F2 to enter the vSphere host configuration page, update the IP address, gateway, DNS, and hostname of the host, and enable SSH.

  9. After the host is reachable, proceed with the next section.
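Before proceeding, reachability and SSH access (enabled in step 8) can be verified from the installer machine, for example:

> ping <replace_with_esxi_host_ip>

> ssh root@<replace_with_esxi_host_ip>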

# HPE OneView for VMware vCenter

HPE OneView for VMware vCenter is a single, integrated plug-in application for VMware vCenter management. It enables the vSphere administrator to quickly obtain context-aware information about HPE servers and HPE storage in the VMware vSphere data center directly from within vCenter, and to easily manage physical servers and storage, datastores, and virtual machines. By providing the ability to clearly view and directly manage the HPE infrastructure from within the vCenter console, it increases the productivity of the VMware administrator and enhances the ability to ensure quality of service.

For more details, refer to the HPE documentation at https://h20392.www2.hpe.com/portal/swdepot/displayProductInfo.do?productNumber=HPVPR.

# Creating the data center and cluster, and adding hosts in VMware vCenter

This section assumes a VMware vCenter server is available within the installation environment. A data center is a structure in VMware vCenter that contains clusters, hosts, and datastores. First the data center is created, then the cluster, and finally the hosts are added to the cluster.

To create the data center and a cluster enabled with vSAN and DRS, and to add the hosts, the installation user needs to edit the vault file and the variables YAML file. Using an editor, open the file /opt/hpe/solutions/ocp/hpe-solutions-openshift/synergy/scalable/vsphere/vcenter/roles/prepare_vcenter/vars/main.yml and provide the names for the data center, cluster, and vSphere hosts. A sample input file follows; the installation user should modify it to suit the environment.

In the Ansible vault file (secret.yml) found at /opt/hpe/solutions/ocp/hpe-solutions-openshift/synergy/scalable/vsphere/vcenter, provide the vCenter and the vSphere host credentials.

    # vSphere host credentials
    vsphere_username: <username>
    vsphere_password: <password>

    # vCenter hostname/IP address and credentials
    vcenter_hostname: x.x.x.x
    vcenter_username: <username>
    vcenter_password: <password>
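If secret.yml is stored in plain text, encrypt it with Ansible Vault so the credentials are protected at rest; the vault password chosen here is requested later by the --ask-vault-pass option:

> cd /opt/hpe/solutions/ocp/hpe-solutions-openshift/synergy/scalable/vsphere/vcenter

> ansible-vault encrypt secret.yml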

NOTE

This section assumes all the virtualization hosts share a common username and password. If they do not, it is up to the installation user to add the virtualization hosts to the appropriate cluster.

Variables for running the playbook can be found at /opt/hpe/solutions/ocp/hpe-solutions-openshift/synergy/scalable/vsphere/vcenter/roles/prepare_vcenter/vars/main.yml.

    # custom name for the data center to be created
    datacenter_name: datacenter

    # custom name of the compute cluster with the ESXi hosts for the management VMs
    management_cluster_name: management-cluster

    # hostname or IP address of the vSphere hosts utilized for the management nodes
    vsphere_host_01: 10.0.x.x
    vsphere_host_02: 10.0.x.x
    vsphere_host_03: 10.0.x.x

After the variable files are updated with the appropriate values, execute the following command within the installer VM to create the data center, clusters, and add hosts into respective clusters.

> cd /opt/hpe/solutions/ocp/hpe-solutions-openshift/synergy/scalable/vsphere/vcenter/

> ansible-playbook playbooks/prepare_vcenter.yml --ask-vault-pass
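For reference, the tasks performed by this playbook are broadly equivalent to the following community.vmware sketch. The module calls below illustrate the approach and reuse the variable names from the samples above; they are not the actual contents of the prepare_vcenter role.

    - name: Create the data center
      community.vmware.vmware_datacenter:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        datacenter_name: "{{ datacenter_name }}"
        state: present

    - name: Create the management cluster in the data center
      community.vmware.vmware_cluster:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        datacenter: "{{ datacenter_name }}"
        cluster_name: "{{ management_cluster_name }}"
        state: present

    - name: Add a vSphere host to the management cluster
      community.vmware.vmware_host:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        datacenter: "{{ datacenter_name }}"
        cluster: "{{ management_cluster_name }}"
        esxi_hostname: "{{ vsphere_host_01 }}"
        esxi_username: "{{ vsphere_username }}"
        esxi_password: "{{ vsphere_password }}"
        state: present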

# Creating a Datastore in vCenter

A datastore needs to be created in VMware vCenter from the volume carved out of HPE Storage SANs to store the VMs. The following are the steps to create a datastore in vCenter:

  1. From the vSphere Web Client navigator, right-click the cluster, select Storage from the menu, and then select New Datastore.

  2. From the Type page, select VMFS as the Datastore type and click Next.

  3. Enter the datastore name and if necessary, select the placement location for the datastore and click Next.

  4. Select the device to use for the datastore and click Next.

  5. From VMFS version page, select VMFS 6 and click Next.

  6. Define the following configuration requirements for the datastore as per the installation environment and click Next.

    a. Specify partition configuration

    b. Datastore Size

    c. Block Size

    d. Space Reclamation Granularity

    e. Space Reclamation Priority

  7. On the Ready to complete page, review the Datastore configuration and click Finish.
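The wizard steps above can also be automated. A minimal sketch using the community.vmware.vmware_host_datastore module follows, assuming a VMFS datastore; the device name is a placeholder for the SAN volume presented to the host:

    - name: Create a VMFS 6 datastore on a vSphere host
      community.vmware.vmware_host_datastore:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        esxi_hostname: "{{ vsphere_host_01 }}"
        datastore_name: <datastore_name>
        datastore_type: vmfs
        vmfs_device_name: <replace_with_device_name>  # placeholder, e.g., the naa.* identifier of the SAN volume
        vmfs_version: 6
        state: present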

NOTE

If virtual worker nodes are utilized, repeat this section to create a datastore to store the worker virtual machines.

# Red Hat OpenShift Container Platform sizing

Red Hat OpenShift Container Platform sizing varies depending on the requirements of the organization and type of deployment. This section highlights the host sizing details recommended by Red Hat.

| Resource | Bootstrap node | Master node | Worker node |
|----------|----------------|-------------|-------------|
| CPU | 4 | 4 | 4 |
| Memory | 16 GB | 16 GB | 16 GB |
| Disk storage | 120 GB | 120 GB | 120 GB |

Disk partitions on each of the nodes are as follows.

  • /var -- 40GB

  • /usr/local/bin -- 1GB

  • Temporary directory -- 1GB

NOTE

Sizing for worker nodes is ultimately dependent on the container workloads and their CPU, memory, and disk requirements.

For more information about Red Hat OpenShift Container Platform sizing, refer to the Red Hat OpenShift Container Platform 4.6 product documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/scalability_and_performance/index.

# Deploying virtual master nodes

This section outlines the steps to create the virtual machines used as the master nodes.

NOTE

This section utilizes vSphere affinity and anti-affinity rules to ensure that no two master nodes reside on the same vSphere host, so it is essential that vMotion is enabled on all the vSphere hosts. If it is not enabled, select the vSphere host in the VMware vCenter server user interface, click Configure -> Networking -> VMkernel adapters -> Management Network -> Edit, and select the vMotion checkbox to enable vMotion.

To create the virtual machines for the OpenShift master nodes, edit the variables file. Using an editor such as Vim or Nano, open the file /opt/hpe/solutions/ocp/hpe-solutions-openshift/synergy/scalable/vsphere/virtual_nodes/roles/deploy_vm/vars/main.yml. The variables file contains information about the VMs, vCenter, hostnames, IP addresses, memory, and CPU. A sample variables file follows; the installation user should modify it to suit the target environment.

    # Name of the Data center

    datacenter_name: <datacentername>

    # Name of the compute clusters with the ESXi hosts for Management VMs

    management_cluster_name: <data_cluster_name>

    # Name of the Datastore to store the VMs

    management_datastore_name: <datastore_name>

    # Name of the coreOS guest image

    guest_template: coreos64Guest

    # Disk size in GB/GiB

    bootstrap_disk: 120

    master_disk: 120

    lb_disk: 50

    # number of CPUs

    bootstrap_cpu: 4

    master_cpu: 4

    lb_cpu: 4

    # Memory size in MB/MiB

    bootstrap_memory: 16400

    master_memory: 16400

    lb_memory: 16400

    gateway: <replace_with_gateway_ip>

    dns_server: <replace_with_dns_server_ip>

    domain: <replace_with_domain_name>

    # name of the master, bootstrap and lb nodes < short names, not the FQDN >

    bootstrap01_name: <bootstrap01_host_name>

    master01_name: <master01_host_name>

    master02_name: <master02_host_name>

    master03_name: <master03_host_name>

    lb01_name: <lb01_host_name>

    domain_name: "<sub_domain>"

    # Network names for the management network

    datacenter_network_name: "<network_name>"

    # vSphere affinity & anti-affinity rules

    affinity_rule_name: "vsphere-affinity-rule"

    anti_affinity_rule_name: "vsphere-anti-affinity-rule"

After the variable file is updated, execute the following command from the installer machine to deploy the specified VMs.

> cd /opt/hpe/solutions/ocp/hpe-solutions-openshift/synergy/scalable/vsphere/virtual_nodes
> ansible-playbook playbooks/deploy_vm.yml --ask-vault-pass

The deploy_vm.yml playbook creates three VMs to be used as master nodes, one VM to be used as the load balancer node, and one VM to be used as the bootstrap node. All the master VMs are deployed on different hosts, whereas the bootstrap and load balancer (HAProxy) VMs are deployed together on any single host.

Allow some time for the vSphere rules to be applied to the VMs. The rules can be viewed at vCenter server -> Datacenter -> Cluster -> Configure -> VM/Host Rules. The three master nodes are part of the vsphere-anti-affinity-rule, so each master VM resides on a different vSphere host. The bootstrap and load balancer VMs are part of the vsphere-affinity-rule and are co-resident on one of the three vSphere hosts.
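For reference, rules of this kind can be expressed with the community.vmware.vmware_vm_vm_drs_rule module. The following sketch illustrates the anti-affinity rule for the master VMs; it is an illustration of the technique, not the playbook's actual tasks:

    - name: Keep the master VMs on separate vSphere hosts
      community.vmware.vmware_vm_vm_drs_rule:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        cluster_name: "{{ management_cluster_name }}"
        drs_rule_name: vsphere-anti-affinity-rule
        vms:
          - "{{ master01_name }}"
          - "{{ master02_name }}"
          - "{{ master03_name }}"
        affinity_rule: false  # false creates an anti-affinity (keep apart) rule
        enabled: true
        mandatory: true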

It is recommended to ensure the Boot Delay is long enough to enable OS installation via PXE server.
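If the default delay proves too short, the boot delay can be raised per VM, either through Edit Settings -> VM Options -> Boot Options in vCenter or with a module such as community.vmware.vmware_guest_boot_manager. A minimal sketch follows; the 10-second delay is an assumption to be tuned for the environment:

    - name: Increase the VM boot delay to give the PXE server time to respond
      community.vmware.vmware_guest_boot_manager:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        name: "{{ master01_name }}"
        boot_delay: 10000  # milliseconds; assumed value, adjust as needed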

After the virtual machines are successfully created, refer to the following steps to install the operating system on the bootstrap node and the master nodes:

  1. Ensure that the location of the Ignition files for the corresponding nodes is updated in the PXE configuration files.

  2. Ensure that the MAC address of the network adapter in the VM is mapped to the corresponding IP address in the DHCP configuration file.

  3. Ensure that the load balancer server is up and running.

  4. From the VMware vCenter Server, select the VM, and launch the VM Remote console.

  5. From the Remote Console window, power on the VM.

  6. While booting, select the appropriate OS label.

  7. Wait until the OS installation is complete.

  8. Verify the installation by logging on to the node from the installer VM using the following command.

> ssh core@<replace_with_node_fqdn_or_ip>
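Once logged in, the installed release can be confirmed, for example:

> ssh core@<replace_with_node_fqdn_or_ip> cat /etc/os-release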

After the RHCOS master nodes are ready, refer to the section Red Hat OpenShift Container Platform deployment in this document to create the OpenShift 4 cluster.

# Deploying virtual worker nodes

This section outlines the steps to create virtual machines and configure them to be used as worker nodes.

# Creating virtual machines

This section outlines the steps to create virtual machines.

  1. Log in to vCenter using the Web Client and select an ESXi host. Right-click the host and then click New Virtual Machine to open the New Virtual Machine wizard.

  2. From Select a creation type, select Create a new virtual machine and click Next.

  3. Enter a unique Name for the VM and select the Datacenter. Click Next.

  4. Select the Cluster on which the VM can be deployed. Click Next.

  5. Select the Datastore on which the VM can be stored and click Next.

  6. On the Select compatibility page, choose ESXi 6.7 and later and click Next.

  7. On the Select a guest OS page, set the Guest OS family to Linux and the Guest OS version to Red Hat Enterprise Linux 7 (64-bit) for a RHEL worker or Red Hat CoreOS for a Red Hat CoreOS worker, and click Next.

  8. On the Customize hardware page, configure the virtual hardware with 4 CPUs, 16 GB of memory, and dual 150 GB hard disks as per the requirement, and attach the operating system image from the datastore. Select the Connect At Power On option and click Next.

  9. Review the virtual machine configuration before deploying the virtual machine and click Finish to complete the New Virtual Machine wizard.
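Where several workers are needed, the wizard entries above can also be scripted. A hedged sketch using the community.vmware.vmware_guest module follows; the VM name worker01 is a placeholder and the values mirror the wizard steps:

    - name: Create a worker VM with the hardware described above
      community.vmware.vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        datacenter: "{{ datacenter_name }}"
        cluster: "{{ management_cluster_name }}"
        name: worker01  # placeholder VM name
        guest_id: coreos64Guest  # use rhel7_64Guest for a RHEL worker
        disk:
          - size_gb: 150
            type: thin
            datastore: "{{ management_datastore_name }}"
        hardware:
          num_cpus: 4
          memory_mb: 16384
        networks:
          - name: "{{ datacenter_network_name }}"
        state: poweredoff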

# Red Hat CoreOS worker nodes

Refer to the section Creating virtual machines in this document to create the virtual machines. After the virtual machines are successfully created, refer to the following steps to install the operating system on the worker nodes:

  1. Ensure that the location of the Ignition files for the corresponding nodes is updated in the PXE configuration files.

  2. Ensure that the MAC address of the network adapter in the VM is mapped to the corresponding IP address in the DHCP configuration file.

  3. Ensure that the load balancer server is up and running.

  4. From the VMware vCenter Server, select the VM and launch the VM Remote console.

  5. From the Remote Console window, power on the VM.

  6. While booting, select the appropriate OS label.

  7. Wait until the OS installation is complete.

  8. Verify the installation by logging on to the node from the installer VM using the following command.

> ssh core@<replace_with_node_fqdn_or_ip>

After the RHCOS worker nodes are up and running, refer to the section Adding Red Hat CoreOS worker nodes in this document to add them to the OpenShift 4 cluster.

# RHEL 7.6 worker nodes

Refer to the section Creating virtual machines in this document to create virtual machines. After the virtual machines are successfully created, follow these steps to install the operating system on the worker nodes:

  1. Ensure that the location of the Ignition files for the corresponding nodes is updated in the PXE configuration files.

  2. Ensure that the MAC address of the network adapter in the VM is mapped to the corresponding IP address in the DHCP configuration file.

  3. Ensure that the load balancer server is up and running.

  4. From the VMware vCenter Server, select the VM, and launch the VM Remote console.

  5. From the Remote Console window, power on the VM.

  6. While booting, select the appropriate OS label.

  7. Wait until the OS installation is complete.

  8. Verify the installation by logging on to the node from the installer VM using the following command.

> ssh root@<node_fqdn_or_ip_address>

Once the RHEL 7.6 nodes are reachable, refer to the section Preparing worker nodes with RHEL in this document to prepare the RHEL worker nodes. After preparing the worker nodes, refer to the section Adding RHEL 7.6 worker nodes in this document to add them to the OpenShift 4 cluster.