# Worker node configuration

This section outlines the steps to configure the automation environment and then use it to create server profiles for all HPE Synergy nodes and install the RHEL operating system on each of them.

# Preparing the execution environment

This section provides a detailed overview and steps to configure the components deployed for this solution.

# Installer machine

This document assumes that a server running Red Hat Enterprise Linux 7.6 with the following configuration exists within the deployment environment and is accessible to the installation user for use as the installer machine.

  1. At least 200 GB of free disk space (primarily in the "/" partition), 4 CPU cores, and 8 GB of RAM.

  2. One (1) network interface with a static IP address configured on the same network as the management plane of the bare-metal servers, with access to the internet.

In this solution, a virtual machine acts as the installer machine, and the same host is used as the Ansible Engine host.

# Set up the prerequisites

Log in to the installer VM and perform the following steps:

  1. Execute the following command to register the host with Red Hat and automatically attach a subscription.
> subscription-manager register --username=<username> --password=<password> --auto-attach
  2. Disable all repositories and enable only the repositories required for the installer VM.
> subscription-manager repos --disable="*" \
  --enable="rhel-7-server-rpms" \
  --enable="rhel-7-server-extras-rpms" \
  --enable="rhel-7-server-optional-rpms" \
  --enable="rhel-server-rhscl-7-rpms"

Perform the following steps to set up Python, Ansible, and the required repositories.

  1. Use the following commands to install and enable Python 3.6.x.
> yum -y install rh-python36
> scl enable rh-python36 bash
  2. Use the following commands to create and activate a Python 3 virtual environment for deploying this solution.
> python3 -m venv <virtual_environment_name>
> source <virtual_environment_name>/bin/activate
  3. Use the following command to install Ansible 2.9.
> python3 -m pip install ansible==2.9.0
  4. Execute the following commands in the Ansible Engine to download the repositories.
> mkdir -p /opt/hpe/solutions/hpecp/
> cd /opt/hpe/solutions/hpecp/
> yum install -y git
> git clone https://github.com/HewlettPackard/hpe-solutions-hpecp.git
> git clone https://github.com/HewlettPackard/oneview-ansible.git
> cd oneview-ansible
> pip install -r requirements.txt
> export ANSIBLE_LIBRARY=/opt/hpe/solutions/hpecp/oneview-ansible/library
> export ANSIBLE_MODULE_UTILS=/opt/hpe/solutions/hpecp/oneview-ansible/library/module_utils
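
Optionally, verify the Ansible installation with ansible --version. The two exports above apply only to the current shell session; to persist them across logins, they can be appended to the shell profile, for example:

> echo 'export ANSIBLE_LIBRARY=/opt/hpe/solutions/hpecp/oneview-ansible/library' >> ~/.bashrc
> echo 'export ANSIBLE_MODULE_UTILS=/opt/hpe/solutions/hpecp/oneview-ansible/library/module_utils' >> ~/.bashrc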

NOTE

The value for the constant "BASE_DIR" referred to in this deployment guide is /opt/hpe/solutions/hpecp/hpe-solutions-hpecp/scripts/

# Ansible inventory file

The repositories cloned from GitHub include sample inventory files. The installation user should review these files (located on the installer VM at BASE_DIR/infrastructure/hosts and BASE_DIR/prepare_hpecp_hosts/hosts) and ensure that the information within them accurately reflects the installation environment.

Use an editor such as vi or nano to edit the inventory files.

> vi BASE_DIR/infrastructure/hosts
> vi BASE_DIR/prepare_hpecp_hosts/hosts

NOTE

The values provided in the variable files, inventory files, figures, and tables in this document are intended for reference purposes. The installation user is expected to update them to suit the local environment.

By default, the forks parameter in Ansible is limited to 5. Execute the following command to increase the number of hosts that Ansible configures in parallel when running these playbooks.

> export ANSIBLE_FORKS=<number_of_hosts_present_in_the_ansible_inventory_file>
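
Alternatively, the same limit can be set persistently in an Ansible configuration file such as ~/.ansible.cfg, or an ansible.cfg in the playbook directory (a standard Ansible mechanism):

[defaults]
forks = <number_of_hosts_present_in_the_ansible_inventory_file>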

# Ansible vault

A preconfigured Ansible vault file (secret.yml) is provided as a part of this solution, which consists of sensitive information to support the host and virtual machine deployment.

Run the following commands on the installer VM to edit the vault to match the installation environment.

> ansible-vault edit BASE_DIR/infrastructure/secret.yml
> ansible-vault edit BASE_DIR/prepare_hpecp_hosts/secret.yml

NOTE

The default password for the Ansible vault file is changeme.
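
Because this default password is publicly documented, consider changing it with the standard ansible-vault rekey command before use.

> ansible-vault rekey BASE_DIR/infrastructure/secret.yml
> ansible-vault rekey BASE_DIR/prepare_hpecp_hosts/secret.yml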

# Server profiles

Server profiles are used to configure the personality of the compute resources. A server profile allows a set of configuration parameters, including the firmware recipe, network and storage connectivity, BIOS tuning, boot order, and local storage configuration, to be templatized. These templates are key to delivering the "infrastructure as code" capabilities of the HPE Synergy platform. For this solution, a template is created that can be leveraged for the HPE Ezmeral Container Platform controller, Kubernetes master, and worker nodes.

# Automation overview

The folder infrastructure within the folder BASE_DIR contains Ansible scripts to automate the creation of server profiles in HPE OneView. The scripts are as follows:

  1. inputs.yml: This file contains input variables to create the server profile template and server profile.

  2. hosts: This is the inventory file which the installer VM will use to reference nodes for which the server profile needs to be created.

  3. secret.yml: This is an Ansible vault file that consists of sensitive information, such as HPE OneView IP address and credentials.

  4. playbooks: This folder consists of playbooks to upload the firmware bundle and create the server profile templates and server profiles for the controller, gateway, master, and worker nodes.

  5. roles: This folder consists of Ansible roles to upload the firmware bundle and create the server profile template and server profiles. Each role is associated with a set of tasks that accomplish the expected output; they reside in the tasks directory within the role, as sketched below.
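
Each role follows the standard Ansible role layout, along these lines (an illustrative sketch only; the role name shown is hypothetical):

roles/
└── create_server_profile/
    ├── tasks/
    │   └── main.yml       # tasks executed by the role
    └── defaults/
        └── main.yml       # default values for role variables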

NOTE

The parameter names are case sensitive. Ensure that parameter names and functions are accurately recorded for the installation environment. Any variation in parameter names and functions will result in the failure of automated configuration steps.

# Automated deployment of server profile

Perform the following steps to create server profiles for all the bare-metal nodes participating in the HPE Ezmeral Container Platform cluster.

  1. Update the inventory file found in the BASE_DIR/infrastructure directory within the installer VM to include the server hardware details. The installation user should update the values to suit their environment.
[server_profile_template]
"Frame1, bay 5" type="SY 480 Gen10 1" role="master"
"Frame1, bay 6" type="SY 480 Gen10 1" role="worker"
"Frame1, bay 7" type="SY 480 Gen10 1" role="gateway"
"Frame1, bay 8" type="SY 480 Gen10 1" role="controller"
[server_profile]
# format "enclosure serial number, server bay number", type=<server hardware type> role=<role(master/worker/controller) of the server> name=<server profile name>
"Frame1, bay 9" type="SY 480 Gen10 1" role="controller" name="cp_controller_01"
"Frame2, bay 9" type="SY 480 Gen10 1" role="controller" name="cp_controller_02"
"Frame3, bay 9" type="SY 480 Gen10 1" role="controller" name="cp_controller_03"
"Frame1, bay 5" type="SY 480 Gen10 1" role="gateway" name="cp_gateway_02"
"Frame2, bay 5" type="SY 480 Gen10 1" role="gateway" name="cp_gateway_02"
"Frame1, bay 4" type="SY 480 Gen10 1" role="master" name="cp_master_01"
"Frame2, bay 4" type="SY 480 Gen10 1" role="master" name="cp_master_02"
"Frame3, bay 4" type="SY 480 Gen10 1" role="master" name="cp_master_03"
"Frame1, bay 6" type="SY 480 Gen10 1" role="worker" name="cp_worker_01"
"Frame2, bay 6" type="SY 480 Gen10 1" role="worker" name="cp_worker_02"
"Frame3, bay 6" type="SY 480 Gen10 1" role="worker" name="cp_worker_03"
  2. Update the inputs.yml file found at BASE_DIR/infrastructure.
# Variables for creating the Server Profile and Server Profile Template as per the OneView configuration
# Example: enclosure_group: EG1
enclosure_group: <Enclosure group name as per OneView>

# Example: deployment_network_name: TwentyNet
deployment_network_name: <Deployment network name as per OneView>

# <iSCSI_A network name as per OneView>
iSCSI_A_network_name: <iSCSI_SAN_A primary iSCSI network name as per OneView>

# <iSCSI_B network name as per OneView>
iSCSI_B_network_name: <iSCSI_SAN_B secondary iSCSI network name as per OneView>

# Example: worker_template_name: hpecp_worker_template
#<Custom name for workerServerProfileTemplate>
worker_template_name: <Custom name for worker_SPT>

#<Custom name for gatewayServerProfileTemplate>
gateway_template_name: <Custom name for gateway_SPT>

#<Custom name for MasterServerProfileTemplate>
master_template_name: <Custom name for master_SPT>

#<Custom name for ControllerServerProfileTemplate>
controller_template_name: <Custom name for controller_SPT>

# This variable is required when the user wants to upload a firmware bundle or firmware baseline for the Compute Module of HPE Synergy to HPE OneView.
# Path on the installer machine where the firmware bundle or firmware baseline for the Compute Module of HPE Synergy is present.
# This path should end with "/" and should not include the firmware file name.
# Example: fw_bundle_path: BASE_DIR/infrastructure/roles/upload_firmware_bundle/files/
fw_bundle_path: <Firmware Bundle file path>

# Provide the firmware file name with extension
# fw_bundle_file_name: HPE_Synergy_Custom_SPP2019.12_20200326_Z7550-96866.iso
fw_bundle_file_name: <Firmware file name with extension>

###### Enable or disable firmware update, BIOS settings, and iLO settings by changing the flag to true or false #########
# manage firmware
managefw: true

# manage BIOS security settings
manageBios: true
bioscomplianceControl: Checked

# manage iLO User Account settings
manageilo: true
ilocomplianceControl: Checked
  3. Update the secret.yml file found at BASE_DIR/infrastructure.
# OneView IP address and credentials.
oneview_ip: x.x.x.x
oneview_username: <username>
oneview_password: <password>
  4. After the inventory and variable files are updated with the appropriate values, execute the following commands on the installer VM to upload the firmware bundle and create the server profile template and server profiles.
> cd BASE_DIR/infrastructure
> ansible-playbook -i hosts playbooks/upload_firmware_bundle.yml --ask-vault-pass
> ansible-playbook -i hosts playbooks/server_profile_template_fw.yml --ask-vault-pass
> ansible-playbook -i hosts playbooks/server_profile_fw.yml --ask-vault-pass
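
Optionally, run a syntax check first (a standard ansible-playbook option) to catch YAML or variable errors before any changes are made in HPE OneView, for example:

> ansible-playbook -i hosts playbooks/server_profile_template_fw.yml --syntax-check --ask-vault-pass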

NOTE

BASE_DIR is defined and set in the Installer machine section within this document.

# Deploying the operating system on bare-metal nodes

This section outlines the steps to programmatically deploy the RHEL 7.7 operating system on all the bare-metal nodes participating in the HPE Ezmeral Container Platform cluster.

# Automation overview

The folder os_deployment within the folder BASE_DIR consists of scripts to deploy the operating system on bare-metal servers using virtual media. An overview of the scripts is as follows:

  1. deploy_os.py: This is the main file which validates the prerequisites and triggers the OS deployment based on the type of the operating system.

  2. setup.sh: This script configures the prerequisite environment required for the successful execution of OS deployment script.

  3. requirements.txt: This file lists the prerequisites required for the OS installation; they are installed by the setup.sh script.

  4. kickstart_files: This folder consists of the kickstart files for RHEL installation.

  5. input_files: This folder consists of 2 input files, config.json and server_details.json.

    a. config.json: This file takes the input details related to web server and operating system to be installed.

    b. server_details.json: This file takes the input details of custom configuration for servers on which operating system is to be installed.

  6. rhel_operations.py: This script includes the functions relevant for the creation of custom RHEL image based on the input configuration.

  7. ilo_operations.py: This script includes the functions relevant to iLO operations.

  8. image_operations.py: This script includes the functions relevant to OS image related operations.

  9. redfish_object.py: This script includes the functions relevant to iLO object related operations.

Prerequisites

  1. The RHEL 7.7 ISO image is present in the HTTP file path on the installer machine.

  2. An iLO account with administrative privileges is required for each server on which the OS is deployed using the automation described in this section.

# Automated deployment of operating system

Perform the following steps to install RHEL 7.7 operating system on all the bare-metal nodes participating in the HPE Ezmeral Container Platform cluster.

  1. Execute the following command to set up the prerequisite environment.

    > sh setup.sh
    
  2. Update the Ansible vault input file input_files/config.json with the details of OneView, the web server, and the operating system to be installed. Execute the following command to edit the config.json input file.

    > ansible-vault edit config.json
    

     Default password for the Ansible vault file config.json is changeme. Example values for the input configuration are as follows. The installation user is expected to update this file to suit the installation environment.

       {
       "HTTP_server_base_url" : "http://10.0.x.x/",
       "HTTP_file_path" : "/usr/share/nginx/html/",
       "OS_type" : "rhel7",
       "OS_image_name" : "<rhel_os_iso_image_name_with_extension>"
       }
    

    NOTE

    Acceptable value for "OS_type" variable is "rhel7" for RHEL 7.7.

  3. Update the Ansible vault input file input_files/server_details.json with the details of the servers on which the operating system is to be installed.

    Execute the following command to edit the server_details.json input file.

    > ansible-vault edit server_details.json
    

    NOTE

     It is essential to provide a hashed password for the "Host_Password" field in the server_details.json input file. Execute the following command with a password of your choice to generate an MD5-based hash, and provide its output in the "Host_Password" field.

    > openssl passwd -1 <Password>
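
     The output is an MD5-crypt string of the form $1$<salt>$<hash>; copy the entire string, including the $1$ prefix, into the "Host_Password" field.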
    

     Default password for the Ansible vault file server_details.json is changeme. Example values for the input configuration for deploying RHEL 7.7 are as follows. The installation user is expected to update this file to suit the installation environment.

    [{
    "Server_serial_number" : "MXxxxxxDN",
    "ILO_Address" : "10.x.x.x",
    "ILO_Username" : "<username_of_iLO_account_with_admin_privileges>",
    "ILO_Password" : "<password_of_iLO_account_with_admin_privileges>",
    "Hostname" : "master01.twentynet.local",
    "Host_IP" : "20.x.x.x",
    "Host_Username" : "root",
    "Host_Password" : "<hashed_valued_of_complex_password>",
    "Host_Netmask" : "255.x.x.x",
    "Host_Gateway" : "20.x.x.x",
    "Host_DNS" : "20.x.x.x",
    "Server_Role" : "master"
    },
    {
    "Server_serial_number" : "MXxxxxxDN",
    "ILO_Address" : "10.x.x.x",
    "ILO_Username" : "username",
    "ILO_Password" : "password",
    "Hostname" : "worker01.twentynet.local",
    "Host_IP" : "20.x.x.x",
    "Host_Username" : "root",
    "Host_Password" : "<hashed_valued_of_complex_password>",
    "Host_Netmask" : "255.x.x.x",
    "Host_Gateway" : "20.x.x.x",
    "Host_DNS" : "20.x.x.x",
    "Server_Role" : "worker",
    "Additional_Networks" : [{
    "IP" : "30.0.x.x",
    "Netmask" : "255.255.x.x"
    },
    {
    "IP" : "40.0.x.x",
    "Netmask" : "255.255.x.x"
    }]
    }]
    

    NOTE

    Acceptable value for "Server_Role" variable is "master" for RHEL controller node, RHEL gateway load balancer node or RHEL master node and "worker" for RHEL worker node.

  4. Execute the script to deploy the operating system.

    > python3 deploy_os.py
    

    NOTE

     Generic settings configured as part of the kickstart files for RHEL are as follows. It is recommended that the user reviews and modifies the kickstart files (kickstart_files/ks_rhel7.cfg and kickstart_files/ks_rhel7_worker.cfg) to suit their requirements. An illustrative excerpt follows the list below.

    • Graphical installation

    • Language - en_US

    • Keyboard & layout - US

    • System service chronyd disabled

    • timezone - Asia/Kolkata

    • Bootloader config

      • bootloader location=mbr

      • clearpart --all --initlabel

      • ignoredisk --only-use=sda

      • part swap --asprimary --fstype="swap" --size=77263

      • part /boot --fstype xfs --size=300

      • part /boot/efi --fstype="vfat" --size=1024

      • part / --size=500 --grow

     • NIC teaming is performed with devices ens3f0 and ens3f1; the team is assigned the Host_IP defined in the input_files/server_details.json input file.

     • For the worker nodes, devices ens3f4 and ens3f5 are assigned the iSCSI_A and iSCSI_B IP addresses, respectively, which are defined in the input_files/server_details.json file.
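
     Taken together, the language, service, bootloader, and partitioning settings above correspond to kickstart directives along the following lines. This is an illustrative excerpt only; the authoritative versions are in kickstart_files/ks_rhel7.cfg and kickstart_files/ks_rhel7_worker.cfg.

     # illustrative kickstart excerpt reflecting the settings listed above
     lang en_US
     keyboard us
     timezone Asia/Kolkata
     services --disabled=chronyd
     bootloader --location=mbr
     clearpart --all --initlabel
     ignoredisk --only-use=sda
     part swap --asprimary --fstype="swap" --size=77263
     part /boot --fstype xfs --size=300
     part /boot/efi --fstype="vfat" --size=1024
     part / --size=500 --grow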