# Preparing the execution environment

This section provides a detailed overview and steps to configure the components deployed for this solution.

# Installer machine

This document assumes that a server running Red Hat Enterprise Linux (RHEL) 7.6 exists within the deployment environment, is accessible to the installation user, and has internet connectivity; it is used as the installer machine. In this solution, a virtual machine acts as the installer machine, and the same host also serves as the Ansible Engine host.

# Non-root user access

The industry-wide security best practice is to avoid using the root account for routine administration of Linux-based servers. Certain operations still require root privileges; in those cases, it is best to use the sudo command to obtain short-term privilege escalation. The sudo command allows programs and commands to be run with the security privileges of another user (root by default) and can restrict permissions to specific groups, users, and individual commands.

The root user is not active by default in RHCOS. Instead, log in as the core user.

Use the following steps to create a non-root user for the OpenShift installation process:

  1. Log in to the installer VM as root. Refer to the Installer machine section in this document for more details about the installer VM.

  2. Execute the following command to create a non-root user.

> adduser openshift_admin
  3. Execute the following command to set a password for the non-root user.
> passwd openshift_admin
  4. Add the non-root user's group to the sudoers file. Use the following command to find the non-root user's group.
> id -Gn openshift_admin
openshift_admin
  5. Edit the sudoers file with the following command and add an entry for the non-root user's group.
> visudo

Add an entry for the non-root user in the sudoers file as follows. This allows the listed commands to be run with sudo from the non-root user's environment.

openshift_admin	ALL=(ALL) /usr/bin/chmod, /bin/yum, /usr/bin/yum-config-manager, /sbin/subscription-manager, /usr/bin/git, /bin/vi, /bin/vim, /bin/mkdir, /usr/bin/cat, /usr/bin/echo, /usr/bin/python, /usr/bin/sed, /usr/bin/chown, /bin/sh, /bin/cp, /bin/ansible-vault, /usr/bin/scp, /usr/bin/rpm, /usr/sbin/chkconfig, /usr/bin/systemctl, /usr/bin/journalctl, /usr/bin/curl, /usr/bin/tar, /usr/bin/genisoimage, /usr/bin/mount, /usr/bin/umount, /usr/bin/rsync, /usr/bin/find, /usr/bin/mv, /usr/bin/nano, /usr/sbin/dnsmasq, /usr/sbin/setsebool
  6. Execute the following command to switch to the non-root user.
> su openshift_admin
  7. Register the host with Red Hat and attach it to a subscription pool using the following commands.
> sudo yum -y install subscription-manager
> sudo subscription-manager register --username=<username> --password=<password> --auto-attach
> sudo subscription-manager attach --pool=<pool_id>
  8. Disable all repositories and enable only the repositories required for the installer VM.
> sudo yum -y install yum-utils
> sudo yum-config-manager --disable "*"
> sudo subscription-manager repos --disable="*" \
  --enable="rhel-7-server-rpms" \
  --enable="rhel-7-server-extras-rpms" \
  --enable="rhel-7-server-optional-rpms" \
  --enable="rhel-server-rhscl-7-rpms"
  9. Use the following command to install Ansible.
> sudo yum -y install ansible
  10. Install the Git package on the installer VM for performing Git-related operations.
> sudo yum -y install git
  11. Execute the following commands to download the hpe-solutions-openshift repository.
> mkdir -p /opt/hpe/solutions/ocp

> cd /opt/hpe/solutions/ocp

> sudo git clone https://github.com/HewlettPackard/hpe-solutions-openshift.git

> sudo chown -R openshift_admin:openshift_admin /opt/hpe/solutions/ocp
  12. Create an environment variable BASE_DIR pointing to the following path within the cloned repository.
> export BASE_DIR=/opt/hpe/solutions/ocp/hpe-solutions-openshift/synergy/scalable
  13. After the hpe-solutions-openshift repository is downloaded, navigate to the path $BASE_DIR/installer/playbooks. The scripts within this directory assist in configuring the prerequisites for the environment. The details of the scripts are as follows:
-   python_env.sh : This script installs Python 3.

-   ansible_env.sh : This script creates a Python 3 virtual environment and installs Ansible within the virtual environment.

-   download_oneview_packages.sh : This script installs the prerequisite modules such as HPE oneview-ansible, HPE oneview-python and VMware pyVmomi within the virtual environment.
  14. Steps to configure the prerequisite environment are as follows:
-   Change the directory to $BASE_DIR/installer/playbooks

    ```bash
    $ cd $BASE_DIR/installer/playbooks
    ```

-   Execute the following commands to set up the prerequisite Python environment.

    ```bash
    $ sudo yum -y install @development
    $ sudo sh python_env.sh
    ```

-   Execute the following command to enable Python 3.

    ```bash
    $ scl enable rh-python36 bash
    ```

-   Execute the following command to configure the Ansible environment.

    ```bash
    $ sudo sh ansible_env.sh
    ```

-   Execute the following command to download the HPE OneView packages.

    ```bash
    $ sudo sh download_oneview_packages.sh
    ```

-   Enable the virtual environment with the following command.

    ```bash
    $ source ../ocp_venv/bin/activate
    ```

-   Execute the following command to set the environment variables.

    ```bash
    $ export ANSIBLE_LIBRARY=$BASE_DIR/installer/library/oneview-ansible/library

    $ export ANSIBLE_MODULE_UTILS=$BASE_DIR/installer/library/oneview-ansible/library/module_utils
    ```

# Kubernetes manifests and ignition files

Manifests and ignition files define the master node and worker node configuration and are key components of the Red Hat OpenShift Container Platform 4 installation.

Before creating the manifest files and ignition files, it is necessary to download the Red Hat OpenShift 4 packages. Execute the following command on the installer VM to download the required packages.

$ cd $BASE_DIR/installer

$ ansible-playbook playbooks/download_ocp_package.yml

The OpenShift packages downloaded by the download_ocp_package.yml playbook are placed on the installer VM at $BASE_DIR/installer/library/openshift_components. Any ad hoc OpenShift commands should be executed from within this folder.

To create the manifest files and the ignition files, edit the install-config.yaml file provided in the directory $BASE_DIR/installer/ignitions to include the following details:

  • baseDomain : Base domain of the DNS which hosts Red Hat OpenShift Container Platform.

  • name : Name of the OpenShift cluster. This is the same as the new domain created in DNS.

  • replicas : Update this field to reflect the corresponding number of master or worker instances required for the OpenShift cluster as per the installation environment requirements. It is recommended to have a minimum of 3 master nodes and 2 worker nodes per OpenShift cluster.

  • clusterNetworks : This field is pre-populated by Red Hat. Update this field only if a custom cluster network is to be used.

  • pullSecret : Update this field with the pull secret for the Red Hat account. Log in to the Red Hat account at https://cloud.redhat.com/openshift/install/metal/user-provisioned and retrieve the pull secret.

  • sshKey : Update this field with the SSH public key of the installer VM. Generate the key with the following command and copy the contents of the resulting .pub file into the install-config.yaml file.

    $ ssh-keygen
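
    For a non-interactive run, the key can also be generated with an empty passphrase and the public key printed for pasting into install-config.yaml (a convenience sketch; the file path is illustrative):

    ```bash
    # Generate an RSA key pair with no passphrase at an illustrative path
    ssh-keygen -t rsa -b 4096 -N '' -f /tmp/ocp_install_key -q
    # The contents of the .pub file are what the sshKey field expects
    cat /tmp/ocp_install_key.pub
    ```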
    

    A sample install-config.yaml file appears as follows. Update the fields to suit the installation environment.

    ```yaml
    apiVersion: v1
    baseDomain: <name of the base domain>
    compute:
    - hyperthreading: Enabled
      name: worker
      replicas: 2
    controlPlane:
      hyperthreading: Enabled
      name: master
      replicas: 3
    metadata:
      name: <name of the cluster, same as the new domain under the base domain created>
    networking:
      clusterNetworks:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      networkType: OpenShiftSDN
      serviceNetwork:
      - 172.30.0.0/16
    platform:
      none: {}
    pullSecret: '<pull secret provided as per the Red Hat account>'
    sshKey: '<ssh key of the installer VM>'
    ```

    Execute the following command on the installer VM to create the manifest files and the ignition files required to install Red Hat OpenShift.

    $ cd $BASE_DIR/installer
    $ ansible-playbook playbooks/create_manifest_ignitions.yml
    $ sudo chmod +r ignitions/*.ign
    

The ignition files are generated on the installer VM in the folder $BASE_DIR/installer/ignitions.

Note

The ignition files embed certificates that expire 24 hours after generation, so it is critical that the cluster is created within 24 hours of generating the ignition files. If more than 24 hours elapse, delete the files from the directory where the ignition files were saved and regenerate them.
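A quick way to check whether previously generated ignition files are past the 24-hour window is to look for .ign files older than 1440 minutes (a convenience sketch, not part of the solution scripts; the directory path follows this guide's layout):

```bash
# Any file listed by this command is older than 24 hours and must be
# deleted and regenerated before the cluster installation proceeds.
ign_dir="${BASE_DIR:-.}/installer/ignitions"
if [ -d "$ign_dir" ]; then
    find "$ign_dir" -name '*.ign' -mmin +1440
fi
```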

# OS deployment

# PXE server

In this solution, a PXE server is used for OS deployment and is configured on CentOS Linux release 7.6.1810 (Core). The PXE server uses the FTP service for file distribution, but it can be altered to support HTTP or NFS.

This section highlights the steps to configure a PXE server:

  1. Log in to the CentOS server that is to be configured as a PXE server, as a user that can run commands as root via sudo.

  2. Install the DHCP, TFTP (client and server), syslinux, vsftpd (FTP server), and xinetd packages using the following command.

    $ sudo yum install dhcp tftp tftp-server syslinux vsftpd xinetd
    
  3. Update the DHCP configuration file at /etc/dhcp/dhcpd.conf with the MAC addresses, IP addresses, DNS, and routing details of the installation environment. Domain search is optional. A sample DHCP configuration file is shown as follows.

    ddns-update-style interim;
    ignore client-updates;
    authoritative;
    allow booting;
    allow bootp;
    
    # internal subnet for my DHCP Server
    subnet 20.0.x.x netmask 255.0.0.0 {
    range 20.0.x.x 20.0.x.x;
    deny unknown-clients;
    option domain-name-servers 20.x.x.x;
    option domain-name "twentynet.local";
    option routers 20.x.x.x;
    option broadcast-address 20.255.255.255;
    default-lease-time 600;
    max-lease-time 7200;
    next-server 20.x.x.x;
    filename "pxelinux.0";
    }
    
    #######################################
    host bootstrap {
    hardware ethernet 00:50:56:xx:98:df;
    fixed-address 20.0.x.x;
    }
    host master01 {
    hardware ethernet 00:50:56:95:xx:82;
    fixed-address 20.0.x.x;
    }
    host worker01 {
    hardware ethernet 00:50:56:xx:ab:82;
    fixed-address 20.0.x.x;
    }
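
    As a quick review of the static reservations (a convenience one-liner, not part of the solution scripts), the host blocks in dhcpd.conf can be summarized with awk:

    ```bash
    # Print "hostname MAC IP" for every static host entry in dhcpd.conf
    conf=/etc/dhcp/dhcpd.conf
    if [ -f "$conf" ]; then
        awk '/^host /{h=$2}
             /hardware ethernet/{mac=$3; sub(/;$/, "", mac)}
             /fixed-address/{ip=$2; sub(/;$/, "", ip); print h, mac, ip}' "$conf"
    fi
    ```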
    
  4. Trivial File Transfer Protocol (TFTP) is used to transfer files from a server to clients without any authentication; in PXE-based environments it serves the network boot files. To configure the TFTP server, edit the configuration file /etc/xinetd.d/tftp: change the parameter 'disable = yes' to 'disable = no' and leave the other parameters as is. To edit the /etc/xinetd.d/tftp file, execute the following command.

    $ sudo vi /etc/xinetd.d/tftp
    

    The TFTP configuration file is shown below.

    service tftp
       {
    
            socket_type = dgram
            protocol = udp
            wait = yes
            user = root
            server = /usr/sbin/in.tftpd
            server_args = -s /var/lib/tftpboot
            disable = no
            per_source = 11
            cps = 100 2
            flags = IPv4
        }
    

    Network boot related files must be placed in the tftp root directory /var/lib/tftpboot. Run the following commands to copy the required network boot files to /var/lib/tftpboot/.

    $ sudo cp -v /usr/share/syslinux/pxelinux.0 /var/lib/tftpboot
    
    $ sudo cp -v /usr/share/syslinux/menu.c32 /var/lib/tftpboot
    
    $ sudo cp -v /usr/share/syslinux/memdisk /var/lib/tftpboot
    
    $ sudo cp -v /usr/share/syslinux/mboot.c32 /var/lib/tftpboot
    
    $ sudo cp -v /usr/share/syslinux/chain.c32 /var/lib/tftpboot
    
    $ sudo mkdir /var/lib/tftpboot/pxelinux.cfg
    
    $ sudo mkdir /var/lib/tftpboot/networkboot
    
  5. Copy the RHCOS 4 and RHEL 7.6 ISO files to the PXE server. Mount each ISO on the /mnt/ directory and then copy its contents to the local FTP server using the following commands.

    $ sudo mount -o loop <OS file name> /mnt/
    
    $ cd /mnt/
    
    $ sudo cp -av * /var/ftp/pub/
    
  6. Copy the kernel file (vmlinuz) and initrd file from /mnt to /var/lib/tftpboot/networkboot/ using the following commands.

    $ sudo cp /mnt/images/pxeboot/vmlinuz /var/lib/tftpboot/networkboot/
    
    $ sudo cp /mnt/images/pxeboot/initrd.img /var/lib/tftpboot/networkboot
    
  7. Unmount the ISO file using the following command.

    $ sudo umount /mnt/
    
  8. For RHEL nodes, create a kickstart file named rhel7.cfg under the folder /var/ftp/pub using the following command.

    $ sudo vi /var/ftp/pub/rhel7.cfg
    

    An example kickstart file is shown as follows. The installation user should create a kickstart file to meet the requirements of their installation environment.

    firewall --disabled
    # Install OS instead of upgrade
    install
    # Use FTP installation media
    url --url="ftp://<FTP_server_IP_address>/pub/rhel76/"
    # Root password
    # The root password can be given in plain text:
    # rootpw --plaintext <password>
    # or pre-hashed; the value below is a SHA-512 crypt hash supplied via --iscrypted
    rootpw --iscrypted $6$uiq8l/7xEWsYXhrvaEgan4N21yhLa8K.U7UA12Th3PD11GOXvEcI40gp
    # System authorization information
    auth --useshadow --passalgo=sha512
    # Use graphical install
    graphical
    firstboot --disable
    # System keyboard, timezone, language
    keyboard us
    timezone Europe/Amsterdam
    lang en_US
    # SELinux configuration
    selinux --disabled
    # Installation logging level
    logging --level=info
    # System bootloader configuration
    bootloader --location=mbr
    clearpart --all --initlabel
    part swap --asprimary --fstype="swap" --size=1
    part /boot --fstype xfs --size=300
    part pv.01 --size=1 --grow
    volgroup root_vg01 pv.01
    logvol / --fstype xfs --name=lv_01 --vgname=root_vg01 --size=1 --grow
    %packages
    @^minimal
    @core
    %end
    %addon com_redhat_kdump --disable --reserve-mb='auto'
    %end
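
    The rootpw value above is a SHA-512 crypt hash (the $6$ prefix matches the sha512 password algorithm). Assuming a system whose OpenSSL is version 1.1.1 or newer (older releases lack the -6 option), such a hash can be generated as follows:

    ```bash
    # Produce a SHA-512 crypt hash suitable for "rootpw --iscrypted"
    openssl passwd -6 '<password>'
    ```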
    
  9. Create a PXE menu.

    • Create a PXE menu file at the location /var/lib/tftpboot/pxelinux.cfg/default using the command.

      $ sudo vi /var/lib/tftpboot/pxelinux.cfg/default
      
    • For each of the OS boot options, provide the following details:

      • MENU LABEL : Custom name of the respective menu label.

      • KERNEL : Kernel details of the operating system.

      • APPEND : Boot arguments, including the path of the initrd along with the path of the kickstart file (for RHEL) or the ignition file (for RHCOS).

    A sample PXE menu is as follows.

    default menu.c32
    prompt 0
    timeout 30
    MENU TITLE LinuxTechi.com PXE Menu

    LABEL rhel76
    MENU LABEL RHEL76-Buedata
    KERNEL /rhel76/vmlinuz
    APPEND initrd=/rhel76/initrd.img inst.repo=ftp://<FTP_server_IP_address>/pub/rhel76 ks=ftp://<FTP_server_IP_address>/pub/rhel7.cfg

    LABEL rhcos-bootstrap
    MENU LABEL Install RHCOS4.3 sec-Bootstrap
    KERNEL /networkboot/rhcos-4.3.0-x86_64-installer-kernel
    APPEND ip=dhcp rd.neednet=1 initrd=/networkboot/rhcos-4.3.0-x86_64-installer-initramfs.img console=tty0 console=ttyS0 coreos.inst=yes coreos.inst.install_dev=sda coreos.inst.image_url=ftp://<FTP_server_IP_address>/pub/rhcos-4.3.0-x86_64-metal-bios.raw.gz coreos.inst.ignition_url=ftp://<FTP_server_IP_address>/pub/sec/bootstrap.ign

    LABEL rhcos-master
    MENU LABEL Install RHCOS4.3 sec-Master
    KERNEL /networkboot/rhcos-4.3.0-x86_64-installer-kernel
    APPEND ip=dhcp rd.neednet=1 initrd=/networkboot/rhcos-4.3.0-x86_64-installer-initramfs.img console=tty0 console=ttyS0 coreos.inst=yes coreos.inst.install_dev=sda coreos.inst.image_url=ftp://<FTP_server_IP_address>/pub/rhcos-4.3.0-x86_64-metal-bios.raw.gz coreos.inst.ignition_url=ftp://<FTP_server_IP_address>/pub/sec/master.ign

    LABEL rhcos-worker
    MENU LABEL Install RHCOS4.3 sec-Worker
    KERNEL /networkboot/rhcos-4.3.0-x86_64-installer-kernel
    APPEND ip=dhcp rd.neednet=1 initrd=/networkboot/rhcos-4.3.0-x86_64-installer-initramfs.img console=tty0 console=ttyS0 coreos.inst=yes coreos.inst.install_dev=sda coreos.inst.image_url=ftp://<FTP_server_IP_address>/pub/rhcos-4.3.0-x86_64-metal-bios.raw.gz coreos.inst.ignition_url=ftp://<FTP_server_IP_address>/pub/sec/worker.ign
    
  10. Start and enable xinetd, dhcpd, and vsftpd using the following commands.

    $ sudo systemctl start xinetd
    $ sudo systemctl enable xinetd
    $ sudo systemctl start dhcpd.service
    $ sudo systemctl enable dhcpd.service
    $ sudo systemctl start vsftpd
    $ sudo systemctl enable vsftpd
  11. Configure SELinux for FTP.

    $ sudo setsebool -P allow_ftpd_full_access 1
    
  12. Open the required ports in the firewall using the following firewall-cmd commands.

    $ sudo firewall-cmd --add-service=ftp --permanent
    
    $ sudo firewall-cmd --add-service=dhcp --permanent
    
    $ sudo firewall-cmd --reload
    

Note

It is crucial to generate ignition files, copy them to the TFTP server, and update the path in the PXE default file. For more information about generating the ignition files, refer to the Kubernetes manifests and ignition files section in this document.

# iPXE

This folder contains scripts to deploy an operating system onto servers using virtual media.

Prerequisites

The installer machine must have the following:

  1. A web server (preferably Nginx) configured.

# Installation

  1. Install the prerequisites.

    $ cd $BASE_DIR/os_deployment
    $ pip3 install -r requirements.txt
    

    Note

    BASE_DIR is defined in the Installer machine section.

  2. Update the server list in the deploy_os.py script with the details of the servers on which the operating system is to be installed.

    An example value is as follows:

     server = [{
         "Server_serial_number"  : "MXQ920023X",
         "ILO_Address"           : "10.0.x.x",
         "ILO_Username"          : "admin",
         "ILO_Password"          : "password",
         "Hostname"              : "master.tennet.local",
         "Host_IP"               : "x.x.x.x",
         "Host_Username"         : "root",
         "Host_Password"         : "Password",
         "Host_Netmask"          : "255.255.0.0",
         "Host_Gateway"          : "10.0.x.x",
         "Host_DNS"              : "10.0.x.x",
         "OS_image_name"         : "rhel-server-7.6-x86_64-dvd.iso",
         "Web_server_address"    : "10.0.x.x"
     },
     {
         "Server_serial_number"  : "MXQ93906KB",
         "ILO_Address"           : "10.0.x.x",
         "ILO_Username"          : "admin",
         "ILO_Password"          : "password",
         "Hostname"              : "worker.tennet.local",
         "Host_IP"               : "10.0.x.x",
         "Host_Username"         : "root",
         "Host_Password"         : "Password",
         "Host_Netmask"          : "255.255.0.0",
         "Host_Gateway"          : "10.0.x.x",
         "Host_DNS"              : "10.0.x.x",
         "OS_image_name"         : "rhel-server-7.6-x86_64-dvd.iso",
         "Web_server_address"    : "10.0.x.x"
     }]
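
     After editing the list, a quick syntax check of the script (a convenience sketch, assuming python3 is on the PATH) catches stray quotes or missing commas before deployment:

     ```bash
     # Byte-compile the script without running it; any syntax error aborts here
     if [ -f deploy_os.py ]; then
         python3 -m py_compile deploy_os.py && echo "deploy_os.py: syntax OK"
     fi
     ```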
    
  3. Execute the script to deploy the operating system.

    $ python3 deploy_os.py
    

# ESXi deployment

This section outlines the steps to programmatically deploy ESXi on all the bare metal nodes.

Prerequisites

  • A RHEL 7.6 installer machine with the configuration described in this guide is essential to initiate the OS deployment process.

  • The ESXi ISO image is present in the HTTP file path on the installer machine.

# Installation

  1. Enable the Python 3 and Ansible environment as described in the Installer machine section of this deployment guide.

  2. Execute the following command on the installer VM to change to the ESXi deployment directory.

    $ cd $BASE_DIR/os_deployment/deploy_esxi
    
  3. Use the following command to install requirements.

    $ sudo sh setup.sh 
    
  4. Edit the input files using the following command and enter the vault password when prompted.

    $ sudo ansible-vault edit input_files/config.yml
    

Note

The default password for the Ansible Vault file is changeme.

  5. Update the input_files/config.yml file with the details of the web server and the operating system to be installed.

    Example values for the input configuration are as follows:

    config:
      HTTP_server_base_url: http://10.0.x.x/
      HTTP_file_path: /usr/share/nginx/html/
      OS_type: esxi67
      OS_image_name: <ISO_image_name>.iso
      base_kickstart_filepath: kickstart_files/ks_esxi67.cfg
    
    

Note

The acceptable value for the "OS_type" variable is "esxi67", for ESXi 6.7.

  6. Update the input_files/server_details.yml file with the details of the servers on which ESXi is to be installed.

    Example values for the input configuration for deploying ESXi 6.7 are as follows:

    servers:
      - Server_serial_number: MXxxxxxDP
        ILO_Address: 10.0.x.x
        ILO_Username: username
        ILO_Password: password
        Hostname: vsphere01.twentynet.local
        Host_IP: 20.x.x.x
        Host_Username: root
        Host_Password: Password
        Host_Netmask: 255.x.x.x
        Host_Gateway: 20.x.x.x
        Host_DNS: 20.x.x.x
      - Server_serial_number: MXxxxxxDQ
        ILO_Address: 10.0.x.x
        ILO_Username: username
        ILO_Password: password
        Hostname: vsphere02.twentynet.local
        Host_IP: 20.0.x.x
        Host_Username: root
        Host_Password: Password
        Host_Netmask: 255.x.x.x
        Host_Gateway: 20.x.x.x
        Host_DNS: 20.x.x.x
    

    Note

    • It is recommended to provide a complex password for the "Host_Password" variable.
    • Provide an iLO account username and password with administrative privileges.
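
    Before running the playbook, the server entries can be listed for a quick review (a convenience one-liner, not part of the solution scripts; run it from the deploy_esxi directory):

    ```bash
    # Show each server's hostname and IP as defined in the input file
    if [ -f input_files/server_details.yml ]; then
        grep -E 'Hostname|Host_IP' input_files/server_details.yml
    fi
    ```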

  7. Run the playbook to deploy ESXi.

$ ansible-playbook deploy.yml --ask-vault-pass

Note

  • During ESXi deployment, the ISO image contents are forcefully moved into the $BASE_DIR/os_deployment/deploy_esxi/files folder; delete them if space issues arise.

  • BASE_DIR is defined in the Installer machine section.

Note

  1. The generic settings applied by the kickstart file for ESXi are as follows. It is recommended that the user reviews and modifies the kickstart file (kickstart_files/ks_esxi67.cfg) to suit their requirements.
    • Accepts the End User License Agreement (EULA)
    • clearpart --alldrives --overwritevmfs
    • install --firstdisk --overwritevmfs
    • %firstboot --interpreter=busybox
    • One standard switch, vSwitch0, is created with uplinks vmnic0 and vmnic1. It is assigned the Host_IP defined in the input_files/server_details.yml input file.
    • NIC teaming is performed with vmnic0 as the active uplink and vmnic1 as the standby uplink.
    • The NIC failover policy is set to --failback yes --failure-detection link --load-balancing mac --notify-switches yes.