# Preparing the execution environment
This section provides a detailed overview of the components deployed for this solution and the steps to configure them.
# Installer machine
This document assumes that a server running Red Hat Enterprise Linux (RHEL) 7.6 exists within the deployment environment and is accessible to the installation user for use as an installer machine. This server must have internet connectivity. In this solution, a virtual machine acts as the installer machine, and the same host is also used as the Ansible Engine host.
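To confirm that the server is running the expected RHEL release, a quick check such as the following can be used:
> cat /etc/redhat-release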
# Non-root user access
The industry-wide security best practice is to avoid the use of the root user account for administration of Linux-based servers. However, certain operations require root user privileges. In those cases, it is best to use the sudo command to obtain the necessary privilege escalation on a short-term basis. The sudo command allows programs and commands to be run with the security privileges of another user (root is the default) and can restrict the permission to specific groups, users, and individual commands.
The root user is not active by default in RHCOS. Instead, log in as the core user.
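For example, an RHCOS node would be reached over SSH as the core user; the node address below is a placeholder:
> ssh core@<node_address>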
Use the following steps to create a non-root user for the OpenShift installation process:
Log in to the installer VM as root. Refer to the Installer machine section in this document for more details about the installer VM.
Execute the following command to create a non-root user.
> adduser openshift_admin
Execute the following command to set a password for the non-root user.
> passwd openshift_admin
Add the non-root user's group to the sudoers file. Use the following command to find the non-root user's group.
> id -Gn openshift_admin
Use the following command to edit the sudoers file and add an entry for the non-root user's group.
> visudo
Add the non-root user's group entry to the sudoers file as follows.

    # Allow the following commands to run anywhere in the non-root user environment
    openshift_admin ALL=(ALL) NOPASSWD: /usr/bin/chmod, /usr/bin/yum, /usr/bin/yum-config-manager, /usr/sbin/subscription-manager, /usr/bin/git, /usr/bin/vi, /usr/bin/vim, /usr/bin/mkdir, /usr/bin/cat, /usr/bin/echo, /usr/bin/python, /usr/bin/sed, /usr/bin/chown, /usr/bin/sh, /usr/bin/cp, /usr/bin/ansible-vault, /usr/bin/scp, /usr/bin/rpm, /usr/sbin/chkconfig, /usr/bin/systemctl, /usr/bin/journalctl, /usr/bin/curl, /usr/bin/tar, /usr/bin/genisoimage, /usr/bin/mount, /usr/bin/umount, /usr/bin/rsync, /usr/bin/find, /usr/bin/mv, /usr/bin/nano, /usr/sbin/dnsmasq, /usr/sbin/setsebool
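To verify the entry, the commands the non-root user is allowed to run with sudo can be listed:
> sudo -l -U openshift_admin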
Execute the following command to switch to the non-root user.
> su openshift_admin
Execute the following commands to register the host with Red Hat and attach it to a subscription pool.
> sudo yum -y install subscription-manager
> sudo subscription-manager register --username=<username> --password=<password> --auto-attach
> sudo subscription-manager attach --pool=<pool_id>
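Optionally, the registration and subscription status can be verified with the following command:
> sudo subscription-manager status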
Disable all repositories and enable only the repositories required for the installer VM.
> sudo yum -y install yum-utils
> sudo yum-config-manager --disable \*
> sudo subscription-manager repos --disable="*" \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-optional-rpms" \
    --enable="rhel-server-rhscl-7-rpms"
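To confirm that only the intended repositories remain enabled, list the enabled repositories:
> sudo yum repolist enabled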
Use the following command to install Ansible.
> sudo yum -y install ansible
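The installation can be verified by checking the reported Ansible version:
> ansible --version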
Install the Git package on the installer VM for performing Git-related operations.
> sudo yum -y install git
Execute the following commands to download the hpe-solutions-openshift repository.
> sudo mkdir -p /opt/hpe/solutions/ocp
> cd /opt/hpe/solutions/ocp
> sudo git clone https://github.com/HewlettPackard/hpe-solutions-openshift.git
> sudo chown -R openshift_admin:openshift_admin /opt/hpe/solutions/ocp
Create an environment variable BASE_DIR and point it to the following path.
> export BASE_DIR=/opt/hpe/solutions/ocp/hpe-solutions-openshift/DL/scalable
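The variable set above lasts only for the current shell session. To make it persist across sessions, it can also be appended to the user's shell profile (a common convention, not a requirement of this solution):
> echo 'export BASE_DIR=/opt/hpe/solutions/ocp/hpe-solutions-openshift/DL/scalable' >> ~/.bashrc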
After the hpe-solutions-openshift repository is downloaded, navigate to the path /opt/hpe/solutions/ocp/hpe-solutions-openshift/DL/scalable/installer/playbooks. The scripts within this directory assist in configuring the prerequisites for the environment. The details of the scripts are as follows:
python_env.sh -- this script installs Python 3.
ansible_env.sh -- this script creates a Python 3 virtual environment and installs Ansible within the virtual environment.
Steps to configure the prerequisite environment are as follows:
Change the directory to /opt/hpe/solutions/ocp/hpe-solutions-openshift/DL/scalable/installer/playbooks
> cd $BASE_DIR/installer/playbooks
Execute the following commands to set up the prerequisite Python environment.
> sudo yum -y install @development
> sudo sh python_env.sh
Execute the following command to enable Python 3.
> scl enable rh-python36 bash
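Within the new shell, the active Python version can be confirmed (rh-python36 is expected to resolve to Python 3.6):
> python --version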
Execute the following command to configure the Ansible environment.
> sudo sh ansible_env.sh
Enable the virtual environment with the following command.
> source ../ocp_venv/bin/activate
Execute the following command to set the environment variables.
> export ANSIBLE_LIBRARY=$BASE_DIR/installer/library/oneview-ansible/library
# Kubernetes manifests and ignition files
Manifests and ignition files define the master node and worker node configuration and are key components of the Red Hat OpenShift Container Platform 4 installation.
Before creating the manifest files and ignition files, it is necessary to download the Red Hat OpenShift 4 packages. Execute the following command on the installer VM to download the required packages.
> cd $BASE_DIR/installer
> ansible-playbook playbooks/download_ocp_package.yml
The OpenShift packages downloaded by the download_ocp_package.yml playbook can be found on the installer VM at /opt/hpe/solutions/ocp/hpe-solutions-openshift/DL/scalable/installer/library/openshift_components. It is advised to execute any OpenShift-related ad hoc commands from within this folder.
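For example, assuming the openshift-install binary is among the downloaded components, its version can be checked from that folder:
> cd $BASE_DIR/installer/library/openshift_components
> ./openshift-install version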
To create the manifest files and the ignition files, edit the install-config.yaml file provided in the directory /opt/hpe/solutions/ocp/hpe-solutions-openshift/DL/scalable/installer/ignitions to include the following details:
baseDomain -- Base domain of the DNS which hosts Red Hat OpenShift Container Platform.
name -- Name of the OpenShift cluster. This is the same as the new domain created in DNS.
replicas -- Update this field to reflect the corresponding number of master or worker instances required for the OpenShift cluster as per the installation environment requirements. It is recommended to have a minimum of 3 master nodes and 2 worker nodes per OpenShift cluster.
clusterNetworks -- This field is pre-populated by Red Hat. Update this field only if a custom cluster network is to be used.
pullSecret -- Update this field with the pull secret for the Red Hat account. Log in to the Red Hat account at https://cloud.redhat.com/openshift/install/metal/user-provisioned and retrieve the pull secret.
sshKey -- Update this field with the SSH key of the installer VM. Generate the SSH key with the following command and copy the resulting public key into the install-config.yaml file.
> ssh-keygen
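If the default key location was accepted during generation, the public key can be displayed for copying:
> cat ~/.ssh/id_rsa.pub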
An example install-config.yaml file appears as follows. Update the fields to suit the installation environment.
    apiVersion: v1
    baseDomain: name of the base domain
    compute:
    - hyperthreading: Enabled
      name: worker
      replicas: 2
    controlPlane:
      hyperthreading: Enabled
      name: master
      replicas: 3
    metadata:
      name: name of the cluster, same as the new domain created under the base domain
    networking:
      clusterNetworks:
      - cidr: 12.128.0.0/14
        hostPrefix: 23
      networkType: OpenShiftSDN
      serviceNetwork:
      - 172.30.0.0/16
    platform:
      none: {}
    pullSecret: 'pull secret provided as per the Red Hat account'
    sshKey: 'ssh key of the installer VM'
Execute the following command on the installer VM to create the manifest files and the ignition files required to install Red Hat OpenShift.
> cd $BASE_DIR/installer
> ansible-playbook playbooks/create_manifest_ignitions.yml
> sudo chmod +r ignitions/*.ign
The ignition files are generated on the installer VM within the folder /opt/hpe/solutions/ocp/hpe-solutions-openshift/DL/scalable/installer/ignitions.
NOTE
The ignition files have a validity period of 24 hours, so it is critical that the cluster is created within 24 hours of generating them. If more than 24 hours elapse, clear the files from the directory where they were saved and regenerate the ignition files.
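A minimal regeneration sketch, assuming the default paths used earlier in this section (the exact set of files to clear may vary with the installation layout):
> cd $BASE_DIR/installer
> rm -f ignitions/*.ign
> ansible-playbook playbooks/create_manifest_ignitions.yml
> sudo chmod +r ignitions/*.ign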