# Preparing the execution environment for RHOCP worker3 node
Prerequisites:
- Red Hat Enterprise Linux 8.8 must be installed and registered on the host machine.
- Networking must be configured as BOND > VLAN > Bridge.
## Setting up the RHEL 8.8 installer machine
This section assumes the following about the deployment environment:
- A server running Red Hat Enterprise Linux (RHEL) 8.8 exists within the deployment environment, is accessible to the installation user, and serves as the installer machine. This server must have internet connectivity.
- A virtual machine acts as the installer machine, and the same host is used as the Ansible Engine host. In this deployment, one of the worker3 machines is used as the installer machine to execute the Ansible playbooks.
Prerequisites to execute the Ansible playbooks:
The RHEL 8.8 installer machine must have the following configuration (a quick verification sketch follows this list):
- The installer machine must have at least 500 GB of disk space (primarily in the "/" partition), 4 CPU cores, and 16 GB of RAM.
- The RHEL 8.8 installer machine must be subscribed with valid Red Hat credentials. To register the installer machine for the Red Hat subscription, run the following command:
$ sudo subscription-manager register --username=<username> --password=<password> --auto-attach
- Time must be synchronized with an NTP server.
- An SSH key pair must be available on the installer machine. If an SSH key is not available, generate a new SSH key pair with the following command:
$ ssh-keygen
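The following is a minimal verification sketch for these prerequisites, using standard RHEL utilities; the SSH key path assumes the default name produced by ssh-keygen, so adjust it if a different key is used:
$ df -h /                            # free space on the "/" partition
$ nproc                              # CPU core count
$ free -h                            # installed memory
$ sudo subscription-manager status   # confirm the Red Hat subscription is active
$ chronyc tracking                   # confirm NTP synchronization (chrony is the RHEL 8 default)
$ ls ~/.ssh/id_rsa.pub               # confirm an SSH public key exists (default ssh-keygen path)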
To set up the installer machine:
- Create and activate a Python 3 virtual environment for deploying this solution with the following commands:
$ python3 -m venv <virtual_environment_name>
$ source <virtual_environment_name>/bin/activate
- Download the OpenShift repositories using the following commands on the Ansible Engine:
$ mkdir -p /opt
$ cd /opt
$ yum install -y git
$ git clone https://github.com/HewlettPackard/hpe-solutions-openshift.git
- Set up the installer machine to configure nginx, the development tools, and the other Python packages required for the LTI installation. Navigate to the $BASE_DIR directory and run the following commands:
$ cd $BASE_DIR
$ sh setup.sh
Note
$BASE_DIR refers to /opt/hpe-solutions-openshift/DL-LTI-Openshift/
The setup.sh script creates the nginx service, so the user must download the RHEL 8.8 DVD ISO and copy it to /usr/share/nginx/html/ (see the sketch below).
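As a quick check after setup.sh completes, copy the ISO into the nginx web root and confirm it is being served. The ISO filename below matches the OS_image_name used later in the sample input.yaml; adjust it to the actual download:
$ sudo cp rhel-8.8-x86_64-dvd.iso /usr/share/nginx/html/
$ sudo systemctl status nginx
$ curl -I http://localhost/rhel-8.8-x86_64-dvd.iso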
## Minimum storage requirements for management servers
Management Servers | Host OS disk | Storage Pool disk |
---|---|---|
Server 1 | 2 x 1.6 TB | 2 x 1.6 TB |
Server 2 | 2 x 1.6 TB | 2 x 1.6 TB |
Server 3 | 2 x 1.6 TB | 2 x 1.6 TB |
Host OS disk: RAID 1 for redundancy.
## Creating and deleting logical drives
Create and delete logical drives on the head nodes by following the steps below.
Input file update:
Update the input.yaml file in the $BASE_DIR/create_delete_logicaldrives directory before executing the logical drive playbook.
Update all of the details in the input.yaml file, which include the following:
ILOServers:
  - ILOIP: 172.28.*.*
    ILOuser: admin
    ILOPassword: Password
    controller: 12
    RAID: Raid1
    PhysicalDrives: 1I:1:1,1I:1:2
  - ILOIP: 172.28.*.*
    ILOuser: admin
    ILOPassword: Password
    controller: 1
    RAID: Raid1
    PhysicalDrives: 1I:3:1,1I:3:2
  - ILOIP: 172.28.*.*
    ILOuser: admin
    ILOPassword: Password
    controller: 11
    RAID: Raid1
    PhysicalDrives: 1I:3:1,1I:3:2
Note:
1. To find the controller ID, log in to the respective iLO and navigate to System Information -> Storage. Under Location, the **slot number** is the controller ID.
# Example - Slot = 12
2. To find the PhysicalDrives, log in to the respective iLO and navigate to System Information -> Storage. Under Unconfigured Drives, the Location field provides the values needed to derive the PhysicalDrives, for example:
# Slot: 12:Port=1I:Box=1:Bay=1
# Example - 1I:1:1 ('Port:Box:Bay')
# Slot: 12:Port=1I:Box=1:Bay=2
# Example - 1I:1:2 ('Port:Box:Bay')
Playbook execution:
To delete any existing logical drives on the servers and to create new logical drives named 'RHEL Boot Volume' on the respective iLO servers, run the site.yml playbook inside the create_delete_logicaldrives directory with the following command (an optional pre-flight check is sketched after it):
$ ansible-playbook site.yml --ask-vault-pass
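Optionally, the playbook can be checked before it touches the iLO servers. These are general ansible-playbook options rather than steps documented by this solution:
$ ansible-playbook site.yml --syntax-check --ask-vault-pass   # parse the playbook without running it
$ ansible-playbook site.yml --ask-vault-pass -vv              # verbose run, useful for troubleshooting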
The input variables can be provided using either of the following methods.
Method 1. input.py: automated input collection
To use input.py, go to the /opt/hpe-solutions-openshift/DL-LTI-Openshift/ directory and run the following command:
python input.py
It prompts for the values that are not obtained from the SCID JSON files.
A sample input collection through input.py is as follows:
Enter server serial number for the first head node server ( Example: 2M2210020X )
2M205107TH
Enter ILO address for the first head node server ( Example: 192.28.201.5 )
172.28.201.13
Enter ILO username for the first head node server ( Example: admin )
admin
Enter ILO password for the first head node server ( Example: Password )
Password
Enter Host FQDN for the first head node server ( Example: kvm1.xyz.local )
headnode1.isv.local
...
After input.py finishes executing, it generates the input.yaml and hosts files in the same location.
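A quick way to confirm that both files were generated, assuming the default directory noted above:
$ ls -l /opt/hpe-solutions-openshift/DL-LTI-Openshift/input.yaml /opt/hpe-solutions-openshift/DL-LTI-Openshift/hosts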
Method 2. input.yaml: manually editing the input file
Go to the $BASE_DIR directory (/opt/hpe-solutions-openshift/DL-LTI-Openshift/), which contains the input.yaml and hosts files.
- A preconfigured Ansible vault file (input.yaml) is provided as part of this solution; it contains the sensitive information required to support the host and virtual machine deployment.
cd $BASE_DIR
Run the following command on the installer VM to edit the vault to match the installation environment.
ansible-vault edit input.yaml
NOTE
The default password for the Ansible vault file is changeme.
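Because the vault ships with a default password, changing it is a reasonable hardening step. This uses the standard ansible-vault rekey command and is optional, not part of the documented flow:
$ ansible-vault rekey input.yaml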
A sample file, input_sample.yml, can be found in $BASE_DIR along with a description of each input variable.
A sample input.yaml file, with a few parameters filled in, is as follows (a quick way to review the encrypted file afterward is sketched after the sample):
- Server_serial_number: 2M20510XXX
ILO_Address: 172.28.*.*
ILO_Username: admin
ILO_Password: *****
Hostname: headnode3.XX.XX #ex. headnode3.isv.local
Host_Username: root
Host_Password: ******
HWADDR1: XX:XX:XX:XX:XX:XX #mac address for server physical interface1
HWADDR2: XX:XX:XX:XX:XX:XX #mac address for server physical interface2
Host_OS_disk: sda
Host_VlanID: 230
Host_IP: 172.28.*.*
Host_Netmask: 255.*.*.*
Host_Prefix: XX #ex. 8,24,32
Host_Gateway: 172.28.*.*
Host_DNS: 172.28.*.*
Host_Domain: XX.XX #ex. isv.local
corporate_proxy: 172.28.*.* #provide corporate proxy, ex. proxy.houston.hpecorp.net
corporate_proxy_port: XX #corporate proxy port no, ex. 8080
config:
HTTP_server_base_url: http://172.28.*.*/ #Installer IP address
HTTP_file_path: /usr/share/nginx/html/
OS_type: rhel8
OS_image_name: rhel-8.8-x86_64-dvd.iso # ISO image should be present in /usr/share/nginx/html/
base_kickstart_filepath: /opt/hpe-solutions-openshift/DL-LTI-Openshift/playbooks/roles/rhel8_os_deployment/tasks/ks_rhel8.cfg
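After editing, the encrypted vault can be reviewed read-only with the standard ansible-vault view command:
$ ansible-vault view input.yaml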
A sample hosts file is as follows (a connectivity check is sketched after it):
[kvm_nodes]
172.28.*.*
172.28.*.*
172.28.*.*
[ansible_host]
172.28.*.*
[rhel8_installerVM]
172.28.*.*
[binddns_masters]
172.28.*.*
[binddns_slaves]
172.28.*.*
172.28.*.*
[masters_info]
master1 ip=172.28.*.* hostname=headnode1
[slaves_info]
slave1 ip=172.28.*.* hostname=headnode2
slave2 ip=172.28.*.* hostname=headnode3
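Once the hosts file reflects the environment and the installer machine's SSH key has been copied to the target nodes, connectivity can be verified with a standard Ansible ping; this is a general check rather than part of the documented procedure:
$ ansible -i hosts all -m ping   # replace 'all' with a specific group if some nodes are not yet deployed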