# Solution components

This section outlines the hardware, software, and service components utilized in this solution.

## Hardware

Figure 5 shows the physical configuration of the racks and the storage devices used in this solution. It depicts the hardware layout in the test environment; the actual layout is subject to change based on the customer's requirements.

Figure 5. Hardware layout within the rack along with HPE Storage systems

The configuration outlined in this document is based on the design guidance for the HPE Converged Architecture 750 Foundation model, which offers improved time to deployment and a tested firmware recipe. The recipe can be retrieved at https://support.hpe.com/hpesc/public/docDisplay?docId=a00098137en_us. It is strongly recommended that the installation user apply the latest available matrix. Hewlett Packard Enterprise has tested this solution with the latest firmware recipe available as of March 2020, including HPE OneView for Synergy 5.0.

The installation user can customize the HPE components throughout this stack to meet unique IT and workload requirements, or build the solution from individual components rather than using the HPE Converged Architecture 750.

Table 1 highlights the individual components and their quantities as deployed within the solution.

Table 1. Components utilized in the creation of this solution.

| Component | Quantity | Description |
|-----------|----------|-------------|
| HPE Synergy 12000 Frame | 3 | Three (3) HPE Synergy 12000 Frames house the infrastructure used for the solution |
| HPE Synergy Composer | 2 | Two (2) HPE Synergy Composers for core configuration and lifecycle management of Synergy components |
| HPE Virtual Connect 100Gb SE F8 Module | 2 | A total of two (2) HPE Virtual Connect 100Gb SE F8 Modules provide network connectivity into and out of the frames |
| HPE Synergy 12G SAS Connection Module | 6 | Six (6) HPE 12G SAS Connection Modules (two (2) per frame) |
| HPE Synergy 480 Gen10 Compute Module | 6 | Three (3) bare metal master nodes and three (3) bare metal worker nodes |
| HPE Synergy D3940 Storage | 3 | Three (3) HPE Synergy D3940 12Gb SAS CTO Drive Enclosures, each with 40 SFF (2.5in) drive bays |
| HPE Nimble Storage | 1 | One (1) array for persistent volumes |
| HPE 3PAR StoreServ | 1 | One (1) HPE 3PAR array |
| HPE FlexFabric 2-Slot Switch | 2 | Each switch contains one (1) of each of the HPE 5945 modules listed below |
| HPE 5945 24p SFP+ and 2p QSFP+ Module | 2 | One module per HPE FlexFabric 2-Slot Switch |
| HPE 5945 8p QSFP+ Module | 2 | One module per HPE FlexFabric 2-Slot Switch |

NOTE

The HPE Storage systems listed in Table 1 are for representational purposes only. Use the number of storage systems required by the deployment.

## Software

Table 2 lists the versions of the major software used in the creation of this solution. The installation user should ensure that they have downloaded or otherwise have access to this software, and that the appropriate subscriptions and licenses are in place for the planned time frame.

Table 2. Major software versions used in the creation of this solution

| Component | Version |
|-----------|---------|
| Red Hat Enterprise Linux CoreOS (RHCOS) | 4.6 |
| Red Hat OpenShift Container Platform | 4.6 |
| HPE Nimble OS | 5.0.8 |
| HPE 3PAR OS | 3.3.1 |

NOTE

The latest sub-version of each component listed in Table 2 should be installed.

When utilizing virtualized nodes, the software versions used in the creation of this solution are shown in Table 3.

Table 3. Software versions used with virtualized implementations

| Component | Version |
|-----------|---------|
| VMware vSphere ESXi | 6.7 U2 (Build: 13981272) |
| VMware vCenter Server Appliance | 6.7 Update 2c (Build: 14070457) |

Software installed on the installer machine is shown in Table 4.

Table 4. Software installed on the installer machine

| Component | Version |
|-----------|---------|
| Ansible | 2.9 |
| Python | 3.6 |
| Java | 1.8 |
| OpenShift Container Platform packages | 4.6 |
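
The versions listed in Table 4 can be confirmed on the installer machine before starting the deployment. The following is a minimal check, assuming the OpenShift CLI (`oc`) is installed as part of the OpenShift Container Platform packages:

```
# Confirm installer machine software versions (Table 4)
ansible --version
python3 --version
java -version
oc version --client
```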

## Services

This document is built on assumptions about the services and network ports available within the implementation environment. This section discusses those assumptions.

Table 5 lists the services required by this solution and provides a high-level explanation of their functions.

Table 5. Services used in the creation of this solution.

| Service | Description/Notes |
|---------|-------------------|
| DNS | Provides name resolution on the management and data center networks |
| DHCP | Provides IP address leases on the PXE, management, and data center networks |
| NTP | Ensures consistent time across the solution stack |
| PXE | Enables network booting of operating systems |

### DNS

Domain Name Services must be in place for the management and data center networks. Ensure that both forward and reverse lookups are working for all hosts.
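
As a quick sanity check, a tool such as `dig` can confirm both lookup directions; the hostname and IP address below are placeholders for illustration only:

```
# Forward lookup: should return the host's IP address
dig +short master01.ocp.example.com

# Reverse lookup: should return the fully qualified domain name
dig +short -x 10.0.1.11
```

Repeat the check for every host in the solution stack.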

### DHCP

DHCP should be present and able to provide IP address leases on the PXE, management, and data center networks.
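
As an illustration only, the fragment below sketches what the relevant scope might look like in an ISC DHCP server configuration (dhcpd.conf); the subnet, addresses, and MAC address are hypothetical and must be replaced with values from the actual environment:

```
# Hypothetical dhcpd.conf fragment for the PXE/management network
subnet 10.0.1.0 netmask 255.255.255.0 {
  range 10.0.1.100 10.0.1.200;           # dynamic pool for PXE-booting nodes
  option routers 10.0.1.1;
  option domain-name-servers 10.0.1.2;
  next-server 10.0.1.3;                  # TFTP/PXE server address
  filename "pxelinux.0";                 # network boot loader file

  host master01 {                        # fixed lease for a master node
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 10.0.1.11;
  }
}
```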

### NTP

A Network Time Protocol (NTP) server should be available to hosts within the solution environment.
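
Time synchronization can be spot-checked from any node; on RHCOS, chrony provides the time service, so a check might look like the following:

```
# List configured NTP sources and their reachability
chronyc sources -v

# Confirm that the system clock is synchronized
timedatectl status
```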

### PXE

Because all nodes in this solution are booted using PXE, a properly configured PXE server is essential.
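
As a sketch only, a PXELINUX menu entry for network-booting an RHCOS node might resemble the following; the kernel, initramfs, rootfs, and Ignition file names and the HTTP server address are placeholders for the artifacts staged in the actual environment:

```
# Hypothetical pxelinux.cfg/default entry for an RHCOS master node
DEFAULT rhcos
LABEL rhcos
  KERNEL rhcos-live-kernel-x86_64
  APPEND initrd=rhcos-live-initramfs.x86_64.img coreos.live.rootfs_url=http://10.0.1.3/rhcos-live-rootfs.x86_64.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://10.0.1.3/master.ign
```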

## Network ports

The ports listed in Table 6 allow cluster components to communicate with each other. This information can be retrieved from the bootstrap, master, and worker nodes by running the following command:

```
netstat -tupln
```

Table 6 lists the network ports used by the services in OpenShift Container Platform 4.6.

Table 6. List of network ports.

| Protocol | Port number/range | Service type | Other details |
|----------|-------------------|--------------|---------------|
| TCP | 80 | HTTP traffic | The machines that run the Ingress router pods (compute or worker nodes by default) |
| TCP | 443 | HTTPS traffic | |
| TCP | 2379-2380 | etcd server, peer, and metrics ports | |
| TCP | 6443 | Kubernetes API | The bootstrap machine and master nodes |
| TCP | 9000-9999 | Host-level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099 | |
| TCP | 10249-10259 | The default ports that Kubernetes reserves | |
| TCP | 10256 | openshift-sdn | |
| TCP | 22623 | Machine Config Server | The bootstrap machine and master nodes |
| UDP | 4789 | VXLAN | |
| UDP | 6081 | Geneve | |
| UDP | 9000-9999 | Host-level services, including the node exporter on ports 9100-9101 | |
| UDP | 30000-32767 | Kubernetes NodePort | |
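
Beyond inspecting listening ports locally, reachability of a given port from a peer node can be verified with a utility such as `nc`; the API hostname below is a placeholder:

```
# Verify that the Kubernetes API port is reachable from another node
nc -vz api.ocp.example.com 6443
```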

For more information on the network port requirements for Red Hat OpenShift 4, refer to the Red Hat documentation at https://docs.openshift.com/container-platform/4.6/installing/installing_bare_metal/installing-bare-metal.html#installation-network-user-infra_installing-bare-metal.