# Storage Options

# Alletra/Nimble Storage

# HPE CSI Driver Architecture

A diagrammatic representation of the HPE CSI driver architecture is illustrated in Figure 10.

Figure 10: CSI Driver Architecture

The OpenShift Container Platform 4.15 cluster comprises the master and worker nodes (physical and virtual) with CoreOS deployed as the operating system. The iSCSI interface configured on the host nodes establishes the connection between the cluster and the HPE Alletra array. Upon successful deployment of the HPE CSI Driver, the CSI controller, CSI node driver, 3PAR CSP, and Nimble CSP are deployed; the CSPs communicate with the HPE Alletra array via REST APIs. The associated sidecar features, such as the CSI Provisioner and CSI Attacher, are configured through the StorageClass.
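
Once the driver is installed (as described in the following sections), the registered CSI driver object can be checked from the CLI. This is a minimal sanity check, assuming the driver registers under the csi.hpe.com name that is also used as the StorageClass provisioner later in this document:

$ oc get csidriver csi.hpe.com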

# Deploying HPE CSI Driver for HPE Alletra storage on RHOCP 4.15

This section describes how to deploy HPE CSI Driver for HPE Alletra storage on an existing RHOCP 4.15.

Prerequisites:

Before configuring the HPE CSI Driver, the following prerequisites must be met:

  1. RHOCP 4.15 must be successfully deployed, and the console must be accessible.

  2. iSCSI interface must be configured on the HPE Alletra Storage array. For more information, see the Infrastructure Master Reference Architecture based on HPE Alletra 6000.

  3. Additional iSCSI network interfaces must be configured on the physical worker nodes.

  4. Deploy the scc.yaml file to enable Security Context Constraints (SCC).

Configuring iSCSI interface on worker nodes

The RHOCP 4.15 cluster comprises the master and physical worker nodes with RHEL 8.9 deployed as the operating system. The iSCSI interface is configured on the host nodes to establish the connection between the cluster and the HPE Alletra array. In addition to the host nodes, additional iSCSI interfaces need to be configured on all worker nodes (physical and virtual) to establish the connection between the RHOCP cluster and the HPE Alletra arrays.

To configure iSCSI interface on physical RHEL worker nodes:

  • Configure the iSCSI A connection as a storage interface and the iSCSI B connection as an additional storage interface for redundancy

For example, the iSCSI_A and iSCSI_B interface connections are configured on the worker1 node, as shown in Figure 11.

FIGURE 11: iSCSI_A and iSCSI_B interface connection
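
The exact commands depend on the NIC names, VLANs, and IP addressing in your environment; the following nmcli commands are only an illustrative sketch of configuring two dedicated iSCSI interfaces on a RHEL worker node (interface names and addresses are assumptions):

$ nmcli con add type ethernet con-name iSCSI_A ifname ens2f0 ipv4.method manual ipv4.addresses 192.168.20.11/24
$ nmcli con add type ethernet con-name iSCSI_B ifname ens2f1 ipv4.method manual ipv4.addresses 192.168.30.11/24
$ nmcli con up iSCSI_A && nmcli con up iSCSI_B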

Creating namespace

To create a namespace, in this case, hpe-csi:

  1. Open Red Hat OpenShift Container Platform Console on a supported web browser.

  2. Click Administration → Namespaces on the left pane.

  3. Click Create Namespace.

  4. On the Create Namespace dialog box, enter hpe-csi.

  5. Click Create.

Deploying Security Context Constraints (SCC)

The HPE CSI Driver needs to run in privileged mode to access the host ports and host network, and to mount host path volumes. Before deploying the HPE CSI Driver operator on the RHOCP cluster, deploy the SCC to allow the HPE CSI Driver to run with these privileges.

Prerequisites:

  • Ensure that you can access the scc.yaml file from the following GitHub link:

https://scod.hpedev.io/partners/redhat_openshift/examples/scc/hpe-csi-scc.yaml

To deploy SCC:

  1. On the installer VM, download the scc.yaml file from GitHub using the following command:
$ curl -sL https://scod.hpedev.io/partners/redhat_openshift/examples/scc/hpe-csi-scc.yaml > hpe-csi-scc.yaml
  2. Edit relevant parameters such as project name or namespace in the hpe-csi-scc.yaml file.

  3. Change my-hpe-csi-driver-operator to the name of the project (in this case, hpe-csi) where the CSI Operator is being deployed using the following commands:

$ oc new-project hpe-csi --display-name="HPE CSI Driver for Kubernetes"
$ sed -i'' -e 's/my-hpe-csi-driver-operator/hpe-csi/g' hpe-csi-scc.yaml
  4. Save the file.

The following figure illustrates the parameter that needs to be edited (project name) where the HPE CSI Driver operator is deployed:

FIGURE 12: Editing the hpe-csi-scc.yaml file

  5. Deploy the SCC using the following command and check the output:
$ oc create -f hpe-csi-scc.yaml

The following output is displayed:

securitycontextconstraints.security.openshift.io/hpe-csi-scc created
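
As an optional check, confirm that the SCC exists by listing it by name:

$ oc get scc hpe-csi-scc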

Installing and configuring HPE CSI Driver

NOTE

HPE CSI Driver version 2.4.2 is used for deploying HPE Alletra Storage on RHOCP 4.15.

Prerequisites:

Before installing the HPE CSI Driver from the Red Hat OpenShift Container Platform Console:

  • Create a namespace for HPE CSI Driver
  • Deploy SCC for the created namespace

Installing HPE CSI Driver Operator using Red Hat OperatorHub

To install HPE CSI Driver Operator from the Red Hat OperatorHub:

  1. Log in to the Red Hat OpenShift Container Platform Console.

  2. Navigate to Operators → OperatorHub.

  3. Search for HPE CSI Driver Operator from the list of operators and click HPE CSI Driver Operator.

  4. On the HPE CSI Operator for Kubernetes page, click Install.

FIGURE 13: HPE CSI Driver Operator search

  5. On the Create Operator Subscription page, select the appropriate options:

    1. Select "A specific namespace on the cluster" in the Installation Mode option.

    2. Select the appropriate namespace (in this case, hpe-csi) in the Installed Namespace option.

    3. Select "stable" in the Update Channel option.

    4. Select "Automatic" in the Approval Strategy option.

    FIGURE 14: Create Operator Subscription

    5. Click Install.
  6. The Installed Operators page is displayed with the status of the operator.

FIGURE 15: Installed Operators

Creating HPE CSI Driver

The HPE CSI Driver is a multi-vendor, multi-backend driver where each implementation has a Container Storage Provider (CSP). The HPE CSI Driver for Kubernetes uses the CSP to perform data management operations on storage resources, such as searching for a logical unit number (LUN). The HPE CSI Driver allows any vendor or project to develop its own CSP using the CSP specification. This enables third parties to integrate their storage solutions into Kubernetes while the CSI Driver takes care of the Kubernetes-specific intricacies.

To create the HPE CSI Driver:

  1. Log in to the Red Hat OpenShift Container Platform Console.

  2. Navigate to Operators → Installed Operators on the left pane to view the installed operators.

  3. On the Installed Operators page, select hpe-csi from the Project drop-down list to switch to the hpe-csi project.

  4. In the hpe-csi project, select the HPECSIDriver tab.

  5. Click Create HPECSIDriver.

  6. Click Create.

FIGURE 16: HPE CSI Driver creation
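
After the custom resource is created, it can also be confirmed from the CLI. This is an optional check, assuming the resource kind shown in the console (HPECSIDriver) and the hpe-csi namespace used in this document:

$ oc get hpecsidriver -n hpe-csi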

Verifying HPECSIDriver configuration

After the HPECSIDriver is deployed, the deployment pods such as hpe-csi-controller, hpe-csi-driver, primera3par-csp, and Nimble-csp are displayed on the Pods page.

FIGURE 17: Deployment pods for HPECSIDriver

NOTE

The Nimble Storage CSP also supports HPE Alletra 6000.
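
These pods can also be listed from the CLI instead of the console; a quick check against the namespace used in this document:

$ oc get pods -n hpe-csi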

To verify the HPE CSI node information:

  1. On the installer VM, check the HPENodeInfo and network status of the worker nodes with the following commands:
$ oc get HPENodeInfo

$ oc get HPENodeInfo/<workernode fqdn> -o yaml

The following output is displayed:

FIGURE 18: HPENodeInfo on the cluster

Creating HPE Alletra StorageClass

After HPE CSI Driver is deployed, two additional objects, Secret and StorageClass, must be created to initiate the provisioning of persistent storage.

Creating Alletra Secret

To create a new Secret via CLI that will be used with HPE Alletra:

  1. Add the name, namespace, backend username, backend password, and backend IP address in the Alletra-secret.yaml file and save it to be used by the CSP.

The following details are provided in the Alletra-secret.yaml file:

apiVersion: v1
kind: Secret
metadata:
  name: alletra-secret
  namespace: hpe-csi
stringData:
  serviceName: alletra-csp-svc
  servicePort: "8080"
  backend: alletramgmtip      # update alletramgmt ip
  username: admin
  password: admin

  2. Create the Secret from the Alletra-secret.yaml file with the following command:
$ oc create -f Alletra-secret.yaml
  3. The following output displays the status of the alletra-secret in the hpe-csi namespace:

FIGURE 19: HPE Alletra Secret status
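
The same status can also be confirmed from the CLI, using the secret name and namespace defined above:

$ oc get secret alletra-secret -n hpe-csi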

Creating StorageClass with HPE Alletra Secret

This section describes how to create a new StorageClass using the existing Alletra-secret and the necessary StorageClass parameters.

To create a new StorageClass using the Alletra-secret:

  1. Edit the following parameters in the Alletra-storageclass.yaml file:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: alletra-storageclass
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/controller-expand-secret-name: alletra-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-csi
  csi.storage.k8s.io/controller-publish-secret-name: alletra-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-csi
  csi.storage.k8s.io/node-publish-secret-name: alletra-secret
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-csi
  csi.storage.k8s.io/node-stage-secret-name: alletra-secret
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-csi
  csi.storage.k8s.io/provisioner-secret-name: alletra-secret
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-csi
  description: "Volume created by the HPE CSI Driver for Kubernetes"
reclaimPolicy: Delete
allowVolumeExpansion: true

  2. Create the StorageClass from the Alletra-storageclass.yaml file with the following command:
$ oc create -f Alletra-storageclass.yaml
  3. Verify the name of the storage class (in this case, alletra-storageclass).

FIGURE 20: HPE Alletra StorageClass status
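
To confirm end-to-end dynamic provisioning, a test PVC can be created against the new StorageClass. This is a minimal sketch; the PVC name, namespace, and size are illustrative only:

$ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: alletra-test-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: alletra-storageclass
EOF
$ oc get pvc alletra-test-pvc -n default

The PVC should reach the Bound state once the CSI driver provisions the volume on the array.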

# Deploying OpenShift Data Foundation

This section covers deploying OpenShift Data Foundation 4.15 on existing Red Hat OpenShift Container Platform 4.15 worker nodes.

The OpenShift Data Foundation operator installation uses the Local Storage Operator, which consumes 10 GB file system volumes for the monitoring (mon) pods and 500 GB/2 TB block volumes for the OSD (Object Storage Daemon) volumes. These OSDs provide the persistent storage consumed by applications deployed on top of the OCP cluster through ODF.

Figure 21. Logical storage Layout in Solution

The following operators are required to create the ODF cluster and are deployed in an automated fashion:

  • Local Storage Operator

  • OpenShift Data Foundation Operator

# Flow Diagram

Figure 22. Deploying OpenShift Container Storage Solution Flow Diagram

# Configuration requirements

The table below shows the required worker node configuration.

| Server Role | CPU | RAM (GB) | HardDisk1 | HardDisk2 | HardDisk3 |
| --- | --- | --- | --- | --- | --- |
| Worker | 16 | 64 | 120 GB | 10 GB | 500 GB/2 TB |

# Prerequisites

  1. A Red Hat OpenShift Container Platform 4.15 cluster console must be accessible with login credentials.

  2. Local storage must be available in the OpenShift Container Platform cluster from any supported source (for example, Nimble, Alletra, or local disks).

  3. ODF installation on the OCP 4.15 cluster requires a minimum of 3 worker nodes; this solution uses exactly 3. Each worker node needs two additional hard disks: a 10 GB disk for the mon pod (3 in total, each consumed through a PVC) and a 500 GB (or larger) volume (a PVC using the default "thin" storage class) for the OSD volumes. Each node also requires 16 CPUs and 64 GB of RAM, with the hard disk configuration shown in the table above (a quick CLI check of the disks follows).
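
One way to confirm that the extra disks are visible on each worker node before installing ODF is to inspect the block devices with a debug pod; the node name below is illustrative:

    > oc debug node/sworker1.socp.twentynet.local -- chroot /host lsblk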

# Scripts for deploying ODF cluster

NOTE

BASE_DIR is the base directory path under which all the automation script directories are located: /opt/hpe-solutions-openshift/DL-LTI-Openshift/

This section provides details on the scripts developed to automate the installation of ODF operator on the OCP cluster. The scripts used to deploy ODF can be found in the installer VM at $BASE_DIR/odf_installation.

  1. install_odf_operator.py - The main Python script, which installs the Local Storage and OpenShift Container Storage operators, creates file system and block storage, and also creates the SCs, PVs, and PVCs.

  2. config.py - A Python script that converts user input values into program variables for use by the install_odf_operator.py script.

  3. userinput.json - The userinput.json file needs to be modified as per the user configuration and requirements for installing and scaling the ODF cluster.

  4. create_local_storage_operator.yaml - Creates the Local Storage operator's namespace and installs the Local Storage operator.

  5. auto_discover_devices.yaml - Creates the LocalVolumeDiscovery resource.

  6. localvolumeset.yaml - Creates the LocalVolumeSets.

  7. odf_operator.yaml - Creates the OpenShift Container Storage namespace and installs the OpenShift Data Foundation Operator.

  8. storagecluster.yaml - Creates the storage classes, PVCs (Persistent Volume Claims), and pods to bring up the ODF cluster.

# Installing OpenShift Container Storage on OpenShift Container Platform

  1. Log in to the installer machine as a non-root user and activate the Python virtual environment as described in the deployment guide.

  2. Update the userinput.json file found at $BASE_DIR/odf_installation with the following setup configuration details:

    "OPENSHIFT_DOMAIN": "<OpenShift Server sub domain fqdn (api.domain.base_domain)>",
    "OPENSHIFT_PORT": "<OpenShift Server port number (OpenShift Container Platform runs on port 6443 by default)>",
    "LOCAL_STORAGE_VERSION": "<OCP_cluster_version>",
    "OPENSHIFT_CONTAINER_STORAGE_VERSION": "",
    "OPENSHIFT_CLIENT_PATH": "<Provide oc absolute path ending with / OR leave empty in case oc is available under /usr/local/bin>",
    "OPENSHIFT_CONTAINER_PLATFORM_WORKER_NODES": <Provide OCP worker nodes fqdn list ["sworker1.fqdn", "sworker2.fqdn","sworker3.fqdn"]>,
    "OPENSHIFT_USERNAME": "",
    "OPENSHIFT_PASSWORD": "",
    "DISK_NUMBER": ""
    
  3. Execute the following commands to deploy the ODF cluster.

    > cd $BASE_DIR/odf_installation
    
    > python -W ignore install_odf_operator.py
    

    The output of the above command is shown below:

    > python -W ignore install_odf_operator.py
    
    Enter key for encrypted variables:
    
    Logging into your OpenShift Cluster
    
    Successfully logged into the OpenShift Cluster
    
    Waiting for 1 minutes to 'Local Storage' operator to be available on OCP web console..!!
    
    'Local Storage' operator is created..!!
    
    Waiting for 2 minutes to OCS operator to be available on OCP web console..!!
    
    'OpenShift Container Storage' operator is created..!!
    
    INFO:
    
    1) Run the below command to list all PODs and PVCs of OCS cluster.
    
      'oc get pod,pvc -n openshift-storage'
    
    2) Wait for 'pod/ocs-operator-xxxx' pod to be up and running.
    
    3) Log into OCP web GUI and check Persistant Stoarge in dashboard.
    
    $
    

# Validation of the OpenShift Data Foundation cluster

The required operators are created after the script executes, and they are reflected in the OpenShift console. This section outlines the steps to verify that the operators created through the script appear in the GUI:

  1. Log in to the OpenShift Container Platform web console as a user with administrative privileges.

  2. Navigate to Operators → Installed Operators and select the local-storage project from the Project drop-down list.

  3. The Local Storage operator will be available in the OpenShift web console, as shown in the figure below.

  4. Navigate to Operators → Installed Operators and select the openshift-storage project to view the OpenShift Container Storage operator.

  5. The OpenShift Container Storage operator will be available on the OpenShift Container Platform web console, as shown in the figure below.

  6. Verify the storage classes (SCs) created by the OpenShift Data Foundation operator on the CLI, as shown below:

    > oc get sc
    
    NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    local-sc                      kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  45h
    localblock-sc                 kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  45h
    odf-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              false                  45h
    odf-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              false                  45h
    openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false                  45h
    
  7. Verify the PVs created by the OpenShift Data Foundation operator on the CLI, as shown below:

    > oc get pv
    
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS                  REASON   AGE
    local-pv-2febb788                          500Gi      RWO            Delete           Bound    openshift-storage/odf-deviceset-1-0-lhtvh   localblock-sc                          45h
    local-pv-5f19c0e5                          10Gi       RWO            Delete           Bound    openshift-storage/rook-ceph-mon-b           local-sc                               45h
    local-pv-b73b1cd5                          500Gi      RWO            Delete           Bound    openshift-storage/odf-deviceset-2-0-jmmck   localblock-sc                          45h
    local-pv-b8ba8c38                          10Gi       RWO            Delete           Bound    openshift-storage/rook-ceph-mon-a           local-sc                               45h
    local-pv-c3a372f6                          10Gi       RWO            Delete           Bound    openshift-storage/rook-ceph-mon-c           local-sc                               45h
    local-pv-e5e3d596                          500Gi      RWO            Delete           Bound    openshift-storage/odf-deviceset-0-0-5jxg7   localblock-sc                          45h
    pvc-8f3e3d8b-6be7-4ba8-8968-69cbc866c89f   50Gi       RWO            Delete           Bound    openshift-storage/db-noobaa-db-0            ocs-storagecluster-ceph-rbd            45h
     $
    
  8. Verify the PVCs created by the OpenShift Data Foundation operator on the CLI, as shown below:

    > oc get pvc -n openshift-storage
    
    NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
    db-noobaa-db-0            Bound    pvc-8f3e3d8b-6be7-4ba8-8968-69cbc866c89f   50Gi       RWO            odf-storagecluster-ceph-rbd   45h
    odf-deviceset-0-0-5jxg7   Bound    local-pv-e5e3d596                          500Gi      RWO            localblock-sc                 45h
    odf-deviceset-1-0-lhtvh   Bound    local-pv-2febb788                          500Gi      RWO            localblock-sc                 45h
    odf-deviceset-2-0-jmmck   Bound    local-pv-b73b1cd5                          500Gi      RWO            localblock-sc                 45h
    rook-ceph-mon-a           Bound    local-pv-b8ba8c38                          10Gi       RWO            local-sc                      45h
    rook-ceph-mon-b           Bound    local-pv-5f19c0e5                          10Gi       RWO            local-sc                      45h
    rook-ceph-mon-c           Bound    local-pv-c3a372f6                          10Gi       RWO            local-sc                      45h
     $
    
  9. Verify the pods of the OpenShift Data Foundation operator on the CLI, as shown below:

    > oc get pod -n openshift-storage
    
    NAME READY STATUS RESTARTS AGE
    csi-cephfsplugin-6xpsk 3/3 Running 0 45h
    csi-cephfsplugin-7khm6 3/3 Running 0 17m
    csi-cephfsplugin-bb48n 3/3 Running 0 45h
    csi-cephfsplugin-cfzx6 3/3 Running 0 15m
    csi-cephfsplugin-provisioner-79587c64f9-2dpm6 5/5 Running 0 45h
    csi-cephfsplugin-provisioner-79587c64f9-hf46x 5/5 Running 0 45h
    csi-cephfsplugin-w6p6v 3/3 Running 0 45h
    csi-rbdplugin-2z686 3/3 Running 0 45h
    csi-rbdplugin-6tv5m 3/3 Running 0 45h
    csi-rbdplugin-jgf5z 3/3 Running 0 17m
    csi-rbdplugin-provisioner-5f495c4566-76rqm 5/5 Running 0 45h
    csi-rbdplugin-provisioner-5f495c4566-pzvww 5/5 Running 0 45h
    csi-rbdplugin-v7lfx 3/3 Running 0 45h
    csi-rbdplugin-ztdjs 3/3 Running 0 15m
    noobaa-core-0 1/1 Running 0 45h
    noobaa-db-0 1/1 Running 0 45h
    noobaa-endpoint-6458fc874f-vpznd 1/1 Running 0 45h
    noobaa-operator-7f4495fc6-lmk9k 1/1 Running 0 45h
    odf-operator-5d664769f-59v8j 1/1 Running 0 45h
    rook-ceph-crashcollector-sworker1.socp.twentynet.local-84dddddb 1/1 Running 0 45h
    rook-ceph-crashcollector-sworker2.socp.twentynet.local-8b5qzzsz 1/1 Running 0 45h
    rook-ceph-crashcollector-sworker3.socp.twentynet.local-699n9kzp 1/1 Running 0 45h
    rook-ceph-drain-canary-sworker1.socp.twentynet.local-85bffzm66m 1/1 Running 0 45h
    rook-ceph-drain-canary-sworker2.socp.twentynet.local-66bcfjjfkr 1/1 Running 0 45h
    rook-ceph-drain-canary-sworker3.socp.twentynet.local-5f6b57c9nt 1/1 Running 0 45h
    rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-67fbb67dkb4kz 1/1 Running 0 45h
    rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-db66df7dtqkmx 1/1 Running 0 45h
    rook-ceph-mgr-a-6f5f7b58dc-fjvjc 1/1 Running 0 45h
    rook-ceph-mon-a-76cc49c944-5pcgf 1/1 Running 0 45h
    rook-ceph-mon-b-b9449cdd7-s6mct 1/1 Running 0 45h
    rook-ceph-mon-c-59d854cd8-gn6sd 1/1 Running 0 45h
    rook-ceph-operator-775cd6cd66-sdfph 1/1 Running 0 45h
    rook-ceph-osd-0-7644557bfb-9l7ns 1/1 Running 0 45h
    rook-ceph-osd-1-7694c74948-lc9sf 1/1 Running 0 45h
    rook-ceph-osd-2-794547558-wjcpz 1/1 Running 0 45h
    rook-ceph-osd-prepare-ocs-deviceset-0-0-5jxg7-t89zh 0/1 Completed 0 45h
    rook-ceph-osd-prepare-ocs-deviceset-1-0-lhtvh-f2znl 0/1 Completed 0 45h
    rook-ceph-osd-prepare-ocs-deviceset-2-0-jmmck-wsrb2 0/1 Completed 0 45h
    rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-67b7865qx276 1/1 Running 0 45h
    $
    
  10. The storage capacity of the ODF cluster with 3 worker nodes (3x500 Gi) is displayed on the OCP web console, as shown in the figure below.

![](../media/figure54.png)

# Validating ODF by deploying a WordPress application

This section covers the steps to validate the OpenShift Data Foundation (ODF) deployment by deploying a 2-tier WordPress application along with a MySQL database.

Prerequisites

  • OCP 4.15 cluster must be installed.

  • ODF must be available to claim persistent volumes (PVs).

# Deploying WordPress application

NOTE

BASE_DIR is the base directory path under which all the automation script directories are located: /opt/hpe-solutions-openshift/DL-LTI-Openshift

  1. Log in to the installer machine as a non-root user.

  2. From within the repository, navigate to the WordPress script folder:

    > cd $BASE_DIR/odf_installation/wordpress
    
  3. Run the below script to deploy the WordPress application along with MySQL:

    > ./deploy_wordpress.sh
    

The deploy_wordpress.sh script performs the following activities (a rough sketch of the equivalent commands follows the list):

  • Creates the project

  • Sets the default storage class

  • Deploys the WordPress and MySQL applications

  • Creates routes
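
The following is only an illustrative sketch of what such a script typically runs; the manifest file names are assumptions, and the actual script in the repository should be treated as authoritative:

    > oc new-project wordpress
    > oc annotate storageclass ocs-storagecluster-ceph-rbd storageclass.kubernetes.io/is-default-class="true"
    > oc apply -f mysql-deployment.yaml -f wordpress-deployment.yaml -n wordpress
    > oc expose service wordpress-http -n wordpress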

  4. Below is the output of the script:

    > ./deploy_wordpress.sh
    

    Output of the command follows:

    Already on project "wordpress" on server "https://api.socp.twentynet.local:6443".
    
    You can add applications to this project with the 'new-app' command.
    For example, try:
    
    oc new-app ruby~https://github.com/sclorg/ruby-ex.git
    
    to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:
    
    kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
    
    Already on project "wordpress" on server "https://api.socp.twentynet.local:6443".
    
    clusterrole.rbac.authorization.k8s.io/system:openshift:scc:anyuid
    added: "default"error: --overwrite is false but found the following declared annotation(s):
    'storageclass.kubernetes.io/is-default-class' already has a value (true)
    
    service/wordpress-http created
    
    service/wordpress-mysql created
    
    persistentvolumeclaim/mysql-pv-claim created
    
    persistentvolumeclaim/wp-pv-claim created
    
    secret/mysql-pass created
    
    deployment.apps/wordpress-mysql created
    
    deployment.apps/wordpress created
    
    route.route.openshift.io/wordpress-http created
    
    URL to access application
    
    wordpress-http-wordpress.apps.socp.twentynet.local
    
    $
    

# Verifying the WordPress deployment

  1. Execute the following command to verify the persistent volumes associated with the WordPress application and MySQL database.

    > oc get pods,pvc,route -n wordpress
    
    NAME                                  READY   STATUS    RESTARTS   AGE
    pod/wordpress-6f69797b8f-hqpss        1/1     Running   0          5m52s
    pod/wordpress-mysql-8f4b599b5-cd2s2   1/1     Running   0          5m52s
    
    NAME                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
    persistentvolumeclaim/mysql-pv-claim   Bound    pvc-ccf2a578-9ba3-4577-8115-7c80ac200a9c   5Gi        RWO            ocs-storagecluster-ceph-rbd   5m50s
    persistentvolumeclaim/wp-pv-claim      Bound    pvc-3acec0a0-943d-4138-bda9-5b57f8c35c5d   5Gi        RWO            ocs-storagecluster-ceph-rbd   5m50s
    
    NAME                                      HOST/PORT                                              PATH   SERVICES         PORT     TERMINATION   WILDCARD
    route.route.openshift.io/wordpress-http   wordpress-http-wordpress.apps.socp.twentynet.local          wordpress-http   80-tcp                 None
    $
    
  2. Access the route URL in a browser to open the WordPress application, as shown below.
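
A quick response check from the installer VM can also confirm that the application is reachable over the route; the hostname below is the one reported by the deployment script:

    > curl -I http://wordpress-http-wordpress.apps.socp.twentynet.local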