Configuring cluster logging
Deploying worker nodes for cluster logging
Elasticsearch is a memory-intensive application. Each Elasticsearch node needs 16GB of memory as well as additional CPU resources. The initial set of deployed OpenShift Container Platform worker nodes might not be large enough to support the Elasticsearch cluster, so you must add worker nodes to the OpenShift Container Platform cluster that meet or exceed the recommended memory and CPU requirements. Each Elasticsearch node can operate with a lower memory setting, though this is not recommended for production deployments.
Add new worker nodes to your cluster to host the logging stack, setting the cpus and ram attributes to values large enough for a production environment. In your hosts file, add new entries to the [rhcos_worker] group:
[rhcos_worker]
hpe-worker0 ansible_host=10.15.155.213
hpe-worker1 ansible_host=10.15.155.214
hpe-worker2 ansible_host=10.15.155.215 cpus=8 ram=32768 # Larger worker node for EFK
hpe-worker3 ansible_host=10.15.155.216 cpus=8 ram=32768 # Larger worker node for EFK
hpe-worker4 ansible_host=10.15.155.217 cpus=8 ram=32768 # Larger worker node for EFK
In the above example, each of the three larger CoreOS worker nodes (hpe-worker2 through hpe-worker4) is allocated 8 virtual CPU cores and 32GB of RAM. These values override the default limits of 4 virtual CPU cores and 16GB of RAM defined in the group_vars/worker.yml file.
Deploy the additional, large worker nodes using the procedure described in the section Deploying CoreOS worker nodes.
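As a sketch of the defaults mentioned above, group_vars/worker.yml might define the baseline worker sizing along these lines (the exact file layout is an assumption; the variable names mirror the inventory attributes shown earlier):

```yaml
# group_vars/worker.yml (illustrative excerpt, not the verbatim file)
# Default sizing applied to any worker node that does not override
# these values in the inventory hosts file:
cpus: 4        # virtual CPU cores per worker node
ram: 16384     # memory in MB (16GB)
```

Any host-level cpus or ram setting in the inventory, such as those on hpe-worker2 through hpe-worker4, takes precedence over these group defaults.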
Elasticsearch storage
You can configure a persistent storage class and size for the Elasticsearch cluster. The Cluster Logging Operator creates a PersistentVolumeClaim for each data node in the Elasticsearch cluster based on these parameters.
| Variable | File | Description |
|---|---|---|
| efk_es_pv_size | playbooks/roles/efk/vars/main.yml | Size of the Persistent Volume used to hold Elasticsearch data. The default size is '200G'. |
| efk_es_pv_storage_class | playbooks/roles/efk/vars/main.yml | The storage class to use when creating Elasticsearch Persistent Volumes. The default storage class name is 'thin'. |
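To override the defaults, set these variables in playbooks/roles/efk/vars/main.yml. A minimal sketch (the values shown are illustrative, not recommendations):

```yaml
# playbooks/roles/efk/vars/main.yml (excerpt)
efk_es_pv_size: 500G             # PersistentVolume size per Elasticsearch data node
efk_es_pv_storage_class: thin    # storage class used for the PersistentVolumeClaims
```

The Cluster Logging Operator uses these values when it creates a PersistentVolumeClaim for each Elasticsearch data node.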
Customizing cluster logging
The template file playbooks/roles/efk/templates/clo-crd.yml.j2 contains the configuration used for deploying Elasticsearch, Fluentd, and Kibana. The requests and limits for the resources required by each logging component can be modified in this file.
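For orientation, the template renders a ClusterLogging custom resource. A hedged sketch of what such a resource can look like is shown below; the node counts, replica counts, and resource values are illustrative and may differ from what the template actually emits:

```yaml
# Sketch of a rendered ClusterLogging custom resource (illustrative values)
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      resources:
        requests:
          memory: 16Gi        # recommended minimum per Elasticsearch node
        limits:
          memory: 16Gi
      storage:
        storageClassName: thin  # corresponds to efk_es_pv_storage_class
        size: 200G              # corresponds to efk_es_pv_size
  visualization:
    type: kibana
    kibana:
      replicas: 1
  collection:
    logs:
      type: fluentd
      fluentd: {}
```

Adjusting the requests and limits blocks in the template changes the resources requested by the corresponding component pods when the operator reconciles the resource.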