Red Hat OpenShift Logging implementation

Red Hat OpenShift provides integration support for logging and observability with the Red Hat OpenShift Logging (RHOL) framework. RHOL deploys instances of the following software:

| RHOL framework component | Description | Reference |
| --- | --- | --- |
| Elasticsearch | Distributed and high-performance search and analytics engine. Supports full-text and structured search. Allows indexing and search capabilities for large volumes of log and document data. | https://www.elastic.co/elasticsearch/ |
| Fluentd | Pluggable and scalable log and data collector. Standardises upstream and downstream data integration. | https://www.fluentd.org/ |
| Kibana | Robust data visualisation client application for Elasticsearch. Allows broadly customisable query functionality and corresponding visualisation capabilities. Supports operational and real-time observability. | https://www.elastic.co/kibana/ |

Together, these software components are often referred to as the EFK stack. The combination of these technologies provides powerful and extensible mechanisms for logging and observability, including the following (a minimal usage sketch follows this list):

  • Broad support for log consumption, including native support for a variety of operational and software interfaces.
  • High-performance indexing and retrieval of log data.
  • Visualisation and observability of log data and associated metrics.
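
As a minimal sketch of this pattern, the following Python example indexes a log event into Elasticsearch and then searches for it over the Elasticsearch REST API. The endpoint URL, index name, and log fields are illustrative assumptions; a real RHOL deployment exposes a managed Elasticsearch instance with TLS and authentication configured.

```python
import json

import requests

# Hypothetical in-cluster Elasticsearch endpoint; a real RHOL deployment
# would use the managed Elasticsearch service with TLS and authentication.
ES_URL = "https://elasticsearch.openshift-logging.svc:9200"

# Index a single log event. RHOL manages its own indices; the index name
# and log fields below are illustrative only.
event = {
    "@timestamp": "2022-01-01T00:00:00Z",
    "level": "info",
    "message": "Service started successfully.",
}
response = requests.post(f"{ES_URL}/app-example/_doc", json=event, verify=False)
response.raise_for_status()

# Full-text search over the indexed log data. A newly indexed document
# becomes searchable after the index refresh interval has elapsed.
query = {"query": {"match": {"message": "started"}}}
response = requests.get(f"{ES_URL}/app-example/_search", json=query, verify=False)
print(json.dumps(response.json()["hits"]["hits"], indent=2))
```
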
Red Hat OpenShift implementation

HeartAI orchestrates system services with the Kubernetes-based Red Hat OpenShift container platform. Further information about the HeartAI implementation of Red Hat OpenShift may be found in the corresponding HeartAI documentation.

Fluentd event logging

The HeartAI implementation of Fluentd provides general coverage of event logs. Each collected event log includes information about the log entry itself, the log collector, the OpenShift environment, and the specific log message.

These general event logs collect the following information (an illustrative record sketch follows this list):

  • Information about the event log:
    • The event log timestamp.
    • The event log index.
    • The event log score.
    • The event log type.
    • The event log level.
  • Information about the logging collector:
    • The logging collector input name.
    • The logging collector host address.
    • The logging collector name.
    • The logging collector timestamp.
    • The logging collector version.
  • Information about the OpenShift deployment environment:
    • The node hostname.
    • The cluster root URL.
    • The namespace ID.
    • The namespace name.
    • Namespace labels.
    • The pod ID.
    • The pod virtual IP.
    • The pod hostname.
    • The pod name.
    • The container ID.
    • The container name.
    • The container image ID.
    • The container image name.
    • Container labels.
  • The event log message in two formats:
    • As a raw string.
    • Parsed and encoded as JSON.
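
As an illustrative sketch, the following Python dictionary shows the general shape of such a collected event log record. Field names follow the general OpenShift logging data model, though the exact names, values, and nesting here are assumptions that may vary between RHOL versions; host-identifying values are sanitised placeholders.

```python
# Illustrative structure of a collected event log record. Field names
# follow the general OpenShift logging data model; exact names and nesting
# are assumptions that may differ between RHOL versions.
event_log_record = {
    # Information about the event log itself.
    "@timestamp": "2022-01-01T00:00:00.000000+00:00",
    "level": "info",
    # Information about the logging collector.
    "pipeline_metadata": {
        "collector": {
            "name": "fluentd",
            "inputname": "fluent-plugin-systemd",
            "received_at": "2022-01-01T00:00:00.000000+00:00",
            "version": "1.7.4",
            "ipaddr4": "10.0.0.1",  # sanitised placeholder
        }
    },
    # Information about the OpenShift deployment environment.
    "hostname": "node.example.com",  # sanitised placeholder
    "kubernetes": {
        "namespace_name": "heartai-hib",  # hypothetical namespace name
        "namespace_id": "00000000-0000-0000-0000-000000000000",
        "pod_name": "hib-interface-0",  # hypothetical pod name
        "pod_id": "00000000-0000-0000-0000-000000000000",
        "container_name": "hib-interface",
        "container_image": "image-registry.example.com/heartai/hib-interface:latest",
    },
    # The event log message as a raw string; a JSON-parsed form may also be
    # present under a separate field, depending on the collector pipeline.
    "message": '{"event": "service_started"}',
}
```
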
Sanitised logs

The host network hostname and address have been removed from the following event log examples.

Event logs are viewable from within the Kibana component of Red Hat OpenShift Logging deployments. The following examples show an event log in rendered and JSON-encoded forms.

heartai-kibana-log-example.png

The following example shows the same log as a JSON-encoded document:

heartai-kibana-log-example-json.png

Kibana UI for log discovery

The following image shows the Kibana web interface for log discovery, providing log aggregation and observability approaches for available log data. The Kibana log discovery web interface contains a variety of functionalities for querying, processing, and visualising log data, including:

  • Real-time collection and analysis of log data from backing Elasticsearch instances.
  • In-built support for log data querying and processing, allowing the creation of log data reporting and visualisation pipelines (a query sketch follows the image below).
  • Native support for Red Hat OpenShift instances, with the following default log fields specified:
    • Kubernetes namespace name.
    • Kubernetes namespace ID.
    • Kubernetes pod name.
    • Kubernetes pod hostname.
    • Kubernetes container name.
    • Kubernetes container ID.
    • Log message ID.
    • Log message.
    • Log timestamp.
    • Received timestamp.
  • The ability to save the search as a template, with support to export and import to other Kibana instances.

heartai-kibana-discover.png
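
As referenced in the list above, the following Python sketch approximates the kind of query that the Discover view issues against Elasticsearch, returning only the default log fields for recent entries. The endpoint URL and the app-* index pattern are assumptions; index naming differs between RHOL versions.

```python
import json

import requests

ES_URL = "https://elasticsearch.openshift-logging.svc:9200"  # placeholder endpoint

# Retrieve recent log entries, limiting the returned fields to those shown
# by default in the Kibana Discover view.
query = {
    "size": 10,
    "sort": [{"@timestamp": {"order": "desc"}}],
    "_source": [
        "kubernetes.namespace_name",
        "kubernetes.namespace_id",
        "kubernetes.pod_name",
        "kubernetes.container_name",
        "hostname",
        "message",
        "@timestamp",
    ],
    "query": {"range": {"@timestamp": {"gte": "now-15m"}}},
}
response = requests.get(f"{ES_URL}/app-*/_search", json=query, verify=False)
for hit in response.json()["hits"]["hits"]:
    print(json.dumps(hit["_source"]))
```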

Kibana UI for namespace-level log visualisation

The following image shows the Kibana web interface for log data visualisation from the HeartAI HIB interface service namespace, providing log aggregation and visualisation approaches for corresponding log data. The Kibana log data visualisation web interface contains a variety of functionalities for querying, processing, and visualising log data, including:

  • Real-time collection and analysis of log data from backing Elasticsearch instances.
  • In-built support for log data querying and processing, allowing the creation of log data visualisation pipelines.
  • A visualisation of log data from the HeartAI HIB interface service namespace, including:
    • Moving average of log count over time (an aggregation sketch follows the image below).
    • Time aggregation interval of 10 minutes.
  • The ability to save the visualisation as a template, with support to export and import to other Kibana instances.

heartai-kibana-visualize.png
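
The moving-average visualisation referenced above can be approximated directly against Elasticsearch with a date histogram and a pipeline aggregation, as in the following sketch. The endpoint, index pattern, and namespace name are assumptions, and older Elasticsearch releases provide a similar moving_avg aggregation in place of moving_fn.

```python
import requests

ES_URL = "https://elasticsearch.openshift-logging.svc:9200"  # placeholder endpoint

# Count logs from a single (hypothetical) namespace in 10-minute buckets,
# then smooth the bucket counts with an unweighted moving average over a
# six-bucket window.
query = {
    "size": 0,
    "query": {"term": {"kubernetes.namespace_name": "heartai-hib"}},
    "aggs": {
        "logs_over_time": {
            "date_histogram": {"field": "@timestamp", "fixed_interval": "10m"},
            "aggs": {
                "moving_average": {
                    "moving_fn": {
                        "buckets_path": "_count",
                        "window": 6,
                        "script": "MovingFunctions.unweightedAvg(values)",
                    }
                }
            },
        }
    },
}
response = requests.get(f"{ES_URL}/app-*/_search", json=query, verify=False)
for bucket in response.json()["aggregations"]["logs_over_time"]["buckets"]:
    print(bucket["key_as_string"], bucket["doc_count"], bucket["moving_average"]["value"])
```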

Kibana UI for namespace-level log dashboard

The following image shows the Kibana web interface for log data dashboarding, which provides the ability to compose several log data visualisations into a comprehensive overview dashboard. The Kibana log data dashboarding web interface supports:

  • Real-time collection and analysis of log data from backing Elasticsearch instances.
  • Composition of several log data visualisations onto a dashboard plane.
  • The ability to save the dashboard as a template, with support to export and import to other Kibana instances (an export sketch follows the image below).

heartai-kibana-dashboard.png
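
As one sketch of the template export and import described above, the following Python example calls the Kibana saved objects export API (Kibana 7.x style) to export a dashboard, together with the visualisations it references, as NDJSON. The Kibana route, dashboard id, and authentication details are assumptions; the resulting file can be imported into another Kibana instance through the corresponding _import endpoint.

```python
import requests

KIBANA_URL = "https://kibana.apps.example.com"  # placeholder Kibana route

# Export a dashboard and its referenced saved objects as NDJSON. The
# dashboard id below is a placeholder; authentication headers would also
# be required for a secured Kibana instance.
response = requests.post(
    f"{KIBANA_URL}/api/saved_objects/_export",
    headers={"kbn-xsrf": "true"},
    json={
        "objects": [{"type": "dashboard", "id": "example-dashboard-id"}],
        "includeReferencesDeep": True,
    },
)
response.raise_for_status()
with open("dashboard-export.ndjson", "wb") as f:
    f.write(response.content)
```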

Kibana UI for cluster-level log visualisation

The following image shows the Kibana web interface for log data visualisation for a HeartAI instance of a Red Hat OpenShift cluster, providing log aggregation and visualisation approaches for corresponding log data. The Kibana log data visualisation web interface contains a variety of functionalities for querying, processing, and visualising log data, including:

  • Real-time collection and analysis of log data from backing Elasticsearch instances.
  • In-built support for log data querying and processing, allowing the creation of log data visualisation pipelines.
  • A visualisation of log data from a HeartAI instance of Red Hat OpenShift:
    • Log count per cluster namespace (an aggregation sketch follows the image below).
  • The ability to save the visualisation as a template, with support to export and import to other Kibana instances.

heartai-kibana-visualize-cluster-namespace-log-counts.png
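
The log-count-per-namespace visualisation referenced above corresponds to a straightforward terms aggregation, sketched below under the same endpoint and index-pattern assumptions as the earlier examples. Depending on the index mapping, a keyword subfield (for example kubernetes.namespace_name.raw) may be required as the aggregation field.

```python
import requests

ES_URL = "https://elasticsearch.openshift-logging.svc:9200"  # placeholder endpoint

# Count log documents per namespace with a terms aggregation, mirroring the
# cluster-level log count per namespace visualisation.
query = {
    "size": 0,
    "aggs": {
        "logs_per_namespace": {
            "terms": {"field": "kubernetes.namespace_name", "size": 50}
        }
    },
}
response = requests.get(f"{ES_URL}/app-*/_search", json=query, verify=False)
for bucket in response.json()["aggregations"]["logs_per_namespace"]["buckets"]:
    print(f'{bucket["key"]}: {bucket["doc_count"]}')
```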

Example: SAVCS service-level logging and monitoring

SAVCS service-level deployments are supported by integrated event observability, including live reporting and visualisation dashboards. These functionalities are provided by the HeartAI implementation of Red Hat OpenShift Logging, supporting broad observability of important service-level events such as database query logs, data audit candidate logs, and SAVCS activity logs. A sketch of a corresponding log query follows the dashboard image below.

The following observability dashboard shows a variety of observability measures for SAVCS generally, including:

  • Log extracts for database queries that have been initiated.
  • Log extracts for database queries that have been successful.
  • Count visualisations for database queries that have been initiated.
  • Count visualisations for database queries that have been successful.
  • Query duration for database queries to the Sunrise EMR CUR tables.
  • Query duration for database queries to the Sunrise EMR VIEW tables.
  • Row counts for database queries to the Sunrise EMR CUR and VIEW tables.
  • Count of detected audit candidates.
  • Log extracts for scheduled processing of service-level detected audit candidates.
  • Log extracts for scheduled processing of service-level SAVCS activity data.

heartai-services-savcs-logging-and-monitoring.png
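
As referenced above, the following sketch shows the kind of query that could back the initiated and successful database query counts on this dashboard. The namespace name and message patterns are hypothetical placeholders; the actual dashboard queries would match the real SAVCS log message formats.

```python
import requests

ES_URL = "https://elasticsearch.openshift-logging.svc:9200"  # placeholder endpoint

# Count initiated and successful database query events within a
# hypothetical SAVCS namespace, using named filters so that both counts
# are returned by a single request.
query = {
    "size": 0,
    "query": {"term": {"kubernetes.namespace_name": "heartai-savcs"}},
    "aggs": {
        "database_queries": {
            "filters": {
                "filters": {
                    "initiated": {
                        "match_phrase": {"message": "Database query initiated"}
                    },
                    "successful": {
                        "match_phrase": {"message": "Database query successful"}
                    },
                }
            }
        }
    },
}
response = requests.get(f"{ES_URL}/app-*/_search", json=query, verify=False)
buckets = response.json()["aggregations"]["database_queries"]["buckets"]
for name, bucket in buckets.items():
    print(f'{name}: {bucket["doc_count"]}')
```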