Red Hat OpenShift implementation

Red Hat OpenShift is an enterprise-grade implementation of Kubernetes, providing a modern and secure platform for the orchestration of container-based solutions. OpenShift provides general platform-level capabilities, including:

  • Abstractions for container-level deployments.
  • Software orchestration that is natively cloud-based and distributable.
  • Secure implementations of software-defined networking.
  • Real-time and aggregated logging, monitoring, and observability.
  • Frameworks for eventing and alerting.
  • Controls for resource management.
  • Access-level controls based around service accounts, roles, groups, and role bindings.
  • Graphical user interfaces for both administrators and developers.

HeartAI deploys OpenShift with Microsoft Azure Red Hat OpenShift (ARO), a fully-managed implementation of OpenShift within Microsoft Azure. ARO is deployed to instances of Microsoft Azure cloud resources that are managed by Microsoft Azure, including operational lifecycle management, patching and updating, logging, monitoring, and security hardening.

OpenShift identity provision and access control

HeartAI instances of OpenShift integrate natively with the HeartAI implementation of Keycloak. This allows users to authenticate with OpenShift through its OAuth 2.0 and OpenID Connect framework. With support for federated identity, users will also be able to authenticate with their Microsoft Azure Active Directory identity.
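
As a sketch of this integration, the following shows a hypothetical OpenShift OAuth cluster configuration for an OpenID Connect identity provider. The Keycloak issuer URL, client identifier, and secret name are placeholder assumptions for illustration rather than the actual HeartAI values:

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: keycloak
      mappingMethod: claim
      type: OpenID
      openID:
        # Hypothetical client registered with the Keycloak realm.
        clientID: openshift-console
        clientSecret:
          name: keycloak-openid-client-secret
        # Placeholder issuer URL for the Keycloak realm.
        issuer: "https://keycloak.example.heartai.net/auth/realms/heartai"
        claims:
          preferredUsername:
            - preferred_username
          name:
            - name
          email:
            - email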

Identity and access management

Further information about HeartAI identity and access management may be found with the following documentation:

The OpenShift identity provider allows role-based access control (RBAC) at the following two levels:

  • Cluster RBAC: Roles and bindings that are applicable across all projects. Cluster roles exist cluster-wide, and cluster role bindings can reference only cluster roles.
  • Local RBAC: Roles and bindings that are scoped to a given project. While local roles exist only in a single project, local role bindings can reference both cluster and local roles.

Access control is managed with the following authorisation objects:

  • Rules: Sets of permitted verbs on a set of objects. For example, whether a user or service account can create pods.
  • Roles: Collections of rules. Users and groups can be associated, or bound, to multiple roles.
  • Bindings: Associations of users and/or groups with a role.

The following roles exist within the HeartAI OpenShift instance:

  • admin: A project manager. If used in a local binding, an admin has rights to view any resource in the project and modify any resource in the project except for quota.
  • basic-user: A user that can get basic information about projects and users.
  • cluster-admin: A super-user that can perform any action in any project. When bound to a user with a local binding, they have full control over quota and every action on every resource in the project.
  • cluster-status: A user that can get basic cluster status information.
  • edit: A user that can modify most objects in a project but does not have the power to view or modify roles or bindings.
  • self-provisioner: A user that can create their own projects.
  • view: A user who cannot make any modifications, but can see most objects in a project. They cannot view or modify roles or bindings.

Corresponding roles are applied to HeartAI administrators and developers in relation to their access to the OpenShift environment.
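
As an illustrative sketch of cluster-level RBAC, the following shows a hypothetical ClusterRoleBinding that grants the view cluster role to an administrator group; the binding and group names are assumptions for illustration:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: heartai-administrators-view
subjects:
  # Hypothetical group of HeartAI administrators.
  - kind: Group
    name: heartai-administrators
    apiGroup: rbac.authorization.k8s.io
roleRef:
  # Built-in cluster role granting read-only access across projects.
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io

A local binding of the same cluster role would instead use a RoleBinding scoped to a specific namespace, as shown in the OpenShift RoleBindings section below.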

References

Further information about these approaches may be found with the following external references:

OpenShift console

OpenShift provides a comprehensive graphical user interface console for both administrator and developer environments. The following image shows the OpenShift console overview, describing various cluster components, including:

  • General cluster information.
  • Cluster status and availability.
  • Cluster resource deployment metrics.
  • Real-time monitoring of resource usage, events, and alerting.

openshift-console-cluster-overview.png

OpenShift Namespaces

OpenShift Namespaces provide cluster scoping that logically separates individual namespaces within corresponding overlay VXLAN networks. Through this approach, distinct namespaces are effectively compartmentalised from each other.

Example: OpenShift declaration file for Namespace

The following example shows an OpenShift Namespace declaration file for the HeartAI HelloWorldService production environment:

apiVersion: v1
kind: Namespace
metadata:
  name: heartai-hello-world-prod

OpenShift Pods

OpenShift Pod resources are atomic and composable components of OpenShift clusters, representing the smallest logical orchestration unit. Pods are deployed to corresponding virtual IPs on the residing host network, with these networks being logically separated by VXLAN at the level of OpenShift Namespaces. Pod resource declarations specify one or more containers with associated configurations, and the OpenShift cluster maintains the declared set of containers within the Pod host environment.

Example: OpenShift declaration file for Pod

The following example shows an OpenShift Pod declaration file for the HeartAI HelloWorldService production environment namespace:

apiVersion: "v1"
kind: Pod
metadata:
  name: heartai-hello-world
  namespace: heartai-hello-world-prod
  labels:
    app: heartai-hello-world
    version: v0.31.106
    actorSystemName: heartai-hello-world
  annotations:
    sidecar.istio.io/inject: "true"
    traffic.sidecar.istio.io/includeInboundPorts: "2552,8558,14000,14020"
    traffic.sidecar.istio.io/excludeOutboundPorts: "2552,8558"
spec:
  containers:
    - name: heartai-hello-world
      image: "quay.io/heartai/heartai-hello-world:0.31.106"
      imagePullPolicy: Always
      livenessProbe:
        httpGet:
          path: /alive
          port: management
        initialDelaySeconds: 20
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /ready
          port: management
        initialDelaySeconds: 20
        periodSeconds: 10
      ports:
        - name: remoting
          containerPort: 2552
          protocol: TCP
        - name: management
          containerPort: 8558
          protocol: TCP
        - name: http
          containerPort: 14000
          protocol: TCP
        - name: https
          containerPort: 14020
          protocol: TCP
      env:
        - name: OS_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: JAVA_OPTS
          value: "-Xms1024m -Xmx1024m -Dconfig.resource=production.conf"
        - name: APPLICATION_SECRET
          valueFrom:
            secretKeyRef:
              name: heartai-hello-world-play-secret
              key: secret
        - name: AKKA_CLUSTER_BOOTSTRAP_SERVICE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: "metadata.labels['app']"
        - name: REQUIRED_CONTACT_POINT_NR
          value: "1"
        - name: AKKA_REMOTING_PORT
          value: "2552"
        - name: AKKA_MANAGEMENT_PORT
          value: "8558"
        - name: HTTP_BIND_ADDRESS
          value: "0.0.0.0"
        - name: HTTP_PORT
          value: "14000"
        - name: HTTPS_PORT
          value: "14020"
        - name: KAFKA_BROKERS_SERVICE
          value: "strimzi-kafka-kafka-brokers.heartai-strimzi.svc.cluster.local:9092"
        - name: KAFKA_BOOTSTRAP_SERVICE
          value: "strimzi-kafka-kafka-bootstrap.heartai-strimzi.svc.cluster.local:9092"
        - name: POSTGRESQL_CONTACT_POINT
          valueFrom:
            secretKeyRef:
              name: postgres-url
              key: secret
        - name: POSTGRESQL_USERNAME
          valueFrom:
            secretKeyRef:
              name: postgres-id
              key: secret
        - name: POSTGRESQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-key
              key: secret
        - name: SERVICE_TOPIC_GREETING_MESSAGES_CHANGED
          value: "hello_world_greeting_messages_changed_prod"
      resources:
        limits:
          cpu: 500m
          memory: 2048Mi
        requests:
          cpu: 100m
          memory: 1024Mi

Example: OpenShift console for namespace-level Pods

The following image shows the OpenShift console for Pods within the HeartAI Red Hat Advanced Cluster Security production environment namespace:

openshift-console-heartai-acs-pods.png

Example: OpenShift console for Pod

The following image shows the Red Hat OpenShift web interface console for a Pod resource instance. The Pod web interface console provides information and functionality to support OpenShift hosting of Pod resources, including:

  • The managing Namespace of the Pod resource.
  • The Pod name.
  • Monitoring metrics of the Pod, including:
    • Memory utilisation.
    • CPU utilisation.
    • Filesystem utilisation.
    • Network inbound bandwidth.
    • Network outbound bandwidth.
  • Assigned labels of the Pod.
  • Pod health status.
  • Pod virtual IP address assignment.
  • The hosting Node of the Pod.
  • The Pod creation timestamp.
  • The owning resource.
  • Pod-hosted containers.
  • Pod-hosted volumes.
  • The event history of the Pod.

openshift-console-heartai-acs-pods-sensor.png

OpenShift Deployments

OpenShift Deployments allow the declaration of Pod deployments and higher-level configuration for how these Pods should be orchestrated and managed within the OpenShift environment. The OpenShift Deployment Controller synchronises the actual state of the cluster to the declared state with configurable controls for how this synchronisation operates.

Example: OpenShift declaration file for Deployment

The following example shows an OpenShift Deployment declaration file for the HeartAI HelloWorldService production environment namespace:

apiVersion: "apps/v1"
kind: Deployment
metadata:
  name: heartai-hello-world
  namespace: heartai-hello-world-prod
  labels:
    app: heartai-hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: heartai-hello-world
  strategy:
    rollingUpdate:
      maxSurge: 3
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: heartai-hello-world
        version: v0.31.106
        actorSystemName: heartai-hello-world
      annotations:
        sidecar.istio.io/inject: "true"
        traffic.sidecar.istio.io/includeInboundPorts: "2552,8558,14000,14020"
        traffic.sidecar.istio.io/excludeOutboundPorts: "2552,8558"
    spec:
      containers:
        - name: heartai-hello-world
          image: "quay.io/heartai/heartai-hello-world:0.31.106"
          imagePullPolicy: Always
          livenessProbe:
            httpGet:
              path: /alive
              port: management
            initialDelaySeconds: 20
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: management
            initialDelaySeconds: 20
            periodSeconds: 10
          ports:
            - name: remoting
              containerPort: 2552
              protocol: TCP
            - name: management
              containerPort: 8558
              protocol: TCP
            - name: http
              containerPort: 14000
              protocol: TCP
            - name: https
              containerPort: 14020
              protocol: TCP
          env:
            - name: OS_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: JAVA_OPTS
              value: "-Xms1024m -Xmx1024m -Dconfig.resource=production.conf"
            - name: APPLICATION_SECRET
              valueFrom:
                secretKeyRef:
                  name: heartai-play-secret
                  key: secret
            - name: AKKA_CLUSTER_BOOTSTRAP_SERVICE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: "metadata.labels['app']"
            - name: REQUIRED_CONTACT_POINT_NR
              value: "1"
            - name: AKKA_REMOTING_PORT
              value: "2552"
            - name: AKKA_MANAGEMENT_PORT
              value: "8558"
            - name: HTTP_BIND_ADDRESS
              value: "0.0.0.0"
            - name: HTTP_PORT
              value: "14000"
            - name: HTTPS_PORT
              value: "14020"
            - name: KAFKA_BROKERS_SERVICE
              value: "strimzi-kafka-kafka-brokers.heartai-strimzi.svc.cluster.local:9092"
            - name: KAFKA_BOOTSTRAP_SERVICE
              value: "strimzi-kafka-kafka-bootstrap.heartai-strimzi.svc.cluster.local:9092"
            - name: POSTGRESQL_CONTACT_POINT
              valueFrom:
                secretKeyRef:
                  name: postgres-url
                  key: secret
            - name: POSTGRESQL_USERNAME
              valueFrom:
                secretKeyRef:
                  name: postgres-id
                  key: secret
            - name: POSTGRESQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-key
                  key: secret
            - name: SERVICE_TOPIC_GREETING_MESSAGES_CHANGED
              value: "hello_world_greeting_messages_changed_prod"
          resources:
            limits:
              cpu: 500m
              memory: 2048Mi
            requests:
              cpu: 100m
              memory: 1024Mi

Example: OpenShift console for namespace-level Deployments

The following image shows the OpenShift console for Deployments within the HeartAI Red Hat Advanced Cluster Security production environment namespace:

openshift-console-heartai-acs-deployments.png

Example: OpenShift console for Deployment

The following image shows the OpenShift console for the sensor Deployment within the HeartAI Red Hat Advanced Cluster Security production environment namespace:

openshift-console-heartai-acs-deployments-sensor.png

OpenShift ReplicaSets

OpenShift ReplicaSets maintain a stable set of replica Pods, ensuring that the declared number of Pod replicas is running at any given time. ReplicaSets are typically created and managed indirectly through a corresponding Deployment.

Example: OpenShift console for namespace-level ReplicaSets

The following image shows the OpenShift console for ReplicaSets within the HeartAI Red Hat Advanced Cluster Security production environment namespace:

openshift-console-heartai-acs-replicasets.png

Example: OpenShift console for ReplicaSet

The following image shows the OpenShift console for the sensor ReplicaSet within the HeartAI Red Hat Advanced Cluster Security production environment namespace:

openshift-console-heartai-acs-replicasets-sensor.png

OpenShift Secrets

OpenShift Secrets provide an encrypted store for sensitive data that is injectable into corresponding system components at initialisation or run time.
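
As a sketch, the following shows a hypothetical Secret declaration of the form referenced by the HelloWorldService Pod and Deployment examples above; the secret value shown is a placeholder:

apiVersion: v1
kind: Secret
metadata:
  name: heartai-hello-world-play-secret
  namespace: heartai-hello-world-prod
type: Opaque
stringData:
  # Placeholder value; the actual application secret is provisioned securely.
  secret: "<application-secret-placeholder>"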

Example: OpenShift console for namespace-level Secrets

The following image shows the OpenShift console for Secrets within the HeartAI Red Hat Advanced Cluster Security production environment namespace:

openshift-console-heartai-acs-secrets.png

OpenShift Services

OpenShift Services provide a cluster-internal access point to corresponding overlay network address spaces. For network routing to OpenShift Pods, Services allow consistent resolution to the virtual IP address space of one or more Pods, noting that such Pods may be transient. Access to deployment Pods through a Service is location transparent, scalable, and tolerant to Pod failure. Services may also be configured with load balancing, port-forwarding capability, and session affinity.

Service discovery

HeartAI services provide approaches for location transparent service discovery within corresponding HeartAI Red Hat OpenShift container platform instances. Further information about HeartAI service discovery may be found with the following documentation:
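Services may be resolved through cluster-internal DNS names of the form <service>.<namespace>.svc.cluster.local. As a minimal sketch, a consuming Pod could reference the HelloWorldService Service through a hypothetical environment variable such as the following; the variable name is an assumption for illustration:

env:
  # Hypothetical variable referencing the Service through its cluster-internal DNS name.
  - name: HELLO_WORLD_SERVICE_URL
    value: "http://heartai-hello-world.heartai-hello-world-prod.svc.cluster.local:80"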

Example: OpenShift declaration file for Service

The following example shows an OpenShift Service declaration file for the HeartAI HelloWorldService production environment namespace:

apiVersion: v1
kind: Service
metadata:
  name: heartai-hello-world
  namespace: heartai-hello-world-prod
spec:
  ports:
    - name: http
      port: 80
      targetPort: 14000
  selector:
    app: heartai-hello-world
  type: LoadBalancer

Example: OpenShift console for namespace-level Services

The following image shows the OpenShift console for Services within the HeartAI Red Hat Advanced Cluster Security production environment namespace:

openshift-console-heartai-acs-services.png

Example: OpenShift console for Service

The following image shows the OpenShift console for the sensor Service within the HeartAI Red Hat Advanced Cluster Security production environment namespace:

openshift-console-heartai-acs-services-sensor.png

OpenShift Routes

OpenShift Routes provide ingress points to access cluster services. A typical route refers to a cluster-internal Service that resolves to corresponding Pods. Routes may be accessed through public or private networks where appropriate.

Routes support the establishment of TLS-encrypted connections. The system OpenShift implementation natively supports the following methods of TLS connection:

  • Passthrough: Server certificates from the downstream server are passed through the edge router and presented to the requesting agent. TLS encryption occurs end-to-end, from the requesting agent to the downstream server.
  • Re-encryption: The edge router presents its server certificate to the requesting agent, and the edge router itself requests a server certificate from the downstream server. TLS encryption occurs both between the requesting agent and the edge router, and between the edge router and the downstream server.
  • Edge termination: The edge router presents its server certificate to the requesting agent, but the edge router does not request a server certificate from the downstream server. TLS encryption occurs between the requesting agent and the edge router, but not between the edge router and the downstream server.

For purposes of security, HeartAI only implements the passthrough and re-encryption methods of TLS connections.
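As a sketch of a passthrough Route, the following shows a hypothetical declaration for the HelloWorldService; the host name is a placeholder, and the declaration assumes a corresponding Service port named https that exposes the Pod's TLS port:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: heartai-hello-world
  namespace: heartai-hello-world-prod
spec:
  # Placeholder host name for illustration.
  host: hello-world.example.heartai.net
  to:
    kind: Service
    name: heartai-hello-world
  port:
    targetPort: https
  tls:
    # TLS terminates at the downstream server; the edge router does not decrypt traffic.
    termination: passthrough
    insecureEdgeTerminationPolicy: Redirect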

Example: OpenShift console for namespace-level Routes

The following image shows the OpenShift console for Routes within the HeartAI Red Hat Advanced Cluster Security production environment namespace:

openshift-console-heartai-acs-routes.png

Example: OpenShift console for Route

The following image shows the OpenShift console for the central Route within the HeartAI Red Hat Advanced Cluster Security production environment namespace:

openshift-console-heartai-acs-routes-central.png

OpenShift PersistentVolumes

OpenShift PersistentVolumes provide methods to abstract storage provision and consumption. PersistentVolume resources are units of storage that are manually provisioned by a cluster administrator or are dynamically generated by a corresponding StorageClass. PersistentVolumes may be consumed by cluster Resources, such as Pods, but also have a lifecycle that is independent of the consumer.

For HeartAI instances of Microsoft Azure Red Hat OpenShift, PersistentVolume resources are provided by Azure Managed Disks.
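As a sketch, the following shows a hypothetical PersistentVolumeClaim that would dynamically provision an Azure Managed Disk through a corresponding StorageClass; the claim name, requested capacity, and the managed-premium StorageClass name are assumptions for illustration:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: heartai-hello-world-data
  namespace: heartai-hello-world-prod
spec:
  accessModes:
    - ReadWriteOnce
  # Assumed Azure Disk-backed StorageClass name.
  storageClassName: managed-premium
  resources:
    requests:
      storage: 10Gi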

Example: OpenShift console for PersistentVolumes

The following image shows the Red Hat OpenShift web interface console for cluster-level PersistentVolumes. The PersistentVolumes web interface console provides:

  • An overview of cluster-level PersistentVolumes, including:
    • PersistentVolume names.
    • The operation status of PersistentVolumes.
    • The associated PersistentVolumeClaims.
    • The allocated storage capacities.
    • Assigned metadata labels.
    • Creation timestamps.

openshift-console-cluster-persistentvolumes.png

OpenShift PrometheusRules

OpenShift PrometheusRules provide generic and extensible alerting in response to system state or event behaviour. OpenShift alerting functionality is integrated with the Prometheus monitoring solution. Alerts are transmissible to various endpoints, such as email and SMS, and forwarding functionality exists with native adapters, such as for the Splunk data observability platform.
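
As a sketch, the following shows a hypothetical PrometheusRule declaration that raises an alert when no ready HelloWorldService Pods are observed; the alert name, threshold, and labels are assumptions for illustration:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: heartai-hello-world-alerts
  namespace: heartai-hello-world-prod
spec:
  groups:
    - name: heartai-hello-world.rules
      rules:
        - alert: HelloWorldPodsNotReady
          # Fires when no Pods in the namespace report a ready condition.
          expr: sum(kube_pod_status_ready{namespace="heartai-hello-world-prod",condition="true"}) < 1
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: No ready heartai-hello-world Pods have been observed for 5 minutes.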

Example: OpenShift console for cluster PrometheusRules

The following image shows the OpenShift console for PrometheusRules within the HeartAI OpenShift cluster:

openshift-console-cluster-monitoring-alertrules.png

Example: OpenShift console for PrometheusRule

The following image shows the OpenShift console for the PrometheusRule NodeFileSystemAlmostOutOfSpace:

openshift-console-cluster-monitoring-alertrules-nodefilesystemalmostoutofspace.png

OpenShift Monitoring

OpenShift provides real-time system monitoring and logging, with Grafana as an observability and dashboarding solution, Prometheus for monitoring systems and services, and Alertmanager for event-triggered alerting behaviour.
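
As a sketch of Alertmanager routing, the following shows a hypothetical alertmanager.yaml fragment that forwards critical alerts to an on-call email receiver; the receiver names, email addresses, and SMTP host are placeholder assumptions:

route:
  receiver: default
  routes:
    # Route critical alerts to the hypothetical on-call email receiver.
    - match:
        severity: critical
      receiver: heartai-oncall-email
receivers:
  - name: default
  - name: heartai-oncall-email
    email_configs:
      - to: "oncall@example.org"
        from: "alertmanager@example.org"
        smarthost: "smtp.example.org:587"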

Example: OpenShift Monitoring for etcd

The following image shows the OpenShift console for monitoring of the cluster etcd instances. The OpenShift console for monitoring of these resources provides information for:

  • The number of active etcd replicas.
  • The remote procedure call (RPC) rate.
  • The number of active streams.
  • The etcd database size.
  • The duration of disk synchronisation.
  • Memory usage for active etcd replicas.
  • Client traffic in.
  • Client traffic out.
  • Peer traffic in.
  • Peer traffic out.
  • Total raft proposals.
  • Total leader elections per day.

openshift-console-cluster-monitoring-etcd.png

Example: OpenShift Monitoring for cluster compute resources

The following image shows the OpenShift console for monitoring of the compute resources of an OpenShift cluster instance. The OpenShift console for monitoring of these resources provides information for:

  • Headlines: CPU utilisation
  • Headlines: CPU requests committed
  • Headlines: CPU requests limited
  • Headlines: Memory utilisation
  • Headlines: Memory requests committed
  • Headlines: Memory requests limited
  • CPU: CPU usage
  • CPU: CPU quota
  • Memory: Memory usage
  • Memory: Requests by namespace
  • Network: Current network usage
  • Network: Receive bandwidth
  • Network: Transmit bandwidth
  • Network: Average container bandwidth by namespace: Received
  • Network: Average container bandwidth by namespace: Transmitted
  • Network: Rate of received packets
  • Network: Rate of transmitted packets
  • Network: Rate of received packets dropped
  • Network: Rate of transmitted packets dropped

openshift-console-cluster-monitoring-compute.png

Example: OpenShift Monitoring for cluster networking resources

The following image shows the OpenShift console for monitoring of the networking resources of an OpenShift cluster instance. The OpenShift console for monitoring of these resources provides information for:

  • Bandwidth: Current rate of bytes received
  • Bandwidth: Current rate of bytes transmitted
  • Bandwidth: Current status
  • Bandwidth history: Receive bandwidth
  • Bandwidth history: Transmit bandwidth
  • Packets: Rate of received packets
  • Packets: Rate of transmitted packets
  • Errors: Rate of received packets dropped
  • Errors: Rate of transmitted packets dropped
  • Errors: Rate of TCP retransmits out of all sent segments
  • Errors: Rate of TCP SYN retransmits out of all retransmits

openshift-console-cluster-monitoring-networking.png

Example: OpenShift Monitoring for cluster USE Method

The following image shows the OpenShift console for monitoring of the USE Method of an OpenShift cluster instance. The OpenShift console for monitoring of these resources provides information for:

  • CPU utilisation
  • CPU saturation
  • Memory utilisation
  • Memory saturation
  • Network utilisation
  • Network saturation
  • Disk IO utilisation
  • Disk IO saturation
  • Disk space utilisation

openshift-console-cluster-monitoring-use-method.png

Example: OpenShift Monitoring for node USE Method

The following image shows the OpenShift console for monitoring of the USE Method of an individual node within an OpenShift cluster instance. The OpenShift console for monitoring of these resources provides information for:

  • CPU utilisation
  • CPU saturation
  • Memory utilisation
  • Memory saturation
  • Network utilisation
  • Network saturation
  • Disk IO utilisation
  • Disk IO saturation
  • Disk space utilisation

openshift-console-node-monitoring-use-method.png

Example: OpenShift Monitoring for cluster Prometheus resources

The following image shows the OpenShift console for monitoring of Prometheus resources within an OpenShift cluster instance. The OpenShift console for monitoring of these resources provides information for:

  • Prometheus stats
  • Discovery: Target sync
  • Discovery: Targets
  • Retrieval: Average scrape interval duration
  • Retrieval: Scrape failures
  • Retrieval: Appended samples
  • Storage: Head series
  • Storage: Head chunks
  • Query: Query rate
  • Query: Stage duration

openshift-console-cluster-monitoring-prometheus.png

OpenShift ServiceAccounts

OpenShift ServiceAccounts provide resources for abstracting service principals. ServiceAccounts may be granted cluster- and namespace-level access control with corresponding Roles and RoleBindings.
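
As a sketch, the following shows a hypothetical ServiceAccount declaration for the HelloWorldService production environment namespace; the account name is an assumption for illustration:

apiVersion: v1
kind: ServiceAccount
metadata:
  # Hypothetical service account for HelloWorldService workloads.
  name: heartai-hello-world
  namespace: heartai-hello-world-prod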

Example: OpenShift console for namespace ServiceAccounts

The following image shows the OpenShift console for ServiceAccounts within the HeartAI Red Hat Advanced Cluster Security production environment namespace:

openshift-console-heartai-acs-serviceaccounts.png

OpenShift Roles

OpenShift Roles provide specifications to configure access control at the cluster- or namespace-level.

Example: OpenShift declaration file for Role

The following example shows a Role declaration from the HeartAI HelloWorldService production environment. This particular Role declaration provides the permissions required for the HelloWorldService Deployment to locate the corresponding Pods of the service, enabling individual Pod resources to provide location-transparent service discovery to other Pod resources.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: heartai-pod-reader
  namespace: heartai-hello-world-prod
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]

Example: OpenShift console for namespace Roles

The following image shows the OpenShift console for Roles within the HeartAI Red Hat Advanced Cluster Security production environment namespace:

openshift-console-heartai-acs-roles.png

OpenShift RoleBindings

OpenShift RoleBindings allow the association of Roles with corresponding ServiceAccounts.

Example: OpenShift declaration file for RoleBinding

The following example shows a Role Binding declaration from the HeartAI HelloWorldService production environment.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: heartai-hello-world-prod
subjects:
  - kind: ServiceAccount
    name: default
roleRef:
  kind: Role
  name: heartai-pod-reader
  apiGroup: rbac.authorization.k8s.io

Example: OpenShift console for namespace RoleBindings

The following image shows the OpenShift console for RoleBindings within the HeartAI Red Hat Advanced Cluster Security production environment namespace:

openshift-console-heartai-acs-rolebindings.png

OpenShift Nodes

OpenShift Nodes provide resource abstractions for the underlying physical or virtual machines of the cluster. For the HeartAI instances of Microsoft Azure Red Hat OpenShift, Node resources are implemented by instances of Microsoft Azure Virtual Machines.

Example: OpenShift console for cluster Nodes

The following image shows the OpenShift console for cluster Nodes:

openshift-console-cluster-nodes.png

OpenShift Machines

OpenShift Machines provide an additional abstraction, specific to OpenShift-based clusters, that represents the underlying machine instances backing Node resources.

Example: OpenShift console for cluster Machines

The following image shows the OpenShift console for cluster Machines:

openshift-console-cluster-machines.png

OpenShift Operators

OpenShift Operators are extensible compositions of OpenShift resources, providing a declarative framework for deploying and managing these resources. The Red Hat Marketplace provides a variety of Operators that are readily deployable to OpenShift clusters.
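
As a sketch of deploying an Operator from the catalogue, the following shows a hypothetical Subscription declaration for the Red Hat Advanced Cluster Security Operator; the target namespace, channel, and approval values are assumptions for illustration:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhacs-operator
  # Assumed target namespace for the Operator installation.
  namespace: rhacs-operator
spec:
  # Assumed update channel for the Operator package.
  channel: stable
  name: rhacs-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic

A corresponding OperatorGroup in the target namespace would also be required for the Operator Lifecycle Manager to complete the installation.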

Example: OpenShift console for Operator Hub

The following image shows the OpenShift console for the Operator Hub, with filtering applied to show security-based Operators:

openshift-console-cluster-operatorhub-security.png

OpenShift Topology

OpenShift provides capabilities that are tailored for developer experiences. In addition to the administrator console, OpenShift provides a dedicated developer portal.

Example: OpenShift console for namespace-level topology overview

The following image shows the OpenShift developer console topology overview for the HeartAI Red Hat Advanced Cluster Security namespace:

openshift-console-heartai-acs-topology-overview.png

Example: OpenShift console for namespace-level topology graph

The following image shows the OpenShift web interface console for a namespace topology graph. This web interface is specifically available through the developer console and provides information about an application-level Deployment. The topology graph web interface provides:

  • The managing project of the Deployment.
  • An overview of an application-level Deployment, including:
    • A topological visualisation of the resource components that compose the Deployment.
    • Information about Deployment resources:

openshift-console-heartai-acs-topology-graph.png