Deploying to an OpenShift instance

The following documentation provides instructions for deploying an instance of HeartAI to Red Hat OpenShift. This approach deploys the primary environments for HeartAI production instances and is typically coordinated in partnership with a host organisation.

Red Hat OpenShift implementation

Red Hat OpenShift is an enterprise-grade implementation of Kubernetes, providing a modern and secure platform for the orchestration of container-based solutions. OpenShift provides general platform-level capabilities, including:

  • Abstractions for container-level deployments.
  • Software orchestration that is natively cloud-based and distributable.
  • Secure implementations of software-defined networking.
  • Real-time and aggregated logging, monitoring, and observability.
  • Frameworks for eventing and alerting.
  • Controls for resource management.
  • Access-level controls based around service accounts, roles, groups, and role bindings.
  • Graphical user interfaces for both administrators and developers.

HeartAI deploys OpenShift with Microsoft Azure Red Hat OpenShift (ARO), a fully-managed implementation of OpenShift within Microsoft Azure. ARO is deployed to Microsoft Azure cloud resources that are fully managed by Microsoft Azure, including operational lifecycle management, patching and updating, logging, monitoring, and security hardening.

Instance configuration

The following configuration options apply to a deployment instance of HeartAI. Deployment instances share a common platform base, and the following configuration parameters allow fundamental deployment functionality to be specified for the hosting environment.

Currently, the only available configuration option is to specify the host organisation:

ORG=<host-organisation>
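
As a usage sketch, assuming the organisation code sah that appears in hostname examples elsewhere in this documentation, this parameter determines instance-specific values such as route hostnames:

# Hypothetical example: set the host organisation code for this deployment.
ORG=sah

# The organisation code parameterises instance hostnames, for example:
#   console.apps.aro.$ORG.heartai.net
#   keycloak.apps.aro.$ORG.heartai.net
echo "https://console.apps.aro.$ORG.heartai.net"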

Network tools

HeartAI instances of OpenShift provide network tooling to assist with the deployment of the cluster and to assess network connectivity within the HeartAI environment and the network of a hosting organisation.

Network-Multitool

For general network assessment and diagnostics tooling, HeartAI OpenShift instances provide protected instances of Network-Multitool. These tools provide broad network assessment capabilities. To assist with the security of Network-Multitool deployments, these instances are deployed with a ClusterIP Service and only provide connectivity within the OpenShift cluster. In addition, Network-Multitool instances are deployed to a dedicated namespace, and only the OpenShift cluster-admin Role has access to this environment.

The following shell script shows the deployment process for instances of Network-Multitool:

NS=heartai-network-multitool
oc apply -f yaml/network-multitool-namespace.yaml
oc project $NS
oc adm policy add-scc-to-user privileged -z default -n $NS
oc apply -f yaml/network-multitool-deployment.yaml -n $NS
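
Once deployed, diagnostics may be run by executing commands within the Network-Multitool pod. A connectivity-check sketch, assuming the sah organisation code:

# Resolve and probe a route hostname from within the cluster network.
oc exec -n heartai-network-multitool deploy/network-multitool -- \
  dig +short keycloak.apps.aro.sah.heartai.net
oc exec -n heartai-network-multitool deploy/network-multitool -- \
  curl -sI https://keycloak.apps.aro.sah.heartai.net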

The following example shows a YAML declaration file for a Network-Multitool Namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: heartai-network-multitool

The following example shows a YAML declaration file for a Network-Multitool Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: network-multitool
  namespace: heartai-network-multitool
spec:
  selector:
    matchLabels:
      app: network-multitool
  replicas: 1
  template:
    metadata:
      labels:
        app: network-multitool
      annotations:
        sidecar.istio.io/inject: "true"
    spec:
      containers:
        - name: network-multitool
          image: praqma/network-multitool
          # Privileged execution supports low-level network diagnostics; this
          # container-level securityContext aligns with the privileged SCC granted above.
          securityContext:
            privileged: true
            runAsUser: 0

Configuring the default ingress certificate

The default ingress certificate may be replaced by patching the cluster with a trusted certificate authority (CA) bundle and a corresponding domain-validated certificate:

CERT=cert.crt
KEY=cert.key
CA=ca.crt

# The trusted CA bundle is provided to the cluster-wide proxy configuration.
oc create configmap custom-ca \
  --from-file=ca-bundle.crt=$CA \
  -n openshift-config

oc patch proxy/cluster \
  --type=merge \
  --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}'

# The domain-validated certificate and key are provided to the default ingress.
oc create secret tls heartai-aro-custom-ca-certs \
  --cert=$CERT \
  --key=$KEY \
  -n openshift-ingress

oc patch ingresscontroller.operator default \
--type=merge -p \
'{"spec":{"defaultCertificate": {"name": "heartai-aro-custom-ca-certs"}}}' \
-n openshift-ingress-operator
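
Once the ingress controller redeploys with the patched configuration, the served certificate may be verified; a sketch assuming the sah organisation code:

# Inspect the subject and issuer of the certificate presented by the default ingress.
echo | openssl s_client \
  -connect console.apps.aro.sah.heartai.net:443 \
  -servername console.apps.aro.sah.heartai.net 2>/dev/null \
  | openssl x509 -noout -subject -issuer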

Configuring the console hostname

The console hostname may be configured by patching the console operator. Note the double quoting, which allows the $ORG variable to expand:

oc patch consoles.operator.openshift.io cluster \
  --patch "{\"spec\":{\"route\":{\"hostname\":\"console.apps.aro.$ORG.heartai.net\"}}}" \
  --type=merge

Patching the cluster with a Red Hat pull secret

The default HeartAI installation of an OpenShift instance does not provide a Red Hat pull secret. This secret may be configured following cluster deployment by patching the cluster pull secret authentications.
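
A sketch of this process, assuming the pull secret has been downloaded from the Red Hat Hybrid Cloud Console to a local file named pull-secret.json:

# Merge the downloaded Red Hat pull secret into the cluster-wide pull secret.
oc set data secret/pull-secret -n openshift-config \
  --from-file=.dockerconfigjson=pull-secret.json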

OpenShift identity provision and access control

HeartAI instances of OpenShift integrate natively with the HeartAI implementation of Keycloak. This allows users to authenticate with OpenShift through the OpenShift OAuth 2.0 and OpenID Connect framework. With support for federated identity, users will also be able to authenticate with their Microsoft Azure Active Directory identity.

Installing Keycloak

Keycloak combines an authorisation service implemented with OAuth 2.0 and an identity service implemented with OpenID Connect, and provides advanced identity and access features such as single sign-on (SSO), multi-factor authentication (MFA), identity brokering, and federated identity. Authentication with OpenID Connect allows identity brokering through OpenID Connect and SAML, and identity federation through Kerberos and LDAP.

The following example shows a shell script to deploy a Keycloak instance:

NS=heartai-keycloak
oc apply -f ../yaml/keycloak-namespace.yaml
oc project $NS
oc apply -f ../yaml/keycloak-operator.yaml
oc apply -f ../yaml/keycloak-route.yaml

The following example shows a YAML declaration file for a Keycloak Namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: heartai-keycloak

The following example shows a YAML declaration file for a Keycloak Operator:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: heartai-keycloak
  namespace: heartai-keycloak
spec:
  targetNamespaces:
    - heartai-keycloak
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: keycloak-operator
  namespace: heartai-keycloak
spec:
  channel: alpha
  installPlanApproval: Manual
  name: keycloak-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
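
Note that the Subscription specifies installPlanApproval: Manual, so the corresponding InstallPlan must be approved before the Operator installs. The Operator does not itself create a Keycloak instance; a minimal sketch of a Keycloak custom resource, as defined by the community Keycloak Operator (the resource name is hypothetical), is:

apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  name: keycloak  # hypothetical instance name
  namespace: heartai-keycloak
spec:
  instances: 1
  externalAccess:
    enabled: false  # external access is instead provided by the Route below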

The following example shows a YAML declaration file for a Keycloak Route:

kind: Route
apiVersion: route.openshift.io/v1
metadata:
  annotations:
    haproxy.router.openshift.io/balance: source
  name: keycloak-self-managed
  namespace: heartai-keycloak
spec:
  host: keycloak.apps.aro.sah.heartai.net
  to:
    kind: Service
    name: keycloak
    weight: 100
  port:
    targetPort: keycloak
  tls:
    termination: reencrypt
    insecureEdgeTerminationPolicy: Redirect
  wildcardPolicy: None

Configuring OpenShift for Keycloak identity

The following values should be provided to Keycloak:

Valid Redirect URIs:
  https://console.apps.aro.$ORG.heartai.net/
  https://oauth-openshift.apps.aro.$ORG.heartai.net/oauth2callback/keycloak/

Create the Keycloak client secret for the OpenShift OAuth configuration:

oc -n openshift-config create secret generic keycloak-client-secret \
  --from-literal=clientSecret=$KEYCLOAK_CLIENT_SECRET

Associate the Keycloak instance as the identity provider for OpenShift. The following values should be provided to the OpenShift OAuth configuration:

spec:
  identityProviders:
    - mappingMethod: claim
      name: keycloak
      openID:
        claims:
          email:
            - email
          name:
            - name
          preferredUsername:
            - preferred_username
        clientID: $ORG-heartai-aro-prod-aue-001
        clientSecret:
          name: keycloak-client-secret
        extraScopes: []
        issuer: >-
          https://keycloak.apps.aro.$ORG.heartai.net/auth/realms/heartai_openshift
      type: OpenID
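
These values may be applied to the cluster OAuth resource, for example with a merge patch (the patch file name here is hypothetical), or edited directly with oc edit oauth cluster:

# Apply the identity provider configuration to the cluster OAuth resource.
oc patch oauth cluster --type=merge \
  --patch-file=../yaml/openshift-oauth-keycloak-patch.yaml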

Identity and access management

Further information may be found in the dedicated HeartAI identity and access management documentation.

Installing cert-manager

The following example shows a shell script to deploy a cert-manager instance:

NS=heartai-cert-manager
oc apply -f ../yaml/cert-manager-namespace.yaml
oc project $NS
helm repo add jetstack https://charts.jetstack.io
helm repo update
oc apply -f https://github.com/jetstack/cert-manager/releases/download/v1.4.0/cert-manager.crds.yaml

helm install \
  cert-manager jetstack/cert-manager \
  -n $NS \
  --version v1.4.0 \
  --set extraArgs='{--dns01-recursive-nameservers-only,--dns01-self-check-nameservers=8.8.8.8:53\,1.1.1.1:53}'
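
As a verification sketch, the cert-manager controller, cainjector, and webhook pods should reach a Running state:

# Verify that the cert-manager components are running.
oc get pods -n heartai-cert-manager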

The following example shows a YAML declaration file for a cert-manager Namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: heartai-cert-manager

Installing OpenShift Service Mesh

In addition to the SDN capabilities provided by OpenShift, HeartAI networking also implements OpenShift Service Mesh, providing advanced mechanisms for communication across services. The cloud-native service-mesh software Istio extends the OpenShift SDN with programmable and application-aware declarative network implementations. A core feature of Istio is the Envoy service proxy, which is injectable as a sidecar into virtual IP hosts of the OpenShift SDN. Istio provides general approaches for network deployments, routing, traffic management, telemetry, and security.

The management console Kiali provides capabilities for configuration, eventing, metrics, visualisation, and validation of network deployments that are implemented with Istio. Kiali allows for the display of service mesh structure by inferring traffic topology and health status. Kiali also provides native integration for the Grafana observability platform and the Jaeger distributed tracing software.

OpenShift Service Mesh composes the following technologies:

  • Istio: Modern service mesh that is natively implemented with Kubernetes networking capabilities. Supports advanced network routing and monitoring through the use of injectable sidecar proxies. Reference: https://istio.io/
  • Kiali: Management console for observability and management of Istio implementations. Reference: https://kiali.io/
  • Jaeger: End-to-end distributed tracing. First-class integration with Istio and Kiali. Reference: https://www.jaegertracing.io/

The following example shows a shell script to deploy an OpenShift Service Mesh instance:

NS=heartai-ossm
oc apply -f ../yaml/openshift-service-mesh-namespace.yaml
oc project $NS
oc apply -f ../yaml/openshift-service-mesh-operator.yaml

The following example shows a YAML declaration file for an OpenShift Service Mesh Namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: heartai-ossm

The following example shows a YAML declaration file for an OpenShift Service Mesh Operator:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: heartai-ossm
  namespace: heartai-ossm
spec: {}
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: servicemeshoperator
  namespace: heartai-ossm
spec:
  channel: stable
  installPlanApproval: Manual
  name: servicemeshoperator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: jaeger-product
  namespace: heartai-ossm
spec:
  channel: stable
  installPlanApproval: Manual
  name: jaeger-product
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kiali-ossm
  namespace: heartai-ossm
spec:
  channel: stable
  installPlanApproval: Manual
  name: kiali-ossm
  source: redhat-operators
  sourceNamespace: openshift-marketplace

Wait for the operators to install and then perform the following deployment steps.
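
Because the Subscriptions specify installPlanApproval: Manual, pending InstallPlans must be approved for installation to proceed; a sketch:

# List pending install plans within the namespace.
oc get installplan -n heartai-ossm

# Approve a pending install plan, substituting the reported name.
oc patch installplan <installplan-name> -n heartai-ossm \
  --type=merge --patch '{"spec":{"approved":true}}'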

The following example shows a shell script to deploy an OpenShift Service Mesh control plane:

oc apply -f ../yaml/openshift-service-mesh-control-plane-namespace.yaml
NS=heartai-ossm-control-plane
oc project $NS
oc apply -f ../yaml/openshift-service-mesh-control-plane-control-plane.yaml
oc apply -f ../yaml/openshift-service-mesh-control-plane-member-roll.yaml
oc create secret generic cloudflare-api-token \
  --from-literal=api-token="$(az keyvault secret show --vault-name "sah-heartai-kv-prod" --name "sah-heartai-aro-prod-aue-001-cloudflare-api-key" --query value -o tsv)" \
  -n $NS
oc apply -f ../yaml/cert-manager-issuer-lets-encrypt.yaml
oc apply -f ../yaml/cert-manager-issuer-lets-encrypt-staging.yaml
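
The Let's Encrypt issuer declaration files referenced above are not reproduced in this section. A minimal sketch of such an issuer, assuming hypothetical resource names and contact address, pairs cert-manager's ACME support with the Cloudflare DNS-01 solver and the cloudflare-api-token Secret created above:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: lets-encrypt
  namespace: heartai-ossm-control-plane
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]  # hypothetical contact address
    privateKeySecretRef:
      name: lets-encrypt-account-key  # hypothetical ACME account key Secret
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token  # created by the script above
              key: api-token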

The following example shows a YAML declaration file for an OpenShift Service Mesh control plane Namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: heartai-ossm-control-plane

The following example shows a YAML declaration file for an OpenShift Service Mesh ServiceMeshControlPlane:

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: heartai-ossm-control-plane
spec:
  version: v2.1
  tracing:
    sampling: 10000
    type: Jaeger
  policy:
    type: Istiod
  telemetry:
    type: Istiod
  addons:
    grafana:
      enabled: true
    jaeger:
      install:
        storage:
          type: Memory
    kiali:
      enabled: true
    prometheus:
      enabled: true
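
Deployment progress may be reviewed through the ServiceMeshControlPlane resource status; a sketch:

# Watch the control plane components until the resource reports READY.
oc get smcp basic -n heartai-ossm-control-plane --watch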

The following example shows a YAML declaration file for an OpenShift Service Mesh ServiceMeshMemberRoll:

apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: heartai-ossm-control-plane
spec:
  members:
    - heartai-hello-world-dev
    - heartai-hello-world-prod
    - heartai-hib-interface-dev
    - heartai-hib-interface-prod
    - heartai-gitops
    - heartai-network-multitool
    - heartai-rapidxai-dev
    - heartai-rapidxai-prod
    - heartai-svcs-dev
    - heartai-svcs-prod

Installing pgAdmin

The following example shows a shell script to deploy a pgAdmin instance:

NS=heartai-pgadmin
oc apply -f ../yaml/pgadmin-namespace.yaml
oc project $NS
oc create secret generic pgadmin-key \
  --from-literal=secret="$(az keyvault secret show --vault-name "sah-heartai-kv-prod" --name "pgadmin-key" --query value -o tsv)" \
  -n $NS
oc adm policy add-scc-to-user privileged -z default -n $NS
oc apply -f ../yaml/pgadmin-deployment.yaml

The following example shows a YAML declaration file for a pgAdmin Namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: heartai-pgadmin

The following example shows a YAML declaration file for a pgAdmin Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin
  namespace: heartai-pgadmin
spec:
  selector:
    matchLabels:
      app: pgadmin
  replicas: 1
  template:
    metadata:
      labels:
        app: pgadmin
    spec:
      containers:
        - name: pgadmin
          image: dpage/pgadmin4
          ports:
            - containerPort: 80
          env:
            - name: TZ
              value: "Australia/Adelaide"
            - name: PGADMIN_DEFAULT_EMAIL
              value: "[email protected]"
            - name: PGADMIN_DEFAULT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: pgadmin-key
                  key: secret
            - name: PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION
              value: "False"
          # The container-level securityContext aligns with the privileged SCC granted above.
          securityContext:
            privileged: true
            runAsUser: 0
---
apiVersion: v1
kind: Service
metadata:
  name: pgadmin-service-http
  namespace: heartai-pgadmin
spec:
  selector:
    app: pgadmin
  ports:
    - protocol: TCP
      name: http
      port: 80
      targetPort: 80
  type: LoadBalancer
---
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  annotations:
    kubernetes.io/tls-acme: "true"
  name: pgadmin-route
  namespace: heartai-pgadmin
spec:
  host: "pgadmin.apps.aro.$ORG.heartai.net"
  path: "/"
  to:
    kind: Service
    name: pgadmin-service-http
  port:
    targetPort: http
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect

Installing Strimzi

Review the available Strimzi operator package manifests and channels:

oc get packagemanifests | grep strimzi
oc describe packagemanifests strimzi-kafka-operator

The following example shows a shell script to deploy a Strimzi instance:

NS=heartai-strimzi
oc apply -f ../yaml/strimzi-namespace.yaml
oc project $NS
oc apply -f ../yaml/strimzi-operator.yaml

The following example shows a YAML declaration file for a Strimzi Namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: heartai-strimzi

The following example shows a YAML declaration file for a Strimzi Operator:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: heartai-strimzi
  namespace: heartai-strimzi
spec:
  targetNamespaces:
    - heartai-strimzi
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: strimzi-kafka-operator
  namespace: heartai-strimzi
spec:
  channel: strimzi-0.26.x
  installPlanApproval: Manual
  name: strimzi-kafka-operator
  source: community-operators
  sourceNamespace: openshift-marketplace

Wait for the operator to install and then perform the following deployment steps.

The following example shows a shell script to deploy a Strimzi Kafka cluster:

NS=heartai-strimzi
oc project $NS
oc apply -f ../yaml/strimzi-kafka.yaml

The following example shows a YAML declaration file for a Strimzi Kafka resource:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: strimzi-kafka
  namespace: heartai-strimzi
spec:
  kafka:
    version: 3.0.0
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      log.message.format.version: "3.0"
      inter.broker.protocol.version: "3.0"
    storage:
      type: persistent-claim
      size: 20Gi
      deleteClaim: false
      class: managed-premium
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 20Gi
      deleteClaim: false
      class: managed-premium
  entityOperator:
    topicOperator: { }
    userOperator: { }
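
After applying the declaration, the Kafka resource may be watched until it reports readiness; a sketch:

# Wait for the Kafka cluster (brokers, ZooKeeper, and entity operator) to become ready.
oc wait kafka/strimzi-kafka --for=condition=Ready --timeout=600s -n heartai-strimzi

In-cluster clients may then connect through the bootstrap Service, for example strimzi-kafka-kafka-bootstrap.heartai-strimzi.svc:9092 for the plain listener, or port 9093 for the TLS listener.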

Installing OpenShift GitOps

Within HeartAI OpenShift instances, the OpenShift GitOps Operator provides declarative approaches for GitOps lifecycle management and continuous delivery. A core component of this process is the management of cluster resources with an integrated Argo CD instance. The Argo CD framework defines a Kubernetes Application custom resource that provides functionality to synchronise with GitHub-hosted source repositories.

Through these Application resources, Argo CD monitors for updates to the HeartAI GitHub repository, and synchronisation is triggered when modifications are made to the master branch. Triggered behaviour includes applying the cluster resource declaration files to the HeartAI OpenShift instance, which coordinates the deployment of OpenShift resources to the cluster environment. These approaches allow the deployment of platform resources to occur through GitHub-managed review and deployment processes, and provide a supportive framework to optimise developer and contributor productivity and experience.
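
As an illustrative sketch of this pattern (the repository URL and path here are hypothetical, not the actual HeartAI repository layout), an Argo CD Application resource that synchronises cluster declarations from a GitHub repository might look like:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: heartai-cluster-resources  # hypothetical Application name
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/heartai.git  # hypothetical repository
    targetRevision: master
    path: deployment/yaml  # hypothetical path to cluster declarations
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated: {}  # synchronise when the master branch is updated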

The following example shows a shell script to deploy an OpenShift GitOps instance:

NS=heartai-gitops
oc apply -f ../yaml/openshift-gitops-namespace.yaml
oc project $NS
oc apply -f ../yaml/openshift-gitops-operator.yaml

The following example shows a YAML declaration file for an OpenShift GitOps Namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: heartai-gitops

The following example shows a YAML declaration file for an OpenShift GitOps Operator:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: heartai-gitops
  namespace: heartai-gitops
spec: {}
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: heartai-gitops
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: openshift-gitops-operator
  channel: stable
  installPlanApproval: Manual
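
Once the InstallPlan is approved and the Operator installs, the default Argo CD instance is created within the openshift-gitops namespace; a verification sketch:

# Review the default Argo CD instance created by the OpenShift GitOps Operator.
oc get pods -n openshift-gitops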

Installing OpenShift Logging

Red Hat OpenShift provides integration support for logging and observability with the Red Hat OpenShift Logging (RHOL) framework. RHOL deploys instances of the following software:

  • Elasticsearch: Distributed and high-performance search and analytics engine. Supports full-text and structured search. Allows indexing and search capabilities for large volumes of log and document data. Reference: https://www.elastic.co/elasticsearch/
  • Fluentd: Pluggable and scalable log and data collector. Standardises upstream and downstream data integration. Reference: https://www.fluentd.org/
  • Kibana: Robust data visualisation client application for Elasticsearch. Allows broadly customisable query functionality and corresponding visualisation capabilities. Supports operational and real-time observability. Reference: https://www.elastic.co/kibana/

Together these software components are often referred to as the EFK stack. The composition of these technologies provides powerful and extendable mechanisms for logging and observability, including:

  • Broad support for log consumption, including native support for a variety of operational and software interfaces.
  • High-performance indexing and retrieval of log data.
  • Visualisation and observability of log data and associated metrics.

The following example shows a shell script to deploy the Red Hat Elasticsearch Operator:

NS=openshift-operators-redhat
oc apply -f ../yaml/elasticsearch-namespace.yaml
oc project $NS
oc apply -f ../yaml/elasticsearch-operator.yaml

The above shell script deploys the Red Hat Elasticsearch Operator by creating a corresponding namespace and initialising the Operator.

The following example shows a YAML declaration file for an OpenShift Elasticsearch Namespace resource:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"

The following example shows a YAML declaration file for an OpenShift Elasticsearch Operator resource:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  namespace: openshift-operators-redhat
  name: openshift-operators-redhat
spec: {}
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  namespace: openshift-operators-redhat
  name: elasticsearch-operator
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: elasticsearch-operator
  channel: stable
  installPlanApproval: Manual

The following example shows a shell script to deploy the Red Hat Logging Operator:

NS=openshift-logging
oc apply -f ../yaml/openshift-logging-namespace.yaml
oc project $NS
oc apply -f ../yaml/openshift-logging-operator.yaml

The above shell script deploys the Red Hat Logging Operator by creating a corresponding namespace and initialising the Operator.

The following example shows a YAML declaration file for an OpenShift Logging Namespace resource:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"

The following example shows a YAML declaration file for an OpenShift Logging Operator resource:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  namespace: openshift-logging
  name: cluster-logging
spec:
  targetNamespaces:
    - openshift-logging
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  namespace: openshift-logging
  name: cluster-logging
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: cluster-logging
  channel: stable
  installPlanApproval: Manual

The following example shows a shell script to deploy a Red Hat Logging ClusterLogging resource instance:

NS=openshift-logging
oc project $NS
oc apply -f ../yaml/openshift-logging-clusterlogging.yaml

The above shell script deploys the Red Hat Logging ClusterLogging resource instance by applying the following declaration file. This should be applied once the Red Hat Logging Operator is in a ready state.

The following example shows a YAML declaration file for an OpenShift Logging ClusterLogging resource:

apiVersion: "logging.openshift.io/v1"
kind: ClusterLogging
metadata:
  namespace: openshift-logging
  name: instance
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    retentionPolicy:
      application:
        maxAge: 180d
      infra:
        maxAge: 180d
      audit:
        maxAge: 180d
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: "managed-premium"
        size: 200G
      resources:
        requests:
          memory: "8Gi"
      proxy:
        resources:
          limits:
            memory: 256Mi
          requests:
            memory: 256Mi
      redundancyPolicy: "SingleRedundancy"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  collection:
    logs:
      type: "fluentd"
      fluentd: {}

Following the deployment of the ClusterLogging resource, define a Kibana index pattern. Within the Kibana console, application logs are typically matched with the index pattern app-* using @timestamp as the time field.