Opportunities to support digital health

HeartAI hopes to support health system care by increasing the capabilities and capacities of the digital health ecosystem. HeartAI platform development achieves this by providing modern approaches to resource management, platform orchestration, security implementations, data integration and access, data reporting and visualisation, powerful and real-time analytics, and many opportunities to support clinical service delivery and health system management.

The following capabilities represent opportunities for HeartAI to support digital health:

Overview

HeartAI hopes to support digital health by developing a robust foundational platform that is capable of modern and best-practice approaches to digital health care enablement. This applies both to technical platform design and service implementation, and to organisational principles and capability growth. By iterative and continued development of these values, HeartAI strives to support health system care with modern cloud platform infrastructure, powerful service architectures, dynamic data and analytics capabilities, reliable security practices, rigorous health information systems, and an adaptable and progressive digital culture.

HeartAI is cloud-native and implements modern, best-practice cloud solutions. A major deployment of HeartAI to Microsoft Azure has recently been provisioned within the SA Government network and the SA Health Azure management group, representing one of the first large-scale cloud deployments within the South Australian digital health ecosystem. The dynamic capabilities of cloud computing allow resources to be provisioned or deprovisioned in response to supply and demand pressures. This allows HeartAI platform resource utilisation to be fundamentally cost-effective. HeartAI resource management is mature and capable of efficiently supporting large-scale systems, with automated and rigorous administrative and operational processes. By extending these resource capabilities with real-time logging, monitoring, and observability, HeartAI platform deployments are well-understood and reliable.

Platform security is a central consideration and is treated with corresponding rigour. HeartAI provides extensive logging and monitoring across platform components, and maintains and regularly audits these records. Platform networking implementations are structured and hardened to ensure that network communication is secure. Cryptographic methods, such as storage and network encryption, are implemented and enforced as the default security posture. HeartAI also provides best-practice identity and access management to support user authentication and authorisation methods. This ensures that users have access appropriate for their use, and all platform interaction is securely recorded. In addition, platform components are continuously scanned and assessed for vulnerabilities and compliance violations, with automated mechanisms to detect and recommend remediation approaches. HeartAI has been rigorously and independently reviewed for compliance with South Australian state policy for digital solutions in coordination with Digital Health SA, the primary digital health organisation within SA Health.

Application services provide capabilities that meet the complex requirements necessary for digital health solutions. HeartAI services are high-performance, supporting large-scale data storage and transmission, with particular support for real-time data streaming. This allows platform services to interface with medical instruments that generate high-throughput data, such as clinical observation monitors and wearable devices. Broad support for interface interoperability allows services to readily integrate with a variety of international, legacy, and proprietary data standards. HeartAI services are also natively distributable and are able to scale to meet resource utilisation needs, ensuring that services are highly available and suitable for mission-critical operations.
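To illustrate the kind of interface interoperability described above, the following Python sketch parses a simplified pipe-delimited observation record of the kind a clinical monitor might emit. The field layout and the `parse_observation` helper are hypothetical, standing in for richer standards such as HL7 v2 or FHIR that real integrations would target:

```python
from datetime import datetime, timezone

def parse_observation(record: str) -> dict:
    """Parse a simplified pipe-delimited observation record.

    The field layout (patient ID | code | value | unit | epoch seconds)
    is hypothetical, standing in for richer clinical data standards.
    """
    patient_id, code, value, unit, ts = record.split("|")
    return {
        "patient_id": patient_id,
        "code": code,
        "value": float(value),
        "unit": unit,
        "time": datetime.fromtimestamp(int(ts), tz=timezone.utc).isoformat(),
    }

# A hypothetical heart-rate observation for patient MRN0001.
print(parse_observation("MRN0001|HR|92|bpm|1700000000"))
```

In a streaming deployment, records like this would arrive continuously and be normalised into a common internal representation before storage or analysis.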

HeartAI analytical approaches are developed to be suitable for the clinical environment. Often, conventional analytical methodologies, such as machine learning and pattern recognition techniques, perform poorly in the clinical context because of nuanced issues of causality and statistical bias. This follows from factors such as incomplete and non-observed data, complex physiological mechanisms, and potentially non-recorded indications of why and when data was collected. To overcome these challenges, HeartAI specialises in modern analytical approaches, including probabilistic programming, causal modelling, and Bayesian statistics. The implementation of technologies such as Stan provides sophisticated and powerful methodologies to analyse health system data.
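As a minimal illustration of the Bayesian reasoning described above (not a HeartAI or Stan implementation), the following Python sketch applies a conjugate Beta-Binomial update to estimate an event rate; the prior and the observed counts are invented for the example:

```python
# Conjugate Beta-Binomial update: a closed-form illustration of Bayesian
# inference about an event rate, in place of a full Stan model.

def beta_binomial_posterior(prior_a: float, prior_b: float,
                            events: int, trials: int) -> tuple[float, float]:
    """Return the Beta(a, b) posterior after observing `events` in `trials`."""
    return prior_a + events, prior_b + (trials - events)

def beta_mean(a: float, b: float) -> float:
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Weakly informative Beta(1, 1) prior; 12 events in 100 encounters (invented).
a, b = beta_binomial_posterior(1.0, 1.0, events=12, trials=100)
print(f"posterior mean event rate: {beta_mean(a, b):.3f}")
```

A full probabilistic-programming treatment would additionally model the causal structure and missingness mechanisms discussed above, which is where tools such as Stan become valuable.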

Platform development is supported by environments and tooling to encourage productive developer experiences and streamlined platform deployment. HeartAI provides preconfigured and readily deployable development environments and extensive documentation to nurture developer growth. Development processes are further supported by tooling such as software version control and source code management, including integrated review and consolidation processes. Platform deployment is automated with these approaches, allowing operational and service deployments to follow rapidly from developer contributions, a process pattern often referred to as GitOps. HeartAI encourages organisational and personal growth, including ongoing development of capability.

HeartAI is implemented in practice with major clinical innovation projects, targeted to improve clinical service delivery and digital health system capability and capacity. The RAPIDx AI project provides a real-time artificial intelligence service to support patients presenting with chest pain to South Australian emergency departments. The HAVEN SA project supports clinical care for vulnerable patients by implementing real-time and continuous unit- and patient-level monitoring, particularly to detect patients at-risk of clinical deterioration. The PHOCQUS project provides an established governance framework for large-scale health data integration and access.

It is hoped that HeartAI can support digital health through these principles and practices. Through a platform development process that encourages growth at all levels, HeartAI aims to increase the capabilities and capacities of the health system, and translate these to best-practice innovations for the clinical environment. The following sections of this documentation explore these opportunities and describe how HeartAI can provide an important contribution to health system care.

Cost-effective resource use

HeartAI resource use is cloud-native and primarily managed at two levels: (i) Within the Microsoft Azure environment, and (ii) Within the Red Hat OpenShift environment, which itself manages and abstracts resources from Azure.

Within the Red Hat OpenShift environment, resources are allocated and shared from an underlying resource pool. This allows individual system components to optimise their use of available resources while mitigating potential resource wastage. In the event that there are insufficient resources to support resource demand, additional resources may be provisioned from Microsoft Azure, with automation achievable through event-driven behaviour. Similarly, periods of excess resource supply may trigger resource deprovisioning. This optimises HeartAI resource use on the basis of system resource supply and demand pressures, allowing resource use to be continually managed. By effective use of these mechanisms, HeartAI achieves high efficiency with resource use, and is often a significantly more cost-effective digital health solution.
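The supply-and-demand behaviour described above can be sketched as a simple scaling decision. The thresholds below are illustrative; production autoscalers (such as the OpenShift machine autoscaler) apply richer policies with cooldown periods and multiple signals:

```python
def scaling_decision(cpu_utilisation: float,
                     scale_up_at: float = 0.80,
                     scale_down_at: float = 0.30) -> str:
    """Return a provisioning action for an observed average CPU utilisation.

    Thresholds are illustrative only.
    """
    if cpu_utilisation > scale_up_at:
        return "provision"      # demand pressure: add capacity
    if cpu_utilisation < scale_down_at:
        return "deprovision"    # excess supply: release capacity
    return "hold"

for load in (0.92, 0.55, 0.12):
    print(load, scaling_decision(load))
```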

Example: Microsoft Azure virtual machines

The following example shows virtual machines that have been provisioned from within the Microsoft Azure environment. These resources have been provisioned specifically for use within the corresponding Red Hat OpenShift environment. The quantity and configuration of these resources may be modified manually or automatically in response to system supply and demand.

heartai-azure-monitor-openshift-nodes.png

Example: Red Hat OpenShift cluster Node resources

The following example shows the above virtual machines from within the Red Hat OpenShift environment. Red Hat OpenShift may itself communicate with the Microsoft Azure resource API to provision or deprovision resources in response to resource utilisation.

openshift-console-cluster-nodes.png

Example: Red Hat OpenShift namespace-level Pod resources

The following image shows how platform components within the Red Hat OpenShift environment are abstracted from the underlying virtual machine infrastructure. Each system component shares resources from the available virtual machines to optimise overall resource use.

openshift-console-cluster-nodes.png

Monitored infrastructure costing

HeartAI cloud resources are available on-demand or as reserved instances that are provisioned in advance for a specified period of time, often at a significantly reduced price. HeartAI infrastructure is designed to use cloud resources in response to system demand and supply pressures, including provisioning and deprovisioning of resources in response to these pressures. This is further supported by real-time monitoring of resource use and costing. Azure cloud resources are often priced by seconds or milliseconds of resource use, allowing fine-tuned resource use optimisation.
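A rough sketch of the on-demand versus reserved trade-off, with hypothetical prices and usage figures (real Azure rates, discounts, and terms vary by resource):

```python
def on_demand_cost(rate_per_hour: float, seconds_used: int) -> float:
    """Cost under per-second billing at an hourly list rate."""
    return rate_per_hour * seconds_used / 3600

def reserved_cost(rate_per_hour: float, hours_reserved: int,
                  discount: float) -> float:
    """Cost of reserving capacity in advance at a discounted rate."""
    return rate_per_hour * hours_reserved * (1 - discount)

# Hypothetical figures: a $1.00/hour instance used 10 hours/day for a
# 30-day month, versus reserving the full month at a 40% discount.
rate = 1.00
on_demand = on_demand_cost(rate, seconds_used=10 * 3600 * 30)
reserved = reserved_cost(rate, hours_reserved=24 * 30, discount=0.40)
print(f"on-demand: {on_demand:.2f}, reserved: {reserved:.2f}")
```

Under these invented figures, intermittent workloads favour per-second on-demand billing, while steady workloads favour reservation; real-time cost monitoring is what makes that trade-off continuously visible.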

Example: Azure Portal for subscription

The following image shows the Azure Portal web interface for an Azure subscription. This interface displays:

  • Information about the subscription, including the subscription ID and resource location directory.
  • Costing by resource, including aggregated costing reports and forecasted costing.
  • Summary details about the subscription.

heartai-azure-subscription.png

Dynamic infrastructure management

HeartAI resource usage is designed to be responsive to changes in platform state, including automated provisioning and deprovisioning of platform resources and capabilities for self-healing and recovery management. This is further supported by infrastructure management that is dynamic and declarative. Resources, including primary infrastructural components, are representable by declarable software constructs which may be used to manage the actual platform state. For example, Microsoft Azure cloud resources are manageable through a synchronisation process with the Azure Resource Manager API. This supports HeartAI operations with infrastructure change that is consistent, reviewable, and reproducible.
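The synchronisation of declared and actual state can be sketched as a plan step, in the spirit of declarative tooling such as Terraform; the resource names and configurations below are invented:

```python
def plan(desired: dict, actual: dict) -> dict:
    """Compute the actions needed to reconcile actual state with desired state.

    A toy analogue of the plan step in declarative infrastructure tooling:
    resources are compared by name and configuration.
    """
    create = sorted(set(desired) - set(actual))
    destroy = sorted(set(actual) - set(desired))
    update = sorted(name for name in set(desired) & set(actual)
                    if desired[name] != actual[name])
    return {"create": create, "update": update, "destroy": destroy}

# Hypothetical declared state versus hypothetical live state.
desired = {"key_vault": {"sku": "standard"}, "storage": {"tier": "hot"}}
actual = {"key_vault": {"sku": "premium"}, "old_vm": {"size": "D2s"}}
print(plan(desired, actual))
```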

Example: Terraform implementation

The HeartAI implementation of Microsoft Azure is managed with the Terraform declarative infrastructure-as-code software framework. Terraform allows for the declaration of system components using configuration files specified with the HashiCorp Configuration Language (HCL). Collections of these configuration files provide a declarative representation of HeartAI infrastructure-level components, which are synchronisable with the state of Microsoft Azure environments through the Azure Resource Manager API. Infrastructure deployment with Terraform supports HeartAI system infrastructure management in a way that is consistent, maintainable, scalable, and reproducible.

Example: HashiCorp Terraform declaration for Microsoft Azure Key Vault

The following example shows Terraform declarations to configure and deploy an instance of Azure Key Vault. This implementation coordinates instances of the following Azure components:

Azure component | Functionality for Azure Key Vault
Azure Key Vault | Configure and deploy an instance of Azure Key Vault
resource "azurerm_key_vault" "keyvault" {
  name = "sah-heartai-kv-prod"
  resource_group_name = azurerm_resource_group.rg_keyvault.name
  location = azurerm_resource_group.rg_keyvault.location
  enabled_for_disk_encryption = true
  tenant_id = data.azurerm_client_config.client.tenant_id
  soft_delete_retention_days = var.kv_prod_aue_001_soft_delete_retention_days
  purge_protection_enabled = true

  sku_name = "standard"

  network_acls {
    default_action = "Deny"
    bypass = "AzureServices"
  }

  tags = {
    "Application Name" = var.heartai-environment.application-name
    "Application Owner" = var.heartai-environment.application-owner
    "Environment" = var.heartai-environment.environment
    "Division Department" = var.heartai-environment.division-department
    "Cost Centre" = var.heartai-environment.cost-centre
  }
}

Enterprise-grade platform orchestration

Maturity with platform orchestration is fundamental for operating a system in enterprise-ready, large-scale, and mission-critical contexts. To meet these demands, HeartAI orchestrates with the best-in-class Red Hat OpenShift orchestration platform.

Red Hat OpenShift is an enterprise-grade implementation of Kubernetes, providing a modern and secure platform for the orchestration of container-based solutions. OpenShift provides general platform-level capabilities, including:

  • Abstractions for container-level deployments.
  • Software orchestration that is natively cloud-based and distributable.
  • Secure implementations of software-defined networking.
  • Real-time and aggregated logging, monitoring, and observability.
  • Frameworks for eventing and alerting.
  • Controls for resource management.
  • Access-level controls based around service accounts, roles, groups, and role bindings.
  • Graphical user interfaces for both administrators and developers.

HeartAI deploys OpenShift with Microsoft Azure Red Hat OpenShift (ARO), a fully-managed implementation of OpenShift within Microsoft Azure. ARO is deployed to instances of Microsoft Azure cloud resources that are fully managed, including operational lifecycle management, patching and updating, logging, monitoring, and security hardening.

Example: OpenShift administrator console

OpenShift provides a comprehensive graphical user interface console for both administrator and developer environments. The following image shows the OpenShift console overview, describing various cluster components, including:

  • General cluster information.
  • Cluster status and availability.
  • Cluster resource deployment metrics.
  • Real-time monitoring of resource usage, events, and alerting.

openshift-console-cluster-overview.png

Example: Red Hat OpenShift developer console

OpenShift provides capabilities that are tailored for developer experiences. In addition to the administrator console, OpenShift provides a dedicated developer portal.

The following image shows the OpenShift web interface console for a namespace topology graph. This web interface is specifically available through the developer console and provides information about an application-level Deployment. The topology graph web interface provides:

  • The managing project of the Deployment.
  • An overview of an application-level Deployment, including:
    • A topological visualisation of the resource components that compose the Deployment.
    • Information about Deployment resources:

openshift-console-heartai-acs-topology-graph.png

Example: Red Hat OpenShift console for Pod resource

The following image shows the Red Hat OpenShift web interface console for a Pod resource instance. The Pod web interface console provides information and functionality to support OpenShift hosting of Pod resources, including:

  • The managing Namespace of the Pod resource.
  • The Pod name.
  • Monitoring metrics of the Pod, including:
    • Memory utilisation.
    • CPU utilisation.
    • Filesystem utilisation.
    • Network inbound bandwidth.
    • Network outbound bandwidth.
  • Assigned labels of the Pod.
  • Pod health status.
  • Pod virtual IP address assignment.
  • The hosting Node of the Pod.
  • The Pod creation timestamp.
  • The owning resource.
  • Pod-hosted containers.
  • Pod-hosted volumes.
  • The event history of the Pod.

openshift-console-heartai-acs-pods-sensor.png

Example: Red Hat OpenShift console for PersistentVolumes resources

The following image shows the Red Hat OpenShift web interface console for cluster-level PersistentVolumes. The PersistentVolumes web interface console provides:

  • An overview of cluster-level PersistentVolumes, including:
    • PersistentVolume names.
    • The operation status of PersistentVolumes.
    • The associated PersistentVolumeClaims.
    • The allocated storage capacities.
    • Assigned metadata labels.
    • Creation timestamps.

openshift-console-cluster-persistentvolumes.png

Real-time platform monitoring and observability

HeartAI platform design encourages relatively small and well-defined modular components that compose together to create a highly dynamic and flexible system. To ensure optimal platform functionality, and to detect any abnormal system behaviour, it is imperative that platform components are continuously monitored and assessed. HeartAI implements modern and cloud-native approaches for platform monitoring and observability, providing support for:

  • The continuous collection, aggregation, and processing of monitoring and telemetry metrics across all platform components, including:
    • Cloud infrastructure resources.
    • Orchestrated operational components.
    • Networking devices and applications.
    • Identity and authorisation services.
    • Application-level and service-level deployments.
    • Analytical services.
  • Real-time reporting and visualisation of metrics data, including scalable dashboard technologies and reporting frameworks.
  • Event-driven system behaviour in response to changing system state, such as the provisioning and deprovisioning of platform resources.
  • Alerting and system recovery mechanisms in situations of abnormal system activity. For example, HeartAI administrators may be digitally contacted upon detection of excessive system resource utilisation.
  • Intelligent pattern and threat detection, particularly in response to suspicious system behaviour.

These capabilities are crucial for HeartAI operations, particularly in clinically sensitive and mission-critical environments. Maturity with platform monitoring and observability ensures that HeartAI is stable, reliable, consistent, and responsive to abnormal system behaviour.
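A minimal sketch of the abnormal-behaviour detection described above, flagging metric samples that deviate from a baseline by more than three standard deviations; production alerting instead evaluates rules over time-series stores such as Prometheus, and the data here is invented:

```python
import statistics

def abnormal_samples(history: list[float], recent: list[float],
                     sigmas: float = 3.0) -> list[float]:
    """Flag recent samples deviating from the baseline by > `sigmas` std devs.

    A simplified stand-in for event-driven alerting over monitoring metrics.
    """
    mean = statistics.fmean(history)
    sd = statistics.stdev(history)
    return [x for x in recent if abs(x - mean) > sigmas * sd]

# Hypothetical baseline CPU readings, then three new samples.
baseline = [40, 42, 41, 39, 43, 40, 41, 42]
print(abnormal_samples(baseline, [41, 44, 95]))
```

In practice, a flagged sample would trigger the alerting and recovery mechanisms described above, such as digitally contacting HeartAI administrators.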

Example: Azure Insights UI for OpenShift cluster

The following image shows the Azure Insights web interface for cluster monitoring and logging for a HeartAI instance of Red Hat OpenShift. This interface provides an overview of the OpenShift cluster, with information describing the current cluster resource utilisation. The Azure Insights cluster web interface provides:

  • An overview of OpenShift cluster resource utilisation, including:
    • Node CPU utilisation.
    • Node memory utilisation.
    • Node count.
    • Active pod count.

heartai-azure-monitor-openshift-cluster.png

Example: Azure Insights UI for OpenShift containers

The following image shows the Azure Insights web interface for container monitoring and logging for a HeartAI instance of Red Hat OpenShift. This interface provides an overview of OpenShift cluster containers, describing the status and resource utilisation of cluster containers. The Azure Insights containers web interface provides:

  • A tabled report describing OpenShift cluster containers, including:
    • Container names.
    • Container health status.
    • Container CPU utilisation.
    • Pod assignments.
    • Node assignments.
    • Container deployment restart count.
    • Container uptime.

heartai-azure-monitor-openshift-containers.png

Example: Azure Sentinel

Log Analytics workspaces are aggregated together with Azure Sentinel, providing the functionality of an integrated security information and event management (SIEM) platform.

Azure Sentinel provides:

  • Real-time collection of Azure resource event data.
  • Event-driven alerting and pattern detection.
  • Detection of abnormal or suspicious event behaviour.
  • Visualisations of events and alerts.
  • Analysis of anomalous activity.
  • Geolocation detection for event behaviour patterns.

heartai-azure-sentinel.png

Example: Grafana implementation

Grafana is a real-time observability solution, providing various adaptors to interface with data and metrics providers, and allowing this data to be processed and visualised with a large selection of dashboard functionality. HeartAI instances of Red Hat OpenShift are natively integrated with Grafana, and are further supported by Prometheus for monitoring systems and services, and Alertmanager for event-triggered system behaviour.

Example: Grafana monitoring for OpenShift cluster networking

The following example shows Grafana monitoring for cluster-level networking within a HeartAI instance of Red Hat OpenShift. The following networking metrics are collected and displayed:

  • Bandwidth: Current rate of bytes received
  • Bandwidth: Current rate of bytes transmitted
  • Bandwidth: Current status
  • Bandwidth history: Receive bandwidth
  • Bandwidth history: Transmit bandwidth
  • Packets: Rate of received packets
  • Packets: Rate of transmitted packets
  • Errors: Rate of received packets dropped
  • Errors: Rate of transmitted packets dropped
  • Errors: Rate of TCP retransmits out of all sent segments
  • Errors: Rate of TCP SYN retransmits out of all retransmits

grafana-kubernetes-networking-cluster.png
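Bandwidth panels such as these are typically derived from monotonic byte counters sampled at each scrape; a simplified sketch of that rate calculation (the sample values are invented):

```python
def counter_rate(t0: float, v0: float, t1: float, v1: float) -> float:
    """Per-second rate between two samples of a monotonic counter.

    A simplified analogue of how time-series systems such as Prometheus
    derive bandwidth from raw byte counters (ignoring counter resets).
    """
    if t1 <= t0:
        raise ValueError("samples must be ordered in time")
    return (v1 - v0) / (t1 - t0)

# Hypothetical: 30 MB received over a 15-second scrape interval.
print(counter_rate(0.0, 0.0, 15.0, 30_000_000))
```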

Example: Grafana monitoring for OpenShift cluster USE Method

The following example shows Grafana monitoring with the cluster-level Utilisation Saturation and Errors (USE) method within a HeartAI instance of Red Hat OpenShift. A variety of general resource metrics are collected and displayed:

  • CPU utilisation
  • CPU saturation
  • Memory utilisation
  • Memory saturation
  • Network utilisation
  • Network saturation
  • Disk IO utilisation
  • Disk IO saturation
  • Disk space utilisation

grafana-use-method-cluster.png

Example: Grafana monitoring for OpenShift cluster etcd key-value store

The following example shows Grafana monitoring for the cluster-level integrated etcd key-value store within a HeartAI instance of Red Hat OpenShift. The following etcd metrics are collected and displayed:

  • RPC rate
  • Active streams
  • DB size
  • Disk sync duration
  • Memory
  • Client traffic in
  • Client traffic out
  • Peer traffic in
  • Peer traffic out
  • Raft proposals
  • Total leader elections per day

grafana-etcd.png

Example: pgAdmin UI for PostgreSQL data server

The following image shows the pgAdmin web interface for administration and development of a PostgreSQL data server instance. The pgAdmin web interface provides functionality to administer and develop with PostgreSQL instances, including:

  • An overview of pgAdmin-interfaced PostgreSQL data server instances, including:
    • Corresponding PostgreSQL databases.
    • Monitoring metrics and visualisations for:
      • Active database sessions.
      • Database transactions per second.
      • Database tuples in.
      • Database tuples out.
      • Database block I/O.
    • Server activity reporting.

pgadmin-monitoring.png

Real-time platform logging and observability

HeartAI platform orchestration allows the decomposition of complex software into relatively small and transportable components. Managing these components at scale can be challenging because of the breadth and diversity of these components, with potentially thousands of application-level containers, associated infrastructural and operational constructs, and orchestration mechanisms working together to achieve system functionality. To ensure system consistency and correctness, HeartAI collects and acts upon comprehensive integrated logging of these platform components, and provides capabilities to assess and respond to these in real-time. HeartAI real-time platform logging and observability is crucial for:

  • Timely observability of platform components, including infrastructural, operational, and application-level components.
  • Logging capabilities, particularly supporting analysis and visualisation of log data.
  • Audit capabilities to assess platform activity and to detect unusual logging patterns. This is also fundamental for meeting platform compliance obligations.
  • Observability solutions to analyse and visualise log data, particularly supporting real-time logging activity and assessments of expected platform health.

Example: Red Hat OpenShift Logging implementation

Red Hat OpenShift provides integration support for logging and observability with the Red Hat OpenShift Logging (RHOL) framework. RHOL deploys instances of the following software:

RHOL framework component | Description | Reference
Elasticsearch | Distributed and high-performance search and analytics engine. Supports full-text and structured search. Allows indexing and search capabilities for large volumes of log and document data. | https://www.elastic.co/elasticsearch/
Fluentd | Pluggable and scalable log and data collector. Standardises upstream and downstream data integration. | https://www.fluentd.org/
Kibana | Robust data visualisation client application for Elasticsearch. Allows broadly customisable query functionality and corresponding visualisation capabilities. Supports operational and real-time observability. | https://www.elastic.co/kibana/

Together these software components are often referred to as the EFK stack. The composition of these technologies provides powerful and extendable mechanisms for logging and observability, including:

  • Broad support for log consumption, including native support for a variety of operational and software interfaces.
  • High-performance indexing and retrieval of log data.
  • Visualisation and observability of log data and associated metrics.
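As a minimal analogue of the aggregations the EFK stack supports (for example, log counts per namespace, as visualised later in Kibana), the following Python sketch groups log records by their namespace field; the record layout and values are simplified and invented:

```python
from collections import Counter

def log_counts_by_namespace(logs: list[dict]) -> Counter:
    """Aggregate log records by their Kubernetes namespace field.

    A toy stand-in for the terms aggregations Elasticsearch performs
    over indexed log documents.
    """
    return Counter(entry["kubernetes_namespace"] for entry in logs)

logs = [
    {"kubernetes_namespace": "heartai-svc", "message": "request served"},
    {"kubernetes_namespace": "heartai-svc", "message": "request served"},
    {"kubernetes_namespace": "openshift-logging", "message": "flush ok"},
]
print(log_counts_by_namespace(logs))
```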

Example: Kibana UI for log discovery

The following image shows the Kibana web interface for log discovery, providing log aggregation and observability approaches for available log data. The Kibana log discovery web interface contains a variety of functionalities for querying, processing, and visualising log data, including:

  • Real-time collection and analysis of log data from backing Elasticsearch instances.
  • In-built support for log data querying and processing, allowing the creation of log data reporting and visualisation pipelines.
  • Native support for Red Hat OpenShift instances, with the following default log fields specified:
    • Kubernetes namespace name.
    • Kubernetes namespace ID.
    • Kubernetes pod name.
    • Kubernetes pod hostname.
    • Kubernetes container name.
    • Kubernetes container ID.
    • Log message ID.
    • Log message.
    • Log timestamp.
    • Received timestamp.
  • The ability to save the visualisation as a template, with support to export and import to other Kibana instances.

heartai-kibana-discover.png

Example: Kibana UI for namespace-level log dashboard

The following image shows the Kibana web interface for log data dashboarding, providing the ability to compose several log data visualisations into a comprehensive overview dashboard. The Kibana log data dashboarding web interface supports:

  • Real-time collection and analysis of log data from backing Elasticsearch instances.
  • Composition of several log data visualisations onto a dashboard plane.
  • The ability to save the visualisation as a template, with support to export and import to other Kibana instances.

heartai-kibana-dashboard.png

Example: Kibana UI for cluster-level log visualisation

The following image shows the Kibana web interface for log data visualisation for a HeartAI instance of a Red Hat OpenShift cluster, providing log aggregation and visualisation approaches for corresponding log data. The Kibana log data visualisation web interface contains a variety of functionalities for querying, processing, and visualising log data, including:

  • Real-time collection and analysis of log data from backing Elasticsearch instances.
  • In-built support for log data querying and processing, allowing the creation of log data visualisation pipelines.
  • A visualisation of log data from a HeartAI instance of Red Hat OpenShift:
    • Log count per cluster namespace.
  • The ability to save the visualisation as a template, with support to export and import to other Kibana instances.

heartai-kibana-visualize-cluster-namespace-log-counts.png

Real-time vulnerability and compliance management

By extending foundational platform capabilities, HeartAI supports real-time monitoring of Red Hat OpenShift instances with technologies such as Red Hat Advanced Cluster Security. This allows continuous scanning of platform containers with automated detection of platform vulnerabilities, compliance violations, security posture defects, and other issues of concern. These features are critical for ensuring that HeartAI environments are well-understood and secure, providing such benefits as:

  • Real-time and continuous scanning of vulnerability and compliance for platform containers.
  • Comparative assessments to well-established compliance standards.
  • Metrics and visualisations of risk and policy violations.
  • Suggested actions for platform remediation.

Example: Red Hat Advanced Cluster Security implementation

Red Hat Advanced Cluster Security (RHACS) is a Kubernetes-native security platform that provides best-in-class security integrations and solutions for container-based environments. RHACS security capabilities ensure that Kubernetes-based infrastructure is continuously protected and secure. HeartAI instances of Red Hat OpenShift are natively integrated with RHACS.

RHACS collects, monitors, and evaluates system-level events including:

  • Process execution.
  • Network connections and traffic.
  • Access control and privilege escalation.

In combination with behavioural pattern baselining, this allows for the detection of anomalous activity indicative of potentially malicious intent, such as active malware, resource hijacking, unauthorised access control, and system intrusions.

Environments protected by RHACS are continuously scanned with reference to best-practice compliance frameworks such as the Center for Internet Security (CIS) benchmarks and the Health Insurance Portability and Accountability Act of 1996 (HIPAA), with these and other compliance standards integrated with RHACS by default.

RHACS supports DevSecOps by providing integrated workflows for CI/CD, including policy validation at deploy-time and runtime to restrict high-risk workloads from being deployed. Shifting security left allows vulnerable and misconfigured images to be remediated with real-time feedback and alerts.
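A toy sketch of deploy-time policy validation, in the spirit of (but not implementing) RHACS policy checks; the two rules and the deployment structure below are invented for illustration:

```python
def policy_violations(deployment: dict) -> list[str]:
    """Evaluate a container spec against simple deploy-time policies.

    The two rules (no privileged containers, no floating image tags) are
    simplified stand-ins for the policy checks a real admission workflow
    would enforce.
    """
    violations = []
    for container in deployment["containers"]:
        if container.get("privileged"):
            violations.append(f"{container['name']}: privileged container")
        if container["image"].endswith(":latest"):
            violations.append(f"{container['name']}: floating 'latest' tag")
    return violations

# Hypothetical deployment spec with one compliant and one risky container.
deployment = {"containers": [
    {"name": "api", "image": "registry.example/api:latest", "privileged": True},
    {"name": "sidecar", "image": "registry.example/envoy:v1.20"},
]}
print(policy_violations(deployment))
```

In a CI/CD pipeline, a non-empty violation list would fail the deployment, which is the "restrict high-risk workloads" behaviour described above.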

Example: RHACS UI overview

The following image shows the RHACS overview web interface. This section of RHACS displays an overview of security, compliance, and resource management for Kubernetes-based clusters and resources. The overview web interface provides visualisations for:

  • A summary of compliance violations and risk severity in relation to rigorous compliance standards.
  • Compliance violations and risk severity corresponding to cluster instances.
  • Deployment prioritisation by risk severity.
  • Active violations over time.
  • Abnormal activity detections.
  • Continuous assessment of DevOps best practices.
  • Container compliance assessment against the Docker CIS compliance benchmarks.
  • Risk severity for Kubernetes events.
  • Network tooling risk evaluation.
  • Assessment of cluster privileges and permissions assignment.
  • Review of security best practices.
  • Risk severity of cluster-level modifications.
  • Analysis of vulnerability management.

rhacs-vulnerability-management.png

Example: RHACS UI vulnerability management

The following image shows the RHACS vulnerability management web interface. The vulnerability management section of RHACS identifies vulnerabilities for Kubernetes-based clusters and associated container images. The vulnerability management web interface provides:

  • Visualisations of deployment risk by critical vulnerabilities and exposures.
  • Cluster-level reporting of container image risk.
  • Insights into the most frequently violated policies.
  • Recently detected vulnerabilities.
  • Deployment reporting for severe policy violations.
  • Reporting of orchestrator and Istio vulnerabilities.
  • Cluster vulnerabilities by frequency.

rhacs-vulnerability-management.png

Example: RHACS UI violations

The following image shows the RHACS violations web interface. The violations section of RHACS reports insights into Kubernetes-based resources across corresponding clusters. Violations are reported at the resource level, and information is described relating to resource-type, associated policy violation, policy enforcement, risk severity, policy category, resource lifecycle, and time of violation. The violations web interface provides:

  • A tabled report of cluster violations, including:
    • The resource entities where the violation exists.
    • The type of resource entities.
    • The compliance policy the entities are breaching.
    • Information about the enforcement status of the corresponding compliance policies.
    • The risk severity of the violation.
    • The risk category of the violation.
    • The cluster lifecycle that is associated with the resource entity.
    • The detection time of the compliance violation.

rhacs-violations.png

Robust networking practices

HeartAI network management leverages modern cloud and platform networking capabilities to ensure that platform networks are stable, secure, and manageable at scale. HeartAI platform deployments typically occur as private networks within existing organisational network structures, and this approach of private network extension greatly minimises the attack-surface exposure of HeartAI networks. These networks are often themselves decomposed, with internal private subnetworks and overlay networks allocated for corresponding use by operational or application-level deployments. Hardening network security even for traffic internal to HeartAI networking environments allows HeartAI to operate with a zero-trust security model. These capabilities are further extended by declarative management of networking resources and real-time network monitoring and pattern detection.

HeartAI networking solutions provide:

  • Secure and hardened networking constructs as a fundamental design principle.
  • Minimal network exposure models, particularly by deploying through private network extension.
  • Declarative and dynamic management of platform networking resources.
  • Compartmentalised platform network deployments.
  • Real-time logging and monitoring capabilities.
  • Network resource management at scale.
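The minimal-exposure model can be illustrated with a small deny-by-default sketch, in which only explicitly allocated private subnetworks are permitted to communicate. The subnet values are hypothetical placeholders; real enforcement occurs at the cloud and cluster networking layers:

```python
# Illustrative deny-by-default admission check in the spirit of the
# minimal network exposure model. Subnets are hypothetical examples.
import ipaddress

# Only explicitly allocated private subnetworks may communicate.
ALLOWED_SUBNETS = [
    ipaddress.ip_network("10.20.0.0/16"),   # hypothetical platform subnet
    ipaddress.ip_network("10.21.4.0/24"),   # hypothetical service overlay
]

def is_permitted(source_ip: str) -> bool:
    """Deny by default: permit only traffic from allocated subnets."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_SUBNETS)

print(is_permitted("10.21.4.17"))   # inside the overlay subnet -> True
print(is_permitted("203.0.113.5"))  # public address, denied -> False
```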

Example: Azure network architecture

The following figure shows a structural overview of Microsoft Azure cloud resources within a HeartAI production environment instance. This figure represents:

  • A corresponding Azure vWAN hub, including:
    • An Azure ExpressRoute as an example of external network connectivity.
    • An Azure Virtual WAN instance.
    • Network peering between a HeartAI Azure Virtual Network instance and a corresponding Azure vWAN hub.
  • A HeartAI Azure Virtual Network instance, with the following contained resources:
    • Azure Red Hat OpenShift Master nodes.
    • Azure Red Hat OpenShift Worker nodes.
    • Azure private endpoints, with internal and private network connectivity to Azure cloud services.
  • Azure cloud services, including:

heartai-azure-network-architecture.svg

Example: OpenShift service endpoint architecture

heartai-proxied-service-endpoint-network-architecture.svg

Example: OpenShift client endpoint architecture

heartai-proxied-client-endpoint-network-architecture.svg

Best-practice identity and access management

Robust identity and access management is crucial for ensuring users and service principals have appropriate access to platform components, and that this access is subject to non-repudiation constraints. HeartAI identity management uses well-established and best-practice approaches, including integrated support for identity principal authentication and authorisation. Together, these capabilities ensure that platform access is allocated with only the minimum permissions necessary to meet corresponding use requirements.

HeartAI identity and access management provides:

Example: Keycloak implementation

Keycloak integrates an authorisation service implemented with OAuth 2.0, an identity service implemented with OpenID Connect, and provides advanced identity and access features such as single sign-on (SSO), multi-factor authentication (MFA), identity brokering, and federated identity. Authentication with OpenID Connect allows identity brokering through OpenID Connect and SAML, and identity federation through Kerberos and LDAP.

Example: Keycloak direct service authorisation

The following example shows the authorisation flow for a client to interface with a service endpoint directly, with a direct client such as curl or wget. In this context the client is acting directly by end-user action, rather than on behalf of the end-user. Through these approaches a client may request an access token by providing client credentials, such as a client identity and client secret pair, to the token endpoint of OpenID Connect. This authorisation flow uses the OAuth 2.0 Client Credentials Grant. Following a successful authorisation flow, the client may pass the access token to corresponding protected system service endpoints via an Authorization: Bearer $ACCESS_TOKEN HTTP header.

HAI_Direct_Service_Request.svg
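The token request of this flow can be sketched as follows. The hostname, realm, client identity, and client secret are hypothetical placeholders; the token endpoint path follows the standard Keycloak convention:

```python
# Illustrative construction of an OAuth 2.0 Client Credentials Grant
# token request against a Keycloak token endpoint. The hostname,
# realm, client ID, and secret are hypothetical placeholders.
from urllib.parse import urlencode

token_endpoint = (
    "https://identity.example.heartai.net"
    "/realms/heartai/protocol/openid-connect/token"  # standard Keycloak path
)
body = urlencode({
    "grant_type": "client_credentials",
    "client_id": "example-service-client",
    "client_secret": "example-secret",
})
# The body is POSTed to the token endpoint with content type
# application/x-www-form-urlencoded; the JSON response contains an
# "access_token" field, which is then presented to protected endpoints:
access_token = "eyJ..."  # placeholder for the returned token
headers = {"Authorization": f"Bearer {access_token}"}
print(body)
```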

Example: Keycloak indirect service authorisation

The following example shows the authorisation flow for a client to interface with a service endpoint indirectly, with an indirect client such as a web interface application or a desktop application. In this context the client is acting indirectly on behalf of the end-user, and the end-user should delegate authorisation to such a client. With this approach the corresponding client will perform the OAuth 2.0 Authorization Code Grant and attempt to authenticate the end-user by redirection to the Keycloak-integrated OpenID Connect service. The authenticated end-user will be prompted to delegate authority to the client for the scope of client permissions specified. The client may then request an access token from the OAuth 2.0 server by acting through the authorisation of the end-user. Following a successful authorisation flow, the client may pass the access token to corresponding protected system service endpoints via an Authorization: Bearer $ACCESS_TOKEN HTTP header.

HAI-Indirect-Service-Request.svg
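The initial redirection of this flow can be sketched as follows. The hostname, realm, client identity, and redirect URI are hypothetical placeholders:

```python
# Illustrative construction of the OAuth 2.0 Authorization Code Grant
# redirect, as performed by an indirect client. All endpoint and
# client values are hypothetical placeholders.
from urllib.parse import urlencode
import secrets

state = secrets.token_urlsafe(16)  # CSRF protection, verified on return
params = {
    "response_type": "code",       # requests the Authorization Code Grant
    "client_id": "example-web-app",
    "redirect_uri": "https://app.example.heartai.net/callback",
    "scope": "openid profile",
    "state": state,
}
authorize_url = (
    "https://identity.example.heartai.net"
    "/realms/heartai/protocol/openid-connect/auth?" + urlencode(params)
)
# The end-user authenticates and consents at this URL; Keycloak then
# redirects back to redirect_uri with ?code=..., which the client
# exchanges for an access token at the token endpoint.
print(authorize_url)
```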

Example: Keycloak UI for realm token settings

The following image shows the Keycloak realm settings web interface for tokens. Keycloak tokens are Base64-encoded JSON Web Tokens (JWTs), a component of the JavaScript Object Signing and Encryption (JOSE) specifications. The realm settings web interface for tokens provides:

  • Realm-level token configuration options, including:
    • Default signature algorithm.
    • Refresh token revocation.
    • SSO session idle time.
    • SSO session maximum time.
    • SSO session idle remember me time.
    • SSO session maximum remember me time.
    • Offline session idle time.
    • Limitations for offline session maximum time.
    • Client session idle time.
    • Client session maximum time.
    • Access token lifespan.
    • Access token lifespan for implicit flow.
    • Client login timeout.
    • Login timeout.
    • Login action timeout.

Keycloak-realm-tokens.png
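The JWT structure can be illustrated by decoding a token payload. The token below is constructed locally for illustration, and signature verification is deliberately omitted; production clients must always verify token signatures:

```python
# A JWT is three Base64url-encoded segments (header, payload,
# signature) joined with dots. This sketch decodes the payload of a
# locally constructed sample token WITHOUT verifying its signature --
# production code must always verify signatures.
import base64
import json

def b64url_decode(segment: str) -> bytes:
    # Restore the '=' padding stripped by JWT encoding.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

# Build a sample (unsigned) token for illustration.
header = {"alg": "RS256", "typ": "JWT"}
payload = {"iss": "https://identity.example/realms/heartai",
           "sub": "example-user", "exp": 1626056667}
token = ".".join(
    base64.urlsafe_b64encode(json.dumps(p).encode()).decode().rstrip("=")
    for p in (header, payload)
) + ".signature"

claims = json.loads(b64url_decode(token.split(".")[1]))
print(claims["sub"])  # -> example-user
```

The `exp`, session, and lifespan claims of real Keycloak tokens are governed by the realm-level token settings shown above.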

Example: Keycloak UI for realm client settings

The following image shows the Keycloak realm clients web interface for client settings. Keycloak realm clients are configurable with a range of settings. The realm clients web interface for client settings provides:

  • Client-level configuration options, including:
    • The client ID.
    • The client name.
    • The client description.
    • Whether the client is enabled.
    • Whether the client requires end-user authorisation consent.
    • The client login theme.
    • The client authentication protocol.
    • The client access type.
    • Whether the OAuth 2.0 Authorization Code Grant is enabled.
    • Whether the OAuth 2.0 Implicit Grant is enabled.
    • Whether the OAuth 2.0 Password Grant is enabled.
    • The root URL.
    • The valid redirect URIs.
    • The base URL.
    • The admin URL.

Keycloak-client-settings.png

Modern service architecture

The term HeartAI services generally refers to the primary application-level software that provides domain-relevant behaviour, including functionalities such as:

  • Data integration.
  • Data processing.
  • Data linkage.
  • Data aggregation.
  • Data brokering.
  • Reporting and analytics.

Services often implement reactive microservices architectures and follow concepts from The Reactive Manifesto. For HeartAI platform development, service design encourages high-performance and extendable architectures. Current HeartAI service design supports:

  • Natively cloud deployable and distributable services.
  • High-performance services, with support for real-time data streaming.
  • Well-defined service scope, with service context defined with a corresponding domain entity.
  • Mature support for backing service integration, such as PostgreSQL data server integration and Apache Kafka message bus integration.
  • Hardened security constructs including identity integration and rigorous logging, monitoring, and auditing capabilities.
  • Service development that allows iterative and well-managed development practices.

These capabilities are particularly important for the digital health ecosystem, where there are many data and application assets, and often service requirements are complex with a large variety of interface standards. To support health system care, HeartAI services have the capability to provide:

  • Broad support for data interface standards, including international, legacy, and proprietary standards, such as the HL7 health data standard.
  • High-performance data processing, with support for stream-native data interfacing and transmission. This allows interfacing with high-throughput data generation systems such as:
    • Patient observation machines.
    • Anaesthetic machines.
    • Ambulance GPS devices.
    • Wearable devices.
    • Bio-implantable devices.

Example: Service architecture components

The following figure and table describe a typical HeartAI service architecture:

HAI-Service-Architecture.svg

  • Service API: Service endpoint application-programming interface (API). Typically a web services endpoint with support for HTTP or WebSockets protocols. Provides a facade to the underlying implementation layer. Supports strong endpoint security and logging of endpoint interaction.
  • Service IMPL: Software layer implementation for the service API. May provide service functionality directly, but often coordinates with service domain entity references through a service command, effectively functioning as a task scheduler. Also provides mechanisms for authentication/authorisation and subscription to the service-brokered message bus instances.
  • Entity cluster: Distributed entity cluster implemented with Akka Cluster. Provides primary domain behaviour through an event-sourcing paradigm. The service implementation layer communicates with this layer through domain commands, typically with asynchronous communication. Domain commands may generate domain events, which are persisted to the write-side database to guarantee eventual consistency of event acknowledgement. These events are also published to the software event stream to trigger downstream behaviour.
  • Write-side database: Write-side data persistence component of the service event-sourcing process. Optimised for high-throughput writes. Provides a guarantee of eventual consistency for the write-side component of the event-sourcing paradigm. Acknowledgements from the write-side database provide the base reference for successful communication with a corresponding domain entity.
  • Event stream: Software-layer event stream to trigger downstream event-driven behaviour. Publishes event-driven behaviour to (i) the service implementation layer, (ii) service-brokered message bus endpoints, (iii) service read-side repositories.
  • Read-side database: Read-side data persistence component of the service event-sourcing process. Optimised for high-throughput reads. Through the service implementation, the read-side database subscribes to the software-layer event stream to process events into corresponding objects appropriate for structured persistence within the read-side data store. Persistence of these objects is eventually consistent with reference to the event stream, although consistency is typically achieved within seconds. Functions as a backing service to the service implementation for read-side database queries.
  • Message bus subscription: Message bus endpoints implemented with Apache Kafka. Enables service communication through a distributed publish-subscribe paradigm. Functions as a backing service to the service implementation for subscription to the message bus.
  • Message bus publication: Message bus endpoints implemented with Apache Kafka. Enables service communication through a distributed publish-subscribe paradigm. Functions as a backing service to the event stream for publication to the message bus.

Example: Service domain entity architecture

The states of system services are managed through the design concept of service domain entities. The state of a service domain entity is typically contained within a corresponding bounded context, often referenced by a corresponding aggregate root. Design of service domain entities also follows concepts and ideas from Domain-Driven Design.

State progression follows the principles of event-sourcing. Through this approach, service behaviour is progressed within a domain entity by message passing a domain Command. This Command has the potential to generate domain Events. A domain Event itself has the potential to alter the domain State. Akka Persistence provides abstractions for managing these approaches and guaranteeing consistencies with data persistence. PostgreSQL data servers provide write-side persistence of entity state as an event journal of all generated events within the service domain entity. This append-only process has high throughput and low latency and generally provides useful advantages to the overall data architecture. Domain entities are internally instances of Akka actors and are themselves able to be passivated into or populated from backing data server instances.

heartai-event-sourcing-process.svg
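The Command → Event → State cycle described above can be sketched as follows. The entity and message names are illustrative rather than HeartAI's actual domain model, and the in-memory journal stands in for the PostgreSQL event journal:

```python
# Minimal event-sourcing sketch: a Command is validated against the
# current State and may emit Events; Events are appended to a journal
# (write-side persistence); State is a fold over the journal.
from dataclasses import dataclass

@dataclass
class UpdateGreeting:          # domain Command
    message: str

@dataclass
class GreetingUpdated:         # domain Event
    message: str

@dataclass
class GreetingState:           # domain State
    message: str = "Hello"

journal: list[GreetingUpdated] = []    # append-only event journal

def handle_command(state: GreetingState, cmd: UpdateGreeting):
    if not cmd.message:
        return []                      # command rejected: no events
    return [GreetingUpdated(cmd.message)]

def apply_event(state: GreetingState, event: GreetingUpdated) -> GreetingState:
    return GreetingState(event.message)

def recover() -> GreetingState:
    """Rebuild entity state by replaying the journal."""
    state = GreetingState()
    for event in journal:
        state = apply_event(state, event)
    return state

state = GreetingState()
for event in handle_command(state, UpdateGreeting("G'day")):
    journal.append(event)              # persist before applying
    state = apply_event(state, event)

print(state.message)        # -> G'day
print(recover().message)    # -> G'day (replay yields the same state)
```

Replaying the journal to recover state is the same mechanism that allows a passivated Akka entity to be repopulated from its backing data server.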

Example: Service event processing

System services publish persisted Events to intra-service EventStreams. The events of these event streams may modify general system behaviour in three ways:

  • Service events may be published to a network message bus that is coordinated with Apache Kafka. Subscribing clients (or other services) may then receive these events and trigger domain behaviour.
  • By processing these events, services may generate read-side data projections that are persisted, in an eventually consistent manner, to read-side repositories such as a PostgreSQL database. Query requests through service APIs are often directed to these read-side repositories, and the resulting data is serialised for transmission back to the user. These approaches are particularly suitable for high-throughput querying and analytics.
  • The event stream may also implement general service behaviour at the service implementation.

The following figure shows the processing behaviour that is triggered by a service EventStream:

heartai-service-event-processing.svg
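The read-side projection process can be sketched as follows, with an in-memory dictionary standing in for a read-side PostgreSQL table and illustrative event shapes:

```python
# Sketch of a read-side projection: events from the event stream are
# folded into a query-optimised read model that is eventually
# consistent with the write side. Event shapes are illustrative.
events = [
    {"type": "GreetingUpdated", "id": "a", "message": "Hello"},
    {"type": "GreetingUpdated", "id": "b", "message": "Hi"},
    {"type": "GreetingUpdated", "id": "a", "message": "G'day"},
]

read_model: dict[str, str] = {}   # stand-in for a read-side table

def project(event: dict) -> None:
    """Apply one event to the read model (idempotent upsert)."""
    if event["type"] == "GreetingUpdated":
        read_model[event["id"]] = event["message"]

for event in events:
    project(event)

# Queries hit the read model, not the event journal:
print(read_model["a"])  # -> G'day
```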

Example: Lagom implementation

HeartAI services are developed with the Lagom microservices framework. Lagom provides libraries for the Scala and Java programming languages. HeartAI services are primarily developed with the Scala language. Lagom design choices provide a structured environment for developers to benefit from modern microservices software concepts, and many of the Lagom implementations are best practice for reactive microservices architectures.

Lagom provides best practice constructs for:

Although Lagom is relatively opinionated as a framework, the native implementation of Akka and Play allows for diverse flexibility and extensibility.

Lagom implementation

HeartAI services are developed with the Lagom microservices framework. Further information about the HeartAI Lagom implementation may be found with the following documentation:

Interface interoperability

HeartAI service interfaces are designed to be lightweight and well-defined, providing first-class support for widely-used interface protocols such as HTTPS and WebSocket Secure. For more nuanced interface requirements, service interfaces are extendable with capabilities provided by the Alpakka project, supporting a broad variety of interface protocols and data transmission methods.

These capabilities allow HeartAI services to be interoperable with many interfaces, including support for legacy and proprietary interfaces. Within digital health systems in particular there are numerous and varied standards for interface specifications, such as the HL7 message formats and DICOM medical image formats.

System services support data transmission across many common data interfaces. In addition to standard synchronous data transmission, asynchronous non-blocking data streaming is supported for many interfaces.

Example: Alpakka implementation

The Alpakka project natively supports many common data interfaces, with support also for legacy and proprietary interfaces. These interfaces implement reactive stream-native integration pipelines for Java and Scala. HeartAI services implement Akka Streams to provide functionality for reactive and stream-oriented programming. These approaches support the Reactive Streams specifications and the JDK 9+ java.util.concurrent.Flow implementations and allow internal and external system data transmission to support non-blocking reactive backpressure propagation.
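The backpressure behaviour of these reactive streams can be illustrated with a plain-asyncio analogy, in which a bounded buffer suspends a fast producer until the slower consumer signals demand. This is a conceptual sketch, not the Akka Streams API:

```python
# Backpressure sketch: the bounded queue suspends the producer (rather
# than dropping items or buffering unboundedly) until the consumer has
# made room -- the essence of the Reactive Streams demand model.
import asyncio

async def producer(queue: asyncio.Queue):
    for i in range(10):
        await queue.put(i)     # suspends when the queue is full
    await queue.put(None)      # end-of-stream marker

async def consumer(queue: asyncio.Queue, out: list):
    while (item := await queue.get()) is not None:
        await asyncio.sleep(0)   # simulate slower downstream work
        out.append(item)

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue(maxsize=2)  # bounded buffer
    out: list = []
    await asyncio.gather(producer(queue), consumer(queue, out))
    return out

print(asyncio.run(main()))  # all ten items arrive, in order
```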

Through these implementations, HeartAI services provide native support for the following data interfaces:

Example: Hello world ping interface

The following example shows a ping interface for the HeartAI Hello World service - a development service that is often used for service testing and profiling. HeartAI services are implemented with the Lagom microservices framework, and Lagom provides abstractions for specifying service interface endpoints through the use of service descriptors. The following service descriptor specifies the resource paths of the Hello World service:

override final def descriptor: Descriptor = {
  import Service._
  named("hello-world")
    .withCalls(
      restCall(Method.GET, "/hello/api/public/ping", pingService()),
      restCall(Method.POST, "/hello/api/public/ping", pingServiceByPOST()),
      restCall(Method.GET, "/hello/api/public/ping_ws_count", pingServiceByWebSocketCount),
      restCall(Method.GET, "/hello/api/public/ping_ws_echo", pingServiceByWebSocketEcho),
      restCall(Method.GET, "/hello/api/public/hello/:id", this.helloPublic _),
      restCall(Method.GET, "/hello/api/secure/hello/:id", this.helloSecure _),
      restCall(Method.POST, "/hello/api/secure/greeting/:id", this.updateGreetingMessage _))
    .withTopics(
      topic(HelloWorldServiceAPI.GREETING_MESSAGES_CHANGED_TOPIC, greetingUpdatedTopic _)
        .addProperty(
          KafkaProperties.partitionKeyStrategy,
          PartitionKeyStrategy[Greeting](_.id)))
    .withAutoAcl(true)
}

The following example command will request with HTTP GET at the pingService() service endpoint of the HeartAI HelloWorldService production environment:

Request

curl -i https://hello.prod.apps.aro.sah.heartai.net/hello/api/public/ping

Response

HTTP/1.1 200 OK
Date: Mon, 12 Jul 2021 02:24:27 GMT
Content-Type: application/json
Content-Length: 217
Set-Cookie: 5986eb34c84b3f6448c727496d958b60=822550baf33ca32a1a3d66b80439c9df; path=/; HttpOnly; Secure; SameSite=None
Cache-control: private

{
  "msg":"Hello from class net.heartai.hello_world.HelloWorldServiceIMPL!",
  "service":"class net.heartai.hello_world.HelloWorldServiceIMPL",
  "idPing":"07c6e254-a10f-4181-92d6-6174fbe9c4e4",
  "timestampPing":1626056667079
}

Service mesh implementations

HeartAI application services are typically deployed within a corresponding service mesh, allowing fine-grained control over service-to-service and user-to-service network communication. In particular, HeartAI service mesh deployments support real-time monitoring and observability of service mesh activity. HeartAI service mesh implementations provide:

  • Structured and secure service deployments within corresponding service mesh constructs.
  • Declarative approaches for service mesh resource management, with fine-grained control over service-to-service and user-to-service network communication.
  • Real-time monitoring and observability of service mesh activity, including:
    • Service mesh components and network relationships.
    • Service mesh traffic flow.
    • Service mesh traffic status.
    • Distributed tracing capabilities.

Example: Kiali implementation

Kiali provides capabilities for configuration, eventing, metrics, visualisation, and validation of Istio service mesh software. Kiali allows for the display of service mesh structure by inferring traffic topology and health status, and supports reporting and visualisation with:

  • Graphs displaying service mesh topology and real-time traffic flow.
  • Service mesh status and health updates.
  • An overview of service mesh components, including associated workloads and services.
  • Reports and status checks for inbound and outbound traffic.
  • Metrics and visualisation for inbound and outbound traffic.
  • Tooling for distributed service mesh tracing.

HeartAI instances of Red Hat OpenShift are extendable with OpenShift Service Mesh, integrating Istio, Kiali, and the Jaeger distributed tracing software.

Example: Kiali UI application graph

The following image shows an example of the Kiali application graph web interface corresponding to an implementation of Istio for the heartai-hib-interface-prod namespace. The Kiali application graph interface provides:

  • Observability of service mesh traffic.
  • Indications of traffic status and volume.
  • Service mesh components:
    • The service mesh gateway.
    • The service mesh reverse-proxy.
    • PostgreSQL (postgresql) as a backing service.
    • Apache Kafka (kafka-brokers) as a backing service.

heartai-istio-kiali-app-graph-hib-interface-prod.png

Example: Kiali UI application overview

The following image shows an example of the Kiali application overview web interface corresponding to an implementation of Istio for the heartai-hib-interface-prod namespace. The Kiali application overview interface provides:

  • The workloads and services of the service mesh.
  • An overview of the service mesh topology.
  • An overview of the service mesh health.

heartai-istio-kiali-application-overview-hib-interface-prod.png

Source, project, and product management

HeartAI source code, task and project management, and associated processes are administered through the Git version control software and the GitHub software development framework. Git and GitHub are the primary mechanisms to contribute to, manage, and deploy HeartAI. These frameworks provide structured and dynamic processes for administrators and developers to manage HeartAI, including support for:

  • Software, document, and process version control.
  • Project and product management tooling.
  • Functionalities to package and deploy software for both local machine and production environments.
  • Administrator and developer GitHub account management.
  • Utilities for software testing, analysis, and deployment.

GitHub implementation

HeartAI source code, task and project management, and associated processes are administered through the Git version control software and the GitHub software development framework. Further information about the HeartAI implementation of GitHub may be found with the following documentation:

Example: GitHub UI for repository

The following example shows the GitHub web interface for the repository overview. This page provides a high-level overview of the repository, and includes information such as:

  • The repository file structure, with detail about the most recent commit for each directory or file.
  • The rendered GitHub README documentation.
  • Repository metadata, including corresponding website(s) and most recent software version release.
  • Published repository packages.
  • Repository contributors.

Example: GitHub UI for repository overview

github-1-home.png

Example: GitHub UI for commits

The following example shows the GitHub web interface for a Git commit, representing an incremental change to the repository history. For commits, GitHub provides functionalities such as:

  • Assignment of a repository Git commit ID to the commit, as a unique SHA-1 hash of the commit information.
  • What changes were made to the repository.
  • Who authored the changes.
  • Functionality and metrics to understand the extent of the changes.
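The derivation of a Git object ID can be reproduced directly: an object ID is the SHA-1 hash of the object's type, size, and content. The following sketch matches the behaviour of `git hash-object` for a blob:

```python
# A Git object ID is the SHA-1 hash of "<type> <size>\0<content>".
# This reproduces `git hash-object` for a blob, illustrating why every
# object receives a unique, content-derived ID.
import hashlib

def git_blob_id(content: bytes) -> str:
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

# Matches `echo hello | git hash-object --stdin`:
print(git_blob_id(b"hello\n"))  # -> ce013625030ba8dba906f756967f9e9ca394464a
```

Commit IDs are derived the same way, over commit metadata (tree, parents, author, message) rather than file content.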

Example: GitHub UI for repository commit

github-8-commit.png

Example: GitHub UI for pull requests

The following examples show the GitHub web interface for a GitHub pull request, representing a request to merge one repository branch with another. For pull requests, GitHub provides functionalities such as:

  • Request process to submit an incremental update to a primary repository branch.
  • Ability to assign reviewers to review a pull request.
  • Ability to label pull requests, and associate pull requests to GitHub projects and GitHub milestones.
  • Enforcement of repository policies, such as requiring at least one administrator review and approval before accepting a pull request.
  • Integrated testing and event-driven behaviour through GitHub Actions capabilities.

Example: GitHub UI for repository closed pull requests

github-4b-pull-requests.png

Example: GitHub UI for repository pull request

github-4-pull-request.png

Example: GitHub UI for networks

The following example shows the GitHub web interface for GitHub networks. This graph allows an interactive visualisation of repository development history. The GitHub web interface for networks provides:

  • A scrollable and interactive visualisation of repository development history.
  • Linking of network data points corresponding to a repository Git commit ID.
  • Popup information when interacting with network data points, displaying further information about the corresponding commit.

Example: GitHub UI for repository network

github-7-network.png

Example: GitHub UI for issues

The following examples show the GitHub web interface for GitHub issues. This page provides functionalities to support project planning, task management, and team management. An issue typically represents a well-defined task or query in relation to the repository, and the issues web interface provides a centralised framework to manage these tasks. For issues, GitHub provides support for:

  • Issue creation with authorship corresponding to the GitHub user account.
  • Functionalities for labeling and tagging issues.
  • Assigning repository members to the issue.
  • Assigning issues to GitHub projects.
  • Assigning issues to GitHub milestones.

Example: GitHub UI for repository open issues

github-2-issues.png

Example: GitHub UI for milestones

The following example shows the GitHub web interface for GitHub milestones. This page provides functionalities to support project planning over larger time epochs. A milestone typically bundles repository issues related to a major platform or project development goal. For milestones, GitHub provides support for:

  • Documenting milestone purposes and goals.
  • Linkage between repository issues and a milestone.
  • Providing an overview of milestone development progress.

Example: GitHub UI for repository milestone open issues

heartai-software-github-milestones

Example: GitHub UI for projects

The following example shows the GitHub web interface for GitHub projects, providing Kanban-like functionalities to manage projects and products. For projects, GitHub provides functionalities such as:

  • Project tasks that are linkable to corresponding repository issues, with inheritance of issue features such as web interface interactivity integrations.
  • Inheritance of GitHub issue assignees and labels.
  • Customisable board columns.

Example: GitHub UI for repository project board

github-3-projects.png

Example: GitHub UI for GitHub Actions

The following example shows the GitHub web interface for GitHub Actions. This feature allows general event-driven behaviour to occur following interactions with GitHub, such as behaviour driven by a Git push to the GitHub repository, or as part of a GitHub pull request process. These capabilities are backed by Microsoft Azure cloud resources, and include broad access to compute services. This is particularly suitable for performing unit- and integration-testing of repository software and for deployment processing generally. The GitHub Actions web interface provides:

  • An overview of all GitHub Actions workflows and jobs that have been registered with the repository.
  • Linking of the GitHub Actions workflow with a repository Git commit ID.
  • The ability to view the GitHub Actions workflow in real-time, including logging functionality through backing Microsoft Azure cloud resources.

Example: GitHub UI for repository GitHub Actions workflow

github-actions.png

Rapid and continuous deployment

By consolidating service testing, packaging, and deployment within the HeartAI GitHub repository, services may be automatically deployed to Red Hat OpenShift following successful testing and review, a process pattern that is referred to as GitOps. Services may be deployed to live production readily, in many cases within minutes of service testing. This allows HeartAI system services to be updated and deployed potentially hundreds of times every day, with complete support for version control, zero-downtime version roll-forward, and version roll-back.
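The reconciliation idea behind this GitOps process can be sketched as a comparison between the desired state declared in Git and the live cluster state. Resource names and specifications are illustrative; Argo CD implements this pattern against real Kubernetes resources:

```python
# GitOps reconciliation sketch: desired state lives in Git, live state
# lives in the cluster, and a controller applies the diff. Resource
# names and specs are illustrative placeholders.
desired = {   # declared in the Git repository
    "hello-world": {"image": "registry.example/hello:1.4.2", "replicas": 3},
    "hib-interface": {"image": "registry.example/hib:2.0.1", "replicas": 2},
}
live = {      # currently running in the cluster
    "hello-world": {"image": "registry.example/hello:1.4.1", "replicas": 3},
}

def reconcile(desired: dict, live: dict) -> list[str]:
    """Return the actions needed to converge live state on Git state."""
    actions = []
    for name, spec in desired.items():
        if name not in live:
            actions.append(f"create {name}")
        elif live[name] != spec:
            actions.append(f"update {name}")
    for name in live:
        if name not in desired:
            actions.append(f"prune {name}")
    return actions

print(reconcile(desired, live))  # -> ['update hello-world', 'create hib-interface']
```

Because the loop is driven entirely by declared state, roll-forward and roll-back reduce to committing a different desired state.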

Example: Argo CD implementation

Within HeartAI OpenShift instances, the OpenShift GitOps Operator provides declarative approaches for GitOps lifecycle management and continuous delivery. A core component of this process is the management of cluster resources with an integrated Argo CD instance. The Argo CD framework defines a Kubernetes Application custom resource that provides functionality to synchronise with GitHub hosted source repositories. Through these Application resources, Argo CD monitors for updates to the HeartAI GitHub repository, and synchronisation is triggered when modifications are made to the master branch. Triggered behaviour includes applying the cluster resource declaration files to the HeartAI OpenShift instance, which coordinates the deployment of OpenShift resources to the cluster environment. These approaches allow the deployment of platform resources to occur through the GitHub managed review and deployment processes, and provide a supportive framework to optimise developer and contributor productivity and experience.
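As an illustration, a minimal Argo CD Application resource of this kind may look as follows. The repository URL, manifest path, and namespaces are hypothetical placeholders:

```yaml
# Illustrative Argo CD Application resource; repository URL, path, and
# namespace values are hypothetical placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: hello-world
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/heartai.git  # hypothetical
    targetRevision: master       # synchronisation triggers on master
    path: deploy/hello-world     # hypothetical path of manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: heartai-hello-world-prod
  syncPolicy:
    automated:
      prune: true                # remove resources deleted from Git
      selfHeal: true             # revert out-of-band cluster changes
```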

Example: Argo CD GitOps process

The following example shows the GitOps process for monitoring the HeartAI GitHub repository for changes to repository state, and applying any state changes to HeartAI instances of Red Hat OpenShift. This process occurs automatically following contribution to the HeartAI GitHub repository, and provides a well-defined and rapid deployment pipeline.

heartai-argocd-gitops-process.svg

Example: Argo CD UI applications

The following image shows the Argo CD web interface for Application resources. The Application web interface provides:

  • An overview of Application resources that are managed by Argo CD.
  • Information about Application resources, including:
    • The associated project that manages the Application resource.
    • Labels that have been applied to the Application.
    • Health and synchronisation status.
    • The corresponding GitHub repository path.
    • The target branch of the repository.
    • The Application deployment destination.
    • The associated Application namespace.

heartai-argocd-applications-tiles.png

Example: Argo CD UI application resources tree

The following image shows the Argo CD web interface for the resources of the heartai-acs Application with a tree view. This representation displays the management association between these resources. The Application tree view provides:

  • Information about the Application, including:
    • The health of the Application.
    • Details about current and previous synchronisation states.
  • An overview of the resources that are managed by the Application.
  • The hierarchical management structure between these resources.
  • Information about Application resources, including:
    • The resource name and icon.
    • Resource-level health and synchronisation status.

heartai-argocd-application-tree.png

Productive development environments

HeartAI developers are supported with development environments that encourage a positive and productive development experience. HeartAI instances are deployable to a developer’s local machine with automated installation steps, and these instances are secure, lightweight, and ephemeral.

Local machine deployments are orchestrated with the Docker Compose container specification tool, which provides transient instances of the message bus software Apache Kafka, the data server PostgreSQL, the identity and access management platform Keycloak, and the networking stack Traefik. The integration of these software components with local machine deployments allows developers to begin productive work readily, and reduces configuration drift between developer environments.
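The shape of such a local stack can be sketched as follows, rendered here as a Python dict for illustration (the real Docker Compose definition is YAML). Image names, tags, and ports are illustrative assumptions rather than HeartAI's actual configuration:

```python
# Sketch of a local development stack in Docker Compose form, expressed
# as a Python dict. Images, tags, and ports are illustrative assumptions.
compose = {
    "services": {
        "kafka": {"image": "bitnami/kafka:latest", "ports": ["9092:9092"]},
        "postgres": {
            "image": "postgres:14",
            "environment": {"POSTGRES_PASSWORD": "dev-only-password"},
            "ports": ["5432:5432"],
        },
        "keycloak": {"image": "quay.io/keycloak/keycloak:latest", "ports": ["8080:8080"]},
        "traefik": {"image": "traefik:v2.9", "ports": ["80:80", "443:443"]},
    },
}

# Every developer machine runs the same four supporting services, which
# is what keeps configuration drift between environments low.
assert set(compose["services"]) == {"kafka", "postgres", "keycloak", "traefik"}
```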

HeartAI repository branches allow different aspects of the repository to be separated and developed independently. The following branches implement the primary functionality of HeartAI and are protected by default:

| Branch name | Description | Restrictions | Requirements |
| ----------- | ----------- | ------------ | ------------ |
| master | Representative branch for HeartAI system deployment | Must be merged into from a pull request. | Pull request must have at least one administrator review |
| gh-pages | Representative branch for HeartAI website deployment | Must be merged into from a pull request. | Pull request must have at least one administrator review |

To apply source contributions to these branches, contributors must open a pull request to merge their branch with a corresponding origin branch, typically one of the protected branches noted above. Contributions to these branches are governed by an administrative review process, as represented in the following figure:

heartai-review-process-for-source-contribution.svg

With administrative approval, the contributor’s branch may be merged with the nominated origin branch, and will become part of the representative branches of the HeartAI source. The representative branches, master and gh-pages, trigger downstream deployment behaviour following modification to the origin source. This includes unit and integration testing, publication of representative packages and images, and deployment to HeartAI instances of Red Hat OpenShift.

Data integration and access

Health data and information are being created in ever-increasing quantity and quality. However, integration of these data sources is often sub-optimal, and access can be difficult to navigate and technically limited. Through HeartAI-provided interface interoperability, data integration is readily achieved and extendable. HeartAI has the capability to integrate data from a variety of health data resources and to provide these resources at large scale.

HeartAI data integration and access approaches provide:

  • Broad integration capabilities, including integration with international, legacy, and proprietary health data standards.
  • High-performance and reliable data systems, particularly supporting cloud-based data resources with high availability.
  • Secure approaches and environments to access data. The HeartAI implementation of Red Hat CodeReady Workspaces provides secure environments with natively integrated access to data resources.
  • Established data sharing governance and policies. For an example of HeartAI supported data sharing consider the PHOCQUS project for secondary use sharing of health data for research purposes.

Example: PHOCQUS

The PHenotyping Outcomes for clinical Care, Quality, and Service (PHOCQUS) project is an exemplar initiative of Health Data Science & Clinical Trials, Flinders University, South Australia, to develop a modern data integration platform that enhances capabilities for clinical audit and research, service innovation, and the operationalisation of digital implementations. The PHOCQUS project provides data system capabilities through an automated data retrieval and collation process, linking routinely collected clinical health service data for opt-out consenting patients under the custodianship of the involved institutions and clinical areas. These approaches will allow the development of digital phenotypes for a range of diseases and therapeutic care, patient co-morbidities, social determinants of health, and health service characteristics. The HeartAI system provides the technical implementation for the PHOCQUS project, with a modern best-practice deployment of cloud infrastructure, high-performance data systems, and enhanced platform management and operation.

PHOCQUS

Further information about the PHOCQUS project may be found with the following documentation sections:

High-performance clinical operations

Example: Statewide Virtual Care Service

The SA Health Statewide Virtual Care Service (SVCS) is a multi-disciplinary health system service initiative that will provide clinicians, health system administrators, and supporting communities with innovative capabilities and improved care delivery by providing an interface between health system services.

SVCS includes metropolitan and regional health services and represents a system-wide approach to virtual care within South Australia.

The service aims to improve many areas of health system service delivery and care, by providing innovations with:

  • Increased access and availability of emergency services.
  • Enhanced delivery of care.
  • Expedited times to triage.
  • Care pathways that avoid unnecessary emergency department admissions.
  • Reductions in ambulance ramping time.
  • Integration with modern digital and information systems.
  • Monitoring and operational capabilities.

SVCS currently provides four operational care pathways:

  • Virtual Emergency Service

The Virtual Emergency Service (VES) enables a point-of-contact for SA Ambulance Service clinicians by providing live telehealth integration and supporting services. This allows real-time paramedic and emergency care to be delivered while ambulance services are on the scene with a patient. These capabilities support clinical decision-making and may offer alternative services for care delivery, such as care-in-place service delivery, helping to alleviate emergency department and inpatient admissions.

  • Rural Virtual Care Service

The Rural Virtual Care Service (RVCS) provides virtual and remote access to clinical services for regional and remote health services and patients with potentially urgent medical conditions, by enabling virtual specialist and advanced services to be delivered to these sites on the basis of health system need. In addition, RVCS also supports regional transfers to metropolitan hospitals, including appropriate site transfer planning and bed allocation.

  • Health Navigator service

The Health Navigator service (HNAV) provides additional support to ambulance and paramedic services through the integrated capabilities of SVCS. Through this service, paramedic support and liaison staff on-site at SVCS connect with SAAS paramedic staff at the patient location, and provide additional guidance to the treating team. This process also enhances SAAS services by utilising EMR capabilities within SVCS, creating a more holistic view of the patient journey with additional information about history, medical management, and health system engagement. SAAS staff on-site at SVCS may also request a clinical consultation, where the patient will be transferred to an SVCS clinician for review.

  • Clinical Telephone Assessment service

The Clinical Telephone Assessment (CTA) service provides enhanced and integrated clinical care for patients in aged care facilities. This service supports nursing staff located at these facilities with SAAS paramedic services delivered via telehealth from SVCS, including registration into the EMR and integration with the general patient journey. Through this process, the patient may receive supportive care remotely, or where beneficial the SVCS team can coordinate the organisation of an on-site paramedic response.

SVCS commenced in December 2021, with a multidisciplinary team of ~50 staff, including clinicians, paramedics, nurses, administrators, engineers, and analysts. The operational unit for SVCS is based at the Tonsley Innovation District.

HeartAI provides support for real-time and robust information systems, empowering service visibility and operations, including service-level information systems for:

  • Calls received by the service
  • Emergency department presentations
  • Inpatient admissions
  • Patients treated in regional health networks
  • Patient demographics
  • Patient ramping
  • SAAS telecommunications
  • Direct ward admissions
  • Regional transfers
  • Triage information
  • Clinical presentation
  • Service decision-making and outcomes
Projects: Statewide Virtual Care Service

Further information about HeartAI support for SVCS may be found with the following documentation:

HeartAI also provides operational and analytical support for SVCS through modern front-end applications that are integrated with rigorous health information systems, providing real-time data and information through:

  • Insight and value from high-performance, real-time, and rigorous health data and information systems.
  • Operational constructs and practices that support rapid response to health system needs.
  • Real-time capable visualisation software with support for data streaming and event-driven behaviour.
  • Mature processes and practices to rapidly respond to clinically important information and activity.
Applications: Statewide Virtual Care Service

Further information about the Statewide Virtual Care Service application may be found with the following documentation:

Simulated examples


The examples shown here are developed with data simulation capabilities. Although the underlying data generation process is modelled on actual SVCS data and information, the simulated data is carefully curated to be appropriate for non-sensitive development and demonstrations.

Example: SVCS service activity - Overview

The service activity overview page provides a summary of current service activity with a focus on providing information relevant for present day-to-day operations. The page produces a range of reports and plots, including:

  • A tabulated report of admissions for the current day.
  • A tabulated report of discharges for the current day.
  • A visualisation for the current day admissions, with aggregation by hour.
  • Reports and visualisations of totals for the current day for the following service metrics:
    • SVCS service stream.
    • Patient age group.
    • Patient gender.
    • Triage acuity.
    • Triage primary complaint.
    • Service primary outcome.

heartai-applications-statewide-virtual-care-service-service-activity-overview.png

Example: SVCS service activity - Timeline

The service activity timeline provides visualisation of service activity in the form of an interactive timeline. The timeline may be configured with a separator variable and a date may be selected from a calendar input. By clicking on a timeline record, the corresponding record data will display by the timeline. The service activity timeline provides:

  • An interactive timeline of service activity generated by the R timevis package.
  • The ability to select a separator variable that partitions timeline records according to the selected variable.
  • The ability to select a date from a calendar input.
  • Interactive display of the corresponding record data by clicking on a timeline record.

heartai-applications-statewide-virtual-care-service-service-activity-timeline.png

Example: SVCS service metrics - Historical activity

The service metrics historical activity page provides a long-term view of service activity, producing summary reports and visualisations to assess service demand and variability. The page provides:

  • Summary reports and visualisations of service admissions for:
    • The prior two weeks.
    • The prior four weeks.
    • The prior twelve weeks.
    • The prior twelve months.
  • A time series plot over the prior twelve weeks.
  • A time series plot over the prior twelve weeks with kernel smoothing.
  • A time series plot over the prior twelve weeks with stochastic modelling.

heartai-applications-statewide-virtual-care-service-service-metrics-activity-historical.png

Example: SVCS service metrics - Patient population historical

The service metrics patient population historical page provides a range of functionality to support the assessment of the service patient population. The page provides:

  • An overview of tabulated reports and visualisations of the patient population, including:
    • Tabulated reports by a range of service metrics.
    • Stacked column charts by a range of service metrics.

heartai-applications-svcs-service-metrics-patient-population-historical2.png

Example: SVCS service metrics - Patient population over time

The service metrics patient population over time page provides a range of functionality to support the assessment of the service patient population over selectable periods of time. The page provides:

  • The ability to select from a range of service metrics.
  • The ability to select a service statistic from either count or rate.
  • The ability to select a time period, including month, week, day, or hour.
  • The ability to filter the corresponding data set from a range of preset date ranges.
  • Visualisations of the patient population over time, including:
    • Stacked column chart by period.
    • Line chart by period.
    • Kernel smoothed chart by period.

heartai-applications-svcs-service-metrics-patient-population-over-time2.png

Example: SVCS service metrics - Length of service

The service metrics length of service page provides functionality to assess the length of service for service episodes. The page provides:

  • The ability to select a grouping metric from a range of service metrics.
  • The ability to select a time period, including month, week, day, or hour.
  • The ability to filter the corresponding data set from a range of preset date ranges.
  • A tabulated report and visualisation of length of service, including:
    • Tabulated report of length of service with optional grouping by a service metric.
    • Bar chart of length of service with optional grouping by a service metric.

heartai-applications-svcs-service-metrics-length-of-service.png

Example: SVCS service metrics - ED avoidance

The service metrics ED avoidance page provides functionality to assess emergency department (ED) avoidance for the service. The page provides:

  • The ability to select a grouping metric from a range of service metrics.
  • The ability to select a time period, including month, week, day, or hour.
  • The ability to filter the corresponding data set from a range of preset date ranges.
  • A tabulated report and visualisation of ED avoidance, including:
    • Tabulated report of ED avoidance with optional grouping by a service metric.
    • Stacked column chart of ED avoidance with optional grouping by a service metric.

heartai-applications-svcs-service-metrics-ed-avoidance.png

Modern analytical methodologies

HeartAI extends robust data systems with modern and high-performance analytical capabilities, including support for state-of-the-art probabilistic computation and machine learning methodologies. These capabilities span conventional data reporting and analytics through to real-time artificial intelligence and learning systems, supporting real-time prediction, decision support, and optimisation.

For probabilistic computation, HeartAI implements Stan as a powerful probabilistic programming language and high-performance statistical computation library. Stan provides extensive support for probabilistic programming constructs and modern Markov chain Monte Carlo (MCMC) sampling methods, including Hamiltonian Monte Carlo and the no-U-turn sampler (NUTS).
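For intuition about what an MCMC sampler does, the following toy random-walk Metropolis sampler targets a standard normal distribution. This is a deliberately simple sketch; Stan's Hamiltonian Monte Carlo and NUTS samplers are far more sophisticated and efficient:

```python
import math
import random

def metropolis(log_density, n_samples, step=1.0, x0=0.0, seed=0):
    """Toy random-walk Metropolis sampler, for intuition only.

    Stan uses Hamiltonian Monte Carlo / NUTS, which scale far better
    to high-dimensional posteriors than this random-walk sketch.
    """
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, p(proposal) / p(x)),
        # computed on the log scale for numerical stability.
        if math.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal density, up to an additive log constant.
draws = metropolis(lambda x: -0.5 * x * x, n_samples=20_000)
mean = sum(draws) / len(draws)
```

After enough iterations, the empirical mean and variance of the draws approach the target distribution's mean (0) and variance (1).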

For machine learning methodologies, HeartAI implements the XGBoost regularised gradient boosting framework, and extends further with modern deep learning approaches using PyTorch and the Python programming language.

Example: Stan implementation

Stan is a powerful probabilistic programming language and high-performance probabilistic computation library, with support for:

  • Robust and mature probabilistic programming language constructs.
  • High-performance mathematical computation libraries.
  • Markov chain Monte Carlo (MCMC) sampling methods.
  • Bayesian inference.
  • Variational inference.
  • Interfaces to data and analysis languages (R, Python, shell, MATLAB, Julia, Stata).

Clinical decision support systems

With the advent of powerful computational hardware and modern analytical approaches, there is a greater ability than ever before to provide insights and actions that have the potential to improve patient outcomes and streamline the delivery of healthcare services. Clinical decision support systems represent modern implementations of clinically-driven health data and information use, with the potential to support and optimise clinical understanding and service delivery.

Example: RAPIDx AI

To assist with the medical management of patients presenting to the emergency department with potential acute coronary syndrome (ACS), the RAPIDx AI project will integrate clinical care with validated real-time data and modern analytical methods to better support clinical decision-making and help establish the South Australian health system as an effective learning health system. The RAPIDx AI project will deploy an AI-based diagnostic algorithm for patients with potential Type I or Type II myocardial infarction (MI) and myocardial injury within the emergency departments (EDs) of six South Australian hospitals, and will provide protocolised recommendations for medical management of these patients. The RAPIDx AI project is administered by Flinders University, South Australia. The HeartAI system provides the digital platform to enable real-time data and analytical methods. In a supporting partnership with the RAPIDx AI project, Siemens will deploy the RAPIDx AI Clinical Interface Prototype to provide a robust interface at the clinical point-of-care. Modern analytical capabilities are developed in partnership with the Australian Institute for Machine Learning, University of Adelaide, South Australia.

RAPIDx AI

Further information about the RAPIDx AI project may be found with the following documentation sections:

Example: RAPIDx AI XGBoost analytical methodology

Acknowledgement

This implementation of the RAPIDx AI analytical methodology was developed by Dr Lukasz Wiklendt.

There are five outcome classes: Normal, Chronic, Acute, T2MI, and T1MI.

A two-level model is trained, with a binary classifier at each level:

  1. The first level discriminates between {Normal, Chronic} and {Acute, T2MI, T1MI}.
  2. The second level discriminates between T1MI and {Acute, T2MI}.

The model tree can be visualised as:

+-----------------+--------------------+
| Normal, Chronic | Acute, T2MI, T1MI  |  Level 1
+-----------------+-------------+------+
        |         | Acute, T2MI | T1MI |  Level 2
        |         +-------------+------+
        |                |         |    
        v                v         v    
 {Normal, Chronic} {Acute, T2MI}  T1MI 

At level 1, {Normal, Chronic} is referred to as negative, and {Acute, T2MI, T1MI} as positive.

At level 2, {Acute, T2MI} is referred to as negative, and T1MI as positive:

+--------+-----------------+
| L1 Neg |     L1 Pos      |  Level 1
+--------+--------+--------+
         | L2 Neg | L2 Pos |  Level 2
         +--------+--------+

Classifying a patient consists of passing their features first into the level 1 model. The algorithm finishes on a negative outcome, {Normal, Chronic}. On a positive outcome, the patient’s features are passed into the level 2 model. This results in classifying a patient into one of three outcomes: {Normal, Chronic}, {Acute, T2MI}, or T1MI.
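The cascade logic can be sketched as follows, with two stub functions standing in for the trained level 1 and level 2 binary classifiers. The stubs and the single troponin-threshold feature are hypothetical illustrations, not the RAPIDx AI models:

```python
def classify(features, level1_positive, level2_positive):
    """Two-level cascade over the five outcome classes.

    `level1_positive` and `level2_positive` stand in for the trained
    binary classifiers (XGBoost models in practice, hypothetical stubs
    here): each takes a feature mapping and returns True for a
    positive call at its level.
    """
    # Level 1: {Normal, Chronic} (negative) vs {Acute, T2MI, T1MI} (positive).
    if not level1_positive(features):
        return "{Normal, Chronic}"
    # Level 2: {Acute, T2MI} (negative) vs T1MI (positive).
    return "T1MI" if level2_positive(features) else "{Acute, T2MI}"

# Illustrative stubs: threshold on a single hypothetical feature.
l1 = lambda f: f["troponin"] > 20
l2 = lambda f: f["troponin"] > 100
```

A negative level 1 call terminates the cascade early, so the more specific level 2 model only sees patients already flagged as positive.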

Example: RAPIDx AI XGBoost analytical methodology model performance

RAPIDx-AI-xgb.png

Example: RAPIDx AI XGBoost analytical methodology feature importance

RAPIDx-AI-xgb-imps.png

Example: RAPIDx AI XGBoost analytical methodology correlation coefficient

RAPIDx-AI-xgb-corr-coef.png

Example: RAPIDx AI PyTorch analytical methodology

Acknowledgement

This implementation of the RAPIDx AI analytical methodology was developed by Dr Zhibin Liao.

This analytical model implements a deep learning framework using PyTorch with the Python programming language.

The objectives of this approach were:

  • To develop generative models of the troponin profile in the first 24 hours of emergency presentation. This implements a pharmacometrics-based compartment model to estimate the release and clearance of troponin from circulation and major physiological compartments (e.g. the liver). The model is estimable from zero or more troponin measurements observed at any time from emergency department presentation to 24 hours following, and also accounts for patient clinical and demographic features.
  • To develop predictive models of cardiac diagnoses at time-of-presentation as a probability simplex and as defined cardiac diagnosis aggregations corresponding to clinical workflow decision-making.
  • To develop predictive models of cardiac outcomes occurring within 30 days from emergency department presentation.
  • To provide the above model functionalities through a well-defined and thin-layer web services interface. This interface handles input variable validation and processing, and returns the resultant model parameter estimates to the user. Web service capabilities are implemented with the Python http.server module. In production, this web service is proxied through a hardened HeartAI Lagom-implemented web service.
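As a rough illustration of the compartment-model idea, a Bateman-style one-compartment curve with first-order release into circulation and first-order clearance can be sketched as follows. The rate constants and scale are arbitrary assumptions, not the project's fitted parameters:

```python
import math

def troponin_curve(t_hours, release_rate=0.5, clearance_rate=0.1, scale=100.0):
    """Bateman-style one-compartment curve: first-order release into
    circulation with first-order clearance. Illustrative only; the
    rate constants and scale are arbitrary, not fitted parameters."""
    kr, ke = release_rate, clearance_rate
    return scale * (kr / (kr - ke)) * (math.exp(-ke * t_hours) - math.exp(-kr * t_hours))

# Evaluate hourly over the first 24 hours of presentation.
profile = [troponin_curve(t) for t in range(25)]
```

The curve starts at zero, rises as release dominates, peaks, and then decays as clearance takes over, which is the qualitative shape such a generative model would be fitted to.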

Example: RAPIDx AI PyTorch analytical methodology troponin profile model

RAPIDx-AI-analytics-zl-troponin-profile-model.jpg

Example: RAPIDx AI PyTorch analytical methodology cardiac simplex model

RAPIDx-AI-analytics-zl-cardiac-simplex-model.jpg

Real-time clinical monitoring systems

Clinical service operations are continuing to evolve with information-driven insights promising to support clinical care through capabilities such as:

  • Unit- and patient-level monitoring.
  • Assessment of patient risk and vulnerability.
  • Detection of patient deterioration.

among many other potential approaches to support clinical practice.

By empowering clinical care with real-time information and analytics systems, HeartAI platform capabilities enable dynamic and interactive approaches to clinical care management. These approaches are particularly powerful at assessing complex information systems with automated data processing, reporting, analysis, and alerting. For example, HeartAI services support integration with patient observation monitors, allowing real-time data streams for heart rate, oxygen saturation, temperature, and blood pressure. With modern monitor systems, this data is generated potentially many times per second. However, within the complexity of clinical care environments, the full breadth of this information is often lost. HeartAI provides automated mechanisms to monitor and assess these live data streams, and allows event-driven behaviour and alerting to follow clinically relevant pattern and risk detection.
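A minimal sketch of this kind of stream monitoring follows, using a rolling-window mean with threshold alerting over a simulated heart-rate feed. The window size and thresholds are illustrative assumptions, not clinical alerting criteria:

```python
from collections import deque

def monitor_stream(readings, window=5, low=50, high=120):
    """Sketch of threshold alerting over a live heart-rate stream:
    raise an alert when the rolling-window mean leaves [low, high].
    Window size and thresholds are illustrative assumptions."""
    recent = deque(maxlen=window)
    alerts = []
    for timestamp, bpm in readings:
        recent.append(bpm)
        mean = sum(recent) / len(recent)
        # Only alert once the window is full, to smooth out noise.
        if len(recent) == window and not (low <= mean <= high):
            alerts.append((timestamp, round(mean, 1)))
    return alerts

# Simulated stream: heart rate climbs past the alerting threshold.
stream = [(t, 80 + 10 * t) for t in range(10)]  # 80, 90, ..., 170 bpm
```

In a production setting this logic would consume an event stream (e.g. a Kafka topic) rather than a list, with alerts published as downstream events.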

Example: HAVEN SA

The South Australian Hospital Alerting Via Electronic Noticeboard (HAVEN SA) project aims to develop and implement digital solutions to support the medical management of deteriorating patients within South Australian hospital care environments. The project primarily aims to deploy modern real-time capable data and analytics systems to detect and respond to patients before the occurrence of serious adverse events. A comprehensive clinical, research, economic, and behavioural framework is proposed to support project implementation. This project is inspired by and implemented in partnership with the University of Oxford HAVEN project, with adaptations for the South Australian health system. The HAVEN SA project is proposed to be administered by the South Australian Health and Medical Research Institute, South Australia, and the Central Adelaide Local Health Network, SA Health, South Australia. The HeartAI system provides the implementing platform for the project, powering modern and scalable data integration, secure and robust deployment operations, and platform approaches for analytical development.

HAVEN SA

Further information about the HAVEN SA project may be found with the following documentation sections:

Domain name management

The HeartAI domain name is rigorously managed and utilised for enhanced platform functionality. Consistent domain name resolution provides client services and end-users with reliable methods to interface with HeartAI systems and services.

In addition to hostname resolution, HeartAI domain names allow authoritative signing of digital certificates, which is particularly important for establishing platform communication capable of secure encryption.

Example: Domain name registration

HeartAI is registered with the following second-level domain name:

| Domain name | Description |
| ----------- | ----------- |
| heartai.net | HeartAI second-level domain name |

Example: Global name server implementation

The HeartAI global name servers are hosted with Cloudflare DNS. The following name servers are active:

| Global name server | Name server host | Description |
| ------------------ | ---------------- | ----------- |
| Cloudflare DNS | angela.ns.cloudflare.com | HeartAI global domain name server |
| Cloudflare DNS | stirling.ns.cloudflare.com | HeartAI global domain name server |

HeartAI DNS records are managed through Cloudflare DNS and HeartAI global fully qualified domain names (FQDN) are resolvable by requesting name resolution to the above name servers.

The following table shows some examples of globally resolvable HeartAI FQDNs:

| Example global FQDN | Description | Networks resolvable from |
| ------------------- | ----------- | ------------------------ |
| www.heartai.net | HeartAI website hosted by GitHub Pages. | Globally across the public internet. |
| postman.heartai.net | HeartAI implementation of Postman. | Globally across the public internet. |

Example: Private name server implementation

In addition to global domain name server functionality, HeartAI also implements private network domain name resolution. This occurs at two primary levels of the HeartAI networking stack:

| Private name server | Name server host | Description |
| ------------------- | ---------------- | ----------- |
| Azure Private DNS | 168.63.129.16 | Private DNS functionality implemented with Azure Private DNS. Provides domain name resolution from within the HeartAI Azure environment. Does not expose IP addresses to the public internet. |
| OpenShift SDN DNS | dns-default.openshift-dns.svc.cluster.local | Private DNS functionality implemented with Red Hat OpenShift. Provides domain name resolution from within the HeartAI OpenShift software-defined network (SDN). As the HeartAI instance of Red Hat OpenShift is provided by Microsoft Azure Red Hat OpenShift and exists within corresponding Azure Private DNS zones, private OpenShift SDN DNS name resolution typically occurs as a sub-tree of the Azure Private DNS. |

Similarly to global name resolution, private FQDNs are resolvable from within networks that have access to the corresponding private name servers. Unlike global name resolution, private name servers may also provide authoritative DNS records for FQDNs that are not globally a sub-domain of heartai.net. For example, the sub-domains svc.cluster.local and example.subdomain are FQDNs that are authoritatively resolvable from within their corresponding private DNS zones. This approach is often used in combination with OpenShift and Kubernetes DNS capabilities to provide location-transparent name resolution for the virtual IP addresses that exist within these environments.
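The layered zone behaviour can be sketched as a longest-suffix match over the zones a given network can reach; the zone list here is a hypothetical illustration:

```python
def resolver_for(fqdn, zones):
    """Pick the private DNS zone responsible for an FQDN by longest
    suffix match. Returns None when no private zone matches, in which
    case resolution falls through to the global name servers.
    The zone names used here are illustrative assumptions."""
    fqdn = fqdn.rstrip(".")
    matches = [z for z in zones if fqdn == z or fqdn.endswith("." + z)]
    return max(matches, key=len) if matches else None

# Hypothetical zone list mirroring the layered resolution described above.
zones = ["sah.heartai.net", "apps.aro.sah.heartai.net", "svc.cluster.local"]
```

Longest-suffix matching is what lets an OpenShift zone resolve names as a sub-tree of the enclosing Azure Private DNS zone.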

The following table shows some examples of privately resolvable HeartAI FQDNs:

| Example private FQDN | Description | Networks resolvable from |
| -------------------- | ----------- | ------------------------ |
| sah.heartai.net | HeartAI Azure Private DNS zone. | Corresponding Azure Private DNS zone and OpenShift DNS zones. |
| sah-heartai-kv-prod.vault.azure.net | HeartAI Azure Key Vault Private DNS zone. | Corresponding Azure Private DNS zone and OpenShift DNS zones. |
| api.aro.sah.heartai.net | HeartAI Red Hat OpenShift control plane API. | Corresponding Azure Private DNS zone and OpenShift DNS zones. |
| *.apps.aro.sah.heartai.net | HeartAI Red Hat OpenShift endpoint routes. | Corresponding Azure Private DNS zone and OpenShift DNS zones. |
| hello.prod.apps.aro.sah.heartai.net | HeartAI hello world production environment endpoint route. | Corresponding Azure Private DNS zone and OpenShift DNS zones. |
| heartai-hello-world.heartai-hello-world-prod.svc.cluster.local | HeartAI hello world production environment endpoint service. | Corresponding OpenShift DNS zones. |
| heartai-hello-world | HeartAI hello world production environment endpoint service. | Corresponding OpenShift DNS zones, only from within the heartai-hello-world-prod OpenShift namespace. |

Example: Cloudflare implementation

The HeartAI global name servers are hosted with Cloudflare DNS. For the components of HeartAI that are hosted on the public internet (this website and the Postman API reference), all traffic is proxied through Cloudflare as an edge network.

The following image shows an overview of analytics collected over a one-week period for connections that are proxied through Cloudflare:

heartai-cloudflare-all-pages-analytics.png

Stakeholder engagement

The HeartAI team are actively engaged with stakeholders within South Australia and abroad. These stakeholders have provided advice, guidance, and support for HeartAI. The team are thankful for the support from:

  • SA Health
  • Southern Adelaide Local Health Network
  • Central Adelaide Local Health Network
  • Northern Adelaide Local Health Network
  • Commission on Excellence and Innovation in Health
  • Office for the Chief Medical Information Officer
  • Digital Health SA
  • Health Translation SA
  • South Australian Health and Medical Research Institute
  • Flinders University
  • The University of Adelaide
  • University of South Australia
  • Australian Institute for Machine Learning
  • Flinders Cardiac Surgery Research
  • Microsoft
  • Red Hat
  • Allscripts
  • Siemens

Proactive organisational and personal growth

The HeartAI team are passionate about continued capability development, and encourage proactivity towards positive organisational and personal growth. For HeartAI platform development, best-practice concepts and implementations are actively researched and considered. For team member growth, a culture of support and guidance is evolving. By active efforts toward organisational and personal growth, HeartAI promotes a purposeful and productive development experience.

Example: HeartAI repository tree

The following image shows the HeartAI repository tree as a visualisation produced with the gource version control visualisation tool.

HeartAI v0.33.0

heartai-v0.33.0.png