Policy: Artificial intelligence assurance

Alignment with digital systems policy and operating models

Deploying AI in practice typically depends on a well-designed digital foundation, including supporting infrastructure, primary and secondary data sources, software packaging and orchestration, security frameworks, and observability. Because an operational AI system is built upon this broad base of digital capability, alignment with digital systems policy should be a central concern.

Among others, the relevant policy areas include:

  • Privacy
  • Safety
  • Information technology
  • Information security
  • Cybersecurity
  • Change management

Alignment with ethical review processes

The implementation of an AI system may also be subject to an ethical review process, such as the Human Research Ethics Committees active within many health institutions. These processes consider whether the use of the AI system is appropriate for the setting by assessing its risks and benefits, and by reviewing whether particular safeguards, such as consent, should be required. Expert counsel on AI may also be available from overseeing committees, which can often provide guidance on the ethical use of AI in these contexts.

Project-level governance

It may also be appropriate to establish project-level governance and oversight committees for specific AI implementations. The purpose of this governance is to ensure that the AI system is fulfilling its intended use within the organisation, and that the system is safe and effective.

Project-level governance may consider:

  • The intended objectives of the AI implementation.
  • Outcomes and performance monitoring.
  • Safety and assurance.
  • Identification and formalisation of responsible parties.
  • Any particular concerns, for example regulatory requirements for medical software.

Separation of data environments and responsibilities

Access to high-quality data sets is foundational to the success of the AI development lifecycle. However, such access is often limited by the relatively unstructured way that data are provided. For example, a researcher may receive raw data extracts with minimal post-processing or assurance, offloading a significant data management burden onto the researcher. In a typical working environment, this can result in repeated duplication of these extracts across many devices, often with little management or oversight. In some instances it may be better to separate model development from training and deployment, which can minimise exposure to sensitive data and reduce the risk of unmanaged access to data resources.

Through this approach, organisational responsibilities may be distinguished such that only a limited number of trained personnel have access to sensitive data resources. A typical organisational structure could include the following roles:

  • AI engineers, who can work more freely with public, deidentified, or simulated data to generate representative model constructs.
  • Data engineers, who can provide high-quality ways of consuming data, including publishing data structures and definition dictionaries.
  • Machine learning operations (MLOps) engineers, who can specialise in model deployment and orchestration.

Implementing these processes supports the secure management of data resources and avoids data extraction and duplication. AI engineers can develop model structures remotely and deploy them to the secure environments where data are hosted. In addition, constrained approaches to data consumption, for example models that train incrementally over time (as sketched below), can limit the scale of data interaction at any one time.
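
As a minimal sketch of this constrained consumption pattern, the example below uses scikit-learn's `partial_fit` interface to train a model one managed batch at a time, so that no full data extract is ever duplicated locally. The batch accessor and its data source are hypothetical stand-ins for a secure, hosted data environment.

```python
# Minimal sketch: incremental training with scikit-learn's partial_fit, so that
# only one managed batch of data is resident at a time. The batch accessor and
# the "secure://" source are hypothetical stand-ins for a hosted data service.
from typing import Iterator, Tuple

import numpy as np
from sklearn.linear_model import SGDClassifier


def iter_batches(source: str, batch_size: int = 1000) -> Iterator[Tuple[np.ndarray, np.ndarray]]:
    """Hypothetical accessor that streams (features, labels) batches from a
    secure data environment. Simulated data stands in for managed queries."""
    rng = np.random.default_rng(0)
    for _ in range(10):
        X = rng.normal(size=(batch_size, 5))
        y = (X[:, 0] + rng.normal(scale=0.5, size=batch_size) > 0).astype(int)
        yield X, y


model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])  # partial_fit requires all classes declared up front

for X_batch, y_batch in iter_batches("secure://example/records"):
    model.partial_fit(X_batch, y_batch, classes=classes)
    # Each batch can be released after use; no full extract accumulates locally.
```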

Deployment orchestration

The deployment of AI models can vary widely, with a range of approaches to train, serve, and orchestrate AI constructs. This variability can lead to unmanaged AI deployment pipelines and a limited ability to standardise and scale these systems. By defining a standard set of operating procedures for AI deployments, these systems can be better managed and a greater level of assurance achieved.

Approaches for robust AI deployment include:

  • Automated approaches to build, package, and deploy AI constructs.
  • Standardisation of the AI development and deployment lifecycle.
  • Persistence of history through data lineage and model metadata registries.
  • Ability to orchestrate model experiments and to maintain a model registry.
  • Standardised performance metrics, model pathology detection, subpopulation bias detection, and intermodel comparative analysis.
  • Live and continuous monitoring of model inference, with real-time observability.

These frameworks allow AI to operate at scale, potentially supporting many more developers or users through standardised access and approaches.
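
As one concrete illustration of these practices (not a mandated tooling choice), the sketch below uses the open-source MLflow framework to log training parameters and performance metrics and to register the resulting model in a versioned model registry. The experiment, metric, and model names are hypothetical.

```python
# Illustrative sketch: experiment tracking and model registration with MLflow.
# Experiment, metric, and registry names are hypothetical examples.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("example-risk-model")  # hypothetical experiment name

with mlflow.start_run():
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Persist lineage and performance metadata alongside the model artefact.
    mlflow.log_param("model_type", "logistic_regression")
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))

    # Register the model so deployments reference a versioned registry entry.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="example-risk-model",  # hypothetical registry name
    )
```

A versioned registry entry gives deployment pipelines a stable reference point, supporting the lineage, comparison, and monitoring capabilities listed above.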

Logging, monitoring, and observability

To ensure that AI systems are functioning as expected, it is important that events are logged and observed throughout the AI lifecycle. Events should capture information related to:

  • Model registration and metadata
  • Model training and tuning process
  • Deployment and orchestration
  • Live inference and user interactions
  • Quality assurance triggers
  • Domain / business logic triggers
  • Computational resource usage metrics

These logged events should be well-structured and auditable, including provenance and responsible owners. Event and observability infrastructure should also be fit for purpose across both operational and non-operational use cases. For operational systems in particular, real-time monitoring should be considered a foundational capability.
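
As a minimal sketch of well-structured, auditable event logging, the example below emits lifecycle events as JSON log lines with provenance fields attached. The event schema and field names are illustrative assumptions rather than a prescribed format.

```python
# Minimal sketch: structured, auditable AI lifecycle events as JSON log lines.
# Field names (event_type, model_id, owner) are illustrative assumptions.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai.lifecycle")


def log_event(event_type: str, model_id: str, owner: str, **details) -> None:
    """Emit a structured event with provenance (timestamp, responsible owner)."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,  # e.g. "model_registered", "inference"
        "model_id": model_id,
        "owner": owner,            # responsible owner, for auditability
        "details": details,
    }
    logger.info(json.dumps(event))


# Example events mirroring the categories listed above.
log_event("model_registered", "example-risk-model:3", "example-team",
          framework="sklearn", training_data_version="2024-06")
log_event("inference", "example-risk-model:3", "example-team",
          latency_ms=12, prediction=1)
```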

Change management processes

A well-structured change management process is important for ensuring the ongoing rigour of AI implementations. Generally, when there is a new deployment, or a significant change to an existing deployment, the developers of AI systems should coordinate with the responsible owner(s) and corresponding governance stakeholders to assess the impact of the change. This could include go-live assessments, impact evaluation, change planning, user education, and formation of a service level agreement.

Particular attention should be applied to the specific concerns of an AI implementation, such as:

  • Ongoing monitoring and auditing.
  • Key dependencies such as source data systems.
  • Engaging with dedicated analytical expertise within the organisation.
  • Identification and formalisation of the responsible owner(s) of the AI system within organisational processes.

Where appropriate, organisations can leverage existing change management processes (e.g. those for digital systems). However, organisations should also consider establishing a dedicated function for the assessment of AI.

Causality and bias

Cause and effect lies at the heart of a probabilistically sound model structure, and an understanding of causality is essential to disambiguate conditional biases. Purely data-driven approaches may be insufficient for determining causal relationships. In medicine, the observed data are often the result of complex physiological changes, and it may be important to account for causal factors throughout the physiological system. A robust AI system should consider the role of causality as a means of ensuring trust and the responsible use of these technologies.
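
The toy simulation below illustrates this point under stated assumptions: a synthetic confounder (such as disease severity) induces a strong, purely data-driven association between a treatment and an outcome even though the treatment has no causal effect, and adjusting for the confounder removes the association. All variables are synthetic and the scenario is hypothetical.

```python
# Toy demonstration: a confounder induces a spurious treatment-outcome
# association; adjusting for it recovers the true (null) causal effect.
# All variables are synthetic; this is not clinical logic.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

severity = rng.normal(size=n)                  # confounder
treatment = severity + rng.normal(size=n)      # sicker patients treated more
outcome = 2.0 * severity + rng.normal(size=n)  # outcome driven by severity only

# A naive, purely data-driven view suggests treatment affects outcome.
print("naive corr(treatment, outcome):",
      round(float(np.corrcoef(treatment, outcome)[0, 1]), 3))  # ~0.63

# Regressing out the confounder removes the association (~0.0), consistent
# with the true causal structure in which treatment has no direct effect.
t_resid = treatment - np.polyval(np.polyfit(severity, treatment, 1), severity)
o_resid = outcome - np.polyval(np.polyfit(severity, outcome, 1), severity)
print("adjusted corr:", round(float(np.corrcoef(t_resid, o_resid)[0, 1]), 3))
```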

Policy

  1. Alignment with digital systems policy and operating models

    1. The implementation of AI that involves the use of digital systems must generally align with the corresponding policies for those systems. This should include policies relating to:
      1. Privacy
      2. Safety
      3. Information technology
      4. Information security
      5. Cybersecurity
      6. Change management
  2. Alignment with ethical review processes

    1. If the AI implementation is subject to an ethical review process, the ethical review must be completed before the implementation can be activated.
    2. Any stipulations of the ethical review process, such as appropriate consent, must be fulfilled as a prerequisite of the use of the AI implementation.
  3. Project-level governance

    1. In instances where a specific project-level governance framework has been established for an AI implementation, this framework must be followed as a condition of the implementation.
    2. Such a project-level governance framework should specify:
      1. The intended objectives of the AI implementation.
      2. The responsible owners of the implementation and expected duties of these owners.
  4. Governance and compliance

    1. All HeartAI administrators and developers must understand and agree to this policy before they can gain access to HeartAI platform components.
    2. In circumstances where this policy either lacks specification or conflicts with a state or federal policy, the existing state or federal policy will take precedence. HeartAI administrators are tasked with resolving policy deficits by approved modification or extension to HeartAI policy.
    3. HeartAI administrators are responsible for ensuring that this policy aligns with state and federal policies.
  5. Ongoing review

    1. This policy must be subject to ongoing review. HeartAI administrators are responsible for reviewing this policy regularly and updating it as organisational, regulatory, and technological circumstances evolve.