Deploying AI in practice is often enabled by a well-designed digital foundation. This includes supporting infrastructure, primary and secondary data sources, software packaging and orchestration, security frameworks, and observability. An operational AI system is therefore built upon a broad base of digital capability, and alignment with digital system policy should be a central concern.
Among many considerations, these should include:
The implementation of an AI system may also be subject to review by an ethical review process, such as the Human Research Ethics Committees that are active within many health institutions. These review processes will consider whether the use of the AI system is appropriate for the setting by assessing the risks and benefits of the system, and by reviewing whether particular aspects, such as consent, should be required. Expert counsel on AI may be found on oversight committees, which can often provide guidance on the ethical use of AI in these contexts.
It may also be appropriate to establish project-level governance and oversight committees for specific AI implementations. The purpose of this governance is to ensure that the AI system is fulfilling its intended use within the organisation, and that the system is safe and effective.
Project-level governance may consider:
Access to high-quality data sets is foundational to the success of the AI development lifecycle. However, this is often limited by the relatively unstructured way in which data are accessed. For example, a researcher may be provided with raw data extracts with minimal post-processing or assurance, offloading a significant data-management burden onto the researcher. In a typical working environment, this can result in the repeated duplication of these data extracts across many devices, often with little management or oversight. In some instances it may be better to separate model development from training and deployment, which can minimise exposure to sensitive data and reduce the risk of unmanaged access to data resources.
Through this approach, organisational responsibilities may be distinguished such that only a limited number of trained personnel have access to sensitive data resources. A typical organisational structure could consider the following roles:
By implementing these processes, the secure management of data resources can be supported, and unnecessary data extraction and duplication avoided. AI engineers can develop model structures remotely and deploy these to secure environments where the data are hosted. In addition, constrained approaches to data consumption, for example models that train incrementally over time, can limit the scale of data interaction at any one time.
The deployment of AI models can vary widely, with a range of approaches to training, serving, and orchestrating AI constructs. This can lead to unmanaged AI deployment pipelines and a limited ability to standardise and scale these systems. By defining a standard set of operating procedures for AI deployments, these systems can be better managed and a greater level of assurance achieved.
Approaches for robust AI deployment include:
These frameworks allow AI to operate at scale, potentially supporting many more developers or users through standardised access and approaches.
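One way to standardise deployments is to require a single manifest record per deployment, validated before release. The sketch below is illustrative only; the field names and validation rules are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class DeploymentManifest:
    """One standardised record per AI deployment (illustrative fields)."""
    model_name: str
    version: str
    owner: str              # responsible owner, for governance traceability
    environment: str        # e.g. "staging" or "production"
    container_image: str    # pinned, immutable artefact reference
    approved: bool = False
    review_date: date = field(default_factory=date.today)

def validate(manifest: DeploymentManifest) -> list:
    """Return problems blocking deployment; an empty list means releasable."""
    problems = []
    if not manifest.approved:
        problems.append("deployment has not been approved")
    if ":" not in manifest.container_image:
        problems.append("container image must be pinned to an explicit tag")
    return problems

m = DeploymentManifest("sepsis-risk", "1.2.0", "clinical-ai-team",
                       "staging", "registry.local/sepsis-risk:1.2.0",
                       approved=True)
print(validate(m))
```

Capturing every deployment in the same machine-checkable form is what enables the standardised access and scaling described above.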
To ensure that AI systems are functioning as expected, it is important that events are logged and observed throughout the AI lifecycle. Events should capture information related to:
These logged events should be well-structured and auditable, including provenance and responsible owners. Event and observability infrastructure should also be fit for purpose across operational and non-operational use cases. For operational systems in particular, real-time monitoring should be considered a foundational capability.
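Well-structured, auditable events can be produced with a structured logging formatter, as sketched below using Python's standard `logging` module. The specific field names (`model_version`, `owner`) and logger name are illustrative assumptions, not a prescribed schema.

```python
import io
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each event as a single machine-readable JSON line."""
    def format(self, record):
        event = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "event": record.getMessage(),
            # Provenance fields attached via `extra=` at the call site.
            "model_version": getattr(record, "model_version", None),
            "owner": getattr(record, "owner", None),
        }
        return json.dumps(event)

stream = io.StringIO()  # stands in for a real log sink
handler = logging.StreamHandler(stream)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("ai.audit")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("prediction served",
            extra={"model_version": "1.2.0", "owner": "clinical-ai-team"})

event = json.loads(stream.getvalue())
print(event["event"], event["model_version"])
```

Because each line is valid JSON with consistent fields, events can be queried and audited downstream, and provenance travels with every record rather than living in free text.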
A well-structured change management process is important for ensuring the ongoing rigour of AI implementations. Generally, when there is a new deployment, or a significant change to an existing deployment, the developers of AI systems should coordinate with the responsible owner(s) and corresponding governance stakeholders to assess the impact of the change. This could include go-live assessments, impact evaluation, change planning, user education, and formation of a service level agreement.
Particular attention should be applied to the specific concerns of an AI implementation, such as:
Where appropriate, organisations can leverage existing change management processes (e.g. for digital systems). However, organisations should also consider establishing a dedicated function for the assessment of AI.
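The change-management steps listed above can be enforced as a simple release gate. This is a minimal sketch; the step names mirror the activities mentioned in this section but are otherwise illustrative.

```python
# Steps required before an AI change goes live (names are illustrative,
# drawn from the activities described above).
REQUIRED_STEPS = {
    "go_live_assessment",
    "impact_evaluation",
    "change_plan",
    "user_education",
    "service_level_agreement",
}

def ready_for_release(completed):
    """Gate a change on the full checklist; return status and missing steps."""
    missing = REQUIRED_STEPS - set(completed)
    return (not missing, missing)

ok, missing = ready_for_release({"go_live_assessment", "impact_evaluation"})
print(ok, sorted(missing))
```

Encoding the checklist makes the gate auditable: a change cannot proceed while any step is outstanding, and the missing steps are reported explicitly.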
Cause and effect lies at the heart of sound probabilistic model structure, and an understanding of causality is essential for disentangling conditional biases such as confounding. Purely data-driven approaches may be insufficient for determining causal relationships. In medicine, the observed data are often the result of complex physiological changes, and it may be important to account for causal factors acting throughout the physiological system. A robust AI system should consider the role of causality as a means of ensuring trust and the responsible use of these technologies.
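The effect of confounding on a purely data-driven estimate can be shown with a small simulation. This is an illustrative sketch with synthetic data: a confounder Z drives both the treatment X and the outcome Y, so a naive regression of Y on X overstates the true effect, while adjusting for Z recovers it.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Z confounds both treatment X and outcome Y; the true effect of X on Y is 2.0.
Z = rng.normal(size=n)
X = Z + rng.normal(size=n)
Y = 2.0 * X + 3.0 * Z + rng.normal(size=n)

# Naive estimate: regress Y on X alone, absorbing the confounder's influence.
naive = np.linalg.lstsq(np.column_stack([X, np.ones(n)]), Y, rcond=None)[0][0]

# Adjusted estimate: include Z in the regression to block the confounding path.
adjusted = np.linalg.lstsq(np.column_stack([X, Z, np.ones(n)]), Y,
                           rcond=None)[0][0]

print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")
```

The naive coefficient settles near 3.5 rather than the true 2.0; only knowledge of the causal structure tells the modeller that Z must be adjusted for, which the data alone do not reveal.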