Hot on the heels of recent announcements from the U.S. Food and Drug Administration (see our prior blogs here), the European Medicines Agency (“EMA”) has joined the conversation on the use of Artificial Intelligence (“AI”) and Machine Learning (“ML”) technologies in the medicinal product lifecycle.
AI and ML have the potential to enhance every stage of the medicinal product lifecycle, from drug discovery through clinical development to manufacturing and post-market pharmacovigilance. These technologies can display intelligent behaviour and analyse vast amounts of data. They are also extremely flexible, as they can be trained with data rather than explicitly programmed. When used correctly, AI and ML can “effectively support the acquisition, transformation, analysis, and interpretation of data within the medicinal product lifecycle.”
However, the nature of these technologies also gives rise to certain risks: models can lack transparency, and their data-driven approach makes them prone to bias. The EMA has therefore published a draft “Reflection paper on use of Artificial Intelligence (AI) in medicinal product lifecycle” (the “Draft Reflection Paper”), which is open for consultation until 31 December 2023. The EMA sees the Draft Reflection Paper as a way to open “a dialogue with developers, academics, and other regulators.”
What does the Draft Reflection Paper cover?
The Draft Reflection Paper sets out the EMA’s current thinking on the use of AI to “support the safe and effective development, regulation and use of … medicines.” It applies primarily to human medicines, noting that, while similar principles apply to veterinary medicines, specific reflections and guidance are needed for the veterinary space.
The purpose of the Draft Reflection Paper is to identify uses of AI/ML that fall within the remit of the EMA and National Competent Authorities. This obviously includes the use of AI in the medicinal product lifecycle, but also extends to the use of medical devices incorporating AI/ML technology that are used to generate evidence to support an EU marketing authorisation (i.e., used within the context of clinical trials or combined with the use of a medicinal product).
Use of AI/ML in the medicines lifecycle
The EMA highlights as a “key principle” that marketing authorisation applicants (“Applicants”) and marketing authorisation holders (“MAHs”) will bear responsibility for ensuring that the AI/ML technologies they use are “fit for purpose and are in line with ethical, technical, scientific, and regulatory standards as described in GxP standards and current EMA scientific guidelines.”
In summary, the Draft Reflection Paper requires that Applicants take a “risk-based approach for development, deployment and performance monitoring of AI and ML tools.” The degree of risk will be determined by a number of factors, including: the AI technology itself; the context of use; the degree of influence of the AI/ML technology; and the stage of lifecycle of the medicinal product.
The Draft Reflection Paper considers use of AI/ML at different stages along the product lifecycle and sets out principles and an indication of risk of applying AI/ML at each such stage:
- Drug discovery — the EMA acknowledges that the use of AI/ML in drug discovery may be low risk from a regulatory perspective, “as the risk of non-optimal performance often mainly affects the sponsor.” However, if results contribute to the total body of evidence presented for regulatory review then the regulatory risk increases.
- Non-clinical development — AI/ML (e.g., “AI/ML modelling approaches to replace, reduce, and refine the use of animals”) should follow Good Laboratory Practice (“GLP”), where applicable. Applicants should consider the guidance on the “Application of GLP Principles to Computerised Systems” and on “GLP Data Integrity”, and their SOPs should cover AI/ML.
- Clinical trials — AI/ML models (for example, models that support the selection of patients based on disease characteristics or clinical parameters) must comply with ICH GCP. The regulatory risk of using AI/ML increases from early-stage to pivotal clinical trials. Models generated for use in clinical trials are likely to be considered part of the clinical trial data or trial protocol dossier, and must be made available for regulators to assess at the time of the marketing authorisation or clinical trial application. Where data collected or generated with AI/ML may impact the regulatory assessment of a medicine, the EMA recommends early regulatory interaction.
- Precision medicine — the EMA considers the use of AI/ML in individualizing treatment (e.g., patient selection, dosing, de novo design of product variants) to be high-risk from a medicines regulation perspective. The EMA recommends “special care … in defining what constitutes a change in posology (requiring a regulatory evaluation before implementation), to provide guidance that the prescribers can critically apprehend, and include fall-back treatment strategies in cases of technical failure.”
- Product information — AI/ML might be used to draft, compile, translate or review product information documents. Recognizing that generative language models are prone to hallucinations (plausible but erroneous output), the EMA expects such technologies to be used only under “close human supervision.”
- Manufacturing — use of AI/ML in drug manufacturing is expected to increase in the future and the EMA notes that this must comply with relevant quality management principles.
- Post-authorization phase — AI/ML has the potential to support post-authorization safety and efficacy studies in human medicines, as well as pharmacovigilance activities such as adverse event report management and signal detection (a simplified illustration follows below). The MAH must “validate, monitor and document model performance and include AI/ML operations in the pharmacovigilance system, to mitigate risks related to all algorithms and models used.”
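For readers less familiar with signal detection, the sketch below shows one classical disproportionality statistic, the proportional reporting ratio (“PRR”), which compares how often an adverse event is reported for a drug of interest against all other drugs. The PRR is not named in the Draft Reflection Paper; it is included purely as an illustrative assumption of the kind of statistic AI/ML-supported signal detection might build upon, and all counts below are hypothetical.

```python
# Minimal sketch of signal detection via the proportional reporting ratio (PRR).
# All counts are hypothetical; real pharmacovigilance systems would draw them
# from spontaneous-report databases such as EudraVigilance.

def proportional_reporting_ratio(a: int, b: int, c: int, d: int) -> float:
    """PRR from a 2x2 table of adverse event (AE) reports.

    a: reports of the AE of interest for the drug of interest
    b: reports of all other AEs for the drug of interest
    c: reports of the AE of interest for all other drugs
    d: reports of all other AEs for all other drugs
    """
    rate_drug = a / (a + b)    # AE reporting rate for the drug of interest
    rate_other = c / (c + d)   # AE reporting rate for all comparator drugs
    return rate_drug / rate_other

# Illustrative counts only.
prr = proportional_reporting_ratio(a=30, b=970, c=150, d=49_850)
print(f"PRR = {prr:.2f}")
# A commonly cited screening heuristic flags PRR >= 2 (alongside minimum case
# counts) for further assessment; flagged signals still require expert review.
```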
Considerations for use of AI/ML
The Draft Reflection Paper sets out detailed measures that Applicants can take when using AI/ML technologies. Some key points include:
- Interacting with regulators: Applicants should carry out a regulatory impact and risk analysis. The higher the regulatory impact or risk associated with the use of AI/ML technologies, the sooner the EMA recommends the Applicant engages with regulators to seek scientific advice.
- Technical considerations:
- Data acquisition: Applicants should take active measures to avoid integrating bias into AI/ML, and should document the source of the data and the process of its acquisition in a traceable manner, in line with GxP.
- Training, validation and test data: the EMA discusses the validation of models, a concept that differs importantly from “validation” as used in the field of medicines (see the first sketch after this list).
- Model development: the EMA encourages development and use of generalizable and robust models.
- Performance assessments: the Draft Reflection Paper highlights the importance of selecting the correct metrics for performance assessments (see the second sketch after this list).
- Interpretability and explainability: although transparent models are preferred, the EMA states that a “black box” model may be acceptable if developers can substantiate why transparent models are unsatisfactory. The EMA encourages the use of explainable AI methods wherever possible (see the third sketch after this list).
- Model deployment: a risk-based approach is required for model deployment.
- Ethical Principles: developers should follow the basic ethical principles defined in the EU’s guidelines for trustworthy AI and presented in the Assessment List for Trustworthy Artificial Intelligence for self-assessment (“ALTAI”). They should also take a “human-centric” approach to all development and deployment of AI/ML.
- Governance, Data Protection and Integrity: Applicants and MAHs also need to embed governance, data protection and data integrity principles in their use of AI/ML.
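To illustrate the first of these points, the sketch below shows what “validation” typically means for an ML model: performance is measured on data held out from training. The dataset, split ratios and model are illustrative assumptions only, not recommendations from the Draft Reflection Paper.

```python
# Minimal sketch of an ML-style train/validation/test split with scikit-learn.
# Dataset, model and split ratios are illustrative, not regulatory guidance.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out 20% as a final test set, then carve a validation set from the rest.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, random_state=0)  # 60/20/20 overall

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"validation accuracy: {model.score(X_val, y_val):.3f}")   # tune on this
print(f"test accuracy:       {model.score(X_test, y_test):.3f}")  # report once
```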
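On the second point, metric choice matters particularly for imbalanced outcomes, which are common in medicine. The sketch below, using synthetic labels as an illustrative assumption, shows how accuracy can look excellent while a model never detects the rare outcome of interest.

```python
# Minimal sketch of why metric choice matters on imbalanced data.
# The labels below are synthetic and purely illustrative.
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
y_true = (rng.random(10_000) < 0.01).astype(int)  # ~1% positives (rare event)
y_pred = np.zeros_like(y_true)                    # degenerate "always negative" model

print(f"accuracy:    {accuracy_score(y_true, y_pred):.3f}")  # ~0.99, looks great
print(f"sensitivity: {recall_score(y_true, y_pred):.3f}")    # 0.0, clinically useless
# High accuracy masks a model that never detects the outcome of interest;
# sensitivity/specificity (or precision-recall) tell the real story here.
```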
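On the third point, explainable AI spans many techniques; one simple, model-agnostic example is permutation feature importance, sketched below with scikit-learn. The dataset and model are illustrative assumptions, and the Draft Reflection Paper does not mandate any particular method.

```python
# Minimal sketch of one explainable-AI technique: permutation feature importance.
# Dataset and model are illustrative; real submissions would use validated pipelines.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in performance;
# a large drop indicates the model relies heavily on that feature.
result = permutation_importance(model, X_val, y_val, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:+.3f}")
```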
Next Steps
The EMA will finalize the Draft Reflection Paper following the end of the consultation period. It also intends to provide additional guidance on risk management and may update existing guidance to take into account the specific issues that AI/ML pose.
Given that the Draft Reflection Paper puts the onus on Applicants and MAHs to ensure that the algorithms, models, datasets, etc. they use are compliant, biopharma companies considering the use of AI/ML should watch this space and keep up to date with developments.