The International Coalition of Medicines Regulatory Authorities (“ICMRA”) has published a report on the use of artificial intelligence (“AI”) to develop medicines (the “AI Report”) that provides a series of recommendations on how regulators and stakeholders can address challenges posed by AI.  The ICMRA notes that there are numerous opportunities to apply AI to medicines development, but that AI poses a number of challenges to existing regulatory frameworks.  The AI Report discusses these opportunities and challenges in detail based on several case studies, and provides a set of recommendations for implementation by the ICMRA and its member authorities, which include the European Medicines Agency (the “EMA”), the USA’s Food and Drug Administration, and the World Health Organisation.  Based on the AI Report, we expect to see an increased focus on adapting regulatory frameworks to deal with AI products going forward, both at an international and national level.

ICMRA and the AI Report

The ICMRA is an informal group of leaders of medicines regulatory authorities from around the world.  The EMA notes it was set up to provide “strategic coordination, advocacy and leadership,” with a view to adapting regulatory frameworks to “facilitate safe and timely access to innovative medicines.”  Through horizon scanning, the ICMRA identified AI as an area that challenges existing regulatory frameworks.[1]  Members of the ICMRA[2] therefore used case studies to “stress-test” their regulatory frameworks.  The AI Report sets out the case studies and the ICMRA’s recommendations on how to adapt these frameworks to better cope with the challenges posed by AI.

What is Artificial Intelligence?

The AI Report explains that AI is a broad term to “encompass iterative, ‘learning’ algorithms that utilise (big) data and high computing power to make interpretations, predictions or decisions in an autonomous or semi-autonomous fashion that could be seen to imitate intelligent behaviour”, while noting that there is currently no single and agreed definition of AI.  AI systems can be standalone software, or embedded in hardware devices.  The AI Report states that “prevalent methods” used for AI systems include machine learning, deep learning, natural language processing and robotic process automation.

Use of AI in the Development of Medicines

The AI Report explains that those in the public and private sectors are increasingly using AI in the development of medicinal products and across all stages of a medicine’s lifecycle.  The AI Report identifies uses in: (i) target profile identification and validation; (ii) compound screening and lead identification; (iii) pre-clinical and clinical development (including “annotation and analysis of clinical data in trials”); (iv) clinical use optimisation; (v) regulatory applications (e.g. for dossier preparations); and (vi) post-marketing requirements (e.g. pharmacovigilance and adverse event reporting).

However, the AI Report also highlights important limitations with the use of AI (e.g., social bias leading to discrimination and/or misguided learning).  Additionally, with “self-learning” AI systems it can be difficult to predict outputs or to describe to users, with sufficient transparency, how the AI will function.  This can pose risks to patients.

Stress-Testing Regulatory Systems

The ICMRA members ran two hypothetical case studies to stress-test how their regulatory systems would cope with AI products and uses.  These case studies involved:

(1)       the use of AI in clinical medicine development (in this scenario using a Central Nervous System App or “CNS App”).  The CNS App uses data from electronic health records to build upon existing gold-standard diagnostic tools based on a variety of neurological variables, e.g., speed, movement, memory, etc.  It applies AI to identify associations between the variables, disease progression and treatment.  A company would use it to select patients for clinical trials and to monitor their progression.  Post-approval, the company would use the CNS App to monitor effectiveness, adherence and response; and

(2)       use of AI for pharmacovigilance literature searches and signal management.  A company would deploy machine learning methods based on an existing bibliographic and signal training dataset, in the hope that it would improve the sensitivity and accuracy of signal detection and literature searches.

The AI Report sets out the results of these case studies, “elucidating the regulatory challenges and use classifications” by region or regulator.  The results identify areas that may pose challenges or require consideration, including:

  • product classification (noting that there were limitations to the current EU medical device classification system);
  • obtaining early advice from regulators;
  • clinical development (noting that, in the EU, the development of medical devices and of medicinal products falls under different regulators);
  • obtaining scientific advice from regulators (advising on the utility of the AI approach and validity of the AI-generated data);
  • assessment for marketing authorisations (highlighting consideration needs to be given to hardware, firmware, software, governance of quality systems, data security and privacy, and data management of old and new data); and
  • post-approval matters (noting consideration needs to be given to change management plans, hardware and firmware, software, governance and auditing of data sets, updates, and post-marketing risk surveillance and vigilance).

Recommendations

The AI Report sets out, in Section 4, detailed recommendations for implementation by the ICMRA and its members.  These recommendations cover “General Recommendations for AI,” “Recommendations related to case study AI in Medicine Development, Clinical Trials and Use – Central Nervous System App using AI,” and “Recommendations related to case study AI in pharmacovigilance.”  They also include some specific “Recommendations for the EU.”

The EMA identifies three of the main findings and recommendations in its news update on the AI Report:

  • Regulators may need to apply a risk-based approach to assessing and regulating AI, which could be informed through exchange and collaboration in ICMRA;
  • Sponsors, developers and pharmaceutical companies should establish strengthened governance structures to oversee algorithms and AI deployments that are closely linked to the benefit/risk of a medicinal product;
  • Regulatory guidelines for AI development, validation and use with medicinal products should be developed in areas such as data provenance, reliability, transparency and understandability, pharmacovigilance, and real-world monitoring of patient functioning.

Most regulators and stakeholders are already aware of the challenges posed by AI products and have struggled with the lack of AI-specific guidance available.  As such, a number of the ICMRA members are already carrying out activities that concern AI (e.g., the European Commission recently published a legislative proposal for a Regulation on Artificial Intelligence—see our previous blog post here).

Following the AI Report, we expect an increased focus on developing regulatory frameworks (either by updating legislation or through developing guidance) to cater to the specific challenges posed by AI in the context of medicines development.

[1] The AI Report states that through horizon scanning members have identified three “challenging topics” so far.  These are 3D printing, gene editing and AI.

[2] The following members of the ICMRA were involved in the AI Report: the Italian Medicines Agency (AIFA), the Danish Medicines Agency (DKMA), the European Medicines Agency (EMA), the USA’s Food and Drug Administration (FDA), Health Canada (HC), the Irish Health Products Regulatory Authority (HPRA), Swissmedic and the World Health Organisation (WHO).

Grant Castle

Grant Castle is a partner in London and Dublin practicing in the areas of EU, UK and Irish life sciences regulatory law. He supports innovative pharmaceutical, biotech, medical device and diagnostics manufacturers on regulatory, compliance, legislative, policy, market access and public law litigation matters in the EU, UK, and Irish Courts.

He is one of the Co-chairs of Covington’s Life Sciences Industry Group and is Head of Covington’s European Life Sciences Regulatory Practice.

Grant regularly advises on:

  • EU and UK regulatory pathways to market for pharmaceuticals and medical devices, including in vitro diagnostics and on associated product life cycle management;
  • Pharmaceutical GxPs, including those governing pharmacovigilance, manufacturing, the supply chain and both clinical and non-clinical research;
  • Medical device CE and UKCA marking, quality systems, device vigilance and rules governing clinical investigations and performance evaluations of medical devices and in vitro diagnostics;
  • Advertising and promotion of both pharmaceuticals and medical devices; and
  • Pricing, reimbursement and market access for both pharmaceuticals and medical devices.

Grant also handles procedural matters before EU, UK and Irish regulators and UK and Irish market access bodies, where necessary bringing judicial reviews for his life sciences clients before the EU, UK and Irish Courts.

Chambers UK has ranked Grant in Band 1 for Life Sciences Regulatory for the last 18 years. He is recognized by Chambers UK, Life Sciences as “excellent,” “a knowledgeable lawyer with a strong presence in the industry,” who provides “absolutely first-rate regulatory advice,” according to sources, who also describe him as “one of the key players in that area,” whilst Chambers Global sources report that “he worked in the sector for many years, and has a thorough understanding of how the industry ticks.” He is praised by clients for his “absolutely first-rate” European regulatory practice. Legal 500 UK notes that he is “highly competent in understanding legal and technical biological issues.”

Ellie Handy

Working with companies in the life sciences and technology sectors, Ellie Handy focuses on EU, Irish, and UK life sciences regulatory and commercial matters.

Ellie advises clients on regulatory issues including classification, biologics, orphans, paediatrics, GxP, market and data exclusivity, clinical research, labelling and promotion, reporting obligations, medical devices, and digital health. Ellie also advises companies in the food, cosmetic and consumer products sectors regarding regulatory compliance and borderline issues. Ellie provides advice in relation to corporate transactions and restructuring, in particular performing regulatory due diligence.

Ellie represents and works with a wide range of clients working in the life sciences and technology sectors on both contentious and non-contentious regulatory matters.

Ellie’s pro bono work includes assisting charities.  In addition to her role at Covington, Ellie spent three years working in life sciences regulatory practice in London.

Sam Jungyun Choi

Sam Jungyun Choi is an associate in the technology regulatory group in the London office. Her practice focuses on European data protection law and new policies and legislation relating to innovative technologies such as artificial intelligence, online platforms, digital health products and autonomous vehicles. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Sam advises leading technology, software and life sciences companies on a wide range of matters relating to data protection and cybersecurity issues. Her work in this area has involved advising global companies on compliance with European data protection legislation, such as the General Data Protection Regulation (GDPR), the UK Data Protection Act, the ePrivacy Directive, and related EU and global legislation. She also advises on a variety of policy developments in Europe, including providing strategic advice on EU and national initiatives relating to artificial intelligence, data sharing, digital health, and online platforms.