Policy monitor

European Medicines Agency – Draft reflection paper on the use of AI in the medicinal product lifecycle

AI and ML tools can support the acquisition, transformation, analysis, and interpretation of data at every stage of the medicinal product lifecycle, from drug discovery through to the post-authorization stage. This offers tremendous benefits but also raises a wide range of challenges. With its reflection paper, the EMA aims to set out guidelines for the responsible development and deployment of AI and ML in this field. It advocates a human-centric approach and emphasizes the need to use AI in a legal and ethical way, with due regard to fundamental rights and freedoms.

What: Policy-orienting document; paper/study

Impact score: 4

For whom: Developers and deployers of AI, regulators, policymakers, academics, pharmaceutical companies

URL: https://www.ema.europa.eu/en/documents/scientific-guideline/draft-reflection-paper-use-artificial-intelligence-ai-medicinal-product-lifecycle_en.pdf

Key takeaway for Flanders:

Considering that the EMA places the obligation on marketing authorization applicants (MAAs) and holders (MAHs) to ensure that the algorithms, ML models, datasets, etc. they use are compliant, Flemish pharmaceutical companies considering the use of AI systems should monitor the regulatory developments in this space. They have the opportunity to express their concerns and comment on the paper until the end of this year via this form. The EMA is expected to deliver a finalized version of the paper some time thereafter.



AI/ML applications have proven to be effective tools in the medicinal product lifecycle, which consists of the drug discovery stage, the development and trial stage, the manufacturing stage, and the authorization and post-authorization stages. In clinical trials, for example, AI/ML systems are aiding in selecting patients “based on certain disease characteristics or other clinical parameters”. Given the rapid development and increasing use of such systems, the European Medicines Agency (EMA) deemed it necessary to provide guidance to ensure “that the full potential of these innovations can be realized for the benefit of patients’ and animals’ health”. After all, these systems not only produce benefits but also engender risks and challenges, such as ‘black box’ algorithms and technical failures.

The reflection paper

An essential principle resonating throughout the paper is that the marketing authorization applicants (MAAs) and holders (MAHs) are responsible for ensuring that all “algorithms, models, datasets and data processing pipelines” used in the medicinal product lifecycle are fit for purpose and in line with the “ethical, technical, scientific, and regulatory standards” described in the GxP standards and the current EMA scientific guidelines.

As such, it is the responsibility of MAAs and MAHs to, among other requirements:

  1. “validate, monitor and document” ML model performance. Active measures should be taken to prevent the integration of bias in the model;
  2. “mitigate risks related to all algorithms and models used”. The EMA promotes a risk-based approach for the “development, deployment and performance monitoring” of AI/ML tools, the degree of risk being determined by a number of factors, including: the tool itself, the context and stage in which it is being used, and the amount of influence it has;
  3. ensure that standard operating procedures “promote a development practice that favors model generalizability and robustness”;
  4. keep the required documentation and logs for an external assessment of the development practices; and
  5. ensure that all personal data are processed in accordance with Union data protection law.

The EMA also recommends taking into account the “Ethics Guidelines for Trustworthy AI” drafted by the High-Level Expert Group on AI, set up by the European Commission. The Guidelines have been translated into a practical checklist called the “Assessment List for Trustworthy AI” (ALTAI), which you can access here. We have also published a related guide here.

Furthermore, MAAs should carry out a regulatory impact assessment and risk analysis. The higher the regulatory impact or risk of using the AI system, the sooner the applicant ought to contact the relevant regulator(s). For example, if an AI system is used to develop, evaluate, or monitor a medicine and is expected to impact the “benefit-risk balance” of that medicine, the EMA encourages developers to seek “early regulatory support”, such as scientific advice.

What’s next?

The EMA has invited all interested stakeholders to participate in the joint HMA/EMA workshop scheduled for 20-21 November (registration deadline: 3 November) (link) and to share their comments on the draft (link). This public consultation period will last until 31 December 2023, after which the paper will be finalized. Additionally, the EMA intends to provide more guidance on risk management and plans to update existing guidelines to address the specific issues posed by AI/ML.