Policy Monitor

European Commission - Living guidelines on the responsible use of generative AI in research

The European Research Area Forum (a platform under the authority of the European Commission and composed of European countries and research and innovation stakeholders) recognized the complexity arising from various institutions issuing guidance on the appropriate use of generative AI tools and took the initiative to develop unified guidelines to ensure their effective and ethical utilization in research. The new "Living Guidelines" aim to offer a single, clear resource for research stakeholders across Europe. They emphasize the responsible use of generative AI to ensure research integrity and promote transparency throughout the research process. The key principles focus on reliability (ensuring quality data and results), honesty (disclosing AI use and conducting research fairly), respect (considering societal impacts and ethical issues), and accountability (taking ownership of research outputs).

What: Policy orienting document

Impact score: 3

For whom: policymakers, researchers, academic institutions, businesses

URL: https://research-and-innovatio...

The principles of the new guidelines are based on existing frameworks such as the European Code of Conduct for Research Integrity and the Guidelines on Trustworthy Artificial Intelligence. The recommendations are divided into three subgroups: researchers, research organisations, and funding organisations.

Researchers

Researchers using generative AI should maintain ultimate responsibility for the scientific output, ensuring integrity and accountability while recognizing the limitations of AI tools, such as bias and inaccuracies. Generative AI should not be used to produce content that falsifies, alters, or manipulates original research. Transparency is key: researchers should openly detail the generative AI tools used and their impact on the research process, and provide input-output data when possible. It is also recommended that researchers discuss the limitations of generative AI tools, such as possible biases, along with measures to counteract them. The guidelines emphasize that attention must be paid to privacy, confidentiality, and intellectual property rights, safeguarding sensitive information from unauthorized use and adhering to relevant legislation, particularly regarding personal data protection and intellectual property.

Continuous learning and responsible usage are also emphasized: researchers should stay informed about best practices and refrain from substantial use of generative AI in sensitive activities that could affect other researchers or organisations (for example, peer review or the evaluation of research proposals), to mitigate risks such as unfair treatment or exposure of unpublished work. However, research shows that the use of generative AI in peer review is already common practice.

Research organisations

To ensure responsible usage of generative AI in research, research organizations should actively promote, guide, and support its responsible application by providing training on verifying outputs, maintaining privacy, addressing bias, and protecting intellectual property. They should also monitor the development and utilization of generative AI systems, analyzing limitations, providing feedback to researchers, and sharing insights with the scientific community to prevent misuse. By integrating generative AI guidelines into general research practices and ethics, organizations should openly discuss their implementation with staff and stakeholders, applying them whenever possible and supplementing them with additional recommendations as needed. Additionally, implementing locally hosted or cloud-based generative AI tools under their own governance enhances data protection and confidentiality, contingent on robust cybersecurity measures, particularly for internet-connected systems.

Funding organisations

Research funding organizations, since they operate in different contexts and follow different mandates and regulations, should tailor their measures and practices regarding generative AI to their specific contexts and objectives. They can promote and support responsible AI use by designing funding instruments that align with ethical standards and legal requirements, while also encouraging researchers to adhere to these guidelines. Internally, these organizations should lead by example, ensuring transparent and responsible use of generative AI in their processes, particularly in assessment and evaluation activities, while upholding confidentiality and fairness. Careful consideration should be given to the selection of AI tools, prioritizing those that meet quality, transparency, data protection, and intellectual property standards.

Furthermore, funding organizations should request transparency from applicants regarding their use of generative AI, allowing them to declare and explain their use of such tools in their research activities. Additionally, these organizations should actively monitor the evolving landscape of generative AI, promoting and funding training programs to foster ethical and responsible AI use in scientific research.

Because of the dynamic nature of the AI and technology policy landscape, the European Research Area Forum intends to update the guidelines at regular intervals. To support this objective, stakeholders are being offered an opportunity to provide feedback on the current set of guidelines.