Generative AI: guidelines on responsible use in research

The European Commission, in collaboration with the ERA Forum stakeholders, has published living guidelines on the responsible use of generative AI in research. These guidelines aim to provide researchers, research organizations, and research funding organizations with recommendations on how to use generative AI in a responsible and ethical manner.

Key recommendations for researchers:

  • Researchers should remain ultimately responsible for the scientific output generated by or with the support of AI tools. They should maintain a critical approach to the output produced by generative AI and be aware of its limitations, such as bias and inaccuracies.
  • Transparency is crucial when using generative AI. Researchers should detail which generative AI tools have been used substantially in their research processes and disclose the limitations of these tools.
  • Privacy, confidentiality, and intellectual property rights should be respected when sharing sensitive or protected information with AI tools.
  • Researchers should adhere to applicable national, EU, and international legislation, particularly regarding intellectual property rights and personal data protection.
  • Continuous learning and training on the proper use of generative AI tools are essential to maximize their benefits.

Key recommendations for research organizations:

  • Research organizations should promote, guide, and support the responsible use of generative AI in research activities. They should provide training and guidelines to ensure compliance with ethical and legal requirements.
  • Active monitoring of the development and use of generative AI systems within organizations is necessary to provide further guidance and recommendations to researchers.
  • These generative AI guidelines should be integrated into the general research guidelines for good research practices and ethics.
  • Research organizations should consider implementing locally hosted or cloud-based generative AI tools that they govern themselves to ensure data protection and confidentiality.

Key recommendations for research funding organizations:

  • Research funding organizations should promote and support the responsible use of generative AI in research. They should design funding instruments that encourage ethical and responsible use of generative AI technologies.
  • Funders should ask applicants to be transparent about their use of generative AI and should facilitate mechanisms for reporting it.
  • Research funding organizations should review their internal processes and ensure the transparent and responsible use of generative AI.

The guidelines are based on key principles of research integrity, trustworthiness in AI, and other frameworks of principles developed by various organizations. The European Code of Conduct for Research Integrity and the Ethics Guidelines for Trustworthy AI by the EU High-Level Expert Group on AI served as important references.

The guidelines are considered "living" and will be updated regularly to keep pace with rapid technological advancements in generative AI. They aim to provide clarity and reassurance to researchers and organizations using generative AI in their research activities.