Report

Insights on guidelines for generative AI from the learning community meet-up

06.11.2024

Generative AI (GenAI) tools like Microsoft Copilot, ChatGPT and DALL·E offer organisations a lot of new opportunities. However, their use also raises important questions about responsible practices. How should we handle the data we input? What are the implications for intellectual property? And can we trust the output of these tools? There is a role for organisations to govern and guide the use of GenAI, but what’s the best way to do this?

On 17 September, the ‘AI and ethics in practice’ learning community met to discuss best practices and challenges regarding the governance of GenAI. The meet-up covered guidelines for use, with insights provided by Kevin Macnish (Sopra Steria) and Hana Van Elst (GPT Academy, UCLL – University of Applied Sciences Leuven-Limburg).

AI governance and generative AI

Kevin Macnish started the meet-up by sharing how Sopra Steria approaches AI governance and the specific challenges of GenAI. AI governance is often thought of in terms of principles: around 80% of organisations have some form of responsible AI principles, but these are often not broken down into concrete processes and practices.

Kevin Macnish’s slide on the possible ethical harms of GenAI. Reference: Weidinger, L., et al. 2022. Taxonomy of Risks posed by Language Models. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22). Association for Computing Machinery, New York, NY, USA, 214–229. https://doi.org/10.1145/3531146.3533088

With GenAI, ethical challenges such as hallucinations, bias and environmental impact are more pronounced. GenAI is highly dynamic and can be used for a wide variety of purposes, which makes it even more difficult to predict possible ethical harms, especially when it is implemented at scale. Good governance is therefore crucial to help employees use GenAI responsibly.

At Sopra Steria, AI governance is set up so that anyone looking to buy, create or use an AI system needs to fill in a short self-assessment form, which is then discussed by the AI governance board. The board comprises senior employees in roles such as ethics officer and data protection officer, and its seniority ensures that its decisions are followed and implemented. Sopra Steria also aims to set up an AI advisory group, in which external stakeholders will be able to share their views on the AI decisions being made.

During the meet-up, the discussion focused on how guidelines can be a part of AI governance. One major challenge is the granularity of the guidelines: they need to be concrete but still concise. Practical use cases can help illustrate how you expect employees to use (or not use) GenAI tools.

Generative AI for SMEs

Next, Hana Van Elst introduced the GPT Academy and its HumAIn Resources project. GPT Academy is a UCLL project and GenAI partner for SMEs. The HumAIn Resources project focuses on GenAI use in HR.

Reference: image created with Midjourney

Hana pointed out that, while SMEs are mainly focused on their core business, they are still eager to explore opportunities with GenAI. However, there are still many uncertainties, such as which tools to use, how to create guidelines and what the legal implications are. This makes it difficult for them to develop appropriate policies or governance strategies. As a result, SMEs often abandon the use of GenAI altogether. They then risk missing out on competitive advantages, and employees may end up using the tools secretly – commonly referred to as ‘shadow ICT’ – further increasing risks to the organisation.

The GPT Academy offers support through courses and workshops, but it is still exploring how best to help SMEs get the most out of GenAI opportunities. The group discussed how guidelines might aid GenAI adoption, and how to compose them.

Some conclusions:

  • For guidelines, you can start from your organisation’s existing ethical/sustainable/brand priorities.

  • Develop a high-level vision that your organisation can use as a starting point for creating your own policy.

  • Organise lunch sessions about GenAI to identify and discuss current uses.

  • Create a simple and practical manual with ‘GenAI do’s & don’ts’ that you can build up over time.

  • Build on or borrow from existing guidelines – from other organisations or sectors, or on related subjects (such as GDPR).

Continuing the discussion on generative AI…

For many organisations, GenAI remains largely uncharted territory, challenging users to rethink daily practices and work processes. During the next learning community meet-up on 10 December 2024, we’ll be discussing the ecological sustainability of GenAI. How can we reap the benefits of GenAI in the long term, while mitigating the negative effects as much as possible? Two experts will share their views and best practices, and we also invite you to share your knowledge, experiences and questions during the discussions.

This meeting is organised together with the SustAIn.Brussels initiative and will be a live event in Brussels, Belgium. More information and registration on our website!

The Knowledge Centre Data & Society, together with the GPT Academy and NXTGN, is also working on adapting the AI Blindspots cards for GenAI. Do you want to be involved or stay up to date? Let us know via email.