REPORT

Vision texts on the use of AI: do’s, don’ts, and next steps

26.02.2024

Vision documents, lists of principles, ethical frameworks… Many of us contribute to these documents, but what purpose do they serve? Are they essential to create, and are they worth the effort? And how can we translate these theoretical constructs into actionable steps? These questions took centre stage during the ‘AI and ethics in practice’ learning community meet-up on February 20, 2024.

In this article, we summarise the do’s and don’ts of developing such documents, and how you can go from a vision document to tangible tools and actionable steps that align with your organisation’s daily operations.

The insights shared in this article stem from four presentations and the subsequent discussions with our community members.

The presenters were:

  • Rob Heyman (Knowledge Centre Data & Society) shared four insights on developing a vision text on the development or use of AI.
  • Katrien Alen and Kevin Polley (Kenniscentrum Digisprong) explained how the vision text on using AI in education was developed.
  • Hilde Vandenhoudt (LiCalab) gave more information on the principles for caring technology, and how they (will) translate this document into practical insights and hands-on tools.
  • Annelies Vanderhoydonks (AI Competence Centre, Digitaal Vlaanderen) talked about the creation of the six principles on the use of AI in government.

Best practices for developing vision documents on responsible AI

Do’s:

  1. Clearly define the purpose: State the goal of your vision document and of its principles or guidelines. Why are you creating this document, and what outcomes do you aim to achieve? If this information is sensitive, communicate these purposes at least internally.
  2. Allocate sufficient time for refinement: Engage in iterative revisions, actively incorporating stakeholder feedback. Ensure that the final text resonates with their perspectives and aligns with organisational objectives.
  3. Tailor your vision text to stakeholders: Recognise that lengthy texts can overwhelm readers. Instead, consider crafting succinct articles that clarify each principle. Focus on providing actionable insights that resonate with your specific audience.
  4. Gather insights from your sector:
    • Conduct interviews with various stakeholders, both direct and indirect.
    • Dive into relevant reports.
    • Establish an advisory board comprising sector experts.
    • Facilitate co-creation sessions to refine different aspects of your vision text collaboratively.
  5. For sector-wide vision texts, start by defining what AI means for your sector. From there, develop principles and guidelines that align with your industry’s unique context.
  6. Leverage future thinking: e.g., envision the AI landscape in 2030 and identify which principles are essential for immediate inclusion in your vision document.

Don’ts:

  1. Avoid generic vision documents. Your organisation’s vision text should stand out from the crowd. Refrain from creating principles or guidelines that lack practicality or uniqueness.
  2. Move beyond words. Don’t settle for a vision document that will never be implemented. Define clear actions, and identify the actors who will carry them out.
  3. Clarify responsibilities: Rather than burdening everyone, assign clear responsibilities. Ensure that each team member understands their role in implementing responsible AI.

Moving from vision to action: practical steps for implementing your vision document

Having a vision document is a critical initial step, but its true impact lies in how it translates into practical actions. How can an organisation or a sector ensure this vision document becomes part of their daily activities and philosophy? In our meet-up, we explored several next steps:

  1. Translate to a common language: First, bridge the gap between vision statements and everyday understanding by translating the vision document into clear language accessible to all stakeholders. Only when everyone understands the vision can alignment be achieved.
  2. Fund concrete projects: To make principles actionable, consider funding projects that directly integrate them into organisational processes. Alternatively, incentivise companies and organisations to embed these principles within their innovation cycles. Tangible projects bring principles to life.
  3. Seek inspiration from real cases: Learn from companies and organisations that have successfully embraced ethical principles. Investigate their approaches, actions, and outcomes. Inspirational cases provide valuable insights and practical guidance.
  4. Define specific actions and tips: Move beyond theory by defining particular actions. Create guidelines, checklists, and practical tips that operationalise the vision. These actionable steps empower employees to align their daily work with the overarching principles.
  5. Establish a governance structure: A vision document should not stop at good intentions. Set up a governance structure to evaluate its implementation. Recognise that different teams or organisations may adopt principles at varying speeds. Flexibility allows for effective integration.
  6. Address AI ethics fatigue: As AI ethics gains prominence, guard against fatigue. Link guidelines for responsible AI development and use to broader ethical principles within your organisation. Harmonise these frameworks to avoid overwhelming stakeholders.
  7. Establish a consortium or oversight entity: Form a consortium or dedicated entity responsible for overseeing the implementation of the vision document. This group should take ownership of the principles and evaluate the organisation’s adherence to them. Their role includes monitoring progress, addressing challenges, and ensuring alignment with the vision.
  8. Appoint AI ambassadors and frontrunners: Identify individuals within the organisation who are passionate about AI and understand its potential. Appoint these ambassadors or frontrunners as advocates for responsible AI adoption. They can scan their teams for AI opportunities aligned with the vision, take charge of integrating AI systems in line with ethical principles, and educate colleagues about responsible AI practices.

During our discussions, we emphasised the importance of monitoring ethical aspects and conducting audits in AI projects. The Knowledge Centre Data & Society developed an ethical logbook in which you can keep track of your ethical considerations during an AI project. The logbook not only records the considerations and decisions made in an AI project, but also serves as a communication tool within or between teams for discussing ethics and AI. Before auditing AI projects, a crucial step is to imagine the possible impact of an AI system on (in)direct stakeholders. Tools such as the AI Blindspots and the Tarot Cards of Tech help assess these risks in the design phase, before the system is developed. Other tools can be found on the tools page of the Knowledge Centre Data & Society website.

Our next meeting takes place at the Flanders AI Forum on June 11, 2024 in Ghent. You can already register your interest in this event via this form, and we’ll keep you updated on this community meet-up at the Flanders AI Forum. More info.

If you are interested in joining this learning community, contact info@data-en-maatschappij.ai or register for one of our future meet-ups, which you can find on our event page.

Downloads

Download the presentation of Annelies Vanderhoydonks (AI Competence Centre, Digitaal Vlaanderen).