Course: Trust and AI
Content
The main idea of this course is that trust, transparency, and explainability in artificial intelligence are relational: the question is not how much trust people have in AI in general, but how a particular person stands towards one specific form of AI in the context in which it is applied.
To see trust and transparency in context, we work with different methods and tools to make these implicit factors explicit:
- TTC Labs: Explore interface design for data, and Analyze transparency in context
- Techcards
For explainability we look at:
- TAIM workshop: a workshop on the trustworthiness of the AI system
- The Explanation Goodness Checklist
Learning objective(s) of the course
To identify various forms of transparency;
To try out different tools for increasing trust and transparency;
To ask some basic questions to measure trust with a target audience.
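The last objective, asking basic questions to measure trust, is often operationalised as a short Likert-scale questionnaire. As a minimal sketch (the items, scale, and scoring below are hypothetical illustrations, not the course's actual instrument), responses can be averaged into a single trust score, flipping negatively worded items:

```python
# Minimal sketch of scoring a trust questionnaire with Likert-scale items.
# The items and 5-point scale are hypothetical examples, not the course's
# actual measurement instrument.

def score_trust(responses, reverse_coded=(), scale_max=5):
    """Average Likert responses (1..scale_max) into a single trust score.

    responses: dict mapping item id -> rating
    reverse_coded: item ids with negative wording, whose rating is flipped
    """
    total = 0
    for item, rating in responses.items():
        if item in reverse_coded:
            rating = scale_max + 1 - rating  # flip negatively worded items
        total += rating
    return total / len(responses)

# Example: three hypothetical items; "q3" ("I worry the system makes
# mistakes") is reverse-coded, so a rating of 2 counts as 4.
answers = {"q1": 4, "q2": 5, "q3": 2}
print(round(score_trust(answers, reverse_coded={"q3"}), 2))  # -> 4.33
```

Comparing such scores across user groups (for example, early adopters versus hesitant users) makes the relational nature of trust concrete.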
What we do
A staff member will work with you and your team to figure out how to create trust in your innovation. This includes not only the early adopters (those who don't need to be convinced of your innovation and pick it up immediately), but also those who don't immediately see its added value, or who see the added value but still encounter stumbling blocks (e.g. difficulties in use).
Time and costs
The course lasts one and a half hours, with the possibility of extension depending on its purpose and target group.