Report

Insights on social sustainability from the learning community meet-up

10.01.2024

How can new digital and AI innovations be implemented in the workplace in a smooth and socially responsible way? How do you stay focused on the people who will work with the new technology? And what do you do when new technologies spark social friction between colleagues?

When digital (AI) innovations are introduced in the workplace, the technical and business model aspects of the innovation are often the sole or main focus. But a technological innovation needs to be embedded and accepted more broadly by employees in order to be successful. Many workplace innovations require new skills, competences and processes. They depend upon the engagement of competent employees in order to reach their potential. How do we ensure this social sustainability of innovations in the workplace?

We discussed this topic during the latest gathering of the learning community 'AI and ethics in practice' in Brussels, with testimonials by Dieter Carels (Minds ALike Consulting), Shirley Elprama (imec-SMIT-VUB) and Christophe Benoît (Knowledge Center AI, Erasmus Hogeschool Brussel).

Our 3 main insights

  • You cannot simply drop a new technology into the workplace and walk away: you need to guide the process, manage expectations and create guidelines and rules for use.

  • Every digital change is essentially a human change: people have to change their habits, and various misconceptions can stand in the way of this happening.

  • Altruism can be a driving factor for innovation: if you can get the right people involved, they can inspire bottom-up change and adoption of new technologies.

Pictures by Anaïs Ntabundi

The meet-up

First, Dieter Carels talked about different ‘syndromes’ or misconceptions that often pop up when new technologies are introduced on the work floor, and what can be done about them. An example is the ‘dog on a leash’ syndrome: not everyone adopts new technologies at the same pace. Rather than letting this turn into a tug of war, it is important to organise fun, open discovery sessions in which both sides can inspire each other.

Shirley Elprama talked about a project in which she researched the acceptance of exoskeletons at different production companies. She learned that introducing such a technology on the work floor has many different aspects, such as managing expectations (‘No, you will not be like the Hulk with an exoskeleton!’) and dealing with practical obstacles that complicate its use.

Finally, Christophe Benoît shared a discovery he made about how certain kinds of people can inspire bottom-up adoption of technology, because their social networks are built on altruism: they want to help each other. Excitement about a new technology can thus spread throughout the entire network.

Afterwards, we split up into three groups to discuss related challenges that the participants put forward. We talked about:

  • how to deal with two clearly opposed camps regarding ChatGPT within an organisation,

  • how to move from different ecosystems within an organisation, each with its own way of working, towards one general overarching way of working,

  • and how to evaluate the implementation of guidelines on the use of AI, and when those guidelines need to be revised based on actual usage and technological advances.

This gathering of the learning community was a collaboration between the Knowledge Centre Data & Society and the SustAIn.Brussels initiative, which is coordinated by Sirris, Agoria, BeCentral, VUB and ULB.


A learning community, you say?

This meet-up was organised in the context of the learning community 'AI and ethics in practice'. In this network we encourage the sharing of knowledge, experiences, best practices and questions among ethics officers, digital ethicists, AI ethics officers, and other professionals who are directly or indirectly concerned with the subject.

The learning community focuses specifically on how ethical and social issues play out in the development, implementation and use of data and AI applications. It is therefore aimed at professionals who deal with the ethical and social issues surrounding data & AI in their daily work.

Join one of the next (online) learning community meetings in 2024!