The government of the United Kingdom has published a paper that provides guidance to civil servants on using generative AI, aiming to prevent harm and to ensure such systems are used correctly.
Are you focussing on the ethical, legal and societal aspects of AI and data-driven technologies in your work? Would you like to discuss this with other professionals and learn from peers? Then the learning community ‘AI and ethics in practice’ of the Knowledge Centre Data & Society might be exactly what you are looking for!
A competency model for developing responsible data-driven systems and artificial intelligence.
This resolution includes some additional action plans that build on the efforts already made under the Flemish Policy Plan on Artificial Intelligence.
In this Data-Date we invited professional (digital) ethics consultants and a health innovator to discuss the role of the digital ethicist and how organisations can make a start in implementing ethical technology development processes.
The Spanish government will set up a state agency this year to monitor AI and supervise algorithms.
The 193 member states of UNESCO, including Belgium but also China and Russia, have adopted the first-ever global standard on the ethics of AI.
This draft report lists the many challenges and opportunities related to AI and launches a series of policy proposals.
The Digital Ethics Compass toolkit includes several questions, recommendations and a workshop to learn more about how to design digital products in an ethical way and how to use this as a competitive advantage.
On how to apply ethics at the corporate level
On AI and current and future ethical and legal challenges
On the application of AI in society, in the classroom, in your work environment, ...
On the importance of ethical reflection moments in an innovation process: tools and methodologies
On identifying ethical, legal and societal challenges in your data or AI application
On privacy dilemmas in data collection and use
Practical tips for writing ethics and regulations into a project proposal
About transparency and explainability of AI systems
On identifying ethical, legal and societal challenges in healthcare applications
IEEE has developed a standard that makes it possible to address ethical issues during the design of systems.
How can you avoid replicating societal biases, prejudices and structural disparities in your AI healthcare system? In order to help you do this, the Knowledge Centre Data & Society developed the AI Blindspots healthcare card set.
A role-playing game for companies and governments to learn about the digital barriers people face and to test their own digital services.
It is important to develop technologies in a responsible way, to increase trust and acceptance and to avoid a negative social impact as much as possible. A Digital Ethicist helps to translate this vision into the working practice within an organisation.
How can you take into account possible prejudices and structural inequalities before, during and after the development of your AI application? In order to help you do this, the Knowledge Centre Data & Society developed the AI Blindspots card set.
This brAInfood presents a questionnaire for users of AI technology to determine for themselves whether an AI system is behaving ethically or not.
Would you like to develop products that make the world a bit better? Omidyar offers a tool to help you develop responsible technology. Bonus: you also receive a guide to actually change the way your organisation develops tech.
A good way to get acquainted with the idea of explainable AI is to look at IBM Research's AI Explainability 360 Open Source Toolkit. In addition, it offers multiple algorithms as a possible starting point for working with explainable AI (XAI).
How much data are you willing to share to fight the coronavirus? What is acceptable and what is not? The Knowledge Centre Data & Society investigates.
The Institute for the Future (IFTF) and the Omidyar Network have developed a toolkit to help predict difficult and unwelcome consequences and to prevent these from occurring while you develop products and projects based on AI.
The Data Ethics Canvas provides tools to address ethical questions during the design and development of a project or product based on data, such as an AI application.
The guidance ethics approach aims to start a dialogue about the connection between ethics and technology through a workshop.
With the Building an Algorithm Tool you ask ethical questions throughout the entire AI process: the design, development, testing and implementation of the AI application.
The creators of the Data Ethics Guide want to highlight the issues surrounding ethics and make them manageable for companies that work with AI technology. They introduce different concepts of how ethics can be researched within a company. They do this through questions that help determine the current ethical situation.
Do you want to know how ethical your AI application is? The AI Systems Ethics Self-Assessment Tool helps you assess your application against four ethical principles yourself.
This tool was created by and for data scientists. It consists of various methodologies that can all be applied at the start of a project. The aim is to document ethical issues and the decisions made about them, for example to provide accountability towards stakeholders.
This ethical framework is intended for government agencies, but it can also be useful for other organisations.
The Ethics framework of Machine Intelligence Garage consists of seven principles, each with a set of questions that can lead to a better understanding of how to deal with ethics in your design.
This tool is a self-assessment based on a questionnaire. It can help determine whether your company is ready to build a robust, secure and ethical AI solution.