Participants were also invited to provide feedback on the definitions and concepts used in Articles 5 and 6 of the draft AI Act. More specifically, they were asked to assess the clarity and understandability of these concepts. Below, we present some insights from the survey.
Participants were asked to review the definition of the high-risk category. There are two types of high-risk AI systems. Either the AI system is a product, or a safety component of a product, covered by certain Union harmonisation legislation (listed in Annex II, Section A, of the AI Act), and that product is required to undergo a third-party conformity assessment; or the AI system is applied in a specific sector and for a specific purpose (as listed in Annex III).
Regarding the first type of high-risk AI systems, participants found that the harmonisation legislation listed in Annex II is in many cases not specific enough.
Concerning the second type of high-risk AI systems, we assessed the descriptions of the different sectors and purposes. Regarding 'biometric identification and categorisation of natural persons', the description of the purpose (i.e. AI systems intended to be used for 'real-time' and 'post' remote biometric identification of natural persons) still raises many questions.
For example, the use of the term 'remote' creates uncertainty. It is not clear to participants whether this refers to physical distance, and what that would mean for fingerprint-based identification. The description of the sector also leads to confusion: it points to both 'identification' and 'categorisation', while the latter term does not appear in the actual purpose description, which makes no mention of assigning a person to a category.
Concerns also emerged regarding the sector of 'Management and operation of critical infrastructure', where the use of AI systems as safety components in the management and operation of road traffic and in the supply of water, gas, heating and electricity is considered high-risk.
The text defines 'safety component' as follows: 'a component of a product or of a system which fulfils a safety function for that product or system, or the failure or malfunctioning of which endangers the health and safety of persons or property'.
For some participants, the term 'safety component' is too general. They suggested including concrete examples in the preamble or recitals. The definition is also broader than what the term suggests: many parts of a product or system can cause health damage or safety risks if they fail or malfunction, without being regarded as 'safety components'. One participant gave the following example: "e.g. pressure relief valve of a high-pressure cooking pot is a safety component. But the lid shouldn't be categorised as such. However, a sudden crack in the lid (system component) can lead to health risks in case of failure. This makes the lid also a safety component."
According to some participants, the definition of a safety component is also incomplete: in addition to persons and property, the scope of the potential danger should include animals or even flora.
The fifth sector of high-risk applications concerns 'Access to and enjoyment of essential private and public services and benefits'. The second specific purpose under that sector refers to "AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small-scale providers for their own use".
According to several participants, the reference to "AI systems put into service by small-scale providers for their own use" creates ambiguity. Who are small-scale providers? SMEs? When is something small-scale?
The term 'own use' was also considered too vague by some participants. Finally, participants questioned why there should be any difference at all between large and small-scale providers in terms of their own use.