[Header image: eight graduation students, seen from behind in gowns with bright blue hoods, walk towards an old white university building in the distance; the building's upper half carries faint text from classic literature (the humanities), its lower section is embossed with mathematical formulas (the sciences), the middle layer of the image is heavily pixelated, and two stone, statue-like hands stretch the building apart from the left.]

03.12.2025

(EN) Digital Omnibus & GDPR: a practical take on the changes

[This blog post is only available in English.]

There has been a lot of fuss around the Digital Omnibus and its proposed amendments to the GDPR. The package introduces multiple changes, but what do they actually mean in practice for companies? This first analysis considers the proposed changes from a practical perspective. Do keep in mind that this remains only a Commission proposal, which will now enter the legislative process. Both the European Parliament and the Council are expected to introduce their own amendments, meaning that it will likely take some time before a final version is adopted.

Definition of personal data

The proposal explicitly incorporates the relative approach to the definition of personal data following the EDPS v. SRB judgment. In practice, the long-standing debate[1] on what constitutes personal data is unlikely to disappear. It will therefore remain key for businesses to document whether they have, or may have, the means to identify the data subject, for example through the combination of different (external) data sets. Useful guidance can be found, for example, in the ICO’s guidance on pseudonymisation. The Commission would also adopt guidance defining in which cases pseudonymised data constitutes personal data. When carrying out this assessment, companies should keep in mind that the notion of personal data is to be interpreted broadly, which was a deliberate choice of the European legislator based on the fundamental right to the protection of personal data.[2]

Further processing for archiving in the public interest, scientific or historical research, or statistical purposes

The proposal deems further processing for archiving in the public interest, scientific or historical research, or statistical purposes automatically compatible with the initial purpose, eliminating the need for a compatibility assessment. In practice, compatibility assessments for research and statistical further processing were already highly standardised. Therefore, the impact of the proposed change is likely to be limited for companies. Moreover, the elements underlying the compatibility assessment (e.g., no processing of sensitive data, pseudonymisation) remain relevant for compliance with other GDPR obligations, meaning companies must continue to pay close attention to them.

Processing in the context of the development and operation of AI

The proposal explicitly recognises legitimate interest as a legal basis for processing carried out for the development and operation of AI systems. In practice, companies already relied on legitimate interest for such processing activities. In that regard, it is welcome that the proposal would offer clearer guidance for doing so by including appropriate safeguards (e.g., enhanced transparency, data minimisation). While these safeguards stem from principles that already had to be taken into account under the GDPR, the proposal also appears to introduce an absolute right to object, allowing data subjects to oppose such processing without the controller being able to balance interests. This will likely have an impact on how companies design and develop AI systems.

Processing of sensitive data

First, the proposal introduces an exception allowing the processing of sensitive data for developing and operating an AI system. Controllers must implement organisational and technical measures to avoid processing sensitive data, and any sensitive data that is nevertheless processed must be removed. If removal requires disproportionate effort, the controller must at least ensure that the data cannot be used to generate outputs or be disclosed. This pragmatic approach recognises that sensitive data cannot always be fully excluded in AI development, while at the same time requiring AI providers to ensure that their systems can “unlearn” and delete such data from the outset. In practice, this may facilitate AI training for companies, but it also underscores the importance of privacy by design. Moreover, companies should keep in mind that ‘disproportionate effort’ elsewhere in the GDPR is interpreted strictly, and the mere fact that removal would be time-consuming or burdensome is unlikely to suffice.[3]

Second, the proposal permits the processing of biometric data needed to verify an individual’s identity when the biometric data or tool remains under the individual’s sole control, offering companies more flexibility where explicit consent would previously have been required.

Exceptions to the information obligation under Article 13 GDPR (when data is collected directly)

Article 13.4 GDPR currently exempts controllers from providing information where the data subject already has it, though this is rarely applied in practice. The proposal revises this so that the information obligation does not apply where there are reasonable grounds to assume the data subject already knows the controller’s identity and the purposes of processing. This is subject to strict safeguards: it only applies where data is collected within a clear controller-data subject relationship, is not data-intensive, involves no disclosures to recipients or transfers outside the EEA, and does not pose a high risk to individuals’ rights. In practice, this could be useful for companies when data is collected directly through a website contact form, provided the controller’s identity and the purposes of processing are clear. 

Furthermore, the proposal introduces a new exception to the information obligation under Article 13 GDPR for research purposes, where providing information is impossible, requires disproportionate effort, or would make the research impossible or seriously undermine its objectives. A nearly identical exception already exists in Article 14.5, b) GDPR, which instead applies when data is collected indirectly. In practice, however, it may be difficult to justify the use of this exception when data is obtained directly from individuals. As a result, the practical impact of this change is likely to remain limited.

Abuse of rights

For the right of access, the proposal explicitly allows controllers to refuse requests where data subjects abuse their GDPR rights. In practice, abuse has already frequently been invoked before courts and national data protection authorities, although it has rarely been accepted. The Court of Justice ruled in FT (C-307/22) that an access request cannot be refused if it is exercised, even partially, to genuinely obtain information about the processing of personal data or to verify its lawful use. Therefore, only requests pursued solely for purposes unrelated to data protection rights may qualify as an abuse of rights (e.g. the sole intention to initiate proceedings). Because data subjects can easily assert a link to understanding the processing, the practical impact is limited, and expressly adding this general principle of law to Article 12 GDPR is unlikely to change this.

Automated individual decision-making

With regard to automated individual decision-making, the proposal moves away from a prohibition-based approach with exceptions towards defining the scenarios under which such processing is permitted. In practice, the impact for companies appears limited, as the circumstances in which automated decision-making is allowed remain unchanged. Importantly, the proposal also clarifies that when automated decision-making is “necessary” for entering into or performing a contract, this does not require the decision to be possible only through automated means. Requiring that would make it nearly impossible for companies to rely on this ground.

Data breaches

The proposal also simplifies data breach rules with a more pragmatic approach. First, the threshold for notifying a data breach is raised from “likely to result in a risk” to “likely to result in a high risk”, reflecting how national data protection authorities in practice already tend to focus on data breaches likely to result in a high risk. Furthermore, the notification deadline is extended from 72 to 96 hours, making compliance more workable for companies, particularly over weekends or public holidays. Finally, an EU-wide notification template will further harmonise cross-border reporting, and a single entry point for notifications will be established, also serving other digital regulations (e.g. eIDAS, NIS2 and DORA).

DPIA

In the context of data protection impact assessments, the proposal also aims to harmonise by introducing EU-wide lists of processing activities that either require or do not require a DPIA. In practice, this is expected to simplify compliance for international companies.

Conclusion

To conclude, the Omnibus amendments to the GDPR largely appear to build on existing case law, seek greater harmonisation across the EU, and aim to make the framework more workable in line with current practice and regulatory guidance by national and European data protection authorities. The additional safeguards introduced in this context are generally welcome. However, it remains debatable whether the GDPR text is the right place to codify all these elements. Further refinement through case law and regulatory guidance, where specific circumstances can be taken into account, may ultimately provide a more meaningful and nuanced approach than generalisation in the legislative text. Would it not have been more effective to retain the existing text and continue refining its interpretation through guidance and case law?

Footnotes

[1] It can be argued that the ‘relative’ approach towards the notion of personal data finds support in previous case law of the Court of Justice as well. The Court of Justice has consistently ruled that it should be examined whether an entity has or may have the means reasonably likely to be used to identify the data subject. In Breyer, for example, the Court of Justice concluded that the online media services provider had the means reasonably likely to be used to identify the data subject, pointing to the legal channels available in the event of cyber attacks for accessing additional information enabling identification, held by the internet service provider (CJEU 19 October 2016, Case C-582/14, Breyer, para 47). In IAB Europe, the Court of Justice concluded that IAB Europe had the means reasonably likely to be used to identify the data subject, because its members, who hold the additional information enabling identification, are required to provide IAB Europe with all relevant information upon request (CJEU 7 March 2024, Case C-604/22, IAB Europe, para 48).

[2] European Commission, ‘A comprehensive approach on personal data protection in the European Union’ (Communication), COM (2010) 609 final, p. 5.

[3] See, for example, in the context of the right of access: EDPB, ‘Guidelines on data subject rights - Right of access’, 01/2022, 28 March 2023, para 188.

Author

Julie Mannekens

Legal researcher at the Centre for IT and IP Law, KU Leuven