
The future of GDPR? Focus on Automated Decisions

The Digital Omnibus Proposal.  The so-called “Digital Omnibus” regulation proposal promises to lighten the burden of compliance with data protection legislation. Its aim is “to ensure that the rules continue to be fit for supporting innovation and growth”. Europe is not giving up on privacy, but it is willing to simplify it.

EDPB and EDPS Chime In.  The proposal, published in November 2025, has recently been the subject of a joint opinion by the European Data Protection Board and the European Data Protection Supervisor. While these two bodies are apparently in favor of facilitating GDPR compliance and strengthening consistency in its application, they express significant concerns regarding the impact of the changes on the fundamental rights and freedoms of individuals. They also fear that the proposal will create additional legal uncertainty.

The GDPR of the Future.  Gitti and Partners is embarking on a series of blog posts to explain what the GDPR may look like if the Digital Omnibus proposal becomes law. Today we focus on changes to the provision on automated decisions.

Automated Decisions: from Right to Prohibition.  Article 22 of the current GDPR regulates automated individual decision-making. The current language of the provision frames the rule as a “right”: a data subject is entitled not to be subject to a decision based solely on automated processing, unless certain conditions apply. The new proposal, instead, shapes a similar rule as a prohibition.

Conditions for Automated Decisions.  The new proposal reads: “1. A decision which produces legal effects for a data subject or similarly significantly affects him or her may be based solely on automated processing, including profiling, only where that decision: (a) is necessary for entering into, or performance of, a contract between the data subject and a data controller regardless of whether the decision could be taken otherwise than by solely automated means. […]”

While – as before – the automated decision is legitimate if necessary to enter into or perform a contract with the data subject, the novelty is that the necessity of the automated decision can be assessed “regardless of whether the decision could be taken otherwise than by solely automated means”. Therefore:

  • An automated decision that produces no legal effects (and does not similarly significantly affect the data subject) remains permissible.
  • An automated decision producing legal effects may be based solely on automated processing only if the decision is necessary to enter into or perform a contract with the data subject.
  • Put as a prohibition: no such automated decision is allowed unless it is necessary to enter into or perform a contract with the data subject.
  • To add certainty to the interpretation of the requirement of “necessity”, the proposal clarifies that a decision may be regarded as necessary even if it could also be made by a human. In the words of the EDPB/EDPS opinion, “the requirement of necessity does not mean that the mere fact that a decision could theoretically also be taken by a human should prevent the controller from taking the decision by solely automated means”.
  • In short, “The fact that the decision could also be taken by a human does not prevent the controller from taking the decision by solely automated processing” (recital (38) of the Digital Omnibus proposal).

Bottom line: the data controller may choose a human decision process or an automated decision process, so long as the decision is necessary to enter into or perform a contract with the data subject.

In conclusion, as shown in the “AI First” policy, the EU is now concerned that AI may not be fully exploited. The above changes are intended to encourage automated decisions even where such decisions could be taken by a human being.

Stay tuned for more angles of the Digital Omnibus.

Italy’s New AI Law: A Boost for Healthcare Research?


Italy has recently enacted its own “Artificial Intelligence Act”, set to take effect on October 10, 2025.

You might be wondering: Did we really need another layer of AI regulation? That was our initial reaction, too. But a closer look reveals that the Italian AI Law introduces several interesting provisions, especially in the healthcare sector, that could facilitate research for both public and private entities. Here are some highlights:

1. Healthcare Data Processing Based on Public Interest

The law explicitly recognizes that the processing of health-related personal data by:

  • Public or private non-profit entities,
  • Research hospitals (IRCCS),
  • Private entities collaborating with the above for healthcare research,

is of “substantial public interest.” This significantly expands the scope of Article 9(2)(g) of the GDPR, offering a clearer legal basis for processing sensitive data in research contexts.

2. Secondary Use of Data

The law introduces a simplified regime for the secondary use of personal data without direct identifiers. In particular:

  • No new consent required, as long as data subjects are informed (even via a website).
  • Automatic authorization unless blocked by the Data Protection Authority within 30 days of notification.

This provision applies only to the entities mentioned above, so it is limited in scope; even so, it significantly strengthens the framework for non-profit research projects.

3. Freedom to Anonymize, Pseudonymize and Synthesize

Under Article 8(4) of the AI Law, processing personal data for anonymization, pseudonymization, or synthetic data generation is always permitted, provided the data subject is informed. This is a major step forward in enabling privacy-preserving AI research.

4. Guidelines and Governance

The law delegates the creation of technical guidelines to:

  • AGENAS – for anonymization and synthetic data generation.
  • Ministry of Health – for processing health data in research, including AI applications.

It also establishes a national AI platform at AGENAS, which will act as the data controller for personal data collected and generated within the platform.


Final Thoughts

While the GDPR aimed to support research, its implementation often created legal uncertainty and operational hurdles. Italy’s AI Law appears to address some of these gaps, offering a more pragmatic and enabling framework for healthcare research.