Tag Archives: AI

The future of GDPR? Focus on Automated Decisions

The Digital Omnibus Proposal.  The so-called “Digital Omnibus” regulation proposal promises to lighten the burden of compliance with data protection legislation. Its aim is “to ensure that the rules continue to be fit for supporting innovation and growth”. Europe is not giving up on privacy, but it is willing to simplify it.

EDPB and EDPS Chime In.  The proposal, published in November 2025, has recently been the subject of a joint opinion by the European Data Protection Board and the European Data Protection Supervisor. While the two bodies are apparently in favor of facilitating GDPR compliance and strengthening consistency in its application, they express significant concerns regarding the impact of the changes on the fundamental rights and freedoms of individuals. They also fear that the proposal will create additional legal uncertainty.

The GDPR of the Future.  Gitti and Partners is embarking on a series of blog posts to explain what the GDPR may look like if the Digital Omnibus proposal becomes law. Today we focus on the changes to the provision on automated decisions.

Automated Decisions: from Right to Prohibition.  Article 22 of the current GDPR regulates automated individual decision-making. The current language of the provision frames the rule as a “right”: a data subject is entitled not to be subject to a decision based solely on automated processing, unless certain conditions apply. The new proposal, instead, shapes a similar rule as a prohibition.

Conditions for Automated Decisions.  The new proposal reads: “1. A decision which produces legal effects for a data subject or similarly significantly affects him or her may be based solely on automated processing, including profiling, only where that decision: (a) is necessary for entering into, or performance of, a contract between the data subject and a data controller regardless of whether the decision could be taken otherwise than by solely automated means. […]”

While – as before – the automated decision is legitimate if necessary to enter into or perform a contract with the data subject, the novelty is that the necessity of the automated decision can be assessed “regardless of whether the decision could be taken otherwise than by solely automated means”. Therefore:

  • An automated decision that does not produce any legal effects is fine.
  • A decision producing legal effects (or similarly significant effects) may be based solely on automated processing only if it is necessary to enter into or perform a contract with the data subject.
  • No automated decision is allowed unless it is necessary to enter into or perform a contract with the data subject.
  • In order to add certainty to the interpretation of the requirement of “necessity”, the proposal clarifies that the decision may be regarded as necessary even if it could be made by a human. In the words of the EDPB/EDPS opinion, “the requirement of necessity does not mean that the mere fact that a decision could theoretically also be taken by a human should prevent the controller from taking the decision by solely automated means”.
  • In short, “The fact that the decision could also be taken by a human does not prevent the controller from taking the decision by solely automated processing” (recital (38) of the Digital Omnibus proposal).

Bottom line: the data controller may choose a human decision process or an automated decision process, so long as the decision is necessary to enter into or perform a contract with the data subject.

In conclusion, as shown by the “AI First” policy, the EU is now worried that AI may not be fully exploited. The above changes are meant to encourage automated decisions even when such decisions could be taken by a human being.

Stay tuned for more angles of the Digital Omnibus.

A More Volatile World: The Digital Omnibus

On November 19, 2025, the European Commission unveiled a landmark proposal: the Digital Omnibus Regulation. This initiative is not just another legislative tweak – it signals a philosophical shift in how Europe approaches digital regulation. In a world increasingly defined by volatility, complexity, and rapid technological change, the Commission seems to be saying: “We’ve heard you – let’s regulate, but let’s make it easier to comply.”

Why Now? The Context Behind the ‘Digital Omnibus’

The proposal comes against a backdrop of mounting pressure on Europe’s competitiveness. In his now-famous “Please, do something” speech to the European Parliament, Mario Draghi urged EU institutions to act decisively to restore Europe’s ability to innovate and compete globally. Could the Digital Omnibus be seen as a response to this heartfelt appeal?

For years, the EU has been a global pioneer in digital regulation – think GDPR, AI Act, Data Act, Digital Services Act (DSA), Digital Markets Act (DMA), NIS2, and more. But this success has come at a cost: fragmentation, complexity, and heavy compliance burdens. Businesses have struggled to navigate overlapping obligations. The Digital Omnibus is designed to change that. In the “explanatory memorandum” to the Digital Omnibus, the Commission emblematically acknowledges, for instance, that “some entities, especially smaller companies and associations with a low number of non-intensive, often low-risk data processing operations, expressed concerns regarding the application of some obligations of the GDPR”.

The ‘Digital Omnibus’ Proposal

The proposal introduces technical amendments and structural simplifications across a wide range of legislation, including:

  • General Data Protection Regulation (GDPR)
  • AI Act
  • Data Act
  • ePrivacy Directive
  • NIS2 Directive
  • Data Governance Act
  • Free Flow of Non-Personal Data Regulation
  • Platform-to-Business (P2B) Regulation (to be repealed)

Key Highlights

  • GDPR Simplification:
    • Clarifies the definition of personal data
    • Supports controllers by setting out criteria and means to determine whether data resulting from pseudonymization falls outside the definition of personal data
    • Introduces flexibility for AI development: processing personal data for AI training under “legitimate interest,” with safeguards.
    • Modernizes cookie consent rules – centralized browser settings to end “cookie fatigue.”
  • AI Act Adjustments:
    • Expands regulatory sandboxes and simplifies compliance for SMEs and mid-cap companies.
    • Clarifies the interplay between the AI Act and other EU legislation
    • Introduces an obligation on the Commission and Member States to foster AI literacy
  • Incident Reporting:
    • Creates a single entry point for incident notifications under GDPR, NIS2, DORA, and CER – ending duplicative reporting.

A New Philosophy?

There are strong indications that the “Digital Omnibus” is more than a mere technical adjustment and may represent a strategic shift in EU “digital law”. The proposal will now proceed to the European Parliament and the Council for deliberation. It remains to be seen whether words will be turned into action.

Italy’s New AI Law: A Boost for Healthcare Research?


Italy has recently enacted its own “Artificial Intelligence Act”, set to take effect on October 10, 2025.

You might be wondering: Did we really need another layer of AI regulation? That was our initial reaction, too. But a closer look reveals that the Italian AI Law introduces several interesting provisions, especially in the healthcare sector, that could facilitate research for both public and private entities. Here are some highlights:

1. Healthcare Data Processing Based on Public Interest

The law explicitly recognizes that the processing of health-related personal data by:

  • Public or private non-profit entities,
  • Research hospitals (IRCCS),
  • Private entities collaborating with the above for healthcare research,

is of “substantial public interest.” This significantly expands the scope of Article 9(2)(g) of the GDPR, offering a clearer legal basis for processing sensitive data in research contexts.

2. Secondary Use of Data

The law introduces a simplified regime for the secondary use of personal data without direct identifiers. In particular:

  • No new consent required, as long as data subjects are informed (even via a website).
  • Automatic authorization unless blocked by the Data Protection Authority within 30 days of notification.

This provision applies only to the entities mentioned above, so it is limited in scope, but it nonetheless significantly strengthens the framework for nonprofit research projects.

3. Freedom to Anonymize, Pseudonymize and Synthesize

Under Article 8(4) of the AI Law, processing data for anonymization, pseudonymization, or synthesization is always permitted, provided the data subject is informed. This is a major step forward in enabling privacy-preserving AI research.

4. Guidelines and Governance

The law delegates the creation of technical guidelines to:

  • AGENAS – for anonymization and synthetic data generation.
  • Ministry of Health – for processing health data in research, including AI applications.

It also establishes a national AI platform at AGENAS, which will act as the data controller for personal data collected and generated within the platform.


Final Thoughts

While the GDPR aimed to support research, its implementation often created legal uncertainty and operational hurdles. Italy’s AI Law appears to address some of these gaps, offering a more pragmatic and enabling framework for healthcare research.

AI Breakfasts Continue

Our breakfast presentation series dedicated to AI continues. Join us for our next event on May 24, 2024, at 9 Via Dante in Milan! Our partner, Professor Camilla Ferrari of the University of Milan, will be speaking about the impact of AI on contracts.

Curious about past presentations on AI and AI liability? You may find below our slides (in Italian).

Quick Guide on Legislation In Force and Legislation Stalled

Just a quick blog post to align our readers on which legislation is in force and which is stalled at the moment:

  • The Ultimate Beneficial Owners register (discussed here), which companies strived to populate by December 11, 2023, is currently on hold due to administrative litigation blocking the application of the register.
  • The European Regulation on Artificial Intelligence, which we already discussed here, is now final. Most of its provisions will apply two years after its entry into force.
  • Legislation on payback for medical devices will be scrutinized by the Italian Constitutional Court thanks to decisions of the Lazio Administrative Court issued on November 24, 2023.
  • The Italian Sunshine Act (Law no. 62 of 2022), which we illustrated here, is in force but not yet applicable since the transparency website is not yet live.
  • Next week the Whistleblowing Law (analyzed here and here) will be mandatory for all companies in scope.
  • The Digital Services Act and the Digital Markets Act are in force.

AI and Healthcare: Recommendations by the Italian Data Protection Authority

The use of Artificial Intelligence in healthcare continues to grow and is poised to become a USD 188 billion market by 2030. It also raises many concerns.

The Italian data protection authority (Garante) has recently issued recommendations based on 10 points, which can be found here.

The Garante particularly insists on:

  1. Human in the loop: a human being must be involved in the control, validation or modification of the automated decision;
  2. No algorithmic discrimination: trustworthy AI systems should reduce mistakes and avoid discrimination due to inaccurate processing of health data;
  3. Data quality: health data must be accurate and up to date, and must correctly represent the relevant population;
  4. Transparency: the data subject must be able to know when decisions are based on automated processing and must receive information on the logic adopted so as to be able to understand it (easier said than done!). The Garante also requires that at least an excerpt of the Data Protection Impact Assessment be published.

Other recommendations are not surprising for anyone familiar with the GDPR:

  • Profiling and decisions based on automated processing must be expressly allowed by Member State law.
  • The principles of privacy by design and privacy by default obviously play a big role in healthcare AI systems.
  • Roles of controller and processor must be correctly allocated: in particular, the public administration must ensure that external entities processing data are appointed as data processors.
  • A Data Protection Impact Assessment must be carried out and any risks must be evaluated.
  • Integrity, security and confidentiality of data must be ensured.

Striving for genuine transparency in connection with very complex and rapidly evolving algorithms will not be an easy task for the data controller. Similarly, understanding how AI works in a healthcare setting will not be simple for patients.

AI Liability Directive: Key Takeaways

We have already illustrated the new proposed rules for a product liability directive on this blog. We now analyze the proposal for an AI Liability Directive, which offers interesting insights on how liability rules will be tweaked where Artificial Intelligence is concerned. In fact, as noted by the Commission’s explanatory memorandum to the AI Liability Directive, “the ‘black box’ effect can make it difficult for the victim to prove fault and causality and there may be uncertainty as to how the courts will interpret and apply existing national liability rules in cases involving AI”.

These slides may help you understand the AI Liability Directive. If you have questions or doubts, do not hesitate to reach out to us.

EU Policies for the Digital Age

Confused about the Digital Services Act, the Digital Markets Act, the Data Governance Act and the Data Act? My recent article tries to make sense of them all:

https://www.mondaq.com/italy/data-protection/1195638/an-overview-of-the-european-union-laws-and-policies-for-the-digital-age

The article also explains that the European Union has a strong vision on principles and policies for the digital age and aspires to a worldwide leadership role in the governance of digital phenomena.

Paola Sangiovanni to Speak on Artificial Intelligence

Our firm will be attending the EMEA Regional Meeting of Ally Law in Malta next week, and on Friday, November 15, I will be speaking at a panel discussion titled “Keeping an Eye on AI: Ethical and Regulatory Considerations.”

Artificial intelligence is a hot topic, also in the med-tech field, and poses exciting legal, ethical and regulatory questions. I am sure this will be an interesting opportunity to discuss them with legal and technical experts.