Tag Archives: artificial intelligence

The future of GDPR? Focus on Automated Decisions

The Digital Omnibus Proposal.  The so-called “Digital Omnibus” regulation proposal promises to lighten the burden of compliance with data protection legislation. Its aim is “to ensure that the rules continue to be fit for supporting innovation and growth”. Europe is not giving up on privacy, but it is willing to simplify it.

EDPB and EDPS Chime In.  The proposal, published in November 2025, has recently been the subject of a joint opinion by the European Data Protection Board and the European Data Protection Supervisor. While these two bodies are apparently in favor of facilitating GDPR compliance and strengthening consistency in its application, they express significant concerns regarding the impact of the changes on the fundamental rights and freedoms of individuals. They also fear that the proposal will create additional legal uncertainties.

The GDPR of the Future.  Gitti and Partners is embarking on a series of blog posts to explain what the GDPR may look like if the Digital Omnibus proposal becomes law. Today we focus on changes to the provision on automated decisions.

Automated Decisions: from Right to Prohibition.  Article 22 of the current GDPR regulates automated individual decision-making. The current language of the provision frames the rule as a “right”: a data subject is entitled not to be subject to a decision based solely on automated processing, unless certain conditions apply. The new proposal, instead, shapes a similar rule as a prohibition.

Conditions for Automated Decisions.  The new proposal reads (the newly added language is the final clause): “1. A decision which produces legal effects for a data subject or similarly significantly affects him or her may be based solely on automated processing, including profiling, only where that decision: (a) is necessary for entering into, or performance of, a contract between the data subject and a data controller regardless of whether the decision could be taken otherwise than by solely automated means. […]”

While – as before – the automated decision is legitimate if necessary to enter into or perform a contract with the data subject, the novelty is that the necessity of the automated decision can be assessed “regardless of whether the decision could be taken otherwise than by solely automated means”. Therefore:

  • An automated decision that does not produce any legal effects is fine.
  • An automated decision producing legal effects may be based on automated processing only if the decision is necessary to enter into or perform a contract with the data subject.
  • No automated decision is allowed unless it is necessary to enter into or perform a contract with the data subject.
  • In order to add certainty to the interpretation of the requirement of “necessity”, the proposal clarifies that the decision may be regarded as necessary also if the decision could be made by a human. In the words of the EDPB/EDPS opinion, “the requirement of necessity does not mean that the mere fact that a decision could theoretically also be taken by a human should prevent the controller from taking the decision by solely automated means”.
  • In short, “The fact that the decision could also be taken by a human does not prevent the controller from taking the decision by solely automated processing” (recital (38) of the Digital Omnibus proposal).

Bottom line: the data controller may choose a human decision process or an automated decision process, so long as the decision is necessary to enter into or perform a contract with the data subject.

In conclusion, as reflected in its “AI First” policy, the EU is now worried that AI may not be fully exploited. The above changes are supposed to encourage automated decisions even where such decisions could be taken by a human being.

Stay tuned for more angles of the Digital Omnibus.

A More Volatile World: The Digital Omnibus

On November 19, 2025, the European Commission unveiled a landmark proposal: the Digital Omnibus Regulation. This initiative is not just another legislative tweak – it signals a philosophical shift in how Europe approaches digital regulation. In a world increasingly defined by volatility, complexity, and rapid technological change, the Commission seems to be saying: “We’ve heard you – let’s regulate, but let’s make it easier to comply.”

Why Now? The Context Behind the ‘Digital Omnibus’

The proposal comes against a backdrop of mounting pressure on Europe’s competitiveness. In his now-famous “Please, do something” speech to the European Parliament, Mario Draghi urged EU institutions to act decisively to restore Europe’s ability to innovate and compete globally. Could the Digital Omnibus be seen as a response to this heartfelt appeal?

For years, the EU has been a global pioneer in digital regulation – think GDPR, AI Act, Data Act, Digital Services Act (DSA), Digital Markets Act (DMA), NIS2, and more. But this success has come at a cost: fragmentation, complexity, and heavy compliance burdens. Businesses have struggled to navigate overlapping obligations. The Digital Omnibus is designed to change that. In the “explanatory memorandum” to the Digital Omnibus, the Commission emblematically acknowledges, for instance, that “some entities, especially smaller companies and associations with a low number of non-intensive, often low-risk data processing operations, expressed concerns regarding the application of some obligations of the GDPR”.

The ‘Digital Omnibus’ Proposal

The proposal introduces technical amendments and structural simplifications across a wide range of legislation, including:

  • General Data Protection Regulation (GDPR)
  • AI Act
  • Data Act
  • ePrivacy Directive
  • NIS2 Directive
  • Data Governance Act
  • Free Flow of Non-Personal Data Regulation
  • Platform-to-Business (P2B) Regulation (to be repealed)

Key Highlights

  • GDPR Simplification:
    • Clarifies the definition of personal data.
    • Supports controllers in determining when data resulting from pseudonymization no longer constitutes personal data.
    • Introduces flexibility for AI development: processing personal data for AI training under “legitimate interest,” with safeguards.
    • Modernizes cookie consent rules: centralized browser settings to end “cookie fatigue.”
  • AI Act Adjustments:
    • Expands regulatory sandboxes and simplifies compliance for SMEs and mid-cap companies.
    • Clarifies the interplay between the AI Act and other EU legislation.
    • Introduces an obligation on the Commission and Member States to foster AI literacy.
  • Incident Reporting:
    • Creates a single entry point for incident notifications under GDPR, NIS2, DORA, and CER, ending duplicative reporting.

A New Philosophy?

There are strong indications that the “Digital Omnibus” is more than a mere technical adjustment and may represent a strategic shift in EU “digital law”. The proposals will now proceed to the European Parliament and the Council for deliberation. It remains to be seen whether words will be turned into action.

Italy’s New AI Law: A Boost for Healthcare Research?


Italy has recently enacted its own “Artificial Intelligence Act”, set to take effect on October 10, 2025.

You might be wondering: Did we really need another layer of AI regulation? That was our initial reaction, too. But a closer look reveals that the Italian AI Law introduces several interesting provisions, especially in the healthcare sector, that could facilitate research for both public and private entities. Here are some highlights:

1. Healthcare Data Processing Based on Public Interest

The law explicitly recognizes that the processing of health-related personal data by:

  • Public or private non-profit entities,
  • Research hospitals (IRCCS),
  • Private entities collaborating with the above for healthcare research,

is of “substantial public interest.” This significantly expands the scope of Article 9(2)(g) of the GDPR, offering a clearer legal basis for processing sensitive data in research contexts.

2. Secondary Use of Data

The law introduces a simplified regime for the secondary use of personal data without direct identifiers. In particular:

  • No new consent required, as long as data subjects are informed (even via a website).
  • Automatic authorization unless blocked by the Data Protection Authority within 30 days of notification.

This provision applies only to the entities mentioned above, so its scope is limited, but it nonetheless significantly strengthens the framework for nonprofit research projects.

3. Freedom to Anonymize, Pseudonymize and Synthesize

Under Article 8(4) of the AI Law, processing data for anonymization, pseudonymization, or synthesization is always permitted, provided the data subject is informed. This is a major step forward in enabling privacy-preserving AI research.

4. Guidelines and Governance

The law delegates the creation of technical guidelines to:

  • AGENAS – for anonymization and synthetic data generation.
  • Ministry of Health – for processing health data in research, including AI applications.

It also establishes a national AI platform at AGENAS, which will act as the data controller for personal data collected and generated within the platform.


Final Thoughts

While the GDPR aimed to support research, its implementation often created legal uncertainty and operational hurdles. Italy’s AI Law appears to address some of these gaps, offering a more pragmatic and enabling framework for healthcare research.

AI Liability Directive: Key Takeaways

We have already illustrated the new proposed rules for a product liability directive on this blog. We now analyze the proposal for an AI Liability Directive, which offers interesting insights on how liability rules will be tweaked where Artificial Intelligence is concerned. In fact, as noted by the Commission’s explanatory memorandum to the AI Liability Directive, “the ‘black box’ effect can make it difficult for the victim to prove fault and causality and there may be uncertainty as to how the courts will interpret and apply existing national liability rules in cases involving AI”.

These slides may help in understanding the AI Liability Directive. If you have questions or doubts, do not hesitate to reach out to us.

EU Policies for the Digital Age

Confused about the Digital Services Act, the Digital Markets Act, the Data Governance Act and the Data Act? My recent article tries to make sense of all of them:

https://www.mondaq.com/italy/data-protection/1195638/an-overview-of-the-european-union-laws-and-policies-for-the-digital-age

The article also explains that the European Union has a strong vision on principles and policies for the digital age and aspires to a worldwide leadership role in the governance of digital phenomena.

Paola Sangiovanni to Speak on Artificial Intelligence

Our firm will be attending the EMEA Regional Meeting of Ally Law in Malta next week and on Friday November 15th I will be speaking at a panel discussion titled “Keeping an Eye on AI: Ethical and Regulatory Considerations.” 

Artificial intelligence is a hot topic, also in the med-tech field, and poses exciting legal, ethical and regulatory questions. I am sure this will be an interesting opportunity to discuss them with legal and technical experts. 


Weekend Reading Recommendations

Ready for the weekend? I have these articles on my reading list; perhaps you, too, may enjoy some food for thought on some of the hottest topics in the fields of law and innovation:

  • “A Layered Model for AI Governance”: https://cyber.harvard.edu/node/100108, on governance for artificial intelligence aimed at ensuring transparency and accountability and addressing massive information asymmetries between the developers of artificial intelligence systems and consumers and policymakers.


Whatever you will be reading, have a great weekend!

Artificial intelligence and robotics: a report reflects on legal issues

With a report issued by the European Parliament on May 31, 2016 (the “Report”), the European Union has stepped into the debate on how to deal with artificial intelligence and robotics (“AI&R”). The ultimate goal of the European Parliament is to set forth a common legal framework that may avoid discrepancies arising from different national legislations, which would otherwise create obstacles to the effective development of robotics.

The Report introduces ethical principles concerning the development of AI&R for civil use and proposes a Charter on Robotics, composed of a Code of Ethical Conduct for Robotics Engineers, a Code for Research Ethics Committees, and Licenses for Designers and Users.

Furthermore, the Report suggests the creation of a European Agency for AI&R, with an adequate budget, able to generate the necessary technical, ethical and regulatory expertise. Such an agency would monitor research and development activities in order to recommend regulatory standards and address consumer protection issues in these fields.

The Report, which recommends that the Commission prepare a proposal for a directive on civil law rules on robotics, illustrates many of the issues that society could face in a few decades regarding the relationship between humans and humanoids. In fact, a wide range of robots already can, and in the future will increasingly be able to, affect people’s lives in their roles as care robots, medical robots, human repair and enhancement robots, doctor training robots, and so on.

A further development that may be concerning for lawyers is connected to the announcement, a few days ago, by University College London that a computer has been able to predict, through a machine-learning algorithm, the decisions of the European Court of Human Rights with 79% accuracy. Will this result in a more automatic and predictable application of the law?

In order to secure the highest degree of professional competence possible, as well as to protect patients’ health when AI&R is used in the health field, the Report recommends strengthening legal and regulatory measures such as data protection and data ownership, standardization, safety and security.

One concern addressed by the Report is civil liability for damage caused by robots. Should the owner be liable for damages caused by a smart robot? In fact, in the future, more and more robots will be able to make “smart” autonomous decisions and interact with third parties independently, as well as cause damage on their own. Should such damage be the responsibility of the person who designed, trained or operated the robot?

Some argue in favor of a strict liability rule, “thus requiring only proof that damage has occurred and the establishment of a causal link between the harmful behavior of the robot and the damage suffered by the injured party”.

The Report goes even further by asking the Commission to create a compulsory insurance scheme for owners and producers to cover damage potentially caused by robots and a compensation fund guaranteeing compensation for damages, but also allowing investments and donations in favor of robots.

Exciting times lie ahead of us. It remains to be seen whether the current legal principles will be sufficient or whether new ones will actually be necessary.