
The future of GDPR? Focus on Automated Decisions

The Digital Omnibus Proposal.  The so-called “Digital Omnibus” regulation proposal promises to lighten the burden of compliance with data protection legislation. Its aim is “to ensure that the rules continue to be fit for supporting innovation and growth”. Europe is not giving up on privacy, but it is willing to simplify it.

EDPB and EDPS Chime In.  The proposal, published in November 2025, has recently been the subject of a joint opinion by the European Data Protection Board and the European Data Protection Supervisor. While these two bodies are apparently in favor of facilitating GDPR compliance and strengthening consistency in its application, they express significant concerns regarding the impact of the changes on the fundamental rights and freedoms of individuals. They also fear that the proposal will create additional legal uncertainties.

The GDPR of the Future.  Gitti and Partners is embarking on a series of blog posts to explain what the GDPR may look like if the Digital Omnibus proposal becomes law. Today we focus on changes to the provision on automated decisions.

Automated Decisions: from Right to Prohibition.  Article 22 of the current GDPR regulates automated individual decision-making. The current language of the provision frames the rule as a “right”: a data subject is entitled not to be subject to a decision based solely on automated processing, unless certain conditions apply. The new proposal, instead, shapes a similar rule as a prohibition.

Conditions for Automated Decisions.  The new proposal reads: “1. A decision which produces legal effects for a data subject or similarly significantly affects him or her may be based solely on automated processing, including profiling, only where that decision: (a) is necessary for entering into, or performance of, a contract between the data subject and a data controller regardless of whether the decision could be taken otherwise than by solely automated means. […]”

While – as before – the automated decision is legitimate if necessary to enter into or perform a contract with the data subject, the novelty is that the necessity of the automated decision can be assessed “regardless of whether the decision could be taken otherwise than by solely automated means”. Therefore:

  • An automated decision that produces no legal or similarly significant effects falls outside the prohibition.
  • An automated decision producing such effects may be based solely on automated processing only if the decision is necessary to enter into or perform a contract with the data subject.
  • In order to add certainty to the interpretation of the requirement of “necessity”, the proposal clarifies that the decision may be regarded as necessary even if the decision could be made by a human. In the words of the EDPB/EDPS opinion, “the requirement of necessity does not mean that the mere fact that a decision could theoretically also be taken by a human should prevent the controller from taking the decision by solely automated means”.
  • In short, “The fact that the decision could also be taken by a human does not prevent the controller from taking the decision by solely automated processing” (recital (38) of the Digital Omnibus proposal).

Bottom line: the data controller may choose a human decision process or an automated decision process, so long as the decision is necessary to enter into or perform a contract with the data subject.
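For readers who think in code, the rule above can be condensed into a small illustrative check. This is a sketch only: it models just the contract-necessity ground quoted in this post (the proposal’s other grounds are elided from the quotation and not modeled here), and all names are hypothetical.

```python
def automated_decision_allowed(has_significant_effect: bool,
                               necessary_for_contract: bool) -> bool:
    """Illustrative pre-check of the proposed Article 22 rule (not legal advice).

    A decision with legal or similarly significant effects may be based
    solely on automated processing only where it is necessary to enter
    into or perform a contract with the data subject. Whether a human
    could also take the decision is irrelevant to the necessity test.
    """
    if not has_significant_effect:
        # No legal or similarly significant effect: outside the prohibition.
        return True
    return necessary_for_contract
```

Note that the function deliberately ignores any “a human could decide instead” input: under the proposal, that fact does not defeat necessity.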

In conclusion, as the “AI First” policy shows, the EU is now worried that AI may not be fully exploited. The changes above are intended to encourage automated decisions even where such decisions could be taken by a human being.

Stay tuned for more angles of the Digital Omnibus.

A More Volatile World: The Digital Omnibus

On November 19, 2025, the European Commission unveiled a landmark proposal: the Digital Omnibus Regulation. This initiative is not just another legislative tweak – it signals a philosophical shift in how Europe approaches digital regulation. In a world increasingly defined by volatility, complexity, and rapid technological change, the Commission seems to be saying: “We’ve heard you – let’s regulate, but let’s make it easier to comply.”

Why Now? The Context Behind the ‘Digital Omnibus’

The proposal comes against a backdrop of mounting pressure on Europe’s competitiveness. In his now-famous “Please, do something” speech to the European Parliament, Mario Draghi urged EU institutions to act decisively to restore Europe’s ability to innovate and compete globally. Could the Digital Omnibus be seen as a response to this heartfelt appeal?

For years, the EU has been a global pioneer in digital regulation – think GDPR, AI Act, Data Act, Digital Services Act (DSA), Digital Markets Act (DMA), NIS2, and more. But this success has come at a cost: fragmentation, complexity, and heavy compliance burdens. Businesses have struggled to navigate overlapping obligations. The Digital Omnibus is designed to change that. In the “explanatory memorandum” to the Digital Omnibus, the Commission tellingly acknowledges, for instance, that “some entities, especially smaller companies and associations with a low number of non-intensive, often low-risk data processing operations, expressed concerns regarding the application of some obligations of the GDPR”.

The ‘Digital Omnibus’ Proposal

The proposal introduces technical amendments and structural simplifications across a wide range of legislation, including:

  • General Data Protection Regulation (GDPR)
  • AI Act
  • Data Act
  • ePrivacy Directive
  • NIS2 Directive
  • Data Governance Act
  • Free Flow of Non-Personal Data Regulation
  • Platform-to-Business (P2B) Regulation (to be repealed)

Key Highlights

  • GDPR Simplification:
    • Clarifies the definition of personal data
    • Gives controllers criteria and means to determine whether data resulting from pseudonymization falls outside the notion of personal data
    • Introduces flexibility for AI development: processing personal data for AI training under “legitimate interest”, with safeguards
    • Modernizes cookie consent rules, with centralized browser settings to end “cookie fatigue”
  • AI Act Adjustments:
    • Expands regulatory sandboxes and simplifies compliance for SMEs and mid-cap companies
    • Clarifies the interplay between the AI Act and other EU legislation
    • Introduces an obligation on the Commission and Member States to foster AI literacy
  • Incident Reporting:
    • Creates a single-entry point for incident notifications under GDPR, NIS2, DORA, and CER, ending duplicative reporting
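To illustrate the pseudonymization point, here is a minimal sketch of keyed pseudonymization using an HMAC. The setup is a hypothetical assumption, not anything prescribed by the proposal: whether the output still counts as personal data depends on who holds the key and can reverse the mapping, which is precisely the assessment the proposal asks controllers to make.

```python
import hashlib
import hmac

# Hypothetical key held only by the controller. If no party with access
# to the pseudonymized data can reverse the mapping, the data may fall
# outside the notion of personal data; that assessment is the controller's.
SECRET_KEY = b"held-only-by-the-controller"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed pseudonym (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# The same input always maps to the same pseudonym, so records remain
# linkable for research purposes without carrying the direct identifier.
record = {"subject": pseudonymize("mario.rossi@example.com"), "diagnosis": "..."}
```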

A New Philosophy?

There are strong indications that the “Digital Omnibus” is more than a mere technical adjustment and may represent a strategic shift in EU “digital law”. The proposals will now proceed to the European Parliament and the Council for deliberation. It remains to be seen whether words will be turned into action.

Italy’s New AI Law: A Boost for Healthcare Research?


Italy has recently enacted its own “Artificial Intelligence Act”, set to take effect on October 10, 2025.

You might be wondering: Did we really need another layer of AI regulation? That was our initial reaction, too. But a closer look reveals that the Italian AI Law introduces several interesting provisions, especially in the healthcare sector, that could facilitate research for both public and private entities. Here are some highlights:

1. Healthcare Data Processing Based on Public Interest

The law explicitly recognizes that the processing of health-related personal data by:

  • Public or private non-profit entities,
  • Research hospitals (IRCCS),
  • Private entities collaborating with the above for healthcare research,

is of “substantial public interest.” This significantly expands the scope of Article 9(2)(g) of the GDPR, offering a clearer legal basis for processing sensitive data in research contexts.

2. Secondary Use of Data

The law introduces a simplified regime for the secondary use of personal data without direct identifiers. In particular:

  • No new consent required, as long as data subjects are informed (even via a website).
  • Automatic authorization unless blocked by the Data Protection Authority within 30 days of notification.

This provision applies only to the entities mentioned above, so its scope is limited, but it nonetheless significantly strengthens the framework for nonprofit research projects.

3. Freedom to Anonymize, Pseudonymize and Synthesize

Under Article 8(4) of the AI Law, processing data for anonymization, pseudonymization, or synthesization is always permitted, provided the data subject is informed. This is a major step forward in enabling privacy-preserving AI research.

4. Guidelines and Governance

The law delegates the creation of technical guidelines to:

  • AGENAS – for anonymization and synthetic data generation.
  • Ministry of Health – for processing health data in research, including AI applications.

It also establishes a national AI platform at AGENAS, which will act as the data controller for personal data collected and generated within the platform.


Final Thoughts

While the GDPR aimed to support research, its implementation often created legal uncertainty and operational hurdles. Italy’s AI Law appears to address some of these gaps, offering a more pragmatic and enabling framework for healthcare research.

Your Face at the Airport: the EDPB Weighs in on Face Boarding

As you wander around an airport waiting to travel for the summer, you may notice that your image is captured by various devices. This process, known as facial recognition or “face boarding”, has recently been the subject of an opinion by the EDPB (Opinion no. 11/2024, issued pursuant to Article 64 of the GDPR, https://www.edpb.europa.eu/our-work-tools/our-documents/opinion-board-art-64/opinion-112024-use-facial-recognition-streamline_en) on the processing of data obtained in airports using facial recognition to streamline passenger flow.

The EDPB assessed the compatibility of such data processing with:

  • article 5(1)(e) and (f) of the GDPR on storage limitation and integrity and confidentiality;
  • article 25 of the GDPR on privacy by default and privacy by design;
  • article 32 of the GDPR on security of processing.

The opinion takes into account four different scenarios:

  • Scenario 1: Storage of an enrolled biometric template – which is a set of biometric features stored in a database for future authentication purposes – only in the hands of the passenger.

Enrolment consists in each passenger who has consented to such processing recording, on their own device, the biometric template and the ID necessary for the processing. Neither the passengers’ ID nor their biometric data are retained by the airport operator after the enrolment process.

The passenger is authenticated when going through specific checkpoints at the airport (equipped with QR scanners and cameras), through the use of a QR code produced by the passenger’s device, where the biometric template is stored.

The EDPB opinion concludes that such processing could be considered in principle compatible with Articles 5(1)(f), 25 and 32 of the GDPR (nonetheless, appropriate safeguards must be implemented, including an impact assessment).
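A toy sketch of the Scenario 1 flow may help fix ideas. It is a heavily simplified illustration under stated assumptions: real biometric matching computes similarity scores between templates rather than exact equality, and all names here are hypothetical.

```python
import base64
import hashlib
import json

def enroll(template: bytes, passenger_id: str) -> str:
    """Build the QR payload kept only on the passenger's device.

    The airport operator retains neither the ID nor the biometric data
    after enrolment; everything travels with the passenger.
    """
    payload = {"id": passenger_id,
               "template": base64.b64encode(template).decode("ascii")}
    return base64.b64encode(json.dumps(payload).encode("utf-8")).decode("ascii")

def checkpoint_match(qr_payload: str, live_capture: bytes) -> bool:
    """At the checkpoint: decode the QR and compare it with the fresh camera capture."""
    payload = json.loads(base64.b64decode(qr_payload))
    stored = base64.b64decode(payload["template"])
    # Toy comparison by hash equality; a real system would compute a
    # similarity score between the stored and live templates.
    return hashlib.sha256(stored).digest() == hashlib.sha256(live_capture).digest()
```

The point the EDPB values is visible in the data flow: no central database is queried at any step, and the template exists only in the payload the passenger carries.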

  • Scenario 2: centralized storage of an enrolled biometric template in an encrypted form, stored in a database within the airport premises and with a key solely in the passenger’s hands.

The enrolment is controlled by the airport operator and consists in generating an ID and biometric data that are encrypted with a key/secret. The database is stored within the airport premises, under the control of the airport operator. Individual-specific encryption keys/secrets are stored only on the individual’s device.

Passengers are authenticated when going through specific checkpoints, equipped with a control pod, a QR scanner and a camera. The passenger’s data are sent to the database to request the encrypted template, which is then checked locally on the pod and/or user’s device.

The opinion concludes that such processing could be considered in principle compatible with Articles 5(1)(e) and (f), 25 and 32 of the GDPR, subject to appropriate safeguards. In fact, the intrusiveness of such processing through a centralized system can be counterbalanced by the involvement of the passengers, who hold control of the key to their encrypted data.

  • Scenario 3: centralized storage of an enrolled biometric template in a database within the airport, under the control of the airport operator and Scenario 4: centralized storage of an enrolled biometric template in a cloud, under the control of the airline company or its cloud service provider.

The enrolment is done either in a remote mode or at airport terminals.

At the airport passengers go through dedicated control pods equipped with a camera. Biometric data is sent to the centralized database or to the cloud server – where the matching of the data is processed. The biometric matching is only performed when the passengers present themselves at pre-defined control points at the airport, but the data processing itself is done in the cloud or in centralized databases.

The EDPB considers that the use of biometric data for identification purposes in large central databases, as in Scenarios 3 and 4, interferes with the fundamental rights of data subjects and could entail serious consequences. As such, Scenarios 3 and 4 are not compatible with Article 25 of the GDPR, because they imply searching for passengers within a central database by processing each biometric sample captured. Also, taking into account the state of the art, the measures envisaged in such Scenarios would not ensure an appropriate level of security under Article 5(1)(f) of the GDPR.

In conclusion, the EDPB regards with suspicion the processing of passengers’ biometric templates, through a matching-and-authentication process, when it happens in centralized storage tools (databases or clouds). In its view, this increases risks for the security of data, requires the processing of much more data and does not leave passengers in control of their data.

A New European Digital Identity

On March 26, 2024 the Council adopted a new framework for a European digital identity (eID).

Background.  In June 2021, the Commission proposed a framework for an eID that would be available to all EU citizens, residents, and businesses, via European digital identity wallets (EDIWs). The new framework amends the 2014 regulation on electronic identification and trust services for electronic transactions in the internal market (eIDAS regulation n. 910/2014), which laid the foundations for safely accessing public services and carrying out transactions online and across borders in the EU. According to the Commission, the revision of the regulation is needed since only 14% of key public service providers across all Member States allow cross-border authentication with an e-Identity system.

Entry into Force.  The revised regulation will be published in the EU’s Official Journal and will enter into force 20 days after its publication. The regulation will be fully implemented by 2026.

Digital Wallets.  Member States will have to offer citizens and businesses digital wallets that will be able to link their national digital identities with proof of other personal attributes (e.g., driving license, bank account). Citizens will be able to prove their identity simply using their mobile phones.

EU-wide Recognition.  The new EDIWs will enable all citizens to access online services with their national digital identification, which will be recognised throughout the EU. Uses of EDIWs include: opening a bank account, checking in at a hotel, filing tax returns, storing a medical prescription, signing legal documents.

The Right to Digital Identity.  The fundamental purpose of the regulation is to establish the right to a digital identity for Union citizens and to enhance their privacy.

Main features of EDIWs.  According to the new regulation:

• the use of EDIWs shall be voluntary, and they shall be provided directly by a Member State, under its mandate or upon its recognition;

• EDIWs shall enable the user to (1) securely request, store, delete, and share person identification data and authenticate to relying parties; (2) generate pseudonyms and store them encrypted; (3) access a log of all transactions and report to the national authority any unlawful or suspicious request for data; (4) sign or seal by means of qualified electronic signatures; (5) exercise the right to data portability.

Privacy.  Privacy will be safeguarded through different technologies, such as cryptographic methods that make it possible to validate whether a given statement based on the person’s identification data is true without revealing the data on which that statement is based. Moreover, EDIWs will have a dashboard embedded into the design to allow users to request the immediate erasure of any personal data pursuant to Article 17 of Regulation (EU) 2016/679.

Paola Sangiovanni to Speak on Artificial Intelligence

Our firm will be attending the EMEA Regional Meeting of Ally Law in Malta next week and on Friday November 15th I will be speaking at a panel discussion titled “Keeping an Eye on AI: Ethical and Regulatory Considerations.” 

Artificial intelligence is a hot topic, also in the med-tech field, and poses exciting legal, ethical and regulatory questions. I am sure this will be an interesting opportunity to discuss them with legal and technical experts. 


Healthcare, Technology and Malpractice in 2030

The “Home-Spital” of 2030.

I have enjoyed reading this article on what healthcare may look like in 2030 (in wealthy countries, may I point out). The author of the article says goodbye to the hospital, while welcoming the “home-spital”. She imagines that technology (think driverless cars and robot workers) will help us live in a safer world. Technology will also help prevent certain diseases. Regenerative options will slow down ageing. “You will go to hospital to be patched up and put back on track. Some hospital practices might even go away completely, and the need for hospitalization will eventually disappear. Not by 2030, but soon after”, she predicts.

Healthcare and Technology will be Increasingly Intertwined.

Telemedicine may become so pervasive that hospitals may be empty of patients and filled with patients’ data, continuously fed through wearable patient-monitoring devices or all kinds of sensors. Hospitals may become bio-printing laboratories, where 3D printers will manufacture organs, tissues and bones on demand.

It is somewhat uplifting to imagine that medicine may become so technologically advanced, so personalized and so effective, and health so plentiful. Others, however, warn against the threat of a de-humanized medicine that will solely rely on machines and will be unable to offer a human side to suffering individuals.

Will Technology Render Doctors Error-Free?

While this new world will pose issues of privacy, data security and fraud, will it solve the problem of malpractice? What will be the role of doctors in 2030? Will technology eradicate human error?

Technology is already helping doctors in many ways: drugs, devices and diagnostic instruments are now less harmful, more precise and a lot more effective. IBM’s Watson computer is assisting oncologists in finding the appropriate cure. Simulators help doctors in their training and in performing surgical procedures. Checklists, protocols and guidelines can be embedded in the doctors’ routine so as to limit, recognize or avoid repetition of human error. We can foresee a world of doctors who follow protocols embedded in devices, leaving less room for deviation from standard practice and less room for mistakes: a computerized doctor, almost. Will this make doctors error-free?

Of Course, Technology can be a Source of Error, too.

The idea of technological devices that are perfectly designed and always perfectly functioning is false, as any product liability lawyer knows. Even the best technology is subject to faulty design of a whole line of products, or faulty manufacturing of a single product.

Malpractice and Product Liability Litigation may Merge in 2030.

Litigation may simply become more complex. In fact, doctors will be sued by patients along with creators of health apps, health data centers, data carriers, device or drug manufacturers, and subjects who feed data to 3D printers or who analyze and monitor data processed by devices. It will be increasingly hard to disentangle doctors’ negligence from the liability of med-tech, diagnostic or pharma companies. Litigation will rely even more heavily on the opinion of court-appointed experts, who will need to be a panel of specialists with bioengineering, medical and information technology skills.

Two classes of doctors will probably emerge, even more distinctively than before: doctors who follow protocols suggested by computers, whose tasks will become closer to those of paramedics, and doctors engaged in research who write protocols that will bind other doctors. The first class will probably see a reduction in its freedom to make medical choices, but may be increasingly shielded from medical malpractice litigation. The protocol-writing doctors will work even more closely with the industry that designs, tests and manufactures medical technology.

Watch Out for the Paradox of Automation!

As this very interesting article (based on an analysis of the 2009 crash of Air France Flight 447, which killed 228 people) suggests, the so-called “Paradox of Automation” could come into play. Tim Harford, the author, explains it as follows: “First, automatic systems accommodate incompetence by being easy to operate and by automatically correcting mistakes. Because of this, an inexpert operator can function for a long time before his lack of skill becomes apparent – his incompetence is a hidden weakness that can persist almost indefinitely. Second, even if operators are expert, automatic systems erode their skills by removing the need for practice. Third, automatic systems tend to fail either in unusual situations or in ways that produce unusual situations, requiring a particularly skilful response. A more capable and reliable automatic system makes the situation worse.”

Technology that babysits doctors may ultimately weaken their skills. While automated devices may limit small errors, they may “create the opportunities for large ones”.

Conclusions.

Technology surely helps; who could deny that? But a messianic hope that technology will propel us into a risk-free, error-free and… malpractice-free world is a simplistic approach that is plain wrong.