AI-Generated Images Without Consent: Why the New DPA Joint Statement Matters

Last month, multiple Data Protection Authorities issued a joint statement addressing a rapidly growing concern: the creation and circulation of realistic AI-generated images depicting identifiable individuals without their knowledge or consent.

The concern centres on generative models that can now produce highly convincing images of people who never posed for a photograph, never uploaded such images, and never agreed to be portrayed.

From a legal perspective, the issue cuts across three areas:

  • personal data

  • biometric inference

  • reputational harm

Under EU data protection law, an image of a person is personal data whenever that individual can be identified, directly or indirectly. The complication with AI-generated imagery is that the image may not be a photograph of a real event, yet it may still convincingly represent a real person, leading others to believe it is authentic.

This creates a new type of privacy risk: synthetic personal data that continues to affect a real individual.

The enforcement challenge

Regulators clearly recognize the potential harm: impersonation, harassment, non-consensual sexual imagery, fraud, and manipulation.

Traditional data protection enforcement assumes:

  • A controller can be identified

  • Processing can be localized

  • Removal remedies are effective

AI imagery breaks all three assumptions because these images can be generated anonymously, uploaded to a single platform, reshared across dozens of services, stored privately, or distributed peer-to-peer. Even when a platform removes content, copies often persist elsewhere. The legal right to erasure becomes technically fragile.

The issue is not only illegal generation, but also uncontrolled replication.

Where liability may shift

The real regulatory focus is likely to move toward intermediaries:

  • platforms hosting the content

  • providers of generative tools

  • services enabling large-scale dissemination

We may therefore see authorities testing duties such as:

  • abuse prevention mechanisms

  • identity-related safeguards

  • detection and response procedures

  • complaint handling frameworks

In other words, compliance will increasingly be evaluated not only on whether harmful images exist, but on how quickly and effectively a service can react once notified.

What companies can do

Organizations integrating image generation features, even as a minor product capability, should treat this as both a privacy and a product-design concern.

Recommended practical steps:

  1. Assess whether outputs could depict real individuals

  2. Implement prompt restrictions and safety filters

  3. Create a rapid takedown and reporting process

  4. Log incidents and response times

  5. Update privacy documentation and risk assessments
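To make steps 2–4 concrete, here is a minimal sketch of a prompt safety gate combined with an incident log that records response times. Everything in it is illustrative: the `SafetyGate` class, the `BLOCKED_PATTERNS` list, and the `Incident` record are hypothetical names, and a production system would rely on a maintained content classifier rather than a hand-written pattern list.

```python
import re
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical blocklist for illustration only; real services would use
# a maintained safety classifier, not a few hand-written patterns.
BLOCKED_PATTERNS = [
    re.compile(r"\bnude\b", re.IGNORECASE),
    re.compile(r"\bdepicting a real person\b", re.IGNORECASE),
]

@dataclass
class Incident:
    prompt: str
    reason: str
    reported_at: datetime
    resolved_at: Optional[datetime] = None

    def response_time_seconds(self) -> Optional[float]:
        """Time from detection to resolution (step 4's metric)."""
        if self.resolved_at is None:
            return None
        return (self.resolved_at - self.reported_at).total_seconds()

class SafetyGate:
    """Screens prompts (step 2) and logs incidents and response times (step 4)."""

    def __init__(self) -> None:
        self.incidents: list[Incident] = []

    def check_prompt(self, prompt: str) -> bool:
        """Return True if the prompt may proceed; otherwise log an incident."""
        for pattern in BLOCKED_PATTERNS:
            if pattern.search(prompt):
                self.incidents.append(Incident(
                    prompt=prompt,
                    reason=f"matched blocked pattern {pattern.pattern!r}",
                    reported_at=datetime.now(timezone.utc),
                ))
                return False
        return True

    def resolve(self, incident: Incident) -> None:
        """Mark an incident handled, e.g. after a takedown (step 3)."""
        incident.resolved_at = datetime.now(timezone.utc)
```

The point of the sketch is not the filter itself but the audit trail: logging when an issue was detected and when it was resolved is what lets a company later demonstrate the "preparedness and response capabilities" regulators expect.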

The joint statement signals that regulators recognize AI harms may not always be preventable, but they increasingly expect companies to demonstrate preparedness and response capabilities. Take this into consideration when providing services that may intersect with AI-generated images.

This article provides general information only; it does not constitute legal advice tailored to your specific situation.

Every business is different. For personalised consultancy, schedule a consultation call or write to us directly at 📧 anamaria@legallyremote.online.
