
Author: Adlaw International Published: 12.09.2025
Legal challenges of deepfakes in advertising - key takeaways from the Adlaw conference in Warsaw
This year’s AdLaw Conference took place in Warsaw from 4 to 6 September. The main theme of the event was “Legal Challenges of Deepfakes in Advertising.” The conference gathered lawyers and advertising professionals who shared insights, local case studies, and practical experiences related to AI-generated fake content used in marketing communications. Representatives of AdLaw member firms from Poland, Germany, the Netherlands, Spain, Portugal, the United Kingdom, Canada, and Australia attended the meeting. The conference was organized by Łaszczuk & Partners.
Speakers
Presentations on the legal situation in individual jurisdictions were delivered by:
• Joep Meddens (Höcker Advocaten B.V., the Netherlands)
• Riccardo Rossotto (RPLT, Italy)
• Ricardo Pérez Solero (Estudio Legal de Comunicación, Spain)
• Agnieszka Zwierzyńska (Łaszczuk & Partners, Poland)
The perspective of advertising industry practitioners was presented by special guests Radek Miklaszewski and Marcin Płassowski (4/4 grupa, Poland).
The conference concluded with a fruitful discussion building on the presentations, with an exchange of experiences and practical examples of the challenges that clients, their marketing teams, and legal advisors will increasingly face in the context of deepfakes in advertising.
Topics and key takeaways
Joep Meddens, advocate and partner at Höcker Advocaten B.V., spoke about deepfakes, lookalikes, and platform responsibility in the Netherlands. According to him, deepfakes are becoming ever more sophisticated, and, as with all AI applications, this creates opportunities for both benevolent and malevolent actors. One very real risk is misleading advertising that uses deepfakes. The law offers various ways to act against such practices: action can be taken against the scammers themselves, or against those who profit from the scam, on the basis of image rights, IP rights, rules against misleading advertising, and criminal law. If the misleading actors cannot be found, or if enforcement proves difficult, acting against the platform on which they operate can be effective. Joep Meddens noted that in the Netherlands they have ample experience with these means of action, and this experience was at the core of his presentation. “At Höcker, we build on long-standing experience fighting piracy on behalf of the creative industries, as well as in-depth knowledge of advertising law. We do not mislead when we assure you Höcker can help you fight deepfakes online,” he added.
Riccardo Rossotto, advocate and partner at the Italian law firm RPLT, discussed the current deepfake situation in Italy and described deepfakes as one of the most serious threats to democracy, reputation, and trust in Europe. In his opinion, the deepfake phenomenon and manipulated media, often amplified by foreign interests, could be more dangerous to societies than an armored tank.
Drawing on recent scandals in Italy, he described how well-known women discovered their images misused in pornographic videos, causing severe reputational damage and raising complex legal challenges since many sites operate abroad. Based on his own experience, he also shared examples of attempted blackmail using fabricated videos and scenarios in which businesses or cultural events could be sabotaged by AI-generated content. While acknowledging positive uses of the technology in cinema and education, he stressed that its abuses far outweighed the benefits if left unchecked. He concluded that only a mix of stronger laws, technological safeguards, digital education, and international cooperation could effectively contain the destructive potential of deepfakes.
Agnieszka Zwierzyńska, advocate and Senior Managing Associate at Łaszczuk & Partners, in her presentation titled “Deepfakes in advertising as a violation of personal rights and privacy – the Polish case-law”, focused on current developments in this area. She presented, among other things, recent experiences (including those of Łaszczuk & Partners) related to the removal of deepfakes from online platforms, as well as the latest rulings by Polish courts concerning unlawful use of individuals’ voice and image.
She also referred to the most high-profile deepfake-related case in Poland to date – the decision of the Polish Data Protection Authority to ban Meta from publishing advertisements containing a manipulated image of Polish entrepreneur Rafał Brzoska and his wife.
In Agnieszka Zwierzyńska’s view, Polish consumers and the market are only beginning to learn how to recognize advertisements that use deepfake technology, and awareness of available legal remedies in case of violations is still not widespread.
“Łaszczuk & Partners already possesses unique know-how in combating deepfakes, supported by years of experience in the fields of competition law, advertising law, and personal data protection,” Agnieszka Zwierzyńska said.
Ricardo Pérez Solero, advocate at Estudio Legal de Comunicación, said that in Spain, as in many other Member States of the European Union, the use of deepfakes in commercial communication entails significant legal risks, given the broad and comprehensive regulatory framework applicable to advertising practices.
In recent years, various instances of deepfake technology have been produced and disseminated within the Spanish market.
While many of these uses fall within the limits of lawful advertising, there have also been cases involving the unauthorized reproduction of the likenesses of well-known individuals. Such practices constitute a violation of personality rights, in particular the right to one’s own image, and may give rise to both civil liability and, in certain circumstances, criminal consequences under Spanish law.
On March 11, 2025, the Council of Ministers approved the Draft Bill for the Proper Use and Governance of Artificial Intelligence (“Draft AI Law”), which develops the sanctioning and governance regime provided for in Regulation (EU) 2024/1689 (“AI Regulation” or “AIR”).
Very serious infringements will be sanctioned with fines ranging from €7.5 million to €35 million, or between 2% and 3% of the previous year’s global turnover if that amount is greater; in the case of SMEs, the lower of the two amounts may apply.
Without prejudice to any sanction imposed, Article 34 of the Draft Bill establishes that the offender is obliged to restore the situation they altered to its original state, and to compensate for the damage caused, which may be determined by the competent authority.
At a session on AI deepfakes in media, Radek Miklaszewski and Marcin Płassowski representing the 4/4 (Czteryczwarte) Grupa agency, an independent Polish marketing communications group, delivered a stark message: deepfakes – AI-generated video, audio, images and text – have moved from novelty to a full-blown trust crisis. Legitimate uses (film continuity, education, virtual influencers) are rising, but abuse is scaling faster: political disinformation, celebrity investment scams, fake interviews, and voice-clone social engineering.
Evidence on stage: a YouTube Shorts spoof of Poland’s President Andrzej Duda; a hacked-site video of Ukraine’s President Volodymyr Zelensky urging surrender; Elon Musk-style fraud pitches; the viral “DeepTomCruise” series; Jordan Peele’s Obama PSA; and a Russian ad featuring a deepfake of Bruce Willis’s likeness – examples spanning harm, satire, and use without consent.
Why does it matter? Marketing departments face fake endorsements and brand-voice hijacks; newsrooms struggle with real-time verification; elections face both fabricated clips and the “liar’s dividend,” where real statements can be dismissed as fake. The bottom line is simple – “trust” as a competitive moat is eroding.
The experts’ playbook is simple – pair AI detection tools with provenance and watermarking, and combine them with contingency plans embedded in staff training, crisis drills, and stakeholder media literacy.
The key takeaway: build plans for a shift from reactive denial to proactive verification before the next deepfake shockwave.
Advertising in the age of deepfakes - opportunities and risks under EU Law
The rise of AI, including deepfake technology, is transforming advertising. What was once a futuristic experiment has become a tool brands can use to create powerful, personalized campaigns. But the same technology poses serious risks: reputational attacks, consumer deception, and legal uncertainty. The European Union has begun to address these risks through new regulations.
The AI Act
• Adopted in 2024, the Artificial Intelligence Act is the EU’s first comprehensive AI regulation.
• It defines deepfakes as AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.
• It requires transparency: as of 2 August 2026, deployers (e.g. brands) will have to ensure such content is clearly labelled as AI-generated.
• Exceptions exist for art, satire, and fiction, as well as for crime-prevention purposes.
• Non-compliance carries heavy penalties – up to €15 million or 3% of global turnover.
• The Act also promotes watermarking, metadata labeling, and industry codes of practice.
The Digital Services Act (DSA)
• The DSA does not regulate AI directly, but it governs platform responsibility for illegal or harmful content.
• Very Large Online Platforms (VLOPs) must act quickly on flagged content, maintain transparency in advertising, and publish ad repositories.
• Manipulated ads or deepfakes used for disinformation may fall under these duties.
Other Initiatives
• The EU Code of Practice on Disinformation includes commitments on transparency and tackling synthetic media.
• Platforms are already testing watermarking and labeling, though practices remain uneven.
Implications for brands
In addition to existing obligations (protection of image, intellectual property, consumer protection), brands will face new transparency obligations, including the labeling of deepfakes. Transparency will not just be a regulatory requirement but also a reputational necessity.
In practice, this means:
• Clear labelling – AI-generated images, videos, or audio in ads must include visible or audible notices such as “This content was generated by AI” (from 2 August 2026).
• Watermarking and metadata – technical identifiers (invisible watermarks, metadata tags) may be embedded in creative materials to signal synthetic origin.
• Ad repositories and disclosures – on very large online platforms, AI-based ads may need to be stored in searchable repositories with information about targeting, reach, and source.
• Rights management – regardless of the new regulations, brands must verify that AI tools do not infringe third-party IP or personality rights (e.g. use of likeness, voice).
• Consumer protection – in accordance with the already existing regulations, campaigns must avoid misleading impressions that AI-generated content depicts real individuals or events.
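To make the watermarking idea above concrete, here is a minimal, purely illustrative sketch of hiding a machine-readable tag in the least significant bits of raw pixel bytes. It is not a production scheme – real provenance work relies on robust, tamper-resistant watermarks and standards such as C2PA – and all function names are hypothetical:

```python
def embed_watermark(pixels: bytearray, tag: bytes) -> bytearray:
    """Hide each bit of `tag` in the least significant bit of successive pixel bytes."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]  # LSB-first per byte
    if len(bits) > len(out):
        raise ValueError("image too small to hold the tag")
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Read `length` bytes back out of the least significant bits."""
    bits = [pixels[i] & 1 for i in range(length * 8)]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(length)
    )

# Demo: mark a dummy pixel buffer and recover the tag.
pixels = bytearray(range(200))
marked = embed_watermark(pixels, b"AI")
print(extract_watermark(bytes(marked), 2))  # b'AI'
```

Because only the lowest bit of each byte changes, the visual impact is negligible; the trade-off is that such naive marks do not survive re-encoding or cropping, which is why the regulatory discussion also emphasizes metadata labeling and visible notices.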