Deepfake Regulation—What is Next for AI Laws in the UK?

November 23, 2023

As artificial intelligence becomes more powerful, the risks it poses are growing just as fast, and nowhere is this more evident than in the rise of deepfakes. From impersonating public figures to spreading false information at scale, deepfakes are blurring the line between what’s real and what’s not. For businesses, governments, and the public alike, the consequences can be serious, ranging from reputational damage to financial fraud and democratic interference.

In the UK, conversations around AI regulation, particularly in relation to deepfakes, are heating up. But while the technology accelerates, the laws trying to keep up remain fragmented and inconsistent. With elections looming, social trust eroding, and cybercrime on the rise, many are asking: What comes next for deepfake laws in the UK? And more importantly, will it be enough?

In this article, we’ll explore:

  • What is a deepfake?
  • How deepfakes work
  • Are deepfakes illegal in the UK?
  • How to regulate deepfakes
  • The current deepfake ecosystem in the UK
  • Stealing China's ideas
  • Why won't Western democracies ban fake news?
  • The impact of deepfakes on democracy
  • Can you identify an AI deepfake using AI software?
  • How does the UK compare with other countries on AI regulations?
  • Will these restrictions and guidelines be enough?
  • What AI regulations can we expect in the UK?
  • What AI laws should be put in place?
  • Will this be enough to save us?

Let’s start with the basics.

What is a deepfake?

A deepfake is a piece of synthetic media - usually a video, image, or audio clip - that has been digitally altered using artificial intelligence to make it appear as though someone said or did something they never actually did. The term comes from a combination of “deep learning” (a type of AI) and “fake,” and it's no exaggeration to say that deepfakes are becoming one of the most disruptive technologies of the decade.

What makes deepfakes so concerning is their increasing realism. Unlike obvious hoaxes or amateur edits, deepfakes can be near-indistinguishable from real footage, allowing bad actors to create fake political speeches, fraudulent business communications, or damaging content involving public figures and private individuals alike.

This isn’t just a theoretical risk. Deepfakes are already being used in phishing scams, reputation attacks, and even election interference. For UK businesses and institutions, this means deepfakes pose both security and compliance risks, especially as they become harder to detect and easier to produce.

How deepfakes work

Deepfakes are created using a form of AI called deep learning, specifically generative adversarial networks (GANs). This involves two neural networks working against each other: one generates fake content, and the other attempts to detect whether it’s fake. Over time, the generator improves until it can produce hyper-realistic media that the detector struggles to distinguish from the real thing.

Here’s a simplified breakdown of the process:

  1. Data Collection: The AI model is trained on large datasets: videos, audio clips, and photos of the target individual.
  2. Training: The model learns to mimic the voice, facial movements, gestures, and expressions of the person.
  3. Synthesis: Once trained, the model can overlay this synthetic data onto another piece of media, creating an entirely fake but visually convincing result.
  4. Refinement: As more content is processed, the model “learns” and the deepfake becomes more seamless and believable.
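To make the adversarial idea concrete, here is a minimal, illustrative training loop in the style of PyTorch. It is a toy sketch of the generator-versus-discriminator dynamic described above, not a working deepfake system: the tiny fully connected networks, image size, and the `real_batch` input are placeholder assumptions.

```python
# Toy sketch of the GAN dynamic behind deepfakes (illustrative only).
# Assumes PyTorch is installed; the tiny networks and sizes are placeholders.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # placeholder sizes, not real video frames

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, image_dim), nn.Tanh()
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid()
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor):
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Train the discriminator: learn to tell real media from generated media.
    noise = torch.randn(batch_size, latent_dim)
    fake_batch = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_batch), real_labels) + \
             loss_fn(discriminator(fake_batch), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator: produce fakes the discriminator accepts as real.
    noise = torch.randn(batch_size, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```

Run over many iterations, the generator's output drifts towards whatever the discriminator can no longer reject, which is exactly the "refinement" stage described in step 4 above.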

Thanks to open-source tools and deep learning models, this technology is now widely accessible, meaning you don’t need a state-level lab to create convincing fake content. That accessibility is what makes deepfakes especially dangerous - they’re no longer niche or experimental. They’re here, and they’re spreading.

Are deepfakes illegal in the UK?

At present, deepfakes are not comprehensively illegal in the UK. While some laws can apply to specific uses of deepfakes, such as those involving harassment, fraud, or defamation, there is no standalone legislation that directly addresses the creation or distribution of deepfake content.

However, certain legal frameworks can come into play:

  • The Online Safety Act 2023, which criminalises sharing non-consensual intimate images, including those created with deepfake technology
  • The Fraud Act 2006, where deepfakes are used to deceive victims for financial gain
  • The Protection from Harassment Act 1997 and the Malicious Communications Act 1988, where deepfakes are used to harass, threaten, or distress
  • Defamation law, where fabricated content damages a person's reputation
  • Data protection law (UK GDPR and the Data Protection Act 2018), where someone's likeness or voice is processed without a lawful basis

In short, current UK laws are reactive, not proactive. They punish harmful outcomes, but don’t effectively prevent the creation or spread of deepfakes in the first place. This legal gap leaves businesses, public figures, and institutions exposed, and underscores the urgent need for regulatory reform.

How to regulate deepfakes

Regulating deepfakes is a delicate balance: protect society from harm without stifling innovation. While there's no one-size-fits-all solution, there are several viable approaches the UK could take to start closing the gap.

1. Mandatory Labelling of AI-Generated Content

Introducing legislation that requires any AI-generated media, especially video and audio, to be clearly labelled as synthetic would increase transparency and help the public spot manipulated content more easily.

2. Criminalising Malicious Use

A direct legal framework that makes it a criminal offence to knowingly create or distribute deepfakes with the intent to deceive, defraud, or cause harm would give law enforcement and businesses clearer recourse.

3. Content Authentication Standards

Encouraging or mandating the use of digital watermarking or blockchain-based authentication could help platforms verify whether media has been tampered with or synthetically generated.
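As a rough illustration of what a content authentication scheme could involve, the sketch below signs a media file when it is published and verifies that signature later. It uses only Python's standard library (an HMAC with a shared secret) as a stand-in for the public-key signatures or C2PA-style provenance metadata a real standard would use; the file names and secret are assumptions for the example.

```python
# Minimal sketch of content authentication: sign a media file at publication,
# then verify the signature before trusting it. A real standard would use
# public-key signatures and embedded provenance metadata; the shared secret
# here is just a stand-in to keep the example self-contained.
import hashlib
import hmac

PUBLISHER_SECRET = b"example-secret-held-by-the-publisher"  # placeholder

def sign_media(path: str) -> str:
    """Return a hex signature binding the publisher to this exact file."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return hmac.new(PUBLISHER_SECRET, digest, hashlib.sha256).hexdigest()

def verify_media(path: str, claimed_signature: str) -> bool:
    """True only if the file is byte-for-byte what the publisher signed."""
    return hmac.compare_digest(sign_media(path), claimed_signature)

# Usage (paths are illustrative):
# tag = sign_media("campaign_video.mp4")    # published alongside the video
# verify_media("campaign_video.mp4", tag)   # False if anyone re-edits the file
```

The useful property is that any tampering, including splicing in a deepfake segment, changes the hash and breaks verification. What it cannot do is prove the original content was truthful, only that it has not been altered since it was signed.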

4. Platform Accountability

Placing legal responsibility on social media companies and hosting platforms to detect, label, or remove malicious deepfakes quickly could limit their reach before damage is done.

5. Public Education and AI Literacy

No regulation will be fully effective without public awareness. Government-backed campaigns can help individuals and organisations learn how to spot deepfakes and understand the risks they pose.

The key is not just regulation, but regulation that evolves alongside the technology. Otherwise, we risk using 20th-century laws to fight 21st-century threats.

The current deepfake ecosystem in the UK

AI regulations in the UK have so far been surprisingly thin on the ground, despite repeated calls for stronger legislation, particularly when it comes to deepfakes. The government has shown a reluctance to introduce heavy-handed restrictions on artificial intelligence, aiming instead to avoid dampening innovation in the sector.

The current proposal favours a light-touch approach, with only limited AI-specific rules, many of which are expected to fall under broader online safety regulations rather than new legislation designed exclusively for AI.

One area that has been identified for targeted regulation is the use of deepfakes in political advertising. The UK government has announced its intention to legislate in this space, requiring that any deepfake content used in public political campaigns be clearly labelled. In practice, this would mean that, much like disclaimers used in commercial advertising, political ads featuring deepfake likenesses must explicitly inform viewers that the content is synthetic and not a genuine representation of the individual.

Although still in its early stages, the deepfake landscape in the UK has already seen several high-profile and controversial cases:

  • A deepfake scam video using ITV presenter Martin Lewis’ likeness to promote a fake investment service
  • AI-generated audio of Keir Starmer, in which the Labour leader appears to berate a staff member in an aggressive rant
  • Fabricated images and videos of Donald Trump being arrested, or Joe Biden and other world leaders singing on stage, all created using deepfake technology to depict scenarios that never occurred

These incidents have prompted growing concern about the technology’s potential for misuse, particularly in the lead-up to elections. While the government’s move to regulate political deepfakes is a step forward, the broader legal framework surrounding AI-generated content remains largely underdeveloped.

Stealing China's ideas

When the topic of AI regulation comes up, it’s often framed around concerns that authoritarian countries like China or Russia might exploit these technologies unchecked. But there’s an irony here: China is actually ahead of the UK, the US, and much of Europe when it comes to regulating artificial intelligence.

Chinese legislation goes much further in terms of both scope and enforcement. It imposes strict rules that Western democracies have been hesitant to adopt, particularly around the use of synthetic media like deepfakes.

Take the recent UK proposal to label deepfakes in political ads, for example. It’s quite possible that this idea wasn’t born in Westminster at all, but borrowed from China, where similar requirements were introduced as early as 2022. In fact, Chinese authorities have gone a step further, banning the malicious use of fake news altogether.

While this level of control raises serious concerns about civil liberties, it also highlights the regulatory gap between nations trying to preserve democratic norms and those moving swiftly to lock down emerging technologies.

Why won't Western democracies ban fake news? 

As deepfake technology and fake news continue to undermine trust in media and politics, a natural question arises: Why haven’t countries like the UK, the US, or other Western democracies simply banned fake news outright?

The answer largely comes down to freedom of speech. In democratic societies, the line between fake news and dissenting opinion can be dangerously blurry. What one person sees as misinformation, another might view as a genuine belief or even an inconvenient truth.

Banning fake news opens up a host of ethical and legal problems. Who decides what qualifies as fake? And what’s to stop those in power from using such laws to silence criticism?

For comparison, under China’s AI regulations, content generated by generative AI is required to “reflect the Socialist Core Values.” That means anything seen as critical of the government, such as a post claiming “The Chinese government is too oppressive”, could be labelled fake news and lead to serious punishment.

In a democratic context, giving any ruling authority the power to define what counts as fake news risks turning those same laws into tools for censorship. It’s a slippery slope: the moment criticism becomes illegal, so does healthy public debate.

That’s why, despite the challenges posed by misinformation, Western governments have been hesitant to introduce outright bans. The stakes aren’t just political - they’re constitutional.

The impact of deepfakes on democracy

One incident that reignited concerns over AI misuse in the UK was the deepfake audio clip of Sir Keir Starmer, which quickly went viral. The recording appeared to capture the Labour leader shouting and swearing at a staff member over a lost tablet. Set against the ambient background noise of what seemed like a campaign HQ, the 30-second clip mimicked a real-world setting with unsettling accuracy.

Lines like “I literally just told you... No, I’m sick of it, f*****g moron, just shut your mouth” were delivered in a convincing replica of Starmer’s voice. Within just 12 hours of being posted on X (formerly Twitter), the clip had been viewed over 1.3 million times, spreading rapidly across social media before any clarification could catch up.

What made this possible? AI voice synthesis tools like ElevenLabs can generate highly realistic speech using as little as 30 seconds of sample audio. Pulling from public speeches or media appearances, the software replicates not only tone and cadence, but even the quality of the original recording, whether echoey, muffled, or studio-grade.
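To show how little code such a workflow involves, here is a hypothetical sketch of calling a hosted voice-cloning service. The endpoint, parameters, and key below are invented for illustration and do not describe the real ElevenLabs API; the point is simply that a short reference sample plus a text prompt is all the attacker has to supply.

```python
# Hypothetical sketch of a hosted voice-cloning workflow (not a real API).
# The URL, fields, and key are placeholders used purely to illustrate how
# little input such services require: a short voice sample and some text.
import requests

API_KEY = "YOUR-API-KEY"  # placeholder
BASE_URL = "https://api.example-voice-service.com/v1"  # invented endpoint

# 1) Upload ~30 seconds of the target's publicly available speech.
with open("public_speech_sample.mp3", "rb") as sample:
    voice = requests.post(
        f"{BASE_URL}/voices",
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"sample": sample},
    ).json()

# 2) Ask the cloned voice to say whatever text you like.
audio = requests.post(
    f"{BASE_URL}/voices/{voice['id']}/synthesize",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"text": "Any sentence the target never actually said."},
)

with open("synthetic_clip.mp3", "wb") as out:
    out.write(audio.content)
```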

In one revealing demonstration, sound engineer Mike Russell deconstructed the viral clip, testing several tools before landing on ElevenLabs as the most effective. Using publicly available footage of Starmer, he was able to create an almost indistinguishable deepfake, substituting the profanity with toned-down phrases like “Where’s the forking tablet?” to demonstrate how easily these recordings can be manipulated. By layering stock background noise and tweaking audio dynamics to sound like a distant recording, the result was chillingly realistic.

The broader danger is clear: with just seconds of real audio, you can create a synthetic recording of anyone saying virtually anything. And as the technology becomes more sophisticated, detecting these fakes is becoming harder by the day.

This doesn’t just threaten reputations - it has serious legal and political implications. In court, the credibility of audio evidence may be called into question. A recording presented as proof could be dismissed by the defence as a deepfake, with no easy way to prove otherwise. In politics, the risk is even more destabilising. Imagine a climate activist “endorsing” fossil fuels, or a peace campaigner “calling for war”, not in satire, but in what appears to be their own voice.

In an environment where anyone can be made to say anything, public trust begins to erode. And when voters no longer know what’s real, making informed decisions, especially at the ballot box, becomes incredibly difficult.

Can you identify an AI deepfake using AI software?

One of the most concerning aspects of deepfake audio is how convincing it can be, even to the human ear. The viral Keir Starmer clip is a perfect example. Debate continues online over whether it was real, AI-generated, or something in between, with listeners split and no definitive answer available.

This raises a logical question: can AI help us detect what AI creates?

Some tools suggest it’s possible. For instance, ElevenLabs, one of the leading voice synthesis platforms, has a built-in feature designed to detect whether an audio file was generated using its own system. You can upload a clip, and the software will attempt to determine whether it’s a deepfake produced by ElevenLabs or genuine human speech.

But there are limitations, big ones.

First, the detection only works for content created within the ElevenLabs ecosystem. If the audio was generated using a different tool or altered in post-production, the software may not recognise it at all.

Second, and more critically, even ElevenLabs struggles to identify its own deepfakes. In tests conducted by sound engineer Mike Russell, the system was initially able to flag a piece of AI-generated speech with 97% confidence. But after adding some simple background noise and making minor audio tweaks, the software reversed its verdict, now claiming with 98% confidence that the clip was genuine human speech.

This was the exact same clip, created by ElevenLabs, now misidentified because of a few surface-level modifications.
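The kind of "surface-level modification" involved is trivial to perform. The sketch below, using numpy and Python's built-in wave module, mixes faint noise into a 16-bit WAV file and re-saves it; the file names are placeholders, and the viral clip was of course processed with proper audio tools, but even this crude transformation shifts the low-level signal statistics a detector may be keying on while leaving the speech perfectly intelligible.

```python
# Crude sketch: mix faint background noise into a WAV file. Perceptually the
# speech is unchanged, but the low-level waveform statistics are not.
# Assumes 16-bit PCM audio; file names are placeholders for illustration.
import wave
import numpy as np

with wave.open("synthetic_speech.wav", "rb") as wav_in:
    params = wav_in.getparams()
    frames = wav_in.readframes(params.nframes)

samples = np.frombuffer(frames, dtype=np.int16).astype(np.float32)

# Add noise at roughly 1% of the signal's RMS level - inaudible under speech.
rms = np.sqrt(np.mean(samples ** 2))
noisy = samples + np.random.normal(0.0, 0.01 * rms, size=samples.shape)
noisy = np.clip(noisy, -32768, 32767).astype(np.int16)

with wave.open("synthetic_speech_noisy.wav", "wb") as wav_out:
    wav_out.setparams(params)
    wav_out.writeframes(noisy.tobytes())
```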

Even more striking, when the original viral Keir Starmer clip was run through the software, the tool reported that it was real human speech. That result could mean one of several things: the clip was genuine, it was created with another tool, or the deepfake had simply been manipulated just enough to fool the detection algorithm.

What this tells us is clear: there is currently no reliable, standardised way to detect AI-generated audio with certainty. That makes the idea of regulating or legislating deepfakes incredibly complicated. How do you enforce rules around content that you can’t confidently identify?

It's like trying to censor offensive language in a language you don’t speak - you don’t know when violations are happening, so you can’t step in to stop them.

How does the UK compare with other countries on AI regulations?

One of the most common arguments against strong AI regulation in the UK is that it could stifle innovation and leave the country lagging behind less restrictive nations. The fear is that while the UK plays it safe, countries like China will charge ahead with unchecked development and gain a dominant edge in the AI race.

Yet, somewhat ironically, China is currently leading the world in AI regulation. It was the first country to introduce formal laws targeting AI and deepfake technologies, beating even the EU, whose landmark AI Act is still pending finalisation, and far outpacing the United States, where the AI “Bill of Rights” remains an advisory document, not a binding legal framework.

Of course, this doesn’t mean the fear of being outpaced is entirely unfounded. China's motivations are widely seen as self-preserving, aimed at protecting political power by controlling narratives and information. Its laws mandating the labelling of deepfakes and banning fake news likely have less to do with safeguarding society and more to do with shielding the state from dissent.

Importantly, these regulations do not prevent the Chinese government from pursuing malicious or militarised uses of AI, such as weapon development, cyberattacks, or biological research. In that sense, China's AI laws provide little comfort to the global community. They’re more about controlling internal risk than preventing international harm.

Meanwhile, in the US, the recent Executive Order on AI, building on the earlier "Blueprint for an AI Bill of Rights", calls for powerful new AI systems to undergo rigorous safety testing, particularly to ensure they can't be used to help create nuclear or biological weapons. While the intent is commendable, the scope is limited. The requirements don't apply retroactively to tools already on the market, such as ChatGPT or other generative models, which many would argue is a case of closing the barn door after the horse has already bolted.

As for the UK, it sits somewhere between these extremes. With few concrete AI laws in place, the current approach favours flexibility and innovation over enforcement. But with public concern rising and high-profile deepfake incidents making headlines, pressure is building for a more structured and globally competitive regulatory response.

Will these restrictions and guidelines be enough?

The biggest flaw in many of the proposed AI laws, whether in the US guidelines, the EU AI Act, or other national frameworks, is that they tend to focus on the technology we have today, not what’s likely to emerge in the next 5 or 10 years. That approach risks falling dangerously behind the curve.

For example, current legislation aimed at preventing AI from being used to build a nuclear bomb may sound reassuring, but it’s arguably too narrow. Future AI systems might not replicate existing weapons, they could invent entirely new forms of warfare or capabilities we haven’t even imagined yet. And if we don’t recognise these threats, we’ll have no means of detecting or regulating them.

The real danger lies not just in what AI can do now, but in what it will become. As AI systems grow in capability and autonomy, they may soon surpass our ability to understand or contain them. As Geoffrey Hinton, often referred to as the "Godfather of AI," warned in a recent interview with CNN:

“It will figure out ways to manipulate us. It will figure out how to get around the restrictions we impose on it.”

Futurist Ray Kurzweil offers a similar caution. He envisions future machine intelligence as something closer to deities, vastly more powerful and complex than any human mind. In his analogy, the relationship between humans and future AI may resemble that of a mouse living in the walls of your house: the mouse may have opinions, but they pose no threat, and barely register in your world.

This metaphor highlights a sobering truth: human laws and moral frameworks may become irrelevant to entities operating on a completely different level of intelligence. Trying to enforce compliance could become as futile as expecting a dormouse to influence global policy.

If future AI can bypass our laws as easily as stepping over a crack in the pavement, then the regulatory efforts we’re making today, while necessary, may ultimately be insufficient. The challenge ahead isn’t just about rule-setting. It’s about confronting the limits of human control in a world we may soon no longer fully understand.

What AI regulations can we expect in the UK?

When it comes to AI regulation, the UK has so far lagged behind countries like the US, China, and the EU. This isn’t due to a lack of capability, but rather a deliberate choice. The government has taken a cautious, pro-innovation stance, aiming to avoid stifling creativity or slowing down growth in the UK’s tech sector by introducing restrictive laws too early.

That said, regulation is coming, and businesses should start preparing now.

In March 2023, the UK Government released its white paper titled “A Pro-Innovation Approach to AI Regulation.” Rather than setting hard rules, the paper outlines a set of guiding principles around safety, transparency, and accountability. Much like the US’s AI “Bill of Rights,” it stops short of creating enforceable legislation and instead presents a flexible framework.

The current plan is to let individual industries oversee AI regulation within their own sectors. In other words, instead of centralised legislation, the government expects sector-specific regulators to keep AI use in check. However, this hands-off approach has already raised concerns. Feedback from both industry leaders and regulators suggests that more direct government involvement may be necessary, especially to ensure consistency and accountability across the board.

If this self-regulation model proves ineffective, the government may step in with a “duty to consider” clause, requiring regulators to take into account AI principles relating to safety, fairness, and security. But even this would be a far cry from legally binding rules.

In its current form, the UK’s strategy could best be described as regulatory light-touch, or even laissez-faire. While the government promotes it as pro-innovation, critics argue it creates a “Wild West” environment where companies effectively regulate themselves, and guidelines are optional rather than enforceable.

The lack of concrete legislation leaves a gap that malicious actors may well exploit, whether through financial scams, misinformation campaigns, or AI-driven cybercrime. As the legal framework evolves slowly, businesses need to be proactive, developing internal safeguards, monitoring AI use across operations, and staying alert to emerging risks.

What AI laws should be put in place?

So far, much of the AI regulation proposed in the US and EU has centred around issues of algorithmic bias and fairness. This includes preventing discrimination in facial recognition software, which is often less accurate for certain racial groups, and addressing historic cases like Amazon's hiring algorithm, which infamously favoured male candidates for technical roles.

While these are critical areas to address, they also focus primarily on existing technology. Given the rapid pace of development in machine learning, it’s not unreasonable to argue that regulation needs to go further, anticipating not only today's risks but tomorrow’s realities.

There are a number of laws the UK could consider enacting now to help future-proof society against emerging threats. These might include:

  • Mandatory labelling of deepfakes in political advertising, already proposed by the UK Government
  • The right to know if you are interacting with an AI system or a human, especially in customer service or public communication
  • Consent requirements for using someone’s likeness in deepfake or AI-generated audio/video content, particularly for public figures and actors
  • A modernised, enforceable version of Asimov’s Laws of Robotics, where AI systems must be programmed not to harm humans or, through inaction, allow harm to occur
  • US-style safeguards that require AI tools to undergo risk testing, ensuring they can’t be repurposed for biological or nuclear weapon development

These laws wouldn’t eliminate every risk, but they would begin to set ethical and legal boundaries around how AI can be developed and used, especially as its power and influence continue to grow.

Will this be enough to save us?

Probably not. But maybe that’s the wrong question.

So far, most conversations around AI legislation have focused on how to limit or restrict AI; what rules we can impose, what capabilities we should contain. These are valid concerns. But perhaps we should also be asking: what can we do that machines can’t?

Rather than spending all our energy trying to weaken AI, we may need to focus on strengthening what makes us uniquely human. AI can analyse data, generate content, and simulate conversation, but it still lacks real emotional intelligence, empathy, morality, and creativity in its truest form.

These human qualities, our ability to care, to connect, to feel, might be our most valuable assets in a future shaped by machine intelligence. Instead of fearing replacement, we could focus on becoming more human, not less. That’s where our advantage lies.

Final thoughts

The rise of deepfakes and the broader challenges posed by rapidly advancing AI aren’t just political or ethical concerns; they’re operational ones. For businesses, the ability to recognise threats, protect digital assets, and respond to evolving risks is becoming a fundamental part of day-to-day resilience. While legislation will eventually catch up, organisations can’t afford to wait. 

Now is the time to strengthen your defences, invest in intelligent systems, and seek expert IT guidance to stay ahead of the curve. In the battle against deepfakes and digital deception, having the right technology and the right support could make all the difference.

Contact us at Lyon today to see how we can help your business.
