Deepfake Regulation—What is Next for AI Laws in the UK?

Nov 23, 2023

AI regulations in the UK have been surprisingly thin on the ground, despite many calls to strengthen legislation in this area. 

The UK Government has so far been reluctant to impose heavy restrictions on artificial intelligence in order to avoid curtailing or dampening innovation.

The Government has indicated that it will opt for a light-touch approach, with few AI-specific rules beyond what general online safety legislation already covers.

One area where legislation has been announced is the use of deepfakes in political ads.

Whilst the technology is still in its infancy, there have already been a number of controversial, malicious uses of deepfakes that have caused a stir. These include:

  • A deepfake scam using the likeness of ITV’s financial advice presenter Martin Lewis to endorse a fake investment service
  • AI-generated audio of Keir Starmer in which the leader of the opposition appears to berate a staff member in an abusive rant
  • Fabricated images of Donald Trump being arrested, and of Joe Biden and other world leaders singing on stage, depicting them in situations that never took place

 

The UK Government has announced that it intends to regulate the use of deepfake technology in political ads and to require that any public use be clearly labelled.

So, in much the same way that a commercial advertisement carries a disclaimer, any ad using a deepfake likeness would need to make this clear to the viewer, rather than trying to fool the audience into thinking the political figure had actually made the statement.

Stealing China's Ideas

When AI regulations are brought up, they are often framed in the context of countries such as Russia and China getting hold of the technology to use it for nefarious and unregulated purposes. 

Ironically, Chinese legislation on artificial intelligence is much further along than that of the UK, the US, or Europe.

It also goes much further in terms of the rules, imposing restrictions that are far more stringent than anything planned in Western democracies. 

This notion of labelling deepfakes in political adverts may well have been pinched from China, which had already started putting such rules in place as early as 2022. Chinese authorities have gone a step further by banning the malicious use of fake news altogether.

Why Won't Western Democracies Ban Fake News? 

The rise of deepfake technology and fake news has created significant problems for various organisations and political processes.

So, why has fake news not been banned in the UK and other Western democracies such as the USA and the countries of Europe?

The main reason democratic governments are reluctant to ban fake news is the impact on free speech. One person's fake news is another's revelation or original idea.

Banning fake news when it cannot be clearly defined is dangerous territory to get into, in terms of freedom of speech.

For example, suppose I write a social media post in China under their legislation saying something like “The Chinese Government is too oppressive”. The authorities can look at that, decide that they are actually very benign and kind rulers, declare the post fake news, and have me imprisoned or otherwise punished for my transgression.

The Chinese regulations on AI, quickly drafted in the wake of the recent boom in the technology, explicitly state that “content generated through the use of generative AI shall reflect the Socialist Core Values”.

Fake news is quite a subjective concept. Even with the best attempts at objectivity, if you give one ruling power the overall authority to decide which news is fake, there is a natural tendency for any views counter to its own, or critical of its government, to end up in the category of “fake news”.

In this way, legislation that bans fake news can be used as a weapon to stifle or censor free speech and to prevent the normal debate that is part of the democratic process.

Current Impact of Deepfakes on Democracy 

One key event that has triggered calls for increased scrutiny and regulation of AI is the deepfake audio file of Sir Keir Starmer that recently went viral. 

The clip appears to have been recorded at an event or campaign HQ, complete with background noise. Over the space of around 30 seconds, Sir Keir Starmer can be heard ranting and swearing, supposedly at a member of staff who has lost a tablet.

“I literally just told you,” the fake version of Keir Starmer whines. “No, I'm sick of it, f*****g moron, just shut your mouth”, and so on.

Within 12 hours of being released, the clip had been viewed over 1.3 million times, with users sharing and reposting the original post on X (formerly known as Twitter).

With software such as ElevenLabs, it is possible to generate a convincing replica of a person's speech using only 30 seconds of sample audio taken from a news clip or public speech. 

Of course, the output reflects the audio sample used as input: if the original audio is echoey, the deepfake version will be too; if a studio-quality recording is used, the deepfake audio will be studio-quality as well.
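
To illustrate how little effort is involved, here is a minimal sketch of cloning a voice and generating new speech through the ElevenLabs REST API. The endpoint paths and field names reflect the public API as it stood in late 2023 and may since have changed; the API key, file names, and model name are placeholders, and legitimate use requires the speaker's consent.

```python
import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"  # placeholder
BASE = "https://api.elevenlabs.io/v1"
HEADERS = {"xi-api-key": API_KEY}

# Step 1: create an "instant" voice clone from a short audio sample
# (file name is a placeholder for roughly 30 seconds of sampled speech).
with open("voice_sample.mp3", "rb") as sample:
    resp = requests.post(
        f"{BASE}/voices/add",
        headers=HEADERS,
        data={"name": "demo-clone"},
        files={"files": ("voice_sample.mp3", sample, "audio/mpeg")},
    )
resp.raise_for_status()
voice_id = resp.json()["voice_id"]

# Step 2: synthesise arbitrary new speech in the cloned voice.
resp = requests.post(
    f"{BASE}/text-to-speech/{voice_id}",
    headers=HEADERS,
    json={"text": "Any words you like, in the cloned voice.",
          "model_id": "eleven_multilingual_v2"},
)
resp.raise_for_status()
with open("cloned_speech.mp3", "wb") as out:
    out.write(resp.content)  # the API returns the audio as MP3 bytes
```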

There is an interesting video online in which sound engineer Mike Russell attempts to deconstruct, identify, and replicate deepfake audio of the kind used in the Keir Starmer example.

Various software packages are tested for the purpose, with ElevenLabs settled on as providing the most convincing version.

It doesn't take long for Russell to rustle up a very realistic replica using audio taken from Keir Starmer's TV appearances, albeit with the swearing replaced, so there's a lot of “where's the forking tablet, I'm forking sick of it”, and so on.

Lifting stock background noise, applying it to the audio file, and reducing the volume of the main speaker as if recorded from a distance gives the dialogue an authentic feel. When you listen to either of these deepfake audios, it is impossible to tell whether you are hearing the real speaker or not.
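
That kind of post-processing takes only a few lines with an audio library such as pydub. A minimal sketch, assuming hypothetical local files for the synthetic speech and a stock noise bed (ffmpeg is needed for MP3 support):

```python
from pydub import AudioSegment

# Load the synthetic speech and a stock background-noise track
# (both file names are placeholders).
speech = AudioSegment.from_file("cloned_speech.mp3")
noise = AudioSegment.from_file("crowd_noise.mp3")

# Quieten the speaker, as if recorded from across a room.
speech = speech - 12  # attenuate by 12 dB

# Loop the noise bed to at least the length of the speech,
# trim it to match, drop its level, and mix the two together.
loops = len(speech) // len(noise) + 1
bed = (noise * loops)[: len(speech)] - 6
combined = bed.overlay(speech)

combined.export("speech_with_ambience.mp3", format="mp3")
```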

The dangerous element of deepfake audio is that, from 30 seconds of random words, you can produce a convincing recording of a person saying any specific phrase or lines you want them to read out, and nobody will be able to tell the difference.

This could have a huge impact on the evidence admissible in court: the defence could claim any recording is a deepfake, and there would be no way of countering or disproving that assertion by saying, no, this is definitely real, it was the defendant speaking those words.

Of course, in politics there could be huge ramifications for the democratic process when lifelike versions of leading figures can be seen making arguments that are the opposite of what they stand for or campaign on.

A climate change activist could be shown at an opening ceremony of a coal power plant giving a speech about how great fossil fuels are, or a campaigner for peace calling for war.

With nobody believing anything that is said, it becomes very difficult to make a properly informed and educated decision on political matters, such as who to vote for in an election.

Can You Identify an AI Deepfake Using AI Software?

A human being may be unable to distinguish a deepfake from genuine speech. The debate over whether the Keir Starmer audio was real still rages on, with some convinced it is the real deal and others sure it must be fake or made with generative AI.

Can we not, however, use these AI tools to police themselves effectively and identify when they have been used through detailed analysis of the sound file? 

In fact, ElevenLabs includes this exact feature: you can feed an audio file into it and it will attempt to identify whether the clip was created in ElevenLabs, and is therefore a deepfake, or whether it is more likely a real person speaking.

It should be stressed, however, that it only claims to identify deepfakes created using ElevenLabs' own software; it won't work if the deepfake audio was created on a different platform.
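
At the time of writing, ElevenLabs offered this classifier as a web tool rather than a documented endpoint, so the sketch below is purely illustrative: the URL and response field are hypothetical stand-ins showing the shape of the workflow, not a real API.

```python
import requests

# Hypothetical classifier endpoint. ElevenLabs' AI Speech Classifier
# was a web tool; this URL and response schema are invented purely
# for illustration.
CLASSIFIER_URL = "https://example.com/v1/speech-classifier"

def probability_ai_generated(path: str) -> float:
    """Submit an audio file; return the claimed probability it is AI-generated."""
    with open(path, "rb") as audio:
        resp = requests.post(CLASSIFIER_URL, files={"audio": audio})
    resp.raise_for_status()
    return resp.json()["probability_ai_generated"]  # hypothetical field

print(probability_ai_generated("speech_with_ambience.mp3"))
```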

Also, before you run down the street cheering for the saviour of humanity and democracy, it should be noted that this deepfake-identification feature has, like many things, been somewhat oversold: the main reason being that it doesn't reliably work and can be tricked very easily.

For example, in the tests performed by sound engineer Mike Russell, the sample audio he created was initially recognised as 97% certain to have been created using ElevenLabs: all good so far. Yet it did not take much digital trickery, adding some background noise and tinkering with some frequencies and components of the audio, before the software changed its mind and declared itself around 98% sure it was real human speech.

Bear in mind that this was audio that had just been manufactured using the very ElevenLabs software the tool can supposedly detect and identify.

Also, when the original viral post was played to the software, it said it was real human speech. This could mean it wasn't a deepfake at all, that it was made with different software, or even that it was made with the same software plus a small amount of digital trickery.

So there is by no means a clear answer or a definitive way to identify deepfake audio. If it is legislated against, how would this be enforced if we can't even tell when it is happening?

It would be like trying to censor offensive language on a Spanish TV show when you don't speak any Spanish: you don't know when the transgressions are taking place, so you cannot effectively police the broadcast or identify any issues.

How Does the UK Compare With Other Countries on AI Regulations?

During the debate on AI regulations, the point is often made that if we heavily legislate on artificial intelligence and curtail innovation in this field then we will be vastly outstripped by countries such as China enjoying unregulated use of AI.

It is perhaps quite odd, then, that China is effectively the global leader on AI regulations, having been the first to bring them in: slightly ahead of the EU, whose comprehensive AI Act is expected towards the end of 2023, and the US Blueprint for an AI Bill of Rights, which is more an advisory set of guidelines than legally binding rules.

This does not mean however that the threat is not real or the argument was flawed in some way.

The regulations China has introduced to label deepfakes and ban fake news are most likely self-serving, aiming to protect the government's own position in power and to prevent it being undermined by fake news or AI programs.

There is nothing to stop the Chinese Government from carrying out malicious or military uses of AI such as creating a new weapon or type of malware or biological virus. There is no real protection for the people of China or any other country as a result of these laws.

In the US, the White House has recently issued an Executive Order on AI (alongside its Blueprint for an AI Bill of Rights) recommending that any new piece of software undergo stringent tests to ensure it cannot be used to make a nuclear bomb or a biological weapon.

While this may have a distinctive American flavour, it perhaps reflects the naivety of humankind more than just that of the US Government. 

In itself, it may be quite a wise move to try to prevent future AI software from being used to create nuclear or biological weapons. There is an issue of impracticality, however: software already on the market, such as ChatGPT, would not be subject to these guidelines, so there is an argument that this is closing the barn door after the horse has bolted.

Will These Restrictions and Guidelines Be Enough?

The main issue with the US guidelines and the EU AI Act is that they do not address the true threat of AI and, like most legislation that will come through, they are based on the technology currently on the market, not on what will be possible in five or ten years.

Attempting to impose restrictions on AI to prevent it from developing a nuclear bomb could be seen as relatively short-sighted when the AI of the future could create a completely new weapon that may have been previously unheard of, with no way of detecting it or recognising it as a weapon. 

The real danger is that we have created something that will soon be much more intelligent and capable than ourselves. 

As the “godfather of AI” Geoffrey Hinton points out in an interview with CNN, “It will figure out ways to manipulate us. It will figure out how to get around the restrictions we impose on it.”

Futurist Ray Kurzweil imagines the machine intelligence of the future being more like gods. In the same way that a mouse living in the skirting board cannot impact or threaten your life with its views, future AI systems may be largely untouched by our human laws and attempts at control; the dynamic of power and comparative intelligence will be the same.

A dormouse does not have sufficient capability or intelligence to attempt to control us, and trying to control the AI of future generations may be a similarly gargantuan task.

Navigating and circumventing our feeble human laws and interventions will be like stepping over a puddle or a crack in the pavement for any future AI whose intelligence far outweighs the human mind.

What AI Regulations Can We Expect in the UK?

The UK has lagged behind countries such as the US and China, and behind the EU, in terms of AI legislation. This is mostly down to a reluctance to stifle creativity and innovation in the market by imposing strict rules, as opposed to any inability to legislate in this area.

AI regulation is coming, however, and preparing businesses for this eventuality will be a key consideration. So what sort of AI-based laws can we expect in the UK?

The Government’s proposals for future regulatory reform were set out in the March 2023 white paper, ‘A pro-innovation approach to AI regulation’.

Similar to the US Blueprint for an AI Bill of Rights, the UK proposals are more guiding principles that set out a framework than specific rules defined in law.

It is expected that individual industry sectors will scrutinise the use of AI through their own regulatory bodies, and that the Government will not get too involved in making specific AI laws, although feedback from industry leaders and regulators suggested it probably should.

The most the Government will offer in this regard is that, if the system of industries regulating their own AI use does not work as hoped, it will introduce a duty requiring regulators to consider the guidelines' principles of safety, security, accountability, and fairness.

In terms of legislation, the UK's position is quite wishy-washy. The Government claims to be pro-innovation, but the approach could also accurately be described as Wild West: no rules or specific laws governing the actions companies take, leaving them to make up the rules for themselves. There is a set of guidelines they don't need to follow, but maybe regulators will be told to keep them in mind somewhere down the line.

With no real AI-based laws in place, there is a good chance that malicious actors will use AI for a whole host of criminal and harmful activities, and businesses will need to be prepared for this eventuality.

What AI Laws Should be Put in Place?

The measures in the US and EU focus mostly on protection from bias in data sets and on ensuring that citizens are not unfairly treated as a result of AI software: for example, facial recognition software that is biased or ineffective for certain racial groups, or the recruitment algorithm used by Amazon that selected only men for a senior position within the company.

These measures focus mostly on potential issues from technology that is currently available. This may seem like a sensible thing to do but the pace and acceleration of development in machine learning may mean that we need to legislate now for things that don’t exist yet.

Looking purely at the concerns arising from the technology available to us now, there are certain laws that could, and arguably should, be put in place to protect citizens in an AI-driven future. These may include:

  • Deepfake labelling on political ads, as suggested by the UK Government
  • The right to know whether you are dealing with an AI bot or a real human
  • A requirement for companies to obtain the permission of an actor or public figure before using their likeness in deepfake or AI-generated audio or video
  • A law similar to Asimov's Laws of Robotics, under which machines follow a program that will not allow them to harm a human or, through inaction, allow a human to come to harm
  • US-style restrictions or tests to prove that an AI tool cannot be used to make biological or nuclear weapons

 

Will This be Enough to Save Us?

Most likely not, but perhaps we are looking at this problem the wrong way round. Most considerations so far have been about how governments around the world can lessen the effectiveness of AI if needed: what can we do to limit or inhibit AI systems for our own safety?

Maybe we should look at this from the opposite angle: instead of finding ways to limit the power of AI, we should focus on capitalising on the values that make us human, on how we can set ourselves apart from the machines and remain competitive and useful.

Machine learning is not capable of experiencing emotions or true feelings, and these could be the key selling points of the human race, ensuring we can never be fully replaced. Instead of trying to reduce the influence of AI and make it less human-like, perhaps we should focus on making ourselves more human and less like robots in the first place.

 

Contact us at Lyon today to see how we can help your business.