The increase in AI-generated deepfakes is creating unease in the UK as fears of scams and danger rise, data from YouGov shows.
Most people in the UK are not confident in their ability to identify a deepfake across audio, images, video or text, according to statistics from June 2024, with just 9% saying they were confident they could spot one.
The growing use of AI-generated deepfakes has created a sense of worry among the public that they will not be able to adapt.
Content creator Joanne Nattendo said: “There’s too many dangers.
“People have come out saying their faces have been used for nudes, and you can film people saying stuff they never said.
“With deepfake images and videos, people can easily get away with scamming.
“You can get accused of saying stuff because there’s a video of you – and it wasn’t you.
“Everybody is vulnerable – if you’ve got a social media account with all your pictures, someone can use your pictures.
“It’s scary in my opinion.”
Twenty-three-year-old Nattendo is in the age group in which 19% believe they would be confident identifying deepfakes.
She said the statistic was no surprise, as AI is developing faster than humans can keep up with.

She said: “I have seen a news page on TikTok and in the background they would have a news setup and then audio of someone talking.
“And there was a video about a teacher who had been dismissed because he was swearing at the children, and it looked real, and a lot of people believed it.
“It was set up as a news page and it used the AI voice and I think when you use that, people think you are legit, but it was all fake.”
The data shows the majority of online content suspected to be deepfakes is video, which could be worrying for a world that is becoming more digital.
Nattendo said: “Video is the worst to deceive the public.
“If someone makes a video of me saying stuff that I haven’t said, things that could be really harmful, dangerous or, making threats to others, I could get attacked for something I didn’t say.
“That’s definitely the most dangerous, because who is going to believe you? You’d say, ‘oh it’s AI’, but who’s going to believe you? No-one’s going to buy it.”
Those surveyed had encountered suspected deepfakes more than 500 times in the past six months.
Content creator Annabelle Palmer, 28, said: “I often see deepfakes on my TikTok For You page, and a lot of the time it’s very difficult to recognise it’s AI.
“It does make me scared of what is to come, and I often question, how far will AI go – sometimes I think, in about 10 years’ time it will be extremely difficult to tell the truth from a lie: in social media, politics, scam calls – everything.”
As artificial intelligence grows in popularity, creatives fear the lack of laws to protect their work from being stolen and used as a deepfake is a risk to the industry.
Writer Christina Alagaratnam uses AI to design posters for her plays and theatre shows, and her book covers, but says there is not much protection provided for writers and creatives.

She is not surprised by the lack of confidence in identifying AI deepfakes, as many people are not aware of what to look for: AI is a more advanced version of the Photoshop edits and touch-ups people are used to.
She added that although people may know something is AI, the general public would still need protection.
The 32-year-old said: “It’s very easy for our work to get stolen, and I just think there needs to be some kind of protection for artists and creatives, so that if anyone tries to copy it, it’s more than just plagiarism.
“There is a system that I think a lot of writers are doing now, especially if you are self-publishing, which is saying ‘I do not give permission for my voice to be used or my writing style to be used’ because anyone can take it and put it through ChatGPT or an AI software and copy your style of writing.
“So it’s actually quite scary but it’s something that needs to be protected.
“Someone can say, ‘write me a story in the voice of Christina Alagaratnam’ and it will do it.”
According to the statistics, people aged 16 to 44 were more confident in their ability to identify AI-generated deepfakes than those aged 45 and above, with the majority saying they had most often seen humorous deepfakes, scam advertisements or political deepfakes.
Last year there were concerns among the public regarding Reform candidate Mark Matlock: some assumed he was an AI-generated deepfake after he edited a photo of himself.
Incidents like this add to public worry, because wrongly assuming an edited photo is certainly a deepfake creates confusion about what is real and what is fake, which is especially concerning in politics, a field that shapes society.
Matlock defended himself against the claims, and the Reform Party denied any use of AI.
Alagaratnam said: “I think it’s used to lure people onto a certain side politically.
“Particularly, if one has an agenda, they will use AI to create something, for example, ‘oh, look what’s going on in your neighbourhood’, and then some people might believe it.
“It is very manipulative, and I think that it just persuades people to vote for something that is according to their agenda.
“It’s messing with people’s minds – I know politics can be a manipulation itself, but AI just takes it to another level.”
She added that in the past politicians had to go out, campaign and meet the public, but now they can simply superimpose themselves online and say they have visited soup kitchens, charities or schools when they have not.
Many people use AI to create a utopia demonstrating their true desires, as President Donald Trump did when a video of his vision for Gaza was shared on Truth Social.
Alagaratnam said: “The utopia was interesting because even though you know it’s AI, it’s a vision and an image that is placed in the minds of people and it can make them think ‘that could be the future.’”