
AI-generated deepfakes are a worry for the UK public

The increase in AI-generated deepfakes is creating unease in the UK as fears of scams and danger rise, data from YouGov shows.

Most people in the UK are not confident in their ability to identify a deepfake if they saw one, whether in audio, images, video or text, according to statistics from June 2024.

The increasing use of AI-generated deepfakes has created a sense of worry among the public that they will not be able to adapt.

Content creator Joanne Nattendo said: “There’s too many dangers.

“People have come out saying their faces have been used for nudes, and you can film people saying stuff they never said. 

“With deepfake images and videos, people can easily get away with scamming.

“You can get accused of saying stuff because there’s a video of you – and it wasn’t you.

“Everybody is vulnerable – if you’ve got a social media account with all your pictures, someone can use your pictures.

“It’s scary in my opinion.”

Twenty-three-year-old Nattendo is in the age group in which 19% believe they would be confident in identifying deepfakes.

She said this statistic was no surprise, as AI is developing at a pace that humans are struggling to keep up with.

(Image: a selfie of Joanne Nattendo in a games arcade)
Content creator Joanne Nattendo says the creation of AI-generated deepfakes is dangerous and needs to be controlled

She said: “I have seen a news page on TikTok and in the background they would have a news setup and then audio of someone talking. 

“And there was a video about a teacher who had been dismissed because he was swearing at the children, and it looked real, and a lot of people believed it. 

“It was set up as a news page and it used the AI voice and I think when you use that, people think you are legit, but it was all fake.” 

The data shows the majority of online content suspected to be a deepfake is video, which could be worrying for a world that is becoming increasingly digital.

(Embedded TikTok video from Nattendo’s account @joanne.reacts, warning followers to stay alert to scammers and fraud)

Nattendo said: “Video is the worst to deceive the public.

“If someone makes a video of me saying stuff that I haven’t said, things that could be really harmful, dangerous or, making threats to others, I could get attacked for something I didn’t say. 

“That’s definitely the most dangerous because who is going to believe you? You’d say, ‘oh it’s AI’, but who’s going to believe you? No-one’s going to buy it.”

As artificial intelligence grows in popularity, creatives fear that the lack of laws protecting their work from being stolen and used in deepfakes is a risk to the industry.

Writer Christina Alagaratnam uses AI to design posters for her plays and theatre shows and to create book covers, but says there is not much protection for writers and creatives.

(Image: a photo of writer Christina Alagaratnam)
Writer Christina Alagaratnam says the rise in deepfakes could welcome scams and is a threat to the creative industry

The 32-year-old said: “It’s very easy for our work to get stolen and I just think there needs to be some kind of protection for artists and creatives that if anyone tries to copy it, it’s more than just plagiarism.

“There is a system that I think a lot of writers are doing now, especially if you are self-publishing, which is saying ‘I do not give permission for my voice to be used or my writing style to be used’ because anyone can take it and put it through ChatGPT or an AI software and copy your style of writing. 

“So it’s actually quite scary but it’s something that needs to be protected. 

“Someone can say, ‘write me a story in the voice of Christina Alagaratnam’ and it will do it.”

According to the statistics, people aged 16 to 44 were more confident in their ability to identify AI-generated deepfakes than those aged 45 and above, with the majority saying humorous content, scam advertisements or political deepfakes were what they had seen most.

Alagaratnam said: “I think it’s used to lure people onto a certain side.

“Particularly, if one has an agenda, they will use AI to create something for example, ‘oh, look what’s going on in your neighbourhood’ and then some people might believe it.

“It is very manipulative and I think that it just persuades people to vote for something that is according to their agenda.

“It’s messing with people’s minds – I know politics can be a manipulation itself but AI just takes it to another level.”

Last year, there were concerns among the public about Reform candidate Mark Matlock: some assumed he was a deepfake created by AI after he had edited a photo of himself.

Matlock defended himself against these claims and The Reform Party denied the use of AI.

Alagaratnam said: “I think young people know it’s AI but at the same time they can’t help but be mesmerised by it. 

“You know it’s not real, but you just can’t stop looking at it even though you know there’s no such thing.

“The general public would need protection from scammers.

“The main thing that they would use AI for is to get money off people, especially the elderly.”

Featured image: permission to use
