Unregulated generative AI will have dire consequences, London academics and charity workers have warned.
Members of the Institute for Artificial Intelligence at King’s College London (KCL), as well as project leads at domestic abuse charities Refuge and Solace, have noted the potential dangers of generative AI, artificial intelligence designed to produce text, images or videos by applying machine learning techniques to large collections of data.
While other generative AI products, such as Meta’s Vibes and Google’s Veo 3, have already been released in the UK, Sora 2 has yet to launch there as OpenAI works to comply with European and UK law.
Dr Isabela Parisio, postdoctoral research associate at KCL, said: “AI brings up a whole debate about revisiting our values as a society.
“Nobody wants to go there, but it’s a discussion we need to start having, because we have already seen children commit suicide.”
While making clear that proper use of AI can greatly benefit education and productivity, Dr Parisio explained the effects of its frequent misuse.
She said: “It can deepen the digital divide that we already have, enhancing the qualities that are already there for lots of technical reasons, including bias and discrimination.”
Other concerns raised included privacy, misappropriation of protected data, security, fraud, child abuse, copyright, misinformation, misrepresentation of historical facts, and sustainability.
Dr Nessa Keddo, senior lecturer in media, diversity and technology at KCL, is researching a paper on how people are using AI-generated content, particularly in shareable formats on TikTok, to create racially charged videos.
She said: “It’s looking at how the technology is being experimented with in wish fulfilment scenarios, and looking at who is creating these videos, and the purpose of them being created.”
One example, Dr Keddo explained, is generative AI ‘clap back’ videos, in which Veo 3 is used to fabricate footage of minorities being verbally and, in some cases, physically attacked.
She said: “I think what we’re going to see in London, in the UK, is our likeness being used without our permission.
“And, of course, we don’t have a massive team of lawyers behind us to take these big companies to court.”
A study published by Health Psychology Research in January 2025 found that continuous TikTok use could lead to addiction, with negative effects such as a lack of spatial/temporal awareness, procrastination and impaired self-control.
A literature review by Jonathan Haidt, Zach Rausch and Jean Twenge found that the majority of studies reported an association between social media use and poor mental health outcomes in teenagers.
Sora 2’s user interface is similar to TikTok’s, allowing users to swipe through, share, like and comment on AI-generated videos made in-app.
Larome Hyde, children and technology project lead at Refuge, said: “When I speak about it from a child’s perspective, we’re now in a day and age where technology has infiltrated their, and our, daily lives.
“In terms of the impact it has on the survivor and being able to reach out, we have seen instances and heard of serious outcomes where we are not able to hear their story until the very end, when they have taken their life – that is the impact it can have.”
Sasa Onyango, head of operations for children and young people services at Solace, said: “It’s not just about deepfakes and sexual violence, which obviously is highly problematic, but any other information – everybody can just create content that they want, put it out there, and people are not going to question it.
“If you are bombarded by that all the time, it leads to quite a lot of radicalisation of young people at the moment, with the manosphere and incel ideology.
“There are lots of positives to social media… But the saddest part is that actually that is so sidelined because of the negative side that is so much more pervasive and so much more impactful to all of us.”
Non-consensual pornography accounted for 96% of deepfakes online, a 2019 study by Amsterdam-based cybersecurity company Deeptrace found.
AI companies have taken steps to address some of these concerns.
Google’s Veo 3 embeds identification markers and watermarks to help make clear what has been created by generative AI, and the company has announced the launch of an online service which can verify videos.
Google have made clear that, while they respect the intent of user prompts and their tools may therefore create content that offends when instructed to do so, they are staying vigilant.
They said: “At times, some prompts can challenge these tools’ guardrails and we remain committed to continually enhancing and refining the safeguards we have in place.”
However, regulation and education regularly came up as potential safeguards against the risks of generative AI, and while the government’s Online Safety Act was mentioned as a step in the right direction, the consensus was that more needs to be done.
Dr Parisio said: “One thing is capacity building within the public sector, specifically with the regulators themselves. Here in the UK, we need to ensure that these regulators are going to have the proper training and the proper skills.
“The government talks a lot about an assurance ecosystem.
“We need to ensure that this covers the whole AI life cycle and take measures to prevent things such as greenwashing and just box-ticking exercises.
“One of the government’s goals is economic growth, and this is a big part of the AI action plan.
“Growth is good, but it cannot come at all costs.”
Dr Keddo said: “I think there needs to be more onus in terms of education and accessibility of information.
“When you go on OpenAI’s website, for example, and you’re looking for this information, it really is dug down and hidden: it’s not very clear, it’s not in user friendly language and it’s dotted around in different places.
“In terms of who is doing the educating, the government are trying to do things, but it’s just like academia: very slow.
“A report on all of this probably won’t be released until they’ve done an in-depth consultation, which won’t happen until next year, and by then it won’t be relevant, because the technology will have completely changed.”
Onyango agreed that the government has been slow to respond.
She said: “Young people are definitely asking for more protection from government, from parents, from educators, from everyone.
“But I do find the UK Government quite slow in regards to anything that comes up around technology-based abuse.
“I don’t think the Online Safety Act goes far enough or is strong enough.
“We cannot expect tech companies to safeguard people, because we are not the users. We are their products.”
Meta, OpenAI and the Department for Science, Innovation and Technology have been approached for comment.
When life is difficult, Samaritans are here – day or night, 365 days a year. You can call them for free on 116 123, email them at [email protected], or visit samaritans.org to find your nearest branch.
Featured image credit: cottonbro studio via Pexels