
Fake Image, Real Pain: The Wider Impact of Computer-Generated Images


It’s not just AI that hurts. 

When a friend of mine got a message saying his face had been used in a porn clip, he assumed someone was winding him up. It turned out to be true. 

Someone he didn't know had taken a photo from his social media and crudely pasted it onto a sexual image. It wasn't sophisticated, and it made no attempt to be, but another person had already seen it and sent it to him because they thought it was funny. In doing that, they had unknowingly committed an offence under the new laws on non-consensual sexual images. So not only had the person who created the image broken the law; the person who shared it had too. 

My friend hadn’t been touched, but he felt exposed, embarrassed and angry. He didn’t know who had made the images, and he started worrying about who else might have seen them and whether more would be created. 


Deepfakes: a new route to old harm 

At the StopSO conference in London last week, there was a lot of talk about how realistic AI-generated sexual images and deepfake pornography have become. Some are almost impossible to tell apart from real footage. 

We hear about these incredibly realistic deepfake porn cases more often, and they’re worrying, but it’s important not to overlook the other end of the spectrum. The crude edits, screenshots, and badly made images are still out there. And people are being investigated and charged for creating sexual images of people they know for their own gratification. 

They might not be realistic, but that doesn’t make them harmless. The person in the image doesn’t see a “bad Photoshop.” They see their face attached to something they never consented to. Even when it’s clearly not real, it can still feel intrusive and cross a very personal line. 


The law and accountability 

Since January 2024, under the Online Safety Act, it has been a criminal offence to create or share a sexual image or deepfake of someone without their consent, including AI-generated pornography and digitally manipulated sexual content. 

Intent doesn’t cancel out the harm. It doesn’t matter whether it was meant as a joke, made in curiosity, or shared privately. 

Many of the people we work with didn’t set out to cause harm. They describe acting impulsively, or not really thinking about it, believing that if they were creating the images for themselves, there wasn’t a victim. 

There’s often a mindset of, “If someone doesn’t know, it can’t hurt them.” But it can and it does. 

Our work focuses on slowing that process down, exploring empathy, consent, and the real human impact behind those decisions. 


What we see at Safer Lives 

We work with people who never imagined their behaviour would be classed as online sexual offending or image-based sexual abuse. They might have downloaded an app, shared a fake image in a group chat, or viewed manipulated content without thinking about the person behind it. 

Through our programmes, we help people: 

· Understand consent and digital boundaries 

· Build empathy for those affected 

· Recognise desensitisation and minimisation in their thinking 

When people start to connect with the reality of harm - the fear, humiliation and loss of trust - accountability and genuine change can start to happen. 


A shared responsibility 

AI is moving faster than our ability to regulate or fully understand it. The conversations at StopSO made that clear. But while we focus on how advanced this technology has become, we can’t forget the everyday harm caused by the amateur versions: the quick edits, the crude collages, the “obvious fakes” made for a laugh or for arousal. 

Because whether it’s slick AI pornography or something pieced together on a phone, the harm is still human. 

This is the work we do at Safer Lives, helping people make sense of their behaviour online and understand the real-world impact.  

Find out more at www.saferlives.com. To contact Safer Lives, click here. 



 
 
 
