The silent sexploitation crisis – AI’s overlooked double-edged threat to FE   

Yasmin London, Global Online Safety Expert at Qoria

Students are increasingly falling victim to AI-driven exploitation, whilst others are unknowingly committing serious offences by creating and sharing explicit content. How can we combat the problem?

AI is creating a double-edged crisis for FE providers across the UK.

There are concerns about students’ vulnerability to AI-generated content and ‘sexploitation’ – the coercion of individuals into sharing explicit content under the threat of exposure or harm. And there are the often overlooked legal and ethical risks faced by those engaging in this behaviour.

The issue is complex and affects victims and perpetrators alike. The increasing number of students who are engaging in the creation of this content is causing growing concern amongst colleges and FE providers, with staff wellbeing also being affected by the scale of the problem. Recent research from global safeguarding technology company Qoria shows the extent of the issue:

  • More than a quarter of FE providers have experienced incidents where students have either possessed, shared or requested nude content
  • 26.6% of educators have reported students aged 16-19 using AI apps or tools to create child sexual abuse material (CSAM) or nude content

A 2023 Women and Equalities Committee report further found that almost a third (29%) of 16-18 year old girls say they have experienced unwanted sexual touching, and that 59% of girls and young women aged 13-21 said that they had faced some form of sexual harassment at school or college in the past year.

Educators are aware of the problem and what it represents for FE organisations, but feel overwhelmed and underprepared to deal with it. As AI threats accelerate, safeguarding infrastructure falters – largely because the solutions to address them have not yet been developed or effectively implemented.

Feelings of shame

Qoria’s research further highlighted a rising concern that older male students are left vulnerable to sexploitation, with one sixth form college commenting that “students are experiencing feelings of shame” linked to becoming a victim of sextortion. As such, the challenge for educators is further complicated, as a lack of preparation meets a level of secrecy that makes the risks even more difficult to identify.

Educators’ lack of preparedness may itself drive students towards secrecy. If the adults around them feel unequipped to tackle the issue effectively, students may lose confidence in how sexploitation situations will be dealt with.

Students need confidence that, if something does happen, there are adults they can talk to who are not only informed about the risks – and the legal and ethical implications of any involvement in the creation of harmful AI content – but who also know how to act and have appropriate response strategies in place.

Where do we go from here?  

The online world is constantly adapting. As a result, sixth forms, colleges and FE providers must implement student safeguarding measures that are both proactive and able to evolve with the changing landscape – without letting vulnerable students slip through the cracks.

The reality is that sexploitation is not just happening in dark corners of the internet; it is taking place in person, in student spaces. 

Online safety experts are calling for a multi-faceted, integrated approach that includes:

  • Creating dedicated AI working parties within FE organisations to coordinate policies, training, and response strategies, while supporting staff affected by the issues sexploitation raises
  • Implementing comprehensive digital monitoring and advanced filtering systems that can detect AI-generated risks, coded language, and explicit content in real-time
  • Providing regular professional development for staff on current digital threats, intervention strategies, and use of monitoring tools, moving beyond “eyes and ears” observation alone
  • Establishing centralised online safety hubs and regular workshops to educate parents about AI risks beyond just screen time concerns, providing another support avenue that may be more comfortable for students – particularly if “shame” is a factor they are dealing with 

As AI technology evolves, so must our approach to online safeguarding. FE organisations, policymakers, and safeguarding professionals must act now and work together – because when it comes to protecting young people, we cannot afford to be left behind.