The AI Risks Students Aren’t Being Told About: Chatbots, Deepfakes, and Real Consequences
Most school conversations about AI focus on cheating. That isn’t the real safeguarding risk. I’ve just delivered an assembly to every student in my school because there are two urgent dangers young people are walking into without understanding what’s at stake.
1. Chatbots feel safe, but they cannot protect you
Teenagers often talk to AI when they’re stressed because it feels private and supportive. And yes, the advice can sound good. But a chatbot cannot notice patterns, cannot tell an adult, cannot report a concern, and cannot follow up the next day. It cannot keep you safe.
Teachers can. Teachers don't give up on you. That is the difference between comfort and safeguarding.
2. Deepfakes are now a serious legal risk for students
Deepfakes don't just harm the person targeted. They harm the person who created or shared them. In Spain, teenagers have already been found guilty of creating AI-generated nude images of classmates. Even forwarding a fake image can put you at risk.
And this danger has just increased. Sora 2 now allows cameos, meaning anyone can instantly place a real person’s face into an AI-generated video that looks completely convincing. No skill. No editing. A five-second decision can create emotional harm for the victim and potentially criminal consequences for the creator.
Most students simply do not know this.
This is why I made the video
Young people need clear facts before they make mistakes that can’t be undone. If you work in a school, please share this with your students. It can be used in tutor time, PSHE, ICT or safeguarding sessions.
