AI in Education · April 2026 · 9 min read

AI Is Not Just a Classroom Tool. For Many Students, It Has Become the Place They Go When They Have Nowhere Else to Turn.

While schools focus on academic integrity, students are turning to AI for emotional support, relationship advice, and mental health conversations. The research is clear. Most schools have no idea what to do about it.


Earlier this year I had the privilege of interviewing some of the leading educators, researchers and builders working at the intersection of AI and education. These blogs are my attempt to do justice to those conversations, to pull out the ideas that matter most and make them useful for everyone working in schools right now.

• • •

The conversation about AI in schools has, understandably, been dominated by questions about academic integrity. Are students using it to write their essays? Are we detecting it? What does our AI policy say?

These are legitimate questions. But while we have been focused on what students are doing with AI in the classroom, something else has been happening quietly, outside school hours, outside any policy framework, and largely outside adult awareness.

Students are turning to AI for emotional support. For relationship advice. For conversations about mental health. For someone to talk to when they do not feel they can talk to anyone else.

And most schools have no idea what to do about it.

• • •

What the numbers tell us

The scale of this is not a rumour or an edge case. The research is consistent and significant.

A Brown University study published in late 2025 surveyed over a thousand young people between the ages of twelve and twenty-one and found that one in eight were using AI chatbots for mental health advice. Among those who did, two thirds engaged at least monthly, and more than 93% said the advice felt helpful. A separate survey by Common Sense Media found that 72% of American teenagers had used AI chatbots as companions, with nearly one in eight seeking emotional or mental health support specifically.

A Pew Research Center survey of US teens conducted in autumn 2025 found that 12% had used AI chatbots for emotional support or advice, and 16% had used them for casual conversation. These may sound like modest percentages, but scaled to the adolescent population, they represent millions of young people having regular, personal, unmonitored conversations with AI tools that were not designed for this purpose and are not regulated for it.

A 2025 survey by the Center for Democracy & Technology (CDT) found something that should give every school leader pause. Nearly a third of students said they had back-and-forth conversations with AI for personal reasons on a device, tool, or software provided by their school. Yet only one in ten teachers reported receiving any training on how to respond if they suspected a student's AI use was detrimental to their wellbeing.

Read that again. School-provided technology. Personal conversations. One in ten teachers trained to respond.

• • •

What Matthew Wemyss saw in his assemblies

I spoke with Matthew Wemyss, whose work on student AI agency I have covered previously, at AIDUCATION26, where he described something he had observed directly. He had been running assemblies with students from Year 7 through to Year 13, asking them about their AI use. When he asked who was using AI for schoolwork, approximately 90% of hands went up. When he then asked who was using it for lifestyle advice, relationship advice, or mental health support, around 50 to 60% of hands stayed up.

He described it as a sleeping thing in the background. A pattern of use that is widespread, largely invisible to school staff, and growing. His concern was not that students were using AI in these ways. His concern was that schools were not having the conversation about it.

He referenced the updated Keeping Children Safe in Education guidance, which since September 2025 has explicitly linked AI tools to online safety obligations for the first time, and the DfE's generative AI standards, which include specific guidance on anthropomorphisation: the tendency of AI tools to present themselves as emotionally responsive companions. The regulatory direction of travel is clear. Schools have safeguarding responsibilities that now extend into how students use AI outside the classroom.

The proposed KCSIE 2026 changes, currently out for consultation, go further still, with two new paragraphs specifically focused on AI-related harms. The message is unambiguous: this is a safeguarding issue, not just an EdTech one.

• • •

Why students are turning to AI this way

Before this becomes a conversation purely about risk, it is worth understanding why students are doing this in the first place.

The Pew Research data offers some clues. Among teenagers who use AI companions, 17% said they valued AI because it is always available to listen, 14% said they rely on it because it does not judge them, and 12% said they feel comfortable telling AI things they would not say to a friend or family member.

These are not descriptions of students looking to cheat or cut corners. These are descriptions of young people with unmet emotional needs finding a tool that feels, at least in the moment, like it is meeting them.

That matters. Because if we respond to this purely with restriction and prohibition, we are not addressing the underlying need. We are just removing the outlet without replacing it with anything. It connects to the broader question of what students actually think about how AI is used around them: they want honesty, inclusion, and real human connection.

At the same time, the risks of leaving this unaddressed are real. A Stanford University and Common Sense Media report published in November 2025 tested popular AI chatbots extensively and concluded that the technology does not reliably respond to teenagers' mental health questions safely or appropriately. In some cases, chatbots validated harmful beliefs rather than challenging them. General purpose AI tools like ChatGPT, Claude and Gemini are not designed as mental health resources, and the evidence suggests they should not be treated as such by the young people who are increasingly using them that way.

A 2024 case in Florida, where a 14-year-old died by suicide following prolonged conversations with a Character.AI chatbot that failed to intervene appropriately, was a stark illustration of what is at stake when these interactions go wrong.

• • •

The gap between what schools know and what is happening

One of the most striking things Matthew Wemyss said was that the conversation about AI in schools has been almost entirely focused on what happens inside the building, during the school day, in the context of lessons and assessments. Almost nothing has been focused on what students are doing at home, at night, on their own devices, in conversations that no filtering system monitors and no policy reaches.

This is not a criticism of schools. It reflects the genuine difficulty of the situation. Schools cannot monitor every private conversation a student has with an AI tool on a personal device. What they can do is build the literacy, the trust, and the structures that mean students understand what these tools are, what their limitations are, and who they can turn to when they need real human support.

Al Kingsley, speaking at the same conference, made a point that connects directly to this. The most important thing AI should be doing in schools, he argued, is freeing up time for more human interaction, not replacing it. If students are turning to AI at 11pm for emotional support because they do not feel they have a human being to turn to, that is not primarily an AI problem. It is a human connection problem that AI has made visible.

• • •

What schools can actually do

None of this requires schools to become technology police. It requires schools to have conversations they have not had yet.

The first is with students. Matthew Wemyss's assembly approach is a useful model. Simply asking students directly how they are using AI, including for personal and emotional purposes, is more likely to surface honest answers than any survey or monitoring tool. Students who feel their school is genuinely curious rather than looking to catch them out are more likely to engage honestly. That conversation also creates an opportunity to discuss, without judgement, what AI is and is not equipped to do.

The second is with staff. The CDT finding that only one in ten teachers has been trained to respond when a student's AI use appears to be harming their wellbeing is not acceptable given what we now know about the scale of emotional AI use among young people. Teacher AI competency must include safeguarding awareness, not just classroom tool proficiency. Designated safeguarding leads need guidance on this. Tutors and form teachers need to know what to look for. The signs are not always obvious, but they are there: a student who talks about an AI as though it is a friend, who becomes anxious when they cannot access it, or who is withdrawing from real-world relationships in favour of digital ones.

The third is with parents. Only 18% of parents said they would be comfortable with their teenager getting emotional support or advice from a chatbot, making it the only use of AI that a majority of parents actively oppose. Yet two thirds of parents in the same Pew survey had not had any conversation with their teenager about chatbots at all. Schools are well placed to open that conversation, providing parents with the information they need to have it at home.

• • •

The question worth asking

Matthew Wemyss framed this as the awkward question that senior leadership teams are not asking. Not because they do not care, but because it sits outside the familiar territory of academic integrity and curriculum delivery, and because the answer is uncomfortable.

The question is this: if your students are turning to AI for emotional support, what does that tell you about whether they feel they have somewhere else to go?

That is not a question with a simple answer. But it is one worth sitting with, because the alternative, continuing to focus the entire AI conversation on classroom use while this other conversation happens in the dark, is no longer a viable position.

The regulatory framework is catching up. The research is clear. And the students, if you ask them the way Matthew Wemyss did in those assemblies, will tell you themselves. If you want to understand where your school stands more broadly, asking them is a good place to start.

• • •

This blog is part of a series drawing on conversations from AIDUCATION26, a conference dedicated to AI in education held in Bucharest. If you want to understand where your school stands on AI readiness, the DEEP Education Network AI Literacy Audit is a good place to start: audit.deepeducationnetwork.com


Alex Gray

Head of Sixth Form & BSME Network Lead for AI in Education. Alex explores how artificial intelligence is reshaping teaching, learning, and the future of work — with honesty, clarity, and a focus on what matters most for educators and students.



