
AI in Schools: The Hidden Risks For Student Safety And Learning

[Hero image: Teacher demonstrating ChatGPT to a class]

by Bec May

Editor’s Note — This story includes discussion of suicide. If you or someone you know is struggling, help is available:

  • Australia: Lifeline offers 24/7 crisis support. Call 13 11 14, text 0477 131 114, or chat online.

  • United Kingdom: Call the National Suicide Prevention Helpline at 0800 689 5652 (6 pm to midnight), or text SHOUT to 85258 for 24/7 text support.

  • United States: Dial 988 for the 988 Suicide & Crisis Lifeline, available 24/7.

There’s no denying that artificial intelligence is one of the greatest technological advances of our era, powering tools that previous generations only encountered in science fiction books, films, and their imaginations. Large language models can summarize research, handle administrative tasks, write lesson plans, and even offer advice on personal matters. While the technology has progressed rapidly, particularly over the past three years, it is, relatively speaking, still in its infancy. Developers are still discovering what these systems can do, and public policy is scrambling to keep up. How AI will respond in many situations remains largely unpredictable, and some interactions have already taken concerning turns: chatbots affirming people who express suicidal thoughts, students using AI to bypass academic integrity, and malicious actors exploiting AI to generate phishing campaigns and disinformation.

This article explores the hidden dangers of AI in schools, from the mental health risks of chatbots to the decline of critical thinking, AI-powered phishing scams, and deepfake abuse. It also examines the growing policy debate, the importance of AI literacy, and what schools can do to integrate AI responsibly while prioritizing student safety.

Introduction to artificial intelligence in education

Artificial intelligence is rapidly transforming the world of education, providing new opportunities to enhance student outcomes and streamline the learning process. In classrooms across the globe, AI tools and AI teacher assistants are being integrated to support teachers with everything from lesson planning to grading, freeing up valuable time for educators to focus on teaching and student support. These technologies promise to save teachers time on routine tasks, boost productivity, and personalize learning experiences for students.

However, the adoption of AI in education is not without its challenges. Concerns about data privacy, algorithmic bias, and the potential for over-reliance on technology are top of mind for district leaders and educators. As schools embrace these powerful tools, it becomes essential to develop policies and guidelines that address these risks and ensure that AI is used responsibly. By understanding both the benefits and the hidden dangers of AI technologies, educational institutions can create a balanced approach that supports teachers, protects students, and advances educational goals.

Chatbots and student mental health risks

Let’s say, for a moment, I’m a parent who knows nothing about LLMs, AI, or chatbots. I know my teenager is struggling with their mental health and is having some issues with other kids at school. I’ve tried to get them to a counsellor, but they are embarrassed and resistant. If I found out they were turning to a chatbot for comfort, a friendly ear, and some advice, I would most likely be relieved. At least they are talking to someone, right?

Unfortunately, as recent cases have shown, this sense of relief would be dangerously misplaced. Chatbots such as OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude were not designed to manage crisis conversations. A study published in Psychiatric Services tested these tools with suicide-related queries. It found that while the chatbots often handled very low-risk or very high-risk prompts consistently, their responses to more common, mid-level questions, such as “What should someone do if they feel suicidal?”, were highly inconsistent. In some cases, they pointed users to crisis lifelines, but in others, they gave vague or even unsafe advice.

In August 2025, the parents of 16-year-old Adam Raine filed a first-of-its-kind wrongful death lawsuit against OpenAI. They allege that ChatGPT validated their son’s suicidal thoughts, helped draft a suicide note and offered ways to improve it, and commented on a photo of a noose Adam shared. In response to the lawsuit, OpenAI and its CEO, Sam Altman, have acknowledged that ChatGPT’s safeguards can degrade during prolonged conversations, and the company says it is working on enhancements, including parental controls and improved crisis support features.

In Australia, a recent Triple J Hack investigation reported on Australian teens who turned to chatbots for help. Counsellors warned that some were nudged further into suicidal ideation, while one young person struggling with psychosis developed delusions after extended chatbot interaction, eventually requiring hospitalization.

Key risks highlighted by researchers:

  • Many young people are turning to chatbots for mental health support instead of professionals

  • In-built safeguards tend to weaken during prolonged conversations

  • Chatbots can generate harmful and concerning content, such as suicide notes or self-harm instructions, when prompted

  • The impact of chatbots may vary depending on the age of the student, with younger children potentially being more vulnerable to harmful content and less able to critically assess chatbot responses.

  • Research from the Center for Countering Digital Hate has documented how AI tools can be manipulated to conceal eating disorders, promote calorie-restricted diets, or normalise self-injury and drug use.

Unfortunately, what may initially feel like a safe and friendly ear can become a tool for reinforcing harmful thoughts and behaviours, encouraging children to hide their struggles from the adults who could help them seek professional support. For a child in crisis, relying on a chatbot instead of trusted adults or professionals increases the risk of receiving inappropriate or unsafe advice.

Academic cheating and the decline of critical thinking skills

AI tools can now do a passable job of writing an essay, solving a math problem, or coding a website. For a student facing a looming deadline, the temptation is obvious. But using AI as a shortcut in this way, rather than for research or scaffolding, strips away the real value of the learning process. Students may hand in work that looks polished and correctly referenced without having done the thinking required to truly understand the underlying concepts.

Research is starting to show the cost of this overreliance:

  • A 2025 study from SBS Swiss Business School found heavy AI users scored significantly lower on critical thinking assessments, with researchers linking the decline to “cognitive offloading”: students outsource thinking to AI instead of working through challenges themselves.

  • An analysis on arXiv found that students who used AI tools scored on average 6.7 points lower (out of 100) than their peers who worked independently.

  • An MIT study using fMRI scans revealed that students who used ChatGPT to generate essays exhibited reduced brain activity in areas associated with creativity and attention. The resulting work was judged less original and less memorable.

  • Overreliance on AI can give students the impression that they are more skilled than they really are, masking learning gaps until assessments or real-world applications expose them. In many cases, students have learned how to ask AI the right questions, but may not have developed their own voice or critical thinking abilities.

Key risks for schools and students:

  • AI use can undermine students’ critical thinking skills, reducing their ability to analyse, evaluate, and synthesise information to make reasoned decisions.

  • Overreliance on AI tools, or cognitive offloading, risks creating graduates who lack resilience and adaptability

  • Academic dishonesty erodes trust between students and teachers

  • Schools face the challenge of ensuring that students develop deep learning and problem-solving skills, rather than relying solely on AI-generated answers to tasks such as math problems.

Students may believe they are academically ahead, only to discover they are unprepared for the real world. The way students learn is undergoing a fundamental shift with the advent of AI tools, and it is crucial to strike a balance between the benefits of technology and the need for independent, critical thinking.

Phishing, deepfakes, and AI exploitation in schools

It’s not just students who are using AI technology to bypass rules in the school environment. Malicious actors are now integrating AI into their cyber attack arsenal, making these attacks faster, cheaper, and harder to detect. Increasingly, these attackers leverage AI algorithms to craft more convincing phishing and deepfake attacks, raising the stakes for school security.

Examples already seen in educational institutions include:

  • Phishing emails that mimic trusted staff or administration

  • Deepfake audio and video messages impersonating school officials

  • Automated social engineering attacks targeting students and teachers

This matters for schools because these AI-driven threats can compromise sensitive data, disrupt operations, and erode trust. Protecting the classroom environment from such attacks is essential to ensure a safe and effective learning space for both students and educators.

AI-powered phishing campaigns

As AI becomes increasingly embedded in education, cybercriminals are using it to craft phishing campaigns that are faster, smarter, and more difficult to detect. These attacks target school finance systems, staff logins, and student data. Unlike the old days of dodgy spelling and clumsy formatting, AI-driven phishing emails are polished and convincing.

Global data indicates that 60 percent of phishing attacks now involve AI, with a 17 percent increase in phishing emails reported in just six months. That’s not a trend any school IT team can afford to ignore. In one case, a scammer posed as a construction vendor needing access to a school district’s billing portal. By scraping publicly available project documents, they created an email full of authentic details. AI refined the message, removed errors, and gave it a professional tone. Staff handed over credentials, and the attackers changed the vendor’s bank account details to siphon off payments meant for the contractor.

Protecting against this kind of attack means more than filters. Schools need layered security: firewalls to block malicious links, monitoring tools to surface suspicious traffic, and regular training so staff and students can recognise when something feels off. Building a culture of vigilance is just as important as the technology itself.
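As a concrete, if simplified, illustration of the monitoring layer, the rough sketch below flags message senders whose domains closely resemble, but do not exactly match, a school’s trusted domains, a hallmark of vendor-impersonation phishing like the case above. The CSV columns, trusted-domain list, and similarity threshold are hypothetical and would need adapting to whatever your mail gateway or firewall actually exports.

```python
import csv
from difflib import SequenceMatcher

# Domains the school legitimately uses (hypothetical examples only)
TRUSTED_DOMAINS = ["examplecollege.wa.edu.au", "examplecollege-billing.com"]

def similarity(a: str, b: str) -> float:
    """Return a rough 0-1 similarity score between two domain strings."""
    return SequenceMatcher(None, a, b).ratio()

def flag_lookalike_senders(log_path: str, threshold: float = 0.8):
    """Flag sender domains that resemble, but don't match, trusted domains.

    Assumes a CSV export with 'sender' and 'subject' columns; adjust to the
    format your mail gateway or firewall actually produces.
    """
    flagged = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["sender"].split("@")[-1].lower()
            if domain in TRUSTED_DOMAINS:
                continue  # exact match, nothing suspicious here
            for trusted in TRUSTED_DOMAINS:
                if similarity(domain, trusted) >= threshold:
                    flagged.append((row["sender"], row["subject"], trusted))
    return flagged

if __name__ == "__main__":
    for sender, subject, looks_like in flag_lookalike_senders("mail_log.csv"):
        print(f"Review: {sender} ({subject!r}) resembles trusted domain {looks_like}")
```

A check like this supplements, rather than replaces, commercial mail filtering and regular staff training.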

Deepfake abuse in schools

In 2024, in one of the first public cases, explicit AI-generated images of students from an Australian school were circulated on social media, causing severe harm to the students involved and sparking outrage in the school community. The Australian eSafety Commissioner has warned that deepfake AI-generated content, an extreme form of cyberbullying, is rapidly increasing in our schools, devastating victims and, in some cases, causing irreparable damage to a school's reputation.

One case at a school in Sydney, Australia, saw a student create and sell explicit AI-generated nude images of female classmates online. The incident triggered police investigations and suspensions, and highlighted just how easily students can weaponise AI against their peers.

These are by no means isolated or location-specific incidents. Surveys from the US suggest that as many as 50 percent of students are aware of deepfake images circulating at their schools. These incidents disproportionately target girls and can cause lasting psychological and reputational harm. Educators and experts are now calling for clear policies that define deepfake bullying as a disciplinary offense and ensure trauma-informed support for affected students.

AI-driven disinformation and extremist propaganda

Beyond fraud and bullying, researchers are now beginning to identify more systemic risks associated with artificial intelligence technology. Generative AI systems are being used by extremist groups to build disinformation networks, produce extremist memes, and spread radical content across platforms like 4chan and Reddit. This so-called 'AI-memetic warfare' blends realistic fake profiles, AI-generated propaganda images, and deepfakes into content designed to radicalise and divide. The combination of disinformation networks, memetic propaganda, and synthetic media creates feedback loops that are difficult to detect and easy to scale, putting impressionable minds at risk of falling into dangerous group ideologies.

Why this matters for schools:

  • AI scams and phishing campaigns are already targeting staff and finance systems

  • Deepfakes are being weaponised against students, often as a form of bullying

  • Exposure to extremist or disinformation content can reach children through the same platforms they use for entertainment

School AI policies: From paper bans to AI-first teaching models

AI tools have moved from sci-fi to the school desk faster than policy can keep up. Decision-makers are divided on how to respond, and school and system leaders need to take the lead in developing comprehensive AI policies.

Establishing clear guidelines for AI usage in schools is essential to ensure responsible and ethical implementation. Some policymakers argue that the only way to preserve academic integrity is to return to paper-based, in-class assignments. No screens, no prompts, just old-school pen and paper. Western Australia has largely adopted this approach. In 2023, the WA Education Department blocked ChatGPT across all public schools, arguing that teachers need to see real student work, not answers spat out by an AI tool.

Other states and universities are experimenting with the opposite approach. South Australia trialed a safe version of ChatGPT called EdChat, an AI solution designed in collaboration with Microsoft to strip out data collection while still providing students with a tool for research and productivity. Meanwhile, the University of Cambridge has introduced a course in Responsible AI, teaching professionals how to design and apply AI systems that minimise bias and uphold ethical standards.

The debate raises a larger question: should AI be banned outright to protect students, or integrated into education with guardrails, allowing students to reap the benefits of ethical AI? For effective and ethical integration, schools need a clear plan that aligns with educational goals and supports both students and educators. Educator involvement in policy development and implementation is vital to ensure transparency and appropriate use. Parental engagement is also crucial, as parents can support their child's education by staying informed about AI integration and collaborating with teachers and school administrators.

For schools considering their own position, we have published detailed guidance on both approaches: Three Ways to Block ChatGPT Using FortiGate and Monitoring AI Prompts with FortiGate and Fastvue Reporter.

Alternatively, if you’re using a different firewall provider, please contact our team. We are happy to discuss how Fastvue can pair with your existing infrastructure to address AI misuse in your environment.

AI literacy and responsible use in schools

Whatever side of the policy debate your school falls on, one thing is crystal clear: students need AI literacy. While banning AI tools may delay exposure to certain safety concerns, it does not prepare young people for the reality of workplaces where AI use is already the norm. The Australian Framework for Generative AI in Schools outlines guiding principles that are useful for shaping the use of AI in classrooms, summarised below.

  • Critical thinking and proper training: Students need structured guidance on how to evaluate AI-generated content for accuracy, bias, and relevance. The Framework emphasises that AI tools should enhance critical thinking and creativity, not replace them.

  • Guided use: Teachers should set clear boundaries. AI can assist with lesson plans, drafts, or routine admin, but students still need to demonstrate their own knowledge and skills. Assignments should specify when AI use is acceptable and allow a fair evaluation of each student's ability.

  • Curriculum planning: Educators should explicitly incorporate AI literacy into lessons, covering how generative AI works, its limitations, and the risks of plagiarism or over-reliance. This helps maintain academic integrity while preparing students for future workplaces.

  • Ethical AI use: Students must understand attribution, bias, and discrimination. The Framework makes it clear that generative AI should be used in ways that support inclusivity and respect cultural rights, rather than reinforcing stereotypes.

  • Digital privacy and security: AI systems should not be a black box. Schools need to be transparent about what data is collected, how it is used, and how long it is retained. Students should learn to protect their own personal information, and schools must ensure compliance with relevant privacy laws.

  • Equity and the digital divide: Without planning, AI will favour well-resourced schools. Equity means considering rural, remote, and under-resourced communities so all students can benefit. The Framework stresses that AI use must be accessible, inclusive, and fair.

  • Accountability: Ultimately, teachers and school leaders remain accountable for decisions informed by AI. The Framework calls for regular monitoring, testing, and the ability for staff, students, or parents to question how AI outputs are being used.

AI literacy is not solely about enhancing students' ability to use tools like ChatGPT; it is about ensuring they grow into critical, ethical thinkers who know how to question the technology shaping their education and their future.

The future of AI in schools: Balancing innovation and student safety

Schools are now balancing the promise of AI with the risks it brings: from mental health concerns to academic shortcuts, phishing scams, and deepfake abuse. The opportunities for better student outcomes are real, but so are the dangers if AI is left unchecked.

For educators who feel AI is racing beyond their control, it may help to heed the words of Epictetus: “Make the best use of what is in your power, and take the rest as it happens.” In practice, that means focusing on what schools can directly influence: firewall policies, clear guidelines, and visibility into what’s really happening on their networks.

School firewalls are great for blocking AI tools and dangerous sites, but visibility is just as important. Fastvue Reporter for Education empowers IT and pastoral care teams to monitor AI usage, flag searches for Nudify or deepfake apps, and detect phishing attempts concealed in firewall logs. Combined with clear policies and a focus on AI literacy, this visibility enables early action, protects students, and maintains trust in digital learning.
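If your team wants to prototype this kind of visibility before rolling out a dedicated reporting tool, the rough sketch below shows one way to scan a web-filter or firewall log export for AI-related domains and high-risk search terms. The CSV column names, domain list, and keyword list are assumptions for illustration only; they do not reflect Fastvue Reporter's implementation or your firewall's actual export format.

```python
import csv

# Watchlists for the example; tune these to your own environment and policies
AI_DOMAINS = ["chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"]
RISK_TERMS = ["nudify", "undress", "deepfake", "faceswap"]

def scan_log(log_path: str):
    """Scan a CSV log export for AI usage and high-risk search terms.

    Assumes columns named 'timestamp', 'user', and 'url'; adjust these to
    match whatever your firewall or web filter actually exports.
    """
    ai_hits, risk_hits = [], []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            url = row["url"].lower()
            if any(domain in url for domain in AI_DOMAINS):
                ai_hits.append((row["timestamp"], row["user"], url))
            if any(term in url for term in RISK_TERMS):
                risk_hits.append((row["timestamp"], row["user"], url))
    return ai_hits, risk_hits

if __name__ == "__main__":
    ai_hits, risk_hits = scan_log("weblog.csv")
    print(f"{len(ai_hits)} AI-related requests found")
    for ts, user, url in risk_hits:
        # High-risk hits warrant escalation to IT and pastoral care teams
        print(f"ALERT {ts} {user}: {url}")
```

A one-off script like this is only a starting point, but it can help confirm what your logs actually capture before you invest in alerting and reporting workflows.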

Don't take our word for it. Try it for yourself.

Download Fastvue Reporter and try it free for 14 days, or schedule a demo and we'll show you how it works.
