By Simon May
How schools can understand AI usage, manage emerging risks and protect students while adopting new technology
Artificial intelligence is rapidly becoming part of everyday life in UK schools. Teachers are exploring generative AI tools for lesson planning, research, and administrative tasks. At the same time, students increasingly experiment with AI chatbots such as Google Gemini and Microsoft Copilot to support learning and problem-solving.
These developments bring exciting opportunities for educational settings, but they also introduce new safeguarding concerns, data protection responsibilities and online safety risks.
We explore these issues further in our article "AI in Schools: The Hidden Risks for Student Safety and Learning", which examines how generative AI tools can affect student wellbeing, learning and online safety.
Many artificial intelligence systems rely on large language models that generate human-like content from user-provided input. When students or staff interact with generative AI models, they may unintentionally enter personal data, sensitive data or school information into external platforms.
As a result, school leaders, safeguarding teams and data protection officers are increasingly asking how AI monitoring in schools should work and how AI usage can be understood without introducing unnecessary complexity.
Fastvue helps schools gain visibility into how artificial intelligence and generative AI tools are accessed across their networks, supporting safeguarding, IT, and leadership teams with clear oversight of AI activity.
Artificial intelligence is now appearing across educational settings in many forms. Schools are encountering generative AI tools, AI chatbots, automated decision-making systems and AI platforms embedded in digital learning environments.
The UK government and the Department for Education recognise that schools must understand both the opportunities and the emerging risks created by these technologies.
Guidance on the use of generative AI from the Department for Education emphasises the need for appropriate safeguards when using AI tools in education. At the same time, Keeping Children Safe in Education (KCSIE) makes clear that schools must maintain effective filtering and monitoring systems that support online safety.
These expectations mean schools should understand:
• How AI tools are accessed across the school network
• When AI usage occurs during the school day
• Whether AI systems may introduce safeguarding risks
• How monitoring systems help identify potential concerns
Fastvue supports schools by helping them understand how AI tools are actually appearing on their network, providing:
• Visibility into access to AI platforms
• Understanding of when generative AI tools are used
• Identification of changes in AI usage patterns
• Evidence that AI activity is being actively monitored
This helps school leaders demonstrate that AI activity is being understood and managed rather than simply assumed to be safe.
Generative artificial intelligence offers powerful capabilities for learning and creativity. However, without appropriate safeguards, AI tools can also introduce safeguarding risks.
Examples of emerging risks associated with generative AI include:
• Exposure to harmful content generated by AI systems
• Misuse of AI-generated content for cyberbullying or harassment
• Attempts to bypass filtering systems using AI prompts
• Sharing of personal data or sensitive data with external AI platforms
• Over-reliance on AI tools for academic work
Young people may experiment with generative AI tools in ways that create safeguarding concerns. Students may test AI chatbots with inappropriate prompts, attempt to generate harmful content, or explore tools designed to bypass safeguards.
Monitoring AI usage helps schools recognise patterns that may indicate potential safeguarding concerns, particularly when behaviour escalates or deviates from a pupil’s usual activity.
The use of artificial intelligence in schools must also comply with UK data protection law and the expectations of the Information Commissioner's Office (ICO).
Schools must ensure that AI systems are used in ways that protect personal data and align with school data protection policies.
Important considerations include:
• Understanding how AI models process data
• Protecting special category data relating to students
• Conducting risk assessments before adopting AI tools
• Ensuring appropriate safeguards exist for generative AI systems
Data protection officers and school leaders increasingly need visibility into how AI tools are accessed within their networks.
Without monitoring systems, schools may not know which generative AI tools are being used or whether personal data has been entered into AI platforms.
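For IT teams who want a concrete sense of what this visibility involves, the short Python sketch below illustrates the general idea: scanning a firewall or web-proxy log export for requests to well-known generative AI services and summarising which tools were accessed and by whom. This is an illustration only, not Fastvue's implementation; the CSV column names and the domain list are assumptions that would need adjusting to match your own firewall or proxy log format.

```python
import csv
from collections import Counter

# Hypothetical list of generative AI domains to match against; a real
# deployment would maintain and review this list regularly.
AI_DOMAINS = {
    "gemini.google.com": "Google Gemini",
    "copilot.microsoft.com": "Microsoft Copilot",
    "chatgpt.com": "ChatGPT",
    "claude.ai": "Claude",
}

def summarise_ai_access(log_path: str) -> Counter:
    """Count requests to known AI platforms per (user, platform) pair.

    Assumes a CSV export with 'user' and 'host' columns; adjust the
    column names to whatever your firewall or proxy actually logs.
    """
    counts: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            platform = AI_DOMAINS.get((row.get("host") or "").lower())
            if platform:
                counts[(row.get("user") or "unknown", platform)] += 1
    return counts

if __name__ == "__main__":
    for (user, platform), hits in summarise_ai_access("proxy_log.csv").most_common():
        print(f"{user}: {platform} ({hits} requests)")
```

Even a simple summary like this can show which generative AI tools are actually in use, which is the starting point for any conversation about whether personal data may be entering those platforms.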
Artificial intelligence technologies are evolving quickly, and schools need practical ways to understand how these systems are used in real environments.
Fastvue supports school leaders and safeguarding teams by providing:
• Visibility into AI usage across users and devices
• Identification of unusual behaviour or search anomalies
• Monitoring of access to AI chatbots and generative AI platforms
• Reports that support safeguarding policy reviews
• Evidence for discussions with governors and leadership teams
This allows schools to move from assumptions to evidence when discussing the use of artificial intelligence.
Instead of relying on anecdotal understanding of how students may be using AI tools, schools gain insight into real activity occurring across their networks.
Many schools are now asking about visibility into AI prompts or conversations with AI chatbots.
Fastvue’s ability to log AI prompts depends on what the school’s firewall records.
When firewall vendors capture prompt or request data in their logs, Fastvue can present this information in a readable and auditable format to support safeguarding oversight.
Where prompt-level logging is not yet available, schools can still rely on visibility into:
• AI platform access
• Behavioural patterns across users
• Changes in usage frequency
• Attempts to bypass filtering or monitoring systems
As firewall vendors expand their monitoring capabilities for AI tools, Fastvue is designed to incorporate these developments, allowing schools to strengthen oversight without deploying additional monitoring systems.
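To make the idea of prompt-level visibility concrete, the sketch below shows one way prompt detail could be surfaced from firewall logs when a vendor records it. It is an illustration only, not Fastvue's implementation; the JSON-lines log format and the field names are assumptions, and many firewalls will not capture prompt text at all.

```python
import json

# Hypothetical field names; whether prompt text appears in logs at all
# depends entirely on the firewall vendor and its logging configuration.
PROMPT_FIELDS = ("prompt", "search_term", "query")

def readable_prompt_events(log_path: str):
    """Yield (user, host, prompt) tuples for log records that include
    prompt-level detail, assuming one JSON object per line."""
    with open(log_path) as f:
        for line in f:
            if not line.strip():
                continue
            record = json.loads(line)
            for field in PROMPT_FIELDS:
                if record.get(field):
                    yield (record.get("user", "unknown"),
                           record.get("host", ""),
                           record[field])
                    break

if __name__ == "__main__":
    for user, host, prompt in readable_prompt_events("firewall_log.jsonl"):
        print(f"{user} -> {host}: {prompt}")
```

The point of the sketch is the principle rather than the code: where the firewall exposes prompt or request data, it can be presented in a readable, auditable form; where it does not, oversight still rests on the access and behaviour patterns listed above.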
As artificial intelligence becomes more common in education, schools are beginning to update their safeguarding, acceptable use, and data protection policies to reflect how AI tools should be used.
The Department for Education encourages schools to establish clear expectations around generative AI tools, particularly when they are used for lesson planning, administrative tasks or student learning activities.
School policies increasingly address questions such as:
• How students can safely access AI chatbots
• When AI-generated content may be appropriate for learning tasks
• How teachers should support responsible AI use
• How monitoring systems help identify safeguarding concerns
These policies help schools balance the opportunities created by artificial intelligence with the responsibility to protect students and maintain strong online safety practices.
Schools across the UK are still developing their approach to artificial intelligence governance, safeguarding oversight and risk assessments.
To support these conversations, we have created a short guide explaining how schools can understand AI activity across their networks.
The guide explains:
• Meeting KCSIE 2025 expectations around online monitoring and AI
• Protecting school networks from AI-driven cyber risks
• Governance, transparency and responsible AI use
• AI prompt visibility and the role of the firewall
Artificial intelligence is now part of everyday life in schools. Students and teachers are experimenting with generative AI tools, AI chatbots and AI systems that promise new ways of learning and working.
At the same time, schools must manage emerging risks related to safeguarding, data protection, and the misuse of AI-generated content.
Fastvue helps UK schools support this responsibility by providing:
• Practical visibility into AI use on the school network
• Support for proactive safeguarding and early intervention
• Clear evidence for leadership, governors and inspectors
• Confidence that AI monitoring can evolve as technology develops
The focus is on understanding rather than surveillance, on evidence rather than assumption, and on supporting schools in protecting students while responsibly adopting new technology.
Download Fastvue Reporter and try it free for 14 days, or schedule a demo and we'll show you how it works.

