by
Bec May
It's one thing for a school to overlook the fact that young people are spending their days in the classroom trying to sneak onto Roblox or YouTube. Annoying? Yes. Disruptive? Sure. But it's far from life-threatening.
It's another thing entirely to miss that a student is repeatedly searching for how to end their life.
That's precisely what happened in the tragic case of Frankie Thomas in 2018. Over a period of months, Frankie, a 15-year-old student at a special educational needs school in the UK, accessed content about violent rape, self-harm and suicide on a school-issued iPad and laptop while on the school network. Although Frankie's mother had asked the school to monitor her teen's internet usage, the filtering and monitoring solution provided by an external company was not working, and the school had no backup monitoring systems in place.
Over those months, Frankie read numerous stories about suicide on Wattpad, a social storytelling platform where users write and share short stories. On the day of her death, Frankie read a story describing a suicide in detail. She went home that afternoon and tragically took her own life in the manner the story described.
Sadly, this isn't a rare case, and nearly seven years on, schools are still struggling with system failures, unclear responsibilities, and a lack of confidence in their monitoring tools. This is not a simple conversation about screen time or gaming policies. It's about safeguarding students in real and immediate ways — not just blocking inappropriate content, but intervening when warning signs emerge that something much deeper is happening.
Web filtering systems, including firewalls, operate on predefined categories such as 'adult content', 'gambling', or 'violence', and can only block access to URLs and domains already classified under those categories. This means they sometimes block legitimate content (false positives) or miss harmful sites that haven't yet been categorised (false negatives). They also can't detect or interpret the specific search terms a student types into Google, the videos they are watching on YouTube, or the questions they ask AI tools like ChatGPT. Filters have their role, but they only see what's happening on the surface. Monitoring tools such as Fastvue Reporter, on the other hand, go deeper, analysing actual user behaviour and extracting content from the traffic that the firewall alone can't interpret. It's like lifting the hood on encrypted activity and helping schools detect early signs of risk before it escalates.
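To make that distinction concrete, here is a minimal, illustrative Python sketch contrasting category-based filtering with monitoring that reads the actual search term out of the traffic. It is not how any particular firewall or Fastvue Reporter is implemented; the category list, risk keywords, and example URL are assumptions for the purpose of the example.

```python
from urllib.parse import urlparse, parse_qs

# Assumption: a tiny category database keyed by domain, as a filter might hold.
BLOCKED_CATEGORIES = {"bets-example.com": "gambling", "adult-example.com": "adult content"}

# Assumption: keywords a safeguarding team might configure for monitoring alerts.
RISK_KEYWORDS = {"suicide", "self harm", "end my life"}

def filter_decision(url: str) -> str:
    """Category-based filtering: block only if the domain is already classified."""
    domain = urlparse(url).netloc
    category = BLOCKED_CATEGORIES.get(domain)
    return f"BLOCK ({category})" if category else "ALLOW"  # unclassified sites slip through

def monitor_search(url: str) -> str | None:
    """Monitoring: pull the search query out of the URL and check it against risk keywords."""
    query = parse_qs(urlparse(url).query).get("q", [""])[0].lower()
    if any(keyword in query for keyword in RISK_KEYWORDS):
        return f"ALERT: risky search detected: '{query}'"
    return None

# A search engine is usually allowed by category; the query itself is the warning sign.
url = "https://www.google.com/search?q=how+to+end+my+life"
print(filter_decision(url))   # ALLOW - google.com is not in a blocked category
print(monitor_search(url))    # ALERT: risky search detected: 'how to end my life'
```

The point of the sketch is simply that a category lookup and a search-term check answer different questions, which is why the article argues the two need to work together.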
By offering a window into students' online behaviour, monitoring provides valuable safeguarding data, surfacing patterns and anomalies that can indicate issues like:
Bullying
Eating disorders
Online harassment
Online predators and grooming
Radicalisation
Self-harm
Other forms of harmful or distressing content
Frankie's story should be a wake-up call. Her school believed its filtering and monitoring provision covered its internet-connected devices, but that belief rested mainly on the fact that no alerts were being received. The truth was that while these tools had been purchased, they were never set up correctly, and no one was taking ownership of the school's online safety.
There's no question that filtering is an essential part of student safeguarding, blocking web access to harmful content. However, it's network monitoring that tells you what's being searched for, repeatedly accessed, or deliberately bypassed. The two must work together, and Frankie's case shows what happens when they don't. A functioning monitoring system could have:
Detected her searches
Flagged her distress
Triggered an appropriate response from the school leadership
Silence in the system isn’t always a sign of safety. Sometimes, it’s a sign that no one’s listening.
Under the Keeping Children Safe in Education (KCSIE) statutory guidance, schools must implement 'appropriate filtering and monitoring'. But it's more than just a mandate; it's a shared responsibility to keep our children safe online. Your designated safeguarding lead (DSL), senior leadership team, governors, and IT staff should all understand your school's monitoring provision. From how it's configured to how alerts are handled, safeguarding is most effective when it's baked into your school environment, not bolted on as an afterthought.
Online safety policies and educational resources are vital in creating a safe environment in the digital age, serving as the foundation on which all student safeguarding protocols are built.
Your online safety policies should explain:
What's monitored and why (e.g. search terms, video titles, attempted access to harmful categories)
Who reviews alerts (such as the DSL, deputy DSL, or pastoral team)
How students are supported when concerns are flagged
How often systems are reviewed and refined (e.g. keyword updates, trend reviews, alert audit logs)
Ideally, the platform dashboard will be easy to read, prioritise high-risk activities, offer timeline context, and allow you to home in on cohorts or individual students to support early intervention without overwhelming staff.
When Frankie Thomas's case went to court, it sparked national media coverage, with the inquest highlighting the systemic failures in her school's approach to digital safeguarding. As a direct consequence, the Department for Education initiated a review of the KCSIE statutory guidance, which included a consultation with schools about their confidence in selecting appropriate filtering and monitoring systems. One key question asked was:
“Do you feel able to make informed decisions on which filtering and monitoring systems your school or college should use?”
Fewer than half of respondents—just 47 per cent—said they felt confident in making that decision, while 11 per cent said outright that they did not, revealing a clear gap in support and guidance for schools navigating this critical area of online safety.
Like all technologies, filtering and monitoring systems are not infallible. Even when deployed correctly, they can fail to block concerning content, unnecessarily block access to valuable and legitimate resources, or simply go unmonitored due to misconfiguration or inadequate training.
Key issues that schools continue to encounter include:
Failure to block new or harmful content: Filters generally rely on predefined categories and may not catch emerging online threats or nuanced harmful content.
Over-blocking legitimate resources: Overzealous filtering has been shown to restrict access to valuable educational and mental health support materials.
Excessive alerts: High volumes of low-priority alerts can lead to alert fatigue among staff, causing critical issues to be overlooked.
A lack of contextual data: Alerts based solely on keywords may not provide sufficient context, making it challenging to assess the severity of a situation.
When selecting a digital safeguarding solution for your school or college, monitoring capabilities should be at the top of your checklist. Here’s what matters:
You need to know exactly who is doing what, when, and for how long. User activity should be traceable to individual students across all devices connected to the school or college network. This level of visibility will enable your safeguarding team to respond with clarity and precision.
Alerts should surface student safeguarding risks the moment they occur, including searches related to self-harm and extremism. Alerts should be sent immediately, not hours or days later, so your DSL can act swiftly and with confidence.
Monitoring must align with the UK GDPR and guidance from the Information Commissioner's Office. This means minimising unnecessary data capture and ensuring that only authorised staff can access sensitive information. Over-surveillance not only undermines trust, but can also breach compliance obligations, leading to reputational damage, regulatory scrutiny, and potential legal consequences for your school or college.
To adequately meet safeguarding requirements, monitoring needs to extend to all devices connected to the school's network. This includes school and college devices and student-owned devices accessing the network via Wi-Fi. A strong solution provides network-level visibility, eliminating the need for complex device management software or agent-based configuration, and ensuring comprehensive oversight of access to inappropriate websites and other potential risks.
A strong monitoring solution doesn't just flag digital student safety incidents; it provides safeguarding teams with a broader perspective on these issues. Look for systems that generate regular, granular reports that help highlight trends in student behaviour. This might include the number of incidents flagged in key risk categories, increases in access to certain viral videos, or shifting engagement with platforms like TikTok or Reddit, which can signal emerging risks. Fastvue Reporter facilitates scheduled reporting, delivered daily, weekly, or monthly, to help DSLs, wellbeing staff, and senior leaders track patterns, identify emerging concerns, and act early to protect students. This broader visibility enables educational institutions to move from a reactive to a proactive safeguarding posture, responding not only to individual events but also to the stories those events reveal over time.
Unlike filtering tools that are designed to block, monitoring is about visibility—understanding what's happening across your school's network so you can safeguard effectively. Fastvue Reporter is designed to provide staff with a clear, contextual view of student behaviour, rather than relying on vague internet searches and endless logs.
Fastvue Reporter doesn’t just show you a URL — it shows you a pattern.
Differentiates between actual visits and passive hits. Through our site clean technology, Fastvue Reporter identifies when a student really accessed a site, not just when a background image or ad pixel loaded (a simplified sketch of this idea follows the list below).
Highlights repeated access to the same or related content. Whether it’s a string of self-harm searches or frequent visits to a grooming-related forum, Fastvue builds a clear picture of escalating behaviour.
Surfaces grouped user activity across time. Staff can view timelines and activity logs for individual users, making it easier to identify concerning trends.
Dashboards are clean and usable. No clutter. No ambiguous acronyms. Just a streamlined interface that prioritises high-risk searches and online activities so staff know exactly where to focus.
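As a rough illustration of the 'actual visits versus passive hits' idea, the sketch below discounts requests triggered by ads, trackers, and page assets and keeps only pages a student deliberately opened. This is a generic illustration of the technique, not Fastvue's Site Clean implementation; the log format, domain list, and content types are assumptions for the example.

```python
# Simplified sketch: separate deliberate site visits from background requests.
# Illustrative only - the ad/tracker domain list, log fields, and content types
# are assumptions, not Fastvue's actual logic.

AD_AND_TRACKER_DOMAINS = {"ads.example.net", "pixel.example.com", "cdn.example.org"}
PAGE_CONTENT_TYPES = {"text/html"}  # a page the user opened, as opposed to images, scripts, fonts

def is_actual_visit(log_entry: dict) -> bool:
    """Count a request as a deliberate visit only if it looks like a page the user opened."""
    if log_entry["domain"] in AD_AND_TRACKER_DOMAINS:
        return False  # ad pixels and trackers are background noise
    if log_entry["content_type"] not in PAGE_CONTENT_TYPES:
        return False  # assets loaded automatically by the page
    return True

firewall_log = [
    {"user": "student42", "domain": "forum-example.com", "content_type": "text/html"},
    {"user": "student42", "domain": "pixel.example.com", "content_type": "image/gif"},
    {"user": "student42", "domain": "cdn.example.org", "content_type": "text/css"},
]

visits = [entry for entry in firewall_log if is_actual_visit(entry)]
print([entry["domain"] for entry in visits])  # ['forum-example.com'] - the only real visit
```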
Sadly, no one noticed the critical warning signs in Frankie's digital life. Not because they weren't there, but because her school's filtering and monitoring systems lacked the visibility, accuracy, and human oversight needed to pick them up.
When it comes to student safety, silence shouldn't be interpreted as calm. It may, in fact, signal that your school's online safety strategies are falling short: no alerts, no visibility, no action. The education sector has an imperative to move beyond passive filtering to active safeguarding systems that don't just block predefined URLs but also surface signs of mental health issues, distress, danger, and the need for help.
Let’s stop assuming that tech alone is enough. Let’s demand systems that are monitored, maintained, and meaningful. Because our children deserve more than silence. They deserve to be heard.
Download Fastvue Reporter and try it free for 14 days, or schedule a demo and we'll show you how it works.