by
Bec May
If you work in education in 2026, chances are you're already well-schooled (yes, pun intended ;) in the term digital citizenship.
Along with digital literacy, digital citizenship has become one of those buzzwords that seems to pop up everywhere: policy documents, conference sessions, safeguarding discussions, curriculum planning meetings, staff emails, webinars, the list goes on.
And for good reason. Far from the online safety lessons of yesteryear, digital citizenship now sits at the intersection of curriculum, online safety, behavior, and wellbeing. Encouraging, teaching, and embedding digital citizenship across the wider school experience plays a key part in students developing into thoughtful, capable, independent citizens who go on to participate effectively in a digital-first world.
It also gives more practical meaning to some of the big goals schools already talk about: resilience, respect, critical thinking, responsible use of technology, and preparing students for life beyond the classroom.
In this guide, we'll break down why digital citizenship matters, the five digital competencies, and what schools should focus on as they help shape the next generation of digital citizens.
Looking for a practical way to see what is happening in students’ digital world?
Fastvue Reporter for Education helps schools turn firewall and platform activity into clearer reporting, faster investigations, and more informed safeguarding conversations.
Digital citizenship is the ability to use digital technology safely, responsibly, critically, and effectively. For schools, that means helping students make informed choices about how they behave online, evaluate information, communicate with others, and manage their digital footprint over time. It includes online safety, but it is broader than online safety alone.
Not quite.
Online safety is a core part of digital citizenship, but it is not the whole picture. Online safety is about helping students recognize and reduce harm online and protect their well-being, while digital citizenship is about how they participate in digital life more broadly.
It includes safety but also covers how students communicate, treat others, assess information online, use digital tools for learning, interact with AI, and manage their digital footprint over time.
A student may know how to avoid obvious online risks but still demonstrate poor digital citizenship. They might spread misinformation, enter personally identifiable information (PII) into AI tools, behave badly in group chats, or post personal pictures and information that may cause problems for them later.
Online safety asks, "How do we protect students from harm online?" Digital citizenship asks a broader and perhaps more useful question: "How do we help students use technology well?"
One useful model comes from the ISTE-led DiGCit Coalition, which frames digital citizenship around five competencies: Balanced, Informed, Inclusive, Engaged, and Alert. ISTE’s aim is to shift digital citizenship away from a list of “don’ts” and towards the habits and actions schools actually want students to develop.
For schools, these five competencies are a helpful bridge between theory and practice.
Balanced: helping students build healthy habits around screen use, attention, sleep, and digital wellness.
Informed: teaching students how to evaluate information, question sources, recognize misinformation, and fact-check AI-generated content.
Inclusive: teaching respectful communication, empathy, and how to participate in digital spaces without piling on, excluding others, or amplifying harm.
Engaged: helping students use technology purposefully, whether for learning, collaboration, creativity, or positive participation in online communities.
Alert: helping students recognize risk, protect privacy, manage passwords, spot suspicious contact, and create safer online spaces for themselves and others.
Schools have been teaching online safety and responsible technology use for years, but the context has changed significantly over the last 10 years and even more dramatically in the last 5.
Ten years ago, digital citizenship lessons (which were probably called online safety lessons) usually focused on privacy settings, stranger danger, and not posting a drunk photo of yourself on Facebook when you're supposed to be at your shift at Maccas. Still useful, obviously. Nobody needs their 15-year-old GitHub username, featuring both a swear word and the name of a major tech mogul, following them into a job interview. For anybody wondering, yes, I learned this the hard way. Thankfully, I work with the legends at Fastvue, who have a healthy sense of humor and were once feisty teenagers themselves ;)
Students previously had basic internet access for research, submitting assignments, sending the occasional email, or changing their MySpace song. Now, they move between group chats, multiple social media accounts, shared documents and drive folders, gaming communities, AI tools, and school-managed platforms.
The risks students face are now wider-reaching, faster-moving, more personal, and harder for adults to see.
In Australia, eSafety's 2025 research found that 74% of Australian children aged 10 to 17 had seen or heard online content associated with harm, including fight videos, dangerous challenges, hateful content, sexual material, drug-related content, extreme violence, and content related to self-harm. More than half of the young people surveyed had experienced some form of cyberbullying, 60% had seen or heard online hate, and 27% had personally experienced online hate. One in four had experienced non-consensual tracking, monitoring, or harassment.
That is the real digital citizenship landscape students are dealing with. Not passwords, privacy settings, and 'be kind online' platitudes. To become good digital citizens, they need the practical and critical skills to recognize harm, respond safely, question what they see, protect themselves and others, and know when to ask for help.
Hate speech is another daily reality that young people face in the digital world. eSafety reports that over 50% of young people have seen or heard hateful comments about a cultural or religious group online. Their research shows that students respond in a wide range of ways when they encounter harmful content: some report it, some ignore it, some comment against it, and some share it with friends. This is digital citizenship at play. Rather than just teaching students that hate speech is bad, educators need to teach the critical skills required to respond to this content responsibly. Digital citizenship promotes respectful communication across cultures, improving collaboration in a globalized society.
Online hate speech can be a wider pathway into extremist content, conspiracy thinking, violent ideologies, misogyny, racism, antisemitism, Islamophobia, and other grievance-based online communities. It may be hard to believe that this type of content pervades much of the internet, but it pops up in gaming chat, video recommendations, meme pages, livestreams, anonymous forums, and social media feeds.
One need only look to the manosphere to understand how students get unwittingly drawn into these rabbit holes. Searches on confidence, dating, fitness, and self-improvement can quickly feed algorithms that start pushing content containing messages of misogyny, entitlement, and harmful attitudes towards women and girls.
This is where curiosity walks a dangerous line. A student may think they are just being edgy or funny, but saving, sharing, or engaging with violent, hateful, or extremist content can quickly become a safeguarding issue and, increasingly, a legal one. In the UK, schools have a legal Prevent duty to help protect students from being drawn into terrorism. In the year ending March 2024, educational institutions in England and Wales made almost 3,000 referrals due to concerns about young people's vulnerability to radicalization.
AI is becoming entangled with student wellbeing, learning, and critical thinking. Pew Research Center's 2026 study on How Teens Use and View AI found that 57% of US teens had used chatbots to search for information, 54% had used them to help with schoolwork, and 12% had used them for emotional support or advice. Common Sense Media also found that teens are using generative AI for school assignments, with 46% of those students saying they had done so without their teacher's permission.
AI literacy is often discussed in terms of plagiarism or cheating; however, it also belongs in digital citizenship. Students need to understand where AI is useful, where it is unreliable, and when they are asking a tool to do thinking or emotional work that should involve a human being.
For a deeper look at these risks, see our article on AI in schools: the hidden risks for student safety and learning.
Ofcom's 2025 children's media literacy report found that fewer children are being taught how to spot fake news, with only 23% taught how to spot it and only 19% taught what to do once they recognize it.
This is a huge problem in a digital society where, in the same year, 72% of adults reported encountering misinformation on a digital platform, with 64% of Facebook users reporting the same. As you can see, the math ain't mathing on this one.
Students no longer turn first to traditional research channels and credible sources like libraries. They're looking to socials, search engines with AI-driven summaries at the top of the page, and their favorite influencers as sources of truth. Add deepfakes and AI-generated media of public figures, celebrities, journalists, and even politicians appearing as near-undetectable simulacra into the mix, and it's clear that kids need to know how to spot fake content more than ever before.
Stranger danger has long been something we've taught our kids. The problem in the online world is that strangers can hide behind fake profiles, fake ages, fake genders, and may even pose as real people the student already knows. Chatting with strangers is also enticing; it offers the perceived safety of anonymity and carries a level of risk that can appeal to teenagers looking to push boundaries. Reports show that 38% of young people chat to strangers online. Research from 2025 found that a shocking one in five teens had experienced sexual extortion, with many incidents occurring between the ages of 13 and 15 and some before the age of 12.
For anyone familiar with Amanda Todd's case, this is not an abstract risk.
Amanda was a 15-year-old Canadian student who died by suicide in 2012 after years of online sexual extortion, harassment, and bullying. As a young teen, she was coerced into exposing herself on webcam. The person behind the abuse captured the image, used it to blackmail her, and later sent it to people in her school community. Amanda changed schools, but the harassment followed her. In 2022, Aydin Coban was convicted in British Columbia of charges including extortion, criminal harassment, communicating with a young person to commit a sexual offense, and possession and distribution of child sexual abuse material.
Digital citizenship will not stop every predator, every coercive message, or every unsafe online interaction. But it can give students the language to recognize what is happening earlier. It can help them understand that a threat to share an image is abuse, not their fault. It can teach them not to forward, save, joke about, or weaponize someone else’s intimate image. And it can make clear that asking for help is not “making it worse”. It is the way the harm starts to stop.
Digital citizenship is a broad term that covers many issues, which is part of why it can feel kind of vague. However, when it's broken into teachable areas, it starts to come into focus.
A strong digital citizenship curriculum should help students stay safe online, think critically, communicate respectfully, protect their privacy, manage their digital footprint, and use technology in ways that support their learning and well-being.
For schools, key areas include:
This includes staying safe online, protecting personal information, managing passwords, recognizing scams and suspicious contacts, and understanding when to ask an adult for help. This also includes basic cybersecurity habits, such as not sharing login credentials, checking links before clicking, and understanding why personal info should not be entered into unfamiliar websites, apps, or AI tools.
Students should learn how to:
identify suspicious messages and manipulative behavior online
protect usernames, passwords, and other personal information
understand what privacy settings do, and what they do not do
avoid clicking suspicious links, downloading unknown files, or sharing login details
recognize when an app, site, or chat space may be unsafe
ask for help when something online feels risky, confusing, or wrong
Fastvue helps schools identify online activity that may indicate safety concerns, including blocked access attempts, visits to potentially unsafe or inappropriate content, and attempts to access risky chat apps.
This includes checking sources, recognizing misinformation, understanding bias (including confirmation bias), identifying fake and manipulated content, and knowing that information published online is not automatically true because it is popular, looks polished, or is confidently written.
Teaching digital literacy involves teaching students to:
Check who created a resource
Compare claims across reliable (and unreliable) sources
Quickly recognize clickbait, misleading headlines, fake images, and manipulated videos
Understand how misinformation spreads in online communities
Fact-check AI-generated answers
Fastvue helps schools see when students are accessing unreliable news sources, are stuck in potentially dangerous or conspiracy-laden rabbit holes, or are accessing other concerning content on the network, giving teachers a practical starting point for conversations around how to evaluate information.
It can be easy to forget that behind digital devices are real people with real feelings. Tone can be misread. Screenshots can travel, and pile-ons in group chats can escalate quickly.
Digital etiquette involves practicing kindness and communicating respectfully online, taking into account the lack of body language and tone in digital communication.
Students should learn how to:
communicate respectfully online, even when they disagree
understand the fine line between banter and bullying
avoid sharing harmful, hateful, and humiliating content
understand the impact of exclusion, rumors, and screenshots
report cyberbullying or other unsafe behavior
When an online communication issue occurs, Fastvue Reporter helps schools investigate the network activity surrounding the event. For example, if a student is accused of creating a deepfake, sharing humiliating content, or using an AI tool to generate harmful messages or images, Fastvue can help staff see whether image-generation, face-swapping, text-generation, or similar platforms were accessed on the school network. Where Fastvue’s Microsoft 365 email and chat monitoring is enabled, schools can also investigate whether the issue extended into school-managed emails, Teams chats, or channel messages. This gives staff a clearer context around both how the content may have been created and how it was shared, discussed, or escalated.
What students post, like, upload, and share online can leave a lasting record. Students need to think beyond the present moment and consider how their online actions create a permanent digital footprint that may affect their privacy, safety, relationships, reputation, and future opportunities.
Students need to:
Understand how usernames, images, and location sharing can reveal personal details
Think before posting, forwarding, or screenshotting content
Manage privacy settings and account visibility
Understand how digital footprints are built over time
Recognize that private online spaces rarely stay that way
Fastvue gives schools visibility into the digital traces students leave through searches, websites, videos, and AI use on the school network. This offers a practical way to discuss digital footprint, privacy, and accountability using real examples of how online actions leave a trail.
Helping students understand and take ownership of their device use, and recognize how digital media affects their attention, sleep, mood, friendships, and everyday life, is key to them seeing when digital tools are supporting them and when they are getting in the way. Creating healthy digital citizens starts with students being able to question their own habits: Is this helping me learn, connect, and create, or is it draining my attention, mood, and sleep? That is a much more useful lesson than telling them to get off their phones and get some fresh air.
Fastvue’s usage reports help schools understand how digital tools are used during the school day, including patterns that may indicate distraction, avoidance, or unhealthy habits.
AI literacy is a new and emerging aspect of digital citizenship. Students need to understand how to use AI tools in ways that are useful, ethical, and safe, while recognizing that these tools can be wrong, biased, or misused.
Students should learn how to:
Check AI-generated answers against reliable sources
Understand that AI tools can be wrong or biased
Avoid entering personal, sensitive, or confidential information
Use AI to support learning rather than replace thinking
Follow school rules around attribution and academic integrity
Recognize when AI use crosses a line in assignments or assessments
For supported firewalls, Fastvue can help schools turn logged AI activity into readable reports and alerts. Wellbeing teams and teachers use these insights to give practical advice on academic integrity and what responsible AI use actually looks like in practice.
Where AI prompt logging is available, staff have a practical starting point for conversations about how students are using AI, what they are asking it to do, and how to tailor education around effective AI prompt engineering.
Shaping the digital citizens of tomorrow is not about adding another slogan to a school strategy document or directing students to more boring online resources they will never read.
It's about helping students build the judgment, habits, and confidence they need to live, learn, and communicate safely in a digital world.
For schools, the work is both educational and practical. Lessons, policies, and pastoral conversations matter, but so does having visibility into the digital environments students are actually using. When schools can see the patterns, risks, and behaviors behind the screen, they are better placed to support students early, respond proportionately, and make digital citizenship part of everyday safeguarding, not just a once-a-year lesson.
Download Fastvue Reporter and try it free for 14 days, or schedule a demo and we'll show you how it works.