Artificial intelligence is no longer a future concept in education. It is already embedded in the tools students and educators use every day, from search engines and chatbots to recommendation systems and AI-generated content. Whether schools have formally adopted AI or not, the reality is clear: AI is already part of the learning environment.

As we head toward UK Safer Internet Day 2026, with its theme “Smart tech, safe choices – Exploring the safe and responsible use of AI”, now is the time to pause and ask an important question: Are we prepared for the safeguarding implications of AI in schools? 

AI Is No Longer Optional 

Students are using AI to research topics, draft assignments, generate images, summarise content, and explore ideas. Many of these tools are freely available, easy to access, and require little to no technical knowledge. 

In many cases, students encounter AI without actively seeking it out; it reaches them through search results, social media feeds, and recommended content. This widespread availability presents both opportunity and challenge. While AI can support learning and accessibility, it can also introduce new risks that traditional safeguarding approaches were never designed to address.

What Do We Mean by “AI”? 

AI in education is not a single tool, and not all AI functions in the same way. Broadly, AI includes systems that analyse data, influence decisions, or respond to users, often in ways that feel increasingly human.

One important distinction is between generative AI and other forms of AI. Generative AI creates new content in response to prompts. This includes: 

  • Chatbots and AI-assisted writing tools 
  • Image, video, and audio generation tools 

Other forms of AI shape online experiences without generating content directly, such as: 

  • Recommendation algorithms that influence search results and social media feeds 
  • Automated moderation and monitoring systems 

Why does this matter for safeguarding? Generative AI produces content in real time, which can make misleading, biased, or age-inappropriate information appear credible and authoritative. Because this content is dynamic and unpredictable, it can be difficult to manage using traditional, static controls alone.  

Understanding these differences helps schools make informed decisions about how AI is used and monitored. As AI becomes embedded across the digital environments students already use, safeguarding approaches must focus not only on access, but on visibility, context, and early indicators of risk, enabling smart technology use alongside safe choices.

The Opportunities AI Brings 

Used responsibly, AI can offer meaningful benefits in education. It has the potential to support personalised learning, helping students engage with material at their own pace and level. Accessibility tools such as speech-to-text, translation, and summarisation can remove barriers for learners with diverse needs. 

From a safeguarding perspective, AI can also assist in identifying online risks more quickly. Advanced monitoring and threat detection capabilities can help surface concerning behaviours or content earlier, allowing for faster intervention. However, these benefits are only realised when AI is implemented with intention and oversight. 

The Risks That Come With AI 

AI is not risk-free. One of the most pressing concerns is the spread of misinformation and deepfakes. AI-generated content can appear highly convincing, making it increasingly difficult for young people to distinguish fact from fiction. 

There is also the risk of exposure to harmful or age-inappropriate material. Not all AI tools apply the same safeguards, and some may surface extremist, biased, or explicit content if not properly filtered or moderated. 

Over-reliance on AI is another emerging concern. When students treat AI outputs as authoritative, rather than something to question and evaluate, critical thinking skills can be undermined. Add to this ongoing data privacy concerns, and the need for responsible AI use becomes even clearer.  

Why Safeguarding Needs to Evolve 

Traditional web filtering and acceptable-use policies were designed for a more predictable internet. AI changes that landscape. AI tools can generate content dynamically, respond differently to similar prompts, and evolve rapidly over time. 

This means safeguarding must evolve too. Schools need approaches that combine education, policy, and technology, not to block innovation, but to ensure it is used safely.  

Supporting Safe and Responsible AI Use With onGuard 

As schools explore the benefits of AI, safeguarding must keep pace. The theme of UK Safer Internet Day 2026, “Smart tech, safe choices”, highlights the need for technologies that enable innovation while protecting student wellbeing. Netsweeper’s onGuard Digital Safeguarding Solution is designed to support exactly that balance.

AI-driven tools can surface harmful, misleading, or age-inappropriate content unexpectedly, and students may not always recognise when interactions or outputs become unsafe. onGuard helps schools gain visibility into online behaviours and emerging risks, including those influenced by AI-powered platforms, without relying solely on traditional blocking.  

Through intelligent monitoring and contextual alerts, onGuard enables safeguarding teams to identify early indicators of concern, such as exposure to harmful content, signs of distress, or risky online interactions, allowing for timely and proportionate intervention. This approach supports critical thinking and responsible use, rather than discouraging exploration or learning.

As conversations around AI in education continue, solutions like onGuard help schools move from reactive responses to proactive safeguarding, ensuring that smart technology is paired with safe choices.

Continuing the Conversation 

AI is already in schools. The question now is whether we respond proactively or reactively. In our upcoming podcast episode, recorded ahead of Safer Internet Day, we explore how schools can approach AI with clarity, confidence, and care, including real-world scenarios and what safe and responsible use looks like in practice.

Listen now to our Safer Internet Day podcast episode exploring safe and responsible AI use in real-world school settings: AI SAFETY IN SCHOOLS: RESPONSIBLE USE, RISKS, AND SAFEGUARDING STUDENTS