As artificial intelligence becomes increasingly embedded in classrooms, safeguarding teams and educators are being asked to balance innovation with responsibility. From generative AI tools that support learning and creativity to algorithm-driven systems shaping online experiences, the choices schools make today will have long-lasting implications for student safety and wellbeing.
In this podcast, we explore what safe and responsible use of AI really means in an educational context, cutting through the hype to focus on practical realities. The conversation examines the differences between generative and non-generative AI, why that distinction matters for safeguarding, and how visibility, policy, and proactive monitoring play a critical role in protecting young people online. We also discuss the evolving risks AI introduces, including exposure to harmful content, misuse of tools, and reduced transparency, alongside the opportunities AI presents when implemented thoughtfully.
With UK Safer Internet Day 2026 in mind, this conversation reflects the theme “Smart tech, safe choices: exploring the safe and responsible use of AI”, reinforcing a simple but vital message: AI safety isn’t a one-off conversation or an annual awareness moment. It’s an ongoing commitment that requires informed decision-making, clear boundaries, and a shared responsibility between technology providers, schools, and safeguarding leaders.
John Robb
As we approach UK Safer Internet Day 2026, the theme “Smart tech, safe choices” couldn’t be more timely. AI is now woven into how young people learn, search, create, and communicate, often in ways that aren’t visible to schools. In this episode, we’re joined by our very own Nick Levey, Regional Director for the UK, to explore what AI really means for safeguarding, where the risks and opportunities lie, and how schools can support safe, informed choices in an AI-driven world.
Nick, before we dive in, can you briefly introduce yourself and your role?
Nick Levey
Yes, I can. My name is Nick Levey. I’m the Regional Director for Netsweeper in the UK. I’ve been involved in child safeguarding and AI for a long time, since about 2012, and it’s lovely to be here.
John Robb
Thank you for joining us today.
Let’s kick this off and get straight into it. UK Safer Internet Day has been happening for 20 years. This year, 2026, the focus is on “Smart tech, safe choices”. From a safeguarding perspective, why is AI such a central part of that conversation this year?
Nick Levey
So, I think it’s really interesting. It’s actually not just our industry that’s grappling with what AI means. It’s everywhere. If you go into a great number of industries in the UK right now, everyone’s trying to grapple with what AI means in a changing landscape, and regulators are too. And the truth is, this is a brand-new type of technology, or at least it’s a new adaptation of an old technology and a completely new way of framing the problem.
It sits right at the intersection of technology and human choice. We’ve had technologies before, things like simple algorithms, that have been able to make decisions for us. The difference with AI is that it responds conversationally and adapts to the user in real time. So, from a safeguarding viewpoint, this isn’t just content that’s being accessed. This is not something that’s created, put on a website, and then viewed by someone. This is content that’s actually being created in real time, and it’s data that is created with the input of the user. So, the users themselves are able to frame the discussion, frame the creation of content, and interact with that content in new and unique ways.
It presents an awful lot of new scenarios for us that we really haven’t been grappling with until the last two or three years.
John Robb
That’s something we’re going to come back to a little bit later, the whole idea that the content is made by the user. I think it informs some of the choices that are happening.
AI today is often talked about as something new, but from what Netsweeper sees across school networks, how long has AI really been part of everyday student activity?
Nick Levey
If you look at everyday student activity, AI isn’t particularly new. It’s something that’s been going on for a long time. Most people will be familiar with things like the autocomplete search suggestions on Google. That has been going on for at least a decade, possibly more. Recommendations from media services like YouTube and music streaming platforms have been there for a very long time. Smart moderation tools inside apps have been there for at least a decade, maybe more, and complex algorithms, which one could class as AI, have been part of the learning ecosystem since I was in school. AI itself, the very first instances of people trying to create something that works, actually goes back to the 1980s.
There were some problems there in its creation, and it started to amplify some of the biases that already existed in the system, but AI, in its simplest terms, is not especially new to us. What’s new at the minute is scale and availability. There wasn’t a website until a few years ago that I could go onto and ask to produce me a video of a dancing cat, or do my maths homework, or any of the nefarious things that we at Netsweeper are trying to protect children from.
That’s the new part. AI itself and complex algorithms, you’re talking the eighties; more complex algorithms that look something like what we would call AI now, 2010 or somewhere around there. But this is not new stuff.
John Robb
It’s been around for a while in various forms. Are schools always aware of where and how students are interacting with AI, or is a lot of this happening without clear visibility?
Nick Levey
I would argue that most of it happens without clear visibility. As a general rule, if you look through the lens that we look through, web filtering, there is a traditional system that says you may block AI or you may allow AI. Getting full control of what happens within there is actually pretty difficult. And there are AI tools built into learning platforms and all that sort of thing. So, looking through the lens of historical technologies, a lot of this is going completely unfiltered, and people are interacting with it in a way that is not transparent to the moderators, to the teachers, to the school leadership teams. It’s creating blind spots, and Netsweeper specializes in creating technologies to fill those blind spots. Things like our onGuard monitoring product. We’re not physically filtering out the technologies. We’re trying to stop people coming to harm by interacting with them.
John Robb
Safer Internet Day’s been around for 20 years. It’s not about fear; it’s about making better choices. When you look through that lens, that things are happening without quite as much control over them, what are some of the positive ways we’re seeing AI used in schools when the right safeguards are in place? From your role, how can AI actually support safeguarding outcomes rather than undermine them?
Nick Levey
We use it every single day. From a safeguarding outcomes viewpoint, our technologies are underpinned by AI. We use AI to make appropriate safeguarding decisions for schools and to intervene in the lives of young people for the positive.
There’s been a dogma for many years that humans should be making these sorts of decisions. The view is that when humans see data, they can make better decisions than machines. Now, that was probably true up until the early 2000s. But when you’re looking at a data set with millions upon millions of points and you try and stick that in front of a human, the honest truth is that humans aren’t that good at picking out the patterns in it.
And when you’re looking at safeguarding outcomes, you’re looking at patterns. It’s pattern recognition. What’s great at picking out those patterns is actually machines. You can’t give a human being a million data points and say, show me the dangerous ones, or at least not easily, and not quickly. So, we use that technology to try and produce good safeguarding outcomes for young people.
If you look at the classroom and the school environment, the enhancements that are coming from AI are just fantastic. You know, differentiated learning, accessibility support, drafting and brainstorming. Even young people simply being able to do complex research in a matter of seconds, sift through information, and produce good learning outcomes from that as well.
So, the implications of AI in education are actually going to be extremely positive, or at least I believe so. We’ve just got to make sure we have the correct checks and balances in place and make sure that people are interacting with it in the correct way. But I genuinely believe AI can be a positive in the UK education environment.
John Robb
There are positive outcomes and there are negative outcomes. We want to focus on the positive outcomes, but what are some of the most common AI-related risks that Netsweeper is seeing in real school environments right now? When you look at what you’re finding, what are the risks people should be aware of?
Nick Levey
From a safeguarding viewpoint, they broadly fall into a couple of categories. One is exposure to harmful material. Everyone knows that things like ChatGPT have safeguards built into them. If you go into it and you say, can you tell me how to build a bomb? It’ll tell you no. If you say, tell me a racist joke, it’ll tell you no.
Unfortunately, what most people don’t know is that there are an awful lot of platforms that use the same basic technology but don’t have those safeguards built into them, and if you go on there and say, tell me a racist joke, it’ll tell you one. And that’s problematic.
The second is image generation, and content generation in general, which can be extremely harmful. Just this year, 2026, there was a scandal in the UK on X, which used to be called Twitter, with people using its inbuilt AI tool to create pornographic images. Those pornographic images were not all of adults, and they were mostly not of consenting people. So, we had children and we had people who didn’t consent to this, and that is a very, very harmful use of AI, and it’s something we need to protect young people from.
We then have more niche categories: over-reliance on AI, critical thinking becoming a problem, and exposure to inaccurate information through AI. It’s a common misconception, but AI does not think. AI takes a language model, which has been trained, and produces the most probable outcome. So, if I feed it full of garbage, it presents garbage to me, and some of the outcomes that come out can be incredibly harmful, can be bigoted, can be problematic, and can sometimes be dangerous.
John Robb
That leads me to this notion of the AI generated content that can be particularly challenging for young people when it comes to spotting what’s real, what’s appropriate, what’s trustworthy. Where are the safeguards there?
Nick Levey
The honest truth is that, built into the platforms, those safeguards are more or less non-existent, and this creates a problem. People will be familiar with concepts like word salad in AI. So, John, you and I can ask AI the same question and get two radically different answers, right? The misinformation it can produce is hard to replicate, because it has inputs that are not transparent to the user.
So, everyone’s familiar with the fact that if I Google “chicken shops”, it will produce ones in my area. It’ll produce results that are tailored to me, and they will be different to the ones that come to you.
And my newsfeed has different things than your newsfeed, because that’s what the companies think we’re into. That’s been relatively well known for a while. What is less well known is that AI actually does the same thing. It takes my previous search history and tailors its results based on the things it thinks I want, and at least with traditional algorithms, I could always tell you how the machine made the decision. I could say, “well, it did this for four points, this for nine points, it took off two here, it went left here”. In a true AI environment, a large language model trained on a diverse data set, you more or less cannot do that, or at least cannot do it in any meaningful way.
And the platforms are really not trying to interject safeguards there. That’s where people like Netsweeper come in, to try and protect young people from this content, which in an AI world is going to exist, right? The genie’s out of the bottle. We’re never going to get rid of AI now; the trick is in trying to produce safe environments for it, walled gardens, and protect users from harm.
John Robb
You mentioned at the start the different results that people see. That’s really what you were talking about here: the AI engines themselves have a relationship with the person making the request, sort of like chatbots, and that produces unique content, which then makes it hard to apply standardized rules to those tools, because nothing is standard; everything is unique to the individual user. How can we help, from a safeguarding perspective, in that world? What kind of technology can we use to help protect those pupils?
Nick Levey
The first thing to say is that safe use of AI is not banning AI. I always use an analogy when I’m speaking to schools. I have a 7-year-old and a 10-year-old, and we teach them road safety. The incorrect way to teach road safety would be to say, “Cars are dangerous. If you go near a road, there are cars; therefore, don’t go near roads.” It just doesn’t work. And it’s the same with AI. If you say AI is dangerous, stay away from it, well, you’ve just missed the point. AI exists and students are going to interact with it, so you can’t ban it. It’s just not possible.
So, in practice, what do we do? Well, as educators, we create clear policies for the safe use of AI. We teach students how to use AI safely, and what is going to produce good learning outcomes and good results as opposed to bad ones.
We employ technologies to try and make sure that the darker edges of this do not impact the children. No matter how many safeguards you put in an AI system, someone is going to trick the system somewhere, and that’s when you need something like Netsweeper to keep it safe, to help educators safeguard children.
John Robb
Netsweeper has a long history of content filtering, and some people assume filtering doesn’t matter now because everything’s AI-driven. From your perspective, why is web filtering still an important tool in the drive towards safeguarding children?
Nick Levey
AI is emergent, but it represents an absolute fraction of a percent of the safeguarding harms we see online. Now, that is growing, but most of the material that we are worried about children interacting with on the web is not created by ChatGPT. It exists on known platforms that host bad information, and those platforms constantly evolve, and again, we use algorithms and we use AI to try and spot these things and adapt to them in real time.
But when people say AI, they tend to mean generative AI. These are the things that I load up and talk to, and they talk to me like I’m a human; I can say, no, not that one, do this one. Those are new, but the traditional harms, like suicide-related material, are really about user-to-user interaction. The most common one of those that we see is people going onto forums of like-minded people and interacting with each other. Primarily, people want human interaction. People, for the most part, are not talking to Grok or ChatGPT or whatever it happens to be. That’s user-to-user interaction on known platforms.
So, the overwhelming majority of the material that we are concerned with is actually not AI generated. It’s human generated and it doesn’t exist on AI platforms, so web filtering is still the lion’s share of the prevention of this. Now the reaction to it is guided by our monitoring product. And again, that has AI components to it, but web filtering is not going away. It still needs to be there, and it still needs to be effective and robust.
John Robb
The internet’s very big. There’s still lots of content out there, and you still need to protect people from that content, not just the AI or generative AI. It’s a multi-dimensional approach, and you need all of the approaches. Children are being inundated from multiple vectors, and you have to be able to protect them from that, so you need multiple tools. Speaking of multiple tools, how do approaches like dynamic filtering, monitoring, and human moderation help schools balance innovation with protection?
Nick Levey
If you look at our human monitoring, what we should say is that our monitoring is AI-assisted human monitoring. We get computers to make alerts, and then we get humans to moderate them. That’s incredibly valuable. It’s that synergy between human intelligence and artificial intelligence trying to produce safeguarding outcomes, and that’s incredibly important. Computers, as I say, are great at learning the patterns. Humans are great at the interpretation. The computer picks the pattern; the human does the interpretation. Then there are escalation pathways: it’s no good to just say, we’ve caught this event and this user is now at risk of harm. An actual human has to decide what to do about that. Computers are still bad at that. You know, go and tell your computer to call an ambulance because we think a child’s just come to some form of danger. It can’t do it. You need humans in there. So that’s the monitoring piece.
If you start looking at dynamic web filtering, that’s just an incredibly valuable tool. On a standard day with Netsweeper CNS (Category Naming Service), our dynamic category engine, we can see over a hundred million new web classifications. That is a hundred million URLs that have been classified by our system into a certain category. Many of them are completely fine. The 10% or 20% that are not need to be caught by our systems so that students cannot interact with them. The best cure is prevention. If we can stop people interacting with this stuff altogether, then we prevent an awful lot of harm, and the best way to do that is using dynamic content analysis for web filtering.
John Robb
Safer Internet Day is the focal point here. AI safety isn’t a one-day issue. When you think about schools and the mindset that they have, what is one thing you want to encourage schools and organizations to reflect on when it comes to AI and safeguarding?
Nick Levey
So, it’s really not one thing, and I think you used some of the correct words there. The trouble is, when we get this big focal point, UK Safer Internet Day or whatever it happens to be, people think of it as a moment. It’s almost a “where were you when” moment, and if we want to engage effectively, we don’t want a moment; we want what you said, which is a mindset.
We want people really weaving this into the fabric of an educational environment. So, what do schools need to do? I said before, proactive policies. You need policies, built into the fabric of the school, that try to mitigate these harms. We need reactive responses, so when these things come up, we can respond to them effectively.
Now, I’ve intentionally missed a point, because you have your policy and you have your response. Somewhere in the middle of that is detection, and that’s what Netsweeper is, because if you can’t see the thing, you can’t do anything with it, right? So, we need technologies to do that detection element for us.
And then the school has its policies to react, and it needs to adapt and change those policies. I don’t know what AI platforms are going to exist in two years. Anyone who tells you they do is foolhardy or misleading you. Schools need to constantly adapt these policies, constantly be working on them. And I’ve said this a few times because I truly believe we cannot ask, “can we stop students using AI?” It’s not possible. It’s not even desirable. What we need to ask ourselves as an educational environment is, first, do we understand how they are using AI? Second, do we understand why they are using AI? And third, can we help them use it more safely? Those are the three questions that need to be built into the bedrock of any of these policies.
John Robb
I think there’s a lot of information you’ve provided here, Nick. Ideally, it is that mindset, it is the thinking behind the approach and how you go about doing it. We hope that schools will reflect on that, take it into their environments, and look to build systems that help keep children safe. As with all things, it’s neither good nor evil; it’s just a thing, and we have to find a way to make it as safe as possible.
So, thank you for those insights and for supporting kids in what you are doing every day at Netsweeper. We appreciate that, and hopefully Safer Internet Day does raise awareness, not just in education but with families at home. If they understand that they also have a role to play in all of this, that is helpful. So, thanks again, Nick. We appreciate it.
Nick Levey
One final thought, John. One thing that, from a Netsweeper perspective, we really want to get through to people here is that visibility enables choice. Everything I’ve just talked about, looking forward, is underpinned by being able to see the thing in order to make the decision, and that is what we at Netsweeper are all about. You can’t support safe decisions if you can’t see what’s happening, and you can’t react to threats if you can’t see them either. That is the one point I would like to leave anyone listening to this with.
John Robb
You can’t manage what you can’t measure. That’s what it’s all about. Visibility.
Thanks very much, Nick.
Nick Levey
Thanks guys.
