In today’s digital landscape, ensuring online child safety has become more crucial than ever. The proliferation of online threats, coupled with the alarming rise of AI-generated Child Sexual Abuse Material (CSAM), underscores the urgency to safeguard children in virtual spaces. As technology advances, so do the tactics of those seeking to exploit vulnerabilities, making it imperative to establish stringent regulations that address these issues head-on. By implementing and enforcing robust measures, we can create a safer online environment for children, shielding them from potential harm and fostering responsible and ethical AI usage to combat the growing menace of CSAM. 

On our most recent episode of Inside the Sweeps, we have Lloyd Richardson from the Canadian Centre for Child Protection speaking about the importance of keeping children safe online, driven by the surge in online risks, the emergence of AI-generated CSAM, and the need for rigorous regulatory measures to counter and prevent the spread of such harmful content.

Alison Bussey
Welcome, everyone, and thank you for joining us on another episode of our podcast, Inside the Sweeps. I’m really looking forward to speaking with our guest today. Lloyd, can you tell us about the Canadian Centre for Child Protection’s mission and initiatives, and what your role as Director of IT is in helping your organization achieve these goals?

Lloyd Richardson
Yeah. Thank you for having me, Alison. So yes, my name is Lloyd Richardson. I work at the Canadian Centre for Child Protection. We were an organization founded back in 1985 as a missing children’s organization in the province of Manitoba. So, our beginnings were humble. Sadly, the beginnings of our organization were related to a missing and subsequently murdered child here in Winnipeg, Candace Derksen, and her parents founded the organization, Child Find Manitoba, that has grown into the Canadian Centre for Child Protection we have today. So, the different parts of the organization would be, initially, our missing children services, which we still run today. But as the Internet became more of a thing in the 90s and the early 2000s, we formed Canada’s national tip line for reporting the online sexual exploitation of children, cybertip.ca. We also grew some educational programs out of the centre as well, relating generally to child safety, and then later, in 2016, we founded something called Project Arachnid, which is a global tool for reducing the amount of child sexual exploitation material that sits on the Internet. So that’s sort of our global endeavor that’s grown from this initial thing that started in Manitoba a long time ago. So, my role here as the Director of Technology involves many of those different programs. We have a large team of technical folks here that work primarily on the tip line, but also on the education side as well as Project Arachnid.

Alison Bussey
Awesome. So, obviously we’re going to kind of talk more about that online aspect of protecting children. The digital landscape is constantly evolving; we see online risks emerge and advance really quickly, and because of this it seems like no single entity can address these complex challenges alone. Collaboration between organizations, governments, educators, and technology providers means they can pool their strengths to develop solutions. With that in mind, how does the Canadian Centre for Child Protection work alongside organizations like Netsweeper to create a safer online environment for children? But kind of, I guess, more importantly, how does this collaboration address the unique challenges that children face when navigating the online world?

Lloyd Richardson
Yeah. So, umm, child safety on the Internet is a very multifaceted problem. So, when we look at this, as you said, there’s not going to be one entity that can absolutely solve that problem. It’s going to be a community of entities that are able to reduce the risks that children face online. So specifically, using the Netsweeper example, with Project Arachnid we identify URLs that contain child sexual abuse material and make those available to filtering companies like Netsweeper, which is, you know, a piece of the puzzle. There are a lot of people that would say filtering isn’t the solution, because obviously it’s not a panacea; it’s not going to solve all the problems out there. But it is, again, one of those tools that you can use as a piece in the greater puzzle to reduce risks to children and reduce the proliferation of child sexual abuse material. Obviously, law enforcement plays a role here as well. Industry plays a role in so many ways. Governments play a role in terms of enacting sorely needed legislation in this space. We’ll probably get into this a little bit later, but the Internet, sadly, is not a safe place for children. We’re a little bit devoid of regulation; I don’t think the online space has caught up with what we’ve done in the offline space. We need to look at how we create separate, safe environments for children, to essentially make the Internet something that is usable by everyone, not just by adults.
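
To make that data flow concrete, here is a minimal, hypothetical sketch in Python of how a filtering service might consume a shared list of flagged URLs and check requests against it. The file name, list format, and matching logic are illustrative assumptions only, not Project Arachnid’s or Netsweeper’s actual interface.

```python
# Hypothetical sketch of consuming a shared URL list and filtering against it.
# The file name, format, and exact-match logic are illustrative assumptions.
from urllib.parse import urlsplit


def load_blocklist(path):
    """Load one URL per line into a set of normalized (host, path) entries."""
    entries = set()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            parts = urlsplit(line)
            entries.add((parts.netloc.lower(), parts.path))
    return entries


def should_block(url, blocklist):
    """Return True if the requested URL exactly matches a blocklist entry."""
    parts = urlsplit(url)
    return (parts.netloc.lower(), parts.path) in blocklist


blocklist = load_blocklist("arachnid_urls.txt")  # hypothetical file name
print(should_block("https://example.com/abuse/image.jpg", blocklist))
```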

Alison Bussey
So you mentioned laws not having caught up. When we look at parents and educators, they often, I think, find themselves in a constant pursuit of vigilance to keep their children safe, which probably feels like an uphill battle to stay ahead of online threats. What are some of the latest online threats and dangers that they should be aware of to protect children effectively? And is there maybe a specific instance you can think of that highlights the impact of online threats?

Lloyd Richardson
Yeah, that’s a really good question. So, when we touch on the idea of parental involvement in child safety online, absolutely, that’s a requirement. However, that seems to be a pretty common refrain from industry in general, right? It’s always the parent’s responsibility, and you kind of touched on it, it’s an uphill battle, right? We can’t lump all of the responsibility on parents to protect their children on the Internet. I think there’s a huge gap in what you see industry doing in this space. They’re simply not doing enough, being more concerned about their profits than actually protecting children online. To cite a more recent example, and I could cite a plethora, one of the more common things we’ve seen in the last couple of years is probably related to young males and sextortion. We’re seeing, on any of these platforms, younger boys or teenage boys essentially getting targeted by criminal organizations that coerce them into sending sexualized images and then threaten to send those images to all of the social media contacts that child might have, exposing them to their friends and their parents, and inevitably asking for a monetary transfer. And this is, make no mistake, organized crime. What we see is, quite frankly, the companies that these children have accounts on not doing nearly enough to prevent this type of thing from happening. It’s sort of treated as “oh, it happened, what can we do? We can maybe shut down some accounts and move along.” There’s no real incident response or any sort of effort to prevent this activity from happening. It’s very reactive. And again, I go back to the idea that if you have a mid-teenage boy, it’s going to be difficult to monitor their entire online profile of what they’re doing. I mean, you can give them the education and the tools, but teenage boys are impulsive. They’re going to do things like this, and I think we need to even the playing field there and hold the industry more responsible for the things that happen within their environment. So again, I get back to what I said before about the offline world. Do we allow 14-year-old boys to go into bars? We don’t, because we adequately age verify them. Sure, some slip through, but the state of age verification in the online world is pathetic right now, to be quite honest.

Alison Bussey
So you mentioned the sextortion of males as one example, but there is a varied landscape of child exploitation online. And I think recently we’ve seen a rise in AI-generated CSAM, which is quite alarming. AI can craft such deceptive and lifelike content so easily, but I think it might also evade traditional detection approaches. Can you explain a little bit about AI-generated CSAM and how it differs from traditional forms?

Lloyd Richardson
Sure. So, we’ve absolutely seen the trade of this type of material, and I put it into a few different buckets. When we talk about AI-generated material, we can think of material where it’s not a real child at all. It’s been completely concocted by Stable Diffusion with a few other models layered on it. So the image you see would be very difficult for an analyst to tell isn’t a real child, but this child doesn’t exist; it still constitutes CSAM. Other images, perhaps a little more concerning, are where we’re getting into deepfakes. So, you have a concocted image where you’re putting the face of a known child onto it. Say I have a neighbor, and I can see their Instagram profile; I’m able to grab their face, put that onto various different images, and create bespoke child sexual abuse material, which is absolutely happening in the wild right now, as it were. So it’s very concerning in terms of the availability of this material and the ability to create this type of CSAM, and it’s going to be a concern for law enforcement in terms of the volume. Law enforcement is already really, really overwhelmed with CSAM and, in general, just the exploitation of children on the Internet. Now we throw in an entirely new category. Think about a law enforcement officer looking at an image, trying to identify a child that doesn’t exist. That’s not something we really need on the list of to-dos in this space, and it’s certainly going to affect things in that regard. So, identification of what I’ll call synthetic CSAM is probably an important thing to do. And again, you get back into the regulation side of things. Those that are making these tools should likely be watermarking images that are in fact synthetic, or looking into some technologies in that direction. There are some things being done on that side, but again, it’s a little bit of the Wild West, and yeah, it’s definitely a problem.

Alison Bussey
You’ve described generative AI for CSAM as the Wild West, and there was a recent report in which researchers tracked a 172% increase in the volume of shared CSAM produced by generative AI, just in the first quarter of this year, which is crazy. Why has the emergence of AI-generated CSAM become a pressing concern when we look at child exploitation and online safety?

Lloyd Richardson
Well, I’d go back to what I said before about the burden that’s going to place on law enforcement. The reality is there’s a lot of work already in this space, and we are certainly seeing more of it, but I wouldn’t necessarily say that it’s overtaking what we see in terms of real children in these images. The numbers there are, quite frankly, staggering. And just to put that into context, we haven’t seen the same sort of trade of these generated images. In my mind, the biggest concern is going to be the ones related to the deepfake type stuff, where you have someone who wants to create material of a child that’s close to you. It really raises some questions, like, “OK, I’m going to take a picture of my child. Maybe I should be thinking a little bit more about how broadly shared that is.” I think that’s been an issue for a while now, and perhaps this will highlight that problem, where you maybe want to not overshare information publicly related to your own children. But yeah, it’s definitely another problem on the list of child exploitation on the Internet that we’re going to have to come up with some solutions to.

Alison Bussey
We’ve talked not only about the complexity but also the rapid advancement of AI in the past few years, and with that come some intricate challenges, as we’ve mentioned, for law enforcement and technology platforms who need to detect and combat the proliferation of AI-generated CSAM. What are some of the challenges that law enforcement and technology platforms face in detecting this type of CSAM?

Lloyd Richardson
Yeah, I touched on the law enforcement side of things, less so on the detection side, but the vast majority of people doing proactive detection of known child sexual abuse material are using hashes or perceptual hashing technology. That’s the ability to match an image that you’ve actually seen before to a similar version of that image. Obviously that doesn’t work when you throw generative AI into the mix, because it’s an entirely new image, so you need to look at different strategies to detect generative AI images. There are certainly ways to do that. There are ways to detect CSAM using AI itself, though the accuracy of those tools is certainly not what it would be with these different hashing technologies. But again, it’s sort of an arms race, right? Technology is interesting in that way, in that you can use some of those tools for good. But sadly, the bad uses tend to stack on top of the good ones, so it’s very concerning.
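
For readers curious about the hash-matching approach Lloyd describes, here is a minimal sketch of perceptual hashing in Python using the Pillow and imagehash libraries. The library choice, file names, and distance threshold are assumptions for illustration, not the Centre’s actual tooling; the point is that near-duplicates of a known image match, while a brand-new AI-generated image produces an unrelated hash and does not.

```python
# Minimal perceptual-hash matching sketch (illustrative only).
# Requires: pip install pillow imagehash
from PIL import Image
import imagehash

# Hypothetical set of hashes of previously verified images
# (in practice these would come from a curated database).
known_hashes = [imagehash.phash(Image.open("known_image.png"))]


def looks_like_known_image(path, max_distance=8):
    """Return True if the image at `path` is perceptually close to a known image.

    Perceptual hashes tolerate resizing, re-compression, and small edits,
    but an entirely new (e.g. AI-generated) image yields an unrelated hash,
    which is why this approach misses novel synthetic material.
    """
    candidate = imagehash.phash(Image.open(path))
    # Subtracting two ImageHash objects gives their Hamming distance.
    return any(candidate - known <= max_distance for known in known_hashes)


print(looks_like_known_image("suspect_image.png"))  # hypothetical file name
```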

Alison Bussey
I think the bad uses sometimes seem to outpace the good uses, too. We’ve been talking about law enforcement and technology platforms and their challenges, but there also seems to be a gap between the emerging challenges presented by AI-generated CSAM and the legal frameworks designed to combat them. How do you think laws can effectively address the issue of AI-generated CSAM, considering the rapid pace of the technology right now?

Lloyd Richardson
Yeah, it’ll be interesting to see how this plays out. Canada, for example, has a very broad definition of what constitutes child sexual abuse material, so this material would already be covered under the Canadian Criminal Code. It’s not necessarily a problem within Canada in terms of labeling it as CSAM. Some other locations in the world have similar legislation. In the United States, it could be a little trickier, because their definition of child sexual abuse material is a lot narrower. I think that’s probably related to some of the free speech arguments and what have you in the United States, but it does certainly create issues when we have analysts sending takedown notices for this type of material. As I talked about before, if an analyst cannot tell whether that’s a real child or not… if a human being looks at an image, and it looks like a child, and it turns out to be AI-generated, I don’t see that as an issue in terms of sending a notice. But I think it will raise some questions about, “OK, is this a real child in the image?” And I think the United States, in particular, will probably need to be looking into how to address this within their legal framework.

Alison Bussey
So, kind of moving to another part of the collaboration efforts. When we look at tech companies, they possess the means and influence to detect and impede the distribution of harmful materials. Taking proactive measures not only safeguards users, especially children, but also upholds their ethical obligations. What responsibilities do tech companies bear in preventing the spread of AI-generated CSAM on their platforms? Or, like, what measures can they take?

Lloyd Richardson
So many. I would probably step the question back to what’s being done to stop the proliferation of non-AI-generated CSAM and say, not pointing to a specific entity but looking at tech in general, that that’s been a pretty big failure to date. So, to add on the idea of AI-generated CSAM, I truly don’t believe the issue has actually been taken that seriously, and I touched on it earlier: it’s one thing for a company to say, “we have zero tolerance for this type of material.” It’s easy to say that, certainly, but when you keep having the material show up on your service, I’ve got to ask the question, “how much are you actually investing in having this not end up on your system?”, because it strikes me that there’s a discord between the amount of money invested and what zero tolerance actually means. And I think it’s hard for some of these platforms when you’re dealing with a massive amount of user-generated content; margins are thin on moderation, and it’s a hard thing to do. So really, it’s not about zero tolerance, it’s about “how much of this can I allow on my system and still seem like a legitimate entity?”, right? I think that’s probably more the way to look at it, and I think the same thing falls into place when we’re dealing with AI-generated CSAM. Sure, it’s going to be a harder thing to detect because of those detection issues I talked about earlier, but tech is really good at innovating in other ways, in terms of making money. I think we can suggest that innovating in how to keep children safe is probably a pretty good place to spend money as well.

Alison Bussey
Yeah, I think most people listening to this would totally agree with that sentiment. But continuing on that note of responsibility, the role of tech companies in preventing the spread of AI-generated CSAM is closely tied to the diverse regulations that govern these issues worldwide. Regulations regarding AI-generated CSAM, or CSAM generally, can vary significantly from one country to another, as I think you touched on a little bit before. How do you think regulations regarding AI-generated CSAM differ from country to country? And do you have any examples of a country whose stringent regulations set it apart currently?

Lloyd Richardson
To be honest, I don’t know of any regulation related to AI-generated anything. I think we’re at a state where there’s very little regulation related to anything that sits on the Internet, and generative AI is a relatively new thing. So, I think governments are kind of behind the 8 ball a little bit on that, because who’s dictating what’s happening with AI in general right now? It’s those who have investment in the space. Look at what they were talking about before, a pause on what we’re doing in the AI space. It all came from the people who are developing it, right, and it’s more in their own interests than anything else. So, again, it points to governments needing to do a little bit more in this space, but it’s a hard job, certainly, because tech likes to move fast and break things. That’s the motto they use, and sadly, we’re seeing some of the spillover effect from that, and it’s certainly wider than just child sexual abuse. I mean, this is just one piece of the pie of the harms caused by this. So, I think in general we need to look at what harms are going to get caused. There’s the statement people make that technology is inherently neutral, and there is some truth to that, but it doesn’t mean you take no consideration of the technology you’re creating, what it’s able to do, and how you engineer it, right? A lot of those questions, I think, are sort of glossed over by the idea that “oh yeah, technology is always neutral,” and I don’t think that’s necessarily true.

Alison Bussey
It’s often described a little bit as the Wild West, and it really is when you look at laws and regulations and what you can do online versus what you could do in person. When we look at political bodies or policymakers, do you see any specific regulatory approaches that countries are taking to tackle this? Or are there different approaches, like some focusing on technology development while others emphasize legal measures?

Lloyd Richardson
I think there are certainly some good regulatory efforts afoot right now, not specifically related to generative AI, but related to online safety in general. You’re seeing that come out of the EU and the UK, and you’re also seeing fierce pushback by a lot of bodies not wanting this regulation. It’s an interesting group of essentially big tech plus privacy advocates that pushes back against any sort of regulatory approach to anything on the Internet whatsoever. So, I guess I’m heartened by the fact that something is happening. We have something happening in Canada as well right now. I think these are moves in the right direction, because we’ve had nothing for the last, you know, 25 years. So this is good, and I think we’ll probably see more in this space, but again, it’s going to be a hard road, because we’ve essentially gotten used to the Internet being the Wild West for the last 25 years. I like to look at it as a large failed techno-libertarian experiment, right? The Internet has provided some absolutely great things, but there’s been a failure to address the harms that it’s causing. We kind of just sweep those under the rug, so like I said, I’m really happy to see movement in other countries, as well as Canada, to get something on the books in terms of regulation.

Alison Bussey
So we’ve talked about the collaboration between law enforcement, policymakers, and technology platforms to kind of deal with this issue. In what ways do you think we can collaborate to ensure compliance with country-specific regulations and potentially assist in preventing AI-generated CSAM?

Lloyd Richardson
That’s a tricky one, because it goes back to the last question, where the Internet is sort of considered this globalized thing and sovereign law doesn’t apply. I think there needs to be a little bit of a reset on that, right, of the idea that you’re this universal entity and the laws of your own country don’t apply to you, even when you’re operating in your own country. And I think there’s a general sort of disdain for any sort of interference in that, right. You see that certainly with a company like Netsweeper that’s involved in filtering: you’re always juxtaposed against some authoritarian regime in terms of what you’re going to filter. So, say the Canadian government chose to filter things that are illegal within Canada; you actually see that right now, it happens for copyrighted material like sports streaming and what have you. But it’s only happening in this sort of niche space that involves a lot of money. And people tend to push back against that, because again, there’s the idea that sovereign law shouldn’t apply because the Internet is this weird thing that’s removed from law, and it’s global in nature, and I can do whatever I want. That sort of applies to the generative AI side of things as well, because you’re dealing with software that’s developed in other countries, and how do you enforce laws in that space? I’d say that policymakers and lawmakers have a lot of very hard work to do to sort out these issues.

Alison Bussey
So I just want to take a moment before we wrap up and say a huge thank you for joining us today. Are there any closing thoughts you’d like to leave the audience with on kind of what we talked about today? 

Lloyd Richardson
No, I think I probably got most of it out. Those were some good questions, and I was pretty straightforward with my views on this. So yeah, it was great. I really appreciate you having me on the podcast.

Alison Bussey
Awesome. Thank you so much for joining us. 

Lloyd Richardson
Thank you, Alison.