Artificial Intelligence

Artificial Intelligence: A Vital Aid In Keeping Children Safe Online

Kay Hall - July 31, 2021

Artificial intelligence is fast becoming an important tool in cybersecurity, according to experts at cybersecurity firm Balbix, owing to its ability to speedily analyze millions of events and identify many different types of threats – from malware right through to grooming behavior against children. Today, many efforts are being made to harness the strengths of this technology to keep children safe. One AI-powered startup, Jiminy, is showing the extent to which machine learning algorithms can alert parents to a plethora of toxic behaviors – from cyberbullying to body shaming, game addiction, and even excessive use of online devices.

Providing A Crucial Feedback Loop
Jiminy essentially allows parents to step in and protect their children when needed. CEO Tal Guttman likens it to a parent helping a child cross a crowded street. Often, when children are left to their own devices behind a digital screen, parents are unable to identify when they need to intervene. AI can address this blind spot by alerting parents to potentially dangerous patterns and behaviors a child engages in while using apps, sending messages, or simply browsing the Internet.

Children And The Dark Side Of The Web
As many as one in five teens who regularly use the Internet reports having received unwanted sexual solicitation via the web, and younger children can also be affected. Online grooming begins on popular networks, games, or apps that parents are usually comfortable letting their children use. Some high-profile cases have even been discovered in games considered ‘safe’ – including Minecraft and Roblox. Parents should be aware of the signs of online grooming – for instance, children may become secretive about their online use, or change tabs when parents approach their computer. Often, however, parents may not even notice these signs, which is where AI plays a key role. AI apps and programs share a child’s specific messages, photos, and URLs with parents, who can then take up the issue with their children.

Finding Open Data Sources
Save the Children recently teamed up with Omdena – a company specializing in the use of AI for positive initiatives. Over 50 data scientists were entrusted with the task of finding online data with the help of machine learning algorithms. The team discovered a plethora of open data sources that were previously hidden. These sources revealed chats between groomers and minors that could, in principle, serve as evidence in courts of law. The problem, of course, is that these conversations are often hidden on the dark web and cannot, therefore, easily be used to convict criminals. The team built an anti-grooming chatbot, using natural language processing on available grooming information. This enabled the team to develop means of identifying (very early on) the type of language that can lead to grooming further along. Therefore, even before grooming takes place, chats can be shut down or flagged. This technology is being developed as a free resource that will be made available to parents across the globe.
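To give a sense of the general idea, here is a deliberately simplified sketch of flagging risky language in a chat. This is purely illustrative and is not the Omdena/Save the Children system: the phrases, scores, and threshold below are invented for the example, whereas real detectors use natural language processing models trained on labeled grooming data.

```python
# Illustrative toy only: score chat messages against a small list of
# risky phrases and flag the conversation once a cumulative threshold
# is crossed. Phrase list, weights, and threshold are invented here.

RISK_PATTERNS = {
    "our secret": 3,
    "don't tell your parents": 3,
    "how old are you": 1,
    "send me a picture": 2,
    "are you alone": 2,
}

RISK_THRESHOLD = 3  # cumulative score at which a chat is flagged


def score_message(message: str) -> int:
    """Return a risk score for a single chat message."""
    text = message.lower()
    return sum(score for phrase, score in RISK_PATTERNS.items() if phrase in text)


def flag_chat(messages: list[str]) -> bool:
    """Flag a conversation once its cumulative risk score crosses the threshold."""
    total = 0
    for msg in messages:
        total += score_message(msg)
        if total >= RISK_THRESHOLD:
            return True
    return False
```

A keyword list like this would be trivially easy to evade, which is exactly why the production systems described above rely on trained language models rather than fixed patterns – but the flag-early-before-harm-occurs logic is the same.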

AI is achieving things that human beings simply cannot – including the identification of grooming before it has fully manifested itself. This is exciting news for parents, who cannot monitor their children’s online activity 24/7. Through AI, parents can receive alerts, as well as copies of messages and images shared by their children, so they can have a better idea of how their children are consuming and sharing online information.


Kay spent several years working as an e-safety officer, dedicated to educating both children and adults on enjoying cyberspace in safety. Since starting a family of her own, she has returned to her first love of writing, and contributes to a number of websites on the topics that matter to her.