Sway.ly Launches: AI to Protect Children from Harmful Social Media in the UK and US

Childhood is under siege. From violent clips and misogynistic rants to endless filters flogging toxic beauty standards, today’s kids are being force-fed a digital diet that would make any parent gag. Now, a new app is stepping into the fray, using AI to protect children from harmful social media in a way that’s less Big Brother, more big brotherly advice.

Sway.ly promises to help families navigate the relentless tide of online junk food for the mind. The app doesn’t just slap on bans or firewalls. Instead, it flags harmful posts across TikTok, Instagram and YouTube, explains why they’re problematic, and even suggests who to follow—or unfollow—to retrain the feed.

And the need is glaring. Fresh research commissioned by Sway.ly lays bare the scale of the problem:

  • 77% of children say social media negatively affects their physical or emotional health.
  • 72% of UK children report seeing content in the past month that left them upset, sad, or angry.
  • The worst offenders? Fake news (24%), hate (23%), violence (22%), body image pressure (22%), and over-sexualised content (20%).
  • Parents’ top fears: abuse (38%), hate (33%) and adult content (32%).
  • A brutal divide: 35% of neurodivergent children have faced cyberbullying, compared with 20% of their neurotypical peers.

In other words, while parents dread the big-ticket horrors, their kids are being chipped away at by something more insidious—what the Sway.ly team dubs “longitudinal overexposure”. It’s the constant drip-drip-drip of algorithm-fed poison that wears young minds down.

“Not what they’re searching for—it’s what they’re being served”

Mike Bennett, Sway.ly Co-Founder, CEO and father of three, puts it bluntly: “We’ve entered an era where children’s sense of self and reality is being reshaped – not by what they’re searching for, but by what they’re being served. Instead of fear-driven censorship, we need education, tools and family-first technology that equip young people to navigate the online world.

“The Online Safety Act is a step in the right direction, with its aim to protect children online, but it falls short in the same way many technology solutions do – by focusing too heavily on banning rather than empowering. Children today are highly tech-savvy; many use VPNs, second phones, or fake profiles to get around restrictions. That’s why we need to move beyond blanket bans and focus on educating children to make better digital choices themselves.”

Daniela Fernandez, Sway.ly’s Chief Strategy Officer and mother of one, adds: “Our research shows the real danger isn’t just in the obvious stuff – it’s the cumulative impact of being exposed, over and over again, to toxic, warped and unrealistic content. We call this longitudinal overexposure, and it is relentless. It’s not violent enough to block, but it’s quietly shaping how young people see themselves and the world.

“Most parents simply can’t keep up with the pace – the language, the trends, the sheer volume of content is constantly shifting. That’s why we built Sway.ly: to keep up and help decode what kids are really seeing, and to give parents the insight and tools they need to respond with confidence. It’s about rebuilding trust between kids and parents – and using AI for good, to support smarter habits and healthier minds.”

A new approach to digital parenting

Where most apps police children’s feeds with blunt-force bans, Sway.ly tries something different. It encourages conversation, hands back some agency, and offers “trust scores” to help kids and parents judge content together. It’s designed for guidance, not punishment.

Dr Catherine Knibbs, psychotherapist and online harms consultant, underlined the point: “Removing harmful content isn’t enough – harmful material is too varied, relentless, and adaptive to simply ban. The most powerful tool we have is education. When we create safe spaces for children to talk about what they see online, we empower them to cope, reflect, and choose. Technology solutions like Sway.ly matter because they focus on equipping families to navigate – not avoid – the digital world with resilience and trust.”

The health toll

The numbers don’t just point to digital discomfort—they map a national health issue. According to Sway.ly’s study, 66% of children and 68% of parents report at least one physical or emotional symptom linked to social media use. From tiredness (28%) and sore eyes (25%) to headaches (18%), sleep disruption (20%), and rising anxiety (14%), the cost of scrolling is becoming impossible to ignore.

Neurodivergent children are hit harder still, facing greater risks of cyberbullying and other online harms. Their parents also report heavier emotional fallout, underscoring the urgent need for family-first tools that adapt as fast as the platforms do.

AI to protect children from harmful social media

Built with backing from Innovate UK, psychotherapists, and AI specialists, Sway.ly is marketed as a constantly updating tool that can decode shifting memes, slang, and algorithms faster than parents—or static filters—ever could.

At around £2.60 per user per month, the Family Plan is pitched as “less than the price of a coffee, for peace of mind.”

The mission is simple: to use AI to protect children from harmful social media without walling them off from the digital world entirely. Instead, Sway.ly promises to educate, empower, and—perhaps most importantly—rebuild the fractured trust between kids and parents trying to navigate online life together.

For more information and to download Sway.ly, visit www.Sway.ly.