A growing number of AI-powered mental health apps – from mood trackers to chatbots that simulate conversations with therapists – are becoming available as alternatives to mental health professionals, helping to meet demand for care. These tools promise more affordable and accessible support for mental well-being. But when it comes to children, experts are urging caution.
Many of these AI apps are aimed at adults and remain unregulated. Yet discussions are emerging around whether they could also be used to support children’s mental health. Dr Bryanna Moore, Assistant Professor of Health Humanities and Bioethics at the University of Rochester Medical Center, wants to ensure that these discussions include ethical considerations.
“No one is talking about what is different about kids – how their minds work, how they’re embedded within their family unit, how their decision making is different,” says Moore in a recent commentary published in the Journal of Pediatrics. “Children are particularly vulnerable. Their social, emotional, and cognitive development is just at a different stage than adults.”
There are growing concerns that AI therapy chatbots could hinder children’s social development. Studies show that children often see robots as having thoughts and feelings, which could lead them to form attachments to chatbots rather than building healthy relationships with real people.
Unlike human therapists, AI doesn’t consider a child’s wider social environment – their home life, friendships, or family dynamics – all of which are crucial to their mental health. Human therapists observe these contexts to assess a child’s safety and to engage the family in therapy. Chatbots can’t do that, which means they could miss vital warning signs or moments when a child needs urgent help.