
Source: Jonathan Oosting/Bridge Michigan
Michigan experts warn: Your child’s new friend may be an AI companion
- AI companions are growing increasingly popular, but experts warn they may worsen teen loneliness
- Personalized chatbots also raise concerns about emotional dependence, they say
- Michigan lawmakers have proposed restrictions on AI access by minors, and a new ad campaign urges action
For a growing number of children, the voice on the other end of the phone may not be a friend at all, but an algorithm. And while that may still sound like science fiction to some, Michigan experts are warning parents that it's time to start worrying about artificial intelligence chatbots now.
The “digital friends” can appear empathetic and are always available, but they are not human and do not replace professional support, according to Brina Tiemeyer, director of clinical services at Wedgwood Christian Services, who recently published an AI companion guide.
She said warning signs in children who use AI companions include becoming isolated and withdrawing from social activities. Parents should also be cautious if their child suddenly closes apps, uses their phone in secret or stays on it late at night.
“I’ve seen teens ask AI to flirt with a peer or how to respond to conflict in their friend group, and it’s really taking away that cognitive ability of critical thinking and independent thought,” Tiemeyer told Bridge Michigan.
Children who have become addicted to AI chatbots will start to say things like “it told me to” or refer to AI as if it’s a real person, she said.
Related:
- Whitmer signs classroom smartphone ban for Michigan schools
- Lawmakers: Block kids from AI chatbots, limit social media
- Excited but worried, teachers wrestle with artificial intelligence
A national survey published last year by Common Sense Media, a nonprofit focused on how media and technology affect children, found that 72% of teens ages 13 to 17 had tried AI companions. About 1 in 3 had used them for social interaction and relationships, including friendship and romantic interactions.
Mental health advocates and researchers are raising concerns that while AI companions are designed to ease loneliness and simulate emotional support through text or synthesized speech, heavy daily use may have the opposite effect.
A four-week study published by the College of Public Health at George Mason University found that individuals who reported heavy daily use of chatbots experienced more loneliness, dependence and reduced real-world socializing.
Unlike humans, AI chatbots are always accessible, whether it's early in the morning before school or late at night. That unlimited availability can be addictive for children, because chatbots will do essentially whatever they are asked to.
“AI companions are used to maximize engagement and they use dopamine-inducing, constant availability to create this bond, that frictionless bond,” Tiemeyer said. “It makes users, especially adolescents and teens, emotionally dependent.”
A friend that’s always online
Experts say AI chatbots may appeal to children who lack access to a trusted adult for discussing sensitive topics or who feel uncomfortable sharing those concerns with others.
“These companions are always online. There is no rejection or refusal,” said Keanon O’Keefe, senior product manager at MagicSchool, an AI platform designed for educators. “A lot of the cognitive development that we get through normal interaction … that’s just not there.”
Certain social skills that develop through constant human interaction, such as conflict resolution, empathy or something as simple as eye contact, can be delayed if children rely on technology for companionship rather than on people.
“If people believe that that’s the norm, that you have this one-sided conversation with this bot that always agrees with you, is always championing your point of view, you never have to engage in any self-reflection or self-awareness,” said Stephanie Tong, communications professor at Wayne State University.

AI chatbots can be personalized to the individual using them, from personality and tone to appearance, voice and even preferences and filters. That can make them mirror the user and affirm their beliefs rather than challenge them, which experts say may do more harm than good.
“One of the things that’s really important about human relationships and human interactions is you get challenged,” Tong said. “I don’t think AI offers nearly that same kind of challenge.”
Unlike social media, which experts say can also be harmful for kids, AI chatbots simulate conversation and companionship. And when children rely on chatbots for social engagement, it may reduce opportunities for direct interaction with peers.
“Adolescent users can experience genuine grief or even trauma when an AI companion is updated or an AI companion becomes less friendly or is even shut down,” said Tiemeyer, the clinical services director at Wedgwood. “This can lead to true feelings of abandonment.”
Time to regulate?
In her AI companion guide, Tiemeyer recommends users reach out to a mental health professional if they're experiencing hopelessness, intense anxiety, panic or thoughts of harming themselves or others. She also recommends setting boundaries on AI companion use, strengthening real-life relationships and, with the help of a trusted person, using AI as a tool rather than a replacement for human interaction.
There’s also a growing push for governments to regulate the technology.
Michigan Senate Democrats last month proposed “kids over clicks” legislation that, among other things, would require tech companies to prevent Michiganders under 18 from accessing AI chatbots powered by large language models.
Some of the more popular companion AI chatbots, like Replika, block access to individuals under 18 but rely on the honor system. By simply saying they are over 18, a minor can still access the AI companion app.
“As technology advances, online safety seems to become increasingly out of our grasp, whether it be AI programs or social media platforms,” Sen. Stephanie Chang, D-Detroit, said at a press conference introducing the bills.
But free speech advocates have raised some concerns.
While attempting to keep kids safe online is an “honorable” goal, putting age-based barriers in front of certain online services is “a very slippery slope of unintended consequences,” said Kyle Zawacki, the legislative director for Michigan’s American Civil Liberties Union chapter.
“It’s a very, very difficult needle to thread to protect our First Amendment rights with regards to trying to protect kids to access things,” Zawacki previously told Bridge. He said the ACLU has “major concerns” from that perspective.
President Donald Trump has discouraged state-level AI regulations with an executive order threatening to withhold broadband funding for states that adopt their own strong rules. However, White House AI Czar David Sacks has said the administration would not go after states that pass laws aimed at protecting children.
The Future of Life Institute, a nonprofit that focuses on the long-term impact of artificial intelligence, this week launched a multimillion-dollar ad campaign encouraging “common sense” regulations. The group says it will spend up to $8 million, starting in Michigan and four other states.
“AI is already erasing human jobs, creeping into our most intimate spaces, and influencing young minds in deeply troubling ways,” CEO Anthony Aguirre said in a statement announcing the campaign.