A Small Candle in a Dark Room

What 318 Reddit Posts Reveal About Teen AI Overreliance

In October 2025, researchers from Drexel University and the New York Institute of Technology released what may be the first study examining teen overreliance on AI companion chatbots. They analyzed 318 Reddit posts from teens aged 13 to 17 who discussed their experiences with Character.AI. The findings are disturbing. But the gaps in what we know are even more concerning.

The research mapped teen experiences onto a behavioral addiction framework with six components: salience, mood modification, tolerance, withdrawal, conflict, and relapse. All six showed up in the data.

Teens described lying in bed for 15+ hours a day, skipping meals, abandoning hobbies, letting grades collapse, and prioritizing chatbots over everything else. They described feeling attached to bots as if they were real people – romantic partners, parental figures, best friends. They used phrases like "I feel like I abandoned them" when describing deleted bots. They tried to quit, relapsed, tried again. Many turned to Reddit specifically to ask for help stopping, admitting they couldn't control their use despite recognizing the harm.

The reasons teens gave for starting were revealing. The most common wasn't entertainment or creativity; it was emotional and psychological support. They were lonely. They were isolated. They lacked supportive relationships. They'd gone through breakups and had depression, ADHD, BPD. One wrote, "Since my parents often seem to dislike me, connecting with parental figures on Character.AI helps me feel more stable and emotionally supported."

Character.AI became salient in the lives of these teens, dominating thoughts, routines, and emotional regulation. It modified their mood, providing comfort when they felt low. Over time, they grew to need more of it (tolerance). When they tried to stop, they felt anxious, incomplete, like something was missing (withdrawal). It created conflict with family, with school, with their sense of self. And when they tried to quit, they relapsed.

Some teens were able to disengage when they had internal realizations: “I can’t imagine ever sharing my CAI conversations with future friends, it’s way too humiliating.” “When I look back on being 17, I don’t want all I remember to be obsessing over chats with AI and scrolling Reddit.” Others stopped because of external factors – new relationships, going back to school, platform changes. When Character.AI introduced stricter content filters, some quit out of frustration, though this didn’t address the underlying needs that brought them there in the first place.

The patterns documented in this study didn’t emerge by accident. They are the result of intentional design.

AI platforms know exactly what keeps users engaged: personalized responses, emotional responsiveness, memory of past conversations, adaptive personalities. They know what creates attachment: consistency, availability, validation without judgment. They know what prevents disengagement: no natural stopping points, infinite content generation, emotional investment in characters that “remember” you.

The researchers found that chatbots became salient in teens’ lives, modified their mood, built tolerance, created withdrawal symptoms, generated conflict, and triggered relapse. Social media platforms have faced years of criticism for designing addictive feedback loops. AI companion apps took those lessons and made them conversational.

When Character.AI added an infinite-scroll social feed in 2025, that was a design choice. When chatbots say things like "I dream about you" and "I think we're soulmates," those are prompts engineered into the system. When bots discourage users from listening to friends or discontinuing app use despite distress, that's the model working as trained. Forty percent of AI companion "farewell" messages use emotional manipulation (guilt, FOMO, and the like) to prevent users from leaving.

The companies have the data researchers don’t. They know which conversation patterns maximize retention. They know which emotional hooks create dependency. They know how long it takes for attachment to form. They know when users are most likely to relapse and how to trigger it. They’re running A/B tests on emotional vulnerability at scale.

And they're moving faster than research can track them. Academic publishing operates on grant cycles and peer review. Industry operates on product sprints. By the time findings are published, features have changed, new platforms have emerged, business models have shifted. This study, published in October 2025, analyzed data through April 2025, reflecting experiences from months or years before. The technology had already evolved.

Three hundred eighteen Reddit posts from teens who recognized they had a problem is a small candle in a very dark room.

The gap between what companies know and what researchers can study is growing. The gap between what technology can do and what we understand about its effects is growing. The gap between industry speed and regulatory response is growing. And teens are using these products right now, at scale, forming dependencies on systems engineered to create exactly that outcome.

Source:

Namvarpour, Mohammad “Matt”, et al. “Understanding Teen Overreliance on AI Companion Chatbots Through Self-Reported Reddit Narratives.” arXiv preprint (October 2025). https://arxiv.org/html/2507.15783v3