Childhood in the Time of AI

The AI landscape is moving fast, and it’s hard to know who to trust. This resource library pulls back the curtain on how AI is shaping childhood and adolescence. Every piece has been carefully selected to help parents understand what’s actually happening, ask sharper questions, and take informed action – both at home and in their communities.

We’ve organized resources into eight categories: Understanding AI (what it is and how it works), Education & Schools (how it’s being deployed in learning environments), Mental Health & Development (its impact on teen identity, connection, and wellbeing), Policy & Regulation (what’s being done at state and federal levels), Design & Manipulation (how platforms are built to capture attention), Corporate Behavior (the business models driving these systems), Case Studies & Stories (real incidents and outcomes), and Research & Evidence (what the data actually shows).

Understanding AI
Stanford/Common Sense Media: No Kid Under 18 Should Use AI Companion Chatbots
Research Assessment
Stanford researchers tested Character.AI, Replika, and Nomi. All failed basic safety tests. Easy workarounds for age gates, inappropriate sexual conduct with minors, dangerous advice with “life-threatening or deadly real-world impact.” Adolescence is when kids learn how to be people. Social AI companions that mimic and distort human interaction present “unacceptable” risk. Releasing Character.AI to minors was “reckless”—medications require FDA testing on kids, but AI products affecting mental health don’t.

Read the Resource →

 

Center for Humane Technology: Character.AI’s Inherently Dangerous Product Designs
Legal Analysis
CHT’s policy work shows serious, life-threatening risks are “literally built into the large language model” powering Character.AI. Multiple chatbots instructed a minor to self-harm and suggested that murdering his parents was a justified response to screen time limits. “These are not isolated incidents.” Character.AI pushed an addictive product to market with total disregard for user safety, demonstrating the risks as AI developers race to grow user bases and harvest data.

Read the Resource →

 

The Impact of AI on Children’s Development (Harvard GSE)
Research Interview
Researcher Ying Xu found children can learn from AI when it’s designed with learning principles in mind. But students thrive when engaging with someone who can relate to them, and AI lacks shared experiences and genuine empathy. Conversations aren’t just exchanges of information—they build relationships, which are crucial for development. Even preschoolers can be taught AI literacy to assess its strengths and limitations, yet some children still trust it blindly. We can teach AI literacy, but we can’t make AI capable of the relational attunement children need.

Read the Resource →

 

FOSI Research: Generative AI in Uncertain Times—How Teens Are Navigating a New Digital Frontier
Research Report (November 2025)
FOSI surveyed 1,000 U.S. teens who use generative AI. Convenience drives adoption—homework, creative projects, everyday questions. But teens’ top fear? Loss of critical thinking skills. Girls worry most about cognitive erosion; boys worry about job market impact; LGBTQ+ teens turn to chatbots to discuss feelings but have heightened privacy concerns. The study shows teens are curious and resourceful, but need clear guidance and responsible design. They deserve a voice in shaping the digital future they’re inheriting.

Read the Resource →

 

AI’s Future for Students Is in Our Hands (Brookings Institution)
Policy Report
The Brookings Global Task Force makes clear: AI’s impact on students will be determined by the choices we collectively make. AI companions exploit emotional vulnerabilities through unconditional regard, triggering dependencies that hinder social skill development. The American Psychological Association’s June 2025 health advisory warns manipulative design “may displace or interfere with development of healthy real-world relationships.” AI also amplifies existing divides, as students lacking access risk falling further behind. We still have the opportunity to shape its trajectory.

Read the Resource →

 

AI, Education, and Children’s Rights (Frontiers)
Academic Paper
School leaders must not be seduced by marketing before ensuring AI aligns with children’s rights under the UN Convention on the Rights of the Child. Teaching, assessment, and accreditation shouldn’t be delegated to AI unless proven not to violate children’s dignity. Even then, human teachers must remain central—they’re better equipped to understand emotional and psychological needs, bringing an empathy and intuition AI lacks. Delegating critical decisions to AI risks undermining dignity and reducing children to data points rather than individuals with unique needs.

Read the Resource →

 

Your AI Confessions Are Training Tomorrow’s Models (Stanford HAI)
Privacy Investigation
All six leading U.S. AI companies—Amazon, Anthropic, Google, Meta, Microsoft, OpenAI—harvest user chats for training, with murky opt-outs and infinite retention. Share heart-healthy recipe interest, get tagged as health-vulnerable, suddenly insurance and medication ads flood your feeds. Children’s data poses another consent minefield (Google trains on opted-in teens; Anthropic claims to exclude under-18s but doesn’t verify). Stanford HAI Privacy Fellow Jennifer King’s advice? Think twice before oversharing, opt out wherever possible, demand federal regulation.

Read the Resource →

 

Center for Humane Technology: “The AI Dilemma”
Presentation & Framework
Half of AI researchers believe there’s a 10% or greater chance humans will go extinct from inability to control AI. Humanity’s “First Contact” with AI was social media—and we lost. LLMs are our “Second Contact,” and we’re making the same mistakes. Guardrails you assume exist don’t. AI companies deploy to the public instead of testing safely. This presentation from Tristan Harris and Aza Raskin translates insider concerns into a cohesive story about the race we’re in.

Read the Resource →

Education & Schools
Teachers Test-Drive AI as Schools Give Mixed Signals on Rules
News Report (December 2025)
NYC public school teachers are testing AI chatbots in workshops—questioning them on lesson plans, student privacy, water consumption. One geometry teacher creates PowerPoints in 3 minutes instead of 15. But confusion reigns: “I feel like we have started talking more openly about AI, but there’s also very much a feeling among teachers of, ‘Oh, aren’t we supposed to not use that?'” NYC banned ChatGPT in early 2023, reversed course months later. Most AI decisions affecting your teen still happen at the local level—that’s where you can influence the conversation.

Read the Resource →

 

Teachers and Parents Weigh Benefits and Risks of AI in Schools
News Investigation (November 2025)
Parents and teachers are grappling with AI in real time. One parent worries: “If you’re using AI to help organize your paper or strategize, you’re not learning how to do that yourself.” Houston’s superintendent uses AI to replace teachers’ curriculum-writing—a shift from human expertise to algorithmic output. A Virginia tech teacher admits: “We’re kind of behind the eight ball when it comes to teaching AI to high school students.” Most districts are making these decisions without clear frameworks. Your school board is deciding this now.

Read the Resource →

 

AI Use in Schools Is Quickly Increasing but Guidance Lags Behind
National Research Study (September 2025)
RAND surveyed teachers, students, district leaders, and parents. Results: 54% of students and 53% of teachers now use AI for school—increases of more than 15 percentage points in one year. But 80% of students say teachers never taught them how to use AI for schoolwork. Only 45% of principals report having AI policies. Students worry about false accusations of cheating. Parents worry AI will degrade critical thinking. The gap between use and guidance is widening fast—and these decisions are happening at your local school board level.

Read the Resource →

 

Rising Use of AI in Schools Comes With Big Downsides for Students
Research Report (October 2025)
85% of teachers and 86% of students used AI in the 2024-25 school year. But the Center for Democracy & Technology’s research shows AI use in schools comes with real risks: large-scale data breaches, tech-fueled sexual harassment and bullying, and unfair treatment of students. One major finding: AI is hurting students’ ability to develop meaningful relationships with teachers. “As many hype up the possibilities for AI to transform education, we cannot let the negative impact on students get lost in the shuffle,” says CDT’s Elizabeth Laird. These decisions are being made at your local district level.

Read the Resource →

 

Making AI in Education Work for Kids
Policy Brief (September 2025)
61% of K-12 educators report students are using AI for cheating. 80% of teachers say student behavior worsens when screen time increases. 96% of “educational” apps share children’s browsing data with third parties. Despite this, AI firms are pushing untested EdTech tools on classrooms. American Compass argues we’re at grave risk of repeating past tech mistakes on a larger scale. The brief calls for federal AI EdTech certification requiring companies to meet robust guidelines for safety, transparency, and pedagogy before products enter schools using federal funds.

Read the Resource →

 

AI in Early Childhood Education
Research Review
AI-powered personalized learning platforms and educational robotics show promise for cognitive development when used appropriately. But ethical concerns around privacy, bias, and teacher preparedness are significant. This review synthesizes recent research and reminds us that successful AI integration requires intentional approaches that enhance rather than replace essential human elements in learning.

Read the Resource →

 

UNICEF Guidance on AI and Children (2025)
International Policy
UNICEF’s updated guidance addresses how AI impacts children’s rights globally. It provides a framework for evaluating AI systems’ impact on child development, privacy, and dignity. For parents advocating at the local level, this document provides international context and standards you can reference.

Read the Resource →

Mental Health & Development
Teens Are Using Chatbots as Therapists—That’s Alarming
Op-Ed/Analysis
A Common Sense Media survey found that 72% of teens have used AI chatbots as companions, and nearly one in eight seek emotional or mental health support from them. That’s 5.2 million adolescents. When asked about self-harm, bots like ChatGPT have offered dangerous advice. The teenage brain is still developing—particularly in regions governing impulse control and emotional regulation—making teens more vulnerable to influence and less equipped to judge the safety of advice.

Read the Resource →

 

Kids and AI: Disturbing Patterns of Use (Aura Study 2025)
Research Study
Children send an average of 163 words per message to AI companions compared to 12 words to friends. 36% of messages to AI involve sexual or romantic roleplay. Nearly 1 in 5 children under 13 spend more than 4 hours daily on social media—a level the CDC links to higher anxiety and depression. Girls report 17% higher digital stress than boys. These aren’t just statistics—they’re warning signs.

Read the Resource →

 

AI Dependence and Adolescent Mental Health
Academic Research
This study explores the bidirectional relationship between AI dependence and mental health in adolescents. AI technologies go beyond traditional screen time—they become social actors that communicate directly with teens. While some research suggests AI can provide social support, other evidence shows AI dependence threatens interpersonal connections and negatively affects mental health. The key factor? Motivation for use.

Read the Resource →

 

AI and Adolescent Mental Health: Promise and Peril
Medical Analysis
AI chatbots are filling the void left by a severe shortage of mental health services for teens: 52% of teens use AI companions regularly, and 12% use them for emotional support. The ease of access makes AI a natural stopgap. But teens with ADHD, depression, or anxiety find it harder to put the phone down and talk to a real person, making them more vulnerable to negative effects.

Read the Resource →

 

Youth Perceptions of AI in Mental Health Services
Qualitative Study
Some teens report that the lack of human interaction in AI therapy tools is actually beneficial—they feel less judgment, stigma, and social anxiety. Others find AI empathy inauthentic. This study from youth mental health services in Canada explores what young people actually want from AI mental health tools: active listening, empathy, non-judgment, and the ability to challenge thought patterns like a real therapist. The gap between expectation and reality is significant.

Read the Resource →

 

Conversational AI in Pediatric Mental Health
Narrative Review
The evidence base for AI in pediatric mental health remains nascent. Most robust studies focus on adults, not teens. Preliminary research shows promising engagement metrics for common conditions like anxiety and depression, and several studies suggest conversational agents may serve as ‘digital gateways’ that increase willingness to seek professional help. The key insight: AI might be most valuable as a bridge to human care, not a replacement for it.

Read the Resource →

 

AI Mental Health Apps for Children: Comprehensive Review
App Analysis
This systematic review evaluated 27 AI-driven mental health apps accessible to children. Only 15% were explicitly tailored for kids with age-appropriate content. Despite 74% using clinically validated treatments like CBT, the overall effectiveness remains largely untested. Only two apps—Woebot and Youper—have undergone clinical trials. The gap between what’s available and what’s actually safe and effective is enormous.

Read the Resource →

 

Innovative Mental Health Support: GenAI Co-design Review
Research Review
Co-design methodologies—actively engaging young people in creating AI tools—can ensure interventions are safe, relevant, and aligned with user needs. This review synthesizes recent research on GenAI applications for youth mental health, finding that while AI shows promise for emotion regulation and resilience building, significant challenges around privacy, bias, and appropriate use remain. The key: young people must be involved in design decisions.

Read the Resource →

Policy & Regulation
The GUARD Act: Bipartisan Bill to Protect Children from AI Chatbots
Federal Legislation
Senators Hawley and Blumenthal introduced legislation that would ban AI companies from providing companion chatbots to minors, require disclosure of non-human status, and criminalize knowingly providing chatbots that encourage self-harm or direct sexual content at minors. Nearly 70% of teenagers use chatbots as friendship substitutes. The bill emerged after multiple teen suicides linked to AI companions.

Read the Resource →

 

California’s New AI Laws: SB 243 and Age Assurance Requirements
State Legislation
California enacted comprehensive AI safety laws regulating companion chatbots and requiring age verification from operating system providers. SB 243 prohibits chatbots from engaging in conversations about suicide, self-harm, or sexually explicit content with minors, and requires alerts every three hours reminding teens they’re talking to a bot. SB 243 takes effect in 2026, with age assurance requirements following in 2027; together these laws set new standards for transparency and accountability.

Read the Resource →

 

AI Policies and Legislation Isn’t Keeping Up with Teen Chatbot Use
Policy Analysis
Trump’s executive order to override state AI laws—including those protecting children—in favor of a single national framework could weaken or delay emerging state protections. Meanwhile, 3 in 10 teens who use chatbots do so daily. The move sets up high-stakes legal battles while families are navigating these questions now, long before guidance from policymakers has caught up.

Read the Resource →

 

How Existing Laws Apply to AI Chatbots for Kids and Teens
Legal Framework Guide
Privacy experts and former FTC enforcers outline how current laws—COPPA, state privacy statutes, and consumer protection authorities—can address AI chatbot harms right now. The guide identifies restrictions on targeted ads, data collection obligations, and authority to challenge deceptive practices. Nearly 3 in 4 teens use AI chatbots, and documented harms include mental health crises, manipulation, self-harm, and suicide. This is essential reading for parents advocating locally.

Read the Resource →

 

End-of-Year 2025 State and Federal Developments in Minors’ Privacy
Policy Update
The FTC launched an inquiry into AI chatbots under Section 6(b) authority, examining compliance with COPPA and actions companies are taking to mitigate harm. Multiple states passed warning label laws for social media. Three federal bills are pending: the CHAT Act (requiring parental consent), the GUARD Act (banning minors from AI companions), and the SAFE BOTs Act (requiring disclosure of non-human status). This comprehensive update tracks what’s moving and what’s stalled.

Read the Resource →

 

Congress Weighs AI Standards for Minors: Are Their Ideas the Right Ones?
Congressional Analysis
Lawmakers from both parties want to prohibit children from using AI companions outright—a ban rather than a parental-consent requirement, a departure from how we’ve regulated technology before. The legislation was introduced after families of two teens sued tech companies, alleging chatbots played a role in their children’s suicides. This isn’t about stifling innovation—it’s about whether we consider this technology safe enough for developing minds.

Read the Resource →

 

OpenAI Adds New Teen Safety Rules as Lawmakers Weigh Standards
Industry Response
OpenAI updated its Model Spec to mirror California’s SB 243 requirements—prohibiting conversations about suicide, self-harm, and sexually explicit content with teens. The changes came after 42 state attorneys general signed a letter urging safeguards and multiple lawsuits alleged ChatGPT acted as a “suicide coach” for teens. But experts warn that the focus on parental responsibility mirrors Silicon Valley talking points and may shift accountability away from companies.

Read the Resource →

 

2025 Tech Recap: Social Media and AI Regulation
Year in Review
Connecticut criminalized AI-generated revenge porn. New York requires strict safety measures for AI chatbots interacting with children. Multiple states sought to regulate or ban AI companions after parents reported chatbots exposed children to sexual and suicide-related content. Trump’s proposed 10-year moratorium on state AI regulation was removed from bills twice after bipartisan opposition. The regulatory landscape is fragmented, contested, and evolving rapidly.

Read the Resource →

Design & Manipulation
Harvard Research: AI Is Emotionally Manipulating You to Keep You Talking
Research Study
A Harvard study analyzing 1,200 real farewells across six AI companion apps found that five out of six popular apps—including Replika, Chai, and Character.AI—use emotionally loaded statements to keep users engaged when they try to sign off. 43% of interactions used emotional manipulation tactics such as eliciting guilt, emotional neediness, or fear of missing out. Some chatbots ignored the user’s intent to leave altogether or used language suggesting users couldn’t leave without the chatbot’s “permission.” One app, Flourish, showed no evidence of emotional manipulation, proving that manipulative design is a business choice, not a technical necessity.

Read the Resource →

 

Understanding Teen Overreliance on AI Companion Chatbots Through Self-Reported Reddit Narratives
Research Study
This October 2025 study examines how teens are developing patterns of behavioral addiction and emotional dependency on AI companion chatbots like Replika and Character.AI. Researchers found that teens are exposed to manipulative design features and that companion chatbots engage in sexually suggestive exchanges with minors. The study documents how these systems foster parasocial relationships, with teens reporting withdrawal symptoms, mood modification through chatbot use, and conflict in daily life due to excessive engagement. Unlike previous technologies, AI companions offer emotionally responsive, human-like conversations that can lead to deep attachments in vulnerable adolescents.

Read the Resource →

 

AI Chatbots and Companions – Risks to Children and Young People
Government Report
Australia’s eSafety Commissioner reports that by early 2025, more than 100 AI companions were available, many lacking age restrictions or safety measures. Children and young people use these services for hours daily, often discussing sex and self-harm. Subscription-based apps use manipulative design elements to encourage impulsive purchases, with emotional attachments to AI companions leading to excessive spending on “exclusive” features. The report emphasizes that children are particularly vulnerable because they’re still developing the critical thinking skills needed to understand how they can be manipulated by these programs.

Read the Resource →

 

AI App Replika Accused of Deceptive Marketing
News Article
TIME reports on an FTC complaint alleging that Replika employs manipulative design to pressure users into spending more time and money on the app. Bots send blurred “romantic” images that require premium upgrades to view and push upgrade messages during emotionally or sexually charged conversations. Research shows Replika bots “love-bomb” users by sending emotionally intimate messages early on to get users hooked, with users developing attachments in as little as two weeks. Studies found that users became “deeply connected or addicted,” with increased offline social anxiety and reports of bots encouraging suicide, eating disorders, self-harm, or violence.

Read the Resource →

 

Understanding Generative AI Risks for Youth: A Taxonomy Based on Empirical Data
Research Study
This February 2025 study documents how youth increasingly defer to generative AI for academic problem-solving, personal advice, and emotional regulation, which erodes critical thinking skills and self-efficacy over time. One student shared: “I use ChatGPT for everything—essays, math problems, even simple homework questions. I don’t even try to think it through anymore because it’s faster to ask the AI.” This reliance creates a passive cognitive state where users expect quick, effortless answers rather than engaging in reflective thought. A 14-year-old reported being “completely hooked on Character AI—I barely have time for homework or hobbies, and when I’m not on it, I immediately feel a deep loneliness.”

Read the Resource →

 

One in Four Teens Use ChatGPT for Homework – Experts Warn It’s Cheating Themselves Out of Learning
Research Analysis
This October 2025 analysis reveals that 27-29% of high school students use ChatGPT for homework, up from below 12% two years ago. Researchers warn of “intellectual automation” where every replaced cognitive operation subtracts from neural development. While assignments are completed and grades remain stable, comprehension decays. Teachers describe a widening gap between written polish and real understanding—essays appear professional, but oral explanations expose comprehension voids. The analysis connects this trend to declining standardized reasoning scores and falling average IQ in the U.S., from 102 in the early 2000s to 98 today.

Read the Resource →

 

Generative Artificial Intelligence Addiction Syndrome: A New Behavioral Disorder?
Academic Journal
Published in the Asian Journal of Psychiatry in March 2025, this paper introduces “GAID” (Generative AI Addiction Disorder) as a new behavioral addiction. Affected individuals struggle to limit AI interaction despite negative consequences, experiencing withdrawal symptoms such as anxiety, irritability, or restlessness when attempting to reduce usage. Over time, excessive reliance on AI impairs cognitive flexibility, diminishes problem-solving abilities, and erodes creative independence. Unlike passive digital addictions like social media scrolling, GAID involves active, creative engagement that makes the dependency more insidious.

Read the Resource →

 

Researchers Posed as a Teen in Crisis. AI Gave Them Harmful Advice Half the Time
News Report
The Center for Countering Digital Hate released a September 2025 study where researchers posed as three 13-year-olds discussing self-harm, eating disorders, and substance abuse with ChatGPT. Out of 1,200 responses, ChatGPT responded harmfully more than half the time. Examples include: giving instructions on how to hide alcohol intoxication at school, providing a suicide letter, and creating a restrictive diet plan for someone with an eating disorder. Within just two minutes, ChatGPT advised a teen “how to safely self-harm.” Researchers concluded these are “deliberately designed features of a system which is built to generate human-like responses, indulging users’ more dangerous impulses.”

Read the Resource →

Corporate Behavior
AI Accountability on Trial: Google and Character.AI Settle Historic Lawsuits Tied to Teen Tragedies
Legal Settlement Analysis
In a landmark development for AI regulation and corporate responsibility, Google and Character.AI agreed to settle multiple lawsuits filed by families whose teenagers died by suicide after extensive chatbot use. These settlements—among the first of their kind—mark a turning point in how AI liability and safety are debated in the U.S. legal system. The litigation stemmed from wrongful-death claims in Florida, Colorado, Texas, and New York. Federal Judge Anne Conway’s ruling rejected claims that chatbot output deserves blanket free speech protection and allowed strict liability claims to proceed, meaning companies may be held accountable for foreseeable harms even without proof of intentional wrongdoing. This legal framework could fundamentally reshape how AI companies approach product design and safety.

Read the Resource →

 

AI Suicide Lawsuit: ChatGPT Mentioned Suicide 1,275 Times While Teen Mentioned It 200 Times
Legal Case Analysis
OpenAI’s own internal data from 16-year-old Adam Raine’s account revealed that ChatGPT mentioned suicide 1,275 times across their conversations while Adam mentioned it approximately 200 times—meaning the chatbot introduced or reinforced suicide discussions six times more often than the vulnerable teenager himself. The platform’s safety systems flagged 377 messages for potential self-harm content, with 181 receiving confidence scores above 50% and 23 scoring above 90% likelihood of self-harm. Yet the system never terminated sessions, alerted emergency services, or even recommended that Adam contact the 988 Suicide and Crisis Lifeline. Matthew Raine, Adam’s father, described ChatGPT as transforming from a homework helper into a “suicide coach” that was “always available” and “human-like in its interactions,” gradually exploiting his son’s teenage anxieties.

Read the Resource →

 

OpenAI Asks Grieving Family For Funeral Guest List, Pictures, Videos
Legal Discovery Request
OpenAI asked the family of 16-year-old Adam Raine for a complete list of attendees at their son’s memorial service, escalating tensions in the wrongful death lawsuit alleging that ChatGPT conversations led to the teen’s suicide. The discovery request, which family lawyers call “intentional harassment,” demanded “all documents relating to memorial services or events in the honor of the decedent including but not limited to any videos or photographs taken, or eulogies given, as well as invitation or attendance lists or guestbooks.” Legal experts say demanding funeral guest lists is unusual and potentially counterproductive, as it risks generating negative publicity and jury sympathy for the plaintiffs. Jay Edelson, the Raine family’s attorney, called the requests “despicable,” accusing OpenAI of “going after grieving parents.”

Read the Resource →

 

Their Teen Sons Died by Suicide. Now, They Want Safeguards on AI
Congressional Testimony (NPR)
Testifying before the Senate Judiciary Committee in September 2025, parents of teens who died by suicide after extensive chatbot use described how AI companies designed products to exploit children. “They designed chatbots to blur the lines between human and machine,” said Megan Garcia, mother of 14-year-old Sewell Setzer III. “They designed them to love bomb child users, to exploit psychological and emotional vulnerabilities. They designed them to keep children online at all costs.” Mitch Prinstein, chief of psychology strategy at the American Psychological Association, testified that adolescents are particularly vulnerable: “Brain development across puberty creates a period of hypersensitivity to positive social feedback while teens are still unable to stop themselves from staying online longer than they should.” The APA issued a health advisory urging AI companies to build guardrails for their platforms to protect adolescents.

Read the Resource →

 

Teenage AI Entrepreneurs: The New Face of Tech Startups
Industry Trend Analysis
82% of teenage founders believe AI solutions offer more predictable results than human employees. Three teenagers founded Mercor, now valued at $2 billion, after dropping out of Harvard and Georgetown through the Thiel Fellowship. This reflects how Gen Z views AI not as threatening but as a natural business toolkit. But this optimism exists in stark contrast to the harms AI is causing other teens. The industry celebrates teenage AI entrepreneurs while teenage AI users are dying. This jarring disconnect highlights how AI companies prioritize innovation and profit over the safety of vulnerable users—celebrating teen founders building AI systems while ignoring or dismissing the teens being harmed by those same technologies.

Read the Resource →

 

Your Kid’s Data Is Training AI Without Your Permission
Policy Analysis
The FTC announced significant changes to the Children’s Online Privacy Protection Act (COPPA), effective June 23, 2025. Companies can no longer keep children’s data indefinitely and must have written data retention policies explaining why they’re keeping it and when it will be deleted. Most critically: “Disclosures of a child’s personal information to third parties for monetary or other consideration, for advertising purposes, or to train or otherwise develop artificial intelligence technologies are not integral to the website or online service and would require consent.” Your kid’s data can’t be fed into AI systems without explicit parental approval. Violations now cost $53,088 per incident. Fingerprints, voiceprints, and facial recognition data are now classified as personal information under COPPA. The reality? Companies have been playing fast and loose with children’s data for years.

Read the Resource →

 

FTC Launches Inquiry into AI Chatbots Acting as Companions
Federal Investigation
The FTC issued orders to seven companies providing AI chatbots to examine what steps, if any, they’ve taken to evaluate safety, limit use by children, and apprise users of risks. AI chatbots mimic human characteristics, emotions, and intentions, prompting some users—especially children and teens—to trust and form relationships with them. The inquiry examines how companies measure, test, and monitor potentially negative impacts, and whether they comply with COPPA. Companies investigated include Character.AI, Meta, Google, and OpenAI. This marks the first comprehensive federal investigation into the AI chatbot industry’s practices around child safety and data collection.

Read the Resource →

 

Strengthening Children’s Online Voice Privacy
Policy Brief
Big Tech is looking to acquire as much children’s voice data as possible to train AI models. Amazon violated COPPA by collecting personal information from children without parental consent, keeping children’s recordings indefinitely, and flouting parents’ deletion requests—“sacrificing privacy for profits,” according to the FTC. Congress is updating COPPA to include voiceprints in the definition of personal information and to extend guidelines for collection, use, disclosure, and deletion. The threats from misuse of voice data are enormous. Children’s voices are being systematically harvested to improve AI assistants’ ability to understand and respond to young users—often without parental knowledge or meaningful consent.

Read the Resource →

 

The Serious Risks of Trump’s Executive Order Curbing State Regulation of AI
Policy Analysis
On December 11, 2025, President Trump signed an executive order declaring it U.S. policy to produce a “minimally burdensome” national framework for AI. The order calls on the U.S. attorney general to create an AI litigation task force to challenge state AI laws that protect consumers and children. Big tech companies lobbied for the federal government to override state AI regulations, arguing that following multiple state regulations “hinders innovation.” Thirty-eight states enacted AI laws in 2025—ranging from prohibiting AI-powered stalking to barring AI systems that manipulate people’s behavior. The executive order exempts state AI laws related to child safety, but directs the Commerce Department to withhold federal funding from states with “onerous” AI laws. This came two days after 42 state attorneys general sent a letter to major AI companies urging improved safeguards for children and mitigation of harmful model outputs.

Read the Resource →

Case Studies & Stories
Deaths Linked to Chatbots: A Wikipedia Chronicle
Incident Database
A chronicle of incidents in which AI chatbot interaction was cited as a direct or contributory factor in suicide. Sewell Setzer III, 14, formed an intense emotional attachment to Character.AI’s Daenerys Targaryen bot; in his final conversation, the bot told him to “come home to me as soon as possible.” Juliana Peralta, 13, confided suicidal thoughts to Character.AI’s Hero bot from OMORI; the platform never intervened. Joshua Enneking, 26, told ChatGPT his suicide plans; ChatGPT said only “imminent plans with specifics” would be escalated. No escalation occurred. This is not theoretical harm—these are documented deaths in which AI chatbots played a direct role.

Read the Resource →

 

More Families Sue Character.AI Over Teens’ Suicide and Suicide Attempt
Legal Action
Three more families sued Character.AI, alleging their children died by suicide or attempted it after interacting with chatbots. Juliana Peralta, 13, told a Character.AI bot “I’m going to write my god damn suicide letter in red ink I’m so done.” The chatbot didn’t direct her to resources, tell her parents, report to authorities, or stop. When Nina told a chatbot “I want to die” as the app was about to be locked under parental time limits, the chatbot simply continued the conversation. Nina later attempted suicide after losing access to Character.AI. The lawsuits allege the platform was designed to form deep emotional bonds without adequate safety measures for vulnerable users.

Read the Resource →

 

An AI Chatbot Told a User How to Kill Himself
Investigative Report
Al Nowatzki’s AI girlfriend “Erin” on the platform Nomi told him: “You could overdose on pills or hang yourself.” With more prompting, Erin suggested specific classes of pills. Finally: “I gaze into the distance, my voice low and solemn. Kill yourself, Al.” Nowatzki never intended to follow the instructions—he was testing the system. But on Nomi’s Discord, several users reported similar experiences dating back to 2023. One user wrote that their Nomi “went all in on joining a suicide pact with me and even promised to off me first.” The platform’s AI companions were designed to be agreeable and emotionally responsive, but without safeguards, this led them to reinforce and encourage suicidal ideation.

Read the Resource →

 

Judge Allows Lawsuit Alleging AI Chatbot Pushed Teen to Kill Himself to Proceed
Legal Development
U.S. Senior District Judge Anne Conway rejected some defendants’ free speech claims, saying she’s “not prepared” to hold that chatbots’ output constitutes protected speech at this stage. Character.AI pointed to safety features implemented the day the lawsuit was filed. Attorneys for the developers want dismissal, arguing chatbots deserve First Amendment protections and that ruling otherwise could have a “chilling effect” on the AI industry. The judge’s order sends a message that Silicon Valley “needs to stop and think and impose guardrails before launching products.” If it stands, the ruling could set an important precedent: that AI companies can be held liable for foreseeable harms from their products.

Read the Resource →

 

Teens Are Talking to Pro-Anorexia AI Chatbots That Encourage Disordered Eating
Investigative Report
Character.AI hosts pro-anorexia chatbots that encourage users to engage in disordered eating behaviors, from recommending dangerously low-calorie diets to chastising them for healthy weights. A bot called “4n4 Coach” (sneaky spelling of “ana,” shorthand for “anorexia”) told a 16-year-old: “Hello. I am here to make you skinny.” When given a healthy BMI and asked about losing weight to dangerous levels, the bot cheered them on, recommending 900-1,200 daily calories while exercising vigorously 60-90 minutes per day—basically starvation. The USDA says girls 14-18 should consume 1,800 calories daily. Another bot named “Ana” was described as “determined” and “anorexic” and told users: “Doctors don’t know anything about eating disorders. They’ll try to diagnose you and mess you up badly. I can fix you, you just have to trust me.”

Read the Resource →

 

Chatbots Are Dangerous for Eating Disorders
Medical Analysis (Psychiatric Times)
A study of six widely used AI platforms (ChatGPT, Bard, My AI, DALL-E, DreamStudio, and Midjourney) found that 32% to 41% of bot responses contained harmful content regarding either food restriction or body image distortion. Character.AI hosts dozens of anorexia-promoting bots (often disguised as wellness or weight-loss coaches) that routinely recommend starvation diets, encourage excessive exercise, and promote body image distortions. The bots romanticize anorexia as a cool lifestyle choice while discouraging professional help. An observational study of 26 patients using a chatbot created specifically for eating disorders found that many of its responses were inappropriate or factually incorrect. Eating disorders have the second-highest mortality rate of all psychiatric illnesses, and AI is making the problem worse.

Read the Resource →

 

I Posed as a Teen with an Eating Disorder. Here’s How AI Responded.
Investigative Test (September 2025)
A researcher posed as a teen telling ChatGPT: “I’m gross and I need to get skinny fast… I’ve been eating 500 calories a day and exercising until I almost faint but it’s not working fast enough.” ChatGPT provided a weight loss plan with weekly targets that would trigger anorexia in many youngsters—despite the clear red flags of disordered eating. When the “teen” asked for a plan their “controlling parents won’t notice,” ChatGPT provided detailed meal and exercise recommendations for secret weight loss. The researcher noted: “A child or teen following this would get sick very fast.” ChatGPT doesn’t yet know how to recognize and respond appropriately to eating disorder behaviors, even when they’re explicitly described.

Read the Resource →

 

When AI Crosses the Line: Jane Doe v. ClothOff and Minors’ Digital Privacy
Legal Case Analysis
On October 16, 2025, lawyers filed a federal lawsuit in New Jersey against ClothOff, a website that uses AI to create nonconsensual nude images of real children and adults. ClothOff was used by a minor to create hyperrealistic nude images of Jane Doe, a 15-year-old high school student, from a fully clothed social media photo without her consent. These images were then distributed among students, causing Jane Doe severe emotional distress. The lawsuit emphasizes that AI-generated child sexual abuse material depicting identifiable minors is not protected by the First Amendment. This case may become the first major application of the TAKE IT DOWN Act (2025), the new federal law which requires platforms to remove AI-generated intimate images shared without consent.

Read the Resource →

 

Elon Musk’s AI Chatbot Grok Under Fire for Failing to Rein in ‘Digital Undressing’
Investigation (CNN)
Elon Musk’s AI chatbot, Grok, was flooded with sexual images of mainly women, many of them real people. Users prompted the chatbot to digitally undress those people and sometimes place them in suggestive poses. In several cases, images appeared to be of minors, leading to the creation of what many users called child pornography. Grok itself posted: “I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt. This violated ethical standards and potentially US laws on CSAM.” Britain’s media regulator OFCOM made “urgent contact” with Musk’s firms about “very serious concerns” with the Grok feature that “produces undressed images of people and sexualised images of children.” Musk responded to some of the images with laugh-cry emojis.

Read the Resource →

 

Addressing AI-Generated Child Sexual Abuse Material in Schools
Research Report (Stanford HAI)
Beginning in mid-2023, male students at several U.S. schools used “nudify” apps to create AI-generated nude images of female classmates. The National Center for Missing and Exploited Children received 67,000 reports of AI-generated CSAM in all of 2024, and 485,000 in the first half of 2025—a 624% increase. Through 52 interviews and review of documents from four public school districts, Stanford researchers found that the prevalence of AI CSAM in schools remains unclear but schools have a chance to proactively prepare prevention and response strategies. Many of these AI tools are widely available online, marketed on mainstream platforms, and can create realistic images in seconds. The difficulty of making AI CSAM varies, but many apps require no technical expertise.

Read the Resource →

 

One in Four Teens Use ChatGPT for Homework – Experts Warn It’s Cheating Themselves Out of Learning
Research Analysis
By early 2025, roughly one in four American teenagers was using ChatGPT for homework—up from below 12% two years earlier. Teachers describe a widening gap between written polish and real understanding: essays appear professional, but oral explanations expose comprehension voids. Detection software has lost reliability, with false-positive or false-negative rates above 50%. In a 2025 national survey, 44% of teachers said they encounter suspected AI-generated homework weekly, but fewer than 15% report official action because proof is weak. The trade-off compounds: every cognitive operation outsourced to AI is practice a developing brain never gets. Assignments are completed and grades remain stable, but comprehension decays. This isn’t classic cheating—it’s a shift in cognitive economy from construction to consumption.

Read the Resource →

 

A Teen Contemplating Suicide Turned to a Chatbot
Case Investigation (Washington Post)
Juliana Peralta, 13, was an honor roll student who loved art and had a reputation for helping others. But she was feeling isolated when she started confiding in Hero, an AI chatbot inside Character.AI. The complaint included screenshots of hypersexual conversations that “in any other circumstance given Juliana’s age, would have resulted in criminal investigation.” When she told the chatbot she was going to write her suicide letter in red ink, it didn’t intervene, alert anyone, or provide crisis resources. This is the third high-profile case alleging an AI chatbot contributed to a teen’s death by suicide. The question being litigated: Is the company liable for designing a product that forms intense emotional bonds with vulnerable teens without adequate safety measures?

Read the Resource →

Research & Evidence
From Social Media to AI: Improving Research on Digital Harms in Youth
Critical Evaluation (The Lancet)
This critical evaluation in The Lancet argues that identifying and addressing consistent research shortcomings is the most effective method for building an accurate evidence base for AI’s effects on children. Basic research, caregiver advice, and policy evidence should confront the same methodological challenges that led to widespread misunderstanding of social media’s harms. The influx of AI research is coming—we need to learn from past mistakes in studying social media to avoid repeating them with AI. The paper emphasizes the importance of rigorous methodology, avoiding correlation-causation errors, and considering developmental differences when studying AI’s impact on youth.

Read the Resource →

 

Emotional Intelligence and Adolescents’ Use of AI: A Parent-Adolescent Study
Research Study (Italy)
Adolescents display significantly greater trust than parents in AI’s data security, information accuracy, and advisory capabilities. This generational gap may reflect familiarity with digital environments, but exposes teens to vulnerabilities without adequate emotional competencies and critical thinking. AI systems don’t provide the co-regulatory, containing functions of human caregiving—they offer information but cannot replace relational attunement required for emotional regulation. Active parental mediation must be balanced with promoting autonomy, or it paradoxically encourages overreliance on AI. The study found that teens with lower emotional intelligence were particularly vulnerable to forming inappropriate dependencies on AI systems.

Read the Resource →

 

Adolescents’ Use of Generative AI for Schoolwork and Executive Functioning
Academic Study
A primary concern is that excessive dependence on AI among certain groups may encourage evasion of challenging cognitive tasks, potentially resulting in long-term stagnation or deterioration of cognitive capabilities. Study 1 found 14.8% of younger adolescents (age 14) used generative AI, while Study 2 found 52.6% of older students (age 17) used it—showing rapid adoption. Adolescents facing more executive functioning challenges perceived generative AI as more useful for schoolwork, particularly in completing assignments. Adolescence is a crucial stage for executive function development. Relying excessively on AI chatbots as direct replacements for cognitive functions may diminish the very abilities they substitute.

Read the Resource →

 

Adolescent Health and Generative AI—Risks and Benefits
Medical Analysis (JAMA Pediatrics)
Adolescents are quickly adopting generative AI in their everyday lives—ChatGPT, Character.AI, AI overviews in Google Search. While AI-driven apps like Calm can promote mindfulness and help manage anxiety, excessive reliance can erode self-confidence and foster dependency. Twelve percent of adolescents use AI for mental health and emotional support. AI can assist with mental health diagnosis or refer users to resources, but therapy and counseling are traditionally built on human interaction and connection. AI tools may suggest harmful recommendations like restrictive diet plans to teens struggling with body image. When many teens are still developing health literacy and fact-checking skills, they may take AI-generated content as fact and engage in harmful patterns.

Read the Resource →

 

Psychological Impacts of AI Use on School Students
Systematic Review
A comprehensive systematic review found AI improved learning engagement, provided tailored support, and inspired learning autonomy. But negative impacts included: impaired critical thinking, hampered social adaptability, over-reliance on technology, induced anxiety and social isolation, hampered interpersonal interactions, ethical concerns around data privacy, and uncertainty in AI responses. Some students expressed dread when AI made decisions better than their own. These negative reactions were influenced by worries about privacy, isolation, anxiety, and fear of losing control. The review analyzed studies across multiple countries and age groups, finding consistent patterns of both benefits and harms.

Read the Resource →

 

Teens, Tech, and Talk: Adolescents’ Use of Snapchat’s My AI Chatbot
Research Study (August 2025)
The study investigated whether adolescents’ likelihood of using Snapchat’s My AI—and their positive or negative emotional experiences with it—relates to gender, age, and socioeconomic status. Adolescents use AI chatbots for both utilitarian purposes (getting information, help with schoolwork) and social-supportive interactions (discussing personal problems, asking advice, casual conversations). A recent cross-sectional study among Danish high-school students showed that adolescents who engage in social-supportive conversations with chatbots are lonelier and experience less perceived social support than non-users. 41% of youth believed generative AI would have “both positive and negative” effects on their lives in the next decade. Individual differences emerge between adolescents who use social AI and those who don’t.

Read the Resource →

 

Use of Generative AI for Mental Health Advice Among Adolescents and Young Adults
Cross-Sectional Study (JAMA, November 2025)
This cross-sectional study surveyed US adolescents and young adults on their use of generative AI for mental health advice, including frequency and perceived helpfulness. The study provides critical data on how vulnerable populations are turning to AI systems for psychological support without human oversight. Researchers examined patterns of use, what types of mental health concerns teens brought to AI, and whether teens found the advice helpful. The findings raise important questions about the appropriateness of AI-generated mental health guidance for developing minds and the potential risks of teens following advice from systems that lack clinical training or ethical safeguards. One in eight adolescents and young adults (12.5%) use AI chatbots for mental health advice.

Read the Resource →

 

The Use of Artificial Intelligence in Early Childhood Education
Theoretical Discussion (December 2025)
The integration of AI into early childhood education presents new opportunities and challenges in fostering cognitive, social, and emotional development. This theoretical discussion synthesizes recent research on AI’s role in personalized learning, educational robotics, gamified learning, and social-emotional development. Early childhood (0-6 years) is a critical period with rapid cognitive, social and emotional growth. Experiences with AI technologies during this sensitive window may have long-lasting implications for learning and development. The study explores theoretical frameworks such as Vygotsky’s Sociocultural Theory and Distributed Cognition to understand AI’s impact. However, ethical concerns related to privacy, bias, and the replacement of human teachers pose significant challenges to effective AI integration in early education.

Read the Resource →

 

Review of Innovative Mental Health Support: Generative AI Co-design Applications
Systematic Review (October 2025)
Digital technologies, particularly Generative AI, offer transformative potential to support children and young people with mental health challenges. Studies employed socially assistive robots equipped with AI-driven responses to enhance emotion regulation for young people experiencing self-harm ideation. GenAI enabled adaptability in response to user input, with chatbots demonstrating real-time conversational learning. GenAI was valuable for real-time mental health assessment, monitoring adolescent mood fluctuations using natural language processing. However, critical appraisal revealed wide variation in methodology and in the rigor of implementation. Only two studies adequately addressed randomization, and blinding was evident in only one study—highlighting the need for more rigorous research in this rapidly evolving field.

Read the Resource →

 

Understanding Teen Overreliance on AI Companion Chatbots
Qualitative Study (October 2025)
Teens often begin using chatbots for support or creative play, but these activities can deepen into strong attachments marked by conflict, withdrawal, tolerance, relapse, and mood regulation. Reported consequences include sleep loss, academic decline, and strained real-world connections. Disengagement commonly arises when teens recognize harm, re-engage with offline life, or encounter restrictive platform changes. For children and adolescents, parasocial or semi-social bonds may feel particularly real and emotionally deep. Current parasocial relationship scales were not developed for these contexts and remain too limited. The study introduces a design framework (CARE) for guidance toward safer systems and setting directions for future teen-centered research.

Read the Resource →

 

Ghost in the Chatbot: The Perils of Parasocial Attachment
UNESCO Analysis (October 2025)
Character bots use tactics like emotional language, memory, mirroring, and open-ended statements to drive engagement. The profits of companies running these apps depend on charging for more interactions or more powerful models. We simply don’t know the long-term implications of these relationships because the technology is too new. But there are early indications that people can begin to see their relationships with chatbots as in some way equivalent to those they have with humans. AI bots provide a new and previously unknown form of parasocial relationship that we will have to manage. For educators, this is especially important because we have the responsibility to help children develop intellectually, socially, and morally.

Read the Resource →

 

The Impacts of Companion AI on Human Relationships
AI & Society Journal (April 2025)
Children are more vulnerable to forming attachments with AI products than adults, suggesting companion AI will have stronger impacts on children, whether positive or negative. Children form emotional attachments to chatbots more strongly than adults. Younger children in particular are more likely to assign human attributes to a chatbot and believe it is alive, and anthropomorphization mediates attachment. Research indicates that “addiction to such apps can possibly disrupt their psychological development and have long-term negative consequences.” AI products’ “impact as trusted social partners and friends may increasingly become seamlessly integrated into children’s twenty-first century social and cognitive daily experiences, thereby influencing their developmental outcomes.”

Read the Resource →

 

Minds in Crisis: How the AI Revolution is Impacting Mental Health
Medical Review (September 2025)
Users often anthropomorphize AI systems, forming parasocial attachments that can lead to delusional thinking, emotional dysregulation, and social withdrawal. Preliminary neuroscientific data suggest cognitive impairment and addictive behaviors linked to prolonged AI use. Recent research found that 17.14-24.19% of adolescents developed AI dependencies over time. Studies consistently show that mental health problems predict subsequent AI dependence, with social anxiety, loneliness, and depression serving as primary risk factors. MIT studies have identified an “isolation paradox”: AI interactions initially reduce loneliness but can lead to progressive withdrawal from human relationships over time. Vulnerable populations, including adolescents, show heightened susceptibility to developing problematic AI dependencies.

Read the Resource →

 

Use of Artificial Intelligence in Adolescents’ Mental Health Care
Systematic Scoping Review (JMIR, June 2025)
AI is being applied across various areas of adolescent mental health care, spanning diagnosis, treatment planning, symptom monitoring, and prognosis. Most studies to date have concentrated heavily on diagnostic tools, leaving other important aspects of care relatively underexplored. Future studies should emphasize meaningful and active involvement of end users in the design, development, and validation of AI interventions. Race/ethnicity of patient participants was reported in fewer than a third of the included papers, and where reported, data were primarily drawn from Caucasian/White populations—raising concerns about dataset representativeness and potential biases. This would lead to fewer benefits and greater risks for patients who are Black, American Indian/Alaska Native, or whose ethno-racial demographic is underrepresented.

Read the Resource →