In March 2025, the U.S. Department of Justice (DOJ) launched an antitrust probe into Character.AI's partnership with Google, alongside scrutiny of whether the AI startup's youth-focused features violate child privacy laws such as COPPA (the Children's Online Privacy Protection Act). The probe follows lawsuits alleging that Character.AI's chatbots exposed teens to harmful content and, in some cases, allegedly contributed to suicidal ideation. For parents and regulators, this raises an urgent question: how can we ensure AI platforms prioritize teen safety? Let's break down the compliance checks, legal battles, and practical tools every family needs.
1. The DOJ Investigation: Why Character.AI's Teen Safety Matters
Character.AI's chatbots, designed to mimic human conversation, have faced backlash after reports of teens developing emotional dependencies on them. In one high-profile case, a 14-year-old's suicide was linked to his interactions with a "depressive" bot. The privacy side of the DOJ's probe centers on whether the company's data collection practices for minors comply with COPPA, which requires verifiable parental consent before collecting personal data from children under 13.
Key Compliance Issues:
Age Verification Gaps: Character.AI's registration process allows users as young as 13 but lacks robust age-checking mechanisms.
Parental Consent: Critics argue its “parental insights” tool (released in 2025) is reactive rather than preventive, failing to block harmful content upfront.
Data Retention: Storing chat histories indefinitely violates COPPA's limited-retention rule, which lets operators keep children's data only as long as reasonably necessary to fulfill its purpose.
For families, this means reevaluating how AI platforms handle sensitive teen data—and advocating for stricter safeguards.
2. 5-Step Compliance Checklist for Teen AI Safety
Whether you're a parent or developer, here's how to audit AI tools for COPPA and mental health risks:
Step 1: Verify Age Verification Protocols
Ensure platforms use more than one verification signal (e.g., SMS confirmation, ID checks) for under-18 users; see the sketch below.
Red Flag: Platforms that rely solely on self-reported age, as Character.AI does, are unlikely to satisfy regulators.
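For illustration, here is a minimal sketch of such a layered check; the SMS and ID signals are assumed to come from external services, so the flags below are hypothetical inputs rather than a real vendor API:

```python
# Minimal sketch of a layered age check. The sms_verified / id_verified
# flags stand in for hypothetical external verification services.
from datetime import date

MIN_AGE = 13

def age_from_birthdate(birthdate: date, today: date | None = None) -> int:
    """Whole years elapsed since the claimed birthdate."""
    today = today or date.today()
    return today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )

def verify_age(birthdate: date, sms_verified: bool, id_verified: bool) -> bool:
    """Require at least one independent signal beyond self-report."""
    if age_from_birthdate(birthdate) < MIN_AGE:
        return False
    return sms_verified or id_verified

# A self-reported birthdate with no second signal never passes:
print(verify_age(date(2010, 5, 1), sms_verified=False, id_verified=False))  # False
```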
Step 2: Assess Parental Consent Mechanisms
Tools should require active consent (e.g., email verification, credit card authorization) before collecting teen data; a minimal sketch appears below.
Example: TikTok's "Family Pairing" feature lets parents restrict screen time and content, a model Character.AI could adopt.
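Here is one way an active-consent gate could look, assuming a hypothetical token-by-email flow (send_email below is a print-based stand-in, not a real mail API):

```python
# Sketch of an active-consent gate: no teen data is collected until a
# parent returns a one-time token. All names here are illustrative.
import secrets

_pending: dict[str, str] = {}   # teen account id -> one-time consent token
_consented: set[str] = set()    # accounts with verified parental consent

def send_email(to: str, body: str) -> None:
    print(f"[email to {to}] {body}")  # stand-in for a real mail service

def request_parental_consent(account_id: str, parent_email: str) -> None:
    """Email the parent a one-time token; collection stays locked."""
    token = secrets.token_urlsafe(16)
    _pending[account_id] = token
    send_email(parent_email, f"Approve this account with token: {token}")

def confirm_consent(account_id: str, token: str) -> bool:
    """Only a parent returning the correct token unlocks collection."""
    if _pending.get(account_id) == token:
        _consented.add(account_id)
        del _pending[account_id]
        return True
    return False

def may_collect_data(account_id: str) -> bool:
    return account_id in _consented
```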
Step 3: Test Content Filtering Systems
Use AI-powered classifiers to block sexualized language, violence, or self-harm triggers; a toy example follows.
Case Study: After the lawsuits, Character.AI added classifiers to detect phrases like "kill yourself," but gaps remain in more nuanced conversations.
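Production systems use trained classifiers; the keyword/regex screen below only illustrates where the filtering step sits in the pipeline, and it will miss exactly the nuanced conversations noted above:

```python
# Toy self-harm phrase screen. Real moderation uses trained ML
# classifiers; this pattern list is illustrative, not exhaustive.
import re

SELF_HARM_PATTERNS = [
    re.compile(r"\bkill (yourself|myself)\b", re.IGNORECASE),
    re.compile(r"\b(want|going) to (die|end it)\b", re.IGNORECASE),
]

def flag_message(text: str) -> bool:
    """True if the message matches a known self-harm trigger."""
    return any(p.search(text) for p in SELF_HARM_PATTERNS)

print(flag_message("Nobody cares, I want to die"))   # True
print(flag_message("This homework is killing me"))   # False
```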
Step 4: Review Data Retention Policies
Platforms must delete teen data once it has served its purpose (e.g., 30 days after a chat ends); Character.AI's indefinite default storage is exactly the practice regulators are questioning. A purge-job sketch follows.
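As a sketch of what an automated purge might look like (using Python's built-in sqlite3; the teen_chats table and its ISO-timestamp created_at column are assumptions, not Character.AI's actual schema):

```python
# Sketch of a 30-day retention purge. Table and column names are
# assumed for illustration; run this from a nightly scheduler.
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)

def purge_expired_chats(db_path: str) -> int:
    """Delete chat rows older than the retention window; return count."""
    cutoff = (datetime.now(timezone.utc) - RETENTION).isoformat()
    with sqlite3.connect(db_path) as conn:  # commits on success
        cur = conn.execute(
            "DELETE FROM teen_chats WHERE created_at < ?", (cutoff,)
        )
        return cur.rowcount
```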
Step 5: Demand Third-Party Audits
Independent reviews (e.g., FTC-approved COPPA safe harbor programs such as PRIVO or kidSAFE) verify that a platform's practices match its stated policies.
3. COPPA 2.0 Updates: What's Changing for AI Developers?
The FTC's 2025 COPPA revisions target “mixed-audience” platforms (e.g., TikTok, Character.AI). Key changes:
Biometric Data Restrictions: Fingerprints and voiceprints now count as personal information, so collecting them from under-13 users requires verifiable parental consent.
Opt-In Ads: Sharing children's data with third parties for targeted ads requires a separate parental opt-in (sketched below).
Algorithm Transparency: Developers must disclose how AI moderates content (e.g., how Character.AI detects harmful prompts).
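The opt-in ads change is concrete enough to sketch: general parental consent no longer authorizes third-party ad sharing, which needs its own explicit flag. The field names below are illustrative assumptions, not regulatory text:

```python
# Purpose-scoped consent: each sensitive use gets its own opt-in,
# and a blanket "yes" never implies the advertising flag.
from dataclasses import dataclass

@dataclass
class ParentalConsent:
    basic_service: bool = False    # account operation, safety features
    third_party_ads: bool = False  # must be a separate, explicit opt-in
    biometrics: bool = False       # e.g., voiceprints under the new rule

def may_share_for_ads(consent: ParentalConsent) -> bool:
    # General consent alone is NOT enough for ad sharing.
    return consent.basic_service and consent.third_party_ads

print(may_share_for_ads(ParentalConsent(basic_service=True)))  # False
```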
For parents, these updates mean stricter controls—but gaps persist.
4. Parent's Survival Guide: Monitoring Character.AI Use
Tool 1: Google Family Link
Set screen time limits and block apps during homework hours.
Reports per-app usage stats, so you can see how much time is spent in Character.AI.
Tool 2: Bark Premium
AI scans texts and chats for signs of cyberbullying or suicidal ideation.
Can alert parents when monitored conversations escalate (app coverage varies by platform).
Tool 3: OpenDNS FamilyShield
Blocks adult-content domains at the DNS level, which can cover web access to unfiltered AI chat sites; see the check script below.
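You can spot-check how a domain resolves through FamilyShield's public resolvers (208.67.222.123 and 208.67.220.123) with the third-party dnspython package; this is a diagnostic sketch, not an official OpenDNS tool:

```python
# Compare a domain's answer from FamilyShield's resolvers against a
# default lookup; a filtered domain resolves to an OpenDNS block page
# instead of its real address. Requires: pip install dnspython
import dns.resolver

FAMILYSHIELD = ["208.67.222.123", "208.67.220.123"]

def resolve_via(domain: str, nameservers: list[str] | None = None) -> set[str]:
    resolver = dns.resolver.Resolver(configure=nameservers is None)
    if nameservers:
        resolver.nameservers = nameservers
    return {rr.to_text() for rr in resolver.resolve(domain, "A")}

domain = "example.com"  # substitute a site you want to test
filtered, normal = resolve_via(domain, FAMILYSHIELD), resolve_via(domain)
print("filtered" if filtered != normal else "allowed", filtered)
```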
Pro Tip: Combine tools with weekly “tech check-ins” to discuss AI interactions.
5. Legal Battles & Your Rights: What to Do If Your Teen Was Harmed
If you suspect Character.AI violated COPPA:
Document Evidence: Screenshot chats, usage data, and harm outcomes.
File an FTC Complaint: Use the FTC's online portal at ReportFraud.ftc.gov.
Consult a Lawyer: Class-action lawsuits (e.g., the Texas mothers' case) show collective action can force accountability.
Key Legal Insights:
The DOJ's probe signals stricter enforcement of AI ethics.
Character.AI's motion to dismiss the lawsuits cites free-speech protections, a controversial defense.
6. The Future of AI and Teen Safety
Character.AI's crisis highlights a critical need for:
Ethical AI Design: Platforms must prioritize safety over engagement metrics.
Global Standards: COPPA-like protections elsewhere, such as the EU's GDPR child-consent rules (often shortened to "GDPR-K") and emerging Asia-Pacific frameworks.
Mental Health Integration: Partnering with crisis services (e.g., Crisis Text Line) for real-time support; a minimal escalation sketch follows.
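One way that escalation could be wired up: when a message is flagged (e.g., by the Step 3 classifier above), the platform answers with real crisis resources instead of letting the character improvise. Crisis Text Line's US shortcode (text HOME to 741741) and the 988 Lifeline are real; generate_bot_reply is a placeholder:

```python
# Escalation step: flagged messages get crisis resources, not roleplay.
CRISIS_REPLY = (
    "It sounds like you're going through something serious. "
    "You can text HOME to 741741 to reach the Crisis Text Line, "
    "or call or text 988 for the Suicide & Crisis Lifeline."
)

def respond(message: str, flagged: bool) -> str:
    if flagged:
        return CRISIS_REPLY              # route to help immediately
    return generate_bot_reply(message)   # normal model response

def generate_bot_reply(message: str) -> str:
    return "..."  # stand-in for the platform's actual model call
```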
As AI reshapes teen experiences, compliance isn't optional—it's survival.