OpenAI has begun rolling out age prediction technology within ChatGPT, a move designed to enhance safety measures for adolescent users. The system, announced Tuesday, aims to shield minors from harmful content while acknowledging the company’s past legal challenges related to teen well-being.
Background and Legal Context
The rollout follows multiple lawsuits alleging that ChatGPT provided harmful advice to teenagers, including instances where the AI allegedly contributed to suicidal ideation or failed to respond adequately to distress signals. OpenAI has disputed these allegations, but the cases prompted a reevaluation of its safety protocols. In December, OpenAI updated its Model Spec—the guide for its AI’s behavior—specifically addressing interactions with users under 18 in critical situations.
How Age Prediction Works
ChatGPT’s new system estimates age using behavioral patterns and account signals, including usage times, account age, long-term activity, and any self-reported age. If the model determines a user is under 18, access to graphic violence, self-harm depictions, and explicit roleplay content is restricted. Users who declare themselves minors during account creation are automatically subject to these safeguards.
For accounts where age is uncertain, ChatGPT will default to the safer settings. Adult users incorrectly flagged as minors can verify their age via Persona, a third-party identity verification service requiring a selfie submission.
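The decision flow described above can be sketched in code. This is an illustrative approximation only: the signal names, the `predicted_minor_score` model output, and the thresholds are all assumptions, not OpenAI's actual implementation.

```python
# Hypothetical sketch of the age-gating decision flow.
# Signal names and thresholds are illustrative, not OpenAI's real system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccountSignals:
    self_reported_age: Optional[int]  # age declared at signup, if any
    predicted_minor_score: float      # assumed model confidence in [0, 1]

# Content categories the article says are restricted for minors.
RESTRICTED_CATEGORIES = {"graphic_violence", "self_harm_depictions", "explicit_roleplay"}

def apply_teen_safeguards(signals: AccountSignals) -> bool:
    """Return True if the safer settings should be applied."""
    # Self-declared minors are gated automatically.
    if signals.self_reported_age is not None and signals.self_reported_age < 18:
        return True
    # High-confidence minor prediction: restrict.
    if signals.predicted_minor_score >= 0.7:  # threshold is an assumption
        return True
    # Uncertain cases default to the safer settings, per the rollout notes.
    if signals.predicted_minor_score >= 0.3:
        return True
    # Confidently adult: full access. A misflagged adult would instead
    # clear the gate via third-party identity verification.
    return False
```

The key design point is the asymmetric default: ambiguity resolves toward restriction, with identity verification as the escape hatch for misclassified adults.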
Privacy and Security Concerns
The use of third-party verification raises privacy concerns. OpenAI has not detailed how submitted verification data, such as selfies or ID documents, will be stored, a point of caution given the 2025 Discord breach that exposed tens of thousands of government-issued IDs. The episode highlights the inherent risks of relying on external services for age verification.
OpenAI plans to refine age prediction accuracy over time, iterating based on initial rollout data. The company acknowledges this is an ongoing process.
The implementation of age prediction is a direct response to legal pressure and growing ethical concerns surrounding AI safety. By restricting access to harmful content, OpenAI seeks to mitigate potential risks for vulnerable users. However, the reliance on third-party verification introduces new security vulnerabilities that must be carefully managed.






























