Comment: ChatGPT’s youth safeguards must be robust, easy to set up

Published 1:30 am Tuesday, September 23, 2025

By Parmy Olson / Bloomberg Opinion

Tech companies have a habit of designing safeguards into their services only after users have been harmed. Call it their crisis playbook. OpenAI is the latest to follow it, having been sued over the death of a boy who sought suicide instructions from ChatGPT.

Chief Executive Officer Sam Altman was in Washington, D.C., on Tuesday for a congressional hearing on the matter, and just hours before it began, he announced that ChatGPT would start estimating users' ages and putting strict limits in place if someone appeared to be under 18. For once, it seemed, a tech bro was being responsible.

His company’s new safeguards for teens go part of the way to addressing his product’s mental health costs. But execution is key. The new limits should be difficult for kids to get around, and if new parental controls end up being hard to use when they launch “by the end of this month,” Altman’s efforts will be no better than those of Apple Inc. or Alphabet Inc.’s Google, whose own control panels for parents can be fiendishly difficult to navigate.

OpenAI said Wednesday that it would use age-prediction techniques, based on how its users talk to ChatGPT. If in doubt, it’ll assume a user is under 18 and ask them to prove it with an ID, just as a liquor store might. That has already angered some privacy advocates, but the measure is worth adding if it prevents a vulnerable kid from falling down a rabbit hole.

In those cases, ChatGPT won’t respond to flirtatious comments or provide sexual content. It also won’t talk about suicide, even if a young user says they’re merely writing a story about the topic and want some help describing the logistics, as teenager Adam Raine did, according to a lawsuit filed by his parents earlier this year. Meetali Jain, a lawyer representing them, tells me that any form of instructions for suicide should be eliminated entirely from ChatGPT, including for adults. “Why can’t there be bright lines around those outputs?” she asks.

But this is nonetheless a step forward, and a potentially bigger update for ChatGPT will be controls that allow parents to link their accounts with those of their children through an email invite, and impose further limits. They can disable the "memory" feature, which creates hyper-personalized conversations and can lead some users to become emotionally hooked on ChatGPT.

Setting these up means parents will also be notified if ChatGPT detects their teen is in distress; and if they aren't reachable, OpenAI will contact local law enforcement. When I asked OpenAI how it would determine that threshold, whether human moderators would check transcripts, or how it would identify local authorities, a spokeswoman said the company would share more details when the parental controls launch later this month.

Altman must ensure those controls are clear and easy for non-tech-savvy parents to find, because that hasn't been the case with some of the most popular smartphones. A 2025 study from the Family Online Safety Institute, a non-profit, found that while many parents were aware of parental controls, only about 47 percent used them on smartphones, in large part because they didn't know how to set them up or found the process too challenging.

Many parents, for instance, struggle with Apple’s “Screen Time” feature because it buries critical controls deep in the phone’s Settings menus with confusing terminology. Want to block inappropriate websites? You have to navigate to Settings > Screen Time > Content & Privacy Restrictions > Content Restrictions > Web Content > Limit Adult Websites. Many parents don’t realize what these different options do, or how to properly configure block lists to be effective, and Apple has been rightly criticized for not streamlining the process, which can also be buggy.

With Android phones, Google’s Family Link has been known to force parents to juggle multiple steps across different phones, like signing out of their own account on their child’s phone, creating a new child account, and then trying to connect the devices; steps that sometimes fail or time out. I know of parents who have simply given up after several attempts.

OpenAI has a chance to break the cycle of complexity that has plagued parental controls at other tech firms. If Altman can deliver on his promise of safeguards, he should make them truly user-friendly for parents, not ones that require them to become tech experts just to protect their children. The choice is between more empty promises and potentially setting a new standard for the industry.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “Supremacy: AI, ChatGPT and the Race That Will Change the World.”