OpenAI introduces parental controls in ChatGPT: filters, schedules, and alerts

  • New parental controls with linked accounts between parents and teens.
  • Content filters, time limits, and options to disable voice, memory, and images.
  • Expert-reviewed alerts for risk signals, with minimal disclosure to protect privacy.
  • Collaboration with organizations and specialists, plus resources for families; these features are not infallible.

Parental Controls in ChatGPT

OpenAI has activated a set of parental controls in ChatGPT designed to give families tools for safer use by teens. The feature seeks to balance the usefulness of AI with clear limits that reduce exposure to inappropriate content and make supervision easier.

The move comes amid concern about minors' use of AI, driven by its growing presence in education and at home. The company proposes an approach focused on safety and privacy, after weeks of public debate and media attention on a recent case that has reopened the conversation about the impact of these technologies on young people.


What changes with the new parental controls

Control options for families

Parents can link their account with their teen's account (ages 13 to 18) to activate specific protections. The link requires the minor's consent, and from that moment on, additional restrictions designed for that age group apply.

  • Additional filters that limit access to sensitive content: reduced graphic material, viral challenges, role-play of a sexual, romantic, or violent nature, and extreme beauty ideals.
  • The ability to disable features such as memory, voice mode, or image generation, adjusting the assistant's behavior to what each family considers appropriate.
  • An option to exclude the teen's conversations from model training, reinforcing their privacy and control over their data.
  • Configurable quiet hours and usage limits to help balance screen time with rest and other activities.

It is important to keep in mind that parents do not get direct access to the minor's chat history; the tool is designed to shape the experience and reduce risks while preserving a reasonable level of privacy.

Alerts and privacy: how risk cases are managed


If the teen's usage suggests possible signs of self-harm or emotional risk, a warning system can alert parents or guardians. These alerts are not sent automatically by the AI: a human team reviews each case and determines whether notification is appropriate.

For privacy reasons, the notification shares only the information strictly necessary to facilitate family intervention. The exact content that triggered the notice is not sent, and, when needed, guidelines developed with experts are provided to help guide the conversation with the minor.

OpenAI notes that alert delivery may be delayed due to the volume of reviews, and confirms that, if necessary, it will attempt contact through multiple channels (email, SMS, or in-app notification). In situations of imminent threat, the company may escalate the notice to emergency services when parents cannot be reached.

Context, collaboration and next steps

These measures were developed in dialogue with reference organizations such as Common Sense Media and with specialists in mental health and adolescence. Alongside them, OpenAI has published a Parent Resources page with guides and recommendations for accompanying AI use at home.

The new features arrive amid a broader societal debate following the death of a teenager and the lawsuit filed by his family, which have put the risks and responsibilities of chatbot use on the public agenda. The company says it continues to strengthen its safeguards in response to these concerns.

Still, the company warns that the controls are not infallible and that some young people may try to circumvent them. It therefore stresses the importance of combining these tools with family conversations, clear rules about technology, and active parental involvement.

To expand protection, OpenAI is working on an age-prediction system that would automatically apply minor-appropriate settings when a user is suspected of being a teenager. The controls are available now in the web version and will be extended to mobile devices, while response times and configuration options continue to be refined.

With this package of features (content filters, usage controls, privacy options, and expert-reviewed alerts), OpenAI is trying to make ChatGPT safer for teens, while acknowledging that parental support and digital education remain key.

