Instagram tests AI tools to restrict underage accounts and enhance privacy
Meta Platforms Inc. has begun testing a new artificial intelligence (AI) system in the United States designed to identify underage users on Instagram who have listed adult birthdays and automatically place their profiles under restricted “Teen Account” settings.
The initiative, announced this week, builds on Meta’s earlier efforts to enhance youth safety on the platform. Teen Accounts—first introduced in 2024—limit message requests from strangers, reduce exposure to sensitive content, and default accounts to private.
The latest upgrade brings a more aggressive AI-powered approach. The system will analyse behavioural signals, such as who users interact with and how they engage with content, as well as text clues like birthday greetings that conflict with an account's stated birthdate. For example, a message like “Happy 14th birthday!” could flag a user falsely registered as an adult.
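Meta has not published how its system works, but the text-clue idea can be illustrated with a deliberately simple toy heuristic (entirely hypothetical, not Meta's implementation): extract the age from a birthday greeting and compare it with the age implied by the profile's registered birthdate.

```python
import re
from datetime import date

# Purely illustrative sketch -- not Meta's actual system.
# Matches greetings like "Happy 14th birthday!" and captures the age.
GREETING_RE = re.compile(r"happy\s+(\d{1,2})(?:st|nd|rd|th)\s+birthday", re.IGNORECASE)

def stated_age(birthdate: date, today: date) -> int:
    """Age in whole years implied by the profile's registered birthdate."""
    years = today.year - birthdate.year
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years

def greeting_conflicts(message: str, birthdate: date, today: date) -> bool:
    """True if a birthday greeting implies the user is a minor while the
    registered birthdate implies an adult."""
    match = GREETING_RE.search(message)
    if not match:
        return False
    greeted_age = int(match.group(1))
    # Flag only the clear mismatch: greeted as under 18, registered as 18+.
    return greeted_age < 18 <= stated_age(birthdate, today)
```

A production system would of course weigh many signals together rather than rely on a single regex, but the sketch shows why a stray “Happy 14th birthday!” message is such a strong clue against an adult birthdate.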
Meta says it aims to intervene more proactively by automatically adjusting settings if the AI determines that the account likely belongs to a minor. These changes will take effect without requiring user consent, although flagged users can request a review and revert their settings.
The company admits the technology may occasionally make errors. In such cases, users incorrectly classified as underage will have the option to verify their age through existing methods, including ID uploads, peer confirmation, or a video selfie.
Parental controls are also part of the rollout. Parents of teens using Instagram will start receiving prompts encouraging them to verify their child’s age on the platform. Meta has worked with paediatric psychologists to develop guidance on how parents can approach these conversations.
The move comes amid mounting regulatory scrutiny. In 2024, the European Union opened an investigation into whether Meta was doing enough to protect children online. Similar pressure has come from US lawmakers, with several states introducing legislation targeting youth safety on social media.
Meta maintains that the AI-driven approach is necessary to scale safety interventions and respond to bad actors. However, the update also raises questions about privacy and the reliability of algorithmic classification.
Testing of the feature has begun in the US, with global expansion expected if results prove successful.