Instagram is now using artificial intelligence (AI) to detect users who give false ages during sign-up. The goal is to better protect young people on the platform. The new system looks at suspicious profiles and spots signs of age misrepresentation.
This update is part of a broader effort to keep teens safe online. If Instagram finds that someone is younger than they claimed, their account is automatically classified as a teen profile—with extra privacy and safety tools.
How Instagram’s New AI Age Check Works
The AI system reviews many parts of a user’s activity, including:
- Profile information
- Posts, likes, and comments
- Interactions with other users
Even if a user lies about their birthdate, the AI may spot differences between what they claim and how they act online. If something seems off, the account is flagged for further review.
Instagram, which is owned by Meta, says it will also search for underage users who signed up using false information.
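Meta has not published how its classifier works, but the idea of combining several behavioral signals and flagging an account when they contradict the claimed age can be sketched as a simple rule-based check. Everything below is a hypothetical illustration: the `Profile` fields, keyword list, and two-signal threshold are invented for this example, not Meta's actual system.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Profile:
    claimed_age: int
    bio_keywords: List[str] = field(default_factory=list)
    follows_school_accounts: bool = False
    # Age mentioned in a "happy birthday" post, if one was detected (hypothetical signal).
    birthday_post_age: Optional[int] = None

def flag_for_review(profile: Profile) -> bool:
    """Return True when behavioral signals conflict with the claimed age."""
    signals = 0
    # Signal 1: a birthday post implies a younger age than the one claimed.
    if profile.birthday_post_age is not None and profile.birthday_post_age < profile.claimed_age:
        signals += 1
    # Signal 2: bio contains phrases typical of younger users (illustrative keyword list).
    if any(k in profile.bio_keywords for k in ("freshman", "8th grade", "middle school")):
        signals += 1
    # Signal 3: the interaction graph skews heavily toward school-age accounts.
    if profile.follows_school_accounts:
        signals += 1
    # Flag only when two or more independent signals disagree with the claimed age,
    # so a single coincidence does not trigger a review.
    return signals >= 2
```

A profile claiming to be 21 but whose bio says "freshman" and who mostly follows school accounts would accumulate two signals and be flagged; a profile with no conflicting signals would not.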
Stronger Protections for Teen Accounts
Once an account is flagged as a teen account, several safety features are turned on:
- Privacy: Accounts are set to private by default.
- Messages: Only approved followers can send direct messages.
- Content limits: Sensitive content, like violent videos or cosmetic surgery posts, is restricted.
Instagram is also adding a screen time reminder. Teens will get a notification after using the app for 60 minutes. A “sleep mode” will automatically turn on between 10 p.m. and 7 a.m. This feature mutes notifications and sends auto-replies to messages.
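The defaults above amount to a settings profile that is applied when an account is reclassified, and the sleep window has one subtlety worth noting: because 10 p.m. to 7 a.m. wraps past midnight, it cannot be tested with a single `start <= now <= end` comparison. The setting names below are hypothetical, chosen only to mirror the features the article lists.

```python
from datetime import time

# Illustrative teen-account defaults (names are invented for this sketch).
TEEN_DEFAULTS = {
    "account_private": True,
    "dms_from": "approved_followers_only",
    "sensitive_content": "restricted",
    "screen_time_reminder_minutes": 60,
}

SLEEP_START = time(22, 0)  # 10 p.m.
SLEEP_END = time(7, 0)     # 7 a.m.

def in_sleep_mode(now: time) -> bool:
    """True when notifications should be muted. The window crosses midnight,
    so the two boundary checks are joined with `or`, not `and`."""
    return now >= SLEEP_START or now < SLEEP_END
```

With this logic, 11:30 p.m. and 6:59 a.m. fall inside the quiet window, while 7:00 a.m. and noon fall outside it.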
Parents Will Be Notified and Included
Instagram also wants to involve parents. When an account is switched to teen mode, guardians will be notified. They’ll be encouraged to talk with their children about why it’s important to be honest online.
“We want young people to be safe and understand why age matters online,” Meta said. The new AI helps build a safe space—without making parents feel like they have to watch everything.
Background: Social Media and Teen Mental Health
Instagram’s move comes as social media companies face more pressure to protect young users. In the United States, several states are working on laws that would require strict age checks for online platforms.
But many of these laws face legal roadblocks. Tech companies like Meta argue that app stores—such as Google Play or Apple’s App Store—should handle age verification, not the platforms themselves.
Still, Instagram’s AI rollout is seen as a proactive step. Experts call it an important shift toward greater digital responsibility.
Other Platforms May Follow
Apps like TikTok and YouTube are also adding more safety tools for kids and teens. These include age-based content filters, screen time warnings, and features to block harmful messages.
As public debate grows over how social media affects teen mental health, more platforms may turn to AI and stricter rules.
A recent study by Harvard University found that 70% of teenagers say social media feels stressful or harmful—especially when protections are weak.
With its new AI tool, Instagram is taking clear action to keep young users safe.
The platform can now detect fake ages, adjust settings automatically, and involve parents. It’s a step forward in making the digital world safer for teens.