Meta is introducing a safety feature on Instagram that notifies parents and guardians if a teen repeatedly searches for content related to suicide or self-harm, according to the company's newsroom.
The feature applies to users covered by Instagram's teen account supervision system. When a teen performs repeated searches involving restricted keywords, the guardian receives a 'teen safety alert' via the app, email, SMS, or WhatsApp.
Alongside the alert, guardians also get access to expert guidance on how to start difficult conversations in a supportive way.
The self-harm alerts will roll out first in the US, UK, Australia, and Canada, with broader international availability planned for later this year.
Meta says it is also developing AI-based guardian notifications covering additional conversation topics, expected to launch by the end of the year.
This article originally appeared in our sister publication M3 and was translated and adapted from Swedish.
Viktor writes articles and features for our sister publications M3 and PC för Alla. He follows tech developments closely, keeping pace with new product launches and the major issues in consumer technology.