Core Viewpoint
- Meta's moderation of self-harm content on Instagram is severely inadequate, allowing harmful content to proliferate rather than removing it effectively [1][3][8]

Group 1: Study Findings
- Danish researchers created a private self-harm network on Instagram, sharing 85 pieces of self-harm-related content, including increasingly severe images [2]
- The study aimed to test Meta's claims of improved content moderation; the company asserts that its AI removes about 99% of harmful content before it is reported, yet not a single image was removed during the month-long experiment [3][4]
- Digitalt Ansvar found that Instagram's algorithm was not only failing to shut down the self-harm network but was actively helping it expand by connecting vulnerable users with members of the group [7][8]

Group 2: Meta's Response and Policies
- A Meta spokesperson stated that content encouraging self-injury violates its policies, claiming that more than 12 million pieces of suicide- and self-injury-related content were removed in the first half of 2024, with 99% taken down proactively [6]
- Despite these claims, the researchers' own simple AI tool identified 38% of the self-harm images and 88% of the most severe ones, suggesting that Meta has access to technology capable of addressing the issue but is not deploying it effectively [4][6]

Group 3: Implications and Expert Opinions
- The inadequate moderation of self-harm content raises concerns about compliance with EU law, specifically the Digital Services Act, which requires large platforms to identify systemic risks to physical and mental well-being [5]
- Experts, including psychologists, have expressed alarm at Meta's failure to remove explicit self-harm content, warning that this negligence could trigger vulnerable individuals and contribute to rising suicide rates [11][13][14]
- The situation is described as a matter of life and death for young users, with accusations that Meta prioritizes engagement and profit over user safety [14]
Instagram actively helping to spread self-harm among teenagers, study suggests