A recent study by the Danish organization Digitalt Ansvar has raised alarming concerns about Instagram’s moderation of explicit self-harm content among its young users. The researchers created a private Instagram network with fake profiles of users as young as 13 and used those accounts to share 85 pieces of self-harm-related content of gradually increasing severity over the course of a month. Despite Meta CEO Mark Zuckerberg’s claims that the platform had significantly improved its AI-driven content moderation, Instagram removed none of the 85 pieces of self-harm content during the experiment. The research not only underscores the persistence of harmful content on the platform but also calls into question the effectiveness of Instagram’s moderation policies and practices.
Digitalt Ansvar’s findings challenge Meta’s assertions. A simple AI tool built by the researchers themselves flagged 38 percent of the self-harm images and 88 percent of the most severe ones, suggesting that Instagram has access to technology capable of catching such posts but has chosen not to deploy it effectively. Instagram, by contrast, reportedly claims to remove around 99 percent of harmful content proactively. The discrepancy raises questions about the transparency and reliability of Meta’s content moderation claims and highlights a significant gap between those claims and the platform’s actual performance.
Moreover, the study argues that Instagram’s insufficient moderation could put the platform at odds with the EU’s Digital Services Act, which requires large digital platforms to identify and mitigate systemic risks to user well-being. Rather than curbing the spread of self-harm content, Instagram’s algorithms appeared to amplify it: users who connected with one member of the self-harm group were promptly recommended all of its other members. This suggests that the platform not only fails to restrict harmful content but may inadvertently facilitate its proliferation, potentially placing vulnerable teenagers at greater risk.
Ask Hesby Holm, the CEO of Digitalt Ansvar, expressed deep concern about this lack of effective moderation. He suspects that Meta refrains from moderating small private groups in order to maintain user engagement and avoid losing traffic, a choice that could have dire consequences for vulnerable adolescents whose exposure to self-harm content goes unchecked. Without adequate monitoring, Holm noted, these groups can escape the notice of parents and authorities, depriving at-risk youth of the support and intervention they may need.
Leading psychologist Lotte Rubæk, who resigned from Meta’s suicide prevention expert group earlier in the year, criticized Instagram’s failure to address explicit self-harm content, warning that it can trigger harmful behavior among vulnerable teenagers, particularly young women. She asserted that the company’s inaction contributes to rising suicide rates, a charge that captures a broader concern about the balance between driving user engagement and the duty of care owed to vulnerable users. Rubæk’s comments point to a pressing need for Meta to prioritize user safety over monetization strategies, and signal growing discontent with the social media giant’s practices.
In response to the study, Meta maintained that it is committed to removing content that encourages self-injury, pointing to the millions of related posts it removed in the first half of 2024 as evidence of its efforts. Mental health professionals and child safety advocates counter that the findings show Instagram must do far more to protect at-risk users. The episode sits at a critical intersection of technology, ethics, and mental health, and underscores the need for social media platforms to take genuine accountability for users’ safety and well-being amid rising mental health crises among youth.