Rise in 'harmful content' since Meta policy rollbacks: survey

Core Viewpoint
- The article highlights a significant increase in harmful content across Meta's platforms following the company's decision to end third-party fact-checking and ease moderation policies, raising concerns among users about safety and free expression [1][6].

Summary by Sections

Policy Changes
- Meta discontinued third-party fact-checking in the US in January, shifting the responsibility to users through a model called "Community Notes" [2][7].
- The company also relaxed restrictions on topics related to gender and sexual identity, allowing users to make accusations based on these characteristics [4].

User Experience
- A survey of approximately 7,000 active users found that one in six respondents had experienced gender-based or sexual violence on Meta platforms, and 66% had witnessed harmful content [5].
- Ninety-two percent of respondents expressed concern about the rise of harmful content, and 77% said they felt less safe expressing themselves freely [6].

Company Response
- In its quarterly report, Meta claimed the changes had minimal impact, stating that enforcement mistakes had been halved and the prevalence of violating content remained largely unchanged [8].
- The groups behind the survey countered that Meta's report did not accurately reflect users' experiences of hate and harassment [8].

Advocacy and Recommendations
- Advocacy groups urged Meta to hire an independent third party to analyze the impact of the policy changes on harmful content and to reinstate its previous content moderation standards [10].
- They also warned of potential global implications if Meta expands the policy changes beyond the US, which could affect its fact-checking programs in over 100 countries [11].