Meta says online harassment is up and false flags are down following a change in content moderation policies

Core Insights
- Meta reported a slight increase in online bullying and harassment on Facebook in Q1 2025 compared to Q4 2024, with prevalence rising from 0.06-0.07% to 0.07-0.08% [1][2]
- The prevalence of violent and graphic content also rose, from 0.06-0.07% to about 0.09%, attributed to a spike in the sharing of violating content in March and to ongoing efforts to reduce enforcement mistakes [2][6]

Content Moderation Changes
- In January, Meta overhauled its content moderation policies, allowing more political content across its platforms and eliminating restrictions on topics like immigration and gender identity [3][4]
- The definition of "hate speech" was narrowed to focus on direct attacks and dehumanizing speech, moving away from a broader range of flagged aggressions [4]
- The company replaced third-party fact-checkers with crowd-sourced community notes, similar to its competitor X [4]

Impact of New Policies
- Meta observed a significant reduction in error rates under the new policies, cutting content moderation mistakes roughly in half compared to the previous system [5]
- The Q1 2025 report reflects these changes, showing a decrease both in the amount of content actioned and in the preemptive actions taken by the company [6]
- The company aims to calibrate enforcement to avoid both under-enforcement of violating content and excessive mistakes [6]

Community Notes and Challenges
- Community notes have been described as a democratization of fact-checking, but there are concerns about potential risks of bias and misinformation [8]
- The prevalence of online bullying and harassment violations was 0.08-0.09% in Q1 2024, compared to around 0.07% in Q1 2023, indicating fluctuations in violation rates [8]