Fact-Checking

Meta says online harassment is up and false flags are down following a change in content moderation policies
Business Insider · 2025-05-30 00:51
Core Insights
- Meta reported a slight increase in online bullying and harassment on Facebook in Q1 2025 compared to Q4 2024, with prevalence rising from 0.06-0.07% to 0.07-0.08% [1][2]
- The prevalence of violent and graphic content also rose, from 0.06-0.07% to about 0.09%, attributed to a spike in the sharing of violating content in March and to ongoing efforts to reduce enforcement mistakes [2][6]

Content Moderation Changes
- In January, Meta overhauled its content moderation policies, allowing more political content across its platforms and eliminating restrictions on topics like immigration and gender identity [3][4]
- The definition of "hate speech" was narrowed to focus on direct attacks and dehumanizing speech, moving away from a broader range of flagged aggressions [4]
- The company replaced third-party fact-checkers with crowd-sourced community notes, similar to its competitor X [4]

Impact of New Policies
- Meta reported a significant reduction in error rates under the new policies, cutting content moderation mistakes in half compared to the previous system [5]
- The Q1 2025 report reflects these changes, showing a decrease in the amount of content actioned and a reduction in preemptive actions taken by the company [6]
- The company aims to calibrate enforcement so as to avoid both under-enforcement of violating content and excessive mistakes [6]

Community Notes and Challenges
- Community notes have been described as a democratization of fact-checking, but there are concerns about risks of bias and misinformation [8]
- The prevalence of online bullying and harassment violations was 0.08-0.09% in Q1 2024, compared to around 0.07% in Q1 2023, indicating that violation rates have fluctuated over time [8]
Meta's oversight board rips Zuckerberg's move to end fact-checking: 'Potential adverse effects'
New York Post · 2025-04-23 19:47
Core Viewpoint
- Meta's independent oversight board criticized the company for hastily removing its fact-checking policy, urging an assessment of potential adverse effects [1][6][10]

Group 1: Oversight Board's Rulings
- The board upheld some of Meta's decisions to keep controversial content while ordering the removal of posts containing racist slurs [2][7]
- The board issued 17 recommendations for improving enforcement of bullying and harassment policies and clarifying banned ideologies [9][10]

Group 2: Changes in Content Moderation
- Meta replaced its fact-checking policies with a "Community Notes" model, similar to the approach used by Elon Musk's platform X [6][7]
- The rule change permits derogatory references to marginalized groups, shifting enforcement focus to detecting terrorism, child exploitation, and fraud [7][10]

Group 3: Relationship with Political Figures
- Mark Zuckerberg sought to align with the incoming Trump administration, dining with Trump and donating $1 million to his inaugural fund [3][12]
- Zuckerberg's actions reflect a strategy of gaining favor with political leadership, which has influenced Meta's content moderation policies [2][3]

Group 4: Financial Commitment to Oversight Board
- Meta has committed to funding the oversight board through 2027, allocating at least $35 million annually over the next three years [12][13]