Content Moderation

X @Bloomberg
Bloomberg· 2025-09-25 16:10
Meta is set to face a charge sheet from the EU for failing to adequately police illegal content, risking fines for violating the bloc’s content moderation rulebook https://t.co/vSCpccflOf ...
X @Mike Benz
Mike Benz· 2025-09-03 16:49
Censorship Concerns & Geopolitical Implications
- The report highlights concerns that EU censorship policies could be imposed on US social media platforms, potentially affecting American users [2]
- The Trump administration was negotiating with the EU, with European censorship of US social media platforms being a point of contention [1]
- The EU's size could lead US social media firms to adopt European censorship standards [2]

Allegations of Coordinated Censorship Efforts
- The report alleges a coordinated effort by French President Emmanuel Macron, legislators, and state-affiliated NGOs to censor users on Twitter for legal speech [3]
- These actors allegedly aimed to influence Twitter's worldwide "content moderation" for narrative control [3]
- President Macron personally contacted Twitter's then-CEO, Jack Dorsey [3]
- The report suggests potential illegal activity by various actors involved [3]

Emergence of Censorship-by-NGO Proxy Strategy
- The report identifies the emergence of a "censorship-by-NGO proxy strategy" [3]
- This strategy is described as being at the heart of the Censorship Industrial Complex [3]
X @Mike Benz
Mike Benz· 2025-07-26 16:24
Regulatory Concerns
- The House Judiciary Committee report raises concerns about the EU's Digital Services Act (DSA) being misused as a censorship tool [1]
- The DSA pressures tech companies to alter global content moderation policies, potentially exporting EU standards beyond the EU [1]
- The report highlights the classification of political statements as "illegal hate speech" as a point of concern [1]
- The DSA's enforcement may target humor, satire, and personal opinions on immigration and environmental issues [1]
- Third parties with potential conflicts of interest and political biases may be involved in DSA enforcement [1]

Freedom of Expression
- The report suggests the DSA could stifle discourse, democratic debate, and the exchange of diverse ideas [1]
- The company expresses commitment to safeguarding freedom of expression [1]
- The company aims to resist regulatory overreach that imposes censorship on platforms and users [1]
Rise in 'harmful content' since Meta policy rollbacks: survey
TechXplore· 2025-06-17 09:10
Harmful content including hate speech has surged across Meta's platforms since the company ended third-party fact-checking in the United States and eased moderation policies, a survey showed Monday. The survey of around 7,000 active users on Instagram, Facebook and Threads comes after the Palo Alto company ditched US fact-checkers ...
Facebook's content moderation 'happens too late,' says research
TechXplore· 2025-05-30 15:54
Core Insights
- New research from Northeastern University indicates that Facebook's content moderation is often ineffective as it occurs too late, with posts having already reached 75% of their predicted audience before removal [1][2][10]

Group 1: Content Moderation Effectiveness
- The study reveals that content moderation on Facebook does not significantly impact user experience due to its delayed nature [2][10]
- A new metric called "prevented dissemination" was proposed to measure the potential impact of content moderation by predicting future post dissemination [3][4]
- The research analyzed over 2.6 million Facebook posts, finding that only a small percentage were removed: 0.7% in English, 0.2% in Ukrainian, and 0.5% in Russian [8]

Group 2: User Engagement Patterns
- The top 1% of most-engaged content accounted for 58% of user engagements in English, 45% in Ukrainian, and 57% in Russian [6][7]
- A significant portion of engagement occurs quickly, with 83.5% of total engagement happening within the first 48 hours of a post being live [7]
- The study found that removing posts only prevented 24% to 30% of their predicted engagement [9]

Group 3: Algorithm and Moderation Mismatch
- The research highlights a mismatch between the speed of Facebook's content moderation and its recommendation algorithm, suggesting that moderation needs to occur at a pace similar to content recommendations to be effective [10][11]
- The majority of removed posts were identified as spam, clickbait, or fraudulent content, indicating the focus of content moderation efforts [8]
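The arithmetic behind the "prevented dissemination" finding can be sketched as follows. This is an illustrative reconstruction, not the study's actual metric or code: the function name and the example numbers are hypothetical, chosen only to be consistent with the figures reported above (a post removed after reaching 75% of its predicted audience leaves roughly 25% of its predicted engagement prevented).

```python
# Hypothetical sketch of the "prevented dissemination" idea: removal of a
# post only prevents the share of its predicted engagement that had not yet
# occurred at the time of removal. Numbers and model are illustrative.

def prevented_dissemination(predicted_total: float, reached_at_removal: float) -> float:
    """Fraction of a post's predicted engagement that removal prevents."""
    if predicted_total <= 0:
        raise ValueError("predicted_total must be positive")
    # Engagement already accrued cannot exceed the prediction in this sketch.
    reached = min(reached_at_removal, predicted_total)
    return (predicted_total - reached) / predicted_total

# Example: a post predicted to earn 10,000 engagements is removed after it
# has already accumulated 7,500 of them (75% of its predicted audience).
print(prevented_dissemination(10_000, 7_500))  # 0.25, i.e. 25% prevented
```

Under this framing, the study's observation that most engagement happens within 48 hours explains the low prevented shares: by the time a post is removed, most of its predicted dissemination has already happened.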
Meta says online harassment is up and false flags are down following a change in content moderation policies
Business Insider· 2025-05-30 00:51
Core Insights
- Meta reported a slight increase in online bullying and harassment on Facebook in Q1 2025 compared to Q4 2024, with prevalence rising from 0.06-0.07% to 0.07-0.08% [1][2]
- The prevalence of violent and graphic content also rose, from 0.06%-0.07% to about 0.09%, attributed to a spike in sharing violating content in March and ongoing efforts to reduce enforcement mistakes [2][6]

Content Moderation Changes
- In January, Meta overhauled its content moderation policies, allowing more political content across its platforms and eliminating restrictions on topics like immigration and gender identity [3][4]
- The definition of "hate speech" was revised to focus on direct attacks and dehumanizing speech, moving away from a broader range of flagged aggressions [4]
- The company replaced third-party fact-checkers with crowd-sourced community notes, similar to its competitor X [4]

Impact of New Policies
- Meta observed a significant reduction in error rates under the new policies, cutting content moderation mistakes in half compared to the previous system [5]
- The Q1 2025 report reflects these changes, showing a decrease in the amount of content actioned and a reduction in preemptive actions taken by the company [6]
- The company aims to balance enforcement levels to avoid both under-enforcement of violating content and excessive mistakes [6]

Community Notes and Challenges
- Community notes have been described as a democratization of fact-checking, but there are concerns about potential bias and misinformation [8]
- The prevalence of online bullying and harassment violations was reported at 0.08% to 0.09% in Q1 2024, compared to around 0.07% in Q1 2023, indicating fluctuations in violation rates [8]
Meta's advertisers didn't flinch after it shook up content moderation
Business Insider· 2025-05-01 11:10
Core Insights
- Meta's advertising revenue for the first quarter reached $42 billion, exceeding analysts' expectations and reflecting a 16% year-over-year increase [1]
- The company is shifting its content moderation strategy, replacing third-party fact-checkers with a community notes system and easing rules on political content and sensitive topics [2][6]
- Despite concerns from advertisers regarding user safety, many are expected to continue spending on Meta due to its large audience and effective ad performance [3][6]

Advertising Performance
- Meta's AI-powered ad tools, Advantage Plus, are credited with driving momentum in ad campaigns by automating user targeting and ad creation [4]
- The company anticipates revenue between $42.5 billion and $45.5 billion for the next quarter, surpassing the $44 billion forecast by analysts [6]
- Online commerce companies have emerged as the largest contributors to Meta's ad sales growth, indicating a shift in reliance from blue-chip companies to small and medium-sized businesses [7]

Market Dynamics
- Advertisers are likely to allocate more budgets to established platforms like Facebook and Instagram while reducing spending on smaller social media networks amid economic uncertainty [9]
- The contrasting performance of Snap, whose shares declined after it withheld guidance amid macroeconomic concerns, highlights Meta's relative strength in the advertising market [10]
Meta's oversight board rips Zuckerberg's move to end fact-checking: ‘Potential adverse effects’
New York Post· 2025-04-23 19:47
Core Viewpoint
- Meta's independent oversight board criticized the company for hastily removing its fact-checking policy, urging an assessment of potential adverse effects [1][6][10]

Group 1: Oversight Board's Rulings
- The board upheld some of Meta's decisions to keep controversial content while ordering the removal of posts containing racist slurs [2][7]
- The board issued 17 recommendations for improving enforcement of bullying and harassment policies and clarifying banned ideologies [9][10]

Group 2: Changes in Content Moderation
- Meta replaced its fact-checking policies with a "Community Notes" model, similar to the approach used by Elon Musk's platform X [6][7]
- The rule change allowed derogatory references to marginalized groups, focusing instead on detecting terrorism, child exploitation, and fraud [7][10]

Group 3: Relationship with Political Figures
- Mark Zuckerberg sought to align with the incoming Trump administration, dining with Trump and donating $1 million to his inaugural fund [3][12]
- Zuckerberg's actions reflect a strategy to gain favor with political leadership, which has influenced Meta's content moderation policies [2][3]

Group 4: Financial Commitment to Oversight Board
- Meta has committed to funding the oversight board through 2027, allocating at least $35 million annually over the next three years [12][13]
Meta Oversight Board Urges Company To Assess ‘Human Rights Impact’ Of Hateful Conduct Policy
Forbes· 2025-04-23 18:18
Core Viewpoint
- Meta's oversight board criticized the company's updated hateful conduct policies, particularly a provision allowing users to describe LGBTQ individuals as mentally ill, urging an assessment of the human rights impact on vulnerable groups [1][2][5]

Group 1: Policy Changes
- The oversight board described the updates to the hateful conduct policy as "hastily" made and lacking prior human rights due diligence [2][3]
- The updated policy renamed "hate speech" to "hateful conduct" and erased specific examples of hateful conduct while adding controversial ones, such as allowing allegations of mental illness based on gender or sexual orientation [4]
- The previous prohibition against dehumanizing speech, including offensive stereotypes, was removed, and the company replaced its third-party fact-checking program with a community notes program [4]

Group 2: Reactions and Criticism
- The Human Rights Campaign criticized the policy changes, claiming they would foster misinformation and identity-based harassment, particularly against LGBTQ individuals [5]
- Meta CEO Mark Zuckerberg defended the updates as a means to enhance free speech, arguing that previous fact-checking policies were biased and detrimental to trust [6]