Content Moderation
X @Bloomberg
Bloomberg· 2025-11-12 17:08
Regulatory Scrutiny
- Ireland's media regulator has opened an investigation into X (formerly Twitter) [1]
- The investigation focuses on X's alleged failure to remove user-reported illegal content [1]
Motion Picture Association demands Meta drop PG-13 label from Instagram teen filters
Reuters· 2025-11-05 17:55
Core Viewpoint
- The Motion Picture Association has issued a cease-and-desist letter to Meta, challenging Meta's use of the PG-13 movie rating system to label content filters on its platform [1]

Group 1
- The Motion Picture Association is concerned about the implications of applying a movie rating system to content moderation on social media [1]
- The cease-and-desist letter signals potential legal action if Meta does not comply with the request [1]
- The dispute highlights ongoing tensions between content creators and social media platforms over content regulation and intellectual property rights [1]
X @Bloomberg
Bloomberg· 2025-10-21 16:22
RT Bloomberg Live (@BloombergLive): "Without the safety technology we have in place, OnlyFans can't exist." @OnlyFans CEO @KeilyBlair reveals how they moderate content on the platform at #BloombergTech. ⏯️ https://t.co/nnRX4STmti https://t.co/3Plr03hVDr ...
X @Cointelegraph
Cointelegraph· 2025-10-13 21:00
🚨 INSIGHT: Live streaming makes up 40% of TV time, but one bad chat message or donation can still trigger bans or demonetization. Manual filters cannot keep up with fast-changing slang or languages. AI moderation filters risky content in real time while keeping streams safe and smooth. @Streamiverseio, a Web3 donation platform, uses adaptive AI that hides only offensive words and keeps the rest visible, protecting creators without killing engagement. It understands context, learns new slang, and adapts to each platform ...
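The selective masking the post describes (hiding only the offensive words while the rest of the message stays visible) can be sketched in a few lines. This is a minimal illustration, not Streamiverse's actual system: the `BLOCKLIST`, the `score_token` helper, and the threshold are hypothetical stand-ins for the adaptive, context-aware model the post claims.

```python
import re

# Hypothetical stand-in for a learned toxicity scorer. A production system
# like the one described would use a context-aware model that adapts to
# new slang, not a static blocklist.
BLOCKLIST = {"idiot", "scammer"}  # toy examples

def score_token(token: str, context: list[str]) -> float:
    """Return a toxicity score in [0, 1] for a single token.

    The context argument is where a real classifier would condition on
    surrounding words; here the 'model' is just a set lookup.
    """
    return 1.0 if token.lower() in BLOCKLIST else 0.0

def mask_message(message: str, threshold: float = 0.5) -> str:
    """Hide only the offensive words; keep the rest of the message visible."""
    tokens = re.findall(r"\w+|\W+", message)  # preserve punctuation and spaces
    words = [t for t in tokens if t.strip()]
    masked = []
    for tok in tokens:
        if tok.strip() and score_token(tok, words) >= threshold:
            masked.append("*" * len(tok))  # replace only the risky token
        else:
            masked.append(tok)
    return "".join(masked)

print(mask_message("Great stream, ignore that idiot in chat!"))
# -> Great stream, ignore that ***** in chat!
```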
X @Bloomberg
Bloomberg· 2025-09-25 16:10
Regulatory Compliance
- Meta is facing an EU charge for inadequate policing of illegal content [1]
- The charge risks fines for violating the EU's content moderation rulebook [1]
X @Mike Benz
Mike Benz· 2025-09-03 16:49
Censorship Concerns & Geopolitical Implications
- The report highlights concerns that EU censorship policies could be imposed on US social media platforms, potentially affecting American users [2]
- The Trump administration was negotiating with the EU, with European censorship of US social media platforms a point of contention [1]
- The EU's market size could lead US social media firms to adopt European censorship standards [2]

Allegations of Coordinated Censorship Efforts
- The report alleges a coordinated effort by French President Emmanuel Macron, legislators, and state-affiliated NGOs to censor users on Twitter for legal speech [3]
- These actors allegedly aimed to influence Twitter's worldwide "content moderation" for narrative control [3]
- President Macron personally contacted Twitter's then-CEO, Jack Dorsey [3]
- The report suggests potential illegal activity by various actors involved [3]

Emergence of a Censorship-by-NGO Proxy Strategy
- The report identifies the emergence of a "censorship-by-NGO proxy strategy" [3]
- This strategy is described as being at the heart of the Censorship Industrial Complex [3]
X @Mike Benz
Mike Benz· 2025-07-26 16:24
Regulatory Concerns
- The House Judiciary Committee report raises concerns about the EU's Digital Services Act (DSA) being misused as a censorship tool [1]
- The DSA pressures tech companies to alter global content moderation policies, potentially exporting EU standards beyond the EU [1]
- The report flags the classification of political statements as "illegal hate speech" as a particular concern [1]
- DSA enforcement may target humor, satire, and personal opinions on immigration and environmental issues [1]
- Third parties with potential conflicts of interest and political biases may be involved in DSA enforcement [1]

Freedom of Expression
- The report suggests the DSA could stifle discourse, democratic debate, and the exchange of diverse ideas [1]
- The company expresses a commitment to safeguarding freedom of expression [1]
- The company aims to resist regulatory overreach that imposes censorship on platforms and users [1]
Rise in 'harmful content' since Meta policy rollbacks: survey
TechXplore· 2025-06-17 09:10
Core Viewpoint
- The article reports a significant increase in harmful content across Meta's platforms after the company ended third-party fact-checking and eased moderation policies, raising concerns among users about safety and free expression [1][6]

Summary by Sections

Policy Changes
- Meta discontinued third-party fact-checking in the US in January, shifting the responsibility to users through a model called "Community Notes" [2][7]
- The company also relaxed restrictions on topics related to gender and sexual identity, allowing users to make accusations based on these characteristics [4]

User Experience
- A survey of approximately 7,000 active users found that one in six respondents had experienced gender-based or sexual violence on Meta platforms, and 66% had witnessed harmful content [5]
- 92% of users expressed concern about the rise of harmful content, and 77% felt less safe expressing themselves freely [6]

Company Response
- In its quarterly report, Meta claimed the changes had minimal impact, stating that enforcement mistakes were halved and the prevalence of violating content remained largely unchanged [8]
- The groups behind the survey countered that Meta's report did not accurately reflect user experiences of hate and harassment [8]

Advocacy and Recommendations
- Advocacy groups urged Meta to hire an independent third party to analyze the impact of the policy changes on harmful content and to reinstate previous content moderation standards [10]
- Concerns were raised about the potential global implications if Meta extends the policy changes beyond the US, affecting its fact-checking programs in over 100 countries [11]
Facebook's content moderation 'happens too late,' says research
TechXplore· 2025-05-30 15:54
Core Insights
- New research from Northeastern University finds that Facebook's content moderation is often ineffective because it happens too late: removed posts have already reached roughly 75% of their predicted audience before takedown [1][2][10]

Group 1: Content Moderation Effectiveness
- The study finds that content moderation on Facebook does not significantly affect user experience because of its delayed timing [2][10]
- The researchers propose a new metric, "prevented dissemination," which estimates moderation's potential impact by predicting a post's future dissemination (see the sketch after this summary) [3][4]
- The analysis covered over 2.6 million Facebook posts and found that only a small percentage were removed: 0.7% of English posts, 0.2% of Ukrainian posts, and 0.5% of Russian posts [8]

Group 2: User Engagement Patterns
- The top 1% of most-engaged content accounted for 58% of user engagements in English, 45% in Ukrainian, and 57% in Russian [6][7]
- Engagement accrues quickly: 83.5% of total engagement occurs within the first 48 hours of a post going live [7]
- Removing posts prevented only 24% to 30% of their predicted engagement [9]

Group 3: Algorithm and Moderation Mismatch
- The research highlights a mismatch between the speed of Facebook's content moderation and its recommendation algorithm, suggesting that moderation must operate at a pace comparable to content recommendation to be effective [10][11]
- Most removed posts were identified as spam, clickbait, or fraudulent content, indicating where moderation effort is focused [8]
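To make "prevented dissemination" concrete, here is a minimal sketch of the arithmetic, assuming the metric compares engagement at removal time against a post's predicted lifetime engagement. The paper's actual prediction model is not described in the summary, so this formulation is an assumption, chosen to be consistent with the reported figures.

```python
def prevented_dissemination(engagement_at_removal: float,
                            predicted_total_engagement: float) -> float:
    """Fraction of a post's predicted engagement forestalled by removal.

    Example: a post predicted to gather 1,000 engagements over its
    lifetime that had 750 when taken down (i.e., it had already reached
    ~75% of its predicted audience) yields 0.25, in line with the
    24-30% range the study reports.
    """
    if predicted_total_engagement <= 0:
        return 0.0
    remaining = predicted_total_engagement - engagement_at_removal
    return max(0.0, remaining / predicted_total_engagement)

print(prevented_dissemination(750, 1000))  # -> 0.25
```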
Meta says online harassment is up and false flags are down following a change in content moderation policies
Business Insider· 2025-05-30 00:51
Core Insights
- Meta reported a slight increase in online bullying and harassment on Facebook in Q1 2025 compared with Q4 2024, with prevalence rising from 0.06-0.07% to 0.07-0.08% (see the sketch after this summary for how prevalence is computed) [1][2]
- The prevalence of violent and graphic content also rose, from 0.06%-0.07% to about 0.09%, attributed to a spike in sharing of violating content in March and ongoing efforts to reduce enforcement mistakes [2][6]

Content Moderation Changes
- In January, Meta overhauled its content moderation policies, allowing more political content across its platforms and removing restrictions on topics such as immigration and gender identity [3][4]
- The definition of "hate speech" was narrowed to focus on direct attacks and dehumanizing speech, rather than a broader range of flagged aggressions [4]
- The company replaced third-party fact-checkers with crowd-sourced community notes, similar to its competitor X [4]

Impact of New Policies
- Meta reported a significant reduction in error rates under the new policies, cutting content moderation mistakes in half compared with the previous system [5]
- The Q1 2025 report reflects these changes, showing a decrease in the amount of content actioned and fewer preemptive actions taken by the company [6]
- The company aims to calibrate enforcement to avoid both under-enforcement of violating content and excessive mistakes [6]

Community Notes and Challenges
- Community notes have been described as a democratization of fact-checking, but there are concerns about risks of bias and misinformation [8]
- The prevalence of online bullying and harassment violations was 0.08% to 0.09% in Q1 2024, compared with around 0.07% in Q1 2023, indicating fluctuation in violation rates [8]
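For readers unfamiliar with the prevalence figures cited above: Meta's enforcement reports estimate prevalence as, roughly, the share of content views that were views of violating content. A minimal sketch of that arithmetic, with made-up view counts:

```python
def prevalence_pct(violating_views: int, total_views: int) -> float:
    """Estimated share of content views that were of violating content,
    expressed as a percentage (reported as e.g. '0.07%-0.08%')."""
    return 100.0 * violating_views / total_views

# Made-up illustrative counts: 0.07% prevalence means roughly 7 views of
# bullying or harassment content per 10,000 content views.
print(prevalence_pct(7, 10_000))  # -> 0.07
```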