Content moderation
X @Elon Musk
Elon Musk· 2025-07-26 16:14
Regulatory Concerns
- The House Judiciary Committee's report raises concerns about the EU's misuse of the Digital Services Act (DSA) as a censorship tool [1]
- The DSA pressures tech companies to alter global content moderation policies, potentially exporting EU standards beyond the EU [1]
- The DSA's application raises questions regarding the classification of political statements as "illegal hate speech" [1]
- The DSA targets humor, satire, and personal opinions on immigration and environmental issues [1]
- Concerns exist regarding the one-sided enforcement of the DSA, involving third parties with potential conflicts of interest and political biases [1]

Freedom of Expression
- The report suggests the DSA stifles discourse, democratic debate, and the exchange of diverse ideas globally [1]
- The industry is committed to safeguarding freedom of expression and resisting regulatory overreach that imposes censorship [1]
Meta's Community Notes will use open-source technology from Elon Musk's X
CNBC· 2025-03-13 12:05
Meta's upcoming Community Notes feature for monitoring misinformation through crowdsourcing will use some technology developed by Elon Musk's X for its similar service. On Thursday, Meta revealed more details of its new content moderation tool in a blog post and said it incorporates the same open-source algorithm that powers X's Community Notes. Meta said that over time it plans to modify the algorithm to better serve its Facebook, Instagram and Threads apps. "As X's algorithm and program information is open ...
House panel subpoenas Google parent Alphabet over content moderation during Biden years
New York Post· 2025-03-06 19:37
Core Viewpoint
- The House Judiciary Committee has subpoenaed Alphabet to obtain its communications with the Biden administration regarding content moderation policies, focusing on conservative topics and potential censorship [1][2][4].

Group 1: Subpoena Details
- The subpoena specifically requests communications about content limits or bans related to President Trump, Elon Musk, COVID-19, and other conservative discussions [2].
- Chairman Jim Jordan has expressed concerns that Alphabet has not publicly disavowed the Biden administration's alleged attempts to censor speech [4][6].

Group 2: Context and Reactions
- The Trump administration and Republican lawmakers have criticized Big Tech for policies perceived to suppress conservative viewpoints online [3][6].
- Meta Platforms previously indicated that the Biden administration pressured it to censor content, leading to a reduction in its content moderation practices [4].
Meta fixes error that exposed Instagram users to graphic and violent content
TechCrunch· 2025-02-27 16:27
Group 1
- Meta has resolved an error that led some users to encounter inappropriate graphic and violent videos in their Instagram Reels feed despite having "Sensitive Content Control" enabled [1]
- The company issued an apology for the mistake, acknowledging that the content shown violated its policy against graphic violence and disturbing imagery [1]
- Meta's content policy explicitly prohibits videos depicting dismemberment, visible innards, charred bodies, and sadistic remarks toward suffering humans and animals [1]

Group 2
- The error occurred after Meta announced it would relax its content moderation policies, a move perceived by some as a strategic effort to align with Trump's potential return to the presidency [2]
Meta apologizes after Instagram users are flooded with violent videos
New York Post· 2025-02-27 14:04
Core Viewpoint
- Meta issued an apology for an error in Instagram's recommendation algorithm that led to users being shown disturbing and violent videos, including graphic depictions of fatal incidents, affecting a wide range of users, including minors [1][4][5].

Group 1: Incident Details
- The algorithm error resulted in users receiving content from accounts they did not follow, such as "BlackPeopleBeingHurt" and "ShockingTragedies," with some videos receiving millions more views than typical posts from the same accounts [3][5].
- Despite the apology, the company did not disclose the scale of the issue, and reports indicated that disturbing content continued to appear even after the problem was claimed to be resolved [5][12].

Group 2: Content Moderation Policies
- The incident occurred as Meta was adjusting its content moderation policies, particularly in relation to automated detection of objectionable material [6][9].
- Meta announced a shift in its moderation strategy to focus on "illegal and high-severity violations" while relying on user reports for less serious violations, which may have contributed to the algorithmic error [9][10].
- The company acknowledged that its systems had been overly aggressive in demoting posts and was in the process of eliminating most of those demotions [10][11].

Group 3: Company Context
- Meta's content moderation changes are part of a broader strategy to allow freer expression, which has been interpreted by some as an effort to improve relations with political figures [14][15].
- The company has faced significant staffing reductions, cutting approximately 21,000 jobs, nearly a quarter of its workforce, including roles on civic integrity and safety teams [15].