Platform Content Governance
Liaowang (Outlook) | Platforms Must Keep Public Information Dissemination Channels Open
Xinhua News Agency · 2026-01-05 08:40
Group 1
- The core viewpoint emphasizes the risk of public information being improperly intercepted by algorithms and community rules on social media platforms, highlighting the need for better content management systems [1][2]
- Social media platforms are recognized not only as commercial products but also as public information infrastructure, necessitating a deeper understanding of their social functions [1]
- There is a call for platforms to establish dedicated identification and protection mechanisms for authoritative sources such as mainstream media, ensuring clear standards for content review related to public information [1]

Group 2
- The article stresses that public information should not be "silenced" by algorithms, and that authoritative voices must not be obscured on platforms, as this affects public awareness and social consensus [2]
- Platforms are urged to actively calibrate the value orientation of their content management to facilitate the smooth dissemination of authoritative information that serves the public interest [2]
Village Party Secretary Mimics Lei Jun to Sell Millet: Why Is the Video's Takedown Controversial?
Nanfang Dushi Bao (Southern Metropolis Daily) · 2025-12-14 00:30
Core Viewpoint
- The incident in which a village Party secretary from Tengjia Town, Rongcheng City, Shandong Province, sold millet online and was accused by Xiaomi of "defamation" highlights the tension between individual creativity and corporate rights in the digital space [2][3]

Group 1: Incident Overview
- The village secretary promoted local millet in a style mimicking Xiaomi CEO Lei Jun, which led Xiaomi to file a complaint alleging "malicious imitation" and defamation [2]
- After the complaint, the village secretary released an apology video expressing frustration at being unable to use the word "Xiaomi" (which also means "millet" in Chinese), suggesting a perceived overreach by the company [2]

Group 2: Legal and Platform Implications
- The key question in determining whether the village secretary's actions constituted defamation is whether there was any insulting or derogatory behavior that caused tangible harm to Xiaomi [3]
- Many similar disputes never reach court and are instead resolved through platform mediation, which may not adequately clarify the standards applied to such complaints [3][4]

Group 3: Need for Clear Standards
- Xiaomi's complaints about imitation videos are understandable, but platforms urgently need clear, fair, and transparent standards for handling infringement disputes in order to maintain credibility [4]
- The current practice of taking content down without thorough investigation risks alienating both the companies and the individuals involved in such disputes [4]
Douyin Pilots Community Legal Industry Convention
Zhengquan Shibao (Securities Times) · 2025-11-24 10:57
Group 1
- Douyin officially announced the trial implementation of the "Douyin Community Legal Industry Convention" to raise the quality of legal content and regulate the dissemination of legal-related content [1][2]
- The convention covers two main areas, account qualification standards and content management standards, and is aimed at improving platform governance rules and the measures for handling violations [1][2]

Group 2
- The convention encourages legal content creators to complete the platform's legal professional qualification certification before publishing professional legal service content [2]
- It prohibits accounts without platform qualification certification from publishing legal-service-related content or implying that they have a legal practice background [2]
- It also bans impersonation of legal experts, practicing lawyers, and judicial personnel, as well as the use of forged or altered legal credentials to register or upgrade accounts [2]
Douyin Pilots Community Legal Industry Convention
Zhengquan Shibao (Securities Times) · 2025-11-24 10:36
Core Viewpoint
- Douyin announced the trial implementation of the "Douyin Community Legal Industry Convention" to improve the quality of legal content on the platform and regulate the dissemination of legal information [1][2]

Group 1: Account Qualification Standards
- The platform encourages legal creators to complete legal professional qualification certification before publishing legal service content [4]
- It prohibits accounts without certification from publishing legal service content or implying that they hold legal qualifications [4][2]
- It bans impersonation of legal professionals and the use of forged legal documents to register or upgrade accounts [4][2]

Group 2: Content Management Standards
- The platform promotes the publication of objective, neutral legal knowledge to provide users with high-quality legal content [5]
- It prohibits the dissemination of false or misleading legal information, including distorted laws and fabricated policies [6][8]
- It forbids providing illegal guidance or services, such as helping users evade legal obligations or promoting illegal collection practices [8][9]

Group 3: Prohibited Legal Services
- The platform bans misleading users with free consultations that funnel them into paid legal services [9]
- It prohibits the use of fabricated cases or exaggerated claims to induce users to sign contracts [9]
- It restricts the promotion of unqualified legal products and the manipulation of accounts to mislead public judgment [10]

Group 4: Penalties for Violations
- Violations of the governance norms may result in warnings, content restrictions, account suspensions, or permanent bans, depending on the severity and frequency of the violations [10]
Breaking the Deadlock in Platform Content Governance
Jingji Guancha Bao (Economic Observer) · 2025-11-17 13:47
Core Viewpoint
- The resolution of content governance lies not only in responsibility allocation and technical optimization but also in shaping a healthy public opinion ecosystem; platforms should actively guide the production and dissemination of quality content rather than merely acting as "post-deletion machines" [3][26]

Group 1: Current Governance Challenges
- The central internet authority has been conducting "Clear and Bright" actions targeting misinformation from self-media, malicious marketing in short videos, and the misuse of AI technology [4]
- The boundaries of platform responsibility have become increasingly prominent: low violation costs for individual users and the gap between societal expectations and governance capabilities complicate content review efforts [5][7]
- Responsibility for content review is difficult to ascertain, as platforms face significant challenges in managing vast amounts of user-generated content, which lacks the clear contractual obligations seen in e-commerce [8][9][10]

Group 2: Real-World Dilemmas
- The sheer volume of content generated daily makes it nearly impossible for platforms to pre-screen all potentially harmful information, leaving a management gap [13]
- The low cost of individual violations encourages risky behavior among users, since the consequences they face are minimal compared with the significant penalties imposed on merchants in e-commerce [14][15]
- There is a mismatch between the high expectations society places on platforms and their actual capabilities, producing a "high responsibility, weak means" paradox [16][18]

Group 3: Path to Resolution
- A shift from "single-point accountability" to "layered responsibility" is needed, distributing accountability among individuals, platforms, and society [22]
- Raising the cost of individual violations and enhancing traceability through improved user identification and behavior monitoring can help mitigate risks [23]
- Platforms should adopt a proactive approach to governance, focusing on "prevention" rather than "post-incident response" to reduce the spread of harmful content [25]
- The ultimate goal of governance should be to cultivate a healthy public opinion ecosystem in which quality content is promoted and becomes mainstream, rather than merely to eliminate violations [26]
Douyin Tightens Governance of Medical Aesthetics Content; 106,000 Livestream Rooms Penalized for Illegal Promotion
Xinjing Bao (The Beijing News) · 2025-11-17 10:27
Core Insights
- Douyin's local life services unit has launched a special action to combat the illegal promotion of medical aesthetics services in livestreams, with significant enforcement outcomes [1]

Group 1: Regulatory Actions
- More than 23,000 violators have had their livestream group-buying permissions suspended for over 30 days, and 605 individuals have been banned for 180 days over severe violations [1]
- More than 106,000 livestream rooms have been interrupted for illegal promotions [1]

Group 2: Violations and Tactics
- Some influencers have used disguised terminology, misleading scripts, comment-section steering, and background boards to bypass platform rules, promoting banned or restricted products and services such as "UK enhancement," "water light needle," "photonic skin rejuvenation," "eyebrow washing," and "laser pigment removal" [1]
- These actions violate relevant laws and regulations, seriously breach platform rules, and harm the consumer experience [1]

Group 3: Risk Management
- In the first half of the year, the platform established a risk identification system and carried out special rectification actions, addressing violations by 1,517 medical aesthetics businesses [1]
Breaking the Deadlock in Platform Content Governance
Jingji Guancha Bao (Economic Observer) · 2025-11-14 16:28
Core Viewpoint
- The article discusses the ongoing "Clear and Bright" campaign by the Central Cyberspace Administration of China, which targets misinformation, malicious marketing in short videos, and the misuse of AI technology, and highlights the challenges of content regulation on digital platforms [2][3]

Group 1: Responsibility and Governance
- The article emphasizes the need to clarify the attribution of responsibility in digital spaces, particularly in content review, where different platforms bear varying responsibilities [4][5]
- Major internet platforms have faced legal scrutiny for allowing false information and inflammatory content to proliferate, indicating the need for stricter content governance [4][6]
- E-commerce platforms have clearer responsibility structures than content platforms, which face challenges stemming from the decentralized nature of user-generated content [6][7]

Group 2: Challenges in Content Regulation
- Content platforms struggle with the sheer volume and diversity of information, making it nearly impossible to pre-screen all content effectively [9][10]
- The low cost of individual user violations undermines accountability, as banned users can easily create new accounts and continue posting harmful content [10][11]
- There is a significant asymmetry between the responsibilities placed on platforms and the powers they have to enforce compliance, so platforms are often blamed for user misconduct [11][12]

Group 3: The Dilemma of Content Review
- Platforms face a dilemma in content review: overly strict measures may alienate users, while lenient policies allow harmful information to spread [13][14]
- Public expectations that platforms swiftly identify and remove harmful content often exceed their technical capabilities, creating a gap between societal demands and operational realities [14][15]

Group 4: Pathways to Resolution
- The article suggests shifting from "single-point accountability" to a "layered responsibility" approach that distributes accountability among individuals, platforms, and society [17][18]
- Raising the cost of individual violations and enhancing the traceability of user actions are proposed as ways to improve accountability [18][19]
- The need for algorithmic transparency and for multi-objective algorithms that balance user engagement with social responsibility is emphasized [20]
- A proactive approach to content regulation, focused on prevention rather than reaction, is recommended to mitigate the spread of harmful content [21][22]
- Fostering a healthy discourse ecosystem in which platforms actively promote quality content, rather than merely reacting to violations, is deemed essential for long-term governance success [22]
Douyin E-commerce: 50,000 Videos on Rule-Violating "Operational Tips" Removed in the Past Six Months
Sohu Caijing (Sohu Finance) · 2025-10-29 09:50
Core Insights
- Douyin E-commerce has identified misleading content about "video shooting techniques" and "store management strategies" that could misguide creators and merchants, skewing their operational direction and wasting their time [1][3]
- The platform has removed more than 50,000 pieces of content involving improper operational techniques in the past six months and plans to enhance monitoring and governance of such content [3]

Group 1
- Douyin E-commerce found that some creators and merchants are sharing content that is difficult to verify and may not be effective [1]
- Misleading content often exaggerates claims and fabricates cases, which can lead creators and merchants into misguided operational strategies [1]
- The platform aims to maintain a healthy creative environment and protect the legitimate rights of creators and merchants [3]

Group 2
- Over the past six months, Douyin E-commerce has disposed of more than 50,000 pieces of content involving improper operational techniques [3]
- The platform will continue to strengthen monitoring and governance of improper operational techniques and tools [3]
- The focus is on ensuring that the rights of creators and merchants are not infringed [3]
Douban Issues an Announcement
Jingji Guancha Wang (Economic Observer Online) · 2025-08-07 15:05
Core Points
- Douban announced on August 7, 2025 that it had addressed fake accounts and AI-generated spam content on its platform [1]
- The platform identified a significant number of accounts engaged in "account farming," which had posted a large volume of spam across various film and book entries [1]
- A total of 2,272 accounts were penalized for violations, leading to the removal of 348,674 short reviews and 5,701 long reviews deemed spam [1]

Company Actions
- Douban emphasized its commitment to preserving the authenticity of ratings and reviews by combining technology with manual oversight [1]
- The company encouraged users to report misuse of platform features or spam content to strengthen community integrity [1]
- Douban reiterated that an account's credibility in film ratings is based on real user data, that spam content will be identified and removed by algorithms, and that violators will face penalties [1]
[Douyin Executive Responds to Crackdown on Malicious Marketing Accounts] April 28: Douyin Group Vice President Li Liang reposted a post from the official Douyin Blackboard (抖音黑板报) account, writing: "As I have said before, marketing accounts that stoke emotions and manufacture false information have become a source of pollution for the platform's content ecosystem and algorithmic recommendations. Douyin's new rules specify ten categories of marketing-account violations to be targeted. The platform has also introduced an account health score mechanism: based on an account's health score, it will apply tiered penalties such as reduced content recommendation and restricted monetization rights, and accounts can also earn points back by passing a learning exam. These governance rules are currently in trial operation, and feedback and suggestions are welcome."
News flash · 2025-04-28 08:34
Core Viewpoint
- Douyin is actively addressing malicious marketing accounts that distort information and disrupt the platform's content ecosystem and algorithm recommendations [1]

Group 1: Regulatory Measures
- Douyin has established new regulations targeting ten types of violations by marketing accounts [1]
- An account health score mechanism has been introduced; an account's health score will influence its content recommendation and monetization rights [1]
- The platform is implementing a tiered penalty system that includes reduced content recommendations and restrictions on monetization rights [1]

Group 2: Community Engagement
- Douyin encourages user feedback and suggestions during the trial phase of these governance regulations [1]
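The score-based tiered penalty described above can be pictured as a simple threshold policy. The sketch below is purely illustrative: the thresholds, tier names, and exam-bonus value are all invented, since none of the articles disclose Douyin's actual scoring rules.

```python
# Hypothetical sketch of a tiered, health-score-based penalty policy.
# All thresholds, tier names, and the exam bonus are assumptions for
# illustration, not Douyin's real parameters.

def penalty_tier(health_score: int) -> str:
    """Map an account health score (0-100) to an illustrative penalty tier."""
    if health_score >= 80:
        return "normal"                # no restrictions
    if health_score >= 60:
        return "reduced_reach"         # fewer content recommendations
    if health_score >= 40:
        return "monetization_limited"  # monetization rights restricted
    return "suspended"                 # severe or repeated violations

def apply_exam_bonus(health_score: int, passed_exam: bool, bonus: int = 10) -> int:
    """Accounts can recover points by passing a platform learning exam."""
    return min(100, health_score + bonus) if passed_exam else health_score
```

For example, an account at score 55 would sit in the "monetization_limited" tier; passing the learning exam would lift it to 65 and restore it to "reduced_reach", matching the tiered escalate/recover behavior the announcement describes.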