Online Information Governance
Inducing minors to tip and harvesting traffic from "internet-celebrity children"? Eight departments issue new regulations for governance
Nan Fang Du Shi Bao · 2026-01-23 17:06
Core Viewpoint
- The new regulations aim to enhance governance over online content that may negatively impact the physical and mental health of minors, categorizing such content into four specific types and requiring platforms to implement preventive measures starting March 1, 2026 [1][4].

Group 1: Types of Content
- The first category includes information that may induce minors to imitate or engage in harmful behaviors, such as content with sexual implications, online violence, and irrational consumption behaviors like blind fandom and tipping [1][2].
- The second category addresses content that could negatively influence minors' values, promoting hedonism, materialism, and pessimism, as well as distorted aesthetics and harmful educational philosophies [2].
- The third category pertains to the inappropriate use of minors' images, including content that exploits minors for attention or profit through controversial portrayals [2][3].
- The fourth category involves the improper disclosure and use of minors' personal information without guardian consent [2].

Group 2: Implementation Measures
- The regulations require online platforms and content creators to take preventive measures against harmful content, including managing how such content is presented on their sites [2][3].
- There is a mandate for clear labeling of content that may affect minors' health, with organizations required to provide prominent warnings before displaying such information [3][4].
- The regulations also emphasize the need for robust technical safeguards, ensuring that algorithms and AI services do not promote harmful content to minors [3][5].

Group 3: Regulatory Background
- The introduction of these regulations is a response to the "Minor Protection Online Regulations," aiming to clarify the types of harmful content and establish specific standards for governance [4][5].
- Experts believe that the detailed categorization of harmful content will provide clearer guidelines for content review and governance, addressing concerns about "internet celebrity children" and filling existing regulatory gaps [4][5].
Douyin and Xiaohongshu summoned for regulatory talks! When will online disorder in the real estate sector end?
Jing Ji Guan Cha Wang · 2025-12-18 07:37
Core Viewpoint
- The regulatory authorities in Beijing are intensifying efforts to address the spread of false information and market panic regarding the real estate sector, targeting several internet platforms for their role in disseminating misleading content [2][4].

Group 1: Regulatory Actions
- A joint meeting was held on December 5, involving the Beijing Municipal Housing and Urban-Rural Development Committee, the Municipal Cyberspace Administration, and the Public Security Bureau, to address issues related to platforms like Douyin, Xiaohongshu, Beike, 58.com, Xianyu, Lianjia, and Wo Ai Wo Jia [2][3].
- The meeting highlighted that some self-media accounts were spreading negative narratives about the Beijing real estate market, which disrupts market order and harms consumer rights [4][6].

Group 2: Platform Responsibilities
- Certain platforms have been criticized for lax oversight of illegal and misleading information, relying heavily on algorithmic recommendations that allow the spread of low-quality content [4][5].
- Platforms like Xiaohongshu have been found to host fictitious listings, such as a "two-bedroom apartment in Chaoyang for 3,000 yuan," which misled users and violated consumer protection laws [4][5].

Group 3: Data Security and User Protection
- Some platforms have been accused of improperly collecting and using user data, raising concerns about data security and social responsibility [5][6].
- The regulatory approach combines "interviews, penalties, and credit ratings" to compel platforms to reassess their content ecosystems and ensure compliance with legal obligations [5][6].

Group 4: Long-term Mechanisms
- The regulatory initiative aims to establish three long-term mechanisms: mandatory verification codes for listings, AI-driven content review, and a credit rating system for accounts based on their history of violations [6][7].
- Platforms like Beike are implementing systems to verify the authenticity of listings, achieving a 98% identification rate for false listings [7][8].
- A cross-platform blacklist mechanism is being developed to jointly penalize accounts that post misleading content across multiple platforms [8].