Teen Online Safety
Indian media: Will banning social media keep teenagers safe?
Huan Qiu Shi Bao· 2025-08-07 22:57
Core Viewpoint
- The recent Australian proposal to ban minors from using YouTube and other social media platforms has sparked intense debate, highlighting the challenges of ensuring online safety for youth in a digital age [1][2].

Group 1: Regulatory Changes
- Australia has revoked the exemption previously granted to YouTube, mandating compliance with new online safety regulations aimed at protecting minors [1].
- The proposed "Social Media Minimum Age Law" will prohibit individuals under 16 from using platforms like YouTube, Facebook, and X [1].

Group 2: Effectiveness of Age Restrictions
- Research indicates that strict age restrictions do not effectively prevent youth from encountering online dangers, as evidenced by Norway's experience, where 72% of 11-year-olds continued to use social media despite a minimum age limit of 13 [1].
- The UK's Online Safety Act, intended to limit minors' access to social networks, has led to absurd situations in which youth use virtual avatars to bypass facial recognition checks [1].

Group 3: YouTube's Influence and Risks
- YouTube's viewing time surpasses that of traditional media giants like Disney and Netflix, showcasing its appeal but also revealing the potential risks of its open platform [2].
- A study from Dartmouth College found that while YouTube's algorithm rarely recommends extremist content to users who do not seek it out, such content still exists on the platform [2].

Group 4: Call for Action
- Policymakers are urged to push social media platforms to address inherent risks rather than simply imposing age restrictions, advocating for increased transparency in algorithms and targeted solutions from stakeholders [2].
Meta updates safety features for teens. More than 600,000 accounts linked to predatory behavior
CNBC· 2025-07-23 11:00
Group 1
- Meta introduced new safety features for teen users on Facebook and Instagram, including enhanced direct messaging protections to prevent exploitative content [1].
- Teens will receive more information about their chat partners, such as account creation dates and safety tips, to help them identify potential scammers [1].
- The company reported blocking accounts 1 million times and receiving another 1 million user reports after issuing a Safety Notice in June [2].

Group 2
- Meta removed nearly 135,000 Instagram accounts earlier this year that were found to be sexualizing children, including accounts that left sexualized comments or requested sexual images [3].
- The takedown also covered 500,000 Instagram and Facebook accounts linked to the original profiles involved in the exploitation [3].
- This initiative is part of a broader effort by Meta to protect teens and children on its platforms amid increasing scrutiny from policymakers [2].