Australia Updates List of Social Media Platforms Banned for Users Under 16
Zhong Guo Xin Wen Wang· 2025-11-21 10:48
Group 1
- The Australian government has added the live-streaming platform Twitch to the list of social media platforms banned for users under 16, effective December 10 [1][2]
- The eSafety Commissioner stated that Twitch is classified as an "age-restricted social media platform" because its primary purpose is to encourage user interaction through live streaming and social engagement [1]
- A Twitch spokesperson confirmed that accounts for users under 16 will be prohibited starting January 9, 2026 [1]

Group 2
- The eSafety Commissioner indicated that the list of social media platforms banned for users under 16 may change as technology evolves [2]
- The Social Media Minimum Age legislation, passed by the Australian Parliament in November 2024, is the first of its kind globally and officially takes effect on December 10, 2025 [2]
- Other platforms, including Facebook, X, and YouTube, are also covered by the ban; YouTube had initially received an exemption [2]
Complying with New Government Rules, Meta to Remove Australian Minors' Social Media Accounts
Yang Shi Xin Wen· 2025-11-20 10:27
Core Points
- Meta announced it will remove social media accounts of users under 16 in Australia starting December 4 to comply with the Australian government's comprehensive ban on social media for minors [2]
- The Australian government requires social media platforms, including Facebook, to complete the removal of these accounts by December 10 or face significant fines [2]
Complying with New Government Rules, Meta to Remove Social Media Accounts of Minors in Australia
Xin Hua Wang· 2025-11-20 08:33
Group 1
- Meta, the parent company of Facebook, will remove social media accounts of users under 16 in Australia starting December 4 to comply with the Australian government's comprehensive ban on social media for minors [1][3]
- The Australian government mandates that social media platforms, including Facebook and Instagram, complete the removal of the relevant accounts by December 10 or face significant fines [1][3]
- Approximately 350,000 Instagram users aged 13 to 15 are affected, and around 150,000 Facebook accounts belong to the same age group [3]

Group 2
- Meta has begun notifying affected users that they will soon be unable to use Facebook and that their profiles will no longer be visible to themselves or others [3]
- Users can restore their accounts once they turn 16, and those disputing the removal can provide government-issued documentation to verify their age [3]
- The Australian Parliament passed the "2024 Cybersecurity (Minimum Age for Social Media) Amendment" last November, which prohibits minors under 16 from using most social media platforms; the law takes effect one year later [3]

Group 3
- Australia's strict regulation of minors' social media use has drawn international attention, with New Zealand and the Netherlands considering similar legislation [3]
- Australia's social media ban is regarded as one of the strictest globally, although experts express concern about the practical challenges of age verification and enforcement [3]
Meta to Remove Social Media Accounts of Minors in Australia
Group 1
- Meta will remove social media accounts of users under 16 in Australia starting December 4, 2025, to comply with the Australian government's ban on social media for minors [1][3]
- The Australian government mandates that social media platforms complete the removal of these accounts by December 10, 2025, or face significant fines [1][3]
- Approximately 350,000 users aged 13 to 15 are on photo-sharing platforms, and around 150,000 in the same age group have Facebook accounts [3]

Group 2
- Meta has begun notifying affected users that their accounts will no longer be visible and can be restored once they turn 16 [3]
- The Australian Federal Parliament passed the "2024 Cybersecurity (Minimum Age for Social Media) Amendment" on November 28, 2024, which prohibits minors under 16 from using most social media platforms [3]
- Australia's social media ban is considered one of the strictest globally, prompting similar proposals in other countries such as New Zealand and the Netherlands [3]
[Brief Feature] Complying with New Government Rules, Meta to Remove Social Media Accounts of Minors in Australia
Xin Hua She· 2025-11-20 06:50
Core Viewpoint
- Meta, the parent company of Facebook, will remove social media accounts of users under 16 in Australia starting December 4 to comply with the Australian government's comprehensive ban on social media for minors [1][2]

Group 1: Regulatory Compliance
- The Australian government mandates that social media platforms, including Facebook, complete the removal of accounts for users under 16 by December 10 or face significant fines [1]
- Approximately 350,000 users aged 13 to 15 are on photo-sharing platforms, and around 150,000 in the same age group have Facebook accounts [1]

Group 2: User Notification and Account Recovery
- Meta has begun notifying affected users that they will soon be unable to use Facebook and that their profiles will no longer be visible to themselves or others [1]
- Users can recover their accounts once they turn 16, and they can contest the removal by providing government-issued proof of age [1]

Group 3: Legislative Context
- The Australian Federal Parliament passed the "2024 Cybersecurity (Minimum Age for Social Media) Amendment" on November 28, which prohibits minors under 16 from using most social media platforms, with the law taking effect one year later [1]
- Australia's strict regulation of minors' social media use has garnered international attention, with New Zealand and the Netherlands considering similar legislation [1][2]

Group 4: Expert Concerns
- Some experts are concerned that the law may carry only symbolic significance, given the challenges of age verification and regulatory enforcement [2]
EU Preliminarily Finds Meta in Violation of the Digital Services Act, May Impose Heavy Fines
Yang Shi Xin Wen· 2025-10-24 13:12
Group 1
- The European Commission has preliminarily found that Meta violated the EU's Digital Services Act; if the non-compliance continues, fines of up to 6% of its global revenue are possible [2]
- The Commission criticized the company for not providing researchers with sufficient data access, hindering assessment of the measures that protect users from illegal or harmful content [2]
- Meta is accused of implementing overly cumbersome procedures that leave research data incomplete or unreliable [2]

Group 2
- The Commission also noted that Meta's platforms, including Facebook and Instagram, failed to offer an easy mechanism for reporting illegal content and lacked effective appeal channels after content removal or account suspension [2]
- Meta denies the violations, stating that it has adjusted its reporting options, appeal processes, and data access tools in accordance with EU regulations [2]
- This investigation is part of a broader EU enforcement action under the Digital Services Act that began last year [3]
EU, Australia, and Brazil Explore Stronger Oversight of Minors' Social Media Use: Safe Internet Access to Promote Teenagers' "Digital Health" (International Perspective)
Ren Min Ri Bao· 2025-07-03 00:12
Core Viewpoint
- The increasing use of social media among teenagers presents both opportunities for engagement and significant challenges related to mental health, privacy, and exposure to harmful content, prompting calls for regulatory measures and educational initiatives across various countries [1][2][3]

Group 1: Social Media Usage and Impact
- A World Health Organization study found that the share of teenagers facing problems due to improper social media use rose from 7% in 2018 to 11% in 2022, with an additional 12% at risk of gaming addiction [1]
- In Germany, over 93% of teenagers aged 10 and above use social media, spending an average of 95 minutes daily; 33% cannot imagine life without it [2]
- In Sweden, police warned that criminal gangs are using social media to recruit minors for illegal activities, with some recruits as young as 11 [2]

Group 2: Regulatory Measures in Europe
- Many EU countries are implementing strict age restrictions for social media use: most platforms prohibit registration by children under 13 and require parental consent for minors [3]
- The EU's "algorithm ban" under the Digital Services Act prohibits personalized advertising to minors and automatic playback features, to mitigate addiction risks [3]
- Germany is exploring AI systems that assess user age from profile information and interactions, automatically converting accounts identified as belonging to minors into "teen accounts" with content restrictions [4]

Group 3: Australia's Legislative Actions
- Australia has enacted the 2024 Cybersecurity (Minimum Age for Social Media) Amendment, banning social media use by individuals under 16, with penalties for platforms that fail to comply [6]
- The Australian government is collaborating with industry experts to ensure effective implementation of age verification technologies [6]
- The "Head Up Alliance," formed by concerned parents, supports the new legislation aimed at protecting children's mental health from social media's adverse effects [7]

Group 4: Brazil's Approach to Online Safety
- In Brazil, a significant portion of teenagers openly share personal information on social media, raising privacy concerns [8]
- Brazil's Internet Civil Framework and General Data Protection Law require parental consent for collecting minors' data, and new legislation has been proposed to strengthen online safety measures [8][9]
- Schools in Brazil are incorporating cybersecurity education into their curricula to help teenagers recognize and manage social media risks [9]
US Media: How to Regulate the "US Tech Giants on the Crane"?
Huan Qiu Shi Bao· 2025-06-16 23:06
Core Viewpoint
- The article discusses a potential shift in the U.S. regulatory landscape for large technology companies, highlighting recent antitrust actions that may disrupt their dominance and foster competition in the tech industry [1][2][3]

Group 1: Antitrust Actions
- A federal court in Virginia ruled that Google illegally monopolized two online advertising technology markets, violating antitrust laws [2]
- A district court broke the "Apple Tax" monopoly, prohibiting Apple from charging fees on purchases made outside its app store and from restricting developers who direct users to external purchasing options [2]
- The FTC's antitrust lawsuit against Meta (Facebook's parent company) is ongoing, with potential implications for the separation of services such as Instagram and WhatsApp [2][5]

Group 2: Impact on Competition
- The antitrust measures could revitalize competition in the tech sector, providing opportunities for smaller companies and improving service quality [3][5]
- If Facebook is forced to separate its services, multiple social media platforms with different algorithms could emerge, enhancing user experience [5]
- A successful lawsuit against Amazon could create a more competitive marketplace, allowing consumers to find better-priced products [5]

Group 3: Historical Context and Lessons
- The article references the 1984 breakup of AT&T, which initially fostered innovation but eventually led to a new form of duopoly in the telecommunications industry [6][7]
- The current situation underscores the need for ongoing regulatory oversight to prevent technology companies from abusing their market power [7]
Content Moderation Exists in Name Only, Tool Development Lacks Investment: Meta Criticized as a Breeding Ground for Scams
Xin Lang Cai Jing· 2025-05-19 01:19
Core Viewpoint
- Meta is facing a significant surge in online fraud, with its platforms Facebook and Instagram becoming primary venues for global scam operations, leading to substantial financial losses for users and revealing systemic regulatory failures within the company [1]

Group 1: Fraud Incidents
- Businesses are being used as "endorsement tools" for scams, with real companies' information misappropriated to create fraudulent advertisements [2]
- Edgar Guzman, a business owner, reported that over 4,400 fraudulent ads used his company's address while he had posted only 15 legitimate ads, highlighting the scale of the issue [3]
- Scammers are employing increasingly sophisticated tactics, such as using images of elderly individuals to promote fake giveaways, leading to unauthorized credit card charges [3]

Group 2: Regulatory Failures
- The rise of cryptocurrency, AI technology, and cross-border crime networks has significantly increased the scale and impact of online fraud, with 70% of new active advertisers on Meta promoting scams or illegal products [5]
- Bank data indicate that nearly half of the fraud cases reported through Zelle are linked to Meta platforms, with similar trends observed by regulators in the UK and Australia [5]

Group 3: Internal Response and Criticism
- Internal documents reveal that Meta allows fraudulent advertisers to accumulate multiple violations before banning them, indicating a high threshold for enforcement action [6]
- Meta's Marketplace has become a breeding ground for scams due to its peer-to-peer transaction model, yet the company has not implemented sufficient countermeasures [6]
- Despite claims of increased anti-fraud investment, analysts suggest that combating fraud is not a top priority for Meta, as evidenced by underinvestment in automated fraud detection tools [8]
Content Moderation Exists in Name Only, Tool Development Lacks Investment: Meta Criticized as a Breeding Ground for Scams
Huan Qiu Shi Bao· 2025-05-18 22:45
Core Viewpoint
- Meta is facing a significant surge in online fraud, with its platforms Facebook and Instagram becoming primary venues for global scam operations, revealing systemic failures in content moderation [1]

Group 1: Fraud Incidents
- Numerous businesses, such as "Half Price Wholesale," have reported that scammers are using their information to create fake advertisements, leading to customer complaints and financial losses [2]
- A survey indicated that over 4,400 different ads used the aforementioned company's address, while the actual owner had posted only 15 ads [2]
- Scammers are employing increasingly sophisticated tactics, including using images of elderly individuals to promote fraudulent offers, resulting in unauthorized credit card charges [2]

Group 2: Regulatory Failures
- The rise of cryptocurrency, AI technology, and cross-border crime networks has significantly increased the scale and impact of online fraud [3]
- A 2022 Meta report revealed that 70% of new active advertisers on its platform were promoting scams, illegal goods, or low-quality products [3]
- Data from banks and regulators show a high proportion of fraud cases linked to Meta, with nearly half of the fraud cases reported through Zelle associated with Meta platforms [3]

Group 3: Inadequate Anti-Fraud Measures
- In response to criticism, Meta says it is addressing a "fraud epidemic" and has implemented various measures, including testing facial recognition technology and expanding user warnings [4]
- Despite these claims, Meta's legal defense argues that it bears no legal responsibility for fraud on its platforms, stating it has no obligation to resolve users' fraud issues [4]
- Analysts suggest Meta's resource allocation shows that combating fraud is not a top priority, with more focus on issues such as human trafficking and self-harm content than on fraud prevention [5]