Online Safety
Exclusive: Meta vowed to stop illegal financial ads in Britain. It failed 1,000 times in a week
Reuters· 2026-03-18 06:03
Core Viewpoint - Meta has repeatedly failed to stop illegal ads for high-risk financial products on its platforms in Britain, despite its commitment to block them, as revealed by the Financial Conduct Authority (FCA) [2][3][4].
Group 1: Findings from the FCA Review
- The FCA found that 1,052 illegal ads for currency trading and complex financial instruments were posted on Meta's platforms in one week, with 56% of these ads coming from previously flagged unauthorized advertisers [3][6] (see the screening sketch after this summary).
- The review highlighted that a small number of repeat offenders were responsible for the majority of illegal ads, indicating a lack of effective monitoring by Meta [6][7].
- The FCA's review was limited to high-risk financial products, specifically foreign exchange trading and contracts for difference (CFDs), which are known to pose significant risks to consumers [15][16].
Group 2: Meta's Response and Regulatory Context
- Meta claims to fight fraud aggressively and to act swiftly on reports, but the FCA has not observed a material difference in Meta's approach despite ongoing engagement [5][7][9].
- Britain's Online Safety Act, which allows regulators to fine social media companies for hosting illegal content, took effect in March 2025, but its provisions covering scam ads have been delayed until at least 2027 [10][11].
- Meta made a voluntary commitment in 2022 to allow only FCA-authorized firms to run financial services ads, but the FCA lacks the authority to take action against Meta itself [11][12].
Group 3: Broader Implications and Industry Concerns
- A report by Reset Tech found that 51.1% of ads referencing major British banks were likely scams, estimating that Meta could host approximately 29,068 scam ads annually, exposing consumers at significant scale [22][23].
- Banks such as Barclays and Revolut have expressed concerns about Meta's effectiveness in combating fraud, urging the company to improve its verification systems [25].
- The FCA has been proactive in issuing alerts and taking action against unauthorized advertisers, but many of these advertisers operate outside of Britain [12][13].
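To make the screening gap concrete, the sketch below shows the kind of pre-publication check that Meta's 2022 commitment implies: before an ad in a regulated financial category runs, the advertiser is looked up against a locally cached copy of the FCA's register of authorized firms and its warning list of known unauthorized advertisers. This is a minimal, hypothetical illustration; the `Ad` structure, the `fca_authorised` and `fca_warning_list` sets, and the function names are assumptions, not Meta's or the FCA's actual systems or APIs.

```python
from dataclasses import dataclass

@dataclass
class Ad:
    advertiser: str   # advertiser account name (illustrative)
    category: str     # e.g. "forex", "cfd"

# Hypothetical locally cached lookups; in practice these would be built from
# the FCA's published register and warning list and refreshed regularly.
fca_authorised = {"example authorised firm ltd"}
fca_warning_list = {"fastfx signals", "cfd-profits-now"}

REGULATED_CATEGORIES = {"forex", "cfd"}

def screen_ad(ad: Ad) -> str:
    """Return 'allow', 'block', or 'review' for a submitted financial ad."""
    name = ad.advertiser.strip().lower()
    if ad.category not in REGULATED_CATEGORIES:
        return "allow"    # outside the scope of this check
    if name in fca_warning_list:
        return "block"    # previously flagged unauthorized advertiser
    if name in fca_authorised:
        return "allow"    # FCA-authorized firm
    return "review"       # unknown advertiser: hold for manual review

if __name__ == "__main__":
    print(screen_ad(Ad("FastFX Signals", "forex")))              # block
    print(screen_ad(Ad("Example Authorised Firm Ltd", "cfd")))   # allow
    print(screen_ad(Ad("Unknown Trader Co", "cfd")))             # review
```

The FCA's finding that 56% of the illegal ads came from advertisers it had already flagged suggests that even a repeat-offender check of this minimal kind was not being applied consistently.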
Big tech given warning - and deadline - by UK regulator
Sky News· 2026-03-12 01:40
Core Viewpoint - Tech companies are urged to enhance online protections for children after MPs rejected a proposed blanket ban on social media for under-16s, highlighting the need for stronger age verification and safety measures [1][2][5].
Group 1: Regulatory Actions
- The Information Commissioner's Office (ICO) and Ofcom have demanded that platforms like Facebook, Instagram, Roblox, Snapchat, TikTok, and YouTube provide details of their age verification processes and their measures against online grooming by the end of April [1][2].
- Ofcom has called for an end to product testing on children and requires platforms to address harmful algorithms and how product updates are rolled out to users [2][5].
- The ICO has expressed concern about the enforcement of minimum age policies, noting that 72% of children aged 8 to 12 are accessing age-restricted sites and apps [3].
Group 2: Industry Response
- Tech companies, including YouTube and Meta (Facebook and Instagram), say they have implemented various safety measures, such as AI for age detection and Teen Accounts with built-in protections [9][10].
- Roblox says it is in regular communication with Ofcom and has introduced over 140 safety features in the past year, including mandatory age checks for chat access [10][11].
- The Molly Rose Foundation has supported the regulatory push, emphasizing the need for accountability from tech firms on children's safety online [8].
Group 3: Future Implications
- Ofcom plans to report publicly on the platforms' responses in May and will assess the impact of the Online Safety Act on children's online experiences [5].
- The regulator has indicated it is ready to take enforcement action if the responses from tech firms are unsatisfactory, which could lead to strengthened regulations [6].
UK watchdogs press Meta, TikTok, Snap and YouTube to block children
Reuters· 2026-03-12 00:04
Core Viewpoint - UK regulators are urging major social media platforms to strengthen measures that prevent children from accessing their services, highlighting failures in enforcing age restrictions and children's exposure to harmful content [1]
Group 1: Regulatory Actions
- Ofcom and the Information Commissioner's Office (ICO) are demanding that platforms like Meta, TikTok, Snap, and YouTube implement stricter age checks and safety measures by April 30 [1]
- The ICO has called for the adoption of modern age-assurance tools to prevent children under 13 from accessing inappropriate services [1]
- Ofcom has the authority to impose fines of up to 10% of a company's global revenue, while the ICO can fine up to 4% of a company's annual turnover [1] (a worked example of these ceilings follows this summary)
Group 2: Concerns Raised
- Regulators express growing concern over algorithmic feeds that expose children to harmful or addictive content [1]
- Melanie Dawes, Ofcom's chief executive, emphasized the need for companies to prioritize children's safety in their products [1]
- The ICO recently fined Reddit nearly £14.5 million for failing to implement effective age checks and unlawfully processing children's data [1]
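For a sense of scale, here is a small illustrative calculation of the maximum penalty exposure the summary describes. The revenue and turnover figures are round, assumed numbers, not any company's actual accounts.

```python
def max_fines(global_revenue_gbp: float, annual_turnover_gbp: float) -> dict:
    """Penalty ceilings as summarized above: Ofcom up to 10% of global
    revenue, the ICO up to 4% of annual turnover."""
    return {
        "ofcom_max_gbp": 0.10 * global_revenue_gbp,
        "ico_max_gbp": 0.04 * annual_turnover_gbp,
    }

# Hypothetical round figures for illustration only.
print(max_fines(global_revenue_gbp=100e9, annual_turnover_gbp=100e9))
# -> {'ofcom_max_gbp': 10000000000.0, 'ico_max_gbp': 4000000000.0}
```

On those assumed figures, the theoretical ceilings would be £10 billion for Ofcom and £4 billion for the ICO, far above the £14.5 million Reddit fine cited above.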
X @The Economist
The Economist· 2026-02-16 13:00
Youngsters have a right to share in new technologies. Adults must seek to make their time online as safe and as rewarding as possible https://t.co/8WuJdhix3c ...
Roblox Marks Safer Internet Day With Expanded Safety Partnerships and Commitment to Age-Appropriate Communication
Businesswire· 2026-02-10 17:00
Core Insights - Roblox is enhancing its commitment to user safety by introducing new safety partnerships and age-appropriate communication strategies on Safer Internet Day [1]
Group 1: Safety Initiatives
- Roblox reports strong early adoption of its age-check requirement, with over 45% of its 144 million daily active users completing the check since its introduction [1] (a back-of-the-envelope calculation follows this summary)
- The company has launched a new Youth Guide to Community Standards, co-created with the Roblox Teen Council, to make safety policies more accessible to younger users [1]
- Roblox has partnered with the Mental Health Coalition to develop resources focused on civility, mental health, and well-being in gaming, joining the Thrive program to share signals about harmful content [1]
Group 2: Age-Appropriate Communication
- The age-check requirement is a significant step for Roblox, making it the first large online gaming platform to enforce such a measure for chat access [2]
- The company emphasizes the importance of clear communication regarding community standards to deter inappropriate behavior among users [1]
- Roblox is implementing measures for users to update their age if mistakes occur during the age-check process, including options for parents to correct their child's age [1]
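A quick back-of-the-envelope check of the adoption figure cited above, using only the numbers given in the summary; the exact counts behind Roblox's percentage are not published here, so this shows the implied order of magnitude only.

```python
daily_active_users = 144_000_000   # figure cited in the summary
adoption_rate = 0.45               # "over 45%" completing the age check

users_age_checked = daily_active_users * adoption_rate
print(f"{users_age_checked:,.0f}")  # 64,800,000 -> roughly 65 million users
```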
Raising kids in a world of screens | Stephanie Reina | TEDxEindhoven
TEDx Talks· 2026-01-27 16:28
Parents today are concerned about screen time and keeping kids safe online. Drawing on relatable examples and scientific research, Stephanie uncovers a rarely discussed driver of children's digital behavior and offers two practical strategies families can apply immediately. Most adults are overwhelmed by social media; just imagine how kids must feel. What can parents do to enable healthy habits? This talk was given at a TEDx event using the TED conference format but independently organized by ...
Apple CEO Tim Cook meets lawmakers to discuss online safety bill for kids
CNBC Television· 2025-12-10 21:00
All right. So, one minute you find Jensen Huang wandering around down there and you've got the microphone in his face, and today somebody equally as interesting. >> Apple's Tim Cook. He was on the Hill today. We did ask him a couple of questions. Now, we did not hear directly from Tim Cook himself. He decided to be a bit of a man of mystery today, but we do know that he has met with lawmakers about a package of bills that are going to get their initial votes in a congressional panel tomorrow. And these deal wi ...
Apple CEO Tim Cook meets lawmakers to discuss online safety bill for kids
CNBC Television· 2025-12-10 18:12
Capitol Hill. Emily Wilkins joins us now. All right. So, one minute you find Jensen Huang wandering around down there and you get the microphone in his face, and today somebody equally as interesting. Apple's Tim Cook. He was on the Hill today. We did ask him a couple of questions. Now, we did not hear directly from Tim Cook himself. He decided to be a bit of a man of mystery today. But we do know that he has met with lawmakers about a package of bills that are going to get their initial votes in a congressiona ...
Reddit Cranks Up Safety Rules to Meet — and Beat — Australia's Law
CNET· 2025-12-09 21:01
Core Points
- Reddit is implementing age verification measures to comply with new Australian laws that restrict access to users under 16 years old [1][4]
- The company is also modifying the app for users under 18 globally, limiting access to NSFW and mature content, and disabling ad personalization [2][6]
- Reddit criticizes the Australian legislation as arbitrary and a limitation on free expression, while acknowledging the trend of age verification laws worldwide [3]
Group 1: Compliance Measures
- Reddit will verify that new and existing account holders in Australia are at least 16 years old [1]
- An age prediction model will be used to assess user age, with proof of age through government ID or selfies required when the model places a user under 16 [4] (a decision-flow sketch follows this summary)
- Accounts determined to be under 16 will be suspended [4]
Group 2: Data Handling and Privacy
- Reddit will securely store age information but not the verification documents or photos [5]
- Age information will not be visible to advertisers and will only be used to enhance content relevance [5]
Group 3: Platform Safety Enhancements
- For users under 18, ad personalization will be disabled, and sensitive ads related to alcohol and gambling will not be shown [6]
- Users under 18 will not be allowed to moderate NSFW or mature content communities [6]
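As a rough illustration of the layered policy described above, the sketch below encodes the decision flow under stated assumptions: an age-prediction score gates Australian accounts, accounts verified as under 16 in Australia are suspended, and under-18 accounts everywhere lose ad personalization, sensitive ads, and NSFW access. The threshold handling, field names, and function are hypothetical and do not represent Reddit's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Account:
    country: str                      # e.g. "AU" or "US"
    predicted_age: float              # output of an age-prediction model (assumed)
    verified_age: int | None = None   # set only after ID/selfie verification

def apply_policy(acct: Account) -> dict:
    """Return the restrictions this sketch would apply to a single account."""
    actions = {
        "require_age_proof": False,   # ask for government ID or a selfie
        "suspend": False,
        "ad_personalization": True,
        "nsfw_access": True,
        "sensitive_ads": True,        # e.g. alcohol and gambling ads
    }

    if acct.country == "AU":
        # Unverified Australian accounts the model places under 16 must prove their age.
        if acct.verified_age is None and acct.predicted_age < 16:
            actions["require_age_proof"] = True
        # Accounts determined to be under 16 are suspended.
        if acct.verified_age is not None and acct.verified_age < 16:
            actions["suspend"] = True

    # Global under-18 protections described in the summary.
    effective_age = acct.verified_age if acct.verified_age is not None else acct.predicted_age
    if effective_age < 18:
        actions["ad_personalization"] = False
        actions["nsfw_access"] = False
        actions["sensitive_ads"] = False

    return actions

print(apply_policy(Account(country="AU", predicted_age=14.2)))
print(apply_policy(Account(country="US", predicted_age=17.0)))
```

The first call flags a hypothetical Australian account for age proof; the second leaves a US teen account active but with personalization, sensitive ads, and NSFW access switched off.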