Big tech given warning - and deadline - by UK regulator

Core Viewpoint
- Tech companies are being urged to strengthen online protections for children after MPs rejected a proposed blanket ban on social media for under-16s, highlighting the need for stronger age verification and safety measures [1][2][5].

Group 1: Regulatory Actions
- The Information Commissioner's Office (ICO) and Ofcom have demanded that platforms including Facebook, Instagram, Roblox, Snapchat, TikTok, and YouTube provide details of their age verification processes and their measures against online grooming by the end of April [1][2].
- Ofcom has called for an end to product testing on children and requires platforms to address harmful algorithms and the way updates are rolled out to users [2][5].
- The ICO has raised concerns about the enforcement of minimum age policies, noting that 72% of children aged 8 to 12 are accessing age-restricted sites and apps [3].

Group 2: Industry Response
- Tech companies, including YouTube and Meta (Facebook and Instagram), say they have implemented a range of safety measures, such as AI-based age detection and Teen Accounts with built-in protections [9][10].
- Roblox says it is in regular communication with Ofcom and has introduced more than 140 safety features in the past year, including mandatory age checks for chat access [10][11].
- The Molly Rose Foundation has backed the regulatory push, emphasizing the need for tech firms to be held accountable for children's safety online [8].

Group 3: Future Implications
- Ofcom plans to report publicly on the platforms' responses in May and will assess the impact of the Online Safety Act on children's online experiences [5].
- The regulator has indicated it is ready to take enforcement action if the responses from tech firms are unsatisfactory, which could lead to strengthened regulations [6].