AI chatbot firms face stricter regulation in online safety laws protecting children in the UK

Core Viewpoint
- The UK government is introducing new measures to regulate AI chatbots and social media platforms, driven by concerns over the spread of sexually explicit content and the protection of children's wellbeing [2][3][4].

Group 1: Regulatory Measures
- The UK government is closing a "loophole" in the Online Safety Act, making AI chatbots such as OpenAI's ChatGPT and Google's Gemini subject to its rules against illegal content [2][3].
- New measures will require social media companies to retain a child's data after their death unless the online activity is clearly unrelated to the death [4].
- The government is setting minimum age limits for social media platforms and restricting harmful design features such as infinite scrolling [3][4].

Group 2: Industry Impact
- The announcement reflects a shift in the UK government's approach to regulating technology, focusing on the design and behavior of technologies rather than only on user-generated content [5][6].
- Scrutiny of children's access to social media is increasing, with countries such as Australia and Spain implementing similar age restrictions [6][7].
- The House of Lords has voted to amend the Children's Wellbeing and Schools Bill to include a social media ban for under-16s, which will be reviewed by the House of Commons [8][9].