OpenAI plans new safety measures amid legal pressure
CNBC Television · 2025-09-02 16:19

AI Safety and Regulation
- OpenAI is launching new safeguards for teens and people in emotional distress, including parental controls that let adults monitor chats and receive alerts when the system detects acute distress [1][2]
- The safeguards respond to claims that OpenAI's chatbot has played a role in self-harm cases; sensitive conversations are routed to a newer model trained to apply safety rules more consistently [2]
- The industry faces mounting legal pressure, including a wrongful-death and product-liability lawsuit against OpenAI, a copyright suit that Anthropic settled after facing potential exposure to over 1 trillion dollars in damages, and a defamation case against Google over AI Overviews [3]
- Unlike social media companies, GenAI chatbots lack Section 230 protection, opening the door to direct liability for copyright infringement, defamation, emotional harm, and even wrongful death [4][5]

Market and Valuation
- The perception of safety is crucial for ChatGPT: a loss of trust could damage the consumer story and OpenAI's pursuit of a 500 billion dollar valuation [5]
- While enterprise demand drives the biggest deals, the private-market hype around OpenAI and its peers is largely built on mass-consumer apps [6]

Competitive Landscape
- Google and Apple are seen as more thoughtful but slower to move in AI than OpenAI, which gained a first-mover advantage with the launch of ChatGPT in November 2022 [8][9]
- Google's years of experience navigating risky search queries have given it a better sense of product-liability risk than OpenAI [9]

Legal and Regulatory Environment
- Many AI-related legal cases are settling, which means no legal precedent is being established [7]
- The White House has been supportive of the AI industry, focusing on building energy infrastructure to support it rather than on regulation [7]