Anthropic Reportedly Abandons Core AI Safety Commitment
Sina Finance · 2026-02-25 08:41

Core Viewpoint
- Anthropic, an AI startup founded by former OpenAI members, is revising its risk mitigation policy, raising concerns about AI safety governance in the industry [1][3]

Group 1: Policy Changes
- Anthropic has maintained a "Responsible Scaling Policy" (RSP) since 2023, which included a commitment not to train or release any AI model without sufficient risk mitigation measures in place [3]
- The company has recently decided to overhaul the RSP, removing the key commitment that had previously earned it praise for its focus on safety [3]

Group 2: Industry Context
- The shift in Anthropic's policy comes as major competitors such as OpenAI and Google accelerate development of large AI models, putting Anthropic at risk of being marginalized [3]
- Jared Kaplan, Anthropic's Chief Scientist, stated that halting AI model training is not beneficial, especially in a rapidly evolving technological landscape where competitors could gain an advantage [3]