Core Viewpoint
- Anthropic, an AI startup, is significantly weakening its flagship safety policy, which previously committed the company not to train AI systems without adequate safety measures, in response to competitive pressure in the AI industry [1][2][3]

Group 1: Policy Changes
- The company has decided to revise its Responsible Scaling Policy (RSP), a core commitment to ensure safety before training AI models [1]
- The new policy commits to greater transparency about AI safety risks and safety-testing performance, and to matching or exceeding competitors' safety efforts [1][3]
- The previous prohibition on training models without appropriate safety measures has been lifted, loosening the constraints of the company's safety policy [2]

Group 2: Competitive Landscape
- Anthropic faces intense competition from OpenAI, Elon Musk's xAI, and Google, all of which regularly release advanced tools [2]
- The company is also in a dispute with the U.S. Department of Defense over the use of its Claude tool, with the Pentagon issuing an ultimatum about contract terms if usage restrictions are imposed [2]

Group 3: Rationale Behind Changes
- The adjustments to the safety policy reflect the rapid pace of AI development and the absence of federal regulation in this area, prompting the company to reassess its safety commitments [3]
- A spokesperson emphasized that the policy shift is unrelated to negotiations with the Pentagon and is instead a response to a competitive landscape that prioritizes AI competitiveness and economic growth [3]
AI rivals are advancing too fast: Anthropic abandons a key safety commitment
Feng Huang Wang·2026-02-25 03:01