After Falling Out with Ilya, Altman Urgently Recruits a "Doomsday Supervisor" at a 4-Million-RMB Annual Salary; "Hell Mode" Starts on Day One
36Kr · 2025-12-29 09:02

Core Insights
- OpenAI is recruiting a "Head of Preparedness" with a starting salary of $555,000 plus equity, approximately 4 million RMB, indicating a high-level executive position in Silicon Valley [1][4]
- The role is described as highly challenging, akin to a "firefighter" or "doomsday supervisor," focusing on managing the risks of rapidly advancing AI models rather than enhancing their intelligence [5][6]

Group 1: Job Responsibilities and Challenges
- The new hire will be responsible for establishing safety measures to mitigate risks as AI models become more powerful, particularly in areas such as mental health and cybersecurity [6][8]
- The position aims to create a coherent and actionable safety process that integrates capability assessment, threat modeling, and mitigation strategies [18][28]

Group 2: Context of Recruitment
- The recruitment is seen as a response to concerns about "safety hollowing," where profit motives have overshadowed safety protocols at OpenAI, especially following the disbandment of the "superalignment" team [19][24]
- The departure of key personnel from OpenAI has raised alarms about the company's commitment to the safe deployment of advanced AI technologies [23][27]

Group 3: Industry Implications
- As AI models become more capable, the associated risks are intensifying, with significant implications for mental health and cybersecurity [10][16]
- Competition among major AI firms such as Google, Anthropic, and OpenAI necessitates maintaining safety standards while accelerating technological advancement [28]