On the eve of Valentine's Day, OpenAI will officially retire the "most emotional" GPT-4o
36Kr · 2026-01-30 12:03
Core Insights
- OpenAI announced the retirement of the classic model GPT-4o on February 13, just six months after its brief return following user protests [1][3]
- Along with GPT-4o, models such as GPT-4.1, GPT-4.1 mini, and OpenAI o4-mini will also be retired, marking a significant update for ChatGPT [3]
- OpenAI stated that these changes will not affect the API, and related services will remain available [3]

User Reactions
- Many users expressed nostalgia for GPT-4o, citing its unique conversational style and "warmth" as reasons for their attachment [5]
- Some users acknowledged the need for change, saying they found newer models like GPT-5.1 more enjoyable and effective [7]
- Developers and application builders, however, raised concerns about the abrupt retirement, emphasizing that many applications still rely on GPT-4o for its cost-effectiveness [7][8]
- A segment of users felt misled by OpenAI's claim that only 0.1% of active users still used GPT-4o, arguing that removing the model from free access skewed this statistic [8]

Industry Trends
- The iteration cycle of top models has shortened to 12 to 18 months, with many models retired within two years of release [11]
- OpenAI has previously retired several models, including GPT-3.5 Turbo and the Codex series, indicating a pattern of frequent model updates and retirements [11]
- The cost of API calls for large models is falling sharply, with predictions of an 80% annual drop, making advanced AI services more accessible [12]
- Despite declining costs for users, the entry barriers for developing frontier-level models remain high, with training costs escalating into the billions of dollars [12][13]

Future Developments
- OpenAI is expected to continue releasing new models, with a focus on enhancing user experience and personalization [5]
- The retirement of older models does not signify their complete obsolescence; they may be repurposed for less sensitive tasks or serve as foundational knowledge for smaller models [13]
Meta raids Google DeepMind and Scale AI for its all-star superintelligence team
Business Insider · 2025-08-26 09:00
Core Insights
- Meta is aggressively recruiting talent from Google's AI division DeepMind and from Scale AI to bolster its superintelligence team, indicating a strategic focus on enhancing its AI capabilities [1][2][3]

Group 1: Recruitment from DeepMind
- Meta has hired at least 10 researchers from Google DeepMind since July, including key contributors to Google's advanced AI models [1]
- Notable hires include Yuanzhong Xu, who played a significant role in developing LaMDA and PaLM 2, and Mingyang Zhang, who has expertise in information retrieval for large language models [9][11]
- Other DeepMind recruits include Tong He, who contributed to a gold-medal achievement at the International Mathematical Olympiad, and Xinyun Chen, who specializes in autonomous code generation [10][12]

Group 2: Recruitment from Scale AI
- Meta has also recruited at least six researchers from Scale AI, particularly for its safety and evaluations team, following its acquisition of nearly half of Scale AI for $14 billion [2][3]
- Key hires from Scale AI include Ziwen Han and Nathaniel Li, who co-authored a challenging test for AI models, and Summer Yue, who now leads the alignment group at Meta's Superintelligence Labs [14][15]
- The SEAL (Safety, Evaluations, and Alignment Lab) team from Scale AI focuses on ensuring AI models align with human values and improve in performance [13]