Model Retirement
On the eve of Valentine's Day, OpenAI is officially retiring the "most emotional" GPT-4o
36Kr · 2026-01-30 12:03
Core Insights
- OpenAI announced the retirement of the classic model GPT-4o on February 13, just six months after its brief return following user protests [1][3]
- Alongside GPT-4o, models such as GPT-4.1, GPT-4.1 mini, and OpenAI o4-mini will also be retired, marking a significant update for ChatGPT [3]
- OpenAI stated that these changes will not affect the API, and related services will remain available [3]

User Reactions
- Many users expressed nostalgia for GPT-4o, citing its distinctive conversational style and "temperature" as reasons for their attachment [5]
- Some users acknowledged the need for change, saying they found newer models like GPT-5.1 more enjoyable and effective [7]
- Developers and application builders, however, raised concerns about the abrupt retirement, emphasizing that many applications still rely on GPT-4o for its cost-effectiveness [7][8]
- A segment of users felt misled by OpenAI's claim that only 0.1% of active users were on GPT-4o, arguing that removing the model from free access skewed this statistic [8]

Industry Trends
- The iteration cycle of top models has shortened to 12 to 18 months, with many models retired within two years of release [11]
- OpenAI has previously retired several models, including GPT-3.5 Turbo and the Codex series, reflecting a pattern of frequent model updates and retirements [11]
- The cost of API calls for large models is falling sharply, with predictions of an 80% annual drop, making advanced AI services more accessible [12]
- Despite declining costs for users, the barriers to developing frontier-level models remain high, with training costs escalating into the billions of dollars [12][13]

Future Developments
- OpenAI is expected to continue releasing new models, with a focus on enhancing user experience and personalization [5]
- The retirement of older models does not mean complete obsolescence; they may be repurposed for less sensitive tasks or serve as foundational knowledge for smaller models [13]