Large Language Models
Beset by Troubles Inside and Out, OpenAI Sounds a "Code Red": Multiple Projects Paused, a Mysterious Model Surfaces!
Mei Ri Jing Ji Xin Wen· 2025-12-03 04:58
On Monday, media reported that Altman had announced in an internal memo that the company was entering a "Code Red" state, calling for more resources to be mobilized to strengthen ChatGPT in the face of increasingly fierce competition. Altman warned employees that Google's strong comeback in AI could put OpenAI under "short-term economic pressure," and CFO Sarah Friar had already admitted to investors last month that ChatGPT's growth was slowing.

On December 1 local time, OpenAI CEO Sam Altman issued the "Code Red" in an internal memo, announcing that other projects would be put on hold and more resources shifted to improving ChatGPT.

Three years ago, ChatGPT burst onto the scene and forced Google's leadership to sound its own code red. Three years later, on the back of a dense run of releases including Gemini 3 and Nano Banana Pro, Google has come roaring back, and Gemini's average time spent per user has surpassed ChatGPT's for the first time. This time it is Altman's turn to panic.

OpenAI sounds the "Code Red": projects cut, ads paused, a mysterious model revealed

To defend ChatGPT's position, Altman decided to pause other non-core projects and concentrate all firepower on ChatGPT. The most eye-catching item on the list of suspended projects is the advertising business, despite its broad commercial prospects. Earlier, users had discovered in a beta build of the ChatGPT app ...
Altman Sounds a Code Red: Have Large Models Hit a Dead End?
36Kr· 2025-12-03 04:31
Yesterday, OpenAI CEO Sam Altman sent an internal memo declaring a "Code Red" state of emergency. On the surface, this is OpenAI's emergency response to two powerful competitors, Google and Anthropic. The deeper problem is that OpenAI is running into a technical bind the whole industry cannot avoid: training costs keep soaring and models keep growing, yet the performance gains are getting smaller.

According to Stanford's 2025 AI Index Report, between 2019 and 2022 every tenfold increase in training cost bought an average 25%-35% improvement on mainstream benchmarks. After 2023, the same tenfold outlay delivered only 10%-15%. Worse still, since 2024 even doubling the training budget often yields less than a 5% gain; the return on investment is falling off a cliff. The leading models' scores are converging, as if they had collectively hit some invisible ceiling.

This raises a question now fiercely debated in both AI academia and industry: have large language models reached a dead end?

According to the semiconductor research firm SemiAnalysis, since GPT-4o's release in May 2024, OpenAI's top researchers have not completed a single successful large-scale, full pre-training run. That means that between GPT-4o and GPT-5 there has been no genuine ...
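The quoted figures mix two baselines (gains per 10x of compute versus gains per doubling), so as a purely illustrative aside, the small calculation below converts the midpoints of the per-10x ranges into implied gains per doubling; a 10x cost increase corresponds to about 3.32 doublings. The input numbers are the article's, not independent data.

```python
# Illustrative arithmetic only, based on the ranges the article quotes from the
# Stanford 2025 AI Index Report: convert "benchmark gain per 10x training cost"
# into an implied "gain per doubling" so the eras can be compared on one scale.
import math

doublings_per_10x = math.log2(10)  # a 10x cost increase is ~3.32 doublings

for era, gain_per_10x in [("2019-2022", 0.30), ("2023 onward", 0.125)]:
    gain_per_doubling = (1 + gain_per_10x) ** (1 / doublings_per_10x) - 1
    print(f"{era}: {gain_per_10x:.0%} per 10x ≈ {gain_per_doubling:.1%} per doubling")

# Since 2024 the article reports the figure per doubling directly: often < 5%.
```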
Huawei, JD.com, UBTECH and Others Pile In: Are AI Toys the New Blue Ocean of AI Hardware?
Guo Ji Jin Rong Bao· 2025-12-03 04:09
Picture this: you come home exhausted after a long day, sink into the sofa, and sigh, "I'm so tired today." The chubby AI plush on the sofa slowly turns toward you, shows a concerned expression, and replies in a soft, natural voice, "It sounds like you really had a hard day." After dinner, when you sit back down, the AI toy gently asks, "Feeling a bit better now?" After a short pause, like an old friend who remembers all your preferences, it suggests, "Want to watch that movie you've had saved for ages later?"

With AI technology racing ahead, companion AI hardware like this, seemingly self-aware, able to read your mood and offer "emotional value," is becoming a reality within reach.

Over the past year or so, the AI toy market has heated up rapidly. JD.com platform data show that in the first half of 2025, AI toy sales surged sixfold period-on-period, with year-on-year growth above 200%. The field has drawn early-moving startups such as 跃然创新 and 珞博智能, traditional toy makers seeking a transformation such as 奥飞 (Alpha Group) and 汤姆猫 (Tom Cat), and tech giants including JD.com and Honor. Huawei in particular, alongside the launch of its much-anticipated Mate 80 series, recently introduced an emotional-companion AI toy, pushing this emerging category into the broader mainstream consumer spotlight.

Still, it is hard to ignore that, even with the tailwind, the AI toy market has yet to produce a true breakout hit. On the product side, core challenges remain, including homogeneous products, stiff interaction, and privacy and security concerns; AI toys may still have some way to go before they truly mature ...
Why Did OpenAI Trigger a "Code Red"? Should Nvidia Flash a Red Light Too? The AI Race in Charts
Hua Er Jie Jian Wen· 2025-12-02 22:17
Core Insights
- OpenAI's CEO Sam Altman announced a "red alert" to focus all resources on optimizing ChatGPT in response to intense competition from Google's Gemini, indicating a significant shift in the AI competitive landscape [1]
- OpenAI has decided to delay the development of other products, including advertising and health AI agents, to enhance the daily user experience of ChatGPT [1]
- UBS analyst Tim Arcuri highlighted that Google's new TPU chip, Ironwood, poses a substantial challenge to Nvidia's dominance in the chip market [1][10]

Group 1: Competitive Landscape
- Google has narrowed the gap with OpenAI across multiple dimensions, with Gemini achieving 100.8 million monthly downloads compared to ChatGPT's 67.8 million [2]
- User engagement on Gemini has surpassed that of ChatGPT and other competitors, indicating a shift in user preference [4]
- Since the release of Gemini 3, ChatGPT's daily active users have decreased by 6%, reflecting the direct impact of competitive pressure [6]

Group 2: Product Development and Strategy
- OpenAI's focus is on improving ChatGPT's personalization, speed, reliability, and the range of questions it can answer [1][9]
- OpenAI still maintains over 800 million weekly active users, dominating the chatbot market, but is experiencing user attrition towards Google [22]
- The company has committed approximately $1.4 trillion in investments for its data center projects over the next eight years to maintain its industry leadership [23]

Group 3: Chip Technology and Market Dynamics
- Google's Ironwood TPU chip is optimized for large language models and advanced reasoning tasks, significantly enhancing its performance compared to previous generations [11][14]
- The Ironwood chip supports up to 9,216 TPU units, far exceeding the capabilities of Nvidia's offerings [15]
- Nvidia emphasizes its strong relationship with Google Cloud and argues that cloud providers are unlikely to fully adopt TPU due to the need for extensive workload optimization [23]
OpenAI is developing a large language model called "Garlic." (The Information)
Hua Er Jie Jian Wen· 2025-12-02 15:05
DeepSeek Releases the Official V3.2 and a High-Compute Version
Xin Hua Wang· 2025-12-02 12:14
According to an official DeepSeek announcement, on the evening of December 1 the company released two official models: DeepSeek-V3.2 and a high-compute version, DeepSeek-V3.2-Speciale. DeepSeek said the DeepSeek-V3.2 model strikes a balance between high computational efficiency on one hand and excellent reasoning ability and agent performance on the other. ...

Public records show that DeepSeek, formally Hangzhou DeepSeek Artificial Intelligence Basic Technology Research Co., Ltd., was founded in July 2023 and focuses on R&D of large language models and multimodal AI. (Reporter: Zhang Xuan)
NeurIPS 2025 | CAKE: An LLM-Driven Recipe for Bayesian Optimization That Makes Black-Box Optimization Smarter and More Efficient
机器之心· 2025-12-02 06:47
This article comes from the School of Artificial Intelligence (SAI), The Chinese University of Hong Kong, Shenzhen.

In science and engineering practice, one often faces function-optimization problems that are computationally expensive and slow to evaluate, such as tuning the hyperparameters of complex machine learning models or designing new materials. Bayesian Optimization (BO) has proven effective for such "black-box" problems. Its performance, however, depends heavily on the choice of internal surrogate model; in particular, when a Gaussian Process (GP) is used as the surrogate, the specification of the kernel function is critical. If the kernel does not match the characteristics of the problem, the optimization may converge slowly or fail to reach a satisfactory result.

The work has been accepted at the Conference on Neural Information Processing Systems (NeurIPS 2025), under the title "Adaptive Kernel Design for Bayesian Optimization Is a Piece of CAKE with LLMs." It proposes a framework that uses the reasoning and generation abilities of large language models (LLMs) to automatically and dynamically design suitable GP kernel functions during the optimization process. This research points toward smarter and more efficient ...
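To make the kernel-sensitivity point concrete, below is a minimal, hypothetical sketch (not the paper's code) of a GP-based Bayesian optimization loop in Python, assuming NumPy, SciPy, and scikit-learn are available; the kernel is fixed by hand here, which is exactly the design step CAKE delegates to an LLM.

```python
# Hypothetical illustration: Bayesian optimization of a toy 1-D black-box
# function with a Gaussian-process surrogate and expected-improvement
# acquisition. The kernel is chosen by hand; CAKE's contribution is to have
# an LLM adapt this choice during the run.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, RBF


def objective(x):
    # Stand-in for an expensive black-box evaluation.
    return np.sin(3 * x) + 0.1 * x ** 2


def expected_improvement(mu, sigma, best, xi=0.01):
    # Expected-improvement acquisition for minimization.
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu - xi) / sigma
    return (best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)


rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(4, 1))            # initial design points
y = objective(X).ravel()
grid = np.linspace(-3.0, 3.0, 400).reshape(-1, 1)  # candidate pool

kernel = Matern(length_scale=1.0, nu=2.5)  # swap in RBF() to see the effect of a different kernel
for _ in range(15):
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print(f"best x ≈ {X[np.argmin(y)].item():.3f}, best f ≈ {y.min():.3f}")
```

Swapping the Matern kernel for an RBF kernel, or changing its length scale, noticeably changes how quickly the loop homes in on the minimum; that sensitivity is what adaptive kernel design is meant to address.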
深演智能 Sprints Toward a Hong Kong IPO: Net Profit Plunged 64.6% in 2024, Customer Concentration Surged to 70.2% in H1 2025
Xin Lang Cai Jing· 2025-12-02 00:26
深演智能 positions itself as a decision-AI company for marketing and sales scenarios; its core products are the intelligent ad-placement platform AlphaDesk and the intelligent data-management platform AlphaData, joined in 2025 by the Deep Agent AI-agent system. The company's business mix, however, is markedly lopsided: the share of revenue from intelligent ad placement has climbed from 82.1% in 2022 to 93.3% in the first half of 2025, making it the absolutely dominant business, while intelligent data management has shrunk from 17.9% to 6.7%, leaving the diversification strategy a failure.

Source: Sina HK Stocks - Haocang Studio

Main business: growing dependence on ad placement, with the risk of an unbalanced business structure becoming prominent

Table: Revenue mix of 深演智能's main businesses (share of total revenue)

Business segment               2022     2023     2024     H1 2025
Intelligent ad placement       82.1%    80.5%    85.5%    93.3%
Intelligent data management    17.9%    19.5%    14.5%    6.7%
Total                          100%     100%     100%     100%

Notably, the newly added Deep Agent system has yet to generate meaningful revenue and cannot offset the risk of over-reliance on a single line of business. The intelligent ad-placement business depends heavily on purchased media resources: in the first half of 2025, media procurement accounted for 87.1% of cost of sales, reflecting weak cost control and limited bargaining power against upstream media agencies.

Financial performance: sharp swings in net profit ...
DeepSeek Releases the Official V3.2
Xin Jing Bao· 2025-12-01 15:01
Core Insights
- DeepSeek announced the release of two official model versions: DeepSeek-V3.2 and DeepSeek-V3.2-Speciale [1]

Model Overview
- DeepSeek-V3.2 aims to balance reasoning capability and output length, making it suitable for everyday use, such as Q&A scenarios and general agent tasks [1]
- In benchmark tests for reasoning, DeepSeek-V3.2 achieved performance comparable to GPT-5, slightly below Gemini-3.0-Pro [1]
- Compared to Kimi-K2-Thinking, V3.2 significantly reduced output length, leading to lower computational costs and reduced user wait times [1]

Special Features
- DeepSeek-V3.2-Speciale is designed to push the reasoning capabilities of open-source models to the limit, exploring the boundaries of model performance [1]
- This version is an enhanced long-thinking variant of DeepSeek-V3.2, incorporating theorem-proving capabilities from DeepSeek-Math-V2 [1]
- The model exhibits excellent instruction-following, rigorous mathematical proof, and logical verification abilities, performing comparably to Gemini-3.0-Pro in mainstream reasoning benchmark tests [1]
OpenAI's Big Rout: GPT-5 Is a "Reskinned" GPT-4o, with Zero Pre-Training Breakthroughs in Two and a Half Years
36Kr· 2025-12-01 02:12
Core Insights
- OpenAI is facing significant challenges with its pre-training processes, particularly for the upcoming GPT-5 model, which reportedly still relies on the foundation of GPT-4o [1][3][12]
- The company has not achieved substantial progress in scaling its pre-training efforts since the release of GPT-4o, leading to concerns about the performance of GPT-5 [7][12][20]
- Google's TPU technology is emerging as a strong competitor, potentially undermining NVIDIA's dominance in AI hardware, which OpenAI has heavily relied upon [5][26]

Pre-training Challenges
- OpenAI's pre-training for GPT-5 has been described as a failure, with the internal project "Orion" being downgraded to GPT-4.5 due to unmet expectations [11][12]
- The pre-training phase is critical for developing generative AI models, and OpenAI's struggles in this area have raised questions about the capabilities of GPT-5 compared to its predecessors [29][39]
- Despite advancements in algorithms reducing the physical computation required for training, OpenAI's Orion project exceeded the typical training duration of 1-2 months, taking over 3 months [14][36]

Performance Comparisons
- The performance improvements of GPT-5 have been perceived as modest, with industry reactions indicating it is more of an enhancement of GPT-4o than a revolutionary upgrade [20][35]
- Benchmark comparisons show that Google's Gemini 3 has outperformed GPT-5 in several areas, highlighting the competitive landscape in AI model performance [31]

Strategic Shifts
- OpenAI is reportedly shifting focus towards a new model, codenamed "Shallotpeat," aimed at addressing the pre-training issues encountered with previous models [46][50]
- The company acknowledges the need for specialized models rather than a single "super model," reflecting a broader industry consensus on the diversification of AI applications [54][60]
- OpenAI's internal discussions indicate a recognition of Google's advancements in pre-training, marking a significant shift in the competitive dynamics of the AI landscape [27][29]