Artificial General Intelligence (AGI)
More Than Silicon Valley's 100,000 Layoffs! Hinton Warns: AI Is Upending Society in the Worst Possible Way
创业邦· 2025-11-29 03:22
Source: 新智元 (ID: AI_era). Authors: KingHZ, 元宇.

The AGI shock is already here: who benefits and who pays the bill is becoming the defining question of our era.

Last week, "Godfather of AI" Hinton said bluntly that tech billionaires are genuinely betting on AI replacing large amounts of human labor, and that this would lead to the complete disintegration of society!

Recently, anonymous voices from Amazon protested: the current generation of AI has become almost a drug to which tech giants like Amazon are addicted. They use AI as a pretext for layoffs, then pour the savings into data centers for AI products nobody pays for.

An open letter signed by more than 1,000 Amazon employees warns that this cost-be-damned model of AI development could do lasting harm.

Last month, Amazon cut 30,000 jobs in one stroke. The irony is that those 30,000 people's best, most rational move would be to buy Amazon stock.

Will artificial intelligence (AI) ultimately deliver a GDP miracle, or the disintegration of social order?

Hinton: AI could lead to society's complete disintegration

Last week, 77-year-old "AI godfather" Hinton and 82-year-old US Senator Bernie Sanders held an hour-long public conversation on AI's threat to employment.

After Amazon's latest earnings report, its market value rose by roughly $250 billion.

A doomsday picture is emerging: the worry has spread from the lab to offices, warehouses, and data centers. According to outplacement firms such as Challenger, Gray & Christmas ...
Oracle and Others Borrow Another $38 Billion; the "OpenAI Chain" Data-Center Circle's Cumulative Debt Reaches $100 Billion!
硬AI· 2025-11-28 13:59
The latest development: a new round of massive financing around OpenAI's infrastructure build-out is taking shape. According to people familiar with the matter, a banking consortium is negotiating a new loan of up to $38 billion, to be finalized in the coming weeks; the funds will pay for new sites that Oracle and data-center builder Vantage are constructing for OpenAI.

This new loan will be yet another heavy straw laid on that web of debt. By one analysis, OpenAI partners including SoftBank, Oracle, and CoreWeave had already borrowed at least $30 billion to invest in OpenAI or to build data centers for it. In addition, firms such as the investment group Blue Owl Capital and the compute-infrastructure company Crusoe depend on their agreements with OpenAI to repay roughly $28 billion in loans.

Yet in this high-stakes bet, OpenAI's own balance sheet is remarkably "clean". According to people close to the company, OpenAI carries almost no debt, having obtained only a $4 billion credit facility last year that it has not yet drawn on. Its strategic intent is clear. As one senior OpenAI executive put it:

Amid the data-center and compute frenzy around OpenAI, its partners have borrowed tens of billions of dollars, forming an "OpenAI chain" debt network now approaching $100 billion in total, a scale comparable to the combined net debt of six of the world's largest companies; OpenAI itself, meanwhile, has deftly ...
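The tranches reported above roughly account for the headline figure. A quick back-of-envelope tally, using only the article's own numbers (billions of USD; the ~$100B total is the article's aggregate, not an audited figure):

```python
# Tally of the reported "OpenAI chain" debt tranches, in billions of USD.
# Figures are as stated in the article; labels are informal descriptions.
tranches = {
    "new Oracle/Vantage loan under negotiation": 38,
    "prior borrowing by SoftBank, Oracle, CoreWeave (at least)": 30,
    "Blue Owl Capital / Crusoe loans tied to OpenAI deals": 28,
}
total = sum(tranches.values())
print(f"Reported tranches sum to ~${total}B")  # ~$96B, approaching $100B
```

Since the $30 billion figure is an "at least", the true total plausibly crosses the $100 billion mark the headline cites.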
Oracle and Others Borrow Another $38 Billion; the "OpenAI Chain" Data-Center Circle's Cumulative Debt Reaches $100 Billion
36Kr· 2025-11-28 10:48
To support OpenAI's grand blueprint, a vast ecosystem of its partners is funding the AI-infrastructure construction frenzy through debt, while OpenAI itself has deftly kept the financial risk off its own books.

The latest development: a new round of massive financing around OpenAI's infrastructure build-out is taking shape. According to people familiar with the matter, a banking consortium is negotiating a new loan of up to $38 billion, to be finalized in the coming weeks; the funds will pay for new sites that Oracle and data-center builder Vantage are constructing for OpenAI.

This new loan will be yet another heavy straw laid on that web of debt. By one analysis, OpenAI partners including SoftBank, Oracle, and CoreWeave had already borrowed at least $30 billion to invest in OpenAI or to build data centers for it. In addition, firms such as the investment group Blue Owl Capital and the compute-infrastructure company Crusoe depend on their agreements with OpenAI to repay roughly $28 billion in loans.

Total debt nears $100 billion, with partners bearing the financial risk

With the new $38 billion loan added, total debt around OpenAI is approaching the $100 billion mark, a scale comparable to the world's largest corporate borrowers. According to a 2024 report by asset manager Janus Henderson ...
Lost in the AI Era: What's Frightening Is Not Failing to Keep Up with Change, but Pressing Ahead with Old Thinking
腾讯研究院· 2025-11-28 08:45
The following article is from 腾讯新闻大声思考, an original Tencent News column that thinks out loud with outstanding authors. Author: 马兆远 (Ma Zhaoyuan), jointly appointed professor at the College of Engineering and the College of Business, Southern University of Science and Technology.

Over the past two years, from the rise of ChatGPT to the global shock triggered by DeepSeek, artificial intelligence has turned from an arcane specialty into a topic of public debate. AI's leap-like progress has made one fact clear: we stand at a historic juncture where the boundary between humans and intelligence is being redrawn.

AI's capability frontier is expanding rapidly, and it has begun to take on much of the work we once assumed "only humans can do." This change has produced collective anxiety and disorientation: with the human-intelligence boundary being redefined, which abilities can AI not replace? How do we find our own place?

In my view, what truly deserves our anxious thought is not how powerful AI will become, or whether it will squeeze, replace, or threaten human existence, but whether our ways of thinking are ready for this era. That is the core question for how each of us will live in it.

How we understand AI depends on the mindset we bring to it

When we talk about AI, we habitually fall into a fallacy: assuming that the technology we possess determines the future. This misunderstanding is widespread, and dangerous. Across the whole history of human civilization, technology has never been the most decisive variable; what truly sets an era's direction is the "way of thinking" behind it.

But in my view, the real key question ...
More Than Silicon Valley's 100,000 Layoffs: Hinton Warns AI Is Upending Society in the Worst Possible Way
36Kr· 2025-11-28 08:21
The AGI shock is already here: who benefits and who pays the bill is becoming the defining question of the era.

Last week, "Godfather of AI" Hinton said bluntly that tech billionaires are genuinely betting on AI replacing large amounts of human labor, and that this would lead to the complete disintegration of society!

Recently, anonymous voices from Amazon protested that the current generation of AI has become almost a drug to which tech giants like Amazon are addicted: they use AI as a pretext for layoffs, then pour the savings into data centers for AI products nobody pays for.

An open letter signed by more than 1,000 Amazon employees warns that this cost-be-damned model of AI development could do lasting harm.

Last month, Amazon cut 30,000 jobs in one stroke. The irony is that those 30,000 people's best, most rational move would be to buy Amazon stock. After Amazon's latest earnings report, its market value rose by roughly $250 billion.

A doomsday picture is emerging: the worry has spread from labs to offices, warehouses, and data centers. Will artificial intelligence (AI) ultimately deliver a GDP miracle, or the disintegration of social order?

According to outplacement firms such as Challenger, Gray & Christmas, US companies announced 153,074 layoffs in October, the highest in more than 20 years. Separately, Crunchbase and layoffs.fyi count more than 70,000 positions cut in 2025 alone by large companies including Intel, Microsoft, Verizon, and Amazon. Overseas media are describing it as "layoffs are p ...
Not Just a "Test-Taking Grinder": DeepSeek's Latest Model Breaks the Limits of Mathematical Reasoning, Beating Gemini DeepThink on Some Measures
TMTPost APP· 2025-11-28 05:45
DeepSeek says the model demonstrates strong theorem-proving ability. In other words, unlike the mathematical performance of most earlier large models, Math-V2 is no longer just a "test-taking grinder": it has a real chance of influencing scientific research through comprehensive, rigorous mathematical reasoning of its own.

DeepSeek also cites several strong results as validation: Math-V2 achieved gold-medal-level scores at both IMO 2025 (the International Mathematical Olympiad) and CMO 2024 (the Chinese Mathematical Olympiad), and reached a near-perfect score (118/120) on Putnam 2024, the North American collegiate mathematics competition, using scaled test-time compute.

DeepSeek trains the proof generator with a verifier serving as the reward model, incentivizing the generator to identify and resolve as many problems in its own proofs as possible before finalizing them. By scaling verification compute, it automatically flags new hard-to-verify proofs, creating training data that further improves the verifier. The result is Math-V2.

Earlier, in July this year, OpenAI and Google both announced gold-medal-level results on IMO 2025, which for a time defined the ceiling of large models' mathematical ability. Compared with those two, DeepSeek's Math-V2 is not only the first open-source IMO-gold-level model; in testing it also showed advantages on some measures. On the IMO-ProofBench evaluation, Math-V2 scored ...
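The generator-verifier loop described in this entry can be sketched schematically. Below is a toy, self-contained Python illustration; every class, function name, and number is a hypothetical stand-in for exposition, not DeepSeek's actual training code.

```python
import random

# Toy sketch of a verifier-as-reward training loop: the generator drafts
# and self-corrects proofs, the verifier's score is the reward, and
# borderline ("hard to verify") proofs get extra verification compute and
# become labeled data for improving the verifier. All details hypothetical.

class ToyGenerator:
    def __init__(self):
        self.skill = 0.2  # crude scalar proxy for proof quality

    def draft(self, problem):
        return {"problem": problem, "quality": random.random() * self.skill}

    def self_correct(self, proof):
        # Generator tries to find and fix issues before finalizing.
        proof["quality"] = min(1.0, proof["quality"] + 0.1)
        return proof

    def update(self, reward):
        # Verifier's score acts as the reward signal for the generator.
        self.skill = min(1.0, self.skill + 0.05 * reward)

class ToyVerifier:
    def score(self, proof):
        return proof["quality"]

    def is_hard_to_verify(self, proof):
        return 0.4 < proof["quality"] < 0.6  # borderline cases

    def scaled_vote(self, proof, samples=64):
        # Spend extra compute re-checking a borderline proof.
        votes = sum(random.random() < proof["quality"] for _ in range(samples))
        return votes / samples

def training_round(gen, ver, problems):
    labeled = []
    for p in problems:
        proof = gen.self_correct(gen.draft(p))
        gen.update(ver.score(proof))
        if ver.is_hard_to_verify(proof):
            labeled.append((proof, ver.scaled_vote(proof)))
    return labeled  # data for further improving the verifier

gen, ver = ToyGenerator(), ToyVerifier()
labeled = training_round(gen, ver, [f"problem-{i}" for i in range(100)])
print(f"generator skill after one round: {gen.skill:.2f}")
print(f"{len(labeled)} borderline proofs labeled for verifier training")
```

The key structural point the article describes is the closed loop: the verifier rewards the generator, and the generator's hardest outputs in turn become training data for the verifier.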
Ilya's Latest Judgment: Scaling Laws Approach Their Limit, and AI's Brute-Force Aesthetic Ends
36Kr· 2025-11-26 08:46
Core Insights
- Ilya Sutskever, co-founder of OpenAI and a key figure in deep learning, has shifted focus from scaling models to research-driven approaches in AI development [1][2][3]
- The industry is moving away from "scale-driven" methods back to "research-driven" strategies, emphasizing the importance of asking the right questions and developing new methodologies [2][3]
- Sutskever argues that while AI companies may experience stagnation, they can still generate significant revenue despite reduced innovation [2][3]
- The potential for narrow AI models to excel in specific domains suggests that breakthroughs may come from improved learning methods rather than merely increasing model size [3][4]
- The emergence of powerful AI could lead to transformative societal changes, including increased productivity and shifts in political and governance structures [3][4]
- Sutskever emphasizes the importance of aesthetic principles in research, advocating for simplicity and elegance in AI design [4]

Industry Trends
- The scaling laws that dominated AI development are nearing their limits, prompting a return to foundational research and exploration [2][28]
- The current phase of AI development is characterized by a shift from pre-training to reinforcement learning, which is more resource-intensive [29][30]
- The distinction between effective resource utilization and mere computational waste is becoming increasingly blurred in AI research [30][31]
- The scale of computational resources available today is substantial, but the focus should be on how effectively these resources are utilized for meaningful research [42][44]

Company Insights
- Safe Superintelligence (SSI) has raised $3 billion, positioning itself to focus on foundational research without the pressures of market competition [45][46]
- SSI's approach to AI development may differ from other companies that prioritize immediate market applications, suggesting a long-term vision for advanced AI [45][46]
- The company believes that the true value lies not in the sheer amount of computational power but in the strategic application of that power to drive research [43][44]
Musk's Grok5 Takes On Top Human Esports Players, Challenging Elite League of Legends Teams
Sohu Caijing· 2025-11-26 02:41
Core Insights
- Elon Musk announced that xAI's AI model Grok5 will challenge top human teams in League of Legends in 2026, aiming to test its general capabilities under specific constraints [1][2]
- Grok5 will operate under two main constraints: it can only observe the display through a camera with a field of view limited to that of a normal human (20/20 vision), and its response time and click rate must not exceed human levels [1]
- The model's release has been postponed to 2026, with a parameter scale of 6 trillion, which is double that of Grok3 and Grok4, and approximately 30 times that of leading models [1]

Company Developments
- xAI is expanding its supercomputing nodes in Memphis, planning to increase the number of GPUs to 1.5 million to support the training needs of Grok5 [1]
- Grok5's design aims to master any game by reading instructions and conducting experiments, marking a significant test for its general intelligence capabilities [1][2]

Industry Context
- The choice of League of Legends as a challenge is linked to the game's high demands for strategic planning, real-time decision-making, and multi-character collaboration, which are seen as critical benchmarks for assessing artificial general intelligence (AGI) [2]
- Previous AI breakthroughs in competitive gaming have relied on algorithm optimization and hardware advantages, but Grok5's challenge will focus on validating its human-like cognitive and decision-making abilities under simulated human physiological constraints [2]
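A cap like the one described above (click rate not exceeding human levels) is commonly enforced with a sliding-window rate limiter. Here is a minimal sketch; the actions-per-minute numbers and the windowing scheme are illustrative assumptions, not xAI's actual mechanism.

```python
import time
from collections import deque

# Sliding-window limiter capping an agent's actions at a human-plausible
# rate. The default of 300 actions per minute is a hypothetical figure
# chosen for illustration only.

class HumanRateLimiter:
    def __init__(self, max_apm=300, window_s=60.0):
        self.max_actions = max_apm
        self.window_s = window_s
        self.timestamps = deque()

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop actions that have fallen out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window_s:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_actions:
            return False  # over the cap: the agent must wait
        self.timestamps.append(now)
        return True

# Tiny demo: at most 5 actions per 1-second window.
limiter = HumanRateLimiter(max_apm=5, window_s=1.0)
results = [limiter.allow(now=0.1 * i) for i in range(10)]
print(results)  # first 5 allowed, next 5 rejected within the same window
```

The same shape of check can sit between the model's decision output and the simulated mouse/keyboard, so throughput is bounded regardless of how fast the model thinks.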
The Age of Scaling Is Over, Ilya Sutskever Just Announced
机器之心· 2025-11-26 01:36
Group 1
- The core assertion from Ilya Sutskever is that the "Age of Scaling" has ended, signaling a shift towards a "Research Age" in AI development [1][8][9]
- Current AI models exhibit "model jaggedness," performing well on complex evaluations but struggling with simpler tasks, indicating a lack of true understanding and generalization [11][20][21]
- Sutskever emphasizes the importance of emotions as analogous to value functions in AI, suggesting that human emotions play a crucial role in decision-making and learning efficiency [28][32][34]

Group 2
- The transition from the "Age of Scaling" (2020-2025) to the "Research Age" is characterized by diminishing returns from merely increasing data and computational power, necessitating new methodologies [8][39]
- Safe Superintelligence Inc. (SSI) focuses on fundamental technical challenges rather than incremental improvements, aiming to develop safe superintelligent AI before commercial release [9][11][59]
- The strategic goal of SSI is to "care for sentient life," which is viewed as a more robust alignment objective than simply obeying human commands [10][11][59]

Group 3
- The discussion highlights the disparity in learning efficiency between humans and AI, with humans demonstrating superior sample efficiency and the ability to learn continuously [43][44][48]
- Sutskever argues that current models are akin to students who excel in exams but lack the broader understanding necessary for real-world applications, drawing a parallel between a "test-taker" and a "gifted student" [11][25][26]
- The future of AI may involve multiple large-scale AI clusters, with the potential for a positive trajectory if the leading AIs are aligned with the goal of caring for sentient life [10][11]