Superintelligence
Musk Throws Down a "New Gauntlet": xAI Could Achieve AGI as Early as Next Year and Surpass Rivals Within Two to Three Years
Ge Long Hui· 2025-12-18 02:39
According to sources citing Musk, xAI could achieve artificial general intelligence (AGI), meaning intelligence that matches or exceeds human intelligence, within the next few years, possibly as early as 2026. Elon Musk, the world's richest person, is both CEO of Tesla and founder of xAI, two companies currently advancing artificial intelligence (AI) projects, and he appears very optimistic about xAI's future. According to several people familiar with the matter, at an all-hands meeting held last week at xAI's San Francisco headquarters, Musk declared that if the company can make it through the next two to three years, xAI will beat its competitors. He added that the company's ability to rapidly scale its compute and data capacity will be the key to winning the race for so-called "superintelligence" (intelligence surpassing human intelligence), and could ultimately make xAI the most powerful AI company. (Editor in charge: Lishu) ...
Musk: xAI Could Achieve AGI as Early as 2026; If the Company Survives the Next Two to Three Years, It Will Beat Its Rivals
美股IPO· 2025-12-17 22:52
Core Viewpoint
- Musk is optimistic about the future of his AI company xAI, believing it can achieve Artificial General Intelligence (AGI) by 2026 if it survives the next two to three years [1][2][5].

Group 1: Company Progress and Strategy
- Musk emphasized that rapid expansion of computational power and data capabilities will be key to xAI's success in the competition for "superintelligence" [2].
- xAI has a significant financial advantage, with annual funding support estimated between $20 billion and $30 billion, and benefits from synergies with Musk's other companies [3].
- The company has rapidly expanded its data center capabilities, with a current GPU count of approximately 200,000 and a goal of reaching 1 million [4].

Group 2: Competitive Landscape
- xAI is a relatively new player in the race for AGI, competing against established giants like OpenAI and Google [6].
- The AI competition remains intense, with OpenAI reportedly entering an "emergency state" to accelerate model releases and Google launching its new Gemini model [6].

Group 3: Product Development
- During a recent all-hands meeting, xAI showcased updates to existing products, including Grok Voice and applications for Tesla owners, highlighting improvements in predictive capabilities, voice listening, and video editing [6].
Why AGI Won't Arrive: This Researcher Lays Out AI's "Physical Limits"
36Ke· 2025-12-17 11:43
Group 1
- The article discusses the skepticism surrounding the realization of Artificial General Intelligence (AGI), arguing that current market optimism may be misplaced given the physical constraints on computation [1][4].
- Tim Dettmers argues that computation is fundamentally bound by physical laws, meaning that advances in intelligence are limited by energy, bandwidth, storage, manufacturing, and cost [3][4].
- Dettmers makes several key judgments about AGI: the success of Transformer models is not coincidental but an optimal engineering choice under current physical constraints, and further improvements yield diminishing returns [4][6].

Group 2
- Discussions about AGI often overlook the physical realities of computation, leading to misconceptions about the potential for unlimited scaling of intelligence [5][9].
- As systems mature, linear improvements require exponentially increasing resource investments, which leads to diminishing returns [10][16].
- The performance gains from GPUs, which have historically driven AI advancements, are nearing their physical and engineering limits, suggesting a shift in focus is necessary [18][22].

Group 3
- Dettmers suggests that the current trajectory of AI development may be approaching stagnation, with the introduction of Gemini 3 potentially signaling a limit to the effectiveness of scaling [33][36].
- The cost structure of scaling has changed: costs that once grew linearly now grow exponentially, indicating that further scaling may not be sustainable without new breakthroughs [35][36].
- True AGI must include the ability to perform economically meaningful tasks in the real world, which is heavily constrained by physical limitations [49][50].
Group 4
- The concept of "superintelligence" may be flawed, as it assumes unlimited capacity for self-improvement, which is not feasible given the physical constraints of resources [56][58].
- The future of AI will be shaped by economic viability and practical applications rather than the pursuit of an idealized AGI [59][60].
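The "linear improvements require exponentially increasing resources" claim above can be made concrete with a toy model (my own illustration, not a formula from the article): if capability grows roughly logarithmically with compute, then each fixed capability increment multiplies the required compute by a constant factor.

```python
def compute_needed(capability: float, k: float = 1.0) -> float:
    """Toy model (hypothetical): if capability P = k * log10(C),
    then the compute required is C = 10 ** (P / k)."""
    return 10 ** (capability / k)

# Each +1 capability step costs 10x the compute of the previous step:
# linear improvement, exponentially growing resource investment.
costs = [compute_needed(p) for p in range(1, 5)]
print(costs)  # [10.0, 100.0, 1000.0, 10000.0]
```

The constant ratio between successive entries is the diminishing-returns pattern the article describes: equal capability steps, ten times the resources each time.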
In Depth | Former Google CEO on the "San Francisco Consensus": Recursive Self-Improvement Emerges When Technologies Converge; the Era of AI Learning and Creating Autonomously Is Coming
Sou Hu Cai Jing· 2025-12-16 02:19
Eric Schmidt is a prominent figure in global technology and artificial intelligence. He previously served as Chairman and CEO of Google, where he played a key role in the company's growth and global expansion. He is currently Chairman and CEO of Relativity Space and a founding partner of Innovation Endeavors. A recognized leading voice on artificial intelligence and national security, he continues to exert significant influence on discussions of technological innovation and strategic policy in the United States and worldwide.

Graham Allison is a noted scholar and former government official. He is the Douglas Dillon Professor of Government at Harvard Kennedy School, where he previously served as founding dean and as director of the Belfer Center for Science and International Affairs. He has extensive public service experience, including serving as Assistant Secretary of Defense during President Clinton's first term, for which he was awarded the Defense Distinguished Public Service Medal. His expertise spans international relations, national security, and governance, and he is a widely respected authority in both academic and policy circles.

Legendary Echoes and the Origins of a Dialogue: Remembering Kissinger's Cross-Disciplinary Vision

Graham Allison: It is a great honor to be at The John ...
DeepMind Scientist's Startling Prediction: AGI by 2028, and Mass Unemployment on the Way
36Ke· 2025-12-15 02:50
Just now, DeepMind's chief scientist delivered a startling prediction: minimal AGI may arrive in 2028, and mass unemployment is right around the corner. Humanity now stands at the crossroads of the storm, and those who are unprepared will be hit head-on. So when, exactly, will AGI arrive?

Recently, Shane Legg, co-founder and Chief AGI Scientist at Google DeepMind, boldly predicted in an interview that there is a 50% chance minimal AGI will be achieved in 2028. He described three tiers of AGI development:

Minimal AGI: able to complete typical human cognitive tasks; he predicts a 50% probability of achieving it by 2028.

Full AGI: able to cover the entire range of human cognitive ability (for example, inventing new theories or creating works of art).

Superintelligence (ASI): far beyond human cognitive ability.

In short, the endpoint of AI development is by no means human-level intelligence. Ultimately, superintelligent ASI will surpass human capabilities, bring comprehensive structural change to our economy and society, and reconstruct a new world for humanity.

Scale and algorithms: the keys to AGI

"We are standing at an inflection point of transformation, and most people have not even realized it." —— Shane Legg, co-founder of Google DeepMind

In his view, AGI development today is far past the spark stage and has reached a critical point. In the past, people who talked about AGI were often dismissed as "zealots." Today, ...
Microsoft Executive: If AI Threatens Humanity, We Will Stop Development Immediately
财联社· 2025-12-12 05:47
Mustafa Suleyman, Microsoft's head of consumer AI, is currently working to build a superintelligence that "serves human interests." In a new interview this week, he pledged that if the technology poses a threat to humanity, related development work will stop immediately.

"We will not continue to develop systems that could spin out of control," Suleyman said on the program.

Notably, before an agreement reshaping the Microsoft-OpenAI relationship was reached this October, Suleyman's work had been constrained by contract terms that barred Microsoft from developing artificial general intelligence (usually meaning systems with human-level capabilities) or superintelligence exceeding human capabilities.

Under the new agreement, OpenAI may develop some products jointly with third parties, and Microsoft may likewise develop general artificial intelligence on its own or with third parties.

Suleyman revealed that Microsoft had in fact given up those rights in exchange for access to OpenAI's latest products, part of the two companies' earlier partnership, under which Microsoft spent years building and provisioning data centers for OpenAI.

Speaking of OpenAI, Suleyman said: "They have now struck deals with SoftBank, Oracle, and several other companies, and the data centers being built already exceed what Microsoft originally planned to build for them. Correspondingly, we have also gained the right to develop artificial intelligence on our own."

For now, industry discussion of superintelligence remains theoretical. Although AI models like ChatGPT can interact in ways that computers a decade ago could not, ...
Microsoft AI Executive Pledges to Stop Development If Superintelligence Threatens Humanity
Hua Er Jie Jian Wen· 2025-12-11 18:37
Core Viewpoint
- Mustafa Suleyman, head of consumer AI at Microsoft, has committed to halting the development of superintelligent systems if they pose a threat to humanity, emphasizing the need for ethical considerations in AI development [1]

Group 1: Microsoft's AI Strategy
- Microsoft's shift in AI strategy is attributed to a revised relationship with OpenAI, allowing Microsoft to develop general artificial intelligence (AGI) and superintelligent systems, which were previously restricted [2]
- Suleyman stated that Microsoft had previously relinquished development rights in exchange for access to OpenAI's latest products and had invested in building data centers for OpenAI [2]
- The recent changes enable Microsoft to explore technologies that could potentially surpass human performance across various tasks, marking a significant transition for the company [2]

Group 2: MAI Superintelligence Team
- Suleyman announced the formation of the MAI superintelligence team, which he leads, focusing on practical applications in fields like medical diagnostics and education rather than abstract concepts of superintelligence [3]
- The team's initial goal is to develop AI that significantly outperforms humans in specific areas, such as expert-level diagnostics and operational planning in clinical settings [3]

Group 3: Technological Development and Challenges
- Despite ambitions for superintelligence, Suleyman acknowledged that current technology is still evolving and has not yet met consumer and enterprise expectations [4]
- The Microsoft Copilot assistant's AI capabilities are still in development and not always accurate, indicating ongoing experimentation [4]
- Microsoft has reduced its reliance on OpenAI by incorporating models from Google and Anthropic, following the acquisition of intellectual property from Suleyman's previous company, Inflection AI [4]
Exclusive Interview | Lars Tvede, Founder of the "Nordic Eye" Fund: An AI Bubble Could Emerge Within the Next Two to Three Years
Sou Hu Cai Jing· 2025-12-08 04:56
Group 1: AI Investment Trends
- The global capital market is experiencing a new wave of technology investment centered on artificial intelligence (AI), reshaping growth structures, with high capital expenditure in the tech sector acting as a fiscal stimulus amid pressure on traditional industries [1]
- AI-related investments currently account for approximately 2% of global GDP, which is considered reasonable compared with historical bubbles like the 19th-century railway boom [5][8]
- The current macroeconomic environment is favorable, with strong profit growth and declining interest rates, in contrast to the conditions leading up to the 2000 internet bubble [6]

Group 2: AI Technology Development
- AI is evolving toward "super intelligence" and "hyper intelligence," the latter denoting a stage where AI can self-iterate and improve without human intervention [4]
- The cost of AI processing is expected to decrease by about 90% annually, with computational efficiency doubling every 3 to 4 months, outpacing Moore's Law [4]
- AI's self-improvement capabilities, which began to emerge between 2018 and 2020, are accelerating, indicating a potential for unprecedented technological expansion [5]

Group 3: Market Dynamics and Risks
- Concerns about "circular financing" among tech giants are viewed as healthy risk-sharing, as companies like Microsoft and Google have substantial cash flow to support their AI investments [6]
- The current market shows a demand-supply imbalance, with core resources like chips from companies such as NVIDIA and AMD in short supply [5]

Group 4: Future of Work and Economic Implications
- The rise of AI is creating a paradox for white-collar workers: increased efficiency leads to higher workloads and pressure without corresponding wage increases [14]
- The transition to a technology-driven economy may lead to a division into three distinct economic "worlds," with varying levels of technological integration and economic growth [16][17]
- Adapting to AI and shifting from traditional education to "just-in-time" learning is emphasized, as the rapid pace of technological change diminishes the value of conventional degrees [18][19][20]
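The efficiency claim in Group 2 can be sanity-checked with simple compounding arithmetic (my own comparison; only the doubling periods come from the interview): doubling every 3.5 months compounds to roughly two orders of magnitude over two years, versus a single doubling at Moore's-law pace.

```python
def gain_over(months: float, doubling_period: float) -> float:
    """Multiplier after `months` if efficiency doubles every
    `doubling_period` months (toy compounding model)."""
    return 2 ** (months / doubling_period)

ai_gain = gain_over(24, 3.5)      # ~116x over two years
moore_gain = gain_over(24, 24.0)  # 2x over two years
print(round(ai_gain), moore_gain)
```

The gap between ~116x and 2x over the same window is what "surpassing Moore's Law" amounts to under these assumed doubling rates.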
Safety Assessments "Flash Red": Safety Systems at Several Top AI Companies Fall Short of Global Requirements
Huan Qiu Wang Zi Xun· 2025-12-04 02:57
"未来生命研究所"主席、MIT教授马克斯·泰格马克表示,当前AI协助黑客入侵、诱导人类心理失控及 自残等相关事件引发热议,但美国AI企业所受监管力度甚至低于餐馆,且仍在通过游说抵制强制性安 全规范。与此同时,全球AI领域竞争持续升温,主要科技企业已累计投入数千亿美元用于机器学习技 术的扩展与升级。 据悉,"未来生命研究所"成立于2014年,长期致力于关注智能机器对人类的潜在威胁,早期曾获得特斯 拉CEO马斯克的支持。今年10月,杰弗里·辛顿、约书亚·本吉奥等多位科学家已联合呼吁,暂停超级智 能研发工作,直至公众诉求明确且科研界找到安全管控路径。(纯钧) 评估指出,独立专家通过多维度考察发现,相关企业在追逐超级智能技术突破的过程中,尚未建立起能 有效管控高阶AI系统的可靠方案。这一研究的公布,源于近期多起自杀、自残事件被追溯至AI聊天机 器人,社会各界对具备推理与逻辑能力、甚至可能超越人类的AI系统所带来的潜在冲击愈发担忧。 来源:环球网 【环球网科技综合报道】12月4日消息,据NBC报道,非营利机构"未来生命研究所"发布最新AI安全指 数,评估结果显示,Anthropic、OpenAI、xAI、Meta等全球 ...
Study Finds Safety Measures at OpenAI, xAI, and Other Major Global AI Companies "Failing," Far Short of Global Standards
Xin Lang Cai Jing· 2025-12-03 13:21
IT Home, December 3: According to Reuters, the Future of Life Institute today released its latest AI Safety Index, which finds that the safety measures of major AI companies such as Anthropic, OpenAI, xAI, and Meta "fall far short of emerging global standards."

The institute noted that independent experts' assessments show these companies are single-mindedly pursuing superintelligence without having established reliable plans capable of truly controlling this class of advanced systems.

[Screenshot of the Reuters article: "AI companies' safety practices fail to meet global standards, study shows," Reuters, December 3, 2025 ...]