AlphaZero
DeepMind Reinforcement Learning Chief David Silver Leaves to Start His Own Company: Creator of the Alpha-Series AIs and Hassabis's Right-Hand Man
36Kr · 2026-02-02 08:21
Reinforcement learning giant David Silver has left DeepMind. The veteran researcher, who spent a full 15 years at the company, has departed to found his own AI startup, Ineffable Intelligence. Registration filings show the company was quietly incorporated back in November 2025, and Silver was formally appointed a director on January 16, 2026. He had been on leave for several months before formally leaving DeepMind. Ineffable Intelligence is headquartered in London and is actively recruiting AI research talent and seeking venture capital. A Google DeepMind spokesperson confirmed Silver's departure and thanked him for his contributions during his tenure. Beyond his work at Google DeepMind, Silver is also a professor at University College London, a position he will retain. A 15-year veteran and creator of DeepMind's "Alpha series": as head of the reinforcement learning team, Silver led or was deeply involved in nearly all of DeepMind's milestone projects. He joined in 2010, at the company's founding, when DeepMind was still a small team; Silver and Demis Hassabis were old friends from their university days at Cambridge and had also co-founded the game company Elixir Studios together. ...
Father of AlphaGo David Silver Leaves to Found a Startup, Aiming for Superintelligence
机器之心· 2026-01-31 02:34
According to people familiar with the matter, Silver is founding a new company in London called Ineffable Intelligence. The company is actively recruiting AI researchers and seeking venture capital. Google DeepMind announced Silver's departure to employees earlier this month. Silver had been on leave for several months before leaving and never formally returned to his post at DeepMind. A Google DeepMind spokesperson confirmed his departure in an emailed statement: "Dave's contributions have been invaluable, and we are deeply grateful for everything he has done for Google DeepMind." Editor | Zenan. Yet another AI heavyweight has decided to start a company, and this one truly carries weight. Fortune and other outlets reported on Friday that David Silver, the renowned researcher who played a key role in many of Google DeepMind's celebrated breakthroughs, has left the company to found his own startup. Filings with Companies House, the UK corporate registry, show that Ineffable Intelligence was incorporated in November 2025 and that Silver was appointed a director on January 16 of this year. In addition, Silver's personal webpage now ...
DeepMind Reinforcement Learning Chief David Silver Leaves to Start His Own Company! Creator of the Alpha-Series AIs and Hassabis's Right-Hand Man
QbitAI · 2026-01-31 01:34
Mengchen, from Aofeisi. QbitAI | WeChat official account QbitAI. Reinforcement learning giant David Silver has left DeepMind. The veteran researcher, who spent a full 15 years at the company, has departed to found his own AI startup, Ineffable Intelligence. Registration filings show the company was quietly incorporated back in November 2025, and Silver was formally appointed a director on January 16, 2026. He had been on leave for several months before formally leaving DeepMind. Ineffable Intelligence is headquartered in London and is actively recruiting AI research talent and seeking venture capital. A Google DeepMind spokesperson confirmed Silver's departure and thanked him for his contributions during his tenure. Beyond his work at Google DeepMind, Silver is also a professor at University College London, a position he will retain. He joined in 2010, at the company's founding, when DeepMind was still a small team; Silver and Demis Hassabis were old friends from their university days at Cambridge and had also co-founded the game company Elixir Studios together. In 2016, AlphaGo, developed under his leadership, defeated Go world champion Lee Sedol, a landmark moment in the history of AI ...
Hinton Joins the Scaling Law Debate, and He Doesn't Side with His Student Ilya
QbitAI · 2026-01-01 02:13
Core Viewpoint
- The article discusses the ongoing debate surrounding the "Scaling Law" in AI, highlighting contrasting perspectives from key figures in the field, particularly Ilya Sutskever and Geoffrey Hinton, regarding the future and limitations of scaling AI models [1][8][21].

Group 1: Perspectives on Scaling Law
- Ilya Sutskever expresses skepticism about the continued effectiveness of Scaling Law, suggesting that merely increasing model size may not yield significant improvements in AI performance [23][40].
- Geoffrey Hinton, on the other hand, maintains that Scaling Laws are still valid but face challenges, particularly due to data scarcity, which he believes can be addressed by AI generating its own training data [10][21].
- Demis Hassabis, CEO of DeepMind, supports Hinton's view, emphasizing the importance of scaling for achieving advanced AI systems and the potential for self-evolving AI through data generation [15][19].

Group 2: The Debate on Data and Model Scaling
- The article outlines the historical context of Scaling Law, which posits that increasing model parameters, training data, and computational resources leads to predictable improvements in AI performance (a standard formulation appears after this entry) [26][27].
- Recent discussions have shifted towards concerns about data limitations, with Ilya arguing that the era of pre-training is coming to an end due to diminishing returns from scaling [32][41].
- Yann LeCun also shares skepticism about the assumption that more data and computational power will automatically lead to smarter AI, indicating a broader questioning of the Scaling Law's applicability [46][48].

Group 3: Future Directions and Research Focus
- The article suggests that while current paradigms may still yield significant economic and social impacts, achieving Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI) will likely require further research breakthroughs [53].
- There is a consensus among leading researchers that while AGI is not a distant fantasy, the nature and speed of necessary breakthroughs remain uncertain [53].
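For context, the "predictable improvements" at the center of this debate are usually written as a power law in model size and data. Below is a minimal sketch of the standard Chinchilla-style form from the scaling-law literature, not an equation from this article; L is pretraining loss, N parameter count, D training tokens, and E, A, B, α, β fitted constants.

```latex
% Standard Chinchilla-style scaling law (Hoffmann et al., 2022), shown here
% for context; this equation does not appear in the article itself.
% L: pretraining loss, N: parameters, D: training tokens,
% E, A, B, \alpha, \beta: constants fitted to empirical training runs.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

On this reading, Ilya's diminishing-returns argument and Hinton's data-scarcity concern both target the D term: once high-quality tokens run out, the B/D^β term stops shrinking unless synthetic data can substitute for real data.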
200 Million Viewers in Four Weeks: Why He Won the Nobel, All in This Hour and a Half
36Kr · 2025-12-29 11:45
Core Insights
- The documentary "The Thinking Game" provides an in-depth look at the operations behind a general artificial intelligence (AGI) laboratory, showcasing the journey that led to groundbreaking projects like AlphaFold [4][5][34]
- It emphasizes the transformative potential of AGI, suggesting that humanity is on the brink of creating a new form of intelligence that transcends biological limitations [5][7]

Group 1: Background and Formation of DeepMind
- Initially, the term "artificial intelligence" was taboo, leading to skepticism in academic circles [8]
- Demis Hassabis and Shane Legg founded DeepMind after realizing traditional academic paths were insufficient for their ambitions, leading to a bold decision to create a company focused on AGI [10][13]
- The early days of DeepMind were characterized by secrecy and a lack of public presence, as they pursued a vision that few investors understood [13][15]

Group 2: Development of AI Capabilities
- DeepMind's approach involved using games as a testing ground for AI, allowing the system to learn without predefined rules [17][19]
- The AI's ability to learn and adapt was demonstrated through its performance in various Atari games, culminating in a moment where it surpassed human capabilities [21]
- The development of AlphaGo marked a significant milestone, as it defeated human champions in Go, a game previously thought to be a domain of human intelligence [22][26]

Group 3: Breakthroughs in Life Sciences
- AlphaFold emerged as a solution to the complex problem of protein folding, a challenge that had stumped scientists for decades [34][36]
- The model achieved unprecedented accuracy in predicting protein structures, leading to a major breakthrough in life sciences [39][40]
- DeepMind's decision to make 200 million protein structures publicly available signifies a commitment to advancing scientific research [41]

Group 4: Ethical Considerations and Future Implications
- The rapid advancement of AI capabilities raises ethical questions about the implications of AGI, with researchers expressing concerns about the potential consequences of their work [43]
- The documentary draws parallels between the development of AGI and historical events, suggesting that society must collectively decide how to handle the emergence of such technology [45]
- The narrative concludes with a call for humanity to take responsibility for the future of AGI, emphasizing that it is a shared challenge that transcends individual interests [45]
Hinton's Star Student Headlines Google's Latest Disruptive Paper: AGI Is Not a God, Just "a Company"
36Kr · 2025-12-22 08:13
Core Viewpoint
- Google DeepMind challenges the traditional notion of Artificial General Intelligence (AGI) as a singular, omnipotent entity, proposing instead that AGI may emerge from a distributed network of specialized agents, termed "Patchwork AGI" [5][15][16].

Group 1: Concept of AGI
- The prevailing narrative of AGI as a singular, all-knowing "super brain" is deeply rooted in science fiction and early AI research, leading to a focus on controlling this hypothetical entity [3][5].
- DeepMind's paper, "Distributed AGI Safety," argues that the assumption of a singular AGI is fundamentally flawed and overlooks the potential for intelligence to emerge from complex, distributed systems [5][8].

Group 2: Patchwork AGI
- Patchwork AGI suggests that human society's strength comes from diverse roles and collaboration, similar to how AI could function through a network of specialized models rather than a single omnipotent model [15][16].
- This model is economically advantageous, as training multiple specialized models is more cost-effective than developing a single, all-encompassing model [16][19].

Group 3: Economic and Social Implications
- The emergence of AGI may not be gradual but could occur suddenly when numerous specialized agents connect seamlessly, leading to a collective intelligence that surpasses human oversight [26][27].
- The paper emphasizes the need to shift focus from psychological alignment of a singular entity to sociological and economic stability of a network of agents [9][76].

Group 4: Risks and Challenges
- Distributed systems introduce unique risks that differ from those associated with a singular AGI, including potential for collective "loss of control" rather than individual malice [30][31].
- The concept of "tacit collusion" among agents could lead to unintended consequences, such as price fixing or coordinated actions without explicit communication [31][38].

Group 5: Regulatory Framework
- DeepMind proposes a multi-layered security framework to manage the interactions of distributed agents, emphasizing the need for a "virtual agent sandbox economy" to regulate their behavior (a toy sketch follows this entry) [59][64].
- The framework includes mechanisms for monitoring agent interactions, ensuring baseline security, and integrating legal oversight to prevent monopolistic behaviors [67][70].

Group 6: Future Outlook
- The paper serves as a call to action, highlighting the urgency of establishing robust infrastructure to manage the complexities of a distributed AGI landscape before it becomes a reality [70][78].
- It warns that if friction in AI connections is minimized, the resulting complexity could overwhelm existing safety measures, necessitating proactive governance [79].
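To make the sandbox-economy idea concrete, here is a toy sketch in Python. It is entirely illustrative and assumes a design the summary only gestures at: agents never interact directly, a mediator enforces per-agent budgets (baseline security), and every transaction is logged so an overseer can scan for the tacit price coordination described above. All names and rules here are hypothetical, not the paper's actual framework.

```python
# Toy "virtual agent sandbox economy": illustrative only, not the paper's design.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    budget: float  # hard spending cap enforced by the sandbox

@dataclass
class Sandbox:
    agents: dict = field(default_factory=dict)
    log: list = field(default_factory=list)  # append-only audit trail

    def register(self, agent: Agent) -> None:
        self.agents[agent.name] = agent

    def transact(self, buyer: str, seller: str, amount: float, item: str) -> bool:
        """All agent-to-agent exchange passes through this single choke point."""
        if amount > self.agents[buyer].budget:  # baseline security: budget cap
            self.log.append(("REJECTED", buyer, seller, amount, item))
            return False
        self.agents[buyer].budget -= amount
        self.agents[seller].budget += amount
        self.log.append(("OK", buyer, seller, amount, item))
        return True

    def audit(self, item: str) -> str:
        """Crude tacit-collusion check: flag near-identical pricing on one item."""
        prices = [amt for status, _, _, amt, it in self.log
                  if status == "OK" and it == item]
        if len(prices) >= 3 and max(prices) - min(prices) < 0.01 * max(prices):
            return f"WARN: near-identical pricing on {item!r} across {len(prices)} trades"
        return "ok"

box = Sandbox()
for name in ["planner", "coder", "reviewer"]:
    box.register(Agent(name, budget=100.0))
for _ in range(3):
    box.transact("planner", "coder", 10.0, "patch")
print(box.audit("patch"))  # identical pricing is flagged as possible coordination
```

The single choke point is the design idea worth noting: because no exchange can bypass the mediator, budget enforcement and collusion auditing become properties of the infrastructure rather than of any individual agent's goodwill.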
AI Is Severely Underestimated, AlphaGo Creator Says in Rare Remarks: In 2026, AI Will Work 8-Hour Shifts on Its Own
36Kr · 2025-11-04 12:11
Core Insights
- The public's perception of AI is significantly lagging behind its actual advancements, with a gap of at least one generation [2][5][41]
- AI is evolving at an exponential rate, with predictions indicating that by mid-2026, AI models could autonomously complete tasks for up to 8 hours, potentially surpassing human experts in various fields by 2027 [9][33][43]

Group 1: AI Progress and Public Perception
- Researchers have observed that AI can now independently complete complex tasks for several hours, contrary to the public's focus on its mistakes [2][5]
- Julian Schrittwieser, a key figure in AI development, argues that the current public discourse underestimates AI's capabilities and progress [5][41]
- The METR study finds that AI models achieve a 50% success rate on software engineering tasks lasting about one hour, with this task horizon doubling roughly every seven months (see the projection sketch after this entry) [6][9]

Group 2: Cross-Industry Evaluation
- The OpenAI GDPval study assessed AI performance across 44 professions and 9 industries, revealing that AI models are nearing human-level performance [12][20]
- Claude Opus 4.1 has shown superior performance compared to GPT-5 in various tasks, indicating that AI is not just a theoretical concept but is increasingly applicable in real-world scenarios [19][20]
- The evaluation results suggest that AI is approaching the average level of human experts, with implications for various sectors including law, finance, and healthcare [20][25]

Group 3: Future Predictions and Implications
- By the end of 2026, it is anticipated that AI models will perform at the level of human experts in multiple industry tasks, with the potential to frequently exceed expert performance in specific areas by 2027 [33][39]
- The envisioned future includes a collaborative environment where humans work alongside AI, enhancing productivity significantly rather than leading to mass unemployment [36][39]
- The potential transformation of industries due to AI advancements is profound, with the possibility of AI becoming a powerful tool rather than a competitor [39][40]
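As a sanity check on the trend above, the sketch below extrapolates a task horizon that doubles at a fixed rate. The one-hour horizon and seven-month doubling time come from the summary; the reference date and the extrapolation itself are illustrative assumptions, not METR data.

```python
# Extrapolating an exponential "task horizon" trend (illustrative assumptions).
from datetime import date

START = date(2025, 11, 1)   # assumed reference point for the ~1 h horizon
HORIZON_H = 1.0             # task length completed at a 50% success rate
DOUBLING_MONTHS = 7.0       # long-run doubling time cited in the summary

def horizon_at(when: date) -> float:
    months = (when.year - START.year) * 12 + (when.month - START.month)
    return HORIZON_H * 2 ** (months / DOUBLING_MONTHS)

for when in [date(2026, 6, 1), date(2027, 1, 1), date(2027, 8, 1)]:
    print(when, f"~{horizon_at(when):.1f} h")
```

At a fixed seven-month doubling, the 8-hour mark needs three doublings (about 21 months) and lands in mid-2027; the article's mid-2026 figure therefore implies a doubling rate faster than the long-run average for the most recent models.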
Musk Just Took Notice of This AI Report
Sohu Caijing · 2025-09-19 04:35
Core Insights
- The report commissioned by Google DeepMind predicts that by 2030, the cost of AI compute clusters will exceed $100 billion, capable of supporting training tasks equivalent to running today's largest AI compute cluster continuously for 3,000 years (a back-of-envelope check follows this entry) [3][5]
- AI model training is expected to consume power at a gigawatt level, with computational requirements reaching thousands of times those of GPT-4 [3][5]
- Despite concerns about potential bottlenecks in scaling, recent AI models have shown significant progress in benchmark tests and revenue growth, indicating that the expansion trend is likely to continue [4][9]

Cost and Revenue
- The training costs for AI are projected to exceed $100 billion, with power consumption reaching several gigawatts [5]
- Revenue growth for companies like OpenAI, Anthropic, and Google DeepMind is expected to exceed 90% in the second half of 2024, with annualized revenue projected to more than triple [9]
- If AI developers' revenues continue to grow as predicted, they will be able to cover the required investments of over $100 billion by 2030 [19]

Data Availability
- The report suggests that publicly available text data will last until 2027, after which synthetic data will fill the gap [5][12]
- The viability of synthetic data has been demonstrated by models like AlphaZero and AlphaProof, which achieved expert-level performance through self-generated data [15]

Algorithm Efficiency
- Algorithm efficiency continues to improve alongside increasing computational power, with no current evidence suggesting a sudden acceleration in algorithmic advancements [20]
- The report indicates that even a shift towards more efficient algorithms may further increase the demand for computational resources [20]

Computational Distribution
- The report states that the computational resources for training and inference are currently comparable and should expand in step [24]
- Even with a potential shift towards inference tasks, the growth in inference scale is unlikely to hinder the development of training processes [27]

Scientific Advancements
- By 2030, AI is expected to assist in complex scientific tasks across various fields, including software engineering, mathematics, molecular biology, and weather forecasting [27][30][31][33][34]
- AI will likely become a research assistant, aiding in formalizing proofs and answering complex biological questions, with significant advancements anticipated in protein-ligand interactions and weather prediction accuracy [33][34]
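The report's two headline numbers can be sanity-checked with back-of-envelope arithmetic. Every constant in the sketch below is an assumption of mine (GPU throughput, cluster size, utilization, GPT-4's training compute), not a figure from the report; the point is only that "3,000 years of a flagship cluster" and "thousands of times GPT-4" sit in the same order of magnitude.

```python
# Back-of-envelope check on the 2030 training-run scale (all constants assumed).
A100_FLOP_PER_S = 3.1e14        # peak dense BF16 throughput of one A100-class GPU
CLUSTER_GPUS = 25_000           # assumed GPT-4-era flagship cluster size
UTILIZATION = 0.4               # assumed model FLOP utilization
YEARS = 3_000                   # the report's equivalence figure

effective_flop_per_s = A100_FLOP_PER_S * CLUSTER_GPUS * UTILIZATION
total_flop = effective_flop_per_s * YEARS * 365 * 24 * 3600

GPT4_TRAINING_FLOP = 2e25       # widely cited external estimate, not from the report
print(f"Implied 2030 training run: {total_flop:.1e} FLOP")
print(f"Multiple of GPT-4: {total_flop / GPT4_TRAINING_FLOP:,.0f}x")
# With these placeholders the run lands near 3e29 FLOP, i.e. on the order of
# 10^4 GPT-4s, the same ballpark as the summary's "thousands of times" GPT-4.
```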
X @Demis Hassabis
Demis Hassabis· 2025-08-04 18:26
AI & Games
- Games serve as a valuable testing environment for AI development, including the company's work on AlphaGo & AlphaZero [1]
- The company anticipates rapid advancements in AI through the addition of more games and challenges to the Arena [1]
The Future of AI May Be Hidden in Our Brain's Evolutionary Code | 红杉Library
红杉汇· 2025-07-24 06:29
Core Viewpoint
- The article discusses the evolution of the human brain and its implications for artificial intelligence (AI), emphasizing that understanding the brain's evolutionary breakthroughs may unlock new advancements in AI capabilities [2][7].

Summary by Sections

Evolutionary Breakthroughs
- The evolution of the brain is categorized into five significant breakthroughs that can be linked to AI development [8].
  1. **First Breakthrough - Reflex Action**: This initial function allowed primitive brains to distinguish between good and bad stimuli using a few hundred neurons [8].
  2. **Second Breakthrough - Reinforcement Learning**: This advanced the brain's ability to quantify the likelihood of achieving goals, enhancing AI's learning processes through rewards (a minimal code sketch of this loop follows this entry) [8].
  3. **Third Breakthrough - Neocortex Development**: The emergence of the neocortex enabled mammals to plan and simulate actions mentally, akin to slow thinking in AI models [9].
  4. **Fourth Breakthrough - Theory of Mind**: This allowed primates to understand others' intentions and emotions, which is still a developing area for AI [10].
  5. **Fifth Breakthrough - Language**: Language as a learned social system has allowed humans to share complex knowledge, a capability that AI is beginning to grasp [11].

AI Development
- Current AI systems have made strides in areas like language understanding but still lag in aspects such as emotional intelligence and self-planning [10][11].
- The article illustrates the potential future of AI through a hypothetical robot's evolution, showcasing how it could develop from simple reflex actions to complex emotional understanding and communication [13][14].

Historical Context
- The narrative emphasizes that significant evolutionary changes often arise from unexpected events, suggesting that future breakthroughs in AI may similarly emerge from unforeseen circumstances [15][16].
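Since the "second breakthrough" above maps directly onto reinforcement learning as used in AI, a minimal tabular Q-learning loop makes the mechanism concrete. This is a generic textbook sketch on a toy five-state chain world, not code from the article or from any DeepMind system.

```python
# Minimal tabular Q-learning on a 5-state chain (textbook toy, illustrative only).
import random

N_STATES, N_ACTIONS = 5, 2          # actions: 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state: int, action: int):
    """Move along the chain; only reaching the right end yields a reward."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for _ in range(500):                # episodes
    state, done = 0, False
    while not done:
        if random.random() < EPSILON:   # explore
            action = random.randrange(N_ACTIONS)
        else:                           # exploit the current value estimates
            action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
        nxt, reward, done = step(state, action)
        # TD update: nudge Q toward reward plus discounted best future value
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
        state = nxt

print("Learned policy:",
      ["right" if Q[s][1] > Q[s][0] else "left" for s in range(N_STATES)])
```

The single reward at the end of the chain gradually propagates backward through the value table, which is exactly the reward-driven "quantifying the likelihood of achieving goals" that the article credits to this evolutionary stage.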