Recursive Self-Improvement
Poetiq CEO: Recursive Self-Improvement Is the Ultimate Goal of AI
Poetiq is an AI company focused on meta-system architecture. Its core idea is not to train a larger model, but to use software-level system design to automatically build "systems that call models." Ian Fischer, co-CEO of the AI startup Poetiq, combines the backgrounds of serial entrepreneur and veteran DeepMind researcher. He previously founded Apportable, a cross-platform development company (porting iOS games to Android) that was later acquired by Google. In 2015 he joined DeepMind, where he spent a decade focused on large-model reasoning and systems optimization; together with his partner Shumeet Baluja, he identified bottlenecks in complex reasoning for large models. In June 2025, Ian Fischer and his partner co-founded Poetiq, closing a $45.8 million seed round within six months. In this interview, Ian Fischer discusses the development and application of AI, in particular the company's Poetiq system and its AI reasoning tools for large language models (LLMs). He shares his experience building an iPhone app with GPT-5 over the summer and encourages people to experiment with AI without constraints. He stresses the importance of recursive self-improvement in AI development, believing this approach can ...
Measured in parameters, how large an AI model does our 1,300-gram human brain correspond to?
36Kr · 2026-02-27 12:25
Group 1
- The human brain is estimated to have approximately 86 billion neurons, which translates to a model size of about 86 billion parameters; when counting roughly 7,000 synapses per neuron, it equates to roughly 600 trillion parameters [1][2]
- The processing capability of the human brain is complex: neurons function more like processor cores than simple switches, and synaptic gaps of around 20 to 40 nanometers are comparable to chip technology from 2012 [8][9]
- The smallest unit of signal transmission in the human brain is the ion channel protein, which operates at an atomic scale of 0.3 to 0.5 nanometers, surpassing current silicon-based chip technology [12]

Group 2
- The human brain operates at a constant power draw of about 20 watts, which includes managing various bodily functions, while high-intensity thinking increases consumption by only about 1 watt [19][21]
- In comparison, AI models like ChatGPT consume about 0.34 watt-hours per query, indicating that the human brain is still more energy-efficient by two orders of magnitude [22][23]
- The human brain's information-processing efficiency is significantly higher than that of AI models: humans require far less data to achieve high levels of generalization [58][60]

Group 3
- The context window of advanced AI models like DeepSeek V3 is 128K tokens, while the human brain's short-term memory is limited to about 7±2 chunks; long-term memory, however, can retain vast amounts of information [34][37][41]
- The human brain excels at compression and abstraction, distilling experiences into essential judgments rather than relying on a fixed context window [42][44]
- AI models are beginning to mimic human memory processes, such as using visual tokens for information compression, reflecting similarities in how the two systems manage information [47][50]

Group 4
- The training data for AI models like GPT-4 is around 130 trillion tokens, while a human child is estimated to encounter about 200 million words by adulthood, highlighting the vast difference in sample efficiency [55][56]
- The human brain comes pre-equipped with prior knowledge from evolution, allowing rapid learning and recognition, unlike AI, which starts from scratch [63]
- The concept of embodied cognition suggests that human thought is shaped by the body, a factor AI currently lacks, raising questions about the nature of intelligence [64][68]

Group 5
- The human brain's capabilities are static, whereas AI models are evolving rapidly, with significant advances in parameters and algorithms occurring within short timeframes [79][81]
- Recursive self-improvement, in which AI designs better algorithms for itself, poses a potential challenge to the static nature of human intelligence [86]
- Where AI advancement and human cognitive capability will intersect remains uncertain, with the potential for AI to reach or surpass human intelligence in the future [12][86]
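The headline numbers above are straightforward multiplications. A quick back-of-the-envelope check, using the article's own ballpark figures (none of these inputs are measured values):

```python
# Rough arithmetic behind the article's estimates (all inputs are
# the article's own ballpark figures, not measured values).

neurons = 86e9              # ~86 billion neurons
synapses_per_neuron = 7_000

# Treating each synapse as one learnable weight gives the
# "parameter count" of the brain cited in Group 1.
brain_params = neurons * synapses_per_neuron
print(f"brain 'parameters': {brain_params:.2e}")   # ~6.0e14, i.e. ~600 trillion

# Sample efficiency (Group 4): a GPT-4-scale training corpus vs.
# the words a child is estimated to encounter by adulthood.
gpt4_tokens = 130e12
child_words = 200e6
print(f"data ratio: {gpt4_tokens / child_words:,.0f}x")   # 650,000x
```

The ~650,000× gap is what the article means by "sample efficiency": the brain reaches strong generalization from roughly five orders of magnitude less linguistic input.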
In Depth | Former Google CEO: Humanoid robots may be dominated by China; the world will be flooded with cheap Chinese robots, just as it will be flooded with cheap Chinese electric vehicles
Z Potentials · 2025-10-03 02:09
Core Insights
- The article discusses the US-China competition in artificial intelligence (AI), emphasizing the two countries' differing approaches and the rivalry's potential outcomes [3][4][5]
- Eric Schmidt highlights the importance of energy supply to the US's ability to leverage its advantages in AI and AGI, suggesting that without sufficient energy the US may struggle to maintain its lead [5][8]
- The conversation also covers the risks associated with AI, including misinformation, cybersecurity threats, and biological safety concerns, and the need for proactive measures to mitigate them [9][10][11]

AI Competition
- The US is perceived to be pursuing advanced AI and AGI, while China focuses on applying AI across products and services in a more traditional manner [4]
- Schmidt believes the hardware restrictions the US has imposed on China will hinder China's competitiveness in the AI race [4]
- The US has advantages in software development, but China is expected to dominate the robotics sector, much as it has in electric vehicles [6][7]

Energy Constraints
- The US faces significant energy-supply challenges that could limit its ability to fully exploit its advantages in AI and AGI [5][8]
- Schmidt notes that the US will need to build an additional 92 gigawatts of generation capacity by 2030 to meet data-center demand, underscoring the urgency of the energy problem [8]

AI Risks and Mitigation
- The article discusses the potential for AI-related disasters and the importance of learning from past crises to implement effective regulation and controls [9][10]
- Schmidt identifies misinformation, cybersecurity, and biological safety as the key threats that must be addressed proactively [10][11]

Recommendations for Founders
- Schmidt advises founders to focus on rapid action and learning, emphasizing that the barriers to starting a company are lower than ever [16][17]
- He stresses the importance of building scalable platforms that leverage network effects to create significant wealth for founders [19][20]

Historical Significance
- The emergence of AI is compared to historical inventions like electricity and transportation, suggesting the next decade will be crucial in shaping the future [21][22]
- Companies and nations that embrace AI will likely emerge as winners, while those that lag behind may face significant challenges [22]
Breaking: OpenAI officially releases o3-pro! Altman excitedly updates his blog: "The Gentle Singularity"
机器之心 · 2025-06-11 00:24
Core Insights
- OpenAI has launched o3-pro, a new model that reportedly shows significant improvements over its predecessor, o3, particularly in science, education, programming, data analysis, and writing [5][9][22]

Performance Evaluation
- Benchmark results indicate that o3-pro has a clear advantage over o3, with higher ratings in clarity, comprehensiveness, instruction adherence, and accuracy [9][11]
- The model was evaluated under a strict "4/4 reliability" standard and demonstrated outstanding performance [11][13]
- On the ARC-AGI semi-private evaluation set, o3-pro's performance was similar to o3's, but at a higher cost [14]

Features and Capabilities
- o3-pro supports both text and image input, with a 200K-token context window and a maximum output of 100K tokens [18]
- The model's knowledge cutoff is June 1, 2024, meaning it lacks information from the past year but can use tools to retrieve additional context [18]
- API pricing for o3-pro is $20 per million input tokens and $80 per million output tokens, 87% cheaper than o1-pro but still considered expensive [22]

User Feedback
- Early user tests show that o3-pro is faster and more accurate than previous models, with notable improvements on programming tasks [29][34]
- Some users expressed disappointment, indicating that not all expectations were met [37]

Future Outlook
- Sam Altman's blog post discusses AI's potential to significantly boost productivity and scientific progress, suggesting the future may hold unprecedented advances [40][44]
- The post emphasizes making superintelligence widely accessible and affordable, while calling for societal discussion of the implications of such technology [59][60]
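The pricing figures above can be sanity-checked in a few lines. A minimal sketch, assuming o1-pro's published rates of $150/$600 per million input/output tokens (not stated in the article) and a made-up request size for illustration:

```python
# Per-million-token API rates: o3-pro from the article, o1-pro from
# OpenAI's published price list at the time (assumed here).
O3_PRO = {"input": 20.0,  "output": 80.0}
O1_PRO = {"input": 150.0, "output": 600.0}

def query_cost(rates, input_tokens, output_tokens):
    """Dollar cost of one request at the given per-1M-token rates."""
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1e6

# Hypothetical request: 5K tokens in, 2K tokens out.
cost_o3 = query_cost(O3_PRO, 5_000, 2_000)
cost_o1 = query_cost(O1_PRO, 5_000, 2_000)
print(f"o3-pro: ${cost_o3:.2f}, o1-pro: ${cost_o1:.2f}")   # $0.26 vs $1.95

# The "87% cheaper" claim: input and output rates drop by the same
# factor (20/150 == 80/600), i.e. roughly 86.7% less.
discount = 1 - O3_PRO["input"] / O1_PRO["input"]
print(f"discount: {discount:.1%}")   # 86.7%
```

Because both rates fall by the same factor, the discount is independent of a request's input/output mix, which is why a single "87% cheaper" number is meaningful.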