Biological Intelligence

Zhang Yaqin, Foreign Member of the Chinese Academy of Engineering: Five New Trends in AI, with Physical Intelligence Evolving Rapidly
21 Shi Ji Jing Ji Bao Dao· 2025-10-01 05:32
21st Century Business Herald reporter Luo Yiqi, reporting from Beijing. An important marker is that over the past seven months the task length that agentic AI can handle has doubled while accuracy has surpassed 50%, which can accelerate the deployment of agents into every field.

The rapid development of the AI industry is putting iteration across many sectors on an accelerating trajectory. During the 2025 Snapdragon Summit China, Zhang Yaqin, foreign member of the Chinese Academy of Engineering and dean of the Institute for AI Industry Research (AIR) at Tsinghua University, said in his keynote that the new generation of artificial intelligence is a fusion of atoms, molecules, and bits, and a fusion of information intelligence, physical intelligence, and biological intelligence, which will bring enormous industrial opportunities. In terms of industry scale, the mobile internet era was at least 10 times larger than the PC internet era; in the AI era, the overall industry will be at least 100 times larger than the previous generation.

Over the past year, the AI industry has also shown five new trends. In Zhang Yaqin's analysis, the first trend is the move from discriminative AI to generative AI, and now toward agentic AI. The second trend is that, over the past year, the scaling law at the pre-training stage has slowed, with more of the work shifting to post-training stages such as reasoning and agentic applications. "This does not mean frontier models are no longer needed; the overall ceiling of intelligence keeps advancing, but the pace (of iteration) has slowed compared with the past two years," Zhang added, noting that scaling laws are expected to reappear in other areas such as agents and vision. In this process, inference costs have fallen 10-fold over the past year, yet the complexity of agents has pushed compute demand up 10-fold in step ...
[Global Times In-Depth] Will Digital Intelligence Replace Biological Intelligence?
Huan Qiu Shi Bao· 2025-08-21 22:54
Group 1
- The core discussion revolves around the potential coexistence and competition between biological intelligence and digital intelligence, with notable figures like Geoffrey Hinton and Stuart Russell presenting differing views on whether digital intelligence will replace biological intelligence [1][2][10].
- Hinton emphasizes that biological intelligence, evolved over billions of years, is adaptive and capable of complex interactions with the environment, while digital intelligence, designed by humans, excels in speed and data processing but lacks consciousness and self-awareness [4][5].
- The debate includes concerns about the risks posed by AI, with Hinton suggesting a 10% to 20% chance that AI could lead to human extinction due to misuse or dangerous evolution of AI systems [7][8].

Group 2
- The discussion highlights two opposing camps in the tech community: "Doomsayers" or "Slowdownists," who advocate for slowing AI development due to alignment issues, and "Effective Accelerationists," who support rapid AI advancement [8][9].
- Hinton's metaphor of raising a tiger illustrates the potential dangers of AI becoming uncontrollable as it becomes more integrated into various industries [5][6].
- The concept of "symbiotic intelligence" is introduced, suggesting that biological and digital intelligences could coexist and enhance each other, leading to advanced AI systems that integrate biological insights [12][13].
Russell, Author of AI's "Standard Textbook": I Don't Want Digital Intelligence to Replace Biological Intelligence
第一财经· 2025-07-27 11:14
Core Viewpoint
- The article discusses the perspectives of Professor Stuart Russell on the implications of artificial intelligence (AI) and its potential to replace human intelligence, emphasizing the importance of human values and the need for responsible AI development [1][2].

Group 1: AI and Human Intelligence
- Professor Russell believes that the question of whether digital intelligence will replace biological intelligence is not a matter of prediction but a matter of choice, and he prefers that digital intelligence does not replace human intelligence [1].
- He argues that the understanding of values is rooted in human happiness and well-being, and that coexistence with independent intelligent entities could lead to a loss of meaning for humanity [2].

Group 2: AGI and Employment
- Russell expresses skepticism about the ability of artificial general intelligence (AGI) to replace most cognitive workers in the near future, stating that current AI technologies are not yet capable of solving problems accurately [2][3].
- He warns that if AI progresses to the point of taking over many jobs, it could disrupt the educational and motivational structures that have supported society for centuries, leading to significant societal issues [3].

Group 3: AGI Competition and Regulation
- During the WAIC forum, Russell cautioned against the global arms race for AGI, stating that once created, AGI would be an infinite wealth creator and should be treated as a global public resource [3].
- He emphasized the necessity for effective regulation to minimize AGI risks to a very low level, akin to safety standards in nuclear energy, to prevent potential threats to human civilization [3].
Full Text of "AI Godfather" Hinton's WAIC Speech: We Are Raising a Tiger, Don't Expect to Be Able to "Turn It Off"
华尔街见闻· 2025-07-27 11:14
Core Viewpoint
- The development of AI is creating systems that may surpass human intelligence, raising concerns about control and safety [3][18].

Group 1: AI Development Paradigms
- There are two paradigms in AI development: the logical paradigm, which focuses on reasoning through symbolic manipulation, and the biologically based paradigm, which emphasizes learning and network connections [2][6].
- Large language models understand language much as humans do, and may produce linguistic "hallucinations" in a similar way [2][11].

Group 2: Advantages of Digital Intelligence
- Digital intelligence has two main advantages: the near-"immortality" of knowledge made possible by separating software from hardware, and the high efficiency of knowledge dissemination, which allows vast amounts of information to be shared almost instantaneously [2][17].
- When energy becomes cheap enough, digital intelligence could irreversibly surpass biological intelligence because of its ability to rapidly replicate knowledge [2][18].

Group 3: Human-AI Relationship
- The current relationship between humans and AI is likened to keeping a tiger as a pet, where the AI could eventually surpass human capabilities [3][19].
- There are only two options for managing AI: train it so that it never wants to harm humans, or eliminate it, and elimination is not feasible [19].

Group 4: AI's Impact on Industries
- AI has the potential to significantly enhance efficiency across nearly all industries, including healthcare, education, climate change, and new materials [19].
- Because AI cannot be eliminated, finding ways to train it to coexist with humanity is crucial for survival [19].

Group 5: International Cooperation on AI Safety
- There is a need to establish an international network of AI safety institutions to research how to train superintelligent AI to act benevolently [4][21].
- Collaboration among nations on AI safety is seen as a critical long-term issue, with the potential for shared research on training AI to assist rather than dominate humanity [5][21].
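Hinton's second advantage, efficient knowledge dissemination, comes down to the fact that identical digital models can share what they have learned by copying or averaging their weights directly, rather than through the slow, language-mediated teaching biological brains rely on. The sketch below illustrates only that idea; it assumes PyTorch is available, and the tiny model and averaging helper are hypothetical names, not anything described in the speech.

```python
# Minimal sketch: identical digital "agents" pooling knowledge by averaging weights.
# Assumes PyTorch; TinyNet and average_weights are illustrative names, not a real API.
import copy
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 2)

    def forward(self, x):
        return self.fc(x)

def average_weights(agents):
    """Return a state_dict that is the element-wise mean of all agents' weights."""
    state_dicts = [a.state_dict() for a in agents]
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return avg

# Three copies of the same architecture, each imagined to have trained on different data.
agents = [TinyNet() for _ in range(3)]

# One copy operation moves everything the group has learned into every member.
shared = average_weights(agents)
for agent in agents:
    agent.load_state_dict(shared)
```

A single load_state_dict call transfers everything one copy has learned into another, which is the bandwidth gap Hinton contrasts with human teaching.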
Exclusive | Russell, Author of AI's "Standard Textbook": I Don't Want Digital Intelligence to Replace Biological Intelligence
Di Yi Cai Jing· 2025-07-27 06:34
Core Viewpoint
- The creation of AGI (Artificial General Intelligence) is seen as a potential infinite wealth creator and should be treated as a global public resource, making competition meaningless [1][5].

Group 1: Perspectives on AGI
- Russell emphasizes that the question of whether digital intelligence should replace biological intelligence is a matter of choice, and he personally prefers humanity [1][2].
- He expresses skepticism about the current capabilities of AI, stating that it has not yet proven to be able to replace most cognitive labor [2][3].
- The potential for AGI to take over many jobs raises concerns about mass unemployment among educated individuals, which could disrupt long-standing societal incentives [3].

Group 2: Risks and Governance
- Russell warns against the global arms race for AGI, suggesting that humanity is on the brink of a critical juncture [5].
- He advocates for effective regulation to minimize AGI risks to extremely low levels, akin to safety standards in nuclear energy [5].
- The need for global AI governance is highlighted to prevent technological risks from threatening human civilization [5].
Will Digital Intelligence Replace Biological Intelligence?
小熊跑的快· 2025-07-27 00:26
Core Viewpoint
- The ultimate consideration in the AI industry is whether digital intelligence (silicon-based) can irreversibly surpass biological intelligence (carbon-based) when energy becomes sufficiently cheap [1].

Summary by Sections

Two Paradigms for Intelligence
- Digital intelligence can instantaneously propagate knowledge across groups by directly copying brain knowledge, a capability that biological intelligence cannot match [1].

Development Over Thirty Years
- The evolution of AI over the past three decades has led to significant advancements, including the acceptance of "feature vectors" by computational linguists and the introduction of the Transformer model by Google, showcasing the powerful capabilities of large language models [4][8].

Large Language Models
- Large language models understand language in a manner similar to humans, transforming words into feature vectors that can effectively combine with other words, akin to building structures with Lego blocks [2][8].

Knowledge Transfer and Efficiency
- The best method for transferring knowledge is through distillation from a "teacher" to a "student," allowing for efficient sharing of learned knowledge among digital agents [8].

Current Situation and Future Implications
- If energy is cheap, digital computation will generally have advantages over biological computation, particularly in knowledge sharing among agents [8].
- The potential for superintelligence to manipulate humans for power raises significant concerns about the future of AI and its implications for human safety [12].
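The "teacher-to-student" distillation mentioned above is a concrete training recipe: a smaller student model is fit to the softened output distribution of a larger teacher rather than to hard labels. Below is a minimal, illustrative sketch of that loss, assuming PyTorch; the two toy linear models, the temperature of 2.0, and the random batch are placeholder assumptions, not details from the article.

```python
# Minimal sketch of teacher -> student knowledge distillation (assumes PyTorch).
# The models, temperature, and data here are toy placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Linear(16, 4)   # stands in for a large, already-trained model
student = nn.Linear(16, 4)   # smaller model being taught
T = 2.0                      # temperature softens the teacher's distribution

def distillation_loss(student_logits, teacher_logits, temperature=T):
    # KL divergence between the softened teacher and student distributions.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature ** 2

x = torch.randn(32, 16)                  # a batch of toy inputs
with torch.no_grad():
    teacher_logits = teacher(x)          # the "teacher's" knowledge

optimizer = torch.optim.SGD(student.parameters(), lr=0.1)
loss = distillation_loss(student(x), teacher_logits)
loss.backward()
optimizer.step()
```

Hinton's broader point is that even this, the best channel available between different minds, transfers far fewer bits per step than directly copying weights between identical digital agents.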
"AI Godfather" Hinton's WAIC Speech: We Are Raising a Tiger, Don't Expect to Be Able to "Turn It Off"
Hua Er Jie Jian Wen· 2025-07-26 11:40
Core Viewpoint
- The 2025 World Artificial Intelligence Conference (WAIC) in Shanghai featured a speech by Geoffrey Hinton, discussing the fundamental differences between digital intelligence and biological intelligence and expressing concerns about the creation of AI that may surpass human intelligence [1][2].

Summary by Relevant Sections

AI Development Paradigms
- AI has two main paradigms: the logical paradigm, which focuses on reasoning through symbolic rules, and the biological paradigm, which emphasizes learning and understanding connections in networks [3][2].
- Hinton's early model in 1985 attempted to combine these theories to better understand vocabulary through semantic interactions [2].

Language Understanding
- Large language models (LLMs) understand language similarly to humans, potentially creating "hallucinations" in language [3].
- Words can be likened to multi-dimensional Lego blocks that adjust their shapes based on context, requiring proper connections to convey meaning [3][5].

Advantages of Digital Intelligence
- Digital intelligence has two key advantages: the permanence of knowledge storage and high efficiency in knowledge dissemination, allowing for the rapid sharing of vast amounts of information [3][11].
- When energy is cheap, digital intelligence could irreversibly surpass biological intelligence due to its ability to replicate knowledge quickly [3][11].

Concerns About AI
- The creation of AI that is smarter than humans raises concerns about survival and control, likening the situation to keeping a tiger as a pet [3][12].
- Hinton emphasizes that AI cannot be eliminated and will enhance efficiency across various industries, making it imperative to find ways to train AI to be beneficial rather than harmful [3][13].

International Cooperation
- Hinton advocates for the establishment of an international network of AI safety institutions to research how to train superintelligent AI to act in humanity's best interest [3][15].
- The potential for global cooperation exists, as all nations share a common interest in preventing AI from dominating the world [3][14].
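The "multi-dimensional Lego blocks that adjust their shapes based on context" correspond to contextual word embeddings: the same word is assigned a different vector depending on its neighbours. The sketch below shows one way this effect can be observed, assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint; it is an illustration of the metaphor, not code from the speech.

```python
# Rough sketch: the same word ("bank") receives different vectors in different
# contexts, illustrating the "Lego blocks that change shape" metaphor.
# Assumes the Hugging Face transformers library and the bert-base-uncased model.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence, word):
    """Return the contextual embedding of `word` inside `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]   # (seq_len, hidden_dim)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)]

v_river = word_vector("she sat on the bank of the river", "bank")
v_money = word_vector("she deposited cash at the bank", "bank")

# A similarity below 1.0 shows the "block" has changed shape with its context.
print(torch.nn.functional.cosine_similarity(v_river, v_money, dim=0))
```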
Hinton's Shanghai Speech: Large Models Are Very Similar to Human Intelligence; Beware of Rearing a Tiger That Becomes a Threat
量子位· 2025-07-26 09:01
Core Viewpoint
- Geoffrey Hinton emphasizes the importance of establishing a positive mechanism for AI development to ensure it does not threaten humanity, highlighting the complex relationship between AI and human intelligence [3][42][55].

Group 1: AI Development and Understanding
- Hinton discusses the evolution of AI over the past 60 years, identifying two main paradigms: logical reasoning and biological understanding, which have shaped current AI capabilities [8][10].
- He compares human understanding of language to that of large language models, suggesting that both operate on similar principles of feature interaction and semantic understanding [19][27].
- The efficiency of knowledge transfer in AI is significantly higher than in humans, with AI capable of sharing vast amounts of information rapidly across different systems [29][36].

Group 2: AI Safety and Collaboration
- Hinton warns that as AI becomes more intelligent, it may seek control and autonomy, necessitating international cooperation to ensure AI remains beneficial to humanity [42][55].
- He likens the current relationship with AI to raising a tiger cub, stressing the need for training AI to prevent it from becoming a threat as it matures [49][51].
- The call for a global AI safety institution is made, aimed at researching and training AI to assist rather than dominate humanity [55][56].
CAS Academician Zheng Hairong: Musk's Brain-Computer Interface Approach Is "Far Too Outdated"
经济观察报· 2025-07-01 11:30
Core Viewpoint
- The article emphasizes the need to explore non-invasive brain-computer interface (BCI) technologies rather than invasive methods, as proposed by Chinese Academy of Sciences academician Zheng Hairong [2][3][9].

Industry Overview
- The global BCI market is projected to grow from $2.35 billion in 2023 to $10.89 billion by 2033, indicating significant investment and interest in this sector [5].
- Major players in the BCI field include Neuralink, which focuses on invasive methods, and Synchron, which has developed a less invasive approach with support from tech giants like Apple and NVIDIA [2][7].

Technological Developments
- Neuralink has reported advancements in its invasive BCI technology, with patients able to control complex devices using their thoughts, showcasing a leap from simple cursor control to intricate robotic manipulation [5][6].
- Synchron has achieved key safety milestones with its BCI devices, including FDA approval for temporary implants and successful long-term trials without severe adverse events [8].

Critique of Current Approaches
- Zheng Hairong criticizes the invasive methods as "brute force engineering," arguing that they fail to understand the complexity of the human brain and its evolutionary history [3][9].
- He highlights the challenges of biological compatibility in invasive BCIs, noting that many electrodes fail due to the brain's natural resistance [6].

Alternative Approaches
- Zheng advocates for a non-invasive approach that utilizes external technologies like ultrasound and fMRI to read and potentially write brain signals without penetrating the skull [9][10].
- This method aims to decode brain activity by observing the relationship between blood flow and neural activity, likening it to a soldier and their supplies [10].

Future of AI and BCI
- Zheng outlines a three-stage evolution of AI, with the final stage being "biological intelligence" achieved through effective BCI integration [12][13].
- He envisions a future where hospitals transform into AI-driven data centers, moving away from traditional medical practices [14].

Ethical Considerations
- The article raises concerns about the ethical implications of BCI technology, emphasizing the need for strong regulations to prevent misuse and ensure human control over technology [14][15].
- Global legislative efforts are underway to protect brain data, indicating a growing recognition of the ethical challenges posed by BCI advancements [15].

Timeline for Adoption
- Zheng estimates that it may take 20 to 30 years for BCI technology to become a part of everyday life for the general public [17].
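To make the "reading the brain from outside the skull" idea concrete, the toy sketch below trains a linear decoder that maps simulated blood-flow-like signals to an intended action, which is the general shape of non-invasive decoding. All of the data, feature sizes, and the classifier choice are illustrative assumptions and do not represent Zheng Hairong's actual methods.

```python
# Toy sketch of the decoding idea behind non-invasive BCI: learn a mapping from
# hemodynamic-style signals (e.g. fMRI/ultrasound-derived features) to intended
# states. All data here are simulated; this is not Zheng Hairong's method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_trials, n_voxels = 200, 50
intent = rng.integers(0, 2, size=n_trials)           # 0 = "rest", 1 = "move hand"
pattern = rng.normal(size=n_voxels)                   # spatial response pattern
signals = rng.normal(size=(n_trials, n_voxels)) + np.outer(intent, pattern)

X_train, X_test, y_train, y_test = train_test_split(
    signals, intent, test_size=0.25, random_state=0)

decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("decoding accuracy:", decoder.score(X_test, y_test))
```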
CAS Academician Zheng Hairong: Musk's Brain-Computer Interface Approach Is "Far Too Outdated"
Jing Ji Guan Cha Wang· 2025-07-01 09:38
Core Viewpoint
- The global brain-computer interface (BCI) sector is experiencing significant advancements, with companies like Neuralink and Synchron leading the way in different technological approaches [2][4][6].

Group 1: Company Developments
- Neuralink has increased its number of human trial participants to 7 and demonstrated the ability to control a robotic arm using thoughts [2].
- Synchron has achieved native integration with Apple devices through a new protocol, enhancing its market presence [2].
- Precision Neuroscience received FDA approval for a temporary implantable device, marking a step towards commercialization [6].

Group 2: Market Growth
- The global BCI market is projected to grow from $2.35 billion in 2023 to $10.89 billion by 2033, indicating a substantial increase in investment and interest [4].

Group 3: Technological Approaches
- Two main technological paths are identified: invasive methods like those used by Neuralink, which require surgical implantation, and less invasive methods like those of Synchron, which utilize blood vessels for sensor delivery [2][6].
- The invasive approach has shown promising results, allowing patients to perform complex tasks, but faces challenges related to biological compatibility [5][6].

Group 4: Alternative Perspectives
- Chinese Academy of Sciences academician Zheng Hairong advocates for non-invasive BCI technologies, arguing that current invasive methods are outdated and lack imagination [3][8].
- Zheng proposes using external physical methods like ultrasound and fMRI to read and potentially write brain information without surgical intervention [8][9].

Group 5: Future Implications
- Zheng predicts that the future of AI will involve a three-stage evolution, culminating in "biological intelligence" achieved through effective brain-computer integration [10][11].
- He envisions a healthcare system transformed by AI, moving away from traditional methods to a data-centric model that predicts and manages diseases [12].

Group 6: Ethical Considerations
- The ethical implications of BCI technology are becoming a global concern, with jurisdictions such as Chile and the US state of Colorado taking legislative steps to protect brain data [13].
- Zheng emphasizes the need for strong regulations to ensure that brain-computer interfaces remain under human control, highlighting the potential risks of losing that control [12][13].

Group 7: Timeline for Adoption
- Zheng estimates that it may take 20 to 30 years for BCI technology to become a part of everyday life for the general public [14].