Large Language Models
"Godfather of AI" Hinton's Full WAIC Speech: We Are Raising a Tiger, Don't Count on Being Able to "Switch It Off"
华尔街见闻· 2025-07-27 11:14
Core Viewpoint
- The development of AI is creating systems that may surpass human intelligence, raising concerns about control and safety [3][18].

Group 1: AI Development Paradigms
- There are two paradigms in AI development: the logical paradigm, which treats intelligence as reasoning through symbolic manipulation, and the biologically inspired paradigm, which treats intelligence as learning connection strengths in a network [2][6].
- Large language models understand language in much the same way humans do, which implies that humans, too, can produce "hallucinated" (confabulated) language [2][11].

Group 2: Advantages of Digital Intelligence
- Digital intelligence has two main advantages: knowledge is effectively "immortal" because software is separated from hardware, and knowledge spreads with extreme efficiency, allowing vast amounts of information to be shared almost instantaneously [2][17].
- Once energy becomes cheap enough, digital intelligence could irreversibly surpass biological intelligence because of its ability to replicate knowledge rapidly [2][18].

Group 3: Human-AI Relationship
- The current relationship between humans and AI is likened to keeping a tiger cub as a pet: the animal will eventually outgrow its keeper [3][19].
- There are only two options for managing AI: train it so that it never wants to harm humans, or eliminate it, and elimination is not feasible [19].

Group 4: AI's Impact on Industries
- AI has the potential to significantly enhance efficiency across nearly all industries, including healthcare, education, climate change, and new materials [19].
- Because AI cannot be eliminated, finding ways to train it to coexist with humanity is crucial for survival [19].

Group 5: International Cooperation on AI Safety
- There is a need to establish an international network of AI safety institutions to research how to train superintelligent AI to act benevolently [4][21].
- Collaboration among nations on AI safety is a critical long-term issue, with the potential for shared research on training AI to assist rather than dominate humanity [5][21].
Live from WAIC 2025 | "Godfather of AI" Hinton Warns: Future Superintelligence Will Easily Manipulate Humans
Mei Ri Jing Ji Xin Wen· 2025-07-27 08:59
Group 1
- Geoffrey Hinton, a pioneering figure in deep learning and a recipient of both the Nobel Prize and the Turing Award, attended WAIC 2025 in Shanghai, marking his first visit to China [1]
- Hinton warned that future superintelligence could easily manipulate humans, cautioning against "raising a tiger" we cannot control [1][5]
- He traced the theoretical origins of large models, highlighting two paradigms in AI development: logical reasoning and biologically inspired learning [2]

Group 2
- Hinton's early work in 1985 involved a small model that combined both paradigms to explain how humans understand language, an approach he regards as the ancestor of today's large language models [4]
- He addressed the issue of "hallucination" in large models, suggesting that human language understanding may produce similarly fabricated expressions [4]
- Hinton emphasized how inefficient knowledge transfer is in human communication compared with the high efficiency of digital intelligence [4][5]

Group 3
- Hinton expressed concern over the gap between biological computation and digital intelligence, noting that AI agents could seek more control and manipulate humans [5]
- He called for the establishment of an international community of AI safety research institutes to develop "good AI" that does not threaten human authority [5]

Group 4
- WAIC also featured discussions among industry leaders, including former Google CEO Eric Schmidt, who echoed the need for global cooperation to keep technology under human control [6][8]
- Schmidt highlighted the transformative potential of AI in business workflows while stressing the importance of preventing uncontrolled AI decision-making [8]
- He advocated for dialogue and collaboration between nations, particularly the US and China, to address the challenges and opportunities presented by AI [8]
"Godfather of AI" Geoffrey Hinton's First Speech in China: AI Is Like a Tiger Cub, and Are Humans Themselves Large Language Models?
AI前线· 2025-07-27 04:30
Core Viewpoint
- Geoffrey Hinton emphasizes the potential of AI to surpass human intelligence and the necessity for global cooperation to ensure AI remains beneficial to humanity [3][14][17]

Group 1: AI and Human Intelligence
- Hinton compares human cognition to large language models, suggesting that both can produce "hallucinations," but AI can transmit knowledge far more efficiently through shared parameters [3][9]
- The relationship between humans and AI is likened to raising a tiger cub, where the challenge lies in ensuring AI does not become a threat as it matures [14][17]
- Hinton argues that AI can significantly enhance efficiency across various industries, making its elimination impractical [3][14]

Group 2: AI Development Paradigms
- Hinton discusses two paradigms of AI, logical reasoning and biological learning, highlighting how the understanding of intelligence evolved through the study of neural connections [4][5]
- He traces the historical development of AI models, from the simple models of the 1980s to today's complex architectures such as Transformers [5][7]

Group 3: Knowledge Transfer and Efficiency
- Knowledge transfer between humans is limited to roughly 100 bits per second at best, while AI systems can share knowledge at a vastly higher rate, potentially billions of bits at a time [12][13]
- Hinton introduces the concept of knowledge distillation, in which a larger neural network transfers its knowledge to a smaller network, akin to a teacher-student relationship (a toy sketch follows this summary) [11][12]

Group 4: Global Cooperation on AI Safety
- Hinton calls for the establishment of an international community focused on AI safety, where countries can collaborate on training AI to be beneficial rather than harmful [15][17]
- He suggests that despite differing national interests, countries share the goal of preventing AI from dominating humanity, which could lead to cooperative efforts similar to those during the Cold War [15][17]
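The teacher-student distillation mentioned above can be made concrete with a small, self-contained sketch. The snippet below is a toy illustration, not Hinton's or any production setup: the temperature value, the example logits, and the `distillation_loss` helper are assumptions chosen only to show how a student would be trained to match a teacher's softened output distribution.

```python
# A minimal sketch (illustrative toy, not Hinton's actual setup) of knowledge
# distillation: a large teacher network's softened output distribution becomes
# the training target for a smaller student network.
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max()                # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the softened teacher and student distributions."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return float(np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student))))

# Toy example: the teacher is confident about class 0 but still assigns some
# probability mass to class 1, structure the student can learn from.
teacher = np.array([6.0, 3.0, -2.0])
student = np.array([2.0, 2.0, 2.0])    # untrained student: uniform beliefs
print(distillation_loss(teacher, student))   # the quantity a student would minimize
```

A higher temperature flattens the teacher's distribution, exposing the small probabilities it assigns to wrong-but-related classes, which is what makes the softened targets richer than hard labels.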
Tsinghua University Develops a Large AI Model That Accurately Predicts Human Aging, Published in Top Medical Journal Nature Medicine
生物世界· 2025-07-27 02:49
Core Viewpoint
- The article discusses a groundbreaking study that introduces a large language model (LLM)-based method for predicting biological age, which estimates overall and organ-specific aging from routine health examination reports, aiming to enhance health management for the general public [3][4][5].

Group 1: Research Background and Importance
- Accurately assessing an individual's aging level is crucial for identifying health risks and preventing age-related diseases, yet current aging indicators face methodological limitations and lack broad applicability [2][8].
- Aging is a major risk factor for mortality and chronic disease and contributes substantially to society's health burden; understanding both overall and organ-specific aging is essential for comprehensive health assessment [7].

Group 2: Methodology and Framework
- The research team developed a novel framework that converts health examination data (e.g., blood pressure, liver function) into textual reports for input into a large language model (e.g., Llama3), which analyzes numerous indicators to produce two key outputs: an overall biological age and organ-specific ages for six major organs (a hypothetical sketch of this report-to-prompt step follows this summary) [10][11].
- The LLM does not rely on preset formulas; it draws on a pre-trained medical knowledge base to infer aging metrics from an individual's health details [12].

Group 3: Validation and Results
- The study validated the predictive framework on data from over 10 million individuals across six major databases, achieving strong accuracy: 75.7% for predicting all-cause mortality risk, 70.9% for coronary heart disease risk, and 81.2% for liver cirrhosis risk, outperforming traditional models [15][20].
- The gap between predicted and chronological age correlates with increased health risk: each additional year of predicted age gap raises all-cause mortality risk by 5.5% and coronary heart disease risk by 7.2% [16].

Group 4: Clinical Applications and Innovations
- The research introduces a disease "radar" early-warning system, showing that a larger cardiovascular age gap corresponds to a 45% increase in coronary heart disease risk, while a larger liver age gap corresponds to a 63% increase in liver cirrhosis risk [19].
- The study identifies 322 key proteins as potential "aging accelerators," 56.7% of which are new targets linked to mortality risk, highlighting the model's potential for personalized health management [19].
- By analyzing three years of consecutive health examinations, the LLM can generate individual aging-rate curves, improving disease-onset prediction threefold compared with a single examination [19].
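To make the "examination report in, ages out" framework easier to picture, here is a hypothetical sketch of that report-to-prompt step. The field names, the list of organs, the prompt wording, and the `query_llm` placeholder are all illustrative assumptions; the paper's actual template, indicators, and model interface are not reproduced here.

```python
# Hypothetical sketch: tabular health-examination values are rendered as a
# textual report and sent to an LLM, which is asked for an overall biological
# age and organ-specific ages. Field names, organs, prompt wording, and the
# query_llm helper are assumptions, not the paper's actual pipeline.

EXAM = {
    "chronological age (years)": 52,
    "systolic blood pressure (mmHg)": 138,
    "ALT (U/L)": 41,             # liver function marker
    "creatinine (umol/L)": 88,   # kidney function marker
    "fasting glucose (mmol/L)": 6.1,
}

ORGANS = ["heart", "liver", "kidney", "lung", "brain", "pancreas"]  # illustrative choice

def build_prompt(exam: dict) -> str:
    report = "\n".join(f"- {name}: {value}" for name, value in exam.items())
    return (
        "Health examination report:\n"
        f"{report}\n\n"
        "Based on this report, estimate the overall biological age and the "
        f"biological age of each organ ({', '.join(ORGANS)}), in years."
    )

def query_llm(prompt: str) -> str:
    """Placeholder for a call to a medical-domain LLM such as Llama3."""
    raise NotImplementedError("wire this to the model endpoint of your choice")

print(build_prompt(EXAM))
```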
Will Digital Intelligence Replace Biological Intelligence?
小熊跑的快· 2025-07-27 00:26
Core Viewpoint
- The ultimate question for the AI industry is whether digital intelligence (silicon-based) can irreversibly surpass biological intelligence (carbon-based) once energy becomes sufficiently cheap [1]

Summary by Sections

Two Paradigms for Intelligence
- Digital intelligence can propagate knowledge across an entire group almost instantaneously by directly copying what one copy has learned, a capability that biological intelligence cannot match (a toy sketch of this kind of weight sharing follows this summary) [1]

Development Over Thirty Years
- The evolution of AI over the past three decades has brought major advances, including the acceptance of "feature vectors" by computational linguists and Google's introduction of the Transformer model, showcasing the powerful capabilities of large language models [4][8]

Large Language Models
- Large language models understand language in a manner similar to humans, transforming words into feature vectors that combine with other words much like building structures from Lego blocks [2][8]

Knowledge Transfer and Efficiency
- The best way to transfer knowledge is distillation from a "teacher" to a "student," which allows learned knowledge to be shared efficiently among digital agents [8]

Current Situation and Future Implications
- If energy is cheap, digital computation will generally hold the advantage over biological computation, particularly in knowledge sharing among agents [8]
- The possibility that a superintelligence would manipulate humans in order to gain power raises significant concerns about the future of AI and human safety [12]
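The claim that digital copies can propagate knowledge almost instantaneously comes down to the fact that identical networks can pool their gradients or weights. The sketch below is a toy illustration under assumed conditions (a trivial linear model and synthetic data): several copies each learn from their own data, and averaging their gradients folds everything they learned into one shared weight vector.

```python
# Toy sketch (illustrative, not Hinton's setup): identical digital copies share
# knowledge by averaging gradients computed on different data shards, so a
# single shared weight vector instantly reflects what every copy has learned.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])               # ground truth the copies try to learn
shared_w = np.zeros(2)                       # weights common to all copies

def gradient_on_shard(w, X, y):
    """Gradient of mean squared error for a linear model on one copy's data."""
    return 2 * X.T @ (X @ w - y) / len(y)

n_copies, lr = 4, 0.1
for step in range(200):
    grads = []
    for _ in range(n_copies):                # each copy sees different data
        X = rng.normal(size=(32, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=32)
        grads.append(gradient_on_shard(shared_w, X, y))
    shared_w -= lr * np.mean(grads, axis=0)  # one averaged update for everyone

print(shared_w)   # converges toward [2, -3], learned jointly by all copies
```

Biological brains cannot do this: each brain's "weights" are tied to its own hardware, which is why distillation through language is the only transfer channel available to humans.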
"Godfather of AI" Hinton at WAIC: We Are Raising a Tiger, Don't Count on Being Able to "Switch It Off"
Hua Er Jie Jian Wen· 2025-07-26 11:40
Core Viewpoint
- The 2025 World Artificial Intelligence Conference (WAIC) in Shanghai featured a speech by Geoffrey Hinton on the fundamental differences between digital and biological intelligence, in which he expressed concern that we are creating AI that may surpass human intelligence [1][2].

Summary by Relevant Sections

AI Development Paradigms
- AI has two main paradigms: the logical paradigm, which focuses on reasoning through symbolic rules, and the biological paradigm, which emphasizes learning connection strengths in networks [3][2].
- Hinton's early model from 1985 attempted to combine these two views, using interactions between semantic features to understand vocabulary [2].

Language Understanding
- Large language models (LLMs) understand language similarly to humans, and may likewise produce "hallucinations" in language [3].
- Words can be likened to multi-dimensional Lego blocks that adjust their shapes based on context and must connect properly with their neighbors to convey meaning (a toy sketch follows this summary) [3][5].

Advantages of Digital Intelligence
- Digital intelligence has two key advantages: the permanence of knowledge storage and highly efficient knowledge dissemination, allowing vast amounts of information to be shared rapidly [3][11].
- When energy is cheap, digital intelligence could irreversibly surpass biological intelligence because of its ability to replicate knowledge quickly [3][11].

Concerns About AI
- Creating AI that is smarter than humans raises concerns about survival and control; Hinton likens the situation to keeping a tiger as a pet [3][12].
- Hinton emphasizes that AI cannot be eliminated and will enhance efficiency across many industries, making it imperative to find ways to train AI to be beneficial rather than harmful [3][13].

International Cooperation
- Hinton advocates establishing an international network of AI safety institutions to research how to train superintelligent AI to act in humanity's best interest [3][15].
- The potential for global cooperation exists, because all nations share a common interest in preventing AI from dominating the world [3][14].
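The "Lego blocks that adjust their shape" picture maps naturally onto attention-style context mixing. The sketch below is a deliberately tiny illustration with made-up 4-dimensional feature vectors for three words; the vectors and the single dot-product attention step are assumptions, not the configuration of any particular LLM.

```python
# Toy sketch (illustrative only): a word's feature vector is reshaped by its
# context. Each word's new vector is a similarity-weighted blend of the vectors
# around it, so the same word ends up with a different "shape" in each context.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical 4-dimensional feature vectors for a 3-word vocabulary.
vectors = {
    "bank":  np.array([1.0, 0.2, 0.0, 0.5]),
    "river": np.array([0.9, 0.1, 0.0, 0.7]),
    "money": np.array([0.0, 1.0, 0.8, 0.1]),
}

def contextualize(word, context):
    """Blend a word's vector with its context via dot-product attention."""
    q = vectors[word]
    keys = np.stack([vectors[w] for w in context])
    weights = softmax(keys @ q)   # how strongly each neighbor pulls on the word
    return weights @ keys         # the word's context-adjusted representation

print(contextualize("bank", ["bank", "river"]))   # pulled toward the "river" sense
print(contextualize("bank", ["bank", "money"]))   # pulled toward the "money" sense
```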
What Were the Highlights of Day One of WAIC 2025? Get Them in One Chart
news flash· 2025-07-26 11:04
Core Insights
- The 2025 World Artificial Intelligence Conference (WAIC) opened in Shanghai, featuring over 1,500 attendees and more than 140 forums, with an exhibition area exceeding 70,000 square meters, marking the largest scale in its history [1][5][6]

Group 1: Conference Highlights
- The conference theme is "Intelligent Era, Global Cooperation," emphasizing the rapid advance of artificial intelligence (AI) and its integration into various sectors [4][5]
- The event showcased over 3,000 cutting-edge exhibits and more than 100 new products making their global or Chinese debuts [1][5]
- The "Global Governance Action Plan for Artificial Intelligence" was released, promoting innovation and international cooperation in AI development [14]

Group 2: Government and Industry Perspectives
- Premier Li Qiang highlighted the need for AI to serve humanity and emphasized the importance of balancing development with safety [5][6]
- Shanghai's Mayor Gong Zheng stated the city's mission to build a globally competitive AI technology and industry innovation hub, aligning with national development goals [6]
- The "Shanghai High-Level Autonomous Driving Leading Zone" action plan aims to establish a leading area for high-level autonomous driving by 2027, fostering an internationally competitive smart connected industry cluster [17]

Group 3: Technological Innovations and Developments
- Zhiyuan Robotics won the SAIL Star Award for its "Qiyuan General Embodied Model," recognized for its innovation and industry impact [19][20]
- Alibaba introduced the Quark AI glasses, which integrate various functions and AI capabilities to enhance the user experience in navigation and payments [21]
- Baidu's new digital human technology, NOVA, demonstrated significant commercial potential, having achieved 55 million in GMV in an earlier application [22]
Nobel Laureate and "Godfather of AI" Hinton Speaks in Shanghai: Beware of Superintelligence Taking Control of the World
Bei Ke Cai Jing· 2025-07-26 09:37
Group 1
- The World Artificial Intelligence Conference (WAIC) 2025 opened in Shanghai, featuring Geoffrey Hinton, a laureate of both the Turing Award and the Nobel Prize, who raised concerns about the future of superintelligent AI [1]
- Hinton warned that superintelligent AI could manipulate humans with ease in order to gain more power, learning to deceive people and to sway those responsible for shutting it down [1][2]
- He explained that although digital computation consumes far more energy than biological computation, it lets many intelligent agents share knowledge easily, which makes it far more effective at spreading what has been learned [1]

Group 2
- Hinton posed a central question about what AI means for humanity's future, noting that an AI could form its own sub-goals, such as survival and the acquisition of power [2]
- He likened the situation to raising a "little tiger": humans will either have to get rid of it or find ways to ensure it never wants to harm them [2]
- Hinton called for the establishment of a global AI safety research institution and for international collaboration to teach AI not to seek control [2]
An MIT Standout Born in the 1980s Builds a 20-Billion Business in Shenzhen
盐财经· 2025-07-26 09:33
Core Viewpoint
- The article argues that AI is not just a trend but a transformative technology that can reshape entire industries, particularly pharmaceuticals, where it can significantly accelerate drug development [2][3].

Market Demand
- A sustainable AI business model requires real market demand with tangible application scenarios, addressing customer pain points and backed by customers with a strong ability to pay [4].
- The pharmaceutical industry is an ideal sector because of its urgent need for AI in drug development, which is costly and time-consuming; the world's top ten pharmaceutical companies are expected to invest over $120 billion in R&D in 2024 [5].

Technological Maturity
- AI must be capable of actually solving customer pain points, and the industry should be data-rich enough to support AI training and improvement [4][5].
- The drug development process generates vast amounts of data, making it a data-intensive and capital-heavy industry, particularly in drug molecule screening and design [5].

Human Element
- The third critical factor in building a sustainable AI company is people, exemplified by the founding team of CrystalTech, established by three MIT postdoctoral researchers in quantum physics [7].
- CrystalTech has expanded its AI-driven capabilities beyond pharmaceuticals into materials science, petrochemicals, renewable energy, and agriculture, and is recognized as the first AI pharmaceutical company listed on the Hong Kong Stock Exchange, with a market value exceeding HKD 20 billion [8].

AI in Drug Development
- AI's role in drug development includes predicting protein structures, which is crucial for identifying drug targets and designing effective drug molecules [12][13].
- AI significantly reduces the time and cost of drug development by enabling virtual experiments and high-throughput synthesis of candidate molecules (a toy sketch of virtual screening follows this summary) [16][21].

Collaboration of AI and Experiments
- AI is an enabler rather than a full replacement in drug development; computational simulation must be combined with real-world experiments to optimize the discovery process [22].
- AI-driven simulations and laboratory experiments feed each other: experimental results provide timely feedback for model training and algorithm optimization, highlighting the interdependence of the two approaches [22].

Investment and Growth
- CrystalTech's early funding was driven by growing interest in biomedicine and the application of AI technologies, with significant backing from notable investors such as Tencent [28][31].
- The company has focused on its core mission rather than chasing trends, which positions it well as the AI wave continues to evolve [32].

Future of AI in Industries
- Industries where data is easier and cheaper to acquire will be changed faster and more deeply by AI, with pharmaceuticals as a prime example [34].
- The early stages of drug discovery are particularly well suited to AI because experimental costs are lower and large datasets can be generated [34][35].
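As a rough picture of what a "virtual experiment" buys: a predictive model scores a large library of candidate molecules so that only a short list goes on to costly synthesis and testing. The sketch below is purely illustrative; the random `predicted_affinity` stand-in and the library size are assumptions and bear no relation to CrystalTech's actual models or pipeline.

```python
# Toy sketch (purely illustrative scoring, not CrystalTech's method) of virtual
# screening: a learned model ranks a large candidate library so only the
# top-scoring molecules proceed to expensive wet-lab work.
import random

random.seed(0)
candidate_library = [f"molecule_{i:04d}" for i in range(10_000)]

def predicted_affinity(molecule_id: str) -> float:
    """Stand-in for an AI model predicting binding affinity to a drug target."""
    return random.random()   # a real pipeline would featurize the molecule here

scored = sorted(candidate_library, key=predicted_affinity, reverse=True)
shortlist = scored[:20]      # only these proceed to costly lab experiments
print(shortlist[:5])
```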
Transcript of "Godfather of AI" Hinton's First Speech in China: Humans May Themselves Be Large Language Models
Hu Xiu· 2025-07-26 09:26
Group 1
- The core of the discussion is the evolution of AI through two main paradigms: "symbolism," which focuses on logical reasoning, and "connectionism," which emphasizes learning from neural connections [1][2]
- The speaker, Geoffrey Hinton, recounts the small model he built in 1985 that combined these two theories, predicting the next word from learned features rather than storing complete sentences (a toy sketch follows this summary) [3][4]
- He notes the subsequent rise of large language models, such as Google's Transformer architecture and OpenAI's GPT, which use multi-dimensional word features to generate and understand language [6][10]

Group 2
- The discussion emphasizes the difference between human knowledge transmission and AI knowledge replication: AI systems can copy and share knowledge at a much faster rate [9][13]
- The concept of "knowledge distillation" is introduced, in which knowledge from large models is transferred to smaller models, akin to a teacher-student relationship [16][17]
- The potential for AI to surpass human intelligence is acknowledged, along with concerns about control and the implications of highly intelligent AI systems [18][19]

Group 3
- The need for global cooperation on AI safety is highlighted, including a proposal to establish an international research network focused on training AI for beneficial purposes [20][21]
- The second speaker, Yan Junjie, discusses the democratization of AI, emphasizing its role as a source of creativity and its integration into many fields, enhancing individual capabilities [24][25]
- AI is increasingly being used in diverse applications, from ancient text analysis to astronomy, showcasing its expanding utility [26][30]

Group 4
- The belief that AI will not be monopolized by a few organizations is presented, on the argument that different models will emerge from differing goals and values [32][33]
- The rise of multi-agent systems and open-source models points toward a more inclusive AI development landscape [34][35]
- The discussion concludes that AI will become more accessible and affordable, and that collaborative effort is key to progress toward artificial general intelligence (AGI) [40]
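The idea of predicting the next word from learned features rather than stored sentences can be shown with a tiny trainable model. The sketch below is a modern toy re-imagining, not the architecture Hinton built in 1985: the corpus, the 8-dimensional feature vectors, and the plain gradient-descent loop are all assumptions chosen to keep the example self-contained.

```python
# Toy sketch (a modern re-imagining, not Hinton's 1985 model): each word gets a
# small learned feature vector, and a linear layer plus softmax scores every
# vocabulary word as the possible next word, so nothing is memorized verbatim.
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8                      # vocabulary size, feature dimension

rng = np.random.default_rng(0)
E = rng.normal(scale=0.1, size=(V, D))    # word feature vectors (embeddings)
W = rng.normal(scale=0.1, size=(D, V))    # maps features to next-word scores

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

pairs = [(idx[a], idx[b]) for a, b in zip(corpus, corpus[1:])]
lr = 0.2
for epoch in range(500):                  # plain gradient descent on cross-entropy
    for i, j in pairs:
        p = softmax(E[i] @ W)
        err = p.copy(); err[j] -= 1.0     # gradient of the loss w.r.t. the logits
        W -= lr * np.outer(E[i], err)
        E[i] -= lr * W @ err

p = softmax(E[idx["sat"]] @ W)
print(vocab[int(p.argmax())])             # most likely successor of "sat"
```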