Slow Thinking Technology for Large Models
[September 9 Livestream] Complex Reasoning Technology for Large Models: How to Reshape AI Reasoning Logic
机器人大讲堂 · 2025-09-03 04:19
Core Viewpoint
- The article discusses the evolution of large language models from a "fast thinking" to a "slow thinking" paradigm, emphasizing the importance of deep reasoning and logical coherence in AI development [2].

Group 1: Slow Thinking Technology
- The new model DeepSeek-R1 strengthens long reasoning-chain capabilities through reinforcement learning, demonstrating superior understanding and decision-making on complex tasks [2].
- "Slow thinking" technology is identified as a key pathway for advancing large models toward higher levels of intelligence, leading the industry toward greater automation and reliability [2].

Group 2: Seminar Details
- A seminar titled "AI Slow Thinking: Complex Reasoning Technology of Large Models", organized by Springer Nature, featured Professor Zhao Xin of Renmin University of China, who shared insights on the latest research in slow thinking technology [2][6].
- Dr. Chang Lanlan, Director of Computer Science Book Publishing at Springer Nature, discussed new AI book resources and academic publishing in 2025 [2][6].

Group 3: Speaker Profiles
- Professor Zhao Xin's research focuses on information retrieval and natural language processing; he has published over 200 papers and made significant contributions to large language models [8].
- Dr. Chang Lanlan has extensive experience in computer science book publishing and has been with Springer Nature for 14 years, overseeing AI-related publications [11].

Group 4: Book Recommendations
- A new book led by Professor Zhao Xin and his team provides a systematic framework for learners in the large model field, aiming to help readers grasp core concepts and cutting-edge algorithms [19].
- The Springer Nature AI e-book collection offers a comprehensive resource for research and learning, covering a wide range of topics from foundational knowledge to advanced research outcomes [21].
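To make the "slow thinking" idea concrete: the general pattern is to have the model produce an explicit reasoning trace before committing to an answer, then surface only the final answer to the user. The sketch below is a minimal illustration of that prompt-and-extract pattern only, not DeepSeek-R1's actual training method or output format; the function names and the `<think>`/`Final answer:` delimiters are assumptions chosen for illustration.

```python
# Illustrative sketch of the "slow thinking" prompting pattern:
# ask for step-by-step reasoning first, then extract only the final answer.
# NOT DeepSeek-R1's actual method; delimiters and names are assumed.

def build_slow_thinking_prompt(question: str) -> str:
    """Wrap a question so the model is instructed to reason step by step
    inside a delimited trace before giving its final answer."""
    return (
        "Think through the problem step by step inside <think>...</think>, "
        "then give only the final answer after 'Final answer:'.\n\n"
        f"Question: {question}"
    )

def extract_final_answer(model_output: str) -> str:
    """Discard the visible reasoning trace; keep only the text after the
    last 'Final answer:' marker (fall back to the whole output)."""
    marker = "Final answer:"
    idx = model_output.rfind(marker)
    if idx == -1:
        return model_output.strip()
    return model_output[idx + len(marker):].strip()

# Toy usage with a hand-written model output:
output = (
    "<think>Both numbers share the integer part 9; "
    "comparing decimals, 0.9 > 0.11.</think>\n"
    "Final answer: 9.9"
)
print(extract_final_answer(output))  # → 9.9
```

The design point is that the reasoning trace is cheap to generate but is stripped before the answer is consumed, which is what lets longer "thinking" improve reliability without changing the downstream interface.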
Livestream Preview | Complex Reasoning Technology for Large Models: How to Reshape AI Reasoning Logic
机器人大讲堂 · 2025-08-28 10:34
Core Viewpoint
- The article discusses the evolution of large language models from a "fast thinking" to a "slow thinking" paradigm, emphasizing the importance of deep reasoning and logical coherence in AI technology [2].

Group 1: Slow Thinking Technology
- The new model DeepSeek-R1 strengthens long reasoning-chain capabilities through reinforcement learning, demonstrating superior understanding and decision-making on complex tasks [2].
- "Slow thinking" technology is identified as a key path for advancing large models toward higher levels of intelligence, leading the industry toward greater automation and reliability [2].

Group 2: Upcoming Seminar
- An online seminar titled "AI Slow Thinking: Complex Reasoning Technology of Large Models" is scheduled for September 9, 2025, featuring Professor Zhao Xin of Renmin University of China [2][5].
- The seminar will cover the latest research on "slow thinking" technology and its implications for large models, with discussions led by experts in the field [2][5].

Group 3: Speaker Profiles
- Professor Zhao Xin specializes in information retrieval and natural language processing; he has published over 200 papers and made significant contributions to large language models [7].
- Dr. Chang Lanlan, Director of Springer Nature Computer Science, will discuss new AI book resources and their applications in research and education [10].

Group 4: Book Recommendations
- A new book on large models, co-authored by Professor Zhao Xin, aims to provide a systematic framework for learners in the field, covering essential concepts and cutting-edge algorithms [16].
- The Springer Nature AI e-book collection offers a comprehensive resource for researchers and students, covering a wide range of topics from foundational knowledge to advanced research findings [18].