Core Viewpoint
- The article discusses the evolution of large language models from "fast thinking" to "slow thinking" paradigms, emphasizing the importance of deep reasoning and logical coherence in AI development [2].

Group 1: Slow Thinking Technology
- The new DeepSeek-R1 model enhances long-chain reasoning capabilities through reinforcement learning, demonstrating superior understanding and decision-making in complex tasks [2].
- "Slow thinking" technology is identified as a key pathway for advancing large models toward higher levels of intelligence, leading the industry toward greater automation and reliability [2].

Group 2: Seminar Details
- A seminar titled "AI Slow Thinking: Complex Reasoning Technology of Large Models" was organized by Springer Nature, featuring Professor Zhao Xin of Renmin University of China, who shared insights on the latest research in slow thinking technology [2][6].
- Dr. Chang Lanlan, Director of Computer Science Book Publishing at Springer Nature, discussed new AI book resources and academic publishing in 2025 [2][6].

Group 3: Speaker Profiles
- Professor Zhao Xin's research focuses on information retrieval and natural language processing; he has published over 200 papers and made significant contributions to large language models [8].
- Dr. Chang Lanlan has extensive experience in computer science book publishing and has been with Springer Nature for 14 years, overseeing AI-related publications [11].

Group 4: Book Recommendations
- A new book led by Professor Zhao Xin and his team provides a systematic framework for learners in the large model field, aiming to help readers grasp core concepts and cutting-edge algorithms [19].
- The Springer Nature AI electronic book collection offers a comprehensive resource for research and learning, covering a wide range of topics from foundational knowledge to advanced research outcomes [21].
[September 9 Livestream] Complex Reasoning Technology of Large Models: How to Reshape AI Reasoning Logic
机器人大讲堂 · 2025-09-03 04:19