After Taking Over as Tencent's Chief AI Scientist, Yao Shunyu's Team Unveils Its First Research Result

Core Insights
- Tencent's first research result under Chief AI Scientist Yao Shunyu has been revealed, focusing on the challenges AI models face in learning from context [1][6]
- The competitive landscape is shifting from improving model training to supplying rich, relevant context for tasks [1][7]

Group 1: Research Findings
- The joint research by Tencent's Hunyuan team and Fudan University shows that enabling large models to learn from context is harder than previously thought [6][7]
- A benchmark called CL-bench was created to assess language models' ability to learn new knowledge from context; it comprises 500 complex contexts, 1,899 tasks, and 31,607 validation criteria [7]
- The top ten language models achieved an average task-solving rate of only 17.2% on CL-bench, indicating significant shortcomings in effectively utilizing context [7]

Group 2: Future Directions
- The research suggests that strengthening models' ability to learn from context could be a key direction for future iterations of large language models [7]
- As models improve their contextual learning capabilities, the human role in AI systems may evolve from primary data provider to context provider [7]
- Memory mechanisms are expected to become a core theme in large-model development by 2026, potentially leading to autonomous learning capabilities [7]
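The article does not describe CL-bench's scoring protocol, so the following is a hypothetical sketch of how an "average task-solving rate" across models might be aggregated: each task is counted as solved only if it passes its validation checks, each model gets a per-task pass rate, and the headline number averages across models. All names and data here are illustrative, not from the benchmark itself.

```python
# Hypothetical aggregation sketch; CL-bench's actual scoring is not
# specified in the article, so `results` and its shape are assumptions.

def task_solving_rate(results: dict[str, list[bool]]) -> float:
    """Average fraction of tasks solved, averaged across models.

    `results` maps a model name to a per-task pass/fail list
    (True = the task's validation checks all passed).
    """
    per_model = [sum(passes) / len(passes) for passes in results.values()]
    return sum(per_model) / len(per_model)

# Toy example: three models evaluated on four tasks each.
scores = {
    "model_a": [True, False, False, False],   # 25% of tasks solved
    "model_b": [False, False, False, False],  # 0% solved
    "model_c": [True, True, False, False],    # 50% solved
}
print(f"{task_solving_rate(scores):.1%}")  # prints "25.0%"
```

A low aggregate like the reported 17.2% would mean that even a binary solved/unsolved criterion leaves most of the benchmark's tasks unsolved by today's top models.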
