Core Insights
- The article discusses the transition of artificial intelligence from a "chat" paradigm to an "actionable" intelligent-agent era, emphasizing the need for deep collaboration and experience sharing among developers in optimizing LLM systems [2]

Event Overview
- A Meetup organized by the SGLang community, Machine Heart, and Zhangjiang Incubator will take place on February 6, focusing on LLM system optimization and practical implementation [2]
- The event will feature discussions on SGLang's technical roadmap, long-context scaling, RL post-training frameworks, and diffusion language model exploration [2]

Event Schedule
- 13:30-14:00: Registration
- 14:00-14:30: Keynote on the SGLang roadmap by Zhang Bozhou, core developer of SGLang [5]
- 14:30-15:00: Keynote on Omni-infer performance optimization by Zheng Jinhwan, core developer of Omni-infer [5]
- 15:00-15:30: Keynote on the slime RL scaling post-training framework by Xie Chengxing, Tsinghua University PhD student [5]
- 15:30-16:00: Keynote on SGLang CPP for long-context scaling by Cai Shangming, core developer of SGLang and Mooncake [5]

Guest Introductions
- Zhang Bozhou: Core developer of SGLang, focusing on open-source LLM support and optimization across different CUDA hardware [8]
- Zheng Jinhwan: Huawei technical expert and core contributor to Omni-infer, specializing in high-performance systems and inference optimization [9]
- Xie Chengxing: PhD student at Tsinghua University and core developer of the slime RL framework, focused on enhancing LLM reasoning and decision-making capabilities [10]
- Cai Shangming: Researcher at Alibaba Cloud and core contributor to SGLang and Mooncake, with expertise in high-performance inference systems and distributed machine learning [10]
- Li Zehuan: System engineer at Ant Group and core contributor to SGLang, focusing on AI infrastructure optimization [11]
Join this salon for a look at cutting-edge technical practice: SGLang X ultra-long-context scaling, RL post-training frameworks, diffusion language models, and more
Machine Heart · 2026-01-29 08:12