SGLang
Scoring AI models, and somehow ending up with a $1.7 billion unicorn???
量子位· 2026-01-07 09:11
Wen Le, reporting from Aofeisi. QbitAI | WeChat official account QbitAI

The large-model arena LMArena has officially announced a $150 million Series A round, lifting its valuation to $1.7 billion, a solid win to open the new year. The round was co-led by Felicis and UC Investments, the University of California's investment company, with Andreessen Horowitz, The House Fund, and other institutions following on. With capital voting with real money, it is clear how attractive the model-evaluation track has become in the AI era. The rise of this team, mostly born in the 1990s and 99% ethnically Chinese, traces back to ChatGPT's debut in 2023.

From academic exploration to commercial rise

LMArena grew out of Chatbot Arena, once a sensation in AI circles, originally created by LMSYS, a grassroots open-source organization. The organization's core members all came from top schools such as UC Berkeley, Stanford, UCSD, and CMU. Their open-source inference engine SGLang was the first open-source solution in the industry to reach throughput nearly matching DeepSeek's officially reported numbers on 96 H100 GPUs. SGLang is now deployed at scale and has been adopted by xAI, NVIDIA, AMD, Google Cloud, Oracle Cloud, Alibaba Cloud, Meituan, Tencent Cloud, and other companies and institutions. Still, beyond the hard-core engineering, their best-known and most visible work is evaluating large models. Just as ChatGPT, Cl ...
SGLang natively supports Ascend (昇腾): new models launch with one click, no code changes required
量子位· 2025-12-21 14:13
henry, reporting from Aofeisi. QbitAI | WeChat official account QbitAI

As agents keep accelerating on the application side, whether inference systems can withstand the real-world load that follows is becoming a focal point for the industry. That was a backdrop repeatedly mentioned at the SGLang AI Finance π Party (π对), which wrapped up in Hangzhou on December 20. At this gathering focused on large-model inference efficiency, the agent hype was temporarily set aside; what actually landed on the table were the engineering problems inference systems face under real load: high-concurrency requests, long context windows, multi-turn reasoning, memory management, and, in concrete financial-agent scenarios, consistent generation. Ascend was also mentioned repeatedly in the discussions as a compute platform. Ascend has now entered the main repository as one of SGLang's natively supported backends; as the SGLang inference engine is updated, models such as DeepSeek, Qwen, and GLM can run directly without adjusting model parameters or introducing extra plugins, and system capabilities such as HiCache and Mooncake are also landing in the corresponding releases. In short, what this SGLang AI Finance π Party presented was not a scattering of technical points but a clear path of inference-engineering evolution: from the cache and memory stack, to weight updates and reinforcement-learning efficiency, to the coordination of compute platforms and the model ecosystem. Let's look at the details. In specific deployment scenarios, such as financial agen ...
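To make the "models run directly, no extra plugins" point concrete from a user's perspective, here is a minimal client sketch. It assumes an SGLang server has already been launched separately (for example with `python -m sglang.launch_server --model-path Qwen/Qwen2.5-7B-Instruct --port 30000`) and is queried through the OpenAI-compatible endpoint SGLang exposes; the model name, host, and port are placeholder assumptions, not details from the event.

```python
# Minimal client sketch: query a running SGLang server through its
# OpenAI-compatible HTTP endpoint. Assumes the server was started
# separately, e.g.:
#   python -m sglang.launch_server --model-path Qwen/Qwen2.5-7B-Instruct --port 30000
# The host, port, and model name below are placeholders.
import requests

SGLANG_URL = "http://127.0.0.1:30000/v1/chat/completions"

payload = {
    "model": "Qwen/Qwen2.5-7B-Instruct",   # whatever model the server loaded
    "messages": [
        {"role": "user", "content": "Summarize the role of the KV cache in LLM inference."}
    ],
    "max_tokens": 128,
    "temperature": 0.0,
}

resp = requests.post(SGLANG_URL, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```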
Building a production-grade, cloud-native large-model inference platform with SGLang, RBG, and Mooncake
AI前线· 2025-12-12 00:40
Core Insights
- The article emphasizes the rapid evolution of large language model (LLM) inference services into core enterprise infrastructure, focusing on the balance of performance, stability, and cost in building high-performance inference systems [2]
- It discusses the transition from monolithic to distributed architectures in LLM inference, highlighting the need for external KVCache to alleviate memory pressure and enhance performance in high-demand scenarios [2][4]

Distributed KVCache and Mooncake
- Mooncake is introduced as a leading distributed KVCache storage engine designed to provide high throughput and low latency for inference frameworks like SGLang [3]
- The article outlines the challenges in managing distributed KVCache systems in production environments, which necessitate the development of RoleBasedGroup (RBG) for unified management of caching and inference nodes [4]

RoleBasedGroup (RBG) Design and Challenges
- RBG is presented as a Kubernetes-native API aimed at AI inference, facilitating multi-role orchestration to ensure stable and high-performance operations [4][12]
- The article identifies five fundamental challenges in deploying large model inference services, including the need for strong state management and performance optimization [12][15]

SCOPE Framework
- The SCOPE framework is introduced, focusing on five core capabilities: Stability, Coordination, Orchestration, Performance, and Extensibility, which are essential for managing LLM inference services [16][18]
- RBG's design allows for rapid architecture iteration and performance-sensitive operations, addressing the complexities of multi-role dependencies and operational efficiency [15][24]

Benchmark Testing and Performance Metrics
- Benchmark tests demonstrate significant improvements in KVCache hit rates and inference performance, with the L3 Mooncake cache achieving a 64.67% hit rate and reducing average TTFT to 2.58 seconds [32][48] (a toy computation of these two metrics is sketched after this summary)
- The article highlights the importance of a multi-tier caching architecture in enhancing performance for applications like multi-turn dialogue and AI agents [44]

Conclusion and Future Outlook
- The integration of RBG and Mooncake is positioned as a transformative approach to building production-grade LLM inference services, emphasizing the need for deep integration of high-performance design with cloud-native operational capabilities [43][44]
- The article concludes with a call for community collaboration to advance this paradigm and lay the foundation for the next generation of AI infrastructure [43]
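To read the benchmark numbers above (a 64.67% KVCache hit rate, average TTFT of 2.58 seconds), the hypothetical sketch below shows how such metrics might be computed from per-request records; the record fields and values are invented for illustration and are not from the article's actual test harness.

```python
# Hypothetical post-processing of per-request benchmark records.
# Each record notes whether the prefill hit the external (L3 / Mooncake)
# KV cache and the observed time-to-first-token (TTFT) in seconds.
# Field names and values are illustrative only.
records = [
    {"kv_cache_hit": True,  "ttft_s": 1.9},
    {"kv_cache_hit": True,  "ttft_s": 2.3},
    {"kv_cache_hit": False, "ttft_s": 4.1},
    {"kv_cache_hit": True,  "ttft_s": 2.6},
]

hit_rate = sum(r["kv_cache_hit"] for r in records) / len(records)
avg_ttft = sum(r["ttft_s"] for r in records) / len(records)

print(f"KV cache hit rate: {hit_rate:.2%}")   # 75.00%
print(f"Average TTFT:      {avg_ttft:.2f} s") # about 2.7 s
```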
Z Event | Z Potentials × SGLang: NeurIPS Global Frontier Researchers Summit Night
Z Potentials· 2025-11-26 04:34
Core Insights - NeurIPS 2025 is set to be a historic event for the future of AI technology, gathering top researchers and engineers in San Diego [1] - Z Potentials is collaborating with SGLang, a leading open-source inference engine community, to create a unique networking opportunity for frontier researchers [2] Event Details - The event will feature prominent researchers from organizations like OpenAI, DeepMind, and Nvidia, focusing on next-generation generative AI and system innovations [1] - The event is scheduled for December 5, from 6:00 PM to 8:00 PM, near the NeurIPS venue in San Diego [6] Collaboration and Support - Z Potentials aims to bridge investment, research, and infrastructure, with SGLang recognized as a standard in the large model inference field [2] - Atlas Cloud is providing significant computational support for the event, enabling the gathering of leading researchers [3]
Top large-model minds gather at a hard-core open-source get-together as the SGLang community holds its first Meetup in China
机器之心· 2025-10-28 06:29
Core Insights
- The PyTorch Conference 2025 showcased a vibrant community and significant developments in deep learning, particularly highlighting SGLang's contributions and potential in the industry [1][3][4]

SGLang Overview
- SGLang, an open-source high-performance inference engine for large language models and vision-language models, originated from RadixAttention and is incubated by the non-profit organization LMSYS. It offers low-latency, high-throughput inference across environments ranging from a single GPU to large distributed clusters [7][8]

Community Engagement
- The first Meetup event in Beijing, co-hosted by SGLang, Meituan, and Amazon Web Services, attracted numerous contributors, developers, and scholars, indicating a strong community presence and development potential [4][8]

Technical Developments
- The Meetup featured technical discussions on SGLang's architecture, including advances in KV Cache, Piecewise CUDA Graph, and Spec Decoding aimed at improving efficiency and compatibility [21][22] (a toy sketch of the prefix-reuse idea behind the KV cache follows this summary)
- SGLang's quantization strategies were also discussed, focusing on expanding the range of applications and optimizing model performance [34][35]

Application and Practice
- Various industry applications of SGLang were presented, including its integration with Baidu's Ernie 4.5 model for large-scale deployment and optimization in search scenarios [41][42]
- The application of SGLang in WeChat's search function was highlighted, emphasizing the need for high throughput and low latency in the user experience [44]

Future Directions
- The roadmap for SGLang includes further integration with various hardware and software stacks, aiming to enhance stability and compatibility across different platforms [22][35]
- The Specforge framework, developed by the SGLang team, aims to accelerate large language model inference and has been adopted by major companies like Meituan and NVIDIA [57][58]
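As a rough, conceptual illustration of the prefix-reuse idea behind RadixAttention and SGLang's KV cache work (not the project's actual radix-tree or kernel code), the toy sketch below caches token prefixes in a trie and reports how many leading tokens of a new request could skip recomputation.

```python
# Conceptual sketch of prefix reuse in the spirit of RadixAttention:
# cache entries are keyed by token prefixes, and a new request only needs
# to recompute the suffix that is not already cached. This is a toy trie,
# not SGLang's actual radix tree or KV-cache management code.
class PrefixCache:
    def __init__(self):
        self.root = {}

    def insert(self, tokens):
        # Record a request's token sequence so later requests can reuse it.
        node = self.root
        for t in tokens:
            node = node.setdefault(t, {})

    def longest_cached_prefix(self, tokens):
        # Count how many leading tokens are already present in the cache.
        node, matched = self.root, 0
        for t in tokens:
            if t not in node:
                break
            node = node[t]
            matched += 1
        return matched

cache = PrefixCache()
cache.insert([1, 2, 3, 4, 5])          # tokens of an earlier request
new_request = [1, 2, 3, 9, 9]
reused = cache.longest_cached_prefix(new_request)
print(f"{reused} of {len(new_request)} tokens reuse cached KV entries")  # 3 of 5
```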
KTransformers selected by a top computer-systems conference and partnering with mainstream frameworks: 趋境 & Tsinghua make "heterogeneous" a new paradigm for inference
量子位· 2025-10-22 09:12
Core Insights
- KTransformers, an open-source project developed by Turing Technology and Tsinghua University's KVCache.AI team, focuses on system innovation during the inference phase of large models, enabling efficient operation on diverse hardware architectures with lower computational power [2][4]

Group 1: KTransformers Overview
- KTransformers is a high-performance heterogeneous inference framework that optimally utilizes various computing resources such as GPUs, CPUs, and memory [2]
- The project paper was recognized at the prestigious SOSP 2025 conference, highlighting its significance in the field of computer systems [2][4]

Group 2: Technical Innovations
- The framework introduces an "Expert Deferral" mechanism, allowing for efficient scheduling of experts in Mixture of Experts (MoE) models, which reduces computational load without sacrificing model performance [7][13] (a toy routing sketch follows this summary)
- KTransformers achieves nearly 4x speedup on a single Intel Xeon processor compared to traditional PyTorch implementations, significantly enhancing CPU performance in expert calculations [12]
- The system allows for dynamic overlapping of CPU and GPU loads, resulting in a model throughput increase of approximately 1.45 times, with minimal impact on model accuracy [15][16]

Group 3: Collaboration and Ecosystem
- KTransformers has partnered with SGLang, a mainstream inference framework, to integrate full-GPU inference with heterogeneous inference, enhancing the overall architecture for large model deployment [5][19]
- This collaboration enables developers to access both full-GPU and heterogeneous inference capabilities seamlessly, which is particularly beneficial in scenarios with limited GPU resources [21]

Group 4: Market Position and Future Directions
- KTransformers has gained significant traction in the developer community, with over 15.2K stars on GitHub, indicating its widespread adoption as a foundational framework for large model inference [24]
- The project aims to democratize AI capabilities, making them accessible beyond elite computational paths, and is actively collaborating with various domestic CPU and GPU platforms to promote cost-effective solutions [28][29]
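The "Expert Deferral" idea can be pictured with a toy MoE routing sketch: of the top-k experts a router selects for a token, those resident on the GPU run immediately while the rest are deferred elsewhere (here, to the CPU). The expert placement, router scores, and k below are invented for illustration; this is not KTransformers' actual scheduler.

```python
# Toy illustration of splitting a token's top-k MoE experts between
# GPU-resident experts (run immediately) and CPU-deferred experts.
# Expert placement, scores, and k are invented; this is not
# KTransformers' real scheduling logic.
import heapq

NUM_EXPERTS = 8
GPU_RESIDENT = {0, 1, 2}          # experts kept in GPU memory
router_scores = [0.05, 0.30, 0.02, 0.25, 0.10, 0.08, 0.15, 0.05]
TOP_K = 4

# Pick the k experts with the highest routing scores for this token.
top_k = heapq.nlargest(TOP_K, range(NUM_EXPERTS), key=lambda e: router_scores[e])
gpu_now  = [e for e in top_k if e in GPU_RESIDENT]
deferred = [e for e in top_k if e not in GPU_RESIDENT]

print("run on GPU immediately:", gpu_now)    # [1]
print("defer to CPU:", deferred)             # [3, 6, 4]
```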
The first open-source, 100% reproducible stable RL training framework is here! Two runs produce completely identical results
量子位· 2025-09-27 01:30
Core Insights
- The article discusses the achievement of the SGLang and slime teams in creating a fully reproducible and stable reinforcement learning (RL) training framework based on the Qwen3-8B model, addressing the issue of non-deterministic outputs in large language model (LLM) inference [1][2][6]

Group 1: Deterministic Inference
- The SGLang and slime teams have developed a deterministic inference solution that integrates batch-invariant operators, CUDA Graph, radix cache, and chunked prefill, ensuring high performance while maintaining compatibility with key features [5][8]
- The implementation of batch-invariant operators addresses the core cause of output uncertainty in LLM inference, which arises from varying batch sizes during dynamic batching [7][8] (a minimal floating-point demonstration of this effect follows this summary)
- Testing has shown that the average performance drop for SGLang's solution is 34.35%, significantly better than the 61.5% decline reported by Thinking Machines Lab [5][12]

Group 2: Performance Metrics
- The article presents performance metrics for different inference modes, showing that deterministic modes yield consistent outputs across various batch sizes, with unique output counts significantly reduced [10][11]
- In terms of end-to-end latency, deterministic inference shows a performance drop of 25% to 45%, with specific backend configurations showing improvements in certain cases [12][13]

Group 3: Future Developments
- Future efforts will focus on optimizing batch-invariant operators to enhance performance, particularly for RL inference, and on extending support to mixture-of-experts (MoE) models [16][18]
- The team aims to improve radix cache functionality and explore tensor parallelism to further enhance the capabilities of deterministic inference [18]
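The root cause of the nondeterminism described above is easy to reproduce: floating-point addition is not associative, so reducing the same values with different chunk or tile sizes (which is effectively what changing batch sizes does inside kernels) can produce slightly different results. The snippet below is a generic NumPy demonstration of that effect, not SGLang's batch-invariant operators.

```python
# Why batch size can change outputs: float addition is non-associative,
# so summing the same numbers with different chunk/tile sizes (as dynamic
# batching effectively does inside kernels) may give slightly different
# results. Generic demonstration, not SGLang's batch-invariant kernels.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4096).astype(np.float32)

def chunked_sum(values, chunk):
    # Reduce in fixed-size chunks, then reduce the partial sums.
    partials = [values[i:i + chunk].sum() for i in range(0, len(values), chunk)]
    return np.float32(sum(partials))

print(chunked_sum(x, 64))
print(chunked_sum(x, 1024))
print(chunked_sum(x, 64) == chunked_sum(x, 1024))  # often False in float32
```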
How were vLLM and SGLang, the most popular open-source large-model inference frameworks, made?
AI科技大本营· 2025-09-24 02:01
Core Viewpoint
- The article discusses the development stories of vLLM and SGLang, two prominent open-source inference engines for large language models (LLMs), highlighting their innovations, community engagement, and performance metrics.

Group 1: LLM Inference Challenges
- The core challenge of LLM inference lies in deploying models with hundreds of billions of parameters under strict constraints of latency, throughput, and cost [3]
- The inference process involves applying learned knowledge to new data, which requires efficient computation and memory management [2][3]

Group 2: vLLM Development
- vLLM originated from a 2023 paper on PagedAttention, which innovatively applied operating system techniques to KV-cache memory management, significantly enhancing throughput [7][8] (a toy block-allocation sketch follows this summary)
- vLLM demonstrated remarkable performance improvements, handling up to 5 times the traffic and increasing throughput by 30 times compared to previous backends [9]
- The project quickly evolved from a research initiative to a community-driven open-source project, amassing over 56,000 stars on GitHub and engaging thousands of developers [15][9]

Group 3: SGLang Development
- SGLang was developed from the paper "SGLang: Efficient Execution of Structured Language Model Programs," featuring RadixAttention for optimized performance [12]
- SGLang retains the KVCache from previous requests to reduce computation during the prefill phase, showing significant performance advantages over traditional inference engines [12]
- Although SGLang's community is smaller than vLLM's, it has over 2,000 participants and has shown rapid iteration and growth [13]

Group 4: Community Engagement
- vLLM has a robust community with over 12,000 participants in issues and pull requests, while SGLang's community is less than half that size [15][13]
- Both projects have faced challenges in managing a growing number of issues and pull requests, with vLLM generally responding faster than SGLang [13]

Group 5: Performance Metrics and Comparisons
- vLLM and SGLang have both integrated advanced features like continuous batching and various attention mechanisms, leading to significant performance enhancements [29]
- The competition between the two projects has intensified, with both claiming performance leadership in their respective releases [26]

Group 6: Future Trends and Developments
- As the performance race heats up, both vLLM and SGLang are focusing on reproducible methods and real-world metrics rather than benchmark results alone [26]
- The trend indicates a convergence in model architectures and features among leading inference engines, with competition shifting toward factors beyond raw performance [29]

Group 7: Investment and Support
- Both projects have attracted attention from investment firms and open-source foundations, with vLLM receiving support from a16z and SGLang being recognized in the PyTorch ecosystem [31][40]
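PagedAttention's central idea can be sketched in a few lines: instead of reserving one contiguous KV region per sequence, the cache is split into fixed-size blocks, and each sequence keeps a block table mapping logical positions to physical blocks so memory is allocated on demand. The toy allocator below only illustrates that bookkeeping and is not vLLM's implementation.

```python
# Toy paged-KV-cache allocator in the spirit of PagedAttention: fixed-size
# blocks are handed out on demand and each sequence keeps a block table
# mapping logical block indices to physical block ids. Not vLLM's code.
BLOCK_SIZE = 16                       # tokens per KV block

class PagedKVCache:
    def __init__(self, num_blocks):
        self.free_blocks = list(range(num_blocks))
        self.block_tables = {}        # seq_id -> list of physical block ids
        self.seq_lens = {}            # seq_id -> number of tokens stored

    def append_token(self, seq_id):
        """Reserve KV space for one more token of this sequence."""
        n = self.seq_lens.get(seq_id, 0)
        if n % BLOCK_SIZE == 0:       # current block full (or first token)
            if not self.free_blocks:
                raise MemoryError("KV cache exhausted")
            self.block_tables.setdefault(seq_id, []).append(self.free_blocks.pop())
        self.seq_lens[seq_id] = n + 1

    def free(self, seq_id):
        """Return a finished sequence's blocks to the free pool."""
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))
        self.seq_lens.pop(seq_id, None)

cache = PagedKVCache(num_blocks=4)
for _ in range(20):                   # a 20-token sequence needs 2 blocks
    cache.append_token("seq-0")
print(cache.block_tables["seq-0"])    # [3, 2]
```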
From Chinese models topping the leaderboards to global open source, fresh thinking on AI! GOSIM HANGZHOU 2025 wraps up successfully
AI科技大本营· 2025-09-16 10:33
Core Insights
- The GOSIM HANGZHOU 2025 conference highlighted the integration of open-source and AI technologies, showcasing their potential across various industries and emphasizing the importance of community collaboration in driving innovation [1][3][4]

Group 1: Conference Overview
- The conference attracted over 200 global leaders in open source and AI, along with more than 1,500 developers, featuring keynote speeches, high-end forums, and specialized discussions on AI models and infrastructure [1][3]
- Keynote speakers included influential figures from organizations like the United Nations and major tech companies, discussing the significance of open source in AI development and global collaboration [3][6][7]

Group 2: Community and Collaboration
- The event emphasized community engagement, with forums dedicated to the Rust programming language and hands-on workshops that fostered interaction among developers [4][5][15]
- The conference featured a strong focus on practical applications, including hackathons that encouraged developers to create innovative solutions in real time [22][24]

Group 3: AI and Open Source Integration
- Discussions on the future of AI highlighted the need for high-quality training data and the challenges of integrating AI into real-world applications, stressing the role of open collaboration in overcoming these hurdles [8][12]
- The conference explored various AI themes, including embodied intelligence, intelligent agents, and next-generation AI technologies, showcasing advances and potential applications [10][12][14]

Group 4: Workshops and Practical Engagement
- A total of 14 workshops were organized, allowing developers to engage in hands-on learning and collaboration on cutting-edge technologies [17][20]
- The workshops covered a range of topics, from AI inference to cross-platform development, giving participants practical skills and insights [18][20]

Group 5: Future Directions and Closing Remarks
- The conference concluded with a call for continued collaboration in the open-source AI community, setting the stage for future events and innovations [33][34]
- GOSIM HANGZHOU 2025 served as a platform for fostering connections between academia and industry, promoting ongoing dialogue and exploration in the tech community [29][31]