SGLang
Z Event | Z Potentials × SGLang NeurIPS Global Frontier Researcher Summit Night
Z Potentials· 2025-11-26 04:34
Core Insights
- NeurIPS 2025 is set to be a historic event for the future of AI technology, gathering top researchers and engineers in San Diego [1]
- Z Potentials is collaborating with SGLang, a leading open-source inference engine community, to create a unique networking opportunity for frontier researchers [2]

Event Details
- The event will feature prominent researchers from organizations like OpenAI, DeepMind, and Nvidia, focusing on next-generation generative AI and system innovations [1]
- The event is scheduled for December 5, from 6:00 PM to 8:00 PM, near the NeurIPS venue in San Diego [6]

Collaboration and Support
- Z Potentials aims to bridge investment, research, and infrastructure, with SGLang recognized as a standard in the large-model inference field [2]
- Atlas Cloud is providing significant computational support for the event, enabling the gathering of leading researchers [3]
Top minds in large models gather for a hardcore open-source meetup: the SGLang community hosts its first Meetup in China
机器之心· 2025-10-28 06:29
Core Insights
- The PyTorch Conference 2025 showcased the vibrant community and significant developments in deep learning, particularly highlighting SGLang's contributions and potential in the industry [1][3][4]

SGLang Overview
- SGLang, an open-source high-performance inference engine for large language models and vision-language models, originated from RadixAttention and is incubated by the non-profit organization LMSYS. It offers low-latency, high-throughput inference across environments ranging from single GPUs to large distributed clusters [7][8]

Community Engagement
- The first Meetup in Beijing, co-hosted by SGLang, Meituan, and Amazon Web Services, attracted numerous contributors, developers, and scholars, indicating a strong community presence and development potential [4][8]

Technical Developments
- The Meetup featured technical discussions on SGLang's architecture, including advancements in KV Cache, Piecewise CUDA Graph, and Spec Decoding, aimed at improving efficiency and compatibility [21][22]
- SGLang's quantization strategies were also discussed, focusing on expanding the range of applications and optimizing model performance [34][35]

Application and Practice
- Various industry applications of SGLang were presented, including its integration with Baidu's ERNIE 4.5 model for large-scale deployment and optimization in search scenarios [41][42]
- The application of SGLang in WeChat's search function was highlighted, emphasizing the need for high throughput and low latency in user experience [44]

Future Directions
- The roadmap for SGLang includes further integration with various hardware and software solutions, aiming to enhance stability and compatibility across different platforms [22][35]
- The SpecForge framework, developed by the SGLang team, aims to accelerate large language model inference and has been adopted by major companies like Meituan and NVIDIA [57][58]
KTransformers accepted at a top computer-systems conference and collaborating with mainstream frameworks: 趋境 & Tsinghua make "heterogeneous" the new inference paradigm
量子位· 2025-10-22 09:12
Core Insights
- KTransformers, an open-source project developed by Turing Technology and Tsinghua University's KVCache.AI team, focuses on system innovation during the inference phase of large models, enabling efficient operation on diverse hardware architectures with lower computational power [2][4]

Group 1: KTransformers Overview
- KTransformers is a high-performance heterogeneous inference framework that optimally utilizes various computing resources such as GPUs, CPUs, and memory [2]
- The project paper was recognized at the prestigious SOSP 2025 conference, highlighting its significance in the field of computer systems [2][4]

Group 2: Technical Innovations
- The framework introduces an "Expert Deferral" mechanism, allowing for efficient scheduling of experts in Mixture of Experts (MoE) models, which reduces computational load without sacrificing model performance [7][13]
- KTransformers achieves nearly 4x speedup on a single Intel Xeon processor compared to traditional PyTorch implementations, significantly enhancing CPU performance in expert calculations [12]
- The system allows for dynamic overlapping of CPU and GPU loads, resulting in a model throughput increase of approximately 1.45 times, with minimal impact on model accuracy [15][16] (a conceptual scheduling sketch follows after this summary)

Group 3: Collaboration and Ecosystem
- KTransformers has partnered with SGLang, a mainstream inference framework, to integrate full-GPU inference with heterogeneous inference, enhancing the overall architecture for large model deployment [5][19]
- This collaboration enables developers to access both full-GPU and heterogeneous inference capabilities seamlessly, particularly beneficial in scenarios with limited GPU resources [21]

Group 4: Market Position and Future Directions
- KTransformers has gained significant traction in the developer community, with over 15.2K stars on GitHub, indicating its widespread adoption as a foundational framework for large model inference [24]
- The project aims to democratize AI capabilities, making them accessible beyond elite computational paths, and is actively collaborating with various domestic CPU and GPU platforms to promote cost-effective solutions [28][29]
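The heterogeneous-scheduling idea behind this entry can be pictured with a toy routing step: experts whose weights are resident on the GPU run immediately, while the remaining selected experts are handed to the CPU pool so the two devices can overlap. The sketch below is only an illustration of that general pattern under assumed names (`route_token`, `gpu_resident`), not KTransformers' actual expert-deferral implementation.

```python
# Toy sketch of heterogeneous MoE scheduling (illustrative assumptions, not
# KTransformers' code): split a token's top-k experts into a GPU-resident set
# that runs now and a CPU-offloaded set whose work can be overlapped/deferred.
import numpy as np

def route_token(gate_logits, top_k=4, gpu_resident=frozenset({0, 1, 2, 3})):
    """Return (run_on_gpu, defer_to_cpu) expert ids for one token."""
    scores = np.exp(gate_logits - gate_logits.max())
    scores /= scores.sum()
    selected = np.argsort(scores)[::-1][:top_k]            # top-k experts by gate score
    run_on_gpu = [int(e) for e in selected if e in gpu_resident]
    defer_to_cpu = [int(e) for e in selected if e not in gpu_resident]
    return run_on_gpu, defer_to_cpu

rng = np.random.default_rng(0)
print(route_token(rng.normal(size=16)))   # 16 experts in this toy configuration
```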
The first open-source framework for 100% reproducible, stable RL training is here! Two runs produce completely identical results
量子位· 2025-09-27 01:30
Core Insights
- The article discusses how the SGLang and slime teams built a fully reproducible, stable reinforcement learning (RL) training framework on the Qwen3-8B model, addressing the problem of non-deterministic outputs in large language model (LLM) inference [1][2][6]

Group 1: Deterministic Inference
- The SGLang and slime teams developed a deterministic inference solution that integrates batch-invariant operators, CUDA Graph, radix cache, and chunked prefill, ensuring high performance while maintaining compatibility with key features [5][8]
- The batch-invariant operators address the core cause of output uncertainty in LLM inference, which arises from varying batch sizes during dynamic batching [7][8] (a minimal illustration of the batch-invariance property follows after this summary)
- Testing shows an average performance drop of 34.35% for SGLang's solution, significantly better than the 61.5% decline reported by Thinking Machines Lab [5][12]

Group 2: Performance Metrics
- Performance metrics for different inference modes show that deterministic modes yield consistent outputs across various batch sizes, with the number of unique outputs significantly reduced [10][11]
- In terms of end-to-end latency, deterministic inference shows a performance drop of 25% to 45%, with specific backend metrics indicating improvements in certain configurations [12][13]

Group 3: Future Developments
- Future efforts will focus on optimizing batch-invariant operators to enhance performance, particularly for RL inference, and on expanding support to mixture-of-experts (MoE) models [16][18]
- The team aims to improve radix cache functionality and explore tensor parallelism to further enhance deterministic inference [18]
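The "batch invariance" that this work enforces can be stated as a simple property: a request's result must be bitwise identical whether it is processed alone or inside a larger batch. The check below is a minimal NumPy illustration of that property, not SGLang's test suite; `softmax_rows` is just a stand-in kernel.

```python
# Minimal illustration of the batch-invariance property (assumed helper names,
# not SGLang code): f(batch)[i] must equal f(batch[i:i+1])[0] bit for bit.
import numpy as np

def is_batch_invariant(f, batch):
    full = f(batch)
    return all(np.array_equal(f(batch[i:i + 1])[0], full[i])   # bitwise, not allclose
               for i in range(batch.shape[0]))

def softmax_rows(x):
    # A stand-in "kernel": row-wise softmax whose per-row reduction order never
    # depends on how many rows share the batch.
    x = np.asarray(x, dtype=np.float32)
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

batch = np.random.default_rng(0).normal(size=(8, 64)).astype(np.float32)
print(is_batch_invariant(softmax_rows, batch))   # True for this CPU reference
```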
How were vLLM and SGLang, the most popular open-source LLM inference frameworks, built?
AI科技大本营· 2025-09-24 02:01
Core Viewpoint
- The article discusses the development stories of vLLM and SGLang, two prominent open-source inference engines for large language models (LLMs), highlighting their innovations, community engagement, and performance metrics

Group 1: LLM Inference Challenges
- The core challenge of LLM inference lies in deploying models with hundreds of billions of parameters under strict constraints of latency, throughput, and cost [3]
- The inference process involves applying learned knowledge to new data, which requires efficient computation and memory management [2][3]

Group 2: vLLM Development
- vLLM originated from a 2023 paper on PagedAttention, which innovatively applied operating-system techniques for memory management, significantly enhancing throughput [7][8]
- vLLM demonstrated remarkable performance improvements, handling up to 5 times the traffic and increasing throughput by 30 times compared to previous backends [9]
- The project quickly evolved from a research initiative to a community-driven open-source project, amassing over 56,000 stars on GitHub and engaging thousands of developers [15][9]

Group 3: SGLang Development
- SGLang was developed from the paper "SGLang: Efficient Execution of Structured Language Model Programs," featuring RadixAttention for optimized performance [12]
- SGLang retains the KV cache from previous requests to reduce computation during the prefill phase, showing significant performance advantages over traditional inference engines [12] (see the prefix-cache sketch after this summary)
- Although SGLang's community is smaller than vLLM's, it has over 2,000 participants and has shown rapid iteration and growth [13]

Group 4: Community Engagement
- vLLM has a robust community with over 12,000 participants in issues and pull requests, while SGLang's community is less than half that size [15][13]
- Both projects have faced challenges in managing a growing number of issues and pull requests, with vLLM generally responding faster than SGLang [13]

Group 5: Performance Metrics and Comparisons
- vLLM and SGLang have both integrated advanced features like Continuous Batching and various attention mechanisms, leading to significant performance enhancements [29]
- The competition between these two projects has intensified, with both claiming performance leadership in their respective releases [26]

Group 6: Future Trends and Developments
- As the performance race heats up, both vLLM and SGLang are focusing on reproducible methods and real-world metrics rather than just benchmark results [26]
- The trend indicates a convergence in model architectures and features among leading inference engines, with a shift in competition towards factors beyond performance [29]

Group 7: Investment and Support
- Both projects have attracted attention from investment firms and open-source foundations, with vLLM receiving support from a16z and SGLang being recognized in the PyTorch ecosystem [31][40]
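RadixAttention-style prefix reuse, which underlies the KV-cache point above, can be sketched with a small trie keyed by token ids: a new request looks up its longest cached prefix and only prefills the remaining suffix. This is a simplified conceptual sketch (a plain trie rather than a compressed radix tree, with a string standing in for real KV tensors), not SGLang's implementation.

```python
# Toy prefix cache in the spirit of RadixAttention (illustrative only): cache
# "KV" values keyed by token prefixes so a new request reuses the longest match.
from dataclasses import dataclass, field

@dataclass
class TrieNode:
    children: dict = field(default_factory=dict)   # token id -> TrieNode
    kv: object = None                              # cached KV for the prefix ending here

class PrefixCache:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, tokens, kv):
        node = self.root
        for t in tokens:
            node = node.children.setdefault(t, TrieNode())
        node.kv = kv

    def longest_prefix(self, tokens):
        node, hit_len, kv = self.root, 0, None
        for i, t in enumerate(tokens):
            if t not in node.children:
                break
            node = node.children[t]
            if node.kv is not None:
                hit_len, kv = i + 1, node.kv
        return hit_len, kv

cache = PrefixCache()
cache.insert([1, 2, 3, 4], kv="kv(1..4)")                 # KV left by an earlier request
hit_len, kv = cache.longest_prefix([1, 2, 3, 4, 9, 10])
print(hit_len, kv)   # 4 tokens reused; only [9, 10] still need prefill
```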
From "topping the leaderboards" in China to global open source: fresh thinking on AI! GOSIM HANGZHOU 2025 comes to a successful close
AI科技大本营· 2025-09-16 10:33
Core Insights
- The GOSIM HANGZHOU 2025 conference highlighted the integration of open-source and AI technologies, showcasing their potential across various industries and emphasizing the importance of community collaboration in driving innovation [1][3][4]

Group 1: Conference Overview
- The conference attracted over 200 global leaders in open source and AI, along with more than 1,500 developers, featuring keynote speeches, high-end forums, and specialized discussions on AI models and infrastructure [1][3]
- Keynote speakers included influential figures from organizations like the United Nations and major tech companies, discussing the significance of open source in AI development and global collaboration [3][6][7]

Group 2: Community and Collaboration
- The event emphasized community engagement, with forums dedicated to the Rust programming language and hands-on workshops that fostered interaction among developers [4][5][15]
- The conference featured a strong focus on practical applications, including hackathons that encouraged developers to create innovative solutions in real time [22][24]

Group 3: AI and Open Source Integration
- Discussions on the future of AI highlighted the need for high-quality training data and the challenges of integrating AI into real-world applications, stressing the role of open collaboration in overcoming these hurdles [8][12]
- The conference explored various AI themes, including embodied intelligence, intelligent agents, and the next generation of AI technologies, showcasing advancements and potential applications [10][12][14]

Group 4: Workshops and Practical Engagement
- A total of 14 workshops were organized, allowing developers to engage in hands-on learning and collaboration on cutting-edge technologies [17][20]
- The workshops covered a range of topics, from AI inference to cross-platform development, providing participants with practical skills and insights [18][20]

Group 5: Future Directions and Closing Remarks
- The conference concluded with a call for continued collaboration in the open-source AI community, setting the stage for future events and innovations [33][34]
- GOSIM HANGZHOU 2025 served as a platform for fostering connections between academia and industry, promoting ongoing dialogue and exploration in the tech community [29][31]
Mira Murati's startup publishes its first long-form article, attempting to solve the nondeterminism problem in LLM inference
Founder Park· 2025-09-11 07:17
Core Insights
- The article discusses the challenges of achieving reproducibility in large language model (LLM) inference, highlighting that even with the same input, different outputs can occur due to the probabilistic nature of the sampling process [10][11]
- It introduces the concept of "batch invariance" in LLM inference, emphasizing the need for consistent results regardless of batch size or concurrent requests [35][40]

Group 1
- Thinking Machines Lab, founded by former OpenAI CTO Mira Murati, has launched a blog series called "Connectionism" to share insights on AI research [3][8]
- The blog's first article addresses the non-determinism in LLM inference, explaining that even with a temperature setting of 0, results can still vary [10][12]
- The article identifies floating-point non-associativity and concurrency as key factors contributing to the uncertainty in LLM outputs [13][24]

Group 2
- The article explains that the assumption of "concurrency + floating point" as the sole reason for non-determinism is incomplete, as many operations in LLMs can be deterministic [14][16]
- It discusses the importance of understanding the implementation of kernel functions in GPUs, which can lead to unpredictable results due to the lack of synchronization among processing cores [25][29]
- The article emphasizes that most LLM operations do not require atomic addition, which is often a source of non-determinism, thus allowing for consistent outputs during forward propagation [32][33]

Group 3
- The concept of batch invariance is explored, indicating that the results of LLM inference can be affected by the batch size and the order of operations, leading to inconsistencies [36][40] (the short PyTorch demonstration after this summary illustrates the effect)
- The article outlines strategies to achieve batch invariance in key operations like RMSNorm, matrix multiplication, and attention mechanisms, ensuring that outputs remain consistent regardless of batch size [42][60][64]
- It concludes with a demonstration of deterministic inference using batch-invariant kernel functions, showing that consistent outputs can be achieved with the right implementation [74][78]
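The batch-size dependence described above is easy to observe directly. The snippet below, in the spirit of the post's demonstration, compares one row of a matrix multiplication computed alone versus inside a full batch; on a CUDA GPU the two typically differ slightly because the library picks different reduction strategies for different shapes. The exact discrepancy (possibly zero) depends on hardware and library versions.

```python
# Observe batch-variance in a GPU matmul: the same logical row can yield slightly
# different numbers depending on how many other rows share the batch.
# Requires a CUDA GPU; results vary by hardware and library version.
import torch

torch.manual_seed(0)
A = torch.randn(2048, 2048, device="cuda", dtype=torch.bfloat16)
B = torch.randn(2048, 2048, device="cuda", dtype=torch.bfloat16)

row_alone   = torch.mm(A[:1], B)        # batch size 1
row_batched = torch.mm(A, B)[:1]        # same row, computed inside a batch of 2048

print((row_alone - row_batched).abs().max())   # typically non-zero in low precision
```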
Just now: Thinking Machines Lab publishes its first long-form article, revealing the truth about nondeterminism in LLM inference
机器之心· 2025-09-11 03:36
Core Viewpoint
- The article discusses the challenges of achieving reproducibility in large language models (LLMs) due to the lack of batch invariance, which leads to nondeterministic outputs even under controlled conditions [10][41][46]

Group 1: Introduction to the Issue
- Thinking Machines Lab, founded by former OpenAI CTO Mira Murati, published its first article addressing nondeterminism in LLM inference [1][3]
- The blog aims to cover a wide range of topics related to their research, including numerical computation and prompt engineering [3]

Group 2: Understanding Nondeterminism
- Reproducibility is a cornerstone of scientific progress, yet obtaining consistent results from LLMs is challenging [10]
- Even with the temperature parameter set to 0, LLM APIs can still produce nondeterministic outputs [11]
- The nondeterminism is attributed to floating-point non-associativity and concurrency, which affect the order of operations in GPU computations [13][30]

Group 3: The Root Cause of Nondeterminism
- The article argues that the common assumption linking concurrency and floating-point operations to nondeterminism does not fully explain the issue [14][30]
- Floating-point non-associativity leads to different results depending on the order of operations, especially in parallel computations [19][26]
- The actual implementation of kernel functions in LLMs contributes to the nondeterministic behavior observed [27][30]

Group 4: Batch Invariance
- The lack of batch invariance is identified as the key factor behind nondeterministic LLM outputs [41][46]
- Changes in batch size can lead to different results for the same input, which is counterintuitive for mathematical functions [43]
- The article emphasizes that ensuring kernel functions are batch invariant is crucial for achieving consistent outputs in LLM inference [46]

Group 5: Solutions for Achieving Determinism
- The article outlines strategies to implement batch invariance in key operations such as RMSNorm, matrix multiplication, and attention mechanisms [49][60][71] (a minimal RMSNorm-style sketch follows after this summary)
- By ensuring that the operations do not depend on batch size, LLM inference can produce consistent results [46][81]
- The authors provide a demonstration of deterministic inference using their batch-invariant kernel function library [82]

Group 6: Performance Considerations
- Initial performance tests indicate that while the batch-invariant kernel functions may not be fully optimized, they do not lead to catastrophic performance declines [89]
- The article highlights the importance of maintaining performance while achieving deterministic outputs in LLMs [88]

Group 7: Implications for Reinforcement Learning
- Achieving deterministic inference can enable true on-policy reinforcement learning by ensuring consistent outputs between training and inference [90]
- This consistency is essential for effective training and sampling processes in reinforcement learning environments [90]

Group 8: Conclusion
- The article advocates for a proactive approach to understanding and addressing the sources of nondeterminism in LLMs, encouraging the community to strive for reproducibility in AI systems [93]
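One way to picture the fix is a reduction whose order depends only on the hidden dimension, never on the batch: each row is normalized with the same fixed chunking whether it arrives alone or with thousands of neighbors. The NumPy sketch below illustrates that idea for an RMSNorm-like operation; it is a conceptual model of the strategy, not the kernels released by Thinking Machines Lab.

```python
# Batch-invariant RMSNorm sketch (conceptual, CPU/NumPy): the per-row reduction
# uses a fixed chunk order that is independent of batch size, so adding rows to
# the batch cannot change any existing row's output.
import numpy as np

def rmsnorm_batch_invariant(x, weight, eps=1e-6, chunk=128):
    x = np.asarray(x, dtype=np.float32)
    out = np.empty_like(x)
    for i in range(x.shape[0]):                  # handle each row independently
        acc = np.float32(0.0)
        for s in range(0, x.shape[1], chunk):    # fixed chunking, same for every batch size
            seg = x[i, s:s + chunk]
            acc = np.float32(acc + np.dot(seg, seg))
        rms = np.sqrt(acc / np.float32(x.shape[1]) + np.float32(eps))
        out[i] = x[i] / rms * weight
    return out

h = np.random.default_rng(0).normal(size=(4, 512)).astype(np.float32)
w = np.ones(512, dtype=np.float32)
# The first row's result is bit-identical whether normalized alone or in a batch of 4.
assert np.array_equal(rmsnorm_batch_invariant(h[:1], w), rmsnorm_batch_invariant(h, w)[:1])
print("batch-invariant: identical results across batch sizes")
```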
Introduction to LLM serving with SGLang - Philip Kiely and Yineng Zhang, Baseten
AI Engineer· 2025-07-26 17:45
SGLang Overview
- SGLang is an open-source, high-performance serving framework for large language models (LLMs) and vision-language models (VLMs) [5]
- SGLang supports day-zero releases for new models from labs like Qwen and DeepSeek, and has a strong open-source community [7]
- The project has grown rapidly, from a research paper in December 2023 to nearly 15,000 GitHub stars in 18 months [9]

Usage and Adoption
- Baseten uses SGLang as part of its inference stack for various models [8]
- SGLang is also used by xAI for their Grok models, as well as by inference providers, cloud providers, research labs, universities, and product companies like Koser [8]

Performance Optimization
- SGLang's performance can be optimized using flags and configuration options, such as CUDA graph settings [20] (a hedged configuration sketch follows after this summary)
- Eagle 3, a speculative decoding algorithm, can be used to improve performance by increasing the token acceptance rate [28][42][43]
- The default CUDA graph max batch size on L4 GPUs is eight, but it can be adjusted to improve performance [31][36]

Community and Contribution
- The SGLang community is active and welcomes contributions [7][54]
- Developers can get involved by starring the project on GitHub, filing issues, joining the Slack channel, and contributing to the codebase [9][54][55]
- The codebase includes the SGLang runtime, a domain-specific front-end language, and a set of optimized kernels [58]
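As a concrete illustration of the tuning knobs mentioned in the talk, the snippet below starts an SGLang offline engine with a larger CUDA graph batch size and EAGLE-3 speculative decoding enabled. The model and draft-model paths are placeholders, and exact parameter names can differ between SGLang versions, so treat this as a hedged sketch and check `python -m sglang.launch_server --help` for the options your installed version accepts.

```python
# Hedged sketch of configuring SGLang's offline engine (parameter names are
# assumptions that may vary by SGLang version; paths are placeholders).
import sglang as sgl

llm = sgl.Engine(
    model_path="meta-llama/Llama-3.1-8B-Instruct",         # placeholder target model
    cuda_graph_max_bs=16,                                  # raise CUDA graph capture batch size (talk notes the default is 8 on L4s)
    speculative_algorithm="EAGLE3",                        # enable EAGLE-3 speculative decoding
    speculative_draft_model_path="path/to/eagle3-draft",   # placeholder draft model
)

outputs = llm.generate(
    ["Explain speculative decoding in one sentence."],
    {"temperature": 0, "max_new_tokens": 64},
)
print(outputs[0]["text"])
```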
2.18x inference speedup for ultra-large models! SGLang and Meituan's tech team open-source a speculative sampling training framework
量子位· 2025-07-26 09:01
Core Viewpoint
- SpecForge is an open-source training framework for speculative sampling, specifically tailored for large models, achieving a 2.18x inference acceleration [1][15]

Group 1: SpecForge Overview
- SpecForge is developed by the SGLang team in collaboration with Meituan's search and recommendation platform and Cloudsway.AI [1]
- The framework addresses the challenge that increasing model size often leads to lower inference efficiency [4][6]
- SpecForge integrates deeply with the SGLang inference engine, providing a seamless training-to-inference pipeline for speculative sampling [5][7]

Group 2: Technical Features
- The framework incorporates Eagle3, an advanced speculative sampling method that enhances inference speed by training a lightweight draft model to predict token distributions accurately [7] (a toy draft-and-verify sketch follows after this summary)
- SpecForge supports various mainstream models, including complex MoE layers and Transformer variants, ensuring broad applicability [7]
- It features scalable distributed training through Fully Sharded Data Parallel (FSDP) and Tensor Parallelism (TP), optimizing resource utilization on GPU clusters [7][14]

Group 3: Training Modes and Efficiency
- SpecForge offers two training modes, Online and Offline, allowing users to choose based on their specific needs and resource availability [10][17]
- The Training-Time Test (TTT) architecture enhances the robustness of the draft model, encapsulating complex processes to simplify implementation for users [9]
- The framework is designed with a focus on memory-efficient training, significantly reducing memory overhead even for trillion-parameter models [7]

Group 4: Experimental Validation
- The effectiveness of SpecForge was validated through experiments on datasets like ShareGPT and UltraChat, demonstrating compatibility with the Eagle3 architecture [15]
- Draft models trained using SpecForge achieved a 2.18x inference acceleration on the MT-Bench benchmark [15]

Group 5: Future Developments
- SpecForge's roadmap includes plans to support additional model architectures and to integrate vision-language models (VLMs) into the framework [22]
- The team aims to enhance training efficiency through improved parallel strategies and kernel optimizations [22]
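To see where the token acceptance rate fits in, the toy loop below mimics the draft-and-verify cycle that a trained Eagle-style draft model plugs into: the draft proposes k tokens, the target model verifies them in one pass, and only accepted tokens (plus one corrected token) are kept. All functions are placeholders standing in for real models, not SpecForge or SGLang APIs.

```python
# Toy speculative-decoding loop (placeholder models, not SpecForge/SGLang code):
# the higher the token acceptance rate, the more tokens each expensive target
# pass yields, which is where the reported ~2.18x speedup comes from.
import random

def draft_propose(prefix, k):
    return [random.randint(0, 999) for _ in range(k)]      # stand-in draft model

def target_verify(prefix, proposal, accept_rate=0.8):
    accepted = []
    for tok in proposal:                                   # stand-in verification
        if random.random() < accept_rate:
            accepted.append(tok)
        else:
            break
    return accepted + [random.randint(0, 999)]             # plus one corrected/bonus token

def generate(prompt, steps=10, k=4):
    out = list(prompt)
    for _ in range(steps):                                 # each step = one target forward pass
        out += target_verify(out, draft_propose(out, k))
    return out

random.seed(0)
tokens = generate([1, 2, 3])
print(f"{len(tokens) - 3} tokens produced by 10 target passes (vs. 10 without speculation)")
```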