SGLang

Introduction to LLM serving with SGLang - Philip Kiely and Yineng Zhang, Baseten
AI Engineer· 2025-07-26 17:45
SGLang Overview
- SGLang is an open-source, high-performance serving framework for large language models (LLMs) and large vision models (VLMs) [5]
- SGLang supports day-zero releases for new models from labs like Qwen and DeepSeek, and has a strong open-source community [7]
- The project has grown rapidly, from a research paper in December 2023 to nearly 15,000 GitHub stars in 18 months [9]
Usage and Adoption
- Baseten uses SGLang as part of its inference stack for various models [8]
- SGLang is also used by xAI for its Grok models, as well as by inference providers, cloud providers, research labs, universities, and product companies like Cursor [8]
Performance Optimization
- SGLang's performance can be tuned through server flags and configuration options, such as CUDA graph settings [20] (see the launch sketch after this summary)
- Eagle 3, a speculative decoding algorithm, can be used to improve performance by increasing the token acceptance rate [28][42][43]
- The default CUDA graph max batch size on L4 GPUs is eight, but it can be raised to improve performance [31][36]
Community and Contribution
- The SGLang community is active and welcomes contributions [7][54]
- Developers can get involved by starring the project on GitHub, filing issues, joining the Slack channel, and contributing to the codebase [9][54][55]
- The codebase includes the SGLang runtime, a domain-specific front-end language, and a set of optimized kernels [58]
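As a concrete illustration of the flags discussed in the talk, the sketch below launches an SGLang server with an explicit CUDA graph batch-size cap and Eagle 3 speculative decoding. The flag names and the `EAGLE3` value follow SGLang's documented server arguments, but the model and draft-model paths are placeholders; check `python -m sglang.launch_server --help` for the exact options in your installed version.

```python
# Hedged sketch: launching an SGLang server with CUDA graph and Eagle 3 speculative
# decoding flags. Model paths are placeholders; verify flag names against your version.
import subprocess

cmd = [
    "python", "-m", "sglang.launch_server",
    "--model-path", "meta-llama/Llama-3.1-8B-Instruct",       # placeholder target model
    "--cuda-graph-max-bs", "16",                               # raise the small default cap on L4-class GPUs
    "--speculative-algorithm", "EAGLE3",                       # Eagle 3 speculative decoding
    "--speculative-draft-model-path", "path/to/eagle3-draft",  # placeholder draft checkpoint
    "--speculative-num-steps", "3",                            # draft steps per verification pass
    "--speculative-eagle-topk", "4",                           # branching factor of the draft tree
    "--speculative-num-draft-tokens", "8",                     # total draft tokens verified per step
]
subprocess.run(cmd, check=True)
```

On small GPUs like the L4, raising the CUDA graph batch-size cap above the default of eight trades a little extra memory for better throughput at larger batch sizes.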
2.18x inference acceleration for very large models! SGLang and Meituan's technical team open-source a speculative sampling training framework
量子位· 2025-07-26 09:01
Core Viewpoint
- SpecForge is an open-source training framework designed for speculative sampling, specifically tailored for large models, achieving a 2.18x inference acceleration [1][15].
Group 1: SpecForge Overview
- SpecForge is developed by the SGLang team in collaboration with Meituan's search recommendation platform and Cloudsway.AI [1].
- The framework is built to address the challenges posed by the increasing size of models, which often leads to lower inference efficiency [4][6].
- SpecForge integrates deeply with the SGLang inference engine, providing a seamless training and inference process for speculative sampling [5][7].
Group 2: Technical Features
- The framework incorporates Eagle3, an advanced speculative sampling method that enhances inference speed by training a lightweight draft model to predict token distributions accurately [7] (a generic training sketch follows this summary).
- SpecForge supports various mainstream models, including complex MoE layers and Transformer variants, ensuring broad applicability [7].
- It features scalable distributed training through Fully Sharded Data Parallel (FSDP) and Tensor Parallelism (TP), optimizing resource utilization on GPU clusters [7][14].
Group 3: Training Modes and Efficiency
- SpecForge offers two training modes, online and offline, allowing users to choose based on their specific needs and resource availability [10][17].
- The Training-Time Test (TTT) architecture enhances the robustness of the draft model, encapsulating complex processes to simplify implementation for users [9].
- The framework is designed with a focus on memory-efficient training, significantly reducing memory overhead even for trillion-parameter models [7].
Group 4: Experimental Validation
- The effectiveness of SpecForge was validated through experiments on datasets like ShareGPT and UltraChat, demonstrating compatibility with the Eagle3 architecture [15].
- The draft models trained using SpecForge achieved a notable 2.18x inference acceleration on the MT-Bench benchmark [15].
Group 5: Future Developments
- SpecForge's roadmap includes plans to support additional model architectures and integrate vision-language models (VLMs) into the framework [22].
- The team aims to enhance training efficiency through improved parallel strategies and kernel optimizations [22].
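The core recipe behind Eagle-style draft training can be summarized in a few lines: freeze the target model, feed its hidden states to a small draft head, and train that head to match the target's next-token distribution, with the trainable part sharded across GPUs. The sketch below is a generic PyTorch illustration of that idea, not SpecForge's actual API; it assumes `target_model` returns Hugging Face-style outputs with `hidden_states` and `logits`.

```python
# Generic sketch of Eagle-style draft training (not SpecForge's API): a frozen target
# model supplies hidden states and logits, and a small draft head learns to match the
# target's next-token distribution so that more speculative tokens get accepted.
import torch
import torch.nn.functional as F
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def train_draft_step(target_model, draft_head, input_ids, optimizer):
    # 1. One frozen forward pass of the large target model.
    with torch.no_grad():
        out = target_model(input_ids, output_hidden_states=True)
        hidden = out.hidden_states[-1]      # [batch, seq, d_model]
        target_logits = out.logits          # [batch, seq, vocab]

    # 2. The lightweight draft head predicts next-token logits from the hidden states.
    draft_logits = draft_head(hidden)

    # 3. Distillation loss: the closer the draft matches the target distribution,
    #    the higher the acceptance rate at inference time.
    loss = F.kl_div(
        F.log_softmax(draft_logits, dim=-1),
        F.softmax(target_logits, dim=-1),
        reduction="batchmean",
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def shard_draft_head(d_model: int, vocab_size: int) -> FSDP:
    # Only the small draft head is trainable; FSDP shards its parameters, gradients,
    # and optimizer state across GPUs (assumes torch.distributed is already initialized).
    return FSDP(torch.nn.Linear(d_model, vocab_size).cuda())
```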
How fast are LLM inference engines anyway? — Charles Frye, Modal
AI Engineer· 2025-06-27 10:01
Open Model Landscape & Benchmarking
- Open-weight models are catching up to frontier labs in capabilities, making many AI engineering applications possible that weren't before [1]
- Open-source engines like vLLM, SGLang, and TensorRT-LLM are readily available, reducing the need for custom model implementations [1]
- Modal has created a public benchmark (modal.com/llmalmanac) for comparing the performance of different models and engines across various context lengths [2][3]
Performance Analysis
- Throughput is significantly higher when processing longer input contexts (prefill) compared to generating longer output sequences (decode), with up to a 4x improvement observed [15][16]
- The time to first token (latency) remains nearly constant even with a 10x increase in input tokens, suggesting a "free lunch" from prioritizing context over reasoning [19]
- Gemma 7B models show roughly the same throughput as Qwen 3 models, despite being 10x smaller in model weights, indicating optimization differences [12]
Optimization & Infrastructure
- Scaling out (adding more GPUs) is the primary method for increasing total throughput, rather than scaling up (optimizing a single GPU) [23]
- The benchmarking methodology sends a thousand requests to determine maximum throughput and single requests to determine the fastest possible server response time [24][25] (illustrated in the sketch after this summary)
- BF16 has slower tensor-core support than FP8 or FP4, suggesting potential for even greater performance gains from lower-precision formats on newer hardware like Blackwell [16][17]
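The methodology described above boils down to two measurements: flood the server to find its maximum token throughput, and send lone requests to find its best-case latency. The sketch below is an illustrative client (not Modal's harness) against a generic OpenAI-compatible completions endpoint; the URL and model name are placeholders, and a real time-to-first-token measurement would use streaming rather than whole-response timing.

```python
# Illustrative benchmarking sketch: single-request latency vs. many-request throughput
# against an OpenAI-compatible /v1/completions endpoint. URL and model are placeholders.
import asyncio
import time
import aiohttp

URL = "http://localhost:8000/v1/completions"                            # placeholder endpoint
PAYLOAD = {"model": "my-model", "prompt": "Hello", "max_tokens": 128}   # placeholder model

async def one_request(session):
    start = time.perf_counter()
    async with session.post(URL, json=PAYLOAD) as resp:
        body = await resp.json()
    latency = time.perf_counter() - start
    tokens = body.get("usage", {}).get("completion_tokens", 0)
    return latency, tokens

async def benchmark(n_requests: int):
    async with aiohttp.ClientSession() as session:
        start = time.perf_counter()
        results = await asyncio.gather(*(one_request(session) for _ in range(n_requests)))
        wall = time.perf_counter() - start
    total_tokens = sum(t for _, t in results)
    mean_latency = sum(l for l, _ in results) / n_requests
    print(f"requests={n_requests}  wall={wall:.2f}s  "
          f"throughput={total_tokens / wall:.1f} tok/s  mean latency={mean_latency:.2f}s")

# A single request approximates best-case latency; a large batch approximates max throughput.
asyncio.run(benchmark(1))
asyncio.run(benchmark(1000))
```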
How do AI infra engineers deal with the "hidden undercurrents" in large-model pipelines?
AI前线· 2025-06-26 05:44
Core Insights
- The article discusses the challenges and requirements faced by infra engineers in the context of AI model training and deployment, emphasizing the importance of robust infrastructure to support large-model systems [1][3][4].
Group 1: Event Overview
- The AICon Global Artificial Intelligence Development and Application Conference will be held in Beijing on June 27-28, focusing on AI infrastructure and ecosystem building [2].
Group 2: Common Issues in Model Engineering
- Infra engineers frequently encounter issues such as training interruptions and performance inconsistencies, particularly in large-scale GPU clusters [4][5].
- The need for effective performance profiling and monitoring systems is highlighted, as manual troubleshooting is inefficient [3][12] (a profiling sketch follows this summary).
Group 3: Performance and Stability Challenges
- Common problems during online training include hardware errors, algorithmic flaws, and configuration issues, which can lead to task failures [4][6].
- The importance of collaboration between infra engineers and business engineers is emphasized to address complex issues like abnormal loss spikes and runtime errors [5][7].
Group 4: Resource Management and Optimization
- Efficient resource scheduling and job tuning are critical for optimizing AI model performance, with a focus on the compatibility of parallel strategies [8][9].
- The integration of new features often requires careful management to avoid conflicts with existing functionality, necessitating iterative development processes [10][11].
Group 5: Cost Reduction Strategies
- Strategies for reducing the cost of large-model inference include optimizing caching strategies and improving GPU utilization [14][15][16].
- The design of model architectures should consider deployment performance from the outset to ensure cost efficiency [15].
Group 6: Open Source Challenges
- The article discusses the challenges of managing open-source projects, including community engagement and user feedback [19][20].
- Building a sustainable open-source community requires balancing company commitments with community contributions [21][22].
Group 7: GPU Virtualization Trends
- The discussion includes insights on GPU virtualization technologies, highlighting the importance of vendor support for effective implementation [22][23].
- The evolution of heterogeneous deployment strategies is noted, with a focus on optimizing resource allocation across different hardware types [24][25].
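As one concrete example of the profiling tooling the article argues for, the sketch below captures operator-level CPU and GPU time for a single forward pass with PyTorch's built-in profiler; the model here is a toy stand-in, not anything from the interviewees' stacks.

```python
# Minimal example of operator-level profiling with torch.profiler; the model is a
# placeholder stand-in for whatever workload is being investigated.
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(1024, 1024).cuda()
x = torch.randn(8, 1024, device="cuda")

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    with torch.no_grad():
        model(x)

# Sort by GPU time to see which kernels dominate the step.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```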
o3-pro clears "Sokoban" as nostalgic classic games become a new benchmark for large models
量子位· 2025-06-16 04:50
Core Viewpoint
- Classic nostalgic games like Sokoban and Tetris have become benchmarks for evaluating large models, with the o3-pro model recently surpassing previous performance limits in these games [1][2][6].
Group 1: Benchmark Performance
- The o3-pro model completed all of Sokoban's preset levels, whereas the benchmark's previous best stopped at the sixth level [3][8].
- Compared with the previous state-of-the-art model (SOTA), o3, the performance of o3-pro has doubled [3][10].
- Tetris is scored as the number of placed blocks plus ten times the number of cleared lines, accumulated until the game ends [13][22] (see the scoring sketch after this summary).
Group 2: Game Characteristics and Evaluation
- The Lmgame benchmark includes several games, such as 2048, Candy Crush, Super Mario Bros, and Phoenix Wright, each with its own evaluation criteria [18][24].
- 2048 is evaluated by the total value of merged blocks, while Candy Crush measures the total candies eliminated in a fixed number of rounds [24].
- The evaluation methods do not consider time as a factor, focusing instead on game-specific performance metrics [22][24].
Group 3: Model Development and Support
- The project is developed by the Hao AI Lab at UCSD, which is affiliated with the machine learning systems and NLP labs [28].
- The lab has received funding from Google and NVIDIA, with NVIDIA donating a DGX B200 system to support their research [34].
- The benchmark is open source, allowing interested parties to download it and test their own models [23].
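To make the scoring rules above concrete, here they are as tiny helper functions; the function names are illustrative and not taken from the Lmgame codebase.

```python
# The scoring rules described above, as tiny helpers (names are illustrative,
# not taken from the Lmgame codebase).
def tetris_score(blocks_placed: int, lines_cleared: int) -> int:
    """Tetris: number of placed blocks plus ten times the number of cleared lines."""
    return blocks_placed + 10 * lines_cleared

def game_2048_score(merged_tile_values: list[int]) -> int:
    """2048: total value of all tiles merged over the run."""
    return sum(merged_tile_values)

# Example: 40 pieces placed and 6 lines cleared -> 40 + 60 = 100 points.
assert tetris_score(40, 6) == 100
```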
Key technical points and deployment practices of the SGLang inference engine | AICon Beijing preview
AI前线· 2025-06-13 06:42
Core Insights
- SGLang has gained significant traction in the open-source community, achieving nearly 15K stars on GitHub and over 100,000 monthly downloads by June 2025, indicating its popularity and performance [1]
- Major industry players such as xAI, Microsoft Azure, NVIDIA, and AMD have adopted SGLang for their production environments, showcasing its reliability and effectiveness [1]
- The fully open-source large-scale expert parallel deployment solution introduced by SGLang in May 2025 is noted as the only one capable of replicating the performance and cost outlined in the official blog [1]
Technical Advantages
- The core advantages of SGLang include high-performance implementation and easily modifiable code, which differentiates it from other open-source solutions [3]
- Key technologies such as PD separation, speculative decoding, and KV cache offloading have been developed to enhance performance and resource utilization while reducing costs [4][6]
Community and Development
- The SGLang community plays a crucial role in driving technological evolution and application deployment, with industrial deployment experience at the scale of over 100,000 GPUs guiding technical advancements [5]
- The open-source nature of SGLang encourages widespread participation and contribution, fostering a sense of community and accelerating application implementation [5]
Performance Optimization Techniques
- PD separation addresses latency fluctuations caused by prefill interrupting decoding, leading to more stable and uniform decoding delays [6]
- Speculative decoding aims to reduce decoding latency by predicting multiple tokens at once, significantly enhancing decoding speed [6]
- KV cache offloading stores previously computed KV caches in larger storage devices, reducing computation time and response delays in multi-turn dialogues [6] (a conceptual sketch follows this summary)
Deployment Challenges
- Developers often overlook the importance of tuning the numerous configuration parameters, which can significantly impact deployment efficiency despite substantial computational resources [7]
- The complexity of parallel deployment technologies presents compatibility challenges, requiring careful management of resources and load balancing [4][7]
Future Directions
- The increasing scale of models necessitates the use of more GPUs and efficient parallel strategies for high-performance, low-cost deployments [7]
- The upcoming AICon event in Beijing will focus on AI technology advancements and industry applications, providing a platform for further exploration of these topics [8]
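The KV cache offloading idea above can be sketched at a conceptual level: keep the key/value tensors of earlier dialogue turns in larger, cheaper memory and copy them back instead of re-running prefill over the whole history. The snippet below is a toy illustration of that idea, not SGLang's actual cache hierarchy or API; the class and method names are hypothetical.

```python
# Toy illustration of KV-cache offloading for multi-turn dialogue (not SGLang's
# implementation): per-layer key/value tensors are parked in pinned host memory
# and copied back to the GPU on the next turn instead of recomputing prefill.
import torch

class PrefixKVStore:
    def __init__(self):
        # conversation_id -> list of per-layer (key, value) tensors kept on the CPU
        self._cpu_cache: dict[str, list[tuple[torch.Tensor, torch.Tensor]]] = {}

    def offload(self, conversation_id: str, kv):
        # Move each layer's KV tensors to pinned host memory for fast transfer back.
        self._cpu_cache[conversation_id] = [
            (k.detach().to("cpu").pin_memory(), v.detach().to("cpu").pin_memory())
            for k, v in kv
        ]

    def load(self, conversation_id: str, device: str = "cuda"):
        # A cache miss returns None, in which case the caller recomputes the prefill.
        cached = self._cpu_cache.get(conversation_id)
        if cached is None:
            return None
        return [(k.to(device, non_blocking=True), v.to(device, non_blocking=True))
                for k, v in cached]
```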
SemiAnalysis: AMD vs NVIDIA inference benchmarking: who wins? An analysis of performance and cost per million tokens
2025-05-25 14:09
Summary of AMD vs NVIDIA Inference Benchmarking Conference Call
Industry and Companies Involved
- **Industry**: Artificial Intelligence (AI) Inference Solutions
- **Companies**: Advanced Micro Devices (AMD) and NVIDIA
Core Insights and Arguments
1. **Performance Comparison**: AMD's AI servers have been claimed to provide better inference performance per total cost of ownership (TCO) than NVIDIA's, but results show nuanced performance differences across tasks such as chat applications, document processing, and reasoning [4][5][6]
2. **Workload Performance**: For hyperscalers and enterprises that own their GPUs, NVIDIA outperforms AMD in some workloads while AMD excels in others. For short- to medium-term rentals, however, NVIDIA consistently offers better performance per dollar due to a lack of AMD GPU rental providers [6][12][13] (see the cost sketch after this summary)
3. **Market Dynamics**: The MI325X, intended to compete with NVIDIA's H200, faced shipment delays, leading customers to choose the B200 instead. The MI355X is expected to ship later in 2025, further impacting AMD's competitive position [8][10][24]
4. **Software and Developer Experience**: AMD's software support for its GPUs still lags NVIDIA's, particularly in developer experience and continuous integration (CI) coverage. This has contributed to AMD's ongoing challenges in the AI software space [9][15][14]
5. **Market Share Trends**: AMD's share of the datacenter AI GPU market has been increasing but is expected to decline in Q2 CY2025 due to NVIDIA's new product launches. However, AMD's upcoming MI355X and software improvements may help it regain some market share [26][27]
Additional Important Points
1. **Benchmarking Methodology**: The benchmarking methodology emphasizes online throughput against end-to-end latency, providing a realistic assessment of performance under operational conditions [30][31]
2. **Latency and Throughput Relationship**: There is a trade-off between throughput and latency; optimizing for one often degrades the other. Understanding this balance is crucial for selecting the right configuration for different applications [35][36]
3. **Inference Engine Selection**: vLLM is the primary inference engine for benchmarking, while TensorRT-LLM (TRT-LLM) is also evaluated. Despite improvements, TRT-LLM still lags behind vLLM in user experience [54][55]
4. **Future Developments**: AMD is encouraged to increase investment in internal cluster resources to improve developer experience and software capabilities, which could lead to better long-term shareholder returns [15]
This summary captures the key insights and arguments presented during the conference call, highlighting the competitive landscape between AMD and NVIDIA in the AI inference market.
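The "performance per dollar" framing above reduces to simple arithmetic: divide a GPU's hourly rental price by the tokens it can sustain per hour. The helper below shows that calculation; the price and throughput figures are placeholders, not numbers from the SemiAnalysis report.

```python
# Back-of-the-envelope cost model behind "performance per dollar" comparisons.
# The example numbers are placeholders, not figures from the report.
def cost_per_million_tokens(gpu_hourly_usd: float, tokens_per_second: float) -> float:
    """Cost in USD to generate one million tokens on a single GPU."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hourly_usd / tokens_per_hour * 1_000_000

# A $3.50/hr GPU sustaining 2,000 tok/s works out to roughly $0.49 per million tokens.
print(f"${cost_per_million_tokens(3.50, 2000):.2f} per 1M tokens")
```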
Experts from Tencent, Huawei, Microsoft, and Alibaba gather to discuss inference optimization practices | AICon
AI前线· 2025-04-23 07:28
Core Viewpoint
- The article emphasizes the rapid evolution of artificial intelligence and the critical role of optimizing inference performance in large models to address computational challenges, memory bottlenecks, and communication pressures [1].
Summary by Sections
Inference Performance Optimization
- Current optimization efforts focus on three main areas: model optimization, inference acceleration, and engineering optimization. Techniques such as model quantization, pruning, and distillation are employed to reduce computational complexity and enhance inference efficiency [1] (a minimal quantization sketch follows this summary).
- The DeepSeek-R1-Distill-Qwen-32B model utilizes a distillation strategy to significantly compress resource expenditure while maintaining high performance [1].
AICon Conference
- The AICon global AI development and application conference will take place on May 23-24, featuring a special forum on "Strategies for Optimizing Inference Performance of Large Models," led by industry practitioners [1][10].
Expert Presentations
- **Xiang Qianbiao - Tencent**: His presentation will cover the AngelHCF inference acceleration framework, detailing its comprehensive exploration in operator design, communication optimization, and architecture adjustments, achieving significant cost and performance advantages [1][2].
- **Zhang Jun - Huawei**: He will discuss the optimization practices of Huawei's Ascend AI framework, focusing on hybrid model advantages, kernel optimization, and strategies for ultra-large MoE models to alleviate communication bottlenecks [3][4].
- **Jiang Huiqiang - Microsoft**: His talk will address efficient long-text methods centered around KV caching, exploring challenges and strategies in the inference process [5][7].
- **Li Yuanlong - Alibaba Cloud**: He will present on cross-layer optimization practices in large model inference, discussing operator fusion, model quantization, and dynamic batching techniques to maximize hardware resource efficiency [6][8].
Technical Trends and Future Directions
- The article highlights the importance of understanding the full lifecycle of KV caching and its impact on long-text processing, as well as the need for comprehensive optimization strategies from model architecture to hardware acceleration [7][8].
- The conference will also explore collaborative optimization strategies and the future landscape of inference performance enhancement, including model parallelism and hardware selection [10].
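As a minimal, framework-agnostic illustration of the quantization technique mentioned above, the sketch below applies PyTorch's post-training dynamic quantization to the Linear layers of a toy model; it is not the AngelHCF, Ascend, or Alibaba Cloud stack, just stock PyTorch on a placeholder workload.

```python
# Minimal illustration of post-training dynamic quantization with stock PyTorch:
# Linear weights are stored in int8 and dequantized on the fly, shrinking memory
# and speeding up CPU inference. The toy model is a placeholder workload.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 4096))
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 4096)
with torch.no_grad():
    print(quantized(x).shape)  # torch.Size([1, 4096])
```

Production LLM serving stacks typically go further with weight-only or FP8 schemes, but the memory-versus-accuracy trade-off is the same idea.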
Talking with a post-2000s open-source contributor about DeepSeek's open-source week: always open-sourcing your strongest models may mean you don't want to make money, or that you want to drive a bigger change | Open Source Dialogues #2
晚点LatePost· 2025-02-27 14:03
"当 AI 足够强大后,开源还是不是一个好选择?" 整理丨刘倩 程曼祺 嘉宾丨美国西北大学 MLL Lab 博士王子涵 ▲扫描上图中的二维码,可收听播客。《晚点聊 LateTalk》#102 期节目。欢迎在小宇宙、喜马拉雅、苹果 Podcast 等渠道关注、收听我们。 《晚点聊 LateTalk》是《晚点 LatePost》 推出的播客节目。"最一手的商业、科技访谈,最真实的从业者思考。" 这是《晚点 LatePost》 「开源对话」系列的第 2 篇。该系列将收录与开源相关的访谈与讨论。系列文章见文末的合集#开源对话。 上周五,DeepSeek 在官方 Twitter 上预告了下一周会连续 5 天开源 5 个代码库,进入 "open-source week"开源周。 目前 DeepSeek 已放出的 4 个库,主要涉及 DeepSeek-V3/R1 相关的训练与推理代码 。 这是比发布技术报告和开源模型权重更深度的开源。 有了训练和推理 工具,开发者才能更好地在自己的系统里,实现 DeepSeek 系列模型的高效表现。 (注:所有 4 个库和后续开源可见 DeepSeek GitHub 中的 Open-Inf ...