NVIDIA Dynamo Open-Source Library Accelerates and Scales AI Reasoning Models
Nvidia (US:NVDA) · Globenewswire · 2025-03-18 18:17

Core Insights
- NVIDIA has launched NVIDIA Dynamo, an open-source inference software aimed at enhancing the performance and cost efficiency of AI reasoning models in AI factories [1][3][13]
- The software is designed to maximize token revenue generation by orchestrating inference requests across a large fleet of GPUs, significantly improving throughput and reducing costs [2][3][4]

Performance Enhancements
- NVIDIA Dynamo doubles the performance and revenue of AI factories serving Llama models on the NVIDIA Hopper platform, using the same number of GPUs [4]
- When running the DeepSeek-R1 model, the software's intelligent inference optimizations can increase the number of tokens generated per GPU by more than 30 times [4]

Key Features
- NVIDIA Dynamo includes several innovations: a GPU Planner for dynamic GPU management, a Smart Router that minimizes costly recomputations, a Low-Latency Communication Library for efficient data transfer, and a Memory Manager for cost-effective data handling [14][15]
- The platform supports disaggregated serving, allowing the different computational phases of large language models to be optimized independently across separate GPUs [9][14]

Industry Adoption
- Major companies such as Perplexity AI and Together AI plan to leverage NVIDIA Dynamo for enhanced inference-serving efficiency and to meet the compute demands of new AI reasoning models [8][10][11]
- The software supports frameworks including PyTorch and NVIDIA TensorRT, facilitating adoption across enterprises, startups, and research institutions [6][14]
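The "Smart Router" idea mentioned above, minimizing costly recomputation, generally means steering a request to the GPU that already holds the KV cache for a matching prompt prefix, so the prefix need not be recomputed. The following is a minimal illustrative sketch of that routing heuristic; the class name, block-hashing scheme, and scoring are assumptions for illustration, not NVIDIA Dynamo's actual API.

```python
from collections import defaultdict

class KVCacheAwareRouter:
    """Illustrative sketch (not Dynamo's real API): send each request to
    the worker whose cached prompt-prefix blocks overlap it the most,
    so the shared prefix's KV cache is reused instead of recomputed."""

    def __init__(self, workers):
        self.workers = list(workers)
        # worker id -> set of prefix-block hashes already cached there
        self.cached = defaultdict(set)

    def _prefix_hashes(self, tokens, block=16):
        # Hash the prompt in fixed-size blocks, as paged KV caches do
        return {hash(tuple(tokens[:i]))
                for i in range(block, len(tokens) + 1, block)}

    def route(self, tokens):
        prefixes = self._prefix_hashes(tokens)
        # Pick the worker with the largest cached overlap (ties: first worker)
        best = max(self.workers,
                   key=lambda w: len(prefixes & self.cached[w]))
        self.cached[best] |= prefixes  # that worker now caches these blocks
        return best
```

With this heuristic, two requests sharing a long system prompt land on the same GPU: `router.route(shared + [1, 2])` and `router.route(shared + [3, 4])` return the same worker, because their prefix blocks hash identically.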
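Disaggregated serving, as described above, separates the two computational phases of LLM inference: the compute-bound prefill pass over the whole prompt and the memory-bound, token-by-token decode loop, so each can run on GPUs sized and scaled for its workload. A toy sketch of the handoff pattern, with stand-in arithmetic instead of real model math (all names and the queue-based transfer are illustrative assumptions, not Dynamo's implementation):

```python
from dataclasses import dataclass, field
from queue import Queue

@dataclass
class Request:
    prompt: list
    kv_cache: list = None            # produced by prefill, consumed by decode
    output: list = field(default_factory=list)

def prefill_worker(req: Request) -> Request:
    # Compute-bound phase: one pass over the full prompt builds the KV cache
    # (a doubled-token list stands in for real attention state here)
    req.kv_cache = [tok * 2 for tok in req.prompt]
    return req

def decode_worker(req: Request, max_new_tokens: int = 4) -> Request:
    # Memory-bound phase: each step reads the KV cache and appends one token
    for _ in range(max_new_tokens):
        req.output.append(sum(req.kv_cache) % 100)  # stand-in for sampling
        req.kv_cache.append(req.output[-1])
    return req

# Prefill and decode pools run on separate GPUs; a transfer queue (in a real
# system, a low-latency GPU-to-GPU channel) hands the KV cache between them.
handoff: Queue = Queue()
handoff.put(prefill_worker(Request(prompt=[1, 2, 3])))
done = decode_worker(handoff.get())
```

Because the two phases are decoupled, an operator can add prefill workers when prompts are long and add decode workers when generations are long, which is the independent optimization the article attributes to disaggregated serving.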