Nvidia(NVDA) - 2025 Q3 - Earnings Call Transcript

Financial Data and Key Metrics
- Revenue for Q3 2025 was $35.1 billion, up 17% sequentially and 94% year-on-year, significantly exceeding the outlook of $32.5 billion [9]
- GAAP gross margin was 74.6% and non-GAAP gross margin was 75%, down sequentially due to a mix shift toward more complex and higher-cost systems within Data Center [32]
- The company returned $11.2 billion to shareholders through share repurchases and cash dividends in Q3 [33]
- Q4 revenue is expected to be $37.5 billion, plus or minus 2%, with GAAP and non-GAAP gross margins expected to be 73% and 73.5%, respectively [34]

Business Line Performance

Data Center
- Data Center revenue reached a record $30.8 billion, up 17% sequentially and 112% year-on-year, driven by strong demand for NVIDIA Hopper and H200 GPUs [11]
- Cloud service providers accounted for approximately half of Data Center sales, with their revenue more than doubling year-on-year [11]
- NVIDIA H200 sales increased significantly, reaching double-digit billions, marking the fastest product ramp in the company's history [11]
- NVIDIA GPU regional cloud revenue doubled year-on-year, with growth in North America, EMEA, and Asia-Pacific regions [12]

Gaming and AI PCs
- Gaming revenue was $3.3 billion, up 14% sequentially and 15% year-on-year, driven by strong back-to-school sales and demand for GeForce RTX GPUs [28]
- The company began shipping new GeForce RTX AI PCs with up to 321 AI TOPS, with Microsoft's Copilot+ capabilities anticipated in Q4 [29]

Professional Visualization (ProViz)
- ProViz revenue was $486 million, up 7% sequentially and 17% year-on-year, with NVIDIA RTX workstations remaining the preferred choice for professional graphics and AI-related workloads [30]

Automotive
- Automotive revenue reached a record $449 million, up 30% sequentially and 72% year-on-year, driven by the ramp of NVIDIA Orin and strong demand for self-driving solutions [31]

Market Performance
- Data Center revenue in China grew sequentially due to shipments of export-compliant products, though it remains below pre-export-control levels [24]
- Sovereign AI initiatives are gaining momentum, with countries such as India and Japan building AI factories and supercomputers powered by NVIDIA GPUs [25]
- Networking revenue increased 20% year-on-year, with strong demand for InfiniBand and Ethernet switches, SmartNICs, and BlueField DPUs [26]

Company Strategy and Industry Competition
- NVIDIA is focused on ramping Blackwell production, with demand greatly exceeding supply; the company expects to exceed its previous Blackwell revenue estimates [34]
- Blackwell is a customizable AI infrastructure offered in multiple configurations, and the company is working to increase system availability and optimize configurations for customers [34]
- The company is investing in data center infrastructure for hardware and software development to support new product introductions [35]
- NVIDIA is positioning itself as a leader in the AI revolution, with a focus on generative AI, enterprise AI, and industrial AI [19][22]

Management Commentary on Business Environment and Future Outlook
- Management highlighted strong demand for the Hopper and Blackwell architectures, driven by the adoption of AI and accelerated computing [9][11]
- The company expects continued growth in Data Center revenue, with Blackwell demand described as "staggering" and supply constraints a key focus [16][34]
- Management emphasized the importance of inference scaling, with NVIDIA being the largest inference platform in the world [13][43]
- The company is optimistic about the future of AI, with generative AI and industrial AI expected to drive significant growth in the coming years [19][22]

Other Important Information
- NVIDIA's software, service, and support revenue is annualizing at $1.5 billion, with expectations to exit the year at over $2 billion [22]
- The company is integrating Blackwell systems into diverse Data Center configurations, with major partners like Oracle, Microsoft, and Google racing to deploy Blackwell at scale [16][17]
- NVIDIA's upcoming release of NVIDIA NIM is expected to boost Hopper inference performance by an additional 2.4 times [14]

Q&A Session Summary

Question 1: Scaling for Large Language Models
- Jensen Huang discussed the continued scaling of foundation models, post-training scaling, and inference-time scaling, with demand for NVIDIA infrastructure remaining strong [41][42][43]

Question 2: Blackwell Production and Supply Constraints
- Jensen Huang confirmed that Blackwell production is on track, with demand exceeding supply; the company is working with multiple partners to ramp production [48][49][50]
- Supply constraints are driven by the complexity of Blackwell systems, which involve multiple custom chips and configurations [51]

Question 3: Blackwell Ramp and Gross Margins
- Colette Kress explained that gross margins are expected to moderate to the low-70s during the Blackwell ramp, with a return to mid-70s margins as production scales [58]
- Jensen Huang highlighted the importance of Blackwell's performance per watt, which drives customer revenues [54]

Question 4: Inference Market Growth
- Jensen Huang expressed optimism about the growth of the inference market, with NVIDIA positioned as the largest inference platform due to its large installed base [43][76]

Question 5: Networking Business and Spectrum-X
- Colette Kress noted that networking revenue was down sequentially but is expected to grow in Q4, driven by demand for InfiniBand and Spectrum-X Ethernet for AI [84]

Question 6: Sovereign AI and Gaming Supply Constraints
- Colette Kress confirmed that sovereign AI demand remains strong, with growth opportunities in Europe and Asia-Pacific [86]
- Gaming supply constraints are due to strong demand and the company's focus on ramping Data Center products [87]

Question 7: Sequential Growth and China Business
- Jensen Huang emphasized that the company guides one quarter at a time and will comply with any new regulations under the new US administration [92][94]

Question 8: Compute Allocation in AI Ecosystem
- Jensen Huang discussed the current focus on pre-training foundation models, with post-training and inference scaling also growing in importance [97][98]