H800 GPU
DeepSeek Trained on Smuggled Blackwell Chips? Nvidia Responds
Xin Lang Cai Jing· 2025-12-11 00:03
Core Insights
- Nvidia responded to reports that Chinese AI startup DeepSeek is using smuggled Blackwell chips for its upcoming models, amid US export bans on these advanced chips to China [1][2][3]
- The relationship between Nvidia and China has become a political focal point in the US, with President Trump stating that Nvidia can sell its H200 chips to approved customers in China, provided the US receives 25% of the sales [3]

Group 1: Nvidia's Position
- Nvidia says it has not seen evidence of "ghost data centers" allegedly built to deceive the company and its OEM partners, and it will investigate any leads on smuggling activity [1][3]
- Nvidia has been a major beneficiary of the AI boom thanks to its GPUs, which are critical for training models and running large workloads [1][3]

Group 2: DeepSeek's Developments
- DeepSeek launched an inference model named R1 in January, which quickly topped app store and industry rankings, surprising the US tech community [2][4]
- Analysts estimate that R1's development cost is significantly lower than that of comparable US models [4]
- In August, DeepSeek hinted at the imminent availability of next-generation chips to support its AI models, stating that its V3 model was trained on Nvidia H800 GPUs, although some observers believe DeepSeek may possess more advanced computing capabilities [2][4]
Kimi's Yang Zhilin Says "Training Costs Are Hard to Quantify," Will Stick With Open-Source Strategy
Di Yi Cai Jing· 2025-11-11 10:35
Core Insights
- Kimi, an AI startup, has released its latest open-source model, Kimi K2 Thinking, with a reported training cost of $4.6 million, lower than DeepSeek V3's reported $5.6 million and far below what OpenAI reportedly spent training GPT-3 [1][2]
- The company emphasizes ongoing model updates and improvements, focusing on absolute performance while addressing user concerns about inference length and performance discrepancies [1]
- Kimi's strategy is to maintain an open-source approach and advance the Kimi K2 Thinking model while avoiding direct competition with major players like OpenAI through innovative architecture and cost control [2][4]

Model Performance and Market Position
- In the latest OpenRouter model usage rankings, five Chinese open-source models, including Kimi's, rank among the top twenty, indicating a growing presence in the international market [2]
- Kimi's current model can only be accessed via API due to platform limitations; the team trains on H800 GPUs with InfiniBand interconnects, despite having fewer resources than US labs with access to high-end GPUs [2]
- The company plans to balance text model development with multi-modal model advancements, aiming to establish a differentiated advantage in the AI landscape [4]
The Golden Age Is Ending: Nvidia's Stock Faces a Steep Decline
Mei Gu Yan Jiu She· 2025-03-26 12:45
Core Viewpoint
- Increasing evidence suggests that AI training does not necessarily require high-end GPUs, which may slow Nvidia's future growth [2][5][14]

Group 1: Nvidia's Financial Performance
- Nvidia's data center business has grown strongly, with revenue up 216% in FY2024 and 142% in FY2025 [2]
- Revenue growth is projected at 63% for FY2026, driven by a 70% increase in the data center segment alongside a recovery in the gaming and automotive markets [8][9]
- Total revenue is expected to reach $43 billion in Q1 FY2026, plus or minus 2% [6]

Group 2: Competitive Landscape
- Ant Group's research indicates that its 300B-parameter MoE LLM can be trained on lower-performance GPUs at 20% lower cost, posing a significant risk to Nvidia's market position [2][5]
- Major hyperscalers such as Meta are developing their own AI training chips to reduce reliance on Nvidia GPUs, with Meta's internal chip testing marking a critical milestone [5][14]
- Custom silicon from companies like Google and Amazon is emerging as an attractive alternative for AI training and inference [5]

Group 3: Long-term Growth Challenges
- Nvidia's high-end GPU growth may face mounting resistance as AI shifts into the inference phase and lower-cost models proliferate [14]
- Analysts have cut growth expectations for Nvidia's data center business, projecting a slowdown to 30% growth in FY2027 and a further decline to 20% annually from FY2028 to FY2030 [8][9]
- Operating expenses are expected to grow 19% annually from FY2028 to FY2030, pressuring profit margins [9]

Group 4: Capital Expenditure Trends
- Major tech companies are sharply increasing capital expenditures, with 46% year-over-year growth projected for 2025, which may boost Nvidia GPU demand in the short term [12][13]
- Nvidia has established its own custom ASIC division, potentially mitigating risks from competitors like Broadcom and Marvell [14]
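The analyst trajectory above (70% data center growth in FY2026, slowing to 30% in FY2027 and 20% per year through FY2030) can be made concrete with a small compounding calculation. This is an illustrative sketch of the arithmetic only, not from the article: the `project_revenue` helper is my own, and the base is indexed to 100 rather than an actual dollar figure.

```python
# Illustrative only: compound the growth rates the analysts cite for
# Nvidia's data center segment. The function name and the index base of
# 100 are assumptions for demonstration, not figures from the article.

def project_revenue(base: float, growth_rates: list[float]) -> list[float]:
    """Apply successive year-over-year growth rates to a base value."""
    out = []
    for rate in growth_rates:
        base *= 1 + rate          # compound one fiscal year of growth
        out.append(round(base, 1))
    return out

# Growth path from the summary: 70% (FY2026), 30% (FY2027),
# then 20% per year for FY2028-FY2030.
rates = [0.70, 0.30, 0.20, 0.20, 0.20]
print(project_revenue(100.0, rates))
# → [170.0, 221.0, 265.2, 318.2, 381.9]
```

Even under these slowing rates, the indexed segment still roughly quadruples over five years; the bearish thesis rests on the deceleration relative to the 142-216% growth of FY2024-FY2025, not on outright decline.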