Nvidia's New Processor
Nvidia reportedly set to launch a new chip to optimize AI processing speed
Huan Qiu Wang Zi Xun· 2026-02-28 08:33
Core Insights
- Nvidia is planning to launch a new processor aimed at helping clients like OpenAI build faster and more efficient AI systems, focusing on AI inference computing to optimize the response capabilities of AI models [1][2]

Group 1: Product Development
- The new system being developed by Nvidia is specifically designed for inference computing and is expected to significantly enhance the efficiency of AI models when handling complex tasks [2][3]
- The new platform is anticipated to be officially unveiled at the Nvidia GTC developer conference next month in San Jose and will utilize chips designed by the startup Groq [2][3]

Group 2: Client Needs and Market Dynamics
- OpenAI has expressed dissatisfaction with the response speed of Nvidia's existing hardware for specific types of queries, such as software development and AI interactions, and is seeking new hardware solutions to meet approximately 10% of its inference computing needs [2][3]
- OpenAI had previously explored collaboration with chip startups such as Cerebras and Groq to accelerate its inference computing capabilities, but discussions with Groq were cut short by Nvidia's recent $20 billion licensing agreement with Groq [2][3]
Accelerating AI computing: Nvidia's new chip to debut at next month's GTC conference
Ge Long Hui· 2026-02-28 06:03
Core Insights
- Nvidia is planning to launch a new processor designed to help clients like OpenAI develop faster and more efficient AI tools, which could reshape the AI competitive landscape [1]

Group 1: Product Development
- Nvidia is developing a new "inference" computing system that will change how AI models respond to user queries [1]
- The new platform is set to be unveiled next month at the GTC developer conference in San Jose [1]
- The system will utilize chips designed by the startup Groq [1]

Group 2: Client Relationships
- OpenAI is reportedly set to become one of the largest customers for this new processor, marking a significant win for Nvidia [1]