Core Insights
- Nvidia is planning to launch a new processor aimed at helping clients like OpenAI build faster and more efficient AI systems, focusing on AI inference computing to optimize the response capabilities of AI models [1][2]

Group 1: Product Development
- The new system being developed by Nvidia is designed specifically for inference computing and is expected to significantly improve the efficiency of AI models handling complex tasks [2][3]
- The new platform is anticipated to be officially unveiled at the Nvidia GTC developer conference next month in San Jose and will use chips designed by the startup Groq [2][3]

Group 2: Client Needs and Market Dynamics
- OpenAI has expressed dissatisfaction with the response speed of Nvidia's existing hardware for certain types of queries, such as software development and AI interactions, and is seeking new hardware to cover approximately 10% of its inference computing needs [2][3]
- OpenAI had previously explored partnerships with chip startups such as Cerebras and Groq to accelerate its inference computing, but its discussions with Groq were cut short by Nvidia's recent $20 billion licensing agreement with Groq [2][3]
Nvidia reportedly set to launch a new chip to optimize AI processing speed