Red Hat Launches the llm-d Community, with NVIDIA and Google Cloud as Founding Contributors
Sina Technology · 2025-05-27 03:42

Group 1
- Red Hat has launched a new open-source project called llm-d to meet the large-scale inference demands of generative AI, collaborating with CoreWeave, Google Cloud, IBM Research, and NVIDIA [1][3]
- According to Gartner, by 2028 over 80% of data center workload accelerators will be deployed specifically for inference rather than training, indicating a shift in resource allocation [3]
- The llm-d project aims to integrate advanced inference capabilities into existing enterprise IT infrastructure, addressing the challenges posed by growing resource demands and potential bottlenecks to AI innovation [3]

Group 2
- The llm-d platform allows IT teams to meet the varied service demands of critical business workloads while maximizing efficiency and significantly reducing the total cost of ownership associated with high-performance AI accelerators [3]
- The project has garnered support from a coalition of generative AI model providers, AI accelerator pioneers, and major AI cloud platforms, reflecting deep industry collaboration to build large-scale LLM services [3]
- Key contributors to the llm-d project include CoreWeave, Google Cloud, IBM Research, and NVIDIA, with partners such as AMD, Cisco, Hugging Face, Intel, Lambda, and Mistral AI [3][4]

Group 3
- Google Cloud emphasizes the importance of efficient AI inference in deploying AI at scale to create value for users, highlighting its role as a founding contributor to the llm-d project [4]
- NVIDIA views the llm-d project as a significant addition to the open-source AI ecosystem, supporting scalable, high-performance inference as a key enabler of the next wave of generative and agentic AI [4]
- NVIDIA is collaborating with Red Hat and other partners to promote community engagement and industry adoption of the llm-d initiative, leveraging innovations such as NIXL to accelerate its development [4]