NVIDIA BlueField DPUs
HPE Transforms Distributed AI Factories Into Intelligent AI Grid Powered by NVIDIA
Businesswire· 2026-03-17 18:00
Core Insights
- HPE has launched the HPE AI Grid, an end-to-end solution designed to connect AI factories and distributed inference clusters, enhancing real-time connectivity for applications such as retail personalization and predictive maintenance [1][4]

Group 1: Product Overview
- The HPE AI Grid is built on NVIDIA's reference architecture, enabling service providers to deploy thousands of distributed inference sites as a unified intelligent system [1][3]
- The solution offers predictable, ultra-low-latency performance, zero-touch provisioning, and automated security through integrated orchestration [2][3]
- HPE AI Grid includes HPE ProLiant Compute edge and rack servers equipped with NVIDIA RTX PRO 6000 Blackwell GPUs and other advanced networking components [5]

Group 2: Industry Applications
- The HPE AI Grid addresses service provider use cases including retail personalization, predictive maintenance, localized edge inference in healthcare, and carrier-grade AI services, all requiring low-latency connectivity [4][6]
- Comcast has initiated AI field trials using HPE ProLiant servers and NVIDIA GPUs to enhance real-time edge AI inferencing for small businesses [6]

Group 3: Strategic Partnerships and Industry Reactions
- HPE and NVIDIA have collaborated on TELUS' Sovereign AI Factory, recognized as Canada's fastest supercomputer, facilitating innovation at scale [7]
- CityFibre is exploring the potential of HPE AI Grid to support distributed AI inferencing and enhance service delivery through its fiber network [7]

Group 4: Financial Services and Support
- HPE Financial Services is promoting the adoption of AI-ready networks by offering 0% financing on networking AIOps software and cash savings on AI-ready networking leases [8]
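The article does not say how HPE AI Grid chooses among its thousands of distributed inference sites. Purely as an illustrative sketch of the low-latency routing idea described above, selecting the lowest-latency healthy site could look like the following (all names, such as `Site` and `pick_site`, are hypothetical and not part of any HPE API):

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    rtt_ms: float   # measured round-trip latency to the site
    healthy: bool   # result of the last health probe

def pick_site(sites):
    """Return the healthy site with the lowest measured latency."""
    candidates = [s for s in sites if s.healthy]
    if not candidates:
        raise RuntimeError("no healthy inference site available")
    return min(candidates, key=lambda s: s.rtt_ms)

sites = [
    Site("edge-retail-01", rtt_ms=4.2, healthy=True),
    Site("edge-retail-02", rtt_ms=2.8, healthy=False),  # lowest latency, but down
    Site("regional-dc-01", rtt_ms=11.5, healthy=True),
]
print(pick_site(sites).name)  # edge-retail-01
```

A real orchestrator would of course also weigh load, capacity, and policy, not latency alone.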
Arrcus Inference Network Fabric (AINF) Announces Integration With NVIDIA Dynamo Framework, NVIDIA BlueField DPUs and NVIDIA Spectrum Networking to Significantly Improve the Delivery of the Next Generation of Physical and Agentic AI Applications
Businesswire· 2026-03-16 14:30
Core Insights
- Arrcus has announced the integration of its Inference Network Fabric (AINF) with NVIDIA's AI technologies, aiming to enhance the delivery of next-generation Physical and Agentic AI applications [1][2][3]

Group 1: Integration and Benefits
- The combined solution will enable intelligent and secure traffic routing for faster application responses, lower latency, improved power efficiency, and reduced cost per inference [1][2]
- AINF, powered by NVIDIA, creates a secure and policy-aware inference fabric that spans edge, data center, and cloud environments, allowing operators to deliver real-time AI services at global scale [2][3]

Group 2: Infrastructure and Requirements
- As AI transitions from centralized training to globally distributed inference, infrastructure demands are changing rapidly, necessitating secure, multi-site connectivity and efficient GPU utilization [3][4]
- The integration addresses the need for intelligent model resolution, priority classification, and policy enforcement for real-time AI applications such as robotics, autonomous systems, and video analytics [3][4]

Group 3: Technical Features
- AINF acts as a central conductor for agentic AI, using intelligent LLM classifiers on NVIDIA infrastructure to optimize model selection and request routing in real time [6][7]
- The integration with NVIDIA BlueField-3 DPUs secures inference traffic across locations, enabling line-rate encryption at up to 400 Gb/s without CPU overhead [8]

Group 4: Market Impact and Partnerships
- Lightstorm, an Arrcus partner, is leveraging the AINF solution to enable hyperscalers and enterprises in the Asia-Pacific region to deploy real-time, large-scale AI inferencing [4][5]
- The partnership aims to provide purpose-built networking solutions for distributed AI inference and training workloads, combining Arrcus' intelligent routing with Lightstorm's infrastructure capabilities [17]
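The article describes AINF performing priority classification and policy-driven model routing but gives no implementation detail. The toy sketch below shows only the shape of that idea: a keyword rule stands in for the LLM classifier the article mentions, and every name (`POLICY`, `classify`, `route`) is hypothetical, not Arrcus or NVIDIA code:

```python
# Toy stand-in for priority classification and policy-based model routing.
# A real deployment would use an LLM classifier; a keyword rule plays that
# role here. All names and tiers are invented for illustration.

POLICY = {
    # priority class -> (model tier, latency budget in ms)
    "realtime": ("edge-small-model", 10),
    "interactive": ("regional-medium-model", 100),
    "batch": ("datacenter-large-model", 5000),
}

def classify(request: str) -> str:
    """Toy classifier: assign a priority class to an inference request."""
    text = request.lower()
    if "robot" in text or "autonomous" in text or "video" in text:
        return "realtime"
    if "chat" in text or "agent" in text:
        return "interactive"
    return "batch"

def route(request: str):
    """Resolve a request to a model tier and latency budget per policy."""
    priority = classify(request)
    model, budget_ms = POLICY[priority]
    return priority, model, budget_ms

print(route("video analytics frame from camera 7"))
# ('realtime', 'edge-small-model', 10)
```

The point of the sketch is the separation of concerns: classification decides the priority class, and a declarative policy table, not the classifier, decides where the request runs.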
VAST Data Federal and Leidos Introduce Agentic Cybersecurity with NVIDIA AI
Globenewswire· 2025-10-28 18:30
Core Insights
- VAST Data Federal has announced a strategic partnership with Leidos to create a scalable model for cyber defense, leveraging AI technologies to enhance security operations for U.S. public sector agencies [1][3]
- The collaboration aims to address the overwhelming volume of security events generated by global enterprises and federal agencies, currently numbering in the trillions, by utilizing advanced AI and data processing capabilities [2][6]

Company Overview
- VAST Data Federal is a subsidiary of VAST Data focused on delivering AI Operating Systems to defense, intelligence, and civilian agencies in the U.S. public sector [5][7]
- The VAST AI Operating System integrates foundational data and compute services, enabling agencies to deploy intelligent systems and automate complex workflows [5][8]

Technology and Solutions
- The partnership will utilize NVIDIA AI Enterprise software, NVIDIA Morpheus, and BlueField Data Processing Units (DPUs) to enhance real-time inspection and inference capabilities [2][4]
- Key features of the solution include immediate visibility across hot and historical data, agentic triage and response, and accelerated data analytics for cyber investigations [6]

Operational Benefits
- The collaboration aims to reduce alert fatigue and improve response times by automating security processes and enabling policy-driven actions [2][3]
- The solution is designed to lower operational costs by eliminating data storage constraints and allowing retention of more telemetry data [6]
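"Agentic triage" and "policy-driven actions" are named above but not specified. As a minimal sketch of the pattern only, not VAST, Leidos, or Morpheus code, a triage policy that auto-closes routine noise and escalates or contains the rest could look like this (thresholds and field names are invented):

```python
# Illustrative policy-driven alert triage. The goal mirrors the article's
# claim: automate routine events to cut alert fatigue, and reserve humans
# for what needs review. All fields and thresholds are hypothetical.

def triage(alert: dict) -> str:
    """Map an alert to an action based on severity and classifier confidence."""
    severity = alert["severity"]      # 1 (low) .. 10 (critical)
    confidence = alert["confidence"]  # detection confidence, 0.0 .. 1.0
    if severity >= 8 and confidence >= 0.9:
        return "isolate-host"         # automated containment
    if severity >= 5:
        return "escalate-to-analyst"  # human review
    return "auto-close"               # routine noise

alerts = [
    {"id": 1, "severity": 9, "confidence": 0.95},
    {"id": 2, "severity": 6, "confidence": 0.70},
    {"id": 3, "severity": 2, "confidence": 0.99},
]
actions = {a["id"]: triage(a) for a in alerts}
print(actions)
# {1: 'isolate-host', 2: 'escalate-to-analyst', 3: 'auto-close'}
```

In this shape, only the policy function changes as agency rules evolve; the pipeline feeding it alerts stays fixed.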
NVIDIA and Storage Industry Leaders Unveil New Class of Enterprise Infrastructure for the Age of AI
Globenewswire· 2025-03-18 19:24
Core Insights - NVIDIA has introduced the NVIDIA AI Data Platform, a customizable reference design aimed at building AI infrastructure for enterprise storage platforms that support demanding AI inference workloads [1][12] - The platform enables storage providers to create AI query agents that enhance data insights generation in near real-time using NVIDIA's AI Enterprise software [2][5] Group 1: Infrastructure and Technology - The NVIDIA AI Data Platform allows certified storage providers to optimize their infrastructure with NVIDIA Blackwell GPUs, BlueField DPUs, and Spectrum-X networking to enhance AI reasoning workloads [3][6] - BlueField DPUs can deliver up to 1.6 times higher performance than traditional CPU-based storage while reducing power consumption by up to 50%, achieving over 3 times higher performance per watt [6] - Spectrum-X networking can accelerate AI storage traffic by up to 48% compared to traditional Ethernet through adaptive routing and congestion control [6] Group 2: Collaboration and Industry Impact - Leading storage providers such as DDN, Dell Technologies, and IBM are collaborating with NVIDIA to develop customized AI data platforms that leverage enterprise data for complex query responses [4][13] - Jensen Huang, CEO of NVIDIA, emphasized the importance of data as a key resource in the AI era, stating that the collaboration aims to build infrastructure necessary for deploying and scaling agentic AI across hybrid data centers [5] Group 3: AI Query Agents and Capabilities - AI query agents developed using the NVIDIA AI-Q Blueprint can access and process various data types, including structured, semi-structured, and unstructured data from multiple sources [8] - The AI-Q Blueprint utilizes NVIDIA NeMo Retriever microservices to accelerate data extraction and retrieval by up to 15 times on NVIDIA GPUs [7]