DDN Infinia
DDN Enterprise AI HyperPOD Captions
DDN· 2026-04-17 14:01
Hi, I'm Jason Brown from DDN. Today I want to show you how our Enterprise AI HyperPOD, built on Supermicro and accelerated by NVIDIA, brings enterprise AI infrastructure out of the lab and into production in minutes, not months. Organizations have turned to AI at scale, but many have hit a wall: months of proofs of concept, fragmented hardware stacks, underutilized GPUs, excessive power and cooling costs. Infrastructure built for conventional workloads simply doesn't deliver for modern AI workloads like inference a ...
The DDN Product Roadmap | DDN AI Data Summit 2026
DDN· 2026-04-09 19:32
My name is Omar. I'm about six months old at DDN; I focus on product and engineering. Today I'm going to take you through a few key themes that we're going to be talking about across 2026 and some of the new innovations we're going to be bringing. As you saw through the course of the presentation, the core innovations at DDN are mainly focused around AI-native companies. Here we're developing a lot of SDKs in order to give you native integr ...
MiTAC Accelerates Next-Gen AI with Turnkey Solutions and Flexible NVIDIA MGX at NVIDIA GTC 2026
Prnewswire· 2026-03-16 20:30
Core Insights
- MiTAC Computing Technology Corporation is showcasing its advancements in AI servers and turnkey solutions at NVIDIA GTC 2026, emphasizing "Enterprise AI, Flexible by Design" [1][2]

Group 1: Partnerships and Collaborations
- MiTAC collaborates with industry leaders such as NVIDIA, AMD, DDN, Intel, Micron, Rafay, Sandisk, and Solidigm to enhance accelerated computing and next-generation data centers [2]
- The partnership with Rafay enables a unified control plane for managing large-scale containerized environments, streamlining Kubernetes orchestration and automated dispatching of HPC and AI workloads [2][3]

Group 2: Product Offerings
- MiTAC's advanced Pod Management Solution is based on the NVIDIA MGX reference architecture, featuring a 4U, 2-socket server with dual AMD EPYC "Venice" processors and support for up to 8 double-width GPUs [4]
- An alternative configuration is available with a 4U platform based on MGX powered by dual Intel Xeon 6700P processors, integrating Micron 9550 NVMe SSDs or Solidigm D7-PS1010 drives [5]

Group 3: AI Infrastructure Solutions
- MiTAC and DDN have partnered to deliver the world's fastest turnkey AI inference and Retrieval-Augmented Generation (RAG) solutions, utilizing DDN Infinia for ultra-low-latency document retrieval [5][9]
- The solution architecture combines MiTAC's next-generation 4U AI platform with the R1917GC management server, forming a unified AI infrastructure that spans core, edge, and management layers [6][7]

Group 4: Storage and Data Management
- The GC68A-B8056 storage server is designed for high-density storage, featuring 24 DIMM slots for DDR5-4800 memory and 12 hot-swappable NVMe U.2 drive bays [8]
- This architecture supports the high-speed data ingestion and sustained throughput necessary for large-scale AI datasets and analytics workloads [8]

Group 5: Company Overview
- MiTAC Computing Technology Corporation specializes in energy-efficient server solutions, focusing on AI, HPC, cloud, and edge computing, with a commitment to quality and performance [10][11]
- The company provides customized platforms for hyperscale data centers and AI applications, leveraging advancements in AI and liquid cooling technologies [11]
Fast Object Storage for AI: Stop GPU Starvation & Boost Training Performance
DDN· 2026-01-12 16:47
[MUSIC PLAYING] Hi, I'm Nasir Wasim from DDN. Today, I'm going to talk about one of the most common problems you might recognize: your GPUs are sitting idle, waiting for data. We're going to talk about Fast Object, the high-performance object capability in DDN Infinia, designed specifically to eliminate the storage bottlenecks in the AI pipeline and keep GPUs busy. Fast Object equals Infinia. So let's start with what's actually happening in your infrastructure right now. Your organization invested ...
DDN One-Click RAG Pipeline Demo: DDN Infinia & NVIDIA NIMs
DDN· 2025-11-11 18:56
Welcome to this demonstration. Today we'll be showing how DDN enables a one-click, high-performance RAG pipeline for enterprise use. Our RAG pipeline solution is enterprise-class and easy to deploy and use in any cloud environment, whether AWS, GCP, Azure, any NCP cloud, and of course on-prem. Let's take a closer look at the architecture. This RAG pipeline solution is made up of several NVIDIA NeMo NIMs, or NVIDIA Inference Microservices, which host embedding, reranking, and LLM models, a Milvus vector database, a front-end ...
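The query path through the pipeline described above (embed the question, retrieve from the vector database, then generate a grounded answer) can be sketched as follows. This is an illustrative toy, not DDN's actual API: the embedding model, vector store, and LLM are stubbed with trivial in-memory functions, where a real deployment would call the NIM HTTP endpoints and the vector database.

```python
# Toy sketch of a RAG query flow: embed -> retrieve top-k -> generate.
# All three stages are hypothetical stand-ins for the real NIM services.

from math import sqrt

DOCS = {
    "doc1": "DDN Infinia provides S3-compatible object storage.",
    "doc2": "NVIDIA NIMs package models as inference microservices.",
    "doc3": "Kubernetes orchestrates containerized workloads.",
}

def embed(text: str) -> list[float]:
    """Stand-in for the embedding NIM: a normalized bag-of-letters vector."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query_vec: list[float], k: int = 2) -> list[str]:
    """Stand-in for a vector-database top-k search (cosine similarity)."""
    scored = sorted(
        DOCS,
        key=lambda d: -sum(a * b for a, b in zip(query_vec, embed(DOCS[d]))),
    )
    return scored[:k]

def generate(query: str, context: list[str]) -> str:
    """Stand-in for the LLM NIM: would prompt the model with the context."""
    return f"Answer to {query!r} grounded in {len(context)} documents."

query = "object storage for AI"
hits = retrieve(embed(query))
print(generate(query, [DOCS[d] for d in hits]))
```

The reranking NIM mentioned in the demo would slot in between `retrieve` and `generate`, re-scoring the retrieved candidates with a cross-encoder before they reach the LLM.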
DDN Infinia on OCI: High-Performance AI Storage
DDN· 2025-11-11 18:56
Performance Overview
- DDN Infinia demonstrates excellent performance in Oracle Cloud Infrastructure (OCI) with a small six-node cluster [7]
- Achieved a consistent 5-millisecond Time To First Byte (TTFB), which is excellent for S3 object I/O [6]

Throughput Metrics
- Achieved approximately 30 GB/s of put throughput during object population [5]
- Each client and Infinia node processed puts at roughly 5 GB/s [5]
- Sustained approximately 37.5 GB/s of get throughput during the get benchmark [6]
- Load was evenly distributed across all clients and Infinia nodes at around 6.5 GB/s of throughput during get operations [6]

Infrastructure and Configuration
- The test used six BM.DenseIO.E5 compute instances as hosts for the Infinia cluster [2]
- Six BM.Standard.E5.192 instances with single 100 Gbps connections were used as clients to avoid networking bottlenecks [2]
- Only 32 of the 128 cores available in the DenseIO E5 instances were used for the Infinia software [2]
- DDN is investigating other OCI instance types to prevent overallocation of hardware [3]

Technology and Architecture
- The Infinia architecture provides data-management capabilities including data I/O paths, object and file querying, a scale-out KV store, always-on encryption, and data reduction [2]
- Infinia is fully software-defined and containerized, enabling it to run on physical or virtualized hardware with Intel, AMD, or Arm processors [2]
- Implements high-performance erasure coding, custom fault domains, and the ability to use both TLC and QLC flash [2]

Testing Methodology
- I/O generation was performed using Warp in distributed benchmarking mode to ensure a full mesh of I/O across all clients and Infinia cluster nodes [3]
- Parallel Warp was run across all six clients and six Infinia nodes during the put and get tests [4][5][6]

Disclaimer
- The information presented is for potential future integrations and is a tech preview [1]
- The overall capabilities, including the performance of this feature, can and will change [1]
- No timelines for delivering this capability should be inferred from this demo [1]
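The per-node figures quoted above follow directly from the aggregates: with six clients driving six Infinia nodes, throughput divides evenly across the cluster. A quick arithmetic check:

```python
# Sanity-check the benchmark numbers: aggregate throughput divided evenly
# across the six-node Infinia cluster gives the quoted per-node rates.

NODES = 6

put_aggregate_gbs = 30.0  # ~30 GB/s aggregate put throughput
get_aggregate_gbs = 37.5  # ~37.5 GB/s aggregate get throughput

put_per_node = put_aggregate_gbs / NODES
get_per_node = get_aggregate_gbs / NODES

print(f"put per node: {put_per_node:.2f} GB/s")  # ~5 GB/s, as quoted
print(f"get per node: {get_per_node:.2f} GB/s")  # 6.25 GB/s, the "around 6.5" figure
```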
What’s New and What’s Coming at DDN - Dr. James Coomer, DDN
DDN· 2025-09-18 15:11
DDN EXAScaler Product Features and Advantages
- DDN EXAScaler is a parallel file system designed for large-scale data processing, built to accelerate GPU traffic and raise GPU productivity, and applicable to life sciences, finance, and many other industries [1]
- By eliminating I/O wait time, the technology keeps GPUs continuously fed with data, accelerating model training and inference and increasing token generation speed [1]
- DDN's solutions aim to deliver maximum performance with minimal physical footprint, power draw, and network usage [1]
- DDN offers multiple flash configurations (TLC, QLC) and hybrid systems (HDD) to meet different customers' cost, performance, and capacity requirements, and supports mounting these different media types under the same mount point [1][2]
- The EXAScaler client is intelligent and aware of data placement, optimizing data access paths and improving efficiency [2]
- DDN provides a data reduction system that compresses data on the client side without impacting storage performance; data reduction ratios are typically 2-4x, and up to 50x for text and log data [2]
- DDN offers online upgrades, allowing upgrades while the system is running, which is critical for customers who must maintain continuous service [1][2]
- DDN provides the EMF analysis tool for comprehensive network testing, helping customers quickly find and resolve network problems and keep systems running stably [2]
- DDN EXAScaler supports access over multiple protocols, including S3, NFS, SMB, and the native parallel file system, and is compatible with open-source monitoring tools such as Prometheus and Grafana [2]
- DDN's monitoring can show which users, clients, or jobs are putting pressure on the file system, helping cloud service providers ensure fair data access [2]

DDN AI400X3 Product
- DDN introduced the AI400X3, designed for the NVIDIA Blackwell architecture, to meet the data storage and access demands created by rapidly advancing GPU technology [1]
- The AI400X3 delivers 150 GB/s of network throughput in a 2U footprint, along with 95 GB/s of checkpoint speed [1][3]
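The 2-4x (up to ~50x for text and logs) data-reduction figures above reflect a general property of compression: highly repetitive data shrinks dramatically, while already-random data does not. A rough stdlib illustration using zlib (not DDN's actual client-side codec, which is unspecified here):

```python
# Compare compression ratios for repetitive log-like text vs. random bytes.
# zlib is a stand-in; DDN's client-side compression mechanism is not public.

import os
import zlib

log_like = b"2026-04-17 14:01:02 INFO request ok latency=5ms\n" * 1000
random_like = os.urandom(len(log_like))

def ratio(data: bytes) -> float:
    """Original size divided by compressed size."""
    return len(data) / len(zlib.compress(data, 6))

print(f"log-like ratio: {ratio(log_like):.1f}x")    # repetitive -> very high
print(f"random ratio:   {ratio(random_like):.2f}x") # incompressible -> ~1x
```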
Ask the Experts: Turbocharge Performance with DDN Infinia on Oracle Cloud
DDN· 2025-09-11 15:32
The fastest AI isn't just about GPUs. It's about removing the I/O bottlenecks that slow your business objectives down. Discover how DDN Infinia and Oracle Cloud Infrastructure (OCI) are redefining AI performance. Join this Ask the Experts session to learn how Infinia's high-performance, S3-compatible storage, alongside Oracle's powerful and scalable cloud infrastructure, delivers ultra-low latency, massive throughput, and linear scalability for the most demanding AI workloads. What you'll learn: - The biggest challeng ...