Fueling Extreme Performance: How Sandisk NVMe and DDN Power the Next Generation of AI and HPC
DDN· 2025-12-15 19:48
Hey everybody, Ethan Sloan with SanDisk. We're going to talk a little bit about SanDisk and DDN powering the next generation of AI and HPC. Nobody's talked AI here, so this will be the first one, right. A little bit about me: at SanDisk I represent enterprise solid state drives, helping to fuel the channel. The goal today is to answer questions with regards to who, what, where, why, when, and how. The who part is who am I, and I just introduced myself. Let's get into the what. Right. So ...
Unlocking AI and HPC Success with Google Cloud Managed Lustre
DDN· 2025-12-12 20:31
The key to unlocking AI and high-performance computing success is data. But as organizations globally raise their AI and HPC ambitions, they are learning that implementing the right data strategy is easier said than done. These challenges can prevent a proof of concept from translating into a successful production-grade deployment. Many of the most pressing data-related issues come down to fundamental challenges within the storage infrastructure. Put simply, AI-scale workloads require AI-scale storage. ...
Partner power is fueling the next leap in AI Infrastructure.
DDN· 2025-12-11 17:19
Partnerships & Ecosystem
- DDN's partner ecosystem, including GSIs, resellers, OEMs, NCPs, and hyperscalers, is driving advancements in AI infrastructure [1]
- A customer-driven approach with partners is enabling the achievement of ambitious goals [1]
- DDN's partner ecosystem accelerates go-to-market strategies [1]
- DDN's partner ecosystem helps customers scale AI faster [1]
Leadership & Strategy
- Doug Cook, VP at DDN, emphasizes the importance of partner power in the next leap of AI infrastructure [1]
AI token factories fail when data becomes the bottleneck
DDN· 2025-12-10 17:02
AI token factories collapse without data intelligence. DDN can help. Here's the business problem. Most AI token factories fail not because of GPUs, but because data becomes a bottleneck. Token costs skyrocket. Power goes through the roof. Time to revenue stalls. And here's the NCP reality for cloud: profitability isn't about selling GPU hours. It's about lowering token cost. DDN does that across training, inference, and the rack at the system level, helping NCPs attract more customers. GPUs create power. DDN turns t ...
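The token-cost point above is simple arithmetic: the same GPU hour produces cheaper tokens when the GPUs spend less time waiting on data. A minimal sketch of that relationship, where the dollar figure and throughput numbers are illustrative assumptions rather than figures from the post:

# Hedged sketch: cost per token = GPU-hour cost / tokens produced per hour.
# All numbers below are illustrative assumptions, not DDN or NCP data.

gpu_hour_cost = 4.0            # $/GPU-hour (assumed)
peak_tokens_per_hour = 3.6e6   # tokens/hour if the GPU is never data-starved (assumed)

def cost_per_million_tokens(utilization: float) -> float:
    """Dollars per million tokens at a given GPU utilization (0..1)."""
    tokens = peak_tokens_per_hour * utilization
    return gpu_hour_cost / tokens * 1e6

# Data-starved vs. well-fed GPU: same hourly price, very different token cost.
for util in (0.4, 0.9):
    print(f"utilization {util:.0%}: ${cost_per_million_tokens(util):.2f} per 1M tokens")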
Starving GPUs while the power meter spins? Fix the data bottleneck.
DDN· 2025-12-09 22:45
And every time, the memory bandwidth gets bigger and the data demands get bigger. And so the bottlenecks are when the GPUs are trying to run something but they're waiting for data in one way or the other, reading or writing. And if they're doing that, then they're wasting resources. They're wasting productivity at massive scale. You know, when we talk about data center efficiency, the new kind of phrase on everyone's lips when they're building data centers is tokens per watt. An ...
I/O Performance Benchmarking from University of Florida's Fourth Generation HiPerGator
DDN· 2025-12-09 00:02
HiPerGator Evolution & Infrastructure
- HiPerGator started in 2012, with expansions in 2013 and a third generation in 2020, including the first A100 SuperPOD at an academic institution [2]
- The fourth-generation HiPerGator was created in 2025, featuring new nodes and DDN storage, with orange storage for capacity and blue storage for high performance [3]
- The new fourth-generation blue storage is highlighted for its capabilities [4]
- The file system is 118 petabytes of all-flash storage, with two X3 servers for metadata plus three X2 servers and 12 storage trays [11]
DDN Storage & Performance
- DDN storage was chosen for its high performance, throughput, and ability to handle random I/O for applications like gene sequencing, energy simulation, and weather simulation [4][5]
- The system features high peak and sustained performance, supporting 60,000 CPU cores and 1,100 GPU cards [6]
- EXAScaler's data-on-metadata feature is heavily relied upon, serving data directly from metadata for files smaller than 64 KB [8][9]
- The fourth-generation blue storage achieved 944 kIOPS with a score of 273 on the IO500 benchmark's 10-client challenge [11] (a quick sanity-check sketch follows this summary)
University of Florida's AI Initiatives & Impact
- The University of Florida is recognized as an AI University in partnership with NVIDIA [2]
- HiPerGator supports diverse research areas including digital twin research, generative AI, robotics, and machine learning [14]
- In the past semester, 42 courses were taught to 1,800 students, with approximately 1,000 HiPerGator sponsors and 7,000 active users [15][16]
- HiPerGator supports an $85 million research portfolio, demonstrating a reasonable return on investment [17]
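The two IO500 figures quoted above can be cross-checked, since the overall IO500 score is the geometric mean of the bandwidth score (GiB/s) and the metadata score (kIOPS). A minimal sketch, assuming 273 is the overall score and 944 kIOPS is the metadata score (that mapping is an assumption, not stated in the talk):

# Hedged sketch: relate the quoted IO500 numbers.
# Assumption: 273 is the overall IO500 score and 944 kIOPS is the metadata score;
# the overall score is the geometric mean of bandwidth (GiB/s) and metadata (kIOPS).

overall_score = 273.0   # quoted IO500 score (assumed to be the overall score)
md_kiops = 944.0        # quoted 944 kIOPS (assumed to be the metadata score)

# overall = sqrt(bw * md)  =>  bw = overall**2 / md
implied_bw_gib_s = overall_score ** 2 / md_kiops
print(f"Implied bandwidth score: {implied_bw_gib_s:.1f} GiB/s")  # roughly 79 GiB/s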
Accelerate Your HPC Workloads with Google Cloud Managed Lustre | Kirill Tropin
DDN· 2025-12-08 23:41
Google Cloud Managed Lustre Overview
- Google Cloud Managed Lustre is a fully managed service running on top of DDN EXAScaler, launched four and a half months ago [1][7]
- It addresses the need for high-throughput, low-latency storage in HPC environments to keep GPUs and CPUs efficiently fed with data [4][5]
- The service is integrated with Google Cloud services like DCS and GKE, offering easy data import/export from/to Google Cloud Storage [7][8]
Performance and Scalability
- Google Cloud Managed Lustre offers up to 1 TB/s of throughput with sub-millisecond latency and millions of IOPS [9]
- It scales from a starting size of 9 TB up to 8 PB [9]
- Performance tiers range from 125 MB/s per TB to 1,000 MB/s per TB, catering to different throughput needs [15] (a throughput sketch follows this summary)
Customer Benefits and Use Cases
- Customers have experienced significant performance improvements; one customer, Resemble AI, achieved full GPU saturation and 6x faster performance compared to other storage solutions [10]
- Sony Honda Mobility's AFEELA division saw a 3x performance improvement compared to their previous storage solution [17]
- Key use cases include KV cache, multimodal training, and checkpointing, all requiring low latency and high throughput [11][13][14]
Partnership with DDN
- Google partnered with DDN (DataDirect Networks) due to their mature, reliable EXAScaler product with a rich feature set [6]
- The partnership aims to provide a fully managed solution, relieving customers of storage management burdens [6]
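The performance-tier bullet above is capacity-times-tier arithmetic: aggregate throughput scales with instance size. A minimal sketch, where the 100 TB instance size and the intermediate tiers are illustrative assumptions (the talk quotes only the 125 and 1,000 MB/s-per-TB endpoints):

# Hedged sketch: aggregate throughput of a Managed Lustre instance,
# estimated as capacity (TB) x performance tier (MB/s per TB).
# The example capacity and the intermediate tier values are assumptions.

TIERS_MB_PER_S_PER_TB = [125, 250, 500, 1000]  # quoted range endpoints: 125 and 1,000

def aggregate_throughput_gb_s(capacity_tb: float, tier_mb_s_per_tb: int) -> float:
    """Return estimated aggregate throughput in GB/s."""
    return capacity_tb * tier_mb_s_per_tb / 1000.0

# Example: a hypothetical 100 TB instance across the tier range.
for tier in TIERS_MB_PER_S_PER_TB:
    print(f"{tier:>5} MB/s per TB -> {aggregate_throughput_gb_s(100, tier):.1f} GB/s")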
AI Computing as the Foundation for Institutional Strategy | Preston Smith
DDN· 2025-12-08 23:36
Well, thank you very much for having me; glad to be able to speak to all of you again. So, I'm Preston Smith from Purdue University, and I'm going to talk about how our AI computing investment is part of the foundation for our institutional strategy at Purdue. Purdue right now has four major strategic pillar projects. You can see here a new campus in Indianapolis. If you're familiar with Indiana geography, Purdue is about halfway between Indianapolis and Chicago, and Indianapolis will be at th ...
Feeding the Future of AI | James Coomer
DDN· 2025-12-08 18:14
Inference Market & KV Cache Importance
- Inference spending is projected to surpass training spending, highlighting its growing significance in the AI landscape [2]
- The KV cache is crucial for understanding context in the prefill stage and augmenting tokens in the decode stage during inference [3][4]
- Utilizing DDN as a KV cache can potentially save hundreds of millions of dollars by retrieving previously computed contexts instead of recomputing them [5] (a conceptual sketch follows this summary)
Disaggregated Inference & Performance
- Disaggregated inference, running prefill and decode on different GPUs, improves efficiency and requires a global KV cache to share that state [6]
- DDN's fast storage delivers KV caches at extremely high speeds, leading to massive efficiency gains [9]
- DDN's throughput is reportedly 15 times faster than competitors, resulting in 20 times faster token output [10]
Productivity & Cost Efficiency
- Implementing a fast shared KV cache like DDN can lead to a 60% increase in output from GPU infrastructure [12]
- DDN aims to deliver a 60% increase in token output per watt, per data center, per GPU, and per capital dollar spent [13]
- Using DDN offers the strongest improvement in GPU productivity over the next five years by accelerating inference models [12]
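To make the KV-cache reuse idea above concrete, here is a minimal, hedged sketch of a shared prefix cache: if the KV state for a prompt prefix was already computed, fetch it from shared storage instead of rerunning prefill. The cache directory, key scheme, and run_prefill stub are illustrative assumptions, not DDN's actual API or implementation.

# Minimal sketch of KV-cache reuse during inference prefill.
# Assumptions: the cache directory, hashing scheme, and run_prefill()
# stub are illustrative; this is not DDN's API.
import hashlib
import pickle
from pathlib import Path

CACHE_DIR = Path("./shared_kv_cache")  # stand-in for a shared high-speed filesystem mount

def cache_key(prompt_prefix: str) -> Path:
    """Map a prompt prefix to a cache file via a content hash."""
    return CACHE_DIR / hashlib.sha256(prompt_prefix.encode()).hexdigest()

def run_prefill(prompt_prefix: str) -> dict:
    """Placeholder for the expensive prefill pass that builds KV tensors."""
    return {"tokens": prompt_prefix.split(), "kv": "...computed tensors..."}

def get_kv_cache(prompt_prefix: str) -> dict:
    """Reuse previously computed KV state when available, else compute and store it."""
    path = cache_key(prompt_prefix)
    if path.exists():                      # cache hit: skip recomputation
        return pickle.loads(path.read_bytes())
    kv_state = run_prefill(prompt_prefix)  # cache miss: pay the prefill cost once
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    path.write_bytes(pickle.dumps(kv_state))
    return kv_state

if __name__ == "__main__":
    kv = get_kv_cache("System prompt shared across many requests...")
    print(len(kv["tokens"]), "prefix tokens served from cache or prefill")

The design point the talk makes is that when prefill and decode run on different GPUs, this cache has to be global (shared storage) rather than local to one GPU's memory, which is where a fast shared filesystem comes in.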