DDN, a global leader in AI and data intelligence, has launched DDN CORE, a new unified data engine built for the most demanding AI and HPC environments. The launch marks a key shift: a single, efficient foundation that supports HPC and AI workloads together.
For decades, DDN’s solutions have powered the world’s fastest supercomputers and advanced research institutions. With the launch of DDN CORE, that same high-performance architecture becomes the cornerstone of what the company terms the “AI Factory Era.” The system unifies data workflows for simulation, training, inference, and real-time generation, ensuring GPU utilization remains consistently high and infrastructure productivity is maximized.
“The bottleneck in AI isn’t compute anymore; it’s data,” said Alex Bouzari, CEO and Co-Founder at DDN. “DDN CORE gives organizations a single data foundation where HPC and AI operate together at full speed and scale. It’s how we turn infrastructure cost into intelligence ROI.”
“DDN CORE was engineered to eliminate idle GPUs,” added Sven Oehme, CTO, DDN. “By combining our expertise in parallel data systems with new intelligence-driven automation, CORE removes I/O latency, streamlines orchestration, and keeps every GPU working, not waiting.”
Tackling the Data Challenge in AI Infrastructure
Organizations are investing over US$180 billion annually into AI infrastructure. Yet fragmentation across systems for training, inference, and data preparation often leaves compute resources idle. At the same time, global data center power consumption is projected to double to 1,000 TWh, roughly equivalent to the UK’s annual electricity consumption, a drain exacerbated by I/O inefficiencies and disjointed architectures.
DDN CORE confronts these hurdles head-on. Rather than supporting separate AI and HPC stacks, it replaces them with a unified data engine engineered to move data at the speed of compute, ensuring every GPU cycle and every watt of power is optimally utilized.
One Engine, All Workloads: The Core Innovation
At its heart, DDN CORE merges two of DDN’s flagship technologies, EXAScaler® and Infinia™, into a single, software-defined high-performance data fabric. This unified layer supports the entire AI lifecycle: simulation, training, inference, and retrieval-augmented generation (RAG).
Unlike a conventional storage upgrade, CORE is an intelligent, software-defined system designed to deliver uncompromising performance.
Key Performance Highlights
- Unified Data Plane: Offers HPC-level consistency and parallel throughput across on-premises, hybrid, and sovereign cloud environments.
- Training Acceleration: Enables up to 15× faster checkpointing and 4× faster model loading, supporting over 99% GPU utilization in production AI workloads.
- Inference & RAG Optimization: Through integrated caching and token reuse, delivers 25× faster response times and reduces cost per query by 60%.
- High Efficiency: Achieves up to 11× better performance-per-watt and cuts power consumption by 40%.
- Autonomous Operations: Built-in telemetry and self-tuning (via DDN Insight) provide continuous optimization, removing idle cycles without manual intervention.
Software-Defined Intelligence: The Foundation of DDN CORE
DDN CORE operates as a software-defined data engine, serving as the intelligence layer that unifies orchestration, performance, and observability across diverse compute and storage architectures. It runs natively on DDN’s own platforms, AI400X3 and Infinia, and is also certified for deployment on partner systems such as Supermicro and major cloud providers, ensuring consistent AI data performance in any environment.
At SC25, DDN is showcasing new systems optimized for CORE:
- AI400X3 Family: Includes the AI400X3i, SE-2, and SE-4 models, offering up to 140 GB/s read, 110 GB/s write, and 4 million IOPS in just 2U, a combination of raw performance and high power density.
- AI2200 (Infinia): Tailored for inference and RAG, doubling throughput and tokens-per-watt to serve hyperscale AI factories.
- Deployment Flexibility: DDN CORE supports both on-premises and cloud deployment, delivering consistent data performance across any infrastructure.
Validation and Ecosystem Synergies
DDN has validated CORE in collaboration with major players across the AI ecosystem:
- NVIDIA Integration: CORE is optimized for the NVIDIA AI Data Platform and AI Factory architectures, and has been tested on NVIDIA GB200 NVL72 GPUs, Spectrum-X Ethernet, and BlueField DPUs for superior throughput and scaling.
- Cloud Partnerships:
  - Google Cloud Managed Lustre: Demonstrated up to 70% faster training throughput and 15× faster checkpointing.
  - Oracle Cloud Infrastructure: Infinia delivers low-latency inference with dense caching for AI-optimized performance.
  - DDN Cloud Program: Through collaborations with CoreWeave, Nebius, and Scaleway, DDN now offers on-demand AI Factory capacity with consistent, production-grade performance.
Industry leaders have also commented on the importance of DDN’s innovations:
“AI-ready storage is no longer optional; it’s foundational to running at scale with data that moves at the speed of compute. Leveraging the NVIDIA AI Data Platform reference design, DDN powers AI factories with the performance, throughput, and scale needed to turn data into intelligence in real time.” – Justin Boitano, Vice President, Enterprise AI Products, NVIDIA
“By combining the scale of GCP with the performance of DDN CORE, we’re unlocking new levels of throughput for customers training models in days, not weeks.” – Sameet Agarwal, VP Engineering, Google Cloud

“DDN’s GPU-optimized storage technology, combined with the scalability and security of OCI, gives customers a cloud-native platform purpose-built for AI,” said Sachin Menon, Vice President, Cloud Engineering, Oracle. “Together, we are enabling enterprises to run complex AI and analytics workloads at scale, with predictable low latency and high throughput.”