
Supermicro Launches Fluid-Optimized NVIDIA Blackwell Solutions


Supermicro’s SuperClusters with NVIDIA HGX™ B200 8-GPU, NVIDIA GB200 NVL4, and NVL72 systems deliver unprecedented AI compute density

Supermicro, Inc., a provider of complete IT solutions for cloud, AI/ML, storage, and 5G/Edge, announced its highest-performing SuperCluster, an end-to-end AI data center solution powered by the NVIDIA Blackwell platform for the era of trillion-parameter generative AI. The new SuperCluster significantly increases the number of NVIDIA HGX B200 8-GPU systems in a liquid-cooled rack, delivering far greater GPU compute density than Supermicro’s current industry-leading liquid-cooled NVIDIA HGX H100 and H200-based SuperClusters. Additionally, Supermicro is expanding its NVIDIA Hopper systems portfolio to address the rapid adoption of accelerated computing for HPC applications and mainstream enterprise AI.

“Supermicro has the expertise, speed of delivery, and capacity to deploy the largest liquid-cooled AI data center projects in the world, recently surpassing 100,000 GPUs deployed in collaboration with NVIDIA,” said Charles Liang, president and CEO of Supermicro. “These Supermicro SuperClusters reduce power requirements through direct liquid cooling (DLC) efficiency. We now have solutions that leverage the NVIDIA Blackwell platform. With our Building Block approach, we can quickly design servers with the NVIDIA HGX B200 8-GPU, which can be either liquid-cooled or air-cooled. Our SuperClusters deliver unprecedented density, performance, and efficiency, paving the way to even denser AI computing solutions in the future. Supermicro clusters leverage direct liquid cooling, resulting in higher performance, lower power consumption across the data center, and lower operational costs.”

Proven AI Performance at Scale: Supermicro NVIDIA HGX B200 Systems

The enhanced scalable SuperCluster unit is based on a rack-scale design with innovative vertical coolant distribution manifolds (CDMs), which enable a greater number of compute nodes in a single rack. Newly developed, efficient cold plates and an advanced piping design further improve the efficiency of the liquid cooling system. A new coolant distribution unit (CDU) option is also available for large installations. Traditional air-cooled data centers can also benefit from the new NVIDIA HGX B200 8-GPU systems with a new air-cooled system chassis.

The new Supermicro NVIDIA HGX B200 8-GPU systems come with a number of upgrades over the previous generation. The new system features improvements in thermals and power delivery, supporting dual 500W Intel® Xeon® 6 processors (with DDR5 MRDIMMs at 8800 MT/s) or AMD EPYC™ 9005-series processors. A new air-cooled 10U form factor Supermicro NVIDIA HGX B200 system comes with a redesigned chassis and offers increased thermal headroom to accommodate eight 1000W TDP Blackwell GPUs. These systems are designed with a 1:1 GPU-to-NIC ratio and support NVIDIA BlueField®-3 SuperNICs or NVIDIA ConnectX®-7 NICs for scaling into a high-performance compute fabric. Additionally, two NVIDIA BlueField-3 data processing units (DPUs) per system streamline data processing to and from connected AI storage with high performance.


Supermicro Solutions with NVIDIA GB200 Grace Blackwell Superchips

Supermicro also offers solutions for all NVIDIA GB200 Grace Blackwell Superchips, including the recently announced NVIDIA GB200 NVL4 Superchip and the NVIDIA GB200 NVL72 single-rack exascale computer.

Supermicro’s NVIDIA MGX family of designs supports the NVIDIA GB200 Grace Blackwell NVL4 Superchip. Unlocking the future of converged HPC and AI, this Superchip delivers revolutionary performance by combining four NVIDIA NVLink™-connected Blackwell GPUs with two NVIDIA Grace™ CPUs via NVLink-C2C. The Superchip is compatible with Supermicro’s liquid-cooled NVIDIA MGX modular systems, doubling performance for scientific computing, graph neural network (GNN) training, and inference applications.

The NVIDIA GB200 NVL72 SuperCluster with Supermicro’s end-to-end liquid cooling solution delivers a single-rack exascale supercomputer with SuperCloud Composer (SCC) software, bringing comprehensive monitoring and management capabilities to liquid-cooled data centers. Its 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs are all connected via fifth-generation NVIDIA NVLink and NVLink Switch, effectively operating as a single high-performance GPU with a massive HBM3e memory pool and 130TB/s of total low-latency GPU communication bandwidth.
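As a rough sanity check on that figure (assuming the 130TB/s refers to aggregate bidirectional NVLink bandwidth across the rack), the per-GPU share works out to:

130 TB/s ÷ 72 GPUs ≈ 1.8 TB/s per GPU

which lines up with the 1.8TB/s of bidirectional bandwidth NVIDIA quotes for fifth-generation NVLink.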

Accelerated Computing Systems with NVIDIA H200 NVL

Supermicro’s 5U PCIe accelerated compute systems are now available with the NVIDIA H200 NVL, ideal for lower-power, air-cooled enterprise rack designs that require flexible configurations and acceleration for AI and HPC workloads of nearly any size. With up to four GPUs connected via NVIDIA NVLink, a 1.5x larger memory footprint, and a 1.2x bandwidth increase with HBM3e over the previous generation, the NVIDIA H200 NVL can fine-tune LLMs in just a few hours and delivers 1.7x faster LLM inference performance than the previous generation. NVIDIA H200 NVL also includes a five-year subscription to NVIDIA AI Enterprise, a cloud-native software platform for developing and deploying production AI.
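For context (assuming the baseline for these multipliers is the prior-generation NVIDIA H100 NVL, with 94GB of HBM3 and roughly 3.9TB/s of memory bandwidth), the stated ratios are consistent with the H200 NVL’s published 141GB of HBM3e at 4.8TB/s:

141 GB ÷ 94 GB ≈ 1.5x memory capacity, and 4.8 TB/s ÷ 3.9 TB/s ≈ 1.2x memory bandwidth.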

Supermicro’s X14 and H14 5U PCIe accelerated compute systems support up to two 4-way NVIDIA H200 NVL NVLink domains, for a total of 8 GPUs in a system, with 900GB/s GPU-to-GPU interconnect and a combined pool of 564GB of HBM3e memory per 4-GPU NVLink domain. The new PCIe accelerated compute system can support up to 10 PCIe GPUs and now also features the latest Intel Xeon 6 or AMD EPYC 9005-series processors, with flexible and versatile options for HPC and AI applications.
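The 564GB figure follows directly from the per-GPU capacity (assuming 141GB of HBM3e per H200 NVL GPU):

4 GPUs × 141 GB = 564 GB per 4-GPU NVLink domain.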

Source: PRNewswire