Supermicro Generative AI SuperCluster for Large Language Models


In the era of AI, a unit of compute is no longer measured by just the number of servers. Interconnected GPUs, CPUs, memory, and storage, orchestrated across multiple nodes in racks, form the building blocks of today's AI infrastructure. Sustaining optimal performance and efficiency requires high-speed, low-latency network fabrics, as well as carefully designed cooling technologies and power delivery tailored to each data center environment.

Supermicro’s SuperCluster solution provides foundational building blocks for rapidly evolving Generative AI and Large Language Models (LLMs).

SYS-421GE-TNHR2-LCC

With 256 NVIDIA HGX™ H100/H200 GPUs, 32 4U Liquid-cooled Systems

Datasheet

SYS-821GE-TNHR

With 256 NVIDIA HGX™ H100/H200 GPUs, 32 8U Air-cooled Systems

Datasheet

ARS-111GL-NHR

With 256 NVIDIA MGX™ GH200 Grace™ Hopper Superchip Systems

Datasheet

Generative AI SuperCluster

 

The full turn-key data center solution accelerates time-to-delivery for mission-critical enterprise use cases and eliminates the complexity of building a large cluster, which was previously achievable only through the intensive design tuning and time-consuming optimization of supercomputing.

Highest Density Datasheet

 

With 32 NVIDIA HGX H100/H200 8-GPU, 4U Liquid-cooled Systems (256 GPUs) in 5 Racks

Key Features

  • Doubling compute density through Supermicro’s custom liquid-cooling solution, with up to 40% reduction in electricity costs for the data center
  • 256 NVIDIA H100/H200 GPUs in one scalable unit
  • 20TB of HBM3 with H100 or 36TB of HBM3e with H200 in one scalable unit (see the capacity check after this list)
  • 1:1 networking to each GPU to enable NVIDIA GPUDirect RDMA and GPUDirect Storage for training large language models with up to trillions of parameters
  • Customizable AI data pipeline storage fabric with industry leading parallel file system options
  • NVIDIA AI Enterprise Software Ready
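
The aggregate HBM figures in the list above follow from simple arithmetic. Below is a minimal Python sketch of that check; the per-GPU capacities (80 GB of HBM3 per H100 SXM, 141 GB of HBM3e per H200) are NVIDIA's published specifications and are used here as assumptions rather than figures taken from this page.

    # Sanity check of the aggregate HBM capacity in one scalable unit.
    GPUS_PER_NODE = 8                    # NVIDIA HGX H100/H200 8-GPU baseboard per system
    NODES = 32                           # 32 systems in one scalable unit
    TOTAL_GPUS = GPUS_PER_NODE * NODES   # 256 GPUs

    # Assumed per-GPU memory (published NVIDIA specs, not from this page)
    HBM_PER_GPU_GB = {"H100 (HBM3)": 80, "H200 (HBM3e)": 141}

    for gpu, gb in HBM_PER_GPU_GB.items():
        total_tb = TOTAL_GPUS * gb / 1000        # decimal terabytes
        print(f"{gpu}: {TOTAL_GPUS} x {gb} GB = {total_tb:.1f} TB aggregate HBM")
    # -> roughly 20.5 TB for H100 and 36.1 TB for H200, matching the
    #    ~20TB / ~36TB figures quoted above.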

Compute Node: SYS-421GE-TNHR2-LCC

Rack-Scale Solution: SRS-48UGPU-AI-LCSU
 

Proven Design Datasheet

 

With 32 NVIDIA HGX H100/H200 8-GPU, 8U Air-cooled Systems (256 GPUs) in 9 Racks

Key Features

  • Proven industry leading architecture for large scale AI infrastructure deployments
  • 256 NVIDIA H100/H200 GPUs in one scalable unit
  • 20TB of HBM3 with H100 or 36TB of HBM3e with H200 in one scalable unit
  • 1:1 networking to each GPU to enable NVIDIA GPUDirect RDMA and GPUDirect Storage for training large language models with up to trillions of parameters
  • Customizable AI data pipeline storage fabric with industry leading parallel file system options
  • NVIDIA AI Enterprise Software Ready

Compute Node: SYS-821GE-TNHR

Rack-Scale Solution: SRS-48UGPU-AI-ACSU
 

Cloud-Scale Inference Datasheet

 

With 256 NVIDIA GH200 Grace Hopper Superchips, 1U MGX Systems in 9 Racks

Key Features

  • Unified GPU and CPU memory for cloud-scale high volume, low-latency, and high batch size inference
  • 1U Air-cooled NVIDIA MGX Systems in 9 Racks, 256 NVIDIA GH200 Grace Hopper Superchips in one scalable unit
  • Up to 144GB of HBM3e + 480GB of LPDDR5X, enough capacity to fit a 70B+ parameter model in one node (see the memory sketch after this list)
  • 400Gb/s InfiniBand or Ethernet non-blocking networking connected to spine-leaf network fabric
  • Customizable AI data pipeline storage fabric with industry leading parallel file system options
  • NVIDIA AI Enterprise Software Ready
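
To see why a 70B+ parameter model fits in a single GH200 node, a rough weight-memory calculation helps. The Python sketch below uses standard data-type sizes and the memory capacities quoted in the list above; it counts model weights only and ignores KV cache and activation memory, so it is an illustrative check rather than a sizing guide.

    # Rough check: do the weights of a 70B-parameter model fit in one GH200 node?
    HBM3E_GB = 144                       # GH200 GPU memory (per the bullet above)
    LPDDR5X_GB = 480                     # Grace CPU memory, unified with the GPU via NVLink-C2C
    UNIFIED_GB = HBM3E_GB + LPDDR5X_GB   # 624 GB of addressable memory

    PARAMS_BILLION = 70
    BYTES_PER_PARAM = {"FP16/BF16": 2.0, "FP8": 1.0, "INT4": 0.5}

    for dtype, nbytes in BYTES_PER_PARAM.items():
        weights_gb = PARAMS_BILLION * nbytes     # 1e9 params * bytes, expressed in GB
        print(f"{dtype}: {weights_gb:.0f} GB of weights | "
              f"fits in 144 GB HBM3e: {weights_gb <= HBM3E_GB} | "
              f"fits in 624 GB unified memory: {weights_gb <= UNIFIED_GB}")
    # Weights alone leave headroom in the 624 GB unified space for KV cache and
    # activations, which is what supports high-batch-size inference on one node.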

Compute Node: ARS-111GL-NHR

Rack-Scale Solution: SRS-MGX256-SU-001

Rack-Scale Liquid Cooling Solutions: Superior Cooling, Density, and Sustainability

 

Supermicro’s liquid cooling solutions can reduce OPEX by up to 40% and allow data centers to run more efficiently with a lower PUE. Supermicro has proven liquid-cooling deployments at scale, enabling data center operators to deploy the latest and highest-performance CPUs and GPUs.

Up to 40% REDUCTION in Electricity Costs for the Entire Data Center
Up to 55% REDUCTION in Data Center Server Noise
Up to 89% REDUCTION in Electricity Costs of the Server Cooling Infrastructure
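
The electricity-cost figures above are driven largely by PUE (Power Usage Effectiveness: total facility power divided by IT power). The Python sketch below shows how a lower PUE translates into a whole-facility cost reduction for a fixed IT load. The PUE values, IT load, and tariff are illustrative assumptions chosen to reproduce the "up to 40%" headline, not measured figures from any specific deployment.

    # Illustrative PUE-to-cost relationship; all inputs below are assumptions.
    def annual_energy_cost(it_load_kw, pue, usd_per_kwh=0.12):
        """Total facility energy cost per year for a fixed IT load at a given PUE."""
        facility_kw = it_load_kw * pue          # PUE = facility power / IT power
        return facility_kw * 24 * 365 * usd_per_kwh

    IT_LOAD_KW = 1000                  # hypothetical 1 MW of IT load
    AIR_COOLED_PUE = 1.75              # assumed conventional air-cooled facility
    LIQUID_COOLED_PUE = 1.05           # assumed direct-liquid-cooled facility

    air = annual_energy_cost(IT_LOAD_KW, AIR_COOLED_PUE)
    liquid = annual_energy_cost(IT_LOAD_KW, LIQUID_COOLED_PUE)
    print(f"Air-cooled:    ${air:,.0f}/year")
    print(f"Liquid-cooled: ${liquid:,.0f}/year")
    print(f"Reduction:     {1 - liquid/air:.0%}")   # 40% with these assumed PUEs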