SYS-221GE-NR

DP Intel 2U PCIe GPU Server with up to 4 NVIDIA H100, H100 NVL, L40S, or A100

  • High-density 2U GPU system with up to 4 NVIDIA® H100 PCIe GPUs
  • High-bandwidth GPU-to-GPU communication using NVIDIA® NVLink™
  • PCIe-based H100 NVL with NVLink support
  • 32 DIMM slots; up to 8TB (32x 256GB) 4800MT/s ECC DDR5
  • 7 PCIe 5.0 x16 FHFL slots
  • NVIDIA BlueField-3 Data Processing Unit support for the most demanding accelerated computing workloads
  • E1.S NVMe storage support
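As a quick sanity check on the memory figures above, the sketch below verifies that 32 DIMM slots of 256GB each yield the quoted 8TB maximum. The channel layout (8 memory channels per socket, 2 DIMMs per channel) is an assumption based on typical 4th Gen Xeon platforms, not something stated in this spec.

```python
# Sanity-check of the quoted memory capacity (spec: 32 slots, 256 GB DIMMs).
DIMM_SLOTS = 32
DIMM_SIZE_GB = 256
SOCKETS = 2
CHANNELS_PER_SOCKET = 8  # assumption for 4th Gen Xeon, not stated in the spec

max_memory_tb = DIMM_SLOTS * DIMM_SIZE_GB / 1024
dimms_per_channel = DIMM_SLOTS // (SOCKETS * CHANNELS_PER_SOCKET)

print(f"Max memory: {max_memory_tb:.0f} TB")            # 8 TB
print(f"DIMMs per channel (DPC): {dimms_per_channel}")  # 2DPC
```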
Datasheet
Product Specification
Product SKUs: SuperServer SYS-221GE-NR (black front & silver body)
Motherboard: Super X13DEH

Processor
CPU: Dual Socket E (LGA-4677); 4th Gen Intel® Xeon® Scalable Processors
Core Count: Up to 56C/112T; up to 112.5MB cache per CPU
Note: Supports up to 350W TDP CPUs (air cooled)

GPU
Supported GPU: NVIDIA PCIe: H100, H100 NVL, L40S, L40, A100
Max GPU Count: Up to 4 double-width GPUs
CPU-GPU Interconnect: PCIe 5.0 x16 CPU-to-GPU interconnect
GPU-GPU Interconnect: NVIDIA® NVLink™ Bridge (optional)

System Memory
Slot Count: 32 DIMM slots
Max Memory (2DPC): Up to 4TB 5600MT/s ECC DDR5
Memory Voltage: 1.1V

On-Board Devices
Chipset: Intel® C741
Network Connectivity: 1x 10GbE Base-T with NVIDIA ConnectX®-7 or BlueField®-3 DPU

Input / Output
Video: 1 VGA port

System BIOS
BIOS Type: AMI 32MB SPI Flash EEPROM

Management
Power Configurations: ACPI Power Management; power-on mode for AC power recovery

Security
Hardware: Trusted Platform Module (TPM) 2.0

PC Health Monitoring
CPU: 8+4-phase switching voltage regulator; monitors for CPU cores, chipset voltages, and memory
FAN: Fans with tachometer monitoring; Pulse Width Modulated (PWM) fan connectors; status monitor for speed control
Temperature: Monitoring for CPU and chassis environment; thermal control for fan connectors

Chassis
Form Factor: 2U Rackmount
Model: CSE-GP201TS-R000NP

Dimensions and Weight
Height: 3.46" (88mm)
Width: 17.25" (438.4mm)
Depth: 35.43" (900mm)
Package: 11" (H) x 22.5" (W) x 45.5" (D)
Weight: Net: 67.5 lbs (30.6 kg); Gross: 86.5 lbs (39.2 kg)
Available Color: Black front & silver body

Front Panel
Buttons: Power On/Off button; System Reset button
LEDs: Hard drive activity LED; network activity LEDs; power status LED; system overheat & power fail LED

Expansion Slots
PCI-Express (PCIe): 7 PCIe 5.0 x16 FHFL slots

Drive Bays / Storage
Hot-swap: 8x E1.S hot-swap NVMe drive slots
M.2: 2 M.2 NVMe or 2 M.2 SATA3

System Cooling
Fans: 6 heavy-duty fans with optimal fan speed control
Air Shroud: 1 air shroud
Power Supply: 2000W Redundant Titanium Level power supplies

Operating Environment
Operating Temperature: 10°C ~ 35°C (50°F ~ 95°F)
Non-operating Temperature: -40°C to 60°C (-40°F to 140°F)
Operating Relative Humidity: 8% to 90% (non-condensing)
Non-operating Relative Humidity: 5% to 95% (non-condensing)

Enterprise AI Inferencing & Training System


2U MGX Systems   
Modular Building Block Platform Supports GPUs, CPUs, and DPUs

Benefits & Advantages

  • NVIDIA MGX reference design, enabling construction of a wide array of platforms and configurations
  • 7 PCIe 5.0 x16 slots in 2U with up to 4 PCIe FHFL DW GPUs and 3 NICs or DPUs
  • Supports both Arm and x86-based configurations and is compatible with current and future generations of GPUs, CPUs, and DPUs

Key Features

  • Up to 4 H100 PCIe GPUs with optional NVLink Bridge (H100 NVL), L40S, or L40
  • Up to 3 NVIDIA ConnectX-7 400G NDR InfiniBand cards or 3 NVIDIA BlueField®-3 cards
  • Dual 4th Gen Intel Xeon Scalable processors
  • 8 hot-swap E1.S and 2 M.2 slots
  • Supports PCIe 5.0, DDR5, and Compute Express Link 1.1+
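For context on the PCIe 5.0 x16 slots listed above, the sketch below computes the theoretical per-slot bandwidth from the PCIe 5.0 signaling rate. The figures are standard PCIe 5.0 parameters (32 GT/s per lane, 128b/130b encoding), not values taken from this datasheet.

```python
# Theoretical one-direction bandwidth of a PCIe 5.0 x16 slot.
GT_PER_S = 32         # PCIe 5.0 raw signaling rate per lane (GT/s)
LANES = 16
ENCODING = 128 / 130  # 128b/130b line-encoding efficiency

gbps_per_direction = GT_PER_S * LANES * ENCODING / 8  # GB/s, one direction
print(f"~{gbps_per_direction:.0f} GB/s per direction")  # ~63 GB/s
```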
 

Grace and x86 MGX System Configurations at a Glance

Supermicro NVIDIA MGX™ 1U/2U systems with the Grace™ CPU Superchip and x86 CPUs are fully optimized to support up to 4 GPUs via PCIe without sacrificing I/O, networking, or thermals. The modular building-block architecture lets you tailor these systems for a variety of accelerated workloads and fields, including AI training and inference, HPC, data analytics, visualization/Omniverse™, and hyperscale cloud applications.

SKU: ARS-121L-DNR | ARS-221GL-NR | SYS-221GE-NR

Form Factor
  • ARS-121L-DNR: 1U 2-node system with NVIDIA Grace CPU Superchip per node
  • ARS-221GL-NR: 2U GPU system with single NVIDIA Grace CPU Superchip
  • SYS-221GE-NR: 2U GPU system with dual x86 CPUs
CPU
  • ARS-121L-DNR: 144-core Grace Arm Neoverse V2 CPU in a single chip per node (total of 288 cores in one system)
  • ARS-221GL-NR: 144-core Grace Arm Neoverse V2 CPU in a single chip
  • SYS-221GE-NR: 4th Gen Intel Xeon Scalable Processors (up to 56 cores per socket)
GPU
  • ARS-121L-DNR: Please contact our sales team for possible configurations
  • ARS-221GL-NR: Up to 4 double-width GPUs including NVIDIA H100 PCIe, H100 NVL, L40S
  • SYS-221GE-NR: Up to 4 double-width GPUs including NVIDIA H100 PCIe, H100 NVL, L40S
Memory
  • ARS-121L-DNR: Up to 480GB of integrated LPDDR5X memory with ECC and up to 1TB/s of bandwidth per node
  • ARS-221GL-NR: Up to 480GB of integrated LPDDR5X memory with ECC and up to 1TB/s of bandwidth
  • SYS-221GE-NR: Up to 2TB, 32x DIMM slots, ECC DDR5-4800
Drives
  • ARS-121L-DNR: Up to 4x hot-swap E1.S drives and 2x M.2 NVMe drives per node
  • ARS-221GL-NR: Up to 8x hot-swap E1.S drives and 2x M.2 NVMe drives
  • SYS-221GE-NR: Up to 8x hot-swap E1.S drives and 2x M.2 NVMe drives
Networking
  • ARS-121L-DNR: 2x PCIe 5.0 x16 slots per node supporting NVIDIA BlueField-3 or ConnectX-7 (e.g., 1 GPU and 1 BlueField-3)
  • ARS-221GL-NR: 3x PCIe 5.0 x16 slots supporting NVIDIA BlueField-3 or ConnectX-7 (in addition to 4x PCIe 5.0 x16 slots for GPUs)
  • SYS-221GE-NR: 3x PCIe 5.0 x16 slots supporting NVIDIA BlueField-3 or ConnectX-7 (in addition to 4x PCIe 5.0 x16 slots for GPUs)
Interconnect
  • ARS-121L-DNR: NVLink™-C2C with 900GB/s CPU-CPU interconnect (within node)
  • ARS-221GL-NR: NVLink™ Bridge GPU-GPU interconnect supported (e.g., H100 NVL)
  • SYS-221GE-NR: NVLink™ Bridge GPU-GPU interconnect supported (e.g., H100 NVL)
Cooling
  • ARS-121L-DNR: Air cooling
  • ARS-221GL-NR: Air cooling
  • SYS-221GE-NR: Air cooling
Power
  • ARS-121L-DNR: 2x 2700W Redundant Titanium Level power supplies
  • ARS-221GL-NR: 3x 2000W Redundant Titanium Level power supplies
  • SYS-221GE-NR: 3x 2000W Redundant Titanium Level power supplies

1U/2U NVIDIA Grace™ CPU Superchip & x86 MGX Systems

Datasheet

1U NVIDIA GH200 Grace Hopper™ Superchip Systems

Datasheet