SYS-821GE-TNHR
DP Intel 8U GPU Server with NVIDIA HGX H100/H200 8-GPU and Rear I/O
- 5th/4th Gen Intel® Xeon® Scalable processor support
- 32 DIMM slots, up to 8TB ECC DDR5: 32x 256GB DRAM
- 2 PCIe 5.0 x16 FHHL slots, plus 2 optional PCIe 5.0 x16 FHHL slots
- 8 PCIe 5.0 x16 LP slots
- Flexible networking options
- 16x 2.5" Hot-swap NVMe drive bays (12x by default, 4x optional)
- 2 M.2 NVMe for boot drive only
- 3x 2.5" Hot-swap SATA drive bays
- Optional: 8x 2.5" Hot-swap SATA drive bays
- 10 heavy duty fans with optimal fan speed control
- 6x 3000W (3+3) Redundant Power Supplies, Titanium Level
- Optional: 8x 3000W (4+4) Redundant Power Supplies, Titanium Level
Key Applications
- High Performance Computing
- AI/Deep Learning Training
- Industrial Automation, Retail
- Conversational AI
- Business Intelligence & Analytics
- Drug Discovery
- Climate and Weather Modeling
- Finance & Economics
Product SKUs | SuperServer SYS-821GE-TNHR |
Motherboard | Super X13DEG-OAD |
Processor | |
CPU | Dual Socket E (LGA-4677) 5th Gen Intel® Xeon® / 4th Gen Intel® Xeon® Scalable processors |
Core Count | Up to 64C/128T; Up to 320MB Cache per CPU |
Note | Supports up to 350W TDP CPUs (Air Cooled) Supports up to 385W TDP CPUs (Liquid Cooled) |
GPU | |
Max GPU Count | 8 onboard GPU(s) |
Supported GPU | NVIDIA SXM: HGX H100 8-GPU (80GB), HGX H200 8-GPU (141GB) |
CPU-GPU Interconnect | PCIe 5.0 x16 CPU-to-GPU Interconnect |
GPU-GPU Interconnect | NVIDIA® NVLink® with NVSwitch™ |
System Memory | |
Memory | Slot Count: 32 DIMM slots Max Memory (1DPC): Up to 4TB 5600MT/s ECC DDR5 RDIMM Max Memory (2DPC): Up to 8TB 4400MT/s ECC DDR5 RDIMM |
Memory Voltage | 1.1 V |
On-Board Devices | |
Chipset | Intel® C741 |
Network Connectivity | 2x 10GbE BaseT with Intel® X550-AT2 (optional) 2x 25GbE SFP28 with Broadcom® BCM57414 (optional) 2x 10GbE BaseT with Intel® X710-AT2 (optional) |
IPMI | IPMI 2.0 with virtual media over LAN and KVM-over-LAN support |
Input / Output | |
Video | 1 VGA port(s) |
System BIOS | |
BIOS Type | AMI 32MB SPI Flash EEPROM |
Management | |
Software | IPMI 2.0 KVM with dedicated LAN Super Diagnostics Offline SuperDoctor® 5 Supermicro Update Manager (SUM) Supermicro Power Manager (SPM) Supermicro Server Manager (SSM) Redfish API |
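The management stack above includes a Redfish API on the BMC. As a hedged sketch of how that is typically reached, the helper below only builds the standard DMTF Redfish `Systems` resource URL; the BMC hostname, system ID, and credentials are illustrative assumptions, not values from this datasheet.

```python
# Sketch: building a Redfish ComputerSystem URL for the BMC.
# The resource layout (/redfish/v1/Systems/<id>) follows the DMTF
# Redfish specification; host and system ID here are examples only.
from urllib.parse import urljoin

REDFISH_ROOT = "/redfish/v1/"

def system_url(bmc_host: str, system_id: str = "1") -> str:
    """Return the Redfish ComputerSystem resource URL on a BMC."""
    return urljoin(f"https://{bmc_host}", f"{REDFISH_ROOT}Systems/{system_id}")

# A real query would then use an HTTP client, e.g. with `requests`
# (not executed here; credentials are placeholders):
#   r = requests.get(system_url("10.0.0.42"), auth=("ADMIN", password), verify=False)
#   r.json()["PowerState"]
```

The same root also exposes chassis thermals and power readings under `/redfish/v1/Chassis/`, which is how tools like SSM and SPM commonly collect telemetry.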
Power Configurations | ACPI Power Management Power-on mode for AC power recovery |
Security | |
Hardware | Trusted Platform Module (TPM) 2.0 Silicon Root of Trust (RoT) – NIST 800-193 Compliant |
Features | Cryptographically Signed Firmware Secure Boot Secure Firmware Updates Automatic Firmware Recovery Supply Chain Security: Remote Attestation Runtime BMC Protections System Lockdown |
PC Health Monitoring | |
CPU | 8+4 Phase-switching voltage regulator Monitors for CPU Cores, Chipset Voltages, Memory |
FAN | Fans with tachometer monitoring Pulse Width Modulated (PWM) fan connectors Status monitor for speed control |
Temperature | Monitoring for CPU and chassis environment Thermal Control for fan connectors |
Chassis | |
Form Factor | 8U Rackmount |
Model | CSE-GP801TS |
Dimensions and Weight | |
Height | 14" (355.6mm) |
Width | 17.2" (437mm) |
Depth | 33.2" (843.28mm) |
Package | 29.5" (H) x 27.5" (W) x 51.2" (D) |
Weight | Net Weight: 166 lbs (75.3 kg) Gross Weight: 225 lbs (102.1 kg) |
Available Color | Black front & silver body |
Front Panel | |
Buttons | Power On/Off button System Reset button |
LEDs | Hard drive activity LED Network activity LEDs Power status LED System Overheat & Power Fail LED |
Expansion Slots | |
PCI-Express (PCIe) | 8 PCIe 5.0 x16 LP slot(s) 4 PCIe 5.0 x16 FHHL slot(s) |
Drive Bays / Storage | |
Hot-swap | 19x 2.5" hot-swap NVMe/SATA drive bays (16x 2.5" NVMe dedicated) |
M.2 | 2 M.2 NVMe |
System Cooling | |
Fans | 10 heavy duty fans with optimal fan speed control |
Liquid Cooling | Direct to Chip (D2C) Cold Plate (optional) |
Power Supply | 6x 3000W Redundant Titanium Level power supplies |
Dimension (W x H x L) | 106.5 x 82.1 x 245.5 mm |
AC Input | 3000W: 200-240Vdc / 50-60Hz (for CQC only) 2880W: 200-207Vac / 50-60Hz 3000W: 207-240Vac / 50-60Hz |
+12V | Max: 91.66A / Min: 0A (200Vdc-240Vdc) |
12V SB | Max: 3A / Min: 0A |
Output Type | Backplanes (gold finger) |
Operating Environment | |
Environmental Spec. | Operating Temperature: 10°C ~ 35°C (50°F ~ 95°F) Non-operating Temperature: -40°C to 60°C (-40°F to 140°F) Operating Relative Humidity: 8% to 90% (non-condensing) Non-operating Relative Humidity: 5% to 95% (non-condensing) |
The full turn-key data center solution accelerates time-to-delivery for mission-critical enterprise use cases and eliminates the complexity of building a large cluster, previously achievable only through the intensive design tuning and time-consuming optimization required for supercomputing.
Proven Design Datasheet
With 32 NVIDIA HGX H100/H200 8-GPU, 8U Air-cooled Systems (256 GPUs) in 9 Racks
Key Features
- Proven industry leading architecture for large scale AI infrastructure deployments
- 256 NVIDIA H100/H200 GPUs in one scalable unit
- 20TB of HBM3 with H100 or 36TB of HBM3e with H200 in one scalable unit
- 1:1 networking to each GPU to enable NVIDIA GPUDirect RDMA and Storage for training large language models with up to trillions of parameters
- Customizable AI data pipeline storage fabric with industry leading parallel file system options
- NVIDIA AI Enterprise Software Ready
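The scalable-unit figures above can be sanity-checked with simple arithmetic from the per-GPU HBM capacities quoted elsewhere in this datasheet (80GB HBM3 per H100, 141GB HBM3e per H200); this is my own back-of-envelope check, not vendor data.

```python
# Scalable-unit totals: 32 nodes x 8 GPUs, per-GPU HBM from the datasheet.
NODES = 32
GPUS_PER_NODE = 8
HBM3_PER_H100_GB = 80    # H100 SXM5
HBM3E_PER_H200_GB = 141  # H200 SXM5

total_gpus = NODES * GPUS_PER_NODE                    # 256 GPUs
h100_hbm_tb = total_gpus * HBM3_PER_H100_GB / 1000    # ~20 TB HBM3
h200_hbm_tb = total_gpus * HBM3E_PER_H200_GB / 1000   # ~36 TB HBM3e
```

These match the "256 GPUs", "20TB of HBM3", and "36TB of HBM3e" figures in the key features list.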
Compute Node
Liquid Cooling GPU Server
GPU Super Server SYS-821GE-TNHR | |
Overview | 8U Dual Socket (4th Gen Intel® Xeon® Scalable Processors), up to 8 SXM5 GPUs |
CPU | 2x 4th Gen Intel Xeon Scalable Processors |
Memory (additional memory available) | 32 DIMM slots Up to 8TB: 32x 256 GB DRAM |
Graphics | 8x HGX H100 SXM5 GPUs (80GB, 700W TDP) |
Storage (additional storage available) | 8x 2.5” SATA 8x 2.5” NVMe U.2 Via PCIe Switches Additional 8x 2.5” NVMe U.2 Via PCIe Switches (option) 2x NVMe M.2 |
Power | 3+3 Redundant 6x 3000W Titanium Level Efficiency Power |
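As a rough illustration of why the 3+3 redundant 6x 3000W configuration covers this compute node, the sketch below totals the TDP figures quoted in this datasheet (700W per SXM5 GPU, 350W air-cooled CPU ceiling). It ignores drives, fans, NICs, DIMMs, and conversion losses, so treat it as an illustrative lower bound on draw, not a sizing guide.

```python
# Back-of-envelope power budget from datasheet TDPs (illustrative only).
GPU_TDP_W = 700   # per HGX H100 SXM5 GPU
CPU_TDP_W = 350   # air-cooled Xeon TDP ceiling
GPUS, CPUS = 8, 2

compute_draw_w = GPUS * GPU_TDP_W + CPUS * CPU_TDP_W  # 6300 W for GPUs + CPUs
usable_w = 3 * 3000        # 3+3 redundancy: three supplies carry the load
headroom_w = usable_w - compute_draw_w   # what remains for everything else
```

Even with all supplies on one side of the 3+3 pair lost, the remaining 9000W covers the 6300W of GPU and CPU TDP with headroom for the rest of the system.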
Accelerate Large Scale AI Training Workloads
Large-scale AI training demands cutting-edge technologies that maximize the parallel computing power of GPUs to handle models with billions, if not trillions, of parameters, trained on massive and exponentially growing datasets.
Leveraging NVIDIA’s HGX™ H100 SXM 8-GPU, the fastest NVLink™ and NVSwitch™ GPU-GPU interconnects with up to 900GB/s of bandwidth, and 1:1 networking to each GPU for node clustering, these systems are optimized to train large language models from scratch in the shortest amount of time.
Completing the stack with all-flash NVMe for a faster AI data pipeline, we provide fully integrated racks with liquid cooling options to ensure fast deployment and a smooth AI training experience.
32-Node Scalable Unit Rack Scale Design Close-up
SYS-821GE-TNHR / AS-8125GS-TNHR
Overview | 8U Air-cooled System with NVIDIA HGX H100/H200 |
CPU | Dual 5th/4th Gen Intel® Xeon® or AMD EPYC 9004 Series Processors |
Memory | 2TB DDR5 (recommended) |
GPU | NVIDIA HGX H100/H200 8-GPU (80GB HBM3 or 141GB HBM3e per GPU), 900GB/s NVLink GPU-GPU interconnect with NVSwitch |
Networking | 8x NVIDIA ConnectX®-7 Single-port 400Gbps/NDR OSFP NICs 2x NVIDIA ConnectX®-7 Dual-port 200Gbps/NDR200 OSFP112 NICs |
Storage | 30.4TB NVMe (4x 7.6TB U.3) 3.8TB NVMe (2x 1.9TB U.3, Boot) [Optional M.2 available] |
Power Supply | 6x 3000W Redundant Titanium Level power supplies |
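The NIC list above implies the per-node fabric bandwidth behind the 1:1 GPU networking claim. The sketch below is my own line-rate arithmetic (protocol overhead and fabric topology ignored), not a quoted specification.

```python
# Illustrative per-node line-rate totals from the NIC configuration above.
NDR_PORT_GBPS = 400      # 8x ConnectX-7 single-port NDR, one port per GPU
NDR200_PORT_GBPS = 200   # 2x ConnectX-7 dual-port NDR200 (4 ports total)

gpu_fabric_gbps = 8 * NDR_PORT_GBPS          # 3200 Gb/s, 1:1 with the 8 GPUs
aux_fabric_gbps = 2 * 2 * NDR200_PORT_GBPS   # 800 Gb/s for storage/in-band
```

So each node presents 3.2Tb/s of dedicated GPU fabric, one 400Gbps NDR port per GPU, plus a separate 800Gb/s of dual-port NDR200 connectivity.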