SYS-421GU-TNXR
Universal 4U Dual Processor (Intel) GPU System with NVIDIA HGX™ H100 4-GPU SXM5 board, NVLINK™ GPU-GPU Interconnect

- 32 DIMM slots; up to 8TB (32x 256GB) 4800MHz ECC DDR5
- 8 PCIe 5.0 x16 LP slots
- Flexible networking options
- 2 M.2 NVMe/SATA (boot drives only)
- 6x 2.5" hot-swap NVMe/SATA/SAS drive bays

Product SKUs | SuperServer SYS-421GU-TNXR (Black Front & Silver Body) |
Motherboard | Super X13DGU |
Processor | |
CPU | Dual Socket E (LGA-4677) 4th Gen Intel® Xeon® Scalable processors |
Note | Supports up to 350W TDP CPUs (air cooled or liquid cooled) |
GPU | |
Supported GPU | HGX H100 4-GPU SXM5 Multi-GPU Board |
CPU-GPU Interconnect | PCIe 5.0 x16 CPU-to-GPU Interconnect |
GPU-GPU Interconnect | NVIDIA® NVLink™ |
System Memory | |
Memory | 32 DIMM slots; up to 8TB (32x 256GB) 4800MHz ECC DDR5 DRAM |
Memory Voltage | 1.1 V |
Error Detection | ECC |
On-Board Devices | |
Chipset | Intel® C741 |
Network Connectivity | 2x 10GbE BaseT with Intel® X710-AT2 |
IPMI | IPMI 2.0 with virtual media over LAN and KVM-over-LAN support |
Input / Output | |
Video | 1 VGA port |
System BIOS | |
BIOS Type | AMI 32MB SPI Flash EEPROM |
Management | |
Software | OOB Management Package (SFT-OOB-LIC), Redfish API, IPMI 2.0, SSM, Intel® Node Manager, SPM, KVM with dedicated LAN, SUM, NMI, Watch Dog, SuperDoctor® 5 |
Power Configurations | ACPI Power Management; Power-on mode for AC power recovery |
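The Redfish API listed above exposes the BMC's sensor data as JSON over HTTPS, following the DMTF Redfish schema. The sketch below shows how such a response might be parsed; the payload here is a hand-made sample shaped like a Redfish Thermal resource, not output captured from this system.

```python
# Minimal sketch of summarizing sensor data from a Redfish Thermal resource.
# The sample payload below is illustrative; on a live system it would come
# from the BMC (e.g. GET /redfish/v1/Chassis/1/Thermal, path may vary).
import json

def summarize_thermal(thermal: dict) -> dict:
    """Map sensor names to readings from a Redfish Thermal resource."""
    temps = {t["Name"]: t["ReadingCelsius"] for t in thermal.get("Temperatures", [])}
    fans = {f["Name"]: f["Reading"] for f in thermal.get("Fans", [])}
    return {"temperatures_c": temps, "fan_rpm": fans}

# Abbreviated example payload in the shape of a Redfish Thermal response.
sample = {
    "Temperatures": [{"Name": "CPU1 Temp", "ReadingCelsius": 48}],
    "Fans": [{"Name": "FAN1", "Reading": 5600}],
}

print(json.dumps(summarize_thermal(sample)))
```

On a live system, the same dictionary would be fetched with an authenticated HTTPS GET against the BMC's Redfish endpoint; the chassis path and sensor names depend on the platform firmware.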
PC Health Monitoring | |
CPU | Monitors for CPU cores, chipset voltages, and memory; 8+4 phase-switching voltage regulator |
FAN | Fans with tachometer monitoring; Pulse Width Modulated (PWM) fan connectors; status monitor for speed control |
Temperature | Monitoring for CPU and chassis environment; thermal control for fan connectors |
Chassis | |
Form Factor | 4U Rackmount |
Model | CSE-458GTS-R3K06P |
Dimensions and Weight | |
Height | 8.75" (222.5mm) |
Width | 17.67" (449mm) |
Depth | 32.79" (833mm) |
Package | 14.57" (H) x 27.55" (W) x 49.6" (D) |
Weight | Net Weight: 166 lbs (75.3 kg) Gross Weight: 225 lbs (102.1 kg) |
Available Color | Black Front & Silver Body |
Front Panel | |
Buttons | Power On/Off button; System Reset button |
LEDs | Hard drive activity LED; network activity LEDs; power status LED; system overheat & power fail LED |
Expansion Slots | |
PCI-Express (PCIe) | 8 PCIe 5.0 x16 slots |
Drive Bays / Storage | |
Hot-swap | 6x 2.5" hot-swap NVMe/SATA drive bays (6x 2.5" NVMe hybrid) |
M.2 | 2 M.2 NVMe or 2 M.2 SATA3 |
System Cooling | |
Fans | 5 heavy-duty fans with optimal fan speed control |
Air Shroud | 1 air shroud |
Liquid Cooling | Direct to Chip (D2C) Cold Plate (optional) |
Power Supply | 4x 3000W Redundant Power Supplies, Titanium Level |
AC Input | 3000W |
DC Output | 3000W |
Output Type | Backplanes (connector) |
Operating Environment | |
Environmental Spec. | Operating Temperature: 10°C to 35°C (50°F to 95°F); Non-operating Temperature: -40°C to 60°C (-40°F to 140°F); Operating Relative Humidity: 8% to 90% (non-condensing); Non-operating Relative Humidity: 5% to 95% (non-condensing) |

Benefits & Advantages
- Double-precision Tensor Cores delivering up to 535/268 teraFLOPS of FP64 performance in the 8-GPU/4-GPU configurations, respectively
- TF32 precision to reach nearly 8,000 teraFLOPS for single-precision matrix multiplication
- Superior thermal design and an optional liquid cooling solution support maximum power/performance CPUs and GPUs
- Dedicated networking and storage per GPU with up to double the NVIDIA GPUDirect throughput of the previous generation
Key Features
- 4 or 8 H100 SXM GPUs with NVLink GPU-GPU interconnect at up to 900GB/s
- Dual 4th Gen Intel Xeon Scalable processors
- Supports PCIe 5.0, DDR5, and Compute Express Link (CXL) 1.1+
- Optimized thermal capacity and airflow to support CPUs up to 350W and GPUs up to 700W with air cooling and optional liquid cooling
- 1:1 PCIe 5.0 x16 networking slots per GPU at up to 400 Gbps each, supporting GPUDirect Storage and RDMA, plus up to 16 U.2 NVMe drive bays for a high-throughput data pipeline and clustering
Accelerate Large Scale AI Training Workloads
Large-scale AI training demands cutting-edge technologies that maximize the parallel computing power of GPUs to handle billions, if not trillions, of AI model parameters trained on massive, exponentially growing datasets.
Leveraging NVIDIA's HGX™ H100 4-GPU SXM board, the fastest NVLink™ and NVSwitch™ GPU-GPU interconnects with up to 900GB/s of bandwidth, and 1:1 networking to each GPU for node clustering, these systems are optimized to train large language models from scratch in the shortest possible time.
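The 900GB/s figure quoted above is the per-GPU aggregate across all NVLink links: each H100 SXM GPU exposes 18 fourth-generation NVLink links rated at 50 GB/s of bidirectional bandwidth apiece. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the quoted 900GB/s per-GPU NVLink figure.
# H100 SXM: 18 fourth-generation NVLink links, each 50 GB/s bidirectional
# (25 GB/s per direction).
LINKS_PER_GPU = 18
GBPS_PER_LINK_BIDIR = 50

aggregate = LINKS_PER_GPU * GBPS_PER_LINK_BIDIR
print(f"Per-GPU NVLink bandwidth: {aggregate} GB/s")  # prints 900 GB/s
```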
Deliver optimized systems for the most demanding AI, Cloud, and 5G Edge workloads

Performance Optimized
Enhanced thermal capacity to support the highest performing CPUs and GPUs, plus support for the latest industry technologies including PCIe 5.0, DDR5, CXL 1.1 and high-bandwidth memory.
Energy Efficient
Systems designed with optimized airflow to run in high-temperature data center environments up to 40°C, with optional rack-scale liquid cooling solutions and in-house-designed Titanium Level power supplies for maximum efficiency.
Improved Security and Manageability
Industry-standard compliance for hardware and silicon Root of Trust (RoT), cryptographic attestation of components throughout the entire supply chain, and comprehensive remote management capabilities.
Supports Open Industry Standards
Futureproofing and interoperability with support for Open Compute Project (OCP) standards including OCP 3.0, OAM, ORV2 and OSF as well as Open BMC and the E1.S storage form factor.