Universal 4U Dual Processor (Intel) GPU System with NVIDIA HGX™ H100 4-GPU SXM5 board, NVLINK™ GPU-GPU Interconnect

Supermicro SuperServer SYS-421GU-TNXR
  • 32 DIMM slots, up to 8TB: 32x 256GB 4800MHz ECC DDR5
  • 8 PCIe 5.0 x16 LP slots
  • Flexible networking options
  • 2 M.2 NVMe/SATA (boot drives only)
  • 6x 2.5" hot-swap NVMe/SATA/SAS drive bays
Product Specification
Product SKUs: SuperServer SYS-421GU-TNXR (Black Front & Silver Body)
Motherboard: Super X13DGU
CPU: Dual Socket E (LGA-4677)
4th Gen Intel® Xeon® Scalable processors
Note: Supports up to 350W TDP CPUs (air cooled or liquid cooled)
Supported GPU: HGX H100 4-GPU SXM5 Multi-GPU Board
CPU-GPU Interconnect: PCIe 5.0 x16
GPU-GPU Interconnect: NVIDIA® NVLink™
System Memory
Memory Capacity: 32 DIMM slots, up to 8TB (32x 256GB)
Memory Type: 4800MHz ECC DDR5 DRAM
Memory Voltage: 1.1 V
Error Detection: ECC
On-Board Devices
Chipset: Intel® C741
Network Connectivity: 2x 10GbE BaseT with Intel® X710-AT2
IPMI: IPMI 2.0 with virtual media over LAN and KVM-over-LAN support
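Since the BMC exposes a Redfish API alongside IPMI 2.0, basic system state can be read as JSON over HTTPS. Below is a minimal sketch of parsing a Redfish ComputerSystem resource; the payload is an illustrative sample (not output captured from this system), though the field names follow the DMTF Redfish ComputerSystem schema.

```python
import json

# Hypothetical excerpt of what a BMC might return for
# GET /redfish/v1/Systems/1 (values are illustrative only).
sample_response = json.loads("""
{
  "@odata.id": "/redfish/v1/Systems/1",
  "PowerState": "On",
  "MemorySummary": { "TotalSystemMemoryGiB": 2048 },
  "ProcessorSummary": { "Count": 2, "Model": "Intel(R) Xeon(R)" }
}
""")

# Pull out the fields a monitoring script would typically check.
print(sample_response["PowerState"])
print(sample_response["MemorySummary"]["TotalSystemMemoryGiB"])
print(sample_response["ProcessorSummary"]["Count"])
```

In practice the same fields would be fetched from the BMC's management LAN address with an authenticated HTTPS GET.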
Input / Output
Video: 1 VGA port
System BIOS
Software: OOB Management Package (SFT-OOB-LIC), Redfish API, IPMI 2.0, SSM, Intel® Node Manager, SPM, KVM with dedicated LAN, SUM, NMI, Watch Dog, SuperDoctor® 5
Power Configurations: ACPI Power Management, Power-on mode for AC power recovery
PC Health Monitoring
CPU: 8+4 phase-switching voltage regulator
Monitors for CPU cores, chipset voltages, memory
Fan: Fans with tachometer monitoring
Pulse Width Modulated (PWM) fan connectors
Status monitor for speed control
Temperature: Monitoring for CPU and chassis environment
Thermal control for fan connectors
Form Factor: 4U Rackmount
Dimensions and Weight
Height: 8.75" (222.5mm)
Width: 17.67" (449mm)
Depth: 32.79" (833mm)
Package: 14.57" (H) x 27.55" (W) x 49.6" (D)
Weight: Net Weight: 166 lbs (75.3 kg)
Gross Weight: 225 lbs (102.1 kg)
Available Color: Black Front & Silver Body
Front Panel
Buttons: Power On/Off button, System Reset button
LEDs: Hard drive activity LED
Network activity LEDs
Power status LED
System Overheat & Power Fail LED
Expansion Slots
PCI-Express (PCIe): 8 PCIe 5.0 x16 slots
Drive Bays / Storage
Hot-swap: 6x 2.5" hot-swap NVMe/SATA drive bays
(6x 2.5" NVMe hybrid)
M.2: 2 M.2 NVMe or 2 M.2 SATA3
System Cooling
Fans: 5 heavy-duty fans with optimal fan speed control
Air Shroud: 1 air shroud
Liquid Cooling: Direct-to-Chip (D2C) cold plate (optional)
Power Supply: 4x 3000W Redundant Power Supplies, Titanium Level
AC Input: 3000W
DC Output: 3000W
Output Type: Backplanes (connector)
Operating Environment
Environmental Spec.: Operating Temperature: 10°C ~ 35°C (50°F ~ 95°F)
Non-operating Temperature: -40°C to 60°C (-40°F to 140°F)
Operating Relative Humidity: 8% to 90% (non-condensing)
Non-operating Relative Humidity: 5% to 95% (non-condensing)
HGX H100 Systems - Designed for the Largest AI-Fused HPC Clusters

Benefits & Advantages

  • Double-precision Tensor Cores delivering up to 535/268 teraFLOPS of FP64 in the 8-GPU/4-GPU configurations, respectively
  • TF32 precision to reach nearly 8,000 teraFLOPS for single-precision matrix multiplication
  • Superior thermal design and a liquid cooling option support maximum power/performance CPUs and GPUs
  • Dedicated networking and storage per GPU, with up to double the NVIDIA GPUDirect throughput of the previous generation
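The FP64 figures in the first bullet are consistent with the commonly published FP64 Tensor Core rate of roughly 67 teraFLOPS per H100 SXM GPU; that per-GPU rate is an assumption here, not stated in the text.

```python
# Approximate FP64 Tensor Core rate per H100 SXM GPU (assumption,
# not from the spec sheet above).
PER_GPU_FP64_TC_TFLOPS = 67

four_gpu = 4 * PER_GPU_FP64_TC_TFLOPS   # 268 teraFLOPS, matching the 4-GPU figure
eight_gpu = 8 * PER_GPU_FP64_TC_TFLOPS  # 536 teraFLOPS, close to the quoted 535
print(four_gpu, eight_gpu)
```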

Key Features

  • 4 or 8 H100 SXM GPUs with NVLink interconnect at up to 900GB/s
  • Dual 4th Gen Intel Xeon Scalable processors
  • Supports PCIe 5.0, DDR5, and Compute Express Link (CXL) 1.1+
  • Optimized thermal capacity and airflow to support CPUs up to 350W and GPUs up to 700W with air cooling and optional liquid cooling
  • PCIe 5.0 x16 1:1 networking slots for GPUs, up to 400 Gbps each, supporting GPUDirect Storage and RDMA, plus up to 16 U.2 NVMe drive bays for a high-throughput data pipeline and clustering
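As a rough sanity check on the 1:1 networking bullet, a 400 Gbps NIC per GPU fits within the usable bandwidth of the PCIe 5.0 x16 slot feeding it (per-lane rate and encoding from the PCIe 5.0 spec; figures rounded):

```python
# PCIe 5.0: 32 GT/s per lane with 128b/130b encoding, 16 lanes.
nic_gbps = 400
nic_gbytes = nic_gbps / 8                     # 50 GB/s NIC line rate
pcie5_x16_gbytes = 16 * 32 * (128 / 130) / 8  # ~63 GB/s per direction
print(round(nic_gbytes), round(pcie5_x16_gbytes))
```

So the slot has headroom above the NIC's line rate in each direction.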

Accelerate Large Scale AI Training Workloads

Large-scale AI training demands cutting-edge technologies that maximize the parallel computing power of GPUs to handle billions, if not trillions, of AI model parameters trained on massive, exponentially growing datasets.

Leveraging NVIDIA's HGX™ H100 SXM 4-GPU board, the fastest NVLink™ and NVSwitch™ GPU-GPU interconnects with up to 900GB/s bandwidth, and the fastest 1:1 networking to each GPU for node clustering, these systems are optimized to train large language models from scratch in the shortest amount of time.
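To put the 900GB/s figure in context, a back-of-the-envelope comparison against a PCIe 5.0 x16 link (assuming the 900GB/s is total bidirectional NVLink bandwidth, and counting PCIe bidirectionally to match):

```python
# PCIe 5.0: 32 GT/s per lane, 128b/130b encoding, 16 lanes.
pcie5_x16_unidir = 16 * 32 * (128 / 130) / 8  # ~63 GB/s each direction
pcie5_x16_bidir = 2 * pcie5_x16_unidir        # ~126 GB/s both directions
nvlink_bidir = 900                            # GB/s, the figure quoted above
print(round(nvlink_bidir / pcie5_x16_bidir, 1))
```

That works out to roughly 7x the GPU-GPU bandwidth of a PCIe 5.0 x16 link.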

Deliver optimized systems for the most demanding AI, Cloud, and 5G Edge workloads


Performance Optimized

Enhanced thermal capacity to support the highest performing CPUs and GPUs, plus support for the latest industry technologies including PCIe 5.0, DDR5, CXL 1.1 and high-bandwidth memory.

Energy Efficient

Systems designed for optimal airflow to run in high-temperature data center environments up to 40°C, with optional rack-scale liquid cooling solutions and in-house-designed Titanium Level power supplies for maximum efficiency.


Improved Security and Manageability

Industry-standard compliance with hardware and silicon Root of Trust (RoT), cryptographic attestation of components throughout the entire supply chain, and comprehensive remote management capabilities.

Supports Open Industry Standards

Future-proofing and interoperability with support for Open Compute Project (OCP) standards, including OCP 3.0, OAM, ORV2, and OSF, as well as OpenBMC and the E1.S storage form factor.