AS-8125GS-TNHR
8U GPU Server supports Dual Socket SP5 AMD EPYC™ 9004 Series Processors

- 24 DIMM slots; up to 6TB DRAM; 4800MHz ECC DDR5 RDIMM/LRDIMM
- 8 PCI-E Gen 5.0 x16 LP slots, 2 PCI-E Gen 5.0 x16 FHFL slots
- Flexible networking options
- 1 M.2 NVMe for boot drive only
- 12x 2.5" Hot-swap NVMe drive bays
- 2x 2.5" Hot-swap SATA drive bays
- 10 heavy duty fans with optimal fan speed control
- 8x 3000W redundant Titanium level power supplies

GPU A+ Server AS-8125GS-TNHR
(Complete System Only)
High density 8U system with NVIDIA® HGX™ H100 8-GPU
Highest GPU-to-GPU communication bandwidth using NVIDIA® NVLink™ + NVIDIA® NVSwitch™
- High Performance Computing
- AI/Deep Learning Training
- Climate and Weather Modeling
Product SKUs | A+ Server AS-8125GS-TNHR |
Motherboard | Super H13DSG-O-CPU |
Processor | |
CPU | Dual Socket SP5 AMD EPYC™ 9004 Series Processors Support CPU TDP 400W |
Cores | Up to 96C/192T |
GPU | |
Supported GPU | HGX H100 8-GPU SXM5 Multi-GPU Board |
CPU-GPU Interconnect | PCI-E 5.0 x16 CPU-to-GPU Interconnect |
GPU-GPU Interconnect | NVIDIA® NVLink™ with NVSwitch™ |
System Memory | |
Memory | Memory Capacity: 24 DIMM slots, up to 6TB (24x 256GB DRAM); Memory Type: 4800MHz ECC DDR5 RDIMM/LRDIMM |
On-Board Devices | |
Chipset | AMD SP5 |
Network Connectivity | 2x 10GbE BaseT with Intel® X550-AT2 (optional) |
IPMI | Intelligent Platform Management Interface (IPMI) 2.0 with virtual media over LAN and KVM-over-LAN support |
Input / Output | |
Video | 1 VGA port |
System BIOS | |
BIOS Type | AMI 32MB SPI Flash EEPROM |
Management | |
Software | OOB Management Package (SFT-OOB-LIC), Redfish API, IPMI 2.0, SSM, Intel® Node Manager, SPM, KVM with dedicated LAN, SUM, NMI, Watch Dog, SuperDoctor® 5 |
Power Configurations | ACPI Power Management; Power-on mode for AC power recovery |
PC Health Monitoring | |
CPU | 7+1 phase-switching voltage regulator; monitors for CPU core, chipset, and memory voltages |
FAN | Fans with tachometer monitoring; Pulse Width Modulated (PWM) fan connectors; status monitor for speed control |
Temperature | Monitoring for CPU and chassis environment; thermal control for fan connectors |
Chassis | |
Form Factor | 8U Rackmount |
Model | CSE-GP801TS |
Dimensions and Weight | |
Height | 14" (355.6mm) |
Width | 17.2" (437mm) |
Depth | 33.2" (843.28mm) |
Package | 29.5" (H) x 27.5" (W) x 51.2" (D) |
Weight | Net Weight: 166 lbs (75.3 kg); Gross Weight: 225 lbs (102.1 kg) |
Available Color | Black Front & Silver Body |
Front Panel | |
Buttons | Power On/Off button; System Reset button |
LEDs | Hard drive activity LED; Network activity LEDs; Power status LED; System Overheat & Power Fail LED |
Expansion Slots | |
PCI-Express (PCI-E) | 8x PCIe 5.0 x16 LP slots; 2x PCIe 5.0 x16 FHFL slots |
Drive Bays / Storage | |
Hot-swap | 14x 2.5" hot-swap drive bays (12x NVMe; 2x SATA) |
M.2 | 1x M.2 NVMe (boot drive only) |
System Cooling | |
Fans | 10 heavy duty fans with optimal fan speed control |
Power Supply | 8x 3000W Redundant Power Supplies, Titanium Level |
AC Input | 3000W |
DC Output | 3000W |
Output Type | Backplanes (connector) |
Operating Environment | |
Environmental Spec. | Operating Temperature: 10°C ~ 35°C (50°F ~ 95°F); Non-operating Temperature: -40°C ~ 60°C (-40°F ~ 140°F); Operating Relative Humidity: 8% ~ 90% (non-condensing); Non-operating Relative Humidity: 5% ~ 95% (non-condensing) |
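Since the Management row lists a Redfish API alongside IPMI 2.0, health data such as the fan and temperature readings monitored above can be pulled programmatically from the BMC. The sketch below parses an illustrative DMTF Redfish `Thermal` payload; the resource path, sensor names, and readings are placeholder assumptions, not the exact output of this system's firmware.

```python
import json

# Illustrative Redfish "Thermal" payload, as a BMC might return it from
# GET https://<bmc-ip>/redfish/v1/Chassis/1/Thermal (schema per DMTF Redfish).
# Sensor names and values here are placeholders for demonstration only.
sample_thermal = json.loads("""
{
  "Temperatures": [
    {"Name": "CPU1 Temp", "ReadingCelsius": 48},
    {"Name": "CPU2 Temp", "ReadingCelsius": 51}
  ],
  "Fans": [
    {"Name": "FAN1", "Reading": 9800, "ReadingUnits": "RPM"},
    {"Name": "FAN2", "Reading": 9600, "ReadingUnits": "RPM"}
  ]
}
""")

def summarize_thermal(payload):
    """Return (hottest sensor name, reading in °C) and the mean fan RPM."""
    hottest = max(payload["Temperatures"], key=lambda t: t["ReadingCelsius"])
    fans = [f["Reading"] for f in payload["Fans"]]
    return (hottest["Name"], hottest["ReadingCelsius"]), sum(fans) / len(fans)

hottest, mean_rpm = summarize_thermal(sample_thermal)
print(hottest, mean_rpm)  # ('CPU2 Temp', 51) 9700.0
```

In practice the same parsing applies to a live `GET` against the BMC over HTTPS with the appropriate credentials.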
HGX H100 Systems
Multi-Architecture Flexibility with Future-Proof Open-Standards-Based Design
Benefits & Advantages
- High-performance GPU interconnect with up to 900GB/s of bandwidth, roughly 7x that of PCIe 5.0
- Superior thermal design supports maximum power/performance CPUs and GPUs
- Dedicated networking and storage per GPU with up to double the NVIDIA GPUDirect throughput of the previous generation
- Modular architecture for storage and I/O configuration flexibility
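The "7x" figure in the first bullet can be sanity-checked against PCIe 5.0 x16 bandwidth. A rough back-of-envelope calculation (numbers rounded; the 900GB/s figure is NVLink's aggregate bidirectional GPU-to-GPU bandwidth):

```python
# Rough sanity check of the "7x better than PCIe" claim (numbers rounded).
nvlink_bw_gbs = 900        # NVLink total GPU-to-GPU bandwidth, GB/s (bidirectional)
pcie5_lane_gts = 32        # PCIe 5.0 per-lane raw rate, GT/s
lanes = 16
# PCIe 5.0 uses 128b/130b encoding: ~63 GB/s usable per direction for x16.
pcie5_x16_per_dir = pcie5_lane_gts * lanes / 8 * (128 / 130)  # ~63 GB/s
pcie5_x16_bidir = 2 * pcie5_x16_per_dir                        # ~126 GB/s
print(round(nvlink_bw_gbs / pcie5_x16_bidir, 1))               # ~7.1
```

The ratio lands close to 7, consistent with the marketing claim.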
Key Features
- 8 next-generation H100 SXM GPUs with NVLink, NVSwitch interconnect
- Supports PCIe 5.0, DDR5 and Compute Express Link (CXL) 1.1+
- Innovative modular architecture designed for flexibility and futureproofing in 8U
- Optimized thermal capacity and airflow to support CPUs up to 350W and GPUs up to 700W with air cooling and optional liquid cooling
- PCIe 5.0 x16 1:1 networking slots per GPU (up to 400Gbps each) supporting GPUDirect Storage and RDMA, plus up to 16 U.2 NVMe drive bays

Liquid Cooling GPU Server

GPU Super Server AS-8125GS-TNHR | |
Overview | 8U Dual Socket (4th Gen AMD EPYC™), up to 8 SXM5 GPUs |
CPU | 2x 4th Gen AMD EPYC™ Processors |
Memory (additional memory available) | 24 DIMM slots Up to 6TB ECC DDR5-4800 RDIMM |
Graphics | 8x HGX H100 SXM5 GPUs (80GB, 700W TDP) |
Storage (additional storage available) | 8x 2.5" SATA; 8x 2.5" NVMe U.2 via PCIe switches; additional 8x 2.5" NVMe U.2 via PCIe switches (optional); 2x NVMe M.2 |
Power | 6x 3000W Titanium Level efficiency power supplies (3+3 redundant) |
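The 3+3 redundant configuration can be sanity-checked against the component TDPs on this sheet (700W per SXM5 GPU, up to 400W per CPU); the overhead figure for memory, drives, fans, and NICs below is an illustrative assumption, not a published spec.

```python
# Back-of-envelope power budget using TDPs from this sheet; the "other"
# overhead (memory, storage, fans, NICs) is an illustrative assumption.
gpu_w   = 8 * 700    # 8x SXM5 GPUs at 700W TDP
cpu_w   = 2 * 400    # 2x EPYC CPUs at up to 400W TDP
other_w = 1500       # assumed platform overhead (not a spec value)
load_w  = gpu_w + cpu_w + other_w   # 7900 W estimated system load
usable_w = 3 * 3000                  # 3+3 redundancy: 3 PSUs carry the load
print(load_w, usable_w, load_w <= usable_w)  # 7900 9000 True
```

Under these assumptions, three 3000W supplies cover the full load while the other three provide redundancy.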
Accelerate Large Scale AI Training Workloads
Large-scale AI training demands cutting-edge technology to maximize the parallel computing power of GPUs, handling models with billions or even trillions of parameters trained on massive, exponentially growing datasets.
Leveraging NVIDIA's HGX™ H100 SXM 8-GPU board, the fastest NVLink™ and NVSwitch™ GPU-to-GPU interconnects with up to 900GB/s of bandwidth, and 1:1 networking to each GPU for node clustering, these systems are optimized to train large language models from scratch in the shortest possible time.
Completing the stack with all-flash NVMe for a faster AI data pipeline, we provide fully integrated racks with liquid cooling options to ensure fast deployment and a smooth AI training experience.