AS-8125GS-TNHR

DP AMD 8U Server with NVIDIA HGX H100/H200 8-GPU
  • High density 8U system for NVIDIA® HGX™ H100/H200 8-GPU
    Highest GPU communication using NVIDIA® NVLINK™ + NVIDIA® NVSwitch™
    8 NIC for GPU direct RDMA (1:1 GPU Ratio)
  • 24 DIMM slots DDR5; up to 6TB 4800MT/s ECC LRDIMM/RDIMM
  • Up to 8 PCIe 5.0 x16 LP + 4 PCIe 5.0 x16 FHFL slots
  • Flexible networking options
  • 12 Hot-swap 2.5" NVMe drive bays + 2 hot-swap 2.5" SATA drive bays
    + 4 hot-swap 2.5" NVMe drive bays (optional)
    1 M.2 NVMe for boot drive only
  • 10 heavy-duty fans with optimal fan speed control
  • 6x 3000W redundant Titanium level power supplies

Key Applications

  • High Performance Computing
  • AI/Deep Learning Training
  • Industrial Automation, Retail
  • Climate and Weather Modeling
Product Specification
 

Supermicro Ready-to-Ship Gold Series Pre-Configured Server


Best-Selling Server Platforms, Pre-Configured with Key Components for Reduced Lead Times

Model: AS-8125GS-TNHR-G1

  • CPU: 2x AMD EPYC™ 9474F (48-core, 3.6GHz)
  • GPU: 1x NVIDIA HGX™ H200 8-GPU
  • Memory: 24x 96GB DDR5-5600 (running at 4800MT/s)
  • Storage: 1x 1.9TB U.2 NVMe SSD
  • Networking: 1x dual-port 10GbE RJ45
    8x 400G NDR/Ethernet OSFP
  • Power Supplies: 6x 3000W Titanium Level
Product SKUs: A+ Server AS-8125GS-TNHR
A+ Server AS-8125GS-TNHR-G1 (Gold Series version with pre-configured components)
Motherboard: Super H13DSG-O-CPU-D
Processor
CPU: Dual Socket SP5
AMD EPYC™ 9004 Series Processors
Core Count: Up to 128C/256T
Note: Supports up to 400W TDP CPUs (Air Cooled)
GPU
Max GPU Count: 8 onboard GPUs
Supported GPU: NVIDIA SXM: HGX H100 8-GPU (80GB), HGX H200 8-GPU (141GB)
CPU-GPU Interconnect: PCIe 5.0 x16
GPU-GPU Interconnect: NVIDIA® NVLink® with NVSwitch™
System Memory
Memory Slot Count: 24 DIMM slots
Max Memory (1DPC): Up to 6TB 4800MT/s ECC DDR5 RDIMM
Memory Voltage: 1.1V
On-Board Devices
Chipset: AMD SP5
Input / Output
Video: 1 VGA port
System BIOS
BIOS Type: AMI 32MB SPI Flash EEPROM
Management
Software: SuperCloud Composer®
Supermicro Server Manager (SSM)
Supermicro Update Manager (SUM)
Supermicro SuperDoctor® 5 (SD5)
Super Diagnostics Offline (SDO)
Supermicro Thin-Agent Service (TAS)
SuperServer Automation Assistant (SAA) New!
Power Configurations: Power-on mode for AC power recovery
ACPI Power Management
Security
Hardware: Trusted Platform Module (TPM) 2.0
Silicon Root of Trust (RoT) – NIST 800-193 Compliant
Features: Cryptographically Signed Firmware
Secure Boot
Secure Firmware Updates
Automatic Firmware Recovery
Supply Chain Security: Remote Attestation
Runtime BMC Protections
System Lockdown
PC Health Monitoring
CPU: Monitors for CPU cores, chipset voltages, memory
7+1 phase-switching voltage regulator
Fan: Fans with tachometer monitoring
Status monitor for speed control
Temperature: Monitoring for CPU and chassis environment
Thermal control for fan connectors
Chassis
Form Factor: 8U Rackmount
Model: CSE-GP801TS
Dimensions and Weight
Height: 14" (355.6 mm)
Width: 17.2" (437 mm)
Depth: 33.2" (843.28 mm)
Package: 29.5" (H) x 27.5" (W) x 51.2" (D)
Gross Weight: 225 lbs (102.1 kg)
Net Weight: 166 lbs (75.3 kg)
Available Color: Black front & silver body
Front Panel
LEDs: Hard drive activity LED
Network activity LEDs
Power status LED
System overheat & power fail LED
Buttons: Power On/Off button
System reset button
Expansion Slots
PCI-Express (PCIe) Configuration
Default:
8 PCIe 5.0 x16 LP slots
2 PCIe 5.0 x16 FHFL slots
Option A:
8 PCIe 5.0 x16 LP slots
4 PCIe 5.0 x16 FHFL slots
Drive Bays / Storage
Drive Bays Configuration: Default: Total 18 bays
2 front hot-swap 2.5" SATA drive bays
4 front hot-swap 2.5" NVMe* drive bays
12 front hot-swap 2.5" NVMe drive bays
(*NVMe support may require additional storage controller and/or cables)
M.2: 1 M.2 NVMe slot (M-key)
System Cooling
Fans: 10 heavy-duty fans with optimal fan speed control
Power Supply
Power Supplies: 6x 3000W Redundant Titanium Level (96%) power supplies
Operating Environment
Operating Temperature: 10°C to 35°C (50°F to 95°F)
Non-operating Temperature: -40°C to 60°C (-40°F to 140°F)
Operating Relative Humidity: 8% to 90% (non-condensing)
Non-operating Relative Humidity: 5% to 95% (non-condensing)
 

Supermicro NVIDIA HGX H100/H200 
8-GPU Servers

Large-Scale AI applications demand greater computing power, faster memory bandwidth, and higher memory capacity to handle today's AI models, reaching up to trillions of parameters. Supermicro NVIDIA HGX 8-GPU Systems are carefully optimized for cooling and power delivery to sustain maximum performance of the 8 interconnected H100/H200 GPUs.
Supermicro NVIDIA HGX Systems are designed to be the scalable building block for AI clusters: each system features 8x 400G NVIDIA BlueField®-3 or ConnectX-7 NICs for a 1:1 GPU-to-NIC ratio with support for NVIDIA Spectrum-X Ethernet or NVIDIA Quantum-2 InfiniBand. These systems can be deployed in a full turn-key Generative AI SuperCluster, from 32 nodes to thousands of nodes, accelerating time-to-delivery of mission-critical AI infrastructure.
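As a back-of-the-envelope sketch (not a figure from the datasheet), the 1:1 GPU-to-NIC design implies the following aggregate fabric bandwidth per node:

```python
# Back-of-the-envelope: per-node network fabric bandwidth
# implied by the 1:1 GPU-to-NIC ratio described above.
GPUS_PER_NODE = 8
NIC_SPEED_GBPS = 400  # each BlueField-3 / ConnectX-7 NIC runs at 400Gb/s

nics_per_node = GPUS_PER_NODE  # 1:1 ratio, one NIC per GPU
total_gbps = nics_per_node * NIC_SPEED_GBPS

print(f"NICs per node: {nics_per_node}")
print(f"Aggregate fabric bandwidth: {total_gbps} Gb/s "
      f"({total_gbps // 8} GB/s) per node")
# → Aggregate fabric bandwidth: 3200 Gb/s (400 GB/s) per node
```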
Generative AI SuperCluster

The full turn-key data center solution accelerates time-to-delivery for mission-critical enterprise use cases and eliminates the complexity of building a large cluster, which was previously achievable only through intensive design tuning and time-consuming supercomputing optimization.

Proven Design Datasheet

 

With 32 NVIDIA HGX H100/H200 8-GPU, 8U Air-cooled Systems (256 GPUs) in 9 Racks

Key Features

  • Proven industry leading architecture for large scale AI infrastructure deployments
  • 256 NVIDIA H100/H200 GPUs in one scalable unit
  • 20TB of HBM3 with H100 or 36TB of HBM3e with H200 in one scalable unit
  • 1:1 networking to each GPU to enable NVIDIA GPUDirect RDMA and Storage for training large language model with up to trillions of parameters
  • Customizable AI data pipeline storage fabric with industry leading parallel file system options
  • NVIDIA AI Enterprise Software Ready
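The scalable-unit totals above follow directly from the per-GPU figures in the spec table (80GB HBM3 per H100, 141GB HBM3e per H200); a quick arithmetic check:

```python
# Sanity-check the scalable-unit (SU) totals quoted above.
NODES_PER_SU = 32
GPUS_PER_NODE = 8
H100_HBM3_GB = 80    # per-GPU HBM3 capacity (H100 SXM)
H200_HBM3E_GB = 141  # per-GPU HBM3e capacity (H200 SXM)

gpus_per_su = NODES_PER_SU * GPUS_PER_NODE      # 256 GPUs per SU
h100_total_gb = gpus_per_su * H100_HBM3_GB      # 20,480 GB ≈ 20TB of HBM3
h200_total_gb = gpus_per_su * H200_HBM3E_GB     # 36,096 GB ≈ 36TB of HBM3e

print(gpus_per_su, h100_total_gb, h200_total_gb)
# → 256 20480 36096
```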

Compute Node

Large Scale AI Training Workloads

HGX H100 Systems 
Multi-Architecture Flexibility with Future-Proof Open-Standards-Based Design

 

Benefits & Advantages

  • High-performance GPU interconnect at up to 900GB/s, 7x faster than PCIe 5.0
  • Superior thermal design supports maximum power/performance CPUs and GPUs
  • Dedicated networking and storage per GPU with up to double the NVIDIA GPUDirect throughput of the previous generation
  • Modular architecture for storage and I/O configuration flexibility 

Key Features

  • 8 next-generation H100 SXM GPUs with NVLink, NVSwitch interconnect
  • Supports PCIe 5.0, DDR5 and Compute Express Link (CXL) 1.1+
  • Innovative modular architecture designed for flexibility and futureproofing in 8U
  • Optimized thermal capacity and airflow to support CPUs up to 350W and GPUs up to 700W with air cooling and optional liquid cooling
  • PCIe 5.0 x16 1:1 networking slots for GPUs up to 400Gbps each supporting GPUDirect Storage and RDMA and up to 16 U.2 NVMe drive bays

Plug-and-Play for Rapid Generative AI Deployment

The SuperCluster design with 8U air-cooled (shown) or optional 4U liquid-cooled HGX systems comes with 400Gb/s of networking fabrics and non-blocking architecture. These are interconnected into four 8U (or eight 4U) nodes per rack and further into a 32-node cluster that operates as a scalable unit “SU” of compute—providing a foundational building block for generative AI infrastructure.

Whether fitting an enormous foundation model trained on a dataset with trillions of tokens from scratch or building cloud-scale LLM inference infrastructure, the SuperCluster leaf-spine network topology allows it to scale from 32 nodes to thousands of nodes seamlessly. Supermicro’s proven testing processes thoroughly validate the operational effectiveness and efficiency of compute infrastructure before shipping. Customers receive plug-and-play scalable units for rapid deployment.

32-Node Scalable Unit Rack Scale Design Close-up

SYS-821GE-TNHR / AS-8125GS-TNHR

 

Overview: 8U Air-cooled System with NVIDIA HGX H100/H200
CPU: Dual 5th/4th Gen Intel® Xeon® or AMD EPYC™ 9004 Series Processors
Memory: 2TB DDR5 (recommended)
GPU: NVIDIA HGX H100/H200 8-GPU (80GB HBM3 or 141GB HBM3e per GPU)
GPU-GPU Interconnect: NVLink with NVSwitch, up to 900GB/s
Networking: 8x NVIDIA ConnectX®-7 single-port 400Gbps NDR OSFP NICs
2x NVIDIA ConnectX®-7 dual-port 200Gbps NDR200 OSFP112 NICs
Storage: 30.4TB NVMe (4x 7.6TB U.3)
3.8TB NVMe (2x 1.9TB U.3, boot) [optional M.2 available]
Power Supply: 6x 3000W Redundant Titanium Level power supplies