ARS-111GL-NHR-LCC
NVIDIA GH200 Grace Hopper Superchip GPU Server with liquid-cooling supporting NVIDIA BlueField-3 / NVIDIA ConnectX-7
This system currently supports two E1.S drives attached directly to the processor, and the onboard GPU only
- High-density 1U GPU system with integrated NVIDIA® H100 GPU and liquid cooling
- NVIDIA Grace Hopper™ Superchip (Grace CPU and H100 GPU)
- NVLink® Chip-2-Chip (C2C) high-bandwidth, low-latency interconnect between CPU and GPU at 900GB/s
- Up to 576GB of coherent memory per node including 480GB LPDDR5X and 96GB of HBM3 for LLM applications (see the memory-sharing sketch after this list)
- 3x PCIe 5.0 x16 slots supporting NVIDIA BlueField®-3 or ConnectX®-7
- 7 Hot-Swap Heavy Duty Fans with Optimal Fan Speed Control
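The coherent CPU-GPU memory called out above lends itself to a single-allocation programming model. The following is a minimal sketch, not taken from this datasheet, assuming a standard CUDA toolkit on the node: the Grace CPU fills one managed buffer and the onboard H100 updates it in place over NVLink-C2C, with no explicit host-to-device copies. The kernel name and buffer size are illustrative only.

```cuda
// Minimal sketch (assumption: CUDA toolkit installed; not part of the official datasheet).
// On GH200, the CPU and GPU share a coherent address space over NVLink-C2C, so one
// managed allocation can be touched by both sides without cudaMemcpy.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *x, float s, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;                          // GPU updates the shared buffer in place
}

int main() {
    const size_t n = 1 << 20;
    float *buf = nullptr;
    cudaMallocManaged(&buf, n * sizeof(float));    // single allocation, visible to CPU and GPU
    for (size_t i = 0; i < n; ++i) buf[i] = 1.0f;  // CPU-side initialization
    int blocks = (int)((n + 255) / 256);
    scale<<<blocks, 256>>>(buf, 2.0f, n);          // GPU-side update, no explicit copies
    cudaDeviceSynchronize();                       // wait for the GPU before the CPU reads
    printf("buf[0] = %.1f\n", buf[0]);             // CPU reads the GPU result directly
    cudaFree(buf);
    return 0;
}
```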
Key Applications
- High Performance Computing
- AI/Deep Learning Training and Inference
- Large Language Model (LLM) and Generative AI
Product SKUs | ARS-111GL-NHR-LCC (Silver) |
Motherboard | Super G1SMH-G |
Processor | |
CPU | NVIDIA Grace™ CPU (72-core Arm® Neoverse™ V2) on GH200 Grace Hopper™ Superchip (Liquid-cooled) |
Core Count | Up to 72 cores (Arm Neoverse V2) |
Note | Supports up to 1000W TDP CPUs (Liquid Cooled) |
GPU | |
Max GPU Count | 1 onboard GPU(s) |
Supported GPU | NVIDIA: H100 Tensor Core GPU on GH200 Grace Hopper™ Superchip (Liquid-cooled) |
CPU-GPU Interconnect | NVLink®-C2C |
GPU-GPU Interconnect | PCIe |
System Memory | |
Memory | Slot Count: Onboard memory; Max Memory: Up to 480GB ECC LPDDR5X; Additional GPU Memory: Up to 96GB ECC HBM3 |
On-Board Devices | |
Chipset | System on Chip |
Network Connectivity | 1x 1GbE BaseT with NVIDIA ConnectX®-7 or BlueField®-3 DPU |
Input / Output | |
LAN | 1 RJ45 1GbE (Dedicated IPMI port) |
System BIOS | |
BIOS Type | AMI 32MB SPI Flash EEPROM |
PC Health Monitoring | |
CPU | 8+4-phase switching voltage regulator; monitors for CPU cores, chipset voltages, and memory |
FAN | Fans with tachometer monitoring; Pulse Width Modulated (PWM) fan connectors; status monitor for speed control |
Temperature | Monitoring for CPU and chassis environment; thermal control for fan connectors |
Chassis | |
Form Factor | 1U Rackmount |
Model | CSE-GP102TS-R000NDFP |
Dimensions and Weight | |
Height | 1.75" (44mm) |
Width | 17.33" (440mm) |
Depth | 37" (940mm) |
Package | 9.5" (H) x 48" (W) x 28" (D) |
Weight | Net Weight: 48.5 lbs (22 kg); Gross Weight: 65.5 lbs (29.7 kg) |
Available Color | Silver |
Expansion Slots | |
PCI-Express (PCIe) | 3 PCIe 5.0 x16 FHFL slot(s) |
Drive Bays / Storage | |
Hot-swap | 8x E1.S hot-swap NVMe drive slots |
M.2 | 2 M.2 NVMe |
System Cooling | |
Fans | 9 Removable heavy-duty 4CM Fan(s) |
Power Supply | 2x 2000W Redundant Titanium Level power supplies |
Dimensions (W x H x L) | 73.5 x 40 x 185 mm (per power supply module) |
AC Input | 1000W: 100-127Vac / 50-60Hz; 1800W: 200-220Vac / 50-60Hz; 1980W: 220-230Vac / 50-60Hz; 2000W: 220-240Vac / 50-60Hz (UL only); 2000W: 230-240Vac / 50-60Hz; 2000W: 230-240Vdc / 50-60Hz (CQC only) |
+12V | Max: 83A / Min: 0A (100-127Vac); Max: 150A / Min: 0A (200-220Vac); Max: 165A / Min: 0A (220-230Vac); Max: 166A / Min: 0A (230-240Vac) (see the consistency check after this table) |
12V SB | Max: 3.5A / Min: 0A |
Output Type | Backplanes (gold finger) |
Operating Environment | |
Environmental Spec. | Operating Temperature: 10°C to 35°C (50°F to 95°F); Non-operating Temperature: -40°C to 60°C (-40°F to 140°F); Operating Relative Humidity: 8% to 90% (non-condensing); Non-operating Relative Humidity: 5% to 95% (non-condensing) |
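As a quick consistency check on the power figures (arithmetic only; no data beyond the table above), the +12V maximum-current limits correspond to the derated output rating for each AC input range:

```latex
\begin{align*}
12\,\mathrm{V} \times 83\,\mathrm{A}  &\approx 1000\,\mathrm{W} \quad (100\text{--}127\,\mathrm{Vac})\\
12\,\mathrm{V} \times 150\,\mathrm{A} &= 1800\,\mathrm{W} \quad (200\text{--}220\,\mathrm{Vac})\\
12\,\mathrm{V} \times 165\,\mathrm{A} &= 1980\,\mathrm{W} \quad (220\text{--}230\,\mathrm{Vac})\\
12\,\mathrm{V} \times 166\,\mathrm{A} &\approx 2000\,\mathrm{W} \quad (230\text{--}240\,\mathrm{Vac})
\end{align*}
```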
1U Grace Hopper MGX Systems Configurations at a Glance
Construct new solutions for accelerated infrastructure, enabling scientists and engineers to focus on solving the world’s most important problems with larger datasets, more complex models, and new generative AI workloads. Within the same 1U chassis, Supermicro’s dual NVIDIA GH200 Grace Hopper Superchip systems deliver the highest level of performance for any application on the CUDA platform, with substantial speedups for AI workloads with high memory requirements. In addition to hosting up to 2 onboard H100 GPUs in a 1U form factor, its modular bays enable full-size PCIe expansion for present and future accelerated computing components, high-speed scale-out, and clustering.
SKU | ARS-111GL-NHR (1 Node with Air Cooling) | ARS-111GL-NHR-LCC (1 Node with Liquid Cooling) | ARS-111GL-DNHR-LCC (2 Nodes with Liquid Cooling) |
Form Factor | 1U system with single NVIDIA Grace Hopper Superchip (air-cooled) | 1U system with single NVIDIA Grace Hopper Superchip (liquid-cooled) | 1U 2-node system with NVIDIA Grace Hopper Superchip per node (liquid-cooled) |
CPU | 72-core Grace Arm Neoverse V2 CPU + H100 Tensor Core GPU in a single chip | 72-core Grace Arm Neoverse V2 CPU + H100 Tensor Core GPU in a single chip | 2x 72-core Grace Arm Neoverse V2 CPU + H100 Tensor Core GPU in a single chip (1 per node) |
GPU | NVIDIA H100 Tensor Core GPU with 96GB of HBM3 or 144GB of HBM3e (coming soon) | NVIDIA H100 Tensor Core GPU with 96GB of HBM3 or 144GB of HBM3e | NVIDIA H100 Tensor Core GPU with 96GB of HBM3 or 144GB of HBM3e per node |
Memory | Up to 480GB of integrated LPDDR5X with ECC (Up to 480GB + 144GB of fast-access memory) | Up to 480GB of integrated LPDDR5X memory with ECC (Up to 480GB + 144GB of fast-access memory) | Up to 480GB of LPDDR5X per node (Up to 480GB + 144GB of fast-access memory per node) |
Drives | 8x Hot-swap E1.S drives and 2x M.2 NVMe drives | 8x Hot-swap E1.S drives and 2x M.2 NVMe drives | 8x Hot-swap E1.S drives and 2x M.2 NVMe drives |
Networking | 3x PCIe 5.0 x16 slots supporting NVIDIA BlueField-3 or ConnectX-7 | 3x PCIe 5.0 x16 slots supporting NVIDIA BlueField-3 or ConnectX-7 | 2x PCIe 5.0 x16 slots per node supporting NVIDIA BlueField-3 or ConnectX-7 |
Interconnect | NVLink-C2C with 900GB/s for CPU-GPU interconnect | NVLink-C2C with 900GB/s for CPU-GPU interconnect | NVLink-C2C with 900GB/s for CPU-GPU interconnect |
Cooling | Air-cooling | Liquid-cooling | Liquid-cooling |
Power | 2x 2000W Redundant Titanium Level power supplies | 2x 2000W Redundant Titanium Level power supplies | 2x 2700W Redundant Titanium Level power supplies |
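The GPU row in the table above lists two possible HBM configurations (96GB HBM3 or 144GB HBM3e). A short device query, sketched below under the assumption of an installed CUDA toolkit (it is not from the official configuration guide), is one way to confirm which configuration a given node reports:

```cuda
// Minimal sketch (assumption: CUDA toolkit installed; not from the datasheet).
// Prints each visible GPU's name and device memory so the HBM3 (96GB) vs.
// HBM3e (144GB) configuration of a node can be confirmed.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);                 // one onboard GPU expected per node
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("GPU %d: %s, %.0f GB device memory\n",
               d, prop.name, prop.totalGlobalMem / 1e9);
    }
    return 0;
}
```

The Grace side’s 480GB of LPDDR5X is host memory and is not counted in the device total, so the printed figure should track the HBM capacity.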