ARS-111GL-DNHR-LCC

1U NVIDIA GPU server with two Grace Hopper™ Superchips (2-node); each superchip pairs an onboard Hopper GPU with a 72-core Grace CPU

Anewtech Systems GPU Server: Supermicro NVIDIA Grace Hopper Superchip (liquid-cooled) ARS-111GL-DNHR-LCC

Two nodes in a 1U form factor. Each node supports the following:

  • High density 1U 2-node GPU system with Integrated NVIDIA® H100 GPU (1 per Node)
  • NVIDIA Grace Hopper™ Superchip (Grace CPU and H100 GPU)
  • NVLink® Chip-2-Chip (C2C) high-bandwidth, low-latency interconnect between CPU and GPU at 900GB/s
  • Up to 576GB of coherent memory per node including 480GB LPDDR5X and 96GB of HBM3 for LLM applications
  • 2x PCIe 5.0 x16 slots per node supporting NVIDIA BlueField®-3 or ConnectX®-7
  • 7 Hot-Swap Heavy Duty Fans with Optimal Fan Speed Control
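The coherent-memory and bandwidth figures in the bullets above reduce to simple arithmetic (480GB LPDDR5X + 96GB HBM3 per node, two nodes per chassis). A minimal sketch, treating the quoted 900GB/s NVLink-C2C rate as an idealized peak:

```python
# Back-of-envelope check of the per-node memory figures quoted above.
# Capacities in GB; the 900 GB/s NVLink-C2C figure is the quoted peak,
# so the transfer-time estimate is an idealized best case.

LPDDR5X_GB = 480        # Grace CPU memory per node
HBM3_GB = 96            # Hopper GPU memory per node
NODES = 2               # nodes per 1U chassis
NVLINK_C2C_GBPS = 900   # quoted CPU<->GPU bandwidth (GB/s)

coherent_per_node = LPDDR5X_GB + HBM3_GB
print(f"Coherent memory per node: {coherent_per_node} GB")        # 576 GB
print(f"Coherent memory per chassis: {coherent_per_node * NODES} GB")

# Idealized time to stream the full HBM3 capacity over NVLink-C2C:
t = HBM3_GB / NVLINK_C2C_GBPS
print(f"Streaming {HBM3_GB} GB at {NVLINK_C2C_GBPS} GB/s: ~{t:.3f} s")
```

Real transfers will not sustain the peak rate, but the ratio shows why a coherent CPU+GPU memory space matters for models that overflow the 96GB of HBM3.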
Product Specification
Product SKUs: ARS-111GL-DNHR-LCC (Silver)
Motherboard: Super G1SMH-G
Processor (per Node)
CPU: NVIDIA GH200 Grace Hopper™ Superchip
Core Count: Up to 72C/144T
Note: Supports up to 2000W TDP CPUs (liquid-cooled)
GPU (per Node)
Max GPU Count: 1 onboard GPU
Supported GPU: NVIDIA GH200 Grace Hopper: Hopper H100 GPU
CPU-GPU Interconnect: NVLink®-C2C
GPU-GPU Interconnect: PCIe
System Memory (per Node)
Memory Slot Count: Onboard memory
Max Memory: Up to 480GB ECC LPDDR5X
Additional GPU Memory: Up to 96GB ECC HBM3
On-Board Devices (per Node)
Chipset: System on Chip
Network Connectivity: 1x 1GbE BaseT, with NVIDIA ConnectX®-7 or BlueField®-3 DPU
Input / Output (per Node)
LAN: 1x RJ45 1GbE (dedicated IPMI port)
System BIOS
BIOS Type: AMI 32MB SPI Flash EEPROM
PC Health Monitoring
CPU: 8+4-phase switching voltage regulator; monitors CPU core, chipset, and memory voltages
FAN: Fans with tachometer monitoring; Pulse Width Modulated (PWM) fan connectors; status monitor for speed control
Temperature: Monitoring for CPU and chassis environment; thermal control for fan connectors
Chassis
Form Factor: 1U Rackmount
Model: CSE-GP102TS-R000NDFP
Dimensions and Weight
Height: 1.75" (44mm)
Width: 17.33" (440mm)
Depth: 37" (940mm)
Package: 9.5" (H) x 48" (W) x 28" (D)
Weight: Net 48.5 lbs (22 kg); Gross 65.5 lbs (29.7 kg)
Available Color: Silver
Expansion Slots (per Node)
PCI-Express (PCIe): 2x PCIe 5.0 x16 FHFL slots
Drive Bays / Storage (per Node)
Hot-swap: 4x E1.S hot-swap NVMe drive slots
M.2: 2x M.2 NVMe
System Cooling
Fans: 7x removable heavy-duty 4cm fans
Power Supply: 2x 2700W redundant Titanium Level power supplies
Operating Environment
Operating Temperature: 10°C ~ 35°C (50°F ~ 95°F)
Non-operating Temperature: -40°C ~ 60°C (-40°F ~ 140°F)
Operating Relative Humidity: 8% ~ 90% (non-condensing)
Non-operating Relative Humidity: 5% ~ 95% (non-condensing)

Enterprise AI Inferencing & Training System

1U Grace Hopper MGX Systems

CPU+GPU Coherent Memory System for AI and HPC Applications

Cooling + Efficiency + Power Delivery

Thanks to their mechanical design and component selection, Supermicro MGX™ systems optimize cooling, efficiency, and power delivery without compromise. Supermicro’s proven Direct-to-Chip liquid cooling solutions can reduce OPEX by more than 40%.

Up to 2x 2700W redundant Titanium Level power supplies deliver ample power to handle the up-to-2000W power requirements of each of the dual Grace Hopper Superchips, with headroom to spare.
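The headroom claim works out as simple arithmetic. A rough sketch, assuming the worst-case 2000W-per-superchip TDP quoted in the spec table; PSU redundancy sizing conventions vary (under strict N+1 only one supply's capacity is usable), so this is illustrative only:

```python
# Rough power-budget check for the 2-node system described above.
# Assumes the worst-case 2000 W TDP per Grace Hopper Superchip from
# the spec table; treats both PSUs as contributing capacity, which
# is an assumption about the redundancy scheme, not a vendor claim.

PSU_W = 2700              # per power supply (Titanium Level)
PSUS = 2                  # redundant pair
SUPERCHIPS = 2            # one per node
TDP_PER_SUPERCHIP_W = 2000

total_psu_capacity = PSU_W * PSUS                   # combined capacity
superchip_load = SUPERCHIPS * TDP_PER_SUPERCHIP_W   # worst-case load

headroom = total_psu_capacity - superchip_load
print(f"Combined PSU capacity: {total_psu_capacity} W")
print(f"Worst-case superchip load: {superchip_load} W")
print(f"Headroom for drives, NICs, fans: {headroom} W")
```

The remaining budget covers the E1.S drives, DPU/NIC cards, and the fan wall, which is the "headroom to spare" the paragraph refers to.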


1U Grace Hopper MGX Systems Configurations at a Glance

Construct new solutions for accelerated infrastructures, enabling scientists and engineers to focus on solving the world’s most important problems with larger datasets, more complex models, and new generative AI workloads. Within the same 1U chassis, Supermicro’s dual NVIDIA GH200 Grace Hopper Superchip systems deliver the highest level of performance for any application on the CUDA platform, with substantial speedups for AI workloads with high memory requirements. In addition to hosting up to 2 onboard H100 GPUs in a 1U form factor, its modular bays enable full-size PCIe expansion for present and future accelerated computing components, high-speed scale-out, and clustering.

ARS-111GL-NHR (1 node, air-cooled)
Form Factor: 1U system with single NVIDIA Grace Hopper Superchip (air-cooled)
CPU: 72-core Grace Arm Neoverse V2 CPU + H100 Tensor Core GPU in a single chip
GPU: NVIDIA H100 Tensor Core GPU with 96GB of HBM3 or 144GB of HBM3e (coming soon)
Memory: Up to 480GB of integrated LPDDR5X with ECC (up to 480GB + 144GB of fast-access memory)
Drives: 8x hot-swap E1.S drives and 2x M.2 NVMe drives
Networking: 3x PCIe 5.0 x16 slots supporting NVIDIA BlueField-3 or ConnectX-7
Interconnect: NVLink-C2C with 900GB/s CPU-GPU interconnect
Cooling: Air-cooling
Power: 2x 2000W redundant Titanium Level power supplies

ARS-111GL-NHR-LCC (1 node, liquid-cooled)
Form Factor: 1U system with single NVIDIA Grace Hopper Superchip (liquid-cooled)
CPU: 72-core Grace Arm Neoverse V2 CPU + H100 Tensor Core GPU in a single chip
GPU: NVIDIA H100 Tensor Core GPU with 96GB of HBM3 or 144GB of HBM3e
Memory: Up to 480GB of integrated LPDDR5X memory with ECC (up to 480GB + 144GB of fast-access memory)
Drives: 8x hot-swap E1.S drives and 2x M.2 NVMe drives
Networking: 3x PCIe 5.0 x16 slots supporting NVIDIA BlueField-3 or ConnectX-7
Interconnect: NVLink-C2C with 900GB/s CPU-GPU interconnect
Cooling: Liquid-cooling
Power: 2x 2000W redundant Titanium Level power supplies

ARS-111GL-DNHR-LCC (2 nodes, liquid-cooled)
Form Factor: 1U 2-node system with one NVIDIA Grace Hopper Superchip per node (liquid-cooled)
CPU: 2x 72-core Grace Arm Neoverse V2 CPU + H100 Tensor Core GPU in a single chip (1 per node)
GPU: NVIDIA H100 Tensor Core GPU with 96GB of HBM3 or 144GB of HBM3e per node
Memory: Up to 480GB of LPDDR5X per node (up to 480GB + 144GB of fast-access memory per node)
Drives: 8x hot-swap E1.S drives and 2x M.2 NVMe drives
Networking: 2x PCIe 5.0 x16 slots per node supporting NVIDIA BlueField-3 or ConnectX-7
Interconnect: NVLink-C2C with 900GB/s CPU-GPU interconnect
Cooling: Liquid-cooling
Power: 2x 2700W redundant Titanium Level power supplies