ARS-111GL-NHR

NVIDIA GH200 Grace Hopper Superchip GPU Server supporting NVIDIA BlueField-3 or NVIDIA ConnectX-7

1U NVIDIA Grace Hopper™ Superchip system with onboard Hopper GPU and 72-core Grace CPU

This system currently supports two E1.S drives (connected directly to the processor) and the onboard GPU only.

  • High-density 1U GPU system with integrated NVIDIA® H100 GPU
  • NVIDIA Grace Hopper™ Superchip (Grace CPU and H100 GPU)
  • NVLink® Chip-to-Chip (C2C) high-bandwidth, low-latency interconnect between CPU and GPU at 900GB/s
  • Up to 576GB of coherent memory per node, including 480GB LPDDR5X and 96GB of HBM3, for LLM applications
  • 3x PCIe 5.0 x16 slots supporting NVIDIA BlueField®-3 or ConnectX®-7
  • 9 hot-swap heavy-duty fans with optimal fan speed control
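The headline numbers above can be sanity-checked with simple arithmetic. The sketch below uses the datasheet figures (480GB LPDDR5X, 96GB HBM3, 900GB/s NVLink-C2C); the PCIe 5.0 x16 peak of ~64GB/s is an outside assumption, included only to show what the C2C link replaces.

```python
# Back-of-envelope sizing for the GH200 node described above.
LPDDR5X_GB = 480      # from the datasheet
HBM3_GB = 96          # from the datasheet
C2C_GBPS = 900        # NVLink-C2C bandwidth, from the datasheet
PCIE5_X16_GBPS = 64   # assumed PCIe 5.0 x16 peak, NOT from the datasheet

coherent_total_gb = LPDDR5X_GB + HBM3_GB
print(f"Coherent memory per node: {coherent_total_gb} GB")  # 576 GB

# Time to stream the full HBM3 capacity across each link:
print(f"96 GB over NVLink-C2C:   {HBM3_GB / C2C_GBPS * 1000:.0f} ms")        # ~107 ms
print(f"96 GB over PCIe 5.0 x16: {HBM3_GB / PCIE5_X16_GBPS * 1000:.0f} ms")  # 1500 ms
```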

Key Applications

  • High Performance Computing
  • AI/Deep Learning Training and Inference
  • Large Language Model (LLM) and Generative AI
Product Specification
Product SKUs: ARS-111GL-NHR (Silver)
Motherboard: Super G1SMH-G

Processor
CPU: NVIDIA GH200 Grace Hopper™ Superchip
Core Count: Up to 72C/144T
Note: 72-core NVIDIA Grace CPU on GH200 Grace Hopper™ Superchip

GPU
Max GPU Count: 1 onboard GPU
Supported GPU: NVIDIA H100 Tensor Core GPU on GH200 Grace Hopper™ Superchip (air-cooled)
CPU-GPU Interconnect: NVLink®-C2C
GPU-GPU Interconnect: PCIe

System Memory
Slot Count: Onboard memory
Max Memory: Up to 480GB ECC LPDDR5X
Additional GPU Memory: Up to 96GB ECC HBM3

On-Board Devices
Chipset: System on Chip
Network Connectivity: 1x 1GbE BaseT, with NVIDIA ConnectX®-7 or BlueField®-3 DPU

Input / Output
LAN: 1x RJ45 1GbE (dedicated IPMI port)
USB: 2x USB ports (2 rear)

System BIOS
BIOS Type: AMI 32MB SPI Flash EEPROM

PC Health Monitoring
CPU: 8+4-phase switching voltage regulator; monitors for CPU cores, chipset voltages, and memory
Fan: Fans with tachometer monitoring; Pulse Width Modulated (PWM) fan connectors; status monitor for speed control
Temperature: Monitoring for CPU and chassis environment; thermal control for fan connectors

Chassis
Form Factor: 1U rackmount
Model: CSE-GP102TS-R000NDFP

Dimensions and Weight
Height: 1.75" (44mm)
Width: 17.33" (440mm)
Depth: 37" (940mm)
Package: 9.5" (H) x 48" (W) x 28" (D)
Net Weight: 48.5 lbs (22 kg)
Gross Weight: 65.5 lbs (29.7 kg)
Available Color: Silver

Expansion Slots
PCI-Express (PCIe): 3x PCIe 5.0 x16 FHFL slots

Drive Bays / Storage
Hot-swap: 8x E1.S hot-swap NVMe drive slots
M.2: 2x M.2 NVMe

System Cooling
Fans: 9x removable heavy-duty 4cm fans

Power Supply
2x 2000W redundant Titanium Level power supplies
Dimensions (W x H x L): 73.5 x 40 x 185 mm
AC Input:
1000W: 100-127Vac / 50-60Hz
1800W: 200-220Vac / 50-60Hz
1980W: 220-230Vac / 50-60Hz
2000W: 220-240Vac / 50-60Hz (for UL only)
2000W: 230-240Vac / 50-60Hz
2000W: 230-240Vdc (for CQC only)
+12V Output:
Max: 83A / Min: 0A (100-127Vac)
Max: 150A / Min: 0A (200-220Vac)
Max: 165A / Min: 0A (220-230Vac)
Max: 166A / Min: 0A (230-240Vac)
12V SB: Max: 3.5A / Min: 0A
Output Type: Backplanes (gold finger)

Operating Environment
Operating Temperature: 10°C to 35°C (50°F to 95°F)
Non-operating Temperature: -40°C to 60°C (-40°F to 140°F)
Operating Relative Humidity: 8% to 90% (non-condensing)
Non-operating Relative Humidity: 5% to 95% (non-condensing)
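The power-supply figures above are internally consistent: at each AC input range, the rated output wattage is approximately the +12V main-rail current limit times 12V, with a small budget left for the 3.5A standby rail. A quick check, using only values copied from the table:

```python
# Rated PSU output vs. +12V rail limit at each AC input range (values from the table above).
rail_max_amps = {
    "100-127Vac": 83,    # ->  996 W, sold as 1000 W
    "200-220Vac": 150,   # -> 1800 W
    "220-230Vac": 165,   # -> 1980 W
    "230-240Vac": 166,   # -> 1992 W, sold as 2000 W
}
for input_range, amps in rail_max_amps.items():
    print(f"{input_range}: {amps} A x 12 V = {amps * 12} W")
```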
Generative AI SuperCluster

This full turnkey data center solution accelerates time-to-delivery for mission-critical enterprise use cases and eliminates the complexity of building a large cluster, which was previously achievable only through the intensive design tuning and time-consuming optimization of supercomputing.

Cloud-Scale Inference Datasheet

 

With 256 NVIDIA GH200 Grace Hopper Superchips, 1U MGX Systems in 9 Racks

Key Features

  • Unified GPU and CPU memory for cloud-scale high volume, low-latency, and high batch size inference
  • 1U Air-cooled NVIDIA MGX Systems in 9 Racks, 256 NVIDIA GH200 Grace Hopper Superchips in one scalable unit
  • Up to 144GB of HBM3e + 480GB of LPDDR5X, enough capacity to fit a 70B+ parameter model in one node
  • 400Gb/s InfiniBand or Ethernet non-blocking networking connected to spine-leaf network fabric
  • Customizable AI data pipeline storage fabric with industry-leading parallel file system options
  • NVIDIA AI Enterprise software ready
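The "70B+ parameter model in one node" claim in the list above follows from weight-size arithmetic. A minimal check, assuming FP16/BF16 weights at 2 bytes per parameter (an assumption; real serving also needs headroom for the KV cache and activations):

```python
BYTES_PER_PARAM = 2     # FP16/BF16 weights (assumption; quantized models need less)
HBM3E_GB = 144          # per-node HBM3e, from the list above
LPDDR5X_GB = 480        # per-node LPDDR5X, from the list above

weights_gb = 70e9 * BYTES_PER_PARAM / 1e9   # 70B parameters -> 140 GB

print(f"70B FP16 weights: {weights_gb:.0f} GB")                 # 140 GB
print(f"Fits in HBM3e alone: {weights_gb <= HBM3E_GB}")         # True
print(f"Unified CPU+GPU capacity: {HBM3E_GB + LPDDR5X_GB} GB")  # 624 GB
```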

Compute Node: NVIDIA GH200 Grace Hopper Superchip system


1U Grace Hopper MGX Systems Configurations at a Glance

Construct new solutions for accelerated infrastructure, enabling scientists and engineers to focus on solving the world's most important problems with larger datasets, more complex models, and new generative AI workloads. Within the same 1U chassis, Supermicro's dual NVIDIA GH200 Grace Hopper Superchip system delivers the highest level of performance for any application on the CUDA platform, with substantial speedups for AI workloads with high memory requirements. In addition to hosting up to two onboard H100 GPUs in a 1U form factor, its modular bays enable full-size PCIe expansion for present and future accelerated computing components, high-speed scale-out, and clustering.

SKU Comparison

ARS-111GL-NHR (1 node, air-cooled)
  • Form Factor: 1U system with single NVIDIA Grace Hopper Superchip (air-cooled)
  • CPU: 72-core Grace Arm Neoverse V2 CPU + H100 Tensor Core GPU in a single chip
  • GPU: NVIDIA H100 Tensor Core GPU with 96GB of HBM3 or 144GB of HBM3e (coming soon)
  • Memory: Up to 480GB of integrated LPDDR5X with ECC (up to 480GB + 144GB of fast-access memory)
  • Drives: 8x hot-swap E1.S drives and 2x M.2 NVMe drives
  • Networking: 3x PCIe 5.0 x16 slots supporting NVIDIA BlueField-3 or ConnectX-7
  • Interconnect: NVLink-C2C with 900GB/s CPU-GPU interconnect
  • Cooling: Air-cooling
  • Power: 2x 2000W redundant Titanium Level power supplies

ARS-111GL-NHR-LCC (1 node, liquid-cooled)
  • Form Factor: 1U system with single NVIDIA Grace Hopper Superchip (liquid-cooled)
  • CPU: 72-core Grace Arm Neoverse V2 CPU + H100 Tensor Core GPU in a single chip
  • GPU: NVIDIA H100 Tensor Core GPU with 96GB of HBM3 or 144GB of HBM3e
  • Memory: Up to 480GB of integrated LPDDR5X memory with ECC (up to 480GB + 144GB of fast-access memory)
  • Drives: 8x hot-swap E1.S drives and 2x M.2 NVMe drives
  • Networking: 3x PCIe 5.0 x16 slots supporting NVIDIA BlueField-3 or ConnectX-7
  • Interconnect: NVLink-C2C with 900GB/s CPU-GPU interconnect
  • Cooling: Liquid-cooling
  • Power: 2x 2000W redundant Titanium Level power supplies

ARS-111GL-DNHR-LCC (2 nodes, liquid-cooled)
  • Form Factor: 1U 2-node system with one NVIDIA Grace Hopper Superchip per node (liquid-cooled)
  • CPU: 2x 72-core Grace Arm Neoverse V2 CPU + H100 Tensor Core GPU in a single chip (1 per node)
  • GPU: NVIDIA H100 Tensor Core GPU with 96GB of HBM3 or 144GB of HBM3e per node
  • Memory: Up to 480GB of LPDDR5X per node (up to 480GB + 144GB of fast-access memory per node)
  • Drives: 8x hot-swap E1.S drives and 2x M.2 NVMe drives
  • Networking: 2x PCIe 5.0 x16 slots per node supporting NVIDIA BlueField-3 or ConnectX-7
  • Interconnect: NVLink-C2C with 900GB/s CPU-GPU interconnect
  • Cooling: Liquid-cooling
  • Power: 2x 2700W redundant Titanium Level power supplies

Resources:


1U/2U NVIDIA Grace™ CPU Superchip and x86 Intel® Xeon® Systems

Featuring the NVIDIA Grace™ CPU Superchip and x86-based Supermicro MGX systems

Datasheet


1U NVIDIA GH200 Grace Hopper™ Superchip Systems

Grace Hopper Superchip: H100 GPU + Grace CPU on one superchip

Datasheet