ARS-121L-DNR

1U 2-Node NVIDIA Grace CPU Superchip GPU Server supporting NVIDIA BlueField-3 or ConnectX-7


Two nodes in a 1U form factor. Each node supports the following:

  • High density 1U 2-node system with NVIDIA Grace™ CPU Superchip per node
  • NVIDIA Grace™ CPU Superchip (144 cores per node)
  • NVLink® Chip-to-Chip (C2C) high-bandwidth, low-latency interconnect between the two CPUs at 900GB/s
  • Up to 480GB LPDDR5X onboard memory
  • 2x PCIe 5.0 x16 slots per node supporting NVIDIA BlueField®-3 or ConnectX®-7
  • Up to 4x Hot-swap E1.S drives and 2x M.2 NVMe drives per node
  • 7 Hot-Swap Heavy Duty Fans with Optimal Fan Speed Control
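Since most figures on this datasheet are quoted per node, it can help to total them up for the full 2-node system when sizing a deployment. The sketch below is illustrative arithmetic only, using the per-node numbers from the feature list above.

```python
# Per-node figures from the ARS-121L-DNR spec sheet (illustrative arithmetic only).
NODES = 2                 # 1U 2-node system
CORES_PER_NODE = 144      # NVIDIA Grace CPU Superchip (2x 72-core CPUs)
MEM_GB_PER_NODE = 480     # up to 480GB LPDDR5X onboard memory
E1S_PER_NODE = 4          # hot-swap E1.S drive slots
M2_PER_NODE = 2           # M.2 NVMe drives
PCIE_X16_PER_NODE = 2     # PCIe 5.0 x16 slots (BlueField-3 / ConnectX-7)

totals = {
    "cores": NODES * CORES_PER_NODE,
    "memory_gb": NODES * MEM_GB_PER_NODE,
    "e1s_bays": NODES * E1S_PER_NODE,
    "m2_slots": NODES * M2_PER_NODE,
    "pcie_x16_slots": NODES * PCIE_X16_PER_NODE,
}

for name, value in totals.items():
    print(f"{name}: {value}")
```

The core total (288) matches the system-level figure quoted in the configuration comparison further down.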

Key Applications

  • High Performance Computing
  • Hyperscale Cloud Applications
  • Data Analytics
Product Specification
Product SKUs: ARS-121L-DNR (Silver)
Motherboard: Super G1SMH
Processor (per Node)
CPU: Single NVIDIA Grace™ CPU Superchip (dual 72-core CPUs on one module)
Note: Supports up to 500W TDP CPUs (air cooled)
GPU (per Node)
Max GPU Count: Up to 1 double-width or 1 single-width GPU
GPU-GPU Interconnect: PCIe
System Memory (per Node)
Memory Slot Count: Onboard memory (no DIMM slots)
Max Memory: Up to 480GB ECC
On-Board Devices (per Node)
Chipset: System on Chip (SoC)
Network Connectivity: 1x 1GbE BaseT, with support for NVIDIA ConnectX®-7 or BlueField®-3 DPU
Input / Output (per Node)
LAN: 1x RJ45 1GbE (dedicated IPMI port)
System BIOS
BIOS Type: AMI 32MB SPI Flash EEPROM
PC Health Monitoring
CPU: 8+4-phase switching voltage regulator; monitors for CPU cores, chipset voltages, and memory
Fan: Fans with tachometer monitoring; Pulse Width Modulated (PWM) fan connectors; status monitor for speed control
Temperature: Monitoring for CPU and chassis environment; thermal control for fan connectors
Chassis
Form Factor: 1U Rackmount
Model: CSE-GP102TS-R000NDFP
Dimensions and Weight
Height: 1.75" (44mm)
Width: 17.33" (440mm)
Depth: 37" (940mm)
Package: 9.5" (H) x 48" (W) x 28" (D)
Weight: Net 48.5 lbs (22 kg); Gross 65.5 lbs (29.7 kg)
Available Color: Silver
Expansion Slots (per Node)
PCI-Express (PCIe): 2x PCIe 5.0 x16 FHFL slots
Drive Bays / Storage (per Node)
Hot-swap: 4x E1.S hot-swap NVMe drive slots
M.2: 2x M.2 NVMe
System Cooling
Fans: 7x removable heavy-duty 4cm fans
Power Supply: 2x 2700W Redundant Titanium Level power supplies
Operating Environment
Environmental Spec.:
Operating Temperature: 10°C to 35°C (50°F to 95°F)
Non-operating Temperature: -40°C to 60°C (-40°F to 140°F)
Operating Relative Humidity: 8% to 90% (non-condensing)
Non-operating Relative Humidity: 5% to 95% (non-condensing)
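The environmental spec pairs Celsius and Fahrenheit figures; a short sketch (illustrative only) can confirm the paired values are mutually consistent.

```python
def c_to_f(celsius: float) -> float:
    """Convert a Celsius temperature to Fahrenheit."""
    return celsius * 9 / 5 + 32

# Paired (°C, °F) values from the environmental spec above.
pairs = [(10, 50), (35, 95), (-40, -40), (60, 140)]
for c, f in pairs:
    assert c_to_f(c) == f, f"{c}°C should be {f}°F, got {c_to_f(c)}"
print("all temperature pairs are consistent")
```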
 

Grace and x86 MGX System Configurations at a Glance

Supermicro NVIDIA MGX™ 1U/2U systems with Grace™ CPU Superchip and x86 CPUs are fully optimized to support up to 4 GPUs via PCIe without sacrificing I/O, networking, or thermals. The building-block architecture lets you tailor these systems for a variety of accelerated workloads and fields, including AI training and inference, HPC, data analytics, visualization/Omniverse™, and hyperscale cloud applications.

SKU: ARS-121L-DNR | ARS-221GL-NR | SYS-221GE-NR
Form Factor: 1U 2-node system with NVIDIA Grace CPU Superchip per node | 2U GPU system with single NVIDIA Grace CPU Superchip | 2U GPU system with dual x86 CPUs
CPU: 144-core Grace Arm Neoverse V2 CPU in a single chip per node (288 cores total per system) | 144-core Grace Arm Neoverse V2 CPU in a single chip | 4th Gen Intel Xeon Scalable processors (up to 56 cores per socket)
GPU: Please contact our sales team for possible configurations | Up to 4 double-width GPUs, including NVIDIA H100 PCIe, H100 NVL, and L40S | Up to 4 double-width GPUs, including NVIDIA H100 PCIe, H100 NVL, and L40S
Memory: Up to 480GB of integrated LPDDR5X with ECC and up to 1TB/s of bandwidth per node | Up to 480GB of integrated LPDDR5X with ECC and up to 1TB/s of bandwidth | Up to 2TB, 32x DIMM slots, ECC DDR5-4800
Drives: Up to 4x hot-swap E1.S drives and 2x M.2 NVMe drives per node | Up to 8x hot-swap E1.S drives and 2x M.2 NVMe drives | Up to 8x hot-swap E1.S drives and 2x M.2 NVMe drives
Networking: 2x PCIe 5.0 x16 slots per node supporting NVIDIA BlueField-3 or ConnectX-7 (e.g., 1 GPU and 1 BlueField-3) | 3x PCIe 5.0 x16 slots supporting NVIDIA BlueField-3 or ConnectX-7 (in addition to 4x PCIe 5.0 x16 slots for GPUs) | 3x PCIe 5.0 x16 slots supporting NVIDIA BlueField-3 or ConnectX-7 (in addition to 4x PCIe 5.0 x16 slots for GPUs)
Interconnect: NVLink™-C2C with 900GB/s CPU-to-CPU interconnect (within node) | NVLink™ Bridge GPU-GPU interconnect supported (e.g., H100 NVL) | NVLink™ Bridge GPU-GPU interconnect supported (e.g., H100 NVL)
Cooling: Air cooling | Air cooling | Air cooling
Power: 2x 2700W Redundant Titanium Level power supplies | 3x 2000W Redundant Titanium Level power supplies | 3x 2000W Redundant Titanium Level power supplies

Resources:


1U/2U NVIDIA Grace™ CPU Superchip and x86 Intel® Xeon® Systems

Featuring NVIDIA Grace CPU Superchip and x86-Based Supermicro MGX Systems

Datasheet


1U NVIDIA GH200 Grace Hopper™ Superchip Systems

Grace Hopper Superchip: H100 GPU + Grace CPU on one Superchip

Datasheet