ARS-121L-DNR
1U 2-Node NVIDIA Grace CPU Superchip GPU Server supporting NVIDIA BlueField-3 or ConnectX-7
 
Two nodes in a 1U form factor. Each node supports the following:
- High-density 1U 2-node system with NVIDIA Grace™ CPU Superchip per node
- NVIDIA Grace™ CPU Superchip (144 cores per node; see the topology sketch after this list)
- NVLink® Chip-2-Chip (C2C) high-bandwidth, low-latency interconnect between the two CPUs of the Superchip at 900GB/s
- Up to 480GB LPDDR5X onboard memory
- 2x PCIe 5.0 x16 slots per node supporting NVIDIA BlueField®-3 or ConnectX®-7
- Up to 4x Hot-swap E1.S drives and 2x M.2 NVMe drives per node
- 7 Hot-Swap Heavy Duty Fans with Optimal Fan Speed Control
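As a quick check of the per-node core count and onboard memory listed above, here is a minimal topology sketch, not a vendor-provided tool, assuming the node runs Linux with libnuma installed (the file name `topo.c` and the expected counts in the comments are illustrative):

```c
/*
 * Hypothetical sketch: enumerate NUMA domains on one ARS-121L-DNR node to
 * confirm that the two 72-core Grace CPUs of the Superchip show up as two
 * NUMA nodes and to report the onboard LPDDR5X capacity attached to each.
 * Assumes Linux with libnuma; build with: gcc topo.c -lnuma
 */
#include <stdio.h>
#include <numa.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "libnuma: NUMA not available on this system\n");
        return 1;
    }

    int nodes = numa_num_configured_nodes();   /* expected: 2 (one per 72-core Grace CPU) */
    int cpus  = numa_num_configured_cpus();    /* expected: 144 logical CPUs per node */
    printf("NUMA nodes: %d, CPUs: %d\n", nodes, cpus);

    for (int n = 0; n < nodes; n++) {
        long long free_bytes = 0;
        long long size = numa_node_size64(n, &free_bytes);  /* LPDDR5X local to this CPU */
        printf("node %d: %.1f GiB total, %.1f GiB free\n",
               n, size / (1024.0 * 1024 * 1024), free_bytes / (1024.0 * 1024 * 1024));
    }
    return 0;
}
```

On a Grace CPU Superchip the two 72-core CPUs typically appear as separate NUMA domains, so the per-domain sizes also show how the up-to-480GB of LPDDR5X is split across the C2C link.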
Key Applications
- High Performance Computing
- Hyperscale Cloud Applications
- Data Analytics
Product Specification
| Product SKUs | ARS-121L-DNR (Silver) | 
| Motherboard | Super G1SMH | 
| Processor (per Node) | |
| CPU | Single NVIDIA Grace™ CPU Superchip (dual 72-core CPUs, 144 cores total) | 
| Note | Supports up to 500W TDP CPUs (Air Cooled) | 
| GPU (per Node) | |
| Max GPU Count | Up to 1 double-width or 1 single-width GPU | 
| GPU-GPU Interconnect | PCIe | 
| System Memory (per Node) | |
| Memory | Slot Count: Onboard memory; Max Memory: Up to 480GB ECC LPDDR5X | 
| On-Board Devices (per Node) | |
| Chipset | System on Chip | 
| Network Connectivity | 1x 1GbE BaseT; supports NVIDIA ConnectX®-7 or BlueField®-3 DPU | 
| Input / Output (per Node) | |
| LAN | 1 RJ45 1GbE (Dedicated IPMI port) | 
| System BIOS | |
| BIOS Type | AMI 32MB SPI Flash EEPROM | 
| PC Health Monitoring | |
| CPU | 8+4 phase-switching voltage regulator; monitors for CPU cores, chipset voltages, and memory | 
| FAN | Fans with tachometer monitoring; Pulse Width Modulated (PWM) fan connectors; status monitor for speed control | 
| Temperature | Monitoring for CPU and chassis environment; thermal control for fan connectors (see the sensor-reading sketch after this table) | 
| Chassis | |
| Form Factor | 1U Rackmount | 
| Model | CSE-GP102TS-R000NDFP | 
| Dimensions and Weight | |
| Height | 1.75" (44mm) | 
| Width | 17.33" (440mm) | 
| Depth | 37" (940mm) | 
| Package | 9.5" (H) x 48" (W) x 28" (D) | 
| Weight | Net Weight: 48.5 lbs (22 kg); Gross Weight: 65.5 lbs (29.7 kg) | 
| Available Color | Silver | 
| Expansion Slots (per Node) | |
| PCI-Express (PCIe) | 2x PCIe 5.0 x16 FHFL slots | 
| Drive Bays / Storage (per Node) | |
| Hot-swap | 4x E1.S hot-swap NVMe drive slots | 
| M.2 | 2 M.2 NVMe | 
| System Cooling | |
| Fans | 7x Removable heavy-duty 4cm fans | 
| Power Supply | 2x 2700W Redundant Titanium Level power supplies | 
| Operating Environment | |
| Environmental Spec. | Operating Temperature: 10°C ~ 35°C (50°F ~ 95°F); Non-operating Temperature: -40°C ~ 60°C (-40°F ~ 140°F); Operating Relative Humidity: 8% ~ 90% (non-condensing); Non-operating Relative Humidity: 5% ~ 95% (non-condensing) | 
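The health-monitoring rows above cover fan tachometers and CPU/chassis temperature sensing. Below is a hypothetical in-band sketch that assumes these sensors are exposed to the node's Linux OS through the standard hwmon sysfs interface; actual sensor names and paths depend on the drivers in use, and out-of-band monitoring through the dedicated IPMI LAN port is a separate path not shown here.

```c
/*
 * Hypothetical sketch: walk /sys/class/hwmon and print temperature
 * (millidegrees C) and fan tachometer (RPM) readings. The directory layout
 * is the generic Linux hwmon convention, not a Supermicro-specific API.
 */
#include <stdio.h>
#include <dirent.h>
#include <string.h>

static void dump_inputs(const char *hwmon_dir)
{
    DIR *d = opendir(hwmon_dir);
    if (!d)
        return;

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        /* Only temp*_input and fan*_input files carry the raw readings. */
        if (!strstr(e->d_name, "_input"))
            continue;
        if (strncmp(e->d_name, "temp", 4) && strncmp(e->d_name, "fan", 3))
            continue;

        char path[1024];
        snprintf(path, sizeof(path), "%s/%s", hwmon_dir, e->d_name);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;
        long value;
        if (fscanf(f, "%ld", &value) == 1)
            printf("%-48s %ld\n", path, value);
        fclose(f);
    }
    closedir(d);
}

int main(void)
{
    DIR *d = opendir("/sys/class/hwmon");
    if (!d) {
        perror("/sys/class/hwmon");
        return 1;
    }
    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (strncmp(e->d_name, "hwmon", 5))
            continue;
        char dir[512];
        snprintf(dir, sizeof(dir), "/sys/class/hwmon/%s", e->d_name);
        dump_inputs(dir);
    }
    closedir(d);
    return 0;
}
```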
Grace and x86 MGX System Configurations at a Glance
Supermicro NVIDIA MGX™ 1U/2U Systems with Grace™ CPU Superchip and x86 CPUs are fully optimized to support up to 4 GPUs via PCIe without sacrificing I/O, networking, or thermals. The ultimate building block architecture lets you tailor these systems for a variety of accelerated workloads and fields, including AI training and inference, HPC, data analytics, visualization/Omniverse™, and hyperscale cloud applications.
| SKU | ARS-121L-DNR | ARS-221GL-NR | SYS-221GE-NR | 
| --- | --- | --- | --- |
| Form Factor | 1U 2-node system with NVIDIA Grace CPU Superchip per node | 2U GPU system with single NVIDIA Grace CPU Superchip | 2U GPU system with dual x86 CPUs | 
| CPU | 144-core Grace Arm Neoverse V2 CPU in a single chip per node (total of 288 cores in one system) | 144-core Grace Arm Neoverse V2 CPU in a single chip | 4th Gen Intel Xeon Scalable Processors (Up to 56-core per socket) | 
| GPU | Please contact our sales for possible configurations | Up to 4 double-width GPUs including NVIDIA H100 PCIe, H100 NVL, L40S | Up to 4 double-width GPUs including NVIDIA H100 PCIe, H100 NVL, L40S | 
| Memory | Up to 480GB of integrated LPDDR5X memory with ECC and up to 1TB/s of bandwidth per node (see the bandwidth sketch after this table) | Up to 480GB of integrated LPDDR5X memory with ECC and up to 1TB/s of bandwidth per node | Up to 2TB, 32x DIMM slots, ECC DDR5-4800 | 
| Drives | Up to 4x hot-swap E1.S drives and 2x M.2 NVMe drives per node | Up to 8x hot-swap E1.S drives and 2x M.2 NVMe drives | Up to 8x hot-swap E1.S drives and 2x M.2 NVMe drives | 
| Networking | 2x PCIe 5.0 x16 slots per node supporting NVIDIA BlueField-3 or ConnectX-7 (e.g., 1 GPU and 1 BlueField-3) | 3x PCIe 5.0 x16 slots supporting NVIDIA BlueField-3 or ConnectX-7 (in addition to 4x PCIe 5.0 x16 slots for GPUs) | 3x PCIe 5.0 x16 slots supporting NVIDIA BlueField-3 or ConnectX-7 (in addition to 4x PCIe 5.0 x16 slots for GPUs) | 
| Interconnect | NVLink™-C2C with 900GB/s for CPU-CPU interconnect (within node) | NVLink Bridge GPU-GPU interconnect supported (e.g., H100 NVL) | NVLink™ Bridge GPU-GPU interconnect supported (e.g., H100 NVL) | 
| Cooling | Air-cooling | Air-cooling | Air-cooling | 
| Power | 2x 2700W Redundant Titanium Level power supplies | 3x 2000W Redundant Titanium Level power supplies | 3x 2000W Redundant Titanium Level power supplies | 
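The table above quotes up to 1TB/s of LPDDR5X bandwidth per Grace node. The sketch below is a rough, STREAM-style triad loop for sanity-checking sustained memory bandwidth on one node; the array size, build line, and binding hints are illustrative assumptions rather than a vendor benchmark, and measured results will land below the theoretical peak.

```c
/*
 * Hypothetical sketch: STREAM-style triad to estimate sustained memory
 * bandwidth on a Grace node. Build with: gcc -O3 -fopenmp stream_triad.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N (1UL << 28)   /* 268M doubles per array, ~2 GiB each: far larger than cache */

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    if (!a || !b || !c) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    /* First-touch initialization so pages land near the threads that use them. */
    #pragma omp parallel for
    for (size_t i = 0; i < N; i++) {
        a[i] = 1.0;
        b[i] = 2.0;
        c[i] = 0.0;
    }

    const double scalar = 3.0;
    double t0 = omp_get_wtime();

    /* Triad: c = a + scalar * b, touching three arrays (2 reads + 1 write). */
    #pragma omp parallel for
    for (size_t i = 0; i < N; i++)
        c[i] = a[i] + scalar * b[i];

    double t1 = omp_get_wtime();
    double gib = 3.0 * N * sizeof(double) / (1024.0 * 1024 * 1024);
    printf("triad: %.3f s, %.1f GiB/s\n", t1 - t0, gib / (t1 - t0));

    free(a); free(b); free(c);
    return 0;
}
```

Binding the threads to one CPU of the Superchip (for example with `numactl --cpunodebind=0 --membind=0` or `OMP_PLACES=cores`) keeps traffic local to that CPU's LPDDR5X instead of crossing the NVLink-C2C link, which makes per-CPU and cross-C2C bandwidth easy to compare.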

