
NVIDIA Mellanox MQM8700-HS2R Quantum HDR InfiniBand Switch: a managed, non-blocking 40-port HDR InfiniBand switch built on the Quantum switching ASIC, delivering 16Tb/s of aggregate switching capacity with 130ns port-to-port latency. It is an InfiniBand fabric switch, not an Ethernet switch and not an unmanaged edge device. The onboard dual-core x86 processor runs MLNX-OS for subnet management, congestion control, and telemetry. P/N: 920-9B110-00RH-0M0.

Engineering Context

The MQM8700-HS2R is the backbone of modern AI SuperPod architectures. The Quantum ASIC delivers deterministic 130ns cut-through latency across all 40 QSFP56 ports at 200Gb/s HDR line rate. NVIDIA SHARP (Scalable Hierarchical Aggregation and Reduction Protocol) offloads MPI collective operations from the CPU/GPU to the switch network, significantly accelerating AI training epochs. The "Managed" (HS2R) designation means an onboard x86 CPU runs MLNX-OS, providing integrated subnet management (OpenSM), adaptive routing, congestion control, and advanced telemetry. The C2P airflow design draws cool air from the connector (port) side and exhausts through the power supply (rear) side, matching standard cold-aisle rack configurations. Each QSFP56 port supports 2x 100G HDR100 breakout, enabling up to 80 endpoints at 100Gb/s.
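The headline figures above are easy to sanity-check: aggregate capacity counts both directions of every port, and the breakout limit is simply two HDR100 lanes per QSFP56 cage. A minimal arithmetic sketch (port count and line rate taken from the spec; nothing else assumed):

```python
# Back-of-envelope check of the switch's headline figures.
PORTS = 40             # QSFP56 ports
HDR_RATE_GBPS = 200    # Gb/s per port, per direction
BREAKOUT_PER_PORT = 2  # 2x HDR100 per QSFP56 port

# Aggregate switching capacity counts both directions of every port.
aggregate_tbps = PORTS * HDR_RATE_GBPS * 2 / 1000
print(aggregate_tbps)  # 16.0 (Tb/s)

# Maximum HDR100 endpoints with every port broken out.
endpoints = PORTS * BREAKOUT_PER_PORT
print(endpoints)       # 80
```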

Deployment & Use Cases

  • AI Training Cluster Core/Spine: Primary fabric switch for NVIDIA DGX/HGX H100/A100 GPU clusters requiring lossless, ultra-low latency InfiniBand transport.
  • HPC Supercomputing: Top-of-rack aggregation for MPI-based scientific computing workloads with SHARP acceleration.
  • NVMe-oF Storage Fabrics: Lossless InfiniBand transport for NVMe over Fabrics with minimal latency jitter.
  • Dragonfly+ Topologies: Supports advanced non-blocking fabric topologies for hyperscale deployments.

Technical Specifications

  • OEM: NVIDIA Mellanox
  • Model: MQM8700-HS2R (Managed)
  • Part Number: 920-9B110-00RH-0M0
  • Switch ASIC: NVIDIA Quantum
  • Protocol: HDR InfiniBand (200Gb/s per port)
  • Ports: 40x QSFP56
  • Breakout: 2x HDR100 (100Gb/s) per port, up to 80 endpoints
  • Aggregate Bandwidth: 16Tb/s
  • Latency: 130ns port-to-port
  • Management: x86 dual-core, MLNX-OS (OpenSM, SHARP, adaptive routing)
  • Airflow: C2P (Connector to Power)
  • Power: Dual redundant hot-swap AC PSUs
  • Condition: Refurbished — Tested by T.E.S IT-SOLUTIONS
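To put the 130ns port-to-port figure in perspective, it is on the same order as the time needed just to serialize a single packet onto the wire at HDR rate. A quick illustrative calculation (the 4KB MTU is a common InfiniBand MTU choice, used here purely as an example, not a spec of this switch):

```python
# Compare cut-through switch latency to wire serialization time at HDR.
SWITCH_LATENCY_NS = 130     # port-to-port, from the spec
HDR_RATE_BPS = 200e9        # 200 Gb/s per direction
MTU_BYTES = 4096            # illustrative 4 KB InfiniBand MTU

serialization_ns = MTU_BYTES * 8 / HDR_RATE_BPS * 1e9
print(round(serialization_ns, 2))  # 163.84 (ns)
# The switch forwards a frame in less time than one MTU takes to serialize.
```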

Compatibility & Hard Constraints

  • InfiniBand Only: This is a native InfiniBand switch. It does not route Ethernet traffic. For Ethernet switching, Spectrum-series switches are required.
  • HDR Speed Maximum: Supports HDR (200Gb/s) and HDR100 (100Gb/s). Does not support NDR (400Gb/s). For NDR, the Quantum-2 (QM9700) platform is required.
  • Adapter Compatibility: Compatible with ConnectX-6 VPI/HDR and ConnectX-7 (in HDR mode). Not compatible with EDR-only or FDR adapters at HDR speeds.
  • Airflow Direction: C2P. Do not install in P2C airflow racks without thermal containment planning.
  • Managed Switch: Requires MLNX-OS configuration. This is not a plug-and-play unmanaged switch.
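The speed constraints above follow the usual rule that a link negotiates down to the slower side. A hypothetical helper modeling that rule for the adapter families named in this listing (the function name and lookup table are illustrative, not an NVIDIA API; ConnectX-5 is included only as an example of an EDR-only adapter):

```python
# Hypothetical model of link-speed negotiation against this switch.
# Table values are per-adapter maximum InfiniBand speeds (Gb/s).
ADAPTER_MAX_SPEED_GBPS = {
    "ConnectX-6": 200,  # HDR-capable
    "ConnectX-7": 400,  # NDR-capable, runs in HDR mode on this switch
    "ConnectX-5": 100,  # EDR only
}

SWITCH_MAX_SPEED_GBPS = 200  # Quantum / MQM8700 tops out at HDR


def negotiated_speed(adapter: str) -> int:
    """A link negotiates down to the slower of the two sides."""
    return min(ADAPTER_MAX_SPEED_GBPS[adapter], SWITCH_MAX_SPEED_GBPS)


print(negotiated_speed("ConnectX-7"))  # 200 -- NDR adapter falls back to HDR
print(negotiated_speed("ConnectX-5"))  # 100 -- EDR-only adapter never reaches HDR
```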

NVIDIA Mellanox® MQM8700-HS2R Quantum HDR 200Gb/s Switch 40-Port QSFP56 C2P

SKU: MQM8700-HS2R_Refurbished
Price: €8,100.00