NVIDIA Mellanox® MQM8700-HS2R Quantum™ HDR InfiniBand Switch: This is a managed, non-blocking 40-port HDR (High Data Rate) switch built on the industry-leading Quantum™ ASIC. It delivers 16Tb/s of aggregate (bidirectional) switching capacity with ultra-low 130ns port-to-port latency. Unlike unmanaged edge switches, this "Smart Switch" carries an onboard dual-core x86 processor running the MLNX-OS® management stack, enabling advanced subnet management, congestion control, and telemetry directly on the device. Each of its 40 QSFP56 ports supports the full 200Gb/s HDR line rate.
🔧 Technical Specifications
- OEM: NVIDIA / Mellanox
- Model: MQM8700-HS2R (Managed)
- P/N: 920-9B110-00RH-0M0
- Throughput: 200Gb/s per port (HDR)
- Ports: 40x QSFP56
- Airflow: C2P (Connector-to-Power / Red Latch / Port-Side Intake)
- Power Supply: Dual Redundant Hot-Swap AC
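The 16Tb/s figure quoted above follows directly from the port count and per-port rate; a quick sanity check, assuming the usual vendor convention of counting both directions of each full-duplex link:

```python
# Sanity-check the quoted aggregate switching capacity (illustrative arithmetic):
# 40 ports x 200 Gb/s per direction, counted bidirectionally.
ports = 40
port_speed_gbps = 200                                # HDR rate per QSFP56 port
aggregate_tbps = ports * port_speed_gbps * 2 / 1000  # x2 for full-duplex counting
print(aggregate_tbps)  # 16.0
```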
🌐 Engineering Context: The MQM8700-HS2R is the backbone of modern AI SuperPods. It features NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ technology, which offloads collective operations (such as allreduce) from the CPU/GPU to the switch network, significantly reducing communication time during AI training. The C2P airflow design is engineered for "Cold Aisle" deployment: cool air enters at the port side and exhausts through the rear power supplies, matching the thermal flow of standard server racks.
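To see why in-network reduction helps, compare communication step counts under a simple cost model: a classic ring allreduce needs 2(N-1) steps, while a SHARP-style reduction tree needs one pass up and one pass down a tree whose depth grows only logarithmically with node count. The radix-40 assumption below mirrors this switch's port count; the model is a rough sketch of standard collective-communication theory, not NVIDIA's published performance data.

```python
import math

def ring_allreduce_steps(n: int) -> int:
    """Classic ring allreduce: 2*(n-1) communication steps."""
    return 2 * (n - 1)

def tree_reduction_steps(n: int, radix: int = 40) -> int:
    """SHARP-style in-network reduction (sketch): one reduce pass up and
    one broadcast pass down a tree of depth ceil(log_radix(n))."""
    if n <= 1:
        return 0
    return 2 * math.ceil(math.log(n, radix))

# Step counts grow linearly for the ring but stay nearly flat for the tree.
for n in (8, 80, 800):
    print(n, ring_allreduce_steps(n), tree_reduction_steps(n))
```

The latency gap widens with scale, which is why in-network reduction matters most for large training clusters.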
🧠 Typical Use Cases
- Core/Spine switch for NVIDIA H100/A100 AI Training Clusters.
- High-Performance Computing (HPC) Top-of-Rack aggregation.
- NVMe-oF Storage Fabrics requiring minimal latency jitter.
- Hyperscale Cloud environments utilizing Dragonfly+ topologies.
✅ Compatible With: NVIDIA ConnectX-6 VPI/HDR and ConnectX-7 (in HDR mode) Adapter Cards. Supports HDR (200G), HDR100 (100G via splitters), and EDR (100G) speeds. Compatible with NVIDIA Unified Fabric Manager (UFM) for centralized orchestration.
❌ NOT Compatible With: Standard Ethernet transceivers; this is a native InfiniBand switch, and Ethernet connectivity requires an external gateway appliance. P2C (Power-to-Connector) cooling layouts, which expect hot air to be exhausted at the port side; deploying this C2P unit in such an environment requires thermal containment planning.
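The compatibility notes above boil down to a small lookup. The helper below (the `link_ok` name and table structure are hypothetical illustrations of this listing's rules, not an NVIDIA tool) shows the idea:

```python
# Hypothetical sketch encoding this listing's compatibility notes.
SUPPORTED_SPEEDS_GBPS = {"HDR": 200, "HDR100": 100, "EDR": 100}
SUPPORTED_ADAPTERS = {"ConnectX-6", "ConnectX-7"}  # ConnectX-7 runs in HDR mode

def link_ok(adapter: str, speed: str) -> bool:
    """True if the adapter family and link speed match the listing's notes."""
    return adapter in SUPPORTED_ADAPTERS and speed in SUPPORTED_SPEEDS_GBPS

print(link_ok("ConnectX-6", "HDR"))  # True
print(link_ok("ConnectX-7", "NDR"))  # False: NDR (400G) needs a Quantum-2 switch
```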
⚠️ Airflow Direction (C2P): This unit is "Connector-to-Power" (often marked with Red latches). It pulls cold air from the connector side (Front) and exhausts hot air out the PSU side (Rear). Ensure this matches your data center's Hot/Cold aisle containment strategy to prevent hot-air recirculation.
⚠️ Splitter Support: The 40 QSFP56 ports support 2x100G (HDR100) breakout cables, allowing the switch to support up to 80 endpoints at 100Gb/s, effectively doubling port density for compute nodes.
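The endpoint arithmetic behind the breakout claim is straightforward (illustrative only):

```python
# Each QSFP56 port splits into 2x HDR100 (100 Gb/s) links via a breakout cable.
physical_ports = 40
links_per_breakout = 2
max_endpoints = physical_ports * links_per_breakout
print(max_endpoints)  # 80 endpoints at 100 Gb/s each
```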
🤝 Why Choose T.E.S IT-SOLUTIONS?
We specialize in enterprise-grade NVIDIA Mellanox networking hardware.

