
MQM8700-HS2F vs MQM9700: HDR or NDR Switch for Your AI Cluster?

Quick Answer

If you are building or expanding an AI training cluster in 2026 and choosing between the Mellanox MQM8700-HS2F (HDR 200Gb/s) and the MQM9700 (NDR 400Gb/s), the answer comes down to three factors: your GPU generation, your fabric scale, and your budget per port. The MQM8700-HS2F is the right choice for A100-class clusters and budget-sensitive HPC fabrics. The MQM9700 is the right choice for H100, H200, and Blackwell deployments where the GPU itself can saturate 400Gb/s.

Specifications At A Glance

MQM8700-HS2F (Quantum HDR):

  • 40 x QSFP56 ports, 200 Gb/s per port

  • 16 Tb/s aggregate non-blocking switching capacity

  • Sub-130 ns port-to-port latency

  • NVIDIA SHARP v2 in-network compute

  • P2C airflow (blue latch, reverse) - HS2R variant for C2P

  • Internally managed, runs MLNX-OS with an onboard Subnet Manager for fabrics up to 2,048 nodes

  • 1U form factor, dual hot-swap AC PSUs

MQM9700 (Quantum-2 NDR):

  • 32 x OSFP cages physically, 64 x 400Gb/s NDR ports logically (2 ports per cage)

  • 25.6 Tb/s aggregate non-blocking switching capacity

  • Sub-1.2 microsecond cut-through latency at NDR

  • NVIDIA SHARP v3 in-network compute with FP8 reduction support

  • P2C and C2P airflow variants available

  • Internally managed, runs MLNX-OS with an onboard Subnet Manager for fabrics up to 2,048 nodes

  • 1U form factor, dual hot-swap AC PSUs

The Real Decision Criteria

Most buyers ask the wrong question. The question is not which switch is faster. The question is: can the rest of my fabric saturate 400Gb/s end-to-end?

1. GPU generation drives port speed

NVIDIA A100 GPUs in DGX A100 or HGX A100 systems use ConnectX-6 or ConnectX-7 adapters configured for HDR 200Gb/s. Pairing A100 nodes with an NDR 400Gb/s switch wastes half the switch capacity - the PCIe Gen 4.0 x16 host interface behind each adapter tops out at roughly 250Gb/s of usable bandwidth, so a 400Gb/s port can never be filled. For A100 fabrics, the MQM8700-HS2F is the right match.

NVIDIA H100, H200, and Blackwell GPUs use ConnectX-7 NDR adapters and PCIe Gen 5.0 x16 host interfaces, capable of saturating 400Gb/s per port. For H100-class and newer fabrics, MQM9700 is the right match. Using HDR switches with H100 nodes leaves performance on the table for collective operations and gradient sync at scale.
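
The arithmetic behind this is easy to sanity-check. The Python sketch below compares nominal PCIe host-interface bandwidth (per-lane rate minus 128b/130b encoding) against the NDR port speed; real-world throughput is somewhat lower once protocol and DMA overheads are counted, so treat the figures as approximations rather than vendor specifications.

    # Rough check: can the host's PCIe slot actually feed a 400Gb/s NDR port?
    # Per-lane rates are nominal, minus 128b/130b encoding overhead only.
    PCIE_GBPS_PER_LANE = {
        "Gen4": 16.0 * 128 / 130,   # ~15.75 Gb/s usable per lane
        "Gen5": 32.0 * 128 / 130,   # ~31.5 Gb/s usable per lane
    }

    def host_limit_gbps(gen: str, lanes: int = 16) -> float:
        """Approximate usable bandwidth of an x16 adapter slot."""
        return PCIE_GBPS_PER_LANE[gen] * lanes

    NDR_PORT_GBPS = 400
    for gen in ("Gen4", "Gen5"):
        limit = host_limit_gbps(gen)
        verdict = "can" if limit >= NDR_PORT_GBPS else "cannot"
        print(f"PCIe {gen} x16 ~ {limit:.0f} Gb/s -> {verdict} saturate a {NDR_PORT_GBPS} Gb/s NDR port")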

2. Cluster scale changes the math

For clusters under 64 GPUs, a single MQM8700-HS2F covers the workload comfortably - its 40 ports handle one HDR rail per GPU up to 40 GPUs, or more where two GPUs share a 200Gb/s rail - with room left for storage and management uplinks. Cost per port: roughly half that of NDR equivalents on the refurbished market.

For clusters between 64 and 256 GPUs, the MQM9700's 64 logical ports per 1U give you a significant rack-density advantage. One MQM9700 replaces roughly two MQM8700 switches at the leaf layer, cutting both cabling complexity and rack space.

For clusters above 1,024 GPUs, NDR is essentially mandatory - a two-tier fabric built from 40-port HDR switches tops out around 800 hosts, so HDR forces you into a third switching tier with all the extra switches, cables, and hops that implies. NDR's higher per-port speed, 64-port radix, and SHARP v3 reduction trees scale further with fewer hops.
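
To put rough numbers on that, the sketch below sizes a non-blocking two-tier leaf/spine fabric for a few cluster sizes. It assumes one NIC port per GPU, half of every leaf's ports facing hosts, and no storage or management ports - a deliberately simplified model, not a fabric design - but it shows where the 40-port HDR radix runs out of two-tier headroom.

    import math

    def two_tier_fabric(num_gpus: int, radix: int) -> str:
        """Size a non-blocking two-tier fat tree (illustrative assumptions only:
        one NIC port per GPU, half of each leaf's ports face hosts)."""
        max_hosts = radix * radix // 2          # two-tier non-blocking ceiling
        if num_gpus > max_hosts:
            return f"exceeds two-tier limit of {max_hosts} hosts -> third tier required"
        hosts_per_leaf = radix // 2
        leaves = math.ceil(num_gpus / hosts_per_leaf)
        spines = math.ceil(leaves * hosts_per_leaf / radix)
        return f"{leaves} leaf + {spines} spine = {leaves + spines} switches"

    for gpus in (256, 1024, 2048):
        print(f"{gpus} GPUs")
        print(f"  HDR, 40-port radix: {two_tier_fabric(gpus, 40)}")   # MQM8700-HS2F
        print(f"  NDR, 64-port radix: {two_tier_fabric(gpus, 64)}")   # MQM9700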

3. Total cost of ownership

Refurbished MQM8700-HS2F units typically range EUR 7,500 to EUR 12,000 depending on condition and source. Refurbished MQM9700 units typically range EUR 22,000 to EUR 35,000. Cables and transceivers compound the difference - QSFP56 HDR optics are mature and inexpensive on the secondary market, while OSFP NDR optics remain premium-priced through 2026.

For a 64-GPU A100 fabric, the switching layer cost difference between an HDR and an NDR build can be EUR 30,000 to EUR 60,000 - money better spent on more GPUs or storage.
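
As a worked example, the snippet below prices the switching layer for a 64-GPU fabric at one NIC port per GPU, using mid-points of the refurbished ranges quoted above. The per-optic prices are placeholders assumed purely for illustration - substitute figures from your own cable plan.

    import math

    GPUS = 64
    # unit_eur: mid-point of the refurbished price ranges quoted above
    # optic_eur: assumed per-port optic/cable cost, purely illustrative
    hdr = {"ports": 40, "unit_eur": 10_000, "optic_eur": 150}   # MQM8700-HS2F + QSFP56
    ndr = {"ports": 64, "unit_eur": 28_000, "optic_eur": 600}   # MQM9700 + OSFP

    def switching_cost(cfg: dict, gpus: int) -> int:
        switches = math.ceil(gpus / cfg["ports"])
        return switches * cfg["unit_eur"] + gpus * cfg["optic_eur"]

    delta = switching_cost(ndr, GPUS) - switching_cost(hdr, GPUS)
    print(f"HDR build: EUR {switching_cost(hdr, GPUS):,}")
    print(f"NDR build: EUR {switching_cost(ndr, GPUS):,}")
    print(f"Difference: EUR {delta:,}")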

Compatibility Quick Reference

MQM8700-HS2F connects with:

  • ConnectX-6 VPI HDR adapters (MCX653105A, MCX653106A series)

  • ConnectX-7 NDR adapters configured for HDR backward-compatible mode

  • NVIDIA MCP1650 passive copper QSFP56 cables

  • NVIDIA MFS1S00 active optical QSFP56 cables

  • HDR100 splitter cables (one 200Gb/s port to two 100Gb/s links) for connecting HDR100 or legacy 100Gb/s nodes

MQM9700 connects with:

  • ConnectX-7 NDR adapters (MCX75310AAC-NEAT, MCX75510AAS-NEAT)

  • BlueField-3 DPUs in NDR mode

  • NVIDIA MCP7Y series passive copper OSFP cables

  • NVIDIA MMA4Z00 series OSFP transceivers

  • NDR-to-2xHDR splitter cables for mixed-generation deployments

Airflow Direction - The Hidden Gotcha

Both switches ship in two airflow variants. P2C (Power-to-Connector, blue latch) draws cold air from the PSU side and exhausts hot air from the port side - this is reverse airflow and IS NOT compatible with standard front-to-back rack aisle layouts. C2P (Connector-to-Power) is standard front-to-back airflow.

The HS2F suffix on MQM8700-HS2F denotes P2C airflow. For standard C2P deployments, the correct SKU is MQM8700-HS2R. Mixing airflow directions in the same rack causes thermal recirculation and, over time, equipment failure. Always verify your rack airflow design before ordering.

What About SHARP?

NVIDIA SHARP (Scalable Hierarchical Aggregation and Reduction Protocol) offloads MPI and NCCL collective operations directly to the switch silicon. For distributed AI training - particularly all-reduce operations during gradient synchronization - SHARP can deliver 2x to 4x speedup over software-based reductions.

MQM8700-HS2F supports SHARP v2. MQM9700 supports SHARP v3, which adds support for larger reduction trees, FP8 datatype reductions (matching H100 Transformer Engine output), and streaming aggregation for bandwidth-bound workloads. For Llama-class large language model training on H100 or H200 fabrics, SHARP v3 is a meaningful differentiator.
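
If you want to confirm that SHARP is actually being used rather than falling back to host-based rings, NCCL exposes the offload through its CollNet plugin. The sketch below is a minimal example, assuming the nccl-rdma-sharp-plugins shipped with NVIDIA HPC-X are installed and a SHARP Aggregation Manager is running on the fabric; double-check the variable names against your HPC-X and NCCL release notes.

    import os

    # These must be set before NCCL initializes, i.e. before
    # torch.distributed.init_process_group(backend="nccl") is called.
    os.environ["NCCL_COLLNET_ENABLE"] = "1"   # route eligible all-reduce ops via CollNet/SHARP
    os.environ["NCCL_DEBUG"] = "INFO"         # "CollNet" lines in the log confirm the offload

    # Streaming aggregation for large, bandwidth-bound reductions (a SHARP v3 /
    # Quantum-2 feature; variable name per NVIDIA SHARP docs - verify for your release).
    os.environ["SHARP_COLL_ENABLE_SAT"] = "1"

    import torch.distributed as dist
    dist.init_process_group(backend="nccl")   # typically launched via torchrun or mpirun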

The Decision Matrix

Choose MQM8700-HS2F (HDR) if:

  • You are building or expanding an A100 / DGX A100 cluster

  • Your cluster is under 256 GPUs

  • Budget per port matters and refurbished hardware is acceptable

  • Your existing fabric is already HDR and you are scaling horizontally

Choose MQM9700 (NDR) if:

  • You are building an H100, H200, or Blackwell cluster

  • You are designing a SuperPOD-class fabric (above 1,024 GPUs)

  • You need SHARP v3 for FP8 collective acceleration

  • Rack density and cable count matter more than upfront cost

About T.E.S IT-SOLUTIONS

T.E.S IT-SOLUTIONS supplies refurbished and new NVIDIA Mellanox networking hardware including the MQM8700-HS2F, MQM9700, ConnectX-6 and ConnectX-7 adapters, and the full QSFP56 and OSFP transceiver and cable range. Each switch is bench-tested for port link training, PSU redundancy, fan RPM stability, and MLNX-OS boot integrity prior to dispatch. Global shipping with 1 to 3 business day handling. Visit tes-itsolutions.com for current inventory across the NVIDIA Quantum HDR and Quantum-2 NDR families, or contact us for fabric design consultation.
