🌟 NVIDIA Mellanox® NDR200 Fanout / Hydra Breakout DAC
OSFP 800Gb/s → 4×200Gb/s QSFP112 | 3m | InfiniBand NDR200
Model: MCP7Y40-N003 | NVIDIA P/N: 980-9I75R-00N003 | Condition: NEW
The NVIDIA Mellanox® MCP7Y40-N003 is a high-density passive copper fanout (hydra) breakout cable engineered for InfiniBand NDR200 fabrics. It converts one 800Gb/s OSFP NDR switch port into four independent 200Gb/s QSFP112 downlinks, each operating in true NDR200 mode.
This cable is designed for maximum port utilization in AI, HPC, and hyperscale data centers, allowing a single high-bandwidth switch port to serve four compute or accelerator nodes directly, with zero added latency and zero power draw.
At 3 meters, this model supports cross-rack and wide in-rack fan-out scenarios where shorter 1–2m DACs are not sufficient—while operating at the upper physical limit of passive copper for NDR200.
🔧 Technical Specifications
Manufacturer: NVIDIA Mellanox
Model: MCP7Y40-N003
NVIDIA P/N: 980-9I75R-00N003
Cable Type: Passive Copper Fanout / Hydra Breakout DAC (Twinax)
Topology: OSFP (800G NDR) → 4× QSFP112 (200G NDR200)
Protocol: InfiniBand NDR / NDR200
Signaling: PAM4
Length: 3 meters
Jacket: LSZH (Low Smoke Zero Halogen)
Pulltab Color: Black
Condition: Refurbished
⚠️ Critical Disambiguation: “200G” ≠ “200G”
There are two different 200Gb/s standards in the Mellanox ecosystem:
| Standard | Connector | Lane Structure | Typical Adapter |
|---|---|---|---|
| HDR 200G | QSFP56 | 4 × 50G | ConnectX-6 |
| NDR200 | QSFP112 | 2 × 100G (PAM4) | ConnectX-7 |
🔴 This cable supports ONLY NDR200 (QSFP112).
🔴 NOT compatible with HDR / QSFP56 / ConnectX-6 systems.
Please verify your connector type and adapter generation before ordering; this distinction is critical for system integrators and prevents costly mispurchases.
🧠 Lane Physics (Why This Breakout Works)
OSFP 800G carries 8 × 100G PAM4 lanes
Each QSFP112 leg receives 2 × 100G = 200G
Result: 1 × 800G → 4 × 200G (perfect lane symmetry)
This is why NDR200 is electrically incompatible with HDR 200G, despite sharing the same headline “200G” speed.
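The breakout arithmetic can be sketched in a few lines of Python (an illustrative calculation only, not NVIDIA tooling; lane counts and rates are taken from the lane structure described above):

```python
# Illustrative sketch of the NDR lane math behind this 1:4 breakout.
OSFP_LANES = 8          # OSFP NDR carries 8 electrical lanes
LANE_RATE_GBPS = 100    # each lane runs 100 Gb/s PAM4
LEGS = 4                # four QSFP112 breakout legs

uplink_bw = OSFP_LANES * LANE_RATE_GBPS       # 800 Gb/s at the OSFP end
lanes_per_leg = OSFP_LANES // LEGS            # 2 lanes per QSFP112 leg
leg_bw = lanes_per_leg * LANE_RATE_GBPS       # 200 Gb/s per leg (NDR200)

assert uplink_bw == LEGS * leg_bw             # perfect lane symmetry: 800 = 4 x 200
print(f"{uplink_bw}G OSFP -> {LEGS} x {leg_bw}G QSFP112 ({lanes_per_leg} lanes per leg)")
```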
📐 Radix & Scaling Logic (Architect Insight)
1 switch port → 4 endpoints
A 32-port 800G switch becomes 128 × 200G nodes
4× port density increase without adding switches
This makes MCP7Y40-N003 a core scaling primitive in modern AI fabrics.
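As a rough illustration, the radix math from the list above can be expressed like this (the 32-port leaf is the example already given; it is not a validated fabric design):

```python
# Illustrative radix calculation for an example 32-port 800G leaf switch.
switch_ports_800g = 32        # example leaf radix at 800G OSFP
breakout_factor = 4           # 1 x 800G -> 4 x 200G via MCP7Y40-N003

ndr200_endpoints = switch_ports_800g * breakout_factor
print(f"{switch_ports_800g} x 800G ports -> {ndr200_endpoints} x 200G NDR200 endpoints")
# 32 ports become 128 endpoints: 4x port density without adding switches
```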
🔁 Directionality (Explicit)
Upstream: OSFP 800G NDR (Quantum-2 switch)
Downstream: 4× QSFP112 200G NDR200 (e.g., ConnectX-7)
This is a downlink fanout cable, not an uplink or aggregation cable.
📦 Routing & Deployment Advisory (3m Passive Reality)
Because this is a 3-meter passive copper DAC:
Cable is thicker and stiffer than 1–2m versions
Avoid tight bends and cable management arms (CMAs)
Use planned, straight routing paths
Provide strain relief at the OSFP head and Y-junction
Note: 3m represents the upper safe limit for passive NDR200 copper.
If routing cannot be tightly controlled, consider an active copper cable (ACC) or active optical cable (AOC) instead.
🧩 Typical Use Cases
Quantum-2 NDR switch → 4× ConnectX-7 nodes
High-density AI & GPU clusters at 200G per node
Leaf-to-compute fanout in InfiniBand fabrics
Pod-scale and rack-scale NDR200 deployments
🔁 Compatibility
Compatible with
NVIDIA Mellanox Quantum-2 NDR switches (OSFP)
QSFP112 NDR200 endpoints
ConnectX-7-based compute, GPU, and storage nodes
Not compatible with
HDR (QSFP56) devices
Ethernet QSFP-DD / 400GbE
Any non-InfiniBand systems
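For buyers scripting inventory or pre-deployment checks, a minimal sketch like the following (hypothetical field names, not an NVIDIA API) captures the compatibility rules listed above:

```python
# Minimal compatibility-check sketch with assumed inventory fields.
COMPATIBLE_CONNECTOR = "QSFP112"      # NDR200 legs of MCP7Y40-N003
COMPATIBLE_PROTOCOL = "InfiniBand"

def leg_compatible(endpoint: dict) -> bool:
    """Return True if an endpoint (e.g., a ConnectX-7 port) can terminate a 200G NDR200 leg."""
    return (endpoint.get("connector") == COMPATIBLE_CONNECTOR
            and endpoint.get("protocol") == COMPATIBLE_PROTOCOL)

nodes = [
    {"name": "cx7-gpu-node",  "connector": "QSFP112", "protocol": "InfiniBand"},  # compatible
    {"name": "cx6-hdr-node",  "connector": "QSFP56",  "protocol": "InfiniBand"},  # HDR: rejected
    {"name": "eth-400g-node", "connector": "QSFP-DD", "protocol": "Ethernet"},    # rejected
]

for node in nodes:
    status = "compatible" if leg_compatible(node) else "NOT compatible"
    print(f"{node['name']}: {status}")
```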
✅ Quality Assurance – T.E.S IT-SOLUTIONS
Electrically validated at NDR / NDR200 PAM4
Lane mapping and fanout integrity verified
Guaranteed compatibility with NVIDIA Mellanox platforms
Supplied with real product images
🚚 Payment & Shipping
Payments: PayPal, credit cards, bank transfer
Shipping: Worldwide (8–13 business days), secure packaging
Returns: Buyer covers return shipping
🤝 Why Choose T.E.S IT-SOLUTIONS?
We specialize in AI-scale InfiniBand networking, with hands-on expertise in lane physics, radix scaling, and passive-copper limits—not just part numbers.