
🌟 NVIDIA Mellanox® MCP7Y50-N001 | 800Gb/s to 4×200Gb/s Passive Breakout Cable (DAC) | OSFP → 4×OSFP | Flat-Top | 1m | InfiniBand NDR / NDR200

Product Overview

The NVIDIA Mellanox® MCP7Y50-N001 (P/N 980-9I75E-00N001) is a high-density 1-to-4 passive splitter (breakout cable) that converts a single 800Gb/s OSFP NDR port into four independent 200Gb/s OSFP NDR200 links.

This rare OSFP-to-OSFP topology is designed for liquid-cooled servers, specialized compute blades, and thermally sensitive accelerator modules that require OSFP cages even at the reduced 200Gb/s link rate. The Flat-Top OSFP design suits environments where RHS/Finned-Top heat dissipation is not required, such as cold-plate cooling systems.

At 1 meter, it is ideal for short-distance, ultra-low-latency breakout connectivity inside high-density AI and HPC racks.

Key Features

🚀 True NDR200 Breakout (800G → 4×200G)

The cable divides the single 800G OSFP link into:
• 4 × 200Gb/s OSFP outputs (NDR200)
• 2 × 100G PAM4 lanes per 200G leg (the correct NDR200 lane assignment; see the lane-math sketch below)
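
For illustration only, here is a minimal Python sketch of that lane arithmetic. The constant names are hypothetical; the lane counts and rates come from the specification on this page.

```python
# Illustrative lane arithmetic for this breakout; constant names are
# hypothetical, rates and lane counts are from the spec on this page.

LANE_RATE_GBPS = 100   # one NDR electrical lane: 100G PAM4
UPSTREAM_LANES = 8     # 800G OSFP port = 8 x 100G PAM4 lanes
LANES_PER_LEG = 2      # each NDR200 leg = 2 x 100G PAM4 lanes
LEGS = 4               # 1 -> 4 breakout

upstream_gbps = UPSTREAM_LANES * LANE_RATE_GBPS   # 800
leg_gbps = LANES_PER_LEG * LANE_RATE_GBPS         # 200

# All eight upstream lanes are consumed by the four legs: non-blocking.
assert upstream_gbps == LEGS * leg_gbps

print(f"{upstream_gbps} Gb/s upstream -> {LEGS} legs x {leg_gbps} Gb/s")
```

The assertion makes the key point explicit: the four legs together consume exactly the eight upstream lanes, so there is no oversubscription anywhere in the breakout.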

🔥 OSFP on ALL Ends — Rare Topology

Most 200G servers use QSFP112 cages. This cable instead provides OSFP on every end, for cases where:

  • The server/accelerator uses OSFP cages for better thermal headroom

  • The environment is liquid-cooled or requires Flat-Top OSFP modules

  • Blade platforms use OSFP midplanes for uniformity

⚡ Passive Copper (0W, Zero Latency)

With no active components, this passive DAC offers:
• No added processing latency (no retimers or DSPs in the signal path)
• Zero power draw
• Higher long-term reliability
• Lower thermal load than ACC or AOC alternatives

🛡️ LSZH Jacket

The Low Smoke Zero Halogen (LSZH) jacket meets modern data center fire-safety requirements.

Technical Specifications

• Model: MCP7Y50-N001
• NVIDIA P/N: 980-9I75E-00N001
• Type: Passive Breakout (DAC), 1→4
• Upstream: OSFP, 800Gb/s NDR
• Downstream: 4 × OSFP, 200Gb/s NDR200
• Lane Assignment: 2 × 100G PAM4 lanes per 200G leg
• Length: 1 meter
• Jacket: LSZH
• Design: Flat-Top OSFP (not RHS/Finned-Top)
• Condition: Refurbished & Fully Tested

Ideal Applications

This breakout cable is suited for:

  • 🔹 Liquid-cooled GPU clusters

  • 🔹 OSFP-based compute blades

  • 🔹 Thermally constrained HPC systems

  • 🔹 AI/ML training infrastructure

  • 🔹 High-density NDR/NDR200 fabrics

  • 🔹 Rack-scale fan-out (1 switch → 4 nodes)

Use case example:
1 NDR switch port → 4 OSFP compute accelerators for maximum density (see the capacity sketch below).
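
As a rough capacity sketch, the fan-out math for a full switch looks like the following. The cage count here is an assumed figure for a Quantum-2 class switch; verify it against your own hardware before planning.

```python
# Hypothetical rack-scale planning figures; the cage count is an assumed
# value for a Quantum-2 class switch -- verify against your own hardware.

OSFP_CAGES_PER_SWITCH = 32   # assumption: 32 x 800G OSFP cages
LEGS_PER_CABLE = 4           # this cable: one 800G cage -> four 200G legs

endpoints = OSFP_CAGES_PER_SWITCH * LEGS_PER_CABLE
print(f"{OSFP_CAGES_PER_SWITCH} cages x {LEGS_PER_CABLE} legs = "
      f"{endpoints} NDR200 endpoints per switch")   # 128
```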

Enhanced Network Efficiency

  • Designed for InfiniBand NDR/NDR200

  • Extremely low latency for tightly coupled workloads

  • Reliable performance across all four breakout channels

  • Provides thermal advantages over QSFP112 in liquid-cooled environments

Compatibility

✅ Compatible With

• NVIDIA Quantum-2 NDR switches (OSFP)
• OSFP-based servers, blades, and accelerators (200Gb/s NDR200 mode)
• Environments requiring Flat-Top OSFP modules

❌ Not Compatible With

• Ethernet networks – this is an InfiniBand NDR/NDR200 cable
• HDR (200Gb/s) systems – NDR200 (2 × 100G PAM4) is electrically different from HDR (4 × 50G PAM4)
• QSFP112-based servers or NICs
• Air-cooled OSFP cages requiring RHS/Finned-Top connectors

Quality Assurance

✓ Fully signal-tested on NVIDIA NDR/NDR200 hardware
✓ Professionally refurbished and validated
✓ Cleaned, inspected, and packaged for immediate deployment
✓ 100% compatibility guarantee
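
If you want to spot-check a leg after installation, here is a minimal sketch assuming the host has `ibstat` from the infiniband-diags package; the parsing is illustrative, not an official API.

```python
# Illustrative post-install spot check: parse `ibstat` output (from the
# infiniband-diags package) and list the link rate of each HCA port.
import re
import subprocess

EXPECTED_RATE_GBPS = 200  # an NDR200 leg should link at 200 Gb/s

out = subprocess.run(["ibstat"], capture_output=True, text=True,
                     check=True).stdout
rates = [int(m.group(1)) for m in re.finditer(r"Rate:\s*(\d+)", out)]
print("Detected port rates (Gb/s):", rates)
if EXPECTED_RATE_GBPS not in rates:
    print(f"Warning: no port reports {EXPECTED_RATE_GBPS} Gb/s")
```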

Payment & Shipping

• Payment: PayPal, credit cards, bank transfer
• Worldwide Express: FedEx/TNT (8–13 days) with tracking
• Returns: Accepted within policy (buyer pays return shipping)

📦 Why Buy From T.E.S IT-Solutions?

T.E.S IT-Solutions provides enterprise-grade Mellanox/NVIDIA hardware backed by rigorous testing, expert support, and fast international delivery. Our inventory supports AI, HPC, cloud, and enterprise deployments requiring maximum reliability and performance.

NVIDIA Mellanox® MCP7Y50-N001 Splitter Cable, NDR 800Gb/s to 4x200Gb/s, 1m

SKU: MCP7Y50-N001_Refurbished
Price: €430.00