🌟 NVIDIA Mellanox® MCP7Y50-N003 | 800Gb/s to 4×200Gb/s Passive Breakout Cable (DAC) | 3m | OSFP → 4×OSFP | Flat-Top | InfiniBand NDR / NDR200
Product Overview
The NVIDIA Mellanox® MCP7Y50-N003 (P/N 980-9I46T-00N003) is a high-density 1-to-4 passive breakout cable engineered for InfiniBand NDR and NDR200 deployments. It converts a single 800Gb/s OSFP NDR port into four independent 200Gb/s OSFP NDR200 links, providing exceptional scalability for AI fabrics, HPC clusters, and enterprise compute environments.
With a 3-meter reach, an LSZH-rated jacket, and Flat-Top OSFP connectors, this splitter enables flexible rack layouts where servers, accelerators, and switch ports sit farther apart than standard 1m deployments allow.
Because this is a long-reach passive copper harness operating at 800Gb/s, it is significantly thicker and less flexible than active solutions. Proper cable-routing planning is required.
Key Features
🚀 True NDR200 Quad Breakout (800G → 4×200G)
Each of the four outputs delivers 200Gb/s NDR200, using 2 × 100G PAM4 lanes per leg (the correct NDR200 lane assignment); the breakout arithmetic is sketched below.
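The lane fan-out described above can be summarized in a short Python sketch. The lane numbering here is purely illustrative (the physical lane assignment is defined by the OSFP/NDR specification, not by this snippet):

```python
# Illustrative sketch of the NDR -> 4x NDR200 lane fan-out (not vendor tooling).
# The upstream OSFP carries 8 x 100Gb/s PAM4 electrical lanes; each breakout
# leg takes 2 lanes, yielding 4 independent 200Gb/s NDR200 links.

LANE_RATE_GBPS = 100   # PAM4 lane rate for NDR signaling
UPSTREAM_LANES = 8     # lanes on the 800G OSFP end
LANES_PER_LEG = 2      # lanes per NDR200 breakout leg

legs = {
    f"leg_{i + 1}": tuple(range(i * LANES_PER_LEG, (i + 1) * LANES_PER_LEG))
    for i in range(UPSTREAM_LANES // LANES_PER_LEG)
}

for name, lanes in legs.items():
    print(f"{name}: lanes {lanes} -> {len(lanes) * LANE_RATE_GBPS}Gb/s NDR200")

# Sanity check: the four legs fully consume the 800Gb/s upstream port.
assert sum(len(l) for l in legs.values()) * LANE_RATE_GBPS == 800
```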
🔌 OSFP on ALL Ends – Specialized Deployment
Most 200Gb/s nodes use QSFP112, but this cable is designed for environments where:
Compute nodes use OSFP cages
OSFP is preferred for thermal headroom
Systems use liquid cooling or flat-top OSFP modules
Blade and accelerator platforms require uniform OSFP porting
⚡ Passive Copper Design (Zero Power, Zero Latency)
No internal electronics means:
No added latency
No power consumption
Higher long-term reliability
Lower thermal load compared to ACC/AOC alternatives
📏 Extended 3-Meter Reach
Ideal for:
Rack-to-rack or pod-to-pod layouts
Distant GPU trays
Liquid-cooled compute clusters
Infrastructure where 1m and 2m cabling is not sufficient
🔥 LSZH Safety Construction
The flame-retardant, low-toxicity jacket meets modern data center standards.
🧊 Flat-Top OSFP Design
Designed for liquid-cooled systems and OSFP cages with internal riding heat sinks (RHS), where finned-top connectors are unnecessary.
Not recommended for standard air-cooled OSFP cages that rely on finned-top heat sinks.
Technical Specifications
| Feature | Details |
|---|---|
| Model | MCP7Y50-N003 |
| NVIDIA P/N | 980-9I46T-00N003 |
| Type | Passive Copper Splitter (DAC), 1→4 |
| Upstream Connector | OSFP, 800Gb/s NDR |
| Downstream Connectors | 4 × OSFP, 200Gb/s NDR200 |
| Lane Profile | 2×100G PAM4 lanes per 200G output |
| Length | 3 meters |
| Jacket | LSZH |
| Connector Style | Flat-Top OSFP |
| Condition | Refurbished |
Ideal Applications
This cable is designed for advanced InfiniBand NDR/NDR200 deployments such as:
🔹 Liquid-cooled AI training clusters
🔹 HPC compute blades with OSFP ports
🔹 OSFP-based GPU infrastructure
🔹 High-density NDR fabrics
🔹 Rack-to-rack multi-node fan-outs
🔹 Switch-to-node fan-outs (one switch port → four compute nodes)
This splitter is essential for fabrics requiring maximum port efficiency and multi-node scaling.
Enhanced Network Efficiency
Fully optimized for InfiniBand NDR / NDR200 signaling
Maintains signal integrity across the extended 3m passive reach (negotiated link rates can be spot-checked with the sketch below)
Zero added latency for synchronized GPU workloads
Ideal for mission-critical AI/ML training, simulation, and HPC
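On a Linux host with the standard InfiniBand diagnostics installed, the negotiated rate of each breakout leg can be spot-checked by parsing `ibstat` output. This is a minimal sketch, assuming `ibstat` is on the PATH; device naming and port ordering vary by system:

```python
# Hedged sketch: confirm each local IB port negotiated 200Gb/s by parsing
# the "Rate:" lines printed by `ibstat` (part of the standard InfiniBand
# diagnostics tools). Adjust expectations to your topology.
import re
import subprocess

def port_rates() -> list[int]:
    """Return the negotiated rate (in Gb/s) for every local IB port."""
    out = subprocess.run(["ibstat"], capture_output=True, text=True, check=True)
    return [int(m) for m in re.findall(r"^\s*Rate:\s*(\d+)", out.stdout, re.M)]

if __name__ == "__main__":
    for i, rate in enumerate(port_rates(), start=1):
        status = "OK" if rate == 200 else "check cabling/firmware"
        print(f"port {i}: {rate} Gb/s ({status})")
```

If a leg reports a lower rate, re-seat both OSFP connectors and confirm that both endpoints are configured for NDR200 before suspecting the cable.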
Compatibility Notes
✅ Compatible With
• NVIDIA Quantum-2 NDR OSFP switches
• OSFP-based NICs, compute nodes, accelerator trays
• Liquid-cooled or thermally stabilized environments
❌ Not Compatible With
• Ethernet networks (QSFP-DD / OSFP-800GbE)
• InfiniBand HDR (200Gb/s) – HDR's 4×50G PAM4 signaling is electrically different from NDR200's 2×100G PAM4
• QSFP112-based servers or NICs
• Air-cooled OSFP cages that depend on finned-top heat sinks
Cable Routing Advisory (Important)
At 3 meters, a passive 800Gb/s splitter is:
Thick and mechanically stiff
Unsuitable for tight bends or cable management arms (CMAs)
Best routed along structured cable trays
Dependent on careful strain relief at the OSFP heads and the 1→4 breakout junction
A rough bend-radius estimate is sketched below.
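For planning purposes, a common rule of thumb for thick DACs is a minimum bend radius of roughly 8–10× the cable's outer diameter. The values below are assumptions for illustration only; consult the NVIDIA datasheet for this cable's actual OD and bend-radius figures:

```python
# Illustrative bend-radius check for routing planning. The 10x multiplier
# and the outer-diameter value are rule-of-thumb assumptions, not figures
# from this cable's datasheet.
CABLE_OD_MM = 9.0     # assumed outer diameter of the 800G trunk end
BEND_MULTIPLIER = 10  # conservative rule-of-thumb multiplier

min_bend_radius_mm = CABLE_OD_MM * BEND_MULTIPLIER
print(f"Plan for a minimum bend radius of ~{min_bend_radius_mm:.0f} mm "
      f"({min_bend_radius_mm / 25.4:.1f} in) on the trunk side.")
```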
Quality Assurance
✓ Professionally refurbished & inspected (see Condition above)
✓ Tested for full NDR/NDR200 lane integrity
✓ Verified OSFP-to-OSFP compatibility
✓ Professionally packaged for safe international shipping
Payment & Shipping
Payments: PayPal, credit cards, bank transfer
Shipping: FedEx/TNT Express (8–13 days), tracking included
Returns: Accepted within policy; buyer covers return shipping
📦 Why Choose T.E.S IT-Solutions?
We provide enterprise-grade, fully tested Mellanox/NVIDIA networking hardware for AI, HPC, and cloud infrastructures. Our expertise ensures seamless integration, maximum reliability, and predictable performance in the most demanding environments.

