🌟 NVIDIA Mellanox® MCP7Y50-N02A
OSFP 800Gb/s → 4×200Gb/s OSFP | 2.5m | InfiniBand NDR / NDR200 | Passive Fanout DAC
NVIDIA P/N: 980-9I46U-00N02A

The NVIDIA Mellanox® MCP7Y50-N02A is a 1-to-4 passive copper fanout (breakout) cable designed for InfiniBand NDR / NDR200 fabrics that remain OSFP-based end-to-end.
It converts one 800Gb/s OSFP NDR port into four independent 200Gb/s OSFP NDR200 links, enabling high-density fan-out from a single switch port to multiple compute or accelerator nodes—without adding switching layers.
With its 2.5-meter reach, LSZH safety jacket, and flat-top OSFP connectors, this cable is ideal for dense racks and structured layouts where 2m is too short and 3m passive DACs become mechanically restrictive.
🔧 Technical Specifications
Manufacturer: NVIDIA Mellanox
Model: MCP7Y50-N02A
NVIDIA P/N: 980-9I46U-00N02A
Cable Type: Passive Copper Fanout / Breakout DAC (Twinax)
Topology: OSFP (800G NDR) → 4× OSFP (200G NDR200)
Protocol: InfiniBand NDR / NDR200
Signaling: PAM4
Lane Mapping: 2×100G PAM4 lanes per 200G output
Length: 2.5 meters
Jacket: LSZH (Low Smoke Zero Halogen)
Connector Style: Flat-Top OSFP
Condition: Refurbished (professionally tested)
🧠 NDR200 Lane Physics
800G OSFP = 8 × 100G PAM4 lanes
Each 200G OSFP leg = 2 × 100G PAM4 lanes
Result: true NDR200, electrically symmetric and standards-compliant (see the arithmetic sketch below)
🔴 Not HDR 200G
🔴 Not compatible with QSFP56 / ConnectX-6
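The lane arithmetic above is simple enough to verify by hand. The short Python sketch below is purely illustrative (not an NVIDIA tool); the lane counts and rates come straight from the spec list above.

```python
# Illustrative lane arithmetic only -- not an NVIDIA tool. Constants come
# from the spec list above (NDR electrical lanes run at 100G PAM4).
LANE_RATE_GBPS = 100   # rate of one NDR PAM4 electrical lane
TRUNK_LANES = 8        # lanes behind the 800G OSFP end
LANES_PER_LEG = 2      # lanes routed to each NDR200 leg

trunk_gbps = TRUNK_LANES * LANE_RATE_GBPS      # 8 x 100G = 800G
legs = TRUNK_LANES // LANES_PER_LEG            # 8 / 2 = 4 breakout legs
leg_gbps = LANES_PER_LEG * LANE_RATE_GBPS      # 2 x 100G = 200G per leg
assert (trunk_gbps, legs, leg_gbps) == (800, 4, 200)

# HDR 200G hits the same aggregate speed from 4 x 50G PAM4 lanes, which is
# why HDR (QSFP56) gear cannot terminate these 2 x 100G NDR200 legs.
assert 4 * 50 == leg_gbps  # equal rate, incompatible lane structure
```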
⚠️ Compatibility Disambiguation (Critical)
| 200G Standard | Connector | Lane Structure | Compatible |
|---|---|---|---|
| HDR 200G | QSFP56 | 4×50G | ❌ |
| NDR200 | OSFP / QSFP112 | 2×100G | ✅ (this cable: OSFP only) |
This cable supports NDR200 ONLY.
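As a practical sanity check, the negotiated rate on an endpoint can be read from `ibstat` (part of the standard infiniband-diags package). The sketch below is a hedged, illustrative parser, not an official NVIDIA procedure; the HCA name `mlx5_0` and port number are placeholders for your system.

```python
# Hedged sketch: read the negotiated port rate by parsing `ibstat` output
# (infiniband-diags). "mlx5_0" and port 1 are placeholders -- substitute
# your HCA name and port. Illustrative only, not an official NVIDIA check.
import subprocess

def port_rate_gbps(ca: str = "mlx5_0", port: int = 1) -> int:
    out = subprocess.run(
        ["ibstat", ca, str(port)],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("Rate:"):
            return int(line.split(":", 1)[1].strip())
    raise RuntimeError("No 'Rate:' line found in ibstat output")

if __name__ == "__main__":
    # A healthy NDR200 leg of this cable should report 200.
    print(f"Negotiated rate: {port_rate_gbps()} Gb/s")
```

Rate alone does not prove lane structure, but since a QSFP56 (HDR) port cannot physically accept an OSFP connector, a reported 200 on an OSFP node indicates the NDR200 leg came up as intended.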
📐 Directionality
Upstream: OSFP 800Gb/s NDR (switch / fabric port)
Downstream: 4× OSFP 200Gb/s NDR200
This is a fan-out splitter, not an aggregation link: the single 800G end belongs on the switch, with the four 200G legs running to endpoints.
📦 Deployment & Routing Notes (Why 2.5m Matters)
At 2.5 meters, this passive DAC offers:
Better signal margin than 3m variants
Easier routing than long-reach passive copper
Ideal balance between reach and mechanical flexibility
Still requires:
Controlled bend radius
Straight routing paths
Proper strain relief at the 1→4 breakout junction
🧩 Ideal Applications
OSFP-based AI and GPU clusters
High-density InfiniBand NDR fabrics
Liquid-cooled or accelerator-dense racks
Switch → 4 compute / accelerator nodes
Pod-scale fan-out where 2m is insufficient
🔁 Compatibility
✅ Compatible with
NVIDIA Quantum-2 OSFP NDR switches
OSFP-based NDR200 compute and accelerator nodes
OSFP-native InfiniBand fabrics
❌ Not compatible with
Ethernet (QSFP-DD / OSFP-800GbE)
InfiniBand HDR
QSFP112-based servers
Air-cooled OSFP cages that require finned-top connectors; this cable's flat-top ends rely on a riding heat sink (RHS) in the cage
✅ Quality Assurance – T.E.S IT-SOLUTIONS
Electrically tested for NDR / NDR200 PAM4
Fan-out lane integrity verified
Genuine NVIDIA Mellanox hardware
Secure professional packaging
🚚 Payment & Shipping
Payments: PayPal, credit cards, bank transfer
Shipping: Worldwide (8–13 business days)
Returns: Accepted per policy (buyer covers return shipping)
🤝 Why Choose T.E.S IT-SOLUTIONS?
We specialize in InfiniBand architectures at scale—including OSFP-native fabrics, lane physics, and fan-out design—not just part numbers. Our customers deploy with confidence.

