
🌟 NVIDIA Mellanox® OSFP Passive Splitter DAC – 800Gb/s to 2×400Gb/s NDR PAM4, 2m

Model: MCP7Y00-N002 | NVIDIA P/N: 980-9I929-00N002

The NVIDIA Mellanox® MCP7Y00-N002 is an OSFP passive copper splitter (Direct Attach Copper, DAC) engineered for InfiniBand NDR 800Gb/s environments.
It enables a single 800G OSFP port to be cleanly split into two independent 400Gb/s OSFP links, supporting next-generation AI, HPC, and hyperscale data center fabrics.

Built for PAM4 signaling and ultra-high lane density, this 2-meter passive splitter delivers deterministic latency, zero power consumption, and maximum signal integrity in dense rack and row-level deployments.

🔧 Technical Specifications

  • Manufacturer: NVIDIA Mellanox

  • Model: MCP7Y00-N002

  • NVIDIA Part Number: 980-9I929-00N002

  • Cable Type: Passive Copper Splitter DAC (PCC)

  • Connectivity: OSFP → 2× OSFP

  • Primary Protocol: InfiniBand NDR

  • Total Bandwidth: 800Gb/s → 2×400Gb/s

  • Lane Signaling: PAM4

  • Length: 2 meters

  • Condition: Refurbished, fully tested

🌐 Engineering & AI-Cluster Context (Why This Cable Exists)

Why Split 800G into 2×400G
In modern AI fabrics, 800G spine or leaf ports are often split to connect:

  • Two 400G leaf switches

  • Two 400G compute pods

  • Parallel GPU islands

This splitter allows flexible topology scaling without wasting high-value OSFP ports.

OSFP + PAM4 for NDR
OSFP is required for thermal and electrical headroom at 800G speeds.
The cable is tuned for PAM4 modulation, delivering 400G per leg with stable eye margins.
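The split works out cleanly at the lane level: NDR carries 100 Gb/s per lane over PAM4, and an 800G OSFP port carries eight such lanes, so each 400G leg takes four. A minimal sketch of that arithmetic (the constants reflect the NDR lane rate; variable names are illustrative):

```python
# Lane math for an NDR OSFP splitter (illustrative sketch).
LANE_RATE_GBPS = 100   # NDR effective rate per lane, PAM4 signaling
OSFP_LANES = 8         # an 800G OSFP port carries 8 electrical lanes
LEGS = 2               # this splitter fans out to two OSFP legs

total_gbps = OSFP_LANES * LANE_RATE_GBPS    # 8 x 100 = 800
lanes_per_leg = OSFP_LANES // LEGS          # 4 lanes routed to each leg
leg_gbps = lanes_per_leg * LANE_RATE_GBPS   # 4 x 100 = 400

print(total_gbps, lanes_per_leg, leg_gbps)  # 800 4 400
```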

Passive = Deterministic Latency

  • No retimers

  • No DSP

  • No power draw

Critical for synchronous AI training and latency-sensitive HPC workloads.

2m = Practical Row-Level Reach
2 meters is ideal for:

  • Adjacent rack connections

  • In-rack 800G → dual-pod fan-out

  • Avoiding excess stiffness and airflow obstruction seen in longer 800G copper runs

🧠 Typical Deployment Scenarios

  • NVIDIA NDR AI clusters

  • 800G leaf → dual 400G compute fabrics

  • Hyperscale spine/leaf architectures

  • Large language model (LLM) training pods

  • HPC and research supercomputers

🔁 Compatibility

  • NVIDIA Mellanox Quantum-2 / Quantum-X InfiniBand switches

  • OSFP-based 800G NDR switch ports

  • 400G OSFP NDR endpoints

  • NDR-enabled InfiniBand fabrics

⚠️ Requires switch configuration supporting OSFP split / breakout mode
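As a rough illustration of what breakout configuration can look like, earlier Mellanox MLNX-OS releases split a physical InfiniBand port into two logical ports roughly as below. This is a sketch based on MLNX-OS conventions, not a verified command sequence for NDR/OSFP platforms; confirm the exact syntax and module type against your switch's own documentation before use.

```shell
# Sketch only: split physical IB port 1/1 into two logical ports
# (syntax assumed from MLNX-OS conventions; verify on your NOS release)
switch (config) # interface ib 1/1 module-type qsfp-split-2
# After the split, the two legs typically appear as 1/1/1 and 1/1/2
```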

✅ Quality Assurance – T.E.S IT-Solutions

  • Electrically validated at NDR speeds

  • Split integrity and lane mapping verified

  • Compatibility verified with NVIDIA Mellanox OSFP platforms

  • Supplied with real product images

🚚 Payment & Shipping

  • Payments: PayPal, credit card, bank transfer

  • Shipping: Worldwide (8–13 business days), secure packaging

  • Returns: Accepted per return policy (buyer covers return shipping)

🤝 Why Choose T.E.S IT-Solutions?

We specialize in real NVIDIA Mellanox AI infrastructure, not generic networking gear.
Our expertise spans 400G / 800G cabling, NDR fabrics, and cluster topology design.

NVIDIA Mellanox® MCP7Y00-N002 passive splitter cable OSFP 800Gb/s to 2x 400Gb/s

SKU: MCP7Y00-N002_Refurbished
Price: €390.00