🌟 NVIDIA Mellanox® MCA7J65-N005 | 800G to 2×400G NDR Breakout Splitter Cable | OSFP to 2×QSFP112 | Active Copper (ACC) | 5m

Specifications

Model: MCA7J65-N005
NVIDIA Part Number: 980-9I81C-00N005
Type: Active Copper Breakout Cable (1-to-2)
Connector A: OSFP (800Gb/s NDR) – For Switches
Connector B: 2× QSFP112 (400Gb/s each) – For NICs, DPUs, or accelerator nodes
Technology: ACC (Active Copper Cable)
Protocol: InfiniBand NDR (not Ethernet)
Length: 5 meters
Condition: Refurbished / Fully Tested

🚀 Overview

The NVIDIA Mellanox® MCA7J65-N005 is a next-generation 800G NDR breakout (splitter) cable that converts a single OSFP switch port into two independent QSFP112 400G links. This makes it ideal for GPU clusters, node aggregation, and high-density scale-out network topologies.

Designed for InfiniBand NDR, this cable ensures uncompromising bandwidth, extremely low latency, and resilient connectivity across modern HPC and AI workloads.

Key Benefits

🔹 Breakout Function (1×800G → 2×400G)

This is a true 1-to-2 breakout cable:

  • OSFP end → connects to the NDR switch

  • 2× QSFP112 ends → connect to compute nodes / NICs / DPUs

🔹 OSFP vs QSFP112 — Compatibility Clarification

To avoid confusion:

  • QSFP112 ≠ QSFP-DD

  • QSFP112 is the 400G form factor used here for InfiniBand NDR (4× 112Gb/s lanes)

  • Mechanically similar to QSFP28/QSFP56, but electrically different

  • Works only with NDR hardware (ConnectX-7/8 NICs, Quantum-2 switches, etc.)

  • Not compatible with 400G Ethernet QSFP-DD ports

🔹 Active Copper for Long Distance (5m)

The integrated ACC chipset provides:

  • Signal regeneration & conditioning

  • Lower error rates over long runs

  • Full NDR-speed stability

  • Improved heat tolerance

🔹 Ideal for Switch-to-NIC Topologies

Perfect for connecting:

Quantum-2 NDR Switch → 2× ConnectX-7/8 NICs or DPUs

Ideal for:

  • GPU nodes

  • AI training clusters

  • Multi-host HPC racks

  • NDR fat-tree & dragonfly networks

🔧 Installation Notes

Directionality:

  • OSFP end → Switch

  • QSFP112 ends → NICs, DPUs, GPU nodes

Cable Management Warning:

This 5m Y-splitter is heavier than standard DACs.
Recommended:

  • Velcro straps

  • Ladder racks

  • Rear cable organizers

These measures prevent connector strain and improve long-term link stability.
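After cabling, it is worth confirming that both QSFP112 branches negotiated the full 400Gb/s NDR rate. A minimal sketch, assuming the standard `ibstat` utility from the `infiniband-diags` package is installed on the host (the device names `mlx5_0` / `mlx5_1` are illustrative; list yours with `ibstat -l`):

```python
import re
import subprocess

def parse_rate(ibstat_output: str) -> int:
    """Extract the negotiated link rate in Gb/s from `ibstat` text output.

    ibstat reports a line such as "Rate: 400" for an active NDR link;
    return 0 if no rate line is present.
    """
    match = re.search(r"Rate:\s*(\d+)", ibstat_output)
    return int(match.group(1)) if match else 0

def check_port(ca_name: str) -> None:
    """Query one HCA with ibstat and report whether it linked at NDR."""
    out = subprocess.run(["ibstat", ca_name],
                         capture_output=True, text=True).stdout
    rate = parse_rate(out)
    status = "OK" if rate == 400 else "CHECK CABLING"
    print(f"{ca_name}: {rate} Gb/s -> {status}")

if __name__ == "__main__":
    # Hypothetical device names -- one per QSFP112 branch of the splitter.
    for ca in ("mlx5_0", "mlx5_1"):
        check_port(ca)
```

A rate below 400 on either branch usually points to a poorly seated connector or an unsupported (non-NDR) port.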

🌐 Enhanced Network Efficiency

✓ Extremely low latency for NDR fabrics
✓ Optimized for parallel HPC & AI workloads
✓ Reduces switch port usage with breakout topologies
✓ Full 800Gb/s aggregate bandwidth maintained

💼 Ideal For

  • Hyperscale AI & ML clusters

  • GPU supercomputers

  • HPC research centers

  • InfiniBand NDR node aggregation

  • Multi-host GPU servers (DGX, HGX, custom clusters)

🛡️ Quality Assurance

All cables are:
✓ Fully tested on NVIDIA NDR hardware
✓ Validated for OSFP → QSFP112 breakout behavior
✓ Inspected and professionally cleaned
✓ Packaged securely to avoid damage

💳 Payment & Shipping

  • PayPal, credit card, bank transfer

  • FedEx/TNT Express worldwide shipping

  • Tracking included

  • Returns accepted within policy

📦 Why Buy From T.E.S IT-SOLUTIONS?

We specialize in refurbished NVIDIA Mellanox InfiniBand & Ethernet hardware, including:

  • ConnectX-6 / 6 Dx / 7 / 8 NICs

  • Quantum / Quantum-2 HDR & NDR switches

  • DAC, ACC, AOC, splitter, and optical cables

  • Rails, PSUs, fans, accessories

All inventory is fully tested, validated, and ready for enterprise deployment.

NVIDIA Mellanox® MCA7J65-N005 Splitter Cable, NDR 800Gb/s to 2x400Gb/s, 5m

SKU: MCA7J65-N005_Refurbished
Price: €825.00