🌟 NVIDIA Mellanox® MCA4J80-N002 | InfiniBand NDR (IB NDR) Active Copper Cable | OSFP RHS (Finned-Top) | Twin-Port 2×400Gb/s (800Gb/s) | 2m | New
Part Number: 980-9I60V-00N002
Model: MCA4J80-N002
Technology: InfiniBand NDR (Next Data Rate)
Connector: OSFP RHS (Riding Heat Sink, finned-top)
Cable Type: Active Copper Cable (ACC)
Length: 2 meters
Total Bandwidth: 2×400Gb/s NDR = 800Gb/s aggregate
Condition: New / Factory-Sealed
🔧 Technical Specifications
Protocol: InfiniBand NDR (IB NDR)
Port Architecture: Twin-Port (Two 400G NDR links per OSFP connector)
Connector Style: OSFP RHS (Finned-Top for thermal dissipation)
Cable Type: Active Copper Cable with redriver circuitry
Length: 2m
Data Rate: 800Gb/s total (2×400Gb/s)
Compatibility: NVIDIA Quantum-2 platforms
MQM9700 (Managed Quantum-2 Switch)
MQM9790 (Unmanaged Quantum-2 Switch)
NDR-capable OSFP adapters (ConnectX-7 / BlueField-3)
🌐 Overview — Purpose-Built for NDR Quantum-2 Fabrics
The NVIDIA Mellanox® MCA4J80-N002 is a Twin-Port InfiniBand NDR Active Copper Cable engineered specifically for NVIDIA Quantum-2 (QM9700 / QM9790) switching platforms.
Its OSFP RHS finned-top connector ensures proper thermal performance in ultra-dense AI clusters, while ACC redriver technology maintains signal quality for the full 800Gb/s NDR bandwidth over a 2m copper link—a reach at which passive DACs struggle to maintain signal margins.
Ideal for short-reach NDR spine/leaf connectivity in AI, HPC, and large-scale InfiniBand fabrics.
⚙️ Key Features
✅ True NDR Performance – 2×400Gb/s Twin-Port Architecture
Each OSFP connector carries two 400Gb/s NDR streams, delivering 800Gb/s total throughput per cable.
✅ OSFP RHS (Finned-Top) – Required for Quantum-2 Thermal Profiles
Designed for use with NVIDIA Quantum-2 systems that require Riding Heat Sink (RHS) form factor for proper airflow and heat dissipation.
✅ ACC Technology (Active Copper with Redrivers)
Ensures signal integrity and stability at NDR data rates over a 2m length.
✅ Quantum-2 Certified Compatibility
Works seamlessly with:
QM9700 Managed Switch (Quantum-2)
QM9790 Unmanaged Switch (Quantum-2)
NDR Server Adapters (ConnectX-7 / BlueField-3 OSFP)
✅ Ideal for AI/HPC Clusters
Engineered for NVIDIA GPU-accelerated infrastructures supporting:
Distributed AI training
NVMe-oF over InfiniBand
Magnum IO / GPUDirect Storage
Low-latency supercomputing fabrics
⚠️ Compatibility Advisory (Important – Please Read Before Ordering)
This cable uses a Finned-Top OSFP RHS design, while some NICs and systems require Flat-Top OSFP modules.
Confirm that your device supports the RHS (Riding Heat Sink) OSFP form factor before ordering.
🧭 Recommended Use Cases
Quantum-2 NDR spine/leaf cabling
AI training clusters (DGX / HGX platforms)
HPC supercomputing networks
Short-reach (same-rack / adjacent-rack) NDR links
Situations where 1.5m DACs are too short and optics are unnecessary
📋 Quality Assurance
Factory-sealed, 100% authentic NVIDIA Mellanox
Tested for NDR signal integrity
Professionally packaged for global shipping
💳 Payment & Shipping
Payments: PayPal, Credit Cards, Wire Transfers
Shipping: Worldwide (8–13 business days)
Returns: Accepted (buyer pays return shipping)
🤝 Why Buy From T.E.S IT-SOLUTIONS?
Experts in NVIDIA Mellanox InfiniBand & Ethernet
Deep stock of NDR / HDR / 400G / 800G cables
Fast global logistics
Engineering-grade compatibility support

