HPE Mellanox® P65330-001 InfiniBand NDR200 OSFP to 4x QSFP112 DAC: This is a high-performance passive copper breakout cable assembly designed for Next Data Rate (NDR) InfiniBand interconnects. It splits a single 800Gb/s OSFP port (utilizing 8x 100G PAM4 lanes) into four individual 200Gb/s QSFP112 endpoints (2x 100G PAM4 lanes each). This is a fixed-length integrated cable assembly, not a transceiver with a detachable patch cord, ensuring minimized insertion loss and maximized signal integrity for short-reach AI and HPC clusters.
🔧 Technical Specifications
- OEM: HPE
- P/N: P65330-001 (Equivalent to NVIDIA MCP7Y40-N003 / 980-9I75R-00N003)
- Data Rate: NDR200 (800Gb/s split to 4x 200Gb/s)
- Connector A: 1x OSFP (Octal Small Form-factor Pluggable)
- Connector B: 4x QSFP112 (Quad Small Form-factor Pluggable 112)
- Power Consumption: 0W (Passive Device)
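The lane arithmetic behind the breakout can be sketched as follows. The figures (8 electrical lanes at 100Gb/s PAM4 on the OSFP end, split into four 2-lane QSFP112 legs) come from the spec list above; the variable names are illustrative only.

```python
# Sketch of the NDR breakout lane arithmetic from the spec list above.
# Names are illustrative; the rates are taken from the product description.

OSFP_LANES = 8          # OSFP carries 8 electrical lanes
LANE_RATE_GBPS = 100    # 100Gb/s per lane (PAM4, 112G SerDes class)
LEGS = 4                # four QSFP112 breakout legs
LANES_PER_LEG = OSFP_LANES // LEGS                # 2 lanes per QSFP112 leg

aggregate_gbps = OSFP_LANES * LANE_RATE_GBPS      # 800 Gb/s at the OSFP end
per_leg_gbps = LANES_PER_LEG * LANE_RATE_GBPS     # 200 Gb/s per leg (NDR200)

print(aggregate_gbps, per_leg_gbps)  # 800 200
```

The same arithmetic explains the incompatibility notes further down: a QSFP56 (HDR) port runs its lanes at 50Gb/s PAM4, so it cannot terminate a 100G-per-lane leg.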
🌐 Engineering Context: This Direct Attach Copper (DAC) solution is engineered for ultra-low latency Top-of-Rack (ToR) deployments within "Dragonfly" or "Fat Tree" topologies. Compared to Active Optical Cables (AOC), this passive DAC offers zero power consumption, a negligible thermal footprint, and significantly higher Mean Time Between Failures (MTBF). Its shielded twinaxial construction provides strong EMI immunity and the deterministic latency essential for synchronized AI training workloads.
🧠 Typical Use Cases
- High-Performance Computing (HPC) Clusters using NVIDIA Quantum-2 InfiniBand.
- AI/ML Model Training Nodes (Connecting Compute to Switch).
- Low-latency storage interconnects requiring NDR200 throughput.
- Short-reach intra-rack connectivity (up to 3m).
✅ Compatible With: NVIDIA Quantum-2 Series InfiniBand Switches (OSFP ports) and NVIDIA ConnectX-7 Adapter Cards (QSFP112 ports). Also compatible with select HPE Cray EX supercomputing cabinets. (Requires ports configured for native InfiniBand NDR operation and proper OSFP/QSFP112 heat sink alignment).
❌ NOT Compatible With: Standard 10G/25G/40G Ethernet ports; legacy QSFP28 (100G) and QSFP56 (200G HDR) ports, which do not support 100G-per-lane PAM4 signaling (112G SerDes). Not suitable for long-distance inter-rack connections.
⚠️ Installation & Handling: At the 3-meter length, the 26AWG/30AWG twinaxial copper bundle adds noticeable weight and stiffness. Maintain a minimum bend radius of 35mm to prevent twinax deformation and the resulting impedance mismatch. Ensure the OSFP connector is fully seated and locked before routing the breakout legs, to avoid placing stress on the port.
🌡️ Thermal Considerations: While the cable itself is passive and generates no heat, the OSFP and QSFP112 connectors block airflow at the port interface. Ensure switch and NIC cooling profiles are set to accommodate high-density copper obstructions.
🤝 Why Choose T.E.S IT-SOLUTIONS? We specialize in enterprise-grade HPE and NVIDIA networking solutions. Every unit is strictly validated for InfiniBand NDR200 performance and physical integrity in our lab. We offer expert consultation, fast worldwide shipping, and a comprehensive warranty to ensure your data center infrastructure runs without compromise.

