NVIDIA Mellanox® MFA7U10-H030 Active Optical Splitter Cable: This is a 30-meter integrated Active Optical Cable (AOC) splitter designed for cross-generational interconnects within InfiniBand fabrics. It features a single OSFP connector on the host end (operating at 400Gb/s aggregate throughput via 8x 50G PAM4 lanes) that breaks out to two QSFP56 connectors on the target ends (each operating at 200Gb/s HDR). This "twin-port HDR" configuration enables next-generation NVIDIA Quantum-2 (NDR) switches to connect to existing Quantum (HDR) switches or ConnectX-6 adapters, effectively bridging the 400G and 200G ecosystems.
🔧 Technical Specifications
- OEM: NVIDIA (Mellanox)
- P/N: 980-9I115-00H030 (Model: MFA7U10-H030)
- Host Interface: 1x OSFP (Finned Top)
- Target Interface: 2x QSFP56
- Data Rate: 400Gb/s (8x50G) split to 2x 200Gb/s (4x50G)
- Length: 30 Meters
- Media: Multimode Fiber (MMF) 850nm
- Power Consumption: Max 10W (OSFP) / 5W (QSFP56)
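The lane arithmetic behind the specifications above can be sanity-checked in a few lines. This is purely illustrative: the constants simply restate the datasheet figures (8 host lanes, 4 lanes per target, 50Gb/s effective per PAM4 lane).

```python
# Illustrative sanity check of the splitter's lane arithmetic.
# HDR lanes carry 50 Gb/s effective each using PAM4 signaling.
LANE_RATE_GBPS = 50   # effective data rate per lane
HOST_LANES = 8        # OSFP host end
TARGET_LANES = 4      # per QSFP56 target end

host_rate = HOST_LANES * LANE_RATE_GBPS       # aggregate on the OSFP side
target_rate = TARGET_LANES * LANE_RATE_GBPS   # per QSFP56 branch

# The 8 host lanes split into two groups of 4 (4 + 4 = 8).
assert host_rate == 2 * target_rate
print(host_rate, target_rate)  # 400 200
```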
🌐 Engineering Context: In large-scale HPC upgrades, replacing an entire fabric at once is often impractical. The MFA7U10-H030 facilitates "island" upgrades by allowing new liquid-cooled or air-cooled NDR switches (MQM9700 series) to drive legacy HDR compute nodes. Unlike DAC splitters, which are limited to roughly 3 meters, this 30-meter AOC supports End-of-Row or Middle-of-Row topologies where the switch is physically distant from the target servers. The active optical engine maintains signal integrity and provides immunity to electromagnetic interference (EMI) across the full 30m span.
🧠 Typical Use Cases
- Legacy Node Integration: Connecting new OSFP-based Quantum-2 leaf switches to racks of existing HDR-based DGX A100 systems.
- Storage Fan-out: Linking a high-density 400G switch port to distributed 200G NVMe-oF storage targets across adjacent racks.
- Fabric Expansion: Extending the life of HDR infrastructure while establishing an NDR core backbone.
✅ Compatible With: NVIDIA InfiniBand Environments.
- Switch Side (OSFP): NVIDIA Quantum-2 (NDR) Series (e.g., MQM9700, MQM9790). Note: The OSFP port must be configured in split mode so that it operates as two independent 200Gb/s (4x50G) HDR links.
- Device Side (QSFP56): NVIDIA ConnectX-6 VPI, ConnectX-6 Dx, BlueField-2 DPUs, and Quantum (HDR) Switches (MQM8700 series).
- Protocol: Optimized for InfiniBand HDR. Compatible with Ethernet (200GbE/400GbE) only if supported by the switch OS and firmware.
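After cabling and split-port configuration, it is worth confirming that each QSFP56 end actually negotiated HDR. One common approach on the adapter side is to inspect `ibstat` output, which reports a `Rate:` field per port. The parser below is a minimal sketch: the sample text is abbreviated, and exact `ibstat` formatting can vary between OFED releases.

```python
# Sketch: extract (device, rate) pairs from `ibstat`-style output to
# verify that links came up at HDR (Rate: 200). Illustrative only.
import re

def port_rates(ibstat_text: str) -> list[tuple[str, str]]:
    """Return (device, rate) pairs found in ibstat-style output."""
    results = []
    device = None
    for line in ibstat_text.splitlines():
        stripped = line.strip()
        ca = re.match(r"CA '(\S+)'", stripped)
        if ca:
            device = ca.group(1)
        rate = re.match(r"Rate:\s*(\d+)", stripped)
        if rate and device:
            results.append((device, rate.group(1)))
    return results

sample = """CA 'mlx5_0'
        Port 1:
                State: Active
                Rate: 200
"""
print(port_rates(sample))  # [('mlx5_0', '200')]
```

On a live host, feed the function the captured stdout of `ibstat` and check that every port behind the splitter reports a rate of 200.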
❌ NOT Compatible With: Mechanical Mismatches.
- QSFP-DD Ports: The host end is OSFP and will not fit into QSFP-DD cages found on some non-NVIDIA switches.
- Standard 100G QSFP28: While mechanically similar, QSFP56 uses PAM4 signaling. Legacy QSFP28 ports (NRZ) may not link up without specific auto-negotiation support.
🌡️ Installation & Thermal Management
- Finned Top Design: The OSFP connector features a "Finned Top" heatsink to maximize heat dissipation in air-cooled switches. Ensure the switch chassis has adequate C2P (Connector-to-Power) airflow.
- Fiber Handling: This is a 30m cable. Excess slack must be managed in proper fiber trays (radius >30mm) to prevent macro-bending losses. Do not zip-tie tightly against rack posts.
- Pull-Tab Usage: Use the dedicated pull-tab to unlock the transceiver from the cage. Never pull on the orange fiber jacket itself, as this can sever the internal connection to the optical engine.
🤝 Why Choose T.E.S IT-SOLUTIONS? We specialize in high-performance NVIDIA/Mellanox interconnects. Every MFA7U10-H030 splitter is rigorously validated for split-port negotiation and bit-error-rate (BER) stability in our testing lab. We provide expert guidance on mixing NDR and HDR infrastructure, fast worldwide shipping, and a comprehensive warranty to ensure your hybrid fabric operates at peak efficiency.

