NVIDIA Mellanox MBF2M516A-EEEOT BlueField-2 E-Series SmartNIC/DPU: This is a data processing unit (DPU) with dual QSFP56 ports supporting both 100GbE and EDR InfiniBand (100Gb/s) connectivity via VPI (Virtual Protocol Interconnect). It integrates an 8-core Arm processor, 16GB DDR4 ECC memory, and 64GB eMMC storage on a PCIe Gen4 x16 adapter card. This is a SmartNIC/DPU: not a standard NIC, not a GPU, and not a standalone server. NVIDIA P/N: 900-9D219-0066-ST0.
Engineering Context
The BlueField-2 represents a fundamentally different architecture from traditional NICs. Beyond line-rate network I/O, it runs a full Linux operating system on its integrated Arm cores, enabling infrastructure offloads (OVS, IPsec, storage virtualization, telemetry) to be executed on the DPU rather than consuming host CPU cycles. The dual QSFP56 ports deliver 100Gb/s each via VPI, meaning the operational protocol (Ethernet or InfiniBand) is determined by firmware configuration and switch fabric type. The 16GB DDR4 with ECC provides working memory for DPU applications, while 64GB eMMC stores the DPU OS and configuration. The FHHL (Full-Height, Half-Length) form factor fits standard PCIe slots. This is the NEW condition variant.
Deployment & Use Cases
- Infrastructure Offload (SmartNIC Mode): Offload OVS, firewalling, encryption, and storage virtualization from host CPUs to the BlueField-2 Arm cores.
- Zero-Trust Security: Hardware-isolated DPU runs security policies independently of the host OS, providing a hardware root of trust even if the host is compromised.
- Network Function Virtualization (NFV): Run virtualized network functions (vRouter, vFirewall, vSwitch) directly on the DPU without host CPU involvement.
- Bare-Metal Cloud Provisioning: Cloud providers use BlueField-2 to manage tenant isolation, networking, and storage at the hardware level.
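For the infrastructure-offload use case above, the usual host-side enablement pairs OVS hardware offload with switchdev mode on the adapter. A minimal sketch, assuming Open vSwitch and the mlx5 driver stack are already installed; the PCI address and service name are placeholders to adjust for your system:

```shell
# Enable OVS hardware offload: flows the hardware can match are then
# pushed down to the adapter's embedded switch instead of the kernel.
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
systemctl restart openvswitch-switch   # service name varies by distro

# Put the physical function into switchdev mode so representor ports
# appear (0000:03:00.0 is a placeholder; find yours with lspci).
devlink dev eswitch set pci/0000:03:00.0 mode switchdev
```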
Technical Specifications
- OEM: NVIDIA Mellanox
- Part Number: MBF2M516A-EEEOT
- NVIDIA P/N: 900-9D219-0066-ST0
- Product Family: BlueField-2 E-Series DPU
- Network Interface: Dual-Port QSFP56 (VPI: 100GbE / EDR InfiniBand 100Gb/s)
- DPU Processor: 8-core Arm Cortex-A72 (up to 2.0GHz)
- DPU Memory: 16GB DDR4 ECC
- DPU Storage: 64GB eMMC
- Host Interface: PCIe Gen4 x16
- Security: Crypto Enabled, Secure Boot Optional
- Form Factor: FHHL (Full-Height, Half-Length)
- Condition: NEW
Compatibility & Hard Constraints
- VPI Protocol Selection: Supports both Ethernet and InfiniBand via firmware configuration. The active protocol depends on the connected switch fabric. Simultaneous mixed-mode (one port Ethernet, one port IB) requires specific firmware and driver support.
- EDR InfiniBand (Not HDR): The InfiniBand interface operates at EDR (100Gb/s) speeds. This is not an HDR (200Gb/s) device. Do not deploy in HDR fabrics expecting HDR line rates.
- PCIe Bandwidth: Full PCIe Gen4 x16 is required for dual-port line-rate operation. A Gen4 x8 link carries only about 126 Gb/s of payload, below the 200 Gb/s aggregate of the two ports, so x8 slots bottleneck traffic at the host bus.
- DPU Software: The Arm cores require NVIDIA DOCA SDK and BlueField firmware. Standard NIC drivers alone do not enable DPU functionality.
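VPI protocol selection in practice is a firmware setting changed with mlxconfig from NVIDIA's MFT tools. A hedged sketch; the MST device path below is a typical BlueField-2 name, not guaranteed, so list yours with `mst status` first:

```shell
# Query the current per-port link types
# (LINK_TYPE_P1 / LINK_TYPE_P2: 1 = InfiniBand, 2 = Ethernet)
mlxconfig -d /dev/mst/mt41686_pciconf0 query | grep LINK_TYPE

# Set both ports to Ethernet; a cold reboot applies the change
mlxconfig -d /dev/mst/mt41686_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
```

Setting one port to 1 and the other to 2 is the mixed-mode case noted above, and only works where firmware and drivers support it.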

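The x8 bottleneck in the constraints list follows from simple arithmetic: PCIe Gen4 signals at 16 GT/s per lane with 128b/130b encoding, leaving roughly 15.75 Gb/s of payload per lane. A quick check:

```shell
# Payload bandwidth per PCIe Gen4 lane: 16 GT/s * 128/130 ~= 15.75 Gb/s
per_lane_gbps=15.75
for lanes in 8 16; do
  awk -v n="$lanes" -v r="$per_lane_gbps" \
    'BEGIN { printf "x%d link: ~%.0f Gb/s payload\n", n, n*r }'
done
# x8 gives ~126 Gb/s, under the 200 Gb/s aggregate of two 100Gb/s ports;
# x16 gives ~252 Gb/s, leaving headroom for protocol overheads.
```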