The ConnectX-5 InfiniBand adapter card provides a high-performance, flexible solution with two 100Gb/s InfiniBand and Ethernet ports, low latency, a high message rate, plus an embedded PCIe switch and NVMe over Fabrics offloads. These adapters support intelligent Remote Direct Memory Access (RDMA) and provide advanced application offloads for High-Performance Computing (HPC), hyperscale cloud, and storage platforms.
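As a quick smoke test of the RDMA stack on a host fitted with this card, the following is a minimal sketch, assuming the libibverbs userspace library from rdma-core is installed (compile with -libverbs); it simply enumerates the RDMA-capable devices the adapter exposes:

    /* Minimal sketch: list RDMA devices visible through libibverbs. */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        struct ibv_device **list = ibv_get_device_list(&num);
        if (!list) {
            perror("ibv_get_device_list");
            return 1;
        }
        for (int i = 0; i < num; i++)
            printf("RDMA device: %s\n", ibv_get_device_name(list[i]));
        ibv_free_device_list(list);
        return 0;
    }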
Product features:
Tag matching and rendezvous offloads
Adaptive routing on reliable transport
Burst buffer offloads for background checkpointing
NVMe over Fabrics (NVMe-oF) offloads
Embedded PCIe switch
Enhanced vSwitch/vRouter offloads
RoCE for overlay networks (see the port-query sketch after this list)
PCIe Gen 4.0 support
Compliant with RoHS standards
ODCC compatible
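Because ConnectX-5 is a VPI device, each port can run either link layer, InfiniBand or Ethernet (RoCE). A minimal sketch, again assuming libibverbs from rdma-core (compile with -libverbs); it opens the first RDMA device and reports how each physical port is currently configured:

    /* Minimal sketch: report the link layer (IB or Ethernet/RoCE) per port. */
    #include <stdio.h>
    #include <stdint.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        struct ibv_device **list = ibv_get_device_list(NULL);
        if (!list || !list[0]) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(list[0]);
        if (!ctx) {
            perror("ibv_open_device");
            return 1;
        }

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr)) {
            perror("ibv_query_device");
            return 1;
        }

        for (uint8_t p = 1; p <= dev_attr.phys_port_cnt; p++) {
            struct ibv_port_attr port_attr;
            if (ibv_query_port(ctx, p, &port_attr))
                continue;
            printf("port %u: link layer %s\n", (unsigned)p,
                   port_attr.link_layer == IBV_LINK_LAYER_ETHERNET
                       ? "Ethernet (RoCE)" : "InfiniBand");
        }
        ibv_close_device(ctx);
        ibv_free_device_list(list);
        return 0;
    }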
Product advantages:
Up to 100Gb/s connections per port
Industry leading throughput, low latency, low CPU utilization, and high message rate
Innovative rack design for storage and machine learning, based on Host Chaining technology
Intelligent interconnect for x86, Power, and GPU-based compute and storage platforms
Advanced storage features, including NVMe over Fabric offloading
Intelligent network adapter supporting flexible pipeline programmability
Cutting-edge performance for virtualized networks, including Network Function Virtualization (NFV)
Enabler of efficient service chaining capabilities
Efficient I/O consolidation, reducing data center costs and complexity
Note: The copyright of images or videos related to NVIDIA products, in whole or in part, belongs to NVIDIA Corporation.
Product specifications:
Model name: MCX556A-EDAT
Data rate: FDR/EDR InfiniBand; 100GbE (or 40/50/100GbE)
Interface type: Dual-port QSFP28
Application area: InfiniBand/Ethernet
Hardware interface: PCIe 4.0 x16
RDMA: Supported
Chip type: ConnectX-5 VPI
RoHS: Yes
Operating systems: RHEL/CentOS, Windows, FreeBSD, VMware (with OFED and WinOF-2 driver stacks)
Link rate: 16.0 GT/s (PCIe Gen4, per lane)
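As a back-of-the-envelope sanity check, not a datasheet figure: PCIe Gen4 signals at 16.0 GT/s per lane with 128b/130b encoding, so the x16 interface carries roughly 252 Gb/s (about 31.5 GB/s) in each direction, leaving headroom for both 100Gb/s ports at line rate. A small sketch of the arithmetic:

    /* Back-of-the-envelope PCIe Gen4 x16 bandwidth estimate. */
    #include <stdio.h>

    int main(void)
    {
        const double gt_per_lane = 16.0;        /* PCIe Gen4 signaling rate */
        const int lanes = 16;                   /* x16 slot */
        const double encoding = 128.0 / 130.0;  /* 128b/130b line coding */
        double gbps = gt_per_lane * lanes * encoding;
        printf("PCIe Gen4 x16: %.1f Gb/s (%.1f GB/s) per direction\n",
               gbps, gbps / 8.0);
        return 0;
    }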