
MCX556A-EDAT Mellanox ConnectX-5 Ex VPI - Network adapter

444,246 Ft (349,800 Ft + VAT)
Manufacturer part number: MCX556A-EDAT
SKU: MCX556A-EDAT
Availability: Active product
Average rating: Not yet rated
Manufacturer: Mellanox

Description and Parameters


  • PCIe 4.0 x16
  • 100Gb Ethernet / 100Gb Infiniband QSFP28 x 2
  • Adaptive routing on reliable transport
  • Embedded PCIe switch
  • High throughput, low latency and low CPU utilization
  • Maximizes data center ROI
  • Innovative rack design for storage and machine learning based on Host Chaining technology
  • Advanced storage capabilities
  • Intelligent network adapter supporting flexible pipeline programmability
  • Advanced performance in virtualized networks
  • HPC environments
    The ConnectX-5 delivers high bandwidth, low latency, and high computation efficiency for high-performance, data-intensive and scalable compute and storage platforms. ConnectX-5 enhances HPC infrastructures by providing MPI and SHMEM/PGAS offloads, Rendezvous Tag Matching offload, hardware support for out-of-order RDMA write and read operations, as well as additional PCIe atomic operations support. The ConnectX-5 VPI utilizes both IBTA RDMA and RoCE technologies, delivering low latency and high performance. ConnectX-5 enhances RDMA network capabilities by complementing the switch adaptive-routing capabilities and supporting out-of-order data delivery while maintaining ordered completion semantics, providing multi-path reliability and efficient support for network topologies including Dragonfly and Dragonfly+.
  • Storage environments
    NVMe storage devices offer very fast storage access. The evolving NVMe over Fabrics (NVMf) protocol leverages RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMf target offloads, enabling very efficient NVMe storage access with no CPU intervention, and thus improved performance and lower latency. Moreover, the embedded PCIe switch enables customers to build standalone storage or machine learning appliances. Standard block and file access protocols can leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.
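To make the NVMf target offload concrete, here is a minimal host-side sketch of attaching an NVMe-oF namespace over RDMA with the standard `nvme-cli` tool. The target address, port, and subsystem NQN are hypothetical placeholders, not values from this product page:

```shell
# Load the RDMA transport for the NVMe host stack
modprobe nvme-rdma

# Discover subsystems exported by a (hypothetical) target at 192.168.1.10
nvme discover -t rdma -a 192.168.1.10 -s 4420

# Connect to one advertised subsystem (hypothetical NQN)
nvme connect -t rdma -n nqn.2014-08.org.example:nvme-target -a 192.168.1.10 -s 4420

# The remote namespace now appears as a local block device (e.g. /dev/nvme1n1)
nvme list
```

With a target-offload-capable adapter such as this one, the target side of these transfers can be served by the NIC without CPU intervention; the host-side commands above are unchanged.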

Mellanox Technologies MCX556A-EDAT. Internal card. Connectivity technology: wired. Host interface: PCI Express. Port interface: fiber (QSFP28). Maximum data transfer rate: 100,000 Mbit/s.

ConnectX-5 Ex VPI Adapter Card EDR IB and 100GbE Dual-Port QSFP28 PCIe4.0 x16

Mellanox ConnectX-5 Ex VPI - Network adapter - PCIe 4.0 x16 - 100Gb Ethernet / 100Gb Infiniband QSFP28 x 2

Intelligent ConnectX-5 adapter cards, the newest additions to the Mellanox Smart Interconnect suite, support Co-Design and In-Network Compute and introduce new acceleration engines for maximizing High Performance, Web 2.0, Cloud, Data Analytics and Storage platforms.

ConnectX-5 with Virtual Protocol Interconnect supports two ports of 100Gb/s InfiniBand and Ethernet connectivity, sub-600 nanosecond latency, and very high message rate, plus PCIe switch and NVMe over Fabric offloads, providing the highest performance and most flexible solution for the most demanding applications and markets.
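A back-of-envelope calculation shows why the dual-port Ex variant needs PCIe 4.0 x16: two 100Gb/s ports at line rate exceed what a PCIe 3.0 x16 slot can deliver. This sketch assumes only 128b/130b line coding and ignores TLP/DLLP protocol overhead, so the figures are upper bounds:

```python
# Rough usable PCIe bandwidth vs. the 2 x 100Gb/s port line rate.
# Assumption: 128b/130b encoding (PCIe 3.0+), no other protocol overhead.

def pcie_gbps(gt_per_s: float, lanes: int) -> float:
    """Usable bandwidth in Gb/s after 128b/130b line coding."""
    return gt_per_s * lanes * 128 / 130

gen3_x16 = pcie_gbps(8.0, 16)   # ~126 Gb/s
gen4_x16 = pcie_gbps(16.0, 16)  # ~252 Gb/s
ports = 2 * 100.0               # two QSFP28 ports at line rate

print(f"PCIe 3.0 x16: {gen3_x16:.0f} Gb/s, covers 200 Gb/s: {gen3_x16 >= ports}")
print(f"PCIe 4.0 x16: {gen4_x16:.0f} Gb/s, covers 200 Gb/s: {gen4_x16 >= ports}")
```

Only the PCIe 4.0 x16 host interface leaves headroom for both ports running at full rate simultaneously.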

ConnectX-5 enables higher HPC performance with new Message Passing Interface (MPI) offloads, such as MPI Tag Matching and MPI AlltoAll operations, advanced dynamic routing, and new capabilities to perform various data algorithms.

Moreover, ConnectX-5 Accelerated Switching and Packet Processing (ASAP2) technology enhances offloading of virtual switches, for example Open vSwitch (OVS), which results in significantly higher data transfer performance without overloading the CPU. Together with native RDMA and RoCE support, ConnectX-5 dramatically improves Cloud and NFV platform efficiency.
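As a sketch of what enabling this kind of OVS hardware offload typically involves on Linux, the fragment below uses standard `devlink` and `ovs-vsctl` commands; the PCI address is a hypothetical placeholder, and exact steps vary by driver and distribution:

```shell
# Put the NIC's embedded switch into switchdev mode (PCI address is hypothetical)
devlink dev eswitch set pci/0000:3b:00.0 mode switchdev

# Tell Open vSwitch to offload datapath flows to the hardware, then restart it
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
systemctl restart openvswitch

# Inspect which datapath flows were actually offloaded to the NIC
ovs-appctl dpctl/dump-flows type=offloaded
```

Flows listed by the last command are being forwarded in the adapter rather than by the host CPU, which is the source of the efficiency gain described above.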

Mellanox offers an alternate ConnectX-5 Socket Direct card to enable 100Gb/s transmission rate also for servers without x16 PCIe slots. The adapter's 16-lane PCIe bus is split into two 8-lane buses, with one bus accessible through a PCIe x8 edge connector and the other bus through an x8 parallel connector to an Auxiliary PCIe Connection Card. The two cards are connected using a dedicated harness. Moreover, the card brings improved performance by enabling direct access from each CPU in a dual-socket server to the network through its dedicated PCIe x8 interface.
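The Socket Direct arithmetic can be sanity-checked the same way: a single x8 slot cannot carry 100Gb/s, but the two x8 halves of the split x16 bus together can. This sketch assumes PCIe 3.0 hosts (the case the Socket Direct variant targets) and 128b/130b line coding only:

```python
# Why splitting x16 into two x8 buses recovers 100Gb/s on PCIe 3.0 hosts.
# Assumption: 128b/130b encoding, no other protocol overhead.

def pcie_gbps(gt_per_s: float, lanes: int) -> float:
    """Usable bandwidth in Gb/s after 128b/130b line coding."""
    return gt_per_s * lanes * 128 / 130

single_x8 = pcie_gbps(8.0, 8)  # ~63 Gb/s: one x8 slot falls short of 100Gb/s
two_x8 = 2 * single_x8         # ~126 Gb/s: both halves combined exceed it

print(f"one PCIe 3.0 x8: {single_x8:.0f} Gb/s, two x8 halves: {two_x8:.0f} Gb/s")
```

Beyond raw bandwidth, each CPU socket reaching the network through its own x8 half also avoids cross-socket traffic over the inter-processor link, which is the latency benefit the paragraph above describes.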

