NVIDIA ConnectX-6 InfiniBand Adapter MCX653106A-ECAT 200Gb/s Smart NIC

Product details:

Brand: Mellanox
Model number: MCX653106A-ECAT
Document: connectx-6-infiniband.pdf

Payment & Shipping Terms:

Minimum order quantity: 1 unit
Price: Negotiable
Packaging details: Outer carton
Delivery time: Based on stock availability
Payment terms: T/T
Supply ability: Supplied per project/batch
Best price: contact us

Detailed information

Product status: In stock
Application: Server
Interface type: InfiniBand
Ports: Dual
Maximum speed: 100GbE
Type: Wired
Condition: New and original
Warranty period: 1 year
Model: MCX653106A-ECAT
Name: Original Mellanox MCX653106A-ECAT ConnectX-6 100Gb/s dual-port QSFP56 Ethernet adapter network card
Keyword: Mellanox network card
Highlights:

NVIDIA ConnectX-6 InfiniBand adapter, 200Gb/s Smart NIC, Mellanox network card with warranty

Product Description

NVIDIA ConnectX-6 InfiniBand Adapter MCX653106A-ECAT

200Gb/s Dual-Port Smart Adapter with In-Network Computing

The NVIDIA ConnectX-6 MCX653106A-ECAT delivers up to 200Gb/s of aggregate bandwidth across its two 100Gb/s ports, sub-microsecond latency, and hardware offloads for HPC, AI, and hyperconverged storage. Featuring RDMA, NVMe-oF acceleration, block-level XTS-AES encryption, and PCIe 4.0, this dual-port QSFP56 InfiniBand adapter maximizes data center efficiency and scalability. It is ideal for GPU clusters, ML training, and mission-critical networks.

InfiniBand & Ethernet support · RDMA / GPUDirect · NVMe-oF offload
Product Overview

The MCX653106A-ECAT is part of the NVIDIA ConnectX-6 InfiniBand adapter family, engineered for the most demanding workloads. It combines two QSFP56 ports, each capable of 100Gb/s InfiniBand (HDR100) or 100Gb/s Ethernet connectivity, offering hardware-based reliable transport, congestion control, and In-Network Computing engines. By offloading collective operations, MPI tag matching, and encryption from the host CPU, the adapter reduces CPU overhead and increases application performance in large-scale clusters. Enterprises, research labs, and hyperscale data centers rely on ConnectX-6 to build energy-efficient, low-latency fabrics.
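
As a concrete illustration of the collective operations mentioned above, the short sketch below is an ordinary MPI program whose MPI_Allreduce call is exactly the class of operation that in-network collective offloads can accelerate when the fabric and MPI stack enable them. The code is generic MPI (assuming an installed implementation such as Open MPI or MPICH, compiled with mpicc) and is not specific to this adapter.

/* Minimal MPI collective example.  The MPI_Allreduce below is the type of
 * operation the adapter's In-Network Computing engines can offload when the
 * MPI stack and fabric support it; the program itself is plain MPI. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank contributes one value; the reduction sums them across nodes. */
    long local = rank, global = 0;
    MPI_Allreduce(&local, &global, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum of ranks across %d processes: %ld\n", size, global);

    MPI_Finalize();
    return 0;
}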

Key Features
Port Speed
Up to 100Gb/s per port on this OPN (HDR100 InfiniBand / 100GbE); the ConnectX-6 family scales to 200Gb/s HDR
Message Rate
Up to 215 million messages/sec
Hardware Encryption
Block-level XTS-AES 256/512-bit, FIPS compliant
In-Network Computing
Collective offloads, NVMe-oF target/initiator offloads
Host Interface
PCIe Gen 4.0/3.0 x16 (dual-port support)
Virtualization
SR-IOV up to 1K VFs, ASAP2, Open vSwitch offload
RDMA Capabilities
RoCE, XRC, DCT, On-Demand Paging, Adaptive Routing support
Form Factor
Stand-up PCIe (low-profile), dual-port QSFP56
Advanced Technology

Built on NVIDIA's proven InfiniBand architecture, ConnectX-6 integrates In-Network Computing to accelerate MPI operations, deep learning frameworks, and storage protocols. The adapter supports Remote Direct Memory Access (RDMA) for zero-copy data transfers, bypassing the CPU and kernel. Hardware-based congestion control ensures predictable performance even under heavy load. Additionally, NVIDIA GPUDirect RDMA allows direct data exchange between GPU memory and the network adapter, slashing latency for AI training. With support for NVMe over Fabrics (NVMe-oF) offloads, the card reduces CPU utilization in storage arrays while enabling high-throughput, low-latency access to NVMe flash.
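
To confirm that the adapter is visible to the RDMA stack before running RDMA or GPUDirect workloads, the minimal sketch below uses the standard libibverbs API from the rdma-core / OFED user-space libraries to list RDMA devices and report each one's port state and link layer. The device names it prints (for example mlx5_0) depend on the host and are not fixed by this product; link with -libverbs.

/* Lists RDMA devices and prints port 1 state and link layer for each.
 * Requires the rdma-core / OFED user-space libraries (libibverbs). */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "No RDMA devices found\n");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0) {
            printf("%s: port 1 state=%s link_layer=%s\n",
                   ibv_get_device_name(devs[i]),
                   port.state == IBV_PORT_ACTIVE ? "ACTIVE" : "NOT ACTIVE",
                   port.link_layer == IBV_LINK_LAYER_ETHERNET ? "Ethernet"
                                                              : "InfiniBand");
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}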

Typical Deployments
  • High Performance Computing (HPC): Large-scale simulations, weather modeling, and computational fluid dynamics requiring low latency and high bandwidth.
  • AI & Machine Learning Clusters: Distributed training of deep neural networks, leveraging GPUDirect and RDMA for maximum efficiency.
  • NVMe-oF Storage Systems: Target or initiator offloads enable high-performance disaggregated storage with low CPU overhead.
  • Hyperscale Data Centers: Virtualized environments with SR-IOV, overlay networks, and service chaining.
  • Financial Services: Ultra-low latency trading infrastructure requiring deterministic performance.
Compatibility

The ConnectX-6 MCX653106A-ECAT is compatible with a wide range of servers, switches, and operating systems. It interoperates with NVIDIA Quantum InfiniBand switches (HDR 200Gb/s), as well as 200GbE Ethernet switches. The adapter supports standard PCIe slots (x16, x8, x4) and includes driver support for major OS platforms.

Technical Specifications
Product Model: MCX653106A-ECAT
Data Rate: 100Gb/s, 50Gb/s, 40Gb/s, 25Gb/s, 10Gb/s, 1Gb/s (InfiniBand & Ethernet)
Ports: 2x QSFP56 connectors
Host Interface: PCIe Gen 4.0 / 3.0 x16 (supports x8, x4, x2, x1 configurations)
Latency: Sub-microsecond (typically <0.8µs)
Message Rate: Up to 215 million messages/sec
Encryption: XTS-AES 256/512-bit, FIPS 140-2 compliance ready
Form Factor: PCIe low-profile stand-up (tall bracket mounted, short bracket included)
Dimensions (without bracket): 167.65mm x 68.90mm
Power Consumption: Typically 22W (depending on traffic)
Virtualization: SR-IOV (1K VFs), VMware NetQueue, NPAR, ASAP2 flow offload
Management: NC-SI, MCTP over PCIe/SMBus, PLDM for firmware update & monitoring
Remote Boot: InfiniBand, iSCSI, PXE, UEFI
Operating Systems: RHEL, SLES, Ubuntu, Windows Server, FreeBSD, VMware vSphere, OFED stack
Selection Guide – ConnectX-6 Adapters
Ordering Part Number (OPN) / Ports / Max Speed / Host Interface / Key Differentiator
MCX653106A-ECAT: 2x QSFP56, 100Gb/s (also lower speeds), PCIe 3.0/4.0 x16. Dual-port 100GbE/IB with advanced offloads; ideal for virtualization & storage. Confirm crypto support against the NVIDIA datasheet before ordering.
MCX653105A-HDAT: 1x QSFP56, 200Gb/s, PCIe 3.0/4.0 x16. Single-port 200Gb/s, crypto support.
MCX653106A-HDAT: 2x QSFP56, 200Gb/s, PCIe 3.0/4.0 x16. Dual-port 200Gb/s full bandwidth, crypto offload.
MCX653105A-ECAT: 1x QSFP56, 100Gb/s, PCIe x16. Single-port 100Gb/s, lower-cost entry.
MCX653436A-HDAT (OCP 3.0): 2x QSFP56, 200Gb/s, PCIe 3.0/4.0 x16. OCP 3.0 small form factor, dual-port.
Note: Not all features (e.g., the crypto engine) are available in every OPN. MCX653106A-ECAT focuses on dual-port 100Gb/s efficiency with the full ConnectX-6 feature set, including RDMA, storage offloads, and virtualization. For dual-port 200Gb/s, consider the -HDAT variants. Please confirm crypto requirements before ordering.
Why Choose ConnectX-6 MCX653106A-ECAT
  • Maximized Application Performance: Hardware offloads for MPI, NVMe-oF, and encryption free up CPU cores for actual workloads.
  • Future-Ready Bandwidth: PCIe 4.0 and 200Gb/s readiness ensures longevity in high-speed fabrics.
  • In-Network Memory & Computing: Supports collective offloads and burst buffer, reducing data movement overhead.
  • Trusted Security: Block-level AES-XTS encryption with FIPS compliance ensures data-at-rest and in-transit protection without a performance hit.
  • Simplified Management: Broad OS and hypervisor support, with unified driver stack (OFED, WinOF-2).
Service & Support

Hong Kong Starsurge Group provides full technical support, warranty coverage, and RMA services for all NVIDIA ConnectX adapters. Our team of networking engineers assists with configuration, firmware updates, and performance tuning. We offer global shipping, bulk pricing for data center projects, and customized stock reservations. For volume orders, contact our sales team to receive tailored quotations and lead time details.

Frequently Asked Questions
Q: Is MCX653106A-ECAT compatible with both InfiniBand and Ethernet?
A: Yes. The ConnectX-6 series supports dual-protocol operation: InfiniBand (up to 200Gb/s per port) and Ethernet (up to 200GbE). The specific OPN supports speeds up to 100Gb/s for both protocols when used with appropriate transceivers and switches.
Q: Does this adapter support GPU Direct RDMA?
A: Absolutely. NVIDIA GPUDirect RDMA is supported, enabling direct communication between GPU memory and the network, ideal for AI frameworks and MPI workloads.
Q: What is the difference between -ECAT and -HDAT suffix?
A: -ECAT denotes maximum speed of 100Gb/s (both IB and ETH) with certain feature sets; -HDAT indicates 200Gb/s capability with hardware crypto and enhanced engines. Choose based on required port bandwidth.
Q: Can I use this card in a PCIe 3.0 slot?
A: Yes, it is backward compatible with PCIe 3.0. However, maximum bandwidth may be limited compared to PCIe 4.0; the sysfs sketch after this FAQ shows a quick way to check the negotiated link.
Q: Does the adapter include NVMe-oF hardware offload?
A: Yes, ConnectX-6 provides full NVMe-oF target and initiator offloads, significantly reducing CPU overhead for storage workloads.
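
For the PCIe question above, the sketch below reads the link speed and width the card actually negotiated from the standard Linux PCI sysfs attributes. The mlx5_0 device name and the sysfs path are host-dependent assumptions for illustration, not values fixed by this product.

/* Prints the negotiated PCIe link speed and width from Linux sysfs.
 * Assumption: the adapter is exposed as RDMA device "mlx5_0"; adjust the
 * name for your host.  current_link_speed / current_link_width are
 * standard PCI-device sysfs attributes. */
#include <stdio.h>

static void print_attr(const char *path)
{
    char buf[64];
    FILE *f = fopen(path, "r");
    if (f && fgets(buf, sizeof buf, f))
        printf("%s: %s", path, buf);   /* sysfs values already end in '\n' */
    if (f)
        fclose(f);
}

int main(void)
{
    /* /sys/class/infiniband/<dev>/device is a symlink to the PCI device. */
    print_attr("/sys/class/infiniband/mlx5_0/device/current_link_speed");
    print_attr("/sys/class/infiniband/mlx5_0/device/current_link_width");
    return 0;
}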
Important Precautions
• Ensure server motherboard has sufficient airflow for high-speed adapters. Use passive or active QSFP56 cables within recommended temperature ranges.
• Confirm the PCIe slot provides adequate power (up to 75W via slot; this adapter typically uses <25W).
• For liquid-cooled platforms, check compatibility with Intel Server System D50TNP if cold plate variant is needed (this OPN is standard air-cooled).
• Verify OS driver compatibility with latest OFED or WinOF-2 stacks.
About Hong Kong Starsurge Group

Since 2008, Hong Kong Starsurge Group Co., Limited has delivered enterprise-grade networking hardware, system integration, and IT services worldwide. As a trusted partner for NVIDIA networking products, Starsurge offers certified solutions for government, finance, healthcare, education, and hyperscale data centers. Our technical team ensures smooth deployment, from pre-sales architecture design to post-sales support. With a customer-first philosophy, we provide tailored, scalable infrastructure components including NICs, switches, cables, and end-to-end network solutions.

Global delivery · Multilingual support · OEM services available

Key Facts at a Glance
2x 100Gb/s ports
215M Msgs/sec
PCIe 4.0 x16
XTS-AES + FIPS
SR-IOV (1K VFs)
NVMe-oF offload
Compatibility Matrix (Quick Reference)
NVIDIA Quantum HDR Switches: Yes (200Gb/s full fabric integration)
Ethernet 200G/100G Switches: Yes (requires compatible transceiver/FEC modes)
GPUDirect RDMA: Yes (NVIDIA GPU series supported)
VMware vSphere / ESXi: Certified (native drivers, SR-IOV support)
Windows Server 2019/2022: Yes (WinOF-2 driver package)
Linux Kernel & OFED: Full support (MLNX_OFED, inbox drivers)
Buyer Checklist – Before You Order
  • Confirm required link speed: verify that dual-port 100Gb/s meets your cluster bandwidth plan; for dual-port 200Gb/s, consider an -HDAT OPN.
  • Verify server PCIe slot: x16 physical, Gen 3 or Gen 4 recommended.
  • Check cable type: QSFP56 passive copper (up to 5m) or active optical cables for longer reach.
  • Ensure operating system drivers are available (OFED, WinOF).
  • For encryption requirements: confirm if built-in block encryption is needed – MCX653106A-ECAT supports AES-XTS, but always confirm FIPS level with NVIDIA datasheet.
  • Evaluate virtualization needs: SR-IOV, VXLAN offload, etc. (see the SR-IOV sketch after this list).
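
For the SR-IOV item above, the sketch below uses the standard Linux sysfs interface for creating virtual functions by writing to the device's sriov_numvfs attribute. The mlx5_0 device name and the VF count are assumptions for illustration; the write requires root privileges, driver support, and SR-IOV enabled in firmware/BIOS.

/* Requests SR-IOV virtual functions via the standard sriov_numvfs sysfs
 * attribute.  Assumptions: the adapter appears as RDMA device "mlx5_0" and
 * this runs as root; adjust the device name and VF count for your host. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/class/infiniband/mlx5_0/device/sriov_numvfs";
    const int num_vfs = 8;   /* number of virtual functions to request */

    FILE *f = fopen(path, "w");
    if (!f) {
        perror("fopen");
        return 1;
    }
    /* Writing N creates N VFs; an existing non-zero value must be reset to 0
     * before a different count can be written. */
    fprintf(f, "%d\n", num_vfs);
    if (fclose(f) != 0) {
        perror("fclose");
        return 1;
    }
    printf("Requested %d VFs via %s\n", num_vfs, path);
    return 0;
}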
