
REFERENCE GUIDE - Mellanox Technologies



ConnectX FDR InfiniBand and 10/40GbE Adapter Cards

Why Mellanox?
Mellanox delivers the industry’s most robust end-to-end InfiniBand and Ethernet portfolios. Our mature, field-proven product offerings include solutions for I/O, switching, and advanced management software, making us the only partner you’ll need for high-performance computing and data center connectivity. Mellanox’s scale-out FDR 56Gb/s InfiniBand and 10/40GbE products enable users to benefit from a far more scalable, lower latency, and virtualized fabric with lower overall fabric costs and power consumption, greater efficiencies, and simplified management, providing the best return-on-investment.

Why FDR 56Gb/s InfiniBand?
Enables the highest performance and lowest latency
– Proven scalability for tens-of-thousands of nodes
– Maximum return-on-investment
Highest efficiency / maintains a balanced system, ensuring highest productivity
– Provides full bandwidth for PCIe 3.0 servers
– Proven in multi-process networking requirements
– Low CPU overhead and high server utilization
Performance driven architecture
– MPI latency 0.7us, >12GB/s with FDR 56Gb/s InfiniBand (bi-directional); see the bandwidth check below
– MPI message rate of >90 million/sec
Superior application performance
– From 30% to over 100% HPC application performance increase
– Doubles storage throughput, cutting backup time in half
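The bandwidth bullet above can be sanity-checked with a few lines of arithmetic. The Python sketch below assumes the standard FDR link parameters (4 lanes per port, 14.0625 Gb/s signaling per lane, 64b/66b encoding); it illustrates where the >12GB/s bi-directional figure fits and is not a benchmark.

# Rough cross-check of FDR port bandwidth (assumptions noted above; real MPI
# throughput depends on the benchmark, message size, and host platform).
lanes = 4
lane_rate_gbps = 14.0625          # FDR signaling rate per lane
encoding = 64 / 66                # FDR uses 64b/66b encoding

data_rate_gbps = lanes * lane_rate_gbps * encoding    # ~54.5 Gb/s per direction
per_dir_gbytes = data_rate_gbps / 8                   # ~6.8 GB/s per direction
bidir_gbytes = 2 * per_dir_gbytes                     # ~13.6 GB/s bi-directional

print(f"{per_dir_gbytes:.1f} GB/s per direction, {bidir_gbytes:.1f} GB/s bi-directional")
# A measured >12 GB/s bi-directional MPI result sits comfortably under this ceiling.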

InfiniBand Market Applications
InfiniBand is increasingly becoming the interconnect of choice not only in high-performance computing environments, but also in mainstream enterprise grids, data center virtualization solutions, storage, and embedded environments. The low latency and high performance of InfiniBand, coupled with the economic benefits of its consolidation and virtualization capabilities, provide end customers the ideal combination as they build out their applications.

Why Mellanox 10/40GbE?
Mellanox’s scale-out 10/40GbE products enable users to benefit from a far more scalable, lower latency, and virtualized fabric with lower overall fabric costs and power consumption, greater efficiencies, and simplified management than traditional Ethernet fabrics. Utilizing 10 and 40GbE NICs, core and top-of-rack switches, and fabric optimization software, a broader array of end users can benefit from a more scalable and high-performance Ethernet fabric.

Mellanox adapter cards are designed to drive the full performance of PCIe 2.0 and 3.0 I/O over high-speed FDR 56Gb/s InfiniBand and 10/40GbE fabrics. ConnectX InfiniBand and Ethernet adapters lead the market in performance, throughput, low power, and low latency. ConnectX adapter cards provide the highest-performing and most flexible interconnect solution for data centers, high-performance computing, Web 2.0, cloud computing, financial services, and embedded environments.

Key Features
– 0.7us application-to-application latency
– 40 or 56Gb/s InfiniBand ports
– 10 or 40Gb/s Ethernet ports
– PCI Express 3.0 (up to 8GT/s); see the lane-rate sketch below
– CPU offload of transport operations
– End-to-end QoS & congestion control
– Hardware-based I/O virtualization
– TCP/UDP/IP stateless offload
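The PCI Express 3.0 interface is what lets a single FDR port run at full rate. The sketch below compares usable PCIe bandwidth against an FDR link; the 8GT/s lane rate and 128b/130b encoding are PCIe 3.0 spec values, while the x8 slot width is an assumption for illustration.

# Usable bandwidth per direction: PCIe 3.0 x8 vs. PCIe 2.0 x8 vs. one FDR port.
pcie3_x8 = 8 * 8.0 * (128 / 130) / 8     # 8 lanes, 8 GT/s, 128b/130b      -> ~7.9 GB/s
pcie2_x8 = 8 * 5.0 * (8 / 10) / 8        # 8 lanes, 5 GT/s, 8b/10b         -> ~4.0 GB/s
fdr_port = 4 * 14.0625 * (64 / 66) / 8   # 4 lanes, 14.0625 Gb/s, 64b/66b  -> ~6.8 GB/s

print(f"PCIe 3.0 x8: {pcie3_x8:.1f} GB/s  PCIe 2.0 x8: {pcie2_x8:.1f} GB/s  FDR port: {fdr_port:.1f} GB/s")
# Only the Gen3 slot leaves headroom above the FDR data rate, which is why
# "full bandwidth for PCIe 3.0 servers" is called out earlier in this guide.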

Key Advantages
– World-class cluster performance
– High-performance networking and storage access
– Guaranteed bandwidth & low-latency services
– Reliable transport
– End-to-end storage integrity
– I/O consolidation
– Virtualization acceleration
– Scales to tens-of-thousands of nodes

InfiniBand and Ethernet Switches

Mellanox 40 and 56Gb/s InfiniBand
InfiniBand switches deliver the highest performance and density with a complete fabric management solution, enabling compute clusters and converged data centers to operate at any scale while reducing operational costs and infrastructure complexity. Scalable switch building blocks from 36 to 648 ports in a single enclosure give IT managers the flexibility to build networks of up to tens-of-thousands of nodes.
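To make the "tens-of-thousands of nodes" figure concrete, the sketch below applies the usual non-blocking fat-tree (folded Clos) capacity formulas to the 36-port and 648-port building blocks; the topology choice is an assumption for illustration.

# Non-blocking fat-tree capacity from a fixed switch radix k:
#   two tiers   -> k^2 / 2 end ports
#   three tiers -> k^3 / 4 end ports
def two_tier_ports(radix: int) -> int:
    return radix ** 2 // 2

def three_tier_ports(radix: int) -> int:
    return radix ** 3 // 4

print(two_tier_ports(36))    # 648   -- matches the largest single-enclosure switch
print(three_tier_ports(36))  # 11664 -- tens-of-thousands territory with 36-port blocks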

Key Features
– 72.5Tb/s switching capacity (checked below)
– 100ns to 510ns switching latency
– Hardware-based routing
– Congestion control
– Quality of Service enforcement
– Up to 6 separate subnets
– Temperature sensors and voltage monitors

Key Advantages
– High-performance fabric for parallel computation or I/O convergence
– Wirespeed InfiniBand switch platform up to 56Gb/s per port
– High-bandwidth, low-latency fabric for compute-intensive applications
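The 72.5Tb/s capacity lines up with the 648-port chassis running every port at 56Gb/s, counted in both directions (the usual convention for switching capacity); a one-line check:

# 648 ports x 56 Gb/s x 2 directions, expressed in Tb/s.
print(648 * 56 * 2 / 1000, "Tb/s")   # 72.576 Tb/s, i.e. the quoted ~72.5Tb/s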



Mellanox’s scale-out 10 and 40 Gigabit Ethernet switches are the industry’s highest-density Ethernet switching products. The portfolio includes top-of-rack 1U Ethernet switches that deliver 10 or 40Gb/s Ethernet ports to the server or to the next level of switching. These switches enable users to benefit from a far more scalable, lower latency, and virtualized fabric with lower overall fabric costs and power consumption, greater efficiencies, and simplified management than traditional Ethernet fabrics.

Key Features
– Up to 36 ports of 40Gb/s non-blocking Ethernet switching in 1U
– Up to 64 ports of 10Gb/s non-blocking Ethernet switching in 1U
– 230ns-250ns port-to-port low-latency switching
– Low power

Key Advantages
– Optimal for dealing with data center east-west traffic, computation, or I/O convergence
– Highest switching bandwidth in 1U (see the check below)
– Low OpEx and CapEx and highest ROI
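The 1U switching bandwidth claim follows from the port counts listed above, again counting both directions:

# Aggregate 1U switching bandwidth from the listed port counts.
print(36 * 40 * 2 / 1000, "Tb/s of 40GbE switching in 1U")   # 2.88 Tb/s
print(64 * 10 * 2 / 1000, "Tb/s of 10GbE switching in 1U")   # 1.28 Tb/s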


Mellanox Product Details
IBM SKU – OPN – Component – Description – 3yr Silver Support

ConnectX-2 Adapters

81Y1531 MHQH19B-XTR QDR Single Port QDR ConnectX-2 VPI adapter card, QSFP, IB 40Gb/s

81Y1535 MHQH29B-XTR QDR Dual-port QDR ConnectX-2 VPI adapter card, QSFP, IB 40Gb/s

81Y9990 MNPH29D-XTR 10GbE Dual-Port 10GbE Adapter card, ConnectX-2 EN NIC, dual-port SFP+

ConnectX-3 Adapters

95Y3451 MCX353A-TCBT FDR10 Mellanox ConnectX-3 VPI Single-port QSFP FDR10/10GbE HCA

95Y3455 MCX354A-TCBT FDR10 Mellanox ConnectX-3 VPI Dual-port QSFP FDR10/10GbE HCA

95Y3459 MCX314A-BCBT 40GbE Mellanox ConnectX-3 EN Dual-port QSFP+ 40GbE Adapter

ConnectX-3 12B Release - June 2012

00W0037 MCX353A-FCBT FDR Mellanox ConnectX-3 VPI Single-port QSFP FDR14/40GbE HCA

00W0041 MCX354A-FCBT FDR Mellanox ConnectX-3 VPI Dual-port QSFP FDR14/40GbE HCA

00W0053 MCX312A-XCBT 10GbE Mellanox ConnectX-3 EN Dual-port SFP+ 10GbE Adapter

Edge Switches

81Y1471 MIS5030Q-1SFC QDR 36-Port Managed QDR IB Switch Bundle - 1410 Rack (PSE)

81Y1481 MIS5030Q-1BRC QDR 36-Port Managed QDR IB Switch Bundle - iDPx Rack (oPSE)

49Y0438 VLT-30112-IBM QDR Voltaire 4036 1U 36 PORT QDR SWITCH SINGLE PS

49Y0442 VLT-30015-IBM QDR Voltaire 4036 1U 36 PORT QDR SWITCH FOR IDATAPLEX RAC

90Y3766 MSX6036T-1SFR FDR10 Mellanox SX6036 QDR/FDR10 Switch Chassis 1410 Rack

90Y3776 MSX6036T-1BRR FDR10 Mellanox SX6036 QDR/FDR10 Switch Chassis iDXp

90Y3800 MSX60-PF FDR / FDR10 Mellanox MSX60xx/MSX10xx 300w Power Supply with Intake Fan

90Y3802 MSX60-PR FDR / FDR10 Mellanox MSX60xx/MSX10xx 300w Power Supply w/ Exhaust Fan

49Y0476 VLT-30029-IBM QDR Voltaire Edge PS-36 POWER SUPPLY UNIT AC

FDR Edge Switches 12B Release - June 2012

00W0003 MSX6036F-1SFR FDR Mellanox SX6036 FDR14 InfiniBand Switch (PSE)

00W0007 MSX6036F-1BRR FDR Mellanox SX6036 FDR14 InfiniBand Switch (oPSE)

00W0021 MSX1036B-1SFR 40 GbE Mellanox SX1036 40GbE Switch (PSE)

00W0025 MSX1036B-1BRR 40 GbE Mellanox SX1036 40GbE Switch (oPSE)

Chassis Switches

81Y8050 MIS5100Q-3DNC QDR 108-Port QDR IB Switch Bundle - All spines and 1 Mgmt Module

81Y1491 MIS5200Q-4DNC QDR 216-Port QDR IB Switch Bundle - All spines and 1 Mgmt Module

81Y8055 MIS5300Q-6DNC QDR 324-Port QDR IB Switch Bundle - All spines and 1 Mgmt Module

81Y1511 MIS5600Q-10DNC QDR 648-Port QDR IB Switch Bundle - All spines and 1 Mgmt Module

81Y1525 MIS5600MDC QDR Optional Management Module - PPC460 Mgmt Module for MIS5xxx

81Y1527 MIS5001QC QDR Leaf Blade - MIS5xxx Series Chassis Switch 18 port QSFP blade 40Gb/s IB

90Y3804 MSX6000MAR FDR / FDR10 Mellanox SX65xx PPC460-based Management Module

90Y3846 MSX6002TBR FDR10 Mellanox SX65xx QDR/FDR10 Spine Module

90Y3806 MSX6001TR FDR10 Mellanox SX65xx 18-port QSFP FDR10 InfiniBand Leaf Module

00W0019 MSX6536-NR FDR / FDR10 Mellanox SX6536 FDR14 InfiniBand Switch - 648 Port Chassis

FDR Chassis Switch 12B Release - June 2012

00W0011 MSX6512-NR FDR / FDR10 Mellanox SX6512 FDR14 InfiniBand Switch - 216 Port Chassis

00W0029 MSX6001FR FDR Mellanox SX65xx 18-port QSFP FDR14 InfiniBand Leaf Module

00W0033 MSX6002FLR FDR Mellanox SX65xx FDR14 Spine Module

Voltaire Chassis Switches

81Y8060 VLT-30060-IBM QDR GRID DIRECTOR 4200 9 SLOTS QDR BASIC CONFIG

49Y0480 VLT-30042-IBM QDR SFB-4700-X2 QDR FABRIC BOARD

49Y0466 VLT-30043-IBM QDR SLB-4018 18 PORT QDR LINE BOARD

49Y0474 VLT-30044-IBM QDR SMB-CM CHASSIS MANAGEMENT BOARD

49Y0478 VLT-30046-IBM QDR POWER SUPPLY MODULE 1.4KW AC

49Y0446 VLT-30040-IBM QDR GRID DIRECTOR 4700 18 SLOTS QDR BASIC CONFIG

49Y0470 VLT-30041-IBM QDR SFB-4700 QDR FABRIC BOARD

81Y8069 VLT-30061-IBM QDR SFB-4200 QDR FABRIC BOARD

49Y0484 KIT-00022-IBM QDR RACK MOUNT KIT FOR 4700 W/HYPERSCALE FABRIC BOAR


350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
Tel: 408-970-3400 • Fax: 408-970-3403
www.mellanox.com

© Copyright 2012. Mellanox Technologies. All rights reserved.
Mellanox, Mellanox logo, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, PhyX, SwitchX, Virtual Protocol Interconnect, and Voltaire are registered trademarks of Mellanox Technologies, Ltd. Connect-IB, FabricIT, MLNX-OS, ScalableHPC, Unbreakable-Link, UFM and Unified Fabric Manager are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.

Mellanox Product Details
IBM SKU – OPN – Component – Description – 3yr Silver Support

BladeCenter Content

46M6005 VLT-30051-IBM QDR HSSM-INFINIBAND QDR SWITCH MODULE FOR IBM BL

46M6002 46M6002 QDR 2-Port 40Gb InfiniBand Expansion Card (CFFh) for IBM BladeCenter

46M6001 46M6001 QDR 2-Port 40Gb InfiniBand Expansion Card (CFFh) for IBM BladeCenter

60Y0927 60Y0927 QDR 2-Port 40Gb InfiniBand Expansion Card (CFFh) for IBM BladeCenter

60Y0928 60Y0928 QDR 2-Port 40Gb InfiniBand Expansion Card (CFFh) for IBM BladeCenter; no Pubs Kit

90Y3572 90Y3572 10GbE Mellanox 2-port 10GbE Expansion Card (CFFh) for IBM BladeCenter

90Y3573 90Y3573 10GbE Mellanox 2-port 10GbE Expansion Card (CFFh) for IBM BladeCenter

46M6001 Custom QDR BladeCenter H Mezzanine Adapter - 20 and 40 Gb/s IB, Dual Port, PCIe Base 2.0 compliant, 1.1 compatible - 2.5GT/s or 5.0GT/s link rate x8

90Y3570 Custom 10 Gig Ethernet BladeCenter H Mezzanine Adapter Card - 10G Ethernet, Dual Port, PCIe Base 2.0 compliant

Bridging Solutions

90Y3796 MBX5020-1SFR IB / 10GbE Mellanox BridgeX BX5020 Gateway - 4X QSFP IB ports + 16 SFP+ ports

81Y8105 VLT-30038-IBM IB / 10GbE 4036E 34 PORT QDR SWITCH ETH GATEWAY FOR IDATAPL

81Y8113 VLT-30039-IBM IB / 10GbE 4036E 34 PORT QDR SWITCH ETH GATEWAY-LM FOR IDAT

81Y8101 VLT-30032-IBM IB / 10GbE 4036E 34 PORT QDR SWITCH ETH GATEWAY SINGLE PS

81Y8109 VLT-30033-IBM IB / 10GbE 4036E 34 PORT QDR SWITCH ETH GATEWAY-LOW MEM SIN

49Y0438 VLT-30112-IBM IB / 10GbE 4036 1U 36 PORT QDR SWITCH SINGLE PS

49Y0442 VLT-30015-IBM IB / 10GbE 4036 1U 36 PORT QDR SWITCH FOR IDATAPLEX RAC

49Y0476 VLT-30029-IBM IB / 10GbE PS-36 POWER SUPPLY UNIT AC

Cables and QSA Adapter

90Y3810 MC2206130-001 QDR / FDR10 1m Mellanox QSFP Passive Copper Cable

90Y3814 MC2206130-003 QDR / FDR10 3m Mellanox QSFP Passive Copper Cable

90Y3818 MC2206128-005 QDR / FDR10 5m Mellanox QSFP Passive Copper Cable

95Y3463 MC2206310-003 QDR / FDR10 3m Mellanox QSFP Optical Cable

95Y3467 MC2206310-005 QDR / FDR10 5m Mellanox QSFP Optical Cable

90Y3822 MC2206310-010 QDR / FDR10 10m Mellanox QSFP Optical Cable

90Y3826 MC2206310-015 QDR / FDR10 15m Mellanox QSFP Optical Cable

90Y3830 MC2206310-020 QDR / FDR10 20m Mellanox QSFP Optical Cable

90Y3838 MC2206310-030 QDR / FDR10 30m Mellanox QSFP Optical Cable

90Y3842 MAM1Q00A-QSA QDR / FDR Mellanox QSA Adapter (QSFP to SFP+)

FDR Cables 12B Release - June 2012

00W0049 MC2207130-001 FDR 1m Mellanox QSFP Passive Copper FDR14 InfiniBand Cable

00W0057 MC2207128-003 FDR 3m Mellanox QSFP Passive Copper FDR14 InfiniBand Cable

00W0061 MC2207130-00A FDR 0.5m Mellanox QSFP Passive Copper FDR14 InfiniBand Cable

00W0069 MC2207310-003 FDR 3m Mellanox QSFP Optical FDR14 InfiniBand Cable

00W0073 MC2207310-005 FDR 5m Mellanox QSFP Optical FDR14 InfiniBand Cable

00W0077 MC2207310-010 FDR 10m Mellanox QSFP Optical FDR14 InfiniBand Cable

00W0081 MC2207310-015 FDR 15m Mellanox QSFP Optical FDR14 InfiniBand Cable

00W0085 MC2207310-020 FDR 20m Mellanox QSFP Optical FDR14 InfiniBand Cable

00W0089 MC2207310-030 FDR 30m Mellanox QSFP Optical FDR14 InfiniBand Cable

00W0045 MC2207310-050 FDR 50m Mellanox QSFP Optical FDR14 InfiniBand Cable


Business Development Contact: James Lonergan
Business Development Manager
Tel: 512-897-8245
E-mail: [email protected]

Sales Contact: Scott McAuliffe
Director of Sales, East
Tel: 978-399-4710

E-mail: [email protected]