QDR InfiniBand Products Technical Presentation


Sun QDR InfiniBand
Product Portfolio

Presenter's Name

Title

Sun Microsystems

Sun's Systems Strategy

Sun Open Network Systems

Sun Innovation

Software

Compute

Network

Storage

Breakthrough Efficiency. Intelligent Scalability.

Sun Constellation System
Open Petascale Architecture
Eco-Efficient Building Blocks

Networking

Compute

Storage

Software

Ultra-Dense Blade Platform
Fastest processors: SPARC, AMD Opteron, Intel Xeon
Highest compute density
Fastest host channel adapter

Ultra-Dense and Ultra-Slim Switch Solutions
72-, 648- and 3456-port InfiniBand switches
Unrivaled cable simplification
Most economical InfiniBand cost/port

Comprehensive Software Stack
Integrated developer tools
Integrated Grid Engine infrastructure
Provisioning, monitoring, patching
Simplified inventory management

Developer Tools

Provisioning

Grid Engine

Linux

Ultra-Dense Storage Solution
Most economical and scalable parallel file system building block
Up to 48 TB in 4RU
Up to 2 TB of SSD
Direct cabling to IB switch

Sun DDR InfiniBand Product Family

Sun InfiniBand Switched Network

Express Module (IB NEM)

Sun Datacenter Switch 3x24

Sun Datacenter Switch 3456

Sun InfiniBand Switched Network Express Module (NEM)

Network Express Module for Sun Blade 6048 chassis

Includes twelve Mellanox InfiniHost III Ex dual-port IB Host Channel Adapters (HCAs), one for each of the 12 blades in a shelf.

Includes two Mellanox InfiniScale III 24-port IB DDR switches that provide redundant paths for the HCAs

Eliminates the need for 24 cables from the HCAs to the switches

Includes passthrough connectivity for one of the blades' on-board Gigabit Ethernet ports.
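The cable-elimination claim follows from simple port arithmetic. Below is a minimal sketch using only the counts quoted above; the assumption that each blade lands one HCA port on each of the two NEM switches is my reading of the redundancy statement.

```python
# Illustrative port accounting for one Sun Blade 6048 shelf with the IB NEM.
blades_per_shelf = 12
ports_per_hca = 2          # dual-port InfiniHost III Ex on each blade
nem_switches = 2           # two InfiniScale III switches on the NEM
ports_per_switch = 24

# 24 HCA-to-switch links are routed on the NEM itself instead of over cables.
internal_links = blades_per_shelf * ports_per_hca
print(f"Cables eliminated: {internal_links}")                         # 24

# Assuming one HCA port per blade goes to each switch (redundant paths),
# every switch uses 12 ports for blades and has 12 left for uplinks.
blade_ports_per_switch = internal_links // nem_switches
uplinks_per_switch = ports_per_switch - blade_ports_per_switch
print(f"Uplink ports free on each NEM switch: {uplinks_per_switch}")  # 12
```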

Sun Datacenter InfiniBand Switch 3x24

1U switch designed to interconnect the InfiniBand NEMs in a cluster

Includes three independent 24-port Mellanox InfiniScale III switches that do not communicate with each other

unless they are connected through a NEM or by cables spanning two or more of the connectors

Enables clusters of up to 288 nodes with only 4 switches, 24 InfiniBand NEMs, and 6 Sun Blade 6048 chassis
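A quick sketch of the arithmetic behind that claim; the only number not quoted on these slides is the four-shelf layout of the Sun Blade 6048 chassis.

```python
# Sanity check of "288 nodes with 4 switches, 24 NEMs and 6 chassis".
chassis = 6
shelves_per_chassis = 4        # a Sun Blade 6048 holds four 12-blade shelves
blades_per_shelf = 12

nems = chassis * shelves_per_chassis           # one NEM per shelf
nodes = nems * blades_per_shelf
core_ports = 4 * (3 * 24)                      # four Switch 3x24 units

print(f"NEMs: {nems}, nodes: {nodes}")                 # 24 NEMs, 288 nodes
print(f"Core switch ports available: {core_ports}")    # 288
print(f"Core ports per NEM: {core_ports // nems}")     # 12
```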

Sun Datacenter InfiniBand Switch 3456

Largest InfiniBand switch on the market: 3456 ports in a single switch

Needs as few as 1152 cables to connect to all nodes when used with the Sun Blade 6048 chassis and the InfiniBand NEM

Consists of 24 Line Cards (front), each with 24 switch chips, and 18 Fabric Cards (rear), each with 8 switch chips

Total of 720 Mellanox InfiniScale III switches

Very low latency and hop count through the switch

5 hops maximum from any port to any other port in the switch

200 ns per hop -> 1 µs maximum latency!
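The cable and latency figures on this slide reduce to two lines of arithmetic; a minimal sketch, assuming 12x cables that each carry three 4x links (as with the NEM connectors).

```python
# Cable count and worst-case latency for the Switch 3456 (illustrative).
ports = 3456
links_per_12x_cable = 3
print(f"Minimum cables: {ports // links_per_12x_cable}")    # 1152

max_hops = 5
hop_ns = 200
print(f"Worst-case latency: {max_hops * hop_ns} ns")        # 1000 ns = 1 microsecond
```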

Quad Data Rate InfiniBand Products

Sun Datacenter InfiniBand Switch 648
High-density, highly scalable InfiniBand QDR switch

Switch Performance

648 ports QDR/DDR/SDR InfiniBand

Bisection bandwidth of 6,480 Gbps

3-stage internal full Clos fabric

100 ns per hop, 300 ns max latency (QDR)

Line and Fabric Cards

9 Line Cards with connectors

9 Fabric Cards with no connectors

11 RU Chassis

Mount up to 3 switches in a 19-inch rack

Host based Sun Subnet Manager

Sun Datacenter InfiniBand Switch 648

11RU 19-inch Enclosure

Up to 3 per standard 19-inch rack

1944 ports in a rack!

Passive mid-plane w/ air holes

81 passthrough connectors

Eight 4x IB ports each

9 FCs in rear

9 LCs in front

9 horizontal Line Cards provide InfiniBand cable connectivity

9 vertical Fabric Cards provide communication between line cards and chassis cooling

Mid-plane

Sun Datacenter InfiniBand Switch 648
Architecture diagram

Line Cards (LCs)

72 ports per Line Card

4 x I4 switch Chips

72 ports realized through 24 12x CXP connectors

DDR or QDR

Power Consumption: 450W

9 connectors to fabric cards via mid-plane

24 CXP connectors, 3 IB links each (12x)

Line Card Block Diagram

[Block diagram: four I4 switch chips connect to twelve stacked 12x CXP2 connector pairs (24 CXP connectors in total) on the cable side and to the midplane connector on the other. Six 4x ports per line run to the CXP connectors and two 4x ports per line to the midplane, i.e. 18 4x ports per I4 chip in each direction; eight 4x ports per midplane connector.]

Fabric Cards (FCs)

9 Fabric Cards per chassis

QDR

2 Mellanox I4 Switch Chips

4 ports to each Line Card

No external connectors

Power Consumption: 200W

Four hot swap fans in each card for chassis cooling

9 connectors to line cards via mid-plane

4 N+1 Fans per FC

Fabric Card Block Diagram

[Block diagram: two I4 switch chips behind the midplane connector, plus four fans for chassis cooling. Four 4x ports per line; eight 4x ports per connector.]

Three-stage Fat-Tree Topology
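The chip counts from the line- and fabric-card slides are enough to check that this three-stage fat-tree is non-blocking. A sketch under the assumption that the line-card I4 chips act as leaves and the fabric-card I4 chips as spines, with 36 ports per InfiniScale IV chip.

```python
# Port accounting for the Switch 648 fat-tree (illustrative only).
I4_PORTS = 36                      # ports per InfiniScale IV ("I4") chip

line_cards, i4_per_lc = 9, 4       # leaf stage
fabric_cards, i4_per_fc = 9, 2     # spine stage

leaf_chips = line_cards * i4_per_lc              # 36
spine_chips = fabric_cards * i4_per_fc           # 18

external_per_leaf = 72 // i4_per_lc              # 18 cable-facing 4x ports per leaf
uplinks_per_leaf = I4_PORTS - external_per_leaf  # 18 uplinks -> 1:1, non-blocking

print(f"External ports: {leaf_chips * external_per_leaf}")   # 648
print(f"Leaf uplinks:   {leaf_chips * uplinks_per_leaf}")    # 648
print(f"Spine capacity: {spine_chips * I4_PORTS}")           # 648
```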

Cable Management

Two cable management arms, one mounted on each side of the chassis, with supporting trays

Easy to arrange cables for each line card

Guides cables either way, up to the ceiling or down under the floor

Cables and Connectors

1st Gen 12x DDR was iPASS

Proprietary to Sun

2nd Gen 4x QDR is QSFP

Industry Standard for 4x

Supports copper and optical

Strong 3rd party support

2nd Gen 12x QDR is CXP

CXP is an Industry Standard

3:1 Cable reduction

Optical available in 10 m and 20 m lengths

Copper in 1 m, 2 m, 3 m, and 5 m lengths

12x Optical Splitter: 1 CXP to 3 QSFP

12x CXP to CXP, Optical

12x CXP to CXP, Copper

12x Copper Splitter: CXP to QSFPs
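The 3:1 reduction is just the ratio of the two link widths; a one-line check, assuming 4x QSFP as the alternative cabling.

```python
# A 12x CXP cable bundles three 4x links, so it replaces three 4x QSFP cables.
lanes_per_cxp, lanes_per_qsfp = 12, 4
print(f"Cable reduction: {lanes_per_cxp // lanes_per_qsfp}:1")   # 3:1
```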

Power and Cooling

Cooling air flows from front to back

4 hot-swap redundant cooling fans on each Fabric Card

36 cooling fans per system

Four redundant N+1 Power Supplies, 2,900W each

450W per LC

200W per FC

Max power consumption: 6750W
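A short check that the quoted card-level figures and the N+1 supply configuration cover the stated maximum; the overhead line (fans, CMCs, conversion losses) is inferred as the remainder, not a published number.

```python
# Power budget check for the Switch 648 chassis (illustrative).
lc_watts, fc_watts = 450, 200
card_power = 9 * lc_watts + 9 * fc_watts        # 5850 W for line + fabric cards
max_power = 6750                                # quoted chassis maximum
overhead = max_power - card_power               # fans, CMCs, losses (inferred)

psu_watts, psus = 2900, 4
n_plus_1 = (psus - 1) * psu_watts               # capacity with one supply failed

print(f"Cards: {card_power} W, remaining overhead: {overhead} W")
print(f"N+1 capacity {n_plus_1} W covers {max_power} W: {n_plus_1 >= max_power}")
```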

Dimensions

Physical Characteristics without Cable Management

Height: 19 inches

Width: 17.5 inches

Depth: 27 inches

Weight: 400 pounds

System Management

Two redundant hot-swap Chassis Management Controller cards (CMCs)

Pigeon Point service processor

One RJ-45 Net-Mgt 100BT port

One RJ-45 serial console port

Hardware remote management via ILOM

CLI, web interface

Supports IPMI, SNMP

Dual redundant hot-swap CMCs
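A minimal sketch of polling a CMC over IPMI from a management station. The hostname and credentials are placeholders, and the availability of these generic ipmitool subcommands on this particular ILOM is an assumption; the slide only states that IPMI and SNMP are supported.

```python
# Query the switch CMC service processor over IPMI (hypothetical host/credentials).
import subprocess

CMC_HOST = "switch-cmc.example.com"   # placeholder Net-Mgt address of a CMC
USER, PASSWORD = "root", "changeme"   # placeholder credentials

def ipmi(*args: str) -> str:
    """Run an ipmitool command against the CMC over the lanplus interface."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", CMC_HOST,
           "-U", USER, "-P", PASSWORD, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(ipmi("chassis", "status"))   # power state and fault indicators
    print(ipmi("sdr", "list"))         # fan, temperature and voltage sensors
```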

InfiniBand Subnet Management

External dual redundant IB subnet managers with OpenSM software

Runs on Linux

Controls IB routing

System HW management via CMC service processor
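For completeness, a hedged sketch of a fabric check run on the subnet-manager host: it assumes the standard OFED infiniband-diags tools (ibswitches, ibhosts) are installed alongside OpenSM, which the slide does not state explicitly.

```python
# Count the switch chips and HCAs visible from the subnet-manager host.
import subprocess

def count_lines(cmd: list[str]) -> int:
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    return sum(1 for line in out.splitlines() if line.strip())

if __name__ == "__main__":
    print(f"Switch chips discovered: {count_lines(['ibswitches'])}")
    print(f"HCAs discovered:         {count_lines(['ibhosts'])}")
```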

Sun Datacenter InfiniBand Switch 648
Configuration options

Deploying non-blocking (100%) and oversubscribed fabrics (