Page 1: 10Gig Emergence in the Data Center

10Gig Emergence in the Data Center

Marc Staimer, CDS, Dragon Slayer Consulting, [email protected]

Page 2: 10Gig Emergence in the Data Center

Agenda

10Gig “101”

Applications

Value Prop

Issues

Market Forecast

Conclusions

Page 3: 10Gig Emergence in the Data Center

10Gig “101”

What is 10Gig?

Why should I care?

10Gig applications?

What is the value proposition?

When will it matter?

Page 4: 10Gig Emergence in the Data Center

What is 10Gig really?

Usually refers to the usable bandwidth
• 10Gbps
  Ethernet, Fibre Channel, & SONET OC-192
  10x 1Gig
  12.5Gbps total including encoding overhead
• InfiniBand (IBA) is slightly lower
  10Gbps total, 8Gbps net (a.k.a. 4X)
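A minimal sketch (my arithmetic, not from the slides) of where these figures come from, assuming 8B/10B encoding, i.e. every 8 data bits are carried as 10 line bits:

```python
# Usable data rate = line rate * (data bits / coded bits) under 8B/10B.
def usable_gbps(line_rate_gbps: float, data_bits: int, code_bits: int) -> float:
    return line_rate_gbps * data_bits / code_bits

print(usable_gbps(12.5, 8, 10))  # 10.0 -> "12.5Gbps total" carries 10Gbps usable
print(usable_gbps(10.0, 8, 10))  # 8.0  -> IBA 4X: 10Gbps signal, 8Gbps net
```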

Page 5: 10Gig Emergence in the Data Center

Why should I care?

BW is increasing faster than the ability to use it

My server I/O can’t use it

And my backbones are being swamped

Page 6: 10Gig Emergence in the Data Center

10Gig applications

Switch Trunking

Server-to-storage fan-out

HPCC

DBMS clustering

Shared I/O eliminating bus contention

Page 7: 10Gig Emergence in the Data Center

Switch Trunking: Ethernet, FC, & IBA

• Ethernet: 10/100/1000 to the edge, 10Gig Core

• FC: 1,2,4 Gig to edge, 10Gig Core

• IBA: 4X (10Gig) edge, 12X (30Gig) Core

[Diagram: 1Gig Ethernet edge switches trunked into a 10Gig Ethernet core, with 10Gig SONET and 10Gig iSCSI links]

Page 8: 10Gig Emergence in the Data Center

FC Throughput Gain Example

Speed     Throughput, MB/s (full duplex)   Line Rate (Gbaud)
1 GFC       200                            1.0625
2 GFC       400                            2.125
4 GFC       800                            4.25
10 GFC    2,400                            10.5 or 3.1875

Source: Fibre Channel Industry Association
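A hedged check of the table (my arithmetic, not the FCIA figures): strip the 8B/10B coding overhead from the line rate to get the raw byte rate, then double it for full duplex. The table's round numbers come out slightly lower because frame headers and inter-frame gaps also cost bandwidth.

```python
# 10 line bits carry one data byte under 8B/10B; full duplex doubles it.
def raw_mbps_full_duplex(gbaud: float) -> float:
    per_direction = gbaud * 1e9 / 10 / 1e6  # MB/s in one direction
    return 2 * per_direction

for name, gbaud in [("1 GFC", 1.0625), ("2 GFC", 2.125), ("4 GFC", 4.25)]:
    print(f"{name}: ~{raw_mbps_full_duplex(gbaud):.0f} MB/s")  # ~212, ~425, ~850
```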

Page 9: 10Gig Emergence in the Data Center

10Gig-4/2Gig FC Switches

Server-Storage Fan-Out Results

FC definitely, GigE (RDMA) maybe
• Server-storage port fan-out increases from 8:1 to 48:1 (see the sketch after the figure)

[Figure: 40 (1U) IA application servers fanned out through a 10Gig FC SAN to storage]
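One way to read the 8:1 to 48:1 claim, sketched under my own assumption of 2Gig server links (the slide gives only the ratios): fan-out can grow while oversubscription stays roughly flat once the storage uplink moves to 10Gig.

```python
# Oversubscription = aggregate server bandwidth / storage-facing uplink.
def oversubscription(n_servers: int, server_gbps: float, uplink_gbps: float) -> float:
    return n_servers * server_gbps / uplink_gbps

print(oversubscription(8, 2, 2))    # 8.0 -> 8:1 fan-out on a 2Gig uplink
print(oversubscription(48, 2, 10))  # 9.6 -> 48:1 on a 10Gig uplink, similar ratio
```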

Page 10: 10Gig Emergence in the Data Center

HPCC

InfiniBand
• Potential 10GigE (RDMA) down the road
• Key node-to-node issues
  Very low latency (minimal fabric hops)
  Very high bandwidth

Page 11: 10Gig Emergence in the Data Center

128 (4X) IBA ports in 12U

HPCC Illustrated: 128-node 4X IBA Fabric

• Vertical rack space: 12U
• 10Gig connection: copper
• Full bi-sectional bandwidth: 10Gig/port
• Max node-to-node hops: 3 switches, 5 ASICs
• Latency: memory-to-memory = 6µs
• ~List pricing: < $1K/port
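A quick check of what "full bi-sectional bandwidth" at 10Gig/port implies for this fabric (my arithmetic, not a slide figure): cut the 128 nodes in half and sum the bandwidth that can cross the cut.

```python
# Bisection bandwidth = half the nodes, each driving its full port rate.
nodes, per_port_gbps = 128, 10
print(nodes // 2 * per_port_gbps)  # 640 Gbps across the bisection
```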

Page 12: 10Gig Emergence in the Data Center

DBMS Clustering: Increasing DBMS Performance

• IBA (primary focus)

• GigE

Value Prop
• Lower latency
  > IOPS
• Higher throughput
• Fewer connections
  < complexity
  < mgt

IPoIB, uDAPL, SDP, SRP, & FCP over IBA

[Figure: 16 PowerEdge 2450 servers in an Oracle RAC or DB2 cluster, attached to FC SAN storage]

Page 13: 10Gig Emergence in the Data Center

10Gig Shared I/O: Eliminating Bus Contention

4X IBA HCA on PCI-X bus
• Provides I/O for
  TCP/IP to Ethernet
  FCP to Fibre Channel
  iSCSI to Ethernet
• Shares 10Gig pipes
  Transparent to apps
  < cables
  < costs
  < complexity

Potentially doable on
• FC or Ethernet w/RDMA (cable arithmetic sketched after the figure)

[Figure: 16 PowerEdge 2450 Lintel/Wintel servers sharing 10Gig IBA I/O into FC SAN storage]
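An illustrative cable count for the "< cables" claim (the 2 NICs + 2 HBAs per server is my own assumption; the slide states only the direction of the change):

```python
# Shared I/O collapses per-server adapter cabling into one IBA link.
servers = 16
dedicated = servers * (2 + 2)  # separate Ethernet NICs + FC HBAs: 64 cables
shared_io = servers * 1        # one 4X IBA HCA carrying all traffic: 16 cables
print(dedicated, shared_io)
```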

Page 14: 10Gig Emergence in the Data Center

Blade Server “Fan-in”

Boot OS from external storage

• More blades per storage device

• High activity at startup

[Diagram: blade servers fanned in over 1/2/4 Gig links to a 10Gig link into SAN storage]

Page 15: 10Gig Emergence in the Data Center

10Gig Issues

I/O Infrastructure

Timing, Availability, & Cost

Copper vs. Optical

Compatibility

MSAs

Page 16: 10Gig Emergence in the Data Center

10Gig I/O Infrastructure

Current I/O buses cannot handle 10Gig throughput (arithmetic sketched below)

• PCI = Max 4Gbps

• PCI-X = Max 8Gbps

• Each additional bus card cuts the max in half

Future I/O infrastructure is slipping to the right

• PCI-X 2.0 = Max 26.4Gbps

• PCI Express 1.0 = Max 128Gbps

• Servers & storage utilizing the new I/O not available until late '04/early '05
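The bus maxima above line up with simple width × clock arithmetic; a rough sketch, where the 64-bit widths and 66/133MHz clocks are my assumptions about the parts being described:

```python
# Peak parallel-bus bandwidth in Gbps = bus width (bits) * clock (MHz) / 1000.
def bus_gbps(width_bits: int, clock_mhz: float) -> float:
    return width_bits * clock_mhz / 1000

print(bus_gbps(64, 66))       # ~4.2 -> "PCI = Max 4Gbps"
print(bus_gbps(64, 133))      # ~8.5 -> "PCI-X = Max 8Gbps"
print(bus_gbps(64, 133) / 2)  # ~4.3 -> a second card on the shared bus halves it
```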

Page 17: 10Gig Emergence in the Data Center

10Gig Timing, Availability & Costs

Infrastructure
• Optics: ~$1.2K to $5K/port
• Ethernet switches: ~$29K/port
  Decreasing ~28%/yr → ~$8K by 2007 (compounding checked below)
• FC switches: ~$1.5K/port (w/o optics)
  ~$0.5K/port by 2007
• IBA switches: ~$1K/port (w/o optics)

Adapters
• 10Gig Ethernet NICs: ~$6K; timing: 2005
• 10Gig FC HBAs & target ASICs: ~$5K; timing: 2005
• 4X IBA HCAs: timing: now
  12X (30Gig): late '04/early '05
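Checking the quoted decline (28%/yr from ~$29K/port in 2003; the compounding is my own, the endpoints are the slide's):

```python
# Compound a 28%/yr price decline from the 2003 per-port figure.
price = 29_000
for year in range(2004, 2008):
    price *= 1 - 0.28
    print(year, round(price))
# 2007 lands near $7.8K, consistent with "~$8K by 2007" and the
# $7,710 Price/Port figure in the Gartner table on the next slide.
```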

Page 18: 10Gig Emergence in the Data Center

Gartner 10Gig Market Forecasts

Ethernet Switches    2003      2004      2005       2006       2007        CAGR
Ports                1,800     5,000     15,200     51,800     185,000     218%
Revenue ($K)        52,200    97,500    218,272    554,260   1,426,350     129%
Price/Port ($)      29,000    19,500     14,360     10,700       7,710     -28%

FC Switches          2003      2004      2005       2006       2007        CAGR
Ports                    -    49,000    393,000  1,317,000   2,303,000     261%
Revenue ($K)             -    68,600    321,300    903,300   1,209,700     160%
Price/Port ($)           -     1,400        818        686         525     -28%

IBA Switches         2003      2004      2005       2006       2007        CAGR
Ports                2,450    81,375    189,177    402,325     531,650     284%
Revenue ($K)         3,063    81,375    160,800    261,511     239,243     197%
Price/Port ($)       1,250     1,000        850        650         450     -23%

*Note: IBA numbers are calculated from the Gartner/Dataquest forecast
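Verifying the CAGR column above (the data is Gartner's; the formula application is mine), using CAGR = (end/start)^(1/years) - 1 over the four years 2003-2007:

```python
# Compound annual growth rate over a fixed horizon.
def cagr(start: float, end: float, years: int = 4) -> float:
    return (end / start) ** (1 / years) - 1

print(f"{cagr(1_800, 185_000):.0%}")     # ~218% -> Ethernet switch ports
print(f"{cagr(52_200, 1_426_350):.0%}")  # ~129% -> Ethernet switch revenue ($K)
print(f"{cagr(29_000, 7_710):.0%}")      # ~-28% -> Ethernet price per port
```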

Page 19: 10Gig Emergence in the Data Center

10Gig Copper vs. Optical

Copper
• Low cost
• Limited distance: ~15 meters
• Not Cat 5 or 6 compatible
• Cat 7 work going on: ~100 meters
(NOTE: 1 meter = 3.28 feet)

Optical
• High cost
• Multi-mode (common): distance limited to 300-550 meters
• Designed for single mode
  Dark fiber: 10 Km, 40 Km, up to 64 Km

Page 20: 10Gig Emergence in the Data Center

10Gig Compatibility Question

Is 10Gig backwards compatible?
• Ethernet
  A. Yes
  B. No
• Fibre Channel
  A. Yes
  B. No

Page 21: 10Gig Emergence in the Data Center

10Gig Compatibility

10Gig Ethernet & FC
• Not backwards compatible
  It's the optics
  And the encoding
  • 8B/10B (1/2/4 Gig)
  • 64B/66B (10Gig)
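Why the encoding difference matters (my illustration; the slide only names the codes): the same 10Gbps of usable data needs a different line rate under each code, so 10Gig ports cannot simply clock down to talk to 1/2/4 Gig gear.

```python
# Line rate = usable rate scaled up by the code's bit overhead.
def line_rate_gbaud(usable_gbps: float, data_bits: int, code_bits: int) -> float:
    return usable_gbps * code_bits / data_bits

print(line_rate_gbaud(10, 8, 10))   # 12.5    -> if 10Gig had kept 8B/10B
print(line_rate_gbaud(10, 64, 66))  # 10.3125 -> with 64B/66B, as 10GigE uses
```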

Page 22: 10Gig Emergence in the Data Center

10Gig Definitions

XAUI: 10Gig attachment unit interface

XGMII: 10Gig media independent interface

Transponder: module containing an optical transmitter & receiver, plus a mux that changes the line rate

MSA: multi-source agreement

802.3ak: 10Gig over copper

RDMA: remote direct memory access

RDDP: remote direct data placement (RDMA on TCP/IP & GigE)


Page 23: 10Gig Emergence in the Data Center

10Gig MSAs

Transponder MSAs
• Xenpak: Intel, Agilent, Infineon, JDSU, Picolight
• XPAK: Agilent, Intel, Picolight
• X2: Agilent, JDSU, Mitsubishi, OpNext, Optillion
• IBPAK: Agilent, Infineon, InfiniCon, Mindspeed, Molex, OCP, Picolight, SUN, Tyco, W. L. Gore

Transceiver MSA
• XFP: 10Gig serial transceiver (JDSU)

[Photos: Xenpak, XPAK, X2, and XFP modules]

Page 24: 10Gig Emergence in the Data Center

IBPAK

[Diagram: IBPAK components, including 4x copper cable, MPO optical cable, dual MPO optical cable, 12x optical pluggable module, 4x optical pluggable module, 4x copper pluggable module, module holder, and 106-circuit Z-axis connector]

Page 25: 10Gig Emergence in the Data Center

10Gig Value Prop

Reduced Infrastructure Costs
• < Cabling
• < Connections
• < Complexity
• < Management

Increased Performance
• > Throughput
• < Latency

Page 26: 10Gig Emergence in the Data Center

10Gig Market Emergence

IBA: now; mainstream 2004

Ethernet: now; mainstream 2005

Fibre Channel: 2004; mainstream 2005/2006

Page 27: 10Gig Emergence in the Data Center

10Gig Conclusions

It’s coming

There are real, cost-justifiable applications

First applications arrive in 2004

Mainstream market = 2005/2006

Hockey stick = 2007

Page 28: 10Gig Emergence in the Data Center

Questions