Brocade One Virtual Cluster Switching

Simplifying the Next Generation Data Centre for Virtualization and Convergence

This seminar reviews key data centre network challenges, including server virtualisation, and how Brocade® Virtual Cluster Switching (VCS™) technology addresses them. VCS is designed to meet these challenges by enabling next-generation virtual data centre and private cloud computing initiatives.

Page 1: Simplifying the Next Generation Data Centre for Virtualization and Convergence

Brocade One: Virtual Cluster Switching

Page 2: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Agenda

• Brocade Evolution

• Next Generation Data Centre Challenges

• Brocade One: Brocade's vision for the Data Centre

Page 3: Simplifying the Next Generation Data Centre for Virtualization and Convergence


• Price/performance leader in IP networks

• Powering 90% of Internet Exchange Points

• 15,000+ customers worldwide


Acquired Foundry 2008

• Data center networking experts

• Storage networking pioneer and leader

• 70% SAN market share

Page 4: Simplifying the Next Generation Data Centre for Virtualization and Convergence

Brocade Networks: End-to-End Networking

December 2008 - Brocade Technology Vision, Mission, and Markets

[Diagram: end-to-end networking for Service Providers, Enterprise Networks, and Data Center Networks. Application services: file management, load balancers, NAT, SSL acceleration, firewalls, VPN, extension, migration, FC encryption, replication. Transport services: TCP/IP and FC SAN linking servers and storage. Plus provisioning, operations, and management, and Global Services (consulting, integration, logistics, maintenance).]

Page 5: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Brocade One: The Challenges

Page 6: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Brocade One: The Challenges - Architecture

• Classic Data Centre model
• Designed for North-to-South traffic
• Client-to-server traffic model
• Designed for transport, not the application

• Standard Enterprise solution
• Enterprise technologies: stacking
• Enterprise topologies: STP, MSTP
• Enterprise limitations: STP, stacking
• Minimize Layer 2 fault domains
• Increased management footprint
• Multi-layered, multi-protocol architectures for scalability

[Diagram: four separate Layer 2 domains; traffic is 75% North-to-South and 25% West-to-East.]

Page 7: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Brocade One: The Challenges - Architecture

[Diagram: a single Layer 2 domain carrying SOA, FCoE, and VM traffic; traffic is 70% West-to-East and 30% North-to-South.]

• Increased West-to-East traffic
• Next-generation apps (SOA, SaaS, Web 2.0)
• Server Virtualisation (VM): server-to-server
• Convergence (FCoE): server-to-storage

• Drive for application awareness
• Applications are the business enabler
• The DC is designed around the application
• The network needs to be aware of the apps

• The new DC needs to be flat
• A single, scalable Layer 2 domain

Page 8: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Brocade One: The Challenges - Virtual Machine Mobility

• VM migration
• Can break network/application access
• Port Profile information must be identical at the destination
• QoS, VLAN, security, etc.

• Mapping the profile to every port
• Eases mobility
• But breaks network and security best practices

The network needs to be aware of vMotion dynamically.

Page 9: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Brocade One: The Challenges - Operational Complexity

• Too many network layers
• Multiple standard and proprietary protocols

• Too many management points
• Multiple small-form-factor edge switches
• Individual management points
• Restricting deployment schedules

• Too many management tools
• Separate management tools for LAN, SAN, and HBAs/NICs
• Management silos

[Diagram: Core (Layer 3: BGP, EIGRP, OSPF, PIM); Aggregation (Layer 2/3: IS-IS, OSPF, PIM, RIP); Access, fixed and bladed (Layer 2/3: STP, OSPF, PLD, UDLD); separate NIC, HBA, blade switch, LAN, and SAN management silos.]

Page 10: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Brocade One: The Challenges - Flexibility for Open Systems

• To provide business differentiation and investment protection, the Data Centre needs to be open, flexible, and agile

[Diagram: an open ecosystem across the network, server, hypervisor (including Hyper-V), and storage layers.]

Page 11: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Brocade One: Virtual Cluster Switching

Page 12: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Brocade One

• Virtual Cluster Switching
• Brocade's vision for the next-generation Data Centre

• Evolutionary technology
• Built on the principles of Brocade's SAN fabric technology
• Merged with Foundry's IP knowledge

• Open technology
• Integration and interoperability with storage and server partners

[Diagram: Virtual Cluster Switching draws on Brocade's SAN fabric heritage (FC/FCoE knowledge), Foundry's IP knowledge, and storage/server OEM partnerships. VCS: evolution, not revolution.]

Page 13: Simplifying the Next Generation Data Centre for Virtualization and Convergence

Brocade's Virtual Cluster Switching (VCS)

• Lossless Ethernet Fabric for scalable, converged Layer 2 domains
• Distributed Intelligence within the fabric for seamless server mobility
• Logical Chassis behaviour for simplified management and collapsed layers
• Dynamic Service Insertion within the fabric for agility and zero downtime

[Diagram: the four VCS pillars - Ethernet Fabric, Distributed Intelligence, Logical Chassis, Dynamic Service Insertion.]

Page 14: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Virtual Cluster Switching

• First data center Ethernet fabric
• No Spanning Tree Protocol
• Active-active Layer 2 topology
• Multi-path, fully deterministic
• Auto-healing, non-disruptive
• Arbitrary topology: star, mesh, hub-and-spoke, Clos, etc.
• Built for convergence: lossless, low latency

[Diagram: a VCS Ethernet fabric carrying NAS, iSCSI, and FCoE traffic to virtualised servers. Pillar highlighted: Ethernet Fabric.]

Page 15: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Virtual Cluster Switching

Ethernet Fabric & TRILL

• The VCS Ethernet Fabric capabilities are achieved using TRILL (Transparent Interconnection of Lots of Links)
• Introduces Layer 3 control-plane concepts to Layer 2
• Providing scalability, control, and manageability for Layer 2 domains

TRILL is a proposed data centre Layer 2 protocol being developed by an Internet Engineering Task Force (IETF) working group.

Mission: "The TRILL WG will design a solution for shortest-path frame routing in multi-hop IEEE 802.1-compliant Ethernet networks with arbitrary topologies, using an existing link-state routing protocol technology." - source: IETF

Scope: "TRILL solutions are intended to address the problems of …, inability to multipath, … within a single Ethernet link subnet" - source: IETF

Page 16: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Virtual Cluster Switching

Ethernet Fabric & TRILL

To achieve Layer 2 scalability, multi-pathing, and stability, TRILL introduces Layer 3 concepts.

• Link-state protocol for the control plane
• Announces RBridges, NOT end-station MAC addresses
• Each RBridge has the full topology of the network
• Hop-by-hop forwarding to the destination
• Allowing traffic engineering

• No transient loops
• TTL within TRILL, decremented at each hop
• Avoiding transient loops and broadcast storms
• Traceroute capability

• No traffic flooding
• Unknown unicast and multicast sent down a multicast tree
• Reverse-path forwarding check on each link of the tree

A minimal forwarding sketch follows the diagram below.

[Diagram: RBridge 1 - RBridge 2 - RBridge 3 joined by adjacencies. MAC table at the ingress RBridge: MAC-B -> RBridge-3; RBridge-3 -> RBridge-2 (next hop). TRILL frame fields: Dest RBridge MAC, Src RBridge MAC, outer VLAN, ingress RBridge nickname, egress RBridge nickname, TTL, inner VLAN, Dest MAC, Src MAC.]
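To make the hop-by-hop mechanics concrete, here is a minimal, illustrative Python sketch of TRILL-style forwarding. The nicknames, tables, and frame fields are hypothetical stand-ins, not Brocade's implementation; real RBridges build these structures via the link-state protocol.

```python
# Minimal sketch of TRILL-style hop-by-hop forwarding (illustrative only).
from dataclasses import dataclass

@dataclass
class TrillFrame:
    ingress_rbridge: str   # nickname of the encapsulating RBridge
    egress_rbridge: str    # nickname of the decapsulating RBridge
    ttl: int               # decremented at each hop to stop transient loops
    inner_dst_mac: str     # end-station MAC, invisible to core switches

# Only the ingress RBridge needs the end-station MAC; the core forwards
# on RBridge nicknames learned from the link-state protocol.
MAC_TABLE = {"MAC-B": "RB3"}                    # MAC -> egress RBridge
NEXT_HOP = {"RB1": {"RB3": "RB2"},              # shortest-path next hops,
            "RB2": {"RB3": "RB3"}}              # computed per RBridge

def forward(rbridge: str, frame: TrillFrame) -> str:
    if frame.egress_rbridge == rbridge:
        return f"{rbridge}: decapsulate, deliver to {frame.inner_dst_mac}"
    frame.ttl -= 1
    if frame.ttl == 0:
        return f"{rbridge}: TTL expired, frame dropped (no broadcast storm)"
    return f"forward to {NEXT_HOP[rbridge][frame.egress_rbridge]}"

frame = TrillFrame("RB1", MAC_TABLE["MAC-B"], ttl=8, inner_dst_mac="MAC-B")
print(forward("RB1", frame))   # forward to RB2
print(forward("RB2", frame))   # forward to RB3
print(forward("RB3", frame))   # RB3: decapsulate, deliver to MAC-B
```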

Page 17: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Virtual Cluster Switching

Ethernet Fabric & TRILL

• Active-active Ethernet Fabric
• Achieved through TRILL's multi-pathing capability
• Each path built from a 10 GbE LAG
• Allowing bandwidth on demand
• Path bandwidth intelligence

• Traffic load balancing
• Flow-based hashing: 65-70% utilisation
• Hardware-based byte spreading: 90-95% utilisation
• Optimal path utilisation

Intelligent bandwidth utilisation within each fabric path: packet spraying achieves 90-95% utilisation. The sketch below contrasts the two approaches.
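The gap between the two utilisation figures is easy to reproduce. In this illustrative sketch, the flow sizes are invented and random placement stands in for real 5-tuple hashing and for Brocade's hardware spraying; it only shows why pinning whole flows to LAG members strands capacity while fine-grained spreading does not.

```python
import random

random.seed(7)                 # reproducible demo
LINKS = 4                      # members of one 10 GbE LAG
FLOWS = [random.randint(1, 10) for _ in range(40)]   # flow sizes, arbitrary units

# Flow-based hashing: every frame of a flow sticks to one member, so a few
# heavy flows can collide on the same link. randrange() stands in for a
# 5-tuple hash here.
hashed = [0] * LINKS
for size in FLOWS:
    hashed[random.randrange(LINKS)] += size

# Byte/frame spreading: load is dealt to the least-loaded member, keeping
# all members near-equal, which is how per-path utilisation reaches ~90-95%.
sprayed = [0] * LINKS
for size in FLOWS:
    sprayed[sprayed.index(min(sprayed))] += size

def utilisation(loads):
    # The trunk is limited by its busiest member.
    return sum(loads) / (LINKS * max(loads))

print(f"flow-based hashing: {utilisation(hashed):.0%}")
print(f"byte spreading    : {utilisation(sprayed):.0%}")
```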

Page 18: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Virtual Cluster Switching

Ethernet Fabric - Lossless QoS Behaviour

• Convergence ready
• The VCS Ethernet Fabric is lossless

• 802.1Qbb - Priority-based Flow Control (PFC)
• PFC allows identification and prioritisation of traffic

• 802.1Qaz - Enhanced Transmission Selection (ETS) / Data Center Bridging Exchange (DCBX)
• ETS: allows grouping of different priorities and allocation of bandwidth to PFC groups (a sketch follows)
• DCBX: discovery and initialisation protocol to discover resources connected to a DCB-enabled network
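As a hedged sketch of how ETS bandwidth groups behave: the group names, priority-to-group mapping, and percentages below are invented for illustration, not a Brocade default configuration.

```python
# Illustrative ETS (802.1Qaz) bandwidth allocation. All values are
# assumptions for the example, not a shipped configuration.
ETS_GROUPS = {
    "FCoE (PFC lossless)": {"priorities": [3],          "bandwidth_pct": 40},
    "LAN":                 {"priorities": [0, 1, 2],    "bandwidth_pct": 40},
    "Management":          {"priorities": [4, 5, 6, 7], "bandwidth_pct": 20},
}
LINK_GBPS = 10

# ETS percentages must account for the whole link.
assert sum(g["bandwidth_pct"] for g in ETS_GROUPS.values()) == 100

for name, group in ETS_GROUPS.items():
    # The percentage is a guarantee under contention; idle bandwidth may be
    # borrowed by other groups, per ETS semantics.
    gbps = LINK_GBPS * group["bandwidth_pct"] / 100
    print(f"{name}: priorities {group['priorities']}, {gbps:.0f} Gbps guaranteed")
```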

Page 19: Simplifying the Next Generation Data Centre for Virtualization and Convergence

Virtual Cluster Switching: Lossless Ethernet Fabric

• Fewer cables
• Fewer adapters
• Fewer switches

[Diagram: a top-of-rack configuration converging the LAN, SAN A, and SAN B onto one lossless fabric.]

Page 20: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Virtual Cluster Switching

• Fabric managed as a single switch

• Logically collapses network layers

• Single management for Edge and Aggregation layer

• Auto-configuration for new devices

• Centralized or distributed management

• Reducing managed elements

[Diagram: the four VCS pillars, with Logical Chassis highlighted.]

Page 21: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Virtual Cluster Switching: Logical Chassis

• VCS standard Ethernet switch
• VCS members act like blades in a modular chassis
• Standard protocols to communicate outside the fabric
• RSTP, LACP, 802.1x, sFlow, etc.
• No need to rip and replace

• Evolutionary migration
• Not rip-and-replace
• Leverage existing infrastructure
• Evolutionary, not revolutionary

[Diagram: the classic Core (Layer 3: BGP, EIGRP, OSPF, PIM), Aggregation/Distribution (Layer 2/3: IS-IS, OSPF, PIM, RIP), and Access, fixed and bladed (Layer 2/3: STP, OSPF, PLD, UDLD) stack collapses into a single VCS access/aggregation layer with a single point of management.]

Page 22: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Virtual Cluster Switching

• Fully distributed control plane

• Database replicated on each switch

• Master-less control, no re-convergence

• Network-wide knowledge of all members, devices, VMs

• Arbitrary topology, self-forming

• Automatic Migration of Port Profiles (AMPP)

[Diagram: configuration replicated across fabric members serving VMs and NAS, iSCSI, and FCoE traffic. Pillar highlighted: Distributed Intelligence.]

Page 23: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Virtual Cluster Switching

Distributed Intelligence

• Allows a VM to move, with the network reconfiguring automatically (sketched after the diagram below):
1. Port Profiles are created and managed in the fabric, then distributed
2. Discovered by BNA; pushed to orchestration tools
3. The server admin binds the VM MAC address to a Port Profile ID
4. The MAC address/Port Profile ID association is pulled by BNA and sent to the fabric
5. Intra- and inter-host switching and profile enforcement are offloaded from physical servers

[Diagram: Brocade Network Advisor (BNA) exchanges Port Profiles and MAC bindings between server management and the fabric. A Port Profile carries: Port Profile ID, QoS, ACLs, policies, VLAN ID, storage zoning.]
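A hedged sketch of the AMPP workflow above: the data structures and the on_mac_seen hook are hypothetical illustrations of steps 1-5, not the fabric's actual code.

```python
from dataclasses import dataclass, field

@dataclass
class PortProfile:                      # steps 1-2: defined fabric-wide
    profile_id: str
    vlan_id: int
    qos_class: str
    acls: list = field(default_factory=list)

FABRIC_PROFILES = {"web-tier": PortProfile("web-tier", vlan_id=100, qos_class="gold")}

# Steps 3-4: the server admin binds the VM MAC to a profile ID; BNA
# distributes the binding to every fabric member.
MAC_BINDINGS = {"00:50:56:aa:bb:cc": "web-tier"}

def on_mac_seen(switch: str, port: int, mac: str) -> str:
    """Step 5: when a migrated VM's MAC appears on any fabric port, the
    local switch applies the bound profile with no manual reconfiguration."""
    profile = FABRIC_PROFILES.get(MAC_BINDINGS.get(mac, ""))
    if profile is None:
        return f"{switch}/{port}: unknown MAC {mac}, default policy applied"
    return (f"{switch}/{port}: profile '{profile.profile_id}' applied "
            f"(VLAN {profile.vlan_id}, QoS {profile.qos_class}) for {mac}")

print(on_mac_seen("RB1", 7, "00:50:56:aa:bb:cc"))    # before vMotion
print(on_mac_seen("RB4", 12, "00:50:56:aa:bb:cc"))   # same policy after vMotion
```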

Page 24: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Virtual Cluster Switching

Distributed Intelligence

• Today, access to the network lives in the hypervisor's virtual switch
• Consumes valuable host resources
• Lack of traffic visibility weakens security
• No clear management control

• VCS offloads switching to the physical switch
• Eliminates the software switch: Virtual Ethernet Port Aggregator (VEPA) technology (see the sketch after the diagram below)
• Virtual NICs offloaded to the physical NIC: Virtual Ethernet Bridging (VEB) technology

• Host resources are freed up for applications
• 5-20% of host resources returned to applications
• VMs have direct I/O with the network

[Diagram: the virtual switch inside the physical server is eliminated; vNICs map onto the physical NIC and switching moves into the fabric.]
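To illustrate the VEPA behaviour: even traffic between two VMs on the same host is forwarded to the adjacent physical switch and "hairpinned" back, so the fabric regains visibility and policy control. A minimal sketch, with hypothetical host and VM names:

```python
# Illustrative VEPA forwarding path; hosts and VMs are invented for the demo.
HOSTS = {"host1": {"vm-a", "vm-b"}, "host2": {"vm-c"}}

def locate(vm: str) -> str:
    return next(host for host, vms in HOSTS.items() if vm in vms)

def vepa_path(src_vm: str, dst_vm: str) -> str:
    src_host, dst_host = locate(src_vm), locate(dst_vm)
    path = [src_vm, f"{src_host}:pNIC", "physical switch"]
    if dst_host == src_host:
        # Hairpin turn: the switch reflects the frame back out the port it
        # arrived on, instead of a software vSwitch bridging it in the host.
        path += [f"{src_host}:pNIC (hairpin)", dst_vm]
    else:
        path += [f"{dst_host}:pNIC", dst_vm]
    return " -> ".join(path)

print(vepa_path("vm-a", "vm-b"))   # same host, still policed by the fabric
print(vepa_path("vm-a", "vm-c"))   # different hosts
```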

Page 25: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Virtual Cluster Switching

• Reconfigure network via software

• Hardware-based flow redirection

• Incorporation of partner services

• Service modules in a chassis

• Available to the entire VCS fabric

• Non-stop service insertion

• Minimizes cost and physical moves

[Diagram: network services (encryption, Layer 4-7, extension, security) inserted into the fabric and available to all VMs. Pillar highlighted: Dynamic Service Insertion.]

Page 26: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Virtual Cluster Switching

Dynamic Service Insertion

• Dynamic service to connect Data Centers
• Extends the Layer 2 domain over distance
• Maintains fabric separation while extending VCS services to the secondary site (e.g. discovery, distributed configuration, AMPP)

• VCS Fabric Extension capabilities (a sketch of the data path follows the diagram)
• Delivers high-performance, accelerated connectivity with full line-rate compression
• Secures data in flight with full line-rate encryption
• Load-balances throughput and provides full failover across multiple connections

[Diagram: Site A and Site B VCS fabrics linked by Fabric Extension services (encryption, compression, multicasting) across a public routed network.]
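A toy sketch of the extension data path, under the usual ordering constraint: compress before encrypting, because ciphertext no longer compresses. The XOR "cipher" below is a placeholder for real line-rate encryption, NOT actual cryptography, and the whole pipeline is illustrative; the real feature does this in hardware.

```python
import os, zlib, hashlib

def extension_pipeline(payload: bytes, key: bytes) -> bytes:
    compressed = zlib.compress(payload)      # stands in for line-rate compression
    # Toy keystream XOR -- a placeholder only, NOT real encryption.
    stream = hashlib.sha256(key).digest() * (len(compressed) // 32 + 1)
    return bytes(c ^ s for c, s in zip(compressed, stream))

page = b"VM memory page " * 1000             # highly redundant, compresses well
wire = extension_pipeline(page, os.urandom(32))
print(f"{len(page)} bytes in -> {len(wire)} bytes across the WAN")
```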

Page 27: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Virtual Cluster Switching: Native Fibre Channel Connectivity

• Provides the VCS Ethernet Fabric with native connectivity to FC storage
• Connect FC storage locally
• Leverage new or existing Fibre Channel SAN resources

• VCS native Fibre Channel capabilities
• Adds Brocade's Fibre Channel functionality to the VCS fabric
• 8 Gbps and 16 Gbps FC, frame-level ISL Trunking, Virtual Channels with QoS, etc.

[Diagram: the VCS fabric connects natively to local FC storage and, via a Brocade DCX, to an existing FC SAN, alongside the LAN.]

Page 28: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Virtual Cluster Switching: Power of an Open Solution

[Diagram: the Brocade One architecture spans the network, server, hypervisor (including Hyper-V), and storage (iSCSI, NAS, FC, FCoE) layers.]

Page 29: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Virtual Cluster Switching: Simplified End-to-End Management

• Single Data center-wide platform

• Ethernet, Fibre Channel, and Data Center Bridging (DCB) element management

• Open northbound APIs

• Integration with leading orchestration tools

• VMware and Microsoft hypervisor plug-ins

[Diagram: Brocade Network Advisor provides element management across LAN, converged, and SAN networks, with open northbound APIs.]

Page 30: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Brocade One: Virtual Cluster Switching - The Fabric

• Network complexity (SAN, IP, HPC): 3 to one - one converged Ethernet fabric
• Network layers: 3 to one - one flat network layer
• Management: 20 to one - one management point for the fabric
• Hypervisor operation (vSwitch, VEB, VEPA): 3 to one - one virtual access layer with Distributed Intelligence

Page 31: Simplifying the Next Generation Data Centre for Virtualization and Convergence

Questions?

Page 32: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Brocade One: Deployment Scenarios

Page 33: Simplifying the Next Generation Data Centre for Virtualization and Convergence


VCS Deployment Scenario 1: 1/10 Gbps Top-of-Rack Access - Architecture

[Diagram: Core / Aggregation / Access / Servers tiers. WAN at the top; existing aggregation (MLX with MCT, Cisco with vPC/VSS, or other); existing 1 Gbps access switches co-existing with a 2-switch VCS at ToR; 1 Gbps, 1/10 Gbps, and 10 Gbps servers; LAG uplinks.]

• Preserves the existing architecture
• Leverages the existing core/aggregation
• Co-exists with existing ToR switches
• Supports 1 and 10 Gbps server connectivity

• Active-active network: load splits across connections
• No single point of failure: self-healing
• Fast link reconvergence: < 250 milliseconds

• High-density access with flexible subscription ratios
• Supports up to 36 servers per rack at 4:1 subscription


Page 34: Simplifying the Next Generation Data Centre for Virtualization and Convergence


VCS Deployment Scenario 1: 1/10 Gbps Top-of-Rack Access - Topology

Classic 10 GbE Top-of-Rack vs. VCS 10 GbE Top-of-Rack:
• Utilisation: Active/Passive vs. Active/Active
• Connections per server: 4 vs. 2
• Logical switches per rack: 2 vs. 1
• LAGs per rack: 2 vs. 1
• Bandwidth per server: 20 Gbps Active/Passive vs. 20 Gbps Active/Active

• 2-switch VCS per rack
• Active/active server connections
• Servers see only one ToR switch
• Half the server connections

• Reduced switch management
• Half the number of logical switches to manage

• Unified uplinks: one LAG per VCS

[Diagram: a 2-switch VCS logical chassis at ToR serving up to 36 servers per rack: 72 server-facing ports, 20 ports toward aggregation, 4 links per LAG/vLAG, giving a 4:1 10 Gbps subscription ratio to the aggregation (MLX with MCT, Cisco with vPC/VSS, or other). Legend: 1 GbE, 10 GbE, 10 GbE DCB, passive link.]
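As a quick check of the subscription arithmetic in the diagram above (the helper itself is just illustrative):

```python
def subscription_ratio(server_ports: int, uplink_ports: int) -> float:
    # Oversubscription = server-facing bandwidth / uplink bandwidth,
    # with every port at the same speed (10 GbE here).
    return server_ports / uplink_ports

# Scenario 1 ToR: 72 server-facing 10 GbE ports, 20 ports toward aggregation.
print(f"{subscription_ratio(72, 20):.1f}:1")   # 3.6:1, i.e. the nominal 4:1
```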

Page 35: Simplifying the Next Generation Data Centre for Virtualization and Convergence


VCS Deployment Scenario 1: 1/10 Gbps Top-of-Rack Access - Layout

• Preserves the existing network architecture: leverage VCS technology in stages
• 2-switch VCS in each server rack, managed as a single switch
• 1 Gbps and 10 Gbps connectivity
• Highly available: active/active
• High-performance connectivity to end-of-row aggregation
• One LAG to the core for simplified management and rapid failover

[Diagram: core at the top; aggregation switches at the end of each row; a 2-switch VCS at the top of each rack; servers with 1 Gbps or 10 Gbps connectivity.]

Page 36: Simplifying the Next Generation Data Centre for Virtualization and Convergence


VCS Deployment Scenario 2: 10 Gbps Top-of-Rack Access for Blade Servers - Architecture

[Diagram: WAN; existing aggregation (MLX with MCT, Cisco with vPC/VSS, or other); existing ToR switches co-existing with a 2-switch VCS at ToR; blade servers with 1 Gbps switches and blade servers with 10 Gbps switches/passthrough modules; LAG uplinks.]

• Preserves the existing architecture
• Leverages the existing core/aggregation
• Co-exists with existing ToR switches

• Provides low-cost, first-stage aggregation
• High-density blade servers without stressing the existing aggregation
• Reduces cabling out of the rack

• Active-active network: load splits across connections
• No single point of failure: self-healing
• Fast link reconvergence: < 250 milliseconds

• High-density ToR aggregation with flexible subscription ratios
• Supports up to 4 blade chassis per rack at 2:1 subscription


Page 37: Simplifying the Next Generation Data Centre for Virtualization and Convergence


VCS Deployment Scenario 2: 10 Gbps Top-of-Rack Access for Blade Servers - Topology

• 2-switch VCS per rack
• First-stage network aggregation
• Ethernet fabric at ToR
• Aggregates 4 blade-server chassis per rack (8 access switches)
• High-performance 2:1 subscription through the VCS

• Reduced switch management
• Half the number of logical ToR switches to manage

• Unified uplinks: one LAG per VCS

• Future: blade switches become members of the VCS fabric
• Drastic reduction in switch management

[Diagram: dual 10 Gbps switch modules per blade chassis (any vendor), 8 links per blade switch into a 2-switch VCS logical chassis (64 ports from the blade switches, 32 ports toward aggregation); vLAG to the aggregation (MLX with MCT, Cisco with vPC/VSS, or other); up to 4 blade chassis per rack = 64 servers; 4:1 10 Gbps subscription ratio through first-stage aggregation. Legend: 1 GbE, 10 GbE, 10 GbE DCB.]

Page 38: Simplifying the Next Generation Data Centre for Virtualization and Convergence


VCS Deployment Scenario 2: 10 Gbps Top-of-Rack Access for Blade Servers - Layout

• Preserves the existing network architecture: leverage VCS technology in stages
• 2-switch VCS in each server rack, managed as a single switch
• First-stage aggregation of 10 Gbps blade switches
• High-performance connectivity to end-of-row aggregation
• One LAG to the core for simplified management and rapid failover

[Diagram: core at the top; switches at the end of each row provide second-stage aggregation; a 2-switch VCS at the top of each rack provides first-stage aggregation; blade servers with 10 Gbps connectivity.]

Page 39: Simplifying the Next Generation Data Centre for Virtualization and Convergence


VCS Deployment Scenario 3: 1/10 Gbps Access, Collapsed Network - Architecture

• Flatter, simpler network design
• Logical two-tier architecture
• Ethernet fabrics at the edge

• Greater Layer 2 scalability/flexibility
• Increased sphere of VM mobility
• Seamless network expansion

• Optimised multi-path network
• All paths are active
• No single point of failure
• STP not necessary

[Diagram: WAN; core (MLX with MCT, Cisco with vPC/VSS, or other); VCS edge fabrics with LAG uplinks; 1/10 Gbps and 10 Gbps servers; Fibre Channel connections from the edge fabrics to the SAN.]

Page 40: Simplifying the Next Generation Data Centre for Virtualization and Convergence

VCS Deployment Scenario 3: 1/10 Gbps Access, Collapsed Network - Topology - ToR Mesh

[Diagram: a 10-switch VCS fabric (logical chassis) in a ToR mesh with 200 usable ports; up to 36 servers per rack, 5 racks per VCS; per switch, 36 server-facing ports and 2 uplink ports; 1 link per VCS member to the core router (20 total) with L3 ECMP. Servers with 1 Gbps, 10 Gbps, and DCB connectivity. Legend: 1 GbE, 10 GbE, 10 GbE DCB.]

• Scale-out VCS edge fabric: self-aggregating, flattens the network
• Clos fabric topology for flexible subscription ratios
• 312 usable ports per 10-switch VCS
• Supports 144 servers in 4 racks, all with 10 Gbps connections

• Drastic reduction in management: each VCS managed as a single logical chassis

• Enables network convergence: DCB and TRILL capabilities for multi-hop FCoE and enhanced iSCSI

[Diagram: core (MLX with MCT, Cisco with vPC/VSS, or other); within the mesh, each switch has 4 links to the other switch in its rack and 9 links to adjacent switches; vLAG uplinks.]

Page 41: Simplifying the Next Generation Data Centre for Virtualization and Convergence

VCS Deployment Scenario 4: 1/10 Gbps Access, Collapsed Network - Layout - ToR Mesh

• 2 VCS fabric members in each rack
• Dual connectivity into the fabric for each server/storage array
• Low-cost Twinax cabling in the rack

• Second-stage VCS fabric members in a middle-of-row rack
• Low-cost Laserwire cabling from the top-of-rack switches
• 1 VCS fabric per 4 racks of servers (assuming 36 servers per rack)
• Fibre-optic cabling used only for connectivity from the edge VCS to the core

• Single vLAG per fabric: reduced management and maximum resiliency

[Diagram: core at the top; 2 fabric members per rack, 5 racks per fabric; horizontal stacking using the ToR mesh architecture; servers and storage with 1 Gbps, 10 Gbps, and DCB connectivity.]

Page 42: Simplifying the Next Generation Data Centre for Virtualization and Convergence


VCS Deployment Scenario 4: 1/10 Gbps Access, Collapsed Network - Topology - Clos Fabric

[Diagram: a 10-switch VCS Clos fabric (logical chassis) with 312 usable ports; up to 36 servers per rack, 4 racks per VCS; edge switches with 36 server-facing ports and 12 fabric ports each, and 48-port members with 12 uplink ports each; 6 links per trunk (24 total); 48 ports available for FC SAN connectivity or VCS expansion; 6:1 subscription ratio to the core. Servers with 1 Gbps, 10 Gbps, and DCB connectivity. Legend: 1 GbE, 10 GbE, 10 GbE DCB.]

• Scale-out VCS edge fabric: self-aggregating, flattens the network
• Clos fabric topology for flexible subscription ratios
• 312 usable ports per 10-switch VCS
• Supports 144 servers in 4 racks, all with 10 Gbps connections

• Drastic reduction in management: each VCS managed as a single logical chassis

• Enables network convergence: DCB and TRILL capabilities for multi-hop FCoE and enhanced iSCSI

[Diagram: core (MLX with MCT, Cisco with vPC/VSS, or other) reached via L3 ECMP over a vLAG.]
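The Clos numbers above are self-consistent, which a few lines of arithmetic confirm: 144 servers at 10 Gbps against the 24 core-facing links (6 per trunk) yields exactly the quoted 6:1 ratio.

```python
SERVERS, SERVER_GBPS = 144, 10      # "144 servers ... 10 Gbps connections"
CORE_LINKS, LINK_GBPS = 24, 10      # "6 links per trunk (24 total)"

edge_bw = SERVERS * SERVER_GBPS     # 1440 Gbps of server-facing capacity
core_bw = CORE_LINKS * LINK_GBPS    # 240 Gbps toward the core

print(f"{edge_bw // core_bw}:1 subscription ratio to core")   # 6:1
```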

Page 43: Simplifying the Next Generation Data Centre for Virtualization and Convergence


Questions?