Cisco - Enabling High Performance Grids and Utility Computing
Shankar Subramanian, Technical Director – Storage & Server Networking, Cisco Systems
Agenda
• InfiniBand Hardware & System Overview
• RDMA and Upper Layer Protocols
• High Performance Computing Architectures
• HPC Building-blocks
• I/O Virtualization
• Server Virtualization
InfiniBand Overview
What Is InfiniBand?
• InfiniBand is a high-speed, low-latency technology used to interconnect servers, storage, and networks within the data center
• Standards based – InfiniBand Trade Association: http://www.infinibandta.org
• Scalable interconnect:
1X = 2.5 Gb/s
4X = 10 Gb/s
12X = 30 Gb/s
InfiniBand Physics
• Copper and fiber interfaces are specified
• Copper
Up to 15m* for 4X connections
Up to 10m for 12X connections
• Optical
Initial availability via dongle solution
Up to 300m with current silicon
Long haul possible, but not with current silicon
* 20m in certain circumstances
InfiniBand Physics
• A link is a bonded set of 2.5 Gbps (1X) lanes
Fiber is a ribbon cable
Copper is a multi-conductor cable
• Each link is 8b/10b encoded (worked arithmetic below)
A 4X link is four 2.5 Gbps physical connections
Each connection carries 2 Gbps of data
SAR provides a single 8 Gbps data connection (4X) or 24 Gbps (12X)
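To make the encoding arithmetic concrete, here is a small illustrative sketch (Python; not from the original deck) of how 8b/10b overhead turns each 2.5 Gb/s lane into 2 Gb/s of data, and how lanes aggregate into 4X and 12X links:

```python
# Illustrative InfiniBand rate arithmetic: 8b/10b encoding carries
# 8 data bits in every 10 line bits, so each 2.5 Gb/s lane yields
# 2 Gb/s of data; lanes bond into 4X and 12X links.
SIGNAL_RATE_GBPS = 2.5        # per-lane signaling rate (1X)
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b overhead

def data_rate_gbps(lanes: int) -> float:
    """Usable data rate in Gb/s for a link of `lanes` bonded 1X lanes."""
    return lanes * SIGNAL_RATE_GBPS * ENCODING_EFFICIENCY

for name, lanes in [("1X", 1), ("4X", 4), ("12X", 12)]:
    print(f"{name}: {lanes * SIGNAL_RATE_GBPS:g} Gb/s signaling, "
          f"{data_rate_gbps(lanes):g} Gb/s data")
# 1X: 2.5 -> 2, 4X: 10 -> 8, 12X: 30 -> 24
```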
Pluggable Optics Module – Transforms Powered Copper Ports to Optical Ports
Topspin Optical Module
• Converts a copper port to an optical port on a port-by-port basis
• Extends port-to-port reach to 150m - 300m with fibre ribbon cables
InfiniBand Nomenclature

[Diagram: hosts (CPUs, memory controller, system memory) attach to the fabric through an HCA and IB links; a switch running the SM connects them to TCAs, which bridge IB links to Ethernet and FC links]

HCA – Host Channel Adapter
SM – Subnet Manager
TCA – Target Channel Adapter
InfiniBand Switch Hardware
• The hardware switch device is a cut-through memory switch
• Full-duplex, non-blocking, 24-port tag-forwarding switch
• Tags are the system Local IDs (LIDs), provided to all network endpoints by the Master Subnet Manager at system startup (see the sketch below)
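The following is a hypothetical toy model (Python; class and method names invented for illustration) of tag forwarding: the subnet manager programs a LID-indexed table, and the switch needs only the packet header's destination LID to pick an output port:

```python
# Toy model of cut-through tag (LID) forwarding in an IB switch.
# The subnet manager programs the forwarding table at fabric startup.
class ToySwitch:
    def __init__(self, num_ports: int = 24):
        self.num_ports = num_ports
        self.lft: dict[int, int] = {}   # destination LID -> output port

    def program_route(self, dest_lid: int, port: int) -> None:
        """Called by the (master) subnet manager during its fabric sweep."""
        assert 0 <= port < self.num_ports
        self.lft[dest_lid] = port

    def forward(self, dest_lid: int) -> int:
        """Cut-through: only the header's LID is needed to pick a port."""
        return self.lft[dest_lid]

sw = ToySwitch()
sw.program_route(dest_lid=5, port=12)
assert sw.forward(5) == 12
```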
InfiniBand Host Channel Adapter
• Network interface for IB-attached servers
• Provides hardware Virtual/Physical memory mapping, Direct Memory Access (DMA), and memory protection
• Provides RDMA (Remote DMA) data transfer engine and reliable packet forwarding capabilities
InfiniBand Gateway
• Technically a Target Channel Adapter
• Similar to an HCA attached to an embedded device
• Usually doesn’t require virtual memory manipulation and mapping
• A simplified HCA on a specialized device
Examples: Ethernet-to-InfiniBand or Fibre Channel-to-InfiniBand packet-forwarding engines
RDMA and Upper Layer Protocols
Current NIC Architecture

[Diagram: a NIC on the server's I/O interconnect; data moves between the NIC, the OS buffer, and the application buffer in system memory, crossing the host interconnect and memory controller]

Data traverses the bus 3 times
Multiple context switches rob CPU cycles from actual work
Memory bandwidth and per-packet interrupts limit maximum throughput
The OS manages the end-to-end communications path
With RDMA and OS Bypass

[Diagram: an HCA on the server's I/O interconnect transfers data directly between the wire and the application buffer in system memory]

Data traverses the bus once, saving CPU and memory cycles
Secure memory – memory transfers with no CPU overhead
PCI-X/PCI-Express becomes the bottleneck for network data transmission
The HCA manages remote data transmission
Kernel Bypass

Traditional model: Application -> sockets layer -> TCP/IP transport -> driver -> hardware; every transfer crosses the user/kernel boundary and the kernel TCP/IP stack.

Kernel bypass model: Application -> sockets layer -> RDMA ULP -> driver -> hardware; the RDMA upper-layer protocol lets the application reach the hardware without passing through the kernel TCP/IP transport.
Upper Layer Protocols
• A variety of software protocols handle high-speed communication over RDMA
• Protocols include (an MPI example follows below):
IP over InfiniBand (IPoIB) – IETF: http://www.ietf.org/internet-drafts/draft-ietf-ipoib-ip-over-infiniband-09.txt
SDP (Sockets Direct Protocol) – InfiniBand Trade Association: http://infinibandta.org
SRP (SCSI RDMA Protocol) – ANSI T10: http://www.t10.org/ftp/t10/drafts/srp/srp-r16a.pdf
DAPL (Direct Access Provider Library) – DAT Collaborative: http://www.datcollaborative.org
MPI (Message Passing Interface) – MPI Forum: http://www.mpi-forum.org
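As a taste of the last item, here is a minimal MPI ping-pong sketch using the mpi4py bindings (an assumption of this example; the deck does not prescribe a binding). Over an InfiniBand fabric, the MPI library would carry these messages via RDMA:

```python
# Minimal MPI ping-pong; run with: mpiexec -n 2 python pingpong.py
# Over InfiniBand, the MPI library maps these sends/receives onto
# RDMA operations, bypassing the kernel TCP/IP stack.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    comm.send("ping", dest=1, tag=0)
    reply = comm.recv(source=1, tag=1)
    print(f"rank 0 got: {reply}")
elif rank == 1:
    msg = comm.recv(source=0, tag=0)
    comm.send("pong", dest=0, tag=1)
```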
InfiniBand Performance – Measured Results

Transport          Latency        Throughput
MPI (10G IB)       3.5 usec       8 Gb/s
SRP (10G IB)       8 usec         8 Gb/s
uDAPL (10G IB)     18 usec        7.9 Gb/s
SDP (10G IB)       18 usec        4.5 Gb/s
IPoIB (10G IB)     30 usec        4.1 Gb/s
TCP/IP (1GE)       40-60 usec     1 Gb/s

(TCP/IP, IPoIB, and SDP are reached through BSD Sockets, SDP also via an async I/O extension; uDAPL, SRP, and MPI use direct-access APIs.)
High Performance Computing
High Performance Computing Applications
• Parallel processing applications
Closely coupled
Finite Element Analysis (Crash Simulation)
Fluid Dynamics (Injection Molding)
Loosely coupled
Dataset searches (terabyte -> petabyte datasets)
Monte Carlo simulation (10,000s of repetitions; see the sketch below)
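For example, a loosely coupled Monte Carlo job parallelizes trivially because repetitions are independent; a small illustrative sketch (Python standard library only):

```python
# Loosely coupled parallelism: independent Monte Carlo repetitions
# estimating pi. Workers need no communication with each other, so
# the interconnect matters mainly for loading data and collecting results.
import random
from multiprocessing import Pool

def trials(n: int) -> int:
    """Count random points that land inside the unit quarter-circle."""
    rng = random.Random()
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        hits += (x * x + y * y) <= 1.0
    return hits

if __name__ == "__main__":
    n_workers, n_per_worker = 8, 100_000
    with Pool(n_workers) as pool:
        hits = sum(pool.map(trials, [n_per_worker] * n_workers))
    print("pi ~=", 4 * hits / (n_workers * n_per_worker))
```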
High Performance Computing Networks
• Two standards-based technologies
Gigabit Ethernet / 10 Gigabit Ethernet
InfiniBand
• Multiple uses
HPC interconnect
Storage traffic
Load/unload data movement
Application/systems management
Network Types
• Network style is guided by the application
Closely coupled applications
Latency is a problem
Throughput is key to resolving latency issues
Loosely coupled applications
Load/unload of datasets can be a key bottleneck
Low latency for rapid response is critical
IB Routing for HPC
• The Subnet Manager provides a global route engine for the entire IB fabric
• Shortest-path-first routing
• Round-robin load balancing
• Static routes (route computation sketched below)
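A hypothetical sketch (Python; structure invented for illustration) of what that route computation amounts to: per-switch shortest paths, with destinations spread round-robin across equal-cost ports and installed as static routes:

```python
# Toy subnet-manager route computation: BFS shortest paths, with
# round-robin assignment when several neighbors reach a destination
# at equal cost. A real SM installs the result as static LID routes.
from collections import deque

def shortest_hops(adj: dict[str, list[str]], src: str) -> dict[str, int]:
    """BFS hop counts from src to every reachable node."""
    dist = {src: 0}
    q = deque([src])
    while q:
        node = q.popleft()
        for nbr in adj[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                q.append(nbr)
    return dist

def build_routes(adj: dict[str, list[str]], switch: str) -> dict[str, str]:
    """Static route table for `switch`: destination -> next-hop neighbor."""
    dist = {n: shortest_hops(adj, n) for n in adj}
    routes, rr = {}, 0
    for dest in sorted(adj):
        if dest == switch:
            continue
        best = min(dist[nbr].get(dest, float("inf")) for nbr in adj[switch])
        candidates = [n for n in adj[switch] if dist[n].get(dest) == best]
        routes[dest] = candidates[rr % len(candidates)]  # round-robin spread
        rr += 1
    return routes

fabric = {"leaf1": ["spine1", "spine2"], "leaf2": ["spine1", "spine2"],
          "spine1": ["leaf1", "leaf2"], "spine2": ["leaf1", "leaf2"]}
print(build_routes(fabric, "leaf1"))  # leaf2 reachable via either spine
```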
HPC Storage Problem
• As the job grows, so does the storage
Clusters growing to 1,000 nodes or more
Storage growing into the petabyte range
• The storage problem
Load/unload a large dataset
Get shared access to large datasets on the fly
Performance is an issue: need multi-gigabyte/s throughput
Current Solutions
• NFS
Old standby: doesn't scale well (single server)
• Current commercial cluster file systems
Designed for multiple reader/writer situations
Don't scale beyond 10s of nodes
Don't necessarily manage the throughput problem
Need to build out a separate Fibre Channel fabric
HPC Storage Solutions
• Next-generation cluster file systems
Based on new file systems or modifications of old ones (NFS or iSCSI)
Split the data across multiple file-service hosts (see the striping sketch below)
Either act as a RAIF (Redundant Array of Inexpensive File-servers), or allow clients to access any one file server while everyone shares the same back-end storage devices
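A minimal sketch (Python; the mapping scheme is invented for illustration) of the data-splitting idea: each fixed-size block of a file is deterministically mapped onto one of several file-service hosts, so load and capacity spread across the array:

```python
# Toy block-to-server mapping for a striped cluster file system:
# each fixed-size block of a file lands on one of N file servers,
# so a large sequential read fans out across the whole array.
import zlib

BLOCK_SIZE = 1 << 20  # 1 MiB stripe unit (illustrative choice)

def server_for(servers: list[str], path: str, offset: int) -> str:
    """Deterministically pick the file server holding `offset` of `path`."""
    block = offset // BLOCK_SIZE
    return servers[(zlib.crc32(path.encode()) + block) % len(servers)]

servers = ["fs1", "fs2", "fs3", "fs4"]
for off in range(0, 4 * BLOCK_SIZE, BLOCK_SIZE):
    print(f"block {off // BLOCK_SIZE} -> {server_for(servers, '/data/set1', off)}")
```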
InfiniBand Storage Acceleration
• Further accelerate storage access
IB-based forwarding over SDP or DAPL can provide CPU offload and increased throughput
Low latency improves overall throughput and performance
High bandwidth improves load/unload times dramatically
HPC Glossary
• Fat-tree – non-blocking switch architecture (aka Clos)
• Bisectional bandwidth – the total system bandwidth across the middle of the network (worked example below)
• Non-blocking – full host bandwidth, all-to-all communication
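As a worked example (numbers illustrative, not from the deck): in a non-blocking two-level fat-tree of 24-port switches, bisectional bandwidth equals half the hosts times the per-link data rate:

```python
# Illustrative bisectional-bandwidth arithmetic for a non-blocking
# two-level fat-tree of 24-port switches (12 hosts down, 12 uplinks up).
LINK_DATA_GBPS = 8                      # 4X IB data rate after 8b/10b
hosts_per_leaf, uplinks_per_leaf, leaves = 12, 12, 8

assert uplinks_per_leaf >= hosts_per_leaf   # non-blocking condition per leaf
hosts = hosts_per_leaf * leaves
bisection_gbps = (hosts // 2) * LINK_DATA_GBPS  # half the hosts cross the cut
print(f"{hosts} hosts -> {bisection_gbps} Gb/s bisectional bandwidth")
# 96 hosts -> 384 Gb/s across the middle of the fabric
```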
A New Infrastructure Category
A New Category of Data Center Infrastructure – The Server Fabric Switch

[Diagram: a network switch connects clients to network resources (Internet, printer, server); a storage switch connects servers to SAN storage; the server fabric switch sits between the servers and both the storage network and the client network]
What Makes the Server Fabric Switch Different? Performance and Control

Virtualization (I/O, Storage, and CPU)
−Shared resources across the entire cluster
−Routing, aggregation, load balancing
−App/OS-to-CPU provisioning

High-performance server-to-server interconnect
−RDMA
−High bandwidth
−Low latency
−InfiniBand today; PCI-Express and/or 10GigE when ready

Policy-based dynamic resource mapping
Server Fabric Switch Applications – Why Performance and Control?

Server clustering
−High Performance Computing (HPC)
−"Enterprise-class" HPC
−Database scalability

Utility or grid computing
−Application provisioning
−Server re-purposing
−Server migration

I/O virtualization
−I/O consolidation
−I/O aggregation
−Server consolidation
Cisco Building Blocks (Server Fabric Switching)

Switches, with integrated system and fabric management
Gateway modules
- InfiniBand to Ethernet
- InfiniBand to Fibre Channel
Host Channel Adapter (HCA) with upper-layer protocols: SRP, SDP, uDAPL, MPI, IPoIB
Linux and Windows driver support
The Cisco SFS Product Line

Server fabric switches:
SFS 7000 (TS120) – (24) 4X IB
SFS 7008 (TS270) – (96) 4X IB
SFS 3001 (TS90) – (12) 4X IB + 1 gateway
SFS 3012 (TS360) – (24) 4X IB + 12 gateways (InfiniBand multifabric: (2) 2G FC GW or (6) GE GW modules)

HCAs:
(2) 4X IB PCI-X
(2) 4X IB PCI-Express
Remote boot; Linux and Windows host drivers

Blade servers:
IBM BladeCenter – HCA: (2) 1X IB PCI-X; embedded switch: (14) 1X IB (internal) + (1) 4X IB and (1) 12X IB (external)
Dell 1855 – HCA: (2) 4X IB PCI-Express; pass-thru module: (10) 4X IB

Software:
VFrame™ Server Fabric Virtualization Software R3.0

* plus InfiniBand cables
I/O Virtualization
The Evolution of I/O Virtualization

SMP
Pro: Single managed entity, fast backplane
Con: Expensive, proprietary server + backplane

Dis-aggregation (Ethernet network + Fibre Channel)
Pro: Standard servers, inexpensive
Con: Lots of managed components, low-performing interconnect

Virtualization
Pro: Reduced number of managed components, virtual I/O, fast standards-based backplane
Evolution of the Data Center – Network and Storage Virtualization

Evolution of the Data Center – Server Virtualization: The Server Switch
Virtual I/O for Network and Storage – Unified "wire-once" fabric

[Diagram: a server cluster connects over InfiniBand to a Cisco SFS 3012, which bridges to the SAN and the LAN/WAN]

Fibre Channel to InfiniBand gateway for storage access: two 2-Gbps Fibre Channel ports per gateway; creates a 10-Gbps virtual storage pipe to each server

Ethernet to InfiniBand gateway for LAN access: six Gigabit Ethernet ports per gateway; creates a virtual GigE pipe to each server

A single InfiniBand link carries both storage and network traffic
Virtual I/O for Blade Servers – Eliminating I/O Bottlenecks

[Diagram: server blades connect through a server switch to the SAN and LAN/WAN]

Integrated 10-Gbps InfiniBand switch or pass-thru module
Shared Ethernet I/O across blades and racks
Shared FC I/O across blades and racks
Integrated InfiniBand for Blade Servers – Create a "wire-once" fabric

• Integrated 10-Gbps InfiniBand switches provide a unified "wire-once" fabric
• Optimize density, cooling, space, and cable management
• Virtual I/O provides shared Ethernet and Fibre Channel ports across blades and racks
• Option of an integrated InfiniBand switch (e.g., IBM BladeCenter) or a pass-thru module (e.g., Dell 1855)

[Diagram: blade chassis with integrated IB switches and HCAs; 10-Gbps links to the blades, 30-Gbps uplinks]
Virtual I/O: How it Works
Transparent Topology Architecture
• IP Communications
Inside fabric: IP over InfiniBand (IPoIB) enables transparent communications for any IP-based applications
Outside fabric: InfiniBand-to-Ethernet gateways provide transparent access to existing IP Ethernet infrastructure
• Fibre Channel Storage
Inside fabric: SCSI RDMA Protocol (SRP) enables SCSI over InfiniBand
Outside fabric: InfiniBand-to-Fibre Channel gateways provide transparent access to existing SAN infrastructure
• Cluster Communications
Inside fabric: Remote DMA protocols for transparent performance enhancements
The InfiniBand Driver Architecture

[Diagram: applications reach the fabric through BSD Sockets, the file-system API, uDAPL/DAT, and NFS-RDMA; TCP/IP or SDP run over IPoIB for networking, while SRP feeds the SCSI stack and FCP toward Fibre Channel for storage; user- and kernel-level paths converge on the VAPI driver layer and the InfiniBand HCA. The unified fabric reaches the LAN/WAN through an Ethernet gateway and the SAN through an FC gateway.]
InfiniBand-to-Ethernet Gateway Overview
• Ensures seamless integration with IP-based applications
• Acts like an L2 bridge between IB and Ethernet
• The bridge group is the main forwarding entity
• A bridge group has two bridge ports: Ethernet and IPoIB
• A bridge group bridges one VLAN to one IB partition
• The Ethernet bridge port can be tagged or untagged
• The Ethernet bridge port can aggregate up to 6 ports (see the sketch below)
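A hypothetical data-model sketch (Python; names invented, not the gateway's actual configuration syntax) of the bridge-group concept, pairing one VLAN with one IB partition behind its two bridge ports:

```python
# Toy model of the gateway's forwarding entity: one bridge group
# binds an Ethernet bridge port (a VLAN, possibly over an aggregate
# of up to six physical ports) to an IPoIB bridge port (a partition).
from dataclasses import dataclass

@dataclass
class EthernetBridgePort:
    ports: list[str]              # aggregated physical ports, max 6
    vlan: int | None = None       # None = untagged

    def __post_init__(self):
        assert 1 <= len(self.ports) <= 6, "aggregate is 1-6 ports"

@dataclass
class IPoIBBridgePort:
    pkey: int                     # IB partition key

@dataclass
class BridgeGroup:
    eth: EthernetBridgePort       # one VLAN ...
    ib: IPoIBBridgePort           # ... bridged to one IB partition

bg = BridgeGroup(
    eth=EthernetBridgePort(ports=["ge0", "ge1"], vlan=100),
    ib=IPoIBBridgePort(pkey=0x8001),
)
print(f"VLAN {bg.eth.vlan} <-> partition {bg.ib.pkey:#06x}")
```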
InfiniBand-to-Ethernet Gateway Features
• IP-only protocols
• 802.1Q VLAN support
• Link aggregation
• IPv4 multicast support
• Loop protection
• Ethernet jumbo frames up to 9k
• IP fragmentation
• High availability
InfiniBand-to-Fibre Channel Gateway
• Ensures seamless integration with important SAN tools
Fabric-based zoning
LUN-based access controls
Storage- and host-based HA and load-balancing tools
• Creates SAN network addresses on InfiniBand
SAN management tools must "see" each node
Creates a "talk-through" mode with virtual WWNNs per server (see the sketch below)
• Enables SAN interoperability with InfiniBand
Appears as a virtual NL_Port, N_Port, or E_Port
Proven interoperability with Cisco MDS, Brocade, McData, QLogic, EMC, IBM, Hitachi, and more
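A minimal sketch (Python; the WWNN numbering scheme is invented, not Cisco's actual assignment) of the per-server virtual identity idea: the gateway gives each InfiniBand-attached server a stable WWNN, so zoning and LUN masking treat it like a directly attached N_Port:

```python
# Toy allocator for per-server virtual WWNNs on an IB-to-FC gateway.
# Each IB host gets a stable 8-byte Fibre Channel identity so existing
# SAN zoning and LUN-masking tools can address it individually.
BASE_WWNN = "20:00:00:05:ad:00"   # illustrative 6-byte prefix, not real

def virtual_wwnn(server_index: int) -> str:
    """Derive a stable 8-byte WWNN from a small server index."""
    assert 0 <= server_index < 0x10000
    return f"{BASE_WWNN}:{server_index >> 8:02x}:{server_index & 0xFF:02x}"

wwnn_by_server = {f"host{i}": virtual_wwnn(i) for i in range(3)}
for host, wwnn in wwnn_by_server.items():
    print(f"{host} appears on the SAN as an N_Port with WWNN {wwnn}")
```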
Typical SAN Today

[Diagram: two FC RAID complexes, each exporting LUN0..LUNn behind WWNN_a..WWNN_f, attached through a Fibre Channel fabric to servers and sniffer servers for monitoring/analysis]

Assumptions made by SAN management tools:
Each physical connection is a logical connection
Each physical connection has a unique address
Logical connectivity (access) between nodes can be controlled by physical address
Physical vs. Logical View

[Diagram: the physical view shows many servers, including sniffer servers for monitoring/analysis, sharing gateway links into the SAN; the logical view shows each server with its own connection]

Each server is assigned a unique WWNN and appears as a direct-attached N_Port.
Server Virtualization
Three Categories of Server Virtualization
• Virtual Machine: splits a server into independent virtual servers
VMware, Xen, Microsoft
Main value is higher server utilization
• Virtual SMP: combines servers into a single managed, more powerful entity
Virtual Iron, Qlusters
Main value is scaling mission-critical apps on commodity hardware
• Physical Server Virtualization: makes servers stateless by moving server identity into the network, including storage and the I/O subsystem
Cisco VFrame™, Egenera
Main value is making infrastructure change easier in heterogeneous environments
VFrame Server Virtualization Framework – Building Blocks

[Diagram: a stack of building blocks]
Fabrics: Ethernet (I/O), Fibre Channel (storage), InfiniBand (RDMA) for high-performance server-to-server connectivity
Topology transparency
Virtualization and boot services
Policy and provisioning services
VFrame™
• Software suite that makes the Server Switch programmable
• Three main components:
VFrame™ Embedded System Logic – policy ingestion, interpretation, and enforcement at the server switch
VFrame™ APIs (and SDK) – allow 3rd-party management and provisioning tools (end-user customers, software partners, system-vendor OEMs) to program and manage the server switch fabric
VFrame™ Director – software package that disseminates policies to the server switch fabric; central policy enforcement provides better system-wide decision making and conflict arbitration; can be installed on any server in the network
VFrame™ Programmability

1) The Server Switch receives a policy from VFrame™ Director or 3rd-party software.
2) Based on the policy, the Server Switch assembles the virtual server:
Selects server(s) that meet minimum criteria (e.g., CPU, memory)
Boots server(s) over the network with the appropriate app/OS image
Creates virtual IPs in servers and maps them to VLANs for client access
Creates virtual HBAs in servers and maps them to zones, LUNs, and WWNNs for storage access
(a provisioning sketch follows below)

[Diagram: a policy flows into the Cisco SFS 3012, which assembles CPUs, virtual IPs toward the LAN, and virtual HBAs toward the SAN into a virtual server]
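A hypothetical sketch (Python; the policy schema and names are invented, not VFrame's actual interfaces) of step 2: select free servers against the policy's minimum criteria, then assign each a boot image and virtual identities:

```python
# Toy policy-driven provisioning: pick free servers that satisfy the
# policy's minimum CPU/memory, then hand each a virtual identity
# (IP mapped to a VLAN, HBA mapped to a WWNN) before network boot.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    cpus: int
    mem_gb: int
    free: bool = True

def provision(policy: dict, pool: list[Server]) -> list[dict]:
    chosen = [s for s in pool
              if s.free and s.cpus >= policy["min_cpus"]
              and s.mem_gb >= policy["min_mem_gb"]][: policy["count"]]
    if len(chosen) < policy["count"]:
        raise RuntimeError("policy cannot be satisfied by the free pool")
    virtual_servers = []
    for i, s in enumerate(chosen):
        s.free = False
        virtual_servers.append({
            "server": s.name,
            "boot_image": policy["image"],                  # network-boot image
            "vip": f"10.0.{policy['vlan']}.{10 + i}",       # vIP on client VLAN
            "vhba_wwnn": f"20:00:00:00:00:00:00:{i:02x}",   # storage identity
        })
    return virtual_servers

pool = [Server("blade1", 4, 8), Server("blade2", 2, 4), Server("blade3", 4, 16)]
policy = {"count": 2, "min_cpus": 4, "min_mem_gb": 8,
          "vlan": 100, "image": "app-os-v1"}
for vs in provision(policy, pool):
    print(vs)
```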
Case Study: Large Wall Street Bank – Enterprise Grid Computing

• Application: Replace proprietary platforms with standards-based components
Build a scalable "on-demand" compute grid for financial applications
• Benefits:
20X price/performance improvement over four years
30-50% application performance improvement
Standards-based solution for on-demand computing
Environment that scales using 500-node building blocks
Centralized shared I/O pool for flexibly allocating SAN/IP bandwidth

[Diagram: a 512-node grid; 44 24-port edge switches (12 hosts each) feed a core fabric, with grid I/O gateways into the existing network, LAN, and storage]
Additional Content

• Cisco – http://www.cisco.com/go/datacenter
• Cisco Press – http://ciscopress.com/datacenterfundamentals
Q and A