Scalable Data Center Designs for Canadian Small & Medium Size Business Session ID T-DC-16-I
Simon Vaillancourt, Systems Engineer
Email: [email protected]
http://ca.linkedin.com/in/simonvaillancourt/
@svaillancourt #CiscoConnect_TO
Housekeeping Notes
Thank you for attending Cisco Connect Toronto 2014. Here are a few housekeeping notes to ensure we all enjoy the session today.
Please ensure your cellphones are set to silent so no one is disturbed during the session.
Please keep this session interactive and ask questions. If we get sidetracked, I may ask that questions be held until the end of the session to ensure all material is covered.
A Few Key Words to Consider
SCALABLE:
– Right-sizing the Data Center, not just large scale.
– Using components that will also transition easily into larger designs.
SMALL-MEDIUM:
– Requiring a dedicated pair of DC switches.
– The transition point upwards from collapsed-core.
– Separate Layer 2/3 boundary, with DC-oriented feature set.
– Layer-2 edge switching for virtualization.
DATA CENTER DESIGNS:
– Tradeoffs of components to fill topology roles.
[Diagram: WAN/Internet Edge, Client Access/Enterprise, and Data Center blocks, with the Layer-2/3 boundary in the Data Center.]
Session Agenda
Midsize Data Center Requirements
– Goals and Challenges
– Fabric Requirements
Starting Point: The Access Pod
– Compute and Storage Edge Requirements
– Key Features
Single Pod Design Examples
– Fixed/Semi-modular/Modular Designs
– vPC Best Practices
Moving to a Multi-Tier Fabric
– Spine/Leaf Designs, roadmap to ACI/DFA
– FabricPath Best Practices
Midsize Data Center Goals and Challenges
Provide example designs which are:
Flexible: to support different types of Servers, Storage, Applications, and Service Integration requirements.
Practical: to balance cost with port density requirements, software features, and hardware capabilities.
Agile: allow rapid growth of the network as needs change. Reuse components in new roles for investment protection.
Choose features to prioritize when making design choices:
Leaf/Access Features: Robust FEX options, Enhanced vPC, 10GBASE-T support, Unified Ports (Native Fibre Channel), FCoE, Adapter-FEX, VM-FEX
Spine/Aggregation Features: 40 Gig-E, Routing Scale, OTV, MPLS, HA, VDCs
Growth with Investment Protection
Re-use key switching components as the design scales
[Diagram: three stages of growth: a single switching layer with direct-attached servers and FEX, a spine/leaf switch fabric, and a scaled spine/leaf fabric with automation and orchestration.]
The single layer expands to form the spine/leaf fabric design. Easily scale the fabric further:
– Add spine switches to scale fabric bandwidth
– Add leaf switches to scale edge port density
Server and Storage needs Drive Design Choices
Virtualization Requirements
– vSwitch/DVS/OVS
– Nexus 1000V, VM-FEX, Adapter-FEX
– APIs/Programmability/Orchestration
Connectivity Model
– 10 or 1-GigE Server ports
– NIC/HBA Interfaces per-server
– NIC Teaming models
Form Factor
– Unified Computing Fabric
– 3rd Party Blade Servers
– Rack Servers (Non-UCS Managed)
Storage Protocols
– Fibre Channel
– FCoE
– IP (iSCSI, NAS)
Data Center Fabric Requirements
• Varied “North-South” communication needs with end-users and external entities.
• Increasing “East-West” communication: clustered applications and workload mobility.
• High throughput and low latency requirements.
• Increasing high availability requirements.
• Automated provisioning and control with orchestration, monitoring, and management tools.
[Diagram: the data center fabric carries north-south traffic to the enterprise network, Internet/public cloud, and offsite DC (Site B), and east-west traffic among server/compute, storage (FC, FCoE, iSCSI/NAS), and mobile services, with orchestration/monitoring attached via API.]
Session Agenda
Midsize Data Center Requirements
– Goals and Challenges
– Fabric Requirements
Starting Point: The Access Pod
– Compute and Storage Edge Requirements
– Key Features
Single Pod Design Examples
– Fixed/Semi-modular/Modular Designs
– vPC Best Practices
Moving to a Multi-Tier Fabric
– Spine/Leaf Designs, roadmap to ACI/DFA
– FabricPath Best Practices
Access Pod basics: Compute, Storage, and Network
[Diagram: an access pod drawn two ways, with the same components: an access/leaf switch pair, a storage array, and a UCS Fabric Interconnect system, uplinked to the data center aggregation layer or network core. "Different Drawing, Same Components"]
Access Pod Features: Virtual Port Channel (vPC)
[Diagram: non-vPC vs. vPC physical and logical topologies.]
Port-Channels allow aggregation of multiple physical links into a logical bundle.
vPC allows Port-channel link aggregation to span two separate physical switches.
With vPC, Spanning Tree Protocol is no longer the primary means of loop prevention.
Provides more efficient bandwidth utilization since all links are actively forwarding.
vPC maintains independent control and management planes.
Two peer vPC switches are joined together to form a "domain".
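As an illustration, a minimal NX-OS configuration sketch of a vPC domain (interface numbers, VLANs, and peer-keepalive addressing are hypothetical; verify details against the platform configuration guide):

  feature lacp
  feature vpc
  !
  vpc domain 10
    ! out-of-band peer-keepalive between the two peer switches
    peer-keepalive destination 10.0.0.2 source 10.0.0.1
  !
  interface port-channel1
    description vPC peer-link
    switchport mode trunk
    vpc peer-link
  !
  interface ethernet1/1-2
    channel-group 1 mode active
  !
  ! Downstream vPC to a dual-attached switch, FEX, or 802.3ad server
  interface port-channel11
    switchport mode trunk
    vpc 11

The same port-channel and vPC number are configured on both peer switches; the attached device sees a single logical port channel.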
Access Pod Features: Nexus 2000 Fabric Extension
• Using FEX provides Top-of-Rack presence in more racks with fewer points of management, less cabling, and lower cost.
• In a “straight-through” FEX configuration, each Nexus 2000 FEX is only connected to one parent switch.
• Supported straight-through FEX parent switches are Nexus 5000, 6000, 7000 and 9300*
• Nexus 2000 includes 1/10GigE models, plus the B22 models for use in blade server chassis from HP, Dell, Fujitsu, and IBM.
*with upcoming NX-OS 6.1(2)I2(3)
Verify FEX scale and compatibility on cisco.com per platform.
[Diagram: Nexus 2000 FEX connected to Nexus parent switches; end/middle-of-row switching with FEX.]
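A sketch of straight-through FEX provisioning on the parent switch (the FEX number and uplink ports are illustrative):

  feature fex
  !
  fex 101
    description rack-1-n2232
  !
  ! Parent-switch uplinks to the FEX
  interface ethernet1/1-2
    switchport mode fex-fabric
    fex associate 101
    channel-group 101
  !
  ! FEX host ports then appear as local interfaces on the parent switch
  interface ethernet101/1/1
    switchport access vlan 10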
Nexus Fabric Features: Enhanced vPC (EvPC)
Dual-homed FEX with the addition of dual-homed servers
• In an EvPC configuration, server NIC teaming configurations or single-homed servers are supported on any port; no vPC ‘orphan ports’
• All components in the network path are fully redundant.
• Supported FEX parent switches are Nexus 6000, 5600 and 5500.
• Provides flexibility to mix all three server NIC configurations (single NIC, Active/Standby and NIC Port Channel).
* Port Channel to an active/active server is not configured as a “vPC”.
* N7000 support for dual-homed FEX without dual-homed servers is targeted for NX-OS 7.1.
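A sketch of the dual-homed (active/active) FEX attachment that EvPC builds on, assuming a vPC domain is already configured as in the earlier example (numbers are illustrative). The FEX fabric port-channel is defined identically on both parent switches and joined with a vPC:

  ! On both Nexus parent switches
  interface port-channel101
    switchport mode fex-fabric
    fex associate 101
    vpc 101
  !
  interface ethernet1/3
    switchport mode fex-fabric
    fex associate 101
    channel-group 101

Port channels to 802.3ad servers behind the dual-homed FEX are then configured identically on both parents without the vpc keyword, per the note above.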
Nexus Fabric Features: FCoE and Unified Ports
Seamless transport of both storage and data traffic at the server edge
Unified Ports:
• May be configured to support either native Fibre Channel or Ethernet
• Available on Nexus 5500/5600UP switches, or as an expansion module on Nexus 6004.
Fibre Channel over Ethernet (FCoE):
• FCoE allows encapsulation and transport of Fibre Channel traffic over an Ethernet network
• Traffic may be extended over Multi-Hop FCoE, or directed to an FC SAN
• SAN “A” / “B” isolation is maintained across the network
[Diagram: servers with CNAs connect over FCoE links to Nexus Ethernet/FC switches; SAN A / SAN B isolation is maintained; any Unified Port can be configured for Ethernet or Fibre Channel traffic; native Fibre Channel links connect to the disk array.]
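A hedged configuration sketch of Unified Ports and a basic FCoE binding (Nexus 5500-style syntax; port, VLAN, and VSAN numbers are hypothetical, and changing port types typically requires a reload):

  ! Reserve the last ports of the slot for native Fibre Channel
  slot 1
    port 31-32 type fc
  !
  ! Map an FCoE VLAN to a VSAN and bind a virtual FC interface to the server-facing port
  feature fcoe
  vsan database
    vsan 10
  vlan 100
    fcoe vsan 10
  !
  interface vfc11
    bind interface ethernet1/11
    no shutdown
  vsan database
    vsan 10 interface vfc11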
Planning Physical Data Center Pod Requirements
[Diagram: example compute rack with (2) N2232 FEX and (32) 1RU rack servers, alongside a network/storage rack.]
Plan for growth in a modular, pod-based repeatable fashion.
Your own “pod” definition may be based on compute, network, or storage requirements.
How many current servers/racks and what is the expected growth?
Map physical Data Center needs to a flexible communication topology.
Nexus switching at Middle or End of Row will aggregate multiple racks of servers with FEX.
[Diagram, continued: network/storage rack with (2) N5548UP, storage arrays, terminal server and management switch, and patching; pods repeat from today’s server racks to tomorrow’s data center floor.]
Working with 10 and 40 Gigabit Ethernet
[Images: QSFP-40G-SR4 with direct MPO and 4x10 MPO-to-LC duplex splitter fiber cables; QSFP-40G-CR4 direct-attach cables; QSFP+ to 4x SFP+ direct-attach splitter cables.]
Nexus 2000, 3000, 5000, 6000, 7000 and 9000 platforms support SFP+ and QSFP-based 10/40 Gigabit Ethernet interfaces.*
40 Gigabit Ethernet cable types:
o Direct-attach cables/twinax (low power, low cost)
o QSFP to 4x SFP+ splitter cables
o Optics with SR4, CSR4, LR4
* Verify platform-specific support of specific optics/distances from reference slide
40G Offering: QSFP BiDi Support
• Utilizes the duplex fiber commonly deployed in 10G environments today
• Reduces 40G transition cost by eliminating the need to upgrade the fiber plant
• 75% average savings over parallel fiber for new deployments
[Diagram: 12-fiber MPO (parallel) vs. duplex LC (BiDi) cabling.]
QSFP/SFP+ References
QSFP BiDi 40Gig Datasheet
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps13386/datasheet-c78-730160.html
QSFP 40Gig datasheet
http://www.cisco.com/en/US/prod/collateral/modules/ps5455/data_sheet_c78-660083_ps11541_Products_Data_Sheet.html
Platform specific QSFP compatibility matrix
http://www.cisco.com/en/US/docs/interfaces_modules/transceiver_modules/compatibility/matrix/OL_24900.html
Platform specific SFP+ compatibility matrix
http://www.cisco.com/c/en/us/td/docs/interfaces_modules/transceiver_modules/compatibility/matrix/OL_6974.html
40Gig Cabling White Papers
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps13386/white-paper-c11-729493.pdf
http://www.cisco.com/en/US/products/ps11708/index.html
For Your Reference
Data Center Service Integration Approaches
[Diagram: physical DC service appliances (firewall, ADC/SLB, etc.) attached near the network core, and virtual DC services in software running on virtualized servers with Nexus 1000V and vPath.]
Data Center Service Insertion Needs
o Firewall, Intrusion Prevention
o Application Delivery, Server Load Balancing
o Network Analysis, WAN Optimization
Physical Service Appliances
o Typically introduced at the Layer 2/3 boundary or Data Center edge.
o Traffic direction with VLAN provisioning, Policy-Based Routing, WCCP.
o Use PortChannel connections to vPC.
o Statically routed through vPC, or transparent.
Virtualized Services
o Deployed in a distributed manner along with virtual machines.
o Traffic direction with vPath and Nexus 1000V.
o Cloud Services Router (CSR 1000V) for smaller-scale DCI/OTV, etc.
Cisco InterCloud: Workload Portability for the Hybrid Cloud
[Diagram: private cloud connected to a Cisco Powered public cloud through InterCloud Director, the InterCloud Secure Fabric, and the InterCloud Provider Enablement Platform.]
• Secure network extension
• Workload mobility
• Administration portal
• Workload management and cloud APIs
• Dev/Test
• Control of “Shadow IT”
• Capacity Augmentation
• Disaster Recovery
Cisco UCS Director Infrastructure Management
[Diagram: UCS Director on top of domain managers, providing on-demand automated delivery, policy-driven provisioning, and secure cloud containers across OS/virtual machines, compute, network, and storage for multiple tenants, both virtualized and bare metal.]
Unified pane of glass: end-to-end automation and lifecycle management.
Cisco Management Software Portfolio
UCS Manager, Central, APIC, DCNM and UCS Director
UCS Manager
• Manage a single UCS domain
• Embedded management of all UCS s/w and h/w components
UCS Central
• Manage multiple UCS domains
• Deliver global policies, service profiles, ID pools, and templates
UCS Director
• Manage Compute, Storage, Network, ACI and Virtualisation
• Manage FlexPod, VSPEX, Vblock
• Support for 3rd-party heterogeneous infrastructure
APIC & DFA
• Embedded management for ACI
• Manages the ACI fabric
• L4-7 management
• Policies: Connectivity, Security & QoS
Session Agenda
Midsize Data Center Requirements
– Goals and Challenges
– Fabric Requirements
Starting Point: The Access Pod
– Compute and Storage Edge Requirements
– Key Features
Single Pod Design Examples
– Fixed/Semi-modular/Modular Designs
– vPC Best Practices
Moving to a Multi-Tier Fabric
– Spine/Leaf Designs, roadmap to ACI/DFA
– FabricPath Best Practices
Single Layer DC, Fixed/Semi-Modular Switching
[Diagram: Nexus 5600 pair at the Layer-2/3 boundary with Nexus 2000 FEX; 10- or 1-Gig attached UCS C-Series and 1Gig/100M servers; FC, FCoE, and iSCSI/NAS storage; uplinks to client access and WAN/DCI.]
Nexus 5600 Data Center Switches:
• 5672UP: 1RU, 48 x 1/10GE + 6 QSFP (16 Unified Ports)
• 56128P: 2RU, 48 x 1/10GE + 4 QSFP, 2 expansion slots (24 x 1/10GE Unified Port + 2 QSFP module available)
Non-blocking, line-rate Layer-2/3 switching with low latency ~1 µs.
FCoE plus 2/4/8G Fibre Channel options.
Hardware-based Layer-2/3 VXLAN, NVGRE.
Dynamic Fabric Automation (DFA) capable.
Design Notes:
OTV, LISP DCI may be provisioned through separate Nexus 7000 or ASR 1000 WAN Routers
ISSU not supported with Layer-3 on Nexus 5000/6000
Single Layer Data Center, Modular Chassis
High Availability, Modular 1/10/40/100 GigE
• Nexus 7700: common ASICs and software shared with the Nexus 7000 platform.
• Using the F3 I/O Module, concurrent support for: OTV, LISP, MPLS, VPLS, FabricPath, FCoE and FEX.
• Dual-Supervisor High Availability.
• Layer-2/3 In Service Software Upgrade (ISSU).
• Virtual Device Contexts (VDC).
• Layer-2/3 VXLAN in hardware on the F3 card.
• Dynamic Fabric Automation support; NX-OS 7.1.
Design Notes:
For native Fibre Channel add Nexus/MDS SAN.
FCoE direct to FEX support planned for NX-OS 7.1
[Diagram: Nexus 7706 pair at the Layer-2/3 boundary with FEX and 10- or 1-Gig attached UCS C-Series servers; iSCSI/NAS storage; Nexus 7004 with Spine/Leaf and OTV VDCs toward the WAN; campus client access.]
Single Layer Data Center, Nexus 6004
Positioned for rapid scalability and a 40-GigE fabric
[Diagram: Nexus 6004 pair at the Layer-2/3 boundary with FEX; 10- or 1-Gig attached UCS C-Series servers; FC, FCoE, and iSCSI/NAS storage; campus client access and WAN/DCI uplinks.]
Nexus 6004-EF Benefits:
Up to 96 40-GigE, 160 UP or 384 10-GigE
Integrated line-rate layer-3
Native 40-Gig switch fabric capability
Low ~1us switch latency at Layer-2/3
Line-rate SPAN at 10/40 GigE
Example Components:
2 x Nexus 6004-EF, 24 40G or 96 10G ports active
L3, Storage Licensing and M20UP LEM
8 x Nexus 2248PQ or 2232PP/TM-E
Note: FCoE, iSCSI, NAS storage are supported and native FC module just released.
Single Layer Data Center, ACI-Ready Platform
[Diagram: Nexus 9396PX pair at the Layer-2/3 boundary with FEX; 10-GigE and 1Gig/100M attached UCS C-Series servers; iSCSI/NAS storage; client access and WAN/DCI uplinks.]
Nexus 9000 switching platforms enable migration to Application Centric Infrastructure (ACI).
May also be deployed in standalone NX-OS mode (no APIC controller).
• 9396PX: 48 1/10GigE SFP+ ports, 12 QSFP
• 9504: Small-footprint HA modular platform
• Basic vPC and straight-through FEX supported as of NX-OS 6.1(2)I2(3)
• VXLAN Layer-2/3 in hardware
• IP-based storage support
• Low latency, non-blocking Layer-2/3 switching ~1µs
Design Notes:
OTV, LISP DCI may be provisioned through separate Nexus 7000 or ASR 1000 WAN Routers.
Fibre Channel or FCoE support requires separate MDS or Nexus 5500/5600 SAN switching. (Future FCoE capable)
ISSU support targeted for 2HCY14 on 9300.
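As an illustration of the hardware VXLAN support noted above, a minimal flood-and-learn VXLAN sketch for standalone NX-OS (VLAN, VNI, and multicast group values are hypothetical, and the underlay must provide IP multicast for BUM traffic):

  feature nv overlay
  feature vn-segment-vlan-based
  !
  ! Map a local VLAN to a VXLAN network identifier (VNI)
  vlan 100
    vn-segment 10100
  !
  ! Network Virtualization Edge interface; BUM traffic uses the multicast group
  interface nve1
    no shutdown
    source-interface loopback0
    member vni 10100 mcast-group 239.1.1.1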
Single Layer Data Center plus UCS Fabric
Alternate Server Edge 1: UCS Fabric Interconnects with Blade and Rack Servers
[Diagram: Nexus 5672UP pair at the Layer-2/3 boundary; UCS Fabric Interconnects with UCS B-Series chassis and UCSM-managed C-Series servers behind Nexus 2232PP FEX; FC/FCoE and iSCSI/NAS storage; campus client access and WAN/DCI uplinks.]
Typically 4 – 8 UCS Chassis per Fabric Interconnect pair. Maximum is 20.
UCSM can also manage C-Series servers through 2232PP FEX to UCS Fabric.
Dedicated FCoE uplinks from UCS FI to the Nexus 5672UP for FCoE/FC SAN Access
Nexus switching layer provides inter-VLAN routing, upstream connectivity, and storage fabric services.
Example DC Switching Components:
2 x Nexus 5672UP
Layer-3 and Storage Licensing
2 x Nexus 2232PP/TM-E
Single Layer Data Center plus B22 FEX
Alternate Server Edge 2: HP, Dell, Fujitsu or IBM Blades Example with B22 FEX
[Diagram: Nexus 5672UP pair at the Layer-2/3 boundary; Cisco B22 FEX for 3rd-party blade chassis access plus UCS C-Series rack servers; FC/FCoE and iSCSI/NAS storage; campus client access and WAN/DCI uplinks.]
B22 FEX allows Fabric Extension directly into compatible 3rd-party chassis.
Provides consistent network topology for multiple 3rd-party blade systems and non-UCSM rack servers.
FC or FCoE-based storage
Example Components:
2 x Nexus 5672UP
L3 and Storage Licensing
4 x Nexus B22
Server totals vary based on optional use of additional FEX.
Flexible Design with Access Pod Variants
Mix and match Layer-2 compute connectivity for migration or scale requirements
• Rack Server Access with FEX
• UCS Managed Blade and Rack
• B22 FEX with 3rd Party Blade Servers
• 3rd Party Blades with PassThru and FEX
More features, highest value and physical consolidation
Nexus switching and FEX provide operational consistency
Configuration Best Practices Summary: vPC with Layer-2, Layer-3
Virtual Port Channel and Layer-2 Optimizations
What features to enable?
Autorecovery: Enables a single vPC peer to bring up port channels after power outage scenarios
Orphan Port Suspend: Allows non-vPC ports to fate-share with vPC, enables consistent behavior for Active/Standby NIC Teaming
vPC Peer Switch: Allows vPC peers to behave as a single STP Bridge ID (not required with vPC+ with FabricPath)
Unidirectional Link Detection (UDLD): Best practice for fiber port connectivity to prevent one-way communication (use “normal” mode)
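A sketch of where these Layer-2 knobs sit in the configuration (domain and interface numbers are illustrative):

  vpc domain 10
    peer-switch
    auto-recovery
  !
  feature udld
  ! once enabled, UDLD runs on fiber ports in normal mode by default
  !
  ! Suspend orphan ports toward active/standby NICs if the vPC peer-link fails
  interface ethernet101/1/10
    vpc orphan-port suspend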
For Your Reference
Virtual Port Channel and Layer-3 Optimizations
What features to enable?
vPC and HSRP: Keep HSRP timers at defaults, vPC enables active/active HSRP forwarding
vPC Peer Gateway: Allows the peers to respond to the HSRP MAC, as well as the physical MACs of both peers.
IP ARP Synchronize: Proactively synchronizes the ARP table between vPC Peers over Cisco Fabric Services (CFS)
Layer-3 Peering VLAN: Keep a single VLAN for IGP peering between N5k/6k vPC peers on the peer link. (On N7k can also use a separate physical link)
Bind-VRF: Required on Nexus 5500, 6000 for multicast forwarding in a vPC environment. (Not required if using vPC+ with FabricPath)
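A sketch of the corresponding Layer-3 configuration (VLANs, addressing, and the routing protocol shown are hypothetical):

  feature interface-vlan
  feature hsrp
  feature ospf
  !
  vpc domain 10
    peer-gateway
    ip arp synchronize
  !
  router ospf 1
  !
  ! Single VLAN carried on the peer-link for IGP peering between the vPC peers
  interface vlan99
    ip address 10.99.0.1/30
    ip router ospf 1 area 0.0.0.0
  !
  ! Server-facing SVI: default HSRP timers, both vPC peers forward
  interface vlan10
    ip address 10.10.0.2/24
    hsrp 10
      ip 10.10.0.1
  !
  ! Nexus 5500/6000 multicast with vPC: dedicate an otherwise-unused VLAN
  vpc bind-vrf default vlan 4040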
For Your Reference
Session Agenda
Midsize Data Center Requirements
– Goals and Challenges
– Fabric Requirements
Starting Point: The Access Pod
– Compute and Storage Edge Requirements
– Key Features
Single Pod Design Examples
– Fixed/Semi-modular/Modular Designs
– vPC Best Practices
Moving to a Multi-Tier Fabric
– Spine/Leaf Designs, roadmap to ACI/DFA
– FabricPath Best Practices
Migration from single-layer to spine/leaf fabric
[Diagram: a Nexus 7000 or 6004 single layer and a Nexus 5000 or 9300 single layer expand into a spine/leaf data center fabric.]
• Larger switches more suited to becoming spine layer.
• Smaller switches more suited to becoming leaf/access.
• Layer-3 gateway can migrate to spine switches or to “border-leaf” switch pair.
• Spine switches can support leaf switch connections, plus some FEX and direct-attached servers during migration.
Dynamic Fabric Automation (DFA)
Modular building blocks for migration to an automated fabric
[Diagram: DFA fabric with a Nexus 7k/6k spine, Nexus 7k/6k/5k leaf switches, and Nexus 7k/6k border-leaf switches toward WAN/DCI and client access; DCNM provides the DFA central point of management.]
Workload Automation:
• Integration with cloud orchestration stacks for dynamic configuration of fabric leaf switches.
Optimized Networking:
• Provides a distributed default gateway in the leaf layer to handle traffic from any subnet or VLAN.
Virtual Fabrics:
• Implements segment-id in frame header to eliminate hard VLAN scale limits, supports multi-tenancy.
Fabric Management:
• Provides central point of fabric management (CPOM) for network, virtual-fabric and host visibility.
• Auto-configuration of new switches to expand the fabric using POAP, cable plan consistency check.
Application Centric Infrastructure (ACI)
APIC controller-managed fabric based on Nexus 9000 hardware innovations
[Diagram: ACI fabric with a Nexus 9500 spine, Nexus 9300 leaf switches, and Nexus 9000 border-leaf switches toward WAN/DCI and client access, managed by an Application Policy Infrastructure Controller (APIC) cluster.]
• Centralized provisioning and abstraction layer for control of the switching fabric.
• Simplified automation with an application-driven policy model.
• Controller provides policy to switches in the fabric but is not in the forwarding path.
• Normalizes traffic to a VXLAN encapsulation with Layer-3 Gateway and optimized forwarding.
• Decouples endpoint identity, location, and policy from the underlying topology.
• Provides for service insertion and redirection.
Expanded Spine/Leaf Nexus Data Center Fabric
Introduction of the Spine layer, and FabricPath forwarding
Data Center switching control plane distributed over Dual Layers.
• Spine: FabricPath switch-id based forwarding, but also providing Layer-3 and service integration.
• Leaf: Physical TOR switching or FEX aggregation for multiple racks.
Multi-hop FCoE with dedicated links.
Example Components:
• 2 x Nexus 6004, 2 x Nexus 5672UP
• Layer-3 and Storage Licensing
• 12 x Nexus 2232PP/TM-E
FabricPath enabled between tiers for configuration simplicity and future expansion.
[Diagram: Nexus 6004 spine at the Layer-2/3 boundary and Nexus 5600 leaf switches with FabricPath forwarding between tiers; 10- or 1-Gig attached UCS C-Series servers; FC, FCoE, and iSCSI/NAS storage; WAN/DCI uplinks.]
Adding Access Pods to Grow the Fabric
Modular expansion with added leaf-switch pairs
[Diagram: Nexus 6004 spine and multiple Nexus 5600 leaf pairs at the Layer-2/3 boundary, each leaf pair serving a rack-server access pod with FEX; FC, FCoE, and iSCSI/NAS storage; WAN/DCI uplinks.]
Data Center switching control plane distributed over Dual Layers.
• Spine: FabricPath switch-id based forwarding, but also providing Layer-3 and service integration.
• Leaf: Physical TOR switching or FEX aggregation for multiple racks.
Multi-hop FCoE with dedicated links.
Example Components:
• 2 x Nexus 6004, 4 x Nexus 5672UP
• Layer-3 and Storage Licensing
• 24 x Nexus 2232PP/TM-E
FabricPath enabled between tiers for configuration simplicity and future expansion.
Modular, High Availability Data Center Fabric
Virtual Device Contexts partitioning the physical switch
[Diagram: Nexus 7700 spine partitioned into Core, Spine, Storage, and OTV VDCs toward the WAN, with 5672UP leaf pairs serving rack-server access pods with FEX; FCoE and iSCSI/NAS storage.]
Nexus 7700 FabricPath Spine, 5672UP Leaf
• Highly Available spine switching design with dual-supervisor.
• VDCs allow OTV and Storage functions to be partitioned on common hardware.
• Add leaf pairs for greater end node connectivity.
• Add spine nodes for greater fabric scale and HA.
• FCoE support over dedicated links and VDC.
Specific Nexus features utilized:
• Integrated DCI support with OTV, LISP, MPLS, and VPLS.
• Feature-rich switching fabric with FEX, vPC, FabricPath, FCoE.
• Investment protection of a chassis-based switch.
FabricPath with vPC+ Best Practices Summary
• Manually assign FabricPath physical switch IDs to easily identify switches for operational support.
• Configure all leaf switches with STP root priority, or use pseudo-priority to control STP.
• Ensure all access VLANs are “mode fabricpath” to allow forwarding over the vPC+ peer-link which is a FabricPath link.
• Use vPC+ at the Layer-3 gateway pair to provide active/active HSRP.
• Set FabricPath root-priority on the Spine switches for multi-destination trees.
• Enable overload-bit under FabricPath domain to delay switch forwarding state on insertion into fabric “set-overload-bit on-startup <seconds>”
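A configuration sketch of these FabricPath and vPC+ practices (switch-IDs, VLANs, and interface numbers are illustrative; the fabricpath feature set may first need to be installed on the platform):

  feature-set fabricpath
  !
  ! Unique physical switch-id, assigned manually on each switch
  fabricpath switch-id 101
  !
  vlan 10-20
    mode fabricpath
  !
  ! Core-facing links between leaf and spine
  interface ethernet1/49-50
    switchport mode fabricpath
  !
  ! vPC+: give the vPC domain its own emulated FabricPath switch-id
  vpc domain 100
    fabricpath switch-id 1001
  !
  ! On spine switches: prefer them as root for multi-destination trees,
  ! and delay forwarding when a switch is inserted into the fabric
  fabricpath domain default
    root-priority 255
    set-overload-bit on-startup 300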
For Your Reference
Summary: Scalable Midsize Data Center Designs
Midsize Data Centers can benefit from the same technology advances as larger ones.
Increase the stability of larger Layer-2 workload domains using vPC, FabricPath, and vPC+.
Start with a structured approach that allows modular design as requirements grow.
Evaluate Nexus switching options based on feature support, scale, and performance.
Plan ahead for re-use of components in new roles as needs change.
Complete Your Paper Session Evaluation
Give us your feedback and you could win 1 of 2 fabulous prizes in a random draw.
Complete and return your paper evaluation form to the Room Attendant at the end of the session.
Winners will be announced today at the end of the session. You must be present to win!
Please visit the Concierge desk to pick up your prize redemption slip. Visit them at BOOTH# 407.
Reference and Relevant Content
Cisco Press MSDC Overlay Book: Using TRILL, FabricPath and VXLAN http://www.ciscopress.com/store/using-trill-fabricpath-and-vxlan-designing-massively-9781587143939
Cisco Press UCS Book: Cisco Unified Computing System (UCS) http://www.ciscopress.com/store/cisco-unified-computing-system-ucs-data-center-a-complete-9781587141935
Cisco Press Nexus Book: NX-OS and Cisco Nexus Switching http://www.ciscopress.com/store/nx-os-and-cisco-nexus-switching-next-generation-data-9781587143045
Cisco Live Technical Breakout Session:
– BRKDCT-2218 - Data Center Design for the Midsize Enterprise
For Your Reference