© 2007 Cisco Systems, Inc. All rights reserved. Cisco Public
Innovations for the Next Generation Data Center: Cisco Nexus 7000
Maciej Bocian, Consulting System Engineer, Data Center/Storage Networking
Central & Eastern Europe
The New Data Center
Consolidation needed to combat infrastructure sprawl and its attendant capex/opex impact
Virtualization of resources to easily and efficiently adapt to change
Automation improves operational effectiveness and infrastructure availability
Cisco Nexus 7000 delivers infrastructure scalability to defer the need to add infrastructure
Cisco Nexus 7000 delivers transport flexibility to meet growing needs and address next-generation protocols
Cisco Nexus 7000 delivers operational continuity through a "Zero Service Loss" system architecture
Nexus 7000 Chassis
Nexus 7010 10-Slot Chassis
First chassis in the Nexus 7000 product family
Optimized for data center environments
High density: 256 10G interfaces per system
High performance: 1.2Tbps system bandwidth at initial release; initially 80Gbps per slot; 60Mpps per slot
Future proof: initial fabric provides up to 4.1Tbps; product family scalable to 15+Tbps; 40/100G and Unified Fabric ready
Nexus 7010 Chassis Front
Lockable front doors (removable)
System status LEDs
Integrated cable management with door
Air intake with optional filter
8 payload slots (1-4, 7-10)
2 supervisor slots (5-6)
Nexus 7010 Chassis Back
Air exhaust
5 crossbar fabric modules
2 fabric fan trays
2 system fan trays
3 power supplies
System Power
6000W AC power supply for Nexus 7000 series chassis
Dual inputs at 220/240V or 110/120V
Proportional load-sharing among supplies
Hot swappable
Blue beacon LED for easy identification
Nexus 7010 Power Redundancy
6 power supplies in 3 physical bays
Power redundancy modes:
Power Supply Redundancy (default)
Input Source Redundancy (Grid #1 / Grid #2)
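The difference between the two redundancy modes can be illustrated with a small calculation. This is a hedged sketch: the 6kW figure is the supply rating from the system power slide, while the bookkeeping (and the mode names) are illustrative, not the actual NX-OS power manager logic.

```python
SUPPLY_KW = 6.0  # 6000W AC supply from the system power slide

def usable_power_kw(n_supplies, mode):
    """Illustrative usable power under each redundancy mode."""
    total = n_supplies * SUPPLY_KW
    if mode == "ps-redundant":      # reserve capacity to survive loss of one supply
        return total - SUPPLY_KW
    if mode == "insrc-redundant":   # reserve capacity to survive loss of one grid
        return total / 2
    return total                    # no redundancy reserved

print(usable_power_kw(3, "ps-redundant"))    # 12.0 kW
print(usable_power_kw(3, "insrc-redundant")) # 9.0 kW
```

With three 6kW supplies, supply redundancy budgets around one failed supply, while input-source redundancy budgets around losing an entire grid.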
System Cooling
Redundant system fan trays provide cooling of I/O modules and supervisor engines
Redundant fabric fans provide cooling of crossbar fabric modules
Variable speed redundant fans provide complete system cooling
Fans are removed from the chassis rear, so there is no disruption of cabling
Hot swappable
Blue beacon LED for easy identification
Other Hardware Features
Blue beacon LEDs allow easy FRU identification for servicing
System LEDs provide an aggregate view of system status:
Power supplies
Fan trays
Supervisor engines
Fabric modules
I/O modules
Locking ejector levers ensure proper module seating and prevent accidental disengagement
Nexus 7000 Supervisor Engine
Supervisor Engine
Dual-core 1.66GHz Intel Xeon processor with 4GB DRAM
Connectivity Management Processor (CMP) for lights-out management
2MB NVRAM, 2GB internal bootdisk, 2 external compact flash slots
10/100/1000 management port with 802.1AE LinkSec
Console and auxiliary serial ports
USB ports for file transfer
Blue beacon LED for easy identification
(Faceplate callouts: beacon LED, console port, AUX port, management Ethernet, USB ports, CMP Ethernet, reset button, status LEDs, compact flash slots and cover.)
Connectivity Management Processor (CMP)
Standalone, always-on microprocessor on the supervisor engine
Provides 'lights out' remote management and disaster recovery via a 10/100/1000 interface on the out-of-band management network
Removes the need for terminal servers for out-of-band console connectivity
Monitor the supervisor and modules, access log files, power-cycle the supervisor, etc.
Runs a lightweight Linux kernel and network stack
Completely independent of DC-OS on the main CPU
Supervisor Engine Architecture
(Block diagram: main CPU with DRAM and flash; system controller; CMP with its own DRAM, flash and Ethernet; central arbiter; fabric ASIC and switched EOBC connecting to modules and fabrics; NVRAM; OBFL flash; security processor; link encryption PHYs; console, AUX and management Ethernet ports; USB ports; slot0: and log-flash: compact flash slots.)
Nexus 7000 I/O Modules
32-Port 10GE I/O Module
32 10GE ports with SFP+ transceivers
80G full duplex fabric connectivity
Integrated 60Mpps forwarding engine for fully distributed forwarding
4:1 oversubscription at the front panel
Virtual output queueing (VOQ) ensures fair access to fabric bandwidth
802.1AE LinkSec on every port
Buffering: dedicated mode 100MB ingress, 80MB egress; shared mode 1MB + 100MB ingress, 80MB egress
Queues: 8q2t ingress, 1p7q4t egress
Blue beacon LED for easy identification
SFP+ optics: SR at initial release (300m over MMF); LR post-release (10km over SMF)
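The 4:1 oversubscription figure follows directly from the numbers on this slide; a quick sketch of the arithmetic in Python:

```python
# 32-port 10GE module: front-panel capacity vs. fabric connectivity
ports = 32
port_speed_gbps = 10
front_panel_gbps = ports * port_speed_gbps   # 320G of front-panel capacity
fabric_gbps = 80                             # 80G full-duplex fabric connection

oversubscription = front_panel_gbps / fabric_gbps
print(oversubscription)  # 4.0, i.e. the 4:1 ratio quoted above
```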
Shared versus Dedicated Mode
Dedicated mode: one interface in the port group (e.g. ports 9, 11, 13, 15) gets the full 10G to the fabric; the other three interfaces are disabled
Shared mode: all four interfaces share the 10G to the fabric
32-Port 10GE I/O Module Architecture
(Block diagram: front-panel MAC/PHYs for odd ports 1,3...31 and even ports 2,4...32 feed 'CTS and 4:1 Mux' blocks and eight Port ASICs with replication engines and METs; a forwarding engine daughter card holds the Layer 2 and Layer 3 engines; two Fabric Interface and VOQ blocks connect through the Fabric ASIC on a mezzanine card to the fabrics; the LC CPU has inband and EOBC connections, and the module connects to the central arbiter.)
48-Port 1GE I/O Module
48 1GE 10/100/1000 RJ-45 ports
40G full duplex fabric connectivity
Integrated 60Mpps forwarding engine for fully distributed forwarding
Virtual output queueing (VOQ) ensures fair access to fabric bandwidth
802.1AE LinkSec on every port
Buffer: 7.5MB ingress, 6.2MB egress
Queues: 2q4t ingress, 1p3q4t egress
Blue beacon LED for easy identification
48-Port 1GE I/O Module Architecture
(Block diagram: six octal PHYs covering ports 1-8 through 41-48 feed Port ASICs with CTS blocks, replication engines and METs; a forwarding engine daughter card holds the Layer 2 and Layer 3 engines; a single Fabric Interface and VOQ block connects through the Fabric ASIC to the fabrics; the LC CPU has inband and EOBC connections, and the module connects to the central arbiter.)
Nexus 7000 Forwarding Engine
Forwarding Engine Hardware
Table sizes optimized for the Data Center:
FIB TCAM 128K; MAC table 128K; classification TCAM (ACL and QoS) 64K; NetFlow table (ingress and egress) 512K; policers 16K
Advanced hardware forwarding engine integrated on every I/O module:
60Mpps Layer 2 bridging with hardware MAC learning
60Mpps IPv4 and 30Mpps IPv6 unicast
IPv4 and IPv6 multicast (SM, SSM, bidir)
IPv4 and IPv6 security ACLs
Cisco TrustSec security group tag support
Unicast RPF check and IP source guard
QoS remarking and policing policies
Ingress and egress NetFlow (full and sampled)
GRE tunnels
Forwarding Engine Details
The forwarding engine chipset consists of two ASICs:
Layer 2 Engine
Performs ingress and egress SMAC/DMAC lookups
Hardware MAC learning
True IP-based Layer 2 multicast constraint
Performs lookups on the ingress I/O module, and on the egress I/O module for bridged packets
Layer 3 Engine
60Mpps IPv4 and 30Mpps IPv6 Layer 3/Layer 4 lookups
Performs all FIB, ACL, QoS and NetFlow processing
Linear, pipelined architecture: every packet is processed in the ingress and egress pipes
Performs lookups on the ingress I/O module, and on the egress I/O module for multicast replicated packets
Nexus 7000 Fabric and Bandwidth
I/O Module Bandwidth Capacity
Initially shipping I/O module bandwidth: 80Gbps per slot (assumes 8 × 10G ports in dedicated mode per module)
In the Nexus 7000 10-slot chassis:
(80Gbps/slot) × (8 payload slots) = 640Gbps
(640Gbps) × (2 for full duplex operation) = 1280Gbps ≈ 1.2Tbps system bandwidth
1.2 Terabits per second initial system bandwidth
Fabric Bandwidth Capacity
Initially shipping fabric bandwidth: 230Gbps per payload slot, 115Gbps per supervisor slot
Initially shipping modules cannot fully leverage the fabric bandwidth; the calculation assumes future modules that can
In the Nexus 7000 10-slot chassis:
(230Gbps/slot) × (8 payload slots) = 1840Gbps
(115Gbps/slot) × (2 supervisor slots) = 230Gbps
(1840 + 230 = 2070Gbps) × (2 for full duplex operation) = 4140Gbps ≈ 4.1Tbps system bandwidth
4.1 Terabits per second fabric bandwidth capacity
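Both headline figures (the initial 1.2Tbps system bandwidth and the 4.1Tbps fabric capacity) can be reproduced from the per-slot numbers on these slides; a small Python sketch of the arithmetic:

```python
def system_bandwidth_gbps(per_slot_gbps, payload_slots, sup_gbps=0, sup_slots=0):
    """Total full-duplex system bandwidth from per-slot capacities."""
    one_way = per_slot_gbps * payload_slots + sup_gbps * sup_slots
    return one_way * 2  # double for full-duplex operation

# Initial I/O module bandwidth: 80Gbps/slot across 8 payload slots
print(system_bandwidth_gbps(80, 8))          # 1280, i.e. the ~1.2Tbps figure

# Initial fabric capacity: 230Gbps/payload slot, 115Gbps/supervisor slot
print(system_bandwidth_gbps(230, 8, 115, 2)) # 4140, i.e. the ~4.1Tbps figure
```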
Future Vision for Platform Series
Future goal to double fabric bandwidth: 500+Gbps per slot
Requires a future fabric module
The 10-slot chassis will scale to 9+Tbps system bandwidth
The 18-slot chassis will scale to 15+Tbps system bandwidth
15+ Terabits per second platform bandwidth capacity
Fabric Module
Provides 46Gbps per I/O module slot
Also provides 23G per supervisor slot
Up to 230Gbps per slot with 5 fabric modules
Initially shipping I/O modules do not leverage full fabric bandwidth
Load-sharing across all fabric modules in chassis
Multilevel redundancy with graceful performance degradation
Non-disruptive OIR
Blue beacon LED for easy identification
Fabric Capacity and Redundancy
Per-slot bandwidth capacity increases with each fabric module: 46Gbps, 92Gbps, 138Gbps, 184Gbps, 230Gbps
The 1G module (40G fabric connection) requires 2 fabrics for N+1 redundancy
The 10G module (80G fabric connection) requires 3 fabrics for N+1 redundancy
The 4th and 5th fabric modules provide an additional level of redundancy
Future modules will leverage the additional fabric bandwidth
A fabric failure results in a reduction of overall system bandwidth
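The N+1 fabric counts above follow from each module's fabric connection and the 46Gbps that each fabric module contributes per slot; a hedged sketch of the rule (the helper name is illustrative):

```python
import math

FABRIC_GBPS_PER_SLOT = 46  # each fabric module adds 46Gbps per payload slot

def fabrics_needed(module_fabric_gbps, spares=1):
    """Fabric modules needed to carry a module's full fabric connection,
    plus `spares` redundant module(s) for N+1."""
    return math.ceil(module_fabric_gbps / FABRIC_GBPS_PER_SLOT) + spares

print(fabrics_needed(40))  # 1G module (40G):  1 needed + 1 spare = 2
print(fabrics_needed(80))  # 10G module (80G): 2 needed + 1 spare = 3
```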
Access to Fabric Bandwidth
Supervisor engine controls access to fabric bandwidth using central arbitration
Fabric bandwidth represented by Virtual Output Queues (VOQs)
What Are VOQs?
Virtual Output Queues (VOQs) on ingress modules represent bandwidth capacity on egress modules
Guaranteed delivery to the egress module for arbitrated packets entering the fabric: if a VOQ is available on ingress, capacity exists on egress
A VOQ is NOT equivalent to ingress or egress port buffers or queues; it relates ONLY to the ASICs at the ingress and egress to the fabric
A VOQ is 'virtual' because it represents EGRESS capacity but resides on the INGRESS module; it is a PHYSICAL buffer where packets are stored
What Is VOQ? (diagram)
(Ingress module 1 holds groups of VOQs for the destinations on egress modules 2 (1G), 3 (10G) and 4 (10G), with four priority queues (0-3) per destination; the VOQ buffers on the ingress module correspond to egress capacity, i.e. each destination's ability to receive traffic from the fabric, and traffic is sent into the fabric based on destination.)
Centralized Fabric Arbitration
Access to fabric bandwidth on the ingress module is controlled by the central arbiter on the supervisor; in other words, access to the VOQ for the destination across the fabric
Arbitration works on a credit request/grant basis:
Modules communicate egress fabric buffer availability to the central arbiter
Modules request credits from the supervisor to place packets in a VOQ for transmission to a destination over the fabric
The supervisor grants credits based on egress fabric buffer availability for that destination
The arbiter discriminates among four classes of service; priority traffic takes precedence over best-effort traffic across the fabric
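The credit request/grant exchange can be sketched as a toy model in Python; the class and method names are illustrative, not NX-OS internals:

```python
class CentralArbiter:
    """Toy model of credit-based fabric arbitration (per destination and CoS)."""
    def __init__(self):
        # egress buffer credits reported by modules, keyed by (destination, cos)
        self.credits = {}

    def report_egress_capacity(self, dest, cos, credits):
        """Egress module advertises buffer availability to the arbiter."""
        self.credits[(dest, cos)] = credits

    def request(self, dest, cos):
        """Ingress module asks to send a packet toward `dest` at class `cos`."""
        if self.credits.get((dest, cos), 0) > 0:
            self.credits[(dest, cos)] -= 1   # deduct a credit on grant
            return True                      # grant: packet may enter the fabric
        return False                         # no egress capacity: hold the packet

arb = CentralArbiter()
arb.report_egress_capacity("e3/1", 1, credits=2)
print(arb.request("e3/1", 1))  # True  (granted, one credit left)
print(arb.request("e3/1", 1))  # True  (granted, no credits left)
print(arb.request("e3/1", 1))  # False (egress buffers exhausted)
```

Credits are replenished as the egress module drains its fabric buffers and reports capacity again.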
VOQ Operation: egress capacity advertisement (diagram)
(Modules 1, 2 and 3 each hold VOQs for destinations e1/1,3,5,7, e2/1,3,5,7 and e3/1,3,5,7 with priority levels 0-3; egress modules with capacity to receive traffic from the fabric signal 'capacity available!' so the central arbiter on the supervisor can maintain its buffer credits.)
VOQ Operation: VOQs mirror egress capacity (diagram)
(VOQs on the ingress module correspond to capacity on the egress modules; for example, the ingress VOQ for e3/1 tracks the egress destination capacity of port e3/1, with the central arbiter on the supervisor holding the buffer credits.)
VOQ Operation: request, grant, transmit (diagram)
(A packet destined to e3/1 at priority level 1 arrives on the ingress module; the module sends 'request to transmit to e3/1, priority 1' to the central arbiter; the arbiter deducts a credit from the priority-1 VOQ and replies 'request granted'; the packet is transmitted across the fabric, and when the egress buffer drains, the buffer for VOQ priority 1 is advertised as available again.)
Benefits of Central Arbitration and VOQ
Ensures fair access to bandwidth for multiple ingress ports transmitting to one egress port
Prevents congested egress ports from blocking ingress traffic destined to other ports
Priority traffic takes precedence over best-effort traffic across the fabric
Engineered to support Unified I/O: can provide no-drop service across the fabric for future FCoE interfaces
Fabric Superframing
The fabric interface ASIC performs superframing for fabric-bound unicast packets:
When a packet hits the VOQ, an arbitration request is generated immediately
When the grant is returned, the packet is immediately transmitted into the fabric
When packet transmission completes, check whether other packets are enqueued in the same VOQ; if so, transmit them
Transmit until the superframe limit is reached or the queue is empty
A superframe is up to 2560 bytes or up to 32 packets, whichever comes first
If additional packets need transmission, a new arbitration request is generated and a new superframe begins
The superframe is disassembled by the egress fabric interface ASIC
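The transmit loop described above, as a hedged Python sketch (the queue representation and function name are illustrative, the 2560-byte and 32-packet limits are from this slide):

```python
MAX_SF_BYTES = 2560   # superframe byte limit
MAX_SF_PKTS = 32      # superframe packet limit

def build_superframe(voq):
    """Drain one VOQ (list of packet sizes) into a superframe after a grant."""
    frame, size = [], 0
    while voq and len(frame) < MAX_SF_PKTS:
        if size + voq[0] > MAX_SF_BYTES:
            break                    # next packet would exceed the byte limit
        pkt = voq.pop(0)
        frame.append(pkt)
        size += pkt
    return frame                     # leftover packets need a new request/grant

voq = [1200, 1200, 1200]             # packet sizes in bytes
print(build_superframe(voq))         # [1200, 1200]: 2400 bytes, limit reached
print(voq)                           # [1200] waits for the next superframe
```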
Fabric Superframing (diagram)
(The fabric interface sends a request to the arbiter on the supervisor and receives a grant, then drains the VOQ into a superframe toward the fabric ASIC; for example, two 1200-byte packets fit within the 2560-byte maximum superframe size before the queue is empty.)
Nexus 7000 Packet Flow
(Diagram: a packet enters port e1/1 on Module 1, passes through that module's forwarding engine (Layer 2 and Layer 3 engines), Fabric Interface and VOQ, and Fabric ASIC, crosses Fabric Modules 1-3, and exits port e2/7 on Module 2; the supervisor engine's central arbiter controls fabric access.)
Ingress path:
Receive the packet from the wire
CTS LinkSec decryption and verification
Ingress queueing and scheduling (in shared mode)
Submit the packet for lookup: Layer 2 and IGMP snooping lookups, Layer 3 and Layer 4 lookups, ingress multicast replication
Queueing and VOQ arbitration request; credit grant for fabric access
Transmit to the fabric
Egress path:
Receive from the fabric; queue and schedule toward egress; return buffer credits
Submit the packet for lookup: Layer 2 and IGMP snooping lookups, Layer 3 and Layer 4 lookups, egress multicast replication
Egress queueing and scheduling
CTS LinkSec encryption
Transmit the packet on the wire
DC-OS: Data Center Class Operating System
DC-OS: Delivering DC Class Attributes
Infrastructure Scalability: multi-core/multi-thread architecture means turning on features will not impact performance; Virtual Device Contexts allow the switch to be split into multiple logical switches for better utilization and isolation
Transport Flexibility: multi-transport control plane natively supports unified fabric without external gateways
Operational Continuity: granular stateful process restart provides increased uptime and improved network stability; integrated manageability toolset improves troubleshooting and reduces time-to-resolution; ISSU allows software upgrades without service interruption
DC-OS Software Architecture
(Layered diagram: the kernel and chip/driver infrastructure at the bottom; management interfaces (SNMP, XML, CLI) and infrastructure services (Sysmgr, PSS and MTS) alongside; above them interface management, chassis management, the protocol stack (IPv4/IPv6/L2), Layer 2 protocols (VLAN mgr, STP, UDLD, CDP, 802.1X, IGMP snooping, LACP, CTS), Layer 3 protocols (OSPF, BGP, EIGRP, GLBP, HSRP, VRRP, PIM, SNMP), possible future storage protocols (VSANs, zoning, FCIP, FSPF, IVR), and room for other and future services.)
Licensing
Licenses are enforced on the switch:
# show license host-id
The license is tied to the chassis serial number, stored in dual redundant NVRAM modules on the backplane
Licenses are issued in the form of a digitally signed text file:
# install license bootflash:DC3-1234.lic
Time-bound licenses: licenses with an expiry date
Currently used in SAN-OS as an emergency measure when the grace period is over and time is needed to buy a license
The expiry date is absolute (the license expires at midnight UTC on the expiry date)
Periodic syslog, callhome and SNMP trap warnings as a time-bound license nears expiry
After the expiry date the feature will continue to run if the grace period has not been exhausted
Grace period: enables features to run for a certain period without installing a license
Allows feature testing/trials without buying a license (e.g. 120 days)
Periodic syslog, callhome and SNMP trap warnings as the grace period nears expiry
License PAK (product activation key): register the PAK and chassis serial number at www.cisco.com to receive the license file (<xml... licA ...>)
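The expiry and grace-period rules above amount to a small decision procedure; a hypothetical Python sketch (the function and its parameters are illustrative, not the actual NX-OS license manager):

```python
from datetime import date

GRACE_DAYS = 120  # example grace period from the slide

def feature_allowed(today, license_expiry=None, grace_used_days=0):
    """Is a licensed feature allowed to run today?

    license_expiry: expiry date of a time-bound license, or None if unlicensed.
    grace_used_days: days of the grace period already consumed.
    """
    if license_expiry is not None and today <= license_expiry:
        return True                          # valid time-bound license
    # past expiry (or no license installed): fall back to the grace period
    return grace_used_days < GRACE_DAYS

print(feature_allowed(date(2008, 1, 1), license_expiry=date(2008, 6, 30)))  # True
print(feature_allowed(date(2008, 7, 1), license_expiry=date(2008, 6, 30),
                      grace_used_days=120))                                 # False
```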
NX-OS Licensing: Simple, Flexible Licensing Model
There are three levels of enforced licensing: Base, Enterprise Services, and Advanced Services
Grace periods facilitate feature testing and trials without buying a license (for example, 120 days), with some restrictions; Cisco Trusted Security does not have a grace period because of export restrictions on strong cryptography
Base: ISSU, PVRST+, MSTP+, 802.1Q, LACP, PVLANs, NetFlow, SPAN, QoS, RIP/RIPng, IGMP snooping, DHCP helper, uRPF check, port security, SSHv2, RBAC, SNMP, RADIUS, HSRP, GLBP, VRRP, VRF-lite, CoPP, DHCP snooping, DAI, IPSG, 802.1x, jumbo frames, UDLD, storm control, EEM, Cisco GOLD, Call Home, NAC, TACACS+, ACLs
Enterprise Services: OSPF, EIGRP, IS-IS, BGP, Graceful Restart, PIM-SM, bidirectional PIM, PIM-SSM, IGMP, MSDP, PBR, GRE
Advanced Services: VDCs, Cisco Trusted Security
Note: Enterprise Services is NOT included with the Advanced Services license
New NX-OS Feature Navigator
http://www.cisco.com/cdc_content_elements/flash/dataCenter/ciscofeaturenavigator/index.html
Available NOW
Stateful Fault Recovery
(Diagram: processes such as BGP, OSPF, PIM, TCP/UDP, IPv6, STP, HSRP, LACP, etc. run on the Linux kernel above the DC3 data plane; the HA Manager issues a 'restart process!' action when needed.)
If a fault occurs in a process:
The HA manager determines the best recovery action (restart the process, or switch over to the redundant supervisor)
The process restarts with no impact on the data plane
State checkpointing (PSS) allows instant, stateful process recovery: DC-OS services checkpoint their runtime state to the PSS for recovery in the event of a failure
Software utilizes Graceful Restart where appropriate
Stateful Fault Recovery (continued)
(Diagram: after the HA manager restarts a routing process, Graceful Restart with neighbors replays routing updates, the software RIB is rebuilt, and table updates are pushed to the hardware FIB; the DC3 data plane keeps forwarding throughout.)
In-Service Software Upgrade
(Diagram: active and standby supervisors, each running the Linux kernel, HA Manager and protocol processes (OSPF, BGP, PIM, etc.) above the DC3 data plane, move from Release 4.0 to Release 4.1.)
Upgrade and reboot one supervisor
Initiate stateful failover
Upgrade and reboot the other supervisor
Upgrade and reboot the I/O modules
dc3# install all kickstart bootdisk:4.1-kickstart system bootdisk:4.1-system
Virtual Device Contexts (VDCs)
VDC: Virtual Device Context
Flexible separation/distribution of software components
Flexible separation/distribution of hardware resources
Securely delineated administrative contexts
(Diagram: a shared kernel and infrastructure layer underneath per-VDC copies of the protocol stack (IPv4/IPv6/L2), Layer 2 protocols (VLAN mgr, STP, UDLD, CDP, 802.1X, IGMP snooping, LACP, CTS) and Layer 3 protocols (OSPF, BGP, EIGRP, GLBP, HSRP, VRRP, PIM, SNMP) with their own RIBs, for VDC A, VDC B, ... VDC n.)
VDCs are not the ability to run different OS levels on the same box at the same time, and are not based on a hypervisor model; there is a single 'infrastructure' layer that handles hardware programming
Virtual Device Contexts: An Introduction to the VDC Architecture
Virtual Device Contexts provide virtualization at the device level, allowing multiple instances of the device to operate on the same physical switch at the same time
(Diagram: on one physical switch, the Linux 2.6 kernel and infrastructure layer support VDC1 through VDCn, each with its own protocol stack (IPv4/IPv6/L2), Layer 2 protocols (VLAN Mgr, UDLD, LACP, CTS, IGMP, 802.1x) and Layer 3 protocols (OSPF, BGP, EIGRP, PIM, GLBP, HSRP, VRRP, SNMP) with per-VDC RIBs.)
Virtual Device Contexts: VDC Fault Domain
A VDC builds a fault domain around all running processes within that VDC; should a fault occur in a running process, it is truly isolated from the processes in other VDCs, and they will not be impacted
(Diagram: VDC A and VDC B each run processes ABC, DEF, XYZ, etc. on their own protocol stacks above the shared Linux 2.6 kernel and infrastructure, with a fault domain drawn around each VDC.)
Process 'DEF' in VDC B crashes
Processes in VDC A are not affected and continue to run unimpeded
This is a function of the process modularity of the OS and a VDC-specific IPC context
Virtual Device Contexts (VDCs)
Network consolidation: multiple logical networks on a single physical network; maintain clear delineation between networks; independent topologies; clear management boundaries; fault containment
Service velocity: in-line tests; rapid deployment and rollback; e.g. enabling utility computing
Device consolidation: logical appliances; multi-switch emulation; power, cooling and real-estate efficiencies
Physical network islands (e.g. VDC Extranet, VDC Prod, VDC DMZ) are virtualized onto a common data center networking infrastructure
Command Line Interface
CLI
tstevens-dc3-10# sh ip ospf nei
OSPF Process ID 10 context default
Total number of neighbors: 1
Neighbor ID Pri State Up Time Address Interface
10.255.255.2 1 INIT/DROTHER 00:00:04 10.1.2.2 Eth1/2
tstevens-dc3-10# config t
tstevens-dc3-10(config)# router ospf 10
tstevens-dc3-10(config-router)# sh ip ospf nei
OSPF Process ID 10 context default
Total number of neighbors: 1
Neighbor ID Pri State Up Time Address Interface
10.255.255.2 1 FULL/BDR 00:00:03 10.1.2.2 Eth1/2
tstevens-dc3-10(config-router)#
IOS look-and-feel CLI with enhancements...
Show commands can be executed identically from exec mode and configuration mode
Show commands have parser help even in configuration mode
CLI Routing Configuration
Two configuration models for routing protocols
BGP follows a neighbor-centric model:
tstevens-dc3-10(config)# router bgp 100
tstevens-dc3-10(config-router)# address-family ipv4 unicast
tstevens-dc3-10(config-router-af)# network 10.0.0.0/8
tstevens-dc3-10(config-router-af)# neighbor 10.1.2.2 remote-as 200
tstevens-dc3-10(config-router-neighbor)# address-family ipv4 unicast
tstevens-dc3-10(config-router-neighbor-af)# soft-reconfiguration inbound
IGPs follow an interface-centric model:
tstevens-dc3-10(config)# router ospf 10
tstevens-dc3-10(config-router)# int e2/22
tstevens-dc3-10(config-if)# ip router ospf 10 area 0
tstevens-dc3-10(config-if)# ip ospf hello-interval 1
tstevens-dc3-10(config-if)#
CLI Slash Notation
tstevens-dc3-10(config)# int e2/23
tstevens-dc3-10(config-if)# ip add 10.2.23.1/24
tstevens-dc3-10(config-if)# ipv6 add ::abcd:223/120
tstevens-dc3-10(config-if)# ip access-list test
tstevens-dc3-10(config-acl)# permit ip 10.1.1.0/24 any
tstevens-dc3-10(config-acl)#
“Slash” notation supported for all IPv4/IPv6 masks
CLI Interface Ranges
tstevens-dc3-10(config)# int e1/1-3
tstevens-dc3-10(config-if-range)# no sh
tstevens-dc3-10(config-if-range)# int e2/3
tstevens-dc3-10(config-if)# ip add 10.2.3.1/24
tstevens-dc3-10(config-if)# int e2/1-4,e1/1-2,e1/15
tstevens-dc3-10(config-if-range)# mtu 9216
tstevens-dc3-10(config-if-range)#
Same configuration used for interface ranges as for single interfaces
CLI Parser Help
tstevens-dc3-10(config-if)# <TAB>
bandwidth description exit mac rate-mode storm-control
beacon dot1x flowcontrol mdix service-policy switchport
cdp duplex ip mtu shutdown vrrp
channel-group eou ipv6 nac spanning-tree
delay errdisable link no speed
tstevens-dc3-10(config-if)# ?
bandwidth Set bandwidth informational parameter
beacon Disable/enable the beacon for an interface
cdp CDP Interface Configuration parameters
channel-group Add to/remove from a port-channel
delay Specify interface throughput delay
<etc>
<TAB> key displays brief list of all available options at current branch
? key displays full parser help strings
CLI Piping Terminal Output
tstevens-dc3-10# sh run | ?
egrep Egrep
grep Grep
less View output in a pager
sed Stream Editor
no-more Turn-off pagination for command output
wc Count words, lines, characters
begin Begin with the line that matches
count Count number of lines
exclude Exclude lines that match
include Include lines that match
tstevens-dc3-10# sh run | egrep ?
-A Print <num> lines of context after every matching line
-B Print <num> lines of context before every matching line
-c Print a total count of matching lines only
-i Ignore case difference when comparing strings
-n Print each match preceded by its line number
-v Print only lines that contain no matches for <expr>
-w Print only lines where the match is a complete word
-x Print only lines where the match is a whole line
WORD Search for the expression
tstevens-dc3-10# sh run | egrep -A 2 -B 2 ospf
interface Ethernet2/22
ip address 10.2.22.1/24
ip router ospf 10 area 0
interface Ethernet2/23
ip address 10.2.23.1/24
ip router ospf 10 area 0
interface Ethernet2/24
--
interface loopback0
ip address 10.255.255.1/32
ip router ospf 10 area 0
router ospf 10
hostname tstevens-dc3-10
tstevens-dc3-10# sh run | in ospf | wc -l
4
tstevens-dc3-10#
Variety of advanced pipe options for CLI output, including egrep, less, no-more, wc
Multiple levels of pipe
CLI Configuration Rollback
Provides checkpointing and rollback facility to return configuration to any previous state
Options to name checkpoints, view contents of checkpointed configurations, diff checkpoints against each other or against the running/startup configuration, etc.
tstevens-dc3-10# sh checkpoint
---------------------------------------------------------------------
Checkpoint_id Label UserName TimeStamp
---------------------------------------------------------------------
16777476 10-8 tstevens Mon Oct 8 21:55:45 2007
tstevens-dc3-10# rollback destination label 10-8
Note: Processing the Request... Please Wait
Note: Generating the Rollbackpatch... Please Wait
Note: Executing the patch... Please Wait
`conf t`
`interface Ethernet1/1`
`no service-policy type qos input foo stats-enable`
`no ip access-group test in`
tstevens-dc3-10#
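The rollback above applies a patch generated against a previously saved checkpoint. A sketch of the full cycle (the checkpoint name is illustrative, and exact command syntax varies between NX-OS releases):

```
tstevens-dc3-10# checkpoint 10-8
tstevens-dc3-10# conf t
tstevens-dc3-10(config)# interface Ethernet1/1
tstevens-dc3-10(config-if)# service-policy type qos input foo
tstevens-dc3-10(config-if)# end
tstevens-dc3-10# show diff rollback-patch checkpoint 10-8 running-config
tstevens-dc3-10# rollback destination label 10-8
```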
CLI running-config permutations
‘show running-config’ (“show run”) works as expected, but there are many other enhancements over IOS:
dc3# show running-config ?
<CR>
> Redirect it to a file
aaa Display aaa configuration
all Current operating configuration with defaults
am Display am information
arp Display arp information
bgp Display bgp information
callhome Display callhome configuration
cdp Display cdp configuration
cmp Display CMP information
copp show running config for copp
dhcp Display dhcp snoop configurations
diagnostic Display diagnostic information
diff Show the difference between running and startup configuration
dot1x Display dot1x configuration
eem Show the event manager running configuration
eigrp Display eigrp information
icmpv6 Display icmpv6 information
igmp Display igmp information
interface Interface configuration
ip Display ip information
ipqos show running config for ipqosmgr
...
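Beyond the keywords listed, the same command can scope output to a single interface, diff against the startup configuration, or redirect to a file; a few hedged examples (the interface and file name are illustrative):

```
dc3# show running-config interface ethernet 2/22
dc3# show running-config diff
dc3# show running-config all > bootflash:full-config.txt
```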
DCNM Data Center Network Manager
DCNM Solution Components
DCNM is a Client Server Solution
DCNM Server communicates with the DC-OS devices
DCNM Client communicates with the DCNM Server
(Diagram: DC-OS devices, DCNM Server, DCNM Client)
Server Hardware Specifications
System Requirements
CPU Speed: 3+GHz dual-core processor (32 bit)
RAM: Minimum 4GB
100GB High Performance Hard disk
Operating Systems Supported
Windows Server 2003 Standard Edition Service Pack 2
Red Hat Enterprise Linux AS release 4.
DCNM Discovery
Discovers DC-OS and Cisco IOS devices
Discovers adjacent devices if CDP is enabled
Server collects extensive switch inventory and configuration details; based on the collected information, the DCNM Server builds a virtual network model
As part of the discovery process, DCNM establishes an SSH session with each DC-OS device managed by DCNM and each Cisco IOS device discovered
The SSH session is left in place after discovery; DCNM relies on it to gather information at regular intervals
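Discovery therefore assumes SSH access and CDP are available on each device. A minimal device-side sketch (the SSH enablement keyword and the account name are assumptions and vary by release):

```
switch(config)# feature ssh        ! "ssh server enable" on some early releases
switch(config)# cdp enable
switch(config)# username dcnm-admin password <secret> role network-admin
```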
DCNM Server Network Model
DCNM Server builds an intelligent network data model that enables the server to serve user requests efficiently.
(Diagram: DC-OS devices network, DCNM Server, DCNM Server network data model)
DCNM Client
DCNM Client is a Java Application
DCNM Client is downloaded from the DCNM Server using Java Web Start technology.
Java Web Start technology enables Java software applications to be deployed with a single click over the network.
Java Web Start ensures that the most current version of the application is deployed, as well as the current version of the Java Runtime Environment (JRE).
DCNM Client is a thin client – all business logic on the DCNM Server.
Communications
DCNM Server connects to the DC-OS devices over SSH.
DCNM Client communicates with the DCNM Server over Java RMI. There is no direct communication between the DCNM Client and the DC-OS devices.
DCNM Server notifies DCNM Client of asynchronous events as JMS messages.
(Diagram: DC-OS devices, SSH, DCNM Server, Java RMI and JMS, DCNM Client)
Feature Selector
(Screenshot: feature filter, selection details, associated features, and selection panes)
Data Center network design with Nexus
VSS Design with Service Modules and Nexus 7k in the Core
(Diagram: 10 Gig uplinks on both tiers, with L3 BFD between the VSS pair and the core)
Nexus 7k for 10 Gig Aggregation
(Diagram: 10 Gig uplinks to the access layer, Nexus 7k access or a mix of 6k and Nexus 7k, with Bridge Assurance (Whitney2))
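Bridge Assurance, noted in the diagram, takes effect on NX-OS when both ends of a link are configured as network ports; a minimal sketch (the interface number is illustrative):

```
switch(config)# interface ethernet 1/1
switch(config-if)# spanning-tree port type network
```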
Collapsed Aggregation/Access
(Diagram: two variants, a 6500 with Service Modules and a Nexus, each connected to the core over L3 links and terminating an L2 domain)
Data Center topologies with Nexus and virtualization
Reference Network Topology
Hierarchical design
Triangle and square topologies
Multiple access models: modular, blade switches and ToR
Multiple oversubscription targets (per application characteristics)
2,000 to 10,000 servers
10,000 to 50,000 ports
(Diagram: Core, Aggregation and Access layers; L3 above the aggregation, L2 below; Modules 1 and 2 carrying VLANs A through E)
New Topology Classic Design
1. Common Topology – Starting Point
Nexus at Core and Aggregation layers
2-tier L2 topology
VLANs contained within Agg Module
2. Topology Highlights
Lower oversubscription, if needed
Higher density 10 GE at Core and Agg layers
(Diagram: core1/core2 at L3; agg1/agg2 and aggx/aggx+1 in Modules 1 and 2; access switches acc1 through accN+1 carrying VLANs A through E; L2/L3 boundary at aggregation)
High Density GE Server Farms: 10GE Aggregation and Server Farm Capacity
High Density Optimization Areas
More ports per access switch
i. New supervisor with 10GE uplinks
ii. From 240 to 336 access ports
More access switches per aggregation module
i. 16-port on Catalyst 6500 and 32-port on Nexus 7000
ii. From 30 to 60 or 120 access switches
More aggregation modules per core module
i. New I/O modules: 8 wire-rate 10GE ports
ii. From 32 to 64 wire-rate 10GE ports per core switch
(Diagram: agg1/agg2 with access switches acc1 through accY carrying VLANs A, B, C; scaling from 30 switches and 240 ports to 60-120 switches and 336 ports)
New Topology… Enhanced L2 Design
Enhanced L2 Topology
3-tier L2 topology
Nexus at Core and Aggregation layers
6500 at Aggregation and Services layers
Topology Highlights
DC-wide VLANs
Higher stability of STP environment, new STP features
Lower oversubscription, if needed
Higher density 10 GE at Core and Agg layers
(Diagram: core1/core2; agg1/agg2 and aggx/aggx+1; access switches across Modules 1 and 2 carrying VLANs A through E; L2/L3 boundary at the core)
Enhanced L2 Topology… End-to-End Virtual Switching
Enhanced L2 Topology
3-tier L2 topology
Nexus at Core and Aggregation layers
6500 at Aggregation and Services layers
Topology Highlights
DC-wide VLANs
Higher stability of STP environment, new STP features
Lower oversubscription, if needed
Higher density 10 GE at Core and Agg layers
(Diagram: end-to-end virtual switching from core1 through agg1/aggx and access pairs down to ServerN, across Modules 1 and 2 with VLANs A through E)
New Topology… Classic Design + Integrated Services
Common Topology
Nexus at Core and Aggregation layers
6500 at Aggregation and Services layers
2-tier L2 topology
VLANs contained within Agg Modules
Topology Highlights
Lower oversubscription, if needed
Higher density 10 GE at Core and Agg layers
Services integrated through service chassis
Service chassis and virtual PortChannels through VSS
(Diagram: core1/core2 at L3; agg1/agg2 and aggx/aggx+1 in Modules 1 and 2; services switches svcs1/svcs2 attaching service modules and appliances; access switches acc1 through accN+1 carrying VLANs A through E)
New Topology… Classic Design, Integrated Services + Virtual Switching
Service Appliances or Service Switches
Leverage virtual port channels
Non-blocking path to STP root/HSRP primary
Topology Highlights
Simplifies topology
Applies equally to service appliances and service chassis
(Diagram: core1/core2 at L3; agg1/aggx; services switches svcs1/svcs2 with service modules and appliances; access switches acc1 through accN+1 carrying VLANs A through E)
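The virtual port channels leveraged above are configured on NX-OS roughly as follows (a hedged sketch; vPC shipped in a later NX-OS release than this deck, and the domain, peer address, and port-channel numbers are illustrative):

```
agg1(config)# feature vpc
agg1(config)# vpc domain 10
agg1(config-vpc-domain)# peer-keepalive destination 10.1.1.2
agg1(config-vpc-domain)# exit
agg1(config)# interface port-channel 1
agg1(config-if)# vpc peer-link
agg1(config-if)# exit
agg1(config)# interface port-channel 20
agg1(config-if)# vpc 20
```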
New Topology – Isolating Collapsed L2 Domains: Virtual Device Contexts @ Agg Layer
Pods are isolated at the aggregation layer
Each Pod runs its own STP instance (an instance per VDC)
Multiple Pods could exist in a single VDC
VLANs contained within Agg Module per VDC
Pods are logically isolated, giving two topologies
Each Pod can belong to multiple VDCs
Each VDC topology requires dedicated ports
Higher 10GE port density allows multiple agg pairs to be collapsed
A collapsed agg pair can still be L2-isolated (different STP instances)
VLAN IDs could be replicated on different VDCs over shared infrastructure
(Diagram: agg1 through agg4 collapsing to a single agg1/agg2 pair; VLAN C carried per VDC, VDC1 and VDC2, with access switches acc1 through accN+1; L2/L3 boundary at aggregation)
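Carving the aggregation pair into per-Pod VDCs is done from the default VDC; a minimal sketch (the VDC name and interface range are illustrative):

```
agg1(config)# vdc Pod1
agg1(config-vdc)# allocate interface ethernet 2/1-8
agg1(config-vdc)# end
agg1# switchto vdc Pod1
```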
New Topology – Enhanced L2 Collapsed Core: Virtual Device Contexts @ Core Layer
Enhanced L2 Topology with Collapsed Core
Benefits of 3-tier L2 topology
Zones are still isolated (an STP instance per zone)
Core switches are managed independently by VDC
(Diagram: core1 through core6 collapsing to core1/core2 via VDCs; Zone 1 through Zone X, each with Modules 1 and 2; L2/L3 boundary at the core)