
Nexus 1000V Deployment Scenarios

Dan Hersey Steve Tegeler

Cisco Nexus 1000V Components

[Diagram: three ESX servers (Server 1-3), each hosting VMs and a Virtual Ethernet Module (VEM) on VMware ESX]

Virtual Ethernet Module (VEM)
- Replaces the existing vSwitch
- Enables advanced switching capability on the hypervisor
- Provides each VM with dedicated switch ports

Virtual Supervisor Module (VSM)
- CLI interface into the Nexus 1000V
- Leverages NX-OS 4.01
- Controls multiple VEMs as a single network device

[Diagram: the VSM of the Nexus 1000V communicating with Virtual Center]

Cisco Nexus 1000V: Faster VM Deployment

Cisco VN-Link (Virtual Network Link)
- Policy-Based VM Connectivity
- Mobility of Network & Security Properties
- Non-Disruptive Operational Model

[Diagram: VMs on two VMware ESX servers connected through the Cisco Nexus 1000V; defined policies (WEB Apps, HR, DB, Compliance) pushed through Virtual Center]

VM Connection Policy
- Defined in the network
- Applied in Virtual Center
- Linked to the VM UUID
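A minimal sketch of how such a policy could be expressed as a Nexus 1000V port profile (the profile name and VLAN below are illustrative assumptions, not taken from the deck; the vmware port-group command publishes the profile to Virtual Center as a port group):

n1kv(config)# port-profile WEB-Apps
n1kv(config-port-prof)# switchport mode access
n1kv(config-port-prof)# switchport access vlan 110
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# state enabled
n1kv(config-port-prof)# vmware port-group

Once published, the server team selects the resulting port group for a VM's vnic in Virtual Center, and the policy stays linked to the VM's UUID.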

Cisco Nexus 1000V: Richer Network Services

VN-Link: Virtualizing the Network Domain
- Policy-Based VM Connectivity
- Mobility of Network & Security Properties
- Non-Disruptive Operational Model

[Diagram: a VM moving between two VMware ESX servers under the Cisco Nexus 1000V]

VMs Need to Move
- VMotion
- DRS
- SW upgrade/patch
- Hardware failure

VN-Link Property Mobility
- VMotion for the network
- Ensures VM security
- Maintains connection state

Cisco Nexus 1000V: Increased Operational Efficiency

VN-Link: Virtualizing the Network Domain
- Policy-Based VM Connectivity
- Mobility of Network & Security Properties
- Non-Disruptive Operational Model

[Diagram: VMs on two VMware ESX servers connected through the Cisco Nexus 1000V, managed from Virtual Center]

Server Benefits
- Maintains existing VM management
- Reduces deployment time
- Improves scalability
- Reduces operational workload
- Enables VM-level visibility

Network Benefits
- Unifies network management and operations
- Improves operational security
- Enhances VM network features
- Ensures policy persistence
- Enables VM-level visibility

Nexus 1000V Virtual Chassis Model
- One Virtual Supervisor Module managing multiple Virtual Ethernet Modules
- Dual Supervisors to support HA environments
- A single Nexus 1000V can span multiple ESX clusters

SVS-CP# show module
Mod  Ports  Module-Type               Model              Status
---  -----  ------------------------  -----------------  ----------
1    1      Supervisor Module         Cisco Nexus 1000V  active *
2    1      Supervisor Module         Cisco Nexus 1000V  standby
3    48     Virtual Ethernet Module                      ok
4    48     Virtual Ethernet Module                      ok
--More--

Single Chassis Management
- A single switch from the control plane and management plane perspective
- Protocols such as CDP operate as a single switch
- XML API and SNMP management appear as a single virtual chassis

Upstream-4948-1# show cdp neighbor
Capability Codes: R - Router, T - Trans Bridge, B - Source Route Bridge
                  S - Switch, H - Host, I - IGMP, r - Repeater, P - Phone

Device ID     Local Intrfce   Holdtme   Capability   Platform      Port ID
N1KV-Rack10   Gig 1/5         136       S            Nexus 1000V   Eth2/2
N1KV-Rack10   Gig 1/10        136       S            Nexus 1000V   Eth3/5

Virtual Supervisor Options

[Diagram: three ESX servers, each with a VEM hosting VMs, managed by a pair of VSMs]

VSM Virtual Appliance
- ESX virtual appliance
- Special dependence on the CP VA server
- Supports up to 64 VEMs

VSM Physical Appliance
- Cisco-branded x86 server
- Runs multiple instances of the VSM virtual appliance
- Each VSM managed independently

Virtual Supervisor to Virtual Center

[Diagram: the VSM on the Nexus 1000V connecting to Virtual Center]

- One-way API between the VSM and Virtual Center
- A certificate (Cisco self-signed or customer-supplied) ensures secure communications
- The connection is set up on the Supervisor

N1K-CP# show svs connections
Connection VC:
    IP address: 10.95.112.10
    Protocol: vmware-vim https
    vmware dvs datacenter-name: PHXLab
    ConfigStatus: Enabled
    OperStatus: Connected
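For reference, a minimal sketch of how such a connection is typically configured on the VSM (values taken from the output above; exact command forms may vary by release):

n1kv(config)# svs connection VC
n1kv(config-svs-conn)# protocol vmware-vim
n1kv(config-svs-conn)# remote ip address 10.95.112.10
n1kv(config-svs-conn)# vmware dvs datacenter-name PHXLab
n1kv(config-svs-conn)# connect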

Supervisor to Ethernet Module
Two distinct virtual interfaces are used to communicate between the VSM and the VEM:
- Control: carries low-level messages to ensure proper configuration of the VEM; maintains a 2-second heartbeat from the VSM to the VEM (6-second timeout)
- Packet: carries network packets between the VEM and the VSM, such as CDP/LLDP

- The Control and Packet interfaces must be on two separate VLANs
- Supports both L2 and L3 designs

[Diagram: the VSM connected over the Control and Packet interfaces to a VEM on a VMware ESX host carrying VM #1-#4]
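A minimal sketch of how the control and packet VLANs are bound to the Nexus 1000V domain on the VSM (the domain ID and VLAN numbers are illustrative assumptions; command forms may vary by release):

n1kv(config)# vlan 260-261
n1kv(config)# svs-domain
n1kv(config-svs-domain)# domain id 100
n1kv(config-svs-domain)# control vlan 260
n1kv(config-svs-domain)# packet vlan 261
n1kv(config-svs-domain)# svs mode L2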

Nexus 1000V Deployment Scenarios

Virtual Ethernet Modules

VEM Deployment Scenarios
- VEM concepts
- Limits of the VEM in the Nexus 1000V
- Installation of the VEM
- Port types defined & addressing mechanism for ports
  n1kv(config)# interface eth<module#>/<port#>
  n1kv(config)# interface veth<#>
- Spanning tree considerations/conversations
- General configuration options for traffic flow
- Special ports/VLANs used and I/O characteristics
- 1GE & 10GE deployment scenarios

Virtual Ethernet Module Basics
- The VEM is a lightweight (~10 MB RAM) module that provides N1KV switching capability on the ESX host
- Single VEM instance per ESX host
- Relies on the VSM for configuration
- Can run in a last-known-good state without VSM connectivity
- Some VMware features (e.g., VMotion) will not work while the VSM is down
- Must have VSM connectivity upon reboot to switch VM traffic

[Diagram: three ESX servers, each running a VEM (alongside the VMware vSwitch on one host), managed by the VSM through Virtual Center]

Targeted Cisco Nexus 1000V Scalability
- A single Nexus 1000V: 66 modules (2x Supervisors, one active and one standby, plus 64x Virtual Ethernet Modules)
- Per Virtual Ethernet Module: 32 physical NICs, 256 virtual NICs
- Limits per Nexus 1000V: 512 port profiles, 2,048 physical ports, 8,192 virtual ports (vmknic, vswif, vnic)

VEM Distributed Switching
Unique to each VEM:
- Data plane MAC/forwarding table
- Upstream path configuration (EtherChannel, pinning, etc.)
- Module # identification

Shared among all VEMs controlled by the VSM:
- Control plane (mgmt IP)
- Domain ID of the N1K DVS
- Port profile configuration
- veth interface pool

[Diagram: one VSM controlling VEM Module 3 (ESX1), VEM Module 4 (ESX2), through VEM Module n (ESX3)]

Installation of the VEM
- The Virtual Ethernet Module code must be in lockstep with the ESX release version
- Each time a new ESX server is deployed, the correct VEM version must be loaded
- Automatic using VMware Update Manager (VUM), or a manual method with a CLI command (see the sketch after this slide)

[Diagram: when a new ESX server is deployed ("I'm deploying a new ESX Server, do you have something for it?"), Virtual Center & VMware Update Manager answer "Yes I do" and install the VEM, which joins the Nexus 1000V as Module 5 alongside existing VEM Modules 3 and 4]
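A hedged sketch of the manual CLI method (the bundle filename is a placeholder; the exact update command and file depend on the ESX and VEM releases):

On the ESX host service console, install the VEM bits matching this ESX build:
  esxupdate --bundle=VEM-bundle.zip update
Verify the VEM is loaded on the host:
  vem status
On the VSM, confirm the new VEM appears as a module:
  n1kv# show module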

Switching Interface Types - Eth
Physical Ethernet ports (network admin configuration)
- NIC cards on each ESX server
- Appear as an Eth interface on a specific module in NX-OS, addressed as module/slot
  Example: n1kv(config)# interface eth3/1
- The module number is allocated when the ESX host is added to the N1K
- The server-name-to-module relationship can be found by issuing the show module command

[Diagram: VM #1-#4 on VEM Module 3 (esx1.cisco.com), with the uplink addressed as n1kv(config)# int eth3/1]

Switching Interface Types - veth
Example: n1kv(config)# int veth1

Virtual Ethernet ports
- Virtual-machine/ESX-facing ports
- Appear as veth interfaces within NX-OS (example: veth68)
- No module number is used when configuring veth ports
- Not being tied to a specific module simplifies VMotion

[Diagram: VMs, the Service Console (vswif0), and the vmknic attached to veth ports (veth2, veth3, veth5, veth6, veth9, veth68) on VEM Module 5 (ESX1) and VEM Module 6 (ESX2)]
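A hedged example of how the veth-to-VM mapping can be inspected on the VSM (the output columns below are representative assumptions, not taken from the deck):

n1kv# show interface virtual
Port     Adapter           Owner              Mod   Host
Veth1    Net Adapter 1     VM #1              3     esx1.cisco.com
Veth2    vswif0            Service Console    3     esx1.cisco.com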

Spanning Tree Considerations
There are none, but customers always want an explanation of why:
- If BPDUs are sent from an upstream switch, the Nexus 1000V drops them
- Loop-prevention techniques are used, similar to what VMware provides today
- By default, the VEM only learns MACs connected to a veth port on the local VEM
- If the destination is not on the local VEM, the frame is forwarded out one of the physical interfaces
- The best terminology to use with customers is to call the VEM a leaf node

[Diagram: two VMware ESX hosts with software switches uplinked to upstream switches A and B]

Configuration Options for Traffic Flow
MAC Pinning
- The embedded switch determines and fixes a path for each MAC address to use until a failure is detected

Virtual Port ID
- Essentially the same as MAC pinning, but based on the virtual NIC port ID at FCS

[Diagram: VMs on two ESX hosts, each software switch pinning traffic to one of two uplinks toward upstream switches A and B]

Configuration Options for Traffic Flow
Hashing
- Uses some parameter (e.g., MAC, IP, TCP) to load balance across redundant links to an upstream switch or a Cat6k VSS/Nexus vPC pair

Manual
- Manually configuring a path through a specific physical NIC to a specific vnic

[Diagram: VMs on two ESX hosts with software switches hashing traffic across redundant uplinks]

Channeling Techniques Available with VMware
NIC team load-balancing algorithms are either/or, not AND:
- src MAC (MAC pinning)
- Virtual port ID
- IP hashing (EtherChannel equivalent)
- Manual

VMware doesn't behave any differently whether you are talking to the same upstream switch or a different one (i.e., the hashing scenario).

[Diagram: VMware vSwitches on two ESX hosts uplinked to upstream switches A and B]

Channeling Techniques Available with Nexus 1000V
Traffic flow is based on the same principles as VMware, except the N1KV can combine:
- src MAC (MAC pinning)
- Virtual port ID
- EtherChannel
- Manual

The primary benefit of the N1KV is the ability to pin traffic of specific VLANs to a certain upstream switch and to provide EtherChannel; a hedged configuration sketch follows.

[Diagram: VEMs alongside VMware vSwitches on two ESX hosts, uplinked to redundant upstream switches]
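A minimal sketch of an uplink port profile that bundles the physical NICs into an EtherChannel toward the upstream switches (the profile name and VLAN range are illustrative assumptions; command forms vary by release):

n1kv(config)# port-profile DC-Uplink
n1kv(config-port-prof)# capability uplink
n1kv(config-port-prof)# switchport mode trunk
n1kv(config-port-prof)# switchport trunk allowed vlan 110-120
n1kv(config-port-prof)# channel-group auto mode on
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# state enabled
n1kv(config-port-prof)# vmware port-group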

Possible Deployment Scenarios
The purpose of the following slides is to make you aware of the different architecture components of an ESX/N1KV environment. Any design session that leverages these slides before FCS must come with the caveat that official best practices and recommendations may change. This is meant to start conversations and provide examples of how it could be.

Priorities and I/O Characteristics of Nexus 1000V VLANs & Virtual Interfaces
- Control VLAN (high priority, low BW): unique VLAN configured for VSM-to-VEM configuration, heartbeats, etc.
- Packet VLAN (medium priority, low BW): unique VLAN configured for SUP-level communication (IGMP, CDP, NetFlow, system logs, VACL/ACL, etc.)
- vswif (medium priority, low BW): Service Console/management interface to the ESX server; a veth port
- vmknic (high or low priority & BW): used by the TCP/IP stack that services VMotion, NFS, and software iSCSI clients running at the VMkernel level, plus remote console traffic; a veth port
- vnic (priority & I/O characteristics depend on the VM): standard VM data traffic; a veth port

[Diagram: VMs, the Service Console (vswif0), and the vmknic mapped onto veth ports (veth2, veth3, veth5, veth6, veth9, veth68) on the VEM]
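Because system VLANs are forwarded by the VEM even before the VSM is reachable, the control and packet VLANs are commonly marked as system VLANs on the uplink profile. A hedged sketch (the profile name and VLAN numbers are illustrative assumptions):

n1kv(config)# port-profile System-Uplink
n1kv(config-port-prof)# capability uplink
n1kv(config-port-prof)# switchport mode trunk
n1kv(config-port-prof)# switchport trunk allowed vlan 260-261
n1kv(config-port-prof)# system vlan 260-261
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# state enabled
n1kv(config-port-prof)# vmware port-group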

Additional information & links found on this thread: http://communities.vmware.com/thread/136077?tstart=1775


1GE Design: Possible Minimum
- Multiple adapters for redundancy and throughput
- 1GE begs for traffic isolation, as the pipe can be filled
- Minimum config is four NICs (two per EtherChannel) for isolation and redundancy
- 4 Gb/s total bandwidth

[Diagram: a VEM on one ESX host with four 1GE uplinks in two EtherChannels; one channel carries pinned N1KV control, N1KV packet, Service Console, and possibly VMkernel traffic, the other carries pinned VM traffic and possibly VMkernel traffic]

1GE Design: A More Common Isolated Scenario
- Multiple adapters for redundancy and throughput
- Provides isolation of the different types of traffic
- Guards against a 1GE bottleneck
- 8 Gb/s total bandwidth

[Diagram: a VEM on one ESX host with eight 1GE uplinks, separating N1KV control/packet and Service Console traffic, VM traffic, VMkernel (IP storage), and VMkernel (VMotion)]

Possible 10GE Designs
- Pin specific VLAN traffic to a specific uplink to enhance traffic isolation (see the sketch after this slide)
- 10GE is likely to be enough bandwidth for all traffic
- Minimum config would be two 10GE NICs for redundancy to two upstream switches
- 20 Gb/s total bandwidth

[Diagram: a VEM on one ESX host with two 10GE uplinks; one carries pinned N1KV control, N1KV packet, Service Console, and possibly VMkernel traffic, the other carries pinned VM and VMkernel traffic]
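A hedged sketch of pinning a vEthernet profile to one member of a MAC-pinning uplink (profile names, VLAN, and sub-group ID are illustrative assumptions; the mac-pinning and pinning id options depend on the software release):

n1kv(config)# port-profile 10GE-Uplink
n1kv(config-port-prof)# capability uplink
n1kv(config-port-prof)# switchport mode trunk
n1kv(config-port-prof)# channel-group auto mode on mac-pinning
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# state enabled
n1kv(config-port-prof)# vmware port-group
n1kv(config)# port-profile VM-Data
n1kv(config-port-prof)# switchport mode access
n1kv(config-port-prof)# switchport access vlan 110
n1kv(config-port-prof)# pinning id 1
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# state enabled
n1kv(config-port-prof)# vmware port-group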

Your feedback is important to us. We want to hear from you! Please complete your survey by going to the URL listed below:

http://iplatform.cisco.com/iplatform/
Event Name: Data Center SEVT
Session Name: Nexus 1000V Design Scenarios