Multi-Hypervisor Networking: Compare and Contrast
Bill Dufresne
Distinguished Systems Engineer II, CCIE
BRKVIR-2044
Agenda
• Workload Challenges
  • In the Data Center
  • In Public/Hybrid Cloud
• Hypervisor Networking Specifics
  • Overlay Formats
  • KVM
  • Hyper-V
  • vSphere
• Normalizing the Complexity of Multi-Hypervisor Networking
• Summary / Q&A
Workload Challenges in the Data Center
Which Architecture for Next-Gen Data Center?
• 1990s, Perimeter Centric: manual and complex, error-prone, static topology, limited places in the network
• 2000s, Virtualization Centric: no physical support, limited visibility, management complexity
• 2014+, Application Centric: any workload, any place; full visibility, auditable, policy-based, automated
Virtualizing the Data Center: From Sprawl to Real-Time Infrastructure
• Pre-2008, Siloed: departmental servers
• 2008-2012, Virtualized: compute/storage abstracted; hardware costs down, flexibility up
• 2010-2016, Automated: everything abstracted (workloads, data, resources, identities, policies, services); provisioning, optimization, availability; service levels and service agility up
Source: “Virtualization Changes Virtually Everything,” Thomas Bittman, Gartner, November 2007.
• Applications, and by extension Virtual Machines, are no longer treated like pets that you raise and care for, but more like cattle that are raised quickly and consumed.
• Concepts like Continuous Integration and Agile development allow developers to produce more reliable code in a faster cycle.
New Consumption Models are Changing the Rules
Policy Enables Agility, Dynamic Consumption, Applications at Scale
• Pets: care and feeding; named like fluffy.foo.com
• Cattle: problems? Get another cow; named like cw12-bxb9.foo.com
Applications
• Applications drive the Data Center
• Different applications have different needs at different times
• Where the applications run should not matter
• “Application Down to the Wire”
• DevOps frame of Mind: “Build once, run anywhere” and “Configure once and run anything”
• Applications need to be treated differently by the Infrastructure
  • In the past it was “Build the Infrastructure and layer Apps on top”
  • Now it is “Configure the Infrastructure portion for this Application”
Any Application, Any Place, Any Time, Any Scale
Keeping the Application Fed and Productive
• Applications typically need:
  • Network Services
  • Security Services
  • Storage Services
  • Compute Services
  • Peer Application Services
  • Orchestration Services
  • Management Services
  • Application Templates
• Policies define the services the application needs to survive and thrive
Workload Challenges in Public Cloud
Workload Mobility
[Figure: VM workloads distributed across Private and Public clouds]
• Migration: replicate an app instance on a cloud
• Instance built in the cloud
• It could be a copy: replication
• It could be a move: replication with termination
Migrating to the Public Cloud?
• What container format (API) does your Cloud Provider use?
• Not the same as your Private Cloud Hypervisor
• Workloads are Stranded
• Can’t we all just use one Hypervisor?
One-way trip?
• What container format (API) does your Cloud Provider use?
• Not the same as your Private Cloud Hypervisor
• Workloads are Stranded
• Can’t we all just use one Hypervisor?
Physical, Virtual, Cloud Journey
• Consistency reduces operational risk and complexity
• Physical workload: one app per server; static; manual provisioning
• Virtual workload: many apps per server; mobile; dynamic provisioning
• Cloud workload: multi-tenant per server; elastic; automated scaling
CONSISTENCY across VDC-1 and VDC-2: Policy, Features, Security, Management, Separation of Duties
• Switching: Nexus 1000V, AVS, VM-FEX (virtual); Nexus 9K/7K/5K/3K/2K (physical)
• Routing: Cloud Services Router (CSR 1000V) (virtual); ASR, ISR (physical)
• Services: vWAAS, VSG, ASAv, vNAM, NS1000V (virtual); WAAS, ASA, NAM (physical)
• Hypervisors: vSphere, Hyper-V, Open Source (KVM)
Agenda
• Workload Challenges
  • In the Data Center
  • In Public/Hybrid Cloud
• Hypervisor Networking Specifics
  • Overlay Formats
  • KVM
  • Hyper-V
  • vSphere
• Normalizing the Complexity of Multi-Hypervisor Networking
• Summary / Q&A
Overlay Formats
Virtual Overlay Extensibility
[Figure: VMs on Layer 2 segments over the physical server and network infrastructure]
• How to optimally leverage the physical infrastructure when a new workload exceeds capacity?
• How to provide mobility across Layer 3?
Multi-Hypervisor Encapsulations
• x86 hypervisors within the Data Center use different encapsulations:
  • VLAN for bare metal
  • VLAN/VXLAN for vSphere
  • VLAN/VXLAN for KVM
  • VLAN/NVGRE for Hyper-V
[Figure: the Network Admin manages the fabric; the Virtualization Admin manages the VLAN/VXLAN/NVGRE encapsulations on physical servers running vSphere, Hyper-V, and KVM]
Virtual Extensible Local Area Network (VXLAN)
• Ethernet-in-IP overlay network
  • Entire L2 frame encapsulated in UDP
  • 50 bytes of overhead
• Includes a 24-bit VXLAN Identifier
  • 16 M logical networks
  • Mapped into local bridge domains
• VXLAN can cross Layer 3
  • Tunnel between VEMs
  • VMs do NOT see the VXLAN ID
• IP multicast or MP-BGP w/EVPN used for L2 broadcast/multicast and unknown unicast
• Technology submitted to the IETF for standardization, with VMware, Citrix, Red Hat and others
VXLAN encapsulation of the original Ethernet frame: Outer MAC DA | Outer MAC SA | Outer 802.1Q | Outer IP DA | Outer IP SA | Outer UDP | VXLAN ID (24 bits) | Inner MAC DA | Inner MAC SA | Optional Inner 802.1Q | Original Ethernet Payload | CRC
VXLAN Header Specifics
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|R|R|R|I|0|R|R|R| Reserved | 0x0800 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| VXLAN Network Identifier (VNI) | Reserved |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                     Original IPv4 Packet                     |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
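To make the header layout above concrete, here is a minimal Python sketch that packs the 8-byte VXLAN header (flags with the I bit set, 24-bit VNI). It uses only the standard library and the well-known UDP destination port 4789; the VNI value is just an example.

import struct

VXLAN_UDP_DST_PORT = 4789  # IANA-assigned VXLAN port

def vxlan_header(vni: int) -> bytes:
    """Pack the 8-byte VXLAN header: flags word (I bit set) + 24-bit VNI."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags_word = 0x08 << 24          # I flag set, remaining bits reserved
    vni_word = vni << 8              # VNI in the top 24 bits, low 8 reserved
    return struct.pack("!II", flags_word, vni_word)

# The full packet on the wire is: outer Ethernet / outer IP / outer UDP
# (dst 4789) / vxlan_header(vni) / original Ethernet frame.
print(vxlan_header(5100).hex())      # 080000000013ec00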
Generic Protocol Extension for VXLAN (VXLAN-GPE)
• Ethernet-in-IP overlay network
  • Uses the same VXLAN header (UDP in IP)
  • 50 bytes of overhead
• Includes a 24-bit VXLAN Identifier
  • 16 M logical networks
  • Mapped into local bridge domains
• VXLAN can cross Layer 3
• Reserved Bit 5 is used to determine the GPE VXLAN type
• Tunnel between VEMs
  • VMs do NOT see the VXLAN ID
• Used to carry more than Ethernet traffic
• Uses a separate VTEP-GPE
  • Can forward to a standard VTEP by setting Reserved Bit 5 to 0
• Technology submitted to the IETF for standardization
VXLAN-GPE uses the same outer encapsulation of the original Ethernet frame: Outer MAC DA | Outer MAC SA | Outer 802.1Q | Outer IP DA | Outer IP SA | Outer UDP | VXLAN ID (24 bits) | Inner MAC DA | Inner MAC SA | Optional Inner 802.1Q | Original Ethernet Payload | CRC
VXLAN-GPE Header Specifics
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|R|R|R|I|1|R|R|R| Reserved | 0x04 (Ntwk Svc Hdr) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| VXLAN Network Identifier (VNI) | Reserved |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                     Original IPv4 Packet                     |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Network Virtualization over GRE (NVGRE)
• MAC-over-GRE tunneling
  • Entire L2 frame encapsulated in GRE
  • 50 bytes of overhead
• Includes a 24-bit Virtual Subnet ID (VSID)
  • 16 M logical networks
• NVGRE can cross Layer 3
  • GRE tunnel between endpoints
  • VMs do NOT see the VSID
• Technology submitted to the IETF for standardization, with Microsoft, Arista, Intel, Dell, HP, Broadcom and Emulex
NVGRE encapsulation of the original Ethernet frame: Outer MAC DA | Outer MAC SA | Outer 802.1Q | Outer IP SA | Outer IP DA | Outer GRE Hdr with VSID (24 bits) | Inner MAC DA | Inner MAC SA | Inner IP SA | Inner IP DA | Original IP Payload (Inner IP Packet, Customer Address)
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|0| |1|0| Reserved0 | Ver | Protocol Type 0x6558 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Tenant Network ID (TNI)| Reserved |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
NVGRE Header Specifics
NVGRE encapsulation: Outer MAC DA | Outer MAC SA | Outer 802.1Q | Outer IP DA | Outer IP SA | Outer GRE | VSID (24 bits) | Inner MAC DA | Inner MAC SA | Optional Inner 802.1Q | Original Ethernet Payload | CRC
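For comparison with VXLAN, a minimal Python sketch of the GRE header as NVGRE uses it: Key Present bit set, protocol type 0x6558 (Transparent Ethernet Bridging), and the 24-bit VSID plus an 8-bit FlowID in the key field. This illustrates the field layout only; it is not Cisco or Microsoft code, and the VSID value is an example.

import struct

def nvgre_header(vsid: int, flow_id: int = 0) -> bytes:
    """Pack the 8-byte GRE header used by NVGRE: K bit, proto 0x6558, VSID+FlowID key."""
    flags_version = 0x2000                      # only the Key Present (K) bit set
    protocol_type = 0x6558                      # Transparent Ethernet Bridging
    key = ((vsid & 0xFFFFFF) << 8) | (flow_id & 0xFF)
    return struct.pack("!HHI", flags_version, protocol_type, key)

# Wire format: outer Ethernet / outer IP (protocol 47, GRE) / nvgre_header(vsid) / inner frame
print(nvgre_header(7456).hex())                 # 20006558001d2000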
Generic Network Virtualization Encapsulation (GENEVE)
• Based on a variable-length header
  • Poor infrastructure hardware performance; relies on NIC/CPU to process
• UDP source port used to allow for ECMP distribution
  • Uses a hash algorithm like current ECMP options
  • Destination port 6081
• MAC in UDP to cross L3 boundaries
• MTU will vary based on the GENEVE header
  • DF bit setting only valid on L3 devices
  • If hardware is unable to accelerate, traffic takes the slow path
  • All vNIC endpoints must be L3
• Tunnel between endpoints
  • VMs do NOT see the encapsulation
• Technology submitted to the IETF by VMware, Microsoft, Red Hat, and Intel
GENEVE encapsulation of the original Ethernet frame: Outer MAC DA | Outer MAC SA | Outer 802.1Q | Outer IP SA | Outer IP DA | Variable GENEVE bits | Inner MAC DA | Inner MAC SA | Inner IP SA | Inner IP DA | Original IP Payload
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|Ver| Opt Len |O|C| Rsvd. | Protocol Type |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Virtual Network Identifier (VNI) | Reserved |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Variable Length Options |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
GENEVE Header Specifics
GENEVE encapsulation: Outer MAC DA | Outer MAC SA | Outer 802.1Q | Outer IP DA | Outer IP SA | Variable GENEVE bits | Inner MAC DA | Inner MAC SA | Optional Inner 802.1Q | Original Ethernet Payload | CRC
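A similar sketch for the GENEVE base header, showing how the variable-length options drive the Opt Len field (counted in 4-byte words) and therefore the MTU impact discussed above. Field positions follow the header diagram; the UDP destination port 6081 comes from the slide, and the VNI and option bytes are placeholders.

import struct

def geneve_header(vni: int, options: bytes = b"", protocol_type: int = 0x6558) -> bytes:
    """Pack the GENEVE base header plus TLV options (option length in 4-byte words)."""
    if len(options) % 4:
        raise ValueError("GENEVE options must be a multiple of 4 bytes")
    ver_optlen = (0 << 6) | (len(options) // 4)   # Ver=0 in top 2 bits, Opt Len in low 6 bits
    flags = 0x00                                  # O (OAM) and C (critical) bits clear
    vni_word = (vni & 0xFFFFFF) << 8              # VNI in top 24 bits, low 8 reserved
    return struct.pack("!BBHI", ver_optlen, flags, protocol_type, vni_word) + options

# Wire format: outer Ethernet / outer IP / outer UDP (dst 6081) / geneve_header(...) / inner frame
hdr = geneve_header(5200, options=b"\x00" * 8)    # 8 bytes of options -> Opt Len = 2
print(len(hdr), hdr.hex())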
Overlay Networking Basics
[Figure: three VMs on a host attached to a VDS; VLANs 100/200/300 toward the VMs map to VXLANs 5100/5200/5300 across the physical NICs]
• The VM guest operating system still only sees a VLAN encapsulation.
• The overlay exists within the Virtual Distributed Switch and over the physical network.
VXLAN Overlay Typical Topology
[Figure: Tenant A VXLAN 5100 (10.5.1.0/24) and VXLAN 5200 (10.5.2.0/24) stretched across compute nodes cn-151/152/153/156/162, each running a VEM; the Nexus 1000V VSM and a network node are managed by a Virtual Machine Manager via REST APIs, behind the aggregation layer]
Virtual Overlay Network, beyond the Hypervisor
[Figure: the overlay spans the data center network, with gateways toward the WAN/router, a physical firewall, and bare-metal servers]
• The overlay needs a gateway to access the physical network
• The physical network must support the overlay traffic pattern
Multi-Hypervisor Virtual Machine Managers
[Figure: the Network Admin manages the fabric; the Virtualization Admin manages VLAN/VXLAN/NVGRE on physical servers running vSphere, Hyper-V, and KVM]
• Each hypervisor within the data center uses a different point of management:
  • System Center Virtual Machine Manager (SCVMM) for Hyper-V
  • vCenter for vSphere
  • Horizon for OpenStack
Hypervisor Technology Comparison
VMware vSphere | Microsoft Hyper-V | OpenStack
Virtual Distributed Switch (VDS) | Logical Switch | Open vSwitch
Port Group | Virtual Port Profiles + VM Networks | Logical Networks (Internal/External)
vmknic | Host vNIC | Virtual Adapter
Folder/Data Center | Host Group | Tenant
vMotion | Live Migration | Live Migration
Distributed Resource Scheduling (DRS) | Dynamic Optimization | Nova Scheduler
Distributed Power Mgmt (DPM) | Power Management | Nova Scheduler
vCenter, vCAC | SCVMM, SCO | Horizon Dashboard
Site Recovery Manager | Hyper-V Replica | Gluster
Virtual Machine Disk (VMDK) | Virtual Hard Disk (VHDX) | QEMU Copy on Write (QCOW2) or VMDK
KVM
KVM Networking Components
[Figure: three VMs on a host attached to Linux bridges; VLANs 100, 200, and 300 extend from the VM vNICs through the bridges to the physical NICs. Subsequent builds of the figure add routing between the bridges and iptables filtering on the host.]
A minimal sketch of this host-side plumbing appears below.
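An illustrative Python sketch of the Linux-bridge plumbing just described, assuming iproute2 and iptables are installed and that eth0/br100 are placeholder interface names; libvirt normally automates these steps when it attaches a VM's tap device.

import subprocess

def run(cmd: str) -> None:
    """Run a host networking command and fail loudly if it errors."""
    subprocess.run(cmd.split(), check=True)

# Bridge for VLAN 100: a VLAN sub-interface of the physical NIC is enslaved to it.
run("ip link add name br100 type bridge")
run("ip link add link eth0 name eth0.100 type vlan id 100")
run("ip link set dev eth0.100 master br100")
run("ip link set dev eth0.100 up")
run("ip link set dev br100 up")

# libvirt attaches each VM's tap device to br100; iptables can then filter the
# forwarded traffic, e.g. explicitly permitting intra-bridge traffic:
run("iptables -A FORWARD -i br100 -o br100 -j ACCEPT")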
OpenStack Core Projects
• OpenStack Compute (Nova): software to provision virtual machines on commodity hardware at massive scale
• OpenStack Image Service (Glance): services for discovering, registering, and retrieving virtual machine images
• OpenStack Object Storage (Swift): software to reliably store billions of objects distributed across commodity hardware
• OpenStack Dashboard (Horizon): a self-service web portal that allows administrators and users to manage OpenStack resources
• OpenStack Network Service (Quantum/Neutron): provides “network connectivity as a service” between devices managed by other OpenStack services
• OpenStack Identity (Keystone): provides “unified authentication” across all OpenStack projects and integrates with third-party authentication systems
OpenStack Element Dependencies
Neutron
Nova
Keystone
Glance
Horizon
Cinder
Swift
Neutron Architecture
[Figure: clients call the Neutron service, which drives backend networks, both physical and virtual]
• Interfaces from Nova plug into a switch managed by the Neutron plug-in
Basic Neutron Abstractions & APIs
• Networks: Create, Delete, Update, List, Show
• Subnets: Create, Delete, Update, List, Show
• Ports: Create, Delete, Update, List, Show
A sketch of these calls with the OpenStack SDK follows.
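A minimal sketch of those CRUD abstractions using the openstacksdk Python library; the cloud name, resource names, and CIDR are illustrative placeholders taken from the topology examples earlier in this session.

import openstack

# Connect using a cloud entry from clouds.yaml (the name "mycloud" is illustrative)
conn = openstack.connect(cloud="mycloud")

# Network -> Subnet -> Port, mirroring the Neutron abstractions above
net = conn.network.create_network(name="tenantA-net1")
subnet = conn.network.create_subnet(
    network_id=net.id, ip_version=4, cidr="10.5.1.0/24", name="tenantA-subnet1")
port = conn.network.create_port(network_id=net.id, name="vm1-port")

for n in conn.network.networks():    # List/Show
    print(n.id, n.name)

conn.network.delete_port(port)       # Delete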
Networking in Horizon
A Simple OpenStack Deployment
• Control node: MySQL, RabbitMQ, nova-api, nova-scheduler, Keystone, Neutron server
• Network node: neutron-plugin-agent, neutron-l3-agent, neutron-dhcp-agent
• Compute nodes (xN): nova-compute, neutron-plugin-agent
• Networks: Management, Data, External, IPMI
Hyper-V
Microsoft SCVMM Networking Concepts: Logical Networks & Network Sites
[Figure: six Hyper-V hosts running VMs, grouped into San Jose and Seattle host groups on one Logical Network with Network Sites 1-3]
• A Logical Network represents a network with a certain type of connectivity characteristics (DMZ, tenant isolation)
• The instantiation of a Logical Network on a set of host groups (hosts in a POD) is called a Network Site
Microsoft SCVMM Networking Concepts: VMs are Bound to VM Networks
• VM Networks can be backed by either VLANs or other overlay networks (e.g. NVGRE segments).
Microsoft SCVMM Networking Concepts: Port Classifications
[Figure: extensible vSwitch with four VMs; vNICs above, pNICs below]
• The bundling of profiles from each switch extension is the port classification
Microsoft SCVMM Networking Concepts: Logical Switch
• A switch template created in SCVMM; it allows consistent configuration on all Hyper-V hosts where the Logical Switch is instantiated
• Logical Switch = {switch extensions, uplink profiles, port classifications}
  • Choose the port classifications allowed by this logical switch
  • Choose the extensions supported by this logical switch
  • Choose the uplink profiles (VLANs and network policies) applied to this logical switch
Microsoft SCVMM Networking Concepts: Associating VM vNICs to VM Networks & Port Classifications
• Choose network
• VM Network
• VM Subnet is tied to the Network (1:1)
• Choose IP address type
• Can be dynamic (DHCP) or statically assigned
• Choose IP pool for static IPs
• Choose Port Profile Classification
• Policy (QoS, Security, Monitoring)
• A Classification refers to a Port Profile
Microsoft SCVMM Networking Concepts: IP Pools
• IP pools whose address ranges are chosen and allocated by an external DHCP server:

#nsm ip pool template name my-dhcp-pool
description “Pool for DHCP segments”
dhcp
#nsm network segment mydhcpnet1
ip-pool my-dhcp-pool
#nsm network segment mydhcpnet2
ip-pool my-dhcp-pool

[Figure: web servers and DB servers on segment mydhcpnet1, addressed by an external DHCP server]

IP Pools for VMs
• SCVMM can behave like a DHCP server and assign an IP from an address pool to a VM
• IP pools are defined on VM Networks
• For external VM Networks on the Nexus 1000V, the network admin defines an IP pool:

switch(config)# ip-pool-template DMZ_SJC_Pool1
switch(config-ip-pool-template)# description IP-Pool-for-DMZ_SJC 200
switch(config-ip-pool-template)# ip-address 20.1.1.2 20.1.1.253
switch(config-ip-pool-template)# subnet-mask 255.0.0.0
switch(config-ip-pool-template)# gateway 20.1.1.1
switch(config-ip-pool-template)# end
switch# configure terminal
switch(config)# network-segment DMZ-SJC
switch(config-net-seg)# switchport access vlan 200
switch(config-net-seg)# network-segment-pool DMZ-SJC
switch(config-net-seg)# import ip-pool-template DMZ_SJC_Pool1
switch(config-net-seg)# publish network-segment DMZ-SJC
switch(config-net-seg)# end
Microsoft SCVMM Networking Concepts: Putting Everything Together
[Figure: Logical Network “DMZ” with Network Site “DMZ_POD1” (VM networks DMZ_Pod1_VMN1-3, IP-Pools 1-3) and Network Site “DMZ_POD2” (VM networks DMZ_Pod2_VMN4-6, IP-Pools 4-6), serving client, guest, and server VMs]
vSphere
vSphere Standard vSwitch
[Figure: vCenter managing three vSphere hosts, each with its own standard vSwitch and VMs]
• Individual configuration per host
  • How many hosts?
  • What level of risk for mis-configuration?
vSphere Distributed Virtual Switch
[Figure: vCenter managing three vSphere hosts sharing one distributed virtual switch]
• Individual configuration per vCenter
  • How many switches?
  • Consistency = less risk
Cisco Nexus 1000V Architecture for vSphere
[Figure: redundant Nexus 1000V VSMs integrated with vCenter; a Nexus 1000V VEM on each vSphere host carrying the VMs]
• Distributed Virtual Switch registered in vCenter
  • vSwitch: per host, does not scale
  • DVS: scales across hosts
• VLAN & VXLAN support
• Kernel-based forwarding process (VEM)
• Port Profile includes:
  • Network profile
  • Network classification
Assigning Port Profiles
• Set VMKernel Port
• Management
• Reporting
• Set Uplinks
• Include VLAN/VXLAN
• Set Port-Profile
• Must be unique per vNIC
Simple add of Nexus 1000V VEM
• Virtual Switch Update Manager
• Plug-in to vCenter
• GUI to install VSM & VEM for NX1kV & AVS
• Use to Upgrade NX1kV & AVS
• Single Tool for Install, Upgrade, Management
• No more ESXi CLI configuration required
Agenda
• Workload Challenges
  • In the Data Center
  • In Public/Hybrid Cloud
• Hypervisor Networking Specifics
  • Overlay Formats
  • KVM
  • Hyper-V
  • vSphere
• Normalizing the Complexity of Multi-Hypervisor Networking
• Summary / Q&A
Nexus 1000V
Why Not Configure Virtual Ports?
• Too many ports, and they move too fast
• The network admin needs sanity
• The server admin needs freedom
  • To deploy and move virtual machines
  • To deploy and move physical hosts
switch # int gi1/0/35
switchport mode access
switchport access vlan 23
etc…
switch # int gi1/0/47
switchport mode access
switchport access vlan 23
etc…
switch # int gi1/0/21
switchport mode access
switchport access vlan 23
etc…
switch # int gi1/0/17
switchport mode access
switchport access vlan 23
etc…
Port Profiles – Nexus 1000V
• Instead of configuring individual Ports, create a Port Profile
• Set up ahead of time:
• VLANs
• ACLs
• NetFlow
• QoS
• Private VLANs
• and all other port config!
# port-profile database
switchport mode access
switchport access vlan 10
ip port access-group myacl in
no shut
state enabled
Re-use it multiple times!
Network Segments and Port Profiles
• Networks and profiles are two different things
• One network, multiple port profiles for access: different ports need different protection on the same network
[Figure: one Intranet network segment serving web servers, SSL web servers, application servers, and DB servers, each attached through its own port profile]
Network Segments and Port Profiles, cont.
• Many networks can share the same protection requirements: multiple network segments use the same port profiles
[Figure: Tenant A-D Intranet network segments (Web/App/DB) all consuming the same web, SSL web, application, and DB server port profiles]
Unified Management Interface Across Hypervisors
• Cisco Nexus 1000V platform features: NTP, TACACS+, RADIUS, NetFlow, SPAN & ERSPAN, NX-OS CLI, SNMP support, NETCONF/XML, CDP, Syslog
• REST APIs for manageability, exposing:
  • vm-network-definition (id, vlan, ip-pool) – for network segments
  • logical-network-definition (name, id, connected-ports) – fabric network
  • virtual-port-profile (type, id, maxports, switch-id) – for vEth
  • uplink-port-profile (state, type, id, maxports, switch-id) – for pNICs
  • ip-address-pool (name, dhcp-server, range, etc.) – for IP pools
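A hedged sketch of driving one of those objects over the REST API with Python requests. The VSM address, credentials, resource path, and payload keys below are illustrative assumptions rather than the documented Nexus 1000V schema; consult the VSM REST API reference for the actual resources.

import requests
from requests.auth import HTTPBasicAuth

VSM = "https://vsm.example.com"                 # placeholder VSM address
AUTH = HTTPBasicAuth("admin", "password")       # placeholder credentials

# Hypothetical resource path and fields for a virtual-port-profile object.
payload = {"id": "WebServerProfile", "type": "vethernet", "maxports": 32}
resp = requests.post(f"{VSM}/api/virtual-port-profile",
                     json=payload, auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.status_code, resp.json())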
Nexus 1000V for Hyper-V VSM Configuration
1. N1KV(config)# logical-network DMZ
2. N1KV(config)# network-segment-pool DMZ-SJC
   Nexus1000V(config-net-seg-pool)# logical-network Intranet
   N1KV(config)# network-segment-pool DMZ-SEA
   Nexus1000V(config-net-seg-pool)# logical-network Intranet
3. N1KV(config)# network-segment vlan100
   Nexus1000V(config-net-seg)# switchport mode access
   Nexus1000V(config-net-seg)# switchport access vlan 100
   Nexus1000V(config-net-seg)# network-segment-pool DMZ-SJC
   Nexus1000V(config-net-seg)# publish network-segment
4. N1KV(config)# port-profile type vethernet WebServerProfile
   Nexus1000V(config-port-prof)# publish port-profile
   Nexus1000V(config-port-prof)# no shutdown
   Nexus1000V(config-port-prof)# state enabled
Nexus 1000V for Hyper-V VSM Configuration, cont.
5. N1KV(config)# uplink-network Nexus1000VUplinkProfile
   Nexus1000V(config-uplink-net)# import port-profile PortChannelProfile
   Nexus1000V(config-uplink-net)# network-segment-pool DMZ-SJC
   Nexus1000V(config-uplink-net)# network-segment-pool DMZ-SEA
   Nexus1000V(config-uplink-net)# publish uplink-network
6. N1KV(config)# port-profile type ethernet PortChannelProfile
   Nexus1000V(config-port-prof)# channel-group auto mode on mac-pinning
   Nexus1000V(config-port-prof)# no shutdown
   Nexus1000V(config-port-prof)# state enabled
Cisco Nexus 1000V for Hyper-V: Defining “Network Sites” and “VM Networks”

nsm logical network DMZ
# nsm network segment pool San Jose
# member-of logical network DMZ
# nsm network segment DMZ-POD1
member-of network segment pool DMZ-POD1
switchport mode access
switchport access vlan 100
ip pool import template DMZ_POD1_Pool1
# nsm network segment DMZ-POD2
member-of network segment pool DMZ-POD2
switchport mode access
switchport access vlan 200
ip pool import template DMZ_POD2_Pool2

[Figure: Logical Network “DMZ” containing Network Site “San Jose” with VM Networks DMZ-POD1 and DMZ-POD2]
Hyper-V Networking “Decoder Ring”
SCVMM Terminology | Cisco Nexus 1000V Terminology
Logical Networks | Logical Networks
Network Sites | Network Segment Pools
VM Network Definitions | Network Segments
IP-Pools | IP-Pools & IP-Pool Templates
Port-Classifications | Port-Profiles
KVM/OpenStack with Nexus 1000V
[Figure: cloud controller node (nova-scheduler, mysql, rabbit, nova-api, neutron-server, keystone), multiple compute nodes (nova-compute, neutron-plugin-agent), and multiple network nodes (dhcp-agent, neutron-plugin-agent, l3-agent), connected over Management, API, Data, and External networks; the API network is typically routable to enable public access from the Internet]
Nexus 1000V (VSM/VEM) in this deployment:
• Foundation of the virtual services architecture
• vPath service insertion/chaining
• VXLAN overlay networking
• CSR, VPN, firewall, and other virtual services
• Leverages the Nexus 1000V REST API
Port Profile Configuration in KVM/OpenStack
• Configuration templates
n1000v# show port-profile name VM-Data-ClientOS
port-profile VM-Data-ClientOS
type: Vethernet
description:
status: enabled
max-ports: 32
min-ports: 1
inherit:
config attributes:
switchport mode access
switchport access vlan 110
no shutdown
evaluated config attributes:
switchport mode access
switchport access vlan 110
no shutdown
assigned interfaces:
Vethernet10
Supported commands include: port management, VLAN, PVLAN, port-channel, ACL, NetFlow, port security, QoS
Nexus 1000V for KVM/OpenStack VSM Configuration
1. switch(config)# network segment manager switch
   Nexus1000V(config-net-seg-pool)# dvs name vsm-kvm-440
2. vsm-kvm-440(config)# network segment policy default_vlan_template
   vsm-kvm-440(config-network-segment-policy)# type vlan
   vsm-kvm-440(config-network-segment-policy)# import port-profile NSM_Template_vlan
3. vsm-kvm-440(config)# port-profile type vethernet NSM_Template_vlan
   vsm-kvm-440(config-port-prof)# guid 16c55294-91a8-41e6-906a-a1b84f1db881
   vsm-kvm-440(config-port-prof)# state enabled
4. vsm-kvm-440(config)# port-profile type ethernet sys-uplink
   vsm-kvm-440(config-port-prof)# switchport mode trunk
   vsm-kvm-440(config-port-prof)# switchport trunk allowed vlan 1-700
   vsm-kvm-440(config-port-prof)# mtu 1550
   vsm-kvm-440(config-port-prof)# state enabled
   vsm-kvm-440(config-port-prof)# publish port-profile
Neutron Work Flow with Cisco Nexus 1000V
• Admin: create a network profile of type VXLAN for TenantA (e.g. VXLAN range 5000-5100); the pool is created and assigned to the tenant
• Tenant (self-service): create network net1, which is allocated VXLAN 5000
• Tenant: create subnet subnet1 and assign an IP pool (e.g. 10.5.1.0/24 for VXLAN 5000)
• When a VM is instantiated, a port is created using the network and the admin-managed policy profile; the port is created on the Nexus 1000V VSM and attached to the VM-Network on the compute node
Nexus 1000V OpenStack Extensions
• Network Profile extension: co-existence of multiple network types, trunk interface support
• Policy Profile extension: admin-managed policies, private VLAN support
OpenStack native workflow:
• Admin: configure available networks
• Tenant: allocate a network
• Tenant: attach a VM port to the network
Nexus 1000V based OpenStack workflow:
• Admin: configure network pools and policies
• Admin: configure policy profiles
• Tenant: allocate a network from the pool
• Tenant: attach a VM port to the network and associate a policy profile
The Nexus 1000V extensions are optional; the native workflow can still be used in an N1KV environment.
Nexus 1000V for vSphere VSM Configuration
1. switch(config)# hostname vsm-esx
   vsm-esx(config)#
2. vsm-esx(config)# port-profile type vethernet Test
   vsm-esx(config-port-prof)# vmware port-group
   vsm-esx(config-port-prof)# switchport mode access
   vsm-esx(config-port-prof)# switchport access vlan 351
   vsm-esx(config-port-prof)# no shutdown
   vsm-esx(config-port-prof)# state enabled
3. vsm-esx(config)# port-profile type ethernet uplink
   vsm-esx(config-port-prof)# vmware port-group
   vsm-esx(config-port-prof)# switchport trunk allowed vlan 1-700
   vsm-esx(config-port-prof)# channel-group auto mode on mac-pinning
   vsm-esx(config-port-prof)# system vlan 351-353
   vsm-esx(config-port-prof)# state enabled
Consistency of Network Segments and Port Profiles
• Splitting the port profile into “Network Connectivity” and “Policy”
[Figure: application servers and database servers on the TenantABC Network (VLAN 10)]

vSphere version (connectivity and policy in one port profile):
# port-profile app-server
switchport mode access
switchport access vlan 10
ip port access-group app_server in
no shut
state enabled
# port-profile db-server
switchport mode access
switchport access vlan 10
ip port access-group dbserver in
no shut
state enabled

Hyper-V & KVM version (connectivity in the network segment, policy in the port profiles):
#nsm network segment tenantABC-network
switchport mode access
switchport access vlan 10
# port-profile app-server
ip port access-group app_server in
no shut
state enabled
# port-profile db-server
ip port access-group dbserver in
no shut
state enabled
Cisco Nexus 1000V Overview
• Consistency across multiple hypervisors: a VEM with vPath and VXLAN support on WS2012 Hyper-V, OpenStack/KVM, and vSphere
• Hypervisor-agnostic technologies and feature set
• Hypervisor-agnostic hosting platform to simplify operations
  • Virtual appliances: VSM, vWAAS, VSG, ASAv, NS1000V
  • Physical appliance (Nexus 1100): primary/secondary VSM, NAM, VSG, NS1000V
Service Insertion with vPath
[Figure: Nexus 1000V distributed virtual switch with vPath steering VM traffic through chained services such as intra-VM (east-west) firewalling, WAN optimization, and edge firewalling]
* vPath 3.0 uses the Network Services Header
Service Insertion with vPath, cont.
[Figure: the same service chain, showing the numbered hops traffic takes through the chained services]
* vPath 3.0 uses the Network Services Header
Cisco Nexus 1000V for Multi-Hypervisor
• Consistent architecture across hypervisors
[Figure: a Nexus 1000V VSM per hypervisor environment, paired with its Virtual Machine Manager (vCenter for vSphere, SCVMM for 2012 Hyper-V, Horizon for OpenStack), with a Nexus 1000V VEM on each host; UCS Director above them all]
Less is More™
• Goal: get to a common toolset
• Decrease complexity
  • A single pane of glass is the Unicorn of IT
• Go on a “Tools Diet”
  • Modern tools perform multiple functions
  • No need for a “Minority Report”-like admin desk
• Look for tools that are extensible and open
• Focus on the future, not the “care and feeding”
Infrastructure Automation via UCS Director
• Heterogeneous physical and virtual infrastructure automation across compute, network and storage
• Wizard-driven rapid deployment of UCS integrated infrastructure (FlexPod, Vblock, VersaStack, VSPEX)
• Extensible REST API for integration with northbound orchestration systems
Automated Provisioning: Network Virtualization and Bare Metal
[Figure: the network admin uses UCS Director to create VXLANs and VLANs, update trunks, create UCS service profiles, and create network policies across VMware, Hyper-V, and KVM (via the Nexus 1000V VSM), Nexus switches, and bare metal]
Children’s Hospital Colorado
“The Cisco environment is so easy to manage that we were able to move more than 50% of the staff to other high-value roles,” says McIntosh. “Staff is now contributing to other vital areas of the business, such as disaster recovery planning and other activities.”
InterCloud Fabric
Cisco’s Hybrid Cloud Approach
• Open: no vendor lock-in; any hypervisor to any provider; heterogeneous infrastructure; an expanding cloud provider ecosystem
• End-to-end security
• Unified workload management and governance
• Choice: workload mobility across clouds, from the customer data center through Cisco InterCloud Fabric
Extend the Enterprise DC into a Public Cloud: Secure Fabric
[Figure: IT admins and end users consume private-cloud VMs; the InterCloud Extender in the private data center builds a secure fabric to an InterCloud Switch and InterCloud Services running in the public cloud, all managed through the VM Manager and ICF Director]
• Corporate networks can be securely extended into the cloud provider of choice
• Virtual machine location/state are managed from the data center
• Consistent security model across different cloud providers
Cloud Structure: Simplification & Flexibility
[Figure: a private cloud with many lines of business, many apps, and many VMs; vDCs dedicated to a LoB or shared by multiple LoBs host multi-tiered apps, each governed by app policy statements and app categories, across ESXi, Hyper-V, and OpenStack hosts]
[Figure: ICF clouds (ICFCloud1-3) extend those workloads to different cloud providers (AWS, Azure, Cisco Powered) and different hypervisors, carrying system, compute, network & security policies; roadmap support]
Multiple Hypervisors with Public Cloud: Flexibility
• Different hypervisors require different InterCloud Fabric Directors (one per Virtual Machine Manager: OpenStack, vCenter, SCVMM)
• Different InterCloud Fabric Directors can connect to the same cloud providers (AWS, Azure, Cisco Powered)
[Figure: per-hypervisor ICF Directors placing workloads into vDC1-vDC6 across the providers]
Application Centric Infrastructure
Multi-Hypervisor-Ready Fabric
• Integrated gateway for VLAN, VXLAN, and NVGRE networks from virtual to physical
• Normalization for NVGRE, VXLAN, and VLAN networks
• The customer is not restricted by a choice of hypervisor
[Figure: virtual integration: the network admin manages the APIC-controlled ACI fabric; the virtualization admin manages VLAN/VXLAN/NVGRE on physical servers running ESX, Hyper-V, and KVM]
ACI Fabric Encapsulation Normalization
[Figure: localized encapsulations at the edge (802.1Q VLAN 50, VXLAN VNID 5789, VXLAN VNID 11348, NVGRE VSID 7456) are carried any-to-any across an IP fabric using a normalized eVXLAN encapsulation (VTEP | eVXLAN | IP | payload)]
• All traffic within the ACI fabric is encapsulated with an extended VXLAN (eVXLAN) header
• External VLAN, VXLAN, and NVGRE tags are mapped at ingress to an internal eVXLAN tag
• Forwarding is not limited to, nor constrained within, the encapsulation type or encapsulation “overlay” network
[Figure: normalization of ingress encapsulation by the APIC-managed fabric: 802.1Q, VXLAN, and NVGRE framed packets all become eVXLAN frames inside the fabric]
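Purely as a conceptual illustration of that normalization step (not APIC code, and with made-up segment IDs and VNIDs), the ingress mapping can be pictured as a lookup from an external encapsulation to a fabric-internal eVXLAN VNID:

# Conceptual sketch only: the real mapping is programmed through the APIC policy
# model, not by user code; keys and VNID values here are illustrative.
ingress_to_evxlan_vnid = {
    ("802.1Q", 50): 990050,
    ("VXLAN", 5789): 990051,
    ("VXLAN", 11348): 990052,
    ("NVGRE", 7456): 990053,
}

def normalize(encap: str, segment_id: int) -> int:
    """Return the fabric eVXLAN VNID for an external VLAN/VXLAN/NVGRE segment."""
    return ingress_to_evxlan_vnid[(encap, segment_id)]

print(normalize("NVGRE", 7456))   # -> 990053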
Summary
Bringing it all together…
• Multi-hypervisor is a reality (and the future) that brings additional complexity
  • To the data center / private cloud
  • To the public/hybrid cloud
• Cisco-provided solutions to accomplish normalization:
  • Command line – Nexus 1000V
  • Automation – UCS Director
  • Hybrid cloud – InterCloud Fabric / ICF Director
  • Data center / private cloud – Application Centric Infrastructure
• Embrace the “Tools Diet”; find your Narwhal
Recovering Time to Do More Business-Impacting Activities
Reducing IT operations time creates more time for IT innovation. Average time spent by a network administrator:
Activity | Current IT* | Fast IT
Troubleshooting | 28% | 14%
Security | 19% | 10%
Configuration | 18% | 8%
Equipment Upgrade | 14% | 14%
Traffic Optimization | 14% | 10%
Other | 7% | 43%
Result: total network operations time savings, and more time available for business innovation.
*Source: Forrester Commissioned Study
Key take-aways
• IT complexity is ever increasing; don’t let tool sprawl impact your ability to be effective
• Each hypervisor’s networking option is unique
• Understand how the Nexus 1000V provides CLI consistency across disparate hypervisor switching mechanisms
• Understand how automation à la UCS Director and ACI can normalize hypervisor networking disparity
[Figure: the Nexus 1000V VSM/VEM and a VMM normalizing VLAN, VXLAN, and NVGRE any-to-any across VMs]
Related Sessions
• BRKVIR-3013 Deploying and Troubleshooting the Nexus 1000V Virtual Switch on vSphere
• BRKCLD-2003 Building Hybrid Cloud Applications for InterCloud Fabric
• LTRDCT-1578 Integrating UCS Director with ACI in the Data Center
• BRKACI-2006 Integration of Hypervisors and L4-7 Services into an ACI Fabric
Participate in the “My Favorite Speaker” Contest
• Promote your favorite speaker through Twitter and you could win $200 of Cisco Press products (@CiscoPress)
• Send a tweet and include
• Your favorite speaker’s Twitter handle <@cscobill>
• Two hashtags: #CLUS #MyFavoriteSpeaker
• You can submit an entry for more than one of your “favorite” speakers
• Don’t forget to follow @CiscoLive and @CiscoPress
• View the official rules at http://bit.ly/CLUSwin
Promote Your Favorite Speaker and You Could Be a Winner
Complete Your Online Session Evaluation
Don’t forget: Cisco Live sessions will be available for viewing on-demand after the event at CiscoLive.com/Online
• Give us your feedback to be entered into a Daily Survey Drawing. A daily winner will receive a $750 Amazon gift card.
• Complete your session surveys though the Cisco Live mobile app or your computer on Cisco Live Connect.
Continue Your Education
• Demos in the Cisco campus
• Walk-in Self-Paced Labs
• Table Topics
• Meet the Engineer 1:1 meetings
• Related sessions
Thank you
Data Center / Virtualization Cisco Education Offerings
Course | Description | Cisco Certification
Cisco Data Center CCIE Unified Fabric Workshop (DCXUF); Cisco Data Center CCIE Unified Computing Workshop (DCXUC) | Prepare for your CCIE Data Center practical exam with hands-on lab exercises running on a dedicated comprehensive topology | CCIE® Data Center
Implementing Cisco Data Center Unified Fabric (DCUFI); Implementing Cisco Data Center Unified Computing (DCUCI) | Obtain the skills to deploy complex virtualized Data Center Fabric and Computing environments with Nexus and Cisco UCS | CCNP® Data Center
Introducing Cisco Data Center Networking (DCICN); Introducing Cisco Data Center Technologies (DCICT) | Learn basic data center technologies and how to build a data center infrastructure | CCNA® Data Center
Product Training Portfolio: DCAC9k, DCINX9k, DCMDS, DCUCS, DCNX1K, DCNX5K, DCNX7K | Get a deep understanding of the Cisco data center product line, including the Cisco Nexus 9K in ACI and NX-OS modes |
For more details, please visit: http://learningnetwork.cisco.com
Questions? Visit the Learning@Cisco Booth or contact [email protected]
Cloud Cisco Education Offerings
Course | Description | Cisco Certification
Designing the FlexPod Solution (FPDESIGN); Implementing and Administering the FlexPod Solution (FPIMPADM) | Learn how to design, implement and administer FlexPod solutions | FlexPod Design Specialist; FlexPod Implementation & Administration Specialist
UCS Director (UCSDF) | Learn how to manage physical and virtual infrastructure using the orchestration and automation functions of UCS Director |
Cisco Prime Service Catalog | Learn how to deliver data center, workplace, and application services in an on-demand, automated, and repeatable method |
Cisco Intercloud Fabric | Learn how to implement end-to-end hybrid clouds with Intercloud Fabric for Business and Intercloud Fabric for Providers |
Cisco Intelligent Automation for Cloud | Learn how to implement and manage cloud deployments with Cisco Intelligent Automation for Cloud |
For more details, please visit: http://learningnetwork.cisco.com
Questions? Visit the Learning@Cisco Booth or contact [email protected]