Hardware Virtualization
• Guest Virtual Machines run on top of a Host Machine
• A virtual machine acts like a real computer, with its own operating system and devices
• Virtual hardware – CPUs, Memory, I/O
• The software or firmware that creates a virtual machine on the host hardware is called a hypervisor
HYPERVISOR
Virtualization types
• Guest OS is not modified; the same OS image is spun up as a VM
• Guest OS is not aware of virtualization; devices are emulated entirely
• Hypervisor needs to trap and translate privileged instructions
Fully Virtualized
• Guest OS is aware that it is running in a virtualized environment
• Guest OS and Hypervisor communicate through “hyper calls” for improved performance and efficiency
• Guest OS uses a front-end driver for I/O operations
• Example : Juniper vRR, vMX
Para Virtualized
• Virtualization aware hardware (processors, NICs etc)
• Intel VT-x/VT-d/vmdq, AMD-V
• Example: Juniper vMX
Hardware assisted
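Whether a host supports hardware-assisted virtualization is visible from software: on Linux, /proc/cpuinfo advertises the vmx flag for Intel VT-x and the svm flag for AMD-V. A minimal sketch (the function name and the sample flags line are illustrative, not from the deck):

```python
# Sketch: detect hardware-assisted virtualization support by parsing the
# CPU flags line, as exposed in /proc/cpuinfo on Linux.
# "vmx" indicates Intel VT-x, "svm" indicates AMD-V.

def virtualization_support(cpuinfo_text: str) -> str:
    """Return which hardware virtualization extension the CPU advertises."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "Intel VT-x"
            if "svm" in flags:
                return "AMD-V"
    return "none"

# Example with a trimmed-down flags line:
sample = "flags\t\t: fpu vme msr pae vmx sse2 aes"
print(virtualization_support(sample))  # prints: Intel VT-x
```

On a real host the same check would read the file, e.g. `virtualization_support(open("/proc/cpuinfo").read())`.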
VMX Product
• Virtual JUNOS hosted on a VM
• Follows standard JUNOS release cycles
• Hosted on a VM, Bare Metal, Linux Containers
• Multi Core
• SR-IOV, virtIO, vmxnet3, …
VCP (Virtualized Control Plane)
VFP (Virtualized Forwarding Plane)
vMX Product Overview
[Diagram: vMX architecture. The VCP runs as a FreeBSD guest VM and the VFP as a Linux guest VM on a KVM or ESXi hypervisor, sharing host cores and memory. The VFP reaches the physical NICs either through a bridge/vSwitch (VirtIO) or directly via PCI pass-through / SR-IOV; management traffic takes a separate path.]
Virtual Control Plane (VCP)
• JUNOS hosted in a VM; offers all the capabilities available in JUNOS
• Management remains the same as on a physical MX
• SMP capable

Virtual Forwarding Plane (VFP)
• Virtualized Trio software forwarding plane with feature parity with the physical MX; utilizes Intel DPDK libraries
• Multi-threaded SMP implementation allows for elasticity
• SR-IOV capable for high throughput
• Can be hosted in a VM or on bare metal

Orchestration
• vMX instances can be orchestrated through OpenStack Kilo HEAT templates
• Package comes with scripts to launch a vMX instance
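Since the deck mentions orchestration through OpenStack Kilo HEAT templates, a rough illustration of what such a template looks like may help. This is not the shipped vMX template; the image, flavor, and network names below are placeholders:

```yaml
heat_template_version: 2015-04-30   # Kilo-era template version

description: Minimal sketch of a HEAT template booting a vMX VCP instance

resources:
  vmx_vcp:
    type: OS::Nova::Server
    properties:
      name: vmx-vcp-1
      image: vmx-vcp-image       # placeholder image name
      flavor: m1.large           # placeholder flavor
      networks:
        - network: mgmt-net      # placeholder management network
```

A full deployment would add a matching VFP server resource plus the internal and external networks connecting the two VMs.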
VMX Forwarding Model
Forwarding with Trio ASICs on MX: center chip (MQ, XM, …), lookup chip (LU, XL, …), queuing chip (QX, XQ, …).
Forwarding with x86 on vMX: the RIOT software forwarding process, with DPDK for packet I/O.
[Diagram contrasting the two forwarding paths.]
VMX Detailed View
[Diagram: vMX detailed view. The vCP (running rpd, chasd, dcd) and the vFP (running the VMXT/RIOT process with DPDK) are connected by an internal bridge (172.16.0.3/16): vcp-int (em1: 172.16.0.1/16) to vfp-int (eth1: 172.16.0.2/16). An external bridge (x.x.x.y/m) carries management traffic via vcp-ext and vfp-ext: fxp0 (x.x.x.a/m) on the vCP and eth0 (x.x.x.b/m) on the vFP. The vFP's virtual NICs map to physical NICs.]
Using VMX: SR-IOV Mode
[Diagram: SR-IOV mode. Each physical NIC port (eth0–eth3) exposes a virtual function (vf 0) that is passed through to a VFP port (0–3), which in turn maps to a JUNOS interface (ge-0/0/0 through ge-0/0/3). The vCP and vFP run as separate VMs.]
Using VMX: Virt-IO Mode
[Diagram: VirtIO mode. Input can be physical or virtual. Virtual NICs virtio-0 through virtio-3 attach to VFP ports 0–3, which map to JUNOS interfaces ge-0/0/0 through ge-0/0/3.]
Using VMX: Virt-IO Mode (multiple instances)
[Diagram: two vMX instances on the same host – VCP1/VFP1 and VCP2/VFP2 – each with VFP ports 0–3 mapped to its own ge-0/0/0 through ge-0/0/3 JUNOS interfaces.]
VMX QoS
[Diagram: three-level scheduling hierarchy. Level 1: port. Level 2: VLANs (VLAN 1 … VLAN n). Level 3: six queues (Q0–Q5) at high, medium, and low priority.]
§ Port: shaping-rate
§ VLAN: shaping-rate; 4k per IFD
§ Queues: 6 queues, 3 priorities (1 high, 1 medium, 4 low)
§ Priority-group scheduling follows strict priority for a given VLAN
§ Queues of the same priority for a given VLAN use WRR
§ High and medium queues are capped at their transmit-rate
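The hierarchy above maps onto Junos-style class-of-service configuration. The snippet below is an illustrative sketch, not a validated vMX configuration; profile, scheduler, and rate values are placeholders (`exact` caps a queue at its transmit-rate, matching the high/medium behavior described above):

```
class-of-service {
    traffic-control-profiles {
        TCP-PORT {
            shaping-rate 10g;            /* level 1: port shaping */
        }
        TCP-VLAN {
            shaping-rate 200m;           /* level 2: per-VLAN shaping */
            scheduler-map SMAP-VLAN;
        }
    }
    schedulers {
        SCH-HIGH {
            priority high;
            transmit-rate percent 10 exact;   /* capped at transmit-rate */
        }
        SCH-MEDIUM {
            priority medium-high;
            transmit-rate percent 20 exact;
        }
        SCH-LOW {
            priority low;
            transmit-rate percent 70;         /* WRR among low queues */
        }
    }
    scheduler-maps {
        SMAP-VLAN {
            forwarding-class network-control scheduler SCH-HIGH;
            forwarding-class expedited-forwarding scheduler SCH-MEDIUM;
            forwarding-class best-effort scheduler SCH-LOW;
        }
    }
}
```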
Revisit: X86 Server Architecture
[Diagram: a two-socket x86 server. Each CPU socket has twelve cores and its own memory, memory controller, PCI controller, and NICs.]
vMX Environment

Description | Value
Sample system configuration | Intel Xeon E5-2667 v2 @ 3.30 GHz, 25 MB cache. NIC: Intel 82599 (for SR-IOV only)
Memory | Minimum: 8 GB (2 GB for vRE, 4 GB for vPFE, 2 GB for host OS)
Storage | Local or NAS

Sample configuration for number of CPUs:

Use case | Requirement
vMX for up to 100 Mbps performance | Min # of vCPUs: 4 [1 for VCP, 3 for VFP]. Min # of cores: 2 [1 for VFP, 1 for VCP]. Min memory 8 GB. VirtIO NIC only.
vMX for up to 3 Gbps of performance | Min # of vCPUs: 4 [1 for VCP, 3 for VFP]. Min # of cores: 4 [3 for VFP, 1 for VCP]. Min memory 8 GB. VirtIO or SR-IOV NIC.
vMX for 3 Gbps and beyond (assuming min. 2 ports of 10G) | Min # of vCPUs: 5 [1 for VCP, 4 for VFP]. Min # of cores: 5 [4 for VFP, 1 for VCP]. Min memory 8 GB. SR-IOV NIC only.
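The sizing table can be restated as a small lookup helper. The function below simply encodes the table rows; its name and structure are illustrative:

```python
# Sketch: minimum vMX resource requirements by target throughput,
# encoding the three rows of the sizing table above.

def vmx_sizing(target_gbps: float) -> dict:
    """Return minimum vCPU/core/memory/NIC requirements for a target rate."""
    if target_gbps <= 0.1:                      # up to 100 Mbps
        return {"vcpus": 4, "cores": 2, "memory_gb": 8, "nic": "VirtIO"}
    if target_gbps <= 3:                        # up to 3 Gbps
        return {"vcpus": 4, "cores": 4, "memory_gb": 8,
                "nic": "VirtIO or SR-IOV"}
    # beyond 3 Gbps (table assumes at least 2 ports of 10G)
    return {"vcpus": 5, "cores": 5, "memory_gb": 8, "nic": "SR-IOV"}

print(vmx_sizing(0.05)["cores"])  # prints: 2
print(vmx_sizing(10)["nic"])      # prints: SR-IOV
```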
vMX Environment – Use-case 1: vMX instance up to 100 Mbps
Min # of vCPUs: 4 [1 vCPU for VCP & 3 vCPUs for VFP]
Min # of Cores: 2 [1 core for VCP. 1 core for VFP]
Min memory 8G.
NIC: VirtIO is sufficient
[Diagram: core allocation on one CPU socket. VCPU 0 (VCP, JUNOS) is pinned to one core; VCPU 1 (VFP I/O, TX & RX) and VCPU 2 (worker) share a second core; the remaining cores stay with the host OS.]
vMX Environment – Use-case 2: vMX instance up to 3 Gbps
Min # of vCPUs: 4 [1 vCPU for VCP & 3 vCPUs for VFP]
Min # of Cores: 4 [ 1 core for VCP. For VFP assume 2 port 1G/10G with a dedicated I/O core, 1 core for each Worker, 1 core for Host Interface ]
Min memory 8G.
NIC: VirtIO is sufficient. SR-IOV can also be used.
[Diagram: core allocation on one CPU socket. One core for the VCP (VCPU 0, JUNOS); VFP cores for I/O on port 1 and port 2 (TX & RX), a worker, and the host interface; the remaining cores stay with the host OS.]
vMX Environment – Use-case 3: >3 Gbps of throughput per instance
Assume 2 port 10G for I/O
Min # of vCPUs: 5 [1 vCPU for VCP & 4 vCPUs for VFP]
Min # of Cores: 5 [ 1 core for VCP. For VFP assume 2 port 10G each with a dedicated I/O core, 1 core for each Worker, 1 core for Host Interface]
Min memory 8G.
NIC: SR-IOV must be used
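The bracketed core counts in use-case 3 follow a simple rule: one core for the VCP, plus one dedicated I/O core per 10G port, one core per worker, and one core for the host interface. A sketch of that rule (the function name and its generality beyond this use case are assumptions; use-case 2 shares I/O differently):

```python
# Sketch of the core-allocation rule implied by the use-case 3 brackets:
# VCP gets 1 core; the VFP gets one I/O core per 10G port, one core per
# worker, and one core for the host interface.

def min_cores(ports: int, workers: int) -> int:
    vcp = 1
    vfp = ports + workers + 1   # per-port I/O + workers + host interface
    return vcp + vfp

print(min_cores(ports=2, workers=1))  # prints: 5 (matches use-case 3)
```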
[Diagram: core allocation on one CPU socket. One core for the VCP (JUNOS); VFP cores for per-port I/O (port 1 and port 2, TX & RX), workers 1…n, and the host interface; the remaining cores stay with the host OS.]
VMX Performance in 14.1
[Chart: vMX throughput (Gbps) versus number of cores for 256B packets, with the vFP pinned to cores on one socket of a two-socket server and the vCP on the other. Roughly 16 Gbps at 17 cores.]
VMX Performance in 15.1
[Chart: vMX with vHyper – throughput (Gbps) versus number of cores for 256B packets on the same two-socket server. Roughly 20 Gbps with 6 cores.]
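Taking the headline numbers from the two charts at face value, the per-core efficiency gain from 14.1 to 15.1 with vHyper can be estimated:

```python
# Rough arithmetic from the two performance slides (256B packets):
# 14.1 reached ~16 Gbps on 17 cores; 15.1 with vHyper reached ~20 Gbps
# on 6 cores.

gbps_141, cores_141 = 16, 17
gbps_151, cores_151 = 20, 6

per_core_141 = gbps_141 / cores_141   # ~0.94 Gbps per core
per_core_151 = gbps_151 / cores_151   # ~3.33 Gbps per core

print(round(per_core_151 / per_core_141, 1))  # prints: 3.5
```

That is roughly a 3.5x per-core improvement between the two releases.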
vLNS for business or wholesale/retail
• Separate vLNS instance available for each:
  • Business VPN
  • Retail ISP
• vLNS sized precisely to serve the required PPP and L2TP sessions
[Diagram: CPE devices connect over PPP/PPPoE through an access node and aggregation to a LAC/vLAC, which carries sessions over an L2TP tunnel to a vLNS in the data centre. The vLNS (peer port, PPE, core-side port) hands off to the customer VPN or the retail ISP toward the Internet; wholesale and retail ISP AAA servers authenticate sessions.]
SERVICE PROVIDER VMX USE CASE – VIRTUAL PE (vPE)
[Diagram: a vPE hosted on the DC fabric behind a DC/CO gateway connects branch-office and SMB CPE across the provider MPLS cloud (via L2 and L3 PEs) using pseudowire, L3VPN, or IPsec/overlay technology, and provides Internet peering.]
vBNG for BNG near CO
vBNG Deployment Model
[Diagram: DSL or fiber CPE connect through OLT/DSLAM and Ethernet L2 switches to a vBNG running in a central office with cloud infrastructure, which connects to the SP core and the Internet.]
• The business case is strongest when a vBNG aggregates 12K or fewer subscribers
Parts of a cloud
§ CGWR (cloud gateway router): could be a router, server, or switch
§ Switches: switch features and overlay technology as needed
§ Servers: includes cabling between servers and ToRs, mapping of virtual instances to ports, core capacity and virtual machines
[Diagram: a spine/leaf fabric with cloud gateways at the top. Two servers sit behind leaf/ToR switches, each with two NICs and KVM ports ge1–ge4: Server-1 hosts a vLNS (IP address 1.1.1.1) and other VNFs (2.2.2.2); Server-2 hosts a vLNS (3.3.3.3) and other VNFs (4.4.4.4).]
VMX with service chaining – potential vCPE use case
[Diagram: branch offices connect through switches and the provider MPLS cloud (via L2 PEs) to a DC/CO gateway. Inside the DC/CO fabric with a Contrail overlay, a vMX acting as vCPE is service-chained with a vFirewall and vNAT, and a vPE provides CPE-like functionality in the cloud with access to the Internet via a PE.]