NFVI: L2 or L3

Page 1: NFVI: L2 or L3

V2.1 | ©6WIND 2015. All rights reserved. All brand names, trademarks and copyright information cited in this presentation shall remain the property of its registered owners.

SPEED MATTERS

Page 2: NFVI: L2 or L3

ETSI NFV Model: OpenStack, the de facto VIM

Page 3: NFVI: L2 or L3

ETSI NFV Model

Neutron: APIs and Python agents to configure the hypervisor's networking

Page 4: NFVI: L2 or L3

Requirements of Virtual Infrastructure

OpenStack: analysis of a Linux KVM NFVI case

[Diagram: Linux KVM hypervisor stack, bottom to top: server platform; driver level (pNICs); virtual switch (L2 virtual switching or L3 virtual routing); virtual NICs (vNIC, virtio) for the host-to-guest path; virtual machines (KVM/QEMU) running application software. VM-to-VM communication is ETSI Phase 2 scope; the NFV Infrastructure hosts the VNFs.]

Page 5: NFVI: L2 or L3

NFVI survival kit: learn OVS debugging

ovs-vsctl
ovs-ofctl
ovs-dpctl
ovs-appctl

+ what if OpenDaylight is being used?
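Whatever the controller, a minimal survival-kit session with these tools could look like the sketch below; the bridge name br-int, the port number and the MAC addresses are illustrative:

$ ovs-vsctl show                 # bridges, ports and controller connections
$ ovs-ofctl dump-flows br-int    # OpenFlow tables of the integration bridge
$ ovs-dpctl dump-flows           # kernel datapath flow cache
$ ovs-appctl fdb/show br-int     # learned MAC table of a bridge
$ ovs-appctl ofproto/trace br-int in_port=1,dl_src=02:00:00:00:00:01,dl_dst=02:00:00:00:00:02
                                 # trace how a hypothetical packet traverses the pipeline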

Page 6: NFVI: L2 or L3

Neutron: Linux's consumer of pNIC-to-vNIC technologies

pNICs = 10G/40G PCI Ethernet ports
vNICs = virtio

Neutron leverages:

L2: OVS, Linux Bridge

L3: netns (multi-tenancy), iptables (Security Groups via firewalling, OpenStack's floating IP addresses via NAT), L3 routes

VPN: kernel IPsec (XFRM)

Overlay / tunneling: OVS vPorts or netdevices; VXLAN, GRE (GENEVE, MPLS)

[Diagram: OpenStack compute node (host). VM 1's eth0 (a virtio vNIC backed by a virtio/vhost device in QEMU/KVM) attaches to tapYY, then to Linux bridge qbrXX, then over the veth pair qvbXX/qvoXX into OVS br-int. br-int connects through the patch-int/patch-tun patch ports to br-tun, which tunnels out pNIC eth1; the qrouter-aaaa network namespace routes between qr-yyy on br-int and qg-yyy on br-ex, which uplinks pNIC eth0 to the external network; the qdhcp-bbbb namespace runs dnsmasq behind tapzzz.]
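This plumbing can be inspected on a live node with standard Linux tooling; the aaaa/XX suffixes below stand in for the real UUID-based names:

$ ip netns list                                       # one qrouter-/qdhcp- namespace per router/network
$ sudo ip netns exec qrouter-aaaa ip addr             # qr-/qg- interfaces of the tenant router
$ sudo ip netns exec qrouter-aaaa ip route            # the tenant's routing table
$ sudo ip netns exec qrouter-aaaa iptables -t nat -S  # floating IP DNAT/SNAT rules
$ sudo ip xfrm policy                                 # kernel IPsec (XFRM) policies, for the VPN case
$ brctl show qbrXX                                    # the per-VM Linux bridge carrying security groups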

Page 7: NFVI: L2 or L3

Test 1: Linux Open vSwitch and Linux VM

[Diagram: x86 server with 12x10G ports. The hypervisor runs Open vSwitch in front of a Linux-based virtual machine doing IP forwarding. Result: 14 Gbps, i.e. limited bandwidth to Linux-based virtual machines.]

Page 8: NFVI: L2 or L3

6WIND Virtual Accelerator in OpenStack Compute Node

Compute node: 240Gbps of Virtual Accelerator throughput on 12 cores of a Xeon E5-2697 v2 @ 2.70GHz; one core provides 20Gbps of Virtual Accelerator bandwidth (12 cores x 20Gbps = 240Gbps).

Examples on a dual-socket, 24-core server:

120Gbps of North-South traffic delivered to standard VMs or VNFs, with 12 cores remaining for VMs

40Gbps of North-South traffic, with 20 cores remaining for VMs

40Gbps of North-South traffic plus 160Gbps of East-West traffic for service chaining

[Diagram: standard VM/VNFs (DPDK, Linux or another OS) attach over virtio to the Virtual Accelerator running in the host. It combines virtio host drivers and physical NIC drivers with virtual switching (Open vSwitch, Linux Bridge), overlays (GRE, VXLAN, VLAN) and virtual networking / multi-tenancy functions (IP forwarding, VRF, filtering, NAT, LAG, IPsec).]

Page 9: NFVI: L2 or L3

Compute Node Neutron Diagram

[Diagram: the same compute node Neutron plumbing as on page 6 (VM 1, tapYY, qbrXX, qvbXX/qvoXX, br-int, br-tun, br-ex, qrouter-aaaa and qdhcp-bbbb namespaces), with the 6WINDGate fast path inserted in the host: poll mode drivers (PMDs) take over the pNICs and the virtio vNICs while preserving the same topology.]

Page 10: NFVI: L2 or L3

Test 2: 6WIND Virtual Accelerator + Linux VM

[Diagram: the same x86 server with 12x10G ports. The hypervisor now runs Open vSwitch on top of the Virtual Accelerator in front of the Linux-based virtual machine doing IP forwarding. Throughput rises from 14 Gbps to 118 Gbps: an 8x bandwidth increase.]

Page 11: NFVI: L2 or L3

L2: OVS interconnect

6WINDGate OVS L2 switching performance:

6.8 Mfps per core

Up to 67.8 Mfps using 10 cores (20 threads)

Performance scales linearly with the number of cores configured to run the fast path (10 x 6.8 Mfps ≈ 68 Mfps).

Performance is independent of frame size.

This benchmark is simplified; other overheads have to be included:

Host to VM (virtio)

VM to host (virtio)

OVS over VXLAN

Page 12: NFVI: L2 or L3

L3 processing

6WINDGate IP forwarding performance:

9.57 Mpps per core

Up to 226.83 Mpps with 22 cores

Performance scales linearly with the number of cores configured to run the fast path.

Performance is independent of frame size.
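For scale, a back-of-the-envelope conversion (every Ethernet frame occupies an extra 20 bytes of preamble and inter-frame gap on the wire): a 10GbE port saturated with 64-byte frames peaks at 10 x 10^9 / ((64 + 20) x 8) ≈ 14.88 Mpps, so a single fast path core at 9.57 Mpps absorbs roughly two thirds of a 10G port under worst-case frame sizes.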

Page 13: NFVI: L2 or L3

L2 or L3 interconnect?

L2:

All VMs share the same broadcast L2 overlay network

VM's VLAN over OpenStack's VXLAN

How to distribute MAC addresses, how to manage L2 overlay topologies?

Proxy ARP? IS-IS? EVPN (BGP extension)? PVSTP?

Live migration => MAC tables of switches to refresh (GARP is not reliable)

L3:

L3 interconnects: all IP traffic from a VM is routed through its VRF, as in VPNs (RFC 2547)

No broadcast domain

But an L2 interconnect can still be supported: each tenant can configure its own L2 over IP (VXLAN, NVGRE, MPLS over GRE, etc.), as sketched below

VRF-aware L3 routing is mature

Live migration => route updates
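A minimal sketch of such tenant-configured L2 over IP, using a Linux VXLAN netdevice; the VNI, addresses and interface names are illustrative:

# Build an L2 overlay (VNI 100) on top of the tenant's routed interface eth0
$ sudo ip link add vxlan100 type vxlan id 100 local 192.0.2.10 remote 192.0.2.20 dstport 4789 dev eth0
$ sudo ip link set vxlan100 up
# The overlay carries its own L2 subnet, independent of the underlying L3 fabric
$ sudo ip addr add 10.100.0.1/24 dev vxlan100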

Page 14: NFVI: L2 or L3

OpenStack Neutron: L2 today, L3 tomorrow?

Benefits of Neutron without L2:

No OVS, no Linux Bridge => fewer networking bugs and fewer provisioning bugs

No VLANs

Fewer processing layers => fewer CPU cycles per packet => higher packet rates

Simplified design of PCI Ethernet boards (HW offloads)

Why is L2 leading, and not L3, with OpenStack?

No L3 alternative to OVS:

Quagga: EVPN not available

LISP: no proof points with Neutron

Like OVS, any solution has to be:

1st – democratized into the Linux kernel and its networking community

2nd – integrated with OpenStack / Neutron

3rd – described for its usage through standards bodies (ETSI, IETF, ONF, etc.)

Page 15: NFVI: L2 or L3

Conclusion:

ETSI NFV does not assume an L2 or L3 interconnect

SDN for VM2VM traffic can be built on L3 over L3 (aka VPNs)

L3 shall restore performance and simplicity