June 20–23, 2016 | Berlin, Germany
Composing a New OPNFV Solution Stack
Wojciech Dec, Cisco
Abstract
This session showcases how a new OPNFV solution stack (a.k.a. "scenario") is composed and stood up. We'll use a new solution stack framed around a new software forwarder ("VPP") provided by the FD.io project as the example for this session. The session discusses how evolutions/changes of upstream components from OpenStack, OpenDaylight and FD.io are put in place for the scenario, and how installers and tests need to evolve to allow for integration into OPNFV's continuous integration, deployment and test pipeline.
Introduction
• What is an OPNFV Scenario? A selection of software components that can be automatically installed and tested via the OPNFV CI/CD pipeline.
• The realization of a scenario is a key release vehicle for OPNFV:
  • Drives evolution/development of components
  • Drives development of system tests
  • Necessitates an installer and installer platform
  • Translates into a continuous integration, deployment and test pipeline
New OPNFV FastDataStacks (FDS) Scenarios
• Components:
  • OpenStack
  • OpenDaylight w/ Neutron & GBP
  • FD.io: VPP
  • OPNFV Installer & Test
• Diverse set of contributors
• Scenario variations covered: L2, L3, HA
Scenario | Install Tool | VM Control | Network Control | Hypervisor | Forwarder
L2       | Apex         | OpenStack  | OpenDaylight    | KVM        | VPP
L3       | Apex         | OpenStack  | OpenDaylight    | KVM        | VPP
–        | Apex         | OpenStack  | –               | KVM        | VPP
Why FD.io - The Vector Packet Processor Dataplane?
Existing Dataplanes | FD.io VPP
Performance, scalability & stability issues | Highly performant, modular, designed for scale; no drops and minimal delay
"In house" architectures: hard to evolve, slow to innovate | A Linux Foundation collaborative open source project with multi-vendor support & project governance
Difficult to upgrade & operate; deep kernel dependencies | Standards-based protocol and configuration-model-driven management agents; new plugins activated at run-time
Processor specific | Support for multiple processor architectures (x86, ARM)
Limited features | L2 and L3 feature rich
Testing? | Automated testing for all projects
[Diagram: VPP (DPDK) with a management agent, hosting modular applications such as a vSwitch, vRouter, vFirewall and custom apps]
Highly performant? You bet…
VPP technology in a nutshell
› VPP data plane throughput is not impacted by large FIB size
› OVS-DPDK data plane throughput is heavily impacted by FIB size
› VPP and OVS-DPDK tested on a Haswell x86 platform with E5-2698v3 2x16C 2.3GHz (Ubuntu 14.04 trusty)
[Chart: NDR rates for 2p10GE, 1 core, L2 NIC-to-NIC (IMIX Gbps), VPP vs. OVS-DPDK at 2, 2k and 20k MACs]
[Chart: NDR rates for 12-port 10GE, 12 cores, IPv4 (IMIX Gbps), VPP vs. OVS-DPDK from 12 routes up to 2M routes]
VPP Feature Summary
Performance & scale:
• 14+ Mpps on a single core
• Multimillion-entry FIBs
• Thousands of VRFs; controlled cross-VRF lookups
• Source RPF
• Multipath: ECMP & unequal cost
• Millions of classifiers, arbitrary n-tuple
• Counters for everything
• Mandatory input checks: TTL expiration, header checksum, L2 length < IP length, ARP resolution/snooping, ARP proxy

IPv4/IPv6:
• GRE, MPLS-GRE, NSH-GRE, VXLAN
• IPsec
• DHCP client/proxy (IPv4)
• IPv6: neighbor discovery, router advertisement, DHCPv6 proxy
• L2TPv3
• Segment Routing
• MAP/LW46 (IPv4aaS)
• iOAM

MPLS:
• MPLS-over-Ethernet, deep label stacks
• MPLS-over-GRE

L2:
• VLAN support, single/double tag
• VTR: push/pop/translate (1:1, 1:2, 2:1, 2:2)
• L2 forwarding with EFP/bridge-domain concepts
• MAC learning (default limit of 50k addresses)
• Bridging: split-horizon group support, EFP filtering
• Proxy ARP, ARP termination
• IRB: BVI support with router MAC assignment
• Flooding
• Input ACLs
• Interface cross-connect
Why OpenDaylight?
• To offer a Software Defined Networking (SDN) platform covering a broad set of virtual and physical network devices
• OpenDaylight is an open source software project under the Linux Foundation, chartered with the development of an open source SDN platform
Code: to create a robust, extensible, open source code base that covers the major common components required to build an SDN solution.

Acceptance: to get broad industry acceptance amongst vendors and users, using OpenDaylight code directly or through vendor products, with vendors shipping OpenDaylight code as part of commercial products.

Community: to have a thriving and growing technical community contributing to the code base, using the code in commercial products, and adding value above, below and around it.
ODL Software Architecture
[Diagram: SAL/Core with data store, messaging and YANG-modeled data, RPCs and notifications; apps/services sit on top; protocol plugins (e.g. a NETCONF client) connect southbound to network devices; RESTCONF/REST and a NETCONF server expose the platform to applications and OSS/BSS]
What is Group Based Policy?
● An intent-driven policy framework intended to describe network application requirements independent of the underlying infrastructure.
● Concepts:
○ Group endpoints (EPs) into Endpoint Groups (EPGs)
○ Apply policy (contracts) to traffic between groups
○ Contracts apply directionally
[Diagram: EPG "Hosts" (EP:1, EP:2) and EPG "WebServers" (EP:3, EP:4); contract "web" matches dest port 80 with action allow, contract "ssh" matches dest port 22 with action allow, contract "any" matches * with action allow]
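To make the contract model concrete, here is a minimal sketch in Python. It is a toy model of the concepts above, not the ODL GBP API; all class and function names are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass(frozen=True)
class Classifier:
    dest_port: Optional[int] = None  # None matches any destination port

@dataclass
class Contract:
    name: str
    classifiers: list
    action: str = "allow"

@dataclass
class EndpointGroup:
    name: str
    endpoints: set = field(default_factory=set)
    provided: list = field(default_factory=list)  # contracts this group provides

def allowed(src_epg, dst_epg, dest_port):
    """Directional check: src->dst traffic passes only if the destination
    group provides a contract whose classifier matches the traffic."""
    for contract in dst_epg.provided:
        for c in contract.classifiers:
            if c.dest_port is None or c.dest_port == dest_port:
                return contract.action == "allow"
    return False

hosts = EndpointGroup("Hosts", {"EP:1", "EP:2"})
web_servers = EndpointGroup(
    "WebServers", {"EP:3", "EP:4"},
    provided=[Contract("web", [Classifier(dest_port=80)]),
              Contract("ssh", [Classifier(dest_port=22)])])

print(allowed(hosts, web_servers, 80))   # True: the "web" contract matches
print(allowed(hosts, web_servers, 443))  # False: no contract covers 443
print(allowed(web_servers, hosts, 80))   # False: contracts apply directionally
```

Note that the last check fails even though port 80 is allowed in the other direction; that is the point of directional contracts.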
Endpoints live in a network context
● Network Context
○ Can be an L2-Bridge Domain (L2BD)
○ Can be an L3-domain (L3D) (think VRF)
● Network Contexts can have Subnets
● Endpoint Groups can specify a default network context for members
[Diagram: endpoints EP:1, EP:2, EP:5 and EP:6 attached to bridge domains Bridge:1 and Bridge:2, which carry Subnet:1, Subnet:2 and Subnet:3; EPG "Hosts" supplies the default network context for EP:1 and EP:2]
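The default-context rule above can be sketched as follows. This is a toy model with invented names, not the GBP data model:

```python
class NetworkContext:
    """Either an L2 bridge domain (L2BD) or an L3 domain (L3D, i.e. a VRF)."""
    def __init__(self, name, kind, subnets=None):
        assert kind in ("L2BD", "L3D")
        self.name, self.kind = name, kind
        self.subnets = subnets or []  # network contexts can have subnets

class EPG:
    def __init__(self, name, default_context=None):
        self.name = name
        self.default_context = default_context

class Endpoint:
    def __init__(self, name, epg, context=None):
        self.name, self.epg = name, epg
        self._context = context  # explicit context, if any

    @property
    def context(self):
        # An endpoint's own context wins; otherwise fall back to the
        # default network context of its endpoint group.
        return self._context or self.epg.default_context

bridge1 = NetworkContext("Bridge:1", "L2BD", subnets=["Subnet:1"])
bridge2 = NetworkContext("Bridge:2", "L2BD", subnets=["Subnet:2", "Subnet:3"])
hosts = EPG("Hosts", default_context=bridge1)

ep1 = Endpoint("EP:1", hosts)                   # inherits Bridge:1 from the EPG
ep2 = Endpoint("EP:2", hosts, context=bridge2)  # explicit override

print(ep1.context.name)  # Bridge:1
print(ep2.context.name)  # Bridge:2
```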
A Broader Picture on FDS

[Diagram: a service orchestrator and operator/tenant portals drive NFV service packs (L2, L3, FW & security, LB) over SP data centre infrastructure; an SDN controller with GBP programs VPP on compute nodes and physical CPEs (CPE1 and CPE2, Cust-A) across public/private, MPLS VPN, IPsec and IWAN connectivity; OpenStack acts as the VM resource orchestrator]
● Data services = the need for network workload speed in a virtualized context (NFV, NFVI): VPP
● … with network application policies: Group Based Policy
● … to offer programmatic services on a combination of virtualized and physical devices against a common GBP policy: an SDN controller, OpenDaylight
● Example policy: endpoints @ CPE1 are not allowed to communicate with endpoints @ CPE2 unless over a private network
The Fast Data Stack w/ VPP Architecture
● OpenStack: VM control
● OpenDaylight:
○ Neutron Server
○ Group Based Policies
○ L2 & L3 topology
○ Virtual Bridge Domain Manager
○ VPP configuration rendering
○ NETCONF client
● FD.io VPP:
○ Honeycomb NETCONF configuration server
○ Data plane
[Diagram, controller node: OpenStack Neutron with the ODL ML2 neutron plugin issues neutron REST API calls to the ODL Neutron Service; a Neutron-to-GBP mapper and a Neutron-to-VPP mapper relay policies to the GBP Manager, VDB and topology store; the VPP renderer pushes configuration over NETCONF to Honeycomb on each compute node, which programs VPP/DPDK]

Detailed architecture at: https://wiki.opnfv.org/display/fds/OpenStack-ODL-VPP+integration+design+and+architecture
The Fast Data Stack w/ VPP Architecture
[Diagram: controller node (OpenStack Neutron + ODL ML2 neutron plugin, ODL Neutron Service, Neutron-to-GBP/VPP mappers, GBP Manager, topology store, VDB, VPP renderer) driving Honeycomb, VPP and DPDK on the compute nodes over NETCONF]

Port-creation workflow:
1. POST PORT (id=uuid, host_id=vpp, vif_type=vhostuser)
2. Update port
3. Map port to GBP endpoint and relay policies (Neutron specifics to generic endpoint mapping)
4. Update/create GBP endpoint (L2 context, MAC, ...)
5. Apply policy
6. Update node(s) and bridge domain
7. NETCONF commit (bridge config, VxLAN tunnel config)
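The port-creation sequence above can be sketched as a simple pipeline. This is a simplification with invented function names; the real flow runs inside OpenDaylight, and the final step is a NETCONF edit-config/commit to Honeycomb:

```python
def neutron_port_create(port):
    """Entry point: Neutron POSTs a port bound to a VPP host."""
    assert port["vif_type"] == "vhostuser"
    endpoint = map_port_to_endpoint(port)
    config = apply_policy(endpoint)
    return netconf_commit(port["host_id"], config)

def map_port_to_endpoint(port):
    # Neutron specifics -> generic GBP endpoint (L2 context, MAC, ...)
    return {"l2_context": port["network_id"], "mac": port["mac_address"]}

def apply_policy(endpoint):
    # Resolve group policy and produce per-node bridge-domain config.
    return {"bridge_domain": endpoint["l2_context"],
            "interface": "vhostuser",
            "mac": endpoint["mac"]}

def netconf_commit(host, config):
    # Stand-in for the NETCONF commit of bridge/VxLAN tunnel config.
    return {"host": host, "committed": config}

result = neutron_port_create({
    "id": "uuid-1234", "host_id": "compute-1",
    "vif_type": "vhostuser", "network_id": "net-1",
    "mac_address": "fa:16:3e:00:00:01"})
print(result["host"])                        # compute-1
print(result["committed"]["bridge_domain"])  # net-1
```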
Some differences to what you may be used to (1/2)
• A per-tenant bridge and an interface-centric model
• No flows, VLAN re-mapping "tricks", etc.
• *Significantly* simpler to trace and figure out
[Diagram, "OStack with OVS" vs. "OStack with VPP": with VPP, each VM's eth0 attaches via a virtio/vhost-user socket (configured by Nova Compute) to a virtual Ethernet interface on a per-tenant bridge (Tenant1, Tenant2); ODL configures the bridges and their VLAN sub-interfaces GE0/0.101 and GE0/0.102 (VLANs 101 and 102)]
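A toy model of that interface-centric view, with invented names and socket paths (this is not the VPP or Honeycomb API):

```python
class BridgeDomain:
    """One L2 bridge domain per tenant network; interfaces attach directly,
    with no flow tables or VLAN re-mapping tricks in between."""
    def __init__(self, name):
        self.name = name
        self.interfaces = []

    def attach(self, interface):
        self.interfaces.append(interface)

# Per-tenant bridges instead of one shared flow table.
tenant1 = BridgeDomain("Tenant1")
tenant2 = BridgeDomain("Tenant2")

# Nova Compute side: vhost-user sockets connecting VMs to VPP.
tenant1.attach("vhostuser:/tmp/socketA")  # VM_A
tenant2.attach("vhostuser:/tmp/socketB")  # VM_Z

# ODL side: VLAN sub-interfaces toward the physical NIC.
tenant1.attach("GE0/0.101")  # VLAN 101
tenant2.attach("GE0/0.102")  # VLAN 102

# Tracing is a membership question, not a flow-table walk:
print(tenant1.interfaces)  # ['vhostuser:/tmp/socketA', 'GE0/0.101']
```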
Some differences to what you may be used to (2/2)
• Full network control & visibility lives in OpenDaylight
• No network configuration "hidden" in compute-node configuration (ovs-agent)
• Allows visualization and computation of *all* of the topology in ODL
• Reconfigurable from the controller
[Diagram: two compute nodes, each running Honeycomb and VPP, hosting VMs (NFV1–NFV4) whose mgmt, private, public and internet ports attach to per-network bridges; edge networks use VxLAN/VLANs; a Nova agent handles the MGMT port; the ODL controller holds the full topology]

Physical topology 1: Compute1 -> eth0 -> physical switch eth0/0; Compute2 -> eth1 -> physical switch eth1/0

Neutron logical topology (Net1): NFV1 -> private net (VLAN 100), NFV3 -> private net (VLAN 100); subtended by physical topology 1
Active Code Development realized
• OPNFV:
  • Apex Installer: automated install of OpenStack w/ ODL, VPP and Honeycomb on compute nodes
  • Functest: initial automated testing of …
• OpenStack (Mitaka):
  • Neutron ODL ML2 parser extended to handle VPP elements; vhostuser binding picked up in the topology
• OpenDaylight (pre-Boron):
  • Group Based Policy Neutron & VPP data mapping
  • Virtual Bridge Domain Manager (builds L2 bridges using VxLAN or native VLAN interfaces), a new ODL project
  • Extended topology models
  • VPP configuration rendering
• FD.io:
  • Honeycomb and VPP models supporting VxLAN, VLAN, vhostuser
Adjustments Planned/Needed
• Apex:
  • Parameterization… parameterization… & intelligent defaults…
  • VPP's NETCONF "mounting" in ODL after install
  • Install parameters need to be user-configurable, e.g. domain name, networks, etc.
• Testing: covered on next slide
• OpenStack:
  • ML2 ODL driver robustness and state cleanup
• ODL:
  • HA for Neutron, GBP
  • L3 config: router-agent equivalent w/ VxLAN attachment
  • L2 ACL rendering
  • DHCP TAP interfaces
• FD.io:
  • ACL support
  • NAT support (for floating IPs)
  • Fix interface "acquisition" issues on RHEL
Test Coverage Needs
• Automated set-up of OStack, ODL and multi-node Honeycomb
• Test from OpenStack port creation:
  • Verify Honeycomb configuration
• Test from OpenStack VM creation:
  • Verify multi-VM creation
  • Verify VM-to-VM connectivity, same node and "other node"
• At least some scale testing…
• HA testing…

[Diagram: the stack from OpenDaylight (Neutron NorthBound, GBP & VPP Neutron mapper, topology manager/vBD, GBP renderer manager, VPP renderer) down to Honeycomb (dataplane agent), VPP and DPDK; today there is no VPP coverage, no full ODL coverage, and only simulated-use tests against a lightweight ODL Neutron]
FDS Project Schedule – Near Term

CiscoLive Las Vegas (July 2016):
• Base O/S-ODL-VPP stack (infra complete: Neutron, GBP mapper, GBP renderer, topology manager, Honeycomb, VPP)
• Automatic install
• Basic system-level testing
• Basic L2 networking (no NAT/floating IPs, no security groups)
• Overlays: VXLAN, VLAN

OPNFV Colorado Release (September 2016):
• O/S-ODL-VPP stack (infra complete: Neutron, GBP mapper, GBP renderer, topology manager, Honeycomb, VPP)
• Automatic install
• Ongoing OPNFV system-level testing (FuncTest, Yardstick test suites), part of the OPNFV CI/CD pipeline
• Complete L2 networking (NAT/floating IPs, security groups)
• HA
• Overlays: VXLAN, VLAN, NSH
Detailed development plan: https://wiki.opnfv.org/display/fds/FastDataStacks+Work+Areas#FastDataStacksWorkAreas-Plan
Summary
• New OPNFV Fast Data Stack aimed at providing a high-performance NFV services platform
• VPP provides a highly performant & modular data plane
• OpenDaylight with Group Based Policy provides SDN device control and a consistent network application policy framework
• The stack's functionality is growing
• Lots of features that can never have enough tests
• Join us in building FDS!