These slides are from the talk Bob Kukura and I gave at the OpenStack Summit in Hong Kong, November 2013.
Modular Layer 2 in OpenStack Neutron
Robert Kukura, Red Hat
Kyle Mestery, Cisco
1. I’ve heard the Open vSwitch and Linuxbridge Neutron Plugins are being deprecated.
2. I’ve heard ML2 does some cool stuff!
3. I don’t know what ML2 is but want to learn about it and what it provides.
What is Modular Layer 2?
A new Neutron core plugin in Havana
• Modular
  o Drivers for layer 2 network types and mechanisms - interface with agents, hardware, controllers, ...
  o Service plugins and their drivers for layer 3+
• Works with existing L2 agents
  o openvswitch
  o linuxbridge
  o hyperv
• Deprecates existing monolithic plugins
  o openvswitch
  o linuxbridge
Motivations For aModular Layer 2 Plugin
Before Modular Layer 2 ...
Neutron Server
Open vSwitch Plugin
OR
Neutron Server
Linuxbridge Plugin
OR ...
Before Modular Layer 2 ...
Neutron Server
Vendor X Plugin
I want to write a Neutron Plugin.
But I have to duplicate a lot of DB, segmentation, etc. work.
What a pain. :(
ML2 Use Cases
• Replace existing monolithic plugins
  o Eliminate redundant code
  o Reduce development & maintenance effort
• New features
  o Top-of-Rack switch control
  o Avoid tunnel flooding via L2 population
  o Many more to come...
• Heterogeneous deployments
  o Specialized hypervisor nodes with distinct network mechanisms
  o Integrate *aaS appliances
  o Roll new technologies into existing deployments
Modular Layer 2 Architecture
The Modular Layer 2 (ML2) Plugin is a framework allowing OpenStack Neutron to simultaneously utilize the variety of layer 2 networking technologies found in complex real-world data centers.
What’s Similar?
ML2 is functionally a superset of the monolithic openvswitch, linuxbridge, and hyperv plugins:
• Based on NeutronDbPluginV2
• Models networks in terms of provider attributes
• RPC interface to L2 agents
• Extension APIs
What’s Different?
ML2 introduces several innovations to achieve its goals:
• Cleanly separates management of network types from the mechanisms for accessing those networks
  o Makes types and mechanisms pluggable via drivers
  o Allows multiple mechanism drivers to access same network simultaneously
  o Optional features packaged as mechanism drivers
• Supports multi-segment networks
• Flexible port binding
• L3 router extension integrated as a service plugin
ML2 Architecture Diagram
[Diagram: within the Neutron Server, the ML2 Plugin sits behind the API extensions and contains a Type Manager and a Mechanism Manager. The Type Manager drives TypeDrivers for GRE, VLAN, and VXLAN; the Mechanism Manager drives MechanismDrivers for Arista, Cisco Nexus, Hyper-V, L2 Population, Linuxbridge, Open vSwitch, and Tail-f NCS.]
Multi-Segment Networks
[Diagram: VM 1, VM 2, and VM 3 attached to one network composed of three segments: VXLAN 123567, physnet1 VLAN 37, and physnet2 VLAN 413.]
● Created via multi-provider API extension
● Segments bridged administratively (for now)
● Ports associated with network, not specific segment
● Ports bound automatically to segment with connectivity
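A multi-segment network like the one above is created by passing a segments list through the multi-provider API extension. A hypothetical request body (network name and values are illustrative, matching the diagram) might look like:

```json
{
  "network": {
    "name": "multi-segment-net",
    "segments": [
      {"provider:network_type": "vxlan",
       "provider:segmentation_id": 123567},
      {"provider:network_type": "vlan",
       "provider:physical_network": "physnet1",
       "provider:segmentation_id": 37},
      {"provider:network_type": "vlan",
       "provider:physical_network": "physnet2",
       "provider:segmentation_id": 413}
    ]
  }
}
```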
Type Driver API
class TypeDriver(object):

    @abstractmethod
    def get_type(self): pass

    @abstractmethod
    def initialize(self): pass

    @abstractmethod
    def validate_provider_segment(self, segment): pass

    @abstractmethod
    def reserve_provider_segment(self, session, segment): pass

    @abstractmethod
    def allocate_tenant_segment(self, session): pass

    @abstractmethod
    def release_segment(self, session, segment): pass
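As a concrete illustration of the interface, here is a minimal, hypothetical type driver modeled loosely on the 'local' network type, which exists on a single host and needs no segmentation state. The class name and behavior are illustrative sketches, not the actual Havana driver:

```python
class LocalTypeDriver(object):
    """Illustrative TypeDriver for 'local' networks (duck-typed to the
    abstract interface above for demonstration purposes)."""

    def get_type(self):
        return 'local'

    def initialize(self):
        pass  # no driver-wide state to set up

    def validate_provider_segment(self, segment):
        # 'local' segments carry no physical_network or segmentation_id
        for key in ('physical_network', 'segmentation_id'):
            if segment.get(key) is not None:
                raise ValueError('%s not allowed for local networks' % key)

    def reserve_provider_segment(self, session, segment):
        return segment  # nothing to record in the DB

    def allocate_tenant_segment(self, session):
        return {'network_type': 'local'}

    def release_segment(self, session, segment):
        pass  # nothing was allocated, so nothing to release
```

A VLAN or tunnel type driver would instead use the session argument to reserve and release segment IDs in ML2's allocation tables.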
Mechanism Driver API

class MechanismDriver(object):

    @abstractmethod
    def initialize(self): pass

    def create_network_precommit(self, context): pass
    def create_network_postcommit(self, context): pass
    def update_network_precommit(self, context): pass
    def update_network_postcommit(self, context): pass
    def delete_network_precommit(self, context): pass
    def delete_network_postcommit(self, context): pass

    def create_subnet_precommit(self, context): pass
    def create_subnet_postcommit(self, context): pass
    def update_subnet_precommit(self, context): pass
    def update_subnet_postcommit(self, context): pass
    def delete_subnet_precommit(self, context): pass
    def delete_subnet_postcommit(self, context): pass

    def create_port_precommit(self, context): pass
    def create_port_postcommit(self, context): pass
    def update_port_precommit(self, context): pass
    def update_port_postcommit(self, context): pass
    def delete_port_precommit(self, context): pass
    def delete_port_postcommit(self, context): pass

    def bind_port(self, context): pass
    def validate_port_binding(self, context): return False
    def unbind_port(self, context): pass

class NetworkContext(object):

    @abstractproperty
    def current(self): pass

    @abstractproperty
    def original(self): pass

    @abstractproperty
    def network_segments(self): pass
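The precommit hooks run inside the plugin's database transaction, so raising there aborts the whole operation; the postcommit hooks run after the commit and are where a driver talks to external controllers or hardware. A hypothetical recording driver (all names illustrative) makes the calling sequence visible:

```python
class RecordingMechanismDriver(object):
    """Illustrative MechanismDriver that records which hooks fire."""

    def initialize(self):
        self.calls = []

    def create_network_precommit(self, context):
        # Inside the DB transaction: validate here; raising rolls back.
        self.calls.append(('precommit', context.current['id']))

    def create_network_postcommit(self, context):
        # After commit: safe to push config to a controller or switch.
        self.calls.append(('postcommit', context.current['id']))


class FakeNetworkContext(object):
    """Stand-in for ML2's NetworkContext, for demonstration only."""
    def __init__(self, network):
        self.current = network


# Simulate the calls ML2 would make when a network is created.
driver = RecordingMechanismDriver()
driver.initialize()
ctx = FakeNetworkContext({'id': 'net-1'})
driver.create_network_precommit(ctx)
driver.create_network_postcommit(ctx)
```

Because every registered mechanism driver sees every call, an optional feature like L2 Population can be packaged as just another driver in this list.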
Port Binding

• Determines values for port's binding:vif_type and binding:capabilities attributes and selects segment
• Occurs when binding:host_id set on port or existing valid binding
• ML2 plugin calls bind_port() on registered MechanismDrivers, in order listed in config, until one succeeds or all have been tried
• Driver determines if it can bind based on:
  o context.network.network_segments
  o context.current['binding:host_id']
  o context.host_agents()
• For L2 agent drivers, binding requires live L2 agent on port's host that:
  o Supports the network_type of a segment of the port's network
  o Has a mapping for that segment's physical_network if applicable
• If it can bind the port, driver calls context.set_binding() with binding details
• If no driver succeeds, port's binding:vif_type set to BINDING_FAILED
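The steps above can be sketched as a hypothetical agent-based bind_port(). The agent type, the agent-report dictionary keys (alive, configurations, tunnel_types, bridge_mappings), and the 'ovs' VIF type are illustrative assumptions, not the exact Havana driver code:

```python
def bind_port(context, agent_type='Open vSwitch agent'):
    """Illustrative binding: pick the first segment that a live agent
    on the port's host can reach, then record the binding."""
    for agent in context.host_agents(agent_type):
        if not agent.get('alive'):
            continue  # binding requires a live L2 agent on the host
        config = agent.get('configurations', {})
        for segment in context.network.network_segments:
            if _agent_supports(config, segment):
                # segment_id, vif_type, cap_port_filter
                context.set_binding(segment['id'], 'ovs', True)
                return True
    return False


def _agent_supports(config, segment):
    net_type = segment['network_type']
    if net_type in ('gre', 'vxlan'):
        # Tunnel types: the agent must have the tunnel type enabled.
        return net_type in config.get('tunnel_types', [])
    if net_type in ('flat', 'vlan'):
        # Physical types: the agent needs a mapping for the physnet.
        return segment.get('physical_network') in config.get('bridge_mappings', {})
    return net_type == 'local'
```

If this returns False for every registered driver, the ML2 plugin sets the port's binding:vif_type to BINDING_FAILED.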
class PortContext(object):

    @abstractproperty
    def current(self): pass

    @abstractproperty
    def original(self): pass

    @abstractproperty
    def network(self): pass

    @abstractproperty
    def bound_segment(self): pass

    @abstractmethod
    def host_agents(self, agent_type): pass

    @abstractmethod
    def set_binding(self, segment_id, vif_type, cap_port_filter): pass
Havana Features
Type Drivers in Havana
The following are supported segmentation types in ML2 for the Havana release:
● local
● flat
● VLAN
● GRE
● VXLAN
Mechanism Drivers in Havana
The following ML2 MechanismDrivers exist in Havana:
● Arista
● Cisco Nexus
● Hyper-V Agent
● L2 Population
● Linuxbridge Agent
● Open vSwitch Agent
● Tail-f NCS
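Type and mechanism drivers are selected in the plugin configuration. A hypothetical ml2_conf.ini fragment enabling a subset of the drivers above (the VLAN range and physical network name are illustrative) might look like:

```ini
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vlan
mechanism_drivers = openvswitch,cisco_nexus

[ml2_type_vlan]
network_vlan_ranges = physnet1:100:199
```

The order of mechanism_drivers matters: bind_port() is tried on each driver in the order listed.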
Before ML2 L2 Population MechanismDriver
[Diagram: Hosts 1-4 connected in a full tunnel mesh, with VMs A-I distributed across them.]
“VM A” wants to talk to “VM G.” “VM A” sends a broadcast packet, which is replicated to the entire tunnel mesh.
With ML2 L2 Population MechanismDriver
[Diagram: the same four hosts and VMs A-I, now with per-host forwarding entries instead of mesh-wide flooding.]
The ARP request from “VM A” for “VM G” is intercepted and answered using a pre-populated neighbor entry.
Proxy ARP
Traffic from “VM A” to “VM G” is encapsulated and sent to “Host 4” according to the bridge forwarding table entry.
Modular Layer 2 Futures
ML2 Futures: Deprecation Items
• The future of the Open vSwitch and Linuxbridge plugins
  o These are planned for deprecation in Icehouse
  o ML2 supports all their functionality
  o ML2 works with the existing OVS and Linuxbridge agents
  o No new features being added in Icehouse to OVS and Linuxbridge plugins
• Migration Tool being developed
Plugin vs. ML2 MechanismDriver?
• Advantages of writing an ML2 Driver instead of a new monolithic plugin
  o Much less code to write (or clone) and maintain
  o New neutron features supported as they are added
  o Support for heterogeneous deployments
• Vendors integrating new plugins should consider an ML2 Driver instead
  o Existing plugins may want to migrate to ML2 as well
ML2 With Current Agents
Neutron Server
ML2 Plugin
Host A
Linuxbridge Agent
Host B
Hyper-V Agent
Host C
Open vSwitch Agent
Host D
Open vSwitch Agent
API Network
● Existing ML2 Plugin works with existing agents
● Separate agents for Linuxbridge, Open vSwitch, and Hyper-V
ML2 With Modular L2 Agent
Neutron Server
ML2 Plugin
Host A
Modular Agent
Host B
Modular Agent
Host C
Modular Agent
Host D
Modular Agent
API Network
● Future direction is to combine Open Source Agents
● Have a single agent which can support Linuxbridge and Open vSwitch
● Pluggable drivers for additional vSwitches, Infiniband, SR-IOV, ...
ML2 Demo
What the Demo Will Show
● ML2 running with multiple MechanismDrivers
  ○ openvswitch
  ○ cisco_nexus
● Booting multiple VMs on multiple compute hosts
● Hosts are running Fedora
● Configuration of VLANs across both virtual and physical infrastructure
ML2 Demo Setup
[Diagram: Host 1 (nova api, nova compute, neutron server, neutron ovs agent, neutron dhcp, neutron l3 agent) and Host 2 (nova compute, neutron ovs agent). On each host, VMs attach to br-int, which is patched to br-eth2; eth2 on Host 1 and Host 2 connects to ports eth2/1 and eth2/2 of a Cisco Nexus switch.]

VM1: the VLAN is added on the VIF for VM1 and also on the br-eth2 ports by the ML2 OVS MechanismDriver. The ML2 Cisco Nexus MechanismDriver trunks the VLAN on eth2/1.

VM2: the VLAN is added on the VIF for VM2 and also on the br-eth2 ports by the ML2 OVS MechanismDriver. The ML2 Cisco Nexus MechanismDriver trunks the VLAN on eth2/2.
VM1 can ping VM2 … we've successfully completed the standard network test.
Questions?