
Software Defined Networks and OpenFlow

SDN CIO Summit 2010

Nick McKeown & Guru Parulkar

Stanford University

In collaboration with Martin Casado and Scott Shenker, and contributions by many others

Executive Summary

• The network industry is starting to restructure
• The trend: “Software Defined Networks”
  – Separation of control from datapath
  – Faster evolution of the network
• It has started in large data centers
• It may spread to WAN, campus, enterprise, home and cellular networks
• GENI is putting SDN into the hands of researchers

What’s the problem?


Cellular industry

• Recently made the transition to IP
• Billions of mobile users
• Need to securely extract payments and hold users accountable
• IP sucks at both, yet it is hard to change

How can they fix IP to meet their needs?

Telco Operators

• Global IP traffic is growing 40-50% per year
• End-customer monthly bills remain unchanged
• Therefore, CAPEX and OPEX need to fall 40-50% per Gb/s per year
• But in practice, they fall by only ~20% per year

How can they stay in business? How can they differentiate their service?
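A back-of-envelope illustration of that squeeze (the 45% growth and ~20% cost reduction are taken from the ranges above; the arithmetic itself is only illustrative, not from the slides):

```python
# Back-of-envelope sketch of the operator cost squeeze described above.
# Illustrative numbers: 45% annual traffic growth, flat revenue,
# and only a ~20% annual drop in cost per Gb/s.
traffic_growth = 1.45        # traffic grows 40-50% per year; take 45%
cost_per_gbps_drop = 0.80    # cost per Gb/s falls only ~20% per year

spend_growth = traffic_growth * cost_per_gbps_drop - 1
print(f"Total spend grows about {spend_growth:.0%} per year against flat revenue.")
# -> Total spend grows about 16% per year against flat revenue.
```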

Trend #1(Logical) centralization of control


Already happening

Enterprise WiFi
– Set power and channel centrally
– Route flows centrally, cache decisions in APs
– CAPWAP etc.

Telco backbone networks
– Calculate routes centrally
– Cache routes in routers

Experiment: Stanford campus
How hard is it to centrally control all flows? (2006)
– 35,000 users
– 10,000 new flows/sec
– 137 network policies
– 2,000 switches
– 2,000 switch CPUs

How many $400 PCs to centralize all routing and all 137 policies?

[Diagram: Host A and Host B connected through a campus of Ethernet switches, all managed by a small set of controllers. From Ethane, SIGCOMM ’07.]

Answer: less than one.

If you can centralize control, eventually you will.

With replication for fault-tolerance and performance scaling.
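A rough capacity check of that answer (the per-machine throughput below is an assumption, in the range reported for Ethane/NOX-class controllers of roughly 30,000 flow setups per second on one commodity PC):

```python
# Back-of-envelope check (assumed controller throughput, not a figure from this deck).
campus_new_flows_per_sec = 10_000
assumed_setups_per_pc_per_sec = 30_000   # order of magnitude reported for NOX-class controllers

pcs_needed = campus_new_flows_per_sec / assumed_setups_per_pc_per_sec
print(f"Controller PCs needed: {pcs_needed:.2f}")   # ~0.33, i.e. "less than one"
```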


How will the network be structured?


• Millions of lines of source code
• 5,900 RFCs: a barrier to entry
• Billions of gates: bloated, power hungry
• Vertically integrated; many complex functions baked into the infrastructure: OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, MPLS, redundant layers, …

Looks like the mainframe industry in the 1980s

The Current Network
[Diagram: each device bundles specialized packet forwarding hardware, an operating system, and features (routing, management, mobility management, access control, VPNs, …) into one vertically integrated box.]

Restructured Network
[Diagram: many elements of specialized packet forwarding hardware, a common Network OS controlling all of them, and features running on top of that Network OS.]

Trend #2Software-Defined Network


The “Software-Defined Network”
1. Open interface to packet forwarding (OpenFlow)
2. At least one Network OS; probably many, open- and closed-source
3. Well-defined open API
[Diagram: features run on a Network OS, which controls many packet forwarding elements through the open interface.]

OpenFlow Basics

Narrow, vendor-agnostic interface to control switches, routers, APs, and base stations.


Step 1: Separate control from datapath
[Diagram: a Network OS directly controls several OpenFlow switches.]

Step 2: Cache flow decisions in datapath

“If header = x, send to port 4”
“If header = y, overwrite header with z, send to ports 5,6”
“If header = ?, send to me”
[Diagram: the Network OS installs these rules into flow tables in each OpenFlow switch.]
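A minimal sketch of what “caching flow decisions” amounts to (the data structures and names are illustrative, not OpenFlow’s wire format): the controller installs match-to-action entries, the switch applies the first matching entry to each packet, and misses are punted back to the controller.

```python
# Illustrative switch-side flow table (not OpenFlow's actual message format).
flow_table = [
    {"match": {"header": "x"}, "action": ("forward", [4])},
    {"match": {"header": "y"}, "action": ("rewrite_and_forward", "z", [5, 6])},
]

def handle_packet(header):
    """Apply the first cached rule that matches; otherwise send to the controller."""
    for entry in flow_table:
        if entry["match"]["header"] == header:
            return entry["action"]
    return ("send_to_controller",)   # table miss: ask the Network OS for a decision

print(handle_packet("x"))  # ('forward', [4])
print(handle_packet("q"))  # ('send_to_controller',)
```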

Plumbing Primitives
1. Match arbitrary bits in headers:
   – Match on any header, or a user-defined header
   – Allows any flow granularity
2. Actions:
   – Forward to port(s), drop, send to controller
   – Overwrite header with mask, push or pop
   – Forward at a specific bit-rate
[Diagram: a packet’s header and data, with a wildcard bit pattern matched against the header, e.g. Match: 1000x01xx0101001x]
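A minimal sketch of the wildcard bit matching the example above implies (the pattern with ‘x’ as a don’t-care bit is the slide’s; the code itself is only illustrative):

```python
def matches(pattern: str, header_bits: str) -> bool:
    """True if header_bits matches pattern, where 'x' marks a don't-care bit."""
    return len(pattern) == len(header_bits) and all(
        p == "x" or p == b for p, b in zip(pattern, header_bits)
    )

# The slide's example pattern: fixed bits must match, 'x' positions are ignored.
pattern = "1000x01xx0101001x"
print(matches(pattern, "10001011001010011"))  # True
print(matches(pattern, "10011011001010011"))  # False (4th bit differs)
```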

[Diagram: in a conventional Ethernet switch/router, the control path (software) sits in the same box as the data path (hardware). With OpenFlow, the data path stays in the switch while control moves to an external OpenFlow Controller, which speaks to the switch over the OpenFlow Protocol (SSL).]

Recap: the “Software Defined Network” combines an open interface to packet forwarding, at least one Network OS (probably many, open- and closed-source), and a well-defined open API on top.

Network OS

Several commercial Network OSes in development
– Commercial deployments expected late 2010

Research
– The research community mostly uses NOX
– Open source, available at: http://noxrepo.org
– Expect new research OSes late 2010

Software Defined Networks in Data Centers


Example: New Data Center

Cost
– 200,000 servers with a fanout of 20 → 10,000 switches
– $5k vendor switch → $50M; $1k commodity switch → $10M
– Savings in 10 data centers = $400M
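The arithmetic behind those figures, spelled out (the prices and fanout are the slide’s; the script is just a restatement):

```python
# Worked version of the slide's cost arithmetic.
servers = 200_000
fanout = 20                         # servers per switch
switches = servers // fanout        # 10,000 switches per data center

vendor_cost = switches * 5_000      # $50M with $5k vendor switches
commodity_cost = switches * 1_000   # $10M with $1k commodity switches
savings_per_dc = vendor_cost - commodity_cost

print(f"Savings per data center: ${savings_per_dc / 1e6:.0f}M")              # $40M
print(f"Savings across 10 data centers: ${10 * savings_per_dc / 1e6:.0f}M")  # $400M
```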

Control

1. More flexible control
2. Quickly improve and innovate
3. Enables “cloud networking”

Several large data centers will use SDN.

Data Center Networks

Existing solutions
– Tend to increase hardware complexity
– Unable to cope with virtualization and multi-tenancy

Software Defined Network
– OpenFlow-enabled vSwitch
– Open vSwitch: http://openvswitch.org
– Network optimized for the data center owner
– Several commercial products under development

Software Defined Networks on College Campuses


What we are doing at Stanford

1. Defining the OpenFlow spec
   – Check out http://OpenFlow.org
   – Open weekly meetings at Stanford
2. Enabling researchers to innovate
   – Add OpenFlow to commercial switches, APs, …
   – Deploy on college campuses
   – “Slice” the network to allow many experiments

Virtualization or “Slicing” Layer
[Diagram: a slicing layer, speaking OpenFlow downward to the packet forwarding elements, hosts several Network Operating Systems (1-4) above it, each running its own features in an isolated “slice”.]

Some research examples


FlowVisor Creates Virtual Networks

[Diagram: FlowVisor sits between the OpenFlow switches and several controllers (an OpenPipes experiment, an OpenFlow Wireless experiment, and the PlugNServe load-balancer), speaking the OpenFlow protocol on both sides and enforcing a policy per slice. Multiple, isolated slices share the same physical network.]
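A minimal sketch of the slicing idea (the flowspace rules and slice names below are illustrative; FlowVisor’s real slice policies are richer): each slice owns a region of flowspace, and events from the switches are relayed only to the controller whose slice they fall in.

```python
# Illustrative flowspace slicing (not FlowVisor's actual policy language).
slices = {
    "openpipes":         lambda pkt: pkt["vlan"] == 10,
    "openflow_wireless": lambda pkt: pkt["vlan"] == 20,
    "plugnserve":        lambda pkt: pkt["tcp_dport"] == 80,
}

def controller_for(pkt):
    """Relay a switch event to the controller owning this region of flowspace."""
    for name, owns in slices.items():
        if owns(pkt):
            return name
    return None  # outside every slice: drop, or apply a default policy

print(controller_for({"vlan": 20, "tcp_dport": 22}))  # openflow_wireless
print(controller_for({"vlan": 30, "tcp_dport": 80}))  # plugnserve
```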

Demo Infrastructure with Slicing

Application-specific Load-balancing

[Diagram: OpenFlow switches connect campus clients, several servers, and the Internet; a Load-Balancer feature on the Network OS picks both path and server.]
Goal: Minimize HTTP response time over the campus network
Approach: Route over the path that jointly minimizes <path latency, server latency>, i.e. “pick path & server”
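A minimal sketch of that joint choice (the server set, latency figures, and function names are assumptions for illustration, not the demo’s actual code): pick the (path, server) pair with the smallest combined latency.

```python
# Illustrative "pick path & server" decision; estimates would come from monitoring.
def pick_path_and_server(paths_to_server, server_latency):
    """Return the (total_latency, path, server) minimizing path + server latency."""
    best = None
    for server, paths in paths_to_server.items():
        for path, path_latency in paths.items():
            total = path_latency + server_latency[server]
            if best is None or total < best[0]:
                best = (total, path, server)
    return best

# Hypothetical measurements (milliseconds).
paths_to_server = {
    "server_a": {"path1": 3.0, "path2": 5.0},
    "server_b": {"path3": 1.0},
}
server_latency = {"server_a": 10.0, "server_b": 14.0}

print(pick_path_and_server(paths_to_server, server_latency))
# (13.0, 'path1', 'server_a')
```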

Intercontinental VM Migration
Moved a VM from Stanford to Japan without changing its IP address. The VM hosted a video game server with active network connections.
[Diagram: features running on NOX control the network at both ends of the migration.]

Converging Packet and Circuit Networks

Goal: A common control plane for “Layer 3” and “Layer 1” networks
Approach: Add OpenFlow to all switches (IP routers, TDM switches, WDM switches); use a common network OS
[Diagram: IP routers and TDM/WDM circuit switches, all controlled through the OpenFlow protocol.]
[Supercomputing 2009 Demo] [OFC 2010]

ElasticTree
Goal: Reduce energy usage in data center networks
Approach:
1. Reroute traffic
2. Shut off links and switches to reduce power
[NSDI 2010]
[Diagram: a DC Manager feature on the Network OS picks paths across the full data center topology; a small sketch of the consolidation idea follows the next figure.]

[Diagram: the same topology after ElasticTree has consolidated traffic; several links and switches are shut off (marked with an X) while the DC Manager continues to pick paths over what remains.]
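A minimal sketch of the consolidation idea (the greedy rule below is illustrative; the NSDI 2010 paper formulates this as an optimization over the whole topology): keep only as many links as current demand needs, and power off the rest.

```python
import math

# Illustrative ElasticTree-style consolidation (not the paper's optimizer):
# keep just enough uplinks for current demand, power off the rest.
def links_to_keep(demand_gbps, link_capacity_gbps, total_links, headroom=0.1):
    """Number of uplinks to leave powered on, with a small safety margin."""
    needed = math.ceil(demand_gbps * (1 + headroom) / link_capacity_gbps)
    return min(max(needed, 1), total_links)   # always keep at least one link up

total_links = 8
keep = links_to_keep(demand_gbps=22, link_capacity_gbps=10, total_links=total_links)
print(f"Keep {keep} of {total_links} links on; power off {total_links - keep}.")
# Keep 3 of 8 links on; power off 5.
```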

Executive Summary

• The network industry is starting to restructure
• The trend: “Software Defined Networks”
  – Separation of control from datapath
  – Faster evolution of the network
• It has started in large data centers
• It may spread to WAN, campus, enterprise, home and cellular networks
• GENI is putting SDN into the hands of researchers

Thank you
