
Page 1: Stanford University

POMI 2020: Network Substrate
Software-Defined Networks and OpenFlow

NSF Site Visit, June 2010

Nick McKeown, Sachin Katti, Monica Lam, Ramesh Johari, Guru Parulkar

Stanford University POMI 2020

Page 2: Stanford University

POMI Research Agenda

[Diagram: the POMI research stack, spanning the Handheld and the Infrastructure]
• Applications
• Data & Computing Substrate: PrPl, Junction and Concierge
• Radio technology
• Economics
• Cinder: energy-aware, secure OS
• Secure mobile browser
• UI
• HW Platform
• Network Substrate: Software-Defined Network & OpenFlow

Page 3: Stanford University

Outline

We set out to address two “barriers to innovation” in the network…

Barrier 3: There is abundant capacity available, but it is closed and unavailable

Barrier 4: The network infrastructure is closed and will remain ossified

3

Page 4: Stanford University

What do we mean when we say the network is “closed and ossified”?

4

Page 5: Stanford University

The Ossified Network

[Diagram: a closed router: feature modules on a proprietary operating system over specialized packet forwarding hardware]

• Millions of lines of source code; 5,400 RFCs; a high barrier to entry
• Billions of gates; bloated; power hungry
• Many complex functions baked into the infrastructure: OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, MPLS, redundant layers, …
• Routing, management, mobility management, access control, VPNs, …
• An industry with a "mainframe mentality", reluctant to change

5

Page 6: Stanford University

Glacial process of innovation, made worse by a captive standards process

[Diagram: Idea → Standardize → Wait 10 years → Deployment]

1. Driven by vendors
2. Owners/operators largely locked out
3. Lowest common denominator features
4. Glacial innovation

Unlikely to change without external help

6

Page 7: Stanford University

Example where change is needed

• Cellular industry
  – Recently made the transition to IP
  – Billions of mobile users
  – Need to securely extract payments and hold users accountable
  – IP is dreadful at both, yet hard to change

7

Page 8: Stanford University

Telco Operators, e.g. AT&T, DT, NTT, …

• Global IP traffic will grow 5x by 2013
• End-customer monthly bill remains unchanged
• Therefore, CAPEX and OPEX need to be reduced 5x by 2013
• But in practice, they fall by <20% per year

Q: How can operators reduce cost?
Q: How can they differentiate their service?

8

Page 9: Stanford University

The SDN Approach*

• Separate control from the datapath
  – i.e. separate policy from mechanism

• Datapath: define a minimal network instruction set
  – A set of "plumbing primitives"
  – A vendor-agnostic interface: OpenFlow

• Control: define a network-wide OS
  – An API that others can develop on

* With Scott Shenker, Martin Casado and many others

9

Page 10: Stanford University

Restructured Network

[Diagram: today, each of several boxes bundles features and an operating system onto specialized packet forwarding hardware; in the restructured network, the features run on a single Network OS that controls all of the forwarding hardware]

10

Page 11: Stanford University

The "Software-defined Network"

1. Open interface to hardware: OpenFlow
2. At least one Network OS, probably many; open- and closed-source
3. Well-defined open API for features

[Diagram: features run on a Network OS, which controls many simple packet forwarding elements via OpenFlow]

11

Page 12: Stanford University

OpenFlow Basics

Narrow, vendor-agnostic interface to control switches, routers, APs, basestations.

12

Page 13: Stanford University

Step 1: Separate Control from Datapath

[Diagram: a Network OS controls several OpenFlow switches]

13

Page 14: Stanford University

Step 2: Cache flow decisions in the datapath

[Diagram: OpenFlow switches hold flow tables populated by the Network OS]

"If header = x, send to port 4"
"If header = y, overwrite header with z, send to ports 5,6"
"If header = ?, send to me"

14

Page 15: Stanford University

Plumbing Primitives

1. Match arbitrary bits in headers:
   – Match on any header, or new headers
   – Allows any flow granularity
2. Actions:
   – Forward to port(s), drop, send to controller
   – Overwrite header with mask, push or pop
   – Forward at a specific bit-rate

15

[Diagram: a packet (header + data) matched against the pattern 1000x01xx0101001x]
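
To make these primitives concrete, here is a minimal sketch of a flow table applying a ternary match like the one above and returning its actions. FlowEntry, ternary_match, and the action tuples are illustrative names, not the OpenFlow wire format:

```python
# Minimal sketch of the match/action primitives above. FlowEntry,
# ternary_match, and the action names are illustrative, not the
# OpenFlow wire format.

def ternary_match(pattern: str, header_bits: str) -> bool:
    """True if each pattern bit is 'x' (wildcard) or equals the header bit."""
    return len(pattern) == len(header_bits) and \
        all(p in ('x', b) for p, b in zip(pattern, header_bits))

class FlowEntry:
    def __init__(self, pattern, actions):
        self.pattern = pattern    # e.g. "1000x01xx0101001x"
        self.actions = actions    # e.g. [("forward", [4])]

def process(flow_table, header_bits):
    for entry in flow_table:      # entries are kept in priority order
        if ternary_match(entry.pattern, header_bits):
            return entry.actions
    return [("send_to_controller", None)]   # table miss: punt to the Network OS

table = [FlowEntry("1000x01xx0101001x", [("forward", [4])])]
print(process(table, "10001011001010011"))  # -> [('forward', [4])]
```

Because matches are on arbitrary bits, "flow" can mean anything from one TCP connection to all traffic from a prefix, which is what gives the controller its flexibility.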

Page 16: Stanford University

The "Software-defined Network"

1. Open interface to hardware: OpenFlow
2. At least one Network OS, probably many; open- and closed-source
3. Well-defined open API for features

[Diagram: as on Page 11, features on a Network OS controlling simple packet forwarding hardware via OpenFlow]

16

Page 17: Stanford University

Virtualization or "Slicing" Layer (FlowVisor)

[Diagram: the FlowVisor slicing layer sits between the simple packet forwarding hardware and Network Operating Systems 1-4, presenting the same open interface to hardware both below and above; each OS runs its own features]

• Many operating systems, or many versions
• Isolated "slices"
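
A rough sketch of what the slicing layer does: each slice owns a region of flowspace, and a control event from a switch is relayed only to the Network OS whose slice owns the matching flow. The slice definitions and controller addresses below are made-up examples, not FlowVisor's actual configuration format:

```python
# Sketch of FlowVisor-style slicing: control events from a switch are
# relayed only to the Network OS whose slice owns the packet's flowspace.
# Slice predicates and addresses here are made-up examples.

SLICES = [
    # (name, predicate over packet headers, controller address)
    ("slice-http",  lambda pkt: pkt["tcp_dst"] == 80,  "nos-a.example.edu:6633"),
    ("slice-video", lambda pkt: pkt["tcp_dst"] == 554, "nos-b.example.edu:6633"),
]

def relay_packet_in(pkt):
    """Send a switch event to the one controller whose slice owns this flow."""
    for name, owns, controller in SLICES:
        if owns(pkt):
            return controller      # isolation: no other slice sees this event
    return None                    # flow outside every slice: event is dropped

print(relay_packet_in({"tcp_dst": 80}))   # -> nos-a.example.edu:6633
```

The same interposition works in the other direction: rules a controller tries to install are rewritten so they cannot escape that slice's flowspace, which is what keeps slices isolated.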

Page 18: Stanford University

Our Strategy

Barrier: The network infrastructure is closed and will remain ossified

Strategy: The Software Defined Network
– Add OpenFlow to switches, routers, WiFi APs, basestations, …; deploy in our network
– Use SDN for our own research
– Study how to apply it to different types of network
– Enable others to do research in their networks
– (Work with GENI community to deploy widely)

18

Page 19: Stanford University

Some research examples

19

Page 20: Stanford University

Ethane, a precursor to OpenFlow
Centralized, reactive, per-flow control

[Diagram: a controller manages flow switches and simple packet forwarding hardware connecting Host A and Host B]

[Ethane, Sigcomm '07]
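
The reactive pattern in one sketch: a flow-table miss sends the first packet of a flow to the controller, which checks policy and caches a per-flow decision back in the datapath. The function names and example policy are hypothetical, not Ethane's code:

```python
# Sketch of Ethane-style centralized, reactive, per-flow control:
# the first packet of a flow is punted to the controller, which checks
# policy and installs a cached decision. Hypothetical names throughout.

flow_cache = {}   # (src, dst) -> decision, held in the switch's datapath

def policy_allows(src, dst):
    return dst not in {"quarantined-host"}       # stand-in for real policy

def shortest_path_port(src, dst):
    return 4                                     # stand-in for path computation

def packet_in(src, dst):
    """Controller handler, invoked on a flow-table miss."""
    decision = shortest_path_port(src, dst) if policy_allows(src, dst) else "drop"
    flow_cache[(src, dst)] = decision            # later packets hit the cache

packet_in("hostA", "hostB")
print(flow_cache)   # {('hostA', 'hostB'): 4}
```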

Page 21: Stanford University

FlowVisor Creates Virtual Networks

Multiple, isolated slices in the same physical network

[Diagram: FlowVisor speaks the OpenFlow protocol downward to OpenFlow switches and upward to per-slice controllers: the OpenPipes demo (with its OpenPipes policy), the OpenFlow Wireless demo, and the PlugNServe load-balancer]

[Sigcomm 2009 – Best Demo]
[Paper in submission]

Page 22: Stanford University

Demo Infrastructure with Slicing

Page 23: Stanford University

OpenPipes
Partition hardware designs across a network

23

[Sigcomm 2009 – 2nd Best Demo][Paper in submission]

Page 24: Stanford University

Load-balancing as a Network Primitive

Goal: Minimize HTTP response time over the campus network
Approach: Route over the path that jointly minimizes <path latency, server latency>

[Diagram: a load-balancer feature ("pick path & server") on the Network OS steers flows from the Internet across OpenFlow switches]

[Sigcomm 2009 Demo]
[Paper in preparation]

24
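
The "pick path & server" decision can be sketched as a joint minimization. The latency tables are assumed measurement inputs, and the names are illustrative, not the demo's code:

```python
# Sketch of load-balancing as a network primitive: jointly pick the
# (server, path) pair minimizing path latency + server latency.
# The latency estimates are assumed measurement inputs.

def pick_path_and_server(paths_to, path_latency, server_latency):
    """paths_to: {server: [path, ...]}; latencies in ms."""
    best = min(
        ((path_latency[p] + server_latency[s], s, p)
         for s, paths in paths_to.items() for p in paths),
        key=lambda t: t[0],
    )
    _, server, path = best
    return server, path      # install flow rules along `path` toward `server`

server, path = pick_path_and_server(
    {"s1": ["p1", "p2"], "s2": ["p3"]},
    {"p1": 5, "p2": 9, "p3": 2},
    {"s1": 30, "s2": 40},
)
print(server, path)   # s1 p1: 5 + 30 = 35 is the joint minimum
```

Because the network itself installs the chosen path, the load-balancer is a feature of the Network OS rather than a middlebox in the data path.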

Page 25: Stanford University

Intercontinental VM Migration

Moved a VM from Stanford to Japan without changing its IP.

The VM hosted a video game server with active network connections.

[Sigcomm 2008 – Best Demo]

Page 26: Stanford University

Converging Packet and Circuit Networks

Goal: Common control plane for "Layer 3" and "Layer 1" networks
Approach: Add OpenFlow to all switches; use a common network OS

[Diagram: feature modules on NOX control IP routers, TDM switches, and WDM switches via the OpenFlow protocol]

[Supercomputing 2009 Demo]
[OFC 2010]

Page 27: Stanford University

ElasticTree

Goal: Reduce energy in data center networks
Approach:
1. Reroute traffic
2. Shut off links and switches to reduce power

[Diagram: a DC Manager feature ("pick paths") on the Network OS over a data center topology]

[NSDI 2010]

Page 28: Stanford University

ElasticTree, continued

Same goal and approach as above.

[Diagram: the same topology after rerouting, with the unneeded links and switches marked off (X)]

[NSDI 2010]
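
A greedy illustration of the idea (the NSDI 2010 paper solves this as a proper optimization; the inputs here are invented): route each flow over the candidate path that powers on the fewest additional links, then shut off everything unused:

```python
# Greedy sketch of ElasticTree: keep the smallest set of links that
# still carries the traffic, and power off the rest. Illustrative only.

def elastic_tree(demands, candidate_paths, capacity):
    """demands: {flow: rate}; candidate_paths: {flow: [[link, ...], ...]}."""
    powered, load = set(), {}
    for flow, rate in sorted(demands.items(), key=lambda kv: -kv[1]):
        feasible = [p for p in candidate_paths[flow]
                    if all(load.get(l, 0) + rate <= capacity for l in p)]
        # prefer the path that powers on the fewest additional links
        path = min(feasible, key=lambda p: len(set(p) - powered))
        for l in path:
            load[l] = load.get(l, 0) + rate
        powered |= set(path)
    return powered    # links/switches outside this set can be shut off

all_links = {"a", "b", "c"}
on = elastic_tree({"f1": 5, "f2": 5},
                  {"f1": [["a"], ["b"]], "f2": [["a"], ["c"]]},
                  capacity=10)
print(on, all_links - on)   # {'a'} {'b', 'c'}: b and c powered off
```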

Page 29: Stanford University

Making a Network Application Friendly

[Diagram: Junction phone-to-phone apps issue requests such as "Create a chat room", "Send to all participants", "Encrypt data", "Min. bandwidth is 6Mbps"; the Network OS maps them to primitives such as "Create a multicast group", "Encrypt a flow", "Calculate multicast routing", "Assign flow rate", executed across OpenFlow switches]

[SIGCOMM'10 APSys Workshop]
[SIGCOMM'10 MobiHeld Workshop]
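
One way to read this slide: the Network OS exposes a small vocabulary of primitives, and application-level requests compile down to them. The pairing below follows the slide's two columns, but the exact correspondence is illustrative, not the actual Junction API:

```python
# Illustrative mapping from application requests (left side of the
# slide) to Network OS primitives (right side). Not the Junction API.

APP_TO_NETWORK_OS = {
    "Create a chat room":       ["Calculate multicast routing"],
    "Send to all participants": ["Create a multicast group"],
    "Encrypt data":             ["Encrypt a flow"],
    "Min. bandwidth is 6Mbps":  ["Assign flow rate"],
}

def handle(request: str) -> list[str]:
    """Compile one app-level request into Network OS calls."""
    return APP_TO_NETWORK_OS.get(request, [])
```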

Page 30: Stanford University

Will SDN happen?

30

Page 31: Stanford University

We now believe SDN will happen

• It is starting in big data centers
  – Driven by cost and control
  – Existing networks are unable to cope with virtualization, multi-tenancy, …
  – We are trying to "steer" them in the same direction

• Growing interest by ISPs, cellular operators

• (GENI: deploying on college campuses)

31

Page 32: Stanford University

Example: New Data Center

Cost
• 200,000 servers
• Fanout of 20 → 10,000 switches
• $5k vendor switch → $50M
• $1k commodity switch → $10M
• Savings in 10 data centers = $400M

Control
1. More flexible control
2. Quickly improve and innovate
3. Enables "cloud networking"

We believe large data centers will use SDN.
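
Spelling out the arithmetic behind the cost claim:

```latex
\frac{200{,}000\ \text{servers}}{\text{fanout of }20} = 10{,}000\ \text{switches},\qquad
10{,}000 \times (\$5\text{k} - \$1\text{k}) = \$40\text{M per data center},\qquad
10 \times \$40\text{M} = \$400\text{M}.
```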

Page 33: Stanford University

POMI Progress

• OpenFlow added to many devices
  – Switches, routers, APs, basestations, transport switches, chips

• Many research experiments have validated the approach

• Deployments happening on college campuses

33

Page 34: Stanford University

Self Assessment

+ Good progress on basic architecture
+ "Slicing" very promising
+ Research experiments validate the approach
- The networking industry is very entrenched
- Breaking down the barrier takes a lot of engineering
+ More deployments than we expected
- Difficult to scale to meet interest/demand

34

Page 35: Stanford University

Outline

We set out to address two “barriers to innovation” in the network…

Barrier: There is abundant capacity available, but it is closed and unavailable

Barrier: The network infrastructure is closed and will remain ossified

35

Page 36: Stanford University

What does it take to…

Open the wireless infrastructure so users can choose any free spectrum, any network, or many networks, any time?

36

Page 37: Stanford University

37

[Diagram: a handheld picks one of the available networks: AT&T 3G or Sprint WiMAX]

Any network…

Page 38: Stanford University

38

[Diagram: the handheld attaches to AT&T 3G and Sprint WiMAX at the same time]

Many networks…

Page 39: Stanford University

What does it take to give users choice?

39

Page 40: Stanford University

Technology and contracting

• Contracts are limited or enabled by technology
• This has a first-order impact on network economics
  [Example: BGP and interdomain routing]

1. What technology is needed to enable a new form of contract?
2. Are there countervailing economic forces that might prevent efficient use of new technology?

Page 41: Stanford University

Application: Learning to share

Can wireless providers learn to share?

Technologies such as OpenFlow Wireless and radio virtualization enable users to make choices. Will providers let them?

The central requirement is complementarity: profit-maximizing providers must find collective action in their own best interest.

Examples:
• Geographical complementarity (roaming)
• Overcoming high fixed costs (tower sharing)

Page 42: Stanford University

How do we give users choice?

42

Page 43: Stanford University

Wish List

1. Instantaneous contracts with any physical network, independent of its owner or radio technology

2. A network-independent way to choose a network, and to control mobility

43

Page 44: Stanford University

Design Choice

1. Establish my own instantaneous contracts and control the network (hard), or

2. Delegate to an entity in the infrastructure
   – A service provider
   – My own agent

Conceptually the same; we start by delegating

44

Page 45: Stanford University

Requirements

Technical
– Radio-independent control layer
– A method for a service provider to control my flows on my behalf

Business
– An incentive for infrastructure owners to open access to service providers

45

Page 46: Stanford University

[Diagram: a "slicing" layer sits above OpenFlow switches, APs, and basestations; on top of it, several Network OSes run isolated slices for providers such as "AT&T" and "Vodafone", each with billing, mobility, and new-service features]

Page 47: Stanford University

Consequences

• Radio-independent control layer
  – Service provider controls user flows
  – Easy handover between physical networks
  – Can use several networks simultaneously
  – Service innovation by service providers

• A method to share the physical infrastructure
  – Isolation between service providers
  – Short-term or long-term lease of rights-of-way

47

Page 48: Stanford University

48

[Diagram: today's vertically integrated network, e.g. AT&T: a service layer (authentication, billing, mobility management, routing, …) on top of a wireline network layer, on top of a radio network (spectrum, radios)]

Page 49: Stanford University

Separating the service from the network

49

[Diagram: the same stack with a separation/virtualization boundary inserted between the service layer (authentication, billing, mobility management, routing, …) and the network layer below]

Page 50: Stanford University

Service provider controls a slice across physical networks

50

[Diagram: above the separation/virtualization layer, one service slice, e.g. "AT&T", spans several physical networks; each network layer runs over its own radio network (spectrum, radios)]

Page 51: Stanford University

Progress so far

• Created OpenFlow wireless network
  – WiFi and WiMAX
  – Sliced by bandwidth, flowspace, and SSID

• Experiments in mobility management
  – Projects class
  – Lossless handover
  – Predictive handover
  – Vertical handover

• (With GENI, plan to deploy on other campuses)

51

Page 52: Stanford University

Next Steps

• Control across physical network owners
  – With Clearwire: vertical handoff between our WiMAX network and theirs
  – With Google: OpenWiFi project to share WiFi access
  – Outsourcing management of home networks
  – On GENI: separation of control from network nationwide (WiFi and WiMAX)

52

Page 53: Stanford University

A natural next step

[Diagram: alongside provider slices ("AT&T" and "Vodafone", each with billing, mobility, and new services on their own Network OS), a user application delegates to its own application-specific service and control running on a separate Network OS slice, over OpenFlow switches and an OpenFlow WiFi AP]

Page 54: Stanford University

Examples

1. Junction: general communication broker

2. Application-specific quality control: in-network replication

54

Page 55: Stanford University

Demonstration

55

[Diagram: a video server streams to a mobile client across OpenFlow switches, APs, and a basestation; the user's application-specific service asks the Network OS to "give me high quality", and when loss is reported ("Loss!") the service reacts through the network]

Page 56: Stanford University

Can we virtualize the radio and spectrum?

56

Page 57: Stanford University

Step 1: Separating service from network

57

[Diagram: a separation/virtualization boundary between the service layer (authentication, billing, mobility management, routing, …) and the wireline network layer, which still sits directly on the radio network (spectrum, radios)]

Page 58: Stanford University

58

Step 1: Open the wireless infrastructure so users can choose any network, or many networks, any time.

What about the network operator?

Can we open the wireless infrastructure so operators can choose any radio, any available spectrum, any time?

Page 59: Stanford University

Step 2: Separating network from radio

59

[Diagram: two separation/virtualization boundaries: one between the service layer and the wireline network layer, and a second between the network layer and the radio network (spectrum, radios)]

Page 60: Stanford University

60

[Diagram: AT&T 3G and Sprint WiMAX basestations]

Why separate network from radio?

Page 61: Stanford University

61

[Diagram: the same AT&T 3G and Sprint WiMAX basestations, shared]

Why separate network from radio?

Page 62: Stanford University

Trends

• Currently, ~10 wireless devices per person; in the future, expect 1,000 devices per person → a trillion devices coming online

• Most of them will be battery operated and expected to last a long time

• The current WiFi + cellular infrastructure won't scale

62

Page 63: Stanford University

How to meet this demand?

Provide high-throughput connectivity anytime, anywhere, while keeping battery consumption constant/low.

Current networks operate close to the Shannon limit → change how networks are architected:
• Increase density: bring the infrastructure and client closer
• Increase spectrum: adaptively exploit unused spectrum

Page 64: Stanford University

Who will pay for this infrastructure/spectrum?

• Revenue is not keeping pace with traffic growth → operators are unlikely to invest in expensive infrastructure

• To scale, the same physical infrastructure and spectrum have to be shared among multiple networks

Our approach: separate networks from physical infrastructure/spectrum via virtualization

Page 65: Stanford University

Step 2: Separating network from radio

65

[Diagram: AT&T, Verizon, Sprint, and a new virtual network sit above a separation/virtualization layer; below it are a WiMAX BS at 2.6-2.8 GHz, an LTE BS at 1.8-1.9 GHz, an LTE BS at 700-900 MHz, and a software radio]

Network operators use any BS with spectrum that's available (assuming the BS works in that range)

Page 66: Stanford University

Spectrum Virtualization

• Spectrum resources can be used along 3 dimensions
  – Frequency
  – Space
  – Time

• Spectrum slice abstraction
  – Sets of non-contiguous frequencies as a virtual spectral block (VSB)
  – OFDM to stitch together non-contiguous bands (toy sketch below)
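
A toy sketch of the VSB idea, under invented numbers (the subcarrier spacing and band edges are illustrative): place OFDM subcarriers only inside the free fragments, so non-contiguous spectrum behaves as one logical block:

```python
# Toy sketch of a virtual spectral block (VSB): OFDM subcarriers are
# placed only inside free, non-contiguous bands, stitching them into
# one logical block. Spacing and band edges are invented numbers.

SUBCARRIER_HZ = 312_500    # illustrative OFDM subcarrier spacing

def vsb_subcarriers(free_bands):
    """free_bands: [(start_hz, stop_hz), ...], non-contiguous.
    Returns the subcarrier center frequencies making up the VSB."""
    carriers = []
    for start, stop in free_bands:
        f = start + SUBCARRIER_HZ / 2
        while f + SUBCARRIER_HZ / 2 <= stop:
            carriers.append(f)               # usable subcarrier
            f += SUBCARRIER_HZ
    return carriers                          # occupied gaps get no carriers

# Two free fragments around an occupied band become one virtual block.
vsb = vsb_subcarriers([(2.412e9, 2.414e9), (2.437e9, 2.441e9)])
print(len(vsb))   # 6 + 12 = 18 subcarriers in the stitched block
```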

Page 67: Stanford University

Infrastructure Virtualization

• Different networks need to co-exist on the same physical hardware
  – Guarantee power, spectrum, and timing isolation

• Two approaches
  – Virtualizing within the same technology (e.g. an LTE BS)
    • Scheduling flows to provide QoS, and spectrum virtualization to share spectrum
  – Virtualizing hardware to support heterogeneous wireless technologies (e.g. LTE and WiMAX on the same BS)
    • Designing higher-level computing units (FFT, message-passing decoders, etc.) that can be stitched together to create different wireless PHYs

Page 68: Stanford University

Self Assessment

+ Progress being made towards our "big vision"
+ Surprised how easy experiments are to create
+ Expanded the vision to virtualize infrastructure & spectrum

- Hard to work with cellular providers
- Hard to deploy widely: spectrum, engineering
- A lot of work left to reach our "big vision"

68