Optimizing the ‘One Big Switch’ Abstraction in Software Defined Networks


Nanxi Kang, Princeton University

in collaboration with Zhenming Liu, Jennifer Rexford, David Walker

Software Defined Network

• Decouple data and control plane
• A logically centralized control plane (controller)
• Standard protocol, e.g., OpenFlow

[Figure: the controller program compiles network policies into rules installed on the switches]

Existing control platform

• Decouple data and control plane
• A logically centralized control plane (controller)
• Standard protocol, e.g., OpenFlow

✔ Flexible policies  ✖ Easy management

‘One Big Switch’ Abstraction

[Figure: hosts H1, H2, H3 attached to a network presented as one big switch]

From H1, dstIP = 0* => go to H2
From H1, dstIP = 1* => go to H3

Endpoint policy E, e.g., ACL, load balancer
Routing policy R, e.g., shortest-path routing

Automatic Rule Placement
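To make the abstraction concrete, the two policies above could be represented roughly as follows. This is an illustrative sketch only; the field names, switch names, and `lookup` helper are hypothetical, not the paper's actual data model.

```python
# Endpoint policy E: what should happen to each packet, stated as
# if the whole network were a single switch (rules from the slide).
endpoint_policy = [
    {"from": "H1", "dstIP": "0*", "action": "go to H2"},
    {"from": "H1", "dstIP": "1*", "action": "go to H3"},
]

# Routing policy R: which path each (ingress, egress) pair uses,
# e.g., produced by shortest-path routing (switch names hypothetical).
routing_policy = {
    ("H1", "H2"): ["S1", "S2"],
    ("H1", "H3"): ["S1", "S3"],
}

def lookup(packet):
    """Apply the endpoint policy as if the network were one big switch."""
    for rule in endpoint_policy:
        prefix = rule["dstIP"].rstrip("*")
        if packet["from"] == rule["from"] and packet["dstIP"].startswith(prefix):
            return rule["action"]
    return "drop"  # default deny

print(lookup({"from": "H1", "dstIP": "01"}))  # matches the first rule
```

Automatic rule placement is then the job of compiling `endpoint_policy` and `routing_policy` into per-switch rule tables.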

Challenges of Rule Placement

[Figure: automatic rule placement must compile the policies into per-switch rule tables]

#rules > 10k
TCAM size = 1k ~ 2k

Automatic Rule Placement
Inputs: endpoint policy E, routing policy R

Past work

• Nicira
  • Install endpoint policies on ingress switches
  • Encapsulate packets to the destination
  • Only applies when the ingress switches are software switches
• DIFANE
• Palette

Contributions

• Design a new rule placement algorithm
  • Realize high-level network policies
  • Stay within rule capacity of switches
• Handle policy updates incrementally
• Evaluation on real and synthetic policies


Problem Statement

[Figure: automatic rule placement takes endpoint policy E, routing policy R, and the topology, and outputs per-switch rule tables; example switch capacities 1k, 1k, 0.5k, 0.5k]

1. Stay within capacity
2. Minimize total number of rules

Algorithm Flow

1. Decompose the network into paths
2. Divide rule space across paths
3. Place rules over paths


Single Path

• Routing policy is trivial

[Figure: a single path of switches with rule capacities C1, C2, C3]

Endpoint policy

R1: (srcIP = 0*, dstIP = 00), permit
R2: (srcIP = 01, dstIP = 1*), permit
R3: (srcIP = **, dstIP = 11), deny
R4: (srcIP = 11, dstIP = **), permit
R5: (srcIP = 10, dstIP = 0*), permit
R6: (srcIP = **, dstIP = **), deny

Map rule to rectangle

R1: (0*, 00), P
R2: (01, 1*), P
R3: (**, 11), D
R4: (11, **), P
R5: (10, 0*), P
R6: (**, **), D

[Figure: each rule maps to a rectangle in the two-dimensional (srcIP, dstIP) flow space; R1–R5 appear as rectangles, R6 covers the remainder]

C1 = 4
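The prefix-pair rules above map to rectangles mechanically. A small sketch over the slides' 2-bit toy address fields (the helper names are my own, not from the paper):

```python
FIELD_BITS = 2  # the slides use 2-bit srcIP/dstIP toy addresses

def prefix_to_interval(prefix, width=FIELD_BITS):
    """Map a wildcard prefix like '0*' to the integer interval it covers."""
    bits = prefix.rstrip("*")
    free = width - len(bits)
    lo = int(bits + "0" * free, 2)  # prefix padded with zeros
    hi = int(bits + "1" * free, 2)  # prefix padded with ones
    return lo, hi

def rule_to_rectangle(src_prefix, dst_prefix):
    """A rule over (srcIP, dstIP) is a rectangle in the 2-D flow space."""
    return prefix_to_interval(src_prefix) + prefix_to_interval(dst_prefix)

# R1: (srcIP = 0*, dstIP = 00) covers srcIP 00-01, dstIP 00-00
print(rule_to_rectangle("0*", "00"))  # (0, 1, 0, 0)
```

Note that a prefix always covers a contiguous, power-of-two-sized interval, which is why every rule is a rectangle rather than an arbitrary region.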

Pick rectangle for every switch

[Figure: the flow space of rectangles R1–R5 is carved up; each switch is assigned a rectangle of rules]

Select a rectangle

• Overlapped rules: R2, R3, R4, R6
• Internal rules: R2, R3

#Overlapped rules ≤ C1

[Figure: candidate rectangle q drawn over the flow space; C1 = 4]

Install rules on the first switch

[Figure: the rules overlapping q are installed on the first switch, clipped to q: R2, R3, and R’4 (R4 restricted to q)]

Rewrite policy

• Forward everything in q (already handled by the first switch)
• Skip the original policy rules covered by q

[Figure: the rewritten policy keeps R1, R4, R5 plus a catch-all forwarding rule for rectangle q]

Overhead of rules

• #Installed rules ≥ |Endpoint policy|
• Non-internal rules won’t be deleted
• Objective in picking rectangles: maximize (#internal rules) / (#overlapped rules)
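A sketch of the selection criterion: for a candidate rectangle q, count the rules that overlap q and those fully internal to q, then score q by the ratio. The rule rectangles encode the slides' six-rule example; the helper names are mine.

```python
# Rules from the slides' example as rectangles (src_lo, src_hi, dst_lo, dst_hi)
# over the 2-bit toy flow space.
rules = {
    "R1": (0, 1, 0, 0),  # (0*, 00), permit
    "R2": (1, 1, 2, 3),  # (01, 1*), permit
    "R3": (0, 3, 3, 3),  # (**, 11), deny
    "R4": (3, 3, 0, 3),  # (11, **), permit
    "R5": (2, 2, 0, 1),  # (10, 0*), permit
    "R6": (0, 3, 0, 3),  # (**, **), deny
}

def overlaps(r, q):
    """True if rectangles r and q intersect in both dimensions."""
    return r[0] <= q[1] and q[0] <= r[1] and r[2] <= q[3] and q[2] <= r[3]

def internal(r, q):
    """True if rectangle r lies entirely inside q."""
    return q[0] <= r[0] and r[1] <= q[1] and q[2] <= r[2] and r[3] <= q[3]

def score(q):
    """Objective from the slides: maximize #internal / #overlapped."""
    over = [n for n, r in rules.items() if overlaps(r, q)]
    inner = [n for n in over if internal(rules[n], q)]
    return inner, over, len(inner) / len(over)

q = (0, 3, 2, 3)  # candidate rectangle covering dstIP = 1*
inner, over, ratio = score(q)
```

For this q the overlapped set is {R2, R3, R4, R6} and the internal set is {R2, R3}, matching the counts on the earlier slide; only internal rules can be removed from the rewritten policy, which is why the ratio is the quantity to maximize.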

Algorithm Flow

1. Decompose the network into paths
2. Divide rule space across paths
3. Place rules over paths

Topology = {Paths}

• Routing policy
  • Implemented by installing forwarding rules on switches
  • Yields the set of paths

[Figure: the network connecting H1, H2, H3 is decomposed into its constituent paths]

Project endpoint policy to paths

• Enforce the endpooint policy
  • Project the endpoint policy onto each path
  • Each projection handles only the packets using that path
• Solve each path independently

[Figure: endpoint policy E is split into per-path policies E1, E2, E3, E4]
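One way to read the projection step: each path keeps only the endpoint-policy rules whose traffic can actually enter on that path. A sketch with a hypothetical `ingress` field and path structure (illustrative, not the paper's formulation):

```python
# Hypothetical endpoint policy; the 'ingress' field marks where
# matching traffic enters the network.
endpoint_policy = [
    {"ingress": "H1", "match": "dstIP=0*", "action": "permit"},
    {"ingress": "H1", "match": "dstIP=1*", "action": "permit"},
    {"ingress": "H2", "match": "dstIP=**", "action": "deny"},
]

paths = {
    "E1": {"ingress": "H1"},  # the path carrying H1's traffic
    "E2": {"ingress": "H2"},  # the path carrying H2's traffic
}

def project(policy, path):
    """Keep only the rules that can match packets entering on this path;
    each projected sub-policy is then placed on its path independently."""
    return [r for r in policy if r["ingress"] == path["ingress"]]

E1 = project(endpoint_policy, paths["E1"])  # two rules
E2 = project(endpoint_policy, paths["E2"])  # one rule
```

Because every packet follows exactly one path, solving the projected sub-policies independently still enforces the full policy.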

What is the next step?

[Figure: network between H1, H2, H3 decomposed into paths]

• Divide rule space across paths
  • Estimate the rules needed by each path
  • Partition the rule space by linear programming
• Solve rule placement over paths ✔
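The slides partition shared rule space with a linear program. As a rough stand-in for intuition only (a proportional heuristic, not the paper's LP), a shared switch's capacity could be split by each path's estimated rule demand:

```python
def split_capacity(capacity, demands):
    """Proportionally split one switch's rule capacity among the paths
    that traverse it, by estimated rule demand. A simplification of the
    LP-based partitioning described in the slides."""
    total = sum(demands.values())
    shares = {p: capacity * d // total for p, d in demands.items()}
    # Hand any leftover slots to the most demanding paths.
    leftover = capacity - sum(shares.values())
    for p in sorted(demands, key=demands.get, reverse=True)[:leftover]:
        shares[p] += 1
    return shares

# Hypothetical demands for three paths sharing a 1000-rule switch.
print(split_capacity(1000, {"p1": 600, "p2": 300, "p3": 100}))
```

An LP improves on this by coordinating the split across all switches at once, subject to every switch's capacity constraint.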

Algorithm Flow

1. Decompose the network into paths
2. Divide rule space across paths
3. Place rules over paths

On success, done; on failure, return to step 2 and re-divide the rule space.

Roadmap

• Design a new rule placement algorithm
  • Stay within rule capacity of switches
  • Minimize the total number of installed rules
• Handle policy updates incrementally
  • Fast in making changes
  • Compute new placement in background
• Evaluation on real and synthetic policies

Insert a rule into a path

• Path

Limited impact

• Path
  • Update a subset of switches
  • Respect the original rectangle selection

[Figure: only the affected switch's rectangle is rewritten (R → R’); the other switches keep their rules]

Roadmap

• Design a new rule placement algorithm
  • Stay within rule capacity of switches
  • Minimize the total number of installed rules
• Handle policy updates incrementally
• Evaluation on real and synthetic policies
  • ACLs (campus network), ClassBench
  • Shortest-path routing on GT-ITM topology

Path

• Assume switches have the same capacity
• Find the minimum #rules/switch that gives a feasible rule placement
• Overhead = (#rules/switch x #switches - |E|) / |E|

|E|     #switches   #rules/switch   #total rules   #extra rules   Overhead
13985   4           3646            14584          599            4.3%
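The row above follows directly from the formula; a quick check of the slides' numbers:

```python
def overhead(policy_size, n_switches, rules_per_switch):
    """Overhead = (#rules/switch x #switches - |E|) / |E|."""
    total = n_switches * rules_per_switch
    extra = total - policy_size
    return total, extra, extra / policy_size

# |E| = 13985 rules over a 4-switch path, 3646 rules per switch.
total, extra, frac = overhead(13985, 4, 3646)
print(total, extra, round(100 * frac, 1))  # 14584 599 4.3
```

The extra 599 rules are the price of splitting one policy into per-switch rectangles, about 4.3% over the policy itself.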

#Extra installed rules vs. path length

[Figure: normalized #extra rules (0–0.1) vs. path length (1–9)]

|E|     #switches   #rules/switch   #total rules   Overhead
13985   4           3646            14584          4.3%
13985   8           1895            15160          8.4%

Data set matters

• Real ACL policies

[Figure: normalized #extra rules (0–0.35) vs. path length (1–8); policies with many rule overlaps incur far more extra rules than policies with few rule overlaps]

Place rules on a graph

• #Installed rules
  • Use rules on switches efficiently
• Unwanted traffic
  • Drop unwanted traffic early
• Computation time
  • Compute rule placement quickly


Carry extra traffic along the path

• Install rules along the path
  • Not all packets are handled at the first hop
  • Unwanted packets travel further into the network
• Quantify the effect of carrying unwanted traffic
  • Assume a uniform distribution of traffic over rules with a drop action

When unwanted traffic is dropped

• An example single path

#hops   Fraction of path travelled   Unwanted traffic dropped at this switch   Unwanted traffic dropped up to this switch
1       25%                          30%                                       30%
2       50%                          10%                                       40%
3       75%                          5%                                        45%
4       100%                         5%                                        50%
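The cumulative column is just a running sum over the per-hop drops; a small sketch reproducing the table's numbers (per-hop percentages taken from the example path):

```python
def drop_profile(per_hop_drop_pct):
    """For each hop of a path, return (fraction of path travelled in %,
    cumulative % of unwanted traffic dropped by that hop)."""
    n = len(per_hop_drop_pct)
    profile, dropped = [], 0
    for hop, d in enumerate(per_hop_drop_pct, start=1):
        dropped += d
        profile.append((100 * hop // n, dropped))
    return profile

# Per-hop drop percentages from the slides' 4-hop example path.
print(drop_profile([30, 10, 5, 5]))  # [(25, 30), (50, 40), (75, 45), (100, 50)]
```

The front-loaded profile is the desirable case: the bulk of the unwanted traffic dies at the first hop instead of consuming downstream bandwidth.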

Aggregating all paths

Fraction of path travelled   Unwanted traffic dropped
20%                          64%
75%                          70%
100%                         100%

Give a bit more rule space

• Min #rules/switch for a feasible rule placement

Fraction of path travelled   Dropped (min #rules/switch)   Dropped (10% more rules/switch)
20%                          64%                           84%
75%                          70%                           90%
100%                         100%                          100%

• Put more rules at the first several switches along the path

Take-aways

• Path: low overhead in installing rules.
• Rule capacity is efficiently shared by paths.
• Most unwanted traffic is dropped at the edge.
• Fast algorithm, easily parallelized
  • < 8 seconds to compute all paths

Summary

• Contributions
  • An efficient rule placement algorithm
  • Support for incremental updates
  • Evaluation on real and synthetic data
• Future work
  • Integrate with SDN controllers, e.g., Pyretic
  • Combine rule placement with rule caching

THANKS!
