OpenFlow-Based Server Load Balancing Gone Wild
Authors: Richard Wang, Dana Butnariu, Jennifer Rexford
Publisher: Hot-ICE'11, Proceedings of the 11th USENIX conference
Presenter: Tung-yin Chi
Date: 2015/4/22
Department of Computer Science and Information Engineering National Cheng Kung University, Taiwan R.O.C.
Introduction (1/3)
The OpenFlow standard enables an alternative approach where the commodity network switches divide traffic over the server replicas, based on packet-handling rules installed by a separate controller
However, the simple approach of installing a separate rule for each client connection (or “microflow”) leads to a huge number of rules in the switches and a heavy load on the controller
National Cheng Kung University CSIE Computer & Internet Architecture Lab
Introduction (2/3)
We argue that the controller should exploit switch support for wildcard rules for a more scalable solution that directs large aggregates of client traffic to server replicas
We present algorithms that compute concise wildcard rules that achieve a target distribution of the traffic, and automatically adjust to changes in load-balancing policies without disrupting existing connections
Introduction (3/3)
We implement these algorithms on top of the NOX OpenFlow controller, evaluate their effectiveness, and propose several avenues for further research
Core Ideas
Figure 1: Basic model from load balancer switch’s view
Relevant OpenFlow Features
The controller can install rules that match on certain packet-header fields and perform actions on the matching packets
A microflow rule matches on all fields, where a wildcard rule can have “don’t care” bits in some fields
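As a rough illustration (a sketch in Python, not OpenFlow code), the difference between microflow and wildcard matching can be shown with `None` standing in for a "don't care" field; all names here are illustrative:

```python
# Illustrative sketch only: a rule is a dict of header fields, with None
# marking a wildcarded ("don't care") field. A microflow rule fixes every
# field; a wildcard rule leaves some as None.

def matches(rule, packet):
    """True if every non-wildcarded field in the rule equals the packet's value."""
    return all(want is None or packet.get(field) == want
               for field, want in rule.items())

# A microflow rule pins one client connection exactly...
microflow = {"src_ip": "10.0.0.7", "dst_ip": "1.2.3.4",
             "src_port": 41000, "dst_port": 80}

# ...while a wildcard rule covers every connection from that client to port 80.
wildcard = {"src_ip": "10.0.0.7", "dst_ip": None,
            "src_port": None, "dst_port": 80}
```

A packet from the same client but a different source port would miss the microflow rule yet still hit the wildcard rule, which is why a single wildcard rule can stand in for many microflow rules.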
Relevant OpenFlow Features
The switch performs an action of:
• rewriting the server IP address
• forwarding the packet to the output port associated with the chosen replica

The load balancer relies on wildcard rules that match on the client IP addresses
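Putting the rule match and the action together, a hedged sketch of this forwarding step might look like the following (illustrative names and data, not the NOX or OpenFlow API; rules here match on leading bits of the client's source IP, with "*" wildcarding the rest):

```python
# Illustrative sketch: map client-IP prefixes to replicas, rewrite the
# destination IP to the chosen server, and forward out the matching port.

REPLICAS = {"R1": ("10.0.1.1", 1), "R2": ("10.0.1.2", 2)}  # replica -> (IP, port)
RULES = [("0*", "R1"), ("1*", "R2")]  # match on client (source) IP bits

def ip_bits(ip):
    """32-bit binary string for a dotted-quad IPv4 address."""
    return "".join(format(int(octet), "08b") for octet in ip.split("."))

def forward(pkt):
    bits = ip_bits(pkt["src_ip"])
    for prefix, replica in RULES:
        if bits.startswith(prefix.rstrip("*")):
            server_ip, port = REPLICAS[replica]
            pkt["dst_ip"] = server_ip  # rewrite destination to the chosen replica
            return port                # forward out that replica's port
    return None  # no matching rule: punt to the controller
```

With these two rules, clients whose IP starts with a 0 bit reach R1 and the rest reach R2, each seeing the rewritten server address.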
Partitioning the Client Traffic
The partitioning algorithm must divide client traffic in proportion to the load-balancing weights, while relying only on features available in the OpenFlow switches
Initially assume that traffic volume is uniform across client IP addresses
The goal is to generate a small set of wildcard rules that divide the entire client IP address space
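The paper's partitioning algorithm is not reproduced here, but a simplified sketch conveys the idea, assuming uniform traffic and integer weights that sum to a power of two: decompose each weight into power-of-two blocks and assign each block a distinct, aligned client-IP prefix (the function and its layout are illustrative, not the paper's exact algorithm):

```python
# Simplified sketch: split the 2**bits blocks of client-IP space among
# replicas in proportion to integer weights summing to 2**bits.
# Returns (prefix, replica) wildcard rules, largest blocks first.

def wildcard_rules(weights, bits=3):
    assert sum(weights.values()) == 2 ** bits
    # Decompose each weight into powers of two (e.g. 3 -> 2 + 1).
    chunks = []  # (block_size, replica)
    for replica, w in weights.items():
        b = 0
        while w:
            if w & (1 << b):
                chunks.append((1 << b, replica))
                w &= ~(1 << b)
            b += 1
    # Allocate from the largest block down so every block starts on a
    # multiple of its size, i.e. on a valid prefix boundary.
    chunks.sort(reverse=True)
    rules, cursor = [], 0
    for size, replica in chunks:
        fixed = bits - size.bit_length() + 1  # number of fixed high bits
        prefix = format(cursor // size, "b").zfill(fixed) if fixed else ""
        rules.append((prefix + "*", replica))
        cursor += size
    return rules
```

For weights 3, 4, and 1 over an 8-way split, this yields four rules: one for the weight-4 replica ("0*"), two for the weight-3 replica ("10*" and "111*"), and one for the weight-1 replica ("110*"). The paper's algorithm additionally minimizes the number of rules in the general case; this greedy decomposition is only the basic idea.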
Minimizing the Number of Wildcard Rules
Transitioning Quickly With Microflow Rules
Implementation and Evaluation
Built a prototype using Open vSwitch (a software OpenFlow switch) and NOX (an OpenFlow controller platform), running in Mininet.
The prototype runs the partitioning algorithm and our transitioning algorithm.
We use Mininet to build the topology in Figure 1 with a set of 3 replica servers, 2 switches, and a number of clients.
Implementation and Evaluation
The 3 replica servers host the same 16MB file. For this experiment, we use 36 clients with randomly-chosen IP addresses. Each client issues wget requests for the file. We assign weights of 3, 4, and 1 to the three replicas.
Adapting to new load-balancing weights
Overhead of transitions
To evaluate the overhead and delay on the controller during transitions, we have ten clients simultaneously download a 512MB file from two server replicas.
We start with all traffic directed to R1, and then (in the middle of the ten downloads) start a transition to replica R2.
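The transition mechanism behind this experiment can be sketched roughly as follows (a simplification with illustrative names, not the prototype's code): while a wildcard rule shifts from the old replica to the new one, the controller inspects packets in the affected range, pins in-progress connections to the old replica via microflow rules, and lets new connections (TCP SYN) start at the new replica, so no existing connection is disrupted.

```python
# Illustrative sketch of the transition idea. microflow_table maps a
# connection key to the replica it is pinned to.

def handle_transition_packet(pkt, old_replica, new_replica, microflow_table):
    key = (pkt["src_ip"], pkt["src_port"])
    if key in microflow_table:      # connection already pinned by a microflow rule
        return microflow_table[key]
    if pkt.get("syn"):              # new connection: send to the new replica
        microflow_table[key] = new_replica
    else:                           # in-progress connection: stay on the old replica
        microflow_table[key] = old_replica
    return microflow_table[key]
```

Only packets seen during the transition window reach the controller, which is consistent with the small overhead observed in the experiment.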
In the experiments, we did not see any noticeable degradation in throughput during the transition period.
Non-Uniform Client Traffic
Network of Multiple Switches
Conclusion
Our “partitioning” algorithm determines a minimal set of wildcard rules to install, while our “transitioning” algorithm changes these rules to adapt to new load-balancing weights.
Our evaluation shows that our system can indeed adapt to changes in target traffic distribution and that the few packets directed to the controller have minimal impact on throughput.