Towards Dynamic MPTCP Path Control Using SDN

Hyunwoo Nam∗, Doru Calin† and Henning Schulzrinne∗

∗Columbia University, New York, NY    †Nokia Bell Labs, New Jersey

Abstract—MPTCP (MultiPath TCP) boosts application network performance by aggregating bandwidth over multiple paths. However, it may cause poor performance due to the large number of out-of-order packets, especially when the paths have different delays. To resolve this issue, we propose to dynamically add or remove MPTCP paths with the leverage of software-defined networking (SDN). The key idea is to track the available capacity of connected paths and pick the most appropriate paths depending on varying network conditions. To show the feasibility of our approach, we create an SDN platform using Mininet over Wi-Fi networks. Our analysis shows that a faster download and improved quality of experience (QoE) in adaptive rate video streaming are possible with our dynamic MPTCP path control mechanism using SDN.

Index Terms—Software-defined Networking (SDN); MultiPath TCP (MPTCP)

I. INTRODUCTION

Today's multihomed devices are physically connected to multiple network interfaces. For instance, a smartphone is attached to mobile (e.g., 3G and 4G) and Wi-Fi networks for Internet access. A router in a wide area network (WAN) may be connected to multiple Internet Service Providers (ISPs). The major advantage of using multihomed devices is that when an interface is down, Internet connectivity may still be possible through other interfaces. However, with regular TCP, multihomed devices cannot use multiple interfaces simultaneously for a single TCP connection. When the interface currently in use goes down, the application must re-establish a new TCP session via another interface to maintain continuity of service, and bandwidth aggregation is not possible.

SCTP (Stream Control Transmission Protocol) has been introduced as an alternative mobility solution at the transport layer [1]. SCTP is used mostly in telephony networks, but not for public Internet services: without the support of network manufacturers, networking middleboxes such as firewalls or Network Address Translators (NATs) prevent delivery of SCTP packets. MPTCP (MultiPath TCP) is designed to alleviate this issue [2]. MPTCP is a TCP extension that allows end-hosts to use multiple paths together to maximize network utilization and increase redundancy. It can be implemented in modern operating systems and existing applications without using excessive memory or processing, and it performs well in all networking scenarios where regular TCP works.

MPTCP uses multiple paths simultaneously for a single TCP connection. This may cause a large number of out-of-order TCP packets, especially when the paths have different bandwidths and delays. For instance, assume that a packet has arrived at the receiver's buffer over a Wi-Fi network while other packets with lower sequence numbers are still arriving through an LTE network due to congestion. MPTCP holds the packet in the reordering queue until its data sequence number is in order. In this case, using single-path TCP (SPTCP) over Wi-Fi may provide better performance than MPTCP.

In this paper, we study how badly an increased MPTCP reordering queue size affects application network performance. We analyze the MPTCP in-flight packet size during a download and compare delays between MPTCP and SPTCP while downloading files of different sizes (10 KB to 100 MB) over Wi-Fi networks under varying network conditions. During our empirical experiments, we try to quantify a threshold that can be used to identify unbalanced traffic load conditions where MPTCP causes poor performance compared to SPTCP. Based on the experimental results, we propose to maximize downloading rates by dynamically adjusting the number of MPTCP paths depending on the current available capacity using software-defined networking (SDN). In our proposed platform, an SDN application running on an MPTCP user's device monitors the current downloading rates on the connected paths to identify poor links that mainly increase the reordering queue size at the MPTCP layer. It removes an MPTCP path that provides relatively lower capacity (e.g., on the order of 10 times lower) than the other available paths, and attaches the path again when sufficiently large capacity becomes available. In this case, the SDN application can obtain the estimated capacity over the path through an SDN controller.

We show the feasibility of building dynamic MPTCP path control in an SDN-enabled network. We created our SDN testbed using Mininet [3] and the POX SDN controller [4]. We attached two Wi-Fi networks (802.11g/n) to the virtual network, which allowed the MPTCP users to download data from a host inside the Mininet network over two Wi-Fi paths simultaneously. Our evaluation shows that dynamically switching between MPTCP and SPTCP can provide a faster download and improve quality of experience (QoE) in real-time video streaming.

The remainder of this paper is organized as follows. In Section II, we focus on understanding the current MPTCP mechanism along with the proposed SDN platform in WANs. Section III describes the potential challenges posed by the use of MPTCP over paths that have different bandwidths. Then, we present our proposed solution in Section IV. Our evaluation is described in Section V. Finally, we look at the related work and summarize our conclusions in Sections VI and VII, respectively.


Figure 1: Proposed SDN platform in WANs

II. BACKGROUND

A. How does MPTCP work?

Figure 2 shows how MPTCP establishes TCP sessions over two paths. MPTCP is implemented in the transport layer (layer 4), and it supports both IPv4 and IPv6. A subflow is a TCP flow on an individual path, defined by the 4-tuple of source and destination IP addresses and TCP ports. MPTCP establishes and terminates a subflow similarly to regular TCP, and it does not attach multiple subflows at the same time. During the three-way handshake (Steps 1 through 3), a client sends a SYN segment that contains the MP_CAPABLE option. If the server supports MPTCP, it replies to the SYN segment with an ACK message that includes the MP_CAPABLE option. Then, the client confirms the MPTCP connection by sending the final ACK with the MP_CAPABLE option. During this process, both client and server exchange random keys that are used to generate a token, which is used for authentication when adding a new subflow. To attach a second subflow, the client and server use the MP_JOIN option during the handshake (Steps 4 through 6). They exchange random nonces that are used to compute HMACs (hash-based message authentication codes). The final ACK (Step 7) is a confirmation message received by the client from the server.
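As a concrete illustration (ours, not from the paper): on Linux kernels with upstream MPTCP support (v5.6 and later, a different code base than the out-of-tree v0.90 used in our testbed), an application can request the MP_CAPABLE negotiation described above simply by creating its socket with IPPROTO_MPTCP:

```python
import socket

# socket.IPPROTO_MPTCP exists on Python >= 3.10; the raw Linux protocol
# number is 262, so fall back to it on older interpreters.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

# The kernel sends MP_CAPABLE in the SYN and may add MP_JOIN subflows
# later, depending on the configured path manager; if the peer is not
# MPTCP-capable, the connection falls back to plain TCP.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
s.connect(("example.com", 80))
s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(s.recv(1024))
s.close()
```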

Once the two subflows are established, MPTCP can use them to exchange data simultaneously. The default scheduler of MPTCP pushes data through the subflow with the lowest Round Trip Time (RTT) as long as there is space in its congestion window. If the window is full, it moves on to the subflow with the next higher RTT. Each subflow uses its own TCP sequence numbers, and MPTCP uses data sequence numbers to make sure that the packets coming through the two subflows are delivered to the application layer in order. When a packet loss occurs on a subflow, the packet can be retransmitted over the other subflow. When one subflow goes down, MPTCP can use the other subflow to convey the failure to the other host.
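The following toy model (ours; the real scheduler lives in the kernel) captures that lowest-RTT-first rule: try subflows in increasing RTT order and send on the first one whose congestion window still has room.

```python
from dataclasses import dataclass

@dataclass
class Subflow:
    name: str
    rtt_ms: float
    cwnd: int        # congestion window, in segments
    in_flight: int   # sent-but-unacknowledged segments

def pick_subflow(subflows):
    # Lowest-RTT subflow with window space wins; otherwise wait for ACKs.
    for sf in sorted(subflows, key=lambda s: s.rtt_ms):
        if sf.in_flight < sf.cwnd:
            return sf
    return None

flows = [Subflow("wifi", 8.0, 10, 10), Subflow("lte", 45.0, 20, 4)]
print(pick_subflow(flows).name)  # wifi's window is full, so "lte" is picked
```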

Figure 2: MPTCP subflow establishment

Using MPTCP can be unfair to SPTCP users. Assume that an MPTCP user shares a link, using two subflows, with an SPTCP user, and that the standard TCP congestion control scheme (Reno) is used on each subflow. In this case, the MPTCP user will obtain two-thirds of the available bandwidth on the shared link. To resolve this problem, new MPTCP congestion control algorithms have been proposed. The Coupled scheme is the default congestion controller of MPTCP, and it provides better congestion balancing than uncoupled TCP Reno [5]. However, it trades off optimal congestion balancing against responsiveness to network changes [6], [7]. Khalili et al. [6] have proposed the Opportunistic Linked Increases Algorithm (OLIA), which preserves better fairness across the shared link than the Coupled algorithm. Recently, the balanced linked adaptation congestion control algorithm (Balia) has been introduced to better cope with the trade-off between TCP friendliness and responsiveness [8].

B. SDN in WANs

An SDN technology is designed to be deployed among switches in data centers, and its availability has been extended to routers in WANs. Using the OpenFlow protocol, an SDN controller obtains various networking feedback (e.g., current capacity and packet loss rate on a link) from WAN routers in real time. Then, the controller dynamically changes routing paths based on changing network conditions and QoS rules that can be updated by service providers using SDN Northbound APIs [9], [10]. In this paper, we assume that an ISP operates its own SDN platform in its domain and shares traffic information (e.g., QoS rules and service identification) with other ISPs. In addition, we assume that software-defined wireless networks are enabled in the existing mobile platform. The SDN applications running on the end-users' devices and on the edge nodes of wireless networks (e.g., eNodeBs in a 4G network and access points in a Wi-Fi network) are connected to their corresponding SDN controllers [11]. We illustrate the proposed platform in Figure 1.
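As a rough, controller-agnostic illustration (ours; real deployments would use OpenFlow statistics messages, whose exact API depends on the controller), per-link throughput can be derived by differencing periodically polled byte counters:

```python
import time

def estimate_throughput_mbps(poll, interval_s: float = 2.0) -> float:
    # Difference two cumulative byte-counter samples into Mb/s.
    first = poll()
    time.sleep(interval_s)
    second = poll()
    return (second - first) * 8 / interval_s / 1e6

def is_congested(tput_mbps: float, capacity_mbps: float,
                 threshold: float = 0.9) -> bool:
    # Flag a link once utilization approaches its estimated capacity.
    return tput_mbps >= threshold * capacity_mbps

# Fake counter standing in for a switch port's cumulative TX bytes.
samples = iter([1_000_000, 3_500_000])
print(estimate_throughput_mbps(lambda: next(samples), 2.0))  # -> 10.0
```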

III. ANALYSIS OF MPTCP

As a baseline analysis, we measure the performance of MPTCP compared to SPTCP. Then, we analyze the problems of using MPTCP under unbalanced traffic conditions in the network.

A. Testbed setup

Figure 3: Testbed setup for a baseline analysis of MPTCP

Figure 3 shows our testbed setup. MPTCP v0.90 is installed on Ubuntu 14.04 on both the client and the Web server. The client uses two wireless adapters to access 802.11g and 802.11n simultaneously. The Wi-Fi access point (TP-LINK's TL-WDR4300) has three dual-band antennas and can theoretically provide up to 300 Mb/s over 802.11g and up to 450 Mb/s over 802.11n. The signal strength measured at the client's device is -45 dBm over 802.11g and -38 dBm over 802.11n. The RTT between the client and the server is less than 10 ms. Our TCP tuning is as follows: caching of the slow start threshold (ssthresh) from the previous connection is disabled; the initial congestion window size (CWND) is 10; TCP Selective Acknowledgment (SACK) is enabled; the server's send buffer size (wmem) settings are 10 KB (min.), 85 KB (default) and 16 MB (max.); and the client's receive buffer size (rmem) settings are 4 KB (min.), 85 KB (default) and 6 MB (max.).
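For reproducibility, the following sketch shows one way (our mapping, assuming Linux sysctls, with buffer sizes converted to bytes) to apply this tuning. The initial CWND of 10 is set per route with `ip route change <route> initcwnd 10` rather than through sysctl.

```python
import subprocess

# TCP tuning from Section III-A (run as root; wmem matters on the
# server, rmem on the client).
SETTINGS = {
    "net.ipv4.tcp_no_metrics_save": "1",          # no cached ssthresh
    "net.ipv4.tcp_sack": "1",                     # enable SACK
    "net.ipv4.tcp_wmem": "10240 87040 16777216",  # send: 10 KB/85 KB/16 MB
    "net.ipv4.tcp_rmem": "4096 87040 6291456",    # recv: 4 KB/85 KB/6 MB
}
for key, value in SETTINGS.items():
    subprocess.run(["sysctl", "-w", f"{key}={value}"], check=True)
```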

B. Measuring MPTCP performance

To avoid the fairness issue of using MPTCP, only a single client operates, through either MPTCP or SPTCP, during each experiment. We compare the download complete times while the client downloads files of various sizes (10 KB through 5 MB) from the Web server. Figure 4 shows our experimental results; the number of data samples for each set of experiments associated with a given file size is 100. For a small file size such as 10 KB, we do not find a significant difference between MPTCP and SPTCP. This is because, before establishing the second subflow over 802.11g, the client has already downloaded the entire file via 802.11n. For a 100 KB file, MPTCP shows a slightly faster download than SPTCP, since the client received the majority of the data during slow start over both paths. For large file sizes such as 1 MB and 5 MB, MPTCP clearly outperforms SPTCP by aggregating the bandwidth over the two paths simultaneously during a download.

Figure 4: Downloading various file sizes (10 KB, 100 KB, 1 MB, 5 MB) using MPTCP and SPTCP over 802.11n

Performance is also affected by how fast MPTCP establishes a second subflow. During the above experiments, it took less than 50 ms to send the SYN segment over 802.11g for the second subflow after the first subflow was established over 802.11n. The other question is how quickly MPTCP adapts to sudden changes in network conditions, such as when an existing path gets disconnected or a new interface becomes available during a file transfer. For this experiment, we repeatedly connected and disconnected the 802.11g adapter in the middle of a download. We first disconnected the 802.11g adapter and tracked the sequence numbers of the MPTCP in-flight packets that had been sent over 802.11g but not yet acknowledged. It took 0.9 s until those packets were resent over 802.11n. Then, we connected the 802.11g adapter again and observed that MPTCP sent the SYN segment to re-establish the second subflow as soon as the client was re-connected to 802.11g (within less than 1 s).

C. Problems of using MPTCP

Figure 5: MPTCP sequence numbers between t = 0 s and t = 2 s

Figure 6: ACK size on subflows

We analyze the behavior and performance of MPTCP using protocol stress scenarios with different bandwidths and delays. To create unbalanced network conditions, we manipulate the down-link throughput over the Wi-Fi networks using Linux Traffic Control (tc) with Network Emulation (netem), which lets us control the link latency and the maximum available link capacity; a sketch of this shaping follows. During the experiments, we set the achievable down-link speed over 802.11n and 802.11g to 10 Mb/s and 0.1 Mb/s, respectively. By creating scenarios with such a significant rate imbalance (e.g., on the order of 100) across the available wireless access interfaces, our goal was to emulate conditions where MPTCP gets challenged. Such conditions may occur, for instance, when one Wi-Fi access point is in close proximity to the wireless client while another access point is farther away. We capture and analyze the MPTCP packets while the client downloads a 100 MB file from the Web server over the two paths.
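A minimal sketch of that shaping (interface names are our assumptions):

```python
import subprocess

def shape(iface: str, rate: str, delay: str = "0ms") -> None:
    # Replace the root qdisc with a netem rate/delay limit on the interface.
    subprocess.run(["tc", "qdisc", "replace", "dev", iface, "root",
                    "netem", "rate", rate, "delay", delay], check=True)

shape("wlan0", "10mbit")   # fast 802.11n path (interface name assumed)
shape("wlan1", "100kbit")  # slow 802.11g path (interface name assumed)
```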

Figure 5 shows the MPTCP sequence numbers between t = 0 s and t = 2 s during a download. MPTCP sent most of the packets over the 802.11n air interface (fast link) and just a small number of packets over the 802.11g air interface (slow link). During the experiment, we found a large number of duplicate ACKs on both subflows, which explains the jumps (vertical lines) among sequence numbers in the figure. More specifically, we observed that several TCP packets that were sent over 802.11n ended up being acknowledged via 802.11g. Furthermore, the ACK packets sent over the 802.11g air interface arrived late at the transmitter, acknowledging data packets that had already been confirmed through the 802.11n air interface. This yielded duplicate ACKs and triggered the transmitter to slow down. Figure 6 shows the CDF of ACK size during a download. The ACK size is small when packets arrive in order. On the other hand, a large fraction of data is acknowledged in bursts when numerous out-of-order packets occur, yielding larger ACK sizes. Hence, the experimental results confirm that the 802.11n path outperforms the 802.11g path. Also, this can be an indication that a significant amount of data gets stuck in the reordering queue at the client's device for a while.

Figure 7: Analysis of in-flight packets between t = 26 s and t = 30 s

We estimate the packet reordering queue size using mptcptrace [12], a tool that analyzes TCP packets at the MPTCP layer and provides various subflow statistics. For example, Figure 7 shows the aggregated size of in-flight packets during a download. The continuous red line shows the receiver's window size. The dotted blue line denotes the sum of unacknowledged data over all subflows. The dashed green line indicates the size of unacknowledged data (sent over any subflow) at the MPTCP layer. Therefore, the difference between the green and blue lines equates to the size of the reordering queue at the MPTCP layer. Due to the large number of out-of-order packets, which increases the packet reordering time, it took 89.6 s to completely download the file, which is worse than using SPTCP over 802.11n alone (81.9 s).

D. MPTCP vs. SPTCP

Our analysis shows that MPTCP delivers low performance due to a large number of out-of-order packets, especially when one path lacks capacity compared to the other available paths. Below, we contrast the performance of MPTCP and SPTCP while downloading several files of different sizes (100 KB through 5 MB) under controlled network conditions. Referring to the maximum throughput over 802.11n and 802.11g as $Tput^{Max}_{802.11n}$ and $Tput^{Max}_{802.11g}$, respectively, $Tput_{\Delta}$ denotes the normalized throughput gap between them, calculated as follows:

$$Tput_{\Delta} = \frac{\lvert Tput^{Max}_{802.11n} - Tput^{Max}_{802.11g} \rvert}{\max\!\left(Tput^{Max}_{802.11n},\, Tput^{Max}_{802.11g}\right)} \qquad (1)$$

During the experiments, we scale down the available bandwidths in the following scenarios (a short sketch computing these gap values follows the list):

• Scenario 1) $Tput^{Max}_{802.11n}$ is set to 5 Mb/s. $Tput^{Max}_{802.11g}$ is set to 2 Mb/s ($Tput_{\Delta}$ = 0.6), 1.5 Mb/s ($Tput_{\Delta}$ = 0.7), 1 Mb/s ($Tput_{\Delta}$ = 0.8), 0.5 Mb/s ($Tput_{\Delta}$ = 0.9) and 0.05 Mb/s ($Tput_{\Delta}$ = 0.99).

• Scenario 2) $Tput^{Max}_{802.11n}$ is set to 10 Mb/s. We set $Tput^{Max}_{802.11g}$ to 4 Mb/s, 3 Mb/s, 2 Mb/s, 1 Mb/s and 0.1 Mb/s for the same $Tput_{\Delta}$ values.

• Scenario 3) $Tput^{Max}_{802.11n}$ is set to 25 Mb/s. We change $Tput^{Max}_{802.11g}$ to 10 Mb/s, 7.5 Mb/s, 5 Mb/s, 2.5 Mb/s and 0.25 Mb/s based on the same $Tput_{\Delta}$ values.
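The gap values quoted above follow directly from Eq. (1); this small check (ours) reproduces them:

```python
def tput_delta(fast_mbps: float, slow_mbps: float) -> float:
    # Normalized throughput gap of Eq. (1).
    return abs(fast_mbps - slow_mbps) / max(fast_mbps, slow_mbps)

scenarios = {
    "Scenario 1 (5 Mb/s)":  (5,  [2, 1.5, 1, 0.5, 0.05]),
    "Scenario 2 (10 Mb/s)": (10, [4, 3, 2, 1, 0.1]),
    "Scenario 3 (25 Mb/s)": (25, [10, 7.5, 5, 2.5, 0.25]),
}
for name, (fast, slows) in scenarios.items():
    print(name, [round(tput_delta(fast, s), 2) for s in slows])
    # each line prints [0.6, 0.7, 0.8, 0.9, 0.99]
```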

Regarding the selected file sizes, we omit experiments under high-bandwidth conditions (e.g., over 100 Mb/s), since both MPTCP and SPTCP show good performance there with no significant difference. We note that the controlled capacities are sufficient to analyze the difference in file download complete times between MPTCP and SPTCP.

Figure 8: Average download complete time (s)

Figure 13 shows our experimental results. We found that in most cases the download complete time gets longer as $Tput_{\Delta}$ increases, regardless of the file size. During the experiments, MPTCP pushes most of the traffic over the faster link (802.11n), which has sufficient capacity to carry the traffic. Figure 8 shows the average download complete time during the experiments. For small file sizes, the results may not differ significantly; when $Tput_{\Delta}$ is 0.99 in Scenario 3 with 100 KB, for example, the difference in average complete time between SPTCP and MPTCP is less than 0.6 s. However, for larger file sizes such as 1 MB and 5 MB, MPTCP performs more poorly under similarly heavily unbalanced traffic load conditions because numerous packets are still sent, and delayed, through the poor path (802.11g). This increases the number of out-of-order packets during a download. For instance, when $Tput_{\Delta}$ is 0.99 in Scenario 1 with 5 MB, the average download complete time for MPTCP is 10.9 s while that for SPTCP is 8.6 s.

These experiments demonstrate that using SPTCP can outperform MPTCP, especially when MPTCP uses heavily unbalanced paths (e.g., $Tput_{\Delta} \geq 0.9$). We also conducted the same experiments with large file sizes such as 100 MB and 200 MB under high-bandwidth conditions (e.g., over 100 Mb/s) and observed the same behavior.

Figure 9: Dynamic MPTCP path control using SDN

IV. DYNAMIC MPTCP PATH CONTROL USING SDN

Throughout Section III, we show that MPTCP may experience poor performance over multiple paths that have significant differences in bandwidth availability. Based on this analysis, we suggest dynamically adjusting the number of MPTCP paths with the support of SDN. The questions are: how do we recognize such heavily unbalanced traffic load conditions, and when and how do we adjust the number of MPTCP paths during a download? Figure 9 shows our proposed platform. An SDN application running on the user's device retrieves various networking information from an SDN controller in the WAN and selects appropriate MPTCP paths in real time. To avoid scalability issues, the MPTCP client can obtain the information directly from SDN-enabled local edge nodes such as the Wi-Fi access points or eNodeBs in Figure 1.

There are different QoS requirements depending on the characteristics of user applications. For instance, file downloading using FTP or a user's Web browsing is not latency-sensitive, but network delay matters more to real-time traffic applications such as VoIP and online gaming. For over-the-top (OTT) video streaming such as YouTube and Netflix, throughput is considered important to provide high video resolution. In this paper, our throughput-based path control is designed to maximize downloading rates while using MPTCP over multiple paths. To better illustrate the mechanism, let's assume that a client connected over Wi-Fi and LTE is requesting a YouTube video. Before establishing the connection, the SDN application locally measures the current downloading throughput over the two paths ($Tput_{WiFi}$ and $Tput_{LTE}$) if other flows are using them. If the paths are not in use, the SDN application estimates the theoretically expected capacity on the Wi-Fi ($Tput^{Exp}_{WiFi}$) and LTE ($Tput^{Exp}_{LTE}$) paths with the support of the SDN controller. For example, $Tput^{Exp}_{WiFi}$ can be calculated as follows:

$$Tput^{Exp}_{WiFi} = \min\left(Tput^{WAN}_{WiFi},\; Tput^{Local}_{WiFi}\right)$$
$$Tput^{Local}_{WiFi} = Tput^{Max}_{WiFi} - \sum_{i=1}^{N} Tput^{User_i}_{WiFi} \qquad (2)$$

$Tput^{WAN}_{WiFi}$ denotes the available capacity on the path from the local Wi-Fi ISP to the connected host. The available bandwidth for the referenced Wi-Fi access point ($Tput^{Local}_{WiFi}$) takes into account the theoretical maximum achievable throughput over the Wi-Fi link ($Tput^{Max}_{WiFi}$) and the current usage by other users ($Tput^{User_i}_{WiFi}$) in the same Wi-Fi network. The received signal strength indicator (RSSI) and the modulation and coding scheme (MCS) information can be used to estimate the theoretical Wi-Fi capacity [13]. In LTE, we can estimate the peak data rate based on the CQI (Channel Quality Indicator), MCS, bandwidth, and the number of antennas, which can be obtained from eNodeBs as shown in Figure 1.

Before using MPTCP over Wi-Fi and LTE, the application checks whether the current or available bandwidths over the two paths are significantly different. In the previous experiments, we showed that MPTCP causes poor performance due to an increased reordering queue size under heavily unbalanced traffic load conditions. We observed this behavior especially when the throughput gap ($Tput_{\Delta}$) between the two paths is above 0.9. We note that 0.9 is a relative value for identifying heavily unbalanced network conditions with respect to the expected and current bandwidths over the MPTCP paths; it is based on our empirical experimental results in Section III. One may wonder how often end-users experience such unbalanced traffic load conditions. Two networks such as 4G and Wi-Fi are common today, and other technologies such as millimeter waves can also be used for Internet access in 5G networks. Since most of these networks are wireless and they operate simultaneously for a TCP connection, end-users may frequently experience such undesired network conditions.

Based on these experimental results, we take into account the throughput gap ($Tput^{path}_{\Delta}$) for each path and compare the value with $Tput^{Threshold}_{\Delta}$ (= 0.9). For example, we can calculate $Tput^{WiFi}_{\Delta}$ using the following equation:

$$Tput^{WiFi}_{\Delta} = \frac{Tput^{Max} - Tput_{WiFi}}{Tput^{Max}}, \qquad Tput^{Max} = \max\left(Tput_{WiFi},\; Tput_{LTE}\right) \qquad (3)$$

The client uses MPTCP when both $Tput^{LTE}_{\Delta}$ and $Tput^{WiFi}_{\Delta}$ are less than 0.9. Otherwise, it uses the path that provides the highest capacity. Once the path decision is complete, the SDN application updates the traffic information, such as the application identification and the connected paths, in its database. The SDN application then monitors the current throughput on the connected paths while the client is watching the video. When the throughput gap exceeds the tunable reference threshold ($Tput^{Threshold}_{\Delta}$ = 0.9), it signals the SDN controller to diagnose the network conditions. If the problem is caused by link congestion in the WAN, the controller can resolve it by changing the path route or by re-directing the flow to another available content server that can provide better networking performance at the moment.

If local network congestion is causing the problem, the application calculates the above $Tput^{path}_{\Delta}$ again to adjust the number of MPTCP paths with the support of the SDN controller. For example, it removes the LTE path if $Tput^{LTE}_{\Delta}$ is larger than the threshold. To disconnect the path, MPTCP can simply send a TCP RST segment over the connected LTE path. In this case, however, the in-flight packets on that path would be lost and resent over the Wi-Fi path. To prevent this, we suggest first setting the RWIN (TCP receive window) size to zero, which halts the TCP transmission over the path. After the in-flight packets are delivered, it sends a TCP RST segment to disconnect the path. For the case where a new Wi-Fi path (WiFi2) is attached during a download, the SDN application obtains the theoretical capacity of that path ($Tput^{Exp}_{WiFi2}$) from the SDN controller. If $Tput^{WiFi2}_{\Delta}$ is less than 0.9, it adds the path using the MP_JOIN option as described in Figure 2. We summarize our MPTCP path control mechanism in Algorithm 1.

Algorithm 1 MPTCP Path Control using SDN

1: Parameters:
   · FlowID and FlowDB: application identification and flow database
   · $Tput_{path_1} \ldots Tput_{path_n}$: current throughput on paths 1 to n, measured at the client device
   · $Tput^{Exp}_{path_1} \ldots Tput^{Exp}_{path_n}$: theoretically maximum achievable capacity over paths 1 to n, obtained from SDN controllers
   · $Tput^{Max}$: maximum throughput among the available paths 1 to n
   · $Tput^{path}_{\Delta} = (Tput^{Max} - Tput_{path}) / Tput^{Max}$
   · $Tput^{Threshold}_{\Delta}$: throughput gap threshold used to adjust the number of MPTCP paths

2: Adjust the number of MPTCP paths:
   Get $Tput_{path_i}$ and $Tput^{Exp}_{path_i}$ for each available path i = 1 to n
   For each available path i:
       If $Tput^{path_i}_{\Delta} \geq Tput^{Threshold}_{\Delta}$ then
           Set RWIN on path i to zero
           Send a TCP RST segment over path i
       Else
           Send the MPTCP MP_JOIN option to add path i
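A compact Python rendering of Algorithm 1 (ours; the helper methods on `controller` are placeholders for the client-side measurement, the SDN controller query, the RWIN-zero plus RST teardown, and the MP_JOIN attach described above):

```python
THRESHOLD = 0.9  # Tput_Δ^Threshold from Section III

def measured_or_expected_tput(path, controller):
    measured = controller.client_throughput(path)   # placeholder call
    return measured if measured else controller.expected_capacity(path)

def adjust_paths(paths, controller):
    tput = {p: measured_or_expected_tput(p, controller) for p in paths}
    tput_max = max(tput.values())
    for p in paths:
        gap = (tput_max - tput[p]) / tput_max        # Tput_Δ for path p
        if gap >= THRESHOLD:
            controller.halt_and_reset(p)  # zero RWIN, drain, then send RST
        else:
            controller.join_path(p)       # (re-)attach subflow via MP_JOIN
```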

Figure 11: MPTCP in-flight packet size during a download. (a) Without our dynamic path control. (b) With our dynamic path control between t = 0 s and t = 50 s.

V. EVALUATION

Figure 10: Testbed setup for evaluation

Figure 10 shows the testbed setup for our evaluation. To demonstrate the feasibility of our approach, we created the SDN platform using Mininet [3] and the POX SDN controller [4]. We attached two real 802.11g/n access points to the virtual network. Note that while the algorithmic framework works across heterogeneous access technologies, as described in the previous section, a mix of Wi-Fi technologies can be used to test the algorithmic logic. Our SDN applications running on the access points reported various networking information (e.g., RX and TX bytes, SNR and MCS values) to the SDN controller every 2 s. We used the same TCP settings as described in Section III.

To create unbalanced network conditions, we limited the maximum available bandwidth over 802.11n to 5 Mb/s with a packet loss rate of 1%. On the 802.11g path, the maximum available bandwidth alternated every 20 s between 50 kb/s with a 5% packet loss rate and 2.5 Mb/s with a 3% packet loss rate, on average. The SDN application running on the client's device removed the Wi-Fi path when that path had a throughput gap larger than 0.9. To add or remove a path during a download, we used an iproute MPTCP extension [14] that allows enabling or disabling the MPTCP option on a specific interface. In our proposed solution, however, an MPTCP subflow is disconnected by exchanging TCP RST segments between the hosts while keeping the interface alive.

A. Comparison of MPTCP reordering queue size during a download

We evaluate our proposed solution by analyzing the MPTCP reordering queue size while downloading a 100 MB file. Figure 11 shows our experimental results. The red line represents the in-flight packet size measured at the MPTCP layer. The green line indicates the sum of the in-flight packet sizes on both paths. Note that the gap between the two graphs denotes the reordering queue size at the MPTCP layer. In Figure 11a, the reordering queue size is relatively large when the network is throttled down to 50 kb/s (e.g., between t = 20 s and t = 40 s). On the other hand, the queue size noticeably decreases when the maximum available capacity over the 802.11g link is 2.5 Mb/s. As a result, it took 210 s to download the entire file. In Figure 11b, once the SDN application disables the poorly performing path over 802.11g, the in-flight packet size decreases noticeably. It adds the 802.11g path again when the maximum link capacity is set to 2.5 Mb/s. At elapsed time t = 40 s, the link capacity over 802.11g drops to 50 kb/s again, which increases the reordering queue size. But the queue size is significantly decreased after the poor link is disabled by our SDN application. Owing to the small number of out-of-order packets, it took only 190 s to completely download the same 100 MB file.

Figure 12: Selected bitrates in ABR video streaming. (a) Without our dynamic path control. (b) With our dynamic path control.

B. Comparison of adaptive bitrate (ABR) video streaming performance

In our testbed, we implemented an MPEG-DASH (Dynamic Adaptive Streaming over HTTP) ABR streaming platform [15] and watched 596 s of video using the DASH JavaScript player embedded in an HTML5 Web browser. The video server streamed the video at twenty different bitrates (45 kbps to 3.9 Mbps). Once the play button was clicked, the player adaptively selected the best available bitrate based on the download speed during the video playback. For video QoE analysis, we compared the played bitrate and the rebuffering period (i.e., the elapsed time from when the video stalls until it plays again). Figure 12 shows the bitrate changes while the video was being played; a zero value of the played bitrate indicates rebuffering. As a result, the case without our path control solution experienced 69 bitrate switching events and 55 s of rebuffering. On the other hand, when our solution was enabled, there were only 46 bitrate switching events and the total rebuffering time was limited to 25 s.
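For intuition about the player's behavior, here is a toy throughput-based bitrate selector (ours; dash.js's actual rule is more elaborate), choosing the highest ladder bitrate that fits the measured download rate:

```python
# A subset of the ladder; the paper's server offered twenty bitrates from
# 45 kbps to 3.9 Mbps (the intermediate values here are our assumptions).
LADDER_KBPS = [45, 200, 750, 1500, 2500, 3900]

def select_bitrate(measured_kbps: float, safety: float = 0.9) -> int:
    # Keep a safety margin so throughput dips don't stall the buffer.
    budget = measured_kbps * safety
    fitting = [b for b in LADDER_KBPS if b <= budget]
    return fitting[-1] if fitting else LADDER_KBPS[0]

print(select_bitrate(3000))  # -> 2500 kbps
```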

VI. RELATED WORK

Chen et al. [16] investigate MPTCP performance over cellular and Wi-Fi networks and show that MPTCP improves performance across flow sizes. Bonaventure et al. [17] discuss how MPTCP should generate and process TCP RST segments during a download. They suggest disconnecting subflows when the host is running out of memory or is receiving many already-acknowledged packets over a lossy path. Sonkoly et al. [18] design a large-scale multipath-capable measurement framework using the GEANT OpenFlow facility, and implement a publicly available version of it on top of PlanetLab to evaluate MPTCP performance with real traffic.

Tariq et al. [19] propose QAMO-SDN, QoS-aware MPTCP over software-defined optical networks. Their MPTCP algorithm is designed to provide service differentiation in data center optical networks. Unlike the prior research, which focuses more on the measurement of MPTCP, we propose a solution to improve MPTCP performance with the leverage of SDN. Sandri et al. [20] propose to use MPTCP in OpenFlow-enabled networks. They design a multi-flow controller that forwards MPTCP subflows through disjoint paths in order to improve the throughput on shared bottlenecks.

VII. CONCLUSIONS

We show how MPTCP performance can be improved in an SDN-enabled network. In our proposed platform, MPTCP can dynamically control subflows based on the available capacity of the connected paths with the support of SDN controllers. We evaluate our proposed solution using Mininet over Wi-Fi networks, and demonstrate the benefit of an algorithmic framework that dynamically switches back and forth between the best available single path (SPTCP) and multiple paths (MPTCP), with the support of an SDN controller, in order to maximize downloading rates.

VIII. ACKNOWLEDGMENTS

This work was based on a cooperation between Nokia Bell Labs and Columbia University.

REFERENCES

[1] R. Stewart, Ed. Stream Control Transmission Protocol. IETF Standards Track RFC 4960, September 2007.
[2] Christoph Paasch and Olivier Bonaventure. Multipath TCP. ACM Queue, 12(2):40:40–40:51, February 2014.
[3] Mininet: An Instant Virtual Network on your Laptop. Retrieved Sep. 28, 2015 from http://mininet.org/.
[4] POX: Prototyping SDN controllers using Python. Retrieved Sep. 28, 2015 from http://www.noxrepo.org/pox/about-pox/.
[5] C. Raiciu, M. Handley, and D. Wischik. Coupled Congestion Control for Multipath Transport Protocols. IETF Experimental Track RFC 6356, October 2011.
[6] Ramin Khalili, Nicolas Gast, Miroslav Popovic, Utkarsh Upadhyay, and Jean-Yves Le Boudec. MPTCP is Not Pareto-optimal: Performance Issues and a Possible Solution. In Proceedings of ACM CoNEXT, Nice, France, December 2012.
[7] Damon Wischik, Costin Raiciu, Adam Greenhalgh, and Mark Handley. Design, Implementation and Evaluation of Congestion Control for Multipath TCP. In Proceedings of USENIX NSDI, Boston, USA, March 2011.
[8] A. Walid, Q. Peng, J. Hwang, and S. Low. Balanced Linked Adaptation Congestion Control Algorithm for MPTCP. IETF Standards Track, January 2014.
[9] Hyunwoo Nam, D. Calin, and H. Schulzrinne. Intelligent content delivery over wireless via SDN. In Proceedings of IEEE WCNC, New Orleans, USA, March 2015.
[10] Ali Reza Sharafat, Saurav Das, Guru Parulkar, and Nick McKeown. MPLS-TE and MPLS VPNs with OpenFlow. In Proceedings of ACM SIGCOMM, Toronto, Ontario, Canada, August 2011.
[11] Li Erran Li, Z. Morley Mao, and Jennifer Rexford. Toward Software-Defined Cellular Networks. In Proceedings of IEEE Computer Society EWSDN, Darmstadt, Germany, October 2012.
[12] Benjamin Hesmans and Olivier Bonaventure. Tracing Multipath TCP Connections. In Proceedings of ACM SIGCOMM, Chicago, Illinois, USA, August 2014.
[13] 802.11n MCS index table. Retrieved Mar. 14, 2016 from http://mcsindex.com/.
[14] MultiPath TCP - Linux Kernel implementation. Retrieved Sep. 28, 2015 from http://multipath-tcp.org/pmwiki.php/Main/HomePage.
[15] Dynamic Adaptive Streaming over HTTP. Retrieved Sep. 28, 2015 from http://mpeg.chiariglione.org/standards/mpeg-dash.
[16] Yung-Chih Chen, Yeon-sup Lim, Richard J. Gibbens, Erich M. Nahum, Ramin Khalili, and Don Towsley. A Measurement-based Study of MultiPath TCP Performance over Wireless Networks. In Proceedings of ACM IMC, Barcelona, Spain, October 2013.
[17] O. Bonaventure, C. Paasch, and G. Detal. Processing of RST segments by Multipath TCP. Internet-Draft, Experimental Track, July 2014.
[18] B. Sonkoly, F. Nemeth, L. Csikor, L. Gulyas, and A. Gulyas. SDN based testbeds for evaluating and promoting multipath TCP. In Proceedings of IEEE ICC, Sydney, Australia, June 2014.
[19] S. Tariq and M. Bassiouni. QAMO-SDN: QoS aware Multipath TCP for software defined optical networks. In Proceedings of IEEE CCNC, Las Vegas, USA, January 2015.
[20] M. Sandri, A. Silva, L. A. Rocha, and F. L. Verdi. On the Benefits of Using Multipath TCP and OpenFlow in Shared Bottlenecks. In Proceedings of IEEE AINA, Gwangju, South Korea, March 2015.

Figure 13: Downloading 100 KB, 1 MB and 5 MB files under $Tput_{\Delta}$ = 0.6 to 0.99 (CDFs of download complete time for Scenarios 1 through 3, each compared against single-path TCP)