Congestion Control in Distributed Media Streaming

Lin Ma, Wei Tsang Ooi
School of Computing, National University of Singapore
IEEE INFOCOM 2007

Outline
Introduction
Distributed Media Streaming (DMS)
Task-level TCP Friendliness
Framework of DMSCC
Throughput Control
Congestion Control
Simulation and Conclusions
Introduction

Distributed Media Streaming (DMS), coined by Nguyen and Zakhor (IEEE Transactions on Multimedia, 2004): a client receives a media stream from multiple servers simultaneously. The multiple media flows may or may not pass through the same bottleneck. Aggregate congestion control over these flows is treated as task-level congestion control.
DMS

Distributed Media Streaming: a client receives a media stream from multiple servers simultaneously. This improves robustness and allows aggregation of bandwidth among peers.

[Figure: Sender 1 (0.8 Mbps) and Sender 2 (0.4 Mbps) stream over the Internet to a receiver with a 1.2 Mbps downlink]
Scenario

Receiver-driven operation: the receiver estimates each sender's round-trip time (Sender 1's RTT, Sender 2's RTT) and each sender's loss rate, decides each sender's sending rate (RAA), and sends a control packet to each sender. Each sender then determines the next packet to be sent (PPA) and sends it.
DMS protocol

Rate allocation algorithm (RAA): runs at the receiver. It minimizes the probability of packet loss by splitting the sending rates appropriately across the senders.
Packet partition algorithm (PPA): runs at the individual senders. It ensures every packet is sent by exactly one sender and minimizes the probability of a packet arriving late.
Bandwidth Estimation

TCP-Friendly Rate Control (TFRC):

B = S / ( R·sqrt(2p/3) + T_rto·3·sqrt(3p/8)·p·(1 + 32p²) )

B : current available TCP-friendly bandwidth between each sender and the receiver
T_rto : TCP retransmission timeout
R : estimated round-trip time in seconds
p : estimated loss rate
S : TCP segment size in bytes
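A minimal Python sketch of this TFRC estimate (the function name and example values are mine, not from the paper):

```python
from math import sqrt

def tfrc_bandwidth(S, R, p, T_rto):
    """TCP-friendly bandwidth estimate (bytes/sec) from the TFRC
    response function. S: segment size (bytes), R: RTT (s),
    p: loss rate, T_rto: TCP retransmission timeout (s)."""
    denom = R * sqrt(2 * p / 3) + T_rto * 3 * sqrt(3 * p / 8) * p * (1 + 32 * p ** 2)
    return S / denom

# Illustrative values: 1500-byte segments, 100 ms RTT, 1% loss, T_rto = 4R
b = tfrc_bandwidth(1500, 0.1, 0.01, 0.4)
```

As expected from the formula, the estimate falls as the loss rate or RTT grows.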
Rate Allocation Algorithm

The receiver computes the optimal sending rate for each sender based on loss rate and bandwidth: minimize the expected number of lost packets

F(t) = Σ_{i=1}^{N} L(i, t) · S(i, t)

subject to

0 ≤ S(i, t) ≤ B(i, t)
Σ_{i=1}^{N} S(i, t) = S_req(t)

During the interval (t, t + Δ):
F(t) : total number of lost packets
L(i, t) : estimated loss rate of sender i
S(i, t) : estimated sending rate of sender i
S_req(t) : required bit rate for the encoded video
B(i, t) : TCP-friendly estimated bandwidth of sender i
Rate Allocation Algorithm

Sort the senders by their estimated loss rates, from lowest to highest. In that order, assign each sender its full available bandwidth, until the allocated sum reaches the bit rate of the encoded video.
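The greedy allocation above can be sketched as follows (a simplification under my own naming; the paper's algorithm also updates estimates over time):

```python
def allocate_rates(senders, S_req):
    """Greedy rate allocation. senders: list of (loss_rate, bandwidth)
    tuples; S_req: required bit rate. Returns per-sender rates.
    Senders with the lowest loss rates are filled to their available
    bandwidth first, until S_req is covered."""
    order = sorted(range(len(senders)), key=lambda i: senders[i][0])
    rates = [0.0] * len(senders)
    remaining = S_req
    for i in order:
        if remaining <= 0:
            break
        rates[i] = min(senders[i][1], remaining)
        remaining -= rates[i]
    return rates

# The lossiest sender (index 0) gets nothing; the other two cover S_req = 1.0
r = allocate_rates([(0.05, 0.8), (0.01, 0.6), (0.02, 0.5)], 1.0)
```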
Rate Allocation Algorithm

Optimality: suppose there exists a rate allocation with M' senders in which F(t) is minimal, say F'(t), but S(i', t) ≠ B(i', t) for some 1 ≤ i' < M', which implies S(i', t) < B(i', t).
Proof sketch: reallocate Ω = min[ S(M', t), B(i', t) − S(i', t) ] of the bit rate from sender M' (higher loss rate) to sender i' (lower loss rate). The resulting allocation loses fewer packets, F*(t) < F'(t), contradicting the minimality of F'(t). Hence each of the first M'−1 senders must be assigned its full bandwidth.
Packet Partition Algorithm

Each sender receives a control packet from the receiver through a reliable protocol, in the following format:

| D1 | D2 | S1 | S2 | Sync |

Di : estimated delay from sender i to the receiver
Si : sending rate for sender i
Sync : synchronization sequence number
Packet Partition Algorithm

Each sender computes A_k'(j, k) for each packet k, for itself and all other senders. Only the sender that maximizes A_k'(j, k) is assigned to send the kth packet.

A_k'(j, k) = T_k'(k) − [ n_{j,k}·σ(j) + 2D(j) ]

D(j) : estimated delay of sender j
n_{j,k} : number of packets sender j has sent since packet k' up to packet k
σ(j) = P/S(j) : sending interval between packets for sender j (P : packet size)
T_k'(k) : time difference between the arrival time and the playback time of the kth packet; it is common to all senders, so it does not affect which sender maximizes A_k'(j, k)
Packet Partition Algorithm

Among all senders j = 1, …, N, the one that maximizes A_k'(j, k) is assigned to send the kth packet. Each sender keeps track of the values of A_k'(j, k) for all N senders and updates them every time a packet is sent.
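The sender-selection step can be sketched as below, assuming the common term T_k'(k) is passed in as a single deadline margin (the names and example numbers are mine):

```python
def select_sender(deadline_margin, senders, n_sent, P=1000):
    """Pick the sender of the next packet: maximize the slack
    A(j) = deadline_margin - (n_sent[j] * sigma_j + 2 * D_j),
    mirroring A_k'(j, k) = T_k'(k) - [n_{j,k} * sigma(j) + 2 * D(j)].
    senders: list of (sending_rate, one_way_delay) per sender;
    n_sent[j]: packets sender j has sent since the reference packet;
    P: packet size. All concrete values are illustrative."""
    def slack(j):
        rate, delay = senders[j]
        sigma = P / rate  # inter-packet send interval for sender j
        return deadline_margin - (n_sent[j] * sigma + 2 * delay)
    return max(range(len(senders)), key=slack)

# Sender 1 is slower but idle and closer, so it wins this packet
winner = select_sender(1.0, [(1000.0, 0.05), (500.0, 0.01)], [2, 0])
```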
Task-level TCP Friendliness

Congestion control in single-flow streaming. Flow f is TCP-friendly if

B(RTT) = B_TCP(RTT_TCP)

i.e. B = B_TCP in a comparable network environment (same loss rate, RTT, and packet size).

[Figure: flow f and a TCP flow sharing the link A-B in a four-node topology 0, 1, 2, 3]
Task-level TCP Friendliness

Congestion control on an aggregate: the combined throughput of the flow aggregate should match that of a single conformant TCP flow,

Σ_{f_i ∈ O} b_i(rtt_i) = B_TCP(RTT_TCP)

O : set of flows in the flow aggregate
b_i : throughput of flow f_i
rtt_i : round-trip time of flow f_i

[Figure: flows f1, f2, and f3 and a TCP flow sharing the link A-B]
Task-level TCP Friendliness

Congestion control in DMS: the set of DMS flows to be controlled depends on where congestion appears. For each congested link l_j,

Σ_{f_i ∈ l_j} b_i(rtt_i) = B_TCP_j(RTT_TCP_j)

[Figure: reverse-tree topology with senders 0, 1, 2, 3, intermediate nodes A, B, C, receiver R, and competing TCP flows]
Framework of DMSCC

Receiver side: congestion location, then throughput control.
Sender side: the congestion control algorithm drives the DMS flows by updating the increasing factor of AIMD.
Congestion Location

An ideal solution to locating congestion: when congestion causes a packet loss on a DMS flow, it should be able to tell which link is congested; when congestion subsides, it should sense that too, so that the regulation previously imposed on the DMS flows can be lifted.
Such an ideal solution is difficult: there may be simultaneous congestion on different links in the tree, and the same flow might experience congestion on different links.
Congestion Location

Single-link congestion (Rubenstein et al. [15]): compare the cross-correlation of two flows with the auto-correlation of one of them. CorrTest(i, j) denotes Rubenstein's correlation test applied to flows i and j:
returns 1 if the two flows share a bottleneck;
returns 0 if no shared bottleneck is detected.
Congestion Location

Idea: packets passing through the same point of congestion (POC) close in time experience correlated losses and delays. Using either loss or delay statistics, compute two measures of correlation:
M_c : cross-measure (correlation between flows)
M_a : auto-measure (correlation within a flow)
If M_c < M_a, infer that the POCs are separate; if M_c > M_a, infer that the POC is shared.

M_c = C( Delay(i), Delay(i−1) ), where packet i−1 is the closest-in-time packet of the other flow
M_a = C( Delay(i), Delay(prev(i)) ), where prev(i) is the previous packet of the same flow

[Figure: timeline of interleaved Flow 1 and Flow 2 packets i−4 … i+1]
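A toy sketch of the M_c vs. M_a comparison on delay samples (the pairing of samples is simplified relative to the actual packet interleaving, and all names are mine):

```python
def pearson(x, y):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def shared_poc(delays1, delays2, prev_delays1):
    """Infer a shared point of congestion when the cross-flow delay
    correlation (M_c) exceeds the within-flow correlation (M_a).
    delays2[i] stands for the closest-in-time packet of the other flow;
    prev_delays1[i] for the previous packet of the same flow."""
    M_c = pearson(delays1, delays2)       # cross-measure, between flows
    M_a = pearson(delays1, prev_delays1)  # auto-measure, within flow 1
    return M_c > M_a
```

When both flows see the same rising queueing delay, M_c dominates and a shared POC is inferred; an uncorrelated second flow yields the opposite verdict.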
Congestion Location

Algorithm 1: find the shared bottleneck.

[Figure: senders 0, 1, 2 connect through node A to B and on to receiver R; running the correlation tests yields Output = { A-B }]
Congestion Location

Algorithm 2: on every packet loss, update the set of candidate congested links C and the history H of inferred congestion locations.

[Figure: topology 0, 1, 2 → A → B → R, with C = { A-B, 0-A } and H = { A-B, 0-A, 0-A, A-B, A-B, 0-A, A-B, A-B, … }]
Throughput Control

W : size of the congestion window
α : increasing factor
L : period (in RTTs) between every two packet losses
p : packet loss rate

In one AIMD cycle the window grows from W/2 back to W at α per RTT, so L = W/(2α), and the total number of packets received during that period is

(3/8)·W²/α = 1/p   ⇒   W = sqrt( 8α / (3p) )

Mathis equation [14]: if the loss rate is p, then for every 1/p packets, one packet is lost.
Throughput Control

For the throughput of a DMS flow to be β times that of a conformant TCP flow, we need to set its increasing factor α to β²:

W / W_TCP = sqrt( 8α / (3p) ) / sqrt( 8 / (3p) ) = sqrt(α) = β   ⇒   α = β²
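A quick numerical check of this scaling, using the window expression from the previous slide (function name and example loss rate are mine):

```python
from math import sqrt

def aimd_window(p, alpha=1.0):
    """Peak congestion window from the Mathis-style cycle argument:
    (3/8) * W^2 / alpha packets per loss cycle = 1/p
    =>  W = sqrt(8 * alpha / (3 * p))."""
    return sqrt(8 * alpha / (3 * p))

p = 0.01
beta = 2.0
# Setting alpha = beta^2 scales the window, hence the throughput, by beta
ratio = aimd_window(p, alpha=beta ** 2) / aimd_window(p)
```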
Throughput Control

Update α according to the dominant bottleneck (Algorithm 3: Update Increasing Factors).

[Figure: topology 0, 1, 2 → A → B → R, with C = { A-B, 0-A }; C′ = { A-B }, since A-B dominates the other link]
Throughput Control

Bottleneck recovery: when congestion subsides and there is no more packet loss, we need to reset αi to 1 so that each DMS flow can fully utilize the available bandwidth. A timer is refreshed whenever a packet loss is detected; if no packet loss is detected within t seconds, the increasing factors of all DMS flows are reset to 1.
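The recovery timer can be sketched as below (class name, timeout value, and the injectable clock are my additions for testability):

```python
import time

class RecoveryTimer:
    """Reset all increasing factors to 1 when no packet loss has been
    observed for `timeout` seconds. `flows` maps flow id -> alpha."""
    def __init__(self, flows, timeout=5.0, clock=time.monotonic):
        self.flows = flows
        self.timeout = timeout
        self.clock = clock
        self.last_loss = clock()

    def on_packet_loss(self):
        self.last_loss = self.clock()  # refresh the timer on every loss

    def poll(self):
        # Called periodically: congestion has subsided, lift the regulation
        if self.clock() - self.last_loss >= self.timeout:
            for f in self.flows:
                self.flows[f] = 1.0
```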
Simulation

Cross-traffic on each link, with start and end times:

Link | Start | End
0-A  |   0   | 100
A-B  |  150  | 200
B-C  |   50  | 150
C-R  |  250  | 350

[Figure: reverse-tree topology with senders 0, 1, 2, 3, nodes A, B, C, and receiver R]
Conclusions

Congestion control in a DMS system achieves task-level TCP-friendliness. Congestion is located in a reverse-tree topology, relying on Rubenstein's method. The throughput of each DMS flow is controlled with an AIMD loop so that the combined throughput on the bottleneck is TCP-friendly, based on the Mathis equation.