Streaming Video and TCP-Friendly Congestion Control
Sugih Jamin, Department of EECS, University of Michigan
Joint work with: Zhiheng Wang (UofM), Sujata Banerjee (HP Labs)
Sugih Jamin ([email protected])
Video Application on the Internet
Adaptive playback streaming:
Sender sends data i at time ti
Receiver receives at time ti + ∆, where ∆ = propagation delay + queueing delay
To smooth out variable queueing delay, receiver buffers some amount of data (i to i + k, k > 0) before playing back data i
By the time receiver is ready to play back data i + k, hopefully it would have arrived
Otherwise, increase buffering (hence “adaptive”)
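The adaptive buffering loop above can be sketched as follows; this is a minimal illustration with made-up arrival times and a one-frame-per-second playback clock, not the player described in the talk:

```python
# Sketch of adaptive playback buffering: start once k frames are buffered;
# on a missed deadline, stall and grow the buffer target ("adaptive").
def simulate_playback(arrival_times, k=3, frame_interval=1.0):
    rebufferings = 0
    buffer_target = k
    # Playback starts once the first k frames have arrived.
    deadline = arrival_times[k - 1]
    for t_arrival in arrival_times:
        if t_arrival > deadline:      # frame missed its playback deadline
            rebufferings += 1
            buffer_target += 1        # buffer more data before resuming
            deadline = t_arrival      # pause playback until the frame arrives
        deadline += frame_interval    # schedule the next frame
    return rebufferings, buffer_target
```

With arrivals `[0, 1, 2, 3, 10, 11]` and k = 3, frame 4 misses its deadline, so the player rebuffers once and raises its buffer target to 4 frames.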
Video Streaming
Two ways to send data:
• bulk transfer: transfer before playback
• streaming: transfer while playback
Why Streaming?
• shorter playback start time
• smaller receiver buffer requirement
• smaller interaction delay requirement
Expectations vs. Reality
Streaming media service requirements:
• resource intensive
• smooth (low variance) throughput
Internet service characteristics:
• shared resource
• variable bandwidth
• unpredictable network latency
• lossy channel
Streaming Video over the Internet
Effect of transient changes in available bandwidth:
• empty buffer on playback
• playback pause on rebuffering
• larger buffer size increases start time (consider live interactive sessions)
Applicable to other streaming data: scientific visualization, massively multiplayer gaming dynamic objects, web page download
Case Study: Windows Media Player
Application characteristics:
WM Server sends traffic at a constant bit rate
WMP client pauses video playback until sufficient packets have been buffered (rebuffering)
WMP client asks for retransmission to recover lost packets. If a lost packet cannot be recovered, the whole frame is considered lost
WM Server reduces sending rate when lower available bandwidth is detected
Streaming Video Quality
[Figure: timelines of packets P1, P2, P3 with transmission delay Td and RTT/2 marked, for three cases:
case 1: no queueing delay, sufficient bandwidth → steady video quality
case 2: large queueing delay, insufficient bandwidth → packet loss, degraded video quality
case 3: some queueing delay, bandwidth changes → varying video quality]
Measuring Streaming Video Quality
Metrics:
• server transmission rate (service rate)
• client rebuffering probability
• client rebuffering duration
• client frame loss
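The client-side metrics above could be computed from a session log along these lines; the log format (a list of rebuffering-event durations per session) is an assumption for illustration, not from the talk:

```python
# Sketch: rebuffering probability and mean rebuffering duration
# from per-session logs of rebuffering-event durations (seconds).
def rebuffer_stats(sessions):
    """sessions: list of sessions, each a list of rebuffering durations.
    Returns (fraction of sessions that rebuffered, mean event duration)."""
    with_rebuf = [s for s in sessions if s]
    prob = len(with_rebuf) / len(sessions)
    durations = [d for s in sessions for d in s]
    mean_dur = sum(durations) / len(durations) if durations else 0.0
    return prob, mean_dur
```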
Improving User Perceived Quality
User less annoyed with lower but consistent quality than continual rebuffering
Changes in available bandwidth cause changes in rebuffering probability and duration
Streaming video needs low loss rate and smooth available bandwidth to reduce user annoyance
Need: smooth congestion control mechanism
TCP-Friendliness
TCP is the standard transport protocol
TCP does congestion control by linear probing for available bandwidth and multiplicative decrease on congestion detection (packet loss)
“A congestion control protocol is TCP-friendly if, in steady state, its bandwidth utilization is no more than required by TCP under similar circumstances” [Floyd et al., 2000]
TCP-friendliness in a proposed protocol ensures compatibility with TCP
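TCP's probing behavior can be sketched as a toy additive-increase/multiplicative-decrease (AIMD) loop. The +1 segment per RTT and the halving on loss follow standard TCP; the loss events here are made up for illustration:

```python
# Toy AIMD loop: cwnd grows by one segment per RTT (linear probing),
# halves on each congestion signal (multiplicative decrease).
def aimd(num_rtts, loss_events, cwnd=1.0):
    """loss_events: set of RTT indices at which a loss is detected.
    Returns the cwnd trace, one value per RTT."""
    trace = []
    for rtt in range(num_rtts):
        if rtt in loss_events:
            cwnd = max(1.0, cwnd / 2)  # multiplicative decrease
        else:
            cwnd += 1.0                # additive increase
        trace.append(cwnd)
    return trace
```

The resulting sawtooth is exactly the throughput variability that makes raw TCP awkward for streaming, motivating the smoother TFRC approach below.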
TCP-Friendly Rate Control (TFRC)
Goals:
• to provide streaming media with steady throughput
• to be TCP-friendly
Instead of reacting to individual losses, tries to satisfy the TCP throughput function over time:
T = s / ( R·√(2p/3) + t_RTO · (3·√(3p/8)) · p · (1 + 32p²) )

T: TCP throughput; s: packet size; p: loss rate
R: path RTT; t_RTO: retransmit timeout
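The throughput equation transcribes directly into code. The default t_RTO = 4R is an assumption (a common choice when no measured timeout is available), not something stated on this slide:

```python
from math import sqrt

# Direct transcription of the TCP throughput equation:
# T = s / ( R*sqrt(2p/3) + t_RTO * (3*sqrt(3p/8)) * p * (1 + 32p^2) )
def tcp_throughput(s, R, p, t_RTO=None):
    """s: packet size (bytes), R: RTT (sec), p: loss event rate (0 < p < 1).
    Returns expected TCP throughput in bytes/sec."""
    if t_RTO is None:
        t_RTO = 4 * R  # assumed default when no measured timeout exists
    denom = (R * sqrt(2 * p / 3)
             + t_RTO * (3 * sqrt(3 * p / 8)) * p * (1 + 32 * p * p))
    return s / denom
```

As expected, the allowed rate falls as the loss event rate or the RTT grows, which is what lets TFRC track TCP's long-term share without TCP's per-loss sawtooth.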
Terminologies
Bandwidth Capacity
Fair Share
Calculated Allowed Rate
Self-clocked Rate
Sending Rate
Data Rate
Throughput
[Diagram: sender and receiver protocol stacks (Application / OS / Networks), showing where each rate is measured]
Terminologies (contd)
Data rate: the rate at which an application generates data
Sending Rate: the rate at which a connection sends data
Self-clocked rate: upper bound on the sending rate calculated by TFRC
Fair share: TCP’s throughput during bulk data transfer
Fair share load: ratio between the sending rate and the fair share
Throughput: the incoming traffic rate measured at the receiver
Does TFRC Provide Smoother Throughput?
Experiment setup:
[Topology: sources S(0) … S(M-1) → router R1 → 1.5Mbps/50ms bottleneck link → router R2 → destinations D(0) … D(M-1)]

• Data source: CBR traffic
• Background traffic:
  o long/short-lived TCP flows with infinite amount of data
  o flash crowd: large number of short TCP bursts
  o long-range dependent traffic: a number of Pareto distributed ON/OFF flows
Not that Smooth
[Graph: rate (KBps) vs. time (sec), 0–100 sec; curves: TCP’s Self-clocked Rate, TFRC’s Self-clocked Rate, TFRC’s Sending Rate]
• Data rate: 50KBps
• Background traffic: 1 long-lived TCP
Worse with Bursty Background Traffic
[Graph: rate (KBps) vs. time (sec), 0–600 sec; curves: Congestion Rate, Sending Rate]
• Data rate: 20KBps
• Background traffic: 1 long-lived TCP + 5 ON/OFF flows
Internet Experiments
A sample path between MI and CA
[Graph: rate (KBps) vs. round ID, rounds 300–500; curves: Self-clocked Rate, Sending Rate]
• Data rate: 40 KBps
• RTT: 67 msec
• Loss event rate: 0.24%
MARC’s Design Motivation
TFRC congestion control is memoryless, whereas:
• streaming media is “well-behaved”: when there is no congestion, streaming applications cannot always utilize their fair share fully
• but during congestion, TFRC applies the same rate reduction principle to streaming media traffic as to bulk data transfer traffic
Media Aware Rate Control (MARC) proposition: “well-behaved” streaming applications should be allowed to reduce their sending rate more slowly during congestion
Media-Aware Rate Control (MARC)
• Define token value C to keep track of a connection’s fair share utilization

C = β·C′ + (T − W_send)·I

C: token
β: decay factor
C′: previous token value
T: previous calculated self-clocked rate
W_send: previous sending rate
I: feedback interval
• We use β = 0.9
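The token update is a one-liner in code; this is a direct transcription of the equation above, with the surrounding feedback loop omitted:

```python
# MARC token update: decayed previous token plus the fair share left
# unused over the last feedback interval, C = beta*C' + (T - W_send)*I.
def update_token(C_prev, T_prev, W_send_prev, I, beta=0.9):
    """C_prev: previous token; T_prev: previous calculated self-clocked rate;
    W_send_prev: previous sending rate; I: feedback interval (sec)."""
    return beta * C_prev + (T_prev - W_send_prev) * I
```

A connection sending below its allowed rate (W_send < T) accumulates tokens; the decay factor β = 0.9 keeps old, unspent credit from growing without bound.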
MARC is Effective
[Graphs: rate (KBps) vs. time (sec), 0–100 sec. Left (TFRC): TCP’s Self-clocked Rate, TFRC’s Self-clocked Rate, TFRC’s Sending Rate. Right (MARC): Self-clocked Rate, Sending Rate]
• Data rate: 50KBps
• Background traffic: 1 long-lived TCP
MARC is TCP-Friendly
[Graph: sending rate (KBps) vs. data rate (KBps); curves: MARC-RED, TFRC-RED, MARC-DropTail, TFRC-DropTail, Fair Share]
• Data rate: 10 CBR sources
• Background traffic: 10 long-lived TCP flows
Reaction Time to Persistent Congestion
• Congestion at 50th sec, RTT: 80 msec
• Fair share before congestion: 140 KBps
[Graphs: self-clocked rate (KBps) vs. time (sec), 46–54 sec, MARC vs. TFRC:
(a) Data rate = 20 KBps, x = 52.44
(b) Data rate = 40 KBps, x = 51.81
(c) Data rate = 60 KBps, x = 51.33
(d) Data rate = 80 KBps, x = 50.90]
Without token, MARC behaves exactly like TFRC.
Token Dynamics
[Graph: sending rate (KBps, left axis) and token (KByte, right axis) vs. time (sec), 10–90 sec; curves: Sending Rate, Self-clocked Rate, Flash-crowd, Token]
• 1 long-lived TCP flow and 1 MARC flow
• Data rate: 100 KBps
• Flash-crowd (800 short-lived TCP) starts at the 50th second, lasts for 5 seconds
MARC Improves User-Perceived Quality
[Graph: probability density function vs. number of rebuffering events (0–5); curves: TCP, TFRC, MARC]
• Data rate: 44KBps
• Background traffic: 1 long-lived TCP + 1 ON/OFF flow
Future Work
• Layered video adaptation with MARC
• Analyzing MARC
• Streaming media over end-host multicast
o multiple receivers: congestion control on end-host multicast
o multiple sources: Integrated Flow Control