Bandwidth Estimation in Broadband Access Networks
Venkat Padmanabhan
Systems & Networking Group
Microsoft Research
Joint work with: Karthik Lakshminarayanan (Berkeley) & Jitu Padhye (MSR)
June 2004
Outline
• Bandwidth estimation
• Previous work
• Challenges in broadband access networks
• ProbeGap
• Experimental evaluation
  – 802.11a testbed
  – cable modem testbed
• Conclusion
Bandwidth Estimation
• Active area of networking research for 15+ years
• “Bandwidth” refers to data rate
  – “CS bandwidth” (bps), not “EE bandwidth” (Hz)
• Several notions of bandwidth
  – bottleneck bandwidth, or capacity
    • raw bandwidth of the narrow link
  – available bandwidth
    • spare capacity of the tight link
  – other notions
    • fair-share bandwidth
    • bulk transfer capacity
Bandwidth Estimation
• Of interest in several contexts
  – congestion control (e.g., TCP)
  – admission control (e.g., A/V streaming)
  – background transfers (e.g., TCP Nice)
  – server/peer selection (e.g., overlay multicast)
• Desirable attributes of an estimation scheme
  – depends only on end hosts
  – accurate
  – fast
  – lightweight & non-intrusive
Previous Work on Capacity Estimation
• Packet-pair method
  – Jacobson ’88, Keshav ’91
  – cross-traffic ⇒ underestimation/overestimation
• Refinement: filtering to eliminate noise
  – nettimer [Lai ’00], pathrate [Dovrolis ’01]
  – key observation: capacity mode may not be dominant
• Single-packet techniques
  – pathchar [Jacobson ’97], clink [Downey ’99]
  – dependence on ICMP messages limits applicability and accuracy
[Figure: a packet pair exits the narrow link with an output gap do set by the link’s capacity]
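The packet-pair idea reduces to dividing the probe size by the gap observed at the receiver and then filtering the samples to pick out the capacity mode. A rough sketch (the binning step only gestures at pathrate’s filtering, which is considerably more elaborate):

```python
from collections import Counter

def packet_pair_capacity(pkt_size_bits, recv_gaps_sec):
    # Each back-to-back pair yields one capacity sample: C = S / gap.
    samples = [pkt_size_bits / g for g in recv_gaps_sec]
    # Cross-traffic stretches (underestimation) or compresses
    # (overestimation) individual gaps, so take the modal sample
    # after coarse binning rather than the mean.
    bins = Counter(round(s / 1e6) for s in samples)  # 1 Mbps bins
    mode_mbps, _ = bins.most_common(1)[0]
    return mode_mbps * 1e6  # bps
```

For example, 1500-byte probes arriving mostly 2 ms apart yield a 6 Mbps mode even when a few gaps are stretched or compressed by cross-traffic.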
Previous Work on Available Bandwidth Estimation
• Packet Rate Method (PRM)
  – e.g., pathload [Jain ’02], PTR [Hu ’03]
  – probe at gradually increasing rates
  – an increasing trend in OWD indicates that the pipe is full
  – accurate but somewhat heavyweight
• Packet Gap Method (PGM)
  – e.g., IGI [Hu ’03], Spruce [Strauss ’03]
  – send several carefully spaced probe pairs
  – estimate cross-traffic based on the increase in spacing
  – assumes that the tight link is also the narrow link
  – relatively lightweight but susceptible to delays elsewhere
• RTT-based estimation [Gunawardena ’03]
  – derive an analytical relationship between load and RTT
  – perturb the network by introducing a known amount of additional load
  – quite heavyweight; susceptible to delays elsewhere and to departures from the assumed traffic model
Packet Rate Method (PRM)
[Figure: probes and cross-traffic multiplexed at the tight link]
Probing rate > available bandwidth ⇒ increasing OWD
Probing rate < available bandwidth ⇒ no trend in OWD
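A minimal version of PRM’s trend check, as a sketch (pathload actually combines a pairwise-comparison test with a pairwise-difference test; only the former is shown, and the 0.66 threshold is illustrative):

```python
def increasing_owd_trend(owds, threshold=0.66):
    # Pairwise comparison test: the fraction of consecutive
    # one-way-delay samples that strictly increase. A fraction
    # near 1 means queues are building, i.e. the probing rate
    # exceeds the available bandwidth.
    rises = sum(1 for a, b in zip(owds, owds[1:]) if b > a)
    return rises / (len(owds) - 1) > threshold
```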
Packet Gap Method (PGM)
[Figure: a probe pair enters the tight & narrow link with input gap di and exits with output gap do, with cross-traffic interleaved]
di < do ⇒ cross-traffic = C·(do − di)/di
di = do ⇒ no cross-traffic
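The PGM arithmetic above, as a sketch (assumes, as Spruce does, that di is set to the probe’s transmission time on the tight link and that the tight link is also the narrow link):

```python
def pgm_available_bw(capacity_bps, d_in, d_out):
    # Gap expansion is attributed to cross-traffic serviced
    # between the two probes:
    #   cross_traffic = C * (d_out - d_in) / d_in
    # d_out == d_in means the probes stayed back-to-back,
    # i.e. no cross-traffic intervened.
    cross = capacity_bps * max(d_out - d_in, 0.0) / d_in
    return max(capacity_bps - cross, 0.0)
```

On a 10 Mbps link, an output gap 1.5x the input gap attributes half the capacity to cross-traffic, leaving 5 Mbps available.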
Traditional Link Model
• Assumptions made in previous work:
  – link has a well-defined capacity
  – point-to-point link with FIFO scheduling
  – fluid cross-traffic (infinitesimal packet size)
• But these assumptions break down in broadband network settings
Broadband Access Networks
• Various technologies
  – cable modem, DSL, wireless (WiFi, WiMax)
• Why is broadband different?
  – “managed” links (pricing flexibility)
  – typically a shared medium (lower cost)
  – DSL is an exception
    • conforms to the traditional link model
• Specific issues
  – link may not have a well-defined capacity
  – contention and non-FIFO scheduling
  – bursty cross-traffic
Broadband Issues
• Link may not have a well-defined capacity
  – rate regulation (e.g., token bucket)
  – dynamic multirate (e.g., 802.11)
  – ⇒ measured capacity may not be the same as sustained capacity
• Non-FIFO scheduling due to frame-level contention
  – fully distributed contention-based MAC (e.g., 802.11)
  – centrally coordinated MAC (e.g., cable uplink)
  – ⇒ difficult for packet pairs to go through back-to-back
  – ⇒ probe packets may not see the full impact of cross-traffic
  – ⇒ relative sizes of probe packets & cross-traffic packets matter
• Bursty cross-traffic
  – interference between links operating at different rates
  – e.g., in 802.11a, a single cross-traffic packet on a 6 Mbps link would appear as a large burst on a 54 Mbps link
  – ⇒ makes it difficult to accurately sample the cross-traffic
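The burst effect is simple arithmetic: a frame’s cost to other stations is its airtime, not its size. A back-of-the-envelope calculation (per-frame MAC/PHY overhead ignored for simplicity):

```python
slow_rate = 6e6    # bps, link carrying the cross-traffic
fast_rate = 54e6   # bps, link being measured
pkt_bytes = 1472

# Airtime of one slow-link frame, then the number of bytes the
# fast link could have sent in that same time.
airtime = pkt_bytes * 8 / slow_rate
burst_equiv_bytes = airtime * fast_rate / 8

# A single 1472 B frame at 6 Mbps ties up the medium for as long
# as a 9x larger (~13 KB) burst would at 54 Mbps.
```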
Is AvlbBw Still Interesting?
• With a “fair” MAC it may be feasible to estimate the fair-share bandwidth
  – e.g., Keshav’s original packet-pair work
• However, available bandwidth remains interesting
  – TCP ramp-up
    • a safe option is to quickly ramp up to the available bandwidth and then probe gradually for the fair share
  – admission control for A/V streams
    • letting a new stream exercise its fair share might cause disruption of existing streams
ProbeGap
• New technique for estimating available bandwidth
  – designed to address some of these issues: non-FIFO scheduling, bursty cross-traffic
• Key idea: probe for idle “gaps” in the link
  – gather OWD samples
  – the knee in the CDF identifies the idle fraction
  – multiply by capacity to obtain an available bandwidth estimate
• Issues
  – very lightweight
    • 200 probes of 20 bytes each
  – clock drift is a concern
    • can be estimated and neutralized
  – susceptible to delays at other links
    • like PGM and the RTT-based method
[Figure: CDF of OWD; the knee at cumulative fraction x marks the link’s idle fraction]
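A sketch of the ProbeGap estimator. The knee of the OWD CDF is approximated here by a small delay threshold (an assumption for illustration; the actual tool locates the knee on the CDF directly and compensates for clock drift):

```python
def probegap_estimate(owds_sec, capacity_bps, knee_eps=50e-6):
    # Normalize OWDs to the minimum observed, then take the
    # fraction of probes that saw (almost) no queueing delay
    # as the link's idle fraction.
    base = min(owds_sec)
    idle = sum(1 for d in owds_sec if d - base <= knee_eps) / len(owds_sec)
    # Available bandwidth = idle fraction x capacity.
    return idle * capacity_bps
```

With a 6 Mbps link and 6 of 10 probes seeing no queueing delay, the estimate is 0.6 × 6 Mbps = 3.6 Mbps.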
Experimental Evaluation
• We focus on the broadband network in isolation
• Testbeds
  – 802.11a: controlled testbed
  – cable modem: commercial connections
• Tools evaluated
  – capacity: pathrate
  – available bandwidth: pathload, spruce, probegap
• Validation
  – capacity: measured using intrusive packet-train probes
  – available bandwidth: determined by observing the impact on cross-traffic
802.11a Evaluation
• Experimental setup
  – 6 nodes in ad hoc configuration
    • one pair used for bandwidth estimation
    • other two pairs used to generate cross-traffic
  – cross-traffic:
    • link rate = 6 Mbps
    • traffic rate = 0–4 Mbps, packet size = 300 or 1472 B
  – estimation link:
    • single-rate case: link rate = 6 Mbps
    • multi-rate case: link rate = 54 Mbps
Impact of Packet Size (802.11a)
[Figure: aggregate throughput (Kbps) vs. payload size (300, 600, 1000, 1472 B) for 1, 2, and 3 sender pairs, on 802.11a at 6 Mbps and at 54 Mbps]
Significant per-packet overhead, especially at 54 Mbps
Capacity Estimation (802.11a)
• Pathrate uses 1472-byte probe packets
• Single-rate case:
  – capacity mode identified consistently in the 5.1–5.5 Mbps range, even with cross-traffic
  – enough packet pairs go through back-to-back, despite the non-FIFO “fair” MAC
  – situation might be different with a larger number of contending stations
• Multi-rate case:
  – capacity mode identified in the 23–30 Mbps range in most cases
  – exception with heavy cross-traffic (4 Mbps, 300 B)
    • capacity mode identified was 10–11 Mbps
Packet-pair sampling with suitable filtering mostly works
AvlbBw Estimation (802.11a single-rate)
[Figure: available bandwidth estimates (Mbps) by Pathload, Spruce, ProbeGap-300, and ProbeGap-1472 vs. measured values (Measured-300, Measured-1472), for cross-traffic of 2 or 4 Mbps with 300 or 1472 B packets]
Overestimation due to tendency towards fair share (Pathload) and differential packet size (Spruce).
ProbeGap overestimates only slightly, under high load.
ProbeGap (802.11a single-rate)
[Figure: CDF of normalized one-way delay (0–1000 µs) for cross-traffic of 2 or 4 Mbps with 300 or 1472 B payloads]
Overestimation at high loads. Possible fix: send probes in bunches and pick the max OWD.
AvlbBw Estimation (802.11a multi-rate)
[Figure: available bandwidth estimates (Mbps) by Pathload, Spruce, ProbeGap-300, and ProbeGap-1472 vs. measured values, for the 54 Mbps multi-rate case]
The single-rate issues persist, but new anomalies appear with both Pathload and Spruce due to the burstiness of cross-traffic.
Pathload (802.11a multi-rate)
Pathload stream (9.79 Mbps) with cross-traffic (2 Mbps, 1472 B)
[Figure: OWD (ms) vs. sample # within the Pathload stream]
Pathload fails to detect a consistent increasing trend even though the probing rate (9.79 Mbps) exceeds the available bandwidth (6.5 Mbps).
Impact of Token Bucket (Cable Modem)
• Experimental setup
  – raw bandwidth of downlink = 27 Mbps
  – token bucket rate = 6 Mbps, depth = 9600 bytes
  – cross-traffic rate = 0–6 Mbps
• Capacity estimation
  – pathrate consistently estimates 26 Mbps regardless of cross-traffic
• Available bandwidth estimation
  – pathload overestimates slightly
    • the token bucket can accommodate a large train of 300-byte probes
  – spruce overestimates significantly
    • a pair of probes is less likely to be regulated than a train
    • unclear what the right capacity to assume is
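The pair-vs-train asymmetry can be illustrated with a toy token-bucket model using the slide’s parameters (bucket assumed to start full; a real modem’s behavior is of course more complex):

```python
def token_bucket_delay(pkt_bytes, n_pkts, rate_bps, depth_bytes):
    # Added delay for a back-to-back burst through a token bucket
    # that starts full: bytes within the bucket depth pass at line
    # rate; the excess drains at the token rate.
    excess = max(pkt_bytes * n_pkts - depth_bytes, 0)
    return excess * 8 / rate_bps  # seconds

# 6 Mbps token rate, 9600 B depth (the slide's setup):
pair_delay = token_bucket_delay(300, 2, 6e6, 9600)     # pair fits in the bucket
train_delay = token_bucket_delay(300, 100, 6e6, 9600)  # long train is regulated
```

A probe pair sees no regulation at all, so PGM-style tools sample the raw 27 Mbps link rather than the 6 Mbps token rate.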
Pathload (cable modem)
Slight overestimation because of token bucket
[Figure: low and high available bandwidth estimates (Mbps) vs. cross-traffic rate (0, 1, 3, 6 Mbps)]
Spruce (cable modem)
[Figure: available bandwidth estimates (Mbps) vs. cross-traffic rate (0, 1, 3, 6 Mbps), with assumed capacity of 26 Mbps and of 6 Mbps]
Significant overestimation because of the token bucket. Unclear what the right capacity to assume is.
Conclusion
• Broadband access networks present new challenges to bandwidth estimation
  – performance experienced by probes may not be indicative of true performance
  – tendency to estimate fair share rather than available bandwidth
• ProbeGap looks promising
• More info: www.research.microsoft.com/~padmanab/projects/PeerMetric/
  – MSR tech report (MSR-TR-2004-44)
  – IMC 2003 paper (macroscopic properties of broadband networks)