Adaptive Inverse Multiplexing for Wide-Area Wireless Networks
Alex C. Snoeren, MIT Laboratory for Computer Science
IEEE Globecom ’99, Rio de Janeiro, December 5, 1999
context
• Goal: Provide speech and graphical interfaces to wireless devices over wide-area networks
• Challenge: Construct a well-behaved high bandwidth channel out of low bandwidth shared access technologies
inverse multiplexing
• Idea: simulate a “large” logical channel out of some number (called a bundle) of “smaller” ones
[Diagram: a high bandwidth link feeds an inverse multiplexor, which splits traffic across several low bandwidth links; a second inverse multiplexor recombines them into a high bandwidth link]
goals
• High link utilization and low fragmentation
  – Low bandwidth wireless links
• Tight reordering constraints
  – TCP doesn’t handle reordered packets well
• Adaptive scheduling
  – Throughput of shared wireless links is unstable over many time scales
contributions
• Standard Inverse Multiplexing
  – Commonly used in ISDN, fractional T1/T3, ATM
  – Private links with no contention
  – Stable & similar channel characteristics
• Link Quality Balancing
  – Adapts to varying-capacity shared access channels
  – Efficient bandwidth utilization
  – TCP-friendly reordering bound
outline
• Scheduling techniques
  – Link Quality Balancing with stable links
• Adaptation
  – Measuring and reacting to channel variations
• Implementation results
  – Constant Bit Rate (CBR) traffic
  – TCP flows
known scheduling methods
• Round Robin
  – Does not assure optimum link usage
  – Provides no bounds on delay or ordering
• Deficit Round Robin, Fair Queuing
  – Provide efficient link usage, but...
  – Require information about queue lengths
    – In CDPD, queues are often buried inside the networks, hence the information is unavailable
  – Don’t provide ordering guarantees
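Round robin’s ordering problem can be illustrated with a tiny simulation (an illustrative sketch, not code from the talk; the link speeds and packet count are assumptions):

```python
# Hypothetical sketch: round-robin assignment of equal-size packets to
# two links of unequal speed, showing that arrival order can differ
# from send order.

def round_robin_arrivals(num_packets, link_times):
    """Assign packets 1..num_packets to links in round-robin order and
    return the packet numbers sorted by arrival time. link_times[i] is
    the per-packet transmission time of link i."""
    finish = [0.0] * len(link_times)   # when each link next becomes free
    arrivals = []
    for pkt in range(1, num_packets + 1):
        link = (pkt - 1) % len(link_times)
        finish[link] += link_times[link]
        arrivals.append((finish[link], pkt))
    return [pkt for _, pkt in sorted(arrivals)]

# Link 0 is twice as fast as link 1: packets sent on the slow link
# arrive late, so the receiver sees them out of order.
print(round_robin_arrivals(8, [1.0, 2.0]))
```

With a 2:1 speed ratio the receiver sees, e.g., packet 5 before packet 4 and packet 7 before packet 6, which is exactly the reordering TCP handles poorly.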
deficit round robin
[Diagram: packets 1–8 enter the inverse multiplexor in order but arrive at the far end reordered as 2 1 6 5 3 7 4 8]
fragmentation: an extreme
[Diagram: packets 1–8 enter the inverse multiplexor and arrive in their original order 1–8]
weighting
[Diagram: with links weighted x2 and x1, packets 1–8 arrive nearly in their original order]
link quality balancing
• Idea: Fragment traffic in proportion to individual link throughputs
  – For each link, compute a relative MTU
    – For the fastest link, use the optimum MTU
    – On all other links, use a proportionately smaller one
  – Fragment packets to fill MTU-sized buckets
    – Last-fragment arrival times are the same on each link
• Guarantees no inter-round reordering; the only possible reordering occurs within a single round
  – Requires no information on queue lengths
  – Work conserving; provides maximal link usage
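A minimal sketch of the relative-MTU rule described above, assuming illustrative link throughputs and an optimum MTU of 576 bytes (the function names and inputs are hypothetical; the talk specifies only the proportionality):

```python
# Hypothetical sketch of Link Quality Balancing's fragmentation rule:
# the fastest link gets the optimum MTU, every other link gets a
# proportionally smaller one, and each packet is split round-by-round
# into at most one relative-MTU fragment per link.

def relative_mtus(throughputs, optimum_mtu):
    """Scale each link's MTU by its throughput relative to the fastest
    link, so MTU-sized fragments take equal transmission time on all
    links and last-fragment arrivals coincide."""
    fastest = max(throughputs)
    return [max(1, round(optimum_mtu * t / fastest)) for t in throughputs]

def fragment(packet_len, mtus):
    """Split one packet into (link, size) fragments, filling each link's
    relative-MTU bucket in turn until the packet is consumed."""
    frags, remaining = [], packet_len
    while remaining > 0:
        for link, mtu in enumerate(mtus):
            if remaining == 0:
                break
            size = min(mtu, remaining)
            frags.append((link, size))
            remaining -= size
    return frags

mtus = relative_mtus([19200, 9600], optimum_mtu=576)  # 2:1 link speeds
print(mtus)                    # [576, 288]
print(fragment(1000, mtus))    # [(0, 576), (1, 288), (0, 136)]
```

Because every fragment occupies the same transmission time on its link, a round’s fragments finish together, which is what bounds reordering to within a round.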
our approach: balancing
[Diagram: with fragmentation balanced across links weighted x2 and x1, packets 1–8 arrive in order]
measurement
• Problem: Individual link throughputs are highly variable over many time scales
• How do we measure current throughput?
  – Absolute values are difficult and expensive to obtain
    – Without synthetic traffic, we are limited by the offered load; we cannot tell whether it actually drives the links to full capacity
  – Synthetic probes are problematic
    – Without priority queuing, introducing synthetic traffic may cause loss of actual traffic
link quality metric
• Solution: Don’t! Relative metrics suffice
  – Simply maintain proportional estimates
  – End-to-end bandwidth probing will do the rest
• But which metric?
  – Packet arrival times
    – Theoretically ideal, but far too noisy to be used in practice
  – Short-term throughput
    – Similarly difficult to measure
  – Loss rates
    – With bounded queues, loss rates are a rough indicator of appropriate throughput, and easy to measure
feedback loop
• Invariant: Always schedule traffic so that the quality metric will be identical across links
  – As a corollary, any perceived deviation at the receiver implies an improper estimate
  – Use the receiver’s data to periodically update the multiplexor’s scheduling proportions
  – End-to-end bandwidth probing should cause the weakest link to fail first and/or more often
• Links are asymmetric; measure both ways
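One way such a feedback update could look, as a hedged sketch: the update rule, gain factor, and function names are assumptions, since the talk only requires that scheduling proportions be adjusted until the loss-rate metric equalizes across links:

```python
# Hypothetical sketch of the feedback loop: the receiver periodically
# reports per-link loss rates, and the multiplexor shifts scheduling
# weight away from links losing more than the bundle average.

def update_weights(weights, loss_rates, gain=0.5):
    """Reduce the weight of links whose loss rate is above the bundle
    average (and raise below-average ones), then renormalize so the
    weights remain scheduling proportions."""
    avg = sum(loss_rates) / len(loss_rates)
    adjusted = [w * (1 - gain * (l - avg))
                for w, l in zip(weights, loss_rates)]
    total = sum(adjusted)
    return [a / total for a in adjusted]

# A lossier second link loses share of the bundle.
w = update_weights([0.5, 0.5], [0.01, 0.05])
print(w)
```

Repeated over successive measurement periods, the weights converge toward proportions at which the perceived loss rates match, which is the invariant the scheduler maintains.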
cbr traffic
[Graph: throughput (bits/sec, 0–14000) vs. time (secs, 0–500) for the logical channel and the two modem links, comparing each channel’s scheduled throughput against its actual throughput]
tcp traffic
[Graph: throughput (bits/sec, 0–30000) vs. time (secs, 0–400) for the logical channel and the two modem links, comparing each channel’s scheduled throughput against its actual throughput]
evaluation & future work
• LQB handles shared wireless links well
  – Fragmentation is minimal
  – Reordering is tightly bounded
  – Adapts well to varying channel characteristics
• But we’d like to find a better metric
  – Loss rates are delayed and very coarse-grained
  – Perhaps filtering functions exist for inter-packet arrival times