Congestion Avoidance
Richard T. B. Ma
School of Computing
National University of Singapore
CS 5229: Advanced Computer Networks
References
K. K. Ramakrishnan, Raj Jain, “A Binary Feedback Scheme for Congestion Avoidance in Computer Networks with a Connectionless Network Layer”, ACM Computer Communication Review, Vol. 18, No. 4, August 1988, pp. 303-313.
Congestion Collapse
When: October 1986
Where: Lawrence Berkeley Laboratory (LBL) to UC Berkeley site, 400 yards and 3 hops away
What: Throughput dropped from 32 Kbps to 40 bps
Network Congestion
[Figure: two sources sending through a router over a 1.5-Mbps T1 link to a destination]
Congestion in a packet-switched network
Flow Control
End-to-end flow control is a “selfish” control function: make sure there is sufficient buffer at the destination
Receiver advertises RcvWindow (or rwnd) = RcvBuffer − (LastByteRcvd − LastByteRead)
Sender keeps LastByteSent − LastByteAcked ≤ rwnd
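As a sketch (illustrative names, not code from any real TCP stack), the two rules above are:

```python
def advertise_rwnd(rcv_buffer, last_byte_rcvd, last_byte_read):
    # Receiver: rwnd = RcvBuffer - (LastByteRcvd - LastByteRead)
    return rcv_buffer - (last_byte_rcvd - last_byte_read)

def usable_window(last_byte_sent, last_byte_acked, rwnd):
    # Sender: keep LastByteSent - LastByteAcked <= rwnd, so at most
    # this many new bytes may still be put in flight.
    in_flight = last_byte_sent - last_byte_acked
    return max(0, rwnd - in_flight)
```

E.g., with a 64 KB receive buffer of which 800 bytes are buffered but not yet read, the receiver advertises rwnd = 64736 bytes.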
Flow Control vs. Congestion Control
End-to-end flow control is a “selfish” control function: make sure there is sufficient buffer at the destination
Congestion control solves a “social” problem: end-to-end flows cooperate to avoid/recover from congestion at the intermediate nodes/routers they share
Connectionless Flows
Multiple flows passing through a set of routers
[Figure: several sources sending to Dest1 and Dest2 through shared routers]
Congestion Avoidance/Control
Congestion control: detect congestion and reduce load, pulling the network back from the “Cliff”
Congestion avoidance: operate the network at the “Knee”
DECbit Scheme: 1-bit Feedback
Minimum feedback information: one congestion avoidance bit
Set by the router if congested
Destination sends it back in the ACK
When does the router set avoidance bit?
How does an end-host respond?
[Figure: the congestion avoidance bit in the packet header, set from 0 to 1 by a congested router]
Optimization Criteria (Metrics)
Efficiency: maximize “Power”
Power ≝ Throughput^α / Response Time, α > 0
Operating at the “Knee” maximizes power for α = 1
Resource Efficiency ≝ Resource Power / Resource Power at Knee
Less than 100% efficiency can happen when
• capacity is underutilized (low throughput)
• capacity is overutilized (high response time)
Optimization Criteria (Metrics)
Fairness: maximize Jain’s index
J(x) = (Σ_{i=1}^n x_i)² / (n Σ_{i=1}^n x_i²)
x_i denotes user i’s resource share, absolute or %
J(x) ∈ (0, 1]
Independent of scale
Continuous
𝐽 𝑥 = 𝑘/𝑛 if only k users are allocated equally
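A minimal sketch for computing the index:

```python
def jain_index(x):
    # J(x) = (sum x_i)^2 / (n * sum x_i^2): scale-independent,
    # equals 1 for equal shares and k/n when only k of n users
    # receive equal (nonzero) allocations.
    n = len(x)
    s = sum(x)
    return (s * s) / (n * sum(v * v for v in x))
```

For example, jain_index([1, 1, 1, 1]) is 1.0 and jain_index([1, 1, 0, 0]) is 0.5; scaling all shares by a constant leaves the index unchanged.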
Optimization Criteria (Metrics)
Distributedness: depends only on the congestion avoidance bit; end-hosts control independently
Convergence: responsiveness and smoothness
Congestion Detection at Router
Assumption: a single-server, FIFO-type queue
Metrics: 1) utilization ρ, 2) number of packets L
The packet size distribution determines the service time distribution
When packet size varies a lot, utilization ρ is not a good measure of congestion
L is used instead to measure congestion
Hysteresis algorithm
Two thresholds 0 < L1 < L2
Set the congestion signal when L > L2
Unset the congestion signal when L < L1
Equivalently, the algorithm maintains a center C and width K such that L1 = C − K and L2 = C + K
Power is maximized (in experiments) with C = 1 and K = 0 (i.e., L1 = L2 = 1)
Set the congestion avoidance bit when L ≥ 1
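A sketch of the two-threshold signal in the center/width parameterization (class name and the boundary convention at the thresholds are illustrative):

```python
class HysteresisSignal:
    # Set the signal when the (averaged) queue length reaches
    # L2 = C + K; clear it when the length falls below L1 = C - K.
    # With C = 1 and K = 0 this reduces to the experimentally best
    # rule: congestion bit set iff L >= 1.
    def __init__(self, center=1.0, width=0.0):
        self.l1 = center - width
        self.l2 = center + width
        self.congested = False

    def update(self, queue_len):
        if queue_len >= self.l2:
            self.congested = True
        elif queue_len < self.l1:
            self.congested = False
        return self.congested
```

Between the two thresholds the signal keeps its last value, which is what damps the oscillation a single threshold would cause.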
Feedback Filter at Router
Do not use the instantaneous value of L(t)
Average over a time interval T:
use the last (busy + idle) cycle plus the busy period of the current cycle
How do end-hosts respond?
From a sender’s perspective: LastByteSent − LastByteAcked ≤ min(cwnd, rwnd)
How to control the congestion window 𝑐𝑤𝑛𝑑?
Four aspects:
Decision Frequency
Use of the Received Information
Signal Filtering
Decision Function
User Policy: Decision Frequency
Update after each acknowledgement? The sliding window size oscillates frequently
Instead, update after receiving W_p + W_c ACKs
W_p and W_c are the sizes of the previous and current sliding windows
User Policy: Signal Filtering
Use of received information: only the most recent W_c packets are examined; drop old information (the W_p packets) after each update
Signal filtering: reduce W_c if more than 50% of the bits are set; increase W_c otherwise
Why use 50% as the cutoff? It depends on the optimal system utilization ρ* and on the threshold C used
User Policy: Signal Filtering
Under M/M/1,
Power = Throughput / Response Time ≈ λ / E[T] = λ(μ − λ) = μ²ρ(1 − ρ)
Maximum power is attained at ρ* = 0.5
At the optimum, π0* = 1 − ρ* = 0.5 and πi* = (ρ*)^i π0*
Given a threshold C for setting the congestion bit, at the optimal operating point:
P*{bit set} = 1 − π0* − π1* − ⋯ − π(C−1)* = (ρ*)^C
With C = 1 this gives P*{bit set} = 0.5, which is the 50% cutoff
If more than P*{bit set} × W_c bits are set, the system is over-utilized
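These numbers can be checked directly; a small sketch under the M/M/1 assumptions above (function names are illustrative):

```python
def power(lam, mu):
    # Power = throughput / mean response time
    #       = lam / (1 / (mu - lam)) = lam * (mu - lam) = mu^2 * rho * (1 - rho)
    return lam * (mu - lam)

def p_bit_set(rho, c):
    # P{queue length >= C} = 1 - sum_{i<C} (1 - rho) * rho^i = rho^C
    return rho ** c

mu = 1.0
# scan rho over (0, 1): power peaks at rho* = 0.5
best = max((power(l / 100 * mu, mu), l / 100) for l in range(1, 100))
# with threshold C = 1, the optimal marking fraction is p_bit_set(0.5, 1) = 0.5,
# which is where the 50% filtering cutoff comes from
```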
User Policy: Decision Function
How much should the window be increased or decreased?
Decision Function Requirements
Achieve efficiency (high resource power)
Achieve fairness (high Jain’s index)
Minimize oscillations
Minimize convergence time
Linear Decision Choices
1. Additive increase, additive decrease (AIAD): ↑: W_c^i = W_p^i + b; ↓: W_c^i = W_p^i − d
2. Additive increase, multiplicative decrease (AIMD): ↑: W_c^i = W_p^i + b; ↓: W_c^i = d·W_p^i
3. Multiplicative increase, additive decrease (MIAD): ↑: W_c^i = b·W_p^i; ↓: W_c^i = W_p^i − d
4. Multiplicative increase, multiplicative decrease (MIMD): ↑: W_c^i = b·W_p^i; ↓: W_c^i = d·W_p^i
References
Dah Ming Chiu and Raj Jain, “Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks”, Computer Networks and ISDN Systems, 1989, Vol. 17, pp. 1-14.
Synchronous Feedback Model
Feedback control loop is synchronous
Congestion state is determined by the number of packets in the system
Single bottleneck and binary feedback
y(t) = 0 if Σ_{i=1}^n x_i(t) ≤ C
y(t) = 1 if Σ_{i=1}^n x_i(t) > C
Distributed Linear Control
Each user i adjusts its window size x_i as a linear function of the feedback:
x_i(t + 1) = a_I + b_I·x_i(t) if y(t) = 0
x_i(t + 1) = a_D + b_D·x_i(t) if y(t) = 1
Decision on the values of a_I, a_D, b_I and b_D:
AIAD: a_I > 0 > a_D; b_I = b_D = 1
AIMD: a_I > 0 = a_D; 0 < b_D < b_I = 1
MIAD: a_D < 0 = a_I; b_D = 1 < b_I
MIMD: a_I = a_D = 0; 0 < b_D < 1 < b_I
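The fairness behavior of these rules can be seen in a small simulation of the synchronous loop above (parameter values are illustrative):

```python
def run(x, capacity, a_i, b_i, a_d, b_d, steps):
    # Synchronous binary feedback: y(t) = 1 iff the sum of x_i(t)
    # exceeds C; every user then applies the same linear update.
    for _ in range(steps):
        if sum(x) > capacity:
            x = [a_d + b_d * v for v in x]  # decrease step
        else:
            x = [a_i + b_i * v for v in x]  # increase step
    return x

# AIMD (a_I = 1, b_I = 1, a_D = 0, b_D = 0.5) from an unfair start:
# additive increase preserves the gap, multiplicative decrease halves it,
# so the two windows converge to (almost) equal shares.
x1, x2 = run([1.0, 10.0], capacity=20, a_i=1, b_i=1, a_d=0, b_d=0.5, steps=200)

# MIMD (a_I = a_D = 0, b_I = 1.1, b_D = 0.5) multiplies both users by the
# same factor every step, so the initial 10:1 ratio persists forever.
m1, m2 = run([1.0, 10.0], capacity=20, a_i=0, b_i=1.1, a_d=0, b_d=0.5, steps=200)
```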
Pictorial Explanation/Intuition
Additive Movement
Multiplicative Movement
AIMD Works
AIAD Is Not Fair (and neither is MIMD)
Efficiency Convergence
x(t + 1) = a + b·x(t)
To decrease toward the efficiency line, a ≤ 0 and b ≤ 1 are needed; to increase, a ≥ 0 and b ≥ 1
[Figure: trajectories for the four sign combinations of a and b: (a ≤ 0, b ≤ 1), (a ≥ 0, b ≥ 1), (a ≤ 0, b ≥ 1), (a ≥ 0, b ≤ 1)]
Fairness Convergence
Conclusion: Decrease must be multiplicative in order to achieve fairness and efficiency.
Increase Fairness & Efficiency
Conclusion: Optimal increase is additive in order to achieve fairness and efficiency.
Decision Function Choices
1. Additive increase, additive decrease (AIAD): ↑: W_c^i = W_p^i + b; ↓: W_c^i = W_p^i − d
2. Additive increase, multiplicative decrease (AIMD): ↑: W_c^i = W_p^i + b; ↓: W_c^i = d·W_p^i
3. Multiplicative increase, additive decrease (MIAD): ↑: W_c^i = b·W_p^i; ↓: W_c^i = W_p^i − d
4. Multiplicative increase, multiplicative decrease (MIMD): ↑: W_c^i = b·W_p^i; ↓: W_c^i = d·W_p^i
Optimal Convergence to Efficiency
t_e: time to converge; responsiveness improves with large increase/decrease parameters
s_e: oscillation size; smoothness improves with small increase/decrease parameters
Fairness: AIMD is the optimal mechanism that converges to fairness
Buffer Management
Buffer overflow under congestion:
When to drop packets?
Which packets to drop?
Network Layer 4-36
Chapter 4 Network Layer
A note on the use of these ppt slides: We’re making these slides freely available to all (faculty, students, readers).
They’re in PowerPoint form so you can add, modify, and delete slides
(including this one) and slide content to suit your needs. They obviously
represent a lot of work on our part. In return for use, we only ask the
following:
If you use these slides (e.g., in a class) in substantially unaltered form, that
you mention their source (after all, we’d like people to use our book!)
If you post any slides in substantially unaltered form on a www site, that
you note that they are adapted from (or perhaps identical to) our slides, and
note our copyright of this material.
Thanks and enjoy! JFK/KWR
All material copyright 1996-2010
J.F Kurose and K.W. Ross, All Rights Reserved
Computer Networking: A Top-Down Approach, 5th edition, Jim Kurose and Keith Ross, Addison-Wesley, April 2009.
Chapter 4: Network Layer
4.1 Introduction
4.2 Virtual circuit and datagram networks
4.3 What’s inside a router?
4.4 IP: Internet Protocol
Datagram format
IPv4 addressing
ICMP
IPv6
4.5 Routing algorithms
Link state
Distance vector
Hierarchical routing
4.6 Routing in the Internet
RIP
OSPF
BGP
4.7 Broadcast and multicast routing
Router Architecture Overview
Two key router functions:
run routing algorithms/protocols (RIP, OSPF, BGP)
forward datagrams from the incoming to the outgoing link
[Figure: router architecture: input ports, switching fabric, output ports, routing processor]
[Figure: input port pipeline: line termination, link-layer protocol (receive), lookup/forwarding/queueing, then the switch fabric]
Input Port Functions
Decentralized switching: given the datagram destination, look up the output port using the forwarding table in input port memory
Goal: complete input port processing at “line speed”
Queuing: if datagrams arrive faster than the forwarding rate into the switch fabric
Physical layer: bit-level reception
Data link layer: e.g., Ethernet (see Chapter 5)
Switching fabrics
Transfer packets from the input buffer to the appropriate output buffer
Switching rate: rate at which packets can be transferred from inputs to outputs; often measured as a multiple of the input/output line rate
With N inputs, a switching rate of N times the line rate is desirable
Three types of switching fabrics: memory, bus, crossbar
Switching Via Memory
First generation routers:
traditional computers with switching under direct control of CPU
packet copied to system’s memory
speed limited by memory bandwidth (2 bus crossings per datagram)
[Figure: packet copied from an input port (e.g., Ethernet) through system memory to an output port over the system bus]
Switching Via a Bus
Datagram moves from input port memory to output port memory via a shared bus
Bus contention: switching speed limited by the bus bandwidth
32 Gbps bus (Cisco 5600): sufficient speed for access and enterprise routers
Switching Via An Interconnection Network
Overcomes bus bandwidth limitations
Banyan networks, crossbars, and other interconnection nets were initially developed to connect processors in multiprocessors
Advanced design: fragment datagrams into fixed-length cells, switch the cells through the fabric
Cisco 12000: switches 60 Gbps through the interconnection network
Output Ports
buffering required when datagrams arrive from fabric faster than the transmission rate
scheduling discipline chooses among queued datagrams for transmission
[Figure: output port: switch fabric feeding a datagram buffer (queueing), link-layer protocol (send), line termination]
Output port queueing
buffering when arrival rate via switch exceeds output line speed
queueing (delay) and loss due to output port buffer overflow!
[Figure: at time t, packets move from inputs to outputs through the switch fabric; one packet time later, packets queue at the output port]
How much buffering?
RFC 3439 rule of thumb: average buffering equal to a “typical” RTT (say 250 msec) times the link capacity C; e.g., a C = 10 Gbps link needs a 2.5 Gbit buffer
Recent recommendation: with N flows, buffering equal to RTT · C / √N
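In concrete numbers, a small sketch (function names are illustrative):

```python
import math

def buffer_rtt_c(rtt_s, capacity_bps):
    # classic rule of thumb: one RTT worth of data at line rate
    return rtt_s * capacity_bps

def buffer_small(rtt_s, capacity_bps, n_flows):
    # with N desynchronized flows: RTT * C / sqrt(N)
    return rtt_s * capacity_bps / math.sqrt(n_flows)

b_classic = buffer_rtt_c(0.25, 10e9)       # 2.5e9 bits = 2.5 Gbit
b_small = buffer_small(0.25, 10e9, 10000)  # 100x smaller with 10,000 flows
```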
Network Layer 4-47
Input Port Queuing
If the fabric is slower than the input ports combined, queueing may occur at the input queues; queueing delay and loss due to input buffer overflow!
Head-of-the-Line (HOL) blocking: a queued datagram at the front of the queue prevents others in the queue from moving forward
[Figure: output port contention: only one red datagram can be transferred, so the lower red packet is blocked; one packet time later, the green packet experiences HOL blocking]
References
Sally Floyd and Van Jacobson, “Random Early Detection Gateways for Congestion Avoidance”, IEEE/ACM Transactions on Networking, Vol. 1, No. 4, August 1993.
Congestion Avoidance
“Default” mechanism: FIFO with droptail
Congestion can be detected only after a packet drop
Induces long queues and queueing delays
Main goal and desirable objectives:
Provide congestion avoidance by controlling the average queue length
High throughput and low delay
Routers can detect congestion better than end hosts: they can distinguish propagation delay from queueing delay
Random Early Detection (RED)
Does not assume cooperative end hosts, and provides probabilistic fairness to flows
General buffer management scheme that can be used with other congestion control mechanisms (e.g., TCP) and scheduling mechanisms (e.g., FIFO, priority queueing)
Does not require all routers in the Internet to implement it in order to work (incremental deployment is possible)
Avoids global synchronization
RED Versus DECbit
Computing the average queue size
DECbit: last (busy + idle) cycle plus the current busy cycle
RED: time-based exponential decay
Notifying congestion
DECbit: no separation of detection and marking; biased against bursty traffic
RED: randomized marking; avoids global synchronization
RED Algorithm (high-level)
for each packet arrival:
    calculate the average queue size avg
    if t_min ≤ avg < t_max:
        calculate probability p_a
        mark the arriving packet with probability p_a
    else if t_max ≤ avg:
        mark the arriving packet
RED Active Queue Management
p(x) = 0 for 0 ≤ x < t_min
p(x) = p_max·(x − t_min)/(t_max − t_min) for t_min ≤ x ≤ t_max
p(x) = 1 for t_max < x
[Figure: p(x) rises linearly from 0 at t_min to p_max at t_max, then jumps to 1]
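The piecewise-linear marking function translates directly into code (a sketch):

```python
def red_prob(avg, t_min, t_max, p_max):
    # 0 below t_min, linear ramp from 0 to p_max on [t_min, t_max],
    # then a jump to 1 (forced marking) above t_max
    if avg < t_min:
        return 0.0
    if avg > t_max:
        return 1.0
    return p_max * (avg - t_min) / (t_max - t_min)
```

Note the discontinuity at t_max: the ramp tops out at p_max, but anything above t_max is marked with probability 1.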
RED Algorithm (part 1)
Initialization: avg = 0; count = −1
for each packet arrival:
    calculate the average queue size avg:
    if the queue is nonempty: avg = (1 − w_q)·avg + w_q·q
    else: m = f(time − q_time); avg = (1 − w_q)^m · avg
RED Algorithm Variables
w_q: the discount weight for the historical average
If w_q is too big, the router cannot filter out transient congestion
An upper bound on w_q is derived in the paper
q_time: starting time of the most recent idle period
m = f(time − q_time): the number of packets that could have been transmitted during the idle period
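A sketch of the averaging step; taking f(·) to be linear in the idle time is an assumption made here for illustration:

```python
def update_avg(avg, q, w_q, idle_time=0.0, pkt_tx_time=1.0):
    # Arrival to a nonempty queue: ordinary EWMA of the current length q.
    if idle_time == 0.0:
        return (1 - w_q) * avg + w_q * q
    # Arrival ending an idle period: pretend m = f(time - q_time) small
    # packets arrived to an empty queue, decaying avg by (1 - w_q)^m.
    m = idle_time / pkt_tx_time
    return ((1 - w_q) ** m) * avg
```

The idle-period correction keeps avg from freezing at its last busy-period value while the link sits empty.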
RED Algorithm (part 2)
if t_min ≤ avg < t_max:
    count = count + 1
    calculate probability p_a:
        p_b = p_max·(avg − t_min)/(t_max − t_min)
        p_a = p_b/(1 − count × p_b)
    with probability p_a: mark the arriving packet; count = 0
else if t_max ≤ avg:
    mark the arriving packet; count = 0
else: count = −1
when the queue becomes empty: q_time = time
RED Algorithm Variables
count: the number of unmarked packets under congestion
No congestion: count = −1
Just marked a packet (reset): count = 0
The most recent n packets are not marked under congestion: count = n
Relationship between 𝑝𝑎 and 𝑝𝑏
p_a = p_b/(1 − count × p_b)
p_a increases with count
This ensures that the router does not wait too long before marking a packet
It makes the inter-dropping time uniform
Relationship between 𝑝𝑎 and 𝑝𝑏
Use p_b as the final dropping probability: the inter-dropping time T_b is a geometric r.v.
Prob{T_b = n} = (1 − p_b)^(n−1)·p_b;  E[T_b] = 1/p_b
A uniform distribution is more desirable
Use p_a as the final dropping probability:
Prob{T_a = n} = [p_b/(1 − (n−1)·p_b)] · Π_{i=0}^{n−2} [1 − p_b/(1 − i·p_b)]
= p_b for 1 ≤ n ≤ 1/p_b, and 0 for n > 1/p_b
E[T_a] = 1/(2p_b) + 1/2
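The telescoping product above can be verified numerically; a sketch (p_b = 1/64 is chosen so that 1/p_b is exact in binary floating point):

```python
def prob_Ta(n, p_b):
    # Packet i (counting from 0 after the last mark) is marked with
    # probability p_b / (1 - i*p_b); the product of the survival
    # terms telescopes, leaving Prob{T_a = n} = p_b for n <= 1/p_b.
    if n > 1 / p_b:
        return 0.0
    survive = 1.0
    for i in range(n - 1):
        survive *= 1 - p_b / (1 - i * p_b)
    return survive * p_b / (1 - (n - 1) * p_b)

p_b = 1 / 64
# uniform on {1, ..., 64}: mean = 1/(2 p_b) + 1/2 = 32.5
mean = sum(n * prob_Ta(n, p_b) for n in range(1, 65))
```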
Simulation Results
Topology: 4 FTP flows and 1 RED router
Parameters: min_th = 5; max_th = 15; w_q = 0.002
[Figures: traces of the queue size and the marking probability p_a from the simulations]
How About TCP?
How to control congestion window sizes?
How to infer congestion?
Why and how to estimate and smooth RTT?
What is Slow Start? Why do we need it?
Retransmit timer back-off policy
Acknowledgement sending policy
References
Van Jacobson, “Congestion Avoidance and Control”, ACM Computer Communication Review Vol. 18, No. 4, August 1988, pp. 314-329.