
Page 1: Lecture05

Open Loop Flow and Congestion Control

TELCOM2321 – CS2520, Wide Area Networks
Dr. Walter Cerroni

University of Bologna – Italy
Visiting Assistant Professor at SIS, Telecom Program

Slides based on Dr. Znati’s material

Page 2: Lecture05

Reading

1. About self-similar traffic: Textbook, Chap. 9, Sections 9.1, 9.3, 9.4, and the subsection of 9.2 on heavy-tailed distributions

Page 3: Lecture05

Flow and congestion control implementation

• Provided at different layers

• Data Link Layer Flow and Error Control
– Stop-and-Wait ARQ
– Continuous ARQ

• End-to-End Flow and Congestion Control
– Closed Loop
– Open Loop

Page 4: Lecture05

Open Loop Flow Control

1. During call setup, the source describes its behavior using a traffic descriptor
– bandwidth and buffer requirements
– QoS requirements, in terms of delay, jitter and loss

2. Network nodes along the path verify the feasibility of supporting the QoS requirements
– renegotiation of parameters, if the call is not acceptable, with potential rejection
– reservation of resources in case of acceptance

3. During data transfer, the source shapes its traffic to match the descriptor

4. Network nodes schedule traffic from admitted calls
– to meet bandwidth, buffer and QoS requirements
– verifying actual compliance with the traffic descriptor

Page 5: Lecture05

Open Loop Flow Control: Design Issues

• Traffic descriptor
– a universal descriptor for different types of applications is not likely

• Call admission control scheme
– QoS guarantees of newly accepted connections should not affect currently supported connections
• too conservative schemes, based on worst-case scenarios, waste resources
• too optimistic schemes may fail to meet QoS guarantees
– traffic must be controlled
– a specific scheduling discipline at intermediate nodes is required
• tradeoff between efficiency, simplicity and capability of supporting delay bounds

Page 6: Lecture05

Traffic Descriptor

• It provides behavioral information
– it usually describes the worst-case behavior rather than the exact behavior

• It represents the basis of a traffic contract
– source agrees not to violate the traffic descriptor
– network guarantees the negotiated level of QoS

• A traffic policing mechanism is used to verify that the source adheres to its traffic specification

Page 7: Lecture05

Traffic Descriptor Properties

• Usability
– source must be able to describe its traffic easily
– network must be able to perform the admissibility test easily

• Verifiability
– policing mechanism must be able to verify the source's compliance with its traffic descriptor

• Preservability
– network nodes must be able to preserve the traffic characteristics along the path, if necessary

• Three traffic descriptors are commonly used
– Peak Rate
– Average Rate
– Linear Bounded Arrival Process (LBAP)

Page 8: Lecture05

Traffic Descriptor: Peak Rate

• Highest allowed rate of traffic generation
– network with fixed-size packets
• peak rate measured in pps or bps
• peak rate in pps is the inverse of the closest spacing between the starting times of consecutive packets
– network with variable-size packets
• peak rate measured in bps
• it defines an upper bound on the total number of packets generated over all window intervals of a specified size

• Descriptor easy to compute and police
• It is a loose bound
• Highly affected by large deviations from the average
• Useful only for sources with smooth traffic
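As a minimal sketch of the pps definition above (function and variable names are hypothetical), the peak rate of a fixed-size-packet trace can be estimated from the smallest spacing between consecutive packet start times:

```python
# Hypothetical sketch: peak rate in pps of a fixed-size-packet source,
# computed as the inverse of the closest spacing between the starting
# times of consecutive packets. Timestamps are in seconds.

def peak_rate_pps(start_times):
    """Return the peak rate in packets per second for a packet trace."""
    gaps = [t2 - t1 for t1, t2 in zip(start_times, start_times[1:])]
    return 1.0 / min(gaps)  # smallest inter-packet gap -> highest rate

# Example: packets spaced 10 ms apart, except for one 2 ms burst gap
print(peak_rate_pps([0.0, 0.010, 0.012, 0.022]))  # 500.0 pps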

Page 9: Lecture05

Traffic Descriptor: Average Rate

• Objective is to reduce the effect of outliers
• Transmission rate is averaged over a specified period of time
• Two parameters are defined
– t = time window over which the rate is measured
– N = number of bits/packets to be sent over t

• Two mechanisms are used to compute the average rate
– jumping window
– moving window

Page 10: Lecture05

Traffic Descriptor: Average Rate

• Jumping Window
– source claims that no more than N bits/packets will be transmitted to the network over t
– a new time window starts immediately after the last one
– jumping window is sensitive to the starting time of the first window

• Moving Window
– source claims that no more than N bits/packets will be submitted to the network over all windows of size t
– time window moves continuously
– enforces tighter bounds on spikes in the input traffic
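A minimal sketch contrasting the two checks, assuming arrivals are given as a sorted list of (timestamp in seconds, packet count) pairs; all names are illustrative:

```python
# Hypothetical sketch of the two average-rate conformance checks.
# N and t are the negotiated descriptor parameters.

def jumping_window_ok(arrivals, N, t):
    """Check that no more than N packets arrive in each back-to-back window."""
    count, window_end = 0, t
    for ts, pkts in arrivals:
        while ts >= window_end:          # jump to the next window
            count, window_end = 0, window_end + t
        count += pkts
        if count > N:
            return False
    return True

def moving_window_ok(arrivals, N, t):
    """Check that no more than N packets arrive in ANY window of size t."""
    for i, (start, _) in enumerate(arrivals):
        in_window = sum(p for ts, p in arrivals[i:] if ts < start + t)
        if in_window > N:
            return False
    return True
```

A burst straddling the boundary between two jumping windows can pass the first check yet fail the second, which is why the moving window bounds spikes more tightly.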

Page 11: Lecture05

Traffic Descriptor: LBAP

• Linear Bounded Arrival Process
• Source bounds the number N of bits/packets it transmits in any interval of length t by a linear function of t

N ≤ ρ t + σ

– ρ is the long-term average rate allocated by the network
– σ is the longest burst a source is allowed to send
– source has an intrinsic long-term average rate ρ, but can sometimes deviate from this rate, as specified by σ
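As a sketch (not a prescribed algorithm), LBAP compliance of a recorded trace can be checked by testing the bound N ≤ ρ t + σ over every interval between two arrival epochs; the trace format and names are assumptions:

```python
# Hypothetical sketch: verify that a packet trace satisfies the LBAP bound
# over every interval between two arrivals. `arrivals` is a sorted list of
# (timestamp_seconds, packet_count) pairs.

def lbap_compliant(arrivals, rho, sigma):
    for i, (t1, _) in enumerate(arrivals):
        total = 0
        for t2, pkts in arrivals[i:]:
            total += pkts
            if total > rho * (t2 - t1) + sigma:  # bound violated on [t1, t2]
                return False
    return True
```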

Page 12: Lecture05

Traffic Descriptor and Burstiness

• One of the main causes of congestion is that traffic is often bursty

• Traffic descriptor must be chosen based on source behavior
– peak rate is enough for CBR traffic
– average rate is enough for VBR traffic with relatively limited rate variability
– LBAP is better if VBR traffic has higher variability

• Data bursts should be controlled to comply with the descriptor

• But what exactly is traffic burstiness?

Page 13: Lecture05

Traffic Burstiness

• Takes into account the variability of the source rate
• No universal definition
– Peak rate / Average rate
– Average source rate / Average rate of a reference source
– ...

• Poisson arrivals are “less regular” than CBR
• M/D/1 input traffic is smoother than M/M/1
• A Markov-Modulated Poisson Process (MMPP) is bursty compared to a simple Poisson source
• Real-life traffic traces show even higher burstiness
– self-similar behavior

Page 14: Lecture05

Self-Similar Traffic

W. E. Leland et al., “On the Self-Similar Nature of Ethernet Traffic (Extended Version),” IEEE/ACM Transactions on Networking, vol. 2, no. 1, February 1994.

Page 15: Lecture05

Self-Similar Traffic

• Different kinds of network traffic show self-similar behavior
– Ethernet, WWW, ...

• High variability leads to strong autocorrelation even at large time scales
– Long-Range Dependence

• Modeling with Heavy-Tailed Distributions
– e.g. superposition of many Pareto-distributed ON/OFF sources with 1 < α < 2
– Pareto distribution with shape parameter α and scale k: f(x) = α k^α / x^(α+1) for x ≥ k
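A minimal sampling sketch, assuming inverse-transform sampling from the Pareto CDF F(x) = 1 − (k/x)^α; parameter values and names are illustrative:

```python
# Hypothetical sketch: sampling ON/OFF period lengths from a Pareto
# distribution with 1 < alpha < 2 (finite mean, infinite variance), the
# regime that produces self-similar aggregate traffic.
import random

def pareto_sample(alpha, k):
    """Inverse-transform sampling: F(x) = 1 - (k/x)^alpha, x >= k."""
    u = random.random()
    return k / (1.0 - u) ** (1.0 / alpha)

# ON/OFF source: alternate heavy-tailed ON (transmitting) and OFF periods
alpha, k = 1.5, 1.0
periods = [pareto_sample(alpha, k) for _ in range(10)]
print(periods)  # occasional very long periods drive long-range dependence
```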

Page 16: Lecture05

Heavy-Tailed Distributions

[Figure: probability density functions of heavy-tailed distributions for different values of α]

α > 2: finite mean, finite variance
1 < α ≤ 2: finite mean, infinite variance
0 < α ≤ 1: infinite mean, infinite variance

Page 17: Lecture05

Effect on queue size

H: Hurst parameter
Self-similarity when 0.5 < H < 1

Page 18: Lecture05

Traffic Policing

• Source behavior must comply with the traffic descriptor

• Traffic policing is performed at network edges to detect violations of the contract

• Packets conforming to the agreed bounds are forwarded to the network
– required resources are guaranteed

• Packets exceeding the agreed bounds can be
– dropped at the edge
– marked as non-conforming packets and forwarded to the network
• resources are not guaranteed
• dropped at any point in case of congestion
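A minimal, generic policer sketch; the conformance test itself is left pluggable (peak rate, average rate, or LBAP), and all names are hypothetical:

```python
# Hypothetical sketch of an edge policer: each arriving packet is checked
# against the contract and either forwarded, marked non-conforming, or
# dropped, matching the three outcomes listed above.

def police(packets, conforms, mode="mark"):
    """packets: iterable of packets; conforms: callable(packet) -> bool."""
    for pkt in packets:
        if conforms(pkt):
            yield pkt, "forward"          # resources guaranteed
        elif mode == "drop":
            continue                      # dropped at the edge
        else:
            yield pkt, "non-conforming"   # forwarded without guarantees
```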

Page 19: Lecture05

Traffic Shaping

• In order to comply with the descriptor, source traffic can be shaped to a predictable pattern
– smoothing burstiness out
– applied at the source or at network edges

• Exceeding packets are delayed
– sent later, when they eventually conform to the descriptor
– buffer required
• buffer limit may cause loss/marking
– latency introduced

• Traffic policing must still be enforced if shaping is left to the source

Page 20: Lecture05

Traffic Policing vs. Traffic Shaping

[Figure: rate vs. time for the original traffic, the policed traffic, and the shaped traffic; example based on peak rate]

Page 21: Lecture05

Traffic Shaping: Leaky Bucket

[Figure: leaky bucket; data flows into a bucket of size β and is drained at rate ρ]

• Purpose is to shape bursty traffic into a regular stream of packets
– flow is characterized by a rate ρ
– bucket is characterized by a size β

• Packets are drained out at rate ρ by a regulator at the bottom of the bucket

• When the bucket is full, incoming packets are discarded or marked

• The effect of β is to
– limit the maximum queue size
– bound the amount of delay a packet can incur

• Given β, there is a tradeoff between the loss/marking rate and ρ
• β = 0 for peak rate policing
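A minimal leaky bucket sketch, assuming packet-granularity buffering and a drain tick every 1/ρ seconds; class and method names are illustrative:

```python
# Hypothetical sketch of a leaky bucket shaper: packets enter a buffer of
# size beta and leave at the constant rate rho. Arrivals that find the
# buffer full are dropped (or could be marked instead).
from collections import deque

class LeakyBucket:
    def __init__(self, rho, beta):
        self.rho = rho            # drain rate, packets per second
        self.beta = beta          # bucket (buffer) size, packets
        self.queue = deque()

    def arrive(self, packet):
        """Called on each packet arrival; returns False if dropped."""
        if len(self.queue) >= self.beta:
            return False          # bucket full: discard or mark
        self.queue.append(packet)
        return True

    def drain_tick(self):
        """Called every 1/rho seconds; emits one packet at the fixed rate."""
        return self.queue.popleft() if self.queue else None

# Smaller beta tolerates less burstiness: more loss/marking for a given rho.
```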

Page 22: Lecture05

Traffic Shaping: Leaky Bucket

• Traffic shaping using a leaky bucket generates fixed-rate data flows
– QoS requirements easily guaranteed

• Suitable for smoothing small rate variations
– depending on β

• Highly variable rate sources must choose a rate ρ very close to their peak rate
– wasteful solution
– bursts are not permitted
– a shaper allowing limited rate variation at the output would be better

Page 23: Lecture05

Traffic Shaping: Token Bucket

[Figure: token bucket; tokens arrive at rate ρ into a bucket of size σ, while data packets wait in a buffer of size β]

• Bucket collects tokens
• Tokens are generated at rate ρ
– discarded when the bucket is full

• Each packet requires a token to be sent
• A burst less than or equal to the number of tokens available can be transmitted (up to σ)

• When the bucket is empty, packets are buffered and sent at rate ρ
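A minimal token bucket sketch, assuming one token per packet and wall-clock token accrual; names are illustrative:

```python
# Hypothetical sketch of a token bucket shaper: tokens accumulate at rate
# rho up to sigma; a packet consumes one token to depart, so bursts of up
# to sigma packets can leave back-to-back while the long-term rate stays rho.
import time

class TokenBucket:
    def __init__(self, rho, sigma):
        self.rho = rho                  # token generation rate, tokens/second
        self.sigma = sigma              # bucket size, tokens
        self.tokens = sigma
        self.last = time.monotonic()

    def try_send(self):
        """Return True if a packet may depart now, consuming one token."""
        now = time.monotonic()
        # Accrue tokens since the last check, capped at the bucket size
        self.tokens = min(self.sigma, self.tokens + self.rho * (now - self.last))
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                    # empty bucket: buffer and retry later
```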

Page 24: Lecture05

Traffic Shaping: Token Bucket

• Number of packets sent in any interval of length t: N ≤ ρ t + σ ⇒ LBAP regulator

• β = 0 for LBAP policing
• Given β and the maximum loss/marking rate allowed, the minimal LBAP descriptor is not unique
– ρ and σ must be chosen
– if ρ ≤ A (average rate) ⇒ the buffer grows without bound ⇒ avoiding packet losses would require σ to be infinite
– if ρ ≥ P (peak rate) ⇒ there are always tokens available ⇒ σ can be arbitrarily small
– as ρ increases in the range [A, P], the minimum σ needed to meet the loss bounds decreases
– any such ρ and its corresponding σ is a minimal LBAP descriptor
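As a sketch of how the σ–ρ tradeoff could be computed from a recorded trace (an assumed method, not the slides' algorithm): for a given ρ, the smallest σ that makes the trace conformant is the largest excess of arrivals over the ρ·t budget across all intervals. Sweeping ρ over [A, P] traces the curve shown on the next slide.

```python
# Hypothetical sketch: minimal sigma for a given rho, computed over a
# sorted trace of (timestamp_seconds, packet_count) pairs.

def minimal_sigma(arrivals, rho):
    sigma = 0.0
    for i, (t1, _) in enumerate(arrivals):
        total = 0
        for t2, pkts in arrivals[i:]:
            total += pkts
            sigma = max(sigma, total - rho * (t2 - t1))  # excess burst
    return sigma
```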

Page 25: Lecture05

Traffic Shaping: Token Bucket

[Figure: minimal σ as a decreasing function of ρ over the range [A, P], with β and the loss/marking rate fixed; an example pair (ρ0, σ0) is marked on the curve]

Page 26: Lecture05

Summary

• Open loop flow and congestion control
– Traffic descriptor
– Burstiness
– Policing
– Shaping
– Leaky bucket
• constant data rate
• easier resource management
– Token bucket
• variable data rate
• specific actions required for QoS enforcement
– packet scheduling
– advanced buffer management