Buffering Strategies in ATM Switches
Carey Williamson
Department of Computer Science
University of Calgary
Introduction
Up to now, we have assumed bufferless switches and bufferless switch fabrics
When contention occurs, cells are dropped
Not practical to do this!
Alternatives
Buffering
– a cell that cannot be transmitted on its desired path or port right now can wait in a buffer to try again later
– several possibilities: input buffering, output buffering, crosspoint (internal) buffering, combination thereof
Alternatives (Cont’d)
Recirculation
– a cell that cannot be transmitted on its desired path or port right now is sent back to the input ports using a recirculation line to try again in the next time slot (with higher priority)
– hopefully will get through next time
Alternatives (Cont’d)
Deflection routing
– a cell that cannot be transmitted on its desired path or port right now is sent out “another” (available) port instead, in the hope that it will find an alternate path to its destination
– example: tandem banyan
Alternatives (Cont’d)
Redundant paths
– design a switch fabric with multiple possible paths from each input port to each output port (e.g., Benes)
– greater freedom for path selection
– flexible, adaptive, less contention
– works well with deflection routing
Buffering Issues
There are three main factors that affect the performance of switch buffering strategies
Buffer location
Buffer size
Buffer management strategy
Buffer Location
Several choices:
Input buffering
Output buffering
Internal buffering
Combination of the above
Input Buffering
In the event of output port contention (which can be detected ahead of time at the input ports), let one of the contending cells (chosen at random) go ahead, and hold the other(s) at the input ports
Others try to go through the switch fabric the next chance they get
Input Buffering (Cont’d)
Can be a poor choice!
Input buffering suffers from the Head of the Line (HOL) blocking problem
Can significantly degrade the performance of the switch
HOL Blocking Problem
The cell at the head of the input queue cannot go because of output port contention
Because of the FCFS nature of the queue, all cells behind the head cell are also blocked from going
Even if the output port that they want is idle!!!!
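The blocking mechanics can be sketched in a few lines of Python (an illustrative model, not from the slides; ties here are broken by input index rather than at random as in the slides):

```python
from collections import deque

def fifo_switch_slot(queues, n_outputs=2):
    """Simulate one time slot of an input-buffered switch with strict
    FIFO input queues: only head-of-line (HOL) cells are eligible, and
    each output port accepts at most one cell per slot.
    Returns the list of (input, output) departures."""
    free = set(range(n_outputs))        # outputs still idle this slot
    departures = []
    for inp, q in enumerate(queues):
        if q and q[0] in free:          # HOL cell's output is free: it goes
            dest = q.popleft()
            free.discard(dest)
            departures.append((inp, dest))
    return departures

# Each input queue lists the output port each cell wants, front first.
queues = [deque([0, 1]), deque([0, 1])]
print(fifo_switch_slot(queues))   # [(0, 0)]: input 1's HOL cell loses output 0
print(queues[1])                  # deque([0, 1]): the cell for (idle!) output 1
                                  # is stuck behind the blocked HOL cell
```

Output 1 stays idle even though a cell destined for it is waiting — exactly the HOL blocking effect described above.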
HOL Blocking Example
[Figure sequence over slides 12–28: a 2 x 2 switch with FIFO input queues. Cells, labeled with the output port they want (0 or 1), arrive at the inputs and depart through the outputs over successive time slots. Whenever both head-of-line cells want the same output port, only one departs; the blocked cell also stalls the cells behind it, even when the output port they want is idle.]
HOL Blocking: Summary
Cells can end up waiting at input ports even if their desired output port is idle
How often can this happen?
For a 100% loaded 2x2 switch, HOL blocking happens 25% of the time
Effective throughput: 0.75
HOL Blocking (Cont’d)
The HOL blocking problem does NOT go away on larger mesh sizes
In fact, it even gets worse!!!
N    Maximum Throughput
1    1.0000
2    0.7500
3    0.6825
4    0.6553
5    0.6399
6    0.6302
7    0.6234
8    0.6184
∞    0.5858

Maximum Throughput for Input Buffering
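The throughput values above can be reproduced with a small Monte Carlo model (an illustrative sketch, not from the slides): every input always has a HOL cell with a uniformly random destination, each output serves one of its contending HOL cells per slot, and losers keep contending for the same output.

```python
import random

def saturation_throughput(n_ports, n_slots=20000, seed=42):
    """Estimate the maximum throughput of an N x N input-buffered switch
    under saturation.  Each input always has a head-of-line (HOL) cell;
    a fresh HOL cell draws a uniformly random output port, while a
    blocked cell keeps the same destination in later slots."""
    rng = random.Random(seed)
    hol = [rng.randrange(n_ports) for _ in range(n_ports)]  # HOL destinations
    departures = 0
    for _ in range(n_slots):
        contenders = {}                            # output -> contending inputs
        for inp, out in enumerate(hol):
            contenders.setdefault(out, []).append(inp)
        for out, inputs in contenders.items():
            winner = rng.choice(inputs)            # one cell per output per slot
            hol[winner] = rng.randrange(n_ports)   # next cell moves to HOL
            departures += 1
    return departures / (n_ports * n_slots)

print(round(saturation_throughput(2), 2))   # ~0.75
print(round(saturation_throughput(8), 2))   # ~0.62
```

The N=2 estimate matches the 0.75 result above, and larger N approaches the 0.5858 asymptote.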
Solutions for HOL Blocking
Non-FIFO service discipline
Lookahead “windowing” schemes
– e.g., if front cell is blocked, then try the next cell, and so on
– maximum lookahead W (e.g., W=8)
– called “HOL bypass”
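The windowing idea can be sketched as a simulation (an illustrative model with assumed scheduling details, not the slides' definition): if the HOL cell is blocked, the scheduler looks up to W cells deep into each input queue for one whose output is still free, sending at most one cell per input and per output each slot.

```python
import random

def windowed_throughput(n_ports, window, n_slots=20000, seed=1):
    """Saturated N x N input-buffered switch with HOL bypass and
    lookahead `window` (window=1 is plain FIFO).  Each input queue is
    modeled lazily: only its first `window` destinations are kept."""
    rng = random.Random(seed)
    queues = [[rng.randrange(n_ports) for _ in range(window)]
              for _ in range(n_ports)]
    departures = 0
    for _ in range(n_slots):
        free = set(range(n_ports))                 # idle outputs
        matched = set()                            # inputs that already sent
        for depth in range(window):                # depth 0 = HOL cells
            order = list(range(n_ports))
            rng.shuffle(order)                     # random tie-breaking
            for inp in order:
                if inp in matched:
                    continue
                dest = queues[inp][depth]
                if dest in free:                   # bypass blocked cells ahead
                    free.discard(dest)
                    matched.add(inp)
                    queues[inp].pop(depth)
                    queues[inp].append(rng.randrange(n_ports))
                    departures += 1
    return departures / (n_ports * n_slots)
```

With window=1 this reproduces the FIFO saturation throughput (about 0.655 for N=4); larger windows recover a noticeable share of the lost throughput.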
Solutions for HOL Blocking (Cont’d)
Don’t use input buffering! Use output buffering instead
Output Buffering
In the event of output port contention, send all the cells through the switch fabric, letting one of the contending cells (chosen at random) use the output port, but holding the other(s) in the buffers at the output ports
Output Buffering (Cont’d)
Main difference: cells have already gone through the switch fabric
As soon as the port is idle, the cells go out (i.e., work conserving)
Nothing else can get in their way
Achieves maximum possible throughput
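The work-conserving property can be checked with a toy model (an illustrative sketch with assumed Bernoulli arrivals, not from the slides): every arriving cell crosses the fabric immediately and queues at its output, so the carried load tracks the offered load instead of hitting the input-buffering saturation cap.

```python
import random
from collections import deque

def output_buffered_throughput(n_ports, load, n_slots=20000, seed=7):
    """N x N output-buffered switch: each input generates a cell with
    probability `load` per slot, destined to a uniformly random output;
    the cell joins that output's queue at once.  Each output sends one
    cell per slot whenever its queue is non-empty (work conserving)."""
    rng = random.Random(seed)
    queues = [deque() for _ in range(n_ports)]
    departures = 0
    for _ in range(n_slots):
        for _ in range(n_ports):                    # Bernoulli arrivals
            if rng.random() < load:
                queues[rng.randrange(n_ports)].append(1)
        for q in queues:
            if q:                                   # serve one cell per output
                q.popleft()
                departures += 1
    return departures / (n_ports * n_slots)

print(round(output_buffered_throughput(8, 0.9), 2))  # ~0.9, well above the
                                                     # ~0.62 input-buffered
                                                     # limit for N=8
```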
Buffer Sizing
Need buffer size large enough to keep cell loss below an acceptable threshold (e.g., CLR = 0.000001)
Purpose of buffers is to handle short term statistical fluctuations in queue length
Buffer Sizing (Cont’d)
Obvious fact #1: the larger the buffer size, the lower the cell loss
Obvious fact #2: the larger the buffer size, the larger the maximum possible queuing delay (and the cost of the switch!)
Tradeoff: cell loss versus cell delay (and cell delay jitter) (and cost)
Buffer Sizing (Cont’d)
Reality: finite buffers
– e.g., 100’s or 1000’s of cells per port
Buffers need to be large enough to handle the bursty characteristics of integrated ATM traffic
Buffers need to be large enough to handle the bursty characteristics of integrated ATM traffic
General rule of thumb: buffer size = 10 x max burst size
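The loss-versus-buffer-size tradeoff shows up even in a toy single-port model (an illustrative sketch; the load, burst size, and buffer sizes below are assumptions, not values from the slides):

```python
import random

def cell_loss_ratio(buffer_size, load=0.8, burst=8, n_slots=50000, seed=3):
    """Single output port with a finite buffer and bursty arrivals:
    with probability load/burst per slot a burst of `burst` cells
    arrives back-to-back, and one cell departs per slot.  Cells that
    find the buffer full are dropped."""
    rng = random.Random(seed)
    q = arrivals = losses = 0
    for _ in range(n_slots):
        if rng.random() < load / burst:     # a burst begins
            for _ in range(burst):
                arrivals += 1
                if q < buffer_size:
                    q += 1
                else:
                    losses += 1             # buffer overflow: cell lost
        if q:
            q -= 1                          # one departure per slot
    return losses / arrivals

small = cell_loss_ratio(buffer_size=4)      # buffer smaller than one burst
large = cell_loss_ratio(buffer_size=80)     # ~10x the max burst size
print(small > large)                        # True: larger buffer, lower loss
```

A buffer smaller than a single burst drops cells on every burst; sizing at roughly 10x the maximum burst size, per the rule of thumb above, absorbs the short-term fluctuations.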
Buffer Management
In a shared memory switch, for example, there is a choice of using dedicated buffers for each port (called partitioned buffering) or using a common pool of buffers shared by all ports (called shared buffering)
[Diagram: Partitioned Buffers — the shared memory is divided into a dedicated region per port]
[Diagram: Shared Buffers — all ports draw from one common region of the shared memory]
Buffer Mgmt (Cont’d)
Shared buffering offers MUCH better cell loss performance
Partitioned is perhaps easier to design and build
Shared is more complicated to design, build, and control
Shared is superior (uniform traffic)
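The cell loss advantage of sharing can be illustrated with a small model (an illustrative sketch; the port count, buffer sizes, load, and burst size are assumed parameters): the same total memory either split into per-port partitions or pooled.

```python
import random

def loss_ratio(shared, n_ports=4, per_port=16, load=0.6, burst=8,
               n_slots=50000, seed=5):
    """Shared-memory switch model: each output port sees bursty arrivals
    and drains one cell per slot.  Partitioned: each port may hold at
    most `per_port` cells.  Shared: all ports draw on a common pool of
    n_ports * per_port cells (same total memory)."""
    rng = random.Random(seed)
    q = [0] * n_ports
    pool = n_ports * per_port
    arrivals = losses = 0
    for _ in range(n_slots):
        for p in range(n_ports):
            if rng.random() < load / burst:          # a burst for port p
                for _ in range(burst):
                    arrivals += 1
                    full = sum(q) >= pool if shared else q[p] >= per_port
                    if full:
                        losses += 1                  # cell dropped
                    else:
                        q[p] += 1
        for p in range(n_ports):
            if q[p]:
                q[p] -= 1                            # one departure per port
    return losses / arrivals

print(loss_ratio(shared=False) > loss_ratio(shared=True))  # True: sharing
                                                           # loses fewer cells
```

Intuitively, a burst at one port can borrow buffer space that the other ports are not using at that moment, which is why sharing wins under uniform traffic.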
Summary
There is a wide range of choices to make for buffering in ATM switches
Main issues:
– buffer location
– buffer size
– buffer management strategy
Major impact on performance