CS2060 HIGH SPEED NETWORKS ECE – VII SEM
MAHALAKSHMI ENGINEERING COLLEGE
TIRUCHIRAPALLI – 621 213
UNIT II
CONGESTION AND TRAFFIC MANAGEMENT
Part – A (2 Marks)
1. Difference between a multiserver queue and multiple single-server queues?
Multiserver queue:
1. It has less waiting time.
2. It assumes an infinite population and infinite queue size.
Multiple single-server queues:
1. Waiting time is more, since each item is committed to one of many single servers.
2. Population and queue size are less, and have a significant impact on performance.
2. Define Kendall's notation?
The notation is given by X/Y/N, where X refers to the distribution of interarrival times,
Y refers to the distribution of service times, and
N refers to the number of servers.
3. Define mean residence time?
The average number of items resident in a system, including the item being served (if any) and the items waiting (if any), is r; and the average time that an item spends in the system, waiting and being served, is Tr, referred to as the mean residence time.
4. List some of the common distributions used.
G: general distribution of interarrival times or service times
GI: general distribution of interarrival times, with the restriction that interarrival times are independent
M: negative exponential distribution
D: deterministic arrivals or fixed-length service.
5. Why queuing analysis?
Option 1: Wait and see what happens.
Option 2: The analyst may take the position that it is impossible to project future demand with any degree of certainty.
Option 3: Use an analytic model.
Option 4: Use a simulation model.
6. List some of the model characteristics?
The characteristics are:
Item population
Queue size
Dispatching discipline.
7. List the assumptions made on input and output?
The assumptions made on input are:
Arrival rate
Service time
Number of servers
The assumptions made on output are:
Items waiting
Waiting time
Items queued
Residence time.
8. What is the objective of congestion control? The objective of congestion control is to maintain the number of packets within the network
below the level at which performance falls off dramatically.
9. Difference between implicit congestion and explicit congestion?
Implicit congestion signaling:
1. Congestion is inferred from frame discard and delay.
2. Mainly used for connectionless or datagram configurations, such as IP-based internets.
Explicit congestion signaling:
1. Uses binary, credit, or rate-based notification.
2. Takes place in two directions: forward and backward.
10. Define backpressure?
Backpressure is of limited use; it can be applied to the logical connections of a connection-oriented network, such as an X.25-based packet-switching network.
11. Define choke packet?
A choke packet is a control packet generated at a congested node and transmitted back to a source node to restrict traffic flow. Example: the ICMP (Internet Control Message Protocol) Source Quench message.
12. List the congestion control mechanisms in packet-switching networks.
Send a control packet from a congested node to some or all source nodes.
Rely on routing information.
Make use of an end-to-end probe packet.
Allow packet-switching nodes to add congestion information to packets as they go by.
13. List the objectives of frame relay congestion control.
Minimize frame discard.
Create minimal additional network traffic.
Maintain, with high probability and minimum variance, an agreed quality of service.
Be simple to implement.
Distribute network resources fairly among users.
14. What is a discard strategy?
The discard strategy deals with the most fundamental response to congestion: when congestion becomes severe enough, the network is forced to discard frames.
15. What is congestion avoidance?
Congestion avoidance is used at the onset of congestion to minimize its effect on the network. It requires an explicit signaling mechanism from the network to trigger congestion avoidance.
16. What is congestion recovery?
Congestion recovery procedures are used to prevent network collapse in the face of severe congestion. These procedures are typically initiated when the network has begun to drop frames due to congestion, e.g., in LAPF or TCP.
17. What is committed information rate (CIR)?
The committed information rate is a rate, in bits per second, that the network agrees to support for a particular frame-mode connection. Data transmitted in excess of the CIR is vulnerable to discard in the event of congestion.
18. Define BECN?
Backward explicit congestion notification (BECN) notifies the user that congestion avoidance procedures should be initiated, where applicable, for traffic in the opposite direction of the received frame. It indicates that the frames the user transmits on this logical connection may encounter congested resources.
19. Define FECN?
Forward explicit congestion notification (FECN) notifies the user that congestion avoidance procedures should be initiated, where applicable, for traffic in the same direction as the received frame. It indicates that this frame, on this logical connection, has encountered congested resources.
20. What is network response and user response?
Network response: it is necessary for each frame handler to monitor its queuing behavior; here the choice of action is based on the end user. User response: it is determined by the receipt of BECN or FECN signals. The simplest procedure is to respond to BECN, because the FECN response is more complex.
21. Define switch?
A switch is simply a box with some number of ports that different devices such as
workstations, routers and other switches attach to.
22. What are the techniques available to accomplish switch path control?
1. Address learning
2. Spanning tree
3. Broadcast and discover
4. Link-state routing
5. Explicit signaling.
23. Define VLAN?
VLAN is a broadcast domain whose members use LAN switching to communicate as if they
shared the same physical segment.
24. What are the uses of VLAN?
VLAN are useful for administrative, security and broadcast control.
25. What are the two internal forwarding techniques used in a LAN switch?
1. Cut-through 2. Store-and-forward.
26. What is cut-through forwarding?
A switch begins to forward the packet as soon as the destination address is examined and verified. The forwarding of the first part of the packet can begin even as the remainder of the packet is being read into the input port switch buffers.
27. What are the advantages of using a twisted pair star LAN?
1. An untwisted two-wire system is susceptible to crosstalk and noise; twisting the pair mitigates both.
2. A twisted pair can pass a relatively wide range of frequencies.
3. Attenuation is in the range of 20 dB/mile at 500 kHz.
4. Transmission is not affected by interference.
28. What are the properties of VC connections?
Each VC is identified by a VC identifier. Cells belonging to the same message follow the same VC. Cells remain in the original order until they reach the destination.
29. What are the advantages of VLAN?
1. Configuration 2.Security 3.Network efficiency 4.Broadcast containment
30. How the Broadcast containment is possible in VLAN?
A properly configured and operational VLAN should prevent or minimize broadcast leakage
from one VLAN to another.
31. What is meant by tag control information (TCI)?
The TCI contains a three-bit user priority field that is used to indicate the frame's priority as it is forwarded through switches supporting the IEEE 802.1p specification.
32. What is the need for the canonical format indicator (CFI)?
The one bit CFI indicates if the MAC address information is in canonical format.
33. Why is switching so popular?
Switching technologies offer much greater performance and capacity at a much lower price. Advances in silicon are placing more network processing on inexpensive chips, which drives prices down and boosts performance by orders of magnitude over older software-based processing.
34. What is meant by LAN switching?
LAN switching is used to move data packets between workstations on the same or different
segments.
35. What is meant by WAN switching?
WAN switching takes the form of a virtual connection that is provisioned between two end points, such as a pair of routers.
36. What are the properties of switching?
1. Operates at layer 2 and below of any protocol stack. 2. Performed in hardware.
37. Define switch forwarding?
The information available in the data packet and maintained in the switch enables the switch
to rapidly move data packets from an input port to an output port.
38. What is the need for the broadcast and discover technique?
It is commonly used in LAN switching and bridging to locate a switched path through the network.
39. Define spanning tree explorer (STE)?
If a spanning tree is in place, the explorer packet may opt to follow the spanning tree path to the destination; this is called an STE.
40. What is the need for the connection identifier (CI)?
The CI contained in the packet is used to determine the output port. The CI is also called a label.
41. Define all-routes explorer (ARE)?
If the explorer packet is flooded throughout the entire network, it is called an ARE.
42. When will a queue be formed in a network?
A queue will be formed if the current demand for a particular service exceeds the capacity of the service provider.
43. What are the characteristics of a queuing process?
Characteristics of queuing process depend on:
a) Arrival pattern
b) Service pattern
c) Number of server
d) Queue discipline
e) System capacity
f) Number of channels
44. What is meant by traffic intensity in queuing analysis? Write Little's formula for a single-server queue.
Traffic intensity (or utilization factor): ρ = λ/μ = arrival rate / service rate
Little's formulas:
ρ = λ Ts
r = λ Tr
w = λ Tw
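These relations can be checked numerically for a single-server M/M/1 queue; the rates below are illustrative, not taken from the text:

```python
# Little's formulas for a single-server queue, checked on illustrative rates.
arrival_rate = 4.0    # lambda, items/s (assumed example value)
service_rate = 10.0   # mu, items/s (assumed example value)

rho = arrival_rate / service_rate     # traffic intensity, lambda/mu
Ts = 1.0 / service_rate               # mean service time
Tr = Ts / (1.0 - rho)                 # mean residence time (M/M/1 result)
Tw = Tr - Ts                          # mean waiting time

r = arrival_rate * Tr                 # Little: mean items in system
w = arrival_rate * Tw                 # Little: mean items waiting

print(f"rho={rho:.2f}, r={r:.4f}, w={w:.4f}")
```

Note that ρ = λ Ts = 0.4 here, consistent with ρ = λ/μ.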
45. Compare single-server and multiserver queues.
Single-server model: congestion statistics for this model are M/M/1, M/D/1, M/G/1; arrival rate = λ.
Multiserver model: congestion statistics for this model are M/M/N; arrival rate for each server = λ/N.
46. What is meant by implicit congestion signaling?
When network congestion occurs, packets get discarded and acknowledgements are delayed. As a result, sources infer that there is congestion; users are notified about congestion indirectly.
47. What is meant by explicit congestion signaling?
In this method, congestion is indicated directly by a notification. The notification may be in the backward or forward direction.
48. Define committed burst size (Bc).
It is defined as the maximum number of bits, in a predefined period of time, that the network is committed to transfer without discarding any frames.
49. Define committed information rate (CIR)
CIR is a rate, in bps, that a network agrees to support for a particular frame-mode connection. Any data transmitted in excess of the CIR is vulnerable to discard in the event of congestion. CIR < access rate.
50. Define access rate.
For every connection in frame relay network, an access rate (bps) is defined. The
access rate actually depends on bandwidth of channel connecting user to network.
51. Write Little’s formula.
Little’s formula states that the mean number of items in the system equals the product of the arrival rate λ and the mean residence time Tr; similarly, the mean number waiting equals the product of λ and the mean waiting time Tw.
It is given as r = λ Tr (or) w = λ Tw.
52. List out the model characteristics of queuing models.
a) Item population.
b) Queue size
c) Dispatching discipline.
53. List out the fundamental tasks of a queuing analysis.
Queuing analysis takes the following as input information:
a) Arrival rate
b) Service rate
c) Number of servers
and provides as output information concerning:
a) Items waiting
b) Waiting time
c) Items queued
d) Residence time.
54. State Kendall’s notation.
Kendall’s notation is X/Y/N, where X refers to the distribution of the interarrival
times, Y refers to the distribution of service times, and N refers to the number of servers.
The most common distributions are denoted as follows:
G = general distribution of interarrival times or service times
GI = general distribution of interarrival times, with the restriction that interarrival times are independent
M = negative exponential distribution
D = deterministic arrivals or fixed-length service.
Thus, M/M/1 refers to a single-server queuing model with Poisson arrivals
(exponential interarrival times) and exponential service times.
55. List out the assumptions for single server queues.
a. Poisson arrival rate.
b. Dispatching discipline does not give preference to items based on service times
c. Formulas for standard deviation assume first-in, first-out dispatching.
d. No items are discarded from the queue.
56. List out the assumptions for multiserver queues.
a. Poisson arrival rate.
b. Exponential service times.
c. All servers equally loaded.
d. All servers have the same mean service time.
e. First-in, first-out dispatching.
f. No items are discarded from the queue.
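Under exactly these assumptions, M/M/N performance is usually computed with the Erlang C formula. A minimal sketch (the function name and the rates are illustrative, not from the text):

```python
from math import factorial

def mmn_metrics(lam, mu, N):
    """Erlang C metrics for an M/M/N queue with Poisson arrivals,
    exponential service, FIFO dispatching and no discards."""
    a = lam / mu                 # offered load in Erlangs
    assert a < N, "queue is unstable unless lambda < N * mu"
    # Erlang C: probability that an arriving item must wait
    top = (a ** N / factorial(N)) * (N / (N - a))
    bottom = sum(a ** k / factorial(k) for k in range(N)) + top
    C = top / bottom
    Tw = C / (N * mu - lam)      # mean waiting time
    Lq = lam * Tw                # mean queue length, by Little's formula
    return C, Tw, Lq

C, Tw, Lq = mmn_metrics(lam=8.0, mu=3.0, N=4)
print(f"P(wait)={C:.4f}, Tw={Tw:.4f}s, Lq={Lq:.4f}")
```

With N = 1 the formula collapses to the single-server results: C = ρ and Tw = ρ/(μ − λ).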
57. State Jackson’s theorem. Jackson’s theorem can be used to analyse a network of queues. The theorem is based
on three assumptions:
1. The queuing network consists of m nodes, each of which provides an independent
exponential service.
2. Items arriving from outside the system to any one of the nodes arrive with a Poisson rate.
3. Once served at a node, an item goes (immediately) to one of the other nodes with a fixed
probability, or out of the system.
58. Define arrival rate and service rate.
Arrival rate: the rate at which items enter the queuing system (the reciprocal of the mean interarrival time). It is denoted λ.
Service rate: the rate at which items leave the queuing system once served. It is denoted μ.
59. What is meant by congestion avoidance and congestion recovery technique?
Congestion avoidance: the procedure used at the beginning stage of congestion to minimize its effect. It is initiated prior to, or at, point A and prevents congestion from progressing to point B.
Congestion recovery: this procedure operates around point B, within the region of severe congestion, to prevent network collapse. Here dropped frames are reported to a higher layer, and further packet delivery is stopped to recover from congestion.
60. What is the role of the DE bit in frame relay?
The DE (Discard Eligibility) bit indicates frame priority and can take the value 0 or 1. DE = 1 marks a frame as discard-eligible: a network element may discard it during periods of congestion. DE = 0 frames are generally considered higher priority.
61. How does frame relay report congestion?
When a particular portion of the network is heavily congested, it is desirable to route packets around, rather than through, the area of congestion. Frame relay reports congestion explicitly, using the FECN and BECN bits in the frame header.
62. Define QoS.
QoS refers to the properties of a network that contribute to the degree of satisfaction that users perceive relative to network performance. The service categories typically grouped under this term are capacity/data rate, latency/delay, and traffic losses.
63. Define committed burst size.
The maximum amount of data that the network agrees to transfer, under normal conditions, over a measurement interval T. These data may or may not be contiguous.
64. Define excess burst size.
The maximum amount of data, in excess of Bc, that the network will attempt to transfer under normal conditions over a measurement interval T. These data are uncommitted.
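Taken together, Bc and Be define a three-way decision at the network edge over each measurement interval T. A sketch of that rule (the function name and counting scheme are illustrative, not from the text):

```python
def police_frame(bits_already_sent, frame_bits, Bc, Be):
    """Decide the fate of one frame within the current measurement interval T.

    bits_already_sent: bits accepted so far in this interval.
    Returns "forward", "mark_de" (forward with the DE bit set) or "discard".
    """
    total = bits_already_sent + frame_bits
    if total <= Bc:
        return "forward"        # within the committed burst size
    elif total <= Bc + Be:
        return "mark_de"        # excess burst: forwarded but discard-eligible
    else:
        return "discard"        # beyond Bc + Be: not transferred

# Illustrative numbers: Bc = 64,000 bits and Be = 32,000 bits per interval.
print(police_frame(60_000, 2_000, 64_000, 32_000))   # forward
print(police_frame(70_000, 2_000, 64_000, 32_000))   # mark_de
print(police_frame(95_000, 2_000, 64_000, 32_000))   # discard
```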
Part – B (16 Marks)
1. Explain queuing analysis and the different queuing models.
Queuing analysis
In queueing theory, a queueing model is used to approximate a real queueing situation
or system, so the queueing behaviour can be analysed mathematically. Queueing
models allow a number of useful steady state performance measures to be determined,
including:
the average number in the queue or the system,
the average time spent in the queue or the system,
the statistical distribution of those numbers or times,
the probability the queue is full or empty, and
the probability of finding the system in a particular state.
These performance measures are important as issues or problems caused by queueing
situations are often related to customer dissatisfaction with service or may be the root
cause of economic losses in a business. Analysis of the relevant queueing models
allows the cause of queueing issues to be identified and the impact of any changes that
might be wanted to be assessed.
Notation
Queueing models can be represented using Kendall's notation:
A/B/S/K/N/Disc
where:
A is the interarrival time distribution
B is the service time distribution
S is the number of servers
K is the system capacity
N is the calling population
Disc is the service discipline assumed
Some standard notation for distributions (A or B) are:
M for a Markovian (exponential) distribution
Eκ for an Erlang distribution with κ phases
D for Deterministic (constant)
G for General distribution
PH for a Phase-type distribution
Models
Construction and analysis
Queueing models are generally constructed to represent the steady state of a queueing
system, that is, the typical, long-run or average state of the system. As a consequence,
these are stochastic models that represent the probability that a queueing system will be
found in a particular configuration or state.
A general procedure for constructing and analysing such queueing models is:
1. Identify the parameters of the system, such as the arrival rate, service time, Queue
capacity, and perhaps draw a diagram of the system.
2. Identify the system states. (A state will generally represent the integer number of
customers, people, jobs, calls, messages, etc. in the system and may or may not be
limited.)
3. Draw a state transition diagram that represents the possible system states and identify the
rates to enter and leave each state. This diagram is a representation of a Markov chain.
4. Because the state transition diagram represents the steady-state situation, there is a
balanced flow between states, so the probabilities of being in adjacent states can be
related mathematically in terms of the arrival and service rates and state probabilities.
5. Express all the state probabilities in terms of the empty state probability, using the inter-
state transition relationships.
6. Determine the empty state probability by using the fact that all state probabilities always
sum to 1.
Whereas specific problems that have small finite state models are often able to be
analysed numerically, analysis of more general models, using calculus, yields useful
formulae that can be applied to whole classes of problems.
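For the M/M/1 birth-death chain, steps 4 through 6 can be carried out numerically: the balance condition λ p(n) = μ p(n+1) gives p(n) = ρⁿ p(0), and normalisation fixes p(0). A sketch with illustrative rates and a truncated state space:

```python
# Steps 4-6 of the procedure applied to the M/M/1 chain, numerically.
lam, mu = 3.0, 5.0            # illustrative arrival and service rates
rho = lam / mu
n_max = 200                   # truncation; the tail is negligible for rho < 1

# Balance across each boundary: lam*p[n] = mu*p[n+1], so p[n] = rho**n * p[0]
unnormalised = [rho ** n for n in range(n_max + 1)]   # p[n] in units of p[0]
p0 = 1.0 / sum(unnormalised)                          # step 6: sum of p[n] is 1
p = [p0 * u for u in unnormalised]

mean_N = sum(n * pn for n, pn in enumerate(p))
print(f"p0={p[0]:.4f} (closed form 1-rho={1-rho:.4f}); "
      f"mean N={mean_N:.4f} (closed form rho/(1-rho)={rho/(1-rho):.4f})")
```

The numeric answers match the closed-form M/M/1 results p(0) = 1 − ρ and N = ρ/(1 − ρ).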
Single-server queue
Single-server queues are, perhaps, the most commonly encountered queueing situation
in real life. One encounters a queue with a single server in many situations, including
business (e.g. sales clerk), industry (e.g. a production line), transport (e.g. a bus, a taxi
rank, an intersection), telecommunications (e.g. Telephone line), computing (e.g.
processor sharing). Even where there are multiple servers handling the situation it is
possible to consider each server individually as part of the larger system, in many
cases. (e.g A supermarket checkout has several single server queues that the customer
can select from.) Consequently, being able to model and analyse a single server
queue's behaviour is a particularly useful thing to do.
Poisson arrivals and service
M/M/1/∞/∞ represents a single server that has unlimited queue capacity and infinite
calling population, both arrivals and service are Poisson (or random) processes,
meaning the statistical distribution of both the inter-arrival times and the service times
follow the exponential distribution. Because of the mathematical nature of the
exponential distribution, a number of quite simple relationships are able to be derived
for several performance measures based on knowing the arrival rate and service rate.
This is fortunate because, an M/M/1 queuing model can be used to approximate many
queuing situations.
Poisson arrivals and general service
M/G/1/∞/∞ represents a single server that has unlimited queue capacity and infinite
calling population, while the arrival is still Poisson process, meaning the statistical
distribution of the inter-arrival times still follow the exponential distribution, the
distribution of the service time does not. The distribution of the service time may
follow any general statistical distribution, not just exponential. Relationships are still
able to be derived for a (limited) number of performance measures if one knows the
arrival rate and the mean and variance of the service time. However, the derivations are
generally more complex.
A number of special cases of M/G/1 provide specific solutions that give broad insights
into the best model to choose for specific queueing situations because they permit the
comparison of those solutions to the performance of an M/M/1 model.
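The key M/G/1 result of this kind is the Pollaczek-Khinchine formula for the mean waiting time, which needs only the arrival rate and the first two moments of the service time. A sketch (rates are illustrative):

```python
def mg1_wait(lam, mean_s, var_s):
    """Mean waiting time of an M/G/1 queue (Pollaczek-Khinchine formula)."""
    rho = lam * mean_s
    assert rho < 1, "unstable queue"
    second_moment = var_s + mean_s ** 2     # E[S^2]
    return lam * second_moment / (2 * (1 - rho))

lam, mu = 4.0, 10.0
exp_wait = mg1_wait(lam, 1 / mu, 1 / mu ** 2)   # exponential service -> M/M/1
det_wait = mg1_wait(lam, 1 / mu, 0.0)           # deterministic service -> M/D/1
print(f"M/M/1 wait={exp_wait:.4f}s, M/D/1 wait={det_wait:.4f}s")
```

The deterministic case gives exactly half the M/M/1 waiting time, one of the comparisons the text alludes to.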
Multiple-servers queue
Multiple (identical)-servers queue situations are frequently encountered in
telecommunications or a customer service environment. When modelling these
situations care is needed to ensure that it is a multiple servers queue, not a network of
single server queues, because results may differ depending on how the queuing model
behaves. One observational insight provided by comparing queuing models is that a single queue with multiple servers performs better than each server having their own queue and that a single large pool of servers performs better than two or more smaller pools, even though there are the same total number of servers in the system.
One simple example illustrates this fact. Consider a system with 8 input lines and a single queue whose capacity is pooled into one 64 kbit/s output line. The arrival rate at each input is 2 packets/s, so the total arrival rate is 16 packets/s. With an average of 2000 bits per packet, the service rate is 64 kbit/s / 2000 bits = 32 packets/s. Hence the average response time of the system is 1/(μ−λ) = 1/(32−16) = 0.0625 s. Now consider a second system with 8 queues, one for each server, where each of the 8 output lines has a capacity of 8 kbit/s. Here μ = 4 packets/s and λ = 2 packets/s per queue, so the response time is 1/(μ−λ) = 1/(4−2) = 0.5 s. The average waiting time in the queue, ρ/((1−ρ)μ), is 0.03125 s in the first case and 0.25 s in the second.
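The comparison can be recomputed directly under the same M/M/1 assumptions:

```python
def mm1(lam, mu):
    """Mean response time Tr and mean waiting time Tw for an M/M/1 queue."""
    rho = lam / mu
    return 1.0 / (mu - lam), rho / (mu * (1.0 - rho))

PACKET_BITS = 2000

# System 1: one shared queue with the capacity pooled into a 64 kbit/s line.
mu1 = 64_000 / PACKET_BITS        # 32 packets/s
lam1 = 8 * 2                      # 16 packets/s aggregate arrival rate
Tr1, Tw1 = mm1(lam1, mu1)

# System 2: eight separate queues, each served by an 8 kbit/s line.
mu2 = 8_000 / PACKET_BITS         # 4 packets/s
lam2 = 2                          # 2 packets/s per queue
Tr2, Tw2 = mm1(lam2, mu2)

print(f"pooled: Tr={Tr1:.4f}s Tw={Tw1:.5f}s")   # 0.0625 s and 0.03125 s
print(f"split:  Tr={Tr2:.4f}s Tw={Tw2:.5f}s")   # 0.5000 s and 0.25000 s
```

The pooled system is eight times faster in response time even though both systems have the same total capacity and load.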
Infinitely many servers
While never exactly encountered in reality, an infinite-servers (e.g. M/M/∞) model is a convenient theoretical model for situations that involve storage or delay, such as parking lots, warehouses and even atomic transitions. In these models there is no queue as such; instead, each arriving customer receives service. When viewed from the outside, the model appears to delay or store each customer for some time.
Queueing System Classification
With Little's Theorem, we have developed some basic understanding of a queueing system. To further our understanding we will have to dig deeper into characteristics of a queueing system that impact its performance. For example, queueing requirements of a restaurant will depend upon factors like:
How do customers arrive in the restaurant? Are customer arrivals more during lunch and dinner time (a regular restaurant), or is the customer traffic more uniformly distributed (a cafe)?
How much time do customers spend in the restaurant? Do customers typically leave the restaurant in a fixed amount of time? Does the customer service time vary with the type of customer?
How many tables does the restaurant have for servicing customers?
The above three points correspond to the most important characteristics of a queueing system. They are explained below:
Arrival Process
The probability density distribution that determines the customer arrivals in the system.
In a messaging system, this refers to the message arrival probability distribution.
Service Process
The probability density distribution that determines the customer service times in the
system.
In a messaging system, this refers to the message transmission time distribution. Since
message transmission is directly proportional to the length of the message, this
parameter indirectly refers to the message length distribution.
Number of Servers
The number of servers available to service the customers.
In a messaging system, this refers to the number of links between the source and
destination nodes. Based on the above characteristics, queueing systems can be classified by the following convention:
A/S/n
where A is the arrival process, S is the service process and n is the number of servers. A and S can be any of the following:
M (Markov): exponential probability density
D (Deterministic): all customers have the same value
G (General): any arbitrary probability distribution
Examples of queueing systems that can be defined with this convention are:
M/M/1: This is the simplest queueing system to analyze. Here the arrival and service
times are negative exponentially distributed (Poisson process). The system consists of
only one server. This queueing system can be applied to a wide variety of problems as
any system with a very large number of independent customers can be approximated
as a Poisson process. Using a Poisson process for service time however is not
applicable in many applications and is only a crude approximation. Refer to M/M/1
Queueing System for details.
M/D/n: Here the arrival process is Poisson and the service time distribution is
deterministic. The system has n servers (e.g. a ticket booking counter with n
cashiers). Here the service time can be assumed to be the same for all customers.
G/G/n: This is the most general queueing system where the arrival and service time
processes are both arbitrary. The system has n servers. No analytical solution is known
for this queueing system.
Markovian arrival processes
In queuing theory, Markovian arrival processes are used to model the arrival of customers to a queue. Some of the most common include the Poisson process, the Markov arrival process and the batch Markovian arrival process.
A Markovian arrival process consists of two processes. The first is a continuous-time Markov process j(t), generated by a generator or rate matrix Q. The other is a counting process N(t), whose state space is the set of natural numbers (including zero). N(t) increases every time there is a marked transition in j(t).
Poisson process
The Poisson arrival process, or Poisson process, counts the number of arrivals, each of which has an exponentially distributed time between arrivals. In the simplest case it is characterised by a single rate parameter λ; viewed as a Markovian arrival process it has the one-state rate matrices D0 = (−λ) and D1 = (λ).
Markov arrival process
The Markov arrival process (MAP) is a generalisation of the Poisson process in which the sojourn times between arrivals need not be exponentially distributed. The homogeneous case is characterised by a pair of rate matrices, D0 (governing transitions without arrivals) and D1 (governing transitions that produce arrivals), whose sum D0 + D1 is the generator of the underlying Markov process.
Little's law
In queueing theory, Little's result, theorem, lemma, or law says:
The average number of customers in a stable system (over some time interval), N, is
equal to their average arrival rate, λ, multiplied by their average time in the system, T,
or: N = λT.
Although it looks intuitively reasonable, it's a quite remarkable result, as it implies that this behavior is entirely independent of any of the detailed probability distributions involved, and hence requires no assumptions about the schedule according to which customers arrive or are serviced, or whether they are served in the order in which they arrive.
It is also a comparatively recent result - it was first proved by John Little, an Institute Professor and the Chair of Management Science at the MIT Sloan School of Management, in 1961.
Handily his result applies to any system, and particularly, it applies to systems within systems. So in a bank, the queue might be one subsystem, and each of the tellers another subsystem, and Little's result could be applied to each one, as well as the whole thing. The only requirement is that the system is stable -- it can't be in some transition state such as just starting up or just shutting down.
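Little's result can be verified on any finite trace of arrivals and departures, since over a horizon by which every arrival has also departed the relation holds exactly. The times below are illustrative:

```python
# Check N = lambda * T on a small deterministic trace (illustrative times).
arrivals   = [0.0, 1.0, 1.5, 4.0, 6.0]   # arrival time of each customer
departures = [2.0, 3.0, 3.5, 5.0, 7.0]   # departure time of each customer
horizon = 8.0                             # all customers gone by this time

n = len(arrivals)
total_time_in_system = sum(d - a for a, d in zip(arrivals, departures))

N_avg = total_time_in_system / horizon    # time-average number in the system
lam   = n / horizon                       # average arrival rate
T_avg = total_time_in_system / n          # average time each customer spends

print(N_avg, lam * T_avg)                 # both sides equal 1.0
```

Both sides reduce to (total time in system) / horizon, which is why the identity needs no distributional assumptions.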
Mathematical formalization of Little's theorem
Let α(t) be the number of arrivals to some system in the interval [0, t]. Let β(t) be the number of departures from the same system in the interval [0, t]. Both α(t) and β(t) are integer-valued increasing functions by their definition. Let Tt be the mean time spent in the system (during the interval [0, t]) by all the customers who were in the system during the interval [0, t]. Let Nt be the mean number of customers in the system over the duration of the interval [0, t].
If the following limits exist,
λ = lim(t→∞) α(t)/t,  δ = lim(t→∞) β(t)/t,  T = lim(t→∞) Tt,
and, further, if λ = δ, then Little's theorem holds: the limit N = lim(t→∞) Nt exists and is given by N = λT.
2. Explain the effects of congestion and the different congestion control
methodologies in packet switching networks.
Ideal Performance
Effects of Congestion
Congestion-Control Mechanisms
Backpressure
– Request from destination to source to reduce rate
– Useful only on a logical connection basis
– Requires hop-by-hop flow control mechanism
Policing
– Measuring and restricting packets as they enter the network
Choke packet
– Specific message back to source
– E.g., ICMP Source Quench
Implicit congestion signaling
– Source detects congestion from transmission delays and lost packets and reduces flow
Explicit congestion signaling
Frame Relay reduces network overhead by implementing simple congestion-notification
mechanisms rather than explicit, per-virtual-circuit flow control. Frame Relay typically is
implemented on reliable network media, so data integrity is not sacrificed because flow
control can be left to higher-layer protocols. Frame Relay implements two congestion-
notification mechanisms:
• Forward-explicit congestion notification (FECN)
• Backward-explicit congestion notification (BECN)
FECN and BECN each is controlled by a single bit contained in the Frame Relay frame
header. The Frame Relay frame header also contains a Discard Eligibility (DE) bit, which is
used to identify less important traffic that can be dropped during periods of congestion.
The FECN bit is part of the Address field in the Frame Relay frame header. The FECN
mechanism is initiated when a DTE device sends Frame Relay frames into the network. If the
network is congested, DCE devices (switches) set the value of the frames' FECN bit to 1.
When the frames reach the destination DTE device, the Address field (with the FECN bit set)
indicates that the frame experienced congestion in the path from source to destination. The
DTE device can relay this information to a higher-layer protocol for processing. Depending
on the implementation, flow control may be initiated, or the indication may be ignored.
The BECN bit is part of the Address field in the Frame Relay frame header. DCE devices set
the value of the BECN bit to 1 in frames traveling in the opposite direction of frames with
their FECN bit set. This informs the receiving DTE device that a particular path through the
network is congested. The DTE device then can relay this information to a higher-layer
protocol for processing. Depending on the implementation, flow-control may be initiated, or
the indication may be ignored.
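In the two-octet Q.922 address format (the bit layout assumed below is not spelled out in the text above), FECN, BECN and DE occupy fixed positions in the second octet, so they can be read directly from a received header:

```python
def parse_fr_address(octet1: int, octet2: int) -> dict:
    """Decode a two-octet Frame Relay address field.

    Assumed bit layout (standard two-octet Q.922 format):
      octet 1: DLCI high 6 bits | C/R | EA=0
      octet 2: DLCI low 4 bits | FECN | BECN | DE | EA=1
    """
    dlci = ((octet1 >> 2) << 4) | (octet2 >> 4)
    return {
        "dlci": dlci,
        "fecn": bool(octet2 & 0x08),
        "becn": bool(octet2 & 0x04),
        "de":   bool(octet2 & 0x02),
    }

# DLCI 100 = 0b0001100100: high 6 bits 0b000110 -> octet1 = 0x18 (C/R=0, EA=0);
# low 4 bits 0b0100 with BECN=1 and EA=1 -> octet2 = 0x45.
print(parse_fr_address(0x18, 0x45))
```

A DTE receiving this frame would see BECN set and could throttle its own transmissions on DLCI 100.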
Frame Relay Discard Eligibility
The Discard Eligibility (DE) bit is used to indicate that a frame has lower importance
than other frames. The DE bit is part of the Address field in the Frame Relay frame
header.
DTE devices can set the value of the DE bit of a frame to 1 to indicate that the frame
has lower importance than other frames. When the network becomes congested, DCE
devices will discard frames with the DE bit set before discarding those that do not.
This reduces the likelihood of critical data being dropped by Frame Relay DCE
devices during periods of congestion.
Frame Relay Error Checking
Frame Relay uses a common error-checking mechanism known as the cyclic
redundancy check (CRC). The CRC compares two calculated values to determine
whether errors occurred during the transmission from source to destination. Frame
Relay reduces network overhead by implementing error checking rather than error
correction. Frame Relay typically is implemented on reliable network media, so data
integrity is not sacrificed because error correction can be left to higher-layer protocols
running on top of Frame Relay.
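The check-but-don't-correct behaviour can be sketched as follows. For brevity this uses CRC-32 from Python's zlib; Frame Relay itself uses a 16-bit CRC, but the principle, detect and discard rather than correct, is the same.

```python
# Sketch of Frame Relay-style error detection: the sender appends a
# CRC over the payload; the receiver recomputes it and silently
# discards the frame on mismatch. No error correction is attempted.
import zlib

def frame_with_fcs(payload: bytes) -> bytes:
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_and_strip(frame: bytes):
    payload, fcs = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != fcs:
        return None          # error detected: discard the frame
    return payload

good = frame_with_fcs(b"hello")
assert check_and_strip(good) == b"hello"
corrupted = bytes([good[0] ^ 0xFF]) + good[1:]
assert check_and_strip(corrupted) is None   # frame silently discarded
```

Recovery of the discarded frame is left to a higher-layer protocol, which is what keeps Frame Relay's per-node overhead low.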
3.Explain traffic management in detail.
Traffic Management in Congested Network – Some Considerations
Fairness
– Various flows should "suffer equally"
– Last-in-first-discarded may not be fair
Quality of Service (QoS)
– Flows treated differently, based on need
– Voice, video: delay sensitive, loss insensitive
– File transfer, mail: delay insensitive, loss sensitive
– Interactive computing: delay and loss sensitive
Reservations
– Policing: excess traffic discarded or handled on best-effort basis.
Traffic Management
The following issues are related to the efficient use of a network at high load by
congestion control mechanisms:
• Fairness – As congestion develops, flows between sources and destinations will
experience increasing delay; this added delay and loss should be shared fairly
among the flows.
• Quality of Service (QoS) – Packets may have different requirements for delay,
loss sensitivity, and throughput. A node might transmit higher-priority packets
ahead of low-priority packets in the same queue.
• Reservations – One way to avoid congestion and provide assured service. The
network agrees to give a defined QoS as long as the traffic flow is within contract
parameters; if network resources are inadequate, a new reservation is denied.
4. Explain frame relay congestion control.
Frame Relay Congestion Control
Minimize frame discard
Maintain QoS (per-connection bandwidth)
Minimize monopolization of network
Simple to implement, little overhead
Minimal additional network traffic
Resources distributed fairly
Limit spread of congestion
Operate effectively regardless of flow
Have minimal impact on other systems in the network
Minimize variance in QoS
Congestion Avoidance with Explicit Signaling
Two general strategies considered:
Hypothesis 1: Congestion always occurs slowly, almost always at egress nodes
– forward explicit congestion avoidance
Hypothesis 2: Congestion grows very quickly in internal nodes and requires quick
action
– backward explicit congestion avoidance.
Explicit Signaling Response
Network Response
– each frame handler monitors its queuing behavior and takes action
– use FECN/BECN bits
– some/all connections notified of congestion
User (end-system) Response
– receipt of BECN/FECN bits in frame
– BECN at sender: reduce transmission rate
– FECN at receiver: notify peer (via LAPF or higher layer) to restrict flow
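The sender-side response to BECN can be sketched as a simple rate-adjustment rule. The halving and 10% recovery factors below are illustrative assumptions, not values from the Frame Relay standard.

```python
# Sketch of a sender's response to BECN: back off sharply when a
# congestion notification arrives, probe back up slowly otherwise.
# The adjustment factors and floor are illustrative only.

def adjust_rate(rate_bps: float, becn_seen: bool,
                cir_bps: float, access_rate_bps: float) -> float:
    if becn_seen:
        return max(rate_bps * 0.5, cir_bps * 0.25)  # multiplicative decrease
    return min(rate_bps * 1.1, access_rate_bps)     # bounded gradual increase

rate = 1_000_000.0
rate = adjust_rate(rate, becn_seen=True,
                   cir_bps=256_000, access_rate_bps=2_000_000)
# rate is halved to 500_000.0
```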
Frame Relay Traffic Rate Management Parameters
Committed Information Rate (CIR)
– Average data rate in bits/second that the network agrees to support for a connection
Data Rate of User Access Channel (Access Rate)
– Fixed rate link between user and network (for network access)
Committed Burst Size (Bc)
– Maximum amount of data that the network agrees to transfer, under normal
conditions, over a measurement interval
Excess Burst Size (Be)
– Maximum amount of data, above Bc, that the network will attempt to transfer
over the interval
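These parameters define a simple policing rule at the network ingress, which can be sketched as follows (note that CIR = Bc / T for measurement interval T): data within Bc is carried, data between Bc and Bc + Be is carried with DE set, and anything beyond is discarded. The Bc/Be values below are illustrative.

```python
# Sketch of per-interval rate enforcement using Bc and Be:
# bits within Bc are transmitted normally, bits between Bc and
# Bc + Be are transmitted with the DE bit set, and bits beyond
# Bc + Be are discarded at the ingress node.

def police(bits_so_far: int, frame_bits: int, bc: int, be: int) -> str:
    total = bits_so_far + frame_bits
    if total <= bc:
        return "transmit"
    if total <= bc + be:
        return "transmit, set DE"
    return "discard"

bc, be = 64_000, 32_000          # illustrative per-interval commitments
assert police(0, 48_000, bc, be) == "transmit"
assert police(48_000, 32_000, bc, be) == "transmit, set DE"
assert police(90_000, 10_000, bc, be) == "discard"
```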
Relationship of Congestion Parameters
Frame Relay Architecture
Figure shows the protocol stacks of frame relay and the comparison with X.25.
Frame relay involves the physical layer and a data link control protocol known as LAPF
(Link Access Protocol for Frame Mode Bearer Services).
Frame relay involves the use of logical connections, called virtual connections. A
separate virtual connection is devoted to call control. The setting up and tearing down of
virtual connections is done over this permanent control-oriented virtual connection.
Other virtual connections are used to carry data only.
Frame relay significantly reduces the amount of work required of the network. However,
error recovery is left to the higher layers since a frame in error is simply discarded.
5.Congestion control in packet switching network.
How does congestion occur?
Congestion occurs when the packets being transmitted through a network begin to
approach the packet-handling capacity of the network.
For each node (data network switch and router), there is a queue for each outgoing
channel.
• As a rule of thumb, when a channel for which packets are queuing becomes more
than 80% utilised, the queue length grows at an alarming rate.
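The 80% rule of thumb follows from elementary queueing analysis: for an M/M/1 queue (in the Kendall notation introduced earlier), the mean number of items in the system is ρ/(1 − ρ), which grows steeply as utilisation ρ approaches 1.

```python
# Mean number of items in an M/M/1 system as a function of
# utilisation rho, showing why queues explode past ~80% load.

def mean_items_in_system(rho: float) -> float:
    return rho / (1.0 - rho)

for rho in (0.5, 0.8, 0.9, 0.95):
    print(f"rho={rho:.2f}  mean items in system={mean_items_in_system(rho):.1f}")
# rho=0.50 gives 1.0; rho=0.80 already gives 4.0; rho=0.95 gives 19.0
```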
Effect of Congestion
Input and output at a given node:
• Any given node has a number of I/O ports attached.
• There are 2 buffers at each port: one to hold arriving packets, and one to hold
packets waiting to depart.
• Buffers can be of fixed size, or allocated from a pool of memory available for all
buffering activities.
• As each packet arrives, the node examines it, makes a routing decision, and then
moves the packet to the appropriate output buffer.
Effects of congestion:
• If packets arrive faster than they can be cleared from the outgoing buffers,
memory is eventually exhausted.
• Two general strategies: (1) discard incoming packets for which there is no
available buffer space; (2) exercise flow control over neighbors so that the
traffic flow remains manageable.
• But since the neighbors also manage their queues, congestion can propagate
throughout a region or the entire network.
Flow control is therefore needed that manages traffic on the entire network.
6. Congestion control mechanisms.
A number of congestion control techniques exist.
Backpressure
• Similar to backpressure in fluids flowing down a pipe.
• The flow restriction at the end propagates backward (against flow of data traffic) to
sources, which are restricted in flow of new packets into network.
• It is limited to connection-oriented networks that allow hop-by-hop flow control
(X.25-based packet-switching networks provide this; neither ATM nor frame relay
has the capability).
• In IP-based internets, there is no such facility for regulating the flow of data.
Choke Packet
• It is a control packet generated at a congested node.
• It is transmitted back to a source node to restrict traffic.
• An example is the Internet Control Message Protocol (ICMP) Source Quench
message (SQM).
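The choke-packet idea can be sketched as a source that cuts its rate on each choke packet and holds back from increasing for a while, modelled loosely on ICMP Source Quench behaviour. The rate factors and hold time are illustrative assumptions.

```python
# Sketch of a source's response to a choke packet: cut the sending
# rate on each choke packet received, and hold off on any increase
# for a fixed period afterwards. Constants are illustrative only.
import time

class Source:
    def __init__(self, rate_pps: float):
        self.rate_pps = rate_pps
        self.hold_until = 0.0

    def on_choke_packet(self, hold_s: float = 1.0) -> None:
        self.rate_pps *= 0.5                         # cut rate on each choke packet
        self.hold_until = time.monotonic() + hold_s  # freeze increases for a while

    def maybe_increase(self) -> None:
        if time.monotonic() >= self.hold_until:
            self.rate_pps *= 1.25                    # cautiously restore rate

s = Source(rate_pps=400.0)
s.on_choke_packet()
assert s.rate_pps == 200.0
```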
Implicit Congestion Signaling (ICS)
• Two phenomena accompany network congestion: (1) transmission is delayed, and
(2) packets are discarded.
• Congestion control on the basis of implicit signaling is the responsibility of end
systems; it requires that all sources be able to detect congestion and reduce flow
accordingly.
• ICS is effective in a connectionless (datagram) configuration, such as an IP-based
internet.
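Implicit detection at an end system can be sketched as follows: with no explicit signal from the network, the source infers congestion from growing round-trip delay or a missing acknowledgement, much as TCP does. The thresholds here are illustrative assumptions.

```python
# Sketch of implicit congestion detection at an end system: infer
# congestion from a lost packet (no ACK) or from round-trip delay
# rising well above the baseline. Thresholds are illustrative.

def infer_congestion(rtt_samples: list, acked: bool,
                     baseline_rtt: float) -> bool:
    if not acked:
        return True                      # a discarded packet implies congestion
    recent = sum(rtt_samples[-3:]) / min(3, len(rtt_samples))
    return recent > 2.0 * baseline_rtt   # sharply rising delay implies congestion

assert infer_congestion([0.05, 0.05, 0.25], acked=True, baseline_rtt=0.05)
assert not infer_congestion([0.05, 0.06, 0.05], acked=True, baseline_rtt=0.05)
```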
Explicit Congestion Signaling
• It typically operates over connection-oriented networks and controls the flow of
packets over individual connections.
• Two directions:
1. Backward: send an alert bit or packet back to the source.
2. Forward: send an alert bit or packet forward to the end user, which then
notifies the source.
• Three categories
1. Binary: a bit is set in a data packet as it is forwarded by the congested node.
2. Credit based: these schemes are based on providing the source an explicit
credit of packets it may send.
3. Rate based: to control congestion, any node along the path of the connection
can reduce the data rate permitted to the source.
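The credit-based category can be sketched as a sender that may only transmit while it holds credit granted by the receiver; sending consumes credit, and the sender stops at zero until more credit arrives. The class and method names are illustrative.

```python
# Sketch of credit-based explicit congestion signaling: the receiver
# grants the sender an explicit credit of packets it may send; each
# send consumes one credit, and the sender blocks at zero credit.

class CreditSender:
    def __init__(self):
        self.credit = 0

    def grant(self, n: int) -> None:
        self.credit += n          # credit arrives from the receiver

    def send(self) -> bool:
        if self.credit == 0:
            return False          # must wait for more credit
        self.credit -= 1
        return True

tx = CreditSender()
tx.grant(2)
sent = [tx.send(), tx.send(), tx.send()]
assert sent == [True, True, False]
```

By withholding credit, a receiver (or network) throttles the sender without ever having to discard in-flight packets.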