CN Notes by Mohan


  • 7/31/2019 CN Notes by Mohan

    1/17

1) ISO / OSI MODEL:

ISO, the International Standards Organization, is a multinational body established in 1947 and dedicated to worldwide agreement on international standards. OSI refers to Open System Interconnection, an ISO standard that covers all aspects of network communication. Here an open system is a model that allows any two different systems to communicate regardless of their underlying architecture. Note that OSI is not a protocol; it is a model.

    OSI MODEL

The open system interconnection model is a layered framework. It has seven separate but interrelated layers, each with its own unique responsibilities.

    ARCHITECTURE

The architecture of the OSI model is layered. The seven layers are:

1. Physical layer
2. Data link layer
3. Network layer
4. Transport layer
5. Session layer
6. Presentation layer
7. Application layer

The figure shown below shows the layers involved when a message sent from A to B passes through some intermediate devices. Both devices A and B contain all seven layers of the architecture, while the intermediate nodes have only the physical, data link, and network layers. In every device, each layer gets its services from the layer just below it. When a device is connected to another device, each layer of one device communicates with the corresponding layer of the other device. This is known as a peer-to-peer process. Each layer in the sender adds its own information to the message; this information is known as headers and trailers. Information added at the beginning of the data is called a header, whereas information added at the end is called a trailer. Headers are added at layers 2, 3, 4, 5, and 6; a trailer is added at layer 2. Each layer is connected to the next layer through an interface. Each interface defines what information and services a layer must provide for the layer above it.

    ORGANIZATION OF LAYERS

The seven layers are arranged in three subgroups:

1. Network Support Layers 2. User Support Layers 3. Intermediate Layer

Network Support Layers:

The physical, data link, and network layers come under this group. They deal with the physical aspects of the data, such as electrical specifications, physical connections, physical addressing, and transport timing and reliability.

User Support Layers:

The session, presentation, and application layers come under this group. They deal with interoperability between software systems.

Intermediate Layer:

The transport layer is the intermediate layer between the network support and the user support layers.

    FUNCTIONS OF THE LAYERS

    PHYSICAL LAYER

    The physical layer coordinates the functions

    required to transmit a bit stream over a physical

    medium. It deals with the mechanical and electrical

    specifications of the interface and the transmission

    medium.

    The functions are,

    1. Physical Characteristics of Interfaces and Media:

    It defines the electrical and mechanical

    characteristics of the interface and the media.

    It defines the types of transmission medium

    2. Representation of Bits

To transmit the stream of bits, they must be encoded into signals. The physical layer defines the type of encoding, whether electrical or optical.

    3. Data Rate

    It defines the transmission rate i.e. the number of

    bits sent per second.

    4. Synchronization of Bits

    The sender and receiver must be synchronized at bit

    level.


The transport layer creates a connection between the two end ports. This involves three steps:

1. Connection establishment
2. Data transmission
3. Connection release

4. Flow Control

Flow control is performed at the end-to-end level.

5. Error Control

Error control is performed at the end-to-end level.

    SESSION LAYER

It acts as a dialog controller. It establishes, maintains, and synchronizes the interaction between communicating devices. The responsibilities are:

1. Dialog Control

The session layer allows two systems to enter into a dialog. It allows communication between the devices.

2. Synchronization

It adds synchronization points into a stream of bits.

    PRESENTATION LAYER

The presentation layer is responsible for the semantics and the syntax of the information exchanged. The responsibilities are:

1. Translation

Different systems use different encoding systems. The presentation layer is responsible for interoperability between these different systems. At the sender side, the presentation layer translates the information from the sender-dependent format into a common format. Likewise, at the receiver side, the presentation layer translates the information from the common format into the receiver-dependent format.

2. Encryption

To ensure security, encryption/decryption is used. Encryption transforms the original information into another form; decryption retrieves the original information from the encrypted data.

3. Compression

It is used to reduce the number of bits to be transmitted.

    APPLICATION LAYER

The application layer enables the user to access the network. It provides interfaces between the users and the network. The responsibilities are:

1. Network Virtual Terminal

It is a software version of a physical terminal and allows a user to log onto a remote host.

2. File Transfer, Access, and Management

It allows a user to access files in a remote computer, retrieve files, and manage or control files in a remote computer.

3. Mail Services

It provides the basis for e-mail forwarding and storage.

4. Directory Services

It provides distributed database sources and access to global information about various objects and services.

    2) DIFFERENCE BETWEEN TCP/IP AND OSI

    Number of layers: TCP/IP defines only 5 layers

    (although these are not specifically mentioned in

    the standards)

    Functions performed at a given layer:

    In the OSI model each layer performs specific

    functions.

    In TCP/IP different protocols may be defined within

    a layer, each performing different functions. What is

    common about a set of protocols at the same layer

    is that they share the same set of support protocols

    at the next lower layer.

    Interface between adjacent layers:


    In the OSI model, a protocol at a given layer may be

    substituted by a new one without impacting on

    adjacent layers.

    In TCP/IP the strict use of all layers is not

    mandated.

    3) ERROR DETECTION AND CORRECTION

    ERROR

Networks must be able to transfer data from one device to another with complete accuracy, but whenever data is transmitted there is a chance that some part of a message will be altered in transit rather than arriving intact. Many factors, such as line noise, can alter or wipe out one or more bits of a given data unit. This is known as an error.

    TYPES OF ERRORS

    There are two types. They are,

    1. Single Bit Error

    It means that only one bit of a given data unit is

    changed from 1 to

    0 or from 0 to 1.

    2. Burst Bit Error

    It means that two or more bits in the data unit have

    changed.

    ERROR DETECTION

A burst error does not necessarily mean that the errors occur in consecutive bits. The length of the burst error is measured from the first corrupted bit to the last corrupted bit; some bits in between may not be corrupted. For reliable communication, errors must be detected and corrected. For error detection many mechanisms are used.

    REDUNDANCY

One error detection mechanism is to send every data unit twice. The receiving device is then able to do a bit-for-bit comparison between the two versions of the data. Any discrepancy would indicate an error, and an appropriate correction mechanism could be used. But instead of repeating the entire data stream, a shorter group of bits may be appended to the end of each unit. This technique is called redundancy because the extra bits are redundant to the information. They are discarded as soon as the accuracy of the transmission has been determined.

    TYPES

    Four types of redundancy checks are used in data

    communications. They are,

    1. vertical redundancy check (VRC)

    2. longitudinal redundancy check (LRC)

    3. cyclic redundancy check (CRC)

    4. checksum

    VERTICAL REDUNDANCY CHECK:

It is also known as a parity check. In this technique a redundant bit, called a parity bit, is appended to every data unit so that the total number of 1s in the unit, including the parity bit, becomes even for even parity or odd for odd parity. In even parity, the data unit is passed through an even-parity generator, which counts the number of 1s in the data unit. If there is an odd number of 1s, it sets the parity bit to 1 to make the number of 1s even; if the data unit has an even number of 1s, it sets the parity bit to 0 to keep the number of 1s even. When the unit reaches its destination, the receiver puts all bits through an even-parity checking function. If it counts an even number of 1s, there is no error; otherwise there is an error.

    EXAMPLE:

    The data is : 01010110

    The VRC check : 010101100
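The even-parity generator and checker described above can be sketched in a few lines of Python (an illustrative sketch, not part of the original notes):

```python
def even_parity_append(data: str) -> str:
    """Sender: append a parity bit so the total number of 1s is even."""
    parity = "1" if data.count("1") % 2 == 1 else "0"
    return data + parity

def even_parity_check(unit: str) -> bool:
    """Receiver: the unit (data + parity bit) must contain an even number of 1s."""
    return unit.count("1") % 2 == 0
```

Here even_parity_append("01010110") yields "010101100", matching the example above; flipping any single bit of that unit makes the check fail.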

In odd parity, the data unit is passed through an odd-parity generator, which counts the number of 1s in the data unit. If there is an even number of 1s, it sets the parity bit to 1 to make the number of 1s odd; if the data unit has an odd number of 1s, it sets the parity bit to 0 to keep the number of 1s odd. When the unit reaches its destination, the receiver puts all bits through an odd-parity checking function. If it counts an odd number of 1s, there is no error; otherwise there is an error.

EXAMPLE

The data is: 01010110

The VRC check: 010101101

    LONGITUDINAL REDUNDANCY CHECK

In this technique, a block of bits is organized in a table (rows and columns). For example, instead of sending a block of 32 bits, we organize them in a table made of four rows and eight columns. We then calculate the parity bit for each column and create a new row of eight bits, which are the parity bits for the whole block.
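The column-wise parity row can be sketched as follows (assuming even parity per column, as in VRC above):

```python
def lrc(block: str, cols: int = 8) -> str:
    """Arrange the bit block in rows of `cols` bits and return the
    column-wise even-parity row that is appended to the block."""
    rows = [block[i:i + cols] for i in range(0, len(block), cols)]
    return "".join(
        "1" if sum(int(row[c]) for row in rows) % 2 == 1 else "0"
        for c in range(cols)
    )
```

For a 32-bit block this produces the eight-bit parity row that the sender appends; the receiver recomputes it over the received rows and compares.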

    CYCLIC REDUNDANCY CHECK

CRC is based on binary division. A sequence of redundant bits, called the CRC remainder, is appended to the end of a data unit so that the resulting data unit becomes exactly divisible by a second, predetermined binary number. At its destination, the incoming data unit is divided by the same number. If at this step there is no remainder, the data unit is assumed to be intact and is therefore accepted. A remainder indicates that the data unit has been changed in transit and therefore must be rejected. Here, the remainder is the CRC. It must have exactly one bit fewer than the divisor, and appending it to the end of the data string must make the resulting bit sequence exactly divisible by the divisor.

First, a string of n-1 0s is appended to the data unit, where the number of 0s is one less than the number of bits in the divisor, which is n bits. Then the newly elongated data unit is divided by the divisor using a process called binary (modulo-2) division. The remainder is the CRC. The CRC replaces the appended 0s at the end of the data unit. The data unit arrives at the receiver first, followed by the CRC. The receiver treats the whole string as one data unit and divides it by the same divisor that was used to find the CRC remainder. If the remainder is 0, the data unit is error free; otherwise it has an error and must be discarded.
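A sketch of modulo-2 division in Python (the 4-bit divisor used below is an arbitrary example, not one fixed by the notes):

```python
def mod2_div(bits: str, divisor: str) -> str:
    """Binary (modulo-2) division: return the final n-1 bit remainder."""
    n = len(divisor) - 1
    rem = list(bits)
    for i in range(len(bits) - n):
        if rem[i] == "1":                    # divide only when the leading bit is 1
            for j, d in enumerate(divisor):
                rem[i + j] = "0" if rem[i + j] == d else "1"   # bitwise XOR
    return "".join(rem[-n:])

def crc_generate(data: str, divisor: str) -> str:
    """Sender: append n-1 zeros, divide, replace the zeros with the remainder."""
    crc = mod2_div(data + "0" * (len(divisor) - 1), divisor)
    return data + crc

def crc_check(received: str, divisor: str) -> bool:
    """Receiver: the whole received string must leave a zero remainder."""
    return mod2_div(received, divisor) == "0" * (len(divisor) - 1)
```

For example, data 100100 with divisor 1101 gives CRC 001, so 100100001 is transmitted, and the receiver's division of that string leaves remainder 000.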

    CHECKSUM

The error detection method used by the higher layer protocols is called checksum. It consists of two parts:

1. checksum generator 2. checksum checker

    Checksum Generator:

In the sender, the checksum generator subdivides the data unit into equal segments of n bits. These segments are added together using one's complement arithmetic in such a way that the total is also n bits long. That total is then complemented and appended to the end of the data unit.

    Checksum Checker:

The receiver subdivides the data unit as above, adds all segments together, and complements the result. If the extended data unit is intact, the total value found by adding the data segments and the checksum field, once complemented, should be zero. Otherwise the packet contains an error and the receiver rejects it.

    EXAMPLE see external ref.
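The generator and checker can be sketched as follows (8-bit segments are assumed here purely for illustration):

```python
def ones_complement_sum(segments, n=8):
    """Add n-bit segments, wrapping any carry back into the sum (one's complement)."""
    total = 0
    mask = (1 << n) - 1
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> n)   # end-around carry
    return total

def checksum_generate(segments, n=8):
    """Sender: the checksum is the complement of the one's-complement sum."""
    return ones_complement_sum(segments, n) ^ ((1 << n) - 1)

def checksum_verify(segments, csum, n=8):
    """Receiver: segments plus checksum must sum to all 1s (complement zero)."""
    return ones_complement_sum(segments + [csum], n) == (1 << n) - 1
```

Any corrupted segment changes the one's-complement sum, so the receiver's total no longer comes out as all 1s and the packet is rejected.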

    ERROR CORRECTION

    Error correction is handled in two ways. In one,

    when an error is discovered, the receiver can have

    the sender retransmit the entire data unit. In the

    other, a receiver can use an error correcting code,

    which automatically corrects certain errors.

    Types of error correction:

    1. Single bit error correction 2. Burst bit error

    correction

    Single Bit Error Correction

To correct a single-bit error in an ASCII character, the error correction code must determine which of the seven bits has changed. In this case we have to distinguish eight different states: no error, error in position 1, error in position 2, error in position 3, error in position 4, error in position 5, error in position 6, error in position 7. It looks as if a three-bit redundancy code should be adequate, because three bits can show eight different states. But what if an error occurs in the redundancy bits themselves? Seven bits of data plus three bits of redundancy equal 10 bits, which require eleven states (no error plus an error in any of the 10 positions), and three bits can show only eight. So three bits are not adequate.


To calculate the number of redundancy bits (r) required to correct a given number of data bits (m), we must find a relationship between m and r. If the total number of bits in a transmittable unit is m+r, then r must be able to indicate at least m+r+1 different states: one state means no error, and m+r states indicate the location of an error in each of the m+r positions. So m+r+1 states must be discoverable by r bits, and r bits can indicate 2^r different states. Therefore 2^r must be greater than or equal to m+r+1:

2^r >= m + r + 1

The relationship between the number of data bits (m), the number of redundancy bits (r), and the total bits (m+r), as given by 2^r >= m + r + 1:

m = 1: r = 2, total bits = 3
m = 2: r = 3, total bits = 5
m = 3: r = 3, total bits = 6
m = 4: r = 3, total bits = 7
m = 5: r = 4, total bits = 9
m = 6: r = 4, total bits = 10
m = 7: r = 4, total bits = 11

(see external ref for the worked example)
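The relation 2^r >= m + r + 1 can be solved for the smallest r with a short sketch:

```python
def redundancy_bits(m: int) -> int:
    """Smallest number of redundancy bits r satisfying 2**r >= m + r + 1."""
    r = 1
    while 2 ** r < m + r + 1:
        r += 1
    return r
```

For seven data bits this gives r = 4, which is why an ASCII character needs an 11-bit Hamming codeword.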

    Hamming Code:

The Hamming code can be applied to data units of any length and uses the relationship between data bits and redundancy bits described above. In the 11-bit unit the redundancy bits are placed at positions 1, 2, 4, and 8. The combinations used to calculate each of the four r values for a seven-bit data sequence are as follows:

r1: 1, 3, 5, 7, 9, 11  r2: 2, 3, 6, 7, 10, 11  r3: 4, 5, 6, 7  r4: 8, 9, 10, 11

Here, the r1 bit is calculated using all bit positions whose binary representation includes a 1 in the rightmost position (0001, 0011, 0101, 0111, 1001, and 1011). The r2 bit is calculated using all bit positions with a 1 in the second position (0010, 0011, 0110, 0111, 1010, and 1011); r3 uses positions with a 1 in the third position (0100, 0101, 0110, and 0111); and r4 uses positions with a 1 in the fourth position (1000, 1001, 1010, and 1011).

    Calculating the r Values:

In the first step, we place each bit of the original character in its appropriate position in the 11-bit unit. Then we calculate the even parities for the various bit combinations. The parity value of each combination is the value of the corresponding r bit. For example, r1 is calculated to provide even parity for the combination of bits 3, 5, 7, 9, and 11.
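The placement and parity calculation above can be sketched in Python (assuming, for illustration, that the seven data bits fill positions 3, 5, 6, 7, 9, 10, 11 in order):

```python
def hamming_encode(data: str) -> str:
    """Build the 11-bit unit: data bits at the non-power-of-two positions,
    even-parity r bits at positions 1, 2, 4, 8."""
    code = [0] * 12                        # index 0 unused; positions 1..11
    for pos, bit in zip([3, 5, 6, 7, 9, 10, 11], data):
        code[pos] = int(bit)
    for r in (1, 2, 4, 8):                 # r covers every position with that bit set
        code[r] = sum(code[p] for p in range(1, 12) if p & r) % 2
    return "".join(map(str, code[1:]))

def hamming_correct(code: str) -> str:
    """Recompute the parities; the failing checks sum to the error position."""
    bits = [0] + [int(b) for b in code]
    error = sum(r for r in (1, 2, 4, 8)
                if sum(bits[p] for p in range(1, 12) if p & r) % 2 == 1)
    if error:
        bits[error] ^= 1                   # reverse the corrupted bit
    return "".join(map(str, bits[1:]))
```

Flipping any single bit of an encoded unit makes exactly the parity checks covering that position fail, and the failing r values add up to the position of the corrupted bit.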

Error Detection and Correction:

Example: see external ref. Once the erroneous bit is identified, the receiver can reverse its value and correct the error.

Burst Bit Error Correction:

    A hamming code can be designed to correct burst

    errors of certain length. The number of redundancy

    bits required to make these corrections, however, is

    dramatically higher than that required for single bit

    errors. To correct double bit errors, for example,

    we must take into consideration that the two bits

    can be a combination of any two bits in the entire

    sequence. Three bit correction means any three bits

    in the entire sequence and so on.

    FUNCTIONS OF DATA LINK LAYER:

    The data link layer is responsible for the following

    functions. They are,

    1. Line discipline or Access control 2. Flow control 3.

    Error control 4. Framing

    LINE DISCIPLINE

Communication requires at least two devices, one to send and one to receive. If both devices are ready to send information and put their signals on the link at the same time, the two signals collide and both are destroyed. To avoid such a situation the data link layer uses a mechanism called line discipline. Line discipline coordinates the link system: it determines which device can send and when it can send. It answers the question, who should send now? Line discipline can serve in two ways:

1. enquiry / acknowledgement (ENQ / ACK) 2. poll / select (POLL / SELECT)

    ENQ / ACK:

This method is used in peer-to-peer communication, that is, where there is a dedicated link between two devices. The initiator first transmits a frame called an enquiry (ENQ), asking whether the receiver is available to receive data. The receiver must answer either with an acknowledgement (ACK) frame if it is ready to accept, or with a negative acknowledgement (NAK) frame if it is not ready. If the response is positive, the initiator is free to send its data; otherwise it waits and tries again. Once all its data have been transmitted, the sending system finishes with an end of transmission (EOT) frame.

    POLL / SELECT

This method of line discipline works with topologies where one device is designated as primary and the other devices are secondary. Whenever a multipoint link consists of a primary device and multiple secondary devices using a single transmission line, all exchanges must be made through the primary device, even when the ultimate destination is a secondary. The primary device controls the link; the secondary devices follow its instructions. Only the primary device determines which device is allowed to use the channel at a given time. The primary asks the secondaries if they have anything to send; this function is called polling. The primary tells the target secondary to get ready to receive; this function is called selecting.

    POLL:

This function is used by the primary device to solicit transmissions from the secondary devices. The secondaries are not allowed to transmit data unless asked by the primary device. When the primary is ready to receive data, it must ask (poll) each device in turn whether it has anything to send. If a secondary has data to transmit, it sends the data frame; otherwise it sends a negative acknowledgment (NAK), and the primary then polls the next secondary. When the response is positive (a data frame), the primary reads the frame and returns an acknowledgment (ACK). There are two possibilities to terminate the transmission: either the secondary sends all its data, finishing with an EOT frame, or the primary says time is up. Then the primary polls the remaining devices.

    SELECT:

This mode is used whenever the primary device has something to send. It alerts the intended secondary device to get ready to receive data. Before sending the data it sends a select (SEL) frame; the receiver returns an ACK frame, and then the primary sends the data.

    FLOW CONTROL AND ERROR CONTROL

    FLOW CONTROL

It refers to a set of procedures used to restrict the amount of data flow between sending and receiving stations. It tells the sender how much data it can transmit before it must wait for an acknowledgement from the receiver. Two methods are used:

1. stop and wait 2. sliding window

    STOP AND WAIT:

In this method the sender waits for an acknowledgment after every frame it sends; only after an acknowledgment has been received does the sender send the next frame. The advantage is simplicity; the disadvantage is inefficiency.

    SLIDING WINDOW:

    In this method, the sender can transmit several

    frames before needing an acknowledgment. The

    receiver acknowledges only some of the frames,

    using a single ACK to confirm the receipt of multiple

    data frames. The sliding window refers to imaginary

    boxes at both the sender and receiver. This window

    provides the upper limit on the number of frames

    that can be transmitted before requiring an

acknowledgement. To identify each frame, the sliding window scheme introduces sequence numbers. The frames are numbered modulo n, from 0 to n-1, and the size of the window is n-1. Here the size of the window is 7 (n = 8) and the frames are numbered 0, 1, 2, 3, 4, 5, 6, 7.

    SENDER WINDOW:

At the beginning, the sender's window contains n-1 frames. As frames are sent out, the left boundary of the window moves inward, shrinking the size of the window. Once an ACK is received, the window expands at the right boundary to allow in a number of new frames equal to the number of frames acknowledged by that ACK.
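The sender-window bookkeeping can be sketched as follows (n = 8, matching the numbers used above; this is an illustrative model, not a full protocol):

```python
class SlidingWindowSender:
    """Sender side of a sliding window: n sequence numbers, window size n - 1."""
    def __init__(self, n: int = 8):
        self.n = n
        self.next_seq = 0            # sequence number of the next frame to send
        self.in_flight = 0           # frames sent but not yet acknowledged

    def can_send(self) -> bool:
        return self.in_flight < self.n - 1     # window shrinks as frames go out

    def send(self) -> int:
        seq = self.next_seq
        self.next_seq = (self.next_seq + 1) % self.n   # numbers run 0 .. n-1
        self.in_flight += 1
        return seq

    def ack(self, frames_acked: int) -> None:
        self.in_flight -= frames_acked         # window expands on the right
```

After seven frames (0 through 6) are sent the window is full; a single ACK covering three frames reopens three slots, and sending resumes with sequence number 7, then wraps to 0.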

    EXAMPLE:

    ERROR CONTROL


    Error control is implemented in such a way that

    every time an error is detected, a negative

    acknowledgement is returned and the specified

    frame is retransmitted. This process is called

    automatic repeat request (ARQ).

Error control is implemented together with the flow control mechanism, so there are two types of error control:

1. stop and wait ARQ 2. sliding window ARQ

    STOP AND WAIT ARQ:

    It is a form of stop and wait flow control, extended

    to include retransmission of data in case of lost or

    damaged frames.

    DAMAGED FRAME:

    When a frame is discovered by the receiver to

    contain an error, it returns a NAK frame and the

    sender retransmits the last frame.

    LOST DATA FRAME:

The sender is equipped with a timer that starts every time a data frame is transmitted. If the frame is lost in transmission, the receiver can never acknowledge it. The sending device waits for an ACK or NAK frame until its timer goes off, then it retransmits the last data frame.

    LOST ACKNOWLEDGEMENT:

The data frame was received by the receiver, but the acknowledgement was lost in transmission. The sender waits until the timer goes off, then retransmits the data frame. The receiver gets a duplicate copy of the data frame, so it knows the acknowledgement was lost and discards the second copy.
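The retransmission loop common to all three cases can be sketched as follows (the `channel` callback is a stand-in for the link: it returns True when an ACK comes back and False when the frame, or its acknowledgement, is lost and the timer expires):

```python
def stop_and_wait_send(frames, channel, max_retries=5):
    """Send each frame in order, retransmitting it until its ACK arrives."""
    for frame in frames:
        for _ in range(max_retries):
            if channel(frame):     # ACK received: move on to the next frame
                break
            # NAK received or timer expired: retransmit the same frame
        else:
            raise RuntimeError("no ACK after repeated retransmissions")
```

In the lost-ACK case the receiver actually gets the frame twice; the duplicate-discard step happens on the receiver side and is abstracted away by the `channel` callback here.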

    SLIDING WINDOW ARQ

It is used to send multiple frames at a time; the number of frames depends on the window size. The sliding window is an imaginary box that resides on both the sender and receiver sides. It has two types:

1. go-back-n ARQ 2. selective reject ARQ

    GO-BACK-N ARQ:

In this method, if one frame is lost or damaged, all frames sent since the last acknowledged frame are retransmitted.

    DAMAGED FRAME:

    LOST FRAME:

    LOST ACK:

    SELECTIVE REPEAT ARQ

Selective repeat ARQ retransmits only the damaged or lost frames instead of sending multiple frames. The selective retransmission increases the efficiency of transmission and is more suitable for noisy links. The receiver must have a sorting mechanism to put retransmitted frames back into the proper order.
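The difference between the two retransmission strategies can be shown with a toy sketch:

```python
def go_back_n_retransmit(sent, lost):
    """Go-Back-N: resend the lost frame and every frame sent after it."""
    return sent[sent.index(lost):]

def selective_repeat_retransmit(sent, lost):
    """Selective Repeat: resend only the lost or damaged frame."""
    return [lost]
```

If frames 0 through 4 were sent and frame 2 is lost, Go-Back-N resends 2, 3, and 4, while Selective Repeat resends only frame 2, at the cost of receiver-side sorting.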


    DAMAGED FRAME:

    LOST FRAME

    LOST ACK

15) CONGESTION CONTROL

Congestion in a network may occur if the load on the network (the number of packets sent to the network) is greater than the capacity of the network (the number of packets the network can handle). Congestion control refers to the mechanisms and techniques that control congestion and keep the load below the capacity. Congestion in a network or internetwork occurs because routers and switches have queues (buffers) that hold packets before and after processing. Congestion control covers techniques and mechanisms that can either prevent congestion before it happens or remove congestion after it has happened.

    Congestion control categories:

1) Open-Loop Congestion Control:

In open-loop congestion control, policies are applied to prevent congestion before it happens. In these mechanisms, congestion control is handled by either the source or the destination.

    Retransmission Policy:

    If the sender feels that a sent packet is lost or

    corrupted, the packet needs to be retransmitted.

    Retransmission in general may increase congestion

    in the network. However, a good retransmission

    policy can prevent congestion. The retransmission

    policy and the retransmission timers must be

    designed to optimize efficiency and at the same

    time prevent congestion. For example, the

    retransmission policy used by TCP (explained later)

    is designed to prevent or alleviate congestion. The

    type of window at the sender may also affect

    congestion. The Selective Repeat window is better

    than the Go-Back-N window for congestion control.

    In the Go-Back-N window, when the timer for a

    packet times out, several packets may be resent,

    although some may have arrived safe and sound at

    the receiver. This duplication may make the

    congestion worse. The Selective Repeat window, on

    the other hand, tries to send the specific packets

    that have been lost or corrupted.


Acknowledgment Policy:

The acknowledgment policy imposed by the receiver may also affect congestion. If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion. A receiver may send an acknowledgment only if it has a packet to be sent or when a special timer expires, or it may decide to acknowledge only N packets at a time.

Discarding Policy:

A good discarding policy by the routers may prevent congestion and at the same time not harm the integrity of the transmission. For example, in audio transmission, if the policy is to discard less sensitive packets when congestion is likely to happen, the quality of sound is still preserved and congestion is prevented or alleviated.

Admission Policy:

An admission policy, which is a quality-of-service mechanism, can also prevent congestion in virtual-circuit networks. Switches in a flow first check the resource requirement of a flow before admitting it to the network. A router can deny establishing a virtual-circuit connection if there is congestion in the network or a possibility of future congestion.

    2) Closed-Loop Congestion Control

    Closed-loop congestion control mechanisms try to

    alleviate congestion after it happens.

    Back Pressure:

The technique of backpressure refers to a congestion control mechanism in which a congested node stops receiving data from the immediate upstream node or nodes. This may cause the upstream node or nodes to become congested, and they in turn reject data from their own upstream nodes, and so on. Backpressure is node-to-node congestion control that starts at a node and propagates, in the opposite direction of data flow, toward the source. The backpressure technique can be applied only to virtual-circuit networks, in which each node knows the upstream node from which a flow of data is coming.

Node III in the figure has more input data than it can handle. It drops some packets in its input buffer and informs node II to slow down. Node II, in turn, may become congested because it is slowing down its output flow of data. If node II is congested, it informs node I to slow down, which in turn may create congestion there. If so, node I informs the source of data to slow down. This, in time, alleviates the congestion. Note that the pressure on node III is moved backward to the source to remove the congestion.

    Choke Packet:

    A choke packet is a packet sent by a node to the

    source to inform it of congestion.

    Difference between the backpressure and choke

    packet methods:

    In backpressure, the warning is from one node to its

    upstream node, although the warning may

    eventually reach the source station. In the choke

    packet method, the warning is from the router,

    which has encountered congestion, to the source

    station directly. The intermediate nodes through

    which the packet has traveled are not warned.

iii) Implicit Signaling:

    In implicit signaling, there is no communication

    between the congested node or nodes and the

source. The source guesses that there is congestion somewhere in the network from other symptoms. For example, when a source sends several packets and there is no acknowledgment for a while, one assumption is that the network is congested. The delay in receiving an acknowledgment is interpreted as congestion in the network; the source should slow down.

    iv) Explicit Signaling:

    The node that experiences congestion can explicitly

    send a signal to the source or destination. The

    explicit signaling method, however, is different from

    the choke packet method. In the choke packet

    method, a separate packet is used for this purpose;

    in the explicit signaling method, the signal is

    included in the packets that carry data.

    Backward Signaling: A bit can be set in a packet

    moving in the direction opposite to the congestion.


    This bit can warn the source that there is congestion

    and that it needs to slow down to avoid the

    discarding of packets.

Forward Signaling: A bit can be set in a packet moving in the direction of the congestion. This bit can warn the destination that there is congestion. The receiver in this case can use policies, such as slowing down the acknowledgments, to alleviate the congestion.

    Congestion Control in TCP

    Congestion Window:

The sender has two pieces of information: the receiver-advertised window size (rwnd) and the congestion window size (cwnd). The actual size of the window is the minimum of these two:

    Actual window size = minimum (rwnd, cwnd)

    TCP's general policy for handling congestion is

    based on three phases:

    slow start

    congestion avoidance

    congestion detection.

    Slow Start: Exponential Increase:

    One of the algorithms used in TCP congestion

    control is called slow start. This algorithm is based

    on the idea that the size of the congestion window

    (cwnd) starts with one maximum segment size

    (MSS). The MSS is determined during connection

    establishment by using an option of the same

    name. The size of the window increases one MSS

    each time an acknowledgment is received. In the

    slow-start phase, the sender starts with a very slow

    rate of transmission, but increases the rate rapidly

    to reach a threshold. When the threshold is reached,

    the data rate is reduced to avoid congestion. Finally

    if congestion is detected, the sender goes back to the slow-start or congestion avoidance phase based

    on how the congestion is detected.

    The rate is exponential as shown below:

    The sender starts with cwnd =1 MSS. This means

    that the sender can send only one segment. After

    receipt of the acknowledgment for segment 1, the

    size of the congestion window is increased by 1,

    which means that cwnd is now 2. Now two more

    segments can be sent. When each acknowledgment

    is received, the size of the window is increased by 1

    MSS. When all seven segments are acknowledged,

    cwnd = 8.
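The per-acknowledgment rule above can be checked with a short sketch (not from the notes; the function name is illustrative): cwnd starts at 1 MSS and gains 1 MSS per ACK, which reproduces the cwnd = 8 figure after seven acknowledgments.

```python
def slow_start(acks):
    """Congestion window (in MSS units) after `acks` acknowledgments of slow start."""
    cwnd = 1  # slow start begins with one maximum segment size
    for _ in range(acks):
        cwnd += 1  # one MSS added per acknowledgment received
    return cwnd

print(slow_start(acks=7))  # 8, matching the example above
```

Because a full window of segments generates a full window of ACKs, this per-ACK increase doubles cwnd every round-trip, which is why the growth is called exponential.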

    Congestion Avoidance:

    Additive Increase: If we start with the slow-start

    algorithm, the size of the congestion window

    increases exponentially. To avoid congestion before

    it happens, one must slow down this exponential

    growth. TCP defines another algorithm called

    congestion avoidance, which undergoes an additive

    increase instead of an exponential one. When the

    size of the congestion window reaches the slow-

    start threshold, the slow-start phase stops and the

    additive phase begins. In this algorithm, each time


    the whole window of segments is acknowledged

    (one round), the size of the congestion window is

    increased by 1. The rate of growth is therefore additive rather than exponential.
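Putting the two phases together, a minimal sketch (names and the ssthresh value are illustrative): cwnd doubles per round while below the slow-start threshold, then grows by 1 MSS per round.

```python
def cwnd_growth(ssthresh, rounds):
    """cwnd (in MSS units) after each round: exponential below ssthresh, additive above."""
    cwnd, history = 1, [1]
    for _ in range(rounds):
        if cwnd < ssthresh:
            cwnd *= 2      # slow start: doubles each round
        else:
            cwnd += 1      # congestion avoidance: +1 MSS per round
        history.append(cwnd)
    return history

print(cwnd_growth(ssthresh=8, rounds=6))  # [1, 2, 4, 8, 9, 10, 11]
```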

    Congestion Detection:

    Multiplicative Decrease: If congestion occurs, the

    congestion window size must be decreased. The

    only way the sender can guess that congestion has

    occurred is by the need to retransmit a segment.

    However, retransmission can occur in one of two

    cases:

    when a timer times out or when three ACKs are

    received. In both cases, the size of the threshold is

    dropped to one-half, a multiplicative decrease.

    Most TCP implementations have two reactions:

    1. If a time-out occurs, there is a stronger possibility

    of congestion; a segment has probably been

    dropped in the network, and there is no news about

    the sent segments. In this case TCP reacts strongly:

    a. It sets the value of the threshold to one-half of

    the current window size.

    b. It sets cwnd to the size of one segment.

    c. It starts the slow-start phase again.

    2. If three ACKs are received, there is a weaker

    possibility of congestion; a segment may have been

    dropped, but some segments after that may have

    arrived safely since three ACKs are received. This is

    called fast retransmission and fast recovery. In this

    case, TCP has a weaker reaction:

    a. It sets the value of the threshold to one-half of

    the current window size.

    b. It sets cwnd to the value of the threshold (some

    implementations add three segment sizes to the

    threshold).

    c. It starts the congestion avoidance phase
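The two reactions can be summarized in a sketch (function and event names are illustrative; real TCP stacks implement this inside the kernel):

```python
def on_congestion(cwnd, ssthresh, event):
    """Return (new_cwnd, new_ssthresh, next_phase) after a loss signal."""
    ssthresh = max(cwnd // 2, 1)       # multiplicative decrease in both cases
    if event == "timeout":             # strong signal: restart from slow start
        return 1, ssthresh, "slow start"
    elif event == "three_dup_acks":    # weaker signal: fast recovery
        # some implementations add three segment sizes to the threshold here
        return ssthresh, ssthresh, "congestion avoidance"

print(on_congestion(cwnd=20, ssthresh=16, event="timeout"))         # (1, 10, 'slow start')
print(on_congestion(cwnd=20, ssthresh=16, event="three_dup_acks"))  # (10, 10, 'congestion avoidance')
```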

    16) QUALITY OF SERVICE

    Quality of service (QoS) is an internetworking issue

    and can be defined as something a flow seeks to

    attain.

    Flow Characteristics:

    Four types of characteristics are attributed to a flow:

    reliability, delay, jitter, and bandwidth.

    i) Reliability

    Reliability is a characteristic that a flow needs. Lack

    of reliability means losing a packet or

    acknowledgment, which entails retransmission.

    ii) Delay

    Source-to-destination delay is another flow characteristic. Again, applications can tolerate delay

    in different degrees. In this case, telephony, audio

    conferencing, video conferencing, and remote log-

    in need minimum delay, while delay in file transfer

    or e-mail is less important.

    iii) Jitter

    Jitter is the variation in delay for packets belonging

    to the same flow. For example, if four packets

    depart at times 0, 1, 2, and 3 and arrive at 20, 21, 22, and 23, all have the same delay, 20 units of time. On the

    other hand, if the above four packets arrive at 21,

    23, 21, and 28, they will have different delays: 21, 22,

    19, and 25. For applications such as audio and

    video, the first case is completely acceptable; the

    second case is not.

    Jitter is defined as the variation in the packet delay.

    High jitter means the difference between delays is

    large; low jitter means the variation is small.
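The delay figures in the example above follow directly from subtracting departure times from arrival times:

```python
# Delays are arrival time minus departure time for each packet of the flow.
departures = [0, 1, 2, 3]
case1 = [20, 21, 22, 23]   # arrivals, first scenario
case2 = [21, 23, 21, 28]   # arrivals, second scenario

delays1 = [a - d for a, d in zip(case1, departures)]
delays2 = [a - d for a, d in zip(case2, departures)]
print(delays1)  # [20, 20, 20, 20] -> constant delay, no jitter
print(delays2)  # [21, 22, 19, 25] -> varying delay, high jitter
```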

    iv) Bandwidth


    Different applications need different bandwidths. In

    video conferencing we need to send millions of bits

    per second to refresh a color screen while the total

    number of bits in an e-mail may not reach even a

    million.

    TECHNIQUES TO IMPROVE QoS

    Techniques to improve the quality of service are:

    scheduling, traffic shaping, resource reservation, and admission control.

    1) Scheduling:

    Packets from different flows arrive at a switch or

    router for processing. A good scheduling technique

    treats the different flows in a fair and appropriate

    manner. Several scheduling techniques are

    designed to improve the quality of service. Three of

    them are:

    a) FIFO queuing b) priority queuing c) weighted

    fair queuing.

    a) FIFO Queuing:

    In first-in, first-out (FIFO) queuing, packets wait in a

    buffer (queue) until the node (router or switch) is

    ready to process them. If the average arrival rate is

    higher than the average processing rate, the queue

    will fill up and new packets will be discarded.

    b) Priority Queuing

    In priority queuing, packets are first assigned to a priority class. Each priority class has its own queue.

    The packets in the highest-priority queue are

    processed first. Packets in the lowest-priority queue

    are processed last.

    A priority queue can provide better QoS than the FIFO queue because higher-priority traffic, such as

    multimedia, can reach the destination with less

    delay.

    c) Weighted Fair Queuing:

    A better scheduling method is weighted fair

    queuing. In this technique, the packets are still

    assigned to different classes and admitted to

    different queues. The queues, however, are

    weighted based on the priority of the queues;

    higher priority means a higher weight. The system

    processes packets in each queue in a round-robin

    fashion with the number of packets selected from

    each queue based on the corresponding weight.
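A weighted round-robin service of this kind can be sketched as follows (queue contents and weights are illustrative):

```python
from collections import deque

def weighted_fair_service(queues, weights):
    """Serve queues round-robin, taking up to `weight` packets per queue per pass."""
    order = []
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(min(w, len(q))):
                order.append(q.popleft())
    return order

queues = [deque(["A1", "A2", "A3"]), deque(["B1", "B2"])]
print(weighted_fair_service(queues, weights=[2, 1]))
# ['A1', 'A2', 'B1', 'A3', 'B2']
```

With weights 2 and 1, the first queue gets twice the service of the second in each round, which is exactly the proportionality the technique aims for.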

    2) Traffic Shaping:

    Traffic shaping is a mechanism to control the

    amount and the rate of the traffic sent to the

    network. Two techniques can shape traffic: leaky

    bucket and token bucket.

    a) Leaky Bucket


    If a bucket has a small hole at the bottom, the water

    leaks from the bucket at a constant rate as long as

    there is water in the bucket. The rate at which the

    water leaks does not depend on the rate at which

    the water is input to the bucket unless the bucket is

    empty. The input rate can vary, but the output rate remains constant. Similarly, in networking, a

    technique called leaky bucket can smooth out

    bursty traffic. Bursty chunks are stored in the bucket

    and sent out at an average rate.

    A simple leaky bucket implementation is shown in

    Figure. A FIFO queue holds the packets. If the

    traffic consists of fixed-size packets , the process

    removes a fixed number of packets from the queue

    at each tick of the clock. If the traffic consists of

    variable-length packets, the fixed output rate must be based on the number of bytes or bits.

    The following is an algorithm for variable-length

    packets:

    1. Initialize a counter to n at the tick of the clock.

    2. If n is greater than the size of the packet, send the

    packet and decrement the

    counter by the packet size. Repeat this step until n

    is smaller than the packet size.

    3. Reset the counter and go to step 1.
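One clock tick of this algorithm can be sketched as follows (packet sizes in bytes are illustrative):

```python
from collections import deque

def leaky_bucket_tick(queue, n):
    """One clock tick: send queued packets while the byte counter exceeds the head packet's size."""
    counter, sent = n, []
    while queue and counter > queue[0]:
        size = queue.popleft()
        counter -= size          # decrement counter by the packet size
        sent.append(size)
    return sent                  # counter is reset to n at the next tick

q = deque([200, 400, 450, 300])
print(leaky_bucket_tick(q, n=1000))  # [200, 400]; the 450-byte packet waits for the next tick
```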

    b) Token Bucket:

    The leaky bucket is very restrictive. It does not

    credit an idle host. For example, if a host is not

    sending for a while, its bucket becomes empty. Now

    if the host has bursty data, the leaky bucket allows

    only an average rate. The time when the host was

    idle is not taken into account. On the other hand,

    the token bucket algorithm allows idle hosts to

    accumulate credit for the future in the form of

    tokens. For each tick of the clock, the system sends

    n tokens to the bucket. The system removes one

    token for every cell (or byte) of data sent. For

    example, if n is 100 and the host is idle for 100 ticks,

    the bucket collects 10,000 tokens. Now the host can

    consume all these tokens in one tick with 10,000 cells, or the host takes 1000 ticks with 10 cells per

    tick. In other words, the host can send bursty data

    as long as the bucket is not empty.
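The token bucket rule can be sketched as a small class (names are illustrative): n tokens are added per tick, one token is removed per cell sent, and sending is allowed only while tokens remain.

```python
class TokenBucket:
    def __init__(self, rate):
        self.rate = rate      # tokens added to the bucket per clock tick
        self.tokens = 0

    def tick(self):
        self.tokens += self.rate

    def send(self, cells):
        """Send up to `cells` cells; one token is consumed per cell."""
        allowed = min(cells, self.tokens)
        self.tokens -= allowed
        return allowed        # cells actually sent

bucket = TokenBucket(rate=100)
for _ in range(100):          # host idle for 100 ticks
    bucket.tick()
print(bucket.send(10000))     # 10000: the accumulated credit permits the burst
```

This reproduces the example above: 100 tokens per tick over 100 idle ticks yields 10,000 tokens, enough for the whole burst in one tick.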

    3) Resource Reservation:

    A flow of data needs resources such as a buffer,

    bandwidth, CPU time, and so on. The quality of

    service is improved if these resources are reserved

    beforehand. One QoS model, called Integrated

    Services, depends heavily on resource reservation

    to improve the quality of service.

    4) Admission Control

    Admission control refers to the mechanism used by

    a router, or a switch, to accept or reject a flow based

    on predefined parameters called flow specifications.

    Before a router accepts a flow for processing, it

    checks the flow specifications to see if its capacity

    (in terms of bandwidth, buffer size, CPU speed, etc.)

    and its previous commitments to other flows can

    handle the new flow.

    14) Transmission Control Protocol (TCP)

    TCP is a connection oriented protocol; it creates a

    virtual connection between two TCPs to send data.

    In addition, TCP uses flow and error control

    mechanisms at the transport level

    TCP Services


    i) Process-to-Process Communication: TCP

    provides process-to-process communication using

    port numbers.

    Well-known ports used by TCP:

    ii) Stream Delivery Service:

    TCP is a stream-oriented protocol. TCP allows the

    sending process to deliver data as a stream of bytes

    and allows the receiving process to obtain data as a

    stream of bytes. TCP creates an environment in

    which the two processes seem to be connected by

    an imaginary "tube" that carries their data across

    the Internet. The sending process produces (writes

    to) the stream of bytes, and the receiving process

    consumes (reads from) them.

    iii) Sending and Receiving Buffers:

    Because the sending and the receiving processes

    may not write or read data at the same speed, TCP

    needs buffers for storage. There are two buffers, the

    sending buffer and the receiving buffer, one for

    each direction. One way to implement a buffer is to

    use a circular array of 1-byte locations.

    At the sending site, the buffer has

    three types of chambers. The white section contains

    empty chambers that can be filled by the sending

    process (producer). The gray area holds bytes that

    have been sent but not yet acknowledged. TCP

    keeps these bytes in the buffer until it receives an

    acknowledgment. The colored area contains bytes

    to be sent by the sending TCP.

    The operation of the buffer at the

    receiver site is simpler. The circular buffer is divided

    into two areas (shown as white and colored). The

    white area contains empty chambers to be filled by

    bytes received from the network. The colored

    sections contain received bytes that can be read by

    the receiving process. When a byte is read by the

    receiving process, the chamber is recycled and

    added to the pool of empty chambers.

    iv) Segments

    TCP groups a number of bytes together into a

    packet called a segment. TCP adds a header to each

    segment (for control purposes) and delivers the

    segment to the IP layer for transmission. The segments are encapsulated in IP datagrams and

    transmitted. This entire operation is transparent to

    the receiving process.


    v) Full-Duplex Communication

    TCP offers full-duplex service, in which data can flow

    in both directions at the same time. Each TCP then

    has a sending and receiving buffer, and segments

    move in both directions.

    vi) Connection-Oriented Service:

    TCP, unlike UDP, is a connection-oriented protocol.

    When a process at site A wants to send and receive

    data from another process at site B, the following

    occurs:

    1. The two TCPs establish a connection between

    them.

    2. Data are exchanged in both directions.

    3. The connection is terminated.

    Note that this is a virtual connection, not a physical

    connection.

    TCP is a reliable transport protocol. It uses an

    acknowledgment mechanism to check the safe and

    sound arrival of data.

    TCP Features:

    To provide the services mentioned, TCP has several features.

    a) Numbering System:

    Byte Number

    The bytes of data being transferred in each

    connection are numbered by TCP. TCP numbers all

    data bytes that are transmitted in a connection.

    Numbering is independent in each direction. When

    TCP receives bytes of data from a process, it stores

    them in the sending buffer and numbers them. The

    numbering does not necessarily start from 0.

    Instead, TCP generates a random number between

    0 and 2^32 - 1 for the number of the first byte.

    Sequence Number

    After the bytes have been numbered, TCP assigns a

    sequence number to each segment that is being

    sent. The sequence number for each segment is the

    number of the first byte carried in that segment.

    Acknowledgment Number

    Communication in TCP is full duplex; when a

    connection is established, both parties can send and

    receive data at the same time. Each party numbers

    the bytes, usually with a different starting byte

    number. The sequence number in each direction

    shows the number of the first byte carried by the segment. Each party also uses an acknowledgment

    number to confirm the bytes it has received.

    However, the acknowledgment number defines the

    number of the next byte that the party expects to

    receive. In addition, the acknowledgment number is

    cumulative, which means that the party takes the

    number of the last byte that it has received, safe

    and sound, adds 1 to it, and announces this sum as

    the acknowledgment number. The term cumulative

    here means that if a party uses 5643 as an

    acknowledgment number, it has received all bytes

    from the beginning up to 5642. Note that this does

    not mean that the party has received 5642 bytes

    because the first byte number does not have to

    start from 0.
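The numbering rules above can be sketched briefly (the byte counts and starting number here are illustrative): the sequence number of each segment is the number of its first byte, and the cumulative acknowledgment is the last byte received plus 1.

```python
def segment_sequence_numbers(first_byte, sizes):
    """Sequence number of each segment: the number of the first byte it carries."""
    seqs, seq = [], first_byte
    for size in sizes:
        seqs.append(seq)
        seq += size
    return seqs

# 5000 bytes, first byte numbered 10001, sent in three segments:
print(segment_sequence_numbers(10001, [1000, 2000, 2000]))  # [10001, 11001, 13001]

# Cumulative acknowledgment: all bytes up to 5642 received safe and sound.
print(5642 + 1)  # 5643, announced as the acknowledgment number
```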

    b) Flow Control

    TCP, unlike UDP, provides flow control. The receiver

    of the data controls the amount of data that are to

    be sent by the sender. This is done to prevent the

    receiver from being overwhelmed with data. The

    numbering system allows TCP to use a byte-oriented

    flow control.

    c) Error Control

    To provide reliable service, TCP implements an error

    control mechanism. Although error control

    considers a segment as the unit of data for error

    detection (loss or corrupted segments), error

    control is byte-oriented.

    d) Congestion Control:


    TCP, unlike UDP, takes into account congestion in

    the network. The amount of data sent by a sender is

    not only controlled by the receiver (flow control),

    but is also determined by the level of congestion in

    the network.

    TCP segment format:

    The segment consists of a 20- to 60-byte header,

    followed by data from the application program.

    Source port address: This is a 16-bit field that

    defines the port number of the application program

    in the host that is sending the segment.

    Destination port address: This is a 16-bit field that defines the port number of the application program

    in the host that is receiving the segment.

    Sequence number: This 32-bit field defines the

    number assigned to the first byte of data contained

    in this segment.

    Acknowledgment number: This 32-bit field

    defines the byte number that the receiver of the

    segment is expecting to receive from the other

    party.

    Header length : This 4-bit field indicates the

    number of 4-byte words in the TCP Header.

    Reserved: This is a 6-bit field reserved for future

    use.

    Control: This field defines 6 different control bits or

    flags.

    Window size : This field defines the size of the

    window, in bytes, that the other party must

    maintain.

    Checksum: This 16-bit field contains the checksum.

    Urgent pointer: This 16-bit field, which is valid only if the urgent flag is set, is used when the segment

    contains urgent data.

    Options : There can be up to 40 bytes of optional

    information in the TCP header.
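The fixed 20-byte header layout described above can be parsed with a short sketch using Python's struct module in network byte order (the sample field values are illustrative):

```python
import struct

def parse_tcp_header(data):
    """Unpack the fixed 20-byte TCP header into a field dictionary."""
    src, dst, seq, ack, off_flags, window, checksum, urgent = \
        struct.unpack("!HHIIHHHH", data[:20])
    return {
        "source_port": src,
        "dest_port": dst,
        "sequence": seq,
        "acknowledgment": ack,
        "header_length": (off_flags >> 12) * 4,  # 4-bit field counts 4-byte words
        "flags": off_flags & 0x3F,               # the 6 control bits
        "window_size": window,
        "checksum": checksum,
        "urgent_pointer": urgent,
    }

# Illustrative header: data offset 5 (20 bytes, no options), SYN+ACK flags set.
hdr = struct.pack("!HHIIHHHH", 80, 5000, 100, 200, 0x5012, 8192, 0, 0)
print(parse_tcp_header(hdr)["header_length"])  # 20
```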