
Lecture 10: Error Control Coding I


Page 1: Lecture 10: Error Control Coding I

Lecture 10: Error Control Coding I

Chapter 8 – Coding and Error Control

From: Wireless Communications and Networks by William Stallings, Prentice Hall, 2002.

Page 2: Lecture 10: Error Control Coding I

2

Diversity makes use of redundancy in terms of using multiple signals to improve signal quality.

Error control coding, as discussed in this lecture and next, uses redundancy by sending extra data bits to detect and correct bit errors.

Two types of coding in wireless systems (see beginning of Lecture 7):

Source coding – compressing a data source using efficient encoding of the information.

Channel coding – encoding information so that bit errors can be overcome.

Now we focus on channel coding, also called error control coding, to overcome the bit errors of a channel.

Page 3: Lecture 10: Error Control Coding I

3

I. Error Detection

Three approaches can be used to cope with data transmission errors.

1. Using codes to detect errors.

2. Using codes to correct errors – called Forward Error Correction (FEC).

3. Mechanisms to automatically retransmit corrupted packets – called Automatic Repeat Request (ARQ).

Page 4: Lecture 10: Error Control Coding I

4

Bit errors and packet errors. Given the following definitions:

Pb = probability of a single bit error

P1 = probability that a frame of F bits arrives with no errors

P2 = probability that a frame arrives with one or more errors

Assuming bit errors are independent, P1 = (1 - Pb)^F and P2 = 1 - P1.

Page 5: Lecture 10: Error Control Coding I

5

Example: F = 500 bytes (4000 bits), Pb = 10^-6 (a very good performing wireless link)

P1 = (1 - 10^-6)^4000 = 0.9960

P2 = 1 - (1 - 10^-6)^4000 = 0.0040

So 4 out of every 1000 packets have errors. At 100 kbps, this means 6 packets per minute are in error.

Given 100 kilobyte data files: 200 packets per file. The probability that a file is corrupted is

1 - (P1)^200 = 1 - (0.9960)^200 = 0.551
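A short Python sketch (not from the slides) that reproduces these numbers, assuming independent bit errors:

    Pb = 1e-6                      # probability of a single bit error
    F = 4000                       # frame length in bits (500 bytes)
    P1 = (1 - Pb) ** F             # frame arrives with no errors, about 0.9960
    P2 = 1 - P1                    # frame arrives with one or more errors, about 0.0040
    frames_per_minute = 100_000 / F * 60              # at 100 kbps: 1500 frames per minute
    errored_frames_per_minute = P2 * frames_per_minute  # about 6 per minute
    file_corrupted = 1 - P1 ** 200                    # 200 frames per 100 kB file, about 0.551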

Page 6: Lecture 10: Error Control Coding I

6

If we do nothing, then a significant amount of our data will be corrupted. It is more likely that a complete file would be transmitted with corruption than without corruption.

Page 7: Lecture 10: Error Control Coding I

7

Error Detection

Basic idea: Add extra bits to a block of data bits.

Data block: k bits. Error detection: (n-k) more bits, based on some algorithm for creating the extra bits.

The result is a frame (data link layer packets are called “frames”) of n bits.

Page 8: Lecture 10: Error Control Coding I

8

Parity Check

Simplest scheme: Add one bit to the frame. The value of the bit is chosen so as to make the number of 1’s even (or odd, depending on the type of parity).

Example 7-bit character: 1110001. What 8-bit character results with even parity?

What 8-bit character results with odd parity?
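A minimal Python sketch of the parity computation; appending the parity bit at the end of the character is an assumption for illustration, not something the slide specifies:

    def add_parity(bits, even=True):
        # Choose the extra bit so the total number of 1s is even (or odd).
        ones = sum(bits)
        parity = ones % 2 if even else (ones + 1) % 2
        return bits + [parity]

    char = [1, 1, 1, 0, 0, 0, 1]       # the 7-bit character from the slide (four 1s)
    add_parity(char, even=True)        # -> [1, 1, 1, 0, 0, 0, 1, 0]
    add_parity(char, even=False)       # -> [1, 1, 1, 0, 0, 0, 1, 1]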

Page 9: Lecture 10: Error Control Coding I

9

The receiver separates the data bits and the error control bits.

Then it performs the same algorithm again to see if the received bits are what they should have been.

Hopefully, if errors have occurred, the packet can be retransmitted or corrected.

“Hopefully” because there are always some error patterns that could go undetected.

Page 10: Lecture 10: Error Control Coding I

10

So, this can be used to detect errors. For example, a received 10100010 (when using even parity) would be invalid. But what types of errors would NOT be detected?

Noise sometimes comes in impulses or during a deep fade – so one cannot assume bit errors occur one at a time. Parity checks, therefore, have limited usefulness.

Page 11: Lecture 10: Error Control Coding I

11

Cyclic Redundancy Check (CRC)

Uses more than a single parity bit. Adds an (n-k)-bit frame check sequence.

Page 12: Lecture 10: Error Control Coding I

12

Takes the source data and creates a longer sequence of bits that is valid only if it is exactly divisible by a predetermined number.

This uses modulo-2 arithmetic: binary addition with no carries, which is the same as the exclusive-OR (XOR) operation. It can also be implemented with polynomial operations.

Page 13: Lecture 10: Error Control Coding I

13

Using the following definitions.

T = n-bit frame to be transmitted
D = k-bit block of data, or message; the first k bits of T
F = (n-k)-bit frame check sequence; the last (n-k) bits of T
P = pattern of n-k+1 bits; this is the predetermined divisor

The goal is for T/P to have no remainder.

So, T = 2^(n-k) D + F. This means D is shifted left by (n-k) bits and F is added to it. Example: D = 11111, F = 101 (n-k = 3): T = 11111000 + 101 = 11111101

Page 14: Lecture 10: Error Control Coding I

14

Suppose we divide 2^(n-k) D by P:

2^(n-k) D / P = Q + R/P

where Q is the quotient and R is the remainder. So, we can make our frame check sequence F = R.

So, T = 2^(n-k) D + R. Then

T/P = 2^(n-k) D / P + R/P = Q + R/P + R/P

In modulo-2 arithmetic, any number added to itself is zero, so R/P + R/P = 0.

The result, T/P = Q, is a division with no remainder.
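The following Python sketch carries out the modulo-2 long division described above. The divisor P = 110101 is the one used in the slides' example (page 17); the data block D below is an assumed value for illustration, not taken from the slides.

    def crc_remainder(data_bits, divisor_bits):
        # Append (n-k) zeros to D (i.e., form 2^(n-k) * D), then divide by P
        # using modulo-2 arithmetic (XOR) and return the (n-k)-bit remainder.
        n_k = len(divisor_bits) - 1
        reg = list(data_bits) + [0] * n_k
        for i in range(len(data_bits)):
            if reg[i] == 1:
                for j, d in enumerate(divisor_bits):
                    reg[i + j] ^= d
        return reg[-n_k:]

    D = [1, 0, 1, 0, 0, 0, 1, 1, 0, 1]   # example k = 10 bit data block (assumed)
    P = [1, 1, 0, 1, 0, 1]               # divisor, n-k+1 = 6 bits
    F = crc_remainder(D, P)              # frame check sequence, works out to [0, 1, 1, 1, 0]
    T = D + F                            # transmitted frame; dividing T by P leaves no remainder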

Page 15: Lecture 10: Error Control Coding I

15

Page 16: Lecture 10: Error Control Coding I

16

Page 17: Lecture 10: Error Control Coding I

17

Using polynomial operations. In the example, P = 110101. This could also be represented as a polynomial:

P(X) = X^5 + X^4 + X^2 + 1

from the bit positions in P, of order 5 down to 0.

Using polynomial operations indirectly performs modulo-2 arithmetic.

CRC codes fail when a sequence of bit errors creates a different bit sequence that is also divisible by P.

Page 18: Lecture 10: Error Control Coding I

18

It can be shown that the following errors can be detected with suitably chosen values of P, and the related P(X).

All single-bit errors will be detected.

All double-bit errors.

Any odd number of errors.

Any burst error for which the length of the burst is less than or equal to (n-k). This means the burst is no longer than the frame check sequence. A burst error is a contiguous set of bits in which all of the bits are in error.

Page 19: Lecture 10: Error Control Coding I

19

Four versions of P(X) are widely used, in different bit lengths. For example, CRC-16:

P(X) = X^16 + X^15 + X^2 + 1

Checksums: The Internet Protocol uses a checksum approach.

Only on the packet header information. All of the 16-bit words in the header are added together. A checksum is then inserted into the header. At the receiver, all of the 16-bit words are added again, and this time (because the checksum was inserted) the result should sum to zero.
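A small Python sketch of this idea, using 16-bit one's-complement addition (end-around carry); the header words below are made-up values for illustration:

    def ones_complement_sum(words):
        # Add 16-bit words, folding any carry out of bit 15 back into the sum.
        s = 0
        for w in words:
            s += w
            s = (s & 0xFFFF) + (s >> 16)
        return s

    header = [0x4500, 0x0073, 0x0000, 0x4000, 0x4011]      # example 16-bit header words
    checksum = ~ones_complement_sum(header) & 0xFFFF       # value inserted into the header
    # At the receiver, adding every word including the checksum gives 0xFFFF,
    # whose complement is zero, which is the "sums to zero" check described above.
    receiver_check = ~ones_complement_sum(header + [checksum]) & 0xFFFF   # -> 0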

Page 20: Lecture 10: Error Control Coding I

20

II. Block Error Correction Codes

Error detection requires blocks to be retransmitted. This is inadequate for wireless communication for two reasons.

1. The bit error rate on a wireless link can be quite high, which would result in a large number of retransmissions.

2. In some cases, especially satellite links, the propagation delay is very long compared to the transmission time of a single frame. The result is a very inefficient system.

Page 21: Lecture 10: Error Control Coding I

21

It is desirable to be able to correct errors without requiring retransmission, using the bits that were transmitted.

Page 22: Lecture 10: Error Control Coding I

22

On transmission, the k-bit block of data is mapped into an n-bit block called a _____________. Using an FEC (____________________) encoder.

A codeword may or may not be similar to those from the CRC approach above.

It may come from taking the original data and adding extra bits (as with CRC).

Or it may be created using a completely new set of bits.

The codewords are longer than the original data. Then the block is transmitted.

Page 23: Lecture 10: Error Control Coding I

23

At the receiver, comparing the received codeword with the set of valid codewords can result in one of five possible outcomes.

1. There are no bit errors.

The received codeword is the same as the transmitted codeword.

The corresponding source data for that codeword is output from the decoder.

2. An error is detected and can be corrected.

For certain bit error patterns, it is clear that the received codeword is close to a valid codeword. It is assumed that the nearby codeword was sent, and that the source data for that codeword should be used.

Page 24: Lecture 10: Error Control Coding I

24

3. An error is detected but cannot be corrected.

The received codeword is close to two or more valid codewords. One cannot assume which codeword was the original. So, it is decided only that an error has been detected and the frame should be retransmitted.

4. An error is not detected.

An error pattern occurs that transforms the transmitted codeword into another valid codeword. The receiver assumes no error has occurred. The output from the decoder is the source data for a wrong codeword. Hopefully other application processes will also check the validity of the data.

Page 25: Lecture 10: Error Control Coding I

25

5. An error is detected and is erroneously corrected.

An error pattern creates a new codeword that is close to a valid codeword, but the one it is close to is not the one that was sent. Therefore, the decoder outputs the source data for a wrong codeword.

Page 26: Lecture 10: Error Control Coding I

26

Block Code Principles

Hamming Distance: Given two example sequences

v1 = 011011, v2 = 110001

the Hamming distance is defined as the number of bits in which they disagree:

d(v1, v2) = 3
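A one-line Python version of this definition (a sketch, assuming the two words have equal length):

    def hamming_distance(v1, v2):
        # Count the bit positions in which the two equal-length words disagree.
        return sum(b1 != b2 for b1, b2 in zip(v1, v2))

    hamming_distance("011011", "110001")   # -> 3, matching d(v1, v2) above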

Page 27: Lecture 10: Error Control Coding I

27

Example: Given k = 2, n = 5. The four valid codewords are 00000, 00111, 11110, and 11001 (data block 00 maps to codeword 00000).

Suppose the following is received: 00100. This is not a valid codeword, so an error is detected.

Page 28: Lecture 10: Error Control Coding I

28

Can the error be corrected? We cannot be sure: 1, 2, 3, 4, or even 5 bits may have been corrupted by noise. However, only one bit is different between this and 00000:

d(00100, 00000) = 1

Two bit changes would have been required between this and 00111:

d(00100, 00111) = 2

Three bits with 11110: d(00100, 11110) = 3. And four bits with 11001: d(00100, 11001) = 4.

Page 29: Lecture 10: Error Control Coding I

29

Thus the most likely codeword that was sent is 00000, and the output from the decoder is then the data block 00. But this could be a failed correction, in which case some other data block should have been decoded.

Decoding rule: Use the closest codeword (in terms of Hamming distance).

Why is it okay to do this? How much less likely are two errors than one error? Assume BER = 10^-3.

Page 30: Lecture 10: Error Control Coding I

30

Now, for all cases… There are five bits in the codeword, so there are 2^5 = 32 possible received codewords. Four are valid; the other 28 would come from bit errors. See next page.

In many cases, a possible received codeword is a Hamming distance of 1 from a valid codeword.

Page 31: Lecture 10: Error Control Coding I

31

Page 32: Lecture 10: Error Control Coding I

32

But in eight cases, a received codeword would be a distance of 2 away from two valid codewords. The receiver does not know how to choose. A correction decision is undecided. An error is detected but not correctable.

So, we can conclude that in this case an error of 1 bit is correctable, but not errors of two bits.

Page 33: Lecture 10: Error Control Coding I

33

Block code design: With an (n, k) code, there are 2^k valid codewords out of a possible 2^n codewords.

The ratio of redundant data bits, (n-k)/k, is called the ______________ of the code. The ratio k/n is called the ____________.

For example, a ½ rate code requires double the bandwidth of an uncoded system for the same net data rate.

½ of the bits are for error control purposes.

Page 34: Lecture 10: Error Control Coding I

34

Example: A 2/5 rate code over a 30 kbps channel.

Net data rate?

Data rate for error control codes?
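One way to work the question above (a quick sketch, not from the slides):

    channel_rate_kbps = 30
    code_rate = 2 / 5                                       # k/n
    net_data_rate = code_rate * channel_rate_kbps           # 12 kbps of user data
    error_control_rate = channel_rate_kbps - net_data_rate  # 18 kbps of check bits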

Page 35: Lecture 10: Error Control Coding I

35

For a code consisting of codewords denoted wi, the minimum Hamming distance is defined as

dmin = the minimum of d(wi, wj) over all pairs of distinct codewords wi, wj

For the example: v1 = 011011, v2 = 110001, dmin = 3. The maximum number of guaranteed correctable errors is

tcorr = ⌊(dmin - 1) / 2⌋

The symbol ⌊ ⌋ means to round down to the next lowest integer.

Page 36: Lecture 10: Error Control Coding I

36

From the example, tcorr = (3-1)/2 = 1, so one bit error can be corrected. A two-bit error will cause either an undecided correction or a failed correction.

The number of errors that can be detected is

tdet = dmin - 1

From the example, tdet = 3-1 = 2, so all two-bit errors will be detected. An error of as few as three bits might cause a failed detection.
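A short Python sketch that computes dmin, tcorr, and tdet for the (5, 2) example code used earlier:

    from itertools import combinations

    def hamming_distance(v1, v2):
        return sum(b1 != b2 for b1, b2 in zip(v1, v2))

    codewords = ["00000", "00111", "11110", "11001"]       # the (5, 2) example code
    d_min = min(hamming_distance(a, b)
                for a, b in combinations(codewords, 2))    # -> 3
    t_corr = (d_min - 1) // 2    # guaranteed correctable errors -> 1
    t_det = d_min - 1            # guaranteed detectable errors  -> 2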

Page 37: Lecture 10: Error Control Coding I

37

The following design considerations are involved with devising codewords.

For values of n and k, we would like the largest possible value of dmin.

The code should be relatively easy to encode and decode, with minimal memory and processing time.

We would like the number of extra bits, (n-k), to be _____ to preserve bandwidth.

We would like the number of extra bits, (n-k), to be _____ to reduce the error rate.

The last two objectives are in conflict.

Page 38: Lecture 10: Error Control Coding I

38

Coding Gain

Coding can allow us to use lower power (a smaller Eb/N0) to achieve the same error rate we would have had without using correction bits, since errors can be corrected.

The right-hand curve in the figure (next page) is for an uncoded modulation system. Above 5.4 dB, a smaller bit error rate can be achieved using a ½ rate code for the same Eb/N0.

Page 39: Lecture 10: Error Control Coding I

39

[Figure: bit error rate versus Eb/N0 for an uncoded system and a ½ rate coded system; the curves cross near Eb/N0 = 5.4 dB]

Page 40: Lecture 10: Error Control Coding I

40

The coding gain of a code is defined as the reduction, in dB, of the Eb/N0 required to obtain the same error rate. For example, for a BER of 10^-6, 11 dB is needed with the ½ rate code, as compared to 13.77 dB without the coding.

This is a coding gain of 2.77 dB. What is the coding gain at a BER of 10^-3?

Next lecture: Second part of error correction codes – design of specific block codes and convolutional codes.