G - Hamming Code-2


HAMMING CODES

Hamming code with matrix manipulation

Consider the four un-encoded information bits u1, u2, u3 and u4. This 1 x 4 matrix can be represented as:

[ u ] = [u1 u2 u3 u4 ]

The seven-bit encoded data is represented as;

[ v ] = [ v1 v2 v3 v4 v5 v6 v7 ]

The encoding process of the Hamming code can be represented as modulo-2 multiplication;

[ v1 v2 v3 v4 v5 v6 v7 ] = [ u1 u2 u3 u4 ] ⋅  | 1 1 0 1 0 0 0 |
                                              | 0 1 1 0 1 0 0 |
                                              | 1 1 1 0 0 1 0 |
                                              | 1 0 1 0 0 0 1 |

where the 4 x 7 matrix is known as a generator matrix, represented by [ G ].

Note: If we examine this multiplication operation carefully, it is consistent with Eq (1) (Note F).

EEE332 – G: Hamming code Part 2 1/18 October 2008


We can represent the encoding process by multiplying our

information vector [ u ] by a generator matrix [ G ] to obtain

the encoded bit sequence [ v ] .

[ v ] = [ u ] [G ] (2)

EXAMPLE 3: Encoding by matrix multiplication

A source outputs the four information bits 1100. These four bits are to be encoded using the Hamming code just described. Using the generator matrix in Eq (2), determine the encoded seven-bit pattern to be transmitted.

Solution
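As a sketch of the computation in NumPy (the generator matrix is the one given above):

```python
import numpy as np

# Generator matrix [G] of the (7, 4) Hamming code described above
G = np.array([
    [1, 1, 0, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0, 1],
])

u = np.array([1, 1, 0, 0])  # the four information bits from the source

v = u @ G % 2               # modulo-2 matrix multiplication, Eq (2)
print(v)                    # -> [1 0 1 1 1 0 0]
```

Note that the last four bits of v reproduce the information bits 1100, as expected for a systematic code.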


Let's take a closer look at the generator matrix [ G ] for our Hamming code. From the above:

[ G ] =  | 1 1 0 1 0 0 0 |
         | 0 1 1 0 1 0 0 |
         | 1 1 1 0 0 1 0 |
         | 1 0 1 0 0 0 1 |

We can see that the generator matrix has the following properties:

1. The matrix is k rows by n columns, where k = 4 is the number of information bits from the source, and n = 7 is the total number of encoded bits.

2. The last four columns of [ G ] form an identity matrix. The identity matrix maps the four information bits into the last four bits of the encoded (seven-bit) sequence.

3. The first three columns of [ G ] show the relationship between the parity (check) bits and the data bits.

4. The generator matrix is not unique. For example, we can develop an equally valid Hamming code by switching the first and second columns in [ G ].


Now let's deconstruct the generator matrix into two submatrices: a 4 x 3 parity submatrix [ P ] (which produces the parity bits) and a 4 x 4 identity submatrix [ I ].

         | 1 1 0 | 1 0 0 0 |
[ G ] =  | 0 1 1 | 0 1 0 0 |
         | 1 1 1 | 0 0 1 0 |
         | 1 0 1 | 0 0 0 1 |
           [ P ]    [ I ]

[ G ] = [ [ P ] [ I ]4x4 ]

Figure 1: Deconstruction of generator matrix [ G ] into the 4 x 3 parity submatrix [ P ] and the 4 x 4 identity submatrix [ I ]

Terminology

Systematic code – a code in which the information bits are repeated in a portion of the encoded sequence (the input data are embedded in the encoded output). A non-systematic code is one in which the output does not contain the input bits.


Systematic codes are desirable because they make it easy to extract the original data from the encoded sequence.

Block code – divides the bit stream from the source encoder into k-bit blocks. Each k-bit block is then encoded into an n-bit block prior to transmission.

The Hamming code that we've just developed is a block code which takes the data stream from the source encoder, divides it into four-bit blocks, and then encodes each four-bit block into a seven-bit block prior to transmission.

Block codes are described in terms of n and k, stated as (n, k). The code we've developed is called a (7, 4) Hamming code.

The code rate r is given as:

r = k / n

For our code r = 4/7, so the Hamming code above is sometimes also called a "rate four-sevenths" code.


Hamming code – (15, 11)

Suppose we want to develop a Hamming code that provides single-error detection and correction capabilities for a larger block of data.

Suppose we want to use four parity bits (instead of three) – we can produce eleven (11) different combinations of those four bits with two or more “1”s. Therefore, we can use four parity bits to provide single-error detection and correction capabilities for a block of 11 information bits. (Why 11 information data bits? – check previous matrix multiplication).

Thus the generator matrix for the (15, 11) Hamming code is given as follows;


[ G ] =  | 1 1 0 0 1 0 0 0 0 0 0 0 0 0 0 |
         | 1 0 1 0 0 1 0 0 0 0 0 0 0 0 0 |
         | 1 0 0 1 0 0 1 0 0 0 0 0 0 0 0 |
         | 0 1 1 0 0 0 0 1 0 0 0 0 0 0 0 |
         | 0 1 0 1 0 0 0 0 1 0 0 0 0 0 0 |
         | 0 0 1 1 0 0 0 0 0 1 0 0 0 0 0 |
         | 1 1 1 0 0 0 0 0 0 0 1 0 0 0 0 |
         | 1 1 0 1 0 0 0 0 0 0 0 1 0 0 0 |
         | 1 0 1 1 0 0 0 0 0 0 0 0 1 0 0 |
         | 0 1 1 1 0 0 0 0 0 0 0 0 0 1 0 |
         | 1 1 1 1 0 0 0 0 0 0 0 0 0 0 1 |

This new (15, 11) code is not as powerful as our (7, 4) code for protecting information, since it can protect against only a single-bit error in a block of 11 information bits.

We could get better protection by breaking the 11-bit information data into three four-bit blocks and using the (7, 4) code on each.

However, the new code has less overhead, i.e. only four parity bits are required for 11 information data bits.

If we break the data up into three four-bit blocks, we will end up with 3 x 3 = 9 parity bits.
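The overhead comparison above can be made concrete with a short sketch (plain arithmetic, no special library):

```python
# Code rate r = k/n for the two Hamming codes discussed above
r_7_4 = 4 / 7      # (7, 4): 3 parity bits per 4 information bits
r_15_11 = 11 / 15  # (15, 11): 4 parity bits per 11 information bits

print(f"(7, 4)   rate: {r_7_4:.3f}")    # -> 0.571
print(f"(15, 11) rate: {r_15_11:.3f}")  # -> 0.733

# Covering ~11 information bits with three (7, 4) blocks costs
# 3 x 3 = 9 parity bits, versus 4 parity bits for one (15, 11) block.
```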


Decoding the Hamming code

Use the (7, 4) code for decoding, error detection and correction

As discussed, the matrix [ G ] can be decomposed into a 4 x 3 parity matrix [ P ] and a 4 x 4 identity matrix [ I ]:

[ G ] = [ [ P ] ⋮ [ I ]4x4 ]

Let's consider forming a new 3 x 7 matrix [ H ]:

[ H ] = [ [ I ]3x3 ⋮ [ P ]T ]


Note that [ P ]T is a matrix of dimension 3 x 4; transposing turns the rows into columns and the columns into rows.

The matrix [ H ] is thus

         | 1 0 0 | 1 0 1 1 |
[ H ] =  | 0 1 0 | 1 1 1 0 |
         | 0 0 1 | 0 1 1 1 |
          [ I ]3x3   [ P ]T

Figure 2: Deconstruction of matrix [ H ] into the 3 x 3 identity submatrix [ I ] and the 3 x 4 transpose of the parity submatrix, [ P ]T

Now let [ r ] represent the received codeword:

[ r ] = [v1 v2 v3 v4 v5 v6 v7 ]

Suppose we post-multiply the received codeword by the transpose of [ H ]:


[ r ] [ H ]T = [ v1 v2 v3 v4 v5 v6 v7 ] ⋅  | 1 0 0 |
                                           | 0 1 0 |
                                           | 0 0 1 |
                                           | 1 1 0 |
                                           | 0 1 1 |
                                           | 1 1 1 |
                                           | 1 0 1 |

             = [ v1⊕v4⊕v6⊕v7   v2⊕v4⊕v5⊕v6   v3⊕v5⊕v6⊕v7 ]

The resulting 1 x 3 matrix, called the syndrome, tells us which expected check bits agree with their received counterparts and which do not.

v1 ⊕ v1ex = 0  if v1 = v1ex

v1 ⊕ v1ex = 1  if v1 ≠ v1ex

Similar results are obtained for v2 ⊕ v2ex and v3 ⊕ v3ex.
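As a sketch (assuming the [ H ] derived above), the syndrome of an error-free received codeword is [0 0 0]; here the received word is the valid codeword for the information bits 1100:

```python
import numpy as np

# Decoding matrix [H]^T (7 x 3) for the (7, 4) Hamming code above
Ht = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
    [1, 1, 1],
    [1, 0, 1],
])

r = np.array([1, 0, 1, 1, 1, 0, 0])  # valid codeword (u = 1100)
s = r @ Ht % 2                       # syndrome [s] = [r][H]^T mod 2
print(s)                             # -> [0 0 0]: all check bits agree
```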

As we’ve already seen, this agreement or disagreement is exactly what we need for single-bit error detection and correction in the received seven-bit sequence.

Let's use [ s ] to represent the three-bit syndrome. Thus, we can create a table similar to Table 2 (Note F).


[ s ]        Bit received in error
[ 0 0 0 ]    None
[ 1 0 0 ]    v1
[ 0 1 0 ]    v2
[ 0 0 1 ]    v3
[ 1 1 0 ]    v4
[ 0 1 1 ]    v5
[ 1 1 1 ]    v6
[ 1 0 1 ]    v7

What is [ H ]T? Let's look at it more deeply. Figure 3 below shows the deconstruction of the matrix [ H ]T.


          | 1 0 0 |
          | 0 1 0 |    3 x 3 identity submatrix [ I ]
          | 0 0 1 |
[ H ]T =  | 1 1 0 |
          | 0 1 1 |    4 x 3 parity submatrix [ P ]
          | 1 1 1 |
          | 1 0 1 |

Figure 3: Deconstruction of decoding matrix [ H ]T into a 3 x 3 identity submatrix [ I ] (top three rows) and the 4 x 3 parity submatrix [ P ] (bottom four rows)

The parity submatrix produces the calculated check bits v1ex, v2ex and v3ex when the modulo-2 multiplication involves the first, second, and third columns of [ H ]T, respectively.

The identity submatrix causes the exclusive-ORing of the first, second, and third received bits with the calculated check bits.

The exclusive-ORing produces 0 if the received and calculated check bits agree or 1 if they disagree, since


A ⊕ B = 0  if A = B
A ⊕ B = 1  if A ≠ B

Multiplication using [ H ]T followed by syndrome decoding using the table above is therefore equivalent to the decoding process we originally described in Example 1 (Note F). Thus [ H ]T is called the decoding matrix.

EXAMPLE 1: Using matrix operations to detect and correct an error – part 1

As in Example 1 (Note F), a communication system employs the (7, 4) Hamming code described in Eq. 1 (Note F), and a seven-bit sequence is received.

Use matrix operations to determine if an error occurred and then determine the original four bits from the source.

Solution
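The specific received sequence from Note F is not reproduced in this section, so as an illustrative sketch, suppose the received word is (hypothetically) the codeword for 1100 with bit v5 flipped:

```python
import numpy as np

# Decoding matrix [H]^T (7 x 3) from above
Ht = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
               [1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]])

# Hypothetical received word: codeword 1011100 with bit v5 in error
r = np.array([1, 0, 1, 1, 0, 0, 0])

s = tuple(int(x) for x in r @ Ht % 2)  # syndrome [s] = [r][H]^T mod 2
print(s)                               # -> (0, 1, 1)

# Syndrome table above: syndrome -> bit position in error (None = no error)
table = {(0, 0, 0): None, (1, 0, 0): 1, (0, 1, 0): 2, (0, 0, 1): 3,
         (1, 1, 0): 4, (0, 1, 1): 5, (1, 1, 1): 6, (1, 0, 1): 7}

pos = table[s]   # -> 5, i.e. v5 was received in error
r[pos - 1] ^= 1  # flip it back to correct the error
print(r[3:])     # -> [1 1 0 0], the original information bits
```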

Geometric Interpretation of Error Control Coding


Figure 3: Three-bit channel encoder (input: unencoded one-bit sequence u1; output: encoded three-bit sequence v1 v2 v3)

Let’s begin by considering an encoder that uses majority voting, like in the previous example.

Let’s use one-bit blocks of information. We can represent the information sequence (one bit) as

[ u ] = [u1 ]

and the encoded sequence (three bits) as

[ v ] = [v1 v2 v3 ]

where v1 = u1

v2 = u1

v3 = u1

Figure 3 above shows the encoder.
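A minimal sketch of this rate-1/3 majority-voting (repetition) encoder:

```python
def encode_repetition(u1):
    """Repetition encoder: v1 = v2 = v3 = u1."""
    return [u1, u1, u1]

print(encode_repetition(1))  # -> [1, 1, 1]
print(encode_repetition(0))  # -> [0, 0, 0]
```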

Let’s consider the information bit u1 to be a point in a one-dimensional vector space as shown in Figure 4.


Figure 4: Vector space representation of a single input (u1 is a point at 0 or 1 on a one-dimensional line)

Figure 5: Vector space representation of the output (axes v1, v2, v3; the two possible encoded points are (0,0,0) and (1,1,1))

Now let's represent the encoded three-bit sequence v1 v2 v3 as a point in three-dimensional space. The two possible values for the encoded three-bit sequence are (0, 0, 0) and (1, 1, 1), as shown in Figure 5.

Note: The encoding process maps a point in a one-dimensional vector space (u1) into a point in a three-dimensional vector space (v1, v2, v3).

Output sequence – the three-bit received sequence is likewise a point in the three-dimensional vector space (v1, v2, v3).


Figure 6: vector space representation of the received bit sequence

Figure 6 shows the eight possible received three-bit sequences.

Two of these eight points (black) represent valid codewords (signifying that no error has occurred in transmission). If we receive (1,1,1), we assume the information bit from the source is 1, and if we receive (0,0,0), we assume that the information bit from the source is 0.

The other six points (gray) represent errors (invalid codewords) in the received bit sequence, i.e. (0,0,1), (0,1,0), (0,1,1), (1,0,0), (1,0,1), and (1,1,0).

If two bits are "1"s, assume the sequence (1, 1, 1) was transmitted and that a one-bit error occurred.


If two bits are "0"s, assume the sequence (0, 0, 0) was transmitted and that a one-bit error occurred.

As shown in Figure 6, we can view this process of finding the correct transmitted bits as finding the valid codeword (black dot) geometrically closest to the received invalid codeword (gray dot).

Finding the closest point in space is equivalent to assuming that the least possible number of errors has occurred in the received codeword.
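This nearest-point rule can be sketched as choosing the valid codeword at the smallest Hamming distance (number of differing bit positions) from the received word:

```python
def hamming_distance(a, b):
    """Number of bit positions in which sequences a and b differ."""
    return sum(x != y for x, y in zip(a, b))

def decode_nearest(r, valid=((0, 0, 0), (1, 1, 1))):
    """Decode by picking the geometrically closest valid codeword."""
    return min(valid, key=lambda c: hamming_distance(r, c))

print(decode_nearest((1, 1, 0)))  # -> (1, 1, 1): single-bit error corrected
print(decode_nearest((0, 0, 1)))  # -> (0, 0, 0): two-bit error mis-corrected
```

The second call illustrates the limitation discussed below: a two-bit error lands closer to the wrong valid codeword.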

Observe the following:

1. All one-bit errors result in points corresponding to invalid codewords (gray dots), so all single-bit errors can be detected. In Figure 6, any one-bit error is closer to the correct valid codeword than it is to any incorrect valid codeword – for example, the point (1, 1, 0) is geometrically closer to (1, 1, 1) than it is to (0, 0, 0) – so all one-bit errors can be corrected.

2. All two-bit errors also correspond to invalid codewords (gray dots), so all can be detected. However, the received point is closer to an incorrect valid codeword than to the correct one. For example, if (1, 1, 1) is transmitted and the first two bits are received in error, then (0, 0, 1) is received. Since (0, 0, 1) is closer to (0, 0, 0) than it is to (1, 1, 1), if we try to correct the error we will end up with (0, 0, 0), which is incorrect.


3. All three-bit errors result in received sequences representing valid codewords, so three-bit errors cannot even be detected.
