EDI042 – Error Control Coding (Kodningsteknik)
Chapter 2: Principles of Error Control Coding, Part III
Michael Lentmaier
September 10, 2013
Convolutional Codes
Michael Lentmaier EDI042 – Error Control Coding: Chapter 2 36 / 50
What are Convolutional Codes?
[Figure: encoder, channel, decoder chain. Blocks of K information bits u_t are mapped to N > K code bits v_t; the decoder estimates u_t, v_t from the received r_t.]
- Up to now we have considered an independent encoding of blocks at different time instants t
- For linear block codes we have: v_t = u_t G
- The fundamental idea of convolutional coding is to express each code block v_t as a function of several input blocks, i.e.,

  v_t = f(u_t, u_{t-1}, ..., u_{t-m})

- The value m is called the memory of the encoder and defines how many earlier time instants t' < t are considered.
Encoding Convolutional Codes
- The mapping of the encoder is required to be linear. It is often convenient to write it in matrix form:

  v_t = u_t G_0 + u_{t-1} G_1 + ... + u_{t-m} G_m
- Using sequences of symbols, divided into blocks, we obtain

  u = u_0, u_1, ..., u_t, ...   with u_t = ( u_t^(1), u_t^(2), ..., u_t^(K) )
  v = v_0, v_1, ..., v_t, ...   with v_t = ( v_t^(1), v_t^(2), ..., v_t^(N) )

- The encoding can then be described by

  v = u G

  with

  G = [ G_0  G_1  ...  G_m
             G_0  G_1  ...  G_m
                  ...           ] ,   G_i ∈ F_2^{K×N}

  (parts of the matrix left blank are filled with zeros)
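The block-wise encoding rule above can be sketched in a few lines; the following is a minimal illustration, not course material, and all function and variable names are our own:

```python
# Time-domain convolutional encoding over GF(2):
# v_t = u_t G_0 + u_{t-1} G_1 + ... + u_{t-m} G_m (mod 2),
# assuming the encoder starts in the all-zero state.

def conv_encode(u_blocks, G_subs):
    """u_blocks: list of length-K bit tuples; G_subs: [G_0, ..., G_m], each a K x N 0/1 matrix."""
    m = len(G_subs) - 1
    K, N = len(G_subs[0]), len(G_subs[0][0])
    v_blocks = []
    for t in range(len(u_blocks)):
        v_t = [0] * N
        for l in range(m + 1):
            if t - l < 0:
                break                      # earlier inputs are zero (zero initial state)
            u = u_blocks[t - l]
            for j in range(N):
                v_t[j] ^= sum(u[i] & G_subs[l][i][j] for i in range(K)) & 1
        v_blocks.append(tuple(v_t))
    return v_blocks
```

For the rate R = 1/2 example used later (G_0 = (11), G_1 = (10), G_2 = (11)), the input 1, 0, 1, 1 encodes to 11 10 00 01, in agreement with the textbook excerpt.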
Encoding Convolutional Codes
- Typically, small block sizes K and N are used to define a convolutional code of rate R = K/N
  (sometimes the notation b and c is used instead of K and N)
Example
[Textbook figures (Section 1.3): the rate R = 1/2 encoder of Fig. 1.10, its code tree (Fig. 1.13), and its state-transition diagram (Fig. 1.14). The information sequence 1011... is encoded as the code sequence 11 10 00 01....]
K = 1, N = 2, R = 1/2, m = 2
G = [ 11 10 11
         11 10 11
            ...     ]
- A general encoder (without feedback):
From the textbook (Section 1.3): The information digits u = u_0 u_1 ... are not, as in the previous section, separated into blocks. Instead they form an infinite sequence that is shifted into a register, in our example of length (memory) m = 2. The encoder has two linear output functions. The two output sequences v^(1) = v_0^(1) v_1^(1) ... and v^(2) = v_0^(2) v_1^(2) ... are interleaved by a serializer to form a single output sequence v_0^(1) v_0^(2) v_1^(1) v_1^(2) ... that is transmitted over the channel. For each information digit that enters the encoder, two channel digits are emitted. Thus, the code rate of this encoder is R = 1/2 bits/channel use.

Assuming that the content of the register is zero at time t = 0, we notice that the two output sequences can be viewed as a convolution of the input sequence u with the two sequences 11100... and 10100..., respectively. These latter sequences specify the linear output functions; that is, they specify the encoder. The fact that the output sequences can be described by convolutions is why such codes are called convolutional codes.

In a general rate R = b/c (b ≤ c) binary convolutional encoder (without feedback) the causal, that is, zero for time t < 0, information sequence

u = u_0 u_1 ... = u_0^(1) u_0^(2) ... u_0^(b) u_1^(1) u_1^(2) ... u_1^(b) ...   (1.68)

is encoded as the causal code sequence

v = v_0 v_1 ... = v_0^(1) v_0^(2) ... v_0^(c) v_1^(1) v_1^(2) ... v_1^(c) ...   (1.69)

where

v_t = f(u_t, u_{t-1}, ..., u_{t-m})   (1.70)

The parameter m is called the encoder memory. The function f is required to be a linear function from F_2^{(m+1)b} to F_2^c. It is often convenient to write such a function in matrix form:

v_t = u_t G_0 + u_{t-1} G_1 + ... + u_{t-m} G_m   (1.71)

where G_i, 0 ≤ i ≤ m, is a binary b × c matrix. Using (1.71), we can rewrite the expression for the code sequence as

v_0 v_1 ... = (u_0 u_1 ...) G   (1.72)

or, in shorter notation, as

v = u G   (1.73)

where

G = [ G_0  G_1  ...  G_m
           G_0  G_1  ...  G_m
                ...           ]   (1.74)

and where here and hereafter the parts of matrices left blank are assumed to be filled in with zeros. We call G the generator matrix and G_i, 0 ≤ i ≤ m, the generator submatrices. In Fig. 1.11, we illustrate a general convolutional encoder (without feedback).

[Figure 1.11: a general convolutional encoder (without feedback).]
A Rate R = 2/3 Example
EXAMPLE 1.14. The rate R = 1/2 convolutional encoder shown in Fig. 1.10 has the following generator submatrices:

G_0 = (11),  G_1 = (10),  G_2 = (11)

The generator matrix is

G = [ 11 10 11
         11 10 11
            ...     ]   (1.76)

EXAMPLE 1.15. The rate R = 2/3 convolutional encoder shown in Fig. 1.12 has generator submatrices

G_0 = [ 1 0 1     G_1 = [ 1 1 0     G_2 = [ 0 0 0
        0 1 1 ]           0 0 1 ]           1 0 1 ]   (1.77)

The generator matrix is

G = [ 101 110 000
      011 001 101
          101 110 000
          011 001 101
                  ...   ]   (1.78)

[Figure 1.12: a rate R = 2/3 convolutional encoder with outputs v^(1), v^(2), v^(3).]
It is often convenient to represent the codewords of a convolutional code as paths through a code tree. A convolutional code is sometimes called a (linear) tree code. The code tree for the convolutional code generated by the encoder in Fig. 1.10 is shown in Fig. 1.13. The leftmost node is called the root. Since the encoder has one binary input, there are, starting at the root, two branches stemming from each node. The upper branch leaving each node corresponds to the input digit 0, and the lower branch corresponds to the input digit 1. On each branch we have two binary code digits, viz., the two outputs from the encoder.
What does the corresponding generator matrix look like?
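Questions like this can be answered mechanically by stacking the submatrices into the banded generator matrix of (1.74). A small sketch (the helper name `banded_G` is ours), using the rate 2/3 submatrices of (1.77):

```python
# Build a truncated generator matrix G from submatrices [G_0, ..., G_m]:
# block row t contains G_0 G_1 ... G_m, shifted right by c columns per time step.

def banded_G(subs, timesteps):
    """subs: [G_0, ..., G_m], each a b x c 0/1 matrix; returns (timesteps*b) x ((timesteps+m)*c) matrix."""
    b, c = len(subs[0]), len(subs[0][0])
    m = len(subs) - 1
    G = [[0] * ((timesteps + m) * c) for _ in range(timesteps * b)]
    for t in range(timesteps):
        for l, Gl in enumerate(subs):
            for i in range(b):
                for j in range(c):
                    G[t * b + i][(t + l) * c + j] = Gl[i][j]
    return G

# The rate R = 2/3 example of (1.77)/(1.78):
G0 = [[1, 0, 1], [0, 1, 1]]
G1 = [[1, 1, 0], [0, 0, 1]]
G2 = [[0, 0, 0], [1, 0, 1]]
G = banded_G([G0, G1, G2], timesteps=2)
assert G[0][:9] == [1, 0, 1, 1, 1, 0, 0, 0, 0]   # first block row: 101 110 000
```

The assertion reproduces the first row of (1.78); later block rows are the same pattern shifted by c = 3 columns.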
Why the name “convolutional” code?
- The output v_t^(j) can be written as

  v_t^(j) = Σ_{i=1}^{K} Σ_{l=0}^{m} u_{t-l}^(i) g_{i,l}^(j)

- Writing the generator coefficients of input i and output j into a generator vector

  g_i^(j) = ( g_{i,0}^(j), g_{i,1}^(j), ..., g_{i,m}^(j) )

  we can write

  v^(j) = u^(1) * g_1^(j) + u^(2) * g_2^(j) + ... + u^(K) * g_K^(j) = Σ_{i=1}^{K} u^(i) * g_i^(j)

- We see that each output v^(j) is related to each input u^(i) by a convolution with the corresponding generator vector g_i^(j)
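The convolution view can be checked directly. A sketch with a hand-rolled binary convolution (names are ours), applied to the running (7,5) example:

```python
# Each output stream is a binary convolution of the input with a generator
# vector: v^(j)[t] = sum_l u[t-l] g^(j)[l] (mod 2).

def conv_mod2(u, g):
    """Convolution over GF(2) of bit lists u and g."""
    v = [0] * (len(u) + len(g) - 1)
    for t, a in enumerate(u):
        for l, b in enumerate(g):
            v[t + l] ^= a & b
    return v

u = [1, 0, 1, 1]
v1 = conv_mod2(u, [1, 1, 1])   # g^(1) = 111  (1 + D + D^2)
v2 = conv_mod2(u, [1, 0, 1])   # g^(2) = 101  (1 + D^2)
```

Interleaving `v1` and `v2` and keeping the first four code blocks reproduces the encoded stream 11 10 00 01 of the textbook example; the remaining bits are the tail produced by the encoder memory.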
Transfer Domain Representation

- It is convenient to describe the time sequences by means of the delay operator D (D-transform):

  u = u_0, u_1, ..., u_t, ...   <->   u(D) = u_0 + u_1 D + u_2 D^2 + ...
  v = v_0, v_1, ..., v_t, ...   <->   v(D) = v_0 + v_1 D + v_2 D^2 + ...
- We can write the encoding operation as

  v(D) = u(D) G(D)

  by introducing the corresponding generator matrix

  G(D) = G_0 + G_1 D + G_2 D^2 + ... + G_m D^m

  of the form

  G(D) = [ g_1^(1)(D)  g_1^(2)(D)  ...  g_1^(N)(D)
           g_2^(1)(D)  g_2^(2)(D)  ...  g_2^(N)(D)
           ...
           g_K^(1)(D)  g_K^(2)(D)  ...  g_K^(N)(D) ]   (K × N)

- Example: G(D) = ( 1 + D + D^2,  1 + D^2 )
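In the transfer domain, encoding one input stream is just polynomial multiplication over GF(2). A compact sketch (our own representation: polynomials packed into ints, bit l holding the coefficient of D^l):

```python
# Carry-less (GF(2)) polynomial multiplication: v(D) = u(D) g(D).

def gf2_mul(a, b):
    """Product of two GF(2)[D] polynomials packed into ints (bit l = coeff of D^l)."""
    r = 0
    while b:
        if b & 1:
            r ^= a       # add a * D^l for each set bit of b
        a <<= 1
        b >>= 1
    return r

u = 0b1101           # u(D) = 1 + D^2 + D^3, i.e. the sequence 1, 0, 1, 1
v1 = gf2_mul(u, 0b111)   # g^(1)(D) = 1 + D + D^2
v2 = gf2_mul(u, 0b101)   # g^(2)(D) = 1 + D^2
```

Reading the bits of `v1` and `v2` from bit 0 upward gives the same two output streams as the time-domain convolution.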
Parameters of a convolutional encoder
- Constraint length ν_i (the number of delay elements for input i):

  ν_i = max_{1≤j≤N} deg g_i^(j)(D)

- Memory m:

  m = max_{1≤i≤K} ν_i

- Overall constraint length ν:

  ν = Σ_{i=1}^{K} ν_i

- The encoder state s_t is defined by the stored values at the outputs of its delay elements at time t
- It follows that the number of different states is equal to 2^ν
- Free distance d_free (a property of the code C):

  d_free = min_{v,v' ∈ C, v ≠ v'} d_H(v, v') = min_{v ∈ C, v ≠ 0} w_H(v)
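For small encoders, d_free can be computed as the weight of the lightest path that diverges from and re-merges with the all-zero path in the state diagram, e.g. with a shortest-path search. A sketch under that formulation (valid for non-catastrophic encoders; all names are ours):

```python
import heapq

def parity(x):
    return bin(x).count("1") & 1

def dfree(gens, m):
    """d_free of a rate 1/n encoder; gens are tap ints (bit l = g_l), m the memory."""
    full = (1 << m) - 1

    def branch(state, u):                  # one encoder step
        window = (state << 1) | u          # bits u, u_{t-1}, ..., u_{t-m}
        return window & full, sum(parity(window & g) for g in gens)

    start, w0 = branch(0, 1)               # a nonzero path must first diverge (u = 1)
    dist = {start: w0}
    pq = [(w0, start)]
    while pq:                              # Dijkstra over encoder states
        w, s = heapq.heappop(pq)
        if s == 0:
            return w                       # re-merged with the all-zero path
        if w > dist.get(s, w):
            continue                       # stale queue entry
        for u in (0, 1):
            ns, bw = branch(s, u)
            if w + bw < dist.get(ns, float("inf")):
                dist[ns] = w + bw
                heapq.heappush(pq, (w + bw, ns))
```

`dfree([0b111, 0b101], 2)` returns 5 for the (7,5) encoder, matching the m = 2 entry of the R = 1/2 table on the next slide.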
Tables of some good codes
(Tables from Martin Bossert, Ulm University.)

Some OFD (optimum free distance) convolutional codes with rate R = 1/n, n ∈ [2, 4]:

R    m  G^(1) G^(2) G^(3) G^(4)  d_f
1/2  2    5     7                 5
     3   15    17                 6
     4   23    35                 7
     5   53    75                 8
     6  133   171                10
     7  247   371                10
     8  561   753                12
1/3  2    5     7     7           8
     3   13    15    17          10
     4   25    33    37          12
     5   47    53    75          13
     6  133   145   175          15
     7  225   331   367          16
     8  557   663   711          18
1/4  2    5     7     7     7    10
     3   13    15    15    17    13
     4   25    27    33    37    16
     5   53    67    71    75    18
     6  135   135   147   163    20
     7  235   275   313   357    22
     8  463   535   733   745    24

Some OFD convolutional codes with rates R = 2/3 and R = 3/4:

R    m  ν  G^(1) G^(2) G^(3) G^(4)  d_f
2/3  1  2   13     6    16           3
     2  3   41    30    75           4
     2  4   56    23    65           5
     3  5  245   150   375           6
     3  6  266   171   367           7
3/4  1  3  400   630   521   701     4
     2  5  442   270   141   763     5
     2  6  472   215   113   764     6

Low-rate OFD codes with memory m = 4:

R     G^(1) ... G^(10) (octal)          d_f
1/5   25 27 33 35 37                    20
1/6   25 27 33 35 35 37                 24
1/7   25 27 27 33 35 35 37              28
1/8   25 25 27 33 33 35 37 37           32
1/9   25 25 27 33 33 35 35 37 37        36
1/10  25 25 25 33 33 33 35 37 37 37     40
The generators are listed in octal notation, corresponding to the rows of the matrix [G_0^T, ..., G_m^T].

Source: Martin Bossert, "Channel Coding for Telecommunications"
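A small helper for reading such tables; we assume the common convention in which the octal digits spell the tap coefficients g_0 g_1 ... g_m from left to right (some references use the reversed bit order, which yields the reciprocal, equally good code):

```python
# Convert an octal generator entry (as used in code tables) into tap coefficients.

def octal_to_taps(octal_str):
    """'133' -> [1, 0, 1, 1, 0, 1, 1], the coefficients of 1, D, ..., D^m."""
    bits = "".join(format(int(d, 8), "03b") for d in octal_str)
    return [int(b) for b in bits.lstrip("0")] or [0]
```

For example, `octal_to_taps("5")` and `octal_to_taps("7")` give the (1 + D^2, 1 + D + D^2) pair of the running m = 2 example, and "133"/"171" give the memory-6 pair from the R = 1/2 table.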
State Transition Diagram
- A convolutional encoder is a finite state machine that can be implemented by a linear sequential circuit
- The encoder output v_t and the upcoming state s_{t+1} are functions of the input u_t and the current state s_t (Mealy machine)
Example
[Figure 1.14: a rate R = 1/2 convolutional encoder and its state-transition diagram, with branches labeled u_t / v_t^(1) v_t^(2).]

The state of a system is a description of its past history which, together with a specification of the present and future inputs, suffices to determine the present and future outputs. For the encoder in Fig. 1.10, we can choose the encoder state σ to be the contents of its memory elements; that is, at time t we have

σ_t = u_{t-1} u_{t-2}   (1.79)

Thus, our encoder has only four different encoder states, and two consecutive input digits are enough to drive the encoder to any specified encoder state.

For convolutional encoders, it is sometimes useful to draw the state-transition diagram. If we ignore the labeling, the state-transition diagram is a de Bruijn graph [Gol67]. In Fig. 1.14, we show the state-transition diagram for our convolutional encoder.
ν = m = 2,   s ∈ {00, 01, 10, 11}
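The Mealy-machine view of the (7,5) encoder fits in a few lines; a sketch (state and function names are ours, following the convention σ_t = u_{t-1} u_{t-2} of (1.79)):

```python
# One step of the (7,5) encoder as a Mealy machine:
# output and next state as functions of the current state and input.

def mealy_step(state, u):
    s1, s2 = state                 # s1 = u_{t-1}, s2 = u_{t-2}
    v1 = u ^ s1 ^ s2               # g^(1)(D) = 1 + D + D^2
    v2 = u ^ s2                    # g^(2)(D) = 1 + D^2
    return (u, s1), (v1, v2)       # (next state, output block)

# Full state-transition table: four states, two inputs each.
table = {
    (s, u): mealy_step(s, u)
    for s in [(0, 0), (0, 1), (1, 0), (1, 1)]
    for u in (0, 1)
}
```

Stepping the machine from the zero state with input 1, 0, 1, 1 emits 11 10 00 01, the same code sequence read off the tree and trellis.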
Trellis Representation

- Every input sequence u = u_0, u_1, ... defines a path within the state transition diagram of the encoder
- This path can be illustrated nicely if we expand the state diagram along the time instants t, resulting in an encoder trellis
- Each possible input sequence u and resulting code sequence v = v_0, v_1, ... corresponds to a unique path in the trellis
Example
Let us return to the tree code in Fig. 1.13. As an example, the two input sequences 010 (node A) and 110 (node B) both drive the encoder to the same encoder state, viz., σ = 01. Thus, the two subtrees stemming from these two nodes are identical. Why treat them separately? Let us replace them with one node corresponding to state 01 at time 3. For each time or depth in the tree, we can similarly replace all equivalent nodes with only one, and we obtain the trellis-like structure shown in Fig. 1.15, where the upper and lower branches leaving the encoder states correspond to information symbols 0 and 1, respectively.

[Figure 1.15: a binary rate R = 1/2 trellis code.]

The information sequence 1011... corresponds in the trellis to the same code sequence as in the tree, viz., 11 10 00 01.... The trellis is just a more convenient representation of the same set of encoded sequences as is specified by the tree, and it is easily constructed from the state-transition diagram. A convolutional code is often called a (linear) trellis code.

We will often consider sequences of finite length; therefore, it is convenient to introduce the notations

x_[0,n) = x_0 x_1 ... x_{n-1}   (1.80)
x_[0,n] = x_0 x_1 ... x_n   (1.81)

Suppose that our trellis code in Fig. 1.15 is used to communicate over a BSC with crossover probability ε, where 0 < ε < 1/2. We start the encoder in encoder state σ = 00, and we feed it with the finite information sequence u_[0,n) followed by m = 2 dummy zeros in order to drive the encoder back to encoder state σ = 00. The convolutional code is terminated and, thus, converted into a block code. The corresponding encoded sequence is the codeword v_[0,n+m). The received sequence is denoted r_[0,n+m).

To simplify the notations in the following discussion, we simply write u, v, and r instead of u_[0,n), v_[0,n+m), and r_[0,n+m).

We will now, by an example, show how the structure of the trellis can be exploited to perform maximum-likelihood (ML) decoding in a very efficient way. The memory m = 2 encoder in Fig. 1.10 is used to encode three information digits together with m = 2 dummy zeros; the trellis is terminated and our convolutional code has become a block code. A codeword consisting of ten code digits is transmitted over a BSC. Suppose that r = 11 00 11 00 10 is received. The corresponding trellis is shown in Fig. 1.16. (In practice, typically a few thousand information bits are encoded before the encoder is forced back to the allzero state by encoding m dummy zeros.)

As shown by the discussion following (1.27), the ML decoder (and the MD decoder) chooses as its estimate v̂ of v the codeword that minimizes the Hamming distance d_H(r, v) between r and v. That is, it minimizes the number of positions in which the codeword and the received sequence differ. In order to find the codeword that is closest to the received sequence, we move through the trellis from left to right, discarding all subpaths that could not turn out to be the prefix of the best path through the trellis. When we reach depth m = 2, we have four subpaths, one for each encoder state. At the next depth, however, there are eight subpaths ...
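The "discard all subpaths that cannot be prefixes of the best path" procedure is the Viterbi algorithm. A sketch for exactly the terminated example above (our own function names; ties between equal-metric survivors are broken arbitrarily, which does not affect the final metric):

```python
# Minimum-distance (Viterbi) decoding of the terminated (7,5) code.

def viterbi_75(r_blocks, n_info):
    """r_blocks: received 2-bit blocks; n_info: number of information bits before the tail."""
    def step(state, u):                    # one trellis branch of the (7,5) encoder
        s1, s2 = state
        return (u, s1), (u ^ s1 ^ s2, u ^ s2)

    survivors = {(0, 0): (0, [])}          # state -> (metric, decoded bits so far)
    for t, r in enumerate(r_blocks):
        inputs = (0, 1) if t < n_info else (0,)   # tail: m dummy zeros
        nxt = {}
        for state, (metric, path) in survivors.items():
            for u in inputs:
                ns, v = step(state, u)
                m2 = metric + (v[0] ^ r[0]) + (v[1] ^ r[1])
                if ns not in nxt or m2 < nxt[ns][0]:
                    nxt[ns] = (m2, path + [u])
        survivors = nxt
    metric, path = survivors[(0, 0)]       # the trellis ends in the zero state
    return path[:n_info], metric

u_hat, dist = viterbi_75([(1, 1), (0, 0), (1, 1), (0, 0), (1, 0)], 3)
```

For r = 11 00 11 00 10 this returns the information estimate 100 at Hamming distance 2, i.e. the codeword 11 10 11 00 00.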
How large is dfree?
Systematic Encoding
- Systematic encoding can be achieved with a generator matrix of the form

  G(D) = [ 1 0 ... 0  g_1^(K+1)(D) ... g_1^(N)(D)
           0 1 ... 0  g_2^(K+1)(D) ... g_2^(N)(D)
           ...
           0 0 ... 1  g_K^(K+1)(D) ... g_K^(N)(D) ]   (K × N)

- Assume that a generator matrix G'(D) can be written as

  G'(D) = [ T(D)  R'(D) ]

  where T(D) is a non-singular matrix over F_2(D)
- Then we can obtain a systematic generator matrix

  G(D) = T^{-1}(D) G'(D) = [ I_K  T^{-1}(D) R'(D) ] = [ I_K  R(D) ]

- Remark: in general the elements of R(D) can be rational functions
Rational Generator Matrices
I Not every generator matrix can be converted into systematicform with polynomial entries
I We define a general convolutional encoder by a generator matrix
G(D) =hg(j)
i (D)i
(i,j)2 F2(D)K⇥N
in which the elements are rational functions in D, i.e.,
g(j)i (D) =
f0 + f1D+ · · · fmDm
1+q1D+ · · ·+qmDm
Example
G0(D) =⇥1+D+D2, 1+D2⇤ ) G(D) =
1,
1+D2
1+D+D2
�
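A rational entry such as (1 + D^2)/(1 + D + D^2) corresponds to an infinite impulse response, which can be expanded term by term by synthetic division. A sketch (our own helper; `f` and `q` are coefficient lists with q[0] = 1):

```python
# Power-series expansion of f(D)/q(D) over GF(2):
# from q(D) r(D) = f(D) we get r_t = f_t + q_1 r_{t-1} + ... + q_m r_{t-m} (mod 2).

def gf2_series(f, q, n_terms):
    """First n_terms coefficients of f(D)/q(D); q must be delayfree, q[0] = 1."""
    f = f + [0] * n_terms
    out = []
    for t in range(n_terms):
        c = f[t]
        for l in range(1, min(t, len(q) - 1) + 1):
            c ^= q[l] & out[t - l]
        out.append(c)
    return out
```

For the example, `gf2_series([1, 0, 1], [1, 1, 1], 9)` gives 1 + D + D^2 + D^4 + D^5 + D^7 + D^8 + ..., the periodic impulse response of the feedback encoder.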
Encoders with Feedback
I Consider the encoder mapping v(j) = u(i) g(j)i (D)
I The rational functions
g(j)i (D) =
f0 + f1D+ · · · fmDm
1+q1D+ · · ·+qmDm
can be realized by encoders with feedbackI The following two realizations are most commonly used:
Controller canonical form:Section 2.1 • Convolutional Codes and Their Encoders 33
Figure 2.1 Thecontroller canonical form of a rational transfer function.
where we have replaced j − i by k, and where

f(D) = f_0 + f_1 D + ... + f_m D^m   (2.10)

and

w(D) = Σ_{k=-∞}^{∞} w_k D^k   (2.11)

From Fig. 2.1 it follows that

w_j = u_j + Σ_{i=1}^{m} q_i w_{j-i}   (2.12)

Upon defining q_0 = 1, (2.12) can be rewritten as

u_j = Σ_{i=0}^{m} q_i w_{j-i}   (2.13)

or, by repeating the steps in (2.9), as

u(D) = q(D) w(D)   (2.14)

where

u(D) = Σ_{j=-∞}^{∞} u_j D^j   (2.15)

and

q(D) = 1 + q_1 D + ... + q_m D^m   (2.16)

Combining (2.9) and (2.14) we have

v(D) = u(D) f(D)/q(D) = u(D) (f_0 + f_1 D + ... + f_m D^m)/(1 + q_1 D + ... + q_m D^m)   (2.17)

Let g(D) = f(D)/q(D); then v(D) = u(D) g(D), and we say that g(D) is a rational transfer function that transfers the input u(D) into the output v(D). From (2.17), it follows that every rational function with a constant term 1 in the denominator polynomial q(D) (or, equivalently, with q(0) = 1, or, again equivalently, with q(D) delayfree) is a rational transfer function that can be realized in the controller canonical form shown in Fig. 2.1. Every rational function g(D) = f(D)/q(D), where q(D) is delayfree, is called a realizable function.

In general, a matrix G(D) whose entries are rational functions is called a rational transfer function matrix. A rational transfer function matrix G(D) for a linear system with many inputs or many outputs whose entries are realizable functions is called realizable.
Observer canonical form:

In practice, given a rational transfer function matrix, we have to realize it by linear sequential circuits. It can be realized in many different ways. For instance, the realizable function

g(D) = (f_0 + f_1 D + ... + f_m D^m)/(1 + q_1 D + ... + q_m D^m)   (2.18)

has the controller canonical form illustrated in Fig. 2.1. On the other hand, since the circuit in Fig. 2.2 is linear, we have

v(D) = u(D)(f_0 + f_1 D + ... + f_m D^m) + v(D)(q_1 D + ... + q_m D^m)   (2.19)

which is the same as (2.17). Thus, Fig. 2.2 is also a realization of (2.18). In this realization, the delay elements do not in general form a shift register, as these delay elements are separated by adders. This is the so-called observer canonical form of the rational function (2.18). The controller and observer canonical forms in Figs. 2.1 and 2.2, respectively, are two different realizations of the same rational transfer function.

[Figure 2.2: the observer canonical form of a rational transfer function.]
We are now prepared to give a formal definition of a convolutional transducer.

Definition. A rate R = b/c (binary) convolutional transducer over the field of rational functions F_2(D) is a linear mapping

τ : F_2^b((D)) → F_2^c((D)),   u(D) ↦ v(D)

which can be represented as

v(D) = u(D) G(D)   (2.20)

where G(D) is a b × c transfer function matrix of rank b with entries in F_2(D), and v(D) is called a code sequence arising from the information sequence u(D).

Obviously, we must be able to reconstruct the information sequence u(D) from the code sequence v(D) when there is no noise on the channel; otherwise the convolutional transducer would be useless. Therefore, we require that the transducer map is injective; that is, the transfer function matrix G(D) has rank b over the field F_2(D).

We are now well prepared for the following

Definition. A rate R = b/c convolutional code C over F_2 is the image set of a rate R = b/c convolutional transducer with G(D) of rank b over F_2(D) as its transfer function matrix.

It follows immediately from the definition that a rate R = b/c convolutional code C over F_2 with the b × c matrix G(D) of rank b over F_2(D) as a transfer function matrix can ...
Example

G(D) = [ 1/(1+D+D^2)   D/(1+D^3)    1/(1+D^3)
         D^2/(1+D^3)   1/(1+D^3)    1/(1+D)   ]

Note: (1 + D + D^2)(1 + D) = 1 + D^3
Controller canonical form:
Definition. A generator matrix for a convolutional code is catastrophic if there exists an information sequence u(D) with infinitely many nonzero digits, w_H(u(D)) = ∞, that results in codewords v(D) with only finitely many nonzero digits, w_H(v(D)) < ∞.

EXAMPLE 2.2. The third generator matrix for the convolutional code given above, viz.,

G_2(D) = ( 1 + D^3   1 + D + D^2 + D^3 )   (2.31)

is catastrophic, since u(D) = 1/(1 + D) = 1 + D + D^2 + ... has w_H(u(D)) = ∞ but v(D) = u(D) G_2(D) = ( 1 + D + D^2   1 + D^2 ) = (1 1) + (1 0)D + (1 1)D^2 has w_H(v(D)) = 5 < ∞. In Fig. 2.5 we show its controller canonical form.

[Figure 2.5: a rate R = 1/2 catastrophic convolutional encoder with generator matrix G_2(D).]

When a catastrophic generator matrix is used for encoding, finitely many errors (five in the previous example) in the estimate v̂(D) of the transmitted codeword v(D) can lead to infinitely many errors in the estimate û(D) of the information sequence u(D): a "catastrophic" situation that must be avoided!

Being catastrophic is a generator matrix property, not a code property. Every convolutional code has both catastrophic and noncatastrophic generator matrices.

Clearly, the choice of the generator matrix is of great importance.
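For rate 1/2 polynomial generator matrices there is a standard check (the Massey–Sain condition): the matrix is non-catastrophic if and only if gcd(g_1(D), g_2(D)) = D^a for some a ≥ 0. A sketch with polynomials packed into ints (bit l = coefficient of D^l; helper names are ours):

```python
# Catastrophicity check via polynomial gcd over GF(2).

def gf2_poly_mod(a, b):
    """Remainder of a(D) divided by b(D) over GF(2)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def gf2_poly_gcd(a, b):
    while b:
        a, b = b, gf2_poly_mod(a, b)
    return a

# G_2(D) = (1 + D^3, 1 + D + D^2 + D^3): gcd = 1 + D, so catastrophic.
assert gf2_poly_gcd(0b1001, 0b1111) == 0b11
# G(D) = (1 + D + D^2, 1 + D^2): gcd = 1, so non-catastrophic.
assert gf2_poly_gcd(0b111, 0b101) == 0b1
```

The gcd 1 + D found for G_2(D) is exactly the factor that u(D) = 1/(1 + D) cancels in Example 2.2.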
EXAMPLE 2.3. The rate R = 2/3 generator matrix

G(D) = [ 1/(1+D+D^2)   D/(1+D^3)    1/(1+D^3)
         D^2/(1+D^3)   1/(1+D^3)    1/(1+D)   ]   (2.32)

has the controller and observer canonical forms shown in Figs. 2.6 and 2.7, respectively.

[Figure 2.6: the controller canonical form of the generator matrix G(D) in (2.32).]
Observer canonical form:

[Figure 2.7: the observer canonical form of the generator matrix G(D) in (2.32).]
In Chapter 3 we will show that generator matrices G(D) with G(0) of full rank are of particular interest. Hence, we introduce

Definition. A generator matrix G(D) is called an encoding matrix if G(0) has full rank.

We have immediately the following

Theorem 2.3. An encoding matrix is (realizable and) delayfree.

The polynomial generator matrices G_0(D) (2.25), G_1(D) (2.27), and G_2(D) (2.31), as well as the rational generator matrix G(D) in Example 2.3, are all encoding matrices. But the polynomial generator matrix of (2.33) is not an encoding matrix, since its G(0) has rank 1. In the sequel we will see that all generator matrices that are interesting in practice are in fact encoding matrices!

Remark. The generator matrix for the convolutional encoder shown in Fig. 1.11 can be written G(D) = ( g_{ij}(D) ), where g_{ij}(D) = Σ_{k=0}^{m} g_{ij}^(k) D^k and where g_{ij}^(k) are the entries of the b × c matrix G_k, 0 ≤ k ≤ m, in (1.74).
2.2 THE SMITH FORM OF POLYNOMIAL CONVOLUTIONAL GENERATOR MATRICES

We now present a useful decomposition of polynomial convolutional generator matrices. This decomposition is based on the following fundamental algebraic result [Jac85]: