Adaptive Predistortion and Postdistortion for Nonlinear Channel

Neural Comput & Applic (1999) 8: 339–346 © 1999 Springer-Verlag London Limited


N. Rodríguez^1, I. Soto^1 and R. Carrasco^2
^1 Santiago University, Chile; ^2 School of Engineering, Staffordshire University, Beaconside, Stafford, UK

This paper proposes a new adaptive predistortion-postdistortion scheme based on a recurrent neural network to reduce the nonlinear distortion introduced by a high power amplifier in the amplitude and phase of received Quadrature Phase Shift Keying (QPSK) signals in a digital microwave system. The recurrent neural network structure is inspired by the model proposed by Williams and Zipser, with a modified backpropagation algorithm. The input signal is processed by a nonlinear predistorter which reduces the warping effect. The received output signal is passed through a postdistorter to compensate for the warping and clustering effects produced by the amplifier. The proposed scheme yields a significant improvement when compared to the system without predistortion-postdistortion; performance is evaluated in terms of the bit error rate and the output signal constellation.

Keywords: Predistortion-postdistortion; Recurrent Neural Network

1. Introduction

High Power Amplifiers (HPAs) in microwave radio systems are a class of Travelling Wave Tube (TWT), and are characterised by instantaneous nonlinear distortion [1]. The HPA nonlinearity can be compensated by the use of a predistorter (PD).

Traditionally, a predistorter is used before the HPA on the transmitter side. Predistortion schemes can be divided into two categories. The first comprises schemes based on data predistortion prior to spectral shaping and power amplification. These techniques are considered to be effective against the well-known phenomenon of signal constellation deformation, referred to as the warping effect [2,3], and can also counteract constellation point spreading, known as the clustering effect, by introducing memory into the data predistortion algorithm [3]. The second is the signal predistortion technique, in which a PD is placed between the shaping filter and the HPA. In this case, the objective of the PD is to implement a third- or fifth-order nonlinearity whose coefficients cancel the intrinsic nonlinearity of the HPA up to terms of the same order [4–6].

Correspondence and offprint requests to: R. Carrasco, School of Engineering, Staffordshire University, PO Box 33, Beaconside ST18 0DF, UK.

Both schemes can be implemented at IF by means of an analogue circuit, or at baseband with digital components. The second option is usually preferred, since it is better suited to the realisation of adaptive predistortion, and is capable of tracking any possible change in the HPA parameters, such as drifts due to temperature and variations of the operating point.

An alternative method to compensate for the nonlinear distortion introduced by the HPA in digital transmission systems was proposed by Metzger and Valetin [7]. This scheme is based on the determination of the Probability Density Function of InterSymbol Interference (PDF-ISI), which is caused by the compound effect of linear channel dispersion and nonlinear distortion.

In this paper, an adaptive predistortion–postdistortion technique based on a recurrent neural network is presented, which differs from the predistortion methods proposed in the literature of the last decade. The task of the predistorter and postdistorter is to minimise the global mean squared error between the input to the predistorter and the output of the system. The recurrent network architecture used in this work is a fully recurrent network inspired by Williams and Zipser [8]. Owing to the higher connectivity of fully recurrent networks, their modelling capabilities are much richer than those of feedforward networks, with less complexity. In fact, it has been proved that a fully recurrent network with a single layer of hidden units has the capacity to model an arbitrary smooth nonlinear dynamic system [9]. Recurrent and standard neural networks have also been investigated for equalisation in mobile and satellite communication systems [10–13].

Section 2 presents a description of the system, and in Section 3 the recurrent neural network architecture is described. In Section 4 the training algorithm for the predistortion and postdistortion processes is presented. Section 5 presents the calibration of the predistorter and postdistorter. In Section 6 the simulation and discussion of results are presented, and finally, in Section 7, the conclusions are drawn.

2. System Description

The baseband-equivalent functional block diagram of the transmission system is illustrated in Fig. 1 for Quadrature Phase Shift Keying (QPSK) modulation. The transmission pulse is a raised cosine with a rolloff factor of 0.5; the raised-cosine filter is split equally between the transmitter and receiver filters. The transmitter is composed of a random generator of signals with values ±1 for the in-phase and quadrature components. The sequence is interpolated by 16, and then filtered with the first part of the raised-cosine low pass filter.

The low pass filtered signals are passed through the predistorter, and the output signals are modulated by multiplication with cos ω_c t and sin ω_c t for the in-phase and quadrature components, respectively, ω_c being the angular frequency of the modulating carrier. The modulated signal is the input to the HPA, which is modelled as a nonlinear memoryless system [1]; Additive White Gaussian Noise (AWGN) is added at the HPA output.

Fig. 1. Functional block diagram of the transmission system.

The receiver initially demodulates the signal, which has been passed through the transmission channel. The phases of the receiver oscillators are set in accordance with the phases of the modulating oscillators, i.e. the received signal has a phase distortion equal to that introduced by the transmission channel. The demodulated signal is filtered by the second part of the whole raised-cosine low pass filter, decimated by 16, and is then the input to the postdistorter.
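The transmitter chain just described can be sketched in a few lines. The following is my own minimal Python illustration, not the authors' simulation code: it generates random ±1 I/Q components, interpolates them by 16 with zero insertion, and evaluates a raised-cosine impulse response with rolloff 0.5 (the paper splits this filter equally between transmitter and receiver; that split and the modulation stage are omitted here).

```python
import math
import random

def qpsk_symbols(n, rng=random.Random(1)):
    """Random +/-1 in-phase and quadrature components, as in Fig. 1."""
    return [(rng.choice([-1, 1]), rng.choice([-1, 1])) for _ in range(n)]

def upsample(symbols, factor=16):
    """Interpolate by 16: insert factor-1 zero samples between symbols,
    ahead of the transmit half of the raised-cosine filter."""
    out = []
    for i, q in symbols:
        out.append((i, q))
        out.extend([(0, 0)] * (factor - 1))
    return out

def raised_cosine(t, beta=0.5):
    """Raised-cosine impulse response h(t), t in symbol periods,
    with rolloff beta = 0.5 as in the paper."""
    sinc = math.sin(math.pi * t) / (math.pi * t) if t != 0 else 1.0
    denom = 1.0 - (2.0 * beta * t) ** 2
    if abs(denom) < 1e-9:                 # limiting value at t = +/-1/(2*beta)
        return (math.pi / 4.0) * sinc
    return sinc * math.cos(math.pi * beta * t) / denom

samples = upsample(qpsk_symbols(4))
print(len(samples))   # 4 symbols x interpolation factor 16 = 64 samples
```

In a full simulation the zero-stuffed sequence would be convolved with the transmit half of the raised-cosine response before predistortion and modulation.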

Traditionally, the modulation scheme used in digital microwave systems has been QPSK, but recently Quadrature Amplitude Modulation (QAM) schemes, 16-QAM and 64-QAM, have been proposed as a new standard. However, QAM signal constellations are highly sensitive to nonlinear distortion [3].

In this system the warping effect is reduced by the use of a predistorter, while the warping and clustering effects are compensated for by the use of a neural postdistorter, as shown in Fig. 1.

The fitness function of Fig. 1 can be defined as

J = \frac{1}{2} \sum_{p=1}^{M} \left( d_p - f_{ptd}^{p}(f_{pd}^{p}, Ch) \right)^2   (1)

where:

d_p: the desired output of the system for pattern p
f_{ptd}^{p}: the postdistorter for pattern p; the inverse function of the predistorter and channel, used to compensate for the distortion introduced by the channel
f_{pd}^{p}: the predistortion function for pattern p
Ch: the channel, composed of the HPA and AWGN.

Since the fitness function of Eq. (1) is quite complex, assumptions can be made to reduce the complexity of the algorithm used to compensate for the nonlinear effects.

Assuming that the postdistorter is removed while the predistorter is tuned in the initial training phase of the system, and that the postdistorter is not connected until the predistorter weights are stable, the fitness function reduces to

J = \frac{1}{2} \sum_{p=1}^{M} \left( d_p - f_{pd}^{p} \right)^2   (2)

3. Recurrent Neural Network Architecture

One way of introducing feedback into neural networks is by so-called 'global feedback', where


adaptive feedback connections are provided between each pair of hidden units. Global feedback thus means that each hidden unit is fully connected to the hidden unit state vector s(t − 1) containing the previous hidden unit outputs, as illustrated in Fig. 2. The resulting networks are called 'fully recurrent' networks; this type of recurrent network is sometimes called a Williams and Zipser network [8].

The specific update formulae for the units of the recurrent network are as follows. Let x(t) denote a vector containing the N_i external inputs at time t, let s(t) denote a vector containing the N_h hidden unit outputs at time t, and let y(t) denote a vector containing the N_o output units of the output layer at time t. For convenience in the weight labelling, the combined inputs to the hidden units at time t are collected in a state vector z^h(t), whose kth element is defined as

z_k^h(t) = \begin{cases} x_k(t), & k \in I \\ s_k(t-1), & k \in H \end{cases}   (3)

where k indexes x, s and y, so that I denotes the set of N_i indices for which z_k^h is an input, and H denotes the set of N_h indices for which z_k^h is the output of a hidden unit.

The activation of the kth hidden unit is now calculated according to

s_k(t) = f(v_k(t)), \quad k \in H
       = f\left( \sum_{j \in I \cup H} w_{kj} z_j^h(t) + w_{kb} \right)
       = f\left( \sum_{j \in I} w_{kj} x_j(t) + \sum_{j \in H} w_{kj} s_j(t-1) + w_{kb} \right)   (4)

where v_k(t) denotes the weighted input to hidden unit k, f(·) is the nonlinear bipolar sigmoid function, f(x) = (1 − e^{−x}) / (1 + e^{−x}), and w_{kb} is the bias weight. The hidden unit outputs are then forwarded to the output unit, which sees the input vector z^o(t):

z_k^o(t) = s_k(t), \quad k \in H   (5)

Fig. 2. Architecture for RNN-PD and RNN-PTD.

The architecture of an RNN is denoted by the vector (N_i, N_h, N_o), where N_i represents the number of units in the input layer, N_h the number of units in the hidden layer and N_o the number of units in the output layer. The output layer is composed of two units, N_o = 2, one for the in-phase component and the other for the quadrature component of the symbol. When the 16-QAM modulation scheme is used, the output layer has four units, N_o = 4, and when a 64-QAM modulation scheme is used, the output layer has six units, N_o = 6. The optimal number of units in the remaining layers of the network is determined by heuristic methods.

The parameter used to measure the performance of the neural network architecture with different numbers of units in its layers is the Bit Error Rate (BER); it is usually the factor to be optimised in a digital communication system.

The output y_k(t) of the recurrent network considered here is linear, to allow for an arbitrary dynamic range. The network output is thus updated according to

y_k(t) = \sum_{j \in H} w_{oj} z_j^o(t) + w_{ob}, \quad k \in O
       = \sum_{j \in H} w_{oj} s_j(t) + w_{ob}   (6)

where O denotes the set of N_o indices for which y_k(t) is an output of the output layer, and w_{ob} is the output unit bias. Note that the output unit does not feed back its own previous values, as feedback in a linear unit is very likely to result in stability problems.

When performing the first iteration at time, say, t = 1, it is customary to set the values of the hidden and output unit outputs in the previous time step to zero [8], i.e.

s_k(0) = 0, \quad k \in H   (7)

Setting the initial previous outputs to zero means that these values will not influence the network output in the first iteration, and the recurrent network considered here therefore performs a purely feedforward mapping in the first iteration.

The feedforward mapping in the first iteration is due to the layered update of the units in the recurrent network, i.e. the hidden unit outputs are updated prior to the update of the output unit, just as for a feedforward network.
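The forward pass of Eqs (3)–(7), the concatenated input z^h(t), bipolar-sigmoid hidden units, a linear output layer and zero initial state, can be sketched as follows. This is my own minimal Python illustration: the class name, weight layout and random initialisation are assumptions, not the paper's implementation.

```python
import math
import random

def bipolar_sigmoid(x):
    """f(x) = (1 - e^-x) / (1 + e^-x), the activation of Eq. (4)."""
    return (1.0 - math.exp(-x)) / (1.0 + math.exp(-x))

class FullyRecurrentNet:
    """Forward pass of Eqs (3)-(7) for an (Ni, Nh, No) network."""

    def __init__(self, ni, nh, no, rng=random.Random(0)):
        self.ni, self.nh, self.no = ni, nh, no
        # hidden weights over the combined input z^h = [x(t), s(t-1)], plus bias
        self.w_h = [[rng.uniform(-1, 1) for _ in range(ni + nh + 1)]
                    for _ in range(nh)]
        # linear output weights over s(t), plus bias: Eq. (6)
        self.w_o = [[rng.uniform(-1, 1) for _ in range(nh + 1)]
                    for _ in range(no)]
        self.s = [0.0] * nh                 # Eq. (7): s_k(0) = 0

    def step(self, x):
        z = list(x) + self.s                # Eq. (3): z^h(t) = [x(t), s(t-1)]
        self.s = [bipolar_sigmoid(sum(w * v for w, v in zip(row[:-1], z)) + row[-1])
                  for row in self.w_h]      # Eq. (4)
        return [sum(w * s for w, s in zip(row[:-1], self.s)) + row[-1]
                for row in self.w_o]        # Eqs (5)-(6)

net = FullyRecurrentNet(3, 2, 2)            # the (3,2,2) structure of Section 5
y = net.step([0.5, -0.5, 1.0])
print(len(y))   # 2 outputs: in-phase and quadrature components
```

Because self.s starts at zero, the first call to step() is the purely feedforward mapping described above; from the second call onward the previous hidden state feeds back into the input vector.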

To train the selected model structure, it is necessary to obtain a database J of examples consisting of inputs x(t) and corresponding desired outputs d(t), J = {(x(t), d(t))}_{t=1}^{T}, where T is the number of training examples. The model is then trained, i.e. the parameters are adjusted so as to obtain a 'good' model, one that describes the observed data well, i.e. only makes small errors e(t),

e(t) = d(t) − y(t | w, x(t))   (8)

where d(t) is the desired output at time t and y(t | w, x(t)) is the model output given the external inputs x(t) and the concatenated set of parameters w; in the following, the dependency upon the parameters will often be implicitly assumed, and the model output at time t is simply denoted y(t).

4. Training Algorithm

This section describes the equations used to formu-late the training algorithm of Appendix A.

In the predistortion algorithm, the actual output value is compared with the desired output, resulting in an error signal. The error signal is fed back through the network and the weights are adjusted to minimise this error. This algorithm is discussed in detail elsewhere [8,14].

The total mean squared error J is given by

J(t) = \frac{1}{2} \sum_{k \in O} (d_k(t) − y_k(t))^2   (9)

where d_k(t) is the desired output value and y_k(t) the actual output of the kth unit of the output layer. This error is minimised by taking the partial derivatives of J with respect to each weight and adjusting the weights along the negative gradient.

The error signal for the output layer is calculated using Eq. (10):

\delta_k(t) = \frac{1 − y_k^2(t)}{2} (d_k(t) − y_k(t)), \quad k \in O   (10)

The error signal for the hidden layer is given by

\delta_j^h(t) = \frac{1 − s_j^2(t)}{2} \sum_{k \in O} \delta_k^{h+1}(t) w_{kj}^{h+1}(t − 1), \quad j \in H   (11)

The increments used in updating the weights and the bias weight of the hth layer are given by the following rules:

\Delta w_{ij}^{(h)}(t) = \eta \delta_j^{(h)}(t) s_i^{(h−1)}(t) + \alpha \Delta w_{ij}^{(h)}(t − 1)   (12)

w_{ij}^{(h)}(t) = w_{ij}^{(h)}(t − 1) + \Delta w_{ij}^{(h)}(t)   (13)

w_{jb}^{h}(t) = w_{jb}^{h}(t − 1) + \beta \delta_j^{h}(t)   (14)

where \eta is the learning rate, \alpha is the momentum parameter and \beta is the bias weight adaptation rate. The algorithms of Appendix A have been implemented using Eqs (4) and (10)–(14).

The value of the weights w_{kj} depends upon the channel characteristics, which in practice are not known exactly at the receiver, since they are subject to drifts due to temperature and variations of the HPA operating point. However, in this investigation the channel has been assumed stationary, and the values of the weights w_{kj} were estimated over a very large number of samples.

The model used to represent the HPA in this paper was proposed by Saleh [1]. The equations are included in Appendix B; the optimum values proposed in the same reference are a_p = 2.0922, b_p = 1.2466, a_q = 5.5290, b_q = 2.7088.

5. Calibration of the Neural Predistorter and Postdistorter

This section describes the optimisation procedure for determining the architecture of the recurrent neural network predistorter (RNN-PD) and postdistorter (RNN-PTD) to be used in a QPSK modulation scheme. The parameters of the neural architecture are calibrated during the training process of the predistorter and the postdistorter. These parameters are (a) the number of neurons in every layer of the network (N_i, N_h, N_o); (b) the learning rate (\eta) and the bias weight level (\beta); and (c) the momentum (\alpha).

The neural predistorter and postdistorter have been trained with a random sequence of 2000 symbols, starting with the weights of the network randomly initialised. A bit rate of 9600 bits/s is used, with a signal-to-noise ratio of 7 dB and an Input Back Off (IBO) factor of 4 dB, which defines the mean operating point. The bit error rate is obtained with the neural predistorter and postdistorter working in production mode, with a new sequence of 100,000 random symbols.

The RNN-PTD is calibrated according to Eqs (1) and (2). First, Eq. (2) is used to set f_{pd}^{p} in a stable operating condition or steady state, denoted by f_{pd−ss}^{p}. Secondly, f_{ptd}^{p} is calibrated assuming that the predistorter has previously been trained.

Therefore, Eq. (1) can be rewritten as

J = \frac{1}{2} \sum_{p=1}^{M} \left( d_p − f_{ptd}^{p}(f_{pd−ss}^{p}, Ch) \right)^2   (15)


Figure 11 in Appendix C gives the values of the parameters of the predistorter and postdistorter, and shows a comparison of the BER values for different architectures of the neural predistorter and neural postdistorter. The architecture that offers the best error performance for the RNN-PD and RNN-PTD is (3,2,2). This structure uses N_i = 3 units in the feedforward part of the input vector, N_h = 2 units in the feedback part of the hidden layer, and N_o = 2 units in the output layer, one for the in-phase component and the other for the quadrature component of the symbol.

The optimal values of \eta, \beta and \alpha are found using Monte Carlo simulation. The optimisation of the parameters is achieved using the sub-optimal structure (3,2,2) for the RNN-PD and RNN-PTD.

Figure 12 in Appendix C shows a comparison of the BER values obtained for different values of \eta, 0.1 < \eta < 0.9, while \beta is set to 0.75\eta and the momentum \alpha is fixed at 0.2. From Fig. 12, it can be seen that the best error performance for the RNN-PD is obtained when \eta = 0.8 and \beta = 0.6; for the RNN-PTD the best values are \eta = 0.1 and \beta = 0.075.

Figure 13 in Appendix C shows the BER offered by the RNN-PD and RNN-PTD, respectively, for different values of \alpha, with \eta = 0.8 and \beta = 0.6. The best error performance obtained for the RNN-PD and RNN-PTD in the range 0.1 < \alpha < 0.3 is also shown.

Finally, the optimal architecture of the RNN-PD is defined by the structure (3,2,2) and the parameters \eta = 0.8, \beta = 0.6 and \alpha = 0.2, and that of the RNN-PTD by the structure (3,2,2) and the parameters \eta = 0.1, \beta = 0.075 and \alpha = 0.2.

6. Discussion of Results

To understand the effect introduced by the neural predistorter and the compound effect introduced by the predistorter and postdistorter, Figs 3, 4 and 5

Fig. 3. Output QPSK signal of the system without compensation, S/N = 7 dB.

Fig. 4. Output QPSK signal with RNN-PD (3,2,2), S/N = 7 dB, IBO = 0 dB, with HPA and AWGN.

Fig. 5. Output QPSK signal with RNN-PD-PTD (3,2,2), S/N = 7 dB, IBO = 0 dB, with HPA and AWGN.

have been plotted. Figure 3 shows the output of the system without predistortion-postdistortion techniques. In Fig. 4 the output of the system with the RNN-PD is plotted in terms of its in-phase (Ipd) and quadrature (Qpd) components. Observe that Fig. 4 still exhibits the warping effect; however, the increased Euclidean distance allows better decoding of the symbol. In Fig. 5 the output of the system with the RNN-PTD is plotted in terms of its in-phase (Iptd) and quadrature (Qptd) components. In this figure, the warping and clustering effects shown in Fig. 4 have been corrected by the use of the compound system. As stated by Eq. (15), the postdistorter has been adjusted to compensate for the effects introduced by the PD and the channel.

The BER performance was evaluated by simulating the system shown in Fig. 1, with the QPSK signal format. The BER curves presented in Figs 6 and 7 have been obtained using 100,000 random symbols.

The predistortion-postdistortion technique presented here yields a significant improvement when compared to the system without compensation (W-T) and to the PDF-ISI scheme. Figures 6 and 7 show the BER curves versus signal-to-noise (S/N) ratio for different values of input back-off. Figure 6 shows an example of such curves obtained for a


Fig. 6. BER performance vs. S/N for QPSK, RNN-PD (3,2,2), with HPA and AWGN.

Fig. 7. BER performance vs. S/N for QPSK, RNN-PTD (3,2,2), with HPA and AWGN.

QPSK system, which is corrupted by a HPA with different input back-off factors and by Additive White Gaussian Noise (AWGN). An improvement of 11.5 dB can be achieved using the RNN-PD, with an input back-off factor of 0 dB, for a BER of 10^−4, and the gain obtained when compared with the PDF-ISI scheme is 2 dB for a BER of 10^−5. Figure 7 shows the gain of the RNN-PD-PTD scheme over a system without compensation. The gain is approximately 13 dB, with an input back-off factor of 0 dB, for a BER of 10^−4. The gain obtained when compared with the PDF-ISI scheme is 4 dB for a BER of 10^−5. The improvement achieved with the RNN-PD-PTD over the RNN-PD scheme is approximately 2 dB, with an input back-off factor of 0 dB, for a BER of 10^−5. Also, in Figs 6 and 7 the performance is practically independent of IBO for −8 dB < IBO < 8 dB.

Finally, the procedure presented for the QPSK modulation scheme can be generalised to higher orders. In Figs 8 and 9 there are examples of the

Fig. 8. Output 16-QAM signal with RNN-PTD (3,2,2), S/N = 7 dB, IBO = 4 dB, with HPA and AWGN.

Fig. 9. Output 64-QAM signal with RNN-PTD (3,2,2), S/N = 7 dB, IBO = 4 dB, with HPA and AWGN.

utilisation for 16-QAM and 64-QAM modulation schemes, respectively.

7. Conclusions

A new compensation technique based on a recurrent neural network has been presented. With reference to Fig. 5, excellent compensation can be achieved by the combined use of a predistorter and postdistorter, reducing the warping and clustering effects significantly.

The performance of the proposed scheme was compared to a system without compensation, using a HPA. The new system achieves an 11.5 dB gain with an input back-off of 0 dB for a BER of 10^−4, and a gain of 2 dB for a BER of 10^−5 when compared with the PDF-ISI scheme. When both predistorter and postdistorter are used, a gain of 13 dB can be achieved with an input back-off of 0 dB for a BER of 10^−4, and a gain of 4 dB for a BER of 10^−5 when compared with the PDF-ISI scheme.

The performance of the proposed system is significantly better than that of systems previously used to compensate similar effects.

References

1. Saleh AAM. Frequency-independent and frequency-dependent nonlinear models of TWT amplifiers. IEEE Trans Commun 1981; 29: 1715–1719

2. Karam G. Data predistortion technique using intersymbol interpolation. IEEE Trans Commun 1990; 38(10): 1716–1723

3. Karam G. A data predistortion technique with memory for QAM radio systems. IEEE Trans Commun 1991; 39(3): 336–343

4. Ghaderi M, Kumar S, Dodds DE. Adaptive predistortion lineariser using polynomial functions. IEE Proc Commun 1994; 141(2): 49–55

5. Ghaderi M, Kumar S, Dodds DE. Fast adaptive polynomial I and Q predistorter with global optimisation. IEE Proc Commun 1996; 143(2): 78–86

6. Stapleton S, Costescu F. Amplifier linearization using adaptive analog predistortion. IEEE Trans Vehicular Technology 1992; 41(1): 49–56

7. Metzger K, Valetin R. Intersymbol interference due to linear and nonlinear distortion. IEEE Trans Commun 1996; 44(7)

8. Williams RJ, Zipser D. A learning algorithm for continually running fully recurrent neural networks. Neural Computation 1989; 1: 270–280

9. S.L. A structure by which a recurrent neural network can approximate a nonlinear dynamic system. Proc International Neural Networks 1991; II: 709–714

10. Benson M, Carrasco RA. Recurrent neural network array for CDMA mobile communications systems. Electronics Letters 1997; 33(25): 2105–2106

11. Coloma J, Carrasco RA. MLP equaliser for frequency selective time-varying channels. Electronics Letters 1994; 30(6): 503–504

12. Coloma J, Carrasco RA. Non-linear adaptive algorithms for equalisation in mobile satellite communications. Neural Computing and Applications 1994; 2: 99–118

13. Soto I, Carrasco RA. Combination of a Viterbi decoder with an adaptive neural equaliser over a Rician fading channel. In: T. Wysocki, H. Razavi and B. Honary (eds), Digital Signal Processing for Communication Systems, July 1997

14. Pao YH. Adaptive Pattern Recognition and Neural Networks. Addison-Wesley, New York, 1989

Appendix A: Algorithm 1. Algorithm for forward and back propagation of the signals

Algorithm 1.1. Initialisation of weights and bias weights

for k = 1 to H do
    for i = 1 to (I + H) do
        w_ki(0) = random value in the range [−1, 1]
    next i
    w_kb(0) = 0 and Δw_ki(0) = 0
next k
for o = 1 to O do
    for k = 1 to H do
        w_ok(0) = random value in the range [−1, 1]
    next k
    w_ob(0) = 0 and Δw_ok(0) = 0
next o

Algorithm 1.2. Output generation

for k = 1 to H do
    s_k(t) = f( Σ_{j ∈ I ∪ H} w_kj z^h_j(t) + w_kb )
next k
for k = 1 to O do
    y_k(t) = Σ_{j ∈ H} w_oj s_j(t) + w_ob
next k

Algorithm 1.3. Error signal of the output layer

for k = 1 to O do
    δ_k(t) = ((1 − y_k²(t)) / 2) (d_k(t) − y_k(t))
next k

Algorithm 1.4. Errors of the hidden units

for j = 1 to H do
    δ^h_j(t) = ((1 − s_j²(t)) / 2) Σ_{k ∈ O} δ^{h+1}_k(t) w^{h+1}_kj(t − 1)
next j

Algorithm 1.5. New values of weights and bias weights

for k = 1 to H do
    for i = 1 to (I + H) do
        Δw^h_ki(n) = η δ^h_k(n) z^h_i(n) + α Δw^h_ki(n − 1)
        w^h_ki(n) = w^h_ki(n − 1) + Δw^h_ki(n)
    next i
    w^h_kb(n) = w^h_kb(n − 1) + β δ^h_k(n)
next k
for o = 1 to O do
    for k = 1 to H do
        Δw_ok(n) = η δ_o(n) s_k(n) + α Δw_ok(n − 1)
        w_ok(n) = w_ok(n − 1) + Δw_ok(n)
    next k
    w_ob(n) = w_ob(n − 1) + β δ_o(n)
next o

Appendix B: Nonlinear Model of TWT Amplifier

The TWT modelled as a memoryless amplifier exhibits nonlinear distortion in both amplitude and phase [1]. The model used in this case is a quadrature model, as shown in Fig. 10.

Fig. 10. Nonlinear quadrature model of a HPA.


Let the input signal be

x(t) = r(t) cos[ωt + θ(t)]   (B1)

where ω is the carrier frequency, and r(t) and θ(t) are the modulated envelope and phase, respectively.

If the input is given by (B1), the in-phase and quadrature outputs of the TWT can be expressed as

p(t) = P[r(t)] cos[ωt + θ(t)]   (B2)

q(t) = −Q[r(t)] sin[ωt + θ(t)]   (B3)

where P(r) and Q(r) are odd functions of r, with linear and cubic leading terms, respectively.

To represent P(r) and Q(r) we use the following:

P(r) = a_p r / (1 + b_p r²)   (B4)

Q(r) = a_q r³ / (1 + b_q r²)²   (B5)

where a_p, b_p, a_q and b_q are real constants.
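The characteristics (B4) and (B5) are straightforward to evaluate. The sketch below is my own Python illustration, not the authors' simulation code; it uses the parameter values quoted in Section 4 (a_p = 2.0922, b_p = 1.2466, a_q = 5.5290, b_q = 2.7088) as defaults.

```python
def saleh_P(r, a_p=2.0922, b_p=1.2466):
    """Eq. (B4): in-phase characteristic P(r) = a_p r / (1 + b_p r^2)."""
    return a_p * r / (1.0 + b_p * r * r)

def saleh_Q(r, a_q=5.5290, b_q=2.7088):
    """Eq. (B5): quadrature characteristic Q(r) = a_q r^3 / (1 + b_q r^2)^2."""
    return a_q * r ** 3 / (1.0 + b_q * r * r) ** 2

# The amplitude compression is visible as r grows: P(r) rises roughly
# linearly for small r, then saturates and falls off past r = 1/sqrt(b_p).
for r in (0.2, 0.5, 1.0, 2.0):
    print(r, round(saleh_P(r), 4), round(saleh_Q(r), 4))
```

The cubic leading term of Q(r) means the phase distortion is negligible at small drive levels but grows quickly near saturation, which is why backing off the input (the IBO factor of Sections 5 and 6) trades efficiency for linearity.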

Appendix C

Figure 11 shows the parameters of the architecture of the RNN-PD and RNN-PTD.

Fig. 11. BER for different architectures with η = 0.8, β = 0.6, α = 0.2 and S/N = 7 dB.

Fig. 12. BER performance for RNN-PD and RNN-PTD (3,2,2), with β = 0.75η, α = 0.2 and S/N = 7 dB.

Fig. 13. BER performance for RNN-PD and RNN-PTD (3,2,2), with η = 0.8, β = 0.6 and S/N = 7 dB.