
Novel Least Mean Square Algorithm

B. T. Krishna

Dept. of Electronics and Communication Engineering, Jawaharlal Nehru Technological University Kakinada, Kakinada, Andhra Pradesh, India 533003. Email: [email protected]

Abstract

A novel Least Mean Square (LMS) algorithm using the first-order Al-Alaoui differentiator is proposed. The condition for stability is also derived. The performance of the proposed algorithm is compared with existing algorithms through a system identification example. The results demonstrate the efficacy of the proposed algorithm.

Keywords: Digital differentiator; Adaptive filters; Steepest descent algorithm; LMS algorithm; System identification

1. Introduction

The LMS algorithm is one of the adaptive filtering algorithms used to minimize the mean squared error, and it was invented in the 1960s [1-2]. The algorithm is a stochastic implementation of the steepest descent algorithm with the cost function J being the mean squared error [6]. For cost function J, steepest descent is described by the vector differential equation [2,5,6]

$$\frac{dW(t)}{dt} = -\mu \frac{\partial J}{\partial W(t)}$$

where W is the vector of unknown weights and µ is a scalar gain.

Although the LMS algorithm is simple to implement and robust, it suffers from low convergence speed, and many variations of the LMS algorithm have been proposed. If the derivative in the above equation is approximated by the backward difference, the conventional LMS algorithm is obtained [5]. By approximating the derivative term with the trapezoidal rule, a novel trapezoidal LMS (TLMS) algorithm was proposed in [5]. The closer the derivative approximation is to the ideal one, the better the LMS algorithm that can be derived from it. So, in this letter, the Al-Alaoui approximation is used to derive a novel LMS algorithm. The Al-Alaoui differentiator is obtained by interpolating the backward (rectangular) and trapezoidal integrators, followed by inversion and stabilization [7].



Digital differentiators find applications in radar, sonar, and biomedical engineering, among other areas. They can be of either Infinite Impulse Response (IIR) or Finite Impulse Response (FIR) type, but IIR filters are often preferred because of their lower cost and smaller implementation size. In [5], a trapezoidal rule based LMS algorithm is proposed. In this paper, a first-order Al-Alaoui differentiator based steepest descent algorithm is presented and its performance is compared with the existing ones.

The paper is organized as follows. Section 2 reviews the existing LMS and TLMS algorithms. The derivation of the Al-Alaoui digital differentiator based novel steepest descent algorithm and its condition for stability are presented in Section 3. Results and conclusions are presented in Section 4.

2. LITERATURE SURVEY

Two of the well-known integration techniques are the rectangular (Euler) and trapezoidal methods. If the derivative term dW(t)/dt in the steepest descent equation is approximated by rectangular Euler integration [2,5],

$$\frac{dW}{dt} \approx \frac{W_{k+1} - W_k}{T} \qquad (1)$$

where T is the sampling interval normalized to unity and W_k is the discrete weight vector at sample interval k. Substituting Eqn.(1) into the steepest descent equation, it can be shown that [5]

$$W(n+1) = W(n) - \mu \frac{\partial J}{\partial W(n)} \qquad (2)$$

Solving for the gradient of the mean square error results in [6]

$$W(n+1) = W(n) + 2\mu e(n) x(n) \qquad (3)$$

This is the equation for the LMS algorithm, which is used to estimate the weights of the filter [6]. LMS algorithms have many applications in the areas of adaptive filters, image processing and signal processing [1,2,5]. Similarly, according to trapezoidal integration [5],

$$\frac{dW}{dt} \approx \frac{2 W_{k+1}}{T} \cdot \frac{1 - z^{-1}}{1 + z^{-1}} \qquad (4)$$

where z^{-1} is the backward shift operator. Using Eqn.(4) in the steepest descent equation and simplifying, we have

$$W_{k+1} = W_k - \frac{\mu}{2} \frac{\partial J}{\partial W_k} - \frac{\mu}{2} \frac{\partial J}{\partial W_{k-1}} \qquad (5)$$


Considering an FIR filter with input x(n) and output y(n),

$$y(n) = \sum_{i=0}^{N-1} w(i)\, x(n-i) = W^T(n)\, x(n) \qquad (6)$$

With d(n) being the desired signal, the error signal can be computed as

$$e(n) = d(n) - y(n) \qquad (7)$$

and hence the mean square error (MSE) is given by

$$J = E\left[e^2(n)\right] \qquad (8)$$
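As a concrete illustration of Eqns.(6)-(8), the following Python sketch computes the filter output, the error, and a sample estimate of the MSE; the function and variable names are illustrative (not from the paper), and NumPy is assumed.

```python
import numpy as np

def filter_output_and_error(w, x_buf, d_n):
    """Eqns (6)-(7): y(n) = W^T(n) x(n), e(n) = d(n) - y(n).

    w     : current weight vector of length N
    x_buf : regressor x(n) = [x(n), x(n-1), ..., x(n-N+1)]
    d_n   : desired signal sample d(n)
    """
    y_n = np.dot(w, x_buf)   # Eqn.(6)
    e_n = d_n - y_n          # Eqn.(7)
    return y_n, e_n

# The MSE of Eqn.(8), J = E[e^2(n)], is estimated in practice by
# averaging e(n)^2 over many samples, e.g. J_hat = np.mean(errors**2).
```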

By substituting Eqn.(8) into Eqn.(5), we have

$$W(n+1) = W(n) - \frac{\mu}{2} \frac{\partial e^2(n)}{\partial W(n)} - \frac{\mu}{2} \frac{\partial e^2(n)}{\partial W(n-1)} \qquad (9)$$

$$\frac{\partial J}{\partial W(n)} = \frac{\partial e^2(n)}{\partial W(n)} = 2 e(n) \frac{\partial e(n)}{\partial W(n)} = 2 e(n) \frac{\partial \left(d(n) - y(n)\right)}{\partial W(n)} = -2 e(n) \frac{\partial \left(W^T(n) x(n)\right)}{\partial W(n)} = -2 e(n) x(n) \qquad (10)$$

Using the result of Eqn.(10) in Eqn.(9) and simplifying,

$$W(n+1) = W(n) + \mu e(n) x(n) + \mu e(n-1) x(n-1) \qquad (11)$$

This is the equation for the trapezoidal steepest descent (TLMS) algorithm [2]. For stability and convergence, the step size satisfies [2]

$$0 < \mu < \frac{1}{\lambda_{max}} \qquad (12)$$

where λ_max is the largest eigenvalue of the input autocorrelation matrix.
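For reference, the two weight updates reviewed in this section can be written as a minimal NumPy sketch; the function and variable names are illustrative (not from the paper), and the step size is assumed to satisfy the bound of Eqn.(12).

```python
import numpy as np

def lms_update(w, x_n, e_n, mu):
    # Conventional LMS, Eqn.(3): W(n+1) = W(n) + 2*mu*e(n)*x(n)
    return w + 2.0 * mu * e_n * x_n

def tlms_update(w, x_n, e_n, x_nm1, e_nm1, mu):
    # Trapezoidal LMS, Eqn.(11): W(n+1) = W(n) + mu*e(n)*x(n) + mu*e(n-1)*x(n-1)
    return w + mu * e_n * x_n + mu * e_nm1 * x_nm1

# Step-size selection (Eqn.(12)): 0 < mu < 1/lambda_max, where lambda_max
# is the largest eigenvalue of the input autocorrelation matrix R.
```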

3. AL-ALAOUI DIGITAL DIFFERENTIATOR BASED NOVEL STEEPEST DESCENT ALGORITHM

The z-domain representation of the backward (rectangular) and trapezoidal integrators is [4,7]

$$H_{Rect}(z) = \frac{zT}{z - 1} \qquad (13)$$


Figure 1: Magnitude Response Comparison Of Digital Integrators

$$H_{Trap}(z) = \frac{T(z + 1)}{2(z - 1)} \qquad (14)$$

Fig. 1 compares the backward and trapezoidal integrators with the ideal integrator, which is defined by the transfer function H(jω) = 1/(jω).

The Al-Alaoui digital differentiator is designed by interpolating the rectangular Euler integrator and the trapezoidal integrator, and is given by [4]

$$H_N(z) = a H_{Rect}(z) + (1 - a) H_{Trap}(z) \qquad (15)$$

It is observed that the value a = 3/4 provides a better approximation to the ideal integrator [4]:

$$H_N(z) = \frac{T}{8} \cdot \frac{(z + 7)}{(z - 1)} \qquad (16)$$

Applying the approach proposed in [4], the new differentiator is

$$G(z) = \frac{8}{7T} \cdot \frac{(z - 1)}{(z + 1/7)} \qquad (17)$$

The magnitude and phase response comparisons of the Al-Alaoui digital differentiator with the ideal one (G(jω) = jω) are shown in Figs. 2 and 3, respectively.


Figure 2: Magnitude Response Comparison Of Digital Differentiators

Figure 3: Phase Response Comparison Of Digital Differentiators
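The comparisons in Figs. 2 and 3 can be reproduced with a short script. The sketch below (an illustration, assuming SciPy/NumPy and T = 1) evaluates the frequency response of G(z) from Eqn.(17) against the ideal differentiator jω.

```python
import numpy as np
from scipy.signal import freqz

T = 1.0                                         # sampling interval normalized to unity
b = (8.0 / (7.0 * T)) * np.array([1.0, -1.0])   # numerator of G(z): (8/7T)(1 - z^-1)
a = np.array([1.0, 1.0 / 7.0])                  # denominator of G(z): 1 + (1/7) z^-1

w = np.linspace(0.01, np.pi, 512)               # skip w = 0, where the ideal response is zero
_, H = freqz(b, a, worN=w)                      # Al-Alaoui differentiator response
H_ideal = 1j * w / T                            # ideal differentiator G(jw) = jw

mag_error_db = 20.0 * np.log10(np.abs(H) / np.abs(H_ideal))
phase_error = np.angle(H) - np.angle(H_ideal)
# Plotting |H| and angle(H) against the ideal curves reproduces Figs. 2 and 3.
```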

From the transfer function of the minimum-phase differentiator defined by Eqn.(17), it can be written that


$$\frac{dW}{dt} \approx \frac{8 W_{n+1}}{7T} \cdot \frac{1 - z^{-1}}{1 + z^{-1}/7} \qquad (18)$$

On applying the above differentiator to the steepest descent algorithm,

$$W(n+1) = W(n) - \frac{7\mu}{8} \frac{\partial J(n)}{\partial W(n)} - \frac{\mu}{8} \frac{\partial J}{\partial W(n-1)} \qquad (19)$$

Using the same approach as for the TLMS algorithm [5], one obtains

$$W(n+1) = W(n) + \frac{7\mu}{4} e(n) x(n) + \frac{\mu}{4} e(n-1) x(n-1) \qquad (20)$$

This is the equation for the novel steepest descent algorithm based on the Al-Alaoui digital differentiator. For m = n + 1 weights, the computational complexity is the same as that of the TLMS algorithm; compared to the LMS algorithm, it requires only (1 + m) extra multiplications and m extra additions. To find the condition for stability of the proposed algorithm, let us define the weight error vectors c(n) and c(n+1) as

$$c(n) = w(n) - w_0 \qquad (21)$$

$$c(n+1) = w(n+1) - w_0 = c(n) + \frac{\mu}{4}\left(7 e(n) x(n) + e(n-1) x(n-1)\right) \qquad (22)$$

The Wiener solution is given by [2]

$$w_0 = R^{-1} p \qquad (23)$$

where R is the autocorrelation matrix of the input and p is the cross-correlation vector between the input and the desired signal, defined as [2,5]

$$R = E\left(x(n)\, x^T(n)\right) \qquad (24)$$

$$p = E\left(d(n)\, x(n)\right) \qquad (25)$$

It can be shown that the error signal e(n) is given by

$$e(n) = d(n) - y(n) = e_0(n) - x^T(n)\, c(n) \qquad (26)$$

where

$$e_0(n) = d(n) - x^T(n)\, w_0 \qquad (27)$$


From the expression for the error term e(n), it follows that

$$e(n-1) = e_0(n-1) - x^T(n-1)\, c(n-1) \qquad (28)$$

The final expression for the weight error vector is shown to be

$$c(n+1) = \left(I - \frac{7\mu}{4}\, x(n) x^T(n)\right) c(n) + \frac{\mu}{4}\left(7\, x(n) e_0(n) + x(n-1) e_0(n-1) - x(n-1) x^T(n-1)\, c(n-1)\right) \qquad (29)$$

For the sake of simplicity, the expected values of the individual terms of the above equation are calculated and displayed in Eqns.(30)-(33) for convenience:

$$E\left[\left(I - \frac{7\mu}{4}\, x(n) x^T(n)\right) c(n)\right] = \left(I - \frac{7\mu}{4} R\right) E\left(c(n)\right) \qquad (30)$$

$$E\left(x(n)\, e_0(n)\right) = p - R w_0 \qquad (31)$$

$$E\left(x(n-1) x^T(n-1)\, c(n-1)\right) = R\, E\left(c(n-1)\right) \qquad (32)$$

$$E\left(x(n-1)\, e_0(n-1)\right) = E\left[x(n-1)\left(d(n-1) - x^T(n-1) w_0\right)\right] = p - R w_0 \qquad (33)$$

Taking expectations on both sides of Eqn.(29), using Eqns.(30)-(33) and noting that p - Rw_0 = 0 by Eqn.(23), we obtain

$$E\left(c(n+1)\right) = \left(I - \frac{7\mu}{4} R\right) E\left(c(n)\right) - \frac{\mu}{4} R\, E\left(c(n-1)\right) \qquad (34)$$

Define a unitary similarity transformation E(v(n)) = Q^T E(c(n)), where Q has the eigenvectors of the matrix R as its columns. Then Q^T Q = I and R = QΛQ^T, where Λ is a diagonal matrix of the eigenvalues λ_1, λ_2, ..., λ_{n+1}. Multiplying both sides of Eqn.(34) by Q^T and using this transformation,

$$E\left(v(n)\right) = \left(I - \frac{7\mu}{4} \Lambda\right) E\left(v(n-1)\right) - \frac{\mu}{4} \Lambda\, E\left(v(n-2)\right) \qquad (35)$$

For the k-th natural mode, the above equation reduces to

$$E\left(v_k(n)\right) = \left(1 - \frac{7\mu}{4} \lambda_k\right) E\left(v_k(n-1)\right) - \frac{\mu}{4} \lambda_k\, E\left(v_k(n-2)\right) \qquad (36)$$

For stability, the roots of the corresponding set of characteristic polynomials should lie within the unit circle:

$$1 - \left(1 - \frac{7\mu}{4} \lambda_k\right) z^{-1} + \frac{\mu}{4} \lambda_k z^{-2} = 0, \qquad k = 1, 2, \ldots, n+1 \qquad (37)$$


By applying Jury's stability test, the condition for stability is obtained as [8]

$$0 < \mu < \frac{4}{3\lambda_{max}} \qquad (38)$$

where λ_max is the largest eigenvalue of R.
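Combining Eqns.(20) and (38), the proposed weight update and its step-size check can be sketched as follows (illustrative names, NumPy assumed; this is a sketch, not the author's code).

```python
import numpy as np

def al_alaoui_lms_update(w, x_n, e_n, x_nm1, e_nm1, mu):
    # Proposed update, Eqn.(20):
    # W(n+1) = W(n) + (7*mu/4)*e(n)*x(n) + (mu/4)*e(n-1)*x(n-1)
    return w + (7.0 * mu / 4.0) * e_n * x_n + (mu / 4.0) * e_nm1 * x_nm1

def max_stable_step(R):
    # Stability bound, Eqn.(38): 0 < mu < 4 / (3 * lambda_max)
    lambda_max = np.max(np.linalg.eigvalsh(R))  # largest eigenvalue of R
    return 4.0 / (3.0 * lambda_max)
```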

4. RESULTS AND CONCLUSIONS

To illustrate the efficacy of the proposed Al-Alaoui based LMS algorithm, a system identification example is considered: the identification of a 4th-order FIR system w(z) = 1 + 2z^{-1} + 3z^{-2} + 4z^{-3} + 5z^{-4} with additive white noise of variable variance, with µ chosen as 0.1. The performance for various values of SNR is illustrated in Figures 4, 5 and 6. Table I shows the steady-state mean and variance of the weight estimates for different values of SNR.
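A minimal version of this system identification experiment could be set up as in the following sketch; the run length and noise level shown are illustrative choices rather than values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # unknown system 1 + 2z^-1 + ... + 5z^-4
N = len(w_true)
mu = 0.1                                        # step size used in the paper
n_samples = 2000                                # illustrative run length
noise_std = 0.01                                # sets the SNR (illustrative)

x = rng.standard_normal(n_samples)              # white input signal
d = np.convolve(x, w_true)[:n_samples] + noise_std * rng.standard_normal(n_samples)

w = np.zeros(N)
x_prev, e_prev = np.zeros(N), 0.0
for n in range(N - 1, n_samples):
    x_n = x[n - N + 1:n + 1][::-1]              # regressor [x(n), ..., x(n-N+1)]
    e_n = d[n] - np.dot(w, x_n)                 # error, Eqn.(7)
    # Proposed Al-Alaoui based update, Eqn.(20)
    w = w + (7 * mu / 4) * e_n * x_n + (mu / 4) * e_prev * x_prev
    x_prev, e_prev = x_n, e_n

print(w)   # approaches w_true; swapping in the LMS/TLMS updates allows comparison
```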

Figure 4: Comparison of LMS, TLMS and Novel steepest descent algorithms for 40 dB SNR

It can be seen that the novel steepest descent algorithm shows superior performance over TLMS and LMS, as the variance of the weight estimate is lower for all the SNR values considered. Compared with the conventional approaches (namely LMS and TLMS), the novel steepest descent algorithm is observed to have faster convergence and lower mean square error.


Figure 5: Comparison of LMS, TLMS and Novel steepest descent algorithms for 26 dB SNR

Figure 6: Comparison of LMS, TLMS and Novel steepest descent algorithms for 19 dB SNR

[1] J. G. Proakis, D. G. Manolakis, Digital Signal Processing: Principles, Algorithms and Applications, Prentice-Hall, Upper Saddle River, New Jersey, 2007.
