
Slide 1

Gerhard Schmidt

Christian-Albrechts-Universität zu Kiel
Faculty of Engineering
Electrical Engineering and Information Technology
Digital Signal Processing and System Theory

Adaptive Filters – Algorithms (Part 2)

Slide 2

Today:

Contents of the Lecture

Adaptive Algorithms:

Introductory Remarks

Recursive Least Squares (RLS) Algorithm

Least Mean Square Algorithm (LMS Algorithm) – Part 1

Least Mean Square Algorithm (LMS Algorithm) – Part 2

Affine Projection Algorithm (AP Algorithm)

Slide 3

Basics – Part 1

Least Mean Square (LMS) Algorithm

Optimization criterion:

Minimizing the mean square error

Assumptions:

Real, stationary random processes

Structure:

[Block diagram: unknown system and adaptive filter]
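In standard notation (a reconstruction, since the slide formulas are not part of this transcript; $x(n)$ denotes the input vector, $d(n)$ the output of the unknown system, and $\hat{h}(n)$ the adaptive filter coefficients), the criterion reads:

```latex
E\{e^2(n)\} = E\left\{\left[d(n) - \hat{h}^{\mathrm{T}}(n)\,x(n)\right]^2\right\} \;\longrightarrow\; \min
```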

Slide 4

Derivation – Part 2

Least Mean Square (LMS) Algorithm

Newton's method

What we have so far:

Resolving this for the new coefficient vector leads to:

With the introduction of a step size, the following adaptation rule can be formulated:

Slide 5

Derivation – Part 3

Least Mean Square (LMS) Algorithm

Newton's method:

Method of steepest descent:

LMS algorithm

For practical approaches, the expectation value is replaced by its instantaneous value. This leads to the so-called least mean square (LMS) algorithm:
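The update equation itself is not part of this transcript; as a minimal sketch of the standard LMS rule $\hat{h}(n+1) = \hat{h}(n) + \mu\,e(n)\,x(n)$ (the demo setup and helper name are assumptions):

```python
import numpy as np

def lms_step(h_hat, x_vec, d, mu):
    """One LMS iteration: h_hat(n+1) = h_hat(n) + mu * e(n) * x(n)."""
    e = d - h_hat @ x_vec            # instantaneous a priori error e(n)
    h_hat = h_hat + mu * e * x_vec   # step along the instantaneous gradient estimate
    return h_hat, e

# Hypothetical usage: identify a short FIR system from white noise.
rng = np.random.default_rng(0)
h_true = np.array([0.5, -0.3, 0.1])  # stands in for the unknown system
h_hat = np.zeros(3)
x = rng.standard_normal(10_000)
for n in range(3, len(x)):
    x_vec = x[n:n-3:-1]              # x(n), x(n-1), x(n-2)
    d = h_true @ x_vec               # undisturbed output of the unknown system
    h_hat, _ = lms_step(h_hat, x_vec, d, mu=0.01)
```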

Slide 6

Upper Bound for the Step Size

Least Mean Square (LMS) Algorithm

A priori error:

A posteriori error:

Consequently:

For large filter lengths and zero-mean input processes, the following approximation is valid:
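A hedged reconstruction of this argument in standard LMS notation (the transcript omits the formulas; $N$ is the filter length):

```latex
\begin{align}
e(n)         &= d(n) - \hat{h}^{\mathrm{T}}(n)\,x(n) && \text{(a priori)}\\
\tilde{e}(n) &= d(n) - \hat{h}^{\mathrm{T}}(n+1)\,x(n)
              = e(n)\left[1 - \mu\,x^{\mathrm{T}}(n)\,x(n)\right] && \text{(a posteriori)}\\
|\tilde{e}(n)| < |e(n)| \;&\Longrightarrow\;
0 < \mu < \frac{2}{x^{\mathrm{T}}(n)\,x(n)} \approx \frac{2}{N\,E\{x^2(n)\}}
\end{align}
```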

Slide 7

System Distance

Least Mean Square (LMS) Algorithm

How LMS adaptation changes the system distance:

[Figure: old and new system distance, target, and current system error vector]

Slide 8

Sign Algorithm

Least Mean Square (LMS) Algorithm

Update rule:

with

An early algorithm with very low complexity (still used today in applications that operate at very high frequencies). It can be implemented without any multiplications; the step-size multiplication can be realized as a bit shift.
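A minimal sketch, assuming the sign-error variant $\hat{h}(n+1) = \hat{h}(n) + \mu\,\mathrm{sgn}(e(n))\,x(n)$ (the transcript omits the exact update rule):

```python
import numpy as np

def sign_lms_step(h_hat, x_vec, d, mu):
    """One sign-error LMS iteration (assumed variant).
    With mu chosen as a power of two, the step-size multiplication
    reduces to a bit shift, as noted above."""
    e = d - h_hat @ x_vec                    # a priori error e(n)
    h_hat = h_hat + mu * np.sign(e) * x_vec  # only the sign of the error is used
    return h_hat, e
```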

Slide 9

Analysis of the Mean Value

Least Mean Square (LMS) Algorithm

Expectation of the filter coefficients:

If the procedure converges, the coefficients reach stationary end values:

So we have orthogonality:

Wiener solution
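In standard notation, with the autocorrelation matrix $R_{xx} = E\{x(n)\,x^{\mathrm{T}}(n)\}$ and the cross-correlation vector $r_{xd} = E\{x(n)\,d(n)\}$ (symbols assumed, since the slide formulas are not in the transcript), the stationary end values satisfy:

```latex
E\{e(n)\,x(n)\} = \mathbf{0}
\quad\Longrightarrow\quad
R_{xx}\,\hat{h}_{\infty} = r_{xd}
\qquad \text{(Wiener--Hopf equations)}
```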

Slide 10

Convergence of the Expectations – Part 1

Least Mean Square (LMS) Algorithm

Into the equation for the LMS algorithm

we insert the equation for the error

and get:

Expectation of the filter coefficients:

Slide 11

Convergence of the Expectations – Part 2

Least Mean Square (LMS) Algorithm

Expectation of the filter coefficients:

Independence assumption:

Difference between means and expectations:

Convergence of the means requires:

Slide 12

Convergence of the Expectations – Part 3

Least Mean Square (LMS) Algorithm

= 0 because of the Wiener solution

Recursion:

Convergence requires the contraction of the matrix:

Slide 13

Convergence of the Expectations – Part 4

Least Mean Square (LMS) Algorithm

Convergence requires the contraction of the matrix (result from the last slide):

Case 1: White input signal

Condition for the convergence of the mean values:

For comparison – condition for the convergence of the filter coefficients:

Slide 14

Convergence of the Expectations – Part 5

Least Mean Square (LMS) Algorithm

Case 2: Colored input – assumptions

Slide 15

Convergence of the Expectations – Part 6

Least Mean Square (LMS) Algorithm

Putting the following results

together leads to the following notation for the autocorrelation matrix:

Slide 16

Convergence of the Expectations – Part 7

Least Mean Square (LMS) Algorithm

Recursion:

Slide 17

Condition for Convergence – Part 1

Least Mean Square (LMS) Algorithm

Condition for the convergence of the expectations of the filter coefficients:

Previous result:

Slide 18

Condition for Convergence – Part 2

Least Mean Square (LMS) Algorithm

A (very rough) estimate for the largest eigenvalue:

Consequently:
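A reconstruction of the usual rough estimate in standard notation: the largest eigenvalue is bounded by the trace of the autocorrelation matrix, which yields a step-size condition that is easy to evaluate:

```latex
\lambda_{\max} \;\le\; \sum_{i=0}^{N-1} \lambda_i
            \;=\; \operatorname{tr}\{R_{xx}\}
            \;=\; N\,r_{xx}(0)
            \;=\; N\,E\{x^2(n)\}
\quad\Longrightarrow\quad
0 < \mu < \frac{2}{N\,E\{x^2(n)\}}
```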

Slide 19

Eigenvalues and Power Spectral Density – Part 1

Least Mean Square (LMS) Algorithm

Relation between eigenvalues and power spectral density:

Signal vector:

Autocorrelation matrix:

Fourier transform:

Equation for eigenvalues:

Eigenvalue:

Slide 20

Eigenvalues and Power Spectral Density – Part 2

Least Mean Square (LMS) Algorithm

Computing lower and upper bounds for the eigenvalues – part 1:

… previous result …

… exchanging the order of the sums and the integral and splitting the exponential term …

… lower bound …

… upper bound …

Slide 21

Eigenvalues and Power Spectral Density – Part 3

Least Mean Square (LMS) Algorithm

Computing lower and upper bounds for the eigenvalues – part 2:

… exchanging again the order of the sums and the integral …

… solving the integral first …

… inserting the result and using the orthonormality properties of the eigenvectors …

Slide 22

Eigenvalues and Power Spectral Density – Part 4

Least Mean Square (LMS) Algorithm

Computing lower and upper bounds for the eigenvalues – part 2 (continued):

… exchanging again the order of the sums and the integral …

… inserting the result from above to obtain the upper bound …

… inserting the result from above to obtain the lower bound …

… finally we get …
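The end result of this derivation is the classical bound that confines every eigenvalue of the autocorrelation matrix between the extrema of the power spectral density (stated here as a reconstruction in standard notation):

```latex
\min_{\Omega} S_{xx}\!\left(e^{\,\mathrm{j}\Omega}\right)
\;\le\; \lambda_i \;\le\;
\max_{\Omega} S_{xx}\!\left(e^{\,\mathrm{j}\Omega}\right),
\qquad i = 0, \dots, N-1
```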

Slide 23

Geometrical Explanation of Convergence – Part 1

Least Mean Square (LMS) Algorithm

System:

System output:

Structure:

[Block diagram: unknown system and adaptive filter]

Slide 24

Geometrical Explanation of Convergence – Part 2

Least Mean Square (LMS) Algorithm

Error signal:

Difference vector:

LMS algorithm:

Slide 25

Geometrical Explanation of Convergence – Part 3

Least Mean Square (LMS) Algorithm

The difference vector will be split into two components:

For the parallel component, the following applies:

with:

Slide 26

Geometrical Explanation of Convergence – Part 4

Least Mean Square (LMS) Algorithm

Contraction of the system error vector:

… result obtained two slides before …

… splitting the system error vector …

… using the split from the previous slide and the fact that the orthogonal component is orthogonal to the input vector …

… this results in …

Slide 27

NLMS Algorithm – Part 1

Least Mean Square (LMS) Algorithm

Normalized LMS algorithm:

LMS algorithm:

[Block diagram: unknown system and adaptive filter]

Slide 28

NLMS Algorithm – Part 2

Least Mean Square (LMS) Algorithm

Adaptation (in general):

A priori error:

A posteriori error:

A successful adaptation requires

or:

Slide 29

NLMS Algorithm – Part 3

Least Mean Square (LMS) Algorithm

Condition:

Ansatz:

Convergence condition:

Inserting the update equation:

Slide 30

NLMS Algorithm – Part 4

Least Mean Square (LMS) Algorithm

Condition:

Ansatz:

Step-size requirement for the NLMS algorithm (after a few lines …):

For comparison with LMS algorithm:

or
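A hedged reconstruction of both conditions (the transcript omits the formulas; the normalized step size is denoted $\mu(n)$ here):

```latex
0 < \mu(n) < 2 \quad \text{(NLMS)},
\qquad\text{compared with}\qquad
0 < \mu < \frac{2}{x^{\mathrm{T}}(n)\,x(n)} \quad \text{(LMS)}
```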

Slide 31

NLMS Algorithm – Part 5

Least Mean Square (LMS) Algorithm

Ansatz:

Adaptation rule for the NLMS algorithm:
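A minimal sketch of the standard NLMS update $\hat{h}(n+1) = \hat{h}(n) + \mu\,e(n)\,x(n)/\left(x^{\mathrm{T}}(n)\,x(n)\right)$; the regularization constant `eps` is an assumption for numerical safety during input pauses, not from the slides:

```python
import numpy as np

def nlms_step(h_hat, x_vec, d, mu, eps=1e-8):
    """One NLMS iteration:
    h_hat(n+1) = h_hat(n) + mu * e(n) * x(n) / (x(n)^T x(n) + eps)."""
    e = d - h_hat @ x_vec                                   # a priori error e(n)
    h_hat = h_hat + mu * e * x_vec / (x_vec @ x_vec + eps)  # normalized gradient step
    return h_hat, e
```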

Slide 32

Matlab-Demo: Speed of Convergence

Least Mean Square (LMS) Algorithm

Slide 33

Convergence Examples – Part 1

Least Mean Square (LMS) Algorithm

Setup:

White noise:

Slide 34

Convergence Examples – Part 2

Least Mean Square (LMS) Algorithm

Setup:

Colored noise:

Slide 35

Convergence Examples – Part 3

Least Mean Square (LMS) Algorithm

Setup:

Speech:

Slide 36

Today:

Contents of the Lecture

Adaptive Algorithms:

Introductory Remarks

Recursive Least Squares (RLS) Algorithm

Least Mean Square Algorithm (LMS Algorithm) – Part 1

Least Mean Square Algorithm (LMS Algorithm) – Part 2

Affine Projection Algorithm (AP Algorithm)

Slide 37

Basics

Affine Projection Algorithm

Signal matrix:

Signal vector:

Filter vector:

Filter output:

M denotes the order of the procedure

[Block diagram: unknown system and adaptive filter]

Slide 38

Signal Matrix

Affine Projection Algorithm

Definition of the signal matrix:
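A minimal sketch of how such a signal matrix could be assembled, assuming the columns are the delayed signal vectors $x(n), x(n-1), \dots, x(n-M+1)$, each of length $N$ (the helper name is hypothetical):

```python
import numpy as np

def signal_matrix(x, n, N, M):
    """Assemble the N x M signal matrix
    X(n) = [x(n), x(n-1), ..., x(n-M+1)],
    where column m is x(n-m) = [x(n-m), ..., x(n-m-N+1)]^T.
    Assumes n >= N + M - 2 so that all indices are valid."""
    return np.column_stack(
        [x[n - m - N + 1 : n - m + 1][::-1] for m in range(M)]
    )
```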

Slide 39

Error Vector – Part 1

Affine Projection Algorithm

Signal matrix:

Desired signal vector:

Filter output vector:

A priori error vector:

Adaptation rule:

A posteriori error vector:
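In standard affine projection notation (a reconstruction; $X(n)$ is the $N \times M$ signal matrix and the desired signal vector stacks the last $M$ desired samples), these quantities read:

```latex
\begin{align}
d(n)         &= \left[d(n),\, d(n-1),\, \dots,\, d(n-M+1)\right]^{\mathrm{T}}\\
e(n)         &= d(n) - X^{\mathrm{T}}(n)\,\hat{h}(n)   && \text{(a priori error vector)}\\
\tilde{e}(n) &= d(n) - X^{\mathrm{T}}(n)\,\hat{h}(n+1) && \text{(a posteriori error vector)}
\end{align}
```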

Slide 40

Error Vector – Part 2

Affine Projection Algorithm

Requirement:

Requirement:

Slide 41

Ansatz

Affine Projection Algorithm

Requirement:

Ansatz:

Step-size condition:

Slide 42

Geometrical Interpretation

Affine Projection Algorithm

[Figure: geometrical interpretation of the NLMS algorithm vs. the AP algorithm]

Slide 43

Regularization

Affine Projection Algorithm

Non-regularized version of the AP algorithm:

Regularized version of the AP algorithm:
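A minimal sketch of one regularized AP iteration, assuming the common update $\hat{h}(n+1) = \hat{h}(n) + \mu\,X(n)\left[X^{\mathrm{T}}(n)\,X(n) + \delta I\right]^{-1} e(n)$ (helper name and default values are assumptions):

```python
import numpy as np

def ap_step(h_hat, X, d_vec, mu, delta=1e-4):
    """One regularized affine projection iteration.
    X is the N x M signal matrix, d_vec holds the last M desired
    samples; delta = 0 recovers the non-regularized version."""
    M = X.shape[1]
    e = d_vec - X.T @ h_hat                    # a priori error vector e(n)
    update = X @ np.linalg.solve(X.T @ X + delta * np.eye(M), e)
    return h_hat + mu * update, e
```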

Slide 44

Convergence of Different Algorithms – Part 1

Affine Projection Algorithm

White noise:

Slide 45

Convergence of Different Algorithms – Part 2

Affine Projection Algorithm

White noise:

Slide 46

Convergence of Different Algorithms – Part 3

Affine Projection Algorithm

Colored noise:

Slide 47

Convergence of Different Algorithms – Part 4

Affine Projection Algorithm

Colored noise:

Slide 48

Convergence of Different Algorithms – Part 5

Affine Projection Algorithm

Speech:

Slide 49

Convergence of Different Algorithms – Part 6

Affine Projection Algorithm

Speech:

Slide 50

Summary and Outlook

Adaptive Filters – Algorithms

This week and last week:

Introductory Remarks

Recursive Least Squares (RLS) Algorithm

Least Mean Square Algorithm (LMS Algorithm) – Part 1

Least Mean Square Algorithm (LMS Algorithm) – Part 2

Affine Projection Algorithm (AP Algorithm)

Next week:

Control of Adaptive Filters

Recommended