
Lecture #18

EEE 574

Dr. Dan Tylavsky

Nonlinear Problem Solvers

© Copyright 1999 Daniel Tylavsky

Nonlinear Problem Solvers

Two traditional methods for solving simultaneous nonlinear equations:

– Gauss-Seidel Method (Overview - available in many textbooks and easy to understand.)

– Newton-Raphson Method (We'll work through it in detail.)


Gauss-Seidel Method

– Suppose we wish to find solutions to the equation:

$$f(x) = 0$$

– Let's write this equation in the following fixed-point form:

$$x = g(x)$$

– We can always get this form. If by no other means, we can always write:

$$x = x + f(x) = g(x)$$


– Then iterate as follows:

$$x^{k+1} = g(x^k)$$

– where we start with an initial estimate, $x^0$.

– Depending on the characteristics of the problem, the result may converge quickly or slowly, or the solution process may diverge. With the power flow problems that have been examined, the process usually converges.

– Convergence is declared when $|x^{k+1} - x^k| < \varepsilon$.
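Below is a minimal Python sketch of this fixed-point (Gauss-Seidel style) iteration. The helper name `fixed_point`, the tolerance, and the iteration cap are illustrative choices, not part of the lecture; the example $g(x) = 0.2 - x^2$ is the fixed-point form of the exercise equation $x + x^2 - 0.2 = 0$ used below.

```python
# Minimal fixed-point iteration sketch: x_{k+1} = g(x_k).
# The tolerance and iteration cap are illustrative, not prescribed by the lecture.

def fixed_point(g, x0, tol=1e-4, max_iter=100):
    """Iterate x_{k+1} = g(x_k) until |x_{k+1} - x_k| < tol."""
    x = x0
    for k in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, k + 1          # converged
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

if __name__ == "__main__":
    # g(x) = 0.2 - x**2 is the fixed-point form of x + x^2 - 0.2 = 0.
    root, iters = fixed_point(lambda x: 0.2 - x**2, x0=0.4)
    print(f"x ~ {root:.7f} after {iters} iterations")   # approaches 0.1708204
```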


• Think-Pair-Square: Find a solution to the following problem starting with an initial estimate of $x^0 = 0.4$. If $x$ is a bus voltage, determine a reasonable convergence criterion and be able to justify it.

$$x + x^2 - 0.2 = 0$$


– Asymptotic Rate of Convergence: the largest integer $r$ such that

$$\frac{\left|x^{k+1} - x^*\right|}{\left|x^k - x^*\right|^r} \neq 0, \infty \quad \text{for large } k.$$

– The Gauss-Seidel method has linear convergence, i.e., $r = 1$.

– Define the error as $\varepsilon^k = \left|x^k - x^*\right|$.

[Figure: Plot of Error vs. Iteration Number for the Gauss-Seidel iterates (iteration index 0 to 10, log scale from $10^{-6}$ to 1). The error decreases by a roughly constant factor of about 0.34 per iteration, reaching about $6.13\times 10^{-6}$ by iteration 10.]


For linear convergence, the asymptotic rate of convergence gives

$$\left|x^{k+1} - x^*\right| \approx C'\left|x^k - x^*\right| \quad \text{for large } k,$$

so that

$$\log\left|x^{k+1} - x^*\right| = \log(C') + \log\left|x^k - x^*\right|.$$

The incremental improvement in the estimate is constant on a log scale.
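As a quick numerical check of this statement, the sketch below (assuming the exercise equation $x + x^2 - 0.2 = 0$, with $x^*$ taken from the quadratic formula) prints the error and the ratio $\varepsilon^{k+1}/\varepsilon^k$ for the fixed-point iteration. The ratio settles near the constant $C' = |g'(x^*)| = 2x^* \approx 0.34$, so the error drops by a fixed factor each iteration, i.e., by a constant increment on a log scale.

```python
# Check of linear convergence for the fixed-point iteration x_{k+1} = 0.2 - x_k**2
# (the exercise equation x + x^2 - 0.2 = 0).  The error ratio should approach
# C' = |g'(x*)| = 2*x* ~ 0.34.

import math

x_star = (-1 + math.sqrt(1.8)) / 2      # positive root, ~0.1708204
x = 0.4
prev_err = abs(x - x_star)
for k in range(10):
    x = 0.2 - x**2                      # one Gauss-Seidel (fixed-point) step
    err = abs(x - x_star)
    print(f"k={k+1:2d}  error={err:.3e}  ratio={err / prev_err:.3f}")
    prev_err = err
```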


Gauss-Seidel Method

– Advantages
  • Simple conceptually
  • Simple to program
  • No LDU factorization
  • Sparsity is used simply
  • Iterations take little execution time
  • Often more robust than Newton-Raphson

– Disadvantages
  • Sometimes not as robust
  • Slow linear convergence rate


Newton's Method

– Single Equation: Looking for the roots of

$$f(x) = 0$$

Approach:

1. Guess at the solution, $x = x^0$.

2. Represent $f(x)$ as a linear approximation about $x = x^0$ using a Taylor series expansion:

$$f(x) \approx f(x^0) + \left.\frac{df(x)}{dx}\right|_{x^0}\left(x - x^0\right) = 0$$

3. Solve for a better estimate of $x^*$:

$$x^1 = x^0 - \frac{f(x^0)}{\left.\frac{df(x)}{dx}\right|_{x^0}}, \qquad \text{and in general} \qquad x^{k+1} = x^k - \frac{f(x^k)}{\left.\frac{df(x)}{dx}\right|_{x^k}}$$

[Figure: tangent-line construction showing $f(x)$, the initial guess $x^0$, the successive iterates $x^1$, $x^2$, and the root $x^*$.]

4. Converged when $\left|f(x^k)\right| < \varepsilon$.
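A minimal Python sketch of this single-equation Newton loop follows; the helper name `newton` and the tolerance on $|f(x^k)|$ are illustrative choices, and the example equation is the $x + x^2 - 0.2 = 0$ exercise that appears next.

```python
# Minimal Newton's method sketch for a single equation f(x) = 0:
# x_{k+1} = x_k - f(x_k) / f'(x_k), stopping when |f(x_k)| < tol.

def newton(f, fprime, x0, tol=1e-8, max_iter=50):
    x = x0
    for k in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, k                  # converged
        x = x - fx / fprime(x)
    raise RuntimeError("Newton iteration did not converge")

if __name__ == "__main__":
    # Exercise equation x + x^2 - 0.2 = 0 and its derivative 1 + 2x.
    root, iters = newton(lambda x: x + x**2 - 0.2, lambda x: 1 + 2*x, x0=0.4)
    print(f"x ~ {root:.7f} after {iters} iterations")   # approaches 0.1708204
```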


• Think-Pair-Square: Using Newton's method, find a solution to the following problem starting with an initial estimate of $x^0 = 0.4$. If $f(x)$ is the sum of real/reactive power flow into a bus, and $x$ represents bus voltage, determine a reasonable convergence criterion and be able to justify it.

$$x + x^2 - 0.2 = 0$$


Compare iterates from G-S and Newton:

  i     G-S x_i      Newton x_i
  0     0.4          0.4
  1     0.04         1.8
  2     0.1984       0.7478261
  3     0.160637     0.3042266
  4     0.174196     0.1818852
  5     0.169656     0.1709102
  6     0.171217     0.1708204
  7     0.170685     0.1708204
  8     0.170867
  9     0.170805
 10     0.170826

Newton starts off worse because it is more sensitive to:
• An accurate initial guess.
• The "smoothness" of $f(x)$ near $x^0$.


[Figure: plot of $f(x) = x + x^2 - 0.2$ over roughly $-1 \le x \le 2$, with the Newton tangent-line steps and the iterates $x^0$ through $x^4$ marked as they approach the root $x^* \approx 0.1708204$; the Newton iterate column from the table above is repeated alongside.]


[Figure: Newton error $\varepsilon^i_N$ and Gauss-Seidel error $\varepsilon^i_{GS}$ versus iteration index (log scale from about $10^{-7}$ to 10, iterations 0 to 10). The Newton error falls from about 1.63 to about $7.2\times 10^{-7}$ within a few iterations and then flattens at the precision limitation of the computation, while the Gauss-Seidel error decreases far more slowly.]

– Asymptotic Rate of Convergence: the largest integer $r$ such that

$$\frac{\left|x^{k+1} - x^*\right|}{\left|x^k - x^*\right|^r} \neq 0, \infty \quad \text{for large } k.$$

– Newton's method has quadratic convergence, i.e., $r = 2$.
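As with the linear-rate check earlier, the short sketch below (again assuming the exercise equation $x + x^2 - 0.2 = 0$) prints $\varepsilon^{k+1}/(\varepsilon^k)^2$ for a few Newton steps; for quadratic convergence this ratio approaches a constant, here $|f''(x^*)/(2f'(x^*))| = 1/(1 + 2x^*) \approx 0.75$, until the error hits machine precision.

```python
# Check of quadratic convergence for Newton's method on x + x^2 - 0.2 = 0.
# For r = 2 the ratio e_{k+1} / e_k**2 approaches |f''(x*) / (2 f'(x*))| ~ 0.75.

import math

x_star = (-1 + math.sqrt(1.8)) / 2      # ~0.1708204
x = 0.4
prev_err = abs(x - x_star)
for k in range(4):
    x = x - (x + x**2 - 0.2) / (1 + 2*x)    # one Newton step
    err = abs(x - x_star)
    if err == 0.0:                          # already at machine precision
        print(f"k={k+1}: error is 0 to machine precision")
        break
    print(f"k={k+1}  error={err:.3e}  error/prev_error**2={err / prev_err**2:.3f}")
    prev_err = err
```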


Newton-Raphson Method

– Apply Newton's Method to Simultaneous Equations: Looking for the roots of $\bar{f}(\bar{x}) = \bar{0}$:

$$\begin{aligned}
f_1(x_1, \ldots, x_n) &= 0 \\
f_2(x_1, \ldots, x_n) &= 0 \\
&\;\;\vdots \\
f_n(x_1, \ldots, x_n) &= 0
\end{aligned}$$


Repeat the same Newton-type steps:

1. Make a 'good' guess at the solution:

$$\bar{x}^0 = \left[x_1^0, x_2^0, \ldots, x_n^0\right]^T$$

2. Approximate $\bar{f}(\bar{x})$ using a Taylor series expansion about the point $\bar{x}^0$:

$$\bar{f}(\bar{x}) \approx \bar{f}(\bar{x}^0) + \left.\frac{\partial \bar{f}(\bar{x})}{\partial \bar{x}}\right|_{\bar{x}^0}\left(\bar{x} - \bar{x}^0\right) = \bar{0},$$

where

$$\left.\frac{\partial \bar{f}(\bar{x})}{\partial \bar{x}}\right|_{\bar{x}^0} = J(\bar{x}^0) \quad \text{(the Jacobian)},$$

so that

$$\bar{f}(\bar{x}^0) + J(\bar{x}^0)\left(\bar{x} - \bar{x}^0\right) = \bar{0}.$$


Written out, the Jacobian term is

$$J(\bar{x}^0)\left(\bar{x} - \bar{x}^0\right) =
\begin{bmatrix}
\dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\[1ex]
\dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_2}{\partial x_2} & \cdots & \dfrac{\partial f_2}{\partial x_n} \\[1ex]
\vdots & \vdots & \ddots & \vdots \\[1ex]
\dfrac{\partial f_n}{\partial x_1} & \dfrac{\partial f_n}{\partial x_2} & \cdots & \dfrac{\partial f_n}{\partial x_n}
\end{bmatrix}_{\bar{x}^0}
\begin{bmatrix}
x_1 - x_1^0 \\ x_2 - x_2^0 \\ \vdots \\ x_n - x_n^0
\end{bmatrix}$$

3. Solve the linearized equations for a better estimate of $\bar{x}^*$:

$$\bar{0} = \bar{f}(\bar{x}^0) + J(\bar{x}^0)\left(\bar{x}^1 - \bar{x}^0\right)
\quad\Longrightarrow\quad
\bar{x}^1 = \bar{x}^0 - J(\bar{x}^0)^{-1}\,\bar{f}(\bar{x}^0)$$


4. Check for convergence:

$$\left\|\bar{f}(\bar{x}^1)\right\| < \varepsilon$$

5. If converged, end; otherwise perform the next iteration ($i = 2, 3, \ldots$):

$$\bar{x}^{k+1} = \bar{x}^k + \Delta\bar{x}^k = \bar{x}^k - J(\bar{x}^k)^{-1}\,\bar{f}(\bar{x}^k)$$
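A minimal sketch of this multi-equation Newton-Raphson loop is given below. The helper name `newton_raphson` and the convergence tolerance are illustrative; `f` and `jac` are caller-supplied routines returning the mismatch vector and the Jacobian matrix. A production power-flow code would factor the (sparse) Jacobian with LDU/LU rather than calling a dense solver, but `numpy.linalg.solve` keeps the sketch short.

```python
# Sketch of the multi-dimensional Newton-Raphson iteration:
#   solve J(x_k) dx_k = -f(x_k), then x_{k+1} = x_k + dx_k,
# stopping when ||f(x_k)|| is below a tolerance.

import numpy as np

def newton_raphson(f, jac, x0, tol=1e-8, max_iter=20):
    """f(x): mismatch vector; jac(x): Jacobian matrix; x0: initial guess."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx, np.inf) < tol:
            return x, k                       # converged
        dx = np.linalg.solve(jac(x), -fx)     # in practice: sparse LDU factorization
        x = x + dx
    raise RuntimeError("Newton-Raphson did not converge")
```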


Ex: Find the solution to the following 2-bus power flow problem:

$E_1 = 1\angle 0^{\circ}$ at bus 1 and $E_2 = V_2\angle\theta_2$ at bus 2, connected by a line of impedance $j0.15$, with $P_2 + jQ_2 = 1 + j1$ delivered to bus 2.

$$P_2 + jQ_2 = E_2\left(\frac{E_1 - E_2}{j0.15}\right)^*
= \frac{E_2\left(E_1^* - E_2^*\right)}{-j0.15}
= \frac{1\cdot V_2\cos\theta_2 + j\,1\cdot V_2\sin\theta_2 - V_2^2}{-j0.15}$$


Bringing all terms to one side and multiplying numerator and denominator by $(-j)$ gives:

$$0 = 1 + j1 - \frac{1\cdot V_2\cos\theta_2 + j\,1\cdot V_2\sin\theta_2 - V_2^2}{-j0.15}
= 1 + j1 + \frac{V_2\sin\theta_2 - j\left(V_2\cos\theta_2 - V_2^2\right)}{0.15}$$

Breaking the equation into real and imaginary parts:

$$0 = 1 + \frac{V_2\sin\theta_2}{0.15}, \qquad
0 = 1 - \frac{V_2\cos\theta_2 - V_2^2}{0.15}$$


Setting $x_1 = \theta_2$, $x_2 = V_2$ gives:

$$f_1(\bar{x}) = 1 + \frac{x_2\sin(x_1)}{0.15} = 0, \qquad
f_2(\bar{x}) = 1 - \frac{x_2\cos(x_1) - x_2^2}{0.15} = 0,$$

where $x_1$ = bus voltage angle and $x_2$ = bus voltage magnitude.

1. Make a 'good' guess at the solution:

$$\bar{x}^0 = \begin{bmatrix} 0.0 \\ 1.0 \end{bmatrix}$$

2a. Evaluate $\bar{f}(\bar{x})$ at $\bar{x}^0$:

$$\bar{f}(\bar{x}^0) =
\begin{bmatrix} 1 + \dfrac{1.0\,\sin(0)}{0.15} \\[1ex] 1 - \dfrac{1\cdot 1\cdot\cos(0) - 1^2}{0.15} \end{bmatrix}
= \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$


2b. Calculate the analytical form of the Jacobian:

$$J(\bar{x}) =
\begin{bmatrix}
\dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} \\[1ex]
\dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_2}{\partial x_2}
\end{bmatrix}
=
\begin{bmatrix}
\dfrac{x_2\cos(x_1)}{0.15} & \dfrac{\sin(x_1)}{0.15} \\[1ex]
\dfrac{x_2\sin(x_1)}{0.15} & \dfrac{2x_2 - \cos(x_1)}{0.15}
\end{bmatrix}$$


2c. Evaluate the Jacobian at the best estimate of the solution:

$$J(\bar{x}^0) =
\begin{bmatrix}
\dfrac{1\cdot\cos(0)}{0.15} & \dfrac{\sin(0)}{0.15} \\[1ex]
\dfrac{1\cdot\sin(0)}{0.15} & \dfrac{2\cdot 1 - \cos(0)}{0.15}
\end{bmatrix}
=
\begin{bmatrix}
6.667 & 0 \\
0 & 6.667
\end{bmatrix}$$


3. Solve the linear approximation for a better estimate of $\bar{x}^*$:

$$\bar{x}^1 = \bar{x}^0 - J(\bar{x}^0)^{-1}\,\bar{f}(\bar{x}^0)
= \begin{bmatrix} 0 \\ 1 \end{bmatrix}
- \begin{bmatrix} 6.667 & 0 \\ 0 & 6.667 \end{bmatrix}^{-1}
\begin{bmatrix} 1 \\ 1 \end{bmatrix}
= \begin{bmatrix} 0 - 0.15 \\ 1 - 0.15 \end{bmatrix}
= \begin{bmatrix} -0.15 \\ 0.85 \end{bmatrix}$$

4. Check for convergence:

$$\bar{f}(\bar{x}^1) =
\begin{bmatrix}
1 + \dfrac{0.85\,\sin(-0.15)}{0.15} \\[1ex]
1 - \dfrac{0.85\,\cos(-0.15) - 0.85^2}{0.15}
\end{bmatrix}
= \begin{bmatrix} 0.153 \\ 0.214 \end{bmatrix}$$


Iteration Results:

  k   theta_2^k      V_2^k        ||f^k||
  0    0.0           1.0          1
  1   -0.15          0.85         0.2136
  2   -0.1865099     0.7984243    0.0194
  3   -0.1903274     0.7927762    2.2385e-4
  4   -0.1903723     0.7927094    3.1157e-8
  5   -0.1903723     0.7927094    0
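The short script below is a sketch that reproduces this iteration using the mismatch equations and analytic Jacobian as reconstructed above (the variable names, flat-start vector, and stopping tolerance are illustrative choices); its printed iterates should track the Iteration Results table.

```python
# Newton-Raphson solution of the 2-bus example: x = [theta2, V2],
# mismatches f1 = 1 + V2*sin(theta2)/0.15, f2 = 1 - (V2*cos(theta2) - V2**2)/0.15.

import numpy as np

X = 0.15                                   # line reactance (j0.15 per unit)

def mismatch(x):
    th, v = x
    return np.array([1.0 + v * np.sin(th) / X,
                     1.0 - (v * np.cos(th) - v**2) / X])

def jacobian(x):
    th, v = x
    return np.array([[v * np.cos(th) / X,  np.sin(th) / X],
                     [v * np.sin(th) / X, (2.0 * v - np.cos(th)) / X]])

x = np.array([0.0, 1.0])                   # flat start: theta2 = 0, V2 = 1
for k in range(10):
    f = mismatch(x)
    norm = np.linalg.norm(f, np.inf)
    print(f"k={k}  theta2={x[0]: .7f}  V2={x[1]:.7f}  ||f||={norm:.4e}")
    if norm < 1e-10:                       # illustrative tolerance
        break
    x = x + np.linalg.solve(jacobian(x), -f)
```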


[Figure: Newton Power Flow Convergence: norm of the mismatches $\|f^k\|$ versus iteration index $k$ (log scale from $10^{-16}$ to 1, $k = 0$ to 5), showing the rapid, quadratic drop of the mismatch norm listed in the Iteration Results table.]

The End
