
Eigenvalues and eigenvectors


Page 1: Eigenvalues  and  eigenvectors

Eigenvalues and eigenvectors

[Figure: population size N over time t, with arrows for births and deaths; at equilibrium births = deaths]

Population increase = births − deaths

$N_{t+1} = N_t + b_t N_t - d_t N_t = (1 + b_t - d_t)\,N_t$

N: population size, b: birth rate, d: death rate

$b_t = \frac{\text{births}}{N_t} \qquad d_t = \frac{\text{deaths}}{N_t} \qquad r_t = \frac{\Delta N_t}{N_t} = \frac{\text{births} - \text{deaths}}{N_t} = b_t - d_t$

$N_{t+1} = R\,N_t$

The net reproduction rate is $R = 1 + b_t - d_t$.

If the population is age structured and contains k age classes, we get

$N_0 = b_1 N_1 + b_2 N_2 + \dots + b_k N_k = \sum_{i=1}^{k} b_i N_i$

The numbers of individuals surviving from one age class to the next are given by

$N_1 = (1 - d_1)\,N_0 \qquad N_2 = (1 - d_2)\,N_1 \qquad \dots \qquad N_k = (1 - d_k)\,N_{k-1}$

Page 2: Eigenvalues  and  eigenvectors

Leslie matrix

Assume you have a population of organisms that is age structured. Let $f_x$ denote the fecundity (rate of reproduction) at age class x.

Let $s_x$ denote the fraction of individuals that survives to the next age class x+1 (survival rate). Let $n_x$ denote the number of individuals at age class x.

We can express these assumptions in a matrix model called the Leslie model. We have w−1 age classes, where w is the maximum age of an individual.

L is a square matrix.

$N_t = \begin{pmatrix} n_0 \\ n_1 \\ n_2 \\ \vdots \\ n_{w-1} \end{pmatrix} \qquad
L = \begin{pmatrix}
f_0 & f_1 & f_2 & f_3 & \cdots & f_{w-1} \\
s_0 & 0 & 0 & 0 & \cdots & 0 \\
0 & s_1 & 0 & 0 & \cdots & 0 \\
0 & 0 & s_2 & 0 & \cdots & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & \cdots & 0 & s_{w-2} & 0
\end{pmatrix}$

$N_{t+1} = L\,N_t$

The numbers per age class at time t+1 are the product of the Leslie matrix with the abundance vector N at time t.

$N_{t+1} = L^{t+1}\,N_0$

$N_0 = b_1 N_1 + b_2 N_2 + \dots + b_k N_k = \sum_{i=1}^{k} b_i N_i \qquad N_i = (1 - d_i)\,N_{i-1}$

Page 3: Eigenvalues  and  eigenvectors

$\begin{pmatrix} n_0 \\ n_1 \\ n_2 \\ \vdots \\ n_{w-1} \end{pmatrix}_{t+1} =
\begin{pmatrix}
f_0 & f_1 & f_2 & \cdots & f_{w-2} & f_{w-1} \\
s_0 & 0 & 0 & \cdots & 0 & 0 \\
0 & s_1 & 0 & \cdots & 0 & 0 \\
\vdots & & \ddots & & & \vdots \\
0 & 0 & \cdots & s_{w-2} & 0 & 0
\end{pmatrix}
\begin{pmatrix} n_0 \\ n_1 \\ n_2 \\ \vdots \\ n_{w-1} \end{pmatrix}_t$

• The sum of all fecundities, $f_0 n_0 + f_1 n_1 + \dots + f_{w-1} n_{w-1}$, gives the number of newborns.
• $n_0 s_0$ gives the number of individuals in the first age class.
• $s_{w-2}\,n_{w-2}$ gives the number of individuals in the last class, $n_{w-1}$.

The Leslie model is a linear approach. It assumes stable fecundity and mortality rates.

The effect of the initial age composition disappears over time. The age composition approaches an equilibrium, although the whole population might go extinct. Population growth or decline is often exponential.

Page 4: Eigenvalues  and  eigenvectors

An example

Age class   N0     Leslie matrix L
1           1000   0     0.5   1.2   1.5   1.1   0.2    0.005
2           2000   0.4   0     0     0     0     0      0
3           2500   0     0.8   0     0     0     0      0
4           1000   0     0     0.5   0     0     0      0
5           500    0     0     0     0.3   0     0      0
6           100    0     0     0     0     0.1   0      0
7           10     0     0     0     0     0     0.004  0

Generation   Class 1    Class 2    Class 3    Class 4    Class 5     Class 6    Class 7
0            1000       2000       2500       1000       500         100        10
1            6070.05    400        1600       1250       300         50         0.4
2            4335.002   2428.02    320        800        375         30         0.2
3            3216.511   1734.001   1942.416   160        240         37.5       0.12
4            3709.4     1286.604   1387.201   971.208    48          24         0.15
5            3822.356   1483.76    1029.284   693.6003   291.3624    4.8        0.096
6            3338.88    1528.942   1187.008   514.6418   208.0801    29.13624   0.0192
7            3195.559   1335.552   1223.154   593.504    154.3925    20.80801   0.116545
8            3199.811   1278.224   1068.442   611.5769   178.0512    15.43925   0.083232
9            3037.552   1279.924   1022.579   534.2208   183.4731    17.80512   0.061757
10           2873.77    1215.021   1023.939   511.2894   160.2662    18.34731   0.07122
11           2783.134   1149.508   972.0165   511.9697   153.3868    16.02662   0.073389
12           2681.059   1113.254   919.6063   486.0083   153.5909    15.33868   0.064106

In the long run the population dies out: reproduction rates are too low to counterbalance the high mortality rates.

[Figure: abundance per age class (log scale, 0.01 to 10000) against time (0 to 25) for classes 1 to 7; after initial oscillations all classes decline at a common rate]

Important properties:
1. Eventually all age classes grow or shrink at the same rate.
2. Initial growth depends on the age structure.
3. Early reproduction contributes more to population growth than late reproduction.
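As an illustration (not part of the original slides), a short Python/numpy sketch that reproduces this projection from the Leslie matrix and initial abundances of the example:

```python
# Sketch: projecting the example population (numbers from the slide's table).
import numpy as np

# Leslie matrix: fecundities in the first row, survival rates on the subdiagonal.
L = np.array([
    [0.0, 0.5, 1.2, 1.5, 1.1, 0.2, 0.005],
    [0.4, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.8, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.3, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.1, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.004, 0.0],
])
N = np.array([1000.0, 2000.0, 2500.0, 1000.0, 500.0, 100.0, 10.0])

for t in range(12):
    N = L @ N                  # N_{t+1} = L N_t
print(N.round(3))              # generation 12: matches the last table row
```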

Page 5: Eigenvalues  and  eigenvectors

Leslie matrix

$N_{t+1} = L\,N_t \qquad N_{t+1} = L^{t+1}\,N_0$


Does the Leslie approach predict a stationary point where population abundances don't change any more?

$N_{t+1} = L\,N_t = N_t$

We're looking for a vector that doesn't change direction when multiplied by the Leslie matrix.

This vector is called the eigenvector U of the matrix. Eigenvectors are only defined for square matrices.

$L\,U = \lambda U \qquad \frac{dN}{dt} = 0$

$L\,U = \lambda U \;\Rightarrow\; [L - \lambda I]\,U = 0$

I: identity matrix

$L = \begin{pmatrix}
b_1 & b_2 & b_3 & b_4 & \cdots & b_k \\
1-d_1 & 0 & 0 & \cdots & 0 & 0 \\
0 & 1-d_2 & 0 & \cdots & 0 & 0 \\
\vdots & & \ddots & & & \vdots \\
0 & 0 & \cdots & 1-d_{k-1} & 0 & 0
\end{pmatrix} \qquad
N_t = \begin{pmatrix} n_1 \\ n_2 \\ n_3 \\ \vdots \\ n_k \end{pmatrix}$
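A hedged numpy sketch of this eigenvector view: the dominant eigenvalue of the example Leslie matrix is the long-run growth factor, and the normalized dominant eigenvector is the stable age distribution (variable names are illustrative):

```python
# Sketch: dominant eigenpair of the example Leslie matrix from slide 4.
import numpy as np

f = [0.0, 0.5, 1.2, 1.5, 1.1, 0.2, 0.005]   # fecundities
s = [0.4, 0.8, 0.5, 0.3, 0.1, 0.004]        # survival rates
L = np.zeros((7, 7))
L[0, :] = f
L[np.arange(1, 7), np.arange(6)] = s        # subdiagonal

lam, U = np.linalg.eig(L)
i = np.argmax(np.abs(lam))                  # dominant (Perron) eigenvalue
u = U[:, i].real
print(lam[i].real)                          # < 1 here: the population declines
print((u / u.sum()).round(4))               # stable age distribution
```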

Page 6: Eigenvalues  and  eigenvectors

$\frac{dN}{dt} = f - gN \;\Rightarrow\; N(t) = \frac{f}{g} + C\,e^{-gt}$

The insulin – glycogen system

At high blood glucose levels insulin stimulates glycogen synthesis and inhibits glycogen breakdown.

The change in glycogen concentration can be modelled as the sum of constant production and concentration-dependent breakdown.

At equilibrium we have

$\frac{dN}{dt} = f - gN = 0 \;\Rightarrow\; N^* = \frac{f}{g}$

The stationary state can be written as an eigenvector problem by augmenting the state with a constant:

$\begin{pmatrix} -g & f \\ 0 & 0 \end{pmatrix}\begin{pmatrix} N^* \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$

The vector {-f,g} is the eigenvector of the dispersion matrix and gives the stationary point.

The value -1 is called the eigenvalue of this system.

Page 7: Eigenvalues  and  eigenvectors

How to transform vector A into vector B?

$X\,A = B: \quad \begin{pmatrix} 1 & 2 \\ 1.5 & 2.5 \end{pmatrix}\begin{pmatrix} 1 \\ 3 \end{pmatrix} = \begin{pmatrix} 7 \\ 9 \end{pmatrix}$

[Figure: vectors A and B in the X–Y plane]

Multiplication of a vector by a square matrix defines a new vector that points in a different direction: the matrix defines a transformation in space.


Image transformation: X contains all the information necessary to transform the image.

The vectors that don't change direction during the transformation are the eigenvectors.

$X\,A = \lambda A$

In general we define $X\,U = \lambda U$: U is the eigenvector and $\lambda$ the eigenvalue of the square matrix X.

$X\,U = \lambda U \;\Rightarrow\; X\,U - \lambda I\,U = 0 \;\Rightarrow\; [X - \lambda I]\,U = 0$

Page 8: Eigenvalues  and  eigenvectors

The basic equation: $X\,U = U\,\Lambda$

$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}
\begin{pmatrix} u_{11} & \cdots & u_{1n} \\ \vdots & \ddots & \vdots \\ u_{n1} & \cdots & u_{nn} \end{pmatrix} =
\begin{pmatrix} u_{11} & \cdots & u_{1n} \\ \vdots & \ddots & \vdots \\ u_{n1} & \cdots & u_{nn} \end{pmatrix}
\begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}$

The matrices A and Λ have the same properties: we have diagonalized the matrix A. We have reduced the information contained in A to its characteristic values λ, the eigenvalues.
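A small numerical check of the basic equation (a sketch, using a 2×2 matrix that also appears on a later slide):

```python
# Sketch: verify X U = U Lambda and X = U Lambda U^{-1} numerically.
import numpy as np

X = np.array([[2.0, 1.0],
              [3.0, 4.0]])
lam, U = np.linalg.eig(X)               # columns of U are eigenvectors
Lam = np.diag(lam)                      # eigenvalues on the diagonal
print(np.allclose(X @ U, U @ Lam))      # True
print(np.allclose(U @ Lam @ np.linalg.inv(U), X))  # True: diagonalization
```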

Page 9: Eigenvalues  and  eigenvectors

$\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}\begin{pmatrix} u_1 & v_1 \\ u_2 & v_2 \end{pmatrix} = \begin{pmatrix} u_1 & v_1 \\ u_2 & v_2 \end{pmatrix}\begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} = \begin{pmatrix} \lambda_1 u_1 & \lambda_2 v_1 \\ \lambda_1 u_2 & \lambda_2 v_2 \end{pmatrix}$

An n×n matrix has n eigenvalues and n eigenvectors. Symmetric matrices and their transposes have identical eigenvectors and eigenvalues.

Consider a symmetric matrix $A$ with eigenvectors $u$ and $v$ and eigenvalues $\lambda_1 \ne \lambda_2$:

$A u = \lambda_1 u \qquad A v = \lambda_2 v$

Multiplying from the left with $v^T$ and $u^T$ respectively gives

$v^T A u = \lambda_1 v^T u \qquad u^T A v = \lambda_2 u^T v$

For a symmetric matrix $A = A^T$, so $v^T A u = (u^T A v)^T = u^T A v$ (a scalar). Subtracting the two equations gives

$(\lambda_1 - \lambda_2)\,u^T v = 0 \;\Rightarrow\; u^T v = 0$

Eigenvectors of symmetric matrices are orthogonal.
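As a quick numerical illustration (a sketch, with an arbitrary symmetric matrix), numpy's eigh confirms the orthogonality:

```python
# Sketch: eigenvectors of a symmetric matrix are orthogonal (here orthonormal).
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])              # symmetric: A == A.T
lam, U = np.linalg.eigh(A)              # eigh is meant for symmetric matrices
print(np.allclose(U.T @ U, np.eye(2)))  # True: U' U = I
```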

Page 10: Eigenvalues  and  eigenvectors

How to calculate eigenvectors and eigenvalues?

$A = \begin{pmatrix} 2 & 1 \\ 3 & 4 \end{pmatrix}: \quad \det[A - \lambda I] = \begin{vmatrix} 2-\lambda & 1 \\ 3 & 4-\lambda \end{vmatrix} = (2-\lambda)(4-\lambda) - 3 = 0 \;\Rightarrow\; \lambda_1 = 1,\ \lambda_2 = 5$

$[A - \lambda_i I]\,u_i = 0: \quad (2-\lambda)u_1 + u_2 = 0, \quad 3u_1 + (4-\lambda)u_2 = 0$

For $\lambda_1 = 1$: $u^{(1)} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$; for $\lambda_2 = 5$: $u^{(2)} = \begin{pmatrix} 1 \\ 3 \end{pmatrix}$

Check $A u_i = \lambda_i u_i$:

$\begin{pmatrix} 2 & 1 \\ 3 & 4 \end{pmatrix}\begin{pmatrix} 1 \\ -1 \end{pmatrix} = 1\cdot\begin{pmatrix} 1 \\ -1 \end{pmatrix} \qquad \begin{pmatrix} 2 & 1 \\ 3 & 4 \end{pmatrix}\begin{pmatrix} 1 \\ 3 \end{pmatrix} = \begin{pmatrix} 5 \\ 15 \end{pmatrix} = 5\cdot\begin{pmatrix} 1 \\ 3 \end{pmatrix}$

The equation is zero either for the trivial case $u = 0$ or if $\det[A - \lambda I] = 0$.
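The worked example can be checked in a few lines (a sketch; the eigenpairs are the ones derived above):

```python
# Sketch: check A u_i = lambda_i u_i for the example above.
import numpy as np

A = np.array([[2.0, 1.0],
              [3.0, 4.0]])
pairs = [(1.0, np.array([1.0, -1.0])),   # lambda_1 = 1
         (5.0, np.array([1.0, 3.0]))]    # lambda_2 = 5
for lam, u in pairs:
    print(np.allclose(A @ u, lam * u))   # True, True
```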

Page 11: Eigenvalues  and  eigenvectors

The general solutions of 2×2 matrices

$A u = \lambda u \;\Rightarrow\; [A - \lambda I]\,u = 0 \;\Rightarrow\; \begin{vmatrix} a_{11}-\lambda & a_{12} \\ a_{21} & a_{22}-\lambda \end{vmatrix} = 0$

$(a_{11}-\lambda)(a_{22}-\lambda) - a_{12}a_{21} = \lambda^2 - (a_{11}+a_{22})\lambda + (a_{11}a_{22} - a_{12}a_{21}) = 0$

$\lambda_{1,2} = \frac{a_{11}+a_{22} \pm \sqrt{(a_{11}+a_{22})^2 - 4(a_{11}a_{22} - a_{12}a_{21})}}{2}$

Distance matrix $\begin{pmatrix} a_{11} & a_{21} \\ a_{21} & a_{11} \end{pmatrix}$: $\lambda_{1,2} = a_{11} \pm a_{21}$

Dispersion matrix $\begin{pmatrix} a_{11} & a_{21} \\ a_{21} & a_{22} \end{pmatrix}$: $\lambda_{1,2} = \frac{a_{11}+a_{22} \pm \sqrt{(a_{11}-a_{22})^2 + 4a_{21}^2}}{2}$
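The closed-form solution translates directly into code (a sketch; eig2 is a hypothetical helper name, and real eigenvalues are assumed):

```python
# Sketch: eigenvalues of a 2x2 matrix from the characteristic polynomial
# lambda^2 - (a11 + a22) lambda + (a11 a22 - a12 a21) = 0.
import math

def eig2(a11, a12, a21, a22):
    tr = a11 + a22                       # trace
    det = a11 * a22 - a12 * a21          # determinant
    d = math.sqrt(tr * tr - 4.0 * det)   # assumes real eigenvalues
    return (tr + d) / 2.0, (tr - d) / 2.0

print(eig2(2, 1, 3, 4))                  # (5.0, 1.0), as on the previous slide
```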

Page 12: Eigenvalues  and  eigenvectors

Written out for an m×m matrix:

$A u = \lambda u \;\Rightarrow\; \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mm} \end{pmatrix}\begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_m \end{pmatrix} = \lambda\begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_m \end{pmatrix} \;\Leftrightarrow\; \begin{cases} a_{11}u_1 + a_{12}u_2 + \dots + a_{1m}u_m = \lambda u_1 \\ \qquad\vdots \\ a_{m1}u_1 + a_{m2}u_2 + \dots + a_{mm}u_m = \lambda u_m \end{cases}$

For the 2×2 case, $[A - \lambda I]\,u = 0$ gives

$(a_{11}-\lambda)u_1 + a_{12}u_2 = 0 \qquad a_{21}u_1 + (a_{22}-\lambda)u_2 = 0$

This system is indeterminate: an eigenvector is only determined up to a scalar factor.

Matrix reduction

Page 13: Eigenvalues  and  eigenvectors

Setting $u_1 = 1$, the first equation gives

$(a_{11}-\lambda)\cdot 1 + a_{12}u_2 = 0 \;\Rightarrow\; u_2 = \frac{\lambda - a_{11}}{a_{12}}$

and equivalently, from the second equation, $u_2 = \frac{a_{21}}{\lambda - a_{22}}$.

Higher order matrices

$\det[A - \lambda I] = \begin{vmatrix} a_{11}-\lambda & \cdots & a_{1m} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mm}-\lambda \end{vmatrix} = \lambda^m + b_1\lambda^{m-1} + b_2\lambda^{m-2} + \dots + b_m = 0$

Characteristic polynomial

Eigenvalues and eigenvectors can only be computed analytically for matrices up to order m = 4 (the characteristic polynomial is then at most a quartic). Higher order matrices need numerical solutions.

Page 14: Eigenvalues  and  eigenvectors

The power method to find the largest eigenvalue.

The power method is an iterative process that starts from an initial guess of the eigenvector and approximates the eigenvalue.

$A u_0 = \lambda_1 u_1$: start from an initial guess $u_0$ and let the first component of $u_1$ be 1, i.e. rescale $A u_0$ so that its first component becomes 1. The scale factor is the first guess for $\lambda$.

$A u_1 = \lambda_2 u_2$: rescaling again gives a second guess for $\lambda$, and so on:

$A u_n = \lambda_{n+1} u_{n+1}$

Repeat this procedure until the difference $\lambda_{n+1} - \lambda_n$ is less than a predefined number $\varepsilon$.

$A = \begin{pmatrix} 4 & 0 & 1 \\ -2 & 0 & 1 \\ 2 & 0 & 1 \end{pmatrix}$

Unscaled vectors $x_n = A\,u_{n-1}$:

n      0   1     2        3         4         5         6         7         8
x[1]   1   5     4.6      4.565217  4.561905  4.561587  4.561556  4.561553  4.561553
x[2]   1  -1    -1.4     -1.43478  -1.4381   -1.43841  -1.43844  -1.43845  -1.43845
x[3]   1   3     2.6      2.565217  2.561905  2.561587  2.561556  2.561553  2.561553

Rescaled vectors $u_n$ (first component 1) and eigenvalue estimates $\lambda_n$:

u[1]   1   1     1        1         1         1         1         1         1
u[2]   1  -0.2  -0.30435 -0.31429  -0.31524  -0.31533  -0.31534  -0.31534  -0.31534
u[3]   1   0.6   0.565217 0.561905  0.561587  0.561556  0.561553  0.561553  0.561553
λ      1   5     4.6      4.565217  4.561905  4.561587  4.561556  4.561553  4.561553
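A sketch of the power method as described above (power_method is an illustrative name); run on the slide's matrix it reproduces the iteration table:

```python
# Sketch: power method; rescale so the first component is 1, the scale
# factor converges to the largest eigenvalue.
import numpy as np

def power_method(A, eps=1e-9, max_iter=100):
    u = np.ones(A.shape[0])              # initial guess u0
    lam_old = 0.0
    for _ in range(max_iter):
        v = A @ u
        lam = v[0]                       # current eigenvalue estimate
        u = v / lam                      # first component becomes 1
        if abs(lam - lam_old) < eps:
            break
        lam_old = lam
    return lam, u

A = np.array([[ 4.0, 0.0, 1.0],
              [-2.0, 0.0, 1.0],
              [ 2.0, 0.0, 1.0]])
lam, u = power_method(A)
print(round(lam, 6), u.round(6))         # 4.561553 [1. -0.31534 0.561553]
```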

Having the eigenvalues, the eigenvectors come immediately from solving the linear system

$[A - \lambda_i I]\,u_i = 0: \quad \begin{pmatrix} a_{11}-\lambda_i & a_{12} & \cdots & a_{1m} \\ \vdots & & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mm}-\lambda_i \end{pmatrix}\begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_m \end{pmatrix} = 0$

using matrix reduction.

Page 15: Eigenvalues  and  eigenvectors
Page 16: Eigenvalues  and  eigenvectors

Some properties of eigenvectors

If $\Lambda$ is the diagonal matrix of eigenvalues:

$A U = U \Lambda \;\Rightarrow\; A = U \Lambda U^{-1} \;\Rightarrow\; \Lambda = U^{-1} A U$

The product of all eigenvalues equals the determinant of a matrix:

$\det A = \prod_{i=1}^{n} \lambda_i$

The determinant is zero if at least one of the eigenvalues is zero. In this case the matrix is singular.

The eigenvectors of symmetric matrices are orthogonal:

$A$ symmetric: $u_i^T u_j = 0$ for $i \ne j$

Eigenvectors do not change after a matrix is multiplied by a scalar k; the eigenvalues are also multiplied by k:

$[A - \lambda I]\,u = 0 \;\Rightarrow\; [kA - k\lambda I]\,u = 0$

If A is triangular or diagonal, the eigenvalues of A are the diagonal entries of A.

[Example table: a triangular and a diagonal matrix with their eigenvalues read off the diagonal]

Page 17: Eigenvalues  and  eigenvectors

PageRank

PageRank assigns each page X a weight $p_X$. With damping factor d, number of pages N, outgoing-link counts $c_Y$, and link indicators $k_{XY} = 1$ if page Y links to page X (0 otherwise):

$p_A = d\left(k_{AB}\frac{p_B}{c_B} + k_{AC}\frac{p_C}{c_C} + k_{AD}\frac{p_D}{c_D}\right) + \frac{1-d}{N}$

and analogously for $p_B$, $p_C$, and $p_D$.

In large webs $(1-d)/N$ is very small. Dropping this term leaves the homogeneous system

$\begin{pmatrix} p_A \\ p_B \\ p_C \\ p_D \end{pmatrix} = d\begin{pmatrix} 0 & k_{AB}/c_B & k_{AC}/c_C & k_{AD}/c_D \\ k_{BA}/c_A & 0 & k_{BC}/c_C & k_{BD}/c_D \\ k_{CA}/c_A & k_{CB}/c_B & 0 & k_{CD}/c_D \\ k_{DA}/c_A & k_{DB}/c_B & k_{DC}/c_C & 0 \end{pmatrix}\begin{pmatrix} p_A \\ p_B \\ p_C \\ p_D \end{pmatrix}$

A standard eigenvector problem: $P\,u = \lambda u$. The requested ranking is simply contained in the largest eigenvector of P.
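A hedged sketch of the resulting iteration for a four-page web; the link structure below is an assumption for illustration only (the slides do not spell it out in recoverable form):

```python
# Sketch: PageRank iteration p <- d K p + (1 - d)/N for an assumed 4-page web.
import numpy as np

d, N = 0.85, 4
# adj[i, j] = 1 if page j links to page i (assumed links, for illustration)
adj = np.array([[0, 1, 1, 1],
                [1, 0, 0, 0],
                [1, 1, 0, 0],
                [0, 0, 0, 0]], dtype=float)
c = adj.sum(axis=0)          # outgoing-link counts c_j
c[c == 0] = 1.0              # guard against dangling pages
K = adj / c                  # columns divided by outlink counts

p = np.ones(N) / N
for _ in range(50):
    p = d * (K @ p) + (1 - d) / N
print((p / p.sum()).round(4))  # ranking, proportional to the largest eigenvector
```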

Page 18: Eigenvalues  and  eigenvectors

[Worked example: a four-page web A, B, C, D with damping factor d = 0.85 and N = 4, so the additive term is (1 − 0.85)/4. The link matrix contains entries such as 0.85 and 0.85/2, according to each page's outgoing links; solving the resulting system gives the PageRank vector (p_A, p_B, p_C, p_D).]

Page 19: Eigenvalues  and  eigenvectors

Principal axes

[Figure: two scatter plots in the X–Y plane with the principal axes u1 and u2 drawn as orthogonal arrows through the point cloud]

The principal axes span a new Cartesian system. Principal axes are orthogonal.

[Figure: the same data rotated into the coordinate system spanned by u1 and u2]

The data points of the new system are close to the new x-axis; the variance along the second axis is much smaller.

$V_{(u_1;u_2)} = (X_{(X;Y)} - M_{(\bar{X};\bar{Y})})\,U$

Principal axes define the longer and shorter radius of an oval around the scatter of data points. The quotient of the longer to the shorter principal axis measures how closely the data points are associated (similar to the coefficient of correlation).

Page 20: Eigenvalues  and  eigenvectors

Major axis regression

[Figure: scatter of data points with the major axis u1 and the minor axis u2; u1 defines the regression line]

The largest major axis defines a regression line through the data points {x_i, y_i}. The major axis is identical with the largest eigenvector of the associated covariance matrix. The lengths of the axes are given by the eigenvalues; the eigenvalues therefore measure the association between x and y.

$[D - \lambda I]\,U = 0$: major axis regression minimizes the Euclidean distances of the data points to the regression line.

For the covariance (dispersion) matrix $D = \begin{pmatrix} s_X^2 & s_{XY} \\ s_{XY} & s_Y^2 \end{pmatrix}$:

$\lambda_{1,2} = \frac{s_X^2 + s_Y^2 \pm \sqrt{(s_X^2 + s_Y^2)^2 - 4(s_X^2 s_Y^2 - s_{XY}^2)}}{2}$

The first principal axis is given by the largest eigenvalue $\lambda_1$. Its eigenvector $u = (u_1, u_2)$ gives the MAR slope and intercept:

$m_{MAR} = \frac{u_2}{u_1} = \frac{s_{XY}}{\lambda_1 - s_Y^2} \qquad b = \bar{Y} - m_{MAR}\,\bar{X}$
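A sketch of MAR on synthetic data (the data, seed, and names are illustrative): the slope comes from the eigenvector of the covariance matrix with the largest eigenvalue:

```python
# Sketch: major axis regression from the covariance matrix's dominant eigenvector.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 100)
y = 3.0 * x + 2.0 + rng.normal(0.0, 0.3, 100)   # synthetic data

S = np.cov(x, y)                 # 2x2 covariance matrix
lam, U = np.linalg.eigh(S)       # eigenvalues in ascending order
u = U[:, -1]                     # eigenvector of the largest eigenvalue
m = u[1] / u[0]                  # MAR slope
b = y.mean() - m * x.mean()      # intercept: b = mean(Y) - m mean(X)
print(m, b)
```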

Page 21: Eigenvalues  and  eigenvectors

The relationship between ordinary least squares (OLS) and major axis (MAR) regression

$r = \frac{s_{XY}}{s_X s_Y} \qquad m_{OLS} = \frac{s_{XY}}{s_X^2} \qquad m_{MAR} = \frac{s_Y^2}{s_{XY}}$

$m_{OLS} = r\,\frac{s_Y}{s_X} \qquad m_{MAR} = \frac{s_Y}{r\,s_X}$

$\Rightarrow \quad m_{OLS} = r^2\,m_{MAR} \qquad m_{MAR}\,m_{OLS} = \frac{s_Y^2}{s_X^2}$
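These identities can be checked numerically (a sketch on synthetic data, using the formulas exactly as given above):

```python
# Sketch: numeric check of m_OLS = r^2 m_MAR and m_MAR m_OLS = sY^2 / sX^2.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(size=200)

S = np.cov(x, y)
sx2, sxy, sy2 = S[0, 0], S[0, 1], S[1, 1]
r = sxy / np.sqrt(sx2 * sy2)
m_ols = sxy / sx2
m_mar = sy2 / sxy
print(np.isclose(m_ols, r**2 * m_mar))        # True
print(np.isclose(m_mar * m_ols, sy2 / sx2))   # True
```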

Page 22: Eigenvalues  and  eigenvectors

Going Excel

[Excel worked example: 20 (X, Y) data pairs, their covariance matrix, and its eigenvalues and eigenvectors (largest eigenvector approximately (0.958, 0.287)). The resulting MAR parameters are a = 3.334687, b = -0.23791; two points, (0.1, 0.095561) and (0.9, 2.76331), are used to draw the MAR line. OLS fit: y = 2.8818x + 0.0185; MAR fit: y = 3.3347x - 0.2379.]

Page 23: Eigenvalues  and  eigenvectors

[Excel simulation: the true relationship y = 3x + 2 is generated for x = 1 to 21, and random errors are added to both variables (cell formulas =3*A4+2, =A4+A4*(RAND()-0.5), =B4+B4*(RAND()-0.5)). Covariance matrix: 71.08896, 93.07421 / 93.07421, 264.3759; eigenvalues 33.55803 and 301.9068; largest eigenvector (0.927438, 0.373977). Parameters a = 2.479933, b = 8.847974. OLS fit: y = 1.37x + 15.86; MAR fit: y = 2.48x + 8.85; true line: y = 3x + 2.]

Errors in the x and y variables cause OLS regression to predict lower slopes. Major axis regression is closer to the correct slope.

If both variables have error terms, major axis regression (MAR) should be preferred over ordinary least squares regression (OLS). The MAR slope is always steeper than the OLS slope.

Page 24: Eigenvalues  and  eigenvectors

[Excel example: latitude of capitals (decimal degrees) against days below zero, and the same data with days rescaled to years (days/360). For the day scale: MAR parameters a1 = 8.96715, b1 = -350.55; OLS fit y = 5.32x - 179.40; MAR fit y = 8.97x - 350.6. For the year scale: a2 = 0.01453, b2 = -0.48722; OLS fit y = 0.0148x - 0.498; MAR fit y = 0.0145x - 0.487. Ratio of slopes after rescaling: a1/a2 = 617.1465 for MAR but 359.4595 for OLS; the true rescaling factor is 360.]

MAR is not stable after rescaling of only one of the variables: OLS regression retains the correct scaling factor, MAR does not.

MAR should not be used for comparing slopes if the variables have different dimensions and were measured in different units, because the slope depends on the way of measurement. If both variables are rescaled in the same manner (by the same factor), this problem does not appear.

Page 25: Eigenvalues  and  eigenvectors

Scaling a matrix

$A = U\,\Lambda\,U^{-1}$

$A^n = (U\Lambda U^{-1})(U\Lambda U^{-1})\cdots(U\Lambda U^{-1}) = U\,\Lambda\,(U^{-1}U)\,\Lambda\,(U^{-1}U)\cdots\Lambda\,U^{-1} = U\,\Lambda^n\,U^{-1}$

since $U^{-1}U = I$. A simple way to take the power of a square matrix.
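A sketch of this shortcut in numpy, checked against direct repeated multiplication:

```python
# Sketch: A^n via the eigendecomposition A = U Lambda U^{-1}.
import numpy as np

A = np.array([[2.0, 1.0],
              [3.0, 4.0]])
lam, U = np.linalg.eig(A)
Uinv = np.linalg.inv(U)

n = 5
A_n = U @ np.diag(lam**n) @ Uinv                       # U Lambda^n U^{-1}
print(np.allclose(A_n, np.linalg.matrix_power(A, n)))  # True
```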

Page 26: Eigenvalues  and  eigenvectors

The variance and covariance of data according to the principal axes

$\Sigma_{(x;y)} = \frac{1}{n-1}(X - M)^T(X - M) = \begin{pmatrix} s_{11} & \cdots & s_{1m} \\ \vdots & \ddots & \vdots \\ s_{m1} & \cdots & s_{mm} \end{pmatrix}$

$V_{(u_1;u_2)} = (X_{(x;y)} - M)\,U$

$\Sigma_{(1;2)} = \frac{1}{n-1}V^T V = \frac{1}{n-1}U^T(X - M)^T(X - M)\,U = U^T\,\Sigma_{(x;y)}\,U = \Lambda$

$u_k^T\,\Sigma_{(x;y)}\,u_k = \lambda_k\,u_k^T u_k = \lambda_k \qquad u_j^T\,\Sigma_{(x;y)}\,u_k = \lambda_k\,u_j^T u_k = 0 \;(j \ne k)$

The vector of data points in the new system comes from the transformation according to the principal axes.

[Figure: original (x; y) axes together with the principal axes u1 and u2]

The variance of variable k in the new system is equal to the eigenvalue of the kth principal axis. The covariance of variables j and k in the new system is zero: the new variables are independent.

Eigenvectors are normalized to a length of one. The eigenvectors are orthogonal.
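A closing sketch (synthetic data; names illustrative): transforming data to the principal axes makes the covariance matrix diagonal, with the eigenvalues as the new variances:

```python
# Sketch: in principal-axis coordinates the covariance matrix becomes Lambda.
import numpy as np

rng = np.random.default_rng(2)
X = rng.multivariate_normal([0.0, 0.0], [[4.0, 2.0], [2.0, 3.0]], size=1000)

M = X.mean(axis=0)
S = np.cov(X, rowvar=False)          # Sigma = (X-M)'(X-M)/(n-1)
lam, U = np.linalg.eigh(S)           # columns of U: principal axes
V = (X - M) @ U                      # coordinates in the new system

print(np.cov(V, rowvar=False).round(6))  # diagonal = eigenvalues, off-diagonal ~ 0
print(lam.round(6))
```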