
Information Theory
Lecture 12
Parallel Gaussian channels, OFDM and MIMO

STEFAN HÖST


Parallel Gaussian channels

Consider n parallel, independent Gaussian channels with a total power constraint ∑i Pi ≤ P. The noise on channel i is Zi ∼ N(0, √Ni).

[Figure: n parallel channels; each output is Yi = Xi + Zi, i = 1, …, n.]


Parallel Gaussian channels
Water filling

Theorem
Given k independent parallel Gaussian channels with noise variance $N_i$, $i = 1, \ldots, k$, and a restricted total transmitted power $\sum_i P_i = P$, the capacity is

$$C = \frac{1}{2} \sum_{i=1}^{k} \log\Big(1 + \frac{P_i}{N_i}\Big)$$

where

$$P_i = (B - N_i)^+, \qquad (x)^+ = \begin{cases} x, & x \ge 0 \\ 0, & x < 0 \end{cases}$$

and B is chosen such that $\sum_i P_i = P$.
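To make the water filling concrete, here is a minimal numerical sketch (not from the slides): the level B is found by bisection, using that $\sum_i (B - N_i)^+$ is non-decreasing in B. The function name `water_filling`, the tolerance, and the example numbers are illustrative assumptions; the slides leave the base of the logarithm unspecified, so log2 (bits) is assumed below.

```python
import numpy as np

def water_filling(noise, P, tol=1e-12):
    """Power allocation P_i = (B - N_i)^+ with sum_i P_i = P (bisection on B)."""
    noise = np.asarray(noise, dtype=float)
    # B lies between min(N_i) (no power poured) and min(N_i) + P
    # (all power poured into the least noisy channel).
    lo, hi = noise.min(), noise.min() + P
    while hi - lo > tol:
        B = 0.5 * (lo + hi)
        if np.maximum(B - noise, 0.0).sum() > P:
            hi = B          # level too high: pours more than P
        else:
            lo = B          # level too low (or exact): can pour more
    return np.maximum(0.5 * (lo + hi) - noise, 0.0)

# Example: three channels with noise variances 1, 2 and 4, total power P = 3.
noise = np.array([1.0, 2.0, 4.0])
powers = water_filling(noise, P=3.0)              # level B = 3 -> [2, 1, 0]
C = 0.5 * np.sum(np.log2(1.0 + powers / noise))   # capacity in bits per use
print(powers, C)
```

In this example the water level becomes B = 3, so the allocation is (2, 1, 0): the noisiest channel is left unused.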


Frequency division

Wideband channel – OFDM

Consider a wideband channel of bandwidth W, where

• the noise and the channel attenuation vary over frequency,

• the power constraint is E[x(t)²] = P.

Split it into n independent sub-channels of bandwidth W∆ = W/n, such that

• the noise and the channel attenuation are constant within each sub-channel,

• the power constraint becomes ∑i Pi = P, where Pi = E[Xi²] is the power in sub-channel i.


Frequency division

[Figure: the noise PSD N0,i/2 and the channel attenuation |Hi|² as functions of frequency, piecewise constant over the n sub-channels covering the bandwidth W.]


Frequency division

Parallel channels

Equivalent model with n parallel channels:

$$Y_i = H_i X_i + Z_i, \qquad Z_i \sim N\Big(0, \sqrt{N_{0,i}/2}\Big), \qquad i = 1, \ldots, n$$

[Figure: each input Xi passes through the attenuation Hi and is disturbed by the additive noise Zi.]


Frequency division

Theorem (Capacity)
For a wideband channel consisting of n independent sub-channels with bandwidth $W_\Delta$, attenuation $H_i$ and additive noise $Z_i \sim N(0, \sqrt{N_{0,i}/2})$, the channel capacity is

$$C = \sum_{i=1}^{n} W_\Delta \log\Big(1 + \frac{P_i |H_i|^2}{N_{0,i} W_\Delta}\Big)$$

where

$$P_i = \Big(B - \frac{N_{0,i} W_\Delta}{|H_i|^2}\Big)^+, \qquad \sum_i P_i = P,$$

and P is the total power constraint.
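As a sketch of how the theorem is applied, the same bisection can be reused with the effective noise levels $N_{0,i} W_\Delta / |H_i|^2$ playing the role of $N_i$. The helper `water_filling` is the one sketched after the water-filling theorem, and all numbers below are made-up illustration values.

```python
import numpy as np

# Made-up example: 4 sub-channels over a 10 MHz band.
W = 10e6
n = 4
W_delta = W / n
N0 = np.array([1e-9, 2e-9, 1e-9, 4e-9])          # noise PSD parameters N_{0,i}
H2 = np.array([1.0, 0.7, 0.3, 0.9]) ** 2          # attenuations |H_i|^2
P = 10e-3                                         # total power constraint

# Water fill against the effective noise N_{0,i} * W_delta / |H_i|^2.
noise_eff = N0 * W_delta / H2
powers = water_filling(noise_eff, P)              # P_i = (B - N_{0,i} W_delta / |H_i|^2)^+

# Capacity in bits per second (log2 assumed).
C = np.sum(W_delta * np.log2(1.0 + powers * H2 / (N0 * W_delta)))
print(powers, C)
```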


MIMO channel

MIMO (Multiple In, Multiple Out)

Consider a radio channel with nt transmit antennas and nr receive antennas. The attenuation from transmit antenna i to receive antenna j is h_{j,i}.

[Figure: MIMO channel with inputs X1, …, X_{nt}, attenuations h_{j,i}, and outputs Y_j = ∑i h_{j,i} Xi + Z_j.]


MIMO channel

Multi-dimensional Gaussian channel

Let

$$X = \begin{pmatrix} X_1 \\ \vdots \\ X_{n_t} \end{pmatrix}, \qquad Z = \begin{pmatrix} Z_1 \\ \vdots \\ Z_{n_r} \end{pmatrix} \sim N(0, \Lambda_Z),$$

where $\Lambda_Z = E[ZZ^T]$ is the covariance matrix of Z. Then

$$Y = HX + Z, \qquad \text{where } H = \begin{pmatrix} h_{11} & \cdots & h_{1 n_t} \\ \vdots & & \vdots \\ h_{n_r 1} & \cdots & h_{n_r n_t} \end{pmatrix}$$


MIMO channel

Multi-dimensional Gaussian channel

With $\Lambda_X = E[XX^T]$ as the covariance matrix of X, the power constraint for the channel is

$$\operatorname{tr} \Lambda_X = \sum_i E[X_i^2] = \sum_i P_i \le P,$$

where tr(·) denotes the trace (the sum of the diagonal elements).


n-dim Gaussian distribution

Let $X \sim N(\mu, \Lambda_X)$ be an n-dim Gaussian column vector with mean $\mu = E[X]$ and covariance matrix $\Lambda_X = E[(X - \mu)(X - \mu)^T]$. Then

$$f_X(x) = \frac{1}{\sqrt{(2\pi)^n |\Lambda_X|}} \, e^{-\frac{1}{2}(x-\mu)^T \Lambda_X^{-1} (x-\mu)}$$

This gives the differential entropy

$$H(X) = \frac{1}{2}\log (2\pi e)^n |\Lambda_X| = \frac{1}{2}\log |2\pi e \Lambda_X|$$

The n-dimensional Gaussian distribution maximises the differential entropy over all distributions with mean $\mu$ and covariance matrix $\Lambda_X$. Here |A| denotes the determinant of the matrix A.
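The closed form can be checked numerically: a Monte Carlo estimate of $E[-\log f_X(X)]$ should match $\frac{1}{2}\log (2\pi e)^n |\Lambda_X|$ (natural log, i.e. nats). The covariance matrix and seed below are made-up example values.

```python
import numpy as np

rng = np.random.default_rng(0)
Lam = np.array([[2.0, 0.5],
                [0.5, 1.0]])                      # example covariance matrix
n = Lam.shape[0]

# Closed form: H(X) = 1/2 log (2*pi*e)^n |Lambda|  (nats).
h_closed = 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(Lam))

# Monte Carlo: H(X) = E[-log f_X(X)] over samples from N(0, Lambda).
x = rng.multivariate_normal(np.zeros(n), Lam, size=200_000)
quad = np.einsum('ij,jk,ik->i', x, np.linalg.inv(Lam), x)   # x^T Lambda^{-1} x
log_f = -0.5 * (quad + np.log((2 * np.pi) ** n * np.linalg.det(Lam)))
print(h_closed, -log_f.mean())                    # agree to a few decimals
```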


MIMO channel

The mutual information between X and Y:

$$\begin{aligned}
I(X;Y) &= H(Y) - H(Y|X) = H(Y) - H(Z) \\
&= H(Y) - \tfrac{1}{2}\log (2\pi e)^{n_r} |\Lambda_Z| \\
&\le \tfrac{1}{2}\log (2\pi e)^{n_r} |\Lambda_Y| - \tfrac{1}{2}\log (2\pi e)^{n_r} |\Lambda_Z| \\
&= \tfrac{1}{2}\log |\Lambda_Y \Lambda_Z^{-1}|, \qquad \text{with equality iff } X \sim N(0, \Lambda_X)
\end{aligned}$$

where the inequality uses that the Gaussian maximises the differential entropy for a given covariance (previous slide). The covariance of Y:

$$\Lambda_Y = E\big[(HX + Z)(HX + Z)^T\big] = H\Lambda_X H^T + \Lambda_Z$$

This gives

$$I(X;Y) \le \tfrac{1}{2}\log \big|I + H\Lambda_X H^T \Lambda_Z^{-1}\big|, \qquad \text{with equality iff } X \sim N(0, \Lambda_X)$$


MIMO channel

Let $\Lambda_Z = N I_{n_r}$ and let $H = USV^T$ be the SVD of H, i.e.

$$S = \operatorname{diag}(s_1, s_2, \ldots, s_n), \qquad |U| = |V| = 1, \qquad UU^T = I, \qquad VV^T = I$$

Then

$$C = \max_{\operatorname{tr}\Lambda_X = P} \tfrac{1}{2}\log \Big|I + \tfrac{1}{N} H \Lambda_X H^T\Big| = \max_{\operatorname{tr}\Lambda_X = P} \tfrac{1}{2}\log \Big|I + \tfrac{1}{N} S V^T \Lambda_X V S^T\Big|$$

since $I + \frac{1}{N}H\Lambda_X H^T = U\big(I + \frac{1}{N}SV^T\Lambda_X V S^T\big)U^T$ is a similarity transform, which preserves the determinant. With $\tilde X = V^T X$ and $\Lambda_{\tilde X} = V^T \Lambda_X V$ (note that $\operatorname{tr}\Lambda_{\tilde X} = \operatorname{tr}\Lambda_X$), this becomes

$$C = \max_{\operatorname{tr}\Lambda_{\tilde X} = P} \tfrac{1}{2}\log \Big|I + \tfrac{1}{N} S \Lambda_{\tilde X} S^T\Big|$$
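The determinant-preserving step is easy to confirm numerically; the sketch below uses a made-up channel matrix, noise level and covariance (seed and sizes are illustrative).

```python
import numpy as np

rng = np.random.default_rng(3)
nt, nr = 3, 4
H = rng.standard_normal((nr, nt))
N = 0.5
M = rng.standard_normal((nt, nt))
Lam_X = M @ M.T                                # a valid covariance matrix
Lam_X *= 2.0 / np.trace(Lam_X)                 # normalise so tr(Lambda_X) = 2

U, s, Vt = np.linalg.svd(H)                    # H = U S V^T
S = np.zeros((nr, nt))
np.fill_diagonal(S, s)

lhs = np.linalg.det(np.eye(nr) + H @ Lam_X @ H.T / N)
rhs = np.linalg.det(np.eye(nr) + S @ Vt @ Lam_X @ Vt.T @ S.T / N)
print(lhs, rhs)                                # equal: U is orthogonal
```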


Some Matrix Theory

Theorem
Let A be a matrix with eigenvalues $\lambda_i$. Then

$$\operatorname{tr} A = \sum_i \lambda_i \qquad \text{and} \qquad |A| = \prod_i \lambda_i$$

Theorem (Similar matrices)
The matrices A and B are similar if there exists a non-singular matrix M such that $B = MAM^{-1}$. Then A and B have the same set of eigenvalues, and consequently the same trace and determinant.

Theorem (Hadamard's inequality)
If A is positive semi-definite, then $|A| \le \prod_i a_{ii}$, with equality if and only if A is diagonal.
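These three facts are easy to sanity-check numerically; the sketch below uses a random positive definite matrix $A = MM^T$ (all choices are illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = M @ M.T                                   # symmetric positive (semi-)definite

lam = np.linalg.eigvalsh(A)
print(np.trace(A), lam.sum())                 # trace = sum of eigenvalues
print(np.linalg.det(A), lam.prod())           # determinant = product of eigenvalues

B = M @ A @ np.linalg.inv(M)                  # B = MAM^{-1} is similar to A
print(np.sort(np.linalg.eigvals(B).real))     # same eigenvalues as A
print(np.trace(B), np.linalg.det(B))          # same trace and determinant

print(np.linalg.det(A) <= np.prod(np.diag(A)))   # Hadamard: |A| <= prod a_ii
```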


MIMO channel

That is,

$$C = \max_{\operatorname{tr}\Lambda_{\tilde X} = P} \tfrac{1}{2}\log \Big|I + \tfrac{1}{N} S \Lambda_{\tilde X} S^T\Big| \le \max_{\operatorname{tr}\Lambda_{\tilde X} = P} \tfrac{1}{2}\log \prod_i \Big(1 + \tfrac{1}{N} s_i^2 \tilde P_i\Big)$$

where $\tilde P_i$ are the diagonal elements of $\Lambda_{\tilde X}$; the inequality is Hadamard's, applied to the positive semi-definite matrix $I + \frac{1}{N} S \Lambda_{\tilde X} S^T$. The maximum occurs for $\Lambda_{\tilde X} = \operatorname{diag}(\tilde P_1, \ldots, \tilde P_n)$, which gives

$$C = \sum_i \tfrac{1}{2}\log\Big(1 + \tfrac{1}{N} s_i^2 \tilde P_i\Big)$$


MIMO channel

Theorem (Capacity of MIMO channel)
A multi-dimensional Gaussian channel with

$$Y = HX + Z$$

where H is the channel matrix and $Z \sim N(0, N I_{n_r})$, has the channel capacity

$$C = \sum_{i=1}^{n_t} \frac{1}{2}\log\Big(1 + \frac{s_i^2 \tilde P_i}{N}\Big)$$

where

$$\tilde P_i = \Big(B - \frac{N}{s_i^2}\Big)^+, \qquad \sum_i \tilde P_i = P,$$

the SVD of the channel is $H = USV^T$, and $s_i$ are the diagonal elements of S.
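Putting the pieces together: the MIMO capacity is water filling over the squared singular values of H. The sketch below reuses the `water_filling` helper from the parallel-channel slide; the channel matrix, noise level and power budget are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
nt, nr = 3, 4
H = rng.standard_normal((nr, nt))             # made-up channel matrix
N = 0.1                                       # noise variance, Lambda_Z = N*I
P = 2.0                                       # total power, tr(Lambda_X) = P

s = np.linalg.svd(H, compute_uv=False)        # singular values s_i of H
powers = water_filling(N / s**2, P)           # P_i = (B - N / s_i^2)^+
C = 0.5 * np.sum(np.log2(1.0 + s**2 * powers / N))   # bits per channel use
print(powers, C)
```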
