Random Matrices, Integrals and Space-time Systems
Babak Hassibi, California Institute of Technology
DIMACS Workshop on Algebraic Coding and Information Theory, Dec 15-18, 2003
Outline
• Overview of multi-antenna systems
• Random matrices
• Rotational-invariance
• Eigendistributions
• Orthogonal polynomials
• Some important integrals
• Applications
• Open problems
Introduction

We will be interested in multi-antenna systems of the form

X = \sqrt{\frac{\rho}{M}}\, S H + V,

where X \in \mathbb{C}^{T\times N}, S \in \mathbb{C}^{T\times M}, H \in \mathbb{C}^{M\times N}, V \in \mathbb{C}^{T\times N} are the receive, transmit, channel, and noise matrices, respectively. Moreover, M and N are the numbers of transmit and receive antennas, respectively, T is the coherence interval, and \rho is the SNR.

The entries of V are iid \mathcal{CN}(0,1) and the entries of H are also \mathcal{CN}(0,1), but they may be correlated.
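As a quick illustration of this model, the following minimal simulation sketch (assuming numpy; the helper crandn and the values of M, N, T, and ρ are illustrative assumptions, not from the talk) generates one received block X from random S, H, and V:

```python
import numpy as np

rng = np.random.default_rng(0)

def crandn(rows, cols):
    """iid CN(0,1) entries."""
    return (rng.standard_normal((rows, cols)) + 1j * rng.standard_normal((rows, cols))) / np.sqrt(2)

# illustrative sizes and SNR (assumptions, not from the talk)
M, N, T, rho = 2, 2, 4, 10.0

S = crandn(T, M)                     # transmitted signal, T x M
H = crandn(M, N)                     # channel, M x N, iid CN(0,1)
V = crandn(T, N)                     # noise, T x N, iid CN(0,1)
X = np.sqrt(rho / M) * S @ H + V     # received block, T x N
```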
Some Questions
We will be interested in two cases: the coherent case, where H is known to the receiver, and the non-coherent case, where H is unknown to the receiver. The following questions are natural to ask.

• What is the capacity?
• What are the capacity-achieving input distributions?
• For specific input distributions, what are the mutual information and/or cut-off rates?
• What are the (pairwise) probabilities of error?
Random Matrices
An m\times n random matrix A is simply described by the joint pdf of its entries,

p(A) = p(a_{ij};\ i=1,\ldots,m;\ j=1,\ldots,n).

An example is the family of Gaussian random matrices, where the entries are jointly Gaussian.
Rotational-Invariance

An important class of random matrices are the (left- and right-) rotationally-invariant ones, whose pdf is invariant to (pre- and post-) multiplication by any m\times m and n\times n unitary matrices \Theta and \Psi:

p(\Theta A) = p(A),\quad \Theta^*\Theta = \Theta\Theta^* = I_m, \qquad\text{and}\qquad p(A\Psi) = p(A),\quad \Psi^*\Psi = \Psi\Psi^* = I_n.

If a random matrix is both right- and left-rotationally-invariant we will simply call it isotropically-random (i.r.). If G is a random matrix with iid Gaussian entries, then it is i.r., as are all of the matrices

G^*,\quad G^{-1},\quad GG^*,\quad (GG^*)^{-1},\quad G_1G_2,\quad G_1+G_2,\quad G_1G_2^{-1},\quad G_1^{-1}G_2,\ \ldots
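A quick Monte Carlo sanity check of this invariance (a sketch only, assuming numpy and scipy are available; the choice of statistic, size, and trial count are arbitrary): compare the distribution of an entry of A = GG* with that of the rotated matrix \Theta A\Theta^* for a fixed unitary \Theta.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
m, trials = 4, 20000

def crandn(r, c):
    return (rng.standard_normal((r, c)) + 1j * rng.standard_normal((r, c))) / np.sqrt(2)

# a fixed unitary rotation
Theta, _ = np.linalg.qr(crandn(m, m))

stat_plain, stat_rot = [], []
for _ in range(trials):
    G = crandn(m, m)
    A = G @ G.conj().T                    # Wishart matrix, claimed to be i.r.
    B = Theta @ A @ Theta.conj().T        # rotated version
    stat_plain.append(A[0, 0].real)
    stat_rot.append(B[0, 0].real)

# if A is i.r., the two samples come from the same distribution
print(ks_2samp(stat_plain, stat_rot))
```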
Isotropically-Random Unitary Matrices
A random unitary matrix \Theta is one for which the pdf is given by

p(\Theta) = f(\Theta)\,\delta(\Theta^*\Theta - I_m).

When the unitary matrix is i.r., it is not hard to show that

p(\Theta) = \frac{\Gamma(m)\cdots\Gamma(1)}{\pi^{m(m+1)/2}}\,\delta(\Theta^*\Theta - I_m).

Therefore an i.r. unitary matrix has a uniform distribution over the Stiefel manifold (the space of unitary matrices). It is also called the Haar measure.
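A standard recipe for sampling such an i.r. (Haar-distributed) unitary matrix, sketched here under the assumption that numpy is available, is to QR-factorize an iid complex Gaussian matrix and fix the phases of the diagonal of R:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(m):
    """Sample an m x m isotropically-random (Haar) unitary matrix."""
    G = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    Q, R = np.linalg.qr(G)
    phases = np.diag(R) / np.abs(np.diag(R))
    return Q * phases                     # rescale column k of Q by the phase of R[k, k]

Theta = haar_unitary(4)
print(np.allclose(Theta.conj().T @ Theta, np.eye(4)))   # unitarity check
```

The phase correction matters: without it the Q returned by a QR routine is not exactly uniformly distributed.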
A Fourier Representation
If we denote the columns of \Theta by \theta_k,\ k=1,\ldots,m, then

\delta(\Theta^*\Theta - I_m) = \prod_{k\ge l}\delta\!\big(\mathrm{Re}(\theta_k^*\theta_l) - \delta_{kl}\big)\prod_{k>l}\delta\!\big(\mathrm{Im}(\theta_k^*\theta_l)\big).

Using the Fourier representation of the delta function,

\delta(x) = \frac{1}{2\pi}\int d\omega\, e^{j\omega x},

it follows that we can write

p(\Theta) = \frac{\Gamma(m)\cdots\Gamma(1)}{2^{m}\,\pi^{m(3m+1)/2}}\int_{\Omega=\Omega^*} d\Omega\; e^{\,j\,\mathrm{tr}\,\Omega(\Theta^*\Theta - I_m)}.
A Few Theorems

I.r. unitary matrices come up in many applications.

Theorem 1 Let A be an m\times n i.r. random matrix and consider the SVD A = U\Sigma V^*. Then the following two equivalent statements hold:
1. U, \Sigma, V are independent random matrices, and U and V are i.r. unitary.
2. The pdf of A only depends on \Sigma: p(A) = f(\Sigma).
Idea of Proof: \Theta A\Psi^* and A have the same distribution for any unitary \Theta and \Psi.

Theorem 2 Let A be an i.r. Hermitian matrix and consider the eigendecomposition A = U\Lambda U^*. Then the following two equivalent statements are true:
1. U, \Lambda are independent random matrices, and U is i.r. unitary.
2. The pdf of A is independent of U: p(A) = f(\Lambda).

Theorem 3 Let A be a left rotationally-invariant random matrix and consider the QR decomposition A = QR. Then the matrices Q and R are independent and Q is i.r. unitary.
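A small Monte Carlo illustration of Theorem 2 (a sketch assuming numpy; the size and trial count are arbitrary choices): the eigenvector matrix U of a Wishart matrix GG* should behave like an i.r. unitary matrix, for which E|U_{ij}|^2 = 1/m for every entry.

```python
import numpy as np

rng = np.random.default_rng(1)
m, trials = 3, 5000
acc = np.zeros((m, m))

for _ in range(trials):
    G = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    A = G @ G.conj().T                 # i.r. Hermitian (Wishart) matrix
    _, U = np.linalg.eigh(A)           # A = U diag(lam) U*
    acc += np.abs(U) ** 2

print(acc / trials)                    # every entry should be close to 1/m
```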
Some Jacobians

The decompositions A = U\Lambda U^* and A = QR can be considered as coordinate transformations. Their corresponding Jacobians can be computed to be

dA = \frac{1}{m!}\prod_{k>l}(\lambda_k-\lambda_l)^2\,\delta(U^*U - I_m)\,d\Lambda\,dU

and

dA = c\,\prod_{k=1}^{m} r_{kk}^{\,m-k}\,\delta(Q^*Q - I_m)\,dR\,dQ

for some constant c.

Note that both Jacobians are independent of U and Q.
Eigendistributions

Thus for an i.r. Hermitian A with pdf p_A(\cdot) we have

p(U,\Lambda) = \frac{1}{m!}\,p_A(\Lambda)\prod_{k>l}(\lambda_k-\lambda_l)^2\,\delta(U^*U - I_m).

Integrating out the eigenvectors yields:

Theorem 4 Let A be an i.r. Hermitian matrix with pdf p_A(\cdot). Then

p(\Lambda) = \frac{\pi^{m(m-1)/2}}{\Gamma(m+1)\Gamma(m)\cdots\Gamma(1)}\,p_A(\Lambda)\prod_{k>l}(\lambda_k-\lambda_l)^2.

Note that \prod_{k>l}(\lambda_k-\lambda_l)^2 = \det^2 V(\Lambda), a Vandermonde determinant.
Some Examples

• Wishart matrices, A = GG^*, where G is m\times n,\ n\ge m:

p(\Lambda) = c\prod_{k=1}^{m}\lambda_k^{\,n-m}e^{-\lambda_k}\,\det{}^2V(\Lambda).

• Ratio of Wishart matrices, A = A_1A_2^{-1}:

p(\Lambda) = c\prod_{k=1}^{m}\frac{1}{(1+\lambda_k)^{2n}}\,\det{}^2V(\Lambda).

• I.r. unitary matrix. The eigenvalues lie on the unit circle, e^{j\alpha_k}, and the distribution of the phases is

p(\alpha_1,\ldots,\alpha_m) = c\prod_{k>l}\sin^2\!\Big(\frac{\alpha_k-\alpha_l}{2}\Big).

Sampling sketches for these three ensembles are given below.
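The following sketch (assuming numpy; the sizes m and n are illustrative assumptions) shows how samples from the three ensembles above can be generated:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 5   # illustrative sizes with n >= m (assumptions, not from the talk)

def crandn(r, c):
    return (rng.standard_normal((r, c)) + 1j * rng.standard_normal((r, c))) / np.sqrt(2)

# Wishart: A = G G*, with G of size m x n
G = crandn(m, n)
wishart_eigs = np.linalg.eigvalsh(G @ G.conj().T)

# ratio of Wisharts: A = A1 A2^{-1} (its eigenvalues are real and nonnegative)
G1, G2 = crandn(m, n), crandn(m, n)
A1, A2 = G1 @ G1.conj().T, G2 @ G2.conj().T
ratio_eigs = np.linalg.eigvals(A1 @ np.linalg.inv(A2)).real

# i.r. unitary: eigenvalues lie on the unit circle, keep their phases alpha_k
Q, R = np.linalg.qr(crandn(m, m))
Theta = Q * (np.diag(R) / np.abs(np.diag(R)))
phases = np.angle(np.linalg.eigvals(Theta))

print(wishart_eigs, ratio_eigs, phases)
```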
The Marginal Distribution

Note that all the previous eigendistributions were of the form

p(\Lambda) = c\prod_{k=1}^{m} f(\lambda_k)\,\det{}^2V(\Lambda).

For such pdfs the marginal can be computed using an elegant trick due to Wigner.

Define the Hankel matrix

F = \int d\lambda\; f(\lambda)\begin{bmatrix}1\\ \lambda\\ \vdots\\ \lambda^{m-1}\end{bmatrix}\begin{bmatrix}1 & \lambda & \cdots & \lambda^{m-1}\end{bmatrix}.

Note that F\ge 0. Assume that F>0. Then we can perform the Cholesky decomposition F = LL^*, with L lower triangular.

Note that L^{-1}FL^{-*} = I_m implies that the polynomials

\begin{bmatrix}g_0(\lambda)\\ \vdots\\ g_{m-1}(\lambda)\end{bmatrix} = L^{-1}\begin{bmatrix}1\\ \lambda\\ \vdots\\ \lambda^{m-1}\end{bmatrix}

are orthonormal with respect to the weighting function f(\cdot):

\int d\lambda\, f(\lambda)\,g_k(\lambda)\,g_l(\lambda) = \delta_{kl}.

Now the marginal distribution of one eigenvalue is given by

p(\lambda_1) = c\int d\lambda_2\cdots d\lambda_m \prod_{k=1}^{m}f(\lambda_k)\,\det{}^2V(\Lambda)
= c\,|\det L|^2\int d\lambda_2\cdots d\lambda_m \prod_{k=1}^{m}f(\lambda_k)\,\det{}^2\big(L^{-1}V(\Lambda)\big).

But

L^{-1}V(\Lambda) = L^{-1}\begin{bmatrix}1 & \cdots & 1\\ \lambda_1 & \cdots & \lambda_m\\ \vdots & & \vdots\\ \lambda_1^{m-1} & \cdots & \lambda_m^{m-1}\end{bmatrix} = \begin{bmatrix}g_0(\lambda_1) & \cdots & g_0(\lambda_m)\\ \vdots & & \vdots\\ g_{m-1}(\lambda_1) & \cdots & g_{m-1}(\lambda_m)\end{bmatrix} = G(\Lambda).

Now upon expanding out \det^2 G(\Lambda) and integrating over the variables \lambda_2,\ldots,\lambda_m, the only terms that do not vanish are those for which the indices of the orthonormal polynomials coincide. Thus, after the smoke clears,

p(\lambda) = c\,f(\lambda)\sum_{k=0}^{m-1}g_k^2(\lambda).

In fact, we have the following result.

Theorem 5 Let A be an i.r. Hermitian matrix with p_A(A) = \prod_k f(\lambda_k). Then the marginal distribution of the eigenvalues of A is

p(\lambda) = \frac{1}{m}\,f(\lambda)\sum_{k=0}^{m-1}g_k^2(\lambda).
Orthogonal Polynomials

• What was just described is the connection between random matrices and orthogonal polynomials.
• For Wishart matrices, Laguerre polynomials arise. For ratios of Wishart matrices it is Jacobi polynomials, and for i.r. unitary matrices it is the complex exponentials (orthogonal on the unit circle). A numerical illustration of the Wishart/Laguerre case is sketched below.
• Theorem 5 gives a Christoffel-Darboux sum, and so

p(\lambda) = \frac{1}{m}\,f(\lambda)\cdot\frac{a_{m-1}}{a_m}\Big(g_m'(\lambda)g_{m-1}(\lambda) - g_{m-1}'(\lambda)g_m(\lambda)\Big),

where a_k denotes the leading coefficient of g_k.
• The above sum gives a uniform way to obtain the asymptotic distribution of the marginal pdf and to obtain results such as Wigner's semi-circle law.
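As promised above, here is a sketch of the Wishart/Laguerre case (assuming numpy and scipy; the sizes m, n, the trial count, and the evaluation grid are illustrative choices): the Theorem 5 marginal, built from normalized generalized Laguerre polynomials orthogonal with respect to f(\lambda) = \lambda^{n-m}e^{-\lambda}, is compared with a Monte Carlo histogram of Wishart eigenvalues.

```python
import numpy as np
from scipy.special import eval_genlaguerre, gammaln

m, n = 3, 5                  # illustrative Wishart sizes (n >= m)
alpha = n - m

def g(k, lam):
    # orthonormal polynomial w.r.t. f(lam) = lam^alpha * exp(-lam)
    log_norm = 0.5 * (gammaln(k + 1) - gammaln(k + alpha + 1))
    return np.exp(log_norm) * eval_genlaguerre(k, alpha, lam)

def marginal(lam):
    # Theorem 5: p(lam) = (1/m) f(lam) * sum_k g_k(lam)^2
    f = lam ** alpha * np.exp(-lam)
    return f * sum(g(k, lam) ** 2 for k in range(m)) / m

# Monte Carlo eigenvalues of m x m Wishart matrices GG*, with G of size m x n
rng = np.random.default_rng(3)
samples = []
for _ in range(20000):
    G = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
    samples.extend(np.linalg.eigvalsh(G @ G.conj().T))

hist, edges = np.histogram(samples, bins=60, range=(0, 15), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for lam in (1.0, 3.0, 5.0, 8.0, 12.0):
    idx = np.argmin(np.abs(centers - lam))
    print(f"lam={lam:5.1f}  analytic={marginal(lam):.4f}  empirical={hist[idx]:.4f}")
```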
Remark

The attentive audience will have discerned that my choice of the Cholesky factorization of F, and the resulting orthogonal polynomials, was rather arbitrary.

It is possible to find the marginal distribution without resorting to orthogonal polynomials. The result is given below:

p(\lambda) = \frac{1}{m}\,f(\lambda)\begin{bmatrix}1 & \lambda & \cdots & \lambda^{m-1}\end{bmatrix}F^{-1}\begin{bmatrix}1\\ \lambda\\ \vdots\\ \lambda^{m-1}\end{bmatrix}.
Coherent Channels

Let us now return to the multi-antenna model

X = \sqrt{\frac{\rho}{M}}\,SH + V,

where we will assume that the channel H is known. We will assume that H = D_tGD_r, where D_t, D_r are the correlation matrices at the transmitter and receiver and G has iid \mathcal{CN}(0,1) entries. Note that D_t, D_r can be assumed diagonal wlog.

According to Foschini & Telatar:

1. When D_t = I_M,\ D_r = I_N:

C = E\log\det\Big(I_N + \frac{\rho}{M}G^*G\Big) = E\sum_k\log\Big(1+\frac{\rho}{M}\lambda_k(G^*G)\Big).

2. When D_t = I_M:

C = E\log\det\Big(I_N + \frac{\rho}{M}D_rG^*GD_r\Big) = E\sum_k\log\Big(1+\frac{\rho}{M}\lambda_k(D_r^2G^*G)\Big).

3. When D_r = I_N:

C = \max_{\mathrm{tr}(P)=M} E\log\det\Big(I_N + \frac{\rho}{M}G^*D_tPD_tG\Big) = \max_{\mathrm{tr}(P)=M} E\sum_k\log\Big(1+\frac{\rho}{M}\lambda_k(G^*D_tPD_tG)\Big).

4. In the general case:

C = \max_{\mathrm{tr}(P)=M} E\sum_k\log\Big(1+\frac{\rho}{M}\lambda_k(D_rG^*D_tPD_tGD_r)\Big).

Cases 1-3 are readily dealt with using the techniques developed so far, since the matrices involved are rotationally-invariant. Therefore we will do something more interesting and compute the characteristic function (not just the mean). This requires more machinery, as does Case 4, which we now develop.
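A quick Monte Carlo estimate of Case 1 (a sketch assuming numpy; the antenna numbers, SNR, and trial count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
M, N, rho, trials = 4, 4, 10.0, 20000    # illustrative values (rho = 10, i.e. 10 dB)

acc = 0.0
for _ in range(trials):
    G = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
    # log det(I_N + (rho/M) G*G); slogdet is used for numerical stability
    _, logdet = np.linalg.slogdet(np.eye(N) + (rho / M) * G.conj().T @ G)
    acc += logdet

print("ergodic capacity ~", acc / trials / np.log(2), "bits per channel use")
```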
A Useful Integral Formula

Using a generalization of the technique used to prove Theorem 5, we can show the following result.

Theorem 6 Let functions f(\lambda),\ g_k(\lambda),\ h_k(\lambda),\ k = 0,\ldots,m-1, be given and define the matrices

V_G(\Lambda) = \begin{bmatrix}g_0(\lambda_1) & \cdots & g_0(\lambda_m)\\ \vdots & & \vdots\\ g_{m-1}(\lambda_1) & \cdots & g_{m-1}(\lambda_m)\end{bmatrix},\qquad V_H(\Lambda) = \begin{bmatrix}h_0(\lambda_1) & \cdots & h_0(\lambda_m)\\ \vdots & & \vdots\\ h_{m-1}(\lambda_1) & \cdots & h_{m-1}(\lambda_m)\end{bmatrix}.

Then

\int d\Lambda\;\prod_{k=1}^{m}f(\lambda_k)\,\det V_G(\Lambda)\,\det V_H(\Lambda) = m!\,\det F,

where

F = \int d\lambda\; f(\lambda)\begin{bmatrix}g_0(\lambda)\\ \vdots\\ g_{m-1}(\lambda)\end{bmatrix}\begin{bmatrix}h_0(\lambda) & \cdots & h_{m-1}(\lambda)\end{bmatrix}.
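Since Theorem 6 holds for any measure, it can be checked numerically by replacing the integral with a finite weighted sum. The sketch below (assuming numpy; the grid points, weights, and test functions are arbitrary choices) verifies the identity to machine precision for m = 3:

```python
import math
import numpy as np
from itertools import product

m = 3
rng = np.random.default_rng(5)
x = rng.standard_normal(12)          # "integration" points
w = rng.random(12)                   # positive weights

f = lambda t: np.exp(-t ** 2)                               # arbitrary test weight
g = [lambda t, k=k: t ** k for k in range(m)]               # g_k(t) = t^k
h = [lambda t, k=k: np.cos((k + 1) * t) for k in range(m)]  # h_k(t) = cos((k+1) t)

# left-hand side: m-fold sum of prod_k f(lam_k) * det V_G(Lam) * det V_H(Lam)
lhs = 0.0
for idx in product(range(len(x)), repeat=m):
    lam = x[list(idx)]
    VG = np.array([[g[a](lam[b]) for b in range(m)] for a in range(m)])
    VH = np.array([[h[a](lam[b]) for b in range(m)] for a in range(m)])
    lhs += np.prod(w[list(idx)] * f(lam)) * np.linalg.det(VG) * np.linalg.det(VH)

# right-hand side: m! det F, with F_kl = sum_i w_i f(x_i) g_k(x_i) h_l(x_i)
F = np.array([[np.sum(w * f(x) * g[k](x) * h[l](x)) for l in range(m)] for k in range(m)])
rhs = math.factorial(m) * np.linalg.det(F)

print(lhs, rhs)   # the two numbers agree up to floating-point error
```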
Theorem 6 was apparently first shown by Andreief in 1883.
A useful generalization has been noted in Chiani, Win and Zanella (2003).
Theorem 7 Let functions f_k(\lambda),\ g_k(\lambda),\ h_k(\lambda),\ k=0,\ldots,m-1, be given. Then

\int d\Lambda\;\prod_{k}f_k(\lambda_k)\,\det V_G(\Lambda)\,\det V_H(\Lambda) = \mathrm{Tensor}(A),\qquad A_{ijk} = \int d\lambda\; f_i(\lambda)\,g_j(\lambda)\,h_k(\lambda),

where for the tensor A we have defined

\mathrm{Tensor}(A) = \sum_{\mu}\sum_{\alpha}\mathrm{sgn}(\mu)\,\mathrm{sgn}(\alpha)\prod_{k=1}^{m}A_{k\mu_k\alpha_k},

and the sums are over all possible permutations \mu, \alpha of the integers 1 to m.
An Exponential Integral

Theorem 8 (Itzykson and Zuber, 1990) Let A and B be m-dimensional diagonal matrices. Then

\int d\Theta\; e^{\mathrm{tr}(\Theta A\Theta^*B)}\,\delta(\Theta^*\Theta - I_m) = \Gamma(m)\cdots\Gamma(1)\;\frac{\det E(A,B)}{\det V(A)\,\det V(B)},

where

E(A,B) = \begin{bmatrix}e^{a_1b_1} & \cdots & e^{a_1b_m}\\ \vdots & & \vdots\\ e^{a_mb_1} & \cdots & e^{a_mb_m}\end{bmatrix}.

Idea of Proof: Use induction. Start by partitioning

\Theta = \begin{bmatrix}\Theta_1 & \vartheta\end{bmatrix},\qquad A = \mathrm{diag}(A_1,\, a_m).
Then rewrite

\mathrm{tr}(\Theta A\Theta^*B) = \mathrm{tr}\big(\Theta_1(A_1 - a_mI_{m-1})\Theta_1^*B\big) + a_m\,\mathrm{tr}B,

so that, with A_1' = A_1 - a_mI_{m-1}, the desired integral becomes

\int d\Theta\, e^{\mathrm{tr}(\Theta A\Theta^*B)}\delta(\Theta^*\Theta - I_m) = e^{a_m\mathrm{tr}B}\int d\Theta\, e^{\mathrm{tr}(\Theta_1A_1'\Theta_1^*B)}\delta(\Theta^*\Theta - I_m)

= c\,e^{a_m\mathrm{tr}B}\int d\Theta_1\,d\Omega\; e^{\mathrm{tr}(\Theta_1A_1'\Theta_1^*B) \,+\, j\,\mathrm{tr}\,\Omega(\Theta_1^*\Theta_1 - I_{m-1})}

= c\,e^{a_m\mathrm{tr}B}\int d\Omega\; e^{-j\,\mathrm{tr}\,\Omega}\prod_{k=1}^{m}\det(b_kA_1' + j\Omega)^{-1}

= c'\,e^{a_m\mathrm{tr}B}\int dW\; e^{-j\,\mathrm{tr}(A_1'W)}\prod_{k=1}^{m}\det(b_kI_{m-1} + jW)^{-1}

= c'\,e^{a_m\mathrm{tr}B}\int d\Lambda\,dU\;\prod_{l=1}^{m}\prod_{k=1}^{m-1}\frac{1}{b_l + j\lambda_k}\;\det{}^2V(\Lambda)\;e^{-j\,\mathrm{tr}(A_1'U\Lambda U^*)}\,\delta(U^*U - I_{m-1}).

The last integral is over an (m-1)-dimensional i.r. unitary matrix. And so, if we use the integral formula (at the lower dimension) to do the integral over U, we get

= \frac{c'\,e^{a_m\mathrm{tr}B}}{\det V(A_1')}\int d\Lambda\;\prod_{l=1}^{m}\prod_{k=1}^{m-1}\frac{1}{b_l + j\lambda_k}\;\det V(\Lambda)\,\det E(A_1', -j\Lambda).

An application of Theorem 6 now gives the result.
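A Monte Carlo check of Theorem 8 for m = 2 (a sketch assuming numpy; the average is taken in the normalized, Haar-averaged form of the identity, for which the m = 2 constant reduces to 1, and the diagonal entries of A and B are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(6)
a = np.array([0.3, 1.1])
b = np.array([-0.5, 0.8])
A, B = np.diag(a), np.diag(b)

def haar_unitary(m):
    G = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    Q, R = np.linalg.qr(G)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

trials = 100000
acc = 0.0
for _ in range(trials):
    Th = haar_unitary(2)
    acc += np.exp(np.trace(Th @ A @ Th.conj().T @ B).real)

mc_average = acc / trials
# E_Theta[exp(tr(Theta A Theta* B))] = det E(A,B) / (det V(A) det V(B)) for m = 2
exact = np.linalg.det(np.exp(np.outer(a, b))) / ((a[1] - a[0]) * (b[1] - b[0]))
print(mc_average, exact)   # the two numbers should be close
```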
Characteristic Function

Consider

C = E\log\det\Big(I_M + \frac{\rho}{M}GDG^*\Big) = E\log\det\Big(I_N + \frac{\rho}{M}G^*GD\Big).

The characteristic function is (assuming M = N)

E\,e^{\,j\omega\log\det\left(I_N + \frac{\rho}{M}GDG^*\right)} = E\,\det\Big(I_N + \frac{\rho}{M}GDG^*\Big)^{j\omega}

= c\int dW\;\det\Big(I_N + \frac{\rho}{M}DW\Big)^{j\omega}e^{-\mathrm{tr}\,W}

= c\,\det(D)^{-N}\int dW\;\det\Big(I_N + \frac{\rho}{M}W\Big)^{j\omega}e^{-\mathrm{tr}(D^{-1}W)}

= c\,\det(D)^{-N}\int dU\,d\Lambda\;\prod_{k=1}^{N}\Big(1 + \frac{\rho}{M}\lambda_k\Big)^{j\omega}\det{}^2V(\Lambda)\;e^{-\mathrm{tr}(D^{-1}U\Lambda U^*)}\,\delta(U^*U - I_N).

Successive use of Theorems 6 and 8 gives the result.
Non-coherent Channels

Let us now consider the non-coherent channel

X = \sqrt{\frac{\rho}{M}}\,SH + V,

where H is unknown and has iid \mathcal{CN}(0,1) entries.

Theorem 9 (Hochwald and Marzetta, 1998) The capacity-achieving distribution is given by S = UD, where U is T\times M i.r. unitary and D is an independent diagonal matrix.

Idea of Proof: Write S = UDV^*. V^* can be absorbed into H and so is not needed. The optimal S is left rotationally-invariant.
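A sketch of an input with the structure of Theorem 9 (assuming numpy; the block length T, the power loading D, and the other sizes are illustrative assumptions, not choices from the talk):

```python
import numpy as np

rng = np.random.default_rng(7)
T, M, N, rho = 8, 2, 2, 10.0

def crandn(r, c):
    return (rng.standard_normal((r, c)) + 1j * rng.standard_normal((r, c))) / np.sqrt(2)

# U: T x M isotropically-random matrix with orthonormal columns, via phase-fixed QR
Q, R = np.linalg.qr(crandn(T, M))
U = Q * (np.diag(R) / np.abs(np.diag(R)))

D = np.sqrt(T) * np.eye(M)          # an (arbitrary) diagonal power-loading matrix
S = U @ D                           # capacity-achieving structure S = U D

H = crandn(M, N)
V = crandn(T, N)
X = np.sqrt(rho / M) * S @ H + V    # received block of the non-coherent channel
```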
Mutual Information

Determining the optimal distribution on D is an open problem. However, given D, one can compute all quantities of interest. The starting point is

p(X|U,D) = \frac{e^{-\mathrm{tr}\,X^*\left(I_T + \frac{\rho}{M}UD^2U^*\right)^{-1}X}}{\pi^{TN}\,\det^{N}\!\left(I_T + \frac{\rho}{M}UD^2U^*\right)}
= \frac{e^{-\mathrm{tr}\,X^*X \,+\, \frac{\rho}{M}\mathrm{tr}\,X^*UD^2\left(I_M + \frac{\rho}{M}D^2\right)^{-1}U^*X}}{\pi^{TN}\,\det^{N}\!\left(I_M + \frac{\rho}{M}D^2\right)}.

The expectation over U is now readily do-able to give p(X|D). (A little tricky since U is not square, but doable using the Fourier representation of delta functions and Theorems 6 and 8.)
Other Problems

• Mutual information for almost any input distribution on D can be computed.
• Cut-off rates for coherent and non-coherent channels for many input distributions (Gaussian, i.r. unitary, etc.) can be computed.
• The characteristic function for coherent channel capacity in the general case can be computed.
• The sum-rate capacity of the MIMO broadcast channel in some special cases can be computed.
• The diversity of distributed space-time coding in wireless networks can be determined.
Other Work and Open Problems

• I did not touch at all upon asymptotic analysis using the Stieltjes transform.
• Open problems include determining the optimal input distribution for the non-coherent channel, and finding the optimal power allocation for coherent channels when there is correlation among the transmit antennas.