On the Distribution of SINR for the MMSE MIMO Receiver and Performance Analysis.pdf

Embed Size (px)

Citation preview

  • 7/30/2019 On the Distribution of SINR for the MMSE MIMO Receiver and Performance Analysis.pdf

    1/16

    IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 1, JANUARY 2006 271

    [10] A. Martinez, A. Guilln i Fbregas, and G. Caire, New simple evalu-ation of the error probability of bit-interleaved coded modulation usingthe saddlepoint approximation, in Proc. 2004 Int. Symp. InformationTheory and its Applications, Parma, Italy, Oct. 2004, pp. 14911496.

    [11] G. Poltyrev, Bounds on the decoding error probability of linear codesvia theirspectra,IEEE Trans. Inf. Theory, vol. 40,no. 4,pp. 12841292,Jul. 1994.

    [12] D. Divsalar, H. Jin, and R. J. McEliece, Coding theorems for turbo-like codes, in Proc. 36th Allerton Conf. Communication, Control, and

    Computing, Allerton House, Monticello, IL, Sep. 1998, pp. 201210.[13] F. W. J. Olver, Asymptotics and Special Functions. New York: Aca-

    demic, 1974.

    On the Distribution of SINR for the MMSE MIMO

    Receiver and Performance Analysis

    Ping Li, Debashis Paul, Ravi Narasimhan, Member, IEEE, and

    John Cioffi, Fellow, IEEE

    AbstractThis correspondence studies the statistical distributionof the signal-to-interference-plus-noise ratio (SINR) for the minimummean-square error (MMSE) receiver in multiple-input multiple-output(MIMO) wireless communications. The channel model is assumed tobe (transmit) correlated Rayleigh flat-fading with unequal powers.The SINR can be decomposed into two independent random variables:SINR = SINR + , where SINR corresponds to the SINR for azero-forcing (ZF) receiver and has an exact Gamma distribution. Thiscorrespondence focuses on characterizing the statistical properties of

    using the results from random matrix theory. First three asymptoticmoments of are derived for uncorrelated channels and channels with

    equicorrelations. For general correlated channels, some limiting upperbounds for the first three moments are also provided. For uncorrelatedchannels and correlated channels satisfying certain conditions, it is proved

    that converges to a Normal random variable. A Gamma distributionand a generalized Gamma distribution are proposed as approximations tothe finite sample distribution of . Simulations suggest that these approx-imate distributions can be used to estimate accurately the probability oferrors even for very small dimensions (e.g., two transmit antennas).

    IndexTermsAsymptotic distributions, channel correlation,error prob-ability, Gamma approximation, minimum mean square error (MMSE) re-ceiver, multiple-input multiple-output (MIMO) system, random matrix,signal-to-interference-plus-noise ratio (SINR).

    I. INTRODUCTION

    This study considers the following signal and channel model in a

    multiple-input multiple-output (MIMO) system:

    y

    r

    =

    1

    p

    m

    HH

    H

    WW

    W

    RR

    R

    tt

    t

    PP

    P

    x

    t

    + n

    c

    =

    1

    p

    m

    HH

    H

    x

    t

    + n

    c

    (1)

    Manuscript received January 20, 2005; revised September 9, 2005.P. Li is with the Department of Statistics, Stanford University, Stanford, CA

    94305 USA (e-mail: [email protected]).D. Paul is with the Department of Statistics, University of California, Davis,

    Davis, CA 95616 USA (e-mail: [email protected]).R. Narasimhan is with the Department of Electrical Engineering, Uni-

    versity of California, Santa Cruz, Santa Cruz, CA 95064 USA (e-mail:[email protected]).

    J. Cioffi is with the Department of Electrical Engineering, Stanford Univer-sity, Stanford, CA 94305 USA (e-mail: [email protected]).

    Communicated by R. R. Mller, Associate Editor for Communications.Digital Object Identifier 10.1109/TIT.2005.860466

    wherex

    t

    p is the (normalized) transmitted signal vector andy

    r

    m is the received signal vector. Herep

    is the number of transmit an-

    tennas andm

    is the number of receive antennas.HH

    H

    WW

    W

    m 2 p con-

    sists of independent and odentically distributed (i.i.d.) standard com-

    plex Normal entries.RR

    R

    tt

    t

    p 2 p is the transmitter correlation ma-

    trix.PP

    P

    = d i a g [ ~ c

    1

    ; ~c

    2

    ; . . . ; ~c

    p

    ]

    p 2 p ;~c

    k

    = c

    k

    m

    p

    , wherec

    k

    is the

    signal-to-noise ratio (SNR) for thek

    th spatial stream. This definition

    of SNR is consistent with [1, Sec. 7.4].HHH = HHH

    WW

    W

    RRR

    tt

    t

    PPP

    m 2 p

    is treated as the channel matrix.n

    c

    m is the complex noise vector

    and is assumed to have zero mean and identity covariance. Note that the

    power matrixPP

    P has terms involving the variance of the noise. The cor-

    relation matrixRR

    R

    t

    and power matrixPP

    P are assumed to be nonrandom.

    Also, we restrict our attention top

    m

    .

    We consider the popular linear minimum mean-square error

    (MMSE) receiver. Conditional on the channel matrixHH

    H , the signal-to-

    interference-plus-noise ratio (SINR) on thek

    th spatial stream can be

    expressed as (e.g., [1][6])

    SINRk

    =

    1

    MMSEk

    01 =

    1

    II

    I

    p

    +

    1

    m

    HH

    H

    y

    HH

    H

    0 1

    k k

    01

    (2)

    whereII

    I

    p

    is ap

    2p

    identity matrix, andHH

    H

    y is theHermitian transposeof

    HH

    H . Note that (2), in the same form as equation (7.49) of [1], is derived

    based on the second-order statistics of the input signals, not restricted

    to binary signals.

    For binary inputs, Verd [4, eq. (6.47)] provides the exact formula

    for computing thebit-error rate (BER) (also see[7]). Conditional onHH

    H ,

    this BER formula requires computing2

    p 0 1 Q-functions. To compute

    BER unconditionally, we need to sampleHH

    H enough times (e.g.,1 0

    5 )

    to get a reliable estimate. Whenp

    3 2

    (orp

    6 4

    ), the computations

    become intractable [4], [8].

    Recently, study of the asymptotic properties of multiuser receivers

    (e.g., [2][4], [6], [8][11]) has received a lot of attention. Works that

    relate directly to the content of this correspondence include Tse and

    Hanly [11] and Verd and Shamai [6], who independently derived the

    asymptotic first moment of SINR for uncorrelated channels. Tse and

    Zeitouni [3] proved the asymptotic Normality of SINR for the equal

    power case, and commented on the possibility of extending the result

    to theunequal powers scenario.Zhang et al. [12] proved the asymptotic

    Normality of the multiple-access interference (MAI), which is closely

    related to SINR. Guo et al. [8] proved the asymptotic Normality of the

    decision statistics for a variety of linear multiuser receivers. [8] con-

    sidered a general power distribution and corresponding unconditional

    asymptotic behavior.

    Based on the asymptotic Normality results, Poor and Verd [2] (also

    in [4], [8]) proposed using the limiting BER (denoted by BER1

    ) for

    binary modulations, which is a single Q-function

    BER1

    = Q ( E (

    SINRk

    )

    1

    ) =

    1

    p

    E ( S I N R )

    e

    0 t = 2

    d t

    (3)

    whereE (

    SINRk

    )

    1

    denotes the asymptotic first moment of SINRk

    .

    Equation (3) is convenient and accurate for large dimensions.

    However, its accuracy for small dimensions is of some concern.

    For instance, [8] compared the asymptotic BER with simulation

    results, which showed that even withp = 6 4

    there existed significant

    discrepancies. In general, (3) will underestimate the true BER. For

    example, in our simulations, whenm = 1 6 ; p = 8 ;

    SNR=

    15 dB, the

    asymptotic BER given by (3) is roughly 11 0 0 0 0

    of the exact BER. In

    current practice, code-division multiple-access (CDMA) channels with

    m ; p

    between3 2

    and6 4

    are typical and in multiple-antenna systems

    arrays of 4 antennas are typical but arrays with 8 to 16 antennas would

    be feasible in the near future [9]. Therefore, it would be useful if

    0018-9448/$20.00 2006 IEEE

    Authorized licensed use limited to: HIGHER EDUCATION COMMISSION. Downloaded on January 22, 2009 at 00:25 from IEEE Xplore. Restrictions apply.

  • 7/30/2019 On the Distribution of SINR for the MMSE MIMO Receiver and Performance Analysis.pdf

    2/16

    272 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 1, JANUARY 2006

    one could compute error probabilities both efficiently and accurately.

    Another motivation to have accurate and simple BER expressions

    comes from system optimization designs [13], [14].

    This study addresses three aspects related to improving the accuracy

    in computing probability of errors. First, the known formulas for the

    asymptotic moments can be improved at small dimensions. Second,

    while the asymptotic BER (3) converges very slowly [4, p. 305], the

    accuracy can be improved by considering other asymptotically equiva-lent distributions using higher moments. Third, the presence of channel

    correlations may(seriously)affect themoments andBER. In fact, when

    the dimension increases, the effect of correlation tends to invalidate the

    independence assumption [15, p. 222]. To the best of our knowledge,

    the asymptotic SINR results in the existing literature do not take into

    account correlations. Mller [16] has commented that the presence of

    correlations makes the analysis very difficult.

    The channel model in (1) takes into account the (transmit) channel

    correlations. This model does not consider the receiver correlation and

    assumes a Gaussian channel (Rayleigh flat-fading), while [3] and [8]

    did not assume Gaussianity. Removing the Normality assumption will

    still result in the same first moment of SINR but different second mo-

    ment (see [3]). Ignoring thechannel correlation, however, may produce

    quite different results even for the first moment. For example, it will beshown later that, with equal power and constant correlation

    (equicor-

    relation model), the first moment is only about( 1 0 )

    times the first

    moment without correlation.

    Our work starts with a crucial observation: assuming a correlated

    channel model in (1), the SINR can be decomposed into two indepen-

    dent components

    SINRk

    =

    SINRZ F

    k

    + T

    k

    = S

    k

    + T

    k

    (4)

    whereSINR Z Fk

    , denoted byS

    k

    , is thecorresponding SINR for thezero-

    forcing (ZF) receiver, which, conditional onHH

    H , can be expressed as

    (e.g., [1], [3], [17])

    SINRZ Fk

    =

    1

    1

    m

    HH

    H

    y

    HH

    H

    0 1

    k k

    :

    (5)

    It will be shown thatS

    k

    has a Gamma distribution. BecauseS

    k

    and

    T

    k

    are independent, we can focus only onT

    k

    . AsS

    k

    is often the dom-

    inating component, separating outS

    k

    is expected to improve the ac-

    curacy of the approximation. For uncorrelated channels and channels

    with equicorrelation, we derive the first three asymptotic moments for

    T

    k

    (thus, for SINRk

    also), which match the simulations remarkably

    well for finite dimensions. For general correlated channels, some lim-

    iting upper bounds for the first three moments are proposed.

    We prove the asymptotic Normality ofT

    k

    for correlated channels

    under certain conditions. The proof of asymptotic Normality provides

    a rigorous basis for proposing approximate distributions. SinceT

    k

    is

    proved to converge to a nonrandom distribution (i.e., a Normal with

    nonrandom mean and variance), the same asymptotic results hold forthe unconditional SINR

    k

    , by dominatedconvergence. Thesubtle dif-

    ference between conditional and unconditional SINRk

    becomes im-

    portant if one would like to extend to random power distributions or

    random correlations (see [8]).

    As an alternative to (3), a well-known approximation to thetrue BER

    for binary modulations would be

    BERa

    4

    =

    1

    0

    1

    p

    x

    1

    p

    2

    e

    0

    d t d F

    S I N R

    ( x )

    (6)

    whereF

    S I N R

    is thecumulative distribution function (CDF) of SINRk

    .

    (6) converges to (3) as long as the asymptotic Normality holds. We

    propose using a Gamma and a generalized Gamma distribution to ap-

    proximateF

    S I N R

    . In particular, ifT

    k

    is approximated as a generalized

    Gamma random variable, combining the exact distribution ofS

    k

    , we

    are able to produce very accurate BER curves even form = 4

    .

    We will use binary signals as a test case for evaluating our methods.

    We assume binary frequency-shift keying (BFSK) for obtaining the

    same limiting BER (3) as in [2], [4], [8]. It should be clear that our

    methodology applies to other constellations such asM

    -QAM.

    The correspondence is organized as follows. Section II describes

    how to decompose SINRk

    =

    SINR

    Z F

    k

    + T

    k

    . Section III has the deriva-tion of the exact distribution of SINR Z Fk

    and its independence from

    T

    k

    . Section IV contains the derivation of the first three asymptotic mo-

    ments (or upper bounds on moments) ofT

    k

    . Asymptotic Normality of

    T

    k

    is proved in Section V. In Section VI, the Gamma and generalized

    Gamma distribution approximations are proposed. Section VII has a

    demonstration about how to apply our results on SINR in computing

    the probability of errors. Section VIII compares our theoretical results

    with Monte Carlo simulations.

    II. DECOMPOSITION OF SINRk

    LetAA

    A

    = II

    I

    p

    +

    1

    m

    HH

    H

    y

    HH

    H . Apply the following shifting operation:

    AA

    A

    = ( a

    1

    a

    2

    . . . a

    k

    . . . a

    p

    )

    s h i f t

    0 !

    AA

    A

    k k

    a

    y

    k ( 0 k )

    a

    k ( 0 k )

    AA

    A

    ( 0 k ; 0 k )

    =

    ~

    AA

    A (7)

    wherea

    k

    2

    p 2 1 stands for thek

    th column ofA ; a

    k ( 0 k )

    2

    ( p 0 1 ) 2 1

    isa

    k

    with thek

    th entry removed.AA

    A

    ( 0 k ; 0 k )

    2

    ( p 0 1 ) 2 ( p 0 1 ) isAA

    A with

    thek

    th column andk

    th row removed. Then

    [ AA

    A

    0 1

    ]

    k k

    = [

    ~

    AA

    A

    0 1

    ]

    1 1

    = AA

    A

    k k

    0a

    y

    k ( 0 k )

    ( AA

    A

    ( 0 k ; 0 k )

    )

    0 1

    a

    k ( 0 k )

    0 1

    :

    (8)

    From (8), it follows that (2) can be expressed as

    SINRk

    =

    1

    AA

    A

    k k

    0a

    y

    k ( 0 k )

    AA

    A

    ( 0 k ; 0 k )

    0 1

    a

    k ( 0 k )

    0 1

    01

    =

    1

    m

    h

    y

    k

    h

    k

    0

    1

    m

    2

    h

    y

    k

    HH

    H

    ( 0 k )

    II

    I

    p 0 1

    +

    1

    m

    HH

    H

    y

    ( 0 k )

    HH

    H

    ( 0 k )

    0 1

    HH

    H

    y

    ( 0 k )

    h

    k

    (9)

    whereh

    k

    2

    m 2 1 stands for thek

    th column ofHH

    H

    ; HH

    H

    ( 0 k )

    2

    m 2 ( p 0 1 ) isHH

    H with thek

    th column removed.

    Consider the singular value decomposition (SVD):HH

    H

    ( 0 k )

    =

    U D VU D V

    U D V

    y

    ; UU

    U

    2

    m 2 ( p 0 1 )

    ; DD

    D

    2

    ( p 0 1 ) 2 ( p 0 1 )

    ; VV

    V

    2

    ( p 0 1 ) 2 ( p 0 1 )

    ; UU

    U

    y

    UU

    U

    = II

    I

    p 0 1

    ; VV

    V

    y

    VV

    V

    = V VV V

    V V

    y

    = II

    I

    p 0 1

    . LetUU

    U

    c

    be the or-

    thogonal complement ofUU

    U , implying thatUU

    U

    y

    UU

    U

    c

    = 00

    0

    ( p 0 1 ) 2 ( m 0 p + 1 )

    ,

    and ~UU

    U

    = [ UU

    U

    UU

    U

    c

    ]

    satisfies ~UU

    U

    y

    ~

    UU

    U

    =

    ~

    UU

    U

    ~

    UU

    U

    y

    = II

    I

    m

    . Plug these into (9) to get

    SINRk

    =

    1

    m

    h

    y

    k

    ~

    UU

    U

    ~

    UU

    U

    y

    h

    k

    0

    1

    m

    2

    h

    y

    k

    U DU D

    U D

    2

    II

    I

    p 0 1

    +

    1

    m

    DD

    D

    2

    0 1

    UU

    U

    y

    h

    k

    :

    (10)

    Now, expand

    ~

    UU

    U

    y

    h

    k

    =

    UU

    U

    y

    h

    k

    UU

    U

    y

    c

    h

    k

    =

    t

    k

    s

    k

    ; t

    k

    2

    ( p 0 1 ) 2 1

    ; s

    k

    2

    ( m 0 p + 1 ) 2 1

    :

    Then (10) becomes

    SINRk

    =

    1

    m

    s

    y

    k

    s

    k

    +

    1

    m

    t

    y

    k

    II

    I

    p 0 1

    +

    1

    m

    DD

    D

    2

    0 1

    t

    k

    =

    1

    m

    m 0 p + 1

    i = 1

    ks

    k ; i

    k

    2

    +

    1

    m

    p 0 1

    i = 1

    kt

    k ; i

    k

    2

    1 +

    1

    m

    d

    2

    i

    = S

    k

    + T

    k

    (11)

    whered

    i

    is thei

    th diagonal entry ofDD

    D .

    Authorized licensed use limited to: HIGHER EDUCATION COMMISSION. Downloaded on January 22, 2009 at 00:25 from IEEE Xplore. Restrictions apply.

  • 7/30/2019 On the Distribution of SINR for the MMSE MIMO Receiver and Performance Analysis.pdf

    3/16

    IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 1, JANUARY 2006 273

    The same shifting and SVD operations on SINR Z Fk

    in (5) yields

    SINRZ Fk

    =

    1

    m

    s

    y

    k

    s

    k

    = S

    k

    :

    (12)

    This proves the decomposition SINRk

    =

    SINRZ Fk

    + T

    k

    . So far, no

    statistical properties ofHH

    H are used, hence, the decomposition is not

    restricted to Gaussian channels.

    In the casep > m

    , SINRk

    in (2) is still valid. The same SVD yields

    SINRk

    =

    1

    m

    m

    i = 1

    k t

    k ; i

    k

    2

    1 +

    1

    m

    d

    2

    i

    = T

    k

    (13)

    becauseHH

    H

    ( 0 k )

    has onlym

    nonzero singular values whenp > m

    . The

    results from random matrix theory that we will use later also apply to

    p > m

    . However, in this study, we restrict our attention top m

    .

    III. DISTRIBUTIONS OFS

    k

    ANDt

    k

    The distribution ofS

    k

    and its relationship withT

    k

    can be obtained

    from the properties of the multivariate Normal distribution (see [18,

    Ch.3]). Here,RR

    R

    = PP

    P

    RR

    R

    t

    PP

    P , which is assumed to be positive definite,

    can be considered as the generalized covariance matrix.

    We denote a Normal distribution as N ( m e a n ; V a r ) , and a complexNormal distribution as

    C N ( m e a n ; V a r )

    . Thus, each row ofHH

    H

    WW

    W

    C N ( 0 ; I0 ; I

    0 ; I

    p

    )

    , and each row ofHH

    H

    C N ( 0 ; R0 ; R

    0 ; R

    )

    .

    Observe that

    6

    k

    4

    =

    1

    [ RR

    R

    0 1

    ]

    k k

    =

    ~c

    k

    [ RR

    R

    tt

    t

    0 1

    ]

    k k

    = r

    k k

    0 r

    y

    k ( 0 k )

    ( RR

    R

    ( 0 k ; 0 k )

    )

    0 1

    r

    k ( 0 k )

    (14)

    wherer

    k k

    is the( k ; k )

    th element ofRR

    R ,r

    k ( 0 k )

    is thek

    th column ofRR

    R

    with thek

    th element removed,RR

    R

    ( 0 k ; 0 k )

    isRR

    R with thek

    th row andk

    th

    column removed,[ RR

    R

    0 1

    ]

    k k

    is the( k ; k )

    th entry ofRR

    R

    0 1 .

    Conditional onHH

    H

    ( 0 k )

    ; h

    k

    ; t

    k

    ;

    ands

    k

    are all Normally distributed

    h

    k

    j HH

    H

    ( 0 k )

    0 HH

    H

    ( 0 k )

    ( RR

    R

    ( 0 k ; 0 k )

    )

    0 1

    r

    k ( 0 k )

    + C N ( 0 0

    0

    ; 6

    k

    II

    I

    m

    )

    (15)

    t

    k

    = UU

    U

    y

    h

    k

    j HH

    H

    ( 0 k )

    0 D VD V

    D V

    y

    ( RR

    R

    ( 0 k ; 0 k )

    )

    0 1

    r

    k ( 0 k )

    + C N ( 0 0

    0

    ; 6

    k

    II

    I

    p 0 1

    )

    (16)

    s

    k

    = UU

    U

    y

    c

    h

    k

    j HH

    H

    ( 0 k )

    C N ( 0 0

    0

    ; 6

    k

    II

    I

    m 0 p + 1

    ) :

    (17)

    (Recall thatHH

    H

    ( 0 k )

    = U D VU D V

    U D V

    y

    ; UU

    U

    y

    HH

    H

    ( 0 k )

    = D VD V

    D V

    y

    ; UU

    U

    y

    c

    HH

    H

    ( 0 k )

    =

    UU

    U

    y

    c

    U D VU D V

    U D V

    y

    = 00

    0 .)

    We state our first lemma.

    Lemma 1: SINRZ Fk

    = S

    k

    is a Gamma random variable,S

    k

    G ( m 0 p + 1 ;

    1

    m

    6

    k

    )

    .S

    k

    is independent ofT

    k

    .

    Proof: Recall that

    S

    k

    =

    1

    m

    m 0 p + 1

    i = 1

    k s

    k ; i

    k

    2

    :

    The independence ofS

    k

    andT

    k

    follows from (16) and (17). Condi-

    tional distribution ofs

    k

    givenHH

    H

    ( 0 k )

    is independent ofHH

    H

    ( 0 k )

    . There-

    fore, (17) is theunconditional distribution ofs

    k

    . Theelementsofs

    k

    are

    independent and identically distributed (i.i.d.)C N ( 0 0

    0

    ; 6

    k

    )

    , and there-

    fore

    S

    k

    G ( m 0 p + 1 ;

    1

    m

    6

    k

    ) :

    Gore et al. [19] proved that SINR Z Fk

    is a Gamma random variable

    for the equal power case. Both Mller et al. [17] and Tse and Zeitouni

    [3] used a Beta distribution to approximate SINR Z Fk

    .

    Fig. 1. The ratiosE (

    T

    S

    )

    , obtained from1 0

    5 simulations, with respect top

    m

    ,for two (equal power) SNR levels (

    c =

    0 dB, 30 dB), and three levels ofequicorrelations ( = 0 ; 0 : 5 ; 0 : 8 ) .

    The first three moments ofS

    k

    are

    E ( S

    k

    ) =

    m 0 p + 1

    m

    6

    k

    V a r ( S

    k

    ) =

    m 0 p + 1

    m

    2

    6

    2

    k

    S k ( S

    k

    ) = E ( S

    k

    0 E ( S

    k

    ) )

    3

    = 2

    m 0 p + 1

    m

    3

    6

    3

    k

    :

    (18)

    The following lemma concerns the distribution oft

    k

    .

    Lemma 2: Conditional on HHH( 0 k )

    ; k t

    k ; i

    k

    2 is a noncentral

    Chi-squared random variable with a moment generating function

    (MGF)

    E e x p ( k t

    k ; i

    k

    2

    y ) j HH

    H

    ( 0 k )

    =

    1

    1 0 y 6

    k

    e x p

    y k z

    i

    k

    2

    1 0 y 6

    k

    (19)

    wherez

    i

    = 0 d

    i

    ( VV

    V

    y

    RR

    R

    0 1

    ( 0 k ; 0 k )

    r

    k ( 0 k )

    )

    i

    . For uncorrelated channels,

    k t

    k ; i

    k

    2

    G ( 1 ; ~c

    k

    ) , independent of HH H( 0 k )

    .

    Proof: From (16), tk ; i

    j HH

    H

    ( 0 k )

    C N ( z

    i

    ; 6

    k

    )

    . For uncorrelated

    channels,z

    i

    = 00

    0 and6

    k

    = ~c

    k

    hencek t

    k ; i

    k

    2

    G ( 1 ; ~c

    k

    )

    . In general,

    conditional onHH

    H

    ( 0 k )

    , 26

    k t

    k ; i

    k

    2 is a standard noncentral Chi-squared

    random variablewith two degrees of freedom and noncentrality param-

    eter 26

    k z

    i

    k

    2 , which gives the MGF in (19).

    For future use, we give expressions for three moments ofk t

    k ; i

    k

    2 ,

    conditional onHH

    H

    ( 0 k )

    E ( k t

    k ; i

    k

    2

    j HH

    H

    ( 0 k )

    ) = 6

    k

    + k z

    i

    k

    2

    V a r ( k t

    k ; i

    k

    2

    j HH

    H

    ( 0 k )

    ) = 6

    2

    k

    + 2 k z

    i

    k

    2

    6

    k

    S k ( k t

    k ; i

    k

    2

    j HH

    H

    ( 0 k )

    ) = 2 6

    3

    k

    + k z

    i

    k

    2

    6

    2

    k

    :

    (20)

    The significance of the decomposition of SINRk

    can be seen fromFig. 1, which plots

    E (

    T

    S

    )

    as a function ofp ; c

    (SNR), and correlation

    ,

    form = 1

    . Here we consider equal powerc

    k

    = c

    case withRR

    R

    t

    being

    an equicorrelation matrix (i.e.,RR

    R

    t

    consists of1

    s in the main diagonal

    and

    s in alloff diagonal entries). This figure suggests that in themajor

    range of SNR and pm

    ; S

    k

    might be the dominating component.

    IV. ASYMPTOTIC MOMENTS

    This section derives the asymptotic moments ofT

    k

    , written as

    T

    k

    =

    1

    m

    p 0 1

    i = 1

    k t

    k ; i

    k

    2

    1 +

    1

    m

    d

    2

    i

    =

    1

    m

    p 0 1

    i = 1

    k t

    k ; i

    k

    2

    i

    =

    1

    m

    t

    y

    k

    33

    3

    t

    k

    (21)

    where

    i

    =

    1

    1 + d

    ; 33

    3

    = ( II

    I

    p 0 1

    +

    1

    m

    DD

    D

    )

    0 1

    = d i a g [

    1

    ; . . . ;

    p 0 1

    ]

    .

    Authorized licensed use limited to: HIGHER EDUCATION COMMISSION. Downloaded on January 22, 2009 at 00:25 from IEEE Xplore. Restrictions apply.

  • 7/30/2019 On the Distribution of SINR for the MMSE MIMO Receiver and Performance Analysis.pdf

    4/16

    274 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 1, JANUARY 2006

    We use some known results for the empirical eigenvalue distribution

    (ESD) of theproduct of tworandom matrices (e.g.,Silverstein[20], Bai

    [21], Silversteinand Bai[22]),to study theasymptotic properties ofT

    k

    .

    We work under the regime:p

    ! 1; m

    ! 1;

    p 0 1

    m

    !

    2( 0 ; )

    . In

    the rest of the correspondence, whenever we mention in the limit, we

    refer to this condition.

    Suppose that f

    i

    g

    p 0 1

    i = 1

    are theeigenvalues ofRR

    R

    ( 0 k ; 0 k )

    . Suppose fur-

    ther that the ESD ofRRR = PPP R t PR t PR

    t

    P

    converges asp

    ! 1

    to a non-random measure

    F

    R . By Theorem 1.1 of [20], under some weak reg-

    ularity conditions onF

    R (including that the support ofF

    R is compact

    and does not contain0

    ), in the limit, the ESD of 1m

    HH

    H

    y

    ( 0 k )

    HH

    H

    ( 0 k )

    , de-

    noted by J

    , converges to a measureJ

    , whose Stieltjes transform, de-

    noted byM

    J

    , satisfies

    M

    J

    ( z )

    4

    =

    x

    0z

    J ( d x ) =

    d F

    R

    ( )

    (

    0

    0 z M

    J

    ( z ) )

    0z

    (22)

    (see [8]). For finitep

    , the last integral in (22) can be approximated by

    p

    0

    p 0 1

    i = 1

    i

    (

    0

    0 z M

    J

    ( z ) )

    0z

    :

    Note that for uncorrelated channels,

    i

    = ~c

    i

    . In general, (22) requires

    thatz

    2

    +

    =

    fz

    2: I m ( z ) > 0

    g , andM

    J

    ( z )

    is the unique solu-

    tion in fM

    2:

    0

    1 0

    z

    + M

    2

    +

    g . However, sinceM

    J

    (

    0 )

    0

    and sinceRR

    R is positive definite (implying that all

    i

    s are positive), and

    we only consider

    , it follows that 1 ( 1 0 0 z M ( z ) ) 0 z

    and its

    derivatives are bounded in a complex neighborhood ofz =

    0

    , hence

    M

    J

    ( z )

    and its derivatives are well defined atz =

    0

    by the bounded

    convergence theorem.

    Therefore,

    t r ( 3 3

    3

    )

    p

    0

    =

    p

    0

    p 0 1

    i = 1

    +

    1

    m

    d

    2

    i

    =

    + x

    J ( d x )

    =

    x

    0(

    0 )

    J

    (

    d x

    )

    p

    0 !M

    J

    (

    0 )

    4

    =

    ; c

    ;

    (23)

    t r ( 3 3

    3

    2

    )

    p

    0

    =

    p

    0

    p 0 1

    i = 1

    +

    1

    m

    d

    2

    i

    2

    (24)

    =

    ( x

    0(

    0 ) )

    2

    J ( d x )

    p

    0 !M

    J

    (

    0 )

    4

    =

    2

    ; c

    ;

    t r ( 3 3

    3

    3

    )

    p

    0

    =

    p

    0

    p 0 1

    i = 1

    +

    1

    m

    d

    2

    i

    3

    =

    ( x

    0(

    0 ) )

    3

    J ( d x )

    p

    0 !

    2

    M

    J

    (

    0 )

    4

    =

    ; c

    (25)

    where p

    0 ! denotes converge in probability, andM

    J

    ( z )

    and

    M

    J

    ( z )

    are the first and second derivatives ofM

    J

    ( z )

    , respectively. In

    general, MJ

    ( z ) ; M

    J

    ( z ) , and M J

    ( z ) have to be solved numerically

    except in some simple cases (e.g., an uncorrelated channel with equal

    powers).

    Let1

    i

    ( z ) =

    i

    (

    0

    0 z M

    J

    ( z ) )

    0z

    .M

    J

    ( z )

    , andM

    J

    can be

    approximated by solving

    M

    J

    ( z )

    0

    p

    0

    p 0 1

    i = 1

    i

    z

    1

    i

    ( z )

    2

    =

    p

    0

    p 0 1

    i = 1

    i

    M

    J

    ( z ) +

    1

    i

    ( z )

    2

    M

    J

    ( z )

    0

    p

    0

    p 0 1

    i = 1

    i

    z

    1

    i

    ( z )

    2

    =

    2

    p

    0

    p 0 1

    i = 1

    i

    M

    J

    ( z )

    1

    i

    ( z )

    2

    +

    2

    p

    0

    p 0 1

    i

    = 1

    (

    i

    M

    J

    ( z ) +

    i

    z M

    J

    ( z ) + )

    2

    1

    i

    ( z )

    3

    :

    M

    J

    ( z )

    and its derivatives are not sufficient for computing the mo-

    ments ofT

    k

    , which involve kt

    k ; i

    k

    2 . We will combine the conditional

    moments ofkt

    k ; i

    k

    2 in (20) withM

    J

    ( z )

    to get the asymptotic moments

    ofT

    k

    .

    Three cases are considered in the following three subsections. First,

    we consider uncorrelated channels with unequal powers and obtain the

    asymptotic moments ofT

    k

    explicitly, although the results cannot be

    expressed in closed form. Next, we show that, assuming equal powers,the asymptotic moments ofT

    k

    for uncorrelated channels have closed-

    form expressions. Finally, for general correlated channels with unequal

    powers, we derive some limiting upper bounds for the moments ofT

    k

    and give some sufficient conditions under which these upper bounds

    are the exact limits.

    A. Asymptotic Moments ofT

    k

    for Uncorrelated Channels

    In this case,RR

    R

    t

    = II

    I

    p

    ; RR

    R

    = PP

    P

    ; 6

    k

    = ~c

    k

    = c

    k

    m

    p

    ;

    r

    k ( 0 k )

    = 0 ;

    kz

    i

    k

    2

    = d

    2

    i

    k( V

    y

    RR

    R

    0 1

    ( 0 k ; 0 k )

    r

    k ( 0 k )

    )

    i

    k

    2

    = 0

    , and

    kt

    k ; i

    k

    2

    G ( ; ~c

    k

    )

    , as shown in Lemma 2.

    The following three lemmas are proved in Appendix I.

    Lemma 3:

    E

    p

    p

    0

    T

    k

    !c

    k

    ; c

    4

    = c

    k

    M

    J

    (

    0 ) :

    (26)

    Lemma 4:

    V a r

    p

    p

    p

    0

    T

    k

    !c

    2

    k

    2

    ; c

    4

    = c

    2

    k

    M

    J

    (

    0 ) :

    (27)

    Lemma 5:

    p

    0

    S k ( p T

    k

    )

    !2 c

    3

    k

    ; c

    4

    = c

    3

    k

    M

    J

    (

    0 ) :

    (28)

    Therefore, asymptotically, the first three moments ofT

    k

    can be ap-

    proximated as

    E ( T

    k

    )

    c

    k

    p

    0

    p

    ; c

    (29)

    V a r ( T

    k

    )

    c

    2

    k

    p

    0

    p

    2

    2

    ; c

    (30)

    S k ( T

    k

    )

    2 c

    3

    k

    p

    0

    p

    3

    ; c

    :

    (31)

    Combining with the moments ofS

    k

    in (18), and using independence

    ofS

    k

    andT

    k

    , we have

    E (

    SINRk

    )

    c

    k

    m

    0p +

    p

    + c

    k

    p

    0

    p

    ; c

    (32)

    V a r (

    SINRk

    )

    c

    2

    k

    m

    0p +

    p

    2

    + c

    2

    k

    p

    0

    p

    2

    2

    ; c

    (33)

    S k (

    SINRk

    )

    2 c

    3

    k

    m

    0p +

    p

    3

    + 2 c

    3

    k

    p

    0

    p

    3

    ; c

    :

    (34)

    B. Closed-Form Asymptotic Moments ofT

    k

    for Uncorrelated

    Channels With Equal Powers

    In this case,HH

    H

    =

    p

    ~c HH

    H

    WW

    W

    . The ESD of 1m

    HH

    H

    y

    ( 0 k )

    HH

    H

    ( 0 k )

    converges

    to the well-known MarcenkoPastur Law ([21, Theorem 2.5]), from

    which one can derive closed-form expressions for the moments by (te-

    dious) integration. Alternatively, we directly solve forM

    J

    (

    0 )

    from

    the simplified version (i.e., taking

    i

    = ~c

    for alli

    ) of (22) to get

    ; c

    = M

    J

    (

    0 ) =

    0( ~c (

    0 ) + )

    2 ~c

    (35)

    Authorized licensed use limited to: HIGHER EDUCATION COMMISSION. Downloaded on January 22, 2009 at 00:25 from IEEE Xplore. Restrictions apply.

  • 7/30/2019 On the Distribution of SINR for the MMSE MIMO Receiver and Performance Analysis.pdf

    5/16

    IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 1, JANUARY 2006 275

    Fig. 2. The ratios~

    ~

    and~

    ~

    , with respect tom

    andp = m

    , for SNR

    =

    20 dB.

    where = ~c

    2

    ( 1 0 )

    2

    + 2 ~c ( 1 + ) + 1

    . Similarly

    2

    ; c

    = M

    0

    J

    (

    0 1 ) =

    ; c

    0

    1 + ~c ( 1 + ) 0

    2 ~c

    (36)

    ; c

    =

    1

    2

    M

    0 0

    J

    ( 0 1 ) =

    2

    ; c

    0

    ~c

    3

    :

    (37)

    From1 + ~c ( 1 + )

    , it follows that

    ; c

    2

    ; c

    ; c

    :

    (38)

    We always replace

    by p 0 1m

    in our computations. In the literature

    (e.g., [3], [4], [8]), =

    p

    m

    is often used. It can be shown that the

    choice of

    can have a significant impact for small dimensions.1 Our

    approximate formula (32) forE (

    SINRk

    )

    (denoted as~

    ) is

    ~ = c

    m

    0p + 1

    p

    + c

    p

    01

    p

    ; c

    = ~c

    0

    1

    4

    p

    01

    m

    1

    ~c ( 1 +

    p

    )

    2

    + 1

    0~c ( 1

    0

    p

    )

    2

    + 1

    2

    :

    (39)

    When

    is replaced with p 0 1m

    , the corresponding~

    is denoted by

    ~

    p 0 1

    . If instead

    is replaced with pm

    , it is denoted by~

    p

    . We can

    compare~

    with the well-known asymptotic first moment (e.g., [3], [4,

    eq. (6.59)]), denoted by~

    R

    ;

    ~

    R

    = ~c

    0

    1

    4

    ( ~c ( 1 +

    p

    )

    2

    + 1

    0~c ( 1

    0

    p

    )

    2

    + 1 )

    2

    :

    (40)

    Similarly,~

    R ; p 0 1

    and~

    R ; p

    indicate whether =

    p 0 1

    m

    or =

    p

    m

    is

    used for~

    R

    . Obviously,~

    R ; p 0

    1

    = ~

    p 0

    1

    .Fig. 2 plots

    ~

    ~

    and~

    ~

    as functions ofm

    and pm

    . It is clear

    that the difference between~

    p 0 1

    and~

    R ; p

    at small dimensions can be

    substantial. For example, whenm = 4 ; p = 4 ;

    ~

    ~

    is almost3

    . On

    the other hand, the difference between~

    p 0 1

    and~

    p

    is not significant.

    This experiment implies that our proposed approximate formulas are

    not only accurate (see simulation results in Section VIII) but also not

    as sensitive to the choice of

    .

    The following lemma compares~

    with~

    R

    algebraically.

    Lemma 6: ~p

    ~

    p 0 1

    = ~

    R ; p 0 1

    ~

    R ; p

    .

    Proof: See Appendix II.

    1Note that in our notations, ~c = c mp

    is equivalent to A

    in [4, eq. (6.59)]

    and equivalent to P

    in [3].

    Similar results can be obtained if we compare our approximate vari-

    ance formula (33) with the asymptotic variance given in [3]. See the

    technical report [23] for details.

    C. Asymptotic Upper Bounds for Correlated Channels

    Recall that

    E (

    kt

    k ; i

    k

    2

    kHH

    H

    ( 0 k )

    ) = 6

    k

    +

    kz

    i

    k

    2

    where

    kz

    i

    k

    2

    = d

    2

    i

    k( V

    y

    ( RR

    R

    ( 0 k ; 0 k )

    )

    0 1

    r

    k ( 0 k )

    )

    i

    k

    2

    :

    There is an important inequality

    p 0 1

    i = 1

    1

    m

    kz

    i

    k

    2

    i

    =

    p 0 1

    i = 1

    1

    m

    kz

    i

    k

    2

    1 +

    1

    m

    d

    2

    i

    p 0 1

    i = 1

    V

    y

    ( RR

    R

    ( 0 k ; 0 k )

    )

    0 1

    r

    k ( 0 k )

    i

    2

    =

    kV

    y

    RR

    R

    ( 0 k ; 0 k )

    )

    0

    1

    r

    k ( 0 k )

    k

    2

    =

    kRR

    R

    ( 0 k ; 0 k )

    )

    0 1

    r

    k ( 0 k )

    k

    2

    4

    = e

    k

    :

    (41)

    Some limiting upper bounds for the first three moments ofT

    k

    are

    provided in the next lemma.

    Lemma 7:

    E ( T

    k

    )

    U

    =

    p

    01

    m

    6

    k

    M

    J

    (

    01 ) + e

    k

    (42)

    V a r ( T

    k

    )

    U

    =

    p

    01

    m

    2

    6

    2

    k

    M

    J

    (

    01 ) +

    1

    m

    6

    k

    1

    2

    + 2

    2

    e

    k

    + e

    2

    k

    (43)

    S k ( T

    k

    )

    U

    =

    p

    01

    m

    3

    6

    3

    k

    M

    J

    (

    01 ) +

    8 = 9

    m

    2

    6

    2

    k

    e

    k

    +

    m

    2

    6

    2

    k

    1

    e

    k

    + e

    3

    k

    +

    3

    2

    e

    k

    6

    k

    1

    m

    e

    k

    + 2

    1

    m

    6

    k

    2

    e

    k

    + 2

    1

    m

    6

    k

    2

    (44)

    where

    and

    are the same constants in the proofs of Lemmas 4 and 5.

    Proof: See Appendix III.

    We are interested in the special case wheree

    k

    !0

    . Ife

    k

    !0

    at

    any rate (faster thanO ( p

    0

    )

    or faster thanO ( p

    0

    )

    ), ignoring all the

    terms involvinge

    k

    ; E ( T

    k

    )

    U (V a r ( T

    k

    )

    U orS k ( T

    k

    )

    U ) is still the true

    limit. Under these conditions, we propose the approximate moments

    for SINRk

    E ( SINRk

    )

    m

    0p + 1

    m

    6

    k

    +

    p

    01

    m

    6

    k

    ; c

    + e

    k

    (45)

    V a r (

    SINRk

    )

    m

    0p + 1

    m

    2

    6

    2

    k

    +

    p

    01

    m

    2

    6

    2

    k

    2

    ; c

    + e

    2

    k

    (46)

    S k (

    SINRk

    )

    2

    m

    0p + 1

    m

    3

    6

    3

    k

    + 2

    p

    01

    m

    3

    6

    3

    k

    ; c

    + e

    3

    k

    :

    (47)

    Note thate

    k

    ; e

    2

    k

    ; e

    3

    k

    are retained in these expressions because simula-

    tion studies show that keeping these terms improves accuracy at very

    small dimensions.

    It turns outthat in theequicorrelation situation,e

    k

    !0

    ata good rate

    under mild regularity conditions. The equicorrelation is the simplest

    correlation model (e.g., [15, eq. (4.41)]) and is often used to model

    closely spaced antennas or for the worst case analysis [24]. For more

    realistic correlation models, see Narasimhan [25].

    Authorized licensed use limited to: HIGHER EDUCATION COMMISSION. Downloaded on January 22, 2009 at 00:25 from IEEE Xplore. Restrictions apply.

  • 7/30/2019 On the Distribution of SINR for the MMSE MIMO Receiver and Performance Analysis.pdf

    6/16

    276 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 1, JANUARY 2006

    Lemma 8: If the correlation matrix RR Rt

    consists of1

    s in the main

    diagonal and

    s in all off-diagonal entries,2 thene

    k

    !0

    in the limit,

    as long asi 6= k

    c

    c

    = O ( p

    f

    )

    forf

  • 7/30/2019 On the Distribution of SINR for the MMSE MIMO Receiver and Performance Analysis.pdf

    7/16

    IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 1, JANUARY 2006 277

    In this section, the same conditions are assumed as in proving the

    asymptotic Normality ofT

    k

    . That is,e

    k

    ! 0

    faster thanO ( p

    0

    )

    .

    When this condition is not satisfied, we do not have a rigorous asymp-

    totic result.

    In this section, the symbol

    is also used to denote the approximate

    distributions.

    A. Normal ApproximationAsymptotic Normality of

    T

    k

    implies that

    T

    k

    N

    p 0 1

    m

    6

    k

    ; c

    ;

    p 0 1

    m

    2

    6

    2

    k

    2

    ; c

    :

    (55)

    Also, the asymptotic Normality of SINRk

    means that

    SINRk

    N

    m 0 p + 1

    m

    6

    k

    +

    p 0 1

    m

    6

    k

    ; c

    ;

    m 0 p + 1

    m

    2

    6

    2

    k

    +

    p 0 1

    m

    2

    6

    2

    k

    2

    ; c

    :

    (56)

    B. Gamma Approximation

    T

    k can be approximated by a Gamma random variable G ( T ; T )

    whose parameters are determined by solving

    E ( T

    k

    ) =

    T

    T

    =

    p 0 1

    m

    6

    k

    ; c

    V a r ( T

    k

    ) =

    T

    2

    T

    =

    p 0 1

    m

    2

    6

    2

    k

    2

    ; c

    :

    The Gamma approximation ofT

    k

    is therefore,

    T

    k

    G

    ( p 0 1 )

    2

    ; c

    2

    ; c

    ;

    1

    m

    6

    k

    2

    ; c

    ; c

    :

    (57)

    According to the Gamma approximation, the third central moment of

    T

    k

    should be

    2

    T

    3

    T

    = 2

    p 0 1

    m

    3

    6

    3

    k

    4

    ; c

    ; c

    :

    (58)

    We can also approximate SINRk

    by a Gamma distributionG ( ; )

    ,

    again by matching the first two moments

    SINRk

    G

    ( m 0 p + 1 + ( p 0 1 )

    ; c

    )

    2

    m 0 p + 1 + ( p 0 1 )

    2

    ; c

    ;

    1

    m

    6

    k

    m 0 p + 1 + ( p 0 1 )

    2

    ; c

    m 0 p + 1 + ( p 0 1 )

    ; c

    :

    (59)

    The third central moment of SINRk

    then would be

    2

    m

    3

    6

    3

    k

    m 0 p + 1 + ( p 0 1 )

    2

    ; c

    2

    m 0 p + 1 + ( p 0 1 )

    ; c

    :

    (60)

    DefineR S

    to be the ratio of the third central moment of the approxi-

    mated Gamma distribution to the asymptotic third central moment. For

    T

    k

    this becomes

    R S

    T

    =

    4

    ; c

    ; c

    ; c

    :

    (61)

    For SINRk

    ;

    R S

    S I N R

    =

    1 0

    p 0 1

    m

    +

    p 0 1

    m

    2

    ; c

    2

    1 0

    p 0 1

    m

    +

    p 0 1

    m

    ; c

    1 0

    p 0 1

    m

    +

    p 0 1

    m

    ; c

    :

    (62)

    R S

    T

    andR S

    S I N R

    areindicators of how well theGammaapproxima-

    tion captures the skewness of the distribution ofT

    k

    . Ideally, we would

    Fig. 3. R ST

    and R SS I N R

    for selected SNR levels, over the whole rangeof , for equal power uncorrelated channels.

    likeR S = 1

    . It will be shown later thatR S

    is also critical for the gen-

    eralized Gamma approximation.

    The following inequalities hold for uncorrelated channels with equal

    powers.

    Lemma 10: Assuming equal powers and no correlations

    R S

    T

    1

    (63)

    R S

    S I N R

    1

    for anyp m

    (64)

    R S

    T

    R S

    S I N R

    in the limit:

    (65)

    Proof: See Appendix V.

    Fig. 3 plots some examples ofR S

    T

    andR S

    S I N R

    , for the equal

    power uncorrelated cases.

    C. Generalized Gamma Approximation

    A generalized Gamma distribution can be described by a stable law

    or an infinite divisible distribution [27], [28], [26, Chs. 2.7, 2.8], whichinvolves the sum of i.i.d. sequence of random variables. In our case,

    conditionally,T

    k

    is a weighted sum of non-i.i.d. random variables,

    henceT

    k

    is not exactly a stable law nor infinite divisible.

    The Gamma approximation ofT

    k

    can be generalized by introducing

    an additional parameter to the original Gamma distribution [27], [28],

    i.e., assumingT

    k

    G (

    T

    ;

    T

    ;

    T

    )

    . The regular Gamma distribu-

    tion is a special case with

    T

    = 1

    . Assuming a generalized Gamma

    distribution, the first three moments ofT

    k

    would be

    E ( T

    k

    ) =

    T

    T

    V a r ( T

    k

    ) =

    T

    2

    T

    S k ( T

    k

    ) = (

    T

    + 1 )

    T

    3

    T

    :

    (66)

    Equating these moments with the asymptotic moments ofT

    k

    will

    lead to the same

    T

    and

    T

    as for the Gamma approximation. The

    third parameter

    T

    will be

    T

    =

    2

    R S

    T

    0 1 :

    (67)

    Similarly, we can also generalize the Gamma approximation of

    SINRk

    by assuming SINRk

    G ( ; ; )

    . The third parameter will

    be

    =

    2

    R S

    S I N R

    0 1 :

    (68)

    Authorized licensed use limited to: HIGHER EDUCATION COMMISSION. Downloaded on January 22, 2009 at 00:25 from IEEE Xplore. Restrictions apply.

  • 7/30/2019 On the Distribution of SINR for the MMSE MIMO Receiver and Performance Analysis.pdf

    8/16

    278 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 1, JANUARY 2006

    When > 1

    , the generalized Gamma distribution with these param-

    eter does not have an explicit density in general. However, it can be

    described in terms of the stable law and has a closed-form MGF [27],

    [28]

    MGF( s ; G ( ; ; ) ) = e x p

    0 1

    1 0 ( 1 0 s ) :

    (69)

    When 1

    and > 1

    for uncorrelated

    equal power channels.

    VII. ANALYSIS OF THE PROBABILITY OF ERROR

    Computation of the probability of errors using the distribution of

    SINRk

    is a way of measuring how successfully the proposed distribu-

    tion approximatesthe truth. Forthis purpose, we will compute theBER

    using (6), which is equivalent to the BFSK BER. It should be straight-

    forward to apply our methods to other types of (nonbinary) constella-

    tions.

    To simplify the notations, the subscriptk

    in BERk

    will be dropped

    for the rest of the correspondence. In this section, we will provide BER

    formulas under the Gamma and the generalized Gamma approxima-

    tions, and compare these results with the exact formula for BER, de-

    noted by BERe

    , given in [4, eq. (6.47)] or [7].

    If the asymptotic Normality of SINRk

    holds, as in Corollary 2, then

    BERa

    4

    =

    1

    1

    p

    x

    1

    p

    2

    e

    0

    d t d F SINR ( x )

    ! BER1

    4

    = Q E (

    SINRk

    )

    1

    :

    (71)

    For uncorrelated channels with equal powers

    E ( SINRk ) 1 =c

    2

    ( 1

    0 )

    2

    + 2 c ( 1 + ) +

    2

    + c ( 1

    0 )

    0

    2

    (72)

    by treating p 0 1m

    =

    p

    m

    =

    . However, for correlated channels, since

    it is not convenient to computeE (

    SINRk

    )

    1

    , we will use the our fi-

    nite-dimensional moment formula to computeE (

    SINRk

    )

    1

    , which is

    different for differentp

    andm

    .

    Next, we will derive a variety of BER formulas corresponding to

    various approximation schemes, based on BFSK, which can be easily

    generalized to other types of constellations.

    A. BER by Gamma Approximation on SINRk

    Denote the BER computed using the Gamma approximation by

    BERg

    . Integration of (6) by parts yields

    BERg

    =

    1

    1

    p

    2

    e

    0

    F

    S I N R

    ( x )

    1

    2

    x

    0

    d x :

    (73)

    ReplacingF

    S I N R

    ( x )

    withF

    ;

    ( x )

    , the CDF of the Gamma distri-

    butionG ( ; )

    as defined in (59), and using the results from integral

    tables [29], we have

    BERg

    =

    1

    1

    p

    2

    e

    0

    F

    ;

    ( x )

    1

    2

    x

    0

    d x

    =

    1

    2 0 ( )

    p

    2

    1

    x

    0

    e

    0

    0 ;

    x

    d x

    =

    1

    2 0 ( )

    p

    2

    0 ( 1 = 2 + ) ( 1 = )

    ( 1 = 2 + 1 = )

    1 = 2 +

    2

    2

    F

    1

    1 ; 1 = 2 + ; + 1 ;

    1 =

    1 = 2 + 1 =

    (74)

    where thegamma function0 ( y ) =

    1

    t

    y 0 1

    e

    0 t

    d t

    , andthe incomplete

    gamma function0 ( ; y ) =

    y

    t

    0 1

    e

    0 t

    d t

    , and2

    F

    1

    (

    1)

    is the hyper-

    geometric function

    2

    F

    1

    1 ; 1 = 2 + ; + 1 ;

    1 =

    1 = 2 + 1 =

    =

    1

    n =

    0 ( 1 = 2 + + n )

    0 ( 1 = 2 + )

    0 ( + 1 + n )

    0 ( + 1 )

    1 =

    1 = 2 + 1 =

    n

    n

    !

    :

    (75)

    B. BER by Generalized Gamma Approximation on SINRk

    According to the results of [30], [31, Ch. 9.2.3], under the gener-

    alized Gamma approximation on SINRk

    , the BFSK BER (denoted as

    BERg g

    ) can be expressed in terms of the MGF as

    BERg g

    =

    1

    MGF 01

    2 s i n

    2

    ; G ( ; ; ) d

    =

    1

    e x p

    01

    1

    01 +

    2 s i n

    2

    d

    (76)

    which is for > 1

    and can be evaluated numerically. We can similarly

    write down BERg g

    for 1

    , we can write down the

    MGF for SINRk

    = S

    k

    + T

    k

    as

    MGFs ; G m

    0p + 1 ;

    6

    k

    m

    + G (

    T

    ;

    T

    ;

    T

    )

    =

    1

    1

    0

    6

    m

    s

    m 0 p + 1

    2e x p

    T

    T

    01

    1

    0( 1

    0

    T

    T

    s ) :

    (77)

    The corresponding BER (denoted as BERg + g g

    ) would be

    BERg + g g

    =

    1

    1

    1 +

    6

    2 m s i n

    m 0 p + 1

    2e x p

    T

    T

    01

    1

    01 +

    T

    T

    2 s i n

    2

    d

    (78)

    which has to be evaluated numerically.

    VIII. SIMULATIONS

    Our simulations considerm = 4 ( p = 2 ; 4 )

    , andm = 1 ( p =

    8 ; 1 )

    , for theequalpowercase.Both uncorrelated and correlated (with

    equicorrelation = : 5

    ) channels are tested. The range of SNRs is

    c = dB3 dB. Without loss of generality, the first stream (i.e., k =

    1

    ) is alwaysassumed.HH

    H

    WW

    W

    is sampled1

    times for every combination

    of (m ; p ; c

    , and

    ) for computing the empirical moments, distributions,

    and BER, except when computing the exact BERe

    form = 1 ; HH

    H

    WW

    W

    is only sampled1

    5 times.

    A. Moments

    The theoretical moments are computed using (45), (46), and (47).

    Fig. 4 plots the first three moments ofT

    k

    computed both theoretically

    and empirically from simulations. Form = 1

    , the theoretical mo-

    ments match the simulations very well, especially the first moment.

    Whenm = 4

    , the curves for the first moment are still quite accurate

    except for the correlated cases at very small SNRs. Note that due to

    thel o g

    scale, the errors at small SNRs are largely exaggerated in the

    Authorized licensed use limited to: HIGHER EDUCATION COMMISSION. Downloaded on January 22, 2009 at 00:25 from IEEE Xplore. Restrictions apply.

  • 7/30/2019 On the Distribution of SINR for the MMSE MIMO Receiver and Performance Analysis.pdf

    9/16

    IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 1, JANUARY 2006 279

    Fig. 4. Theoretical and empirical moments ofTk

    . The vertical axes are in the l o g1 0

    scale. Note that the variances and third moments are in terms of their squareroots and cubic roots, respectively.

    Fig. 5. Theoretical and empirical moments of SINR.

    figure. Whenm = 4

    , for the second and third moments, there seem

    to be some significant discrepancies between theoretical results and

    simulations at large SNRs. However, these errors contribute negligibly

    to the second and third moments of SINRk

    (see Fig. 5). For example,

    whenm = 4 ; p = 4 ; = 0

    , exactS k ( S

    k

    ) = 3 1 4 : 9 8

    3

    . Although theempirical

    S k ( T

    k

    ) ( = 1 6 : 4 0

    3

    )

    differs quite significantly from the the-

    oreticalS k ( T

    k

    ) ( = 5 : 3 3

    3

    )

    , the theoreticalS k (

    SINR) ( = 3 1 4 : 9 8

    3

    )

    is

    almost identical to the empiricalS k (

    SINR) ( = 3 1 4 : 9 9

    3

    )

    .

    The theoretical and empirical moments of SINRk

    are compared in

    Fig. 5 form = 4

    . As expected, the curves match almost perfectly

    except for the observable (due to thel o g

    scale) errors at very small

    SNRs.

    B. Distributions

    Fig. 6 presents the quantile-quantile (qq) plots for distributions of

    T

    k

    based on Gamma and Normal approximations against the empir-

    ical distribution, at a selected SNR=

    10 dB. The figure shows that the

    Normal approximation works poorly for smallm

    orp

    . The Gamma

    approximation fits much better in all cases. Fig. 7 gives the same type

    of plots for SINRk

    . It shows the Gamma approximation works well,

    especially for pm

    =

    1

    2

    . When pm

    = 1

    , the Gamma distribution approx-

    imates the portion between 1%99% quantiles pretty well.

    We have shown thatR S

    T

    R S

    S I N R for uncorrelated channelswith equal powers in Lemma 10, which implies that a Gamma could

    approximate the distribution of SINRk

    better than that ofT

    k

    . Also,

    Fig. 3 showsR S

    S I N R

    is much smaller at pm

    = 1

    than at pm

    =

    1

    2

    ,

    which helps explain whyGammaapproximation works well at pm

    =

    1

    2

    .

    C. Error Performance

    Fig. 8 plots BERs versus SNRs for uncorrelated channels.The figure

    shows that the BER curves produced by the Gamma approximation

    are almost indistinguishable from the simulated curves for pm

    =

    1

    2

    .

    When pm

    = 1

    , the Gamma approximation still works well for moderate

    SNRs (e.g.,SNR


Fig. 6. Quantile-quantile plots for T_k. The triangles on the curves indicate the 1% and 99% quantiles. The range of quantiles is 0.1%-99.9%.

Fig. 7. Quantile-quantile plots for SINR_k. The triangles indicate the 1% and 99% quantiles. The range of quantiles is 0.1%-99.9%.


Fig. 8. BER curves for uncorrelated channels. The parameter values 1/2 and 1 are used for computing the BER in both (a) and (b). BER stands for the exact BER; two approximate curves are computed by assuming SINR_k is a Gamma and a generalized Gamma, respectively; another uses the exact Gamma distribution for S_k and approximates T_k by a generalized Gamma; BER_∞ is the asymptotic BER.

Fig. 9. BER curves for equicorrelation ρ = 0.5. Note that the BER_∞ curves are different because we did not simulate the true limit of the mean.

Fig. 10. Comparison of the uncorrelated and correlated BER curves. m = 1, p = 1.

All figures indicate that using the asymptotic BER (BER_∞) formula will seriously underestimate the error probabilities at large SNRs.³

³Note that, in order to produce comparable BER results with the classical references (e.g., [2], [4], [8]), we actually used real channels (only for the BER curves).
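To illustrate how an approximating Gamma translates into a BER estimate, the sketch below numerically averages the BPSK error probability over a Gamma density matched to a given SINR mean and variance, using the parameter values 1/2 and 1 from Fig. 8 interpreted as BER = (1/2)E[erfc(√SINR)] = E[Q(√(2·SINR))]. This interpretation, and the example moment values, are assumptions made only for illustration.

```python
import numpy as np
from scipy import integrate, special, stats

def ber_gamma_approx(sinr_mean, sinr_var):
    """BPSK BER assuming SINR ~ Gamma with the given mean and variance.

    BER = E[Q(sqrt(2*SINR))] = 0.5 * E[erfc(sqrt(SINR))].
    """
    shape = sinr_mean**2 / sinr_var          # moment-matched Gamma shape
    scale = sinr_var / sinr_mean             # moment-matched Gamma scale
    pdf = stats.gamma(a=shape, scale=scale).pdf
    integrand = lambda g: 0.5 * special.erfc(np.sqrt(g)) * pdf(g)
    ber, _ = integrate.quad(integrand, 0.0, np.inf)
    return ber

if __name__ == "__main__":
    # Placeholder moments (in practice, from the theoretical expressions or from simulation).
    print(ber_gamma_approx(sinr_mean=6.0, sinr_var=10.0))
```

Sweeping the SINR moments over SNR and repeating the integration reproduces the approximate BER curves; replacing the Gamma density with a generalized Gamma fitted to three moments follows the same pattern.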

Fig. 9 presents the BER results for the correlated channels with equicorrelation ρ = 0.5. We can see trends similar to those for the uncorrelated cases, i.e., the Gamma approximation works well for p/m = 1/2 and the generalized Gamma approximation performs remarkably well. Finally, to see the difference between the correlated and uncorrelated cases more closely, Fig. 10 plots the interesting portions of the BER curves for both cases, which illustrates that the differences can be significant.

    IX. CONCLUSION

This study characterized the distribution of the SINR for the MMSE receiver in MIMO systems, for channels with nonrandom transmit correlations and unequal powers. The work started with a key observation that the SINR can be decomposed into two independent components: SINR = SINR_ZF + T, where SINR_ZF has an exact Gamma distribution. For uncorrelated channels, as well as correlated channels under certain conditions, T is proved to converge in distribution to a Normal and can be well approximated by a Gamma or a generalized Gamma. Our BER analysis suggested that these approximate distributions can be used to estimate the error probabilities accurately, even for very small dimensions.

    APPENDIX I

    PROOF OF LEMMAS 3, 4, 5

Restate Lemma 3 as follows:

$$E\left[\frac{p}{p-1}\,T_k\right] \to c_k\,M_J(-1). \qquad (79)$$

Proof:

$$E\left[\frac{p}{p-1}\,T_k\right]
= E\left[\frac{1}{p-1}\,E\big(pT_k \mid \mathbf{H}_{(-k)}\big)\right]
= E\left[\frac{1}{p-1}\,\frac{p}{m}\sum_{i=1}^{p-1}E\big(\|t_{k,i}\|^2\big)\,\lambda_i\right]
= \Sigma_k\,\frac{p}{m}\,E\left[\frac{\mathrm{tr}(\boldsymbol{\Lambda})}{p-1}\right]
\ \to\ c_k\,M_J(-1)$$

by the bounded convergence theorem [26, Sec. 1.3b], because $\mathrm{tr}(\boldsymbol{\Lambda})/(p-1) \le 1$.
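The quantity tr(Λ)/(p-1) that drives this limit is easy to examine numerically in the uncorrelated case: draw H_(-k) with i.i.d. CN(0,1) entries, form λ_i = 1/(1 + d_i²/m) from its singular values d_i, and watch the normalized trace stabilize around its limit M_J(-1). The sketch below does this for a few dimensions; the dimension choices and the ratio p/m = 1/2 are arbitrary test values, not from the paper.

```python
import numpy as np

def normalized_trace(m, p, rng):
    """tr(Lambda)/(p-1) for one draw of an m x (p-1) i.i.d. CN(0,1) matrix."""
    Z = (rng.standard_normal((m, p - 1)) + 1j * rng.standard_normal((m, p - 1))) / np.sqrt(2)
    d2 = np.linalg.svd(Z, compute_uv=False) ** 2        # squared singular values d_i^2
    lam = 1.0 / (1.0 + d2 / m)                          # eigenvalues of Lambda (all <= 1)
    return lam.sum() / (p - 1)

rng = np.random.default_rng(1)
for p in (8, 32, 128, 512):
    m = 2 * p                                           # keep p/m = 1/2 fixed
    vals = [normalized_trace(m, p, rng) for _ in range(50)]
    print(p, np.mean(vals), np.std(vals))
```

The bound tr(Λ)/(p-1) ≤ 1 used for bounded convergence is visible directly from lam ≤ 1, and the shrinking spread across draws anticipates the concentration argument used in the next proof.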

Restate Lemma 4 as follows:

$$\mathrm{Var}\left[\frac{p}{\sqrt{p-1}}\,T_k\right] \to c_k^2\,M_J'(-1). \qquad (80)$$

Proof:

$$\mathrm{Var}\left[\frac{p}{\sqrt{p-1}}\,T_k\right]
= \frac{p^2}{p-1}\,\mathrm{Var}\left[\frac{1}{m}\sum_{i=1}^{p-1}\|t_{k,i}\|^2\lambda_i\right]
= \frac{p^2}{p-1}\,E\left\{\mathrm{Var}\left[\frac{1}{m}\sum_{i=1}^{p-1}\|t_{k,i}\|^2\lambda_i \,\Big|\, \mathbf{H}_{(-k)}\right]\right\}
+ \frac{p^2}{p-1}\,\mathrm{Var}\left\{E\left[\frac{1}{m}\sum_{i=1}^{p-1}\|t_{k,i}\|^2\lambda_i \,\Big|\, \mathbf{H}_{(-k)}\right]\right\}$$

$$= \frac{p^2}{m^2}\,\Sigma_k^2\,E\left[\frac{\mathrm{tr}(\boldsymbol{\Lambda}^2)}{p-1}\right]
+ \frac{p^2}{m^2}\,\Sigma_k^2\,(p-1)\,\mathrm{Var}\left[\frac{\mathrm{tr}(\boldsymbol{\Lambda})}{p-1}\right]
\ \to\ c_k^2\,M_J'(-1)$$

because $E[\mathrm{tr}(\boldsymbol{\Lambda}^2)/(p-1)] \to M_J'(-1)$ by the bounded convergence theorem, and $(p-1)\,\mathrm{Var}[\mathrm{tr}(\boldsymbol{\Lambda})/(p-1)] \to 0$, which can be proved using the results on the concentration of spectral measures for random matrices [32]. The result we need is Corollary 1.8b in [32], which can be stated as follows:

$$P\left(\left|\frac{\mathrm{tr}(\boldsymbol{\Lambda})}{p-1} - E\!\left[\frac{\mathrm{tr}(\boldsymbol{\Lambda})}{p-1}\right]\right| > \delta\right) \ \le\ 2\,e^{-\kappa(p-1)^2\delta^2} \qquad (81)$$

for any $\delta > 0$. Here $\kappa$ depends on the spectral radius of $\mathbf{R}_{(-k,-k)}$, the logarithmic Sobolev inequality constant, and the Lipschitz constant of $g(x) = f(x^2)$, where $f(x) = \frac{1}{1+x}$. We can easily check that $f(x)$ is convex Lipschitz and that $g(x)$ has a finite Lipschitz norm. Therefore,

$$\mathrm{Var}\left[\frac{\mathrm{tr}(\boldsymbol{\Lambda})}{p-1}\right]
= \int_0^{\infty} 2x\,P\left(\left|\frac{\mathrm{tr}(\boldsymbol{\Lambda})}{p-1} - E\!\left[\frac{\mathrm{tr}(\boldsymbol{\Lambda})}{p-1}\right]\right| > x\right)dx
\ \le\ \int_0^{\infty} 4x\,e^{-\kappa(p-1)^2x^2}\,dx
= \frac{2}{\kappa(p-1)^2} \qquad (82)$$

which implies $(p-1)\,\mathrm{Var}[\mathrm{tr}(\boldsymbol{\Lambda})/(p-1)] \to 0$. This completes the proof.
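The key concentration step, (p-1)·Var[tr(Λ)/(p-1)] → 0, can be spot-checked with the same construction as in the sketch following Lemma 3; the dimensions below are again arbitrary test values.

```python
import numpy as np

rng = np.random.default_rng(2)

def normalized_trace(m, p):
    Z = (rng.standard_normal((m, p - 1)) + 1j * rng.standard_normal((m, p - 1))) / np.sqrt(2)
    lam = 1.0 / (1.0 + np.linalg.svd(Z, compute_uv=False) ** 2 / m)
    return lam.sum() / (p - 1)

for p in (8, 32, 128):
    m = 2 * p
    vals = np.array([normalized_trace(m, p) for _ in range(200)])
    print(p, (p - 1) * vals.var())   # should decrease toward 0 as p grows
```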

Restate Lemma 5 as follows:

$$\frac{1}{p-1}\,\mathrm{Sk}(pT_k) = \frac{1}{p-1}\,E\Big[\big(pT_k - E(pT_k)\big)^3\Big] \ \to\ c_k^3\,M_J''(-1). \qquad (83)$$

Proof:

$$\frac{1}{p-1}\,E\Big[\big(pT_k - E(pT_k)\big)^3\Big]
= \frac{1}{p-1}\,E\Big\{E\Big[\big(pT_k - E(pT_k \mid \mathbf{H}_{(-k)})\big)^3 \,\Big|\, \mathbf{H}_{(-k)}\Big]\Big\}
+ \frac{1}{p-1}\,E\Big\{\big[E(pT_k \mid \mathbf{H}_{(-k)}) - E(pT_k)\big]^3\Big\}
+ \frac{3}{p-1}\,\mathrm{Cov}\Big(E\big(pT_k \mid \mathbf{H}_{(-k)}\big),\ \mathrm{Var}\big(pT_k \mid \mathbf{H}_{(-k)}\big)\Big). \qquad (84)$$

We can show that the first term on the right-hand side of (84) converges to $c_k^3\,M_J''(-1)$ as follows:

$$\frac{1}{p-1}\,E\Big\{E\Big[\big(pT_k - E(pT_k \mid \mathbf{H}_{(-k)})\big)^3 \,\Big|\, \mathbf{H}_{(-k)}\Big]\Big\}
= \frac{p^3}{m^3}\,\frac{1}{p-1}\,E\left\{\sum_{i=1}^{p-1}E\Big[\big(\|t_{k,i}\|^2 - \Sigma_k\big)^3\Big]\,\lambda_i^3\right\}
= \frac{p^3}{m^3}\,\frac{1}{p-1}\,E\left\{\sum_{i=1}^{p-1}2\,\Sigma_k^3\,\lambda_i^3\right\}
= 2\,\frac{p^3}{m^3}\,\Sigma_k^3\,E\left[\frac{\mathrm{tr}(\boldsymbol{\Lambda}^3)}{p-1}\right]
\ \to\ c_k^3\,M_J''(-1), \qquad (85)$$

since $\mathrm{tr}(\boldsymbol{\Lambda}^3)/(p-1) \to \tfrac{1}{2}M_J''(-1)$.

We can show that the last two terms on the right-hand side of (84) tend to $0$. Expand

$$\frac{1}{p-1}\,E\Big\{\big[E(pT_k \mid \mathbf{H}_{(-k)}) - E(pT_k)\big]^3\Big\}
= \frac{p^3}{m^3}\,\Sigma_k^3\,(p-1)^2\,E\left[\left(\frac{\mathrm{tr}(\boldsymbol{\Lambda})}{p-1} - \frac{E(\mathrm{tr}(\boldsymbol{\Lambda}))}{p-1}\right)^3\right].$$

Apply the concentration theorem one more time:

$$E\left[\left(\frac{\mathrm{tr}(\boldsymbol{\Lambda})}{p-1} - \frac{E(\mathrm{tr}(\boldsymbol{\Lambda}))}{p-1}\right)^3\right]
\ \le\ \int_0^{\infty} 3x^2\,P\left(\left|\frac{\mathrm{tr}(\boldsymbol{\Lambda})}{p-1} - E\!\left[\frac{\mathrm{tr}(\boldsymbol{\Lambda})}{p-1}\right]\right| > x\right)dx
\ \le\ \int_0^{\infty} 6x^2\,e^{-\kappa(p-1)^2x^2}\,dx
= \frac{3\sqrt{\pi}}{2}\,\frac{\kappa^{-3/2}}{(p-1)^3} \qquad (86)$$

which implies $\frac{1}{p-1}\,E\big\{[E(pT_k \mid \mathbf{H}_{(-k)}) - E(pT_k)]^3\big\} \to 0$.

The last term in (84) also tends to zero, again using the concentration theorem and the following inequality:

$$\frac{3}{p-1}\,\mathrm{Cov}\Big(E\big(pT_k \mid \mathbf{H}_{(-k)}\big),\ \mathrm{Var}\big(pT_k \mid \mathbf{H}_{(-k)}\big)\Big)
= 3\,\frac{p^3}{m^3}\,\Sigma_k^3\,(p-1)\,\mathrm{Cov}\left(\frac{\mathrm{tr}(\boldsymbol{\Lambda})}{p-1},\ \frac{\mathrm{tr}(\boldsymbol{\Lambda}^2)}{p-1}\right)
\ \le\ 3\,\frac{p^3}{m^3}\,\Sigma_k^3\,(p-1)\sqrt{\mathrm{Var}\!\left[\frac{\mathrm{tr}(\boldsymbol{\Lambda})}{p-1}\right]\mathrm{Var}\!\left[\frac{\mathrm{tr}(\boldsymbol{\Lambda}^2)}{p-1}\right]}. \qquad (87)$$

We have shown that

$$\mathrm{Var}\left[\frac{\mathrm{tr}(\boldsymbol{\Lambda})}{p-1}\right] \ \le\ \frac{2}{\kappa(p-1)^2}.$$

Similar arguments show that

$$\mathrm{Var}\left[\frac{\mathrm{tr}(\boldsymbol{\Lambda}^2)}{p-1}\right] \ \le\ \frac{2}{\kappa'(p-1)^2}$$

for a different constant $\kappa'$. Therefore, the last term in (84) also tends to zero.

Combining the results for the three terms of (84) completes the proof.


    APPENDIX II

    PROOF OF LEMMA 6

    Restate Lemma 6 as follows:

$$\tilde{\eta}_p \ \ge\ \tilde{\eta}_{p-1} \ =\ \tilde{\eta}_{R,p-1} \ \ge\ \tilde{\eta}_{R,p}. \qquad (88)$$

Proof: Expand $\tilde{\eta}_p$:

$$\tilde{\eta}_p = \frac{1}{2}\,\frac{c(m-p+1)}{p} + \frac{1}{2}\,\frac{cm}{p^2} - \frac{1}{2}\,\frac{p-1}{p}\,(1-\tau_p) \qquad (89)$$

where

$$\tau_p^2 = \frac{c^2m^2}{p^2}\Big(1-\frac{p}{m}\Big)^2 + \frac{2cm}{p}\Big(1+\frac{p}{m}\Big) + 1,
\qquad
(p\,\tau_p)^2 = (cm+p-cp)^2 + 4cp^2 = (cm+p)^2 + c^2p^2 + 2cp^2 - 2c^2mp.$$

Expand $\tilde{\eta}_{p-1}$:

$$\tilde{\eta}_{p-1} = \frac{1}{2}\,\frac{c(m-p+1)}{p} - \frac{1}{2}\,(1-\tau_{p-1}) \qquad (90)$$

where

$$\tau_{p-1}^2 = \tau_p^2 + \frac{c^2+2c^2m}{p^2} - \frac{2c^2}{p} - \frac{2c}{p},
\qquad
(p\,\tau_{p-1})^2 = (p\,\tau_p)^2 + c^2 + 2c^2m - 2c^2p - 2cp.$$

Therefore,

$$\tilde{\eta}_p - \tilde{\eta}_{p-1} = \frac{1}{2}\left[\frac{cm}{p^2} + \frac{1}{p} + \frac{p-1}{p}\,\tau_p - \tau_{p-1}\right].$$

It suffices to show

$$cm + p + p(p-1)\tau_p - p^2\tau_{p-1} \ \ge\ 0$$
$$\Longleftrightarrow\ (cm+p)^2 + (p-1)^2p^2\tau_p^2 + 2(cm+p)(p-1)p\,\tau_p \ \ge\ p^4\tau_{p-1}^2$$
$$\Longleftrightarrow\ (cm+p)(p-1)p\,\tau_p \ \ge\ (p-1)(cm+p)^2 - cp(p-1)(cm-p)$$
$$\Longleftrightarrow\ (cm+p)\,p\,\tau_p \ \ge\ (cm+p)^2 - cp(cm-p)$$
$$\Longleftrightarrow\ (cm+p)^2 \ \ge\ (cm-p)^2$$

which is true. Therefore, we prove $\tilde{\eta}_p \ge \tilde{\eta}_{p-1}$. Here "$\Longleftrightarrow$" stands for "is equivalent to."

To show $\tilde{\eta}_{p-1} \ge \tilde{\eta}_{R,p}$, note that

$$\tilde{\eta}_{p-1} - \tilde{\eta}_{R,p} = \frac{1}{2}\left[\frac{c}{p} + \tau_{p-1} - \tau_p\right].$$

It suffices to show

$$p\,\tau_{p-1} \ \ge\ p\,\tau_p - c
\ \Longleftrightarrow\ p^2\tau_{p-1}^2 \ \ge\ p^2\tau_p^2 + c^2 - 2cp\,\tau_p
\ \Longleftrightarrow\ c^2 + 2c^2m - 2c^2p - 2cp \ \ge\ c^2 - 2cp\,\tau_p
\ \Longleftrightarrow\ p\,\tau_p \ \ge\ -cm + cp + p$$

which is true. Hence, we complete the proof.
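Because the two chains above are purely algebraic, they can be spot-checked numerically. The sketch below encodes τ_p and τ_{p-1} through the relations given in (89) and (90) and tests the two "it suffices to show" inequalities at random points; the ranges chosen for c, m, and p are arbitrary test values (with m ≥ p simply to mirror the MIMO setting), not from the paper.

```python
import numpy as np

def tau_p(c, m, p):
    # (p*tau_p)^2 = (c*m + p - c*p)^2 + 4*c*p^2, cf. (89)
    return np.sqrt((c * m + p - c * p) ** 2 + 4 * c * p ** 2) / p

def tau_pm1(c, m, p):
    # (p*tau_{p-1})^2 = (p*tau_p)^2 + c^2 + 2*c^2*m - 2*c^2*p - 2*c*p, cf. (90)
    val = (p * tau_p(c, m, p)) ** 2 + c ** 2 + 2 * c ** 2 * m - 2 * c ** 2 * p - 2 * c * p
    return np.sqrt(val) / p

rng = np.random.default_rng(3)
for _ in range(10000):
    c = rng.uniform(0.01, 10.0)
    p = int(rng.integers(2, 50))
    m = int(rng.integers(p, 200))
    # First chain: c*m + p + p*(p-1)*tau_p - p^2*tau_{p-1} >= 0
    lhs1 = c * m + p + p * (p - 1) * tau_p(c, m, p) - p ** 2 * tau_pm1(c, m, p)
    # Second chain: c/p + tau_{p-1} - tau_p >= 0
    lhs2 = c / p + tau_pm1(c, m, p) - tau_p(c, m, p)
    assert lhs1 >= -1e-9 and lhs2 >= -1e-9
print("both inequalities held on all random test points")
```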

    APPENDIX III

    PROOF OF LEMMA 7

We will frequently use the following three inequalities:

$$Z_k^{(1)} \triangleq \sum_{i=1}^{p-1}\frac{1}{m}\|z_i\|^2\lambda_i
= \sum_{i=1}^{p-1}\frac{\frac{1}{m}\|z_i\|^2}{1+\frac{1}{m}d_i^2}
\ \le\ \big\|\mathbf{R}_{(-k,-k)}^{-1}\,r_{k(-k)}\big\|^2 \triangleq e_k \qquad (91)$$

$$Z_k^{(2)} \triangleq \sum_{i=1}^{p-1}\frac{\frac{2}{m}\|z_i\|^2}{\big(1+\frac{1}{m}d_i^2\big)^2} \ \le\ \frac{1}{2}\,e_k \qquad (92)$$

$$Z_k^{(3)} \triangleq \sum_{i=1}^{p-1}\frac{\frac{6}{m}\|z_i\|^2}{\big(1+\frac{1}{m}d_i^2\big)^3} \ \le\ \frac{8}{9}\,e_k. \qquad (93)$$

(Note that $\frac{x}{1+x}\le 1$, $\frac{2x}{(1+x)^2}\le\frac{1}{2}$, and $\frac{6x}{(1+x)^3}\le\frac{8}{9}$ for any $x\ge 0$.)

We begin with the first moment. Conditional on $\mathbf{H}_{(-k)}$,

$$E\big(T_k \mid \mathbf{H}_{(-k)}\big)
= \frac{1}{m}\sum_{i=1}^{p-1}\frac{\Sigma_k+\|z_i\|^2}{1+\frac{1}{m}d_i^2}
= \frac{1}{m}\,\Sigma_k\,\mathrm{tr}(\boldsymbol{\Lambda}) + Z_k^{(1)}
\ \le\ \frac{1}{m}\,\Sigma_k\,\mathrm{tr}(\boldsymbol{\Lambda}) + e_k. \qquad (94)$$

Recall that $\mathrm{tr}(\boldsymbol{\Lambda})/(p-1)\xrightarrow{P} M_J(-1)$; the limiting upper bound of $E(T_k)$ is then given by

$$E(T_k)^U = \frac{p-1}{m}\,\Sigma_k\,M_J(-1) + e_k. \qquad (95)$$

We can derive the limiting upper bound for the variance of $T_k$ by following the proof of Lemma 4. We know that

$$\mathrm{Var}\left(\frac{p}{\sqrt{p-1}}\,T_k\right)
= \frac{p^2}{p-1}\,E\left[\frac{1}{m^2}\sum_{i=1}^{p-1}\frac{\Sigma_k^2+2\|z_i\|^2\Sigma_k}{\big(1+\frac{1}{m}d_i^2\big)^2}\right]
+ \frac{p^2}{p-1}\,\mathrm{Var}\left[\frac{1}{m}\sum_{i=1}^{p-1}\frac{\Sigma_k+\|z_i\|^2}{1+\frac{1}{m}d_i^2}\right]. \qquad (96)$$

The first term on the right-hand side of (96) can be written as

$$\frac{p^2}{m^2}\,\Sigma_k^2\,E\left[\frac{\mathrm{tr}(\boldsymbol{\Lambda}^2)}{p-1}\right]
+ \frac{p^2}{m}\,\frac{1}{p-1}\,\Sigma_k\,E\big(Z_k^{(2)}\big)
\ \le\ \frac{p^2}{m^2}\,\Sigma_k^2\,M_J'(-1)
+ \frac{p^2}{m}\,\frac{1}{p-1}\,\Sigma_k\,\frac{1}{2}\,e_k \qquad (97)$$

in the limiting sense.

Using the fact that

$$\mathrm{Var}(x+y) \ \le\ \mathrm{Var}(x) + E(y^2) + 2\sqrt{\mathrm{Var}(x)\,E(y^2)}$$

we can bound the second term in (96) by

$$\frac{p^2}{m^2}(p-1)\,\Sigma_k^2\,\mathrm{Var}\left[\frac{\mathrm{tr}(\boldsymbol{\Lambda})}{p-1}\right]
+ \frac{p^2}{p-1}\,E\Big[\big(Z_k^{(1)}\big)^2\Big]
+ 2\,\frac{p^2}{m}\,\Sigma_k\sqrt{\mathrm{Var}\left[\frac{\mathrm{tr}(\boldsymbol{\Lambda})}{p-1}\right]E\Big[\big(Z_k^{(1)}\big)^2\Big]}
\ \le\ \frac{2p^2}{m^2}\,\frac{\Sigma_k^2}{\kappa(p-1)}
+ \frac{p^2}{p-1}\,e_k^2
+ \frac{2p^2}{m(p-1)}\,\Sigma_k\sqrt{\frac{2}{\kappa}}\,e_k \qquad (98)$$

where $\kappa$ is the same constant as in Lemma 4.


Combining (97), (98), and ignoring the higher order term in (98), we can get a comprehensive upper bound

$$\mathrm{Var}\left(\frac{p}{\sqrt{p-1}}\,T_k\right)^U
= \frac{p^2}{m^2}\,\Sigma_k^2\,M_J'(-1)
+ \frac{p^2}{m}\,\Sigma_k\,\frac{1}{p-1}\,\frac{1}{2}\,e_k
+ \frac{p^2}{p-1}\,e_k^2
+ \frac{2p^2}{m(p-1)}\,\Sigma_k\sqrt{\frac{2}{\kappa}}\,e_k \qquad (99)$$

which, for convenience, is written as

$$\mathrm{Var}(T_k)^U
= \frac{p-1}{m^2}\,\Sigma_k^2\,M_J'(-1)
+ \frac{1}{m}\,\Sigma_k\left(\frac{1}{2} + 2\sqrt{\frac{2}{\kappa}}\right)e_k
+ e_k^2. \qquad (100)$$

We can get the limiting upper bound for the third central moment by following the proof of Lemma 5. Recall

$$\frac{p^3}{p-1}\,\mathrm{Sk}(T_k)
= \frac{1}{p-1}\,E\Big\{E\Big[\big(pT_k - E(pT_k \mid \mathbf{H}_{(-k)})\big)^3 \,\Big|\, \mathbf{H}_{(-k)}\Big]\Big\}
+ \frac{1}{p-1}\,E\Big\{\big[E(pT_k \mid \mathbf{H}_{(-k)}) - E(pT_k)\big]^3\Big\}
+ \frac{3}{p-1}\,\mathrm{Cov}\Big(E\big(pT_k \mid \mathbf{H}_{(-k)}\big),\ \mathrm{Var}\big(pT_k \mid \mathbf{H}_{(-k)}\big)\Big). \qquad (101)$$

Expand the first term of the right-hand side of (101) to get

$$2\,\frac{p^3}{m^3}\,\Sigma_k^3\,E\left[\frac{\mathrm{tr}(\boldsymbol{\Lambda}^3)}{p-1}\right]
+ \frac{p^3}{m^2}\,\frac{1}{p-1}\,\Sigma_k^2\,E\big(Z_k^{(3)}\big)
\ \le\ \frac{p^3}{m^3}\,\Sigma_k^3\,M_J''(-1)
+ \frac{p^3}{m^2}\,\frac{1}{p-1}\,\Sigma_k^2\,\frac{8}{9}\,e_k \qquad (102)$$

in the limiting sense. Expanding the second term of (101) and selectively ignoring some negative terms, we can get a bound

$$\frac{1}{p-1}\,E\Big\{\big[E(pT_k \mid \mathbf{H}_{(-k)}) - E(pT_k)\big]^3\Big\}
\ \le\ \frac{p^3}{m^3}\,\frac{1}{p-1}\,E\Big[\Big(\big(\mathrm{tr}(\boldsymbol{\Lambda}) - E(\mathrm{tr}(\boldsymbol{\Lambda}))\big)\,\Sigma_k + m\,e_k\Big)^3\Big]
\ \le\ \frac{p^3}{m^3}\,\Sigma_k^3\,\frac{3\sqrt{\pi}}{2}\,\frac{\kappa^{-3/2}}{p-1}
+ \frac{p^3}{p-1}\,e_k^3
+ \frac{p^3}{m^2}\,\Sigma_k^2\,\frac{6}{\kappa(p-1)}\,e_k.$$

We can also bound the last term of (101) by

$$\frac{3}{p-1}\,\mathrm{Cov}\Big(E\big(pT_k \mid \mathbf{H}_{(-k)}\big),\ \mathrm{Var}\big(pT_k \mid \mathbf{H}_{(-k)}\big)\Big)
\ \le\ \frac{3}{p-1}\sqrt{\mathrm{Var}\Big[E\big(pT_k \mid \mathbf{H}_{(-k)}\big)\Big]\ \mathrm{Var}\Big[\mathrm{Var}\big(pT_k \mid \mathbf{H}_{(-k)}\big)\Big]}.$$

We know

$$\mathrm{Var}\Big[E\big(pT_k \mid \mathbf{H}_{(-k)}\big)\Big]
\ \le\ \frac{2p^2}{\kappa m^2}\,\Sigma_k^2 + p^2 e_k^2 + \frac{2p^2}{m}\,\Sigma_k\sqrt{\frac{2}{\kappa}}\,e_k$$

and

$$\mathrm{Var}\Big[\mathrm{Var}\big(pT_k \mid \mathbf{H}_{(-k)}\big)\Big]
= \frac{p^4}{m^4}\,\Sigma_k^2\,\mathrm{Var}\Big[\Sigma_k\,\mathrm{tr}(\boldsymbol{\Lambda}^2) + m\,Z_k^{(2)}\Big]
\ \le\ \frac{p^4}{m^4}\,\Sigma_k^2\left[\frac{2}{\kappa'}\,\Sigma_k^2 + m^2\Big(\frac{1}{2}e_k\Big)^2 + \Sigma_k\,m\,e_k\sqrt{\frac{2}{\kappa'}}\right]. \qquad (103)$$

Combining all the bounds and limits, we can obtain a limiting upper bound for the third moment

$$\mathrm{Sk}(T_k)^U
= \frac{p-1}{m^3}\,\Sigma_k^3\,M_J''(-1)
+ \frac{8/9}{m^2}\,\Sigma_k^2\,e_k
+ e_k^3
+ \frac{\kappa_1}{m^2}\,\Sigma_k^2\,e_k
+ \frac{\kappa_2}{m}\,\Sigma_k\,e_k^2
+ \frac{\kappa_3}{m^3}\,\Sigma_k^3 \qquad (104)$$

where $\kappa_1$, $\kappa_2$, and $\kappa_3$ are positive constants that depend only on $\kappa$ and $\kappa'$.

    APPENDIX IV

    PROOF OF LEMMA 9

    Restate Lemma 9 as follows:

$$W = \frac{m\,T_k - \Sigma_k\,\mathrm{tr}(\boldsymbol{\Lambda})}{\sqrt{p-1}}
\ \stackrel{D}{\Longrightarrow}\ N\big(0,\ \Sigma_k^2\,M_J'(-1)\big). \qquad (105)$$
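Before the formal argument, (105) can be illustrated by simulation in the uncorrelated, unit-power case (Σ_k = 1), where the ||t_{k,i}||² are i.i.d. Exp(1) given H_(-k). The dimensions below are arbitrary test values; the sample variance of W should track the average of tr(Λ²)/(p-1), and the sample skewness should be near zero.

```python
import numpy as np

rng = np.random.default_rng(4)
m, p, n_trials = 64, 32, 5000            # arbitrary test dimensions (p/m = 1/2)
W = np.empty(n_trials)
v = np.empty(n_trials)
for t in range(n_trials):
    Z = (rng.standard_normal((m, p - 1)) + 1j * rng.standard_normal((m, p - 1))) / np.sqrt(2)
    lam = 1.0 / (1.0 + np.linalg.svd(Z, compute_uv=False) ** 2 / m)
    # Uncorrelated, unit-power case: ||t_{k,i}||^2 are i.i.d. Exp(1) given H_(-k).
    e = rng.exponential(1.0, size=p - 1)
    W[t] = (np.dot(e, lam) - lam.sum()) / np.sqrt(p - 1)
    v[t] = np.dot(lam, lam) / (p - 1)     # tr(Lambda^2)/(p-1)

print("sample var of W:", W.var(), " mean of tr(Lambda^2)/(p-1):", v.mean())
print("sample skewness:", ((W - W.mean()) ** 3).mean() / W.std() ** 3)
```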

Proof: It suffices to show that the characteristic function of $W$ satisfies $E(e^{jyW}) \to \exp\!\big(-\tfrac{y^2}{2}\,\Sigma_k^2\,M_J'(-1)\big)$, where

$$E\big(e^{jyW}\big) = E\Big[E\big(e^{jyW} \mid \mathbf{H}_{(-k)}\big)\Big], \qquad j = \sqrt{-1}. \qquad (106)$$

Conditional on $\mathbf{H}_{(-k)}$,

$$E\big(e^{jyW} \mid \mathbf{H}_{(-k)}\big)
= E\left[\exp\!\left(jy\,\frac{m\,T_k - \Sigma_k\,\mathrm{tr}(\boldsymbol{\Lambda})}{\sqrt{p-1}}\right) \,\Big|\, \mathbf{H}_{(-k)}\right]
= \exp\!\left(-\frac{jy\,\Sigma_k\,\mathrm{tr}(\boldsymbol{\Lambda})}{\sqrt{p-1}}\right)
\prod_{i=1}^{p-1} E\left[\exp\!\left(\frac{jy\,\|t_{k,i}\|^2\lambda_i}{\sqrt{p-1}}\right)\right]$$

$$= \exp\!\left(-\frac{jy\,\Sigma_k\,\mathrm{tr}(\boldsymbol{\Lambda})}{\sqrt{p-1}}\right)
\prod_{i=1}^{p-1}\left(1 - \frac{jy\,\Sigma_k\,\lambda_i}{\sqrt{p-1}}\right)^{-1}
\exp\!\left(\frac{\|z_i\|^2\,\frac{jy\,\lambda_i}{\sqrt{p-1}}}{1 - \frac{jy\,\Sigma_k\,\lambda_i}{\sqrt{p-1}}}\right)
= \exp\!\left(-\frac{jy\,\Sigma_k\,\mathrm{tr}(\boldsymbol{\Lambda})}{\sqrt{p-1}}\right) L_1(p)\,L_2(p). \qquad (107)$$

Note that $\lambda_i = \frac{1}{1+\frac{1}{m}d_i^2} \le 1$. For any fixed $y$, since $p \to \infty$, we can assume $\frac{|y|\,\Sigma_k\,\lambda_i}{\sqrt{p-1}} < 1$ for all $i$. Because of the assumption that $e_k \to 0$ faster than $O(p^{-1/2})$, we have $L_2(p) \xrightarrow{P} 1$, and hence $L_2(p)$ can be ignored.

$$L_1(p) = \prod_{i=1}^{p-1}\left(1 - \frac{jy\,\Sigma_k\,\lambda_i}{\sqrt{p-1}}\right)^{-1}
= \exp\!\left(-\sum_{i=1}^{p-1}\log\!\left(1 - \frac{jy\,\Sigma_k\,\lambda_i}{\sqrt{p-1}}\right)\right)
= \exp\!\left(\sum_{i=1}^{p-1}\sum_{n=1}^{\infty}\frac{(jy\,\Sigma_k\,\lambda_i)^n}{n\,(p-1)^{n/2}}\right)
= \exp\!\left(\sum_{n=1}^{\infty}\sum_{i=1}^{p-1}\frac{(jy\,\Sigma_k\,\lambda_i)^n}{n\,(p-1)^{n/2}}\right)$$

$$= \exp\!\left(\sum_{i=1}^{p-1}\frac{jy\,\Sigma_k\,\lambda_i}{(p-1)^{1/2}}\right)
\exp\!\left(\sum_{i=1}^{p-1}\frac{-y^2\,\Sigma_k^2\,\lambda_i^2}{2(p-1)}\right)
\exp\!\left(\sum_{n=3}^{\infty}\sum_{i=1}^{p-1}\frac{(jy\,\Sigma_k\,\lambda_i)^n}{n\,(p-1)^{n/2}}\right)
= \exp\!\left(\frac{jy\,\Sigma_k\,\mathrm{tr}(\boldsymbol{\Lambda})}{\sqrt{p-1}}\right)
\exp\!\left(-\frac{y^2\,\Sigma_k^2\,\mathrm{tr}(\boldsymbol{\Lambda}^2)}{2(p-1)}\right) L_3(p). \qquad (110)$$

We will show that $L_3(p)$ converges to $1$ in probability. Note that in the above derivations, we can expand the complex logarithm and switch the order of summations because of the condition $\frac{|y|\,\Sigma_k\,\lambda_i}{\sqrt{p-1}} < 1$.


The remaining task is to show that $\mathrm{RS}_T \ge \mathrm{RS}_{\mathrm{SINR}}$ in the limit. In the limit, we can replace $\frac{p-1}{m}$ by $c$. Substituting the limiting moments into $\mathrm{RS}_T$ and $\mathrm{RS}_{\mathrm{SINR}}$ and clearing the denominators, the inequality $\mathrm{RS}_T \ge \mathrm{RS}_{\mathrm{SINR}}$ reduces, after pages of tedious algebra, to an inequality of the form $\tilde{c}^{\,2}(\cdot)^2 \ge 0$, which clearly holds. This completes the proof that $\mathrm{RS}_T \ge \mathrm{RS}_{\mathrm{SINR}}$ in the limit. See the technical report [23] for details.

    ACKNOWLEDGMENT

The authors wish to thank Persi Diaconis for reading the original draft of this manuscript and Amir Dembo for the helpful advice in the revision. Ping Li thanks Tze Leung Lai and Youngjae Kim for the enjoyable conversations. He also acknowledges Antonia Tulino, Sergio Verdú, and Dongning Guo for the e-mail communications during the preparation of the manuscript.

    REFERENCES

[1] A. Paulraj, R. Nabar, and D. Gore, Introduction to Space-Time Wireless Communications, 1st ed. New York: Cambridge Univ. Press, 2003.
[2] H. V. Poor and S. Verdú, "Probability of error in MMSE multiuser detection," IEEE Trans. Inf. Theory, vol. 43, no. 3, pp. 858-871, May 1997.
[3] D. N. C. Tse and O. Zeitouni, "Linear multiuser receivers in random environments," IEEE Trans. Inf. Theory, vol. 46, no. 1, pp. 171-188, Jan. 2000.
[4] S. Verdú, Multiuser Detection. Cambridge, U.K.: Cambridge Univ. Press, 1998.
[5] U. Madhow and M. L. Honig, "MMSE interference suppression for direct-sequence spread-spectrum CDMA," IEEE Trans. Commun., vol. 42, no. 12, pp. 3178-3188, Dec. 1994.
[6] S. Verdú and S. Shamai (Shitz), "Spectral efficiency of CDMA with random spreading," IEEE Trans. Inf. Theory, vol. 45, no. 2, pp. 622-640, Mar. 1999.
[7] A. Hjørungnes and P. S. R. Diniz, "Minimum BER prefilter transform for communications systems with binary signaling and known FIR MIMO channel," IEEE Signal Process. Lett., vol. 12, no. 3, pp. 234-237, Mar. 2005.
[8] D. Guo, S. Verdú, and L. K. Rasmussen, "Asymptotic normality of linear multiuser receiver outputs," IEEE Trans. Inf. Theory, vol. 48, no. 12, pp. 3080-3095, Dec. 2002.
[9] A. M. Tulino and S. Verdú, "Random matrix theory and wireless communications," Foundations and Trends in Communications and Information Theory, vol. 1, Jun. 2004.
[10] L. Li, A. M. Tulino, and S. Verdú, "Asymptotic eigenvalue moments for linear multiuser detection," Commun. Inf. and Syst., vol. 1, no. 3, pp. 273-304, 2001.
[11] D. N. C. Tse and S. V. Hanly, "Linear multiuser receivers: Effective interference, ef