
Chalmers & TELECOM Bretagne

Coding for phase noise channels

IVAN LELLOUCH

Department of Signals & Systems

Chalmers University of Technology

Gothenburg, Sweden 2011

Master’s Thesis 2011:1


Abstract


Acknowledgements


Contents

1 Introduction

2 Bounds and capacity
  2.1 Capacity and information density
  2.2 Binary hypothesis testing
  2.3 Bounds
    2.3.1 Dependence testing bound [5]
    2.3.2 Meta converse bound [5]

3 AWGN channel
  3.1 The AWGN channel
    3.1.1 Information density
    3.1.2 Dependence testing bound
    3.1.3 Meta converse bound

4 Phase noise channels
  4.1 Uniform phase noise channel
    4.1.1 Information density
    4.1.2 Dependence testing bound
  4.2 Uniform phase noise AWGN channel
    4.2.1 Information density
    4.2.2 Dependence testing bound
  4.3 Tikhonov phase noise channel
    4.3.1 Information density
    4.3.2 Dependence testing bound
    4.3.3 Meta converse bound

5 Conclusion

Bibliography


List of Figures

3.1 DT and converse for AWGN channel - SNR = 0 - Pe = 10^-3

4.1 DT and constraint capacities for three uniform AM constellations
4.2 Tikhonov probability density function
4.3 Two 64-QAM constellations in AWGN phase noise channel
4.4 Robust Circular QAM constellation with phase noise
4.5 DT curves for two 64-QAM constellations in AWGN phase noise channel with SNR = 0 dB
4.6 DT curves for two 64-QAM constellations in AWGN phase noise channel with SNR = 15 dB
4.7 Comparison of DT curves for different phase noise powers
4.8 Comparison of DT curves for different probabilities of error


1

Introduction

Since Shannon's landmark paper [1], many studies have addressed the channel capacity, which is the amount of information we can reliably send through a channel. This result is a theoretical limit that requires an infinite block length. In practice, we want to know, for a given communication system, how far our system is from this upper bound.

When we design a system, there are two main parameters to determine: the error probability that the system can tolerate, and the delay constraint, which is related to the size of the message we want to send, i.e., the block length. Therefore, given these parameters, we want to find the new upper bound for our system. Thus we will work with a finite block length and a given probability of error.

Two bounds are defined in order to determine this new limit: the achievability bound and the converse bound. The achievability bound is a lower bound on the size of the codebook, given a block length and error probability. The converse bound is an upper bound on the size of the codebook, given a block length and error probability. By using both of these bounds, we can approximate the theoretical limit on the information we can send through a channel, for a given block length and probability of error.

Achievability bounds already exist in the information theory literature. Three main bounds were defined by Feinstein [2], Shannon [3] and Gallager [4]. An optimization of auxiliary constants is needed in order to compute those bounds. Thanks to this work, we have new insights into how far systems can operate from the capacity of the channel with a finite block length. In a recent work [5], Polyanskiy et al. defined a new achievability bound that does not require any auxiliary constant optimization and is tighter than the three bounds in [2, 3, 4], as well as a converse bound.


This thesis is part of the MAGIC project involving Chalmers University of Technology, Ericsson AB and Qamcom Technology AB; the context is microwave backhauling for IMT-Advanced and beyond. A part of this project is to investigate modulation and coding techniques for channels impaired by phase noise. In digital communication systems, the use of low-cost oscillators at the receiver causes phase noise, which can become a severe problem for high symbol rates and large constellation sizes. For channels impaired by phase noise, the codes usually employed do not perform as well as they do in classical channels.

In this thesis, we will deal with two bounds from [5], an achievability bound and a converse bound. We will apply these bounds to phase noise channels and see how far we are from capacity and which rates we can reach given a block length and an error probability.

The outline of the thesis is as follows. In Chapter 2, we introduce the capacity and the bounds that will be used in the following chapters. The main result is the expression of the dependence testing (DT) bound that we use to find a lower bound on the maximal coding rate. We explain how Polyanskiy et al. derived this bound and show how we can use it for our channels. In Chapter 3, we first apply our results to the additive white Gaussian noise (AWGN) channel. This is useful to see how the equations work for a continuous noise channel, but also to determine the impact the phase noise will have on the maximal coding rate. In Chapter 4, we present the main results of the thesis. We apply the DT bound to several partially coherent additive white Gaussian noise (PC-AWGN) channels, i.e., AWGN channels impaired by phase noise, and compare them with the bound for the AWGN channel and the constrained capacity. The loss induced by the phase noise is thereby estimated, for a given channel, and this leads to an approximation of the maximal coding rate for the PC-AWGN channels we investigated. We also present a constellation designed for phase noise channels and show the performance improvements.


2

Bounds and capacity

2.1 Capacity and information density

We denote by A and B the input and output sets. If X and Y are random variables on A and B respectively, then x and y are their particular realizations. $P_X$ and $P_Y$ are the probability density functions of X and Y respectively. We also define $P_{Y|X}$, the conditional probability from A to B, and, given a codebook $(c_1,\ldots,c_M)$, we denote by M its size. Since we are interested in finite block length analysis, a realization x of a random variable X represents an n-dimensional vector, i.e., $x = (x_1,\ldots,x_n)$.

The capacity C of a channel is the maximum amount of information we can reliably send through it with a vanishing error probability and an infinite block length. For input and output X and Y distributed according to $P_X$ and $P_Y$ respectively, the capacity C is given by

$$C = \max_{P_X} I(X;Y) \tag{2.1}$$

where $I(X;Y)$ is the mutual information between X and Y; the expression is maximized with respect to the choice of the input distribution $P_X$:

$$I(X;Y) = \iint_{\mathcal{X},\mathcal{Y}} p_{XY}(x,y)\,\log_2\frac{p_{XY}(x,y)}{p_X(x)\,p_Y(y)}\;dx\,dy$$

where the logarithmic term is called the information density:

$$i(x;y) = \log_2\frac{p_{X,Y}(x,y)}{p_X(x)\,p_Y(y)} \tag{2.2}$$

It is proven in [6] that the capacity of the AWGN channel is achieved by a Gaussian input distribution and is given by

$$C = \frac{1}{2}\log_2(1 + \mathrm{SNR}) \tag{2.3}$$

where SNR is the signal-to-noise ratio, i.e., $\mathrm{SNR} = P/N$, with P and N the input and noise powers respectively.

The capacity can be computed when we know which distribution maximizes (2.1); in this case we say that the distribution is capacity achieving. For some channels, such as phase noise channels, little is known about the capacity. For those channels, we will fix an input distribution and use it in our calculations. Thus, we will work with a constrained capacity, i.e., a capacity constrained to a specific input distribution, which is an upper bound on the information that can be sent through the channel with that input distribution.

The capacity can also be defined through the rate $R = \frac{\log_2 M}{n}$:

$$C = \lim_{\varepsilon\to 0}\,\lim_{n\to\infty}\,\frac{1}{n}\log_2 M^*(n,\varepsilon) \tag{2.4}$$

where n is the block length, ε the probability of error, and $M^*$ is defined as follows:

$$M^*(n,\varepsilon) = \max\left\{M : \exists\,(n,M,\varepsilon)\text{-code}\right\} \tag{2.5}$$

2.2 Binary hypothesis testing

Later in this thesis we will need a binary hypothesis test in order to define an upper bound on the rate. We consider a random variable R distributed according to one of two distributions, P or Q, and a test that decides between them:

$$\{P,Q\} \to \{1,0\} \tag{2.6}$$

where 1 indicates that P is chosen. The test is described by the random transformation

$$P_{Z|R} : \mathcal{R} \to \{1,0\} \tag{2.7}$$

We define $\beta_\alpha(P,Q)$ as the smallest probability of deciding 1 under Q among all tests whose probability of deciding 1 under P is at least α:

$$\beta_\alpha(P,Q) = \inf_{P_{Z|R}:\;\sum_{r\in\mathcal{R}} P_{Z|R}(1|r)\,P(r)\,\ge\,\alpha}\;\sum_{r\in\mathcal{R}} P_{Z|R}(1|r)\,Q(r) \tag{2.8}$$
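As an illustration (a sketch of ours, not part of the thesis), (2.8) can be evaluated exactly for discrete distributions: by the Neyman-Pearson lemma, the optimal test sorts the outcomes by decreasing likelihood ratio P(r)/Q(r) and accepts them greedily, randomizing on the last outcome, until the acceptance probability under P reaches α. The helper name `beta_alpha` is ours.

```python
import numpy as np

def beta_alpha(P, Q, alpha):
    """Compute beta_alpha(P, Q) of (2.8) for discrete distributions:
    the minimal Q-probability of deciding '1' over all randomized tests
    whose P-probability of deciding '1' is at least alpha. By the
    Neyman-Pearson lemma the optimal test thresholds the likelihood
    ratio P(r)/Q(r)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    order = np.argsort(-(P / Q))        # outcomes by decreasing P/Q
    p_mass, beta = 0.0, 0.0
    for r in order:
        if p_mass >= alpha:
            break
        # accept outcome r, randomizing on the boundary outcome
        take = min(1.0, (alpha - p_mass) / P[r]) if P[r] > 0 else 1.0
        p_mass += take * P[r]
        beta += take * Q[r]
    return beta
```

For instance, with P = (0.5, 0.5), Q = (0.9, 0.1) and α = 0.75, the test accepts the second outcome fully and randomizes with probability 1/2 on the first, giving β = 0.1 + 0.45 = 0.55.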

2.3 Bounds

Polyanskiy defined in [5] the DT bound and the meta converse bound for random codes. In this chapter we describe those bounds and apply them to continuous channels.


2.3.1 Dependence Testing bound [5]

We present the technique proposed in [5] for bounding the error probability of any input distribution on a given channel.

Theorem 1 (Dependence testing bound). Given an input distribution $P_X$ on A, there exists a code with codebook size M whose average probability of error is bounded by

$$\varepsilon \le \mathbb{E}\left[\exp\left(-\left|i(x,y) - \log_2\frac{M-1}{2}\right|^+\right)\right] \tag{2.9}$$

where

$$|u|^+ = \max(0,u) \tag{2.10}$$

Proof. Let $Z_x(y)$ be the following function:

$$Z_x(y) = \mathbf{1}\!\left(i(x,y) > \log\frac{M-1}{2}\right) \tag{2.11}$$

where $\mathbf{1}_A(\cdot)$ is an indicator function:

$$\mathbf{1}_A(x) = \begin{cases} 1, & \text{if } x \in A,\\ 0, & \text{otherwise.}\end{cases} \tag{2.12}$$

For a given codebook $(c_1,\ldots,c_M)$, the decoder computes (2.11) for the codewords $c_j$, starting with $c_1$, until it finds $Z_{c_j}(y) = 1$; otherwise the decoder returns an error. Therefore, there is no error with probability

$$\Pr\left[\{Z_{c_j}(y) = 1\} \cap \bigcap_{i<j}\{Z_{c_i}(y) = 0\}\right] \tag{2.13}$$

Then, we can write the error probability for the j-th codeword as

$$\varepsilon(c_j) = \Pr\left[\{Z_{c_j}(y) = 0\} \cup \bigcup_{i<j}\{Z_{c_i}(y) = 1\}\right] \tag{2.14}$$

Using the union bound on this expression and (2.11),

$$\varepsilon(c_j) \le \Pr\left[Z_{c_j}(y) = 0\right] + \sum_{i<j}\Pr\left[Z_{c_i}(y) = 1\right] \tag{2.15}$$

$$= \Pr\left[i(c_j,y) \le \log\frac{M-1}{2}\right] + \sum_{i<j}\Pr\left[i(c_i,y) > \log\frac{M-1}{2}\right] \tag{2.16}$$


The codebook is generated randomly according to the distribution $P_X$, and in the second term below $y$ denotes a realization of the random variable Y that is independent of the transmitted codeword. Thus, the probability of error if we send the codeword $c_j$ is bounded by

$$\varepsilon(c_j) \le \Pr\left[i(x,y) \le \log\frac{M-1}{2}\right] + (j-1)\Pr\left[i(x,y) > \log\frac{M-1}{2}\right] \tag{2.17}$$

Then, if we suppose that $\Pr(c_j) = \frac{1}{M}$, we have

$$\varepsilon = \frac{1}{M}\sum_{j=1}^{M}\varepsilon(c_j) \tag{2.18}$$

and

$$\varepsilon \le \frac{1}{M}\sum_{j=1}^{M}\left[\Pr\left[i(x,y) \le \log\frac{M-1}{2}\right] + (j-1)\Pr\left[i(x,y) > \log\frac{M-1}{2}\right]\right] \tag{2.19}$$

which finally gives the following expression for the average error probability:

$$\varepsilon \le \Pr\left[i(x,y) \le \log\frac{M-1}{2}\right] + \frac{M-1}{2}\Pr\left[i(x,y) > \log\frac{M-1}{2}\right] \tag{2.20}$$

We know that

$$\exp\left(-\left|i(x,y) - \log u\right|^+\right) = \mathbf{1}\!\left(i(x,y) \le \log u\right) + u\,\frac{p(x)\,p(y)}{p(x,y)}\,\mathbf{1}\!\left(i(x,y) > \log u\right) \tag{2.21}$$

By averaging over $p(x,y)$ we have

$$\mathbb{E}\left[\exp\left(-\left|i(x,y) - \log u\right|^+\right)\right] = \Pr\!\left(i(x,y) \le \log u\right) + u\sum_{x}\sum_{y} p(x)\,p(y)\,\mathbf{1}\!\left(i(x,y) > \log u\right) \tag{2.22}$$

and, denoting by $\bar p(x,y) = p(x)\,p(y)$ the joint distribution of $x$ and an output $y$ that is independent of $x$,

$$\mathbb{E}\left[\exp\left(-\left|i(x,y) - \log u\right|^+\right)\right] = \Pr\!\left(i(x,y) \le \log u\right) + u\sum_{x}\sum_{y}\bar p(x,y)\,\mathbf{1}\!\left(i(x,y) > \log u\right) \tag{2.23}$$

and finally

$$\mathbb{E}\left[\exp\left(-\left|i(x,y) - \log u\right|^+\right)\right] = \Pr\!\left(i(x,y) \le \log u\right) + u\,\Pr\!\left(i(x,y) > \log u\right) \tag{2.24}$$

where the second probability is computed under the independent pair. Thus, replacing $u = \frac{M-1}{2}$ and using (2.20), we obtain (2.9), which completes the proof.

This expression requires no auxiliary constant optimization and can be computed for a given channel once the information density is known. Applications to AWGN channels and phase noise channels are shown in the following chapters.
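In practice, one fixes ε and searches for the largest codebook size M for which the right-hand side of (2.9) stays below ε. Since the bound is monotonically increasing in M, a bisection over log₂M on a fixed set of information-density samples suffices. The following sketch is our illustration (the helper name is hypothetical); it assumes i.i.d. Monte Carlo samples of i(x,y), expressed in bits, are available, so exp(−|·|⁺) becomes 2^(−|·|⁺):

```python
import numpy as np

def max_log2M(info_samples, eps, tol=1e-3):
    """Largest log2 M such that the Monte Carlo estimate of the DT
    bound (2.9), E[2^{-|i - log2((M-1)/2)|^+}], does not exceed eps.
    info_samples: i.i.d. samples of the information density in bits."""
    i_samp = np.asarray(info_samples, float)

    def dt(log2M):
        thr = np.log2(2.0**log2M - 1.0) - 1.0      # log2((M - 1) / 2)
        return np.mean(2.0 ** (-np.maximum(i_samp - thr, 0.0)))

    # the bound is tiny for M = 2 and ~1 once log2 M exceeds max(i)
    lo, hi = 1.0, float(np.max(i_samp)) + 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if dt(mid) <= eps else (lo, mid)
    return lo
```

Dividing the returned log₂M by the block length n gives the DT lower bound on the coding rate for the chosen ε.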


2.3.2 Meta converse bound [5]

The meta converse bound is an upper bound on the size of the codebook for a given error probability and block length. To define this bound, we use the binary hypothesis test defined in (2.8).

Theorem 2. Let A and B denote the input and output alphabets respectively. We consider two random transformations $P_{Y|X}$ and $Q_{Y|X}$ from A to B, and a code (f,g) with average probability of error ε under $P_{Y|X}$ and ε′ under $Q_{Y|X}$. The probability distribution induced by the encoder is $P_X = Q_X$. Then we have

$$\beta_{1-\varepsilon}(P_{XY},\,Q_{XY}) \le 1 - \varepsilon' \tag{2.25}$$

where β is the binary hypothesis test defined in (2.8), applied to the joint distributions $P_{XY}$ and $Q_{XY}$.

Proof. We denote by s the input message, chosen from $(s_1,\ldots,s_M)$, and by $x = f(s)$ the encoded message. Also, y is the message before decoding and $z = g(y)$ the decoded message. We define the following random variable to represent an error-free transmission:

$$Z = \mathbf{1}_{s=z} \tag{2.26}$$

First we notice that the conditional distribution of Z given (X,Y) is the same for both channels $P_{Y|X}$ and $Q_{Y|X}$:

$$P[Z=1|X,Y] = \sum_{i=1}^{M} P[s = s_i,\,z = s_i\,|\,X,Y] \tag{2.27}$$

Then, given (X,Y), since the input and output messages are conditionally independent, we have

$$P[Z=1|X,Y] = \sum_{i=1}^{M} P[s = s_i|X,Y]\;P[z = s_i|X,Y] \tag{2.28}$$

We can simplify the expression as follows:

$$P[Z=1|X,Y] = \sum_{i=1}^{M} P[s = s_i|X]\;P[z = s_i|Y] \tag{2.29}$$

We recognize in the second factor the decoding function, while the first factor is independent of the choice of the channel, since the probability distributions induced by the encoder satisfy $P_X = Q_X$ as defined earlier. Then, using

$$P_{Z|XY} = Q_{Z|XY} \tag{2.30}$$

we define the following binary hypothesis test:

$$\sum_{x\in A}\sum_{y\in B} P_{Z|XY}(1|x,y)\,P_{XY}(x,y) = 1 - \varepsilon \tag{2.31}$$

$$\sum_{x\in A}\sum_{y\in B} P_{Z|XY}(1|x,y)\,Q_{XY}(x,y) = 1 - \varepsilon' \tag{2.32}$$

and using the definition in (2.8) we get (2.25).


Theorem 3. Every code with M codewords in A and an average probability of error ε satisfies

$$M \le \sup_{P_X}\;\inf_{Q_Y}\;\frac{1}{\beta_{1-\varepsilon}(P_{XY},\,P_X \times Q_Y)} \tag{2.33}$$

where $P_X$ ranges over all input distributions on A and $Q_Y$ over all output distributions on B.


3

AWGN channel

In this chapter we apply the bounds discussed in the previous chapter to the AWGN channel. Several results are known for this channel, such as the capacity. We will see how far the DT and meta-converse bounds are from the capacity, and whether they are tight enough to give an idea of the maximal achievable rate for a given block length and error probability. The results for the AWGN channel will be useful in the next chapter to evaluate the effect of phase noise on the achievable rates of AWGN channels impaired by phase noise.

3.1 The AWGN channel

Let us consider $x \in \mathcal{X}$, $y \in \mathcal{Y}$ and the transition probability $P_{Y|X}$. The AWGN channel is described by

$$y = x + t \tag{3.1}$$

where the noise samples $t \sim \mathcal{N}(0,\sigma^2 I_n)$ are independent and identically distributed. Thus, we know the conditional output probability density function

$$P_{Y|X=x}(y) = \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} e^{-\frac{(y-x)^T(y-x)}{2\sigma^2}} \tag{3.2}$$

where $(\cdot)^T$ is the transpose operation. We know from [6] that a Gaussian input distribution achieves the capacity of this channel, so we consider $x \sim \mathcal{N}(0,P I_n)$ and denote by $P_X$ the corresponding probability distribution.

Given the conditional output probability density and the input, the output distribution is (being a sum of Gaussian variables)

$$y \sim \mathcal{N}\!\left(0,\,(\sigma^2 + P)\,I_n\right) \tag{3.3}$$


3.1.1 Information density

We can now define the information density of the channel using the distributions $y|x \sim \mathcal{N}(x,\sigma^2 I_n)$ and $y \sim \mathcal{N}(0,(\sigma^2+P) I_n)$:

$$i(x,y) = \frac{\log_2(e)}{2}\left[\frac{y^T y}{P+\sigma^2} - \frac{(y-x)^T(y-x)}{\sigma^2}\right] + \frac{n}{2}\log_2\frac{P+\sigma^2}{\sigma^2} \tag{3.4}$$

which can be rewritten as

$$i(x,y) = \frac{n}{2}\log_2\frac{P+\sigma^2}{\sigma^2} + \frac{\log_2(e)}{2}\sum_{i=1}^{n}\left[\frac{y_i^2}{P+\sigma^2} - \frac{(y_i-x_i)^2}{\sigma^2}\right] \tag{3.5}$$

3.1.2 Dependence testing bound

To compute the DT bound for the AWGN channel, we use (3.5) in (2.9):

$$\varepsilon \le \mathbb{E}\left[\exp\left(-\left|\frac{n}{2}\log_2\frac{P+\sigma^2}{\sigma^2} + \frac{\log_2(e)}{2}\sum_{i=1}^{n}\left[\frac{y_i^2}{P+\sigma^2} - \frac{(y_i-x_i)^2}{\sigma^2}\right] - \log_2\frac{M-1}{2}\right|^+\right)\right] \tag{3.6}$$

The expectation can then be computed by Monte Carlo simulation, with the samples generated according to the model described in Section 3.1.

For this simulation, we use the input distribution $x \sim \mathcal{N}(0,P I_n)$. This leads to the DT bound on the maximal coding rate for this channel, i.e., this bound is an upper bound on the DT bounds obtained with any other input. In practice, discrete constellations are used in real systems; therefore, we also look at the results for a known discrete input constellation, which will be useful in the next chapter when we compare results for the AWGN and the partially coherent AWGN channels.
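The Monte Carlo evaluation of (3.6) can be sketched as follows (our illustration, not code from the thesis; the function name and parameters are ours; since the information density is in bits, exp(−|·|⁺) is evaluated as 2^(−|·|⁺)):

```python
import numpy as np

def dt_bound_awgn(n, log2M, P=1.0, sigma2=1.0, trials=10000, rng=None):
    """Monte Carlo estimate of the right-hand side of (3.6) for the
    real AWGN channel y = x + t with x ~ N(0, P I_n), t ~ N(0, sigma2 I_n)."""
    rng = np.random.default_rng(rng)
    x = rng.normal(0.0, np.sqrt(P), size=(trials, n))
    t = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))
    y = x + t
    # information density (3.5), in bits
    info = (n / 2) * np.log2((P + sigma2) / sigma2) \
        + (np.log2(np.e) / 2) * np.sum(y**2 / (P + sigma2) - t**2 / sigma2, axis=1)
    thr = np.log2(2.0**log2M - 1.0) - 1.0        # log2((M - 1) / 2)
    return np.mean(2.0 ** (-np.maximum(info - thr, 0.0)))
```

For SNR = 0 dB (P = σ² = 1) the estimate drops sharply once the rate log₂M / n is safely below the capacity of 0.5 bit per channel use.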

3.1.3 Meta converse bound

We know that the input distribution and the noise are Gaussian with powers P and σ² respectively. We also know that the sum of two Gaussian random variables is Gaussian, so we choose $y \sim \mathcal{N}(0,\sigma_Y^2 I_n)$ as the output distribution for the computation of the converse bound.

We can now define the information density (with the noise variance normalized to σ² = 1):

$$i(x,y) = \frac{n}{2}\log_2\sigma_Y^2 + \frac{\log_2 e}{2}\sum_{i=1}^{n}\left[\frac{y_i^2}{\sigma_Y^2} - (y_i - x_i)^2\right] \tag{3.7}$$

We choose the input such that $\|x\|^2 = nP$. To simplify the calculations, we use $x = x_0 = (\sqrt{P},\ldots,\sqrt{P})$; this is possible because of the symmetry of the problem. Thus, using $Z_i \sim \mathcal{N}(0,1)$, $H_n$ and $G_n$ have the following distributions:


$$H_n = n\log_2\sigma_Y + \frac{nP}{2\sigma_Y^2}\log_2 e + \frac{\log_2 e}{2\sigma_Y^2}\sum_{i=1}^{n}\left((1-\sigma_Y^2)Z_i^2 + 2\sqrt{P}\,Z_i\right) \tag{3.8}$$

and

$$G_n = n\log_2\sigma_Y - \frac{nP}{2}\log_2 e + \frac{\log_2 e}{2}\sum_{i=1}^{n}\left((1-\sigma_Y^2)Z_i^2 + 2\sqrt{P}\,\sigma_Y Z_i\right) \tag{3.9}$$

where $H_n$ and $G_n$ are the information densities under $P_{Y|X}$ and $P_Y$ respectively. Then, by choosing $\sigma_Y^2 = 1 + P$, we have

$$H_n = \frac{n}{2}\log_2(1+P) + \frac{P\log_2 e}{2(1+P)}\sum_{i=1}^{n}\left(1 - Z_i^2 + \frac{2}{\sqrt{P}}\,Z_i\right) \tag{3.10}$$

and

$$G_n = \frac{n}{2}\log_2(1+P) - \frac{P\log_2 e}{2}\sum_{i=1}^{n}\left(1 + Z_i^2 - 2\sqrt{1+\frac{1}{P}}\,Z_i\right) \tag{3.11}$$

Notice that $H_n$ and $G_n$ can be expressed through non-central χ² random variables:

$$H_n = \frac{n}{2}\left(\log_2(1+P) + \log_2 e\right) - \frac{P\log_2 e}{2(1+P)}\,y_n \tag{3.12}$$

with $y_n \sim \chi_n^2\!\left(\frac{n}{P}\right)$, and

$$G_n = \frac{n}{2}\left(\log_2(1+P) + \log_2 e\right) - \frac{P\log_2 e}{2}\,y_n' \tag{3.13}$$

with $y_n' \sim \chi_n^2\!\left(n + \frac{n}{P}\right)$. Finally, to compute the bound, we find $\gamma_n$ such that

$$\Pr\left[H_n \ge \gamma_n\right] = 1 - \varepsilon \tag{3.14}$$

which leads to

$$M \le \frac{1}{\Pr\left[G_n \ge \gamma_n\right]} \tag{3.15}$$

These expressions can be computed directly using closed-form expressions for the non-central χ² distribution. For some channels, such closed forms are not available, and the bound must then be computed by Monte Carlo simulation; we will discuss the difficulty of estimating the second probability, $\Pr[G_n \ge \gamma_n]$, which decreases exponentially to 0, with this method.
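Following (3.12)-(3.15), γₙ can in fact be eliminated algebraically: the event Hₙ ≥ γₙ is equivalent to yₙ lying below a χ² quantile, and the same γₙ rescales into a threshold for the χ² variable of Gₙ. A sketch of this computation (our derivation and naming, assuming `scipy` is available):

```python
import numpy as np
from scipy.stats import ncx2

def meta_converse_rate_awgn(n, eps, P=1.0):
    """Meta-converse upper bound on the rate (bit/ch.use) for the real
    AWGN channel, from (3.12)-(3.15), with noise variance 1 and
    sigma_Y^2 = 1 + P. Pr[H_n >= gamma_n] = 1 - eps fixes a chi-square
    quantile h for y_n ~ chi2_n(n/P); the same gamma_n turns
    Pr[G_n >= gamma_n] into the CDF of y_n' ~ chi2_n(n + n/P)
    evaluated at h / (1 + P)."""
    h = ncx2.ppf(1.0 - eps, df=n, nc=n / P)
    pr_g = ncx2.cdf(h / (1.0 + P), df=n, nc=n + n / P)
    return -np.log2(pr_g) / n        # (1/n) log2 M upper bound
```

At SNR = 0 dB and Pe = 10⁻³ this should follow the meta-converse curve of Fig. 3.1, approaching the capacity of 0.5 bit per channel use from below as n grows.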

In Fig. 3.1, we plot the rate, in bits per channel use, against the block length n for a real-valued AWGN channel. For this example, we use the capacity-achieving input distribution $x \sim \mathcal{N}(0,P)$ to compute the DT bound. In the following chapters, we will use discrete input distributions for the AWGN channel in order to compare with the results found for partially coherent AWGN channels.

We see that the gap between the two curves, the DT bound and the converse, shrinks as the block length grows. This gives a good approximation of the maximal coding rate for this channel at the given error probability. We also know, from the definition of the capacity, that both curves tend toward it as n grows to infinity.


[Figure: rate (bit/ch.use) versus blocklength n, from 0 to 2000, showing the meta-converse, the DT bound and the capacity.]

Figure 3.1: DT and converse for AWGN channel - SNR = 0 - Pe = 10^-3


4

Phase noise channels

In this chapter we focus on channels impaired by phase noise. First, we present results for a uniform phase noise channel and the equations that lead to the DT bound. Then we focus on two more realistic channels: the AWGN channel impaired by uniform phase noise, and the AWGN channel impaired by Tikhonov phase noise.

4.1 Uniform phase noise channel

We consider a uniform phase noise channel where θ is the additive noise on the phase, distributed uniformly between −a and a, i.e., $\theta \sim \mathcal{U}(-a,a)$. If $x \in \mathcal{X}$ and $y \in \mathcal{Y}$ we have

$$y = x e^{i\theta} \tag{4.1}$$

and

$$y_k = x_k e^{i\theta_k} \tag{4.2}$$

Using an interleaver, we can assume that the noise is memoryless; then the conditional output distribution factorizes as

$$p(y|x) = \prod_{k=1}^{n} p(y_k|x_k) \tag{4.3}$$

Notice that the phase noise cannot change the magnitude of a sample, i.e., $|x_k| = |y_k|$. Therefore, for this channel we only consider constellations whose points all belong to the same ring.


4.1.1 Information density

The information density is defined by

$$i(x,y) = \log_2\frac{P_{Y|X=x}(y)}{P_Y(y)} \tag{4.4}$$

We know that $p(y_k|x_k) = p(\theta_k)$; thus we have

$$p(\theta_k) = \begin{cases}\dfrac{1}{2a}, & \text{if } |\theta_k| \le a\\[4pt] 0, & \text{otherwise}\end{cases} \tag{4.5}$$

Using (4.3) and (4.5) we obtain the expression of the conditional output distribution

$$P_{Y|X=x}(y) = \prod_{i=1}^{n} p(y_i|x_i) = \begin{cases}\dfrac{1}{(2a)^n}, & \text{if } y \in \mathcal{Y}_x\\[4pt] 0, & \text{otherwise}\end{cases} \tag{4.6}$$

where

$$\mathcal{Y}_x = \left\{y \in \mathcal{Y} :\; \forall i \in [1,n],\ \left|\arg\frac{y_i}{x_i}\right| \le a\right\} \tag{4.7}$$

Then, using the law of total probability, we obtain the output distribution

$$P_Y(y) = \frac{1}{|\mathcal{X}|}\sum_{x} p(y|x) = \frac{|\mathcal{X}_y|}{|\mathcal{X}|\,(2a)^n} \tag{4.8}$$

where

$$\mathcal{X}_y = \left\{x \in \mathcal{X} :\; \forall i \in [1,n],\ \left|\arg\frac{y_i}{x_i}\right| \le a\right\} \tag{4.9}$$

Finally, the information density for the uniform phase noise channel is given by the following expression:

$$i(x,y) = \begin{cases}\log_2\dfrac{|\mathcal{X}|}{|\mathcal{X}_y|}, & \text{if } y \in \mathcal{Y}_x\\[4pt] 0, & \text{otherwise}\end{cases} \tag{4.10}$$

4.1.2 Dependence testing bound

Since the capacity-achieving distribution for this channel is not known, we will work with a given input constellation.

Let the input alphabet be an m-PSK constellation. Let n be the block length, M the size of the codebook, and E(M) the set of all codebooks of size M. The codebook is chosen at random.


Given a probability of error ε, we want to find the largest M such that the following expression holds:

$$\varepsilon \le \mathbb{E}\left[e^{-\left(i(x,y) - \log_2\frac{M-1}{2}\right)^+}\right] \tag{4.11}$$

Since we are using a discrete input, we can rewrite the expression by expanding the expectation over $P_{XY}$:

$$\varepsilon \le \sum_{x\in\mathcal{X}}\int_{y\in\mathcal{Y}} p(x,y)\,e^{-\left(i(x,y) - \log_2\frac{M-1}{2}\right)^+}\,dy \tag{4.12}$$

Let $z(x,y) = |\mathcal{X}_y|$. We obtain

$$\varepsilon \le \sum_{z\in\mathbb{N}} P(z)\,e^{-\left(\log_2\frac{2M}{(M-1)z}\right)^+} \tag{4.13}$$

The probability P(z) can be expanded as follows:

$$P(z) = \sum_{x\in\mathcal{X}}\int_{y\in\mathcal{Y}} P(z|x,y)\,p(y|x)\,p(x)\,dy \tag{4.14}$$

which, inserted into (4.13), gives

$$\varepsilon \le \sum_{z\in\mathbb{N}}\left(\sum_{x\in\mathcal{X}}\int_{y\in\mathcal{Y}} P(z|x,y)\,p(y|x)\,p(x)\,dy\right) e^{-\left(\log_2\frac{2M}{(M-1)z}\right)^+} \tag{4.15}$$

Since the input is an m-PSK constellation and the phase noise is uniform, we know the expressions of p(y|x) and p(x):

$$\varepsilon \le \sum_{z\in\mathbb{N}}\left(\sum_{x\in\mathcal{X}}\int_{y\in\mathcal{Y}_x} P(z|x,y)\,\frac{1}{(2a)^n}\,\frac{1}{m^n}\,dy\right) e^{-\left(\log_2\frac{2M}{(M-1)z}\right)^+} \tag{4.16}$$

Then, we can simplify the equation using the symmetry of the problem, by fixing a realization $x_0$ of X:

$$\varepsilon \le \sum_{z\in\mathbb{N}}\left(\int_{y\in\mathcal{Y}_{x_0}} P(z|x_0,y)\,\frac{1}{(2a)^n}\,dy\right) e^{-\left(\log_2\frac{2M}{(M-1)z}\right)^+} \tag{4.17}$$

and by expanding the integration over y we obtain

$$\varepsilon \le \frac{1}{(2a)^n}\sum_{z\in\mathbb{N}}\left(\int_{-a}^{a}\cdots\int_{-a}^{a} P(z|x_0,y)\,dy_1\cdots dy_n\right) e^{-\left(\log_2\frac{2M}{(M-1)z}\right)^+} \tag{4.18}$$

Let V(x,y) be the number of neighbours in $\mathcal{X}$ for $y \in \mathcal{Y}_x$, and let V = V(x,y) − 1. Then the probability P(z|x,y) can be written as follows:


$$P(z|x,y) = \binom{V}{z}\;\prod_{j=0}^{V-1}\frac{1}{m^n - 1 - j}\;\prod_{j=0}^{z-1}(M - 1 - j)\;\prod_{j=0}^{V-z-1}(m^n - M - j) \tag{4.19}$$

with P(z|x,y) = 0 if z > V(x,y).

Given the phase noise parameter a and the number of points m in one ring of the constellation, we can determine the function $v(y_k)$, which gives the number of points in one ring of the constellation that the output $y_k$ can come from. As long as the points in each ring are equally spaced, v is a simple function taking only two values, $v_1$ and $v_2$. We can then define two constants $d_1$ and $d_2$ by the following expressions:

$$d_1 = \int_{y_k-a}^{y_k+a}\mathbf{1}(v(y_k) = v_1)\,dy_k \tag{4.20}$$

$$d_2 = \int_{y_k-a}^{y_k+a}\mathbf{1}(v(y_k) = v_2)\,dy_k \tag{4.21}$$

Finally, we have the following expression to compute the bound:

$$\varepsilon \le \frac{1}{(2a)^n}\sum_{z=0}^{\max(v_1^n,\,v_2^n)}\left(\sum_{u=0}^{n}\binom{n}{u}\,d_1^{\,u}\,d_2^{\,n-u}\;p\!\left(z \,\middle|\, V = v_1^{u} v_2^{\,n-u}\right)\right) e^{-\left(\log_2\frac{2M}{(M-1)(z+1)}\right)^+} \tag{4.22}$$

The complexity of this calculation depends on the complexity of $p(z|V = v_1^u v_2^{n-u})$. This is a product of V terms, so the complexity is O(2ⁿ); in practice, this expression cannot be computed for interesting block lengths. In the following sections we use partially coherent AWGN channels, for which the expressions do not require calculating the probability p(z|x,y), making the computation much faster.


4.2 Uniform phase noise AWGN channel

We consider an AWGN channel impaired by a uniform phase noise $\theta \sim \mathcal{U}(-a,a)$. If $x \in \mathcal{X}$, $y \in \mathcal{Y}$ and $t \sim \mathcal{N}(0,\sigma^2)$, we have

$$y = x e^{i\theta} + t \tag{4.23}$$

and

$$y_k = x_k e^{i\theta_k} + t_k \tag{4.24}$$

For this channel we can define the information density as follows.

4.2.1 Information density

We need the expressions of both $P_Y$ and $P_{Y|X}$ to determine i(x,y). First, we know that the noise is memoryless, which allows us to write

$$P_{Y|X=x}(y) = \prod_{k=1}^{n} p(y_k|x_k) \tag{4.25}$$

where $x_k$, $t_k$ and $y_k$ can be written in polar form as

$$x_k = a_k e^{i b_k}, \qquad t_k = c_k e^{i d_k}, \qquad y_k = \alpha_k e^{i\beta_k}$$

This gives the following expression for the conditional output distribution (in polar coordinates):

$$p(y_k|x_k) = \int_{\theta_k} \alpha_k\,p(\theta_k|x_k)\,p(t_k|\theta_k,x_k)\,d\theta_k \tag{4.26}$$

$$= \int_{-a}^{a}\frac{\alpha_k}{2a}\,\frac{1}{2\pi\sigma^2}\exp\left(-\frac{|t_k|^2}{2\sigma^2}\right) d\theta_k \tag{4.27}$$

where

$$|t_k|^2 = \left|y_k - x_k e^{i\theta_k}\right|^2 \tag{4.28}$$

We expand (4.28) as follows:

$$|t_k|^2 = \left|\alpha_k\cos\beta_k + i\,\alpha_k\sin\beta_k - a_k\cos(\theta_k + b_k) - i\,a_k\sin(\theta_k + b_k)\right|^2 \tag{4.29}$$

$$= \left(\alpha_k\cos\beta_k - a_k\cos(\theta_k + b_k)\right)^2 + \left(\alpha_k\sin\beta_k - a_k\sin(\theta_k + b_k)\right)^2 \tag{4.30}$$

$$= \alpha_k^2 + a_k^2 - 2\,\alpha_k a_k\cos(\theta_k + b_k - \beta_k) \tag{4.31}$$


which, used in (4.26), gives

$$p(y_k|x_k) = \alpha_k\,\frac{\exp\left(-\frac{a_k^2+\alpha_k^2}{2\sigma^2}\right)}{(2a)(2\pi\sigma^2)}\int_{-a}^{a}\exp\left(\frac{\alpha_k a_k\cos(\theta_k + b_k - \beta_k)}{\sigma^2}\right) d\theta_k \tag{4.32}$$

Then, using the law of total probability, we obtain the expression of the output probability density:

$$p(y_k) = \sum_{u=0}^{m-1} p(x_{u,k})\,p(y_k|x_{u,k}) \tag{4.33}$$

Now, we need to choose the input distribution to determine the expression of the information density. We consider M equiprobable codewords $(c_1,\ldots,c_M)$ whose symbols are uniformly distributed over the m constellation points. Then we have

$$p(y_k) = \sum_{u=0}^{m-1}\frac{\alpha_k}{m}\,\frac{\exp\left(-\frac{a_{u,k}^2+\alpha_k^2}{2\sigma^2}\right)}{(2a)(2\pi\sigma^2)}\int_{-a}^{a}\exp\left(\frac{\alpha_k a_{u,k}\cos(\theta_k + b_{u,k} - \beta_k)}{\sigma^2}\right) d\theta_k \tag{4.34}$$

which finally gives the following expression for the information density:

$$i(x,\alpha e^{i\beta}) = \sum_{k=1}^{n}\log_2\frac{\dfrac{\exp\left(-\frac{a_k^2+\alpha_k^2}{2\sigma^2}\right)}{(2a)(2\pi\sigma^2)}\displaystyle\int_{-a}^{a}\exp\left(\frac{\alpha_k a_k\cos(\theta_k+b_k-\beta_k)}{\sigma^2}\right) d\theta_k}{\displaystyle\sum_{u=0}^{m-1}\frac{1}{m}\,\dfrac{\exp\left(-\frac{a_{u,k}^2+\alpha_k^2}{2\sigma^2}\right)}{(2a)(2\pi\sigma^2)}\displaystyle\int_{-a}^{a}\exp\left(\frac{\alpha_k a_{u,k}\cos(\theta_k+b_{u,k}-\beta_k)}{\sigma^2}\right) d\theta_k} \tag{4.35}$$

and after some simplifications we obtain

$$i(x,\alpha e^{i\beta}) = \sum_{k=1}^{n}\log_2\frac{\exp\left(-\frac{a_k^2}{2\sigma^2}\right)\displaystyle\int_{-a}^{a}\exp\left(\frac{\alpha_k a_k\cos(\theta_k+b_k-\beta_k)}{\sigma^2}\right) d\theta_k}{\displaystyle\sum_{u=0}^{m-1}\frac{1}{m}\exp\left(-\frac{a_{u,k}^2}{2\sigma^2}\right)\displaystyle\int_{-a}^{a}\exp\left(\frac{\alpha_k a_{u,k}\cos(\theta_k+b_{u,k}-\beta_k)}{\sigma^2}\right) d\theta_k} \tag{4.36}$$

We recognize in the information density expression the following integral:

$$\int_{0}^{s} e^{\kappa\cos x}\,dx, \qquad s \le \pi \tag{4.37}$$

A closed-form expression exists only for $s = \pi$, through the modified Bessel function of the first kind. In that case (4.32) becomes

$$p(\alpha_k,\beta_k|a_k,b_k) = \alpha_k\,\frac{\exp\left(-\frac{a_k^2+\alpha_k^2}{2\sigma^2}\right)}{2\pi\sigma^2}\,\frac{1}{2\pi}\int_{-\pi}^{\pi}\exp\left(\frac{\alpha_k a_k\cos(\theta_k + b_k - \beta_k)}{\sigma^2}\right) d\theta_k \tag{4.38}$$

and, since the integration runs over a full period of the cosine, the phase offset $b_k - \beta_k$ can be dropped:

$$p(\alpha_k,\beta_k|a_k,b_k) = \alpha_k\,\frac{\exp\left(-\frac{a_k^2+\alpha_k^2}{2\sigma^2}\right)}{2\pi\sigma^2}\,\frac{1}{2\pi}\int_{-\pi}^{\pi}\exp\left(\frac{\alpha_k a_k\cos\theta_k}{\sigma^2}\right) d\theta_k \tag{4.39}$$


We notice that the expression is independent of $\beta_k$, the phase of $y_k$:

$$p(\alpha_k|a_k,b_k) = \int_{-\pi}^{\pi} p(\alpha_k,\beta_k|a_k,b_k)\,d\beta_k \tag{4.40}$$

$$= 2\pi\,p(\alpha_k,\beta_k|a_k,b_k) \tag{4.41}$$

which leads to

$$p(\alpha_k|a_k) = \frac{\alpha_k}{\sigma^2}\exp\left\{-\frac{a_k^2+\alpha_k^2}{2\sigma^2}\right\} I_0\!\left(\frac{\alpha_k a_k}{\sigma^2}\right) \tag{4.42}$$

where $I_0(\cdot)$ is the zeroth-order modified Bessel function of the first kind.

M-PSK

Considering an m-PSK constellation, we have $a_k = \sqrt{P}$ for every point, and $P_Y$ can be written as

$$P_Y(\alpha_k) = \sum_{u=1}^{m} p(a_{u,k})\,p(\alpha_k|a_{u,k}) = p(\alpha_k|a_k) \tag{4.43}$$

which leads to i(x,y) = 0. Since this is a non-coherent AWGN channel with an m-PSK constellation, it is clear that no information can be sent through the channel.

Amplitude modulation

Now, we consider an amplitude modulation. If we have R points in our constellation, with amplitudes $a_r = \sqrt{P_r}$, then we have

$$i(x,y) = i(a,\alpha) = \sum_{k=1}^{n}\log_2\frac{\exp\left\{-\frac{a_k^2}{2\sigma^2}\right\}\,I_0\!\left(\frac{\alpha_k a_k}{\sigma^2}\right)}{\frac{1}{R}\sum_{r=1}^{R}\exp\left\{-\frac{P_r}{2\sigma^2}\right\}\,I_0\!\left(\frac{\alpha_k\sqrt{P_r}}{\sigma^2}\right)} \tag{4.44}$$

(with equiprobable amplitudes, whence the factor 1/R). Once again, given the channel, the phase carries no information, so we can work with only the magnitude of each point.

4.2.2 Dependence testing bound

Now we want to compute the DT bound for this channel. We first pick an input constellation and then use (4.44) in (2.9), which leads to the expression

$$\varepsilon \le \mathbb{E}\left[\exp\left(-\left|\sum_{k=1}^{n}\log_2\frac{\exp\left\{-\frac{a_k^2}{2\sigma^2}\right\}\,I_0\!\left(\frac{\alpha_k a_k}{\sigma^2}\right)}{\frac{1}{R}\sum_{r=1}^{R}\exp\left\{-\frac{P_r}{2\sigma^2}\right\}\,I_0\!\left(\frac{\alpha_k\sqrt{P_r}}{\sigma^2}\right)} - \log_2\frac{M-1}{2}\right|^+\right)\right] \tag{4.45}$$

We use a Monte Carlo simulation to calculate this expression.
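Such a Monte Carlo evaluation of (4.45) can be sketched as follows (our illustration; the function name, parameters and the equiprobable-amplitude assumption are ours; since the information density is in bits, exp(−|·|⁺) is evaluated as 2^(−|·|⁺)):

```python
import numpy as np
from scipy.special import i0

def dt_bound_pc_awgn_am(n, log2M, amps, sigma2, trials=4000, rng=None):
    """Monte Carlo estimate of (4.45) for the non-coherent
    (uniform-phase) AWGN channel with an equiprobable amplitude
    constellation amps (a_r = sqrt(P_r)); sigma2 is the per-component
    noise variance. Moderate SNRs only: i0 overflows for large args."""
    rng = np.random.default_rng(rng)
    amps = np.asarray(amps, float)
    a = rng.choice(amps, size=(trials, n))               # sent amplitudes
    theta = rng.uniform(-np.pi, np.pi, size=(trials, n)) # uniform phase
    t = rng.normal(scale=np.sqrt(sigma2), size=(trials, n)) \
        + 1j * rng.normal(scale=np.sqrt(sigma2), size=(trials, n))
    alpha = np.abs(a * np.exp(1j * theta) + t)           # received magnitudes
    num = np.exp(-a**2 / (2 * sigma2)) * i0(alpha * a / sigma2)
    den = np.mean(np.exp(-amps[:, None, None]**2 / (2 * sigma2))
                  * i0(alpha * amps[:, None, None] / sigma2), axis=0)
    info = np.sum(np.log2(num / den), axis=1)            # density (4.44)
    thr = np.log2(2.0**log2M - 1.0) - 1.0                # log2((M-1)/2)
    return np.mean(2.0 ** (-np.maximum(info - thr, 0.0)))
```

With two well-separated amplitudes the estimate is tiny for rates far below 1 bit per channel use and approaches 1 when log₂M / n reaches the 1 bit/symbol limit of a 2-point constellation.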


Amplitude modulation input

For this constellation, we consider m points equally spaced with average power P = 1. In Fig. 4.1 we plot the rate, in bits per channel use, against the block length n. We present three constellations, with 8, 16 and 32 points. The Gaussian noise is defined by SNR = 15 dB, and the error probability is Pe = 10^-3. For each constellation, the dependence testing bound and the constrained capacity are plotted.

[Figure: rate (bit/ch.use) versus blocklength n, from 0 to 2000; DT bound and constrained capacity for the 8-AM, 16-AM and 32-AM constellations.]

Figure 4.1: DT and constraint capacities for three uniform AM constellations.

We see in Fig. 4.1 that, for a given constellation, the two curves, the DT bound and the constrained capacity, are close when the block length is large. We also notice that the gap between the two curves decreases faster when there are fewer points in the constellation.

4.3 Tikhonov phase noise channel

A more realistic channel model for a system impaired by phase noise is the Tikhonov AWGN channel. We have a closed-form expression for the noise and, using Lapidoth's result in [7], we can also obtain one for the conditional output.


We choose to study this model because it is a good approximation of the phase error induced by a first-order phase-locked loop [? ]. We consider $t \sim \mathcal{N}(0,\sigma^2)$, the Gaussian noise, and θ the phase noise, distributed according to the Tikhonov distribution presented below. We have

$$y = x e^{i\theta} + t \tag{4.46}$$

and

$$y_k = x_k e^{i\theta_k} + t_k \tag{4.47}$$

Tikhonov distribution

The Tikhonov distribution, also known as the von Mises distribution [? ], is an approximation of the wrapped Gaussian, which is defined as follows:

$$ p_W(\theta) = \sum_{k\in\mathbb{Z}} p_\Theta(\theta + 2k\pi) = \frac{1}{\sqrt{2\pi\sigma^2}}\sum_{k\in\mathbb{Z}} e^{-(\theta-2k\pi)^2/2\sigma^2} \tag{4.48} $$

Its support is [−π, π] and it depends on a parameter ρ. The probability density function is given by

$$ p(\theta|\rho) = \frac{e^{\rho\cos\theta}}{2\pi I_0(\rho)} \tag{4.49} $$

In Fig. 4.2 we plot the Tikhonov distribution for three values of the parameter ρ. The larger the parameter, the smaller the phase noise.
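A quick numerical check of (4.49), written in exponentially scaled form so that large ρ does not overflow I0, confirms that the density integrates to one on [−π, π]; the function name is ours:

```python
import numpy as np
from scipy.special import i0e  # i0e(x) = exp(-x) * I0(x)

def tikhonov_pdf(theta, rho):
    # e^{rho cos(theta)} / (2 pi I0(rho)), rewritten with i0e for stability
    return np.exp(rho * (np.cos(theta) - 1.0)) / (2 * np.pi * i0e(rho))

# Riemann-sum check that the density integrates to 1 over [-pi, pi]
theta = np.linspace(-np.pi, np.pi, 200001)
dtheta = theta[1] - theta[0]
masses = [np.sum(tikhonov_pdf(theta, rho)) * dtheta for rho in (10, 100, 500)]
```

Each entry of `masses` equals 1 up to quadrature error, and a larger ρ concentrates the density around 0, matching Fig. 4.2.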

4.3.1 Information density

First, we need to determine the expression for the conditional output distribution. Since the noise is memoryless, we can focus on p(y_k|x_k).

$$ p(y_k|x_k) = \int_{-\pi}^{\pi} p_n(y_k|x_k,\theta_k)\, p_\theta(\theta_k)\, d\theta_k \tag{4.50} $$

Using the expressions of the Gaussian pdf and the Tikhonov pdf, we have

$$ p(y_k|x_k) = \int_{-\pi}^{\pi} \frac{1}{2\pi\sigma^2}\exp\left(-\frac{|y_k - x_k e^{j\theta_k}|^2}{2\sigma^2}\right) \frac{e^{\rho\cos\theta_k}}{2\pi I_0(\rho)}\, d\theta_k \tag{4.51} $$

$$ = \frac{1}{(2\pi)^2\sigma^2 I_0(\rho)} \int_{-\pi}^{\pi} \exp\left(-\frac{|y_k - x_k e^{j\theta_k}|^2}{2\sigma^2} + \rho\cos\theta_k\right) d\theta_k \tag{4.52} $$


Figure 4.2: Tikhonov probability density function vs. angle (rad), for ρ = 500, 100 and 10.

We can now expand the squared magnitude in the exponential:

$$ |y_k - x_k e^{j\theta_k}|^2 = |y_k|^2 + |x_k|^2 - y_k^* x_k e^{j\theta_k} - y_k x_k^* e^{-j\theta_k} \tag{4.53} $$

$$ = |y_k|^2 + |x_k|^2 - y_k^* x_k(\cos\theta_k + j\sin\theta_k) - y_k x_k^*(\cos\theta_k - j\sin\theta_k) \tag{4.54} $$

$$ = |y_k|^2 + |x_k|^2 - 2\Re(y_k^* x_k)\cos\theta_k + 2\Im(y_k^* x_k)\sin\theta_k \tag{4.55} $$

which gives the following expression for the conditional output distribution:

$$ p(y_k|x_k) = \frac{\exp\left(-\frac{|y_k|^2+|x_k|^2}{2\sigma^2}\right)}{(2\pi)^2\sigma^2 I_0(\rho)} \int_{-\pi}^{\pi} \exp\left[\left(\frac{\Re(y_k^* x_k)}{\sigma^2} + \rho\right)\cos\theta_k - \frac{\Im(y_k^* x_k)}{\sigma^2}\sin\theta_k\right] d\theta_k \tag{4.56} $$

Because of the circular symmetry of the problem, we work in polar coordinates, defining x_k and y_k by

$$ x_k = a_k e^{jb_k}, \qquad y_k = \alpha_k e^{j\beta_k} $$

22

Page 31: Coding for phase noise channels

CHAPTER 4. PHASE NOISE CHANNELS

Then we have (dropping the time index k on the amplitudes and phases)

$$ y_k^* x_k = a\alpha\, e^{j(b-\beta)} \tag{4.57} $$

and

$$ \Re(y_k^* x_k) = a\alpha\cos(b-\beta) \tag{4.58} $$

$$ \Im(y_k^* x_k) = a\alpha\sin(b-\beta) \tag{4.59} $$

Using both equations we define

$$ u = \frac{a\alpha}{\sigma^2} \tag{4.60} $$

and

$$ A = \left(\frac{\Re(y_k^* x_k)}{\sigma^2} + \rho\right)^2 + \left(\frac{\Im(y_k^* x_k)}{\sigma^2}\right)^2 \tag{4.61} $$

which leads to

$$ p(y_k|x_k) = \frac{\alpha \exp\left(-\frac{a^2+\alpha^2}{2\sigma^2}\right)}{(2\pi)^2\sigma^2 I_0(\rho)} \int_{-\pi}^{\pi} \exp\left[\sqrt{A}\left(\cos\theta_k\, \frac{u\cos(b-\beta)+\rho}{\sqrt{A}} - \sin\theta_k\, \frac{u\sin(b-\beta)}{\sqrt{A}}\right)\right] d\theta_k \tag{4.62} $$

where the extra factor α is the Jacobian of the change to polar output coordinates. We defined A such that

$$ A = (u\cos(b-\beta)+\rho)^2 + (u\sin(b-\beta))^2 \tag{4.63} $$

so we can find z such that

$$ \cos(z) = \frac{u\cos(b-\beta)+\rho}{\sqrt{A}} \tag{4.64} $$

$$ \sin(z) = \frac{u\sin(b-\beta)}{\sqrt{A}} \tag{4.65} $$

Then we have

$$ p(y_k|x_k) = \frac{\alpha \exp\left(-\frac{a^2+\alpha^2}{2\sigma^2}\right)}{(2\pi)^2\sigma^2 I_0(\rho)} \int_{-\pi}^{\pi} \exp\left[\sqrt{A}\cos(\theta_k + z)\right] d\theta_k \tag{4.66} $$

which is equal to

$$ p(y_k|x_k) = \frac{\alpha \exp\left(-\frac{a^2+\alpha^2}{2\sigma^2}\right)}{(2\pi)^2\sigma^2 I_0(\rho)} \int_{-\pi}^{\pi} \exp\left[\sqrt{A}\cos\theta_k\right] d\theta_k \tag{4.67} $$

since the integrand is 2π-periodic in θ_k, so the constant shift by z does not change the integral over a full period.

Finally, we recognize in (4.67) the integral representation of the modified Bessel function of the first kind, $\int_{-\pi}^{\pi} e^{x\cos\theta}\, d\theta = 2\pi I_0(x)$, which gives the following expression for the conditional output distribution


$$ p_{Y|X}(y_k|x_k) = \frac{\alpha}{2\pi\sigma^2} \exp\left\{-\frac{\alpha^2 + a^2}{2\sigma^2}\right\} \frac{I_0(\sqrt{A})}{I_0(\rho)} \tag{4.68} $$
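The closed form (4.68) can be sanity-checked against direct numerical integration of (4.51). Note that (4.68) is a density in the polar output coordinates (α, β), so it carries the Jacobian factor α relative to the integral over the complex plane. The function names and the parameter values below are our own:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e  # i0e(x) = exp(-x) * I0(x)

def log_i0(x):
    return x + np.log(i0e(x))  # stable log I0

def p_closed(alpha, beta, a, b, sigma2, rho):
    """Conditional output density (4.68) in polar coordinates (alpha, beta)."""
    u = a * alpha / sigma2
    A = u**2 + 2 * rho * u * np.cos(b - beta) + rho**2
    logp = (np.log(alpha / (2 * np.pi * sigma2))
            - (alpha**2 + a**2) / (2 * sigma2)
            + log_i0(np.sqrt(A)) - log_i0(rho))
    return np.exp(logp)

def p_numeric(alpha, beta, a, b, sigma2, rho):
    """Direct integration of (4.51), times the polar Jacobian alpha."""
    x = a * np.exp(1j * b)
    y = alpha * np.exp(1j * beta)
    def f(th):
        gauss = np.exp(-abs(y - x * np.exp(1j * th))**2 / (2 * sigma2)) / (2 * np.pi * sigma2)
        tik = np.exp(rho * (np.cos(th) - 1.0)) / (2 * np.pi * i0e(rho))
        return gauss * tik
    val, _ = quad(f, -np.pi, np.pi)
    return alpha * val
```

Both routines agree to quadrature accuracy, which confirms the substitution steps (4.56)-(4.67).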

To find an expression for the information density, we also need the output distribution P_Y. In this thesis, we consider a discrete input constellation with M codewords (c_1, ..., c_M) and P(c_i) = 1/M.

Given this input, the output distribution can be computed as follows:

$$ P_Y(y_k) = \sum_{i=1}^{M} \frac{1}{M}\, p_{Y|X}(y_k|c_i) \tag{4.69} $$

and the information density is

$$ i(x,y) = i(ae^{jb}, \alpha e^{j\beta}) = \log_2 \frac{\exp\left\{-\frac{a^2}{2\sigma^2}\right\} I_0(\sqrt{A})}{\sum_{i=1}^{M} \frac{1}{M} \exp\left\{-\frac{a_i^2}{2\sigma^2}\right\} I_0(\sqrt{A_i})} \tag{4.70} $$

where

$$ A = \frac{a^2\alpha^2}{\sigma^4} + \frac{2\rho\, a\alpha}{\sigma^2}\cos(b-\beta) + \rho^2 \tag{4.71} $$

and A_i is defined analogously, with a and b replaced by the amplitude and phase of c_i.
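For completeness, here is how (4.70)-(4.71) might be evaluated in the log domain; the exponentially scaled Bessel function keeps large A manageable. The function and its argument names are our own:

```python
import numpy as np
from scipy.special import i0e, logsumexp  # i0e(x) = exp(-x) * I0(x)

def info_density_bits(x, y, points, sigma2, rho):
    """Per-symbol information density (4.70), in bits, for transmitted point x,
    received sample y, and equiprobable constellation `points` (complex)."""
    alpha, beta = np.abs(y), np.angle(y)

    def log_term(amp, ph):
        # log of exp{-amp^2/2sigma^2} I0(sqrt(A)) for a candidate point amp*e^{j*ph}
        u = amp * alpha / sigma2
        sA = np.sqrt(u**2 + 2 * rho * u * np.cos(ph - beta) + rho**2)
        return -amp**2 / (2 * sigma2) + sA + np.log(i0e(sA))

    num = log_term(np.abs(x), np.angle(x))
    den = logsumexp([log_term(np.abs(c), np.angle(c)) for c in points]) - np.log(len(points))
    return (num - den) / np.log(2)
```

Two deterministic checks follow from the definition: a single-point constellation gives zero information density, and the density can never exceed log2 M for an equiprobable M-point input.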

4.3.2 Dependence testing bound

We compare two constellations for the AWGN channel with Tikhonov phase noise: the classic 64-QAM constellation and a robust circular QAM constellation designed specifically for this channel [8]. The second constellation is designed to maximize the minimum distance between points of the constellation under phase noise; the algorithm presented in [8] gives an example of the constellation for a given phase noise. In Fig. 4.3 we plot both constellations and what happens to them through an AWGN channel with SNR = 30 dB impaired by the given phase noise ρ = 625 (σ_ph = 0.04).

In Fig. 4.4 we plot the robust circular 64-QAM constellation impaired by a Tikhonov phase noise with parameter ρ = 625.

In Fig. 4.5 we plot the DT curve and the constrained capacity for both constellations. We choose SNR = 0 dB, ρ = 625 and Pe = 10^{-3} for this simulation.

In Fig. 4.6 we plot the DT bound and the constrained capacity for both constellations. We choose SNR = 15 dB, ρ = 625 and Pe = 10^{-3} for this simulation.

In Fig. 4.7 we plot the DT bound for the robust circular 64-QAM constellation for two phase noise powers. We also plot the DT bound and the constrained capacity without phase noise, i.e., Gaussian noise only, and both the constrained and unconstrained capacity for this channel. We choose SNR = 15 dB and Pe = 10^{-3} for this simulation.


Figure 4.3: Regular and robust circular 64-QAM constellations, before and after transmission through the AWGN phase noise channel.

In Fig. 4.8 we plot the DT bound for the robust circular 64-QAM constellation for two probabilities of error. We also plot the constrained capacity for this channel. We choose SNR = 0 dB and ρ = 100 for this simulation.


Figure 4.4: Robust circular QAM constellation with phase noise.

Figure 4.5: DT curves for the robust circular and regular 64-QAM constellations, and the 64-QAM constrained capacity, in the AWGN phase noise channel with SNR = 0 dB.


Figure 4.6: DT curves and constrained capacities for the robust circular and regular 64-QAM constellations in the AWGN phase noise channel with SNR = 15 dB.

Figure 4.7: Comparison of DT curves for different phase noise powers (ρ = 100 and ρ = 625), together with the DT bound and constrained capacity without phase noise, and the channel capacity.


Figure 4.8: Comparison of DT curves for different probabilities of error (Pe = 10^{-3} and Pe = 10^{-9}), together with the constrained capacity.

In Fig. 4.3 we see both constellations used in our simulations. The robust circular 64-QAM has been designed for phase noise channels with noise parameter ρ = 625; the optimization criterion is the maximization of the minimum distance between two adjacent rings.

In Fig. 4.5 we notice that the DT curves are the same for both constellations: at this SNR the power of the Gaussian noise is too high for the difference between the constellations to matter. We can also notice that, even for a large blocklength (n = 2000), there is still a gap between the DT bound and the constrained capacity.

In Fig. 4.6 we do notice a difference between the constellations: for small blocklengths (n ≤ 100) the robust circular 64-QAM performs better than the regular 64-QAM. We also notice that the DT bound and the constrained capacity are tight, which gives us a better approximation of the maximal coding rate for this channel. From these curves, we can also see that at high SNR we approach the capacity much faster than at small SNR; the gap between the DT bound and the constrained capacity is tighter for large SNR.

In Fig. 4.7 we see the impact of the phase noise power on the DT bound, i.e., the loss in coding rate between two channels with different phase noise powers. We also see that with the parameter ρ = 625, the maximal coding rate is very close to the coding rate without any phase noise. We also notice the loss induced by our constellation with respect to the capacity-achieving distribution.

In Fig. 4.8 we see the impact of the probability of error on the coding rate. We notice


that the difference between these curves appears at small blocklengths: a smaller probability of error requires a larger blocklength to reach the same rate.

4.3.3 Meta converse bound

As derived earlier, we know the conditional output distribution for our channel:

$$ p_{Y|X}(R,\psi|r,\varphi) = \frac{R}{2\pi\sigma^2} \exp\left\{-\frac{R^2+r^2}{2\sigma^2}\right\} \frac{I_0(\nu)}{I_0(\rho)} \tag{4.72} $$

where

$$ \nu = \sqrt{\frac{R^2 r^2}{\sigma^4} + \frac{2\rho R r}{\sigma^2}\cos(\varphi-\psi) + \rho^2} $$

The meta converse bound requires picking an output distribution. In our case we use the following distribution, which is capacity achieving at high SNR [7]:

$$ R^2 \sim \chi_1^2 \tag{4.73} $$

$$ \psi \sim \mathcal{U}(-\pi,\pi) \tag{4.74} $$

$$ P_Y(R,\psi) = \frac{1}{2\pi\sqrt{2R^2}\,\Gamma\!\left(\tfrac{1}{2}\right)} \exp\left(-\frac{R^2}{2}\right) \tag{4.75} $$

Thus, we can define the information density given these two distributions:

$$ i(x,y) = \sum_{i=1}^{N} \log_2\left(\frac{p_{Y|X=x_i}(y_i)}{P_Y(y_i)}\right) \tag{4.76} $$

$$ i(x,y) = \sum_{i=1}^{N} \log_2 \frac{\frac{R_i}{2\pi\sigma^2}\exp\left\{-\frac{R_i^2+r_i^2}{2\sigma^2}\right\}\frac{I_0(\nu_i)}{I_0(\rho)}}{\frac{1}{2\pi\sqrt{2R_i^2}\,\Gamma(1/2)}\exp\left(-\frac{R_i^2}{2}\right)} \tag{4.77} $$

$$ i(x,y) = N\log_2\left(\frac{\sqrt{2}\,\Gamma(1/2)}{\sigma^2 I_0(\rho)}\right) + \sum_{i=1}^{N} \log_2\left(R_i^2\, I_0(\nu_i)\exp\left(-\frac{R_i^2+r_i^2}{2\sigma^2} + \frac{R_i^2}{2}\right)\right) \tag{4.78} $$

We denote by G_n and H_n the information density evaluated under P_Y and P_{Y|X}, respectively. To compute the converse bound, we first find the parameter γ_n satisfying

$$ P[H_n \ge \gamma_n] = 1 - \varepsilon \tag{4.79} $$

and then use this parameter to determine the probability

$$ P[G_n \ge \gamma_n] \tag{4.80} $$

The main issue for this bound is the calculation of the probability P[G_n ≥ γ_n]: this value decreases exponentially to 0 with n, and we do not have a closed-form expression to compute it. In the real-valued Gaussian case, we found a closed-form expression using the chi-square distribution.
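In practice, γ_n and the tail probability could be estimated by plain Monte Carlo, as sketched below; the naive estimator is exactly what breaks down once P[G_n ≥ γ_n] falls below roughly 1/(number of samples), which is the difficulty described above. The function names are ours:

```python
import numpy as np

def gamma_from_samples(h_samples, eps):
    """gamma_n with P[Hn >= gamma_n] ~= 1 - eps: the eps-quantile of the
    empirical distribution of Hn (Monte Carlo stand-in for (4.79))."""
    return np.quantile(np.asarray(h_samples), eps)

def tail_prob(g_samples, gamma):
    """Naive Monte Carlo estimate of P[Gn >= gamma_n] in (4.80); unreliable
    once the true probability is far below 1/len(g_samples)."""
    g = np.asarray(g_samples)
    return float(np.mean(g >= gamma))
```

For the exponentially small regime one would instead need importance sampling or a large-deviation approximation of the tail, which is left as future work in the text below.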


5 Conclusion

In this work we applied an achievability bound to phase noise channels in order to determine the maximal coding rate for such channels. First, we focused on a simple model with uniform phase noise. We managed to find a closed-form expression for the DT bound, but the computational complexity was an issue. Then we moved on to two partially coherent AWGN channels. For the AWGN channel impaired by uniform phase noise, a closed-form expression was found for the noncoherent case, which gave some results. Finally, we obtained results for the AWGN channel impaired by a Tikhonov phase noise, and investigated the impact of all parameters (noise powers and probability of error) on the curves.

Through both applications to phase noise channels, we see that the DT bound and the constrained capacity associated with a constellation are very close at high SNR. This gives us a good idea of the achievable rate for a given blocklength and error probability. We also investigated the impact of different phase noise powers and the rate loss they induce. Moreover, we can see on the curves that for large blocklengths (n > 500) and high SNR (SNR > 15 dB), more than 95% of the constrained capacity is already achieved. Given this information, we can evaluate the performance of codes and discuss the interest of using larger blocks. For small SNR, the gap between the achievability bound and the constrained capacity is still large; therefore, we do not have a tight approximation of the maximal coding rate.

As future work, one could study the meta converse further and try to find an approximation for the binary hypothesis testing, so as to compute the upper bound and obtain a tighter approximation. Another follow-up to this thesis would be to investigate existing codes and evaluate their performance over PC-AWGN channels.


Bibliography

[1] C. E. Shannon, A mathematical theory of communication, Bell System Technical Journal (1948) 379–423.

[2] A. Feinstein, A new basic theorem of information theory, IRE Trans. Inf. Theory (1954) 2–22.

[3] C. E. Shannon, Certain results in coding theory for noisy channels, Inf. Contr., vol. 1 (1957) 6–25.

[4] R. G. Gallager, A simple derivation of the coding theorem and some applications, IEEE Trans. Inf. Theory, vol. 11 (1965) 3–18.

[5] Y. Polyanskiy, H. V. Poor, S. Verdú, Channel coding rate in the finite blocklength regime, IEEE Trans. Inf. Theory, vol. 56, no. 5 (2010) 2307–2359.

[6] T. Cover, J. Thomas, Elements of Information Theory, Wiley, 2006.

[7] A. Lapidoth, On phase noise channels at high SNR, IEEE Trans. Inf. Theory.

[8] A. Papadopoulos, K. N. Pappi, G. K. Karagiannidis, H. Mehrpouyan, Robust circular QAM constellations in the presence of phase noise.
