EE 561 Communication Theory, Spring 2003. Instructor: Matthew Valenti. Date: Jan. 15, 2003. Lecture #2: Probability and Random Variables

DC Digital Communication PART2


Page 1: DC Digital Communication PART2

EE 561Communication Theory

Spring 2003

Instructor: Matthew Valenti

Date: Jan.15, 2003

Lecture #2

Probability and Random Variables

Page 2: DC Digital Communication PART2

Review/Preview

Last time: Course policies (syllabus). Block diagram of communication system.

This time: Review of probability and random variables.

Page 3: DC Digital Communication PART2

Random Events

When we conduct a random experiment, we can use set notation to describe possible outcomes.

Example: Roll a six-sided die. Possible outcomes: S = {1,2,3,4,5,6}. An event is any subset of possible outcomes: A = {1,2}. The complement of the event is A' = S − A = {3,4,5,6}. S is the certain event (the set of all outcomes); ∅ is the null event (empty set).

Another example: Transmit a data bit. Two complementary outcomes are:

• {Received correctly, received in error}


Page 4: DC Digital Communication PART2

More Set Theory

The union (or sum) of two events contains all sample points in either event. Let A = {1,2}, B = {5,6}, and C = {1,3,5}. Then find A ∪ B and A ∪ C.

The intersection of two events contains only those points that are common to both sets. Find A ∩ B and A ∩ C. If A ∩ B = ∅, then A and B are mutually exclusive.

Page 5: DC Digital Communication PART2

Probability

The probability P(A) is a number which measures the likelihood of event A.

Axioms of probability:

P(A) ≥ 0.

P(A) ≤ 1, and P(A) = 1 only if A = S (the certain event).

If A and B are two events such that A ∩ B = ∅ (i.e. A and B are mutually exclusive and don't overlap), then P(A ∪ B) = P(A) + P(B).

Page 6: DC Digital Communication PART2

Joint and Conditional Probability

Joint probability is the probability that both A and B occur: P(A,B) = P(A ∩ B).

Conditional probability is the probability that A will occur given that B has occurred:

P(A|B) = P(A,B) / P(B)   and   P(B|A) = P(A,B) / P(A)

Bayes' theorem: P(A,B) = P(A)P(B|A) = P(B)P(A|B), so

P(B|A) = P(A|B)P(B) / P(A)   and   P(A|B) = P(B|A)P(A) / P(B)

Page 7: DC Digital Communication PART2

Statistical Independence

Events A and B are statistically independent if P(A,B) = P(A)P(B). If A and B are independent, then:

• P(A|B) = P(A) and P(B|A) = P(B). Example:

• Flip a coin, call the result A ∈ {heads, tails}.
• Flip it again, call the result B ∈ {heads, tails}.
• P{A = heads, B = tails} = 0.25.
• P{A = heads} P{B = tails} = (0.5)(0.5) = 0.25.
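As a quick aside (not part of the original slides), a short MATLAB sketch can check this numerically by Monte Carlo simulation; the trial count N is an arbitrary choice:

% Monte Carlo check that two independent coin flips satisfy P(A,B) = P(A)P(B).
N = 1e6;                                  % number of trials (assumed value)
A = rand(N,1) < 0.5;                      % first flip: 1 = heads
B = rand(N,1) < 0.5;                      % second flip: 1 = heads
P_joint   = mean(A & ~B);                 % estimate of P{A = heads, B = tails}
P_product = mean(A) * mean(~B);           % P{A = heads} * P{B = tails}
fprintf('joint = %.4f, product = %.4f (both near 0.25)\n', P_joint, P_product);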

Page 8: DC Digital Communication PART2

Random Variables

A random variable X(s) is a real-valued function of the underlying sample space, s ∈ S. Typically, we just denote it as X.

• i.e. we suppress the dependence on s (it is assumed).

Random variables (R.V.'s) can be either discrete or continuous. A discrete R.V. can only take on a countable number of values.

• Example: The number of students in a class.

A continuous R.V. can take on a continuous range of values.

• Example: The voltage across a resistor.

Page 9: DC Digital Communication PART2

Cumulative Distribution Function

Abbreviated CDF. Also called the Probability Distribution Function. Definition: F_X(a) = P[X ≤ a]. Properties:

F(x) is monotonically nondecreasing. F(−∞) = 0, F(∞) = 1, and P[a < X ≤ b] = F(b) − F(a).

The CDF completely defines the random variable, but is cumbersome to work with.

Instead, we will use the pdf …

Page 10: DC Digital Communication PART2

Probability Density Function

Abbreviated pdf. Definition:

Properties: p(x) ≥ 0

Interpretation: Measures how fast the CDF is increasing. Measures how likely a RV is to lie at a particular value or

within a range of values.

p_X(x) = d F_X(x) / dx

∫ p_X(x) dx = 1   (integral over all x)

∫_a^b p_X(x) dx = P[a < X ≤ b] = F_X(b) − F_X(a)

Page 11: DC Digital Communication PART2

Example

Consider a fair die: P[X=1] = P[X=2] = … = P[X=6] = 1/6.

The CDF is a staircase of unit step functions:

F_X(x) = (1/6) Σ_{i=1}^{6} u(x − i)        (u(x) is the unit step function)

The pdf is a train of Dirac delta functions:

p_X(x) = (1/6) Σ_{i=1}^{6} δ(x − i)

[Figure: the CDF and the pdf plotted for 0 ≤ x ≤ 6.]

Page 12: DC Digital Communication PART2

Expected Values

Sometimes the pdf is unknown or cumbersome to specify.

Expected values are a shorthand way of describing a random variable.

The most important examples are: Mean:

Variance:

The expectation operator works with any function Y=g(X).

Mean:  E[X] = m_X = ∫ x p_X(x) dx

Variance:  σ_X² = E[(X − m_X)²] = ∫ (x − m_X)² p_X(x) dx = E[X²] − m_X²

E[Y] = E[g(X)] = ∫ g(x) p_X(x) dx

Page 13: DC Digital Communication PART2

Uniform Random Variables

The uniform random variable is the most basic type of continuous R.V.

The pdf of a uniform R.V. is constant over a finite range and zero elsewhere:

p(x) = 1/A   for m − A/2 ≤ x ≤ m + A/2,   and 0 elsewhere.

[Figure: rectangular pdf of height 1/A and width A, centered at the mean m.]

Page 14: DC Digital Communication PART2

Example

Consider a uniform random variable with pdf:

Compute the mean and variance.

p(x) = 1/10   for 0 ≤ x ≤ 10
p(x) = 0      elsewhere
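A short MATLAB check of this example (a sketch, not part of the original slides) computes the mean and variance numerically; the expected answers are m = 5 and σ² = 100/12:

% Numerical mean and variance for the uniform pdf p(x) = 1/10 on [0, 10].
p  = @(x) (1/10) * ones(size(x));              % the pdf
m  = integral(@(x) x .* p(x), 0, 10);          % mean: 5
m2 = integral(@(x) x.^2 .* p(x), 0, 10);       % second moment: 100/3
sigma2 = m2 - m^2;                             % variance: 100/12 = 8.33
fprintf('mean = %.4f, variance = %.4f\n', m, sigma2);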

Page 15: DC Digital Communication PART2

Probability Mass Function

The pdf of discrete RV’s consists of a set of weighted dirac delta functions. Delta functions can be cumbersome to work with.

Instead, we can define the probability mass function (pmf) for discrete random variables: p[x] = P[X=x]

Properties of the pmf:

p[x] ≥ 0

Σ_x p[x] = 1

Σ_{x=a}^{b} p[x] = P[a ≤ X ≤ b]

For the die-roll example: p[x] = 1/6 for 1 ≤ x ≤ 6.

Page 16: DC Digital Communication PART2

Binary Distribution

A binary or Bernoulli random variable has the following pmf:

p[0] = 1 − p,   p[1] = p

Used to model binary data (p = 1/2). Used to model the probability of bit error.

Mean: m_X = p.   Variance: σ_X² = p(1 − p).

Page 17: DC Digital Communication PART2

Binomial Distribution

Let Y = Σ_{i=1}^{n} X_i, where {X_i, i = 1,…,n} are i.i.d. Bernoulli random variables. Then:

p_Y[k] = C(n,k) p^k (1 − p)^(n−k)

where C(n,k) = n! / ( k! (n − k)! )

Mean: m_Y = np.   Variance: σ_Y² = np(1 − p).

Page 18: DC Digital Communication PART2

Example

Suppose we transmit a 31 bit sequence (code word).

We use an error correcting code capable of correcting 3 errors.

The probability that any individual bit in the code word is received in error is p=.001.

What is the probability that the code word is incorrectly decoded? i.e. Probability that more than 3 bits are in error.

Page 19: DC Digital Communication PART2

Example

Parameters: n=31, p=0.001, and t=3
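A brief MATLAB sketch of the calculation (assuming the decoder fails exactly when more than t bits are in error):

% P(incorrect decoding) = P(more than t = 3 of the n = 31 bits are in error).
n = 31;  p = 0.001;  t = 3;
Pcorrect = 0;
for k = 0:t
    Pcorrect = Pcorrect + nchoosek(n,k) * p^k * (1-p)^(n-k);   % binomial pmf terms
end
Perror = 1 - Pcorrect;                     % roughly 3e-8
fprintf('P(code word incorrectly decoded) = %.3e\n', Perror);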

Page 20: DC Digital Communication PART2

Pairs of Random Variables

We often need to consider a pair (X,Y) of RVs joint CDF: joint pdf: marginal pdf:

Conditional pdf:

Bayes rule:

Joint CDF:  F_{X,Y}(x,y) = P[X ≤ x, Y ≤ y]

Joint pdf:  p_{X,Y}(x,y) = ∂² F_{X,Y}(x,y) / ∂x ∂y

Marginal pdf:  p_X(x) = ∫ p_{X,Y}(x,y) dy

Conditional pdf:  p_{X|Y}(x|y) = p_{X,Y}(x,y) / p_Y(y)   and   p_{Y|X}(y|x) = p_{X,Y}(x,y) / p_X(x)

Bayes rule:  p_{X|Y}(x|y) = p_{Y|X}(y|x) p_X(x) / p_Y(y)   and   p_{Y|X}(y|x) = p_{X|Y}(x|y) p_Y(y) / p_X(x)

Page 21: DC Digital Communication PART2

Independence and Joint Moments

X and Y are independent if:  p_{X,Y}(x,y) = p_X(x) p_Y(y)

Correlation:  E[XY] = ∫∫ xy p_{X,Y}(x,y) dx dy

If E[XY] = 0, then X and Y are orthogonal.

Covariance:  μ_{X,Y} = E[(X − m_X)(Y − m_Y)] = ∫∫ (x − m_X)(y − m_Y) p_{X,Y}(x,y) dx dy

If μ_{X,Y} = 0, then X and Y are uncorrelated. If X and Y are independent, then they are uncorrelated.

Page 22: DC Digital Communication PART2

Random Vectors

Random vectors are an n-dimensional generalization of pairs of random variables. X = [X1,X2, …, Xn]’ Joint CDF & pdf are possible, but cumbersome. Marginalize by integrating out unwanted variables.

Mean is specified by a vector mx = [m1, m2, …, mn]’ Correlation and covariance specified by matrices:

Covariance matrix:
• M = [μ_{i,j}], i.e. a positive-definite matrix with (i,j)th element μ_{i,j}
• where μ_{i,j} = E[(X_i − m_i)(X_j − m_j)]
• If M is diagonal, then X is uncorrelated.

Linear transformation: Y = AX
• m_Y = A m_X
• M_Y = A M_X A'

Page 23: DC Digital Communication PART2

Central Limit Theorem

Let [X1,X2, …Xn] be a vector of n independent and identically distributed (i.i.d.) random variables, and let:

Then as n, Y will have a Gaussian distribution. This is the Central Limit Theorem. This theorem holds for (almost) any distribution of Xi’s.

Importance of Central Limit Theorem: Thermal noise results from the random movement of many

electrons --- modeled very well with Gaussian distribution. Interference from many equal power (identically distributed)

interferers in a CDMA system tends towards a Gaussian distribution.

Y = Σ_{i=1}^{n} X_i
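A small MATLAB demonstration (illustrative only; n and the number of trials are arbitrary choices) shows the normalized sum of i.i.d. uniform random variables approaching a Gaussian shape:

% Central limit theorem demo: normalized sum of n i.i.d. uniform(0,1) RVs.
n = 30;  trials = 1e5;
Y = sum(rand(n, trials), 1);                % each column is one realization of the sum
Y = (Y - n/2) / sqrt(n/12);                 % remove the mean, normalize to unit variance
histogram(Y, 60, 'Normalization', 'pdf');  hold on;
z = linspace(-4, 4, 200);
plot(z, exp(-z.^2/2)/sqrt(2*pi), 'r');      % standard Gaussian pdf for comparison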

Page 24: DC Digital Communication PART2

Gaussian Random Variables

The pdf of a Gaussian random variable is:

where m is the mean and σ² is the variance.

Properties of Gaussian random variables: A Gaussian R.V. is completely described by its mean and

variance.• Gaussian vector specified by mean and covariance matrix.

The sum of Gaussian R.V.’s is also Gaussian.• Linear transformation of Gaussian vector is Gaussian.

If two Gaussian R.V.’s are uncorrelated, then they are also independent.

• An uncorrelated Gaussian vector is independent.

p_X(x) = ( 1 / √(2πσ²) ) exp( −(x − m)² / (2σ²) )

Page 25: DC Digital Communication PART2

The Q Function

The Q function can be used to find the probability that the value of a Gaussian R.V. lies in a certain range.

The Q function is defined by:

where X is a Gaussian R.V. with zero mean and unit variance (i.e. σ² = 1).

Can also be defined as:

Q(z) = 1 − F_X(z)

Q(z) = ( 1/√(2π) ) ∫_z^∞ e^(−λ²/2) dλ

Page 26: DC Digital Communication PART2

Using the Q Function

If X is a Gaussian R.V. with mean m and variance 2, then the CDF of X is:

Approximation for large X Most Q function tables only go up to z=4 or z=6. For z>4 a good approximation is:

F_X(a) = P[X ≤ a] = 1 − Q( (a − m)/σ )

Q(z) ≈ ( 1 / (z√(2π)) ) e^(−z²/2)
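In MATLAB the Q function can be evaluated through erfc; the sketch below (an aside, not from the slides) compares it with the large-z approximation:

% Q(z) via the complementary error function, and the large-z approximation.
Qexact  = @(z) 0.5 * erfc(z / sqrt(2));
Qapprox = @(z) exp(-z.^2/2) ./ (z * sqrt(2*pi));
z = [4 5 6];
disp([z; Qexact(z); Qapprox(z)]);           % the approximation is close for z > 4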

Page 27: DC Digital Communication PART2

[Figure: Q-Function and Overbound. Q(z) and its overbound plotted on a logarithmic scale (10^-6 to 10^1) for 0 ≤ z ≤ 4.5.]

Page 28: DC Digital Communication PART2

EE 561Communication Theory

Spring 2003

Instructor: Matthew Valenti

Date: Jan.17, 2003

Lecture #3

Random Processes

Page 29: DC Digital Communication PART2

Review/Preview

Last time: Review of probability and random variables.

• Random variables, CDF, pdf, expectation.
• Pairs of RVs, random vectors, autocorrelation, covariance.
• Uniform, Gaussian, Bernoulli, and binomial RVs.

This time: Random processes.

Upcoming assignments: HW #1 is due in 1 week. Computer Assignment #1 will be posted soon.

Page 30: DC Digital Communication PART2

Random Variables vs. Random Processes

Random variables model unknown values. Random variables are numbers.

Random processes model unknown signals. Random processes are functions of time.

One Interpretation: A random process is just a collection of random variables. A random process evaluated at a specific

time t is a random variable. If X(t) is a random process then X(1), X(1.5),

and X(37.5) are all random variables.

Page 31: DC Digital Communication PART2

Random Variables

Random variables map the outcome of a random experiment to a number.

[Figure: the outcomes heads and tails in the sample space S are mapped to the points 1 and 0 on the real line X.]

Page 32: DC Digital Communication PART2

Random Processes

Random processes map the outcome of a random experiment to a signal (a function of time).

[Figure: each outcome (heads, tails) in S is associated with a sample function (the signal associated with the outcome); the collection of sample functions is the ensemble. A random process evaluated at a particular time is a random variable.]

Page 33: DC Digital Communication PART2

Random Process Terminology

The expected value, ensemble average or mean of a random process is:

The autocorrelation function (ACF) is:

Autocorrelation is a measure of how alike the random process is from one time instant to another.

Autocovariance:

m_x(t) = E[x(t)] = ∫ x p_{x(t)}(x) dx

φ(t_1, t_2) = E[x(t_1) x(t_2)] = ∫∫ x_1 x_2 p_{x(t_1) x(t_2)}(x_1, x_2) dx_1 dx_2

μ(t_1, t_2) = φ(t_1, t_2) − m(t_1) m(t_2)

Page 34: DC Digital Communication PART2

Mean and Autocorrelation

Finding the mean and autocorrelation is not as hard as it might appear! Why: because oftentimes a random process can

be expressed as a function of a random variable. We already know how to work with functions of

random variables. Example:

x(t) = sin(2πt + θ),   where θ is a random variable.

This is just a function g(θ) of θ:   g(θ) = sin(2πt + θ)

We know how to find the expected value of a function of a random variable:

E[x(t)] = E[g(θ)] = E[sin(2πt + θ)]

• To find this you need to know the pdf of θ.

Page 35: DC Digital Communication PART2

An Example

If θ is uniform between 0 and π, then:

m_x(t) = E[sin(2πt + θ)]
       = ∫ sin(2πt + θ) p(θ) dθ
       = ∫_0^π sin(2πt + θ) (1/π) dθ
       = (2/π) cos(2πt)

φ(t_1, t_2) = E[sin(2πt_1 + θ) sin(2πt_2 + θ)]
            = ∫ sin(2πt_1 + θ) sin(2πt_2 + θ) p(θ) dθ
            = ∫_0^π sin(2πt_1 + θ) sin(2πt_2 + θ) (1/π) dθ
            = (1/2) cos( 2π(t_2 − t_1) )
            = (1/2) cos(2πτ),   where τ = t_2 − t_1

Page 36: DC Digital Communication PART2

Stationarity A process is strict-sense stationary (SSS) if all

its joint densities are invariant to a time shift:

in general, it is difficult to prove that a random process is strict sense stationary.

A process is wide-sense stationary (WSS) if: The mean is a constant:

The autocorrelation is a function of time difference only:

If a process is strict-sense stationary, then it is also wide-sense stationary.

p_x( x(t) ) = p_x( x(t + t_0) )
p_x( x(t_1), x(t_2) ) = p_x( x(t_1 + t_0), x(t_2 + t_0) )
p_x( x(t_1), x(t_2), …, x(t_N) ) = p_x( x(t_1 + t_0), x(t_2 + t_0), …, x(t_N + t_0) )

m_x(t) = m_x

φ(t_1, t_2) = φ(τ),   where τ = t_2 − t_1

Page 37: DC Digital Communication PART2

Properties of the Autocorrelation Function

If x(t) is Wide Sense Stationary, then its autocorrelation function has the following properties:

Examples: Which of the following are valid ACF’s?

φ(0) = E[ x(t)² ]      (this is the second moment)

φ(τ) = φ(−τ)           (even symmetry)

φ(0) ≥ |φ(τ)|

Page 38: DC Digital Communication PART2

Power Spectral Density

Power Spectral Density (PSD) is a measure of a random process' power content per unit frequency. Denoted Φ(f). Units of W/Hz. Φ(f) is a nonnegative function. For real-valued processes, Φ(f) is an even function.

The total power of the process is found by:

P = ∫ Φ(f) df

The power within bandwidth B is found by:

P_B = ∫_B Φ(f) df

Page 39: DC Digital Communication PART2

Wiener-Khintchine Theorem

We can easily find the PSD of a WSS random processes.

Wiener-Khintchine theorem: If x(t) is a wide sense stationary random process,

then:

i.e. the PSD is the Fourier Transform of the ACF. Example:

Find the PSD of a WSS R.P with autocorrelation:

Φ(f) = F{ φ(τ) } = ∫ φ(τ) e^(−j2πfτ) dτ

φ(τ) = 1 − |τ|/T   for |τ| ≤ T,   and   φ(τ) = 0   for |τ| > T

Page 40: DC Digital Communication PART2

Example:

φ(τ) = 1 − |τ|/T   for |τ| ≤ T,   and   φ(τ) = 0   for |τ| > T
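A MATLAB sketch of this example (the correlation width T is an assumed value) numerically evaluates the Fourier transform of the triangular ACF and compares it with the closed form T sinc²(fT):

% Wiener-Khintchine: PSD of a triangular ACF of width T.
T = 1e-3;                                      % assumed value for illustration
phi = @(tau) max(1 - abs(tau)/T, 0);           % triangular autocorrelation
f = linspace(-4/T, 4/T, 401);
Phi = zeros(size(f));
for k = 1:numel(f)                             % numerical Fourier transform (phi is even)
    Phi(k) = integral(@(tau) phi(tau) .* cos(2*pi*f(k)*tau), -T, T);
end
s = sin(pi*f*T) ./ (pi*f*T);  s(f == 0) = 1;   % sinc(fT) with the f = 0 limit handled
plot(f, Phi, f, T*s.^2, '--');                 % matches T*sinc^2(fT)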

Page 41: DC Digital Communication PART2

White Gaussian Noise

A process is Gaussian if any n samples placed into a vector form a Gaussian vector. If a Gaussian process is WSS then it is SSS.

A process is white if the following hold: WSS; zero-mean, i.e. m_x(t) = 0; flat PSD, i.e. Φ(f) = constant.

A white Gaussian noise process: Is Gaussian. Is white.

• The PSD is Φ(f) = N_0/2.

• N_0/2 is called the two-sided noise spectral density.

Since it is WSS+Gaussian, then it is also SSS.

Page 42: DC Digital Communication PART2

Linear Systems

The output of a linear time invariant (LTI) system is found by convolution.

However, if the input to the system is a random process, we can’t find X(f).

Solution: use power spectral densities:

This implies that the output of a LTI system is WSS if the input is WSS.

[Block diagram: x(t) → h(t) → y(t)]

y(t) = x(t) * h(t)   ⇔   Y(f) = X(f) H(f)

Φ_y(f) = Φ_x(f) |H(f)|²

Page 43: DC Digital Communication PART2

Example

A white Gaussian noise process with PSD Φ(f) = N_0/2 = 10^-5 W/Hz is passed through an ideal lowpass filter with cutoff at 1 kHz.

Compute the noise power at the filter output.
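A one-line sketch of the computation (the flat PSD is integrated over the filter's passband, -B to B):

% Output noise power of an ideal lowpass filter driven by white noise.
No_over_2 = 1e-5;                  % two-sided PSD, W/Hz
B = 1e3;                           % cutoff frequency, Hz
P = No_over_2 * 2*B;               % integrate the flat PSD over [-B, B]: 0.02 W
fprintf('output noise power = %.3f W\n', P);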

Page 44: DC Digital Communication PART2

Ergodicity

A random process is said to be ergodic if it is ergodic in the mean and ergodic in correlation: Ergodic in the mean:

Ergodic in the correlation:

In order for a random process to be ergodic, it must first be Wide Sense Stationary.

If a R.P. is ergodic, then we can compute power three different ways: From any sample function:

From the autocorrelation:

From the Power Spectral Density:

Ergodic in the mean:  m_x = E{x(t)} = ⟨x(t)⟩

Ergodic in the correlation:  φ_x(τ) = E[x(t) x(t + τ)] = ⟨x(t) x(t + τ)⟩

where ⟨·⟩ is the time-average operator:  ⟨g(t)⟩ = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} g(t) dt

From any sample function:  P = ⟨|x(t)|²⟩ = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt

From the autocorrelation:  P_x = φ_x(0)

From the Power Spectral Density:  P_x = ∫ Φ_x(f) df

Page 45: DC Digital Communication PART2

Cross-correlation

If we have two random processes x(t) and y(t) we can define a cross-correlation function:

If x(t) and y(t) are jointly stationary, then the cross-correlation becomes:

If x(t) and y(t) are uncorrelated, then:

If x(t) and y(t) are independent, then they are also uncorrelated, and thus:

φ_xy(t_1, t_2) = E[x(t_1) y(t_2)] = ∫∫ x y p_{x(t_1) y(t_2)}(x, y) dx dy

φ_xy(τ) = E[x(t) y(t + τ)]

φ_xy(τ) = m_x m_y

E[x(t) y(t + τ)] = E[x(t)] E[y(t + τ)]

Page 46: DC Digital Communication PART2

Summary of Random Processes

A random process is a random function of time. Or conversely, an indexed set of random variables.

A particular realization of a random process is called a sample function.

[Figure: three sample functions x(t, s_1), x(t, s_2), x(t, s_3), each plotted versus t.]

Furthermore, a random process evaluated at a particular point in time is a random variable. A random process is ergodic in the mean if the time average of every sample function is the same as the expected value of the random process at any time.

Page 47: DC Digital Communication PART2

EE 561Communication Theory

Spring 2003

Instructor: Matthew Valenti

Date: Jan. 31, 2003

Lecture #8

Advanced Coding Techniques

Page 48: DC Digital Communication PART2

Review. Earlier this week:

Continued our discussion of quantization.
• Quantization = source coding for continuous sources.

Lloyd-Max algorithm.
• Optimal quantizer design if the source pdf is known.
• Analogous to Huffman coding.

Scalar vs. vector quantization.
• Performance can be improved by jointly encoding multiple samples. As the number of samples → ∞, R → R(D).
• Vector quantization can take advantage of correlation in the source.
• Even if the source is uncorrelated, vector quantization achieves a shaping gain.
• We computed the distortion of a vector quantizer.

Page 49: DC Digital Communication PART2

Preview

K-means algorithm.
• Optimal quantizer design when the source pdf is unknown.
• Analogous to the Lempel-Ziv algorithm.

This time: Practical source coding for speech.
• Differential pulse code modulation.
• Vocoding.

Reading: Proakis section 3.5

Page 50: DC Digital Communication PART2

Coding Techniques for Speech

All speech coding techniques employ quantization.

Many techniques also use additional strategies to exploit the characteristics of human speech. Companding: pass a nonuniform sample through a nonlinearity to make it more uniform, then sample with a uniform quantizer (μ-law and A-law).

DPCM. Vocoding.

Page 51: DC Digital Communication PART2

Differential PCM

Speech is highly correlated. Given several past samples of a speech signal it is

possible to predict the next sample to a high degree of accuracy by using a linear prediction filter.

The error of the prediction filter is much smaller than the actual signal itself.

In differential pulse-code modulation (DPCM), the error at the output of a prediction filter is quantized, rather than the voice signal itself. DPCM can produce “toll-quality” speech at half the

normal bit rate (i.e. 32 kbps).

Page 52: DC Digital Communication PART2

DPCM Block Diagram

[Block diagram: the analog input signal is sampled; the prediction-filter output is subtracted from each sample, and the resulting error is quantized and encoded into the DPCM signal, which is sent over the digital communications channel. The decoder adds the received error to its own prediction-filter output, and the DAC produces the analog output signal.]

Page 53: DC Digital Communication PART2

DPCM Issues

The linear prediction filter is usually just a feedforward (FIR) filter. The filter coefficients must be periodically transmitted.

In adaptive differential pulse-code modulation (ADPCM), the quantization levels can be changed on the fly. Helpful if the input pdf changes over time (nonstationary). Used in DECT (Digital European Cordless Telephone).

Delta modulation is a special case of DPCM where there are only two quantization levels. Only need to know the zero-crossings of the signal

While DPCM works well on speech, it does not work well for modem signals. Modem signals are uncorrelated.
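The sketch below is a minimal MATLAB illustration of the DPCM idea using a one-tap predictor and an assumed set of quantizer levels; real codecs use longer, adaptive predictors:

% Minimal DPCM sketch: quantize the prediction error, not the signal itself.
x = sin(2*pi*0.01*(0:999)) + 0.05*randn(1,1000);   % correlated, speech-like test signal
levels = -1:0.25:1;                                % assumed quantizer levels
xhat = zeros(size(x));  pred = 0;
for n = 1:numel(x)
    e = x(n) - pred;                               % prediction error
    [~, idx] = min(abs(levels - e));               % nearest quantizer level
    xhat(n) = pred + levels(idx);                  % reconstruction (as at the decoder)
    pred = xhat(n);                                % one-tap predictor: previous output
end
fprintf('reconstruction SNR = %.1f dB\n', 10*log10(var(x)/var(x - xhat)));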

Page 54: DC Digital Communication PART2

Tradeoff: Voice Quality versus Bit Rate

[Figure: Mean Opinion Score (MOS), from Unsatisfactory (1) to Excellent (5), versus bit rate (1.2 to 64 kbps). Waveform coders reach toll quality at high bit rates; vocoders operate at communications quality at low bit rates.]

The bit rate produced by the voice coder can be reduced at a price: increased hardware complexity and reduced perceived speech quality.

Page 55: DC Digital Communication PART2

Waveform Coding and Vocoding

For high bit rates (16-64 kbps) it is sufficient to just sample and quantize the time domain voice waveform. This is called waveform coding. DPCM is a type of waveform coding.

For low bit rate voice encoding it is necessary to mathematically model the voice and transmit the parameters associated with the model. Process of analysis and synthesis. Called vocoding. Most vocoding techniques are based on linear predictive

coding (LPC).

Page 56: DC Digital Communication PART2

Linear Predictive Coding

Linear predictive coding is similar to DPCM with the following exceptions:

The prediction filter is more complex.
• More taps in the FIR filter.

The filter coefficients are transmitted more frequently.
• Once every 20 milliseconds.
• The filter coefficients are quantized with a vector quantizer.

The error signal is not transmitted directly.
• The error signal can be thought of as a type of noise.
• Instead, the statistics of the "noise" are transmitted: the power level, and whether the sound is voiced (vowels) or unvoiced (consonants).
• This is where the big savings (in terms of bit rate) comes from.

Page 57: DC Digital Communication PART2

Vocoder Standards

Vocoding is the single most important technology enabling digital cell phones.

RPE-LTP: Regular Pulse Excited Long Term Prediction. Used in GSM (European Digital Cellular). 13 kbps.

VSELP: Vector Sum Excited Linear Predictive coder. Used in USDC, IS-136 (US Digital Cellular). 8 kbps.

QCELP: Qualcomm Code Excited Linear Predictive coder. Used in IS-95 (US Spread Spectrum Cellular). Variable bit rate (full, half, quarter, eighth). The original full rate was 9.6 kbps; the revised standard (QCELP13) uses 14.4 kbps.

Page 58: DC Digital Communication PART2

Preview of Next Week

[Block diagram of the complete communication system: analog input signal → sample → quantize → source encode → encryption → channel encoder → modulator → channel → demodulator → equalizer → channel decoder → decryption → source decoder → D/A conversion → analog output signal, with taps for direct digital input and digital output.]

We have been looking at the first part of the communication system (Part 1 of 4). Now, we will start looking at the next part of the communication system (Part 2 of 4).

Page 59: DC Digital Communication PART2

Modulation Principles

Almost all communication systems transmit data using a sinusoidal carrier waveform. Electromagnetic signals propagate well, and the choice of carrier frequency allows placement of the signal in an arbitrary part of the spectrum.

Modulation is implemented in practice by: processing the digital information at baseband; pulse shaping and filtering of the digital waveform; mixing the baseband signal with a signal from an oscillator to bring it up to RF; and filtering, amplifying, and coupling the radio frequency (RF) signal to the antenna.

Page 60: DC Digital Communication PART2

Modulator: Simplified Block Diagram

[Block diagram: data bits at the data rate enter the baseband processing block (source coding, channel coding, etc.); code bits (symbols) at the symbol rate drive the pulse-shaping filter; the digital/analog converter output is oversampled at roughly 10x the symbol rate; the baseband signal is mixed with cos(2πf_c t), then filtered and amplified before the antenna. The mixer separates the baseband section from the RF section.]

Page 61: DC Digital Communication PART2

Modulation

Modulation shifts the spectrum of a baseband signal so that it becomes a bandpass signal.

A bandpass signal has non-negligible spectrum only about some carrier frequency fc >> 0 Note: the bandwidth of a bandpass signal is the range

of positive frequencies for which the spectrum is non-negligible.

Unless otherwise specified, the bandwidth of a bandpass signal is twice the bandwidth of the baseband signal used to create it.

[Figure: a baseband spectrum of bandwidth B is shifted to the carrier frequency, producing a bandpass spectrum of bandwidth 2B.]

Page 62: DC Digital Communication PART2

Modulation

Common digital modulation techniques use the data value to modify the amplitude, phase, or frequency of the carrier.

Amplitude: On-off keying (OOK)
• 1 → A cos(2πf_c t)
• 0 → 0
More generally, this is called amplitude shift keying (ASK).

Phase: Phase shift keying (PSK)
• 1 → A cos(2πf_c t)
• 0 → A cos(2πf_c t + π) = −A cos(2πf_c t)

Frequency: Frequency shift keying (FSK)
• 1 → A cos(2πf_1 t)
• 0 → A cos(2πf_2 t)
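A short MATLAB sketch (with arbitrary illustrative frequencies) generates one bit period of each waveform:

% OOK, PSK, and FSK carrier waveforms for a single transmitted bit.
fc = 10;  f1 = 10;  f2 = 20;  A = 1;       % assumed frequencies in Hz
t = linspace(0, 1, 1000);                  % one bit period
bit = 0;                                   % transmit a '0'
ook = bit * A*cos(2*pi*fc*t);              % carrier on for 1, off for 0
psk = A*cos(2*pi*fc*t + pi*(1 - bit));     % phase 0 for a 1, phase pi for a 0
if bit == 1, fsk = A*cos(2*pi*f1*t); else, fsk = A*cos(2*pi*f2*t); end
plot(t, ook, t, psk, t, fsk);  legend('OOK', 'PSK', 'FSK');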

Page 63: DC Digital Communication PART2


EE 561 Communication Theory

Spring 2003

Instructor: Matthew Valenti

Date: Feb 5, 2003

Lecture #10

Representation of Bandpass Signals

Page 64: DC Digital Communication PART2


Announcements Homework #2 is due today.

I’ll post solutions over the weekend. Including solutions to all problems from chapter 3.

Computer assignment #1 is due next week. See webpage for details.

Today: Vector representation of signals Sections 4.1-4.2

Page 65: DC Digital Communication PART2


Review/Preview

[Block diagram of the complete communication system, as before: sample → quantize → source encode → encryption → channel encoder → modulator → channel → demodulator → equalizer → channel decoder → decryption → source decoder → D/A conversion.]

We have been looking at the first part of the communication system. Now, we will start looking at the next part of the communication system.

Page 66: DC Digital Communication PART2


Representation of Bandpass Signals (EE 461 Review)

A bandpass signal is a signal that has a bandwidth that is much smaller than the carrier frequency. i.e. most of the spectral content is not at DC. Otherwise, it is called baseband or lowpass.

Bandpass signals can be represented in any of three standard formats: Quadrature notation. Complex envelope notation. Magnitude and phase notation.

Page 67: DC Digital Communication PART2


Standard Notations for Bandpass Signals

Quadrature notation

x(t) and y(t) are real-valued lowpass signals called the in-phase and quadrature components of s(t).

Complex envelope notation

sl(t) is the complex envelope of s(t).

sl(t) is a complex-valued lowpass signal.

s(t) = x(t) cos(2πf_c t) − y(t) sin(2πf_c t)

s(t) = Re{ [x(t) + j y(t)] e^(j2πf_c t) } = Re{ s_l(t) e^(j2πf_c t) }

Page 68: DC Digital Communication PART2


More Notation for Bandpass Signals

Magnitude and phase notation

where a(t) is the magnitude and θ(t) is the phase of s(t). a(t) and θ(t) are both real-valued lowpass signals.

Relationship among notations:

s(t) = a(t) cos( 2πf_c t + θ(t) )

a(t) = √( x²(t) + y²(t) )

θ(t) = tan⁻¹( y(t) / x(t) )

x(t) = a(t) cos θ(t)
y(t) = a(t) sin θ(t)

s_l(t) = x(t) + j y(t)

Page 69: DC Digital Communication PART2


Key Points

With these alternative representations, we can consider bandpass signals independently from their carrier frequency.

The idea of quadrature notation sets up a coordinate system for looking at common modulation types. Idea: plot in 2-dimensional space.

• x axis is the in-phase component.

• y axis is the quadrature component. Called a signal constellation diagram.

Page 70: DC Digital Communication PART2


Example Signal Constellation Diagram: BPSK

x(t) ∈ {−1, +1},   y(t) = 0

[Figure: two constellation points on the x axis at −1 and +1.]

Page 71: DC Digital Communication PART2


Example Signal Constellation Diagram: QPSK

x(t) ∈ {−1, +1},   y(t) ∈ {−1, +1}

[Figure: four constellation points at (±1, ±1).]

QPSK: Quadri-phase shift keying

Page 72: DC Digital Communication PART2


Example Signal Constellation Diagram: QAM

QAM: Quadrature Amplitude Modulation

x(t) ∈ {−3, −1, +1, +3},   y(t) ∈ {−3, −1, +1, +3}

[Figure: 16 constellation points on a square grid.]

Page 73: DC Digital Communication PART2


Interpretation of Signal Constellation Diagrams

Axis are labeled with x(t) and y(t). Possible signals are plotted as points. Signal power is proportional to distance from origin. Probability of mistaking one signal for another is

related to the distance between signal points. The received signal will be corrupted by noise. The receiver selects the signal point closest to the

received signal.

Page 74: DC Digital Communication PART2


Example: A Received QAM Transmission

x(t) ∈ {−3, −1, +1, +3},   y(t) ∈ {−3, −1, +1, +3}

[Figure: the 16-QAM constellation with a received signal point displaced from the grid by noise.]

Page 75: DC Digital Communication PART2


A New Way of Viewing Modulation

The quadrature way of viewing modulation is very convenient for some modulation types. QAM and M-PSK.

We will examine an even more general way of looking at modulation by using signal spaces. We can study any modulation type.

By choosing an appropriate set of axes for our signal constellation, we will be able to: Design modulation types which have desirable properties. Construct optimal receivers for a given modulation type. Analyze the performance of modulation types using very

general techniques.

First, we must review vector spaces …

Page 76: DC Digital Communication PART2


Vector Spaces

An n-dimensional vector consists of n scalar components

The norm (length) of a vector v is given by:

The inner product of two vectors and is given by:

v = ( v_1, v_2, …, v_n )

||v|| = √( Σ_{i=1}^{n} v_i² )

⟨v_1, v_2⟩ = Σ_{i=1}^{n} v_{1,i} v_{2,i},   where v_1 = (v_{1,1}, v_{1,2}, …, v_{1,n}) and v_2 = (v_{2,1}, v_{2,2}, …, v_{2,n})

Page 77: DC Digital Communication PART2


Basis Vectors

A vector v may be expressed as a linear combination of its basis vectors

ei is normalized if it has unit length

If ei is normalized, then vi is the projection of v onto ei

Think of the basis vectors as a coordinate system (x,y,z,… axes) for describing the vector v. What makes a good choice of coordinate system?

v = Σ_{i=1}^{n} v_i e_i

v_i = ⟨e_i, v⟩

||e_i|| = 1

Page 78: DC Digital Communication PART2


Complete Basis

The set of basis vectors is complete or spans the vector space n if any vector v can be represented as a linear combination of basis vectors:

The set of basis vectors is linearly independent if no one basis vector can be represented as a linear combination of the remaining vectors. The n vectors must be linearly independent in order to span

n.

{ e_1, e_2, …, e_n }

v = Σ_{i=1}^{n} v_i e_i

Page 79: DC Digital Communication PART2


Example: Complete Basis

Given the following vector space:

v_1 = [0; 0],   v_2 = [0; 1],   v_3 = [1; 0],   v_4 = [1; 1]

Which of the following is a complete basis?

(a)  e_1 = [1; 0],  e_2 = [−1; 0]
(b)  e_1 = [1; 0],  e_2 = [0; 1]
(c)  e_1 = [1; 1],  e_2 = [1; −1]

Page 80: DC Digital Communication PART2


Orthonormal Basis

Two vectors vi and vj are orthogonal if

A basis is orthonormal if: All basis vectors are orthogonal to one-another. All basis vectors are normalized.

⟨v_i, v_j⟩ = 0

Page 81: DC Digital Communication PART2


Example: Complete Orthonormal Basis

Which of the following is a complete orthonormal basis?

(a)  e_1 = [1; 0],          e_2 = [0; 1]
(b)  e_1 = [1; 1],          e_2 = [1; −1]
(c)  e_1 = [1/√2; 1/√2],    e_2 = [1/√2; −1/√2]
(d)  e_1 = [1; 1],          e_2 = [−1; 0]
(e)  e_1 = [1/√2; 1/√2],    e_2 = [−1; 0]

Page 82: DC Digital Communication PART2


EE 561 Communication Theory

Spring 2003

Instructor: Matthew Valenti

Date: Feb 21, 2003

Lecture #16

Page 83: DC Digital Communication PART2


Announcements

HW #3 is due on Monday. “sigspace.m” is on web-page.

Page 84: DC Digital Communication PART2


Review

[Block diagram of the complete communication system, as before: sample → quantize → source encode → encryption → channel encoder → modulator → channel → demodulator → equalizer → channel decoder → decryption → source decoder → D/A conversion.]

We have been looking at this part of the system.

Page 85: DC Digital Communication PART2


Digital Signaling over AWGN Channel

System model:

Signal space representation:

s(t) ∈ { s_1(t), s_2(t), …, s_M(t) }

n(t): Gaussian, with E[n(t)] = 0 and PSD N_0/2

r(t) = s(t) + n(t)

r(t) = Σ_{k=1}^{K} r_k f_k(t) + n'(t)       (the f_k(t) are the basis functions for s(t); n'(t) is orthogonal noise, which can be disregarded)

r_k = ∫_0^T r(t) f_k(t) dt = ∫_0^T s(t) f_k(t) dt + ∫_0^T n(t) f_k(t) dt = s_{m,k} + n_k

Page 86: DC Digital Communication PART2


Receiver Overview

Front end: converts r(t) into the vector r.
Goal: obtain the vector of sufficient statistics r from the received signal r(t).
Implementation: either analog electronics, or digital electronics working at several (3-10) samples per symbol period T.
Options: correlation receiver, or matched-filter receiver.

Back end: converts r into the decision ŝ.
Goal: obtain an estimate of the transmitted signal given the vector r.
Implementation: digital signal processing operating at the symbol period; one vector sample per symbol period.
Options: MAP receiver, or ML receiver.

Page 87: DC Digital Communication PART2

Front End Design #1: Bank of Correlators

[Block diagram: r(t) is multiplied by each basis function f_k(t), k = 1, …, K, and each product is integrated over [0, T]; the integrator outputs r_1, …, r_K form the vector r.]

Page 88: DC Digital Communication PART2

Front End Design #2: Bank of Matched Filters

[Block diagram: r(t) is passed through a bank of matched filters h_k(t) = f_k(T − t), k = 1, …, K; each filter output is sampled at t = T, and the samples r_1, …, r_K form the vector r.]

Page 89: DC Digital Communication PART2


MAP Decision Rule

ŝ = argmax_{s_m ∈ S} { p_m p(r | s_m) }

Substitute the conditional pdf of r given s_m (vector Gaussian):

ŝ = argmax_{s_m} { p_m (πN_0)^(−K/2) exp[ −(1/N_0) Σ_{k=1}^{K} (r_k − s_{m,k})² ] }

Take the natural log:

ŝ = argmax_{s_m} { ln[ p_m (πN_0)^(−K/2) exp( −(1/N_0) Σ_{k=1}^{K} (r_k − s_{m,k})² ) ] }

Use ln(xy) = ln(x) + ln(y), ln(x^y) = y ln(x), and ln(exp(x)) = x, and pull 1/N_0 out of the summation:

ŝ = argmax_{s_m} { ln(p_m) − (K/2) ln(πN_0) − (1/N_0) Σ_{k=1}^{K} (r_k − s_{m,k})² }

Page 90: DC Digital Communication PART2


MAP Decision Rule (Continued)

Square the term in the summation:

ŝ = argmax_{s_m} { ln(p_m) − (K/2) ln(πN_0) − (1/N_0) Σ_{k=1}^{K} ( r_k² − 2 r_k s_{m,k} + s_{m,k}² ) }

Eliminate terms that are common to all s_m (the ln(πN_0) term and the Σ r_k² term):

ŝ = argmax_{s_m} { ln(p_m) + (1/N_0) Σ_{k=1}^{K} ( 2 r_k s_{m,k} − s_{m,k}² ) }

Break up the one summation into two summations:

ŝ = argmax_{s_m} { ln(p_m) + (2/N_0) Σ_{k=1}^{K} s_{m,k} r_k − (1/N_0) Σ_{k=1}^{K} s_{m,k}² }

Use the definition of signal energy, E_m = ∫_0^T s_m²(t) dt = Σ_{k=1}^{K} s_{m,k}²:

ŝ = argmax_{s_m} { ln(p_m) + (2/N_0) Σ_{k=1}^{K} s_{m,k} r_k − E_m/N_0 }

Multiply by N_0/2:

ŝ = argmax_{s_m} { (N_0/2) ln(p_m) + Σ_{k=1}^{K} s_{m,k} r_k − E_m/2 }

We use this equation to design the optimal MAP receiver!
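The sketch below applies this final expression directly; the signal matrix, priors, noise level, and received vector are assumed example values, not part of the slides:

% MAP back end: z_m = (No/2)*ln(p_m) + sum_k s_(m,k)*r_k - E_m/2, choose the largest.
S  = [ 1  1;  1 -1; -1  1; -1 -1 ];        % example signal vectors (one row per signal)
pm = [0.25 0.25 0.25 0.25]';               % prior probabilities
No = 1;                                    % noise spectral density
r  = [0.9; -0.2];                          % received vector of sufficient statistics
E  = sum(S.^2, 2);                         % signal energies E_m
z  = (No/2)*log(pm) + S*r - E/2;           % MAP metric for each candidate
[~, mhat] = max(z);                        % decision
fprintf('MAP decision: signal %d\n', mhat);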

Page 91: DC Digital Communication PART2

Back End Design #1: MAP Decision Rule

[Block diagram: the vector r is multiplied by the M×K signal matrix S, whose (m,k) entry is s_{m,k}, to form z = Sr with components z_m = Σ_{k=1}^{K} s_{m,k} r_k; the bias term (N_0/2) ln(p_m) − E_m/2 is added to each z_m, and the receiver chooses the largest result to produce ŝ.]

Page 92: DC Digital Communication PART2


ML Decision Rule

ML is simply MAP with pm = 1/M

ŝ = argmax_{s_m ∈ S} p(r | s_m)

ŝ = argmax_{s_m} { Σ_{k=1}^{K} s_{m,k} r_k − E_m/2 }

ŝ = argmax_{s_m} { z_m − E_m/2 }

Page 93: DC Digital Communication PART2

Back End Design #2: ML Decision Rule

If the p_m's are unknown or all equal, then use the ML (maximum likelihood) decision rule:

[Block diagram: z = Sr as before, but the bias added to each z_m is simply −E_m/2 (no prior term); the receiver chooses the largest result to produce ŝ.]

Page 94: DC Digital Communication PART2


Example

Start with the following signal set:

What kind of modulation is this? What is the energy of each signal?

[Figure: four signal waveforms s_1(t), s_2(t), s_3(t), s_4(t), each defined over 0 ≤ t ≤ 2.]

Page 95: DC Digital Communication PART2

Concept of the Correlation Receiver

Concept Correlate the received signal against all 4 possible transmitted

signals. Pick most likely after accounting for pm.

[Block diagram: r(t) is correlated against each of the four possible transmitted signals s_m(t) over [0, T]; the bias (N_0/2) ln(p_m) is added to each correlator output z_m, and the largest is chosen to produce ŝ.]

Page 96: DC Digital Communication PART2


Signal Space Representation

Note: the previous receiver is not an efficient implementation. 4 correlators were used. Could we use fewer correlators?

• We can answer this by using the concept of signal space!

Using the following basis functions:

Find the signal vectors and signal space diagram.

[Figure: two basis functions f_1(t) and f_2(t), each defined over 0 ≤ t ≤ 2.]

Page 97: DC Digital Communication PART2

A More Efficient MAP Receiver

[Block diagram: r(t) is correlated against f_1(t) and f_2(t) over [0, T] to form r = [r_1, r_2]'; the matrix multiply z = Sr forms z_1 = r_1 + r_2, z_2 = r_1 − r_2, z_3 = −r_1 + r_2, z_4 = −r_1 − r_2 (scaled by √T); the bias (N_0/2) ln(p_m) is added to each z_m, and the largest is chosen to produce ŝ. Here

S = √T [ 1  1;  1 −1;  −1  1;  −1 −1 ]  ]

Page 98: DC Digital Communication PART2

The ML Receiver

[Block diagram: the same front end (correlate r(t) with f_1(t) and f_2(t) to obtain r_1 and r_2) and the same matrix multiply z = Sr, but with no bias terms; the receiver simply chooses the largest z_m to produce ŝ.]

Page 99: DC Digital Communication PART2

Decision Regions

The decision regions can be shown on the signal space diagram. Example: Assume pm = ¼ for m={1,2,3,4} Thus MAP and ML rules are the same.

Page 100: DC Digital Communication PART2


Average Energy Per Bit

The energy of the mth signal (symbol) is:

The average energy per symbol is:

log2M is the number of bits per symbol.

Thus the average energy per bit is:

Eb allows for a fair comparison of the energy efficiencies of different signal sets. We use Eb/No for comparison.

E_m = ∫_0^T s_m²(t) dt = Σ_{k=1}^{K} s_{m,k}²

E_s = E[E_m] = Σ_{m=1}^{M} p_m E_m

E_b = E_s / log_2 M

Page 101: DC Digital Communication PART2


Visualizing Signal Spaces

A MATLAB function has been posted on the web page that allows you to visualize two-dimensional signal spaces and the associated decision regions.

Usage: sigspace( [x1 y1 p1; x2 y2 p2; …; xM yM pM],

EbNodB ) where:

• (xm,ym) is the coordinate of the mth signal point • pm is the probability of the mth signal

can omit this to get ML receiver

• EbNodB is Eb/No in decibels.

Page 102: DC Digital Communication PART2

Example: QPSK with ML Decision Rule

sigspace( [1 1; 1 -1; -1 1; -1 -1], 10 )

[Figure: Signal Space and Decision Regions, plotted for X and Y from -1 to 1; with equal priors the four decision regions are the quadrants.]

Page 103: DC Digital Communication PART2

Example: QPSK with Unequal Probabilities

sigspace( [1 1 .3; 1 -1 .3; -1 1 .3; -1 -1 .1], 2 )

[Figure: Signal Space and Decision Regions, plotted for X and Y from -1 to 1.]

Page 104: DC Digital Communication PART2

Example: Extreme Case of Unequal Probabilities

sigspace( [.5 .5 .3; .5 -.5 .3; -.5 .5 .3; -.5 -.5 .1], -6 )

[Figure: Signal Space and Decision Regions, plotted for X and Y from -1 to 1.]

Page 105: DC Digital Communication PART2

Example: Unequal Signal Energy

sigspace( [1 1; 2 2; 3 3; 4 4], 10)

[Figure: Signal Space and Decision Regions, plotted for X and Y from 0 to 4.]

Page 106: DC Digital Communication PART2

Example: 16-QAM

sigspace( [0.5 0.5; 1.5 0.5; 0.5 1.5; 1.5 1.5; ...
           -0.5 0.5; -1.5 0.5; -0.5 1.5; -1.5 1.5; ...
           0.5 -0.5; 1.5 -0.5; 0.5 -1.5; 1.5 -1.5; ...
           -0.5 -0.5; -1.5 -0.5; -0.5 -1.5; -1.5 -1.5], 10 )

[Figure: Signal Space and Decision Regions, plotted for X and Y from -2 to 2.]

Page 107: DC Digital Communication PART2


EE 561 Communication Theory

Spring 2003

Instructor: Matthew Valenti

Date: Feb. 26, 2003

Lecture #18

QPSK and the Union Bound

Page 108: DC Digital Communication PART2


Assignments HW #4 is posted.

Due on Monday March 10.

HW #3 solutions posted. Full chapter 4 solutions included.

Computer Assignment #2 Will be posted later this week. Will be due on Monday March 24. I encourage you to finish before the exam.

Midterm exam discussion. Scheduled for March 14. We might have a faculty candidate that day and thus may

need to reschedule. Options: Thursday evening 5-7 PM or short-fuse take-home?

Page 109: DC Digital Communication PART2


Mid-term Exam Exam Guidelines:

Timed exam. Limited to 2 hours.• Option (1): Thursday 5-7 PM.• Option (2): Handed out Wed. 3 PM. Due Fri. 3 PM.

But you are not to work more than 2 hours on it.• Same exam either way.

Open book and notes. No help from classmates or from me.

• Must sign pledge and not discuss it.• If anyone asks you about the exam, you should tell me who.

Covers chapters 1-5, and HW 1-4.• Computer assignment 1 & 2 are also relevant.

Two sample exams posted on the webpage.• From 2000 and 2001.• Should be able to do everything except #1 on 2001 exam.

Page 110: DC Digital Communication PART2


Review: Receiver Overview

[Block diagram: the front end converts r(t) into the vector r; the back end converts r into the decision ŝ.]

r_k = ∫_0^T r(t) f_k(t) dt = ∫_0^T s(t) f_k(t) dt + ∫_0^T n(t) f_k(t) dt = s_{m,k} + n_k

MAP rule:

ŝ = argmax_{s_m} { (N_0/2) ln(p_m) + Σ_{k=1}^{K} s_{m,k} r_k − E_m/2 }

ML rule:

ŝ = argmax_{s_m} { Σ_{k=1}^{K} s_{m,k} r_k − E_m/2 }

Page 111: DC Digital Communication PART2


Error Probability

Symbol error probability (from the total probability theorem):

P_s = Σ_{m=1}^{M} Pr[s_m sent] Pr[ŝ ≠ s_m | s_m sent] = Σ_{m=1}^{M} p_m Pr[ŝ ≠ s_m | s_m]

The conditional error probability is the probability that the received vector is not in the decision region R_m, given that s_m was sent:

Pr[ŝ ≠ s_m | s_m] = ∫_{outside R_m} p(r | s_m) dr = 1 − ∫_{R_m} p(r | s_m) dr

Page 112: DC Digital Communication PART2


Comments on Error Calculation

For K>1, the integral is multidimensional. Recall that difficult multidimensional integrals can be

simplified by rotation or translation of coordinates. This is similar to change of variables in 1-D integrals.

Error probability depends on the distance between signals.

Error probability does not depend on the choice of coordinates.

Therefore, any translation, rotation, or reflection operation on the coordinates that does not change the distance between the signals will not affect the error probabilities.

Page 113: DC Digital Communication PART2


QPSK: Definition

Now consider quaternary phase shift keying. Signals, for 0 < t < T:

s_1(t) = √(2P) cos(2πf_c t)
s_2(t) = √(2P) sin(2πf_c t)
s_3(t) = −√(2P) cos(2πf_c t)
s_4(t) = −√(2P) sin(2πf_c t)

Using Gram-Schmidt orthonormalization we find two basis functions, for 0 < t < T:

f_1(t) = √(2/T) cos(2πf_c t)
f_2(t) = √(2/T) sin(2πf_c t)

Now:

E_s = ∫_0^T s_1²(t) dt = PT,   and   E_b = E_s/2 = PT/2

Page 114: DC Digital Communication PART2


QPSK: Signal Space Representation

Signal vectors:

Signal space diagram:

s_1 = [√E_s; 0],   s_2 = [0; √E_s],   s_3 = [−√E_s; 0],   s_4 = [0; −√E_s]

[Figure: the four signal points on the f_1 and f_2 axes at distance √E_s from the origin, with decision regions R_1, R_2, R_3, R_4.]

Page 115: DC Digital Communication PART2


QPSK: Coordinate Rotation

The analysis is easier if we rotate coordinates by 45°:

s_1 = [ √(E_s/2);  √(E_s/2) ],   s_2 = [ −√(E_s/2);  √(E_s/2) ],
s_3 = [ −√(E_s/2); −√(E_s/2) ],   s_4 = [ √(E_s/2); −√(E_s/2) ]

[Figure: the rotated signal space, with decision regions R_1, R_2, R_3, R_4 now equal to the four quadrants.]

Page 116: DC Digital Communication PART2


QPSK: Conditional Error Probability

Pr[ŝ ≠ s_1 | s_1] = 1 − ∫_{R_1} p(r | s_1) dr
 = 1 − ∫_0^∞ ∫_0^∞ p(r_1, r_2 | s_1) dr_1 dr_2
 = 1 − ∫_0^∞ ∫_0^∞ ( 1/(πN_0) ) exp[ −( (r_1 − √(E_s/2))² + (r_2 − √(E_s/2))² ) / N_0 ] dr_1 dr_2
 = 1 − [ ∫_0^∞ ( 1/√(πN_0) ) exp( −(r_1 − √(E_s/2))² / N_0 ) dr_1 ]²
 = 1 − [ 1 − Q( √(E_s/N_0) ) ]²
 = 2 Q( √(E_s/N_0) ) − Q²( √(E_s/N_0) )

Page 117: DC Digital Communication PART2


QPSK: Symbol Error Probability

From symmetry, the conditional error probability is the same for every signal:

Pr[ŝ ≠ s_m | s_m] = 2 Q( √(E_s/N_0) ) − Q²( √(E_s/N_0) )

Thus:

P_s = Σ_{m=1}^{M} p_m Pr[ŝ ≠ s_m | s_m]
    = 2 Q( √(E_s/N_0) ) − Q²( √(E_s/N_0) )
    = 2 Q( √(2E_b/N_0) ) − Q²( √(2E_b/N_0) )

Page 118: DC Digital Communication PART2


QPSK: Bit Error Probability

Assume Gray mapping: the four symbols are labeled 00, 01, 11, 10 around the circle, so adjacent symbols differ in only one bit.

If a neighbor is mistakenly chosen, then only 1 bit will be wrong (i.e. P_b = P_s/2 for those events). If the opposite signal is chosen, then both bits are incorrect (i.e. P_b = P_s for those events). Therefore:

P_b = (1/2)·2 Q( √(2E_b/N_0) ) [ 1 − Q( √(2E_b/N_0) ) ] + Q²( √(2E_b/N_0) ) = Q( √(2E_b/N_0) )

Since Q²(z) << Q(z), QPSK has the same BER as BPSK.

Page 119: DC Digital Communication PART2


Error Probability for Large M

In theory, we can compute any symbol error probability using the appropriate integrals.

In practice, this becomes tedious for large constellations (M > 4), because the decision region R_i has a complicated shape:

Pr[ŝ ≠ s_i | s_i] = 1 − ∫_{R_i} p(r | s_i) dr = Σ_{j≠i} Pr[ŝ = s_j | s_i] = Σ_{j≠i} ∫_{R_j} p(r | s_i) dr

Page 120: DC Digital Communication PART2


Conditional Error Probabilities

Consider the following term for QPSK:

We must integrate over R_4; this is tricky because R_4 has two boundaries:

Pr[ŝ = s_4 | s_1] = ∫_{R_4} p(r | s_1) dr

[Figure: the rotated QPSK signal space with decision regions R_1, R_2, R_3, R_4.]

Page 121: DC Digital Communication PART2


We can bound this probability by only integrating over a region with just one boundary:

Now this is easier to evaluate:

A Bound on Probability

Pr[ŝ = s_4 | s_1] ≤ Pr[z_4 ≥ z_1 | s_1]

[Figure: ignore the presence of s_2 and s_3; then s_4 is picked over s_1 whenever z_4 ≥ z_1, and the integration region has a single boundary.]

Pairwise error probability: the probability of picking s_4 over s_1 if we may only choose between s_1 and s_4.

Page 122: DC Digital Communication PART2


Calculation of Pairwise Error Prob.

By appropriate rotation & translation, we can express the pairwise decision problem as:

This is just like BPSK!

[Figure: after rotation and translation, s_i and s_j lie on a single axis separated by d_{i,j}, with the decision boundary at d_{i,j}/2.]

Pr[ŝ = s_j | s_i] ≤ Pr[z_j ≥ z_i] = Q( d_{i,j} / √(2N_0) )

Page 123: DC Digital Communication PART2


The Union Bound

Putting it all together:

And the symbol error probability becomes:

This is called the Union bound.

Pr[ŝ ≠ s_i | s_i] = Σ_{j≠i} ∫_{R_j} p(r | s_i) dr ≤ Σ_{j≠i} Pr[z_j ≥ z_i] = Σ_{j≠i} Q( d_{i,j} / √(2N_0) )

P_s = Σ_{i=1}^{M} p_i Pr[ŝ ≠ s_i | s_i] ≤ Σ_{i=1}^{M} p_i Σ_{j≠i} Q( d_{i,j} / √(2N_0) )

Page 124: DC Digital Communication PART2


Example: QPSK

Find Union bound on Ps for QPSK:
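A MATLAB sketch of this exercise (assuming unit-energy normalization with N_0 = 1, so E_s/N_0 = 2 E_b/N_0) compares the union bound with the exact QPSK result derived earlier:

% Union bound vs. exact symbol error probability for QPSK.
Q = @(z) 0.5*erfc(z/sqrt(2));
EbNodB = 0:0.5:10;   EsNo = 2 * 10.^(EbNodB/10);          % Es/No = 2 Eb/No
Ps_exact = 2*Q(sqrt(EsNo)) - Q(sqrt(EsNo)).^2;            % from the earlier derivation
Ps_union = 2*Q(sqrt(EsNo)) + Q(sqrt(2*EsNo));             % two adjacent terms + one diagonal term
semilogy(EbNodB, Ps_exact, EbNodB, Ps_union, '--');
xlabel('E_b/N_o (dB)');  legend('exact', 'union bound');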

Page 125: DC Digital Communication PART2


Consider the exact calculation:

Pr[ŝ ≠ s_1 | s_1] = Σ_{j=2}^{4} ∫_{R_j} p(r | s_1) dr

Now consider the Union bound:

Pr[ŝ ≠ s_1 | s_1] ≤ Σ_{j=2}^{4} Q( d_{1,j} / √(2N_0) )

[Figure: the half-plane used for the pairwise term between s_1 and s_4 overlaps area that has been accounted for already by the s_2 and s_3 terms; there is no need to include it, which is why the Union bound is loose.]

Page 126: DC Digital Communication PART2


Improved Union Bound

Let Ai be the set of signals with decision regions directly adjacent to Ri

Share a common boundary

Then:

Example: QPSK

P_s = Σ_{i=1}^{M} p_i Pr[ŝ ≠ s_i | s_i] ≤ Σ_{i=1}^{M} p_i Σ_{j ∈ A_i} Q( d_{i,j} / √(2N_0) )

[Figure: for QPSK, the regions adjacent to R_1 are R_2 and R_4, so A_1 = {2, 4}; the opposite signal s_3 is excluded.]

Page 127: DC Digital Communication PART2


Comparison

[Figure: Performance of QPSK, BER versus E_b/N_o in dB (0 to 6 dB), comparing the Union bound, the Improved Union bound, and the exact result.]

Page 128: DC Digital Communication PART2


Mid-Semester!

Right now, we are at the midpoint of this class.

Page 129: DC Digital Communication PART2


EE 561 Communication Theory

Spring 2003

Instructor: Matthew Valenti

Date: Feb. 28, 2003

Lecture #19

PAM, PSK, and QAM

Page 130: DC Digital Communication PART2


Review: Union Bound

The symbol error rate of any digital modulation over an AWGN channel can be found using the Union bound:

A better bound (tighter and easier to compute) can be found using:

P_s ≤ Σ_{i=1}^{M} Pr[s_i] Σ_{j≠i} Q( d_{i,j} / √(2N_0) )

where the distance between signals is

d_{i,j} = || s_i − s_j || = √( Σ_{k=1}^{K} (s_{i,k} − s_{j,k})² )

P_s ≤ Σ_{i=1}^{M} Pr[s_i] Σ_{j ∈ A_i} Q( d_{i,j} / √(2N_0) ),     A_i = { j : R_j borders R_i }

Page 131: DC Digital Communication PART2


Consider the exact calculation:

Pr[ŝ ≠ s_1 | s_1] = Σ_{j=2}^{4} ∫_{R_j} p(r | s_1) dr

Now consider the Union bound:

Pr[ŝ ≠ s_1 | s_1] ≤ Σ_{j=2}^{4} Q( d_{1,j} / √(2N_0) )

[Figure: the half-plane used for the pairwise term between s_1 and s_4 overlaps area that has been accounted for already by the s_2 and s_3 terms; there is no need to include it.]

Page 132: DC Digital Communication PART2


Improved Union Bound

Let Ai be the set of signals with decision regions directly adjacent to Ri

Share a common boundary

Then:

Example: QPSK

P_s = Σ_{i=1}^{M} p_i Pr[ŝ ≠ s_i | s_i] ≤ Σ_{i=1}^{M} p_i Σ_{j ∈ A_i} Q( d_{i,j} / √(2N_0) )

[Figure: for QPSK, A_1 = {2, 4}; the opposite signal s_3 is excluded.]

Page 133: DC Digital Communication PART2


Categories of Digital Modulation

Digital Modulation can be classified as: PAM: pulse amplitude modulation PSK: phase shift keying QAM: quadrature amplitude modulation Orthogonal signaling Biorthogonal signaling Simplex signaling

We have already defined and analyzed some of these.

For completeness, we will now define and analyze all of these formats.

Note: the definition & performance only depends on the geometry of the signal constellation --- not the choice of basis functions!

Page 134: DC Digital Communication PART2


Bandwidth of Digital Modulation

The bandwidth W of a digitally modulated signal must satisfy:

The actual bandwidth depends on the choice of basis functions. “pulse shaping” For sinc-pulses this is an equality.

However, if the basis functions are confined to time [0,T], then:

W ≥ (K/2) R_s = K R_b / (2 log_2 M)

W ≈ K R_s = K R_b / log_2 M

Page 135: DC Digital Communication PART2


PAM: Pulse Amplitude Modulation

K = 1 dimension. Usually, the M signals are equally spaced

along the line. di,j = dmin (a constant) for |j-i| = 1

Usually, the M signals are symmetric about the origin.

Examples:

[Figure: an 8-PAM constellation with points s_1, …, s_8 equally spaced by d_min along a line, symmetric about 0.]

s_i = (2i − 1 − M) d_min / 2

Page 136: DC Digital Communication PART2


Performance of PAM

Using the Improved Union bound:

There are two cases to consider:

P_s = Σ_{i=1}^{M} Pr[s_i] P(error | s_i) ≤ (1/M) Σ_{i=1}^{M} Σ_{j ∈ A_i} Q( d_{i,j} / √(2N_0) )      (assuming equiprobable signaling; d_{i,j} = d_min for adjacent points)

• Outer points have only 1 neighbor; there are 2 of these.
• Inner points have 2 neighbors; there are (M − 2) of these.

Page 137: DC Digital Communication PART2


Performance of PAM

For the "outer" points (one neighbor):

P(error | s_i) ≤ Q( d_min / √(2N_0) ),   for i = 1 and i = M.

For the "inner" points (two neighbors):

P(error | s_i) ≤ 2 Q( d_min / √(2N_0) ),   for 1 < i < M.

Therefore,

P_s = Σ_{i=1}^{M} p_i P(error | s_i) = (1/M) Σ_{i=1}^{M} P(error | s_i)
    ≤ (1/M) [ 2 Q( d_min/√(2N_0) ) + 2(M − 2) Q( d_min/√(2N_0) ) ]
    = ( 2(M − 1)/M ) Q( d_min / √(2N_0) )

Page 138: DC Digital Communication PART2


Performance of PAM

Using the Improved Union bound:

P_s ≤ ( 2(M − 1)/M ) Q( d_min / √(2N_0) )

We would like an expression in terms of E_b. However, unlike other formats we have considered, the energy of the M signals is not the same:

E_i = [ (2i − 1 − M) d_min / 2 ]²

Page 139: DC Digital Communication PART2


PAMAverage Energy per Symbol

E_s = Σ_{i=1}^{M} p_i E_i = (1/M) Σ_{i=1}^{M} [ (2i − 1 − M) d_min/2 ]²        (assuming equiprobable signaling)
    = ( d_min² / (4M) ) Σ_{i=1}^{M} (2i − 1 − M)²
    = ( d_min² / (4M) ) · 2 Σ_{n=1}^{M/2} (2n − 1)²                            (from symmetry)
    = ( d_min² / (2M) ) · (M/2)(M² − 1)/3                                      (arithmetic series: Σ_{n=1}^{N} (2n − 1)² = N(4N² − 1)/3)
    = ( (M² − 1)/12 ) d_min²

Page 140: DC Digital Communication PART2


Performance of PAM

Solve for d_min:

d_min = √( 12 E_s / (M² − 1) )

Substitute into the Union Bound expression:

P_s ≤ ( 2(M − 1)/M ) Q( d_min/√(2N_0) ) = ( 2(M − 1)/M ) Q( √( 6E_s / ((M² − 1)N_0) ) ) = ( 2(M − 1)/M ) Q( √( 6 log_2(M) E_b / ((M² − 1)N_0) ) )

Bit error probability (if Gray coding is used, only 1 bit error will normally occur when there is a symbol error):

P_b ≈ ( 2(M − 1) / (M log_2 M) ) Q( √( 6 log_2(M) E_b / ((M² − 1)N_0) ) )       Eq. (5.2-46)
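A short MATLAB sketch (an aside, using the bound just derived) reproduces the kind of BER-versus-Eb/No comparison shown on the next slide:

% M-PAM bit error probability from the bound in Eq. (5.2-46).
Q = @(z) 0.5*erfc(z/sqrt(2));
EbNodB = 0:20;   EbNo = 10.^(EbNodB/10);
figure;  hold on;
for M = [2 4 8 16 32]
    k  = log2(M);
    Pb = 2*(M-1)/(M*k) * Q(sqrt(6*k*EbNo/(M^2-1)));   % performance worsens as M grows
    plot(EbNodB, Pb);
end
set(gca, 'YScale', 'log');  xlabel('E_b/N_o (dB)');  ylabel('BER');
legend('2-PAM', '4-PAM', '8-PAM', '16-PAM', '32-PAM');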

Page 141: DC Digital Communication PART2

Performance Comparison: PAM

Performance gets worse as M increases.

[Figure: BER versus E_b/N_o in dB (0 to 20 dB) for 2-, 4-, 8-, 16-, and 32-PAM.]

Page 142: DC Digital Communication PART2


MPSK: M-ary Phase Shift Keying

K = 2 dimensions. Signals equally spaced along a circle of radius √E_s:

s_i = [ √E_s cos( 2π(i − 1)/M ) ;  √E_s sin( 2π(i − 1)/M ) ]

Example: 8-PSK. [Figure: eight signal points equally spaced on a circle of radius √E_s.]

Page 143: DC Digital Communication PART2


MPSK: Distances

Distance between two adjacent points:

Use law of cosines:

The angular separation between adjacent points is 2π/M radians. Apply the law of cosines, c² = a² + b² − 2ab cos C, with a = b = √E_s and C = 2π/M:

d_ij² = E_s + E_s − 2E_s cos(2π/M) = 2E_s [ 1 − cos(2π/M) ] = 4E_s sin²(π/M)

d_ij = 2√E_s sin(π/M)

Page 144: DC Digital Communication PART2


Performance of M-PSK

Symbol error probability (M > 2):

P_s ≈ 2 Q( √(2E_s/N_0) sin(π/M) ) = 2 Q( √(2 log_2(M) E_b/N_0) sin(π/M) )

Bit error probability (M > 2):

P_b ≈ ( 2/log_2 M ) Q( √(2 log_2(M) E_b/N_0) sin(π/M) )

Page 145: DC Digital Communication PART2

Performance Comparison: M-PSK

Performance gets worse as M increases.

[Figure: BER versus E_b/N_o in dB (0 to 20 dB) for 4-, 8-, 16-, 32-, and 64-PSK.]

Page 146: DC Digital Communication PART2


QAM: Quadrature Amplitude Modulation

K = 2 dimensions Points can be placed anywhere on the plane. Actual Ps depends on the geometry. Example: 16 QAM

dmin

Corner points have 2 neighbors(There are 4 of these)

Edge points have 3 neighbors(There are 8 of these)

Interior points have 4 neighbors(There are 4 of these)

Page 147: DC Digital Communication PART2


Performance of Example QAM Constellation

Improved Union Bound (16-QAM, equiprobable signaling):

P_s = Σ_{i=1}^{M} p_i P(error | s_i) = (1/16) Σ_{i=1}^{16} P(error | s_i)
    ≤ (1/16) [ 4·2·Q( d_min/√(2N_0) ) + 8·3·Q( d_min/√(2N_0) ) + 4·4·Q( d_min/√(2N_0) ) ]
    = (48/16) Q( d_min/√(2N_0) )
    = 3 Q( d_min/√(2N_0) )

Page 148: DC Digital Communication PART2


Performance of Example QAM Constellation

We would like to express P_s in terms of E_b. However, as with PAM, the energy of the M signals is not the same:

Corner points:   E_i = (3d_min/2)² + (3d_min/2)² = 9d_min²/2
Edge points:     E_i = (3d_min/2)² + (d_min/2)²  = 5d_min²/2
Interior points: E_i = (d_min/2)² + (d_min/2)²   = d_min²/2

Page 149: DC Digital Communication PART2


QAM: Average Energy per Symbol

Average energy per symbol (assuming equiprobable signaling):

E_s = Σ_i p_i E_i = (1/16) [ 4·(9d_min²/2) + 8·(5d_min²/2) + 4·(d_min²/2) ] = (40/16) d_min² = (5/2) d_min²

Solving for d_min:

d_min = √( 2E_s / 5 )

Page 150: DC Digital Communication PART2


Performance of Example QAM Constellation

Substitute into the Union Bound expression:

P_s ≤ 3 Q( d_min/√(2N_0) ) = 3 Q( √( E_s/(5N_0) ) ) = 3 Q( √( 4E_b/(5N_0) ) )

For other values of M, you must compute P_s in this manner. The relationship between P_s and P_b depends on how bits are mapped to symbols.

Page 151: DC Digital Communication PART2

Comparison: QAM vs. M-PSK

Performance of QAM is better than M-PSK.

[Figure: symbol error probability versus E_b/N_o in dB (0 to 20 dB) for 16-QAM, 16-PSK, and 16-PAM.]

Page 152: DC Digital Communication PART2


EE 561 Communication Theory

Spring 2003

Instructor: Matthew Valenti

Date: Mar. 3, 2003

Lecture #20

Orthogonal, bi-orthogonal,

and simplex modulation

Page 153: DC Digital Communication PART2


Review: Performance of Digital Modulation

We have been finding performance bounds for several modulation types defined in class. The Union bound:

The Improved Union bound:

The Union bound:

P_s ≤ Σ_{i=1}^{M} Pr[s_i] Σ_{j≠i} Q( d_{i,j} / √(2N_0) )

The Improved Union bound:

P_s ≤ Σ_{i=1}^{M} Pr[s_i] Σ_{j ∈ A_i} Q( d_{i,j} / √(2N_0) ),     A_i = { j : R_j borders R_i }

Page 154: DC Digital Communication PART2


PAM: Pulse Amplitude Modulation

Definition: K = 1 dimension. Signals are equally spaced and symmetric about the origin:

s_i = (2i − 1 − M) d_min / 2

Example: 8-PAM. [Figure: points s_1, …, s_8 spaced d_min apart along a line, symmetric about 0.]

Performance:

P_s ≤ ( 2(M − 1)/M ) Q( √( 6 log_2(M) E_b / ((M² − 1)N_0) ) )

P_b ≈ ( 2(M − 1) / (M log_2 M) ) Q( √( 6 log_2(M) E_b / ((M² − 1)N_0) ) )

Page 155: DC Digital Communication PART2


MPSK: M-ary Phase Shift Keying

Definition: K = 2 dimensions. Signals equally spaced along a circle of radius √E_s:

s_i = [ √E_s cos( 2π(i − 1)/M ) ;  √E_s sin( 2π(i − 1)/M ) ]

Example: 8-PSK. [Figure: eight points on a circle of radius √E_s.]

Performance:

P_s ≈ 2 Q( √(2 log_2(M) E_b/N_0) sin(π/M) )

P_b ≈ ( 2/log_2 M ) Q( √(2 log_2(M) E_b/N_0) sin(π/M) )

Page 156: DC Digital Communication PART2


QAM: Quadrature Amplitude Modulation

Definition: K = 2 dimensions. Points can be placed anywhere on the plane. Neighboring points are normally distance d_min apart. The constellation normally takes on a "box" or "cross" shape.

Performance: depends on the geometry. In general, when p_i = 1/M:

P_s ≤ (1/M) [ N_4·4·Q( d_min/√(2N_0) ) + N_3·3·Q( d_min/√(2N_0) ) + N_2·2·Q( d_min/√(2N_0) ) ]

where N_4, N_3, and N_2 are the numbers of points with 4, 3, and 2 neighbors, respectively.

Example: 16-QAM.

Page 157: DC Digital Communication PART2


QAM: Continued

Need to relate d_min to E_s and E_b. Because QAM signals don't have constant energy:

E_s = Σ_{i=1}^{M} p_i E_i = Σ_{i=1}^{M} p_i ||s_i||² = (1/M) Σ_{i=1}^{M} ||s_i||²

Solve the above to get d_min = f(E_s) and plug into the expression for P_s.

Bit error probability is difficult to determine, because the exact mapping of bits to symbols must be taken into account.

Page 158: DC Digital Communication PART2


Orthogonal Signaling

K=M dimensions. Signal space representation:

Example: 3-FSK

s_1 = [√E_s; 0; …; 0],   s_2 = [0; √E_s; 0; …; 0],   …,   s_M = [0; …; 0; √E_s]

[Figure: 3-FSK example, three signal points at distance √E_s along each of three orthogonal axes.]

Page 159: DC Digital Communication PART2


Performance of Orthogonal Signaling

Distances: The signal points are equally-distant:

Using the Union Bound:

Bit error probability

d_{i,j} = √(2E_s)   for all i ≠ j

P_s ≤ (M − 1) Q( √(E_s/N_0) ) = (M − 1) Q( √( log_2(M) E_b/N_0 ) )

P_b = ( 2^(log_2(M) − 1) / (2^(log_2 M) − 1) ) P_s ≈ P_s/2       (see Eq. (5.2-24) for details)
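A MATLAB sketch of the union bound above (an aside, plotted over an assumed Eb/No range) shows the trend of the next slide, i.e. performance improving as M increases:

% Union bound on Ps for M-ary orthogonal (FSK) signaling, with Es = log2(M)*Eb.
Q = @(z) 0.5*erfc(z/sqrt(2));
EbNodB = 0:16;   EbNo = 10.^(EbNodB/10);
figure;  hold on;
for M = [2 4 8 16 32 64]
    Ps = (M-1) * Q(sqrt(log2(M) * EbNo));   % (M-1) * Q(sqrt(Es/No))
    plot(EbNodB, Ps);
end
set(gca, 'YScale', 'log');  xlabel('E_b/N_o (dB)');  ylabel('P_s (union bound)');
legend('M = 2', 'M = 4', 'M = 8', 'M = 16', 'M = 32', 'M = 64');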

Page 160: DC Digital Communication PART2

Performance Comparison: FSK (Orthogonal)

Performance gets better as M increases.

[Figure: BER versus E_b/N_o in dB (0 to 16 dB) for 2-, 4-, 8-, 16-, 32-, and 64-FSK.]

Page 161: DC Digital Communication PART2


Limits of FSK

As M → ∞, P_s → 0 provided that:

E_b/N_0 > ln 2 = −1.59 dB        Eq. (5.2-30)

Although FSK is energy efficient, it is not bandwidth efficient, Eq. (5.2-86):

W ≈ (M/2) R_s = M R_b / (2 log_2 M)

BW efficiency can be improved by using biorthogonal signaling or simplex signaling.

Page 162: DC Digital Communication PART2


Biorthogonal Signaling

K = M/2 dimensions (M is even).

First M/2 signals are orthogonal: s_i(t) = f_i(t) for 1 ≤ i ≤ M/2.

Remaining M/2 signals are the negatives: s_i(t) = −f_{i−M/2}(t) for M/2 + 1 ≤ i ≤ M.

Since there are half as many dimensions as orthogonal signaling, the bandwidth requirement is halved:

W ≈ (M/4) R_s = M R_b / (4 log_2 M)

Page 163: DC Digital Communication PART2


Example Biorthogonal Signal Set

Biorthogonal signal set for M=6.

[Figure: biorthogonal signal set for M = 6; signal points at ±√E_s along each of three orthogonal axes.]

Page 164: DC Digital Communication PART2


Performance of Biorthogonal Signals

Compute the distances:

d_{i,j} = 0 for i = j;   d_{i,j} = 2√E_s for |i − j| = M/2;   d_{i,j} = √(2E_s) otherwise.

Union Bound:

P_s ≤ (M − 2) Q( √(E_s/N_0) ) + Q( √(2E_s/N_0) )

Improved Union Bound:

P_s ≤ (M − 2) Q( √(E_s/N_0) ) = (M − 2) Q( √( log_2(M) E_b/N_0 ) )

Performance of biorthogonal is actually slightly better than orthogonal.

Page 165: DC Digital Communication PART2

Simplex Signaling

Consider our 3-FSK example:
» The 3 points form a plane.
» All 3 points can be placed on a 2-dimensional plot.
» By changing coordinates, we can reduce the number of dimensions from 3 to 2.

Make a new constellation:
» 2-dimensional instead of 3.
» The origin of the new constellation is the mean of the old constellation.
» The distances between signals are the same.

[Figure: the three 3-FSK signal points, each at distance √E_s from the origin and d_min from one another, lie in a common plane.]

Page 166: DC Digital Communication PART2


Simplex Signaling

To create a simplex signal set: Start with an orthogonal signal set. Compute the centroid of the set:

Shift the signals so that they are centered about the centroid:

s̄ = Σ_{i=1}^{M} p_i s_i = (√E_s/M) [ 1; 1; …; 1 ]

s_i' = s_i − s̄

Page 167: DC Digital Communication PART2


Simplex Signaling

Compute a new set of basis functions.
• The new set of signals has dimensionality K = M − 1.
• Therefore the bandwidth is W ≈ ((M − 1)/2) R_s = (M − 1) R_b / (2 log_2 M)

The average energy of the simplex constellation is now less than that of the original orthogonal set:

E_s' = || s_i − s̄ ||² = E_s ( 1 − 1/M )

Page 168: DC Digital Communication PART2


Performance of Simplex Signals

The distances between signals remain the same as with orthogonal signaling. Therefore, the symbol error probability is:

P_s ≤ (M − 1) Q( √(E_s/N_0) )

where E_s is the energy of the original orthogonal set. But this is in terms of the old E_s; we want it in terms of the new one:

E_s = E_s' · M/(M − 1),   since   E_s' = E_s (1 − 1/M)

Therefore,

P_s ≤ (M − 1) Q( √( M E_s' / ((M − 1) N_0) ) ) = (M − 1) Q( √( M log_2(M) E_b / ((M − 1) N_0) ) )

Page 169: DC Digital Communication PART2


BPSK and QPSK

Categorize the following:

BPSK: MPSK? QAM? Orthogonal? Biorthogonal? Simplex?

QPSK: MPSK? QAM? Orthogonal? Biorthogonal? Simplex?