Chapter 1 Random Processes
Probability theory is an essential background for the study of communication systems.
© Po-Ning [email protected] Chapter 1-2
1.1 Mathematical Models
A mathematical model of a system is the basis of its analysis, whether analytical or empirical.
Two models are usually considered:
 Deterministic model: no uncertainty about the time-dependent behavior at any instant of time.
 Random (stochastic) model: uncertain about the time-dependent behavior at any instant of time, but certain about the statistical behavior at any instant of time.
1.1 Examples of Stochastic Models
 Channel noise and interference
 Sources of information, such as voice
A1.1: Relative Frequency
How can we determine the probability of a head for a coin?
Answer: relative frequency.
Specifically, by carrying out n coin-tossing experiments, the relative frequency of heads equals Nn(A)/n, where Nn(A) is the number of heads observed in these n random experiments.
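The relative-frequency idea can be tried numerically. A minimal sketch (the sample sizes, the seed, and the fair-coin probability p are illustrative assumptions, not from the slides):

```python
import numpy as np

# Monte Carlo illustration of relative frequency Nn(A)/n.
rng = np.random.default_rng(0)
p = 0.5                               # true head probability of a fair coin

for n in (10, 100, 10_000):
    tosses = rng.random(n) < p        # True = "head"
    rel_freq = tosses.mean()          # Nn(A) / n
    print(n, rel_freq)
```

For small n the relative frequency can deviate noticeably from p; it settles near p only as n grows, which is exactly the concern raised on the next slide.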
A1.1: Relative Frequency
Is the relative frequency close to the true probability (of a head)? It may well happen that 4 out of 10 tosses of a fair coin come up heads!
Can one guarantee that the true head probability remains unchanged (i.e., time-invariant) across experiments performed at different time instants?
A1.1: Relative Frequency
Similarly, the previous question extends to communications: “In a communication system, can we estimate the noise by repetitive measurements at consecutive but different time instants?”
Some assumptions on the statistical models are necessary!
A1.1: Axioms of Probability
Definition of a probability system (S, F, P) (also named a probability space):
1. Sample space S
 All possible outcomes (sample points) of the experiment.
2. Event space F
 A collection of subsets of the sample space that can be probabilistically measured:
 ∅ ∈ F and S ∈ F.
 A ∈ F implies Ac ∈ F.
 A ∈ F and B ∈ F implies A∪B ∈ F.
 A ∈ F and B ∈ F implies A∩B ∈ F.
3. Probability measure P
A1.1: Axioms of Probability
3. Probability measure P
 A probability measure satisfies:
 P(S) = 1 and P(∅) = 0.
 For any A in F, 0 ≤ P(A) ≤ 1.
 For any two mutually exclusive events A and B, P(A ∪ B) = P(A) + P(B).
The relative-frequency counterparts are:

1. Nn(S) = n and Nn(∅) = 0.
2. 0 ≤ Nn(A) ≤ n.
3. Nn(A ∪ B) = Nn(A) + Nn(B) for mutually exclusive A and B.
These Axioms coincide with the relative-frequency expectation.
A1.1: Axioms of Probability
Example. Rolling a die.
 S = {the six faces} (the faces appeared as pictures in the original slides).
 F = {∅, A, Ac, S}, where A is some set of faces.
 P satisfies P(∅) = 0, P(A) = 0.4, P(Ac) = 0.6, P(S) = 1.
A1.1: Properties from Axioms
All the properties are induced from the axioms.
 Example 1. P(Ac) = 1 − P(A).
 Proof. Since A and Ac are mutually exclusive events, P(A) + P(Ac) = P(A∪Ac) = P(S) = 1.
 Example 2. P(A∪B) = P(A) + P(B) − P(A∩B).
 Proof. P(A∪B) = P(A\B) + P(B\A) + P(A∩B) = [P(A\B) + P(A∩B)] + [P(B\A) + P(A∩B)] − P(A∩B) = P(A) + P(B) − P(A∩B).
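The inclusion-exclusion identity of Example 2 can be checked on a small equally-likely sample space. A sketch; the events A and B are illustrative choices:

```python
from fractions import Fraction

# Equally likely die outcomes: P(E) = |E| / |S|.
S = frozenset({1, 2, 3, 4, 5, 6})

def P(E):
    return Fraction(len(E), len(S))

A = {1, 2, 3}
B = {3, 4}
lhs = P(A | B)                      # P(A ∪ B)
rhs = P(A) + P(B) - P(A & B)        # P(A) + P(B) - P(A ∩ B)
print(lhs, rhs)                     # both 2/3
```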
A1.1: Conditional Probability
Definition of conditional probability:

P(B|A) = P(A∩B)/P(A), motivated by the relative frequency Nn(A∩B)/Nn(A).

Independence of events
 Knowledge of the occurrence of event A tells us no more about the probability of occurrence of event B than we knew without this knowledge:

P(B|A) = P(B).

 Hence, they are statistically independent.
A1.2: Random Variable
A probability system (S, F, P) can be “visualized” (or observed, or recorded) through a real-valued random variable X:

X : S → ℝ.

[Figure: each of the six die faces is mapped to one of the real numbers 1, 2, …, 6.]

After mapping the sample points in the sample space to real numbers, the cumulative distribution function (cdf) can be defined as:

FX(x) = Pr[X ≤ x] = P({s ∈ S : X(s) ≤ x}).
A1.2: Random Variable
Since the event [X ≤ x] has to be probabilistically measurable for any real number x, the event space F has to contain all events of the form [X ≤ x].
In the previous example, the event space F must contain the intersections and unions of the following 6 sets (each left-hand side is a set of die faces, shown as pictures in the original slides):
 {…} = {s∈S : X(s) ≤ 1}
 {…} = {s∈S : X(s) ≤ 2}
 {…} = {s∈S : X(s) ≤ 3}
 {…} = {s∈S : X(s) ≤ 4}
 {…} = {s∈S : X(s) ≤ 5}
 {…} = {s∈S : X(s) ≤ 6}
Otherwise, the cdf is not well-defined.
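For the die-valued X above, the cdf is a right-continuous staircase that jumps by 1/6 at each of the values 1, …, 6. A sketch (the evaluation points are illustrative):

```python
import numpy as np

# cdf of the die-valued random variable X: each of the values 1..6 has
# probability 1/6, so F_X(x) = Pr[X <= x] counts the values not exceeding x.
def F_X(x):
    return float(np.clip(np.floor(x), 0, 6)) / 6.0

for x in (-1.0, 0.5, 1.0, 3.7, 6.0, 10.0):
    print(x, F_X(x))
```

Note F_X(1.0) already equals 1/6: right-continuity means the jump at each value is included at that point.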
A1.2: Random Variable
It can be proved that we can construct a well-defined probability system (S, F, P) for any random variable X and its cdf FX.
 So defining a real-valued random variable by its cdf is good enough from an engineering standpoint.
 In other words, it is not necessary to mention the probability system (S, F, P) before defining a random variable.
A1.2: Random Variable
It can be proved that any function G satisfying:

1. lim_{x→−∞} G(x) = 0 and lim_{x→∞} G(x) = 1.
2. Right-continuous.
3. Non-decreasing.

is a legitimate cdf for some random variable.
 It suffices to check the above three properties for FX(x) to well-define a random variable.
A1.2: Random Variable
A non-negative function fX(x) satisfying

Pr(X ≤ x) = ∫_{−∞}^{x} fX(t) dt

is called the probability density function (pdf) of the random variable X.
 If the pdf of X exists, then

fX(x) = dFX(x)/dx.
A1.2: Random Variable
It is not necessarily true that a pdf exists for every random variable. However, if the derivative

fX(x) = dFX(x)/dx

exists, then the pdf of X exists and equals fX(x).
A1.2: Random Vector
We can extend a random variable to a (multi-dimensional) random vector. For two random variables X and Y, the joint cdf is defined as:

FX,Y(x, y) = Pr(X ≤ x and Y ≤ y) = P({s∈S : X(s) ≤ x} ∩ {s∈S : Y(s) ≤ y}).

Again, the event [X ≤ x and Y ≤ y] must be probabilistically measurable in some probability system (S, F, P) for any real numbers x and y.
A1.2: Random Vector
Continuing from the previous example, with (X, Y) : S → ℝ², the event space F must contain the intersections and unions of the following 4 sets (each left-hand side is a set of sample points, shown as pictures in the original slides):
 {…} = {s∈S : X(s) ≤ 1 and Y(s) ≤ 1}
 {…} = {s∈S : X(s) ≤ 2 and Y(s) ≤ 1}
 {…} = {s∈S : X(s) ≤ 1 and Y(s) ≤ 2}
 {…} = {s∈S : X(s) ≤ 2 and Y(s) ≤ 2}
A1.2: Random Vector
If the joint density fX,Y(x, y) exists, then

FX,Y(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{y} fX,Y(u, v) dv du.

The conditional density function of Y given that [X = x] is

fY|X(y|x) = fX,Y(x, y) / fX(x),

provided that fX(x) ≠ 0.
1.2 Random Process
Random process: an extension of multi-dimensional random vectors.
 Representation of a 2-dimensional random vector: (X, Y) = (X(1), X(2)) = {X(j), j∈I}, where the index set I equals {1, 2}.
 Representation of an m-dimensional random vector: {X(j), j∈I}, where the index set I equals {1, 2, …, m}.
1.2 Random Process
How about {X(t), t∈ℝ}? It is no longer a random vector since the index set is continuous!
 This is a suitable model for, e.g., a noise, because a noise often exists continuously in time.
 Its cdf is well-defined through the mapping:

X(t) : S → ℝ for every t ∈ ℝ.
[Figure: sample functions X(t, s1), X(t, s2), …, X(t, sn), one waveform per sample point.]
Define or view a random process via its inherited probability system
1.2 Random Process
X(t, sj) is called a sample function (or a realization) of the random process for sample point sj.
X(t, sj) is deterministic.
1.2 Random Process
Notably, with a probability system (S, F, P) over which the random process is defined, any finite-dimensional joint cdf is well-defined. For example,

Pr[X(t1) ≤ x1 and X(t2) ≤ x2 and X(t3) ≤ x3]
 = P({s∈S : X(t1, s) ≤ x1} ∩ {s∈S : X(t2, s) ≤ x2} ∩ {s∈S : X(t3, s) ≤ x3}).
1.2 Random Process
Summary
 A random variable maps s to a real number X.
 A random vector maps s to a multi-dimensional real vector.
 A random process maps s to a real-valued deterministic function.
s is where the probability is truly defined; yet the image of the mapping is what we can observe and experiment over.
1.2 Random Process
Question: Can we map s to two or more real-valued deterministic functions?
Answer: Yes, such as (X(t), Y(t)).
 Then, we can discuss any finite-dimensional joint distribution of X(t) and Y(t), such as the joint distribution of

(X(t1), X(t2), X(t3), Y(t1), Y(t4)).
1.3 (Strictly) Stationary
Motivation for strict stationarity: the statistical properties of a random process encountered in the real world are often independent of the time at which the observation (or experiment) is initiated.
Mathematically, this can be formulated as: for any t1, t2, …, tk and any time shift τ,

F_{X(t1+τ), X(t2+τ), …, X(tk+τ)}(x1, x2, …, xk) = F_{X(t1), X(t2), …, X(tk)}(x1, x2, …, xk).
1.3 (Strictly) Stationary
Example 1.1. Suppose that every finite-dimensional cdf of a random process X(t) is defined. Find the probability of the joint event

A = {a1 < X(t1) ≤ b1 and a2 < X(t2) ≤ b2}.

Answer:

P(A) = F_{X(t1),X(t2)}(b1, b2) − F_{X(t1),X(t2)}(a1, b2) − F_{X(t1),X(t2)}(b1, a2) + F_{X(t1),X(t2)}(a1, a2).
1.3 (Strictly) Stationary
Example 1.1 (continued). Further assume that X(t) is strictly stationary. Then, P(A) is also equal to:

P(A) = F_{X(t1+τ),X(t2+τ)}(b1, b2) − F_{X(t1+τ),X(t2+τ)}(a1, b2) − F_{X(t1+τ),X(t2+τ)}(b1, a2) + F_{X(t1+τ),X(t2+τ)}(a1, a2).

Why introduce “stationarity?”
 With stationarity, we can be certain that the observations made at different time instants have the same distributions!
 For example, X(0), X(T), X(2T), X(3T), ….
 Suppose that Pr[X(0) = 0] = Pr[X(0) = 1] = ½. Can we guarantee that the relative frequency (i.e., the average) of 1’s appearing in experiments performed at several different times approaches ½ by stationarity alone? No, we need an additional assumption!
1.4 Mean
The mean of a random process X(t) at time t is equal to:

μX(t) = E[X(t)] = ∫_{−∞}^{∞} x f_{X(t)}(x) dx,

where f_{X(t)}(·) is the pdf of X(t) at time t.
 If X(t) is stationary, the mean μX(t) is independent of t; it is the same constant for all t.
1.4 Autocorrelation
The autocorrelation function of a real random process X(t) is given by:

RX(t1, t2) = E[X(t1)X(t2)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 x2 f_{X(t1),X(t2)}(x1, x2) dx1 dx2.

If X(t) is stationary, the autocorrelation function RX(t1, t2) is equal to RX(t1 − t2, 0).
1.4 Autocorrelation
For a stationary X(t):

RX(t1, t2) = E[X(t1)X(t2)]
 = ∫∫ x1 x2 f_{X(t1),X(t2)}(x1, x2) dx1 dx2
 = ∫∫ x1 x2 f_{X(t1−t2),X(0)}(x1, x2) dx1 dx2   (by stationarity)
 = E[X(t1 − t2)X(0)]
 = RX(t1 − t2, 0)
 ≜ RX(t1 − t2).

The last expression is a short-hand for the autocorrelation function of a stationary process.
1.4 Autocorrelation
Conceptually, autocorrelation function = “power correlation” between two time instances t1 and t2.
1.4 Autocovariance
“Variance” measures the degree of variation about the reference value (i.e., the mean).

The autocovariance function CX(t1, t2) is given by:

CX(t1, t2) = E[(X(t1) − μX(t1))(X(t2) − μX(t2))]
 = E[X(t1)X(t2)] − μX(t2)E[X(t1)] − μX(t1)E[X(t2)] + μX(t1)μX(t2)
 = RX(t1, t2) − μX(t1)μX(t2).
1.4 Autocovariance
If X(t) is stationary, the autocovariance function CX(t1, t2) becomes

CX(t1, t2) = RX(t1, t2) − μX(t1)μX(t2)
 = RX(t1 − t2, 0) − μX²
 = CX(t1 − t2, 0)
 ≜ CX(t1 − t2).
1.4 Wide-Sense Stationary (WSS)
Since in most cases of practical interest, only the first two moments (i.e., μX(t) and CX(t1, t2)) are of concern, an alternative definition of stationarity is introduced.

Definition (Wide-Sense Stationarity) A random process X(t) is WSS if

μX(t) = constant and CX(t1, t2) = CX(t1 − t2),

or equivalently,

μX(t) = constant and RX(t1, t2) = RX(t1 − t2).
1.4 Wide-Sense Stationary (WSS)
Alternative names for WSS include:
 weakly stationary
 stationary in the weak sense
 second-order stationary
If the first two moments of a random process exist (i.e., are finite), then strict stationarity implies weak stationarity (but not vice versa).
1.4 Cyclostationarity
Definition (Cyclostationarity) A random process X(t) is cyclostationary if there exists a constant T such that

μX(t + T) = μX(t) and CX(t1 + T, t2 + T) = CX(t1, t2).
1.4 Properties of autocorrelation function for WSS random process
1. Second moment: RX(0) = E[X²(t)] ≥ 0.
2. Symmetry: RX(τ) = RX(−τ).
 In concept, the autocorrelation function is the “power correlation” between two time instants t1 and t2.
 For a WSS process, this “power correlation” depends only on the time difference.
1.4 Properties of autocorrelation function for WSS random process
3. Peak at zero: |RX(τ)| ≤ RX(0).

Proof:

0 ≤ E[(X(t) ± X(t + τ))²]
 = E[X²(t)] + E[X²(t + τ)] ± 2E[X(t)X(t + τ)]
 = 2RX(0) ± 2RX(τ).

Hence, −RX(0) ≤ RX(τ) ≤ RX(0), with equality when

Pr[X(t + τ) = X(t)] = 1 or Pr[X(t + τ) = −X(t)] = 1.
1.4 Properties of autocorrelation function for WSS random process
Operational meaning of the autocorrelation function: the “power correlation” of a random process at τ seconds apart.
 The smaller RX(τ) is, the less the correlation between X(t) and X(t + τ).
1.4 Properties of autocorrelation function for WSS random process
If RX(τ) decays faster, the correlation between samples τ apart decays faster.
1.4 Decorrelation time
Some researchers define the decorrelation time τdec as the smallest τ for which |RX(τ)| drops below a small fraction (e.g., 1%) of RX(0).
Example 1.2 Signal with Random Phase
Let X(t) = A cos(2πfct + Θ), where Θ is uniformly distributed over [−π, π).
 Application: a local carrier at the receiver side may have a random “phase difference” with respect to the carrier at the transmitter side.
Example 1.2 Signal with Random Phase
[Figure: bits 0110… → channel encoder → modulator with carrier Ac cos(2πfct) → s(t) → channel adding w(t) → x(t) → correlator (multiply by local carrier cos(2πfct + θ), integrate over T) → decision → bits 0110…. The received carrier is modeled as X(t) = A cos(2πfct + Θ).]
Example 1.2 Signal with Random Phase
Then

μX(t) = E[A cos(2πfct + Θ)]
 = (A/2π) ∫_{−π}^{π} cos(2πfct + θ) dθ
 = (A/2π) [sin(2πfct + π) − sin(2πfct − π)]
 = 0.
Example 1.2 Signal with Random Phase
RX(t1, t2) = E[A cos(2πfct1 + Θ) A cos(2πfct2 + Θ)]
 = (A²/2π) ∫_{−π}^{π} cos(2πfct1 + θ) cos(2πfct2 + θ) dθ
 = (A²/4π) ∫_{−π}^{π} [cos(2πfc(t1 − t2)) + cos(2πfc(t1 + t2) + 2θ)] dθ
 = (A²/2) cos(2πfc(t1 − t2)).
Hence, X(t) is WSS.
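The conclusions of Example 1.2 (zero mean, autocorrelation (A²/2)cos(2πfcτ)) can be checked by Monte Carlo simulation. A sketch with illustrative A, fc, t and τ:

```python
import numpy as np

# Monte Carlo check of Example 1.2: X(t) = A cos(2*pi*fc*t + Theta),
# with Theta ~ Uniform[-pi, pi).  A, fc, t and tau are illustrative.
rng = np.random.default_rng(1)
A, fc = 2.0, 5.0
theta = rng.uniform(-np.pi, np.pi, size=200_000)

t, tau = 0.37, 0.11
x_t   = A * np.cos(2 * np.pi * fc * t + theta)          # X(t)
x_tau = A * np.cos(2 * np.pi * fc * (t + tau) + theta)  # X(t + tau)

mean_est = x_t.mean()                       # should be near 0
R_est    = (x_t * x_tau).mean()             # sample autocorrelation at lag tau
R_theory = (A ** 2 / 2) * np.cos(2 * np.pi * fc * tau)
print(mean_est, R_est, R_theory)
```

The estimates should not depend on the chosen t, consistent with WSS.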
Example 1.3 Random Binary Wave
Let

X(t) = A Σn In p(t − nT − td),

where …, I−2, I−1, I0, I1, I2, … are independent, each Ij is either +1 or −1 with equal probability, and

p(t) = 1 for 0 ≤ t ≤ T, and 0 otherwise.
Example 1.3 Random Binary Wave
[Figure: bits 0110… → channel encoder → modulator (no/ignored carrier) → s(t) → channel adding w(t) → x(t) → correlator → decision → bits 0110…. One transmitted pulse is X(t) = A p(t − td).]
Example 1.3 Random Binary Wave
Then, by assuming that td is uniformly distributed over [0, T), we obtain:

μX(t) = E[A Σn In p(t − nT − td)]
 = A Σn E[In] E[p(t − nT − td)]
 = A Σn 0 · E[p(t − nT − td)]
 = 0.
Example 1.3 Random Binary Wave
A useful probabilistic rule: E[X] = E[E[X|Y]]. Hence,

E[X(t1)X(t2)] = E[E[X(t1)X(t2) | td]].

So we have:

E[X(t1)X(t2)] = E[E[A Σn In p(t1 − nT − td) · A Σm Im p(t2 − mT − td) | td]]
 = A² Σn Σm E[In Im] E[p(t1 − nT − td) p(t2 − mT − td)]
 = A² Σn E[p(t1 − nT − td) p(t2 − nT − td)],

since E[In Im] = E[In]E[Im] = 0 for n ≠ m, and E[In²] = 1.
Three conditions must be simultaneously satisfied to obtain a non-zero term in Σn p(t1 − nT − td) p(t2 − nT − td):

 p(t1 − nT − td) = 1 iff 0 ≤ t1 − nT − td ≤ T, or equivalently (t1 − td)/T − 1 ≤ n ≤ (t1 − td)/T;
 p(t2 − nT − td) = 1 iff 0 ≤ t2 − nT − td ≤ T, or equivalently (t2 − td)/T − 1 ≤ n ≤ (t2 − td)/T;
 n must be an integer.

Notably, 0 ≤ td < T.
For an easier understanding, we let t1 = 0. The first condition becomes −td/T − 1 ≤ n ≤ −td/T, which, combined with 0 ≤ td < T and the integrality of n, forces n = −1 (with n = 0 also allowed in the boundary case td = 0). Substituting this n into the second condition then determines, for each time difference t2 − t1, the range of td over which both pulses equal 1.
Example 1.3 Random Binary Wave

Carrying this out gives

E[X(t1)X(t2) | td] = A² Σn p(t1 − nT − td) p(t2 − nT − td)
 = A² if t1 and t2 fall into the same symbol slot for the given td (which requires |t1 − t2| < T), and 0 otherwise.

Averaging over td, uniformly distributed on [0, T), then yields

E[X(t1)X(t2)] = E[E[X(t1)X(t2) | td]]
 = (1/T) ∫_{|t1−t2|}^{T} A² dtd
 = A² (1 − |t1 − t2|/T) for |t1 − t2| < T, and 0 otherwise.
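The triangular autocorrelation A²(1 − |τ|/T) of the random binary wave can likewise be checked by Monte Carlo. A sketch; the symbol count, lag, observation times and seed are illustrative:

```python
import numpy as np

# Monte Carlo check of Example 1.3: at lag tau, two samples of the random
# binary wave multiply to +A^2 when they land in the same symbol slot
# (probability 1 - |tau|/T) and average to 0 otherwise.
rng = np.random.default_rng(2)
A, T = 1.0, 1.0
n_sym, n_runs = 64, 4000
tau = 0.3                                   # lag to check, |tau| < T

prods = []
for _ in range(n_runs):
    I  = rng.choice([-1.0, 1.0], size=n_sym)   # i.i.d. +/-1 symbols
    td = rng.uniform(0.0, T)                   # random start delay
    t1, t2 = 10.0, 10.0 + tau                  # two observation times
    # slot index n satisfies nT + td <= t <= nT + td + T
    n1 = int((t1 - td) // T)
    n2 = int((t2 - td) // T)
    prods.append((A * I[n1]) * (A * I[n2]))

R_est = float(np.mean(prods))
R_theory = A ** 2 * (1 - abs(tau) / T)
print(R_est, R_theory)
```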
1.4 Cross-Correlation
The cross-correlation between two processes X(t) and Y(t) is:

RX,Y(t, u) = E[X(t)Y(u)].

Sometimes, their correlation matrix is given instead for convenience:

R(t, u) = [ RX(t, u)    RX,Y(t, u)
            RY,X(t, u)  RY(t, u) ].
1.4 Cross-Correlation
If X(t) and Y(t) are jointly weakly stationary, then

R(t, u) = R(t − u),

i.e., RX(t, u) = RX(t − u), RY(t, u) = RY(t − u), and RX,Y(t, u) = RX,Y(t − u).
Example 1.4 Quadrature-Modulated Processes
Consider a pair of quadrature components of X(t):

XI(t) = X(t) cos(2πfct + Θ)
XQ(t) = X(t) sin(2πfct + Θ),

where Θ is independent of X(t) and is uniformly distributed over [0, 2π).
Example 1.4 Quadrature-Modulated Processes
RXI,XQ(t, u) = E[XI(t)XQ(u)]
 = E[X(t) cos(2πfct + Θ) X(u) sin(2πfcu + Θ)]
 = E[X(t)X(u)] E[cos(2πfct + Θ) sin(2πfcu + Θ)]
 = RX(t, u) E[(1/2) sin(2πfc(t + u) + 2Θ) − (1/2) sin(2πfc(t − u))]
 = −(1/2) RX(t, u) sin(2πfc(t − u)).
Example 1.4 Quadrature-Modulated Processes
Notably, if t = u, i.e., the two quadrature components are observed at the same time, then

RXI,XQ(t, t) = −(1/2) RX(t, t) sin(0) = 0,

which indicates that simultaneous observations of the quadrature-modulated processes are “orthogonal” to each other!
(See Slide 1-78 for a formal definition of orthogonality.)
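The equal-time orthogonality can be checked numerically. A sketch in which X(t) is simulated as a unit-variance Gaussian sample independent of Θ (an illustrative assumption; fc and t are also illustrative):

```python
import numpy as np

# Monte Carlo check that E[X_I(t) X_Q(t)] = 0 at equal times,
# with Theta ~ Uniform[0, 2*pi) independent of X(t).
rng = np.random.default_rng(3)
fc, t = 3.0, 0.42
n = 200_000

theta = rng.uniform(0.0, 2 * np.pi, size=n)
x = rng.standard_normal(n)                  # X(t), independent of Theta

xi = x * np.cos(2 * np.pi * fc * t + theta) # X_I(t)
xq = x * np.sin(2 * np.pi * fc * t + theta) # X_Q(t)
cross = np.mean(xi * xq)                    # should be near 0
print(cross)
```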
1.5 Ergodic Process
For a random-process-modeled noise (or source) X(t), how can we know its mean and variance?
 Answer: relative frequency. Specifically, by measuring X(t1), X(t2), …, X(tn) and calculating their average, it is expected that this time average will be close to the mean.
 The question is: “Will this time average be close to the mean if X(t) is stationary?” For a stationary process, the mean μX(t) is independent of time t.
1.5 Ergodic Process
The answer is negative! An additional ergodicity assumption is necessary for the time average to converge to the ensemble average μX.
1.5 Time Average versus Ensemble Average
Example. X(t) is stationary. For any t, X(t) is uniformly distributed over {1, 2, 3, 4, 5, 6}. Then the ensemble average is equal to:

(1/6)(1) + (1/6)(2) + (1/6)(3) + (1/6)(4) + (1/6)(5) + (1/6)(6) = 3.5.
1.5 Time Average versus Ensemble Average
We make a series of observations at times 0, T, 2T, …, 9T and obtain 1, 2, 3, 4, 3, 2, 5, 6, 4, 1. (They are deterministic!)
Then, the time average is equal to:

(1 + 2 + 3 + 4 + 3 + 2 + 5 + 6 + 4 + 1)/10 = 3.1.
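The two averages in this example can be reproduced directly:

```python
import numpy as np

# Ensemble average of a fair die vs. the time average of the particular
# realization observed on the slide at times 0, T, ..., 9T.
ensemble_avg = sum(range(1, 7)) / 6                 # 3.5
observations = [1, 2, 3, 4, 3, 2, 5, 6, 4, 1]
time_avg = float(np.mean(observations))             # 3.1
print(ensemble_avg, time_avg)
```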
1.5 Ergodicity
Definition. A stationary process X(t) is ergodic in the mean if

1. lim_{T→∞} μX(T) = μX with probability 1, and
2. lim_{T→∞} Var[μX(T)] = 0,

where

μX(T) = (1/2T) ∫_{−T}^{T} X(t) dt.
1.5 Ergodicity
Definition. A stationary process X(t) is ergodic in the autocorrelation function if

1. lim_{T→∞} RX(τ; T) = RX(τ) with probability 1, and
2. lim_{T→∞} Var[RX(τ; T)] = 0,

where

RX(τ; T) = (1/2T) ∫_{−T}^{T} X(t + τ)X(t) dt.
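A sketch of ergodicity in the mean, using i.i.d. die values at the observation times as an illustrative (ergodic) stand-in for X(t); the sample size and seed are assumptions:

```python
import numpy as np

# Time average of one realization converging toward the ensemble mean 3.5.
rng = np.random.default_rng(4)
ensemble_mean = 3.5

realization = rng.integers(1, 7, size=100_000).astype(float)  # die values 1..6
time_avgs = np.cumsum(realization) / np.arange(1, realization.size + 1)
print(time_avgs[9], time_avgs[-1])   # short vs. long observation window
```

The short-window average can be far from 3.5 (as in the previous example), while the long-window average settles near it.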
1.5 Ergodic Process
Experiments or observations on the same process can only be performed at different times.
“Stationarity” only guarantees that the observations made at different times come from the same distribution.
A1.3 Statistical Average
Alternative names for the ensemble average:
 Mean
 Expectation value
 Sample average (recall that the sample space consists of all possible outcomes!)

How about the sample average of a function g(·) of a random variable X?

E[g(X)] = ∫_{−∞}^{∞} g(x) fX(x) dx.
A1.3 Statistical Average

The nth moment of a random variable X is E[Xⁿ]; the 2nd moment is also named the mean-square value.
The nth central moment of X is E[(X − μX)ⁿ]; the 2nd central moment is also named the variance, and the square root of the 2nd central moment is named the standard deviation.
A1.3 Joint Moments
The joint moment of X and Y is given by:

E[X^i Y^j] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x^i y^j fX,Y(x, y) dx dy.

When i = j = 1, the joint moment is specifically named the correlation.
The correlation of centered random variables is specifically named the covariance:

Cov[X, Y] = E[(X − μX)(Y − μY)] = E[XY] − μX μY.
A1.3 Joint Moments
Two random variables X and Y are uncorrelated if, and only if, Cov[X, Y] = 0.
Two random variables X and Y are orthogonal if, and only if, E[XY] = 0.
The covariance, normalized by the two standard deviations, is named the correlation coefficient of X and Y:

ρ = Cov[X, Y] / (σX σY).
A1.3 Characteristic Function

The characteristic function φX(ν) = E[exp(jνX)] is indeed the inverse Fourier transform of the pdf:

φX(ν) = ∫_{−∞}^{∞} fX(x) exp(jνx) dx.
1.6 Transmission of a Random Process Through a Stable Linear Time-Invariant Filter

Linear: Y(t) is a linear function of X(t). Specifically, Y(t) is a weighted sum (integral) of X(t).
Time-invariant: the weights are time-independent.
Stable: Dirichlet’s condition (defined later) and

∫_{−∞}^{∞} |h(τ)|² dτ < ∞.

“Stability” implies that the output is an energy function, which has finite power (second moment), if the input is an energy function.
Example of LTI filter: Mobile Radio Channel
[Figure: three propagation paths (β1, τ1), (β2, τ2), (β3, τ3) from transmitter to receiver.]

Y(t) = β1 s(t − τ1) + β2 s(t − τ2) + β3 s(t − τ3) = Σ_{i=1}^{3} βi s(t − τi),

where h(τ) = Σ_{i=1}^{3} βi δ(τ − τi).
Example of LTI filter: Mobile Radio Channel
[Figure: X(t) → channel h(τ) → Y(t).]

Y(t) = ∫_{−∞}^{∞} h(τ) X(t − τ) dτ.
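In discrete time, the convolution Y(t) = ∫ h(τ)X(t − τ) dτ becomes a finite sum, and (as derived two slides later) filtering a WSS input scales its mean by the DC response Σ h[k]. A sketch with illustrative taps, input statistics and seed:

```python
import numpy as np

# Discrete-time sketch of LTI filtering of a WSS input:
# mu_Y = mu_X * H(0), where H(0) = sum of the impulse-response taps.
rng = np.random.default_rng(5)
h = np.array([0.5, 0.3, 0.2])               # FIR taps; DC response = 1.0
mu_x = 2.0
x = mu_x + rng.standard_normal(200_000)     # WSS input with mean mu_x

y = np.convolve(x, h, mode='valid')         # Y[n] = sum_k h[k] X[n-k]
print(y.mean(), mu_x * h.sum())             # output mean vs mu_X * H(0)
```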
1.6 Transmission of a Random Process Through a Linear Time-Invariant Filter
What are the mean and autocorrelation functions of the LTI filter output Y(t)?
 Suppose X(t) is stationary and has finite mean, and suppose ∫_{−∞}^{∞} |h(τ)| dτ < ∞. Then

μY(t) = E[Y(t)] = E[∫ h(τ) X(t − τ) dτ]
 = ∫ h(τ) E[X(t − τ)] dτ
 = μX ∫ h(τ) dτ.
1.6 Zero-Frequency (DC) Response

H(0) = ∫_{−∞}^{∞} h(τ) dτ.

The mean of the LTI filter output process is equal to the mean of the stationary filter input multiplied by the DC response of the system:

μY = μX ∫_{−∞}^{∞} h(τ) dτ = μX H(0).
1.6 Autocorrelation
RY(t, u) = E[Y(t)Y(u)]
 = E[∫ h(τ1) X(t − τ1) dτ1 ∫ h(τ2) X(u − τ2) dτ2]
 = ∫∫ h(τ1) h(τ2) E[X(t − τ1) X(u − τ2)] dτ1 dτ2
 = ∫∫ h(τ1) h(τ2) RX(t − τ1, u − τ2) dτ1 dτ2.

If X(t) is WSS, then

RY(τ) = ∫∫ h(τ1) h(τ2) RX(τ − τ1 + τ2) dτ1 dτ2.
1.6 WSS input induces WSS output
From the above derivations, we conclude: for a stable LTI filter, a WSS input induces a WSS output.
As the above two quantities relate in “convolution” form, a spectral analysis is perhaps better suited to characterizing their relationship.
A2.1 Fourier Analysis
Fourier Transform Pair

Fourier transform of g(t): G(f) = ∫_{−∞}^{∞} g(t) exp(−j2πft) dt.
Inverse Fourier transform: g(t) = ∫_{−∞}^{∞} G(f) exp(j2πft) df.

The Fourier transform G(f) is the frequency spectrum content of the signal g(t):
 |G(f)| — magnitude spectrum
 arg{G(f)} — phase spectrum
A2.1 Dirichlet’s Condition
Dirichlet’s condition: in every finite interval, g(t) has a finite number of local maxima and minima, and a finite number of discontinuity points.
Sufficient conditions for the existence of the Fourier transform:
 g(t) satisfies Dirichlet’s condition.
 Absolute integrability: ∫_{−∞}^{∞} |g(t)| dt < ∞.
A2.1 Dirichlet’s Condition
“Existence” means that the Fourier transform pair is valid only at continuity points. For example,

g1(t) = 1 for −1 ≤ t ≤ 1, and 0 for |t| > 1,   and   g2(t) = 1 for −1 < t < 1, and 0 for |t| ≥ 1,

have the same spectrum G(f). However, the two functions are not equal at t = 1 and t = −1!
A2.1 Dirac Delta Function
A function that exists only in principle. Define the Dirac delta function as a function δ(t) satisfying:

δ(t) = 0 for t ≠ 0 (and δ(t) = ∞ at t = 0), and ∫_{−∞}^{∞} δ(t) dt = 1.

δ(t) can be thought of as the limit of a unit-area pulse function:

δ(t) = lim_{n→∞} sn(t), where sn(t) = n for −1/(2n) ≤ t ≤ 1/(2n), and 0 otherwise.
A2.1 Properties of Dirac Delta Function
Sifting property: if g(t) is continuous at t0, then

∫_{−∞}^{∞} g(t) δ(t − t0) dt = lim_{n→∞} ∫_{t0−1/(2n)}^{t0+1/(2n)} g(t) sn(t − t0) dt = g(t0).

The sifting property is not necessarily true if g(t) is discontinuous at t0.
A2.1 Properties of Dirac Delta Function
Replication property: at every continuity point of g(t),

g(t) = ∫_{−∞}^{∞} g(τ) δ(t − τ) dτ.

Constant spectrum:

∫_{−∞}^{∞} δ(t) exp(−j2πft) dt = exp(−j2πf·0) = 1.
A2.1 Properties of Dirac Delta Function
Scaling after integration. Although δ(t) and 2δ(t) agree pointwise,

δ(t) = 2δ(t) = ∞ for t = 0, and 0 otherwise,

their integrals are different:

∫ δ(t) dt = 1 while ∫ 2δ(t) dt = 2.

Hence, the “multiplicative constant” of a Dirac delta function is significant, and shall never be ignored! (E.g., does f(x)δ(x) = g(x)δ(x) pointwise imply ∫ f(x)δ(x) dx = ∫ g(x)δ(x) dx ???)
A2.1 Properties of Dirac Delta Function
More properties — multiplication convention:

δ(t − a) δ(t − b) = δ(a − b) δ(t − a), which is 0 if a ≠ b,

and hence

∫_{−∞}^{∞} δ(t − a) δ(t − b) dt = δ(a − b) ∫_{−∞}^{∞} δ(t − a) dt = δ(a − b).
A2.1 Fourier Series
The Fourier transform of a periodic function does not exist! E.g., for integer k,

g(t) = 1 for 2k ≤ t ≤ 2k + 1, and 0 otherwise.

[Figure: this square wave of period 2, plotted over 0 ≤ t ≤ 10.]
A2.1 Fourier Series
Theorem: If gT(t) is a bounded periodic function satisfying Dirichlet’s condition, then

gT(t) = Σ_{n=−∞}^{∞} cn exp(j2πnt/T)

at every continuity point of gT(t), where T is the smallest real number such that gT(t) = gT(t + T), and

cn = (1/T) ∫_{−T/2}^{T/2} gT(t) exp(−j2πnt/T) dt.
A2.1 Relation between a Periodic Function and its Generating Function
Define the generating function of a periodic function gT(t) with period T as:

g(t) = gT(t) for −T/2 ≤ t ≤ T/2, and 0 otherwise.

Then

gT(t) = Σ_{m=−∞}^{∞} g(t − mT).
A2.1 Relation between a Periodic Function and its Generating Function
Let G(f) be the spectrum of g(t) (which should exist). Then, from the Theorem on Slide 1-96,

cn = (1/T) ∫_{−T/2}^{T/2} gT(t) exp(−j2πnt/T) dt
 = (1/T) ∫_{−T/2}^{T/2} Σm g(t − mT) exp(−j2πnt/T) dt
 = (1/T) Σm ∫_{−T/2−mT}^{T/2−mT} g(s) exp(−j2πn(s + mT)/T) ds   (substituting s = t − mT)
 = (1/T) Σm ∫_{−T/2−mT}^{T/2−mT} g(s) exp(−j2πns/T) ds
 = (1/T) ∫_{−∞}^{∞} g(s) exp(−j2πns/T) ds
 = (1/T) G(n/T).
A2.1 Relation between a Periodic Function and its Generating Function
This concludes in the Poisson sum formula:

gT(t) = Σ_{m=−∞}^{∞} g(t − mT) = (1/T) Σ_{n=−∞}^{∞} G(n/T) exp(j2πnt/T).

[Figure: the periodic function gT(t) of period T with its line spectrum (1/T)G(n/T) at spacing 1/T, alongside the generating function g(t) and its spectrum G(f).]
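The Poisson sum formula can be checked numerically with the Gaussian g(t) = exp(−πt²), which is its own Fourier transform, G(f) = exp(−πf²); the values of T, t and the truncation range are illustrative:

```python
import numpy as np

# Poisson sum check: sum_m g(t - mT) = (1/T) sum_n G(n/T) exp(j*2*pi*n*t/T)
# for the self-transform pair g(t) = exp(-pi t^2), G(f) = exp(-pi f^2).
T, t = 1.5, 0.3
m = np.arange(-30, 31)          # both sums converge very fast; truncate

lhs = np.sum(np.exp(-np.pi * (t - m * T) ** 2))
rhs = (1 / T) * np.sum(np.exp(-np.pi * (m / T) ** 2) *
                       np.exp(2j * np.pi * m * t / T)).real
print(lhs, rhs)
```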
A2.1 Spectrums through LTI filter
x1(t) → h(τ) → y1(t)
x2(t) → h(τ) → y2(t)

A linear filter satisfies the principle of superposition, i.e., the excitation x1(t) + x2(t) produces the response y1(t) + y2(t):

x1(t) + x2(t) → h(τ) → y1(t) + y2(t)
A2.1 Linearity and Convolution
A linear time-invariant filter can be described by the convolution integral

y(t) = ∫_{−∞}^{∞} h(τ) x(t − τ) dτ.

Example of a non-linear system:

y(t) = x(t) + 0.1 x³(t).
A2.1 Linearity and Convolution
Time-convolution = spectrum-multiplication:

y(t) = ∫ h(τ) x(t − τ) dτ, with x(t) = ∫ X(f) exp(j2πft) df and H(f) = ∫ h(τ) exp(−j2πfτ) dτ.

Then

Y(f) = ∫ y(t) exp(−j2πft) dt
 = ∫ [∫ h(τ) x(t − τ) dτ] exp(−j2πft) dt
 = ∫ h(τ) [∫ x(t − τ) exp(−j2πft) dt] dτ
 = ∫ h(τ) exp(−j2πfτ) [∫ x(s) exp(−j2πfs) ds] dτ   (substituting s = t − τ)
 = H(f) X(f).
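The time-convolution = spectrum-multiplication rule can be checked with the DFT standing in for the Fourier transform (with enough zero padding, circular convolution matches linear convolution; the sequences are illustrative):

```python
import numpy as np

# Check Y(f) = H(f) X(f) numerically via the DFT.
x = np.array([1.0, 2.0, 3.0, 0.0, 0.0, 0.0])
h = np.array([0.5, 0.25, 0.0, 0.0, 0.0, 0.0])

y_time = np.convolve(x, h)[:6]              # linear convolution (zero padding
                                            # prevents circular wrap-around)
y_freq = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real
print(np.allclose(y_time, y_freq))          # True
```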
A2.1 Impulse Response of LTI Filter
Impulse response = filter response to the Dirac delta function (an application of the replication property):

δ(t) → h(τ) → h(t), since y(t) = ∫ h(τ) δ(t − τ) dτ = h(t).
A2.1 Frequency Response of LTI Filter
Frequency response = filter response to a complex exponential input of unit amplitude and frequency f:

exp(j2πft) → h(τ) → H(f) exp(j2πft), since

y(t) = ∫ h(τ) x(t − τ) dτ
 = ∫ h(τ) exp(j2πf(t − τ)) dτ
 = exp(j2πft) ∫ h(τ) exp(−j2πfτ) dτ
 = exp(j2πft) H(f).
A2.1 Measures for Frequency Response
Expression 1:

H(f) = |H(f)| exp[jβ(f)], where |H(f)| is the amplitude response and β(f) the phase response.

Expression 2:

ln H(f) = α(f) + jβ(f), where α(f) is the gain and β(f) the phase response;
α(f) = ln|H(f)| in nepers, or 20 log10|H(f)| in dB.
A2.1 Fourier Analysis
Remember to self-study Tables A6.2 and A6.3.
1.7 Power Spectral Density
Deterministic x(t) → LTI h(τ) → y(t) = ∫ h(τ) x(t − τ) dτ.
WSS X(t) → LTI h(τ) → Y(t) = ∫ h(τ) X(t − τ) dτ.
1.7 Power Spectral Density
How about the spectrum relation between filter input and filter output? An apparent relation is:

Deterministic: y(f) = H(f) x(f).
Random: Y(f) = H(f) X(f).
1.7 Power Spectral Density
This is however not adequate for a random process. For a random process, what concerns us is the relation between the input statistics and the output statistics.
1.7 Power Spectral Density
How about the relation of the first two moments between filter input and output?
 Spectrum relation of the mean processes:

μY(t) = E[Y(t)] = E[∫ h(τ) X(t − τ) dτ] = ∫ h(τ) μX(t − τ) dτ,

so, taking Fourier transforms of both sides,

ΜY(f) = H(f) ΜX(f), where ΜX(f) denotes the transform of μX(t).
Supplement: Time-Average Autocorrelation Function

For a non-stationary process, we can use the time-average autocorrelation function to define the average power correlation for a given time difference:

R̄X(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} E[X(t + τ)X(t)] dt.

It is implicitly assumed that R̄X(τ) is independent of the location of the integration window; hence, e.g.,

R̄X(τ) = lim_{T→∞} (1/2T) ∫_{−T/2}^{3T/2} E[X(t + τ)X(t)] dt.
Supplement: Time-Average Autocorrelation Function

E.g., for a WSS process, RX(τ) = E[X(t + τ)X(t)] is already time-independent, so

R̄X(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} E[X(t + τ)X(t)] dt = RX(τ).

E.g., for a deterministic function x(t),

R̄x(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t + τ)x(t) dt.
Supplement: Time-Average Autocorrelation Function

E.g., for a cyclostationary process,

R̄X(τ) = (1/2T) ∫_{−T}^{T} E[X(t + τ)X(t)] dt,

where T is the cyclostationary period of X(t).
Supplement: Time-Average Autocorrelation Function

The time-average power spectral density is the Fourier transform of the time-average autocorrelation function:

S̄X(f) = ∫ R̄X(τ) exp(−j2πfτ) dτ
 = lim_{T→∞} (1/2T) ∫ [∫_{−T}^{T} E[X(t + τ)X(t)] dt] exp(−j2πfτ) dτ
 = lim_{T→∞} (1/2T) E[XT(f) XT*(f)]
 = lim_{T→∞} (1/2T) E[|XT(f)|²],

where XT(t) = X(t) 1{|t| ≤ T} and XT(f) is its Fourier transform.
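The limit S̄X(f) = lim (1/2T) E[|XT(f)|²] suggests estimating a PSD by averaging periodograms over realizations. A discrete-time sketch with white noise, whose PSD is flat at the variance σ²; all parameters and the seed are illustrative:

```python
import numpy as np

# Averaged periodogram of discrete white noise: each bin of
# |FFT(x)|^2 / n has expectation sigma^2 (the flat PSD level).
rng = np.random.default_rng(6)
sigma2, n, runs = 2.0, 256, 2000

psd = np.zeros(n)
for _ in range(runs):
    x = np.sqrt(sigma2) * rng.standard_normal(n)
    psd += np.abs(np.fft.fft(x)) ** 2 / n    # one periodogram
psd /= runs
print(psd.mean())                            # close to sigma2 = 2.0
```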
Supplement: Time-Average Autocorrelation Function

For a WSS process, S̄X(f) = SX(f).
For a deterministic process x(t),

S̄x(f) = lim_{T→∞} (1/2T) |xT(f)|².
1.7 Power Spectral Density under WSS input

For a WSS filter input,

SY(f) = |H(f)|² SX(f).
1.7 Power Spectral Density under WSS input
Observation:

E[Y²(t)] = RY(0) = ∫ SY(f) df = ∫ |H(f)|² SX(f) df.

E[Y²(t)] is generally viewed as the average power of the WSS filter output process Y(t).
This average power distributes over each spectral frequency f through SY(f). (Hence, the total average power is equal to the integral of SY(f).)
Thus, SY(f) is named the power spectral density (PSD) of Y(t).
1.7 Power Spectral Density under WSS input
The unit of E[Y²(t)] is, e.g., the Watt; the unit of SY(f) is therefore Watt per Hz.
1.7 Operational Meaning of PSD
Example. Assume h(τ) is real, and |H(f)| is given by an ideal narrowband magnitude response of width Δf centered at ±fc (shown as a figure in the original slides).
1.7 Operational Meaning of PSD
Then

E[Y²(t)] = RY(0) = ∫ SY(f) df
 = ∫ |H(f)|² SX(f) df
 = ∫_{−fc−Δf/2}^{−fc+Δf/2} SX(f) df + ∫_{fc−Δf/2}^{fc+Δf/2} SX(f) df
 ≈ Δf [SX(fc) + SX(−fc)].

The filter passes only those frequency components of the input random process X(t) that lie inside a narrow frequency band of width Δf centered about the frequencies fc and −fc.
1.7 Properties of PSD
Property 0: Einstein-Wiener-Khintchine relation
 Relation between the autocorrelation function and the PSD of a WSS process X(t):

SX(f) = ∫_{−∞}^{∞} RX(τ) exp(−j2πfτ) dτ,
RX(τ) = ∫_{−∞}^{∞} SX(f) exp(j2πfτ) df.
1.7 Properties of PSD
Property 1 (Power density at zero frequency):
\[ S_X(0)\ [\mathrm{Watt/Hz}] = \int_{-\infty}^{\infty} R_X(\tau)\,d\tau\ \ [\mathrm{Watt}\cdot\mathrm{Second}]. \]

Property 2 (Average power):
\[ E[X^2(t)]\ [\mathrm{Watt}] = \int_{-\infty}^{\infty} S_X(f)\,df\ \ [\mathrm{Watt/Hz}\cdot\mathrm{Hz}]. \]
1.7 Properties of PSD
Property 3 (Non-negativity for WSS processes): \( S_X(f) \ge 0. \)

Proof: Pass X(t) through a filter with impulse response h(τ) = cos(2πf_cτ). Then
\[ H(f) = \tfrac{1}{2}\big[\delta(f - f_c) + \delta(f + f_c)\big]. \]
1.7 Properties of PSD
4. Property from )()( since ),(2
1
))()((4
1
)()()()(4
1
)(|)(|
)()]([
2
2
fSfSfS
fSfS
dffSffdffSff
dffSfH
dffStYE
XXcX
cXcX
XcXc
X
Y
As a result,
This step requires that SX(f) is continuous, which is true in general.
1.7 Properties of PSD
Therefore, by passing through a proper filter,
for any fc.
0)]([2)( 2 tYEfS cX
1.7 Properties of PSD
Property 4: The PSD is an even function, i.e., \( S_X(-f) = S_X(f). \)

Proof.
\[
S_X(-f) = \int_{-\infty}^{\infty} R_X(\tau)\,e^{j2\pi f\tau}\,d\tau
= \int_{-\infty}^{\infty} R_X(-s)\,e^{-j2\pi fs}\,ds \quad (s = -\tau)
= \int_{-\infty}^{\infty} R_X(s)\,e^{-j2\pi fs}\,ds = S_X(f),
\]
where the last step uses \( R_X(-s) = R_X(s). \)
1.7 Properties of PSD
Property 5: The PSD is real.

Proof. Since R_X(τ) is real, \( S_X^*(f) = S_X(-f). \) Then with Property 4,
\[ S_X(f) = S_X(-f) = S_X^*(f). \]
Thus, S_X(f) is real.
Example 1.5 (continued from Example 1.2): Signal with Random Phase

Let X(t) = A cos(2πf_c t + Θ), where Θ is uniformly distributed over [−π, π). Then
\[ R_X(\tau) = \frac{A^2}{2}\cos(2\pi f_c\tau), \]
and
\[
S_X(f) = \int_{-\infty}^{\infty}\frac{A^2}{2}\cos(2\pi f_c\tau)\,e^{-j2\pi f\tau}\,d\tau
= \frac{A^2}{4}\int_{-\infty}^{\infty}\big(e^{-j2\pi(f-f_c)\tau} + e^{-j2\pi(f+f_c)\tau}\big)\,d\tau
= \frac{A^2}{4}\big[\delta(f-f_c) + \delta(f+f_c)\big].
\]
Example 1.6 (continued from Example 1.3): Random Binary Wave

Let
\[ X(t) = A\sum_{n=-\infty}^{\infty} I_n\,p(t - nT - t_d), \]
where …, I₋₂, I₋₁, I₀, I₁, I₂, … are independent, each I_j equal to −1 or 1 with equal probability, and
\[ p(t) = \begin{cases} 1, & 0 \le t \le T\\ 0, & \text{otherwise.}\end{cases} \]
From Example 1.3,
\[ R_X(\tau) = \begin{cases} A^2\Big(1 - \dfrac{|\tau|}{T}\Big), & |\tau| \le T\\ 0, & \text{otherwise.}\end{cases} \]
Example 1.6 (continued from Example 1.3): Random Binary Wave

Taking the Fourier transform of the triangular autocorrelation function,
\[
S_X(f) = A^2\int_{-T}^{T}\Big(1 - \frac{|\tau|}{T}\Big)e^{-j2\pi f\tau}\,d\tau
= 2A^2\int_0^T\Big(1 - \frac{\tau}{T}\Big)\cos(2\pi f\tau)\,d\tau
= \frac{A^2}{2\pi^2 f^2 T}\big(1 - \cos(2\pi fT)\big)
= A^2 T\,\frac{\sin^2(\pi fT)}{(\pi fT)^2}
= A^2 T\,\mathrm{sinc}^2(fT).
\]
1.7 Energy Spectral Density

The energy of a (deterministic) function p(t) is given by \( \int_{-\infty}^{\infty}|p(t)|^2\,dt \). (Recall that the average power of p(t) is \( \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}|p(t)|^2\,dt \).) Observe that, writing p(f) for the Fourier transform of p(t),
\[
\int_{-\infty}^{\infty}|p(t)|^2\,dt
= \int_{-\infty}^{\infty}\Big(\int_{-\infty}^{\infty}p(f)e^{j2\pi ft}df\Big)\Big(\int_{-\infty}^{\infty}p^*(f')e^{-j2\pi f't}df'\Big)dt
= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}p(f)\,p^*(f')\,\delta(f - f')\,df\,df'
= \int_{-\infty}^{\infty}|p(f)|^2\,df.
\]
For the same reason as for the PSD, |p(f)|² is named the energy spectral density (ESD) of p(t).
Example

The ESD of a rectangular pulse of amplitude A and duration T is given by
\[
E_g(f) = \Big|\int_0^T A\,e^{-j2\pi ft}\,dt\Big|^2 = A^2T^2\,\mathrm{sinc}^2(fT).
\]
Example 1.7: Mixing of a Random Process with a Sinusoidal Process

Let Y(t) = X(t) cos(2πf_c t + Θ), where Θ is uniformly distributed over [−π, π), and X(t) is WSS and independent of Θ. Then
\[
R_Y(t,u) = E[X(t)X(u)\cos(2\pi f_c t + \Theta)\cos(2\pi f_c u + \Theta)]
= \frac{1}{2}E[X(t)X(u)]\,E\big[\cos(2\pi f_c(t-u)) + \cos(2\pi f_c(t+u) + 2\Theta)\big]
= \frac{1}{2}R_X(t-u)\cos(2\pi f_c(t-u)),
\]
and hence
\[ S_Y(f) = \frac{1}{4}\big[S_X(f - f_c) + S_X(f + f_c)\big]. \]
1.7 How to Measure PSD?

If X(t) is not only (strictly) stationary but also ergodic, then any (deterministic) sample path x(t) observed on [−T, T) satisfies
\[ \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} x(t)\,dt = E[X(t)] = \mu_X. \]
Similarly,
\[ \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} x(t+\tau)\,x(t)\,dt = R_X(\tau). \]
1.7 How to Measure PSD?

Hence, we may use the Fourier transform of the time-limited, time-averaged autocorrelation function
\[ \frac{1}{2T}\int_{-T}^{T} x(t+\tau)\,x(t)\,dt \]
to approximate the PSD:
\[
\int_{-\infty}^{\infty}\Big(\frac{1}{2T}\int_{-T}^{T}x(t+\tau)x(t)\,dt\Big)e^{-j2\pi f\tau}\,d\tau
= \frac{1}{2T}\int_{-T}^{T}\int_{-T}^{T}x(s)\,x(t)\,e^{-j2\pi f(s-t)}\,ds\,dt \quad (s = t+\tau)
= \frac{1}{2T}\Big(\int_{-T}^{T}x(s)\,e^{-j2\pi fs}\,ds\Big)\Big(\int_{-T}^{T}x(t)\,e^{j2\pi ft}\,dt\Big)
= \frac{1}{2T}\,x_T(f)\,x_T^*(f).
\]
(Notably, we only have the values of x(t) for t in [−T, T); hence the inner integral is also limited to this range.)
1.7 How to Measure PSD?

The estimate
\[ \hat{S}_X(f) = \frac{1}{2T}\,x_T(f)\,x_T^*(f) = \frac{1}{2T}\,|x_T(f)|^2 \]
is named the periodogram. To summarize:
1. Observe x(t) for the duration [−T, T).
2. Calculate \( x_T(f) = \int_{-T}^{T} x(t)\exp(-j2\pi ft)\,dt. \)
3. Then \( \hat{S}_X(f) = \frac{1}{2T}\,|x_T(f)|^2. \)
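The three-step recipe above can be sketched numerically. This is a minimal discrete-time version (function name and parameters are our own); a finite sample spacing dt stands in for continuous observation.

```python
import numpy as np

# Discrete sketch of the periodogram recipe: observe x(t), take its
# windowed Fourier transform, and normalize by the observation length.
def periodogram(x, dt):
    n = len(x)
    X = np.fft.fft(x) * dt                # step 2: x_T(f) at the FFT bin frequencies
    freqs = np.fft.fftfreq(n, dt)
    S = np.abs(X) ** 2 / (n * dt)         # step 3: |x_T(f)|^2 / (2T), with 2T = n*dt
    return freqs, S

# A 5 Hz cosine observed for 4 seconds: the periodogram peaks at +-5 Hz.
dt = 1e-3
t = np.arange(0.0, 4.0, dt)
freqs, S = periodogram(np.cos(2 * np.pi * 5 * t), dt)
peak = abs(freqs[np.argmax(S)])
```

In practice the raw periodogram is a noisy PSD estimate; averaging over segments or realizations (as the slides' ergodicity discussion suggests) reduces its variance.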
1.7 Cross Spectral Density
Definition: For two (jointly WSS) random processes X(t) and Y(t), their cross spectral densities are given by
\[
S_{X,Y}(f) = \int_{-\infty}^{\infty} R_{X,Y}(\tau)\exp(-j2\pi f\tau)\,d\tau,
\qquad
S_{Y,X}(f) = \int_{-\infty}^{\infty} R_{Y,X}(\tau)\exp(-j2\pi f\tau)\,d\tau,
\]
where, for τ = t − u,
\[
R_{X,Y}(t,u) = E[X(t)Y(u)] = R_{X,Y}(\tau)
\quad\text{and}\quad
R_{Y,X}(t,u) = E[Y(t)X(u)] = R_{Y,X}(\tau).
\]
1.7 Cross Spectral Density
Property: \( R_{X,Y}(\tau) = R_{Y,X}(-\tau). \) Hence
\[
S_{X,Y}(f) = \int_{-\infty}^{\infty} R_{X,Y}(\tau)\exp(-j2\pi f\tau)\,d\tau
= \int_{-\infty}^{\infty} R_{Y,X}(-\tau)\exp(-j2\pi f\tau)\,d\tau
= \int_{-\infty}^{\infty} R_{Y,X}(s)\exp(j2\pi fs)\,ds
= S_{Y,X}(-f).
\]
Example 1.8: PSD of a Sum Process

Determine the PSD of the sum process Z(t) = X(t) + Y(t) of two zero-mean WSS processes X(t) and Y(t).

Answer:
\[
R_Z(t,u) = E[Z(t)Z(u)] = E\big[(X(t)+Y(t))(X(u)+Y(u))\big]
= R_X(t,u) + R_{X,Y}(t,u) + R_{Y,X}(t,u) + R_Y(t,u).
\]
Joint wide-sense stationarity implies
\[ R_Z(\tau) = R_X(\tau) + R_{X,Y}(\tau) + R_{Y,X}(\tau) + R_Y(\tau), \]
and hence
\[ S_Z(f) = S_X(f) + S_{X,Y}(f) + S_{Y,X}(f) + S_Y(f). \]
If X(t) and Y(t) are uncorrelated, i.e., E[X(t)Y(u)] = E[X(t)]E[Y(u)] = 0, then
\[ S_Z(f) = S_X(f) + S_Y(f). \]
Q.E.D.

The PSD of a sum process of zero-mean uncorrelated processes is equal to the sum of their individual PSDs.
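A numerical sanity check of the additivity conclusion (our own toy, not from the slides): for independent zero-mean sequences, the average power R_Z(0) of the sum should be close to the sum of the individual average powers, mirroring S_Z = S_X + S_Y integrated over f.

```python
import numpy as np

# Average power of a sum of independent zero-mean sequences adds.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 200_000)
y = rng.normal(0.0, 2.0, 200_000)
z = x + y
power_sum = np.mean(z ** 2)                      # estimate of R_Z(0)
power_parts = np.mean(x ** 2) + np.mean(y ** 2)  # R_X(0) + R_Y(0)
```

The residual difference is the empirical cross term 2·mean(x·y), which vanishes as the sample size grows.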
Example 1.9
Determine the CSD of the output processes induced by separately passing two jointly WSS inputs through a pair of stable LTI filters.
1.8 Gaussian Process

Definition. A random variable Y is Gaussian distributed if its pdf has the form
\[
f_Y(y) = \frac{1}{\sqrt{2\pi}\,\sigma_Y}\exp\Big(-\frac{(y-\mu_Y)^2}{2\sigma_Y^2}\Big).
\]
1.8 Gaussian Process
Definition. An n-dimensional random vector X is Gaussian distributed if its pdf has the form
\[
f_{\mathbf{X}}(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}|\Lambda|^{1/2}}
\exp\Big(-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T\Lambda^{-1}(\mathbf{x}-\boldsymbol{\mu})\Big),
\]
where \( \boldsymbol{\mu} = \big[E[X_1], E[X_2], \ldots, E[X_n]\big]^T \) is the mean vector and
\[
\Lambda = \begin{bmatrix}
\mathrm{Cov}\{X_1,X_1\} & \cdots & \mathrm{Cov}\{X_1,X_n\}\\
\vdots & \ddots & \vdots\\
\mathrm{Cov}\{X_n,X_1\} & \cdots & \mathrm{Cov}\{X_n,X_n\}
\end{bmatrix}
\]
is the covariance matrix.
1.8 Gaussian Process
For a Gaussian random vector, "uncorrelated" implies "independent":
\[
\mathrm{Cov}\{X_i, X_j\} = 0 \ \text{for all } i \ne j
\;\Longrightarrow\;
f_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = \prod_{i=1}^{n} f_{X_i}(x_i).
\]
1.8 Gaussian Process
Definition. A random process X(t) is said to be Gaussian distributed if, for every function g(t) satisfying
\[ \int_0^T\!\!\int_0^T g(t)\,g(u)\,R_X(t,u)\,dt\,du < \infty, \]
the random variable
\[ Y = \int_0^T g(t)\,X(t)\,dt \]
is Gaussian. Notably,
\[ E[Y^2] = \int_0^T\!\!\int_0^T g(t)\,g(u)\,R_X(t,u)\,dt\,du. \]
1.8 Central Limit Theorem
Theorem (Central Limit Theorem). For a sequence of independent and identically distributed (i.i.d.) random variables X₁, X₂, X₃, …,
\[
\lim_{n\to\infty}\Pr\Big(\frac{X_1+\cdots+X_n - n\mu_X}{\sigma_X\sqrt{n}} \le y\Big)
= \int_{-\infty}^{y}\frac{1}{\sqrt{2\pi}}\exp\Big(-\frac{x^2}{2}\Big)dx,
\]
where \( \mu_X = E[X_j] \) and \( \sigma_X^2 = E[(X_j - \mu_X)^2]. \)
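An empirical illustration (our own, with arbitrary sample sizes): normalized sums of i.i.d. uniform variables should exhibit the Gaussian moment signatures, skewness near 0 and kurtosis near 3.

```python
import numpy as np

# CLT demonstration: normalized sums of i.i.d. uniforms approach N(0, 1).
rng = np.random.default_rng(1)
n, trials = 500, 5_000
u = rng.uniform(-1.0, 1.0, size=(trials, n))
s = u.sum(axis=1) / np.sqrt(n / 3.0)     # Var[U] = 1/3, so Var[s] ~ 1
skew = np.mean(s ** 3)                   # ~ 0 for a Gaussian
kurt = np.mean(s ** 4)                   # ~ 3 for a Gaussian
```

These same two moment ratios are used later in Experiment 1 (Section 1.14) to measure how Gaussian the fading coefficients are.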
1.8 Properties of Gaussian processes
Property 1. Output of a stable linear filter is a Gaussian process if the input is a Gaussian process.
(This is self-justified by the definition of Gaussian processes.)
Property 2. A finite number of samples of a Gaussian process forms a multi-dimensional Gaussian vector.
(No proof. Some books use this as the definition of Gaussian processes.)
1.8 Properties of Gaussian processes
Property 3. A WSS Gaussian process is also strictly stationary.
(An immediate consequence of Property 2.)
1.9 Noise
Noise: an unwanted signal that disturbs the transmission or processing of signals in communication systems.

Types: shot noise, thermal noise, etc.
1.9 Shot Noise

A noise that arises from the discrete nature of current flow in diodes and transistors. E.g., a current pulse is generated every time an electron is emitted by the cathode.

Mathematical model:
\[ X_{\text{Shot}}(t) = \sum_{k} p(t - \tau_k), \]
where {τ_k} is the sequence of times at which pulses are generated, and p(t) is a pulse shape of finite duration.
1.9 Shot Noise
X_Shot(t) is called the shot noise.

A more useful model counts the number of electrons emitted in the time interval (0, t]:
\[ N(t) = \max\{k : \tau_k \le t\}. \]
1.9 Shot Noise
N(t) behaves like a Poisson counting process.

Definition (Poisson counting process). A Poisson counting process with parameter λ is a process {N(t), t ≥ 0} with N(0) = 0 and stationary independent increments such that, for 0 < t₁ < t₂, N(t₂) − N(t₁) is Poisson distributed with mean λ(t₂ − t₁). In other words,
\[
\Pr[N(t_2) - N(t_1) = k] = \frac{[\lambda(t_2 - t_1)]^k}{k!}\exp[-\lambda(t_2 - t_1)].
\]
1.9 Shot Noise
A detailed statistical characterization of the shot-noise process X_Shot(t) is in general hard. Some properties are quoted below.

X_Shot(t) is strictly stationary, with mean
\[ \mu_{X_{\text{Shot}}} = \lambda\int_{-\infty}^{\infty} p(t)\,dt \]
and autocovariance function
\[ C_{X_{\text{Shot}}}(\tau) = \lambda\int_{-\infty}^{\infty} p(t+\tau)\,p(t)\,dt. \]
1.9 Shot Noise
Example. p(t) is a rectangular pulse of amplitude A and duration T. Then
\[ \mu_{X_{\text{Shot}}} = \lambda\int_0^T A\,dt = \lambda AT \]
and
\[
C_{X_{\text{Shot}}}(\tau) = \begin{cases} \lambda A^2(T - |\tau|), & |\tau| \le T\\ 0, & \text{otherwise.}\end{cases}
\]
1.9 Thermal Noise

A noise that arises from the random motion of electrons in a conductor.

Mathematical model: the thermal-noise voltage V_TN appearing across the terminals of a resistor, measured in a bandwidth of Δf hertz, is zero-mean Gaussian distributed with variance
\[ E[V_{TN}^2] = 4kTR\,\Delta f \quad [\text{volts}^2], \]
where k = 1.38 × 10⁻²³ joules per degree Kelvin is Boltzmann's constant, R is the resistance in ohms, and T is the absolute temperature in degrees Kelvin.
1.9 Thermal Noise
Model of a noisy resistor: the noise may be represented either as a voltage source V_TN in series with the resistor or as a current source I_TN in parallel with it, related by
\[ E[V_{TN}^2] = R^2\,E[I_{TN}^2]. \]
1.9 White Noise
A noise is white if its PSD is constant over all frequencies. It is often defined as
\[ S_W(f) = \frac{N_0}{2}. \]

Impracticability: such noise has infinite power:
\[ E[W^2(t)] = \int_{-\infty}^{\infty} S_W(f)\,df = \int_{-\infty}^{\infty}\frac{N_0}{2}\,df = \infty. \]
1.9 White Noise

Another impracticability: no matter how close in time two samples are, they are uncorrelated!

So impractical — why, then, is white noise so popular in the analysis of communication systems? Because there do exist noise sources with a flat power spectral density over a range of frequencies much larger than the bandwidths of the subsequent filters or measurement devices.
1.9 White Noise
Some physical measurements have shown that the PSD of (a certain kind of) noise has the form
\[ S_W(f) = \frac{2kTR\,\alpha^2}{\alpha^2 + (2\pi f)^2}, \]
where k is Boltzmann's constant, T is the absolute temperature, and α and R are parameters of the physical medium.

When f ≪ α,
\[ S_W(f) = \frac{2kTR\,\alpha^2}{\alpha^2 + (2\pi f)^2} \approx 2kTR \triangleq \frac{N_0}{2}. \]
Example 1.10: Ideal Low-Pass Filtered White Noise

After the filter, the PSD of the zero-mean white noise becomes
\[
S_{W_F}(f) = |H(f)|^2\,S_W(f) = \begin{cases} \dfrac{N_0}{2}, & |f| \le B\\ 0, & \text{otherwise,}\end{cases}
\]
and hence
\[
R_{W_F}(\tau) = \int_{-B}^{B}\frac{N_0}{2}\exp(j2\pi f\tau)\,df = N_0 B\,\mathrm{sinc}(2B\tau).
\]
Thus τ = k/(2B) for nonzero integer k implies R_{W_F}(τ) = 0, i.e., such samples are uncorrelated.
Example 1.10 Ideal Low-Pass Filtered White Noise
So if we sample the noise at rate of 2B times per second, the resultant noise samples are uncorrelated!
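The sampling claim follows directly from the zeros of the sinc function. A one-line numerical confirmation (N₀ and B values are arbitrary):

```python
import numpy as np

# R(tau) = N0*B*sinc(2*B*tau) vanishes at tau = k/(2B) for nonzero integer k,
# so noise samples spaced 1/(2B) apart are uncorrelated.
N0, B = 2.0, 100.0
k = np.arange(1, 6)
tau = k / (2 * B)
R = N0 * B * np.sinc(2 * B * tau)    # np.sinc(x) = sin(pi x)/(pi x)
```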
Example 1.11
ChannelChannelEncoderEncoder
……01100110ModulatorModulator
……,-,-mm((tt), ), mm((tt), ), mm((tt), -), -mm((tt))
mm((tt))
TT
Carrier waveCarrier waveAccos(2Accos(2ffcctt))
ss((tt))
w(t)
xx((tt))
T
dt0
correlator
NN>><< 00
0110…0110…
)cos(22/
carrier Local
ctfT
Example 1.11

In the previous figure, the scaling factor √(2/T) is added to the local carrier to normalize the signal energy:
\[
\text{Signal Energy} = \int_0^T\Big(\sqrt{\tfrac{2}{T}}\cos(2\pi f_c t)\Big)^2 dt
= \frac{2}{T}\int_0^T\cos^2(2\pi f_c t)\,dt
= \frac{1}{T}\int_0^T\big(1 + \cos(4\pi f_c t)\big)\,dt
\approx 1.
\]
Example 1.11

Noise:
\[ N = \int_0^T\sqrt{\tfrac{2}{T}}\cos(2\pi f_c t)\,w(t)\,dt. \]
Mean:
\[ E[N] = \int_0^T\sqrt{\tfrac{2}{T}}\cos(2\pi f_c t)\,E[w(t)]\,dt = 0. \]
Second moment, for white noise with E[w(t)w(s)] = (N₀/2)δ(t − s):
\[
E[N^2] = \frac{2}{T}\int_0^T\!\!\int_0^T\cos(2\pi f_c t)\cos(2\pi f_c s)\,E[w(t)w(s)]\,dt\,ds
= \frac{2}{T}\int_0^T\frac{N_0}{2}\cos^2(2\pi f_c s)\,ds
\approx \frac{N_0}{2}.
\]
If w(t) is white Gaussian, then the pdf of N is uniquely determined by these first and second moments.
1.10 Narrowband Noise
In general, the receiver of a communication system includes a narrowband filter whose bandwidth is just large enough to pass the modulated component of the received signal.
The noise is therefore also filtered by this narrowband filter.
So the noise’s PSD after being filtered may look like the figures in the next slide.
© Po-Ning [email protected] Chapter 1-178
1.10 Narrowband Noise
The analysis on narrowband noise will be covered in subsequent sections.
A2.2 Bandwidth
The bandwidth is the width of the frequency range outside which the power is essentially negligible. E.g., the bandwidth of a (strictly) band-limited
signal shown below is B.
A2.2 Null-to-Null Bandwidth
Most signals of practical interest are not strictly band-limited; there may not be a universally accepted definition of bandwidth for such signals. In such cases, people may use the null-to-null bandwidth: the width of the main spectral lobe that lies inside the positive-frequency region (f ≥ 0).
A2.2 Null-to-Null Bandwidth

For the random binary wave
\[ X(t) = A\sum_n I_n\,p(t - nT - t_d), \]
where p(t) is a rectangular pulse of duration T and amplitude A,
\[ S_X(f) = A^2T\,\mathrm{sinc}^2(fT). \]
The null-to-null bandwidth is 1/T.
A2.2 Null-to-Null Bandwidth
The null-to-null bandwidth in this case is 2B.
A2.2 3-dB Bandwidth

The 3-dB bandwidth is the displacement between the (positive) frequency at which the magnitude spectrum of the signal reaches its maximum value and the (positive) frequency at which it drops to 1/√2 of that peak value.

Drawback: a small 3-dB bandwidth does not necessarily indicate that most of the power is confined within a small range (e.g., the spectrum may have a slowly decreasing tail).
A2.2 3-dB Bandwidth

For the random binary wave,
\[
\frac{S_X(f)}{S_X(0)} = \frac{A^2T\,\mathrm{sinc}^2(fT)}{A^2T} = \mathrm{sinc}^2(fT) = \frac{1}{\sqrt{2}}
\quad\Longrightarrow\quad f \approx \frac{0.32}{T}.
\]
The 3-dB bandwidth is 0.32/T.
A2.2 Root-Mean-Square Bandwidth

The rms bandwidth is defined as
\[
B_{\mathrm{rms}} = \left(\frac{\int_{-\infty}^{\infty} f^2\,S_X(f)\,df}{\int_{-\infty}^{\infty} S_X(f)\,df}\right)^{1/2}.
\]
Disadvantage: sometimes \( \int_{-\infty}^{\infty} f^2 S_X(f)\,df = \infty \) even if \( \int_{-\infty}^{\infty} S_X(f)\,df < \infty. \)
A2.2 Bandwidth of Deterministic Signals

The previous definitions also apply to deterministic signals, with the PSD replaced by the ESD. For example, a deterministic signal with spectrum G(f) has rms bandwidth
\[
B_{\mathrm{rms}} = \left(\frac{\int_{-\infty}^{\infty} f^2\,|G(f)|^2\,df}{\int_{-\infty}^{\infty} |G(f)|^2\,df}\right)^{1/2}.
\]
A2.2 Noise Equivalent Bandwidth
An important consideration in communication systems is the noise power at a linear filter output due to a white-noise input. We can characterize the noise-resisting ability of a filter by its noise equivalent bandwidth: the bandwidth of an ideal low-pass filter that yields the same output noise power.
A2.2 Noise Equivalent Bandwidth

Output noise power for a general linear filter:
\[
\int_{-\infty}^{\infty} S_W(f)\,|H(f)|^2\,df = \frac{N_0}{2}\int_{-\infty}^{\infty}|H(f)|^2\,df.
\]
Output noise power for an ideal low-pass filter of bandwidth B with the same amplitude as the general linear filter at f = 0:
\[
\int_{-B}^{B}\frac{N_0}{2}\,|H(0)|^2\,df = N_0 B\,|H(0)|^2.
\]
Equating the two gives the noise equivalent bandwidth
\[
B_{NE} = \frac{\int_{-\infty}^{\infty}|H(f)|^2\,df}{2\,|H(0)|^2}.
\]
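As a concrete instance (our own example filter, not from the slides), consider a first-order low-pass with |H(f)|² = 1/(1 + (f/f₀)²); analytically its noise equivalent bandwidth is (π/2)f₀, which a crude numerical integration reproduces:

```python
import numpy as np

# Noise-equivalent bandwidth of |H(f)|^2 = 1/(1+(f/f0)^2): B_NE = (pi/2)*f0.
f0 = 1000.0
f = np.linspace(0.0, 1e7, 1_000_001)     # one-sided grid; tail decays like 1/f^2
df = f[1] - f[0]
H2 = 1.0 / (1.0 + (f / f0) ** 2)
# B_NE = int_{-inf}^{inf} |H|^2 df / (2 |H(0)|^2) = int_0^inf |H|^2 df / |H(0)|^2
B_ne = np.sum(H2) * df / H2[0]
```

So this filter lets through as much white-noise power as an ideal brick-wall filter about 57% wider than its 3-dB bandwidth f₀.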
A2.2 Time-Bandwidth Product

Time-scaling property of the Fourier transform:
\[
g(t) \xrightarrow{\ \text{Fourier}\ } G(f)
\quad\Longrightarrow\quad
g(at) \xrightarrow{\ \text{Fourier}\ } \frac{1}{|a|}\,G\Big(\frac{f}{a}\Big).
\]
Reducing the time scale by a factor of a extends the bandwidth by the same factor. This hints that the product of time and frequency parameters should remain constant; it is named the time-bandwidth product or bandwidth-duration product.
A2.2 Time-Bandwidth Product

Since there are various definitions of the time parameter (e.g., the duration of a signal) and of the frequency parameter (e.g., the bandwidth), the time-bandwidth constant may change with the definitions used.

E.g., with the rms duration and rms bandwidth of a pulse g(t),
\[
T_{\mathrm{rms}} = \left(\frac{\int_{-\infty}^{\infty} t^2|g(t)|^2\,dt}{\int_{-\infty}^{\infty} |g(t)|^2\,dt}\right)^{1/2},
\qquad
B_{\mathrm{rms}} = \left(\frac{\int_{-\infty}^{\infty} f^2|G(f)|^2\,df}{\int_{-\infty}^{\infty} |G(f)|^2\,df}\right)^{1/2},
\]
one has
\[ T_{\mathrm{rms}}\,B_{\mathrm{rms}} \ge \frac{1}{4\pi}. \]
A2.2 Time-Bandwidth Product

Example: g(t) = exp(−πt²). Then G(f) = exp(−πf²), and
\[
T_{\mathrm{rms}}B_{\mathrm{rms}}
= \left(\frac{\int t^2 e^{-2\pi t^2}dt}{\int e^{-2\pi t^2}dt}\right)^{1/2}
\left(\frac{\int f^2 e^{-2\pi f^2}df}{\int e^{-2\pi f^2}df}\right)^{1/2}
= \frac{1}{4\pi},
\]
so the Gaussian pulse achieves the lower bound.

Example: g(t) = exp(−|t|). Then G(f) = 2/(1 + 4π²f²), and
\[
T_{\mathrm{rms}}B_{\mathrm{rms}}
= \left(\frac{\int t^2 e^{-2|t|}dt}{\int e^{-2|t|}dt}\right)^{1/2}
\left(\frac{\int f^2\,[2/(1+4\pi^2f^2)]^2\,df}{\int [2/(1+4\pi^2f^2)]^2\,df}\right)^{1/2}
= \frac{1}{\sqrt{2}}\cdot\frac{1}{2\pi} = \frac{\sqrt{2}}{4\pi} > \frac{1}{4\pi}.
\]
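The Gaussian example can be checked numerically. Since G(f) = exp(−πf²) has the same shape as g(t), B_rms equals T_rms, and the product reduces to T_rms² (grid limits are our own choice):

```python
import numpy as np

# Verify T_rms * B_rms = 1/(4*pi) for the Gaussian pulse g(t) = exp(-pi t^2).
t = np.linspace(-10.0, 10.0, 2_000_001)
g2 = np.exp(-2 * np.pi * t ** 2)            # |g(t)|^2
T_rms = np.sqrt(np.sum(t ** 2 * g2) / np.sum(g2))
product = T_rms ** 2                        # = T_rms * B_rms by t <-> f symmetry
```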
A2.3 Hilbert Transform

Let G(f) be the spectrum of a real function g(t). By convention, denote by u(f) the unit step function,
\[
u(f) = \begin{cases} 1, & f > 0\\ 1/2, & f = 0\\ 0, & f < 0.\end{cases}
\]
Put g₊(t) to be the function corresponding to 2u(f)G(f), i.e., G(f) → 2u(f)G(f). The factor 2 keeps the total area unchanged.
A2.3 Hilbert Transform

How to obtain g₊(t)? Answer: via the Hilbert transformer.

Proof: Observe that
\[ 2u(f) = 1 + \mathrm{sgn}(f), \quad\text{where } \mathrm{sgn}(f) = \begin{cases} 1, & f > 0\\ 0, & f = 0\\ -1, & f < 0.\end{cases} \]
Then, by the next slide, we learn that
\[ 2u(f) \xrightarrow{\ \text{Inverse Fourier}\ } \delta(t) + \frac{j}{\pi t}\,\mathbf{1}\{t \ne 0\}. \]
By the extended Fourier transform,
\[
\mathrm{sgn}(f) \xrightarrow{\ \text{Inverse Fourier}\ }
\lim_{a\to 0}\frac{j4\pi t}{a^2 + 4\pi^2 t^2}
= \begin{cases} \dfrac{j}{\pi t}, & t \ne 0\\[4pt] 0, & t = 0,\end{cases}
\]
hence
\[ 2u(f) = 1 + \mathrm{sgn}(f) \xrightarrow{\ \text{Inverse Fourier}\ } \delta(t) + \frac{j}{\pi t}\,\mathbf{1}\{t \ne 0\}. \]
A2.3 Hilbert Transform

\[
g_+(t) = \text{Inverse Fourier}\{2u(f)G(f)\}
= \Big(\delta(t) + \frac{j}{\pi t}\mathbf{1}\{t \ne 0\}\Big) * g(t)
= g(t) + j\,\hat{g}(t),
\]
where
\[ \hat{g}(t) = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{g(\tau)}{t - \tau}\,d\tau \]
is named the Hilbert transform of g(t).
A2.3 Hilbert Transform

The Hilbert transformer is the LTI filter h(t) = 1/(πt) mapping g(t) to ĝ(t), with frequency response
\[ H(f) = -j\,\mathrm{sgn}(f), \quad\text{where } \mathrm{sgn}(f) = \begin{cases} 1, & f > 0\\ 0, & f = 0\\ -1, & f < 0.\end{cases} \]
Hence, writing G(f) = |G(f)|exp{jθ(f)},
\[
\hat{G}(f) = -j\,\mathrm{sgn}(f)\,G(f)
= \begin{cases}
|G(f)|\exp\{j[\theta(f) - \pi/2]\}, & f > 0\\
0, & f = 0\\
|G(f)|\exp\{j[\theta(f) + \pi/2]\}, & f < 0.
\end{cases}
\]
A2.3 Hilbert Transform

Hence, the Hilbert transform is basically a 90-degree phase shifter.
A2.3 Hilbert Transform

Hilbert transform pair:
\[
\hat{g}(t) = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{g(\tau)}{t - \tau}\,d\tau,
\qquad
g(t) = -\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\hat{g}(\tau)}{t - \tau}\,d\tau.
\]
In filter form, h(t) = 1/(πt) maps g(t) to ĝ(t), and −h(t) maps ĝ(t) back to g(t).
A2.3 Hilbert Transform

An important property of the Hilbert transform is that g(t) and ĝ(t) are orthogonal in the sense of integration:
\[ \int_{-\infty}^{\infty} g(t)\,\hat{g}(t)\,dt = 0. \]
Equivalently, the real and imaginary parts of g₊(t) are orthogonal to each other. (Examples of Hilbert transform pairs can be found in Table A6.4.)

Proof:
\[
\int_{-\infty}^{\infty} g(t)\,\hat{g}(t)\,dt
= \int_{-\infty}^{\infty} G^*(f)\,\hat{G}(f)\,df
= -j\int_{-\infty}^{\infty}\mathrm{sgn}(f)\,G^*(f)\,G(f)\,df
= -j\int_{-\infty}^{\infty}\mathrm{sgn}(f)\,|G(f)|^2\,df = 0,
\]
since sgn(f)|G(f)|² is an odd function (|G(−f)| = |G(f)| for real g(t)).
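Both the 90-degree-shift behavior and the orthogonality above can be demonstrated with a discrete FFT-based Hilbert transformer (a sketch under the assumption of a periodic, finite-length signal; the function name is ours):

```python
import numpy as np

# Discrete Hilbert transform: multiply the spectrum by H(f) = -j*sgn(f).
def hilbert_transform(g):
    n = len(g)
    G = np.fft.fft(g)
    G_hat = -1j * np.sign(np.fft.fftfreq(n)) * G
    return np.fft.ifft(G_hat).real

n = 1024
t = np.arange(n)
g = np.cos(2 * np.pi * 8 * t / n)       # a cosine at bin 8
g_hat = hilbert_transform(g)            # expected: the corresponding sine
ortho = np.sum(g * g_hat)               # discrete analogue of the integral
```

For this input the transform turns the cosine into a sine (the 90-degree shift), and the inner product of the two is numerically zero, matching the orthogonality property.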
A2.4 Complex Representation of Signals and Systems

g₊(t) is named the pre-envelope or analytic signal of g(t). We can similarly define
\[ g_-(t) = g(t) - j\,\hat{g}(t), \]
which corresponds to the spectrum 2(1 − u(f))G(f), i.e., G(f) → G₋(f) = 2(1 − u(f))G(f).
A2.4 Canonical Representation of Band-Pass Signals

Now let G(f) be a narrowband signal for which 2W ≪ f_c. Then we can obtain its pre-envelope spectrum G₊(f). Afterwards, we can shift the pre-envelope down to its low-pass spectrum
\[ \tilde{G}(f) = G_+(f + f_c). \]
A2.4 Canonical Representation of Band-Pass Signals

These steps give the relation between the complex low-pass signal (baseband signal) and the real band-pass signal (passband signal):
\[ g(t) = \mathrm{Re}\{g_+(t)\} = \mathrm{Re}\{\tilde{g}(t)\exp(j2\pi f_c t)\}. \]
Quite often, the real and imaginary parts of the complex low-pass signal are denoted by g_I(t) and g_Q(t), respectively. This leads to a canonical, or standard, expression for g(t):
\[
g(t) = \mathrm{Re}\{[g_I(t) + j\,g_Q(t)]\exp(j2\pi f_c t)\}
= g_I(t)\cos(2\pi f_c t) - g_Q(t)\sin(2\pi f_c t).
\]
A2.4 Canonical Representation of Band-Pass Signals

Canonical transmitter:
\[ g(t) = g_I(t)\cos(2\pi f_c t) - g_Q(t)\sin(2\pi f_c t). \]
A2.4 Canonical Representation of Band-Pass Signal
Canonical receiver
A2.4 More Terminology

g₊(t): pre-envelope
g̃(t): complex envelope
g_I(t): in-phase component of the band-pass signal g(t)
g_Q(t): quadrature component of the band-pass signal g(t)
a(t) = |g̃(t)| = \( \sqrt{g_I^2(t) + g_Q^2(t)} \): natural envelope, or envelope, of g(t)
φ(t) = tan⁻¹[g_Q(t)/g_I(t)]: phase of g(t)
A2.4 Bandpass System

Consider passing a band-pass signal x(t) through a real LTI filter h(τ) to yield the output
\[ y(t) = \int_{-\infty}^{\infty} h(\tau)\,x(t - \tau)\,d\tau. \]
Can we have a low-pass equivalent system for the band-pass system?

Similar to the previous analysis, we have
\[
x(t) = \mathrm{Re}\{\tilde{x}(t)\,e^{j2\pi f_c t}\} = x_I(t)\cos(2\pi f_c t) - x_Q(t)\sin(2\pi f_c t),
\qquad \tilde{x}(t) = x_I(t) + j\,x_Q(t),
\]
under the assumption that the spectrum of x(t) is limited to within W Hz of the carrier frequency f_c, with W < f_c; and
\[
h(\tau) = \mathrm{Re}\{\tilde{h}(\tau)\,e^{j2\pi f_c\tau}\} = h_I(\tau)\cos(2\pi f_c\tau) - h_Q(\tau)\sin(2\pi f_c\tau),
\qquad \tilde{h}(\tau) = h_I(\tau) + j\,h_Q(\tau)\ \ (\text{complex impulse response}),
\]
under the assumption that the spectrum of h(τ) is limited to within B Hz of f_c, with B < f_c.
Now, is the filter output y(t) also a bandpass signal?
(Figure: the spectra X(f), H(f), and Y(f).)
A2.4 Bandpass System

Question: Is the following system valid? Feed x̃(t) into h̃(τ) and take
\[ \tilde{y}(t) = \int_{-\infty}^{\infty}\tilde{h}(\tau)\,\tilde{x}(t - \tau)\,d\tau\ ? \]
The advantage of such an equivalent system is that there is no need to deal with the carrier frequency in the system analysis. The answer to the question is YES (with some modification)! It is substantiated as follows.

It suffices to show that \( \tilde{X}(f)\tilde{H}(f) = 2\tilde{Y}(f). \) Observe that
\[
Y_+(f) = 2u(f)Y(f),\qquad X_+(f) = 2u(f)X(f),\qquad H_+(f) = 2u(f)H(f),
\]
and
\[
\tilde{Y}(f) = Y_+(f + f_c),\qquad \tilde{X}(f) = X_+(f + f_c),\qquad \tilde{H}(f) = H_+(f + f_c).
\]
Consequently, with Y(f) = X(f)H(f),
\[
\tilde{X}(f)\tilde{H}(f) = X_+(f+f_c)\,H_+(f+f_c)
= 4u^2(f+f_c)\,X(f+f_c)\,H(f+f_c)
= 4u(f+f_c)\,Y(f+f_c)
= 2\,Y_+(f+f_c) = 2\,\tilde{Y}(f).
\]
There is an additional multiplicative constant 2 at the output!
A2.4 Bandpass System

Conclusion:
\[ \tilde{y}(t) = \frac{1}{2}\int_{-\infty}^{\infty}\tilde{h}(\tau)\,\tilde{x}(t - \tau)\,d\tau. \]
A2.4 Bandpass System

Final note on band-pass systems: some books define H₊(f) = u(f)H(f) for a filter, instead of H₊(f) = 2u(f)H(f) as for a signal. As a result of this definition,
\[
h(\tau) = 2\,\mathrm{Re}\{\tilde{h}(\tau)\,e^{j2\pi f_c\tau}\}
\quad\text{and}\quad
\tilde{y}(t) = \int_{-\infty}^{\infty}\tilde{h}(\tau)\,\tilde{x}(t - \tau)\,d\tau.
\]
It is justifiable to remove the 2 in H₊(f) = 2u(f)H(f), because a filter is used to filter out the signal; hence, it is not necessary to keep the total area constant.
1.11 Representation of Narrowband Noise in Terms of In-Phase and Quadrature Components

In Appendix 2.4, the band-pass system representation was discussed based on deterministic signals. How about a random process? Can we have a low-pass isomorphism for a band-pass random process? Take the noise process N(t), with filter output \( Y(t) = \int_{-\infty}^{\infty} h(\tau)\,N(t-\tau)\,d\tau \), as an example.

A WSS real-valued zero-mean noise process N(t) is a band-pass process if its PSD satisfies S_N(f) ≠ 0 only for |f − f_c| ≤ B and |f + f_c| ≤ B, with B < f_c. Similar to the analysis for deterministic signals, let
\[
N(t) = N_I(t)\cos(2\pi f_c t) - N_Q(t)\sin(2\pi f_c t),
\qquad
\tilde{N}(t) = N_I(t) + j\,N_Q(t),
\]
for some jointly zero-mean WSS N_I(t) and N_Q(t). Notably, joint WSS of N_I(t) and N_Q(t) immediately implies WSS of Ñ(t).
1.11 PSD of N_I(t) and N_Q(t)

First, we note that joint WSS of N_I(t) and N_Q(t), together with the WSS of N(t), implies
\[
R_{N_I}(\tau) = R_{N_Q}(\tau)
\quad\text{and}\quad
R_{N_Q,N_I}(\tau) = -R_{N_I,N_Q}(\tau).
\tag{Property 1}
\]
Indeed, expanding R_N(t+τ, t) = E[N(t+τ)N(t)] gives
\[
\frac{1}{2}\big[R_{N_I}(\tau) + R_{N_Q}(\tau)\big]\cos(2\pi f_c\tau)
+ \frac{1}{2}\big[R_{N_I,N_Q}(\tau) - R_{N_Q,N_I}(\tau)\big]\sin(2\pi f_c\tau)
\]
\[
+\ \frac{1}{2}\big[R_{N_I}(\tau) - R_{N_Q}(\tau)\big]\cos(2\pi f_c(2t+\tau))
- \frac{1}{2}\big[R_{N_I,N_Q}(\tau) + R_{N_Q,N_I}(\tau)\big]\sin(2\pi f_c(2t+\tau)).
\]
The last two terms must equal zero, since R_N(τ) is not a function of t; this forces Property 1, and what remains is
\[
R_N(\tau) = R_{N_I}(\tau)\cos(2\pi f_c\tau) - R_{N_Q,N_I}(\tau)\sin(2\pi f_c\tau).
\tag{Property 2}
\]
1.11 PSD of N_I(t) and N_Q(t)

Some other properties. For real N(t), R_N(τ) = E[N(t+τ)N(t)]; for complex Ñ(t),
\[ R_{\tilde{N}}(\tau) = \frac{1}{2}\,E[\tilde{N}(t+\tau)\,\tilde{N}^*(t)]. \]
The factor 1/2 is added so that N(t) and its low-pass isomorphism Ñ(t) reasonably have the same power (cf. the next slide). Then
\[
R_{\tilde{N}}(\tau)
= \frac{1}{2}E\big[(N_I(t+\tau) + jN_Q(t+\tau))(N_I(t) - jN_Q(t))\big]
= \frac{1}{2}\big[R_{N_I}(\tau) + R_{N_Q}(\tau)\big] + \frac{j}{2}\big[R_{N_Q,N_I}(\tau) - R_{N_I,N_Q}(\tau)\big]
= R_{N_I}(\tau) + j\,R_{N_Q,N_I}(\tau).
\tag{Property 3}
\]
1.11 PSD of N_I(t) and N_Q(t)

(Property 4) Properties 2 and 3 jointly imply
\[ R_N(\tau) = \mathrm{Re}\{R_{\tilde{N}}(\tau)\exp(j2\pi f_c\tau)\}. \]
In fact, without the factor 1/2, the desired spectrum relation
\[ S_{\tilde{N}}(f) = 2\,u(f + f_c)\,S_N(f + f_c) \]
would fail to hold.
1.11 Summary of Spectrum Properties

1. \( R_{N_I}(\tau) = R_{N_Q}(\tau) \) and \( R_{N_Q,N_I}(\tau) = -R_{N_I,N_Q}(\tau). \)
2. \( R_N(\tau) = R_{N_I}(\tau)\cos(2\pi f_c\tau) - R_{N_Q,N_I}(\tau)\sin(2\pi f_c\tau). \) In particular, \( R_N(0) = R_{N_I}(0) = R_{N_Q}(0). \)
3. \( R_{\tilde{N}}(\tau) = R_{N_I}(\tau) + j\,R_{N_Q,N_I}(\tau). \)
4. \( R_N(\tau) = \mathrm{Re}\{R_{\tilde{N}}(\tau)\exp(j2\pi f_c\tau)\}. \)
5. \( S_{\tilde{N}}(f) \) is real-valued (by \( R_{\tilde{N}}(-\tau) = R^*_{\tilde{N}}(\tau) \)).
6. \( S_{N_I}(f) = S_{N_Q}(f) \) and \( S_{N_Q,N_I}(f) = -S_{N_I,N_Q}(f) \) [from 1].
7. \( S_N(f) = \frac{1}{2}\big[S_{\tilde{N}}(f - f_c) + S_{\tilde{N}}(-f - f_c)\big] \) [from 4; see the derivation below].

Derivation of 7:
\[
S_N(f) = \int_{-\infty}^{\infty} R_N(\tau)\,e^{-j2\pi f\tau}\,d\tau
= \int_{-\infty}^{\infty}\frac{1}{2}\big[R_{\tilde{N}}(\tau)e^{j2\pi f_c\tau} + R^*_{\tilde{N}}(\tau)e^{-j2\pi f_c\tau}\big]e^{-j2\pi f\tau}\,d\tau
= \frac{1}{2}S_{\tilde{N}}(f - f_c) + \frac{1}{2}S^*_{\tilde{N}}(-f - f_c)
= \frac{1}{2}\big[S_{\tilde{N}}(f - f_c) + S_{\tilde{N}}(-f - f_c)\big],
\]
since S_Ñ(f) is real-valued.
1.11 Summary of Spectrum Properties

8. N_I(t) and N_Q(t) are uncorrelated, since they have zero means and
\[
R_{N_I,N_Q}(\tau) = R_{N_Q,N_I}(-\tau) = -R_{N_I,N_Q}(-\tau),
\]
i.e., the cross-correlation is an odd function, so
\[ E[N_I(t)N_Q(t)] = R_{N_I,N_Q}(0) = 0. \]

Now, let us examine the desired spectrum property
\[ S_{\tilde{N}}(f) = 2\,u(f + f_c)\,S_N(f + f_c). \]
To extract N_I(t) and N_Q(t) from N(t) = N_I(t)cos(2πf_c t) − N_Q(t)sin(2πf_c t), form
\[ V_I(t) = 2N(t)\cos(2\pi f_c t), \qquad V_Q(t) = -2N(t)\sin(2\pi f_c t), \]
and pass each through an ideal low-pass filter of bandwidth B to obtain N_I(t) and N_Q(t), respectively. Then
\[
R_{V_I}(t,u) = E[V_I(t)V_I(u)] = 4\,E[N(t)N(u)]\cos(2\pi f_c t)\cos(2\pi f_c u)
= 4R_N(t-u)\cos(2\pi f_c t)\cos(2\pi f_c u).
\]
Notably, V_I(t) is not WSS.
Use the time-average autocorrelation function:
\[
\bar{R}_{V_I}(\tau) = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} R_{V_I}(t+\tau, t)\,dt
= \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} 4R_N(\tau)\,\frac{1}{2}\big[\cos(2\pi f_c\tau) + \cos(2\pi f_c(2t+\tau))\big]\,dt
= 2R_N(\tau)\cos(2\pi f_c\tau).
\]
Hence
\[ \bar{S}_{V_I}(f) = S_N(f - f_c) + S_N(f + f_c), \]
and, with the ideal low-pass filter |H(f)| = 1 for |f| ≤ B and 0 otherwise,
\[ S_{N_I}(f) = |H(f)|^2\,\bar{S}_{V_I}(f). \]
Therefore,
\[
S_{N_I}(f) = \big[S_N(f-f_c) + S_N(f+f_c)\big]\,|H(f)|^2
= \begin{cases} S_N(f-f_c) + S_N(f+f_c), & |f| \le B\\ 0, & \text{otherwise.}\end{cases}
\]
Similarly,
\[
S_{N_Q}(f) = \begin{cases} S_N(f-f_c) + S_N(f+f_c), & |f| \le B\\ 0, & \text{otherwise.}\end{cases}
\]
Next, we turn to R_{N_Q,N_I}(τ):
\[
R_{V_Q,V_I}(t,u) = E[V_Q(t)V_I(u)] = -4R_N(t-u)\sin(2\pi f_c t)\cos(2\pi f_c u),
\]
so that
\[
R_{V_Q,V_I}(t+\tau, t) = -4R_N(\tau)\sin(2\pi f_c(t+\tau))\cos(2\pi f_c t),
\]
and the time average is
\[
\bar{R}_{V_Q,V_I}(\tau) = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} -4R_N(\tau)\,\frac{1}{2}\big[\sin(2\pi f_c\tau) + \sin(2\pi f_c(2t+\tau))\big]\,dt
= -2R_N(\tau)\sin(2\pi f_c\tau),
\]
with
\[ \bar{S}_{V_Q,V_I}(f) = j\big[S_N(f - f_c) - S_N(f + f_c)\big]. \]
For the filtered outputs,
\[
R_{N_Q,N_I}(t,u) = E[N_Q(t)N_I(u)]
= E\Big[\int h(\tau_1)V_Q(t-\tau_1)\,d\tau_1\int h(\tau_2)V_I(u-\tau_2)\,d\tau_2\Big]
= \int\!\!\int h(\tau_1)\,h(\tau_2)\,R_{V_Q,V_I}(t-\tau_1,\,u-\tau_2)\,d\tau_1\,d\tau_2.
\]
Taking Fourier transforms of the filtered time-average cross-correlation,
\[
\bar{S}_{N_Q,N_I}(f) = H(f)\,H^*(f)\,\bar{S}_{V_Q,V_I}(f) = |H(f)|^2\,\bar{S}_{V_Q,V_I}(f),
\]
so
\[
S_{N_Q,N_I}(f) = \begin{cases}
j\big[S_N(f - f_c) - S_N(f + f_c)\big], & |f| \le B\\
0, & \text{otherwise,}
\end{cases}
\]
and \( S_{N_I,N_Q}(f) = -S_{N_Q,N_I}(f). \)
Finally, Example 1.12 (Ideal Band-Pass Filtered White Noise) applies these relations.
1.12 Representation of Narrowband Noise in Terms of Envelope and Phase Components
Now we turn to the envelope R(t) and phase Ψ(t) components of a random process of the form
\[
N(t) = N_I(t)\cos(2\pi f_c t) - N_Q(t)\sin(2\pi f_c t)
= R(t)\cos[2\pi f_c t + \Psi(t)],
\]
where \( R(t) = \sqrt{N_I^2(t) + N_Q^2(t)} \) and \( \Psi(t) = \tan^{-1}[N_Q(t)/N_I(t)]. \)
1.12 Pdf of R(t) and (t)
Assume that N(t) is a white Gaussian process with two-sided PSD σ² = N₀/2. For convenience, let N_I and N_Q be snapshot samples of N_I(t) and N_Q(t). Then
\[
f_{N_I,N_Q}(n_I, n_Q) = \frac{1}{2\pi\sigma^2}\exp\Big(-\frac{n_I^2 + n_Q^2}{2\sigma^2}\Big).
\]
1.12 Pdf of R(t) and Ψ(t)

By the change of variables n_I = r cos(ψ) and n_Q = r sin(ψ), whose Jacobian is
\[
\left|\det\begin{bmatrix}\partial n_I/\partial r & \partial n_I/\partial\psi\\ \partial n_Q/\partial r & \partial n_Q/\partial\psi\end{bmatrix}\right| = r,
\]
we get
\[
\iint_A \frac{1}{2\pi\sigma^2}\exp\Big(-\frac{n_I^2 + n_Q^2}{2\sigma^2}\Big)\,dn_I\,dn_Q
= \iint_{A'} \frac{r}{2\pi\sigma^2}\exp\Big(-\frac{r^2}{2\sigma^2}\Big)\,dr\,d\psi.
\]
So
\[
f_{R,\Psi}(r,\psi) = \frac{r}{2\pi\sigma^2}\exp\Big(-\frac{r^2}{2\sigma^2}\Big).
\]
1.12 Pdf of R(t) and Ψ(t)

R and Ψ are therefore independent, with
\[
f_R(r) = \frac{r}{\sigma^2}\exp\Big(-\frac{r^2}{2\sigma^2}\Big)\ \text{for } r \ge 0
\quad\text{(Rayleigh distribution)},
\qquad
f_\Psi(\psi) = \frac{1}{2\pi}.
\]
(Figure: the normalized Rayleigh distribution with σ² = 1.)
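The Rayleigh conclusion is easy to verify by simulation (our own check; sample size and σ are arbitrary): the envelope of two i.i.d. zero-mean Gaussians has mean σ√(π/2).

```python
import numpy as np

# Envelope of two i.i.d. zero-mean Gaussians is Rayleigh distributed.
rng = np.random.default_rng(2)
sigma = 1.5
n_i = rng.normal(0.0, sigma, 200_000)
n_q = rng.normal(0.0, sigma, 200_000)
r = np.sqrt(n_i ** 2 + n_q ** 2)
mean_r = np.mean(r)
expected = sigma * np.sqrt(np.pi / 2)    # Rayleigh mean
```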
1.13 Sine Wave Plus Narrowband Noise
Now suppose the previous Gaussian white noise is added to a sinusoid of amplitude A. Then
\[
x(t) = A\cos(2\pi f_c t) + n(t)
= [A + n_I(t)]\cos(2\pi f_c t) - n_Q(t)\sin(2\pi f_c t)
= x_I(t)\cos(2\pi f_c t) - x_Q(t)\sin(2\pi f_c t),
\]
with x_I(t) = A + n_I(t) and x_Q(t) = n_Q(t). Uncorrelatedness of the Gaussian n_I(t) and n_Q(t) implies their independence.
1.13 Sine Wave Plus Narrowband Noise

This gives the pdf of x_I(t) and x_Q(t) as
\[
f_{X_I,X_Q}(x_I, x_Q) = \frac{1}{2\pi\sigma^2}\exp\Big(-\frac{(x_I - A)^2 + x_Q^2}{2\sigma^2}\Big).
\]
By x_I = r cos(ψ) and x_Q = r sin(ψ) (with Jacobian r again),
\[
f_{R,\Psi}(r,\psi) = \frac{r}{2\pi\sigma^2}\exp\Big(-\frac{(r\cos\psi - A)^2 + r^2\sin^2\psi}{2\sigma^2}\Big)
= \frac{r}{2\pi\sigma^2}\exp\Big(-\frac{r^2 + A^2 - 2Ar\cos\psi}{2\sigma^2}\Big).
\]
1.13 Sine Wave Plus Narrowband Noise

Notably, in this case R and Ψ are no longer independent. We are more interested in the marginal distribution of R:
\[
f_R(r) = \int_0^{2\pi} f_{R,\Psi}(r,\psi)\,d\psi
= \frac{r}{2\pi\sigma^2}\int_0^{2\pi}\exp\Big(-\frac{r^2 + A^2 - 2Ar\cos\psi}{2\sigma^2}\Big)\,d\psi.
\]
1.13 Sine Wave Plus Narrowband Noise

\[
f_R(r) = \frac{r}{2\pi\sigma^2}\exp\Big(-\frac{r^2 + A^2}{2\sigma^2}\Big)\int_0^{2\pi}\exp\Big(\frac{Ar\cos\psi}{\sigma^2}\Big)\,d\psi
= \frac{r}{\sigma^2}\exp\Big(-\frac{r^2 + A^2}{2\sigma^2}\Big)\,I_0\Big(\frac{Ar}{\sigma^2}\Big),
\]
where
\[ I_0(x) = \frac{1}{2\pi}\int_0^{2\pi}\exp(x\cos\psi)\,d\psi \]
is the modified Bessel function of the first kind of zero order. This distribution is named the Rician distribution.
1.13 Normalized Rician Distribution

With v = r/σ and a = A/σ,
\[ f_V(v) = v\,\exp\Big(-\frac{v^2 + a^2}{2}\Big)\,I_0(av). \]
A3.1 Bessel Functions
Bessel’s equation of order n
Its solution Jn(x) is the Bessel function of the first kind of order n.
0)( 22
2
22 ynx
dx
dyx
dx
ydx
0
2
0
)!(!
4/
2sinexp
2
1
))sin(cos(1
)(
m
m
n
n
n
mnm
xxdjnjx
dnxxJ
A3.2 Properties of the Bessel Function
1. \( J_n(x) = (-1)^n J_{-n}(x) = (-1)^n J_n(-x). \)
2. \( J_{n-1}(x) + J_{n+1}(x) = \dfrac{2n}{x}\,J_n(x). \)
3. For small x, \( J_n(x) \approx \dfrac{x^n}{2^n\,n!}. \)
4. For large x, \( J_n(x) \approx \sqrt{\dfrac{2}{\pi x}}\cos\Big(x - \dfrac{n\pi}{2} - \dfrac{\pi}{4}\Big). \)
5. \( \lim_{x\to\infty} J_n(x) = 0. \)
6. \( \exp(jx\sin\phi) = \sum_{n=-\infty}^{\infty} J_n(x)\exp(jn\phi). \)
7. \( \sum_{n=-\infty}^{\infty} J_n^2(x) = 1. \)
A3.3 Modified Bessel Function
Modified Bessel’s equation of order n
Its solution In(x) is the Modified Bessel function of the first kind of order n.
The modified Bessel function is monotonically increasing in x for all n.
0)( 222
2
22 ynxj
dx
dyx
dx
ydx
dnxxJjxI n
nn )cos())cos(exp(
2
1)()(
A3.3 Properties of Modified Bessel Function
3'. \( \lim_{x\to 0} I_n(x) = \begin{cases} 1, & n = 0\\ 0, & n \ge 1.\end{cases} \)
4'. For large x, \( I_n(x) \approx \dfrac{\exp(x)}{\sqrt{2\pi x}} \) (independent of n).
6'. \( \exp(x\cos\phi) = \sum_{n=-\infty}^{\infty} I_n(x)\exp(jn\phi). \)
1.14 Computer Experiments: Flat-Fading Channel

Model of a multi-path channel with N paths: the transmitted carrier A cos(2πf_c t) is received as
\[ Y(t) = \sum_{k=1}^{N} A_k\cos(2\pi f_c t + \Theta_k) = Y_I\cos(2\pi f_c t) - Y_Q\sin(2\pi f_c t), \]
where
\[ Y_I = \sum_{k=1}^{N} A_k\cos(\Theta_k) \quad\text{and}\quad Y_Q = \sum_{k=1}^{N} A_k\sin(\Theta_k). \]
The input X̃(t) = A induces the output Ỹ(t) = A(Y_I + jY_Q), which is independent of t.

Assume {(A_k, Θ_k)} are i.i.d., with A_k uniform over [−1, 1) and Θ_k uniform over [0, 2π).
1.14 Experiment 1

By the central limit theorem,
\[
\frac{Y_I}{\sqrt{N/6}} = \frac{\sum_{k=1}^{N} A_k\cos(\Theta_k)}{\sqrt{N/6}} \approx \text{Normal}(0, 1),
\]
since \( E[A_k\cos\Theta_k] = 0 \) and \( E[(A_k\cos\Theta_k)^2] = E[A_k^2]\,E[\cos^2\Theta_k] = \frac{1}{3}\cdot\frac{1}{2} = \frac{1}{6}. \) So Y_I is approximately Gaussian distributed with mean 0 and variance N/6.

For any Gaussian random variable G, we have
\[
\lambda_1 \triangleq \frac{E[G^3]}{(E[G^2])^{3/2}} = 0
\quad\text{and}\quad
\lambda_2 \triangleq \frac{E[G^4]}{(E[G^2])^{2}} = 3.
\]
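The two Gaussian ratios can be confirmed by direct quadrature against the normal density; a Python sketch (σ = 1.7 is an arbitrary choice, since both ratios are scale-invariant):

```python
import math

def gauss_moment(p, sigma=1.7, steps=40001, span=10.0):
    # E[G^p] for G ~ N(0, sigma^2), midpoint rule over [-span*sigma, span*sigma]
    a = span * sigma
    h = 2 * a / steps
    total = 0.0
    for k in range(steps):
        g = -a + (k + 0.5) * h
        total += (g ** p) * math.exp(-g * g / (2 * sigma * sigma))
    return total * h / (sigma * math.sqrt(2 * math.pi))

m2, m3, m4 = gauss_moment(2), gauss_moment(3), gauss_moment(4)
rho1 = m3 / m2 ** 1.5     # skewness ratio, should be 0
rho2 = m4 / m2 ** 2       # kurtosis ratio, should be 3
print(rho1, rho2)
```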
1.14 Experiment 1
We can therefore use ρ₁ and ρ₂ to examine the degree of resemblance to Gaussian for a random variable.

         N       10      100     1000    10000
  Y_I   ρ₁   0.2443   0.0255   0.0065   0.0003
        ρ₂   2.1567   2.8759   2.8587   3.0075
  Y_Q   ρ₁   0.0874   0.0017   0.0004   0.0000
        ρ₂   1.9621   2.7109   3.1663   3.0135
For S_N = (U_1 + ⋯ + U_N)/√N with zero-mean i.i.d. {U_k},

E[S_N²] = E[U²]
E[S_N⁴] = (1/N²)(N E[U⁴] + 3N(N−1) E²[U²]) = E[U⁴]/N + 3(1 − 1/N) E²[U²]

so

E[S_N⁴]/E²[S_N²] = E[U⁴]/(N E²[U²]) + 3(1 − 1/N) → 3 as N → ∞.

Here U = A cos(Θ) (or A sin(Θ)), with

E[A cos(Θ)] = E[A sin(Θ)] = 0
E[(A cos(Θ))²] = E[(A sin(Θ))²] = 1/6
E[(A cos(Θ))⁴] = E[(A sin(Θ))⁴] = 3/40

so the theoretical value of ρ₂ is 3 − 0.3/N:

   N       10      100     1000    10000
   ρ₂   2.9700   2.9970   2.9997   3.0000
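The moment bookkeeping above reduces to ρ₂ = 3 − 0.3/N for this model; a few lines of Python reproduce the theoretical values:

```python
# Exact moments of U = A cos(Theta), A ~ Uniform[-1, 1), Theta ~ Uniform[0, 2*pi)
EU2 = (1 / 3) * (1 / 2)   # E[A^2] * E[cos^2 Theta] = 1/6
EU4 = (1 / 5) * (3 / 8)   # E[A^4] * E[cos^4 Theta] = 3/40

for N in (10, 100, 1000, 10000):
    # E[S_N^4]/E^2[S_N^2] = E[U^4]/(N E^2[U^2]) + 3(1 - 1/N) = 3 - 0.3/N
    rho2 = EU4 / (N * EU2 ** 2) + 3 * (1 - 1 / N)
    print(N, round(rho2, 4))   # 2.97, 2.997, 2.9997, 3.0
```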
1.14 Experiment 2
We can re-formulate Y(t) as:
Y(t) = R cos(2πf_c t + Ψ), where R = √(Y_I² + Y_Q²) and Ψ = tan⁻¹(Y_Q/Y_I).

If Y_I and Y_Q approach independent Gaussians, then R and Ψ respectively approach Rayleigh and uniform distributions.
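A Monte-Carlo sketch of this experiment in Python/NumPy (parameter values are illustrative): with σ² = N/6 the variance of each quadrature component, the envelope R should match the Rayleigh moments E[R] = σ√(π/2) and E[R²] = 2σ², and the phase should average to about zero:

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 500, 5000
A = rng.uniform(-1.0, 1.0, size=(trials, N))
Theta = rng.uniform(0.0, 2 * np.pi, size=(trials, N))
YI = (A * np.cos(Theta)).sum(axis=1)
YQ = (A * np.sin(Theta)).sum(axis=1)

R = np.hypot(YI, YQ)        # envelope
Psi = np.arctan2(YQ, YI)    # phase, in (-pi, pi]

sigma = np.sqrt(N / 6)      # std of each quadrature component
# Both ratios should be near 1 if R is (approximately) Rayleigh(sigma)
print(float(R.mean() / (sigma * np.sqrt(np.pi / 2))))
print(float((R ** 2).mean() / (2 * sigma ** 2)))
print(float(Psi.mean()))    # near 0 for a uniform phase
```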
1.14 Experiment 2
N = 10,000
The experimental curve is obtained by averaging 100 histograms.
1.14 Experiment 2
Input: cos(2π × 10⁶ t). The corresponding Rayleigh-channel output.
1.15 Summary and Discussion

- Definition of probability system and probability measure
- Random variable, vector and process
- Autocorrelation and crosscorrelation
- Definition of WSS
- Why ergodicity? Time average as a good "estimate" of ensemble average
- Characteristic function and Fourier transform
- Dirichlet's condition
- Dirac delta function
- Fourier series and its relation to the Fourier transform
1.15 Summary and Discussion

- Power spectral density and its properties
- Energy spectral density
- Cross-spectral density
- Stable LTI filter: linearity and convolution
- Narrowband process: canonical low-pass isomorphism, in-phase and quadrature components, Hilbert transform, bandpass system
1.15 Summary and Discussion
- Bandwidth: null-to-null, 3-dB, rms, noise-equivalent; time-bandwidth product
- Noise: shot noise, thermal noise and white noise
- Gaussian, Rayleigh and Rician distributions
- Central limit theorem