B1 Information Engineering Mini-project 2011-12
Reconstruction and Filtering in the Time Domain
Michaelmas 2011 David Murray
Rev: November 3, 2011 [email protected]
Course Page www.robots.ox.ac.uk/∼dwm/Courses/3B1-2011
What to do first
1. Once logged in to a machine using your single-sign-on username and password,
visit the Course Page (url given above) and follow the instructions.
2. Unzipping the specified file will create a B1 directory structure containing:
• Notes: containing pdf of these notes.
• Matlab: containing subdirectories related to the various tasks.
3. Start up Matlab and point it to the Reconstruction directory.
4. Start working through these notes. They will first walk you through some
mathematical and programming warm-up tasks, building towards the project
aim. The guidance will eventually fade out.
5. Any late updates, extra advice, etc, will appear on the course page.
Backing up
As you may wish to work on your own machine as well as on those in the Department,
it is vital that you back up your work and keep track of the latest version. We suggest
treating the Departmental copy as your master copy (as it is backed up properly!).
Take special care when copying back updated material to your main repository
(e.g. the Department). Suppose you have worked on prog.m on your machine and
want to replace it in the Department. Don’t immediately overwrite it. Instead make
a reserve copy.
mv prog.m prog.Nov10
cp /mykey/prog.m .
Never rely on a memory stick for backup. Sticks are great for transporting stuff from
A to B, but flash memory can die suddenly. You should anyway be able to transport
material back and forth using secure ftp.
Introduction

At the macroscopic level, most physical phenomena are continuous over a region of
space and time, and, as a consequence, most physical sensors provide continuous
output signals. One then might suppose that sampling is the product and bane of
the digital age — a result of using ADCs and computers to capture sensor output.
However, sampling has always been a feature of experimental data collection. An
example is the measurement of temperature, a spatio-temporal field T (x, y , z, t)
where x, y, z, t ∈ R. A thermocouple will give a continuous temporal signal, but
only at one location. To measure temperature across a turbine blade surface one
might use an array of thermocouples, providing temporally continuous, but spatially
sampled data. Furthermore, the pre-computer experimentalist had to read off and
scribble down their outputs, so the data immediately becomes temporally sampled as
well. Even if our pre-computer experimentalist recorded the results in a continuous
fashion, say using a chart recorder, he or she would then sample values from the
resulting paper plot.
So it seems that as input to analysis, whether by computer or by hand, data are
commonly sampled. However, often we want the outputs of that analysis to be
applicable to the continuous domain, to drive some physical plant, to provide
feedback to our senses, or indeed to permit resampling.
The destination
Your principal aim in this project is to develop code for the design of optimal impulse
response sequences for low-pass filtering of sampled data, and to report on the
code and its results in use. You will use several numerical techniques from the B1
lectures, including optimization and integration, and combine these methods with
ideas developed analytically in the 2nd year Fourier Transform course.
Getting there
Rather than state the problem and leave you to get on with it, the project is designed
as a set of increasingly complex investigations for which there is decreasing guidance.
Initially the notes lead you through rather prescribed “lab-like” exercises, reminding
you about matlab and providing useful snippets of code. Their purpose is to give you
sufficient hinterland to tackle the larger aim.
1. The first investigation is into reconstruction of band-limited signals from
sampled data. The input is sampled data, and the output is quasi-continuous data
— quasi-continuous because the output is sampled (we are using a computer!),
but with an arbitrarily small sampling period. In other words, a value can be
reconstructed at any time. This section will ease you back into thinking about
processes in the frequency domain, and also get you working with Matlab.
These notes will guide you through this stage, by suggesting things you should
check out and report on. There is some code available for you.
2. The second investigation concerns low pass filtering of sampled data. Sampled
data in, sampled data out with the same sampling period. You will use your
knowledge of Fourier basis functions to find optimal values for the discrete
impulse response function of a low pass filter. You will see that optimality here
is based on a least-squares criterion.
3. The third investigation introduces a more practical model of a low-pass filter
involving pass, transition, and stop bands. You will again use a least squares
criterion, but employ optimization techniques from the B1 lectures.
4. The least squares criterion has a significant drawback. The fourth investigation
uses the same filter model, but you will use linear programming to optimize your
filter against new criteria.
5. The final section suggests ideas for further work.
Your report
You have exactly 10 pages to report on your approach, code and findings, which is
not generous. You will have to be selective in what you write.
Do not merely copy or echo the notes. You are expected to read around the subject,
and add value to the material. Address why one approach might be preferred to
another.
There is no standard recipe for success, but a possible approach to writing is to treat
your report as a short technical review paper. As with any scientific paper, you should
introduce the material, set out the theory, detail crucial parts of the implementation,
describe the experiments and give their results, and draw conclusions that inform the
next part of the work. Assume the reader is expert, keep the engineering content
high, and aim to illuminate.
1 Reconstruction from sampled signals
1.1 Introduction
Continuing the introductory discussion, it may be worth distinguishing a number of
approaches used to reconstruct experimental data, before moving on to the
reconstruction of band-limited signals.
1.1.1 Parametric physical models
Fig. 1.1(a) shows the experimental results from measuring the gain of a system at
selected frequencies. Suppose that you have a strong expectation that the system is
a damped second-order system, so that the data can be explained by the parametric
model
|G|(ω; A, ω0, ζ) = A ω0^2 / √[ (ω0^2 − ω^2)^2 + (2ζω0ω)^2 ] .   (1.1)
The input here is the known angular frequency ω, which is under your control, and
the output is the measured |G| at that frequency. However, to describe the data we
need to vary the parameters of the model — the zero-frequency gain A, the resonant
frequency ω0, and the damping factor ζ. Fig. 1.1(b) shows the data reconstructed
(or fitted) using an optimal set of parameters. Example code to achieve this is given
later.
Figure 1.1: (a) Measurements of |G| sampled in frequency. (b) Reconstruction with the optimal set
of parameters.
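To make the fitting step concrete, here is a minimal Python sketch (not the Matlab code supplied with the project). It fits Eq. 1.1 to synthetic measurements by a coarse grid search over ω0 and ζ, solving for A in closed form since A enters the model linearly. All numerical values are invented for illustration.

```python
import numpy as np

def gain(w, A, w0, zeta):
    # |G|(w; A, w0, zeta) of Eq. 1.1
    return A * w0**2 / np.sqrt((w0**2 - w**2)**2 + (2*zeta*w0*w)**2)

# Synthetic "measurements" (hypothetical values, noise-free for clarity)
w = np.logspace(1, 2.5, 40)
data = gain(w, 10.0, 100.0, 0.2)

# Coarse grid search over (w0, zeta). A enters linearly, so for each
# candidate pair the optimal scale A is found in closed form.
best_err, best_params = np.inf, None
for w0 in np.linspace(50, 150, 101):
    for zeta in np.linspace(0.05, 1.0, 96):
        m = gain(w, 1.0, w0, zeta)           # unit-gain model shape
        A = np.dot(data, m) / np.dot(m, m)   # least-squares scale
        err = np.sum((data - A*m)**2)
        if err < best_err:
            best_err, best_params = err, (A, w0, zeta)

A_fit, w0_fit, zeta_fit = best_params
```

A practical version would refine the grid or hand the problem to a nonlinear least-squares routine, but the criterion being minimized is the same.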
1.1.2 Parametric descriptive models
Note that although the above model is based on physics, we are far removed from the
inter-atomic forces giving rise to elasticity and viscosity. We have relied on someone
else having devised simplified explanations of the bulk properties of matter.
In some areas of endeavour, although one can measure the variation of a quantity
with another, it is not feasible to devise physical generative models that involve just
a few parameters. Despite this, it is often still useful to describe observed data
economically, and in terms of trends, because this allows easier comparison of the
results from different experiments, and indeed might inspire the creation of a model.
For example, suppose you were asked to analyse an experiment in gamma-ray spec-
troscopy — a subject about which you know nothing. You find that if a particular
decaying isotope is present you observe a spectrum with a peak as in Figure 1.2(a);
but if it is absent you observe a background spectrum as in Figure 1.2(b). You cannot
explain why the background is as it is, but you might nonetheless describe it with a
linear or weakly quadratic function and, similarly, you might describe the decay peak
with a Gaussian. Both choices are made because they provide good descriptions of
the data, rather than reasoned physical explanations of the data.
The model has 6 parameters describing the spectrum c(e) as a function of energy
e, where e0 is the centre energy of the peak, σ its width, and S its central height,
c(e) = [ a + b(e − e0) + c(e − e0)^2 ] + S exp[ −(e − e0)^2 / (2σ^2) ] .   (1.2)
(a) Spectrum with sample present. (b) Spectrum of background. (c) Description using function.
Figure 1.2: Describing data using parametrized functions with no reasoned physical basis.
1.1.3 Non-parametric methods
If one has no good physical or descriptive model one can avoid the use of parameters
entirely. For example, in non-parametric kernel regression (e.g., Nadaraya and
Watson) each observed datum position xi is used as a centre on which a kernel
function is placed. The fitted curve is given by a weighted average of the kernels

yfit(x) = Σ_{i=1}^{n} yi Kh(x − xi) / Σ_{i=1}^{n} Kh(x − xi) ,   (1.3)
where the weighting is the observed yi at xi . A variety of kernel functions are used
— top hats, triangular functions, quadratics, Gaussians and so on — but common
traits are that they are symmetrical and have a characteristic width (or bandwidth).
Figure 1.3: Fitting with Gaussian kernels. (a) kernels placed at each xi . (b) Scaled by yi . (c) Example.
This method, which allows the data “to do the talking”, may seem rather ad hoc to
you. It can be misused, though there is a wealth of theory indicating its limits.
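A minimal sketch of the Nadaraya–Watson estimate of Eq. 1.3, using a Gaussian kernel (the data and bandwidth here are invented for illustration):

```python
import numpy as np

def nw_regress(x, xi, yi, h):
    # Nadaraya-Watson estimate of Eq. 1.3 with a Gaussian kernel K_h
    K = np.exp(-0.5 * ((x[:, None] - xi[None, :]) / h)**2)
    return (K @ yi) / K.sum(axis=1)

# Invented data: noise-free samples of a sine, bandwidth h = 0.05
xi = np.linspace(0, 1, 21)
yi = np.sin(2*np.pi*xi)
x  = np.linspace(0, 1, 101)
yf = nw_regress(x, xi, yi, h=0.05)
```

The bandwidth h plays the same role as the kernel width discussed above: too small and the fit chases individual samples, too large and genuine structure is smoothed away.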
But the method might also have reminded you of something you saw in the 2nd year
lectures on Fourier Transforms, that it was possible to reconstruct perfectly a signal
from samples of it, provided the signal was band-limited. That is where this project
picks up.
1.2 Reconstruction of band-limited signals: theory
The first aim in the project is to develop code to reconstruct a sampled signal, as
sketched in Fig. 1.4. As this is a computational method the reconstruction can only
be made at a finite number of points, but these can lie at any value of the variable
t required.
Figure 1.4: Reconstruction of a signal from samples of it.
1.2.1 Theory
Let us assume a continuous signal f(t) in time, and recall that sampling is a
multiplicative process, as shown in Fig. 1.5. With a sampling interval of Ts, the
sampling signal needed is the infinite impulse train

p(t) = Σ_{k=−∞}^{∞} δ(t − kTs) .   (1.4)
Figure 1.5: A sampled signal generated by modulation with an infinite-duration impulse train.
The modulation property tells us that the frequency spectrum of the sampled signal
is the convolution
Y(ω) = (1/2π) F(ω) ∗ P(ω) ,   (1.5)
where the Fourier transform of a pulse train is
P(ω) = ωs Σ_{k=−∞}^{∞} δ(ω − kωs)   with   ωs = 2π/Ts .   (1.6)
Hence the sampled spectrum becomes
Y(ω) = (ωs/2π) Σ_{k=−∞}^{∞} F(ω − kωs) = (1/Ts) Σ_{k=−∞}^{∞} F(ω − kωs) .   (1.7)
An example is sketched in Fig. 1.6. You will realize from the gaps in the frequency
spectrum that the signal is not aliased, which means that (i) the original f (t) is band-
limited (to ωm, say), and (ii) the sampling frequency satisfies the Nyquist criterion
ωs ≥ 2ωm. Note from Eq. 1.7 that the amplitude of each “copy” is a factor of 1/Ts times the original magnitude.
Figure 1.6: Pulse train modulation at Ts = 2π/ωs gives duplicate spectra every ωs.
Now recall the Nyquist-Shannon theorem, which indicates that the baseband
spectrum can be recovered uncorrupted by filtering the output with an ideal
low-pass filter (Fig. 1.7). Note that to restore the original amplitude the gain
of the filter must be Ts.
1.2.2 Reconstruction in the time domain
A remarkable thing you learnt from the 2nd year Fourier Transform lectures is that
it is possible to restore the original in the time domain.
The required brick-wall low-pass filter is a top-hat with amplitude Ts in the frequency
domain. If the input signal f (t) is band-limited to ωm, the reconstruction filter can
have half-width ωm too. The required transfer function is

TsΠm(ω) = { Ts   for |ω| ≤ ωm
          {  0   for |ω| > ωm .   (1.8)
Figure 1.7: Recovery of the baseband is possible using an ideal low pass filter with a gain of Ts .
The FT of the sampled input is (1/2π) F(ω) ∗ P(ω), so the FT of the output of the
reconstruction filter is

O(ω) = TsΠm(ω) · (1/2π) (F(ω) ∗ P(ω)) .   (1.9)
The convolution/modulation properties tell us that f(t) ∗ g(t) ⇔ F(ω)G(ω) and
f(t)g(t) ⇔ (1/2π) F(ω) ∗ G(ω), so that the output in the time domain is

o(t) = Ts FT^{−1}[Πm] ∗ FT^{−1}[ (1/2π) F(ω) ∗ P(ω) ]   (1.10)
     = Ts FT^{−1}[Πm] ∗ (f(t)p(t)) .
We know that, first,

f(t)p(t) = f(t) Σ_{k=−∞}^{∞} δ(t − kTs) = Σ_{k=−∞}^{∞} f(kTs) δ(t − kTs) ,   (1.11)

and, second, a unit pulse of half-width a/2 has as its Fourier Transform

Π_{a/2}(t) ⇔ a sin(ωa/2)/(ωa/2) = sin(ωa/2)/(ω/2) .   (1.12)
Also the Dual Property tells us that

f(t) ⇔ F(ω)   implies   F(t) ⇔ 2πf(−ω) .   (1.13)

Hence

sin(ta/2)/(t/2) ⇔ 2πΠ_{a/2}(−ω)   and so   sin(tωm)/(πt) ⇔ Πm(−ω) = Πm(+ω) ,   (1.14)
where flipping the sign of ω exploits Π’s even symmetry.
Putting all this together gives

o(t) = Ts [sin(tωm)/(πt)] ∗ Σ_{k=−∞}^{∞} f(kTs) δ(t − kTs)   (1.15)

     = Ts Σ_{k=−∞}^{∞} f(kTs) ∫_{−∞}^{∞} [sin(ωm(t − τ))/(π(t − τ))] δ(τ − kTs) dτ

     = Ts Σ_{k=−∞}^{∞} f(kTs) sin(ωm(t − kTs))/(π(t − kTs)) .
This expression says that the temporal reconstruction of a sampled band-limited
signal f (kTs) can be achieved by interpolating the samples with sinc functions.
One draws a sinc function scaled by the sample’s size at the position of each sample,
and then adds them up.
You’ll notice too that this is a form of the kernel-based non-parametric fitting method
we referred to earlier — but there is nothing ad hoc about the derived expression.
1.3 Example from the A1 lectures
The example shown in the A1 lectures was f(t) = exp(−t^2/20) cos(t), which is
band-limited at ωm > 1. Suppose we say ωm = 2. Now we require ωs ≥ 4, but for
convenience let's make ωs = 2π, which makes Ts = 1.
The exponential envelope here allows us to approximate the function as zero for
|t| > 10, so, using Ts = 1, our samples will be
f(kTs) = f(k) = { exp(−k^2/20) cos(k)   for k = −10,−9, . . . , 9, 10
                { 0                     otherwise .   (1.16)
So using the reconstruction interpolation with ωm = 2, Ts = 1, and using the
approximation above,

o(t) = Ts Σ_{k=−∞}^{∞} f(kTs) sin(ωm(t − kTs))/(π(t − kTs)) ≈ Σ_{k=−10}^{10} f(k) sin(2(t − k))/(π(t − k)) .   (1.17)
The results are shown in Fig. 1.8.
Figure 1.8: (a) The original signal f(t), and samples taken at Ts = 1. (b) Sinc function associated
with the middle sample. (c) Sinc functions associated with all samples. (d) Their summation is
identical to the original signal. (Part (c) is clearer in the colour pdf.)
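The example is also easy to check numerically. A Python sketch of Eq. 1.17 follows; note that NumPy's sinc uses the HLT convention sinc(x) = sin(πx)/(πx), so the argument must be scaled by ωm/π. The residual error is small but non-zero, because f(t) is only approximately band-limited and the sum is truncated at |k| = 10.

```python
import numpy as np

# f(t) = exp(-t^2/20) cos(t), sampled at Ts = 1 for k = -10..10
f = lambda t: np.exp(-t**2/20) * np.cos(t)
k = np.arange(-10, 11)
t = np.linspace(-5, 5, 201)

wm, Ts = 2.0, 1.0
# Eq. 1.17: o(t) = Ts * sum_k f(k) sin(wm(t-k)) / (pi(t-k)),
# written via np.sinc to avoid the 0/0 at the sample instants
o = Ts * (wm/np.pi) * np.sinc((wm/np.pi) * (t[:, None] - k[None, :])) @ f(k)

err = np.max(np.abs(o - f(t)))   # small, but not exactly zero
```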
1.4 Towards code for reconstruction
Let’s return to the general formula

o(t) = Ts Σ_{all k} f(kTs) sin(ωm(t − kTs))/(π(t − kTs)) .   (1.18)
Some care is required when coding this.
1. Notice that the theory developed assumes that the k = 0 sample occurs at
t = 0. There are two problems here. First we might not have a sample labelled
k = 0, and even if we did it might not occur at t = 0. This can be solved by
applying shifts to k and t.
Suppose the samples are held in a vector fs with elements fs[1] to fs[nsamples],
and the corresponding times held in ts[1] to ts[nsamples]. (NB: Elements
in Matlab are written using curvy brackets, but we shall use square brackets to
distinguish them in mathematical expressions.)
Now suppose that we wish to reconstruct using only samples fs [m] to fs [n]. The
reconstruction is obtained by applying offsets to both k and t on the RHS of
the equation
o(t) = Ts Σ_{k=m}^{n} fs[k] sin(ωm(t − ts[m] − (k − m)Ts)) / (π(t − ts[m] − (k − m)Ts)) .   (1.19)
On the RHS of the equation, the effective zeroes for k and t are now associated
with the mth sample.
2. The next slightly tricky part is to avoid the “small over small” computation in
sin(p)/p when p ≈ 0 by utilizing Matlab’s built-in sinc function, which is defined
as in HLT:

sinc(p) = sin(πp)/(πp) .   (1.20)
If we set pk = (ωm/π)(t − ts[m] − (k − m)Ts), then

sinc(pk) = sin(ωm(t − ts[m] − (k − m)Ts)) / (ωm(t − ts[m] − (k − m)Ts)) ,   (1.21)
so that
o(t) = Ts Σ_{k=m}^{n} fs[k] (ωm/π) sinc(pk) .   (1.22)
TASK 1.1
1. Review the following code.
Function func() is our trial function. Its bandlimit is guessed at in func_bandlimit().
The top-level function reconstruct(f,prefix) takes samples from a function
driven at frequency f, then reconstructs the original using sinc interpolation.
2. Run reconstruct(1000,’recon’) at the prompt. This will use a notional
maximum frequency component of 1 kHz, and save the plot of results in file
recon.eps.
3. Explore the result of changing the sampling rate, and so on. Why are there
large errors at either end?
Figure 1.9: From top to bottom: the original signal and samples from it, its reconstruction using the
sinc function, and the difference between the original and reconstructed signals.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% reconstruct.m
% Script to reconstruct func using sincinterp
% DWM 15/9/11
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

function reconstruct(f,prefix)

fm = func_bandlimit(f);  % approx bandlimit
tstart = 0;              % start time: arbitrary
tstop = 30/f;            % stop time: arbitrary

% Choose sampling frequency
fs = 2.1*fm;             % close to the minimum allowed
ts = 1/fs;               % hence sampling period

% Sample the function
tsample = tstart:ts:tstop;
fsample = func(tsample,f);
nsamples = length(fsample);

% Determine times tr where you want reconstruction
tstep = ts/10;           % say 10 per orig sample
tr = tstart:tstep:tstop;

% Reconstruct at these times   % Ugly code warning
for n=1:length(tr)
  fr(n) = sincinterp( ** FIXME ** );
end;
fo = func(tr,f);         % find original func values
err = fr - fo;           % hence a vector of errors

% Plotting stuff
hold off;
subplot(3,1,1), plot(tr, fo, '-','LineWidth',2,'Color',[0 0 .7]); hold on;
subplot(3,1,1), plot(tsample, fsample, 'O','LineWidth',2,'Color',[.7 0 0]);
subplot(3,1,2), plot(tr, fr, '-','LineWidth',2,'Color',[0 0.7 0]);
subplot(3,1,3), plot(tr, err, '-','LineWidth',2,'Color',[0.6 0 0.6]);
filename = sprintf('%s.eps',prefix);
print('-depsc', filename);    % save as colour postscript
../Matlab/Reconstruction/reconstruct.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% func: arbitrary test function
% Inputs:  t time
%          f frequency
% Outputs: function value r
% DWM 31/10/11
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function r = func(t,f)
v = 2*pi*f*t;
r = sin(0.1*v) + sin(0.2*v) + sin(0.3*v) + sin(0.4*v) + ...
    sin(0.5*v) + sin(0.6*v) + sin(0.7*v) + sin(0.8*v) + ...
    sin(0.9*v) + sin(1.0*v);
../Matlab/Reconstruction/func.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% func_bandlimit
% Inputs:  f, the parameter given to func
% Outputs: fm, approximation to bandlimit
% DWM 31/10/11
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function fm = func_bandlimit(f)
fm = 1.2*f;
../Matlab/Reconstruction/func_bandlimit.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Output: function value at time t by sinc interpolation
% Inputs: t     time at which reconstruction is wanted
%         fs    vector of samples
%         ts    vector of sample times
%         m, n  indices of first, last samples to be used
%         Ts    sample period
%         wm    bandlimit
% DWM 15/9/11
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function sum = sincinterp(t, fs,ts,Ts,wm, m,n)
c = Ts*wm/pi;          % useful constant
sum = 0;               % initialize the summation
for k=m:1:n            % loop over the samples
  p   = ** FIXME **;   % look at the def of sinc(x)
  sum = ** FIXME **;   % actually doing it
end;
../Matlab/Reconstruction/sincinterp.m
TASK 1.2
1. Point matlab at ../Matlab/Spectrum/ and run spectrum(1000,’myspectrum’).
This computes the power spectrum of the samples using the Fast Fourier Transform,
storing the graph in myspectrum.eps. (Don’t worry that the peaks appear
at different heights. This is just a result of the spectrum itself being sampled.
The integral under each peak will be more nearly equal.)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% spectrum
% Plots power spectrum of function in func()
% Inputs: frequency (Hz)
%         file prefix
% DWM 31/10/11
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function spectrum(f,prefix)

fm = func_bandlimit(f);
tstart = 0;
nperiods = 200;
tstop = nperiods/f;
% Choose sampling frequency
fs = 2.1*fm;             % close to the minimum allowed
ts = 1/fs;               % hence sampling period
% Sample the function
tsample = tstart:ts:tstop;
fsample = func(tsample,f);
nsamples = length(fsample);

% Fast Fourier Transform of signal
Y = fft(fsample,512);
% Evaluate power spectrum
Pyy = Y.*conj(Y)/512;
% Jig up one-sided frequencies only
freq = fs*(0:256)/512;

% Plot the spectrum
hold off;
set(gca, 'FontSize', 18);
plot(freq, Pyy(1:257), '-','LineWidth',2,'Color',[0 0.4 0.7]);
axis( [0,max(freq),0,max(Pyy(1:257))] );
xlabel('Frequency (Hz)'); ylabel('Power');
filename = sprintf('%s.eps',prefix);
print('-depsc', filename);
../Matlab/Spectrum/spectrum.m
Figure 1.10: The power spectrum computed using matlab’s fft.
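For comparison, the same computation as spectrum.m can be sketched in NumPy (parameters mirror the Matlab listing; the Parseval identity at the end is a handy sanity check on the power spectrum):

```python
import numpy as np

f  = 1000.0
fm = 1.2 * f                 # func_bandlimit()
fs = 2.1 * fm                # close to the minimum allowed
ts = 1 / fs
t  = np.arange(0, 200/f, ts) # 200 periods, as in spectrum.m

v = 2 * np.pi * f * t        # func(): sum of ten harmonics
x = sum(np.sin(0.1*kk*v) for kk in range(1, 11))

Y    = np.fft.fft(x, 512)           # zero-padded 512-point FFT
Pyy  = (Y * np.conj(Y)).real / 512  # power spectrum
freq = fs * np.arange(257) / 512    # one-sided frequency axis (Hz)

# Parseval: total power equals the energy of the (zero-padded) samples
check = np.isclose(Pyy.sum(), np.sum(x**2))
```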
1.5 Windowed sinc approximations
Although sinc interpolation is mathematically ideal, it is far from ideal for
implementation. The sinc function is of infinite length and decays slowly. To
reconstruct the original at t requires contributions from “distant” samples whose
time differences |t − kTs| are large.
Numerous methods of approximating the sinc function and/or reducing its width
appear in the literature. It is common to use “windowed” sinc interpolation

s(p) = w(p) sinc(p) ,   (1.23)

where w(p) has a particular form within the window |p| < M, and is zero outside.
For example:

   Window type             w(p) when |p| < M      Otherwise
0. Not windowed at all     w(p) = 1               w(p) = 1
1. Rectangular             w(p) = 1               0
2. Linear                  w(p) = 1 − |p|/M       0
3. Quadratic               w(p) = 1 − p^2/M^2     0
4. Cosine                  w(p) = cos(πp/2M)      0
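The windowed kernels can be sketched directly (Python; the window names are mine, and np.sinc already follows the sinc(p) = sin(πp)/(πp) convention used above):

```python
import numpy as np

def window(p, M, kind):
    # Windows 1-4 from the table; all are zero outside |p| < M
    inside = np.abs(p) < M
    w = {"rectangular": np.ones_like(p),
         "linear":      1 - np.abs(p)/M,
         "quadratic":   1 - p**2/M**2,
         "cosine":      np.cos(np.pi*p/(2*M))}[kind]
    return np.where(inside, w, 0.0)

p = np.linspace(-8, 8, 1601)
M = 4
s = window(p, M, "linear") * np.sinc(p)   # windowed sinc kernel, Eq. 1.23
```

All four taper (or truncate) the kernel so that only samples within M sample periods of t contribute; the cost is a departure from the ideal brick-wall response.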
2 Filtering of sampled signals

Now you will consider low pass filtering of a sequence of samples in order to create
another sequence of samples with the same sampling period (Fig. 2.1).
Figure 2.1: A causal system inputting a sampled signal f and outputting a sampled signal g.
2.1 Introduction
A sequence f will be denoted by the vector f. The k-th sample in f is fk ≡ f[kTs].
If fk was captured at tk, then fk−1 was captured at tk−1 = tk − Ts.

In the domain of sampled signals, a causal linear shift-invariant system computes its
current output gn from its current input, its (N−1) past inputs, and its (M−1) past
outputs. That is (and notice one summation is from k = 0, the other from k = 1):

gn = Σ_{k=0}^{N−1} ak fn−k − Σ_{k=1}^{M−1} bk gn−k .   (2.1)
This form, which is recursive as it uses past outputs, is able to generate systems
whose impulse response may be of infinite duration or of finite duration — Infinite
Impulse Response or Finite Impulse Response (IIR or FIR) systems. If we do not use
past outputs the expression becomes non-recursive
gn = Σ_{k=0}^{N−1} ak fn−k .   (2.2)
This can only generate FIR systems. It is this sort of system we are interested in.
By analogy with the continuous case, Eq. 2.2 is the discrete convolution between
a and f, which hints to us that the elements ak are actually the elements of
the discrete impulse response sequence h. In the z-domain the (one-sided) transfer
function of the system is
H(z) = Σ_{k=0}^{N−1} ak z^{−k} = Σ_{k=0}^{N−1} hk z^{−k} .   (2.3)
It can be implemented using N taps and N − 1 delays in a tapped delay line, as shown
in Fig. 2.2.
Figure 2.2: Tapped delay line filter. Notice that fn is the current input, and fn−1, etc, previous inputs.
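The non-recursive form of Eq. 2.2 can be sketched as follows (Python; the five tap values are invented). Feeding in a unit impulse returns the taps themselves, which is why the ak are the impulse response sequence:

```python
import numpy as np

# Non-recursive (FIR) system of Eq. 2.2: g_n = sum_{k=0}^{N-1} a_k f_{n-k}
h = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # hypothetical symmetric taps

def fir(h, f):
    N = len(h)
    fpad = np.concatenate([np.zeros(N-1), f])  # zeros before the signal starts
    # fpad[N-1+m] is f_m, so fpad[N-1+n-k] is the delayed input f_{n-k}
    return np.array([sum(h[k] * fpad[N-1+n-k] for k in range(N))
                     for n in range(len(f))])

# A unit impulse in gives the taps back out
g = fir(h, np.array([1.0, 0, 0, 0, 0, 0]))
```

The explicit loop mirrors the tapped delay line of Fig. 2.2; in practice the same result comes from a library convolution (np.convolve here, conv in Matlab).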
2.2 Some initial constraints on h
Your aim throughout the rest of the project is to explore how to find optimal values
for the impulse response sequence h to generate a low pass filter to satisfy
certain performance criteria. However, there are some general constraints we can use
immediately.
1. If the N values of hk in h form either a symmetric or antisymmetric sequence,
the filter exhibits a constant group delay at all frequencies. Such filters are said
to exhibit “exact linear phase shift”. (It is easy to explain this phrase. If the
delay ∆T is constant at all frequencies, then the phase shift ∆φ = ω∆T depends
linearly on frequency. We will comment on this again later.)
It can also be shown that antisymmetric sequences are suited to generating high-
pass and band-pass filters, but symmetric sequences are suited to generating
low-pass and band-stop (notch) filters. Here we want to construct a low pass
filter, so that each hk = hN−1−k for k = 0 to k = (N − 1).
2. We could design the filter with either an even or odd number N of values in h.
Let us use odd N. There are (N − 1)/2 + 1 distinct values in h:

[Diagram: h0, h1, . . . , h(N−1)/2, . . . , hN−2, hN−1, with hk and hN−1−k marked “Equal”.]
2.3 Relating the hk to a frequency-dependent transfer function
We know how the hk contribute to H(z), but we need the connection to the more
familiar transfer function which is a function of angular frequency ω. For a purely
harmonic input, σ = 0 and z = exp(jωTs), where Ts is the sampling period. The
locus of z is the unit circle in the z-plane. Then

H(exp(jωTs)) = Σ_{k=0}^{N−1} hk exp(−jkωTs) .   (2.4)
Look at the LHS. This is a function of an exponential of ω. Let’s denote this function
of a function as H. The transfer function is
H(ωTs) = Σ_{k=0}^{N−1} hk exp(−jkωTs) .   (2.5)
But this means our transfer function is periodic in ω with period 2π/Ts — surely
a mistake! How could a periodic function in frequency be considered a satisfactory
low pass filter? The answer is that the input signal is sampled, and so is already
band-limited and periodic. (Remember Fig. 1.6.) The sampling period is Ts , so the
sampling frequency is ωs = 2π/Ts and hence the band-limit is ωm = π/Ts . Thus in
the baseband we find frequencies from −π/Ts ≤ ω ≤ +π/Ts , and in the other bands
from −π/Ts + Kωs ≤ ω ≤ +π/Ts + Kωs for K = ±1,±2, . . . . Thus a filter function
periodic in 2π/Ts is exactly what we want, as shown in Fig. 2.3.
Note that the band-limit ωm is not necessarily one that is externally imposed (e.g.
by an anti-aliasing filter). Whether or not the original signal was band-limited, as
soon as you sample it the result is de facto band-limited. It could be aliased too, but
it is still band-limited.
Figure 2.3: Periodic Low Pass Filter.
It is convenient to define a new quantity θ = ωTs and write

H(θ,h) = Σ_{k=0}^{N−1} hk exp(−jkθ) ,   (2.6)

which has a period of 2π. For a low-pass filter we want to introduce a cut-off at θc between 0 and π, as in Fig. 2.4.
Figure 2.4: The first period of the Low Pass Filter in normalized coordinates.
Now change the index of the summation and the index attached to the h elements:

H(θ) = Σ_{k=−(N−1)/2}^{+(N−1)/2} hk exp[ −j(k + (N − 1)/2)θ ]

     = exp[ −j(N − 1)θ/2 ] Σ_{k=−(N−1)/2}^{+(N−1)/2} hk exp(−jkθ)

     = exp[ −j(N − 1)θ/2 ] [ h0 + 2 Σ_{k=1}^{(N−1)/2} hk cos kθ ]

     = exp[ −j(N − 1)θ/2 ] G(θ,h) ,   (2.7)
where we have used the symmetry property of the h sequence. The first term is
nothing but a phase shift, and does not affect the magnitude of H(θ). The
second term indicates that when the h values are real, they will generate a real
number for the magnitude of H(θ). Note, though, that they might not generate a
positive number for the magnitude, which we might need to worry about later.
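The factorization of Eq. 2.7 is easy to verify numerically. A Python sketch with an arbitrary symmetric 7-tap sequence (invented values) checks that H(θ) equals a pure linear-phase term times the real function G(θ):

```python
import numpy as np

# Arbitrary symmetric taps, N = 7 (values invented): h_{-3} .. h_{3}
centre, side = 0.4, [0.25, 0.1, -0.05]
taps = side[::-1] + [centre] + side       # laid out as h[0..N-1]
N = len(taps)

theta = np.linspace(-np.pi, np.pi, 201)
# Direct form, Eq. 2.6, indexed k = 0..N-1
H = sum(taps[k] * np.exp(-1j*k*theta) for k in range(N))

# Factored form, Eq. 2.7: pure linear phase times the real G(theta)
G  = centre + 2 * sum(side[k-1] * np.cos(k*theta) for k in range(1, 4))
H2 = np.exp(-1j * (N-1) * theta / 2) * G
```

Since the exponential factor has unit modulus, |H(θ)| = |G(θ)| for any symmetric real taps, which is the constant-group-delay property claimed above.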
Note that now h0 refers to the central value of the impulse response sequence, with
negative indices to one side and strictly positive indices to the other. You may find
it useful to try a particular value for N. Suppose N = 11; then the used values are

[Diagram: h−5, . . . , h−1, h0, h1, . . . , h5, with h0 to h5 marked “Used in expression”.]
2.3.1 Truncated Fourier approximation
The more interesting part of the transfer function is
G(θ,h) = [ h0 + 2 Σ_{k=1}^{(N−1)/2} hk cos kθ ] .   (2.8)
The expression in the square bracket looks very much like a Fourier cosine series
— but a truncated one. Perhaps a good approximation would be to use the first
(N − 1)/2 + 1 Fourier coefficients of a unit top hat function of half-width θc as the
approximation?
TASK 2
Use the directory B1/Matlab/Fourier for this task.
1. On paper derive expressions for the Fourier series coefficients describing the
periodic function defining the desired response (Fig. 2.4), and hence work out
the relationship between the Fourier coefficients and the hk values for 0 ≤ k ≤ (N − 1)/2.
2. Because Matlab does not use index 0 in vectors, it is convenient to work with
a shifted vector η = [h0, h1, . . . , h(N−1)/2], which holds just the centre and
right-hand half of h. Write a function etacoeffs() in file etacoeffs.m that
inputs any odd N along with the cut-off θc, and returns the vector η.
3. Design and code up a function gactual() in file gactual.m that evaluates the
second part of the transfer function,

G(θ,η) = η1 + 2 Σ_{k=1}^{(N−1)/2} ηk+1 cos kθ .   (2.9)
Remember that Matlab is more efficient operating on vectors rather than using
“for loops”. Your function should input a matlab vector θ and output a vector
G of values, using G(θ,η) = Aη. Recall too that expressions like cos(vector)
produce a vector result.
4. Write a function gdesired() that when given a matlab vector of θ values θ
generates a vector Gd of the desired Gd values given by
Gd(θ) = { 1   for −θc ≤ θ ≤ θc
        { 0   for −π ≤ θ < −θc and θc < θ ≤ π .   (2.10)
5. Now write a top-level function fourierlowpass() to glue everything together.
The parameters that drive your filter design are the number of taps N and the
cutoff θc . Use as parameters to your function N, fc= θc/π, and nsteps which
controls at how many points you calculate the filter’s value.
At first your θ vector could contain values over more than one period. Once
you are satisfied that the shape is periodic and symmetrical you could plot just
over 0 ≤ θ ≤ π.
Add code so that fourierlowpass() plots the desired values and actual values
Gd and G on the same axes.
Also write code to generate and store a lollipop plot of the h coefficients. You
should plot all the coefficients, from h−(N−1)/2 to h(N−1)/2, so that the symmetry
is explicit.
Devise a method of generating the plot filenames from the values of N and fc.
E.g., if these are 11 and 0.5, the filename might be fourierlowpass_0p5_11.eps.
You will later want to compare results with a filter with N = 31. You should
explore what happens when N becomes this large, and larger still.
Your results should appear something like those in Fig. 2.5.
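If you want an independent check of the matrix-vector formulation in step 3, the same computation can be sketched outside Matlab. Below is a small Python/NumPy version (Python used only so the check stands alone; the η values are made up for illustration):

```python
import numpy as np

# Illustrative check (NumPy stand-in, not the project's Matlab code): evaluate
# G(theta, eta) = eta_1 + 2*sum_k eta_{k+1} cos(k*theta) both by direct
# summation of Eq. 2.9 and as a single matrix-vector product G = A @ eta.
eta = np.array([0.6, 0.3, -0.1])          # made-up coefficients
theta = np.linspace(0.0, np.pi, 101)      # column of theta values

# Direct summation of Eq. 2.9
G_loop = eta[0] + sum(2.0 * eta[k] * np.cos(k * theta)
                      for k in range(1, len(eta)))

# Matrix form: columns are [1, 2cos(theta), 2cos(2 theta), ...]
A = np.column_stack([np.ones_like(theta)] +
                    [2.0 * np.cos(k * theta) for k in range(1, len(eta))])
G_mat = A @ eta

assert np.allclose(G_loop, G_mat)
```

The two evaluations agree to machine precision, which is the property your gactual() should have.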
Figure 2.5: Truncated Fourier results for 31 taps with θc = 0.60π. (a) Frequency response and (b)
Impulse response sequence.
24 2. FILTERING OF SAMPLED SIGNALS
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% etacoeffs
% Inputs  N       (odd) number of taps
%         thetac  cutoff frequency (0 to pi)
% Outputs eta     COLUMN vector of 0.5(N+1) h coeffs
% DWM 30/10/11
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function eta = etacoeffs(N,thetac)
% These are the Fourier coefficients for the taps
eta(1) = ** FIXME **;       % Central value
%
for k=1:(N-1)/2
  eta(k+1) = ** FIXME **;   % One side
end
eta = eta';                 % Make it a COLUMN vector
../Matlab/Fourier/etacoeffs.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% gactual
% Inputs  eta,   COLUMN vector of h values
%         theta, COLUMN vector of theta values
% Outputs g,     COLUMN vector of g values
% DWM 30/10/11
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function g = gactual(eta,theta)
% fancy matrix-vector way
n = length(theta);
kmax = length(eta)-1;
A = ones(n,1);            % 1st column of matrix
for k=1:kmax
  A = [A, **FIXME** ];    % Add another COLUMN to right of A
end
g = **FIXME**;            % Matrix-vector multiplication
../Matlab/Fourier/gactual.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% gdesired
% Inputs  theta   COLUMN vector of theta vals 0 to pi
%         thetac  cutoff value
% Outputs gd      vector of desired g values
% DWM 30/10/11
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function gd = gdesired(theta,thetac)
% Tricksy! Because the values are 1 and 0 we can
% use the outputs from a logical test!
gd = abs(theta) <= abs(thetac);
../Matlab/Fourier/gdesired.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% fourierlowpass
% Inputs: N, number (odd) of taps
%       : fc, cutoff freq as fraction of pi
%       : nsteps, number of divisions 0 to pi
% DWM 30/10/11
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function fourierlowpass(N,fc,nsteps)
% Really should check that
% (1) N is odd and (2) fc is in proper range
% but skip for now ...

% Step (1) Theta values
thetac = pi*fc;                     % cut off
theta = [0:1:nsteps]'*(pi/nsteps);  % COLUMN theta vector

% Step (2) get the eta values and actual filter
eta = etacoeffs(N,thetac);
g = gactual(eta,theta);             % actual filter

% Step (3) get the desired filter values
gd = gdesired(theta,thetac);        % desired filter

% Step (4) Plot the actual and desired filter
hold off;
set(gca, 'FontSize', 18);
plot(theta, gd, '-','LineWidth',2,'Color',[0 0 0]); hold on;
plot(theta, g, '-','LineWidth',2,'Color',[1 0 0]);
axis([0,3.2,-0.1,1.1]);
xlabel('\theta'); ylabel('Filter response G(\theta)');
% Get a meaningful filename
num = floor(fc);
frac = floor(100*(fc-num));         % two dec places
fr_file = sprintf('fourierlowpass_fr_%d_%dp%d.eps',N,num,frac);
fprintf('Frequency response saved to %s\n',fr_file);
% Save plot as colour postscript
print('-depsc', fr_file);
pause;

% Step (5) Save plot of impulse response as a lollipop plot
ir_prefix = sprintf('fourierlowpass_ir_%d_%dp%d',N,num,frac);
lollipop(ir_prefix,eta);
../Matlab/Fourier/fourierlowpass.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% lollipop - plots a lollipop graph
% Inputs: fileprefix (a string)
%         eta, vector of tap coefficients
% DWM 30/10/11
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function lollipop(fileprefix,eta)

% Fix up left side --- there is a nicer way
kmax = length(eta);
% fix up the left hand
for k=1:kmax-1
  h(k) = eta(kmax+1-k);
end
% Fix up right side
h = [h,eta'];                   % ROW VECTOR
index = [-kmax+1:1:kmax-1];

% Plot as lollipops
hold off;
set(gca, 'FontSize', 18);
plot(index, h, 'o','MarkerSize',7,'LineWidth',6,'Color',[0.7 0 0]); hold on;
xlabel('k'); ylabel('Impulse response h[k]');
for k=1:length(h)
  line([index(k),index(k)],[0,h(k)],'LineWidth',3,'Color',[0.7 0 0]);
end
line([min(index)-1,max(index)+1],[0,0],'LineWidth',2,'Color',[0 0 0]);
axis([min(index)-1,max(index)+1, min(h)-0.1,max(h)+0.1]);

% Save it for posterity in fileprefix.eps
filename = sprintf('%s.eps',fileprefix);
print('-depsc', filename);
fprintf('Lollipop saved to %s\n',filename);
../Matlab/Fourier/lollipop.m
2.4 What was the approximation we just made?
The approximation we just made may have seemed ad hoc, and it is worth spending a
moment to understand that what you computed was, for the given number of terms in
the Fourier series, a least-squares approximation. Least squares was the criterion
used to fit the damped-oscillator data and the nuclear-spectrum data in the
introduction. So let us consider first discrete data.
2.4.1 Discrete data
Suppose we have a number of measurements zi obtained at values xi of an indepen-
dent variable x (the dots in Fig. 2.6), and we have a model with a set of parameters
p = (p1, p2, ..., pn) that given xi will produce the expected value zfit(xi, p) (the line
in Fig. 2.6).
Figure 2.6: Measurements zi at points xi (dots) and the fitted model zfit(x, p) (line).
Assuming the uncertainty on each point is the same, the optimal least squares pa-
rameter set is that which minimizes a cost C(p) equal to the sum of the squares
of the differences between measured and fitted values. That is we seek the optimal
parameters p∗, where
p∗ = arg minp
[C(p)] = arg minp
[∑i
(zi − zf i t(xi , p))2
]. (2.11)
The general method to find the n parameters p∗ = (p1, . . . , pn)∗ is to solve the set
of n simultaneous equations

∂C/∂p1 = 0 ,   ∂C/∂p2 = 0 ,   . . . ,   ∂C/∂pn = 0 ,    (2.12)

and to check that all ∂²C/∂pi² > 0, so that the solution really is a minimum.
2.4.2 Continuous functions using orthogonal basis sets
Now suppose that, rather than approximating discrete data, you want to approximate
a continuous function zm(x). Using an orthogonal basis set ψ (Fourier, Chebyshev,
whatever), the approximating function is the series

z(x) = a0 ψ0(x) + a1 ψ1(x) + . . . ,    (2.13)
and the coefficients are found using the appropriate orthogonality property expressed
using inner products as
an = 〈ψn, zm〉/〈ψn, ψn〉 . (2.14)
Turning to least squares, the appropriate cost function C is the integral
C =
∫R
(z(x)− zm(x)
)2dx, (2.15)
where R is an appropriate range (for Fourier, one period.) Remembering that zm(x)
is continuous, the cost is (we drop the (x) from line 2 onwards)
C = ∫_R ( z(x) − zm(x) )² dx    (2.16)
  = ∫_R z² dx − 2 ∫_R z zm dx + ∫_R zm² dx
  = ( a0² 〈ψ0, ψ0〉 + a1² 〈ψ1, ψ1〉 + . . . ) − 2 ( a0 〈ψ0, zm〉 + a1 〈ψ1, zm〉 + . . . ) + K
where K is a constant. But the 'a' coefficients are the parameters in this model.
Hence
∂C/∂a0 = 2a0 〈ψ0, ψ0〉 − 2 〈ψ0, zm〉 = 0 (2.17)
∂C/∂a1 = 2a1 〈ψ1, ψ1〉 − 2 〈ψ1, zm〉 = 0 (2.18)
and so on. Each equation is independent and gives the general result
an = 〈ψn, zm〉/〈ψn, ψn〉 . (2.19)
But this is exactly what we wrote in Eq. 2.14! In other words, when we
approximate a function shape using a set of orthogonal basis functions and use the
orthogonality relationships to find the coefficients, the resulting curve is the one
that would be obtained by varying the 'a' coefficients to generate a least-squares
approximation to the function.
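The equivalence is easy to confirm numerically. The sketch below (Python/NumPy, independent of the project code) approximates zm(x) = |x| on [−π, π] with the basis {1, cos x, cos 2x}, computing the coefficients once by the inner-product formula of Eq. 2.14 and once by an explicit least-squares fit to samples of the function:

```python
import numpy as np

# Numerical illustration: coefficients from orthogonality (Eq. 2.14) agree
# with a least-squares fit. The target function |x| and the 3-term cosine
# basis are arbitrary illustrative choices.
x = np.linspace(-np.pi, np.pi, 20001)
zm = np.abs(x)
basis = np.column_stack([np.ones_like(x), np.cos(x), np.cos(2 * x)])

# (i) a_n = <psi_n, z_m> / <psi_n, psi_n>, inner products as sums over the grid
a_orth = np.array([(b @ zm) / (b @ b) for b in basis.T])

# (ii) least squares: minimize sum of (basis @ a - zm)^2 over a
a_lsq, *_ = np.linalg.lstsq(basis, zm, rcond=None)

assert np.allclose(a_orth, a_lsq, atol=1e-3)
```

The Fourier series of |x| gives a0 = π/2 and a1 = −4/π, which both routes recover; the cos 2x coefficient is (correctly) near zero.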
In the next part of the project you will design the filter using least squares, but in a
more direct way that allows for greater flexibility in modelling.
3 Optimal design using least squares
3.1 A different model of the low pass filter
The low pass filter has so far been defined in terms of a single cut off frequency θc .
Here you use a more realistic model of the filter, and optimize parameters to meet
attainable criteria.
A commonly used model of the low pass filter is shown in Fig. 3.1. It involves
definition of a pass band (0 ≤ θ ≤ θp), in which G(θ) should have magnitude
close to unity, and a stop band (θs ≤ θ ≤ π), in which the magnitude should be close
to zero. Nothing explicit need be said about the transition band (θp ≤ θ ≤ θs).
Figure 3.1: A fancier low pass filter model with pass and stop bands.
3.2 Least squares fitting to the pass and stop bands
You will generate the filter coefficients using least-squares, but fitting only in the
pass and stop bands. You want to find the optimal parameter vector η∗ such that
η∗ = arg min_η [ ∫_0^θp W(θ) ε²(θ, η) dθ + ∫_θs^π W(θ) ε²(θ, η) dθ ] .    (3.1)
The error is
ε(θ,η) = G(θ,η)− Gd(θ) , (3.2)
where Gd(θ) is the desired value, unity in the pass band and zero in the stop band.
W (θ) is a weight function. A large value of W (θ) will force a close fit at that
particular value of θ, and vice versa.
3.3 Towards an implementation
The problem involves minimization of a cost function which itself involves numerical
integration. It is sensible to do as little work as possible inside functions that may be
called multiple times.
For example, we can precompute vectors θp and θs of closely spaced θ values for
the pass and stop bands. Suppose too that the weights in the pass and stop bands
are constants Wp and Ws. We also know that the desired values Gd are 1 and 0 in
the pass and stop bands respectively. The optimization can be written as
η∗ = arg min_η C(η, θp, θs, Wp, Ws) ,    (3.3)

where

C(η, θp, θs, Wp, Ws) = Wp Ip(η, θp) + Ws Is(η, θs) .    (3.4)
Ip(η, θp) is a numerical integration over the range of values in the θp vector,

Ip(η, θp) = ∫_0^θp ( G(η, θ) − 1 )² dθ ,    (3.5)

and (nearly) similarly for the stop band.
For the integration you might sum rectangles, sum trapezia, or use Simpson's rule.
The first two are illustrated in Fig. 3.2. (Notice that the first method is effectively
taking the sum of squares, just as with discrete data, then multiplying by ∆θ, which
is a constant.)
Figure 3.2: Approximation of the integral (a) as a sum of squares (b) using the trapezium rule.
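The two schemes differ only in how each strip is closed off at the top. A quick sketch (Python/NumPy, not project code, with an arbitrary integrand standing in for ε²) shows the trapezium rule landing closer to the exact value:

```python
import numpy as np

# Compare the two quadrature schemes of Fig. 3.2 on a known integral:
# int_0^pi (theta/pi)^2 dtheta = pi/3. The integrand is an arbitrary stand-in.
nsteps = 1000
dtheta = np.pi / nsteps
theta = np.linspace(0.0, np.pi, nsteps + 1)
errsq = (theta / np.pi) ** 2     # stands in for epsilon^2(theta)

# (a) sum of rectangles, using the left-hand end of each strip
rect = errsq[:-1].sum() * dtheta
# (b) sum of trapezia: half-weight the two end points
trap = dtheta * (0.5 * errsq[0] + errsq[1:-1].sum() + 0.5 * errsq[-1])

exact = np.pi / 3
assert abs(trap - exact) < abs(rect - exact) < 2e-3
```

In the project itself, Matlab's trapz() gives the trapezium-rule result directly.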
TASK 3
Use the directory B1/Matlab/LeastSq for this task.
1. Review the lsqlowpass() top-level function below. It sets up vectors of θ
values in the pass and stop bands, computes the desired values in those bands,
then sets initial values of the η values.
The key line is the one calling fminsearch() to minimize the cost function
by varying the values of η. Use matlab’s documentation to discover how the
function is used.
With the parameters optimized, the shape of the optimal filter is recovered, and
plots made.
2. A skeleton of the cost function has been provided. You need to write the
function, which will involve calling a numerical integration function.
3. Explore the performance for different relative weights and taps, to make sure
that your code is performing as expected, and comment on the results.
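Matlab's fminsearch() implements the Nelder-Mead simplex method. If it helps to see the calling pattern on neutral ground first, the sketch below does the same job in Python via SciPy's Nelder-Mead option (SciPy assumed available), minimizing a toy weighted cost whose minimum is known; the cost and weights are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the project's cost function: weighted quadratic with a
# known minimum at eta = (1, 2). Weights play the role of Wp and Ws.
def cost(eta, Wp, Ws):
    return Wp * (eta[0] - 1.0) ** 2 + Ws * (eta[1] - 2.0) ** 2

initial_eta = np.array([0.0, 0.0])
res = minimize(cost, initial_eta, args=(10.0, 1.0), method='Nelder-Mead')

assert res.success
assert np.allclose(res.x, [1.0, 2.0], atol=1e-3)
```

Note that Matlab's fminsearch(fun, x0) itself takes only the function handle and the initial guess; extra fixed arguments are usually bound with an anonymous function, e.g. fminsearch(@(eta) costfunction(eta, ...), initialeta).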
Figure 3.3: Results for 31 taps with θp = 0.56π and θs = 0.63π. The relative weights were 10:1. (a)
Frequency response and (b) Impulse response sequence.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% lsqlowpass
% Inputs: N       number (odd) of taps
%       : fp,fs   pass,stop band freq: frac of pi
%       : Wp,Ws   weights in pass,stop
%       : nsteps, gives integration step
%                 size as pi/nsteps
% DWM 26/9/11
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function lsqlowpass(N,fp,fs,Wp,Ws,nsteps)

% Should check  (1) N is odd
% Skip for now  (2) fp,fs are in proper range
%               (3) weights are >= 0
thetap = pi*fp;   % cut off
thetas = pi*fs;   % cut off

% theta vectors over range and bands
dtheta = pi/nsteps;
theta = [0:dtheta:pi]';           % theta vector for entire range
thetapass = [0:dtheta:thetap]';   % for pass band
thetastop = [thetas:dtheta:pi]';  % for stop band

% desired values
gdpass = ones(length(thetapass),1);   % desired g in pass
gdstop = zeros(length(thetastop),1);  % ditto in stop

% base the initial guess for the coefficients on
% truncated Fourier with cut off midway into transition band
initialeta = etacoeffs(N,0.5*(thetap+thetas));  % get the eta values

% Optimize the parameters by minimizing the cost function
% There are other matlab minimizers that could be used!!
finaleta = fminsearch(@costfunction, **FIXME** );

% Work out the response values over all the range
g = gactual(finaleta,theta);

% Plotting stuff
hold off;
set(gca, 'FontSize', 18);
plot(thetapass, gdpass, '-','LineWidth',4,'Color',[0 0.7 0]); hold on;
plot(thetastop, gdstop, '-','LineWidth',4,'Color',[0.7 0 0]); hold on;
plot(theta, g, '-','LineWidth',2,'Color',[0 0 0.7]); hold on;
axis([0,3.2,-0.1,1.1]);
xlabel('\theta'); ylabel('Filter response G(\theta)');

% Save plot as colour postscript
fracp = floor(100*(fp-floor(fp)));  % two dec places
fracs = floor(100*(fs-floor(fs)));  % two dec places
fr_file = sprintf('lsqlp_fr_%d_%d-%d.eps',N,fracp,fracs);
fprintf('Frequency response saved to %s\n',fr_file);
print('-depsc', fr_file);
pause;

% Save plot of impulse response
ir_fileprefix = sprintf('lsqlp_ir_%d_%d-%d',N,fracp,fracs);
lollipop(ir_fileprefix,finaleta);
../Matlab/LeastSq/lsqlowpass.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% costfunction
% Inputs  eta: a set of parameters
%         thetapass: vector of theta values for pass band
%         gdpass: vector of corresponding desired values
%         Wp: scalar weight for all values in pass band
%         Then ditto for the stop band
% DWM 1/11/11
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function cost = costfunction(eta,thetapass,gdpass,Wp,thetastop,gdstop,Ws)

% Step 1.1
% Use the eta values to calculate the vector of actual response
% in the pass band using the vector of theta values
g_pass = ** FIXME **;

% Step 1.2
% Work out the vector of squares-of-the-error at each theta value
errsq_pass = **FIXME**;

% Step 1.3
% Integrate using (for example!) the trapezium rule
integral_pass = trapz( **FIXME** );

% Step 2.1, 2.2 and 2.3
% Do exactly the same, but now for the stop band
g_stop = ** FIXME **;
errsq_stop = **FIXME**;
integral_stop = **FIXME**;

% Step 3
% Weighted sum of integrals is the cost
cost = Wp*integral_pass + Ws*integral_stop;
../Matlab/LeastSq/costfunction.m
4 Optimal design using linear programming

You are asked to optimize the design of the filter using linear programming. This
allows you to specify a cost function to be minimized (one linear in the unknowns)
along with a set of linear constraints to be satisfied.
4.1 Linear programming in general
The linear programming problem is usually written as

    Minimize   f(x) = cᵀx   the scalar cost function, linear in x,
    over       x            the variables,
    Such that  Ax ≤ b       the constraints.    (4.1)
Here, c is a known vector of numbers, A is a known matrix, b is a known vector, and
x is the vector of unknown variables.
4.1.1 Example
St Anne’s makes large puddings to be served throughout lunch. The chef has 15
units of ingredient (a) and 18 units of ingredient (b) in store. To make a unit size
pudding (1) requires 5 units of ingredient (a), and 3 units of ingredient (b). Pudding
(2) requires 3 of (a) and 6 of (b). The return on sale of pudding (1) is £20 and of
pudding (2) is £30. How should the ingredients be used to maximize returns?¹
Let x1 and x2 be the sizes of the puddings made. We want to maximize the return,
or equivalently minimize the negative profit. Our problem is to minimize
f(x) = (−20x1 − 30x2), subject to 5x1 + 3x2 ≤ 15 and 3x1 + 6x2 ≤ 18.
So for this problem

c = [−20, −30]ᵀ ,   A = [ 5 3 ; 3 6 ] ,   b = [15, 18]ᵀ .    (4.2)
To solve this, use the linprog() routine from Matlab’s optimization toolbox.
¹St John's, we hear, does not need to cost its puddings.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% pudding
% A linear programming problem
% DWM 26/9/11
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Set up
c = [-20 -30]';
A = [5 3; 3 6];
b = [15 18]';
% Solve
x = linprog(c,A,b)
../Matlab/LinProg/pudding.m
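If you prefer to sanity-check the pudding problem away from Matlab, the same LP can be solved with SciPy's linprog (a sketch; SciPy assumed available). Working the vertices by hand gives the optimum at x = (12/7, 15/7), a return of £690/7 ≈ £98.57:

```python
import numpy as np
from scipy.optimize import linprog

# Cross-check of the pudding LP. SciPy's linprog minimizes c@x subject to
# A_ub @ x <= b_ub, with x >= 0 by default (which suits pudding sizes).
c = np.array([-20.0, -30.0])
A = np.array([[5.0, 3.0], [3.0, 6.0]])
b = np.array([15.0, 18.0])

res = linprog(c, A_ub=A, b_ub=b)

assert res.status == 0                                 # solved to optimality
assert np.allclose(res.x, [12 / 7, 15 / 7], atol=1e-6)  # optimal pudding sizes
assert abs(res.fun + 690 / 7) < 1e-6                    # return of GBP 690/7
```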
4.2 Criteria
Unfortunately we must stop thinking about puddings and return to low-pass filters.
In the pass band we wish |G(θ)| ≈ 1 and in the stop band |G(θ)| ≈ 0.
One possible design criterion is to choose η as
η∗ = arg min_η max { max_{0≤θ≤θP} | |G(η, θ)| − 1 | ,  max_{θS≤θ≤π} |G(η, θ)| } .    (4.3)
This finds the maximum deviation from unity in the pass band and the maximum
deviation from zero in the stop band, and takes the larger of the two. It then varies
η to minimize that maximum. A slight modification would be to weight the deviations
differently between the pass band and stop band, making it more important to be
close to one than to zero, or vice versa.
We now formulate this task as a linear program. We will do this in steps, as it holds
a surprise.
4.3 Turning the criteria into a linear program
Let Gd(θ) be the desired value of the transfer function — unity in the pass band,
zero in the stop, and undefined in the transition region. Given a set of values h, or
equivalently a set η, define a weighted deviation or error as

ε(θ) = W(θ) [ G(η, θ) − Gd(θ) ] ,    (4.4)

where for the symmetric odd-N filter G(η, θ) = η1 + 2 ∑_{k=1}^{(N−1)/2} η_{k+1} cos kθ .
36 4. OPTIMAL DESIGN USING LINEAR PROGRAMMING
We want to find the smallest weighted (mod) error, such that all mod errors are
equal to or less than it. We can write the problem as

    Minimize   ε0                          the scalar cost function,
    over       η, ε0                       the variables,
    Such that  ε(θ) ≤ ε0 , ε(θ) ≥ −ε0     the constraints.    (4.5)
Using the expression for ε(θ), the constraints can be rewritten as

G(θ) − ε0/W(θ) ≤ Gd(θ)   and   −G(θ) − ε0/W(θ) ≤ −Gd(θ) ,    (4.6)

where any value of θ at which the weight W(θ) = 0 must be excluded.
Let L values of θ be selected, and denote G(θℓ) = Gℓ and so on. The first set of
constraints can be written

[ G1 ; . . . ; GL ] − ε0 [ 1/W1 ; . . . ; 1/WL ] ≤ [ Gd1 ; . . . ; GdL ] ,    (4.7)

or

g − ε0 m ≤ gd .    (4.8)
But

[ G1 ; . . . ; GL ] = Cη ,    (4.9)

where η = [ h0 ; . . . ; h(N−1)/2 ] and the ℓ-th row of the matrix C is
[ 1, 2 cos θℓ, 2 cos 2θℓ, . . . , 2 cos((N−1)/2)θℓ ].
Thus

Cη − ε0 m ≤ gd .    (4.10)

The other set of constraints is

−Cη − ε0 m ≤ −gd .    (4.11)

These can be combined to give

[ C , −m ; −C , −m ] [ η ; ε0 ] ≤ [ gd ; −gd ] .    (4.12)
Returning to the standard linear programming problem

    Minimize   f(x) = cᵀx   the scalar cost function,
    over       x            the variables,
    Such that  Ax ≤ b       the constraints,    (4.13)
we recognize that

A = [ C , −m ; −C , −m ] ;   b = [ gd ; −gd ] ;   c = [ 0_{(N+1)/2} ; 1 ] ;   x = [ η ; ε0 ] ,    (4.14)

where 0_{(N+1)/2} denotes a column of (N+1)/2 zeros, one per entry of η.
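To see the bookkeeping of Eq. 4.14 in action, here is a toy end-to-end sketch in Python/SciPy (SciPy assumed available; the band edges, grids and N = 5 are arbitrary illustrative choices, with W = 1 throughout so m is a vector of ones). Note a SciPy-specific trap: linprog defaults to x ≥ 0, but the η entries can be negative, so the bounds must be relaxed explicitly:

```python
import numpy as np
from scipy.optimize import linprog

# Toy minimax filter design via Eq. 4.14. Variables are x = [eta; eps0].
N = 5
kmax = (N + 1) // 2                            # number of eta entries
thetapass = np.linspace(0.0, 0.3 * np.pi, 40)  # illustrative band edges
thetastop = np.linspace(0.7 * np.pi, np.pi, 40)
theta = np.concatenate([thetapass, thetastop])
gd = np.concatenate([np.ones_like(thetapass), np.zeros_like(thetastop)])
m = np.ones_like(theta)                        # 1/W with W = 1 everywhere

# C matrix: row l is [1, 2cos(theta_l), 2cos(2 theta_l), ...]
C = np.column_stack([np.ones_like(theta)] +
                    [2.0 * np.cos(k * theta) for k in range(1, kmax)])

A = np.block([[C, -m[:, None]], [-C, -m[:, None]]])
b = np.concatenate([gd, -gd])
c = np.concatenate([np.zeros(kmax), [1.0]])    # cost is just eps0

# eta may be negative, so relax the default nonnegativity on those entries
res = linprog(c, A_ub=A, b_ub=b,
              bounds=[(None, None)] * kmax + [(0.0, None)])
eta, eps0 = res.x[:kmax], res.x[kmax]

assert res.status == 0
# at the optimum, the worst-case deviation over both bands equals eps0
assert abs(np.max(np.abs(C @ eta - gd)) - eps0) < 1e-6
```

The same assembly, with C from buildCmatrix() and m built from Wp and Ws, is what the FIXMEs in linproglowpass.m ask for.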
TASK 4
1. Utilizing various parts of earlier code, use linear programming to design an
optimal low pass filter with N taps, with N odd.
2. Generate code to show plots of (i) G(θ) vs θ, and (ii) log |G(θ)| in dB vs θ.
3. Generate a lollipop plot showing all the entries in h.
Figure 4.1: Results for 31 taps with θp = 0.56π and θs = 0.63π. The relative weights were 1:4. (a)
Frequency response and (b) Impulse response sequence.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% linproglowpass
% Inputs: N       number (odd) of taps
%       : fp,fs   pass,stop band freq: frac of pi
%       : Wp,Ws   weights in pass,stop
%       : nsteps, gives integration step
%                 size as pi/nsteps
% DWM 30/10/11
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function linproglowpass(N,fp,fs,Wp,Ws,nsteps)

% Should check  (1) N is odd
% Skip for now  (2) fp,fs are in proper range
%               (3) weights are >= 0

thetap = pi*fp;   % cut off
thetas = pi*fs;   % cut off

% theta COLUMN vectors over range and bands
dtheta = pi/nsteps;
theta = [0:dtheta:pi]';           % theta vector for entire range
thetapass = [0:dtheta:thetap]';   % for pass band
thetastop = [thetas:dtheta:pi]';  % for stop band

% desired values, to make column vector gd
gdpass = ones(length(thetapass),1);   % desired g in pass
gdstop = zeros(length(thetastop),1);  % ditto in stop
gd = ** FIXME **;                     % combined column vector

% inverse weight values, to make vector m
inversewpass = (1/Wp)*ones(length(thetapass),1); % in pass
inversewstop = (1/Ws)*ones(length(thetastop),1); % in stop
m = **FIXME**;

% Construct the matrix C (we did this earlier as part of gactual.m!)
% Remember to use all theta values in pass and stop bands
C = buildCmatrix(N, [thetapass;thetastop]);

% Now build the A, b and c for linprog
A = **FIXME**;
b = **FIXME**;
c = **FIXME**;

% Now use linprog!
x = linprog(c,A,b);

% Cream off the eta values, and eps0
kmax = (N+1)/2;
eta = x(1:kmax);
eps0 = x(kmax+1)

% Work out the response values over all the range
g = gactual(eta,theta);

% Plotting stuff
hold off;
set(gca, 'FontSize', 18);
plot(thetapass, gdpass, '-','LineWidth',4,'Color',[0 0.7 0]); hold on;
plot(thetastop, gdstop, '-','LineWidth',4,'Color',[0.7 0 0]); hold on;
plot(theta, g, '-','LineWidth',2,'Color',[0 0 0.7]); hold on;
axis([0,3.2,-0.1,1.1]);
xlabel('\theta'); ylabel('Filter response G(\theta)');

% Save plot as colour postscript
fracp = floor(100*(fp-floor(fp)));  % two dec places
fracs = floor(100*(fs-floor(fs)));  % two dec places
fr_file = sprintf('linprog_fr_%d_%d-%d.eps',N,fracp,fracs);
fprintf('Frequency response saved to %s\n',fr_file);
print('-depsc', fr_file);
pause;

% Save plot of impulse response
ir_fileprefix = sprintf('linprog_ir_%d_%d-%d',N,fracp,fracs);
lollipop(ir_fileprefix,eta);
../Matlab/LinProg/linproglowpass.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% buildCmatrix
% Inputs  N, number of taps
%         theta, COLUMN vector of theta values
% Outputs C, matrix as per notes
% DWM 26/9/11
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function C = buildCmatrix(N,theta)
% fancy matrix-vector way
n = length(theta);
C = ones(n,1);           % 1st column of matrix
for k=1:(N-1)/2
  C = [C, **FIXME**];    % Add another column to right of C
end
../Matlab/LinProg/buildCmatrix.m
5 Filter evaluation

You have designed a range of digital filters which are optimized under different
criteria. In Fig. 2.2 you saw how the filter taps are used to generate a filtered
output from the input.
Now you should find out if they actually work in practice!
This is the part where you really are on your own. Neither DWM nor the
demonstrators have tried this, nor will they. It forms the novel part of your report.
TASK 5
1. You will need matlab code to generate a long sequence of samples from the
function used for reconstruction. (Code in the Spectrum directory will be useful.)
Keep in mind that the sampling period is an important number.
2. Go back to each filter top-level function, and add code to save the h values into
a file. (Probably best to save all the h values, not just the one-sided η.)
3. Now, for each, generate an optimum filter with some fixed number of taps with
a cut off frequency, or pass and stop frequencies, that should cut out some of
the spikes in the power spectrum.
4. Write code to perform the operation implied by Fig. 2.2. You’ll need to generate
the samples, read in the h values from file, and perform a discrete convolution.
(You may find it helpful to read parts of the additional set of notes in the Notes
directory.)
5. Add code to plot out the power spectrum before and after filtering.
6. Make trials to find an acceptable number of taps.
7. Finally, go back and look at the complete equation for H in Eq. 2.7. What
happens to the phase if G() goes negative? How might you handle that in
principle and in practice?
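None of the steps above is prescribed in detail, but the overall loop (generate samples, convolve with the taps, compare spectra) can be sketched generically. The fragment below is Python/NumPy, with an invented two-tone test signal and a crude 9-tap moving average standing in for one of your designed filters; it shows only the shape of the test, not the project's actual signal or taps:

```python
import numpy as np

# Filter a two-tone signal with a 9-tap moving average (a crude low pass)
# and confirm via the FFT that the high-frequency spike is attenuated.
n = np.arange(4096)
x = np.sin(0.05 * np.pi * n) + np.sin(0.8 * np.pi * n)  # low tone + high tone
h = np.ones(9) / 9.0                                    # 9 equal taps

y = np.convolve(x, h, mode='same')                      # the Fig. 2.2 operation

X = np.abs(np.fft.rfft(x))
Y = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(n), d=1.0) * 2 * np.pi      # theta in rad/sample
hi = np.argmin(np.abs(freqs - 0.8 * np.pi))             # bin of the high tone
lo = np.argmin(np.abs(freqs - 0.05 * np.pi))            # bin of the low tone

# high tone strongly attenuated, low tone barely touched
assert Y[hi] < 0.2 * X[hi]
assert Y[lo] > 0.8 * X[lo]
```

Your own version would read the h values from file, use the true sampling period, and plot the two spectra rather than just comparing two bins.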
6 More possibilities

Here are two thoughts for further exploration.
6.1 Adding in temporal constraints using linear programming
A variety of constraints can be introduced using linear programming.
For example, suppose as before one seeks

h∗ = arg min_h max { max_{0≤θ≤θP} | |H(θ)| − 1 | ,  max_{θs≤θ≤π} |H(θ)| } ,    (6.1)

but now also subject to a maximum ripple in the temporal step response U(n) over
a given time window:

|U(n)| ≤ β ,   0 ≤ n ≤ NW ,    (6.2)
where the values 0 ≤ n ≤ NW cover the region where the step response is close to
zero.
You know that the step response in the continuous time domain is defined by the
convolution

U(t) = ∫_{−∞}^{∞} u(τ) h(t − τ) dτ = ∫_{0}^{∞} h(t − τ) dτ ,    (6.3)

and in the discrete case the analogous result is

U(n) = ∑_{k=0}^{∞} h[n − k] .    (6.4)
But for an FIR filter, h[n − k] = 0 for any k > n. Hence

U(n) = ∑_{k=0}^{n} h[n − k] ≡ ∑_{k=0}^{n} h[k] .    (6.5)
Note carefully that we are returning to the original indexing of h, so that h[0] and
h[N − 1] are the extreme entries in the impulse response and the centre is at
h[(N − 1)/2]. The step response U[n] is zero for any n < 0 and equals U[N − 1] for
any n > N − 1.
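Eq. 6.5 says the discrete step response is just a running sum of the taps, which is easy to confirm numerically (a Python/NumPy sketch with made-up symmetric taps, not your designed filter):

```python
import numpy as np

# The step response of an FIR filter is the cumulative sum of its taps:
# compare Eq. 6.5 against explicitly pushing a unit step through the filter.
h = np.array([-0.05, 0.1, 0.3, 0.4, 0.3, 0.1, -0.05])  # made-up symmetric taps
N = len(h)

U_cumsum = np.cumsum(h)                         # Eq. 6.5
U_conv = np.convolve(np.ones(3 * N), h)[:N]     # unit step through the filter

assert np.allclose(U_cumsum, U_conv)
assert np.isclose(U_cumsum[-1], h.sum())        # settles at the full tap sum
```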
Figure 6.1: The step response generated by summation. The window of width Nw considered here is
where the response is still close to zero.
To build the constraint we only need the η entries. The reason is that the following
expression for the step response is valid only for 0 ≤ n ≤ (N − 1)/2, but as
Nw ≤ (N − 1)/2 it is sufficient for our needs:

U(n) = ∑_{k=0}^{n} η[ (N + 1)/2 − k ] .    (6.6)
Hence, written out, the constraints on the step response are

η_{N′} ≤ β ;   −η_{N′} ≤ β
[η_{N′} + η_{N′−1}] ≤ β ;   −[η_{N′} + η_{N′−1}] ≤ β
  ⋮
[η_{N′} + η_{N′−1} + . . . + η_{N′−Nw}] ≤ β ;   −[η_{N′} + η_{N′−1} + . . . + η_{N′−Nw}] ≤ β ,    (6.7)

where N′ = (N + 1)/2. We can write these sets of constraints as
Tη ≤ β   and   −Tη ≤ β ,    (6.8)

where T is an (Nw + 1) × (N + 1)/2 matrix and β is an (Nw + 1) × 1 vector.
You would have to think about the form of T, and then consider how to install it into
the matrix A used in the linear program. (Also remember that x still contains another
variable!)
6.2 Window sinc reconstruction
Recall the note at the end of section 1, where a number of windowing methods were
introduced:

s(p) = w(p) sinc(p) ,    (6.9)

where w(p) has a particular form within the window |p| < M, and is zero outside.
For example:

    Window type               When |p| < M         Otherwise
    0. Not windowed at all    w(p) = 1             w(p) = 1
    1. Rectangular            w(p) = 1             0
    2. Linear                 w(p) = 1 − |p|/M     0
    3. Quadratic              w(p) = 1 − p²/M²     0
    4. Cosine                 w(p) = cos(πp/2M)    0
It may be interesting to implement a new reconstruction function which uses
windowed sinc interpolation, and to compare the outcomes using an error measure.
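As a starting point, the window functions in the table are simple to code and to sanity-check. A Python/NumPy sketch (the value M = 8 and the function name are arbitrary illustrative choices):

```python
import numpy as np

# Window types 1-4 from the table, applied for |p| < M and zero outside.
M = 8

def window(p, wtype):
    p = np.asarray(p, dtype=float)
    inside = np.abs(p) < M
    if wtype == 1:
        w = np.ones_like(p)                   # rectangular
    elif wtype == 2:
        w = 1.0 - np.abs(p) / M               # linear
    elif wtype == 3:
        w = 1.0 - p ** 2 / M ** 2             # quadratic
    elif wtype == 4:
        w = np.cos(np.pi * p / (2 * M))       # cosine
    return np.where(inside, w, 0.0)

for wtype in (1, 2, 3, 4):
    assert window(0.0, wtype) == 1.0          # unity at the centre
    assert window(M, wtype) == 0.0            # zero outside the window
for wtype in (2, 3, 4):
    assert abs(window(M - 1e-9, wtype)) < 1e-6  # tapers to zero at |p| = M
```

The window type and M then become parameters of your reconstruction function, exactly as suggested below.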
You might use the window type as a parameter to the top-level function, allowing you
to change the interpolation window conveniently. You might also use M as a parameter
to the function, find a suitably small value that provides acceptable results, and then
keep it fixed across the different types.
Your code must take advantage of the efficiency savings possible by ignoring samples
for which |t − kTs| ≥ MTs. You might measure the speed-up.