

EE 168 Handout #34

Introduction to Digital Image Processing March 13, 2012

HOMEWORK 8 – SOLUTIONS

Problem 1: Displaying Topography

The image representing the topography of the Big Island of Hawaii is displayed as a grayscale

image with the major features labeled in Figure 1-1. A shaded relief display is formed from the

elevation image as shown in Figure 1-2. Note that a shaded relief display is computed from the

differences between successive pixel pairs, and, as such, convolution can be utilized for this

computation. A filter of [1 -1] computes the required difference values, and the orientation of

the filter determines the direction from which light falls on the island (i.e. which slopes are

positive and which are negative). The direction of change of the filter values should match with

the direction of light. In particular, a 2-D filter with coefficients decreasing from left to right

represents illumination from the left (slopes increasing from the left will be positive and bright,

since convolution flips the mask) and a 2-D filter with coefficients decreasing from top to bottom

represents illumination from the top.

Following these ideas, we can form a shaded relief display using the following steps:

i) Choose the coefficients of the convolution filter. The coefficients should represent a

difference, so they should be 1’s and -1’s.

ii) Form a 2-D convolution filter using the correct orientation. Using the explanation

from above, the following filters are selected for the three illuminations, using Matlab

notation:

- From left: [1 -1; 1 -1]

- From top: [1 1; -1 -1]

- From top-left: [0 1 0; 1 0 -1; 0 -1 0]

iii) Use a linear transformation on the resultant convolution to get a pleasing view. Since

the standard deviation of the shaded relief difference image is inherently low (less

than 2.5 for the island values) we need to stretch it to achieve a pleasing contrast on

the display. Because we have negative slopes as well as positive slopes (with the

mean close to zero), we also need to shift the image upwards to be in the (0, 255)

range. In particular, the following transformation is used in this problem (where

std_island represents the standard deviation of the shaded relief difference values over the original image's non-zero island pixels (the water values do not enter into the computation), and the ratio 80/std_island is

between 30 and 45 for the three shaded relief images):

g(n, m) = (80 / std_island) * f(n, m) + 128
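As a minimal sketch of these three steps (a hedged summary mirroring the create_shaded_relief function listed with the code at the end of this handout; elev here names the elevation array, which the full script calls im):

% Sketch: shaded relief with illumination from the left.
f = [1 -1; 1 -1];                    % difference filter, values decreasing left to right
diff_im = conv2(elev, f, 'same');    % slope (difference) image
island = find(elev > 0);             % use island pixels only for the stretch statistics
s = std(diff_im(island));            % standard deviation of the island slopes
shaded = zeros(size(elev));
shaded(island) = (80/s)*diff_im(island) + 128;   % stretch and shift into the display range
shaded = min(max(shaded, 0), 255);   % clip to (0, 255); water stays black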


Figure 1-1: The original elevation image with major peaks labeled

Figure 1-2: The Shaded Relief Display (panels: Original Image; Illumination from the left; Illumination from the top; Illumination from the top-left)


Figure 1-3 shows a color version of the map. Note that shaded relief intensity values

computed through convolution can be used to represent brightness, while hue and saturation can

be represented as functions of the elevation. Thus, the color map contains both the elevation and

shaded relief information. The following equations are used to determine the hue, saturation,

and brightness values:

hue(n, m) = (255 - e(n, m)) / 510
saturation(n, m) = (255 - e(n, m)) / 400
brightness(n, m) = s(n, m) / 255

where e is the elevation image and s is the shaded relief image.
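A minimal sketch of this composition (e and s are the prose names above; the full Problem 1 script at the end of the handout uses the stretched elevation im2 and shaded3 instead):

hsv_im = zeros([size(e) 3]);
hsv_im(:,:,1) = (255 - e)/510;   % hue from elevation
hsv_im(:,:,2) = (255 - e)/400;   % saturation from elevation
hsv_im(:,:,3) = s/255;           % brightness from the shaded relief
rgb_im = hsv2rgb(hsv_im);        % convert to RGB for display with image()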

Figure 1-3: A Color Version of the Elevation Map


Problem 2: Contour Maps

We can also form a contour map using the same elevation data used in Problem 1. Contour maps are

widely used to represent 3-D data, especially the elevation at each point of a 2-D land surface, using only two dimensions. This is done with curves that connect points of equal elevation. A

contour map for the elevation data in Problem 1 is shown in Figure 2-1. Note that the approximate

height at a point p can be found using:

height_p = C_p × I

where C_p is the number of curves surrounding point p, and I is the additional height introduced by each curve. In this case, each contour line corresponds to an increment of I = 200 meters.
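For example, a point enclosed by five contour curves lies at a height of roughly C_p × I = 5 × 200 m = 1000 m.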

Figure 2-1: The Contour Map


Problem 3: Perspective Views

This problem explores how to generate perspective views of elevation data using shaded relief. The

original image and its shaded relief are formed using techniques similar to those of Problem 1. More

specifically, the same linear transformation is used, with illumination from the top (Figure 3-1).

We can create a perspective view of this image through the set of equations provided in lecture:

u = x·d / (y + d)
v = h + (z - h)·d / (y + d)

where u and v represent the pixel locations on the 2-D screen, x, y, and z specify the location of the

object point in the 3-D space and h and d represent the viewing height and distance, respectively.
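A minimal sketch of this forward mapping, under the same assumptions as the Problem 3 script at the end of the handout (d = 900, h = 450, elev holding the elevation and im_sr the shaded relief, both square; the loop runs from back to front so nearer points overwrite farther ones):

d = 900; h = 450;
persp = zeros(size(im_sr));
for y = size(im_sr,1):-1:1                 % back of the scene first
    for x = size(im_sr,2):-1:1
        z = elev(x, y);                    % indexed as in the full script (square image)
        u = round(x*d/(y + d));            % screen column
        v = round(h + (z - h)*d/(y + d));  % screen row
        if u >= 1 && v >= 1 && v <= size(persp,1) && u <= size(persp,2)
            persp(v, u) = im_sr(x, y);
        end
    end
end
persp = flipud(persp);                     % flip so that higher elevations appear up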

The perspective view can also be created using the pinhole camera algorithm, as given in the notes:

x' = -d·x / (y + y0)
z' = -d·z / (y + y0)

where the origin of the object image frame is y=y0 from the pinhole origin and the observation frame

is y=-d from the pinhole origin. These equations map any object point (x, y+y0, z) into the

corresponding (x’,z’) observation frame point. The two perspective methods are demonstrated in

Figure 3-2.
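A corresponding sketch for the pinhole mapping, with the d = 1000 and y0 = 1000 used in the script (the negated x' and z' serve as array indices because the projection lands in the negative quadrant):

d = 1000; y0 = 1000;
ph = zeros(size(im_sr));
for y = size(im_sr,1):-1:1
    for x = size(im_sr,2):-1:1
        xp = round(-d*x/(y + y0));           % x' on the observation plane
        zp = round(-d*elev(x, y)/(y + y0));  % z' for the surface height at (x, y)
        if -xp >= 1 && -zp >= 1 && -zp <= size(ph,1) && -xp <= size(ph,2)
            ph(-zp, -xp) = im_sr(x, y);
        end
    end
end
ph = flipud(ph);                             % flip so that up is up on the screen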

We can form a rotated perspective view by first rotating the elevation and shaded relief images by the same angle, and then applying the set of equations above to the rotated images. The resultant

perspective views are shown in Figure 3-3. Holes are filled using an iterative median filter method.

It is also possible to form a color version of the perspective view by using the color map idea

introduced in Problem 1. In particular, the following equations can be used to find the hue,

saturation, and brightness values of each pixel on the 2-D screen:

hue(u, v) = (255 - e_r(x, y)) / 510
saturation(u, v) = (255 - e_r(x, y)) / 400
brightness(u, v) = s_r(x, y) / 255

where e_r is the rotated elevation image, and s_r is the rotated shaded relief image. Note that in the

above set of equations, (x, y) is the location of the object point associated with the (u,v) locations on

the 2-D screen. Hence, these equations should be computed inside the (x,y) to (u,v) mapping loop in

the code. Note also that v corresponds to the row index of the perspective image, while u is the

column index. The results are shown in Figure 3-3.


Figure 3-1: The Shaded Relief Display

Figure 3-2: Perspective Views (standard and pinhole perspective; holes filled with an iterative median filter)

Figure 3-3: A Colored Rotated Perspective View (standard and pinhole, rotated by -40 degrees; holes filled with an iterative median filter)


Problem 4: Morphing Example

To morph the portrait of President Washington into that of President Lincoln, we need to

first pick out several good tie points between the images. These should include the corners of the

images as well as the eyes, nose, corners of the mouth, chin, ears, points around the hairline and

shoulders. A total of 43 tie points are used to generate the morph shown in Figure 4-1.

After the tie points are chosen and their coordinates generated with the Matlab ginput()

command, the displacement vectors between the x and y coordinates of the tie points are

calculated. Next, the function griddata() is used to interpolate the displacement in x and y at

each pixel location in order to warp the Washington image into the Lincoln image. The

intermediate images are calculated by sampling the original and final images and incrementally

mapping the samples to intermediate pixel positions (so that the original image moves towards

the final image and the final image moves towards the original), and the images are then

blended at each intermediate location by weighting the intensities by different amounts. See the

attached Matlab code for the details of the morph.
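A minimal sketch of the warp-and-blend step for one intermediate frame (q between 0 and 1; x1, y1, x2, y2 are the tie-point coordinates picked with ginput, and im1, im2 are the nrows-by-ncols source images, as in the attached code):

% Interpolate the tie-point displacements to every pixel location.
[xi, yi] = meshgrid(1:ncols, 1:nrows);
shiftx = griddata(x1, y1, x1 - x2, xi, yi, 'linear');
shifty = griddata(x1, y1, y1 - y2, xi, yi, 'linear');
shiftx(isnan(shiftx)) = 0;  shifty(isnan(shifty)) = 0;

% Sample each source image at the intermediate-frame coordinates and blend.
q = 0.5;                                            % halfway frame, for example
locx  = min(max(round(xi + shiftx*q),     1), ncols);
locy  = min(max(round(yi + shifty*q),     1), nrows);
locxx = min(max(round(xi - shiftx*(1-q)), 1), ncols);
locyy = min(max(round(yi - shifty*(1-q)), 1), nrows);
frame = (1-q)*im1((locx-1)*nrows + locy) + q*im2((locxx-1)*nrows + locyy);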

Figure 4-1: The Morph frames


MATLAB code:

Problem 1: Displaying Topography

% read the image into im.

ncols = 735;

nrows = 927;

fid = fopen('lab8prob1data');

im = fread(fid,[ncols nrows],'uint8');

im = im';

fclose(fid);

sig_inds = find(im >0);

mean_im = mean(mean(im(sig_inds)));

var_im=(mean((reshape(im(sig_inds),length(sig_inds),1)-mean_im).^2));

std_im=var_im^.5;

% stretch the image (island values only) - leave the water values at 0

% increase the contrast using the stdev of the non-zero (island) values

im2 = zeros(nrows, ncols);

im2(sig_inds) = (50/std_im).*(im(sig_inds) - mean_im) + 100;

im2 = reshape(im2, nrows, ncols);

im2(find(im2 < 0)) = 0;

im2(find(im2 > 255)) = 255;

% form shaded views as explained in the solutions. f1, f2 and f3

% are the left, top and top-left illumination filters, respectively

% create_shaded_relief.m computes the shaded relief and stretches it to be

% between 0 and 255 for the island values only

f1 = [1 -1; 1 -1];

shaded1 = create_shaded_relief(im, f1);

f2 = [1 1 ; -1 -1];

shaded2 = create_shaded_relief(im, f2);

f3 = [0 1 0;1 0 -1; 0 -1 0];

shaded3 = create_shaded_relief(im, f3);

% form a color map, representing the brightness from the shaded reliefs,

% and the hue and saturation from the elevation

im3 = zeros(nrows,ncols,3);

im3(:,:,1) = (255-im2)/510;

im3(:,:,2) = (255-im2)/400;

im3(:,:,3) = shaded3./255; %make brightness be between 0 and 1

im4 = hsv2rgb(im3);

% create a color map and display images

v = linspace(0,1,256)';

figure; colormap([v v v])

subplot(2,2,1)

image(im2); axis('image');

title('Original Image')

subplot(2,2,2)

image(shaded1); axis('image');

title('Illumination from the left');

subplot(2,2,3)

image(shaded2); axis('image');

title('Illumination from the top');

subplot(2,2,4)

image(shaded3); axis('image');

title('Illumination from the top-left');

figure;

image(im4); axis('image');

title('The Color Version of the Map')


figure; colormap([v v v]);

image(im2); axis('image');

title('Original Image with features labeled')

Problem 2: Contour Maps

% A contour map is formed using intervals specified by the vector v.

% im is the elevation data

f = fopen('lab8prob1data');

im = fread(f,[735 927],'uint8');

im = im';

v1 = linspace(0,1,256)';

% make contour intervals at 200 m;

% each increment in pixel value corresponds to 20 m in height

% => need contour intervals of 10 pixel values

v = 0:10:255;

figure; colormap([v1 v1 v1]);

%contour() draws LENGTH(v) contour lines at the values specified in vector v.

[c,h] = contour((im),v); axis('image');

%clabel(c,h);

title('The Contour Map')

set(gca,'YDir','reverse')

Problem 3: Perspective Views

% read the image into im

nrows = 512;

ncols = 512;

f = fopen('lab8prob3data');

im = fread(f,[ncols nrows],'uint8');

elev = im';

fclose(f);

v1 = linspace(0,1,256)';

% Shaded Relief – illuminated from the top

f= [1 1; -1 -1];

im_sr = create_shaded_relief(elev, f);

v1 = linspace(0,1,256)';

figure; colormap([v1 v1 v1]);

image(im_sr); axis('image');

title('The Shaded Relief');

%smooth shaded relief – not necessary

mask_2 = (1/4)*ones(2,2); %Averaging filter

im_sr = conv2(im_sr, mask_2, 'same');

% Form a perspective view image using d=900 and h=450

perspective = zeros(512,512);

d = 900;

h = 450;

[m, n] = size(im_sr);

for y=m:-1:1, %start at back

for x=n:-1:1,

z = elev(x,y);

u = round(x*d/(y+d));

v = round(h + (z-h)*d/(y+d));

if (u>0 & v>0),

if(u<=512 & v<=512),

perspective(v,u) = im_sr(x,y);


%one way to fill the small holes is to map them to a 2x2 output area

%perspective(v:v+1,u:u+1) = repmat(im_sr(x,y),2,2);

end

end

end

end

perspective = flipud(perspective);

%also compute perspective with pinhole algorithm

d=1000;

y0=1000;

[m, n] = size(im_sr);

ph_perspective = zeros(m,n);

for y=m:-1:1,

for x=n:-1:1,

x_hat=round(-d/(y+y0)*x);%+m+1;

z_hat=round(-d/(y+y0)*elev(x,y));%+(m);

if (-x_hat>0 & -z_hat>0),

if(-x_hat<=512 & -z_hat<=512),

ph_perspective(-z_hat,-x_hat)=im_sr(x,y);

%one way to fill the small holes is to map them to a 2x2 output area

%ph_perspective(-z_hat:-z_hat+1,-x_hat:-x_hat+1)=repmat(im_sr(x,y),2,2);

end

end

end

end

ph_perspective = flipud(ph_perspective);

% We can also view the rotated perspective

rot_angle = -40;

elev_rot = imrotate(elev,rot_angle, 'bilinear');

shade_rot = imrotate(im_sr,rot_angle, 'bilinear');

elev_rot(find(elev_rot < 0)) = 0;

elev_rot(find(elev_rot > 255)) = 255;

shade_rot(find(shade_rot < 0)) = 0;

shade_rot(find(shade_rot > 255)) = 255;

d = 900;

h = 450;

%[m,n] = size(shade_rot);

[m,n] = size(perspective); %this may not work for some rotation angles

perspective_rot = zeros(m,n);

perspective_rot_color = zeros(m,n,3);

for x=n:-1:1,

for y=m:-1:1,

z = elev_rot(x,y);

u = round(x*d/(y+d));

v = round(h + (z-h)*d/(y+d));

if (u>0 & v>0),

if(u<=m & v<=n),

perspective_rot(v,u) = shade_rot(x,y);

% Color is added using the approach in Problem 1

perspective_rot_color(v,u,1) = (255-elev_rot(x,y))/510;

perspective_rot_color(v,u,2) = (255-elev_rot(x,y))/400;

perspective_rot_color(v,u,3) = (1/255)*shade_rot(x,y);

end

end

end

end

perspective_rot = flipud(perspective_rot);

perspective_rot_colored = hsv2rgb(perspective_rot_color);

perspective_rot_colored(:,:,1) = flipud(perspective_rot_colored(:,:,1));

perspective_rot_colored(:,:,2) = flipud(perspective_rot_colored(:,:,2));

perspective_rot_colored(:,:,3) = flipud(perspective_rot_colored(:,:,3));


%pinhole camera rotation

d=1000;

y0=1000;

rot_angle = -40;

elev_rot = imrotate(elev,rot_angle,'bilinear');

shade_rot = imrotate(im_sr,rot_angle,'bilinear');

elev_rot(find(elev_rot < 0)) = 0;

elev_rot(find(elev_rot > 255)) = 255;

shade_rot(find(shade_rot < 0)) = 0;

shade_rot(find(shade_rot > 255)) = 255;

%[m,n] = size(shade_rot);

[m,n] = size(ph_perspective); %this may not work for some rotation angles

ph_perspective_rot = zeros(m,n);

ph_perspective_rot_color = zeros(m,n,3);

for y=m:-1:1

for x=n:-1:1

x_hat=round(-d/(y+y0)*x);%+m+1;

z_hat=round(-d/(y+y0)*elev_rot(x,y));%+(m);

if (-x_hat>0 & -z_hat>0),

if(-x_hat<=m & -z_hat<=n),

ph_perspective_rot(-z_hat,-x_hat)=shade_rot(x,y);

ph_perspective_rot_color(-z_hat,-x_hat,1) = (255-elev_rot(x,y))/510;

ph_perspective_rot_color(-z_hat,-x_hat,2) = (255-elev_rot(x,y))/400;

ph_perspective_rot_color(-z_hat,-x_hat,3) = (1/255)*shade_rot(x,y);

%ph_perspective(-z_hat:-z_hat+1,-x_hat:-x_hat+1)=repmat(im_sr(x,y),2,2);

end

end

end

end

ph_perspective_rot = flipud(ph_perspective_rot);

ph_perspective_rot_colored = hsv2rgb(ph_perspective_rot_color);

ph_perspective_rot_colored(:,:,1) = flipud(ph_perspective_rot_colored(:,:,1));

ph_perspective_rot_colored(:,:,2) = flipud(ph_perspective_rot_colored(:,:,2));

ph_perspective_rot_colored(:,:,3) = flipud(ph_perspective_rot_colored(:,:,3));

%fill in holes with an iterative median filter

persp_medfilt = fill_holes_med(perspective,3,2,11);

ph_persp_medfilt = fill_holes_med(ph_perspective,3,2,11);

persp_rot_medfilt = fill_holes_med(perspective_rot,3,2,11);

ph_persp_rot_medfilt = fill_holes_med(ph_perspective_rot,3,2,11);

persp_rot_medfilt_col = fill_holes_med(perspective_rot_colored,3,2,11);

ph_persp_rot_medfilt_col = fill_holes_med(ph_perspective_rot_colored,3,2,11);

% display

figure; colormap([v1 v1 v1]); %axis('image')

subplot(2,2,1); image(persp_medfilt);

title(sprintf('The Perspective View \n (holes filled with iterative median filter)'))

ylimits = ylim;

set(gca,'DataAspectRatio',[1 1 1], 'PlotBoxAspectRatio',[1 1 1],'YLim',[200 ylimits(1,2)])

subplot(2,2,2); image(ph_persp_medfilt); %axis('image')

title(sprintf('The Pinhole Perspective View \n (holes filled with iterative median filter)'))

ylimits = ylim;

set(gca,'DataAspectRatio',[1 1 1], 'PlotBoxAspectRatio',[1 1 1],'YLim',[200 ylimits(1,2)])

figure; colormap([v1 v1 v1]); %axis('image')

subplot(2,2,1); image(persp_rot_medfilt);

title(sprintf('The Rotated Perspective View \n (Rotated by %d degrees) \n (holes filled with iterative median filter)', rot_angle))

ylimits = ylim;

set(gca,'DataAspectRatio',[1 1 1], 'PlotBoxAspectRatio',[1 1 1],'YLim',[200 ylimits(1,2)])


subplot(2,2,2); image(ph_persp_rot_medfilt); %axis('image')

title(sprintf('The Rotated Pinhole Perspective View \n (Rotated by %d degrees) \n (holes filled with iterative median filter)', rot_angle))

ylimits = ylim;

set(gca,'DataAspectRatio',[1 1 1], 'PlotBoxAspectRatio',[1 1 1],'YLim',[200 ylimits(1,2)])

figure; colormap([v1 v1 v1]); %axis('image')

subplot(2,2,1); image(persp_rot_medfilt_col);

title(sprintf('The Colored Rotated Perspective View \n (Rotated by %d degrees) \n (holes filled with iterative median filter)', rot_angle))

ylimits = ylim;

set(gca,'DataAspectRatio',[1 1 1], 'PlotBoxAspectRatio',[1 1 1],'YLim',[200 ylimits(1,2)])

subplot(2,2,2); image(ph_persp_rot_medfilt_col); %axis('image')

title(sprintf('The Colored Rotated Pinhole Perspective View \n (Rotated by %d degrees) \n (holes filled with iterative median filter)', rot_angle))

ylimits = ylim;

set(gca,'DataAspectRatio',[1 1 1], 'PlotBoxAspectRatio',[1 1 1],'YLim',[200 ylimits(1,2)])

Problem 4: Morphing Example

% read the image into im

nrows = 256;

ncols = 180;

f = fopen('lab8prob4data1');

im1 = fread(f,[ncols nrows],'uint8');

im1 = im1';

fclose(f);

f = fopen('lab8prob4data2');

im2 = fread(f,[ncols nrows],'uint8');

im2 = im2';

fclose(f);

v1 = linspace(0,1,256)';

% figure; colormap([v1 v1 v1])

% image(im1); axis('image');

% title('Initial Image');

% figure; colormap([v1 v1 v1])

% image(im2); axis('image');

% title('Final Image');

%find good tie points and store them in tie_points.mat

%[x1, y1] = ginput

%[x2, y2] = ginput

%save tie_points x1 y1 x2 y2

load tie_points

%compute the displacement vectors for each tie point

x_disp = x1-x2;

y_disp = y1-y2;

%generate resampling grid

x = linspace(1,ncols,ncols);

y = linspace(1, nrows, nrows);

[xi, yi] = meshgrid(x,y);

% generate the x and y displacement matrices for each pixel location

shiftx = griddata(x1, y1, x_disp, xi, yi, 'linear', {'QJ'});

shifty = griddata(x1, y1, y_disp, xi, yi, 'linear', {'QJ'});

%replace 'NaN's

shiftx(find(isnan(shiftx)==1))=0;

shifty(find(isnan(shifty)==1))=0;


%verify that the shift matrices look correct

%figure; colormap(gray); imagesc(shiftx)

%figure; colormap(gray); imagesc(shifty)

%morphing code

figure; colormap(gray);

count=1;

for frame_ind = 1:11,

q = (frame_ind-1)/10;

%get the new locations of the pixels for the intermediate images

%move first image towards second and second towards first

locx = round(xi+shiftx*q); %sample initial image at the intermediate frame coordinates

locy = round(yi+shifty*q);

locxx = round(xi-shiftx*(1-q)); %sample final image at the intermediate frame coordinates

locyy = round(yi-shifty*(1-q));

%make sure all pixels stay within array

locx = clip(locx, 1, ncols);

locy = clip(locy, 1, nrows);

locxx = clip(locxx, 1, ncols);

locyy = clip(locyy, 1, nrows);

%now map pixels to their new positions and blend

im_q = (1-q)*im1((locx(:)-1)*nrows + locy(:)) + q*im2((locxx(:)-1)*nrows + locyy(:));

im_q = reshape(im_q, nrows, ncols);

%subplot(2,6,count); imagesc(im_q); axis('image'); axis off

%count = count+1;

imagesc(im_q); axis('image');

M(frame_ind) = getframe;

end

% Play movie once, save it as avi file

movie(M);

movie2avi(M,'M.avi');

% How to load and play an avi file

Loaded_M = aviread('M.avi');

movie(Loaded_M);

Function: Create a Stretched Shaded Relief Image

function shaded_relief = create_shaded_relief(im, filter)

%create a shaded relief image and stretch it by considering only the island values

%assume that all island values are nonzero (greater than sea-level)

%leave the water values at zero

nrows = size(im,1);

ncols = size(im,2);

im_filtered = conv2(im,filter,'same');

sig_inds = find(im >0);

mean_im = mean(mean(im_filtered(sig_inds)));

var_im=(mean((reshape(im_filtered(sig_inds),length(sig_inds),1)-mean_im).^2));

std_im=var_im^.5;

% stretch the image - increase the contrast of the non-zero (island) values

% and add 128 (the maximum slope value) to shift slopes greater than zero

% stretch only the island values - leave the water as black

shaded_relief = zeros(nrows, ncols);

shaded_relief(sig_inds) = (80/std_im).*im_filtered(sig_inds) + 128;

shaded_relief = reshape(shaded_relief, nrows, ncols);

%scale shaded relief image to be between 0 and 255

shaded_relief(find(shaded_relief < 0)) = 0;

shaded_relief(find(shaded_relief > 255)) = 255;


Function: Filling Holes with Iterative Median Filter

function im_filled = fill_holes_med(im, med_start, med_step, med_stop)

%fill in the holes with an iterative median filter

% The median filter is applied only to the holes (where im == 0)

% The filter size starts small to keep the small holes as accurate as possible

im_filled = im;

for med_len = med_start:med_step:med_stop;

med_size = med_len*med_len;

med_offset = floor(med_len/2);

for i=1+med_offset:size(im_filled,1)-med_offset,

for j=1+med_offset:size(im_filled,2)-med_offset,

if (im(i,j)==0),

im_filled(i,j) = median(reshape(im_filled(i-med_offset:i+med_offset, j-med_offset:j+med_offset),1,med_size));

end

end

end

end