Computational Cameras, CS766. Michael Correll and Sajika Gallege

Depth and Image from a Single Image


Overview
• Drawbacks of the current camera model:
  – Not everything can be in focus
  – Objects in motion are blurred
  – Once you've set a depth of field, you can't change it
• So let's use computational techniques to solve these problems.
Pinhole Cameras
SLR Camera
• Movable lens
• Variable aperture
• Fixed sensor
• Sensor parallel to the lens
Flexible Depth of Field
• Hajime Nagahara, Osaka University
• Sujit Kuthirummal, Columbia University
• Changyin Zhou, Columbia University
• Shree K. Nayar, Columbia University
Depth of Field (DOF)
• The range of scene depths that appear focused in an image
• DOF can be increased by making the aperture smaller
• Reduces the amount of light received by the detector, resulting in greater image noise (Lower SNR)
• The DOF vs. SNR trade-off is a long-standing limitation of imaging
Depth of Field (DOF)
Changing the aperture size affects depth of field. A smaller aperture increases the range in which the object is approximately in focus
Aperture and DOF
New Approach
• Varying position and/or orientation of the image detector during the integration time of a photograph
• Focal plane is swept through a volume of the scene causing all points within it to come into and go out of focus, while the detector collects photons
Flexible DOF
Flexible DOF
• Extended Depth of Field: Image detector is moved at uniform speed during image integration
• Discontinuous Depth of Field: Image detector is moved at non-uniform speed during image integration
• Tilted Depth of Field: Emulate a tilted image detector using a rolling shutter.
Shutter
• Global Shutter: all pixels are exposed simultaneously and for the same duration
• Rolling Shutter: different rows are exposed at different time intervals but for the same duration
Flexible DOF
Presentation Notes
Principle: We propose to translate the detector along the optical axis during image integration. Consequently, while the detector is collecting photons for a photograph, a large range of scene depths come into and go out of focus. We demonstrate that by controlling how we translate the detector, we can manipulate the depth of field of the imaging system.
Flexible DOF
(a) A scene point M, at a distance u from the lens, is imaged in perfect focus by a detector at a distance v from the lens. If the detector is shifted to a distance p from the lens, M is imaged as a blurred circle with diameter b centered around m
(b) Our flexible DOF camera translates the detector along the optical axis during the integration time of an image. By controlling the starting position, speed, and acceleration of the detector, we can manipulate the DOF in powerful ways
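To make the geometry in (a) concrete, here is a minimal numerical sketch (not from the slides; the thin-lens model and the specific focal length, aperture, and depths are illustrative assumptions) of how the blur-circle diameter b grows as the detector position p moves away from the in-focus distance v.

```python
import numpy as np

def in_focus_distance(f, u):
    """Thin-lens equation: detector distance v at which a point at depth u is in focus."""
    return 1.0 / (1.0 / f - 1.0 / u)

def blur_circle_diameter(f, u, p, aperture):
    """Diameter b of the blur circle for a scene point at depth u when the detector
    sits at distance p from the lens (similar triangles through the aperture)."""
    v = in_focus_distance(f, u)
    return aperture * abs(p - v) / v

# Illustrative numbers (assumptions): 25 mm f/1.4 lens, detector fixed at the
# in-focus position for a 1 m object; see how other depths blur.
f, aperture = 25e-3, 25e-3 / 1.4
p = in_focus_distance(f, 1.0)
for u in (0.5, 1.0, 2.0, 5.0):
    b = blur_circle_diameter(f, u, p, aperture)
    print(f"depth {u:3.1f} m -> blur circle {1e6 * b:6.1f} um")
```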
Flexible DOF
Point spread function (PSF)
• Ideal pillbox function: P(r) = (4 / (π b²)) · Π(r / b)
  – r: distance of an image point from the center m of the blur circle
  – Π(x): rectangle function, which has value 1 if |x| < 1/2 and 0 otherwise
• Gaussian function: the pillbox is often approximated by a 2D Gaussian whose standard deviation is proportional to the blur-circle diameter b
• Integrated PSF (IPSF): the time average of the instantaneous PSF over the exposure, as the detector translates
Extended Depth of Field (EDOF)
Translating a detector with a global shutter at a constant speed during image integration
• Depth invariance of the IPSF
• Image with extended DOF
• Image with high SNR
Extended Depth of Field
Presentation Notes
Detector Motion for Extended DOF: To capture a scene with an extended depth of field, while using a large aperture for good SNR, we propose to translate a detector with a global shutter at a constant speed during image integration. In a captured image, the blur kernel is almost the same for all scene depths and image locations. Applying deconvolution with a single blur kernel gives a sharp, all-in-focus image.
Depth Invariance of IPSF
• Detector translating along the optical axis with constant speed s
• With constant-speed translation, the integrated PSF (the time average of the instantaneous pillbox PSF over the exposure) is nearly invariant to scene depth.
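A minimal numerical sketch of this invariance (illustrative thin-lens parameters assumed, not the authors' derivation): compare the encircled energy of the swept pillbox blur for a near and a far scene point.

```python
import numpy as np

def encircled_energy(R, u, f=25e-3, aperture=25e-3 / 1.4,
                     p0=25.2e-3, sT=1.0e-3, steps=2000):
    """Fraction of a point source's energy falling within radius R on the sensor,
    averaged over a constant-speed detector sweep from p0 to p0 + sT.
    Uses the pillbox model: a disc of diameter b = aperture * |p - v| / v."""
    v = 1.0 / (1.0 / f - 1.0 / u)                   # thin-lens in-focus distance for depth u
    p = np.linspace(p0, p0 + sT, steps)
    b = aperture * np.abs(p - v) / v                # instantaneous blur-circle diameter
    frac = np.minimum(1.0, (2.0 * R / np.maximum(b, 1e-9)) ** 2)
    return frac.mean()

# The swept (integrated) blur is approximately the same for near and far scene points,
# as long as both in-focus detector positions lie inside the sweep (edge effects otherwise).
for R in (5e-6, 20e-6, 50e-6, 100e-6):
    near = encircled_energy(R, u=0.7)
    far = encircled_energy(R, u=1.5)
    print(f"R = {1e6 * R:5.0f} um   near-depth energy {near:.3f}   far-depth energy {far:.3f}")
```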
Computing EDOF Images using Deconvolution
• The EDOF camera's IPSF is invariant to scene depth and image location
• Deconvolve an image with a single IPSF to get an image with greater DOF
• Richardson-Lucy deconvolution
• Wiener deconvolution
• Dabov et al.: combines Wiener deconvolution and block-based denoising (BM3D)
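As a concrete reference for the Wiener option above, here is a minimal frequency-domain Wiener deconvolution sketch (a sketch only, not the authors' implementation; the constant noise-to-signal ratio K and the synthetic disc PSF are assumptions):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, K=1e-2):
    """Frequency-domain Wiener deconvolution.
    `psf` is an origin-centred point spread function of the same size as `blurred`;
    K is a constant noise-to-signal ratio (an assumed tuning parameter)."""
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + K)))

# Usage: blur a random test image with a disc (pillbox) PSF, add noise, restore.
rng = np.random.default_rng(0)
sharp = rng.random((128, 128))
coords = np.fft.fftfreq(128) * 128            # pixel coordinates with the origin at index 0
yy, xx = np.meshgrid(coords, coords, indexing="ij")
psf = (xx ** 2 + yy ** 2 <= 5 ** 2).astype(float)
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf)))
blurred += 0.01 * rng.standard_normal(blurred.shape)
restored = wiener_deconvolve(blurred, psf)
print("RMSE blurred :", np.sqrt(np.mean((blurred - sharp) ** 2)))
print("RMSE restored:", np.sqrt(np.mean((restored - sharp) ** 2)))
```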
EDOF Samples
Captured Image (f/1.4, T=0.36sec) Computed EDOF Image
Normal Camera(f/1.4, T=0.36sec, Near Focus) Normal Camera (f/8, T=0.36sec, Near Focus)
EDOF Samples
Captured Image (f/1.4, T=0.36sec) Computed EDOF Image
Normal Camera(f/1.4, T=0.36sec, Near Focus) Normal Camera (f/8, T=0.36sec, Near Focus)
EDOF Samples
Captured Image (f/1.4, T=0.36sec) Computed EDOF Image
Normal Camera(f/1.4, T=0.36sec, Near Focus) Normal Camera (f/8, T=0.36sec, Near Focus)
Discontinuous Depth of Field
• Useful for eliminating obstacles
• First focuses on the foreground for a part of the integration time
• Then moves quickly to another location to focus on the backdrop for the remaining integration time
Discontinuous DOF
Presentation Notes
Detector Motion for Discontinuous DOF: To capture a scene with a discontinuous depth of field, we propose to translate a detector with a global shutter in a non-linear fashion during image integration. The detector motion shown in the animation enables us to capture two disconnected scene regions with sharpness while objects in between are severely blurred.
Discontinuous DOF Samples
Vanishing Wire Mesh: In this scene we have a toy cow and a
toy hen in front of a wire mesh, behind which there is a scenic
background. We can capture this scene so that we see the toys
and the background, but the wire mesh is so defocused that it
vanishes from the image.
Tilted Depth of Field
• View cameras can be made to focus on tilted scene planes by adjusting the orientation of the lens with respect to the detector
• Can be emulated by translating the detector at uniform speed with a rolling electronic shutter
Tilted Depth of Field
• Detector translated at uniform speed: s
• Exposure time: T
• Tilt angle between the lens plane and the emulated detector plane
• Angle between the scene plane and the focal plane
• Height of detector: H
• Scheimpflug condition: the lens plane, the (emulated, tilted) detector plane, and the plane of focus intersect in a common line
Tilted DOF
Presentation Notes
Detector Motion for Tilted DOF: To capture a scene with a tilted depth of field, we propose to translate a detector with a rolling shutter at a constant speed during image integration. This emulates a tilted image detector, and as a result we get a tilted depth of field. By varying the translation speed, we can control the tilt.
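A back-of-the-envelope sketch (not from the slides) of why this emulates a tilted detector, assuming the rolling shutter exposes row y of a detector of height H at time (y/H)T while the detector translates at speed s from a starting position p_0 (p_0, y, and the tilt angle θ are symbols introduced here):

```latex
% Row y is exposed at t(y) = (y/H) T while the detector sits at p(t) = p_0 + s t,
% so the effectively exposed surface is the plane
\[
  p(y) \;=\; p_0 + sT\,\frac{y}{H},
  \qquad
  \tan\theta \;=\; \frac{sT}{H},
\]
% i.e. a detector tilted by the angle \theta. Varying the translation speed s
% therefore controls the tilt, consistent with the notes above.
```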
Tilted DOF Samples
In this example, the table-top with the newspaper, mug, and keys
is tilted with the lens plane. We set the detector translation in
order to realize a tilted depth of field that is aligned with the table
top. The entire newspaper is now in focus. The bottom of the mug
is focused, but the top is not, indicating that the depth of field is
aligned with the table top
More Extensions
• Non-Planar DOF
• Extended DOF using an SLR's Focus Ring
• Extended DOF Video
Presentation Notes
Detector Motion for Non-Planar DOF: To capture a scene with a non-planar depth of field, we propose to translate a detector with a rolling shutter in a non-linear fashion. This emulates a curved image detector, and as a result we get a curved/non-planar depth of field.
Non Planar DOF sample
Image from Normal Camera (f/1.4) vs. Image from Our Camera (f/1.4)
Presentation Notes
In this example, the crayons are arranged on a semi-circular arc, while the price tag is placed at the same depth as the nearest crayons. We capture this scene with a curved (non-planar) depth of field that is aligned with the crayons, so that all the crayons are in focus, while the price tag is defocused.
Extended DOF using SLR's Focus Ring
Uniformly rotate the focus ring of a SLR camera lens during image integration
EDOF Samples
Captured Image (f/1.4, T=0.6sec) Computed EDOF Image
Normal Camera(f/1.4, T=0.36sec, Near Focus) Normal Camera (f/8, T=0.6sec, Near Focus)
Presentation Notes
In this example, we demonstrate that by uniformly rotating an SLR camera lens's focus ring during image integration, we can capture scenes with a large depth of field while using a large aperture to ensure high SNR.
Presentation Notes
To capture extended depth of field video, we propose to move the detector at a constant speed, forward one frame, back the next and so on. This is possible because the blur kernel is invariant to the direction of motion.
Image and Depth of Field from a Conventional Camera with a Coded Aperture
• Anat Levin
• Rob Fergus
• Fredo Durand
• William T. Freeman
Massachusetts Institute of Technology, Computer Science and Artificial Intelligence Laboratory
Image and Depth of Field from a Conventional Camera with a Coded Aperture
Image and Depth of Field from a Conventional Camera with a Coded Aperture
Motivation
• Blur from being out of focus is equivalent to convolution with a blur kernel: I_observed = f * I_sharp
• Different scene depths correspond to different scales of the blur kernel: for an object at depth d = k, I_observed = f_k * I_sharp
• If we can solve for the blur kernel, we can recover both the deblurred image and the 3D scene depth.
Single input image:
Presentation Notes
We also undo the defocus blur and recover an all focused image.
Problems
• Blur kernel: could be any of several scales; the observed image could be explained by different kernel / sharp-image pairs
Image Priors
• Assume gradients (and noise) are distributed as a zero-mean Gaussian.
  + Can use least squares to deconvolve
  – Not really how natural images behave (gradient distributions are usually heavy-tailed)
• Sparse priors can be used instead, but at the cost of speed (no O(n) algorithms)
Solutions
Finding the Right Scale
• As the scale of the filter changes, pattern of 0’s change.
• If there are high frequencies where the filter has a 0, we’re probably at the wrong scale.
• (Actually need a maximum-likelihood criterion for this, to account for high-frequency noise)
[Figure: deconvolution results at the correct scale, a smaller scale, and a larger scale]
Presentation Notes
So why does a coded aperture actually help us estimate depth? We captured the same image with both a conventional and a coded lens. Let's try again to deconvolve our local image window with different scales of the defocus filter. With a conventional aperture there is uncertainty between the correct scale and the smaller one, but with the coded aperture both the smaller and the larger scales make a mess, so the uncertainty is reduced. This demonstrates the major idea behind the coded aperture, and for those who want to understand better what happens, we make a short transition into the frequency domain.
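A minimal 1D sketch of this scale-selection idea (illustrative, not the authors' code): blur a synthetic signal with a toy coded kernel at one scale, then score candidate scales by how cheaply each explains the observation under the zero-mean Gaussian gradient prior from the Image Priors slide (the closed-form least-squares deconvolution). The toy aperture code, signal, and parameters are all assumptions.

```python
import numpy as np

def scale_kernel(pattern, scale, n):
    """Resample a 1D coded pattern to `scale` taps and embed it, origin-centred, in length n."""
    taps = np.interp(np.linspace(0, 1, scale), np.linspace(0, 1, len(pattern)), pattern)
    taps /= taps.sum()
    k = np.zeros(n)
    k[(np.arange(scale) - scale // 2) % n] = taps
    return k

def deconv_energy(y, k, lam=1e-3):
    """Deconvolve y with kernel k under a quadratic gradient prior (closed form in Fourier)
    and return the total energy ||k*x - y||^2 + lam*||grad x||^2 of the best explanation."""
    n = len(y)
    K, Y = np.fft.fft(k), np.fft.fft(y)
    D = np.fft.fft(np.r_[[1.0, -1.0], np.zeros(n - 2)])           # finite-difference filter
    X = np.conj(K) * Y / (np.abs(K) ** 2 + lam * np.abs(D) ** 2)  # regularised deconvolution
    resid = y - np.real(np.fft.ifft(K * X))
    grad = np.real(np.fft.ifft(D * X))
    return np.sum(resid ** 2) + lam * np.sum(grad ** 2)

# Synthetic 1D "scene": a piecewise-constant signal blurred by a coded kernel at one scale.
rng = np.random.default_rng(1)
n, true_scale = 512, 21
sharp = np.cumsum(rng.standard_normal(n) * (rng.random(n) < 0.05))
coded = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1], float)           # toy binary aperture code
y = np.real(np.fft.ifft(np.fft.fft(sharp) * np.fft.fft(scale_kernel(coded, true_scale, n))))
y += 0.005 * rng.standard_normal(n)

# Depth estimation: try several kernel scales; the scale that explains the observation
# with the lowest energy is expected at (or very near) the true blur scale, and the
# blur scale maps to depth through the lens geometry.
for s in range(9, 34, 4):
    print(f"scale {s:2d}  energy {deconv_energy(y, scale_kernel(coded, s, n)):10.3f}")
```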
Coded Aperture
• Need to ditch the traditional aperture:
  – Binary filter
  – No "islands"
  – Lessened diffraction
  – Lessened radial distortion
  – Needs to be good at distinguishing different filter scales
  – Needs to be very good at distinguishing filters at low frequencies
More discrimination between scales
Keep minimal error scale in each local window + regularization
Regularizing Depth Estimation
Presentation Notes
To obtain depth for the full image, we consider each sub-window independently, trying a range of defocus scales and picking the one which minimizes the deconvolution error. However, this can be noisy. Also, like most passive depth estimation methods, our approach requires texture, so it has trouble with uniform areas of the image. Therefore, we use a Markov random field to regularize the local depth map. This prefers the depth to be piecewise constant and depth discontinuities to align with image discontinuities. This gives the improved depth map on the right. The colors correspond to depth from the camera, red being closer and blue being further away.
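The full 2D MRF is more machinery than fits here, but a minimal 1D analogue (a sketch under assumptions, not the authors' code) shows the effect: dynamic programming over one row of windows with a Potts smoothness term turns noisy per-window scale estimates into a piecewise-constant depth labeling. The toy unary costs below stand in for per-window deconvolution errors.

```python
import numpy as np

def regularize_depth_row(unary, smoothness=1.0):
    """Viterbi / dynamic programming over one row of windows.
    unary[i, k]: cost of assigning depth label k to window i (e.g. deconvolution error);
    a Potts penalty `smoothness` is paid whenever neighbouring labels differ."""
    n, labels = unary.shape
    cost = unary[0].copy()
    back = np.zeros((n, labels), dtype=int)
    for i in range(1, n):
        switch = cost.min() + smoothness          # cheapest way to arrive by changing label
        back[i] = np.where(cost <= switch, np.arange(labels), cost.argmin())
        cost = np.minimum(cost, switch) + unary[i]
    path = np.empty(n, dtype=int)
    path[-1] = cost.argmin()
    for i in range(n - 1, 0, -1):
        path[i - 1] = back[i, path[i]]
    return path

# Toy usage: noisy per-window depth evidence for a scene with two depth layers.
rng = np.random.default_rng(0)
true_depth = np.array([0] * 20 + [3] * 20)
unary = rng.random((40, 5)) * 2.0            # assumed noisy deconvolution errors
unary[np.arange(40), true_depth] -= 1.0      # evidence favours the true label on average
print("raw argmin :", unary.argmin(axis=1))
print("regularised:", regularize_depth_row(unary, smoothness=1.5))
```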
Depth Reconstruction Cont’d
• Sections of the image that are regular/untextured are hard to get right.
• Use user input to narrow constraints.
Depth Reconstruction Cont’d
Presentation Notes
Using the all-focused image and the layered depth map, one application is post-exposure refocusing; another demonstrated use is editing with the healing brush in Photoshop.
Motion-Invariant Photography
• Anat Levin
• Peter Sand
• Taeg Sang Cho
• Fredo Durand
• Bill Freeman
CSAIL, MIT
Motion Blur
Presentation Notes
This image demonstrates the motion blur problem. Most of the scene is static but for some reason the cans are moving, and they are blurred. Since blur destroys image quality we would like to get rid of it.
Overcoming motion blur?
• Reduce the exposure time (faster shutter), but this reduces the amount of light
• When the shutter is already as fast as physically possible and there is still blur, we need a computational solution: deconvolution
Presentation Notes
One solution to motion blur is to reduce the exposure time, but this also reduces the signal-to-noise ratio. Since we cannot reduce exposure time too much, there is a need for computational solutions such as deconvolution.
• Need to know blur kernel (motion velocity)
Presentation Notes
The first challenge with deconvolution is the need to know of the exact blur kernel, which is a function of the motion velocity.
• Need to know blur kernel (motion velocity)
• Need to segment image
[Figure: deconvolving the whole image with the cans' velocity kernel]
Presentation Notes
The second challenge is that even if we managed to recover the proper kernel, applying it on the static parts leads to garbage. Therefore we also need to segment the image according to motion.
• Information loss (reduced signal to noise ratio)
[Figure: blurred input, deblurred result, and static-camera input]
• Need to know blur kernel (motion velocity)
• Need to segment image
Presentation Notes
A third challenge is that blur destroys image information. Even a successful deconvolution will not be as sharp as a static image.
- Existing approach: Flutter Shutter, Raskar et al 2006
• Information loss
• Need to segment image
Close and open the shutter during exposure to achieve a broad-band kernel; but this does not address kernel estimation or segmentation
Why is motion deblurring hard?
Presentation Notes
The flutter shutter camera proposes a creative solution. By opening and closing the shutter during exposure one can preserve much more high frequency information. Yet this solution doesn’t address the first two challenges- estimating the motion and segmenting the image.
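A minimal sketch of the "broad-band kernel" point (an arbitrary on/off code is assumed here, not Raskar et al.'s optimized sequence): compare the spectrum of an ordinary box blur with that of a fluttered blur of the same length; the box spectrum has exact nulls, the fluttered one typically does not.

```python
import numpy as np

n, taps = 256, 32
box = np.zeros(n)
box[:taps] = 1.0 / taps                                       # ordinary open shutter (box blur)
code = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1,   # an assumed on/off pattern,
                 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1], float)  # not the paper's code
flutter = np.zeros(n)
flutter[:taps] = code / code.sum()                            # fluttered shutter of same length

for name, k in (("box kernel    ", box), ("flutter kernel", flutter)):
    mag = np.abs(np.fft.rfft(k))
    # deep nulls mean those frequencies are lost and deconvolution becomes ill-posed
    print(f"{name}: smallest spectral magnitude = {mag.min():.5f}")
```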
Counter-intuitive solution: to reduce motion blur, increase it!
• Move the camera as the picture is taken
• Makes blur invariant to motion, so it can be removed with spatially uniform deconvolution
  – kernel is known (no need to estimate motion)
  – kernel is identical over the image (no need to segment)
• Makes blur easy to invert
Presentation Notes
The solution we propose might sound counter-intuitive: to reduce motion blur we increase it. We move the camera during exposure time, so that we also blur the static parts of the scene. However, the motion is designed in a special way which makes blur invariant to object velocity. That means that the entire scene is blurred equally, with a known kernel. The advantage is that we can remove the blur with a single deconvolution, without any motion estimation and without segmentation. Another advantage is that our special motion preserves high-frequency information and makes the blur easy to invert.
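A minimal simulation sketch of this invariance (illustrative units and parameters, not the authors' code): sweep the camera along a parabola during the exposure and histogram each object's camera-relative position to obtain its blur kernel; kernels for different object velocities should agree up to a spatial shift and some tail clipping.

```python
import numpy as np

# Illustrative parameters (assumptions): unit exposure T and a parabolic sweep x_cam = a*t^2,
# which momentarily "tracks" any object velocity up to about a*T during the exposure.
T, a = 1.0, 200.0
t = np.linspace(-T / 2, T / 2, 200001)
cam = a * t ** 2                                   # parabolic camera displacement

def blur_kernel(velocity, bins=np.arange(-80.5, 20.5)):
    """1D blur kernel seen on the sensor for an object moving at constant `velocity`:
    the histogram of the object's camera-relative position over the exposure."""
    rel = velocity * t - cam                       # object position relative to the sensor
    hist, _ = np.histogram(rel, bins=bins)
    return hist / hist.sum()

k_static, k_moving = blur_kernel(0.0), blur_kernel(40.0)
# A sheared parabola is the same parabola shifted, so the moving object's kernel should be
# (up to tail clipping at the ends of the exposure) the static kernel shifted by s^2/(4a).
shift = int(round(40.0 ** 2 / (4 * a)))            # = 2 position bins here
print("kernel peak value       :", k_static.max())
print("max mismatch after shift:", np.abs(np.roll(k_static, shift) - k_moving).max())
```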
Motion invariant blur- disclaimers:
Presentation Notes
Two disclaimers: First we assume the motion is 1D, for example horizontal. However, this still covers a wide class of real motions, such as cars and walking people. Second, our camera greatly improves quality for moving objects, but slightly degrades static parts.
Controlling motion blur
Presentation Notes
To illustrate the idea, let’s look at this moving scene.
Can we control motion blur?
Controlling motion blur
Presentation Notes
If we capture it with a static camera, the background is sharp, but the moving cars are blurred. Now the big question is: can we control the blur?
Controlling motion blur
Presentation Notes
One way to control the blur is to move the camera during exposure and track the red car. The moving white frame in this video is showing you the view of the camera sensor. On the right is the camera displacement as a function of time.
Controlling motion blur
Controlling motion blur
Presentation Notes
Now this is what the sensor sees during exposure. From the sensor view the red car is static, the background moves.
Controlling motion blur
Motion invariant blur
Presentation Notes
The recorded image is the integral over time. The red car is sharp and the rest is blurred. So by tracking an object we can affect the blur, but this is still not a solution because it improves one object but degrades others. In contrast, here is an image from our camera- ALL the objects are blurred EQUALLY, regardless of their velocity. How can we do that? It seems hard. We can track one object or another one, but no matter which one we choose, the blur of the rest of the scene is different. Also, tracking requires that you know the object's velocity.
Parabolic sweep
• Start by moving very fast to the right
• Continuously slow down until stop
• Continuously accelerate to the left
Intuition:
For any velocity, there is one instant where we track perfectly.
[Plot: sensor position x as a function of time]
Presentation Notes
Our solution is to use a non-linear motion. Here is what we do: we move the camera during exposure in 1D but vary the displacement as a parabolic function of time. That is, we start by moving very fast to the left, and we continuously slow down until we stop. Then we start to continuously accelerate to the right. As opposed to tracking one object, what we achieve here is that every velocity is tracked for a portion of the exposure.
Motion invariant blur
Presentation Notes
Here is a parabolic displacement. The sensor frame starts moving to the left, it continuously slows down until it stops, and then it accelerates in the other direction. The displacement is parabolic in space-time, but note that the sensor is only moving in 1D.
Motion invariant blur
Presentation Notes
Now let’s check it from the sensor viewpoint. We start moving. At some point we track the blue car and it’s sharp. Then we slow down and track the stationary background, then we accelerate and at some point track the red car.
Motion invariant blur
Presentation Notes
Now all this happens during exposure and the recorded picture is the integral over time. and you can see that all objects have the same blurring kernel.
Static camera
Our parabolic input
Our output after deblurring
Presentation Notes
So let’s see a real example - we have an image from a static camera- the static background is sharp and the moving guy is blurred. Such a blur is hard to remove because it is unknown and varies over the image. In contrast, here is the input from our camera. Everything is blurred, but everything is blurred with the same point spread function. Therefore we turn the problem into *non blind* deconvolution – the blur kernel is known and uniform over the image, and we can remove blur with a single deconvolution, without segmentation and without motion estimation.
The space time volume
Presentation Notes
Now that we have seen the basic principle, we want to provide more technical explanation for the parabolic motion, and for that we consider the space time volume- the 3D volume generated by stacking all images generated from all time instances. We then look at a 2D xt-slice out of the volume.
Shearing
Presentation Notes
It will be easier to understand how the moving objects are blurred if we make the trajectories of the red points vertical. Therefore we change the parameterization of the xt-slice. The mathematical transformation applied here is shearing, and to be consistent, we need to shear the integration segment as well. Now it is easy to see exactly how much the car is blurred: the width of the blur is proportional to the slope of the sheared segment.
Shearing: (x, t) -> (x - st, t)
Presentation Notes
We note that a sheared parabola remains a parabola, it only shifts.
Solution: parabolic curve (shear invariant)
• For any velocity (slope), there is a point where the parabola is tangent to that slope; it corresponds to the moment when the object is tracked.
• The parabola has a linear derivative, so it spends equal time tracking each velocity.
Presentation Notes
Intuitively, for every object there is one time instant when the parabola is tangent to its slope, and this corresponds to the moment during exposure in which the object is tracked. Also, since the derivative of the parabola is linear, we spend an equal amount of time tracking each object.
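Writing the shear-invariance argument out (standard algebra consistent with the slides):

```latex
% Shearing by an object velocity s maps (x, t) to (x - s t, t), so the parabolic
% integration curve x(t) = a t^2 becomes
\[
  a t^{2} - s t \;=\; a\left(t - \frac{s}{2a}\right)^{2} - \frac{s^{2}}{4a},
\]
% i.e. the same parabola, merely translated in x and t. Hence every bounded 1D
% velocity sees the same blur kernel up to a spatial shift, and the tangency
% point t = s/(2a) is the instant at which that velocity is perfectly tracked.
```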
Assume: we could perfectly identify blur kernel
Which camera has motion blur that is easy to invert? - Static? Flutter Shutter? Parabolic?
Prove: parabolic motion achieves near optimal information preservation
Deblurring and information loss
Presentation Notes
So far we have seen that parabolic displacement is motion invariant. We now turn attention to another big question: image quality. We know that blur destroys image information, and even successful deconvolution will not be as sharp as a static image. Therefore, forget for a moment about the ability to identify blur. We only want to evaluate how much information is damaged or preserved by the blur of different cameras: static, fluttered, or parabolic. We will show that a parabolic motion also achieves near-optimal energy preservation.
Frequencies from possible motions
• A bounded velocity range => need to preserve a double wedge in the frequency domain
[Figure: the primal space-time (x, t) domain and the space-time Fourier domain]
Presentation Notes
Motion blur is usually studied for a given velocity at a time. In contrast, we study blur for a full range of velocities simultaneously. For this, we use the space-time volume and its Fourier transform. Recall that velocity corresponds to slope in an xt slice. Now consider the Fourier transform of an object at a given speed. It generates frequency content on a slanted line that corresponds to its velocity. For example, static objects lead to horizontal frequency content. Translating objects lead to frequency content along slanted lines. And in general, if the range of velocities in the scene is bounded, the scene generates frequency content only within a double wedge region. The boundaries of this wedge are set by the maximal velocity we expect to see in the scene. Now let's see how a given sensor motion preserves information. In the paper, we show that it is determined by the power spectrum of the integration curve. That is, to preserve information for a given velocity we want the spectrum of the integration curve to be high on the corresponding line of the frequency domain. If we want to preserve a full velocity range, we want a high spectrum in the double wedge.
Static camera
• Static objects: high response; higher velocities: low response
Presentation Notes
As an example, consider the straight integration curve of a static camera. Its Fourier transform is a sinc. Static objects correspond to the horizontal slice, and along this slice we have high frequency content. But moving objects correspond to slanted slices with only low spectrum values. Therefore a static camera preserves a lot of information for static objects but does very poorly for the moving ones.
Flutter shutter (Raskar et al. 2006)
• Higher velocities: better than a static camera
[Figure: space-time integration pattern and its spectrum, with slices for static objects, velocity 1, and velocity 2]
Presentation Notes
Now let's look at the flutter shutter camera, which is designed to give motion blur that is easy to invert. To achieve this, the camera opens and closes the shutter during exposure, which leads to the discontinuous integration curve on the lower left. The holes in the segments are times when the shutter is closed. Because of this complex pattern, the spectrum contains higher frequencies. On the horizontal slice of a static object it still has a high response, but it also has higher values along slanted lines, and therefore moving objects are preserved much better.
The parabolic camera
• Equal, high response over the whole velocity range
Presentation Notes
Now let’s see the power spectrum of the parabolic curve. It adapts to the wedge shape. It actually preserves an equal amount of information along all slices. For the static slice this is slightly less than a flutter shutter camera, but for the motion slices the spectrum values are significantly increased.








Spends frequency “budget” outside wedge
Handles 2D motion
Presentation Notes
Now let’s see how different cameras do with respect to this upper bound. We note that the power spectrum of a flutter shutter camera must be constant horizontally. Therefore it must spend budget outside the wedge. Since the norm of each column is bounded, this means less budget for the desired velocities range. In contrast to this, the power spectrum of the parabolic curve fits the wedge shape and does not spend much budget outside the wedge. In fact, in the paper we prove that it is getting very close to the optimal upper bound.
Comparing camera reconstruction
[Figure columns: Blurred input, Static, Flutter Shutter, Parabolic]
Presentation Notes
Here is a visual comparison. We synthetically blurred a scene according to all 3 camera models and added an equal amount of noise. The inversion of the static camera's blur is the worst, the flutter shutter model does better, but the parabolic motion preserves much more information.
Hardware construction
• Ideally move sensor (requires same hardware as existing stabilization systems)
• In prototype implementation: rotate camera
variable radius cam
Presentation Notes
Now, we have implemented parabolic displacement in hardware. The ideal thing is to translate the sensor during exposure. This essentially requires the same hardware as existing motion stabilization technology. However, in our prototype implementation we use an external solution and we approximate translation with a small rotation.
Linear rail
Static camera input- Unknown and variable blur
Presentation Notes
We move to some real results. Here is an image from a static camera- the scene contains multiple depth layers which translate into multiple motion velocities. In contrast, here is the input from our camera. It is blurred everywhere, but blurred equally everywhere.
Linear rail
Our output after deblurring- NON-BLIND deconvolution
Presentation Notes
Therefore we can remove the blur with single uniform deconvolution without estimating the motion or segmenting the image. One can notice some information loss at the static background but the moving parts are significantly improved. The objects in this scene were moving on a perfect linear rail.
Input from a static camera Deblurred output from our camera
Human motion- no perfect linearity
Presentation Notes
In the next example we have human motion which of course isn’t perfectly linear. But we still remove the blur with a single uniform deconvolution.
Violating 1D motion assumption- forward motion
Input from a static camera Deblurred output from our camera
Presentation Notes
We have tried to violate the model assumption even further; here the guy is moving forward, so his motion is clearly not horizontal. Yet deconvolution with our kernel is doing fine. The technique seems to work even when we somewhat violate the horizontal motion assumption. This can be explained by the so-called aperture problem: for 1D edges, the direction of motion is ambiguous, and a diagonal motion behaves exactly like a horizontal motion with a different velocity.
Violating 1D motion assumption- stand-up motion
Input from a static camera Deblurred output from our camera
Presentation Notes
It is possible to break the technique, however. Here the man is standing up, a fully vertical motion. We definitely get artifacts on the face, but not that many around the hand.
Violating 1D motion assumption- rotation
Input from a static camera Deblurred output from our camera
Presentation Notes
Here we have a rotating board, involving all motion directions. At the center, where motion is small, the deblurring is surprisingly good. But at the boundaries we do see artifacts.
Parabolic curve: issues
• Spatial shift, but it does not affect visual quality after deconvolution
• Parabola tail clipping: not exactly the same blur for all velocities
• Motion boundaries break the convolution model
• Assumes object motion is horizontal (1D)
Presentation Notes
We said that the parabolic integration curve is motion invariant, but some approximations are involved, so let's state them clearly. First, because a sheared parabola is shifted, there is a spatial shift of objects in the deconvolved image as a function of the object velocity. But since a shift isn't a visual artifact, we aren't worried about it. Second, since the integration time is finite, there are tail clipping effects and the tail of the integration kernel is not perfectly identical for all velocities, but the effect on the deconvolution is minor. Another issue is that the convolution model breaks at motion boundaries, but in practice we haven't noticed that this leads to real artifacts. Finally, the assumption is that the object motion is 1D and that it is linear up to a 1st-order approximation.
Conclusions
• Camera moved during exposure with a parabolic displacement
• Blur invariant to motion:
  – Same over the whole image (no need to segment)
  – Known in advance (no kernel identification)
• Easy to invert (near-optimal frequency response)
• For 1D motion:
  – Somewhat robust to violations of the 1D assumption
  – Future work: 2D extensions
Presentation Notes
To conclude the talk: we have proposed to translate the camera during exposure following a parabolic displacement. The advantage is that the resulting blur is invariant to object motion, and we can remove it via deconvolution with a single known kernel, with no need for motion estimation and no segmentation. The blur is also easy to invert and is quite close to the optimal frequency response we can hope to achieve. The basic solution is aimed at 1D motion blur, yet it is somewhat robust to violations of the 1D assumption, and we are exploring ways to extend it to 2D.
Thank you