Image Acquisition
Asma Kanwal, Lecturer, Department of Computer Science, GC University, Lahore
Dr. Wajahat Mahmood Qazi, Assistant Professor, Department of Computer Science, GC University, Lahore
2
Human Visual Perception
• Why study visual perception?
• Image processing algorithms are designed based on how our visual system works.
• In image compression, we need to know what information is not perceptually important and can be ignored.
• In image enhancement, we need to know what types of operations are likely to improve an image visually.
3
The Human Visual System
• The human visual system consists of two primary components, the eye and the brain, which are connected by the optic nerve.
• Eye: receiving sensor (camera, scanner).
• Brain: information processing unit (computer system).
• Optic nerve: connection cable (physical wire).
4
The Human Visual System
5
Cross Section of the Human Eye
6
Visual Perception: Human Eye (cont.)
1. The lens contains 60-70% water and about 6% fat.
2. The iris diaphragm controls the amount of light that enters the eye.
3. Light receptors in the retina:
   • About 6-7 million cones for bright-light (photopic) vision.
     - The density of cones is about 150,000 elements/mm².
     - Cones are involved in color vision.
     - Cones are concentrated in the fovea, an area of about 1.5 × 1.5 mm².
   • About 75-150 million rods for dim-light (scotopic) vision.
     - Rods are sensitive to low levels of light and are not involved in color vision.
Blind spot is the region of emergence of the optic nerve from the eye.
7
Image Formation in the Human Eye
8
Image Formation in the Human Eye
• Focal length of the eye: 14 to 17 mm.
• Consider an object 15 m high viewed from 100 m away. Let h be the height in mm of that object in the retinal image; then 15/100 = h/17, so h = 2.55 mm.
• The retinal image is focused primarily on the region of the fovea.
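The similar-triangles relation above can be checked with a short sketch; the function name and the default 17 mm focal length are illustrative choices, not from the slides.

```python
# Retinal image size by similar triangles:
# object_height / distance = image_height / focal_length.
def retinal_image_height(object_height_m, distance_m, focal_length_mm=17.0):
    """Return the retinal image height in mm for a distant object."""
    return focal_length_mm * object_height_m / distance_m

h = retinal_image_height(15.0, 100.0)  # the slide's example
print(round(h, 2))  # 2.55
```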
9
What is light?
• The visible portion of the electromagnetic (EM) spectrum.
• It occurs between wavelengths of approximately 400 and 700 nanometers.
10
Light and the Electromagnetic Spectrum
11
Light and the Electromagnetic Spectrum
• Three basic quantities describe the quality of a chromatic light source:
• Radiance: the total amount of energy that flows from the light source (can be measured)
• Luminance: the amount of energy an observer perceives from a light source (can be measured)
• Brightness: a subjective descriptor of light perception; perceived quantity of light emitted (cannot be measured)
Light and the Electromagnetic Spectrum
• Relationship between frequency (ν) and wavelength (λ): λ = c/ν, where c is the speed of light.
• Energy of a photon: E = hν, where h is Planck’s constant.
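The frequency and photon-energy relations above can be evaluated directly; the constants are standard approximate values, and the 550 nm test wavelength (mid visible spectrum) is my own choice.

```python
# nu = c / lambda and E = h * nu, with approximate physical constants.
C = 3.0e8        # speed of light, m/s
H = 6.626e-34    # Planck's constant, J*s

def frequency(wavelength_m):
    """Frequency in Hz for a given wavelength in metres."""
    return C / wavelength_m

def photon_energy(wavelength_m):
    """Energy in joules of one photon of the given wavelength."""
    return H * frequency(wavelength_m)

nu = frequency(550e-9)     # green light, roughly mid visible spectrum
E = photon_energy(550e-9)  # on the order of 1e-19 J
```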
Terminologies
Wavelength: The distance between peaks (high points) is called the wavelength.
Frequency: Frequency describes the number of waves that pass a fixed place in a given amount of time.
Amplitude: Amplitude is the height of a wave.
Terminologies
Reflection: Reflection involves a change in the direction of waves when they bounce off a barrier.
Refraction: Refraction of waves involves a change in the direction of waves as they pass from one medium to another.
Diffraction: Diffraction involves a change in direction of waves as they pass through an opening or around a barrier in their path.
15
Image formation
• There are two parts to the image formation process:
• The geometry of image formation, which determines where in the image plane the projection of a point in the scene will be located.
• The physics of light, which determines the brightness of a point in the image plane as a function of illumination and surface properties.
16
A Simple model of image formation
• The scene is illuminated by a single source.
• The scene reflects radiation towards the camera.
• The camera senses it via chemicals on film.
17
Pinhole camera
• This is the simplest device to form an image of a 3D scene on a 2D surface.
• Straight rays of light pass through a “pinhole” and form an inverted image of the object on the image plane.
x = f·X / Z
y = f·Y / Z
where (X, Y, Z) is a scene point, (x, y) its projection, and f the distance from the pinhole to the image plane.
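The pinhole projection equations can be sketched as a small function; the function and parameter names are mine, not from the slides.

```python
# Pinhole (perspective) projection: x = f*X/Z, y = f*Y/Z.
def pinhole_project(X, Y, Z, f):
    """Project scene point (X, Y, Z) onto the image plane at distance f."""
    if Z == 0:
        raise ValueError("point lies in the plane of the pinhole")
    return f * X / Z, f * Y / Z

x, y = pinhole_project(2.0, 1.0, 4.0, 0.05)    # 50 mm 'focal length'
x2, y2 = pinhole_project(2.0, 1.0, 8.0, 0.05)  # doubling depth halves the image
```

Note how the division by Z is what makes distant objects project smaller.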
18
Video on Pinhole Camera
http://www.howcast.com/videos/387145-How-to-Transform-a-Room-into-a-Camera-Obscura
19
Camera optics
• In practice, the aperture must be larger to admit more light.
• Lenses are placed in the aperture to focus the bundle of rays from each scene point onto the corresponding point in the image plane.
20
Camera Image Side Up
21
Image formation (cont’d)
• Optical parameters of the lens
  • lens type
  • focal length
  • field of view
• Photometric parameters
  • type, intensity, and direction of illumination
  • reflectance properties of the viewed surfaces
• Geometric parameters
  • type of projection
  • position and orientation of camera in space
  • perspective distortions introduced by the imaging process
22
Pixel Transformation
23
Spatial Domain Methods
[Diagram: point processing maps each input pixel f(x,y) directly to an output pixel g(x,y); area/mask processing computes g(x,y) from a neighborhood of f(x,y).]
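Point processing can be sketched in a few lines: each output pixel depends only on the corresponding input pixel. The negative transform s = 255 − r used here is my illustrative choice of point operation, not one specified on the slide.

```python
# Point processing: apply a per-pixel operator to a grayscale image
# stored as a list of rows of intensity values.
def point_process(image, op):
    """Return a new image with op applied independently to every pixel."""
    return [[op(pixel) for pixel in row] for row in image]

f = [[0, 64], [128, 255]]
g = point_process(f, lambda r: 255 - r)  # image negative
```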
24
Color Transformation
25
Color Models
• The purpose of a color model (also called Color Space or Color System) is to facilitate the specification of colors in some standard way
• A color model is a specification of a coordinate system and a subspace within that system where each color is represented by a single point
• Color Models
• RGB (Red, Green, Blue)
• CMY (Cyan, Magenta, Yellow)
• HSI (Hue, Saturation, Intensity)
26
RGB Model
• Each color is represented in its primary color components Red, Green and Blue
• This model is based on the Cartesian coordinate system.
27
CMY Color Model
28
CMY Color Model
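The CMY model is the subtractive complement of RGB: with channel values normalized to [0, 1], the standard relation is C = 1 − R, M = 1 − G, Y = 1 − B. A minimal sketch, assuming that normalization (function names are my own):

```python
# CMY <-> RGB conversion for normalized channel values in [0, 1].
def rgb_to_cmy(r, g, b):
    return 1.0 - r, 1.0 - g, 1.0 - b

def cmy_to_rgb(c, m, y):
    return 1.0 - c, 1.0 - m, 1.0 - y

print(rgb_to_cmy(1.0, 0.0, 0.0))  # pure red -> (0.0, 1.0, 1.0)
```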
29
HSI Color Model
• Hue (dominant colour seen)
  • Wavelength of the pure colour observed in the signal.
  • Distinguishes red, yellow, green, etc.
  • More than 400 hues can be seen by the human eye.
• Saturation (degree of dilution)
  • Inverse of the quantity of “white” present in the signal. A pure colour has 100% saturation; white and grey have 0% saturation.
  • Distinguishes red from pink, marine blue from royal blue, etc.
  • About 20 saturation levels are visible per hue.
• Intensity
  • Distinguishes the gray levels.
30
Color Transformations
A color transformation can be represented by the expression:
g(x,y) = T[f(x,y)]
f(x,y): input image
g(x,y): processed (output) image
T[·]: an operator on f defined over a neighborhood of (x,y).
The pixel values here are triplets or quartets (i.e., groups of 3 or 4 values).
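The expression g(x,y) = T[f(x,y)] acting on triplets can be sketched as follows; the particular operator T (a simple darkening of each RGB channel) is my illustrative choice.

```python
# Color transformation on RGB triplets: apply T to every pixel.
def color_transform(image, T):
    """Apply T to every (r, g, b) triplet of an image stored as rows."""
    return [[T(pixel) for pixel in row] for row in image]

def darken(pixel, factor=0.5):
    """Example operator T: scale all three channels by a factor."""
    r, g, b = pixel
    return (int(r * factor), int(g * factor), int(b * factor))

f = [[(200, 100, 50)]]
g = color_transform(f, darken)  # [[(100, 50, 25)]]
```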
31
Color Transformations
32
Geometric Transformation
33
Image alignment
Why don’t these images line up exactly?
34
What is the geometric relationship between these two images?
Answer: Similarity transformation (translation, rotation, uniform scale)
35
What is the geometric relationship between these two images?
36
What is the geometric relationship between these two images?
Very important for creating mosaics!
37
Geometric Processes
• Transformation applied on the coordinates of the pixels (i.e., relocate pixels).
• A geometric transformation has the general form (x, y) = T{(v, w)}, where (v, w) are the original pixel coordinates and (x, y) are the transformed pixel coordinates.
38
Image Warping
• Image filtering changes the range of an image: g(x) = h(f(x))
• Image warping changes the domain of an image: g(x) = f(h(x))
39
Image Warping
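The warping relation g(x) = f(h(x)) is commonly implemented by inverse mapping: for every output position, sample the input at h(x). A 1D sketch, where the nearest-neighbour sampling and the shift warp are my illustrative choices:

```python
# Inverse-mapped 1D warp: g[x] = f[h(x)], nearest-neighbour sampling,
# with out-of-range source positions filled with 0.
def warp_1d(f, h, out_len):
    g = []
    for x in range(out_len):
        src = int(round(h(x)))
        g.append(f[src] if 0 <= src < len(f) else 0)
    return g

f = [10, 20, 30, 40]
g = warp_1d(f, lambda x: x - 1, 4)  # shift right by one: [0, 10, 20, 30]
```

Inverse mapping is preferred over forward mapping because every output pixel gets exactly one value, leaving no holes.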
40
Parametric (global) warping
• Examples of parametric warps: translation, rotation, aspect.
41
Parametric (global) warping
• Transformation T is a coordinate-changing machine: p’ = T(p)
• What does it mean that T is global?
  • It is the same for any point p.
  • It can be described by just a few numbers (parameters).
• Let’s consider linear transforms, which can be represented by a 2D matrix taking p = (x, y) to p’ = (x’, y’).
42
All 2D Linear Transformations
• Linear transformations are combinations of scale, rotation, shear, and mirror.
• Properties of linear transformations:
  • Origin maps to origin
  • Lines map to lines
  • Parallel lines remain parallel
  • Ratios are preserved
  • Closed under composition
[x']   [a b] [x]
[y'] = [c d] [y]

Any sequence of linear transformations composes into a single matrix:

[x'']   [a b] [e f] [i j] [x]
[y''] = [c d] [g h] [k l] [y]
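Applying and composing 2×2 linear transforms can be sketched directly; the helper names are my own.

```python
# Apply a 2x2 matrix M = ((a, b), (c, d)) to a point p = (x, y).
def apply2(M, p):
    (a, b), (c, d) = M
    x, y = p
    return (a * x + b * y, c * x + d * y)

# Matrix product M2 @ M1: applying the result equals "M1 first, then M2".
def compose2(M2, M1):
    (a, b), (c, d) = M2
    (e, f), (g, h) = M1
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

scale = ((2, 0), (0, 3))
shear = ((1, 1), (0, 1))
p = (1, 1)
# Step-by-step application matches one composed matrix (closure property):
assert apply2(scale, apply2(shear, p)) == apply2(compose2(scale, shear), p)
```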
43
Homogeneous coordinates
Trick: add one more coordinate:
(x, y) ⇒ (x, y, 1)   (homogeneous image coordinates)
Converting from homogeneous coordinates:
(x, y, w) ⇒ (x/w, y/w, 1)   (the point on the w = 1 homogeneous plane)
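The two conversions can be sketched as a pair of helpers (names are my own):

```python
# (x, y) -> (x, y, 1): lift a 2D point to homogeneous coordinates.
def to_homogeneous(p):
    x, y = p
    return (x, y, 1.0)

# (x, y, w) -> (x/w, y/w): divide through by w to recover the 2D point.
def from_homogeneous(ph):
    x, y, w = ph
    if w == 0:
        raise ValueError("w = 0 is a point at infinity")
    return (x / w, y / w)

# (2, 4, 2) and (1, 2, 1) represent the same 2D point:
assert from_homogeneous((2, 4, 2)) == from_homogeneous((1, 2, 1)) == (1.0, 2.0)
```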
44
2D Translation
• Moves a point to a new location by adding translation amounts to the coordinates of the point:
x’ = x + tx
y’ = y + ty
45
2D Translation (cont’d)
• To translate an object, translate every point of the object by the same amount.
46
2D Scaling
• Changes the size of the object by multiplying the coordinates of its points by scaling factors:
x’ = sx · x
y’ = sy · y
47
2D Scaling (cont’d)
• Uniform vs non-uniform scaling
• Effect of scale factors:
48
2D Rotation
• Rotates points by an angle θ about origin
(θ >0: counterclockwise rotation)
• From the ABP triangle: x = r cos(φ), y = r sin(φ)
• From the ACP’ triangle: x’ = r cos(φ + θ), y’ = r sin(φ + θ)
49
2D Rotation (cont’d)
• From the above equations we have:
x’ = x cos θ − y sin θ
y’ = x sin θ + y cos θ
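The rotation equations translate directly into code; the function name is my own.

```python
import math

# Rotate a 2D point about the origin; theta > 0 is counterclockwise.
def rotate2d(p, theta):
    x, y = p
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

x, y = rotate2d((1.0, 0.0), math.pi / 2)  # 90 degrees: (1, 0) -> about (0, 1)
```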
50
Homogeneous coordinates
• Add one more coordinate: (x, y) ⇒ (xh, yh, w)
• Recover (x, y) by homogenizing (xh, yh, w): x = xh / w, y = yh / w
• So xh = x·w and yh = y·w, i.e. (x, y) ⇒ (x·w, y·w, w)
51
Homogeneous coordinates (cont’d)
• (x, y) has multiple representations in homogeneous coordinates:
  • w = 1: (x, y) ⇒ (x, y, 1)
  • w = 2: (x, y) ⇒ (2x, 2y, 2)
• All these points lie on a line in the space of homogeneous coordinates (projective space)!
52
2D Translation using homogeneous coordinates
With w = 1:
[x']   [1 0 tx] [x]
[y'] = [0 1 ty] [y]
[1 ]   [0 0 1 ] [1]
53
2D Translation using homogeneous coordinates (cont’d)
• Successive translations: T(tx2, ty2) T(tx1, ty1) = T(tx1 + tx2, ty1 + ty2)
54
2D Scaling using homogeneous coordinates
With w = 1:
[x']   [sx 0  0] [x]
[y'] = [0  sy 0] [y]
[1 ]   [0  0  1] [1]
55
2D Scaling using homogeneous coordinates (cont’d)
• Successive scalings: S(sx2, sy2) S(sx1, sy1) = S(sx1·sx2, sy1·sy2)
56
2D Rotation using homogeneous coordinates
With w = 1:
[x']   [cos θ  −sin θ  0] [x]
[y'] = [sin θ   cos θ  0] [y]
[1 ]   [0       0      1] [1]
57
2D Rotation using homogeneous coordinates (cont’d)
• Successive rotations: R(θ2) R(θ1) = R(θ1 + θ2)
58
Composition of transformations
• The transformation matrices of a series of transformations can be concatenated into a single transformation matrix.
• Example:
  • Translate P1 to the origin
  • Perform scaling and rotation
  • Translate to P2
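The concatenation idea can be sketched with 3×3 homogeneous matrices: translate a pivot point to the origin, rotate, then translate back. The helper names and the 90-degree example are my own choices.

```python
import math

# 3x3 matrix product (rightmost factor is applied first).
def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def translation(tx, ty):
    return ((1, 0, tx), (0, 1, ty), (0, 0, 1))

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return ((c, -s, 0), (s, c, 0), (0, 0, 1))

# Apply a 3x3 homogeneous matrix to a 2D point, dividing by w.
def apply(M, p):
    x, y = p
    xh = M[0][0] * x + M[0][1] * y + M[0][2]
    yh = M[1][0] * x + M[1][1] * y + M[1][2]
    w = M[2][0] * x + M[2][1] * y + M[2][2]
    return (xh / w, yh / w)

# Rotate 90 degrees about the pivot P1 = (1, 1):
M = mat_mul(translation(1, 1), mat_mul(rotation(math.pi / 2), translation(-1, -1)))
x, y = apply(M, (1, 1))  # the pivot itself is a fixed point
```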
59
Composition of transformations (cont’d)
• Important: preserve the order of transformations!
translation followed by rotation ≠ rotation followed by translation
60
2D shear transformation
• Shearing along the x-axis: x’ = x + shx · y, y’ = y
• Shearing along the y-axis: x’ = x, y’ = y + shy · x
• Shearing changes the object’s shape!
61
Affine transformations
Any transformation whose matrix has last row [ 0 0 1 ] is called an affine transformation.
62
Basic affine transformations
Translate:
[x']   [1 0 tx] [x]
[y'] = [0 1 ty] [y]
[1 ]   [0 0 1 ] [1]

2D in-plane rotation:
[x']   [cos θ  −sin θ  0] [x]
[y'] = [sin θ   cos θ  0] [y]
[1 ]   [0       0      1] [1]

Shear:
[x']   [1    shx  0] [x]
[y'] = [shy  1    0] [y]
[1 ]   [0    0    1] [1]

Scale:
[x']   [sx  0   0] [x]
[y'] = [0   sy  0] [y]
[1 ]   [0   0   1] [1]
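The four basic affine matrices above can be sketched as small builder functions (names are my own); since all have last row (0, 0, 1), a point can be transformed without an explicit division by w.

```python
import math

def translate(tx, ty):
    return ((1, 0, tx), (0, 1, ty), (0, 0, 1))

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return ((c, -s, 0), (s, c, 0), (0, 0, 1))

def shear(shx, shy):
    return ((1, shx, 0), (shy, 1, 0), (0, 0, 1))

def scale(sx, sy):
    return ((sx, 0, 0), (0, sy, 0), (0, 0, 1))

# Apply an affine matrix (last row (0, 0, 1)) to a 2D point.
def apply_affine(M, p):
    x, y = p
    return (M[0][0] * x + M[0][1] * y + M[0][2],
            M[1][0] * x + M[1][1] * y + M[1][2])

assert apply_affine(translate(3, 4), (1, 1)) == (4, 5)
assert apply_affine(scale(2, 3), (1, 1)) == (2, 3)
assert apply_affine(shear(1, 0), (1, 1)) == (2, 1)
```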
63
Affine Transformations
• Under certain assumptions, affine transformations can be used to approximate the effects of perspective projection!
G. Bebis, M. Georgiopoulos, N. da Vitoria Lobo, and M. Shah, " Recognition by learning affine transformations", Pattern Recognition, Vol. 32, No. 10, pp. 1783-1799, 1999.
affine transformed object
64
Affine Transformations
• Affine transformations are combinations of linear transformations and translations.
• Properties of affine transformations:
  • Origin does not necessarily map to origin
  • Lines map to lines
  • Parallel lines remain parallel
  • Ratios are preserved
  • Closed under composition
[x']   [a b c] [x]
[y'] = [d e f] [y]
[w ]   [0 0 1] [w]
65
Projective Transformations aka Homographies aka Planar Perspective Maps
Called a homography (or planar perspective map)
66
Projective Transformations
• Projective transformations are combinations of affine transformations and projective warps.
• Properties of projective transformations:
  • Origin does not necessarily map to origin
  • Lines map to lines
  • Parallel lines do not necessarily remain parallel
  • Ratios are not preserved
  • Closed under composition
[x']   [a b c] [x]
[y'] = [d e f] [y]
[w']   [g h i] [w]
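Applying a homography differs from the affine case in that the result must be divided by w'. A minimal sketch (the example matrices are illustrative, not from the slides):

```python
# Apply a 3x3 homography H to a 2D point and divide by w'.
def apply_homography(H, p):
    x, y = p
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    if w == 0:
        raise ValueError("point maps to infinity")
    return (xh / w, yh / w)

# With last row (0, 0, 1) a homography reduces to an affine transform:
affine_like = ((1, 0, 5), (0, 1, 7), (0, 0, 1))
assert apply_homography(affine_like, (2, 3)) == (7.0, 10.0)

# A nonzero entry in the last row makes w' depend on the point:
H = ((1, 0, 0), (0, 1, 0), (0.5, 0, 1))
x, y = apply_homography(H, (2, 2))  # w' = 2 here, so the point is scaled by 1/2
```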
67
2D image transformations
These transformations form a nested set of groups: each is closed under composition, and the inverse of any member is also a member.
68
3D Transformations
• Right-handed / left-handed systems
69
3D Transformations (cont’d)
• Positive rotation angles for right-handed systems:
(counter-clockwise rotations)
70
Homogeneous coordinates
• Add one more coordinate: (x, y, z) ⇒ (xh, yh, zh, w)
• Recover (x, y, z) by homogenizing (xh, yh, zh, w): x = xh / w, y = yh / w, z = zh / w
• In general, xh = x·w, yh = y·w, zh = z·w, i.e. (x, y, z) ⇒ (x·w, y·w, z·w, w)
• Each point (x, y, z) corresponds to a line in the 4D space of homogeneous coordinates.
71
3D Translation
72
3D Scaling
73
3D Rotation
• Rotation about the z-axis:
x’ = x cos θ − y sin θ
y’ = x sin θ + y cos θ
z’ = z
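Rotation about the z-axis leaves z unchanged while (x, y) rotate as in 2D; a minimal sketch (function name is my own):

```python
import math

# Rotate a 3D point about the z-axis by theta (counterclockwise for theta > 0
# in a right-handed system, looking down the positive z-axis).
def rotate_z(p, theta):
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (x * c - y * s, x * s + y * c, z)

x, y, z = rotate_z((1.0, 0.0, 5.0), math.pi / 2)  # -> approximately (0, 1, 5)
```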
74
3D Rotation (cont’d)
• Rotation about the x-axis:
x’ = x
y’ = y cos θ − z sin θ
z’ = y sin θ + z cos θ
75
3D Rotation (cont’d)
• Rotation about the y-axis:
x’ = x cos θ + z sin θ
y’ = y
z’ = −x sin θ + z cos θ