
CIC-19, San Jose, CA, 8 November 2011

Spatiochromatic Vision Models for Imaging

with Applications to the Development of Image Rendering Algorithms and Assessment of

Image Quality

Jan P. Allebach

School of Electrical and Computer Engineering

Purdue University

West Lafayette, Indiana

[email protected]

CIC-19, San Jose, CA, 8 November 2011


CIC-19, San Jose, CA, 8 November 2011 2/120

What is a model?

• A model is not a complete description of the phenomenon being modeled.

• It should capture only what is important to the application at hand, and nothing more.

• Its structure must be responsive to resource constraints.

From dictionary.com: A schematic description of a system, theory, or phenomenon that accounts for its known or inferred properties and may be used for further study of its characteristics.


CIC-19, San Jose, CA, 8 November 2011 3/120

Visual system components

[Diagram: Stimulus → Retina → Optic Nerve → Visual Cortex → Response, with Saccadic Motion, Adaptation, and Viewing Conditions and Surround as primary and secondary influences]


CIC-19, San Jose, CA, 8 November 2011 4/120

Why do we need spatiochromatic models?

• Imaging systems succeed by providing a facsimile of the real world

• A few primaries instead of an exact spectral match
• Spatially discretized and amplitude-quantized representation of images that are continuous in both space and amplitude

• These methods succeed only because of the limitations of the human visual system (HVS)

• To design the lowest-cost systems that achieve the desired objective, it is necessary to take the human visual system into account in the design and evaluation


CIC-19, San Jose, CA, 8 November 2011 5/120

Modeling context

• Modeling process is very dependent on the intended application
- Motivation for developing the models in the first place

- Governs choice of features to be captured and computational structure of the model

- Provides the final test of the success of the model

• Tight interplay between models for imaging system components and the human visual system

• Model usage may be either embedded or external


CIC-19, San Jose, CA, 8 November 2011 6/120

Pedagogical approach

• Spatiochromatic modeling, in principle, builds on all of the following areas:
- Color science
- Imaging science
- Psychophysics
- Image systems engineering

• As stated in course description, we assume only a rudimentary knowledge of these subjects

• Start from basic principles, but move quickly to more advanced level

• Focus on what is needed to follow the modeling discussion
• See references at end


CIC-19, San Jose, CA, 8 November 2011 7/120

Synopsis of tutorial

• General framework for spatiochromatic models for the HVS

• Introduction to digital halftoning
• Application of spatiochromatic models to design of color halftones
• Overview of use of HVS models in image quality assessment
• Color Image Fidelity Assessor


CIC-19, San Jose, CA, 8 November 2011 8/120

For further information

• There is an extensive list of references at the end of these notes.

• The powerpoint presentation may be downloaded from the web: https://engineering.purdue.edu/~ece638/


CIC-19, San Jose, CA, 8 November 2011 9/120

The retinal image is what counts

• Every spatiochromatic model has an implied viewing distance

• What happens when this condition is not met?
- Too far – image looks better than specification
- Too close – may see artifacts


CIC-19, San Jose, CA, 8 November 2011 10/120

Basic spatiochromatic model structure

[Block diagram: Stimulus → Trichromatic Stage → LMS → Opponent Stage → O1, O2, O3 → Spatial Frequency Filtering Stage → Õ1, Õ2, Õ3]


CIC-19, San Jose, CA, 8 November 2011 11/120

Trichromatic stage

• First proposed by Young in 1801

• Ignored for over 50 years
• Helmholtz revived the concept in his Handbook of Physiological Optics (1856-1866)

• Physiological basis is the existence of three different cone types in retina

• Accurately predicts the results of color matching experiments over a wide range of conditions

Simulated retinal mosaic


CIC-19, San Jose, CA, 8 November 2011 12/120

Trichromatic sensor model

• For the human visual system, the spectral response functions can be measured indirectly through color matching experiments

$$R_S = \int S(\lambda)\,Q_R(\lambda)\,d\lambda,\quad G_S = \int S(\lambda)\,Q_G(\lambda)\,d\lambda,\quad B_S = \int S(\lambda)\,Q_B(\lambda)\,d\lambda$$

[Plots of the spectral response functions $Q_R(\lambda)$, $Q_G(\lambda)$, $Q_B(\lambda)$ versus wavelength $\lambda$]

• $Q_R(\lambda), Q_G(\lambda), Q_B(\lambda)$ are spectral response functions that characterize the sensor


CIC-19, San Jose, CA, 8 November 2011 13/120

Transformation between tristimulus representations

• The trichromatic sensor model is applicable to a wide range of image capture devices, such as cameras and scanners, as well as the human visual system

• If the spectral response functions are a linear transformation of those corresponding to the human visual system, then we call the 3-tuple response the tristimulus coordinate of that spectral power distribution

• For any two sets of spectral response functions that are both linear transformations of those for the human visual system, we can use a 3x3 matrix to transform between the corresponding tristimulus coordinates for any spectral stimulus

• The color matching functions for visually independent primaries are also a linear transformation of the cone responses of the human visual system


CIC-19, San Jose, CA, 8 November 2011 14/120

Color matching experiment – setup


CIC-19, San Jose, CA, 8 November 2011 15/120

Color matching experiment - procedure

• Test stimulus $C_T$ is fixed
• Observer individually adjusts strengths of the three match stimuli $C_1, C_2, C_3$ to achieve a visual match between the two sides of the split field
• Mixture is assumed to be additive, i.e. the radiant power of the mixture in any wavelength interval, $C_M(\lambda_1, \lambda_2)$, is the sum of the radiant powers of the three match stimuli in the same interval
• To achieve a match with some test stimuli $C_T$, it may be necessary to move one or two of the match stimuli over to the side where the test stimulus is located
• For visually independent primaries, match amounts are equivalent to tristimulus coordinates


CIC-19, San Jose, CA, 8 November 2011 16/120

Color matching functions

• A color matching experiment yields the amount of each of three primaries required to match a particular stimulus

• A special case is a monochromatic stimulus with wavelength $\lambda$

• If we repeat this experiment for all wavelengths $\lambda$, we obtain the color matching functions $\left(\bar{r}(\lambda), \bar{g}(\lambda), \bar{b}(\lambda)\right)$

• Since any stimulus $S(\lambda)$ can be expressed as a weighted sum of monochromatic stimuli, the primary match amounts $\left(p_R, p_G, p_B\right)$ can be expressed as

$$p_R = \int S(\lambda)\,\bar{r}(\lambda)\,d\lambda,\quad p_G = \int S(\lambda)\,\bar{g}(\lambda)\,d\lambda,\quad p_B = \int S(\lambda)\,\bar{b}(\lambda)\,d\lambda$$
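As a concrete illustration of these integrals, the sketch below numerically integrates a sampled spectral power distribution against sampled color matching functions. The Gaussian-shaped curves here are placeholders standing in for real CMF data, not the CIE 1931 functions.

    import numpy as np

    # Wavelength grid (nm) and a hypothetical stimulus S(lambda)
    lam = np.arange(380.0, 781.0, 1.0)
    S = np.exp(-0.5 * ((lam - 550.0) / 60.0) ** 2)

    # Placeholder color matching functions (Gaussians stand in for rbar, gbar, bbar)
    def gaussian(center, width):
        return np.exp(-0.5 * ((lam - center) / width) ** 2)

    r_bar, g_bar, b_bar = gaussian(600, 40), gaussian(550, 40), gaussian(450, 30)

    # p_R = integral of S(lambda) * rbar(lambda) d(lambda), etc. (trapezoidal rule)
    p_R = np.trapz(S * r_bar, lam)
    p_G = np.trapz(S * g_bar, lam)
    p_B = np.trapz(S * b_bar, lam)
    print(p_R, p_G, p_B)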


CIC-19, San Jose, CA, 8 November 2011 17/120

CIE 1931 standard RGB observer

• Observer consists of color matching functions corresponding to monochromatic primaries

• Primaries
- R – 700 nm
- G – 546.1 nm
- B – 435.8 nm
• Ratio of radiances $L_R : L_G : L_B = 72.1 : 1.4 : 1.0$, chosen to place the chromaticity of the equal-energy stimulus E at the center of the (r,g) chromaticity diagram, i.e. at (0.333, 0.333), so that the areas under the color matching functions are identical
• Based on observations in a 2 degree field of view using the color matching method discussed earlier.


CIC-19, San Jose, CA, 8 November 2011 18/120

Color matching functions for 1931 CIE standard RGB observer


CIC-19, San Jose, CA, 8 November 2011 19/120

Relative luminous efficiency - a special case of color matching

• An achromatic sensor with response function $V(\lambda)$ is called the standard photometric observer.


CIC-19, San Jose, CA, 8 November 2011 20/120

CIE 1931 standard XYZ observer

• The CIE also defined a second standard observer based on a linear transformation from the 1931 RGB color matching functions.

• The XYZ observer has the following properties:
- The color matching functions $\bar{x}(\lambda), \bar{y}(\lambda), \bar{z}(\lambda)$ are non-negative at all wavelengths.
- The chromaticity coordinates of all realizable stimuli are non-negative.
- The color matching function $\bar{y}(\lambda)$ is equal to the relative luminous efficiency function $V(\lambda)$.

• To achieve these properties, it was necessary to use primaries that are not realizable.

• The chromaticities of the primaries lie outside the spectral locus.


CIC-19, San Jose, CA, 8 November 2011 21/120

Color matching functions for 1931 CIE standard XYZ observer


CIC-19, San Jose, CA, 8 November 2011 22/120

Cone responses for human visual system*

*Vos and Walraven, Vision Res., 1971


CIC-19, San Jose, CA, 8 November 2011 23/120

Chromaticity coordinates

• Chromaticity coordinates provide an important method for visualizing tristimulus coordinates, i.e. sensor responses or primary match amounts

• Let $(R, G, B)$ denote either the sensor response or the primary match amounts for a particular stimulus

• The corresponding chromaticity coordinates are defined as

$$r = \frac{R}{R+G+B},\quad g = \frac{G}{R+G+B},\quad b = \frac{B}{R+G+B}$$

• We can see by inspection that each coordinate lies between 0 and 1 and that all three coordinates sum to 1
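A direct transcription of the definition, assuming non-negative tristimulus values:

    def chromaticity(R, G, B):
        """Return (r, g, b) chromaticity coordinates; each lies in [0, 1] and they sum to 1."""
        total = R + G + B
        return R / total, G / total, B / total

    print(chromaticity(20.0, 30.0, 50.0))   # (0.2, 0.3, 0.5)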


CIC-19, San Jose, CA, 8 November 2011 24/120

Chromaticity diagram for 1931 CIE standard XYZ observer


CIC-19, San Jose, CA, 8 November 2011 25/120

How do we use the trichromatic model?

• Assuming image is in a standard color space, such as sRGB, we transform to CIE XYZ as follows:

• Remove gamma correction, and transform to linear RGB

• Perform 3x3 matrix transform from linear RGB to CIE XYZ

• The CIE XYZ representation of the image will form the basis for further stages of the model

$$C^{(\mathrm{linear})} = \left[C^{(\mathrm{nonlinear})}\right]^{\gamma_C},\quad C = R, G, B$$

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = A_{3\times 3}\begin{bmatrix} R^{(\mathrm{linear})} \\ G^{(\mathrm{linear})} \\ B^{(\mathrm{linear})} \end{bmatrix}$$
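A minimal sketch of this two-step recipe. The simple power-law gamma mirrors the slide's equation (the exact sRGB standard uses a piecewise curve), and the matrix is the commonly published linear sRGB-to-XYZ (D65) matrix; both are assumptions for illustration.

    import numpy as np

    # Commonly published linear sRGB -> CIE XYZ (D65) matrix
    A = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])

    def srgb_to_xyz(rgb, gamma=2.2):
        """rgb: nonlinear sRGB values in [0, 1], shape (..., 3); returns CIE XYZ."""
        rgb = np.asarray(rgb, dtype=float)
        linear = rgb ** gamma          # remove gamma correction (slide's simplification)
        return linear @ A.T            # 3x3 matrix transform to CIE XYZ

    print(srgb_to_xyz([1.0, 1.0, 1.0]))   # approximately the D65 white point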


CIC-19, San Jose, CA, 8 November 2011 26/120

Basic spatiochromatic model structure

[Block diagram: Stimulus → Trichromatic Stage → LMS → Opponent Stage → O1, O2, O3 → Spatial Frequency Filtering Stage → Õ1, Õ2, Õ3]


CIC-19, San Jose, CA, 8 November 2011 27/120

Opponent stage

• Trichromatic theory provides the basis for understanding whether or not two spectral power distributions will appear the same to an observer when viewed under the same conditions.

• However, the trichromatic theory will tell us nothing about the appearance of a stimulus.

• In the early 1900s, Ewald Hering observed some properties of color appearance
- Red and green never occur together – there is no such thing as a reddish green, or a greenish red
- If I add a small amount of blue to green, it looks bluish-green. If I add more blue to green, it becomes cyan.
- In contrast, if I add red to green, the green becomes less saturated. If I add enough red to green, the color appears gray, blue, or yellow
- If I add enough red to green, the color appears red, but never reddish green


CIC-19, San Jose, CA, 8 November 2011 28/120

Blue-yellow color opponency


CIC-19, San Jose, CA, 8 November 2011 29/120

Opponent stage (cont.)

• Hering postulated that there existed two kinds of neural pathways in the visual system
- Red-Green pathway fires fast if there is a lot of red, fires slowly if there is a lot of green
- Blue-Yellow pathway fires fast if there is a lot of blue, fires slowly if there is a lot of yellow

• Hering provided no experimental evidence for his theory, and it was ignored for over 50 years


CIC-19, San Jose, CA, 8 November 2011 30/120

Experimental evidence for opponency

• Hurvich and Jameson hue cancellation experiment (1955)
• Svaetichin electrophysiological evidence from the retinal neurons of a fish (1956)
• Boynton's color naming experiment (1965)
• Wandell's color decorrelation experiment

Left and right plots show data for two different observers. Open triangles show cancellation of red-green appearance. Closed circles show cancellation of blue-yellow appearance.


CIC-19, San Jose, CA, 8 November 2011 31/120

Color spaces that incorporate opponency

• YUV (NTSC video standard space)

• YCrCb (Kodak PhotoCD space)

• L*a*b* (CIE uniform color space)
• YCxCz (linearized CIE L*a*b* space)
• O1O2O3 (Wandell's optimally decorrelated space)

$$O_1 = L,\quad O_2 = -0.59L + 0.80M - 0.12S,\quad O_3 = -0.34L - 0.11M + 0.93S$$


CIC-19, San Jose, CA, 8 November 2011 32/120

CIE L*a*b* and its linearized version YCxCz in terms of CIE XYZ

• CIE L*a*b*

$$L^* = 116\,f(Y/Y_n) - 16,\quad a^* = 500\left[f(X/X_n) - f(Y/Y_n)\right],\quad b^* = 200\left[f(Y/Y_n) - f(Z/Z_n)\right]$$

$$f(x) = \begin{cases} x^{1/3}, & 0.008856 \le x \le 1 \\ 7.787x + 16/116, & 0 \le x < 0.008856 \end{cases}$$

white point: $(X_n, Y_n, Z_n)$

• Linearized opponent color space YyCxCz

$$Y_y = 116\,(Y/Y_n),\quad C_x = 500\left[(X/X_n) - (Y/Y_n)\right],\quad C_z = 200\left[(Y/Y_n) - (Z/Z_n)\right]$$

$Y_y$ – correlate of luminance
$C_x$ – R-G opponent color chrominance channel
$C_z$ – Y-B opponent color chrominance channel

[Diagram of the $L^*$, $\pm a^*$, $\pm b^*$ axes]
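A minimal sketch of both transforms, using the D65 white point (as on the later Neugebauer-model slide) as an assumed normalization:

    import numpy as np

    XN, YN, ZN = 0.9505, 1.0000, 1.0890   # assumed D65 white point

    def f(x):
        x = np.asarray(x, dtype=float)
        return np.where(x > 0.008856, np.cbrt(x), 7.787 * x + 16.0 / 116.0)

    def xyz_to_lab(X, Y, Z):
        L = 116.0 * f(Y / YN) - 16.0
        a = 500.0 * (f(X / XN) - f(Y / YN))
        b = 200.0 * (f(Y / YN) - f(Z / ZN))
        return L, a, b

    def xyz_to_ycxcz(X, Y, Z):
        # Linearized version: drop the cube-root nonlinearity
        Yy = 116.0 * (Y / YN)
        Cx = 500.0 * (X / XN - Y / YN)
        Cz = 200.0 * (Y / YN - Z / ZN)
        return Yy, Cx, Cz

    print(xyz_to_lab(0.5, 0.5, 0.5), xyz_to_ycxcz(0.5, 0.5, 0.5))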


CIC-19, San Jose, CA, 8 November 2011 33/120

Wandell’s space in terms of CIE XYZ*

$$\begin{bmatrix} O_1 \\ O_2 \\ O_3 \end{bmatrix} = \begin{bmatrix} 0.000 & 1.000 & 0.000 \\ 0.449 & -0.290 & 0.077 \\ 0.086 & -0.590 & 0.501 \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}$$

*Wen Wu, “Two Problems in Digital Color Imaging: Colorimetry and Image Fidelity Assessor,” Ph.D. Dissertation, Purdue University, Dec. 2000


CIC-19, San Jose, CA, 8 November 2011 34/120

Visualization of opponent color representation

[Images: (Y, 0.24, 0.17), (13.3, o2, 0.17), and (13.3, 0.24, o3), each varying one component of the opponent representation (Y, o2, o3)]


CIC-19, San Jose, CA, 8 November 2011 35/120

Basic spatiochromatic model structure

[Block diagram: Stimulus → Trichromatic Stage → LMS → Opponent Stage → O1, O2, O3 → Spatial Frequency Filtering Stage → Õ1, Õ2, Õ3]


CIC-19, San Jose, CA, 8 November 2011 36/120

Impact of viewing geometry on spatial frequencies

• Both arrows A and B generate the same retinal image
• For a small ratio $h_A / d_A$, the angle subtended at the retina in radians is $h_A / d_A$


CIC-19, San Jose, CA, 8 November 2011 37/120

Spatial frequency conversion

• To convert between $u$ (cycles/inch) viewed at distance $D$ (inches) and $\tilde{u}$ (cycles/degree) subtended at the retina, we thus have

$$\tilde{u}\ \text{(cycles/degree)} = \frac{\pi D}{180}\, u\ \text{(cycles/inch)}$$

• For a viewing distance of 12 inches, this becomes

$$\tilde{u}\ \text{(cycles/degree)} \approx 0.21\, u\ \text{(cycles/inch)}$$
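The conversion written as a small helper; the 300 cycles/inch example below is an assumption chosen to illustrate a typical printer fundamental.

    import math

    def cycles_per_inch_to_cycles_per_degree(u_cpi, viewing_distance_in):
        """Convert spatial frequency on the page to frequency subtended at the retina."""
        return (math.pi * viewing_distance_in / 180.0) * u_cpi

    # A 300 dpi dot pattern has a fundamental near 300 cycles/inch:
    print(cycles_per_inch_to_cycles_per_degree(300.0, 12.0))   # about 62.8 cycles/degree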


CIC-19, San Jose, CA, 8 November 2011 38/120

Spatial frequency filtering stage

• Based on psychophysical measurements of the contrast sensitivity function

• Use sinusoidal stimuli with modulation along achromatic, red-green, or blue-yellow axes

• For any fixed spatial frequency, the threshold of visibility depends only on $\Delta L / L$. This is Weber's Law.


CIC-19, San Jose, CA, 8 November 2011 39/120

Campbell’s contrast sensivity function on log-log axes


CIC-19, San Jose, CA, 8 November 2011 40/120

Dependence of sine wave visibility on contrast and spatial frequency


CIC-19, San Jose, CA, 8 November 2011 41/120

Models for achromatic spatial contrast sensitivity*

• Campbell 1969: $H(\rho) = k\left(e^{-2\pi\alpha\rho} - e^{-2\pi\beta\rho}\right)$; $\alpha = 0.012$, $\beta = 0.046$

• Mannos 1974: $H(\rho) = a\,(b + c\rho)\exp\left(-(c\rho)^d\right)$; $a = 2.6$, $b = 0.192$, $c = 0.114$, $d = 1.1$

• Nasanen 1984: $H(\rho) = a\,L^b \exp\left(-\dfrac{\rho}{c\log L + d}\right)$; $a = 131.6$, $b = 0.3188$, $c = 0.525$, $d = 3.91$, $L = 11$

• Daly 1987: $H(\rho) = \begin{cases} a\,(b + c\rho)\,e^{-(c\rho)^d}, & \rho > \rho_{\max} \\ 1, & \text{else} \end{cases}$; $a = 2.2$, $b = 0.192$, $c = 0.114$, $d = 1.1$, $\rho_{\max} = 6.6$

*Kim and Allebach, IEEE T-IP, March 2002
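A direct transcription of the table's formulas as functions of radial frequency ρ (cycles/degree), using the constants listed; the natural log is assumed for "log L".

    import numpy as np

    def csf_campbell(rho, k=1.0, alpha=0.012, beta=0.046):
        return k * (np.exp(-2 * np.pi * alpha * rho) - np.exp(-2 * np.pi * beta * rho))

    def csf_mannos(rho, a=2.6, b=0.192, c=0.114, d=1.1):
        return a * (b + c * rho) * np.exp(-(c * rho) ** d)

    def csf_nasanen(rho, a=131.6, b=0.3188, c=0.525, d=3.91, L=11.0):
        # natural log assumed for "log L"
        return a * L ** b * np.exp(-rho / (c * np.log(L) + d))

    def csf_daly(rho, a=2.2, b=0.192, c=0.114, d=1.1, rho_max=6.6):
        rho = np.asarray(rho, dtype=float)
        h = a * (b + c * rho) * np.exp(-(c * rho) ** d)
        return np.where(rho > rho_max, h, 1.0)   # flat (no rolloff) below rho_max

    rho = np.linspace(0.0, 60.0, 5)
    print(csf_nasanen(rho))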


CIC-19, San Jose, CA, 8 November 2011 42/120

Achromatic spatial contrast sensitivity curves


CIC-19, San Jose, CA, 8 November 2011 43/120

Chrominance spatial frequency response

• Based on Mullen’s data*

$$W(f) = A\exp(-\alpha f),\quad \alpha = 0.419,\ A = 100$$

*K.T. Mullen, J. Physiol., 1985


CIC-19, San Jose, CA, 8 November 2011 44/120

Spatial Frequency Response of Opponent Channels

Luminance [Nasanen]: $\left|H_{Y_y}(u,v)\right|^2$
Chrominance [Kolpatzik and Bouman*]: $\left|H_{C_x,C_z}(u,v)\right|^2$

*B. Kolpatzik and C. A. Bouman, J. Electr. Imaging, July 1992
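A sketch of the filtering stage: build lowpass frequency-domain weights and apply them to each opponent channel. The slide's chrominance response is the Kolpatzik-Bouman model; as a stand-in, the sketch uses the simpler Mullen-style exponential from the previous slide, and the pixels-per-degree figure and synthetic test images are assumptions.

    import numpy as np

    def filter_opponent_channels(Yy, Cx, Cz, pixels_per_degree=60.0):
        """Filter opponent-channel images with simple HVS frequency responses."""
        h, w = Yy.shape
        fy = np.fft.fftfreq(h) * pixels_per_degree      # cycles/degree
        fx = np.fft.fftfreq(w) * pixels_per_degree
        rho = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))

        # Nasanen-style luminance response, normalized to 1 at DC
        lum = np.exp(-rho / (0.525 * np.log(11.0) + 3.91))
        # Mullen-style chrominance lowpass (stand-in for Kolpatzik-Bouman)
        chrom = np.exp(-0.419 * rho)

        def apply(img, H):
            return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

        return apply(Yy, lum), apply(Cx, chrom), apply(Cz, chrom)

    rng = np.random.default_rng(0)
    Yy, Cx, Cz = (rng.random((64, 64)) for _ in range(3))
    Yy_f, Cx_f, Cz_f = filter_opponent_channels(Yy, Cx, Cz)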


CIC-19, San Jose, CA, 8 November 2011 45/120

Illustration of difference in spatial frequency response of luminance and chrominance channels

Original image O1- filtered


CIC-19, San Jose, CA, 8 November 2011 46/120

Illustration of difference in spatial frequency response of luminance and chrominance channels

Original image O2- filtered


CIC-19, San Jose, CA, 8 November 2011 47/120

Illustration of difference in spatial frequency response of luminance and chrominance channels

Original image O3- filtered


CIC-19, San Jose, CA, 8 November 2011 48/120

Application areas for spatiochromatic models

• Color image display on low-cost devices
- PDA
- Cellphone
• Color image printing
- Inkjet
- Laser electrophotographic
• Digital video display
- LCD
- DMD
- Plasma panel
• Lossy color image compression
- JPEG
- MPEG


CIC-19, San Jose, CA, 8 November 2011 49/120

Synopsis of tutorial

• General framework for spatiochromatic models for the HVS

• Introduction to digital halftoning
• Application of spatiochromatic models to design of color halftones
• Overview of use of HVS models in image quality assessment
• Color Image Fidelity Assessor


CIC-19, San Jose, CA, 8 November 2011 50/120

What is digital halftoning?

• Digital halftoning is the process of rendering a continuous-tone image with a device that is capable of generating only two or a few levels of gray at each point on the device output surface.

• The perception of additional levels of gray depends on a local average of the binary or multilevel texture.


CIC-19, San Jose, CA, 8 November 2011 51/120

What is digital halftoning (cont.)

• Detail is rendered by local modulation of the texture


CIC-19, San Jose, CA, 8 November 2011 52/120

The Two Fundamental Goals of Digital Halftoning

• Representation of Tone
- smooth, homogeneous texture
- free from visible structure or contouring

Diamond dot screen

Bayer screen

Error diffusion

DBS


CIC-19, San Jose, CA, 8 November 2011 53/120

The Two Fundamental Goals of Digital Halftoning (cont.)

• Representation of Detail
- sharp, distinct, and good contrast in rendering of fine structure in image
- good rendering of lines, edges, and type characters
- freedom from moire due to interference between halftone algorithm and image content

Diamond dot screen

DBS screen Error diffusion DBS


CIC-19, San Jose, CA, 8 November 2011 54/120

Types of Halftone Texture

[2x2 grid of example textures: Clustered Dot and Dispersed Dot (rows) x Periodic and Aperiodic (columns)]


CIC-19, San Jose, CA, 8 November 2011 55/120

Basic structure of screening algorithm

[Figure: a 4x4 continuous-tone block with values 1 and 3 is compared against a tiled 2x2 threshold matrix (thresholds 0.5, 1.5, 2.5, 3.5) to produce the halftone image]

The threshold matrix is periodically tiled over the entire continuous-tone image.
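A compact sketch of the compare-against-tiled-threshold operation; the 2x2 layout of the threshold values from the figure is an assumption.

    import numpy as np

    def screen(contone, threshold_matrix):
        """Halftone by comparing each pixel against a periodically tiled threshold matrix."""
        h, w = contone.shape
        th, tw = threshold_matrix.shape
        reps = (int(np.ceil(h / th)), int(np.ceil(w / tw)))
        tiled = np.tile(threshold_matrix, reps)[:h, :w]
        return (contone >= tiled).astype(np.uint8)

    # Threshold values from the figure, arranged as a 2x2 matrix (assumed layout)
    T = np.array([[0.5, 2.5],
                  [3.5, 1.5]])

    contone = np.array([[1, 1, 1, 1],
                        [1, 3, 3, 1],
                        [1, 3, 3, 1],
                        [1, 1, 1, 1]])
    print(screen(contone, T))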


CIC-19, San Jose, CA, 8 November 2011 56/120

Error diffusion

$$u[m+k, n+l] \leftarrow u[m+k, n+l] - w_{k,l}\, d[m,n]$$

$$g[m,n] = \begin{cases} 1, & u[m,n] \ge t[m,n] \\ 0, & \text{else} \end{cases}$$

$$d[m,n] = g[m,n] - u[m,n]$$

[Block diagram: input $f[m,n]$ enters a summing node to give $u[m,n]$, which is quantized by $Q(\cdot)$ to give $g[m,n]$; the error $d[m,n] = g[m,n] - u[m,n]$ is fed back through the weights $w_{k,l}$ and subtracted at the summing node]
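The update equations above, implemented with the Floyd-Steinberg weights as an example weight set (the slide does not fix the weights); input in [0, 1], binary output, and a threshold of 0.5 are assumptions.

    import numpy as np

    # Floyd-Steinberg weights w[k,l]: offsets relative to the current pixel [m, n]
    FS_WEIGHTS = [((0, 1), 7 / 16), ((1, -1), 3 / 16), ((1, 0), 5 / 16), ((1, 1), 1 / 16)]

    def error_diffuse(f, threshold=0.5):
        """Binary error diffusion of a grayscale image f with values in [0, 1]."""
        u = f.astype(float).copy()          # u[m,n] starts as f[m,n]
        g = np.zeros_like(u)
        M, N = u.shape
        for m in range(M):
            for n in range(N):
                g[m, n] = 1.0 if u[m, n] >= threshold else 0.0
                d = g[m, n] - u[m, n]       # quantization error d[m,n]
                for (k, l), w in FS_WEIGHTS:
                    if 0 <= m + k < M and 0 <= n + l < N:
                        u[m + k, n + l] -= w * d   # u[m+k,n+l] <- u[m+k,n+l] - w[k,l] d[m,n]
        return g

    print(error_diffuse(np.full((8, 8), 0.25)))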


CIC-19, San Jose, CA, 8 November 2011 57/120

Direct binary search*

*D. Lieberman and J. P. Allebach, IEEE T-IP, Nov. 2002

Printer model


CIC-19, San Jose, CA, 8 November 2011 58/120

The Search Heuristic

Toggle

Swap 1

Swap 2

Swap 3

Accept pattern with lowest error
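A much-simplified sketch of the search heuristic: for each pixel, try a toggle and swaps with its neighbors, evaluate a perceptually filtered squared error for each trial, and accept the pattern with the lowest error. A small Gaussian stands in for the HVS point spread function, and the error is re-computed from scratch; the actual DBS implementation of Lieberman and Allebach uses the HVS autocorrelation and efficient incremental updates.

    import numpy as np
    from scipy.ndimage import gaussian_filter   # Gaussian as a stand-in for the HVS PSF

    def perceived_error(g, f, sigma=1.5):
        """Squared error between halftone g and contone f after HVS-like lowpass filtering."""
        e = gaussian_filter(g - f, sigma)
        return float(np.sum(e * e))

    def dbs_pass(g, f):
        """One sweep of toggle/swap trials; returns the (possibly improved) halftone."""
        M, N = g.shape
        best = perceived_error(g, f)
        for m in range(M):
            for n in range(N):
                trials = [("toggle", None)]
                trials += [("swap", (m + dm, n + dn))
                           for dm in (-1, 0, 1) for dn in (-1, 0, 1)
                           if (dm, dn) != (0, 0) and 0 <= m + dm < M and 0 <= n + dn < N]
                for kind, pos in trials:
                    cand = g.copy()
                    if kind == "toggle":
                        cand[m, n] = 1.0 - cand[m, n]
                    elif cand[pos] != cand[m, n]:
                        cand[m, n], cand[pos] = cand[pos], cand[m, n]
                    else:
                        continue
                    err = perceived_error(cand, f)
                    if err < best:              # accept the pattern with the lowest error
                        g, best = cand, err
        return g

    f = np.full((16, 16), 0.3)
    g0 = (np.random.default_rng(1).random(f.shape) < f).astype(float)
    g1 = dbs_pass(g0, f)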


CIC-19, San Jose, CA, 8 November 2011 59/120

DBS Convergence: 0, 1, 2, 4, 6, and 8 Iterations


CIC-19, San Jose, CA, 8 November 2011 60/120

Swaps vs. Toggles

Toggle only Swap and toggle


CIC-19, San Jose, CA, 8 November 2011 61/120

Dual interpretation for DBS

• Minimize mean-squared error at distance D

• Minimize maximum error at distance 2D


CIC-19, San Jose, CA, 8 November 2011 62/120

Illustration of Dual Interpretation

[Top row: $f[m]$, $f[m]*\tilde{p}[m]$, $f[m]*\tilde{c}_{pp}[m]$; bottom row: $g[m]$, $g[m]*\tilde{p}[m]$, $g[m]*\tilde{c}_{pp}[m]$]


CIC-19, San Jose, CA, 8 November 2011 63/120

Impact of scale parameter S on halftone texture

[The error metric uses the autocorrelation $c_{\tilde p\tilde p}[m,n]$ of the HVS point spread function $h(s,t)$, sampled at the printer pixel spacing determined by the scale parameter $S$]

$S_1 = 0.5\,S_2$, $S_2 = 300 \times 9.5$, $S_3 = 2.0\,S_2$

$S = RD$, where $R$ is the printer resolution and $D$ is the viewing distance


CIC-19, San Jose, CA, 8 November 2011 64/120

Does it make a difference which model we use?

• Reason for normalization
- Bandwidths of models differ significantly.
- Causes a significant difference in texture between the models.
- For any fixed model, can achieve a similar range of textures by varying the scale parameter.
- Would like to compare the models at the same texture scale.
• Normalization method
- Match the 50% point from the maximum for Nasanen's model.


CIC-19, San Jose, CA, 8 November 2011 65/120

Normalized contrast sensitivity functions


CIC-19, San Jose, CA, 8 November 2011 66/120

Comparison between models

Nasanen Daly

Campbell Mannos


CIC-19, San Jose, CA, 8 November 2011 67/120

Comparison between models (cont.)

• In 2003, Monga, Geisler, and Evans published a comparison of the effectiveness of four different color HVS models in the context of error diffusion halftoning*

• They concluded that the Flohr et al** model resulted in the best overall image quality

*V. Monga, W. Geisler, B. L. Evans, IEEE SP Letters, April 2003 **U.Agar and J. P. Allebach, IEEE T-IP, Dec. 2005


CIC-19, San Jose, CA, 8 November 2011 68/120

Synopsis of tutorial

• General framework for spatiochromatic models for the HVS
• Introduction to digital halftoning
• Application of spatiochromatic models to design of color halftones
- Embedding of spatiochromatic model within DBS for color halftoning
- Use of spatiochromatic model with hybrid screen to improve highlight texture
- Application of spatiochromatic model to design of tile sets for periodic clustered-dot screens
• Overview of use of HVS models in image quality assessment
• Color Image Fidelity Assessor


CIC-19, San Jose, CA, 8 November 2011 69/120

Embedding of spatiochromatic model within DBS for color halftoning*

[Block diagram: the input RGB continuous-tone image $f[m]$ is converted to $Y_yC_xC_z$, giving $f_{Y_yC_xC_z}[m]$, and filtered by the luminance and chrominance spatial frequency responses of the HVS model to give $\tilde{f}_{Y_yC_xC_z}(x)$; the CMY halftone under test $g[m]$ is converted from CMY to $Y_yC_xC_z$ and filtered by the same HVS model to give $\tilde{g}_{Y_yC_xC_z}(x)$; their difference is the perceived error $\tilde{e}_{Y_yC_xC_z}(x)$, and each trial halftone change is accepted or rejected according to the error measure]

$$E = \int \tilde{e}_{Y_yC_xC_z}(x)^T\, \tilde{e}_{Y_yC_xC_z}(x)\, dx$$

*U.Agar and J. P. Allebach, IEEE T-IP, Dec. 2005


CIC-19, San Jose, CA, 8 November 2011 70/120

Results from embedding model in DBS

CDBS in RGB CDBS in YyCxCz


CIC-19, San Jose, CA, 8 November 2011 71/120

Use of spatiochromatic models with hybrid screen to improve highlight texture*

• The hybrid screen is a screening algorithm which generates stochastic dispersed-dot textures in highlights and shadows, and periodic clustered-dot textures in midtones.

• It is based on two main concepts: supercell and core

Periodic: dispersed-dot – recursive ordering pattern; clustered-dot – regularly nucleated clusters
Stochastic: dispersed-dot – blue noise; clustered-dot – green noise
(with a smooth transition between texture types)

*Lin and Allebach, IEEE T-IP, Dec. 2006; Lee and Allebach, IEEE T-IP, Feb. 2010.


CIC-19, San Jose, CA, 8 November 2011 72/120

Simple clustered-dot screen

Halftone using simple clustered-dot screen

Contouring

Continuous-tone input

a=0

a=127/255

a=16/255


CIC-19, San Jose, CA, 8 November 2011 73/120

Supercell approach

• Supercell is a set of microcells combined together as a single period of the screen
• Supercell is used
- To increase the number of gray levels
- To create a more accurately angled screen

[Figure: a microcell with macrocell growing sequence (0 2 / 1 3); increasing the gray levels; creating a more accurately angled screen (14.93° vs. 15.9°); note that the microcells are not identical]


CIC-19, San Jose, CA, 8 November 2011 74/120

Limitation on supercell

Clustered-dot microcell with Bayer macrocell growing sequence

Abrupt texture change - Bayer structure

Periodic dot withdrawal pattern

Clustered-dot microscreen with stochastic-dispersed macrocell growing sequence

Homogeneous dot distribution

Maze-like artifact

Stochastic dot withdrawal pattern


CIC-19, San Jose, CA, 8 November 2011 75/120

Role of highlight and shadow cores

• The core is a small region in each microcell where the original microcell growing sequence is ignored and the sequence can be randomized: the first dot can move around within the core, which creates a blue-noise-like texture
• There are separate core regions for highlights and shadows

[Figure: highlight core and shadow core regions within the microcell; the first dot placement within the core; conventional supercell (fixed microcell growing sequence with macrocell growing sequence 0 2 / 1 3) vs. hybrid screen with core region (microcell growing sequence varies from cell to cell)]


CIC-19, San Jose, CA, 8 November 2011 76/120

Improvement of texture quality with hybrid screen

The hybrid screen – clustered-dot microcell with 2x2 core with DBS macrocell growing sequence: more homogeneous dot distribution, no noticeable dot withdrawal pattern, no maze-like artifact

Clustered-dot microscreen with stochastic-dispersed macrocell growing sequence: homogeneous dot distribution, maze-like artifact, stochastic dot withdrawal pattern


CIC-19, San Jose, CA, 8 November 2011 77/120

Joint color screen design framework

[Block diagram: the cyan continuous-tone/halftone pair $(f^{C}_{CMYK}, g^{C}_{CMYK})$ and the magenta pair $(f^{M}_{CMYK}, g^{M}_{CMYK})$ are each passed through the luminance filter; the squared and summed differences give the error terms $E_c$ and $E_m$. The combined cyan-magenta pair $(f^{CM}_{CMYK}, g^{CM}_{CMYK})$ is transformed (T) to $Y_y$, $C_x$, $C_z$, passed through the luminance and chrominance filters, squared and summed. The terms are combined with weights $w_C$, $w_M$, and $w_{CM}$ to form the total error $E$]


CIC-19, San Jose, CA, 8 November 2011 78/120

Joint screen design results

Cyan halftone – plane independent screen design

Cyan halftone – joint screen design with magenta screen


CIC-19, San Jose, CA, 8 November 2011 79/120

Joint screen design results

Magenta halftone – plane independent screen design

Magenta halftone – joint screen design with cyan screen


CIC-19, San Jose, CA, 8 November 2011 80/120

Joint screen design results

Cyan and magenta halftone – plane independent screen design

Cyan and magenta halftone – joint screen design

Dot-on-dot printing decreased
Uniform distribution


CIC-19, San Jose, CA, 8 November 2011 81/120

Synopsis of tutorial

• General framework for spatiochromatic models for the HVS
• Introduction to digital halftoning
• Application of spatiochromatic models to design of color halftones
- Embedding of spatiochromatic model within DBS for color halftoning
- Use of spatiochromatic model with hybrid screen to improve highlight texture
- Application of spatiochromatic model to design of tile sets for periodic clustered-dot screens
• Overview of use of HVS models in image quality assessment
• Color Image Fidelity Assessor


CIC-19, San Jose, CA, 8 November 2011 82/120

Application of spatiochromatic model to design of tile sets for periodic clustered-dot screens*

Continuous Parameter Halftone Cell (CPHC)

$$\mathbf{N} = \left[\,\vec{n}_1\ \ \vec{n}_2\,\right]$$

*F. Baqai and J. P. Allebach, Proc. IEEE, Jan. 2002


CIC-19, San Jose, CA, 8 November 2011 83/120

Finding Discrete Parameter Halftone Cell (DPHC)

• Compute number of pixels in unit cell = |det(N)|

• Assign pixels to unit cell in order of decreasing area of overlap with CPHC

• Skip over pixels that are congruent to a pixel that has already been assigned to DPHC

$$\mathbf{N} = \begin{bmatrix} 5 & 2 \\ -2 & 3 \end{bmatrix}$$

[Figure: overlap areas of pixels with the CPHC and the resulting DPHC]


CIC-19, San Jose, CA, 8 November 2011 84/120

Threshold Assignment by Growing Dots and Holes Simultaneously

i[m] s[m]

Abs. = 0.26 Abs. = 0.53 Abs. = 0.74

$$\mathbf{N} = \begin{bmatrix} 5 & 2 \\ -2 & 3 \end{bmatrix}$$


CIC-19, San Jose, CA, 8 November 2011 85/120

Color Device Model

[Diagram: Neugebauer primary reflectances $R_i(\lambda)$, the D65 illuminant, and the CIE XYZ color matching functions are combined to predict the colorimetric output]


CIC-19, San Jose, CA, 8 November 2011 86/120

Opponent Color Channels

• Use linearized version of L*a*b* color space to represent opponent color channels of the human visual system [Flohr et al, 1993]

$$Y_y = 116\frac{Y}{Y_n} - 16,\quad C_x = 500\left[\frac{X}{X_n} - \frac{Y}{Y_n}\right],\quad C_z = 200\left[\frac{Y}{Y_n} - \frac{Z}{Z_n}\right]$$

where $(X_n, Y_n, Z_n)$ is the D65 white point


CIC-19, San Jose, CA, 8 November 2011 87/120

Spatial Frequency Response of Opponent Channels

[Plots of the luminance (Nasanen) and chrominance (Kolpatzik and Bouman) spatial frequency responses; axes in cycles/sample]


CIC-19, San Jose, CA, 8 November 2011 88/120

Overall Framework for Perceptual Model Part I

εXYZ


CIC-19, San Jose, CA, 8 November 2011 89/120

Overall Framework for Perceptual Model Part II

[Diagram: the colorimetric error $\varepsilon_{XYZ}$ is transformed to the opponent channels and filtered, yielding $\tilde{\varepsilon}_{Y_y}$, $\tilde{\varepsilon}_{C_x}$, $\tilde{\varepsilon}_{C_z}$]


CIC-19, San Jose, CA, 8 November 2011 90/120

Magnified Scanned Textures for Various Screens

Absorptance = 0.25

[Scanned textures: Best; Worst; Optimized for Registration Errors; Conventional. MSE values for the other screens: 9 x Best, 4 x Best, 5 x Best]


CIC-19, San Jose, CA, 8 November 2011 91/120

Weighted Spectra of Error in YyCxCz

Best Worst


CIC-19, San Jose, CA, 8 November 2011 92/120

Weighted Spectra of Error in YyCxCz

Optimized for Registration Errors Conventional


CIC-19, San Jose, CA, 8 November 2011 93/120

Synopsis of tutorial

• General framework for spatiochromatic models for the HVS

• Introduction to digital halftoning
• Application of spatiochromatic models to design of color halftones
• Overview of use of HVS models in image quality assessment
• Color Image Fidelity Assessor


CIC-19, San Jose, CA, 8 November 2011 94/120

An image quality example

• Noise in low-light parts of scene
• Moire on Prof. Bouman's shirt
• JPEG artifacts
• Poor color rendering – too red
• Poor contrast and tone – too dark
• Glare from flash

"It's a little frightening to think that this picture is associated with the Transactions on Image Processing" – C. A. Bouman


CIC-19, San Jose, CA, 8 November 2011 95/120

Imaging Pipeline

[Pipeline diagram: Image capture (camera, scanner) → Image processing (enhance, compose, compress) → Image output (display, printer)]


CIC-19, San Jose, CA, 8 November 2011 96/120

Image quality perspectives – image vs. system

• Imaging system-based
- Resolution (modulation transfer function)
- Dynamic range
- Noise characteristics
• Image-based
- Sharpness
- Contrast
- Graininess/mottle


CIC-19, San Jose, CA, 8 November 2011 97/120

Image quality vs. print quality

• Image quality
- Broader viewpoint
- Often focuses on issues that arise during the image processing phase below, especially compression.
- May also consider image capture and display
• Print quality
- Specifically considers issues that arise during printing

Image capture → Image processing → Image output


CIC-19, San Jose, CA, 8 November 2011 98/120

Typical image quality issues

• See discussion of photograph of Charles A. Bouman et al


CIC-19, San Jose, CA, 8 November 2011 99/120

Typical print quality issues

• Bands – orthogonal to process direction
• Streaks – parallel to process direction
• Spots
- Repetitive
- Random
• Color plane registration errors
• Ghosting
• Toner scatter
• Swath misalignment

http://www.hp.com/cpso-support-new/pq/4700/home.html


CIC-19, San Jose, CA, 8 November 2011 100/120

Image quality assessment functionalities

• Metrics vs. maps
- Local or global strength of a particular defect – a single number
- Map showing defect strength throughout the image – an image
• Single defect vs. summative measures
- Assess strength of a single defect, e.g. noise
- Assess overall image quality – must account for all significant defects and their interactions
• Reference vs. no-reference methods


CIC-19, San Jose, CA, 8 November 2011 101/120

Image quality assessment factors

• Masking – image content may mask visibility of defect
- Texture
- Edges

• Tent-pole effect – the worst defect dominates the percept of image quality defects and the overall assessment of image quality


CIC-19, San Jose, CA, 8 November 2011 102/120

Pyramid-Based Image Quality Metrics

Monochromatic, chromatic, and not-HVS-based metrics:

• Daly, 1993: Visual Difference Predictor (VDP)
• Teo & Heeger, 1994: Perceptual Distortion Metric (PDM)
• Lubin, 1995: Sarnoff Visual Discrimination Model (VDM)
• Watson & Solomon, 1997: Model of Visual Contrast Gain Control
• Taylor & Allebach, 1998: Image Fidelity Assessor (IFA)
• Zhang & Wandell, 1998: Color Image Distortion Maps
• Jin, Feng & Newell, 1998: Color Visual Difference Model (CVDM)
• Doll et al., 1998: Georgia Tech Vision (GTV) Model
• Avadhana & Algazi, 1999: Picture Distortion Metric
• Wencheng Wu, 2000: Color Image Fidelity Assessor (CIFA)
• Lian, 2001: Color Visual Difference Predictor (CVDP)
• Wang & Bovik, 2002: SSIM (not HVS based)
• Mantiuk & Daly, 2005: High Dynamic Range VDP
• Watson & Ahumada, 2005: Model for Fovea Detection of Spatial Contrast


CIC-19, San Jose, CA, 8 November 2011 103/120

Structural Similarity (SSIM) Index*

• The SSIM Index expresses the similarity of image X and image Y at a point (i, j)

$$\mathrm{SSIM}_{X,Y}(i,j) = L_{X,Y}(i,j)\cdot C_{X,Y}(i,j)\cdot S_{X,Y}(i,j)$$

where $L_{X,Y}(i,j)$ is a measure of local luminance similarity, $C_{X,Y}(i,j)$ is a measure of local contrast similarity, and $S_{X,Y}(i,j)$ is a measure of local structure similarity

*Wang & Bovik, IEEE Signal Processing Letters, March 02
*Wang, Bovik, Sheikh & Simoncelli, Trans on IP, March 04


CIC-19, San Jose, CA, 8 November 2011 104/120

Luminance similarity

$$L_{X,Y}(i,j) = \frac{2\,\mu_X(i,j)\,\mu_Y(i,j) + C_1}{\mu_X^2(i,j) + \mu_Y^2(i,j) + C_1}$$

where the local average luminance is computed with the window function $w$:

$$\mu_K(i,j) = \sum_{p=-P}^{P}\sum_{q=-Q}^{Q} w(p,q)\,K(i+p, j+q),\quad K = X\ \text{or}\ Y$$

A typical window is an 11x11 circular-symmetric Gaussian weighting function.


CIC-19, San Jose, CA, 8 November 2011 105/120

Contrast similarity

$$C_{X,Y}(i,j) = \frac{2\,\sigma_X(i,j)\,\sigma_Y(i,j) + C_2}{\sigma_X^2(i,j) + \sigma_Y^2(i,j) + C_2}$$

where

$$\sigma_K^2(i,j) = \sum_{p=-P}^{P}\sum_{q=-Q}^{Q} w(p,q)\left[K(i+p, j+q) - \mu_K(i,j)\right]^2,\quad \mu_K(i,j) = \sum_{p=-P}^{P}\sum_{q=-Q}^{Q} w(p,q)\,K(i+p, j+q)$$

$K = X$ or $Y$, the luminance of the pixel; a typical window is an 11x11 circular-symmetric Gaussian weighting function.


CIC-19, San Jose, CA, 8 November 2011 106/120

Structural similarity

$$S_{X,Y}(i,j) = \frac{\sigma_{XY}(i,j) + C_3}{\sigma_X(i,j)\,\sigma_Y(i,j) + C_3}$$

where

$$\sigma_{XY}(i,j) = \sum_{p=-P}^{P}\sum_{q=-Q}^{Q} w(p,q)\left[X(i+p, j+q) - \mu_X(i,j)\right]\left[Y(i+p, j+q) - \mu_Y(i,j)\right]$$

Structure comparison is conducted after luminance subtraction and variance normalization. Specifically, Prof. Bovik associates $\left(X(i,j) - \mu_X(i,j)\right)/\sigma_X(i,j)$ and $\left(Y(i,j) - \mu_Y(i,j)\right)/\sigma_Y(i,j)$ with the structure of the two images; $S_{X,Y}$ is the correlation coefficient between X and Y.
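The three comparisons combined into a pointwise SSIM map, using a Gaussian window as in the slides; the window width and the stabilizing constants C1, C2, C3 (the common defaults for 8-bit images) are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def ssim_map(X, Y, sigma=1.5, L=255.0, K1=0.01, K2=0.03):
        """Pointwise SSIM(i,j) = L(i,j) * C(i,j) * S(i,j) with a Gaussian window."""
        X = X.astype(float); Y = Y.astype(float)
        C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
        C3 = C2 / 2.0                                    # common choice

        mu_x = gaussian_filter(X, sigma)                 # local means
        mu_y = gaussian_filter(Y, sigma)
        var_x = gaussian_filter(X * X, sigma) - mu_x ** 2
        var_y = gaussian_filter(Y * Y, sigma) - mu_y ** 2
        cov_xy = gaussian_filter(X * Y, sigma) - mu_x * mu_y
        sig_x = np.sqrt(np.maximum(var_x, 0))
        sig_y = np.sqrt(np.maximum(var_y, 0))

        lum = (2 * mu_x * mu_y + C1) / (mu_x ** 2 + mu_y ** 2 + C1)
        con = (2 * sig_x * sig_y + C2) / (var_x + var_y + C2)
        struct = (cov_xy + C3) / (sig_x * sig_y + C3)
        return lum * con * struct

    rng = np.random.default_rng(0)
    X = rng.random((64, 64)) * 255
    Y = X + rng.normal(0, 10, X.shape)
    print(ssim_map(X, Y).mean())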


CIC-19, San Jose, CA, 8 November 2011 107/120

Synopsis of tutorial

• General framework for spatiochromatic models for the HVS

• Introduction to digital halftoning
• Application of spatiochromatic models to design of color halftones
• Overview of use of HVS models in image quality assessment
• Color Image Fidelity Assessor


CIC-19, San Jose, CA, 8 November 2011 108/120

*C. Taylor, Z. Pizlo, and J. P. Allebach, IS&T PICS, May. 1998

*


CIC-19, San Jose, CA, 8 November 2011 109/120

Model for assessment of color image fidelity

• Color extension of Taylor's achromatic IFA
• The model predicts perceived image fidelity
- Assesses visible differences in the opponent channels
- Explains the nature of visible difference (luminance change vs. color shift)

[Diagram: the ideal and rendered images, together with viewing parameters, are input to the Color Image Fidelity Assessor (CIFA), which outputs image maps of predicted visible differences]

*W. Wu, Z. Pizlo, and J. P. Allebach, IS&T PICS, Apr. 2001


CIC-19, San Jose, CA, 8 November 2011 110/120

Chromatic difference (Definition)

• Objective: evaluate the spatial interaction between colors

• First transform CIE XYZ to opponent color space (O2, O3)* (luminance, red-green, blue-yellow):

$$\begin{bmatrix} Y \\ O_2 \\ O_3 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0.449 & -0.29 & 0.077 \\ 0.086 & -0.59 & 0.501 \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}$$

* X. Zhang and B. A. Wandell, "A spatial extension of CIELAB for digital color image reproduction," SID-97

• Then normalize to obtain opponent chromaticities (o2, o3):

$$(o_2, o_3) = (O_2/Y,\ O_3/Y)$$

• Define chromatic difference (analogous to luminance contrast c1):

$$c_i = \frac{o_{\max} - o_{\min}}{2},\quad i = 2, 3$$
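A sketch that applies the transform above to an XYZ image, normalizes to opponent chromaticities, and computes the chromatic difference over non-overlapping blocks; the block size and the synthetic test image are assumptions.

    import numpy as np

    M = np.array([[0.000, 1.000, 0.000],
                  [0.449, -0.290, 0.077],
                  [0.086, -0.590, 0.501]])

    def chromatic_difference(xyz, win=8):
        """xyz: image of shape (H, W, 3). Returns (c2, c3) per window block."""
        opp = xyz @ M.T                                  # (Y, O2, O3)
        Y = opp[..., 0]
        o2, o3 = opp[..., 1] / Y, opp[..., 2] / Y        # opponent chromaticities
        H, W = Y.shape
        c2 = np.zeros((H // win, W // win)); c3 = np.zeros_like(c2)
        for i in range(0, H - win + 1, win):
            for j in range(0, W - win + 1, win):
                b2 = o2[i:i + win, j:j + win]
                b3 = o3[i:i + win, j:j + win]
                c2[i // win, j // win] = (b2.max() - b2.min()) / 2.0
                c3[i // win, j // win] = (b3.max() - b3.min()) / 2.0
        return c2, c3

    rng = np.random.default_rng(0)
    xyz = rng.random((32, 32, 3)) + 0.1                  # keep Y away from zero
    c2, c3 = chromatic_difference(xyz)
    print(c2.mean(), c3.mean())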


CIC-19, San Jose, CA, 8 November 2011 111/120

Opponent color representation

[Images: (Y, 0.24, 0.17), (13.3, o2, 0.17), and (13.3, 0.24, o3), each varying one component of the opponent representation (Y, o2, o3)]


CIC-19, San Jose, CA, 8 November 2011 112/120

Chromatic difference (illustration)

• Chromatic difference is a measure of chromaticity variation
• Chromatic difference is a spatial feature derived from opponent chromaticity that has little dependence upon luminance
• Chromatic difference is the amplitude of the sinusoidal grating

[Example gratings with chromaticity modulations $0.1\cos(\pi J/16)$, $0.2\cos(\pi J/16)$, $0.05\cos(\pi J/16)$, $0.1\cos(\pi J/16)$, i.e. chromatic differences 0.1, 0.2, 0.05, 0.1]

Gray at pixel (I, J): $(Y, o_2, o_3) = (6.885,\ 0.235924,\ 0.174438)$


CIC-19, San Jose, CA, 8 November 2011 113/120

CIFA

[Block diagram: the ideal and rendered images are represented in opponent channels (Y, O2, O3); the ideal and rendered Y images feed the achromatic* IFA (which uses multi-resolution Y images), the O2 images feed the red-green IFA, and the O3 images feed the blue-yellow IFA (the chromatic IFAs); the outputs are image maps of predicted visible luminance, red-green, and blue-yellow differences]

* Previous work of Taylor et al

(Y, O2, O3): Opponent representation of an image


CIC-19, San Jose, CA, 8 November 2011 114/120

[Block diagram of the red-green IFA and the achromatic IFA. Red-green IFA components: lowpass pyramids and chromatic difference decomposition for both images, a channel response predictor with limited memory, a psychometric selector, a psychometric LUT (f, o2, c2) for chromatic difference discrimination, probability summation, internal noise η, and the adaptation level. The achromatic IFA has the same structure, with contrast decomposition and a psychometric LUT (f, Y, c1) for luminance contrast discrimination (Γ1, Γ2). Contrast: luminance contrast & chromatic difference]


CIC-19, San Jose, CA, 8 November 2011 115/120

Estimating parameters of LUT (Psychophysical method)

• Red-green stimulus: (Y, o2, o3) specifies the background color, c2 is the reference chromatic difference
• Which stimulus has less chromatic difference?

[Two stimuli on the background (Y, o2, o3) = (6.885, 0.2, -0.3): one modulated with chromatic difference $c_2\cos(\cdot)$, the other with $(c_2 + \Delta c)\cos(\cdot)$]

[Psychometric function: probability of choosing left vs. $\Delta c$, for (Y, o2, o3) = (6.885, 0.2, -0.3), f = 4 cycles/degree, $c_2$ = 0.1, $\Delta c$ = 0.1, viewed from 1.5 m away]


CIC-19, San Jose, CA, 8 November 2011 116/120

Representative results

• Results for f = 16, 8, 4, 2, 1 cycle/deg are drawn in red, green, blue, yellow, and black.

• Threshold is not affected strongly by the reference chromatic difference

• Chromatic channels function like low-pass filters

[Plots of discrimination threshold vs. reference $c_2$ (red-green discrimination at RG1: (Y, o2, o3) = (5, 0.2, -0.3)) and vs. reference $c_3$ (blue-yellow discrimination at BY1: (Y, o2, o3) = (5, 0.3, 0.2))]


CIC-19, San Jose, CA, 8 November 2011 117/120

CIFA output for example distortions(Hue change)

Luminance R-G B-Y


CIC-19, San Jose, CA, 8 November 2011 118/120

CIFA output for example distortions(Blurring)

Luminance R-G B-Y


CIC-19, San Jose, CA, 8 November 2011 119/120

CIFA output for example distortions(Limited gamut)

Luminance R-G B-Y


CIC-19, San Jose, CA, 8 November 2011 120/120

Thank you for your

attention!