
MEASUREMENT AND ANALYSIS OF DEFECT DEVELOPMENT IN DIGITAL IMAGERS

by

Jenny Leung Bachelor of Computer Engineering, University of Victoria, 2006

THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

MASTER OF APPLIED SCIENCE

In the Faculty of Engineering

© Jenny Leung 2011

SIMON FRASER UNIVERSITY

Spring 2011

All rights reserved. However, in accordance with the Copyright Act of Canada, this work may be reproduced, without authorization, under the conditions for Fair Dealing. Therefore, limited reproduction of this work for the purposes of private study, research, criticism, review and news reporting is likely to be in accordance with the law, particularly if cited appropriately.


APPROVAL

Name: Jenny Leung

Degree: Master of Applied Science

Title of Thesis: Measurement and Analysis of Defect Development in Digital Imagers.

Examining Committee:

Chair: Dr. Albert Leung, PEng
Professor, Engineering Science

Dr. Glenn H. Chapman, PEng
Senior Supervisor
Professor, Engineering Science

Dr. Marinko V. Sarunic, PEng
Supervisor
Assistant Professor, Engineering Science

Dr. Israel Koren
External Examiner
Dept. of Electrical and Computer Engineering, University of Massachusetts at Amherst

Date Defended/Approved: April 21, 2011


Declaration of Partial Copyright Licence The author, whose copyright is declared on the title page of this work, has granted to Simon Fraser University the right to lend this thesis, project or extended essay to users of the Simon Fraser University Library, and to make partial or single copies only for such users or in response to a request from the library of any other university, or other educational institution, on its own behalf or for one of its users.

The author has further granted permission to Simon Fraser University to keep or make a digital copy for use in its circulating collection (currently available to the public at the “Institutional Repository” link of the SFU Library website <www.lib.sfu.ca> at: <http://ir.lib.sfu.ca/handle/1892/112>) and, without changing the content, to translate the thesis/project or extended essays, if technically possible, to any medium or format for the purpose of preservation of the digital work.

The author has further agreed that permission for multiple copying of this work for scholarly purposes may be granted by either the author or the Dean of Graduate Studies.

It is understood that copying or publication of this work for financial gain shall not be allowed without the author’s written permission.

Permission for public performance, or limited permission for private scholarly use, of any multimedia materials forming part of this work, may have been granted by the author. This information may be found on the separately catalogued multimedia material and in the signed Partial Copyright Licence.

While licensing SFU to permit the above uses, the author retains copyright in the thesis, project or extended essays, including the right to change the work for subsequent purposes, including editing and publishing the work in whole or in part, and licensing other parties, as the author may desire.

The original Partial Copyright Licence attesting to these terms, and signed by this author, may be found in the original bound copy of this work, retained in the Simon Fraser University Archive.

Simon Fraser University Library Burnaby, BC, Canada


ABSTRACT

This thesis experimentally investigated the development of defects in commercial cameras ranging from high-end DSLRs and moderate point-and-shoot cameras to cellphone cameras. All tested cameras operating in the terrestrial environment developed hot pixels. In this study, calibration procedures are used to measure defect parameters and collect spatial data. Software tools are built to trace the temporal growth of defects from historical camera images. The imaging processes, demosaicing and JPEG compression, are explored for their effect on defects. Statistical methods are developed to analyze the spatial and temporal distributions and identify the defect causal source. The impact of camera design parameters (ISO, sensor size and pixel size) on imager defects is investigated. An empirical formula is created from the data to project the defect growth rate as a function of the sensor design parameters.

In addition, multi-finger photogate pixels are measured over the visible spectrum and the enhancement in sensitivity of these designs is explored.

Keywords: image sensors; hot pixels; defective pixels; demosaicing; fault-tolerance; photogate


ACKNOWLEDGEMENTS

I would like to thank my thesis committee members Glenn, Israel and Marinko for their participation in getting me through my thesis. I want to extend my gratitude to my supervisor Dr. Glenn Chapman for giving me this opportunity to work on this research project. Your inspiration, patience, and guidance throughout this project are much appreciated. I would also like to thank Dr. Israel and Zahava Koren for sharing your thoughtful insights and advice throughout my study.

A special thanks to all my colleagues for your participation and assistance along the way. This graduate experience would not have been the same without you.

Lastly, I want to thank my parents and friends for your endless support. Your presence has made this journey more enjoyable.


TABLE OF CONTENTS

Approval .......... ii
Abstract .......... iii
Acknowledgements .......... iv
Table of Contents .......... v
List of Figures .......... viii
List of Tables .......... xii
Glossary .......... xv

1: Introduction .......... 1
1.1 History of Image Sensor .......... 3
1.2 Modern Digital Cameras .......... 4
1.3 Reliability Issues .......... 8
  1.3.1 In-field defect analysis .......... 9
  1.3.2 Defect Growth Algorithm .......... 10
1.4 Impact of defects on future sensor design .......... 11
1.5 Multi-Finger Active pixel sensor .......... 12
1.6 Summary .......... 12

2: Theory and Background on Solid-State Image Sensors .......... 14
2.1 Theory of Photodetectors .......... 14
  2.1.1 Photodiodes .......... 17
  2.1.2 Photogates .......... 20
  2.1.3 Pixel performance metric .......... 22
2.2 Charge Coupled Device .......... 23
  2.2.1 Charge Transfer .......... 24
  2.2.2 Basic CCD Structures .......... 27
2.3 CMOS Sensor .......... 30
  2.3.1 Photodiode Active Pixel Sensor .......... 30
  2.3.2 Photogate Active Pixel Sensor .......... 32
  2.3.3 CMOS Sensor Arrays .......... 34
2.4 CMOS vs. CCD .......... 36
2.5 Digital cameras .......... 37
  2.5.1 Sensor and Pixel size .......... 39
  2.5.2 Color filter array sensors .......... 42
2.6 Camera operation .......... 44
  2.6.1 ISO amplification .......... 46
2.7 Defects in image sensors .......... 47
  2.7.1 Material degradation .......... 48
  2.7.2 In-field defect mechanisms .......... 50
2.8 Summary .......... 54

3: Types of defect in digital cameras .......... 56
3.1 Defect Identification on Digital Cameras .......... 57
  3.1.1 Stuck defects .......... 58
  3.1.2 Hot Pixels .......... 59
3.2 Defect Identification Techniques .......... 61
  3.2.1 Bright Defects Identification Techniques for DSLRs .......... 62
  3.2.2 Bright Defects Identification Technique for cellphone cameras .......... 64
3.3 Defects in demosaic and compressed images .......... 67
  3.3.1 Demosaicing Algorithm .......... 67
  3.3.2 Demosaicing algorithms comparison .......... 73
  3.3.3 Analyzing defects in color images .......... 76
  3.3.4 Defect on a uniform color background .......... 77
  3.3.5 Defects on varying color backgrounds .......... 89
3.4 Summary .......... 97

4: Characterization of in-field defects .......... 99
4.1 Basic DSLR defect data .......... 100
4.2 ISO Amplification .......... 101
  4.2.1 ISO and hot pixel parameters .......... 103
  4.2.2 ISO and hot pixel numbers .......... 107
4.3 Spatial Distribution of faults .......... 113
  4.3.1 Inter-defect distance distribution .......... 114
  4.3.2 Inter-defect distance chi-square test .......... 117
  4.3.3 Nearest neighbour analysis .......... 119
  4.3.4 Nearest neighbour Monte-Carlo simulation .......... 124
  4.3.5 Spatial distribution results .......... 126
4.4 Basic defect data from small sensors .......... 127
  4.4.1 Defect data from cellphone cameras .......... 127
  4.4.2 Defect data from Point-and-shoot cameras .......... 129
4.5 Temporal Growth .......... 130
  4.5.1 Defect growth rate on large area sensors .......... 131
  4.5.2 Defect growth rate on small area sensor .......... 135
  4.5.3 Calibration temporal growth limitations .......... 137
4.6 Chapter Summary .......... 138

5: Temporal Growth of In-field Defects with Defect Trace Algorithm .......... 140
5.1 Bayes defect trace algorithm .......... 141
  5.1.1 Interpolation scheme .......... 147
  5.1.2 Windowing and Correction scheme .......... 150
5.2 Simulation results .......... 152
5.3 Experimental results .......... 160
5.4 Summary .......... 169

6: The Impact of Pixel and Sensor design on defective pixels .......... 171
6.1 Impact of sensor design trend on defects on imagers .......... 173
  6.1.1 Defect count on APS vs. CCD .......... 174
  6.1.2 Impact of ISO trend on defects .......... 176
  6.1.3 Defect growth rate vs. sensor area .......... 177
  6.1.4 Defect growth rate vs. pixel size .......... 179
6.2 Chapter Summary .......... 189

7: Multi-Finger Active Pixel Sensor .......... 190
7.1 Multi-Fingered Photogate APS .......... 191
7.2 Experimental setup and sensitivity measure .......... 195
  7.2.1 LED control circuit and calibration .......... 196
  7.2.2 Photogate sensor performance measures .......... 198
7.3 Experimental results .......... 199
  7.3.1 Comparison response for different photogate structure .......... 201
  7.3.2 Comparison response at various wavelength .......... 206
7.4 Chapter Summary .......... 209

8: Conclusion .......... 211
8.1 Measure of in-field defects .......... 211
8.2 Spatial and temporal growth analysis .......... 213
8.3 Defect trace algorithm .......... 215
8.4 Fitting of defect growth with sensor design trends .......... 216
8.5 Experimental measure of Multi-Finger Photogate .......... 217
8.6 Future Work .......... 218

References .......... 220

Appendix A: Specification of tested DSLRs .......... 224


LIST OF FIGURES

Figure 1-1. CMOS camera-on-chip vs. CCD....................................................................4

Figure 1-2. Production of film vs. digital cameras (data from CITA [6]). ...........................6

Figure 2-1. Absorption of photons in semiconductor......................................................15

Figure 2-2. Absorption coefficient of silicon crystal at various wavelengths.[14] ............16

Figure 2-3. Simple p-n junction......................................................................................18

Figure 2-4. Photodiode (a) unbiased, (b) reverse biased...............................19

Figure 2-5. Standard Photogate. ...................................................................................21

Figure 2-6. CCD composed with (a) 2, and (b) 3 MOS capacitors. ................................24

Figure 2-7. Three-Phase clock cycle CCD.....................................................................25

Figure 2-8. Buried channel CCD (BCCD). .....................................................................27

Figure 2-9. Common CCD structures (a) Frame Transfer, (b) Interline Transfer, (c) Full Frame.....................................................................................................28

Figure 2-10. Active pixel sensor with Photodiode photodetector....................................31

Figure 2-11. Active pixel sensor with Photogate photodetector. ....................................33

Figure 2-12. Photogate operation cycle, from signal integration to readout. ..................33

Figure 2-13. Active pixel array.......................................................................................35

Figure 2-14. Point-and-Shoot digital still camera. ..........................................................38

Figure 2-15. DSLR digital still camera. ..........................................................................39

Figure 2-16. Various sensor sizes. ................................................................................40

Figure 2-17. Color filter array sensor. ............................................................................43

Figure 2-18. Basic image process operation. ................................................................45

Figure 3-1. Pixel response to optical exposure. .............................................................57

Figure 3-2. Fully and Partially stuck defects ..................................................................58

Figure 3-3. Normalized pixel dark response vs. exposure time of (a) good pixel, (b) partially-stuck, (c) standard hot pixel, (d) partially-stuck hot pixel. ............60

Figure 3-4. DSLR noise level at various ISOs [data: [35]] ..............................................64

Figure 3-5. Mesh plot of a defect in a demosaic compressed color image.....................66

Figure 3-6. Bilinear interpolation of (a) green, (b) red and blue pixels............................69


Figure 3-7. Kimmel gradient mask.................................................................................71

Figure 3-8. Sample images used in experiment.............................................................74

Figure 3-9. Moire pattern (b) Bilinear, (c) Median, (d) Kimmel. ......................................76

Figure 3-10. Experiment procedure. ..............................................................................77

Figure 3-11. Bilinear demosaic image for red defect with IOffset = 0.8..............................79

Figure 3-12. Error mesh plot of red defect at IOffset = 0.8 with bilinear demosaicing........81

Figure 3-13. Median demosaic image for red defect with IOffset = 0.8..............................82

Figure 3-14. Error mesh plot of red defect at IOffset = 0.8 with median demosaicing.......................................................................................................................85

Figure 3-15. Kimmel demosaic image for red defect with IOffset = 0.8. ............................86

Figure 3-16. Error mesh plot of red defect at IOffset = 0.8 with kimmel demosaicing. .......88

Figure 3-17. MSE vs. IOffset of a red defect on non-uniform background (bilinear demosaic). ....................................................................................................92

Figure 3-18. MSE vs. IOffset of a red defect on non-uniform background (median demosaic). ....................................................................................................94

Figure 3-19. MSE vs. IOffset of a red defect on non-uniform background (kimmel demosaic). ....................................................................................................96

Figure 4-1. Dark response of a hot pixel at various ISO level. .....................................104

Figure 4-2. Plot of (a) Dark current, (b) Offset vs. ISO.................................................106

Figure 4-3. Magnitude distribution of (a) dark current intensity rate, (b) dark offset at various ISO levels. ..................................................................................107

Figure 4-4. Magnitude distribution of (a) dark current, (b) dark offset at various ISO levels from camera B. ..........................................................................109

Figure 4-5. Combined defect offset distribution at (a) 1/30s, (b) 1/2s...........................111

Figure 4-6. Spatial pattern (a) clustered, (b) random. ..................................................113

Figure 4-7. Defect map of hot pixels identified from camera A at ISO 400......................114

Figure 4-8. Inter-defect distance measurement. ..........................................................115

Figure 4-9. Inter-defect distance distribution of (a) APS, (b) CCD sensors at ISO 400..............................................................................................................115

Figure 4-10. Defect inter-distance distribution at various ISO levels. ...........................117

Figure 4-11. Comparison of the theoretical and empirical distribution of nearest neighbor distances in camera M..................................................................121

Figure 4-12. Empirical distribution of G(d) vs. G(d) with upper and lower bound. ........126

Figure 4-13. Defect count vs. sensor age for camera A from dark-frame calibration (at ISO 400). .............................................................................131

Figure 4-14. Average defect count vs. sensor age by sensor type at ISO 400.............133

Figure 5-1. Concept of defect trace algorithm..............................................................141


Figure 5-2. Ring interpolation. .....................................................................................143

Figure 5-3. Image wide interpolation errors (a) PDF, (b) CDF. ....................................144

Figure 5-4. A 5x5 pixel interpolation mask weighting factor: (a) regular averaging (b) ring averaging. .......................................................................................148

Figure 5-5. Image wide interpolation error derived from regular and ring averaging. ...................................................................................................149

Figure 5-6. Sliding window approach to defect identification........................................150

Figure 5-7. Post-correction procedure. ........................................................................152

Figure 5-8. Plot of Prob(Good|y) vs. image in the windowing test................................156

Figure 5-9. Defect growth rate at ISO 400 with calibration and Bayes search identification. ...............................................................................................163

Figure 5-10. Defect growth rate at ISO 800 with calibration and Bayes search identification. ...............................................................................................166

Figure 5-11. Defect growth rate at ISO1600 with calibration and Bayes search identification. ...............................................................................................167

Figure 6-1. Mega Pixel design trends in digital cameras 2001 to 2008. .......................172

Figure 6-2. Impact of dark current on large and small pixel. ........................................181

Figure 6-3. Defect rate per sensor area vs. pixel size (ISO400). .................................183

Figure 6-4. Semi-log of defect rate per sensor area vs. pixel size................................184

Figure 6-5. Logarithmic plot of defect rate per sensor area of all tested imagers. ........184

Figure 6-6. Logarithm plot of defect rate per sensor area versus pixel size of all tested APS imagers. ................................................................................... 186

Figure 6-7. Logarithm plot of defect rate per sensor area versus pixel size of all tested CCD imagers.................................................................................... 186

Figure 7-1. Single silicon absorption coefficient vs. photon energy. (Data from Refs[14]) .....................................................................................................192

Figure 7-2. Standard photogate photodetector. ...........................................................193

Figure 7-3. Multi-finger photogate photodetector. ........................................................193

Figure 7-4. Standard and multi-finger photogate APS design and expected potential well [13]. .......................................................................................194

Figure 7-5. Experimental setup. ..................................................................................195

Figure 7-6. Relative intensity vs. wavelength...............................................................196

Figure 7-7. Voltage-current converter...........................................................197

Figure 7-8. Input voltage vs. illumination intensity. ......................................................197

Figure 7-9. Pixel output vs. input light intensity............................................................199

Figure 7-10. Compare sensitivity curve of standard and multi-finger photogate pixels...........................................................................................................200


Figure 7-11. Sensitivity ratio relative to standard photogate vs. photogate area. (red light).....................................................................................................203


LIST OF TABLES

Table 2-1. Average sensor size used in various digital cameras (2008-2009). ..............40

Table 2-2. Comparison of die cost on a 300mm wafer...................................................42

Table 3-1. Characteristics of defect type .......................................................................58

Table 3-2. Average MSE and PSNR of demosaic images. ............................................75

Table 3-3. Estimate defect size with bilinear demosaicing. ............................................79

Table 3-4. Peak defect cluster value from bilinear demosaicing. ...................................80

Table 3-5. Estimate defect size with median demosaicing.............................................83

Table 3-6. Peak defect cluster value from median demosaicing. ...................................84

Table 3-7. Estimate defect size with kimmel demosaicing. ............................................86

Table 3-8. Peak defect cluster value from kimmel demosaicing.....................................87

Table 3-9. Comparison of defect in varying color region with bilinear demosaicing.......................................................................................................................90

Table 3-10. Comparison of defect in varying color region with median demosaicing..................................................................................................93

Table 3-11. Comparison of defect in varying color region with kimmel demosaicing..................................................................................................95

Table 4-1. Summary of defects identified in DSLRs at ISO 400...................................100

Table 4-2. Cumulative total of hot pixels identified at various ISO levels. ....................103

Table 4-3. Magnitude of dark current and offset measured for the defect in Figure 4-1. .....105

Table 4-4. Statistics summary of spatial defect distributions from APS and CCD sensors. ......................................................................................................116

Table 4-5. Statistics summary of spatial defect distributions at various ISO settings. ......................................................................................................117

Table 4-6. Theoretical vs. actual inter-defect distance distribution (in percentage). .....118

Table 4-7. Comparison of Ĝ(d) and G(d) from each test cameras. ..............................123

Table 4-8. Accumulated defects count from 10 cellphone cameras (ISO 400).............128

Table 4-9. Accumulated defect count from Point-and-Shoot cameras at various ISO levels. .....129

Table 4-10. Measured defect rate from calibration result for all tested mid-size DSLRs. .......................................................................................................132


Table 4-11. Measured defect rate from calibration result for all tested full-frame DSLRs. .......................................................................................................132

Table 4-12. Measured defect rates from cellphone cameras at ISO 400. ....................135

Table 4-13. Measured defect rates for Point-and-Shoot at various ISO levels. ............136

Table 5-1. Compared interpolation error from various interpolation schemes. .............148

Table 5-2. Performance of Bayes detection at fixed dark current (Intp: 3x3)................154

Table 5-3. Performance of Bayes detection at fixed dark current (Intp: 5x5 ring).........154

Table 5-4. Performance of Bayes detection at fixed dark current (Intp: 7x7 ring).........155

Table 5-5. Performance of Bayes detection at fixed exposure (Intp: 3x3). ...................157

Table 5-6. Performance of Bayes detection at fixed exposure (Intp: 5x5 ring). ............157

Table 5-7. Performance of Bayes detection at fixed exposure (Intp: 7x7 ring). ............157

Table 5-8. Performance of Bayes detection using various interpolation schemes.......159

Table 5-9. Specification of test cameras......................................................................161

Table 5-10. Manual calibration and Bayes detection growth rate comparison at ISO 400.......................................................................................................162

Table 5-11. Manual Calibration and Bayes detection growth rate comparison at ISO 800.......................................................................................................166

Table 5-12. Manual Calibration and Bayes detection growth rate comparison at ISO 1600.....................................................................................................167

Table 6-1. Average sensor and pixel sizes from tested cameras. ................................174

Table 6-2. Average defect rate for various sizes of sensors. .......................................174

Table 6-3. Comparison of APS DSLRs defect rates at various ISOs scaled with sensor area. ................................................................................................179

Table 6-4. Average defect rate per sensor area for all camera types at various ISOs............................................................................................................180

Table 6-5. Comparison of defect rate per sensor area between CCD in PS and DSLRs. .......................................................................................................182

Table 6-6. Linear regression fit statistics on defects/year/mm2 vs. pixel size ...............185

Table 6-7. Linear regression fit statistics on defect rate/mm2 vs. pixel size. .................187

Table 6-8. Estimated defect rate/mm2 at various pixel sizes with the fitted power function. ......................................................................................................188

Table 7-1. Multi-finger photogate APS poly-finger spacing [13]. ..................................194

Table 7-2. LED colors and dominant wavelengths.......................................196

Table 7-3. Sensitivity result from standard and multi-fingered photogates. ..................200

Table 7-4. Sensitivity ratio for multi-finger photogates relative to standard photogate....................................................................................................201

Table 7-5. Sensitivity change for multi-finger photogates relative to standard photogate....................................................................................................201


Table 7-6. Sensitivity of open area in multi-fingered photogates..................................204

Table 7-7. Collection efficiency of open area in multi-fingered photogates. .................204

Table 7-8. Relative sensitivity between Red, Yellow, Green and Blue illumination. .....206

Table 7-9. Ideal responsivity ratio approximation (η = constant). .................................207


GLOSSARY

4NN 4 Nearest Neighbours

APS Active Pixel Sensor

A/D Analog-to-Digital

BCCD Buried Channel CCD

CCD Charge-Coupled Device

CDS Correlated Double Sampling

CFA Color Filter Array

CMOS Complementary Metal Oxide Semiconductor

DSC Digital Still Camera

DSLR Digital Single Lens Reflex

FFCCD Full Frame CCD

FTCCD Frame Transfer CCD

ITCCD Interline Transfer CCD

LCD Liquid Crystal Display

MSE Mean Square Error

PS Point-and-Shoot

PSNR Peak Signal to Noise Ratio

QE Quantum Efficiency

SNR Signal to Noise Ratio


1: INTRODUCTION

Photography began as early as 500 BC with the creation of the pin-hole camera concept (camera obscura) [1]. However, the true invention of a camera that records images did not occur until 1826. In early photography, reflected light from an object or scene is projected onto a light-sensitive material, known as film, for a period of time, creating a reaction that makes a copy of the scene. Film cameras dominated the camera market for over 100 years. However, the film process involves many steps, from capturing the image, to chemically developing the film, and finally printing the photograph. In the 1960s, imaging technology was integrated with modern semiconductor elements as a light sensing device, also known as the semiconductor image sensor. The digital image sensor has the advantage of integration with other electronic systems such as an LCD display, electronic storage, a microprocessor, etc. These new features, found only in digital camera systems, have started a new era of imaging. With their increasing popularity and benefits for a wide range of applications, digital imagers have become the mainstream imaging device in the 21st century. While Digital Still Cameras (DSCs) have been rapidly replacing the traditional film systems, digital image sensors still face new challenges in surpassing the image quality of film systems.

Researchers are continuously developing advanced imaging algorithms and color sensing elements to improve the image quality from the digital sensor. Enhancement functions such as face recognition, blink detection, etc., were developed to ease handling of the devices by all photographers. However, one of the main challenges that remains is the reliability of the sensor. A common problem suffered by many electronic devices is the development of faults or failures due to material-related degradation or radiation damage. Typical image sensors (~23 - 864 mm²) are much larger than common electronic chips (~5x5 mm). Thus the likelihood of defects being found in image sensors is greater than in regular devices. Defects on sensors are permanent damage that alters the characteristics of normal pixel operation. Such damage will impact the quality of images captured by the sensor and limit the lifetime of the device. With the sensor being subject to degradation while the expected operational lifetime is growing, the problem of defects in imaging sensors needs to be addressed.

The four main focuses of this thesis are: the exploration of the source and characteristics of pixel defects, the impact of imager defects on regular photos, how defect growth is affected by sensor design trends, and an exploration of photogate designs to improve the sensitivity of the photodetector over the visible spectrum. This study involves the development of calibration techniques and a defect trace algorithm to identify defects and the growth rate of these faults in a set of commercial imagers. Traditional yield analysis will be adapted to identify the defect source mechanism in commercial digital cameras. Three classes of commercial cameras are involved in this study: Digital Single Lens Reflex systems (DSLRs), Point-and-Shoot (PS) and cellphone cameras. Each of these cameras provides data to measure the impact of defects on various sensors. A detailed analysis of defects collected from the different classes of cameras, and a study of an imaging algorithm (i.e. demosaicing), will be presented to pinpoint the impact of defects on sensor design and image quality.

1.1 History of Image Sensor

The two types of sensor technologies are the Charge-Coupled Device (CCD) and CMOS, both invented in the 1960s. However, early work on CMOS sensors showed they suffered from fixed pattern noise. Due to the underdevelopment of the CMOS process line, this technology was not a favourable choice. In comparison, when Bell Labs announced the invention of the CCD by Willard Boyle and George Smith in 1969 [2], this technology was embraced by the imaging industry for its freedom from fixed pattern noise and small pixel size. The CCD became the main focus of research for over 20 years as the technology continued to improve. By the early 1980s, with the development of advanced fabrication and lithography, CMOS technology improved drastically and became the dominant process line for most logic devices and processors. This resulted in the CMOS sensor being revived as an imaging device. In the early 1990s the CMOS Active Pixel Sensor (APS) was first introduced by Fossum, Mendis and Kemeny at JPL [3]. With the significant effort made in exploring the CMOS APS, the quality and noise level improved drastically to the point where it was comparable to CCDs. The CMOS APS became a rival to the CCD as it brought low power and reduced cost to imaging systems. More importantly, being a CMOS based technology, it allows integration with other sub-systems (i.e. timing control, Analog-to-Digital (A/D) conversion, system controller, Digital Signal Processor (DSP)), creating a highly integrated camera system [4], as shown in Figure 1-1. Since the analog process line of the CCD is optimized for imaging performance, implementing additional functions requires redevelopment, thus prolonging the design time and production cost. The CMOS APS permits digital integration, but the increase in noise level will degrade the image quality and increase the design complexity. Thus, with this trade-off in imaging performance, the CMOS APS has only recently been implemented in camera-on-chip designs for areas where cost is important, such as cellphones.

Figure 1-1. CMOS camera-on-chip vs. CCD.

1.2 Modern Digital Cameras

The concept of the digital camera was introduced in 1973 by Texas Instruments Incorporated [5]. The first application of the digital image sensor was in video cameras; however, it did not achieve great success due to the high price of the product. The birth of the first digital camera was marked by the Fuji DS-1P in 1988, which used a 400K pixel CCD sensor and saved images to SRAM memory cards. However, this camera was not available on the commercial market. The first commercially available digital camera, appearing in 1991, was the Kodak DCS-1. It consisted of a 1.3 MP CCD sensor fitted into a Nikon SLR camera body, with images stored on an external 200 MB hard disk. The DCS-1 was targeted at newspaper photography and successfully reduced the time from taking the image to transmitting it for publishing. More portable commercial digital cameras began to appear in 1994, with the Apple QuickTake 100, later followed by the Casio QV-10, which was the first digital camera with a built-in LCD display. By 1997, digital camera resolution had increased to multiple Mega Pixels (MP), and in 2002 mobile phones were being equipped with digital image sensors. From the statistics shown in Figure 1-2, reported by the Canadian Imaging Trade Association (CITA) [6], in 1999 only 0.05 million digital cameras were sold as compared to 33 million film cameras. However, with the continuous advancement of digital imagers, by 2002 the sales of film cameras had declined to 23 million units whereas digital cameras had increased to 24 million units. The sales of digital cameras continue to increase: the most recent report showed 119 million units sold in 2008, with almost no film cameras.


Figure 1-2. Production of film vs. digital cameras (data from CITA [6]).

Digital cameras have come to dominate the photography industry due to attractive features and functions that film cameras cannot offer. A typical digital camera found in the commercial market consists of several components, including an image sensor, A/D converter, microprocessor, LCD display and removable storage device. In a traditional film camera, the film functions both as the light sensing element and the storage; however, in a DSC the role of film is replaced by the use of an image sensor and a removable storage device. As shown in Figure 1-1, the analog light signal at the image sensor is transformed into a digital signal via an A/D converter and can be stored onto a removable storage device and displayed on an LCD. Removable storage, such as compact flash or SD cards (forms of flash memory) or a microdrive, can be reused, whereas in film cameras each roll of film needs to be replaced after a single use. Hence, the cost of owning a digital camera is far less than that of a film camera. In addition, images stored in digital form can be accessed by other electronic devices. With film cameras, images are only available after developing and printing, but in digital cameras the LCD display provides immediate image playback, which is an important advantage for seeing whether the desired picture has been captured. More importantly, in the compact camera models (i.e. Point-and-Shoot), the LCD also functions as an image view finder. The microprocessor in digital cameras provides enhanced features that are not feasible in film cameras. For example, the sensitivity of a film, also known as ISO (an International Standards Organization defined value), can only be adjusted by changing the roll of film. In digital cameras, however, this is achieved by changing the amplification at the sensor output. Thus the ISO can be altered easily between each capture.

The development of the image sensor has impacted a wide range of applications. The integration of digital image sensors into many electronic devices, such as cellphone cameras, has made a remarkable change to our daily life. The embedded camera in cellphones provides an alternative device for capturing images or videos. In addition, video conference calls are not limited to a PC workstation but can be made on the go with mobile phones. In the medical community, digital imagers have begun to replace traditional film-based radiography [7]. Images stored in digital form reduce the chance of misplacement and permit sharing and transmitting of pictures over a computer network. In addition, digital images can be processed by software functions to enhance relevant information for diagnosis or to correct improper camera settings. Unlike traditional film production, digital recording is free from damage caused by faulty camera and projector mechanics and from storage degradation such as dirt trapped in the film. In addition, cinematographers can monitor the filming process and correct any displacement on the set immediately. Surveillance and security systems, such as vehicle tracking [8] and traffic measures [9], have also benefited from the invention of the digital image sensor. The digital data acquired from remote sensors can be transmitted through a wireless network [10], which reduces the storage required at each sensor site and allows remote monitoring.

Digital cameras in many ways surpass film cameras with their user-friendly features, transferability, and low cost, but problems such as the reliability of the sensor remain a major concern. With a film camera, a defective film is simply replaced with the next roll, but in digital cameras the replacement of a defective sensor can be expensive and is often not feasible for highly integrated camera systems.

1.3 Reliability Issues

Excluding mechanical failures, the lifetime of a digital camera is limited by the reliability of the sensor. Unlike film cameras, where the cost of replacing the film is low, in digital systems the image sensors are themselves expensive and are interconnected with other camera subsystems. Thus the cost of replacing the sensor is high and often not feasible. When a sensor develops faulty pixels, all subsequent images captured will be affected. In many applications, such as image sensors embedded in a space shuttle, or remote sensing where access to the sensor is limited, the reliability of the imagers is very important. In the commercial digital camera market, the replacement of low-end cameras such as Point-and-Shoot (PS) models is common due to their modest cost and the constant appearance of new camera features. However, for higher-end cameras such as DSLRs, the cost is significant (~$1000 or more). Thus the replacement rate of these cameras is less frequent, typically several years. Nonetheless, as the performance of digital cameras matures, with resolution and image processing functions becoming nearly the same for many cameras, the next main concern for consumers will be the lifetime of the digital camera.

1.3.1 In-field defect analysis

A common problem suffered by all microelectronic devices is the development of defects over the lifetime of the device. A defective pixel simply fails to sense light properly. Faults on image sensors develop either at manufacture time or during operation in the field. These defects are permanent damage to the sensor and will affect all captured images. Manufacture-time defects are corrected via factory mapping, where factory testing identifies faults and hides the defects, but this is not available for in-field defects. Most of the current literature on faults in digital image sensors focuses on the change in optical characteristics in high-radiation environments such as outer space [11]. However, the reports of defects observed in regular digital cameras, which are discussed in photography forums, have rarely been addressed. Whether the cause of these in-field defects is degradation from the fabrication process or in-field factors such as radiation, this damage is not limited to space applications but is also important in the terrestrial environment. Very little information in the literature discusses the defect development rate during in-field operation, which is a central point of this thesis. In Chapter 3, the defect identification techniques for DSLRs, Point-and-Shoot, and cellphone cameras will be discussed, where we aim to extract information such as the type and quantity of defects, a spatial location map of the defects, and measurements of the defect parameters. In addition, we will study the prevalence of defects as a function of the camera settings, which will provide insight into the effects of defects on regular photography. In Chapter 4, we will provide detailed defect data collected from various commercial cameras. Then, extensive statistical analysis derived from standard manufacturing yield analysis will be applied in an attempt to characterize the defect source mechanism.
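As a rough sketch of the dark-frame calibration idea behind these identification techniques (the threshold rule, the MAD-based noise estimate, and the helper name below are illustrative assumptions, not the thesis procedure), a hot pixel can be flagged when its dark-frame response rises well above the sensor noise floor:

    import numpy as np

    def find_hot_pixels(dark_frame, k=5.0):
        """Flag candidate hot pixels in a dark-frame exposure.

        dark_frame: 2-D array of raw pixel values captured with the lens cap on.
        k: number of robust standard deviations above the median that marks a
           pixel as defective (an illustrative choice, not the thesis setting).
        """
        median = np.median(dark_frame)
        # Median absolute deviation gives a noise estimate that ignores outliers.
        mad = np.median(np.abs(dark_frame - median))
        sigma = 1.4826 * mad  # scale MAD to an equivalent Gaussian sigma
        threshold = median + k * sigma
        ys, xs = np.where(dark_frame > threshold)
        return list(zip(ys.tolist(), xs.tolist()))

Repeating such a test over dark frames taken at several exposure times would then allow a defect's dark-current slope to be separated from its fixed offset, which is the kind of parameter measurement pursued in Chapter 4.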

1.3.2 Defect Growth Algorithm

In any fault-tolerance study, the failure rate measures the frequency of the development of new faults and hence the reliability of the device. Defects caused by different source mechanisms will exhibit different failure rates. Thus, by tracking the temporal growth of defects, we can make a better judgment about the causal mechanism at work in these commercial digital cameras. Defects on a sensor will appear in all subsequent images. Thus the development date of an in-field fault can be found by tracing through the image dataset to find when the defect first appeared. The date on which the image was captured (recorded in the image metadata) serves as an approximation of the defect development date. The procedure of tracing defects can be done by visual inspection; however, with some image datasets as large as 10 GB, this procedure is cumbersome. In Chapter 5, we propose a recursive algorithm which utilizes statistics gathered from each image to automatically determine the presence or absence of a defect in a specific image. The algorithm has been implemented, and the accuracy of the detection has been verified through simulations and tested with existing image datasets from real cameras.

1.4 Impact of defects on future sensor design

Four main trends are found in new sensor designs. First, the CMOS APS is gaining in popularity for large-area devices, as most commercial DSLRs are now equipped with APS sensors. Second, improvements in sensor technology have reduced the noise signal on imagers, which permits an expansion of the ISO range. Third, sensor sizes are changing in two directions: the high demand for cellphone cameras is driving companies toward smaller sensors, while the need for better image quality in high-end DSLRs has led to more production of large-area (i.e. full-frame) sensors. Lastly, the increase in imager resolution is achieved with more pixels on the sensor; while the sensor size remains nearly the same, the pixel dimensions are reduced to attain a higher pixel count. In chapter 6, we will explore the possible impact of these sensor trends on defects using the defect data collected from different types of cameras: cellphone, PS and DSLR. An extensive analysis of the defect rate based on various design parameters will provide estimates of the sensor quality to be expected from future imager designs.


1.5 Multi-Finger Active pixel sensor

The two main types of photodetectors used by APS imagers are the photodiode and the photogate. The light-sensing area of these pixels consists of either a photodiode, comprising a PN junction, or a photogate, which is simply a version of a MOS capacitor. Although the structures of the two photodetectors are different, the basic operation remains the same: when the photodetector encounters light, the optical signal is translated into an electronic signal through the separation and collection of electron-hole pairs. One main drawback of the photogate is its non-uniform sensitivity over the visible spectrum. To address the issue of absorption in the photogate, a multi-finger photogate structure has been proposed [12], [13]. Chapter 7 will extend the experimental study from [13] by measuring the sensitivity of both the standard and multi-finger photogates over the visible spectrum. The sensitivity measured from the multi-finger photogate provides a means to estimate the amount of absorption, which had not been studied before. More importantly, the extensive analysis can provide insights into the performance of the multi-finger photogate and other possible drawbacks of the standard photogate design.

1.6 Summary

Faults generated on image sensors have been studied in high-radiation environments; however, little work has addressed the reports of defects developing in the terrestrial environment. The accumulation of defects will continue to degrade the image quality, and the change in the optical characteristics of the sensor will limit its usability. More importantly, as imaging technology matures, the lifetime of the camera will become an important concern for consumers. The first half of the thesis (chapters 2 and 3) will focus on addressing the issues of defects found in commercial digital cameras while operating in the field. In chapter 4, a detailed study of the defect spatial distribution, the development rate, and the impact of camera settings on faults will offer insight into the defect source mechanism. In chapter 5, the failure rate of individual sensors will be examined in depth by analyzing historical image datasets with our proposed defect trace algorithm. In chapter 6, the collected defect data will be categorized by sensor type, sensor area and pixel area. The comparison of defect data by sensor design will provide estimates of the impact of defects in future sensors and, more importantly, will serve as a measure of the limiting factors in current sensor design trends. Chapter 7 of this thesis will extend the work done by Michelle L. Haye, where we will analyze the spectral response of the standard and multi-finger photogates implemented previously.


2: THEORY AND BACKGROUND ON SOLID-STATE IMAGE SENSORS

Before going into further discussion of defects in digital imagers, we will first review some basic operations of image sensors. In this chapter we cover the theory of light detection in semiconductors and the basic structure of the two main photodetectors: photodiodes and photogates. Then we discuss the principal architectures of CCD and CMOS APS pixels and some metrics used to evaluate the performance of these sensors. The optical signal is converted into a voltage/current by the solid-state image sensor, and internal image processing functions such as noise reduction, color interpolation and white balancing are applied to enhance the image quality. To better understand the impact of defects on the pixel output, we will provide background on the design and operation of commercial digital cameras. Finally, in the last part of this chapter we provide background on the mechanisms behind the two defect sources: material degradation and external random processes.

2.1 Theory of Photodetectors

Photoconversion is the process by which the energy of incident light is converted into an electric signal. In a semiconductor there are discrete energy bands which electrons may occupy. The highest energy band occupied by electrons at absolute zero temperature is called the valence band. At the same temperature the conduction band, which electrons must occupy for current to flow, is empty. The energy separation between the upper edge of the valence band and the lowest edge of the conduction band is known as the bandgap energy Eg, shown in Figure 2-1. Photons travelling through the semiconductor with energy exceeding Eg will excite an electron from the valence band into the conduction band, leaving a mobile hole behind. Thus, as shown in Figure 2-1, each absorbed photon will create an electron-hole pair. On the other hand, to any photon with insufficient energy the semiconductor will appear transparent.

Figure 2-1. Absorption of photons in a semiconductor.

The energy of the photon depends on its wavelength, as calculated with

    E_photon = h·c / λ ,    (2-1)

where h is Planck's constant, c is the speed of light and λ is the photon wavelength. Each semiconductor material has a cut-off wavelength, denoted λc, which is calculated from Equation (2-1) for a photon energy equal to the bandgap energy (i.e. E_photon = Eg). The cut-off wavelength simply shows that any photon with a wavelength longer than λc will not be absorbed.


Silicon has a bandgap energy of 1.1 eV; hence it is able to detect photons in the visible spectrum (400 - 700 nm) and the near infrared (IR), but photons deeper in the IR range (>1124 nm) will not be absorbed.
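As a quick numerical check of Equation (2-1), the short Python sketch below (not part of the thesis work; it only uses standard physical constants) computes the cut-off wavelength from the bandgap energy:

    # Cut-off wavelength lambda_c = h*c/Eg, i.e. Eq. (2-1) evaluated at E_photon = Eg.
    h = 6.626e-34   # Planck's constant (J*s)
    c = 2.998e8     # speed of light (m/s)
    q = 1.602e-19   # joules per electron-volt

    def cutoff_wavelength_nm(bandgap_eV):
        """Longest absorbable wavelength, in nm, for a material with the given bandgap."""
        return h * c / (bandgap_eV * q) * 1e9

    print(cutoff_wavelength_nm(1.1))   # silicon: ~1127 nm, consistent with the ~1124 nm cut-off above

The small difference from the quoted value comes only from rounding in the constants.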

As light penetrates the semiconductor, optical power is lost through the interaction between photons and electrons. The intensity of photons passing through the semiconductor decays exponentially,

    I(x) = I_o · exp(-α·x) ,    (2-2)

where x is the distance below the surface, I_o is the incident intensity, and α is the absorption coefficient (in cm⁻¹). As shown in Figure 2-2, the absorption coefficient α is wavelength dependent.

Figure 2-2. Absorption coefficient of a silicon crystal at various wavelengths (absorption coefficient, cm⁻¹, versus photon energy, eV). [14]

Photons with high energies have a larger absorption coefficient and will be absorbed at shallower depths than photons with lower energies. A low absorption coefficient implies that the photons penetrate deeply into the semiconductor before being fully absorbed. For example, in a silicon crystal, α for blue light (2.61 eV) is ~5×10⁴ cm⁻¹ and for red light (1.19 eV) is ~5×10³ cm⁻¹. The depth 1/α is the distance over which the photon intensity drops by a factor of 1/e. Given equal initial intensities of red and blue photons in a silicon crystal, the red photons would need to travel about 10× farther than the blue photons to be reduced by the same 1/e factor. In addition, for silicon-based optical devices, photons with wavelengths much shorter than visible (i.e. the UV range) will be absorbed by the oxide layers and penetrate little into the substrate; all the carriers are then generated near the surface, where behaviour is dominated by surface traps. The detectable range of silicon-based sensors therefore runs from ~1 µm down to wavelengths short enough that absorption in the surface and cover glass dominates (typically 350 nm).
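The 1/e penetration depths quoted above follow directly from Equation (2-2). A minimal Python sketch (using only the α values quoted in this paragraph) illustrates the ~10× difference between red and blue light:

    import math

    # Absorption coefficients from the text above, in cm^-1.
    alpha = {"blue (2.61 eV)": 5e4, "red (1.19 eV)": 5e3}

    for name, a in alpha.items():
        depth_um = (1.0 / a) * 1e4                 # 1/alpha converted from cm to micrometres
        absorbed_in_1um = 1 - math.exp(-a * 1e-4)  # fraction absorbed in the first 1 um (Eq. 2-2)
        print(f"{name}: 1/alpha = {depth_um:.1f} um, absorbed in 1 um = {absorbed_in_1um:.0%}")

Blue light is essentially fully absorbed (~99%) within the first micrometre, while only ~39% of the red light is, which is why deep depletion regions are needed to collect the longer wavelengths efficiently.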

Photogenerated carriers can provide a measure of the light intensity. However, without an electric field these electron-hole pairs will recombine after a short time. The main role of the photodetector is to collect the photocarriers, and the efficiency of the detector is determined by its ability to prevent the free carriers from recombining. The two main types of photodetectors used in CMOS sensors, photodiodes and photogates, create this separation in different ways; in the following sections we will discuss each of them in detail.

2.1.1 Photodiodes

Photodiodes utilize a P-N junction to collect photocarriers. A P-N junction is composed of p- and n-type semiconductor layers that contact each other. As shown in Figure 2-3, the joining of the two different semiconductors forms a junction at the interface known as the depletion region. Under zero bias, the depleted region is formed by the diffusion of mobile holes in the p-region and electrons in the n-region, leaving behind the positively charged donors in the n-region and the negatively charged acceptors in the p-region. The separation of charges at the interface of the junction forms an internal electric field which prevents further recombination of mobile carriers; this internal field has a built-in potential Vbi. When an external voltage is applied, the internal potential changes and causes movement of the mobile charges, resulting in a net current flow as shown in Figure 2-3(a). When a forward bias is applied, Figure 2-3(c), the internal potential decreases, so more mobile charges are able to diffuse across the junction, resulting in a net forward current flow. When a reverse bias is applied, as shown in Figure 2-3(d), both the holes in the p-region and the electrons in the n-region are pulled away from the junction, and the width of the depleted region expands. The maximum reverse bias at which a p-n junction can operate is marked by the breakdown voltage, Vbr.

Figure 2-3. Simple p-n junction: (a) I-V curve, (b) zero bias, (c) forward bias, (d) reverse bias.


A typical silicon-based photodiode, as shown in Figure 2-4(a), consists of N-type material in the substrate, a layer of P-type material above the N region forming the active surface, and a thin layer of insulating material above the P-type region. As noted, the absorption of photons is wavelength dependent: photons with short wavelengths are absorbed near the surface in the p-region, while long wavelengths tend to penetrate deeply into the n-region before being fully absorbed. Hence, during signal integration, shown in Figure 2-4(b), an external reverse bias is applied to extend the depletion region so that the absorption of photons at various depths is accommodated.

Figure 2-4. Photodiode (a) unbiased, (b) reverse biased.

When the photodiode is exposed to a light source, as shown in Figure 2-4(b), photons with sufficient energy will generate electron-hole pairs throughout the material. The electron-hole pairs created in the depletion region produce the drift current, I_Drift, while the carriers created outside the junction move via the diffusion current, I_Diffuse. The drift and diffusion of carriers generate a net photocurrent,

    I_ph = I_Diffuse + I_Drift .    (2-3)


Measurement of this photocurrent depends on the number of electron-hole pairs generated and on the time it takes for the carriers to drift across the junction. The response time is limited by the width of the junction, as all carriers need to travel through this layer. Hence, the photocarriers generated within the junction have the fastest response time (i.e. I_Drift), while photocarriers generated outside the depletion region must diffuse into the junction, which results in a slow response time. To optimize the response time, the p-layer must be kept shallow and the reverse-bias voltage should extend the junction so that the absorption length of the desired wavelengths lies within the depletion region. Although the width of the depletion region can be extended with a reverse bias, the creation of a dark current is a major drawback of this operation. Dark current is simply a thermally generated leakage current due to the applied reverse-bias voltage; the current is usually small, from pA to µA. However, dark current is a function of the junction width and temperature, so a wide depletion layer operating at a high temperature could result in a significant dark current.

2.1.2 Photogates

Another type of photodetector commonly found in CMOS sensors is the photogate. Photogates use MOS capacitor technology to capture incident illumination within a potential well. As shown in Figure 2-5, the basic structure of the photogate consists of a MOS capacitor with a thin layer of poly-silicon acting as a gate on top of a transparent insulator layer. The photogate converts the optical signal into stored charge.


Figure 2-5. Standard Photogate.

As shown in Figure 2-5, with a p-type substrate, when a positive gate voltage is applied, holes are pushed away from the positive gate, forming a potential well (depletion region) of ionized acceptors. The depth of the depleted region depends on the gate voltage (VG), which affects the capacity of the photogate. During signal integration, photons must penetrate through the gate into the substrate, where electron-hole pairs are formed in the potential well. The electric field in the depleted region pushes electrons to the surface, while the holes penetrate into and are absorbed by the substrate. The amount of charge collected depends on the integration time; however, thermally generated carriers limit the length of the integration cycle. The optically generated carriers are stored as charge in the photogate; thus a readout circuit is needed to generate a voltage or current signal from the stored charge. Notice that all incident photons must pass through the gate layer; the optical absorption in the gate layer is therefore a major limitation on the efficiency of this photodetector.

Measuring the optical signal as accumulated charge allows weaker signals to be sensed; thus the photogate has a higher sensitivity than the photodiode. A simple photodiode measures the optical signal as an instantaneous current/voltage, which makes it dependent on a strong light signal. However, its fast response time makes it a suitable choice for high-speed applications.

2.1.3 Pixel performance metric

There are several standard metrics used to evaluate the performance of photodetectors and imaging pixels. In this section we define those metrics that will be used in this thesis.

The performance of a photodetector is measured by its Quantum Efficiency (QE) and its responsivity. The absorption of incident photons depends on the penetration depth and on the reflectivity of the surface. More importantly, the generated electron-hole pairs can be lost through recombination and trapping. The efficiency of the conversion process is therefore expressed as

    QE = η = (# generated and collected electron-hole pairs) / (# incident photons) .    (2-4)

The numerator is the number of electron-hole pairs that are generated by absorbed photons and collected, and the denominator is the number of incident photons; this ratio is always less than unity. Since photocollection is wavelength dependent, we can express QE as a function of wavelength,

    η = (I_ph / e) / (P_o / hν) ,    (2-5)

where I_ph is the photogenerated current and P_o is the incident light power. For photodetectors which measure the output as a current, we can express the efficiency in terms of the output current. This is also known as the responsivity,


    R = I_ph (A) / P_o (W) = η·e·λ / (h·c) .    (2-6)

A good photodetector should have a QE of ~90 - 95% over the visible spectrum.
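For quick estimates, responsivity and QE are interchangeable through Equation (2-6). The small Python sketch below (illustrative only; the 550 nm and 90% figures are example values, not measurements from this thesis) converts one to the other:

    def responsivity_A_per_W(qe, wavelength_nm):
        # Eq. (2-6): R = eta * e * lambda / (h*c); with lambda in nm, e*lambda/(h*c) = lambda/1239.84
        return qe * wavelength_nm / 1239.84

    print(responsivity_A_per_W(0.90, 550))   # ~0.40 A/W for a 90% QE detector at 550 nm (green)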

Each pixel consists of a photodetector connected to output circuitry. The actual photosensitive area is usually only a fraction of the pixel area; the fill factor measures the fraction of the full pixel area that is photosensitive. Hence, pixels with a small fill factor have less surface exposed to light and will collect fewer photocarriers. Recently, microlenses were introduced to address this problem: a microlens is a transparent lens positioned above each pixel which helps direct light from all of the pixel area onto the photosensitive region. The sensitivity of a pixel measures the rate at which the pixel responds to the incident light power. The dynamic range measures the range between the maximum output level and the minimum noise signal; thus a high noise level will significantly reduce the dynamic range of the pixel.

2.2 Charge Coupled Device

The Charge-Coupled Device was invented in 1969 at Bell Labs. At first, the CCD was created as a digital memory to compete with the Magnetic Bubble Memory (MBM) [15] as a mass storage device. However, as the cost of hard disks fell and flash memory was developed, neither the CCD nor the MBM became the next generation of mass-storage digital memory. In principle, the CCD operates like a shift register: it is composed of a linear array of MOS capacitors that can store charge, and by controlling the gate voltages of the MOS capacitors, the charge packets are induced to move along the array. The optical response of the CCD, even under low-light conditions, has made it a major imaging device in many large-scale light-sensing applications.

In the following two sections, we discuss the basic operation of the charge transfer and several commonly used transfer methodologies employed in industry.

2.2.1 Charge Transfer

One of the key operations of the CCD is the integration of photo-electrons and the transfer of the collected charge packets. By keeping the capacitors closely spaced, the interaction between the depleted regions allows charge to shift into the adjacent well. Typical two- and three-clock-phase CCDs are shown in Figure 2-6. Note that in a two-clock-phase design each pixel consists of two MOS capacitors, and of three in the three-clock-phase CCD.

Figure 2-6. CCD pixels composed of (a) 2, and (b) 3 MOS capacitors.

At each clock phase, the gate voltages are adjusted to shift the charge packet into the adjacent MOS capacitor; thus, a sensor composed of three MOS capacitors per pixel must operate on a three-phase clock cycle.


Figure 2-7. Three-Phase clock cycle CCD.

The operation of the three-clock-phase CCD is as follows: at each stage, one gate region acts as the storage well and the two adjacent gates act as barriers. As shown in Figure 2-7, during the signal integration phase VG(1) is pulsed high to create a potential well in the substrate and collect the photogenerated carriers. At the first cycle of the transfer phase, VG(2) is pulsed high while VG(1) decreases slowly; the charge stored in the first well flows toward the adjacent well because it has the lower potential. At the second clock cycle, VG(3) is pulsed high while VG(2) and VG(1) are held low, creating barriers to the adjacent MOS capacitors; the charge packet under gate 2 is now shifted into the well under gate 3. At the last clock cycle, VG(1) is pulsed high while VG(2) and VG(3) are held low, and the charge packet is shifted into the next pixel. By repeating the three clock cycles, the collected charge packets move sequentially across each row to the output node for readout. This operation is often called a bucket brigade.
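As a toy illustration of this clocking scheme (a simplification, not a device simulation), the following Python sketch treats the register as a list of gates in which each clock phase moves every charge packet one gate toward the output:

    # Toy model of 3-phase CCD transfer: packets sit under every third gate, and each
    # clock phase shifts every packet one gate toward the output end of the register.
    def shift_three_phase(wells, n_full_cycles):
        for _ in range(3 * n_full_cycles):      # three clock phases per full cycle
            wells = [0] + wells[:-1]            # charge follows the gate pulsed high
        return wells                            # packets pushed past the end reach the output node

    ccd = [100, 0, 0, 75, 0, 0, 50, 0, 0]       # electrons stored under every third gate
    print(shift_three_phase(ccd, 1))            # [0, 0, 0, 100, 0, 0, 75, 0, 0]: each packet moved one pixel

After one full cycle every packet has moved by one pixel (three gates), and the packet that was closest to the end has been delivered for readout.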


The output of each pixel on a CCD sensor is highly dependent on the transfer efficiency. During the transfer of the charge packets, several factors such as dark current, transfer speed and interface traps affect the overall transfer efficiency. Dark current is caused by thermally generated charges which build up in the potential well and corrupt the signal packet; this charge build-up is due to the high voltage applied at the gate. Thus, by operating at a high clock frequency, the dark current can be reduced. However, the clock frequency is governed by the charge transfer speed: if the clock frequency is too high, charge will be lost during the transfer. The transfer speed is mainly limited by the interface traps. At each transfer, the charges fill up the empty traps at the surface; at the next transfer, some traps release their charge instantaneously while others are slower. The slowly released charges might not get transferred, resulting in signal loss. This phenomenon is known as interface trap loss. The problem with surface traps can be overcome with the Buried Channel CCD (BCCD), which consists of an n-type layer above the p-type substrate, as shown in Figure 2-8. When a positive gate voltage is applied, the n-type layer is fully depleted and the charges are collected at the potential minimum. Because the charge packet is kept away from the Si/SiO2 interface, the overall transfer efficiency is increased by minimizing the signal loss due to charge trapping. With the highly customized process lines used to manufacture CCDs, this sensor is reported to operate with a 99% transfer efficiency [16].


Figure 2-8. Buried channel CCD (BCCD).

2.2.2 Basic CCD Structures

CCD sensors come in different structures to accommodate the requirements of various applications. In this section we present the three common types of CCD architectures: Frame Transfer (FTCCD), Interline Transfer (ITCCD), and Full-Frame (FFCCD). The signal charge packets stored in each pixel are shifted to the output node located at the bottom of each column; thus the frame rate at which a CCD can operate is limited by the transfer speed.


Figure 2-9. Common CCD structures: (a) Frame Transfer (FTCCD), (b) Interline Transfer (ITCCD), (c) Full Frame (FFCCD).

When frame rate is the key requirement, for example in video cameras, the Frame Transfer CCD is the preferred choice. As shown in Figure 2-9(a), the FTCCD consists of two CCD arrays of the same size combined with a horizontal shift register at the output. The top CCD is used to collect the signal charges, while the second one is shielded from light to act as an analog memory. During signal integration, charges are collected by the top CCD; the integrated charges are then quickly transferred in parallel onto the bottom CCD. The stored signal charges in the bottom CCD are then transferred into the horizontal shift register one row at a time and read out by the output circuitry. While the signal is being transferred for readout, the top CCD can start the next image integration cycle, so the device operating speed is optimized. However, this structure suffers from a smear problem, which arises from the simultaneous integration and transfer to storage. Also, the need for two CCD areas increases the production cost.


Alternatively, the Interline Transfer CCD is among the most popular architectures used for commercial digital still cameras. Shown in Figure 2-9(b), the ITCCD is composed of photodiodes arranged in interlaced columns positioned between masked vertical-transfer CCD pixels. The photodiode is used to collect photogenerated charges while the adjacent CCD acts as an analog frame memory. Stored charges in the CCD pixels are transferred into the horizontal shift register one row at a time and read by the output circuitry. Because the photodiode is not used during the transfer cycle, the next integration cycle can begin during the transfer. With proper timing, this CCD can operate at high speed and with minimal smear problems.

The last and most important CCD design is the Full-Frame CCD. Shown in Figure 2-9(c), the FFCCD has a 100% fill factor because the entire pixel array is photosensitive; thus it is the highest-quality CCD design available on the market. The integrated charges collected by the CCD pixels are transferred in parallel onto the horizontal shift register. In this architecture there is no dedicated storage; the CCD array functions both as the charge-collection area and as an analog memory. The pixel array is shielded from light with a mechanical shutter during the transfer cycle. With an external mechanical shutter in the camera controlling the exposure, integration and charge transfer never occur simultaneously, so the smearing problem is eliminated. However, the frame rate at which this structure can operate is limited by the read-out cycle; thus this structure is mostly used for high-quality imaging and not rapid shooting.


2.3 CMOS Sensor

Like the CCD sensors, early CMOS sensors were known as passive pixel sensors because the amplification of the instantaneous photogenerated current was performed at the output of each row. In the 1990s, when the CMOS sensor began to revive, it adopted the Active Pixel Sensor (APS) design, where each pixel integrates the photocarriers locally and has built-in amplification. In a CMOS sensor, both photodiodes and photogates can be used as photodetectors. The operations of the two photodetectors are very similar and are discussed in the following two sections.

2.3.1 Photodiode Active Pixel Sensor

A simple photodiode active pixel is shown in Figure 2-10. The photodiode collects incident light in the form of integrated charge that creates a voltage/current when connected to the readout circuit. A typical pixel readout circuit consists of three transistors, ReSet (RST), Source Follower (SF) and Row Select (RS), as labelled in Figure 2-10, which control the pixel operation as described next. The readout circuit can be set to operate in voltage or current mode.


Figure 2-10. Active pixel sensor with photodiode photodetector: (a) photodiode APS circuit, (b) photodiode APS control signals.

As shown in Figure 2-10(b), there are three stages to the pixel's operation: reset, signal integration, and readout. At the start of each integration cycle, the RST transistor is pulsed high to precharge the capacitance Cx at node Vx during the time Treset (Figure 2-10(b)). Hence, the reset voltage measured at node Vx is simply VDD - VTh. The capacitance at node Vx is composed of the photodiode capacitance and the parasitic capacitances from the RST and SF transistors,

    C_x = C_d + C_RST + C_SF .    (2-7)

The capacitance of the photodiode Cd is typically 10 times larger than that of the

transistors; thus the capacitance of Cx is dominated by the photodiode.

During the signal integration cycle, the RST is turned off and the charge Q_Photo generated from the incident light partially discharges Cx over the integration cycle for an exposure of duration Tint (Figure 2-10(b)). The voltage measured at node Vx is calculated by

    V_x = Q_Photo / C_x .    (2-8)


As a first approximation, the capacitance Cd is proportional to the diode area; thus shrinking the pixel reduces both the collected light and Cx at about the same rate. Hence, Vx is approximately independent of pixel size for pixels in the 10 µm down to 2 µm range.

In the readout cycle, the RS transistor is connected to the row address bus and is turned on when the row is selected for readout. During readout, the buffered voltage at the SF transistor is placed onto the column bus and stored in the Sample-and-Hold (S/H) circuitry located at the bottom of each column. The output from the SF transistor is given by

    V_out = A · (C_s/h / C_x) + noise ,    (2-9)

where A is the voltage gain (usually < 1) and C_s/h is the capacitance of the S/H circuitry. The photodiode APS has the advantages that its control/readout cycle is very simple and that power is consumed only during the reset and readout stages.
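To give a feel for the magnitudes involved in Equations (2-7) and (2-8), the Python sketch below uses assumed example values (a 10 fF photodiode, 0.5 fF transistor parasitics and 20,000 collected electrons; these numbers are illustrative, not taken from this thesis):

    e = 1.602e-19   # electron charge (C)

    def pixel_voltage(n_electrons, c_diode_f, c_rst_f, c_sf_f):
        """V_x = Q_Photo / C_x with C_x = C_d + C_RST + C_SF (Eqs. 2-7 and 2-8)."""
        c_x = c_diode_f + c_rst_f + c_sf_f
        return n_electrons * e / c_x

    v_x = pixel_voltage(20_000, 10e-15, 0.5e-15, 0.5e-15)
    print(f"V_x swing = {v_x:.2f} V")   # ~0.29 V at the sense node for these assumed values

A swing of a few hundred millivolts on a node of a few femtofarads illustrates how strongly the conversion from collected charge to voltage depends on the node capacitance, and hence on the pixel design.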

2.3.2 Photogate Active Pixel Sensor

In a photogate APS, the photogate is used to collect the photogenerated carriers. As shown in Figure 2-11, the basic structure of the photogate pixel employs a four-transistor readout circuit. Unlike the photodiode pixel, the photogate requires two additional control lines: one for the photogate voltage VPG and one (TX) to control the transfer of charge from the potential well to the floating diffusion.


Figure 2-11. Active pixel sensor with photogate photodetector.

There are four stages to the photogate APS operation: signal integration, reset, transfer, and readout. Each of the four transistors is responsible for controlling the operation at a different stage, as shown in Figure 2-12.

Figure 2-12. Photogate operation cycle, from signal integration to readout: initial condition, (a) signal integration, (b) reset, (c) transfer, (d) readout.

The pixel operation begins with signal integration, in which a photogate voltage VPG is applied to create a potential well where photogenerated carriers can be collected (Figure 2-12(a)). Then the pixel RST transistor is turned on to remove the previous charge stored in the floating diffusion (Figure 2-12(b)). In the transfer cycle (Figure 2-12(c)), the TX gate is turned on and VPG is turned off to shift the collected charge onto the floating diffusion and the gate of the SF transistor. Finally, during the readout cycle (Figure 2-12(d)), the RS transistor is turned on and the corresponding signal voltage is read out through the SF into the S/H circuit. The voltage collected by each pixel can be calculated with

    V_signal = Q / (C_FD + C_PG) .    (2-10)

The advantage of the photogate APS is that the capacitance of the photogate is small, so in principle it should be more sensitive; however, the absorption in the gate reduces this advantage. It also consumes power during the integration phase, unlike the photodiode APS, and has a more complicated control cycle.

2.3.3 CMOS Sensor Arrays

The basic structure of a CMOS sensor is shown in Figure 2-13; it is composed of an array of active pixels, and hence this sensor is known as a CMOS APS. Because additional transistors are used to perform amplification at each pixel site, the fill factor of CMOS APS pixels (~25-30%) is smaller than that of CCD pixels (~70-90%).


Figure 2-13. Active pixel array.

Each photosite on the array is connected to the row select circuit, which is used to select a row for readout. Unlike CCDs, APSs can be randomly addressed; the row select control allows partial readout of the array, a function known as windowing. Charge packets in a CCD are read in a sequential manner, so windowing is not feasible there. The output of each pixel is connected to the S/H circuit located at the bottom of its column. Due to variations in the manufacturing process and in the reset value of each pixel, CMOS sensors generally suffer from fixed-pattern noise: each pixel has a different reset (unexposed) voltage and a different threshold in its SF transistor, creating a variation in the image characteristics. To reduce this artifact, a Correlated Double Sampling (CDS) circuit is adopted at the bottom of each column in the array. The CDS consists of a mirrored pair of S/H capacitors: one capacitor holds the reset value and the other holds the signal output. The function of the CDS is to subtract the reset value from the output signal, so that the variation in reset value and threshold voltage is suppressed.
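The idea behind CDS can be shown in a few lines of Python (a schematic model only; real CDS is an analog circuit, and the offset values below are made up):

    # Correlated double sampling: per-pixel reset offsets cancel when the two samples are subtracted.
    import random

    offsets = [random.gauss(0.50, 0.05) for _ in range(5)]   # differing reset voltages per pixel (V)
    signal = 0.30                                            # identical true exposure signal (V)

    reset_samples  = offsets                                 # sampled just after reset
    signal_samples = [off - signal for off in offsets]       # integration discharges the node (Eq. 2-8)
    cds_output = [r - s for r, s in zip(reset_samples, signal_samples)]

    print(cds_output)   # every value is ~0.30: the pixel-to-pixel offsets have been removed

Only the difference between the two samples is passed on, so offsets common to both samples, the main component of fixed-pattern noise, drop out.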


2.4 CMOS vs. CCD

As the two sensor technologies mature, there is no clear indication that one is more favourable than the other. In fact, the choice of sensor is usually determined by the application requirements, such as frame rate, image quality, power consumption, and production cost [17].

The CMOS APS design has an advantage in low power consumption because the photodiode APS does not consume power during the integration phase while the CCD does. The CMOS APS also offers possible integration with other CMOS subsystems. Hence, this sensor is well embraced by embedded applications such as mobile phones and security cameras. The frame rate of CCDs is limited by the pixel transfer speed, so higher-resolution CCDs trade off a lower frame rate. The CMOS APS, on the other hand, provides random access, which makes higher frame rates easier to achieve; hence in high-speed applications such as video cameras the CMOS APS is the preferred choice. APSs also benefit from advances in CMOS fabrication for regular circuits, so their production cost has declined. Since for large sensors the production cost of CMOS is lower than that of CCD, in recent years many of the large-area CCD sensors used in commercial DSLRs have been replaced by CMOS APS. However, the CCD, being a more mature technology, has maintained its market in small sensors (i.e. Point-and-Shoot) where a balance of imaging performance and production cost is needed. One of the main drawbacks of the CMOS APS is that each pixel has a built-in amplifier, so the fill factor is small, which limits the ability to shrink the pixel size. More importantly, amplifier variation becomes more significant when operating in low-light conditions. On the other hand, the single amplifier at the output node adopted by CCD sensors provides a higher fill factor and a uniform pixel response. The CCD's greater signal response under low-light conditions makes it suitable for many scientific applications (e.g. the Hubble Space Telescope). In addition, the large fill factor makes it easier for CCDs to achieve smaller pixel sizes; thus most PS cameras on the market use this sensor. Recently, the addition of a microlens on each pixel has allowed both CCDs and APSs to collect the same light for a given pixel size, reducing the fill-factor problem for CMOS pixels.

2.5 Digital cameras

The two main types of digital cameras available on the market are the Point-and-Shoot (PS) and the Digital Single Lens Reflex (DSLR). A typical PS uses a small sensor (e.g. 6.1 x 4.6 mm); however, the pixel count of these cameras is nearly the same as that of DSLRs. Hence, the pixels on these sensors are relatively small (1.5 - 2 µm). The pixel size differentiates the quality of the two types of cameras: as the pixel size decreases, the light sensitivity decreases as well, so PS cameras tend to suffer poorer image quality under low-light conditions. Since this class of camera targets portability, the tradeoff is in imaging performance. In addition, as shown in Figure 2-14, the optics are integrated into the camera and are usually much smaller, giving poorer optical resolution as well.


Figure 2-14. Point-and-Shoot digital still camera: (a) typical camera, (b) cross-section view.

More advanced photographers, who are concerned with better control of the camera parameters (exposure time, ISO, etc.) and require high imaging quality, will prefer the DSLR. The DSLR inherits the traditional SLR architecture, as shown in Figure 2-15. The lens system is interchangeable, and a single reflective mirror mechanism is used to project the image onto a viewfinder so the photographer can "see" through the lens. To expose an image onto the sensor, the mirror swings upward by 90°, creating a path for the light to reach the sensor. In early DSLR models, the LCD display was mainly used for quick image playback while taking pictures. However, most recent DSLR models are also equipped with a live-view option which allows the photographer to use the LCD display as the viewfinder. Advances in sensor production have allowed lower-cost DSLRs to achieve better quality images than PS cameras, with more features at a similar cost. Hence, DSLRs are growing in popularity.


Figure 2-15. DSLR digital still camera: (a) typical camera, (b) cross-section view.

2.5.1 Sensor and Pixel size

The sensor area of Point-and-Shoot cameras ranges from 28 to 51 mm², as shown in Figure 2-16. Compared to traditional 35 mm film, the small PS sensor has only 3 - 5% of the sensing area. The sensor area of a mid-range DSLR ranges from 350 to 545 mm², roughly 50% of the sensing area of 35 mm film. Sensor-size trends are being driven by two opposite applications: high-quality DSLRs and cellphone cameras. In an effort to match the image quality of film cameras, the high-end DSLRs are moving toward the full-frame sensor (36 x 24 mm), whose sensing area is equivalent to that of 35 mm film. By comparison, the increasing popularity of portable, small cellphone cameras demands the use of small sensors. The sensor employed by cellphone cameras has by far the smallest sensing area, 7.2 mm², a small fraction of the DSLR imagers (see Figure 2-16).


Figure 2-16. Various sensor sizes.

Table 2-1. Average sensor size used in various digital cameras (2008-2009).

    Camera Type        Sensor size (mm)   Pixels (MP)   Pixel size (µm)
    Full-frame DSLR    36.0 x 24.0        17.03         7.09 x 7.09
    DSLR               21.9 x 15.3        11.80         5.15 x 5.18
    PS                 6.1 x 4.6          10.25         1.17 x 1.17
    Cellphone          3.0 x 2.4          5.00          2.20 x 2.20

The main impacts of sensor size are on the angle of view and the production cost. Given the same optical system, a small sensor will have a much smaller angle of view than a large sensor, so subjects are cropped out of the image. In terms of production cost, all sensors are cut from the same size of silicon wafer, so the number produced per wafer is maximized when the sensor remains small. A typical DSLR sensor area is ~330-550 mm², roughly 3× the area of a PC processor chip (~100 mm²). Table 2-2 compares the number of dies of various sizes (various sensors and processor chips) that can be manufactured on a 300 mm wafer. The number of dies per wafer is calculated with

    Dies_per_wafer = π·(d/2)² / Die_area - π·d / √(2·Die_area) ,    (2-11)

where d is the wafer diameter; this estimates the number of chips that can be cut from a wafer.


As shown in Table 2-2, assuming there are no defects on the wafer, only ~59 full-frame sensors can be manufactured on a 300 mm wafer. On the same wafer, ~3000-9000 PS and cellphone camera sensors can be made. Moreover, as the sensor area increases, the probability of defects on a given die also increases. These faults on the wafer correspond to manufacture-time defects on the die. The fraction of good dies yielded from a wafer is called the die yield and is calculated with

    Die_yield = Wafer_yield · (1 + Defects_per_area · Die_area / α)^(-α) .    (2-12)

The parameter α measures the manufacturing complexity, which is related to the masking level; for a CMOS process, α = 4. As shown in columns 3 and 4 of Table 2-2, the small sensors have a typical die yield >90%, so only <10% of the dies will be discarded due to defects. As the sensor size increases, the die yield decreases to less than 50% for the average DSLR sensors (APS-C/H) and to 13% for the full-frame sensors. Hence, for these large-area sensors, over 50% of the dies are discarded due to defects. The low die yield shows that out of the 59 full-frame sensors cut from the 300 mm wafer, only 8 sensors are usable. As the production per wafer decreases, the cost of each die increases. The cost per die is calculated as

    Cost_of_die = Cost_of_wafer / (Dies_per_wafer · Die_yield) .    (2-13)

Assuming a 300 mm wafer costs $1000, the price of each full-frame sensor is ~$124, which is roughly 300 times more than that of the small sensors (~$0.3).


Table 2-2. Comparison of die cost on a 300 mm wafer.

    Chip                 Chip size (mm²)   Dies/wafer   Die yield (%)   Good dies/wafer   Cost of die ($)
    Full-frame DSLR      864.00            59.14        13.56           8.02              124.72
    APS-H DSLR           545.30            101.09       25.37           25.65             38.99
    APS-C DSLR           330.00            177.51       41.29           73.29             13.64
    Average PS           21.20             3189.50      93.88           2994.46           0.33
    Cellphone cameras    7.20              9569.11      97.87           9365.18           0.11
    Intel Core i7        263.00            227.67       48.67           110.81            9.02
    Intel Core 2         107.00            596.19       73.43           437.81            2.28
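Equations (2-11) through (2-13) are easy to reproduce in a few lines of Python. The sketch below is illustrative only; the defect density of 0.003 defects/mm² (0.3 per cm²) is an assumed value chosen so that the results land close to the full-frame row of Table 2-2:

    import math

    def dies_per_wafer(wafer_d_mm, die_area_mm2):
        """Eq. (2-11): gross dies from a circular wafer of diameter d."""
        return (math.pi * (wafer_d_mm / 2) ** 2 / die_area_mm2
                - math.pi * wafer_d_mm / math.sqrt(2 * die_area_mm2))

    def die_yield(die_area_mm2, defects_per_mm2, alpha=4, wafer_yield=1.0):
        """Eq. (2-12): fraction of good dies; alpha = 4 for a CMOS process."""
        return wafer_yield * (1 + defects_per_mm2 * die_area_mm2 / alpha) ** (-alpha)

    def cost_of_die(wafer_cost, wafer_d_mm, die_area_mm2, defects_per_mm2):
        """Eq. (2-13): wafer cost divided over the good dies only."""
        good = dies_per_wafer(wafer_d_mm, die_area_mm2) * die_yield(die_area_mm2, defects_per_mm2)
        return wafer_cost / good

    print(dies_per_wafer(300, 864))              # ~59 gross full-frame dies per 300 mm wafer
    print(die_yield(864, 0.003))                 # ~0.14 die yield
    print(cost_of_die(1000, 300, 864, 0.003))    # ~$125 per good full-frame sensor

The same three functions reproduce the other rows of the table when the corresponding die areas are substituted.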

Although large sensors tend to have better sensitivity due to the larger photosensitive area, this performance is highly dependent on the pixel size. As mentioned before, the high pixel count on the small sensors is achieved by shrinking the pixel size, and the tradeoff with small pixels is imaging performance [18]. Shrinking the pixel reduces the photosensitive area, which implies that the photo-collection capacity is reduced. This results in a lower dynamic range, as the pixel saturates at a much lower value. In terms of the Signal-to-Noise Ratio (SNR), noise signals are minimized with the new sensor technologies; however, the accumulation of dark current increases with the exposure duration. With the decline of the well capacity in small pixels, the SNR decreases, and this impact is more significant in long-exposure images.

2.5.2 Color filter array sensors

As fabricated, pixels are monochromatic; they do not generate color information. To capture color images, the sensor must classify the wavelength of the incoming light, a mechanism called color separation. The most common approach used to create color images is the Color Filter Array (CFA), as shown in Figure 2-17(a); sensors that use the CFA design are also known as CFA sensors. This color separation method utilizes an array of transparent filters positioned above the image sensor such that only the desired wavelength range reaches a given photosite. Because each pixel on the CFA sensor collects information only from a given wavelength range, light at other wavelengths is discarded. The most common color filter pattern used in commercial imagers is the Bayer mosaic pattern, which consists of red, green and blue filters, as shown in Figure 2-17(b). The Bayer pattern uses two green samples because the human visual system is most sensitive to this wavelength range; however, the arrangement of the colors in the array differs among manufacturers.

Figure 2-17. Color filter array sensor: (a) microlens color filter array, (b) Bayer mosaic pattern.

The Bayer pattern is intended for RGB color system images; each output pixel is composed of the measured intensities of the red, green and blue wavelengths to produce the true color. However, in the CFA sensor each photosite records only one of the three colors. Thus the output from the CFA sensor requires a software interpolation algorithm, known as demosaicing, to estimate the two missing color values at each photosite from the surrounding pixels; this will be covered in more detail in chapter 3. The main problem for images captured using CFA sensors is the creation of a Moiré pattern, a distortion caused by interpolation errors of the missing colors. Another disadvantage is the reduction in the overall sensitivity of the sensor: the output of the CFA sensor retains 50% of the intensity at the green wavelengths and 25% at the red and blue wavelengths. Hence, large amounts of information are discarded, which reduces the sensitivity of the sensor. To address this problem, Kodak recently presented an alternative color filter pattern which replaces some pixels with no-filter areas, also known as panchromatic pixels [19]. The advantage of having no filter is that no photons are lost, providing an increased luminance channel to the output image. This creates a higher sensitivity and allows the use of faster shutter speeds under low-light conditions.
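The per-channel sampling fractions quoted above follow directly from the repeating 2x2 Bayer tile. A short Python sketch (a generic RGGB tiling for illustration; as noted, the exact arrangement varies by manufacturer) makes this explicit:

    def bayer_mask(rows, cols):
        """Per-photosite color for an RGGB-tiled color filter array."""
        tile = [["R", "G"], ["G", "B"]]
        return [[tile[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

    flat = [color for row in bayer_mask(4, 4) for color in row]
    for color in "RGB":
        print(color, flat.count(color) / len(flat))   # R 0.25, G 0.5, B 0.25

Each photosite keeps only its own channel, which is exactly the 50% green / 25% red / 25% blue retention described in the text, and is why demosaicing must estimate the two missing values at every site.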

CFA sensors need color interpolation to recover the missing color channels at each pixel site. Current demosaicing algorithms neglect the presence of defects on the sensor, so faults are treated as normal pixels. The use of faulty values in demosaicing results in larger interpolation errors and widens the defective area. The demosaicing algorithms are proprietary and vary among manufacturers. Because this process is irreversible, in-field defect correction is needed to create a more robust digital imager. A detailed examination of the impact of demosaicing on defects will be given in chapter 3.

2.6 Camera operation

The basic image processing pipeline in a digital camera system is shown in Figure 2-18. Each image captured by the sensor produces a raw image. The raw-format image is simply the direct output from the sensor before demosaicing. As shown in Figure 2-18, the raw image output from a CFA sensor consists of a 12-14 bit measure of one of the three color channels (i.e. R, G or B) at each photosite. The raw format is available only in DSLRs and some high-end PS cameras. To generate a color image, demosaicing is applied first; then white balance, sharpening, noise reduction and so on are executed to improve the image quality. Usually this digital chain is executed on 12-14 bit values to maintain high precision in the algorithms. At the end of the process, the image undergoes an 8-bit conversion and JPEG compression to reduce the file size. The JPEG compression produces a smaller file than raw, but it freezes all of that processing into the image, so it cannot be changed at a later date. Some professional photographers shoot in raw format, as they then have the option of applying a customized processing pipeline with photo-editing software. The entire processing pipeline has no knowledge of possible defective pixels on the sensor; hence, defective pixels will cause errors in all imaging functions.

Figure 2-18. Basic image processing operation.


2.6.1 ISO amplification

The camera ISO rating in film cameras is known as the sensitivity of the film, or film speed. In this sense, a film with a low ISO speed rating has less sensitivity and requires longer exposure times. Consider the case of a camera with a fixed f-number (aperture): if an image is captured at ISO 100, the same image can be produced at ISO 200 by halving the exposure time. Unlike film systems, the sensitivity of a digital camera system depends on the properties and settings of the sensor, the noise level and the image processing functions. The sensitivity of an image sensor measures the ratio between the input illumination and the output signal level. However, due to the various processing functions and the additional noise from the mixed-signal system, the overall sensitivity observed in the output image will be different. The ISO setting is specified by the manufacturer such that the image produced is comparable to the pictures created by film cameras at the same ISO.

In digital camera systems, multiple ISO speeds can be achieved by changing the amplification of the signal output from the sensor. The gain can be applied to the analog output from the sensor or by bit-shifting the output from the A/D converter. Gain is applied to all pixels regardless of the possible presence of defects in the sensor; with such amplification applied to the defects, faulty pixels become more visible. In chapter 3 we will discuss in detail the impact of ISO amplification on defects in image sensors.
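As a simple model of this (a sketch only; real cameras apply gain in the analog chain and their exact pipelines are proprietary), the gain is just a uniform multiplier, or equivalently a left bit-shift for power-of-two ISO steps, and it scales a defective pixel's output along with everything else:

    # Hypothetical 12-bit A/D counts; the 600 models a hot pixel on an otherwise uniform scene.
    raw = [120, 118, 122, 600, 121]
    iso_gain = 4                                            # e.g. ISO 400 relative to a base of ISO 100

    amplified   = [min(v * iso_gain, 4095) for v in raw]    # multiplicative gain, clipped to 12 bits
    bit_shifted = [min(v << 2, 4095) for v in raw]          # the same x4 gain as a 2-bit left shift

    print(amplified)      # [480, 472, 488, 2400, 484]
    print(bit_shifted)    # identical; the hot pixel now stands out four times as far from its neighbours

This is why, as discussed in chapter 3, defects that are barely noticeable at base ISO can become prominent at high ISO settings.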


2.7 Defects in image sensors

Defects are known to develop over the lifetime of microelectronic devices. There are two main types of errors: soft errors and hard errors. A soft error is a single-event upset causing an instantaneous change of the pixel sensor's state; this fault is related to bit errors, so the state can be recovered after a reset or when the error is overwritten. Hard errors, on the other hand, are permanent damage, and the change of state is unrecoverable. The defects of concern in digital imagers are permanent damage and will change the state of the pixel for all future images. These defects are irreversible, so the accumulation of defects eventually requires replacement of the sensor; with film cameras, by contrast, defective film could simply be replaced with a new roll. Faulty pixels that occur prior to shipment of the camera are known as manufacture-time defects, and faults that develop after leaving the factory are known as post-fabrication or in-field defects. In this thesis we focus on defects developed while the imagers are operating in the field. The two main defect mechanisms that induce faulty pixels in digital imagers are categorized as material degradation and external stress; in the following sections we discuss the two mechanisms in detail. It is important to note that defects caused by manufacturing processes do exist. However, prior to shipment, the manufacturer performs a calibration to map out the defects on each sensor; the defect map is stored in the camera, and imaging functions such as interpolation or dark-frame subtraction are used to hide the presence of these defects.


2.7.1 Material degradation

Material degradation is associated with the reliability of the semiconductor

devices. This is mainly due to the alteration of the intrinsic properties of the

structural layers of the sensor such as the bulk silicon or the gate oxide[20],[21].

The decay of the materials of the sensor can lead to issues ranging from the

minor case of erroneous output to the more serious failures of a malfunctioning

device.

Gate oxide thinning is one of the common issues found when reducing the

device dimension to make smaller pixels. Trimming down the thickness of the

oxide layer usually leads to a faster wear-out rate; hence, the device becomes

more sensitive to damage. Any catastrophic event such as a sudden spike in voltage/current or a static discharge can cause a permanent breakdown in the dielectric material. Alternatively, the breakdown can simply be the decay of the

insulator material, also known as time-dependent dielectric breakdown. The

degradation of the dielectric material usually occurs at local weak oxide points

due to defects. The defects found in oxide film are related to local poor

processing conditions such as when impurities are introduced while growing the

oxide. The breakdown of the gate oxide will form a conduction path from the gate to the substrate, so the current flow between the drain and the source can no longer be controlled. In circuits, the excessive leakage will increase the standby power dissipation and decrease the circuit speed. For APS pixels, gate leakage will cause the reset charge on the gate to decay, resulting in a time-related change in the pixel value.


Hot carriers are another problem commonly found in sub-100 nm semiconductor devices. In pMOS devices, hot carriers refer to high-energy holes, and in nMOS devices to high-energy electrons. When these carriers are accelerated by the large electric fields of small devices, they gain enough energy to be injected and trapped in the gate oxide or at the insulating oxide interface, permanently affecting the space-charge layer. The

build-up of trapped charges in the transistors will cause a decrease of the drain

current or a shift in the transistor threshold voltage.

Electromigration is another failure mechanism which is caused by the

diffusion of metal atoms in the wires due to the force from the electron flow of the

current. Migration of the metal atoms will cause build-up of atoms at the positive

end of a wire while leaving vacancies at the negative end. This literally wears away or narrows the line, especially at thin areas such as steps over other layers.

Under this condition, the current passing through the interconnection will be

subjected to an increase in electrical resistance or even a broken conductor. The

local heating may cause dielectric cracking which will result in shorting between

adjacent metal lines. Another problem is a change in material composition due to changes in the chemical or crystal structure arising from temperature or other environmental conditions.

Many of these failure mechanisms are triggered by defects or

contaminations from the fabrication process. The defects introduced in the

material will create breakdowns while the devices are operating in the field.

Since the defects are localized in small areas, failures from material degradation


will usually result in local defect clusters. In addition, the occurrence of such

defects will increase exponentially with time[22]. Ensuring clean fabrication conditions and following good design rules will help reduce many of these future breakdowns.

2.7.2 In-field defect mechanisms

The development of defects after fabrication (i.e. in the field) is also related to the environment in which the device is employed. These types of defects are directly related to the reliability and robustness of the system. The two common types of external stress that cause damage in microelectronic devices are electrical

and radiation stress. Each of these mechanisms will be discussed in the

following sections.

2.7.2.1 Electrical Stress

For regular (non-imaging) devices a common post-fabrication failure is due to electrical stress. The two common electrical stress sources, categorized by the way the supply voltage is disturbed, are Electrical Over-Stress (EOS) and Electrostatic Discharge (ESD). EOS is associated with an over-voltage or over-current stress that lasts for a relatively long duration (>1 µs). This type of stress generally occurs during normal circuit operation. It may arise from a voltage applied in reverse-bias operation, a relay operation, or a power supply variation. However, an external factor such as a lightning surge will also induce

stress to the circuitry. The second type of electrical stress, ESD, as implied by

the name, is triggered from a static discharge in the working environment. The


static build-up, when applied to the device, will discharge through the lowest-resistance path. Hence, the circuit will experience a high current pulse for a short duration (1 ns – 1 µs). The high transient voltage will cause permanent damage to the thin dielectrics (gate oxide), which results in an increase of leakage current. These defects are triggered by specific events or working conditions; thus, they are usually associated with hot-spot development. The spatial distribution of defects developed from this mechanism is usually random and does not have a constant failure rate. Electrical stress is less common for cameras as they

usually operate on battery supplies.

2.7.2.2 Radiation Stress

Another defect source which affects all microelectronic devices is radiation

from sources such as cosmic rays or radioactive materials in the environment.

This mechanism is more severe when the device is employed in space

applications where radiation levels can be extreme. Several literature

studies[23]-[25] have shown the damage in image sensors working in a harsh

radiation environment. The accumulation of defects will significantly degrade the

image quality and limit the lifetime of the device. Radiation damage is not limited to spaceborne applications: terrestrial radiation (i.e. cosmic rays) has often been reported as the main cause of damage found in transistors, processors, RAM, etc.[26],[27]. Studies by Theuwissen[28],[29] have detailed observations of cosmic ray damage on imaging sensors operating in the terrestrial environment.


Cosmic rays are composed of high-energy particles, mostly protons, and

are categorized into primary and secondary rays. The primary rays come from

the sun and solar flares striking the Earth’s atmosphere. Another source is the

high-energy particles from outside the solar system. Most of the primary rays are deflected by the Earth’s magnetic field, or hit atoms in the air and decay or are absorbed by the atmosphere. Hence, less than 1% of the particles from the primary rays will reach the Earth’s surface. The lower-energy remnants that reach the Earth’s surface are called secondary cosmic rays. The cosmic rays measured at sea level are thus the secondary rays. These consist of neutrons,

protons, pions, muons, electrons and photons[30]. The magnetic field, which shields the Earth from many charged particles, weakens toward the poles. Hence, the density of the cosmic rays varies with altitude and latitude. A study from IBM[30] has shown that the cosmic ray flux increases exponentially with elevation; the particle flux is about 10x higher for a 10 km increase in altitude. The peak particle density in the terrestrial environment occurs at an altitude of about 15 km above sea level, which is near the cruising altitude of airplanes. The radiation level for trans-Atlantic or trans-Pacific flights is about 100x higher than at ground level.

The cosmic ray damage often reported in processors, RAM, etc. consists of soft errors. These Single Event Upset (SEU) faults can be detected and corrected with fault-tolerant algorithms. Permanent defects are generally much less common in these digital devices, but they do occur. The analog nature of image sensors makes them more sensitive and vulnerable to cosmic ray damage.


Hence, the energetic particles such as neutrons, electrons and protons will cause

permanent damage to the optoelectronic devices.

The failure of a pixel is usually due to permanent ionization damage and

displacement damage. Ionization damage is related to the generation of

electron-hole pairs in the insulator material. Creation of these carriers in the

dielectric will cause an accumulation of trapped charges in the oxide interface.

The effect on pixels with such damage is a shift in the threshold voltage and an

increase of the dark current. Hence, the noise level of the pixel will increase and

the dynamic range will be reduced.

Displacement damage is the result of collisions in the silicon substrate

crystal by energetic particles which causes the displacement of an atom. The

displacement will either leave vacancies in the lattice or the original position of

the atom will be replaced by the bombarding ion. The displacement of atoms is

simply a defect in the silicon lattice and will disturb the intrinsic properties of the

bulk silicon creating effects such as a new localized energy level in the forbidden

energy bandgap. Additional energy levels will affect the mobility of carriers and promote thermal generation of electron-hole pairs. The most prominent effect is

the increase in the dark current level.

Both ionization damage and displacement damage affect the CCD and

APS sensors. One of the main requirements in CCD operation is near-perfect transfer efficiency. Even a small generation of surface trap charges will significantly affect the transfer efficiency of the CCD sensor. Although the

inversion operation can reduce the surface charges from this ionization damage,


traps in the bulk silicon due to displacement damage are harder to resolve. The

CMOS APS supports x-y addressing; hence charge transfer loss in this architecture is not as significant. However, the excessive dark current level is the dominant effect in both sensors because these charges are integrated during the exposure cycle. As we will see, this creates hot pixel effects in these sensors.

2.8 Summary

The photodetector is the basic building block in the digital image sensors.

Optically generated electron-hole pairs are collected with a reverse biased PN

junction (photodiode) or in a depleted well (photogate).

In a CCD sensor, a series of closely spaced MOS capacitors will shift

collected charge packets sequentially for readout. The CMOS sensor uses

photodiodes or photogates whose signal is integrated on an amplifier transistor to

create an active pixel sensor. The additional transistor at each pixel reduces the fill factor of the APS pixel. However, the x-y addressing architecture used in the CMOS APS makes it easy to support windowed output and faster frame rates.

The two main types of cameras that employ digital image sensors are PSs and DSLRs. The pixel and sensor size of these cameras differentiates their quality in terms of dynamic range and noise. Recently, the popularity of the cellphone camera market has created new demand for small sensor and small pixel designs.


Defects in microelectronic devices are the main factor limiting the reliability and robustness of the device. Faults on image sensors are permanent damage and will degrade the image quality of the sensors. The two main sources of faults in microelectronic devices are material degradation and in-field defect mechanisms. Material-related defects are triggered by design limitations and contamination during the fabrication process. These defects weaken the device, resulting in early material breakdown. The post-fabrication defect mechanisms include electrical stress (i.e. applied voltage, static discharge) and radiation. Radiation stress is not limited to the spaceborne environment. Terrestrial radiation, in the form of cosmic rays, consists of high-energy particles that can cause ionization or displacement damage in the oxide layer and bulk silicon. The resulting accumulation of interface traps and excess leakage current will affect both the CCD and CMOS APS sensors.

Each defect causal mechanism exhibits different characteristics; in Chapter 4, a detailed analysis of the spatial distribution and growth rate of defects will help pinpoint the defect source in the digital image sensors.


3: TYPES OF DEFECT IN DIGITAL CAMERAS

Observations of imager defects have been reported on many camera

forums. However, few studies have been done to understand the mechanisms behind the development of these in-field defects in commercial cameras. In

addition, the impact and characteristics of these faults under normal camera

operations have not been addressed. Most research studies on imager defects

were related to sensors employed in space applications[23]-[25]. Although radiation was claimed to be the main source of sensor defects in the spaceborne environment, imagers operating in the terrestrial environment also experience defects, and the source had not been previously identified.

In this study we will focus on characterizing and modelling the defects

observed in commercial imagers (i.e. DSLR, PS and cellphones). In this chapter,

we will be presenting the types of faults found in commercial sensors and the

characteristics of each of these defects. Then we will discuss some customized

laboratory techniques that are used to identify defects from commercial cameras.

The majority of commercial cameras adopt the CFA design, which requires demosaicing to generate color images. In the second part of this chapter we will

present a study on several demosaicing algorithms and analyze the impact of

this imaging function on faulty pixels.


3.1 Defect Identification on Digital Cameras

A typical pixel response from the sensor of interest is shown in Figure 3-1.

Under the illumination of a light source, the output of the pixel will increase

linearly with respect to the duration of the exposure time, also known as the

shutter speed. The maximum output of the pixel is limited by the saturation level

of the photocarriers collection. In an 8-bit color pixel system, 0 represents a dark

pixel and 255 is a fully saturated white pixel. For simplicity, in the remainder of

this thesis we will use a normalized scale where the pixel output range is from 0

to 1 with 0 being a dark pixel and 1 being a white pixel. The typical portion of the

operation of a pixel can be modelled by

I_Pixel(R_Photo, T_exp) = m · (R_Photo · T_exp),  for T_exp ≤ T_sat        (3-1)

I_Pixel = I_sat,  for T_exp > T_sat        (3-2)

where R_Photo is the incident illumination rate, T_exp is the exposure time, T_sat is the exposure at which the output reaches saturation (I_sat), and m is the numerical gain controlled by the ISO setting.
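The piecewise model of Equations (3-1) and (3-2) can be written directly as a short function. This is only an illustrative sketch of the model above, with made-up parameter values, not code from the measurement software used in this work.

```python
def pixel_response(r_photo, t_exp, m=1.0, i_sat=1.0):
    """Normalized pixel output per Equations (3-1)/(3-2):
    linear in exposure until saturation, then clipped at I_sat."""
    i_pixel = m * (r_photo * t_exp)
    return min(i_pixel, i_sat)

# Example: with m*R_photo = 0.5 the pixel saturates at T_sat = 2 s.
print(pixel_response(r_photo=0.5, t_exp=1.0))   # 0.5, still in the linear region
print(pixel_response(r_photo=0.5, t_exp=4.0))   # 1.0, saturated
```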

Figure 3-1. Pixel response to optical exposure.

Faulty pixels on the image sensor fail to sense light properly. Several types of defects have been reported in a previous study[31] and in photographer forums. Faults


are categorized into two types. The first type fails to respond to light completely; these are called fully-stuck defects. The second type is still responsive to light but fails to give a proper measurement. The characteristics

of some of the commonly known defects are summarized in Table 3-1.

Table 3-1. Characteristics of defect types

Defect type | Output function | Description | Responsive to light
Stuck high | f(x) = 1 | Appears as a bright pixel at all times | No
Stuck low | f(x) = 0 | Appears as a dark pixel at all times | No
Partially-stuck | f(x) = x + b | Constant offset 0 < b < 1 | Yes
Hot pixel (standard) | f(x) = x + R_dark·T_exp | Illumination-independent offset (i.e. I_dark) that increases linearly with exposure time | Yes
Hot pixel (partially-stuck) | f(x) = x + R_dark·T_exp + b | Two illumination-independent offsets: (1) R_dark, which increases with exposure time; (2) b, a constant offset | Yes

Figure 3-2. Fully and Partially stuck defects

3.1.1 Stuck defects

The most commonly known faults are the stuck defects (see Figure 3-2).

Pixels classified as fully-stuck faults are no longer sensitive to incident

illumination; these pixels will either be fully saturated (stuck-high), or fully dark

Page 75: MEASUREMENT AND ANALYSIS OF DEFECT DEVELOPMENT IN …summit.sfu.ca/system/files/iritems1/11917/etd6552_JLeung.pdf · temporal distribution and identify the defect causal source. The

59

(stuck-low). Shown in Figure 3-2 are the three types of stuck defects. A fully-stuck low defect will appear as a dark pixel in all images while a fully-stuck high defect

will always appear as a white pixel. Another less commonly discussed type of

stuck defect is the partially-stuck pixel. This type of faults is still sensitive to light,

but operates with a fixed offset. As shown in Table 3-1 the output function of a

partially stuck pixel is modelled with the offset b (for 0 < b <1). This offset is

added onto the measured illumination; hence the pixel will appear brighter than

normal, as shown in Figure 3-2. More importantly, the offset reduces the dynamic range of the pixel; in other words, such a pixel will reach saturation at a much shorter exposure.

In the work of [32], a detailed study was conducted to identify fully-stuck and partially-stuck defects in a collection of commercial cameras; however, no trace of such faults was found. Because stuck defects can be differentiated easily from good pixels, these faults are often identified at fabrication time. For most commercial digital cameras (i.e. DSLR, PS), the sensors are calibrated prior to shipment; thus these manufacture-time defects are removed. However, due to the tight production costs of low-end cameras such as those in cellphones, mapping of these defects might not be done.

3.1.2 Hot Pixels

Hot pixels are another type of defect observed in digital imagers. Different

from stuck defects, hot pixels were seen to develop while these cameras are

operating in the field. As shown in Table 3-1, this type of faulty pixel will continue to sense light; however, it has an additional illumination-independent component


known as dark current, R_Dark, which increases linearly with exposure time (see

Figure 3-3). Hence, like the partially-stuck defects, the output of hot pixels will be

higher than good pixels for the same illumination intensity. There are two

classes of hot pixels: standard and partially-stuck. A standard hot pixel is most visible at long exposures, as the dark current component increases with the integration time.

Figure 3-3. Normalized pixel dark response vs. exposure time of (a) good pixel, (b) partially-stuck, (c) standard hot pixel, (d) partially-stuck hot pixel.

Shown in Figure 3-3 is the comparison of pixel dark output from a good,

partially-stuck defect, standard and partially-stuck hot pixel under no illumination

(i.e. the dark response). As demonstrated by Figure 3-3(a), without light, a good

pixel should be black over any exposure range. For a partially stuck defect,

Figure 3-3(b), the offset is exposure independent; hence the offset is constant

over the entire exposure range. Figure 3-3(c) shows the output of a standard hot

pixel. Since R_Dark increases linearly with time, the output will increase over the

integration time even with no illumination. Figure 3-3(d) models the output of a

Page 77: MEASUREMENT AND ANALYSIS OF DEFECT DEVELOPMENT IN …summit.sfu.ca/system/files/iritems1/11917/etd6552_JLeung.pdf · temporal distribution and identify the defect causal source. The

61

partially-stuck hot pixel. Like the standard hot pixel, its R_Dark component increases linearly with the exposure time, but, like a partially-stuck defect, it also has a constant offset b. Hence, the partially-stuck hot pixel will have the shortest saturation time. A point to keep in mind: the offsets modelled in Figure 3-3 are added onto the illumination charges collected by the pixel. Hence, these plots also demonstrate the reduction of the pixel dynamic range. A typical hot pixel response can be modelled as

I_Pixel(R_Photo, R_Dark, T_exp, b) = m · (R_Photo·T_exp + R_Dark·T_exp + b),        (3-3)

where R_Photo is the incident illumination rate, R_Dark is the dark current rate, and b is the offset. Thus, the combined offset due to the defect parameters, I_Offset, can be modelled using

I_Offset(R_Dark, T_exp, b) = m · (R_Dark·T_exp + b),        (3-4)

where R_Photo is zero.
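The hot pixel model of Equations (3-3) and (3-4) is equally easy to express as code. The following is a minimal sketch under the same assumptions as the previous example (illustrative parameter values only).

```python
def hot_pixel_response(r_photo, r_dark, b, t_exp, m=1.0, i_sat=1.0):
    """Hot pixel output per Equation (3-3): the illumination term plus the
    exposure-dependent dark-current term R_dark*T_exp and the constant offset b."""
    i_pixel = m * (r_photo * t_exp + r_dark * t_exp + b)
    return min(i_pixel, i_sat)

def dark_offset(r_dark, b, t_exp, m=1.0):
    """Combined defect offset per Equation (3-4), i.e. the dark response (R_photo = 0)."""
    return m * (r_dark * t_exp + b)

# A partially-stuck hot pixel is visibly bright even in a dark frame at long exposure:
print(dark_offset(r_dark=0.2, b=0.1, t_exp=2.0))   # 0.5 on the normalized scale
```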

3.2 Defect Identification Techniques

There are two main calibration techniques used to map defects from

sensors: Dark-field, and Flat-field (Light-field) calibration. Each calibration is

responsible for finding different types of defects. For example, the dark-field

calibration is an image captured by the sensor in the absence of light. Thus, this calibration is mainly used to test for any bright defects. Similarly, the flat-field calibration is an image captured with a uniform light source such that all pixels will be at

or near saturation, and this technique is used to test for dark defects. However,

the creation of a uniform light source requires a very customized and difficult


setup which is not feasible for home testing. In a previous study[32], no stuck-low defects were reported; hence, in this study we will focus only on finding bright defects (i.e. stuck-high, partially-stuck and hot pixels).

In most defect studies, the focus is on analyzing the magnitude of the dark

current and its fluctuations over the varying temperature of the sensor[33][34].

Different from these studies, our calibrations aim to extract information such as the quantity of defects, the magnitude of the defect parameters, and their spatial locations on each commercial imager. The information collected from these

calibrations will serve as the data in the characterization of the defect source and

the growth rate of defects over the sensors’ lifetime.

In this study, we will be analyzing three types of cameras: commercial

DSLRs, PSs and cellphones. The user control functions available on these

cameras vary. For example, DSLRs offer explicit control of exposure settings, a wide ISO range, and output in the raw image format. By comparison, the PSs

and cellphones have limited manual controls and only offer jpeg output. Each of

these controls will affect our calibration procedure; thus the techniques used are tailored to the settings available on these cameras. In the following two sections, we

will describe the basic procedure used to calibrate these commercial cameras.

3.2.1 Bright Defects Identification Techniques for DSLRs

Commercial DSLRs are commonly used by more advanced/professional photographers or those who are concerned with imaging performance. This type of camera provides more settings so that photographers can change


parameters to achieve the best image quality one desires. In particular the raw

format function available on these cameras provides the best scenario to perform

digital image editing. The raw images contain direct output from the sensor;

hence defects have not been permanently altered by the imaging pipeline. Also

explicit exposure, aperture and ISO adjustments allow calibration to be carried out in a well-controlled manner.

To identify bright defective pixels such as hot pixels, stuck-high, and

partially-stuck defects, dark frame calibration will be the ideal procedure. Dark

frame calibration is performed under dark conditions. The camera is placed in the dark such that the sensor is not exposed to any light source; hence, any bright defects can be identified easily from the dark output. Our basic calibration procedure is as follows:

• Adjust the output format to raw.

• Disable any noise reduction settings, flash, and picture rotation.

• Keep the ISO constant (e.g. 400).

• Capture images at increasing exposures from 1/100 s to 2 s.

The camera gain used during the calibration is usually ISO 400, where the noise level is negligible for the DSLRs. In our calibration, we want not only to verify the existence of these faults, but also to estimate the magnitude of the defect parameters (i.e. R_Dark, b). To test for hot pixels, multiple calibration

images are taken at increasing exposure levels.

To identify the bright defects we applied a threshold test to find all pixels

with an output above the noise level. The noise signal increases with the ISO

amplification; hence the threshold value needs to be adjusted. Shown in


Figure 3-4 is the plot of noise level versus ISO for two DSLR cameras. The

variation in the noise can be approximated using an exponential regression fit,

y = A·exp(B·x),  where  x = log2(ISO / 50),        (3-5)

where y is the measured noise and x is the number of doublings of the ISO above ISO 50 (e.g. ISO 400 = 2^3 × 50, so x = 3). To compensate for the variation of the noise level

in different sensors, we use the average of the A and B values (i.e. A = 0.8 and

B = 0.2), estimated from several DSLRs.

Figure 3-4. DSLR noise level (standard deviation of luminosity) vs. ISO setting for two cameras, with exponential fits (data from camera analysis at dpreview.com [35]).

The pixel output from each calibration image can be used to generate a

dark response plot, as shown in Figure 3-3. Both defect parameters (R_Dark and b) can then be estimated with a linear fit.
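The following Python sketch illustrates how the ISO-dependent threshold of Equation (3-5) and the linear fit of the dark response could be combined. It is only a hedged illustration on synthetic data: the function names are ours, the averaged constants A = 0.8 and B = 0.2 are those quoted above, and the conversion of the fitted noise level to the normalized 0–1 scale (dividing by 255) is an assumption about the units of Figure 3-4.

```python
import numpy as np

def iso_threshold(iso, a=0.8, b=0.2):
    """Noise-based detection threshold per Equation (3-5): y = A*exp(B*x),
    with x the number of ISO doublings above ISO 50."""
    x = np.log2(iso / 50.0)
    return a * np.exp(b * x)

def fit_defect_parameters(exposures, dark_values):
    """Linear fit of a pixel's dark response: the slope estimates R_Dark,
    the intercept estimates the constant offset b."""
    r_dark, offset = np.polyfit(exposures, dark_values, deg=1)
    return r_dark, offset

# Illustrative use for one suspect pixel at ISO 400 (synthetic data):
t_exp = np.array([0.01, 0.1, 0.5, 1.0, 2.0])
dark = 0.15 * t_exp + 0.05                      # behaves like a partially-stuck hot pixel
noise_norm = iso_threshold(400) / 255.0         # assumed: fitted noise is in 8-bit counts
if dark.max() > noise_norm:                     # in practice a multiple of the noise level
    print(fit_defect_parameters(t_exp, dark))   # ~ (0.15, 0.05)
```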

3.2.2 Bright Defects Identification Technique for cellphone cameras

Different from the DSLRs, both PS and cellphone cameras have limited

manual controls. In particular, the inability to capture images in the raw format

restricts our calibration to the use of the jpeg compressed color images


generated by the imaging pipeline. Hence, defects will be distorted by the irreversible imaging functions, which increases the difficulty of identifying the pixel location. Although some advanced PS cameras have a higher exposure limit and

allow explicit exposure control, these features are generally not available for

cellphone cameras. To overcome these challenges imposed by these types of

cameras, the calibration procedure used for DSLRs will need to be modified.

Data from these cameras is important as they have the smallest pixel and sensor

areas in our measurement set.

When no explicit exposure controls are available, the exposure compensation will be used to maximize the allowed integration time, which is often less than 1 s. This is an internal camera setting that controls the tradeoffs the camera makes between aperture and exposure time, up to the camera's longest permitted

exposure. Otherwise, images will be taken at variable exposure times. As the

raw image format is not available on these cameras, all calibration images are

taken in the jpeg format. Again images will be captured under dark room

conditions where the tested cellphone or PS is fully shielded from any light

sources. To identify the bright defects, a threshold test will be used. However, due to the smearing of defects by the imaging functions, a single-pixel fault will appear as a defect cluster in the color images, as demonstrated in Figure 3-5, which plots the intensity versus pixel x, y position in the final demosaiced and jpeg-compressed image from one of our tested cellphone cameras. In this case we assume that the fault

is an isolated defect. This assumption can be justified because, as shown in Chapter 4, none of the DSLR raw files show two defects as nearest neighbours


or clustered. The mesh plot in Figure 3-5 shows that for a single defect on a red

pixel site, we will observe a defect cluster on the red color plane while little effect is seen in the blue and green planes. A simple threshold test will produce duplicate detections of a single defect due to the clustering of these faults. To eliminate

such false detections, each cluster will be mapped with our software tool. Then

the location of the pixel will be estimated by the peak of the defect cluster. Also

to eliminate false detections arising from noise, multiple (typically ~6) images are captured and a defective pixel is declared only if it appears in at

least 3 of the calibration images. The calibration of these images is performed at

a fixed exposure level; hence the magnitude of the defect parameters cannot be

estimated. We will not be able to distinguish whether a defect is a hot pixel or a partially-stuck defect. Rather, we can only conclude that the identified faults are bright defects, and record their locations and numbers.
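A compact sketch of this cluster-peak detection with multi-image voting is shown below. It is not the software tool used in this work; it assumes the dark frames are already loaded as normalized 2-D arrays, the threshold value is arbitrary, and it does not include the pixel-level tolerance a real implementation would need if the peak position jitters slightly between frames.

```python
import numpy as np
from scipy import ndimage

def candidate_defects(dark_frame, threshold=0.1):
    """Return one (row, col) estimate per bright cluster: the cluster's peak pixel."""
    mask = dark_frame > threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return set()
    peaks = ndimage.maximum_position(dark_frame, labels, list(range(1, n + 1)))
    return set(peaks)

def confirmed_defects(dark_frames, threshold=0.1, min_votes=3):
    """Declare a defect only if its peak location recurs in at least min_votes frames."""
    votes = {}
    for frame in dark_frames:
        for loc in candidate_defects(frame, threshold):
            votes[loc] = votes.get(loc, 0) + 1
    return [loc for loc, count in votes.items() if count >= min_votes]
```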

Figure 3-5. Mesh plot of a defect in a demosaiced, compressed color image.


3.3 Defects in demosaic and compressed images

Single pixel defects can only be found in the raw images before

demosaicing; in color images these single sites are distorted by the imaging

pipeline and result in a defect cluster. The creation of defect clusters makes the

fault more visible than a single pixel defect. Hence imaging functions will

enhance the visibility of defects.

The creation of color images taken with the CFA sensors is shown in

Figure 2-18. Because the defects are treated as normal pixels by all imaging

algorithms, the false measurement from a faulty pixel will impose errors in the

processing functions. All applied imaging functions will affect the appearance of

the defects. However, the interpolation used by demosaicing (i.e. color interpolation) will have the most significant impact on the final output, as the

missing colors of the neighbouring pixels are approximated using the faulty pixel.

In this experiment, we will explore the impact of faulty pixels on color

images processed by various demosaicing algorithms. In addition, we will

analyze the possible impact from jpeg compression as well.

3.3.1 Demosaicing Algorithm

Demosaicing is an irreversible imaging process used to generate a color image from a CFA sensor. Each camera manufacturer has its own proprietary algorithm. Demosaicing is the first

function applied in the processing pipeline. Hence, the accuracy of the


interpolated values will affect the subsequent functions. Demosaicing algorithms can be categorized into three types: simple interpolation, statistical, and adaptive. In this experiment we will implement one algorithm from each of the three categories and observe its impact in the presence of faulty pixels. In addition, we will also compare the appearance of defects at different jpeg compression levels. A problem suffered by demosaiced images is the moiré pattern. This type of artifact is an interference pattern that appears at the edges of periodic patterns, as we will show in a later section. Note that when we refer to edges in the discussion we mean the boundaries between objects within the scene, not the picture border.

3.3.1.1 Bilinear demosaicing

Bilinear interpolation is the simplest linear demosaicing method. The

estimation of the missing color is based on the neighbouring pixels from the

same color channels. Thus the calculation of each color plane is an independent

process. Although this method is fast, it suffers from poor image quality and moiré effects. For the Bayer CFA (i.e. RGBG) mask shown in Figure 2-17(a), isolating each color plane, Figure 3-6 shows the nearest neighbouring pixels that will be used to interpolate the missing pixel (the clear cell). The exact calculations used to compute the missing colors over the entire sensor

area are:

Green:    G5 = (G2 + G4 + G6 + G8) / 4        (3-6)

Red, Blue:    R2 = (R1 + R3) / 2;    R4 = (R1 + R7) / 2;    R5 = (R1 + R3 + R7 + R9) / 4        (3-7)


In this Bayer mask, there are twice as many green pixels as red or blue; thus the estimate of a green pixel always uses 4 neighbouring pixels. On the other hand, for a red or blue pixel, the estimate will involve either 2 or 4 neighbours depending on the location of the centre pixel. Thus the estimation of green is usually more accurate than that of red and blue pixels.

Figure 3-6. Bilinear interpolation of (a) green, (b) red and blue pixels.
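As a hedged illustration (not the thesis's own implementation), the convolution-based sketch below is mathematically equivalent to the neighbour averaging of Equations (3-6) and (3-7): the cross-shaped kernel averages the 4 green neighbours, while the 3x3 kernel averages the 2 or 4 available red/blue neighbours. It assumes an RGGB ordering of the Bayer mosaic; the actual mask of Figure 2-17 may order the channels differently.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Bilinear demosaicing of an RGGB Bayer mosaic (Equations (3-6), (3-7))."""
    raw = raw.astype(float)
    rows, cols = np.indices(raw.shape)
    r_mask = ((rows % 2 == 0) & (cols % 2 == 0)).astype(float)   # red sites
    b_mask = ((rows % 2 == 1) & (cols % 2 == 1)).astype(float)   # blue sites
    g_mask = 1.0 - r_mask - b_mask                               # green sites

    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0      # green: 4 cross neighbours
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0     # red/blue: 2 or 4 neighbours

    out = np.empty(raw.shape + (3,))
    out[..., 0] = convolve(raw * r_mask, k_rb, mode='mirror')
    out[..., 1] = convolve(raw * g_mask, k_g, mode='mirror')
    out[..., 2] = convolve(raw * b_mask, k_rb, mode='mirror')
    return out
```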

3.3.1.2 Median demosaicing

The second type of demosaicing relies on correlation with the other color

planes. An example of this type of algorithm is the median demosaicing

proposed by Freeman[36]. The algorithm is executed in four steps. First the

missing colors at each photosite are recovered using bilinear interpolation. Then,

the difference between each pair of color planes is computed using Equations (3-8)-(3-10):

D_rg(x, y) = f_r(x, y) − f_g(x, y)        (3-8)

D_gb(x, y) = f_g(x, y) − f_b(x, y)        (3-9)

D_rb(x, y) = f_r(x, y) − f_b(x, y)        (3-10)


The function f denotes the pixel value at location x, y in the indicated color plane. Next, a median filter is applied to the computed differences D_rg, D_gb and D_rb. The purpose of the median filter is to suppress any large discrepancies

between color planes based on information from the surrounding pixels. The last

stage is the correction step. The results from the median filter are used to

correct the interpolated color at each photosite. This method is especially useful

in suppressing artifacts on object edge regions in the picture. Due to the large

color variation at edges and the lack of information in the red and blue channels, the comparison with the other color planes can suppress the interpolation errors and

artifacts in the final image.
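The sketch below shows one plausible form of this median correction, again only as an illustration rather than Freeman's exact published procedure: it uses two of the three difference planes, assumes the same RGGB layout as before, and takes the first-pass estimate from the bilinear sketch above as its input.

```python
import numpy as np
from scipy.ndimage import median_filter

def median_correct(raw, est, size=5):
    """Median-corrected demosaicing in the spirit of Equations (3-8)-(3-10):
    `est` is a first-pass estimate such as bilinear_demosaic(raw)."""
    rows, cols = np.indices(raw.shape)
    r_site = (rows % 2 == 0) & (cols % 2 == 0)
    b_site = (rows % 2 == 1) & (cols % 2 == 1)
    g_site = ~r_site & ~b_site

    # Median-filtered colour-difference planes suppress outliers between channels.
    d_rg = median_filter(est[..., 0] - est[..., 1], size=size)
    d_gb = median_filter(est[..., 1] - est[..., 2], size=size)

    out = est.copy()
    out[..., 1][r_site] = raw[r_site] - d_rg[r_site]                  # G at red sites
    out[..., 2][r_site] = raw[r_site] - d_rg[r_site] - d_gb[r_site]   # B at red sites
    out[..., 0][g_site] = raw[g_site] + d_rg[g_site]                  # R at green sites
    out[..., 2][g_site] = raw[g_site] - d_gb[g_site]                  # B at green sites
    out[..., 1][b_site] = raw[b_site] + d_gb[b_site]                  # G at blue sites
    out[..., 0][b_site] = raw[b_site] + d_gb[b_site] + d_rg[b_site]   # R at blue sites
    return out
```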

3.3.1.3 Kimmel demosaicing

Adaptive demosaicing is a more advanced interpolation process which utilizes mathematical modelling to obtain information from the local area near the pixel for the best approximation. A simple example of such an algorithm is a

gradient-based technique proposed by Laroche and Prescott[37]. With this

method, the interpolation is performed in the direction of a local image edge such that the error from the abrupt change in color at the boundary is minimized.

Another algorithm, and the one which we will be using, is proposed by

Kimmel[38]. It integrates several methods: linear, weighted-gradient, and color

ratio interpolation. This algorithm is executed in three steps. First, the missing green components at each photosite are interpolated using a weighted bilinear interpolation,


G5 = (E2·G2 + E4·G4 + E6·G6 + E8·G8) / (E2 + E4 + E6 + E8).        (3-11)

The weight factor is used to adjust the interpolation to the direction of the

local edge and is calculated with

Ei = 1 / √(1 + D(P5)² + D(Pi)²).        (3-12)

The gradient function D is calculated with Equations (3-13)-(3-16). The

directions of the gradient factors are shown in Figure 3-7, the Kimmel gradient mask.

Vertical:    Dx(P5) = (P2 − P8) / 2        (3-13)

Horizontal:    Dy(P5) = (P4 − P6) / 2        (3-14)

Diagonal +45°:    Dxd(P5) = max( |P1 − P5| / √2 , |P9 − P5| / √2 )        (3-15)

Diagonal −45°:    Dyd(P5) = max( |P3 − P5| / √2 , |P7 − P5| / √2 )        (3-16)

Figure 3-7. Kimmel gradient mask.
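A short sketch of the gradient and weight computations of Equations (3-12)-(3-16) is given below. It assumes the 3x3 window of Figure 3-7 is numbered P1…P9 row by row with P5 at the centre (our reading of the mask), and it is only an illustration of the formulas above, not the full Kimmel implementation used later in the experiments.

```python
import numpy as np

def kimmel_gradients(p, i, j):
    """Directional gradients at pixel P5 = p[i, j] per Equations (3-13)-(3-16)."""
    dx  = (p[i - 1, j] - p[i + 1, j]) / 2.0                      # vertical pair P2, P8
    dy  = (p[i, j - 1] - p[i, j + 1]) / 2.0                      # horizontal pair P4, P6
    dxd = max(abs(p[i - 1, j - 1] - p[i, j]),                    # diagonal via P1
              abs(p[i + 1, j + 1] - p[i, j])) / np.sqrt(2.0)     # diagonal via P9
    dyd = max(abs(p[i - 1, j + 1] - p[i, j]),                    # diagonal via P3
              abs(p[i + 1, j - 1] - p[i, j])) / np.sqrt(2.0)     # diagonal via P7
    return dx, dy, dxd, dyd

def kimmel_weight(d_centre, d_neighbour):
    """Edge weight Ei per Equation (3-12): large gradients give small weights,
    steering the green interpolation of Equation (3-11) along local edges."""
    return 1.0 / np.sqrt(1.0 + d_centre ** 2 + d_neighbour ** 2)
```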

In the second stage, the red and blue components are interpolated using a

ratio interpolation. With this method, the ratio between color planes is assumed

to remain constant within the image scene. Because the typical Bayer patterns


record more green information, the ratio interpolations of red and blue, given in Equations (3-17) and (3-18) respectively, are based on the ratio with respect to the green components.

R5 = [ E1·(R1/G1) + E3·(R3/G3) + E7·(R7/G7) + E9·(R9/G9) ] / (E1 + E3 + E7 + E9) × G5        (3-17)

B5 = [ E1·(B1/G1) + E3·(B3/G3) + E7·(B7/G7) + E9·(B9/G9) ] / (E1 + E3 + E7 + E9) × G5        (3-18)

The last step is the correction stage. The purpose of this step is to ensure

the color ratio in the object remains constant. To satisfy this property, the green

components recovered from the first stage are recalculated using the ratio with

respect to the red and blue components obtained in the second step,

Equation (3-19).

G5 = (G5r + G5b) / 2        (3-19)

where

G5r = [ E2·(G2/R2) + E4·(G4/R4) + E6·(G6/R6) + E8·(G8/R8) ] / (E2 + E4 + E6 + E8) × R5

G5b = [ E2·(G2/B2) + E4·(G4/B4) + E6·(G6/B6) + E8·(G8/B8) ] / (E2 + E4 + E6 + E8) × B5

After correcting the green components, the ratio with the red and blue

components will be changed; thus these two channels will be recalculated using

Equations (3-20) and (3-21).


R5 = [ Σi Ei·(Ri/Gi) ] / [ Σi Ei ] × G5,   i = 1…9, i ≠ 5        (3-20)

B5 = [ Σi Ei·(Bi/Gi) ] / [ Σi Ei ] × G5,   i = 1…9, i ≠ 5        (3-21)

To obtain the best result, the correction stage is repeated at least three times. Different from the bilinear and median algorithms, the Kimmel algorithm incorporates adaptation to the scene of a given image through the weighted bilinear interpolation and color ratio interpolation. These enhancements are crucial in reducing artifacts, as we will show in the experimental results. Clearly, the downside of the Kimmel method is its high computational requirements, which reduce the rate at which images can be captured.

3.3.2 Demosaicing algorithms comparison

In the first experiment, we will test the performance of each demosaicing

algorithm by applying it on a set of camera color images. The execution of the

experiment is as follows: first each color image is converted into raw form using

the Bayer mask shown in Figure 2-17. Then, each demosaicing algorithm is

used to recover the color image. The performance of each algorithm is measured by comparing the demosaiced image with the original color image. The metrics used in the evaluation are the Mean-Square Error (MSE), Equation (3-22), and the Peak Signal-to-Noise Ratio (PSNR), Equation (3-23).

MSE = (1 / (m·n)) Σ_{i=0}^{m−1} Σ_{j=0}^{n−1} [ I_w(i, j) − K_w(i, j) ]²        (3-22)

PSNR = 20 · log10( Max_I / √MSE )        (3-23)

The MSE measures how much the pixel values of the demosaiced output differ from the original image; thus a high MSE implies large interpolation errors. The second parameter, PSNR, assesses the quality of the resulting image by evaluating the ratio between the maximum pixel value and the average error. The interpolation errors are simply noise signals in the output image; hence, a high PSNR indicates that the magnitude of the error is small relative to the peak signal value.
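Equations (3-22) and (3-23) translate directly into a few lines of code. This is a generic sketch (8-bit peak value assumed), applied per color plane exactly as in the tables that follow.

```python
import numpy as np

def mse(original, reconstructed):
    """Mean-square error for one colour plane, Equation (3-22)."""
    diff = original.astype(float) - reconstructed.astype(float)
    return np.mean(diff ** 2)

def psnr(original, reconstructed, max_value=255.0):
    """Peak signal-to-noise ratio in dB, Equation (3-23)."""
    return 20.0 * np.log10(max_value / np.sqrt(mse(original, reconstructed)))
```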

In this experiment 10 regular photographs captured by the same camera

will be used, as shown in Figure 3-8. Each demosaicing algorithm will be applied to the raw form of these images to generate a full-color version of the images. By comparing the demosaiced output with the original image, the average MSE and

PSNR calculated are summarized in Table 3-2. This will provide us with a

baseline value to compare image quality when defects are injected into the

image.

Figure 3-8. Sample images used in experiment.


Table 3-2. Average MSE and PSNR of demosaiced images (PSNR in dB).

Method | Red MSE | Red PSNR | Green MSE | Green PSNR | Blue MSE | Blue PSNR | Total MSE | Total PSNR
Bilinear | 97.09 | 29.48 | 36.89 | 33.70 | 114.90 | 28.36 | 82.96 | 29.95
Median | 33.86 | 33.93 | 9.87 | 39.11 | 32.50 | 34.04 | 25.41 | 35.09
Kimmel | 26.74 | 34.73 | 13.00 | 37.93 | 25.44 | 34.92 | 21.73 | 35.54

First we will examine the performance of the three algorithms by each

color plane. As shown in the first three columns in Table 3-2, despite the

different approaches used by each demosaicing function, the evaluations of the

green pixels have the lowest MSE. Hence, the PSNR computed from the green

plane is well above 30 dB. CFA sensors that use a Bayer pattern record only 25% red and 25% blue pixels. Thus the interpolation of the missing red and

blue pixels will suffer in accuracy as reflected by the high MSE. Comparing the

three algorithms, we will examine the overall MSE and PSNR of all three color

planes as summarized in the last column in Table 3-2. Among the three

algorithms, bilinear has the lowest PSNR of 29.95dB due to the large

interpolation error. The median demosaicing utilizes a median filter to suppress large interpolation errors; hence the improvement is reflected by an increase in the PSNR to 35.09 dB. The Kimmel algorithm incorporates edge information and the ratio between color channels into the interpolation. Thus the overall interpolation errors are further reduced, and the PSNR increases to 35.54 dB.

The accuracy of these interpolations is highly affected by the scene of the

image. Estimation of the pixel values in the regions with abrupt changes, like an

object edge, will suffer large interpolation errors. The significant interpolation

errors near rapid changes will create a type of artifact called the moiré pattern. To reduce this type of artifact, gradient interpolation is often used. Interpolating


along the direction of the edge can help reduce the estimation error. In

Figure 3-9, examples of moiré pattern generated by the three demosaicing

algorithms are shown. Both the bilinear and median algorithms make no use of

the shape or texture within the image. Thus in Figure 3-9(b) and (c) the black lines show a false color pattern along them and the solid areas show a moiré pattern. With the Kimmel algorithm, Figure 3-9(d), both the color ratio and edge detection are used; thus the same line patterns in the image appear more refined.

Figure 3-9. Moiré pattern: (a) original image, (b) Bilinear, (c) Median, (d) Kimmel.

3.3.3 Analyzing defects in color images

As seen from the results of the previous experiment, each algorithm introduces some errors in the interpolation of the missing colors. These errors can lead to observable artifacts affecting the overall image quality. A faulty pixel measures an incorrect light level; thus the error from such a pixel will impose additional errors on the interpolation of the neighbouring pixels. In the following

two experiments we will inject bright defects into each image and observe the

impact of the demosaicing algorithms on the defective pixel and its neighbours.


In a regular camera image processing sequence, multiple imaging functions are applied to create a color image. To isolate the analysis to the impact from

the demosaicing process only, we will start with a color image as shown in

Figure 3-10. Like in the previous experiment, the color image will be converted

into raw form. Then a single pixel bright defect will be injected into the image.

Next, the demosaicing function will be applied to obtain the color image. Finally, we will

image compression on the spread of the defective pixel. The experiment will be

divided into two parts. In the first part, a single defect will be injected on a

uniform background, and in the second part, the defect will be injected on a color

varying background.

Figure 3-10. Experiment procedure.

3.3.4 Defect on a uniform color background

In this set of tests, 11 images with a uniform gray scale background will be

used. The gray scale starts from a black image (i.e. R, G, B = 0) and the intensity increases in steps of 5 to a maximum value of R, G, B = 50, where 255 is the saturation value. A constant background eliminates any interpolation errors from the scene due to edges and color variations. Hence we can better


observe the impact of the defect on the neighbouring pixels. A simulated defect will

be injected into the raw image, where the defect offset is added onto the pixel

value, as shown in Figure 3-10. For each test, the defect will be inserted into one

of the three color pixels in the Bayer pattern. The magnitude of the defect

parameter is represented in the form of I_Offset, which will take on a constant value.

To measure the impact of the defect on the image quality, we will compare the

difference between a defective image and that without the defect. As shown in

Figure 3-10, both the defective and non-defective images are processed by the

same demosaic function. Hence, the difference between the pixel outputs will be

the impact from the defective pixel. In addition, we will also apply the built-in jpeg compression function in Photoshop to create a compressed image. The compression quality is specified on a scale of 1 to 10, with 10 being the highest quality and 1 the lowest. In the following experiment we will be using three

compression levels, with 9 being the high, 6 being the medium and 3 being the

low quality compressed image. These results will replicate the conditions of the

dark field test in PS and cellphones.
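The defect-injection pipeline of Figure 3-10 can be sketched as follows. This is only an illustrative outline under the same RGGB assumption as the earlier sketches, with normalized [0, 1] pixel values; `demosaic` stands for any of the demosaicing functions sketched above, and the jpeg compression step (done in Photoshop in our experiments) is omitted.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Collapse an RGB image to an RGGB Bayer mosaic (one sample per photosite)."""
    rows, cols = np.indices(rgb.shape[:2])
    r_site = (rows % 2 == 0) & (cols % 2 == 0)
    b_site = (rows % 2 == 1) & (cols % 2 == 1)
    raw = rgb[..., 1].copy()                 # green everywhere else
    raw[r_site] = rgb[..., 0][r_site]
    raw[b_site] = rgb[..., 2][b_site]
    return raw

def inject_and_compare(rgb, demosaic, row, col, i_offset):
    """Inject a bright defect into the raw form, demosaic both versions and
    return the per-pixel error image that isolates the defect's spread."""
    raw = bayer_mosaic(rgb)
    defective = raw.copy()
    defective[row, col] = np.clip(defective[row, col] + i_offset, 0.0, 1.0)
    return np.abs(demosaic(defective) - demosaic(raw))
```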

3.3.4.1 Bilinear demosaic results

The first set of sample results for a red defect processed by bilinear demosaicing is shown in Figure 3-11. A visual comparison of the four images

shows that the non-compressed TIFF image in Figure 3-11(a) has the brightest

defect cluster but also the most confined spot.


Figure 3-11. Bilinear demosaic image for a red defect with I_Offset = 0.8: (a) TIFF, (b) JPEG 9, (c) JPEG 6, (d) JPEG 3.

By taking the difference between the defective and non-defective images, the error values indicate the spreading of the defect. The sizes of the defect clusters are summarized in Table 3-3; in each group, the column matching the injected defect's color is the one of primary interest.

Table 3-3. Estimated defect size with bilinear demosaicing.

Defect | I_Offset | TIFF (R / G / B) | JPEG 9 (R / G / B) | JPEG 6 (R / G / B) | JPEG 3 (R / G / B)
Red | 0.2 | 3x3 / 0x0 / 0x0 | 7x7 / 7x7 / 7x7 | 8x8 / 8x8 / 8x8 | 0x0 / 0x0 / 0x0
Red | 0.4 | 3x3 / 0x0 / 0x0 | 16x16 / 16x16 / 16x16 | 8x8 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8
Red | 0.6 | 3x3 / 0x0 / 0x0 | 16x16 / 16x16 / 16x16 | 15x15 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8
Red | 0.8 | 3x3 / 0x0 / 0x0 | 16x16 / 16x16 / 16x16 | 15x15 / 8x8 / 8x8 | 12x12 / 9x9 / 8x8
Red | 1.0 | 3x3 / 0x0 / 0x0 | 16x16 / 16x16 / 16x16 | 16x16 / 15x15 / 8x8 | 13x13 / 12x12 / 12x12
Green | 0.2 | 0x0 / 3x3 / 0x0 | 7x7 / 6x6 / 7x7 | 8x8 / 8x8 / 8x8 | 0x0 / 0x0 / 0x0
Green | 0.4 | 0x0 / 3x3 / 0x0 | 7x7 / 7x7 / 7x7 | 8x8 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8
Green | 0.6 | 0x0 / 3x3 / 0x0 | 8x8 / 7x7 / 8x8 | 8x8 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8
Green | 0.8 | 0x0 / 3x3 / 0x0 | 8x8 / 8x8 / 8x8 | 8x8 / 9x9 / 10x10 | 8x8 / 8x8 / 8x8
Green | 1.0 | 0x0 / 3x3 / 0x0 | 8x8 / 8x8 / 11x11 | 8x8 / 9x9 / 10x10 | 12x12 / 12x12 / 12x12
Blue | 0.2 | 0x0 / 0x0 / 3x3 | 2x2 / 2x2 / 7x7 | 0x0 / 0x0 / 0x0 | 0x0 / 0x0 / 0x0
Blue | 0.4 | 0x0 / 0x0 / 3x3 | 3x3 / 5x5 / 16x16 | 4x4 / 4x4 / 4x4 | 0x0 / 0x0 / 0x0
Blue | 0.6 | 0x0 / 0x0 / 3x3 | 7x7 / 6x6 / 16x16 | 7x7 / 7x7 / 17x17 | 9x9 / 9x9 / 9x9
Blue | 0.8 | 0x0 / 0x0 / 3x3 | 8x8 / 9x9 / 18x18 | 8x8 / 13x13 / 17x17 | 11x11 / 11x11 / 11x11
Blue | 1.0 | 0x0 / 0x0 / 3x3 | 10x10 / 11x11 / 18x18 | 8x8 / 13x13 / 17x17 | 11x11 / 11x11 / 15x15
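One way the cluster sizes and peak values reported in Tables 3-3 and 3-4 could be extracted from an error color plane is sketched below; the thesis's own software tool may differ, and the tolerance value is an arbitrary illustration.

```python
import numpy as np

def cluster_stats(error_plane, tol=1e-3):
    """Bounding-box size and peak value of the non-zero region of one error plane."""
    rows, cols = np.nonzero(error_plane > tol)
    if rows.size == 0:
        return (0, 0), 0.0                                    # no visible spread
    size = (rows.max() - rows.min() + 1, cols.max() - cols.min() + 1)
    return size, float(error_plane.max())
```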

Because the interpolation used by the bilinear demosaic is performed on each color plane independently, as shown by the results, a red defect will only affect neighbouring pixels in the same color plane in the uncompressed


images. Since the bilinear interpolation consists of the nearest 3x3 neighbouring

pixels, the spreading of the defects is also confined within the 3x3 region.

However, these two points are only true for the uncompressed image (i.e. Figure

3-11(a), TIFF). In the case of the compressed images (i.e. JPEG 9, 6 and 3), a single defective pixel will spread into a wider area and affect all three color planes, as shown in Table 3-3 and seen in Figure 3-11(b)-(d). A sample error mesh plot of

a red defect is shown in Figure 3-12. Again a visual comparison shows that the

compressed images have a wider spread of the faulty values. However, the

peak error of the fault is reduced through compression. The peak errors of the

defect in the resulting images are summarized in Table 3-4.

Table 3-4. Peak defect cluster value from bilinear demosaicing.

Defect | I_Offset | TIFF (R / G / B) | JPEG 9 (R / G / B) | JPEG 6 (R / G / B) | JPEG 3 (R / G / B)
Red | 0.2 | 0.200 / 0.000 / 0.000 | 0.047 / 0.039 / 0.043 | 0.027 / 0.027 / 0.027 | 0.004 / 0.004 / 0.004
Red | 0.4 | 0.400 / 0.000 / 0.000 | 0.133 / 0.086 / 0.102 | 0.085 / 0.085 / 0.085 | 0.057 / 0.057 / 0.057
Red | 0.6 | 0.600 / 0.000 / 0.000 | 0.239 / 0.110 / 0.145 | 0.150 / 0.134 / 0.138 | 0.102 / 0.102 / 0.102
Red | 0.8 | 0.800 / 0.000 / 0.000 | 0.391 / 0.132 / 0.179 | 0.210 / 0.167 / 0.183 | 0.173 / 0.173 / 0.173
Red | 1.0 | 0.904 / 0.000 / 0.000 | 0.440 / 0.156 / 0.208 | 0.252 / 0.172 / 0.200 | 0.204 / 0.181 / 0.189
Green | 0.2 | 0.000 / 0.200 / 0.000 | 0.067 / 0.067 / 0.067 | 0.039 / 0.039 / 0.039 | 0.004 / 0.004 / 0.004
Green | 0.4 | 0.000 / 0.400 / 0.000 | 0.212 / 0.220 / 0.208 | 0.090 / 0.090 / 0.090 | 0.081 / 0.081 / 0.081
Green | 0.6 | 0.000 / 0.600 / 0.000 | 0.293 / 0.332 / 0.301 | 0.154 / 0.154 / 0.154 | 0.142 / 0.142 / 0.142
Green | 0.8 | 0.000 / 0.800 / 0.000 | 0.424 / 0.471 / 0.424 | 0.320 / 0.320 / 0.320 | 0.185 / 0.185 / 0.185
Green | 1.0 | 0.000 / 0.902 / 0.000 | 0.494 / 0.560 / 0.499 | 0.366 / 0.366 / 0.366 | 0.234 / 0.234 / 0.234
Blue | 0.2 | 0.000 / 0.000 / 0.200 | 0.017 / 0.013 / 0.033 | 0.002 / 0.002 / 0.005 | 0.004 / 0.004 / 0.004
Blue | 0.4 | 0.000 / 0.000 / 0.400 | 0.031 / 0.024 / 0.078 | 0.025 / 0.025 / 0.025 | 0.004 / 0.004 / 0.004
Blue | 0.6 | 0.000 / 0.000 / 0.600 | 0.051 / 0.031 / 0.149 | 0.032 / 0.020 / 0.087 | 0.035 / 0.035 / 0.035
Blue | 0.8 | 0.000 / 0.000 / 0.800 | 0.068 / 0.026 / 0.339 | 0.075 / 0.063 / 0.130 | 0.049 / 0.049 / 0.049
Blue | 1.0 | 0.000 / 0.000 / 0.902 | 0.074 / 0.027 / 0.417 | 0.077 / 0.066 / 0.132 | 0.051 / 0.048 / 0.068

It is clear from Table 3-4 that the uncompressed image has the highest

peak error; thus the defective pixel appears the brightest. As the lossiness of the

compression increases, the peak error is reduced by ~78%. Although


defect appears less visible in the compressed image, the spread of the defect covers

a much wider area.

Figure 3-12. Error mesh plot of a red defect at I_Offset = 0.8 with bilinear demosaicing: (a) TIFF, (b) JPEG 9, (c) JPEG 6, (d) JPEG 3.


3.3.4.2 Median demosaic results

Different from the bilinear algorithm, the median demosaic relates pixels from all three color planes in the interpolation. Shown in Figure 3-13 are the resulting images of a red defect processed by the median demosaicing. Different from the bilinear demosaic images (Figure 3-11), the red defect appears as a white pixel surrounded by red neighbouring pixels. Observe that this defect now spreads both in area and into the other (G and B) color planes. Again measuring the spread of the error values, we can measure the defect cluster size as

summarized in Table 3-5.

Figure 3-13. Median demosaic image for a red defect with I_Offset = 0.8: (a) TIFF, (b) JPEG 9, (c) JPEG 6, (d) JPEG 3.


Table 3-5. Estimated defect size with median demosaicing.

Defect | I_Offset | TIFF (R / G / B) | JPEG 9 (R / G / B) | JPEG 6 (R / G / B) | JPEG 3 (R / G / B)
Red | 0.2 | 3x3 / 1x1 / 1x1 | 8x8 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8 | 0x0 / 0x0 / 0x0
Red | 0.4 | 3x3 / 1x1 / 1x1 | 8x8 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8
Red | 0.6 | 3x3 / 1x1 / 1x1 | 16x16 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8
Red | 0.8 | 3x3 / 1x1 / 1x1 | 16x16 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8
Red | 1.0 | 3x3 / 1x1 / 1x1 | 16x16 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8
Green | 0.2 | 1x1 / 1x1 / 1x1 | 8x8 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8 | 0x0 / 0x0 / 0x0
Green | 0.4 | 1x1 / 1x1 / 1x1 | 8x8 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8
Green | 0.6 | 1x1 / 1x1 / 1x1 | 8x8 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8
Green | 0.8 | 1x1 / 1x1 / 1x1 | 8x8 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8
Green | 1.0 | 1x1 / 1x1 / 1x1 | 8x8 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8 | 12x12 / 12x12 / 12x12
Blue | 0.2 | 1x1 / 1x1 / 3x3 | 8x8 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8 | 0x0 / 0x0 / 0x0
Blue | 0.4 | 1x1 / 1x1 / 3x3 | 8x8 / 8x8 / 12x12 | 8x8 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8
Blue | 0.6 | 1x1 / 1x1 / 3x3 | 8x8 / 8x8 / 16x16 | 8x8 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8
Blue | 0.8 | 1x1 / 1x1 / 3x3 | 8x8 / 9x9 / 18x18 | 8x8 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8
Blue | 1.0 | 1x1 / 1x1 / 3x3 | 8x8 / 10x10 / 18x18 | 8x8 / 8x8 / 8x8 | 8x8 / 8x8 / 8x8

The use of the median filter to suppress large interpolation errors does not correct the defects, as seen from the resulting images. In fact, it is this correction step that spreads the defect onto all three color planes. Hence the defect will appear as a white spot when the pixel is at or near saturation. Notice that for the uncompressed image (i.e. TIFF, e.g. Figure 3-13(a)), the spreading of the red and blue defects is confined within the 3x3 region, which is the same as for the bilinear demosaic. However, for the green defects, this spreading is not observed. Because the raw image retains 50% green pixels, the median filter correction is able to limit the spread of the defects on the green plane. Like the bilinear demosaic, the high quality compression shows the largest defect spread of up to 18x18 (in the blue case) when the pixel is fully saturated.

The peak error values measured from the resulting images are

summarized in Table 3-6. The sample error mesh plots of the uncompressed

and compressed images are shown in Figure 3-14.


Table 3-6. Peak defect cluster value from median demosaicing.

               TIFF                   JPEG 9                JPEG 6                JPEG 3
               R     G     B          R     G     B         R     G     B         R     G     B
RED    0.2     0.200 0.000 0.004      0.078 0.078 0.078     0.022 0.022 0.022     0.004 0.004 0.004
RED    0.4     0.400 0.000 0.004      0.231 0.231 0.231     0.091 0.091 0.091     0.045 0.045 0.045
RED    0.6     0.600 0.000 0.004      0.467 0.424 0.439     0.147 0.147 0.147     0.161 0.161 0.161
RED    0.8     0.800 0.000 0.004      0.557 0.510 0.525     0.310 0.310 0.310     0.188 0.188 0.188
RED    1.0     0.902 0.000 0.004      0.627 0.580 0.596     0.392 0.392 0.392     0.210 0.210 0.210
GREEN  0.2     0.153 0.200 0.153      0.118 0.118 0.118     0.034 0.034 0.034     0.004 0.004 0.004
GREEN  0.4     0.302 0.400 0.302      0.396 0.396 0.396     0.077 0.077 0.077     0.068 0.068 0.068
GREEN  0.6     0.451 0.600 0.451      0.504 0.504 0.504     0.260 0.260 0.260     0.134 0.134 0.134
GREEN  0.8     0.600 0.800 0.600      0.733 0.733 0.733     0.448 0.448 0.448     0.149 0.149 0.149
GREEN  1.0     0.678 0.902 0.678      0.810 0.810 0.810     0.679 0.679 0.679     0.236 0.236 0.236
BLUE   0.2     0.102 0.102 0.200      0.047 0.047 0.047     0.291 0.018 0.018     0.004 0.004 0.004
BLUE   0.4     0.200 0.200 0.400      0.173 0.169 0.188     0.018 0.065 0.065     0.048 0.048 0.048
BLUE   0.6     0.302 0.302 0.600      0.369 0.365 0.384     0.065 0.122 0.122     0.051 0.051 0.051
BLUE   0.8     0.400 0.400 0.800      0.475 0.471 0.502     0.122 0.189 0.189     0.103 0.103 0.103
BLUE   1.0     0.452 0.452 0.902      0.496 0.489 0.536     0.189 0.281 0.281     0.158 0.158 0.158

Different from the bilinear demosaic results, we did not observe as large a decrease in the peak error values with the median demosaic images. For example, the peak error of a red defect between the TIFF and JPEG 9 images drops by only 30%, compared to the 55% drop observed with the bilinear demosaic. This observation is demonstrated in Figure 3-14(a) and (b). Because the defect has been spread into all the color planes prior to the compression, the suppression of the peak value is less significant, as the difference of the pixel values between color planes is minimal.


Figure 3-14. Error mesh plot of red defect at IOffset = 0.8 with median demosaicing: (a) TIFF, (b) JPEG 9, (c) JPEG 6, (d) JPEG 3.


3.3.4.3 Kimmel demosaic results

Previously we have shown that the adaptive approach used in the Kimmel demosaic can suppress the moiré artifacts. With the presence of a defective pixel, the resulting images are shown in Figure 3-15. Different from the median demosaic images, in this case the correlation of color planes in the interpolation does not cause the defective pixel to appear as a white spot. To examine each color plane in detail, the measure of defect spread is summarized in Table 3-7.

Figure 3-15. Kimmel demosaic image for red defect with IOffset = 0.8: (a) TIFF, (b) JPEG 9, (c) JPEG 6, (d) JPEG 3.

Table 3-7. Estimate defect size with Kimmel demosaicing.

               TIFF               JPEG 9                JPEG 6                JPEG 3
               R    G    B        R     G     B         R     G     B         R    G    B
RED    0.2     5x5  3x3  3x3      6x6   6x6   6x6       7x7   7x7   7x7       0x0  0x0  0x0
RED    0.4     7x7  4x4  4x4      16x16 8x8   8x8       8x8   8x8   8x8       6x6  6x6  6x6
RED    0.6     7x7  4x4  4x4      16x16 8x8   8x8       13x13 8x8   8x8       8x8  8x8  8x8
RED    0.8     7x7  5x5  5x5      16x16 13x13 15x15     15x15 8x8   8x8       8x8  8x8  8x8
RED    1.0     7x7  5x5  5x5      16x16 14x14 15x15     15x15 11x11 8x8       8x8  8x8  8x8
GREEN  0.2     1x1  1x1  1x1      8x8   8x8   8x8       8x8   8x8   8x8       0x0  0x0  0x0
GREEN  0.4     1x1  1x1  1x1      8x8   8x8   8x8       8x8   8x8   8x8       8x8  8x8  8x8
GREEN  0.6     1x1  1x1  1x1      8x8   8x8   8x8       8x8   8x8   8x8       8x8  8x8  8x8
GREEN  0.8     1x1  1x1  1x1      8x8   8x8   8x8       8x8   8x8   8x8       8x8  8x8  8x8
GREEN  1.0     1x1  1x1  1x1      8x8   8x8   8x8       8x8   8x8   8x8       8x8  8x8  8x8
BLUE   0.2     3x3  3x3  5x5      5x5   5x5   8x8       4x4   4x4   4x4       0x0  0x0  0x0
BLUE   0.4     4x4  4x4  7x7      6x6   6x6   12x12     6x6   6x6   6x6       4x4  4x4  4x4
BLUE   0.6     4x4  4x4  7x7      7x7   8x8   16x16     6x6   6x6   17x17     6x6  6x6  6x6
BLUE   0.8     5x5  5x5  7x7      9x9   9x9   16x16     7x7   9x9   17x17     7x7  7x7  7x7
BLUE   1.0     5x5  5x5  7x7      10x10 9x9   16x16     7x7   9x9   17x17     8x8  8x8  8x8


As shown from the results in Table 3-7, the red and blue defects spread into roughly the 5x5 to 7x7 region with the Kimmel demosaicing. This is larger than the 3x3 region measured from the bilinear and median demosaics. The color ratio interpolation used in the Kimmel function also enhances the spreading of error values on the defect-free color planes. However, different from the bilinear and median images, a single green fault at all offset ranges remains a single defective pixel after the Kimmel demosaic. Although the JPEG compression increases the defect spreading, the size of the green defect cluster is confined within the 8x8 region at all compression levels.

The peak error of the defective pixel measured after demosaicing is

summarized in Table 3-8. The sample error mesh plots of a red defect are

shown in Figure 3-16.

Table 3-8. Peak defect cluster value from Kimmel demosaicing.

               TIFF                   JPEG 9                JPEG 6                JPEG 3
               R     G     B          R     G     B         R     G     B         R     G     B
RED    0.2     0.200 0.068 0.014      0.052 0.052 0.052     0.020 0.020 0.020     0.004 0.004 0.004
RED    0.4     0.400 0.111 0.017      0.207 0.162 0.177     0.075 0.075 0.075     0.027 0.027 0.027
RED    0.6     0.600 0.137 0.017      0.306 0.259 0.267     0.121 0.108 0.112     0.081 0.081 0.081
RED    0.8     0.800 0.155 0.018      0.415 0.316 0.322     0.153 0.137 0.141     0.137 0.137 0.137
RED    1.0     0.902 0.159 0.018      0.453 0.343 0.342     0.179 0.145 0.157     0.158 0.158 0.158
GREEN  0.2     0.172 0.200 0.172      0.142 0.142 0.142     0.034 0.034 0.034     0.004 0.004 0.004
GREEN  0.4     0.345 0.400 0.345      0.393 0.393 0.393     0.109 0.109 0.109     0.063 0.063 0.063
GREEN  0.6     0.517 0.600 0.517      0.538 0.538 0.538     0.349 0.349 0.349     0.136 0.136 0.136
GREEN  0.8     0.688 0.800 0.688      0.763 0.765 0.764     0.563 0.563 0.563     0.211 0.211 0.211
GREEN  1.0     0.766 0.902 0.766      0.830 0.832 0.830     0.786 0.786 0.786     0.259 0.259 0.259
BLUE   0.2     0.060 0.068 0.200      0.030 0.027 0.040     0.008 0.008 0.008     0.004 0.004 0.004
BLUE   0.4     0.086 0.111 0.400      0.085 0.081 0.106     0.028 0.028 0.028     0.016 0.016 0.016
BLUE   0.6     0.096 0.137 0.600      0.160 0.149 0.215     0.051 0.047 0.066     0.038 0.038 0.038
BLUE   0.8     0.101 0.155 0.800      0.201 0.199 0.261     0.075 0.065 0.126     0.046 0.046 0.046
BLUE   1.0     0.101 0.159 0.902      0.214 0.210 0.302     0.085 0.075 0.135     0.049 0.049 0.049


Figure 3-16. Error mesh plot of red defect at IOffset = 0.8 with Kimmel demosaicing: (a) TIFF, (b) JPEG 9, (c) JPEG 6, (d) JPEG 3.

First, a visual comparison of the mesh plots in Figure 3-16 of (a) TIFF and (b) JPEG 9 shows that the peak error is reduced significantly by the compression. This is verified by the measurements recorded in Table 3-8. On average, the peak error is reduced by 50% in JPEG 9 images, which is more than the 40% measured in the median demosaic images. With the lowest compression quality (i.e. JPEG 3), the peak error is reduced by ~80%. Hence, the defect shown in Figure 3-15(d) appears gray instead of red. A common trend observed from the compressed images in all three sets of demosaic images is that the high quality compression gives the largest defect spread. The lossiness of the low quality compression reduces the peak errors and the size of the defect cluster. However, the defect spread in the compressed images is still larger than in the uncompressed images.

It is clear that the demosaicing will spread a single defective pixel into its neighboring pixels. The size of the defect cluster ranges from the 3x3 to the 7x7 region in an uncompressed image. The peak error is nearly the defect offset value in the uncompressed images; thus the defect cluster is very visible. Adding compression is able to reduce the peak error; however, the spread of the defect can increase to an 18x18 region with high quality compression. Although the defect clusters are smaller and less visible in the heavily compressed images, the use of low quality compression is not common, as the pixel values are being discarded and altered through this process.

3.3.5 Defects on varying color backgrounds

In this second part of the experiment we will be using a cropped section from the image data set shown in Figure 3-8 to provide a color-varying background. The same procedure shown in Figure 3-10 will be used, but the defects will be injected into these cropped color images. Again a simulated defect will be inserted into one of the three color planes and IOffset will be increased progressively. The color image provides variation in the background; hence, we can estimate the impact of a defect in regular photographs. The measurement of the impact from the defective pixel is based on the MSE calculated from the comparison between the defective and non-defective images, as shown in Figure 3-10.
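A minimal sketch of this comparison step is given below, assuming both images are available as floating-point RGB arrays scaled to [0, 1]; in practice the MSE would be evaluated over the cropped region surrounding the injected defect. The function name is an illustrative assumption.

    import numpy as np

    def per_plane_mse(defective, reference):
        # MSE of each color plane between the defect-injected image and
        # the defect-free reference (both float arrays in [0, 1], shape HxWx3)
        diff = defective.astype(np.float64) - reference.astype(np.float64)
        return {plane: float(np.mean(diff[..., i] ** 2))
                for i, plane in enumerate("RGB")}

    # Example: inject a saturated red defect into a mid-gray patch
    reference = np.full((16, 16, 3), 0.5)
    defective = reference.copy()
    defective[8, 8, 0] = 1.0        # red plane of one pixel driven to saturation
    print(per_plane_mse(defective, reference))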

3.3.5.1 Bilinear demosaic results

Previously, we have shown that the bilinear demosaic will spread the defect into the nearest 3x3 region in an uncompressed image, and that this extends to 16x16 in compressed images. In this experiment, we use the MSE to measure the impact of the spreading around the defective area. The MSEs calculated from the bilinear demosaic images are summarized in Table 3-9.

Table 3-9. Comparison of defect in varying color region with bilinear demosaicing.

               TIFF                   JPEG 9                JPEG 6                JPEG 3
               R     G     B          R     G     B         R     G     B         R     G     B
RED    0.2     0.048 0.000 0.000      0.050 0.024 0.056     0.071 0.043 0.094     0.124 0.073 0.135
RED    0.4     0.177 0.000 0.000      0.097 0.032 0.064     0.094 0.057 0.103     0.136 0.081 0.145
RED    0.6     0.352 0.000 0.000      0.191 0.039 0.076     0.127 0.071 0.118     0.148 0.091 0.158
RED    0.8     0.454 0.000 0.000      0.243 0.042 0.083     0.153 0.075 0.123     0.168 0.106 0.171
RED    1.0     0.478 0.000 0.000      0.251 0.042 0.084     0.174 0.078 0.125     0.172 0.110 0.174
GREEN  0.2     0.000 0.029 0.000      0.045 0.031 0.060     0.070 0.047 0.100     0.121 0.075 0.143
GREEN  0.4     0.000 0.104 0.000      0.070 0.067 0.089     0.083 0.064 0.112     0.134 0.092 0.157
GREEN  0.6     0.000 0.209 0.000      0.101 0.108 0.118     0.112 0.096 0.140     0.153 0.115 0.177
GREEN  0.8     0.000 0.282 0.000      0.125 0.141 0.139     0.139 0.124 0.165     0.164 0.142 0.195
GREEN  1.0     0.000 0.296 0.000      0.130 0.149 0.145     0.143 0.128 0.168     0.163 0.146 0.197
BLUE   0.2     0.000 0.000 0.052      0.038 0.021 0.061     0.066 0.042 0.100     0.122 0.075 0.139
BLUE   0.4     0.000 0.000 0.189      0.037 0.022 0.092     0.067 0.044 0.119     0.124 0.076 0.152
BLUE   0.6     0.000 0.000 0.396      0.039 0.024 0.181     0.070 0.047 0.152     0.126 0.079 0.154
BLUE   0.8     0.000 0.000 0.552      0.040 0.024 0.302     0.071 0.048 0.161     0.128 0.081 0.189
BLUE   1.0     0.000 0.000 0.578      0.040 0.024 0.324     0.071 0.049 0.170     0.128 0.081 0.189


Like the results observed from the uniform background, the errors caused by the defect remain only on the defective color plane in the uncompressed images. As expected, the errors caused by a green defect are lower than for red and blue defects due to the extra green neighbour pixels available in the calculation. There are two trends observed in the compressed images. First, due to the suppression of the peak error as shown from the uniform background results, the MSE calculated from the defective color plane decreases with compression. On the other hand, the spread of the errors into the two non-defective color channels increases the MSE on these planes. This trend is demonstrated by the plot of the MSE versus IOffset of a red defective pixel at different compression levels in Figure 3-17.


Figure 3-17. MSE vs. IOffset of a red defect on non-uniform background (bilinear demosaic). Panels show the RED, GREEN and BLUE planes (MSE, 0.00 to 0.50, vs. IOffset, 0.2 to 1.0) with curves for TIFF, JPEG 9, JPEG 6 and JPEG 3.

Comparing the three plots, as the quality of the compression declines, the difference between the MSEs evaluated from the three color planes reduces as well. In fact, as shown in the red and blue planes of the JPEG 3 curve, the plots are nearly the same. This suggests that the compression algorithm has the tendency to suppress color variations between the three color planes, hence lowering the impact from the defective pixel. However, as noted, the use of low quality compression (e.g. JPEG 3) is not very common.

3.3.5.2 Median demosaic results

Likewise, the MSEs calculated for the median demosaic images are

summarized in Table 3-10.

Table 3-10. Comparison of defect in varying color region with median demosaicing.

               TIFF                   JPEG 9                JPEG 6                JPEG 3
               R     G     B          R     G     B         R     G     B         R     G     B
RED    0.2     0.031 0.007 0.008      0.038 0.023 0.041     0.068 0.047 0.082     0.124 0.074 0.126
RED    0.4     0.104 0.024 0.024      0.081 0.057 0.075     0.081 0.059 0.093     0.147 0.091 0.144
RED    0.6     0.203 0.045 0.046      0.132 0.099 0.117     0.106 0.085 0.121     0.172 0.116 0.170
RED    0.8     0.260 0.056 0.057      0.157 0.118 0.136     0.132 0.110 0.145     0.179 0.122 0.176
RED    1.0     0.274 0.059 0.060      0.161 0.122 0.140     0.139 0.117 0.151     0.180 0.124 0.178
GREEN  0.2     0.017 0.024 0.015      0.045 0.034 0.050     0.064 0.047 0.080     0.033 0.022 0.042
GREEN  0.4     0.053 0.085 0.053      0.106 0.101 0.113     0.084 0.067 0.100     0.065 0.055 0.080
GREEN  0.6     0.101 0.169 0.101      0.155 0.156 0.167     0.156 0.142 0.175     0.097 0.089 0.131
GREEN  0.8     0.135 0.228 0.135      0.203 0.206 0.215     0.219 0.205 0.238     0.119 0.111 0.164
GREEN  1.0     0.141 0.239 0.142      0.211 0.214 0.223     0.234 0.220 0.253     0.122 0.114 0.166
BLUE   0.2     0.009 0.009 0.032      0.033 0.022 0.042     0.063 0.044 0.078     0.127 0.076 0.129
BLUE   0.4     0.029 0.028 0.112      0.065 0.055 0.080     0.073 0.054 0.089     0.131 0.083 0.137
BLUE   0.6     0.057 0.053 0.229      0.097 0.089 0.131     0.085 0.068 0.106     0.145 0.095 0.151
BLUE   0.8     0.075 0.071 0.316      0.119 0.111 0.164     0.105 0.088 0.129     0.153 0.104 0.160
BLUE   1.0     0.078 0.074 0.331      0.122 0.114 0.166     0.108 0.091 0.133     0.155 0.105 0.160

The non-zero MSEs calculated from the defect-free color channels reflect

the spread of defects in the uncompressed images. Although the MSEs

calculated on these color planes are relatively low, these errors will increase

through compression as the defective region expands. Shown in Figure 3-18 is

the plot of MSE versus IOffset of a red defect.


Figure 3-18. MSE vs. IOffset of a red defect on non-uniform background (median demosaic). Panels show the RED, GREEN and BLUE planes (MSE, 0.00 to 0.50, vs. IOffset, 0.2 to 1.0) with curves for TIFF, JPEG 9, JPEG 6 and JPEG 3.

Again the smoothing effect from the compression can be observed from the JPEG 3 curves on the red and blue planes. Although the defect resides on the red color plane, the MSE calculated from the red plane is nearly the same as that on the blue plane. As observed in the MSE plots of the red color plane, for a low impact defect (IOffset = 0.2) the error spread through compression dominates. Thus the MSE is increased through the spreading caused by the low quality compression. However, for a high impact defect (i.e. IOffset >= 0.6), the IOffset becomes the dominating error factor. Hence, the low quality compression reduces the appearance of the defect cluster by suppressing the peak error from the defective pixel.

3.3.5.3 Kimmel demosaic results

The last set of results is from Kimmel demosaicing and is summarized in

Table 3-11.

Table 3-11. Comparison of defect in varying color region with Kimmel demosaicing.

               TIFF                   JPEG 9                JPEG 6                JPEG 3
               R     G     B          R     G     B         R     G     B         R     G     B
RED    0.2     0.028 0.006 0.005      0.037 0.023 0.033     0.069 0.045 0.069     0.123 0.072 0.117
RED    0.4     0.098 0.019 0.013      0.083 0.055 0.065     0.083 0.056 0.080     0.135 0.082 0.127
RED    0.6     0.194 0.031 0.020      0.119 0.079 0.089     0.102 0.070 0.093     0.154 0.100 0.146
RED    0.8     0.251 0.035 0.022      0.143 0.087 0.097     0.116 0.076 0.099     0.165 0.108 0.155
RED    1.0     0.265 0.036 0.022      0.150 0.087 0.097     0.118 0.078 0.101     0.166 0.109 0.157
GREEN  0.2     0.027 0.024 0.018      0.049 0.040 0.050     0.066 0.047 0.072     0.123 0.072 0.120
GREEN  0.4     0.087 0.084 0.060      0.115 0.110 0.116     0.088 0.070 0.094     0.142 0.090 0.137
GREEN  0.6     0.150 0.167 0.121      0.168 0.166 0.172     0.178 0.162 0.185     0.165 0.112 0.160
GREEN  0.8     0.186 0.226 0.160      0.219 0.220 0.226     0.243 0.227 0.250     0.178 0.126 0.172
GREEN  1.0     0.196 0.238 0.168      0.228 0.229 0.235     0.272 0.257 0.279     0.178 0.126 0.172
BLUE   0.2     0.011 0.010 0.029      0.033 0.024 0.036     0.063 0.044 0.069     0.122 0.076 0.120
BLUE   0.4     0.028 0.028 0.107      0.064 0.057 0.083     0.071 0.052 0.084     0.128 0.082 0.130
BLUE   0.6     0.041 0.048 0.225      0.082 0.076 0.130     0.080 0.061 0.104     0.134 0.089 0.138
BLUE   0.8     0.046 0.058 0.314      0.094 0.089 0.157     0.086 0.068 0.124     0.139 0.098 0.155
BLUE   1.0     0.047 0.059 0.328      0.095 0.090 0.159     0.087 0.068 0.126     0.141 0.098 0.160

Although the Kimmel demosaic function will also spread the defects onto the fault-free color planes, in most cases the MSEs reported in Table 3-11 are lower than those in Table 3-9 (bilinear) and Table 3-10 (median). Shown in Figure 3-19 is the plot of MSE versus IOffset of a red defect.


Figure 3-19. MSE vs. IOffset of a red defect on non-uniform background (Kimmel demosaic). Panels show the RED, GREEN and BLUE planes (MSE, 0.00 to 0.50, vs. IOffset, 0.2 to 1.0) with curves for TIFF, JPEG 9, JPEG 6 and JPEG 3.

Looking at the defect-free color planes (i.e. green and blue), the MSEs calculated from the uncompressed images are lower than those observed from the median demosaic images. Hence, this suggests the adaptive approach does not impose significant errors on the neighbouring pixels. However, as the uniform background result shows, the small errors are spread into the 7x7 region with this demosaic function. Again the compression shows a reduction of the peak error, with lower MSE values when compared to the uncompressed images. However, the appearance of low impact defects is enhanced by the JPEG compression through the wide spreading of the small errors. It is important to note that Kimmel-type algorithms are very common in high-end cameras.

3.4 Summary

The defects found in digital cameras are categorized into two types. The first type is the fully-stuck defect, which is no longer responsive to light. These defects are most often observed at manufacture time and factory mapping can resolve such problems. The second type of fault is still responsive to light but fails to give a proper measure of the light level (e.g. partially-stuck defects, standard and partially-stuck hot pixels). The offset from these faults is either a constant or an exposure dependent value, and both are added onto the illumination signal in the pixel. Hence, these faults will always appear brighter than the normal pixels.

The bright defects can be identified easily with a dark frame calibration test. The raw format and explicit exposure control available on DSLRs provide an ideal setting for calibration measurement. On the other hand, the lack of explicit exposure control on PS and cellphone cameras requires that the calibration be evaluated in a compressed color image form. Defects found in color images are altered by the camera's internal imaging functions. Hence only the count of defects can be extracted from such calibration.


Three demosaicing algorithms were studied: bilinear, median and Kimmel. Testing of the three demosaic functions shows that the adaptive approach used by the Kimmel algorithm provided the best image quality. The gradient interpolation and color ratio correction in Kimmel help reduce the moiré artifacts caused by interpolation errors on or near the edges of objects. The testing of a single defective pixel on a uniform background has shown a 3x3 spread with the bilinear and median demosaics and a 7x7 spread with the Kimmel demosaic for uncompressed images. The lossy JPEG compressed images have shown a reduction of the peak error from the faulty pixel. However, the compression spreads the defect so that it affects all three color planes, impacting pixels within an 18x18 region.

In the next chapter the calibration technique described will be applied to a set of DSLRs, PS and cellphone cameras. Details such as the magnitude of the defect parameters, the spatial locations and the increase of defect counts will serve in the yield analysis.


4: CHARACTERIZATION OF IN-FIELD DEFECTS

Most research studies on imager fault analysis have focused on measuring the magnitude of the dark current and its variation with the temperature shift in sensors[33][34]. These studies have neglected important information such as the spatial distributions and development rates of the post-fabrication defects over the lifetime of the sensor. By comparison, in our study of in-field defects we aim to answer two main questions. First, what is the causal source of the defects developing in commercial cameras? Second, what is the growth rate of these faults? Like the standard fabrication-time defect yield analysis, the distribution of defects and the failure rate are crucial in the characterization of the defect source mechanism. In this chapter, we will be using the defect detection techniques presented in chapter 3 to obtain the defect distribution and characteristics. We will be monitoring a set of commercial digital imagers (DSLR, PS and cellphones) over the course of their lifetime. Testing the cameras periodically will provide information such as the quantity and temporal growth of faults, and the magnitude of the defect parameters. This information can help us analyze the spatial distribution of faulty pixels and the defect rate of the faults on each individual sensor. We will also investigate how changing camera parameters, such as the gain (ISO), affects the number of visible defects.


4.1 Basic DSLR defect data

In our continuing research, we have been analyzing a set of 21 commercial DSLRs as listed in Appendix A. The cameras in this study range from less than a year to <6 years old from the manufacture date and all have sensors of at least 23x15 mm in size. From our most recent dark-frame calibration analysis, the breakdown of the defects found for each camera is summarized in Table 4-1.

Table 4-1. Summary of defects identified in DSLRs at ISO 400.

                                                              Hot Pixel
Camera  Sensor Type  Age (year)  Stuck-High  Partially-stuck  Standard  Partially-stuck  Total
A       APS          6           0           0                0         12               12
B       APS          1           0           0                0         7                7
C       APS          4           0           0                1         5                6
D       APS          2           0           0                1         1                2
E       APS          2           0           0                0         1                1
F       APS          0.8         0           0                0         3                3
G       APS          0.5         0           0                0         1                1
H       APS          4           0           0                1         3                4
I       APS          1           0           0                0         1                1
J       CCD          4           0           0                17        0                17
K       CCD          5           0           0                6         16               22
L       CCD          5           0           0                11        23               34
M       CCD          5           0           0                12        16               28
O       CCD          2           0           0                17        1                18
N       CCD          1           0           0                0         7                7
P       CCD          2.5         0           0                9         1                10
Q       APS          2           0           0                0         2                2
R       CCD          2.5         0           0                17        0                17
S       CCD          0.5         0           0                5         6                11
T       APS          2           0           0                0         0                0
U       CCD          5           0           0                26        0                26
Cumulative Total:                0           0                123       106              229

One of the first clear points from this table relates to the stuck high defects mentioned in chapter 3. Although photographers have reported seeing stuck defects in their images, from our calibrations, of the cumulative total of 229 identified faults from 21 cameras, there was no evidence of any stuck high or low faults. The same is true for pure partially-stuck defects, where none were identified from our calibrations. In fact, the dominant defect type found in both APS and CCD sensors is the hot pixel. Another important finding is that while standard hot pixels are assumed to have no impact at very short exposures, a significant number of those we find do appear at all exposure times due to an additional offset. The partially-stuck hot pixels were very common: 106 out of the 229 (46%) identified hot pixels exist with an offset. This is a new finding because the offset in hot pixels appears not to be addressed in the literature. As discussed in section 3.1, this offset will affect the pixel output at all exposure levels, further reducing the pixel dynamic range. In fact, based on the data observed in our table, most, if not all, of the stuck-high defects reported by camera users could simply be partially-stuck hot pixels with a high offset. In particular, this point will be made clear in the next section, which shows the impact of camera gain (ISO) settings on the defect count.

Another important observation from these results is that the number of defects found in older cameras consistently increases with the age of the sensor. The growth rate of defects will be discussed in detail in the temporal growth section. Furthermore, we confirm our initial finding that defects do not change significantly in parameters after formation[32]. The accumulation of defects suggests the quality of the sensors will degrade over time.

4.2 ISO Amplification

One of the most important adjustable functions on a digital camera is the ISO gain. In traditional film cameras ISO is the sensitivity measure of the film, and in the case of digital cameras it is the gain or sensitivity setting of the sensor scaled to match that of the film equivalent. Importantly, the ISO is simply a numerical gain setting of the amplification applied to the sensor output. Because the ISO of a digital imager can be adjusted from image to image, it has become a significant new control ability for digital photography which was not available in film cameras. Despite this advantage, the amplification of the pixel output creates an unanticipated problem: it will enhance the appearance of defects as well. The usable ISO range for a camera is limited by the noise level on the sensor. Before 2004, most of the commercial DSLRs had a usable ISO range of ISO 100 - 1600. We use ISO 400 for our standard dark-frame calibration, as at this setting the noise level was very low in all DSLRs, whereas at higher ISOs the noise signal began to increase in the older cameras. We would expect that as the gain increases the noise will be amplified, and this increase applies in the same way to the defect parameters. Camera manufacturers use software algorithms to reduce the noise levels at higher ISO, but these do not suppress the increase in hot pixel intensities. Table 4-2 summarizes the results from the calibrations performed at different ISO levels and the cumulative total of hot pixels identified at each ISO level from a sub-set of 13 cameras. As not all cameras from Table 4-1 were accessible for re-testing, the calibration at different ISOs could only be performed on this sub-set of cameras.


Table 4-2. Cumulative total of hot pixels identified at various ISO levels.

                               ISO
Camera   Age    100    200    400    800    1600
A        6      4      11     12     23     25
B        1      1      4      7      15     22
C        4      0      1      6      11     19
D        2      1      1      2      4      10
E        2      0      0      1      5      12
F        0.8    1      1      2      3      6
G        0.5    0      0      1      1      2
H        4      0      0      4      7      12
I        1      0      0      1      5      8
J        4      0      10     17     23     33
K        5      8      15     22     43     69
L        5      11     17     34     52     67
M        5      11     18     28     48     82
Cumulative Total: 37   78     137    240    367

The accumulated defects from the 13 cameras, aged 0.5-6 years, showed a clear trend where the number of faults found increased with the ISO level. At ISO 400, a total of 137 defects were identified in this set of cameras. By comparison, at the lower setting of ISO 100 only 27% of these defects were observable. The number of defects identified increased significantly when these cameras were calibrated at still higher ISOs. From the result at ISO 800 the number of faults increased by a factor of 1.75 to 240 defects, and at ISO 1600 by a factor of 2.7 with 367 defects. Hence, the number of defects we would expect from the 21 cameras in Table 4-1 will actually be >600 defects when calibrated at the higher ISO levels. This trend suggests that at the low ISOs many defects were not identified because they were not distinguishable from the noise signal.

4.2.1 ISO and hot pixel parameters

Calibration at still higher ISOs, up to 25600 in the newest DSLR (camera B), shows that the defect parameters are amplified significantly while the noise level remains only moderate. Thus the distinction between the background noise and the faults becomes more noticeable. The plot in Figure 4-1 shows the comparison of the dark response versus exposure time of an identified hot pixel at various ISO levels.

Figure 4-1. Dark response of a hot pixel at various ISO levels (normalized pixel output vs. exposure duration (s), with curves for ISO 100 through ISO 25600).

The magnitude of the dark currents and offsets measured for the defect are summarized in Table 4-3. Note that the dark currents and offsets here are measured on a normalized pixel scale where 1 represents saturation. Thus one over the scaled dark current is the exposure time until saturation if the offset is zero. While all hot pixels showed the same behaviour, the pixel selected for Figure 4-1 clearly demonstrates how the dark current and offset change over the 100 to 25600 ISO range.
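As a worked example of this relationship (a sketch only, assuming the normalized linear dark response implied above, where the dark output is roughly RDark*Texp + b and 1.0 is saturation), the snippet below estimates the dark saturation exposure 1/RDark and the dynamic range lost to the offset b for a few of the ISO settings in Table 4-3. At ISO 12800 this gives roughly 0.2 s and a ~69% loss of range, consistent with the 1/4 s and ~70% figures discussed below.

    # Dark current R_dark (1/s) and offset b for the hot pixel of Table 4-3,
    # on the normalized scale where 1.0 = saturation.
    measurements = {100: (0.040, 0.003), 400: (0.205, 0.019),
                    1600: (0.878, 0.081), 12800: (4.885, 0.692)}

    for iso, (r_dark, b) in sorted(measurements.items()):
        t_sat = 1.0 / r_dark   # exposure to saturate in the dark if the offset were zero
        print(f"ISO {iso:>5}: ~{t_sat:.2f} s to dark saturation, "
              f"offset removes {100 * b:.0f}% of the dynamic range")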


Table 4-3. Magnitude of dark current and offset measured for the defect in Figure 4-1.

ISO                  100    200    400    800    1600   3200   6400   12800   25600
Dark current (1/s)   0.040  0.096  0.205  0.454  0.878  1.791  3.163  4.885   NA
Offset               0.003  0.010  0.019  0.043  0.081  0.150  0.319  0.692   1.000

It is clear that for the defect shown here, at ISOs below our standard 400, the defect magnitudes of RDark (values <0.1/s) and b (values <0.01) are relatively low. However, as the gain goes up with the ISO level, both RDark and b increase dramatically. The plot at ISO 12800 shows that the dynamic range of the defective pixel has a 70% reduction due to the offset b. This offset value will be reflected at all exposure times. Since this offset is added to the collected light, this means that the pixel will be saturated under all but the darkest areas of a photograph. Making this worse, the slope RDark becomes steeper as the ISO gain increases, so the dark current contribution rises rapidly with exposure time. At ISO 12800 the dark current rate is 4.89/s, which means the pixel will saturate in the dark at a 1/4s exposure. At ISO 25600, this defective pixel is fully saturated at all exposures, which causes this fault to appear as a stuck-high defect, so the slope is unmeasurable. Note that the combination of the amplified offset b and the rapid increase of RDark with exposure will cause any such pixel with illumination >0.5 to appear as a stuck high defect in almost any exposure. From Table 4-1, 46.2% of the identified faults are partially-stuck hot pixels. This suggests that the development of stuck high pixels in the field may actually be hot pixels with high offsets amplified by the ISO gain.


The numerical gain applied to the sensor differs between manufacturers due to variations in sensitivity. From our measurements of RDark and b at various ISO settings, we have plotted these values versus ISO for two individual faults in Figure 4-2.

Figure 4-2. Plot of (a) Dark current (1/s), (b) Offset vs. ISO for two individual faults (Defect 1 and Defect 2).

Both plots in Figure 4-2, (a) dark current and (b) offset magnitude, display a linear increase with ISO level. Given the measurements of RDark and b at specific ISO levels, we can approximate the dark current and offset at other ISO levels from the following derivation:

R_{Dark\text{-}ISO_x} = A \cdot ISO_x , \qquad R_{Dark\text{-}calibrated} = A \cdot ISO_{calibrated}

R_{Dark\text{-}ISO_x} = \frac{ISO_x}{ISO_{calibrated}} \cdot R_{Dark\text{-}calibrated} , \qquad m = \frac{ISO_x}{ISO_{calibrated}}    (4-1)

The linear trend from the two plots suggests that the gain m from Equation (3-3) is simply the ratio between ISOx and the ISO level at which RDark and b were calibrated (i.e. ISOcalibrated), as shown in Equation (4-1). This means, as expected, that the hot pixel parameters increase with ISO at the same rate that the collected photoelectrons do. This ratio indicates that the RDark and b measured at ISO 400 are increased by a factor of 4 at ISO 1600 and 8 at ISO 3200. As expected, the impact from such a scaling factor will cause the moderate defects identified at ISO 400 to appear as fully stuck faults at the higher ISO levels. Hence, the 137 defects identified at ISO 400 will most likely reach saturation at ISO 1600, if not at higher ISOs.
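A minimal sketch of this scaling rule is shown below; it simply applies the gain ratio m = ISOx / ISOcalibrated of Equation (4-1) to parameters calibrated at one ISO. The function name and example values are illustrative. For the pixel of Table 4-3 this predicts RDark of about 0.82/s and b of about 0.076 at ISO 1600, close to the measured 0.878/s and 0.081.

    def scale_defect_parameters(r_dark_cal, b_cal, iso_cal, iso_x):
        # Approximate R_dark and b at iso_x from values calibrated at iso_cal,
        # using the linear gain ratio m = iso_x / iso_cal from Equation (4-1).
        m = iso_x / iso_cal
        return m * r_dark_cal, m * b_cal

    # Example: a hot pixel calibrated at ISO 400 (values from Table 4-3)
    r_dark_400, b_400 = 0.205, 0.019
    for iso in (800, 1600, 3200):
        r_dark, b = scale_defect_parameters(r_dark_400, b_400, 400, iso)
        print(f"ISO {iso}: R_dark ~ {r_dark:.2f}/s, b ~ {b:.3f}")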

4.2.2 ISO and hot pixel numbers

The cumulative defect totals from Table 4-2 show that at ISO 100 only 10% of the total defects from ISO 1600 were detected, and at ISO 400 this increased to 37%. This suggests that the majority of the in-field faults are low impact damage and that the visibility of these defects is due to amplification from the ISO gain. To see how the magnitudes of the defect parameters vary over different ISO levels, the plots in Figure 4-3 show the distributions of RDark and b collected from all cameras listed in Table 4-2.

Figure 4-3. Magnitude distribution of (a) dark current intensity rate (1/s), (b) dark offset at various ISO levels (defect count (%) vs. magnitude, with curves for ISO 100 - 1600).


Since most of the tested cameras from Table 4-2 have a usable ISO range of 100-1600, the distributions in the two plots of Figure 4-3 are scaled based on the cumulative defect total identified at the highest ISO level (i.e. 1600). A common trend observed from the two plots is that at all ISO levels the majority of the defects were identified with RDark <= 0.2/s and b < 0.01. This observation shows that many of the faults are created with a low impact (i.e. dark current). At ISO 100 and 200, where the gain factor remains small, only 20-40% of the faults from ISO 1600 were identified. As shown from the two distributions, the magnitudes of these faults are small. In fact, only 10-20% of these defects will pass our threshold test (i.e. Ioffset >= 0.1) at these ISO ranges, as reported in Table 4-2. As the ISO level increases, both RDark and b are amplified, and the distributions from ISO 400, 800 and 1600 show that more defects are measured with RDark > 0.2/s and b > 0.01. The broadening of the distribution is caused by the amplification of the moderate defects identified at ISO 100 and 200. In fact, at ISO 1600 the plot shows over 10% of the faults have RDark >= 1/s. These high dark current faults will saturate almost immediately even with fast shutter speeds at modest light levels, thus appearing as fully-stuck high defects.

As the sensor technology improves, the noise level observed on sensors is reduced through both pixel design and software noise suppression algorithms. Hence, the usable ISO range in the newer cameras is continuously expanding. In 2010, most DSLR cameras released into the market have a usable ISO range up to ISO 6400 or higher. However, from our collection of cameras only one of the newest cameras, which uses a 24x36mm sensor (camera B), has calibration data for the whole ISO range up to 25600. To observe the trend in the increase of defect count at these new high ISO settings, the distribution of the defect parameters for this single camera is shown in Figure 4-4.

Figure 4-4. Magnitude distribution of (a) dark current, (b) dark offset at various ISO levels from camera B (defect count (%) vs. magnitude, with curves for ISO 100 - 25600).

Again the two distributions shown are scaled based on the number of defects identified at the highest ISO level (i.e. ISO 25600). When compared to the distributions from the collective cameras at the lower ISOs shown in Figure 4-3, a similar trend is found in camera B, where most defects are created with low damage (i.e. RDark <= 0.08/s or b <= 0.01). With the expanded ISO range of 3200 - 25600, the moderate defects measured at the low ISOs continue to scale up, thus broadening the distribution of the defect parameters. In fact, at ISO 25600 we observed two clear peaks in both plots. The first peak shows ~50% defect count at RDark > 0.2/s or b >= 0.03 and the second peak shows ~20% defect count at RDark > 1/s or b > 0.1. The first peak demonstrates the scaling of the low impact defects and the second peak is from the moderate defects which appear at ISO 100 - 1600. It is unknown if additional hot pixels will continue to appear above the noise threshold as the ISO increases beyond our available settings. However, the trend observed in the expanded ISO range shows that as the gain factor increases with the higher ISOs, it will enhance the significance and numbers of the low impact defects. In addition, the saturation of moderate defects will cause great distortion to the image quality in high ISO images.

As the defect parameters get amplified by the ISO gain, faults will become more visible even in short exposure time images. Recall the calculation of the combined offset from Equation (3-4), where Ioffset provides an estimate of the brightness of each hot pixel at a specific exposure and ISO setting. This offset can be interpreted as the dynamic range reduction that the faulty pixel will encounter. Thus a large offset is a major interference in the pixel operation. Given the defect parameters measured at various ISO levels from Figure 4-3, we calculated the Ioffset at 1/30s, a typical short exposure setting used for low/modest light conditions, and at 1/2s, a long exposure used in very dark conditions. These measures of Ioffset can be used to evaluate the impact of defects in a regular camera setting. The distribution of Ioffset is plotted in Figure 4-5.
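A small sketch of this calculation is given below. It assumes the combined offset of Equation (3-4) takes the form Ioffset = RDark*Texp + b on the normalized scale (clipped at saturation); the parameter values shown are hypothetical, chosen only to illustrate the 1/30 s and 1/2 s cases.

    import numpy as np

    def combined_offset(r_dark, b, t_exp):
        # Combined defect offset at exposure t_exp, assuming the form
        # I_offset = R_dark * t_exp + b, clipped at saturation (1.0).
        return np.minimum(r_dark * t_exp + b, 1.0)

    # Hypothetical per-defect parameters measured at one ISO level
    r_dark = np.array([0.08, 0.2, 0.9, 4.9])     # dark current rates (1/s)
    b = np.array([0.004, 0.02, 0.08, 0.69])      # dark offsets
    for t_exp, label in ((1 / 30, "1/30 s"), (0.5, "1/2 s")):
        print(label, np.round(combined_offset(r_dark, b, t_exp), 3))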


Figure 4-5. Combined defect offset distribution at (a) exposure time 1/30s, (b) exposure time 1/2s (defect count (%) vs. combined defect offset, with curves for ISO 100 - 1600).

At a commonly used low exposure time, e.g. 1/30s, the distribution of Ioffset in Figure 4-5(a) shows that the majority of the defect offset values are smaller than 0.1 at all ISO levels. At ISO 400, over 50% of the identified faults will have a combined offset <0.1. However, at high ISOs (i.e. 800 and 1600), the distributions of Ioffset are broader and over 2% of the defective pixels will have Ioffset >= 0.2. In other words, these defective pixels will have a 20% reduction in dynamic range. In fact, at ISO 1600 ~2% of the defective pixels will be fully saturated even at such low exposure levels. It is well known from camera user forums that what appear to be stuck high pixels have been observed. However, from our own measurements we did not find any true stuck high defects. As shown from Equation (3-4), the photo-current is added on top of Ioffset; thus pixels with Ioffset >= 0.2 will be at or near saturation in most images. The distribution here shows that these reported stuck high pixels are most likely caused by fully saturated hot pixels at common ISO levels.

In everyday photography, long exposures (i.e. >1/15s) are rarely used because camera motion will distort the image unless a fixed/tripod mounting is employed. Typical long exposure photography is used under low light conditions with a high ISO setting, and such pictures also contain large dark areas (e.g. night scenes). Thus the brightness of the defective pixels will create a significant impact on the image quality. In Figure 4-5(b), the plot shows the distribution of Ioffset calculated at a 1/2s exposure time. Different from the results seen at 1/30s, the plot in Figure 4-5(b) shows a broader distribution with more pixels having a high measure of Ioffset even at the low ISO levels. At ISO 800 and 1600 over 20% of the defects are measured with Ioffset >= 0.2. Hence, combined with the incident illumination, these faults will most likely be at saturation. Another photography area is sport or action images, where a high ISO is used to compensate for the very short exposure time. In those conditions the combination of amplified offsets in the defective pixels and high light levels again brings saturated pixels that distort the images.

It is clear that the camera noise level has improved at high ISO levels, but the gain increases the "hotness" of the faulty pixels, giving a clear distinction between hot pixels and the background noise. Calibration at the expanded ISO range shows 2-3 times more hot pixels than at the moderate ISO 400 level. Although the distributions of RDark and Ioffset show that the majority of these faults are created with low damage, the ISO gain will cause these low impact defects to become more prominent and the moderate defects to reach full saturation when combined with the exposed image.


4.3 Spatial Distribution of faults

Like the traditional yield analysis on manufactured chips, the mapping of defects can provide insight into the defect causal mechanism, as each potential source will produce a different spatial and temporal pattern of defects. For example, as noted in chapter 3, if the main defect source is degradation of the sensor material, we should observe large defect clusters in the sensor[39]. If, on the other hand, the defects are caused by a radiation source (i.e. cosmic ray radiation), it will likely result in permanent damage to randomly located pixels. An example of a clustered pattern is shown in Figure 4-6(a) and a random pattern in (b).

Figure 4-6. Spatial pattern: (a) clustered, (b) random.

The information related to the spatial location of the faulty pixels can be collected from our dark-frame calibration procedure. Figure 4-7 shows an example of the spatial map of faulty pixels calibrated at ISO 400 for camera A.


Figure 4-7. Defect map of hot pixels identified from camera A at ISO 400 (x-y positions in mm over the 22.7 x 15.1 mm sensor area).

From visual inspection of the defect maps of all the tested cameras (e.g. Figure 4-7), we have observed no local clusters of defects, and faults appear to be uniformly distributed over the entire sensing area. Indeed, in all the cameras tested not a single case of adjacent defects has been found. To confirm this observation, more rigorous statistical analysis is applied. In the following sections we will apply different methods to analyze the spatial defect patterns observed from each tested sensor. We want to find whether these faults are clustered (e.g. Figure 4-6(a)) or randomly distributed (e.g. Figure 4-6(b)) on these sensors.

4.3.1 Inter-defect distance distribution

The first method is to analyze the spatial distribution of defects from each individual sensor. The Euclidean distances between faults (see Figure 4-8) are calculated for each sensor from Table 4-1. The distances collected from each sensor are categorized by the sensor type and then plotted into one single distribution, as shown in Figure 4-9.
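A minimal sketch of this distance calculation is shown below, assuming the defect locations are available as x, y coordinates in mm; the coordinates in the example are hypothetical.

    import numpy as np

    def inter_defect_distances(coords_mm):
        # All pairwise Euclidean distances (mm) between defect locations,
        # given an (n, 2) array of x, y coordinates on the sensor.
        coords = np.asarray(coords_mm, dtype=float)
        diff = coords[:, None, :] - coords[None, :, :]
        dist = np.sqrt((diff ** 2).sum(axis=-1))
        i, j = np.triu_indices(len(coords), k=1)   # count each pair once
        return dist[i, j]

    # Example: three hypothetical defects on a 22.7 x 15.1 mm sensor
    defects = [(2.0, 3.5), (11.3, 7.6), (20.1, 14.2)]
    d = inter_defect_distances(defects)
    hist, edges = np.histogram(d, bins=20, range=(0, 30))
    print(np.round(d, 2))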

Figure 4-8. Inter-defect distance measurement.

Figure 4-9. Inter-defect distance distribution of (a) APS, (b) CCD sensors at ISO 400 (frequency (%) vs. distance (mm)).

Despite the differences between the two sensor technologies, the plots of Figure 4-9(a) APS and Figure 4-9(b) CCD both showed a distribution expected from a random occurrence of defects, with a peak near the median inter-pixel distance and no multiple peaks at either long or short distances. If any of the tested sensors exhibited local clustering of defects, we would expect to observe multiple peaks at both short and long distances. As shown in Figure 4-6(a), the short distances arise from the close defects within a cluster and the long distances arise from the separation distances between the clusters. A detailed measure of the two distributions is summarized in Table 4-4. The distribution plots from APS and CCD both appeared as broad distributions with a single peak at ~10 mm and a standard deviation of 5.2 mm. The 10 mm distance is nearly half the maximum distance on a 24x15mm sensor. The broad distribution suggests that defects are randomly scattered over the sensor area. In addition, the similar finding in both sensors indicates that the defect source is not related to the manufacturing process or the design of the pixel; rather it is a common random external source such as radiation.

Table 4-4. Statistics summary of spatial defect distributions from APS and CCD sensors.

                              Distance
Sensor Type  # of defects  Mean (mm)  Standard deviation (mm)  Min (pixel)
APS          38            9.94       5.22                     151
CCD          190           10.02      5.26                     2

Up to now, the analysis shown is based on the defects found at ISO 400 (Table 4-1). As shown in section 4.2, the brightness of defects is enhanced by the ISO gain factor; thus calibration at higher ISOs will reveal more of the low impact defects. However, from our collection of cameras, only a subset of imagers was available for testing at higher ISOs (see Table 4-2). Based on the calibrations at multiple ISOs collected from the 13 cameras (Table 4-2), we repeated the same distance analysis for the defects found on these sensors at the tested ISO levels. The distribution of distances collected from each sensor is plotted in Figure 4-10 and the measurements of the plots are summarized in Table 4-5.


Figure 4-10. Defect inter-distance distribution at various ISO levels: (a) ISO 400, (b) ISO 800, (c) ISO 1600 (frequency (%) vs. distance (mm)).

Table 4-5. Statistics summary of spatial defect distributions at various ISO settings.

                            Distance
ISO level  # of defects  Mean (mm)  Standard deviation (mm)  Min (pixel)
400        137           10.37      5.26                     3
800        240           10.37      5.34                     2
1600       367           10.35      5.40                     2

Although more defects were found at ISO 800 and 1600, the distributions remain nearly the same, with a single-peak broad distribution and an average distance of 10.36 mm. Thus the calibrations from higher ISOs continue to confirm that there are no local defect clusters in any of the tested sensors. These distributions strongly suggest that these faults are not related to material degradation, where clusters of defects are expected. In fact, the similar broad uniform distributions from all three plots suggest these faults are caused by a random source such as cosmic ray radiation.

4.3.2 Inter-defect distance chi-square test

In order to strengthen the conclusion from our visual inspection of the inter-defect distributions, a statistical chi-square "goodness of fit" test, as proposed by our collaborators Israel and Zahava Koren of UMASS Amherst[40], is performed on the three distributions shown in Figure 4-10. First a Monte-Carlo simulation is performed by simulating 100 sensors of size 24.86 x 16.56 mm with defects uniformly distributed over the sensor area. In this case a random number generator is used to create the x and y coordinates for the 100 - 400 defective pixels scattered over the simulated sensors. Then, for each simulated sensor, the inter-defect distances are calculated to derive the expected distribution. The Monte-Carlo results are listed in Table 4-6 as the expected value for each of the 20 distance bins from 0 to >28 mm. The three experimental distributions shown in Figure 4-10 are then compared against the theoretical distribution by computing the chi-square value,

\chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i} ,    (4-2)

where O_i and E_i are the observed and expected frequencies respectively. The numerical values from the observed distributions are also summarized in Table 4-6.

Table 4-6. Theoretical vs. actual inter-defect distance distribution (in percentage).

Distance   0.0    1.5    3.0    4.5    6.0    7.5    9.0    10.5   12.0   13.5   15.0
Expected   0.42   3.11   5.58   7.50   8.82   9.63   9.97   9.92   9.36   8.49   7.31
ISO 400    0.35   3.88   6.36   7.84   9.18   10.10  11.44  9.96   8.12   8.12   7.91
ISO 800    0.64   3.93   5.82   8.27   9.06   10.44  10.58  10.26  8.69   8.57   7.32
ISO 1600   0.57   3.68   5.72   8.41   9.66   10.92  10.82  9.93   8.99   8.20   6.49

Distance   16.5   18.0   19.5   21.0   22.5   24.0   25.5   27.0   28.5   Total   Chi-square
Expected   5.84   4.59   3.61   2.69   1.79   0.97   0.32   0.08   0.01   100.0   -
ISO 400    4.73   5.44   3.32   1.34   1.27   0.14   0.21   0.21   0.07   100.0   3.40
ISO 800    5.28   4.62   2.82   1.45   1.28   0.29   0.17   0.17   0.34   100.0   14.14
ISO 1600   5.13   4.29   2.72   2.07   1.25   0.38   0.18   0.17   0.42   100.0   19.73

The chi-square values are 3.40, 14.14 and 19.73 for the observed distance distributions at ISO 400, 800 and 1600 respectively. The chi-square distribution table shows that the critical value is 30.14 (for 19 degrees of freedom and a significance level of 0.05). Since the critical value is much greater than the chi-square values from all three distributions, all experimental distributions observed in Figure 4-10 are consistent with a random defect distribution. Thus the defect source is most likely a random mechanism such as radiation rather than a clustered source like material degradation.
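The sketch below outlines this procedure under stated assumptions (a uniform random number generator for the defect coordinates, 200 defects per simulated sensor, and 1.5 mm distance bins); it illustrates the method rather than reproducing the exact simulation code used here.

    import numpy as np

    rng = np.random.default_rng(0)
    W, H = 24.86, 16.56                      # simulated sensor size (mm)
    bins = np.arange(0.0, 31.5, 1.5)         # distance bins, 0 to 30 mm

    def pairwise_distances(coords):
        diff = coords[:, None, :] - coords[None, :, :]
        d = np.sqrt((diff ** 2).sum(axis=-1))
        i, j = np.triu_indices(len(coords), k=1)
        return d[i, j]

    # Expected distribution from 100 simulated sensors with uniformly random defects
    expected = np.zeros(len(bins) - 1)
    for _ in range(100):
        pts = rng.uniform((0.0, 0.0), (W, H), size=(200, 2))
        h, _ = np.histogram(pairwise_distances(pts), bins=bins)
        expected += h
    expected = 100.0 * expected / expected.sum()     # convert counts to percentages

    def chi_square(observed, expected):
        # Chi-square statistic between observed and expected percentages (Eq. 4-2)
        mask = expected > 0                          # skip empty expected bins
        return float(np.sum((observed[mask] - expected[mask]) ** 2 / expected[mask]))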

4.3.3 Nearest neighbour analysis

In the first two methods we analyzed the spatial distribution of defects based on all the inter-distances between faults on the sensor. Instead of observing all the distances measured between defects, in this third method we will test the distribution of faults using a nearest neighbour analysis. This method for identifying clusters of events is well established in the literature[41][42] and is used for identifying clustered distributions over an area in fields such as geography. Different from the previous analysis, this methodology is based on the distance measured to the closest defect point. In a clustered pattern, the distances between close defects will be much smaller than in a randomly scattered pattern. Hence, this measure provides a means to concentrate on the close inter-event distances.

First we compute the nearest neighbour distance for each defective pixel on the sensor. The shortest distance measured from the i-th defective pixel is denoted by d_i. Under Complete Spatial Randomness (CSR) conditions, where


each faulty pixel is an independent event, the theoretical distribution function is

G(d) = 1 - \exp(-\lambda \pi d^2) , for d \geq 0, where \lambda = n / A .    (4-3)

The distribution of the nearest distance, d, thus depends on n, the number of defects on the sensor, and A, the sensor area.

Now we measure the actual distribution in the data set. The Empirical Distribution Function (EDF), Ĝ(d), of the nearest neighbour distances is calculated as follows:

\hat{G}(d) = \#[d_i < d] / n ,    (4-4)

where #[d_i < d] counts the number of defects with nearest neighbour distance d_i < d.
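As an illustrative sketch (assuming the defect coordinate arrays come from the calibration defect maps), the nearest neighbour distances, the EDF of Equation (4-4) and the CSR distribution of Equation (4-3) can be computed as follows:

import numpy as np

def nearest_neighbour_distances(xs, ys):
    """d_i: distance from each defect to its closest neighbouring defect."""
    pts = np.column_stack((xs, ys))
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)                  # ignore the zero self-distance
    return d.min(axis=1)

def edf(di, d_values):
    """Empirical distribution function G^(d) = #[d_i < d] / n   (Eq. 4-4)."""
    return np.array([(di < d).mean() for d in d_values])

def g_theoretical(d_values, n, area):
    """CSR distribution G(d) = 1 - exp(-lambda*pi*d^2), lambda = n/A   (Eq. 4-3)."""
    lam = n / area
    return 1.0 - np.exp(-lam * np.pi * d_values ** 2)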

Given the set of d_i measured from the defects on each sensor in Table 4-2, we can compute the empirical distribution Ĝ(d) using Equation (4-4) and compare it to the theoretical distribution G(d). Figure 4-11(a) shows the plot of Ĝ(d) and G(d) versus d for camera M at ISO 1600. The G(d) shown in Figure 4-11(a) is calculated based on the sensor size of camera M with 85 defects at ISO 1600 (from Table 4-2). From a visual inspection of Figure 4-11(a), the EDF calculated for camera M closely resembles the theoretical distribution, G(d). The shape of the EDF gives insight into the spatial distribution of defects on the sensor. If the defects are clustered on the sensor, then Ĝ(d) will rise rapidly due to the short distances measured within the clusters. On the other hand, if defects are randomly scattered, Ĝ(d) will increase slowly at


short distances until the critical point (i.e. the mean nearest distance, d̄) and will then rise rapidly beyond this point.

To compare the experimental Ĝ(d) and the theoretical G(d), Figure 4-11(b) shows a plot of Ĝ(d) versus G(d). If the EDF were exactly the same as the theoretical distribution, all points would lie on the straight 45° line. Otherwise, the deviation from that line indicates how closely Ĝ(d) resembles a CSR distribution (i.e. G(d)). For camera M, a visual inspection of Figure 4-11(b) suggests that Ĝ(d) closely resembles the CSR distribution. Thus, defects are most likely randomly placed on this sensor.

(a) Ĝ(d) and G(d) vs. d (b) Ĝ(d) vs. G(d)

Figure 4-11. Comparison of the theoretical and empirical distributions of nearest neighbour distances in camera M.

The computation of the theoretical distribution G(d) depends on the

number of defects, n, and the area of the sensor, A. As the parameters n and A

are different for each tested imager, the calculated G(d) will vary as well. Thus


the comparison between the theoretical and empirical distributions must be done individually for the 13 cameras shown in Table 4-2.

The comparison of Ĝ(d) with G(d) for these sensors is based on two parameters: R_n, the nearest neighbour index, and z, the standard normal deviate. R_n is the ratio between the observed mean nearest distance and the expected mean nearest distance from the theoretical CSR distribution,

R_n = \bar{d}_{min} / E(d_{min}) .    (4-5)

The expected value from the theoretical distribution G(d), with the boundary correction factor given by Donnelly[43], is evaluated as follows:

E(d_{min}) = 0.5\sqrt{A/n} + (0.051 + 0.042\, n^{-0.5})\, l(A)/n ,    (4-6)

where l(A) is the perimeter of the sensor with area A. As mentioned earlier, d̄_min is the critical point which indicates the steepness of the distribution. Thus R_n is a measure of whether the observed pattern is clustered, random or regular. The nearest neighbour index takes a value between 0 and 2.15, where 0 indicates a fully clustered pattern and 2.15 a perfectly regular pattern. If R_n lies close to 1, then the observed pattern is most likely random. Tables of confidence levels for R_n are given in [42].

The second assessment parameter, the standard normal deviate, is a test of the statistical significance of the comparison between G(d) and Ĝ(d). The standard normal deviate is calculated as follows:


z = \frac{\bar{d}_{min} - E(d_{min})}{\sqrt{Var(d_{min})}} ,    (4-7)

where the variance of d_min from the theoretical distribution is calculated by:

Var(d_{min}) = 0.070\, A/n^2 + 0.037\, l(A)\sqrt{A/n^5} .    (4-8)

At the 95% confidence level, if the z-score is between -1.96 and +1.96, then we cannot reject the null hypothesis. In other words, the observed pattern is most likely a random distribution. If the z-score lies outside this range, the null hypothesis is rejected, and the observed pattern is either clustered or dispersed.
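A minimal sketch of these two test statistics, assuming the defect coordinates and the sensor width w and height h (in mm) are known, is given below; it is an illustration of Equations (4-5) to (4-8), not the thesis code:

import numpy as np

def rn_and_z(xs, ys, w, h):
    """Nearest neighbour index R_n (Eq. 4-5) and standard normal deviate z (Eq. 4-7),
    using Donnelly's boundary-corrected mean (Eq. 4-6) and variance (Eq. 4-8)."""
    pts = np.column_stack((xs, ys))
    dist = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)
    d_mean = dist.min(axis=1).mean()             # observed mean nearest distance
    n, area, perim = len(xs), w * h, 2.0 * (w + h)   # l(A) = sensor perimeter
    e_dmin = 0.5 * np.sqrt(area / n) + (0.051 + 0.042 / np.sqrt(n)) * perim / n
    var_dmin = 0.070 * area / n**2 + 0.037 * perim * np.sqrt(area / n**5)
    return d_mean / e_dmin, (d_mean - e_dmin) / np.sqrt(var_dmin)

# A random pattern should give R_n near 1 and |z| < 1.96
rng = np.random.default_rng(1)
x, y = rng.uniform(0, 24.86, 85), rng.uniform(0, 16.56, 85)
print(rn_and_z(x, y, 24.86, 16.56))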

Using the above equations, both Rn and z are computed for each tested

sensor from Table 4-2 at the three calibrated ISO levels, and the results are

summarized in Table 4-7.

Table 4-7. Comparison of Ĝ(d) and G(d) for each tested camera.

Camera     ISO 400                ISO 800                ISO 1600
           #    Rn     z          #    Rn     z          #    Rn     z
A          12   1.09   0.53       23   1.01   0.05       25   1.07   0.57
B          7    1.19   0.84       15   1.21   1.39       22   1.12   1.00
C          6    0.71   -1.18      11   1.01   0.08       19   1.00   0.01
D          2    0.47   1.19       4    0.83   -0.56      10   0.92   -0.43
E          1    --     --         5    1.28   1.04       12   1.04   0.24
F          2    0.60   -0.98      3    0.97   -0.09      6    0.88   -0.51
G          1    --     --         1    --     --         2    1.00   0.00
H          4    0.97   -0.11      7    1.18   0.82       12   1.20   1.20
I          1    --     --         5    1.04   0.17       8    0.92   -0.36
J          17   1.08   0.57       23   1.05   0.38       33   1.02   0.17
K          22   1.11   0.87       43   1.05   0.58       69   0.92   -1.23
L          34   0.94   -0.64      52   0.95   -0.68      67   0.90   -1.43
M          28   0.86   -1.24      48   0.89   -1.31      82   0.91   -1.48
Average Rn:     0.90                   1.04                   0.99

The average Rn calculated at ISO 400 is 0.9, at 800 is 1.04 and at 1600 is

0.99. Most of these Rn values lie close to 1; thus the defect patterns observed on


these sensors at all ISO levels most likely follow a random distribution. The number of defects identified varies between the tested sensors and increases at the higher ISO levels. Hence, the significance of each computed R_n depends on the number of faults on the sensor. An alternative test is to compare the calculated R_n with the nearest neighbour index critical values. For the given number of defects, n, from each sensor, R_n must lie within the critical range of a two-tailed test in order not to reject the null hypothesis. Verified with the two-tailed test at the 95% confidence level, all R_n values fall within the critical range for the given number of sample points; hence we cannot reject the null hypothesis. Thus at the 95% confidence level the defect patterns observed on these sensors are most likely random patterns.

Additional verification with the standard normal deviate, z, shows that all calculated values fall within the -1.96 to +1.96 range. Hence, we can again conclude at the 95% confidence level that the null hypothesis cannot be rejected; each sensor exhibits a random pattern of defects.

4.3.4 Nearest neighbour Monte-Carlo simulation

In the last part of the analysis, we compare the defect map of each camera with a set of simulated sensors using a Monte-Carlo method. Each simulated sensor has a set of defects distributed randomly over the sensor area. Again the x and y locations of the defects are generated with a random number generator. Given a finite set S of random defect patterns, an upper and lower bound must exist for the curves in Figure 4-11(b). If the faults on a tested sensor are randomly distributed, then the defect map of that sensor is simply an element of the set S and should lie within these bounds. Assume we have a finite


set S of 100 random spatial patterns. To see whether the defect map lies within the finite set S, we need to find the upper and lower bounds of the set. To find these bounds, we generated 99 simulated sensors with n defects randomly distributed over the area. The sensor size and number of defects of the 99 simulated imagers are based on the actual imager. For each simulated sensor the computed EDF is denoted by Ĝ_i(d), where i = 2, 3, …, s. Note that Ĝ_1(d) is the EDF calculated from the actual imager. The average EDF, Ḡ(d), from the 99 simulated sensors is calculated as

from the 99 sensors is calculated as

1-

)(ˆ

)(s

dG

dG iji∑

≠= . (4-9)

The upper and lower bounds are defined by Equations (4-10) and (4-11):

U(d) = \max_{i=2,\ldots,s} \hat{G}_i(d) ,    (4-10)

L(d) = \min_{i=2,\ldots,s} \hat{G}_i(d) .    (4-11)

A sample plot of Ĝ_1(d) from camera M versus Ḡ(d) from the 99 simulated sensors, together with the upper and lower bounds, is shown in Figure 4-12. Unlike the plot in Figure 4-11(b), this plot compares the observed pattern against distributions derived from a set of simulated sensors whose defects were placed by a CSR process. The plot in Figure 4-12 shows that our observed pattern lies close to the simulated distribution. In fact, Ĝ_1(d) falls within the upper and lower bounds from the simulation. Hence, this result suggests that the observed Ĝ_1(d) is simply one instance of a randomly distributed defect pattern. Repeating this analysis for each tested sensor, visual inspection of the results shows that all observed distributions fall within the simulated upper and lower bounds. Thus


again this confirms that all defect patterns from our set of tested sensors are simply instances of a random defect pattern.

Figure 4-12. Empirical distribution Ĝ(d) vs. Ḡ(d) with upper and lower bounds.
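The simulation envelope can be sketched as follows (illustrative only; the 200-point distance grid is an arbitrary choice, and the observed coordinates are assumed to come from the calibration defect map):

import numpy as np

def _nnd(xs, ys):
    """Nearest neighbour distance of each defect."""
    pts = np.column_stack((xs, ys))
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1)

def _edf(di, grid):
    return np.array([(di < d).mean() for d in grid])

def mc_envelope(xs, ys, w, h, s=100, seed=0):
    """Observed EDF G^_1(d) against the mean and bounds of s-1 CSR simulations
    (Eqs. 4-9 to 4-11)."""
    rng = np.random.default_rng(seed)
    n = len(xs)
    grid = np.linspace(0.0, np.hypot(w, h), 200)
    g_obs = _edf(_nnd(xs, ys), grid)
    sims = np.array([_edf(_nnd(rng.uniform(0, w, n), rng.uniform(0, h, n)), grid)
                     for _ in range(s - 1)])
    g_bar, upper, lower = sims.mean(0), sims.max(0), sims.min(0)   # Eqs. 4-9 to 4-11
    inside = bool(np.all((g_obs >= lower) & (g_obs <= upper)))
    return grid, g_obs, g_bar, upper, lower, inside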

4.3.5 Spatial distribution results

In the spatial distribution analysis, all four methods, the inter-defect distance distribution, the chi-square test, the nearest neighbour analysis and the Monte-Carlo modelling, show with good statistical confidence that the defects observed on our tested imagers are not clustered. Rather, these faults resemble a spatially random pattern. As noted in chapter 3, the lack of defect clustering indicates these faults are not related to material degradation. Instead, the random distribution points to a random mechanism such as cosmic ray radiation. The finding from this analysis is consistent with the experimental observations found in Theuwissen's studies[27],[28], which showed higher defect rates in higher radiation environments.


4.4 Basic defect data from small sensors

Before 2002, the small-area sensor market was dominated by Point-and-Shoot (PS) cameras. However, over the past five years a rapidly growing new class, cellphone cameras, has made small-area sensors more common than ever. Both cellphone and PS cameras target portability over image quality; thus the sensors employed by these cameras are small, ~2-8% of the area of typical DSLR sensors. The functions available on these cameras are relatively simple as well. Missing features such as manual exposure control and raw image output make it challenging to calibrate these cameras. Instead of the standard dark-frame calibration procedure used for DSLRs, we use the customized procedure discussed in section 3.2.2, which extracts the defects from JPEG images. The customized dark-frame calibration can identify bright pixels in the dark images; however, because defects are distorted by the imaging process, the exact location can only be approximated to within a couple of pixels by identifying the peak in each defect cluster.

4.4.1 Defect data from cellphone cameras

In this study we have worked with a collection of 10 cellphones of the same model (Nokia N82), all manufactured at about the same time. Each of these cellphones has a built-in APS sensor of size 3.0 x 2.4 mm with a pixel size of 2.2 x 2.2 µm. The calibration procedure from section 3.2.2 was applied to each phone and repeated about once a year; the results are summarized in Table 4-8.


Table 4-8. Accumulated defect count from 10 cellphone cameras (ISO 400).

Cellphone   2008   2009   2010
Phone A     9      14     18
Phone B     13     15     17
Phone C     8      15     20
Phone D     6      20     24
Phone E     12     21     26
Phone F     14     16     18
Phone G     14     18     20
Phone H     10     19     25
Phone I     14     20     23
Phone J     17     19     22
Cumulative total:    117    177    213
Average per phone:   12     18     21

Due to the limited exposure and ISO control, only a short range of exposures is available and we can only calibrate at ISO 400. We cannot plot the dark response versus exposure time; thus it is not possible to measure the defect parameters (i.e. dark current and offset). We can only conclude that these are bright defects, without determining the exact defect type. However, as no stuck-high or partially-stuck defects were found in the DSLRs, these observed faults are most likely hot pixels. From the first calibrations, when these cellphones were <1 year old, we identified 117 faults; thus, on average, there are ~12 defects on each sensor. Compared to 1-year-old DSLR sensors, the number of faults developed on these small sensors is significantly higher than the 3-4 faults/year (at ISO 400) found in sensors with 12x the area. To keep manufacturing costs low in these embedded cameras, where the cellphone typically costs less than a full PS camera, mapping fabrication-time defects prior to shipment is not feasible. Hence, the faults identified on these imagers include manufacture-time defects plus those developed while operating in the field. Despite the lack of defect mapping from the manufacturer, by the second and


third tests we recorded cumulative totals of 177 and 213 defects respectively.

4.4.2 Defect data from Point-and-shoot cameras

In addition to cellphone cameras, we have also identified defects in a set of PS cameras. Each of these cameras uses a CCD sensor with an area ranging from 20-40 mm² and a pixel size from 1.5-2.8 µm. The ages of these PS cameras span 1-7 years. PS cameras do provide explicit control of the ISO setting; thus we are able to calibrate these cameras at various ISO levels. However, the limited shutter-time control and the JPEG-only output require the use of the same defect identification method as for the cellphone cameras. Shown in Table 4-9 is the count of defects identified from the set of PS cameras. In these PS cameras the manufacture-time defects are mapped out.

Table 4-9. Accumulated defect count from Point-and-Shoot cameras at various ISO levels.

Camera   Sensor Type   Age (years)   ISO 100   ISO 200   ISO 400
PS-A     CCD           3             7         7         11
PS-B     CCD           6             10        11        27
PS-C     CCD           7             6         7         10
PS-D     CCD           1             0         0         24
Cumulative total:                    23        25        72

Inspecting the defect count from each PS camera in Table 4-9, the results also show an increase in defect count when calibrating at the higher ISO levels. Unfortunately, for the PS cameras we do not yet have multiple years of data. Nevertheless, the defect counts are much higher than for DSLRs at the same ISO. In the next section we will examine the temporal growth of defects based on the sensor age and defect count for each camera type.


4.5 Temporal Growth

Temporal growth is another aspect that is often examined in a defect yield analysis. The growth of defects with time is an important factor in determining how the camera images will deteriorate as the system ages. The rate at which faults develop on the sensor also gives another indication of the characteristics of the defect source. In this study, two methods were used to measure the defect development rates for each tested imager. The first method utilizes the results from periodic calibrations, which we show in this section. The second method, which uses historical images to identify the first appearance of defects, will be shown in chapter 5.

Each dark-frame calibration gives the number of defects on the sensor at that specific age. Thus, by calibrating each sensor periodically, we can collect the number of defects developed over the lifetime of the sensor. However, as many of the cameras are only occasionally available (we borrow them from several owners), the times between tests are rather irregular. By plotting the defect count versus age, as shown in Figure 4-13 for camera A, we can observe the trend at which the defect count increases with time. The sizes of the sensors used by the three types of cameras are different. Hence we divide this analysis into two parts. First we examine the defect rates from DSLRs; then, in the second part, we look at the small sensors used by cellphones and PS cameras.


Figure 4-13. Defect count vs. sensor age for camera A from dark-frame calibration (at ISO 400).

4.5.1 Defect growth rate on large area sensors

Visual inspection of the plot in Figure 4-13 suggests that the number of defects on the sensor increases linearly with time. Hence a linear regression fit is used to measure the defect growth rate. In the investigation of the defect rates on the large area

sensors, we have generated plots like Figure 4-13 for each camera in Table 4-1.

The measured defect rates for each of these cameras are summarized in

Table 4-10 for mid-size DSLRs and Table 4-11 for full-frame DSLRs. For the

subset of cameras in Table 4-2, we have measured the rates at the different ISO

levels.
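The growth rate in defects/year is simply the slope of a least-squares linear fit of defect count against sensor age. A minimal sketch, assuming ages in months and counts taken from the periodic calibrations (the sample values below are hypothetical), is:

import numpy as np

def defect_rate(age_months, defect_counts):
    """Slope of the linear regression fit, converted to defects/year."""
    slope, intercept = np.polyfit(np.asarray(age_months) / 12.0,
                                  np.asarray(defect_counts), 1)
    return slope, intercept

# Hypothetical calibration history: (age in months, accumulated defect count)
ages = [10, 25, 41, 58, 80]
counts = [3, 8, 12, 17, 24]
rate, offset = defect_rate(ages, counts)
print(f"~{rate:.2f} defects/year")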


Table 4-10. Measured defect rate from calibration results for all tested mid-size DSLRs.

                         Defect Rate (defects/year) by ISO
Camera   Sensor Type   100    200    400    800    1600   3200
A        APS           1.70   2.09   3.57   3.35   3.83   --
C        APS           --     0.50   1.15   2.10   3.80   --
D        APS           1.20   2.40   3.68   3.86   7.71   --
E        APS           --     --     1.86   2.14   5.14   --
F        APS           1.11   1.56   2.22   2.88   4.85   --
H        APS           --     --     0.87   1.53   2.62   3.49
I        APS           --     --     0.40   1.60   3.20   --
J        CCD           --     2.08   3.86   5.31   9.90   --
K        CCD           1.52   2.86   5.69   8.19   13.14  --
L        CCD           2.06   3.19   6.70   9.60   12.37  --
M        CCD           1.86   3.32   4.98   8.68   14.58  27.88
O        CCD           --     --     9.82   --     --     --
N        CCD           --     --     10.50  --     --     --
P        CCD           --     --     3.81   --     --     --
Q        APS           --     --     0.77   --     --     --
R        CCD           --     --     5.10   --     --     --
S        CCD           --     --     2.53   --     --     --
T        APS           --     --     --     --     --     --
U        CCD           --     --     4.46   --     --     --
Average rate (APS):    1.34   1.64   1.82   2.49   4.45   3.49
Average rate (CCD):    1.81   2.86   5.75   7.94   12.50  27.88

Table 4-11. Measured defect rate from calibration results for all tested full-frame DSLRs.

                         Defect Rate (defects/year) by ISO
Camera   Sensor Type   100    200    400    800    1600   3200
B        APS           2.18   4.36   7.35   11.38  18.31  20.47
G        APS           --     --     2.00   2.00   4.00   4.00
Average rate (APS):    2.18   4.36   4.68   6.69   11.16  12.24

For the cameras that have been calibrated at various ISOs, the defect rates increase as we measure at the higher ISO levels. Taking the average of the defect rates measured from the mid-size DSLR sensors, the result is summarized at the end of Table 4-10 for each sensor type (i.e. APS, CCD). As shown by the results, the average rate of the mid-size APS at ISO 100 is 1.34 defects/year and this increases by a factor of 2.6 to 3.49 defects/year at ISO 3200. For the full-frame sensors, shown in Table 4-11, the average defect rate at ISO 100 is 2.18 defects/year and this increases by a factor of 11 to 24.47 defects/year at ISO 3200. Similarly, for the mid-size CCD sensors, we found 1.81 defects/year


at ISO 100, and this increases to 27.88 defects/year at ISO 3200, which is 15 times higher. Since more low-impact defects are detected at the higher ISOs, the measurement most likely reflects the true defect rate, including hot pixels that were too weak to be observed above the noise at the lower gain levels.

From Table 4-10, the average rates calculated for the mid-size APS and CCD sensors show that the mid-size CCDs have a higher defect rate than the APS sensors. Shown in Figure 4-14 is the average defect count versus age of the sensors from the cameras in Table 4-1.

Figure 4-14. Average defect count vs. sensor age by sensor type at ISO 400.

From visual inspection, the chart in Figure 4-14 demonstrates that on average the CCD sensors have a higher defect count than APS sensors of the same age in every year measured. In fact, as reported in Table 4-10, at ISO 400 the average growth rate of the CCDs (5.75 defects/year) is 3 times higher than that of the APS sensors (1.82 defects/year). Moreover, the defect rates of the mid-size CCDs are nearly the same as those of the full-frame APS sensors. This finding suggests, first, that the defect rate might scale with the sensor area, which we explore in detail in chapter 6, and second, that the CCD sensors might be more sensitive to defects than the APS sensors.


Both the APS and CCD sensors show a continuous linear increase of defects with time. By comparison, as noted in chapter 3, a material degradation mechanism creates an exponential growth in defects with time. This again indicates that the in-field defects are not caused by material stability issues related to the manufacturing processes. The similar trend shared by these sensors suggests that the causal mechanism is independent of the sensor design; in fact, it is most likely that both sensor types are exposed to and affected by the same causal mechanism. The linear growth rates also suggest that faults are not caused by a single traumatic event but by a continuous impact of some source on the sensors. However, the higher defect rate found in CCDs does indicate that this type of sensor might be more sensitive to the cause of the defects. One factor that might affect the defect rate is the fill factor of the pixel, i.e. the fraction of the pixel area that is photosensitive. For a typical APS pixel the fill factor is ~25%, while for a CCD pixel the fill factor ranges from 70-90%. The larger photosensitive area of the CCD pixel has more surface exposed to the defect source. Thus the probability of defect damage to the photosensitive area is higher for a CCD pixel, and this might result in the higher defect count in the CCD sensors.

From a more in-depth investigation, several cameras which have been on transatlantic/transpacific flights have shown more defects than other cameras of the same age and model. It is known that the cosmic ray radiation level is 100 times higher at transatlantic/transpacific flight altitudes. Since it has been hypothesized that cosmic rays are the causal source of hot pixels, this would lead to a higher defect


count. To better understand the effect of cosmic ray radiation as the defect source, we must obtain a better measurement of the date on which each defect developed. This analysis will be performed in chapter 5 by analyzing the historical images captured by the sensors.

4.5.2 Defect growth rate on small area sensor

From the multiple calibrations taken with the cellphone cameras, as shown

in Table 4-8, we can plot the defect count versus sensor age for each cellphone.

As the defects identified from calibrations include manufacture-time defects, we cannot assume zero defects at time 0. The defect rates measured with the linear

regression fits for each tested cellphone are summarized in Table 4-12.

Table 4-12. Measured defect rates from cellphone cameras at ISO 400.

Cellphone   Defect Rate (defects/year)
Phone A     3.95
Phone B     1.97
Phone C     4.93
Phone D     3.95
Phone E     4.93
Phone F     1.97
Phone G     1.97
Phone H     5.92
Phone I     2.96
Phone J     2.96
Average rate:   3.55

Our measurements show that on average these sensors have developed ~3.55 defects/year. The rates observed from the cellphone cameras are 1.9x higher than the 1.82 defects/year for the mid-size DSLR APS sensors at ISO 400 (Table 4-10). However, the areas of the cellphone sensors are more than 12x smaller than those of DSLRs. Thus the defect rate per unit sensor area is actually much


higher on these small-area, small-pixel sensors. Chapter 6 will investigate the impact of sensor and pixel size on the defect rate in detail.

Using the defect count and sensor age reported in Table 4-9, we can measure the temporal growth of defects for the PS cameras. Since the manufacturers do perform defect mapping prior to shipment of these PS cameras, we can assume there are no defects at time 0. The defect rates measured at various ISO levels are listed in Table 4-13. A point to note is that only one calibration was taken with each of these PS cameras; hence the measurements of these defect rates have more uncertainty than for the other cameras (i.e. DSLRs). The software calibration tool for the PS cameras was only developed at the end of this thesis, so this work will be extended by future studies.

Table 4-13. Measured defect rates for Point-and-Shoot cameras at various ISO levels.

                         Defect rate (defects/year)
Camera   Sensor Type   ISO 100   ISO 200   ISO 400
PS-A     CCD           1.88      1.88      2.95
PS-B     CCD           1.58      1.73      4.26
PS-C     CCD           0.85      1.00      1.42
PS-D     CCD           0.00      0.00      18.88
Average rate:          1.08      1.15      6.88

From these first measurements, the average defect rate of the 4 tested PS cameras at ISO 400 is 6.88 defects/year, which is higher than the 3.55 defects/year reported for the cellphone cameras. The CCD sensors used in the PS cameras are typically 3x larger than the APS sensors in the cellphone cameras. Hence, the high defect rates of these CCD sensors are consistent with our previous observation that the defect count in the CCDs is higher than in the APS sensors. The 6.88 defects/year from the PS cameras is similar to


the 5.75 defects/year of the mid-size CCDs (Table 4-10). However, the difference in sensor area again shows that the small-area, small-pixel sensors experience a higher defect rate per mm².

4.5.3 Calibration temporal growth limitations

There are two maindraw backs to this method of temporal growth

measurment. First, the accuracy of the defect rate approximation depends on

the number of sample points used in the linear fit. In another words, if only one

calibration was taken with the imager, the measuremet rate from the fit function

will be biased by the single point. The second problem is the frequency at

which calibrations were taken. If the time between each calibration is one year

or more apart, the calibration will not be able to provide a close estimation of the

first appearance of the defects. This will cause an underestimate of the defect

rate. Due to the limited access to some of the cameras, only a few imagers

benefited from the continuous calibrations at a few months apart, while most

cameras are calibrated once a year or longer. To overcome this problem in

chapter 5, we will present a statistical accumulation approach to extract defect

dates from the historical images captured by the cameras. Such method can

increase the accuracy of defect rate measurements as the frequency at which

images were captured is much higher than the calibrations.

The preliminary results show the defect growth rates for the two types of sensors (i.e. APS and CCD), with damage accumulating as the sensor ages. The increase in defects on the sensors limits the useful lifetime of a sensor. Although photographers often purchase new cameras every 5 years


or less, in the case of embedded systems, such as sensors in vehicles and security cameras, this accumulation could affect the reliability of the devices over time.

4.6 Chapter Summary

In this chapter we have shown the defect counts collected from DSLRs, PS and cellphone cameras. All these cameras show increases in defects with the age of the sensor. In addition, calibrations at higher ISOs revealed more low-impact defects, suggesting that many of the faults developed in the field cause only low damage. However, the detailed analysis of the defect parameters has shown that the brightness of a defect increases at a much higher rate than the noise signal. Hence, more low-impact defects are seen at the high ISOs, while the moderate defects found at low ISO reach saturation.

In the first spatial analysis we looked at the inter-defect distances between all defects. The histograms of the inter-distances show a broad distribution with a single peak at ~10 mm. A chi-square test was then used to compare the observed distributions against the expected distribution derived from a set of 100 simulated sensors with defects scattered randomly across the area. The chi-square values suggest that the observed distributions at ISO 400, 800 and 1600 all resemble a random distribution at the 95% confidence level. In the third method, the nearest neighbour analysis was used. The distributions of the shortest distances between defects on each sensor were computed and compared to the CSR-based distribution. Both the nearest neighbour index R_n and the standard normal deviate confirm at 95% confidence that these defects are randomly spaced on the sensors. The last method modelled a set of sensors with


randomly scattered defects using the Monte-Carlo method. The nearest neighbour distributions from the simulated sensors form an upper and lower bound, and the comparison with the measured distributions shows that all sensors fall within that boundary. Thus these defect patterns are instances of a random pattern.

The temporal growth of defects for all imagers indicates a linear increase in defects with time. The continuous accumulation suggests that the defect causal mechanism is not related to the sensor design or manufacturing process but is shared by both the APS and CCD sensors. The characteristics found in the spatial and temporal analyses indicate that the defects on these sensors are most likely caused by cosmic ray radiation. More importantly, the higher defect rates found in the CCDs indicate that this type of sensor might be more sensitive to radiation. Lastly, the preliminary results on the small sensors indicate a higher defect rate per mm².

This study has looked at defect patterns at various ISO levels. With more defects found at the higher ISOs and no clustering patterns observed, this strengthens the statistical relevance of our analysis and confirms that the defects on the sensors are most likely caused by a random external source rather than material degradation.


5: TEMPORAL GROWTH OF IN-FIELD DEFECTS WITH DEFECT TRACE ALGORITHM

The general defect growth rates can be measured using a series of calibrations taken over time. Each calibration result provides the number of defects on the sensor at the time of the test. Thus, with calibration images collected over the lifetime of the sensor, we can observe the trend of the defect rate for each individual sensor. However, the defect growth rates measured using calibration results suffer from one main problem, namely the time between calibrations. Since some cameras are not accessible for frequent calibrations, the period between tests ranges from several months to over a year, and the growth rate estimates suffer in accuracy. Ideally we would like to know the defect development date to within a few days. Instead of measuring the growth rates from calibration images, an alternative is to identify the defect date using its first appearance in the regular images taken by the cameras.

Each image captured by the camera is a record of the current state of the

sensor as shown in Figure 5-1. By analyzing the presence/absence of defects

over the entire historical image dataset, we can better identify the defect

development date of each faulty pixel. Since photos are taken on a regular basis,

usually with intervals of minutes to less than 3 months, this can improve the

measurement of the defect growth rate over that of simple calibrations. To find

the first appearance of a defect, we could visually inspect each image. However


this process is slow and cumbersome. With some image datasets having over 10,000 pictures, this process is not feasible. In this research, we have developed a mathematical algorithm that uses the image itself to evaluate the appearance of the defects and accumulates statistics over a sequence of images to estimate the first defect development date. In the first part of this chapter, we present the basics of the algorithm. Then we demonstrate the accuracy of the algorithm with a set of simulations. Finally, the algorithm is applied to seven image datasets from cameras that have been operating in the field. The defect development dates established by these searches are then used to measure the defect rates, and these results are compared to the calibration-established rates shown in section 4.5.1.

Figure 5-1. Concept of defect trace algorithm.

5.1 Bayes defect trace algorithm

Previous research by our lab has shown that Bayes algorithms can identify defects from a sequence of pictures[32]. In this work, we extend the method to find the development dates of the hot pixel defects. In this algorithm, the imager is described by an array of W x H pixels and the output of each pixel is denoted by y_i,j. The algorithm focuses on analyzing 8-bit RGB color images,


which means each pixel is composed of 3 color channels (Red, Green, and Blue), each of which has an intensity between 0 (dark) and 255 (saturation). Most commercial digital imagers use CFA sensors; thus the images are assumed to have undergone demosaicing and image compression, as none of these images were raw files. In section 3.1.2 we introduced a mathematical function to characterize the operation of a pixel; this equation can be simplified to

y = m \cdot x + m \cdot (T_{exp} \cdot R_{Dark} + b) = m \cdot x + \Delta , where m = ISO_x / ISO_{calibrated} .    (5-1)

The parameter x is the incident light intensity that strikes the pixel, the defect parameter m·(T_exp·R_Dark + b) is denoted by ∆, and m is the amplification adjusted by the ISO setting. The defect parameters R_Dark and b are estimated from the calibration test; however, these values depend on the ISO setting at which the calibration was taken.

The algorithm analyzes the sequence of images from each camera individually. For each camera, information such as the spatial locations of the defects and the magnitudes of R_Dark and b is needed and is collected through dark-frame calibrations. The first step of the algorithm is the estimation of the expected value of each pixel, denoted by z, by interpolation from the neighboring pixels. This assumes that the presence of the defect will create a known deviation (i.e. ∆) from the expected value obtained by interpolation. Hence, the output of a good pixel is z, and that of a defective pixel is z+∆. The interpolation scheme adopted by this algorithm is a ring mask, as shown in Figure 5-2. This scheme only takes the average of the pixels on the


perimeter of the mask, omitting everything else. As discussed in section 3.3, demosaicing causes a single defective pixel to appear as a cluster of defects in color images. Thus, by omitting the immediate neighbors around the center defective pixel, which would be affected by the presence of the defect, we can gain a more accurate estimate of the expected good pixel value.

Figure 5-2. Ring interpolation.

In any image interpolation, the values produced might differ from the actual pixel signal. Thus, after calculating the image-wide interpolated values, we compute the difference between the actual and expected pixel values (e_i,j = y_i,j - z_i,j) and obtain the image-wide interpolation errors. From these collected image-wide error values, we can compute the interpolation error Probability Density Function (PDF), p_E(e_i,j), and Cumulative Distribution Function (CDF), P_E(e_i,j). The image-wide interpolation error PDF, as shown in Figure 5-3(a), plots the occurrence of each interpolation error value over the range -255 to 255 (i.e. 8-bit pixel values). The frequency of each error value is used as a statistical measure to evaluate the likelihood of the error being an interpolation error or being due to the presence of a defect. The interpolation error CDF, as shown in Figure 5-3(b), plots the count of errors < e for e from -255 to 255.
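A minimal sketch of these image-wide error statistics, assuming y and z are 8-bit arrays of observed and interpolated values for one colour channel, is:

import numpy as np

def error_pdf_cdf(y, z):
    """Image-wide interpolation error PDF p_E(e) and CDF P_E(e), e in [-255, 255]."""
    e = (y.astype(int) - z.astype(int)).ravel()
    counts, _ = np.histogram(e, bins=np.arange(-255.5, 256.5, 1.0))
    pdf = counts / counts.sum()                  # p_E(e), indexed by e + 255
    cdf = np.cumsum(pdf)                         # P_E(e) = Prob(error <= e)
    return pdf, cdf

def p_e(pdf, e):
    """Look up p_E(e) for an integer error value e."""
    return pdf[int(np.clip(e, -255, 255)) + 255]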


(a) Interpolation errors PDF (b) Interpolation errors CDF

Figure 5-3. Image-wide interpolation errors: (a) PDF, (b) CDF.

The second step is to evaluate the presence of defects in each image. For each identified hot pixel (from calibration), we move recursively forward in time over the sequence of images and use the Bayesian function:

Prob(Good|y_k) = \frac{Prob(y_k|Good) \cdot Prob(Good|y_{k-1})}{Prob(y_k|Good) \cdot Prob(Good|y_{k-1}) + Prob(y_k|Hot) \cdot Prob(Hot|y_{k-1})} .    (5-2)

The probability Prob(Good|y_k) evaluates the likelihood that the pixel, with output y_k, is good in the k-th image. Likewise the probability

Prob(Hot|y_k) = 1 - Prob(Good|y_k) ,    (5-3)

evaluates the likelihood that the pixel is hot in the k-th image. The probability Prob(Good|y_k) will be close to 1 at the beginning, when the pixel is still good, and will eventually go down to 0 as we move forward in time and the pixel becomes defective. Thus, from the first image where Prob(Good|y_k) falls below our predetermined threshold, we can identify the defect development date.


The two conditional terms in Equation (5-2), Prob(y_k|Good) and Prob(y_k|Hot), compute the likelihood that the pixel with output y_k is in a good or hot state. These are calculated using the interpolation error PDF, p_E(e_k), as indicated by Equations (5-4) and (5-5) respectively.

Prob(y_k|Good) = p_E(y_k - z_k)    (5-4)
Prob(y_k|Hot) = p_E(y_k - (z_k + m \cdot (T_{exp} \cdot R_{Dark} + b))) = p_E(y_k - (z_k + \Delta))    (5-5)

Given the expected value of a good pixel, z_k (from interpolation), if the actual value y_k (from the k-th image) is for a good pixel, then the error e_k = y_k - z_k will be approximately zero, as in Equation (5-4). Likewise, if the actual value y_k is for a hot pixel, then the expected value is corrected by the deviation factor (z_k + ∆); thus the error e_k = y_k - (z_k + ∆) will be approximately zero, as in Equation (5-5). Because the PDF is derived from the image-wide interpolation errors, the evaluation of Equations (5-4) and (5-5) will depend on the accuracy of the interpolation scheme.

In Equation (5-5), we assume that the defect parameters R_Dark and b are constant values. In reality this is not true, as R_Dark and b will vary due to temperature changes in the sensor[32]. Thus the term e_k = y_k - (z_k + ∆), which assumes a fixed defect parameter, will be an inaccurate estimate. Instead of a constant defect parameter, we modify the model such that the fluctuation of these values is considered. To compensate for the variation in the defect parameters, we provide a conservative underestimate of the dark offset, denoted by ∆_min. The lower bound of the combined dark offset ∆_min is defined by

\Delta_{min} = m \cdot (R_{Dark-min} \cdot T_{exp} + b_{min}) ,    (5-6)


where R_Dark-min and b_min are conservative lower bounds of the range which R_Dark and b may assume during camera operation. With the estimate of the lower bound ∆_min, we can correct the estimate of Prob(y_k|Hot) using the range between ∆_min and ∆_max. Thus the derivation of the new Prob(y_k|Hot) is as follows:

Prob(y|Hot) = Prob(y | \Delta_{min} \leq \Delta \leq \Delta_{max})
            = \sum_{\Delta=\Delta_{min}}^{\Delta_{max}} Prob(y|\Delta) \cdot Prob(\Delta)
            = \sum_{\Delta=\Delta_{min}}^{\Delta_{max}} p_E(y - (z + \Delta)) \cdot Prob(\Delta)    (5-7)

The probability function Prob(∆) is the PDF of ∆ and is treated as a uniform distribution between ∆_min and ∆_max. In an 8-bit imaging system, the maximum value of e and ∆ is 255; thus we treat ∆_max = 255, so Equation (5-7) becomes a discrete summation from ∆_min to 255:

Prob(y|Hot) = \frac{1}{255 - \Delta_{min}} \sum_{\Delta=\Delta_{min}}^{255} p_E(y - (z + \Delta)) .    (5-8)

The evaluation of Equation (5-8) is performed for each identified defect on the sensor and is repeated for every image in the dataset. Assuming there are n defects on the sensor and k images in the dataset, we need to repeat the calculation n·k times. This computational overhead is a major drawback for large image datasets.

Equation (5-8) can be simplified with a change of variables, x = y - (z + ∆). The summation in Equation (5-8) then reduces to a difference of the interpolation error CDF, as shown in Equation (5-9).


Prob(y|Hot) = \frac{1}{255 - \Delta_{min}} \sum_{x=x_{lower}}^{x_{upper}} p_E(x) , where x_{lower} = y - z - 255 and x_{upper} = y - z - \Delta_{min} ,

Prob(y|Hot) = \frac{1}{255 - \Delta_{min}} \left[ P_E(y - z - \Delta_{min}) - P_E(y - z - 255) \right] .    (5-9)

Now the calculation of Prob(y|Hot) requires only a simple subtraction between two values read from a CDF vector.
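Putting Equations (5-2), (5-4) and (5-9) together, one update step for a single suspected hot pixel can be sketched as follows (a simplified illustration, not the thesis implementation; pdf and cdf are the image-wide error statistics indexed from -255 to 255, and delta_min is the conservative offset of Equation (5-6)):

import numpy as np

def _lookup(vec, e):
    """Index the PDF/CDF vector by an integer error e clipped to [-255, 255]."""
    return vec[int(np.clip(e, -255, 255)) + 255]

def prob_y_given_good(pdf, y, z):
    return _lookup(pdf, y - z)                                 # Eq. (5-4)

def prob_y_given_hot(cdf, y, z, delta_min):
    # Eq. (5-9): a difference of two CDF values replaces the sum of Eq. (5-8)
    return (_lookup(cdf, y - z - delta_min) -
            _lookup(cdf, y - z - 255)) / (255.0 - delta_min)

def bayes_update(p_good_prev, pdf, cdf, y, z, delta_min):
    """One recursive step of Eq. (5-2): the updated Prob(Good | y_k)."""
    pg = prob_y_given_good(pdf, y, z) * p_good_prev
    ph = prob_y_given_hot(cdf, y, z, delta_min) * (1.0 - p_good_prev)
    return pg / (pg + ph) if (pg + ph) > 0 else p_good_prev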

5.1.1 Interpolation scheme

The core of the algorithm is based on comparing the interpolated value with the observed pixel value to determine the presence of hot pixels in each image. As shown in the derivation of the Bayes detection algorithm, the PDF p_E(e) and CDF P_E(e) are derived from the image-wide interpolation errors of all pixels. Therefore the choice of interpolation scheme has a significant effect on the accuracy of the algorithm. Interpolation from a close region around pixel x provides the closest estimate of the value of x. For example, averaging over the 3x3 nearest neighbours usually gives the most accurate estimate of x. However, in our interpolation we want to estimate the expected good output of x. The demosaicing analysis in section 3.3.1 showed that a single defective pixel spreads into, and distorts, the neighbouring pixels around the defect. Hence, estimating the good output of a defective pixel from its 3x3 region is misleading. To achieve a better estimate of the good output of each pixel, we modify the typical interpolation mask to one with ring averaging, as shown in Figure 5-2. An example of regular 5x5 averaging is shown in Figure 5-4(a); the estimate from this interpolation is simply the average of all pixels in the mask area. Shown in Figure 5-4(b) is an example of the 5x5


ring averaging mask. The pixel x is interpolated using only the average of the pixels on the perimeter of the mask. Hence we can get a better approximation of the good output of a defective pixel by eliminating the immediate neighbouring pixels, which are most affected by the demosaicing spread of the defect.

(a) 5x5 regular (b) 5x5 ring

Figure 5-4. A 5x5 pixel interpolation mask weighting factor: (a) regular averaging (b) ring averaging.

Note. The 0 and x pixels are not counted in the averaging.
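A minimal sketch of the ring averaging (for one colour channel, assuming the pixel of interest is far enough from the image border) is:

import numpy as np

def ring_average(img, row, col, size=5):
    """Average of the pixels on the perimeter of a size x size mask centred on
    (row, col); the centre and its inner neighbours are excluded (Fig. 5-2)."""
    r = size // 2
    block = img[row - r:row + r + 1, col - r:col + r + 1].astype(float)
    mask = np.zeros_like(block, dtype=bool)
    mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = True   # outer ring only
    return block[mask].mean()

For example, z = ring_average(channel, i, j, size=5) would give the interpolated estimate used in place of z_k for the pixel at (i, j), where channel is a hypothetical 2-D array of one colour plane.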

The image-wide interpolation error distributions derived from a collection of 10 images using the two different interpolation schemes are shown in Figure 5-5. Note that because the 3x3 mask consists only of the immediate neighbours of the center pixel, a 3x3 ring mask is simply the same as the 3x3 regular mask. A summary of the distribution plots is reported in Table 5-1.

Table 5-1. Comparison of interpolation errors from various interpolation schemes.

            3x3             5x5             7x7
          Mean    Std     Mean    Std     Mean    Std
Regular   0.55    5.04    0.67    7.21    0.76    8.81
Ring      NA      NA      0.70    8.55    0.83    10.83


(a) 3x3 (b) 5x5 Regular (c) 7x7 Regular (d) 5x5 Ring (e) 7x7 Ring

Figure 5-5. Image-wide interpolation errors derived from regular and ring averaging.

Visual inspection of the 3x3 error distribution in Figure 5-5(a) shows a mean error of 0.55, and the count of small errors is much higher than for the other mask sizes. As the interpolation mask grows, the mean error increases to 0.67 with the 5x5 regular averaging and 0.76 with the 7x7 regular averaging. The same trend is found in the ring interpolation. Although the ring averaging omits the nearest neighbour pixels from the interpolation, the average error of 0.70 with the 5x5 ring is only slightly higher than the average error of 0.67 with the 5x5 regular mask. Thus we do not lose significant accuracy by omitting the immediate neighbours.


5.1.2 Windowing and Correction scheme

Consider a time-ordered sequence of pictures from a camera (i.e. an image dataset). We now use the Bayes accumulation of statistics to identify when a defect develops within the sequence of images. The detection of defects is based on the change in the accumulated probability Prob(Good|y_k). However, the visibility of the bad pixels is affected by conditions such as the scene being captured, the exposure and the ISO speed used. The resulting statistics from a large set of images will encounter problems such as saturation of the accumulated Bayesian value. For example, if a pixel turns bad after a long operation time, then the accumulation from the early images in the dataset will cause the probability to saturate at the good state, making it hard to detect the small deviations from low-impact defects. To better identify the instantaneous change of a pixel caused by a developing defect, it is better to confine the calculation to a subset of the image sequence using a "window" in which changes from the weaker defects can be detected. For a sliding window through the picture sequence with length n (i.e. number of pictures), the accumulation is defined by the n most recently loaded images, as shown in Figure 5-6.

Figure 5-6. Sliding window approach to defect identification.


The sliding window is implemented as two First-In-First-Out (FIFO) queues for each defective pixel. The two FIFO queues store the p_E(e) and P_E(e) values calculated for each defect from each specific image, as shown in Figure 5-6.
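One plausible realization of the windowed accumulation is sketched below (illustrative only; per_image_terms stands for the per-image likelihood pairs Prob(y_k|Good) and Prob(y_k|Hot) computed for one defect, and re-accumulating from a neutral prior over only the windowed images is an assumption of this sketch):

from collections import deque

def windowed_prob_good(per_image_terms, n=3, prior=0.5):
    """Sliding-window Bayes accumulation.  per_image_terms is a sequence of
    (Prob(y_k|Good), Prob(y_k|Hot)) pairs for one defect; the window keeps only
    the n most recent pairs (the bounded FIFO queues of Fig. 5-6)."""
    window = deque(maxlen=n)                     # old entries are dropped automatically
    history = []
    for pg_term, ph_term in per_image_terms:
        window.append((pg_term, ph_term))
        p_good = prior                           # re-accumulate over the window only
        for g, h in window:
            num = g * p_good
            den = num + h * (1.0 - p_good)
            p_good = num / den if den > 0 else p_good
        history.append(p_good)
    return history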

A problem with the algorithm up to this point is that we are assuming the interpolation error on an image is minimal, so that the large errors measured are due to the presence of the defect. However, this is not always true, as the details of the image scene affect the accuracy of the interpolated values. For example, a local image region with an edge or fine detail tends to have large color variations, and in such cases a large estimation error is unavoidable. In addition, for images captured at high ISO settings, the noise level becomes an issue even in a uniform color region, which also affects the performance of the interpolation scheme. A simple solution would be to filter out images with these problems from the dataset. However, fine details are common in localized regions of a picture, so tossing out images would potentially flush away other useful information. Instead of discarding images, we designed a post-correction procedure which can help correct any false identification due to interpolation error. The procedure for the post-correction is shown in Figure 5-7. First, each defect is identified by examining the plot of Prob(Good|y_k) over the entire image dataset, as shown in Figure 5-7(b). The first image where Prob(Good|y_k) falls below the threshold value is declared as the first defect development date. Given the point where Prob(Good|y_k) < threshold, the post-correction procedure examines the local region in the k-th image. The


evaluation is based on the color mean and variance around the defective pixel. Given these two measurements, if the color variance and mean exceed predefined thresholds, then this region suffers from large interpolation error or the pixels are at or near saturation. Hence the identification from this region is unreliable and is considered invalid. If the identified point fails the post-correction test, then the next detection point in the sequence is tested in the same way. Since the hot pixels turn on at a particular time, and do not change afterwards, the creation point can thus be identified.
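The local-region validity test can be sketched as follows (the 15x15 neighbourhood and the variance and mean limits are illustrative assumptions of this sketch, not the thesis values):

import numpy as np

def detection_is_valid(img, row, col, half=7,
                       var_limit=400.0, mean_limit=240.0):
    """Reject a detection if the region around the pixel has large colour
    variance (edges or fine detail) or is at/near saturation."""
    region = img[max(row - half, 0):row + half + 1,
                 max(col - half, 0):col + half + 1].astype(float)
    return region.var() < var_limit and region.mean() < mean_limit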

5.2 Simulation results

To test the algorithm, we first create a set of images into which simulated defects are injected. The Bayes detection algorithm is then used to find the first image in which each defect was injected. Several factors can affect the performance of the algorithm, including the window length, the interpolation ring size, the image exposure time and the magnitude of the defect parameters. Each of these factors is examined in the simulations for its impact on the performance of the detection algorithm.

Figure 5-7. Post-correction procedure.


The experiment is performed on a 1 MP simulated sensor of size 1234 x 823 pixels. A fixed number of simulated standard hot pixels is scattered randomly over the sensor area. To simulate an aging imager, one additional defect is added progressively at a fixed rate through the set of 50 images. Thus we start with a defect-free sensor and, after every k-th image, an additional defect is created. This process allows us to keep track of the first image at which each defect was injected. For each RGB color photo used in the simulation, we first convert the image into raw form, where a defect can be injected. The dark current of the simulated hot pixel is added on top of the pixel value from the image to create the defective pixel. Then a bilinear demosaicing function is used to return the image to full color form.
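The injection of a simulated hot pixel can be sketched as follows (illustrative only; treating the dark current T_exp·R_Dark as a fraction of full scale and setting the gain m to 1 are assumptions of this sketch):

import numpy as np

def inject_hot_pixel(raw, row, col, dark_current, exposure, offset=0.0):
    """Add the dark signal (T_exp * R_Dark + b) of a simulated hot pixel to a raw
    single-channel image, clipping to the 8-bit range."""
    dark_signal = 255.0 * (exposure * dark_current + offset)   # fraction of full scale
    raw = raw.astype(float)
    raw[row, col] = np.clip(raw[row, col] + dark_signal, 0, 255)
    return raw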

Both the magnitude of the dark current and the exposure setting used to capture the photo impact the visibility of faults in an image. Hence, the evaluation of the Bayes detection function is divided into 3 parts. First we focus on the dark current of the defect, next on the exposure time of the image, and finally we simulate a random process which models a real image dataset. For each set of simulations we test the limits at which defects are still detectable. In each experiment we explore different ring mask sizes and window lengths.

The detection of each defect is classified as a "hit" or a "miss". A "hit" indicates the algorithm is able to identify the image in which the defect first appears. The error, ∆k, is the image-count deviation between the known first defective


image and the detected image. Hence, a “hit” occurs when ∆k = 0. A “miss” occurs when the algorithm fails to detect the defect in any image of the sequence.
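These definitions can be paraphrased in a few lines of Python; here None stands in for a defect that was never flagged, and the helper name is illustrative.

    def score_detection(k_true, k_detected):
        # k_true: index of the image in which the defect was injected.
        # k_detected: index reported by the detector, or None if never flagged.
        if k_detected is None:
            return "miss", None
        delta_k = abs(k_detected - k_true)      # image-count deviation
        return ("hit" if delta_k == 0 else "detected with error", delta_k)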

In the first simulation we evaluate the performance of the algorithm for defects with dark currents in the range 0.2 – 0.8/s. On each simulation run, 10 faulty pixels with the same dark current are randomly scattered over the sensor area. Each defect is injected into images assumed to use a shutter duration between 0.06 and 0.5 s. The Bayes detection algorithm calculates Prob(Good|yk) for each defect over the sequence of images, and the first appearance of a defect is identified with a threshold test, Prob(Good|yk) < 0.5.
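For clarity, the threshold test can be sketched as follows in Python; prob_good_given_y stands in for the windowed Bayesian accumulation described earlier and is an assumed interface rather than the exact code used in this work.

    def first_defective_image(images, pixel, prob_good_given_y, threshold=0.5):
        # Scan the image sequence in chronological order and report the first
        # index at which the accumulated probability of the pixel being good
        # drops below the threshold.
        for k in range(len(images)):
            if prob_good_given_y(images[:k + 1], pixel) < threshold:
                return k          # candidate first appearance of the defect
        return None               # defect never detected in this sequence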

The simulation is repeated for the three interpolation schemes (3x3, 5x5 and 7x7

ring) at window lengths of 3, 5, and 7 images. Based on these settings, the

results for each interpolation mask are summarized in Table 5-2, Table 5-3, and

Table 5-4.

Table 5-2. Performance of Bayes detection at fixed dark current (Intp: 3x3).

Dark        Window = 3              Window = 5              Window = 7
Current     %Hit    %Miss   ∆k      %Hit    %Miss   ∆k      %Hit    %Miss   ∆k
0.2         57.00   4       1.30    21.43   2       1.57     8.89   10      3.07
0.4         59.60   1       0.48    58.00   0       0.48    40.63    4      0.81
0.6         63.54   1       0.47    60.00   0       0.40    54.74    5      0.57
0.8         67.68   0       0.44    63.00   0       0.37    59.60    1      0.54

Table 5-3. Performance of Bayes detection at fixed dark current (Intp: 5x5 ring).

Dark        Window = 3              Window = 5              Window = 7
Current     %Hit    %Miss   ∆k      %Hit    %Miss   ∆k      %Hit    %Miss   ∆k
0.2         57.00   0       1.09    15.00   0       1.99    11.83   7       2.71
0.4         64.00   0       0.44    60.00   0       0.45    42.42   1       0.84
0.6         68.00   0       0.38    64.00   0       0.39    56.00   0       0.52
0.8         73.00   0       0.33    68.00   0       0.33    61.00   0       0.50


Table 5-4. Performance of Bayes detection at fixed dark current (Intp: 7x7 ring).

Dark        Window = 3              Window = 5              Window = 7
Current     %Hit    %Miss   ∆k      %Hit    %Miss   ∆k      %Hit    %Miss   ∆k
0.2         58.33   4       1.61    12.37   3       2.84    10.23   12      3.56
0.4         55.00   0       0.45    37.00   0       0.81    36.17   6       0.94
0.6         68.00   0       0.48    47.00   0       0.56    41.00   0       0.75
0.8         75.00   0       0.38    56.00   0       0.46    53.00   0       0.67

Two common trends were observed among the results from the three interpolation schemes. First, the highest number of undetected defects (i.e. % miss) occurred in the detection of low impact defects. A 12% miss rate is found in the detection of defects with dark current = 0.2 using a 7x7 ring with window = 7. As the magnitude of the dark current increases, the defects become more visible, and the hit rate (%hit) also increases. Secondly, the window length has a great impact on the accuracy of the detection. The window length of 3 images achieved the highest hit rate for all interpolation schemes. In contrast, the window length of 7 images suffered in accuracy, especially in the detection of low impact defects, where 10-12% of defects are not detected. The long window accumulates information from more images, so the small changes from the low impact defects are more difficult to detect over the long accumulation. Shown in Figure 5-8 is the plot of Prob(Good|yk) for a simulated defect with dark current = 0.2. The accumulated probability is calculated over a sequence of images using window lengths of 3, 5 and 7 images. As demonstrated in Figure 5-8(b) for a window length of 5 and (c) for a window length of 7, both plots show a smooth Bayes accumulation; thus any small fluctuations, such as interpolation errors or the small signal of low impact defects, are not emphasized. On the other hand, with a window length of 3, the plot in Figure 5-8(a) shows more fluctuation in the Bayes accumulation. With fewer images


used in the accumulation, small changes are emphasized, so the low impact defects are more detectable in this setting. As reflected in the results, the window of 7 images suppresses small changes, the detection error is large and the miss rate is high. Although the window length of 5 has a lower miss rate than the window of 7 images, its detection error ∆k is in most cases higher than that of window 3.

[Three plots of Prob(Good|y) (probability) versus the n-th image: (a) window length 3, (b) window length 5, (c) window length 7.]

Figure 5-8. Plot of Prob(Good|y) vs. image number in the windowing test.

Despite the impact of window length, in most cases the 5x5 ring averaging achieved the best hit rate among the three interpolation schemes. With the short window setting (3 images), the 5x5 ring achieved a hit rate of 73% for defects with dark current = 0.8/s, with no misses. Although the 7x7 ring scheme with the same window length has a 75% hit rate, its average image error (i.e. ∆k) and miss rates are also higher, which suggests this ring size suffers from large interpolation errors, as shown in Table 5-1.
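A minimal sketch of the ring-average estimate is given below in Python/NumPy, assuming a single colour plane and ignoring border handling. It simply averages the pixels on the perimeter of an n x n neighbourhood while discarding the inner pixels most affected by demosaicing; the exact masks used in this work are the ring masks introduced earlier.

    import numpy as np

    def ring_average(channel, x, y, size=5):
        # Average the pixels on the outer ring of a size x size neighbourhood
        # centred on (x, y); the inner pixels are skipped because demosaicing
        # spreads the defect into the nearest neighbours.
        half = size // 2
        block = channel[x - half:x + half + 1, y - half:y + half + 1].astype(float)
        ring = np.ones((size, size), dtype=bool)
        ring[1:-1, 1:-1] = False            # keep only the perimeter
        return block[ring].mean()

    # estimate = ring_average(green_plane, x, y, size=5)   # 5x5 ring estimate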

In the second part of the simulation, we test the performance of the detection at different image exposure durations. For this simulation the exposure time (i.e. shutter speed) of the tested images is kept constant while the


simulated hot pixels have dark currents ranging between 0.2 – 0.8/s. The tested exposure duration settings were 0.06, 0.125, 0.25 and 0.5 s. Again 10 simulated defects are injected into a 1 MP sensor and the simulation is repeated on 10 simulated sensors. The average hit and miss rate results from the detections are summarized in Table 5-5, Table 5-6, and Table 5-7 for the three interpolation ring sizes.

Table 5-5. Performance of Bayes detection at fixed exposure (Intp: 3x3).

Exposure    Window = 3              Window = 5              Window = 7
time (s)    %Hit    %Miss   ∆k      %Hit    %Miss   ∆k      %Hit    %Miss   ∆k
0.060       42.22   10      3.71     5.95   16      5.27     6.94   28      5.70
0.125       64.00   1       0.75    39.39   1       1.44    17.98   11      3.03
0.250       70.71   0       0.39    52.00   0       0.85    43.62   6       1.01
0.500       77.00   0       0.34    58.00   0       0.39    53.00   0       0.58

Table 5-6. Performance of Bayes detection at fixed exposure (Intp: 5x5 ring).

Exposure    Window = 3              Window = 5              Window = 7
time (s)    %Hit    %Miss   ∆k      %Hit    %Miss   ∆k      %Hit    %Miss   ∆k
0.060       40.22   8       3.80    10.47   14      5.27     4.17   28      6.22
0.125       66.00   1       0.66    42.00   0       1.44    22.47   11      2.28
0.250       72.73   0       0.34    53.00   0       0.85    47.00   0       0.86
0.500       83.00   0       0.20    61.00   0       0.39    62.00   0       0.45

Table 5-7. Performance of Bayes detection at fixed exposure (Intp: 7x7 ring).

Exposure    Window = 3              Window = 5              Window = 7
time (s)    %Hit    %Miss   ∆k      %Hit    %Miss   ∆k      %Hit    %Miss   ∆k
0.060       32.18   13      5.40     6.41   22      5.88     4.23   29      6.83
0.125       57.29   4       1.65    31.96   3       2.22    18.18   12      2.58
0.250       62.00   0       0.39    47.00   0       0.78    29.17   4       1.07
0.500       77.00   0       0.25    48.00   0       0.53    45.00   0       0.67

Observing the miss rates from all three interpolation schemes, the highest miss rate is ~28%, for detection from short exposure (0.06 s) images. This agrees with the visibility of hot pixels depending on the exposure level. At short


exposure times, the faults appear close to the noise level, making it hard to distinguish between the noise signal and the defect. With the long window, ~28 out of 100 defects were not detected by the Bayes detection algorithm. This result suggests that if the image dataset consists mostly of short exposure images, the accuracy of the defect date will suffer a large error.

The impact of window length on the performance of the detection was similar to that observed in the previous simulation. The long window length settings suffered in accuracy, especially when detecting faults from short exposure time images, where the hit rate is below 10%. The use of a short window length improves the detection drastically, to a 40% hit rate. The brightness of a hot pixel is a function of the exposure duration, so hot pixels appear close to the noise signal when captured at short exposure durations. Accumulation with a long window tends to neglect the small changes from the weak hot pixels, which results in a high miss rate, while small changes are emphasized in a short window. This suggests that confining the accumulation to a subset of images is crucial for detecting low impact defects and defects in short exposure time images. The choice of the interpolation mask also affects the performance of the detection. Again, in most cases, the 5x5 ring gives the highest hit rate. Both the 7x7 and 3x3 rings suffered from interpolation errors or spreading of the defects, which is reflected in their lower hit rates.

In the last part of the simulation, we model what is expected of a real image dataset, where both the dark current and the exposure time vary. The dark current of the simulated defects ranges from 0.2 – 0.8/s


and the exposure time of each image varies between 0.06 – 0.5 s. Again we simulate 10 sensors, each with 10 simulated standard hot pixels randomly scattered over the sensing area. The average detection result from the 10 simulated sensors is summarized in Table 5-8.

Table 5-8. Performance of Bayes detection using various interpolation schemes.

Interpolation   Window = 3              Window = 5              Window = 7
Scheme          %Hit    %Miss   ∆k      %Hit    %Miss   ∆k      %Hit    %Miss   ∆k
3x3             66.00   0       0.40    35.71   2       0.84    44.33   3       0.72
5x5             78.00   0       0.27    51.00   0       0.64    46.46   1       1.05
7x7             65.00   0       0.49    41.00   0       0.96    36.73   2       1.13

Consistent with the previous simulation results, the 5x5 ring averaging has the best average hit rate compared to the 3x3 and 7x7 rings. The impact of demosaicing on the nearest neighbours is reflected in the large error of the 3x3 ring averaging. Although the 7x7 averaging omits the immediate neighbour pixels, the large ring size fails to give an accurate pixel estimate because pixels at that distance do not reflect the actual pixel value well. As seen from Table 5-1, this ring mask suffers the largest interpolation errors. The second factor that affects the performance of the detector is the length of the sliding window. From the trend observed in the previous simulations, the short window has the advantage in identifying low impact defects. The results in Table 5-8 again show that the window of 3 images has the highest hit rate, 78%. As discussed in section 4.2, in-field defects are mostly created with low damage. Hence, our simulations suggest that the optimal setting of the detector is the 5x5 ring averaging scheme with a window length of 3.


It is important to note that our simulations are based on standard hot pixels. The additional offset in a partially-stuck hot pixel enhances the brightness of the fault and reduces the impact of exposure duration on detection. Hence, the detection accuracy for such defects should be higher.

The main problem suffered by this algorithm is that its performance depends strongly on the parameters of the available images. In other words, for periods when images are captured frequently there are more samples to test for defects, so the date error in detection remains small. If images are seldom collected, the detection date error will be large, as we have seen with the calibrations. In addition, the appearance of the defects is highly affected by the camera settings mentioned before, the exposure time and the ISO setting. Thus when images are captured on a sunny day (short exposure and low ISO), defects are not likely to be visible, which can delay our detection of them. By comparison, long exposure and high ISO pictures enhance the defects and provide better conditions for detecting them.

5.3 Experimental results

With the defect trace algorithm, we can extract the defect development date by analyzing the regular photos captured by the imager for the first appearance of each defect. Since photos are captured on a regular basis, they provide more frequent samples of the sensor state than the yearly calibrations. However, due to privacy issues, we only have access to an image dataset from 7 of the 21 cameras shown in Table 4-1. The specifications of the cameras


available are listed in Table 5-9. This subset of cameras consists of 5 APS and 2 CCD imagers of size ~23 x 26 mm, with an average pixel size of 6.5 µm.

Table 5-9. Specification of test cameras.

Camera   Sensor Type   Sensor Size (mm)   Pixel Size (µm)   Age (years)
A        APS           22.7 x 15.1        7.38 x 7.36       6
B        APS           36.0 x 24.0        6.26 x 6.26       1
C        APS           22.7 x 15.1        7.38 x 7.36       4
D        APS           22.2 x 14.8        5.14 x 5.14       2
N        CCD           23.6 x 15.8        5.87 x 5.87       2
P        CCD           23.7 x 15.5        7.69 x 7.57       2
Q        APS           22.5 x 15.5        6.30 x 6.30       1

Previously, the simulation results demonstrated that the 5x5 ring averaging was able to compensate for the spreading of defects due to demosaicing. In addition, the window length of 3 provides the optimal setting for detection from short exposure images and of low impact defects. Thus this combination is used as the setup for the Bayes detection algorithm in the following experiment.

Based on the defects identified from the calibrations at ISO 400, we applied the Bayes detection algorithm to find the first image in which each defect clearly appears; the defect date is then read from the meta-data of that image. The 7 tested cameras contributed well over 30,000 images, so it was not feasible to visually inspect each image for the presence of the defects. Instead we compare against the growth rate measured with the calibration method, as shown in section 4.5.1. Indeed it is common with a digital camera to take tens to hundreds of pictures within a day, so there is a stream of images capturing the state of the sensor within a relatively short time frame.


The defect dates identified from the Bayes detection algorithm are plotted

against the sensor age as shown in Figure 5-9. Similar to the plot generated

from the calibration methods, the growth of defects from the Bayes detection also

follows a linear trend. Hence a linear regression fit function is used to measure

the defect growth rate. The fitted defect rates for each imager using the two

methods are summarized in Table 5-10.

Table 5-10. Manual calibration and Bayes detection growth rate comparison at ISO 400.

                Defect growth rate (defects/year)
Camera   Manual Calibration   Bayes Detection   Diff (%)
A        3.57 ± 0.16          3.66 ± 0.11         2.49
B        7.35 ± 0.15          7.47 ± 0.36         1.62
C        1.20 ± 0.10          1.42 ± 0.15        16.79
D        3.68 ± 0.59          3.49 ± 0.43        -5.90
N        10.50 ± 0.00         10.60 ± 2.34        0.95
P        3.81 ± 0.00          5.27 ± 0.18        32.16
Q        0.77 ± 0.00          1.09 ± 0.00        34.41
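The growth-rate estimate itself is a straightforward least-squares fit of the cumulative defect count against camera age. A sketch is given below in Python/NumPy; the data values are illustrative placeholders, not measurements from any of the tested cameras.

    import numpy as np

    # Ages (in months) at which each defect first appears, and the cumulative count.
    ages_months = np.array([3.0, 9.0, 14.0, 22.0, 30.0])     # illustrative only
    cumulative = np.arange(1, len(ages_months) + 1)

    # Linear fit: cumulative count = rate * age + intercept.
    slope, intercept = np.polyfit(ages_months, cumulative, 1)
    defects_per_year = slope * 12.0
    print("fitted defect growth rate: %.2f defects/year" % defects_per_year)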


[Plots of total number of defects versus camera age (months) for (a) camera A, (b) camera B, (c) camera C, (d) camera D, (e) camera N, (f) camera P, (g) camera Q.]

Figure 5-9. Defect growth rate at ISO 400 with calibration and Bayes search identification.


The defect rates measured by the two methods are in close agreement; the largest differences, ~30%, are found in cameras P and Q. For camera P, Figure 5-9(f), and camera Q, Figure 5-9(g), the defect growth rate from the calibration is based on the result of a single test, and that single calibration was taken when these cameras were 2-4 years old. Thus the dates of any defects that developed at an early stage of the sensor will suffer a large estimation error (i.e. up to 4 years). The Bayes detection for camera P in Figure 5-9(f) demonstrates that, by analyzing the historical images, this method is able to recover information that was missing from the calibration. Hence the defect rate measured by the Bayes detection will be more accurate. The comparison shows that, in all but one case (camera D, where the difference between the measured rates is still within the regression error), the calibration gives an underestimate of the defect rate. This is expected because of the large time separation between calibrations.

The disadvantage of the Bayes detection is that it requires access to the images taken by the camera, which is not always available. Both cameras P and Q suffer from this problem, as we only have access to a subset of their images. Thus, as seen for camera P, some defects were not found due to missing samples from the image set. Having access to the images captured by the camera is important, as each image records the state of the sensor.

This is also shown in Figure 5-9(b) for camera B, a 1-year-old sensor. The rate estimated with the manual method is also based on a single calibration, but since the time gap between the purchase date and the first calibration is less than


1 year, the difference between the two estimated rates is less than 2%. In addition, the large set of images captured in this short period potentially increases the accuracy of the Bayes detection.

The number of defects found on a sensor depends on the ISO setting used to perform the calibration. In chapter 4, we showed that calibration at high ISOs reveals some low impact defects which were not detectable above the noise in the low ISO pictures. As more defects are found at the higher ISOs, the defect rates will change. In the following experiment we applied the Bayes detection to the same set of cameras, given the defects found at ISO 800 and 1600. Since only 4 of the 7 cameras had been calibrated at these ISO levels, this test is only performed on a subset of imagers (i.e. cameras A, B, C and D). The estimated rates for these cameras are summarized in Table 5-11 for ISO 800 and Table 5-12 for ISO 1600.


Table 5-11. Manual calibration and Bayes detection growth rate comparison at ISO 800.

                Defect growth rate (defects/year)
Camera   Manual Calibration   Bayes Detection   Diff (%)
A        3.35 ± 0.26          3.86 ± 0.11        14.15
B        11.38 ± 1.37         12.12 ± 0.62        6.30
C        2.10 ± 0.23          2.02 ± 0.17        -3.88
D        3.86 ± 0.39          5.62 ± 0.32        37.13

[Plots of total number of defects versus camera age (months) for (a) camera A, (b) camera B, (c) camera C, (d) camera D.]

Figure 5-10. Defect growth rate at ISO 800 with calibration and Bayes search identification.


Table 5-12. Manual calibration and Bayes detection growth rate comparison at ISO 1600.

                Defect growth rate (defects/year)
Camera   Manual Calibration   Bayes Detection   Diff (%)
A        3.83 ± 0.07          4.84 ± 0.18        23.30
B        18.31 ± 0.77         17.77 ± 0.47       -2.99
C        3.80 ± 0.04          4.04 ± 0.19         6.12
D        7.71 ± 0.78          9.32 ± 0.28        18.91

[Plots of total number of defects versus camera age (months) for (a) camera A, (b) camera B, (c) camera C, (d) camera D.]

Figure 5-11. Defect growth rate at ISO 1600 with calibration and Bayes search identification.

The number of defects identified at ISO 800 and 1600 is significantly higher than at ISO 400 (Table 5-10), so the measured defect rates increase. The most significant increase is in camera B, where the rate doubles from 7.47 defects/year at ISO 400 to 16.37 defects/year at ISO 1600. Due to the limited number of calibrations taken at these ISO levels, the difference


between the rates estimated by the two methods is greater than at ISO 400. Again, the large time gap between calibrations increases the error in the defect dates estimated with the calibrations. As shown for camera A in Figure 5-11(a), although several calibrations were used to approximate the defect rate, there was a ~4 year span in which no calibrations were taken at ISO 1600. This “gap” is due to the fact that the calibrations for this research did not begin until the camera was 4 years old. Hence, the defects that developed within those 4 years will be incorrectly dated by the calibration. With the Bayes detection, images taken continuously can fill in the information missing from the calibration, retroactively covering the time before the calibration tests began. Indeed, this shows how the defect development history can be recovered using the Bayes algorithm. Thus the rate measured using the Bayes detection should more closely resemble the real temporal growth of defects.

Despite the fact that different techniques were used to approximate the defect rates, both methods suggest that faults develop continuously and that their number increases linearly with time. With more defects found at the higher ISOs, the same trend was observed. A linear growth indicates that in-field defects are not likely caused by a single traumatic event or by material degradation; rather, the defect mechanism is a continuous source. Material-related defects usually develop as local clusters, both in space and in time. Both APS and CCD sensors show an increase of defect count over time, suggesting the defect source is not related to the sensor architecture. Rather, these imagers are continuously exposed to the same random defect source.


5.4 Summary

Images captured by the cameras can be used to trace back the development of defects while the imagers operate in the field. In this chapter, a Bayesian recursive function was presented to automatically trace the first appearance of defects in the historical image dataset. The image-wide interpolation errors serve to provide statistics that measure the presence of defects. As part of this, the interpolation scheme is used to provide an estimate of a good pixel's output. To avoid inaccurate interpolation estimates caused by defect spreading in color images, a ring mask is used. In the ring interpolation scheme, the nearest neighbour pixels are discarded, as these pixels are most affected by demosaicing.

The Bayesian function accumulates statistics over a sequence of images to measure the likelihood of a pixel being in a hot state. Long accumulations cause the statistics to saturate; hence, a sliding window is used to confine the accumulation to the n most recent images. In addition, false detections due to large interpolation errors can be corrected with the post-correction procedure. The set of simulations performed with the Bayes detection algorithm showed that the visibility of hot pixels limits the accuracy of the detection. Testing with various settings demonstrated that a window length of 3 images is the optimal setting to detect low visibility defects. In addition, the 5x5 ring averaging proved to be the best tradeoff for good pixel estimation while avoiding the defect spreading problem.


The defect rates measured using the Bayes detection are in close agreement with those from the calibration test method. The comparison between the two methods showed that the calibration method usually underestimates the defect rates due to the large time spans between calibrations. With the Bayes detection, the frequently taken images can fill in the information missing from the calibrations. Hence the rates measured by this detection algorithm will be a more accurate measure of the true defect rates.


6: THE IMPACT OF PIXEL AND SENSOR DESIGN ON DEFECTIVE PIXELS

There are four main trends developing in the design of new digital imagers. The first is the choice of APS or CCD type sensors. The early imager market was dominated by CCD sensors, while APSs were mostly used in low performance imaging devices. In recent years, APSs have been replacing CCDs as the large area sensors in many DSLR cameras.

The second trend is the expanded ISO range. Before 2008 most DSLRs had a usable range of up to ISO 1600; in the newer camera models, however, the typical top ISO has increased to 6400, and some high-end DSLRs have a usable range up to 25,600. The increased ISO range permits natural-light photography and reduces the need for long exposures under low light conditions.

The third trend is the change in sensor size. The divergent demand for both large and small area sensors has increased drastically, driven by DSLRs (i.e. big sensors) and cellphone cameras (i.e. small sensors). As reported by CIPS[44], although the production of PS cameras is much higher than that of DSLRs, the relative growth of DSLRs per year is actually higher. The recent drive for better image quality has resulted in more full frame sensors being introduced into commercial high-end cameras. Since the image sensor was first introduced as an embedded device in cellphones, the demand for small sensors has increased drastically. A market report published in 2007 by


Tessera[45] showed that in 2006 over 600 million mobile phones had a built-in camera, and this trend continues to increase. Since most cellphones are embedded with more than one camera, the Tessera report showed that mobile phone devices dominated 53% of the sensor market.

The last trend is the increase in the number of pixels on the sensor. From recent data collected by CIPS[44], the breakdown of megapixel counts on sensors for each year is shown in Figure 6-1.

[Bar chart of the number of cameras manufactured (millions) per year from 2001 to 2009, broken down by pixel count: less than 2 MP, 2-3 MP, 3-6 MP, 6-8 MP, and over 8 MP.]

Figure 6-1. Megapixel design trends in digital cameras, 2001 to 2008.

The number of pixels found in commercial imagers has increased from 250,000 pixels (2001) to over 10 MP (2010), and some high-end camera systems have up to 21 MP in DSLRs and 50 MP in medium format (Hasselblad). The increase in pixel number allows the captured images to have a higher resolution, so the printed images can be made larger. When


the sensor size remains the same, an increase in pixel number implies a shrinkage of the pixel dimensions.

In the previous sections we observed two trends from the analysis of defect rates in various imagers. First, the CCD sensors appear to have a higher defect rate compared to the APSs. This suggests that CCD sensors might be more sensitive to the defect source due to the different designs of the two sensors. Secondly, for the same type of sensor, the larger area sensors seem to have a higher defect rate. At the same time, newer cameras which use small sensors and reduced pixel sizes also appear to have a higher defect rate per sensor area. These two observations point to a possible impact on defect rates from the four new imager design trends. In the following sections, we explore each of the design trends in detail and analyze the impact it has on the defect development rate.

6.1 Impact of sensor design trend on defects on imagers

New commercial digital cameras are improving in different respects, such as the choice of sensor type (i.e. CCD or APS), ISO range, sensor size and pixel number. From our defect analysis, we have correlated some of these changes with the impact of defects on the sensors. In this study, we examined the defects from three classes of imagers: cellphones, PSs and DSLRs. Small sensors (~20 mm^2) are used in the embedded cellphone and PS cameras, mid-size sensors (~300 mm^2) are used in entry and mid-range DSLRs, and the full frame sensors (864 mm^2) are found in professional DSLRs.


The typical specifications for each of the three types of sensors are listed in Table 6-1.

Table 6-1. Average sensor and pixel sizes from tested cameras.

Camera Type        Sensor Type   Sensor Size (mm)   Sensor Area (mm^2)   Pixel Size (µm)   Pixel Area (µm^2)
Cellphone          APS            5.40 x 4.28        23.11               2.20 x 2.20        4.84
Point-and-shoot    CCD            6.08 x 4.55        27.66               2.22 x 2.22        4.93
Mid DSLR           APS           22.54 x 15.01      338.36               6.10 x 6.10       26.42
Mid DSLR           CCD           23.63 x 15.65      369.81               7.01 x 6.78       47.53
Full frame DSLR    APS           36.00 x 24.00      864.00               6.26 x 6.26       39.19

Table 6-2 summarizes the average defect rates from the three classes of cameras at various ISOs. The average defect rates of the DSLRs are collected from Table 4-10 and Table 4-11 and are categorized by sensor type and size. The temporal defect growth rates of the cellphones are collected from Table 4-12, and those of the PS cameras from Table 4-13.

Table 6-2. Average defect rate for various sizes of sensors.

                             Defect rate (defects/year)
ISO level   Cellphone   Point-and-shoot   Mid DSLR   Mid DSLR   Full frame DSLR
            (APS)       (CCD)             (APS)      (CCD)      (APS)
100         NA          1.08              1.34        1.81       2.18
200         NA          1.15              1.64        2.86       4.36
400         3.55        6.88              1.82        5.75       4.68
800         NA          NA                2.49        7.94       6.69
1600        NA          NA                4.45       12.50      11.16
3200        NA          NA                NA         27.88      12.24
6400        NA          NA                NA         NA         16.13
12800       NA          NA                NA         NA         24.41

6.1.1 Defect count on APS vs. CCD

During the early development of digital imagers, CCDs were the main sensors employed in this application. In 1990, with the improvements in CMOS technology, the APS sensors gained more attention and were


recognized as one of the mainstream imaging devices. The CCD sensors, being the more mature technology, require dedicated process lines. This sensor is usually the preferred choice for medium quality imaging (i.e. DSLRs, PS, and scientific imaging). However, in recent years, with the tremendous improvements in APS sensors, many commercial DSLRs are moving toward APS sensors. Also, since the APS sensors are CMOS compatible, they are favoured for many embedded applications such as cellphone devices.

In terms of defects found on these sensors, our study observed a significant difference in defect count between the two pixel types. In particular, most of the tested CCD imagers tend to have a higher defect count than APS sensors of the same age. As shown in Table 6-1, the average sensor area of the mid-size APS is 338.36 mm^2, which is close to that of the mid-size CCD sensors at 369.81 mm^2. As observed from Table 6-2, the defect rates measured at ISO 400 from our collection of DSLRs are 5.75 defects/year for the CCD sensors and 1.82 defects/year for the APS sensors. Although the sensing areas of the two imagers are nearly the same, the defect rate of the CCDs is ~3x higher than that of the APS imagers at all ISO levels.

By comparison, in the full frame APS sensors (864 mm^2) the sensing area is 2.3x larger than that of the mid-size CCD sensors. However, at ISO 400, the defect rate of the CCD sensors is still 1.2x higher than that of the full frame sensors (4.68 defects/year). The high defect rate of the CCDs suggests this sensor might be more sensitive to the defect source. In terms of the average pixel area shown in Table 6-1, the area of the CCD pixels is 2x larger than that of the APS pixels. In


addition, the fill factor of the CCD pixels (~70-90%) is ~2-3x larger than that of the APS pixels (~25-30%). This is in agreement with the high defect rate observed from CCD sensors. Hence, the higher defect rate indicates that the larger photosensitive area of the CCD pixel increases its exposure to the defect source. Since the occurrence of cosmic rays scales with area, this trend is in agreement with cosmic rays being the source mechanism.

The defect rate of CCDs will have a greater impact as the sensor size increases. If we scale the defect rates with the sensor area, then at ISO 3200 the 27.88 defects/year on the mid-size CCD sensor would increase to 64.1 defects/year on a full frame sensor (i.e. a scaling factor of 2.3x). Moreover, this approximation is for sensors operating in the terrestrial environment, where the radiation level is minimal. Many large area CCD imagers are employed in space applications, where the radiation level is 300x higher. Thus the expected defect rates in the space environment are much higher, and the usable lifetime of these sensors is severely limited by the high defect rate.

6.1.2 Impact of ISO trend on defects

The second trend observed in the newly released cameras is the expanded ISO range. As the sensor technologies improve, the noise level is reduced and thus the usable ISO range expands. This trend is especially noticeable in the mid and high-end DSLRs. From our study we have shown that the hot pixel intensity scales approximately with the ISO level. Thus doubling the ISO will double the


intensity of each fault. One of the main trends shared by all tested cameras is the increase in defect count when the calibration is performed at higher ISO settings. The improvement in noise level enables a clearer distinction between the background noise and the defects at the extended ISO levels. Hence, calibration at these higher ISO ranges can reveal more low impact defects. In fact, the amplification of the dark offset is most significant: at ISO 400, 46.3% of the defects are classified as partially-stuck hot pixels, but at ISO 1600 the number increases to 71.2%. This is important because partially-stuck hot pixels, unlike the standard ones, affect the sensor at very short exposure durations. Our analysis has pinpointed that many of these extra hot pixels are created with low damage, and the increased ISO gain causes these low impact defects to become more prominent. In addition, the number of saturated defects at these high ISO levels creates a major impact on the image quality.

In the analysis of defect rates summarized in Table 6-2, we observed an increase in the defect rates as more low impact defects were found at the higher ISO levels. Hence the rates measured over the expanded ISO range provide a closer approximation to the true defect development rate for all strengths of defects.

6.1.3 Defect growth rate vs. sensor area

The third trend in the new cameras is the change in sensor size. In recent years, full-frame sensors have been used in DSLRs to match the size of the traditional 35 mm film. This large area sensor provides more vibrant image quality and better operation in low-light conditions. However, at the same


time, the number of defects found on the sensor will also increase. On the other hand, a rising class of imagers, the embedded cellphone cameras, has dominated the market for small sensors.

If all sensors are exposed to the same radiation level and the pixel sizes are constant, we would expect the sensor with the largest sensing area to develop the largest number of defects. The defect rates reported in Table 6-2 show that at ISO 400 the rate for the cellphone cameras, which have the smallest APS sensor (23.11 mm^2), is 3.55 defects/year. This defect rate is much higher than the 1.82 defects/year of the mid-size APS (338.36 mm^2) but comparable to the 4.68 defects/year of the full-frame APS (864 mm^2). The sensor area of the cellphone camera is 6.8% of the mid-size APS sensor and only 2.8% of the full frame sensor. If we scale the defect rate of the cellphone camera (3.55 defects/year) by the sensing area, it translates into 51.98 defects/year on a mid-size DSLR and 132.72 defects/year on a full-frame sensor. The expected rates from scaling with the sensor area are much higher than the observed rates for DSLRs reported in Table 6-2. It is important to note the pixel size difference between these three sensors: the small sensors have a pixel size of 2.2 x 2.2 µm whereas the mid and large area sensors have a pixel size of ~6.2 x 6.2 µm. This suggests a possible impact from the reduction of the pixel size.

Between the two APS DSLR classes, the sensing area of the mid-size DSLRs is only 39% of that of the full frame sensor. If the defect rate scales proportionally with the sensor area, we would expect the defect rate on a full frame sensor to be 2.55x that of the mid-size DSLRs. Taking the defect rates


from Table 6-2 for the mid-size DSLRs (APS) measured at various ISOs, we can calculate the expected defect rates of the full frame sensor by scaling these measurements with the 2.55x factor, as shown in Table 6-3. The calculated rates are compared to the observed full frame defect rates collected from Table 6-2 and show close agreement.

Table 6-3. Comparison of APS DSLR defect rates at various ISOs scaled with sensor area.

Defect Rate (defects/year)   ISO 100   ISO 200   ISO 400   ISO 800   ISO 1600
Mid-size DSLR                1.34      1.64      1.82      2.49       4.45
Expected Full frame          3.42      4.18      4.64      6.35      11.35
Observed Full frame          2.18      4.36      4.68      6.69      11.16
Full frame Difference        44.29%    4.22%     0.86%     5.21%      1.69%

The results shown in Table 6-3 indicate that the expected rates calculated with the scaling factor closely resemble the observed rates measured from our tested full frame imagers. The area-scaled rates average only an 11.25% difference from the actual full frame rates. Unlike the cellphone cameras, the mid-size and full frame sensors have approximately the same pixel size. Thus this result shows that the defect rate scales with the sensor area when the pixel size remains the same. Hence a metric of defect rate per sensor area (i.e. defects/year/mm^2) should be used when comparing sensors.
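The area scaling used for Table 6-3 amounts to a single multiplicative factor; the short Python sketch below reproduces that calculation from the Table 6-1 areas and the Table 6-2 mid-size rates.

    mid_area, full_area = 338.36, 864.00          # sensor areas in mm^2 (Table 6-1)
    scale = round(full_area / mid_area, 2)        # ~2.55x

    mid_rates = {100: 1.34, 200: 1.64, 400: 1.82, 800: 2.49, 1600: 4.45}
    expected_full = {iso: rate * scale for iso, rate in mid_rates.items()}
    # Rounded to two decimals this reproduces the expected full frame row of
    # Table 6-3: 3.42, 4.18, 4.64, 6.35 and 11.35 defects/year.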

6.1.4 Defect growth rate vs. pixel size

The last design trend found among most of the new cameras is the increase in pixel count on the sensor. While the sensor sizes of the PS and mid-size DSLRs do not change much, the increase of pixel count on the imager will reduce the pixel dimensions. This shrinkage of pixel size will reduce


the sensing area of each pixel; hence the dynamic range and the signal-to-noise ratio will be reduced as well. In the previous section, we observed that the defect rates scale with the sensing area when the pixel size on the imager is nearly the same. Hence, we can scale the defect rates from Table 6-2 by the sensing area, as summarized in Table 6-4 for the various ISOs and camera types.

Table 6-4. Average defect rate per sensor area for all camera types at various ISOs.

                                Defect rate per sensor area (defects/year/mm^2)
ISO level   Cellphone      Point-and-shoot   Mid DSLR       Mid DSLR       Full frame DSLR
            (APS)          (CCD)             (APS)          (CCD)
100         --             3.90 x 10^-2      0.40 x 10^-2   0.49 x 10^-2   0.25 x 10^-2
200         --             4.16 x 10^-2      0.49 x 10^-2   0.77 x 10^-2   0.51 x 10^-2
400         15.4 x 10^-2   24.90 x 10^-2     0.54 x 10^-2   1.55 x 10^-2   0.54 x 10^-2
800         --             --                0.74 x 10^-2   2.15 x 10^-2   0.77 x 10^-2
1600        --             --                1.32 x 10^-2   3.38 x 10^-2   1.29 x 10^-2
3200        --             --                --             7.54 x 10^-2   1.42 x 10^-2
6400        --             --                --             --             1.87 x 10^-2
12800       --             --                --             --             2.83 x 10^-2

Again, in Table 6-4 the defect rate per mm^2 of the cellphone cameras is ~28x higher than that of the mid-size and full frame DSLR APS sensors. The defect rate of the small PS CCD sensors is 16x higher (at ISO 400) than that of the mid-size DSLR CCD sensors. This suggests a possible impact from the increase of pixel count or the shrinkage of pixel size.

A study on defect size by Dudas[32] showed that the estimated defect point size is very small. With the 367 isolated defects observed at ISO 1600, the estimated defect size is <0.04 µm, which is well within the 2.2 µm pixel size. Hence defects on the small sensors should be point sources, and the dark current magnitude should remain the same, independent of the pixel size.


However, the small APS pixels have less sensing area than the large pixels. Since the capacitance of the photodetector scales approximately with the sensing area, the output of the pixel to illumination remains roughly constant. The dark current magnitude of a given defect, however, does not change. Hence, when the pixel size shrinks, the sensitivity of the pixel to each dark current electron increases. This means that even weak hot pixel damage can cause a significant effect in small pixels. Assume that all pixels have the same efficiency and that the capacitance of the pixel scales proportionally with the sensing area, and recall the output of the photodetector from Equation (2-8). As demonstrated in Figure 6-2, if the pixel area is reduced by half, then the sensitivity of the small pixel to each electron will double.

Figure 6-2. Impact of dark current on large and small pixel.

This scaling factor acts like the ISO amplification factor m from Equation (3-3): shrinking the pixel dimensions increases the scaling factor m, and the defect parameters scale like Ioffset from Equation (3-4). Thus hot pixels that are considered low impact defects in the large pixels become more prominent in the small pixels when measured at the same ISO level.


Using this assumption, the average pixel area of the mid-size DSLR APS (26.42 µm^2) is 5.45x that of the small APS pixels (4.84 µm^2) in the cellphone cameras. Hence the defect rate observed at ISO 400 from the small APS sensor should be compared to the defect rate of the large pixel measured at ISO 2180 (close to ISO 1600 in our table). However, as shown in Table 6-4, the defect rate of the small APS pixel at ISO 400 (15.4 x 10^-2 defects/year/mm^2) is 12x higher than that of the large APS pixel at ISO 1600 (1.32 x 10^-2 defects/year/mm^2).
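A short worked calculation, using the pixel areas from Table 6-1, illustrates this equivalence; treating the area ratio as an ISO-like gain follows the argument above and is a simplifying assumption.

    large_pixel_area = 26.42     # um^2, mid-size DSLR APS (Table 6-1)
    small_pixel_area = 4.84      # um^2, cellphone APS (Table 6-1)

    # With capacitance taken to scale with pixel area, each dark-current
    # electron is amplified in the small pixel by the area ratio, much like
    # an extra ISO gain.
    gain_ratio = large_pixel_area / small_pixel_area     # ~5.45x
    equivalent_iso = 400 * gain_ratio                    # ~ISO 2180

    # Comparing the area-normalized rates from Table 6-4:
    small_at_iso400 = 15.4e-2    # defects/year/mm^2
    large_at_iso1600 = 1.32e-2   # defects/year/mm^2
    excess = small_at_iso400 / large_at_iso1600          # ~12x higher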

With the same kind of comparison, the pixel area of the CCD sensor used in DSLRs (47.53 µm^2) is 9.64x larger than the pixel area of the PS sensor (4.93 µm^2). Hence the defect rates measured at ISO 100 from the PS sensor should resemble the rates measured at ISO 800 from the DSLRs. This comparison is summarized in Table 6-5.

Table 6-5. Comparison of defect rate per sensor area between CCDs in PS and DSLR cameras.

PS (CCD)    Defect rate (defects/year/mm^2)   Defect rate (defects/year/mm^2)   DSLR (CCD)
ISO 100     3.90 x 10^-2                      2.15 x 10^-2                      ISO 800
ISO 200     4.16 x 10^-2                      3.38 x 10^-2                      ISO 1600
ISO 400     24.90 x 10^-2                     7.54 x 10^-2                      ISO 3200

The comparison of defect rates in Table 6-5 shows that the development of faults in the PS cameras is still 2-3x higher than in the DSLRs at the equivalent higher ISO levels. Both the small APS and the small CCD pixels show a higher defect rate. Hence, this finding indicates the impact of defects on small area pixels is more significant than a simple scaling of the pixel area.


Using the defect rates at ISO 400 collected from Tables 4-10, 4-11 and 4-12 for the DSLR, cellphone and PS imagers respectively, we scale these measurements by the sensor areas and then plot them against the pixel size, as shown in Figure 6-3.

[Plot of defect rate (/year/mm^2) versus pixel size (µm).]

Figure 6-3. Defect rate per sensor area vs. pixel size (ISO 400).

Visual inspection of the plot in Figure 6-3 shows that the defect rate increases rapidly as the pixel size is reduced. However, the defect rates do not scale linearly with the pixel size; in fact, the plot suggests a possible exponential increase of the defect rates as the pixel size goes down. In Figure 6-4 we show a semi-log plot of the defect rate per sensor area versus the pixel size.


[Semi-log plot of defect rate (/year/mm^2) versus pixel size (µm).]

Figure 6-4. Semi-log plot of defect rate per sensor area vs. pixel size.

This semi-log plot suggests a possible log-linear regression fit. However, the modest accuracy (R^2 = 0.68) suggests this is not the correct equation. Instead, a log-log plot is used, which is shown in Figure 6-5.

[Log-log plots of defect rate (/year/mm^2) versus pixel size (µm): (a) linear regression fit, (b) residuals from the regression.]

Figure 6-5. Logarithmic plot of defect rate per sensor area of all tested imagers.

The log-log plot of the defect rate versus pixel size shows a much stronger indication of a linear trend. Hence the linear regression fit function used in Figure 6-5(a) is


log(y) = log(A) + B·log(x) . (6-1)

On a linear scale, this regression fit function is simply a power function,

y = A·x^B . (6-2)

Table 6-6. Linear regression fit statistics on defects/year/mm^2 vs. pixel size.

A       B        R^2
0.989   -2.558   0.748

The regression statistics from the log-log plot are summarized in Table 6-6. The R^2, which measures the goodness of fit, is ~0.748. An R^2 close to unity indicates that the regression fit function is a close approximation to the observed values; hence the value of 0.748 indicates that the power function is a good fit. The residual plot in Figure 6-5(b) shows the deviations are nearly uniformly distributed about the fit, which strongly indicates the power law is a good equation for the data. The power function shows that the defect rate does not scale linearly with the pixel size; instead it increases as a power law as the pixel size decreases. The exponent of -2.56 suggests that the defect rate scales slightly faster than the pixel area.
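A minimal sketch of this fit is shown below in Python/NumPy. The data arrays are placeholders for the measured (pixel size, defect rate per mm^2) pairs plotted in Figure 6-5; fitting a straight line to the logarithms recovers log(A) and B of Equation (6-1), and the recovered power law can then be evaluated at other pixel sizes.

    import numpy as np

    # Placeholder (pixel size, defect rate) pairs standing in for the Figure 6-5 data.
    pixel_size = np.array([2.2, 2.22, 6.1, 6.26, 7.0])                 # um
    defect_rate = np.array([0.154, 0.249, 0.0054, 0.0054, 0.0155])     # defects/year/mm^2

    # Fit log(y) = log(A) + B*log(x)  (Equation 6-1).
    B, logA = np.polyfit(np.log10(pixel_size), np.log10(defect_rate), 1)
    A = 10.0 ** logA

    # On a linear scale this is the power law y = A * x**B (Equation 6-2),
    # which can be used to estimate the rate at other pixel sizes, e.g.:
    rate_at_2um = A * 2.0 ** B      # expected defects/year/mm^2 at a 2 um pixel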

As shown in Table 6-4, the defect rate per sensor area still indicates that the mid-size CCDs develop 3x more defects than the mid-size APS sensors. Hence, in the following plots, we separate the analysis by sensor type. The log-log plot of defect rate per sensor area versus pixel size for all tested APS sensors is shown in Figure 6-6, and for the CCD sensors in Figure 6-7. Again a


linear regression fit is used and the statistics from the fit function are summarized

in Table 6-7.

[Log-log plots of defect rate (/year/mm^2) versus pixel size (µm) for the APS imagers: (a) linear regression fit, (b) residuals from the regression.]

Figure 6-6. Logarithmic plot of defect rate per sensor area versus pixel size of all tested APS imagers.

[Log-log plots of defect rate (/year/mm^2) versus pixel size (µm) for the CCD imagers: (a) linear regression fit, (b) residuals from the regression.]

Figure 6-7. Logarithmic plot of defect rate per sensor area versus pixel size of all tested CCD imagers.


Table 6-7. Linear regression fit statistics on defect rate/mm^2 vs. pixel size.

Sensor Type   A       B        R^2
APS           1.866   -3.318   0.881
CCD           0.726   -2.044   0.807

All the points on the residual plots in Figure 6-6(b) and Figure 6-7(b) are randomly distributed on either side of the curve, again supporting the power law equation, and the data points in Figure 6-6(a) and Figure 6-7(a) lie close to the regression fit function. The R^2 values recorded in Table 6-7 for the APS and CCD are both ~0.8, modestly better than the fit for the combined imagers (Table 6-6). Since both the APS and CCD sensors show the same good regression fit with the power function, this strongly indicates that the defect rate increases as a power law with the shrinkage of pixel size. The power factor B estimated for the CCDs is ~-2.05, which shows that their defect rate scales approximately with the pixel area. However, the power factor B for the APSs is -3.318. This shows that scaling down the pixel size causes each APS pixel to become more sensitive to the defect source than the pixel area alone would suggest. Using the regression factors, we can calculate the defect rate/mm^2 of the APS and CCD at various pixel sizes, as summarized in Table 6-8.


Table 6-8. Estimated defect rate/mm^2 at various pixel sizes with the fitted power function.

              Pixel size (µm)
Sensor Type   2.0              3.0             4.0             5.0             6.0             7.0
APS           187.00 x 10^-3   48.70 x 10^-3   18.80 x 10^-3    8.95 x 10^-3    4.89 x 10^-3    2.93 x 10^-3
CCD           176.00 x 10^-3   76.90 x 10^-3   42.70 x 10^-3   27.10 x 10^-3   18.60 x 10^-3   13.60 x 10^-3
CCD/APS         0.94             1.58            2.28            3.02            3.81            4.6

The estimated defect rate/mm^2 at 6-7 µm pixel sizes shows that the CCDs will develop 3-4x more defects than the APS sensors. The fill factor of the CCD pixels is approximately 2-3x larger than that of the APS pixels, which is similar to the 3-4x difference in the defect rate. This indicates that the size of the photosensitive area is likely the cause of the defect rate difference for the CCD pixels. However, at the small pixel end (2 µm), the APS sensors are measured to have nearly the same defect rate/mm^2 as the CCD sensors. Although the large photosensitive area increases the radiation exposure of the large pixels, the shrinkage of the pixel size increases the sensitivity to each electron. This effect produces a drastic increase of the defect rate in the APS pixels but not in the CCD pixels. Hence the impact of defects on the small APS pixels becomes much more significant than on the small CCD pixels, suggesting that for even smaller pixels the APS may have a higher defect rate than the CCDs.

It is important to note this trend of reducing the pixel size, where the manufacturers are trying to increase the pixel count with smaller pixels on the small sensors (cellphone and PS). The drive for higher megapixel counts on mid-size and full frame DSLR cameras will cause the manufacturers to look at smaller pixels on the large area sensors as well. The impact of defects on these small pixels will be a


significant drawback for image quality. This power law relationship and the sensor area scaling suggest important tradeoffs for the sensor designers.

6.2 Chapter Summary

The design trends of new imagers are driven by demand in the commercial camera markets. The choice of CCD or APS sensor, the ISO range, the sensor size and the pixel count are designed to improve imaging performance while keeping production costs low. However, these design trends have neglected one important aspect: they affect the defects on the sensors. In this analysis we have shown that many of these design trends have an impact on the growth rate of defects on the sensor. In particular, the expanded ISO range continues to reveal more low impact defects and causes moderate defects to saturate. The analysis of the defect rates for various sensor sizes indicates that the rates scale proportionally with the sensor area if the pixel size is constant; hence the full-frame sensor will develop the most defects. Finally, observations from the small pixels show that defect rates scale as a power law with the pixel size. The regression fit function measures that the defect rate of the CCD sensors scales with a power factor of near 2, which is approximately the pixel area, whereas the defect rate of the APS scales at a higher rate, with a power factor of about 3. This analysis suggests that scaling the pixel size down to ~2 µm in an APS sensor will cause these pixels to become more sensitive to the dark current, while for the larger pixels (~6 µm) the CCD will show more defects.


7: MULTI-FINGER ACTIVE PIXEL SENSOR

Prior to 1980, the CCD was the dominant sensor technology used by most sensing devices. However, with the improvement of CMOS processing, the APS became one of the mainstream sensing technologies. The APS pixel opens new options with some attractive advantages: its compatibility with other CMOS processes makes it less expensive to fabricate and embeddable into other devices, and the single operating voltage and low power consumption have led to a wide range of applications. The two main photodetectors used by APS sensors are the photodiode and the photogate. While the photodiode based APS is the more commonly used technology, in this study we will focus on exploring how to improve the photogate APS.

As discussed in section 2.1.2, the photogate detector is simply a MOS capacitor with a poly-silicon gate deposited on the top surface. When incident light strikes the poly-gate, photons penetrate through the poly-silicon layer and can be collected in the silicon substrate. In the following section we explore possible alternative designs that enhance the sensitivity of the photogate by using a multi-finger gate on the detection area. A multi-finger design is composed of poly-silicon stripes spaced evenly over the surface. The multi-finger photogate has been proposed by Chapman and his graduate students[13] and other researchers[12]. In a study by La Haye[13], both the standard and multi-finger photogate APS pixels of size 5.4x5.4µm were fabricated with 0.18µm CMOS technology. Preliminary results from that study suggested that the multi-finger photogate structures had a higher sensitivity response compared to the standard photogate when the pixel was exposed to red light. In this thesis, we focus on the actual sensitivity measurements from the different multi-finger structures at various photon energies. In addition, we explore the concept of the fringing field to enhance photon collection in the substrate.

7.1 Multi-Fingered Photogate APS

The sensitivity of the photogate APS depends on the number of electron-hole pairs generated by the incident photons during the integration time. In a standard photogate APS, the absorption in the poly-silicon gate increases toward the short-wavelength end of the visible spectrum; thus the sensitivity near the blue is significantly weaker than in the red. Due to the absorption in the silicon material, photons penetrate to different depths, and the light intensity is characterized by

$I = I_o \exp(-\alpha x)$ ,  (7-1)

where $I_o$ is the initial light intensity at the surface, $\alpha$ is the absorption coefficient and $x$ is the penetration depth.

The absorption coefficient α varies with wavelength, as shown in Figure 7-1, where the absorption is ~10⁴–10⁵ cm⁻¹ in the visible spectrum. In general, the absorption increases as the photon energy increases; thus the intensity of blue light will be significantly weaker than that of red light when measured at the same penetration depth.
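As a minimal illustration of Equation (7-1), the sketch below compares how much of the incident intensity survives to a given depth for red and blue light. The absorption coefficients used are rough order-of-magnitude assumptions consistent with the trend in Figure 7-1, not measured data.

```python
import math

# Minimal sketch of Eq. (7-1), I = Io * exp(-alpha * x): compare how much of
# the incident intensity survives to a given depth for red vs. blue light.
# The absorption coefficients are illustrative order-of-magnitude values
# (assumptions) consistent with the trend in Figure 7-1, not measured data.
ALPHA = {"red (~630 nm)": 4e3, "blue (~470 nm)": 2e4}   # cm^-1, assumed

def transmitted_fraction(alpha_per_cm, depth_um):
    """Fraction I/Io remaining after penetrating depth_um of silicon."""
    return math.exp(-alpha_per_cm * depth_um * 1e-4)    # 1 um = 1e-4 cm

for colour, alpha in ALPHA.items():
    for depth in (0.1, 0.5, 1.0):                        # depths in um
        print(f"{colour}: {100*transmitted_fraction(alpha, depth):5.1f}% "
              f"left at {depth} um")
```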


Figure 7-1. Single silicon absorption coefficient vs. photon energy. (Data from Ref. [14])

Hence, the main drawback of this photodetector is the loss of collection at short wavelengths due to absorption by the poly-gate, a limitation set by the optical properties of the poly-silicon. The absorption characteristics of the poly-silicon depend on the full wafer fabrication process and are also affected by the oxide and silicon layers below the poly. Thus, depositing the poly-gate in isolation, such as on a glass substrate, will not reproduce the true optical characteristics of the films.

In a standard photogate APS as shown in Figure 7-2, the poly-silicon gate

is deposited over the entire detection area, thus the absorption of photons is

unavoidable. A fully depleted region is created and will extend over the entire

pixel.


Figure 7-2. Standard photogate photodetector.

If open areas are introduced in the poly-silicon gate, with the openings filled with a transparent insulator material as shown in Figure 7-3, then the loss of photons due to absorption can be reduced. In this multi-finger design, we estimate that, in addition to the depleted region forming under each poly finger, the gates will create a fringing field that reaches over to the open areas, thus providing a continuous potential well under the entire detection area.

Figure 7-3. Multi-finger photogate photodetector.

In a previous study [13], La Haye implemented both the standard photogate and three different multi-finger photogate APS layouts, as shown in Figure 7-4. In each multi-finger design, the photogate consists of a poly-ring divided by one or more poly-fingers: the poly-ring in Figure 7-4(b) contains 1 poly-finger, (c) 3 poly-fingers, and (d) 5 poly-fingers. The spacing between the inserted poly-fingers in each layout is summarized in Table 7-1. Figure 7-4 also provides a first estimate of the potential well formed under each poly-gate design. As the spacing between the poly-fingers decreases, the fringing field grows stronger. Thus we expect the depth of the well below the open areas to become more uniform and photon collection to be enhanced.

Figure 7-4. Standard and multi-finger photogate APS designs and expected potential wells [13]: (a) standard, (b) 1-finger, (c) 3-finger, (d) 5-finger.

Table 7-1. Multi-finger photogate APS poly-finger spacing [13].

Multi-finger structure   Spacing (µm)   % open area
1-finger                 2.54           59.30
3-finger                 0.91           42.50
5-finger                 0.37           25.80

The photogate APS pixels designed by La Haye[13] were implemented

using 0.18µm CMOS technology by the Canadian Microelectronic Corp. and

each inserted poly-finger is 0.72µm wide. The width of the poly-finger is set not

by the minimum geometry of the technology but by the design rule by which the

poly can be masked so that the metal-silicide is not deposited on the photogate

area.


In the previous work, the multi-finger photogates had been tested with only red light. In this work, the response of the photogates is investigated at multiple wavelength bands.

7.2 Experimental setup and sensitivity measure

To test the performance of the three different multi-finger photogate APS designs, we used a set of four LED arrays as our light source to illuminate each pixel. The LEDs are positioned at a fixed distance above a diffuser to produce uniform illumination. The chip is placed at a fixed distance below the diffuser, directly below the LEDs. Figure 7-5 shows the apparatus. To ensure the tested pixel is not affected by other light sources, the entire setup is enclosed in a dark box.

Figure 7-5. Experimental setup.

The operation of the pixel is controlled by a computer running LabView, which supplies the power, the signal integration and transfer gate signals, the row/column readout and the timing control to the APS chips. The photocurrent readout from each pixel is converted to a voltage by a current-to-voltage converter, and the computer records the data through the data acquisition controller.

7.2.1 LED control circuit and calibration

In our experiment, four different colors of LEDs (red, yellow, green, and blue) are used so that we can test the response of the multi-finger pixels at different photon energies. The dominant wavelengths for each of the LED colors are listed in Table 7-2 and the spectral plot is shown in Figure 7-6.

Table 7-2. LED colors and dominant wavelengths.

LED color   Peak wavelength (nm)   Energy (eV)   Spectral width (nm)
Red         631                    1.97          20
Yellow      587                    2.11          15
Green       525                    2.36          35
Blue        470                    2.64          25

Figure 7-6. Relative intensity vs. wavelength for the red, yellow, green and blue LEDs.

The intensity of the LEDs is controlled by varying the input voltage to an operational amplifier connected as a voltage-to-current converter, as shown in Figure 7-7. The feedback to the op-amp holds the reference voltage across the load resistor; thus, the current through the LED is given by

$I_c = \frac{V_{ref}}{R_{Lim}}$ .  (7-2)

Figure 7-7. Voltage-to-current converter.

The light intensity from the LED is calibrated using a Field-Master light power meter. The photodiode sensor of the light power meter is positioned at the same location as the chip to measure the light power density of each LED set. All measurements with the light power meter are compensated for the different LED colors. By stepping the input voltage from 0 to 6.24V, we determine the amount of illumination at each input voltage; the calibration curve for each LED set is shown in Figure 7-8.

Figure 7-8. Input voltage vs. illumination intensity (light power density, fW/µm²) for each LED set.


The illumination of the LED increases linearly with current; however, due to non-ideal characteristics in the circuitry, the feedback voltage from the op-amp deviates slightly from the input voltage. This creates a non-linear region in the calibration curves, most noticeable for the red and yellow LEDs, where the measured illumination is higher than expected. To compensate for the deviation, the calibration plots are curve-fitted with a linear fit function, ignoring the data where the op-amp misbehaved. The linear fit function is then used to correct the sensitivity curve.
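A minimal sketch of this correction is shown below, assuming synthetic calibration data: the linear fit simply excludes the low-voltage region where the op-amp feedback deviates. The arrays and the cut-off voltage are placeholders, not the measured calibration values.

```python
import numpy as np

# Hedged sketch of the calibration correction described above: fit a straight
# line to the (input voltage, light power density) points while ignoring the
# region where the op-amp feedback misbehaves.  The arrays and the cut-off
# voltage are made-up placeholders, not measured values.
v_in = np.linspace(0.0, 6.24, 14)                        # input voltages (V), assumed
p_meas = 50.0 * v_in + np.where(v_in < 1.0, 8.0, 0.0)    # fake non-linear region

good = v_in >= 1.0                                       # exclude misbehaving region
slope, intercept = np.polyfit(v_in[good], p_meas[good], 1)

def corrected_power(voltage):
    """Light power density (fW/um^2) predicted by the linear calibration."""
    return slope * voltage + intercept

print(f"fit: power = {slope:.1f} * V + {intercept:.1f}")
```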

7.2.2 Photogate sensor performance measures

The performance of each pixel can be measured by analyzing its characteristics over a range of illumination power until the pixel reaches saturation. The light intensity is different for each LED set; thus the exposure time (the duration for which the LED is turned on) used for each diode is adjusted to ensure the pixels reach saturation. To measure the sensitivity we first plot the pixel output versus input light intensity, as shown in Figure 7-9. The instantaneous input illumination of the LEDs is recovered by mapping the input voltage to the LED calibration curve from Figure 7-8. Then, the sensitivity of the pixel is simply the slope measured from no illumination to the first saturation point. As indicated in the previous section, the error in our LED calibration curve will introduce a small error into our estimate of the sensitivity.
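The slope extraction can be sketched as follows, assuming synthetic pixel data: the output curve is truncated at the first saturation point and a linear regression over the remaining points gives the sensitivity. The response model and numbers are placeholders for illustration only.

```python
import numpy as np

# Minimal sketch of the sensitivity measure described above: the slope of the
# pixel output vs. input illumination curve from zero illumination up to the
# first saturation point.  The synthetic response below is a placeholder.
illum = np.linspace(0.0, 0.03, 31)                       # fJ/um^2, assumed
v_out = np.clip(100.0 * illum, 0.0, 1.8)                 # fake pixel response

# Take the first point where the output stops rising as the saturation knee.
dv = np.diff(v_out)
knee = int(np.argmax(dv < 1e-6)) if np.any(dv < 1e-6) else len(v_out) - 1

sensitivity = np.polyfit(illum[:knee + 1], v_out[:knee + 1], 1)[0]
print(f"sensitivity ~ {sensitivity:.1f} V*um^2/fJ")
```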


Figure 7-9. Pixel output vs. input light intensity.

7.3 Experimental results

In this experiment all four photogate APS pixel structures are tested: the standard, 1-finger, 3-finger, and 5-finger designs. By illuminating the pixels with the various colors of LED, we can measure and compare the sensitivity characteristics of the different photogate structures and their response at different wavelengths. For each design, 16 pixels are tested in one sitting to gain statistical accuracy and to minimize errors and setup variation from multiple tests. The output voltage read from each pixel is plotted against the incident illumination using the LED calibration curve. Then the slope of the linear regression curve fit is used to determine the sensitivity of each pixel. The sensitivity measure for each pixel type is summarized in Table 7-3, where the results shown are the average sensitivity measured from the 16 pixels. A sample of the pixel output from each of the four types of APS pixels is plotted in Figure 7-10.


Table 7-3. Sensitivity results from standard and multi-fingered photogates.

Pixel Type   Red             Yellow          Green           Blue            (Sensitivity, V·µm²/fJ)
Standard      74.30 ± 0.83    67.30 ± 0.75    38.23 ± 0.44    42.04 ± 0.48
1-finger      58.63 ± 0.57    51.00 ± 0.55    29.68 ± 0.28    33.48 ± 0.34
3-finger      97.83 ± 0.75    88.62 ± 0.83    52.22 ± 0.49    55.22 ± 0.59
5-finger     109.70 ± 0.93    98.05 ± 0.92    58.11 ± 0.64    63.67 ± 0.76

Figure 7-10. Comparison of the sensitivity curves (output voltage vs. input illumination) of the standard and multi-finger photogate pixels.

Despite the differences in photogate structure, the results in Table 7-3 show that each pixel type displays a similar trend: the sensitivity decreases as the wavelength of the light source shortens. This trend is a clear indication that the absorption in the poly-gate increases at short wavelengths. However, by inserting poly-fingers into the detection area of the photogate, we observe a significant increase in the response to light for both the 3- and 5-finger photogate pixels. Visual inspection of Figure 7-10 shows that both the standard and 1-finger photogates have a saturation point at ~0.015fJ/µm² with an output voltage of 1.0V. The 3- and 5-finger photogates saturate at ~0.018fJ/µm² and an output voltage above 1.8V. This observation suggests that the multi-finger photogates (3- and 5-finger) not only have a higher sensitivity but also a wider dynamic range.

7.3.1 Comparison of response for different photogate structures

To better understand the change in sensitivity observed in the multi-finger pixels, we compute the relative sensitivity ratio for each multi-finger pixel with respect to the standard fully covered photogate pixel,

$\%Sensitivity\_ratio = \frac{sensitivity\_of\_MultiFinger}{sensitivity\_of\_Standard} \times 100\%$ .  (7-3)

The fractional change relative to the standard photogate is estimated with

$\%Sensitivity\_change = \%Sensitivity\_ratio - 100$ .  (7-4)

The computed sensitivity ratio is summarized in Table 7-4 and the sensitivity change in Table 7-5.

Table 7-4. Sensitivity ratio for multi-finger photogates relative to the standard photogate.

Pixel Type   % Photogate area   Red      Yellow   Green    Blue     (% sensitivity ratio)
Standard     100.00             100.00   100.00   100.00   100.00
1-finger      40.70              78.92    75.78    77.64    79.64
3-finger      57.50             131.67   131.68   136.61   131.36
5-finger      74.20             147.66   145.69   152.00   151.44

Table 7-5. Sensitivity change for multi-finger photogates relative to the standard photogate.

Pixel Type   % Photogate area   Red             Yellow          Green           Blue            (% change in sensitivity)
Standard     100.00             NA              NA              NA              NA
1-finger      40.70             -21.08 ± 2.08   -24.22 ± 2.18   -22.36 ± 2.09   -20.36 ± 2.16
3-finger      57.50              31.67 ± 1.73    31.68 ± 2.01    36.61 ± 1.89    31.36 ± 2.09
5-finger      74.20              47.66 ± 1.61    45.69 ± 1.87    52.00 ± 2.05    51.44 ± 2.27
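A small sketch applying Equations (7-3) and (7-4) to the red-light column of Table 7-3 is shown below; small differences from Tables 7-4 and 7-5 are due only to rounding.

```python
# Minimal sketch of Equations (7-3) and (7-4): sensitivity ratio and change
# relative to the standard photogate, using the red-light values from Table 7-3.
sensitivity_red = {            # V*um^2/fJ, from Table 7-3 (red column)
    "standard": 74.30,
    "1-finger": 58.63,
    "3-finger": 97.83,
    "5-finger": 109.70,
}

reference = sensitivity_red["standard"]
for pixel, s in sensitivity_red.items():
    ratio = 100.0 * s / reference          # Eq. (7-3)
    change = ratio - 100.0                 # Eq. (7-4)
    print(f"{pixel}: ratio {ratio:6.2f}%  change {change:+6.2f}%")
```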


Both the sensitivity ratio (Table 7-4) and the change (Table 7-5) show that the 3-finger photogate achieves a ~33% and the 5-finger a ~49% increase in sensitivity relative to the standard photogate when exposed to light sources of different wavelengths. On the other hand, the 1-finger photogate did not achieve the same improvement; in fact, its sensitivity dropped by ~22% compared to the standard photogate. The sensitivity of each pixel is estimated with a linear regression fit; thus the evaluation of the relative sensitivity is subject to error. The error in sensitivity, as indicated in Table 7-5, is far less than 2.5%. Thus the large shift in sensitivity measured from our multi-finger designs must be the effect of the poly-fingers. As shown in the above tables, due to the insertion of poly-fingers, the gate detection areas of the multi-finger photogates are much smaller than that of the standard photogate. The decrease of the gate detection area will reduce the size of the potential well, as approximated in Figure 7-4. Although the detection area under the poly-gate is smaller in the multi-finger designs, the fringing field reaching under the open areas ensures a full potential well in the substrate. Given the fraction of detection area relative to the standard in Tables 7-4 and 7-5, we show the change of sensitivity for the four illuminations with respect to the photogate area in Figure 7-11.


Figure 7-11. Sensitivity ratio relative to the standard photogate vs. photogate area (red light).

The plot in Figure 7-11 suggests that the 1-finger photogate, with only 40.7% of the detection area covered, still retains ~78% of the sensitivity measured from the standard photogate. For the 3-finger photogate, with only 57.5% of the poly-gate remaining, the sensitivity is 33% higher than the standard photogate. In fact, the 5-finger photogate, with the second largest fraction of photogate area at 74.2%, achieves the highest sensitivity of the four types of APS pixels, an increase of 45%. It is important to note that the maximum achieved sensitivity observed here is limited by the four APS pixel types available; thus the maximum increase in sensitivity that the multi-finger design could potentially achieve cannot be determined from this plot. Rather, we can estimate it to be near that of the 5-finger design.

The sensitivity measured with the 3- and 5-finger photogates shows that an enclosure with openings in the detection area will improve photocarrier collection in the substrate. Assume the poly-gate is 100% efficient and collects the same amount of the incident light in all pixels; then the observed increase in sensitivity is contributed by the open area. By subtracting the photogate area from the total sensitivity, we can estimate the collection from the open area with

$\%Sensitivity\_OpenArea = \%TotalSensitivity - \%PhotogateArea$ .  (7-5)

Consequently, the collection efficiency of the open area relative to the standard gate is evaluated with

$Collection\_Efficiency = \frac{\%Sensitivity\_OpenArea}{\%OpenArea}$ .  (7-6)

Using Equation (7-5), we computed the partial sensitivity contributed by the open area, as summarized in Table 7-6. The collection efficiency of the open areas for each tested multi-finger design is calculated with Equation (7-6) and summarized in Table 7-7.

Table 7-6. Sensitivity of the open area in multi-fingered photogates.

Pixel Type   % Open area   Red     Yellow   Green   Blue    (% sensitivity)
1-finger     59.30         38.22   35.08    36.94   38.95
3-finger     42.50         74.17   74.18    79.11   73.86
5-finger     25.80         73.46   71.49    77.80   77.24

Table 7-7. Collection efficiency of the open area in multi-fingered photogates.

Pixel Type   % Open area   Red           Yellow        Green         Blue          (collection efficiency)
1-finger     59.30         0.64 ± 0.03   0.59 ± 0.03   0.62 ± 0.03   0.66 ± 0.03
3-finger     42.50         1.75 ± 0.05   1.75 ± 0.06   1.86 ± 0.06   1.74 ± 0.06
5-finger     25.80         2.85 ± 0.09   2.77 ± 0.11   3.02 ± 0.12   2.99 ± 0.13
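The numbers in Tables 7-6 and 7-7 follow directly from Equations (7-5) and (7-6); the short sketch below reproduces the red-light column from the values in Tables 7-1 and 7-4.

```python
# Minimal sketch of Equations (7-5) and (7-6) using the red-light numbers from
# Tables 7-1 and 7-4: the open-area contribution to the sensitivity and its
# collection efficiency relative to the poly-gate covered area.
pixels = {   # name: (% sensitivity ratio, % photogate area, % open area)
    "1-finger": (78.92, 40.70, 59.30),
    "3-finger": (131.67, 57.50, 42.50),
    "5-finger": (147.66, 74.20, 25.80),
}

for name, (total, gate_area, open_area) in pixels.items():
    open_sens = total - gate_area            # Eq. (7-5)
    efficiency = open_sens / open_area       # Eq. (7-6)
    print(f"{name}: open-area sensitivity {open_sens:5.2f}%  "
          f"collection efficiency {efficiency:.2f}x")
```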

From the above tables, the largest open area is found in the 1-finger design, at 59.3% of the detection area. However, this large open area only collects ~62% of the incident photoelectrons compared to the standard poly-gate, causing an overall decrease in photon collection. Recall the effect of the fringing field, which is created when the potential well extends outward from closely spaced poly-gate fingers. In the 1-finger photogate the open area is relatively large; hence, the weak fringing field below the gap did not provide sufficient well depth to collect all the light in the open area. On the other hand, the 3-finger design has a total of 42.5% open area. The collection efficiency of these open-area regions is ~1.7x, which corresponds to ~170% of the photoelectron collection in the openings relative to the photogate-covered area. In the 3-finger photogate, the poly-fingers are closely spaced, with 0.91µm open areas over the detection region. The potential well below the poly-gate is thus much stronger and the fringing field effect more significant. As the strength of the fringing field under the gap increases, it forms a much stronger potential well for collection in the open area. In the 5-finger photogate, the total open area is 25.8% of the pixel, substantially smaller than the open area in the 3-finger design. However, the collection of light in the openings is ~2.9 times more than the amount of photoelectrons collected by the poly-gate. This clearly indicates that the capacity of the potential well did not decrease with the open area. Rather, the efficiency of the open area increases as the strength of the fringing field between the poly-fingers increases. More importantly, the increase in photon collection in the open area suggests there is at least a 66% loss of photon collection due to absorption in the poly-silicon. Subsequent research by Kalyanam[46] has confirmed the fringing field effect by modelling the depletion region using TCAD.


7.3.2 Comparison of response at various wavelengths

Up to now, our results have suggested that the poly-gate absorbs significant amounts of light. Thus we hypothesized that by removing some of the poly-gate, light can be collected in the open area, provided the fringing field is strong enough. As shown in Figure 7-1, the poly-silicon absorption increases toward the short wavelengths. Thus we would expect the sensitivity increase to be highest in the blue spectrum, because the absorption of photons will be reduced in the open area. To measure how the sensitivity changes for each pixel type under illumination of different colors, we evaluated the relative sensitivity with respect to the red LED set, as summarized in Table 7-8.

Table 7-8. Relative sensitivity between red, yellow, green and blue illumination.

Pixel Type   Red    Yellow/Red     Green/Red      Blue/Red
Standard     1.00   0.91 ± 2.22%   0.52 ± 2.26%   0.57 ± 2.26%
1-finger     1.00   0.87 ± 2.04%   0.51 ± 1.91%   0.57 ± 1.98%
3-finger     1.00   0.91 ± 1.70%   0.53 ± 1.71%   0.56 ± 1.84%
5-finger     1.00   0.89 ± 1.78%   0.53 ± 1.95%   0.58 ± 2.04%

As shown in Table 7-8, it is clear that the shorter wavelength sources

(Green and Blue) encounter a much higher absorption at the poly-gate with

~50% reduction in sensitivity as compared to the red light source. If the

absorption can be reduced with the open area we would expect the relative

sensitivity measurement with the multi-finger photogate to increase. However,

the relative sensitivity shown in Table 7-8 is nearly constant across all photogate designs. Hence, we did not observe any significant improvement in reducing the blue absorption with the multi-finger photogate design.


In the silicon crystal, the energy required to create an electron-hole pair is ~1.1eV. The wavelength of each LED set is different, as listed in Table 7-2; thus the number of electron-hole pairs generated by each set of LEDs will vary for a given illumination in fJ/µm². The quantum efficiency, η, is the measure of electron-hole pairs generated and collected from the incident photons, as shown in Equation (2-4). The responsivity R of a photodetector,

$R = \frac{\eta e \lambda}{h c}$ ,  (7-7)

is the measure of the detector response as a function of wavelength. If η = 1, then the responsivity will increase with wavelength. In reality, η is always below unity because some photons are reflected off the surface, and E-H pairs may disappear through recombination and surface traps. In addition, the absorption coefficient varies as a function of wavelength, so short wavelengths are absorbed near the surface while long wavelengths may be absorbed deeper than the depleted layer.

If we assume the quantum efficiency is constant over the visible spectrum, then the relative responsivity of each LED with respect to red is as summarized in Table 7-9.

Table 7-9. Ideal responsivity ratio approximation (η = constant).

                      Red    Yellow/Red   Green/Red   Blue/Red
Responsivity ratio    1.00   0.93         0.83        0.75
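The Table 7-9 values follow directly from Equation (7-7) with η held constant, since the responsivity is then proportional to wavelength; a short sketch using the peak wavelengths from Table 7-2 is shown below.

```python
# Minimal sketch of the Table 7-9 approximation: with a constant quantum
# efficiency, Eq. (7-7) makes the responsivity proportional to wavelength, so
# the ideal ratio relative to red is simply lambda / lambda_red.
# Peak wavelengths are taken from Table 7-2.
peak_nm = {"red": 631, "yellow": 587, "green": 525, "blue": 470}

for colour, wavelength in peak_nm.items():
    print(f"{colour}/red responsivity ratio: {wavelength / peak_nm['red']:.2f}")
```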

Now comparing the ideal responsivity ratio in Table 7-9 with the relative sensitivity measured in Table 7-8, the responsivity ratio is higher in every case.


This observation indicates that η is not constant, and that the loss of photoelectrons is due to both absorption in the silicon gate and reflection between the insulator and the poly-gate layer. The responsivity ratio for yellow of 0.93 is slightly higher than, but very close to, the observed sensitivity ratio of ~0.90. However, at the green and blue wavelengths the responsivity ratio is ~1.5-1.6x higher than the experimental sensitivity ratio for all pixel types. The large discrepancy in both the green and blue spectrum indicates there is a significant loss of photons from absorption and reflectivity at the poly-gate.

With the insertion of open areas in the poly-gate, as in the 1-, 3-, and 5-finger designs, we did not achieve any significant improvement in sensitivity in the yellow, green or blue wavelengths; in fact, the sensitivity ratio remains nearly constant compared to the standard photogate. The lack of increase in sensitivity in the blue spectrum suggests possible absorption in the transparent insulator layer that fills the open areas and lies below the poly-gate. Some of the commonly used insulators are silicon dioxide (SiOx) and silicon nitride (SixNy). Although ideally these insulators have nearly no absorption in the visible spectrum, in the case of silicon nitride the stoichiometric ratio between silicon and nitrogen will change the optical characteristics. According to studies [47][48], for Si3N4 with a 4.5eV bandgap energy no light from the visible spectrum will be absorbed due to the wide energy gap. However, as the proportion of silicon in the mixture increases, the bandgap energy will decrease and the absorption characteristics will also change. The studies in [47][48] suggest that the decrease in bandgap energy of SixNy will increase the absorption in the blue spectrum, with an absorption coefficient of 10³–10⁴ cm⁻¹. Although this absorption is relatively low compared to the silicon gate, the insulator layer is much thicker than the poly-gate and this will potentially affect the collection of short wavelengths in the open areas. The process used to fabricate our design is not tuned to making photo devices; therefore the optical characteristics are not optimized to achieve the best photo response. As our results have shown, the increase in sensitivity of the multi-finger pixels indicates the insulator is not as absorptive as the silicon poly-gate. However, the thickness and optical characteristics of the insulator material will have an impact on the photo response of photogate pixels.

7.4 Chapter Summary

In the study of the different multi-finger photogates, there is a clear indication of absorption of visible light at the short wavelengths. By comparing the collection of the gate and the open area, the measurements from our experiment show there is at least a 66% loss of photon collection at the poly-gate layer. With the insertion of openings in the poly-gate, light can be collected in the open area. However, this collection cannot be achieved if the potential well under the poly-gate is not strong enough to create a fringing field under the open area. As our results from the 1-finger design show, a wide spacing between the poly-fingers will reduce the sensitivity of the photogate. On the other hand, with the 3- and 5-finger photogates, the open area achieved a collection efficiency of ~1.7x and ~2.9x respectively. Thus the collection in the openings is directly related to the strength of the fringing field between the poly-fingers. Although the multi-finger photogate is more sensitive than the standard photogate, the ratios of sensitivity between the red and blue illumination remained constant in all photogate designs. This result suggests possible absorption in the insulator layer toward the blue. Extended work needs to be conducted to investigate the absorption characteristics of the insulator material and other causes of the loss of sensitivity in the short wavelength range.


8: CONCLUSION

The application of digital imagers is expanding continuously. Their use in many cameras and embedded systems, such as remote security cameras and medical imaging, will demand better quality and fault-free sensors. This thesis has addressed the problem of the development of defects on the sensors and the possible impact of future sensor design trends.

8.1 Measure of in-field defects

The presence of defects in commercial imagers was verified through the testing of DSLRs, PS and cellphone cameras operating in the field. Extending the work of Dudas[32], the data collected for this thesis showed that standard and partially-stuck hot pixels are the main defect types in all tested cameras. Hot pixels are bright defects and can be identified using the dark frame calibration technique. In DSLRs this procedure is done on raw images, where a series of images is taken with increasing exposure time at a fixed ISO under dark illumination conditions. From the testing of 21 DSLRs of age 1-7 years, a total of 229 hot pixels were found at ISO 400; of these, 106 had an offset. This is an important finding because, unlike standard hot pixels, offset hot pixels affect pictures of any exposure time.

In the recent testing at various ISO levels on a subset of 13 DSLRs, 137 hot pixels were identified at ISO 400. This number doubles at ISO 800, with 240 defects, and triples at ISO 1600, with 367 defects. These additional defects revealed at the higher ISOs indicate most hot pixels are created with low damage. In fact, the visibility of hot pixels is enhanced by the ISO gain. As the usable ISO range continues to expand in newer cameras, more defects will be observed in images taken at these settings.

In the study of PS and cellphone cameras, the dark calibration technique was modified because the raw format is not available. The lack of explicit exposure time control imposed additional challenges in the testing of these cameras. These commercial cameras use CFA sensors, hence the color images output from these cameras are already processed by the internal imaging functions. Thus, a single defective pixel will spread into its neighboring pixels. A study of three demosaicing functions showed that the simple bilinear and median algorithms create a 3x3 defect cluster, and the adaptive Kimmel algorithm a 5x5 cluster, for an uncompressed image. The additional impact of lossy JPEG compression was shown to spread the appearance of a single defective pixel into a 12x12 defect cluster. The modified dark frame calibration requires multiple images taken at a fixed exposure time. A software tool was built to map these defect clusters and extract the defect location from the peak value in the cluster. Using these techniques, a total of 213 defects were found on a set of 10 identical cellphones with a 3 year old APS sensor. Similarly, 72 defects were found on 3 tested PSs, which use CCD sensors, with ages ranging from 1-7 years.
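As a loose illustration of the dark-frame idea summarized above, the sketch below flags pixels that sit far above the noise floor of a frame taken with no illumination. The synthetic frame, noise level and threshold are assumptions for illustration only and do not represent the thesis's actual calibration tool or settings.

```python
import numpy as np

# Hedged sketch of dark-frame hot-pixel detection: in a frame taken with no
# illumination, pixels well above the noise floor are flagged as hot.  The
# synthetic frame, the noise level and the threshold are placeholders.
rng = np.random.default_rng(0)
dark = rng.normal(10.0, 2.0, size=(480, 640))     # fake dark frame (DN)
dark[100, 200] += 80.0                            # inject one "hot" pixel

threshold = dark.mean() + 10.0 * dark.std()       # assumed detection threshold
hot_rows, hot_cols = np.where(dark > threshold)
print(list(zip(hot_rows.tolist(), hot_cols.tolist())))   # -> [(100, 200)]
```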


8.2 Spatial and temporal growth analysis

The use of the raw format in the calibration of the 21 DSLRs provides a measure of the defect parameters and the x-y location of each fault. Also, by repeating the calibration periodically, we can track the defect growth over the sensors' lifetime. These two pieces of information served in the analysis of the spatial distribution and temporal growth of faults, which helps to characterize the defect causal source.

The failure mechanism can be classified as material degradation or an external source. Degradation of the sensor material will generate localized breakdown and hence result in clusters of defects. We utilized 4 different methods to analyze the spatial distribution of defects on the tested sensors. First, the distribution of the calculated distances between all faults on each sensor is measured. The broad random distribution of the measured distances, with a single peak at ~10mm, showed no indication of local defect clusters. Secondly, with a Monte-Carlo simulation, a distribution is generated from 100 random spatial defect patterns. A chi-square test compared these results to the data and verified at a 95% confidence level that the observed distribution from the tested cameras is a random distribution. The same results were observed on both the CCD and CMOS APS sensors. This suggests the defects are independent of the sensor technology or the degradation of the sensor material. In the third test, a nearest neighbour analysis measures the distribution of distances to the closest defect. The observed distribution is compared with the theoretical CSR distribution using the nearest neighbour index Rn. The average Rn is 0.9 at ISO 400, 1.04 at ISO 800 and 0.99 at ISO 1600. Verified with the two-tailed Rn distribution, all sensors exhibit a random distribution of defects at the 95% confidence level. Lastly, in a Monte-Carlo modelling, 99 instances of CSR-pattern defective sensors are generated. Each measured distribution from the tested sensors is well within the boundary set by the 99 simulated sensors. Hence, this result verified that each observed defect pattern is simply a case of a random distribution.
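For reference, the nearest-neighbour index can be sketched as below: the mean observed nearest-neighbour distance divided by the mean distance expected under complete spatial randomness, 0.5·sqrt(area/n). The defect coordinates and sensor dimensions in the sketch are made-up placeholders, not measured data.

```python
import numpy as np

# Minimal sketch of the nearest-neighbour index R_n: the mean observed
# nearest-neighbour distance over the CSR expectation 0.5 * sqrt(area / n).
# The defect coordinates and sensor size below are made-up placeholders.
rng = np.random.default_rng(1)
width, height = 23.6, 15.8                      # sensor size in mm (example)
defects = rng.uniform([0, 0], [width, height], size=(20, 2))  # fake defects

d = np.linalg.norm(defects[:, None, :] - defects[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)                     # ignore distance to self
observed_mean = d.min(axis=1).mean()

expected_mean = 0.5 * np.sqrt(width * height / len(defects))
print(f"R_n = {observed_mean / expected_mean:.2f}")   # ~1 for a random pattern
```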

The temporal measure of the failures gives an indication of the characteristics of the defect mechanism. As a first approximation, we estimated the defect rates using the multiple calibration results collected from each sensor. The plots of defect count versus sensor age show that the number of defects on the sensors increases continuously at a nearly constant linear rate. With a linear regression fit, the defect rate of the CCDs in DSLRs is ~5.75 defects/year, which is double the APS rate of ~2.39 defects/year. In addition, the measured average rate of the small APS cellphone sensors is ~3.55 defects/year and that of the small CCD PS sensors is ~6.88 defects/year. The constant rate found in all sensors suggests these sensors are exposed to the same continuous source.

The results from the analysis of spatial and temporal distribution of the

CCD and APS sensors had shown no clustering of defects and a linear growth in

time. These results rejected the hypothesis of material degradation but rather

suggested a random source such as cosmic rays is the causal mechanism

behind these in-field defects.


8.3 Defect trace algorithm

While the defect rates can be measured from periodic calibration results, this method suffers large errors when the calibration time gaps are over one year apart. Instead, we utilized the regular color images captured by the cameras to provide a continuous measurement of the state of the sensor. The defect development rates can be traced by analyzing the entire historical image dataset. In this thesis we have introduced the use of Bayesian statistics to detect the presence of a defect based on the evidence accumulated from a sequence of images. The state of the pixel (i.e. good/hot) is estimated by comparing the image pixel value with its neighbouring pixels. Because defects will appear as defect clusters in color images, a ring interpolation is used to estimate the good pixel value. In addition, to avoid saturation from the large accumulation of good pixels, a sliding window approach is used to confine the accumulation to a subset of images.
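A highly simplified sketch of the sliding-window accumulation idea is given below. The Gaussian likelihoods, the assumed hot-pixel offset and the decision threshold are illustrative assumptions, not the model actually used in the thesis.

```python
# Highly simplified, hedged sketch of sliding-window Bayesian accumulation.
# Each image contributes a log-likelihood ratio comparing a "hot" hypothesis
# (pixel brighter than its ring-interpolated estimate) with a "good"
# hypothesis; only the most recent `window` images are accumulated.  The
# Gaussian likelihoods, offsets and threshold are assumptions for illustration.
def log_likelihood_ratio(pixel, estimate, sigma=3.0, hot_offset=30.0):
    """log P(pixel | hot) - log P(pixel | good) under assumed Gaussians."""
    resid_good = pixel - estimate
    resid_hot = pixel - (estimate + hot_offset)
    return (resid_good**2 - resid_hot**2) / (2.0 * sigma**2)

def is_defective(pixels, estimates, window=10, threshold=5.0):
    """Accumulate evidence over the last `window` images only."""
    llrs = [log_likelihood_ratio(p, e) for p, e in zip(pixels, estimates)]
    return sum(llrs[-window:]) > threshold

# A pixel that reads ~30 DN above its neighbourhood in recent images is flagged.
readings  = [12, 11, 13, 44, 41, 43, 42]
estimates = [12, 12, 12, 12, 12, 12, 12]
print(is_defective(readings, estimates))   # -> True
```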

The performance of the Bayesian algorithm was verified with a set of Monte-Carlo simulations. The simulation results suggest that while the detection of weak hot pixels suffers a delay, this dependence is more significant when the image dataset consists mostly of pictures taken at short exposure times. In addition, to identify the defective pixel from the spreading of defects in color images, a 5x5 ring size was demonstrated to give the best accuracy. The use of a short sliding window also showed an improvement in detection, especially in finding the low impact hot pixels.


The Bayesian algorithm was applied to a set of real image datasets from 7 tested DSLRs. All defects were correctly identified by the algorithm, and the results showed that faults were found at times when calibration results were not available. Hence, the defect rates measured from the calibrations are always an underestimate of the true rates. The Bayesian algorithm demonstrated a better measurement of the defect rate using the regular images captured by the cameras.

8.4 Fitting of defect growth with sensor design trends

In this thesis we have correlated some of the sensor design trends to the impact of defects on future sensors. First is the examination of the defect rates of the two types of sensors, CCD and APS. In the comparison of the large area DSLR sensors, the defect rate of the CCDs is 2x higher than that of the APSs. The same trend was observed in the small area PS and cellphone sensors. The higher defect rates measured on the CCD cameras indicate this sensor type might be more sensitive to the defect source. The second trend is the change in sensor size. In this thesis, various sensor sizes ranging from the large full frame to the small cellphone sensors have been tested. A detailed comparison showed that, keeping the pixel size constant, the defect rate scales approximately with the sensor area. Hence, the defect rates should be expressed per sensor area (i.e. defects/year/mm²). Although the use of full frame sensors was driven by the improvement in image quality, on the contrary, the high number of defects developed on these sensors will be the main drawback to the lifetime and quality of high-end DSLRs. The last trend is the shrinkage of pixel size to increase the pixel count on the sensors. Most PS and cellphone cameras use 2-3µm pixels on the small sensors while DSLRs use pixels of size 6-7µm. We then combined the defect rates measured from the three types of cameras (DSLRs, PS, cellphone) to develop a model that relates the defect rate per area to the pixel size. This thesis found that the best fit was to a power law y = Ax^B. A plot of all tested cameras indicated the defect rates scale with a power of about -2.5 of the pixel size. To be exact, the CCD sensors scale with a power of -2.04 (i.e. ~pixel area) and the APS slightly higher, with a power factor of -3.32. Although the CCDs with 6-7µm pixels had higher measured defect rates, as the pixel size scales down to 2-3µm the defect rate of the APS is nearly the same as that of the CCDs. This finding suggests the shrinkage of APS pixels will cause the sensor to become more sensitive to defects.

By analyzing the defect rates of these small pixels, a much higher defect rate was observed on these sensors as compared to the large-pixel DSLR sensors. A power law regression fit suggests the defect rate per sensor area will increase as a power law as the pixel size decreases.

8.5 Experimental measure of Multi-Finger Photogate

The photogate is known to suffer a loss of collection due to absorption at the silicon gate. A study by La Haye introduced and implemented several multi-finger photogate designs. This thesis extended the testing of the multi-finger photogates over the visible spectrum to provide a measure of the absorption and verified the effect of the fringing field on photo collection.


The testing of the multi-finger photogates at various wavelengths showed a clear indication of absorption of visible light at the short wavelengths. The open area in the multi-finger photogates measured a ~66% loss of photon collection due to the poly-gate layer.

As a first approximation, the potential well below the open area in the multi-finger photogate is created by the fringing field from the closely spaced fingers. The 1-finger photogate design showed a reduction in sensitivity, since the fringing fields under the poly-gate were not strong enough to create a full potential well under the open areas. However, with the 3- and 5-finger photogate designs, the pixel achieved a collection efficiency of ~1.7x and ~2.9x respectively. Hence, the collection in the open area is directly related to the strength of the fringing field between the poly-fingers.

Although the multi-finger photogate measured a higher sensitivity than the standard photogate, the ratios of sensitivity between the red and blue illumination remained constant in all photogate designs. This result suggests possible absorption in the transparent insulator layer.

8.6 Future Work

Preliminary data collected from the PS and cellphone cameras have provided insight into the impact of sensor design trends on the development of defects in imagers. This analysis suggests that possible future work should look at testing with a larger collection of small sensor devices. While the work on the cellphones and PS has only recently begun, more data is needed by monitoring these imagers over a longer time period. Currently the analysis of pixel size is limited to 2-3µm on the small sensors and 6-7µm on the large sensors. Expanding the study to a large set of PS cameras could provide data from pixels in the 4-5µm size range.

In the current study of defects on large area full-frame sensors, the results are based on only 2 imagers. As full-frame sensors have only recently entered the commercial DSLR market, only a small set of cameras is available for testing. To strengthen the statistical relevance of our study, defect data from more full-frame sensors is needed. In addition, the impact of ISO on defects is currently limited to ISO 1600 for most tested cameras. Since the extended ISO range of >6400 is found in the new camera models, more testing of these new cameras is needed to better observe the defect trend with the ISO amplification.

Lastly, the measurements on the multi-finger photogate have shown an increase in sensitivity as compared to the standard photogate. This result demonstrates the effect of the fringing field in keeping a full potential well over the entire photogate area. However, the maximum sensitivity that can be achieved by the multi-finger photogate design will need further investigation. The extension of this investigation will be carried out in simulations in the work by Kalyanam.


REFERENCES

[1] Edwards, S. H. (ed.), “Electronic and Digital Photography,” History of Photography, 22, 1998.

[2] W. S. Boyle and G.E. Smith, "Charge coupled semiconductor devices," Bell Syst. Tech. Journal, vol. 49, pp. 587-593, 1970.

[3] S. Mendis, S. Kemeny and E.R. Fossum, "A 128xl28 CMOS active pixel image sensor for highly integrated imaging systems,"IEEE IEDM Tech. Dig., pp. 583-586, (1993).

[4] S. K.Mendis, S. E. Kemeny, R. C. Gee, B. Pain, C. O. Staller, Q Kim, E. R. Fossum, “CMOS Active Pixel Sensor for Highly Integrated Imaging System”, IEEE Journal of Solid-State Circuits, vol. 32, pp. 187-197 (1997)

[5] A. A. Willis, “Electronic Photography System”, U.S. Patent 4 057 830, Nov, 8 1977

[6] Canadian Imaging Trade Association, http://www.citacanada.ca/, accessed March 2009.

[7] R. L. Wisfield, M. A. Hartney, R. A. Street, R. B. Apte, “New Amorphous-Silicon Image Sensor for X-ray Diagnostic Medical Imaging Applications”, Proc. SPIE, vol. 3336, Medical Imaging 1998: Physics of Medical Imaging, 22-24 February 1998, pp. 444-452.

[8] Z. Sun, G. Bedis, R. Miller, ”On-Road Vehicle Detection Using Optical Sensor: A Review”, IEEE International Conference on Intelligent Transportation Systems, 2004, pp. 585-590.

[9] D. Beymer, P. McLauchlan, B. Coifman, J. Malik, “A Real-time Computer Vision System for Measuring Traffic Parameters”, in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, pp. 495-501.

[10] I. Downes, L. B. Rad, H. Aghajan, “Development of a Mote for Wireless Image Sensor Networks”, in Proc. Cognitive Systems and Interactive Sensors (COGIS), Paris, Mar. 2006.

[11] J. C. Pickel, A. H. Kalma, G. R. Hopkinson, C. J. Marshall, “Radiation Effects on Photonic Imagers – A Historical Perspective,” IEEE Trans. Nuclear Science, vol. 50(3), 2003, pp. 671-688.


[12] K. Yamamoto, Y. Oya, K. Kagawa, M. Nunoshita, J. Ohta, K. Watanabe, “A 128 x 128 Pixel Complementary Metal Oxide Semiconductor Image Sensor with an Improved Architecture for Detecting Modulated Light Signals,” Optical Review, vol. 13, no. 2, pp. 64-68, March 2006.

[13] M. L. La Haye, “Enhancing Sensitivity for Active Pixel Sensors with Fault Tolerance and Demosaicing,” MASc Thesis, Burnaby, CA, 2007.

[14] E. D. Palik, “Handbook of Optical Constants of Solids,” New York, Academic Press, 1985.

[15] J. E. Juliussen, “Bubbles and CCD memories – Solid state mass storage,” afips, pp. 1067, 1978, in Proc. National Computer Conference, 1978.

[16] CCD transfer efficiency Active pixel Sensor: Are CCD Dinosaurs?

[17] D. Litwiller, “CMOS vs. CCD Maturing Technologies, Maturing Markets,” Photonics Spectral, vol. 39 pp. 154-158, 2005.

[18] J. Farrell, F. Xiao, S. Kavusi, “Resolution and Light Sensitivity Tradeoff with Pixel Size,” in Proc. SPIE Conference in Digital Photography II 6069, pp. 211-218, 2006.

[19] J. F. Hamilton Jr., J. T. Compton, “Processing Color and Panchromatic Pixels,” U.S. Patent 20070024879, Feb. 2007.

[20] E. A. Amerasekera, F. N. Najm, Failure Mechanisms in Semiconductor Devices 2nd ed., New York, NY: John Wiley & Sons, Inc., 1997.

[21] Reliability in CMOS IC Design: Physical Failure Mechanisms and their Modeling, IN MOSIS Technical Notes, http://www.mosis.org/support/technical -notes.html.

[22] Toshiba, “Accelerated Lifetime tests”. Internet: http://www.semicon.toshiba.co.jp/eng/product/reliability/device/testing/testing2/1186548_7827.html, Aug 2010 [Feb 2011].

[23] A. H Johnston, “Radiation Damage of Electronic and Optoelectronic Devices in Space,” in Proc. the 4th International Workshop on Radiation Effects on Semiconductor Device for Space Application, Tsukuba, Japan, Oct. 2000.

[24] J. Bogaerts, B. Dierickx, G. Meynants, D. Uwaerts, “Total Dose and Displacement Damage Effects in a Radiation-Hardened CMOS APS,” IEEE Transactions on Electronic Devices, vol. 50, no. 1, pp. 84-90, 2003.

[25] G. R. Hopkinson, C. J. Dale, P. W. Marshall, “Proton Effects in Charge-Coupled Devices,” IEEE Transactions on Nuclear Science, vol. 40, no.2, pp. 614-626, 1996.

[26] G. R. Srinivasan, "Modeling the cosmic-ray-induced soft-error rate in integrated circuits: An overview," IBM Journal of Research and Development , vol.40, no.1, pp.77-89, Jan. 1996


[27] T. J. O'Gorman, J. M. Ross, A. H. Taber, J. F. Ziegler, H. P. Muhlfeld, C. J. Montrose, H. W. Curtis, J. L. Walsh, "Field testing for cosmic ray soft errors in semiconductor memories," IBM Journal of Research and Development , vol.40, no.1, pp.41-50, Jan. 1996

[28] A. J. P. Theuwissen, “Influence of terrestrial cosmic rays on the reliability of CCD image sensors. Part 1: experiments at room temperature,” IEEE Transactions on Electron Devices, Vol. 54 (12), pg. 3260-6, 2007

[29] A. J. P. Theuwissen, “Influence of terrestrial cosmic rays on the reliability of CCD image sensors. Part 2: experiments at elevated temperature,” IEEE Transactions on Electron Devices, Vol. 54 (12), pg. 2324-8, 2008

[30] J. F. Ziegler, “Terrestrial cosmic ray intensities,” IBM Journal of Research and Development, vol. 42, no. 1, pp. 117-140, Jan. 1998.

[31] J. Dudas, L. M. Wu, C. Jung, G. H. Chapman, Z. Koren, I. Koren, “Identification of in-field defect development in digital image sensors”, in Proc. Electronic Imaging, Digital Photography II, V6502, 6502Y1-0Y12, San Jose, Jan. 2007.

[32] J. Dudas, “Characterization and Avoidance of In-Field Defects in Solid-State Image Sensors,” MASc Thesis, Burnaby, CA, 2008.

[33] R. Widenhorn, M. M. Blouke, A. Weber, A. Rest, and E. Bodegom, “Temperature dependence of dark current in a CCD,” Proc. SPIE, vol. 4669, pp. 193-201, 2002.

[34] J.C. Dunlap, O. Sostin, R. Widenhorn,and E. Bodegom, “Dark current behaviour in DSLR cameras,” Proc. SPIE-IS&T Electronic Imaging, SPIE vol. 7249, 2009.

[35] Dpreview, “Digital Photography Review,” http://www.dpreview.com

[36] W. T. Freeman, “Median Filter for Reconstructing Missing Color Samples,” U.S. Patent 2724395, 1988.

[37] C. A. Laroche, and M. A. Prescott, “Apparatus and Method for Adaptively Interpolation a Full Color Image Utilizing Chrominance gradients,” U.S. Patent 5,373,322, Dec. 1994.

[38] R. Kimmel, “Demosaicing: Image Reconstruction from CCD Samples,” Proc. Trans. Imaging Progressing, vol. 8, pp. 1221-1228, 1999.

[39] S. K. Tewksbury, Wafer-Level Integrated Systems: Implementation Issues, Norwell, MA, Kluwer, 1989.

[40] J.Leung, J.Dudas, G. H. Chapman, I. Koren, Z. Koren, “Quantitative Analysis of In-Field Defects in Image Sensor Arrays”, Proc. IEEE Int. Symposium on Defect and Fault Tolerance, pp 517-525, Rome, Italy, Oct. 2007.


[41] P. J. Diggle, Statistical analysis of spatial patterns, London, New York: Academic Press, 1983.

[42] D. Ebdon, Statistics in Geography, Oxford, UK: Blackwell Publisher Inc., 1985.

[43] K. P. Donnelly, Simulations to determine the variance and edge effect of total nearest-neighbour distance. Cambridge, Cambridge University Press, 1978.

[44] Camera and Imaging Products Association, http://www.cipa.jp/english/index.html, accessed Dec 2009.

[45] Y. Dagan, Topic:”The Future of Cameras for Mobile Electronics.”, 2007

[46] P. V. R. Kalyanam, G. H. Chapman, and M. Parameswaran, “Enhanced sensitivity achievement using advanced device simulation of multifinger photo gate active pixel sensors”, Proc. SPIE, vol. 7536, 75360G, 2010.

[47] A. C. Diebold, “Handbook of silicon semiconductor metrology,” New York, Marcel Dekker, 2001.

[48] H. R. Philipp, “Optical Properties of Silicon Nitride,” J. Electrochem. Soc., Volume 120, Issue 2, pp. 295-300, February 1973.


APPENDIX A: SPECIFICATION OF TESTED DSLRS

Camera   Camera Model        Sensor Type   MP     Sensor Size (mm x mm)   Pixel Size (µm x µm)

A Canon EOS10D APS 6.3 22.7 × 15.1 7.38 x 7.36

B Canon EOS5DMarkII APS 21.0 36.0 × 24.0 6.26 x 6.26

C Canon EOS300D APS 6.3 22.7 × 15.1 7.38 x 7.36

D Canon EOS450D APS 12.2 22.2 × 14.8 5.14 x 5.14

E Canon EOS350D APS 8.0 22.2 × 14.8 6.33 x 6.33

F Canon EOS450D APS 12.2 22.2 × 14.8 5.14 x 5.14

G Canon EOS5DMarkII APS 21.0 36.0 × 24.0 6.26 x 6.26

H Canon EOS30D APS 8.2 22.5 × 15.0 6.30 x 6.30

I Canon EOS350D APS 10.1 22.2 × 14.8 6.33 x 6.33

J Nikon D50 CCD 6.0 23.7 × 15.5 7.69 x 7.57

K Nikon D80 CCD 10.0 23.6 × 15.8 5.87 x 5.87

L Nikon D80 CCD 10.0 23.6 × 15.8 5.87 x 5.87

M Nikon D80 CCD 10.0 23.6 × 15.8 5.87 x 5.87

O Nikon D70 CCD 6.0 23.7 × 15.5 7.69 x 7.57

N Nikon D80 CCD 10.0 23.6 × 15.8 5.87 x 5.87

P Nikon D40 CCD 6.0 23.7 × 15.5 7.69 x 7.57

Q Canon EOS30D APS 8.2 22.5 × 15.0 6.30 x 6.30

R Nikon D70 CCD 6.0 23.7 × 15.5 7.69 x 7.57

S Nikon D200 CCD 10.0 23.6 × 15.8 6.10 x 6.10

T Nikon D2x APS 12.2 23.7 × 15.7 5.39 x 5.38

U Nikon D1x CCD 5.3 23.7 × 15.5 7.87 x 7.90