
2446 IEEE TRANSACTIONS ON ELECTRON DEVICES, VOL. 56, NO. 11, NOVEMBER 2009

Wide-Dynamic-Range CMOS Image Sensors—Comparative Performance Analysis

Arthur Spivak, Alexander Belenky, Alexander Fish, Member, IEEE, and Orly Yadid-Pecht, Fellow, IEEE

Abstract—A large variety of solutions for widening the dynamic range (DR) of CMOS image sensors has been proposed throughout the years. We propose a set of criteria upon which an effective comparative analysis of the performance of wide-DR (WDR) sensors can be done. Sensors for WDR are divided into seven categories: 1) companding sensors; 2) multimode sensors; 3) clipping sensors; 4) frequency-based sensors; 5) time-to-saturation (time-to-first-spike) sensors; 6) global-control-over-the-integration-time sensors; and 7) autonomous-control-over-the-integration-time sensors. The comparative analysis for each category is based upon the quantitative assessments of the following parameters: signal-to-noise ratio, DR extension, noise floor, minimal transistor count, and sensitivity. These parameters are assessed using consistent assumptions and definitions, which are common to all WDR sensor categories. The advantages and disadvantages of each category in the sense of power consumption and data rate are discussed qualitatively. The influence of technology advancements on the proposed set of criteria is discussed as well.

Index Terms—Active pixel sensor (APS), CMOS image sensors (CIS), dynamic range (DR), noise floor (NF), sensitivity, sensors, signal-to-noise ratio (SNR).

I. INTRODUCTION

THE ABILITY to integrate various smart functions of imagers on a single chip [1], [2], using CMOS technology, has been the stimulus for the fast growth of digital camera production based on this technology. A large variety of smart functions has allowed CMOS imagers to be implemented in the areas of security vision systems, medical devices, quality control, and space research [3].

While scaling trends in CMOS technology facilitate higher resolution imaging, they also have an adverse effect on image performance, leading to limited quantum efficiency, increased leakage current, and reduced signal swing [4], hence reduced dynamic range (DR) of the sensor [3], [5], [6]. The DR is one of the most important figures of merit in CMOS image sensors (CISs) and quantifies the ability of a sensor to image highlights and shadows. In cases of high illumination levels, a narrow DR causes the saturation of pixels with high sensitivity, resulting in loss of information. These issues provide researchers with many challenges during their efforts to realize high-quality CIS-based vision systems.

Manuscript received February 2, 2009; revised June 9, 2009. First published September 29, 2009; current version published October 21, 2009. The review of this paper was arranged by Editor A. Theuwissen.

A. Spivak, A. Belenky, and A. Fish are with the VLSI Systems Center, Ben-Gurion University of the Negev, Beersheba 84105, Israel (e-mail: [email protected]; [email protected]; [email protected]).

O. Yadid-Pecht is with the Electrical and Computer Engineering Department, University of Calgary, Calgary, AB T2N 1N4, Canada (e-mail: [email protected]).

Digital Object Identifier 10.1109/TED.2009.2030599

The overall task for wide-DR (WDR) imaging can be divided into two distinct stages: image capture, preventing the loss of scene details, and image compression, allowing image representation on conventional computer screens. The first stage is a challenge mainly for image sensor designers and can be implemented at the pixel or system levels, whereas the second stage is mainly accomplished in software. This paper examines only the solutions that are relevant to the first stage, i.e., to the image capture ability. Improvement of image capture capability can be done either by reducing the noise floor (NF) of the sensor [7]–[10] or by extending its saturation toward higher light intensities. Here, we focus on the solutions that extend the DR toward high light intensities.

Various solutions for extending the DR in CISs at the pixel or system levels have been presented in recent years. A qualitative summary of the existing solutions and their comparisons are presented in [11]. In that paper, the WDR algorithms have been divided into six general categories. This paper updates the division into categories and provides quantitative assessments of sensor performance within each category, as well as overall qualitative comparisons between the different categories. The updated classification of the WDR schemes is presented in the following: 1) companding sensors that compress their response to light due to their logarithmic transfer function; 2) multimode sensors that have a linear and a logarithmic response at dark and bright illumination levels, respectively (i.e., they are able to switch between linear and logarithmic modes of operation); 3) clipping sensors, in which a capacity well adjustment method is applied; 4) frequency-based sensors, where the sensor output is converted into a pulse frequency; 5) time-to-saturation [(TTS); time-to-first-spike] sensors, where the image is processed according to the time the pixel was detected as saturated; 6) sensors with global control over the integration time; and 7) sensors with autonomous control over the integration time, where each pixel has control over its own exposure period. This paper is organized as follows: Section II presents the definitions and basic assumptions. Section III presents the quantitative study of the WDR schemes. Section IV presents the discussion. Section V summarizes the study.

II. DEFINITIONS AND BASIC ASSUMPTIONS

To be consistent throughout the comparisons presented in this paper, we have defined basic assumptions that are applicable for each algorithm. Moreover, these assumptions simplify the numeric analysis of each approach for widening the DR. We assume that pixels within each group of WDR sensors operate without color filters, i.e., the signal from the pixel is converted

0018-9383/$26.00 © 2009 IEEE


in grayscale. Symbols iph, idc, and σr relate to the photogenerated current, dark current, and readout signal deviation, respectively. We assume that the incident light intensity is constant over the frame and that the transfer function from the light intensity to the photogenerated current is linear and constant. This assumption allows us to easily calculate the accumulated photocharge. For all of the sensor groups, aside from the logarithmic ones, we assume that the longest integration time is tint. We also assume that all sensors operate at a video frame rate (30 frames per second), and, therefore, the longest integration time is 30 ms. All signal-to-noise ratio (SNR) calculations were performed for the same range of photogenerated current iph. In order to further simplify the calculations, we assume that the dark signal is equal to 0.1 fA. We also assume that all capacitances within the pixel are time invariant; otherwise, all calculations become extremely complex. Each pixel is represented with functional blocks in order to make calculations for a particular group of WDR sensors as general as possible. For example, the photodiode that integrates photocharge during the frame is replaced with the integrator block "∫." Note that, since the accumulated photogenerated charge is negative, the current flows out of the integrator block. The equivalent capacitance associated with the integrator Cint is equal for all pixels and is set to 10 fF. The saturation signal is 1 V for all techniques except the logarithmic one. The analog-to-digital converter (ADC) used for the final analog-to-digital (A–D) conversion has the same resolution for all the sensors. The full well capacity of a pixel (without additional capacitances) is defined by Qmax. The in-pixel source follower amplifier, utilized for nondestructive readout, is assumed to have constant gain and to operate in the saturation region. The gain fixed-pattern-noise (FPN) component σPRNU (where PRNU stands for "photoresponse nonuniformity") in all systems (except the logarithmic mode) is assumed to be between 0.1% and 0.3% of the photogenerated signal, according to the functionality and the minimal number of transistors required for the implementation of the pixel in the specific category of WDR sensors. The offset FPN component σOff_FPN is assumed to vary from 0.1% to 0.3% of the saturation signal according to the functionality, minimal number of transistors inside the pixel, and calibration procedure complexity. The minimal number of transistors within the pixel is assessed assuming that each switch and each source follower are implemented with one and two transistors, respectively. The comparator is implemented with five transistors, and the memory unit is implemented with six transistors. We also assume that the pixels do not share any transistors between them.

The minimal readout noise is 3e−. Thermal noise stored on capacitors, which is referred to as "kTC" noise, is taken into consideration only when it is caused by the reset operation. This noise is a function of the sensor temperature T, given in kelvins. The bulk effect is ignored; otherwise, the threshold voltage would be a function of the signal itself.

The SNR of each WDR scheme is defined as the ratio between the photogenerated signal and the root mean square of the average noise power. The average noise power is calculated by summing the variances of all the noise components, assuming that none of the noise random processes are correlated. Note that all noise components are input referred.

Fig. 1. Schematic diagram of the reference APS.

When the photocurrent exceeds the bounds of the system's DR, its SNR drops to zero. However, the SNR as defined previously is not sufficient for the evaluation of sensor ability to image high DR scenes since it does not indicate the sensitivity of the sensor, i.e., the ability to distinguish between two adjacent light intensities. Therefore, in this paper, we perform an analysis of sensitivity along with SNR and DR throughout the whole range of relevant photocurrents. This analysis allows assessing the effectiveness of a certain DR extension. The sensitivity parameter in this paper is defined as the derivative of the pixel output with respect to the photogenerated current and is presented in volts per picoampere. The sensitivity for photocurrents exceeding the DR is zero. We also define the minimum detectable signal as NF, evaluated in electrons.

In order to compare parameters such as NF, minimal transistor count, etc., a conventional three-transistor (3T) active pixel sensor (APS) cell, shown in Fig. 1, was taken as a reference pixel. In this pixel, the photogenerated and dark currents discharge the integrator capacitance. At the end of the integration time, the voltage at the "sense" node is read out (Vout) using a source follower amplifier (Buffer in Fig. 1). Ibias is the columnwise current source associated with the Buffer. A new integration starts after the photodiode is reset by activating the "Reset" signal. The SNR for the reference sensor is given by

$$\mathrm{SNR} = 20\log_{10}\frac{\frac{i_{ph}t_{int}}{C_{int}}}{\sqrt{\sigma_r^2 + \frac{q(i_{ph}+i_{dc})t_{int}}{C_{int}^2} + \frac{\sigma_{PRNU}^2\, i_{ph}^2 t_{int}^2}{C_{int}^2} + \frac{\sigma_{Off\_FPN}^2\, Q_{max}^2}{C_{int}^2}}} \quad (1)$$

The NF is calculated as follows:

$$\mathrm{NF} = \frac{C_{int}}{q}\sqrt{\sigma_r^2 + \frac{q\, i_{dc} t_{int}}{C_{int}^2} + \frac{\sigma_{Off\_FPN}^2\, Q_{max}^2}{C_{int}^2}} \quad (2)$$

All the calculations are performed using MATLAB. Fig. 2 presents the calculated SNR for a 3T APS-based sensor. The calculation of the SNR for the reference pixel is performed with the following values:

Qmax = 62 500 e−, T = 300 K, σr = 48 μV, Cint = 10 fF, σPRNU = 0.1%, σOff_FPN = 0.1%, NF = 65 e−.
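The reference-pixel calculation can be reproduced numerically. The following Python sketch (the paper's own calculations were done in MATLAB) evaluates (1) and (2) with the values listed above; it is illustrative, not the authors' code:

```python
import math

# Reference 3T APS parameters from Section II.
q = 1.602e-19          # electron charge [C]
C_int = 10e-15         # integration capacitance [F]
t_int = 30e-3          # longest integration time [s]
sigma_r = 48e-6        # readout noise [V]
sigma_prnu = 0.001     # gain FPN, 0.1% of the signal
sigma_off = 0.001      # offset FPN, 0.1% of the saturation signal
Q_max = 62500 * q      # full well capacity [C]
i_dc = 0.1e-15         # dark current [A]

def snr_db(i_ph):
    """SNR of the reference pixel per (1); variances input-referred in V^2."""
    signal = i_ph * t_int / C_int
    var = (sigma_r**2
           + q * (i_ph + i_dc) * t_int / C_int**2
           + sigma_prnu**2 * i_ph**2 * t_int**2 / C_int**2
           + sigma_off**2 * Q_max**2 / C_int**2)
    return 20 * math.log10(signal / math.sqrt(var))

def noise_floor_e():
    """Noise floor in electrons per (2): no photosignal, so no PRNU term."""
    var = (sigma_r**2
           + q * i_dc * t_int / C_int**2
           + sigma_off**2 * Q_max**2 / C_int**2)
    return C_int / q * math.sqrt(var)
```

With these constants, `noise_floor_e()` comes out near 63 e−, close to the paper's 65 e− (small differences in the assumed physical constants account for the gap), and the peak SNR just below saturation is in the high-40-dB range, consistent with Fig. 2.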


Fig. 2. Calculated SNR for the reference sensor.

Fig. 3. Sensitivity of the reference sensor.

The "knee" in the SNR curve (Fig. 2) is caused by the offset FPN component, which is taken into account in (1).

The sensitivity S (depicted in Fig. 3) of the reference sensor is given by

$$S = \frac{dV_{sense}}{di_{ph}} = \frac{d}{di_{ph}}\left(\frac{i_{ph}t_{int}}{C_{int}}\right) = \frac{t_{int}}{C_{int}} \quad (3)$$

In all of the DR calculations, the following basic DR definition is used:

$$\mathrm{DR} = 20\log_{10}\frac{i_{max}}{i_{min}} \quad (4)$$

where imax is the largest detectable photogenerated current and imin is the smallest detectable current, defined by the NF.

For all of the reviewed solutions, the maximum DR extension is presented by the DR factor (DRF) [12]. This factor represents the DR extension for each particular WDR algorithm. The calculation of the DRF is unique for each WDR algorithm. In this paper, we present it in decibels in order to bring it to the same scale as the DR. For some algorithms, it is calculated as the ratio of the integration times, whereas, for other algorithms, it is defined as the ratio between the integration capacitances. In some cases, it is a combination of both the aforementioned ratios. Power consumption for each category of WDR sensors is assessed qualitatively. Quantitative analysis of the power consumption parameter is out of the scope of this paper.
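As a concrete instance of (4), the DR of the reference pixel can be estimated from its saturation current and NF. The mapping of the NF in electrons to the minimum detectable current below is our illustrative assumption, using the Section II values:

```python
import math

q = 1.602e-19                    # electron charge [C]
C_int, t_int = 10e-15, 30e-3     # integration capacitance [F] and time [s]
V_sat = 1.0                      # saturation signal [V]
NF_e = 65                        # noise floor [electrons]

i_max = C_int * V_sat / t_int    # largest photocurrent before saturation [A]
i_min = NF_e * q / t_int         # smallest detectable photocurrent [A]
DR_dB = 20 * math.log10(i_max / i_min)   # basic DR definition (4)
```

This evaluates to roughly 60 dB, the DR commonly attributed to a conventional 3T APS.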

Fig. 4. Schematic diagram of a logarithmic response pixel.

III. CALCULATION OF SNR OF WDR SENSORS

A. Companding Sensors (Logarithmic Compressed-Response Photodetectors)

Pixels with a compressed response, as depicted in Fig. 4, achieve an extremely high DR (over 100 dB). Their companding ability enables the representation of a wide range of light intensities by a relatively small signal voltage swing [13]–[16]. The companding characteristic is achieved through the logarithmic relation between the current flowing within the pixel and the output voltage, as shown in

$$i_{In} = i_{ph} + i_{dc} = I_0\, e^{\,q(V_{bias\_tr} - V_{sense} - V_{th\_tr})/kT} \quad (5)$$

where iIn is the pixel current that is composed of the photogenerated current iph and the dark current idc. I0 is equivalent to the current iIn when transistor Mtr is at the onset of the subthreshold operation region. k is the Boltzmann constant. The voltages Vbias_tr and Vth_tr are the bias and the threshold voltages of Mtr, respectively. Vsense is the voltage at the sensing node "sense" of the pixel. We can express the voltage signal caused by light falling onto a pixel with reference to the signal caused by the dark current only as follows:

$$V_{sig} = \frac{kT}{q}\ln\left(\frac{i_{ph}}{i_{dc}}\right) \quad (6)$$

The shot noise variance in the logarithmic pixel is given by

$$\bar{V}^2_{shot} = \frac{kT}{2C_{PD}} \quad (7)$$

where CPD is the capacitance of the photodiode. By using (6) for the effective signal energy calculation and by summing up all the noise sources (readout, FPN, and shot noise), we get the following expression:

$$\mathrm{SNR} = 20\log_{10}\frac{\frac{kT}{q}\ln\frac{i_{ph}}{i_{dc}}}{\sqrt{\sigma_r^2 + \frac{kT}{2C_{PD}} + \sigma_{PRNU\_log}^2\, i_{ph}^2 + \sigma_{Off\_FPN}^2\, V_{th\_tr}^2}} \quad (8)$$

where σr is the voltage deviation caused by the readout process and σPRNU_log and σOff_FPN are the gain and offset FPN components, respectively. Note that σPRNU_log is a function of the photogenerated signal itself since the sensor transfer function is nonlinear. The signal swing of the logarithmic pixel is equal to Vth_tr; thus, this parameter is the reference for the


Fig. 5. Calculated SNR for the logarithmic sensor.

offset FPN component calculation. Note that the logarithmic pixel depicted in Fig. 4 cannot be calibrated; therefore, the offset FPN is considered to be 0.3%. Fig. 5 shows the calculated results for the logarithmic sensor SNR. The dashed line denotes the SNR of the reference sensor.

The SNR calculation was performed using the following values:

T = 300 K, σr = 96 μV, CPD = 10 fF, Vth_tr = 0.5 V, σPRNU_log = 0.0026 V/iph, σOff_FPN = 0.3%, NF = 109 e−.

The NF for the logarithmic pixel is calculated by

$$\mathrm{NF} = \frac{C_{PD}}{q}\sqrt{\sigma_r^2 + \frac{kT}{2C_{PD}} + \sigma_{Off\_FPN}^2\, V_{th\_tr}^2} \quad (9)$$

The logarithmic pixel's DRF (in relation to the reference pixel) can be expressed as follows:

$$\mathrm{DRF} = 20\log_{10}\frac{I_0}{i_{ref\_sat}} \quad (10)$$

where iref_sat is the saturation current of the reference pixel, given by

$$i_{ref\_sat} = \frac{C_{int}\,\Delta V}{t_{int}} \quad (11)$$

where ΔV is the saturation signal. Substituting the appropriate values into (10) and (11), we get that the DRF is approximately equal to 50 dB.

Although the logarithmic pixels provide a large WDR, they suffer from increased FPN, reduced sensitivity, and reduced signal swing. The FPN offset component can partially be removed through a calibration procedure reported in [16]. This procedure cancels out the threshold voltage variations, but other offset components remain untouched, such as dark current spatial variations across the pixel array. An improved version of this calibrating procedure is proposed in [15]. Complex recursive algorithms that reduce the offset and even the gain components of FPN are reported in [17], but these algorithms are very difficult to implement on the same die as the pixel array.

Fig. 6. Sensitivity of the logarithmic sensor.

The loss of sensitivity and, as a result of it, the loss of information can clearly be seen when we derive the sensitivity of the logarithmic sensor

$$S = \frac{dV_{sense}}{di_{ph}} = \frac{d}{di_{ph}}\left[\frac{kT}{q}\ln\left(\frac{i_{ph}}{i_{dc}}\right)\right] = \frac{kT}{q\, i_{ph}} \quad (12)$$

From Fig. 6, we can tell that, although the sensor sensitivity is greater than zero throughout five decades, it drops virtually to zero after only three decades, making high light intensities very difficult to distinguish.

As previously mentioned, the threshold voltage Vth limits the output voltage swing of a conventional logarithmic pixel to a few hundred millivolts. The problem of reduced swing in the logarithmic pixel is addressed in [18], where a load transistor is added within the pixel. While increasing the output voltage swing, this solution results in increased gain FPN. Moreover, the logarithmic pixels suffer from image lag. This undesired lag effect is most pronounced at low light conditions, and it is caused by a long settling time constant that can exceed the frame time.

B. Multimode Sensors

Multimode sensors have multiple modes of operation, exploiting the advantages of each mode. At low illumination conditions, they operate as conventional linear pixels, whereas, at high illuminations, they operate as companding pixels. A possible implementation of a multimode sensor is presented in [19]. In this implementation, the pixel can operate both in a conventional integration mode and in a current readout mode. In this case, the final logarithmic conversion of the current to voltage is performed outside the pixel. Another possible implementation of a multimode sensor is presented in [20], where the sensor passes from linear to logarithmic mode by itself during the frame. On the other hand, in the pixel depicted in Fig. 7, the transfer function can be switched between logarithmic and linear modes. Consequently, a nondestructive readout in either linear or logarithmic mode can be performed at the pixel level and then read out using an analog buffer, as presented in [21] and [22]. In the linear integration mode of operation, the SNR is given by (1).


Fig. 7. Linear–logarithmic multimode pixel with voltage readout.

Fig. 8. (a) Calculated SNR of single-mode sensors (logarithmic and linear). (b) Combined SNR of the multimode sensor.

The SNR of a pixel operating in a continuous logarithmic mode can be expressed using (8). Therefore, by deliberately "stitching" the regions of sensor operation, we can achieve an optimal SNR for each of the intensities within the DR.

Fig. 8(a) shows the SNR curves of single-mode sensors (the logarithmic and the linear modes of operation), whereas Fig. 8(b) presents the combined SNR characteristics of a multimode sensor that can operate in both linear and logarithmic modes.

The criterion for switching between operating modes is derived from the comparison between the output signals received at the same illumination levels. It is obvious that the linear mode is preferable over the logarithmic one prior to pixel saturation. Once the pixel has become saturated, the logarithmic mode is preferable. The DR extension of a multimode sensor is achieved in the logarithmic mode; hence, the DR extension is expressed by (10).
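The mode-selection rule just described can be sketched as follows. `multimode_output` is a hypothetical helper, not from the paper; the linear signal follows the numerator of (1) and the logarithmic signal follows (6), with the Section II constants:

```python
import math

q, k, T = 1.602e-19, 1.381e-23, 300.0   # charge [C], Boltzmann [J/K], temp [K]
C_int, t_int, V_sat = 10e-15, 30e-3, 1.0
i_dc = 0.1e-15                           # dark current [A]

def multimode_output(i_ph):
    """Pick the linear output below saturation, else the logarithmic one."""
    v_lin = i_ph * t_int / C_int         # linear-mode signal, numerator of (1)
    if v_lin < V_sat:                    # below saturation: keep linear mode
        return ("linear", v_lin)
    return ("log", k * T / q * math.log(i_ph / i_dc))   # companding signal per (6)
```

A dim pixel (e.g. 1 fA) stays in the linear branch, while a photocurrent of 1 pA would saturate the linear integrator and is reported through the logarithmic branch instead, mirroring the stitched SNR of Fig. 8(b).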

The following values were used for the calculation:

Qmax = 62 500 e−, T = 300 K, σr = 100 μV, Cint = 10 fF, NF = 65 e−, tint = 30 ms.
Lin. mode: σPRNU = 0.1%, σOff_FPN = 0.1%.
Log. mode: σPRNU_log = 0.0026 V/iph, σOff_FPN = 0.3%.

Fig. 9. Sensitivity of the multimode sensor.

The sensitivity of the multimode sensor is depicted in Fig. 9. The NF is assessed using (2). As can be seen, after the pixel switches to logarithmic operation, the sensitivity drops by several orders of magnitude.

C. Clipping (Capacitance Adjustment) Sensors

Clipping sensors are sensors where a capacity well adjustment method is applied. In this method, the integrating capacitance of a certain pixel is increased throughout the integration period. For this algorithm, the DR extension is a function of the pixel capacitance and the amount of time that this capacitance integrates the charge. In this technique, the whole integration time can be divided into a certain number of slots. At the beginning of each slot, the integrating capacitance is set to a higher value. Assuming that the integrating capacitance is being reset every integration slot, we can relate a corresponding DRFi that can be expressed as follows:

$$\mathrm{DRF}_i = 20\log_{10}\left(\frac{\sum_{n=0}^{i} C_n}{\sum_{n=0}^{i-1} C_n}\cdot\frac{t_{i-1}}{t_i}\right),\qquad C_0 = C_{int},\ \ 1 \le i \le M \quad (13)$$

where Cn is the added capacitance during the time interval [tn, tn+1], ti is the ith integration time slot, and M is the number of capacitance adjustment steps. Therefore, the expression for the overall DRF is the combination of all the aforementioned extension factors

$$\mathrm{DRF} = 20\log_{10}\left[\left(\frac{C_0+C_1}{C_0}\right)\frac{t_0}{t_1}\cdot\left(\frac{C_0+C_1+C_2}{C_0+C_1}\right)\frac{t_1}{t_2}\cdots\left(\frac{C_0+\cdots+C_M}{C_0+\cdots+C_{M-1}}\right)\frac{t_{M-1}}{t_M}\right] = 20\log_{10}\left(\frac{\sum_{n=0}^{M} C_n}{C_0}\cdot\frac{t_0}{t_M}\right) \quad (14)$$


Fig. 10. Schematic diagram of a pixel with lateral overflowing capacitance (LOFIC).

If the pixel capacitance is being reset only at the beginning of a new frame, then relations (13) and (14) become independent of the time ratios.
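The telescoping step from the per-slot factors of (13) to the closed form of (14) can be verified numerically. The capacitance and slot values below are illustrative, not taken from the paper:

```python
import math

def drf_db(caps, times):
    """Product of the per-slot DRF factors of (13), in dB.

    caps  = [C0, ..., CM] : Cint plus the capacitances added at each step [F]
    times = [t0, ..., tM] : the integration time slots [s]
    """
    total = 1.0
    for i in range(1, len(caps)):
        total *= (sum(caps[:i + 1]) / sum(caps[:i])) * (times[i - 1] / times[i])
    return 20 * math.log10(total)

caps = [10e-15, 20e-15, 40e-15]   # C0 = Cint = 10 fF plus two added caps (example)
times = [20e-3, 8e-3, 2e-3]       # shrinking integration slots (example)

# Closed form of (14): (sum of caps / C0) * (t0 / tM)
closed_form = 20 * math.log10(sum(caps) / caps[0] * times[0] / times[-1])
```

For any choice of capacitances and slot times, the intermediate sums and times cancel pairwise, so `drf_db(caps, times)` equals `closed_form`, which is the content of (14).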

A possible implementation of this method was demonstrated in [23], where the sensing node depth potential was controlled by an external signal that was applied to each pixel within the APS array. Another solution is proposed in [24], where two images are captured using two different integration capacitances within the pixel. Afterward, each capture is multiplied by two different gains, thus producing four images of different sensitivities. One alternative solution is to use a pixel that accommodates a lateral overflow integration capacitance (LOFIC) [25]–[27], as shown in Fig. 10. The idea is to collect the photogenerated charges that overcome the potential barrier of the ConLOFIC switch that separates the photosensing area from the pixel sensing node. Thus, there are two signals that are read out from the pixel. The first is the signal that is completely transferred to the integrator capacitance when the ConLOFIC switch is disconnected from the LOFIC capacitance. The second is the signal that is stored on both the LOFIC and integrator capacitances that are connected by the ConLOFIC switch.

In this case, the DRF expression is simplified to

$$\mathrm{DRF} = 20\log_{10}\frac{C_{int} + C_{LOFIC}}{C_{int}} \quad (15)$$

where Cint and CLOFIC are the integrator and overflow capacitances, respectively. The "Reset_L" signal is used to reset the LOFIC capacitance before it can be connected to the "sense" node. The SNR of a pixel that does not utilize the LOFIC capacitance is given by (1).
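Relation (15) is a one-line computation. Using the capacitances from the Fig. 11 calculation later in this section (Cint = 10 fF, CLOFIC = 200 fF):

```python
import math

C_int, C_lofic = 10e-15, 200e-15                       # [F], from the Fig. 11 values
DRF_dB = 20 * math.log10((C_int + C_lofic) / C_int)    # LOFIC DRF per (15)
```

This yields about 26.4 dB, matching the 26-dB DRF quoted for the LOFIC pixel.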

On the other hand, the SNR of a pixel that utilizes the LOFIC capacitance is shown in (16). Q̃max denotes the full well capacity of the combined capacitance, which consists of the initial and the overflowing capacitances.

If a correlated double sampling (CDS) procedure is performed, then the "kTC" component can be removed from (16).

Fig. 11. (a) Calculated SNR of signals accumulated with and without LOFIC. (b) Combined SNR response.

However, in this case, the CDS operation requires a memory circuit for storing the reset pixel level.

Fig. 11(a) presents the SNR calculation of each signal separately: with LOFIC and without LOFIC (no CDS). The combined SNR calculation results are shown in Fig. 11(b).

The following values were used in the calculation presented in Fig. 11:

T = 300 K, tint = 30 ms, Vswitch = 0.8Vsat = 1.6 V, NF = 95 e−.
Regular mode: Cint = 10 fF, Qmax = 62 500 e−, σPRNU = 0.1%, σOff_FPN = 0.15%, σr = 98 μV.
LOFIC mode: Cint = 10 fF, CLOFIC = 200 fF, Q̃max = 1.25 Me−, σPRNU = 0.1%, σOff_FPN = 0.15%, σr = 5 μV.

$$\mathrm{SNR} = 20\log_{10}\frac{\frac{i_{ph}t_{int}}{C_{int}+C_{LOFIC}}}{\sqrt{\sigma_r^2 + \frac{q(i_{ph}+i_{dc})t_{int}}{(C_{int}+C_{LOFIC})^2} + \frac{\sigma_{PRNU}^2\, i_{ph}^2 t_{int}^2 + \sigma_{Off\_FPN}^2\, \tilde{Q}_{max}^2}{(C_{int}+C_{LOFIC})^2} + \frac{2kT}{C_{int}+C_{LOFIC}}}} \quad (16)$$

Fig. 12. Calculated sensitivity for the LOFIC sensor.

The pixel in Fig. 10 can be formed using five transistors. Its offset FPN is a bit larger relative to the reference 3T pixel. The NF is assessed using (2). The DRF calculated using (15) is equal to 26 dB. Vswitch is the boundary discharge voltage for the signal processing. When the pixel does not pass Vswitch, the preferred signal is that with the higher sensitivity; otherwise, the output signal will be the voltage that was accumulated on the Cint and CLOFIC capacitances. The maximum SNR (not in the logarithmic scale) is improved beyond √Qmax since the amount of charge that can be utilized for signal evaluation is increased according to the capacitance ratio (15). Moreover, it will increase even further when the column capacitance that is shared by all pixels in the same column is utilized for the accumulation of charge flowing out of the chosen pixel [19]. The sensitivity of such a pixel varies according to the capacitance utilized for signal processing. In regular mode, the sensitivity is given by (3). When LOFIC is utilized, the sensitivity is evaluated by

S = \frac{dV_{sense}}{di_{ph}} = \frac{d}{di_{ph}}\left(\frac{i_{ph}t_{int}}{C_{int}+C_{LOFIC}}\right) = \frac{t_{int}}{C_{int}+C_{LOFIC}}. \qquad (17)

The combined sensitivity for the LOFIC sensor is depicted in Fig. 12.
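As a rough numerical check (our own sketch, not part of the original analysis), eq. (16) can be evaluated directly; the function name and the zero dark-current default are assumptions, and the parameter values are taken from the LOFIC-mode list above.

```python
import math

# Physical constants
q = 1.602e-19   # electron charge [C]
k = 1.381e-23   # Boltzmann constant [J/K]

def snr_lofic_db(i_ph, t_int=30e-3, C_int=10e-15, C_lofic=200e-15,
                 T=300.0, sigma_r=5e-6, sigma_prnu=1e-3,
                 sigma_off=1.5e-3, Q_max_e=1.25e6, i_dc=0.0):
    """SNR of the LOFIC mode, following eq. (16)."""
    C = C_int + C_lofic
    signal = i_ph * t_int / C                                 # accumulated voltage [V]
    noise_var = (sigma_r**2                                   # readout noise
                 + q * (i_ph + i_dc) * t_int / C**2           # shot noise
                 + (sigma_prnu**2 * (i_ph * t_int)**2
                    + sigma_off**2 * (Q_max_e * q)**2) / C**2 # gain + offset FPN
                 + 2 * k * T / C)                             # kTC (no CDS)
    return 20 * math.log10(signal / math.sqrt(noise_var))

# Near the extended full well, the SNR exceeds the sqrt(Q_max) = 48-dB bound
# of the reference 62 500-e- pixel, as the text argues.
print(round(snr_lofic_db(6.68e-12), 1))
```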

Another WDR method that utilizes clipping of the pixel response is presented in [28]. In this method, multiple partial resets are applied to the pixel during the frame, each at a certain time point. Each reset event sets the pixel potential to some intermediate value. The values of the intermediate resets are sorted in descending order. Consequently, the brighter pixels are clipped by a certain reset, whereas the darker pixels continue to integrate untouched. The difference in time and in value between adjacent reset events sets the DRF for that integration slot as follows:

\mathrm{DRF}_i = 20\log_{10}\left(\frac{Q_i - Q_{i-1}}{t_i - t_{i-1}}\cdot\frac{t_{int}}{Q_{max}}\right) \qquad (18)

where Qi is the maximal charge that can be accumulated until the time ti. The overall DRF is given by

\mathrm{DRF} = 20\log_{10}\left(\frac{Q_{max} - Q_M}{t_{int} - t_M}\cdot\frac{t_{int}}{Q_{max}}\right) \qquad (19)

Fig. 13. Calculated SNR for the multiple-partial-reset sensor.

where QM and tM are the charge and the time point before the last integration slot, respectively. The SNR for such a sensor can be calculated as shown in (20). The calculated SNR is depicted in Fig. 13.

The calculation is performed assuming the following values:

Qmax = 62 500 e−, Q0 = 0, Q1 = Qmax/2, Q2 = 3Qmax/4, Q3 = 7Qmax/8, Q4 = Qmax;
t0 = 0, t1 = 3tint/4, t2 = 15tint/16, t3 = 511tint/512, t4 = tint, tint = 30 ms;
Cint = 10 fF, T = 300 K, σr = 96 μV, σPRNU = 0.1%, σOff_FPN = 0.1%, NF = 86 e−.
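To make the arithmetic in (18) and (19) concrete, here is a small sketch (ours, not the authors') evaluating the reset schedule above.

```python
import math

t_int = 30e-3
Q_max = 62_500                          # full-well charge [e-]
# Partial-reset schedule from the text: (Q_i, t_i) pairs
Q = [0, Q_max / 2, 3 * Q_max / 4, 7 * Q_max / 8, Q_max]
t = [0, 3 * t_int / 4, 15 * t_int / 16, 511 * t_int / 512, t_int]

def drf_slot_db(i):
    """DR extension of integration slot i, following eq. (18)."""
    return 20 * math.log10((Q[i] - Q[i - 1]) / (t[i] - t[i - 1]) * t_int / Q_max)

# Overall DRF per eq. (19): (Q_M, t_M) are taken before the last slot, i.e. (Q[3], t[3])
drf_total = 20 * math.log10((Q_max - Q[3]) / (t_int - t[3]) * t_int / Q_max)
print(round(drf_total, 1))  # 36.1, matching the 36 dB quoted in the text
```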

The pixel utilized in this method is very simple, and its structure is identical to that of the reference 3T pixel in Fig. 1. However, the NF is higher than that of the reference sensor, since the thermal noise signals induced by the partial resets are not correlated

\mathrm{NF} = \frac{C_{int}}{q}\sqrt{\sigma_r^2+\frac{2kT}{C_{int}}+\sigma_{Off\_FPN}^2\frac{Q_{max}^2}{C_{int}^2}+\frac{qi_{dc}t_{int}}{C_{int}^2}}. \qquad (21)
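As a sketch (our own, with the dark-current term dropped because no i_dc value is quoted), eq. (21) reproduces the quoted 86-e− noise floor to within a couple of electrons:

```python
import math

q, k = 1.602e-19, 1.381e-23
C_int = 10e-15           # integration capacitance [F]
T = 300.0
sigma_r = 96e-6          # readout noise [V]
sigma_off = 1e-3         # offset FPN (0.1%)
Q_max = 62_500 * q       # full-well charge [C]

# Eq. (21); the q*i_dc*t_int/C^2 term is omitted here because
# the text does not quote a dark-current value.
nf_var = (sigma_r**2 + 2 * k * T / C_int
          + sigma_off**2 * Q_max**2 / C_int**2)
nf_electrons = C_int / q * math.sqrt(nf_var)   # convert volts back to electrons
print(round(nf_electrons, 1))
```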

The DRF is calculated using (19) to be 36 dB. The partial-reset method extends the DR without the need for an additional in-pixel capacitance. However, it suffers from low sensitivity

S = \frac{dV_{sense}}{di_{ph}} = \frac{d}{di_{ph}}\left(\frac{i_{ph}(t_{int}-t_i)}{C_{int}}\right) = \frac{t_{int}-t_i}{C_{int}}. \qquad (22)

The decrease in sensitivity can be seen in Fig. 14.

\mathrm{SNR} = 20\log_{10}\frac{\dfrac{Q_i + i_{ph}(t_{int}-t_i)}{C_{int}}}{\sqrt{\sigma_r^2+\dfrac{q\left((i_{ph}+i_{dc})(t_{int}-t_i)+Q_i\right)}{C_{int}^2}+\dfrac{\sigma_{PRNU}^2\left(Q_i^2+i_{ph}^2(t_{int}-t_i)^2\right)+\sigma_{Off\_FPN}^2 Q_{max}^2}{C_{int}^2}+\dfrac{2kT}{C_{int}}}} \qquad (20)


Fig. 14. Calculated sensitivity for the multiple-partial-reset sensor.

Fig. 15. Conceptual diagram of a light-to-frequency converting pixel.

D. Frequency-Based Sensors

In these sensors, light intensity is converted into pulse frequency [29]–[32]. The conceptual diagram of this type of pixel is depicted in Fig. 15. When the voltage at the “sense” node drops below the reference voltage Vref, the comparator generates a pulse that updates the data in the “Digital Storing Unit” block and resets the integrator by means of the feedback signal “Self Reset.” The frequency of the spikes generated by a certain light intensity can be calculated as follows:

f = \frac{i_{ph}+i_d}{C_{int}(V_{reset}-V_{ref})} = \frac{i_{ph}+i_d}{C_{int}\Delta V} \qquad (23)

where Vreset represents the reset voltage of the photodiode and Vref the reference voltage; ΔV is the difference between Vreset and Vref. At the end of the frame, the digital data Dout are read out for final processing. In this way, the pixel outputs data already in the digital domain.
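A minimal numeric sketch of eq. (23); the voltage and current values below are illustrative assumptions, not taken from the paper.

```python
# Spike frequency of a light-to-frequency pixel, eq. (23).
C_int = 10e-15             # integration capacitance [F]
V_reset, V_ref = 2.5, 1.5  # reset and comparator reference voltages [V]
i_ph, i_d = 1e-12, 10e-15  # photocurrent and dark current [A]

delta_v = V_reset - V_ref
f = (i_ph + i_d) / (C_int * delta_v)   # self-reset spike rate [Hz]
print(f)  # about 101 spikes/s: 1.01 pA discharging 10 fF over a 1-V swing
```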

It can be assessed that implementing the pixel depicted in Fig. 15 will require at least 12 transistors.

Note that, in these sensors, a light intensity that does not cause the pixel to discharge beyond Vref is not recognizable. Thus, the DR of such a sensor is defined by the ratio between the maximal counter switching rate fmax and the minimal one, which can be expressed by means of the nominal integration time tint

\mathrm{DR} = 20\log_{10}\frac{f_{max}}{1/t_{int}} = 20\log_{10}(f_{max}t_{int}). \qquad (24)

Fig. 16. Schematic diagram of a TTS pixel.

Assuming fmax = 5 MHz (similar numbers are quoted in [30]), the DR is equal to 105 dB.

The DRF for light-to-frequency sensors is not calculated, since these sensors detect saturation events only.

The SNR at the input of the comparator is given by

\mathrm{SNR} = 20\log_{10}\frac{\Delta V}{\sqrt{\sigma_r^2+\dfrac{q\Delta V}{C_{int}}+\left(\tilde{\sigma}_{PRNU}^2+\sigma_{Off\_FPN}^2\right)(\Delta V)^2+\dfrac{kT}{C_{int}}}} \qquad (25)

where the gain FPN component is a constant that is given by

\tilde{\sigma}_{PRNU}^2 = \sigma_{PRNU}^2\,\frac{i_{ph}^2 t_{sat}^2}{C_{int}^2} = \sigma_{PRNU}^2(\Delta V)^2 \qquad (26)

where tsat is the time that it takes the pixel to discharge from Vreset to Vref. The NF is calculated using (21), with the kTC component taken into account only once. The assessed noise is 210 electrons.

It can easily be seen from (25) that the SNR curve for the frequency-based sensor is a straight line throughout the whole DR. If ΔV is set to its maximum possible value, the SNR reaches its peak value for every light intensity within the DR. This is an advantage over other WDR sensors, where SNR dips exist in the extended DR due to the sensitivity drop. However, setting ΔV to its maximum limits the sensor imaging abilities to highly illuminated scenes. Leveling down ΔV enables imaging darker scenes but at the expense of an SNR decrease. The saturation detection occurs with reference to a time-constant voltage; therefore, the sensitivity, as defined in this paper, is not calculated for this group of sensors.

E. TTS Sensors

In these imagers, the information is encoded in the time at which the pixel is detected as saturated. Various algorithms based on this principle have been presented in the literature [33]–[40]. A general schematic diagram of a TTS pixel is presented in Fig. 16. Each pixel is connected to a global varying reference voltage, generated by the voltage ramp Vramp. When the “sense” node exceeds this voltage, the pixel comparator fires a pulse, Vcomp, indicating pixel saturation. This pulse causes a time-variant signal that encodes the time of the saturation (Timeref) to be written into the “Saturation Event Memory.” By knowing the time until the first firing event and the reference voltage at the time of the event, the final image can be


Fig. 17. Calculated SNR for the TTS sensor.

reconstructed. In the particular case when Vcomp is kept constant and there was no firing event denoting pixel saturation, the light intensity can conventionally be retrieved by reading the pixel analog value (mantissa) through a buffer.

Generally, the highest achievable current is proportional to the fastest rate of the voltage ramp change. Therefore, the DR extension here is

\mathrm{DRF} = 20\log_{10}\frac{t_{int}}{t_{toggle\_min}} \qquad (27)

where ttoggle_min is the minimal detectable saturation time. Its value is constrained by the system performance and particularly by the properties of the ADC within the pixel. Equations (28) and (25) describe the SNR characteristics of this algorithm for the case of a piecewise reference voltage with infinite resolution. For pixels that saturate (i.e., reach ΔV) within tint, the SNR will be constant. For the less illuminated pixels, integration proceeds throughout the whole integration period

\mathrm{SNR} = 20\log_{10}\frac{\dfrac{i_{ph}t_{int}}{C_{int}}}{\sqrt{\sigma_r^2+\dfrac{q(i_{ph}+i_{dc})t_{int}}{C_{int}^2}+\dfrac{\sigma_{PRNU}^2 i_{ph}^2 t_{int}^2+\sigma_{Off\_FPN}^2 Q_{max}^2}{C_{int}^2}+\dfrac{2kT}{C_{int}}}}. \qquad (28)

If a monotonically rising ramp, which climbs from 0 V to ΔV during tint, is applied, the SNR is calculated with (28) for all illumination levels, with tint in (28) replaced by ttoggle (the comparator switching time), which is given by

t_{toggle} = \frac{\Delta V}{\dfrac{\Delta V}{t_{int}}+\dfrac{i_{ph}}{C_{int}}}. \qquad (29)

The calculated results are depicted in Fig. 17. The following values were used in the SNR calculation shown in Fig. 17:

Qmax = 62 500 e−, ΔV = 1 V, T = 300 K, σr = 190 μV, Cint = 10 fF, NF = 206 e−, σPRNU = 0.3%, σOff_FPN = 0.3%, tint = 30 ms.
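The ramp-mode behavior can be sketched numerically (our own code and naming; dark current defaults to zero) by combining eqs. (28) and (29) with the values above:

```python
import math

q, k = 1.602e-19, 1.381e-23
C_int, t_int, T = 10e-15, 30e-3, 300.0
delta_v = 1.0
sigma_r, sigma_prnu, sigma_off = 190e-6, 3e-3, 3e-3
Q_max = 62_500 * q   # full-well charge [C]

def t_toggle(i_ph):
    """Comparator switching time under a monotonic ramp, eq. (29)."""
    return delta_v / (delta_v / t_int + i_ph / C_int)

def snr_db(i_ph, i_dc=0.0):
    """Eq. (28) with t_int replaced by t_toggle for the ramp case."""
    t = t_toggle(i_ph)
    sig = i_ph * t / C_int
    var = (sigma_r**2 + q * (i_ph + i_dc) * t / C_int**2
           + (sigma_prnu**2 * (i_ph * t)**2 + sigma_off**2 * Q_max**2) / C_int**2
           + 2 * k * T / C_int)
    return 20 * math.log10(sig / math.sqrt(var))

print(t_toggle(1e-16))          # dim pixel: switching time converges to t_int
print(round(snr_db(1e-9), 1))   # bright pixel: SNR flattens in the mid-40s dB
```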

Fig. 18. Calculated sensitivity for the TTS sensor.

Due to the relatively complex pixel structure, which consists of at least eight transistors, these SNR simulations were performed with higher FPN components and a higher readout variance. The increased gain FPN prevents the sensor from reaching as high an SNR as the reference sensor, although the differences in the maximum SNR values are minor.

The sensitivity of a TTS sensor depends on the shape of the time-varying signal Vramp (Fig. 16) and is calculated as

S = \frac{dV_{sense}}{di_{ph}} = \frac{d}{di_{ph}}\left(\frac{i_{ph}t_{toggle}}{C_{int}}\right) = \frac{t_{toggle}}{C_{int}}+\frac{i_{ph}}{C_{int}}\frac{dt_{toggle}}{di_{ph}}. \qquad (30)

The NF is calculated using (2). From Fig. 17, we can conclude that the DRF of this method exceeds 50 dB. From Fig. 18, where the sensitivity of the TTS sensor is depicted, it can easily be understood that, for low light intensities, the preferable way of signal processing is the voltage readout by means of the buffer (Fig. 16), since the switching time converges to tint (29). For the last two decades of light intensities, which are “squeezed” into a very small voltage range and for which the switching time converges to being inversely proportional to iph, the precision of the signal processing depends on the resolution of the saturation-time representation.

F. Sensors With Global Control Over the Integration Time

The concept of global control over the integration time is to sample the pixel array after a predetermined integration time and to reset the whole array regardless of the amount of charge each pixel has integrated until then [41]–[53]. There are several solutions implementing this concept.

One of the solutions is the conventional multiple-capture algorithm. In this algorithm, the whole sensor integrates over different exposure times regardless of the incoming light intensity [41]–[44]. The final image can be reconstructed by choosing, for each pixel, the value closest to saturation. The general structure of a pixel used to implement the conventional multiple-capture algorithm is the same as for the reference pixel (see Fig. 1). It operates identically to the reference pixel for each exposure.


Fig. 19. Calculated SNR characteristics for a sensor utilizing conventional multiple captures.

To simulate the conventional multiple-capture algorithm, the sensor signal is calculated for five different exposure times, and, after each exposure, the image is read out. The DR extension is the ratio between the longest and the shortest exposures and is given by

\mathrm{DRF} = 20\log_{10}\frac{t_{max\_exposure}}{t_{min\_exposure}} \qquad (31)

where tmax_exposure and tmin_exposure are the maximal and the minimal exposure times, respectively. The SNR in this WDR scheme can be represented by (1), where tint is replaced with one of the exposure times tint_i. The SNR calculation results are presented in Fig. 19.

The following values were used during the simulation depicted in Fig. 19:

Qmax = 62 500 e−, T = 300 K, σr = 48 μV, Cint = 10 fF, σPRNU = 0.1%, σOff_FPN = 0.1%, NF = 65 e−, tint_0 = 20 ms, tint_1 = 7.5 ms, tint_2 = 1.8755 ms, tint_3 = 469 μs, tint_4 = 117 μs.

The NF is the same as for the reference sensor. The DRF is calculated to be 45 dB using (31).

The SNR dips in Fig. 19 are explained by the expression

\mathrm{SNR}_{dip} \cong 20\log_{10}\sqrt{\frac{t_{int\_i}}{t_{int\_i+1}}} = 10\log_{10}\frac{t_{int\_i}}{t_{int\_i+1}}, \qquad i = 0,\ldots,4 \qquad (32)
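A quick sketch (our own, with names chosen for illustration) of eqs. (31)–(33) for the exposure set above:

```python
import math

C_int = 10e-15
exposures = [20e-3, 7.5e-3, 1.8755e-3, 469e-6, 117e-6]  # t_int_i from the text

# DR extension factor, eq. (31): ratio of longest to shortest exposure
drf = 20 * math.log10(max(exposures) / min(exposures))  # ~44.7 dB (quoted as 45 dB)

# SNR dip at each switch between successive exposures, eq. (32)
dips = [10 * math.log10(a / b) for a, b in zip(exposures, exposures[1:])]

# Per-exposure sensitivity, eq. (33), expressed in V/pA
sens_v_per_pa = [t / C_int * 1e-12 for t in exposures]  # 2.0 V/pA for the 20-ms capture
```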

where tint_i and tint_i+1 are the successive exposure periods. The sensitivity for such a sensor is given by

S = \frac{dV_{sense}}{di_{ph}} = \frac{d}{di_{ph}}\left(\frac{i_{ph}t_{int\_i}}{C_{int}}\right) = \frac{t_{int\_i}}{C_{int}}. \qquad (33)

The sharp falloffs in the sensitivity, as seen in Fig. 20, are caused by the reduction in the capture times for the higher light intensities. It can easily be understood from (32) and (33) that, as the DR extension increases, the SNR dips and

Fig. 20. Calculated sensitivity for a sensor utilizing conventional multiple captures.

Fig. 21. SNR for a sensor utilizing overlapping multiple captures.

sensitivity reduction will become more severe. Obviously, there is a tradeoff between the effort to increase the DR extension and the effort to keep the SNR drop as small as possible. One of the ways to minimize the SNR drop is to perform multiple captures at a high frequency [45], [47]. However, performing captures at a high frequency complicates the pixel structure and increases the number of A–D conversions.

In the conventional multiple-capture method, the longest integration time is shorter than the frame time due to the need to accommodate additional exposures within the same frame. Consequently, the SNR for the low-end illumination intensities is reduced. A possible solution for extending the longest integration time is the overlapping multiple-capture technique, presented in [51]–[53]. In this technique, two captures, one long and one short, occur at the same time. In other words, the captures overlap one another; thus, the charge integrated over a long period of time is not dumped before the short integration period occurs.

In order to prevent an undesired charge dump during the frame, the whole array is reset not to the highest possible value but to some intermediate one.

The pixel that utilizes overlapping multiple captures is identical to the reference 3T pixel (Fig. 1). The SNR in Fig. 21 is assessed in the same way as for the conventional multiple captures:

Qmax = 62 500 e−, T = 300 K, σr = 48 μV, Cint = 10 fF, σPRNU = 0.1%, σOff_FPN = 0.1%, NF = 65 e−, tint_0 = 30 ms, tint_1 = 117 μs.

Fig. 22. Sensitivity of a sensor utilizing overlapping multiple captures.

Fig. 23. Schematic diagram of a conditionally reset pixel.

The large SNR drop is caused by a sharp decrease in the short capture time. This can be seen in Fig. 22, where the sensitivity is depicted.

By cascading the DR extensions, the SNR dips can be reduced, albeit at the expense of lowering the frame rate [53].

G. Sensors With Autonomous Control Over the Integration Time

The sensors that belong to this group automatically adjust the integration time of each pixel according to the illumination intensity [54]–[60] by adding conditional reset circuitry to the pixels (Fig. 23). In such sensors, every pixel is nondestructively read out to a logic circuitry at certain time points. This circuitry, based on comparators, compares the pixel value with a time-invariant threshold voltage and generates a signal notifying whether the pixel has discharged below the threshold or not.

Fig. 24. SNR for a sensor utilizing autonomous control over the integration time.

If the pixel has discharged below the threshold, the signal generated by the logic will reset the pixel by means of the Logic Decision signal fed to the Conditional Reset switch (Fig. 23). If the pixel signal level is above the threshold, the pixel continues to integrate without reset until the next frame. The number of resets performed on a certain pixel is stored in the memory unit and is updated every time the pixel is checked for reset. At the end of the frame, the pixel’s analog value (mantissa) is read out and converted by the ADC. The digital information regarding the number of resets is read out of the memory and is used for autoscaling of the digitized mantissa value.

A possible four-transistor implementation of a pixel with the conditional reset feature is presented in [61]. The DR extension is equal to the ratio between the nominal integration time and the minimal exposure time and thus can be calculated by (31). The calculation of the SNR for the current group of sensors depends on the shutter mode they are based upon (rolling [54] or global [57], [58], [60]).

For sensors that operate in rolling shutter mode, the SNR can be calculated by (34). In this relation, tint_i denotes the exposure time.

The calculated result can be seen in Fig. 24. The SNR relation for a sensor operating in global shutter mode will be derived in a future work. The following values were used in the calculation:

Qmax = 62 500 e−, T = 300 K, σr = 96 μV, Cint = 10 fF, σPRNU = 0.1%, σOff_FPN = 0.1%, NF = 86 e−, tint_0 = 30 ms, tint_1 = 7.5 ms, tint_2 = 1.8755 ms, tint_3 = 469 μs, tint_4 = 117 μs.

\mathrm{SNR} = 20\log_{10}\frac{\dfrac{i_{ph}t_{int\_i}}{C_{int}}}{\sqrt{\sigma_r^2+\dfrac{q(i_{ph}+i_{dc})t_{int\_i}}{C_{int}^2}+\dfrac{\sigma_{PRNU}^2 i_{ph}^2 t_{int\_i}^2}{C_{int}^2}+\sigma_{Off\_FPN}^2\frac{Q_{max}^2}{C_{int}^2}+\dfrac{2kT}{C_{int}}}} \qquad (34)


Fig. 25. Calculated sensitivity for the autonomous-control-over-the-integration-time sensor.

TABLE I
SUMMARY OF THE ASSESSED PARAMETERS FOR CONVENTIONAL, LOGARITHMIC, LINEAR–LOGARITHMIC, CLIPPING, AND PARTIAL-RESET SENSORS

The calculated DRF, given by (31), is equal to 48 dB. The NF (21) of this pixel is higher than that of the conventional one due to the lack of ability to perform CDS. The analysis of the sensitivity depicted in Fig. 25 is the same as that for the multiple-capture method presented in (33).
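The autoscaling described above can be illustrated with a small model (entirely our own construction: a linear discharge model and hypothetical names), where each pixel keeps the longest exposure that does not saturate it and the stored reset count restores the full-scale value:

```python
C_int = 10e-15
exposures = [30e-3, 7.5e-3, 1.8755e-3, 469e-6, 117e-6]  # checkpoint times
V_sat = 1.0   # threshold taken equal to the full voltage swing

def capture(i_ph):
    """Return (mantissa voltage, reset count) for one pixel."""
    resets = 0
    for t in exposures:
        v = i_ph * t / C_int
        if v < V_sat:              # pixel kept this exposure slot
            return v, resets
        resets += 1                # conditional reset; try a shorter slot
    return min(v, V_sat), resets   # still saturated at the shortest slot

def reconstruct(v, resets):
    """Autoscale the digitized mantissa by the stored reset count."""
    return v * exposures[0] / exposures[min(resets, len(exposures) - 1)]

# A pixel that saturates the 30-ms slot but fits the 7.5-ms one
# reconstructs to the same equivalent full-frame voltage (3 V here).
print(reconstruct(*capture(1e-12)))
```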

IV. DISCUSSION

The purpose of this section is to discuss the pros and cons of the aforementioned WDR techniques (Tables I–III). The parameters chosen for comparison are the NF, calculated in electrons; the minimal number of transistors required for pixel implementation (Minimal # of transistors); the absolute DR (DR), calculated in decibels; the DR extension factor (DRF), evaluated in decibels; and the sensitivity of the pixel (Sensitivity), assessed in volts per picoampere. These parameters are essential for a proper understanding of the tradeoffs associated with each category and provide the basis for comparison between the reviewed algorithms. For every WDR category, the assessed sensitivity is presented not as an absolute number but rather as a range of possible values taken from the appropriate figures.

The top row in Table I presents the performance characteristics of a conventional sensor based on the reference pixel (Fig. 1). The conventional sensor (Conv.) has a low NF due to its

TABLE II
SUMMARY OF THE ASSESSED PARAMETERS FOR FREQUENCY-BASED AND TTS SENSORS

TABLE III
SUMMARY OF THE ASSESSED PARAMETERS FOR GLOBAL AND AUTONOMOUS CONTROL OVER THE INTEGRATION TIME SENSORS

simple pixel structure. Since the only operation needed after the integration is the A–D conversion, the control over the reference sensor is very simple. The conventional sensor requires only a minimal amount of circuitry, and its output signal processing is straightforward. Therefore, the power consumption of such a sensor is low. The spatial resolution can be assessed qualitatively by assuming that the pixel arrays of all the sensors discussed previously occupy the same area and are implemented in the same technology. Thus, as the minimal number of transistors within the pixel rises, the spatial resolution decreases. Consequently, sensors based on pixels with three transistors have the best spatial resolution. The sensitivity of the conventional sensor is constant throughout the whole DR, since both the pixel capacitance and the integration time are constant.

The logarithmic sensor (Log.), mentioned earlier, has a very simple pixel structure; however, its NF is higher than that of the reference pixel due to increased offset FPN. Generally, the power consumption of the logarithmic sensor is low, as it does not require any complicated circuitry at its periphery. The control over such a sensor is simple, but it becomes more complicated when calibration procedures for FPN reduction are utilized. A remarkable DR extension is achieved through compressing the sensor response, which maps a large range of light intensities to a very small voltage signal. This fact is clearly shown by the four-orders-of-magnitude difference in sensitivity between low and high photocurrents. We assess that the logarithmic sensor has the highest sensitivity of all WDR sensors in low light conditions and one of the lowest in the extended DR. To conclude, the logarithmic sensor provides a very wide DR and high spatial resolution. However, this sensor requires more complex color processing due to its nonlinear response and has a reduced sensitivity at high illumination intensities due to its companding ability.

The pixel that is capable of operating in both linear and logarithmic modes (multimode) consists of a minimum of three transistors; therefore, the spatial resolution of a sensor based on such a pixel is high. Generally, the NF in multimode


sensors can be regarded as low, since their pixels do not contain frequently switched circuitry and the CDS procedure for kTC-noise elimination can be performed. The low NF allows the multimode sensor to reach a very high DR (118 dB), while its DRF is the same as that of the pure logarithmic sensor. The control of a multimode sensor can be more complicated than that of the pure logarithmic sensor [16] due to the need to store a linearly integrated value and then compare it to the value received in the logarithmic mode of operation. In such a case, the frame rate is lower, and the power consumption is higher than in the pure logarithmic sensor. In the regular integration mode, the light intensities are linearly mapped to pixel voltage, and the sensitivity is high. However, in the logarithmic mode, high light intensities are compressed to a small swing and, as a result of that compression, the sensitivity drops abruptly. We can conclude that the multimode sensor allows reaching a very high DR with decent spatial resolution but at the expense of very low sensitivity at the high end of the illumination level and nontrivial signal processing.

Sensors utilizing the LOFIC and the multiple-partial-reset (M. Partial Resets) method are both categorized as clipping sensors. An important advantage of the latter method over the LOFIC is its simpler pixel structure, allowing a higher spatial resolution. Both of the aforementioned methods produce a piecewise linear pixel response; however, in the sensor with the LOFIC, additional signal processing is required for the output selection between two or more signals, whereas, in the multiple-partial-reset sensor, only one signal is received from the pixel; thus, the latter sensor can achieve a higher frame rate. The power consumption in the sensor using the LOFIC is higher than that consumed with partial resets due to the extensive signal processing. However, the implementation of the partial-reset method requires very accurate analog circuitry and precise timing control, since the pixel is reset to a different intermediate value every time throughout the frame. Nonuniformity of the reset values or nonuniformity in the times of the reset events throughout the APS array can cause increased gain FPN. The accuracy and precision of the reset events can limit either the size of the array or the frame rate as well. The advantages of the LOFIC sensor, compared with the partial-reset sensor, are a lower NF due to the possibility of performing CDS and a higher sensitivity in highly lighted scenes. A certain advantage of the LOFIC sensor over other WDR sensors is the maximal reachable SNR, which exceeds the √Qmax bound proportionally to the capacitance ratio (15).

Table II contains the performance characteristics of the

frequency-based and TTS sensors. These sensors are based on relatively complex pixels that include ADCs, self-reset circuitry, etc. For that reason, the NF is assessed to be higher compared with that of the other sensors. Another drawback of these sensors is that their spatial resolution is low, since the minimal transistor count within the pixel is higher relative to the other sensors. The power consumption is high, since the pixel is frequently self-reset and the counter is toggled at the same frequency [30].

In TTS sensors, the dominant power consumers are the in-pixel comparator and the peripheral circuits that perform the final calculation of the signal according to the threshold voltage and the time it took for the pixel to saturate [38]. Note that, in some TTS sensors, a very precise time-varying global reference

should be generated. The required uniformity in the distribution of the time-varying voltage throughout the pixel array can limit the spatial resolution in TTS sensors. Frequency-based sensors also necessitate supplying additional global data lines throughout the APS array. However, most of those signal lines are digital, excluding the reference voltage. The reference voltage is time invariant, so it can easily be distributed evenly to each pixel within the array. The final readout for frequency-based sensors is usually simpler than that of TTS sensors, since each pixel already contains the digitized data output (Fig. 15). In the case that the TTS pixel is digital [34], the readout procedure is the same. However, the digital data in a frequency-based sensor are generated by the pixel itself, whereas in a TTS sensor, a global bus should deliver the whole digital word to all pixels in parallel. Therefore, we assess that frequency-based sensors and TTS sensors based upon digital pixels have the highest frame rate relative to other WDR schemes. In frequency-based sensors, the mapping of the light intensity to the frequency of the reset spikes is linear. On the other hand, in TTS sensors, the mapping of the incoming light intensity to the saturation time is nonlinear. An important advantage of frequency-based and TTS sensors is that they can reach a very high DR, particularly those based on TTS, where the reference voltage control is extremely flexible [38]. Moreover, as can be learned from Table II, TTS sensors reach the highest DR and DRF.

Table III presents the assessment of the parameters of the WDR sensors that are based on the conventional-multiple-captures (Conv. Mult. Captures), overlapping-multiple-captures (Overlap. Mult. Captures), and autonomous-control-over-the-integration-time (Autonom.) algorithms.

Sensors based on these three algorithms reach remarkable DR and DRF while keeping the pixel structure very simple. Moreover, the DR extension in these sensors is very flexible and can be changed according to (31). The spatial resolution of a sensor with autonomous integration time control will be slightly below that of sensors utilizing conventional or overlapping multiple captures due to the additional transistor implementing the conditional reset ability. The sensitivity of a sensor with conventional multiple captures is lower than that with overlapping captures or autonomous integration time control due to the reduced longest integration time. In overlapping multiple captures, the sensitivity is low throughout the whole extended DR, since only one capture is utilized for the DR extension. At the high end of illumination intensities in these three WDR sensors, the sensitivity in the extended DR region falls by two orders of magnitude; however, it is still relatively higher than that in the logarithmic and TTS sensors.

The drawback of the conventional multiple-capture sensor is that it requires extensive peripheral DSP circuitry. The data processing in such a sensor involves multiple A–D conversions and frequent readout cycles from the memory units that store the digitized pixel values from the previous captures. In a multiple-exposure sensor, the frame rate can be limited by the time required to process the increasing number of possible outputs. It is possible to reduce the number of possible output signals and spare the need for memory units for storing pixel values from multiple frames if only two captures are utilized [51]. In this way, the final output signal can be retrieved immediately


after the second pixel readout. However, extending the DR using only two captures will cause a severe SNR drop at the boundary of the extended range (32). On the other hand, in sensors with autonomous control over the integration time, the control is not complicated, since the intermediate comparisons of the pixel values are performed against a constant voltage and the final processing involves only the autoscaling of a single digitized pixel value. The DR extension in such sensors is bounded by the minimal time of the memory update process, which occurs between the saturation checks.

The dominant power consumption sources in conventional multiple-capture sensors are the memory unit and the digital-signal-processing circuitry. In the case of overlapping multiple captures, the main power consumer is the signal-processing circuitry outside the pixel. The dominant power consumer in sensors with autonomous integration time control is the memory unit that stores the WDR bits.

V. SUMMARY

As the implementation technology of CISs scales down, the pixel size is reduced, increasing the spatial resolution. Further improvement in the spatial resolution can be achieved by sharing certain circuitry among adjacent pixels [51]. Power consumption is reduced as supply voltages decrease. A further power decrease can be achieved by using low-power techniques [60].

Scaling down of the pixel size causes a decrease in the intrapixel capacitance. A smaller capacitance raises the sensor sensitivity, improving its ability to detect low light intensities. However, such a capacitance reduction increases the kTC noise and decreases the maximal SNR. Consequently, the CDS procedure, which removes the kTC noise, is desirable in the final signal processing, and the in-pixel capacitance should be large enough to keep the required maximal SNR.
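The capacitance/noise tradeoff can be quantified with the standard kT/C expressions (a generic sketch, not tied to any particular sensor above):

```python
import math

k, q, T = 1.381e-23, 1.602e-19, 300.0

def ktc_noise_volts(C):
    """Reset (kTC) noise voltage, sqrt(kT/C)."""
    return math.sqrt(k * T / C)

def ktc_noise_electrons(C):
    """The same noise expressed as charge, sqrt(kTC)/q."""
    return math.sqrt(k * T * C) / q

# Shrinking the capacitance raises the noise voltage (hurting the maximal
# SNR for a fixed swing), even though the noise charge in electrons shrinks:
for C in (10e-15, 5e-15, 2.5e-15):
    print(round(ktc_noise_volts(C) * 1e6), round(ktc_noise_electrons(C), 1))
```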

Generally, according to the sensor transfer function, there are three approaches for widening the DR: nonlinear, piecewise linear, and linear. The nonlinear approach, which is utilized in logarithmic, multimode, and TTS sensors, allows reaching a very high DR but with a remarkable loss of sensitivity, which negatively affects the image quality. Sensors with a piecewise linear or linear response, such as clipping sensors and sensors with global or autonomous control over the integration time, reach a DR similar to that of nonlinear sensors, with improved sensitivity. Frequency-based sensors, which have a linear response, reach a high DR but with a loss of information in the darker scenes.

The quantitative assessments of the DR extension, NF, SNR, minimal transistor count, and sensitivity for each category of WDR sensors were performed and discussed. Power consumption was discussed qualitatively. The advantages and drawbacks of each of the seven sensor categories were discussed and summarized.

ACKNOWLEDGMENT

The authors would like to thank Dr. D. Lubzens, Dr. S. Lineykin, A. Teman, S. Glozman, T. Katz, and L. Blockstein for valuable discussions.

REFERENCES

[1] O. Yadid-Pecht, R. Ginosar, and Y. Shacham-Diamand, “Random accessphotodiode array for intelligent image capture,” IEEE Trans. ElectronDevices, vol. 38, no. 8, pp. 1772–1780, Aug. 1991.

[2] S. K. Mendis, S. E. Kemeny, R. C. Gee, B. Pain, C. O. Staller, Q. Kim, andE. R. Fossum, “CMOS active pixel image sensors for highly integratedimaging systems,” IEEE J. Solid-State Circuits, vol. 32, no. 2, pp. 187–197, Feb. 1997.

[3] E. R. Fossum, “CMOS image sensors: Electronic camera-on-a-chip,”IEEE Trans. Electron Devices, vol. 44, no. 10, pp. 1689–1698, Oct. 1997.

[4] D. N. Yaung, S. G. Wuu, Y. K. Fang, C. S. Wang, C. H. Tseng, andM. S. Liang, “Nonsilicide source/drain pixel for 0.25-μm CMOS imagesensor,” IEEE Electron Device Lett., vol. 22, no. 2, pp. 71–73, Feb. 2001.

[5] K. Cho, A. I. Krymski, and E. R. Fossum, “A 1.5-V 550 μW 176 × 144autonomous CMOS active pixel image sensor,” IEEE Trans. ElectronDevices—Special Issue on Image Sensors, vol. 50, no. 1, pp. 96–105,Jan. 2003.

[6] O. Yadid-Pecht and R. Etienne-Cummings, CMOS Imagers: From Photo-transduction to Image Processing. Norwell, MA: Kluwer, 2004.

[7] K. Mizobuchi, S. Adachi, J. Tejada, H. Oshikubo, N. Akahane, andS. Sugawa, “A low-noise wide dynamic range CMOS image sensorwith low and high temperatures resistance,” Proc. SPIE, vol. 6816,pp. 681 604-1–681 604-8, 2008.

[8] E. Stevens, H. Komori, H. Doan, H. Fujita, J. Kyan, C. Parks, G. Shi,C. Tivarus, and J. Wu, “Low-crosstalk and low-dark-current CMOSimage-sensor technology using a hole-based detector,” in Proc. ISSCC—Image Sensors and Technology, 2008, p. 60 595.

[9] S. L. Barna, “Dark current reduction circuitry for CMOS active pixel,”U.S. Patent 7 186 964 B2, Mar. 6, 2007.

[10] I. Shcherback, A. Belenky, and O. Yadid-Pecht, “Empirical dark cur-rent modeling for complementary metal oxide semiconductor active pixelsensor,” Opt. Eng.—Special Issue on Focal Plane Arrays, vol. 41, no. 6,pp. 1216–1219, Jun. 2002.

[11] O. Yadid-Pecht, “Wide dynamic range sensors,” Opt. Eng., vol. 38, no. 10, pp. 1650–1660, Oct. 1999.

[12] D. Yang and A. El Gamal, “Comparative analysis of SNR for image sensors with enhanced dynamic range,” Proc. SPIE, vol. 3649, pp. 197–211, Jan. 1999.

[13] S. G. Chamberlain and J. P. Lee, “Silicon imaging arrays with new photoelements, wide dynamic range and free from blooming,” in Proc. Custom Integr. Circuits Conf., Rochester, NY, 1984, pp. 81–85.

[14] K. A. Boahen and A. G. Andreou, “A contrast sensitive retina with reciprocal synapses,” in Advances in Neural Information Processing Systems, vol. 4. San Mateo, CA: Morgan Kaufmann, 1992, pp. 764–772.

[15] E. Labonne, G. Sicard, M. Renaudin, and P. Berger, “A 100 dB dynamic range CMOS image sensor with global shutter,” in Proc. 13th IEEE ICECS, Dec. 2006, pp. 1133–1136.

[16] S. Kavadias, B. Dierickx, D. Scheffer, A. Alaerts, D. Uwaerts, and J. Bogaerts, “A logarithmic response CMOS image sensor with on-chip calibration,” IEEE J. Solid-State Circuits, vol. 35, no. 8, pp. 1146–1152, Aug. 2000.

[17] D. Joseph and S. Collins, “Modeling, calibration, and correction of nonlinear illumination-dependent fixed pattern noise in logarithmic CMOS image sensors,” IEEE Trans. Instrum. Meas., vol. 51, no. 5, pp. 996–1001, Oct. 2002.

[18] L.-W. Lai, C.-H. Lai, and Y.-C. King, “A novel logarithmic response CMOS image sensor with high output voltage swing and in-pixel fixed-pattern noise reduction,” IEEE Sensors J., vol. 4, no. 1, pp. 122–126, Feb. 2004.

[19] N. Akahane, R. Ryuzaki, S. Adachi, K. Mizobuchi, and S. Sugawa, “A 200 dB dynamic range iris-less CMOS image sensor with lateral overflow integration capacitor using hybrid voltage and current readout operation,” in Proc. IEEE Int. Solid-State Circuits Conf., Feb. 2006, pp. 1161–1170.

[20] C. E. Fox, J. Hynecek, and D. R. Dykaar, “Wide-dynamic-range pixel with combined linear and logarithmic response and increased signal swing,” Proc. SPIE, vol. 3965, pp. 4–10, May 2000.

[21] N. Tu, R. Hornsey, and S. Ingram, “CMOS active pixel image sensor with combined linear and logarithmic mode operation,” in Proc. IEEE Can. Conf. Elect. Comput. Eng., 1998, pp. 754–757.

[22] G. Storm, R. Henderson, J. E. D. Hurwitz, D. Renshaw, K. Findlater, and M. Purcell, “Extended dynamic range from a combined linear–logarithmic CMOS image sensor,” IEEE J. Solid-State Circuits, vol. 41, no. 9, pp. 2095–2106, Sep. 2006.

[23] S. Decker, D. McGrath, K. Brehmer, and C. G. Sodini, “A 256 × 256 CMOS imaging array with wide dynamic range pixels and column-parallel digital output,” IEEE J. Solid-State Circuits, vol. 33, no. 12, pp. 2081–2091, Dec. 1998.

[24] Y. Wang, S. L. Barna, S. Campbell, and E. R. Fossum, “A high dynamic range CMOS APS image sensor,” in Proc. IEEE Workshop Charge Coupled Devices, Adv. Image Sens., Jun. 2001, pp. 137–140.

[25] E. R. Fossum, “High dynamic range cascaded integration pixel cell and method of operation,” U.S. Patent 6 888 122 B2, May 3, 2005.

[26] N. Akahane, S. Sugawa, S. Adachi, K. Mori, T. Ishiuchi, and K. Mizobuchi, “A sensitivity and linearity improvement of a 100-dB dynamic range CMOS image sensor using a lateral overflow integration capacitor,” IEEE J. Solid-State Circuits, vol. 41, no. 4, pp. 851–858, Apr. 2006.

[27] L. Woonghee, N. Akahane, S. Adachi, K. Mizobuchi, and S. Sugawa, “A high S/N ratio and high full well capacity CMOS image sensor with active pixel readout feedback operation,” in Proc. IEEE ASSCC, Nov. 2007, pp. 260–263.

[28] D. Hertel, A. Betts, R. Hicks, and M. ten Brinke, “An adaptive multiple-reset CMOS wide dynamic range imager for automotive vision applications,” in Proc. IEEE Intell. Veh. Symp., Eindhoven, The Netherlands, Jun. 2008, pp. 614–619.

[29] K. P. Frohmader, “A novel MOS compatible light intensity-to-frequency converter suited for monolithic integration,” IEEE J. Solid-State Circuits, vol. SSC-17, no. 3, pp. 588–591, Jun. 1982.

[30] X. Wang, W. Wong, and R. Hornsey, “A high dynamic range CMOS image sensor with in-pixel light-to-frequency conversion,” IEEE Trans. Electron Devices, vol. 53, no. 12, pp. 2988–2992, Dec. 2006.

[31] L. G. McIlrath, “A low-power low-noise ultrawide-dynamic-range CMOS imager with pixel-parallel A/D conversion,” IEEE J. Solid-State Circuits, vol. 36, no. 5, pp. 846–853, May 2001.

[32] E. Culurciello, R. Etienne-Cummings, and K. Boahen, “A biomorphic digital image sensor,” IEEE J. Solid-State Circuits, vol. 38, no. 2, pp. 281–294, Feb. 2003.

[33] V. Brajovic and T. Kanade, “New massively parallel technique for global operations in embedded imagers,” in Proc. IEEE Workshop CCDs, Adv. Image Sens., Apr. 1995, pp. 1–6.

[34] A. Kitchen, A. Bermak, and A. Bouzerdoum, “PWM digital pixel sensor based on asynchronous self-resetting scheme,” IEEE Electron Device Lett., vol. 25, no. 7, pp. 471–473, Jul. 2004.

[35] A. Bermak and A. Kitchen, “A novel adaptive logarithmic digital pixel sensor,” IEEE Photon. Technol. Lett., vol. 18, no. 20, pp. 2147–2149, Oct. 15, 2006.

[36] A. Bermak and Y.-F. Yung, “A DPS array with programmable resolution and reconfigurable conversion time,” IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 14, no. 1, pp. 15–22, Jan. 2006.

[37] L. Qiang, J. G. Harris, and Z. J. Chen, “A time-to-first spike CMOS image sensor with coarse temporal sampling,” Analog Integr. Circuits Signal Process., vol. 47, no. 3, pp. 303–313, Apr. 2006.

[38] C. Shoushun and A. Bermak, “Arbitrated time-to-first spike CMOS image sensor with on-chip histogram equalization,” IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 15, no. 3, pp. 346–357, Mar. 2007.

[39] D. Stoppa, A. Vatteroni, D. Covi, A. Baschirotto, A. Sartori, and A. Simoni, “A 120-dB dynamic range CMOS image sensor with programmable power responsivity,” IEEE J. Solid-State Circuits, vol. 42, no. 7, pp. 1555–1563, Jul. 2007.

[40] T. Lule, M. Wagner, M. Verhoven, H. Keller, and M. Bohm, “10000-pixel, 120 dB imager in TFA technology,” IEEE J. Solid-State Circuits, vol. 35, no. 5, pp. 732–739, May 2000.

[41] O. Yadid-Pecht and E. R. Fossum, “Image sensor with ultra-high-linear-dynamic range utilizing dual output CMOS active pixel sensors,” IEEE Trans. Electron Devices—Special Issue on Solid State Image Sensors, vol. 44, no. 10, pp. 1721–1724, Oct. 1997.

[42] M. Mase, S. Kawahito, M. Sasaki, Y. Wakamori, and M. Furuta, “A wide dynamic range CMOS image sensor with multiple exposure-time signal outputs and 12-bit column-parallel cyclic A/D converters,” IEEE J. Solid-State Circuits, vol. 40, no. 12, pp. 2787–2795, Dec. 2005.

[43] S. Masaaki, M. Mitsuhito, K. Shoji, and T. Yoshiaki, “A wide-dynamic-range CMOS image sensor based on multiple short exposure-time readout with multiple-resolution column-parallel ADC,” IEEE Sensors J., vol. 7, no. 1, pp. 151–158, Jan. 2007.

[44] Y. Takayoshi, K. Shigetaka, M. Takahiko, and K. Yoshihisa, “A 140 dB-dynamic-range MOS image sensor with in-pixel multiple-exposure synthesis,” in Proc. IEEE Int. Solid-State Circuits Conf., Feb. 2008, pp. 50–51.

[45] L. Xinqiao and A. El Gamal, “Photocurrent estimation for a self-reset CMOS image sensor,” Proc. SPIE, vol. 4669, pp. 304–312, 2000.

[46] S. Kavusi and A. El Gamal, “Quantitative study of high dynamic range image sensor architectures,” in Proc. SPIE—Electronic Imaging, Jan. 2004, vol. 5301, pp. 264–275.

[47] S. Kavusi and A. El Gamal, “Folded multiple-capture: An architecture for high dynamic range disturbance-tolerant focal plane array,” in Proc. SPIE—Infrared Technology and Applications, Apr. 2004, vol. 5406, pp. 351–360.

[48] D. Yang, A. El Gamal, B. Fowler, and H. Tian, “A 640 × 512 CMOS image sensor with ultra wide dynamic range floating point pixel level ADC,” IEEE J. Solid-State Circuits, vol. 34, no. 12, pp. 1821–1834, Dec. 1999.

[49] J. Rhee and Y. Joo, “Wide dynamic range CMOS image sensor with pixel level ADC,” Electron. Lett., vol. 39, no. 4, pp. 360–361, Feb. 2003.

[50] A. Guilvard, J. Segura, P. Magnan, and P. Martin-Gonthier, “A digital high dynamic range CMOS image sensor with multi-integration and pixel readout request,” in Proc. SPIE—Sensors, Cameras, and Systems for Scientific/Industrial Applications VIII, San Jose, CA, Jan. 28–Feb. 1, 2007, pp. 1–10.

[51] Y. Egawa, H. Koike, R. Okamoto, H. Yamashita, N. Tanaka, J. Hosokawa, K. Arakawa, H. Ishida, H. Harakawa, T. Sakai, and H. Goto, “A 1/2.5 inch 5.2 Mpixel, 96 dB dynamic range CMOS image sensor with fixed pattern noise free, double exposure time read-out operation,” in Proc. IEEE ASSCC, Nov. 2006, pp. 135–138.

[52] E. Yoshitaka, T. Nagataka, K. Nobuhiro, S. Hiromichi, N. Akira, H. Hiroto, L. Yoshinori, and M. Makoto, “A White-RGB CFA-patterned CMOS image sensor with wide dynamic range,” in Proc. IEEE Int. Solid-State Circuits Conf., Feb. 2008, pp. 52–53.

[53] O. Yusuke, T. Atushi, T. Tadayuki, K. Akihiko, S. Hiroki, K. Masanori, and N. Tadakuni, “A 121.8 dB dynamic range CMOS image sensor using pixel-variation-free midpoint potential drive and overlapping multiple exposures,” in Proc. Int. Image Sens. Workshop, Jun. 2007, pp. 30–33.

[54] O. Yadid-Pecht and A. Belenky, “In-pixel autoexposure CMOS APS,” IEEE J. Solid-State Circuits, vol. 38, no. 8, pp. 1425–1428, Aug. 2003.

[55] T. Hamamoto and K. Aizawa, “A computational image sensor with adaptive pixel based integration time,” IEEE J. Solid-State Circuits, vol. 36, no. 4, pp. 580–585, Apr. 2001.

[56] P. M. Acosta-Serafini, I. Masaki, and C. G. Sodini, “A 1/3′′ VGA linear wide dynamic range CMOS image sensor implementing a predictive multiple sampling algorithm with overlapping integration intervals,” IEEE J. Solid-State Circuits, vol. 39, no. 9, pp. 1487–1496, Sep. 2004.

[57] A. Fish, A. Belenky, and O. Yadid-Pecht, “Wide dynamic range snapshot APS for ultra low-power applications,” IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 52, no. 11, pp. 729–733, Nov. 2005.

[58] A. Belenky, A. Fish, A. Spivak, and O. Yadid-Pecht, “Global shutter CMOS image sensor with wide dynamic range,” IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 54, no. 12, pp. 1032–1036, Dec. 2007.

[59] S. W. Han, S. J. Kim, J. H. Choi, C. K. Kim, and E. Yoon, “A high dynamic range CMOS image sensor with in-pixel floating-node analog memory for pixel level integration time control,” in Proc. Symp. VLSI Circuits, Dig. Tech. Papers, 2006, pp. 25–26.

[60] A. Belenky, A. Fish, A. Spivak, and O. Yadid-Pecht, “A snapshot CMOS image sensor with extended dynamic range,” IEEE Sensors J., vol. 9, no. 2, pp. 103–111, Feb. 2009.

[61] O. Yadid-Pecht, B. Pain, C. Staller, C. Clark, and E. R. Fossum, “CMOS active pixel sensor star tracker with regional electronic shutter,” IEEE J. Solid-State Circuits, vol. 32, no. 2, pp. 285–288, Feb. 1997.

Arthur Spivak was born on April 29, 1979. He received the B.Sc. degree in electrical engineering from Technion–Israel Institute of Technology, Haifa, Israel, in 2005 and the M.Sc. degree in electrical engineering from Ben-Gurion University of the Negev, Beersheba, Israel, in 2009, where he is currently working toward the Ph.D. degree.

Since 2006, he has been with the VLSI Systems Center, Ben-Gurion University of the Negev, where he has been engaged in the research, development, and design of analog and digital circuits integrated in CMOS imagers.

Alexander Belenky received the B.Sc. degree in physics and the M.Sc. degree in electrooptics engineering from Ben-Gurion University of the Negev, Beersheba, Israel, in 1995 and 2003, respectively, where he is currently working toward the Ph.D. degree.

Since 1998, he has been with the VLSI Systems Center, Ben-Gurion University of the Negev, where he is responsible for the VLSI laboratory. His current interests are smart CMOS image sensors, image processing, and imaging systems.

Alexander Fish (S’04–M’06) received the B.Sc. degree in electrical engineering from Technion–Israel Institute of Technology, Haifa, Israel, in 1999 and the M.Sc. and Ph.D. (summa cum laude) degrees from Ben-Gurion University of the Negev, Beersheba, Israel, in 2002 and 2006, respectively.

From 2006 to 2008, he was a Postdoctoral Fellow with the ATIPS Laboratory, University of Calgary, Calgary, AB, Canada. He is currently a Faculty Member with the Department of Electrical and Computer Engineering, Ben-Gurion University of the Negev. His research interests include low-power CMOS image sensors, analog and digital on-chip image processing, algorithms for dynamic range expansion, and low-power design techniques for digital and analog circuits. He has authored over 50 scientific papers and patent applications and two book chapters.

Dr. Fish was the recipient of the Electrical Engineering Dean Award at Technion in 1997 and the Technion President’s Award for Excellence in Study in 1998. He was a coauthor of two papers that won the Best Paper Finalist awards at the ICECS’04 and ISCAS’05 conferences. He was also the recipient of the Young Innovator Award for Outstanding Achievements in the field of information theories and applications by ITHEA in 2005. In 2006, he was honored with the Engineering Faculty Dean “Teaching Excellence” recognition at Ben-Gurion University of the Negev. He has served as a Referee for the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—I, the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—II, Sensors and Actuators Journal, the IEEE SENSORS JOURNAL, and SPIE Optical Engineering Journal, as well as the ISCAS, ICECS, and IEEE Sensors Conferences. He was also a Coorganizer of special sessions on “smart” CMOS Image Sensors at the IEEE Sensors Conference 2007 and on low-power “Smart” Image Sensors and Beyond at the IEEE ISCAS 2008.

Orly Yadid-Pecht (S’90–M’95–SM’01–F’07) received the B.Sc. degree in electrical engineering and the M.Sc. and D.Sc. degrees from Technion–Israel Institute of Technology, Haifa, Israel, in 1984, 1990, and 1995, respectively.

From 1995 to 1997, she was a National Research Council (USA) Research Fellow in the areas of advanced image sensors with the Jet Propulsion Laboratory, California Institute of Technology, Pasadena. In 1997, she joined the Departments of Electrical and Electro-Optical Engineering, Ben-Gurion University of the Negev, Beersheba, Israel, where she founded the VLSI Systems Center, specializing in CMOS image sensors. Since 2009, she has been an iCORE Professor of integrated sensors intelligent systems with the University of Calgary, Calgary, AB, Canada. Her main subjects of interest are integrated CMOS sensors, smart sensors, image processing, neural nets, and microsystem implementation. She has published over a hundred papers and patents and has led over a dozen research projects supported by government and industry. Her work has over 200 external citations. In addition, she has coauthored and coedited the first book on CMOS image sensors, CMOS Imagers: From Phototransduction to Image Processing (2004). She also serves as a Director on the board of two companies.

Dr. Yadid-Pecht has served as an Associate Editor for the IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS and as the Deputy Editor-in-Chief for the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—I. She has also been on the CAS BoG. She currently serves as an Associate Editor for the IEEE TBioCAS, the CAS Representative for the IEEE Sensors Council, and a member of the Neural Networks Nanoelectronics and Gigascale Systems committees and the Sensory Systems committee, which she chaired during 2003–2004. She was an IEEE Distinguished Lecturer of the Circuits and Systems Society in 2005. She was also the General Chair of the IEEE International Conference on Electronic Circuits and Systems and is a current member of the steering committee of this conference.