

IEEE SENSORS JOURNAL, VOL. 9, NO. 2, FEBRUARY 2009 103

A Snapshot CMOS Image Sensor With Extended Dynamic Range

Alexander Belenky, Alexander Fish, Member, IEEE, Arthur Spivak, and Orly Yadid-Pecht, Fellow, IEEE

Abstract—In this paper, a proof of concept for a snapshot CMOS image sensor with extended dynamic range is presented. A prototype of 32 × 32 pixels has been fabricated using the 1-poly 4-metal CMOS 0.35 µm process available through MOSIS and was successfully tested. The measurements from the test chip showed that the fabricated imager allows wide dynamic range (WDR) operation in a snapshot readout mode. This DR extension has become possible due to a unique in-pixel architecture allowing automatic adaptation of each pixel in the array to its illumination level. To reduce the pixel power dissipation, various low-power design techniques have been utilized in the pixel design. A single pixel occupies 18 µm × 18 µm and dissipates 23 nW with 8-bit DR expansion at room light level, and 29 nW at a high illumination level, equivalent to clear sky, at video rate. The power dissipation of the whole sensor (including the supporting circuitry) is 450 µW at video rate. The sensor design is described, design considerations are shown, and measurements from the test chip are presented.

Index Terms—CMOS imagers, high dynamic range, imagesensor, integration time, low-power, very large scale integration(VLSI).

I. INTRODUCTION

FAST development of low-power miniature CMOS image sensors triggers their penetration into various fields of our daily life. CMOS imagers offer significant advantages in terms of low power, low voltage, flexibility, cost, and miniaturization. These features make them very suitable for a variety of applications where both low power and wide intrascene dynamic range (WDR) are the main demands. While power reduction is usually achieved by technology scaling and aggressive supply voltage reduction [1], this affects the output swing of the sensor and thus decreases its dynamic range (DR) [1]–[3]. The narrow DR of image sensors entails saturation of a pixel with high sensitivity at high illumination levels, so part of the information can be lost. The DR insufficiency of conventional video cameras is a serious problem in realizing a robust vision system for capturing scenes containing a wide range of illumination conditions.

Different solutions for extending the DR in CMOS image sensors have been presented in recent years [5]–[30]. A comprehensive summary of existing solutions and their comparison has been presented in [4]. In the aforementioned paper, the WDR algorithms were divided into six general categories; however, nowadays we can divide them into eight categories, since additional WDR solutions have been proposed throughout the past nine years. The updated classification of WDR schemes and their brief overview is presented below: (a) Companding sensors, such as logarithmic, compressed-response photodetectors [5]–[7]. These sensors achieve very high DR and work in continuous, i.e., nonintegrating, mode and, therefore, can have a higher time sampling resolution per pixel, if required. A disadvantage of these sensors is that the mismatch between the transistors causes a nonlinear fixed pattern noise (FPN); (b) Multimode sensors, which have a linear and a logarithmic response at dark and bright illumination levels, respectively, i.e., they are able to switch between a linear and a logarithmic mode of operation [8], [9]. Usually, they require an extended settling time when switching and require massive post-readout signal processing; (c) Frequency-based sensors, where the sensor output is converted to pulse frequency [10]–[12]. In these sensors, the DR extension can exceed five decades, but the pixel structure becomes complex, and the power consumed by each pixel is increased; (d) Sensors where a well capacity adjustment method is applied. One of the possible implementations of this method is to control the potential depth of the sensing node well [13]. An alternative implementation utilizes an additional fixed capacitance where the overflowing charge is accumulated [14], [15]. These sensors achieve an increased DR, which is bounded by the ratio of the photodiode and overflow capacitances, while keeping the pixel structure very simple; (e) Time-to-First-Spike sensors, where the information is encoded in the time at which the pixel was detected as saturated [16]–[18]. The combination of the saturation time information with the reference voltage signal provides good flexibility for this WDR scheme; however, it requires quite complex signal processing at the chip periphery; (f) WDR sensors with global control over the integration time. The multiple exposures method is an example of global control over the integration time, where the whole sensor is allowed to integrate for different exposure times regardless of the incoming light intensity [19]–[23]. The final image can be reconstructed by choosing, for each pixel, the value closest to saturation. In [22], the pixel value is constructed by successive comparisons to a monotonic ramp signal after every integration slot. After integration stops, the pixel mantissa and the extension exponent are ready for readout. However, this method requires allocating memory to store an intermediate digitized pixel value. Alternatively, the final image can be reconstructed by combining the pixel values captured at different exposure times and the time the pixel saturated [23].

Manuscript received March 02, 2008; revised June 23, 2008; accepted July 16, 2008. Current version published January 09, 2009. The associate editor coordinating the review of this paper and approving it for publication was Prof. Paul Regtien.

The authors are with the VLSI Systems Center, Department of Electrical and Computer Engineering, Beer-Sheva 84105, Israel (e-mail: [email protected]; [email protected]; [email protected]; [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/JSEN.2008.2011073

1530-437X/$25.00 © 2009 IEEE

Authorized licensed use limited to: BEN GURION UNIVERSITY. Downloaded on June 2, 2009 at 10:26 from IEEE Xplore. Restrictions apply.


Another variation of the multiple exposures algorithm was presented in [24] and [25], where the shorter integration time overlapped the longer one. During the integration, the array was reset to some intermediate well potential. Pixels that did not pass the mid-point barrier continued to integrate without intermediate information loss. The information loss caused by the mid-point reset was compensated by autoscaling the charge accumulated during the shorter exposure time. Consequently, the signal-to-noise ratio (SNR) at low light intensity was improved in comparison to the regular multiple exposures algorithm; (g) Sensors with local control, where different areas within the sensor can have different exposure times [26]; and (h) Sensors with autonomous control over the integration time, in which the integration time is adjusted by each pixel [27]–[30]. The imager described in this paper is based on this method. The method can be regarded as a multi-reset algorithm and utilizes a conditional reset scheme, which is implemented at the pixel level. The pixel structure remains simple and the final signal processing is very straightforward: the data is integrated on the pixel capacitance, followed by readout of the signal and reset levels for subtraction and final A-D conversion. Thus, there is no need to choose among optional signals received from the multiple A-D conversions that are unavoidable in the multimode and multiple exposure sensors. Another advantage of the sensors proposed in [27]–[30] is that the threshold value to which the pixel is compared is time invariant; thus, there is no need to generate highly precise, global, time-varying voltage references.
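The multiple-exposures reconstruction described above can be sketched in a few lines. This is an illustrative model, not the authors' implementation; the saturation level and the exposure ratios are assumed values:

```python
# Per pixel, pick the longest exposure that did not saturate (the value
# "closest to saturation") and rescale it to the longest exposure's time base.
V_SAT = 1.0                        # assumed normalized saturation level
EXPOSURES = [1.0, 0.25, 0.0625]    # assumed exposure times, longest first

def reconstruct(samples):
    """samples[i] is the pixel value read after integrating for EXPOSURES[i]."""
    for value, t_exp in zip(samples, EXPOSURES):
        if value < V_SAT:
            return value * (EXPOSURES[0] / t_exp)
    # Saturated even at the shortest exposure: clamp at the extended maximum.
    return V_SAT * (EXPOSURES[0] / EXPOSURES[-1])

print(reconstruct([0.3, 0.075, 0.019]))  # dim pixel: long exposure used -> 0.3
print(reconstruct([1.0, 1.0, 0.5]))      # bright pixel: 0.5 * 16 -> 8.0
```

The dim pixel keeps the full-length exposure, while the bright pixel falls back to a short slot whose value is rescaled by the exposure ratio, which is exactly how the scheme extends the representable range.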

Conventional CMOS imagers operate in a rolling shutter mode [3], which leads to image deformation when there is relative motion between the imager and the scene. An imager operating in the global shutter (snapshot) regime, utilizing a memory element inside each pixel, solves this problem and provides capabilities similar to a mechanical shutter. This allows simultaneous integration of the entire pixel array, and then prevents exposure while the image data is read out.

Recently, we have proposed a new WDR CMOS image sensor architecture that can deal with all the above-mentioned challenges of CMOS image sensors and presented its simulation results [30]. The proposed imager was expected to provide wide DR by applying an adaptive exposure time to each pixel, according to the local illumination intensity level. Driven by low-power dissipation requirements, the proposed pixel was designed to operate with dual low-voltage supplies (1.2 and 1.8 V) and utilized an advanced low-power sensor design methodology. This methodology included implementation of special low-power and area-efficient components, such as the comparator and the multiplexer, as well as various general techniques for power reduction.

The goal of this paper is to provide a proof of concept for the previously designed imager. Since the imager employs nonstandard circuits and utilizes complicated pixels, each consisting of 18 transistors, this proof-of-concept stage is essential for further snapshot WDR sensor development. This paper presents measurements from a test chip and shows that the fabricated imager is fully operational and allows extension of the dynamic range, as expected. In addition, design considerations, such as threshold voltage considerations, low-power considerations, and noise analysis, are presented.

Fig. 1. The photograph of the fabricated chip with highlighted major building blocks.

The remainder of this paper is organized as follows. Section II presents a brief description of the sensor and its operation. Design considerations are discussed in Section III. Section IV presents measurements from a fabricated prototype. Conclusions and future research are outlined in Section V.

II. SENSOR DESCRIPTION AND OPERATION

This section describes the scheme of the proposed pixel and briefly explains the general principles of its operation. A more detailed description of the imager operation can be found in our former publication [30]. A photomicrograph of the chip that highlights its major building blocks is shown in Fig. 1. The sensor consists of a pixel array, one row decoder, two column decoders, sample-and-hold (S/H) circuits, and a digital storage. The two column decoders for the pixel array work in parallel and are used to retrieve the mantissa and exponent values, respectively.

Fig. 2 shows the implementation scheme of the fabricated snapshot WDR pixel. The pixel operation is based on the multi-reset algorithm for DR expansion, recently successfully implemented by our group in a rolling shutter imager [27].

According to this algorithm, the full integration time for the frame is subdivided into intervals, preferably in a sequence of progressively shorter intervals. At the end of each interval, a nondestructive readout of the pixels is performed, and the readout level of each pixel is compared to a respective threshold. Based on the comparison result, it is determined whether the pixel will saturate before the end of the frame. If it is determined that the pixel will saturate, the pixel is reset. Resetting the pixel at an intermediate point during the integration period significantly reduces the probability that the pixel will saturate prior to the final readout. The binary information concerning whether the reset was applied is saved in a digital storage, to enable proper scaling of the value read. The incident light intensity may then be calculated at the end of the integration period by multiplying the final readout level by a scaling factor which is based on the length of time since the last reset. This length of time may be determined from knowledge of how many times the given pixel was reset over the entire integration period. Therefore, the light intensity of the pixel is calculated as

$L \propto V_{out} \cdot D^{R}$    (1)

where $L$ relates to the incident light intensity, $V_{out}$ is the analog or digitized output value that has been read out at the end of the integration period, $D$ is a chosen constant ($D > 1$), for example 2, that relates to the division of the integration time into progressively shorter intervals, and $R$ represents how many times the given pixel was reset over the entire integration period.

Fig. 2. Scheme of a single WDR snapshot pixel.
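As a quick sanity check of (1), the reconstruction step amounts to a single multiply per pixel. The symbol names below are our reconstruction, since the original notation was lost in extraction:

```python
# Recover the relative light intensity from the final readout V_out, the
# interval-division constant D, and the per-pixel reset count R stored in
# the digital memory, per (1).
def light_intensity(v_out, resets, d=2):
    return v_out * d ** resets

# Two pixels with equal final readouts but different reset counts differ
# by a factor of D^R in reconstructed intensity:
print(light_intensity(0.5, 0))  # never reset        -> 0.5
print(light_intensity(0.5, 3))  # reset three times  -> 4.0
```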

The presented circuit operates as follows: at the beginning of the frame, the pixel is reset by driving the corresponding control inputs to '0'. This forces the internal line "Reset" to '1h' (1.8 V), independently of the internal line "Comp out" value, charging the photodiode and the internal line "Comp_in" to the reset level $V_R = V'_{DD} - V_{Tn}$, where $V'_{DD}$ is the high supply ('1h') and $V_{Tn}$ is the threshold voltage of an NMOS. At the same time, the internal line "Comp out" is precharged to '1h' by a negative pulse. The reset phase is stopped by returning the control inputs to '1', and the photodiode starts discharging according to the energy of the incident light. At this stage, the total capacitance connected to the photodiode is given by $C_{PD} + C_1$, where $C_{PD}$ is the photodiode capacitance and $C_1$ is the capacitance of the "Comp_in" node. At the end of the first interval, the output of the photodiode (the voltage on $C_{PD}$) is compared with an appropriate threshold, associated with the switching threshold voltage of the comparator, implemented by a conventional inverter. This comparison is performed by enabling the inverter operation.

If the photodiode voltage has fallen below the comparator threshold, meaning that the pixel will saturate by the end of the integration time, then "Comp out" rises to '1h' (determined by the inverter). At the same time, "Cond Reset" is pulled to '0' by a short negative pulse, enabling the conditional reset path, and as a result the photodiode is reset again. The binary information concerning whether the reset was applied is saved locally on a storage capacitor and is transmitted during the next interval to the external digital storage in the upper part of the sensor array, associated with that pixel, to enable proper scaling of the value read. The readout of this digital signal is performed through the regular output chain, used for the analog signal readout.

If the photodiode voltage is still above the comparator threshold, i.e., the pixel will not saturate by the end of the integration time, then "Comp out" stays at '0' (determined by the inverter). In this case, the photodiode is not reset, and the coupling transistor is turned off, separating $C_{PD}$ and $C_1$. Once the comparison is stopped, by returning the inverter to the "sleep" mode, the photodiode continues discharging according to the energy of the incident light. This time, the total capacitance connected to the photodiode is given only by $C_{PD}$. At the end of subsequent intervals (when comparisons are performed again), $C_1$ is already disconnected from the photodiode, so it is the voltage saved on $C_1$ that is compared to the threshold voltage produced by the inverter. Thus, no further resets are applied to this pixel until the full integration time is finished.

At the end of the full integration time $T_{int}$, the capacitor $C_1$ is reconnected to the photodiode capacitance $C_{PD}$, and the final photodiode voltage is determined by the charge transfer between $C_{PD}$ and $C_1$. This way, the final voltage is independent of whether the coupling transistor was closed or open during the last interval. Note that in the case when "Comp out" was equal to '1h' (a reset was performed at the last comparison), the capacitor $C_1$ was already connected to $C_{PD}$.
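The end-of-integration charge redistribution between the two capacitances follows the usual charge-conservation rule. A minimal numeric sketch, with illustrative capacitor values that are not taken from the paper:

```python
# When C1 is reconnected to the photodiode node, the shared node settles to
# the charge-weighted average of the two pre-connection voltages.
def share(v_pd, v_c1, c_pd=10e-15, c1=5e-15):
    q_total = c_pd * v_pd + c1 * v_c1      # total charge is conserved
    return q_total / (c_pd + c1)           # common final voltage

# Photodiode kept discharging to 0.6 V while C1 held the earlier 0.9 V sample:
print(round(share(0.6, 0.9), 6))  # -> 0.7
```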

The next stage is the transfer of the charge accumulated on the photodiode capacitance to a storage capacitor $C_S$. Before this charge transfer, the storage capacitor is reset to its nominal reset level. Once this charge transfer has been completed, the photodiode is able to begin a new frame exposure, and the charge newly transferred to the in-pixel memory is held there until it is read out at its assigned time, in a row-by-row readout sequence, through the output chain.
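The per-pixel control flow of Section II can be summarized in a small behavioral model. This is our simplification in normalized units, with assumed interval and threshold values; it captures the halving intervals, the conditional reset, and the fact that a pixel which once passes a comparison is never reset again:

```python
V_RESET, V_SAT, T_INT = 1.0, 0.0, 1.0   # normalized swing and frame time
CHECKS = [0.5, 0.75, 0.875]             # comparison instants (halving intervals, D = 2)
V_TH = 0.5                              # threshold for the first check

def expose(i_photo):
    """i_photo: discharge rate in full swings per frame. Returns (v_final, resets)."""
    t_reset, resets, armed = 0.0, 0, True
    for t in CHECKS:
        if not armed:
            break
        v = max(V_SAT, V_RESET - i_photo * (t - t_reset))
        if v < V_TH:                # predicted to saturate before T_INT: reset
            t_reset, resets = t, resets + 1
        else:
            armed = False           # C1 disconnects; no further resets for this pixel
    v_final = max(V_SAT, V_RESET - i_photo * (T_INT - t_reset))
    return v_final, resets

# Reconstruction per (1): intensity is proportional to (voltage drop) * 2**resets.
for i_photo in (0.8, 3.0, 6.0):
    v, r = expose(i_photo)
    print(round((V_RESET - v) * 2 ** r, 3), r)   # recovers 0.8, 3.0, 6.0
```

A pixel discharging at 6 swings per frame would saturate after one sixth of the frame in a conventional sensor; here it is reset three times, integrates only over the last eighth of the frame, and its intensity is still recovered.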



Fig. 3. Threshold voltage derivations.

III. DESIGN CONSIDERATIONS

A. Threshold Voltage Considerations

The overall idea of the algorithm is to avoid the effect of pixel saturation. As previously mentioned, the algorithm compares the readout level of each pixel to a respective threshold voltage at the end of each interval and makes a decision based on whether the pixel is anticipated to saturate during the whole integration period. Therefore, assuming that the pixel value is first compared at time $T_1$, the intrinsic (theoretical) threshold value $V_{th}$ should be chosen in such a way that a straight line (dash-dotted line number 1 in Fig. 3), which describes the discharging of the pixel and passes through the photodiode reset voltage $V_R$ at $t = 0$ and through the threshold voltage $V_{th}$ at the end of the first subintegration period, does not cross the pixel saturation voltage $V_{sat}$ before the whole integration time $T_{int}$ is due.

The equation of this straight line is given by

$V(t) = V_R - \Delta V_{max} \cdot t / T_{int}$    (2)

where $\Delta V_{max}$ is the maximum pixel voltage swing, equal to the difference between the $V_R$ and $V_{sat}$ values. To find the value of the intrinsic threshold voltage $V_{th}$, $t = T_1$ is substituted into the line equation, resulting in the following formula:

$V_{th} = V_R - \Delta V_{max} \cdot T_1 / T_{int}$    (3)

In real designs, each comparator has its own offset voltage. Therefore, for two different comparators having two different offset voltages, the comparison will be performed at different points even if the same threshold voltage was set. Fig. 3 shows an example of two pixels discharging under the same illumination level and being processed by two comparators having different offset values. The same $V_{th}$ was applied in both cases. As can be seen, there is an immunity to changes in comparator offsets, since in both cases the final results are the same. In the first case (shown by the solid line number 2), the pixel value did not pass the threshold voltage and, therefore, the pixel was not reset at the first comparison. In the second case (shown by the dashed line number 3), the pixel value did pass the threshold voltage and, therefore, the pixel was reset at the first comparison. However, the final results [given by (1)] remain similar in both cases. Note that in the second case the SNR of the pixel is reduced, since its integration time was reduced. A more detailed description of the influence of the integration time on the sensor SNR can be found in [21]–[23].

Although the sensor provides immunity to the comparator offsets, it is very important to choose a threshold that ensures that no pixel in the array will saturate at the end of the whole integration period. This threshold voltage is given by

$V_{th} = V_R - \Delta V_{max} \cdot T_1 / T_{int} + |V_{off}^{max}|$    (4)

where $|V_{off}^{max}|$ is the absolute value of the maximum comparator offset in the array.
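A quick evaluation of (3) and (4) with assumed numbers: a 1.8 V reset level, a 1 V swing, the first comparison at half the frame, and a 30 mV worst-case offset. None of these values are from the paper; they only illustrate the arithmetic:

```python
V_R, DV_MAX = 1.8, 1.0        # assumed reset voltage and maximum swing (V)
T1_OVER_TINT = 0.5            # first comparison at T1 = T_int / 2
V_OFF_MAX = 0.030             # assumed worst-case comparator offset (V)

v_th_intrinsic = V_R - DV_MAX * T1_OVER_TINT     # eq. (3)
v_th_applied = v_th_intrinsic + V_OFF_MAX        # eq. (4): margin for offsets
print(round(v_th_intrinsic, 3), round(v_th_applied, 3))  # -> 1.3 1.33
```

Raising the threshold by the worst-case offset makes the reset decision conservative: a pixel may occasionally be reset unnecessarily (costing some SNR), but no pixel is allowed to saturate.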

B. Noise Considerations

The noise contributions in the fabricated imager can be classified according to the following noise generation sources: thermal ("reset") noise induced by the reset operation, fixed pattern noise (FPN) caused by process variations, and quantum shot noise due to fluctuations in the photo and dark currents.

The overall contribution of the "reset" noise in the imager is given by

$\overline{v_{reset}^2} = kT/(C_{PD} + C_1) + 2kT/C_S$    (5)

The noise variance resulting from the reset operation contributed by the capacitance $C_S$ is doubled, since the readout procedure does not utilize true CDS. Assuming that the sensor operates at room temperature, the calculated noise voltage deviation is 1.46 mV.
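The kT/C arithmetic behind this kind of reset-noise estimate is easy to reproduce. The capacitance values below are illustrative placeholders (the paper's actual values were not preserved in this text), so the result does not reproduce the quoted 1.46 mV:

```python
from math import sqrt

K_B, TEMP = 1.380649e-23, 300.0   # Boltzmann constant (J/K), room temperature (K)
C_PD_PLUS_C1 = 15e-15             # assumed photodiode + comparator-node capacitance (F)
C_S = 20e-15                      # assumed storage capacitance (F)

# Storage term doubled because the readout does not perform true CDS.
v_reset_rms = sqrt(K_B * TEMP / C_PD_PLUS_C1 + 2 * K_B * TEMP / C_S)
print(f"{v_reset_rms * 1e3:.2f} mV")
```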

At the end of the integration period, $T_{int}$, the final voltage on the photodiode capacitance is given by

$V_{PD}(T_{int}) = V_R - I_{ph} T_{eff} / C_{PD}$    (6)

where $I_{ph}$ is the photocurrent, $T_{eff}$ is the effective integration time, representing the period from the last reset to $T_{int}$, and $V_R$ is the nominal pixel reset voltage on the sensing node. $V_R$ is defined by the $V'_{DD}$ and "reset" values and the magnitude of each of the three aforementioned pixel capacitances:

$V_R = [(C_{PD} + C_1)(V'_{DD} - V_{Tn}) + C_S V_{S0}] / (C_{PD} + C_1 + C_S)$    (7)

where $V_{S0}$ is the reset level of the storage capacitor.

In order to calculate the final output voltage [$V_{out}$ from (1)] that represents the incident light intensity falling on the pixel, two samples are taken at the end of the integration period. During the first sample, the signal level $V_{sig}$ is read out, while during the second one the reset value $V_R$ is sampled. The readout value of the first sample is given by

$V_{out1} = V_{sig} - V_{T,SF} - \sqrt{2 I_b / \beta}$    (8)

where $V_{T,SF}$ is the SF transistor threshold voltage, $I_b$ is the bias current in the SF, and $\beta$ is the transconductance parameter of the same transistor. The value of the second sample is given by

$V_{out2} = V_R - V_{T,SF} - \sqrt{2 I_b / \beta}$    (9)

The FPN can be divided into two general components: gain and offset. The gain component is caused by process fluctuations in each parameter of the pixel transfer function (active area and the capacitance magnitudes $C_{PD}$, $C_1$, $C_S$). The offset FPN component arises from variations in the pixel dark current, the SF input transistor threshold ($V_{T,SF}$) and strength ($\beta$), and differences in the column current source. Most of the offset FPN components can be eliminated by subtraction of the two aforementioned samples; however, the variations in capacitor and dark current values between different pixels are not compensated. Using (7)–(9), the deviations in $V_{out}$ as a function of the mismatch in each pixel parameter are derived. For example, assuming a 1% mismatch in the pixel capacitances, the maximal output voltage variation will be around 941 µV.
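A small numeric check of why the two-sample readout removes the source-follower terms appearing in (8) and (9): both samples pass through the same source follower, so its level shift cancels in the subtraction. The threshold, bias current, and transconductance values below are assumed:

```python
from math import sqrt

def source_follower(v_in, v_t=0.55, i_b=1e-6, beta=2e-4):
    # Level shift of the source follower per (8)/(9):
    # V_out = V_in - V_T - sqrt(2 * I_b / beta)
    return v_in - v_t - sqrt(2 * i_b / beta)

v_sig, v_r = 0.9, 1.4
delta = source_follower(v_r) - source_follower(v_sig)
print(round(delta, 9))  # equals v_r - v_sig: the SF offset terms cancel
```

Only pixel-to-pixel variations in $V_{T,SF}$ or $\beta$ between the two samples would survive; since the same transistor produces both samples, the offset FPN largely cancels, as stated above.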

Future FPN reduction methods include digital double sampling (DDS), which will reduce the noise generated in the analog subtraction. Moreover, the pixel design and implementation will be adapted to the increasing demands of future chip performance in terms of operation frequency, spatial resolution, and power dissipation. Therefore, we will minimize the gradients that can occur when biasing large pixel arrays operating at high switching frequencies, while keeping the switching power minimized.

Another noise mechanism that should be addressed is the quantum ("shot") noise. This process also causes fluctuations in the readout value. The maximal variance in the readout signal can be described as

$\sigma_{shot}^2 = q I_{dis} T_{int} / C_{PD}^2$    (10)

where $q$ is the electron charge and $I_{dis}$ is the current that causes the photodiode to discharge completely during the integration time. Using this relation, we can calculate that the deviation caused by this noise process is bounded by 1.15 mV, which is very close to the "reset" noise component value. Therefore, the "reset" noise is the dominant component at all illumination levels.
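For a full discharge, $I_{dis} T_{int} = C_{PD}\,\Delta V_{max}$, so the bound in (10) reduces to $\sigma^2 = q\,\Delta V_{max}/C_{PD}$. A quick evaluation with illustrative values (not the paper's, so the number differs from the quoted 1.15 mV):

```python
from math import sqrt

Q_E = 1.602176634e-19   # electron charge (C)
C_PD = 15e-15           # assumed photodiode capacitance (F)
DV_MAX = 1.0            # assumed full voltage swing (V)

# Worst case of (10): I_dis * T_int = C_PD * DV_MAX, so variance = q * DV_MAX / C_PD.
v_shot_rms = sqrt(Q_E * DV_MAX / C_PD)
print(f"{v_shot_rms * 1e3:.2f} mV")
```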

C. Low-Power Design Considerations

A number of approaches for power reduction in CMOS image sensors have been presented in the literature [1], [31]. According to these approaches, power can be reduced at different design levels – technology, device, circuit, logic, architecture, algorithm, and system integration. In the presented WDR imager design, much effort has been invested in reducing the power dissipation by utilizing various design techniques at the circuit and logic levels. These techniques have allowed reduction of both the peak power dissipation (during the snapshot operation) and the average power dissipated by the pixel. The utilized methods can be divided into four main parts.

(a) Power reduction by leakage current control in analog and digital circuits – differentiation between the "active" and "sleep" modes of a given circuit by insertion of a "sleep" transistor was applied to the in-pixel circuits, such as the Mux and the comparator. This technique relies on the reduction of the leakage current using the "stacked scheme" – by stacking two "off" transistors, the subthreshold leakage current is reduced significantly compared to a single off device, due to simultaneous reductions in the gate-source, body-bias, and drain-source voltages [32]. However, the insertion of sleep transistors slightly reduces the fill factor and increases the capacitance of the "Sleep" wire, which is shared by all pixels in the array and is driven by an external digital driver. Since the operation of the pixel is fully parallel, as described in Section II, the influence of the increased delay due to the insertion of the sleep transistors is negligible compared to the whole integration time (a few nanoseconds compared to 30 milliseconds of integration time).

(b) Low-voltage operation – reduction of the power supply voltage is a key element in low-power CMOS imagers. However, the design of a low-voltage CMOS sensor involves several well-known challenges. Employing multiple voltage supplies can relax the problem. The idea of multiple supply voltages grows out of the dual-$V_{DD}$ approach in digital circuit design, where the gates on the noncritical paths receive a reduced supply voltage $V_{DDL}$, while those on the critical paths receive $V_{DDH}$. This results in reduced power without degrading the entire circuit performance. Similarly to digital circuits, in the designed sensor the high supply (1.8 V) was used in the critical places, significantly influenced by supply reduction, while the low-voltage supply (1.2 V) was applied elsewhere.

(c) Applying various low-power digital design styles forpower reduction in the digital circuitry – in general, thecustom design of the particular blocks in digital circuitryusing nonstandard design techniques, like pass-tran-sistor-logic (PTL) may significantly improve powerdissipation. Custom digital design can be very usefulwhen the in-pixel digital processing approach is used.In the presented WDR imager, a nonstandard low-powerMux, implemented using gate diffusion input (GDI)design technique was used. The GDI method is basedon a simple cell that contains four terminals, allowingimplementation of various complex functions using only

Authorized licensed use limited to: BEN GURION UNIVERSITY. Downloaded on June 2, 2009 at 10:26 from IEEE Xplore. Restrictions apply.


Fig. 4. Setup for picture capture.

two transistors [33]. It has been proven that this Mux implementation leads to a significant reduction in power dissipation compared to a standard CMOS implementation. In addition, it employs only two transistors (one nMOS and one pMOS) to implement the Mux functionality. In the case of the discussed snapshot pixel, the quiescent state of the Mux circuit holds both inputs at ‘1’; therefore, leakage current does not flow most of the time.
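The GDI cell of [33] has four terminals: a common gate G, the nMOS source input N, the pMOS source input P, and the shared output D. A minimal behavioral sketch (ignoring the threshold-voltage drops of real pass transistors) shows how a single cell already acts as a 2:1 Mux:

```python
def gdi_cell(g: int, n: int, p: int) -> int:
    """Behavioral GDI cell: when G=1 the nMOS conducts and D follows N;
    when G=0 the pMOS conducts and D follows P. Threshold drops are ignored."""
    return n if g else p

# Truth table: the cell implements a 2:1 Mux, D = N if G else P.
for g in (0, 1):
    for n in (0, 1):
        for p in (0, 1):
            print(f"G={g} N={n} P={p} -> D={gdi_cell(g, n, p)}")
```

This is why a two-transistor GDI Mux suffices where a static CMOS transmission-gate Mux with inverters would need several times more devices.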

(d) Optimization of digital and analog sensor output chains – a standard three-transistor APS pixel includes only one analog output and two digital control inputs. An advanced sensor usually has a more complicated structure. In addition to an increased number of analog outputs and digital control inputs, it can include digital outputs generated by the pixel. The demand for this increased number of pixel inputs and outputs, the desire for an acceptable fill factor, and the requirement for low-voltage operation make it necessary to optimize the analog and digital sensor output chains for area-efficient, low-power, low-voltage operation. In this design, the same output chain was employed for both analog and digital signals to reduce the area and power dissipation of the imager.

IV. EXPERIMENTAL RESULTS

A 32 × 32 imager was successfully fabricated using the standard mixed-signal TSMC 0.35 μm process available through MOSIS. Each pixel has a size of 18 μm × 18 μm and a fill factor of 15%.

To verify the proposed concept, the fabricated test chip was tested using dual power supply voltages (1.2 and 1.8 V), as described in Section II. All control signals were generated externally using a field-programmable gate array (FPGA) based test board. The output signals were downloaded to a PC, allowing image capture, processing, and presentation. The test setup is shown in Fig. 4. The setup allowed imager operation at up to 1000 frames/s and DR extension of up to 8 bits.

Fig. 5 depicts the optical setup used for the test chip characterization. This setup is common for most optical tests and consists of: a) a halogen lamp; b) a motorized filter wheel with long-pass filters whose cutoff wavelengths lie within the visible spectrum; c) a monochromator; d) an integrating sphere; and e) a reference diode. The integrating sphere is used to obtain

Fig. 5. Optical test setup.

Fig. 6. Measured quantum efficiency.

Fig. 7. Imager normalized output voltage as a function of incident illuminance without applying the WDR algorithm.

a uniform illumination on the chip interface. Light intensity is controlled by the monochromator input slit and is measured by the reference photodiode.

The spectral response of the fabricated sensor is depicted in Fig. 6.

The maximum quantum efficiency (QE) of 32% is achieved at a wavelength of 600 nm.

Fig. 7 shows measurements of the image sensor's normalized output voltage as a function of incident illumination without applying the WDR algorithm. The measurements were performed by acquiring numerous frames at varying light intensities.



Fig. 8. Imager normalized output voltage as a function of incident illuminance: (a) without applying the WDR algorithm and (b) with 3 bits DR extension.

Each point represents an average of 50 frames for a given intensity. The sensor linearity and output swing are defined by the in-pixel source-follower amplifier. Under these conditions, the measured DR is 49 dB. According to the simulation results presented in [30], the maximum DR without expansion was 57 dB. We assume that the main reasons for this discrepancy are switched-capacitor noise due to charge sharing between the comparator and the photodiode capacitance, comparator kickback noise, and reset-path jitter, which were not taken into account in the analysis presented in Section III.
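The 49 dB figure can be related to the voltage ratio between the usable swing and the noise floor. A quick check, with the noise floor back-calculated from the measured DR and the 630 mV swing quoted below, rather than measured directly:

```python
import math

def dr_db(v_max, v_min):
    """Dynamic range in dB from the ratio of maximum to minimum detectable signal."""
    return 20.0 * math.log10(v_max / v_min)

# A measured 49 dB DR corresponds to a max/min signal ratio of about 282.
ratio = 10 ** (49 / 20)
print(f"check: a ratio of 282 gives {dr_db(282, 1.0):.1f} dB")
print(f"implied noise floor for a 630 mV swing: {630 / ratio:.2f} mV")
print(f"simulated minus measured DR: {57 - 49} dB discrepancy")
```

The back-calculated noise floor of roughly 2 mV is consistent with the extra noise sources listed above that the Section III analysis omitted.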

Fig. 8 presents a comparison of the measured output voltage as a function of incident illumination without applying the algorithm (similar to Fig. 7) and with 3 bits of DR expansion (equivalent to an 18 dB extension). It can be seen that the pixel performs a reset operation each time the photodiode voltage reaches the comparator threshold, according to the incident light intensity.
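Each bit of DR expansion doubles the maximum representable signal and therefore adds 20·log10(2) ≈ 6.02 dB. A quick check of the figures quoted in the paper:

```python
import math

def extension_db(bits):
    """DR extension in dB for an n-bit expansion: each bit contributes ~6.02 dB."""
    return 20.0 * math.log10(2 ** bits)

print(f"3 bits -> {extension_db(3):.1f} dB extension")   # the 18 dB case shown in Fig. 8
print(f"8 bits -> {extension_db(8):.1f} dB extension")   # added to the 49 dB base: ~97 dB
```

This also ties together the numbers reported elsewhere in the paper: the 49 dB base DR plus the full 8-bit extension gives the ~97 dB figure quoted in the conclusion.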

In this design, the comparator threshold ensures that the pixel will not saturate at the end of the integration time (see the threshold voltage considerations in Section III). To this end, the output voltage in Fig. 8 does not reach its maximum possible value (630 mV) but is reset at 480 mV, reducing the pixel SNR [22], [23].

Fig. 9 shows the scene observed by the proposed imager at different illumination conditions. The scene observed at room light illumination conditions is presented in (a). The same scene observed with a laser light on the object is shown in (b). In this test, the imager operated without WDR expansion. As can easily be seen, some of the pixels saturated because of the strong light. The scene observed under the same illumination conditions (with a strong light on the object) is shown in (c); however, in this case, 1-bit WDR expansion was applied. It can be clearly seen that, as expected, the pixels that saturated in (b) did not saturate in this case. Fig. 9(d) demonstrates the imager performance under the same illumination with 2 bits of extension.

Fig. 9. (a) Scene observed at room light illumination conditions. (b) Scene observed at room light conditions with a laser light on the object without WDR expansion. (c) Scene observed at room light conditions with a laser light on the object with 1-bit WDR expansion. (d) Scene observed at room light conditions with a laser light on the object with 2-bit WDR expansion.

TABLE I
POWER DISSIPATION OF A SINGLE PIXEL FOR DIFFERENT ILLUMINATION LEVELS

Note that both Fig. 9(c) and (d) aim to prove the functionality of the imager and therefore show only the direct output of the imager; as a result, the images give an impression of noise. In order to present the actual pixel values, the value of each pixel has to be calculated according to (1) and should then be compressed to suit an 8-bit monitor resolution. There are many ways to present WDR images on 8-bit monitors [34], [35]; however, this issue is beyond the scope of this paper.
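Equation (1) is not reproduced in this excerpt; for a conditional-reset WDR pixel, a typical reconstruction scales the analog residue by 2^k, where k is the per-pixel digital exposure code. The sketch below is a hypothetical illustration of such a reconstruction followed by a simple logarithmic compression to 8 bits; the paper's actual formula and tone mapping (per [34], [35]) may differ.

```python
import math

def reconstruct(residue, k):
    """Hypothetical WDR reconstruction: scale the normalized analog residue (0..1)
    by 2**k, where k is the per-pixel digital exposure code (0..8 here)."""
    return residue * (2 ** k)

def to_8bit(value, max_code=8):
    """Simple logarithmic compression of the WDR value onto an 8-bit display range."""
    full_scale = 2 ** max_code          # brightest representable WDR value
    return round(255 * math.log1p(value) / math.log1p(full_scale))

# A dim pixel (code 0) and a bright pixel (code 6) with the same analog residue:
for k in (0, 6):
    v = reconstruct(0.5, k)
    print(f"code {k}: WDR value {v:g} -> 8-bit display value {to_8bit(v)}")
```

Without this step the raw per-frame output mixes residues taken at very different effective exposures, which is exactly why the uncorrected images in Fig. 9(c) and (d) look noisy.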

Table I presents the power dissipation of a single pixel for different illumination levels at video rate.

To perform these measurements, the power supply of the pixel array was separated from that of all support circuitry; the power of the



TABLE II
IMAGE SENSOR PERFORMANCE FIGURES OF MERIT

whole array was measured and divided by 1024. The power was measured in three cases: (a) no DR expansion algorithm applied; (b) an expansion of 4 bits (equivalent to a 24 dB extension); and (c) an expansion of 8 bits (equivalent to a 48 dB extension). As can be seen, the pixel has a very low power dissipation of 23 nW for 8-bit expansion at room light, and 29 nW at a high illumination level equivalent to clear sky. The power dissipation of the whole sensor (including the supporting circuitry) was 450 μW.
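The per-pixel figures follow from dividing the measured array power by the pixel count. A quick consistency sketch using the numbers reported above:

```python
# Per-pixel power = measured array power / number of pixels (32 x 32 = 1024).
pixels = 32 * 32

def per_pixel_nw(array_power_uw):
    """Convert a measured array power in microwatts to nanowatts per pixel."""
    return array_power_uw * 1000.0 / pixels

# Working backwards from the reported per-pixel numbers gives the array totals:
array_room = 23e-9 * pixels   # watts, at room light with 8-bit expansion
array_sky = 29e-9 * pixels    # watts, at clear-sky illumination
print(f"array power at room light: {array_room * 1e6:.1f} uW")
print(f"array power at clear sky:  {array_sky * 1e6:.1f} uW")
```

The pixel array thus accounts for only a few tens of microwatts; the remainder of the whole-sensor budget is consumed by the supporting circuitry.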

Since the proposed imager was implemented in a 0.35 μm CMOS technology, a relatively large pixel size and a low fill factor were obtained. Implementing the imager in more advanced technologies can decrease the pixel size and/or increase the fill factor. Scaling to 0.18 μm and larger technologies is straightforward and can be easily calculated using a scaling factor. However, scaling to more advanced processes (below 0.18 μm) is not straightforward, since it may require more aggressive power-reduction techniques due to increased leakage. In addition, the reduced supply voltage in these technologies may require changes to the in-pixel circuitry, while retaining the same algorithm.

It is also very important to mention that, although a proof of concept for a small 32 × 32 pixel array was presented, the array can easily be enlarged to any size, since the pixels' operation is fully parallel and the requirements for the readout operation are relaxed. The only restriction on scaling the array to higher pixel counts is the peak power dissipation during the parallel operation of all pixels in the array. However, this issue can be resolved at the layout level by designing the critical wires wide enough. Obviously, the size of the memory, decoders, and S/H circuits will scale accordingly with the pixel array.

Table II presents the chip attributes.

V. CONCLUSION

We have presented a low-power global shutter CMOS image sensor with ultra-wide dynamic range. A prototype of 32 × 32 pixels has been fabricated using the standard 1-poly 4-metal 0.35 μm CMOS process available through MOSIS and was successfully tested. The proposed imager performs snapshot image acquisition and offers a customized, linear, large increase in the dynamic range by implementing smart, low-power circuits for in-pixel autoexposure. Driven by low-power dissipation requirements, the

proposed pixel is operated by dual low-voltage supplies (1.2 and 1.8 V) and utilizes an advanced low-power sensor design methodology. The measurements from the test chip have shown that the fabricated imager achieves a DR of up to 97 dB.

REFERENCES

[1] K. Cho, A. I. Krymski, and E. R. Fossum, “A 1.5-V 550-μW 176 × 144 autonomous CMOS active pixel image sensor,” IEEE Trans. Electron Devices, vol. 50, no. 1, pp. 96–105, Jan. 2003.

[2] E. Fossum, “CMOS image sensors: Electronic camera-on-a-chip,” IEEE Trans. Electron Devices, vol. 44, no. 10, pp. 1689–1698, Oct. 1997.

[3] O. Yadid-Pecht and R. Etienne-Cummings, CMOS Imagers: From Phototransduction to Image Processing. Norwell, MA: Kluwer, 2004.

[4] O. Yadid-Pecht, “Wide dynamic range sensors,” Opt. Eng., vol. 38, no. 10, pp. 1650–1660, Oct. 1999.

[5] K. A. Boahen and A. G. Andreou, “A contrast sensitive retina with reciprocal synapses,” Advances in Neural Information Processing Systems, vol. 4, pp. 762–772, 1992.

[6] E. Labonne, G. Sicard, M. Renaudin, and P. Berger, “A 100 dB dynamic range CMOS image sensor with global shutter,” in Proc. 13th IEEE Int. Conf. Electron., Circuits, Syst., Dec. 2006, pp. 1133–1136.

[7] S. Kavadias, B. Dierickx, D. Scheffer, A. Alaerts, D. Uwaerts, and J. Bogaerts, “A logarithmic response CMOS image sensor with on-chip calibration,” IEEE J. Solid-State Circuits, vol. 35, no. 8, pp. 1146–1152, Aug. 2000.

[8] G. Storm, R. Henderson, J. E. D. Hurwitz, D. Renshaw, K. Findlater, and M. Purcell, “Extended dynamic range from a combined linear-logarithmic CMOS image sensor,” IEEE J. Solid-State Circuits, vol. 41, no. 9, pp. 2095–2106, Sep. 2006.

[9] N. Akahane, R. Ryuzaki, S. Adachi, K. Mizobuchi, and S. Sugawa, “A 200 dB dynamic range iris-less CMOS image sensor with lateral overflow integration capacitor using hybrid voltage and current readout operation,” in Proc. IEEE Int. Solid-State Circuits Conf., Feb. 2006, pp. 1161–1170.

[10] X. Wang, W. Wong, and R. Hornsey, “A high dynamic range CMOS image sensor with in-pixel light-to-frequency conversion,” IEEE Trans. Electron Devices, vol. 53, p. 2988, 2006.

[11] L. G. McIlrath, “A low-power low-noise ultrawide-dynamic-range CMOS imager with pixel-parallel A/D conversion,” IEEE J. Solid-State Circuits, vol. 36, no. 5, pp. 846–853, May 2001.

[12] E. Culurciello, R. Etienne-Cummings, and K. Boahen, “A biomorphic digital image sensor,” IEEE J. Solid-State Circuits, vol. 38, no. 2, pp. 281–294, Feb. 2003.

[13] S. Decker, D. McGrath, K. Brehmer, and C. G. Sodini, “A 256 × 256 CMOS imaging array with wide dynamic range pixels and column-parallel digital output,” IEEE J. Solid-State Circuits, vol. 33, no. 12, pp. 2081–2091, Dec. 1998.

[14] N. Akahane, S. Sugawa, S. Adachi, K. Mori, T. Ishiuchi, and K. Mizobuchi, “A sensitivity and linearity improvement of a 100-dB dynamic range CMOS image sensor using a lateral overflow integration capacitor,” IEEE J. Solid-State Circuits, vol. 41, no. 4, pp. 851–858, Apr. 2006.

[15] W. Lee, N. Akahane, S. Adachi, K. Mizobuchi, and S. Sugawa, “A high S/N ratio and high full well capacity CMOS image sensor with active pixel readout feedback operation,” in Proc. IEEE Asian Solid-State Circuits Conf., Nov. 2007, pp. 260–263.

[16] C. Shoushun and A. Bermak, “Arbitrated time-to-first-spike CMOS image sensor with on-chip histogram equalization,” IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 15, no. 3, pp. 346–357, Mar. 2007.

[17] D. Stoppa, M. Vatteroni, D. Covi, A. Baschirotto, A. Sartori, and A. Simoni, “A 120-dB dynamic range CMOS image sensor with programmable power responsivity,” IEEE J. Solid-State Circuits, vol. 42, no. 7, Jul. 2007.

[18] T. Lulé, M. Wagner, M. Verhoeven, H. Keller, and M. Böhm, “100 000-pixel, 120-dB imager in TFA technology,” IEEE J. Solid-State Circuits, vol. 35, no. 5, May 2000.

[19] O. Yadid-Pecht and E. Fossum, “Image sensor with ultra-high-linear-dynamic range utilizing dual output CMOS active pixel sensors,” IEEE Trans. Electron Devices, vol. 44, no. 10, pp. 1721–1724, Oct. 1997.



[20] M. Mase, S. Kawahito, M. Sasaki, Y. Wakamori, and M. Furuta, “A wide dynamic range CMOS image sensor with multiple exposure-time signal outputs and 12-bit column-parallel cyclic A/D converters,” IEEE J. Solid-State Circuits, vol. 40, no. 12, pp. 2787–2795, Dec. 2005.

[21] M. Sasaki, M. Mase, S. Kawahito, and Y. Tadokoro, “A wide-dynamic-range CMOS image sensor based on multiple short exposure-time readout with multiple-resolution column-parallel ADC,” IEEE Sensors J., vol. 7, no. 1, Jan. 2007.

[22] D. X. D. Yang, A. El Gamal, B. Fowler, and H. Tian, “A 640 × 512 CMOS image sensor with ultra wide dynamic range floating-point pixel-level ADC,” in Proc. IEEE ISSCC, 1999, pp. 1821–1834, WA 17.5.

[23] S. Kavusi and A. El Gamal, “Folded multiple-capture: An architecture for high dynamic range disturbance-tolerant focal plane array,” in Proc. SPIE Infrared Technol. Appl., Orlando, FL, Apr. 2004, vol. 5406, pp. 351–360.

[24] Y. Egawa, H. Koike, R. Okamoto, H. Yamashita, N. Tanaka, J. Hosokawa, K. Arakawa, H. Ishida, H. Harakawa, T. Sakai, and H. Goto, “A 1/2.5 inch 5.2 Mpixel, 96 dB dynamic range CMOS image sensor with fixed pattern noise free, double exposure time read-out operation,” in Proc. IEEE Asian Solid-State Circuits Conf., Nov. 2006, pp. 135–138.

[25] O. Yusuke, T. Atushi, T. Tadayuki, K. Akihiko, S. Hiroki, K. Masanori, and N. Tadakuni, “A 121.8 dB dynamic range CMOS image sensor using pixel-variation-free midpoint potential drive and overlapping multiple exposures,” in Proc. Int. Image Sensors Workshop, Jun. 2007, pp. 30–33.

[26] O. Yadid-Pecht, B. Pain, C. Staller, C. Clark, and E. Fossum, “CMOS active pixel sensor star tracker with regional electronic shutter,” IEEE J. Solid-State Circuits, vol. 32, no. 2, pp. 285–288, Feb. 1997.

[27] O. Yadid-Pecht and A. Belenky, “In-pixel autoexposure CMOS APS,” IEEE J. Solid-State Circuits, vol. 38, pp. 1425–1428, Aug. 2003.

[28] T. Hamamoto and K. Aizawa, “A computational image sensor with adaptive pixel-based integration time,” IEEE J. Solid-State Circuits, vol. 36, no. 4, pp. 580–585, Apr. 2001.

[29] P. M. Acosta-Serafini, I. Masaki, and C. G. Sodini, “A 1/3 VGA linear wide dynamic range CMOS image sensor implementing a predictive multiple sampling algorithm with overlapping integration intervals,” IEEE J. Solid-State Circuits, vol. 39, no. 9, pp. 1487–1496, Sep. 2004.

[30] A. Fish, A. Belenky, and O. Yadid-Pecht, “Wide dynamic range snapshot APS for ultra low-power applications,” IEEE Trans. Circuits Syst. II, vol. 52, no. 11, pp. 729–733, Nov. 2005.

[31] A. Fish, “Smart active pixel sensors for ultra low-power applications,” Ph.D. dissertation, Ben-Gurion Univ., Beer-Sheva, Israel, 2006.

[32] S. Narendra, “Scaling of stack effect and its application for leakage reduction,” in Proc. ISLPED, 2001, pp. 195–200.

[33] A. Morgenshtein, A. Fish, and I. A. Wagner, “Gate-diffusion input (GDI) – A power-efficient method for digital combinatorial circuits,” IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 10, no. 5, pp. 566–581, Oct. 2002.

[34] L. Meylan and S. Süsstrunk, “High dynamic range image rendering with a retinex-based adaptive filter,” IEEE Trans. Image Process., vol. 15, no. 9, pp. 2820–2830, Sep. 2006.

[35] M. D. Fairchild and G. M. Johnson, “iCAM framework for image appearance, image differences, and quality,” J. Electron. Imaging, vol. 13, no. 1, pp. 126–138, Jan. 2004.

Alexander Belenky received the B.Sc. degree in physics and the M.Sc. degree in electro-optics engineering from Ben-Gurion University, Beer-Sheva, Israel, in 1995 and 2003, respectively. He is now working towards the Ph.D. degree at Ben-Gurion University.

Since 1998, he has been working at the VLSI Systems Center, where he is responsible for the VLSI Laboratory. His current interests are smart CMOS image sensors, image processing, and imaging systems.

Alexander Fish (S’04–M’06) received the B.Sc. degree in electrical engineering from the Technion – Israel Institute of Technology, Haifa, Israel, in 1999, and the M.Sc. degree in 2002 and the Ph.D. degree (summa cum laude) in 2006 from Ben-Gurion University, Israel.

He was a Postdoctoral Fellow in the ATIPS Laboratory at the University of Calgary, Canada, from 2006 to 2008. Currently, he is a faculty member in the Electrical and Computer Engineering Department at Ben-Gurion University. He has published two book chapters and has authored over 50 scientific papers and patent applications. His research interests include low-power CMOS image sensors, analog and digital on-chip image processing, algorithms for dynamic range expansion, and low-power design techniques for digital and analog circuits.

Dr. Fish was honored with the Electrical Engineering Dean's Award at the Technion in 1997 and with the Technion President's Award for Excellence in Study in 1998. He was a coauthor of two papers that won Best Paper Finalist awards at the ICECS 2004 and ISCAS 2005 conferences. He was also awarded the Young Innovator Award for Outstanding Achievements in the Field of Information Theories and Applications by ITHEA in 2005. In 2006, he was honored with the Engineering Faculty Dean's “Teaching Excellence” recognition at Ben-Gurion University. He has served as a referee for the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I, the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II, Sensors and Actuators, the IEEE SENSORS JOURNAL, and SPIE Optical Engineering, as well as for the ISCAS, ICECS, and IEEE Sensors conferences. He was also a co-organizer of special sessions on “smart” CMOS image sensors at the IEEE Sensors Conference 2007 and on low-power “smart” image sensors and beyond at the IEEE ISCAS 2008.

Arthur Spivak received the B.Sc. degree in electrical engineering from the Technion – Israel Institute of Technology, Haifa, Israel, in 2005. He is currently working towards the M.Sc. degree in electronics.

In 2006, he joined the VLSI Systems Center, where he has been engaged in the research, development, and design of the analog and digital circuits integrated in CMOS imagers.

Orly Yadid-Pecht (S’90–M’95–SM’01–F’07) received the B.Sc. degree in electrical engineering and the M.Sc. and D.Sc. degrees from the Technion – Israel Institute of Technology, Haifa, Israel, in 1984, 1990, and 1995, respectively.

She was a National Research Council (USA) Research Fellow from 1995 to 1997, working on advanced image sensors at the Jet Propulsion Laboratory (JPL), California Institute of Technology (Caltech). In 1997, she joined Ben-Gurion University, Israel, as a member of the Electrical and Electro-Optical Engineering Departments, where she founded the VLSI Systems Center, specializing in CMOS image sensors. Since 2003, she has also been with the ATIPS Laboratory at the University of Calgary, Canada, promoting the area of integrated sensors. Her main subjects of interest are integrated CMOS sensors, smart sensors, image processing, neural networks, and microsystem implementations. She has published over 100 papers and patents and has led over a dozen research projects supported by government and industry. Her work has over 200 external citations. In addition, she has coauthored and coedited the first book on CMOS image sensors, CMOS Imaging: From Photo-Transduction to Image Processing (2004).

Dr. Yadid-Pecht was an Associate Editor for the IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS and the Deputy Editor-in-Chief of the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS–PART I. She is currently an Associate Editor for the IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS, a member of the IEEE Sensors Council, and a member of several other technical and steering committees. She was an IEEE Distinguished Lecturer of the Circuits and Systems Society in 2005. She was also the General Chair of the IEEE International Conference on Electronics, Circuits, and Systems (ICECS).
