
TechNote THUNDER IMAGERS: HOW DO THEY REALLY WORK?


Decode 3D biology in real time*

Authors
Jan Schumacher, Advanced Workflow Specialist, Leica Microsystems, Wetzlar, Germany
Louise Bertrand, Product Performance Manager, Leica Microsystems, Exton, PA, USA

*in accordance with ISO/IEC 2382:2015

Cover image: HeLa cell spheroid stained with Alexa Fluor 568 Phalloidin (actin) and YOYO-1 iodide (nucleus).


Introduction

Historically, widefield microscopy has not been well suited to imaging large specimen volumes. The image background (BG), originating mainly from out-of-focus regions of the observed sample, significantly reduces the contrast, the effective dynamic range, and the maximal achievable signal-to-noise ratio (SNR) of the imaging system. The recorded images show a typical haze and, in many cases, do not provide the level of detail required for further analysis. Those working with thick 3D samples either use alternative microscopy methods or try to reduce the haze by post-processing a series of images.

Methods to reduce or remove background (BG) signal

Depending on how the BG caused by out-of-focus signal is handled, we distinguish between exclusive and inclusive methods.

Inclusive methods, such as widefield (WF) deconvolution microscopy, take the distribution of light in the whole volume into account and reassign recorded photons from the BG to their origins, thereby increasing the SNR of the recorded volumes. This reassignment is possible because the distribution of light originating from a single point is described by the point spread function (PSF).

Inclusive methods reach their limits as more and more light from out-of-focus layers combines with the light from the in-focus region. Effects that distort the PSF, such as light scattering, increase the BG, making restoration with inclusive methods more difficult. Unfortunately, scattering is unavoidable in biological specimens. Because inclusive methods, by definition, use all signals detected in the image, they also process signal components from out-of-focus layers that should not contribute to the final result.

Exclusive methods are based on the principle of separating out the unwanted BG and subtracting it from the image, so that only the signal from the in-focus layer remains. Camera-based systems either utilize hardware to prevent the acquisition of out-of-focus light (e.g., spinning disk systems or selective plane illumination) or a combination of software and hardware to remove BG components (grid-projecting systems). Grid-projecting systems need multiple images to be acquired, which can lead to motion artifacts when recording fast-moving samples. In addition, they work only up to a limited depth, as a sharp image of the grid needs to be detected by the camera.

The gold standard for removing out-of-focus BG is the pinhole-based scanning system. The pinhole of a confocal system excludes light from out-of-focus layers, so only light from the in-focus layer reaches the detector.

THUNDER Imagers use Computational Clearing as an exclusive method to remove the BG from a single recorded image in real time. They thereby overcome the disadvantages of the methods mentioned above when imaging live samples.

Computational Clearing (CC)

Computational Clearing is the core technology of THUNDER Imagers. It detects and removes the out-of-focus BG in each image, making the signal of interest directly accessible. At the same time, the edges and intensities of specimen features in the in-focus area are preserved.

When recording an image with a camera-based fluorescence microscope, the "unwanted" BG adds to the "wanted" signal of the in-focus structures, and both are always recorded together. For best results, the aim is to reduce the BG as much as possible. To exclude unwanted BG from an image, it is critical to find key criteria that accurately separate the BG from the wanted signal. Generally, BG shows a characteristic behavior in recorded images that is independent of its origin; hence, from its appearance in an image alone, it is not discernible where the BG comes from.

In biological samples specifically, the BG is usually not constant; it varies considerably over the field of view (FOV). Computational Clearing takes this into account automatically to make the in-focus signal immediately accessible.


How to separate out-of-focus from in-focus signal?

Images acquired with a widefield microscope can be decomposed into two components: the in-focus signal and the BG, which arises mainly from out-of-focus signal. Thus, a widefield image I(r) can approximately be given by:

I(r) ≈ PSF_if(r) * f(r) + PSF_of(r) * f(r)   (1)

where PSF_if(r) and PSF_of(r) are the effective point spread functions of the in-focus and out-of-focus contributions, respectively, and f(r) is the fluorophore distribution. Because the out-of-focus PSF is much wider than the in-focus one, these two contributions in Eq. (1) can be well separated by length-scale-discriminating algorithms, such as a wavelet transform. We developed an iterative algorithm to separate these two contributions. It calculates the following minimization for each iteration:

Î_of = argmin_Î ‖I − Î‖²   subject to   L[Î] > L₀   (2)

Here L[Î_of] represents the structural length scale of the estimated out-of-focus contribution Î_of. The structural length scale L₀ in Eq. (2) is calculated based on the optical parameters of the system and can be adapted. In the LAS X software, it is called the "feature scale".

Using this approach, only the BG is removed. Both the signal and the noise from the in-focus area of interest are kept. Because the noise from the in-focus area remains, the edges of in-focus features stay visible in the images, maintaining the spatial relations between sample features with respect to their feature scale. The relative intensities of the features are also conserved, despite the variable nature of the BG typical of life-science samples.

Unlike with traditional inclusive methods, the image revealed by Computational Clearing is not generated, but simply "unmasked" from the background signals in the sample.
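For intuition, the separation principle of Eqs. (1) and (2) can be mimicked in a few lines. The sketch below is not Leica's proprietary Computational Clearing implementation; it only illustrates the idea of treating everything smoother than the feature scale L₀ as out-of-focus background (function names and the smoothing strategy are our own assumptions):

    # Minimal sketch of length-scale-based background estimation, for intuition only.
    # NOT Leica's Computational Clearing: it simply treats everything smoother than
    # the feature scale L0 as out-of-focus background.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def estimate_background(image, feature_scale_px, n_iter=10):
        """Iteratively estimate a background that is smooth on scales above feature_scale_px."""
        bg = image.astype(float)
        for _ in range(n_iter):
            # Smooth with a kernel wider than the feature scale, so in-focus
            # structures (length scale < L0) cannot survive in the estimate.
            bg = gaussian_filter(bg, sigma=2.0 * feature_scale_px)
            # The background may never exceed the measured signal; otherwise
            # subtraction would clip in-focus features.
            bg = np.minimum(bg, image)
        return bg

    def clear_sketch(image, feature_scale_px):
        bg = estimate_background(image, feature_scale_px)
        return np.clip(image - bg, 0, None)  # in-focus signal plus its noise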

Figure 1: Illustration of the in-focus and out-of-focus PSF: The PSF of widefield images (center) can effectively be described by the two PSF components in focus (left) and out of focus (right). The background estimation takes advantage of the fact that the structural length scale L[Î_of] of the out-of-focus signal is larger than the corresponding structural length scale L₀ as given by the width of the in-focus signal.

Figure 2: Beta III tubulin rat neuronal cells labeled with Cy5, showing the edges of structures, which are preserved after Computational Clearing, and the resulting background (panels from left to right: raw data, Computational Clearing, background). Images were acquired with a THUNDER Imager 3D Cell Culture and an HC PL APO 63x/1.40 OIL objective.


Information extraction: Adding Adaptive Deconvolution

Computational Clearing removes the BG, clearly revealing focal planes deep in the sample. As an exclusive method, Computational Clearing becomes even more powerful when used in combination with an inclusive method.

THUNDER Imagers offer three modes to choose from:

> Instant Computational Clearing (ICC),

> Small Volume Computational Clearing (SVCC), and

> Large Volume Computational Clearing (LVCC).

Instant Computational Clearing (ICC) is synonymous with the exclusive Computational Clearing method introduced at the beginning of this technical note. SVCC and LVCC are combinations of exclusive Computational Clearing with an inclusive, decision-mask-based 3D deconvolution dedicated to either thin samples (SVCC) or thick samples (LVCC). The adaptive image information extraction of the inclusive methods follows a concept that evolved from LIGHTNING, Leica Microsystems' adaptive deconvolution method originally developed for confocal microscopy.

LIGHTNING uses a decision mask as a base reference to calculate an appropriate set of parameters for each voxel of an image. In combination with a widefield PSF, LIGHTNING's fully automated adaptive deconvolution process can be transferred to widefield detection.

More detailed information about adaptive image information extraction and deconvolution can be found in J. Reymann’s White Paper: LIGHTNING – Image Information Extraction by Adaptive Deconvolution.

Experimental evidence

In this section, experimental data is shown to demonstrate:

> How the data generated with THUNDER Imagers is quantifiable;

> How Computational Clearing allows imaging deeper within a sample;

> The improvement in image resolution attained with THUNDER Imagers.

Figure 3: InSpeck beads seen in a single field of view. The phase contrast image was used to find beads by thresholding. Scale bar: 20 µm.


Quantifying Widefield Data with Computational Clearing I

InSpeck beads are microsphere standards that generate a series of well-defined fluorescence intensity levels for constructing calibration curves and evaluating sample brightness. In this short experiment, equal volumes of same-size fluorescent and non-fluorescent beads were mixed together. The fluorescent beads had different relative intensities, i.e., 100%, 35%, 14%, 3.7%, 1%, and 0.3%.

InSpeck beads were deposited onto a cover slip and 156 positions were imaged using a 20x low-NA objective (Figure 3, single z-position). Three channels were recorded (Figure 3, from left to right): brightfield (BF), phase contrast (PH), and fluorescence (FLUO). The FLUO intensity was adjusted to avoid saturation of the camera sensor by bright objects. To correct for potentially inhomogeneous illumination, only the central area of the FOV was used; no further flat-field correction was performed. The FLUO images were post-processed with Instant Computational Clearing (ICC) using a feature scale of 2,500 nm, which corresponds to the bead size.

Is computationally cleared data quantifiable?


Beads were found by simple thresholding of the PH image. To correct for falsely detected beads, only round objects (roundness ≥ 0.99) of a certain size (68 to 76 pixels) were accepted. This mask was used to obtain the mean intensities of the raw fluorescence and the ICC-processed channels. No intensity outliers were excluded. To get relative values, the raw and processed intensities of all accepted beads were divided by the median intensity of the largest intensity population (usually the 100% relative-intensity fluorescent beads).
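These selection and normalization steps could be scripted roughly as follows (a hedged sketch using scikit-image; the Otsu threshold and the circularity proxy for "roundness" are illustrative assumptions, not the authors' actual analysis code):

    # Hedged sketch of the bead selection and normalization described above.
    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.measure import label, regionprops

    def bead_relative_intensities(phase_img, fluo_img, min_area=68, max_area=76):
        mask = phase_img > threshold_otsu(phase_img)  # simple thresholding of the PH image
        means = []
        for region in regionprops(label(mask), intensity_image=fluo_img):
            # Circularity 4*pi*A/P^2 as a stand-in for the roundness criterion.
            circularity = 4 * np.pi * region.area / max(region.perimeter, 1) ** 2
            if min_area <= region.area <= max_area and circularity >= 0.99:
                means.append(region.mean_intensity)
        means = np.asarray(means)
        # Normalize to the median of the brightest population (top decile used
        # here as a crude proxy for the 100% relative-intensity beads).
        ref = np.median(means[means >= np.quantile(means, 0.9)])
        return means / ref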

In Figure 4 (right), the black lines show that, following Computational Clearing, the intensities still appear around the expected values.

Conclusion: Computational Clearing allows the true fluorescence dynamics of the beads to be distinguished, even for the weakest-signal population, which is not observable in the raw data. Quantification of emission intensities is easily done when using Computational Clearing. However, for these kinds of experiments, good practices for quantitative fluorescence microscopy need to be followed very closely.

Quantifying Widefield Data with Computational Clearing II

The following experiment shows how ICC deals with massive differences and heterogeneity in the BG. A green-fluorescent-bead population of varying intensities was prepared and dispersed onto a cover slip. The beads appeared with mixed intensities, but in clusters (Figure 6, left). A general BG was provided by removing the excitation filter from the filter cube, and a fluorescein BG was added to one half of the cover slip by marking it with a marker pen. Two equally sized regions of non-overlapping FOVs were defined: one in the area with fluorescein, the high-BG tile scan (Figure 5: Region A, left), and the other in the area without it, the low-BG tile scan (Figure 5: Region B, right).

Figure 4: Histograms showing the relative fluorescence intensity distribution for the same features seen in both the raw (left) and ICC-processed (right) image data. The black lines indicate the relative intensities of the underlying bead populations. The computationally cleared data scale is set to a maximum of 1,000 counts; 3,620 counts are in the first bin (0 to 0.1%), representing the non-fluorescent beads.

Figure 5: Merged image of two non-overlapping tile scans (each 187 FOVs of 250 x 250 µm). Left: a tile scan in a high and inhomogeneous BG region (Region A). Right: a tile scan in a low-BG region (Region B).



For each FOV, the beads were identified by simple thresholding of the BF image (Figure 6, left). From this mask, the mean fluorescence intensities of the raw and ICC-processed images were obtained.

Figure 6: Single FOVs of the BF channel (left), raw fluorescence image (center), and ICC-processed image (right). The BF channel was used to segment the central area of the beads. The segmented areas were used for analysis in the fluorescence channels. Scale bar: 20 µm. Raw image: scaling from 25,000 to 60,000 gray values. ICC image: scaling from 0 to 26,000 gray values.

Objects which did not show a certain roundness and size were discarded and not used for further analysis. No other outlier corrections were applied. In total, 39,337 objects were identified in region A (high and inhomogeneous background) and 43,031 objects in region B (low background). For subsequent comparisons of the intensities, 39,337 objects were selected randomly from region B so that the sample sizes of both regions matched.

Figure 7: Intensity distributions of objects seen in regions A (high BG, blue) and B (low BG, red). The left histogram shows raw data and the right histogram ICC-processed data.

The intensity distributions of the objects in regions A (high BG) and B (low BG) are very distinct (Kolmogorov-Smirnov distance: 0.79 ± 0.2, permutation resampling). The general offset and the added BG can be seen (Figure 7, left, blue). The same analysis of the data after Computational Clearing shows a very similar distribution (KS: 0.05 ± 0.02) for both regions.
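A comparable KS analysis can be sketched with SciPy, assuming two 1-D arrays of per-object mean intensities; the subsample size and number of resamples below are illustrative assumptions:

    # Sketch of the KS-distance comparison with permutation resampling.
    import numpy as np
    from scipy.stats import ks_2samp

    def ks_distance_resampled(a, b, n_perm=100, subsample=5000, seed=None):
        """Mean +/- std of the KS distance over random subsamples of both regions."""
        rng = np.random.default_rng(seed)
        stats = []
        for _ in range(n_perm):
            sa = rng.choice(a, size=min(subsample, a.size), replace=False)
            sb = rng.choice(b, size=min(subsample, b.size), replace=False)
            stats.append(ks_2samp(sa, sb).statistic)  # the KS distance
        return np.mean(stats), np.std(stats)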

Conclusion: Computational Clearing can deal with the heterogeneous BG signals inherent in the image data of real biological specimens. In addition, it allows quantification of fluorescence signals without the need for tedious local-BG-removal algorithms, which usually have to be adjusted for each imaging session (even for the same object).

Quantifying Widefield Data with Computational Clearing III

To further show the linear behavior of ICC, images of stably fluorescing objects (15 µm beads) within a fixed FOV were recorded with increasing exposure times. To exclude illumination-onset effects, the objects were illuminated continuously with the excitation light. Due to the low density of beads and the flatness of the sample, the background in the raw images originated mostly from the camera offset. The ICC parameters were set according to the object size (15 µm) with the highest strength (100%).


Objects (n = 107) were identified in the processed image with the longest exposure time (160 ms) (Figure 8, green dots). Objects consist of all pixels within a 4-pixel distance of a local maximum with an intensity greater than 20% of that maximum. The data is highly linear (Figure 9, left; r > 0.999 for all single-object measurements). To visualize the respective mean values, the intensity was divided by the exposure time and by the intensity corresponding to the longest exposure time. The raw data shows that the relative amount of BG decreases with increasing signal, which is correct, as the BG source is mainly the constant camera offset (Figure 9, center, blue). The processed data, however, displays linear behavior (Figure 9, center, red).
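A rough sketch of this object detection and linearity check, under the stated criteria (4-pixel radius, 20% relative threshold), might look as follows; function and parameter names are ours, not the authors':

    # Illustrative sketch of the object detection and linearity analysis.
    import numpy as np
    from scipy.stats import linregress
    from skimage.feature import peak_local_max

    def object_masks(image, radius=4, rel_threshold=0.2):
        """All pixels within `radius` of a local maximum and above 20% of that maximum."""
        masks = []
        for y, x in peak_local_max(image, min_distance=2 * radius):
            yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
            disk = (yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2
            masks.append(disk & (image > rel_threshold * image[y, x]))
        return masks

    def linearity(stack, exposures_ms, masks):
        """Per-object correlation coefficient of mean intensity vs. exposure time."""
        return np.array([linregress(exposures_ms,
                                    [img[m].mean() for img in stack]).rvalue
                         for m in masks])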

Figure 8: Raw images (top row) and images processed with Computational Clearing (bottom row) acquired with different exposure times (columns), each shown divided by the respective exposure time. Green dots: objects used for further analysis. Red square: region used for traditional background subtraction. Scale bar: 100 µm.

Finally, ICC was compared to traditionally BG-subtracted data; this step is generally mandatory for the quantification of intensities. The mean intensity of an object-free area (100 x 100 pixels, as shown in Figure 8, red square) was calculated for each image and subtracted from the intensity data of the same image. Plotting the mean intensities of the previously found objects versus the traditionally BG-subtracted raw data shows that ICC gives the same result (Figure 9, right).

Conclusion: ICC shows linear behavior. It enables data quantification without the need for further image processing, which can be tedious, especially with heterogeneous backgrounds.

Figure 9: Intensities of identified objects (Figure 8, green dots). Left: raw ICC data, single measurements (gray) and average (red). Center: the normalized relative mean value (divided by exposure time and by the value at 160 ms exposure) for intensities of raw images (blue) and images processed with Computational Clearing (red); the shadow represents the distribution of single-object values. Right: computationally cleared data plotted against traditionally background-subtracted data, with a line of perfect correlation added (red line).


How deep can THUNDER image within a sample?

The maximal depth that can be imaged is highly sample dependent. Factors such as the density of fluorophores, absorption, or the homogeneity of local refractive indices within the sample directly influence the SNR and the amount of scattered light per voxel. These factors usually fluctuate, even within the same field of view.

The classical way to achieve optical sectioning of 3D samples on camera-based systems is to use multiple-point illumination, such as with a Nipkow disk or grid-projecting devices. The latter introduce artifacts whenever the grid cannot be projected sharply into the focal plane. Disk-based systems, on the other hand, have to deal with the finite distance between pinholes, which introduces light contamination from out-of-focus planes at certain imaging depths.

With Computational Clearing, the maximal depth in a sufficiently transparent sample depends mostly on the scattering of the emitted light. Computational Clearing enables deep imaging by removing the scattered-light component. If at least some contrast can be achieved locally in the image, THUNDER Imagers make it accessible. The big advantage of Computational Clearing is that it works with live specimens, so imaging can be done under physiological conditions.


Figure 10: Volume rendering of a computationally cleared 150 µm brain section.

The better the contrast-to-noise ratio, the better the result of the reconstruction will be. For the example shown in Figure 10, Large Volume Computational Clearing (LVCC), a combination of Computational Clearing and adaptive deconvolution, was used to image a thick sample volume. In the upper layers of the sample, even the finest details are resolved and can be segmented. Although the resolution and segmentation quality might be reduced for deeper layers, imaging at a depth of 140 to 150 µm in the sample (Figure 11) reveals a significant amount of valuable detail which is not accessible in the raw data. Without THUNDER, most widefield imaging experiments stop at a depth of 50 µm, as it is believed that no more information can be retrieved.


Resolution improvement with THUNDER

Applying Small Volume Computational Clearing (SVCC) to single, non-overlapping, diffraction-limited objects results in a resolution enhancement, as shown below in Figure 12. In the given example, a single bead of 40 nm diameter was imaged (100x/1.45 NA objective) and SVCC was applied with default settings. The result is a resolution enhancement* of 2 times laterally (ratio FWHM_x SVCC/raw = 0.51) and more than 2.5 times axially (ratio FWHM_z SVCC/raw = 0.39).
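FWHM values behind such ratios are typically obtained by fitting a Gaussian to a line profile through the bead and converting sigma to FWHM. A minimal sketch (profile extraction and initial guesses are illustrative assumptions):

    # Sketch of a FWHM measurement via Gaussian fit to a 1-D intensity profile.
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, amp, mu, sigma, offset):
        return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) + offset

    def fwhm(profile, px_size_nm):
        """FWHM (in nm) of a 1-D intensity profile via a Gaussian fit."""
        x = np.arange(profile.size, dtype=float)
        p0 = [profile.max() - profile.min(), float(np.argmax(profile)), 2.0, profile.min()]
        popt, _ = curve_fit(gaussian, x, profile, p0=p0)
        return 2 * np.sqrt(2 * np.log(2)) * abs(popt[2]) * px_size_nm

    # Enhancement ratio as reported above: fwhm(svcc_profile, px) / fwhm(raw_profile, px)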

Figure 11: Maximum intensity projections for depths of 140 to 150 µm.


Figure 12: X-axis (left) and Z-axis (right) intensity measurements of a single bead with a size below the optical resolution limit: before (blue dots) and after SVCC (red dots), with fitted Gaussians (shadows). The insets show the respective XY and XZ planes.

*Resolution enhancement is defined here via the apparent size of a point source emitting light. Separating two structures closer to each other than the diffraction limit is still not possible.


Summary

Computational Clearing, an exclusive method from Leica Microsystems, efficiently differentiates and eliminates background from wanted signal. It is the core technology of the THUNDER Imager family.

Different experiments with appropriate samples gave evidence that Computational Clearing allows quantitative analysis of widefield images. In combination with adaptive deconvolution, it allows the resolution to be enhanced. THUNDER Imagers allow deeper imaging in large-volume samples, such as tissue, model organisms, or 3D cell cultures. THUNDER Imagers are powerful imaging solutions that maximize the information extracted from 3D samples.


Copyright © 02/2019 Leica Microsystems CMS GmbH, Wetzlar, Germany. All rights reserved. Subject to modifications. LEICA and the Leica Logo are registered trademarks of Leica Microsystems IR GmbH.

CONNECT WITH US!

Leica Microsystems CMS GmbH | Ernst-Leitz-Strasse 17–37 | D-35578 Wetzlar (Germany) | Tel. +49 (0) 6441 29-0 | Fax +49 (0) 6441 29-2599

www.leica-microsystems.com/thunder


From Eye to Insight

Technical Brief

AN INTRODUCTION TO COMPUTATIONAL CLEARING:

A New Method to Remove Out-of-Focus Blur

Authors

Levi Felts, PhD, Marketing Manager
Vikram Kohli, PhD, Advanced Workflow Specialist
James M. Marr, PhD, Advanced Workflow Specialist
Jan Schumacher, PhD, Advanced Workflow Specialist
Oliver Schlicker, PhD, Product Manager

WITHOUT Computational Clearing

WITH Computational Clearing


Introduction

Where does background originate from?

Commonly in widefield (WF) microscopy, imaging objects in the object plane results in captured images that are compounded by image haze (referred to as background noise and shown in Figure 1). The presence of background noise in an image obscures structural features, preventing them from being resolved. In WF microscopy, background noise arises from the convolution of multiple sources, including autofluorescence, dark current, sensor noise, and light collected from out-of-focus planes (Figure 1). The amount of light collected and imaged on the detector plane depends on the thickness of the sample, the number of scatterers and the wavelength of light, as well as the numerical aperture (NA) of the objective lens. As the NA of the objective lens increases, so does the half collection angle of captured light.

Many software packages include background subtraction algorithms that enhance the contrast of features in the image by reducing background noise. An enhancement in image contrast does not improve image resolution. Contrast depends on the difference between the object and background intensity and scales inversely with the background intensity. Methods that suppress the background through smoothing or subtraction can therefore yield images with improved contrast.

Many methods exist to remove background noise, the most commonly used being rolling ball and sliding paraboloid [1,2]. Recently, Leica Microsystems introduced its own background subtraction method called Instant Computational Clearing (ICC) [3], which is present in all Leica THUNDER WF imaging platforms. Independent of the chosen background subtraction method, each algorithm strives to minimize noise and feature erosion while retaining the underlying structural details of the image. Most importantly, these background subtraction methods aim to improve the analysis and quantification of captured data.

Figure 1: In widefield microscopy, many sources of background noise, including but not limited to dark current (detector readout), autofluorescence (from the sample), and light collected from out-of-focus planes, convolve with the signal of interest (in-focus image; objective focal plane), resulting in the collected image shown in (A). It is the addition of these background noise sources (yellow dashed line) that contaminates the signal of interest (orange line). Background subtraction methods attempt to reduce these sources of noise to improve image contrast (B).

[Figure 1 plot, panels A and B: intensity (arb. unit) vs. distance; curves: Signal of Interest, Summation of all Background Signals, Out-of-Focus Signal, Autofluorescence, Detector Dark Current]


Common Background Subtraction Methods

Rolling ball (Figure 2C) is a common background subtraction algorithm that uses a structural element placed over the image, with the radius of curvature of the ball set by the user in pixels [1,2,4]. To efficiently remove the background, the pixel radius is set to a value as large as or slightly greater than the largest feature in the image. Relative to the intensity peaks of the features in the image, the background is considered smooth, and with the structural element set to a value larger than the width of the features, the structural element operates on the background by changing the local background value of each pixel. Often, a Gaussian filter is applied to the image prior to the rolling ball, with the Gaussian filter acting as a low-pass frequency filter that smooths the image. Applying a Gaussian filter prior to the rolling ball can help to reduce image noise, yielding an improved background subtraction.

Sliding paraboloid (Figure 2D) is another popular background subtraction method, similar to rolling ball. Sliding paraboloid replaces the ball with the apex of a paraboloid, with a radius value defining the curvature of the paraboloid [4]. By sliding the paraboloid across the image, the local background is subtracted by estimating the intensity variation across the apex. This method can treat the data better when the features do not correspond well to pixel values. Functionally, the background subtraction is performed in a similar way to rolling ball, and pre-filters such as Gaussian filtering can be applied to suppress noise in the resultant image.
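For reference, a rolling-ball subtraction along these lines can be reproduced with scikit-image (the radius below is an arbitrary example value; FIJI's "Subtract Background" offers the same operations):

    # Rolling-ball background subtraction with scikit-image.
    from skimage import restoration

    def subtract_rolling_ball(image, radius=50):
        """Estimate and subtract a smooth background using a rolling ball of the given pixel radius."""
        background = restoration.rolling_ball(image, radius=radius)
        return image - background, background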

Rolling ball, sliding paraboloid, and other background subtraction methods require the use of structural elements, wavelets, or point-spread-function modelling to estimate and subtract the background [1,2,4,5]. Identifying the parameters for efficient background removal with minimal feature erosion can be time consuming, is typically done manually, and is implemented as a post-acquisition workflow. Leica Microsystems has developed a new background subtraction algorithm which addresses many of these challenges.

Figure 2: Mouse kidney stained for membrane glycoproteins with wheat germ agglutinin-Alexa488 (commercially available from Thermo Fisher Scientific; FluoCells prepared slide #3), imaged with a 63x/1.4 NA oil objective and a GFP filter set (ex 488/em 524). (A) Widefield fluorescence image prior to background subtraction; the red box represents a cropped region of the entire field of view and is the same region shown in (B-D). (B-D) Background subtraction performed on the cropped region: (B) ICC, (C) rolling ball, and (D) sliding paraboloid. The feature scale value for ICC was 619 nm. For a consistent comparison, the feature scale value of 619 nm was converted to a pixel value for rolling ball (pixel value = 6) and sliding paraboloid (pixel value = 0.2) based on the size of the imaging sensor. Both the rolling ball and sliding paraboloid background subtractions were performed in FIJI [4]. Scale bar = 50 µm.



Leica’s Solution: Instant Computational Clearing


The ICC algorithm [3] offered by Leica uses a different approach to increase image contrast. ICC does not implement structural elements, wavelet transforms, or point spread function (PSF) modeling to estimate the background noise. ICC improves image contrast by considering the entire image and applying a model to distinguish signal (structures within the image) from background noise. Similar to minimizing a cost function [6,7,8,9] for linear regression, ICC minimizes a non-quadratic cost function to estimate the background noise, Eq 1:

‖y – x‖_non-quadratic + γ‖∇x‖²   Eq 1.

where y is the input image, x is the background noise to be estimated, γ‖∇x‖² is the L2 regularization term, and γ is the regularization parameter. The regularization parameter is set to a default value proportional to the full width at half maximum of the PSF of the optical system, or it is a user-selectable value based on the size of specific features in the image. Collectively, γ‖∇x‖² acts to penalize the cost function to prevent overfitting of the background or erosion of targeted features. Once the background noise has been estimated, it is subtracted from the image to reveal the true signal (Figure 2B).
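As a rough illustration of Eq 1, and not Leica's patented implementation [3], a smooth background can be estimated by minimizing a robust (non-quadratic) data term plus an L2 gradient penalty; the cost terms and optimizer below are our own assumptions, shown in 1-D for brevity:

    # Illustrative baseline estimation via a regularized, non-quadratic cost.
    import numpy as np
    from scipy.optimize import minimize

    def estimate_background_1d(y, gamma=10.0):
        y = np.asarray(y, dtype=float)
        def cost(x):
            r = y - x
            data = np.sum(np.sqrt(r ** 2 + 1.0))       # robust, non-quadratic data term
            smooth = gamma * np.sum(np.diff(x) ** 2)   # L2 penalty on the gradient of x
            return data + smooth
        res = minimize(cost, x0=np.full(y.shape, y.min()), method="L-BFGS-B")
        return res.x

    # cleared = y - estimate_background_1d(y)  # background-subtracted signal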

Conclusion

Both rolling ball and sliding paraboloid can be found in 3rd-party image analysis tools, including paid microscopy software packages as well as open-source software such as FIJI [4]. Variations exist in how these subtraction methods are applied; however, they are commonly implemented on the raw data as a post-processing step outside of the imaging workflow, post-acquisition. The application of these two background subtraction methods to the raw data is an iterative process, requiring the user to identify the best pixel value to use to enhance the features in the image while preserving structural details. In contrast, ICC is fully integrated into the imaging workflow for background noise removal and image contrast enhancement. ICC can be applied both post-acquisition and during acquisition, the latter providing an instantaneous real-time preview of the contrast-improved image as the data is being acquired. The unaltered raw data is preserved following ICC, which allows users to perform ground-truth analysis. Following ICC, both the raw and the processed datasets can be further compared and analyzed using the 3D visualization and 2D and 3D analysis packages in Leica's LAS X software.

Image background subtraction using rolling ball, sliding paraboloid, or ICC produces images with improved clarity. However, the improvement is limited to an enhancement in contrast without any effect on image resolution. To improve image resolution, Leica offers an adaptive deconvolution algorithm that can be paired with ICC to produce an image that has both improved contrast and resolution. The adaptive nature of the algorithm arises from the creation of adaptive signal-to-noise-ratio coefficients on a voxel-by-voxel basis, which constrain the deconvolution through regularization. An in-depth discussion of Leica's adaptive deconvolution will be covered in our next technology brief.



References

[1] Sternberg, "Biomedical Image Processing," in Computer, vol. 16, no. 1, pp. 22-34, Jan. 1983, doi: 10.1109/MC.1983.1654163.

[2] Rodrigues, Mariana C.M., and Matthias Militzer. "Application of the Rolling Ball Algorithm to Measure Phase Volume Fraction from Backscattered Electron Images." Materials Characterization, Elsevier, 12 Mar. 2020, www.sciencedirect.com/science/article/abs/pii/S1044580319328104.

[3] Walter, Kai, and Ziesche, Florian. (2019) Apparatus and Method, Particularly for Microscopes and Endoscopes, Using Baseline Estimation and Half-Quadratic Minimization for the Deblurring of Images. WO 2019/185174 A1. European Patent Office, 3 October 2019.

[4] Schindelin, Johannes, et al. "Fiji: an Open-Source Platform for Biological-Image Analysis." Nature Methods, Nature Publishing Group, 28 June 2012, www.nature.com/articles/nmeth.2019.

[5] https://imagej.nih.gov/ij/developer/api/ij/plugin/filter/BackgroundSubtracter.html

[6] Mazet, V., et al. "Background Removal from Spectra by Designing and Minimizing a Non-Quadratic Cost Function." Chemometrics and Intelligent Laboratory Systems 76 (2005): 121-133.

[7] Peng, Jiangtao, et al. "Asymmetric Least Squares for Multiple Spectra Baseline Correction." Analytica Chimica Acta, Elsevier, 15 Oct. 2010, www.sciencedirect.com/science/article/abs/pii/S0003267010010627?via=ihub.

[8] Peng, Jiangtao, et al. "Spike Removal and Denoising of Raman Spectra by Wavelet Transform Methods." Analytical Chemistry, 21 July 2001, pubs.acs.org/doi/10.1021/ac0013756.

[9] Ramos, Pablo Manuel, and Itziar Ruisánchez. "Noise and Background Removal in Raman Spectra of Ancient Pigments Using Wavelet Transform." Wiley Online Library, John Wiley & Sons, Ltd, 16 June 2005, onlinelibrary.wiley.com/doi/abs/10.1002/jrs.1370.


Leica Microsystems CMS GmbH | Ernst-Leitz-Strasse 17–37 | D-35578 Wetzlar (Germany) | Tel. +49 (0) 6441 29-0 | Fax +49 (0) 6441 29-2599

www.leica-microsystems.com/thunder

MC-0001347–15.09.2020. Copyright © 2020 Leica Microsystems CMS GmbH, Wetzlar, Germany. All rights reserved. Subject to modifications. LEICA and the Leica Logo are registered trademarks of Leica Microsystems IR GmbH.

www.leica-microsystems.com

CONNECT WITH US!

WITHOUT Computational Clearing

WITH Computational Clearing

Cover image: Mouse retina was fixed and stained with the following reagents: anti-CD31 antibody (green): endothelial cells; IsoB4 (red): blood vessels and microglia; anti-GFAP antibody (blue): astrocytes. Sample courtesy of Jeremy Burton, PhD, and Jiyeon Lee, PhD, Genentech Inc., South San Francisco, USA. Imaged by Olga Davydenko, PhD (Leica).

Back cover image: Mouse kidney section with Alexa Fluor™ 488 WGA, Alexa Fluor™ 568 Phalloidin, and DAPI. Sample is a FluoCells™ prepared slide #3 from Thermo Fisher Scientific, Waltham, MA, USA.


From Eye to Insight

Authors

Vikram Kohli, PhD, Advanced Workflow Specialist
James M. Marr, PhD, Advanced Workflow Specialist
Oliver Schlicker, PhD, Senior Application Manager
Levi Felts, PhD, Marketing Manager

Technical Brief

THE POWER OF PAIRING ADAPTIVE DECONVOLUTION WITH COMPUTATIONAL CLEARING


Introduction

Since the invention of the microscope in 1595 [1], the discovery of fluorescence, and the first description of light microscopy for visualizing stained samples in the 1850s [1], widefield (WF) fluorescence microscopy has become a widely used imaging technique that has benefited the disciplines of engineering and science.

Our scientific curiosity continues to drive the development of microscopy forward, with the goal of seeing and resolving more structural detail. However, due to the wave nature of light and the diffraction of light by optical elements, image resolution is limited by the size of the diffraction spot, known as the diffraction limit.

In WF fluorescence microscopy, both the contrast and resolution of a captured image are reduced by multiple sources, including light collected from adjacent planes, scattered light, and camera sensor noise, all of which increase image haze/blur (background noise). Additional losses in image resolution and contrast can be tied to the system's optical response function, commonly known as the point spread function (PSF). The PSF describes what an idealized point source would look like when imaged at the detector plane and operates as a low-pass frequency filter that removes the high spatial frequency content of the image (Figure 1).

Mathematically, image formation (i) can be represented as a convolution (*) between the observed object (o) and the PSF (h) with added noise (Poisson and/or Gaussian, ε), as described by Eq 1:

i(x, y, z) = o(x, y, z) * h(x, y, z) + ε   Eq 1.

To minimize the effects of decreased image contrast and resolution caused by the PSF, image restoration techniques such as deconvolution are often used to restore and enhance the detail that is lost in the image.
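The forward model of Eq 1 is easy to reproduce; the sketch below builds a synthetic Gaussian PSF, convolves it with a test object, and adds noise, mirroring the construction described for Figure 1 (sigma and noise levels are arbitrary choices):

    # Forward model of Eq 1: i = o * h + noise.
    import numpy as np
    from scipy.signal import fftconvolve

    def synthetic_psf(size=25, sigma=3.0):
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
        return psf / psf.sum()  # normalized 2-D Gaussian kernel

    def form_image(obj, psf, gauss_std=0.01, seed=None):
        rng = np.random.default_rng(seed)
        img = fftconvolve(obj, psf, mode="same")            # o * h
        return img + rng.normal(0.0, gauss_std, img.shape)  # + Gaussian noise (epsilon)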

Figure 1. The convolution of an object with the system's optical response (PSF) results in a blurred image. The original image (a) was corrupted with Gaussian noise (b) and convolved with a synthetic PSF (2D Gaussian function) (c), resulting in the blurred image shown in (d). (e) and (f) depict the image in the frequency domain pre- and post-convolution. The convolution of the image with the PSF operates as a low-pass frequency filter and removes some of the high spatial frequency content of the image (f, red arrows; compare with e, red arrows). The images were produced in Matlab 2020a (MathWorks, Natick MA), and the test target was taken from https://en.wikipedia.org/wiki/1951_USAF_resolution_test_chart.

[Figure 1 panels: (A) Original Image, (B) With Gaussian Noise, (C) Synthetic PSF, (D) Blurred Image, (E) Original Image Fourier Transformed, (F) Convolved Image Fourier Transformed]


What is Deconvolution?

Deconvolution is a computational method used to restore the image of the object that is corrupted by the PSF along with sources of noise. Given the acquired image i, the observed object o, and knowledge of the PSF (theoretically or experimentally determined), the observed object in Eq 1. can be restored through the process of deconvolution. To perform deconvolution, the object and the PSF are transformed into the frequency domain by the Fourier transform, as presented in Eq 2:

o(x, y, z) * h(x, y, z) = F⁻¹[F{o(x, y, z)}F{h(x, y, z)}]   Eq 2.

For computational reasons, deconvolution is performed in the frequency domain, and Eq 1. can be re-written as shown in Eq 3. and solved for O:

I/H = OH/H + £/H   Eq 3.

where F, F⁻¹, O, H, and £ are the Fourier transform, the inverse Fourier transform, the Fourier-transformed observed object, the Fourier-transformed PSF (known as the optical transfer function, OTF), and the Fourier-transformed noise, respectively. However, as H approaches zero (at the edges of the PSF), both the left-most term and the noise term in Eq 3. become increasingly large, amplifying noise and creating artifacts. To limit the amount of amplified noise, the PSF can be truncated, but this results in the loss of image detail.
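A naive inverse filter makes this failure mode of Eq 3 visible: dividing by the OTF amplifies noise wherever H approaches zero. A minimal sketch (the eps floor is an illustrative stabilization, not part of Eq 3):

    # Naive frequency-domain (inverse-filter) deconvolution, O = I/H.
    import numpy as np

    def inverse_filter(image, psf, eps=1e-3):
        """psf must be the same shape as image and centered."""
        H = np.fft.fft2(np.fft.ifftshift(psf))      # OTF of the PSF
        I = np.fft.fft2(image)
        H_safe = np.where(np.abs(H) < eps, eps, H)  # crude guard against H -> 0
        return np.real(np.fft.ifft2(I / H_safe))    # noise blows up as H -> 0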

Several deconvolution algorithms have been proposed [2-8] to address the above image restoration issues. For instance, the Richardson-Lucy (RL) algorithm, based on a Bayesian iterative approach, is formulated using image statistics described by a Poisson or Gaussian noise process. By minimizing the negative log of the probability [2,3], the RL equations for deconvolution under Poisson and Gaussian noise processes are, Eq 4:

o_k+1 = o_k [h⁺ * (i / (h * o_k))]  for Poisson [2,3], and

o_k+1 = o_k + [(h⁺ * i) – (h⁺ * h) * o_k]  for Gaussian [10]   Eq 4.

where i and o have been previously defined, and h⁺ is the flipped PSF. However, similar to Eq 3., the RL deconvolution method is susceptible to amplified noise, resulting in a deconvolution that is dominated by noise [2,3] (Figure 2). A partial treatment to limit amplified image noise is either to terminate the convergence early or to carefully select a value for o_k that is pre-blurred with a low-pass frequency filter (e.g., a Gaussian filter) [2,3].
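The Poisson form of the RL update in Eq 4 can be sketched directly (scikit-image also ships a reference implementation, skimage.restoration.richardson_lucy); the initialization and eps guard below are illustrative choices:

    # Sketch of the Poisson-form RL update from Eq 4.
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
        o = np.full_like(image, image.mean(), dtype=float)   # initial estimate o_0
        h_flipped = psf[::-1, ::-1]                          # h+ (the flipped PSF)
        for _ in range(n_iter):
            blurred = fftconvolve(o, psf, mode="same")       # h * o_k
            ratio = image / np.maximum(blurred, eps)         # i / (h * o_k)
            o *= fftconvolve(ratio, h_flipped, mode="same")  # o_k [h+ * ratio]
        return o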

More recently, new deconvolution methods have been proposed that add a regularization term to the algorithm to constrain the deconvolution. Such regularization techniques include Tikhonov-Miller, Total Variation, and Good's roughness [2,4-9]. The purpose of these regularization terms is to penalize the deconvolution so that image noise and artifact generation are limited while image detail is preserved; a sketch of these penalties is given below.
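As a rough sketch of what these penalty terms compute (the formulas follow the cited literature; the function names and the eps guard are our own), the three regularizers can be evaluated on an image estimate o as follows:

import numpy as np

def tikhonov_miller(o):
    gy, gx = np.gradient(o)
    return np.sum(gx**2 + gy**2)                   # squared gradient norm

def total_variation(o):
    gy, gx = np.gradient(o)
    return np.sum(np.sqrt(gx**2 + gy**2))          # L1 norm of the gradient magnitude

def goods_roughness(o, eps=1e-12):
    gy, gx = np.gradient(o)
    return np.sum((gx**2 + gy**2) / (o + eps))     # |grad o|^2 weighted by 1/o

Each function returns a scalar penalty that a regularized deconvolution adds, scaled by a weight, to its data-fidelity term; smoother estimates score lower, so minimizing the sum discourages noisy, oscillating solutions.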


Figure 2. The RL deconvolution algorithm was used to restore the image of the observed object (resolution target pattern). The observed object to restore is the blurred image shown in Figure 1(d), and the PSF used in the deconvolution was the synthetic PSF in Figure 1(c). As the number of iterations increases, (a-c), the image detail improves. However, this is at the expense of amplified noise and image artifacts. The images were produced in Matlab 2020a (MathWorks, Natick, MA).

[Figure 2 panels: (a) RL deconvolution, 10 iterations; (b) 50 iterations; (c) 500 iterations]


Leica Microsystems Approach

The method used by Leica Microsystems is an accelerated, adaptive RL approach that is regularized and constrained using Good's roughness [11]. Using a Bayesian statistical approach with a Gaussian noise process, the function to minimize for deconvolution is:

min ‖i − h * o‖² + γ ∫ (1 / o) [∇o]²   Eq 5.

where γ is the regularization term, ∇ is the differentiation operator, and h is the Gibson-Lanni PSF. In Eq 5., the regularization term γ depends on the local signal-to-noise ratio (SNR) and is a function of adaptive SNR(x, y) coefficients created over the entire image, Figure 3. Together, γ ∫ (1 / o)[∇o]² acts to penalize the deconvolution, and the regularization term scales non-linearly with SNR(x, y) as:

γ(x, y) = max{ 0, γmax [ 1 − (2 / π) arctan( SNR(x, y) / SNRmax ) ] }   Eq 6.

with SNRmax defined as a predetermined maximum SNR value and γmax as the maximum predefined regularization. The result is the adaptive regularization of Eq 5., yielding greater penalization of the deconvolution in regions with lower SNR than in regions with higher SNR. This yields a deconvolution process that is properly regularized over the entire image, avoiding image artifacts and the amplification of noise.
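A possible reading of Eq 6. in code form is sketched below. The sliding-window SNR estimator (local mean over local standard deviation) and all parameter defaults are our assumptions for illustration; the published patent [11] defines the actual procedure:

import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_gamma(image, gamma_max=0.05, snr_max=20.0, win=15):
    """Illustrative adaptive regularization map in the spirit of Eq 6."""
    mean = uniform_filter(image, size=win)                 # local mean
    mean_sq = uniform_filter(image**2, size=win)
    std = np.sqrt(np.maximum(mean_sq - mean**2, 1e-12))    # local std
    snr = mean / std                                       # local SNR(x, y)
    # Eq 6: gamma falls from gamma_max toward 0 as the local SNR rises
    gamma = gamma_max * (1 - (2 / np.pi) * np.arctan(snr / snr_max))
    return np.maximum(gamma, 0.0)

Low-SNR regions thus receive a large γ (a strong penalty, so noise is not amplified), while high-SNR regions are deconvolved with a weak penalty so that genuine detail is preserved.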

Using the deconvolution algorithm offered by Leica Microsystems, the user can define the number of iterations for deconvolution or use the default option to have the algorithm determine the stopping criterion for convergence. The latter option is more time-efficient and removes the guesswork required of the user. Like Leica Microsystems' Instant Computational Clearing (ICC) algorithm, the described adaptive deconvolution method is included on all THUNDER imaging systems and is fully integrated into the imaging workflow [12].

Figure 3: By viewing the image (a) as a heat map from low SNR (black/purple) to high SNR (orange/red), local adaptive coefficients are created over the entire image to (b) regularize and constrain the deconvolution. The figure was adapted from https://downloads.leica-microsystems.com/Leica%20TCS%20SP8/Publications/LIGHTNING_WhitePaper.pdf

[Figure 3 panels: (a) local SNR map, low to high; (b) regularization map]


THUNDER: ICC and Deconvolution

How well deconvolution performs on a widefield fluorescence image depends on several factors, including the amount of light collected from adjacent planes, the sample thickness, the degree of scattered light, and the SNR of the image. For instance, in thick specimens an appreciable amount of light scattering can occur within the sample, causing deconvolution to fail to produce a restored image with improved resolution and contrast.

For samples that have a high SNR with minimal background noise, deconvolution can yield over-processed images with unwanted sharp boundary transitions between image features. An imaging workflow that adapts to diverse sample thicknesses and varying SNRs is therefore important.

In our previous technology brief, we discussed Leica Microsystems' ICC algorithm as a method to restore image contrast through the removal of background noise. This is done through the minimization of a non-quadratic cost function that estimates and subtracts the background noise from the image to improve contrast [12]. To address the previously mentioned pitfalls of deconvolution, Leica Microsystems introduced a workflow that combines both computational algorithms, allowing users to perform ICC alone or ICC with deconvolution. For the latter, the pairing is selectable between two different processing modalities based on the sample thickness and SNR, referred to as small and large volume computational clearing (SVCC and LVCC, respectively).


Figure 4: (a) A low-SNR widefield fluorescence image of a U2OS cell stained with SiR700-actin. Due to the low SNR, the true spatial features are difficult to differentiate from the background noise. (b) While the application of ICC improves the overall image contrast, the structural signal is still obscured and remains at the level of the background noise. (c) After SVCC, the image and its features are enhanced over the background noise, resulting in an image with improved contrast and resolution and increased SNR. Scale bar = 50 µm. Images courtesy of James Marr, PhD, from a sample supplied by Leica Microsystems.



Figure 5: A 70 μm widefield z-stack of uncleared mouse lung tissue for studying type I alveolar epithelial cells. The differentiation marker, colored cyan, labels the AT1 lineage, and the magenta label identifies the receptor for advanced glycation end products (RAGE). (a) A single z-plane at a depth of 52.25 μm was selected from the raw widefield image. Despite the z-plane having good SNR, the structural features of interest are obscured by the background noise. (b) After applying ICC (region within the red box), the background noise is removed; however, the spatial features are not fully recovered. (c) Only after LVCC are the features recovered, resulting in an image with increased SNR, contrast, and resolution. This holds for all z-planes within the 70 μm z-stack. Scale bar = 50 µm. Images courtesy of Yana Kazadaeva, from the lab of Dr. Tushar Desai, California.


When using SVCC, the adaptive deconvolution is performed prior to THUNDER ICC. In SVCC, a theoretical PSF is used in the deconvolution with knowledge of the system's optical parameters (the type of microscope objective, the emission wavelength, the sample embedding medium, etc.). Choosing SVCC is especially important for 'noisy' images, Figure 4, since the adaptive deconvolution improves the SNR prior to the automatic removal of the unwanted background via ICC, Figure 4C.

In LVCC, ICC is performed prior to deconvolution and uses a PSF that is influenced by the parameters of ICC. LVCC is ideally suited for samples that have higher SNRs and the larger background contributions that are common in thick samples, Figure 5. By performing ICC prior to deconvolution, the contrast of the spatial features is enhanced, Figure 5C, allowing for a more precise treatment of the remaining signal through deconvolution. The two orderings are sketched below.
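The difference between the two modes is purely one of ordering, which the following sketch makes explicit. The two helper functions are crude stand-ins of our own making (a large Gaussian blur for the background estimate and a plain RL loop), not the Leica ICC or adaptive deconvolution algorithms:

import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

def clear_background(image, strength=1.0, sigma=30):
    """Crude stand-in for ICC: subtract a scaled, smooth background estimate."""
    background = gaussian_filter(image, sigma)
    return np.clip(image - strength * background, 0, None)

def deconvolve(image, psf, n_iter=20, eps=1e-12):
    """Crude stand-in for the adaptive deconvolution: a plain RL loop."""
    o = np.full_like(image, image.mean() + eps)
    for _ in range(n_iter):
        ratio = image / (fftconvolve(o, psf, mode="same") + eps)
        o *= fftconvolve(ratio, psf[::-1, ::-1], mode="same")
    return o

def svcc(image, psf):
    # SVCC: deconvolve first (lifts weak signal above the noise), then clear
    return clear_background(deconvolve(image, psf))

def lvcc(image, psf):
    # LVCC: clear the dominant out-of-focus background first, then deconvolve
    return deconvolve(clear_background(image), psf)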

For both SVCC and LVCC, the strength of ICC is user-adjustable. The strength parameter s scales the amount of estimated background intensity (Ibackground) that is subtracted from the image (I), resulting in the final image (I′):

I′ = I − s · Ibackground   Eq 7.

Together, the strength parameter, SVCC, and LVCC allow the user to fine-tune the computational algorithms to their sample for the best treatment of their data.
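A worked instance of Eq 7. with a hypothetical strength of s = 0.8 (all intensity values invented for illustration):

import numpy as np

I = np.array([[100.0, 120.0], [90.0, 300.0]])            # measured intensities
I_background = np.array([[80.0, 85.0], [82.0, 90.0]])    # estimated background
s = 0.8                                                  # user-chosen strength
I_prime = I - s * I_background                           # Eq 7
print(I_prime)                                           # [[ 36.   52. ] [ 24.4 228. ]]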

As with ICC, both SVCC and LVCC can be applied during image acquisition or post-acquisition, with the raw data always being preserved. This means the user can directly compare the raw data with the processed data for further quantification. The topic of quantification will be discussed in our next technology brief.


Acknowledgements:

We would like to thank Kai Walter (Software Engineer), Louise Bertrand (Product Performance Manager Widefield), and Jan Schumacher (Advanced Workflow Specialist) for reading and providing their comments on this technical brief.

References:

1. Heinrichs, A. (2009, October 1). (1858, 1871) First histological stain, Synthesis of fluorescein. Retrieved November 09, 2020, from https://www.nature.com/milestones/milelight/full/milelight02.html

2. Sibarita, J. (2005). Deconvolution Microscopy. Microscopy Techniques, Advances in Biochemical Engineering/Biotechnology, 95, 201-243. doi:10.1007/b102215

3. Dey, N., Blanc-Féraud, L., Zimmer, C., Roux, P., Kam, Z., Olivo-Marin, J., Zerubia, J. (2004). 3D Microscopy Deconvolution using Richardson-Lucy Algorithm with Total Variation Regularization.

4. Rodriguez, P. (2013). Total Variation Regularization Algorithms for Images Corrupted with Different Noise Models: A Review. Journal of Electrical and Computer Engineering, 2013. doi:10.1155/2013/217021

5. Tao, M., Yang, J. (2009). Alternating direction algorithms for total variation deconvolution in image reconstruction. Optimization Online.

6. Roysam, B., Shrauner, J. A., Miller, M. I. (1988). Bayesian imaging using Good's roughness measure: implementation on a massively parallel processor. ICASSP-88, International Conference on Acoustics, Speech, and Signal Processing, New York, NY, USA, pp. 932-935, vol. 2. doi:10.1109/ICASSP.1988.196742

7. Verveer, P., Jovin, T. (1998). Image restoration based on Good's roughness penalty with application to fluorescence microscopy. J. Opt. Soc. Am. A, 15, 1077-1083.

8. Zhu, M. (2008). Fast Numerical Algorithms for Total Variation Based Image Restoration. [Unpublished Ph.D. dissertation] University of California, Los Angeles.

9. Good, I., & Gaskins, R. (1971). Nonparametric Roughness Penalties for Probability Densities. Biometrika, 58(2), 255-277. doi:10.2307/2334515

10. Oyamada, Y. (2011). Richardson-Lucy Algorithm with Gaussian Noise.

11. Zeische, F., Walter, K., Leica Microsystems CMS GmbH. Deconvolution Apparatus and Method Using a Local Signal-to-Noise Ratio. German patent application 18194617.9, 14.09.2018.

12. Felts, L., Kohli, V., Marr, J., Schumacher, J., Schlicker, O. (2020, October 01). An Introduction to Computational Clearing. Retrieved November 09, 2020, from https://www.leica-microsystems.com/science-lab/an-introduction-to-computational-clearing/


Leica Microsystems CMS GmbH | Ernst-Leitz-Strasse 17–37 | D-35578 Wetzlar (Germany)
Tel. +49 (0) 6441 29-0 | Fax +49 (0) 6441 29-2599
www.leica-microsystems.com/thunder | www.leica-microsystems.com

MC-0001965–27.01.2021. Copyright © 2021 Leica Microsystems CMS GmbH, Wetzlar, Germany. All rights reserved. Subject to modifications. LEICA and the Leica Logo are registered trademarks of Leica Microsystems IR GmbH.

Cover image: Mouse kidney section with Alexa Fluor™ 488 WGA, Alexa Fluor™ 568 Phalloidin, and DAPI. Sample is a FluoCells™ prepared slide #3 from Thermo Fisher Scientific, Waltham, MA, USA. Image courtesy of Dr. Reyna Martinez-De Luna, Upstate Medical University, Department of Ophthalmology and Visual Sciences.

Back cover image: Adult rat brain. Neurons (Alexa Fluor 488, green), astrocytes (GFAP, red), nuclei (DAPI, blue). Image courtesy of Prof. En Xu, Institute of Neurosciences and Department of Neurology of the Second Affiliated Hospital of Guangzhou Medical University, China.
