
    Topic Introduction

    Software Tools, Data Structures, and Interfaces for Microscope Imaging

    Nico Stuurman and Jason R. Swedlow

    The arrival of electronic photodetectors in biological microscopy has led to a revolution in the application of imaging in cell and developmental biology. The extreme photosensitivity of electronic photodetectors has enabled the routine use of multidimensional data acquisition spanning space and time and spectral range in live cell and tissue imaging. These techniques have provided key insights into the molecular and structural dynamics of living biology. However, digital photodetectors offer another advantage: they provide a linear mapping between the photon flux coming from the sample and the electronic signal they produce. Thus, an image presented as a visual representation of the sample is also a quantitative measurement of photon flux. These quantitative measurements are the basis of subsequent processing and analysis to improve signal contrast, to compare changes in the concentration of signal, and to reveal changes in cell structure and dynamics. For this reason, many laboratories and companies have committed their resources to software development, resulting in the availability of a large number of image-processing and analysis packages. In this article, we review the software tools for image data analysis that are now available and give some examples of their use in imaging experiments to reveal new insights into biological mechanisms. In our final section, we highlight some of the new directions for image analysis that are significant unmet challenges and present our own ideas for future directions.

    BACKGROUND

    A hallmark of a scientific experiment is the quantitative comparison of a control condition and a measure of a change or difference after some perturbation. In biology, microscopes are used to visualize the structure and behavior of cells, tissues, and organisms, and to assess changes before, during, or after a perturbation. A microscope collects light from a sample and forms an image, which is a representation of the sample, biased by any contrast mechanisms used to emphasize specific aspects of the sample. For the first 300 years of microscopy, this image was recorded with pencil and paper, and this artistic representation was then shared with others. The addition of photographic cameras on microscopes enabled mass reproduction and substantially reduced, but by no means eliminated, the viewer’s bias in recording the image for the first time. Even though the microscope image was directly projected onto the recording medium, the nonlinear response of film to photon flux and its relative insensitivity in low-light applications limited the application of microscopy for quantitative analysis.

    DIGITAL IMAGES

    What Is a Digital Image?

    Microscope digital images are measurements of photon flux across a defined grid or area. They are recorded using either a detector composed of an array of photosensitive elements (pixels) that records the whole field simultaneously, or a single-point detector that is scanned, usually as a raster, across the sample field to create a full image.

    Adapted from Live Cell Imaging, 2nd edition (ed. Goldman et al.). CSHL Press, Cold Spring Harbor, NY, USA, 2010.

    © 2012 Cold Spring Harbor Laboratory Press. Cite this article as Cold Spring Harbor Protoc; 2012; doi:10.1101/pdb.top067504


    The recorded value at each pixel in the image is a digitized measurement of photon flux at a specific point and corresponds to the voltage generated by electrons liberated by photons interacting with the detector surface. Computer software is used to display, manipulate, and store the array of measured photon fluxes as what we recognize as a digital microscope image.
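    In software, such an image is simply an array of numbers, one measurement per pixel. As a rough sketch (assuming the NumPy and tifffile libraries and a hypothetical single-plane TIFF file name), the stored photon-flux measurements can be read and inspected directly:

        import numpy as np
        import tifffile  # assumed available; reads TIFF files into NumPy arrays

        # Load one monochrome plane; the file name is hypothetical.
        plane = tifffile.imread("example_plane.tif")

        # Each element is a digitized photon-flux measurement at one pixel.
        print(plane.shape, plane.dtype)                 # e.g., (512, 512) uint16
        print(plane.min(), plane.max(), plane.mean())   # summary of the measured intensities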

    The Multidimensional Five-Dimensional Image

    Each array of pixels generates a two-dimensional (2D) image, a representation of the sample. However, it is now common to assemble these 2D micrographs into larger entities. For example, a series of 2D micrographs taken at a defined focus interval can be thought of as a single three-dimensional (3D) image that represents the 3D cell under the objective lens. Alternatively, a series of 2D micrographs taken at a single focal position at defined time intervals forms a different kind of 3D image: a time-lapse movie at a defined focal plane. It is also possible to record a focal series over time and to create a four-dimensional (4D) movie. Any of these approaches can be further expanded by recording different contrast methods; the most common, by far, is the use of multiple fluorophores to record the concentration of different molecules at the same time. In the limit, this generates a five-dimensional (5D) image. We have chosen to use the singular “image” to emphasize the integrated nature of this data structure and that the individual time points, focal planes, and spectral measurements are all part of a single measurement.

    Regardless of the specific details of an experiment, an image actually has all of these dimensions, but some are only of unitary extent. In the simplest case, the recording of a fluorescence signal from a single wavelength and focus position at a specific time generates a 5D image in which the focus, time, and wavelength dimensions all exist but are simply of extent 1. Thus, recording more than one fluorophore simply extends the spectral dimension, just as recording a time series extends the time dimension. In this approach, the extents change but the intrinsic dimensionality of the image does not. The advantage of this approach is that it provides a single data structure for all data storage, display, and processing. For example, processing of a 5D image only requires definition of focal planes, time points, and wavelengths. In most cases, a single application, aware of the 5D form of the data file, suffices to handle data of different extents.
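    A minimal sketch of such a 5D data structure (in Python with NumPy, using an assumed T-C-Z-Y-X axis order; real file formats and libraries differ in their conventions) might look as follows:

        import numpy as np

        class Image5D:
            """Container for a 5D image ordered (time, channel, focal plane, y, x)."""
            def __init__(self, pixels: np.ndarray):
                if pixels.ndim != 5:
                    raise ValueError("expected a 5D array ordered (T, C, Z, Y, X)")
                self.pixels = pixels

            def plane(self, t=0, c=0, z=0):
                # Return a single 2D (Y, X) focal plane at the given indices.
                return self.pixels[t, c, z]

        # A single-snapshot fluorescence image is still 5D; T, C, and Z have extent 1.
        snapshot = Image5D(np.zeros((1, 1, 1, 512, 512), dtype=np.uint16))

        # A two-channel, 20-plane focal series recorded at 50 time points.
        movie = Image5D(np.zeros((50, 2, 20, 512, 512), dtype=np.uint16))

        print(snapshot.plane().shape)              # (512, 512)
        print(movie.plane(t=10, c=1, z=5).shape)   # (512, 512)

    The same plane() call serves both objects; only the extents differ, which is the advantage of a single 5D data structure described above.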

    One of the most difficult parts of working with these data structures is the lack of a defined nomenclature for referring to the data. Images that sample space only are sometimes referred to as “3D images” or “stacks.” Time-lapse images are often referred to as “movies” or “4D images.” Time-lapse data can be stored in their original format or compressed and assembled into a single file and stored in proprietary formats (QuickTime, AVI, WAV, etc.). These compressed formats are convenient in that they substantially reduce the size of files and are supported by common presentation software (e.g., PowerPoint and Keynote), but they do not necessarily retain the pixel data in a form that preserves the integrity of the original data measurements. It is important to be aware of the distinction between compression methods that are lossless (i.e., the original data can be restored) and those that are lossy (often much better at reducing storage, but losing the ability to restore the original data).
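    The difference between lossless and lossy storage can be demonstrated in a few lines. The sketch below assumes the Pillow and NumPy libraries and uses a synthetic 8-bit image; PNG stands in for any lossless format and JPEG for a lossy one:

        import numpy as np
        from PIL import Image  # Pillow, assumed available

        # Synthetic 8-bit "plane" standing in for real detector data.
        original = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)

        # Lossless (PNG): the decoded pixels match the original measurements exactly.
        Image.fromarray(original).save("plane.png")
        restored_png = np.asarray(Image.open("plane.png"))
        print(np.array_equal(original, restored_png))   # True

        # Lossy (JPEG): smaller files, but the original values cannot be restored.
        Image.fromarray(original).save("plane.jpg", quality=90)
        restored_jpg = np.asarray(Image.open("plane.jpg"))
        print(np.array_equal(original, restored_jpg))   # almost always False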

    Monochrome versus Color

    Microscope images can be either monochrome or color, and it is critical to know the derivation and description of the data in an image to understand what is actually being measured. Monochrome images are single-channel images and are the most direct mapping of the photon flux measurements recorded by the photoelectronic detector. They are used as the basis of more elaborate displays that use color to encode different channels or different lookup tables. Color images may be created based on the display of multiple monochrome images; however, images may also be stored as color (e.g., JPEG). Analysis of color images (e.g., images of histology sections) is possible but often starts by decomposing the color image into the individual RGB (red–green–blue) channels and by processing them separately. Analysis based on differences in intensity should be undertaken with caution, as the files that store color images rarely retain the quantitative mapping of photon flux measured by the detector.
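    As a rough illustration of that decomposition (again assuming Pillow and NumPy, with a hypothetical file name), a stored color image can be split into its R, G, and B channels and each treated as a separate monochrome image:

        import numpy as np
        from PIL import Image  # Pillow, assumed available

        # Load a stored color image; the file name is hypothetical.
        rgb = np.asarray(Image.open("histology_section.jpg").convert("RGB"))
        red, green, blue = rgb[..., 0], rgb[..., 1], rgb[..., 2]

        # Each channel is now a 2D array of 8-bit display values; note that these
        # generally no longer map quantitatively to the measured photon flux.
        for name, channel in (("red", red), ("green", green), ("blue", blue)):
            print(name, channel.shape, channel.dtype, channel.mean())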


    Bit Depth

    When stored in digital form, data from electronic detectors are stored in bits, or more formally, a base-2 representation of the quantitative measurements from the photodetector. By convention, a byte is a sequence of 8 bits and thus can represent numerical values from 0 to 2⁸ – 1, or 255. Data that can be represented in this range are referred to as 8-bit data and have a bit depth of 8 bits. Most scientific-grade charge-coupled device (CCD) cameras digitize their data to either 12 bits (data range from 0 to 2¹² – 1, or 4095) or 16 bits (data range from 0 to 2¹⁶ – 1, or 65,535). When stored in computer memory, 8-bit data map easily to a single byte, whereas 12-bit or 16-bit data must be stored in 2 bytes in sequence to properly represent the data. In general, data-acquisition software handles storage of these data without any intervention from the user. However, when moving data between different software programs, unexpected transformations can occur. For example, data recorded with a 12-bit CCD camera will appear as 16-bit data to any visualization or analysis program that reads it. Most microscopy software tools handle this difference properly; however, some (such as Photoshop) display an image assuming a possible dynamic range of 2¹⁶, requiring the user to manually change the display settings. For these
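    The storage and display consequences of bit depth can be sketched with NumPy alone: 12-bit camera data live in a 16-bit container, and a display program that assumes the full 16-bit range will show them as nearly black unless the scaling is adjusted to the data's actual range.

        import numpy as np

        # Simulated 12-bit camera data (values 0-4095) stored, as is typical, in uint16.
        camera_12bit = np.random.randint(0, 4096, size=(512, 512)).astype(np.uint16)

        print(camera_12bit.dtype)   # uint16: the 2-byte storage container
        print(camera_12bit.max())   # <= 4095: the actual 12-bit data range

        # Scale for an 8-bit display using the data's own range rather than the
        # container's full 16-bit range (0-65,535), which would appear nearly black.
        lo, hi = int(camera_12bit.min()), int(camera_12bit.max())
        display = ((camera_12bit.astype(float) - lo) / (hi - lo) * 255).astype(np.uint8)
        print(display.min(), display.max())   # 0 255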