
Deep neural networks for aberrations compensation in retinal imaging acquired by digital holography

J. Rivet, G. Tochon, S. Meimon, M. Paques, T. Géraud, M. Atlan

ESPCI Paris, PSL Research University, Sorbonne Université, CNRS, Institut Langevin, 1 rue Jussieu, F-75005 Paris, France
EPITA Research and Development Laboratory (LRDE), 14-16 rue Voltaire, F-94270 Le Kremlin-Bicêtre, France

ONERA, the French Aerospace Lab, F-92320 Châtillon, France
Institut de la Vision, CIC 1423, INSERM UMR-S 968, CNRS, Sorbonne Université, 17 rue Moreau, 75012 Paris, France

Corresponding author: [email protected]

Abstract

Optical Coherence Tomography (OCT) is one of the most widely available technologies for retinal imaging. It is commonly used to monitor and study eye diseases such as glaucoma, macular degeneration and diabetes. We develop computational imaging approaches based on wide-field interferometric measurements for OCT imaging in order to increase the imaging throughput. With our setup and our home-made software Holovibes, we can reconstruct 3D holographic images in real time, at 10 billion voxels per second.

In computational imaging by digital holography, the lateral resolution of retinal images is limited by the aberrations of the eye to about 20 microns. To overcome this limitation and improve the lateral resolution up to the diffraction limit, the eye's aberrations have to be canceled. We are investigating new digital aberration compensation schemes to circumvent the limitations of state-of-the-art aberration correction algorithms.

Setups

OCT setup: the source is a tunable laser whose wavelength varies linearly from 870 nm to 820 nm in 0.5 s. The CMOS camera records 1024 × 1024 pixel images at a frame rate of 512 Hz with 16 bit/pixel quantization.

OCT images

Real-time holographic tomography of a semi-transparent sample of 400-800 micron silica beads wrapped with tape.

Our setup provides en-face and axial images at the same time. It is still a prototype. Imaging in the eye was demonstrated at much faster acquisition rates (60 kHz) to circumvent physiological motion [5].

Axial field of view is about 1.8 mm, with a resolution of 14 microns.

Input throughput: 512 megabytes per second. Output throughput: 10 billion voxels per second. Real-time images are obtained with Holovibes.
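
These axial figures can be checked directly from the sweep parameters given in the Setups section. The short Python sketch below uses the conventions that reproduce the quoted numbers (coherence length λ²/Δλ for the axial resolution, λ²/(2δλ) for the axial range, with δλ the wavelength step between consecutive frames); it is an illustration, not part of the acquisition software.

```python
# Axial resolution and axial range implied by the stated sweep parameters
# (870 nm -> 820 nm in 0.5 s, camera at 512 Hz). Illustrative sketch only.
lambda_min, lambda_max = 820e-9, 870e-9        # sweep limits (m)
lambda_c = 0.5 * (lambda_min + lambda_max)     # centre wavelength (m)
delta_lambda = lambda_max - lambda_min         # swept bandwidth (m)
n_frames = int(512 * 0.5)                      # frames per sweep = 256
d_lambda = delta_lambda / n_frames             # wavelength step per frame (m)

axial_resolution = lambda_c**2 / delta_lambda  # ~14.3e-6 m, i.e. ~14 microns
axial_range = lambda_c**2 / (2 * d_lambda)     # ~1.83e-3 m, i.e. ~1.8 mm

print(f"axial resolution ~ {axial_resolution * 1e6:.1f} microns")
print(f"axial field of view ~ {axial_range * 1e3:.2f} mm")
```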

Deep neural networks for aberrations compensation

A 5-layer denseblock (left) and U-net (right).

IDiffNet [1] is able to reconstruct phase images propagated through a scattering medium. It succeeds in computational image generation from interferogram measurements. It uses dense blocks from DenseNet [2] and skip connections from U-Net [3], improving feature propagation in the network. We are adapting this type of network in order to reconstruct intensity images of the retina's layers. The network will be trained on the eye fundus database from the 15-20 hospital in Paris.
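
To make the denseblock vocabulary concrete, here is a minimal PyTorch sketch of a DenseNet-style block [2], in which every layer receives the concatenation of all previously produced feature maps; this dense connectivity is what improves feature propagation. It is a generic illustration, not the network trained in this work; the growth rate, layer composition and channel counts are assumptions.

```python
# Generic DenseNet-style block (a sketch, not this work's exact architecture).
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels: int, growth_rate: int = 16, n_layers: int = 5):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(n_layers):
            # layer i sees in_channels + i * growth_rate input channels
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1),
            ))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            # dense connectivity: concatenate the input and all previous outputs
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

# In a U-Net-like encoder/decoder [3], such blocks replace plain convolutions,
# and skip connections concatenate encoder features to the decoder stages.
block = DenseBlock(in_channels=1)
y = block(torch.randn(1, 1, 64, 64))   # output has 1 + 5 * 16 = 81 channels
```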

Digital image formation

To obtain retinal images, the fields corresponding to the interferograms acquired with the CMOS camera are digitally propagated from the camera plane to the image (reconstruction) plane. Considering that the camera plane corresponds to z = 0 and that the distance between both planes is z, the field in the image plane can be expressed as:

E(x, y, z) = E(x, y, 0) ∗ h(x, y, z), (1)

where h(x, y, z) is the impulse response of free-space propagation:

h(x, y, z) = e^{ikz} / (iλz) · e^{ik(x² + y²) / (2z)}. (2)

The acquired intensity interferogram can be related to the field by I(x, y, 0) = |E(x, y, 0)|². In the same way, H(x, y, z) = |E(x, y, z)|². The same relation as Eq. (1) holds between interferogram and hologram:

H(x, y, z) = I(x, y, 0) ∗ h(x, y, z). (3)

A temporal Fourier transform is then applied to obtain the frequency and depth dependency.
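
Numerically, Eqs. (1)-(3) amount to an FFT-based convolution of each recorded frame with the Fresnel kernel h, followed by the temporal Fourier transform mentioned above. The NumPy sketch below is an illustration of that pipeline, not the Holovibes code; the pixel pitch, wavelength, reconstruction distance and the small placeholder frame stack are assumptions (real acquisitions are 1024 × 1024 pixels at 512 Hz).

```python
# Sketch of Eqs. (1)-(3): Fresnel propagation by FFT convolution, then a
# temporal Fourier transform. Parameters and data are illustrative assumptions.
import numpy as np

def fresnel_kernel(n, pitch, wavelength, z):
    """Impulse response h(x, y, z) of free-space propagation, Eq. (2)."""
    k = 2 * np.pi / wavelength
    x = (np.arange(n) - n // 2) * pitch
    X, Y = np.meshgrid(x, x)
    return np.exp(1j * k * z) / (1j * wavelength * z) \
        * np.exp(1j * k * (X**2 + Y**2) / (2 * z))

def propagate(frames, pitch, wavelength, z):
    """Holograms H(x, y, z) = I(x, y, 0) * h(x, y, z), Eq. (3), via FFTs."""
    h = fresnel_kernel(frames.shape[-1], pitch, wavelength, z)
    H = np.fft.fft2(np.fft.ifftshift(h))           # transfer function of the kernel
    return np.fft.ifft2(np.fft.fft2(frames) * H)   # convolution theorem, per frame

# Placeholder interferogram stack (time, y, x); real frames are 1024 x 1024.
frames = np.random.rand(16, 256, 256).astype(np.float32)
holograms = propagate(frames, pitch=5.5e-6, wavelength=845e-9, z=0.02)
tomogram = np.fft.fft(holograms, axis=0)           # frequency / depth dependency
```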

Aberrations compensation

1 Simulation

Images extracted from the eye fundus database collected at the 15-20 hospital in Paris.

Sketch of the optical configuration (cornea, iris and pupil, lens, vitreous gel, retina, macula and fovea, optic nerve head).

Original image, PSF with aberrations (astigmatism), and image with aberrations.

Intensity images are convolved with a point spread function (PSF) corresponding to the impulse response of the eye's optical system that the light beam crosses. If there is no aberration, the wavefront is not distorted, the PSF is a point and the image matches the original one. If there are aberrations, created for instance by the cornea, the wavefront is distorted and the PSF is no longer a point. The resolution of the resulting image then decreases.
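
A minimal way to reproduce this kind of simulation is to build an astigmatic pupil phase, take the squared magnitude of its Fourier transform as the PSF, and convolve the PSF with a fundus image. The NumPy sketch below is an illustration under assumed parameters (pupil sampling, aberration amplitude, placeholder image); it is not the exact simulation pipeline of this work.

```python
# Simulated astigmatism: aberrated pupil -> PSF -> blurred image (sketch).
import numpy as np

n = 256
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
r, theta = np.hypot(x, y), np.arctan2(y, x)
pupil = (r <= 1.0).astype(float)                      # circular pupil mask

z22 = np.sqrt(6) * r**2 * np.cos(2 * theta)           # Zernike astigmatism term
field = pupil * np.exp(1j * 3.0 * z22)                # ~3 rad of astigmatism (assumed)

psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field))))**2
psf /= psf.sum()                                      # energy-normalised PSF

image = np.random.rand(n, n)                          # placeholder fundus image
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
# With 3 rad of astigmatism the PSF is elongated and 'blurred' loses resolution;
# setting the coefficient to 0 recovers a diffraction-limited, point-like PSF.
```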

2 State of the art

Shack-Hartmann wavefront sensor principle (plane, tilted and aberrated wavefronts).

The wavefront is sampled by an array of sub-lenses (lenslets), each of which forms a PSF for its part of the wavefront in the focal plane. If the wavefront is not aberrated, each PSF lies at the center of its sub-image in the focal plane. If the wavefront is distorted, each PSF is shifted according to the local slope of the wavefront. The relation between wavefront and PSF is given by PSF = FFT{Γ_wavefront}, where FFT denotes the Fast Fourier Transform and Γ_wavefront is the autocorrelation of the wavefront in the pupil plane.
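
The spot-displacement behaviour described above can be illustrated numerically: split an aberrated wavefront into sub-apertures, form each lenslet's focal spot as the squared magnitude of the Fourier transform of the local field, and measure the spot centroid, which shifts in proportion to the local wavefront slope. The sketch below is a toy model with assumed grid sizes and aberration, not a description of an actual sensor.

```python
# Toy Shack-Hartmann model: spot centroids follow local wavefront slopes.
import numpy as np

n_sub, n_lenslets = 32, 8                   # pixels per sub-aperture, lenslets per side
n = n_sub * n_lenslets
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
wavefront = 4.0 * (x**2 - y**2)             # assumed aberration (astigmatism), radians

def spot_centroid(phase_patch):
    """Focal spot of one lenslet and its centroid offset (in pixels)."""
    spot = np.abs(np.fft.fftshift(np.fft.fft2(np.exp(1j * phase_patch))))**2
    idx = np.arange(n_sub) - n_sub // 2
    cx = (spot @ idx).sum() / spot.sum()    # shift along x
    cy = (idx @ spot).sum() / spot.sum()    # shift along y
    return cx, cy

shifts = np.zeros((n_lenslets, n_lenslets, 2))
for i in range(n_lenslets):
    for j in range(n_lenslets):
        patch = wavefront[i * n_sub:(i + 1) * n_sub, j * n_sub:(j + 1) * n_sub]
        shifts[i, j] = spot_centroid(patch)  # proportional to the local slope
```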

Aberration compensation methods:
Reconstruction of digital holograms in subapertures: wavefront measurement, lateral phase correlation analysis and non-iterative aberration compensation of retinal holograms [4].
Iterative digital aberration compensation: minimization of the local entropy of speckle-averaged tomographic volumes [5] (a simplified sketch of this idea is given below).
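
To make the second, iterative family of methods concrete, here is a simplified Python sketch of entropy-minimization digital aberration compensation: a trial Zernike phase is applied in the Fourier (pupil) plane of the reconstructed field and its coefficients are optimized to minimize the entropy of the resulting intensity image. This is a generic illustration, not the speckle-averaged local-entropy scheme of [5]; the Zernike basis, optimizer and placeholder field are assumptions.

```python
# Simplified entropy-minimisation aberration compensation (generic sketch).
import numpy as np
from scipy.optimize import minimize

n = 256
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
r, theta = np.hypot(x, y), np.arctan2(y, x)
zernikes = np.stack([                         # defocus and the two astigmatisms
    np.sqrt(3) * (2 * r**2 - 1),
    np.sqrt(6) * r**2 * np.cos(2 * theta),
    np.sqrt(6) * r**2 * np.sin(2 * theta),
])

def image_entropy(coeffs, field):
    """Entropy of the image reconstructed with a trial pupil-phase correction."""
    phase = np.tensordot(coeffs, zernikes, axes=1)
    pupil = np.fft.fft2(field) * np.exp(-1j * np.fft.ifftshift(phase))
    intensity = np.abs(np.fft.ifft2(pupil))**2
    p = intensity / intensity.sum()
    return -np.sum(p * np.log(p + 1e-12))     # sharper images have lower entropy

field = np.fft.ifft2(np.random.rand(n, n))    # placeholder complex reconstructed field
result = minimize(image_entropy, x0=np.zeros(3), args=(field,), method="Nelder-Mead")
estimated_coeffs = result.x                   # estimated Zernike aberration coefficients
```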

Further reading

References

[1] Shuai Li, Mo Deng, Justin Lee, Ayan Sinha, and George Barbastathis. Imaging through glass diffusers using densely connected convolutional networks. arXiv preprint arXiv:1711.06810, 2017.

[2] Gao Huang, Zhuang Liu, Kilian Q. Weinberger, and Laurens van der Maaten. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 1, page 3, 2017.

[3] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234-241. Springer, 2015.

[4] Laurin Ginner, Tilman Schmoll, Abhishek Kumar, Matthias Salas, Nastassia Pricoupenko, Lara M. Wurster, and Rainer A. Leitgeb. Holographic line field en-face OCT with digital adaptive optics in the retina in vivo. Biomedical Optics Express, 9(2):472-485, 2018.

[5] Dierck Hillmann, Hendrik Spahr, Carola Hain, Helge Sudkamp, Gesa Franke, Clara Pfäffle, Christian Winter, and Gereon Hüttmann. Aberration-free volumetric high-speed imaging of in vivo retina. Scientific Reports, 6:35209, 2016.