
Automatic phase aberration compensation for digital holographic microscopy based on deep learning background detection

THANH NGUYEN,1 VY BUI,1 VAN LAM,2 CHRISTOPHER B. RAUB,2 LIN-CHING CHANG,1 AND GEORGE NEHMETALLAH1,*

1Electrical Engineering and Computer Science Department, The Catholic University of America, 620 Michigan Ave, Washington, DC 20017, USA
2Biomedical Department, The Catholic University of America, 620 Michigan Ave, Washington, DC 20017, USA
*[email protected]

Abstract: We propose a fully automatic technique to obtain aberration-free quantitative phase imaging in digital holographic microscopy (DHM) based on deep learning. Traditional DHM solves the phase aberration compensation problem by manually detecting the background for quantitative measurement. This is a drawback for real-time implementation and for dynamic processes such as cell migration phenomena. A recent automatic aberration compensation approach using principal component analysis (PCA) in DHM avoids human intervention regardless of the cells' motion. However, it corrects spherical/elliptical aberrations only and disregards higher-order aberrations. Traditional image segmentation techniques can be employed to spatially detect cell locations. Ideally, automatic image segmentation techniques make real-time measurement possible. However, existing automatic unsupervised segmentation techniques perform poorly when applied to DHM phase images because of aberrations and speckle noise. In this paper, we propose a novel method that combines a supervised deep learning technique based on a convolutional neural network (CNN) with Zernike polynomial fitting (ZPF). The deep learning CNN is implemented to perform automatic background region detection, which allows the ZPF to compute the self-conjugated phase that compensates for most aberrations. © 2017 Optical Society of America

Vol. 25, No. 13 | 26 Jun 2017 | Optics Express 15043 | #292793 | https://doi.org/10.1364/OE.25.015043 | Received 17 Apr 2017; revised 14 May 2017; accepted 15 May 2017; published 21 Jun 2017

OCIS codes: (090.1995) Digital holography; (100.3010) Image reconstruction techniques; (090.1000) Aberration compensation; (120.5050) Phase measurement; (100.4996) Pattern recognition, neural networks.

References and links

1. H. Nguyen, D. Nguyen, Z. Wang, H. Kieu, and M. Le, “Real-time, high-accuracy 3D imaging and shape measurement,” Appl. Opt. 54(1), A9–A17 (2015).

2. K. Hien, T. Pan, Z. Wang, M. Le, H. Nguyen, and M. Vo, “Accurate 3D shape measurement of multiple separate objects with stereo vision,” Meas. Sci. Technol. 25(3), 035401 (2014).

3. T. T. A. Nguyen, H. N. D. Le, M. Vo, Z. Wang, L. Luu, and J. C. Ramella-Roman, “Three-dimensional phantoms for curvature correction in spatial frequency domain imaging,” Biomed. Opt. Express 3(6), 1200–1214 (2012).

4. T. Nguyen, G. Nehmetallah, D. Tran, A. Darudi, and P. Soltani, “Fully automated, high speed, tomographic phase object reconstruction using the transport of intensity equation in transmission and reflection configurations,” Appl. Opt. 54(35), 10443–10453 (2015).

5. T. Nguyen and G. Nehmetallah, “Non-interferometric tomography of phase objects using spatial light modulators,” J. Imaging 2(4), 30 (2016).

6. N. Pavillon, J. Kühn, C. Moratal, P. Jourdain, C. Depeursinge, P. J. Magistretti, and P. Marquet, “Early cell death detection with digital holographic microscopy,” PLoS One 7(1), e30912 (2012).

7. J. Kühn, F. Montfort, T. Colomb, B. Rappaz, C. Moratal, N. Pavillon, P. Marquet, and C. Depeursinge, “Submicrometer tomography of cells by multiple-wavelength digital holographic microscopy in reflection,” Opt. Lett. 34(5), 653–655 (2009).

8. J. Kühn, E. Shaffer, J. Mena, B. Breton, J. Parent, B. Rappaz, M. Chambon, Y. Emery, P. Magistretti, C. Depeursinge, P. Marquet, and G. Turcatti, “Label-free cytotoxicity screening assay by digital holographic microscopy,” Assay Drug Dev. Technol. 11(2), 101–107 (2013).


9. B. Rappaz, E. Cano, T. Colomb, J. Kühn, C. Depeursinge, V. Simanis, P. J. Magistretti, and P. Marquet, “Noninvasive characterization of the fission yeast cell cycle by monitoring dry mass with digital holographic microscopy,” J. Biomed. Opt. 14(3), 034049 (2009).

10. N. Pavillon, A. Benke, D. Boss, C. Moratal, J. Kühn, P. Jourdain, C. Depeursinge, P. J. Magistretti, and P. Marquet, “Cell morphology and intracellular ionic homeostasis explored with a multimodal approach combining epifluorescence and digital holographic microscopy,” J. Biophotonics 3(7), 432–436 (2010).

11. N. Warnasooriya, F. Joud, P. Bun, G. Tessier, M. Coppey-Moisan, P. Desbiolles, M. Atlan, M. Abboud, and M. Gross, “Imaging gold nanoparticles in living cell environments using heterodyne digital holographic microscopy,” Opt. Express 18(4), 3264–3273 (2010).

12. Y. Fang, C. Y. Y. Iu, C. N. P. Lui, Y. Zou, C. K. M. Fung, H. W. Li, N. Xi, K. K. L. Yung, and K. W. C. Lai, “Investigating dynamic structural and mechanical changes of neuroblastoma cells associated with glutamate-mediated neurodegeneration,” Sci. Rep. 4, 7074 (2014).

13. R. John and V. P. Pandiyan, “An optofluidic bio-imaging platform for quantitative phase imaging of lab on a chip devices,” Appl. Opt. 55(3), A54–A59 (2016).

14. L. Williams, P. P. Banerjee, G. Nehmetallah, and S. Praharaj, “Holographic volume displacement calculations via multiwavelength digital holography,” Appl. Opt. 53(8), 1597–1603 (2014).

15. G. Nehmetallah and P. P. Banerjee, “Applications of digital and analog holography in 3D imaging,” Adv. Opt. Photonics 4(4), 472–553 (2012).

16. E. Cuche, P. Marquet, and C. Depeursinge, “Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms,” Appl. Opt. 38(34), 6994–7001 (1999).

17. G. Nehmetallah, R. Aylo, and L. Williams, Analog and Digital Holography with MATLAB® (SPIE Press, 2015).

18. T. Nguyen, G. Nehmetallah, C. Raub, S. Mathews, and R. Aylo, “Accurate quantitative phase digital holographic microscopy with single- and multiple-wavelength telecentric and nontelecentric configurations,” Appl. Opt. 55(21), 5666–5683 (2016).

19. C. Zuo, Q. Chen, W. Qu, and A. Asundi, “Phase aberration compensation in digital holographic microscopy based on principal component analysis,” Opt. Lett. 38(10), 1724–1726 (2013).

20. H. N. D. Le, M. S. Kim, and D.-H. Kim, “Comparison of singular value decomposition and principal component analysis applied to hyperspectral imaging of biofilm,” in IEEE Photonics Conference (IPC, 2012), pp. 23–27.

21. T. Colomb, F. Montfort, J. Kühn, N. Aspert, E. Cuche, A. Marian, F. Charrière, S. Bourquin, P. Marquet, and C. Depeursinge, “Numerical parametric lens for shifting, magnification, and complete aberration compensation in digital holographic microscopy,” J. Opt. Soc. Am. A 23(12), 3177–3190 (2006).

22. T. Colomb, E. Cuche, F. Charrière, J. Kühn, N. Aspert, F. Montfort, P. Marquet, and C. Depeursinge, “Automatic procedure for aberration compensation in digital holographic microscopy and applications to specimen shape compensation,” Appl. Opt. 45(5), 851–863 (2006).

23. X. Bresson, S. Esedoḡlu, P. Vandergheynst, J. P. Thiran, and S. Osher, “Fast global minimization of the active contour/snake model,” J. Math. Imaging Vis. 28(2), 151–167 (2007).

24. S. A. Hojjatoleslami and J. Kittler, “Region growing: a new approach,” IEEE Trans. Image Process. 7(7), 1079–1084 (1998).

25. Y. Boykov and G. Funka-Lea, “Graph cuts and efficient N-D image segmentation,” Int. J. Comput. Vis. 70(2), 109–131 (2006).

26. L. Grady, “Random walks for image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 28(11), 1768–1783 (2006).

27. W. Khan, “Image segmentation techniques: A survey,” J. Image Graphics 1(4), 166–170 (2013).

28. R. H. Laprade, “Split-and-merge segmentation of aerial photographs,” Comput. Vis. Graph. Image Process. 44(1), 77–86 (1988).

29. J. B. T. M. Roerdink and A. Meijster, “The watershed transform: Definitions, algorithms and parallelization strategies,” Fundam. Inform. 41(1–2), 187–228 (2000).

30. V. Bui and L.-C. Chang, “Deep learning architectures for hard character classification,” in Proc. Int. Conf. Artif. Int. (2016), pp. 108.

31. J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015).

32. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer International Publishing, 2015).

33. D. Harris and S. Harris, Digital Design and Computer Architecture, 2nd ed. (Morgan Kaufmann, 2012).

34. A. L. Maas, A. Y. Hannun, and A. Y. Ng, “Rectifier nonlinearities improve neural network acoustic models,” in Proc. of the 30th Intern. Conf. on Machine Learning (ICML, 2013).

35. S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167.

36. H. Noh, S. Hong, and B. Han, “Learning deconvolution network for semantic segmentation,” in Proceedings of the IEEE International Conference on Computer Vision (2015).

37. I. Sutskever, J. Martens, G. E. Dahl, and G. E. Hinton, “On the importance of initialization and momentum in deep learning,” ICML 3(28), 1139–1147 (2013).


38. M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: A system for large-scale machine learning,” in Proc. of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI, 2016).

39. M. A. Herráez, D. R. Burton, M. J. Lalor, and M. A. Gdeisat, “Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path,” Appl. Opt. 41(35), 7437–7444 (2002).

40. C. B. Raub, V. Suresh, T. Krasieva, J. Lyubovitsky, J. D. Mih, A. J. Putnam, B. J. Tromberg, and S. C. George, “Noninvasive assessment of collagen gel microstructure and mechanics using multiphoton microscopy,” Biophys. J. 92(6), 2212–2222 (2007).

1. Introduction

Three-dimensional image retrieval techniques are important for many applications [1–5]. These techniques can roughly be divided into interferometric and non-interferometric techniques. Digital holographic microscopy (DHM) is an interferometric, non-invasive technique for acquiring real-time quantitative phase images, with an enormous impact in fields such as the biology of living cells [6–9], neural science [10], nanoparticle tracking [11], biophotonics, bioengineering and biological processes [12], microfluidics [13], and metrology [14, 15]. A traditional DHM system records a digital hologram optically using a microscope objective (MO), and the image reconstruction is performed digitally using optical propagation techniques [16, 17]. However, the use of an MO introduces phase aberrations which are superposed on the biological sample (object). A successful image reconstruction requires very tedious alignment and precise measurement of system parameters, such as the reference beam angle, the reconstruction distance, and the MO's focal length, which are often difficult to achieve in a laboratory environment. To overcome these difficulties, in a previous analysis, multiple-wavelength DHM and telecentric DHM configurations were employed, which cancel the bulk of the optical phase aberration due to the MO and the reference beam [18]. Residual aberrations can be compensated digitally using principal component analysis (PCA) [19, 20] or Zernike polynomial fitting (ZPF) [18, 21, 22]. However, a multi-wavelength source makes the system setup more complicated and expensive, and the existing digital compensation techniques have other drawbacks. ZPF requires background information to find the phase residual, which is detected semi-manually by cropping a background area to perform the fitting [18, 21, 22]. PCA, on the other hand, automatically predicts the phase residual by creating a self-conjugated phase to compensate for the aberrations, but assumes that the phase aberrations have only linear and spherical components, leaving higher-order phase aberrations unaccounted for. Therefore, automatic detection of the background areas in DHM is highly desirable. Many segmentation techniques have been proposed. These can be divided into semi-automatic techniques, such as active contour [23], region growing [24], graph cut [25], and random walker [26], which require predefined seeds, and fully automatic techniques, such as edge-based [27], region-based [27], split-and-merge [28], and watershed techniques [29]. However, in the case of DHM, these existing methods are not reliable because of the overwhelming phase aberrations and speckle noise in the images.

In this work, we propose a combination of ZPF with fully automatic background detection using a deep learning convolutional neural network (CNN) to compensate for all phase aberrations without human intervention such as manual cropping. This technique works with single- and multiple-wavelength telecentric configurations. Theoretical and experimental results will be used to quantitatively assess the growth and migratory behavior of invasive cancer cells.

2. Bi-telecentric DHM optical setup

Figure 1 shows the bi-telecentric digital holographic microscopy (BT-DHM) system in vertical transmission mode, suited for biological sample analyses. A similar setup can also be used in reflection mode. The setup uses an afocal configuration, where the back focal plane of the MO coincides with the front focal plane of the tube lens ($f_o \equiv f_{TL}$) and the object is placed at the front focal plane of the MO, resulting in the cancellation of the bulk of the spherical phase curvature normally present in traditional DHM systems. The optical beam from a HeNe laser travels through a spatial filter and periscope system and is collimated with a collimating lens to produce a plane wave. The collimated beam is split into a reference beam and an object beam, the latter focused on the biological sample using the afocal configuration. The two beams, tilted by a small angle (<1°) with respect to each other, are recombined using a beam splitter and interfere on a CCD to generate an off-axis hologram. The magnification of the BT-DHM system is $M = -f_{TL}/f_o$.

Fig. 1. The BT-DHM system in transmission mode: (a) vertical tabletop setup and (b) optical schematic.

The most common numerical algorithms used for reconstructing digital holograms are the discrete Fresnel transform, the convolution approach, and the angular spectrum method, which is defined as [15, 17]:

$$H(f_x, f_y) = F\{h(x, y)\} = \iint_{-\infty}^{\infty} h(x, y)\, \exp[-i 2\pi (x f_x + y f_y)]\, dx\, dy, \tag{1}$$

$$U(f_x, f_y) = H(f_x, f_y)\, \exp(i 2\pi f_z d), \tag{2}$$

$$u(\xi, \eta) = F^{-1}\{U(f_x, f_y)\} = \iint_{-\infty}^{\infty} U(f_x, f_y)\, \exp[i 2\pi (\xi f_x + \eta f_y)]\, df_x\, df_y, \tag{3}$$

where $d$ is the distance between the image plane and the CCD, $h(x, y)$ is the hologram, $u(\xi, \eta)$ is the reconstructed image, $F$ is the Fourier transform operator, $\lambda$ is the wavelength, and $f_x$, $f_y$, $f_z = \sqrt{1/\lambda^2 - f_x^2 - f_y^2}$ are the spatial frequencies.
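As an illustration, the following NumPy sketch implements the angular-spectrum reconstruction of Eqs. (1)–(3); the grid size, pixel pitch, and wavelength below are placeholders for the example call, not the parameters of the actual BT-DHM system.

```python
import numpy as np

def angular_spectrum_propagate(h, wavelength, dx, d):
    """Propagate a (filtered) hologram h by distance d using Eqs. (1)-(3).

    h: 2D field sampled on an N x N grid
    wavelength, dx, d: in consistent units (e.g., meters)
    """
    N = h.shape[0]
    f = np.fft.fftfreq(N, dx)                      # spatial frequencies f_x, f_y
    fx, fy = np.meshgrid(f, f)
    # Eq. (1): spectrum of the hologram
    H = np.fft.fft2(h)
    # f_z = sqrt(1/lambda^2 - f_x^2 - f_y^2); evanescent components are suppressed
    fz2 = 1.0 / wavelength**2 - fx**2 - fy**2
    fz = np.sqrt(np.maximum(fz2, 0.0))
    # Eq. (2): multiply by the angular-spectrum transfer function
    U = H * np.exp(1j * 2 * np.pi * fz * d) * (fz2 > 0)
    # Eq. (3): inverse transform back to the image plane
    return np.fft.ifft2(U)

# Example with placeholder parameters (633 nm HeNe line, 5 um pixels, d = 5 cm)
u = angular_spectrum_propagate(np.random.rand(512, 512), 633e-9, 5e-6, 0.05)
phase = np.angle(u)   # wrapped phase, to be unwrapped downstream
```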


In DHM, an MO is introduced to increase the spatial resolution. Due to the magnification $M$ introduced by the MO, the pixel sizes in the image plane, $\Delta\xi_{mag}$ and $\Delta\eta_{mag}$, scale according to:

$$\Delta\xi_{mag} = \frac{\lambda d}{N \Delta x M}, \quad \Delta\eta_{mag} = \frac{\lambda d}{N \Delta y M}, \tag{4}$$

where $N$ is the number of pixels in one dimension, and $\Delta x$, $\Delta y$ denote the sampling intervals (pixel size), $\Delta x = \Delta y = L/N$, with $L \times L$ the dimensions of the CCD sensor. This is intuitively understood by realizing that the holographic recording is now simply a recording of the geometrically magnified virtual image located at distance $d$; the pixel resolution is therefore scaled accordingly. For a transmissive phase object on or between transmissive surfaces, the phase change (optical thickness $T$) due to the index change $\Delta n$ can be calculated as:

$$T(\xi, \eta) = \frac{\lambda\, \varphi_{ob}(\xi, \eta)}{2\pi\, \Delta n}, \tag{5}$$

where the phase due to the biological sample is expressed as $\varphi_{ob}(\xi, \eta) = \varphi(\xi, \eta) - \frac{k}{2R}(\xi^2 + \eta^2)$, $R$ is the radius of curvature of the spherical phase curvature of the MO, and $\varphi(\xi, \eta)$ is the total phase of the object beam without using the bi-telecentric configuration.

Conventional image reconstruction using Eq. (3) contains phase aberrations, which can be mitigated with the image reconstruction pipeline shown in Fig. 2. First, the hologram is transformed into the Fourier domain and the +1-order component is extracted. The wrapped phase image is then obtained by taking the phase of the inverse Fourier transform of the cropped spectrum, followed by phase unwrapping. The unwrapped phase is fed into a trained convolutional neural network (CNN) model to determine the background areas. The background information is then fed into the ZPF to calculate the conjugated phase aberration. Phase compensation is performed in the spatial domain by multiplying the inverse Fourier transform of the cropped +1 spectrum order with a complex exponential term containing the conjugated phase aberration. The Fourier transform of the compensated hologram is then centered and zero padded to the original image size. Finally, the angular spectrum reconstruction technique is applied to obtain the phase height distribution of the full-sized, aberration-free reconstructed hologram, as shown in Fig. 2.
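In code, the compensation step itself reduces to a single complex multiplication in the spatial domain. The sketch below outlines the Fig. 2 pipeline under stated assumptions; the crop, unwrap, CNN, and ZPF stages are passed in as hypothetical helpers standing in for the operations described above, not implementations from the paper.

```python
import numpy as np

def compensate(hologram, crop_plus_one_order, unwrap, cnn_model, zernike_fit):
    """Sketch of the Fig. 2 pipeline; all four helpers are assumed, not provided.

    crop_plus_one_order(spectrum) -> band-pass-cropped +1-order spectrum
    unwrap(phase) -> unwrapped phase
    cnn_model(phase) -> binary background mask
    zernike_fit(phase, mask) -> fitted aberration phase
    """
    spectrum = np.fft.fftshift(np.fft.fft2(hologram))
    plus_one = crop_plus_one_order(spectrum)          # extract the +1 order
    field = np.fft.ifft2(np.fft.ifftshift(plus_one))  # complex field of the +1 order
    phase = unwrap(np.angle(field))                   # wrapped -> unwrapped phase
    mask = cnn_model(phase)                           # CNN background detection
    phi_aberr = zernike_fit(phase, mask)              # ZPF on background pixels only
    return field * np.exp(-1j * phi_aberr)            # self-conjugated phase compensation
```

Zero padding, spectrum centering, and the final angular-spectrum reconstruction would then be applied to the returned field, as in Fig. 2.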

One of the crucial steps in the image reconstruction pipeline described above is training the proposed deep learning CNN, which requires a training data set of sub-sampled phase aberration images and their corresponding ground truth (label) images. Section 3 describes in detail the data preparation steps for training the CNN model, and Section 4 describes how the CNN model is implemented.

3. Biological sample and data preparation

The cancer cells from the highly invasive MDA-MB-231 breast cancer cell line were seeded on type I collagen hydrogels, polymerized at 4 mg/ml and a temperature of 37°C in 35 mm glass-bottomed petri dishes. The cells on collagen were incubated for 24 hours in DMEM medium containing 10% fetal bovine serum, in standard tissue culture conditions of 37°C and 5% CO2, and 100% humidity. Then, cells were taken from the incubator and imaged with the bi-telecentric DHM setup described above to produce phase reconstruction maps.


Fig. 2. Image reconstruction pipeline with phase aberration compensation based on CNN + ZPF: (a, b, c) hologram, Fourier spectrum of the hologram, and cropped +1-order spectrum, respectively. (d, e) Wrapped phase and unwrapped phase of (c), respectively. (f) Trained CNN model, (g) output binary segmentation, (h) visualization of background detection, (i) Zernike polynomial fitting, (j) phase aberration calculated from ZPF, (k) Fourier spectrum after phase aberration compensation, (l) Fourier spectrum with zero padding and centering, (m) reconstructed phase map using the angular spectrum, and (n) final unwrapped phase map.

Figure 3 shows the steps of data collection/preparation using the PCA method [19]. This process contained three separate parts: (a) collecting background phase aberrations (red path), (b) collecting a single-cell data set that includes phase height distributions and binary masks (blue path), and (c) assembling the data input to the proposed CNN model (green path). Along the red path, random background phase aberrations were obtained while no sample was present in the BT-DHM system. As shown in Fig. 1, MO1 and MO2 were both shifted up and down and rotated to create different phase aberrations. Two hundred and ten holograms without a sample present were captured and reconstructed using the angular spectrum method. The background sub-sampled (256 × 256) phase aberration was reconstructed after applying a band-pass filter around the +1 order (virtual image location), followed by an inverse Fourier transform and phase unwrapping.

Along the blue path, forty holograms containing cancer cells were also reconstructed using the PCA method. For the training stage of the deep learning CNN, 306 single cells were manually segmented from those forty reconstructed holograms to obtain real phase distribution images and corresponding ground truth binary images (0 for background, 1 for cells). Each cell's phase distribution image, binary mask, and sub-sampled phase aberration image was then augmented by flipping (horizontally and vertically) and rotating (90°, 180°, and 270°). This yielded 1836 single-cell phase distribution images, 1836 corresponding single-cell binary masks, and 1260 sub-sampled background phase aberration images. To create the training data set, 4–10 real cell phase maps were added at random positions into each of the 1260 sample-free phase aberration images. It should be noted that the total phase is the integral of the optical path length (OPL). These phase maps were preprocessed with a [5 × 5] moving average filter to smooth out the edges left by the manual segmentation. Similarly, and corresponding to the same 4–10 random positions of the real phase maps, the ground truth binary masks were added to a zero background phase map to create the labeled data set. Note that different types of cells can produce different shapes. In our case, a future objective is to quantitatively assess the growth and migratory behavior of invasive cancer cells, and hence cells from the invasive MDA-MB-231 breast cancer line were used. A minimal sketch of this augmentation and compositing step follows.
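The NumPy sketch below illustrates the flip/rotate augmentation and the random compositing of cell phase maps onto aberration backgrounds, assuming cell crops and backgrounds are already loaded as 2D arrays; the array contents and crop sizes are placeholders.

```python
import numpy as np

def augment(img):
    """Return the original plus the 5 augmented variants: 2 flips + 3 rotations."""
    return [img,
            np.fliplr(img), np.flipud(img),
            np.rot90(img, 1), np.rot90(img, 2), np.rot90(img, 3)]

def composite(background, cells, masks, rng, n_min=4, n_max=10):
    """Paste 4-10 cell phase maps (and their masks) at random positions.

    background: 2D aberration-only phase image
    cells, masks: lists of matching 2D cell phase crops and binary masks
    """
    phase = background.copy()
    label = np.zeros_like(background)
    for _ in range(rng.integers(n_min, n_max + 1)):
        k = rng.integers(len(cells))
        c, m = cells[k], masks[k]
        y = rng.integers(0, background.shape[0] - c.shape[0])
        x = rng.integers(0, background.shape[1] - c.shape[1])
        # phase is additive along the optical path, so cell phase adds to background
        phase[y:y + c.shape[0], x:x + c.shape[1]] += c * m
        label[y:y + m.shape[0], x:x + m.shape[1]] = np.maximum(
            label[y:y + m.shape[0], x:x + m.shape[1]], m)
    return phase, label

rng = np.random.default_rng(0)
bg = np.random.rand(256, 256)                 # placeholder aberration image
cells = [np.random.rand(48, 48)]              # placeholder cell phase crop
masks = [(cells[0] > 0.5).astype(float)]      # placeholder binary mask
phase, label = composite(bg, cells, masks, rng)
```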


Fig. 3. The pipeline of the data preparation process used to train the CNN model. The blue path collects data for single-cell segmentations and binary masks. The red path collects sub-sampled phase aberrations. The green path shows how the data are fed to the CNN model.

Note that, for each type of cell, manual segmentation is performed only once, in the data preparation stage. Deep learning CNN techniques usually require a large amount of training data to produce good results, and the overhead of collecting and preparing this data can be expensive. However, by augmenting 210 phase images (without a sample present) and 306 cell images through flipping and rotation, we were able to create a training data set of 1260 phase aberration images and their corresponding ground truth images. Eighty percent of these images were randomly selected for training, and the remaining images were used for validation.

4. Deep learning convolutional neural network training

In this section, we describe the implementation of the deep learning CNN for automatic background detection in digital holographic microscopic images. The deep learning architecture contains multiple convolutional neural network layers [30, 31], including max pooling layers and unpooling layers, with rectified linear unit (ReLU) activation functions and batch normalization (BN), similar to the architecture used in [32]. Let $x^{(i)}$, $x'^{(i)}$, and $y^{(i)}$ denote the input data volume corresponding to the initial group of phase aberration images, the currently observed data volume at a certain stage of the CNN model, and the output data volume of the CNN model, respectively. The input and output data volumes, along with the ground truth images, have a size of (batchSize × imageWidth × imageHeight × channel), where batchSize is the number of images in each training session. In our model, the input volume has size (8 × 128 × 128 × 1) (1 channel indicates a grayscale image), whereas the output volume has size (8 × 128 × 128 × 2) (2 channels for the 2 classes obtained from the one-hot encoding of the ground truth images [33]). The model shown in Fig. 4 is a simpler version of the U-Net model [32]. An output neuron in the U-Net model is computed through convolution operations (which we define as a convolution layer) over the preceding neurons connected to it, such that these input neurons are situated in a local spatial region of the input. Specifically, each output neuron in a layer is computed as the dot product between its weights and a connected small region of the input volume, plus the neuron bias:

$$x_l'^{(i)} = \sum_{j=0}^{M-1} W_l^{(j)}\, x_{l-1}'^{(j)} + B_l^{(j)}, \quad i = 1, 2, \ldots, N, \tag{6}$$

where $W$ is the weight, $B$ is the bias, $j$ is the index in the local spatial region, $M$ is the total number of elements in that region, $N$ is the total number of neurons in each layer (which can change depending on the architecture), and $l$ is the layer number.

Fig. 4. U-net Convolutional Neural Network model.

The U-Net model contains two parts: down-sampling (left half of Fig. 4) and up-sampling (right half of Fig. 4). After each convolutional layer, the ReLU activation function [34] and BN [35] were applied to effectively capture non-linearities in the data and to speed up training. In the down-sampling path, feature extraction is performed through convolutions that transform the input image into a multi-dimensional feature representation [31, 36]. The up-sampling path, on the other hand, is a shape generator that produces the object segmentation from the features extracted in the convolution path. ReLU activation improves the computational speed of the training stage and avoids the "vanishing gradient" issue encountered with the sigmoidal functions traditionally used for this purpose. The ReLU activation function we used is defined as in [34]:

$$f\big(x'^{(i)}\big) = \begin{cases} x'^{(i)}, & \text{if } x'^{(i)} > 0 \\ 0, & \text{otherwise} \end{cases}, \quad i = 1, 2, \ldots, N, \tag{7}$$

where $x'^{(i)}$ is the $i$th pixel in the data volume under training and $N$ is the total number of pixels in the volume: N = batchSize × layerwidth × layerheight × channel, where layerwidth and layerheight are the width and height of the image at the $l$th layer, and channel is the number of weights $W$ in the $l$th layer.

On the other hand, batch normalization allows the system to: (a) have much higher learning rates, (b) be less sensitive to the initialization conditions, and (c) reduce the internal covariate shift [35]. BN can be implemented by normalizing the data volume to make it zero mean and unit variance as defined in Eq. (8):

$$\hat{x}'^{(i)} = \gamma\, \frac{x'^{(i)} - \mu[x']}{\sqrt{\sigma^2[x'] + \epsilon}} + \beta, \tag{8}$$

where $\mu[x'] = \frac{1}{N}\sum_{i=1}^{N} x'^{(i)}$, $\sigma^2[x'] = \frac{1}{N}\sum_{i=1}^{N} \big(x'^{(i)} - \mu[x']\big)^2$, $\epsilon$ is a regularization parameter (used to avoid division by zero for uniform images), $\gamma$ is a scaling factor, $\beta$ is a shifting factor ($\gamma = 1$, $\beta = 0$ here), and $\hat{x}'^{(i)}$ is the output of the BN stage.
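As a concrete illustration of Eq. (8), a minimal NumPy version of the normalization (with the fixed γ = 1, β = 0 used here) might look as follows; this is a sketch of the forward pass only, taking statistics over the whole volume as the text defines them, not the trainable Keras BatchNormalization layer actually used.

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Eq. (8): normalize a data volume to zero mean and unit variance.

    x: array of shape (batch, height, width, channels); the mean and variance
    are taken over the whole volume, matching the per-volume definition above.
    """
    mu = x.mean()
    var = x.var()
    return gamma * (x - mu) / np.sqrt(var + eps) + beta
```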

The down-sampling and up-sampling were done using max pooling and unpooling, respectively. Max pooling is a form of non-linear down-sampling that eliminates non-maximal values and reduces the computational complexity of upper layers by reducing the dimensionality of the intermediate layers. Max pooling was also used in part to avoid overfitting. The unpooling operation is a non-linear form of up-sampling of a previous layer, using nearest-neighbor interpolation of the features obtained by max pooling, gradually recovering the shape of the samples. Our deep learning CNN model has a symmetrical architecture, with max pooling and unpooling filters both using a 2 × 2 kernel size. A compact Keras sketch of this architecture is given below.
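The sketch assumes a U-Net of this flavor: 2 × 2 max pooling and nearest-neighbor up-sampling, skip connections, ReLU + BN after each convolution, 32 channels in the first layer, a deepest channel of 512, and a 2-channel softmax output. It is a simplified stand-in under those stated assumptions, not the authors' exact layer-by-layer configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Convolution followed by BN and ReLU, as described in the text
    x = layers.Conv2D(filters, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def build_unet(input_shape=(128, 128, 1), base=32, depth=4, classes=2):
    inputs = layers.Input(input_shape)
    x, skips = inputs, []
    for d in range(depth):                     # down-sampling path
        x = conv_block(x, base * 2**d)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)          # 2 x 2 max pooling
    x = conv_block(x, base * 2**depth)         # bottleneck: 512 channels here
    for d in reversed(range(depth)):           # up-sampling path
        x = layers.UpSampling2D(2)(x)          # nearest-neighbor unpooling
        x = layers.Concatenate()([x, skips[d]])
        x = conv_block(x, base * 2**d)
    outputs = layers.Conv2D(classes, 1, activation="softmax")(x)
    return Model(inputs, outputs)

model = build_unet()
```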

In our experiment, the softmax function, a popular classification layer defined in Eq. (9), was used in the last layer to calculate the prediction probability of background/cell:

$$S\big(y^{(i)}\big) = \frac{e^{y^{(i)}}}{\sum_{i=1}^{N} e^{y^{(i)}}}, \quad i = 1, 2, \ldots, N, \tag{9}$$

where $N$ (= 8 × 128 × 128 × 2) is the number of pixels (neurons) to be classified in the segmentation process.
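A minimal, numerically stable NumPy version of Eq. (9) is shown below; in practice the normalization runs over the two class channels of each pixel, and the maximum is subtracted before exponentiation (a standard stabilization trick not mentioned in the text).

```python
import numpy as np

def softmax(y):
    """Eq. (9), applied over the last axis (e.g., the 2 class channels)."""
    e = np.exp(y - y.max(axis=-1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=-1, keepdims=True)
```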

An error is a discrepancy measure between the output produced by the system and the correct output for an input pattern. The loss value is the average of the errors between the predicted probability $S(y^{(i)})$ and the corresponding ground truth pixel $L^{(i)}$. In our study, the loss was measured using the cross entropy function, which is defined as:

$$\text{Є} = -\frac{1}{N} \sum_{i=1}^{N} L^{(i)} \log\big(S(y^{(i)})\big). \tag{10}$$

The training algorithm iterates the process of feeding the phase aberration images in batches through the model and calculating the error Є, using an optimizer to minimize the error. The stochastic gradient descent (SGD) optimizer was employed in the back-propagation algorithm. Instead of evaluating the cost and the gradients over the full training set, SGD evaluates them over a smaller number of training samples. The learning rate was initially set to 1e-2, the decay to 1e-6, and the momentum to 0.96 [37]. Other parameters used in this work: a batch size of 8, an image size of 128 × 128 instead of 256 × 256 to avoid memory overflow (images are resized at the end), a depth channel of 32 at the first layer, a deepest channel of 512, and 360 training epochs. The proposed model was implemented in Python using the TensorFlow/Keras framework [38], and the implementation was GPU-accelerated on an NVIDIA GeForce 970M. Figure 5 shows the training loss and validation loss over the 360 epochs. Each epoch contains 126 batches of training data, and the parameters were updated after each training batch. The training loss and the validation loss started at 0.48 and 0.2916, respectively. The results suggest that the loss value decreased quickly (i.e., the model learned quickly) during the first 50 epochs of training, while the validation loss decreased with random oscillations (a transitory period) over the same epochs. Note that the validation loss was slightly less than the training loss from epoch 50 to 220, which implies that the model learned slowly in this period. Between epochs 220 and 360 the validation loss was slightly higher than the training loss. Both values decreased slowly to 0.0237 and 0.0256, respectively.
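A Keras sketch of this training configuration (SGD with the stated learning rate, decay, and momentum; cross-entropy loss; batch size 8; 360 epochs) might look as follows. The data arrays are random placeholders standing in for the real training set, and the `decay` argument follows the older Keras SGD signature, since the exact framework version is not specified in the text.

```python
import numpy as np
import tensorflow as tf

model = build_unet()  # the U-Net sketch from Section 4

# SGD with the stated hyperparameters; `decay` is the legacy Keras learning-rate
# decay (newer TF releases expose this via a LearningRateSchedule instead).
opt = tf.keras.optimizers.legacy.SGD(learning_rate=1e-2, decay=1e-6, momentum=0.96)
model.compile(optimizer=opt, loss="categorical_crossentropy", metrics=["accuracy"])

# Placeholders standing in for the 1008 training / 252 validation images (80/20 of 1260)
x_train = np.random.rand(1008, 128, 128, 1).astype("float32")
y_train = tf.keras.utils.to_categorical(np.random.randint(0, 2, (1008, 128, 128)), 2)
x_val = np.random.rand(252, 128, 128, 1).astype("float32")
y_val = tf.keras.utils.to_categorical(np.random.randint(0, 2, (252, 128, 128)), 2)

# a batch size of 8 gives the 126 parameter updates per epoch mentioned in the text
model.fit(x_train, y_train, batch_size=8, epochs=360, validation_data=(x_val, y_val))
```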

Fig. 5. Training loss and validation loss for the 360 epochs.

5. Testing data and phase aberration compensation

To evaluate the performance of the proposed deep neural network and ZPF technique, 30 holograms recorded by the BT-DHM system (see Fig. 1) and reconstructed using the pipeline in Fig. 2 were tested. The background of a phase aberration image is first located so that the background pixels can be used in the ZPF model [18]. As shown in Fig. 2, the unwrapped phase [39] is passed through the trained CNN model discussed in Section 4 (Fig. 2(g)) to produce the mask prediction $y^{(i)}$ in Eq. (9). The output of the model is normalized to the [0, 1] range, and a threshold of 0.5 is used to classify the background and cell areas according to:

$$B^{(i)} = \begin{cases} 1, & \text{if } y^{(i)} \le 0.5 \\ 0, & \text{otherwise} \end{cases}, \quad i = 1, 2, \ldots, N. \tag{11}$$

Figure 6 shows exemplary channels of selected layers in the down-sampling (top row) and up-sampling (bottom row) paths (see Fig. 4). The image on the top left is the raw phase aberration. The next five images in the top row are the outputs of consecutive down-sampling layers. The first five images in the lower row are the outputs of up-sampling layers, and the lower-right image is the binary mask obtained with the threshold function defined in Eq. (11). The down-sampled layers capture the strong features of the image, such as the parabolic intensities and edges, while the up-sampled layers capture the shape of the cells.

To measure the conjugated background phase aberration, the pixels of the raw phase image corresponding to the background locations in the binary image (where $B^{(i)} = 1$) are selected and converted to a 1D vector to perform the polynomial fitting [18]. The polynomial fitting was implemented using a 5th-order polynomial with 21 coefficients:

$$S(x, y) = \sum_{i=0}^{5} \sum_{j=0}^{5} p_{ij}\, x^i y^j, \quad i + j \le 5, \tag{12}$$


where $p_{ij}$ are the coefficients, $i$ and $j$ are the polynomial orders, and $x$ and $y$ are the pixel coordinates. Let the arrays $\mathbf{P} = [p_{00}\ p_{10}\ \ldots\ p_{ij}\ \ldots\ p_{05}]$ and $\mathbf{A} = [a_0\ a_1\ \ldots\ a_{10}\ \ldots\ a_{20}]$ hold the polynomial model's coefficients and the Zernike model's coefficients, respectively. The 21 coefficients of the $\mathbf{P}$ polynomial are used to calculate the coefficients of the Zernike polynomial as shown in the following equation:

$$\mathbf{A} = \mathbf{z}_{i,j,p}^{-1}\, \mathbf{P}. \tag{13}$$

Fig. 6. Visualization of outputs of a selected channel from the following layers: 3, 6, 9, 12, 15, 18, 21, 24, 27, and 28 in CNN.

The $\mathbf{z}_{i,j,p}$ matrix consists of coefficients corresponding to each order of the Zernike polynomials:

$$\begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_{20} \end{bmatrix} = \begin{bmatrix} z_{0,0,0} & z_{0,0,1} & \cdots & z_{0,0,20} \\ z_{1,0,0} & z_{1,0,1} & \cdots & z_{1,0,20} \\ \vdots & \vdots & \ddots & \vdots \\ z_{0,5,0} & z_{0,5,1} & \cdots & z_{0,5,20} \end{bmatrix}^{-1} \begin{bmatrix} p_{00} \\ p_{10} \\ \vdots \\ p_{05} \end{bmatrix}. \tag{14}$$

The Zernike polynomial model is then used to construct the conjugated phase:

$$P_{conjugated} = \exp\left(-j \sum_{k=0}^{20} a_k Z_k\right), \tag{15}$$

where the $Z_k$ polynomials are expressed according to the Zemax classification. After obtaining the background area from the CNN, the conjugated phase aberration is calculated using the ZPF and then multiplied with the initial phase. To obtain the full-size aberration-compensated reconstructed image, zero padding and spectrum centering are performed on the Fourier transform of the aberration-compensated hologram. Then, the angular spectrum reconstruction technique is applied to obtain the phase height distribution of the full-sized, aberration-free reconstructed hologram, as shown in Fig. 2. A least-squares sketch of this fitting step is given below.
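As an illustration of Eqs. (12)–(15), the sketch below fits the 21-coefficient bivariate polynomial to the background pixels by linear least squares and builds the conjugated phase. For simplicity it skips the polynomial-to-Zernike conversion of Eqs. (13)–(14) and exponentiates the fitted surface directly, which is an assumption for illustration, not the authors' exact procedure.

```python
import numpy as np

def fit_background_polynomial(phase, mask, order=5):
    """Least-squares fit of a 5th-order 2D polynomial to background pixels only.

    phase: 2D unwrapped phase image; mask: 1 where background (Eq. 11).
    Returns the fitted aberration surface over the full image.
    """
    H, W = phase.shape
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    # normalize coordinates to [-1, 1] to keep the design matrix well conditioned
    xx, yy = 2 * xx / (W - 1) - 1, 2 * yy / (H - 1) - 1
    terms = [(i, j) for i in range(order + 1)
                    for j in range(order + 1) if i + j <= order]   # 21 terms
    A_full = np.stack([xx.ravel()**i * yy.ravel()**j for i, j in terms], axis=1)
    bg = mask.ravel().astype(bool)
    coeffs, *_ = np.linalg.lstsq(A_full[bg], phase.ravel()[bg], rcond=None)
    return (A_full @ coeffs).reshape(H, W)

def compensate_phase(field, phase, mask):
    """Multiply the complex field by the self-conjugated fitted phase (cf. Eq. 15)."""
    phi_aberr = fit_background_polynomial(phase, mask)
    return field * np.exp(-1j * phi_aberr)
```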

Figures 7(a) and 7(b) show a typical manual segmentation and the CNN model's segmentation of the test image in Fig. 2, respectively. Figure 7(c) shows the Dice coefficient (DC), or F1 score, of the background area and cell area for 9 typical cases in the test data. The DC is computed according to the following equation:


$$DC = \frac{2\, |A \cap A'|}{|A| + |A'|}, \tag{16}$$

where $|\cdot|$ denotes area, and $A$ and $A'$ are the segmented areas of a test image obtained from the deep learning CNN and from manual segmentation, respectively. The background DC (0.9582–0.9898) is much higher than the cell DC (0.7491–0.8764) because of the larger common area in the background. This lessens the effect of true negative and false positive scenarios in the ZPF.
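Computed directly on binary masks, Eq. (16) is:

```python
import numpy as np

def dice_coefficient(a, b):
    """Eq. (16): Dice coefficient (F1 score) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```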

Fig. 7. (a) Typical manual segmentation on the test image of Fig. 6, (b) CNN model’s segmentation, and (c) background (BG) dice coefficient and cell dice coefficient on 9 cases of the test data.

Figure 8(a) shows a typical reconstruction of a real-data wrapped phase with aberrations. It is worth noting that the cells in this image do not appear in the training data set; these holograms were not segmented in the data preparation process. Figure 8(b) shows the result of background detection using the deep learning CNN classification. In this example, considerable differences between the training data and the real data were observed: cells in the real data have smoother edges than those in the training data. The CNN produces an intentional over-segmentation of the cell area, which is actually beneficial for background detection. Figure 8(c) is the result of applying (through multiplication) the binary mask to the unwrapped reconstructed phase containing aberrations. The phase aberration in the background region was then fitted using ZPF to compute the residual phase, as shown in Fig. 8(d). Figure 8(e) shows the phase distribution after compensation in the spatial domain according to Fig. 2, and Fig. 8(f) shows the final result after phase unwrapping.

Fig. 8. (a) Wrapped phase with aberration [256 × 256], (b) background detection after CNN [256 × 256], (c) CNN's binary mask, where the background (colored portion) is fed into ZPF [256 × 256], (d) residual phase [256 × 256], (e) phase map after phase compensation [1024 × 1024], and (f) phase unwrapping [1024 × 1024].

Figure 9 shows the comparison between the PCA and CNN + ZPF techniques. The CNN + ZPF technique produces better results than the PCA technique in approximating the conjugated residual phase. Figures 9(a) and 9(c) show the phase compensation using PCA and CNN + ZPF, respectively. Figures 9(b) and 9(d) are the wrapped conjugated residual phases computed using PCA and CNN + ZPF, respectively. When the PCA technique is used, the residual phase, which contains an elliptical concentric pattern, is fitted using the least-squares method for the two dominant singular vectors corresponding to the first two principal components. This does not compensate for all the distorted regions of the phase distribution.

Fig. 9. (a) Phase compensation with PCA, (b) conjugated residual phase of (a), (c) phase compensation using CNN + ZPF, (d) conjugated residual phase of (c), (e) Zernike coefficients of phase difference between CNN + ZPF technique and PCA technique using 1/|log|ak|| scale, and (f) profiles of yellow dash lines in (a) and (c) corresponding to blue and red line, respectively. Yellow bars denote the flatness of region of interest.

However, the CNN + ZPF technique takes advantage of the background area: the majority of the background information is fitted with higher orders (up to 5th order). Hence, the conjugated phase aberration looks more distorted because of those higher orders. Figure 9(e) shows the Zernike coefficients of the phase difference between the CNN + ZPF and PCA methods, indicating the error in phase compensation when the PCA method is used. Figure 9(f) shows the profiles along a diagonal dashed line (from bottom left to top right) of the PCA result of Fig. 9(a) and the CNN + ZPF result of Fig. 9(c). The two profiles have different bias phases; the background phase of CNN + ZPF has better flatness (1.35 rad and 0.65 rad) than PCA's background (2.4 rad and 0.95 rad, respectively), which can be seen inside the blue dashed rectangle.

Another example of test data is shown in Fig. 10. The same cancer cell line was used, but the cells were adherent to the surface of a thin collagen hydrogel layer. MDA-MB-231 cells were placed on a collagen layer, fed with DMEM supplemented with 10% FBS, and incubated for one day to promote adhesion to the collagen. The collagen polymerization conditions (a concentration of 4 mg/ml and a polymerization temperature of 4°C) were set to produce a collagen network with large-diameter fibers [40]. The microscope stage was warmed to 37°C with a stage warmer, and the cell culture medium was buffered with 10 mM HEPES. BT-DHM was able to capture phase reconstruction map features consistent with collagen fibers from gels formed under these polymerization conditions.


Fig. 10. (a) Phase aberration, (b) unwrapped phase overlaid with the CNN's image segmentation mask, where the background (color denoted) is fed into ZPF, (c) conjugated residual phase using CNN + ZPF, (d) fibers visible after aberration compensation, indicated by blue arrows, and (e) phase profile along the dashed line in (d). Yellow bars denote the flatness of the region of interest.

Due to the different temperatures during collagen polymerization (37 °C versus 4 °C), one image in the new data set has collagen fiber features not apparent in the CNN model training image set. However, the background region was correctly detected even with the introduction of the new features. Thus, the CNN + ZPF technique has higher accuracy in measuring the phase aberration (1.68 rad of flatness using PCA and 0.92 rad of flatness using CNN + ZPF) as shown in Fig. 10(e).

To further validate the proposed technique, a data set with more cancer cells than in the CNN training images was used (the training data set contains 4–10 cells per phase image). Figure 11 shows a typical result for a real phase image containing 15 cells. The CNN model detected the background area regardless of the number of cells in the image: the model learns representations and makes decisions based on local spatial input. By scanning kernel filters spatially over the data volume, the convolutional layers detect cell-region features well enough to enhance the ZPF performance, resulting in better phase aberration compensation. In Fig. 11(e), the dashed profile crosses 3 different cells, from left to right. The phase heights of the three cells are the same for both techniques. While the phase aberration remains visible for the 3rd cell using PCA, the aberration is cancelled using the proposed technique. The whole motivation is to ensure proper cell phase visualization for further analysis, without a phase offset error; ensuring a flat phase in the background is thus crucial for correct analysis. Hence, CNN + ZPF is a fully automatic technique that outperforms the PCA method in terms of accuracy and robustness, and it can be implemented in real time [18, 19].


Fig. 11. (a) Phase aberration, (b) unwrapped phase overlaid with CNN’s image segmentation mask, where background (color denoted) is fed into ZPF, (c) conjugated residual phase using CNN + ZPF, (d) 3D phase after compensation, and (e) phase profile along the dashed diagonal line from left corner to right corner.

6. Conclusion

We have proposed and demonstrated a combination of a deep learning convolutional neural network with the Zernike polynomial fitting technique to automatically compensate for the phase aberration in a DHM system. The technique benefits from PCA's ability to generate the training data for the deep learning CNN model. The trained CNN model can be used as an automatic, in situ process for background detection and full phase aberration compensation. In the testing stage, we noticed that even for images with new features that did not appear during training, the CNN detected the background with high precision. While many image segmentation techniques are not robust when applied to DHM images due to the overwhelming phase aberration, the CNN segments the background spatially based on features, regardless of the number of cells and their unknown positions. Thus, the trained CNN in conjunction with the ZPF technique is a very effective tool that can be employed in real time for autonomous phase aberration compensation in a DHM system.
