Quantitative high-speed imaging of entire developing embryos with simultaneous multiview light-sheet microscopy

Raju Tomer, Khaled Khairy, Fernando Amat & Philipp J Keller

Live imaging of large biological specimens is fundamentally limited by the short optical penetration depth of light microscopes. To maximize physical coverage, we developed the SiMView technology framework for high-speed in vivo imaging, which records multiple views of the specimen simultaneously. SiMView consists of a light-sheet microscope with four synchronized optical arms, real-time electronics for long-term sCMOS-based image acquisition at 175 million voxels per second, and computational modules for high-throughput image registration, segmentation, tracking and real-time management of the terabytes of multiview data recorded per specimen. We developed one-photon and multiphoton SiMView implementations and recorded cellular dynamics in entire Drosophila melanogaster embryos with 30-s temporal resolution throughout development. We furthermore performed high-resolution long-term imaging of the developing nervous system and followed neuroblast cell lineages in vivo. SiMView data sets provide quantitative morphological information even for fast global processes and enable accurate automated cell tracking in the entire early embryo.

Understanding the development and function of complex biological systems relies on our ability to record and quantify fast spatiotemporal dynamics on a microscopic scale. Owing to the fundamental trade-off between spatial resolution, temporal resolution and photodamage, the practical approach in biological live imaging has been to reduce the observation of large systems to small functional subunits and to study these one at a time. Although this strategy has advanced our knowledge in past decades, holistic approaches are now required to visualize and analyze complex systems, such as developing embryos, in their entirety: the ability to resolve the spatiotemporal dynamics of biological processes on a systems level is indispensable for understanding the morphological development of complex tissues1,2 and entire organisms3,4, the global analysis of gene expression patterns5,6, the systematic dissection of functional relationships in the developmental building plan7,8, and the implementation of high-throughput approaches to automated screening9 and cellular phenotyping10.


Light-sheet microscopy techniques are evolving into essential tools for the in vivo study of biological structure and function at a systems level11–13. They are based on the simple yet effective idea of illuminating a specimen with a thin sheet of laser light and recording orthogonally the fluorescence emitted from this thin volume. Only the in-focus part of the specimen is exposed to laser light, which provides optical sectioning and substantially reduces photodamage. Moreover, the fluorescence signal emitted from the in-focus section is detected in parallel for the entire field of view, which provides high imaging speeds. In comparison to confocal microscopy, the most commonly used optical sectioning technique, light-sheet microscopy offers faster imaging, higher signal-to-noise ratio and lower photobleaching rates by up to several orders of magnitude14. Advances in the past few years have led to enhanced spatial15 and temporal16 resolution as well as breakthroughs in the conceptual design and complexity of live imaging experiments4.

However, the optical penetration depth in light-sheet microscopes is fundamentally limited by light scattering: light microscopy in general does not penetrate more than several tens to hundreds of microns of living tissue, which precludes systems-level imaging in many biological model organisms17. Although penetration can be increased by nonlinear excitation15,18,19, this practical limitation persists because neither one-photon nor multiphoton light-sheet microscopes allow imaging of large multicellular organisms in their entirety from a single view3,19.

This limitation is partially overcome by sequential multiview imaging, in which the sample is rotated and image stacks are sequentially acquired from multiple view angles3,4,19–22. Sequential multiview imaging is generally suitable for fixed specimens, but it is inherently slow and thus fails to capture fast processes in live specimens. In live multicellular organisms, fast developmental processes can occur between the sequential multiview acquisitions and prevent accurate image fusion of the acquired data. The resulting spatiotemporal artifacts constrain quantitative analyses such as the reconstruction of cell tracks and cell morphologies. For global measurements of the dynamic behavior and structural changes of all cells in a developing organism, data acquisition must occur at speeds that match the timescales of the fastest processes of interest and with minimal time shifts between complementary views11.

Howard Hughes Medical Institute, Janelia Farm Research Campus, Ashburn, Virginia, USA. Correspondence should be addressed to P.J.K. ([email protected]). Received 28 September 2011; accepted 20 April 2012; published online 3 June 2012; doi:10.1038/nmeth.2062.



The development of light-sheet microscopy technology for simultaneous multiview imaging would eliminate spatiotemporal artifacts caused by slow sequential multiview imaging and introduce powerful new capabilities14, but its realization is technically challenging. A comprehensive solution requires an optical microscope design for the parallelized acquisition of multiple complementary views, an electronics framework that provides synchronized control of all optomechanical components with millisecond precision over long periods of time, an automated computational framework that rapidly performs the complex optical alignment on the live specimen, a robust pipeline for simultaneous high-speed image acquisition with multiple cameras at sustained data rates of several hundreds of megabytes per second, and a high-throughput computational strategy for efficient automated image processing to register and reconstruct the terabytes of raw multiview image data arising from every experiment.

Here we present a complete technology framework for light sheet–based one-photon and multiphoton simultaneous multiview (SiMView) imaging and image analysis, which overcomes the limitations of sequential multiview strategies and enables quantitative systems-level imaging of fast dynamic events in large living specimens. Our method provides near-complete physical coverage through the acquisition of four complementary optical views with a maximum time shift of 20 ms, independent of specimen size. Temporal correspondence of complementary views is improved by more than three orders of magnitude and imaging speed is improved more than 20-fold over light-sheet microscopy with sequential four-view imaging3. We designed SiMView for high-speed long-term imaging under physiological conditions; our system uses real-time electronics and an advanced computational infrastructure for sustained data acquisition at 350 megabytes (MB) s−1 (175 million voxels per second) for up to several days. The SiMView framework includes computational solutions for high-throughput multiview image fusion and image data management of SiMView experiments, which typically comprise millions of high-resolution images and up to several dozens of terabytes per specimen.
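As a quick plausibility check of these figures (not part of the original methods), a minimal sketch assuming 16-bit (2-byte) voxels relates the quoted byte rate to the voxel rate and to the data volume accumulated per day of continuous acquisition:

    # Back-of-the-envelope check of the quoted acquisition rates,
    # assuming 16-bit (2-byte) voxels from the sCMOS cameras.
    BYTES_PER_VOXEL = 2
    data_rate_mb_s = 350                     # sustained acquisition rate (MB/s)

    voxels_per_second = data_rate_mb_s * 1e6 / BYTES_PER_VOXEL
    print(f"{voxels_per_second / 1e6:.0f} million voxels per second")   # ~175

    # Data volume for a full day of recording at this sustained rate
    terabytes_per_day = data_rate_mb_s * 1e6 * 3600 * 24 / 1e12
    print(f"~{terabytes_per_day:.0f} TB per 24 h of continuous acquisition")  # ~30 TB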

To demonstrate the capabilities of our framework for simultaneous multiview imaging, we performed two sets of experiments. First, we recorded cellular dynamics in entire Drosophila melanogaster embryos with subcellular resolution and followed their development in 30-s intervals throughout embryogenesis. We reconstructed nuclear morphologies as a function of time and showed that SiMView enables comprehensive tracking and detection of nuclear divisions in the entire syncytial blastoderm. For this purpose, we implemented fast, automated GPU-based processing algorithms, which achieve 95% segmentation accuracy, 99% tracking accuracy and 94% accuracy for detecting nuclear divisions. Moreover, we demonstrate that SiMView data sets enable the reconstruction of cell lineages of the early developing nervous system. Second, we performed high-resolution imaging of the developing nervous system with 25-s temporal sampling from the beginning of neuronal cell differentiation to larval hatching. These recordings capture global processes and at the same time reveal detailed filopodial dynamics in axonal morphogenesis at high spatiotemporal resolution.

All videos related to this study are available in high resolution at http://www.digital-embryo.org/.

RESULTS

SiMView framework for simultaneous multiview imaging
We developed the SiMView technology framework (Fig. 1 and Supplementary Fig. 1), which comprises light sheet–based microscopes for simultaneous multiview imaging with one-photon or multiphoton excitation (Fig. 1a and Supplementary Tables 1 and 2), an automated real-time electronics control framework, a computational pipeline for long-term high-speed image acquisition and high-throughput approaches to multiview image registration, image segmentation and tracking, and real-time data management (Fig. 1b).

Figure 1 | Technology framework for light sheet–based simultaneous multiview imaging. (a) Conceptual illustration of the basic idea underlying the optical implementation of our light-sheet microscope for simultaneous multiview imaging. The specimen is attached to a thin cylindrical specimen holder and is optically accessible from all sides. Fluorescence excitation is performed in a µm-thin volume by two overlapping scanned light sheets that are focused into the specimen chamber by two opposite long-working-distance illumination objectives. Two opposite water-dipping detection objectives collect the fluorescence light emitted by the specimen. Each of the four combinations of light sheets and detection objectives provides a different view of the specimen. The parts of the specimen covered in the four views correspond to the regions that are sequentially captured in conventional multiview light-sheet imaging by mechanically rotating the specimen to the four view angles of 0, 90, 180 and 270 degrees. (b) The complete technology framework for simultaneous multiview imaging (SiMView) consists of the light-sheet microscope, a real-time electronics control framework and computational pipelines for high-speed image acquisition, high-throughput multiview image processing and high-throughput image data management. An optional sixth core module performs fast image segmentation and cell tracking.

[Figure 1 panel labels: detection objectives 1 and 2, illumination objectives 1 and 2, sample holder, light sheets 1 and 2 (405–1,080 nm); panel b modules: light-sheet microscopy platform for simultaneous multiview imaging (one-photon and multiphoton excitation), real-time electronics control framework, computational framework for sustained high-speed image acquisition (350 MB s−1, up to 100 TB), computational framework for high-throughput multiview image fusion (200 MB s−1), computational CPU/GPU-based framework for high-throughput segmentation/tracking, and computational framework for high-throughput image data management (600 MB s−1).]


The SiMView imaging platform uses an orthogonal arrangement of four independently operated optical arms (Supplementary Fig. 2). One pair of opposite arms is used for bidirectional light-sheet illumination with two long-working-distance air objectives (similar to the illumination arrangement used in previous studies19,23,24). The other pair, arranged at a right angle to the first, is used for bidirectional fluorescence detection with high-numerical-aperture water-dipping objectives. This optical configuration allows simultaneous acquisition of four complementary views of the specimen for optimal physical coverage.

In our SiMView implementation for one-photon excitation, multiview image stacks are acquired by quickly moving the specimen over the desired z range and alternating light-sheet activation in the two illumination arms for each z plane, while both detection subsystems synchronously record images from opposite views for the same focal plane. This asynchronous bidirectional illumination and synchronous dual-camera detection captures recordings from four complementary views of each z plane in two illumination steps with a maximum time shift limited only by the acquisition time of the cameras (on the order of 5–25 ms). Notably, no mechanical rotation of the specimen is required. The switching of laser shutters in the two illumination subsystems is performed within a few milliseconds.
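The one-photon scheme described above can be summarized as a simple control loop. The following sketch is illustrative only: the stage, shutter and camera objects are hypothetical placeholders, not the actual SiMView control API, and two-photon operation would open both shutters at once instead of alternating them.

    # Minimal sketch of the alternating-illumination, dual-detection scheme:
    # for each z plane, the two light sheets are activated one after the other
    # while both cameras expose synchronously, yielding four views per plane
    # in two illumination steps.
    def acquire_four_view_stack(stage, shutters, cameras, z_positions):
        """Return {view_index: [planes...]} for one four-view z stack."""
        views = {0: [], 1: [], 2: [], 3: []}
        for z in z_positions:
            stage.move_z(z)                              # move specimen to the next plane
            for i, shutter in enumerate(shutters):       # alternate the two light sheets
                shutter.open()
                frames = [cam.snap() for cam in cameras]  # both cameras expose together
                shutter.close()
                # illumination arm i combined with detection camera j gives one view
                views[2 * i].append(frames[0])
                views[2 * i + 1].append(frames[1])
        return views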

When using two-photon excitation, light scattering in the illumination process is minimal and fluorescence excitation is spatially confined to the focal volume19. In our SiMView implementation for two-photon excitation, the illumination arms are therefore activated in synchrony. All four optical subsystems are thus operated simultaneously, which results in a zero time shift in the multiview acquisition process.

The specimen is located in a 25-ml chamber with full optical access from all sides to enable simultaneous light-sheet illumination and fluorescence detection with four objectives. Full optical access and maximum specimen positioning control (three-dimensional translation and one-dimensional rotation) are realized by splitting the microscope subsystems into two horizontal layers (Supplementary Fig. 2). All optical systems are located in the upper layer, whereas the specimen positioning system is located underneath the sample chamber on the lower optical table. The specimen positioning system is upright, in contrast to previous light sheet–based microscopes4,20. This allows the use of exceptionally soft (0.4%) agarose gels for specimen embedding. Mechanical stability during imaging is largely provided by the specimen holder itself.


Figure 2 | Quantitative live imaging of entire early Drosophila embryos. (a) Maximum-intensity projections of the front and back halves of a stage 4 Drosophila embryo: overlay of single-view and simultaneous multiview recordings. Physical coverage is approximately 30% and 100%, respectively. (b) Side-by-side comparison of sequential and simultaneous multiview recordings of the same embryo during the 13th mitotic cycle. Corresponding regions from the front and back halves of the embryo are shown. To obtain the sequential multiview panels, image fusion was performed on a subset of the simultaneous multiview data set corresponding to a 35-s delay between first view and last view. (c) Side-by-side comparison of sequential and simultaneous multiview recordings of the same embryo at the end of the 13th mitotic cycle. (d) Enlarged views of the images in c. Spatial artifacts (nucleus location) and false positives in the sequential multiview recording are shown in red. (e) Enlarged views of the images in b, highlighting temporal artifacts (cell cycle state) in sequential multiview imaging. (f) Side-by-side comparison of simultaneous and sequential multiview recordings with a 25-s delay between first view and last view of the sequential multiview recording. Spatiotemporal artifacts and false positives in the sequential multiview recording are shown in red. Scale bars, 50 µm (a), 20 µm (b,c), 10 µm (d–f).


We generate the light sheets by rapid laser scanning4, using piezo-controlled tip-tilt mirrors. All optical and mechanical components in the illumination and detection arms are operated and synchronized by a real-time electronics framework (Supplementary Fig. 3). In order to realize the complex SiMView timing workflow required for the submillisecond precise and sustained high-speed acquisition of millions of high-resolution images, the core control framework is deployed on a high-performance real-time controller. The controller communicates with an independent host computer workstation to coordinate the parallelized image acquisition workflow at a rate of 350 MB s−1 using two scientific-grade complementary metal-oxide semiconductor (sCMOS) cameras in the detection arms. The real-time control framework performs the relative alignment of the microscope's four optical arms with submicron precision directly on each specimen, using automated software modules for auto-focusing and light-sheet autopositioning. Long-term stability of the alignment is achieved by closed-loop piezo control of all critical optical components, including the four objectives and both scan mirrors.
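One common way to realize such auto-focusing and light-sheet positioning is to sweep the relevant piezo axis and keep the position that maximizes an image-sharpness metric. The sketch below illustrates this generic approach; the specific metric and search strategy used by the SiMView control software are not reproduced here, and the snap_image/move_piezo callables are hypothetical.

    import numpy as np

    def sharpness(image: np.ndarray) -> float:
        """Mean squared intensity gradient: higher means better focused."""
        gy, gx = np.gradient(image.astype(float))
        return float(np.mean(gx**2 + gy**2))

    def autofocus(snap_image, move_piezo, positions_um):
        """snap_image() returns a 2D array; move_piezo(p) sets the piezo to p (in µm)."""
        scores = []
        for p in positions_um:
            move_piezo(p)
            scores.append(sharpness(snap_image()))
        best = positions_um[int(np.argmax(scores))]
        move_piezo(best)          # settle on the sharpest position found
        return best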

We designed the SiMView computational backbone for high-speed imaging experiments with up to several days of continuous image acquisition. The system is capable of recording more than 1 million high-resolution images in uninterrupted high-speed imaging sessions with a total data set size of up to 10 terabytes (TB) per specimen. Additionally, a secondary 10-gigabit s−1 glass fiber network–based acquisition pipeline allows recording up to 10 million high-resolution images or 100 TB per specimen for long-term imaging sessions. The maximum recording capacity of 1 petabyte is realized by the SiMView real-time data compaction module for lossless three-dimensional wavelet compression (average ratio 5:1 to 10:1).
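For orientation, a minimal sketch of the arithmetic linking these numbers, assuming the quoted compression ratios apply uniformly to the 100-TB raw capacity of the secondary pipeline:

    # Rough arithmetic behind the stated recording capacities.
    raw_capacity_tb = 100                      # per-specimen raw capacity (TB)
    for ratio in (5, 10):                      # quoted lossless wavelet compression ratios
        effective_pb = raw_capacity_tb * ratio / 1000.0
        print(f"compression {ratio}:1 -> effective capacity ~{effective_pb:.1f} PB")
    # 10:1 compression of 100 TB corresponds to the ~1 PB maximum quoted above.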

The SiMView framework also comprises computational modules for high-throughput multiview image processing and real-time image data management. We developed an automated computational pipeline for content-based image registration and multiview image fusion that efficiently incorporates prior knowledge of the SiMView optical implementation to process raw SiMView image data at a rate of 200 MB s−1 (Supplementary Software). The module is capable of real-time image registration and integrates with our SiMView processing modules for large-scale data management. Because SiMView acquires multiple views simultaneously, fast and accurate image registration is achieved without the need for fiducial markers in the imaging volume.
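As a toy illustration of the multiview fusion step (not the SiMView implementation, which performs content-based registration as described in the Supplementary Software), the sketch below blends two already-registered stacks from opposing detection objectives with depth-dependent weights, favoring the camera closer to each z plane:

    import numpy as np

    def fuse_opposing_views(stack_a: np.ndarray, stack_b: np.ndarray, sigma_planes=20.0):
        """stack_a/stack_b: (z, y, x) arrays recorded from opposite sides, pre-aligned."""
        nz = stack_a.shape[0]
        z = np.arange(nz, dtype=float)
        # weight for each view decays with distance from its own entry side
        w_a = np.exp(-z / sigma_planes)[:, None, None]
        w_b = np.exp(-(nz - 1 - z) / sigma_planes)[:, None, None]
        return (w_a * stack_a + w_b * stack_b) / (w_a + w_b)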

Quantitative high-speed imaging of Drosophila development
We performed simultaneous multiview imaging of developing Drosophila embryos with subcellular resolution (Supplementary Fig. 4) using transgenic stocks with ubiquitous nuclear GFP expression. We used these data for a side-by-side comparison of single-view, sequential multiview and simultaneous multiview imaging by combining the images of (i) the complete recording and (ii) the subsets corresponding to sequential multiview image acquisition with 25-s and 35-s temporal sampling (Fig. 2). An overview of the acquisition settings for all imaging experiments is provided in Supplementary Table 3.

Simultaneous multiview imaging improved coverage over single-view imaging by almost fourfold and revealed cellular dynamics in the entire early embryo (Fig. 2a and Supplementary Video 1).


Figure 3 | Simultaneous multiview imaging of fast cellular dynamics. (a) Optical slices from a simultaneous multiview in vivo recording of a nuclei-labeled stage 16 Drosophila embryo (dorsal side up) obtained with one-photon SiMView microscopy. Last frame: unfiltered maximum-intensity projection (MIP). (b) MIP of a one-photon SiMView recording of a stage 5 Drosophila embryo. The white square indicates the region corresponding to the time series shown in c and is located in the transition region between two optical views. (c) Time series of nuclear dynamics during the 13th mitotic cycle, which were quantitatively resolved in the entire embryo by one-photon SiMView microscopy. (d) MIPs of a one-photon SiMView time-lapse recording of Drosophila embryonic development. Separate projections are shown for dorsal and ventral halves of the fused and background-corrected three-dimensional image stacks. The embryo was recorded at 30-s intervals using an acquisition period of 15 s per time point. The complete recording comprises 1 million images (11 TB) for ~2,000 time points recorded from 3 to 18.5 h post fertilization. PC, pole cells; VF, ventral furrow; A, amnioserosa; PS, posterior spiracles. Scale bars, 50 µm (a,b,d), 20 µm (c).


Moreover, SiMView imaging showed several improvements over sequential multiview imaging: it eliminated temporal and spatial fusion artifacts during fast nuclear movements (Fig. 2b–f), improved temporal resolution (Supplementary Video 1) and provided quantitative data for subsequent computational image analyses such as automated cell tracking. Simultaneous multiview imaging also reduced the energy load on the specimen by avoiding the redundant iterative acquisition of overlapping regions required for sequential multiview image registration.

To analyze the spatiotemporal fusion artifacts that can be introduced in sequential multiview imaging of cellular dynamics, we compared multiview transition regions in simultaneous and sequential four-view recordings with delays of 25 s and 35 s between the first and last view. Simultaneous multiview imaging revealed the synchronized propagation of nuclear division waves around the lateral circumference of the syncytial blastoderm (Supplementary Video 1), whereas sequential multiview imaging provided a misrepresentation by capturing premitotic nuclei in some regions and postmitotic nuclei in the corresponding regions on the opposite side of the embryo (Fig. 2b,e). Moreover, simultaneous multiview imaging accurately recorded fast local nuclear movements, whereas sequential multiview imaging often led to duplicate structures and distorted nuclear shapes (Fig. 2c,d,f). Examples of spatiotemporal artifacts resulting from longer delays3,22 in sequential multiview image acquisition are shown in Supplementary Figure 5. Notably, the functional relationship between the temporal multiview correspondence and the magnitude of spatial and temporal mismatches in sequential multiview imaging was approximately linear (Fig. 2d,f). Thus, although future improvements in sequential multiview imaging speed may reduce the magnitude of the observed errors, the capabilities of sequential multiview imaging are intrinsically constrained.

Besides eliminating spatial and temporal artifacts, simultaneous multiview imaging also provides excellent temporal resolution. This point can be illustrated with basic parameters quantifying cellular dynamics in the Drosophila embryo. For example, fast movements of thousands of nuclei occur during the mitotic cycles in the syncytial blastoderm (Supplementary Video 1): our analysis showed that in the 12th mitotic cycle, the average movement speed of dividing nuclei was 8.12 ± 2.59 µm min−1 (mean ± s.d., n = 2,798, Huber robust estimator; Supplementary Fig. 6a) and the average nearest-neighbor distance was 7.57 ± 1.34 µm (mean ± s.d., n = 1.44 × 105, Huber robust estimator). Nearest neighbors are not usually daughter nuclei from the same mother nucleus (Supplementary Fig. 6b); hence, in order to obtain quantitative data for the analysis of nuclear dynamics in the entire blastoderm, simultaneous multiview imaging is required at an overall speed that ensures, on average, nuclear movements of no more than half of the nearest-neighbor distance between subsequent time points. Simultaneous multiview imaging of the entire embryo therefore has to be performed at least once every 30 s.
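In numbers, this sampling requirement follows directly from the measured speeds and distances quoted above:

    # Nuclei must move less than half the nearest-neighbor distance between
    # consecutive time points for unambiguous tracking (12th-cycle values).
    speed_um_per_min = 8.12          # average post-division nucleus speed
    nn_distance_um = 7.57            # average nearest-neighbor distance

    max_interval_s = (nn_distance_um / 2.0) / speed_um_per_min * 60.0
    print(f"maximum time-point interval ~{max_interval_s:.0f} s")   # ~28 s, hence 30-s imaging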

To demonstrate the feasibility of such recordings, we imaged cellular dynamics throughout Drosophila embryonic development with 25-, 30- and 35-s temporal resolution (Supplementary Videos 1–3, Fig. 3 and Supplementary Fig. 7). We took advantage of the high-throughput capacity of our microscope by acquiring 1 million 5.5-megapixel images (11 TB) per embryo at a data rate of 350 MB s−1, covering development in ~2,000 time points from cellularization of the blastoderm starting at 2 h after fertilization until hatching at 22 h after fertilization. The recording is free of the spatial and temporal artifacts intrinsic to sequential multiview imaging and provides excellent temporal sampling of nuclei movements, characteristics that enable comprehensive nuclei tracking in the syncytial blastoderm (Fig. 3c).
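A quick consistency check of the quoted per-embryo data volume, assuming 16-bit pixels:

    images = 1_000_000
    pixels_per_image = 5.5e6
    bytes_total = images * pixels_per_image * 2      # 2 bytes per 16-bit pixel
    print(f"~{bytes_total / 1e12:.0f} TB")            # ~11 TB, matching the stated data set size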


Figure 4 | Simultaneous multiview two-photon imaging of entire embryos. (a) Optical slices from a simultaneous multiview in vivo recording of a nuclei-labeled stage 16 Drosophila embryo (dorsal side up) obtained with two-photon SiMView microscopy. Last frame: maximum-intensity projection (MIP). (b) MIPs of a two-photon SiMView time-lapse recording of Drosophila embryonic development. Separate projections are shown for dorsal and ventral halves of the fused three-dimensional image stacks. The entire embryo was recorded at 30-s intervals over a period of 2 h during germ band retraction, using an acquisition period of 20 s per time point. The complete recording comprises 37,620 high-resolution images (387 GB). (c) MIPs of a second two-photon SiMView time-lapse recording. The entire embryo was recorded in 30-s intervals over a period of 3 h during dorsal closure and ventral nerve cord formation, using an image acquisition period of 20 s per time point. The complete recording comprises 68,460 high-resolution images (705 GB). rGB, retracting germ band; A, amnioserosa; VNC, ventral nerve cord; BL, brain lobes. Scale bars, 50 µm (a–c).


One-photon SiMView provided superior signal-to-noise ratio for small penetration depths but failed to capture structures deep inside the embryo (Fig. 3a). In contrast, two-photon SiMView reduced autofluorescence compared to one-photon SiMView and provided near-complete physical coverage of the embryo even for late developmental stages (Fig. 4a and Supplementary Video 4). We achieved 30-s temporal resolution in whole-embryo two-photon imaging and captured global cellular dynamics during germ band retraction, dorsal closure and ventral nerve cord formation (Fig. 4b,c and Supplementary Videos 5 and 6).

Even toward the end of embryonic development, when the embryonic muscular system becomes active and fast contractions are continuously reorienting the entire embryo, SiMView provided complete, artifact-free snapshots of the embryo for 94% of the recorded time points. We observed normal development and hatching of larvae within the expected time window for all recorded embryos (Supplementary Video 7 and Supplementary Fig. 8).

Analyzing cellular dynamics in entire early embryos
Cell tracking in entire developing Drosophila embryos has so far been technically impossible. Although there are impressive quantitative studies of cell behavior1, existing methods are limited to partial spatial observations and rely on time-consuming semiautomated approaches to image processing that cannot be scaled to analyze the full embryo.

We took advantage of our spatiotemporal resolution to successfully reconstruct and track Drosophila embryo cell nuclei through multiple division cycles in the entire syncytial blastoderm (Supplementary Videos 8 and 9 and Fig. 5). We set out to track all nuclei through the 12th and 13th mitotic waves, which represent some of the fastest global processes in the embryo. Nuclei were detected automatically with an accuracy of 94.74% ± 0.68% with respect to false positives and almost 100% with respect to false negatives, as evaluated by a human expert. The segmentations were obtained using two independent methods: an efficient implementation of the Gaussian mixture model25, which provides nuclear positions and size estimates (Fig. 5a,b and Supplementary Video 10), and a three-dimensional implementation of the diffusion gradient vector field algorithm26, which yields full nuclear morphologies (Fig. 5c and Supplementary Video 10).


Figure 5 | Quantitative reconstruction of nuclei dynamics in the syncytial blastoderm. (a) Global nuclei tracking in the entire Drosophila syncytial blastoderm. Raw image data from the corresponding supplementary video were superimposed with automated tracking results using our sequential Gaussian mixture model approach. Images show snapshots before the 12th mitotic wave and after the 13th mitotic wave (using a random color scheme in the first time point, which is propagated to daughter nuclei using tracking information). (b) Global detection of nuclear divisions during the 13th mitotic wave in the Drosophila syncytial blastoderm. Non-dividing nuclei are shown in cyan and dividing nuclei in magenta. The color of dividing nuclei progressively fades back to cyan within 5 time points. (c) Enlarged view of a reconstructed embryo with nuclei tracking information (left) and morphological nuclei segmentation (right). (d) Average nucleus speed as a function of time after nuclear division. Values at t = 0 represent all pre-mitotic nuclei. Values at t > 0 represent post-mitotic nuclei at time t after mitosis. The small s.e.m. (≤ line thickness) arises from the large sample size of ~2,500–5,000 samples per time point. (e) Distribution of nuclei distances between nearest neighbors. The mean and s.d. of the post-mitotic distributions are 7.57 ± 1.34 µm (12th wave; n = 1.44 × 105) and 5.52 ± 0.99 µm (13th wave; n = 4.66 × 105). Scale bars, 50 µm (a,b), 10 µm (c).


We implemented the Gaussian mixture model–based segmentation and tracking on a GPU, which allowed us to reconstruct nuclei dynamics in the entire embryo in only 40 s per time point. Owing to the high temporal resolution in the simultaneous multiview light-sheet microscope, it was sufficient to initialize each time point with the mixture model from the previous time point in order to obtain tracking information. This approach yielded a tracking accuracy between frames of 98.98% ± 0.42%. To follow nuclei through their division, a machine learning classifier was trained based on local image features. This approach yielded a nuclei division detection and linkage accuracy of 93.81% ± 2.71% throughout the recording. We quantified three distinct types of motion: global nuclei displacements, synchronized waves of nuclear division and fast local nuclei displacements (daughter nuclei separating after division) (Supplementary Videos 8 and 9).
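An illustrative CPU analog of this frame-to-frame strategy can be built with scikit-learn: fit a Gaussian mixture to the detected nuclear coordinates of each time point and initialize it with the means from the previous time point, so that mixture components (nuclei) keep their identities across frames. The authors' implementation is a custom GPU code with division handling by a trained classifier; neither is reproduced in this sketch.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def track_frames(point_clouds, n_nuclei):
        """point_clouds: list of (N_t, 3) arrays of candidate nuclear coordinates."""
        gmm = GaussianMixture(n_components=n_nuclei, covariance_type="diag")
        gmm.fit(point_clouds[0])                          # initialize on the first frame
        tracks = [gmm.means_.copy()]
        for points in point_clouds[1:]:
            gmm = GaussianMixture(n_components=n_nuclei, covariance_type="diag",
                                  means_init=tracks[-1])  # warm start from the previous frame
            gmm.fit(points)
            tracks.append(gmm.means_.copy())
        return np.stack(tracks)                           # (T, n_nuclei, 3) trajectories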

Our results are summarized in Figure 5d,e and Supplementary Figure 6. The quantitative analysis of mitotic waves reveals that average nucleus movement speeds were highest directly after nuclear division (8.12 ± 2.59 µm min−1 in 12th wave and 7.21 ± 2.21 µm min−1 in 13th wave, mean ± s.d.; n = 2,798 and 4,852, Huber robust estimator) and that the speeds exhibited two pronounced local maxima at 2.1 and 5.0 min after division (12th wave; 3.8 and 6.3 min for 13th wave), which relate to the relaxation process of the global nuclei population toward a new packing pattern (Fig. 5d). The average distance between daughter nuclei reached a maximum 1.25 min (12th wave) and 1.67 min (13th wave) after division, which is almost twofold higher than the global average nearest-neighbor distance in the embryo (7.57 ± 1.34 µm in 12th wave and 5.52 ± 0.99 µm in 13th wave, mean ± s.d.; n = 1.44 × 105 and 4.66 × 105, Huber robust estimator), and relaxed to 8.76 µm for mitotic wave 12 and to 5.68 µm for mitotic wave 13 (Supplementary Fig. 6b), owing to the almost twofold increase in nuclei count by the end of each mitotic wave.
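The speed and distance statistics above are reported with a Huber robust estimator. A minimal sketch of such an estimator, an iteratively reweighted mean that down-weights outliers, is shown below; the tuning constant and iteration scheme are conventional choices rather than parameters taken from the paper.

    import numpy as np

    def huber_mean(x, k=1.345, n_iter=50):
        """Huber M-estimate of location via iteratively reweighted averaging."""
        x = np.asarray(x, dtype=float)
        mu = np.median(x)
        scale = 1.4826 * np.median(np.abs(x - mu)) + 1e-12   # MAD-based robust scale
        for _ in range(n_iter):
            r = np.abs(x - mu) / scale
            w = np.minimum(1.0, k / np.maximum(r, 1e-12))    # Huber weights
            mu = np.sum(w * x) / np.sum(w)
        return mu

    # Example: the robust mean is barely affected by a few aberrant detections.
    speeds = np.concatenate([np.random.normal(8.1, 2.6, 1000), [60.0, 75.0]])
    print(huber_mean(speeds))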

Reconstructing cell lineages from SiMView recordings
To complement the automated reconstruction of cellular dynamics in early Drosophila embryos, we manually verified that our SiMView recordings were also suitable for cell tracking and cell lineage analyses in later developmental stages. Using the two-photon SiMView recording shown in Supplementary Video 5, we performed manual cell tracking in non-superficial layers of the retracting germ band. An example of a reconstructed cell track is shown in Supplementary Figure 9.


Figure 6 | Simultaneous multiview imaging of Drosophila neural development. (a) One-photon SiMView recording of a histone-labeled Drosophila embryo superimposed with manually reconstructed lineages of three neuroblasts and one epidermoblast for 120–353 min after fertilization (time points 0–400); track color encodes time. (b) Enlarged view of tracks highlighted in a. Green spheres show cell locations at time point 400. Asterisks mark six ganglion mother cells produced in two rounds of neuroblast division. NB, neuroblast; EB, epidermoblast. (c) Maximum-intensity projections (dorsal and ventral halves) of Drosophila embryonic nervous system development recorded with one-photon SiMView microscopy. The elavC155-GAL4;UAS-mCD8::GFP transgenic embryo was recorded at 30-s intervals over the period 9.5–15.3 h post fertilization (~700 time points) using an image acquisition period of 15 s per time point. Intensity normalization was performed to compensate for GFP signal increase over time (see Supplementary Information). The autofluorescent vitelline membrane was computationally removed in the first panel. VNC, ventral nerve cord; PNS, peripheral nervous system; B, brain; VM, vitelline membrane; EID, eye-antennal imaginal disc; VSC, ventral sensory cells; LSC, lateral sensory cells; DSC, dorsal sensory cells; A, antenna; BC, brain commissure. Asterisk and arrow indicate shortening of the VNC. (d) Enlarged view of area highlighted in c. Arrow points to a single axon. (e) Maximum-intensity projections of axonal morphogenesis in a Ftz-ng-GAL4;10XUAS-IVS-myr::GFP transgenic embryo (false-color lookup table) recorded with high-resolution one-photon SiMView microscopy. Images represent a 0.3% sub-region of the total covered volume. Arrows highlight filopodial dynamics. Scale bars, 30 µm (a,b), 50 µm (c), 10 µm (d), 5 µm (e).


In addition, we followed neuroblast and epidermoblast lineages in the one-photon SiMView recording shown in Supplementary Video 3. The high signal-to-noise ratio of the one-photon SiMView data allowed us to track blastoderm cells through gastrulation and subsequent cell divisions for a total period of 400 time points (~4 h; Fig. 6a,b and Supplementary Fig. 10). These reconstructions capture cellular dynamics in the early stages of nervous system development, including neuroblast delamination and birth of the first ganglion mother cells (Supplementary Video 11 and Supplementary Fig. 10).

SiMView imaging of the developing nervous system
Drosophila nervous system development is an excellent model system for investigating neuron type specification and axonal guidance27–29. Neurodevelopmental dynamics are often studied by comparing samples fixed at different developmental stages30, but also by live imaging in a local context31,32. Data on the developmental dynamics of the entire embryonic nervous system, however, are still lacking.

We used transgenic embryos expressing elav(C155)-GAL4 and UAS-mCD8::GFP to visualize all postmitotic neurons and performed time-lapse one-photon SiMView imaging of the entire embryonic nervous system with 25-s and 30-s time resolution to capture neural development in ~400,000 high-resolution images (4 TB) for more than 700 time points. Recording a four-view data set of the entire embryo required only 2 mJ of light energy per time point and resulted in negligible photobleaching (Supplementary Fig. 11). The resulting data sets provide detailed information on the development of the central and peripheral nervous systems (Supplementary Videos 12–14 and Fig. 6c,d) and reveal fine details in the dynamics of axonal outgrowth (Supplementary Video 15). For instance, the time-lapse recordings show several clusters of abdominal sensory organs (dorsal, lateral and ventral) and the formation of connectives in the ventral nerve cord. SiMView imaging also captured the dynamics of sensory cells in the head region and in the posterior segments: the entire morphogenesis of the eye-antennal imaginal disc, which separates from the embryonic brain to move in an anterior-medial direction, can be followed. The antenna anlage separates from the larval eye (Bolwig's organ) as both establish their final locations. It is even possible to follow the morphogenesis of the deeply embedded embryonic brain (Supplementary Video 14 and Supplementary Fig. 12), which starts as two bilaterally symmetrical neurogenic regions, initially separated from each other, and begins to move posteriorly with head involution. The development of commissures is visible at high temporal resolution.
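Rough photon-budget arithmetic for these recordings, using only the numbers stated above (2 mJ per four-view time point, a 15-s acquisition period per time point and ~700 time points), illustrates how low the time-averaged exposure is:

    energy_per_timepoint_j = 2e-3
    acquisition_period_s = 15
    time_points = 700

    avg_power_uw = energy_per_timepoint_j / acquisition_period_s * 1e6
    print(f"average power during acquisition ~{avg_power_uw:.0f} µW")        # ~133 µW
    print(f"total energy over the recording ~{energy_per_timepoint_j * time_points:.1f} J")  # ~1.4 J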

We complemented the SiMView imaging experiments of broadly labeled elav(C155)-GAL4,UAS-mCD8::GFP embryos with high-resolution SiMView experiments using Ftz-ng-GAL4,10XUAS-IVS-myr::GFP embryos exhibiting sparse expression in the central nervous system33,34 (Supplementary Videos 16–20). We increased spatial sampling by more than a factor of six while maintaining the same high temporal resolution and imaging period (30-s intervals for 8.5 h, providing ~460,000 images or 4.6 TB of data; Supplementary Video 18). These recordings preserve the capability to follow global processes and at the same time reveal detailed filopodial dynamics during axonal morphogenesis at excellent spatiotemporal resolution (Supplementary Video 19 and Fig. 6e). For example, SiMView high-speed imaging shows that zones of filopodial extension and exploration are retained proximal to the growth cones for long periods of time (Supplementary Video 19).

DISCUSSION
Our technology framework for simultaneous multiview imaging with one-photon or multiphoton light sheet–based fluorescence excitation is designed to deliver exceptionally high imaging speeds and physical coverage while minimizing photobleaching and phototoxic effects. SiMView excels at the quantitative in vivo imaging of large biological specimens in their entirety over long periods of time and with excellent spatiotemporal resolution.

Similar to its implementation with two-photon excitation, the SiMView concept can also be combined with Bessel plane illumination15, scanned light sheet–based structured illumination3 or functional imaging35, among other approaches.

The imaging technique presented here opens the door to high-throughput high-content screening, fast functional imaging and comprehensive quantitative analyses of cellular dynamics in entire developing organisms. By combining this method with advanced computational tools for automated image segmentation and cell tracking, the reconstruction of high-quality cell lineage trees, comprehensive mapping of gene expression dynamics, automated cellular phenotyping and biophysical analyses of cell shape changes and cellular forces are within reach, even for very complex biological systems.

METHODS
Methods and any associated references are available in the online version of the paper.

Note: Supplementary information is available in the online version of the paper.

ACKNOWLEDGMENTS
We thank M. Coleman (Coleman Technologies) for custom microscope operating software; the Janelia Farm Research Campus and European Molecular Biology Laboratory workshops for custom mechanical parts; K. Branson for helpful discussions and valuable comments on the manuscript; G. Myers for helpful discussions and for supporting F.A.; B. Coop and B. Biddle for helpful advice on instrumentation; A. Denisin for testing SiMView live-imaging assays of the Drosophila central nervous system; J. Truman, D. Mellert, J. Simpson, M. Zlatic, T. Lee, B. Lemon, E. Betzig, N. Ji, T. Planchon and L. Gao for helpful discussions; and J. Simpson, B. Pfeiffer and G. Rubin (Janelia Farm Research Campus) for transgenic Drosophila stocks. This work was supported by the Howard Hughes Medical Institute.

AUTHOR CONTRIBUTIONS
P.J.K. conceived the research and designed the microscopes. R.T. and P.J.K. built and characterized the microscopes. R.T. developed and performed the imaging experiments. K.K., F.A. and P.J.K. developed the image processing and data management framework. All authors analyzed the data. P.J.K. wrote the paper, with input from all authors.

COMPETING FINANCIAL INTERESTS
The authors declare no competing financial interests.

Published online at http://www.nature.com/doifinder/10.1038/nmeth.2062. Reprints and permissions information is available online at http://www.nature.com/reprints/index.html.

1. McMahon, A., Supatto, W., Fraser, S.E. & Stathopoulos, A. Dynamic analyses of Drosophila gastrulation provide insights into collective cell migration. Science 322, 1546–1550 (2008).

2. Fernandez, R. et al. Imaging plant growth in 4D: robust tissue reconstruction and lineaging at cell resolution. Nat. Methods 7, 547–553 (2010).


3. Keller, P.J. et al. Fast, high-contrast imaging of animal development with scanned light sheet–based structured-illumination microscopy. Nat. Methods 7, 637–642 (2010).

4. Keller, P.J., Schmidt, A.D., Wittbrodt, J. & Stelzer, E.H.K. Reconstruction of zebrafish early embryonic development by scanned light sheet microscopy. Science 322, 1065–1069 (2008).

5. Fowlkes, C.C. et al. A quantitative spatiotemporal atlas of gene expression in the Drosophila blastoderm. Cell 133, 364–374 (2008).

6. Long, F., Peng, H., Liu, X., Kim, S.K. & Myers, E. A 3D digital atlas of C. elegans and its application to single-cell analyses. Nat. Methods 6, 667–672 (2009).

7. Tomer, R., Denes, A.S., Tessmar-Raible, K. & Arendt, D. Profiling by image registration reveals common origin of annelid mushroom bodies and vertebrate pallium. Cell 142, 800–809 (2010).

8. Gehrig, J. et al. Automated high-throughput mapping of promoter-enhancer interactions in zebrafish embryos. Nat. Methods 6, 911–916 (2009).

9. Pardo-Martin, C. et al. High-throughput in vivo vertebrate screening. Nat. Methods 7, 634–636 (2010).

10. Held, M. et al. CellCognition: time-resolved phenotype annotation in high-throughput live cell imaging. Nat. Methods 7, 747–754 (2010).

11. Khairy, K. & Keller, P.J. Reconstructing embryonic development. Genesis 49, 488–513 (2011).

12. Tomer, R., Khairy, K. & Keller, P.J. Shedding light on the system: studying embryonic development with light sheet microscopy. Curr. Opin. Genet. Dev. 21, 558–565 (2011).

13. Huisken, J. & Stainier, D.Y. Selective plane illumination microscopy techniques in developmental biology. Development 136, 1963–1975 (2009).

14. Keller, P.J. & Stelzer, E.H. Quantitative in vivo imaging of entire embryos with Digital Scanned Laser Light Sheet Fluorescence Microscopy. Curr. Opin. Neurobiol. 18, 624–632 (2008).

15. Planchon, T.A. et al. Rapid three-dimensional isotropic imaging of living cells using Bessel beam plane illumination. Nat. Methods 8, 417–423 (2011).

16. Capoulade, J., Wachsmuth, M., Hufnagel, L. & Knop, M. Quantitative fluorescence imaging of protein diffusion and interaction in living cells. Nat. Biotechnol. 29, 835–839 (2011).

17. Ntziachristos, V. Going deeper than microscopy: the optical imaging frontier in biology. Nat. Methods 7, 603–614 (2010).

18. Palero, J., Santos, S.I., Artigas, D. & Loza-Alvarez, P. A simple scanless two-photon fluorescence microscope using selective plane illumination. Opt. Express 18, 8491–8498 (2010).

19. Truong, T.V., Supatto, W., Koos, D.S., Choi, J.M. & Fraser, S.E. Deep and fast live imaging with two-photon scanned light-sheet microscopy. Nat. Methods 8, 757–760 (2011).

20. Huisken, J., Swoger, J., Del Bene, F., Wittbrodt, J. & Stelzer, E.H.K. Optical sectioning deep inside live embryos by selective plane illumination microscopy. Science 305, 1007–1009 (2004).

21. Swoger, J., Verveer, P., Greger, K., Huisken, J. & Stelzer, E.H. Multi-view image fusion improves resolution in three-dimensional microscopy. Opt. Express 15, 8029–8042 (2007).

22. Preibisch, S., Saalfeld, S., Schindelin, J. & Tomancak, P. Software for bead-based registration of selective plane illumination microscopy data. Nat. Methods 7, 418–419 (2010).

23. Dodt, H.U. et al. Ultramicroscopy: three-dimensional visualization of neuronal networks in the whole mouse brain. Nat. Methods 4, 331–336 (2007).

24. Huisken, J. & Stainier, D.Y. Even fluorescence excitation by multidirectional selective plane illumination microscopy (mSPIM). Opt. Lett. 32, 2608–2610 (2007).

25. Xiong, G., Feng, C. & Ji, L. Dynamical Gaussian mixture model for tracking elliptical living objects. Pattern Recognit. Lett. 27, 838–842 (2006).

26. Xu, C. & Prince, J. Snakes, shapes, and gradient vector flow. IEEE Trans. Image Process. 7, 359–369 (1998).

27. Pearson, B.J. & Doe, C.Q. Specification of temporal identity in the developing nervous system. Annu. Rev. Cell Dev. Biol. 20, 619–647 (2004).

28. Sánchez-Soriano, N., Tear, G., Whitington, P. & Prokop, A. Drosophila as a genetic and cellular model for studies on axonal growth. Neural Dev. 2, 9 (2007).

29. Silies, M., Yuva-Aydemir, Y., Franzdottir, S.R. & Klambt, C. The eye imaginal disc as a model to study the coordination of neuronal and glial development. Fly 4, 71–79 (2010).

30. Yu, H.H. et al. A complete developmental sequence of a Drosophila neuronal lineage as revealed by twin-spot MARCM. PLoS Biol. 8, e1000461 (2010).

31. Williams, D.W. & Truman, J.W. Mechanisms of dendritic elaboration of sensory neurons in Drosophila: insights from in vivo time lapse. J. Neurosci. 24, 1541–1550 (2004).

32. Murray, M.J., Merritt, D.J., Brand, A.H. & Whitington, P.M. In vivo dynamics of axon pathfinding in the Drosophila CNS: a time-lapse study of an identified motorneuron. J. Neurobiol. 37, 607–621 (1998).

33. Hiromi, Y., Kuroiwa, A. & Gehring, W.J. Control elements of the Drosophila segmentation gene fushi tarazu. Cell 43, 603–613 (1985).

34. Pfeiffer, B.D., Truman, J.W. & Rubin, G.M. Using translational enhancers to increase transgene expression in Drosophila. Proc. Natl. Acad. Sci. USA 109, 6626–6631 (2012).

35. Arrenberg, A.B., Stainier, D.Y., Baier, H. & Huisken, J. Optogenetic control of cardiac function. Science 330, 971–974 (2010).


ONLINE METHODS
SiMView light-sheet microscopes. We developed simultaneous multiview light-sheet microscopes for one-photon and two-photon excitation. The SiMView microscope for one-photon excitation consists of custom laser light sources, two scanned light-sheet illumination arms, two fluorescence detection arms equipped with sCMOS cameras, a custom four-view specimen chamber with perfusion system and a four-axis specimen positioning system that is magnetically connected to the specimen holder in the imaging chamber (Supplementary Table 1).

The laser beams of two custom multilaser units (SOLE-3 and SOLE-6, Omicron Laserage) are combined by fiber optics and split into two illumination arms for fluorescence excitation from near-UV to near-IR, providing the laser wavelengths 405, 445, 488, 515, 561, 594, 642 and 685 nm.

Each illumination arm consists of two custom collimator modules, a beam coupling-unit, a high-speed laser shutter with controller (VS14S2ZM1-100 with AlMgF2 coating, VMM-D3 three-channel driver, Uniblitz), a compact illumination filter wheel with controller (96A351, MAC6000 DC servo controller, Ludl), a miniature piezo tip-tilt mirror with amplifier (S-334, E-503.00S amplifier, E-509.S3 servo controller, E-500 chassis, Physik Instrumente), an f-theta lens (S4LFT4375, Sill Optics), a tube lens module (U-TLU-1-2 camera tube, Olympus) and a piezo-positioned illumination objective with controller (P-725, E-665 piezo amplifier and servo controller, Physik Instrumente; equipped with an XLFLUOR 4×/340/0.28 objective, Olympus).

Each detection arm consists of a piezo-positioned detection objective with controller (P-725, E-665 piezo amplifier and servo controller, Physik Instrumente; equipped with LWD water-dipping objectives from Nikon and Carl Zeiss), a detection filter wheel with controller (96A354, MAC6000 DC servo controller, Ludl), a custom tube module with a tube lens (CFI second lens unit, Nikon) and a water-cooled sCMOS camera (Neo, Andor; with ExosII re-circulator, Koolance).

The four-view specimen chamber comprises a custom specimen holder, a custom mechanical scaffold manufactured from black Delrin, a multistage adaptor module for connecting the specimen holder to the specimen positioning system and a custom perfusion system. The specimen holder was produced from medical-grade stainless steel. Using the positioning system, the holder can be translated in three dimensions and rotated around its main axis without breaking the water seal. The chamber is open to two opposite sides to accommodate the water-dipping detection objectives and contains two windows with coverslips on the remaining two sides for laser light-sheet illumination. The top of the chamber is open for mechanical or optical access and allows background illumination with a cold light source. The chamber has inlet and outlet valves connected to the perfusion system, which is operated by a dual-channel 12-roller pump (REGLO, Harvard Apparatus). The pump is connected to the chamber by Tygon tubing, which is guided through a bench-top incubator (Model 107 benchtop environmental chamber, TestEquity) for temperature control and oxygenation of the sample medium.

The four-axis (x-y-z-θ) specimen positioning system consists of three customized translation stages (M-111K046, Physik Instrumente) and one rotary stage (M-116, Physik Instrumente), a motion input/output (I/O) interface and amplifier (C-809.40 4-channel servo-amplifier, Physik Instrumente) and a motion controller (PXI-7354 4-axis stepper/servo motion controller, National Instruments).

The simultaneous multiview imaging platform for two-photon excitation comprises additional optical modules as well as different components in the illumination and detection arms. The laser light source is a pulsed Ti:Sapphire laser (Chameleon Ultra II, Coherent). An additional module between light source and illumination arms allows laser intensity modulation and IR beam splitting. It consists of a beam-attenuation submodule (AHWP05M-980 mounted achromatic half-wave plate and GL10-B Glan-laser polarizer, Thorlabs), a Pockels cell with driver (Model 350-80-LA-02 KD*P series electro-optic modulator and Model 302RM driver, Conoptics) and an IR beam splitter (PBSH-450-2000-100 broadband polarizing cube beam splitter, Melles Griot and WPA1312-2-700-1000 achromatic 1/2 wave plate, Casix). Customized f-theta lenses (66-S80-30T-488-1100nm custom lens, Special Optics) and different illumination objectives (LMPLN10XIR 10×/0.30, Olympus) are used in the illumination arms. The illumination and detection objectives are mounted on different piezo-positioners (P-622.1CD PIHera piezo linear stage and E-665 piezo amplifier and servo controller, Physik Instrumente). Finally, different filters are used in the detection arms (BrightLine short-pass and band-pass filters, Semrock).

An overview of all mechano-optical components of the simultaneous multiview imaging platform for two-photon excitation is provided in Supplementary Table 2.

SiMView real-time electronics control framework. All optical and mechanical components of the microscopy platform are operated and synchronized by a real-time electronics framework (Supplementary Fig. 3). The core control framework is deployed on a high-performance real-time controller, which communicates with an independent host computer workstation to coordinate the parallelized image acquisition workflow at a rate of 350 MB s−1.

The real-time controller (PXI-8110 Core 2 Quad 2.2 GHz, National Instruments) runs the LabVIEW Real-Time operating system, and is equipped with three I/O interface boards (PXI-6733 high-speed analog output eight-channel board, National Instruments) linked to BNC connector blocks (BNC-2110 shielded connector block, National Instruments) as well as a serial interface board (PXI-8432/2, National Instruments). The real-time controller communicates with the host workstation by Gigabit Ethernet.

SiMView computational framework for high-speed image acquisition. Custom microscope control software was written in LabVIEW (National Instruments). The software comprises 64-bit modules running on the host computer and 32-bit modules running on the real-time controller. For real-time image processing in the autoalignment and data acquisition modules, the LabVIEW software invokes external custom code written in C++. Real-time data compaction by lossless three-dimensional wavelet compression is performed by multithreaded execution of external processing libraries (PICTools Medical, Accusoft Pegasus).

The host workstation for dual-sCMOS image acquisition comprises two six-core CPUs (X5680, Intel Corporation) for online processing, 144 gigabytes (GB) DDR-3 RAM (Kingston) for image ring buffers and online processing, a 24-channel RAID controller (52445, Adaptec) with 22 SAS-2 hard disks (Cheetah 15K.7, Seagate) combined in a RAID-6 for high-speed image acquisition, a 10-gigabit fiber network adaptor (EXPX9501AFXSR, Intel Corporation) for online data streaming, a graphics adaptor (GeForce GTX470, Nvidia Corporation) for GPU-based processing, two CameraLink frame grabbers (Neon, BitFlow; or PCIe-1429, National Instruments) and a server board (X8DAH+-F, Supermicro).

SiMView computational framework for high-throughput image data management. The SiMView image acquisition framework relays the raw multiview data stream to the SiMView processing framework via optical fibers. The integrated framework operates on a high-performance processing workstation in combination with a high-capacity image server and comprises computational modules for high-throughput multiview image processing and real-time image data management written in MATLAB and C++.

The processing workstation comprises two six-core CPUs (X5680, Intel Corporation), 96 GB DDR-3 RAM (Kingston), a 16-channel RAID controller (51645, Adaptec) with two SAS-2 hard disks (AL11SE 147 GB, Toshiba) combined in a RAID-1 for the operating system and 10 SAS-2 hard disks (AL11SE 600 GB, Toshiba) combined in a RAID-6 for image data buffering, a 10-gigabit fiber network adaptor (EXPX9501AFXSR, Intel Corporation) for online data streaming, a high-performance graphics adaptor (Quadro FX5800, Nvidia Corporation) for GPU-based processing and a server board (S5520SC, Intel Corporation).

The fiber network–attached data storage system is connected to the primary acquisition and processing workstations by 10-gigabit fiber and comprises a rack-mount server with triple-channel SAS interface and 10-gigabit fiber network adaptor (EXPX9501AFXSR, Intel Corporation) as well as two 24-disk RAID enclosures (ESDS A24S-G2130, Infortrend). The RAID enclosures are equipped with 48 SATA-2 hard disks (Ultrastar A7K2000, Hitachi) that form two RAID-6 arrays for long-term imaging experiments and post-acquisition wavelet-compressed data storage.

See Supplementary Figure 3 for an overview of the computational infrastructure.

SiMView computational framework for high-throughput multiview image fusion. We developed a high-throughput multiview image fusion pipeline for spatial registration and fusion of the simultaneous four-view imaging data (Supplementary Software). It employs multithreading to take full advantage of the SiMView computational infrastructure and achieves an image data processing rate of 200 MB s−1 when operating in full-processing mode (multiFuse pipeline) and a rate four times higher when using interpolated transformation parameters (timeFuse pipeline).

No mechanical specimen rotation is required in simultaneous multiview imaging; thus, the following ten degrees of freedom were considered in the registration process: relative axial displacement of the two light sheets (one parameter, determined independently for the two camera views in order to assess the variability of specimen-induced optical path deviation), relative tilt of the two light sheets with respect to the illumination axis (one parameter, determined independently for the two camera views in order to assess registration robustness), relative intensity of the two light sheets (one parameter, determined independently for the two camera views in order to assess intensity correction robustness), relative tilt of the two detection subsystems with respect to the detection axis (one rotation parameter), relative lateral displacement of the two detection subsystems (two parameters) and relative detection efficiency of the two detection subsystems (one parameter).

In order to compensate for a possible parameter drift during time-lapse recordings, these parameters were determined in short intervals for geometrical degrees of freedom (typically every fifth time point, or 150 s) and at every time point for intensity correction. When assessing the robustness of these parameters in long-term recordings of Drosophila embryonic development, we found low parameter variability as a function of time and high correspondence of independent measurements for identical time points: for the recording shown in Supplementary Video 3, parameter standard deviations over 24 h were 675 nm for axial light-sheet displacement, 0.12° for relative light-sheet tilt, 387 nm for lateral detection arm displacement, 0.06° for relative detection arm tilt, 2.3% for relative light-sheet intensity and 0.9% for relative detection efficiency in the two detection subsystems. The deviations of independently determined geometrical correction factors or intensity correction factors were 0.005° for relative light-sheet tilt and 3.7% for relative light-sheet intensity. This includes specimen geometry-induced deviations because independent measurements were performed in different parts of the specimen.

The computation of these transformation parameters and the image fusion were performed by our custom processing pipeline. The pipeline was implemented in MATLAB and C++ and consists of four sub-modules for multichannel registration, multichannel fusion, multicamera registration and multicamera fusion (Supplementary Software). In the first two modules (multichannel registration and fusion), the pairs of data sets corresponding to the same detection subsystem (‘channels’ representing the information collected with the two light sheets) are registered and fused. In the last two modules (multicamera registration and fusion), the fused data sets of the two detection subsystems are registered and fused. With this conceptual separation, only the degrees of freedom relevant to each module are considered, and thus the multiview fusion is performed more efficiently.

Including all disk I/O and using one core to process one time point of the multiview recording shown in Supplementary Video 3, we measured a processing time of 180 s when operating in full-processing mode (multiFuse pipeline) and 42 s when using interpolated transformation parameters (timeFuse pipeline). Using time point–based parallelization on the 12-core microscope acquisition computer, this corresponds to an average processing time of 15 s per time point when operating in full-processing mode and 4 s in the high-speed timeFuse pipeline, which is considerably less than the temporal sampling in the time-lapse recording. Thus, simultaneous multiview image processing can be performed in real time.
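
As a quick sanity check, the parallelization arithmetic quoted above can be restated as a minimal MATLAB sketch; all figures are taken from the text, and the variable names are illustrative rather than part of the actual pipeline.

% Minimal sketch of the time point-based parallelization arithmetic.
coresAvailable = 12;                            % cores of the acquisition workstation
tMultiFuse     = 180;                           % s per time point, full-processing mode
tTimeFuse      = 42;                            % s per time point, interpolated parameters
avgMultiFuse   = tMultiFuse / coresAvailable    % = 15 s per time point
avgTimeFuse    = tTimeFuse / coresAvailable     % = 3.5 s, quoted as 4 s in the text
% Both values lie below the 30-s temporal sampling of the recording, so fusion
% keeps pace with image acquisition.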

In the following, we provide a short description of the workflow in the four modules, referring to the z axis as the optical axis of the detection subsystems, the y axis as the optical axis of the illumination subsystems, and the x axis as the remaining axis to form a Cartesian coordinate system.


The multichannel registration module performs data interpolation and calculates the three-dimensional masks that envelop the recorded specimen in each channel, using a Gaussian filter for envelope smoothing. The combined masks are used to calculate the geometrical center of the specimen along the axis of the incident light sheets (y axis) as a function of (x,z) location. The resulting two-dimensional coordinate matrix indicates the y centers of the optical illumination light path as a function of (x,z) location in a first-order approximation. This matrix is used to extract, background-correct and register slices ten pixels thick for both channels. The optimal transformation settings are determined, considering subpixel-precise z translation and y rotation as the only degrees of freedom. The choice of a data slice in the center of the optical light path for the purpose of this first registration step is optimal for specimens with bilateral optical symmetry with respect to the (x,z) plane (such as Drosophila and zebrafish embryos) and a generally reasonable starting point in the absence of detailed knowledge of the three-dimensional optical properties of the specimen. The first channel serves as the reference channel. The second channel is transformed into the coordinate system of the reference channel by global application of the transformation parameters determined for the registration slice.
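
To make the first step of this module concrete, the sketch below computes a specimen mask and the (x,z) map of y centers from a single image stack. It is a simplified MATLAB illustration under assumed conventions (dimension order y-x-z; the smoothing width and threshold fraction are illustrative), not the pipeline code itself.

% Minimal sketch: specimen mask and y-center map for one image stack ordered (y,x,z).
% 'sigma' and 'thresholdFraction' are illustrative parameters.
function yCenter = illuminationCenterMap(stack, sigma, thresholdFraction)
    kernelSize = 2 * ceil(2 * sigma) + 1;
    sm = smooth3(double(stack), 'gaussian', kernelSize, sigma);  % envelope smoothing
    mask = sm > thresholdFraction * max(sm(:));                  % binary specimen mask
    [ny, nx, nz] = size(mask);
    yCenter = nan(nx, nz);                                       % y center per (x,z) location
    yIdx = (1:ny)';
    for ix = 1:nx
        for iz = 1:nz
            col = double(mask(:, ix, iz));
            if any(col)
                yCenter(ix, iz) = sum(yIdx .* col) / sum(col);   % centroid along y
            end
        end
    end
end

In the pipeline, the masks of both channels are combined before computing this map, which then defines where the ten-pixel-thick registration slices are extracted.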

The multichannel fusion module subsequently determines average intensities in the registration slices and applies the corresponding intensity correction factor to the second channel. The two data sets are then fused by one of the following three methods: global five-level wavelet decomposition with a Daubechies D4 basis (for maximum quality), linear blending in a 20-pixel transition region relative to the coordinates of the registration matrix (for large convex specimens with complex optical properties) or global arithmetic averaging (for maximum speed).
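
As an illustration of the linear-blending option, the following MATLAB sketch blends two registered, intensity-corrected channel stacks along the illumination (y) axis around the registration coordinate map. The 20-pixel transition width follows the text; the function name and interface are assumptions.

% Minimal sketch of linear blending along the illumination (y) axis.
% 'a' and 'b' are registered channel stacks ordered (y,x,z); 'yCenter' is the
% (x,z) map of blend centers; 'transition' is the blend width in pixels (20).
function fused = blendChannels(a, b, yCenter, transition)
    [ny, nx, nz] = size(a);
    fused = zeros(ny, nx, nz);
    y = (1:ny)';
    for ix = 1:nx
        for iz = 1:nz
            % linear weight ramp: 1 on the side illuminated by light sheet A,
            % 0 on the side illuminated by light sheet B
            w = min(max((yCenter(ix, iz) + transition / 2 - y) / transition, 0), 1);
            fused(:, ix, iz) = w .* double(a(:, ix, iz)) + (1 - w) .* double(b(:, ix, iz));
        end
    end
end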

The multicamera registration module calculates the three-dimensional masks that envelop the recorded specimen in the two multichannel fusion data sets, using a Gaussian filter for envelope smoothing. Using the combined masks, the geometrical center of the specimen along the axis of the detection subsystems (z axis) is calculated as a function of (x,y) location. The resulting two-dimensional coordinate matrix indicates the z centers of the optical detection light path as a function of (x,y) location in a first-order approximation. This coordinate matrix is then used to extract slices ten pixels thick from the fusion data sets. The optimal transformation settings for registration of these two slices are determined, considering subpixel-precise x and y translations as well as z rotation as degrees of freedom. A region in the center of the optical detection light path is optimal for the purpose of the second registration step, since the two detection subsystems typically provide comparable image quality in this location. The fusion data set for the first camera serves as the reference data set. The fusion data set for the second camera is transformed into the coordinate system of the reference data set by global application of the transformation parameters determined for the registration slice.
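
A minimal sketch of how the lateral offset between the two registration slices could be estimated by phase correlation is shown below. It recovers integer-pixel shifts only; the actual module additionally determines subpixel translations and a z rotation, and its implementation may differ.

% Minimal sketch: integer-pixel (x,y) offset between two registration slices
% via phase correlation.
function [dy, dx] = estimateLateralShift(sliceA, sliceB)
    A = fft2(double(sliceA));
    B = fft2(double(sliceB));
    crossPower = A .* conj(B);
    R = crossPower ./ (abs(crossPower) + eps);   % normalized cross-power spectrum
    c = abs(ifft2(R));                           % correlation surface
    [~, idx] = max(c(:));
    [dy, dx] = ind2sub(size(c), idx);
    dy = dy - 1;  dx = dx - 1;
    sz = size(c);                                % wrap large shifts to negative offsets
    if dy > sz(1) / 2, dy = dy - sz(1); end
    if dx > sz(2) / 2, dx = dx - sz(2); end
end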

The multicamera fusion module operates on the registered fusion data sets for the two detection subsystems, in direct analogy to the procedure described for the multichannel fusion module.

SiMView spatial resolution and laser power settings. Relative alignment of the four optical subsystems of the microscopes was performed on the specimen using the SiMView control framework for autofocusing and automated light-sheet positioning. We measured point-spread functions by imaging 50-nm-sized fluorescent beads with both SiMView microscope platforms (Supplementary Fig. 4). The average full width at half-maximum (FWHM) values were 0.399 ± 0.025 µm laterally and 1.59 ± 0.13 µm axially (mean ± s.d., n = 5) for the one-photon SiMView configuration used to obtain the high-magnification recordings presented in Supplementary Videos 16–20. For the two-photon SiMView configuration used in Supplementary Videos 4–6, we determined average FWHM values to be 0.603 ± 0.086 µm laterally and 1.87 ± 0.14 µm axially (mean ± s.d., n = 8) in the center of the field of view, and 0.640 ± 0.026 µm laterally and 2.29 ± 0.06 µm axially (mean ± s.d., n = 2) at the edge of the field of view. It should be noted that the lateral extent of the point-spread function is effectively the same as in digital scanned laser light-sheet fluorescence microscopy (DSLM)-like implementations4, whereas the axial resolution is improved by √2 over DSLM through the use of two shifted light sheets24.
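
For reference, a minimal MATLAB sketch of the axial-resolution arithmetic implied above is given below, assuming Gaussian axial profiles and that the √2 improvement acts directly on the FWHM; the one-photon value is taken from the text, the derived numbers are illustrative.

% Minimal sketch of the FWHM arithmetic (Gaussian profile assumed).
fwhmSiMViewAxial = 1.59;                                      % measured axial FWHM, micrometers
fwhmDSLMAxial    = fwhmSiMViewAxial * sqrt(2)                 % ~2.25 micrometers for a single light sheet
sigmaAxial       = fwhmSiMViewAxial / (2 * sqrt(2 * log(2)))  % Gaussian s.d., FWHM = 2*sqrt(2*ln2)*sigma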

For one-photon SiMView, the 488-nm laser was set to a power of 330 µW in the live imaging of C155-GAL4,UAS-mCD8::GFP embryos and 360 µW in the live imaging of His2Av-GFPS65T embryos. This corresponds to an exposure of the specimen to 9 µJ and 8 µJ of light energy per acquired image pair, or 2 mJ for the acquisition of all images in a four-view image data set of the entire specimen per recorded time point. For two-photon SiMView, a laser power of 300 mW was used to perform live imaging of His2Av-GFPS65T embryos at an excitation wavelength of 940 nm.

Detailed specifications of all recordings are provided in Supplementary Table 3.

SiMView specimen preparation and controls for physiological development. Drosophila live imaging experiments were performed with His2Av-GFPS65T transgenic stocks (Bloomington stock number 5941), C155-GAL4,UAS-mCD8::GFP transgenic stocks and Ftz-ng-GAL4,10XUAS-IVS-myr::GFP transgenic stocks. Drosophila embryos were dechorionated with 50% (vol/vol) sodium hypochlorite solution (bleach; cat. no. 425044, Sigma-Aldrich) and carefully transferred to custom glass capillaries (2.0/2.5 mm inner/outer diameter, 20 mm length; Hilgenberg GmbH), filled with liquid 0.4% low-melting temperature agarose (SeaPlaque, Lonza) prepared in water. The temperature of the agarose was equilibrated to 30 °C before the transfer to prevent premature polymerization of the agarose gel while ensuring minimal exposure of the embryos to elevated temperatures. Within the ~15–30 s before gel formation, each embryo was centered in the capillary and its anterior-posterior axis was oriented parallel to the symmetry axis of the capillary by gently moving the tip of a forceps around the embryo, thereby generating directional flow in the surrounding agarose. Following embedding and throughout the imaging period, embryos were kept at a standard temperature of 25 °C.

The agarose was allowed to settle for about 10 min, after which the gel section containing the embryo was slightly protruded from the glass capillary to provide full optical access to the embryo. The imaging chamber was filled with tap water and the glass capillary with the protruding gel section was inserted into the specimen holder in the center of the chamber.

Image acquisition in the light-sheet microscope was typically stopped shortly after the onset of strong muscle contractions in the developing embryos. Following image acquisition, agarose-embedded Drosophila embryos were transferred to a dissection microscope to control for normal hatching of larvae. For the time-lapse recordings presented in Supplementary Videos 1, 4–6, the embryos were removed from the microscope at earlier developmental stages and kept in a 25 °C incubator to verify physiological development. Videos documenting the hatching process were recorded with an XM10 camera on an Olympus MVX10 macroscope (Supplementary Fig. 8). We observed normal development and hatching of larvae within the expected time window, and a representative video from a one-photon SiMView experiment is provided as Supplementary Video 7 (showing the hatched larva after imaging the C155-GAL4,UAS-mCD8::GFP embryo from Supplementary Video 12).

Post-acquisition data handling and SiMView data browsing. The data sets produced by the SiMView framework are large (typically up to dozens of terabytes). When routinely working with these data, the user often needs to select spatial and temporal subsets of the images, and/or only image frames that originate from a specific color channel, view angle, specimen or camera. This requires an efficient program that facilitates the organization, browsing and processing of the images in a way that is convenient, fast and extensible and that minimizes network and I/O load. To this end, we developed such a computer program. It implements the following concepts: (i) virtual folders, by which the user can make sophisticated data subset selections, (ii) separation of processing task definition from data selection (for reuse and documentation), (iii) a plug-in system for the easy implementation of custom processing code, (iv) connection to the external programs Vaa3D36 and ImageJ37 and (v) a convenient graphical user interface for defining the virtual folders and processing tasks as well as for browsing through data sets (for viewing the images and their associated meta information). This image data management system was programmed in MATLAB (MathWorks Inc.).
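
To make the idea of a virtual folder concrete, the MATLAB sketch below filters a list of image files by time point, channel and camera. The file-name pattern, directory and selection criteria are purely hypothetical and only illustrate the kind of data-subset selection the program performs.

% Minimal sketch of a 'virtual folder' style selection. The file-name pattern
% 'TM<timepoint>_CHN<channel>_CM<camera>.tif', the directory and the criteria
% are hypothetical.
files  = dir(fullfile('/data/fused', '*.tif'));
names  = {files.name};
tokens = regexp(names, 'TM(\d+)_CHN(\d+)_CM(\d+)', 'tokens', 'once');
keep   = false(size(names));
for i = 1:numel(names)
    if isempty(tokens{i}), continue; end
    v = str2double(tokens{i});          % [timePoint channel camera]
    keep(i) = v(1) >= 100 && v(1) <= 200 && v(2) == 0 && v(3) == 1;
end
selection = names(keep);                % virtual folder: time points 100-200, channel 0, camera 1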

Computational framework for nuclear morphology segmentation. To obtain morphologies of nuclei, we segmented an image I(x,y,z) by simulating the fluorescence intensities as attractive forces (f) embedded in a medium governed by fluid flow equations26. We defined a gradient vector field v(x,y,z) = [u(x,y,z), v(x,y,z), w(x,y,z)] as the field that minimizes the functional:

\varepsilon = \iiint \Big[ \mu \left( u_x^2 + u_y^2 + u_z^2 + v_x^2 + v_y^2 + v_z^2 + w_x^2 + w_y^2 + w_z^2 \right) + |\nabla f|^2 \, |\mathbf{v} - \nabla f|^2 \Big] \, \mathrm{d}x \, \mathrm{d}y \, \mathrm{d}z \qquad (1)

where

f = \left| \nabla \left( G_\sigma(x,y,z) \ast I(x,y,z) \right) \right|^2 \qquad (2)

and where Gσ(x,y,z) represents a three-dimensional Gaussian with s.d. σ; ∗ denotes convolution. Briefly, using the calculus of variations, we obtained a set of Euler equations that can be solved by treating u, v and w as functions of time:

u_t(x,y,z,t) = \mu \nabla^2 u(x,y,z,t) - \big( u(x,y,z,t) - f_x(x,y,z) \big) \times \big( f_x(x,y,z)^2 + f_y(x,y,z)^2 + f_z(x,y,z)^2 \big) \qquad (3)

v_t(x,y,z,t) = \mu \nabla^2 v(x,y,z,t) - \big( v(x,y,z,t) - f_y(x,y,z) \big) \times \big( f_x(x,y,z)^2 + f_y(x,y,z)^2 + f_z(x,y,z)^2 \big) \qquad (4)

w_t(x,y,z,t) = \mu \nabla^2 w(x,y,z,t) - \big( w(x,y,z,t) - f_z(x,y,z) \big) \times \big( f_x(x,y,z)^2 + f_y(x,y,z)^2 + f_z(x,y,z)^2 \big) \qquad (5)

The steady-state solution of these linear parabolic equations is the solution of the Euler equations and yields the required flow field. The regularization parameter µ balances the diffusive (first term) versus the advective (second term) components in equation (1): it determines the amount of smoothing exerted by the algorithm and should be increased in the presence of noise.
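
A minimal MATLAB sketch of iterating equations (3)–(5) to their steady state with an explicit scheme is given below; the edge map f follows equation (2), while the step size, iteration count and value of µ are illustrative choices rather than the parameters used in the actual implementation.

% Minimal sketch: explicit iteration of equations (3)-(5) toward steady state.
% 'f' is the edge map of equation (2); 'mu', 'dt' and 'nIter' are illustrative.
function [u, v, w] = gradientVectorFlow3D(f, mu, dt, nIter)
    [fx, fy, fz] = gradient(f);            % components of the gradient of f
    g = fx.^2 + fy.^2 + fz.^2;             % data-term weight (fx^2 + fy^2 + fz^2)
    u = fx;  v = fy;  w = fz;              % initialize the flow field with grad f
    for it = 1:nIter
        u = u + dt * (mu * 6 * del2(u) - g .* (u - fx));   % 6*del2 approximates the 3D Laplacian
        v = v + dt * (mu * 6 * del2(v) - g .* (v - fy));
        w = w + dt * (mu * 6 * del2(w) - g .* (w - fz));
    end
end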

The gradient field calculation was followed by gradient flow tracking38, in which similar groups of voxels are identified as those that ‘flow’ toward the same sink. This generates a ‘mosaic’ image, in which each tile (the basin of one sink) contains only one object. The segmentation was finalized by adaptively thresholding each tile. We calculated the optimum threshold separating the two classes, foreground and background, so that their combined intra-class variance was minimal (Otsu’s method39). Voxels with values smaller than this threshold were considered background.
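
The per-tile threshold selection can be illustrated with the following MATLAB sketch of Otsu's method; the 256-bin histogram is an illustrative choice, and the pipeline may bin intensities differently.

% Minimal sketch of Otsu's method applied to one mosaic tile. 'tile' is a
% numeric array of voxel intensities; voxels below the returned threshold are
% considered background.
function t = otsuThreshold(tile)
    x       = double(tile(:));
    edges   = linspace(min(x), max(x), 257);
    counts  = histcounts(x, edges);
    p       = counts / sum(counts);                  % normalized histogram
    centers = (edges(1:end-1) + edges(2:end)) / 2;
    omega   = cumsum(p);                             % probability of the background class
    mu      = cumsum(p .* centers);                  % cumulative mean
    muT     = mu(end);                               % total mean
    sigmaB  = (muT * omega - mu).^2 ./ (omega .* (1 - omega) + eps);  % between-class variance
    [~, k]  = max(sigmaB);
    t = centers(k);                                  % threshold maximizing between-class variance
end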

We found that the high temporal resolution of the SiMView recordings enabled the initialization of the diffusion gradient vector field segmentation algorithms for a time point Ti with the solution that converged in Ti–1. This resulted in a more than 20× convergence speedup, an essential feature for obtaining high-quality shape information and making the algorithm practical for large four-dimensional data sets.

Computational framework for automated cell tracking. Determination of nuclei positions and sizes as well as nuclei tracking were performed by modeling each image as a mixture of Gaussians and sequentially estimating the mixture parameters across time. Our approach resembles the one proposed by Xiong et al.25. Approximating nuclear shape intensity by a Gaussian is a good trade-off between model complexity and shape information40. In particular, we model each image as:

I(x,y,z) \propto \sum_{k=1}^{K} \pi_k \, \mathcal{N}\big( x,y,z;\, \mu_k, \Sigma_k \big) \qquad (6)

where K is the number of objects in the image, µk is the center location of each nucleus, Σk is the covariance matrix (representing the shape of each nucleus as an ellipsoid) and πk is the relative intensity. Given each image as an input, we can estimate these parameters in equation (6) in a maximum-likelihood framework using the well-known expectation-maximization algorithm41,42.
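
One intensity-weighted EM pass for the mixture of equation (6) might look like the following MATLAB sketch. The interface (a voxel coordinate list X with intensities I) and the small regularizer in the covariance update are assumptions, and mvnpdf from the Statistics Toolbox is used for brevity.

% Minimal sketch of one intensity-weighted EM iteration for the Gaussian
% mixture of equation (6). X is an N-by-3 list of voxel coordinates, I an
% N-by-1 vector of voxel intensities; pik (1-by-K), muk (K-by-3) and
% Sk (3-by-3-by-K) are initialized from the previous time point.
function [pik, muk, Sk] = emStep(X, I, pik, muk, Sk)
    K = numel(pik);
    N = size(X, 1);
    R = zeros(N, K);
    for k = 1:K                                      % E-step: responsibilities
        R(:, k) = pik(k) * mvnpdf(X, muk(k, :), Sk(:, :, k));
    end
    R = R ./ (sum(R, 2) + eps);                      % normalize per voxel
    R = R .* I;                                      % weight by voxel intensity
    for k = 1:K                                      % M-step: update mixture parameters
        Nk = sum(R(:, k));
        pik(k) = Nk / sum(I);
        muk(k, :) = sum(R(:, k) .* X, 1) / Nk;
        D = X - muk(k, :);
        Sk(:, :, k) = (D' * (R(:, k) .* D)) / Nk + 1e-6 * eye(3);   % small regularizer
    end
end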

Because of the fine temporal resolution and excellent spatial coverage achieved by simultaneous multiview light-sheet microscopy, we used each solution from time Ti–1 as an initialization for time point Ti. Given that each Gaussian in an image derives from a Gaussian in the previous time point, we can directly recover tracking information. To handle cell divisions, we collected a set of examples with cells dividing and cells not dividing and trained a machine learning classifier based on local image features.

The tracking using Gaussian mixture models was implemented on a GPU using CUDA43,44, which resulted in a 100× speedup. The speedup allowed us to process nuclear positions and movements in the entire embryo with thousands of nuclei and millions of voxels in only 40 s per time point using a processing workstation with a single Tesla GPU (Nvidia Corporation).

Quantitative estimation of segmentation and tracking accuracy. Given the scale and complexity of the image data for Drosophila embryonic development, it is very difficult to generate a meaningful ground truth manually. Thus, we validated our results by the following manual inspection: we randomly selected time points and sets of cells within those time points to determine if segmentation and parent tracking assignments were correct. Because the tracking approach is sequential, selected time frames were at least three time points apart to avoid correlations in the accuracy estimation. For each selected object, a human expert answered two yes-or-no questions: whether the object was located appropriately and whether the parent assignment was correct. Given that only a subset of the overall results can be inspected, it is important to determine confidence intervals. Because the questions are binary, 95% confidence intervals can be obtained based on the binomial proportion confidence interval using the Wilson score45.
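
For reference, a minimal MATLAB sketch of the 95% Wilson score interval for k correct calls out of n inspected objects (z = 1.96) is:

% Minimal sketch of the 95% Wilson score interval for a binomial proportion.
function [lo, hi] = wilsonInterval(k, n)
    z = 1.96;
    p = k / n;
    denom  = 1 + z^2 / n;
    center = (p + z^2 / (2 * n)) / denom;
    half   = (z / denom) * sqrt(p * (1 - p) / n + z^2 / (4 * n^2));
    lo = center - half;
    hi = center + half;
end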

To obtain the accuracy rates presented in the results section, we subsampled ten different time points (three of them during the mitotic wave) and approximately 200 objects per time point, obtaining a total of 2,051 samples. Strictly speaking, these rates represent false positives: objects that were detected and tracked inaccurately. However, we can assume that false negatives are negligible given the sequential approach (every object at time point Ti derives from an object at time point Ti–1) and the fact that superimposing raw data and segmentation (Fig. 5 and Supplementary Videos 8 and 9) reveals only three nuclei that are not part of a segmented object. We note that if a nucleus was not localized appropriately (for example, owing to one ellipsoid covering two nuclei) but tracking corresponded to the same feature, we only counted a segmentation error to avoid double-counting. Finally, the accuracy of the cell division detection rate during the mitotic wave was estimated following the same methodology. We selected at random 268 dividing cells and asked a human expert to decide whether the assignments were correct. As a tolerance criterion, the cell division assignment was accepted as correct if it occurred within an interval of ±2 time points (approximately ±1 min) of the prediction made by the machine learning classifier.

36. Peng, H., Ruan, Z.C., Long, F.H., Simpson, J.H. & Myers, E.W. V3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets. Nat. Biotechnol. 28, 348–353 (2010).

37. Abramoff, M.D., Magalhaes, P.J. & Ram, S.J. Image processing with ImageJ. Biophot. Internat. 11, 36–42 (2004).

38. Li, G. et al. 3D cell nuclei segmentation based on gradient flow tracking. BMC Cell Biol. 8, 40 (2007).

39. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man. Cybern. 9, 62–66 (1979).

40. Gelas, A. et al. Variational level-set with Gaussian shape model for cell segmentation. in 2009 16th IEEE International Conference on Image Processing 1089–1092 (IEEE, 2009).

41. Bishop, C.M. Pattern Recognition and Machine Learning (Springer, 2007).

42. Dempster, A.P., Laird, N.M. & Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. [Ser B] 39, 1–38 (1977).

43. Kumar, N., Satoor, S. & Buck, I. Fast parallel expectation maximization for Gaussian mixture models on GPUs using CUDA. in 2009 11th IEEE International Conference on High Performance Computing and Communications 103–109 (IEEE, 2009).

44. Nickolls, J., Buck, I., Garland, M. & Skadron, K. Scalable parallel programming with CUDA. Queue 6, 40–53 (2008).

45. Wilson, E.B. Probable inference, the law of succession, and statistical inference. J. Am. Stat. Assoc. 22, 209–212 (1927).

