UCB 2012-02-28
Page 1: UCB 2012-02-28

(p)tychography aka scanning diffractive imaging

Page 2: UCB 2012-02-28

SEM / ptychography / microscopy

scanning sequences can be utilized and, in fact, there are some advantages to avoiding regular patterns ([?]).

We denote Q_i as an m² × n² "illumination matrix" that extracts a frame containing m × m pixels out of an image containing n × n pixels, and multiplies the frame by the illumination function w(r):

$$ w(\mathbf{r})\,\psi(\mathbf{r}+\mathbf{x}_i) = Q_i\,\psi = z_i(\mathbf{r}), \qquad Q_i = w(\mathbf{r})\,e^{\mathbf{x}_i\cdot\partial_{\mathbf{r}}}. $$

Here z_i is an intermediate variable describing individual frames that we introduce for convenience. We denote F as the two-dimensional Fourier transform operator with respect to r:

$$ \mathcal{F}f = \sum_{\mathbf{r}} e^{i\mathbf{q}\cdot\mathbf{r}} f(\mathbf{r}), $$

[Figure 1 schematic: illumination w, object ψ(r), frames z_i = Q_i ψ, scan positions x_i, diffraction data a_i(q) = |F z_i|]

Figure 1: Forward ptychographic problem: diffraction data a_i is related to the unknown object to reconstruct ψ by a = |FQψ|. The intermediate variable z_i describing individual frames is used in many iterative methods [?].

In the following, we concatenate indices q and i of a_i(q) and express

$$ a = \begin{pmatrix} a_1 \\ \vdots \\ a_k \end{pmatrix}, \quad Q = \begin{pmatrix} Q_1 \\ \vdots \\ Q_k \end{pmatrix}, \quad F = \begin{pmatrix} \mathcal{F} & & \\ & \ddots & \\ & & \mathcal{F} \end{pmatrix}, \quad z = \begin{pmatrix} z_1 \\ \vdots \\ z_k \end{pmatrix}, \qquad k = \kappa^2 \qquad (3) $$

and rewrite (Eq. 1) as

$$ |F z| = a, \qquad (4) $$

$$ z = Q\psi, \qquad (5) $$

referred to as a Fourier magnitude problem and an overlapping illumination problem. The ptychographic reconstruction problem consists of finding ψ knowing a and Q. Many iterative methods introduce an intermediate variable z and attempt to solve the two problems in Eqs. (4)-(5) using projection algorithms, iterative transform methods, or alternating direction methods [?].

In the following section we will describe the standard operators commonly used in the literature. In section 3 we will introduce an intermediate variable c_i, replacing Eq. (5) with c_i z_i = Q_i ψ, i = (1, . . . , k). This intermediate variable allows us to fix perturbations in the incident flux and increase the rate of convergence for large-scale problems.
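As a concrete illustration of the forward model above, here is a minimal NumPy sketch (not from the text; names such as forward_frames are illustrative): each Q_i crops an m × m patch of ψ at scan position x_i and multiplies it by the probe w, and the data are the Fourier magnitudes a_i = |F(Q_i ψ)|.

```python
import numpy as np

def forward_frames(psi, w, positions):
    """Simulate ptychographic data a_i = |F(w * psi_patch_i)|.

    psi       : (n, n) complex object transmission
    w         : (m, m) complex illumination (probe)
    positions : list of (row, col) corners x_i of each frame
    """
    m = w.shape[0]
    frames = []
    for (r0, c0) in positions:
        patch = psi[r0:r0 + m, c0:c0 + m]      # Q_i psi: extract an m x m frame
        z_i = w * patch                        # multiply by the illumination
        a_i = np.abs(np.fft.fft2(z_i))         # Fourier magnitude
        frames.append(a_i)
    return np.array(frames)

# Example: 64x64 object, 16x16 probe, raster scan with 50% overlap
n, m, step = 64, 16, 8
psi = np.exp(1j * np.random.rand(n, n))
w = np.outer(np.hanning(m), np.hanning(m)).astype(complex)
positions = [(r, c) for r in range(0, n - m + 1, step)
                    for c in range(0, n - m + 1, step)]
a = forward_frames(psi, w, positions)   # shape (k, m, m)
```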


F. Maia GPU/MPI reconstruction

70 nm probe

15 nm resolution

R. Celestre, A.D. Kilcoyne, Tolek T., A. Schirotzek, T. Warwick (ALS)

1 micron, E = 1 keV, 700 ms exposure

longer exposure should give 7 nm res

ptychography (5.3.2.1 STXM-ALS)

5.3.2.1

Page 3: UCB 2012-02-28

replace with array detector

Scanning Transmission Microscope (STXM) retrofit

Page 4: UCB 2012-02-28

“These two improvements should be implemented at every STXM at synchrotrons worldwide. Doing so will be revolutionary, allowing desktop computers to overcome x-ray optical limitations to reach resolutions below 10 nm”, H.N. Chapman. Science 2008

Pilatus 1M, frame rate 30 Hz

phase retrieval

fast detectors

• 9.0.1 • COSMIC • 5.3.2.1 • 11.0.2.1

+

Page 5: UCB 2012-02-28

coherent diffractive imaging

in standard user operation • Sample: bone

M. Dierolf et al., Nature 467 (2010) 436

highly resolving • voxel size (65 nm)³ • resolution in 3D ~100 nm

resolution in 2D ~120 nm

quantitative results • uncertainty within voxel is 0.04 e⁻/Å³ • significantly higher sensitivity for larger volumes, e.g., <0.002 e⁻/Å³ for 1 µm³

COSMIC @ ALS, August 2, 2011

ptychography + tomography

Page 6: UCB 2012-02-28


Email: [email protected], Web: http://people.epfl.ch/franz.pfeiffer

Ptychographic phase retrieval

W. Hoppe, Acta Cryst. A 25, 508 (1969).

R. Hegerl, W. Hoppe, Ber. Bunsen-Ges. Phys. Chem. 74, 1148 (1970).

P.D. Nellist et al., Nature 374, 630 (1995).

H.N. Chapman, Ultramicroscopy 66, 153 (1996).

Page 7: UCB 2012-02-28

A specimen with complex transmission ψ(r) is situated in the focal plane and scanned in that plane. At a given point in the scan the sample is displaced from the optical axis by a vector −x. (On the Stony Brook STXM the sample is usually scanned in the negative directions, so that the apparent motion of the probe across the sample is in the positive directions.) The x-ray wavefield immediately behind the specimen will then be given by a(r)ψ(r + x), and the intensity at the far-field microdiffraction plane can be written as [1]

$$ m(\mathbf{r}', \mathbf{x}) = \left| \int A(\mathbf{r}' - \mathbf{x}')\,\Psi(\mathbf{x}')\,\exp(2\pi i\,\mathbf{x}'\cdot\mathbf{x})\, d\mathbf{x}' \right|^2 . \qquad (1) $$

The intensity m(r′, x) for constant x (further referred to in this paper as an r′-plane of m) is a single microdiffraction pattern, recorded as a single frame of the CCD. Equation (1) shows that the microdiffraction wavefield is the Fourier transform of the complex transmission of the specimen, multiplied by a phase ramp, and then convolved with the pupil function. This equation reveals much about the imaging process and the role of the microdiffraction plane. Consider a specimen, such as a transmission grating, which has a discrete Fourier transform consisting of several diffraction orders. The incident convergent beam is diffracted by the object so that each diffraction order of the specimen [each non-zero Ψ(x′)] yields a pupil function A(r′) in the microdiffraction plane, centred at the frequency r′ = x′ and multiplied by the complex constant Ψ(x′). Each pupil function will be multiplied by the phase factor exp(2πix′·x), depending on the diffraction order and the position of the object, x. This "scanning" phase factor is a consequence of the shift theorem, which states that a linear phase ramp is introduced to the diffracted wavefield if the object is displaced from an arbitrarily-chosen origin. If, for example, a periodic object is shifted by one period then the phase of any diffracted order must change by 2π. Since each angle of diffraction of any arbitrary object is linearly related to a particular spatial frequency in the object, each angle requires a different object displacement for a 2π phase change. Therefore, for a given displacement each spatial frequency in the diffraction pattern receives a phase change which depends linearly on that frequency.

In the case of plane-wave illumination of the object plane [A(r′) = δ(r′)], the phase ramp across the diffraction pattern will have no measurable effect on the intensity of the pattern. However, with a convergent beam illuminating the specimen, the diffracted pupil functions may overlap in the microdiffraction plane. For the transmission grating, this requires that the fundamental period of the grating is large enough so that the orders are not separated by more than the "diameter" 2r′_ZP of the pupil, as demonstrated in Fig. 2(a). (That is, the bars of the grating can be resolved by the microscope.) In the regions of overlap there will be interference between the orders and the intensity will depend not only on the intensities of the diffraction orders but on the phase of the orders, the phase on the pupil function, and the "scanning" phase factor exp(2πix′·x).


illumination, object, intensity

Wigner deconvolution x-ray microscopy

Scanning Microscopy Vol. 11, 1997 (Pages 67-80) 0891-7035/97$5.00+.25
Scanning Microscopy International, Chicago (AMF O'Hare), IL 60666 USA

PHASE-RETRIEVAL X-RAY MICROSCOPY BY WIGNER-DISTRIBUTION DECONVOLUTION: SIGNAL PROCESSING

Henry N. Chapman

Department of Physics, SUNY at Stony Brook, Stony Brook, NY

Abstract

Phase and amplitude images have been reconstructed from data collected in a scanning transmission x-ray microscope by applying the method of Wigner-distribution deconvolution. This required collecting coherent microdiffraction patterns at each point of a two-dimensional scan of an object and then deconvolving the four-dimensional Wigner-distribution function of the lens from the data set. The process essentially analyses the interference which occurs in the microdiffraction plane and which modulates as the object is scanned. The image-processing steps required to deconvolve experimental data are described. These steps result in the reconstructions of diffraction-limited phased images, to a spatial-frequency cut-off of 1/45 nm⁻¹. The estimated accuracy of the images is 0.05 rad in phase and 10% in amplitude. Data were collected at an x-ray wavelength of 3.1 nm.

Key Words: X-ray microscopy, phase-retrieval, zone plates, deconvolution, x-ray interferometry.

*Present address and address for correspondence: Henry N. Chapman, Lawrence Livermore National Laboratory L-395, P.O. Box 808, Livermore, CA 94550
Telephone number: (925) 423 1580, FAX number: (925) 423 1488, E-mail: [email protected]

Introduction

Wigner-deconvolution phase-retrieval microscopy is a new technique for retrieving the phase and amplitude of transmission microscope images (Rodenburg and Bates, 1992; Bates and Rodenburg, 1989). This technique can be employed in a microscope of either the scanning or conventional geometry and allows the formation of superresolved images (Rodenburg and Bates, 1992; Nellist and Rodenburg, 1994). The phase-retrieval and superresolution characteristics of the technique have been demonstrated in scanning transmission microscopes that utilise visible light (McCallum and Rodenburg, 1992), electrons (Rodenburg et al., 1993; Nellist et al., 1995), and soft x-rays (Chapman, 1996). In a scanning microscope the method requires collecting a two-dimensional microdiffraction pattern (a coherent convergent beam diffraction pattern) at each point in a two-dimensional scan. The ability to retrieve the phase can be interpreted as a self-interferometric process, where two beams travelling in different directions from the objective lens are combined at the specimen and diffracted into a single element of a CCD (charge-coupled device) detector. The intensity modulation resulting from scanning the specimen gives the relative phase of the two diffracted orders. The deconvolution process separates the contributions of all possible pairs of interfering beams.

X-ray microscopes are in use or under development in a number of laboratories for imaging wet, approximately micrometre-thick biological specimens, and materials characterisation, at ~50 nm resolution (Kirz et al., 1995). Both transmission x-ray microscopes (TXMs) and scanning transmission x-ray microscopes (STXMs) exist; these are analogous to conventional transmission (CTEM) and scanning transmission (STEM) electron microscopes, respectively. Scanning transmission x-ray microscopes require a highly coherent incident beam, which necessitates the use of a high-brightness x-ray source such as an undulator at a synchrotron facility. All current high-resolution x-ray microscopes use zone plates for the probe- or image-forming objective. These are diffractive optical elements made up of concentric circular zones, and the numerical aperture, and hence the resolution, is limited by the smallest zone width that can be fabricated. Currently,


Fourier transform

pixel translation

Page 8: UCB 2012-02-28


2.2 Deconvolution of the data set

It is seen from Eqn. (3) that the data set is separable in terms of the instrument and the specimen. If A is well known then the pupil overlap function can be computed and deconvolved from M(r′, x′). By the convolution theorem this involves Fourier transforming to yield a product, so processing of the data requires the step

$$ M(\mathbf{r}, \mathbf{x}') \equiv \mathcal{F}^{-1}_{\mathbf{r}}\{ M(\mathbf{r}', \mathbf{x}') \} = W_a(\mathbf{r}, -\mathbf{x}')\, W_\psi(\mathbf{r}, \mathbf{x}'), \qquad (4) $$

where W_f is the Wigner distribution function (WDF) of f (or, more accurately, the ambiguity function of f) defined as [16]

$$ W_f(\mathbf{r}, \mathbf{x}') \equiv \int f^*(\mathbf{x})\, f(\mathbf{x} + \mathbf{r})\, \exp(-2\pi i\,\mathbf{x}\cdot\mathbf{x}')\, d\mathbf{x} = \int F(\mathbf{r}')\, F^*(\mathbf{r}' - \mathbf{x}')\, \exp(2\pi i\,\mathbf{r}\cdot\mathbf{r}')\, d\mathbf{r}'. \qquad (5) $$

Hence, M(r′, x) is actually a four-dimensional convolution of the WDF of the focal distribution with the WDF of the complex transmission of the object. Hence the name of this method, Wigner-distribution deconvolution.

Since the WDF, W_a, contains zeros and regions of low intensity, the deconvolution is best performed using a Wiener filter. That is, an estimate, W_ψ, of the WDF of the specimen is given by

$$ W_\psi(\mathbf{r}, \mathbf{x}') = \frac{ M(\mathbf{r}, \mathbf{x}')\, W_a^*(\mathbf{r}, -\mathbf{x}') }{ |W_a(\mathbf{r}, -\mathbf{x}')|^2 + \phi_a }, \qquad (6) $$

where φ_a is a small constant. An estimate of the specimen transmission can then be obtained by first Fourier transforming the WDF to give

$$ W_\psi(\mathbf{r}', \mathbf{x}') = \mathcal{F}_{\mathbf{r}}\{ W_\psi(\mathbf{r}, \mathbf{x}') \} = \Psi(\mathbf{r}')\, \Psi^*(\mathbf{r}' - \mathbf{x}'), \qquad (7) $$

then inverting the WDF to give the specimen's transform as follows:

$$ \Psi(\mathbf{x}') = W_\psi^*(0, -\mathbf{x}') \big/ \sqrt{ W_\psi(0, 0) }. \qquad (8) $$

The retrieved object, ψ(x), is then found by inverse Fourier transformation of Ψ.

The Wigner inversion given by Eqn. (8) extracts the plane at r′ = 0 from W_ψ. This recovers the amplitudes and phases from the interference between each frequency of the object with the zero-order, and the intensity of the zero order is given by W_ψ(0, 0). The highest frequency for which this interference can occur is |x′| = 2r′_ZP, which is twice the outer radius of the pupil. Thus
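The deconvolution pipeline above can be prototyped directly from Eqs. (5)-(6). Below is a minimal NumPy sketch (not from the paper): a discrete analogue of the ambiguity/Wigner function of Eq. (5) for a 1-D field with periodic boundaries, and the Wiener-filtered division of Eq. (6). The remaining steps (Eqs. (7)-(8): transform r to r′, extract the r′ = 0 plane, normalize by the zero-order intensity) follow the same pattern; exact sign and index conventions for −x′ depend on the FFT layout and are glossed over here.

```python
import numpy as np

def ambiguity(f):
    """Discrete analogue of Eq. (5): W_f(r, x') = sum_x f*(x) f(x + r) exp(-2*pi*i*x*x'/N).
    Returns an (N, N) array indexed by [r, x'] (periodic shifts via np.roll)."""
    N = f.size
    prods = np.array([np.conj(f) * np.roll(f, -r) for r in range(N)])  # row r: f*(x) f(x+r)
    return np.fft.fft(prods, axis=1)                                    # transform x -> x'

def wiener_deconvolve(M, Wa, phi_a=1e-3):
    """Wiener-filtered division of Eq. (6): estimate of W_psi given the transformed
    data M(r, x') and the probe's W_a evaluated at the matching (r, -x') arguments."""
    return M * np.conj(Wa) / (np.abs(Wa) ** 2 + phi_a)

# Toy usage: ambiguity function of a Gaussian probe
x = np.arange(64)
probe = np.exp(-0.5 * ((x - 32) / 6.0) ** 2).astype(complex)
Wa = ambiguity(probe)
```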


Wigner distribution function


FT


Fourier transform

FT Data unknown

solution

pixel translation

$$ M(\mathbf{r}, \mathbf{x}') = \left[ \mathcal{F}^{-1}_{\mathbf{r}'\to\mathbf{r}}\, \mathcal{F}^{-1}_{\mathbf{x}\to\mathbf{x}'} \right] m(\mathbf{r}', \mathbf{x}) = W_a(\mathbf{r}, -\mathbf{x}')\, W_\psi(\mathbf{r}, \mathbf{x}') $$

a linear problem illumination

Page 9: UCB 2012-02-28

WIGNER DISTRIBUTION FUNCTION

$$ W_\psi(r, q) = \left[\mathcal{F}_{dr\to q}\right] \psi(r + dr/2)\,\bar\psi(r - dr/2) $$

$$ W_\psi(r, dr) = \psi(r + dr/2)\,\bar\psi(r - dr/2) $$

ψ(r) wavefield

phase space quasi-probability distribution

ψ̄ψ

cyclic permutation: ψ(r + dr/2) ψ̄(r − dr/2)

lifting; note: rank 1

Wigner distribution function
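As a small illustration of this definition, the sketch below computes a discrete Wigner distribution of a 1-D wavefield (integer-shift convention, so the q axis is scaled by a factor of 2 relative to the half-shift form on the slide; names are illustrative, not from the slides). The product ψ(r+dr)ψ̄(r−dr) formed before the transform is exactly the rank-1 "lifting" noted above; a plane wave ends up concentrated on a single q line in phase space.

```python
import numpy as np

def discrete_wigner(psi):
    """Discrete Wigner distribution W(r, q) of a 1-D wavefield psi (periodic).

    Uses the integer-shift convention psi(r+d) * conj(psi(r-d)) transformed over d,
    so the frequency axis is scaled by 2 relative to the half-shift definition.
    Returns an (N, N) array indexed by [r, q]."""
    N = psi.size
    prods = np.array([np.roll(psi, -d) * np.conj(np.roll(psi, d)) for d in range(N)])
    return np.fft.fft(prods, axis=0).T   # transform d -> q, index [r, q]

# A plane wave occupies a single q line in phase space; a point source a single r line.
N = 128
x = np.arange(N)
plane_wave = np.exp(2j * np.pi * 10 * x / N)
W = discrete_wigner(plane_wave)          # |W| is concentrated at one q index for all r
```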

Page 10: UCB 2012-02-28

WIGNER DISTRIBUTION FUNCTION

phase space description of light

θ

$$ W_\psi(r, q) = \left[\mathcal{F}_{dr\to q}\right] \psi(r + dr/2)\,\bar\psi(r - dr/2) $$

$$ W_\psi(r, dr) = \psi(r + dr/2)\,\bar\psi(r - dr/2) $$

ψ(r) wavefield

phase space quasi-probability distribution

ψ̄ψ

cyclic permutation: ψ(r + dr/2) ψ̄(r − dr/2)

lifting; note:

$$ q = \frac{2\pi}{\lambda}\,\theta $$

ψ(r)

light source

r

q

phase spacepropagation

direction

rank 1

Page 11: UCB 2012-02-28

WIGNER DISTRIBUTION FUNCTION

[Phase-space cartoons in (r, q): point source, plane wave, propagation, lens, illumination, object, intensity measurement; phase-space volume = λ]

Page 12: UCB 2012-02-28

PROJECTION ALGORITHMS

Scalable augmented operators for ptychographic imaging

February 17, 2012

Abstract

Ptychography promises diffraction-limited resolution without the need for high-resolution lenses. To achieve high resolution one has to solve the phase problem for many partially overlapping frames. Here we introduce an augmented linear projection operator to increase the convergence rate of iterative methods for large scale problems. Numerical tests indicate that this operator enables higher rate and more robust convergence using standard algorithms as well as the ability to correct intensity fluctuations for small and large scale problems.

1 Introduction

An emerging imaging technique in X-ray science is to use a localized probe to collect multiple diffraction measurements of an unknown moving object. This technique, called ptychography, achieves higher resolution and extended depth of focus compared to lens-based methods [refs]. With the increased frame rate of modern x-ray detectors, ptychography promises to revolutionize x-ray imaging; however, in the absence of quasi-real-time analysis, the utility of this new technique is greatly reduced. As we describe in this paper, the convergence rate of iterative methods may be slow for large problems.

Here we will summarize the ptychographic problem following the notation of Yang et al. [??]. In a ptychography experiment, a two dimensional small beam with distribution w(r) of dimension m × m illuminates an unknown object of interest ψ(r + x). For simplicity we consider square matrices; generalization to non-square matrices can also be considered. One collects a sequence of k diffraction images a²_x(q) of dimension m × m as the position x of the object is rastered. Each frame a_x represents the magnitude of the discrete two dimensional Fourier transform F of w(r)ψ(r + x):

$$ a_x(\mathbf{q}) = \left| \mathcal{F}\, w(\mathbf{r})\,\psi(\mathbf{r}+\mathbf{x}) \right|, \qquad \mathbf{r} = r\,\mathbf{m}, \quad \mathbf{q} = \frac{2\pi}{r\,m}\,\mathbf{m} \qquad (1) $$

$$ \mathcal{F}f = \sum_{\mathbf{r}} e^{i\mathbf{q}\cdot\mathbf{r}} f(\mathbf{r}), \qquad \mathbf{m} = (\mu, \nu), \quad \mu, \nu = (0, \ldots, m-1), $$

with r a lengthscale, and the sum over r taken over all the m × m indices of r. As x is rastered around, r + x spans a grid of dimension n × n, n > m. We denote Q_x

as an m² × n² "illumination matrix" that extracts a frame containing m × m pixels out of an image ψ containing n × n pixels, and multiplies the frame by the illumination function w(r):

$$ w(\mathbf{r})\,\psi(\mathbf{r}+\mathbf{x}) = Q_x(\mathbf{r})\,\psi = z_x(\mathbf{r}), \qquad Q_x(\mathbf{r}) = w(\mathbf{r})\,e^{\mathbf{x}\cdot\partial_{\mathbf{r}}}. $$

Here z_x is an intermediate variable describing individual frames that we introduce for convenience.


[Figure 1 schematic repeated: diffraction data a = |FQψ|, frames z_i, illumination w, operators Q_i and F]

In the following, we introduce k sequences of various matrices as follows

$$ a = \begin{pmatrix} a_1 \\ \vdots \\ a_k \end{pmatrix}, \quad Q = \begin{pmatrix} Q_1 \\ \vdots \\ Q_k \end{pmatrix}, \quad z = \begin{pmatrix} z_1 \\ \vdots \\ z_k \end{pmatrix}, \quad F = \begin{pmatrix} \mathcal{F} & & \\ & \ddots & \\ & & \mathcal{F} \end{pmatrix} \qquad (2) $$

and rewrite (Eq. 1) as a = |FQψ|, or using the intermediate variable z as:

$$ a = |F z|, \qquad (3) $$

$$ z = Q\psi, \qquad (4) $$

referred to as a Fourier magnitude problem and an overlapping illumination problem, respectively. The ptychographic reconstruction problem consists of finding ψ knowing a and Q. Many iterative methods introduce an intermediate variable z, and attempt to solve the two problems in Eqs. (3)-(4) using projection algorithms, iterative transform methods, or alternating direction methods [?].

In the following section we will describe the standard operators commonly used in the literature. In section 3 we will introduce an intermediate variable c_i, replacing Eq. (4) with c_i z_i = Q_i ψ, i = (1, . . . , k). The linear projection operator corresponding to the augmented problem is computationally more intensive than for (Eq. 8), and speed may not always improve. However, the benefits of introducing this augmented problem are the following:

• Intensity fluctuations introduced by instabilities in the storage ring, optics, etc., are given by the coefficients c_i and their effect can be removed (see Fig. 4).

• Accelerated convergence per iteration (Fig. 2). A heuristic interpretation is that long range phase fluctuations are poorly constrained by standard projection operators, resulting in a degraded convergence rate for large scale problems. See also (Fig. 3) where the step size was increased.

• Parallelization strategies divide the problem into subreconstruction regions and reduce communications between subreconstructions. Constant phase factors multiplying subre-


Fourier magnitude

Split into frames

unknown



frames / probe translate

feasibility problem

illumination

Page 13: UCB 2012-02-28

PROJECTION ALGORITHMS

constructions may evolve independently, and solving (Eq. 13) is required when merging subreconstructions.

2 Standard Projection algorithms

The Fourier magnitude projection P_F is used to ensure that the frames satisfy the measurements in Eq. (3). P_F can be expressed as:

$$ P_F z = F^* \left( \frac{F z}{|F z|} \cdot a \right), \qquad (5) $$

where F* is the inverse Fourier transform operator. P_F is a projection in the sense that

$$ P_F z = \arg\min_{\tilde z} \| \tilde z - z \|, \quad \text{subject to } |F \tilde z| = a, \qquad (6) $$

where ‖·‖ denotes the Euclidean norm. The overlap projection operator P_Q is used to enforce the known set of illuminations Q:

$$ P_Q z = Q\,\psi_{\min}, \quad \text{where } \psi_{\min} = \arg\min_\psi \| z - Q\psi \|^2, \qquad (7) $$

where z, Q are the set of frames and set of illuminations respectively. The running estimate of the unknown solution ψ is obtained by solving the least squares problem in Eq. (7):

$$ \psi_{\min} = (Q^* Q)^{-1} Q^* z, \qquad (8) $$

where Q* is the operator that multiplies by the conjugate of the probe w and merges all the frames z_i onto the image ψ. Q is the operator which splits an image into frames and multiplies each frame by a probe. (Q*Q)⁻¹ is a normalization factor. The linear projection operator P_Q can be expressed as:

$$ P_Q = Q (Q^* Q)^{-1} Q^*, \qquad (9) $$

In the alternating projection algorithm, the approximations to the solutions of (7) and (6) are updated by:

$$ z^{(\ell+1)} = [P_Q P_F]\, z^{(\ell)}, \qquad \psi^{(\ell+1)} = (Q^*Q)^{-1} Q^* z^{(\ell+1)}. $$

Here ψ^(ℓ), z^(ℓ) are the running estimates of ψ, z = Qψ. A number of different algorithms have been proposed; a few examples are given in Tab. 1, with β ∈ [0, 1] a relaxation parameter.

projection algorithm: updating formula
HIO: z^(ℓ+1) = [P_Q P_F + (I − P_Q)(I − β P_F)] z^(ℓ)
RAAR: z^(ℓ+1) = [2β P_Q P_F + (1 − 2β) P_F + β(P_Q − I)] z^(ℓ)

Table 1: Popular fixed-point algorithms used in phase retrieval
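Below is a minimal NumPy sketch of these operators and of the alternating-projection update, reusing the frame/position conventions of the forward-model sketch earlier in this document (function names are illustrative, not from the text). Because Q*Q is diagonal here, the normalization (Q*Q)⁻¹ reduces to a pixel-wise division by the accumulated |w|².

```python
import numpy as np

def P_F(z, a):
    """Fourier magnitude projection (Eq. 5): keep the phase of Fz, impose |Fz| = a."""
    Z = np.fft.fft2(z, axes=(-2, -1))
    return np.fft.ifft2(a * np.exp(1j * np.angle(Z)), axes=(-2, -1))

def Q_op(psi, w, positions):
    """Q: split the image psi into frames and multiply each frame by the probe w."""
    m = w.shape[0]
    return np.array([w * psi[r:r + m, c:c + m] for (r, c) in positions])

def Q_star(z, w, shape, positions):
    """Q*: multiply each frame by conj(w) and merge all frames onto the image grid."""
    m = w.shape[0]
    out = np.zeros(shape, complex)
    for zi, (r, c) in zip(z, positions):
        out[r:r + m, c:c + m] += np.conj(w) * zi
    return out

def overlap_norm(w, shape, positions):
    """Diagonal of Q*Q: accumulated |w|^2 at every object pixel (normalization)."""
    m = w.shape[0]
    den = np.zeros(shape)
    for (r, c) in positions:
        den[r:r + m, c:c + m] += np.abs(w) ** 2
    return den

def P_Q(z, w, shape, positions, eps=1e-8):
    """Overlap projection (Eqs. 7-9): z -> Q (Q*Q)^{-1} Q* z; also returns psi_min."""
    psi_min = Q_star(z, w, shape, positions) / (overlap_norm(w, shape, positions) + eps)
    return Q_op(psi_min, w, positions), psi_min

def alternating_projections(a, w, shape, positions, n_iter=100):
    """z <- P_Q P_F z, psi <- (Q*Q)^{-1} Q* z, starting from a random object guess."""
    psi = np.random.rand(*shape) * np.exp(2j * np.pi * np.random.rand(*shape))
    z = Q_op(psi, w, positions)
    for _ in range(n_iter):
        z, psi = P_Q(P_F(z, a), w, shape, positions)
    return psi
```

The HIO and RAAR updates of Table 1 are linear combinations of the same two projectors and can be obtained by replacing the single line inside the loop.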

The error metrics ε_F, ε_Q used to monitor progress are:

$$ \varepsilon_F\!\left(z^{(\ell)}\right) = \left\| [P_F - I]\, z^{(\ell)} \right\|, \qquad (10) $$

$$ \varepsilon_Q\!\left(z^{(\ell)}\right) = \left\| [P_Q - I]\, z^{(\ell)} \right\|, \qquad (11) $$




Fourier magnitude

Reduce frames

ADM for Phase Retrieval

Algorithm: Formula
ER: x_{k+1} = P_S P_M (x_k)
BIO: x_{k+1} = (P_S P_M + I − P_M)(x_k)
HIO: x_{k+1} = ((1 + β) P_S P_M + I − P_S − β P_M)(x_k)
HPR: x_{k+1} = ((1 + β) P_{S+} P_M + I − P_{S+} − β P_M)(x_k)
RAAR: x_{k+1} = (2β P_{S+} P_M + βI − β P_{S+} + (1 − 2β) P_M)(x_k)

Table 1. A list of projection algorithms for phase retrieval.

In practice, the signal x(r) may be nonnegative. As a result, we can modify (1) to define

$$ S_+ := \{\, x(r) \mid x(r) \ge 0 \text{ and } x(r) = 0 \text{ if } r \notin D \,\}, \qquad (6) $$

whose projection operator is denoted by P_{S+}. The counterpart of the HIO method for S_+ and M is the hybrid projection reflection (HPR) algorithm [4]. The relaxed averaged alternating reflection method (RAAR) [5] is a linear combination of the HIO method (with β = 1) and the projection P_M. These algorithms are summarized in Table 1.

The classical phase retrieval problem can also be formulated as a constrained minimization problem:

$$ \min\ \varepsilon(x) := \| b - P_M x \|^2, \quad \text{subject to: } P_S x = x. \qquad (7) $$

It is not difficult to show that the gradient of the objective function ε(x) in (7) is

$$ \nabla \varepsilon = 2(P_M - I)x. $$

Using this expression, one can solve (7) by using a projected gradient algorithm described by the following updating formula

$$ x_{k+1} = x_k - 2\gamma\, P_S (P_M - I) x_k, \qquad (8) $$

where γ is a step length along the projected gradient P_S(P_M − I)x_k. It is easy to see that setting γ to 1/2 yields exactly the alternating projection or ER algorithm. Other connections between the projection algorithms and constrained minimization approaches are referred to [22].

3. The ADM methods

Consider the general feasibility problem

$$ \text{find } x \in \mathcal{X} \cap \mathcal{Y}, \qquad (9) $$

where X and Y are two given closed sets. We assume that the projectors P_X(x) and P_Y(x) are well defined. By introducing a new variable y such that x = y, problem (9) is equivalent to

$$ \text{find } x \text{ and } y \text{ such that } x = y,\ x \in \mathcal{X} \text{ and } y \in \mathcal{Y}. \qquad (10) $$

We denote by λ ∈ Rⁿ the Lagrangian multiplier of the equation x = y and define the augmented Lagrangian function of (10), without considering the constraints x ∈ X and y ∈ Y, as

$$ \mathcal{L}(x, y, \lambda) := \lambda^\top (x - y) + \tfrac{1}{2} \| x - y \|^2. $$
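Minimizing this augmented Lagrangian alternately over x and y and then taking a multiplier ascent step gives the basic ADM iteration for the feasibility problem (9). The sketch below is a generic set-feasibility ADM with the penalty parameter fixed at 1, not code from the paper; proj_X and proj_Y stand for whatever projectors (e.g., Fourier-magnitude and support or overlap projections) the application supplies.

```python
import numpy as np

def adm_feasibility(x0, proj_X, proj_Y, n_iter=100):
    """Alternating direction method for: find x in X ∩ Y, via the x = y splitting.

    Each sweep minimizes L(x, y, lam) over x (with y, lam fixed), then over y,
    then updates the multiplier lam."""
    x = x0.copy()
    y = x0.copy()
    lam = np.zeros_like(x0)
    for _ in range(n_iter):
        x = proj_X(y - lam)        # argmin over x in X: reduces to a projection
        y = proj_Y(x + lam)        # argmin over y in Y: reduces to a projection
        lam = lam + (x - y)        # multiplier (dual) ascent step
    return x, y
```

With proj_X the Fourier-magnitude projector and proj_Y the support (or overlap) projector, this reproduces the kind of ADM iteration the abstract compares against HIO and HPR.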

merge, normalize, split



Q*Q

Alternating Direction Methods for Classical and Ptychographic Phase Retrieval

Zaiwen Wen¹, Chao Yang², Xin Liu³, Stefano Marchesini
¹Department of Mathematics and Institute of Natural Sciences, Shanghai Jiaotong University, Shanghai, 200240, CHINA
²Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
³Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100080, CHINA

E-mail: [email protected], [email protected] and [email protected]

Abstract. In this paper, we apply the widely used augmented Lagrangian alternating direction method (ADM) for solving both the classical and ptychographic phase retrieval problems. Although the sequences produced by the hybrid input-output and hybrid projection algorithms can be generated from the ADM method on the classical phase retrieval problem, they usually perform quite differently in practice and the latter can often be much less sensitive to the choice of the relaxation parameters. Similar behavior can also be observed on the ptychographic phase retrieval problem. Moreover, the ADM method can be competitive with the nonlinear conjugate gradient and Newton's methods on difficult instances in terms of both reconstruction quality and computational efficiency.

1. Introduction

Phase retrieval is a challenging inverse problem arising from a number of scientific applications such as X-ray diffraction microscopy, astronomical imaging and optics. It attempts to estimate a signal (or image) from the measurements of the magnitude (modulus) of its Fourier transform. Since the phase information, which usually encodes a lot of the structural content of the signal, is unavailable, there is no easy way to distinguish the true solution from other incorrect solutions that share the same Fourier magnitude with that of the true solution. However, by seeking additional a priori information that is consistent with the unknown signal and measurements and designing suitable numerical techniques for reconstruction, phase retrieval has been quite successful in many areas, and continues to be attractive, fueled by new requirements and imaging capabilities.

One commonly used prior information is the support of the signal, that is, the object is typically constrained within a given area or support volume. By formulating the problem as finding the intersection of the modulus and support sets, Gerchberg and Saxton [1] devised the error reduction (ER) algorithm which projects the approximate solution into these constraints in an alternate fashion. One of the most notable extensions is the hybrid input-output (HIO) algorithm developed by Fienup [2]. Bauschke, Combettes and Luke established the connections between the ER and HIO

Fourier amplitude projection

Page 14: UCB 2012-02-28

ROBUST RECONSTRUCTION TECHNIQUE

[Convergence plot: RAAR, 140 iterations; error metrics |[P_F − I] x|, |[P_O − I] P_F x|, |P_O x − x_sol| vs. iterations, decreasing from ~10⁰ to ~10⁻¹⁰]

Diffractive imaging / ptychography

reaches double precision

Page 15: UCB 2012-02-28

method / CPU / Matlab+GPU / c++/cuda/mpi / remarks

phase retrieval: x x x x [arXiv:1105.5628]

probe retrieval: x x x

beamstop: no good solution yet; x x regularization/high pass filter?

detector binning: x recovered probe larger than field of view

intensity fluctuations: x x partial exact solution, accelerated convergence

position error: fit/correct x works ok, we know how to do better

step size/ccd distance: trial/error can someone compute gradient/matvec

vibrations: fit/deconvolve x partial works ok, in the fit vibrations are averaged

incoherent background: simulations only unknown offset

background: numerical tests needed

noise: more numerical tests needed x x x

compressive...: more numerical tests needed

POTENTIAL PROBLEMS

detector dynamic range is an issue; we can't fix it by numerical methods

Page 16: UCB 2012-02-28


Email: [email protected], Web: http://people.epfl.ch/franz.pfeiffer

Simultaneous reconstruction of probe & specimen

Current object guess Illumination function

Idea: Nested loop on O(r) & P(r) !

P. Thibault, M. Dierolf, A. Menzel, C. David, O. Bunk, and F. Pfeiffer, Science 321, 379-382 (2008)

Email: [email protected], Web: http://people.epfl.ch/franz.pfeiffer

A test case: far-field phase retrieval with laser light

M. Dierolf et al., Europhysics News 39, 22 (2008)

Page 17: UCB 2012-02-28


Ptychography: Robust Phase Retrieval Method, but what about Experimental Realities?

1.) Vibrations

2.) Unknown Illumination Function

Low Frequency (<100Hz)

High Frequency (>100Hz)

3.) Noise and Missing Information

Andre Schirotzek

Vibrations/coherence/position errors

fit position errors

fit speed

/deconvolve

Taylor expansion

Taylor expansion

cross-correlation maximization for long term drifts

Page 18: UCB 2012-02-28


…Or: Expressed in an Equation

$$ \text{Image} = \Big\langle\, \big| \mathrm{FT}_x\{ O(x)\, P(x - x_0 + \Delta x) \} \big|^2 \,\Big\rangle_\sigma $$

Taylor

$$ \approx \Big\langle\, \big| \widehat{OP} + \Delta x \cdot \widehat{O\,\partial_x P} + \tfrac{\Delta x^2}{2} \cdot \widehat{O\,\partial_{xx} P} \big|^2 \,\Big\rangle_\sigma $$

$$ \approx |\widehat{OP}|^2 + 2\langle\Delta x\rangle_\sigma \cdot \mathrm{Re}\!\left( \widehat{OP}\cdot \overline{\widehat{O\,\partial_x P}} \right) + \langle\Delta x^2\rangle_\sigma \cdot \left( |\widehat{O\,\partial_x P}|^2 + \mathrm{Re}\!\left( \widehat{OP}\cdot \overline{\widehat{O\,\partial_{xx} P}} \right) \right) $$

$$ = I_0 + \langle\Delta x\rangle \cdot I_x + \left( \langle\Delta x\rangle^2 + \sigma^2 \right) \cdot I_{xx} $$

- Shift probe by Taylor expansion

- Find Shift and Vibration Parameter through Least Squares Method

- Find Illumination iteratively (just like Object)
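One way to carry out that least-squares step on the model I ≈ I0 + ⟨Δx⟩·Ix + (⟨Δx⟩² + σ²)·Ixx is a plain linear fit for the two coefficients, from which Δx and σ follow. This is an illustrative one-dimensional sketch under the assumption that I0, Ix, Ixx have already been computed from the current object and probe estimates; the function name is hypothetical.

```python
import numpy as np

def fit_shift_and_vibration(I_meas, I0, Ix, Ixx):
    """Least-squares fit of the Taylor-expanded intensity model
        I ≈ I0 + c1 * Ix + c2 * Ixx,  with c1 = <dx>, c2 = <dx>^2 + sigma^2.
    I_meas, I0, Ix, Ixx are images of the same shape (flattened internally)."""
    A = np.stack([Ix.ravel(), Ixx.ravel()], axis=1)       # design matrix
    b = (I_meas - I0).ravel()
    (c1, c2), *_ = np.linalg.lstsq(A, b, rcond=None)
    dx = c1
    sigma2 = max(c2 - c1 ** 2, 0.0)                        # vibration variance
    return dx, np.sqrt(sigma2)
```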

Bottom line: Δx, σ

Andre Schirotzek,

Conclusions

- We can retrieve unknown vibrations ~ 70nm (note: flying scan same treatment, 5ms exposure <-> 50nm)

- We can retrieve unknown probe positions = +/- 40nm ∆x

- We can retrieve unknown illumination function

- Included in the simulations: photon shot noise, camera read-out noise, missing information (beam stop)

- Object retrieval / Resolution: Work in Progress…


Vibrations/coherence/position errors

Page 19: UCB 2012-02-28

no fix I0 / fix I0 every iteration

wrong I0 / correct I0

I0 fluctuations, accelerating convergence

$$ \min_{A_0,\ \mathrm{sample}} \sum_i \big\| A_{0i}\, \mathrm{probe}_i \cdot \mathrm{sample} - \mathrm{frames}_i \big\|^2 $$

e.g. 3 frames with partial overlap: compare frames, patch together

wrong I0

correct/fix I0: faster even for known I0
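For fixed probe and sample estimates, the per-frame scale factors A0_i in the minimization above have a closed-form least-squares solution, which is essentially how the coefficients c_i of the augmented problem can be fixed each iteration. A minimal sketch (illustrative names; frames treated as real non-negative intensity images):

```python
import numpy as np

def fit_frame_scales(model_frames, data_frames):
    """Closed-form per-frame scale factors A0_i minimizing
    sum_i || A0_i * model_i - data_i ||^2 for fixed model frames
    (e.g. |F(probe * sample_patch)|^2 from the current estimate)."""
    m = model_frames.reshape(len(model_frames), -1)
    d = data_frames.reshape(len(data_frames), -1)
    return np.sum(m * d, axis=1) / np.sum(m * m, axis=1)
```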

Page 20: UCB 2012-02-28

nanosurveyor network

Microscope - under construction

1000 frame/sec CCD - developed at LBNL

High performance computing - use of NERSC infrastructure

10 Gbps

Page 21: UCB 2012-02-28

Higher level parallelization

• To be able to process data in real time (200Hz) we need to use multiple GPUs.

Split → GPU 1 / GPU 2 (phase independently) → Combine

estimated 125 GPUs needed to keep up with nanosurveyor
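The combine step has to reconcile the arbitrary constant phase factor that each independently phased subreconstruction picks up (the Eq. (13) merging mentioned earlier is not shown in the excerpt). Below is a minimal sketch of aligning two subreconstructions using their overlap region, with illustrative names:

```python
import numpy as np

def merge_subreconstructions(psi_a, psi_b, overlap_a, overlap_b):
    """Combine two independently phased subreconstructions.

    Each subreconstruction carries an arbitrary constant phase factor, so before
    stitching we estimate the relative phase from the overlap region and rotate
    psi_b onto psi_a.

    overlap_a / overlap_b : views of the same physical region in each result.
    """
    # constant phase that best aligns the overlaps in the least-squares sense
    phase = np.angle(np.vdot(overlap_b, overlap_a))
    psi_b_aligned = psi_b * np.exp(1j * phase)
    return psi_a, psi_b_aligned   # ready to be stitched onto the full grid
```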

F. Maia

DEALING WITH DATA VOLUME

Page 22: UCB 2012-02-28

Ptychography

• ongoing investment by all synchrotrons

• massive data rates (1 TB/h)

• interesting computational problem (no proof of convergence, but numerical tests suggest otherwise... and then there is phaselift)

• problems:

- vibrations, camera distance, orientation, broadband illumination, partial coherence, data volume...

- 3D reconstruction using multi-slice propagation, denoising strategies?

Page 23: UCB 2012-02-28

electron microscopy

Figure 2. (A) Scanning Transmission Electron Microscopy image of a 5TBA monolayer island deposited on a SiN membrane. Scale bar: 500 nm. The yellow arrows indicate the orientation of the (0[?]1) crystallographic direction of ordered domains. Green circles mark polycrystalline areas. (B) Diffraction patterns from the area marked by the blue rectangle are shown in A. Scale bar: [?] nm⁻¹. (C) AFM cross-section across a gap between two 5TBA islands. (D) Simulated kinematic diffraction pattern based on the crystalline structure shown in E. (E) Top view (upper) and side view (lower) of 3D rendering of the proposed crystalline structure of the molecular film.


Scanning Transmission Electron Microscopy image of a 5TBA monolayer island deposited on a SiN membrane. Scale bar : 500nm.

Virginia Altoe1, Florent Martin2,3, Allard Katan2, Miquel Salmeron1,2,3* and Shaul Aloni1

