Fast fine-pixel aerial image calculation in partially coherent imaging by matrix representation of modified Hopkins equation

Kenji Yamazoe

231 Cory Hall, Department of Electrical Engineering and Computer Science, University of California, Berkeley, California 94720-1770, USA ([email protected])

On leave from Canon Inc., 23-10, Kiyohara-Kogyo-danchi, Utsunomiya-shi, Tochigi-ken, 321-3298, Japan

Received 19 October 2009; revised 25 May 2010; accepted 14 June 2010; posted 15 June 2010 (Doc. ID 118656); published 6 July 2010

A fast computation algorithm for fine-pixel aerial images is presented that modifies the transmission cross coefficient approach of Hopkins as a product of two matrices. The spatial frequency of the image is calculated by the sum of diagonal and off-diagonal elements of the matrix. Let N, NF, and M be the number of point sources, the sampling number for the fast Fourier transform, and the sampling number in the spatial frequency domain ranging over twice the pupil size, respectively. The calculation time of this method is proportional to BN[(M − 1)/2]^4, while that of a conventional source integration method is 2ANNF log2 NF, where A and B are constants and generally B < A. If NF is sufficiently greater than M, or M is small enough (the fine-pixel condition), this method runs faster than the source integration method. If the coherence factor is 0.9 and M ≤ 55, this method runs faster than the source integration even under the Nyquist sampling condition. © 2010 Optical Society of America

OCIS codes: 110.4980, 110.5220, 110.2990.

1. Introduction

Fast aerial image computation has been a major research field dating back to Hopkins' formulation of partially coherent imaging with the introduction of the transmission cross coefficient (TCC) [1,2]. Because direct computation of the Hopkins equation is computationally expensive, many image computation approaches have been developed and used for fast aerial image calculation. When the mask diffraction efficiency is independent of the illumination angle, a convenient aerial image calculation is the interchange of the diffraction angle and source integrals in the Hopkins equation. Source integration, or the so-called Abbe method, is based on this concept, which is compatible with the fast Fourier transform (FFT) algorithm [3,4]. If the number of point sources in the partially coherent illumination is N, the source integration requires N FFTs. To reduce the FFT repetition, high-accuracy aerial image approximation by eigenfunction truncation has been introduced. There are varieties of eigenfunction approaches in partially coherent imaging [5–11]. Currently, TCC decomposition, or the so-called sum of coherent systems [10], is widely used in lithography simulation. Although one can obtain the aerial image with far fewer than N FFTs by TCC decomposition [10,11], it is an approximate image. To obtain the complete aerial image, even the TCC decomposition requires N FFTs because the TCC has N eigenfunctions for a complete aerial image [11].

The aerial image simulation time is mostly occupied by the FFT repetition. Here, Kintner has shown an algorithm that requires only one FFT [12]. He calculated the aerial image as the Fourier transform of the spatial frequency of the aerial image, obtained from a complicated double integration of the object spectrum and the TCC. Since TCC computation itself involves double integration, computation of the spatial frequency of the aerial image involves quadruple integration. Therefore, although this method requires only one FFT, it is not suitable for fast aerial image calculation.

In this paper, a fast fine-pixel aerial image calculation algorithm suitable for computation is presented by modifying the Kintner method. It is as accurate as source integration. The time-consuming part of the Kintner method, which is the quadruple integration, is replaced by a diagonal sum of the matrix Z that contains the source, mask, and pupil effects [13]. The basic concept is shown in Section 2, where the algorithm is explained with an example. In addition, a fast simulation technique is also introduced. In Section 3, numerical simulation examples are shown. In Section 4, an image averaging method is introduced as an application of this method. The preliminary results of this paper were reported in [14].

2. Algorithm

A. Model

In this paper, a simplified optical system is assumed that consists of a mutually incoherent light source, a condenser lens for Koehler illumination, an object, projection optics, and an image plane. The coordinate system on the image plane is denoted by (x, y), and (f, g) is used for the pupil plane of the projection optics, which is normalized by the pupil radius so that √(f² + g²) = 1 corresponds to the pupil edge. By considering the Nyquist frequency, the range of f and g is from −2 to 2. For discrete computation, let M be the sampling number in the pupil plane. Thus, f1 = g1 = −2, fM = gM = 2, and Δf = Δg = 4/(M − 1). Throughout this paper, we assume M is an odd number. In the computation, the grid size in the pupil is the same as that in the light source.
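As a concrete illustration of this grid, here is a minimal NumPy sketch (Python is used throughout these sketches for illustration; the paper itself reports a MATLAB implementation in Section 3). The helper name `pupil_grid` is illustrative, not a definition from the paper.

```python
import numpy as np

def pupil_grid(M):
    """Frequency grid f, g in [-2, 2] with M samples per axis (M odd),
    normalized so that sqrt(f^2 + g^2) = 1 is the pupil edge."""
    assert M % 2 == 1, "the paper assumes an odd sampling number M"
    f = np.linspace(-2.0, 2.0, M)           # f1 = -2, fM = 2, df = 4/(M - 1)
    g = np.linspace(-2.0, 2.0, M)
    F, G = np.meshgrid(f, g, indexing="ij")
    P = (F**2 + G**2 <= 1.0).astype(float)  # aberration-free circular pupil
    return f, g, P

f, g, P = pupil_grid(7)
print(f)         # [-2. -1.333... -0.666... 0. 0.666... 1.333... 2.]
print(P.sum())   # number of grid points inside the pupil
```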

The aerial image is written in a matrix form as [13]

I(x, y) = \langle \phi | E | \phi \rangle,   (1)

where |\phi\rangle is a column vector representing plane waves, and the matrix E is obtained by shifting the pupil function as in the Hopkins TCC approach. Here, by modifying the TCC approach, the aerial image is also written as [13]

I(x, y) = \langle \phi | Z | \phi \rangle,   (2)

where the matrix Z is obtained by shifting the object spectrum instead of shifting the pupil function. Hereafter, we calculate the aerial image by Eq. (2); the reason is explained in Subsection 2.E.
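To make the matrix form concrete, the following is a minimal sketch that evaluates Eq. (2) directly for a one-dimensional case; the stand-in Hermitian Z and the helper name `image_direct` are illustrative, not quantities defined in the paper.

```python
import numpy as np

def image_direct(Z, freqs, xs):
    """Direct evaluation of Eq. (2)/(5): I(x) = <phi(x)| Z |phi(x)> at each x.
    The diagonal-sum method described below avoids repeating this at every x."""
    I = np.empty(len(xs))
    for n, x in enumerate(xs):
        phi = np.exp(-1j * 2 * np.pi * freqs * x)   # plane-wave vector, cf. Eq. (4)
        I[n] = np.real(phi.conj() @ Z @ phi)        # real because Z is Hermitian
    return I

# Stand-in Hermitian Z on the M = 7 frequency grid of Subsection 2.B:
M = 7
freqs = np.linspace(-2.0, 2.0, M)
rng = np.random.default_rng(0)
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
Z = A.conj().T @ A
print(image_direct(Z, freqs, xs=np.linspace(-1.0, 1.0, 5)))
```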

B. One-Dimensional Imaging

Assume a one-dimensional source and pupil. When we set M = 7, the discrete pupil coordinates are

\begin{pmatrix} f_1 & f_2 & f_3 & f_4 & f_5 & f_6 & f_7 \end{pmatrix} = \begin{pmatrix} -2 & -\frac{4}{3} & -\frac{2}{3} & 0 & \frac{2}{3} & \frac{4}{3} & 2 \end{pmatrix}.   (3)

A vector representing one-dimensional plane waves is

|\phi^{1D}\rangle = \begin{pmatrix} \exp(-i 2\pi f_1 x) & \exp(-i 2\pi f_2 x) & \cdots & \exp(-i 2\pi f_7 x) \end{pmatrix}^T,   (4)

where T represents the transpose of a matrix. Then, the one-dimensional aerial image is written as

I^{1D}(x) = \langle \phi^{1D} | Z^{1D} | \phi^{1D} \rangle,   (5)

where Z1D is a 7 × 7 matrix:

Z^{1D} = \begin{pmatrix} Z_{11} & Z_{12} & \cdots & Z_{17} \\ Z_{21} & Z_{22} & & \vdots \\ \vdots & & \ddots & \\ Z_{71} & Z_{72} & \cdots & Z_{77} \end{pmatrix}.   (6)

On the other hand, the Fourier series of the aerial image is

I^{1D}(x) = \sum_{k=-3}^{3} C_k \exp(-i 2\pi k \Delta f\, x),   (7)

where Ck is a Fourier coefficient of the aerial image. When |k| > 3, Ck is 0, because the spatial frequency of the aerial image is 0 if f is beyond the Nyquist frequency, i.e., |f| > 2. Here, from Eq. (5), the ith-row, jth-column element of the matrix Z1D is a coefficient of exp[−i2π(fj − fi)x]. For example, the coefficient of exp(0), C0, is

C_0 = Z_{11} + Z_{22} + \cdots + Z_{77}.   (8)

Similarly, the coefficient of exp(−i2πΔf x), C1, is

C_1 = Z_{12} + Z_{23} + \cdots + Z_{67}.   (9)

Therefore, the spatial frequency of the aerial image is obtained by the sum of diagonal or off-diagonal elements of the matrix Z1D. Here, we define a diagonal sum operator D. The schematic operation of the operator D is shown in Fig. 1. For details, refer to Appendix A. Using the operator D, we obtain the spatial frequency of the aerial image, I1D(f), as


I^{1D}(f) = D[Z^{1D}].   (10)

By the Fourier transform of I1D(f), we can obtain the one-dimensional aerial image I1D(x).
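A minimal sketch of the operator D in NumPy: `np.trace` with an `offset` argument sums exactly one diagonal, so Eqs. (8) and (9) become one-liners. The helper name `diag_sum` is illustrative, not from the paper.

```python
import numpy as np

def diag_sum(Z):
    """Operator D of Appendix A: for an m x m matrix (m odd), return the
    length-m vector whose entries are the sums of the diagonals with
    offsets -(m-1)/2, ..., 0, ..., +(m-1)/2.  The middle entry is C_0 of
    Eq. (8); the entry just right of it is C_1 of Eq. (9)."""
    m = Z.shape[0]
    half = (m - 1) // 2
    return np.array([np.trace(Z, offset=k) for k in range(-half, half + 1)])

# Small demonstration on an arbitrary 7 x 7 matrix (cf. Fig. 1):
Z = np.arange(49, dtype=float).reshape(7, 7)
d = diag_sum(Z)
print(d[3])   # main-diagonal sum: 0 + 8 + 16 + 24 + 32 + 40 + 48 = 168
print(d[4])   # first superdiagonal sum: 1 + 9 + 17 + 25 + 33 + 41 = 126
```

For the image matrices used in this paper, the diagonals with |offset| > (m − 1)/2 sum to zero because the image spectrum vanishes beyond the Nyquist frequency, which is why D only needs to return m values.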

The matrix Z1D is calculated as (B1D)†B1D, where B1D is a rectangular matrix [13]. Assume the source consists of N mutually incoherent point sources, where the ith point source position is f′i with light intensity Si. Then, the matrix B1D is an N × M matrix:

B^{1D} = \begin{pmatrix}
\sqrt{S_1}\,a(f_1 - f'_1)P(f_1) & \sqrt{S_1}\,a(f_2 - f'_1)P(f_2) & \cdots & \sqrt{S_1}\,a(f_M - f'_1)P(f_M) \\
\sqrt{S_2}\,a(f_1 - f'_2)P(f_1) & \sqrt{S_2}\,a(f_2 - f'_2)P(f_2) & \cdots & \sqrt{S_2}\,a(f_M - f'_2)P(f_M) \\
\vdots & \vdots & \ddots & \vdots \\
\sqrt{S_N}\,a(f_1 - f'_N)P(f_1) & \sqrt{S_N}\,a(f_2 - f'_N)P(f_2) & \cdots & \sqrt{S_N}\,a(f_M - f'_N)P(f_M)
\end{pmatrix},   (11)

where P is the pupil function and a is the object spectrum. An example is shown in Subsection 2.D.
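Equation (11) translates almost line for line into array code. The sketch below builds B1D for an arbitrary one-dimensional object spectrum, pupil, and source list, and then forms Z1D = (B1D)†B1D; it assumes the grid of Subsection 2.A, and the name `build_b1d` and the Gaussian test spectrum are illustrative choices, not the paper's.

```python
import numpy as np

def build_b1d(freqs, spectrum, pupil, src_pos, src_int):
    """B^{1D} of Eq. (11): row i is sqrt(S_i) * a(f - f'_i) * P(f) on the grid.
    `spectrum` must accept an array of frequencies and return the object spectrum a."""
    B = np.empty((len(src_pos), len(freqs)), dtype=complex)
    for i, (f0, S) in enumerate(zip(src_pos, src_int)):
        B[i, :] = np.sqrt(S) * spectrum(freqs - f0) * pupil
    return B

M = 7
freqs = np.linspace(-2.0, 2.0, M)                  # f_1 = -2, ..., f_M = 2
pupil = (np.abs(freqs) <= 1.0).astype(float)       # 1D pupil, cutoff at |f| = 1
spectrum = lambda f: np.exp(-(f / 0.8) ** 2)       # smooth test spectrum (illustrative)
src_pos = [0.0, 2.0 / 3.0]                         # two point sources, as in Subsection 2.D
src_int = [1.0, 1.0]

B = build_b1d(freqs, spectrum, pupil, src_pos, src_int)   # N x M
Z = B.conj().T @ B                                        # M x M, the matrix of Eq. (2)
print(B.shape, Z.shape)
```

With `diag_sum` from the sketch after Eq. (10), `diag_sum(Z)` then gives the image spectrum of Eq. (10), and its Fourier transform gives the one-dimensional image.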

C. Two-Dimensional Imaging

By referring to [13], we can define the matrix Z for two-dimensional imaging with the help of the stacking operator Y that reshapes a two-dimensional matrix into a column vector [11]. In this paper, the stacking operator Y stacks the ith-row, jth-column element onto the [(i − 1) × M + j]th element of a column vector. The spatial frequency of the aerial image, I(f, g), is derived by using the operator D as

Y[I(f, g)]^{T} = D[Z].   (12)

Hence,

I(f, g) = Y^{-1}[D[Z]^{T}].   (13)

The Fourier transform of I(f, g) gives the final aerial image.
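The stacking operator Y with the index rule above is exactly a row-major flatten, so Y and its inverse are one-line reshapes in NumPy. A minimal sketch, with `stack_Y` and `unstack_Y` as illustrative names:

```python
import numpy as np

def stack_Y(A):
    """Stacking operator Y: the (i, j) element of an M x M array goes to
    position (i - 1) * M + j (1-based), i.e., a row-major flatten into a column."""
    return A.reshape(-1, 1)

def unstack_Y(v, M):
    """Inverse of Y: reshape a length-M^2 column back to an M x M array."""
    return np.asarray(v).reshape(M, M)

M = 3
A = np.arange(1, M * M + 1).reshape(M, M)
v = stack_Y(A)
assert np.array_equal(unstack_Y(v, M), A)
print(v.ravel())   # [1 2 3 4 5 6 7 8 9]
```

Equation (13) then reads: apply D to the two-dimensional Z, un-stack the resulting row with Y^{-1}, and take one FFT of the result.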

D. One-Dimensional Imaging Example

An example is shown in Fig. 2. Assume M = 7. The pupil function that passes the light within the pupil (|f| ≤ 1) can be represented by

P(f) = \begin{pmatrix} 0 & 0 & 1 & 1 & 1 & 0 & 0 \end{pmatrix}.   (14)

If there are two point sources at f4 and f5 with unit intensities, the matrix B1D_ex is

B^{1D}_{ex} = \begin{pmatrix} 0 & 0 & a_3 & a_4 & a_5 & 0 & 0 \\ 0 & 0 & a_2 & a_3 & a_4 & 0 & 0 \end{pmatrix}.   (15)

Then,

Z^{1D}_{ex} = (B^{1D}_{ex})^{\dagger} B^{1D}_{ex} = \begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & a_2^* a_2 + a_3^* a_3 & a_3^* a_4 + a_2^* a_3 & a_3^* a_5 + a_2^* a_4 & 0 & 0 \\
0 & 0 & a_4^* a_3 + a_3^* a_2 & a_3^* a_3 + a_4^* a_4 & a_4^* a_5 + a_3^* a_4 & 0 & 0 \\
0 & 0 & a_5^* a_3 + a_4^* a_2 & a_5^* a_4 + a_4^* a_3 & a_4^* a_4 + a_5^* a_5 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}.   (16)

The operation of the operator D on the matrix Z1D_ex outputs I1D_ex(f) as

I^{1D}_{ex}(f) = D[Z^{1D}_{ex}] = \begin{pmatrix}
0 \\
a_4^* a_2 + a_5^* a_3 \\
a_3^* a_2 + 2 a_4^* a_3 + a_5^* a_4 \\
a_2^* a_2 + 2 a_3^* a_3 + 2 a_4^* a_4 + a_5^* a_5 \\
a_2^* a_3 + 2 a_3^* a_4 + a_4^* a_5 \\
a_3^* a_5 + a_2^* a_4 \\
0
\end{pmatrix}^{T}.   (17)

We can obtain the aerial image by the Fourier transform of I1D_ex(f).

Fig. 1. Example of the diagonal sum operator D.
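The worked example of Eqs. (14)–(17) is easy to verify numerically. The sketch below fills the spectrum samples a2, ..., a5 with arbitrary complex numbers, rebuilds B1D_ex and Z1D_ex, and checks one entry of Eq. (17); `diag_sum` is the same illustrative helper as in the sketch after Eq. (10).

```python
import numpy as np

def diag_sum(Z):
    m = Z.shape[0]
    half = (m - 1) // 2
    return np.array([np.trace(Z, offset=k) for k in range(-half, half + 1)])

rng = np.random.default_rng(2)
a = rng.standard_normal(8) + 1j * rng.standard_normal(8)    # a[2]..a[5] stand in for a_2..a_5

B_ex = np.array([[0, 0, a[3], a[4], a[5], 0, 0],             # source at f_4 (normal incidence)
                 [0, 0, a[2], a[3], a[4], 0, 0]])            # source at f_5 (spectrum shifted by one step)
Z_ex = B_ex.conj().T @ B_ex
d = diag_sum(Z_ex)

# Middle entry of Eq. (17): a2* a2 + 2 a3* a3 + 2 a4* a4 + a5* a5
expected = (np.conj(a[2]) * a[2] + 2 * np.conj(a[3]) * a[3]
            + 2 * np.conj(a[4]) * a[4] + np.conj(a[5]) * a[5])
assert np.isclose(d[3], expected)
print("Eq. (17) central coefficient verified:", d[3])
```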


E. Fast Computation Technique

We consider the same example as in Subsection 2.D. There is no object spectrum outside of the pupil. Therefore, we can reduce the size of the matrix B1D_ex as

B^{1D}_{red} = \begin{pmatrix} a_3 & a_4 & a_5 \\ a_2 & a_3 & a_4 \end{pmatrix}.   (18)

Now the 2 × 7 matrix is reduced to a 2 × 3 matrix. With the matrix B1D_red, we can reduce the number of products in calculating Z1D_ex.

Likewise, for two-dimensional imaging, the size of the matrix B is reduced to N × [(M − 1)/2]^2. We can also reduce the size of the matrix Z to [(M^2 + 1)/2] × [(M^2 + 1)/2]. Therefore, with this reduction technique, we can reduce the sizes of the matrices B and Z to 1/4. Note that the size of the TCC matrix or the matrix E is M^2 × M^2 if the maximum coherence factor is 1 [13].

Even the reduced matrix Z has rows whose elements are all 0. By removing those rows, the size of the matrix Z is further reduced to [(M − 1)/2]^2 × [(M^2 + 1)/2]. By customizing the diagonal sum operator D, we can obtain the spatial frequency from the further reduced matrix Z. Therefore, the computer memory consumed by the matrix Z is 1/8 compared to that of the matrix E or the TCC matrix. This is why we adopted Eq. (2).

For faster computation, we utilize the feature I(−f, −g) = I*(f, g). For example, in Fig. 1, d3* = d5; therefore, we can obtain d3 as the conjugate of d5.
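A minimal sketch of the reduction technique, assuming the one-dimensional example above: because the object spectrum and pupil vanish outside the pupil, entire columns of B are zero and can be dropped before forming Z, and the Hermitian symmetry I(−f) = I*(f) means only the non-negative offsets of D need to be summed. The helper names are illustrative.

```python
import numpy as np

def reduce_columns(B):
    """Drop columns of B that are identically zero (no object spectrum there)."""
    keep = np.any(B != 0, axis=0)
    return B[:, keep]

def half_diag_sum(Z, m_full):
    """Sum only the offsets 0 ... +(m_full - 1)/2 and obtain the negative
    offsets from d(-k) = conj(d(+k))."""
    half = (m_full - 1) // 2
    pos = np.array([np.trace(Z, offset=k) for k in range(0, half + 1)])
    return np.concatenate([np.conj(pos[:0:-1]), pos])

# Same two-source example as Subsection 2.D, with the zero columns removed:
rng = np.random.default_rng(3)
a = rng.standard_normal(8) + 1j * rng.standard_normal(8)
B_ex = np.array([[0, 0, a[3], a[4], a[5], 0, 0],
                 [0, 0, a[2], a[3], a[4], 0, 0]])
B_red = reduce_columns(B_ex)        # 2 x 3, as in Eq. (18)
Z_red = B_red.conj().T @ B_red      # 3 x 3 instead of 7 x 7
d = half_diag_sum(Z_red, m_full=7)  # same length-7 spectrum as the full calculation
print(B_red.shape, Z_red.shape, d.shape)
```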

F. Estimated Computation Time

For two-dimensional imaging, the size of the matrix B is N × [(M − 1)/2]^2. As shown in Fig. 2 or Subsection 2.D, calculation of the matrix B involves stacking the shifted object spectrum and is not a computational burden. Thus, we may ignore the computation time of the matrix B. The computation time of the matrix Z is BN[(M − 1)/2]^4, where B is a constant depending on the computer platform. We may also ignore the calculation time of one FFT. Therefore, the estimated calculation time of this method is BN[(M − 1)/2]^4.
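The two cost models are easy to compare numerically. The sketch below finds the FFT size at which the matrix method is predicted to overtake source integration for a given ratio B/A; the constants A and B are platform dependent and not tabulated in the paper, so the ratios used here are purely illustrative.

```python
import numpy as np

def crossover_NF(M, B_over_A):
    """Smallest power-of-two FFT size NF for which
    B*N*((M-1)/2)**4 < 2*A*N*NF*log2(NF)  (N cancels out),
    i.e. for which the matrix method is predicted to be the faster one."""
    target = 0.5 * B_over_A * ((M - 1) / 2) ** 4
    NF = 4
    while NF * np.log2(NF) < target:
        NF *= 2
    return NF

# B/A is unknown here; the ratios below are purely illustrative.
for ratio in (0.001, 0.002, 0.004):
    print(ratio, [(M, crossover_NF(M, ratio)) for M in (31, 55, 63)])
```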

3. Numerical Simulation

In this section, the wavelength λ is set to 248 nm and the numerical aperture of the projection optics is 0.86. We assume aberration-free scalar imaging with a demagnification of unity.

First, we confirm the accuracy of this method. We set the sampling number M = 63. Hence, f1 = g1 = −2, f63 = g63 = 2, and Δf = Δg = 4/62. An object and illumination are illustrated in Fig. 3. The illumination contains 609 point sources, i.e., N = 609. Let the FFT sampling number be NF. When NF is 512, the aerial image was obtained in 0.64 s. The result is shown in Fig. 4(a). For reference, the calculation of the matrix Z followed by the operation D occupied 0.55 s of the 0.64 s. Under the same simulation condition, the aerial image was simulated by the conventional source integration method in 35.2 s. The difference between the aerial images was less than 3.0 × 10^−15, as shown in Fig. 4(b). Therefore, this method is as accurate as the source integration.

Next, let us compare the calculation time with the source integration. Because the calculation time of the source integration depends on the FFT repetition, it is roughly proportional to 2ANNF log2 NF, where A is a constant depending on the FFT algorithm. On the other hand, this method is proportional to BN[(M − 1)/2]^4, where usually B < A. Therefore, which method runs faster depends on M and NF. For example, Fig. 5 shows the actual calculation time of this method and the source integration when M = 63. If NF is small, the source integration runs faster than this method. However, as NF increases, the run time of the source integration increases rapidly. On the other hand, the run time of this method remains almost unchanged as NF increases because this method requires only one FFT. When M = 63, this method runs faster than the source integration if NF > 2M. Because the simulation grid size is λ/(2NA) × (M − 1)/(2NF), one can realize fast fine-pixel aerial image calculation with this method.

Fig. 2. Schematic view of how to calculate the matrix Z in accordance with the example in Subsection 2.D. Illumination from f4 is at normal incidence, while illumination from f5 is at oblique incidence, which shifts the object spectrum. By stacking B1D_1 and B1D_2, we can obtain B1D_ex in Eq. (15).

Fig. 3. (a) An object is represented by the combination of transparent rectangles placed on an opaque background. (b) Partially coherent illumination. The coherence factor σ is set to 0.9. Each pixel represents a mutually incoherent point source. The white circle indicates the pupil edge.

Fig. 4. (a) Aerial images obtained by this method. (b) Difference in the aerial images obtained by this method and source integration.

Fig. 5. Comparison of the simulation time in the M = 63 case. As the illumination, coherence factors of 0.35 (N = 97) and 1.00 (N = 749) were used. Under the Nyquist sampling condition, the pixel size is 69.9 nm, while the feature width is 100 nm.

The minimum requirement on NF is Nyquist sampling, which is the fastest condition for the source integration. If M ≤ 55, this method runs faster than the source integration even under the Nyquist sampling condition. Let us see an example. Here we set M = 55 and NF = 64 for the Nyquist sampling, considering FFT compatibility. Under this condition, the aerial image simulation was performed with the object shown in Fig. 3(a) and illumination with a coherence factor σ = 0.9. The source integration took 0.32 s. This method took 0.30 s. Note that M = 55 corresponds to a 13.3λ/NA simulation domain in the image plane. If our target feature size is 0.35λ/NA, the domain size corresponds to approximately 38 features, or 19 pitches, which is a sufficient simulation domain size.

The above computations were performed on a 2.40 GHz Intel Core 2 Duo processor with MATLAB. The operating system was Mac OS 10.5.7 with 4 GB of memory.

4. Application

This method is useful if one needs so-called image averaging. There are three typical cases for image averaging. The first case is image simulation with laser bandwidth [15]. A standard way of considering the bandwidth is to repeat the image calculation with the chromatic aberration, weight each image according to the spectrum intensity, and average. The second case is simulation of the average light intensity inside a photoresist. The average light intensity inside the photoresist can be obtained by calculating the light intensity at each photoresist depth, followed by averaging. The third case is image simulation with vertical movement of the image plane [15]. A mechanical stage that holds a wafer vibrates, giving defocus. The image becomes the average of defocused images. In any case, we need many images to be averaged, leading to a long simulation time.

Even the average image can be obtained with one FFT by this method. For example, let us calculate the average light intensity inside a photoresist with Nd depths. We define the matrix Bi at the ith photoresist depth. Set the matrix Z as

Z = \frac{1}{N_d} \sum_{i=1}^{N_d} B_i^{\dagger} B_i = \frac{1}{N_d} \sum_{i=1}^{N_d} Z_i.   (19)

Then, the spatial frequency of the average image is

I(f, g) = Y^{-1}[D[Z]^{T}].   (20)
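A minimal sketch of Eq. (19), assuming a list of per-depth matrices Bi has already been built (for instance with a routine like `build_b1d` above, modified by a depth-dependent phase term that the paper obtains from the thin-film model of [16]); the averaging itself is just a sum of Zi followed by a single application of D and one FFT. The depth-dependent phase used below is purely illustrative.

```python
import numpy as np

def averaged_Z(B_list):
    """Eq. (19): Z = (1/Nd) * sum_i Bi^dagger Bi."""
    Nd = len(B_list)
    return sum(B.conj().T @ B for B in B_list) / Nd

def diag_sum(Z):   # same illustrative operator D as in the earlier sketches
    m = Z.shape[0]
    half = (m - 1) // 2
    return np.array([np.trace(Z, offset=k) for k in range(-half, half + 1)])

# Stand-in Bi's: one base matrix with an illustrative depth-dependent quadratic phase.
rng = np.random.default_rng(4)
M, N, Nd = 7, 2, 5
base = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
B_list = [base * np.exp(1j * 0.05 * i * np.arange(M) ** 2) for i in range(Nd)]

Z_avg = averaged_Z(B_list)
spectrum_avg = diag_sum(Z_avg)    # spatial frequency of the averaged image, Eq. (20) in 1D
print(spectrum_avg.shape)
```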

Let us see an example. Unpolarized illumination is set to σ = 0.9. A 100 nm isolated contact hole is used for the object. The film stack consists of a wafer (n = 0.8808 − 2.7638i), a 250 nm thick photoresist (n = 1.7100 − 0.4200i), an 80 nm thick photoresist coat (n = 1.7050 − 0.0157i), and air (n = 1), where n is the refractive index. These material constants are not real ones but are set as a trial. The light intensity inside the photoresist was simulated by referring to [16], and the result is shown in Fig. 6(a). In the simulation, the photoresist was divided into 26 depths. Averaging the images at the 26 depths, we obtain the averaged intensity inside the photoresist, as shown in Fig. 6(b). Let us compare the simulation time. Here, we set M = 55 and NF = 64 (the Nyquist sampling condition, for which the source integration runs fastest). Unpolarized illumination is an incoherent sum of X- and Y-polarized light. For polarized illumination, we need the electromagnetic field in the X, Y, and Z directions. Thus, the simulation time would be increased 26 × 2 × 3 = 156 fold. For the averaged image, the source integration took 53.40 s, which is slightly more than 0.32 × 156 s due to the resist effect setting. By this method, it took 40.51 s, which is less than 0.30 × 156 s because we do not need 156 FFTs, but only one FFT in this case. The difference of the averaged image between this method and the source integration was, at most, 7.0 × 10^−16. Again, this method is as accurate as the source integration. For reference, Fig. 6(b) was simulated by setting M = 55 and NF = 512 with this method. It took 40.96 s, which is almost the same as the Nyquist sampling case.

Fig. 6. (a) Light intensity inside the photoresist at the y = 0 cross section. The resist depth is represented by z. (b) Averaged image of all light intensities in the photoresist.

This approach gives us another idea for simulating the averaged image. A fast calculation method for TCC eigenfunctions has been introduced by the stacked pupil shift matrix P [11,17]. Let Pi be the stacked pupil shift matrix at each photoresist depth. If the number of point sources N is small, we define the averaged pupil shift matrix Pave as

P_{ave} = \frac{1}{\sqrt{N_d}} \begin{pmatrix} P_1 \\ P_2 \\ \vdots \\ P_{N_d} \end{pmatrix}.   (21)

If N is large, we define the TCC matrix for the averaged image, Tave, as

T_{ave} = \frac{1}{N_d} \sum_{i=1}^{N_d} P_i^{\dagger} P_i.   (22)

Decomposing either the matrix Pave or the matrix Tave generates the TCC eigenfunctions for the averaged image.
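A minimal sketch of Eq. (22), assuming the per-depth stacked pupil shift matrices Pi are available as 2D arrays of identical shape; the eigendecomposition step stands in for "decomposing" the averaged matrix, and the kernel count `n_kernels` is an illustrative truncation choice, not a value from the paper.

```python
import numpy as np

def averaged_kernels(P_list, n_kernels):
    """Average the per-depth stacked pupil shift matrices (Eq. (22)) and return
    the leading TCC eigenfunctions (coherent kernels) of the averaged system."""
    Nd = len(P_list)
    T_ave = sum(P.conj().T @ P for P in P_list) / Nd          # Eq. (22)
    w, V = np.linalg.eigh(T_ave)                              # T_ave is Hermitian
    order = np.argsort(w)[::-1]                               # largest eigenvalues first
    return w[order][:n_kernels], V[:, order][:, :n_kernels]

# Stand-in data: a few small random "stacked pupil shift" matrices of equal shape.
rng = np.random.default_rng(5)
P_list = [rng.standard_normal((20, 9)) + 1j * rng.standard_normal((20, 9)) for _ in range(4)]
eigvals, kernels = averaged_kernels(P_list, n_kernels=3)
print(eigvals)          # nonnegative, in decreasing order
print(kernels.shape)    # (9, 3): one column per coherent kernel
```

Equivalently, when N is small, one can stack the Pi divided by sqrt(Nd) vertically as in Eq. (21) and take the right singular vectors of the stacked matrix; since Pave†Pave = Tave, these are the same eigenfunctions.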

5. Conclusion

A fast calculation method for fine-pixel aerial imaging that is as accurate as source integration has been presented. The algorithm is based on the matrix representation of the spectrum of a modified Hopkins equation. Diagonal and off-diagonal sums of the matrix Z give the spatial frequency of the aerial image. By this method, we can obtain the aerial image with one FFT, leading to a calculation method whose run time is almost independent of the FFT time. The calculation time of this method is proportional to BN[(M − 1)/2]^4. If the coherence factor σ is 0.9 and M ≤ 55, this method runs faster than source integration, even under the Nyquist sampling condition. The averaged image can also be calculated with one FFT by this method. The averaged image inside the photoresist was presented as an example to show the advantage of this method. A fast calculation method for the TCC eigenfunctions for averaged imaging has also been presented.

Appendix A

The diagonal sum operator D operates on an m × m matrix m, and the output is a 1 × m matrix. Let the output be d, so that d = D[m]. Let the ith-row, jth-column element of the matrix m be m_{i,j} and the kth element of the vector d be d_k. Furthermore, we set m0 = (m + 1)/2. Then,

d_k = \begin{cases} \sum_{j=1}^{m-(k-m_0)} m_{j,\, j+(k-m_0)}, & k \ge m_0, \\ \sum_{j=1}^{m+(k-m_0)} m_{j-(k-m_0),\, j}, & k < m_0. \end{cases}   (A1)

Equation (A1) is the operation of the operator D. Figure 1 is a schematic view of the operation when m = 7.
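For reference, here is a direct transcription of Eq. (A1) into Python with explicit index loops (1-based indices shifted to 0-based); it agrees with the `np.trace`-based `diag_sum` sketch used earlier. The function name is illustrative.

```python
import numpy as np

def diag_sum_explicit(M_mat):
    """Operator D per Eq. (A1): M_mat is m x m (m odd); returns the 1 x m vector d."""
    m = M_mat.shape[0]
    m0 = (m + 1) // 2                       # m_0 = (m + 1)/2, 1-based
    d = np.zeros(m, dtype=M_mat.dtype)
    for k in range(1, m + 1):               # 1-based k
        off = k - m0
        if k >= m0:
            d[k - 1] = sum(M_mat[j - 1, j - 1 + off] for j in range(1, m - off + 1))
        else:
            d[k - 1] = sum(M_mat[j - 1 - off, j - 1] for j in range(1, m + off + 1))
    return d

Z = np.arange(49.0).reshape(7, 7)
ref = np.array([np.trace(Z, offset=k) for k in range(-3, 4)])
assert np.allclose(diag_sum_explicit(Z), ref)
print(diag_sum_explicit(Z))
```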

I thank Andrew R. Neureuther of the University of California at Berkeley for his technical advice. I thank Canon Inc. colleagues, especially Minoru Yoshii, Tokuyuki Honda, Tsunefumi Tanaka, Michitaka Setani, and Shigeyuki Uzawa, for giving this opportunity to study at the University of California at Berkeley.

References

1. H. H. Hopkins, "On the diffraction theory of optical images," Proc. R. Soc. London Ser. A 217, 408–432 (1953).
2. M. Born and E. Wolf, Principles of Optics, 6th ed. (Pergamon, 1980), Chap. 10.
3. J. W. Goodman, Statistical Optics, 1st ed. (Wiley-Interscience, 1985), Chap. 7.
4. M. Yeung, "Modeling aerial images in two and three dimensions," in Proceedings of Kodak Microelectronics Seminar: Interface '85, Kodak Publ. G-154 (Eastman Kodak, 1986), pp. 115–126.
5. H. Gamo, "Matrix treatment of partial coherence," in Progress in Optics, E. Wolf, ed. (North-Holland, 1964), Vol. 3, Chap. 3.
6. E. L. O'Neill, Introduction to Statistical Optics (Dover, 2003), Chap. 8.
7. B. E. A. Saleh and M. Rabbani, "Simulation of partially coherent imagery in the space and frequency domains and by modal expansion," Appl. Opt. 21, 2770–2777 (1982).
8. A. S. Ostrovsky, O. Ramos-Romero, and G. Martínez-Niconoff, "Fast algorithm for bilinear transforms in optics," Rev. Mex. Fís. 48, 186–191 (2002).
9. R. J. Socha and A. R. Neureuther, "Propagation effects of partial coherence in optical lithography," J. Vac. Sci. Technol. B 14, 3724–3729 (1996).
10. N. B. Cobb, "Fast optical and process proximity correction algorithms for integrated circuit manufacturing," Ph.D. dissertation (Electrical Engineering and Computer Science, University of California, Berkeley, 1998).
11. K. Yamazoe, "Computation theory of partially coherent imaging by stacked pupil shift matrix," J. Opt. Soc. Am. A 25, 3111–3119 (2008).
12. E. Kintner, "Method for the calculation of partially coherent imagery," Appl. Opt. 17, 2747–2753 (1978).
13. K. Yamazoe, "Two matrix approaches for aerial image formation obtained by extending and modifying the transmission cross coefficients," J. Opt. Soc. Am. A 27, 1311–1321 (2010).
14. K. Yamazoe, "Fast fine-pixel aerial image calculation by matrix representation of Hopkins equation," presented at the 19th Lithography Workshop, Coeur d'Alene, Idaho, USA, 28 June–2 July 2009.
15. T. Brunner, D. Corliss, S. Butt, T. Wiltshire, C. P. Ausschnitt, and M. Smith, "Laser bandwidth and other sources of focus blur in lithography," J. Microlith. Microfab. Microsyst. 5, 043003 (2006).
16. S. Yu, B. J. Lin, A. Yen, C. Ke, J. Huang, B. Ho, C. Chen, T. Gau, H. Hsieh, and Y. Ku, "Thin-film optimization strategy in high numerical aperture optical lithography, part 1: principles," J. Microlith. Microfab. Microsyst. 4, 043003 (2005).
17. Y. Lian and X. Zhou, "Fast and accurate computation of partially coherent imaging by stacked pupil shift operator," Proc. SPIE 7488, 74883G (2009).
