
Design strategies to simplify and miniaturize imaging systems

Florence de la Barrière,1,* Guillaume Druart,1 Nicolas Guérineau,1 and Jean Taboury2

1Office National d'Etudes et de Recherches Aérospatiales, Chemin de la Hunière, 91761 Palaiseau cedex, France

2Institut d'Optique, Campus Polytechnique RD 128, 91127 Palaiseau cedex, France

*Corresponding author: [email protected]

Received 17 September 2010; revised 9 November 2010; accepted 9 November 2010; posted 11 November 2010 (Doc. ID 135346); published 17 February 2011

We present the range of optical architectures for imaging systems based on a single optical component, an aperture stop, and a detector. Thanks to the formalism of third-order Seidel aberrations, several strategies for the simplification and miniaturization of optical systems are examined. Figures of merit are also introduced to assess the basic optical properties and performance capabilities of such systems; in this way, we show the necessary trade-off between simplicity, miniaturization, and optical performance. © 2011 Optical Society of America

OCIS codes: 110.3925, 220.1010, 220.4830, 110.4190.

1. Introduction

There is currently a need for miniaturized and cheap imaging systems for both military and civilian applications. To reduce their size and their mass, imaging systems have to be as simple as possible, which means that they have to involve a minimal number of optical elements. In this paper, the simplest system is defined as a system which is composed of only three elements: a single optical component, an aperture stop, and a detector. These elements can be complex if needed. For example, the optics can involve aspheric surfaces or diffractive optical elements, or can be a microlens array with a complex shape. The detector can also be curved. Indeed, a curved image surface would provide a way to lower the number of optical elements and to reduce the amount of off-axis aberrations [1], since all rays would fall almost perpendicularly to the surface of the detector.

The design of such simple systems has been widely addressed in many papers over the past decades. We can quote, for example, systems involving curved detectors [1], multichannel systems inspired by the compound eyes of insects [2–7], multichannel systems based on the TOMBO principle [8], lensless imagery [9–17], and folded imaging systems [18,19]. Therefore, choosing the suitable system for a given application among all these concepts is often not obvious for an optical designer. The objective of this paper is to give design rules for simplified and miniaturized systems.

Some papers [20,21] aim at evaluating the impact of breaking with standard complex systems on image quality: in [20], scaling rules for optical systems are described and evaluated in terms of image quality through the introduction of the space-bandwidth product parameter. Reference [21] gives a new merit function which takes into account the field of view, the angular resolution, the sensitivity, and the volume of the optical system in a single equation. It compares the quality of two types of systems, a multichannel system and a folded annular system, to a nominal “classic” optical system, by evaluating the performance metric for each system. In this paper, we extend the classification of existing simple and miniaturized systems which can be found in the literature thanks to the study of third-order Seidel aberrations.

Section 2 recalls the formalism of third-order Seidel aberrations, which aims at giving a general expression of the maximal amount of fourth-order wave aberration at the edge of the exit pupil and for a maximal field of view. This equation can be expressed as a function of the focal length, the field of view, and the f-number of the optical system, as well as the refractive index of the material, the bending of the lens, and the position of the aperture stop in the particular case of a single lens. From this equation, it turns out that different strategies can be used to simplify an optical system. Section 3 is a review of these strategies, leading to the development of optical systems based on a single component, an aperture stop, and a detector. Through the introduction of different figures of merit, we evaluate the impact of using a single optical component on image quality, and several solutions are proposed to overcome limitations and to maintain a good optical quality.

0003-6935/11/060943-09$15.00/0 © 2011 Optical Society of America

20 February 2011 / Vol. 50, No. 6 / APPLIED OPTICS 943

2. Definition of the Maximal Wave Aberration

A. Definition of the Wave Aberration

We consider a rotationally symmetric optical system. If this optical system is diffraction-limited, its exit pupil function is described by a function p_s0(r, φ), which is the geometrical image of the aperture stop of the system [22]. The parameters r and φ are the coordinates in the exit pupil plane (see Fig. 1). The function p_s0(r, φ) is equal to 0 outside the pupil and to 1 inside the pupil if the illumination of the pupil is uniform.

In the general case, the optical system may not be diffraction-limited because of aberrations; therefore, the exit pupil function p_s(r, φ, r′) is modified in the following way [23]:

p_s(r, φ, r′) = p_s0(r, φ) exp[i(2π/λ) W(r, φ, r′)].   (1)

The function p_s(r, φ, r′) is referred to as the generalized pupil function. The parameter r′ is the coordinate in the image plane (see Fig. 1).

The function W(r, φ, r′) is called the wave aberration. It is defined as the optical path difference between a reference sphere, which is centered on the Gaussian image of the object point, and the real wave surface at the exit pupil of the optical system.

The power series expansion of W can be written in the following form [24]:

W = W^(0) + W^(4) + W^(6) + W^(8) + ….   (2)

The general expression for the fourth-order wave aberrations is given by the following equation [25]:

W^(4)(r, φ, r′) = −(1/4)B r⁴ + F r³r′ cos φ − C r²r′² cos²φ − (1/2)D r²r′² + E r r′³ cos φ,   (3)

where B, F, C, D, and E are coefficients. These five terms stand, respectively, for the third-order Seidel aberrations: spherical aberration, coma, astigmatism, field curvature, and distortion.
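For illustration, Eq. (3) can be evaluated numerically; the following sketch is ours, with arbitrary placeholder coefficient values:

```python
from math import cos

def w4(r, phi, rp, B, F, C, D, E):
    """Fourth-order wave aberration of Eq. (3).

    r, phi: polar coordinates in the exit pupil plane;
    rp: radial coordinate r' in the image plane;
    B, F, C, D, E: Seidel coefficients (spherical aberration,
    coma, astigmatism, field curvature, distortion).
    """
    return (-0.25 * B * r**4
            + F * r**3 * rp * cos(phi)
            - C * r**2 * rp**2 * cos(phi)**2
            - 0.5 * D * r**2 * rp**2
            + E * r * rp**3 * cos(phi))

# On axis (rp = 0), only the spherical aberration term survives:
assert w4(1.0, 0.3, 0.0, B=2.0, F=1.0, C=1.0, D=1.0, E=1.0) == -0.5
```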

B. Expressions of the Coefficients B, F, C, D, and E in the Case of a Thin Lens

We can give simplified expressions for the coefficients B, F, C, D, and E if we consider an optical system made of a single thin lens, for which the object plane is at infinity (see Fig. 2). The thin lens is described by three parameters: the curvatures of the two refractive surfaces, C1 and C2, and the refractive index n of the lens material. Let us introduce the optical power P and the bending γ of the lens, which take into account the three parameters of interest. Their expressions are, respectively:

P = 1/f = (n − 1)(C1 − C2),   (4)

γ = (1/2)(C1 + C2).   (5)

Fig. 1. Illustration of the pupil coordinates r and φ and of the image plane coordinate r′.



The coefficients B, F, C, D, and E can be expressed as functions of the refractive index n, the optical power P, the bending γ, and the distance t between the entrance pupil and the lens [26]:

B = U,   (6)

F = −tU + V,   (7)

C = t²U − 2tV + P/2,   (8)

D = t²U − 2tV + [(n + 1)/(2n)] P,   (9)

E = −t³U + 3t²V − t [(3n + 1)/(2n)] P,   (10)

where

U = β/2 + [n(4n − 1)/(8(n − 1)²(n + 2))] P³ + [P/(2n(n + 2))] [(n + 2)γ − (n + 1)P]²,   (11)

V = [P/(2n)] [(n + 1)γ − ((2n + 1)/2) P].   (12)

In the expression of U, β stands for the aspheric profiles of the surfaces of the lens: β = (n − 1)(b1C1³ − b2C2³), where b1 and b2 are the conic constants of the two surfaces of the lens.
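Equations (6)–(12) transcribe directly into code; the following sketch is ours (the function name and test values are arbitrary), assuming all quantities are expressed in consistent units:

```python
def seidel_coefficients(n, P, gamma, t, beta=0.0):
    """Thin-lens Seidel coefficients B, F, C, D, E of Eqs. (6)-(12).

    n: refractive index; P: optical power 1/f; gamma: bending;
    t: distance between the entrance pupil and the lens;
    beta: aspheric contribution (0 for purely spherical surfaces).
    """
    U = (beta / 2
         + n * (4 * n - 1) / (8 * (n - 1) ** 2 * (n + 2)) * P ** 3
         + P / (2 * n * (n + 2)) * ((n + 2) * gamma - (n + 1) * P) ** 2)
    V = P / (2 * n) * ((n + 1) * gamma - (2 * n + 1) / 2 * P)
    B = U                                       # Eq. (6)
    F = -t * U + V                              # Eq. (7)
    C = t ** 2 * U - 2 * t * V + P / 2          # Eq. (8)
    D = t ** 2 * U - 2 * t * V + (n + 1) / (2 * n) * P            # Eq. (9)
    E = -t ** 3 * U + 3 * t ** 2 * V - t * (3 * n + 1) / (2 * n) * P  # Eq. (10)
    return B, F, C, D, E

# With the stop at the lens (t = 0) and the bending of Eq. (21),
# the coma coefficient F = V vanishes:
n, f = 1.5, 10.0
gamma_coma0 = (2 * n + 1) / (2 * (n + 1)) / f
B, F, C, D, E = seidel_coefficients(n, 1 / f, gamma_coma0, t=0.0)
assert abs(F) < 1e-12
```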

C. Expression of the Maximal Wave Aberration

The number of optical elements which are needed to correct the aberrations of an optical system is linked to the amount of wave aberration present in the exit pupil plane of the optical system. The maximum wave aberration which can be tolerated at the edge of the exit pupil for the maximal field angle is of particular interest.

We want the maximal field angle rays to intercept the image plane at the edge of the detector, which is supposed to be square, with a length t_det in each direction. Therefore, the maximal value of r′ (coordinate in the image plane) is r′_max = t_det/2. At the edge of the exit pupil, the maximal value of r is r_max = φ_s/2, where φ_s is the diameter of the exit pupil. Under paraxial conditions, r′_max and r_max can be expressed by the following relations: r′_max = f·FOV/2 and r_max = f/(2#), where FOV is the field of view of the system and # its f-number (# = f/φ_s). Then, the maximal fourth-order wave aberration W^(4)_max can be expressed as a function of f, FOV, and #:

W^(4)_max(f, #, FOV) = −(B/2⁶)(f⁴/#⁴) + (F/2⁴) FOV (f⁴/#³) − (C/2⁴) FOV² (f⁴/#²) − (D/2⁵) FOV² (f⁴/#²) + (E/2⁴) FOV³ (f⁴/#).   (13)
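A short numerical cross-check (our own sketch; the coefficient values are arbitrary placeholders) confirms that Eq. (13) is Eq. (3) evaluated at the pupil edge r_max = f/(2#) and at the detector edge r′_max = f·FOV/2:

```python
def w4_max(f, N, FOV, B, F, C, D, E):
    """Maximal fourth-order wave aberration of Eq. (13).
    N stands for the f-number #."""
    return (-B / 2**6 * f**4 / N**4
            + F / 2**4 * FOV * f**4 / N**3
            - C / 2**4 * FOV**2 * f**4 / N**2
            - D / 2**5 * FOV**2 * f**4 / N**2
            + E / 2**4 * FOV**3 * f**4 / N)

def w4_edge(f, N, FOV, B, F, C, D, E):
    """Eq. (3) evaluated at the pupil edge (r = f/(2#), phi = 0)
    and at the detector edge (r' = f*FOV/2)."""
    r, rp = f / (2 * N), f * FOV / 2
    return (-0.25 * B * r**4 + F * r**3 * rp - C * r**2 * rp**2
            - 0.5 * D * r**2 * rp**2 + E * r * rp**3)

coeffs = dict(B=1.0, F=0.5, C=0.3, D=0.2, E=0.1)
assert abs(w4_max(10.0, 4.0, 0.2, **coeffs)
           - w4_edge(10.0, 4.0, 0.2, **coeffs)) < 1e-9
```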

W^(4)_max also depends on the refractive index n, the bending γ, and the position t of the pupil (these dependences are contained in the terms B, F, C, D, and E). To make things clearer, we give a simplified expression of W^(4)_max as a function of n, γ, f, #, and FOV in the case where the aperture stop is in the plane of the thin lens (thus, t = 0):

W^(4)_max(n, γ, f, #, FOV) = −(1/2⁹) [n(4n − 1)/((n − 1)²(n + 2))] (f/#⁴) − [1/(2⁷n(n + 2))] (f³/#⁴) [(n + 2)γ − (n + 1)/f]² + [(n + 1)/(2⁵n)] γf³ (FOV/#³) − [(2n + 1)/(2⁶n)] f² (FOV/#³) − (1/2⁵) f³ (FOV²/#²) − [(n + 1)/(2⁶n)] f³ (FOV²/#²).   (14)

Depending on which parameter is chosen to minimize W^(4)_max (either n, γ, f, #, or FOV, and in the general case t), a different optical architecture is obtained.
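As a consistency check (our own sketch, with arbitrary numerical values and β = 0), the expanded form of Eq. (14) should coincide with Eqs. (6)–(13) evaluated at t = 0:

```python
def w4_max_expanded(n, gamma, f, N, FOV):
    """Eq. (14): maximal fourth-order wave aberration with the
    aperture stop in the lens plane (t = 0, beta = 0).
    N stands for the f-number #."""
    return (-1 / 2**9 * n * (4*n - 1) / ((n - 1)**2 * (n + 2)) * f / N**4
            - 1 / (2**7 * n * (n + 2)) * f**3 / N**4
            * ((n + 2) * gamma - (n + 1) / f)**2
            + (n + 1) / (2**5 * n) * gamma * f**3 * FOV / N**3
            - (2*n + 1) / (2**6 * n) * f**2 * FOV / N**3
            - 1 / 2**5 * f**3 * FOV**2 / N**2
            - (n + 1) / (2**6 * n) * f**3 * FOV**2 / N**2)

def w4_max_general(n, gamma, f, N, FOV):
    """Same quantity via Eqs. (6)-(13) with t = 0 and beta = 0."""
    P = 1 / f
    U = (n * (4*n - 1) / (8 * (n - 1)**2 * (n + 2)) * P**3
         + P / (2 * n * (n + 2)) * ((n + 2) * gamma - (n + 1) * P)**2)
    V = P / (2 * n) * ((n + 1) * gamma - (2*n + 1) / 2 * P)
    B, F, C, D, E = U, V, P / 2, (n + 1) / (2 * n) * P, 0.0
    return (-B / 2**6 * f**4 / N**4
            + F / 2**4 * FOV * f**4 / N**3
            - C / 2**4 * FOV**2 * f**4 / N**2
            - D / 2**5 * FOV**2 * f**4 / N**2
            + E / 2**4 * FOV**3 * f**4 / N)

a = w4_max_expanded(1.5, 0.08, 10.0, 4.0, 0.2)
b = w4_max_general(1.5, 0.08, 10.0, 4.0, 0.2)
assert abs(a - b) < 1e-9
```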

D. Influence of Reducing the Maximal Wave Aberration on Traditional Figures of Merit

Usually, the Rayleigh criterion is used to evaluate the maximal amount of aberration which can be tolerated in the optical system. The Rayleigh criterion postulates that the maximal wave aberration which can be tolerated at the edge of the exit pupil is equal to λ/4 [27]. Therefore, if W^(4)_max remains below λ/4, the optical system is diffraction-limited, and traditional figures of merit can be used to evaluate the impact of the simplification and miniaturization of optical systems in reference to a nominal “classic” optical system (for example, a single thin lens). These traditional figures of merit are recalled in the following subsections.

Fig. 2. (Color online) Illustration of a thin lens, an aperture stop (which is the entrance pupil), and a detector. t is the distance between the entrance pupil and the lens.

1. Angular Resolution and Number of Resolved Points

The angular resolution, or instantaneous field of view (IFOV), quantifies the ability of the optical system to distinguish small details [28]. It is linked to the maximum spatial frequency ν_max that can be resolved by the optical system. The IFOV can be calculated as the ratio between the blur caused by the optical system and the focal length. Two main factors contribute to the blur caused by the optical system (the diffraction spot size and the geometrical spot size), and it is commonplace to combine these effects by calculating the square root of their quadratic sum [20]. As we have just mentioned at the beginning of this section, we consider that the optical system is diffraction-limited, so that the diffraction spot size is preponderant in relation to the geometrical spot size of the optical system. Thus, the IFOV can be expressed by the following equation:

IFOV = λ#/f.   (15)

The number of resolved points is defined as follows:

Nb = (FOV/IFOV)².   (16)

By introducing Eq. (15) into Eq. (16), the number of resolved points can be expressed in the following way as a function of f, FOV, and #:

Nb = f² FOV² / (λ² #²).   (17)

Nb does not depend on n, γ, and t; therefore, it is suitable to play on these parameters to reduce the maximal amount of aberrations. However, like W^(4)_max, Nb is an increasing function of f and FOV, and a decreasing function of #. Thus, playing on f, FOV, and # to minimize W^(4)_max results in a decrease of the number of resolved points. A trade-off between the simplification of an optical system and the number of resolved points is sometimes necessary, and it can be summarized in the following way: “Bigger is better but small is best” [29]. Having fewer resolved points is the price to pay for widespread optical systems in simple and cheap applications.
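Equations (15)–(17) can be illustrated with a few lines of code; the numerical values below (e.g. a 4 μm mid-infrared wavelength) are arbitrary choices of ours:

```python
from math import radians

# Eqs. (15)-(17) for a diffraction-limited system
# (all numerical values are arbitrary):
lam = 4e-6            # wavelength: 4 um
f = 10e-3             # focal length: 10 mm
N = 2.0               # f-number #
FOV = radians(20.0)   # field of view: 20 degrees

IFOV = lam * N / f                 # Eq. (15): angular resolution
Nb = (FOV / IFOV) ** 2             # Eq. (16): number of resolved points
assert abs(Nb - f**2 * FOV**2 / (lam**2 * N**2)) < 1e-6   # Eq. (17)

# Increasing the f-number by M degrades IFOV by M and Nb by M**2:
M = 2.0
assert abs((FOV / (lam * (M * N) / f))**2 - Nb / M**2) < 1e-6
```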

2. Étendue

The étendue is linked to the ability of an optical system to collect light. If we want to image a scene, the étendue G is inversely proportional to the square of the f-number for an object at infinity:

G = π t_pix² / (4#²).   (18)

In this paper, we consider that the size of the pixel is adapted to the radius of the Airy pattern; thus, G no longer depends on the f-number:

G = 1.17 λ².   (19)

In this case, the étendue remains constant while miniaturizing a diffraction-limited optical system.
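The constant 1.17 in Eq. (19) follows from Eq. (18) when the pixel size is matched to the Airy radius, t_pix = 1.22λ# (a short numerical check of ours):

```python
from math import pi

lam = 4e-6   # arbitrary wavelength (4 um, mid-infrared)
N = 2.0      # arbitrary f-number #

t_pix = 1.22 * lam * N            # pixel size matched to the Airy radius
G = pi * t_pix**2 / (4 * N**2)    # Eq. (18)

# The f-number cancels: G = (pi * 1.22**2 / 4) * lam**2 ~ 1.17 * lam**2
assert abs(G / lam**2 - 1.17) < 0.005   # Eq. (19)
```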

3. Strategies for the Simplification and the Miniaturization of Optical Systems

A. Playing on the Refractive Index, the Bending of the Lens, and the Position of the Pupil

Simple considerations based on Eqs. (6)–(10) can be carried out. We first consider that the entrance pupil is in the plane of the lens (t = 0). The bending γ appears both in the spherical aberration term (dependence with γ²) and in the coma term (dependence with γ). Therefore, two choices are possible for the value of γ. The first one consists of minimizing the spherical aberration, so that the suitable value of γ is given by

γ_min = (1/f) (n + 1)/(n + 2).   (20)

The second one consists of canceling the coma aberration; in this case, γ has to be chosen according to the following equation:

γ_(coma=0) = (1/f) (2n + 1)/(2(n + 1)).   (21)

If we choose the bending according to Eq. (21), it is possible to correct spherical aberration and coma simultaneously just by giving aspheric profiles to the surfaces of the lens (β ≠ 0). If the pupil remains in the plane of the lens, the astigmatism term cannot be canceled; however, it can be reduced just by moving the pupil away from the lens (t ≠ 0), which results in introducing a small amount of coma and distortion. Field curvature can be reduced by choosing a material with a high refractive index. This approach has been used to design a simple system working in the infrared spectral band. This simple system is only composed of an aperture stop used as the entrance pupil of the system, a silicon meniscus, and a planar detector [30].

If playing on the refractive index, the bending, and the position of the pupil is not sufficient to provide a good optical quality, the traditional approach consists of increasing the number of optical surfaces, which often leads to increasing the number of optical components. However, a method which increases the number of optical surfaces while using a single optical component has already been proposed: the design is based on a folded imaging system which reflects the optical path multiple times with concentric reflectors [18,19]. The limitation of such folded systems is that the field of view is narrowed as the number of reflective surfaces increases. However, it is possible to maintain a constant étendue by keeping the area of the pupil constant with respect to a classic nonobscured system. Other approaches, which consist of playing on f, FOV, and #, result in decreasing W^(4)_max without increasing the number of optical surfaces. These approaches are described in the following subsections.

B. Increasing the f-Number #: Lensless Imagery

By choosing a high f-number for the optical system, no focusing element is required to image a scene. This corresponds to the simplest optical systems. For example, the pinhole camera is only composed of an aperture stop with a small diameter [9,10].

We denote by M the scaling factor, that is to say, #2 = M#1, where #1 is the f-number of a nominal “classic” system. The IFOV is thus affected in the following way: IFOV2 = M·IFOV1, and the number of resolved points scales with 1/M²: Nb2 = Nb1/M².

Note that the off-axis étendue can be improved if acurved detector is associated with the pinhole [9].

The low angular resolution (IFOV) provided by a pinhole camera can be improved by using other lensless imaging systems, such as coded apertures. In the case of coded apertures, the scene is no longer imaged by a single pinhole, but by many pinholes properly arranged [11–13]. Other lensless imaging concepts involve a circular diffraction grating. It has been shown that circular diffraction gratings, which belong to the class of self-imaging objects, concentrate the light along a focal line and therefore have an imaging property [14,15]. Continuously self-imaging gratings (CSIGs), which are also self-imaging objects, have even been studied by using the formalism of third-order Seidel aberrations [31]. This formalism enables one to compare the performance of CSIGs with the performance of classical lenses. Alternatives to circular diffraction gratings, such as diffractive optics or a holographic axilens, have also been studied for imaging properties [16,17]. One of the advantages of all lensless imaging systems is their great depth of focus.

C. Decreasing the FOV: Multichannel Insect’s Eyes

Another way to reduce the maximal amount of fourth-order wave aberrations is to decrease the field of view of the optical system (FOV2 = FOV1/M). Although the IFOV remains the same in reference to a nominal classic optical system, the number of resolved points decreases as 1/M² (Nb2 = Nb1/M²).

However, if the solid angle is carefully divided into M² optical channels (all the channels having tilted optical axes), the field of view, and thus the number of resolved points, remains constant in reference to a nominal classic optical system. This approach is directly inspired by the compound eyes of insects [2–7].

Therefore, the multichannel approach reduces the field of view FOV_e of each channel while keeping Nb constant. However, great care must be taken, because the field of view of each channel (FOV_e) depends on the f-number of a single channel and is limited by the geometrical design of the optical system, which changes between curved and planar configurations. Based on the notations of Fig. 3(a) for a curved system and of Fig. 3(b) for a planar system, it turns out that the f-number is linked to the field of view FOV_e of a single optical channel through the following equations (provided the angles FOV_e and θ are small):

#_curved = 1/(θ + FOV_e),   (22)

for a curved system, where θ is the angle between the axes of successive channels of the optical system, and

#_planar = 1/FOV_e,   (23)

for a planar system. In the case illustrated in Fig. 3(b), each channel has the same FOV_e, and the overall FOV will be increased by adding a beam deflector at the top of the system, so that the optical axes of the channels are tilted with respect to each other.

As the angles FOV_e and θ are small, they can be expressed in the following way:

FOV_e ≈ t_det/f,   (24)

θ ≈ t_det/e.   (25)

A simple ray tracing through the system shows that if θ > FOV_e (that is to say, e < f), spatial lacunarities appear in the image, because some areas of the scene are not imaged by the system. In general, this case is not suitable for practical imaging applications. If θ = FOV_e (that is to say, e = f), which is illustrated in Fig. 3(a), the scene is perfectly tiled between the different optical channels, and the overall field of view FOV of the system is given by FOV = M·FOV_e, where M stands for the number of channels in one direction. If θ < FOV_e (that is to say, e > f), overlap areas are provided between adjacent channels, which can help to retrieve a single image from the collection of subimages provided by the multichannel system.

The f-number of a single optical channel is greater for a planar system than for a curved one; therefore, it is more convenient to design a curved system in order to work with a lower value of #. The minimal f-number for a curved system can be obtained by tiling the scene into the different optical channels without providing any overlap area between the channels. In this case, the f-number of the curved system is given by the following expression:

#_curved = 1/(2 FOV_e),   (26)

so that

#_curved = #_planar/2.   (27)

That is why a curved microlens array associated with a curved detector would convey the best results [32] [this configuration is directly based on the apposition compound eyes of the fly (see Fig. 4)].
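The geometry of Eqs. (22)–(27) and the tiling conditions can be summarized in a short sketch (ours; t_det, f, and e are arbitrary placeholder values, and the small-angle forms of Eqs. (24) and (25) are assumed):

```python
def channel_geometry(t_det, f, e):
    """Small-angle relations of Eqs. (22)-(25) and the tiling
    condition of Section 3.C for one optical channel."""
    FOVe = t_det / f     # Eq. (24): field of view of one channel
    theta = t_det / e    # Eq. (25): angle between adjacent optical axes
    if theta > FOVe:         # e < f: spatial lacunarities (gaps)
        tiling = "lacunary"
    elif theta == FOVe:      # e = f: the scene is perfectly tiled
        tiling = "perfect"
    else:                    # e > f: overlap between adjacent channels
        tiling = "overlap"
    N_curved = 1 / (theta + FOVe)   # Eq. (22)
    N_planar = 1 / FOVe             # Eq. (23)
    return tiling, N_curved, N_planar

tiling, Nc, Np = channel_geometry(t_det=1e-3, f=5e-3, e=5e-3)
assert tiling == "perfect"
# Perfect tiling: Eqs. (26)-(27), a factor-of-2 gain for the curved system
assert abs(Nc - Np / 2) < 1e-9
```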

From a practical point of view, the field of view FOV_e of one channel is chosen so that the maximum amount of fourth-order wave aberration W^(4)_max fulfils the Rayleigh criterion (W^(4)_max ≤ λ/4). Then, the optimal number of channels N_ch is linked to the desired overall FOV of the system through the following relation: N_ch = FOV/FOV_e.

However, current state-of-the-art technology implies the common use of planar components, such as planar detectors and planar microlens arrays. Thus, the design becomes more complex, because it has to address two main difficulties (tilting the optical axis of each channel by using a planar component and suppressing cross talk between adjacent channels). Other elements must be added to the design, such as a beam deflector [7,8,33] or thin and long opaque walls between the microlens array and the detector [4,8].

D. Decreasing the Focal Length f: The TOMBO Principle

The final way to decrease W^(4)_max consists of decreasing the focal length f of the system. However, according to Eqs. (15) and (17), miniaturizing an optical system by decreasing its focal length while maintaining a constant FOV and a constant f-number results in a decrease of the angular resolution IFOV and of the number of resolved points Nb [20]: if f2 = f1/M, then IFOV2 = M·IFOV1 and Nb2 = Nb1/M². This is illustrated in Fig. 5(a). Several solutions could be used to compensate for this loss of resolved points [34]. One of these solutions is to design a multichannel system by replicating a miniaturized imaging system [see Fig. 5(b)], with each channel providing nonredundant information. Each subimage is undersampled; that is why an image processing method has to be applied to the set of undersampled subimages to obtain a final image with an enhanced angular resolution. According to the sampling theorem of Papoulis [35], if each subimage is undersampled by a factor M², a single image can be retrieved from a collection of M² nonredundant subimages, avoiding a loss of information. This approach relies on image reconstruction algorithms which are based on nonredundant information provided by the optical channels. To obtain this nonredundancy, practical challenges have to be met, such as introducing subpixel shifts between the subimages. These subpixel shifts can be obtained either by choosing a microlens array pitch which is not a multiple of the size of the pixel, or by tilting the optical system in relation to the axes of the detector [34]. If the relative subimage shifts are determined once and for all by calibrating the camera, the image reconstruction becomes impervious to changes in subimage content, contrast, sharpness, and noise [36].

The Nyquist frequency of the detector is given by f_Ny = 1/(2p_s), where p_s is the sampling pitch of the detector. The size of the pixels t_pix is smaller than or equal to the sampling pitch (t_pix ≤ p_s), and we define the fill factor F of the pixels by the following equation:

Fig. 3. (Color online) Illustration of a multichannel optical system (a) with curved components, (b) with planar components.

Fig. 4. Illustration of an apposition compound eye (which corre-sponds to the eye of the fly), with a curved microlens array and acurved retina in a convex shape (see [7]).



F = (t_pix/p_s)².   (28)

Fig. 5. (Color online) (a) Illustration of the miniaturization of an optical system by decreasing its focal length f, while maintaining a constant f-number and a constant field of view, and illustration of the decrease of the number of resolved points. (b) Method to compensate for the loss of resolved points by replicating a miniaturized optical system.

Fig. 6. (Color online) Illustration of the different strategies used to design a simple imaging system with a minimal number of optical elements.

Thus, p_s = t_pix corresponds to the case where the fill factor of the pixels is equal to 1. We can only expect to retrieve frequencies up to the cutoff frequency of the pixel, 1/t_pix. Therefore, if the fill factor of the pixels is equal to 1, the Nyquist frequency is f_Ny = 1/(2t_pix), and a system based on the TOMBO principle with two nonredundant channels in each direction would enable one to retrieve frequencies up to twice the Nyquist frequency. If we want to retrieve frequencies higher than twice the Nyquist frequency, the fill factor of the pixels has to be reduced in order to increase the cutoff frequency of the pixels, and the number of required nonredundant channels has to be increased. The optimal number of nonredundant channels N_ch is given by the following equation:

N_ch = 2/√F.   (29)

For example, if the fill factor of the pixels is equal to F = 0.25 (that is to say, t_pix = p_s/2), the optimal number of nonredundant channels is N_ch = 4. The problem which arises in image reconstruction is that the contrast of high frequencies (up to the cutoff frequency of the pixel) is very low, and even equal to 0 at the cutoff frequency of the pixel.
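Equation (29) and the example above can be sketched as follows (a minimal illustration of ours):

```python
from math import sqrt

def optimal_channels(fill_factor):
    """Eq. (29): optimal number of nonredundant channels for a
    pixel fill factor F = (t_pix / p_s)**2."""
    return 2 / sqrt(fill_factor)

# Fill factor 1 (t_pix = p_s): two nonredundant channels per direction
assert optimal_channels(1.0) == 2.0
# Fill factor 0.25 (t_pix = p_s / 2): four channels, as in the text
assert optimal_channels(0.25) == 4.0
```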

4. Conclusion

In this paper, we have described different strategies to design a simple imaging system based on a single optical component. These strategies are summarized graphically in Fig. 6. They are all deduced from the theory of third-order Seidel aberrations. We can see that playing on different parameters (the index of refraction n, the bending γ, the position of the pupil t, the focal length f, the field of view, and the f-number) reduces the amount of fourth-order wave aberration of the system. It is more convenient to play on n, γ, and t, since these parameters do not affect the resolution of the system. However, if this is not sufficient, the classic approach consists of increasing the number of optical elements in order to increase the number of optical surfaces. Nevertheless, we have shown that other approaches keep on using a single optical component. Annular folded systems increase the number of optical surfaces while keeping a single optical component. Playing on #, FOV, and f enables the design of simple or miniaturized systems, but it results in a decrease of the number of resolved points. The interest of multichannel optical systems is that they can be very compact while maintaining a satisfying number of resolved points. From a practical point of view, although multichannel systems rely on simple optical architectures, the unique optical element which is used can be complex. Curved detectors, curved microlens arrays, and planar microlens arrays with potentially high optical power are not mature yet. Suppressing cross talk between adjacent channels is also an important issue. Thus, multichannel systems have to overcome technological challenges to become widespread in a large range of applications.

This work was sponsored by the Délégation Générale pour l'Armement (DGA) of the French Ministry of Defense.

References

1. S. B. Rim, P. B. Catrysse, R. Dinyari, K. Huang, and P. Peumans, "The optical advantages of curved focal plane arrays," Opt. Express 16, 4965–4971 (2008).

2. R. Völkel, M. Eisner, and K. J. Weible, “Miniaturized imagingsystems,” Microelectron. Eng. 67–68, 461–472 (2003).

3. J. Duparré, P. Dannberg, P. Schreiber, A. Bräuer, and A.Tünnermann, “Artificial apposition compound eye fabricatedby micro-optics technology,” Appl. Opt. 43, 4303–4310 (2004).

4. J. Duparré, P. Dannberg, P. Schreiber, A. Bräuer, and A.Tünnermann, “Thin compound eye camera,” Appl. Opt. 44,2949–2956 (2005).

5. J. Duparré, P. Schreiber, A. Matthes, E. Pshenay-Severin, A.Bräuer, A. Tünnermann, R. Vo lkel, M. Eisner, and T. Scharf,“Microoptical telescope compound eye,” Opt. Express 13, 889–903 (2005).

6. J. W. Duparré, and F. C. Wipermann, “Micro-optical artificialcompound eyes,” Bioinsp. Biomim. 1, R1–R16 (2006).

7. G. Druart, N. Guérineau, R. Haïdar, S. Thétas, J. Taboury, S.Rommeluère, J. Primot, and M. Fendler, “Demonstration of aninfrared microcamera inspired by Xenos Peckii vision,” Appl.Opt. 48, 3368–3374 (2009).

8. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T.Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thinobservation module by bound optics (TOMBO): Concept andexperimental verification,” Appl. Opt. 40, 1806–1813 (2001).

9. G.Druart,N.Guérineau, J.Taboury,S.Rommeluère,R.Haïdar,J. Primot, M. Fendler, and J. C. Cigna, “Compact infraredpinhole fisheye for wide field applications,” Appl. Opt. 48,1104–1113 (2009).

10. S. R. Wilk, “Ancient optics: Producing magnification withoutlenses,” Opt. Photonics News 17 (4), 12–13 (2006).

11. G. Andersen and D. Tullson, “Broadband antihole photonsieve telescope,” Appl. Opt. 46, 3706–3708 (2007).

12. E.E.FenimoreandT.M.Cannon,“Codedapertureimagingwithuniformly redundant arrays,” Appl. Opt. 17, 337–347 (1978).

13. S. R. Gottesman and E. E. Fenimore, “New family of binaryarrays for coded aperture imaging,” Appl. Opt. 28, 4344–4352 (1989).

14. G. Druart, N. Guérineau, R. Haïdar, J. Primot, A. Kattnig, andJ. Taboury, “Image formation by use of continuously self-imaging gratings and diffractive axicons,” Proc. SPIE 6712,671208 (2007).

15. G. Druart, J. Taboury, N. Guérineau, R. Haïdar, H. Sauer, A.Kattnig, and J. Primot, “Demonstration of image-zooming cap-ability for diffractive axicons,” Opt. Lett. 33, 366–368 (2008).

16. G. Mikula, A. Kolodziejczyk, M. Makowski, C. Prokopowicz,and M. Sypek, “Diffractive elements for imaging with ex-tended depth of focus,” Opt. Eng. (Bellingham, Wash.) 44,058001 (2005).

17. N. Davidson, A. A. Friesem, and E. Hasman, “Holographicaxilens: High resolution and long focal depth,” Opt. Lett.16, 523–525 (1991).

18. E. J. Tremblay, R. A. Stack, R. L. Morrison, and J. E. Ford,“Ultrathin cameras using annular folded optics,” Appl. Opt.46, 463–471 (2007).

19. E. J. Tremblay, R. A. Stack, R. L. Morrison, J. H. Karp, and J.E. Ford, “Ultrathin four-reflection imager,”Appl. Opt. 48, 343–354 (2009).

20. A. W. Lohmann, “Scaling laws for lens systems,”Appl. Opt. 28,4996–4998 (1989).

21. M. W. Haney, “Performance scaling in flat imagers,” Appl. Opt.45, 2901–2910 (2006).

22. J. W. Goodman, Introduction to Fourier Optics, 3rd ed.(Roberts and Company, 2005), p. 107.

23. J. W. Goodman, Introduction to Fourier Optics, 3rd ed.(Roberts and Company, 2005), p. 146.

950 APPLIED OPTICS / Vol. 50, No. 6 / 20 February 2011


24. M. Born and E. Wolf, Principles of Optics, 6th ed. (Pergamon, 1989), p. 211.

25. M. Born and E. Wolf, Principles of Optics, 6th ed. (Pergamon, 1989), p. 213.

26. M. Born and E. Wolf, Principles of Optics, 6th ed. (Pergamon, 1989), p. 228.

27. M. Born and E. Wolf, Principles of Optics, 6th ed. (Pergamon, 1989), p. 468.

28. G. Druart, N. Guérineau, R. Haïdar, M. Tauvy, S. Thétas, S. Rommeluère, J. Primot, J. Deschamps, and E. Lambert, “MULTICAM: A miniature cryogenic camera for infrared detection,” Proc. SPIE 6992, 699215 (2008).

29. D. J. Brady, “Micro-optics and megapixels,” Opt. Photonics News 17(11), 24–29 (2006).

30. F. de la Barrière, G. Druart, N. Guérineau, J. Taboury, and M. Fendler, “Integration of advanced optical functions near the focal plane array: First steps towards the on-chip infrared camera,” Proc. SPIE 7787, 778706 (2010).

31. G. Druart, N. Guérineau, R. Haïdar, J. Primot, P. Chavel, and J. Taboury, “Nonparaxial analysis of continuous self-imaging gratings in oblique illumination,” J. Opt. Soc. Am. A 24, 3379–3387 (2007).

32. N. A. Ahuja and N. K. Bose, “Design of large field-of-view high-resolution miniaturized imaging system,” EURASIP J. Adv. Signal Process. 2007, 1 (2007).

33. L. C. Laycock and V. A. Handerek, “Multi-aperture imaging device for airborne platforms,” Proc. SPIE 6737, 673709 (2007).

34. F. de la Barrière, G. Druart, N. Guérineau, J. Taboury, J. Primot, and J. Deschamps, “Modulation transfer function measurement of a multichannel optical system,” Appl. Opt. 49, 2879–2890 (2010).

35. A. Papoulis, “Generalized sampling expansion,” IEEE Trans. Circuits Syst. 24, 652–654 (1977).

36. A. V. Kanaev, J. R. Ackerman, E. F. Fleet, and D. A. Scribner, “TOMBO sensor with scene-independent superresolution processing,” Opt. Lett. 32, 2855–2857 (2007).


