Exceeding the resolving imaging power using environmental conditions

Zeev Zalevsky,1,* Efi Saat,1 Shahar Orbach,1 Vicente Mico,2 and Javier Garcia3

1School of Engineering, Bar-Ilan University, Ramat-Gan, 52900 Israel
2AIDO, Technological Institute of Optics, Colour and Imaging, Nicolás Copérnico, 7-13 Parc Tecnològic, 46980 Paterna (Valencia), Spain
3Departamento de Óptica, Universitat de València, Dr. Moliner 50, 46100 Burjassot, Spain

*Corresponding author: [email protected]

Received 27 April 2007; accepted 28 June 2007; posted 25 July 2007 (Doc. ID 82410); published 4 September 2007

We present two approaches that use the environmental conditions in order to exceed the classical Abbe's limit of resolution of an aperture-limited imaging system. First, we use water drops to improve the resolving capabilities of an imaging system using a time-multiplexing approach. The limit of the resolution improvement is set by the size of the rain drops. The rain drops falling close to the imaged object act as a sparse and random high-resolution mask attached to it. By applying proper image processing, the center of each falling drop is located, and the parameters of the encoding grating are extracted from the captured set of images. The decoding is done digitally by applying the same mask and time averaging. In many cases the urban environment includes periodic or other high-resolution objects such as fences; indeed, the urban environment includes many objects of this type, since from an engineering point of view they are considered appealing. Those objects follow well-known standards, and therefore their structure can be known a priori even without being fully capable of imaging them. We show experimentally how such objects can be used in order to superresolve the contour of moving targets passing in front of them. © 2007 Optical Society of America

OCIS codes: 100.6640, 110.4850.

1. Introduction

The field of superresolution addresses the capability of seeing beyond the physical limitations enforced by the optics and the detector of an imaging system. The limitation of the lenses is mainly related to the F-number of the optics, and the limitation of the detector is related to the size of the pixels and their number [1,2]. Overcoming those limitations, i.e., obtaining superresolution, means achieving high-end system performance using a low-end configuration. Therefore the applicability of this field is not only scientific but also industrial. Superresolution can be applied for remote imaging as well as for the near field, where seeing below the size of an optical wavelength is sometimes required. It can be used to overcome the limitations of the optical system itself and to exceed the imaging boundaries determined by the medium, e.g., turbulence and scattering.
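For orientation, using standard textbook estimates rather than results from this paper, the two limits mentioned above can be quantified roughly as

$$ \delta x_{\text{optics}} \approx 1.22\,\lambda F_{\#}, \qquad \delta x_{\text{detector}} \approx 2\,\Delta_{\text{pixel}}, $$

so that, for example, an F/5.6 lens at λ ≈ 0.5 μm blurs details finer than roughly 3.4 μm at the sensor plane, while 8.3 μm pixels (as used in the experiment of Section 3) cannot sample periods finer than about 16.6 μm.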

There are many approaches to overcome the limitations enforced by the imaging system or the medium, by applying polarization [3,4], wavelength [5,6], space or field-of-view [7–10], or time [11–13] manipulations that encode the spatial information prior to its transmission through the band-limited optical system or medium and then decode the information and recover the missing spatial content.

Emmett Leith came to optics from the field of synthetic aperture radar (SAR), which at that time used optical means to process and recover the radar information. The field of SAR is the evolutionary prototype of the time-multiplexing superresolution that was later used in optical imaging. Being familiar with this approach, Leith was naturally acquainted with the field of superresolved imaging and knew how to implement his brilliant ideas in this field as well.

During his extraordinary research career, Leith dealt with various topics of optical superresolution involving holography and interferometry [14,15], first-arriving-light (FAL) superresolution for imaging through scattering media [15,16], the development of general concepts for time [17], wavelength, and code encoding of optical wavefronts [18,19], and resolution improvement in confocal microscopy [20,21]. Leith pioneered some of those topics and made significant contributions that influenced more than one generation of scientists in those fields.

As previously mentioned, one of the most common approaches for increasing the resolution of an imaging setup limited by diffraction is to use time multiplexing [11,22,25]. The basic concept involves attaching or projecting a high-resolution moving grating on top of the target that we aim to superresolve. This moving grating encodes the spatial spectrum of the target. If a sequence of images is captured, each multiplied at the camera plane by a proper decoding grating and afterward summed, one may increase the imaging resolution up to the resolution of the encoding grating.

In the first part of this paper we suggest using the time-multiplexing approach for superresolution, where, instead of projecting a grating, we use the environmental conditions to generate it. We capture a set of images on a rainy day, assuming that the rain is not very heavy and therefore its drops are relatively sparse. We assume that the rain is close to the object. By digital processing we locate the centers of the drops, although they are smeared due to the low-resolution imaging. A decoding pattern is digitally generated by creating a set of points located at the centers of the drops. We multiply the decoding pattern by every captured image and integrate in time. Therefore, the encoding is made by a time-varying random pattern [24], with the main advantage that, due to the low fill factor, the pattern can be extracted from the captured images. A superresolved image is reconstructed with resolution as fine as the small drops falling close to the object. The main advantage of this approach is that there is no need to project a grating; rather, we use the environment in order to enhance our resolution. The superresolving approach therefore fits remote objects as well, and the distance to the object is no longer an obstacle to having its resolution enhanced.
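As a rough illustration of this processing chain (our own sketch, not the authors' code; the threshold value, the 3 × 3 local-maximum test, and all names are illustrative assumptions), the decoding could be organized as follows:

```python
import numpy as np

def decode_rain_frames(frames, thresh=0.2):
    """Illustrative sketch of the rain-drop decoding described above:
    subtract the temporal mean, place an impulse at each detected drop
    center, multiply the frame by that sparse mask, and average over time."""
    frames = np.asarray(frames, dtype=float)      # shape (T, H, W)
    mean_img = frames.mean(axis=0)                # time-averaged image
    accum = np.zeros_like(mean_img)
    for frame in frames:
        diff = frame - mean_img                   # smoothed drops alone
        # crude drop-center detection: pixels that dominate their 3x3
        # neighborhood and exceed a threshold (a stand-in for whatever
        # peak detector is actually used)
        pad = np.pad(diff, 1, mode="edge")
        neigh = [pad[i:i + diff.shape[0], j:j + diff.shape[1]]
                 for i in range(3) for j in range(3)]
        mask = (diff >= np.max(neigh, axis=0)) & (diff > thresh * diff.max())
        accum += frame * mask                     # decode and integrate in time
    return accum / len(frames)
```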

In many urban cases, there is no need to project or attach gratings, since they already exist as part of the environment [26]. In many cases those gratings are known, since most of them have repeating structural patterns and well-known standards for spatial periods. Also, instead of moving the grating, we superresolve the contour of a moving target that passes in front of such a static grating. Since movement is relative, it is not really important which one is moving and which one is static. In this paper we show experimentally how the contour of moving targets passing in front of objects typical of an urban environment, such as a fence, can be superresolved. In this case the urban objects are not known exactly a priori, but since they follow standards, the decoding can be done and the high-resolution object can be extracted.

In a more heuristic description, both approaches described in this manuscript deal with time-multiplexing superresolution. Structures containing small features (smaller than the optical resolving capability of the imager) are positioned near the object. Those structures are either the rain droplets (the first approach) or the background (the second approach). Similar to what happens with the moiré effect, those fine structures multiply the spatial distribution of the object and demodulate its high spatial frequencies into low frequencies that can now be resolved by the imager. The relative movement between the object and those fine structures (either the object is static and the rain droplets are falling, or the background is static and the object is moving) generates Doppler shifts, i.e., a time-varying phase that allows the separation between the originally high spatial frequencies that were demodulated into low ones and the originally low spatial frequencies (despite the mixing between those two types of frequencies). Proper digital postprocessing performs the demodulation and the reconstruction of the original spectrum out of the mixed spectral slots.
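To make this heuristic picture slightly more concrete, here is a schematic one-dimensional sketch (our own illustration, assuming the fine structure contributes a single harmonic of period Λ moving at relative velocity v; it is not a derivation taken from the paper):

$$ s(x)\, e^{2\pi i (x - vt)/\Lambda} \;\xrightarrow{\ \mathcal{F}_x\ }\; S\!\left(\nu - \frac{1}{\Lambda}\right) e^{-2\pi i v t/\Lambda}. $$

Spatial frequencies of s(x) in the vicinity of ν ≈ 1/Λ are thus folded down into the band that the imager can resolve, while the relative motion tags them with the temporal (Doppler) phase exp(−2πivt/Λ); multiplying by the conjugate reference and averaging over t selects these folded terms and rejects the content that was not frequency shifted, which is the separation mechanism described above.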

The structure of the paper is as follows. In Section 2 we present a technical description of the rain-drop-related approach. Its experimental results are presented in Section 3. In Section 4 we present the theory for the urban-background-related technique. Preliminary experimental results are shown in Section 5. The paper is concluded in Section 6.

2. Theoretical Analysis of the Rain Drops Approach

In the following, we use one-dimensional notation; the extension to two dimensions is straightforward. The transparency function of the rain drops is denoted as g(x, t); this function is both space and time dependent. The image is designated as s(x). The intensity point spread function of the imaging system is denoted by p(x). Therefore the intensity of each captured frame equals

$$ I(x', t) = \int s(x)\, g(x, t)\, p(x' - x)\, dx. \qquad (1) $$

By allocating the peaks of the rain function, we manage to obtain an approximate reconstruction of the encoding function g(x, t). In order to extract the drops' locations in every frame, we prepare an average of all the captured frames, ⟨I(x, t)⟩. For a given frame, taken at time t, the subtraction of the average image gives a smoothed version of the rain drops alone. Assuming that Im is the maximal gray level of I(x, t), the allocation of the maxima of the difference between each frame and the time average, while creating a decoding grating based on them, may be approximated mathematically as

$$ d(x, t) = \left[\frac{I(x, t) - \langle I(x, t)\rangle}{I_m}\right]^{K}, \qquad (2) $$

where d(x, t) is the decoding grating and K >> 1. Since the rain drops constantly and randomly vary in time and space, one may assume that

$$ \int g(x', t)\, d(x, t)\, dt \approx \delta(x - x') + \beta, \qquad (3) $$

where β is a constant. Applying the decoding grating d(x, t) to the captured intensities of Eq. (1) and time averaging yields

$$ r(x) = \int I(x, t)\, d(x, t)\, dt = \int\!\!\int s(x')\, g(x', t)\, p(x - x')\, d(x, t)\, dx'\, dt. \qquad (4) $$

Using the orthogonality of Eq. (3) yields

$$
\begin{aligned}
r(x) &= \int s(x')\, p(x - x') \left[\int g(x', t)\, d(x, t)\, dt\right] dx' \\
&\approx \int s(x')\, p(x - x')\, \big[\delta(x - x') + \beta\big]\, dx' \\
&= s(x)\, p(0) + \beta \int s(x')\, p(x - x')\, dx' = s(x)\, p(0) + \beta \cdot \mathrm{LRI},
\end{aligned}
\qquad (5)
$$

where LRI is the low-resolution image. One may see that the reconstructed image r(x) includes the high-resolution original image s(x) multiplied by a constant and added to the low-resolution image that would be obtained without applying the superresolving approach.
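A minimal one-dimensional numerical sketch of Eqs. (1)–(5) (our own illustrative code with made-up signals; it idealizes the decoding grating as unit impulses at the true drop centers, whereas in practice those centers are estimated from the frames, e.g., via Eq. (2) or the peak detection sketched in Section 1) could look like this:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 512, 2000                          # samples and number of frames
x = np.arange(N)
s = 1.0 + np.sin(2 * np.pi * x / 8)       # object with detail finer than the PSF

sigma = 6.0                               # PSF much wider than the 8-sample period
psf = np.exp(-0.5 * (np.arange(-30, 31) / sigma) ** 2)
psf /= psf.sum()
blur = lambda f: np.convolve(f, psf, mode="same")

lri = blur(s)                             # plain low-resolution image (no decoding)
r = np.zeros(N)
for _ in range(T):
    g = np.zeros(N)
    g[rng.choice(N, size=10, replace=False)] = 1.0  # sparse random "drops"
    I = blur(s * g)                       # Eq. (1): captured frame
    d = g                                 # idealized decoding: impulses at drop centers
    r += I * d                            # Eq. (4): multiply by the decoding grating
r /= T                                    # time averaging

# Per Eq. (5), r ~ s(x) p(0) + beta * LRI: the fine 8-sample oscillation that is
# washed out of `lri` reappears in `r` on top of a smooth background.
```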

3. Experimental Results of the Rain Drops Approach

In the experiment that we performed, a set of images was captured while a resolution target was placed near a water tap that was constantly spreading drops of water. By applying the algorithm previously described, we digitally extracted the decoding grating and applied it in the reconstruction process. From each captured frame we extracted the proper decoding mask and multiplied it by the frame. We summed over more than 100 frames. Each one of the frames was captured at video rate with an integration time of 1 ms in the camera. A conventional TV lens with a focal length of 16 mm and an F-number of around 5.6 to 8 was used for the imaging. The camera was a Basler A312f with pixels of 8.3 μm × 8.3 μm. The camera was controlled with Matlab through a laptop computer.

In the experiment we positioned the camera about 3 meters away from the resolution target and splashed water droplets from a water pipe toward the target.

The obtained results can be seen in Fig. 1. In Fig. 1(a) we see one frame out of the low-resolution images. In Fig. 1(b) we present the high-resolution target that we used for the experiment and that we aim to reconstruct. In Fig. 1(c) we present the obtained reconstruction after averaging over more than 100 frames. One may clearly see the reconstruction of the high-resolution features that are not seen in any individual member of the imaged low-resolution set of images.

Fig. 1. Experimental results: (a) the low-resolution images; (b) the high-resolution target we used for the experiment; (c) the obtained reconstruction after averaging over more than 100 frames.

4. Technical Description for the Urban Detection Approach

We denote by s1(x) the support of the object, which is a binary function having the shape of the target but with a gray level of one:

$$ s_1(x) = \begin{cases} 1, & s(x) \neq 0 \\ 0, & s(x) = 0 \end{cases}. \qquad (6) $$

Note that all the variables are optical intensities and the imaging system is operating under spatially incoherent illumination.

The intensity of the target that is moving in front of a periodic urban background (say, a fence) may be expressed mathematically as

$$ t(x, t) = \big[1 - s_1(x - vt)\big] \sum_n A_n \exp(2\pi i n \nu_0 x) + s(x - vt), \qquad (7) $$

where v is the target's velocity, Σ_n A_n exp(2πinν₀x) is the Fourier series of the urban periodic background (e.g., a fence), and ν₀ is its fundamental spatial frequency (the reciprocal of the fence period). The blurred intensity I(x, t) captured by the imaging camera equals

$$ I(x, t) = \int t(x', t)\, p(x - x')\, dx'. \qquad (8) $$

Substituting Eq. (7) into Eq. (8) yields

$$
\begin{aligned}
I(x, t) = {} & \sum_n A_n \int \exp(2\pi i n \nu_0 x')\, p(x - x')\, dx' \\
& - \sum_n A_n \int s_1(x' - vt)\, \exp(2\pi i n \nu_0 x')\, p(x - x')\, dx' \\
& + \int s(x' - vt)\, p(x - x')\, dx'.
\end{aligned}
\qquad (9)
$$

The decoding process involves multiplying I(x, t) by the high-resolution urban object Σ_n A_n exp(2πinν₀x) (which is known or can be extracted from the low-resolution images, since such objects follow well-known standards and formats), shifting the product back a distance vt, and summing all the images captured along the observation period (the sequence of images that we use for the superresolution); this yields

$$ R(x) = \sum_n A_n \int I(x + vt, t)\, \exp\!\big(2\pi i n \nu_0 (x + vt)\big)\, dt, \qquad (10) $$

where R(x) is the reconstructed image. Shifting back is required in order to obtain the reconstructed target always in the same spatial position; otherwise, a blurred reconstruction will occur (due to bad registration of the images). The relative shift of two images in the sequence can easily be found by correlating them with each other, for instance as in the sketch below.
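For instance, the relative shift between two frames can be estimated from the peak of their FFT-based cross-correlation (a generic sketch, not the authors' implementation; subpixel refinement and windowing are omitted):

```python
import numpy as np

def estimate_shift(ref, img):
    """Return the integer-pixel translation between two equally sized frames,
    taken from the peak of their circular cross-correlation (via FFTs).
    The sign convention depends on which frame is treated as the reference."""
    ref = ref - ref.mean()
    img = img - img.mean()
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # convert wrapped peak coordinates to signed shifts
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, corr.shape))
```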

By substituting Eq. (9) into Eq. (10) and changing the integration variables from x′ and t into x″ = x′ − vt and t, one obtains

$$
\begin{aligned}
R(x) = {} & \sum_n \sum_m A_n A_m \int \exp\!\big(2\pi i n \nu_0 (x + vt)\big)
\left[\int \exp\!\big(2\pi i m \nu_0 (x'' + vt)\big)\, p(x - x'')\, dx''\right] dt \\
& - \sum_n \sum_m A_n A_m \int \exp\!\big(2\pi i n \nu_0 (x + vt)\big)
\left[\int s_1(x'')\, \exp\!\big(2\pi i m \nu_0 (x'' + vt)\big)\, p(x - x'')\, dx''\right] dt \\
& + \sum_n A_n \int \exp\!\big(2\pi i n \nu_0 (x + vt)\big)
\left[\int s(x'')\, p(x - x'')\, dx''\right] dt.
\end{aligned}
\qquad (11)
$$

Since mathematically one has

$$ \int \exp(2\pi i n v \nu_0 t)\, \exp(2\pi i m v \nu_0 t)\, dt = \delta(n + m), \qquad
\int \exp(2\pi i n v \nu_0 t)\, dt = \delta(n), \qquad (12) $$

we will assume that the urban grating can be approximated as

$$ \sum_n \delta\!\left(x - \frac{n}{\nu_0}\right) \approx \sum_n A_n \exp(2\pi i n \nu_0 x) \approx \sum_n A_n A_{-n} \exp(2\pi i n \nu_0 x). \qquad (13) $$

This is a good approximation since fences have a high duty cycle, and therefore their Fourier coefficients A_n are close to unity. Using this relation in Eq. (11) yields

$$ R(x) \approx p(0) - \int s_1(x') \sum_n \delta\!\left(x - x' - \frac{n}{\nu_0}\right) p(x - x')\, dx'
+ A_0 \int s(x')\, p(x - x')\, dx'. \qquad (14) $$

Since the width of the blurring function is smaller than the period of the urban grating, i.e., the width of p is smaller than 1/ν₀, one obtains that, out of the summation in the second term, only one term remains, and therefore

$$
\begin{aligned}
R(x) &\approx p(0) - \int s_1(x')\, \delta(x - x')\, p(x - x')\, dx' + A_0 \int s(x')\, p(x - x')\, dx' \\
&= p(0) + A_0 \int s(x')\, p(x - x')\, dx' - p(0)\, s_1(x),
\end{aligned}
\qquad (15)
$$

which means that the reconstructed image equals a constant, plus the low-resolution target image (i.e., the target blurred by the point spread function), minus the high-resolution contour of the target (i.e., s1). Thus, the suggested approach recovers the high-resolution contour of the moving target; in order to extract it, one should subtract from R(x) the low-resolution (i.e., blurred) target.
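Putting Eqs. (6)–(10) and Eq. (15) together, a one-dimensional toy implementation (our own sketch; the fence period, target shape, velocity, and ideal registration are all assumed rather than taken from the experiment) might read:

```python
import numpy as np

N, T, period, v = 1024, 64, 8, 3           # samples, frames, fence period, target speed
x = np.arange(N)

fence = (x % period < 6).astype(float)     # high-duty-cycle binary "fence" (known a priori)
s1 = ((x > 400) & (x < 620)).astype(float) # binary support s1(x) of the target, Eq. (6)
s = s1 * (0.5 + 0.3 * np.sin(2 * np.pi * x / 50))  # target intensity s(x)

psf = np.ones(24) / 24.0                   # blur much wider than the fence period
blur = lambda f: np.convolve(f, psf, mode="same")

R = np.zeros(N)
for t in range(T):
    shift = v * t
    scene = (1.0 - np.roll(s1, shift)) * fence + np.roll(s, shift)  # Eq. (7)
    I = blur(scene)                                                 # Eq. (8)
    R += np.roll(I * fence, -shift)        # Eq. (10): multiply by the fence, shift back
R /= T

# Per Eq. (15), R contains a constant plus the blurred target minus the sharp
# silhouette s1; subtracting an independently measured blurred target (and the
# constant offset) then isolates the high-resolution contour of the moving target.
```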

5. Experimental Results for the Urban Detection Approach

In Fig. 2 we present some experimental results demonstrating the described approach. In Fig. 2(a) one may see the high-resolution urban texture, including the high-resolution fence. Such a fence, even if not resolved by the imaging setup, can be extracted from the low-resolution captured image, since the spatial features of fences have standardized spacing and can therefore be anticipated and estimated even without being fully imaged. In Fig. 2(b) we capture a set of low-resolution images, including bicycles, as a moving target passing in front of the urban fence. The sequence of images was captured with a Konica Minolta Z6 digital camera (6 megapixels) with an integration time of 1 ms; we used its lens at a focal length of 35 mm.

One may see that, due to the low resolution of the imaging system, the bicycles cannot even be identified. In Fig. 2(c) we apply our decoding algorithm, which includes multiplying each low-resolution image by the high-resolution coding fence, correlating in order to find the relative movement of the target between sequential frames, backshifting the target always to the same spatial position, and summing all the decoded images in the sequence. The result may be seen in Fig. 2(c). As one can see, the moving target is superresolved, and one can now easily recognize that the moving target was indeed a bicycle.

6. Conclusions

In this paper we have experimentally demonstrated two applicable superresolving approaches that use the environmental conditions in order to obtain better resolution imaging. In the first approach we have demonstrated how one may use the natural environment, such as rain drops, to obtain superresolved imaging without the need to attach or to project anything onto the observed object. By applying the proper digital decoding algorithm, we have experimentally demonstrated a resolution improvement of more than 3 times in comparison with the image quality obtained without applying the proposed approach. The method uses digital processing, and it does not require a priori knowledge of the encoding/decoding function that should be applied for the superresolution process.

In the second technique we have shown a way to reconstruct the contour of moving objects passing in front of a periodic urban background. One need not have a priori knowledge of the structure of the periodic urban background, since it usually follows well-known fabrication standards and therefore can be estimated and used to decode the high-resolution image out of the sequence of low-resolution images.

Fig. 2. (Color online) (a) High-resolution urban environment; (b) part of the sequence of low-resolution images containing the moving target; (c) the superresolved moving target.

This work was supported by the Spanish Ministerio de Educación y Ciencia under project FIS2007-60626.

References

1. Z. Zalevsky and D. Mendlovic, Optical Super Resolution (Springer-Verlag, 2003).
2. Z. Zalevsky, D. Mendlovic, and A. W. Lohmann, "Optical systems with improved resolving power," in Progress in Optics, E. Wolf, ed. (Elsevier, 2000), Vol. 40, pp. 271–341.
3. W. Gartner and A. W. Lohmann, "An experiment going beyond Abbe's limit of diffraction," Z. Phys. 174, 18 (1963).
4. A. Zlotnik, Z. Zalevsky, and E. Marom, "Superresolution with nonorthogonal polarization coding," Appl. Opt. 44, 3705–3715 (2005).
5. A. I. Kartashev, "Optical systems with enhanced resolving power," Opt. Spectrosc. 9, 204–206 (1960).
6. D. Mendlovic, J. Garcia, Z. Zalevsky, E. Marom, D. Mas, C. Ferreira, and A. W. Lohmann, "Wavelength multiplexing system for a single mode image transmission," Appl. Opt. 36, 8474–8480 (1997).
7. M. A. Grimm and A. W. Lohmann, "Superresolution image for 1-D objects," J. Opt. Soc. Am. 56, 1151–1156 (1966).
8. H. Bartelt and A. W. Lohmann, "Optical processing of 1-D signals," Opt. Commun. 42, 87–91 (1982).
9. W. Lukosz, "Optical systems with resolving powers exceeding the classical limits. II," J. Opt. Soc. Am. 57, 932–941 (1967).
10. Z. Zalevsky, D. Mendlovic, and A. W. Lohmann, "Super resolution optical systems using fixed gratings," Opt. Commun. 163, 79–85 (1999).
11. W. Lukosz, "Optical systems with resolving powers exceeding the classical limits," J. Opt. Soc. Am. 56, 1463–1472 (1966).
12. M. Françon, "Amélioration de résolution d'optique," Nuovo Cimento Suppl. 9, 283–290 (1952).
13. D. Mendlovic, I. Kiryuschev, Z. Zalevsky, A. W. Lohmann, and D. Farkas, "Two dimensional superresolution optical system for temporally restricted objects," Appl. Opt. 36, 6687–6691 (1997).
14. P. Naulleau and E. Leith, "Imaging through optical fibers by spatial coherence encoding methods," J. Opt. Soc. Am. A 13, 2096–2101 (1996).
15. K. Mills, Z. Zalevsky, and E. N. Leith, "Holographic generalized first-arriving light approach for resolving images viewed through a scattering medium," Appl. Opt. 41, 2116–2121 (2002).
16. K. D. Mills, L. Deslaurier, D. S. Dilworth, S. M. Grannell, B. G. Hoover, B. D. Athey, and E. N. Leith, "Investigation of ultrafast time gating by spatial filtering," Appl. Opt. 40, 2282–2289 (2001).
17. P. C. Sun and E. N. Leith, "Superresolution by spatial-temporal encoding methods," Appl. Opt. 31, 4857–4862 (1992).
18. Z. Zalevsky, E. Leith, and K. Mills, "Optical implementation of code division multiplexing for super resolution. Part I. Spectroscopic method," Opt. Commun. 195, 93–100 (2001).
19. Z. Zalevsky, E. Leith, and K. Mills, "Optical implementation of code division multiplexing for super resolution. Part II. Temporal method," Opt. Commun. 195, 101–106 (2001).
20. W.-C. Chien, D. S. Dilworth, E. Liu, and E. N. Leith, "Synthetic-aperture chirp confocal imaging," Appl. Opt. 45, 501–510 (2006).
21. E. N. Leith, K. D. Mills, P. P. Naulleau, D. S. Dilworth, I. Iglesias, and H. S. Chen, "Generalized confocal imaging and synthetic aperture imaging," J. Opt. Soc. Am. A 16, 2880–2886 (1999).
22. A. Shemer, D. Mendlovic, Z. Zalevsky, J. Garcia, and P. G. Martinez, "Superresolving optical system with time multiplexing and computer decoding," Appl. Opt. 38, 7245–7251 (1999).
23. A. Shemer, Z. Zalevsky, D. Mendlovic, N. Konforti, and E. Marom, "Time multiplexing superresolution based on interference grating projection," Appl. Opt. 41, 7397–7404 (2002).
24. J. Garcia, Z. Zalevsky, and D. Fixler, "Synthetic aperture superresolution by speckle pattern projection," Opt. Express 13, 6073–6078 (2005).
25. V. Mico, Z. Zalevsky, P. Garcia-Martinez, and J. Garcia, "Single-step superresolution by interferometric imaging," Opt. Express 12, 2589–2596 (2004).
26. Z. Zalevsky, J. Garcia, and C. Ferreira, "Superresolved imaging of remote moving targets," Opt. Lett. 31, 586–588 (2006).
