Towards a transparent, flexible, scalable and disposable image sensor using thin-film luminescent concentrators

Alexander Koppelhuber and Oliver Bimber∗
Johannes Kepler University Linz, Austria

[email protected]

Abstract: Most image sensors are planar, opaque, and inflexible. We present a novel image sensor that is based on a luminescent concentrator (LC) film which absorbs light from a specific portion of the spectrum. The absorbed light is re-emitted at a lower frequency and transported to the edges of the LC by total internal reflection. The light transport is measured at the border of the film by line scan cameras. With these measurements, images that are focused onto the LC surface can be reconstructed. Thus, our image sensor is fully transparent, flexible, scalable and, due to its low cost, potentially disposable.

© 2013 Optical Society of America

OCIS codes: (110.3010) Image reconstruction techniques; (110.0110) Imaging systems.

References and links
1. H. C. Ko, M. P. Stoykovich, J. Song, V. Malyarchuk, W. M. Choi, C. J. Yu, J. B. Geddes III, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454(7205), 748–753 (2008).
2. T. N. Ng, W. S. Wong, M. L. Chabinyc, S. Sambandan, and R. A. Street, “Flexible image sensor array with bulk heterojunction organic photodiode,” Appl. Phys. Lett. 92(21), 213303 (2008).
3. G. Yu, J. Wang, J. McElvain, and A. J. Heeger, “Large-area, full-color image sensors made with semiconducting polymers,” Adv. Mater. 10(17), 1431–1434 (1998).
4. T. Someya, Y. Kato, S. Iba, Y. Noguchi, T. Sekitani, H. Kawaguchi, and T. Sakurai, “Integration of organic FETs with organic photodiodes for a large area, flexible, and lightweight sheet image scanners,” IEEE T. Electron Dev. 52(11), 2502–2511 (2005).
5. A. F. Abouraddy, O. Shapira, M. Bayindir, J. Arnold, F. Sorin, D. S. Hinczewski, J. D. Joannopoulos, and Y. Fink, “Large-scale optical-field measurements with geometric fibre constructs,” Nature Mat. 5(7), 532–536 (2006).
6. R. Koeppe, A. Neulinger, P. Bartu, and S. Bauer, “Video-speed detection of the absolute position of a light point on a large-area photodetector based on luminescent waveguides,” Opt. Express 18(3), 2209–2218 (2010).
7. S. A. Evenson and A. H. Rawicz, “Thin-film luminescent concentrators for integrated devices,” Appl. Opt. 34(31), 7231–7238 (1995).
8. P. J. Jungwirth, I. S. Melnik, and A. H. Rawicz, “Position-sensitive receptive fields based on photoluminescent concentrators,” P. Soc. Photo-Opt. Ins. 3199, 239–247 (1998).
9. I. S. Melnik and A. H. Rawicz, “Thin-film luminescent concentrators for position-sensitive devices,” Appl. Opt. 36(34), 9025–9033 (1997).
10. J. S. Batchelder, A. H. Zewail, and T. Cole, “Luminescent solar concentrators. 1: Theory of operation and techniques for performance evaluation,” Appl. Opt. 18(18), 3090–3110 (1979).
11. M. Slaney and A. Kak, Principles of Computerized Tomographic Imaging (IEEE Press, 1988).
12. G. T. Herman, Fundamentals of Computerized Tomography: Image Reconstruction from Projections, 2nd ed. (Springer Verlag, 2010).
13. A. H. Andersen and A. C. Kak, “Simultaneous algebraic reconstruction technique (SART): a superior implementation of the ART algorithm,” Ultrasonic Imaging 6(1), 81–94 (1984).
14. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004).

#180837 - $15.00 USD Received 6 Dec 2012; revised 8 Feb 2013; accepted 11 Feb 2013; published 20 Feb 2013
(C) 2013 OSA 25 February 2013 / Vol. 21, No. 4 / OPTICS EXPRESS 4796

15. H. Seetzen, W. Heidrich, W. Stuerzlinger, G. Ward, L. Whitehead, M. Trentacoste, A. Ghosh, and A. Vorozcovs, “High dynamic range display systems,” ACM T. Graphic 23(3), 760–768 (2004).
16. J. Y. Han, “Low-cost multi-touch sensing through frustrated total internal reflection,” in Proceedings of the 18th annual ACM symposium on User interface software and technology (Association for Computing Machinery, New York, 2005), 115–118.
17. J. Moeller and A. Kerne, “Scanning FTIR: unobtrusive optoelectronic multi-touch sensing through waveguide transmissivity imaging,” in Proceedings of the fourth international conference on Tangible, embedded, and embodied interaction (Association for Computing Machinery, New York, 2010), 73–76.

1. Introduction

Conventional optoelectronic techniques have forced image sensors into a planar shape. Recent approaches ease this situation. For instance, silicon photodiodes are interconnected by elastomeric transfer elements in order to realize a hemispherical detector geometry that mimics the shape of the human eye, theoretically enabling a wide field of view and low aberrations [1]. Organic photodiodes, as another example, allow ink-jet digital lithography to be used to implement sensors on fully flexible substrates [2–4].

Image sensors that consist of a grid of light-sensing fibers are also flexible, but, compared to those consisting of grids of photodiodes, block less light. They might enable new applications such as lens-less imaging [5]. Thin-film luminescent concentrators (LCs) are polymer films doped with fluorescent dyes that absorb light of a specific wavelength and re-emit it at a longer wavelength. LC-based waveguides forward the emitted light to the edges of the LC by total internal reflection with a non-linear attenuation that depends on the distance traveled by the light. Normally, they are used to reduce costs and improve solar cells with poor spectral response at short wavelengths. Photodiodes glued to the LC surface plane create an interface with a higher refractive index than air or the polymer of the LC. This causes light to be decoupled from the LC at the positions of the photodiodes. The attenuation of the measured light at these positions allows localization of an incident light point either on horizontally and vertically interwoven LC strips or, with simple triangulation, on a continuous surface. Thus, LCs have also been used for camera-free laser-pointer tracking on potentially large and scalable sensor surfaces [6–9].

Fig. 1. A thin-film luminescent concentrator (LC) is a flexible, fully transparent, scalable, and low-cost polymer film. Our approach reconstructs grayscale images focused onto the LC surface. The image shows Bayer Makrofol® LISA Green LC film that absorbs blue and re-emits green light.

LCs have several interesting properties: they are flexible, fully transparent, and low-cost (and therefore scalable and disposable) polymer films (Fig. 1). The state of the art that uses LCs for light sensing is currently able to reconstruct only simple point images. Our approach makes the reconstruction of entire grayscale images possible.

This is, to the best of our knowledge, the first method that enables an image sensor that is fully transparent (no integrated circuits or other structures such as grids of optical fibers or photodiodes), flexible (makes curved sensor shapes possible), scalable (sensor size can range from small to large at similar cost; pixel size is not restricted to the size of the photodiodes), and disposable (the sensing area is low-cost and can be replaced if damaged).

2. Light transport within luminescent concentrators

Luminescent concentrators, described in detail in [10], are polymer plates or foils that are doped with fluorescent molecules. Light that is not reflected on the surface of an LC passes through, making it transparent. The dye inside the LC absorbs a specific portion of the light spectrum that passes through, and re-emits it at a lower frequency. For instance, blue light is absorbed and re-emitted as green light. The band of the spectrum that can be absorbed depends on the chemical structure of the dye. The fluorescent particles randomly emit light in all directions. Most of the first-generation photons are trapped inside the LC due to total internal reflection (TIR) and are propagated to the edges of the LC.

The amount of light that finally reaches the edge is subject to various losses. Cone-loss occurs if the angle between the incident ray of light and the normal of the LC-to-air interface is smaller than a critical angle. The critical angle can be derived from Snell’s law and is given by

θc = arcsin(1/n),  (1)

where n is the refractive index. For example, TIR occurs at an angle greater than 39.2 degrees for an LC made of polycarbonate. The solid angles above and below a fluorescent particle where TIR does not occur are cone-shaped. For a planar LC with refractive index n, the fraction of luminescence P that is lost due to cone-loss is given by

P = 1 − √(1 − 1/n²).  (2)

Fig. 2. Light transport within a luminescent concentrator: 1) Incident light is transmitted and not absorbed by a fluorescent molecule. 2) Emitted light is lost at the critical escape cones. 3) Light that is not reflected on the surface is absorbed, re-emitted, and transported to the edge either directly or by total internal reflection. 4) Emitted light is self-absorbed by another dye molecule.

For an LC film made of polycarbonate (n = 1.58), the loss is approximately 22.6%.

Absorption processes are another source of loss along the path from a fluorescent particle to the edge. Self-absorption is the re-absorption of a fluorescent photon by another dye molecule due to overlapping absorption and emission spectra. The longer the path length, the higher the probability of self-absorption. The polymer host itself is another source of absorption. The Beer-Lambert law states that the intensity of light decreases according to

I = I₀ e^(−µd)  (3)

when it travels through a homogeneous absorbing substance, where I is the intensity leaving the material, I₀ is the intensity entering the material, µ is the attenuation coefficient that is constant along the transport path, and d is the length of the transport path.

Other losses are due to scattering and incomplete total internal reflection because of LC surface imperfections. The absorption and propagation of light within an LC is illustrated in Fig. 2.
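The loss mechanisms above are straightforward to evaluate numerically. The following sketch (plain Python, using the polycarbonate values discussed above) computes the critical angle of Eq. (1), the cone-loss fraction of Eq. (2), and the Beer-Lambert attenuation of Eq. (3); the attenuation coefficient µ and the path length are illustrative placeholders, not measured values for the LC film.

```python
import math

n = 1.58  # refractive index of polycarbonate

# Eq. (1): critical angle for total internal reflection
theta_c = math.degrees(math.asin(1.0 / n))   # approx. 39.3 degrees

# Eq. (2): fraction of luminescence lost through the escape cones
P = 1.0 - math.sqrt(1.0 - 1.0 / n**2)        # approx. 0.226 (22.6%)

# Eq. (3): Beer-Lambert attenuation over a transport path of length d
mu = 0.01   # attenuation coefficient in 1/mm (illustrative value only)
I0 = 1.0    # intensity entering the material
d = 100.0   # path length in mm (illustrative value only)
I = I0 * math.exp(-mu * d)

print(theta_c, P, I)
```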

3. Measuring light transport by sampling a 2D light field

We use a square thin-film LC that collects and transports the incident light of an image which is focused on its surface. The LC area is divided into a virtual grid of n × m = l discrete entrance points (i.e., pixels). The amount of transported light at the four edges of the LC sheet is measured with CIS (contact image sensor) line scan cameras. Each line scan camera consists of a single array of photosensors.

The correlation of the transport losses between the entrance points (i.e., the pixels p) on the LC surface and the total of k photosensors (s) at the edges of the LC sheet can be represented by

~s = T~p + ~e,  (4)

where ~s is the k-dimensional column vector of all photosensor responses, ~p the l-dimensional column vector of all pixel intensities, and T the k × l-dimensional light-transport matrix of the LC. Note that ~e is the k-dimensional column vector of the constant ambient light contribution that is additionally transported to the photosensors (including also the sensors’ constant noise level).

Computing the coefficients of the light-transport matrix T as explained in section 2 would require precise knowledge of the LC’s internal (and potentially imperfect) structure and shape at each location, which is practically impossible. Instead, we measure T as part of a one-time calibration procedure: projecting a single light impulse to one pixel ~pi enables simultaneous measurement of the i-th column of T, which equals the sensor responses ~s under the impulse illumination at pixel ~pi. Repeating this for all pixels ~pi, with 1 ≤ i ≤ l, yields all coefficients of T. Note that the photosensor response has to be linear. Thus, the line scan cameras must initially be linearized. Furthermore, the ambient light contribution ~e must be measured and subtracted from ~s when the matrix coefficients are sampled. The measurement of ~e is part of the calibration process, and must be repeated if the ambient light changes significantly over time. The transport matrix remains constant as long as the shape of the LC is not changed; otherwise, T must be re-calibrated.
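The column-by-column calibration can be sketched as follows (a numerical toy in NumPy, with a randomly generated ground-truth matrix standing in for the physical light transport; the sizes k and l are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
k, l = 12, 6                             # photosensors, pixels
T_true = rng.uniform(0.0, 1.0, (k, l))   # stand-in for the physical light transport
e = np.full(k, 0.05)                     # constant ambient light + sensor noise floor

def measure(p):
    """Simulated (linearized) photosensor response, Eq. (4): s = T p + e."""
    return T_true @ p + e

# One-time calibration: project an impulse to each pixel i and record
# the i-th column of T as the ambient-corrected sensor response.
T = np.zeros((k, l))
for i in range(l):
    impulse = np.zeros(l)
    impulse[i] = 1.0
    T[:, i] = measure(impulse) - e

assert np.allclose(T, T_true)  # the calibrated matrix matches the ground truth
```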

In principle, the image focused on a statically (but arbitrarily) shaped LC surface can then be reconstructed with the inverse light transport

~p = T⁻¹(~s − ~e)  (5)


Fig. 3. Schema for sampling light transport as a 2D light field: (a) Photosensors (s1, s2, ..., sj) located at the edges of the LC sheet, which is divided into (p1, p2, ..., pi) virtual pixels (from top left to bottom right). The photosensors are positioned at the bottom of the triangular aperture slits that are located along the LC edges. (b) Close-up of a triangular slit. Each photosensor measures the transported light integral at a particular angle. (c) The measurements of the photosensors at the same local position within each triangle at the same edge can be considered as the projection of light to the edge at a specific angle.

or an alternative image reconstruction technique, such as tomographic reconstruction (e.g., filtered backprojection).

However, since each photodiode measures the integral of all pixel contributions across the entire LC film, the light-transport matrix would be dense with a high condition number, and image reconstruction using Eq. (5) would become very unstable (particularly in the presence of sensor noise). A tomographic reconstruction would be seriously undersampled.

In order to solve this problem, we cut triangular slits into the LC edges, on the surfaces of which we placed the photodiodes (Figs. 3 and 4). While reflective paint underneath the photosensors at the back of the LC film reflects additional light towards the photosensors, opaque plasticine filled into the cut-out film areas reduces stray light. Both lead to a cleaner signal when measuring the decoupled light.

Note that each triangular slit can be considered as a simple one-dimensional camera, with the slit-opening at the top corresponding to a one-dimensional aperture.

Measuring the light-transport matrix or a focused image using this triangular slit structure corresponds to sampling a two-dimensional light field L(x,φ), which describes the amount of light being transported within the LC film towards each discrete position x at the LC edges, from each discrete direction φ. In this case, the light-transport matrix used for image reconstruction in Eq. (5) becomes sparse and its condition number is reduced. Further, more positional and directional samples are available for an alternative tomographic image reconstruction.
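With a well-conditioned transport matrix, the inverse light transport of Eq. (5) recovers the pixel vector directly. The sketch below uses a synthetic, diagonally boosted matrix as a toy stand-in; it illustrates the forward/inverse relationship and the role of the condition number, not the actual slit geometry:

```python
import numpy as np

rng = np.random.default_rng(1)
l = 16                                  # number of pixels (square system, k = l)
# Synthetic transport matrix; the boosted diagonal keeps the condition number low.
T = rng.uniform(0.0, 1.0, (l, l)) + 8.0 * np.eye(l)
p_true = rng.uniform(0.0, 1.0, l)       # image focused onto the LC
e = np.full(l, 0.1)                     # ambient light contribution

s = T @ p_true + e                      # forward light transport, Eq. (4)
p_rec = np.linalg.solve(T, s - e)       # inverse light transport, Eq. (5)

print(np.linalg.cond(T))                # conditioning governs noise amplification
assert np.allclose(p_rec, p_true)
```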

4. Image reconstruction

The measured light-transport matrix includes all attenuations of transported light that are due to cone-loss, scattering, incomplete total internal reflection, imperfections of the LC structure, and (self-)absorption, as explained in section 2.

Reconstructing the image focused on the LC surface requires solving a system of linear equations (Eq. (4)) in ~p. By determining the (pseudo-)inverse light-transport matrix T⁻¹, a direct solution in ~p can be found, as given by Eq. (5). However, the inverse cannot be calculated


for every matrix and, even if it could, it would not be very robust against noise. Other methods, such as QR decomposition (QRD), singular value decomposition (SVD), biconjugate gradients stabilized (BiCGStab), and non-negative least squares (NNLS), yield better solutions in the presence of noise. In our experiments, we found BiCGStab and NNLS to be most robust when compared to QRD, SVD, and the pseudo-inverse of T. Section 7 discusses this in more detail.
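As an illustration of these alternatives, the sketch below solves a small synthetic system with SciPy's BiCGStab and NNLS implementations (assuming SciPy is available; the matrix is a toy stand-in, not a measured light-transport matrix):

```python
import numpy as np
from scipy.sparse.linalg import bicgstab
from scipy.optimize import nnls

rng = np.random.default_rng(2)
l = 16
T = rng.uniform(0.0, 1.0, (l, l)) + 8.0 * np.eye(l)  # well-conditioned toy system
p_true = rng.uniform(0.0, 1.0, l)                    # non-negative pixel intensities
s = T @ p_true                                       # ambient-corrected measurements

# BiCGStab: iterative Krylov solver, also practical for high resolutions.
p_bicg, info = bicgstab(T, s)
assert info == 0                                     # 0 signals convergence

# NNLS: enforces the physical constraint p >= 0.
p_nnls, _ = nnls(T, s)

assert np.allclose(p_bicg, p_true, atol=1e-3)
assert np.allclose(p_nnls, p_true, atol=1e-3)
```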

The drawback of most of these methods is that they are not suitable for reconstructing high-resolution images. A resolution of 512 × 512 pixels results in a system of more than 260,000 unknowns. Even with today’s computing power, it would take considerable time to solve such an extensive equation system to the required accuracy. However, numerical solvers such as NNLS can be useful in the reconstruction of low-resolution images, while BiCGStab also supports high resolutions. Section 7 presents a more detailed performance and quality analysis.

Alternatively, 2D reconstruction of higher-resolution images can be achieved tomographically from the multiple 1D projections that are included in the sampled light field L(x,φ) over varying directions φ, as illustrated in Fig. 3(c). This corresponds to a Radon transform, and tomographic image reconstruction can be accomplished using a backprojection technique that enables fast and robust reconstruction of higher-resolution images:

For all pixels of the image to be reconstructed, backprojection integrates the light that is transported through a pixel in all directions. While a single row of the light-transport matrix T represents the contribution of all image pixels to one photosensor, a single column of T represents the contribution of one image pixel to all photosensors. Thus, the tomographic backprojection operator is, in principle, equivalent to the transpose of the light-transport matrix, and tomographic image reconstruction with backprojection corresponds to

~p = T^T(~s − ~e),  (6)

where applying the transpose of T corresponds to multiplying each column of T with the measured photosensor values (without the ambient light contribution).
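The behaviour of Eq. (6) can be illustrated with a deliberately simple 1D toy transport matrix in which each photosensor integrates three adjacent pixels (an illustration of the backprojection operator only, not of the actual LC geometry):

```python
import numpy as np

l = 9
T = np.zeros((l, l))
for i in range(l):                   # sensor i integrates pixels i-1, i, i+1 (cyclic)
    T[i, [(i - 1) % l, i, (i + 1) % l]] = 1.0

p_true = np.zeros(l)
p_true[4] = 1.0                      # a single bright point in the image center
e = np.full(l, 0.2)                  # ambient light contribution
s = T @ p_true + e                   # measured photosensor responses, Eq. (4)

p_bp = T.T @ (s - e)                 # backprojection, Eq. (6)

# Backprojection recovers the point's position, but blurred by the PSF:
# the result peaks at pixel 4 and falls off towards the neighbors.
assert p_bp.argmax() == 4
print(p_bp)
```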

Simple backprojection does not directly reconstruct the original image but a blurred version thereof. Consider the backprojection result of an image containing a single point in the image center. It would show a point spread that falls off towards the image edges and is proportional to the light loss over a particular transport distance. Thus, backprojection reconstructs the convolution of the desired image with a point-spread function (PSF) that depends on the sampling of the light transport within the LC. To obtain the desired image, the backprojection result must be deconvolved (or inverse-filtered) with the PSF. In practice, the PSF can vary locally, as the internal structure of the LC and the sampling density might vary.

Compared to simple backprojection, advanced filtered backprojection techniques reconstruct a deblurred image by pre-filtering the Radon transform before backprojection instead of deconvolving the backprojected result. This has significant advantages in terms of performance and robustness. While deconvolution is ill-posed, determining the proper filter parameters is also a challenging problem.

Fig. 4. Microscopic views of the triangular slit structure: (a) Darkfield image of a single triangular slit sampling multiple directions ~φ at one slit position ~xi. (b) Brightfield image of multiple triangular slits sampling the 2D light field L(x,φ).

The algebraic reconstruction technique (ART) [11, 12] is an iterative approach to tomographic reconstruction that is based on series expansion. It starts with an initial guess at the solution vector ~p, which is projected orthogonally onto the first hyperplane (the first equation) of the linear system ~s = T~p, resulting in a solution for ~p. This process is repeated for the remaining equations of the system, which yields a solution vector ~p that approximates the overall solution. This iterative step of ART is repeated n times, each time using the solution vector ~p of the previous iteration as the initial guess. We apply a faster variant of ART called the simultaneous algebraic reconstruction technique (SART) [13]. Instead of calculating each value of ~p sequentially for each equation, SART calculates ~p simultaneously for all equations of the linear system within one iteration.
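A minimal SART iteration can be sketched in NumPy on a synthetic non-negative system (the diagonal boost keeps the toy system well-conditioned; λ is the relaxation factor). Each iteration updates all components of ~p simultaneously from the row- and column-normalized residual:

```python
import numpy as np

rng = np.random.default_rng(3)
k, l = 40, 16
T = rng.uniform(0.1, 1.0, (k, l))   # synthetic non-negative transport matrix
T[:l] += 3.0 * np.eye(l)            # diagonal boost: keeps the toy system well-conditioned
p_true = rng.uniform(0.0, 1.0, l)
s = T @ p_true                      # consistent, ambient-corrected measurements

row_sum = T.sum(axis=1)             # per-equation (hyperplane) normalization
col_sum = T.sum(axis=0)             # per-pixel normalization
lam = 1.0                           # relaxation factor, 0 < lam < 2

p = np.zeros(l)                     # initial guess
for _ in range(5000):
    # one SART iteration: all components of p are updated simultaneously
    p += lam * (T.T @ ((s - T @ p) / row_sum)) / col_sum

# the residual shrinks towards zero as p approaches the solution
assert np.linalg.norm(s - T @ p) < 1e-3 * np.linalg.norm(s)
```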

5. Super-resolution imaging

Generally, image reconstruction becomes more error-prone and time-consuming with increasing image resolutions that lead to large light-transport matrices. This applies to tomographic methods and to other techniques that solve large equation systems numerically. Insufficient dynamic range and low signal-to-noise ratio of the photosensors, as well as a low sampling density, make image reconstruction unstable for high-resolution images.

However, instead of reconstructing a high-resolution image with a single high-resolution light-transport matrix, the image can be approximated by combining the results of multiple reconstructions with low-resolution but shifted light-transport matrices (Fig. 5). This makes the reconstruction of higher-resolution images feasible with adequate reconstruction quality and at acceptable speed.

Fig. 5. Super-resolution reconstruction with shifted light-transport matrices (principle): reconstructing 2 × 2 low-resolution pixels (blue) at 3 × 3 sub-pixel-shifted positions results in an image with a resolution of 6 × 6 pixels (yellow).


Fig. 6. Super-resolution reconstruction steps (9 × 9 to 27 × 27 example): The upper row shows the nine low-resolution reconstructions created with the 3 × 3 shifted light-transport matrices. The center row shows the same images with the reconstructed pixels placed at the correct positions within the high-resolution image. The bottom row presents the accumulation of the center-row images from left to right. (a) The final 27 × 27 super-resolution reconstruction. (b) Best possible result: original image (d) convolved with a 3 × 3 average kernel. (c) Direct reconstruction with a single high-resolution transport matrix.

Just as the image can be focused anywhere on the LC surface, the light-transport matrix can be measured at any position. Thus, during the one-time calibration procedure, we measure multiple light-transport matrices with light impulses that are shifted by sub-pixel, rather than full-pixel, distances. This leads to a set of low-resolution light-transport matrices that reconstruct low-resolution images at sub-pixel-shifted positions on the LC surface. The intensities of the reconstructed low-resolution pixels equal the average of the intensities of the high-resolution image regions that are focused on the LC surface underneath the corresponding low-resolution pixel areas.

Thus, each sub-pixel-shifted and reconstructed low-resolution pixel corresponds to a pixel of the high-resolution image, as shown in Fig. 5 for the example of a super-resolution reconstruction from nine shifted 2 × 2 images to one 6 × 6 image.

The result reconstructs the desired high-resolution image convolved with an average filter whose kernel size equals the quotient of the two resolutions involved (high resolution / low resolution). Figure 6 illustrates the nine reconstruction steps needed to compute a 27 × 27 image from shifted 9 × 9 image reconstructions. The result (Fig. 6(a)) approximates the desired high-resolution image convolved by a 27/9 × 27/9 = 3 × 3 average kernel (Fig. 6(b)).

Without subsequent deconvolution, the convolved image defines the quality limit of our super-resolution technique. Nevertheless, it still leads to a better image quality than any of the low-resolution reconstructions that can be achieved with the same precision of light-transport sampling (i.e., with the same light-transport matrix resolution), as can be seen in Fig. 6. Figure 6(c) illustrates the result achieved with a single high-resolution light-transport matrix that attempts to reconstruct the 27 × 27 pixels directly. The low dynamic range and SNR of the photosensors lead to a noisy reconstruction. In section 7, we evaluate the advantages of super-resolution reconstruction over direct reconstruction in more detail.

Note that during capture, only a single measurement is necessary for reconstructing a super-resolution image. Thus, the recording time is not increased. Only the one-time calibration procedure takes additional time. To compute an image of h × h resolution with multiple l × l image reconstructions, (h/l)² light-transport matrices must be calibrated.
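The reassembly logic can be sketched deterministically (NumPy; a known 8 × 8 toy scene stands in for the focused image, and 2 × 2 low-resolution reconstructions at 3 × 3 sub-pixel shifts are simulated as exact box averages):

```python
import numpy as np

H = np.arange(64, dtype=float).reshape(8, 8)   # high-resolution scene (toy data)
L, K = 2, 3                                    # low-res size, shift factor (kernel size)

def low_res_reconstruction(dy, dx):
    """Simulated L x L reconstruction whose pixels average K x K scene regions,
    shifted by (dy, dx) sub-pixel (i.e., high-resolution) steps."""
    out = np.zeros((L, L))
    for i in range(L):
        for j in range(L):
            out[i, j] = H[K*i + dy:K*i + dy + K, K*j + dx:K*j + dx + K].mean()
    return out

# Reassemble the (L*K) x (L*K) super-resolution image from the shifted reconstructions.
S = np.zeros((L * K, L * K))
for dy in range(K):
    for dx in range(K):
        S[dy::K, dx::K] = low_res_reconstruction(dy, dx)

# The result equals the scene convolved with a K x K average kernel (valid region).
expected = np.array([[H[r:r+K, c:c+K].mean() for c in range(L*K)] for r in range(L*K)])
assert np.allclose(S, expected)
```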


6. Experimental setup and implementation

The sampling schema in Fig. 3 provides a number of variable parameters that need to be optimized in order to obtain light-transport matrices with high numerical stability (Fig. 7):

In addition to the aperture width a and the distance between photosensors and aperture d, the optimal number of triangular slits n that surround the LC imaging area must be determined.

In general, the integration area of one photosensor should only cover a single line of pixels, so that each equation contains as few unknowns as possible, since this results in a sparse transport matrix and higher numerical stability. The area can be reduced by either decreasing a or increasing d, but, in order to retain a wide field of view for each triangular slit, a small aperture width is preferred to a large distance between sensor elements and aperture.

The aperture width a and the distances d and w define both the field of view α of a triangular slit and the integration area of a single photosensor:

a = (2d tan(α/2) − w) / (tan(α/2) cot(α/2)).  (7)

These parameters must be chosen such that a single photosensor captures the light of as few pixels as possible. At the same time, the whole LC surface area must be covered such that no pixel is omitted and each pixel is measured multiple times from different directions. In general, this requires a wide field of view α and a small aperture width a. However, there is no clear analytical correlation between the parameter values and the condition number of the resulting light-transport matrix (which defines its numerical stability).

We found the optimal values for our prototypes by a brute-force search of the entire parameter space (a, d, and total number of triangular slits n per edge), minimizing the condition number of the resulting light-transport matrix. For a given set of parameters, the light-transport matrix is simulated with the analytical light-transport calculations, as explained in section 2. The constraints considered in this optimization task are defined by the limitations of the fabrication process, the line scan cameras used, the constant size of the evaluated LCs, and the desired image resolution.

We used CIS line scan cameras (M106-A4-R1/CMOS Sensor Inc.) with 1728 sensor elements over 210 mm in our experiments. While integration time and number of readouts per scan can be adjusted in software with a programmable USB controller (USB-Board-M106A4/Spectronic Devices Ltd), gain and dark level must be adjusted with potentiometers on the controller.
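The search itself is conceptually simple; the sketch below only shows the structure of such a brute-force optimization in Python. The function `simulate_transport_matrix` is a purely hypothetical 1D stand-in (each sensor integrating a sliding window of pixels whose width grows with a and shrinks with d), not the analytical light-transport simulation used for the real prototypes:

```python
import itertools
import numpy as np

def simulate_transport_matrix(a, d, n, l=16):
    """Hypothetical toy model: n * 8 sensors, each integrating a contiguous
    window of the l pixels; window width grows with aperture a, shrinks with d."""
    k = n * 8
    width = max(1, round(l * a / (a + d)))
    T = np.zeros((k, l))
    for i in range(k):
        start = round(i * (l - width) / max(1, k - 1))
        T[i, start:start + width] = 1.0
    return T

# Brute-force search: keep the parameter set with the lowest condition number.
best = None
for a, d, n in itertools.product([0.5, 1.0, 2.0], [2.0, 3.25, 5.0], [2, 3, 4]):
    cond = np.linalg.cond(simulate_transport_matrix(a, d, n))
    if best is None or cond < best[0]:
        best = (cond, a, d, n)

print(best)   # parameter set yielding the most stable toy matrix
```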

We evaluated two different LC sheets in our experiments: a smaller sheet of 108 mm ×108 mm and a larger sheet of 216 mm × 216 mm (both Bayer Makrofol® LISA Green with a

Fig. 7. Optimizing LC sensor parameters: An aperture of width a and a distance d to thephotosensors lead to the field of view α of one triangular slit. It defines the distance w thatis required by the photosensors at the edge of the LC. The integration area for a singlephotosensor is highlighted in orange.

#180837 - $15.00 USD Received 6 Dec 2012; revised 8 Feb 2013; accepted 11 Feb 2013; published 20 Feb 2013(C) 2013 OSA 25 February 2013 / Vol. 21, No. 4 / OPTICS EXPRESS 4804

Fig. 8. Experimental setup: LC sensor surrounded by four line scan cameras. An LCD projector provides focused light impulses and sample images for automated calibration and experimentation.

thickness of 0.3 mm). We cut out the triangular slits with a GraphRobo/Graphtec cutting plotter. The following parameter ranges were chosen: the triangular slits had to have a minimum a of 0.5 mm to avoid breakage, and the distance d was constrained to a range of 2 to 5 mm. The number of triangular slits per edge n was kept below 1.5 times the desired image resolution (i.e., n = 1...24 for a desired image resolution of 16 × 16).

As expected, the optimal aperture size a is always the defined minimum, as it ensures the highest-resolution directional sampling. It should be noted that, without fabrication limitations, smaller aperture sizes yield even smaller condition numbers. The optimal numbers of triangular slits per edge n were found to be 16 and 32 for the smaller sheet with a desired resolution of 16 × 16 and the larger sheet with a target resolution of 32 × 32, respectively. Thus, 54 photosensors were used for each triangular slit in both cases. The optimal distance d between aperture and photosensors was found to be 3.25 mm in both cases. Parameters for other configurations and target resolutions were determined analogously.

To increase the dynamic range and signal-to-noise ratio of the photosensors, we record multiple exposures (up to 11, between 20 ms and 900 ms) with multiple readouts per recording (on average 2 per exposure – more for lower and fewer for higher exposures). Initially, the transfer functions of all photosensor elements are measured and linearized individually. The light-transport matrices and the ambient-light contributions are also measured during this one-time calibration, as explained in section 3.
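A minimal sketch of how linearized multi-exposure readings could be merged into a single radiance estimate; the function, weighting scheme, and saturation threshold below are our own simplified assumptions, not details from the paper:

```python
import numpy as np

def combine_exposures(readings, exposures, saturation=0.95):
    """Average reading/exposure over all non-saturated exposures.
    readings: linearized sensor values in [0, 1]; exposures: seconds."""
    readings = np.asarray(readings, dtype=float)
    exposures = np.asarray(exposures, dtype=float)
    valid = readings < saturation  # discard clipped measurements
    return (readings[valid] / exposures[valid]).mean()

# Two exposures of one sensor element: the long exposure saturates,
# so only the short one contributes to the radiance estimate.
print(combine_exposures([0.5, 0.99], [0.25, 0.9]))  # → 2.0
```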

To automate the calibration procedure of our experiments, we use an LCD video projector (SP-M250S/Samsung) to focus light impulses and sample images on the LC surface. The exposure times of the photosensors are adjusted to the projector brightness. Figure 8 illustrates our experimental setup.

7. Results

In our experiments, we evaluated two different LC sheet sizes (smaller: 108 mm × 108 mm and larger: 216 mm × 216 mm; see section 6 for optimal slit configurations), several reconstruction resolutions (9 × 9, 16 × 16, 32 × 32, and 64 × 64 for direct reconstruction and 18 × 18, 27 × 27, 32 × 32, and 64 × 64 for super-resolution reconstruction), and various image reconstruction techniques (BiCGStab, NNLS, QRD, SVD, pseudo-inverse (PINV), SART, filtered backprojection (FBP)) using a total of nine different sample images. All images were focused on the entire (planar) LC sheet area. Figure 9 illustrates a comparison for a reconstructed image with a resolution of 16 × 16 pixels.

We found that the reconstruction quality of QRD, SVD, PINV, and FBP is not acceptable – even for low image resolutions. This is due to numerical instabilities resulting from high condition numbers of T (QRD, SVD, PINV) or insufficient directional sampling (FBP). Only SART, NNLS, and BiCGStab (without an excessive number of iterations) provided reasonable reconstruction quality. For these techniques, Fig. 10 shows direct reconstruction and super-resolution reconstruction results for different resolutions. Note that for SART we use an initial guess that is computed with a few (30) BiCGStab iterations. In the following, we refer to this as BiSART. NNLS and BiCGStab do not require an initial guess. We apply the structural similarity index (SSIM) [14] for a quantitative comparison of the results with the ground truth. The SSIM is a commonly applied objective method for assessing perceptual image quality based on the degradation of structural information, and it improves comparison results over traditional methods such as peak signal-to-noise ratio (PSNR) and mean squared error (MSE).

Our first observation is that no significant reconstruction difference between the small and the large LC sensor sheets could be found. SART, NNLS, and BiCGStab performed equally well independently of LC sheet size.
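The behavior of NNLS and BiCGStab can be illustrated on a toy stand-in for the light-transport problem. The synthetic, diagonally dominant matrix below is deliberately well conditioned so both solvers succeed; the measured T of our prototypes has far higher condition numbers:

```python
import numpy as np
from scipy.optimize import nnls
from scipy.sparse.linalg import bicgstab

# Synthetic stand-in for the light-transport matrix T (assumption:
# diagonally dominant, hence well conditioned and easy to invert).
rng = np.random.default_rng(0)
T = rng.random((64, 64)) + 8.0 * np.eye(64)
p_true = rng.random(64)            # non-negative "image" to recover
l = T @ p_true                     # simulated photosensor measurements

p_nnls, _ = nnls(T, l)             # enforces p >= 0, as real images are
p_bicg, info = bicgstab(T, l, atol=1e-12)  # unconstrained, iterative

print(np.allclose(p_nnls, p_true, atol=1e-3),
      info == 0,                   # 0 means BiCGStab converged
      np.allclose(p_bicg, p_true, atol=1e-3))
```

For this well-posed toy problem both methods recover the image; the differences reported above only emerge for the ill-conditioned matrices of the real sensor.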

A second observation is that while BiSART always leads to slightly better reconstructions than BiCGStab, NNLS results in a higher quality only in two cases (Fig. 10, column j). We believe that the sparsity of these results is an advantage for least-squares estimators that constrain the solution by iteratively extending the active set of unknowns (e.g., based on the Lagrange multiplier, as for NNLS).

A final observation is that we reach the limits for direct reconstructions with our prototype at a resolution of 32 × 32. The dynamic range and the signal-to-noise ratio of the line scan cameras, as well as the sampling resolution that is constrained by our fabrication process, were insufficient for directly reconstructing a resolution of 64 × 64. A super-resolution reconstruction of 64 × 64 from 2 × 2 × 32 × 32 results in improvements. The reconstruction quality at

Fig. 9. Comparison of different image reconstruction techniques for a sample image with a resolution of 16 × 16 pixels: non-negative least squares (NNLS), simultaneous algebraic reconstruction technique (SART), biconjugate gradients stabilized (BiCGStab), QR decomposition (QRD), singular value decomposition (SVD), pseudo-inverse (PINV), filtered backprojection (FBP).


Fig. 10. Direct reconstruction results for target resolutions of 9 × 9, 16 × 16, and 32 × 32; and super-resolution reconstruction results for resolutions of 16 × 16 to 32 × 32 and 32 × 32 to 64 × 64. The structural similarity index (SSIM) [14] is provided in blue. An SSIM of 1.0 indicates a perfect match with the ground-truth / best possible image at the corresponding resolution. Lower SSIM values indicate larger differences.


the image center is lower than at the borders. The reasons for this are the wide apertures of the triangular slits (limiting depth of field) and the low-quality photosensors (making it impossible to pick up slight brightness variations of far pixels within the measured light integrals).

Table 1 presents execution timings of unoptimized image reconstruction algorithms implemented on multicore-CPU and graphics-processor (GPU) hardware. Note that the implicit matrix factorization required for NNLS is not well suited for the single-instruction-multiple-data (SIMD) architecture of GPUs. A higher performance for NNLS can be achieved on the multiple-instruction-multiple-data (MIMD) architectures of multicore CPUs, which allow task-level parallelization. However, BiCGStab and BiSART are well suited for SIMD parallelization on GPUs.

As outlined above, the difference in reconstruction quality between BiCGStab and BiSART is marginal. However, BiCGStab is significantly faster – especially for higher resolutions. For target resolutions of 128 × 128 and higher, super-resolution reconstruction outperforms direct reconstruction in our experiments. The speed-up grows proportionally with the target resolution.

Table 1. Computation times of the NNLS multicore-CPU code (on Intel i7 QuadCore, 2.67 GHz) and the BiCGStab/BiSART GPU implementations (on NVIDIA GTX 580, 772 MHz) for direct reconstructions and super-resolution reconstructions. For higher resolutions (128 × 128 and above, in our experiments), super-resolution reconstruction outperforms direct reconstruction. The sizes of the light-transport matrices range from 280 thousand (9 × 9) to 57 million (128 × 128) entries.

Direct reconstruction times (seconds)

Resolution   NNLS (multicore-CPU)   BiCGStab (GPU)   BiSART (GPU)
9×9          0.03                   0.03             1.00
16×16        0.25                   0.03             1.05
32×32        6.02                   0.05             1.35
64×64        110.41                 0.15             2.74
128×128      528.51                 1.38             10.45

Super-resolution reconstruction times (seconds)

Resolution         NNLS (multicore-CPU)   BiCGStab (GPU)   BiSART (GPU)
9×9 to 18×18       0.12                   0.12             4.00
9×9 to 27×27       0.27                   0.27             9.00
16×16 to 32×32     1.00                   0.12             4.20
32×32 to 64×64     24.08                  0.20             5.40
64×64 to 128×128   441.64                 0.60             10.96

8. Limitations

The main limiting factors that constrain reconstruction quality and resolution of our approach are the relatively small dynamic range (10 bit), low signal-to-noise ratio (we measured a 20-log-ratio as low as 20 dB), and low resolution (54 photosensors per triangular slit) of the CIS line scan cameras used. They are normally employed in flatbed scanners (where neither a high signal-to-noise ratio nor a large dynamic range or a high resolution is required), and are not capable of measuring small differences in radiance or very low or very bright intensities. High-dynamic-range (HDR) measurements using multiple exposures improve this situation, but lead to longer recording times.

Other limitations are the relatively wide aperture openings used, the inefficient decoupling of light with the photosensors that are loosely placed (not glued) on top of the LC surface, and remaining stray light that passes through imperfectly cut and filled triangular areas. All of these issues are due to fabrication constraints.



Fig. 11. Color sensor: Stack of multiple LC layers (a) with small overlaps of absorption and emission spectra (b).

Smaller aperture openings would narrow the integration area that is measured by a single photosensor. On the one hand, this has a positive effect on the condition number of the light-transport matrix, as it increases the numerical stability of image reconstruction. On the other hand, it opens up the possibility of using other techniques such as filtered backprojection, which, in turn, enables the reconstruction of higher-resolution images in less time.

Higher decoupling efficiency, dynamic range, and signal-to-noise ratio support shorter exposure times and smaller (and therefore more) photosensors. This also leads to higher reconstruction resolution and quality in less time. More photosensors per triangular slit together with smaller aperture openings lead to a higher sampling density.

9. Future work and applications

Micro-lenses attached to the LC borders would be more light-efficient for sampling the 2D light field than triangular slits cut into the LC surface. However, this requires a production process that ensures a precise and robust alignment of lenses, photosensors, and film. In addition to general fabrication improvements, we intend to investigate the following configurations that are enabled by transparent and flexible image sensors:

Using stacks of multiple thin-film LC layers with small overlaps of absorption and emission spectra makes color sensors possible, as illustrated in Fig. 11.

Stacking also allows simultaneous recording at multiple exposures, as shown in Fig. 12(a). In this case, the photosensors attached to each LC layer would use a different exposure time. The overall capture time then depends only on the maximum exposure time and not on the sum of all exposure time slots. Compared to multi-exposure sequences, this effectively reduces the time for HDR recordings by a factor of two.
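The timing argument can be checked with simple arithmetic; the exposure values below (in milliseconds, one per hypothetical LC layer) are illustrative:

```python
exposures_ms = [20, 80, 300, 900]   # one exposure time per LC layer
sequential = sum(exposures_ms)      # multi-exposure sequence: sum of slots
stacked = max(exposures_ms)         # stacked layers record in parallel
print(sequential, stacked)          # → 1300 900
```

With these values, stacking cuts the capture time from the sum of all slots down to the single longest exposure.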

Simultaneous recording of multiple exposures can also be achieved by multiplexing them directionally (i.e., having multiple photosensors within each triangular slit record at different


Fig. 12. High-dynamic-range sensor: Simultaneous measurements of multiple exposureswith (a) stacked LC layers, (b) directional multiplexing, (c) positional multiplexing, or acombination thereof.


exposure times), positionally (i.e., having multiple triangular slits record at different exposure times), or a combination thereof. This is illustrated in Figs. 12(b) and 12(c).

In both cases, stacking and multiplexing, the photosensors that record shorter exposure times can repeat their measurements (possibly varying the exposure) within the time slot required for the maximum exposure time. This leads to more exposure samples without increasing the overall recording time. Initial experiments with directional and positional multiplexing revealed a decrease of reconstruction quality by 20% and a speed-up of recording by a factor of 2, compared to multiple sequential exposures for all directions. A higher sampling density will reduce the loss of reconstruction quality.

Applying different neutral density (ND) filters in front of the photosensors instead of varying exposure times would also enable HDR recordings.

We will investigate curved and flexible sensor shapes and will evaluate the efficiency of our light-transport measurements and image reconstruction techniques for higher degrees of cone-loss. We will also seek more robust and faster image reconstruction techniques and will optimize our GPU implementations to support applications that require better performance.

Potential applications of transparent and flexible imaging sensors include

• new forms of user interfaces, such as non-touch screens (i.e., graphical user interfaces that react to user input without the screen surface being touched) – e.g., by recording and evaluating shadows cast on the sensor surface;

• novel lens-less imaging devices that record 4D light fields – as discussed in [5];

• wide-field-of-view imaging systems with low aberrations – as presented in [1];

• high-dynamic-range or multi-spectral extensions for conventional cameras, e.g., by mounting a stack of LC layers on top of a high-resolution CMOS or CCD sensor, and by recording and combining low- and high-resolution images at multiple exposures or spectral bands – as done, for example, in HDR and wide-color-gamut displays that combine a high-resolution LCD panel with a low-resolution LED backlight matrix [15];

• improved touch-sensing devices that are based on frustrated total internal reflection (FTIR) [16, 17] – e.g., by enabling the recording of 2D light fields using arrays of triangular apertures within the light guides for improved image reconstruction, or by sandwiching our image sensor with an unmodified light guide to enable thin form factors (compared to common FTIR devices that apply regular cameras).

Acknowledgments

We thank Robert Koeppe of isiQiri interface technologies GmbH for fruitful discussions and for providing LC samples. This work was supported by Microsoft Research under contract number 2012-030 (DP874903) – LumiConSense.
