
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 41, NO. 6, DECEMBER 1992

Image Formation from Severely Undersampled Scalar Infrared Data Sets

Michael F. Gard, Senior Member, IEEE

Abstract: Image information from severely limited data sets is a relatively infrequent topic in the literature. This paper describes formation of an image from no more than ten, and usually fewer, scalar sensor readings. An infrared (IR) signal source is assumed, which places unusual demands on signal estimation and image construction algorithms. A previously reported signal estimation algorithm predicts an IR sensor's response to various objects. An unconventional image construction algorithm generates a binary image object from sensor measurements. Synthetic object studies illustrate the effectiveness of the image construction algorithm and demonstrate that image artifacts are suppressed or eliminated by using the largest possible sensor field of view.

I. INTRODUCTION

A recent petroleum industry fluids laboratory study required an image representing the displacement of one viscous fluid by another viscous fluid inside a steel pipe.

A proposed solution involved heating one fluid so that infrared (IR) sensing could be used to detect the heated fluid intruding into the second, cooler, fluid. Operational constraints of the experimental apparatus severely restricted the number of sensor positions and prohibited the use of pinholes in combination with one- or two-dimensional sensor arrays. Because of these constraints, each sensor measurement is a scalar quantity having no directional specificity other than a priori knowledge of the field of view.

This study confronts an extreme case of limited-view reconstruction. Only a severely limited number of scalar views are available (typically six), with each view being the spatial summation of information from the sensor's entire field of view. The fact that measured IR energy comes only from the surface of a radiant object [1]-[4] is a major theoretical complication. Finally, the radiant source is not a point, nor is it an object known a priori; instead, it is an object with an unknown spatial distribution. Thus, we violate many assumptions and approximations (e.g., point sources, line integrals, and well-defined raypaths) fundamental to most image reconstruction procedures.

The original laboratory study sought to determine the distribution of two viscous fluids, one hot and the other cold, in three-dimensional (3-D) space.

Manuscript received May 14, 1992; revised August 17, 1992. The author is with General Electric Medical Systems, Milwaukee, WI 53201. IEEE Log Number 9204506.

Measured IR energy comes predominantly from the heated material; theoretically, IR emissions from the two fluid masses could be separated by suitable filters. It is therefore equivalent to think of the problem as the need to represent an emitting mass (the hot material) intruding into a nonemissive interstitial medium occupying the measurement space. Sensor collimation reduces the problem to forming two-dimensional cross sections, or tomographic sections, of an emitting object in a measurement plane. The original problem is reasonably satisfied if two-dimensional (2-D) tomographic images are obtained with sufficiently dense spacing in the third dimension. Accordingly, this paper is concerned with the two-dimensional imaging problem. Although we do not address the question of time dependence directly, we regard our data and the corresponding images as snapshots in time.

II. MEASUREMENT SPACE AND MODEL

The overall measurement space may be selected arbitrarily.

In agreement with the original application (measurements in a pipe), the 3-D measurement space for this paper is a right circular cylinder. The 2-D measurement space is a circle, representing an arbitrary plane normal to the axis of the cylinder.

For analysis, we assume ideal point sensors having uniform sensitivity across their entire field of view. The field of view is an experimental parameter determined by an idealized slit collimator (i.e., a slit with an infinitesimally narrow aperture) defining the measurement plane. Angular coverage of the field of view is determined by slit length.

The measurement plane is defined by a small number of coplanar sensors around the perimeter of the measurement space. Sensor placement is somewhat arbitrary; in circular measurement spaces, sensors will usually be equidistant around the perimeter. The number of sensors is another arbitrary parameter, although the premise underlying this investigation is that the number is small (ten or fewer). Field of view will be discussed in Section VI. For the present, we note that each sensor's field of view is necessarily large if there are to be no obvious gaps in measurement space coverage. Each sensor's field of view may be manipulated to suppress artifacts in the constructed image.



This paper considers the specific case of six equidistant sensors on the perimeter of a circular measurement space, with each sensor occupying a vertex of a regular inscribed hexagon. The normalized measurement space is a circle having unit diameter. The circle and sensors are oriented in a Cartesian coordinate system such that the zeroth sensor, S0, is at the origin, as shown in Fig. 1. This is done for computational convenience in a pixel-oriented image, as the symmetry it affords can often simplify computation.
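For readers who wish to experiment, the following minimal sketch generates the sensor coordinates of Fig. 1. It assumes the circle's center lies at (0.5, 0) so that S0 falls at the origin; the paper fixes S0 at the origin but does not state the center's coordinates, so this placement is an assumption.

```python
import math

def sensor_positions(n_sensors=6, diameter=1.0):
    """Coordinates of sensors at the vertices of a regular polygon
    inscribed in a circle of the given diameter. Assumes the circle
    is centered at (diameter/2, 0) so that sensor S0 lies at the
    origin, consistent with Fig. 1."""
    r = diameter / 2.0
    cx, cy = r, 0.0  # assumed center placement
    positions = []
    for k in range(n_sensors):
        # S0 sits at angle pi, which maps to the origin
        theta = math.pi + 2.0 * math.pi * k / n_sensors
        positions.append((cx + r * math.cos(theta), cy + r * math.sin(theta)))
    return positions
```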

The emitting body to be imaged is assumed to approximate a black-body emitter in thermal equilibrium at a known temperature. Because the emitter is in thermal equilibrium, all points in the body have the same temperature. The emitting body intrudes into an interstitial medium which fills the measurement space; the interstitial material is assumed to have a constant linear attenuation coefficient α, which has units of length⁻¹. The attenuation coefficient is relatively small so that the interstitial medium does not behave as a gray-body radiator. Conduction and convection are taken to be negligible. Under these assumptions, energy transfer from the emitting body to the sensor is by radiation alone.

All observed radiation from a black body takes place from the surface of the body [1]-[4]. Because of collimation, a sensor sees only an emitting edge in a 2-D measurement plane. This is a matter of importance, for a sensor can respond only to that portion of the object "illuminated" by the sensor. A sensor obtains no information from an emitting boundary point obscured by nearer boundary points, and obtains no information whatsoever from the interior of the object. Accordingly, we cannot distinguish between solid and hollow emitters of the same external dimensions. We normally assume an emitting body is solid, but this convention cannot be supported by measured data.

Energy from a surface point on the emitting body is subject to losses by spherical spreading and attenuation in the interstitial medium. It is often instructive to consider a lossless interstitial medium (i.e., α = 0), for this special case can be particularly demanding for many algorithms. The lossless case is not of academic interest alone; with the exception of discrete absorption bands, most gases approximate a lossless medium.
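As an illustrative sketch only (the exact response of a collimated point sensor is derived in [5] and is not reproduced in this paper), these two loss mechanisms suggest that a visible emitting surface element dA at range r from the sensor contributes energy of roughly the form

```latex
% Illustrative form only; the exact collimated-sensor model is in [5].
% 1/r^2 is the spherical-spreading loss and e^{-\alpha r} the
% interstitial attenuation; \alpha = 0 recovers the lossless case.
dE \;\propto\; \frac{e^{-\alpha r}}{r^{2}}\, dA
```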

Mathematical development reported elsewhere [5] provides an analytical expression for the response of a collimated point sensor to a filamentary emitter in an attenuating medium. The same reference describes a signal estimation algorithm, based on a sorting procedure, which accurately matches theoretically predicted signal values. The image construction algorithm to be described assumes the existence of this, or an equivalent, signal estimator.

Given only a few scalar sensor readings, we wish to produce a two-dimensional image reasonably representative of the emitter's cross section in the measurement plane. This representation uses images composed of square pixels. Image space is a square array of M pixels on a side; that is, an M × M array. M is called the dimension of the array. We insure there is one central pixel by making M odd.

Fig. 1. Sensor geometry employed in this study. The measurement space is normalized to have unit diameter. All sensors are shown with π/3 rad field of view.

The author uses array dimensions of the form M = 2^P + 1, where P is an integer, which provides a convenient set of images varying in pixel coarseness. Our problem has a circular measurement space, represented as a circle inscribed in the square M × M image array.
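A one-line helper makes this convention concrete; M = 33 (P = 5), used in the figures below, is one member of this family.

```python
def image_dimensions(p_max=7):
    """Array dimensions M = 2**P + 1 for integer P. Every such M is odd,
    guaranteeing a single central pixel for the seed of the growth
    algorithm described below."""
    return [2 ** p + 1 for p in range(1, p_max + 1)]

# image_dimensions() -> [3, 5, 9, 17, 33, 65, 129]
```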

The images we produce are binary images; that is, any given pixel either is an emitter or it is not. An emitting pixel has a value of 1, whereas a nonemitting pixel has a value of 0. A binary pixel choice reflects the thermodynamic model, because a black-body emitter at thermal equilibrium has uniform temperature throughout.

III. SYNTHETIC IMAGES

Synthetic images are mathematically defined standard image objects such as polygons, circular sections with prominent linear features, circles, and ellipses.

Synthetics are often designed to exaggerate defects, thereby allowing a meaningful assessment of algorithmic changes. Two particularly demanding synthetics will be used later in this paper.

IV. IMAGE CONSTRUCTION ALGORITHM

To understand the following image construction algorithm, it is appropriate to consider why better-known image reconstruction algorithms are not applicable to the problem.

Backprojection algorithms (as employed in computed tomography imaging) are often used with densely sampled object spaces and are based on the assumption that measured signal attenuation represents accumulated attenuation effects of all material in each of the many individual raypaths. Early backprojection routines, such as ART, MART, and their many variants, successively iterated backprojected images to get a consistent fit with measurement data; present-day computed tomography (CT) usually relies on filtered backprojection [6].

In contrast, our data does not represent accumulated contributions from the total object, because IR emission takes place only from the object boundary. Even worse, individual sensor readings do not even contain information from the object's entire perimeter. Finally, we have nothing remotely resembling individual raypath information, because a sensor responds to energy originating anywhere within its large field of view.

Backprojection has been tried and found to be unsatisfactory, generally because the theoretical framework is ill suited to the problem, and specifically because of very poor resolution as the sensors' field of view increases. A detailed comparison of backprojection methods as applied to this problem is not presented here.

We begin image construction with no more information than six scalar sensor readings, the sensors' locations, the sensors' fields of view, and the temperature of the intruding mass (effectively related to the sensors' scale factors). Given only this, how are we to construct an image representing the intruding mass?

The image is constructed by an "onion growth" algorithm. First, an approximate center of mass, or seed pixel, is identified. A series of concentric circles is then grown about the seed pixel. As individual sensor measurements are satisfied, the ever-increasing circles are modified to retain satisfactory matches for some sensors while new material is added elsewhere. The reference to onion growth is now clear; beginning with an initial pixel at the approximate center of mass, we grow concentric circles in much the same way an onion bulb forms.

A. The Signal Centroid

The center of mass is approximated by an artifice called the signal centroid. Measured signals and a priori knowledge of the sensors' locations are used to determine the signal centroid, which may be regarded as the coordinates of a lone emitting pixel roughly located at the center of the signal. Each of the N sensors occupies a pixel at coordinates (x_k, y_k), where 0 ≤ k ≤ N − 1. Let the sensor readings be given by r_k, where 0 ≤ k ≤ N − 1. The signal centroid is a pixel at coordinates (x_c, y_c), where x_c and y_c are given by

$$x_c = \mathrm{INT}\!\left[\frac{\sum_{k=0}^{N-1} x_k\, r_k}{\sum_{k=0}^{N-1} r_k} + 0.5\right] \tag{1a}$$

$$y_c = \mathrm{INT}\!\left[\frac{\sum_{k=0}^{N-1} y_k\, r_k}{\sum_{k=0}^{N-1} r_k} + 0.5\right] \tag{1b}$$

We use only integer (x_c, y_c) pixel coordinates. Because the INT[·] function retains only the integer part of its argument, adding 0.5 to the argument insures proper rounding.
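A direct transcription of (1a) and (1b) in Python might read as follows; it assumes non-negative pixel coordinates, for which integer truncation matches the INT[·] function.

```python
def signal_centroid(coords, readings):
    """Seed pixel (x_c, y_c) from (1a) and (1b): the reading-weighted mean
    of the sensor pixel coordinates, rounded by adding 0.5 and truncating.
    `coords` is a list of (x_k, y_k) pixel coordinates; `readings` holds
    the corresponding scalar sensor readings r_k."""
    total = sum(readings)
    x_c = int(sum(x * r for (x, _), r in zip(coords, readings)) / total + 0.5)
    y_c = int(sum(y * r for (_, y), r in zip(coords, readings)) / total + 0.5)
    return x_c, y_c
```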

B. Error Signals

For the algorithm to be discussed, we define all errors in the following way. Given a set of sensor readings r_k, where 0 ≤ k ≤ N − 1, the error e_k associated with a simulated sensor reading s_k is defined by

$$e_k = s_k - r_k. \tag{2}$$

C. The Onion-Growth Algorithm

A null image, in which there is no intruding heated mass, is a special case which we exclude.

In general, multiple pixels are required to represent an intruding mass; that is, a single pixel usually underestimates the actual mass. Therefore, if we start with only one pixel (the seed pixel defined by the signal centroid), we usually expect simulated readings from a single-pixel object to be smaller than observed measurements. Because of this, initial errors defined by (2) are usually negative. The single-pixel image determined by the signal centroid will adequately represent an intruding object's cross section only in the most fortunate circumstances. Normally, it is necessary to increase the area of the image object while manipulating its shape to conform to observed measurements.

The onion-growth, or image-growing, algorithm is an iterative procedure to systematically generate candidate image objects. Beginning at the seed pixel (the signal centroid), a circle of radius 1/(M − 1) is added to the image object, where M is the image dimension. This simple image is passed to the signal estimator, and the resulting simulated sensor readings are compared to actual measurements. At each iteration, the radius is incremented by 1/(M − 1), and another circle is added to the image object. At some point in this process, one or more of the actual sensor measurements will be reasonably well approximated by the collection of circles in the image object.

When the image object's simulated response adequately approximates an observed sensor measurement, we do not wish to add more material (i.e., additional pixels) to the image object's nearest edge in that sensor's field of view. At the same time, we wish to continue growing the image object in other fields of view until all sensor readings are reasonably well approximated. This requires a knowledge of the pixels added at each iteration and their locations in each sensor's field of view.

The onion-growth algorithm continues until all sensor readings are reasonably well approximated (as determined by approximation criteria discussed below) or until an iteration places the next new circle completely outside image space. Image building continually adds new material to a sensor's field of view until an actual sensor measurement is satisfied. Once the approximation is within acceptable limits, no new material capable of changing that reading is added to that sensor's field of view.

Two particularly simple tests may be used as sensor reading approximation criteria. Recall that initial candidate image objects generally underestimate an intruding mass, with the result that errors defined in (2) usually are negative. The first, and simplest, approximation criterion is to continue adding material to the image object until the error from a simulated sensor signal is positive; after observing a positive error, no new material is allowed in the nearest edge of that field of view.

The second criterion also requires that the error go from negative to positive. The prior iteration will have provided one set of errors; the current iteration will have provided another set of errors. Retention or rejection of new pixels from the current iteration is determined by whether the current or prior iteration produces smaller absolute error for that particular sensor.

Experience teaches that the first, and simplest, approximation criterion produces satisfactory images with the least execution time. Improved agreement between simulated and measured sensor readings may be obtained by using the simplest exit criterion with an incremental radius of less than 1/(M − 1). Other investigators with different problems may prefer to formulate other exit conditions. However, the severely undersampled nature of measurement space and the relative lack of detail in even the best image suggest that elaborate processing is not justifiable in many cases.
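Pulling the pieces together, a minimal sketch of the onion-growth loop under this simplest exit criterion follows, using signal_centroid from the earlier sketch. The callables estimate_signals and in_fov are stand-ins for the signal estimator of [5] and for a per-sensor field-of-view membership test; neither is specified in this paper, and the nearest-edge bookkeeping described above is simplified here to masking a satisfied sensor's entire field of view.

```python
import math

def onion_growth(readings, coords, M, estimate_signals, in_fov):
    """Sketch of the onion-growth algorithm, simplest exit criterion:
    once a sensor's error e_k = s_k - r_k turns positive, no new pixels
    are added within that sensor's field of view. `estimate_signals`
    maps a pixel set to simulated readings; `in_fov(p, k)` tests whether
    pixel p lies in sensor k's field of view. Both are assumptions."""
    N = len(readings)
    x_c, y_c = signal_centroid(coords, readings)
    image = {(x_c, y_c)}                   # seed pixel at the signal centroid
    satisfied = [False] * N
    step = 1.0 / (M - 1)                   # radius increment, one pixel width
    radius = step
    while not all(satisfied):
        # pixels on the next concentric circle about the seed pixel
        ring = {(x_c + int(round(radius * (M - 1) * math.cos(t))),
                 y_c + int(round(radius * (M - 1) * math.sin(t))))
                for t in (2 * math.pi * i / 360 for i in range(360))}
        ring = {p for p in ring if 0 <= p[0] < M and 0 <= p[1] < M}
        if not ring:                       # next circle lies outside image space
            break
        # withhold new material from the fields of view of satisfied sensors
        image |= {p for p in ring
                  if not any(satisfied[k] and in_fov(p, k) for k in range(N))}
        sim = estimate_signals(image)      # simulated readings for this object
        for k in range(N):
            if sim[k] - readings[k] >= 0:  # error e_k has gone positive
                satisfied[k] = True
        radius += step
    return image
```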

Although this paper does not discuss all the details underlying the algorithm, the onion-growth procedure is generally consistent with maximum entropy considerations. Images constructed from synthetic object data are quite acceptable in most cases. As we shall see in Section V, the onion-growth algorithm copes reasonably well with the problem of artifact generation. We are producing images from a pathetically limited data set in which each datum only partially represents the emitting object's boundary. We can hardly expect to compete with the detail of a CT image under these circumstances.

V. ARTIFACTS

Artifacts are a concern in any image construction problem, and they are particularly troublesome in this study.

This sensitivity to artifact is due, in large part, to the physics peculiar to IR emission. If we select any raypath in a sensor's field of view, the sensor sees only the nearest emitting surface and is effectively blind to all other material in the angular range projected outward from that surface.

For example, imagine a tiny emitting body sufficiently close to a sensor that it completely fills the field of view. The reading will be quite large, because the distance is small and the field of view is completely illuminated. The sensor reading cannot tell us what, if anything, lies behind this tiny emitter. The rest of the measurement space could be empty, or the universe could be filled with emitting material; the sensor reading is the same in either case.

The fact that a surface screens, or obscures, all material behind it is the essence of this study's artifact problem. Depending on the size and location of the image object, it is very likely that several, if not all, sensors will be functionally blind to portions of the image space. When this happens, the presence or absence of material in the blind spots will have no effect on sensor readings. In this circumstance, only a priori knowledge can rightly distinguish between true image and artifact.

Artifacts may be reduced by increasing the number of data points (sensors), but our study is explicitly directed to those very difficult situations having no more than ten sensors. Artifacts are influenced by the field of view, as discussed in Section VI.

VI. EFFECT OF FIELD OF VIEW

If the number of sensors and their locations are constrained, the only remaining experimental parameter having major impact on image artifact is each sensor's field of view.

This study assumes that every sensor has the same angular field of view and the same relative disposition in the measurement space. In this study's circular measurement geometry, the angular field of view is between π/3 and π rad, and the field of view is bisected by a diagonal. This is by no means the only possible disposition of the field of view, but it will certainly be a very common choice. For convenience, we speak of this relationship between field of view and diagonal as the "normal" configuration. The considerations we discuss apply equally well to other configurations.

If we were at liberty to use as many sensors as we wished, we would employ a great many sensors with very small fields of view. When circumstances force us to employ only a few sensors to measure IR radiation, it is advantageous to use a very wide field of view. This advantage does not arise from using a wide field of view in measurement space; a wide measurement field of view means that a sensor responds to even more emitting boundary elements, and the observed measurement is even less specific regarding the angular origin of received energy. The advantage arises because a wide field of view, in concert with the onion-growth image construction algorithm, provides superior artifact suppression in image space.

This paper presents images using normal field of view orientations. Four convenient representative cases are presented; these fields of view are π/3, π/2, 2π/3, and π rad. Given an arbitrary number of sensors in a regular circular geometry, it is possible to determine the corresponding minimum field of view which will prevent the formation of blind-spot artifacts [7]; this topic will not be explored in this paper for the sake of brevity.

We have seen that objects may produce blind spots in measurement and image space. Such blind spots can harbor artifacts if no sensor responds to the presence or absence of material in these areas. We now demonstrate that an increase in the field of view is highly beneficial, because it reduces the size of the blind spots which otherwise allow artifact structures to exist. Artifact structures may still exist, but as the field of view increases, artifacts become smaller and less objectionable.

Synthetic studies dramatically illustrate the effect of the field of view on artifacts. One of the best synthetic emitters for this purpose is the small elliptical object shown in Fig. 2(a). This small ellipse is centered at (−0.217, +0.375) in normalized measurement space. The major axis is 0.125 units, while the minor axis is 0.0625 units.
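For reproducibility, a sketch that rasterizes this synthetic follows. It assumes the quoted axis values are semi-axis lengths, that the major axis is horizontal, and that normalized coordinates are measured from the center of the unit-diameter circle; none of these conventions is stated explicitly above.

```python
def ellipse_synthetic(M=33, cx=-0.217, cy=0.375, a=0.125, b=0.0625):
    """Binary M x M image of the off-center elliptical synthetic object.
    Assumes a and b are semi-axes, the major axis is horizontal, and
    normalized coordinates run from -0.5 to +0.5 across the image."""
    img = [[0] * M for _ in range(M)]
    for j in range(M):
        for i in range(M):
            x = i / (M - 1) - 0.5          # pixel column -> normalized x
            y = 0.5 - j / (M - 1)          # pixel row -> normalized y (up)
            if ((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 <= 1.0:
                img[j][i] = 1
    return img
```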


Fig. 2. Onion-growth algorithm (OGA) behavior with off-center elliptical synthetic object. Image dimension M = 33, lossless case (attenuation coefficient α = 0). (a) Original off-center elliptical object; (b) OGA output, π/3 rad field of view; (c) OGA output, π/2 rad field of view; (d) OGA output, 2π/3 rad field of view; (e) OGA output, π rad field of view.

Fig. 3. Onion-growth algorithm (OGA) behavior with crescent-shaped synthetic object. Image dimension M = 33, lossless case (attenuation coefficient α = 0). (a) Original crescent-shaped synthetic object; (b) OGA output, π/3 rad field of view; (c) OGA output, π/2 rad field of view; (d) OGA output, 2π/3 rad field of view; (e) OGA output, π rad field of view.

This object was particularly designed to illustrate the relationship between field of view and artifact production.

To begin, the synthetic image of Fig. 2(a) is passed to the signal estimation procedure.

Simulated sensor readings from the signal estimator are treated as measured experimental data and provide the input to the image construction sequence.


The signal estimation and image construction algorithms may be adjusted to provide any arbitrary field of view between 0 and π rad.
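In code, this study loop is simply the composition of the two algorithms. The hypothetical driver below (building on the onion_growth sketch above, with estimate_signals again standing in for the estimator of [5]) treats the estimator's output for the synthetic as if it were measured data.

```python
def synthetic_study(synthetic_image, coords, M, estimate_signals, in_fov):
    """Hypothetical driver for the synthetic studies: simulated readings
    from the synthetic object play the role of measured experimental
    data and feed the image construction algorithm."""
    # represent the synthetic as a pixel set, matching onion_growth's form
    pixels = {(i, j) for j, row in enumerate(synthetic_image)
              for i, v in enumerate(row) if v}
    readings = estimate_signals(pixels)
    return onion_growth(readings, coords, M, estimate_signals, in_fov)
```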

Fig. 2 contains the results of image construction using different fields of view in an image of dimension 33 (M = 33). These images were formed using the simplest approximation criterion in the image construction algorithm; namely, no new material was added to the nearest edge of a sensor's field of view once the associated error went from negative to positive on an iteration of the onion-growth algorithm. Fig. 2(a) is the original synthetic image. Figs. 2(b)-(e) are image constructions corresponding to fields of view of π/3, π/2, 2π/3, and π rad, respectively. Fig. 2(b) clearly contains artifact arising from the mechanism already discussed. Increasing the field of view to π/2 rad, as in Fig. 2(c), considerably reduces artifact. Figs. 2(d) and (e), with 2π/3 and π rad fields of view, respectively, are free of artifact. These latter two figures are not identical to the original synthetic object, but they certainly are located properly in measurement space. Considering that these representations were obtained from only six scalar measurements, they are quite good.

Another example is given in Fig. 3, a sequence of images obtained using a crescent-shaped synthetic emitter. These images are presented in the same format as Fig. 2 for purposes of direct comparison. In all cases, an increase in the field of view suppresses artifact in the constructed image. While the images obtained with larger fields of view do not match the original synthetic emitters exactly, they are reasonable representations of the synthetics.

VII. CONCLUSIONS

We have considered the formation of an image from scalar IR measurements representing a heated viscous mass intruding into and displacing a cooler viscous mass.

More to the point, we have considered developing a representative image from a severely undersampled data set composed of very few (fewer than ten, and typically six) scalar readings. The specification of IR sensing leads to certain theoretical complications, the most notable being that emitting matter obscured by other emitting matter nearer the sensor does not contribute to a sensor reading.

Studies with synthetic objects and an iterative onion-growth algorithm demonstrate that image artifacts can be reduced by increasing each sensor's field of view. Investigators familiar with transmission CT imaging, where sampling in object space is very dense, the number of views is large, and the supporting theoretical assumptions are completely different, may find this result counterintuitive. However, when a scalar data set is obtained from a grossly undersampled IR experiment, it is preferable to combine the widest possible sensor field of view with the onion-growth image construction algorithm. As demonstrated, the results can be surprisingly good.

ACKNOWLEDGMENT

The author thanks his wife and daughters for their unfailing support and encouragement.

Heartfelt thanks are also extended to Dr. K. Katahara of ARCO Oil and Gas Company, Plano, TX, and Dr. D. Gustafson, Dr. J. Hsieh, Dr. M. Limkeman, Dr. A. Pfoh, and Dr. D. Talwar of GE Medical Systems, New Berlin, WI. Particular thanks are due O. Dake for providing and maintaining the computer environment at GE Medical Systems. This paper reports research done as a Ph.D. candidate at Southern Methodist University, Dallas, TX, and does not contain work done for the General Electric Company.

REFERENCES

[1] J. P. Holman, Heat Transfer. New York: McGraw-Hill, 1976, pp. 284-287.

[2] G. Shortley and D. Williams, Elements of Physics, vol. 1, 4th ed. Englewood Cliffs, NJ: Prentice-Hall, 1965, p. 331.

[3] J. B. Jones and G. A. Hawkins, Engineering Thermodynamics. New York: Wiley, 1960, pp. 673-675.

[4] E. Schmidt (J. Kestin, tr.), Thermodynamics. New York: Dover, 1966, pp. 468-469 (authorized translation of 3rd German ed.).

[5] M. F. Gard, "Analytical and simulated signal prediction for a collimated point sensor using a surface emission model," in Proc. 1992 IEEE Int. Symp. Circuits Syst. (San Diego, CA), May 1992, pp. 2501-2504.

[6] G. T. Herman, "X-ray computed tomography: basic principles," in Three-Dimensional Biomedical Imaging, vol. I, R. A. Robb, Ed. Boca Raton, FL: CRC, 1985, pp. 61-70.

[7] M. F. Gard, "Image formation from severely undersampled scalar infrared data sets," Ph.D. dissertation, Southern Methodist University, Dallas, TX, Dec. 1992.

