Page 1: a073763 The Fundamentals of Thermal Imaging Systems

The Fundamentals of

Thermal Imaging Systems

Edited By

FRED ROSELL AND GEORGE HARVEY

May 10, 1979


ELECTRO-OPTICAL TECHNOLOGY PROGRAM OFFICE

NAVAL RESEARCH LABORATORY

Washington, D.C.

Approved for public release; distribution unlimited.



Reviewed and Approved by

Dr. John M. MacCallum


SECURITY CLASSIFICATION OF THIS PAGE (When Data Entered)

REPORT DOCUMENTATION PAGE

Report: NRL Report 8311

Authors: John B. Goodell, George L. Harvey, Walter R. Lawson, James A. Ratches, Robert E. Roberts, Fred A. Rosell, Robert L. Sendall, and David L. Shumaker (NRL Problem N01-29A)

Performing Organization Name and Address: Naval Research Laboratory, Washington, D.C. 20375

Controlling Office Name and Address: Naval Research Laboratory, Washington, D.C. 20375

Report Date: May 10, 1979

Number of Pages: 259

Security Classification (of this report): Unclassified

Distribution Statement (of this report): Approved for public release; distribution unlimited.

Key Words: Imaging systems; FLIR; Television systems; Atmospheric transmission

Abstract: The purpose of this book is to provide the reader with a document which brings together under one cover an overview of radiometric and photometric concepts involved in thermal imaging systems. Governing equations are derived from simple fundamental concepts. Also, models used for the visual discrimination, i.e., detection, recognition, or identification, of real scene objects are discussed. It is hoped that the text will be useful to FLIR designers, evaluators, and thermal imaging system modelers, as well as those whose sole interest is a grasp of the concepts involved in thermal imaging.


PREFACE

This document was supported by the Joint Technical Coordinating Group for Thermal Imaging Systems and the NAVMAT Electro-Optical Technology Program Office. The document is meant to be of tutorial and reference value to program managers and scientists who are familiar with thermal imaging techniques and systems but need detailed information relating systems modeling procedures to visual displays and thermal imaging system performance.

The authors would like to acknowledge the helpful comments of members of the Joint Technical Coordinating Group as well as others who have made suggestions and inputs, including Lucian Biberman, Stephen Campana, Edward Hooper, William Lawson, John MacCallum, George Mavko, Paul Moser, James Ratches, and John Walsh. A special note of gratitude is expressed for Mrs. Dora Wilbanks of the Technical Information Division for struggling with the many revisions of this document.


CONTENTS

Preface

Chapter I - INTRODUCTION .................................................................. 1

Chapter II - CHARACTERIZATION OF THE THERMAL SCENE ................... 7

Chapter III - ATMOSPHERIC EFFECTS ON INFRARED SYSTEMS ............... 21

Chapter IV - VIDEO, DISPLAY AND PERCEIVED-IMAGE SIGNAL-TO-NOISE RATIOS .................................................................. 49

Chapter V - LABORATORY PERFORMANCE MODEL ................................. 85

Chapter VI - STATIC FIELD PERFORMANCE MODELS ............................. 97

Chapter VII - THERMAL-IMAGING SYSTEM (TIS) DYNAMIC FIELD PERFORMANCE .................................................................. 111

Appendix A - NOMENCLATURE, UNITS, AND SYMBOLS ........................... 143

Appendix B - SYMBOLS .......................................................................... 149

Appendix C - THE NIGHT VISION LABORATORY STATIC PERFORMANCE MODEL BASED ON THE MATCHED FILTER CONCEPT ....... 159

Appendix D - STATIC PERFORMANCE MODEL BASED ON THE PERFECT SYNCHRONOUS INTEGRATOR MODEL ................................ 181

Appendix E - THE COLTMAN AND ANDERSON EXPERIMENT .................. 205

Appendix F - BASIC SNR AND DETECTIVITY RELATIONS ....................... 209

Appendix G - EFFECTS OF IMAGE SAMPLING ......................................... 215

Appendix H - PSYCHOPHYSICAL EXPERIMENTATION ............................. 223

Appendix I - OBSERVER RESOLUTION REQUIREMENTS .......................... 231

INDEX .................................................................................................... 253


Chapter 1

INTRODUCTION

G. L. Harvey and F. A. Rosell

A. OBJECTIVE

The purpose of this book is to provide an overview of radiometric and photometric concepts involved in thermal imaging systems. Governing equations are derived from simple fundamental concepts. Also, models used for the visual discrimination, i.e., detection, recognition, or identification, of real scene objects are discussed.

It is hoped that the material in the text will be useful to forward-looking infrared (FLIR) system designers, evaluators, and thermal imaging system modelers, as well as those whose sole interest is a grasp of the concepts involved in thermal imaging.

B. OVERVIEW OF THE CHAPTERS

This chapter contains an overview of the content and significance of the following chapters. The report is structured so that the appendixes present the basic definitions and mathematical tools which are used in the preceding discussion. The nature of the target signature is considered first, followed by its modification by the atmosphere, its processing by the FLIR system, the interpretation of the displayed information by a human observer, and the use of these systems in a dynamic environment in which the problems of search and the limited time available for the completion of assigned tasks must be considered.

1. Characterization of the Thermal Scene (Chapter II)

The scene, because of variations in either temperature or emissivity, is the source of most of the radiation sensed by thermal imaging systems sensitive in the 3 to 5 or 8 to 14 micrometer spectral bands. Most of the scene objects obtain their energy from the Sun, and even man-made objects such as trucks appear very much like trucks on thermal imaging system displays when heated by external sources. However, the thermal images of trucks may appear radically different when heated by their own internal sources such as engines or comfort heaters. The detailed calculations of thermal scene object-to-background contrast are very complex and beyond the scope of this report. However, the results of a number of such calculations are discussed. In particular, it is shown that thermal object-to-background signatures are strongly dependent upon cloud cover, the insolation, and the aspect angles between the viewer, the scene object, and the Sun and sky. A ship on the open sea, for example, can reverse in contrast a number of times when viewed from a low-flying aircraft as the aircraft approaches from a long distance and then overflies the ship. The background may be the horizon sky, low-emissivity water viewed at a shallow angle, high-emissivity water viewed directly downward, or


HARVEY AND ROSELL

even mirrored sky. The ship's temperature may also be greater than the ambient air or water temperature in the daytime and smaller at night.

In system design and analysis, it has been common practice to assume an equivalent scene object and background of unit emissivity. The temperature of the scene object is averaged over the object's area and, similarly, the temperature of the background is averaged over a patch equal to the scene object's area. The only information used is a single numerical value for the object-to-background temperature differential and the object's dimensions. Thus all temperature gradients within the object and background are ignored. In the case of a hot spot on the scene object, ignoring large temperature gradients may lead to pessimistic detection ranges and optimistic recognition ranges. Similarly, background gradients may constitute clutter with noise-like properties. In addition, temperature differentials are sometimes averaged over a 24-hour or some other fairly long time period. While, as a practical matter, it is often necessary to make gross simplifying assumptions for the purpose of comparing competing systems of the same type, it will become clear in Chapter II that the use of some assumptions can lead to enormous errors when computing the probabilities of visually discriminating displayed thermal scene objects in a dynamically changing environment.

Though existing computer models can be used to obtain quite detailed thermal signatures, with detail within the objects and backgrounds, most of the thermal imaging models now in use are not capable of using the amount of detail which could be provided. However, it is important that gross changes in signature due to changes in insolation, viewing aspect, cloud cover, and the like be considered as a minimum.

2. Atmospheric Effects on Infrared Systems (Chapter III)

The atmosphere is characterized by transmitting windows whose spectral location restricts the choice of detector and optical materials. IR imaging systems can be severely degraded by high humidity or by poor visibility. The 3-5 µm window is generally superior in transmittance on clear humid days, while hazy dry days favor the 8-12 µm band. In Chapter III, the reader is provided with an easy-to-use (but accurate) procedure for evaluating atmospheric transmission. Computer-based tables give transmittances with respect to a 10°C blackbody over the 3-5 and 8-12 µm windows. From temperature, relative humidity or dew point, and visual range measurements, the user reads molecular and continuum transmittances corresponding to a selected range directly from the tables. This value of transmittance is then multiplied by the value of the aerosol transmittance calculated by using the results of visibility measurements to obtain the composite atmospheric transmittance.
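In modern notation, the composite-transmittance step can be sketched numerically. The fragment below is a first-order illustration only, not the report's LOWTRAN-based procedure: it infers a 0.55-µm aerosol extinction coefficient from the visual range via Koschmieder's relation and scales it to the infrared with an assumed empirical exponent; the tabulated molecular/continuum value shown is hypothetical.

```python
import math

def aerosol_transmittance(visual_range_km, path_km, wavelength_um=10.0, q=1.3):
    """First-order aerosol transmittance over a slant path.

    The 0.55-um extinction coefficient follows from the visual range
    (2% contrast threshold gives sigma = 3.912 / V, Koschmieder's relation)
    and is scaled to the IR wavelength with an assumed exponent q.
    """
    sigma_visible = 3.912 / visual_range_km            # km^-1 at 0.55 um
    sigma_ir = sigma_visible * (0.55 / wavelength_um) ** q
    return math.exp(-sigma_ir * path_km)               # Beer-Lambert law

# Composite transmittance = (molecular x continuum, read from the tables)
# multiplied by the aerosol term; the table value here is hypothetical.
tau_table = 0.62
tau_total = tau_table * aerosol_transmittance(20.0, 4.0)
```

The multiplicative form mirrors the chapter's procedure: the table supplies the molecular and continuum factors, and the visibility measurement supplies the aerosol factor.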

The chapter begins with a brief discussion of the main absorbing gases and continues with a more detailed exposition of aerosol scattering and the difficulty it causes because of its extreme variability. The chapter continues with the definition of atmospheric transmittance, the concept of visual range, and the temperature and pressure effects on molecular band absorption. This discussion is followed by a description of the LOWTRAN 3B computer program for computing atmospheric transmittance as developed by the Air Force Geophysics Laboratory in Cambridge, Mass.

Detailed expositions of meteorological variables such as seasonal variations of pressure and temperature and the relative merits of different atmospheric models have been purposely


NRL REPORT 8311

avoided in this chapter. Also, detailed discussions of molecular band absorption, which are contained in the voluminous references cited, are not included. However, the computer algorithms which produced the data in the tables provided have exploited all available current knowledge.

3. Video, Display, and Perceived-Image Signal-to-Noise Ratios (Chapter IV)

This chapter is devoted to the basic concepts and derivations of fundamental mathematical relations used to describe and analyze thermal imaging systems when the input test patterns are standardized and quantitatively describable objects such as rectangles and periodic bar patterns. The concept of the image signal-to-noise ratio obtainable from a sensor is developed along with other sensor-related quantities such as the detector detectivity, the video signal-to-noise ratio (SNRv), and the noise-equivalent temperature difference (NEΔT). SNRv is of primary interest when the output of the sensor feeds a machine such as a video tracker or a scene object cueing or pattern recognition device. The SNRD is the image signal-to-noise ratio available to an observer when the displayed image is limited only by finite sensor apertures or internally generated sensor noises and not by parameters of the observer's eye. When all pertinent observer eye parameters are included, the SNRD becomes the perceived SNR, or SNRp. In many cases of practical interest, the SNRD and SNRp are equal. As may be surmised, the SNRD and SNRp are directly related to the observer's ability to discern a displayed test pattern and do take into account the ability of the observer to spatially and temporally integrate an image.
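Appendix F derives the basic SNR and detectivity relations. As a rough illustration of how the NEΔT ties detector and optics parameters together, the sketch below uses a common textbook form of the equation (the report's exact derivation may differ in detail), evaluated with hypothetical sensor values:

```python
import math

def ne_delta_t(f_number, bandwidth_hz, detector_area_cm2,
               d_star, dM_dT, optics_transmittance):
    """Noise-equivalent temperature difference in kelvins, textbook form:

        NEdT = 4 F^2 sqrt(df) / (pi sqrt(Ad) D* (dM/dT) tau0)

    with D* in cm sqrt(Hz)/W and dM/dT (exitance derivative over the
    spectral band) in W cm^-2 K^-1.
    """
    return (4.0 * f_number ** 2 * math.sqrt(bandwidth_hz)) / (
        math.pi * math.sqrt(detector_area_cm2) * d_star * dM_dT
        * optics_transmittance)

# Hypothetical 8-12 um scanner: F/2 optics, 50-um square detector element
nedt = ne_delta_t(f_number=2.0, bandwidth_hz=6.0e4,
                  detector_area_cm2=(50e-4) ** 2, d_star=2.0e10,
                  dM_dT=6.0e-5, optics_transmittance=0.7)
```

With these assumed values the NEΔT comes out to a few tenths of a kelvin, which is the right order for systems of this era; note that the video bandwidth appears explicitly in the definition, which is why NEΔT alone does not determine the display SNR.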

The video SNRv and the NEΔT are generally related to the SNRD and SNRp, but the relationship is not necessarily linear or direct. A sensor with a lower SNRv and a larger NEΔT may in fact produce a superior SNRD. The reason for this apparent anomaly is that the SNRv and NEΔT include the video bandwidth as part of their definition, and this bandwidth is comparatively immaterial to the observer. When discerning images, the observer himself becomes, in effect, the limiting overall system bandwidth.

Two models are developed for the SNRD: a periodic model used when the input image is a sine or square wave bar pattern, and an aperiodic model used when the test images are rectangles. These two models are made necessary because the effects of system apertures or optical transfer functions on image detectability are distinctly different for the two types of images, and not primarily because of observer effects (although some observer differences exist). In Chapter IV, a number of thermal imaging sensor configurations are discussed along with the modulation transfer functions* of various sensor elements including the lens, the detectors, the multiplexers, and displays. The effects of the image sampling process in the cross-scan direction are also considered in Appendix G, along with the criteria for eliminating spurious responses (aliasing) while maintaining a flat-field display-luminance distribution. More detailed derivations of basic equations used in Chapter IV are presented in Appendixes C through G.
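The cascading of component modulation transfer functions can be illustrated with a short sketch. The component forms below (a sinc for the detector sampling aperture, Gaussian approximations for optics and display blur) are assumed for illustration; the chapter treats the actual component forms in detail.

```python
import math

def sinc(x):
    """Normalized sinc, sin(pi x)/(pi x)."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def system_mtf(f, detector_ifov_mrad, optics_blur_mrad, display_blur_mrad):
    """System MTF as the product of component MTFs (f in cycles/mrad).

    Detector: sinc of the instantaneous field of view.
    Optics and display: Gaussian blur approximations (assumed forms).
    """
    mtf_det = abs(sinc(f * detector_ifov_mrad))
    mtf_opt = math.exp(-2.0 * (math.pi * optics_blur_mrad * f) ** 2)
    mtf_dis = math.exp(-2.0 * (math.pi * display_blur_mrad * f) ** 2)
    return mtf_det * mtf_opt * mtf_dis

# Hypothetical component parameters: 0.17-mrad IFOV, 0.05-mrad blurs
mtf_at_1 = system_mtf(1.0, 0.17, 0.05, 0.05)
```

Because the components multiply, the weakest aperture dominates the high-frequency response, which is why the periodic and aperiodic detectability models respond so differently to the same optical transfer functions.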

4. Laboratory Performance Model (Chapter V)

In Chapter IV, the primary effort was to quantitatively determine the display SNRD obtainable from the sensor when the input test images are rectangles or periodic bar patterns. If the observer's SNR requirements are known, then the probability that an observer will discern an image under a given set of operating conditions should be analytically predictable. The

*The modulation transfer function is the modulus of the optical transfer function, which describes the effects of sensor apertures.


ability to perform such prediction is of considerable aid in designing and evaluating sensory systems.

Over the past decade, many psychophysical experiments have been performed to determine the image SNR required by an observer, and a representative sample of the more pertinent experimental results is discussed in Chapter V and Appendix H. It is found that the SNRD required to detect rectangles is approximately a constant for images which are not too large in two directions simultaneously and that the eye can spatially integrate surprisingly well over long thin rectangles. The eye's ability to integrate over the length of a bar in a bar pattern is more limited, but the threshold value of SNRD required to discern the presence of a pattern appears to fall off at the higher spatial frequencies. The observer's SNR requirements are usually specified in terms of a threshold value (50% probability of discerning the test image) and as a function of the probability of detection. Approximately twice the SNRD is required to discern an image at the near-100% level than at the 50% level.
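The dependence of discrimination probability on SNRD relative to its threshold value can be sketched with a normal ogive, a functional form often assumed in this literature. The form and the spread below are assumptions for illustration, not the report's fitted curve; the spread is chosen so that roughly twice the threshold SNR yields a near-100% probability, as stated above.

```python
import math

def detection_probability(snr, snr_50, spread=None):
    """Probability of discerning a displayed test image, modeled as a
    normal ogive about the 50% threshold SNR (an assumed form)."""
    if spread is None:
        spread = snr_50 / 2.33        # so 2 x SNR_50 gives P ~ 0.99
    z = (snr - snr_50) / spread
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# At threshold, P = 0.5; at twice threshold, P approaches unity
p_threshold = detection_probability(3.0, 3.0)
p_double = detection_probability(6.0, 3.0)
```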

The two primary measures of the laboratory performance of thermal imaging systems are the minimum resolvable temperature difference (MRT) and the minimum detectable temperature (MDT). In the MRT case, the input test image is defined to be a four-bar pattern with the length of each bar being seven times its width. The MRT represents the smallest temperature difference between the bars which permits the observer to resolve all of the bars at the 50% probability level. The MRT of the sensor is plotted as a function of the bar pattern's spatial frequency. In the MDT case, the test image is defined to be a square. The MDT represents the smallest temperature difference between the square and its background which can be discerned by the observer at the 50% probability of detection level. Both the MRT and MDT are analytically predictable if the image SNR obtainable from the sensor is known, since the observer thresholds have been determined with fair-to-good accuracy, at least for the cases in which the image SNR is system rather than observer eye noise limited.
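As a minimal illustration of how the MRT grows with spatial frequency, the sketch below uses a simplified proportionality, MRT(f) ∝ NEΔT · f / MTF(f). The constant of proportionality and the Gaussian system MTF are assumed for illustration; the report's full laboratory model includes observer eye-integration terms omitted here.

```python
import math

def mrt(f, nedt, mtf, k=0.7):
    """Simplified MRT vs spatial frequency f (cycles/mrad).

    Assumed proportional form: rises with spatial frequency and
    inversely with the system MTF evaluated at that frequency.
    """
    return k * nedt * f / mtf(f)

# Hypothetical Gaussian system MTF with a 0.15-mrad blur radius
gauss_mtf = lambda f: math.exp(-2.0 * (math.pi * 0.15 * f) ** 2)

# MRT rises steeply as the MTF rolls off at high spatial frequency
mrt_low = mrt(1.0, 0.2, gauss_mtf)
mrt_high = mrt(2.0, 0.2, gauss_mtf)
```

The steep rise of MRT where the MTF rolls off is what makes the MRT curve, rather than NEΔT alone, the preferred laboratory summary measure.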

It has been shown experimentally that the ability to discern a displayed test image can be limited by fluctuation noise generated in the eye's primary photoconversion process. In Appendix H it is shown that the retinal fluctuation noise terms can be added to the system noise terms to create a perceived signal-to-noise ratio expression, but the necessary psychophysical experiments have not been performed to obtain the necessary eye parameters. The SNRp expression must also, of course, include the effects of the eye's apertures.

5. Static Field Performance Models (Chapter VI)

The analytical models of Chapters IV and V can be used to predict the incremental temperature difference required to detect either aperiodic or periodic images of known geometry. However, there are continuing efforts to correlate threshold resolution as measured or predicted with the ability to discriminate visually (detect, recognize, identify, etc.) real scene objects. In this chapter, we review the historical approaches which have been used, starting with the well-known Johnson criteria, wherein real scene objects are replaced by bar patterns of equivalent contrast and of spatial frequencies which are a function of the level of visual discrimination desired. The higher the level of visual discrimination wanted, the higher the spatial frequency.

In the early 1970's, Rosell and Willson attempted to quantify further the Johnson criteria, using SNR and improved resolution considerations. The notion was to use Johnson's criteria directly to establish the bar pattern spatial frequency and to attempt to correlate probability of


recognition and identification with bar pattern SNR. While the agreement appeared good for isolated targets on a uniform background, the agreement became poorer for cluttered scenes. As a next step, the Night Vision and Electro-Optics Laboratory at Ft. Belvoir, Va., hypothesized that the threshold bar pattern spatial frequency required at the 50% level of recognition or identification could be selected on the basis of the Johnson criteria but that higher levels of probability required a higher sensor threshold resolution (as opposed to a higher image SNR alone). In concurrent experiments by O'Neill at the Naval Air Development Center at Warminster, Pa., and by Rosell and Willson, it was shown that the threshold spatial frequency response required at the 50% probability level is not a constant; instead, it increases as the imagery tends toward the noise-limited as opposed to the aperture- or MTF-limited condition. On the other hand, while it appears that the Johnson criteria cannot be used to select the bar pattern spatial frequency at the 50% probability level, the NVL approach as formulated by Johnson and Lawson to determine higher and lower probability levels appears to be superior to the SNRD approach, based on parallel experiments on human face identification by Rosell and Willson.

By a further analysis of the O'Neill data, it is hypothesized that the ability to discern real images may be strongly related to the video signal-to-noise ratio (defined with respect to a reference video bandwidth). A method of taking the video SNR into account is proposed in Chapter VI, and the method appears to have promise, but it has not been verified experimentally. The general result of the O'Neill data analysis is that for video SNR above about 3-4, the threshold resolution required at the 50% level of visual discrimination is a constant but increases rapidly as the video SNR decreases below 3. It should be observed that O'Neill's experiments were conducted with images of 100% contrast, and whether the technique suggested will work or not for images of lower contrast has not been explored. The details of these various experimental results are discussed in Appendix I.

The original levels of visual discrimination proposed by Johnson were: detection, orientation, recognition, and identification. The orientation criterion has seen little use, and it is proposed that it be dropped. However, it is proposed to increase the number of levels overall, primarily because the gap between simple detection and classical recognition is felt to be too large. In the specific case of detection, an object may be considered to be more than simply detected even though the sensor's threshold resolution is only sufficient to permit simple detection in the classical Johnson sense. A rapidly moving blob on a road, for instance, is probably a vehicle. Thus auxiliary cues may lead to a higher level of visual discrimination than resolution alone would tend to indicate. It is also observed that the number of resolvable lines required to discriminate a real scene object is often a function of the viewing aspect angle.
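For illustration, the Johnson 50% cycle criteria and an empirical target transfer probability function of the kind developed in the NVL work can be sketched as follows. The specific cycle values and the functional form below are the commonly quoted ones from the open literature, not necessarily the exact values or form adopted in this report.

```python
# Commonly quoted Johnson cycle criteria at the 50% probability level
# (resolvable cycles across the target's critical dimension)
JOHNSON_N50 = {"detection": 1.0, "orientation": 1.4,
               "recognition": 4.0, "identification": 6.4}

def ttpf(n_resolvable, n50):
    """Target transfer probability function (empirical NVL-style form,
    assumed here): probability of accomplishing the discrimination task
    given the number of resolvable cycles across the target."""
    x = n_resolvable / n50
    e = 2.7 + 0.7 * x
    return x ** e / (1.0 + x ** e)

# At exactly the Johnson criterion the probability is 50%;
# doubling the resolvable cycles raises it well above 90%
p50 = ttpf(4.0, JOHNSON_N50["recognition"])
p_double = ttpf(8.0, JOHNSON_N50["recognition"])
```

This structure captures the NVL idea described above: the 50% point is set by the cycle criterion, and higher probability levels demand proportionally higher threshold resolution rather than higher image SNR alone.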

Chapter VI concludes with a discussion of methods of performing range analysis as traditionally performed and also by including some of the newer concepts discussed above. In performing static field range predictions, atmospheric and sightline instability effects are included.

6. Thermal-Imaging System (TIS) Dynamic Field Performance (Chapter VII)

The previous chapters have been directed toward the detection of a target when the observer has an unlimited amount of time. In Chapter VII the element of time is introduced into the tasks of physical acquisition, visual acquisition, and extraction of the required information. All of the probabilities involved in acquiring and detecting a target are discussed using a typical scenario for a FLIR system.


In Section VIIC the mechanics of visual search are described, the meaning of a "glimpse" is discussed, and some typical values of visual search time are given.

The following sections discuss the probability of visual detection under a variety of conditions, such as a changing SNR, a high-SNR environment, and an increasingly detectable object.

Finally, Section VIIF shows how system parameters may cause the probability of detection and identification for a high-resolution system to be lower than the probabilities of a lower-resolution system.


Chapter II

CHARACTERIZATION OF THE THERMAL SCENE

F. A. Rosell

A. INTRODUCTION

Like the visible scene, the thermal scene has infinite variety, and the detailed description of all but a limited number of specialized cases would be prohibitively costly. Elaborate computer models have been developed to describe various scenes in some detail, but experimental verifications of the models are few. However, those experimental results which exist show the same trends as the analytical models predict. The detailed calculations of thermal scene object-to-background thermal contrasts, which are based on rather conventional heat transfer considerations, are beyond the scope of this report, but a number of the results of calculations which have been made will be discussed in this chapter (Ref. 2-1).

The scene, owing either to variations in temperature or emissivity, is the source of most of the radiation sensed by the thermal imaging sensor. Most of the scene receives its energy from the Sun, and the thermal scene displayed appears very like the visual scene, although the contrasts between scene objects can be radically different. Man-made objects, when heated by internal sources, can produce a very unnatural appearance.

It is not uncommon to assume that a scene object such as a ship or a tank always has a certain incremental temperature above background. This assumption can lead to considerable error even when, for example, a tank has been exercised for a considerable period. The terrain generally heats and cools much more rapidly than objects of large thermal mass such as tanks, and therefore both positive (hotter) and negative (colder) contrasts between the object and its background can exist. It may also be erroneously assumed that a thermal imaging sensor is always the sensor of choice for night applications and that night and day performances will be approximately equal. While this may sometimes be true, it is found that thermal scene signatures will generally be smaller at night and during other periods of low insolation (Sun heating).

In this chapter, a number of typical scene objects and their backgrounds will be discussed in order to provide some physical insight into the general problem of characterizing the thermal scene. The cases discussed are specific to the particular environment involved and are not meant to be generally applicable. For more specific information, the reader is referred to References 2-2 through 2-4.

B. GENERAL SCENE CHARACTERISTICS

FLIR sensors detect radiation of wavelengths in the 3-5 or 8-14 micrometer atmospheric windows and derive their images from variations in the radiation received. These variations can be due either to variations in the emittance of the scene or to variations in the radiation


F. A. ROSELL

reflected from the scene. The radiation reflected from the scene can come from a variety of natural sources such as clouds, sky, and background, but is usually less than that associated with a target at ambient temperature. The primary scene signal results from variations in emittance and may be due to variations in temperature or emissivity. That is, the scene is the source generating most of the radiation itself due to its inherent temperature.
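The statement that the scene itself generates most of the sensed radiation can be made concrete with Planck's law. The sketch below integrates the blackbody spectral radiance over the 8-14 µm window and evaluates the band-radiance contrast for a hypothetical 2 K object-to-background temperature difference (unit emissivity assumed):

```python
import math

# Physical constants (SI)
H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m/s
KB = 1.380649e-23     # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance L(lambda, T) in W m^-2 sr^-1 m^-1."""
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    b = H * C / (wavelength_m * KB * temp_k)
    return a / math.expm1(b)

def band_radiance(temp_k, lo_um=8.0, hi_um=14.0, steps=200):
    """Radiance integrated over a spectral band (trapezoid rule)."""
    lo, hi = lo_um * 1e-6, hi_um * 1e-6
    dl = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        w = 0.5 if i in (0, steps) else 1.0
        total += w * planck_radiance(lo + i * dl, temp_k)
    return total * dl

# Band-radiance contrast for a hypothetical 2 K temperature differential
delta_L = band_radiance(302.0) - band_radiance(300.0)
```

Near 300 K the 8-14 µm band carries tens of W m⁻² sr⁻¹, and a 2 K differential shifts it by only a percent or so, which is why the ac-coupled detection of small variations about the average scene level, discussed next, is the essence of FLIR signal processing.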

The background radiation is that associated with the average scene temperature and emissivity. Since FLIR systems currently employ ac coupling between the detectors and their amplifiers, the signal is due to variations about the average. One primary source of noise is that due to the conversion of scene photons to electrons by the detectors.

Most objects in a scene obtain their energy (heat) from the Sun. Heat absorbed during the day is lost at night. The process depends upon the atmospheric conditions, the degree of overcast, and the general air temperature. When humidity is high, the sky is cloud covered, and the air temperature is near constant, the scene tends not to vary much from day to night. Everything stabilizes at air temperature as though the whole system were in a blackbody cavity. Objects with low emissivity tend to take on the temperature of the air, with a lag determined by the thermal mass of the object and its thermal conductivity. Materials that conduct poorly are apt to have surface temperatures which determine their radiation characteristics and which vary greatly. Objects with high emissivity are more likely to have their temperatures strongly influenced by the physical characteristics of the scene objects and the radiation characteristics of the sky and atmosphere than those of low emissivity.

The effect of a strong wind is to reduce substantially the temperature excursions within a scene. In effect, the thermal signature is partially "blown away." During and after a rainstorm, the scene tends to become isothermal. Also, an extended period of overcast will reduce the amplitude of cyclic scene temperature variations. The reductions in temperature differences can substantially degrade the appearance of the natural scene at long range, because the scene thermal signature is further reduced by the atmosphere. However, the detection of hot man-made objects could be enhanced by the washout of the background. It is also generally true that periods of poor atmospheric transmittance and periods of low insolation tend to go together.

C. BACKGROUNDS

The radiation characteristics of a complicated scene can take on infinite variety, and detailed analysis of any but a small number of representative typical cases has been prohibitively costly in the past. In the future, wider use should be made of computer-generated thermal scene object/background signatures, provided that such efforts are paralleled with experimental programs to validate the models. In the following, some examples of computer-generated thermal signatures are discussed. The results have not been verified experimentally, but they appear to be reasonable and are in general agreement with existing data.

Trees and bushes may be significantly warmer than grassy ground. In Fig. 2-1, a computer-generated thermal signature is shown for five different background materials as a function of time of day from 6 a.m. to 8 p.m. The conditions are given in Table 2-1. It is also seen that sand heats more slowly and cools more slowly than the other backgrounds. The relative thermal contrast, which is defined to be the differential radiance between grass and the other materials, is shown in Fig. 2-2. Observe that in this case the sand is of negative contrast


Fig. 2-1 - Temperature difference between various background materials and air as a function of the time of day

Table 2-1 - Conditions for the Calculations of Temperature Difference Shown in Fig. 2-1

    Ambient Temperature   298 K        Cloud Cover     0.6
    Pressure              1013 mbar    Latitude        20°N
    Declination           0 deg        Wind Velocity   12 mph
    Visible Range         20 km        Range           4 km
    Mixing Ratio          16

Fig. 2-2 - Relative thermal contrast between grass and the other background materials as a function of the time of day


during the early morning hours and of positive contrast in the afternoon. Both the scene temperature and the thermal contrast of the scene are seen to be strongly time dependent.

At night, trees mirrored in water appear even warmer. Three reasons for this effect have been postulated: 1) the undersides of leaves may stay warmer than the top surfaces because they cool by radiation to a warm ground rather than to a cold sky; 2) the net emissivity of the leaves' undersides may be higher due to dew; and 3) the sensor viewing the undersides of leaves off the water sees a cavity effect, inasmuch as the water adds radiation and the reflection from the underside of one leaf sees another leaf, and so on, causing a blackbody effect, while the top of the leaf reflects the sky.

Usually, a calm sea under a clear sky appears cold at shallow viewing angles due to its high reflectivity and low emissivity, as can be inferred from Fig. 2-3. At steeper viewing angles, the emissivity is greater and the water appears warmer. A sky overcast with low-altitude clouds can make the water appear warmer. In Fig. 2-4, we show the angular reflectance properties of water vs wavelength, which in turn shows that the angular reflection properties of Fig. 2-3 apply throughout the infrared regions of interest.

The disturbed sea surface is difficult to describe in a closed mathematical expression. In first-order analysis it is common to use the sine wave and sawtooth approximations shown in

Fig. 2-3 - Reflectivity and emissivity of water for the parallel (∥), average, and perpendicularly (⊥) polarized light components vs angle of incidence in the visible spectrum




Fig. 2-4 - Spectral reflectance of water vs wavelength for various angles of incidence

Fig. 2-5. The sine wave sea is built up from sea-state descriptions of wavelength, height, period, etc. The sawtooth approximation, which is analytically much simpler, has shown good agreement with measured results in many cases. The sawtooth wave is inclined toward the sensor at an angle of about 15° for the average sea.

In Fig. 2-6, we show three viewing aspect angles over a water background. Along a substantially horizontal path just above the horizon through a dense air path, the sensor "sees" the air temperature. The air along the path may be considered to have an emissivity of unity and is thus a "blackbody sky." From a perpendicular to the surface, the water has an emissivity of unity and is thus a "blackbody sea." At the sea just below the horizon, a reflected ray would follow the dashed line if the sea were perfectly flat, but since the sea is almost always disturbed, reflected rays from a higher source, as shown by the solid line, are observed. Experimentally, the upward angular displacement has been observed to be approximately 30° on the average,




Fig. 2-5 - Sawtooth and sine wave approximations to a disturbed sea

Fig. 2-6 - Background radiation sources in three viewing directions

giving credence to a sea model with an average slope of 15°. The thermal sensor viewing just below the horizon senses a combination of the radiation emitted from the sea and the mirrored reflection of a cold sky.

The variation in background temperature with the viewing aspect angle is illustrated in Fig. 2-7. Note that the sea background temperature dips just below the horizon, and a ship which may be colder than either the air or the water may yet be imaged as hotter than either. The effect of an overcast sky is illustrated in Fig. 2-8. The sky temperature is seen to increase, and the dip in apparent temperature just below the horizon is decreased.




Fig. 2-7 - Effect of viewing angle on apparent background temperature of air and a disturbed sea (19 August, 2240 hours; water temp 24.2°C; dry bulb 24.0°C; wet bulb 19.2°C; sky clear)

D. SCENE OBJECTS

A truck, when heated by nature, looks very much like a truck on a FLIR display. The cavity under the truck may appear hot if viewed at an angle since, by multiple reflection, this area has an emissivity of unity. The truck often has dark areas due to low-emissivity metal areas which reflect the sky. When idling, the engine and exhaust become very hot, exhibiting localized areas of radiation which may saturate the display. If the FLIR displays hot as white, the extreme brightness can aid detection but degrade recognition. Localized heating can cause the entire hood of the truck to appear very bright. By reversing the polarity to hot as black, the picture often appears more normal. If the truck is moving down the road there is again localized heating but not of the same type. The hood is cooled by the airstream to near the air temperature. The exhaust will still be hot, and the undercarriage and tires will appear warm to hot.

Tanks have a large mass and generally lag the terrain in temperature when parked. The engine and exhaust can appear very bright when the tank is running and these features are in the field of view. When driven, the bogie wheels and treads, as well as the rest of the tank, heat to provide a natural-looking picture. In Fig. 2-9, we show the various factors which influence the heating of a tank. The factors which must be considered are the insolation, the radiation exchange between the tank and its surround, convection due to wind or tank motion, internal heat sources such as the engine, and conduction to the earth. Conduction is of particular interest when the tank is parked in snow.
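The heat-balance factors listed above can be sketched as a lumped-parameter model. Everything in the following sketch (coefficients, the surround temperature taken equal to the air temperature, the explicit Euler step) is an illustrative assumption, not a calibrated tank model from this report:

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def step_surface_temp(t_surface, t_air, t_ground, insolation, dt,
                      absorptivity=0.8, emissivity=0.9,
                      h_conv=15.0, k_cond=2.0, q_internal=0.0,
                      heat_capacity=5.0e4):
    """One explicit Euler step of a lumped surface heat balance (per m^2).

    Terms: absorbed insolation, radiative exchange with the surround,
    convection to the air, conduction to the ground, internal sources.
    All coefficients are illustrative, not measured vehicle parameters.
    """
    q = (absorptivity * insolation                         # insolation
         + emissivity * SIGMA * (t_air**4 - t_surface**4)  # radiation exchange
         + h_conv * (t_air - t_surface)                    # convection
         + k_cond * (t_ground - t_surface)                 # conduction to earth
         + q_internal)                                     # engine, exhaust, etc.
    return t_surface + dt * q / heat_capacity

# Crude daytime run (10 hours in 10-s steps): the sunlit surface warms
# well above the 290 K air temperature and settles at equilibrium.
t = 290.0
for _ in range(3600):
    t = step_surface_temp(t, t_air=290.0, t_ground=288.0,
                          insolation=600.0, dt=10.0)
print(round(t, 1))  # settles roughly 20 K above the air temperature
```

The explicit Euler step is stable here only because the time step is small compared with the thermal time constant (heat capacity divided by the combined exchange coefficients).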



Fig. 2-8 - Effect of an overcast sky on the equivalent background temperature for various viewing angles

The signal obtainable from a scene object is a function of the viewing aspect angle, and therefore the signals will be time dependent on even a very short time basis when the sensor is moving. A computed temperature profile of a tanklike object (parked) is shown in Fig. 2-10. The temperature curves show the effect of the sun passing from east to west. Temperatures of the eastern surfaces peak in midmorning and then slowly cool to equilibrium in midafternoon, while the western surfaces do the opposite. South-facing and roof surfaces rise to a post-noon peak before they decay. The thermal lag of the tank is seen by the fact that the roof surface peaks later than the background.

The radiant contrast between the tank and a grass background is shown in Fig. 2-11. If it is desired to image the tank, it is seen that there is four times more thermal contrast on the east side of the tank at 9:00 a.m. Near noon, the tank will be more easily discerned in the north-south direction. In the late afternoon, the roof and the west sides of the tank provide higher contrasts.



Fig. 2-9 - Factors to be considered in thermal modeling of a scene object: insolation, radiation exchange, convection, internal sources, and conduction

Fig. 2-10 - Temperature difference between a tank and air as a function of the time of day for various viewing directions (object: tank; background: grass; range: 4 km)



Fig. 2-11 - Effective radiant contrast between the tank and the grass background as a function of time of day for various viewing directions in the 8-12 μm spectral band (TA = 298 K; mixing ratio 16; P = 1013 mbar; lat 20°N; declination 0 deg; wind 12 mph from the east; visible range 20 km)



Fig. 2-12 - Equivalent temperature of a ship, the ambient air, and the sea in a semitropical and a cold northern area vs time of day for a given set of operating conditions

Fig. 2-13 - Geometrical considerations for the calculation of equivalent vehicle temperature and area



The thermal contrast of a ship against a sea and an air background is shown in Fig. 2-12 as a function of the time of day. The upper set of curves pertains to a semitropical warm sea. In the specific case analyzed, it can be seen that the ship can appear in positive or negative contrast relative to both the sea and the air. In the lower set of curves, a cold sea typical of the northern climes was assumed, and for the specific case analyzed, the ship was always of positive contrast relative to the air and the sea. In making the above calculations the ships were assumed to be moving, there was a wind and partial cloud cover, and specific viewing angles were assumed. A stationary ship under a clear sky with no wind might be expected to undergo rather larger temperature excursions, but the same trends are expected.

E. EQUIVALENT TEMPERATURE DIFFERENCE

As noted above, the thermal signature of a scene object can be due to temperature differences, emissivity differences, and reflected radiation. To simplify calculations it is common to assume an equivalent object of unity emissivity and zero reflectivity, because the sensitivity-resolution characteristics of a sensor are usually specified in terms of a minimum resolvable or detectable temperature difference vs angular spatial frequency. Another simplification which is often made is to average the temperature over the entire area of the scene object (Ref. 2-2). An example of a thermal signature of a truck is shown in Fig. 2-13. The truck is divided into areas A_i of substantially constant temperature T_i. Truck areas which are at the same temperature as the background are ignored. The approximate temperature of the whole truck for modeling purposes is taken to be

T_avg = (Σ_i A_i T_i) / (Σ_i A_i).

If we assume a uniform background at temperature T_B, the average truck temperature difference is

ΔT_avg = T_avg - T_B.

The equivalent truck is thus assumed to be a rectangle with the same area and a temperature difference ΔT_avg.
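The area-weighted average can be written out in a few lines of Python; the areas and temperatures below are invented illustrative values, not data from the report:

```python
# Area-weighted equivalent temperature of a scene object (hypothetical values).
# Areas A_i (m^2) and temperatures T_i (K) of regions that differ from the background.
areas = [1.5, 0.8, 0.3]          # e.g., hood, cab, exhaust region
temps = [295.0, 293.0, 310.0]    # region temperatures (K)
t_background = 290.0             # uniform background temperature (K)

# T_avg = sum(A_i * T_i) / sum(A_i)
t_avg = sum(a * t for a, t in zip(areas, temps)) / sum(areas)

# Equivalent temperature difference of the whole object
delta_t = t_avg - t_background
print(round(t_avg, 2), round(delta_t, 2))  # -> 296.12 6.12
```

Note that the small hot exhaust region pulls the average up only slightly, which is exactly the dilution effect discussed next.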

Some small areas of the truck may be very much hotter than some of the larger areas. Averaging dilutes the effect of the small hot areas, but this usually is not a modeling problem when the scene objects are fairly small relative to the field of view, because the hot spots are smeared out by the sensor apertures and the eye can equally detect small hot objects and larger but cooler objects so long as the incremental signals integrated over the object areas are the same. At closer ranges, localized hot spots, such as stacks on a ship, which are otherwise of small extent may be of significant aid in detection and in recognition.

The minimum resolvable and detectable temperature differences are ordinarily measured using a 300 K background. If the scene object temperature differences are small and the background is approximately 300 K, the ΔT_avg approximation for the actual scene object radiance may be used. In other cases, the actual incremental radiance must be calculated, because ΔT and the incremental radiance are not linearly related.
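The nonlinearity between ΔT and incremental radiance follows from Planck's law. The sketch below (my own illustration, not a calculation from the report; crude trapezoidal integration over the 8-12 μm band) shows that the same 2 K difference produces a larger radiance increment against a warm background than against a cold one:

```python
import math

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance (W m^-2 sr^-1 m^-1) from Planck's law."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return (2 * h * c**2 / wavelength_m**5) / (
        math.exp(h * c / (wavelength_m * k * temp_k)) - 1.0)

def band_radiance(temp_k, lo_um=8.0, hi_um=12.0, n=400):
    """Blackbody radiance integrated over the band (trapezoidal rule)."""
    step = (hi_um - lo_um) / n
    wavelengths = [(lo_um + i * step) * 1e-6 for i in range(n + 1)]
    values = [planck_radiance(w, temp_k) for w in wavelengths]
    return (sum(values) - 0.5 * (values[0] + values[-1])) * step * 1e-6

# The same 2-K difference yields different incremental radiance at
# different background temperatures: delta-T and delta-radiance are
# not proportional.
inc_cold = band_radiance(282.0) - band_radiance(280.0)
inc_warm = band_radiance(302.0) - band_radiance(300.0)
print(inc_cold, inc_warm)  # the warm-background increment is larger
```

This is why the ΔT_avg shortcut is restricted to small differences near the 300 K reference background.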



REFERENCES

2-1 R.F. Higby and R.H. Daumit, "Gaining Thermal Signature Insight Through Computer Simulation," Proc. IRIS Imaging Specialty Group, June 1978 meeting.

2-2 J.A. Ratches, "Static Performance Model for Thermal Imaging Systems," Optical Engineering, Vol. 15, No. 6, Nov-Dec 1976.

2-3 F. Zegel, W. Stump, S. Rodak, and P. Stamoulas, "Quantitative Thermal Signatures of Small Craft in Destin, Florida Coastal Waters for Maritime Search and Rescue," NV&EOL Report, July 1974 (Unclassified).

2-4 S. Rodak, F. Zegel, and W. Stump, "Thermal Signature Measurements of Four US Army Field Portable Power Generators," NV&EOL Report, August 1975 (Unclassified).




Chapter III

ATMOSPHERIC EFFECTS ON INFRARED SYSTEMS

J.B. Goodell
R.E. Roberts

A. INTRODUCTION

The atmosphere is an important, integral factor in the analysis and design of infrared (IR) systems. The spectral location of the atmospheric windows, for example, restricts the choice of materials for detectors and optics. Furthermore, within these windows, poor visibility conditions substantially degrade the operation of IR imaging systems. On clear days of high absolute humidity, IR systems of equivalent sensitivity usually operate better in the 3-5 μm window, while hazy or limited visibility conditions generally favor operation in the 8-12 μm window. The atmosphere is undoubtedly one of the most important factors controlling the performance characteristics of FLIR devices and certainly must be taken into account both in the optimal choice of spectral bands and in IR systems design.

This chapter provides the reader with a brief review and assessment of the LOWTRAN 3b* propagation code (Ref. 3-1) for evaluating atmospheric transmission and contains convenient computer-based tables derived from LOWTRAN 3b giving atmospheric contrast transmittances averaged with respect to a 10°C blackbody over the 3-5 and 8-12 μm atmospheric windows. From a knowledge of the local meteorological conditions (temperature, relative humidity or dew point, and visual range) the reader can simply and directly compute the transmittances corresponding to a selected range.
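As an illustration of how such local meteorological inputs are used, the following sketch converts air temperature and relative humidity to absolute humidity (g/m³). The Magnus-formula constants are standard textbook approximations; they are not taken from LOWTRAN or from this report:

```python
import math

def absolute_humidity(temp_c, rel_humidity_pct):
    """Water vapor density (g/m^3) from air temperature (deg C)
    and relative humidity (%).

    Uses the Magnus approximation for saturation vapor pressure
    and the ideal gas law for water vapor.
    """
    # Saturation vapor pressure in hPa (Magnus approximation)
    e_sat = 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))
    e = e_sat * rel_humidity_pct / 100.0      # actual vapor pressure, hPa
    # rho = e / (R_v * T); R_v = 461.5 J/(kg K); hPa -> Pa, kg -> g
    return 100.0 * e / (461.5 * (temp_c + 273.15)) * 1000.0

# A humid 30 deg C day holds roughly twice the water of a mild 20 deg C day:
print(round(absolute_humidity(30.0, 80.0), 1))  # -> 24.2
print(round(absolute_humidity(20.0, 70.0), 1))  # -> 12.1
```

The two example values bracket the "wet tropical" and "midlatitude summer" surface densities listed later in Table 3-1, which is why humid days penalize the window transmittances so strongly.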

The chapter begins with a brief discussion of the main absorbing gases. It continues with a more detailed exposition of aerosol extinction together with an assessment of the aerosol modeling uncertainties. Following this is a description of the LOWTRAN 3b computer program for computing atmospheric transmittances, as developed at the Air Force Geophysics Laboratory, Cambridge, Mass. It is currently the most widely accepted standard for computing atmospheric propagation and forms the basis for the data in this chapter. The model, however, does not include important man-made aerosols such as battlefield smoke and dust, which are beyond the scope of this chapter. Detailed procedures for determining atmospheric attenuation from tables computed using LOWTRAN 3b conclude the chapter.

This chapter purposely avoids detailed expositions of meteorological variables, such as seasonal variations of pressure and temperature, or the relative merits of different atmospheric models beyond LOWTRAN 3b. Nor does it include detailed discussions of molecular band absorption theory. The literature contains voluminous theory and data concerning all these topics which, of course, are crucial to atmospheric transmittance. The reader who wishes to probe deeper can consult references 3-2 to 3-42.

*LOWTRAN 4, which has provisions for calculating the radiance from atmospheric paths, is now available as a card deck from the National Climatic Center, Federal Building, Asheville, NC 28801, for a charge of $20.00. (Address requests to Mr. D. Davis.) The transmittance portion of LOWTRAN 4 is essentially the same as LOWTRAN 3b.



GOODELL AND ROBERTS

B. ATMOSPHERIC MOLECULAR CONSTITUENTS

1. Water Vapor

Water vapor is the most important absorbing gas in the earth's atmosphere for infrared transmission and also the most variable. Local humidity conditions can easily double the water vapor content of the atmosphere in any locale in a matter of hours with a changing weather front, thus severely degrading infrared imaging system performance expectations (Ref. 3-43). Moreover, water vapor absorption primarily determines the atmospheric windows, as Fig. 3-1 shows. For clear conditions these atmospheric windows vary in transparency primarily in response to the water vapor content. A dry (midlatitude winter) atmosphere (e.g., 3.5 g/m³) is almost completely transparent in the windows. A wet tropical atmosphere (19 g/m³), on the other hand, almost completely blocks large portions of the atmospheric windows. Table 3-1 illustrates typical water content for various atmospheric models as given in the Air Force Geophysical Laboratories tabulation (Ref. 3-44).

Fig. 3-1 - Constituent absorption bands and atmospheric windows

Table 3-1 - Water Vapor Density (g/m³) in the Atmosphere at Various Levels for Several Atmospheric Models

Altitude (km)   Tropics   Midlat.   Midlat.   Subarctic   Subarctic   U.S. Std.
                          Summer    Winter    Summer      Winter
0               19        14        3.5       9.1         1.2         5.9
1               13        9.3       2.5       6.0         1.2         4.2
2               9.3       5.9       1.8       4.2         0.94        2.9
3               4.7       3.3       1.2       2.7         0.68        1.8




Water vapor absorption occurs in two forms, molecular band absorption and continuum absorption. Very complex spectra characterize molecular absorption. Literally hundreds of vibration-rotation energy level transitions create the water vapor absorption bands. Figure 3-2 is a moderately high-resolution spectrum of water vapor in the spectral region from 5 to 7.69 μm. The water vapor continuum, on the other hand, has essentially a smooth spectral dependence and is present in both the 3-5 μm and 8-12 μm windows. A recent review (Ref. 3-45) of the 8-12 μm continuum absorption indicates that the contribution due to pure water vapor is quantitatively well understood, both with respect to the spectral dependence between 8-12 μm and with respect to the strong and important temperature dependence of that absorption. The only remaining uncertainty for this particular continuum absorption arises from the nitrogen-broadened portion of the water continuum. The overall contribution from this term is, however, small, and its effect on the uncertainty of FLIR performance is negligible.

Fig. 3-2 - Moderate-resolution H2O spectrum, 5-7.69 μm

Much less is known about the 3-5 μm continuum. A best estimate for the extinction coefficient* is 0.04 km⁻¹ ± 0.04 (Refs. 3-46 to 3-48). These values may be overly pessimistic for the 3-5 μm window, since the water vapor continuum coefficient appears to attain its largest value near 4 μm. The relatively large uncertainty of 0.04 km⁻¹, however, should not affect FLIR systems performance significantly, although a possible exception may be long-range ship recognition, which can be dominated by clear but humid transmission paths.

2. Carbon Dioxide (CO2)

Carbon dioxide, unlike water vapor, has a constant weight ratio in standard atmospheres. Local perturbations such as automobile exhaust, dense foliage, and factory exhausts can, of course, alter the "standard" distribution.

Carbon dioxide is second in importance to water vapor in terms of infrared absorption in the clear atmosphere. It closes the 3-5 and 8-12 μm spectral windows. Figure 3-1 shows the carbon dioxide spectra together with the atmospheric spectra and those of other gases.

*The extinction coefficient β relates the transmittance over a path of length R through Beer's law, τ = e^(-βR), discussed in Section D.




3. Nitrous Oxide (N2O)

Nitrous oxide has an approximately constant concentration in the atmosphere. It has strong absorption bands at about 4.5 μm and at about 8 μm; otherwise its absorption is insignificant (see Fig. 3-1).

4. Methane (CH4)

Methane has an approximately constant concentration in the atmosphere and generally occurs in small amounts, except over swamps (marsh gas), where large quantities increase atmospheric absorption noticeably in two narrow infrared bands centered around 3.5 and 8 μm (see Fig. 3-1).

5. Ozone (O3)

Ozone has a variable distribution in the earth's atmosphere. Solar ultraviolet dissociation of O2 molecules causes the O3 concentration to peak at a height of about 24 km. The very strong O3 9.6-μm absorption spike attenuates noticeably over long sea level paths, in spite of the very small sea level ozone concentration (see Fig. 3-1). Normally, however, FLIR operation does not experience difficulty with the O3 9.6-μm band.

6. Carbon Monoxide (CO)

Carbon monoxide has a nearly constant concentration in the atmosphere except where increased by pollutants, such as exhausts. Concentrations of carbon monoxide cause atmospheric absorption in a band between 3.6 and 3.8 μm (see Fig. 3-1).

7. Nitrogen (N2)

The concentration of nitrogen in the earth's atmosphere is about 78.088 percent by volume. Nitrogen affects atmospheric transmission primarily through the nitrogen continuum in the 3-5 μm window.

8. Oxygen (O2)

Molecular oxygen comprises 20.949 percent by volume of the earth's atmosphere. O2 absorption should not be confused with O3 (ozone) absorption, which is strong at 9.6 μm. Oxygen band absorption is negligible in the 3-5 and 8-13 μm windows.

Table 3-2 shows the concentrations of the principal absorbing gases in the earth's atmosphere. The compositions are percent by volume. In a model atmosphere all important spectroscopic gases except H2O and O3 have nearly constant concentrations. The composite absorption of all these gases produces the results shown in Fig. 3-3, which is a transmission spectrum measured by Yates and Taylor over 5.5 and 16.25 km horizontal atmospheric paths.




Table 3-2 - Composition of the Atmosphere

Constituent       Percent by Volume    Constituent        Percent by Volume
Nitrogen          78.088               Krypton            1.14 x 10⁻⁴
Oxygen            20.949               Nitrous Oxide      5 x 10⁻⁵
Argon             0.93                 Carbon Monoxide    20 x 10⁻⁶
Carbon Dioxide    0.033                Xenon              8.6 x 10⁻⁶
Neon              1.8 x 10⁻³           Hydrogen           5 x 10⁻⁶
Helium            5.24 x 10⁻⁴          Ozone              variable
Methane           1.4 x 10⁻⁴           Water Vapor        variable

Fig. 3-3 - Atmospheric transmission at sea level over 5.5- and 16.25-km paths

C. AEROSOLS

The atmosphere also contains suspended particles, such as dust, carbon particles, sand, ashes, water droplets, salt spray, and the like, whose sizes and concentrations depend on local environments and can, therefore, vary not only with locale but temporally within a locale. Typical aerosol radii range from about 5 x 10⁻³ μm up to about 20 μm. Their number density can vary from almost zero up to about 10⁵/cm³. The main contributors to aerosols are sea spray, fog, haze, dust storms, and air pollution. Other contributors include forest fires, sea salt, rocks, soil, volcanos, meteoric dust, and biological materials.

Particulate extinction, especially the aerosol effects associated with limited visibility conditions, undoubtedly causes the largest uncertainty in the modeling of propagation for electro-optical systems. LOWTRAN 3b, for example, currently adopts three aerosol models applicable for low-altitude use (Ref. 3-1). Two of these are suggested for usage over land and are referred to as the urban and rural models. For the computation of extinction coefficients these models are nearly the same. The third low-altitude model employed in LOWTRAN 3b is



referred to as maritime. Also, for poor seeing conditions, such as visual ranges of 2 km or less, LOWTRAN 3b adopts a stopgap measure which amounts to equalizing the attenuation in all spectral bands. The equal-attenuation aerosol methodology leads to a pessimistic result for the IR attenuation and, in a sense, provides a lower bound on expected infrared performance range. Such a spectrally flat extinction coefficient, however, may not be justified. Experiments with fogs and clouds, as well as calculations based upon assumed particle distributions, indicate that although the 3-5 μm attenuation is roughly that of the visible band, the 8-12 μm extinction is typically one half that of the other band. Thus an equal-attenuation assumption can lead to extremely pessimistic predictions for 8-12 μm systems performance. This obviously has strong implications on the selection of an optimal spectral band for IR sensors (i.e., 3-5 vs 8-12 μm).

The rural and urban models have gained widespread use for central European environments, particularly for the limited visibility conditions so often encountered during the winter months. They may not, however, be appropriate for limited visibility conditions. A misleading consequence of the use of these two models in any such application is that they tend to predict optimistic values of infrared transmission. For example, the urban and rural aerosol models predict higher transmissions in either the 3-5 or 8-12 μm bands than would be measured. These models tend to give optimistic results for IR propagation because of the low number density of large particles.

The maritime model particle distribution has a relatively higher concentration of large particles, which is typically characteristic of hazes and fogs. Since the optical properties used in generating the maritime model are very nearly those of liquid water, again as found in continental fogs and hazes, the maritime model is most appropriate not only for oceanic environments but also for continental environments under fog and hazy visibility conditions (Ref. 3-49). This statement tends to be borne out not only by theory, in terms of what distributions are expected to look like for such conditions, but also in terms of a comparison with the existing experimental data base for poor visibility conditions. If one is to adopt aerosol methodologies directly from the LOWTRAN 3b code, it is advisable to employ the maritime model both for at-sea use as well as for over-land usage under poor seeing conditions. As one would expect, the application of the maritime aerosol algorithm is not nearly so optimistic for the infrared spectral bands as would be the case for the previous continental models. LOWTRAN is an evolutionary model which changes as the data base grows.

Most current aerosol models suffer considerably from reliance upon a single representative particle size distribution. The current LOWTRAN 3b aerosol models, as discussed above, use measured optical properties (representative of average continental, rural, urban, or maritime conditions) with a single assumed distribution to predict, via a Mie computation, a scaling model for the extrapolation of the visual range to IR transmission. The underlying assumption is that for a particular environment, such as a continental haze, the shape or functional form of the distribution remains unchanged. In many, if not most, cases this is not a valid representation. For example, in an evolving fog formation the water droplet distribution tends to change in the sense that there are relatively more large particles as the visibility becomes lower. Figure 3-4, which shows some representative particle size distributions for hazes and fogs from the recent Grafenwöhr field measurements (Ref. 3-50), illustrates this dramatically. Each of these distributions leads to a different spectral dependence for the aerosol extinction coefficient.



Fig. 3-4 - Change in aerosol droplet size as fogs build, December 30-31, 1975, at Grafenwöhr

A potentially useful method (Ref. 3-49), which is aimed at obtaining information directly related to IR propagation without reliance upon a single assumed distribution, is the concept of correlating infrared propagation with the integrated liquid content along the transmission path. This approach has two advantages. First, it is insensitive to the shape or form of the assumed distribution. Second, it permits a remote measurement along the preferred line of sight (rather than a local measurement such as humidity, temperature, etc.) with, for example, a laser ranging system such as LIDAR for determining liquid content, thereby implying IR propagation directly on a real-time basis. Such a methodology can also be used to derive general scaling relationships between visibility statistics and propagation in other spectral bands. The relationships derived thus far, however, are not linear in the sense that LOWTRAN 3b is, and predict more general functional relationships between the IR and the visual bands.

Figure 3-5 illustrates that there is indeed a strong correlation of IR extinction by aerosols with the liquid content. The points are representative of the different distributions summarized in the recent review article of Tomasi and Tampieri (Ref. 3-51). The relative placement of the Grafenwöhr measurements and the LOWTRAN 3b maritime and rural cases is also shown. The points located toward the upper right corner are indicative of limited visibility conditions.


Fig. 3-5 - Correlation of the 10-μm extinction coefficient with liquid content. The distributions have been normalized to a total number density of one particle per cubic centimeter.

Most of our propagation models rely upon meteorological inputs as a driving parameter. For example, in the case of LOWTRAN 3b one uses an estimate of the visual range to imply multispectral aerosol effects. There is a serious problem in using such a meteorological parameter for estimating propagation either in IR channels or in the visible channels. The first problem is associated with the subjectivity of the observer making the observation. It is well known that different observers, or even the same observer on different days, can obtain drastically different results for the visual range under similar conditions. Secondly, an estimate of the visual range is usually at best a local measurement determined by a particular line of sight which the observer happens to take. In many cases this has little, if any, resemblance to the application being made. This is due to the vertical inhomogeneities associated with atmospheric aerosols. For example, an observer making a ground-based visual range estimate can obtain a result for the extinction which is orders of magnitude different from a similar estimate that might be made from a balloon-borne platform. Figure 3-6, again based upon the Grafenwöhr data base, illustrates this point dramatically.




Fig. 3-6 - Effect of altitude upon the extinction coefficient of aerosols for limited visibility conditions

Sections B and C have reviewed and placed in perspective the three major categories of atmospheric transmission. It is fair to say that the first category, namely molecular absorption by the uniformly mixed gases, is fairly well understood. It also is apparent that the water vapor continuum, from an engineering standpoint, is fairly well understood in the 8-12 μm band but has a much larger discrepancy associated with it in the 3-5 μm band. Measures to remove this discrepancy are under way in several government laboratories. Finally, the largest problem that


we have in modeling the weather propagation effects for electrooptical (EO) sensors is associated with our uncertainties in the aerosol environment.

D. ATMOSPHERIC TRANSMISSION AND EXTINCTION

1. Transmission

When the gas quantities are known, the atmospheric transmittance over the corresponding line of sight path can be determined from the spectral absorption properties of the gases. The composite atmospheric transmittance is usually approximated by multiplying the transmittances of each separate component, averaged over some very narrow spectral band. Thus the average atmospheric transmittance over a line of sight path in a narrow spectral band is generally written as

τ = τ₁ · τ₂ ⋯ τ_N,

where τ₁, …, τ_N are the transmittances of the individual absorbing species averaged over the narrow spectral band. The line of sight path determines the quantity of gas to be inserted into each individual τ.

In general, the calculation of the τ's is very detailed and tedious because of the extreme complexity of the molecular spectra.

2. Extinction

For very narrow spectral intervals (essentially spectral lines) and/or weak wavelength dependence, Beer's Law, namely,

τ = exp (−β_ext R),

provides a useful approximation to atmospheric transmission. Here β_ext is the "extinction coefficient" associated with the composite atmospheric transmission, and R is the range, usually in kilometers. The approximation is generally fair for aerosol particles and the water vapor and nitrogen continua, all of which display weak wavelength dependences.

The concept has two advantages for IR imaging system analysis. The first is that for an atmospherically limited environment the range performance (such as detection or recognition range) for a given system directly relates to 1/β_ext. Thus a doubling of the extinction coefficient degrades range performance by roughly half. A relative uncertainty in the extinction coefficient, dβ_ext/β_ext, produces a comparable uncertainty in the performance range, dR/R.

The second advantage is that β_ext can be separated into various components, for example, β_mol, β_cont, and β_aer: the extinction coefficients due to molecular band absorption, gas continuum absorption, and aerosol attenuation. This form provides a convenient mechanism for discussing the effects due to the dominant species, namely, continuum and aerosols.

Used with discretion and an understanding of its limits of validity, the concept of extinction provides useful insights into atmospheric influences on IR imaging system performance.
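The additivity of the extinction components and the 1/β_ext range scaling can be sketched numerically. The function names and the sample component values below are illustrative only, not taken from the report:

```python
import math

def total_extinction(beta_mol, beta_cont, beta_aer):
    """Composite extinction coefficient (km^-1) as the sum of the
    molecular-band, continuum, and aerosol components."""
    return beta_mol + beta_cont + beta_aer

def transmittance(beta_ext, range_km):
    """Beer's Law: tau = exp(-beta_ext * R)."""
    return math.exp(-beta_ext * range_km)

# Illustrative component values (km^-1):
beta = total_extinction(0.05, 0.10, 0.20)
tau_8km = transmittance(beta, 8.0)
# In an atmospherically limited environment, doubling beta_ext
# roughly halves the performance range, since range scales as 1/beta_ext.
```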


E. LOWTRAN COMPUTER PROGRAM

1. General Description

Many computer programs available today compute atmospheric transmittance over arbitrary slant paths from temperature, pressure, gas and aerosol lapse rates, and spectral line strength data. Most of them provide satisfactory values. Many are still evolving as new information advances the state of the art. Most of them are too complex for field use.

The LOWTRAN 3b computer program, which forms the basis for the transmittance tables in this chapter, was developed at the Air Force Geophysics Laboratory (AFGL) by J.E.A. Selby and R.A. McClatchey. LOWTRAN 3b predicts atmospheric transmittance along slant paths in the wavelength region from 0.25 to 28.5 μm. The LOWTRAN 3b program contains approximately 2000 cards. The program allows a choice of one of six model atmospheres or direct inputs of measured atmospheric data. LOWTRAN 3b includes the following atmospheric absorbing molecular species, as discussed in Sections B and C:

1. Water vapor from 0.690 to 28.57 μm.

2. Uniformly mixed gases including CO₂, N₂O, CH₄, CO, and O₂. Absorption is calculated in two bands: 1.241 to 20.00 μm and 0.758 to 0.771 μm.

3. Ozone from 3.05 to 17.39 μm.

4. The N₂ continuum from 3.65 to 4.81 μm.

5. The water vapor continuum from 7.14 to 14.93 μm.

6. Ozone between 0.43 and 0.77 μm (the visible) and wavelengths shorter than 0.36 μm.

2. LOWTRAN 3b Aerosol Methodology

Although there are many gross uncertainties (as discussed in Section C) involved in the assessment of bad weather aerosol effects upon IR systems, one can at least develop a qualitative if not semiquantitative understanding of weather-sensor relationships by using the straightforward aerosol models contained in the LOWTRAN 3b code. The assumptions embodied in this particular transmission algorithm ultimately lead to a set of fixed linear relationships between the subjective visibility estimate, V, and extinction in other spectral regions according to

β_ext(λ) = C_λ β_ext(0.55 μm),

where C_λ is determined via a Mie scattering calculation based upon a single representative size distribution and a set of optical properties. A different C_λ is obtained for the so-called maritime, urban, and rural cases. β_ext(0.55 μm) is in turn related to the visual range, V, with the well-known Koschmieder relationship,

β_ext(0.55 μm) = 3.91/V,

derived using a 2% contrast requirement.


Since C_λ for the LOWTRAN 3b program displays only a weak variation with λ over the 3-5 and 8-12 μm bands, it is possible to describe the LOWTRAN 3b maritime, rural, and urban models quite simply and accurately with the following formulas.

Maritime:
β(3-5 μm) = 2.24/V,
β(8-12 μm) = 0.85/V.

Rural:
β(3-5 μm) = 0.42/V,
β(8-12 μm) = 0.43/V.

Urban:
β(3-5 μm) = 0.60/V,
β(8-12 μm) = 0.41/V.

The aerosol transmission factor is then obtained in a straightforward fashion via

ln τ_aer = −β_aer R.

In light of the criticisms raised in Section C, one would be well advised to use (within the context of the LOWTRAN 3b code) the maritime model for continental limited visibility (4 km) environments as well as oceanic conditions.
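The six band-averaged coefficients above can be collected into a small lookup, giving a minimal sketch of the LOWTRAN 3b aerosol model; the dictionary layout and function name are my own:

```python
import math

# Band-averaged aerosol extinction coefficients from the formulas above,
# expressed as multiples of 1/V (V = visual range in km).
AEROSOL_COEFF = {
    ("maritime", "3-5"): 2.24, ("maritime", "8-12"): 0.85,
    ("rural", "3-5"): 0.42,    ("rural", "8-12"): 0.43,
    ("urban", "3-5"): 0.60,    ("urban", "8-12"): 0.41,
}

def aerosol_transmittance(model, band, visual_range_km, range_km):
    """tau_aer = exp(-beta_aer * R), with beta_aer = C / V."""
    beta_aer = AEROSOL_COEFF[(model, band)] / visual_range_km
    return math.exp(-beta_aer * range_km)
```

For the maritime model in the 8-12 μm band with V = 4 km over an 8 km path, this gives exp(−1.7), about 0.18.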

F. ATMOSPHERIC TRANSMISSION TABLES

1. Description

The atmospheric transmission tables in this chapter use spectrally weighted transmittances calculated from LOWTRAN 3b in two important atmospheric windows, 3-5 and 8-12 μm. The spectral contrast weighting produces the average transmittance of radiation from a blackbody source at 10°C in each of the two windows according to the formula

τ = [∫ τ_λ r_λ (∂W/∂T) dλ] / [∫ r_λ (∂W/∂T) dλ],

where r_λ is an instrument function equal to unity over the spectral range included in the integration and zero elsewhere, W is the blackbody function, T is temperature, and λ is the wavelength. Tables 3-3 through 3-10 include transmission data for horizontal line of sight ranges in factor-of-two increments from 0.5 to 32 km, dew point values from −20° to 40°C in 5-deg steps, and temperature values (8-12 μm window only) from −20° to 40°C.

Transmittance values in the tables are due only to molecular band and continuum absorption. They do not include aerosol scattering, which is computed separately.

2. Atmospheric Transmittance Determination

The tables require three inputs:

• Range
• Dew point
• Temperature


and also require a measured visual range in order to assess aerosol effects. When these are known, use the following steps to determine atmospheric transmittance.

1. Select the range.

2. Choose the table corresponding to range and spectral region. (In the 3-5 μm window there is only one table for all ranges.)

3. Find the dew point column corresponding to the measured dew point temperature.

4. (8-12 μm) Go down the column to the proper temperature.

5. (3-5 μm) Go down the column to the proper range. Read the transmittance and go to step 6. (This is the "infinite visibility" transmittance.)

6. Compute the aerosol transmittance from the formula

τ_aer = exp (−βR),

where R is the selected range, V is the measured visual range, and

β = 0.85/V (8-12 μm band),
β = 2.24/V (3-5 μm band).

7. Multiply the results of step 5 by the results of step 6 to complete the process.
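Steps 6 and 7 can be collected into a small helper; the tabulated "infinite visibility" molecular transmittance from the earlier steps is passed in by hand, and the maritime coefficients are assumed (the function name is illustrative):

```python
import math

def composite_transmittance(tau_molecular, band, range_km, visual_range_km):
    """Steps 6 and 7: multiply the tabulated "infinite visibility"
    (molecular) transmittance by the maritime aerosol transmittance
    exp(-beta*R), with beta = 0.85/V (8-12 um) or 2.24/V (3-5 um)."""
    beta = {"8-12": 0.85, "3-5": 2.24}[band] / visual_range_km
    return tau_molecular * math.exp(-beta * range_km)
```

With the values used in the Examples section (0.445 from Table 3-8, an 8-km path, and a 4-km visual range), this yields a composite transmittance of about 0.08.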

3. Conversions

Relative humidity from absolute humidity (or use Fig. 3-7):

RH = (AH × Ta)/(288.9 P),

where
RH is relative humidity
Ta is air temperature (K)
AH is absolute humidity (g/m³)
P is the saturation vapor pressure of water (use Table 3-11) (mm of Hg)

Dew point to absolute humidity:

AH = A exp (18.9766 − 14.9595 A − 2.4388 A²) g/m³,
A = 273.15/(273.15 + Tdp),

where Tdp is the dew point (°C).

Torr to absolute humidity:

AH = 1.05821 P_Torr/(1 + 0.00366 t) g/m³,

where t is the air temperature (°C).

Note that there is little difference between AH and P_Torr for ambient conditions.
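The two conversions above can be sketched directly from the formulas; the function names are illustrative:

```python
import math

def abs_humidity_from_dew_point(td_celsius):
    """AH = A * exp(18.9766 - 14.9595*A - 2.4388*A**2) g/m^3,
    with A = 273.15 / (273.15 + Td)."""
    a = 273.15 / (273.15 + td_celsius)
    return a * math.exp(18.9766 - 14.9595 * a - 2.4388 * a * a)

def relative_humidity(abs_humidity_g_m3, air_temp_k, sat_press_mmhg):
    """RH = AH * Ta / (288.9 * P), with P from Table 3-11 (mm of Hg)."""
    return abs_humidity_g_m3 * air_temp_k / (288.9 * sat_press_mmhg)
```

As a check, a dew point of 5°C gives roughly 6.8 g/m³; at an air temperature of 5°C (saturation, P = 6.543 mm Hg from Table 3-11) the relative humidity comes out very close to 1.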


Examples

For a local temperature of 10°C, a dew point temperature of 5°C, and a visual range estimated to be 4 km, find the atmospheric transmittance over an 8 km horizontal path.

1. 8-12 μm Window

From Table 3-8, the "infinite visibility" transmittance is 0.445. The aerosol transmittance corresponding to the maritime model value of 0.85/V for the 8-km path is 0.183. The composite atmospheric transmittance therefore is 0.081.

2. 3-5 μm Window

Temperature data is unnecessary in this window. From Table 3-3, the "infinite visibility" transmittance corresponding to a 5°C dew point temperature and an 8 km range is 0.260. From the maritime model, the aerosol transmittance corresponding to the extinction coefficient 2.24/V is 0.011. The composite atmospheric transmittance is therefore 0.003.

3. Range Interpolation, 8-12 μm Window

For the same temperature (10°C) and dew point temperature (5°C), and a visual range of 15 km, find the atmospheric transmittance over a 10-km path. The tables do not include values for 10 km, so an interpolation is required. This example uses a linear interpolation.

From Table 3-8 the "infinite visibility" transmittance corresponding to an 8-km path is 0.445. From Table 3-9 the "infinite visibility" transmittance corresponding to a 16-km path is 0.234. The linear interpolation formula therefore is

τ(10 km) = τ(8 km) + [(τ(16 km) − τ(8 km))/(16 km − 8 km)] (10 km − 8 km)
= 0.445 − (2)(0.211)/8 = 0.392.

The aerosol transmittance, using the maritime model value of 0.85/V, is 0.567 for a visual range of 15 km. The composite transmittance is therefore

0.567 × 0.392 = 0.222.

The linear interpolation is simple and produces reasonable accuracy. It can be applied to interpolations between dew point temperatures, temperatures in the 8-12 μm window, and ranges in the 3-5 μm window.
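The linear interpolation can be written in one line; the example values from the text above (0.445 at 8 km, 0.234 at 16 km) reproduce the interpolated 0.392:

```python
def interp_transmittance(r, r1, tau1, r2, tau2):
    """Linear interpolation in range between two tabulated values."""
    return tau1 + (tau2 - tau1) * (r - r1) / (r2 - r1)

# Table values from the text: 0.445 at 8 km, 0.234 at 16 km.
tau_10km = interp_transmittance(10.0, 8.0, 0.445, 16.0, 0.234)  # ~0.392
```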

4. 3-5 μm Transmittance Nomogram

Figure 3-8 is a transmittance nomogram prepared from the data in Table 3-3. It applies only to transmittance in the 3-5 μm band. The very high correlation between the logarithm of the negative logarithm of the transmittance and the logarithm of the range allows Figure 3-8 to predict transmittances in good agreement with those of Table 3-3.


To use Figure 3-8 to predict atmospheric transmittance in the 3-5 μm band, lay a straight edge on the correct dew point temperature and the range. Read the transmittance where the straight edge intersects the transmittance column.

The nomogram also provides dew point temperature given range and transmittance, or range given dew point temperature and transmittance. Simply lay the straight edge on the two known quantities and read the third quantity where the straight edge intersects the corresponding column.

Table 3-3 — 3-5 μm Weighted Molecular Transmission for Target Temperature of 10°C

Range (km)  Td (°C):  -20   -15   -10   -5    0     5     10    15    20    25    30    35    40
 0.5   .740  .725  .708  .689  .667  .644  .620  .595  .569  .542  .514  .486  .459
 1.0   .683  .663  .640  .615  .589  .562  .533  .504  .475  .445  .417  .388  .359
 2.0   .611  .585  .557  .529  .498  .467  .436  .405  .375  .345  .316  .288  .261
 4.0   .524  .494  .463  .430  .397  .365  .333  .304  .273  .246  .220  .194  .170
 8.0   .424  .391  .357  .324  .291  .260  .231  .204  .179  .155  .133  .112  .091
16.0   .314  .281  .248  .217  .189  .164  .140  .118  .098  .080  .063  .048  .035
32.0   .207  .177  .150  .126  .105  .086  .069  .054  .041  .029  .020  .013  .008

Table 3-4 — 8-12 μm Weighted Molecular Transmission for Target Temperature of 10°C and Path Length R = 0.5 km

T (°C)  Td (°C):  -20  -15  -10  -5  0  5  10  15  20  25  30  35  40

-20 .971 I I I

-15 .971 965 i•Io .972 966 958

II-5 972 .S , 959 q48

0 972 .967 .959 948 933i5 .973 967I 960 .9,0 933 .912

0 97.93 .968 .961 .951 936 918 5 88215 .973 .968 .961 .952 .938 887 .840

20 .973 '968 .962 .952 9 29 9 0 891 847 78025 .974 .969 962 .953 .940 .922 .895 85. "790 699

.974 .969 .q2 .954 .941 I .924 .898 .858 799 13 19435 .974 .969 .963 954 .942 12.926 901 . 63 807 '25 .612 .468

40 .974 .969 963 .955 .941 Q27 .903 1A68 15 i 37 628 i.489 .3 33j


Table 3-5 — 8-12 μm Weighted Molecular Transmission for Target Temperature of 10°C and Path Length R = 1 km

T (°C)  Td (°C):  -20  -15  -10  -5  0  5  10  15  20  25  30  35  40

.15 .953 .943I-10 953 944 .931-5 .954 945 i.90• .914

0 .954 .946 934 9!6 .8895 .955 .947 .915 .98 893 .854,0 .955 .947 9196 .920 .896 .859 .80315 .956 .948 .937 . 898 .864 .811 .732

20 .956 .948 .938 923 .901 .868 .818 .743 .636

25 957 949 938 .924 .903 .872 .825 754 .652 5 530 1 .957 .949 939 .925 .905 .875 .831 .764 .666 .535 .378

35 957 .950 .940 .926 907 R 78 I .836 .772 .680 .554 .400 I.24140 .958 ft.950 .940 .927 .908 -.8I .841 780 .692 [571 421 262 .127

Table 3-6 — 8-12 μm Weighted Molecular Transmission for Target Temperature of 10°C and Path Length R = 2 km

T (°C)  Td (°C):  -20  -15  -10  -5  0  5  10  15  20  25  30  35  40

-20 9 24 910

.10 .9261911 890.9i6 913 .892 861 j I "

0 927 914 .894 .865 .8205 928 915 ' 896 .868 826 .760

10 1929 916 898 .871 .831 .770 .677 115 .929 .91' I 899 .874 836 778 691 .568

20 930 9.8590 43420 j 9301.918: 901 877 .840 785 .703.58625 .930 .919 .902 .879 844 1792 .7,4 .602 .456 .29230 931 .919 .903 I1 -849 .7989 .724 -617 476 .314 63i

.932 .920 I 904 .883 803 .732 .631 .495 .335 .182 .071L 40 932 921 905 84_.53 108 741 664 L513 3561 .200 083 1 022


Table 3-7 — 8-12 μm Weighted Molecular Transmission for Target Temperature of 10°C and Path Length R = 4 km

T (°C)  Td (°C):  -20  -15  -10  -5  0  5  10  15  20  25  30  35  40

_20 .881

-15 .883 .850 .

-10 .884 .861 .826-5 .886 .864 .831 .779

0 .887 .866 .834 1 786 .7115 .888 .868 .838 .792 .722 .618

10 .890 .870 .841 797 .731 .633 .49715 .891 .871 .843 .802 .719 .646 .516 .356

20 .892 .873 .846 .806 .746 .658 .534 .378 .21525 .892 .8'14 .848 .810 .1753 I669 550 .399 .236 1 0230 .893 .•'5 .850 .813 .759 .679 .565 .418 .256 .117 .03535 .894 .876 .852 .816 .764 .688 .579 .437 .276 .133 .043 .00840 ,•89 .877 .853 .819 .769 .696 .592 .454 .295 .149 .051 .010 .001

Table 3-8 — 8-12 μm Weighted Molecular Transmission for Target Temperature of 10°C and Path Length R = 8 km

T (°C)  Td (°C):  -20  -15  -10  -5  0  5  10  15  20  25  30  35  40
-20 .819
-15 .822 .783
-10 .824 .788 .731

-5 827 .792 .738 .6560 .829 795 .744 .667 .554

5 .31 798 .750 .677 .570 .42510 .833 .801 .755 686 .584 .445 28115 .834 .804 .159 .693 .597 .463 .303 .15020 .836 806 763 .700 .608 .480 .323 .168 .05925 .837 .8In 767 707 .619 .496 .342 .186 .070 01530 .838 .810 770 .712 .628 .510 .361 .204 .082 .020 .002

35 .840 .812 .773 .717 .637 .523 .378 .222 094 .025 .0030 000

_ .841 141 776.1 .722 .645 535 .394 .239 107 030 .0040 .0000 .000


Table 3-9 — 8-12 μm Weighted Molecular Transmission for Target Temperature of 10°C and Path Length R = 16 km

T (°C)  Td (°C):  -20  -15  -10  -5  0  5  10  15  20  25  30  35  40

-20 .730-15 .735 .674

-10 .739 .681 .595

-5 .743 .688 .606 .487

0 .746 .694 .616 .502 .3545 .749 .699 .625 .S17 .373 214

10 .752 .704 .633 .530 .392 .234 .099

15 .755 .708 .640 .541 .408 .253 .114 .031

20 .757 .712 .646 .552 .424 .271 .129 .039 .006

30 761 .718 .658 .570 .451 .305 .159 .055 .010 10 .000 I35 .763 .721 .662 .578 .463 .320 .173 .064 .013 0010 .0000 .0004 .765 .724 .667 .586 .474 .335 .188 .074 .017 .020 .0000 .0000 .000

Table 3-10 — 8-12 μm Weighted Molecular Transmission for Target Temperature of 10°C and Path Length R = 32 km

T (°C)  Td (°C):  -20  -15  -10  -5  0  5  10  15  20  25  30  35  40

-20 .610 1 F-15 .617 .527-10 .624 .538 .417-5 .630 .548 .432 .286

0 .635 .557 .446 .304 .1575 .640 .565 .459 .322 .174 .062

I0 .644 .572 .470 .337 .190 .0i3 .015A5 .648 .579 .481 .352 .206 .0,4 .019 .002

20 ý65 .584 .490 .365 .222 .096 .024 .030 00025 .654 .590 .499 .378 .236 .108 .030 .0040 .0000 .00030j .657 .594 .506 .389 .250 .119 .036 .0050 000 .0000 0 000 OW

35 .660 599 .513 .400 .263 .131 .042 .0070 w0000 .0000 wOC .00040 1662 403 .520 .410 .276 .143 .049 009 0019 0000 .0000 .0000 .000


Table 3-11 — Vapor Pressure of Water.ᵃ Pressure of Aqueous Vapor over Water in mm of Hg.

Temp. (°C)  mm of Hg | Temp. (°C)  mm of Hg
-15   1.436 | 15  12.788
-14   1.560 | 16  13.634
-13   1.691 | 17  14.530
-12   1.834 | 18  15.477
-11   1.987 | 19  16.477
-10   2.149 | 20  17.535
 -9   2.326 | 21  18.650
 -8   2.514 | 22  19.827
 -7   2.715 | 23  21.068
 -6   2.931 | 24  22.377
 -5   3.163 | 25  23.756
 -4   3.410 | 26  25.209
 -3   3.673 | 27  26.739
 -2   3.956 | 28  28.349
 -1   4.258 | 29  30.043
  0   4.579 | 30  31.824
  1   4.926 | 31  33.695
  2   5.294 | 32  35.663
  3   5.685 | 33  37.729
  4   6.101 | 34  39.898
  5   6.543 | 35  42.175
  6   7.013 | 36  44.563
  7   7.513 | 37  47.067
  8   8.045 | 38  49.692
  9   8.609 | 39  52.442

ᵃHandbook of Chemistry and Physics, Chemical Rubber Publishing Company.

G. PROCEDURE FOR CALCULATING DEW POINT TEMPERATURE FROM TEMPERATURE AND RELATIVE HUMIDITY

Variables

T = temperature (K)
Td = dew point temperature (K)
RH = relative humidity in decimals (% RH/100)
e_s(T) = saturation vapor pressure of water over a water surface (dynes cm⁻²)
e = partial pressure of water vapor


[Figure: water vapor concentration versus temperature (-40° to 40°C; -40° to 110°F) for relative humidities from 10% to 100%.]

Fig. 3-7 — Water vapor concentration per kilometer path length as a function of temperature and relative humidity. Dew point corresponds to 100% relative humidity (top curve).

Constants

a = 6108
b = 17.27
c = 273.16
d = 35.86
ln a = 8.7173547

STEP 1

e_s(T) = a exp [b(T − c)/(T − d)]

STEP 2

e = RH · e_s(T)

STEP 3

a' = ln e − ln a


STEP 4

Td = (a'd − bc)/(a' − b)

Restrictions

1. Valid for atmospheric pressure near the standard value (i.e., 1000 mbar = 10⁶ dynes cm⁻²).

2. Use for temperatures above 0°C. Check accuracy before use at temperatures below 0°C.

Example

Given an ambient temperature of 70°F and a relative humidity of 60 percent, find the dew point temperature at standard pressure conditions.

First express temperature in kelvins:

K = °C + 273.16 = (5/9)(°F − 32) + 273.16.

Thus K = 294.

STEP 1

e_s(T) = 25 000

STEP 2

e = 0.6 e_s(T) = 15 000

STEP 3

a' = 0.9000

STEP 4

Td = 286 K = 13°C

(Values are stated to three-figure accuracy.)
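The four steps can be collected into one routine; the constants are those listed above, and the function name is illustrative:

```python
import math

A = 6108.0   # dynes cm^-2
B = 17.27
C = 273.16   # K
D = 35.86    # K

def dew_point_k(temp_k, rh_fraction):
    """Steps 1-4 of the procedure above."""
    e_sat = A * math.exp(B * (temp_k - C) / (temp_k - D))  # step 1
    e = rh_fraction * e_sat                                # step 2
    a_prime = math.log(e) - math.log(A)                    # step 3
    return (a_prime * D - B * C) / (a_prime - B)           # step 4
```

Calling dew_point_k(294.0, 0.60) reproduces the worked example, about 286 K (13°C).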


H. GRAPHICAL METHOD FOR RANGE INTERPOLATION

Figure 3-9 is a specially prepared graph on which the abscissas are the logarithms of the negative logarithms of the transmittances, ln(−ln τ), and the ordinates are the logarithms of the ranges, ln R. This chart, together with two transmittance values in either the 3-5 μm window or the 8-12 μm window, provides transmittance values for other ranges under the same dew point temperatures. To use this method, calculate and plot on the chart the transmittances for two range points for a given set of conditions, then lay a straight edge along the two range-transmittance pairs. Transmittances at other ranges are read off at the intersections of the straight edge with the corresponding ranges.

This chart is possible because the transmittances in Tables 3-3 to 3-10 follow closely (correlation coefficient > 0.99) an exponential power law of the form

τ = exp (−aR^b),

where a and b are constants.

Taking the logarithm twice produces the relation

ln (−ln τ) = ln a + b ln R,

which shows a linear relation with slope b and intercept ln a.
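Assuming the power-law form above, two tabulated points determine a and b, giving a numerical version of the graphical method (the function name is illustrative):

```python
import math

def powerlaw_interp(r, r1, tau1, r2, tau2):
    """Fit tau = exp(-a * R**b) through two points, i.e. a straight
    line of ln(-ln tau) versus ln R, and evaluate it at range r."""
    y1 = math.log(-math.log(tau1))
    y2 = math.log(-math.log(tau2))
    b = (y2 - y1) / (math.log(r2) - math.log(r1))
    ln_a = y1 - b * math.log(r1)
    return math.exp(-math.exp(ln_a + b * math.log(r)))
```

With the 8-12 μm values 0.445 at 8 km and 0.234 at 16 km, the fit returns the endpoints exactly and gives an intermediate 10-km value close to the linear interpolation.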

1. SUMMARY

The atmosphere strongly influences IR system performance and therefore must be considered in any FLIR design. The major absorbing gases in the earth's atmosphere, and the ones that primarily determine the atmospheric windows, are water and carbon dioxide. Other gases include nitrous oxide, methane, ozone, nitrogen, and carbon monoxide. Water content varies widely depending on local weather conditions. Ozone concentration peaks at an altitude of about 24 km and normally does not substantially affect ground level operations. The other gases have fairly constant concentrations in the atmosphere.

Aerosols, many of which are water droplets, present many difficulties to FLIR performance analysis. They are extremely variable in particle size distribution and lapse rates. They are difficult to characterize by conveniently measurable quantities. The LOWTRAN 3b computer code offers three aerosol models: maritime, rural, and urban.

Extinction is often more useful than the transmittance itself for providing insights into FLIR performance. Thus the "extinction coefficient," which is the logarithm of the reciprocal of transmittance divided by the range, can often be expressed as the sum of extinction coefficients due to atmospheric attenuation components. The total extinction in these cases is the sum of the constituent extinctions. This concept is most effective with the water continuum and aerosol extinction.

The LOWTRAN 3b computer model for calculating atmospheric transmittance, developed at the Air Force Geophysics Laboratory, Cambridge, Mass., provides the data in this chapter


[Nomogram: aligned scales for dew point temperature, transmittance, and range.]

Fig. 3-8 — Nomogram for determining atmospheric transmittance in the 3-5 μm window


[Graph: transmittance, on a log(−log τ) scale, versus range (log scale) from 0.5 to 32 km.]

Fig. 3-9 — Graph for interpolating range and/or transmittance in either spectral region

for determining transmittance. LOWTRAN 3b computes atmospheric transmittance by multiplying component transmittances of each of the constituent gases and one of the aerosol models. LOWTRAN 3b contains several model atmospheres and also provides for inputting meteorological parameters.

Tables 3-3 through 3-10 list contrast transmittances in the two main atmospheric windows (3-5 and 8-12 μm). These tables and the LOWTRAN 3b aerosol models provide the means for calculating atmospheric transmittance.


REFERENCES

3-1. J.E.A. Selby, E.P. Shettle, and R.A. McClatchey, "Atmospheric Transmittance From 0.25 to 28.5 μm: Supplement LOWTRAN 3B (1976)," Air Force Geophysics Laboratory, November 1976.

3-2. Handbook of Military Infrared Technology, Office of Naval Research, Department of the Navy.

3-3 Handbook of Geophysics-Revised Edition, Macmillan Company, New York (1960).

3-4. F. Stauffer and J. Strong, Appl. Optics, 1 (1962).

3-5. T.L. Altshuler, Infrared Transmission and Background Radiation by Clear Atmospheres, Document No. 61SD199, Dec. 1961, General Electric Company, Missile and Space Vehicle Department, Philadelphia, Pa.

3-6. J.N. Howard and J.S. Garing, The Transmission of the Atmosphere in the Infrared, GRD, AFCRL, Cambridge, Mass. (1962).

3-7 A.J. Arnulf, J. Bricard, E. Cure and C. Veret, "Transmissions by Haze and Fog in theSpectral Region 0.35 to 10 Microns," J. Opt. Soc. Am. 47, 491 (1957).

3-8 S.S. Penner, Quantitative Molecular Spectroscopy and Gas Emissivities, Addison-Wesley Pub-lishing Co., Inc., Reading, Mass. (1959).

3-9. G.N. Plass and D.I. Fivel, Astrophys. J., 117, 225 (1953).

3-10. W.M. Elsasser, Heat Transfer by Infrared Radiation in the Atmosphere, Harvard Meteorological Studies No. 6, Harvard University Press, Cambridge, Mass. (1942).

3-11. G.N. Plass, J. Opt. Soc. Am., 48, 690 (1958).

3-12. G.N. Plass, J. Opt. Soc. Am., 50, 868 (1960).

3-13. R. Ladenburg and F. Reiche, Ann. Physik, 42, 181 (1913).

3-14. G.N. Plass and D. 1. Fivel, Astrophys. J., 117, 225 (1953).

3-15. W.M. Elsasser, Phys. Rev., 54, 126 (1938).

3-16. H. Mayer, Methods of Opacity Calculations, Los Alamos, LA-647 (1947).

3-17. P.J. Wyatt, V.R. Stull, and G.N. Plass, J. Opt. Soc. Am. (1962).

3-18. P.J. Wyatt, V.R. Stull, and G.N. Plass, Appl. Optics, 3 (1964); Aeronutronic Report U-1717, Aeronutronic Systems, Inc., Newport Beach, Calif. (1962).


3-19. V.R. Stull, P.J. Wyatt, and G.N. Plass, Appl. Optics, 3 (1964); Aeronutronic Report U-1718, Aeronutronic Systems, Inc., Newport Beach, Calif. (1962).

3-20. W.E.K. Middleton, Vision Through the Atmosphere, University of Toronto Press, Toronto, Canada (1952), Section 9.3.1.1.

3-21. H.C. Van de Hulst, Light Scattering by Small Particles, Wiley, New York (1957).

3-22. P. Kruse, L. McGlauchlin, and R. McQuistan, Elements of Infrared Technology, Wiley, New York (1962).

3-23 H.W. Yates and J.H. Taylor, Infrared Transmission of the Atmosphere, NRL Report 5453,U.S. Naval Research Laboratory, Wash., D.C. (1960).

3-24 J.A. Curcio, G.L. Knestrick, and T.H. Cosden, Atmospheric Scattering in the Visible andInfrared, NRL Report 5567, U.S. Naval Research Laboratory, Wash., D.C. (1961) ASTIAAD 250945.

3-25. L.P. Granath and E.O. Hulburt, "The Absorption of Light by Fog," Phys. Rev., 34, 140 (1929).

3-26. M. Migeotte, L. Neven, and J. Swensson, The Solar Spectrum from 2.8 to 23.7 Microns, Part II: Measures and Identifications, University of Liège, Contract AF 61(514)-432, Phase A, Part II, Geophysics Research Directorate, AFCRC, Cambridge, Mass. ASTIA AD 210044.

3-27. M. Migeotte, L. Neven, and J. Swensson, An Atlas of Nitrous Oxide, Methane and Ozone Infrared Absorption Bands, Part I: The Photometric Records, University of Liège, Contract AF 61(514)-432, Phase B, Part I, Geophysics Research Directorate, AFCRC, Cambridge, Mass. ASTIA AD 210045.

3-28. M. Migeotte, L. Neven, and J. Swensson, An Atlas of Nitrous Oxide, Methane and Ozone Infrared Absorption Bands, Part II: Measures and Identifications, University of Liège, Contract AF 61(514)-432, Phase B, Part II, Geophysics Research Directorate, AFCRC, Cambridge, Mass. ASTIA AD 210046.

3-29. J.N. Howard and J.S. Garing, Infrared Atmospheric Transmission: Some Source Papers on the Solar Spectrum from 3 to 15 Microns, Air Force Surveys in Geophysics No. 142, AFCRL Report No. 1098, Dec. 1961, Geophysics Research Directorate, AFCRL, Cambridge, Mass.

3-30. J.N. Howard, "Atmospheric Transmission in the 3 to 5 Micron Region," Proc. IRIS, 2, 59-75 (1957).

3-31. J.N. Howard, "Atmospheric Transmission in the 8 to 13 Micron Region," Proc. of the Symposium on Optical Radiation from Military Airborne Targets, Final Report No. AFCRL-TR-58-146, AFCRL, Cambridge, Mass., Contract No. AF 19(604)-2451, Haller, Raymond and Brown, Inc., State College, Pa. ASTIA AD 152411.


3-32. D.E. Burch and D. Williams, Infrared Absorption by Minor Atmospheric Constituents, The Ohio State University Research Foundation, Scientific Report No. 1, Contract No. AF 19(604)-2633, Geophysics Research Directorate, AFCRL Report No. TN-60-674, AFCRL, Cambridge, Mass. (1960). ASTIA AD 246921.

3-33. D.E. Burch, D. Gryvnak, and D. Williams, Infrared Absorption by Carbon Dioxide, The Ohio State University Research Foundation, Scientific Report No. 11, Contract No. AF 19(604)-2632, Geophysics Research Directorate, AFCRL Report No. 255, AFCRL, Cambridge, Mass. (1960). ASTIA AD 253435.

3-34. D.E. Burch, E.B. Singleton, W.L. France, and D. Williams, Infrared Absorption by Minor Atmospheric Constituents, The Ohio State University Research Foundation, Final Report, Contract No. AF 19(604)-2633, Geophysics Research Directorate, AFCRL Report No. 412, AFCRL, Cambridge, Mass. (1960). ASTIA AD 256952.

3-35. H.W. Yates and J.H. Taylor, Infrared Transmission of the Atmosphere, NRL Report 5453, U.S. Naval Research Laboratory, Wash., D.C. (1960). ASTIA AD 240188.

3-36. T. Elder and J. Strong, "The Infrared Transmission of the Atmospheric Windows," Journal of the Franklin Institute, 255, No. 3, 189, Phila., Pa. (1953).

3-37. T.L. Altshuler, Infrared Transmission and Background Radiation by Clear Atmospheres, General Electric Company, Missile and Space Vehicle Department, Valley Forge, Pa., Document No. 61SD199 (1961).

3-38. G.N. Plass, App. Optics 2, 515 (1963).

3-39. R. O'B. Carpenter, J.A. Wight, A. Quesada, and R.E. Swing, Predicting Infrared Molecular Attenuation for Long Slant Paths in the Upper Atmosphere, AFCRC Report No. TN-58-253, AFCRC, Cambridge, Mass. (1957).

3-40. A.S. Zachor, Near Infrared Transmission Over Atmospheric Slant Paths, Report R-328, 2, Massachusetts Institute of Technology, Cambridge, Mass., Contract AF 33(616)-6046 (1961).

3-41. A.E.S. Green and M. Griggs, Appl. Optics, 2, 561 (1963).

3-42. G.N. Plass, Transmittance of Carbon Dioxide and Water Vapor over Stratospheric Slant Paths, Aeronutronic Report, Aeronutronic Systems, Inc., Newport Beach, Calif. (1962); Appl. Optics, 3 (1964).

3-43. L.M. Biberman, "Effect of Weather at Hannover, Federal Republic of Germany, on Performance of Electrooptical Imaging Systems, Part I: Theory, Methodology, and Data Base," IDA Paper P-1123, August 1976.

3-44. R.A. McClatchey, R.W. Fenn, J.E.A. Selby, F.E. Volz, and J.S. Garing, Optical Properties of the Atmosphere, AFCRL-72-0497.


3-45. R.E. Roberts, L.M. Biberman, and J.E.A. Selby, "Infrared Continuum Absorption by Atmospheric Water Vapor in the 8-12 μm Window," IDA Paper P-1184, April 1976.

3-46. D.E. Burch, D.A. Gryvnak, and J.D. Pembrook, Philco-Ford Corp. Aeronutronic Report U-4897, ASTIA AD 882876 (1971).

3-47. K.O. White, W.R. Watkins, T.W. Tuer, F.G. Smith, and R.E. Meredith, J. Opt. Soc. Amer., 65, 1201 (1975).

3-48. J. Dowling (private communication, Naval Research Laboratory, Washington, D.C.).

3-49. R.E. Roberts, "Atmospheric Transmission Modeling: Proposed Aerosol Methodology with Application to the Grafenwöhr Atmospheric Optics Data Base," IDA Paper P-1225, December 1976.

3-50. L.M. Biberman, R.E. Roberts, and L.N. Seekamp, A Comparison of Electrooptical Technologies for Target Acquisition and Guidance, Part 2: Analysis of the Grafenwöhr Atmospheric Transmission Data, IDA Paper P-1218 (August 1976).

3-51. C. Tomasi and F. Tampieri, "Size Distribution Models of Fog and Cloud Droplets in Terms of the Modified Gamma Functions," Tellus XXVIII, 4, 333 (1976).

48 ,4

Page 53: a073763 the Fundamentals of Theirmal Imaging Systems

Chapter IV

VIDEO, DISPLAY, AND PERCEIVED-IMAGE SIGNAL-TO-NOISE RATIOS

F.A. Rosell

A. INTRODUCTION

This chapter is devoted to the discussion of the basic concepts and the derivations of the fundamental mathematical relations used to describe and analyze thermal imaging systems. The models devised apply when the input test patterns are images of standardized and quantitatively describable objects such as rectangles or periodic bar patterns.

The first concept to be discussed is that of a display signal-to-noise ratio, which is defined to be the SNR of an image appearing on the display of an electrooptical sensor. This image SNR takes into account the ability of an observer to integrate spatially and temporally but does not include other observer parameters such as the eye's modulation transfer function, retinal fluctuation noise, or dynamic range limitations. If these latter observer parameters were included, the image SNR would be that at the output of the observer's retina and would be designated the perceived SNR.

Following the derivation of the display SNR, various thermal imaging system configurations are discussed. Next the electrical signal-to-noise ratio which appears in a detector channel is quantitatively described along with basic concepts such as detector detectivity, reference channel bandwidth, detector cold shielding, and scan parameters such as interlace, overscan, and efficiency.

The noise-equivalent temperature difference or NEΔT is obtained by equating the channel SNR to unity and solving for the incremental scene temperature difference which produces this result. The NEΔT has the advantage of being electrically measurable and is an indicator of sensitivity for systems which are otherwise identical. However, NEΔT is not a fundamental concept when an observer is the user of the displayed imagery, since a system with a larger (inferior) NEΔT may yet be more sensitive and produce higher resolution.

The channel SNR is converted to the video SNR, which is used in turn to calculate the display SNR. In the initial analysis, the aperture responses of the various sensor elements such as the lens, detector, multiplexer, electrical circuits, and the display are ignored. As is well known, these effects are most easily analyzed in the spatial frequency domain by using Fourier analysis and the concepts of optical and modulation transfer functions. A number of typical modulation transfer functions for various sensor components are quantitatively described in this chapter.


Finally, a numerical example of a display SNR calculation is provided. In addition to the derivations in this chapter, a number of appendixes are provided for those readers who desire more detail. In Appendix D, the synchronous integrator concept of modeling is described. In Appendix F, a number of parameters and fundamental relationships are described, including the relation between video and display SNR, the detector channel SNR, detector detectivity, and responsivity. In Appendix G, the effects of image sampling are discussed.

The contents of this chapter will be used to derive the minimum resolvable temperature difference (MRT) for a thermal imaging system in Chapter V, which will also include a discussion of observer thresholds.

B. BASIC SIGNAL-TO-NOISE RATIO CONSIDERATION

Every man-made detector of radiant energy, whether radio, audio, or television, generates noise in converting the incident radiant energy to an electronic signal. It is easily shown that the ability to discern a signal is a function of noise generated in the signal conversion process or of noise subsequently added in the system. One common example of visually observable photo-to-electron conversion or system-generated noise is the "snow" seen on a home television receiver when tuned to a fringe-area broadcast station. On the other hand, the existence of noises which are inherent in the detection process of the human eye or ear has been vigorously disputed by many workers, particularly in the field of psychophysical experimentation, analysis, and interpretation. The denial of a noisy detection process for the human observer has led to the reporting of psychophysical thresholds as minimum detectable contrast rather than threshold signal-to-noise ratio, as is customary in engineering practice.

Engineers, for the past four decades, have assumed that the eye's and ear's detection process is noisy and have constructed quantitative models based on this premise. Foremost of these engineers is Dr. Otto Schade, Sr., whose contributions are only now becoming appreciated. Even among psychophysicists, the view of a noiseless detection by man's senses appears to be losing ground (Ref. 4-1).

For the bulk of the analysis presented herein, the existence of noise generated in the eye's photoprocess is academic; when the primary sensor is an electro-optical device, a photoconversion noise is generated which is electrically measurable. When the device is highly sensitive, these photoconversion noises are readily perceptible on the sensor's display at low input signal levels. The existence of these noises can neither be disputed nor ignored.

That noise could be limiting to visual perception of displayed imagery was suggested by Barnes and Czerny in 1932 (Ref. 4-2). In 1943, de Vries proposed that an image, to be visually detectable, must have a signal-to-noise ratio exceeding some threshold value. The noise level was assumed to be signal level dependent as postulated by fluctuation theory (Ref. 4-3).

It is readily appreciated that noise, whatever its source, will make an image more difficult to perceive. Clearly, the signal must equal or exceed the noise, speaking of noise in the conventional engineering sense. As will be seen, a SNR can be defined for an electro-optically generated image as viewed by an observer. Through psychophysical experimentation, the SNR required by an observer to detect the image at various levels of probability can be determined.


NRL REPORT 8311

By matching the image SNR obtainable from the sensor to that required by the observer, the probability that the observer will detect the image can be surmised. This process is straightforward enough for images of simple geometry, but the image SNR is difficult to define for complex images of irregular geometry and contrast.

C. THE NOISE LIMITED CASE - RECTANGULAR IMAGES

Consider the schematic of an electro-optical imaging process as shown in Fig. 4-1. In this figure, a rectangular image of area a, amid a uniform background, has been projected onto a photon transducer by a lens. The photon transducer converts the photon image on it to a photoelectron image with 1:1 spatial correspondence. The incident photon image may be considered noise-free since the existence of a noisy "coherent" electron emission from a photosurface has yet to be demonstrated. This assumption is made to explain why "photon noise" is not observed in the phototransduced current (Ref. 4-4). The lens forming the image is characterized by an aperture,* or spatial frequency response, whose effect on image signals is analogous

Fig. 4-1 - Schematic of the electro-optical imaging process

to the effect of a filter acting on electrical signals, i.e., the image is blurred. Similarly, other elements of the electro-optical sensor, including the observer's eye, have apertures which affect the image fidelity and SNR. These aperture effects will be assumed negligible in the initial analysis but will be discussed in Section IV-D. We note that the smaller the image size, the more important aperture effects are.

The lens of the electro-optical sensor does not directly generate noise, but the sensor's photon-to-electron conversion process is noisy, and therefore an image SNR is established at the output of the photon transducer which will inherently limit the detectability of scene objects. After photoconversion, the image is passed to the signal processor whose main purpose is to amplify the image signals and noises alike and, perhaps, to magnify the image. Next, the electron image is converted to a visual image by a phosphor. The displayed image can be directly viewed or magnified before viewing by using a lens. If we suppose that no noise was added to the image in the signal processor or in the electron-to-photon conversion, then the image SNR

*The words aperture or aperture response as used in this document do not refer to the lens diameter (although relative aperture is often used in connection with the lens diameter). Instead, these words refer to the spatial impulse response or the effective size of the impulse response of various system components. In the case of the lens, its aperture is sometimes measured in terms of an effective blur circle size. The dimensions of a single infrared detector are exactly the dimensions of its aperture. The Fourier transform of a lens's aperture response is its optical transfer function. In electrical engineering, the Fourier transform of the impulse response of a component is its complex steady-state frequency response, and the modulus of this frequency response is the modulation transfer function.


on the display will be identical to that at the output of the primary photon transducer (the sensor's photosurface). Furthermore, if the gains and magnifications of the signal processor and display are sufficient so that the observer's eye is limited by neither light level nor image size, then the image SNR at the eye's retina will be identical to that on the display if due account is taken of the eye-brain combination's ability to integrate in space and time. These conditions can be achieved in practice for a wide range of image sizes, signal amplifications, apertures, display luminances, image magnifications, and observer-to-display viewing distances. In general, the image SNR is more often limited by the system than by the observer's eye, although the case where the eye degrades image SNR is important and cannot be ignored.

Suppose that a perfectly sharp, well-defined rectangle is projected onto the sensor's input photosurface. Let the irradiance of the rectangle be E_o W/m² and let the irradiance of the rectangle's background be E_b. If the photon transducer is linear, then the image irradiances E_o and E_b will result in the average photoelectron rates ṅ′_o and ṅ′_b photoelectrons/m²-s.* In viewing the display, the eye seeks areas in which the photon density is higher than in others. The eye is aided in this process by being able to integrate over such areas in space and time. The eye (or eye-brain combination) has been termed a near-perfect synchronous integrator since it can completely integrate an image in space over a wide range of image areas. The incremental signal level is defined as

Δn = n_o − n_b = (ṅ′_o − ṅ′_b)aT_e,   (4-1)

where n_o is the average number of photoelectrons generated by the input photosurface in the image area a during the eye's integration time T_e, while n_b is the average number generated in a similar, equal-sized comparison area of background in the same time. The noise associated with the inherent fluctuations in the photoprocess is assumed to follow the Poisson probability distribution, which states that the fluctuations have a standard deviation or rms noise equal to the square root of the total number of photoconverted electrons integrated over the image area during the integration time. For the case of an object imaged against background, the total root mean square noise is assumed to be the average of the object and background photoelectrons summed in quadrature, i.e.,

rms noise = [(n_o + n_b)/2]^1/2
          = [(ṅ′_o + ṅ′_b)aT_e/2]^1/2,   (4-2)

and the image SNR_I is equal to

SNR_I = Δn/[(n_o + n_b)/2]^1/2
      = Δṅ′(aT_e)^1/2/[ṅ′_av]^1/2,   (4-3)

where ṅ′_av = (ṅ′_o + ṅ′_b)/2. The above SNR is designated SNR_I to indicate that it is the SNR of the electron image at the output of the input photosurface after photoconversion of the scene photon image. Suppose the gain of the signal processor is G and that the conversion efficiency of the phosphor is K_p lumens per electron. Then the SNR_D at the output of the display is

*The dot and prime used in connection with ṅ′ denote that the quantity is a derivative with respect to space and time.


SNR_D = GK_pΔṅ′(aT_e)^1/2/[G²K_p²ṅ′_av]^1/2
      = Δṅ′(aT_e)^1/2/[ṅ′_av]^1/2.   (4-4)

That is, the display SNR is independent of the gain of the signal processor and the phosphor electron-to-photon conversion efficiency and is equal to the image SNR at the output of the input photosurface. However, this is true only if the image is not degraded in the signal processing by either image deformation or noise addition in the reimaging process.
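The chain from Eq. (4-1) through Eq. (4-4) can be sketched numerically; the photoelectron rates below are invented purely for illustration:

```python
import math

def display_snr(n_dot_o, n_dot_b, area, t_e):
    """SNR_D of a rectangular image per Eqs. (4-1) through (4-4).

    n_dot_o, n_dot_b: object/background photoelectron rates (electrons/m^2-s);
    area: image area (m^2); t_e: eye integration time (s).
    """
    delta_n = (n_dot_o - n_dot_b) * area * t_e          # Eq. (4-1)
    n_av = 0.5 * (n_dot_o + n_dot_b)
    rms_noise = math.sqrt(n_av * area * t_e)            # Eq. (4-2)
    return delta_n / rms_noise                          # Eqs. (4-3), (4-4)

# Invented rates: processor gain and phosphor efficiency cancel, so this is
# also the photosurface image SNR of Eq. (4-3).
print(round(display_snr(1.2e12, 1.0e12, 1.0e-6, 0.1), 1))  # ~60.3
```

Note that doubling the image area or the integration time raises the SNR by only √2, as the square-root dependence requires.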

In the above, we have defined an image SNR which is proportional to the square root of the product of image area and integration time. This SNR could be measured on a display using a photometer whose spot size on the display perfectly matches the displayed image area a. The integration time is that of the photometer or that of the sensor, whichever is larger. The SNR_D, whether calculated or measured, is that obtainable from the sensor. An image may or may not be detectable depending on its SNR_D. In the particular case where the final detector is a human observer, it is assumed that the eye will integrate over the area a (within limits to be defined) and that the integration time will be 0.1 s for display luminances in the 0.2 to 10 ft-lambert range. Actually, the eye's integration time is variable from about 0.05 s at very high light levels to about 0.2 s at very low light levels (Ref. 4-5). Lavin (Ref. 4-6) estimated 0.1 s using photographs of televised images. In practice, the actual value is relatively unimportant because it is included in the measured threshold signal-to-noise ratio. However it is important, when making calculations, to use the value of integration time that was assumed in making the threshold measurement.

We have inferred that the rectangular image will be detectable if its SNR_D exceeds some threshold value which we will designate as SNR_DT. Rose (Ref. 4-7) performed experiments using noisy photographs and concluded that the required SNR_DT ranged between 3 and 7 and preferred the value 5. For dynamic images as exemplified by TV, a value of SNR_DT of 2.8 appears appropriate based on more recent psychophysical experimentation and an assumed value of 0.1 s for the eye's integration time. By dynamic images, we infer the type of image displayed on a real-time TV or FLIR display wherein three images are typically presented to the observer during the 0.1-s integration period. The observer coherently sums the signal while incoherently summing the noise when the images are stationary in space. Thus the image SNR improves by about √3. By contrast, a photographic image represents a single sample in time and no use is made of the observer's temporal integration capability.
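The frame-summing argument above can be sketched as follows (illustrative only):

```python
import math

def snr_after_frames(snr_single_frame, n_frames):
    # Signal sums coherently (x n) while stationary noise sums in quadrature
    # (x sqrt(n)), so the perceived SNR grows as sqrt(n_frames).
    return snr_single_frame * math.sqrt(n_frames)

print(snr_after_frames(1.0, 3))  # ~1.73: the sqrt(3) improvement cited above
```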

In television practice, the signal current i can be related to the rate of photoelectron generation ṅ′ through the equation

i = ṅ′eA,   (4-5)

where e is the charge of an electron and A is the sensor's total effective photosensitive area. When Eq. (4-5) is substituted into Eq. (4-4), one obtains

SNR_D = [aT_e/eA]^1/2 Δi/[i_av]^1/2,   (4-6)

where Δi corresponds to Δṅ′ and i_av to ṅ′_av. We arbitrarily multiply the numerator and denominator of the above equation by (2Δf_v)^1/2, where Δf_v is the video bandwidth, giving

SNR_D = [2Δf_vT_e(a/A)]^1/2 Δi/[2eΔf_v i_av]^1/2.   (4-7)


The term to the right will be recognized by those who are familiar with video circuitry as the video signal-to-noise ratio SNR_V, and thus

SNR_D = [2Δf_vT_e(a/A)]^1/2 SNR_V0.   (4-8)

For future convenience, the subscript zero has been added to SNR_V to indicate that it is measured using an image of spatial extent which is large relative to the overall sensor's aperture (or effective "blur" dimensions). The case where the image is small will be treated in Section IV-D.

As shown in Appendix F(A), Eq. (4-8) also applies to thermal imaging or FLIR systems when A is interpreted as the total image plane area regardless of the area of the detectors within that area. This equation will serve as the starting point for the SNR_D calculation.

Again, we observe that the image signal is proportional to the square root of the product of the image area and the observer's integration time because of the observer's ability to integrate in space and time. The display SNR is usually larger than the video SNR, as can be seen by inserting typical numbers in Eq. (4-8): suppose the ratio of image area to total image plane area (a/A) = 1/1000 and let Δf_v = 4 × 10⁶ Hz and T_e = 0.1 s; then 2Δf_vT_e(a/A) = 800 and SNR_D ≈ 28 SNR_V0. To gain further insight into this relationship, consider Fig. 4-2. The photoconverted electron image in Fig. 4-2(a) is a rectangle of size Δx·Δy and is of incremental current amplitude Δi. The SNR_V is measured by use of a line selector oscilloscope. The assumed image is shown as subtending three scan lines in the y direction in Fig. 4-2(b), but only a single line is used in the SNR_V measurement as shown in Fig. 4-2(c). The incremental current Δi represents the signal, while the square root of the average sum of the mean square noise currents in the image blacks and whites represents the rms noise. To contrast the key difference between SNR_V and SNR_D, observe that the SNR_V does not include the image dimensions or frame-to-frame signal integration. The observer, on the other hand, spatially integrates each line in the horizontal, from line to line in the vertical, and over a number of frame times (usually about three in conventional TV).
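The factor relating SNR_D to SNR_V0 can be checked numerically; a minimal sketch using the example numbers above (the helper name is ours):

```python
import math

def snr_d_from_video(snr_v0, delta_f_v, t_e, a_over_A):
    # Eq. (4-8): SNR_D = [2 * delta_f_v * T_e * (a/A)]**0.5 * SNR_V0
    return math.sqrt(2.0 * delta_f_v * t_e * a_over_A) * snr_v0

# Example values from the text: a/A = 1/1000, delta_f_v = 4e6 Hz, T_e = 0.1 s,
# so 2 * delta_f_v * T_e * (a/A) = 800 and the multiplier is sqrt(800).
factor = snr_d_from_video(1.0, 4.0e6, 0.1, 1.0e-3)
print(round(factor, 1))  # 28.3
```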

Fig. 4-2 - (a) Photoconverted electron image of amplitude Δi; (b) image subtending three scan lines; (c) single scan line selected for the SNR_V measurement


Before proceeding, we wish to make a strong distinction between the terms perceived signal-to-noise ratio, SNR_P, and the display signal-to-noise ratio, SNR_D. When the observer's ability to discern or resolve an image is primarily limited by sensor apertures and sensor-generated noises, the SNR_P equals the SNR_D. However, as will be discussed, the observer's ability may be primarily limited by his own eye apertures and noise, in which case the SNR_P can be far different from the SNR_D. In the current state of the art, most models do not include eye parameters at all. In certain cases, the MTF of the eye is taken into account, but efforts to include retinal fluctuation noise are rarely made. If eye parameters are ignored in a model, except for the eye's ability to spatially and temporally integrate, it is recommended that the term SNR_D be used. If all eye parameters are included, to the extent that the state of the art permits, use SNR_P. If a partial inclusion of eye parameters is made, it is tentatively proposed to use the term SNR_DP.

D. APPLICATION OF THE NOISE-LIMITED CASE TO VARIOUS FLIR CONFIGURATIONS

The most commonly employed FLIR configuration is schematically shown in Fig. 4-3. The photon image of the scene is mechanically scanned over the detector array in the horizontal direction by use of rotating mirrors or prisms. The detectors are usually in a line array and are often spaced by some distance for ease of manufacture. An interlace feature to reduce flicker in the display is often provided. Typically, one field is scanned in 1/60 s and, prior to the second field, the optical line-of-sight is depressed by the angular subtense of 1/2 the detector's center-to-center or pitch dimension to scan the alternate field.

Fig. 4-3 - Schematic of the basic FLIR configuration

The detectors may number several hundred, with each detector having its own preamplifier whose purpose is to build up the detected signal prior to multiplexing. The purpose of the multiplexer is to provide a single sequential signal which can be displayed on a conventional TV display using a single electron scanning beam. The multiplexer can be an electronic sampler as shown in Fig. 4-4. The output of each detector channel is sampled one or more times in the time that it takes the scanned image to move one detector width in the horizontal. The sampled image is amplified and reconstructed by the display. Note that the raster lines are vertical when this configuration is used. A TV camera can also be used as a multiplexer. In this case, the amplified output of each detector is connected to a light-emitting diode or LED. The LED


Fig. 4-4 - Schematic of the electronic sampling process

array, whose light outputs are proportional to the detector outputs, is mechanically scanned in synchronism with the detector array.* The LED array is viewed by the TV camera, which converts the image generated to a video signal.

In the case of an electronic multiplexer, the output signal represents sampled data in both directions by virtue of the discrete detector dimensions in the vertical and the electronic sampling in the horizontal. In the case where a TV camera is used as a multiplexer, the TV scan lines are in the same direction as the detector scan so that the signals are essentially analog in the horizontal. However, the signals are doubly sampled in the vertical: first by the discrete nature of the detectors and second by the TV camera raster. In practice, the number of TV scan lines is made larger than the number of detectors in order to avoid aliasing effects. The sampling process, whether caused by the discrete nature of the detectors or by the multiplexing process, will affect both signals and noises. In the initial analysis the effects of sampling will be assumed negligible, but sampling effects will be discussed in Appendix G.

A number of different scan configurations such as those shown in Figs. 4-5 and 4-6 can be used. The case (1) scan configurations of Fig. 4-5 are generally referred to as serial scan, while the case (2) configurations are referred to as parallel scan. In case 1(a) a single detector is used to sequentially scan the image without interlace and without overscan (to be defined). In case 2(a), a row of contiguous detectors is used to scan the image plane and has the merits of increasing system sensitivity (by the square root of the number of detectors) and of reducing detector channel bandwidth (by the number of detectors directly). In cases 1(b) and 2(b) an interlace feature is used to scan the image plane for the purpose of reducing display flicker. Also, case 2(b) does not require contiguous detectors, which are sometimes difficult to manufacture.

In cases 1(c) and 2(c) the scan lines are only 1/2 a detector height apart. This mode of operation is called overscan and, as will be discussed further, overscan results in an increase in the system resolution which is theoretically possible for a detector of a given size because the

*See Section IV-E for a more detailed description of a FLIR with a TV camera multiplexer.

Fig. 4-5 - Various scan configurations employing serial and parallel scan. Also shown are configurations which include interlace and time delay integration.


Fig. 4-6 - Serial-parallel scan configuration including time delay integration

Nyquist spatial frequency limit is increased. The amount of overscan need not be exactly a factor of two as shown but is generally between 1 and 2. We define overscan as the ratio of the detector height divided by the scan line pitch, where pitch is the distance between adjacent scan lines.
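The overscan definition and its effect on the Nyquist limit can be sketched as follows (the function and numbers are illustrative assumptions):

```python
def vertical_nyquist(detector_height, overscan):
    """Vertical Nyquist limit in cycles per unit angle, where the scan line
    pitch equals detector_height / overscan (overscan ratio as defined above)."""
    pitch = detector_height / overscan
    return 1.0 / (2.0 * pitch)  # sampling theorem: f_N = 1 / (2 * pitch)

# A hypothetical 0.25 mrad detector: 2:1 overscan doubles the Nyquist limit.
f_no_overscan = vertical_nyquist(0.25, 1.0)   # 2.0 cycles/mrad
f_overscan = vertical_nyquist(0.25, 2.0)      # 4.0 cycles/mrad
```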

In cases 1(d) and 2(d) we show two detectors in the scan direction. The signals from the first detector are time delayed by one detector dwell time and added to the output of the second. The signal currents add coherently while the noises add incoherently, resulting in a sensitivity improvement by the square root of the number of detectors added in the scan direction while detector channel bandwidth remains unchanged. This mode of operation is sometimes called time delay and integration or TDI. Another configuration used is a combination of serial, parallel, and time delay integration as shown in Fig. 4-6. The disadvantage of this format is that each of the signals in the parallel channels must be stored for the duration of a line.
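The TDI argument can be illustrated with a small Monte Carlo sketch (noise levels and sample counts are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
signal = 1.0

# Two detectors view the same scene point one dwell time apart; each channel
# carries independent unit-variance noise (arbitrary level for illustration).
ch1 = signal + rng.normal(0.0, 1.0, n)
ch2 = signal + rng.normal(0.0, 1.0, n)

tdi = ch1 + ch2                       # delayed-and-summed (TDI) channel
snr_single = signal / ch1.std()       # ~1.0
snr_tdi = 2.0 * signal / tdi.std()    # signal doubles, noise grows by sqrt(2)
print(snr_tdi / snr_single)           # ~1.41, i.e. the sqrt(2) improvement
```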

In thermal imaging systems practice, it is common to first define a signal-to-noise ratio SNR_o for a particular detector (or a number of detectors when a TDI scan mode is used) before progressing to the video SNR_V0. This intermediate SNR will be defined as the channel SNR_o and, as shown in Appendix F(B), is equal to*

SNR_o = πτ_o(n_s a_d η_sc)^1/2 D*(Ω_d)K_M ΔT/[4f²(Δf_e)^1/2],   (4-9)

where

τ_o - transmittance of the objective lens

f - focal ratio of the objective lens (focal length/diameter)

n_s - number of detectors in the TDI (or scan) direction

*In these preliminary derivations, we will ignore the spectral wavelength dependence of terms such as the optical transmittance τ_o and D*(λ), as discussed in Appendix F.


a_d - detector area (cm²)

η_sc - scan efficiency*

D*(Ω_d) - detector detectivity for viewing angle Ω_d (cm Hz^1/2 W⁻¹)

K_M - conversion factor between radiance and ΔT (W cm⁻² sr⁻¹ K⁻¹)

ΔT - temperature difference between scene object and background (K)

Δf_e - detector channel bandwidth (Hz)

Observe that the subscript zero has been added to SNR_o to imply that its measurement was made with a broad-area image, as discussed in connection with Eq. (4-8).

A reference bandwidth† is defined here as

Δf_R = 1/2T_d,   (4-10)

where Td is the time that a detector dwells on a point on the scene. In general, 7T,1 is given by

TI,

O,FRn1

so that

Af O , FR fl (4-12)

where n. is the number of detectors in the parallel scan direction, 0, is the overscan ratio, fl isthe total feied of view (0,C), wv is the instantaneous field or view (0, - 0) of a detector, Tf isthe frame time, and F, is the frame rate,
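Eqs. (4-10) through (4-12) can be sketched as follows; the scan parameters below are hypothetical:

```python
def dwell_time(eta_sc, n_p, omega, o_s, f_r, total_fov):
    # Eq. (4-11): T_d = eta_sc * n_p * omega / (O_s * F_R * Omega)
    return eta_sc * n_p * omega / (o_s * f_r * total_fov)

def reference_bandwidth(t_d):
    # Eq. (4-10): delta_f_R = 1 / (2 * T_d)
    return 1.0 / (2.0 * t_d)

# Hypothetical parallel-scan system: 180 detectors, 0.25 x 0.25 mrad IFOV
# (6.25e-8 sr), 0.0442 sr total field of view, no overscan, 30 frames/s,
# 75% scan efficiency.
t_d = dwell_time(0.75, 180, 6.25e-8, 1.0, 30.0, 0.0442)
print(reference_bandwidth(t_d))  # on the order of 8e4 Hz
```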

The factor K_M, which relates incremental scene radiance to incremental temperature difference, is defined and derived in Appendix F(B). The quantity K_M, evaluated between the limits 0 to λ₂, is provided in Table 4-1 for λ₂ from 2 to 13.9 μm. To find the value of K_M for any spectral interval in between, subtract the value obtained at the lower limit from that given at the higher limit. For example,

K_M(0-13 μm) = 12.40 × 10⁻⁵
K_M(0-8 μm) = 5.15 × 10⁻⁵
K_M(8-13 μm) = 7.25 × 10⁻⁵ W cm⁻² sr⁻¹ K⁻¹.

*Scan efficiency is the ratio of the time actually spent by the detector in scanning the entire image plane to the frame time.
†The reference bandwidth is often defined as Δf_R = 1/2T_d. This is the result of assuming that the channel bandwidth is limited by a simple RC filter; see Appendix C.


Table 4-1 - Values* of K_M Integrated from 0 to λ₂ Micrometers in W/cm²-sr-K

λ₂   K_M        λ₂   K_M       λ₂   K_M       λ₂   K_M       λ₂    K_M        λ₂    K_M
2.0  1.14 E-10  4.0  1.38 E-6  6.0  1.80 E-5  8.0  5.15 E-5  10.0  8.59 E-5   12.0  1.13 E-4
2.1  2.96       4.1  1.69      6.1  1.94      8.1  5.33      10.1  8.75       12.1  1.15
2.2  6.99       4.2  2.05      6.2  2.09      8.2  5.51      10.2  8.90       12.2  1.16
2.3  1.52 E-9   4.3  2.45      6.3  2.23      8.3  5.69      10.3  9.05       12.3  1.17
2.4  3.09       4.4  2.91      6.4  2.39      8.4  5.87      10.4  9.20       12.4  1.18
2.5  6.00       4.5  3.42      6.5  2.54      8.5  6.05      10.5  9.35       12.5  1.19
2.6  1.06 E-8   4.6  3.99      6.6  2.70      8.6  6.22      10.6  9.50       12.6  1.20
2.7  1.83       4.7  4.61      6.7  2.86      8.7  6.40      10.7  9.64       12.7  1.21
2.8  3.01       4.8  5.30      6.8  3.03      8.8  6.58      10.8  9.78       12.8  1.22
2.9  4.76       4.9  6.04      6.9  3.20      8.9  6.75      10.9  9.93       12.9  1.23
3.0  7.28       5.0  6.84      7.0  3.37      9.0  6.93      11.0  1.01 E-4   13.0  1.24
3.1  1.08 E-7   5.1  7.70      7.1  3.54      9.1  7.10      11.1  1.02       13.1  1.25
3.2  1.56       5.2  8.62      7.2  3.71      9.2  7.27      11.2  1.03       13.2  1.26
3.3  2.19       5.3  9.60      7.3  3.89      9.3  7.44      11.3  1.05       13.3  1.27
3.4  3.01       5.4  1.06 E-5  7.4  4.07      9.4  7.61      11.4  1.06       13.4  1.28
3.5  4.04       5.5  1.17      7.5  4.24      9.5  7.78      11.5  1.07       13.5  1.29
3.6  5.34       5.6  1.29      7.6  4.42      9.6  7.95      11.6  1.09       13.6  1.30
3.7  6.92       5.7  1.41      7.7  4.60      9.7  8.11      11.7  1.10       13.7  1.31
3.8  8.83       5.8  1.53      7.8  4.78      9.8  8.27      11.8  1.11       13.8  1.31
3.9  1.11 E-6   5.9  1.67      7.9  4.96      9.9  8.43      11.9  1.12       13.9  1.32

*T = 300 K (ambient)
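The tabulated values follow from integrating the temperature derivative of the Planck blackbody radiance at 300 K. The sketch below (our own helper names and constants) reproduces the 8-13 μm value to within a few percent:

```python
import math

# Radiation constants for wavelength in micrometers, exitance in W cm^-2 um^-1:
C1 = 3.7418e4   # first radiation constant, W um^4 cm^-2
C2 = 1.4388e4   # second radiation constant, um K

def dL_dT(lam, T=300.0):
    """d(spectral radiance)/dT, W cm^-2 sr^-1 K^-1 um^-1 (radiance = exitance/pi)."""
    x = C2 / (lam * T)
    ex = math.exp(x)
    return C1 * x * ex / (math.pi * lam**5 * T * (ex - 1.0) ** 2)

def K_M(lam1, lam2, T=300.0, n=4000):
    """K_M over the band lam1..lam2 um by trapezoidal integration (lam1 > 0)."""
    h = (lam2 - lam1) / n
    s = 0.5 * (dL_dT(lam1, T) + dL_dT(lam2, T))
    s += sum(dL_dT(lam1 + i * h, T) for i in range(1, n))
    return s * h

print(K_M(8.0, 13.0))  # ~7.4e-5, within a few percent of the tabulated 7.25e-5
```

The small residual difference from the table is attributable to the precision of the constants and the integration used when the table was computed.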

The noise equivalent temperature difference or NEΔT is defined as the scene temperature difference above some reference temperature which is just large enough to provide an SNR_o equal to 1.0. The two conditions imposed are that the test image be large enough so that the signal amplitude will not be reduced by sensor apertures and that the channel bandwidth be the reference bandwidth defined by Eq. (4-12). By setting SNR_o = 1 in Eq. (4-9), ΔT becomes NEΔT and

NEΔT = 4f²(Δf_R)^1/2/[πτ_o(n_s a_d η_sc)^1/2 K_M D*(Ω_d)].   (4-13)

Since f = F_L/D_o, where F_L is the focal length and D_o the diameter of the objective lens, and since

F_L² = a_d/ω,   (4-14)

Eq. (4-13) may be written as

NEΔT = 4a_d^1/2(Δf_R)^1/2/[πτ_o D_o² ω(n_s η_sc)^1/2 K_M D*(Ω_d)].   (4-15)

Another form which is sometimes convenient is

NEΔT = 4F_L²(Δf_R)^1/2/[πτ_o D_o²(n_s a_d η_sc)^1/2 K_M D*(Ω_d)].   (4-16)

The above equation together with Eq. (4-12) becomes

NEΔT = 4a_d^1/2[O_s F_R Ω]^1/2/[πτ_o D_o²(2n_s n_p)^1/2 η_sc ω^3/2 K_M D*(Ω_d)].   (4-17)


The incremental irradiance ΔE as used in the above equations is a value which is integrated over wavelength, and D* is integrated with respect to the spectral distribution of the background irradiance. It is further assumed that the spectral distribution of the scene object which is of interest is similar to that of the background. These assumptions lead to the simplified forms of the above equations.* (Also see Appendix F.)

The detectivity may be limited by background fluctuation noise generated in the primary photon-to-electron conversion process, by noise generated in the internal semiconductor processes within the detector, by preamp noise, or by a combination of all three. In some cases an effective D* is quoted which includes noises such as preamp noise, as discussed in Appendix F(C). Also, the special case where the major noise is due to photoconversion of background photons is discussed in this appendix. The detector in this case is said to be background limited, and the detectivity in this case is often written as D*_BLIP.

A detector in a TIS can detect photons from sources which are outside the field of view, such as the lens housing. These photons after detection constitute an additive noise. However, these noises can be reduced considerably by cold shielding, which is discussed in Appendix F(C). The intent of the cold shield is to restrict the detector viewfield to the solid angle subtended by the objective lens. If this is the case,

D*(Ω_d) = 2fη_cs^1/2 D*(2π),   (4-18)

where η_cs is the efficiency of the cold shield relative to a perfect cold shield and D*(2π) is the detectivity when the detector is viewing a solid angle of 2π sr of background radiation.

With Eq. (4-18), Eq. (4-16) becomes

NEΔT = 2F_L(Δf_R)^1/2/[πτ_o D_o(n_s a_d η_sc η_cs)^1/2 K_M D*(2π)],   (4-19)

and Eq. (4-17) becomes

NEΔT = 2[O_s F_R Ω]^1/2/[πτ_o D_o(2n_s n_p)^1/2 η_sc η_cs^1/2 ω K_M D*(2π)].   (4-20)

The NEAT has the advantage of being electrically measurable and is an indicator of sensor sensitivity. As will be discussed, it is not an absolute indicator, since a system with an NEAT which is inferior (larger) to another may yet produce a superior image on the display to the observer. The NEAT is of direct interest to video tracker designers since

SNR_c = ΔT/NEAT.   (4-21)

In the case where the detector channels are multiplexed at a rate of one sample per dwell time, the video bandwidth becomes n_p times the channel bandwidth. To compute the video SNR in this case we observe that the signal, the mean square noise, and the bandwidth all increase directly as the number of detectors in parallel. Thus

SNR_vo = π η_o (n_p n a_d η_sc)^{1/2} D*(Ω_c) K_M ΔT / (4 f² [n_p Δf]^{1/2})   (4-22)

"The simplified forms above also assume that average values of spectrally dependent quantities iuch as optical

efficiency, ?)r, taken over the spectral bandpass of the system can be used which is a usual engineering approximation.


F. A. ROSELL

and as can be seen, n_p cancels out and thus the broad area video SNR is identical to the SNR_c of Eq. (4-9) for the case where the multiplexing rate is one sample per dwell time. In some cases, the detector output may be, for example, sampled twice per dwell time, which would have the effect of doubling the video bandwidth and would increase the measured NEAT by √2. The quality of the image viewed by the observer will, however, increase instead of decrease because overall sensitivity stays the same while the increased sample rate improves sensor MTF and decreases aliasing effects as will be discussed in Appendix G.

The interest in the channel and video SNR stems in part from the relationship given in Eq. (4-8), which relates the SNR_D to the SNR_vo. The video bandwidth in the simplified derivation leading to Eq. (4-8) is immaterial in principle so long as it is the same as that used to measure SNR_vo. If, however, NEAT is used to compute SNR_vo, then the video bandwidth must be n_p Δf_r. Strictly speaking, Eq. (4-8) applies only when the noise is band-limited white noise, i.e., noise which has a uniform spectral power density within the video bandpass. The case where the noise is nonwhite is treated in Appendix D.

Because it is wished to employ Fourier methods of analysis when sensor aperture effects are to be taken into account in later sections, it is convenient to describe the image dimension in terms of its reciprocal dimensions. Therefore, we define the image at this point to be a rectangle of size

a = ΔxΔy = εΔx²,   (4-23)

where ε = Δy/Δx. Also, we define the quantity N, which will be termed a spatial frequency, as

N = Y/Δx,   (4-24)

where Y is the effective height of the image focal plane as shown in Fig. 4-3. The units of N are reciprocal picture heights or, more commonly, lines per picture height. The virtue of N as a measure of spatial frequency is that it is dimensionless, which is a convenience when computing overall sensor modulation transfer function, and N relates most directly to the displayed information that the sensor is capable of displaying. With the above definitions, Eq. (4-8) becomes

SNR_D = [2 T_e Δf_v a/A]^{1/2} SNR_vo   (4-25)
      = [2 T_e ε Δf_v / α]^{1/2} (1/N) SNR_vo,

where A = αY² is the effective image focal plane area and α is the picture aspect ratio (horizontal to vertical). Note that α may be written as ψ_h/ψ_v, the ratio of the horizontal to the vertical field of view.

By rewriting Eq. (4-22) in terms of a video bandwidth, we obtain

SNR_vo = π η_o (n_p n a_d η_sc)^{1/2} D*(Ω_c) K_M ΔT / (4 f² [Δf_v]^{1/2})   (4-26)

and by substitution of Eq. (4-26) into Eq. (4-25), the relation

SNR_D = [2 T_e ε/α]^{1/2} (1/N) π η_o (n_p n a_d η_sc)^{1/2} D*(Ω_c) K_M ΔT / (4 f²)   (4-27)


or by use of Eq. (4-14),

SNR_D = [2 T_e ε/α]^{1/2} (1/N) π η_o D_o (n_p n ω η_sc)^{1/2} D*(Ω_c) K_M ΔT / (4 f)   (4-28)

or by use of Eq. (4-18) for the BLIP detector case with cold shield,

SNR_D = [2 T_e ε/α]^{1/2} (1/N) π η_o D_o (n_p n ω η_sc)^{1/2} η_cs D*(2π) K_M ΔT / 2.   (4-29)

Note that the video bandwidth cancels out.

The use of N measured in lines or half-cycles per picture height is standard in television practice not only because of the analytical benefits noted above but because TV camera lenses can be changed at will. FLIR lenses are generally designed in because of cold shielding requirements. As a result, one measure of spatial frequency which is commonly used in FLIR analysis and design is k_θ, which is measured in line pairs or cycles per milliradian. The conversion of N to k_θ is given by

N = 2000 Y k_θ / F_L   (4-30)

and because the vertical field of view ψ_v is approximately Y/F_L,

N = 2000 ψ_v k_θ,   (4-31)

where ψ_v is in radians and k_θ is in cycles per milliradian. Equation (4-31) can be substituted into either Eq. (4-28) or (4-29) directly.
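As a small numerical sketch (not from the report), the unit conversion of Eqs. (4-30) and (4-31) can be captured in two helper functions; the function names and the example field of view are illustrative assumptions.

```python
# Conversion between spatial frequency in lines per picture height (N) and
# cycles per milliradian (k_theta), per Eq. (4-31): N = 2000 * psi_v * k_theta,
# where psi_v is the vertical field of view in radians.

def n_from_ktheta(k_theta, psi_v_rad):
    """Spatial frequency in lines/pict. ht. from cycles/mrad."""
    return 2000.0 * psi_v_rad * k_theta

def ktheta_from_n(n_lines, psi_v_rad):
    """Spatial frequency in cycles/mrad from lines/pict. ht."""
    return n_lines / (2000.0 * psi_v_rad)

# Example: for a 36-mrad vertical field of view (as in the Section F example),
# 10 cycles/mrad corresponds to 720 lines per picture height.
n_720 = n_from_ktheta(10.0, 0.036)
```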

Equation (4-28) can be rearranged to read

SNR_D = [2 T_e ε n_p Δf_r / α]^{1/2} (1/N) · π η_o D_o (n ω η_sc)^{1/2} D*(Ω_c) K_M ΔT / (4 f [Δf_r]^{1/2}),   (4-32)

where we have also multiplied the numerator and denominator by [Δf_r]^{1/2}. The term to the far right can be recognized as being equal to ΔT/NEAT by comparison with Eq. (4-16), and if we define n_p Δf_r as the reference video bandwidth, Δf_vr, Eq. (4-32) becomes

SNR_D = [2 T_e ε Δf_vr / α]^{1/2} (1/N) (ΔT/NEAT).   (4-33)

This form is convenient when noise sources are of a simple form and when there is no need totake into account the detailed effects of sensor apertures on the noises.

In FLIR modeling practice, it is usual to calculate NEAT but it has not been customary to calculate SNR_vo or SNR_D. The reason is that the first generations of FLIR have been comparatively simple and it has been generally possible to specify the detectivity of the detectors by a simple number or an equivalent detectivity including perhaps an additional noise as shown in Appendix F(C). In this case, it is usually simpler to calculate the minimum resolvable temperature, or MRT, directly as shown in Chapter V without an intermediate SNR_D step. This procedure will probably change in the future both as our understanding of the modeling process increases and with the advent of more advanced FLIRs. For example, if observer retinal noise characteristics are added in, then this noise would be difficult to fold into the detectivity definition. Also, multiple noises with rather widely varying bandwidths may be involved, which


makes the simple lumped detectivity concept very cumbersome. Even with current systems the simple approach begins to break down if it is wished to make the calculations by using the actual noise power of the detector and if aperture correction of either the preamp, the multiplexer, or the display is used.

E. THE EFFECTS OF FINITE SENSOR APERTURES

If an imaging sensor were perfect, the image of a point source would appear as a point in the displayed image and all images would be transmitted through the sensor with perfect fidelity. In reality, the displayed image will differ from the scene object in amplitude, shape, position, or all three due to finite sensor apertures such as the lens, the finite dimensions of the detector, the multiplexer, the finite dimensions of the display's electron scanning beam, etc. To illustrate the effect of an aperture, consider the point source object of Fig. 4-7(a). Due to diffraction, chromatic and geometric aberrations, and imperfect focus, a point in object space will be imaged by the lens as a blur in image space. An equation representing the image intensity vs the x, y coordinates in the image plane is known as the point spread function. Similarly, an equation representing the blurred line of Fig. 4-7(b) is known as the line spread function. If the aperture responses are linear, the point and line spread functions can be Fourier transformed and the sensory system can be analyzed in the frequency domain.

Fig. 4-7 - Effect of a finite aperture on a point and line source image

The methods of analysis are discussed extensively in Appendix D. Appendix C discusses a model using a matched filter concept for the human eye while Appendix D treats the eye as a synchronous integrator. Both approaches give similar results. The methods and results of Appendix D will be used in the treatment which follows.


To illustrate the methods of analysis, we will first consider the system of Fig. 4-8 which employs a TV camera to perform the multiplexing operation. The image of the scene is projected onto the line array of detectors by the first lens. The output of each detector is amplified and used to modulate a light-emitting diode or LED. The detectors and the LED's are scanned in the horizontal direction by mirrors (not shown). During each scan, the TV camera stores the image created by the LED array and the stored image is subsequently read out by the TV pickup tube's electron scanning beam. The resulting signals are amplified and used to modulate the display beam which creates a visible light image on the CRT phosphor of contrast proportional to the radiation contrast of the photoconverted scene. In the horizontal direction, the signals are analog so that sampling effects need not be considered. However, this is not true in the vertical direction.

Fig. 4-8 - Schematic representation of a basic FLIR configuration (lens, detectors, preamps, LED's, TV pickup tube, display)

We will consider two basic types of test objects, which are aperiodic objects (defined here to be isolated rectangular images on a uniform background) and periodic bar patterns. To focus on first-order effects, we will initially consider the rectangles and bars to be long relative to their width so that the effects of the sensor apertures on bar length can be considered trivial. The analysis can then proceed on a one-dimensional basis so far as the apertures are concerned. It is further assumed that the perception of the displayed image is limited by sensor-generated, rather than retinal fluctuation, noise.


The most prominent and usually dominant noise generated within the usual FLIR system is due to the photoconversion of photons to electrons by the detectors. Other noises such as inverse frequency-dependent noise and preamp noise which may occur are frequently lumped and incorporated in the D* specification. In first-order analysis, it is common to consider there to be only one noise source and that the character of the noise is white. These assumptions are by no means necessary: multiple noise sources of nonwhite character are readily treated as discussed in Appendix D, but we shall assume only one noise source in the analysis that immediately follows.

In the synchronous integrator concept of the human eye-brain combination as discussed in Appendix D, it is assumed that while the image of an aperiodic object is smeared out or blurred by the finite sensor apertures, the observer will extend his limits of spatial integration as required so as to recover all of the signal. However, in extending the integration distance to recover the signal, more background noise is also integrated. Since a sensor aperture is also a filter, the aperture may also reduce the noise if the aperture follows a point of noise insertion. Thus, while an aperture can both increase and decrease the noise an observer will perceive,


the increase will always be larger than the decrease and the net effect on the SNR_D will always be a decrease.* A functional block diagram of the sensor apertures corresponding to the system of Fig. 4-8 is shown in Fig. 4-9. The sensor optical transfer functions are symbolically denoted as R_o( ) and the magnitudes of R_o( ) are called the aperture's modulation transfer function or MTF.

Fig. 4-9 - Functional diagram for the system of Fig. 4-8 showing sensor apertures and point of noise insertion

The equations that describe the effects of apertures on the models for bar pattern detection differ substantially from those used to model simple rectangle detection because of the effects the apertures have on the signal waveform and not because of any substantial difference in the observer's detection process (although some differences exist). In the aperiodic case, the displayed image's waveform is smeared out and may be reduced in amplitude by apertures. However, the area under the output waveform is identical to the area under the input waveform. As noted above, it is assumed that the observer will simply extend his spatial integration limits to recover all of the signal. However, in the process, the amount of noise perceived increases since the limits of signal integration now include more noise from the background. This is discussed in detail in connection with Fig. D-2 of Appendix D. In the periodic case, the image of a bar pattern is also blurred by apertures, but the period of the output waveform is identical to the period of the input waveform and it is therefore assumed that the noise integration distance remains unchanged by aperture effects. The primary effect of apertures on bar pattern detection is to reduce signal amplitude (more correctly, the area under the output waveform decreases as discussed in connection with Fig. D-12 of Appendix D). To summarize, the effect of apertures on aperiodic test objects is to increase the noise integration distance leaving the integrated signal unchanged, while in the periodic test object case, the effect is to reduce integrated signal leaving the noise integration distance unchanged.

The periodic bar pattern is the most commonly used test pattern, and sensor performance is generally specified in terms of the highest spatial frequency that an observer can resolve on the sensor's display as a function of the pattern's radiant exitance or temperature differential. For the basic FLIR configuration shown schematically in Fig. 4-8 and functionally as a block diagram in Fig. 4-9, the SNR_D may be written as

SNR_D = [2 T_e ε Δf_v / α]^{1/2} (1/N) [R_sf(N) / β^{1/2}(N)] SNR_vo,   (4-34)

where R_sf(N), the square wave flux response, and β(N), the noise filtering function, are to be defined. The SNR_D, as in the case of the aperiodic object, is computed on the basis of a single

"Thl, nc' SNR , will jldys, he it dccrc,•,c CXLCpI po.ih'h Il ! ,c tl'.tL d I direclion ohcc Ihlc .wperis j ,cs are , part(if thc pre- irnd po.ililicring ,li-r,iihon iod Iiu, ire hb ', nctcs,.tr, ? mag;let fCLonsif'.I•r.I5Or imi ', ''5. IrCk•fcrvii( (if

S'.'I IU'I fS FLP.on'Sc


bar, the assumption being that an observer, to resolve the presence of a bar pattern, must resolve a single bar in the pattern. In the periodic case, all of the apertures shown on the block diagram of Fig. 4-9 act to reduce the integrated signal under the output waveform. If the individual aperture responses are linear, then all of the aperture responses may be lumped as follows:*

R_os(N) = R_oL1(N) · R_od(N) · R_oL2(N) · R_oc(N) · R_oa(N) · R_oD(N),   (4-35)

where R_os(N) is the overall sensory system MTF. The square wave flux response is determined from the above equation and

R_sf(N) = (8/π²) Σ_{k=1}^{∞} R_os[(2k−1)N] / (2k−1)².   (4-36)

The noise filtering factor β(N) includes only those MTFs that follow the point of noise insertion, which are

R_of(N) = R_oL2(N) · R_oc(N) · R_oa(N) · R_oD(N),   (4-37)

and β(N) is obtained from the approximation

β(N) = (1/N) ∫₀^N R_of²(N′) dN′.   (4-38)

Alternatively, β(N) can be obtained from the approximation

β(N) = [1 + (N/N_ef)²]^{−1/2},   (4-39)

where N_ef, the noise equivalent bandwidth of those apertures which follow the point of noise insertion, is given by

N_ef = ∫₀^∞ R_of²(N) dN.   (4-40)

R_sf(N), β(N), and N_ef are usually evaluated numerically.
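As an illustration of how these quantities are evaluated numerically (a sketch, not part of the report), the following computes the square wave flux response of Eq. (4-36), the noise equivalent bandwidth of Eq. (4-40), and the approximation of Eq. (4-39) for an assumed Gaussian MTF; the Gaussian form and its parameter are illustrative assumptions.

```python
import math

NC = 300.0  # assumed Gaussian MTF parameter, lines/pict. ht. (illustrative)

def r_os(N):
    """Assumed overall system MTF (Gaussian, for illustration only)."""
    return math.exp(-(N / NC) ** 2)

def r_sf(N, r=r_os, kmax=200):
    """Square wave flux response, Eq. (4-36), truncated at kmax odd harmonics."""
    return (8.0 / math.pi ** 2) * sum(
        r((2 * k - 1) * N) / (2 * k - 1) ** 2 for k in range(1, kmax + 1))

def n_ef(r=r_os, n_max=3000.0, steps=30000):
    """Noise equivalent bandwidth, Eq. (4-40), by trapezoidal integration."""
    dn = n_max / steps
    total = 0.5 * (r(0.0) ** 2 + r(n_max) ** 2)
    total += sum(r(i * dn) ** 2 for i in range(1, steps))
    return total * dn

def beta(N, Nef):
    """Noise filtering function, approximation of Eq. (4-39)."""
    return 1.0 / math.sqrt(1.0 + (N / Nef) ** 2)
```

For the Gaussian above, Eq. (4-40) has the closed form N_ef = N_c (π/8)^{1/2}, which the numerical integration reproduces; β(N) falls from unity at low spatial frequencies toward N_ef/N at high spatial frequencies.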

It is observed that a bar pattern is periodic in one direction and aperiodic along the length of the bar. Usually the lengths of the bars are sufficiently long so that the noise increase and filtering effects along the bar can be ignored. If the bar lengths are short, then Eq. (4-34) would be written as

SNR_D = [2 T_e ε Δf_v / α]^{1/2} (1/N) {R_sf(N) / [β_x(N) Γ_y(N) f_y(N)]^{1/2}} SNR_vo,   (4-41)

where the bars are assumed to be periodic in x and aperiodic in y, and Γ_y(N) and f_y(N) are the noise filtering and increase factors, respectively. Assuming that these factors can be neglected, Eq. (4-34) together with Eq. (4-28) may be written as

SNR_D = [2 T_e ε/α]^{1/2} (1/N) [R_sf(N)/β^{1/2}(N)] π η_o D_o (n_p n ω η_sc)^{1/2} D*(Ω_c) K_M ΔT / (4 f).   (4-42)

"*The spatial frequency, m y be cpresqed in other unts ,uch u , , k cf carc is c,1tciscd in staling the MTFs .i previouslydisoi•tsed The ilmance (f error is reduced if the oserdll MTFs and specialized terms are carried forward as dimension-I¢ci units until the end and then conserted iO the dirmenrsional units dcsircd


The noise equivalent bandwidth of any aperture is defined in general by

N_e = ∫₀^∞ R_o²(N) dN   (4-43)

or

N_e = 2000 ψ_v ∫₀^∞ R_o²(k_θ) dk_θ,

and the noise equivalent aperture is defined in general by

δ_e = 1/N_e.   (4-44)

The meaning of δ_e can be visualized by reference to Fig. 4-10. If an image impulse (a point source) is passed through a Gaussian filter, the output waveform will be as shown in the figure. The shaded area bounded by δ_e is the distance over which the eye is assumed to integrate noise. This is less than the distance over which the signal was assumed to be integrated, but some means had to be found to bound the noise integration limits. However, the difference is not large in effect because the shaded area in Fig. 4-10 is 0.92 times the total area integrated over infinite limits. The quantity δ_e may be thought of as a noise equivalent blur distance. When a number of apertures are cascaded, the following approximations are sometimes useful:

δ_e² ≈ δ_e1² + δ_e2² + δ_e3² + · · · + δ_en²   (4-45)

1/N_e² ≈ 1/N_e1² + 1/N_e2² + 1/N_e3² + · · · + 1/N_en²   (4-46)
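The quality of these combination rules can be checked numerically. The sketch below (illustrative, not from the report) cascades three Gaussian MTFs, for which the root-sum-square rules are exact, and compares the combined noise equivalent bandwidth obtained by direct integration of Eq. (4-43) with the value predicted by Eq. (4-46); the parameter values are assumptions.

```python
import math

def n_e(mtf, n_max=4000.0, steps=40000):
    """Noise equivalent bandwidth of an aperture, Eq. (4-43)."""
    dn = n_max / steps
    total = 0.5 * (mtf(0.0) ** 2 + mtf(n_max) ** 2)
    total += sum(mtf(i * dn) ** 2 for i in range(1, steps))
    return total * dn

# Three cascaded Gaussian component MTFs (illustrative widths, lines/pict. ht.).
widths = [250.0, 400.0, 600.0]
mtfs = [lambda N, w=w: math.exp(-(N / w) ** 2) for w in widths]

def overall(N):
    """Overall MTF: the product of the component MTFs."""
    p = 1.0
    for m in mtfs:
        p *= m(N)
    return p

ne_parts = [n_e(m) for m in mtfs]
ne_direct = n_e(overall)                                        # Eq. (4-43)
ne_rule = 1.0 / math.sqrt(sum(1.0 / x ** 2 for x in ne_parts))  # Eq. (4-46)
```

For non-Gaussian apertures the rules hold only approximately, which is why the text calls them "sometimes useful."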

Fig. 4-10 - Noise integration distance for a point image after the image has passed through a Gaussian aperture

The aperiodic model is seldom used in either analytical modeling or in laboratory or field testing of TIS. However, since it is the appropriate model to use when the task is the detection of small objects on a uniform background, the aperiodic model will be reviewed. Equation (4-25) modified to include aperture effects on aperiodic images becomes


SNR_D = [2 T_e ε Δf_v / α]^{1/2} (1/N) [Γ(N) f(N)]^{−1/2} SNR_vo,   (4-47)

where f(N) is a noise increase factor and Γ(N) is a noise filtering factor.

Precise integral expressions are available in Appendix D for calculating the noise increase and filter factors, but for clarity and physical insight we will use the approximations developed there. The approximate noise increase factor is given by

f(N) ≈ [1 + (N/N_e)²]^{1/2},   (4-48)

where N_e is the noise equivalent bandwidth as defined by Eq. (4-43) and calculated by using the overall system MTF as given by Eq. (4-35). Alternatively, Eq. (4-48) can be written as

f(Δx) ≈ [1 + (δ_e/Δx)²]^{1/2},   (4-49)

where δ_e is the overall system noise equivalent aperture as defined by Eq. (4-44) and Δx is the width of the input test pattern in compatible units. It can be seen that when the width of the test object is large relative to the noise equivalent aperture (or "blur diameter"), the noise increase factor is near unity. The noise filtering factor Γ(N) is given by

Γ(N) ≈ {[1 + (N/N_en)² + (N/N_ef)²] / [1 + (N/N_en)² + 2(N/N_ef)²]}^{1/2},   (4-50)

where N_en is the noise equivalent bandwidth for all of the apertures that precede the point of noise insertion and N_ef is the same parameter for all of the apertures that follow the point of noise insertion, i.e., for the block diagram of Fig. 4-8, R_on = R_oL1 · R_od and R_of = R_oL2 · R_oc · R_oa · R_oD. Now, by analogy to Eq. (4-42) we can write the typical SNR_D equation including apertures for the aperiodic target case as

SNR_D = [2 T_e ε/α]^{1/2} (1/N) {1/[Γ_x(N) f_x(N) Γ_y(N) f_y(N)]^{1/2}} π η_o D_o (n_p n ω η_sc)^{1/2} D*(Ω_c) K_M ΔT / (4 f),   (4-51)

where we have included the effects of the apertures in both directions. In this formulation we have assumed that the spatial dependences of the aperture responses are independent and separable in x and y.
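The approximate aperiodic factors are simple enough to sketch directly (an illustration, not from the report); the bandwidth values below are assumed for the example.

```python
import math

def noise_increase(N, Ne):
    """Noise increase factor f(N), Eq. (4-48); Ne is the overall system
    noise equivalent bandwidth."""
    return math.sqrt(1.0 + (N / Ne) ** 2)

def noise_filter(N, Nen, Nef):
    """Noise filtering factor Gamma(N), Eq. (4-50); Nen and Nef are the
    noise equivalent bandwidths of the apertures preceding and following
    the point of noise insertion, respectively."""
    a = (N / Nen) ** 2
    b = (N / Nef) ** 2
    return math.sqrt((1.0 + a + b) / (1.0 + a + 2.0 * b))

# Illustrative values (lines/pict. ht.): overall Ne = 200, Nen = 300, Nef = 250.
f_300 = noise_increase(300.0, 200.0)       # noise grows for small test objects
g_300 = noise_filter(300.0, 300.0, 250.0)  # post-noise apertures filter noise
```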

The principal sensor MTFs may include the lens, the instability of the sightline, the detector, the multiplexer, and the display. In good design, the lens will be close to diffraction limited. The diffraction limit is of course wavelength dependent. However, to gain some appreciation of the diffraction problem, we have used the simple diffraction equation given by

R_oL(ν) = (2/π) [cos⁻¹(ν/ν_c) − (ν/ν_c)(1 − (ν/ν_c)²)^{1/2}],   (4-52)

where ν is the spatial frequency in arbitrary units and ν_c is the cutoff frequency. When ν is given in terms of N (lines/pict. ht.),


N_co = 2000 Y D_o / (λ F_L) = 2000 ψ_v D_o / λ,   (4-53)

where λ is the wavelength in μm, ψ_v is the vertical field of view in radians, and D_o is the diameter of the lens in mm. We plot Eq. (4-52) in Fig. 4-11 for λ = 10 μm and a ψ_v of 2° for various lens diameters. When ν is given in terms of k_θ in cycles/mrad,

k_co = D_o/λ,   (4-54)

where D_o is in mm and λ is in μm. Again, we plot Eq. (4-52) in Fig. 4-12 for λ = 4 and 10 μm and the same lens diameters as before. When using k_θ units, the MTFs are independent of the fields of view.
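Equations (4-52) and (4-54) combine into a short routine (an illustrative sketch, not code from the report):

```python
import math

def mtf_diffraction(k, k_co):
    """Diffraction-limited circular-aperture MTF, Eq. (4-52), at spatial
    frequency k for cutoff frequency k_co (both in the same units)."""
    if k >= k_co:
        return 0.0
    u = k / k_co
    return (2.0 / math.pi) * (math.acos(u) - u * math.sqrt(1.0 - u * u))

def k_cutoff(d_o_mm, wavelength_um):
    """Cutoff spatial frequency in cycles/mrad, Eq. (4-54): k_co = D_o/lambda."""
    return d_o_mm / wavelength_um

# Example: a 200-mm lens at 10 um cuts off at 20 cycles/mrad, and its MTF at
# half the cutoff frequency is about 0.39.
kco = k_cutoff(200.0, 10.0)
mid = mtf_diffraction(0.5 * kco, kco)
```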

Fig. 4-11 - MTF of diffraction-limited circular lens at 10 μm with a vertical field of view of 2°. The MTFs are plotted for various lens diameters.

Fig. 4-12 - MTF of diffraction-limited circular lens at 4 and 10 μm for various lens diameters


The effects of sightline instability are well known for television cameras, which are integrating sensors, but the effects on FLIR systems are not. In the television case, image blur due to instability takes place before readout and therefore each displayed image is blurred, and each point in the picture is blurred by about the same amount. In the case of a FLIR using electronic multiplex, each picture displayed should be relatively unblurred since the exposure time over which the motion is integrated is only equal to the detector dwell time rather than the much longer sensor frame time. Each point in the picture can be somewhat displaced from its true position, and the displacement of any given point may be different from that of any other point. When the sightline motion is of the random amplitude variety, each point in the picture will be displaced from the next by an unpredictable and variable amount. The primary integration of the motion will take place in the observer's eye. When the TV camera tube is used as the multiplexer, the motion is still not completely integrated as in the conventional TV case although some TV camera tube lag effects may come into play. The psychological experimentation necessary to determine sightline instability effects for FLIR has not been performed. Therefore, the usual procedure is either to ignore motion entirely, which seems rather optimistic, or to use the TV-derived motional MTFs, which is probably somewhat pessimistic. For random sightline motion, the MTF for TV is given by

R_om(N) = exp −[√2 π θ_A N / (2000 ψ_v)]²   (4-55)

when N is given in lines/pict. ht. (with θ_A in mrad and ψ_v in radians), and as

R_om(k_θ) = exp −[√2 π θ_A k_θ]²   (4-56)

when k_θ is in cycles/mrad and θ_A is in mrad. The quantity θ_A is the rms amplitude of the sightline motion. Equations (4-55) and (4-56) are plotted in Figs. 4-13 and 4-14 respectively. Again, observe that R_om is field-of-view dependent when given in units of N.

Fig. 4-13 - MTFs due to random sightline motions of amplitude θ_A for a vertical field of view of 2.5°


Fig. 4-14 - MTFs due to random sightline motions of amplitude θ_A

The MTF of a detector of size δ in one direction is given by

R_od(N) = sin(πδN/2F_Lψ_v) / (πδN/2F_Lψ_v)   (4-57)

when N is in lines/pict. ht. Alternatively,

R_od(N) = sin(πθN/2ψ_v) / (πθN/2ψ_v),   (4-58)

where θ is the instantaneous field of view of the detector in the appropriate direction. For k_θ in cycles/mrad,

R_od(k_θ) = sin(10³ πδk_θ/F_L) / (10³ πδk_θ/F_L)   (4-59)

or

R_od(k_θ) = sin(πθk_θ) / (πθk_θ),   (4-60)

where θ is the instantaneous field of view in mrad (δ and F_L in mm). The above equations apply to the analog scan direction. The case where the signals are sampled is discussed in Appendix G. We plot the detector MTF for ψ_v/θ = 360 in Fig. 4-15 and, by use of the scale at the top of the graph, the MTF is shown vs k_θ for θ = 0.1 mrad. The various MTFs discussed above plus the MTFs assumed for a TV camera multiplexer and a typical 5" display are shown in Fig. 4-16. The lens MTF assumes a 200-mm diameter, and the sightline motion was assumed to be 50 μrad rms. The overall system MTF is shown as R_os. The MTFs due to the LED's and reimaging lens used in the basic FLIR configuration of Fig. 4-8 are assumed to be unity. In Fig. 4-17, we show the overall system square wave amplitude response R_sq(N), the sine wave amplitude response R_os(N), or MTF, and the square wave flux response R_sf(N). Also shown is the noise-filtering function, β(N), for periodic patterns.
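The component responses above cascade by simple multiplication per Eq. (4-35). The sketch below (illustrative, not from the report) combines a diffraction-limited lens (Eq. 4-52), random sightline motion (Eq. 4-56), and a detector sinc response (Eq. 4-60) in k_θ units, using values similar to those assumed for Fig. 4-16: a 200-mm lens at 10 μm, 50-μrad rms motion, and a 0.1-mrad detector.

```python
import math

def mtf_lens(k, k_co=20.0):
    """Diffraction-limited lens MTF, Eq. (4-52); k_co = D_o/lambda = 200/10."""
    if k >= k_co:
        return 0.0
    u = k / k_co
    return (2.0 / math.pi) * (math.acos(u) - u * math.sqrt(1.0 - u * u))

def mtf_motion(k, theta_a=0.05):
    """Random sightline motion MTF, Eq. (4-56); theta_a = 0.05 mrad rms."""
    return math.exp(-(math.sqrt(2.0) * math.pi * theta_a * k) ** 2)

def mtf_detector(k, theta=0.1):
    """Detector MTF, Eq. (4-60); theta = 0.1 mrad instantaneous field of view."""
    x = math.pi * theta * k
    return 1.0 if x == 0.0 else math.sin(x) / x

def mtf_system(k):
    """Overall MTF: the product of the component MTFs, per Eq. (4-35)."""
    return mtf_lens(k) * mtf_motion(k) * mtf_detector(k)

r_5 = mtf_system(5.0)   # overall response at 5 cycles/mrad
```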


Fig. 4-15 - MTF for a detector vs N for a ψ_v/θ ratio of 360 and vs k_θ for an instantaneous field of view of 0.1 mrad

Fig. 4-16 - Overall system MTF, R_os(N), and component MTFs including R_oL (lens), R_om (motion), R_od (detector), R_oc (camera tube), and R_oD (display)


Fig. 4-17 - Square wave amplitude response R_sq(N), sine wave amplitude response R_os(N), square wave flux response R_sf(N), and noise filtering function, β(N), for the assumed sensory system

In low light level television systems, the image blurring takes place in the TV camera tube's gain-storage target which follows the point of photoelectron noise generation (at the photosurfaces), and thus the motion MTF filters the noise and is included in β(N). If the blurring takes place in the observer's eye, then again the noise is filtered and would be included in β(N). This was assumed in calculating β(N) of Fig. 4-17.

The component MTFs used above for illustrational purposes are by no means all inclusive. Other MTFs such as the preamp and video-amplifier responses, the MTF of the LED reimaging system, and even the MTF of the observer's eye may be involved in the overall imaging process.

It should also be observed that certain MTFs can be partially compensated (Ref. 4-8). If an aperture has the functional frequency response R_o(N), the ideal corrector would be R_o⁻¹(N). Note that the corrector's response must be the inverse of both the magnitude and phase terms of R_o(N). In the example described by the functional block diagram of Fig. 4-8, it will be found that an attempt to compensate the lens and detector will be unfruitful because these apertures precede the point of noise insertion and both signals and noises will be amplified alike by the compensator. However, it is possible to compensate the reimaging lens, R_oL2, and the TV camera, R_oc, which follow the noise. In this case, it will be found that compensation increases both signal and noise but the signal increases faster. It has been found in practice that the improvement obtainable can be appreciable for this specific case.
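The benefit of compensating only the post-noise apertures can be illustrated numerically. In the sketch below (an illustration, not from the report), an assumed Gaussian post-noise aperture R_of is boosted by its exact inverse over a finite band. The signal at a chosen spatial frequency is restored by the full boost factor, while the perceived rms noise grows only as the square root of the integrated boosted noise power, so the net SNR at high spatial frequencies improves. All parameter values are assumptions.

```python
import math

NC = 200.0       # Gaussian parameter of the post-noise aperture (assumed)
N_MAX = 400.0    # compensation band limit, lines/pict. ht. (assumed)
STEPS = 4000

def r_of(N):
    """Assumed post-noise aperture MTF (Gaussian)."""
    return math.exp(-(N / NC) ** 2)

def boost(N):
    """Ideal corrector: the inverse of the post-noise response."""
    return 1.0 / r_of(N)

def noise_rms(filter_fn):
    """Relative rms noise: square root of noise power integrated over the band."""
    dn = N_MAX / STEPS
    total = 0.5 * (filter_fn(0.0) ** 2 + filter_fn(N_MAX) ** 2)
    total += sum(filter_fn(i * dn) ** 2 for i in range(1, STEPS))
    return math.sqrt(total * dn)

N_TEST = 300.0
signal_gain = boost(N_TEST)   # signal fully restored at N_TEST
noise_gain = noise_rms(lambda N: r_of(N) * boost(N)) / noise_rms(r_of)
snr_gain = signal_gain / noise_gain
```

With these assumed values the signal at N = 300 is boosted by roughly a factor of nine while the rms noise less than doubles; boosting apertures ahead of the noise insertion point would raise signal and noise by the same factor and yield no gain.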

F. NUMERICAL EXAMPLE

In Table 4-2, the parameters of a hypothetical FLIR are summarized. It is assumed that the basic FLIR configuration of Fig. 4-8 is used. A vertical array of 180 detectors interlaced 2:1 is used to provide 360 scan lines without overlap. The reference channel bandwidth from Eq. (4-12) may be written as

Δf_r = O_s F_R Ω / (2 η_sc n_p ω) = (1 · 30 · 36 · 48) / (2 · 0.75 · 180 · 0.1 · 0.1) = 19,200 Hz,

"'4

Page 79: a073763 the Fundamentals of Theirmal Imaging Systems

NRL REPORT 8311

Table 4-2 - Hypothetical FLIR Parameters

Field of View                  Ω           36 × 48 mrad
Instantaneous Field of View    ω           0.1 × 0.1 mrad

Lens
  Diameter                     D_o         200 mm
  Focal Length                 F_L         500 mm
  Focal Ratio                  f           2.5
  Efficiency                   η_o         0.65

Picture Aspect Ratio           α           4/3
Frame Rate                     F_R         30/s
Interlace Ratio                K_ip        2/1

Detectors
  Size                         δ_a × δ_b   0.05 × 0.05 mm
  Number                       n_p         180
  Scan Efficiency              η_sc        0.75
  Detectivity                  D*(2π)      1.3 × 10¹⁰ cm Hz^{1/2} W⁻¹
  Cold Shield Efficiency       η_cs        0.7

Incremental Scene Radiance
  Spectral Band                Δλ          8.0 - 11.5 μm
  Temperature                  T           300 K
  ΔT Proportionality Factor    K_M         5.55 × 10⁻⁵ W/cm²-sr-K

by using the parameters of Table 4-2. The NEAT calculated from Eq. (4-19) is

NEAT = 2 [Δf_r]^{1/2} / [π η_o D_o η_cs (n ω η_sc)^{1/2} K_M D*(2π)]
     = 2 [19,200]^{1/2} / [π · 0.65 · 20 · 0.7 · (1 × 10⁻⁸ · 0.75)^{1/2} · 5.5 × 10⁻⁵ · 1.3 × 10¹⁰]
     = 0.157 K.

The SNR_D obtainable from the sensory system assuming a unity MTF is calculated from Eq. (4-33) as follows:

SNR_D = [2 T_e ε Δf_vr / α]^{1/2} (1/N) (ΔT/NEAT)
      = [2 · 0.1 · 7 · 3.456 × 10⁶ / (4/3)]^{1/2} (1/N) (ΔT/0.157)
      = 12,140 ΔT/N,

•" 75

Page 80: a073763 the Fundamentals of Theirmal Imaging Systems

F- A- ROSELL

where a bar pattern image of 7:1 length-to-width aspect is assumed. This equation is plotted in Fig. 4-18 for various values of ΔT. When the test image is an isolated rectangle and when the principal noise is white noise developed in the detector's photoconversion process, then

SNR_D = 12,140 ΔT / {N [Γ_x(N) f_x(N)]^{1/2}},

where in the above we have included the effects of the sensor's finite apertures in the horizontal direction. For periodic bar patterns, the corresponding equation under the same conditions is

SNR_D = 12,140 ΔT R_sf(N) / [N β^{1/2}(N)].

The above three equations are plotted in Fig. 4-19 by using the MTFs of Section D for ΔT = 1°. In the above calculations, it is assumed that the bar patterns are vertically oriented and that the effects of the apertures in the vertical direction can be neglected. Observe that the effect of the apertures is to decrease the SNR_D obtainable at the higher line numbers and that the effect of the apertures is much more severe on periodic than aperiodic images.
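The numerical example can be checked end to end with a few lines of arithmetic (a sketch, not from the report). It reproduces the reference channel bandwidth of Eq. (4-12), the NEAT of Eq. (4-19), and the unity-MTF SNR_D coefficient of Eq. (4-33) from the Table 4-2 parameters; small differences from the report's rounded values (0.157 K and 12,140) are to be expected.

```python
import math

# Table 4-2 parameters, in the mixed units used in the text.
FR, ETA_SC, NP = 30.0, 0.75, 180.0    # frame rate, scan efficiency, detectors
OMEGA = 36.0 * 48.0                   # total field of view, mrad^2
W_MRAD2 = 0.1 * 0.1                   # instantaneous field of view, mrad^2
W_SR = 1.0e-8                         # instantaneous field of view, sr
ETA_O, DO_CM, ETA_CS = 0.65, 20.0, 0.7
KM, DSTAR_2PI = 5.5e-5, 1.3e10        # W/cm^2-sr-K and cm Hz^1/2 W^-1
TE, EPS, ALPHA = 0.1, 7.0, 4.0 / 3.0  # eye integ. time, bar aspect, aspect ratio
N_TDI, OS = 1.0, 1.0                  # TDI detectors and overscan (assumed 1)

# Eq. (4-12): reference channel bandwidth (mrad^2 units cancel).
df_r = OS * FR * OMEGA / (2.0 * ETA_SC * NP * W_MRAD2)

# Eq. (4-19): NEAT for the BLIP detector with cold shield.
neat = 2.0 * math.sqrt(df_r) / (
    math.pi * ETA_O * DO_CM * ETA_CS
    * math.sqrt(N_TDI * W_SR * ETA_SC) * KM * DSTAR_2PI)

# Eq. (4-33): SNR_D = coeff * dT / N for unity MTF, with df_vr = np * df_r.
coeff = math.sqrt(2.0 * TE * EPS * NP * df_r / ALPHA) / neat
```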

Fig. 4-18 - SNR_D vs spatial frequency for the assumed sensor with unity MTF (T = 300 K; curves for various values of ΔT)


Fig. 4-19 - SNR_D vs spatial frequency for the assumed sensor with unity MTF and for the assumed MTF with periodic and aperiodic test patterns (ΔT = 1°, T = 300 K)

G. SUMMARY OF THE SIGNIFICANT EQUATIONS

In this section we summarize the more significant equations. The equation numbering corresponds to the original equation numbers as they appeared earlier in this chapter.

Display Signal-to-Noise Ratio

SNR_D = [2 T_e Δf_v a/A]^{1/2} SNR_vo.   (4-8)

SNR_D - display signal-to-noise ratio

T_e - observer's integration time (s)


Δf_v - video bandwidth (Hz)

a - image area in focal plane (mm²)

A - total focal plane area (mm²)

e - charge of an electron (coul)

SNR_vo - video signal-to-noise ratio

Channel Signal-to-Noise Ratio

SNR_c = π η_o (n a_d η_sc)^{1/2} D*(Ω_c) K_M ΔT / (4 f² [Δf]^{1/2})   (4-9)

SNR_c - channel signal-to-noise ratio
τ_o - lens transmittance
f - lens focal ratio
n_i - number of TDI detectors
a_d - area of one detector (cm²)
η_sc - scan efficiency
D*(Ω_s) - detector detectivity (cm Hz^1/2 W⁻¹)
K_M - ΔT conversion factor (W cm⁻² sr⁻¹ K⁻¹)
Δf - channel bandwidth (Hz)
ΔT - incremental temperature (K)

Detector Channel Bandwidth (Reference)

Δf_R = O_s F_R Ω / (2 n_p ω η_sc)   (4-12)

Δf_R - reference channel bandwidth (Hz)
O_s - overscan ratio
F_R - frame rate (s⁻¹)
Ω - total field of view (sr)
η_sc - scan efficiency
n_p - number of parallel detectors
ω - instantaneous field of view (sr)


Noise Equivalent Temperature Difference (General)

NEΔT = 4 f² [Δf]^1/2 / [π τ_o (n_i a_d η_sc)^1/2 K_M D*(Ω_s)]   (4-13)

= 4 [a_d Δf]^1/2 / [π τ_o D_o² ω (n_i η_sc)^1/2 K_M D*(Ω_s)]   (4-15)

= 4 f [Δf]^1/2 / [π τ_o D_o (n_i ω η_sc)^1/2 K_M D*(Ω_s)]   (4-16)

= 4 f (O_s F_R Ω)^1/2 / [π τ_o D_o (2 n_i n_p)^1/2 ω η_sc K_M D*(Ω_s)]   (4-17)

Detectivity (BLIP Detectors)

D*(Ω_s) = 2 f η_cs^1/2 D*(2π)   (4-18)

D*(Ω_s) - detectivity for viewing solid angle Ω_s (sr)
η_cs - cold shield efficiency
D*(2π) - detectivity for a 2π sr viewing angle

Noise Equivalent Temperature Difference (BLIP Detectors)

NEΔT = 2 [Δf]^1/2 / [π τ_o D_o (n_i ω η_sc)^1/2 η_cs^1/2 K_M D*(2π)]   (4-19)

= 2 (O_s F_R Ω)^1/2 / [π τ_o D_o (2 n_i n_p)^1/2 ω η_sc η_cs^1/2 K_M D*(2π)]   (4-20)

Channel Signal-to-Noise Ratio

SNR_c = ΔT / NEΔT   (4-21)

Video Signal-to-Noise Ratio (Broad Area Image)

SNR_vo = (Δf / Δf_v)^1/2 SNR_c   (4-26)

SNR_vo - broad-area video SNR
Δf_v - video bandwidth (Hz)


Image Dimensions and Spatial Frequency

a = Δx Δy = ε Δx²   (4-23)

N = Y/Δx   (4-24)

k_θ = N/(2000 θ_v)   (4-31)

ε - image aspect ratio Δy/Δx
N - spatial frequency (lines/pict. ht.)
Δx - minimum image dimension (mm)
Y - image plane height (mm)
k_θ - spatial frequency (cycles/mrad)
θ_v - vertical field of view (rad)

SNR_D for General Ideal Sensor (MTF = 1.0)

SNR_D = [2 ε T_e/α]^1/2 π τ_o D_o (n_i n_p ω η_sc)^1/2 D*(Ω_s) K_M ΔT / (4 f N)   (4-27)

SNR_D - display SNR
α - picture aspect ratio (θ_H/θ_v)
θ_H - horizontal field of view (rad)

SNR_D = [ε T_e O_s F_R Ω / (α ω η_sc)]^1/2 (1/N) (ΔT/NEΔT)   (4-28)

SNR_D = [2 ε T_e Δf_v / α]^1/2 (1/N) (ΔT/NEΔT)   (4-33)

Δf_v - the reference video bandwidth = n_p Δf_R (Hz)

SNR_D for Ideal BLIP Sensor (MTF = 1.0)

SNR_D = [2 ε T_e/α]^1/2 π τ_o D_o (n_i n_p ω η_sc)^1/2 η_cs^1/2 D*(2π) K_M ΔT / (2 N)   (4-29)


SNRD for Periodic Images

SNR_D = [2 ε T_e Δf_v / α]^1/2 [R_sf(N) / (N β^1/2(N))] SNR_vo   (4-34)

R_sf(N) - square-wave flux response
β(N) - noise filtering function

Square Wave Flux Response - Periodic Images

R_sf(N) = (4/π) Σ_{k=0}^∞ (−1)^k R_o[(2k + 1)N] / (2k + 1)   (4-36)
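The series converts a sine-wave (MTF) response into a square-wave response. A sketch that truncates the sum; the MTF passed in is a stand-in for the system MTF of Section D:

```python
import math

def square_wave_response(mtf, n, kmax=500):
    # Eq. (4-36): Rsf(N) = (4/pi) * sum over k of (-1)^k * Ro[(2k+1)N] / (2k+1)
    total = 0.0
    for k in range(kmax):
        total += (-1) ** k * mtf((2 * k + 1) * n) / (2 * k + 1)
    return 4.0 * total / math.pi

# For a unity MTF the sum is the Leibniz series, so Rsf approaches 1:
r_unity = square_wave_response(lambda x: 1.0, 10.0)
```

When the MTF falls off rapidly, only the fundamental (k = 0) term survives and R_sf is about (4/π) times the MTF at N.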

Noise Filtering Function for Periodic Images

β(N) = (1/N) ∫₀^N R_o²(N′) dN′   (4-38)

SNR_D for Aperiodic Images

SNR_D = [2 ε T_e Δf_v / α]^1/2 (1/N) SNR_vo / [Γ(N) ξ(N)]^1/2   (4-47)

SNR_vo - broad-area video SNR
Γ(N) - noise filtering function
ξ(N) - noise increase function

Noise Increase Function for Aperiodic Images

ξ(N) = 1 + (N/2N_e)²   (4-48)

N_e - noise equivalent bandwidth for the overall system

Noise Equivalent Bandwidth

N_e = ∫₀^∞ R_o²(N) dN   (4-43)

R_o(N) - modulation transfer function
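N_e is simply the integral of the squared MTF. A sketch using a trapezoid rule and, for the check, an assumed Gaussian MTF whose integral is known in closed form (100·sqrt(π/8) ≈ 62.67); both the MTF and the truncation limit are illustrative:

```python
import math

def noise_equiv_bandwidth(mtf, upper, steps=20000):
    # Eq. (4-43): Ne = integral of Ro^2(N) dN from 0 to infinity,
    # truncated at `upper` (choose it well past the MTF cutoff)
    h = upper / steps
    total = 0.5 * (mtf(0.0) ** 2 + mtf(upper) ** 2)
    for i in range(1, steps):
        total += mtf(i * h) ** 2
    return total * h

# Gaussian MTF exp(-(N/N0)^2) with N0 = 100: Ne = N0 * sqrt(pi/8) exactly
ne = noise_equiv_bandwidth(lambda n: math.exp(-(n / 100.0) ** 2), 1000.0)
```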


Noise Filtering Function

Γ(N) = 1 / [1 + (N/2N_eα)² + (N/2N_eβ)²]   (4-50)

N_eα - noise equivalent bandwidth for all apertures preceding the point of noise insertion
N_eβ - noise equivalent bandwidth for all apertures following the point of noise insertion

MTF for a Diffraction-Limited Lens

R_ol(N) = (2/π) {cos⁻¹(N/N_o) − (N/N_o)[1 − (N/N_o)²]^1/2}   (4-52)

N_o = 2000 θ_v D_o/λ   (4-53)

k_o = D_o/λ   (4-54)

θ_v - vertical field of view (rad)
λ - wavelength (μm), when D_o is in mm
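Equations (4-52) through (4-54) in code form; a direct transcription, with D_o in mm and λ in μm for the cutoff, per the note above:

```python
import math

def diffraction_mtf(n, n0):
    # Eq. (4-52): Rol(N) = (2/pi)[acos(N/N0) - (N/N0) sqrt(1 - (N/N0)^2)]
    if n >= n0:
        return 0.0  # zero response beyond the diffraction cutoff
    x = n / n0
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

def cutoff_cycles_per_mrad(d_o_mm, wavelength_um):
    # Eq. (4-54): k0 = Do / lambda (cycles/mrad), Do in mm and lambda in um
    return d_o_mm / wavelength_um

half = diffraction_mtf(50.0, 100.0)  # ~0.391 at half the cutoff frequency
```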

MTF for Random Sightline Motion

R_om(k_θ) = exp[−2(π σ_θ k_θ)²]   (4-56)

R_om(N) = exp[−2(π σ_θ N/2000 θ_v)²]   (4-55)

σ_θ - rms angular amplitude of motion

MTF of a Detector in the Scan Direction

R_od(N) = sin(π Θ N/2 θ_v) / (π Θ N/2 θ_v)   (4-58)


R_od(k_θ) = sin(1000 π δ k_θ/FL) / (1000 π δ k_θ/FL)   (4-59)

R_od(k_θ) = sin(π Θ k_θ) / (π Θ k_θ)   (4-60)

δ - detector dimension
Θ - instantaneous field of view
FL - lens focal length

MTF of a Detector in the Across-Scan Direction

MTF = R_od²(N) = sin²(π Θ N/2 θ_v) / (π Θ N/2 θ_v)²   (4-62)
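The detector MTFs above are sinc functions. A sketch of Eq. (4-60), with Θ in mrad and k_θ in cycles/mrad, so the first zero falls at k_θ = 1/Θ:

```python
import math

def detector_mtf(k_theta, theta_mrad):
    # Eq. (4-60): Rod(k) = sin(pi * Theta * k) / (pi * Theta * k)
    x = math.pi * theta_mrad * k_theta
    return 1.0 if x == 0.0 else math.sin(x) / x

# First zero of the sinc at k = 1/Theta, e.g. Theta = 0.25 mrad -> k = 4 cycles/mrad
z = detector_mtf(4.0, 0.25)
```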


Chapter V

LABORATORY PERFORMANCE MODEL

F. A. Rosell

A. INTRODUCTION

In Chapter IV, we discussed the notion of an image SNR and methods of calculating the image SNR obtainable on a sensor's display for test images of simple geometry such as rectangles and periodic bar patterns. If the observer's image SNR requirements were known, then the probability that an observer will discern a particular image under a given set of operating conditions can be analytically predicted. The ability to perform such predictions is of considerable aid in the process of designing sensory systems and in the evaluation of proposed designs.

The most commonly used and specified measure of a thermal imaging system's sensitivity and resolution has been designated the minimum resolvable temperature or MRT. The use of this term implies that the test image is a four-bar pattern with the length of each bar in the pattern being seven times its width. The MRT measure is adapted from and is directly analogous to a similar measure devised by television engineers. In the TV case, threshold spatial resolution is plotted as a function of the test pattern's irradiance level, while in the thermal imaging system (TIS), the threshold incremental temperature difference about a given background temperature is plotted as a function of the bar pattern spatial frequency. Aside from the interchange of coordinates, the measures used for TIS and TV systems are conceptually identical. The difference in presentation of data is probably due to practical considerations. In the visible spectrum, bar patterns are easy to make, and the test procedure most often used is to simultaneously image a number of patterns of different spatial frequency. At a given light level, the observer is asked to select the pattern of highest spatial frequency that he can just barely resolve as a bar pattern. In the TIS case, where patterns are difficult to construct, it was more usual to image a single pattern and then increase or decrease its temperature differential until the pattern becomes just barely perceptible. However, more sophisticated test equipment with multiple test patterns is now available for testing TIS systems.

A less commonly used measure for TIS systems is the minimum detectable temperature or MDT. This measure implies that the test image is a square. MDT is plotted as a function of the square's dimensions, and thus it is also a measure of sensitivity and resolution, as is MRT. However, MDT is a much less sensitive measure of the effects of the sensor's aperture response. Both MRT and MDT are appropriate measures when the final user of the displayed information is a human observer, since these measures include the ability of the observer to integrate spatially and temporally, and the threshold SNR which determines MRT and MDT is that which has been experimentally determined for observers.

In Section B of this chapter the observer threshold SNRs are briefly summarized. The MRT and MDT equations are derived in terms of these thresholds, and some sample calculations are performed. In Section C, the observer thresholds and the limitations of the threshold measurements are briefly discussed. The psychophysical experiments which led to the threshold values are discussed further in Appendix H.


B. MINIMUM RESOLVABLE TEMPERATURE (MRT) AND MINIMUM DETECTABLE TEMPERATURE (MDT)

The display SNR_D's obtainable from a sensor when the test images are simple rectangles or periodic bar patterns were derived in Chapter IV. In Section C of this chapter, the results of psychophysical experimentation performed for the purpose of determining the SNR_D required by the observer to discern the simple test images at a given level of probability will be discussed. By matching the SNR_D obtainable from the sensor to that required by the observer, the probability of discerning the image on the sensor's display can be determined. In particular, when the SNR_D required by the observer is set equal to its threshold value, for which the probability of discerning a particular image is 50%, and when the SNR_D obtainable from the sensor is equal to this threshold value, a measure of sensor resolution and sensitivity is obtained. In the case of a TIS or FLIR, the measure is known as the minimum resolvable temperature or MRT when the input test image is a bar pattern and the minimum detectable temperature or MDT when the test image is a square.

The MRT measure is by far the most commonly used. In making the MRT measurement for a real sensor, an observer views the displayed image of a four-bar pattern of 7:1 bar aspect ratio as the temperature differential between it and a uniform background of a fixed temperature is varied either continuously or randomly. In principle, that temperature at which the pattern is discerned 50% of the time by a number of observers or an ensemble of observations is known as the MRT for the spatial frequency of the pattern used. In practice, the measurement is usually made by a trained observer who varies the temperature differential until the pattern can just barely be "resolved." These two methods of measurement have been shown to be fairly comparable and to give consistent results for any given observer and also between trained observers (Ref. 5-1). The MRT is plotted as a function of spatial frequency.

During the measurement of MRT, the observer is generally free to optimize his viewing distance and to optimize the displayed image contrast and light level. These conditions are not always realizable when viewing real scenes of large dynamic range. The MRT should be interpreted as the absolute minimum temperature difference observable under optimum viewing conditions. From time to time a resolvable temperature or RT concept, which includes dynamic range constraints, is proposed, but standards for this measurement have not been established.

MDT is not often used in either system design, analysis, or specification, although it may be relevant to certain sensory system applications. Its measurement and its conditions of measurement are similar to those used to obtain MRT except that the test image is a square. Both MRT and MDT are analytically predictable, and if the system parameters are accurately known, the accuracy of the prediction should be within the accuracy of the measurement.

For a 50% probability of detecting rectangles and bar patterns, and an observer integration time of 0.1 s, the recommended values of observer thresholds are as follows.

For Rectangles

If the observer is free to vary his display viewing distance at will, as in making an MDT measurement, the recommended value of the threshold SNR_DT is 2.8.

If the display viewing distance is fixed, the recommended value for SNR_DT is 2.8 for squares if the angular subtense of the displayed image is neither less than about 4′ nor greater


than about 30′ of arc relative to the observer's eye. If this is not the case, the threshold value may be obtained from Fig. 5-5.

If the display viewing distance is fixed, the recommended value of threshold SNR_DT is 2.8 for rectangles so long as the angular subtense of the rectangle is not greater than about 30′ of arc relative to the observer's eye in both dimensions simultaneously. The displayed rectangular image can subtend up to 6° or more in one direction and an SNR_DT of 2.8 is still appropriate. If both dimensions are greater than about 30′ of arc, assume that the eye integrates an area around the perimeter which has an angular subtense of 8′ to 10′ of arc* relative to the observer's eye.

For Periodic Bar Patterns

If the observer is free to vary his display viewing distance at will, as in making an MRT measurement, the recommended value of the threshold SNR_DT is 2.5. This value generally holds for the lower line numbers but will give somewhat pessimistic threshold resolution predictions at the higher spatial frequencies (see Figs. 5-6 and 5-7).

If the observer viewing distance is fixed, the threshold value of SNR_D tends to increase at the lower spatial frequencies and decrease at the higher spatial frequencies at short viewing distances, and conversely (see Fig. 5-6).

In the aperiodic (rectangular image) case, the SNR_D required to achieve either higher or lower probabilities of detection can be estimated from Table 5-1. The same table can be used for the periodic case by use of the ratio column, and by noting that an SNR_DT of 2.5 is usual for bar patterns rather than 2.8 as for rectangles.

Table 5-1 - SNR_D Required to Achieve a Given Detection Probability for Rectangular Images

P_d    SNR_D   SNR_D/SNR_DT  |  P_d     SNR_D   SNR_D/SNR_DT
0.1    1.40    0.50          |  0.7     3.36    1.20
0.2    1.88    0.67          |  0.8     3.72    1.33
0.3    2.24    0.80          |  0.9     4.20    1.50
0.4    2.52    0.90          |  0.95    4.59    1.64
0.5    2.80    1.00          |  0.99    5.32    1.90
0.6    3.08    1.10          |  0.995   5.60    2.00

*The 8′ to 10′ of arc does not appear consistent with the 30′ dimension, but as is discussed in Section V.C, the error in assuming complete spatial integration does not become appreciable until the angular subtense of the image becomes greater than about 30′ relative to the eye.


The MRT is quantitatively determined by setting the SNR_D equation equal to its threshold SNR_DT value and solving for ΔT, which now becomes the MRT. When there are several significant noise sources, it is sometimes necessary to plot the SNR_D as shown in Fig. 5-1 for various values of ΔT. The threshold value, assumed to be 2.5, is also plotted as the solid horizontal line in the figure. The intersections of the SNR_D curves with the SNR_DT curve give the threshold resolution for the given value of ΔT, which is then also the MRT. If a higher value of probability is desired, the value of SNR_DT is increased. By use of Table 5-1 we see that a 0.995 probability requires 2 times the SNR_D of a probability of 0.5, and thus a horizontal dashed line is drawn at SNR_D = 5 in Fig. 5-1, and again the intersections give the spatial frequency obtained at any given value of ΔT. When a single noise term suffices, the MRT may be written in general from Eq. (4-42) as

MRT = SNR_DT [α/(2 ε T_e)]^1/2 [N β^1/2(N)/R_sf(N)] 4f / [π τ_o D_o (n_s n_p ω η_sc)^1/2 D*(Ω_s) K_M]   (5-1)

or if the detectors are BLIP

MRT = SNR_DT [α/(2 ε T_e)]^1/2 [N β^1/2(N)/R_sf(N)] 2 / [π τ_o D_o (n_s n_p ω η_sc)^1/2 η_cs^1/2 D*(2π) K_M]   (5-2)

Fig. 5-1 - Display SNR_D vs spatial frequency (cycles/mrad) for various values of ΔT (T = 300 K). Also shown are the SNR_D values at threshold and for a 0.995 probability of detection.


In either case, by modification of Eq. (4-33) to include MTF parameters,

MRT = SNR_DT [α/(2 ε T_e Δf_v)]^1/2 [N β^1/2(N)/R_sf(N)] NEΔT.   (5-3)

By using the parameters of Table 4-2 and Section F of Chapter IV,

MRT = 2.5 [(4/3)/(2 × 7 × 0.1 × 3.46 × 10⁶)]^1/2 (0.157) N β^1/2(N)/R_sf(N)

= 2.06 × 10⁻⁴ N β^1/2(N)/R_sf(N)   (5-4)

Also, for the parameters of Table 4-2, N = 72 k_θ, so that

MRT = 1.48 × 10⁻² k_θ β^1/2(k_θ)/R_sf(k_θ)   (5-5)

which is plotted in Fig. 5-2 by use of the MTF data of Section E of Chapter IV.
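Equation (5-5) is straightforward to evaluate. A sketch, with β and R_sf left as arguments and set to unity here purely for illustration; the actual values come from the MTF data of Section E:

```python
import math

def mrt(k_theta, beta=1.0, rsf=1.0):
    # Eq. (5-5): MRT = 1.48e-2 * k * beta^(1/2)(k) / Rsf(k)  [K], k in cycles/mrad
    return 1.48e-2 * k_theta * math.sqrt(beta) / rsf

m_ideal = mrt(1.0)            # ~0.0148 K at 1 cycle/mrad with unity MTF
m_blur = mrt(1.0, rsf=0.2)    # ~0.074 K when the square-wave response has fallen to 0.2
```

With unity MTF the MRT rises linearly with spatial frequency; a falling R_sf at high frequency pushes it up faster, which is the knee visible in Fig. 5-2.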

Fig. 5-2 - MRT vs spatial frequency (cycles/mrad) for the assumed basic FLIR configuration

The procedure for determining the MDT is similar to that used to determine the MRT. For the case where there is a single source of noise, Eqs. (5-1), (5-2), and (5-3) can be used if N β^1/2(N)/R_sf(N) is replaced by N [Γ(N) ξ(N)]^1/2.

For example, Eq. (5-3) becomes

MDT = SNR_DT [α/(2 T_e Δf_v)]^1/2 N [Γ(N) ξ(N)]^1/2 NEΔT.   (5-6)

Note that in this derivation we have assumed a square image (ε = 1) and that the MTF is the same in both the vertical and horizontal directions, so that [Γ_h(N) Γ_v(N)]^1/2 = Γ(N) and [ξ_h(N) ξ_v(N)]^1/2 = ξ(N). In general, the horizontal and vertical MTFs are not, of course, equal.


Again, by using the parameters of Table 4-2 and Section F of Chapter IV,

MDT = 2.8 [(4/3)/(2 × 0.1 × 3.46 × 10⁶)]^1/2 (0.157) N [Γ(N) ξ(N)]^1/2

= 6.10 × 10⁻⁴ N [Γ(N) ξ(N)]^1/2.   (5-7)

Or alternatively

MDT = 4.39 × 10⁻² k_θ [Γ(k_θ) ξ(k_θ)]^1/2.   (5-8)

Because both N and k_θ are awkward units when describing a square image, the angular subtense of the square image, Δθ = 1/(2k_θ), is often used, in which case

MDT = 2.19 × 10⁻² [Γ(Δθ) ξ(Δθ)]^1/2 / Δθ.   (5-9)

This equation is plotted in Fig. 5-3 using the parameters of Table 4-2 and the MTF data of Section E of Chapter IV.
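Equation (5-9) as a one-liner; Γ and ξ default to unity (ideal MTF) purely for illustration:

```python
import math

def mdt(delta_theta_mrad, gamma=1.0, xi=1.0):
    # Eq. (5-9): MDT = 2.19e-2 * [Gamma * xi]^(1/2) / delta_theta  [K], dtheta in mrad
    return 2.19e-2 * math.sqrt(gamma * xi) / delta_theta_mrad

m1 = mdt(1.0)    # ~0.0219 K for a 1-mrad square
m01 = mdt(0.1)   # ~0.219 K: smaller squares need a larger temperature difference
```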

Fig. 5-3 - MDT vs angular subtense Δθ (mrad) of a square image for the assumed basic FLIR configuration

C. OBSERVER THRESHOLDS

The two standardized test objects for thermal imaging systems are the aperiodic square and the periodic four-bar pattern, with each bar being seven times as long as it is wide. However,


the models developed herein apply to other aperiodic and periodic test objects within certain limits to be discussed. The observer threshold signal-to-noise ratios have been measured through psychophysical experimentation. A few of these experiments are reviewed in Appendix H.

Generally, the procedure was to generate an image either electronically or by use of a TV camera tube, add white noise of Gaussian distribution in the video channel, and display the noisy image on a CRT monitor. For aperiodic objects, the location of the image on the display and the SNR_D of the image were randomly varied. The observer, under a forced-choice criterion, was asked to specify the image location whether he could see it or not. The data were corrected for chance.

For periodic images, the location of the pattern was fixed while the SNR_D was varied. Two methods of determining threshold resolution were used. In the method of random SNR_D variation, the pattern was of fixed spatial frequency. The probability of discerning the bars within the pattern was then determined as a function of SNR_D. In the method of limits, a number of bar patterns of different spatial frequencies were simultaneously imaged and the SNR_D was systematically increased or decreased. The SNR_D level at which the bar pattern could be just barely discerned was taken to be the threshold* SNR_DT. The threshold SNR_DT determined by using the method of limits was found to be identical to that determined by using the method of random SNR_D variation to within the experimental error. The value of the SNR_DT was found to be 2.5.

The corrected probability of detection for rectangular images is plotted vs SNR_D in Fig. 5-4. The rectangles in all cases were 4 TV scan lines wide and from 4 to 180 scan lines high. The angular subtense of the rectangles relative to the observer's eye varied from 0.13° × 0.13° to 0.13° × 6.02°. The ability of the eye to integrate over an angle as large as 6° or possibly more was an unsuspected result. Therefore the experiment was repeated using squares, with the result shown in Fig. 5-5. The uptilt of the threshold SNR_DT for the smallest square is attributed to eye MTF effects, since the image is near the size of the eye's effective blur circle diameter (about 3.2′ of arc or 0.053°). When the image becomes larger than 0.27° or 16′ of arc, the apparent SNR_DT, which was calculated assuming that the observer spatially integrates all of the image area, begins to slowly increase, but the increase is not large until the displayed image becomes larger than about 0.5° on a side. This result is attributed to the eye's acting as a differentiator at the higher light levels; as a result, the eye only integrates around the perimeter of a large image. In the above experiment, the observer was 28 in. from an 8-in.-high monitor, and as can be seen from Fig. 5-5, the eye's apparent threshold SNR_DT is only approximately constant for images which are smaller than about 16 × 16 scan lines in size for a conventional television monitor with 490 active scan lines.

The probability model used to fit the experimental points of Fig. 5-4 is based on a model discussed in detail by Legault (Ref. 5-2). In this model it is assumed that the mean number of photoconverted electrons within the sampling area and period has become sufficiently large so that the Gaussian or normal probability distribution law given by

f(z) = [exp(−z²/2)]/(2π)^1/2   (5-10)

*By threshold, a 50% probability of detection is implied.


Fig. 5-4 - Corrected probability of detection vs display signal-to-noise ratio SNR_D required for rectangular images of size 4 × 4, 4 × 64, 4 × 128, and 4 × 180 scan lines

Fig. 5-5 - Threshold SNR_DT required to detect square images of various size (in scan lines) and angular extent

becomes a good approximation to the Poisson distribution law, which actually represents the signal and noise processes. In the above, Z is a random variable numerically equal to

Z = SNR_D − SNR_DT,   (5-11)

where SNR_DT is the threshold display signal-to-noise ratio, which is defined to be that needed to obtain a detection probability of 0.5. The random variable Z is of zero mean and unit variance. Other values of probability are obtained from the formula

P(Z) = 1/2 + (2π)^−1/2 ∫₀^Z exp(−z²/2) dz,   (5-12)


which cannot be integrated in closed form but is widely available in standard mathematical tables. In Table 5-1, the SNR_D required to obtain various levels of detection probability are given, based on a value of SNR_DT of 2.8 for a 50% probability of detection.
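Today the integral of Eq. (5-12) is more easily evaluated with the error function than with tables. A sketch; it implements the model of Eqs. (5-11) and (5-12) directly, so it reproduces the behavior of Table 5-1 rather than every printed entry:

```python
import math

def detection_probability(snr_d, snr_dt=2.8):
    # Eqs. (5-11), (5-12): P = Phi(Z), with Z = SNR_D - SNR_DT and Phi the
    # standard normal CDF, written here in terms of the error function
    z = snr_d - snr_dt
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p_threshold = detection_probability(2.8)  # 0.5 at threshold, by construction
```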

The value of 2.8 for the threshold SNR_DT results in part from the assumed value of 0.1 s for the eye's integration time. If a different value had been assumed, a different value for the threshold would have been obtained. In other words, the SNR_D is a derived quantity based on a measured threshold video SNR and an assumed ability of the eye to integrate perfectly in space and time.

The threshold SNR_DT curves for bar pattern images which were generated by a TV camera are shown in Fig. 5-6 for three viewing distances. As one might intuitively expect, the low-frequency bar patterns were most easily seen at a large viewing distance, while the high-frequency patterns were most easily seen, as indicated by a lower SNR_DT, at the shorter distance. In Fig. 5-7, the SNR_DT required at a 28-in. viewing distance is compared to that required when viewing distance was optimized by the observer.

Fig. 5-6 - Threshold display signal-to-noise ratio vs bar pattern spatial frequency (lines per picture height) for display-to-observer viewing distances of 14 in., 28 in., and 56 in. Televised images at 875 scan lines; display was 8 in. high.

Fig. 5-7 - Threshold display signal-to-noise ratio vs bar pattern spatial frequency (lines per picture height) for optimum viewing distance and a 28-in. viewing distance, from one observer. Display was 8 in. high.


The threshold value of SNR_D is seen to be generally less for bar patterns than for isolated rectangles when the measurement is made at optimum viewing distance. A value of 2.5 rather than 2.8 would appear appropriate for the range of spatial frequencies from 0 to 500 lines/pict. ht. This difference in threshold may be due in part to the method of measurement. With bar patterns, the location of the pattern is known, no chance is involved, and the definition of discerning or detecting a bar pattern is of a very subjective nature.

Both the MRT and MDT are measured under optimum laboratory conditions. In general, the dynamic temperature range of the test patterns is small and the display gain (contrast) control can be set at a very high value. When viewing a real scene with a wide dynamic temperature range, the ability to adjust display gain may become limited. As discussed in Appendix H, retinal fluctuation noise, rather than system-generated noise, can be the principal noise limiting threshold resolution.

D. SUMMARY OF SIGNIFICANT EQUATIONS

Minimum Resolvable Temperature (General)*

MRT = SNR_DT [α/(2 ε T_e)]^1/2 [N β^1/2(N)/R_sf(N)] 4f / [π τ_o D_o (n_s n_p ω η_sc)^1/2 D*(Ω_s) K_M]   (5-1)

MRT - minimum resolvable temperature (K)
SNR_DT - threshold display SNR
α - picture aspect ratio
ε - bar length-to-width ratio
T_e - eye integration time (s)
N - spatial frequency (lines/pict. ht.)
β(N) - noise filtering function
R_sf(N) - square-wave flux response
f - lens focal ratio
τ_o - lens transmittance
D_o - lens diameter (cm)
n_s - number of detectors in series
n_p - number of detectors in parallel
ω - instantaneous field of view (sr)
η_sc - scan efficiency
D*(Ω_s) - detectivity for viewing solid angle Ω_s (cm Hz^1/2 W⁻¹)
K_M - ΔT conversion factor (W cm⁻² sr⁻¹ K⁻¹)

Spatial Frequency Conversion

N = 2000 θ_v k_θ

θ_v - vertical field of view (rad)
k_θ - spatial frequency (cycles/mrad)

*Use when the system noise can be described by a single noise source such as that generated in the detector.


Minimum Resolvable Temperature (BLIP)*

MRT = SNR_DT [α/(2 ε T_e)]^1/2 [N β^1/2(N)/R_sf(N)] 2 / [π τ_o D_o (n_s n_p ω η_sc)^1/2 η_cs^1/2 D*(2π) K_M].   (5-2)

η_cs - cold shield efficiency
D*(2π) - detectivity for a viewing solid angle of 2π sr

Minimum Resolvable Temperature (General)**

MRT = SNR_DT [α/(2 ε T_e Δf_v)]^1/2 [N β^1/2(N)/R_sf(N)] NEΔT.   (5-3)

Δf_v - reference video bandwidth (Hz)
NEΔT - noise equivalent temperature difference (K)

Minimum Detectable Temperature

Eqs. (5-1), (5-2), and (5-3) also apply to MDT if N β^1/2(N)/R_sf(N) is replaced by N [Γ(N) ξ(N)]^1/2. For example, Eq. (5-3) becomes

MDT = SNR_DT [α/(2 T_e Δf_v)]^1/2 N [Γ(N) ξ(N)]^1/2 NEΔT.   (5-6)

Γ(N) - noise filtering function
ξ(N) - noise increase function

REFERENCES

5-1 Rosell, F.A., and Willson, R.H., Chapter 5, Perception of Displayed Information, edited by L.M. Biberman, Plenum Press, New York, 1973.

5-2 Legault, R.R., Chapter 4, Photoelectronic Imaging Devices, Vol. 1, Plenum Press, New York, 1971.

**Use when the system noise can be described by a single noise source, assuming the MTF equal in the x and y directions; if not, use [Γ_h(N) Γ_v(N)]^1/2 and [ξ_h(N) ξ_v(N)]^1/2.


Chapter VI

STATIC FIELD PERFORMANCE MODELS

F. A. Rosell

A. INTRODUCTION

The analytical models developed in Chapters IV and V can be used to predict the incremental temperature difference about a given temperature that is required to detect either aperiodic (isolated rectangles) or periodic (bar patterns) images of known geometry. That is all the models can be expected to do. However, there are continuing efforts to correlate threshold resolution as measured or predicted with the ability to discriminate images of real scene objects. The levels of visual discrimination typically include simple detection, orientation, recognition, and identification, but other levels may be appropriate to specific tasks. If we are to predict the sensor's static field performance, we must correlate sensor resolution with visual discrimination levels and include environmental factors such as atmospheric transmission, atmospheric turbulence, scene temperature, and dynamic range. By static field performance it is usually meant that we desire to estimate the range at which a real object can be discriminated on the display at the desired level, assuming that the object is in the sensor's field of view and that the observer is looking at the object. The scene object may be moving, but by use of the word static we imply that search is not part of the observer's tasks.

In attempting to apply the MRT or MDT models to the visual discrimination of displayed images of real scene objects, the basic premise is that the higher the resolving power of the sensor, the higher will be the level of visual discrimination. It is intuitively obvious that this statement is generally true, but it is not so obvious that the sensor resolution as measured in the threshold sense in the laboratory under optimum conditions will in fact be realized in a field environment. Recall that the MRT measurements are made under conditions where the display can be optimized for a test pattern scene of very small dynamic range. In a real environment, the sensor resolution can be degraded considerably by dynamic range limitations imposed by either the sensor, the display, or the observer. The current sensor models in use assume that if the incremental displayed image signal exceeds the noise due to background fluctuations or other system noise by a certain amount, the observer will detect it, given that he is looking at it, regardless of the contrast of the displayed image. Displayed image contrast and displayed image SNR are not related quantities; e.g., a large low-contrast image can have the same SNR_D as a small bright one. Also, as is discussed in this chapter and Appendix I, the observer appears to require more sensor resolution when the resolution is noise limited as opposed to spatial frequency response limited.

In some cases, a level of visual discrimination can be defined easily, but there can be considerable difficulty in others. An aircraft may be easily detected against a cloudless sky given that it is close enough and that the observer is looking at it. The meaning of detection is clear in this case. The detection of a vehicle against a complex background such as a forest may be very difficult, and in fact, it may be necessary to recognize the vehicle before it can be said to be detected. Alternatively, a vehicle may be considered recognized at times because of some


characteristics of its image or because of its location or velocity, even though the sensor resolution is insufficient to perform a classical shape recognition. In field trials, the number of objects such as vehicles which can be employed when performing recognition trials is usually limited. After a short period of training, observers are often able to "recognize" without discerning shape because of certain object peculiarities such as relative size, location of hot spots, and use of comfort heaters. The recognition criteria in these cases may be more a "telling the difference between." Thus even "measured" recognition or identification data may not truly reflect the level of discrimination implied.

In spite of the limitations discussed above, the models developed in Chapters IV and V are used to predict the range at which real scene objects can be discriminated, and over a fairly wide range of conditions the predictions will hold if the analyst exercises good judgment in selecting the discrimination criteria based on experimentally measured results, on analogy to those results, or on prior experience.

B. LEVELS OF VISUAL DISCRIMINATION OF DISPLAYED IMAGES

One of the earliest attempts to relate threshold resolution functionally with the visual discrimination of images of real scenes is attributed to John Johnson (Ref. 6-1). The levels of visual discrimination were arbitrarily divided into four categories: detection, orientation, recognition, and identification, with detection being the lowest and identification being the highest discrimination level. The basic experimental scheme was to move a real scene object such as a vehicle out in range until it could be just barely discerned on a sensor's display at a given discrimination level. Then the real scene object was replaced by a bar pattern of contrast similar to that of the scene object. The number of bars in the pattern per minimum object dimension was then increased until the bars could just barely be individually resolved. In this way the detectability, recognizability, etc. of the scene object can presumably be correlated with the sensor's threshold bar pattern resolution. The basic idea makes sense: the better the sensor's resolution, the higher the level of visual discrimination should be. Johnson's basic notion is shown schematically in Fig. 6-1. The real scene object is replaced by a bar pattern whose bar spacing is some function of the minimum dimension of the object and the level of visual discrimination desired. The definitions of the various levels of discrimination are given in Table 6-1 together with the resolution required in lines or half cycles per minimum object dimension. In addition to sufficient resolution, Johnson noted that image SNR had to be sufficient, but the definition of image SNR was not clear.

Fig. 6-1 - Schematic representation of the Johnson approach to resolution vs level of visual discrimination


NRL REPORT 8311

Table 6-1 - Definition of Visual Discrimination Level and Resolution Required per Minimum Object Dimension

  Discrimination Level   Meaning                                               Resolution Required per
                                                                               Minimum Object Dimension
                                                                               (lines or half cycles)

  Detection              An object is present                                  2 ± 1
  Orientation            The object is approximately symmetrical or            2.8 ± 0.8
                         unsymmetrical and its orientation may be discerned
  Recognition            The class to which the object belongs may be          8 ± 1.4
                         discerned (e.g., tank, truck, man)
  Identification         The target can be described to the limit of the       12.8 ± 3
                         observer's knowledge (e.g., T-34 tank, friendly
                         jeep)

It has become evident in field trials that more levels of visual discrimination than the four originally proposed by Johnson would be desirable. In particular, the gap between detection and recognition is considered to be too large. As noted previously, scene objects may often be "recognized" even when the resolution available is inadequate to perform a classical shape recognition. The Night Vision Laboratory has proposed one intermediate step between detection and recognition. This step is called classification, which is defined to be a resolving capability that is insufficient to recognize a specific type of vehicle but sufficient to differentiate between, say, a wheeled and a tracked vehicle. This intermediate level appears desirable, but the word classification has a specific, different meaning in naval ship discrimination and should be avoided. In Table 6-2 a more detailed discrimination level breakdown is proposed which fits a wider set of field conditions. Observe that the orientation level, which has been little used in the past, has been dropped. The number of detection levels has been increased to reflect the fact that many objects are recognized correctly even though the resolution at the object is far below that required to recognize shape.

Since the early Johnson results, it has been found that the resolution required to visually discriminate real scene objects is much more variable than Table 6-1 suggests. In particular, the viewing aspect has a pronounced effect, as shown in Appendix I. For example, it was found to be possible to recognize a destroyer with only 6 lines per minimum dimension on a beam viewing aspect, but 20 lines are required on a bow or stern viewing aspect. Table 6-3 is felt to be somewhat more representative of the resolution required for targets in various viewing aspects for the tasks of Table 6-2. Also, it is felt that Table 6-3 applies primarily to fairly high video SNR conditions. The number of lines required may more than double under low SNR_v conditions, as will be discussed.


F. A. ROSELL

Table 6-2 - Levels of Visual Discrimination

  Task                    Level   Description                                      Example

  Detection                 0     A blob has been discerned that may or may       A bright spot in a scene may be a tank,
                                  not warrant further investigation.              a smudge pot, a tree, an animal, a
                                  Probability of false alarm is high.             campfire, etc. No appreciable cues.

                            1     A blob has been discerned that has a            A stationary blob on a road has a
                                  reasonable probability of being the object      reasonable probability of being a
                                  sought, because of auxiliary but limited        vehicle but could also be a puddle or
                                  cues that definitely warrant further            a tree shadow.
                                  investigation if possible. Probability of
                                  false alarm is moderate.

                            2     A blob has been discerned that has a high       A blob moving at high speed on the
                                  probability of being the object sought          horizon sky has a high probability of
                                  because of strong cues such as location,        being an aircraft. A hot moving object
                                  motion, radiant signature, and reported         on a road is probably a vehicle.
                                  location. Evidence is sufficient to abandon
                                  other search. Probability of false alarm is
                                  low to moderate.

  Type Recognition          3     An object has been discerned with sufficient    Differentiate between a tracked and a
                                  clarity that its general class can be           wheeled vehicle.
                                  differentiated.

  Classical Recognition     4     An object has been discerned with sufficient    Passenger car, van, pickup truck, tank,
                                  clarity that its particular class can be        armored personnel carrier.
                                  definitely established.

  Identification            5     An object has been discerned with sufficient    M-60 tank, F-4 aircraft, a particular
                                  clarity not only to establish the particular    person, etc.
                                  class of object but also the specific type
                                  within the class.


Table 6-3 - Resolution Required for Various Levels of Visual Discrimination

  Discrimination Task      Level   Estimated Resolution Required per Minimum
                                   Object Dimension (lines or half cycles)

  Detection                  0     1-3
                             1     2-4
                             2     2-5
  Type Recognition           3     4-10
  Classical Recognition      4     4-20
  Identification             5     9-30

The spread in values for required resolution in Table 6-3 could perhaps be reduced if some method of taking aspect ratio into account were developed. One such effort by Rosell and Willson (Ref. 6-2) is shown in Fig. 6-2. An equivalent bar pattern is created in object space which has bars of width equal to the minimum dimension of the scene object divided by k_d, the discrimination factor, which is the number of lines required per minimum object dimension for the discrimination level desired. The lengths of the bars in the equivalent bar pattern are made equal to the maximum dimension of the scene object. Thus, as the aspect ratio E (max/min dimension) of the scene object increases, the bars increase in length by E and the SNR_D increases as E^1/2.
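The construction can be sketched numerically. The helper below is ours, not the report's; the 4.6 m by 2.3 m object and the recognition criterion k_d = 8 in the example are illustrative values only.

```python
def equivalent_bar_pattern(min_dim_m, max_dim_m, k_d):
    """Rosell-Willson equivalent bar pattern in object space:
    bar width  = minimum object dimension / k_d,
    bar length = maximum object dimension,
    E          = bar length / bar width (the bar aspect ratio)."""
    bar_width = min_dim_m / k_d
    bar_length = max_dim_m
    # E equals k_d times the scene object aspect ratio (max/min).
    E = bar_length / bar_width
    return bar_width, bar_length, E

# Hypothetical 2:1 object (4.6 m x 2.3 m) at the recognition level (k_d = 8):
width, length, E = equivalent_bar_pattern(2.3, 4.6, 8)
```

For this example E = 16, the value used in the recognition discussion of Section C; SNR_D then scales as the square root of E.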

Fig. 6-2 - The equivalent bar pattern for the identification of a real scene object


The Naval Air Development Center, Warminster, Pa., has used a pixel approach in determining resolution requirements for discrimination of ships. If the sensor resolution at the scene object is ΔX, as determined from threshold resolution measured in the horizontal direction for TV or in the scan direction for FLIR, then the number of pixels is equal to the area of the scene object divided by ΔX². A table similar to Table 6-3 is made up by use of a pixel requirement instead of the k_d criteria for the various levels of visual discrimination. For example, for a specific ship viewed broadside, it was found that 36 pixels were needed for simple detection, 100 to discern that the object was a ship, 500 to determine superstructure location, etc. The pixel approach includes object viewing aspect ratio considerations, since the number of pixels increases as the aspect ratio increases.
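The pixel bookkeeping can be sketched as follows. The 36/100/500 thresholds are the ones quoted above for a specific ship; the object area and ΔX in the usage example are hypothetical, and the function names are ours.

```python
def pixel_count(object_area_m2, delta_x_m):
    """Number of resolution elements on the object: area / ΔX²."""
    return object_area_m2 / delta_x_m ** 2

# Thresholds quoted in the text for one specific broadside ship.
SHIP_THRESHOLDS = [
    ("simple detection", 36),
    ("discern that the object is a ship", 100),
    ("determine superstructure location", 500),
]

def levels_met(object_area_m2, delta_x_m, thresholds=SHIP_THRESHOLDS):
    """Return the discrimination tasks supported by the available pixel count."""
    n = pixel_count(object_area_m2, delta_x_m)
    return [task for task, need in thresholds if n >= need]

# Hypothetical 900 m^2 broadside area resolved with ΔX = 2 m gives 225 pixels:
supported = levels_met(900.0, 2.0)
```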

Objects such as airplanes and tanks are generally more recognizable from the side than from the frontal aspect, and taking viewing aspect into account, either in calculating SNR_D or in setting the resolution requirement, would appear desirable. However, the amount of correction required is highly variable and in many cases totally unwarranted. In general, it has been customary to include an aspect ratio in predicting the range at which real objects can be discriminated whether it is correct to do so or not. In the following, the k_d criteria of Table 6-1 together with the scene object aspect ratio E will be used in predicting range.

C. RANGE PREDICTION

If we could correlate the sensory system resolution with levels of real object discrimination, and if we knew enough about the scene characteristics, including the thermal signature of a scene object and its background, the effects of the atmosphere intervening between the scene and sensor, the interactions between the scene, sensor, display, and observer dynamic ranges, etc., then it should be possible to predict the range at which a sensor-augmented observer can discriminate a real scene object. In the current state of the modeling art, not all of the factors will be well enough known, and great range prediction accuracy cannot be expected. Nevertheless, the accuracy can be good enough in many cases to be useful for system design purposes.

In the following, we will show the traditional methods of predicting the range at which an object can be discriminated at the 50% level of probability. The cumulative probability of discrimination will be determined by using the NVL resolution-dependent method, assuming an atmospheric transmission of unity. The effect of atmospheric transmission, which is both to shorten the ranges at which the 50% probability level of discrimination occurs and to make the cumulative probability vs range curve steeper, will be shown next. Finally, the noise- vs resolution-limited effects discussed in Appendix I will be included. As the level of complexity in the range prediction increases, the confidence in the prediction methods may decrease, even though the trends obtained through use of the more complex methods are in general consonance with experimentally observed results. Verification, or modification as may be required to bring the new methods into closer agreement with reality, should be easily obtainable through continued psychophysical experimentation.

As conventionally performed, the first step in range prediction is to convert the MRT to take into account the scene object aspect ratio (maximum to minimum object dimension) if use is to be made of the concept that an object is more readily discerned if the aspect ratio is greater than unity. The MRT is computed or measured with a bar aspect ratio, E, of 7. For simple detection by use of the two-line or one-cycle per minimum scene object dimension criterion, the


quantity E will be 2 for an object of 1:1 aspect and 4 for an object of 2:1 aspect. For the latter case,

    MRT' (2:1 object aspect, detection) = MRT √(7/E)    (6-1)
                                        = MRT √(7/4),

where MRT' is an adjusted MRT for the specific case of detection. On the other hand, if the visual discrimination level is recognition of an object of 2:1 aspect with an eight-line or four-cycle per minimum object dimension criterion, E = 16 and

    MRT' (2:1 object aspect, recognition) = MRT √(7/16)    (6-2)

and the MRT' is seen to be smaller than the measured MRT. Observe that at any given scene object range, recognition requires a spatial resolution that is four times better than for detection by the classical Johnson criteria.
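Equations (6-1) and (6-2) fold into one helper, sketched below under the assumption that the MRT is given at the standard 7:1 bar aspect ratio; the function and argument names are ours.

```python
import math

def adjusted_mrt(mrt, k_d, scene_aspect, reference_aspect=7.0):
    """MRT' = MRT * sqrt(reference_aspect / E), Eqs. (6-1) and (6-2),
    where E = k_d * scene_aspect is the equivalent bar pattern aspect ratio."""
    E = k_d * scene_aspect
    return mrt * math.sqrt(reference_aspect / E)

# Detection (k_d = 2) of a 2:1 object: E = 4,  MRT' = MRT * sqrt(7/4) > MRT.
# Recognition (k_d = 8) of a 2:1 object: E = 16, MRT' = MRT * sqrt(7/16) < MRT.
detection_mrt = adjusted_mrt(1.0, 2, 2.0)
recognition_mrt = adjusted_mrt(1.0, 8, 2.0)
```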

The use of E, the bar aspect ratio, to compute the discernibility of bar patterns is warranted on the basis of experimental psychophysical evidence. Its use for the purpose of trying to indicate quantitatively that a scene object of larger aspect is more recognizable or identifiable than one of smaller aspect is somewhat questionable. The resolution required for recognition and identification of several scene objects, including a tank, a destroyer, and an aircraft carrier, is shown as a function of viewing aspect angle in Figs. I-12, I-14, and I-15 of Appendix I. It was found that a tank, which is of about 2:1 aspect, requires about 11 to 12 lines for recognition viewed broadside while 16 lines are required frontally, where the aspect is near unity. In this case the E criterion, which increases SNR_D by √2 when the aspect changes from frontal to broadside, appears reasonable even though a √2 increase in SNR_D may not result in a √2 improvement in resolution.

As a minimum it can be said that for tanks, the use of E should result in some improvement in prediction accuracy. For the aircraft carrier, the recognition criteria are almost independent of aspect, which may increase by a factor of 50 from the bow or stern to broadside. A broadside view would increase the SNR_D by a factor of √50, or about 7, over the bow or stern view. Clearly, the E concept is not applicable in this case. For a destroyer, where the frontal view requires nearly four times more resolution than broadside, the E concept would appear to be somewhat more reasonable. The typical ship, excluding aircraft carriers, is probably nearly unrecognizable from the bow or stern and rather highly recognizable viewed broadside. But whether a ship is 100 or 500 meters long is probably less significant than the detail which can be seen on the superstructure. The use of a large E in the case of ships does not seem to be appropriate, particularly for the higher levels of visual discrimination. The alternative to using E is to use the MRT curves as is and to adjust the resolution required as a function of viewing angle, by using experimentally measured curves to the extent that such curves are available or by using judgment. This is considered to be a better approach, but much more work is needed in this area.

The second step in conventional range analysis is to convert the MRT vs spatial frequency curves to MRT (or MRT') vs range curves. This step is required in order to include atmospheric transmittance effects that are range dependent. To convert spatial frequency to range, it is necessary to select scene object dimensions, the desired level of visual discrimination, and the resolution requirement appropriate to that level. To begin, we assume an object of minimum dimension X_o and a bar pattern resolution based on a bar width ΔX equal to


    ΔX = X_o / k_d,    (6-3)

where k_d represents the number of lines required per minimum scene object dimension to obtain the desired discrimination level. The angular subtense, Δθ, of ΔX at range R is given by

    Δθ = ΔX/R,    (6-4)

and since k_θ = 1/(2Δθ),

    R = 2ΔX k_θ.    (6-5)

If ΔX is in meters and k_θ is in cycles/mrad, R will be in kilometers.
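Equations (6-3) through (6-5) amount to a one-line unit conversion, sketched below; the 2-m object and 10 cycles/mrad threshold spatial frequency are arbitrary illustrative values.

```python
def threshold_range_km(min_dim_m, k_d, k_theta_cyc_per_mrad):
    """R = 2 ΔX k_θ (Eq. 6-5), with ΔX = X_o / k_d (Eq. 6-3).
    ΔX in meters and k_θ in cycles/mrad give R in kilometers."""
    delta_x = min_dim_m / k_d
    return 2.0 * delta_x * k_theta_cyc_per_mrad

# A 2-m object at the recognition criterion (k_d = 8) with a threshold
# spatial frequency of 10 cycles/mrad:
r_km = threshold_range_km(2.0, 8, 10.0)
```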

The MRT curve of Fig. 5-2 is converted to a function of range if we assume a 1:1 scene object aspect ratio and a recognition level of discrimination. The resulting MRT' curves are plotted in Fig. 6-3 for several values of scene object size, using bar widths from 1/3 to 2 m for the equivalent bar pattern. We assume a scene object to have a temperature differential of 5° about a 300°K background. If the atmospheric transmittance is unity, the apparent scene object ΔT will equal 5° at all ranges. The intersection of the τ_a0 line representing this case with the MRT curves gives the threshold ranges for each bar width assumed. In the special case of τ_a = 1.0, it is found that Δθ_T = ΔX/R = constant, and we plot Δθ_T vs range in Fig. 6-4. Atmospheric transmission at the long infrared wavelengths can often be approximated by an exponential, so that

    ΔT(R) = ΔT_o exp[-σR],    (6-6)

where ΔT_o is the object temperature differential at zero range and σ is the atmospheric extinction coefficient. The curve τ_a1 in Fig. 6-3 corresponds to a dry-air, cold-weather condition for which the atmospheric transmittance is high, while curve τ_a2 represents a fairly low transmittance due to a moist, warm-air condition. The effect of the atmosphere is to cause the threshold angular resolution Δθ_T to increase with range, as shown in Fig. 6-4, with the largest increase for τ_a2.
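Equation (6-6) in code, with an illustration of how the extinction coefficient σ drives the apparent ΔT; the ΔT_o = 5° and σ = 0.2 km⁻¹ values are arbitrary examples, not the report's τ_a1 or τ_a2 cases.

```python
import math

def apparent_delta_t(delta_t0, sigma_per_km, range_km):
    """ΔT(R) = ΔT_o exp(-σR), Eq. (6-6)."""
    return delta_t0 * math.exp(-sigma_per_km * range_km)

# With no atmosphere the apparent differential is range independent;
# with sigma = 0.2 per km a 5-degree object is much attenuated at 10 km.
dry = apparent_delta_t(5.0, 0.0, 10.0)
moist = apparent_delta_t(5.0, 0.2, 10.0)
```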

As noted in connection with Table 6-3, the sensor resolution required, as measured in the threshold sense, to visually discriminate real scene objects appears to increase when the video SNR is low. Thus the effective sensor resolution is less than the measured threshold resolution. The ratio of effective to measured threshold angular resolution (Δθ_e/Δθ_T) is shown as a function of video SNR in Fig. 6-5. This particular curve was derived from measurements by O'Neill using images of ship silhouettes (Refs. 6-4 and 6-5), as discussed in Appendix I. It should be emphasized that the curve of Fig. 6-5 is based on very little data, derived from an experiment which was not specifically designed to determine a correlation between video SNR and resolution criteria. However, it is believed that the curve is of the correct form if not of precise values.

The interpretation of the curve is as follows: suppose that the threshold resolution of a system at a particular range is Δθ_T equal to 100 μrad. Further suppose that the video SNR_v,o is calculated for that range from the relation

    SNR_v,o = ΔT_o exp(-σR) / NEΔT,    (6-7)

where ΔT_o is the temperature differential of a specific object at zero range, σ is the atmospheric extinction coefficient, and the calculated value of SNR_v,o = 1.0. Then, from Fig. 6-5 it is found that


Fig. 6-3 - MRT plotted vs range for various equivalent bar pattern widths, and scene object ΔT vs range for atmospheric transmissions of τ_a0 (no atmosphere), τ_a1 (dry air, cold), and τ_a2 (moist air, warm)


Fig. 6-4 - Threshold angular resolution vs range for the assumed system for three atmospheric transmission cases, and the effective angular resolution for the atmospheric case τ_a2 modified to include the video SNR effect

Fig. 6-5 - Ratio of effective to measured angular resolution required to visually discriminate the ship silhouette as a function of the broad area video SNR_v


Δθ_e/Δθ_T = 3.6. Then the effective sensor resolution, insofar as visual discrimination of real scene objects is concerned, is Δθ_e = 3.6 Δθ_T = 360 μrad.
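The correction just illustrated can be sketched in code. Only the point (SNR_v = 1.0, ratio = 3.6) comes from the text; the remaining knots of the interpolated curve are placeholders for the Fig. 6-5 data, which is not tabulated here, and all names are ours.

```python
import math

def video_snr(delta_t0, net, sigma_per_km, range_km):
    """SNR_v,o = ΔT_o exp(-σR) / NEΔT, Eq. (6-7)."""
    return delta_t0 * math.exp(-sigma_per_km * range_km) / net

# Piecewise-linear stand-in for the Fig. 6-5 curve (Δθ_e/Δθ_T vs SNR_v).
# Only the knot (1.0, 3.6) comes from the text; the rest are hypothetical.
_RATIO_KNOTS = ((0.5, 8.0), (1.0, 3.6), (2.0, 1.8), (5.0, 1.0))

def resolution_ratio(snr_v, knots=_RATIO_KNOTS):
    """Interpolate Δθ_e/Δθ_T at the given video SNR, clamping at the ends."""
    if snr_v <= knots[0][0]:
        return knots[0][1]
    for (x0, y0), (x1, y1) in zip(knots, knots[1:]):
        if snr_v <= x1:
            return y0 + (y1 - y0) * (snr_v - x0) / (x1 - x0)
    return knots[-1][1]

def effective_resolution_urad(delta_theta_T_urad, snr_v):
    """Δθ_e = (Δθ_e/Δθ_T) * Δθ_T."""
    return resolution_ratio(snr_v) * delta_theta_T_urad
```

Reproducing the worked example: at SNR_v = 1.0 a 100-μrad threshold resolution degrades to an effective 360 μrad.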

A graphical method of including the above effect in range prediction is shown in Fig. 6-6, where we have plotted Eq. (6-7) with the MRT curves of Fig. 6-3 and the Δθ_e/Δθ_T curve of Fig. 6-5. The curves are used as follows. For the τ_a2 atmospheric curve, the threshold range for a 2-m object with a ΔT of 5° at zero range is 11 km, and the threshold angular resolution is thus 174 μrad. The video SNR_v,o under these conditions is about 1.1, so Δθ_e/Δθ_T = 3.4 and Δθ_e = 590 μrad, which is plotted as one of the points on the dashed curve τ_a2 in Fig. 6-4. Observe that the correction for SNR_v,o is trivial for ranges shorter than 6 km for the assumed case, but the effect is large beyond 8 km.

D. CUMULATIVE PROBABILITY OF VISUAL DISCRIMINATION VS RANGE

One commonly desired format in plotting predicted range is the cumulative probability that a scene object will be visually discriminated at a range equal to or less than a given range, plotted vs range. To obtain this curve, we use the methods of the previous section to obtain the threshold angular resolution of the sensor for a specific object. Assume that the object is a bar of width 1 m with a ΔT_o of 5°. Then if the atmospheric transmittance is unity, the τ_a0 curve of Fig. 6-3 intersects the MRT curve for a 1-m bar width at about 11 km, and the threshold angular resolution of the sensor is about 90 μrad at that range. Next, Δθ_o, the angular subtense of a 1-m bar width, is plotted as a function of range in Fig. 6-7(b). The sensor's threshold angular resolution, when the atmospheric transmittance is unity, is actually independent of range, and thus Δθ_T plots as a horizontal line. The Δθ_T and Δθ_o curves should and do intersect at range 11 km, where the probability of detection is 50%.

The probability of visual discrimination vs the ratio Δθ_o/Δθ_T is plotted in Fig. 6-7(c) using the second column of Table 5-1, with Δθ_o/Δθ_T = 1 corresponding to an angular resolution of 90 μrad and Δθ_o/Δθ_T = 2 corresponding to 180 μrad. It can be seen that if the probability vs Δθ_o/Δθ_T curve is centered at 0.50 for Δθ_o/Δθ_T = 1, then by following the desired curve up to Fig. 6-7(b) and down to (c), the first point of the cumulative probability vs range curve becomes 0.5 at 11 km, as it must by definition. Other points are found in a similar manner.
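The graphical procedure of Fig. 6-7 can be mirrored in code. The probability-vs-ratio curve below is a hypothetical stand-in for the Table 5-1 column (0.5 at a ratio of 1); the 1-m bar and the 1000/11-μrad threshold reproduce the example above under the unity-transmittance assumption.

```python
def toy_probability(ratio):
    """Hypothetical stand-in for the Table 5-1 probability vs Δθ_o/Δθ_T
    column: 0.5 at a ratio of 1, clipped to [0, 1]."""
    return min(1.0, max(0.0, 0.5 * ratio))

def cumulative_probability_vs_range(bar_width_m, delta_theta_T_urad,
                                    prob_of_ratio, ranges_km):
    """At each range form Δθ_o = ΔX/R (in μrad) and map the ratio
    Δθ_o/Δθ_T through the probability curve, as in Fig. 6-7."""
    curve = []
    for r_km in ranges_km:
        delta_theta_o_urad = bar_width_m / r_km * 1000.0  # m/km = mrad; x1000 = μrad
        curve.append((r_km, prob_of_ratio(delta_theta_o_urad / delta_theta_T_urad)))
    return curve

# 1-m bar, threshold resolution 1000/11 μrad: probability is 0.5 at 11 km.
points = cumulative_probability_vs_range(1.0, 1000.0 / 11.0, toy_probability,
                                         [8.0, 11.0, 14.0])
```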

Where atmospheric transmittance is a factor, the sensor-augmented observer's ability to resolve a scene object at range R becomes a function of range, as discussed in connection with Fig. 6-4 and as shown as curve 1 in Fig. 6-8. From this curve we can plot the ratio Δθ_o/Δθ_T as shown in Fig. 6-9(b). With the cumulative probability curve of Fig. 6-9(c) we can obtain the cumulative probability vs range curve of Fig. 6-9(a). Curve 2 of Fig. 6-8 represents the case where the sensor-augmented observer's ability to resolve the scene object is limited not only by the atmospheric transmittance but also by the falloff in video SNR, as discussed previously. The effect of this further degradation on the cumulative probability vs range is shown as curve 2 on Fig. 6-9(a). For reference purposes we also show the probability curve for the case of Fig. 6-7 as curve 3 on Fig. 6-9(a). As can be seen, the effect of the atmospheric transmittance alone, and of the atmospheric transmittance plus the video SNR falloff (which is also due to atmospheric transmittance), is both to progressively shorten the threshold range at the 50% probability level and to cause the probability curves to become steeper. These trends have been experimentally observed and are in agreement with what one would intuitively expect.


Fig. 6-6 - Graphical calculation of the ratio of effective to threshold angular resolution, taking into account the effect of atmosphere on video SNR


Fig. 6-7 - Graphical technique for determining the cumulative probability that a scene object will be visually discriminated at a range that is equal to or less than the range shown on the abscissa

Fig. 6-8 - Angular subtense Δθ_o of a bar of an equivalent bar pattern and the threshold angular subtense Δθ_T for two cases: case 1, angular resolution is range dependent because of atmospheric transmittance only, and case 2, angular resolution is range dependent because of a combination of atmospheric transmittance and the magnitude of the video SNR


Fig. 6-9 - Graphical method of determining the cumulative probability that a scene object will be visually discriminated at a range that is equal to or less than the range shown on the abscissa

REFERENCES

6-1. Johnson, J., "Analysis of Image Forming Systems," Proc. of the Image Intensifier Symposium, Oct. 6-7, 1958.

6-2. Rosell, F.A., and Willson, R.H., Performance Synthesis (Electro-Optical Sensors), AFAL-TR-72-229, Air Force Avionics Laboratory, Ohio, August 1972, AD-905-281.

6-3. Rosell, F.A., Performance Synthesis of Electro-Optical Sensors, Report No. EOTM 575, Night Vision Laboratory, Ft. Belvoir, Va., Contract No. DAAK-53-75-C-0225, Feb. 1977.

6-4. O'Neill, G., Report No. NADC-202 139: GJO, Naval Air Development Center, January 1974.

6-5. Rosell, F.A., "Levels of Visual Discrimination for Real Scene Objects vs Bar Pattern Resolution for Aperture and Noise Limited Imagery," Report 75CH0956-3, NAECON, June 1975.


Chapter VII

THERMAL-IMAGING-SYSTEM (TIS) DYNAMIC FIELD PERFORMANCE

D. Shumaker

A. INTRODUCTION

1. Function of a Dynamic Model

Three major tasks must be accomplished to derive useful information from a thermal-imaging-system:

* Physical acquisition, which is the positioning of the field of view of the thermal-imaging-system about the target so that it is imaged on the display.

* Visual acquisition of the target on the display. This task contains both a visual searchphase and a visual detection phase.

* Extraction of the required information from the previously detected target.

All of the tasks must often be accomplished in a limited amount of time.

The entire development of the last two chapters has been directed to predicting an observer's ability to visually detect, recognize, or identify a scene object given as much time as necessary to find the object of interest. The following sections are dedicated to considerations for incorporating this analysis into an overall characterization of thermal-imaging-system use in performing the above three functions. We will call this total thermal-imaging-system characterization a dynamic model, because the addition of target acquisition (both physical and visual) requires considerable emphasis on the passing of time and the changing of geometry.

Dynamic modeling of imaging systems is presently in its infancy. Work on the subject is being pursued actively, and thus the state of the art is evolving rapidly. The following material is presented in order that the reader may consider the implications of the truly dynamic conditions under which FLIR systems are often operated and the potential impact of these conditions on system analysis.

2. Justification for Constructing a Dynamic Model

Dynamic modeling may represent a considerable increase over static modeling in the complexity of analysis and bulk of computations required to reach meaningful conclusions. However, this increase is justified by the more thorough consideration of the thermal-imaging-system afforded. Static analysis cannot completely define all of the important parameters describing a thermal-imaging-system, since it does not include the important parameters of search and acquisition.


D. SCHUMAKER

Of the several hundred thousand resolution elements typically available in a thermal image, visual identification of real scene objects may require only 4 to 900 elements on the object's image. The function of the remaining 99% of the resolution elements in the thermal image is target acquisition, that is, making the field of view (FOV) of the system large enough to ensure that the target is imaged. Since system cost is highly dependent on the product of resolution and field of view, a product not considered in static analysis, a dynamic analysis is appropriate if a thorough tradeoff of performance and cost is to be made.

3. Description of the Dynamic Environment

The treatment of dynamic performance is inherently statistical. This chapter will discuss statistical elements that could be used to describe various thermal-imaging-system applications. The synthesis of a performance model from these elements will vary, depending on the particular application. The major effort of this chapter is to establish the proper temporal relationships between each of the statistical elements.

The general scenario which is used herein to describe thermal-imaging-system applications is as follows:

The target position is assumed to be known with some degree of uncertainty (meters, nautical miles, etc.). At the initiation of the mission the observer begins to look for the target on the system display while the system field of view itself is moved through some defined area of search, within which the target is expected to be located. The area within which the thermal-imaging-system field of view is moved is defined as the search field. The line of sight to the target may be blocked by terrain, foliage, etc., and the probability of a clear line of sight may change with time. The observer searches the system display visually. If an object is detected, it is retained in the field of view as the observer attempts to perform higher order visual processing of the image.

The above scenario is not all encompassing, but it provides a convenient base for describing many thermal-imaging-system applications. The scenario can be conveniently described by using the six probability functions listed below.

* P_SF, the probability that the object is within the search field;

* P_FOV, the probability that the object is within the sensor field of view given that it is within the search field;

* P_LOS, the probability that a clear line of sight exists between the sensor and the object;

* P_L/D, the probability that a displayed object falls into the observer's area of visual attention or "glimpse";

* P_D/L, the probability that an object within the observer's glimpse is detectable, with ΔD being an incremental change in P_D/L; and

* P_I/D, the probability that an observer can identify (in the generic sense) a detected object, with ΔI being an incremental change in P_I/D analogous to ΔD.


The three functions P_SF, P_FOV, and P_LOS are considered in Section B on physical target acquisition. Section B describes how P_SF can be calculated based on system and tactical factors. P_SF is assumed to be monotonically decreasing.

From the generalized scenario, it can be seen that the sensor field of view samples the search field. P_FOV might be thought of in simplest terms as the ratio of the area of the field of view to the area of the search field, although Section B develops a more rigorous formulation for it and demonstrates the shortcomings of the simple area-ratio concept.

The probability of having a clear line of sight to the target is assumed herein to be monotonically increasing. The elements that determine P_LOS include cloud cover and terrain and foliage masking. The effects of these elements on P_LOS are discussed in Section B.

The function P_L/D, the probability that a displayed target is within the area of the observer's glimpse, describes visual search. It has been customary to describe the visual search process as the visual moving about of an aperture within which a target can be detected and outside of which a target cannot be detected. It is assumed that the aperture is moved rapidly and is then stationary for a relatively long period of time called the glimpse duration. Section C discusses P_L/D for representative applications. The duration of the glimpse is usually the basic unit for stepping time in dynamic performance modeling.

P_D/L is the probability of detecting an object that is being actively looked at by the observer. P_D/L is related to the static probability of detection developed by the use of alternative approaches in Chapter VI. The function describes the fraction of the population that can detect the target. P_D/L is assumed to be monotonically increasing. This is because in most applications range is either constant or decreasing, and thus the signal-to-noise ratio (SNR), upon which P_D/L is dependent, is either constant or increasing.

The function P_I/D is the probability that an observer can identify a detected target. The function behaves as does P_D/L. Methods for calculating P_I/D were discussed in Chapter VI as the static probability of target identification (generically). In this chapter P_I/D is assumed to change with time, being monotonically increasing.

4. Coordinate Systems

Many coordinate systems are used in describing overall sensor system performance. These include:

"* Flat earth ground coordinates (X, Y). origin at the object."* Platform coordinates (09, 4)), Az. El, origin in the platform."* Display coordinates (x. 0), origin at a display's center."* Visual coordinates (F. V'), origin at the center of the tovea.

For general discussions in which we do not specify a coordinate system, we will use (γ, φ).

B. PHYSICAL TARGET ACQUISITION

Physical acquisition is the act of getting an object displayed. An object may be physically acquired but not detectable; that is, if an object's location is displayed, it is physically acquired


D. SCHUMAKER

whether it is detectable or not. Analysis of physical acquisition is broken down herein into three facets. The first facet is determining the probability, P_SF, that the object is within the field being searched. The second facet is determining the probability, P_FOV, that the object is instantaneously within the field of view of the thermal-imaging-system. Lastly, the probability, P_LOS, that a clear line of sight exists between the object and the sensor must be determined. Each of these facets of the problem is examined below.

1. Probability that the Target Is in the Search Field (P_SF)

During the acquisition/detection phase of a thermal-imaging-system application, the sensor field of view may be moved about in an attempt to acquire an object whose actual position is uncertain. This search may be mechanized as a programmed search, may be totally manual, or may be partially or completely defined by platform motions. Irrespective of its mechanization, the search field is defined as that expanse throughout which the field of view may be moved to acquire the desired object. In many cases the limits are imposed by gimbal constraints or physical obscurations. In other cases tactics limit the usable area within which a mission can be prosecuted.

The probability that an object is in the search field (P_SF) represents the fraction of missions for which it will be in the search field. The probability is distributed over the ensemble of mission conditions but not over time within the mission itself. The search field may change size during a mission, changing the probability of the target being therein.

If P_T(γ, φ) is the probability distribution function for object position in pertinent coordinates, and the search field dimensions are defined to be ±SF_γ/2 by ±SF_φ/2 in the (γ, φ) coordinate system, then the probability that the object is in the search field is simply:

Ps " J .-• .- .2PTV , 41) dydeb. (7-1)

The following are examples of search fields found in typical scenarios and calculations of the probability of objects being therein.

a. Air-to-Ground Scenario with Handoff from a Radar to a Thermal-Imaging-System with Computer-Aided Tracking

In this type of mission it is assumed that the radar has acquired the desired object. At handoff (t = t_h) the operator directs the field of view of the thermal-imaging-system to search about the object position indicated by the radar. The reference frame is the coordinate system of the aircraft, and the object coordinates are typically in azimuth and elevation.

Systems such as radar are subject to inaccuracies in determining object location due to the noisy nature of both the electronic processing and the mechanical tolerances associated with position-signal generation.

These errors generally take the form of:
• equipment misalignment, such as boresight errors,
• resolver inaccuracies, and
• electronic signal-detection uncertainty due to noise in the processed signal.


It is normally assumed that these errors can be approximated by a binormal distribution with a first standard deviation of σ_θ in azimuth and σ_φ in elevation. Since the noise in the radar signal might produce excessive focal-plane jitter if the thermal-imaging-system were simply slaved to the radar position information, it is customary to introduce the indicated object coordinates into an onboard computer and use inertial-navigation-system (INS) inputs to generate continuous positional information for the thermal-imaging-system.

Although cumulative errors in navigation inputs after handoff are also present and could be treated simultaneously, the largest source of uncertainty in the object position for this situation is the radar positional uncertainty. For many realistic situations using thermal-imaging-systems, one can expect the cumulative navigational errors from the time of handoff until thermal-imaging-system engagement to be small compared with radar-to-thermal-imaging-system handoff errors, and we assume this to be the case here. Airborne radar pointing errors typically range from less than 5 milliradians to one or more degrees depending on the type and vintage of the radar and the target conditions.

At handoff it is assumed that the search field is centered on the indicated object position. The probability that the search field is pointed away from the actual object position by angular extent (θ, φ) is

P_N(\theta, \phi) = (2\pi \sigma_\theta \sigma_\phi)^{-1} \exp(-\theta^2/2\sigma_\theta^2)\, \exp(-\phi^2/2\sigma_\phi^2),   (7-2)

which due to the symmetry of the function becomes P_T(θ, φ). This is illustrated in Fig. 7-1.

Fig. 7-1 — Radar-to-thermal-imaging-system handoff geometry


Integrating P_T(θ, φ) over the search field of the thermal-imaging-system yields the probability that the object is in the search field at handoff.

Frequently in airborne systems using this mechanization no physical search with the field of view of the thermal-imaging-system is implemented, and the search field becomes the field of view itself. In this case Eq. (7-2) becomes

P_{SF} = (2\pi \sigma_\theta \sigma_\phi)^{-1} \iint_{FOV} \exp(-\theta^2/2\sigma_\theta^2)\, \exp(-\phi^2/2\sigma_\phi^2)\, d\theta\, d\phi.   (7-3)

If the field of view is set to 2σ_θ by 2σ_φ centered on the indicated object location, this integration yields a P_SF of 0.46. If the field of view is enlarged to 4σ_θ by 4σ_φ, the probability becomes 0.91; and when the field of view is set to 6σ_θ by 6σ_φ, the probability becomes 0.99. This illustrates the importance of sizing the field of view in such cases, since if the field of view is set equal to the radar's 1σ accuracy, better than 50% of all missions are consigned to failure at handoff, because the object will not be in the field of view.
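Because the bivariate normal of Eq. (7-2) separates, the integral of Eq. (7-3) reduces to a product of one-dimensional error-function terms, and the field-of-view sizing values above can be checked directly. A minimal sketch (the function name and the unit-σ normalization are ours, not the report's):

```python
import math

def p_sf(half_fov_az, half_fov_el, sigma_az=1.0, sigma_el=1.0):
    """Eq. (7-3) for a field of view centered on the indicated object
    position: the separable bivariate normal reduces to a product of
    one-dimensional error-function terms."""
    p_az = math.erf(half_fov_az / (sigma_az * math.sqrt(2.0)))
    p_el = math.erf(half_fov_el / (sigma_el * math.sqrt(2.0)))
    return p_az * p_el

# Fields of view of 2, 4, and 6 sigma on a side (half-widths 1, 2, 3 sigma):
for n in (1, 2, 3):
    print(f"{2*n} sigma x {2*n} sigma FOV: P_SF = {p_sf(n, n):.3f}")
```

The three cases give approximately the 0.46, 0.91, and 0.99 quoted in the text.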

After handoff, the search-field footprint on the ground (the intersection of the search field with the ground plane) shrinks as range closes. However, the object distribution function on the ground P_T(X, Y) is constant with time, as determined from P_T(θ, φ) at the time of handoff. Therefore, the probability that the object is in the search field decreases with time, since it may be determined by integration of P_T(X, Y) over the shrinking footprint.

To calculate the probability of the object being in the search field after handoff, the integration can continue to be accomplished in angular coordinates by applying appropriate coordinate transforms.

b. Ground-to-Air Scenario in Which Radar Is Used to Locate the Target and the Field of View of the Thermal-Imaging-System Is Permitted to Ride the Filtered (Low-Pass) Position Output of the Radar

For this case, the search field becomes the entire radar field of view, and the probability of the object being in the field approaches unity. However, if the freedom of the field of view to move is constrained to ±Θ/2 in azimuth and ±Φ/2 in elevation, which is normally the case, the probability of the object being in the search field is given (assuming Θ and Φ are small) by:

2, 2 ex"(-0 2/20,1,) exp(-0 2 /2'•,2) ddlcb, (7Pw - 7ac. 4

where σ_φ is the radar 1σ accuracy in elevation and σ_θ is the radar 1σ accuracy in azimuth. This formulation is also appropriate for radar designation for thermal-imaging-systems in air-to-surface and air-to-air applications.

c. Air-to-Ground Scenario in Which the Object Is Found Using a Navigation System to Get the Aircraft to the Known Coordinates of the Object

In this case it is assumed that the inertial navigation system (INS) of the aircraft is updated at a time t = t_u, and all navigation errors are set to zero. As the aircraft proceeds to the latitude and longitude of the object, the INS is subject to several sources of error, such as uncompensated wind gusting and INS drift, which accumulate in time to produce uncertainty in the true position of the aircraft. At handoff (t = t_h) the search field of the thermal-imaging-system is centered on the object's ground coordinates as best known to the aircraft.


Since the accumulation of errors is random, the uncertainty in the aircraft position, and therefore the object's position with respect to the aircraft, may be characterized by a probability distribution function (pdf), which is typically assumed to be a bivariate normal distribution in ground coordinates (X, Y) (Fig. 7-2):

P_T(X, Y) = (2\pi \sigma_X \sigma_Y)^{-1} \exp(-X^2/2\sigma_X^2)\, \exp(-Y^2/2\sigma_Y^2),   (7-5)

where σ_X and σ_Y, the standard deviations of the object probability distribution function in ground coordinates, are functions of time:

""ro= K.(t - , (7-6)

\sigma_Y = K_Y(t - t_u),   (7-7)

where K_X is the rate of error accumulation in X, K_Y is the rate of error accumulation in Y, and t_u is the time of the last navigation update. Typical values of K_X and K_Y are 1 to 5 nautical miles per hour.

The thermal-imaging-system is continually directed to the object after handoff using computer-generated commands derived from the INS and known target coordinates.

The probability that the object will be in the search field at any time is given by the integral of the pdf given by Eq. (7-5) over the ground mapping of the search field:

P_{SF} = (2\pi \sigma_X \sigma_Y)^{-1} \iint_{Search\ Field} \exp(-X^2/2\sigma_X^2)\, \exp(-Y^2/2\sigma_Y^2)\, dX\, dY.   (7-8)

This is shown schematically in Fig. 7-2.

Fig. 7-2 — Navigation-to-thermal-imaging-system handoff geometry
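For a rectangular ground footprint centered on the indicated position, Eqs. (7-5) through (7-8) again separate into error-function factors. The sketch below is illustrative only; the drift rate, update age, and footprint size are hypothetical values, not data from the report:

```python
import math

def sigma(k_nmi_per_hr, minutes_since_update):
    """Eqs. (7-6)/(7-7): 1-sigma position error (nmi) accumulated
    since the last navigation update."""
    return k_nmi_per_hr * minutes_since_update / 60.0

def p_sf_footprint(half_x, half_y, sig_x, sig_y):
    """Eq. (7-8) for a rectangular footprint centered on the indicated
    object position; the separable pdf reduces to erf products."""
    return (math.erf(half_x / (sig_x * math.sqrt(2.0))) *
            math.erf(half_y / (sig_y * math.sqrt(2.0))))

# Hypothetical case: K_X = K_Y = 2 nmi/h, 15 min since the last update,
# search-field footprint 2 nmi x 2 nmi on the ground.
sx = sigma(2.0, 15)   # 0.5 nmi
sy = sigma(2.0, 15)
print(f"P_SF = {p_sf_footprint(1.0, 1.0, sx, sy):.3f}")
```

As the footprint shrinks during range closure while σ_X and σ_Y stay fixed at their handoff values, the same function shows P_SF decreasing with time.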

2. Probability of the Target Being in the Field of View, Given It Is in the Search Field (P_FOV)

The probability that an object is in the field of view of the thermal-imaging-system given that it is in the search field is a function of the object-position probability distribution function due to the uncertainty of the object location within the search field (P_TF), the distribution function describing the positioning of the field of view of the thermal-imaging-system within the


search field (P_FF), and the size of the field of view of the thermal-imaging-system (FOV). Since the FOV samples the search field during the mission, P_FOV is a probability that is distributed in time. The probability of the object being in the FOV at a particular moment can be expressed by calculating the probability that the effective area of the FOV envelops the point in the search field at which the object is positioned, given that the object is in the search field; that is:

The search field is subdivided into elemental areas (Δγ, Δφ). If the FOV is assumed to be an area VFOV high by HFOV wide, the probability that it will envelop the object is the sum of a series of terms. The first term is the probability that the object falls in (Δγ, Δφ)_1, multiplied by the probability that the FOV center falls somewhere in the area of dimension VFOV by HFOV centered at (Δγ, Δφ)_1; the second term is the probability that the object falls in (Δγ, Δφ)_2, multiplied by the probability that the FOV center falls somewhere in an area of dimension VFOV by HFOV centered at (Δγ, Δφ)_2; and so on until the search field is covered. This process is illustrated in Fig. 7-3.

The integral representation of this sum is

P_{FOV} = \iint_{Search\ Field} P_{TF}(\gamma, \phi) \int_{\phi - VFOV/2}^{\phi + VFOV/2} \int_{\gamma - HFOV/2}^{\gamma + HFOV/2} P_{FF}(\gamma', \phi')\, d\gamma'\, d\phi'\, d\gamma\, d\phi,   (7-9)

where P_TF(γ, φ) is the object-position probability distribution function (pdf) normalized to P_SF and P_FF(γ, φ) is the pdf describing the positioning of the FOV. Frequently P_FOV is assumed to be the ratio of the area of the FOV to the area of the search field. Tacit to this assumption is that P_TF and P_FF are uniformly distributed. While in many instances no better assumption may be possible regarding P_TF and P_FF, the value for P_FOV may be grossly incorrect if P_TF and P_FF are either correlated or anticorrelated. An example of this problem was given in paragraph B1b. In that case P_TF is the same as P_FF, maximizing the integral of Eq. (7-9) due to the correlation of the two functions and resulting in a very inaccurate calculation of P_FOV if it is set to the ratio of the FOV area to the search-field area.

In many air-to-ground, air-to-air, and ground-to-air scenarios, where searching with the field of view has proven fruitless, the search field is made equal to the FOV, which is centered on the most likely target position. In this case P_FOV = 1.

Fig. 7-3 — Search-field geometry used for the calculation of P_FOV
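The sensitivity of Eq. (7-9) to correlation between P_TF and P_FF can be illustrated numerically. The Monte Carlo sketch below (all field sizes and error magnitudes are hypothetical, chosen only for illustration) estimates P_FOV for a uniform object pdf, once with an independently positioned FOV and once with the FOV slaved to the noisy indicated object position, as in the scenario of paragraph B1b:

```python
import random

random.seed(1)

def p_fov_mc(correlated, trials=100_000):
    """Monte Carlo estimate of Eq. (7-9) for a 10 x 10 (deg) search field
    and a 2 x 2 FOV.  Object position is uniform over the search field.
    If `correlated`, the FOV is centered near the object (slaved pointing
    with a small residual error); otherwise the FOV center is placed
    uniformly and independently."""
    hits = 0
    for _ in range(trials):
        tx, ty = random.uniform(-5, 5), random.uniform(-5, 5)
        if correlated:
            cx = tx + random.gauss(0, 0.3)   # residual pointing error
            cy = ty + random.gauss(0, 0.3)
        else:
            cx, cy = random.uniform(-5, 5), random.uniform(-5, 5)
        if abs(tx - cx) <= 1 and abs(ty - cy) <= 1:
            hits += 1
    return hits / trials

print("area ratio        :", (2 * 2) / (10 * 10))   # 0.04
print("uncorrelated P_FOV:", p_fov_mc(False))       # near the area ratio
print("correlated P_FOV  :", p_fov_mc(True))        # near unity
```

The uncorrelated estimate lands close to (actually slightly below) the 0.04 area ratio because the FOV can overhang the search-field edge, while the correlated case approaches unity, which is the gross discrepancy warned of above.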


3. Probability of a Clear Line of Sight to the Target (P_LOS)

The LOS to the target can be obscured for various reasons, including foliage masking, cloud cover, terrain masking, and occlusion by the horizon. In ground-to-ground applications and in many low-altitude air-to-ground and ground-to-air scenarios, cloud, foliage, and terrain masking may obscure the object to the point that detection may not be possible until range is below that appropriate to object identification.

The geometry of a mission and the local topography and foliage cover dictate at what point in a mission a terrain-and-foliage-free line of sight is obtained. In most cases, during range closure, it is expected that a single change of state from masked to unmasked will occur. Thus a statistical description of terrain-and-foliage-free line of sight reflects the statistics of mission profiles and topography. P_LOS(R) due to terrain and foliage reflects the fraction of missions for which the line of sight will be unmasked at range R. P_LOS is, therefore, ensemble distributed. For a specific mission with known masking data, characterization of line-of-sight masking would simply include a change from a probability of zero to one at the range of unmasking.

Statistics of cloud cover for a few locations have been published, including Ref. 7-1 for Hanover and Ref. 7-2 for Berlin/Tempelhof. The Hanover data exhibit little variation in probability of cloud-free line of sight with elevation angle. This indicates that in this area cloud decks dominate obscuration effects rather than partly cloudy conditions. Under these conditions the statistics of cloud-free line of sight refer to the fraction of missions for which no cloud decks will be between the target and the sensor platform operating at different altitudes. Table 7-1 provides summary material derived from the Ref. 7-1 data giving the probability of cloud-free line of sight from various altitudes to the ground for various seasonal periods. In months where cloud cover is minimal, variation of P_LOS with elevation angle increases, indicating the effects of partly cloudy skies. Under these conditions the assumption that P_LOS is distributed only over an ensemble of missions and not in time is compromised. That is, during this period, an aircraft flying at constant altitude may be visible from the ground one moment and not the next.

C. VISUAL SEARCH

1. Introduction

Visual display search is the most difficult portion of imaging system use to characterize and is not completely describable at this time, due to a lack of data. The following sections will, therefore, attempt to break down the display search problem into its contributing components, present treatments of those areas that have been analyzed, and discuss potential approaches in those areas as yet undefined.

2. Characterization of Scene Content

The ability of an observer to search a display scene is dependent on the scene content, which varies widely from uniform displays of sky or sea to highly complex displays of mixed landscapes or urban areas. For convenience, scene elements are broken down into competing target elements, or clutter, and major interest-directing elements. The effect of clutter is to make more difficult the task of discriminating the target from background and to enlarge the set


Table 7-1 — Probabilities (in percent) of a Cloud-Free Line of Sight
from the Specified Altitude to the Ground at Hanover, W. Germany (Ref. 7-1)

Altitude (ft) | Time | Full Year | Apr-Sept | Oct-Mar | Clearest Month | Cloudiest Month
     984      |  D   |   65.9    |   77.8   |  54.0   |      83.4      |      46.1
              |  N   |   66.4    |   78.4   |  54.4   |      82.3      |      44.4
              |  FD  |   66.1    |   78.05  |  54.15  |      82.7      |      45.3
    3280      |  D   |   51.55   |   63.4   |  39.7   |      70.1      |      33.0
              |  N   |   55.8    |   70.2   |  41.45  |      73.9      |      32.3
              |  FD  |   53.7    |   66.75  |  40.6   |      71.9      |      32.7
    4920      |  D   |   47.8    |   58.1   |  37.5   |      63.7      |      31.0
              |  N   |   51.35   |   65.0   |  37.7   |      68.8      |      28.7
              |  FD  |   49.5    |   61.5   |  37.5   |      66.2      |      29.7
   35000      |  D   |   34.0    |   42.4   |  25.6   |      48.1      |      20.0
              |  N   |   37.0    |   46.8   |  27.2   |      51.4      |      19.3
              |  FD  |   35.5    |   44.6   |  26.4   |      49.7      |      19.7

D = Daytime (0630-1830); N = Night (1830-0630); FD = Full day (24 h)

of possible nontargets that must be interrogated to find the desired one. Numerous researchers, including Erickson (Ref. 7-3) and Smith (Ref. 7-4), have found that as the clutter is increased, search time is increased for any constant signal level.

Less investigated than clutter effects, and perhaps more important to search time in many cases, is the effect of scene complexity as dictated by the density and distribution of major interest-directing features. Figures 7-4 and 7-5 from Yarbus (Ref. 7-5) illustrate the effect of complexity on the visual search pattern. In the pictures relatively high amounts of search time are dedicated to small areas of the scene. In these scenes an object in a high-interest area is more likely to be looked at than the same object located in a low-interest area.

Displayed scenes might therefore be broken down orthogonally by levels of clutter and complexity. The simplest case would be a single object on a clear background, such as an aircraft against a clear sky or a ship on a calm ocean. A boat in a choppy sea, where waves might represent potential false targets, presents the same complexity but increased clutter. A higher degree of clutter might be found in a desert scene, where isolated sparse foliage is easily confused with a military target.

The addition of the horizon in the scene represents a first level of increased complexity. The horizon does not represent a false target but does greatly influence where the observer


Fig. 7-4 — A visual search pattern. (Figure 117 from A. L. Yarbus, Eye Movements and Vision, Plenum Press, with permission.)

Fig. 7-5 — A visual search pattern. (Figure 118 from A. L. Yarbus, Eye Movements and Vision, Plenum Press, with permission.)

looks for targets. Air-to-ground scenes with large open areas bounded by roads and tree lines might be the next more complex type of scene.

Both clutter and complexity vary continuously from zero to some extremely high value typified by urban/industrial areas and mixed complex terrains, as shown diagrammatically in Fig. 7-6.

A third dimension which influences the visual interrogation of a scene is the conspicuity of the desired object. The SNR has been used herein to describe the difficulty in detecting an object and as such represents a convenient description of conspicuity. Studies of the effects of SNR on search time indicate, as one would expect, that the higher the SNR the faster and more accurately objects can be located.

Little work has been done to define the effects of clutter, complexity, and SNR on search, detection, and identification in controlled experiments involving all three variables. As a result, little data exist upon which to formulate a model of this process. The following is therefore put forth as one possible means of approaching the problem.


Fig. 7-6 — Breakdown of scenes by clutter and complexity

3. Definition of a Glimpse

Prior to formulating an equation for visual search time, it is convenient to define a unit of search in time and space. We will refer to this unit as a glimpse, following traditional definition. The following explores the temporal and spatial characteristics of the glimpse.

Oculometer studies by Yarbus (Ref. 7-5) and Williams et al. (Ref. 7-6) indicate that the eye inspects a scene in a series of glimpses composed of a relatively long stationary period of fixation followed by a very short movement or saccade. The eye-brain is effectively blanked during the saccadic movement, so that the conscious effect is that of visual continuity. Fixations last for varying lengths of time, as indicated in Fig. 7-7 from Yarbus. The length of the fixation depends on the context of the visual scene and the task given the observer. Yarbus and Williams conclude that the mean fixation period is approximately 1/3 second.

The duration of the saccade is also variable, depending on the distance moved between fixations. Yarbus indicates that the relationship between duration and amplitude of the saccade is

T_S = 0.021\, \delta_S^{2/5},   (7-10)

where T_S is the duration of the saccade in seconds and δ_S is its amplitude in degrees. The interfixation distance is itself a function of scene context, observer tasking, and scene subtense. Yarbus indicates that the fixation consumes 95% of the glimpse time, leaving only about 5% for

the saccade.

The area of the glimpse, or that visual angle within which a target can be detected, is not sharply defined. The detection capability of the eye varies over the entire visual field and is a function of the type of detection problem. The fovea has the best color-detection capability, and the peripheral vision has excellent sensitivity to movement and excellent scotopic sensitivity. The probability of detecting an object of unknown location in the visual field can best be given, therefore, as an integral of detectability over the entire visual field:


Fig. 7-7 — Distribution of glimpse duration. (Figure 58 from A. L. Yarbus, Eye Movements and Vision, Plenum Press, with permission.)

P_{DV} = \iint_{Visual\ Field} P_T(\Gamma, \Psi)\, D_V(\Gamma, \Psi)\, dA,   (7-11)

where P_DV is the probability of visually detecting an object, P_T is the object's positional distribution function, and D_V is the visual detectability of the object in an incremental area of the visual field (dA) as a function of location within the visual field.

Since the integral is cumbersome to solve during computation of visual performance, and since P_T in visual coordinates is virtually never definable, it has been more convenient to spatially describe the glimpse as an area or visual lobe having constant D_V(Γ, Ψ) equivalent to the total integral of D_V. Thus, assuming

P_T(\Gamma, \Psi) = \mathrm{const} \quad \text{and} \quad \iint P_T\, dA = 1,   (7-12)

we have:

"ff 01(1', 4,) (/A -ff D,(W, ') dA (7.13)4(, rI

t.I held,,' '' ' •' , ,• m ,


and

AG " 1ID1V ff Dv(r, ') dA, (7-14)

FNW

where D_V is the detectability of the object within the area of the glimpse. For most applications, D_V is best set equal to the detectability at the fovea, since most pertinent experiments in detection have been conducted in this portion of the visual field. When motion detection or detection under scotopic conditions is required, D_V might be set to a value of peripheral detectability.

Setting D_V equal to the object's detectability in the fovea and D_V(Γ, Ψ) equal to the product of the object's detectability in the fovea and the ratio of detectability in the periphery to that in the fovea, R_P(Γ, Ψ), we have

A_G = \iint_{Visual\ Field} R_P(\Gamma, \Psi)\, dA.   (7-15)

Equation (7-15) gives the commonly assumed definition of the glimpse area.
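Equation (7-15) can be evaluated numerically once a falloff ratio R_P is assumed. In the sketch below the exponential decay of detectability with eccentricity is purely illustrative (it is not a measured curve from this report); the integration uses a midpoint rule in polar coordinates over a circularly symmetric visual field:

```python
import math

def glimpse_area(r_p, field_radius_deg=30.0, n=3000):
    """Eq. (7-15): A_G = integral over the visual field of the
    periphery-to-fovea detectability ratio R_P(r) (assumed circularly
    symmetric), evaluated with a midpoint rule in polar coordinates."""
    dr = field_radius_deg / n
    area = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        area += r_p(r) * 2.0 * math.pi * r * dr   # annulus at radius r
    return area

# Illustrative falloff: full detectability in the fovea (r < 0.5 deg),
# exponential decay with a 2-deg scale outside it.
falloff = lambda r: 1.0 if r < 0.5 else math.exp(-(r - 0.5) / 2.0)
area = glimpse_area(falloff)
print(f"A_G = {area:.1f} deg^2, equivalent radius {math.sqrt(area / math.pi):.2f} deg")
```

With R_P set identically to unity the routine returns the full field area, as Eq. (7-15) requires; any measured falloff curve can be substituted for the illustrative lambda.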

Frequently the glimpse area is assumed to be some arbitrary constant such as the foveal area or a 5° cone. However, search time is exceedingly dependent on the glimpse area, so that some attempt to approximate it as a function of signal level is warranted. For instance, the assumption that the glimpse area is equal to the fovea, which is approximately a 1° cone, yields a search time 25 times that required assuming a 5° cone for the same level of assurance. For a typical display of 14° × 18° (13 by 18 cm viewed at 56 cm), random search times (assuming replacement) for 90% detection confidence require times as given below:

"0.9 - I - - ±G " I - exp(-AGN/AD). (7-16)

ln(0.1) *- -(A GN/AD), (7-17)

or

N - -(ADI/A) In(0.1), (7.18)

where N is the number of glimpses required and A_D is the area of the display. Equation (7-18) indicates that N_fovea is 739 and N_5° is 30. This result equates to a 246-second search interval for the foveal glimpse and 10 seconds for the 5° glimpse. While a 10-second time might be tactically acceptable, there are few applications in which a 4+ minute search is acceptable. Actual experience indicates search time varies from effectively zero (A_G ≥ A_D), if the signal is sufficiently large and the picture simple, to infinity for very low signals and complex scenes. Thus, assumptions of a glimpse size can lead to staggeringly different calculations of search time and widely divergent conclusions regarding the tactical usefulness of a thermal-imaging-system.
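The glimpse counts and search times quoted above follow directly from Eq. (7-18) with the 1/3-second mean glimpse duration; the short check below treats the "1° cone" and "5° cone" as circular glimpse areas of those full angles:

```python
import math

GLIMPSE_DURATION_S = 1.0 / 3.0          # mean fixation period (Yarbus)

def glimpses_required(display_area, glimpse_area, confidence=0.9):
    """Eq. (7-18): number of random glimpses (with replacement) needed
    to reach the given detection confidence."""
    return -(display_area / glimpse_area) * math.log(1.0 - confidence)

display_area = 14.0 * 18.0              # 14 deg x 18 deg display
for cone_deg in (1.0, 5.0):             # glimpse modeled as a cone of this full angle
    a_g = math.pi * (cone_deg / 2.0) ** 2
    n = glimpses_required(display_area, a_g)
    print(f"{cone_deg:.0f} deg glimpse: N = {n:.0f}, "
          f"search time = {n * GLIMPSE_DURATION_S:.0f} s")
```

This reproduces the 739-glimpse (246 s) and 30-glimpse (10 s) figures in the text, and makes plain how strongly the search-time estimate depends on the assumed glimpse area.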

4. Visual-Lobe Models

Several visual-lobe models have been established as a means of calculating the size of the glimpse for unaided visual search. These models assume that the limitation to detectability is contrast. Although these models have applicability to infrared imaging systems, the basic


SNR_D limitation to visual perception assumed in previous chapters suggests a visual lobe based on SNR_D rather than contrast. Shumaker and Keller (Ref. 7-7) have developed a visual-lobe model for search in simple uncluttered displays based on the SNR_D concept of Worthy and Sendall (Ref. 7-8), which relates the visual-lobe size to the falloff of visual acuity with increasing distance from the fovea. This is a theoretical rather than empirical model consistent with the integral definition of the glimpse area given in Eq. (7-15). In this model the size of the visual lobe is a function of system parameters as well as target size. The radius of the visual lobe in this model is

R_G = 0.14 - 0.63\ln(\zeta) + 3.8[\ln(\zeta)]^2 - R_F,   (7-19)

where R_G is the radius of the glimpse in degrees, R_F is the radius of the fovea, or about 1/2°,

and ζ is a complex function of target and system parameters and SNR_D. Eq. (7-19) is based on the falloff of visual acuity with increasing peripheral angle. This model applies to aperiodic detection only. The model is qualitatively consistent with observed changes in the detectability of targets of unknown display coordinates with SNR_D but has not been verified experimentally. Fig. 7-8 gives the radius of the glimpse calculated using Eq. (7-19) for three systems. The graph illustrates why search in uncluttered displays is virtually instantaneous. As the object SNR_D exceeds threshold (2.8 in this case) the visual lobe grows rapidly to sizes comparable to that typical of display subtense.

Pearson (Ref. 7-9) developed a visual-lobe model similar to that of Shumaker and Keller but applicable to periodic object detection instead of aperiodic object detection. In this case ζ in Eq. (7-19) is given by

\zeta = \left[1 - 2\ln(2.8/SNR_{Df}) / (\pi R^2 f_D^2)\right]^{1/2},   (7-20)

where f_D is the spatial frequency characterizing detection and SNR_Df is the SNR_D calculated at target frequency f_D using the Rosell and Willson methodology for characterizing detection/identification.

Fig. 7-8 — Radius of the visual lobe as a function of SNR_D


This model, like the foregoing aperiodic model, is unproven and is based purely on the falloff of visual acuity with increasing peripheral angle. Furthermore, both of these models suffer from the additional problem of being derived from monocular vision data rather than from normal binocular vision.

Other visual-lobe models, such as that of Lamar (Ref. 7-10), can be applied to thermal-imaging-system imagery, but care must be exercised in so doing to account for any potential differences between contrast-limited and SNR_D-limited conditions. Under contrast-limited conditions, Lamar found the radius of the glimpse to be related to contrast by the following relationships.

C_T = 1.55 + 152/\beta^2, \quad \text{for } \theta < 0.8 \text{ (on axis)},   (7-21)

and

C_T = 1.75\,\theta^2 + 751/\beta^2 + 190/\beta, \quad \text{for } \theta > 0.8,   (7-22)

where β is the angular subtense of the desired object in minutes, θ is the angle off the visual axis in degrees, and C_T is the detectable contrast in percent.

The Lamar visual-lobe model can be used in high-signal conditions if the signal transfer function of the system is known and display contrast is deducible. Under low-signal conditions its application may be somewhat compromised by potential differences between the modulation transfer function (MTF) effects on noise-limited and contrast-limited performance. If second-order effects of noise filtering by the MTF are ignored and if both contrast-limited and noise-limited performance of the eye are assumed to fall off similarly with peripheral angle, Lamar's model can be applied to the low-signal case. This model, like that of Shumaker and Keller, characterizes only aperiodic detection where there is no need for shape information.

5. Probability of Looking at a Displayed Target

Having defined in the previous section a unit of visual search (the glimpse), we can proceed to calculate the probability P_L/D that a displayed object will be enclosed by the glimpse area during the display search process. Frequently P_L/D (the probability of being looked at given it is displayed) is assumed to be simply the ratio of the glimpse size to the display size. However, this assumption is seldom correct and is only a reasonable estimate under very specialized conditions to be discussed later. A general evaluation of P_L/D is given below. The probability statement is formulated as follows:

The display of dimensions X_D by Y_D is subdivided into elemental areas (ΔX, ΔY). If the glimpse is assumed to be a square of area D² (recognizing that the assumption of a square is for simplicity of development only), the probability that it will envelop an object is the sum of a series of terms. The first term is the probability that the object falls in (ΔX, ΔY)_1, multiplied by the probability that a glimpse center falls in a square of dimension D × D centered at (ΔX, ΔY)_1; the second term is the probability that the object falls in (ΔX, ΔY)_2, multiplied by the probability that the glimpse center falls somewhere in an area D × D centered at (ΔX, ΔY)_2; and so on until the display field is covered.



The process is identical to that used for calculation of PFOV. Its integral representation is

PL/D = ∫ (−YD/2 to YD/2) ∫ (−XD/2 to XD/2) PTD(x, y) [∫ (y−D/2 to y+D/2) ∫ (x−D/2 to x+D/2) PG(x', y') dx' dy'] dx dy,  (7-23)

where PL/D is the probability of a displayed object falling within a glimpse, PTD is the target position probability distribution function (pdf) normalized to the probability of being displayed, and PG is the position pdf for glimpses.

a. Glimpse Distribution

Equation (7-23) describes the calculation of PL/D given a knowledge of the glimpse size, glimpse distribution, and displayed object pdf. Paragraph VII C(4) examined potential calculations of the glimpse size. The following subparagraphs will elaborate on formulations and assumptions with regard to determining the distribution of glimpses within (and beyond) the display. The problem can be broken down into structured and unstructured cases.

b. Unstructured Display Search

As indicated in VII C(2), prominent scene features affect the order in which an observer inspects a display, especially if the features impart to the observer a higher or lower expectation of finding a target in their proximity. When no such structure is present on the display to otherwise direct search, a simpler search pattern should be expected. This pattern could still be influenced by expectations concerning the location of the desired object. Furthermore, the pattern could be influenced by the subtense and shape of the display and its immediate surround.

Enoch (Ref. 7-11) performed a study of the natural search patterns of observers using circularly apertured maps as the display to be searched. Equation (7-24) gives a glimpse distribution formulated for rectangular displays from the Enoch data (for 9° and larger displays), assuming that although structure was present on the display, the structure present was everywhere of equal interest to the observer.

PG(x, y) = K exp(−2|x|/XD) exp(−2|y|/YD),  (7-24)

where XD is the display width (angular), YD is the display height (angular), and K = 2.25/(XD YD). This formulation is chosen largely to preserve the analytical integrability of Eq. (7-23) at the expense of more precise formulations that would not be as easily integrable. The equation indicates that the angular size of the display actually influences the radial distribution of glimpses.

If the target position pdf is known, Eq. (7-23) can be evaluated for PL/D. Since in airborne and surface-to-air applications the target pdf is frequently known, because a radar or navigation handoff procedure is used to locate the target, a good estimate of PL/D can be obtained. If the pdf describing either the distribution of glimpses or targets is unknown, then for unstructured displays the best estimate of PL/D is the ratio of the area of the glimpse to the area of the display.
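As an illustrative numeric sketch of Eq. (7-23), the following Monte Carlo estimate (not from the report; the display dimensions, glimpse size, and uniform target pdf are assumed values) draws glimpse centers from the distribution of Eq. (7-24), normalized over the whole plane so that some glimpses fall beyond the display edge, and counts how often the target lands inside the D x D glimpse:

```python
import math
import random

def sample_glimpse(xd, yd):
    """Draw a glimpse center from Eq. (7-24): density proportional to
    exp(-2|x|/XD) exp(-2|y|/YD) (a two-sided exponential per axis)."""
    def one_axis(extent):
        u = random.random()
        sign = 1.0 if random.random() < 0.5 else -1.0
        return sign * (extent / 2.0) * -math.log(1.0 - u)
    return one_axis(xd), one_axis(yd)

def estimate_p_ld(xd, yd, d, trials=100_000, seed=1):
    """Monte Carlo estimate of PL/D, Eq. (7-23), for a uniform target pdf."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        tx = random.uniform(-xd / 2, xd / 2)   # target position (assumed uniform)
        ty = random.uniform(-yd / 2, yd / 2)
        gx, gy = sample_glimpse(xd, yd)        # glimpse center per Eq. (7-24)
        if abs(tx - gx) <= d / 2 and abs(ty - gy) <= d / 2:
            hits += 1
    return hits / trials

# A 3-deg glimpse on a 20 x 15 deg display (angles are illustrative):
print(estimate_p_ld(xd=20.0, yd=15.0, d=3.0))
```

Because the exponential of Eq. (7-24) concentrates glimpses near the display center, the estimate differs from the naive glimpse-area/display-area ratio.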

c. Structured Display Search

In most cases any attempt to predict PL/D as a function of range is subject to considerable conjecture about the influence of displayed image structure. Unfortunately it is in these highly uncertain conditions that the PL/D term can be expected to dominate the overall probability of mission success. The figures from Yarbus (Figs. 7-4 and 7-5) seem to indicate that an observer spends 80 to 90% of his observation time focusing on interest-attracting features which may constitute only a few percent of the area to be searched. Although Eq. (7-23) still predicts the probability of a displayed object falling within a glimpse in this complex case, the function PG is difficult if not impossible to define. However, Eq. (7-23) does indicate why training is such an important factor in sensor use. Training and experience eventually lead to high correlation between the distribution of targets and that of glimpses, since observers eventually learn to look for objects where they have previously found them. This minimizes search time according to Eq. (7-23).

Fig. 7-8 from Ref. 7-12 shows the probability of target detection as a function of time in various degrees of background clutter. This is a composite set of curves comprising a large number of field trials involving various target-to-background contrasts. The interesting feature of the curves is that the time required to find a target is related to the final probability of ever finding it in the time allotted (45 s). It is clear that the easiest targets were not found by a systematic search with an angular aperture of foveal size. Table 7-2 gives values of PL/D for various values of the probability of ever detecting the target (in 45 s) (which is not the same as the static probability of detection) as calculated from Fig. 7-9, assuming random sampling with replacement.
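The "random sampling with replacement" assumption behind Table 7-2 inverts the cumulative-detection relation PD = 1 − (1 − PL/D)^n, giving PL/D = 1 − (1 − PD)^(1/n) for n glimpses in the allotted time. A brief sketch (the glimpse count n is an assumed illustrative value, not one stated in the text):

```python
def p_ld_from_cumulative(p_ever, n_glimpses):
    """Per-glimpse PL/D implied by a cumulative detection probability
    p_ever reached after n_glimpses independent samples with replacement."""
    return 1.0 - (1.0 - p_ever) ** (1.0 / n_glimpses)

# e.g., a 97% chance of ever detecting within the allotted time,
# at an assumed 16 glimpses in that interval:
p = p_ld_from_cumulative(0.97, 16)
print(round(p, 3))   # 0.197

# Consistency check: accumulating p over 16 glimpses recovers 0.97
print(round(1.0 - (1.0 - p) ** 16, 2))   # 0.97
```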

Pearson (Ref. 7-9) developed an alternative to formal application of Eq. (7-23), although it still is based on Eq. (7-23), that takes into account experimental work with slightly cluttered scenes. He breaks up the display into high-interest and low-interest areas and develops a probability (PL/D) assuming simultaneous independent searches of the two areas.

There are several treatments (the GRC model (Ref. 7-13) and Marsam II (Ref. 7-14)) of search in cluttered fields, based on the data of Boynton (Ref. 7-15), which assume that search is dominated by discrimination of the target from great numbers of similar false targets. However, the Boynton work was done with synthetic scenes and does not allow for the effects of visual cues, therefore indicating that a truck on a road is as difficult to find as a truck in a field. Although the degree of similarity between the target and similar nontargets would appear to heavily affect search time, it would seem that in tactical scenes visual cues may dominate search.

Table 7-2 - PL/D as a Function of Probability of Detection

Probability of Detecting in 45 s (%) | PL/D
 7.5 | 0.0005
25   | 0.003
42   | 0.008
69   | 0.03
86   | 0.08
97   | 0.198


[Figure] Fig. 7-9 - Field of view search composite data (probability of detection vs time, 0 to 64 s)

6. Effects of Other Tasking

Enoch indicates that approximately 10% of all glimpses on tactically sized displays are lost to the immediate surroundings. Additionally, in the real world, the observer is tasked with performing functions other than monitoring a sensor display, such as piloting an aircraft, which decreases the probability of looking at a display target in any given time interval. If it is assumed that the distraction of the operator caused by monitoring nondisplay functions is independent of the display glimpse pattern, then the single-glimpse probability of looking at an object can be adjusted as follows to account for this additional tasking:

PL/D = (1 − FN) PL/D(display),  (7-25)

where FN is the fraction of the time the operator spends monitoring nondisplay functions during the detection phase of the scenario.

D. DYNAMIC MODEL FORMULATION

1. Introduction

The previous sections described the various elements that affect physical and visual acquisition. The following subparagraphs will combine the classical calculations of static detection and identification with the elements of visual and physical acquisition to form a dynamic model of thermal imaging system performance.

a. Fundamental Concepts

Prior to synthesizing more general statistical models we will discuss some of the fundamental and elementary concepts for the purpose of gaining physical insight into the acquisition problem.


(1) Detection with a Changing Signal-to-Noise Ratio

In Chapter V, two alternative models were developed for detection of patterns: the aperiodic model, which applies to the detection of rectangles on a uniform background, and the periodic model, for the detection of bars within a pattern. In Chapter VI, methods of describing the visual discrimination of real scene objects in terms of these patterns were discussed. In the following we will assume that the methods devised are applicable. We will designate PD/L as the probability of detecting an object (given it is looked at) irrespective of the detection mechanism or prediction method. PD/L depends on some appropriate SNR irrespective of the detection mechanism. However, when predicting dynamic performance, the interpretation of PD/L is "the fraction of the observer population that can detect the target with the given signal level." This differs somewhat from the classical interpretation of probability of detection based on threshold conditions; however, it is believed representative of tactical decision making.

Let us now consider an SNR level that is changing in time such that PD/L(t) is monotonically increasing. At time t1, PD/L(t1) is the fraction of observers that can detect an object with SNR(t1). At time t2, PD/L(t2) is the fraction of the observers that can detect the object with SNR(t2). The fraction PD/L(t2) contains all of the observers in the PD/L(t1) fraction plus PD/L(t2) − PD/L(t1) observers. The probability that an observer who failed to detect the object at time t1 will be able to detect the target at time t2 is given by [PD/L(t2) − PD/L(t1)]/[1 − PD/L(t1)], which is the probability that the observer's threshold, which could be anywhere above SNR(t1), is actually between SNR(t1) and SNR(t2).

As time proceeds, a greater and greater fraction of observers can detect the object. With each step in time there is a distinct fraction of individuals added to the population of observers that can detect the object. Let us for convenience designate ΔD(ti) as that fraction of the whole population that is added after the ith increase in SNR, and remember that prior to time ti the group ΔD(ti) could not detect the object, but that after time ti they can with unity probability. Note that ΔD(t1) is that fraction of the population that can detect the object with the signal at the first level attempted.
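The population bookkeeping above can be made concrete with a small sketch (all numeric values are illustrative, not from the report): ΔD(ti) is the first difference of the monotone PD/L sequence, and the conditional detection probability of the text is the ratio shown at the end.

```python
def population_increments(p_dl):
    """Delta-D(t_i) = PD/L(t_i) - PD/L(t_{i-1}) for a monotone PD/L sequence."""
    deltas, prev = [], 0.0
    for p in p_dl:
        deltas.append(p - prev)
        prev = p
    return deltas

p_dl = [0.2, 0.5, 0.9]                   # illustrative PD/L(t1), PD/L(t2), PD/L(t3)
print(population_increments(p_dl))       # [0.2, 0.3, 0.4]

# Probability that an observer who failed at t1 can detect at t2:
# [PD/L(t2) - PD/L(t1)] / [1 - PD/L(t1)]
print((p_dl[1] - p_dl[0]) / (1.0 - p_dl[0]))   # approximately 0.375
```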

(2) Simple Search in a High SNR Environment

Suppose that the scene SNR as displayed is high enough so that there is no problem in discerning any target in the scene given that it is looked at. If a single fairly large target is present in the scene, which is otherwise uniform, search will be quickly concluded as indicated in the preceding section. If, however, the scene is cluttered with objects that are close in size and contrast to the target, it may be reasonable to assume that the eye/brain performs a search which might be described as follows. Assume that the display is covered with a mask that covers all of its area with the exception of a small aperture. Finally, let the probability that the object is within the aperture and, therefore, visible to an observer be PL/D (the probability that it is looked at, given it is displayed). If we search the display by moving this aperture about, we can build up a probability function which describes the cumulative probability of detection.

At time t1 the first sample of the displayed imagery is made. The probability that the object is within the aperture is PL/D. If the object is within the aperture, it is readily detected and the search is finished. However, the probability that the object is not found is 1 − PL/D. Thus the probability that the mask will have to be moved to attempt detection again is 1 − PL/D. If detection does not occur, the aperture is rapidly moved to a new location.


If we assume the second position of the aperture is independent of its first position and if the aperture is small compared to the display area, then the probability that the object will be within the aperture on the second sample is also PL/D. The probability of detecting the object after two possible (not necessarily required) samples is given by:

PD(t2) = PL/D + (1 − PL/D) PL/D,  (7-26)

which is the probability of detection on the first sample plus the product of the probability ofnot detecting on the first sample and the probability of detecting on a second sample.

After three glimpses the probability of detection becomes

PD(t3) = PL/D + (1 − PL/D) PL/D + (1 − PL/D)² PL/D.  (7-27)

Reasoning in a similar fashion, the cumulative probability of detection after N samples is:

PD(tN) = Σ (i = 1 to N) PL/D (1 − PL/D)^(i−1).  (7-28)

If it had been assumed that the display was searched in a regular nonredundant pattern, asimilar but more rapidly converging series would have resulted.
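The accumulation of Eqs. (7-26) through (7-28) can be checked numerically; the value of PL/D below is illustrative:

```python
def cumulative_detection(p_ld, n):
    """Eq. (7-28): cumulative detection probability after n independent
    glimpses, each sampling the target location with probability p_ld."""
    total, miss = 0.0, 1.0
    for _ in range(n):
        total += miss * p_ld          # detect now, having missed every prior glimpse
        miss *= (1.0 - p_ld)          # update probability of having missed so far
    return total

# The series sums to the closed form 1 - (1 - PL/D)^N:
print(cumulative_detection(0.1, 10))       # approximately 0.6513
print(1.0 - (1.0 - 0.1) ** 10)             # same value
```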

If we assume that PL/D may change its value with each new sample, Eq. (7-27) becomes

PD(t3) = PL/D(t1) + [1 − PL/D(t1)] PL/D(t2) + [1 − PL/D(t1)][1 − PL/D(t2)] PL/D(t3),  (7-29)

where PL/D(ti) refers to the probability of the object being located within the aperture on the ith sample period. In this case Eq. (7-28) must be generalized to

PD(tN) = Σ (i = 1 to N) PL/D(ti) Π (j = 1 to i−1) [1 − PL/D(tj)].  (7-30)

Although the above formulation is comparatively simple and may actually describe certain visual search conditions, the search process may be much more complex.

(3) Search for an Object That May Be Lost from the Display in a High SNR Environment

Assume that the object displayed in the previous example (simple search in a high SNR environment) is being observed by a TV camera that is pointed at some fixed point on the ground. The object of interest is located at some random location in the field of view. Assume further that the camera is steadily approaching the fixed ground point while the observer searches for the object on the TV display. With a fixed field of view, the "footprint" of the viewfield on the ground will constantly decrease, and the probability PFOV that the object will be in the field will decrease monotonically in turn.

Let us now consider the first sample taken as in the example of VIIDa(2) above. The probability that the object will be detected on the first sample is

pD(t1) = PFOV(t1) PL/D(t1),  (7-31)

which is the product of the probability that the object is within the camera field of view at time t1 and the probability that it is within the sampling aperture, if it is on the display. The probability that the object is not detected on the first sample is

1 − pD(t1) = PFOV(t1) [1 − PL/D(t1)] + [1 − PFOV(t1)],  (7-32)

which is the sum of the probability that the object was in the field of view but was not sampled, and the probability that the object was not in the field of view.

The probability of detecting the object on a second (possible) sample is

pD(t2) = PFOV(t1) [1 − PL/D(t1)] [PFOV(t2)/PFOV(t1)] PL/D(t2) + [1 − PFOV(t1)] PFOV(t2) PL/D(t2),  (7-33)

which is the product of the probability that the object was within the field of view on the first sample but was not sampled and the probability that the object is still on the display at time t2 and is sampled, plus the probability that the target was not in the field of view on the first sample but is in it on the second sample and is sampled. However, this latter probability is zero due to the monotonically decreasing nature of PFOV(t). Thus the total probability of detecting the target on the second sample is

pD(t2) = PFOV(t1) [1 − PL/D(t1)] [PFOV(t2)/PFOV(t1)] PL/D(t2) = [1 − PL/D(t1)] PL/D(t2) PFOV(t2).  (7-34)

The cumulative probability of detecting the target by the second glimpse is:

PD(t2) = pD(t1) + pD(t2)  (7-35)

PD(t2) = PFOV(t1) PL/D(t1) + [1 − PL/D(t1)] PFOV(t2) PL/D(t2).  (7-36)

For the Nth sample this can be given by the series

PD(tN) = Σ (i = 1 to N) PFOV(ti) PL/D(ti) Π (j = 1 to i−1) [1 − PL/D(tj)].  (7-37)

Equation (7-37) illustrates that the function PFOV does not accumulate but just limits the success that can be obtained during the sampling process. PFOV(t) is a function of the initial location of the target with respect to the center of the footprint and of time. The function is ensemble distributed. The PL/D term does accumulate, as it did in the example of VIIDa(2).
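Equation (7-37) evaluates directly once PFOV and PL/D are tabulated per glimpse. In the sketch below (the sequences are illustrative) the shrinking-footprint factor PFOV multiplies each term but never enters the accumulating miss product:

```python
def cumulative_detection_fov(p_fov, p_ld):
    """Eq. (7-37): PFOV(t_i) limits each term; only (1 - PL/D) accumulates."""
    total, miss = 0.0, 1.0
    for pf, pl in zip(p_fov, p_ld):
        total += pf * pl * miss       # in footprint, sampled, not found earlier
        miss *= (1.0 - pl)
    return total

p_fov = [1.0, 0.9, 0.7, 0.4]          # monotonically shrinking footprint probability
p_ld  = [0.2, 0.2, 0.2, 0.2]          # constant per-glimpse sampling probability
print(cumulative_detection_fov(p_fov, p_ld))
```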

(4) Search for an Increasingly Detectable Object

Let us now combine the examples in VIIDa(1) and VIIDa(2) to demonstrate search for an object which becomes more detectable as time goes by. We will assume that the display is searched with the aperture of VIIDa(2) and that the probability of the object being within the aperture is PL/D(t). However, if the object is within the aperture, it is detectable only by that fraction of the population, PD/L(t), determined by the SNR(t). (See VIIDa(1).)

The probability of detecting the object on the first sample is

pD(t1) = ΔD(t1) PL/D(t1).  (7-38)

The probability of detection is the product of the probability that the object is within the sample aperture and the fraction of the population for which the object is detectable on the first sample.


Between the first and second samples the SNR increases to SNR(t2). On the second sample two potential sets of observers must be considered: those observers ΔD(t1) who could have detected the object on the first sample but did not because the target was not in the aperture, and those observers ΔD(t2) for which the signal level was too low on the first sample for detection to occur whether or not the object was in the sample aperture.

For the fraction of the population ΔD(t1), the probability on the second sample is

p1(t2) = [1 − PL/D(t1)] PL/D(t2),  (7-39)

just as indicated in VIIDa(2), since for this segment of the population the object was always "readily detectable."

For the fraction of the population represented by ΔD(t2) the object could not be detected on the first sample whether the target was in the sample aperture or not. Therefore, the probability of detection on the second sample is

p2(t2) = PL/D(t2).  (7-40)

The total probability of detecting the object on the second sample is

pD(t2) = ΔD(t1) p1(t2) + ΔD(t2) p2(t2)  (7-41)

pD(t2) = ΔD(t1) [1 − PL/D(t1)] PL/D(t2) + ΔD(t2) PL/D(t2).  (7-42)

The cumulative probability of detection by the second sample is

PD(t2) = pD(t1) + pD(t2)  (7-43)

PD(t2) = ΔD(t1) {PL/D(t1) + [1 − PL/D(t1)] PL/D(t2)} + ΔD(t2) PL/D(t2).  (7-44)

From inspection it can be seen that with each additional sample a series exactly like that of VIIDa(2) is begun, prefixed by the ΔD term appropriate to the SNR on that sample. For sample 3, pD(t3) is

pD(t3) = ΔD(t1) [1 − PL/D(t1)][1 − PL/D(t2)] PL/D(t3) + ΔD(t2) [1 − PL/D(t2)] PL/D(t3) + ΔD(t3) PL/D(t3).  (7-45)

The series representation for the cumulative probability of detection is

PD(tN) = Σ (i = 1 to N) ΔD(ti) Σ (j = i to N) PL/D(tj) Π (k = i to j−1) [1 − PL/D(tk)].  (7-46)

We see in Eq. (7-46) that the ΔD portion of the probability of detection does not accumulate as does the PL/D portion. Also we see that the two contributors to the probability of detection cannot be separated into simple multiplicative functions, one describing detectability and one describing the probability of a successful search. This is an error frequently made in search models.
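This non-separability is easy to demonstrate numerically. The sketch below (the ΔD and PL/D sequences are illustrative) evaluates Eq. (7-46) and compares it with the erroneous factored form [sum of ΔD] times [cumulative search probability]:

```python
def detection_growing_snr(delta_d, p_ld):
    """Eq. (7-46): each population fraction Delta-D(t_i) starts its own
    search series at the glimpse on which the SNR first suffices."""
    n = len(p_ld)
    total = 0.0
    for i in range(n):                # fraction of observers added at glimpse i
        series, miss = 0.0, 1.0
        for j in range(i, n):         # their search series from glimpse i onward
            series += p_ld[j] * miss
            miss *= (1.0 - p_ld[j])
        total += delta_d[i] * series
    return total

delta_d = [0.3, 0.3, 0.4]             # Delta-D(t1..t3): everyone able by glimpse 3
p_ld    = [0.2, 0.2, 0.2]

exact = detection_growing_snr(delta_d, p_ld)
separable = sum(delta_d) * (1.0 - (1.0 - 0.2) ** 3)   # the erroneous factored form
print(exact, separable)               # exact is smaller: late fractions search less
```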

We can readily generalize from Eq. (7-46) the series representation of the search for an increasingly detectable target which may be lost from the field of view (a generalization of example VIIDa(3)). The equation for this problem is:

PD(tN) = Σ (i = 1 to N) ΔD(ti) Σ (j = i to N) PFOV(tj) PL/D(tj) Π (k = i to j−1) [1 − PL/D(tk)].  (7-47)


(5) Conclusions Regarding Accumulation of Probabilities

In the foregoing examples, in particular example 4, we have described the interaction of three probability functions which were used to describe visual search and detection. All three functions behaved differently. All three probability functions change value during the experiments. However, the probability of detecting the object given it is looked at (PD/L) monotonically increased, the probability that the object was in the viewfield footprint (PFOV) monotonically decreased, and PL/D, the probability that the object was in the aperture, was unconstrained. However, the significant difference between the three is their statistical interpretation. PD/L, and thus ΔD, describes the statistical behavior of an ensemble of observers. PFOV describes the statistical fluctuation within an ensemble of initial experimental conditions, e.g., camera location with respect to target location. PL/D describes the temporal statistics of random sampling. In Section b that follows, these three functions (and three others) are related to tactical performance using the statistical behavior of the functions that was explored above.

b. Dynamic Detection

Let us now look at the probability of detecting a target using the general scenario given in paragraph A3 and the principles developed in this chapter. In addition to the assumptions already stated regarding the behavior of each of the probability functions, we will assume that detection occurs in a single glimpse. This assumption is extremely questionable, for it appears that in many detection problems actual detection results from the building up of "confidence" that a potential target is actually the target through a series of successive glimpses. However, single-glimpse detection is widely assumed by the industry and will be assumed herein for its mathematical simplicity. The probability of detecting the target at the time (t1) of the first visual sample or glimpse is given by:

pD(t1) = ΔD(t1) PLOS(t1) PSF(t1) PFOV(t1) PL/D(t1),  (7-48)

which is simply the product of all of the statistical elements affecting detection. The probability that the target will not be detected on the first glimpse by the ΔD(t1) fraction of observers is a function of:

1 − PLOS(t1) -- the probability that the line of sight to the target is blocked;

1 − PSF(t1) -- the probability that the target is not in the search field;

1 − PFOV(t1) -- the probability that the field of view of the thermal imaging system was not directed at the target at time t1; and

1 − PL/D(t1) -- the probability that, if the target was displayed, it was not within the observer's glimpse (or visual lobe).

Since, if the target is not in the search field at time t1 it will not subsequently become so, the PSF(t1) term does not contribute to future detections. The only way the set of observers ΔD(t1) could fail to detect the target on the first glimpse and detect it later is if they do not physically or visually sample the target or the line of sight is blocked.

The combined miss-sampling of the target area consists of

P̄S(t1) = [1 − PFOV(t1)] + PFOV(t1) [1 − PL/D(t1)],  (7-49)


that is, the target was not in the field of view or was in the field of view but was not visually sampled. Equation (7-49) can be rewritten

P̄S(t1) = 1 − PFOV(t1) PL/D(t1).  (7-50)

The probability that the ΔD(t1) fraction of the population will detect the object on a second glimpse is given by the sum of two terms. The first term is the probability that the object was in the search field and had a clear line of sight on the first glimpse, PSF(t1) PLOS(t1), but the object location was not sampled, [1 − PFOV(t1) PL/D(t1)], times the probability that the object is still in the search field, PSF(t2)/PSF(t1), still has a clear line of sight (probability of 1), and the observer (physically and visually) samples the object area on the second glimpse, PFOV(t2) PL/D(t2), with the product of these probabilities being

PSF(t1) PLOS(t1) [1 − PFOV(t1) PL/D(t1)] [PSF(t2)/PSF(t1)] PFOV(t2) PL/D(t2).

The second term is the probability that the object was in the search field on the first glimpse, PSF(t1), but did not have a clear line of sight, [1 − PLOS(t1)]; achieves a clear line of sight on the second glimpse, [PLOS(t2) − PLOS(t1)]/[1 − PLOS(t1)] (see VIIDa(1)); is still in the search field, PSF(t2)/PSF(t1); and is sampled on the second glimpse, PFOV(t2) PL/D(t2); with the product of these probabilities being

[PLOS(t2) − PLOS(t1)] PSF(t2) PFOV(t2) PL/D(t2).

The probability that the ΔD(t1) fraction of observers will detect the object on a second glimpse is the sum of the two terms:

p1(t2) = PLOS(t1) [1 − PFOV(t1) PL/D(t1)] PSF(t2) PFOV(t2) PL/D(t2) + [PLOS(t2) − PLOS(t1)] PSF(t2) PFOV(t2) PL/D(t2).  (7-51)

For the observers represented by ΔD(t2), detection prior to the second glimpse was not possible because the SNR was too low. For this set of observers the probability of detecting the object on the second glimpse is

p2(t2) = PLOS(t2) PSF(t2) PFOV(t2) PL/D(t2).  (7-52)

The total probability of detecting the object on a second glimpse is

pD(t2) = ΔD(t1) p1(t2) + ΔD(t2) p2(t2).  (7-53)

The cumulative probability of detection by the second glimpse is

PD(t2) = ΔD(t1) [p1(t1) + p1(t2)] + ΔD(t2) p2(t2).  (7-54)

Although more complicated than the simple examples, it can be seen by inspection that the probabilities of detection form a series similar to that of Eq. (7-47).

The propagation of terms resulting from the passage of time is given for the first five glimpses in Table 7-3. Table 7-4 provides a reorganization of Table 7-3. The series representation of the probability of detection as derived from Table 7-4 is

PD(tN) = Σ (i = 1 to N) ΔD(ti) Σ (j = i to N) PSF(tj) PFOV(tj) PL/D(tj) × {PLOS(ti) Π (k = i to j−1) [1 − PFOV(tk) PL/D(tk)] + Σ (m = i+1 to j) [PLOS(tm) − PLOS(tm−1)] Π (k = m to j−1) [1 − PFOV(tk) PL/D(tk)]}.  (7-55)


Table 7-3 - Progression of Glimpses. The ith Element Is to Be Multiplied by Si Pi ΔD(ti), Where Si = PSF(ti) and Pi = PL/D(ti) PFOV(ti). Further element notation is Li = PLOS(ti) and P̄i = 1 − Pi.

Glimpse No. | ΔD(t1) | ΔD(t2) | ΔD(t3) | ΔD(t4) | ΔD(t5)
1 | L1 | 0 | 0 | 0 | 0
2 | L2 − L1P1 | L2 | 0 | 0 | 0
3 | L3 − L2P2 − L1P1P̄2 | L3 − L2P2 | L3 | 0 | 0
4 | L4 − L3P3 − L2P2P̄3 − L1P1P̄2P̄3 | L4 − L3P3 − L2P2P̄3 | L4 − L3P3 | L4 | 0
5 | L5 − L4P4 − L3P3P̄4 − L2P2P̄3P̄4 − L1P1P̄2P̄3P̄4 | L5 − L4P4 − L3P3P̄4 − L2P2P̄3P̄4 | L5 − L4P4 − L3P3P̄4 | L5 − L4P4 | L5

Table 7-4 - Reorganized Glimpse Progression. The ith Element Is to Be Multiplied by Si Pi ΔD(ti); Si, Pi, and Li are the same notation as in Table 7-3, with P̄i = 1 − Pi and Di = Li − Li−1.

Glimpse No. | ΔD(t1) | ΔD(t2) | ΔD(t3) | ΔD(t4) | ΔD(t5)
1 | L1 | 0 | 0 | 0 | 0
2 | L1P̄1 + D2 | L2 | 0 | 0 | 0
3 | L1P̄1P̄2 + D2P̄2 + D3 | L2P̄2 + D3 | L3 | 0 | 0
4 | L1P̄1P̄2P̄3 + D2P̄2P̄3 + D3P̄3 + D4 | L2P̄2P̄3 + D3P̄3 + D4 | L3P̄3 + D4 | L4 | 0
5 | L1P̄1P̄2P̄3P̄4 + D2P̄2P̄3P̄4 + D3P̄3P̄4 + D4P̄4 + D5 | L2P̄2P̄3P̄4 + D3P̄3P̄4 + D4P̄4 + D5 | L3P̄3P̄4 + D4P̄4 + D5 | L4P̄4 + D5 | L5


Equation (7-55) is complicated due to a desire to make it adaptable to the widest possible range of applications. However, it will simplify considerably under particular mission constraints.
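A direct implementation of the glimpse bookkeeping summarized in Tables 7-3 and 7-4 can be sketched as follows (the inputs are illustrative; Si = PSF(ti), Pi = PFOV(ti) PL/D(ti), Li = PLOS(ti), and Di = Li − Li−1, as in the tables):

```python
def detection_full_scenario(delta_d, p_sf, p_fov, p_ld, p_los):
    """Cumulative PD(t_N) per Eq. (7-55) / Table 7-4.  For each fraction
    Delta-D(t_i) a line-of-sight weight w is carried forward: a miss
    multiplies it by (1 - P_k), and newly unblocked observers add
    D_j = L_j - L_{j-1}."""
    n = len(p_ld)
    p = [p_fov[j] * p_ld[j] for j in range(n)]   # combined sampling prob P_j
    total = 0.0
    for i in range(n):
        w = p_los[i]                              # seeded with L_i at glimpse i
        for j in range(i, n):
            if j > i:
                w = w * (1.0 - p[j - 1]) + (p_los[j] - p_los[j - 1])
            total += delta_d[i] * p_sf[j] * p[j] * w
    return total

# Two-glimpse illustration; with PLOS and PSF equal to 1 this reduces to Eq. (7-46).
pd = detection_full_scenario(delta_d=[0.5, 0.5], p_sf=[1.0, 0.9],
                             p_fov=[1.0, 1.0], p_ld=[0.3, 0.3],
                             p_los=[0.8, 0.9])
print(pd)
```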

c. Dynamic Identification

To calculate the probability of identifying a target, the following have been assumed:

* A target is identified on a single glimpse, which may be the glimpse on which it was detected.

* The distribution of identification thresholds among the population is independent of the distribution of detection thresholds (e.g., a person with a low threshold for detection may have a high threshold for identification).

* Prior to time t = 0 there is no possibility of identifying the target.

With these assumptions a development similar to that used to calculate the dynamic probability of detection can be used to calculate the dynamic probability of identification.

The probability of identifying the object on the first glimpse is

pID(t1) = PD(t1) ΔI(t1),  (7-56)

which is the product of the probability that the object is detected on the first glimpse and the fraction of the population that can identify the object with the signal (calculated for the appropriate identification task) present at time t1. (Note that ΔI is analogous to ΔD; i.e.,

ΔI(ti) = PI/D(ti) − PI/D(ti−1).)

For the ΔI(t1) fraction of the population, the probability that the target is not identified on the first glimpse is 1 − PD(t1), which is the probability that the object had not been detected by or on, as in this case, the first glimpse. The probability that the ΔI(t1) population will identify the object on a second glimpse is

p1(t2) = PD(t2) − PD(t1).  (7-57)

The probability that the ΔI(t2) population can identify the object on the second glimpse is

p2(t2) = PD(t2),  (7-58)

which is the probability that the object has been detected by the second glimpse.

The total probability of identification on the second glimpse is

pID(t2) = ΔI(t1) p1(t2) + ΔI(t2) p2(t2)  (7-59)

pID(t2) = ΔI(t1) [PD(t2) − PD(t1)] + ΔI(t2) PD(t2).  (7-60)

The cumulative probability of identification by the second glimpse is

PID(t2) = ΔI(t1) [p1(t1) + p1(t2)] + ΔI(t2) p2(t2).  (7-61)

But

p1(t1) + p1(t2) = p2(t2) = PD(t2).  (7-62)


Therefore, Eq. (7-61) can be rewritten

PID(t2) = ΔI(t1) PD(t2) + ΔI(t2) PD(t2)  (7-63)

PID(t2) = PD(t2) [ΔI(t1) + ΔI(t2)]  (7-64)

PID(t2) = PD(t2) PI/D(t2).  (7-65)

Thus we find that the probability of identification by the second glimpse is simply the product of the probability of having detected the object by the second glimpse and the fraction of the population for which the object is identifiable by the second glimpse.

Inspection of the third and subsequent glimpses shows that on the Nth glimpse the cumulative probability of identification is

PID(tN) = PD(tN) PI/D(tN).  (7-66)

Equations (7-55) and (7-66) give formulations for the probability of detection and identification, respectively, in the dynamic scenario given in Section A. The analysis has been generalized to accommodate most applications, but there are thermal-imaging-system applications, such as "snow plow" search, for which this analytical structure is not appropriate. When Eqs. (7-55) and (7-66) do not apply, the appropriate scenario can be synthesized from the six probability functions described, or others, using the approaches developed in this section.
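Under the single-glimpse and independent-threshold assumptions, the dynamic identification calculation reduces, per Eq. (7-66), to a per-glimpse product; a minimal sketch with illustrative sequences:

```python
def cumulative_identification(p_d, p_id):
    """Eq. (7-66): PID(t_N) = PD(t_N) * PI/D(t_N) at every glimpse N."""
    return [pd * pi for pd, pi in zip(p_d, p_id)]

# Illustrative cumulative detection PD(t_N) (e.g., from Eq. (7-55)) and the
# fraction PI/D(t_N) of the population able to identify at the current SNR:
p_d  = [0.20, 0.45, 0.70]
p_id = [0.05, 0.20, 0.50]
print(cumulative_identification(p_d, p_id))
```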

E. EXAMPLE OF A DYNAMIC MODEL

The following dynamic problem is analyzed for two systems to illustrate the effects of mission parameters on sensor system tradeoffs.

The first system (system A) is a nominal FLIR easily fabricated within current technology. The second, higher resolution FLIR (system B) is obtained using the same focal plane and processing as the first by increasing the focal length by a factor of 2.5. In order not to compromise the sensitivity and resolution of the second FLIR, it is also assumed that aperture and stabilization are improved by a factor of 2.5 over system A. Since the focal plane was kept constant, system B has a FOV smaller than system A by a factor of 2.5. The penalties for high resolution in this case include increased aperture and stabilization, which in turn increase cost, weight, drag, and complexity significantly while decreasing reliability and field of view.

The scenario for this example is an air-to-surface detection and identification of a target in clear weather, with the aircraft flying at 500 kt at a constant altitude of 900 m. The relative humidity is 80%, and the air temperature is 21°C (70°F). The target area is detected by radar initially and is handed off to the thermal imaging system at 24 km (80 000 ft) slant range, with computer-aided tracking of the target area after handoff. No search with the field of view is used. A further assumption is that the system is not DC restored. The radar accuracy assumed in the problem is:

Azimuth: σ = 20 mrad
Elevation: σ = 14 mrad


The assumptions given above indicate that two of the six probability functions developedpreviously are not required in this problem. Since it is specified that the mission is flown atconstant altitude, PLOS can be removed from Eq. (7-55) and become a constant multiplier ofthe results. Furthermore, since no search with the field of view is permitted, the field of viewand the search field are the same and Prov is therefore unity. The general probabilistic equa-tion for detection is:

P)(0) - PLOS AD(O,, PLoD(I) Psr(') iPLo(t), (7-67)

which is identical to Eq. (7-47) for search for an increasingly detectable target that may be lost from the display due to a shrinking FOV footprint.
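The accumulation expressed by Eq. (7-67) can be sketched as a running sum over glimpses. This is an illustrative reading of the equation only; the function name and the sample per-glimpse values are ours, not the report's.

```python
def cumulative_detection(p_los, delta_d, p_ld, p_sf):
    """Accumulate Eq. (7-67): PD(ti) = PLOS * sum over glimpses j <= i of
    dD(tj) * PL/D(tj) * PSF(tj). All arguments but p_los are per-glimpse lists."""
    running = 0.0
    history = []
    for dd, pl, ps in zip(delta_d, p_ld, p_sf):
        running += dd * pl * ps           # increment contributed by glimpse j
        history.append(p_los * running)   # cumulative PD(ti)
    return history

# Illustrative per-glimpse values only
print(cumulative_detection(1.0, [0.2, 0.3, 0.1], [1.0, 1.0, 0.9], [1.0, 0.9, 0.8]))
```

The returned sequence is nondecreasing, reflecting that detection probability can only accumulate as glimpses are added.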

The probability of identification is:

PID(tN) = PI/D(tN) PD(tN).   (7-68)

The parameters in Table 7-5 define the two systems being analyzed.

Table 7-5 - Two Systems Being Analyzed

Parameter                          System A    System B
NET (°C)                           0.25        0.25
Frame rate (Hz)                    15          15
Overscan                           2:1         2:1
Aperture (cm)                      15          38
Stabilization (mrad rms)           0.1         0.04
Number of detectors                178         178
Field of view (mrad)               87 x 122    35 x 49
Display size (cm CRT)              20          20
Viewing distance (cm)              56          56
Visual-field brightness (cd/m²)    3           3

The value of PLOS can be derived from statistics of cloud cover and terrain masking for the area but will be assumed to be unity here for simplicity. The functions PSF(ti) and PL/D(ti) were calculated as indicated in this chapter.

The terms ΔD and ΔI are calculated using the aperiodic and periodic SNRD equations of Chapter VI to calculate PD/L and PI/D, respectively.

The NET of both systems is 0.25°C; compared on this basis alone, one would project equal performance from both systems. System B, however, has a distinctly superior MRT (and MDT), scaled in frequency by 2.5:1 over system A. Based on MRT, system B is distinctly superior to system A.


D. SCHUMAKER

The results of a single dynamic mission characterization for systems A and B are given in Figs. 7-10 and 7-11, respectively. The static detection and identification ranges for the systems at 90% confidence, read from the PD/L and PI/D curves of Figs. 7-10 and 7-11, are:

                      System A    System B
Detection (km)        13          21
Identification (km)   1.2         3

Although the MRT values for the two systems are in a frequency ratio of 2.5:1, static detection range is in a ratio of only 1.6:1 due to atmospheric effects and the aperiodic nature of detection.

Identification ranges, less affected by atmospheric attenuation, remain in a 2.5:1 ratio.
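The compression of the detection-range ratio can be illustrated with a simple exponential-attenuation sketch: if the apparent signature falls off as ΔT(R) = ΔT0 exp(-σR), an improvement in required ΔT buys only a fixed range increment ln(ratio)/σ, so range ratios shrink as attenuation grows. This toy calculation is ours (a real analysis uses the range-dependent MRT/MDT curves, not a single fixed threshold):

```python
import math

def static_range_km(delta_t0, sigma_per_km, required_dt):
    """Range at which an exponentially attenuated signature
    dT(R) = dT0 * exp(-sigma * R) just meets a fixed required dT."""
    if required_dt >= delta_t0:
        return 0.0
    return math.log(delta_t0 / required_dt) / sigma_per_km

# Illustrative numbers: a 2.0 C signature, 0.2/km extinction, 0.25 C required
print(round(static_range_km(2.0, 0.2, 0.25), 2))
```

Halving the required ΔT adds the same ln(2)/σ kilometers at any starting range, which is why a 2.5:1 sensor advantage yields much less than a 2.5:1 detection-range advantage through an attenuating atmosphere.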

Dynamic performance is a more complex picture. Comparing the PD curves, we find that system B can be used to detect the target at longer range than system A. However, system B fails to provide detection at all in better than 50% of all missions, because the small field of view is insufficient to encompass the uncertainty of target location when the target becomes detectable. System A provides shorter range but does so on better than 75% of the missions, rather than on 46% as does system B.

The cumulative probability of detection limits the probability of identification, so that although system B can provide identification ranges greater than system A by a factor of 2.5:1, 50% more missions are fruitless with system B than with system A. Limitations to the minimum acceptable ranges combined with these dynamic performance graphs determine which

Fig. 7-10 - Dynamic performance, system A (probability curves versus range to the target, km)


Fig. 7-11 - Dynamic performance, system B (probability curves versus range to the target, km)

system is tactically superior. If no minimum range limitations exist, system A is superior to system B, since it will provide target identification on 50% more missions than will system B. In addition, if the line of sight were obscured by terrain or foliage until ranges on the order of 1 km, as could easily happen in some locations, system A would provide superior performance.

REFERENCES

7-1 AASC Study Group No. 5, Final Report for the Hanover Area.

7-2 USAF Environmental Technical Applications Center Project 7537, Cloud-Free Line-of-Sight Probabilities for 10384, Berlin/Tempelhof Airport, May 1974.

7-3 R. Erickson, "Visual Search for Targets: Laboratory Experiments," Naval Ordnance Test Station Technical Publication 3328, October 1964.

7-4 S. W. Smith, "Time Required for Target Detection in Complex Abstract Visual Display," Project Michigan Memo 2900-235-R, April 1961.

7-5 A. L. Yarbus, Eye Movements and Vision, Plenum Press, New York, 1967.

7-6 L. E. Williams, D. D. Fairchild, C. P. Graf, J. F. Joula, and G. A. Trumm, "Visual Search Effectiveness for Night Vision Devices," Report for Army Electronics Command Night Vision Laboratory, DAAK02-67-C-0472.


7-7 D. L. Shumaker and R. B. Keller, "A Mathematical Model of FLIR System Performance in Target Detection and Identification," Final Report on Contract N00019-74-C-021, July 1974.

7-8 Discussions with N. Worthy and R. Sendall.

7-9 G. E. Pearson and D. L. Shumaker, "A Mathematical Model of FLIR System Performance in Target Detection and Interpretation, Mod 1," Final Report on Contract N00019-75-C-0261, December 1975.

7-10 E. S. Lamar et al., from C. P. Greening, "Acquisition Model Evaluation," Final Summary Report, NWC Report TP 5536, June 1973.

7-11 J. M. Enoch, "Effect of the Size of a Complex Display on Visual Search," J. Opt. Soc. Am. 49(3), 208 (March 1959).

7-12 Remotely Piloted Aerial Observation/Designation System (RPAODS) Field Experiment, Phase 1: Philco-Ford Praeire Daylight Television Sensor, R and D Tech. Rept. ECM 7040, Ft. Monmouth, N.J., February 1975.

7-13 Personal discussions with A. D. Stathaopoulos, June 1974; also see Ref. 7-12.

7-14 See Ref. 7-9.

7-15 R. M. Boynton and W. R. Bush, "Laboratory Studies Pertaining to Visual Air Reconnaissance," AD 118 250, April 1957.


Appendix A

NOMENCLATURE, UNITS, AND SYMBOLS

A1. Radiometric and Photometric Nomenclature

FLIR technology involves two sets of nomenclature, one for radiometric terms and one for photometric terms. Unfortunately, workers in various fields have adopted certain terms that are often foreign to workers in other fields. For example, astronomers use star magnitude, and probably always will, so someone using astronomical data must convert it to illuminance, radiance, or some other unit befitting the problem at hand. For performance modeling, and for thermal imaging systems in general, the nomenclature of the CIE (the International Commission on Illumination) is recommended as being the least controversial and the most widely used in publications on radiometry and photometry. The CIE nomenclature and symbols for radiometric and photometric terms are given in Tables A-1 and A-2.

One of the problems encountered in making photometric measurements results from the fact that they are supposed to be representative of what the human eye would observe. Since everyone's eye is a little different, a standard response curve, called the spectral luminous efficiency for the CIE Standard Photometric Observer, has been established and is shown in Fig. A-1. Note that the spectral response of the eye varies with adaptation; curves are therefore shown for photopic response, or cone vision, which occurs when the eye is adapted to "room light," and for scotopic response, or rod vision, which occurs when the eye is dark-adapted. The numerical values of Fig. A-1 are tabulated in Table A-3 (Ref. A-3).

A2. International System of Units

The base units of the International System (SI) are given in Table A-4. Their magnitudes and names are well known, except perhaps the candela. The candela is officially defined as "the luminous intensity, in the perpendicular direction, of a surface 1/600,000 square meter of a blackbody at the temperature of freezing platinum under a pressure of 101,325 newtons per square meter." Prefixes for the SI units are given at the bottom of the table.

Unfortunately, one frequently encounters photometric units in the literature which do not appear in Table A-2; for example, nearly all the units given in Table A-5 are fairly common. For this reason these non-SI photometric units, together with the factors for converting them to SI units, are listed in Table A-5.

Fig. A-1 - Spectral luminous efficiency for the CIE standard photometric observer


Table A-1 - International Commission on Illumination Comparison Chart of Nomenclature, Units, and Symbols for Radiometry and Photometry

Radiometric quantities:
  Radiant energy, Qe: energy emitted, transferred, or received in the form of radiation. Unit: joule (J).
  Radiant flux (radiant power), Φe = dQe/dt. Unit: watt (W).
  Radiant intensity (point source), Ie = dΦe/dΩ: radiant power per unit solid angle. Unit: watt per steradian (W/sr).
  Radiant exitance (source), Me = dΦe/dA: the flux leaving a surface per unit area. Unit: watt per square meter (W/m²).
  Irradiance (receiver), Ee = dΦe/dA: the flux incident on a surface per unit area. Unit: watt per square meter (W/m²).
  Radiant exposure (receiver), He = dQe/dA = ∫Ee dt: surface density of the radiant energy received. Unit: joule per square meter (J/m²).
  Radiance (universal; source, transit, or received), Le = d²Φe/(dA cosθ dΩ): flux per unit projected area per unit solid angle in a given direction. Unit: watt per steradian per square meter (W sr⁻¹ m⁻²).

Photometric quantities:
  Luminous energy (quantity of light), Qv. Unit: lumen-second (lm·s).
  Luminous flux, Φv = dQv/dt. Unit: lumen (lm).
  Luminous intensity (point source), Iv = dΦv/dΩ. Unit: candela (cd), i.e., lumen per steradian; "candlepower" designates a luminous intensity expressed in candelas.
  Luminous exitance (source), Mv = dΦv/dA: the luminous flux leaving a surface per unit area. Unit: lumen per square meter (lm/m²).
  Illuminance (receiver), Ev = dΦv/dA: the luminous flux incident on a surface per unit area. Unit: lux (lx), i.e., lumen per square meter.
  Light exposure (receiver), Hv = dQv/dA = ∫Ev dt: quantity of light received per unit area. Unit: lux-second (lx·s).
  Luminance (universal; source, transit, or received), Lv = d²Φv/(dA cosθ dΩ): luminous flux per unit projected area per unit solid angle in a given direction. Unit: candela per square meter (cd/m², the nit).

Note 1: Symbol subscripts e (for radiometry) and v (for photometry) may be dropped if no confusion is likely to arise.

Note 2: Symbols may be further subscripted with the wavelength λ (e.g., Qλ) to indicate the quantity per unit wavelength interval; such quantities are then described by the adjective "spectral."


Table A-2 - International Commission on Illumination International Lighting Vocabulary and Other Radiometric and Photometric Quantities

Spectral luminous efficiency: V(λ), photopic vision, adopted by the CIE in 1924; V'(λ), scotopic vision, provisionally adopted by the CIE for young observers. Dimensionless.

Luminous efficacy of radiation: K for complex radiation, K(λ) for monochromatic radiation. Ratio of the luminous flux to the corresponding radiant flux: K = Φv/Φe; K(λ) = Φvλ/Φeλ. Unit: lumen/W.

Luminous efficacy of a source: ηv, η. Ratio of the luminous flux emitted to the power consumed. Unit: lumen/W.

Maximum spectral luminous efficacy: Km. The maximum value of K(λ); its value is Km = 680 lm/W at about 555 nm for the CIE standard photometric observer in photopic vision. Unit: lumen/W.

Luminous efficiency: V(Φ). Ratio of the luminous efficacy for complex radiation to that for the same radiant power at the wavelength of maximum photopic response: V(Φ) = [∫Φeλ V(λ) dλ]/[∫Φeλ dλ] = K/Km. Dimensionless.

Emissivity of a thermal radiator: ε. Ratio of the thermal radiant exitance to that of a blackbody at the same temperature: ε = [Me, thermal]/[Me, blackbody].

Directional emissivity of a thermal radiator: ε(θ, φ). Ratio of the thermal radiance to that of a blackbody at the same temperature: ε(θ, φ) = [Le, thermal]/[Le, blackbody].

Geometric extent: G, where dG = dA cosθ dΩ. Unit: meter²·steradian.

Optical extent: n²dG, where n is the refractive index.

Basic radiance or basic luminance: Le/n² and Lv/n², a constant in a medium with no losses or scattering.


Table A-3 - Relative Spectral Luminous Efficiency Values of the Human Eye

Wavelength (nm)   Photopic V(λ), L > 3 cd/m²   Scotopic V'(λ), L < 3 × 10⁻⁵ cd/m²
380               0.00004                      0.00059
390               0.00012                      0.00221
400               0.0004                       0.00929
410               0.0012                       0.03484
420               0.0040                       0.0966
430               0.0116                       0.1998
440               0.0230                       0.3281
450               0.0380                       0.4550
460               0.0600                       0.5672
470               0.0910                       0.6756
480               0.1390                       0.7930
490               0.2080                       0.9043
500               0.3230                       0.9817
510               0.5030                       0.9966
520               0.7100                       0.9352
530               0.8620                       0.8110
540               0.9540                       0.6497
550               0.9950                       0.4808
560               0.9950                       0.3288
570               0.9520                       0.2076
580               0.8700                       0.1212
590               0.7570                       0.0655
600               0.6310                       0.03325
610               0.5030                       0.01593
620               0.3810                       0.00737
630               0.2650                       0.003335
640               0.1750                       0.001497
650               0.1070                       0.000677
660               0.0610                       0.0003129
670               0.0320                       0.0001480
680               0.0170                       0.0000716
690               0.0082                       0.00003533
700               0.0041                       0.00001780
710               0.0021                       0.00000914
720               0.00105                      0.00000476
730               0.00052                      0.00000254
740               0.00025                      0.000001379
750               0.00012                      0.000000760
760               0.00006                      0.000000425
770               0.00003                      0.000000241
780               -                            0.000000139
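The photopic weighting tabulated above is what converts radiant flux into luminous flux: Φv = Km ∫ V(λ) Φe,λ dλ with Km = 680 lm/W (Table A-2). A coarse rectangular-rule sketch, using 50-nm samples of V(λ) drawn from Table A-3 (the function name and the flat illustrative spectrum are ours):

```python
# Coarse 50-nm photopic V(lambda) samples from Table A-3
V = {400: 0.0004, 450: 0.0380, 500: 0.3230, 550: 0.9950,
     600: 0.6310, 650: 0.1070, 700: 0.0041}
KM = 680.0  # lm/W, maximum spectral luminous efficacy

def luminous_flux_lm(spectral_flux_w_per_nm, step_nm=50.0):
    """Rectangular-rule estimate of Phi_v = Km * integral of V(l)*Phi_e,l dl."""
    return KM * step_nm * sum(V[l] * spectral_flux_w_per_nm[l] for l in V)

# 1 W of radiant flux spread uniformly over 400-700 nm (illustrative spectrum)
flat = {l: 1.0 / 300.0 for l in V}
print(round(luminous_flux_lm(flat), 1))
```

A finer wavelength step (e.g., the full 10-nm table) tightens the estimate; the coarse grid here only illustrates the weighting.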


Table A-4 - Système International (SI), The International System of Units

SI base units:
  Length: meter (m). 1,650,763.73 wavelengths of the orange-red line of krypton-86.
  Mass: kilogram (kg). Mass of a platinum-iridium cylinder at the International Bureau of Weights and Measures.
  Time: second (s). Duration of 9,192,631,770 cycles of oscillation of the hyperfine-structure transition in cesium-133.
  Current: ampere (A). The current which will produce a force of 2 × 10⁻⁷ newtons per meter of length between two long parallel wires spaced 1 meter apart.
  Temperature: kelvin (K). 1/273.16 of the thermodynamic temperature of the triple point of water.
  Amount of substance: mole (mol). Number of atoms in 0.012 kg of carbon-12.
  Luminous intensity: candela (cd). Radiation from 1/60 cm² of a blackbody at the temperature of freezing platinum (2045 K).

SI supplementary units:
  Plane angle: radian (rad). Solid angle: steradian (sr).

SI derived units:
  Force: newton (N); 1 N = force to accelerate a 1-kg mass at 1 m/s².
  Pressure: pascal (Pa); 1 Pa = 1 N/m².
  Work, energy: joule (J); 1 J = 1 N·m.
  Power: watt (W); 1 W = 1 J/s.
  Frequency: hertz (Hz); 1 Hz = 1 cycle/s.
  Voltage: volt (V); 1 V = 1 W/A.
  Resistance: ohm (Ω); 1 Ω = 1 V/A.
  Concentration: moles per cubic meter (mol/m³).
  Light flux: lumen (lm); a light source having an intensity of 1 candela in all directions radiates a flux of 4π lumens.

Multiples and prefixes:
  10⁻¹⁸ atto (a), 10⁻¹⁵ femto (f), 10⁻¹² pico (p), 10⁻⁹ nano (n), 10⁻⁶ micro (µ), 10⁻³ milli (m), 10⁻² centi (c), 10³ kilo (k), 10⁶ mega (M), 10⁹ giga (G), 10¹² tera (T).


Table A-5 - Some Additional Non-SI Photometric Units (Ref. A-4)

Unit             Symbol   SI Equivalent                          Quantity
apostilb         (asb)    π⁻¹ [cd·m⁻²]                           luminance
candela-second   (cd·s)   1 [cd·s]                               ergolumic intensity*
footcandle       (fc)     1 [lm·ft⁻²] = 10.764 [lm·m⁻²]          illuminance
footlambert      (fL)     π⁻¹ [cd·ft⁻²] = 3.426 [cd·m⁻²]         luminance
lambert          (L)      π⁻¹ [cd·cm⁻²] = 10⁴ π⁻¹ [cd·m⁻²]       luminance
light-watt†      (lW)     1 [Km⁻¹·lm] = 680⁻¹ [lm]               luminous flux
phot             (ph)     1 [lm·cm⁻²] = 10⁴ [lm·m⁻²]             illuminance
stilb            (sb)     1 [cd·cm⁻²] = 10⁴ [cd·m⁻²]             luminance

*The CIE seems not to have any term for this quantity, nor does there seem to be any in general use other than the term for the unit, although "beam-candlepower-second" is also used at times. This term for the quantity is taken from Jones' "phluometry" proposal.

†The light-watt (lW) is related to the unit of radiant flux, the watt (W), by

  Φv [lW] = ∫ V(λ) Φe,λ(λ) dλ,

where V(λ) (dimensionless) is the photopic spectral luminous efficiency, λ [nm] is the wavelength, Φe,λ(λ) [W·nm⁻¹] is a distribution of spectral radiant flux as a function of wavelength, and Φv [lW] is the luminous flux, in light-watts, of the radiation described by the spectral distribution Φe,λ(λ).

The luminous flux, in lumens, in this same beam of radiation is given by

  Φv [lm] = Km Φv [lW] = 680 Φv [lW],

where Km = 680 [lm·W⁻¹] (at λ = 555 nm) is the maximum spectral luminous efficacy of radiation.

Note that both the lumen and the light-watt are units of luminous flux. They have the same dimensionality and differ only by the numerical scale factor Km = 680. There are approximately 680 lumens per light-watt at all wavelengths in the visible region of the spectrum, where 360 ≤ λ ≤ 760 nm.
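The conversion factors of Table A-5 are simple multipliers, so a small lookup covers all of them. A sketch (the dictionary keys and function name are ours; the factors are those of the table):

```python
import math

# Conversion factors from Table A-5 (non-SI photometric unit -> SI unit)
TO_SI = {
    "footcandle->lux": 10.764,          # lm/ft^2 -> lm/m^2
    "footlambert->cd/m^2": 3.426,
    "lambert->cd/m^2": 1.0e4 / math.pi,
    "phot->lux": 1.0e4,
    "stilb->cd/m^2": 1.0e4,
}

def convert(value, conversion):
    """Multiply a photometric value by its Table A-5 factor."""
    return value * TO_SI[conversion]

# A display luminance quoted as 10 fL, expressed in SI units:
print(round(convert(10.0, "footlambert->cd/m^2"), 2))
```

Keeping the factors in one table avoids the sign-of-π errors that creep in when lamberts and stilbs are converted by hand.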

REFERENCES

A-1 "International Lighting Vocabulary," Third Edition, Publication CIE No. 17 (E-1.1) 1970, International Commission on Illumination, Bureau Central de la CIE, 4 Av. du Recteur Poincaré, Paris 16e, France; c/o L. E. Barbrow, National Bureau of Standards, Washington, D.C. 20234.

A-2 NBS Special Publications 330 and 304A (Revised October 1972).

A-3 Kingslake, R., Applied Optics and Optical Engineering, Vol. I, "Light: Generation and Modification," Academic Press, New York, N.Y., 1965.

A-4 "Self-Study Manual on Optical Radiation Measurements," Part I - Concepts, Chapters 1-3, NBS Technical Note 910-1, Fred E. Nicodemus, Editor, March 1976.


Appendix B

SYMBOLS

a  Image area (cm²)

A  Image plane area (cm²)

ad  Detector area (cm²)

AD  Area of the display

AG  Area of the glimpse

Ai  Areas of constant temperature (cm²)

AH  Absolute humidity

BLIP  Designation for background-noise-limited detector

CT  Contrast threshold for visual detection, from Ref. 7-10

C1  Radiation constant, 3.7413 × 10⁴ W cm⁻² µm⁴

C2  Radiation constant, 14 388 µm K

D  Dimension of a square glimpse

Do  Lens diameter (cm)

DT(r, φ)  Visual target detectability as a function of location in the visual field

DTG  Visual target detectability within the glimpse area

D*(Ωs)  Detectivity for a solid viewing angle of Ωs sr (cm Hz^1/2 W⁻¹)

D*(2π)  Detectivity for a solid viewing angle of 2π sr (cm Hz^1/2 W⁻¹)

D**(2π)  D* of a perfect detector of unit quantum efficiency viewing 2π sr of background (cm Hz^1/2 W⁻¹)

ΔD  An incremental change in PD/L

ΔD(ti)  The change in PD/L between the (i-1)th and ith glimpses

e  Charge of an electron (coul)

E  Irradiance (W cm⁻²)


Eav  Average photosurface irradiance (W cm⁻²)

Eh  Photosurface highlight irradiance (W cm⁻²)

es(T)  Saturation vapor pressure at a water surface (dyn cm⁻²)

f  Lens focal ratio (FL/Do)

fD  The spatial frequency characterizing detection of a specific target

FL  Lens focal length (cm)

Fraction of time an observer spends monitoring nondisplay functions

FR  Frame rate (s⁻¹)

Δfc  Detector channel bandwidth (Hz)

Δfco  Reference channel bandwidth (Hz)

Δfv  Video bandwidth (Hz)

Δfvo  Reference video bandwidth (Hz)

g  Grams

G  Gain of signal processor

Gv  Display (or video) gain

HFOV  Horizontal field of view (deg, rad, mrad)

i  Signal current (A)

iav  Average of currents due to object and background (A)

in  rms noise current (A)

pI,ij  The probability that the segment of the population whose threshold for identification was passed on the ith glimpse will be able to identify the target on the jth glimpse

Δi  Incremental signal current (A)

ΔI  An incremental change in PI/D

K  Temperature in kelvins

K  Constant used to calculate the distribution of glimpses on a uniform display


Kd  Display conversion gain

Ki  Interlace ratio

KL  ΔL (radiance) to ΔT (temperature) conversion factor (W cm⁻² sr⁻¹ K⁻¹)

k  Spatial frequency (cycles/mrad)

kd  Number of lines required per minimum object dimension for a given level of visual discrimination of a displayed image

ko  Spatial cutoff frequency for a diffraction-limited lens (cycles/mrad)

km  Kilometers

Ko  Phosphor gain (lumens/electron)

Kx  Rate of navigational error accumulation in direction X

Ky  Rate of navigational error accumulation in direction Y

Lav  Average display luminance (lumens sr⁻¹ cm⁻²)

LED  Light-emitting diode

l/p.h.  Lines per picture height

Ly  Luminance distribution in the vertical direction (lumens sr⁻¹ cm⁻²)

ΔL  Incremental scene radiance (W sr⁻¹ cm⁻²)

ΔLH  Incremental luminance swing (high) (W sr⁻¹ cm⁻²)

ΔLL  Incremental luminance swing (low) (W sr⁻¹ cm⁻²)

mb  Pressure (mbar)

MDT  Minimum detectable temperature (K)

MRT  Minimum resolvable temperature (°C)

MRT'  Minimum resolvable temperature for a bar aspect ratio ε other than 7

MTF  Modulation transfer function

Mλ(T)  Spectral radiant exitance (W cm⁻² µm⁻¹)

nav  (no + nb)/2


ṅav  Derivative of the above quantity with respect to space and time (photoelectrons/cm²·s)

nb  Number of photoelectrons in image area a and sampling time T due to background

ṅb  Derivative of the above quantity with respect to space and time (photoelectrons/cm²·s)

ns  Number of detectors in series

no  Number of photoelectrons in image area a and sampling time T due to object (photoelectrons)

ṅo  Derivative of the above quantity with respect to space and time (photoelectrons/cm²·s)

np  Number of detectors in parallel

N  Number of glimpses taken (Chapter VII)

N  Spatial frequency (lines/pict. ht.)

Ne  Noise equivalent bandwidth (l/p.h.)

Ne,...  Noise equivalent bandwidth of specific sensor components or groups of components, as identified by the appropriate subscript (l/p.h.)

Ns  Sampling frequency

NEP*  Specific noise equivalent power of detector (W/cm Hz^1/2)

NEΔT  Noise equivalent temperature difference (°C)

Na  Number of active scan lines

Δn  no - nb

Δṅ  Derivative of the above quantity with respect to space and time (photoelectrons/cm²·s)

Os  Overscan ratio: ratio of the detector dimension in the cross-scan direction to the scan line pitch

P  Vapor pressure of water (mm of Hg)

P  Scan line or detector pitch (cm)

PD/L  Probability that a target is detectable given that it is looked at

PD(ti)  Cumulative probability of detection by the time of the ith glimpse

pD(ti)  Probability of detection on the ith glimpse


pdf  Probability distribution function

PD  Probability of visual detection integrated over the visual field

PFF  Probability distribution function (pdf) describing the positioning of the field of view within the search field

PFOV  Probability that the target is in the sensor field of view

PG  Pdf for glimpse position on the system display

PH(γ, ψ)  Probability that the search field is centered on a position that is (γ, ψ) distant from the actual target location

PI/D  Probability that a target is identifiable given that it is detected

PID(ti)  The cumulative probability of identification by the ith glimpse

pID(ti)  The probability of identification on the ith glimpse

pij  The probability that the segment of the population whose detection threshold was surpassed on the ith glimpse will detect the target on the jth glimpse

PL/D  Probability that a displayed target is looked at (is in the observer's glimpse area)

PLOS  Probability that a clear line of sight exists between sensor and target

PSF  Probability that the target is in the search field

PT(γ, ψ)  Probability distribution function describing the uncertainty in the target location in coordinates (γ, ψ)

PTD  Pdf for target location on the system display

PTF  Probability distribution function describing the uncertainty of target location within the search field (normalized to PSF)

(Quantity) with overbar  One minus (Quantity), i.e., not (Quantity)

R  Range (m or km)

Rf  Radius of the fovea (deg)

R(ν)  Frequency response of a filter

RG  Radius of the glimpse (deg)

RH  Relative humidity

Ro(ν)  Optical transfer function, or complex steady-state spatial frequency response


Ro,...(ν)  Same as above, but the added subscript identifies specific sensor components or groups of components

Rsf(N)  Square-wave flux response

Rsp(N)  Spurious response

Rso(N)  Square-wave amplitude response

RD(r, φ)  Ratio of target detectability at visual coordinate (r, φ) to that in the fovea

S  Specific detector spectral responsivity (A/W)

s  Seconds

SFγ  Dimension of the search field in direction γ

SFψ  Dimension of the search field in direction ψ

SNR  Signal-to-noise ratio

SNRD  Display signal-to-noise ratio

SNRDF  SNRD calculated using the MTF of the fovea

SNRCo  Detector channel SNR for broad-area images

SNRDT  Threshold SNRD

SNRe  SNR of the electron image

SNRp  Perceived SNR

SNRv  Video SNR

SNRvo  Video SNR for broad-area images

t  Time (s)

T  Temperature (K)

Ta  Temperature of air (K)

Td  Dwell time (s)

Te  Eye integration time (s)

TDI  Time delay and integration

Tdp  Dew point temperature

Tf  Frame time


ti  Time of the ith sample, usually the time of the ith visual glimpse

to  Time of target handoff from a cueing sensor to the thermal imaging system

Dead time

Ts  Duration of a visual saccade

Time of the last inertial navigation system update

Tw  Temperature of water (K)

ΔT  Incremental temperature (°C)

ΔTo  Incremental temperature at zero range (°C)

ΔT(R)  Incremental temperature at range R (°C)

V  Visual meteorological range (km)

VFOV  Vertical field of view (deg, rad, mrad)

W  Watts

X  Effective focal plane width (cm)

Xo  Size of the scene object in the x direction (cm)

Δx  Size of the equivalent bar pattern bar width in the x direction of the image (cm)

ΔX  Incremental horizontal element of the display (cm)

XD  Display width (cm)

Y  Effective height of the image plane (cm)

YD  Display height (cm)

Yo  Size of the scene object in the y direction (cm)

Δy  Size of the equivalent bar pattern bar width in the y direction of the image (cm)

ΔY  Incremental vertical element of the display (cm)

Z  Random variable

α  Picture aspect ratio, width-to-height

β  Target subtense used in Ref. 7-10


σCONT  Extinction coefficient for the continental aerosol model (km⁻¹)

β( )  Noise filtering function (periodic patterns)

Γ( )  Noise filtering function (aperiodic patterns)

δ  Noise equivalent aperture (cm, mrad, or pict. hts.)

δ...  Noise equivalent aperture of specific sensor components or groups of sensor components, as identified by the appropriate subscript (cm, mrad, or pict. hts.)

θs  Amplitude of a visual saccade (deg)

δx, δy  Detector dimensions in x and y (cm)

ε  Bar aspect ratio (length-to-width)

η  Cold shield efficiency

ηo  Lens transmittance

Scan efficiency

ηλ  Detector spectral quantum efficiency (electrons/photon)

Instantaneous field of view (rad or mrad)

θ  Visual angle from the fovea to a target in the visual field, used by Lamar (Ref. 7-10)

λ  Spectral wavelength (µm)

ν  Spatial frequency

ξ  Complex function of system and target parameters used in Ref. 7-7 to calculate ... size

Noise increase factor

σ  Atmospheric extinction coefficient (km⁻¹)

σψ  First standard deviation of pdf for target location in direction ψ

σγ  First standard deviation of pdf for target location in direction γ

τ  Transmittance


φV, φH  Vertical and horizontal fields of view (deg, rad, mrad)

ω  Instantaneous solid angle of view (sr)

Δψ  An incremental element of the search field in direction ψ

Δγ  An incremental element of the search field in direction γ



Appendix C

THE NIGHT VISION LABORATORY STATIC PERFORMANCE MODEL
BASED ON THE MATCHED FILTER CONCEPT

W. R. Lawson
J. A. Ratches

A. INTRODUCTION

The NVL model for thermal imaging systems is based on a model originally proposed by Barnard [Ref. C-1] and is conceptually different from the synchronous integrator concept originally formulated by Otto Schade, Sr. The latter model, as refined by Sendall and Rosell, is discussed in detail in Appendix D. In the synchronous filter concept, it is assumed that the eye/brain combination will integrate over the entire area of an image even though the image has been smeared out over a large distance by finite sensor apertures. In the formulation for signal, the integration limits are plus and minus infinity, although as a practical matter the effective distance is usually much smaller, because signal integrated in the low-amplitude tails of a blurred image in a real imaging system increases only very slowly with increasing integration distance from the image center or core. It is felt that the eye/brain integrates in space only so long as image SNR increases in turn, but this more complex consideration would unduly complicate the analysis and would result in only minor changes in the final result for most cases of practical interest. In the synchronous integrator concept, a perfect integrating filter is hypothesized and used to bound and compute the noise.

The matched filter concept stems from communications theory [Ref. C-2]. In a video channel, "a matched filter is a delayed (time shifted), time reversed (spatially reversed) version of the signal. Thus if i(t) is the signal function, the response function of the matched filter is proportional to i(t1 - t). (In discussing time functions, causality becomes a problem; however, the discussion here will not be complicated by this.) The matched filter is the filter which maximizes the signal-to-noise ratio (signal being the magnitude of the output from the matched filter and noise being the standard deviation of the noise fluctuations) at a time t1, for the case that the noise is additive (independent of signal) and white (the power spectrum equals a constant at all frequencies). Note that for the case of a symmetrical signal and for t1 equal to zero, the matched filter has precisely the same shape as the signal. (In general, the matched filter is the mirror image of the signal.)"*
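The maximizing property quoted above can be checked with a toy numerical example of our own (not from the report): for white additive noise, the correlator whose template is the time-reversed signal yields at least as large an output SNR as any other linear filter, a direct consequence of the Cauchy-Schwarz inequality.

```python
def filter_snr(signal, template, noise_sigma=1.0):
    """Peak filter output for the signal divided by the rms filter output
    noise; for white input noise the output noise variance is
    sigma^2 * sum(h^2) over the filter weights h."""
    peak = sum(s * h for s, h in zip(signal, template))
    noise_rms = noise_sigma * sum(h * h for h in template) ** 0.5
    return peak / noise_rms

signal = [0.0, 0.2, 0.9, 1.0, 0.9, 0.2, 0.0]  # symmetric blurred pulse
matched = signal[::-1]                         # mirror image of the signal
boxcar = [1.0] * len(signal)                   # synchronous-integrator-like filter

print(filter_snr(signal, matched) >= filter_snr(signal, boxcar))  # True
```

Because the SNR measure is scale-invariant in the template, the matched template wins against any alternative, including the flat "perfect integrator" window; for this symmetric pulse the matched filter is simply the pulse itself.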

So far as is known, neither the synchronous integrator nor the matched filter concept has any proven basis in psychophysical fact. The eye's ability to spatially integrate over images which are not too large in two dimensions simultaneously is beyond question, but the precise method of image processing that takes place in the eye-brain is probably much more complex than either model would indicate. Efforts have been made to validate, or at least determine the limits of applicability of, the synchronous filter concept. Over a fairly wide

*Ref. C-2


range of conditions, the experimental results are generally in satisfactory, if not perfect agree-

ment with the model predictions as is shown in pan in Appendix D. However, no efforts havebeen made to determine whether the matched filter or the synchronous filter approach gives abetter fit to the data, and to provide definitive results, specific experiments of high precisionwill be required.

A model is required in order to take into account the effects of finite sensor apertures. The matched filter equations have been formulated in a manner so as to agree with the synchronous filter equations for periodic images, and they will also agree for aperiodic images when the image being viewed is rectangular in intensity distribution, i.e., of constant amplitude. With blurred images, however, the matched filter concept will result in a somewhat smaller image signal level prediction for aperiodic images. The difference between the signal levels predicted using either model will be trivial until the image dimensions become of the order of the dimensions of the overall sensor system's noise equivalent aperture or smaller. In general, the statistical variance in making threshold resolution measurements will be larger than any error which may result from using one or the other model, and the selection of the model to be used is largely one of personal preference.

B. NEAT, MRT, AND MDT DERIVATIONS

The noise equivalent temperature (NEAT), the minimum resolvable temperature (MRT), and the minimum detectable temperature (MDT) will be derived in the following. Complete and simplified expressions are given for each quantity; the complete expressions provide a basis for rigorous analyses, and the simplified expressions provide a means for obtaining reasonable estimates through use of hand calculations.

Neither the concepts nor the final relationships contained herein are new. The NEAT derivation is similar to an analysis in Jamieson [C.3]. The MRT and MDT derivations are slightly different from others of which the author is aware. The techniques employed to derive MRT and MDT are equally applicable to the derivation of subjective resolution relationships for intensifier and LLLTV viewers.

Terminology

NEAT - noise equivalent temperature
MRT - minimum resolvable temperature
MDT - minimum detectable temperature
t - time
i_o(t) - an output signal; i_i(t) - an input signal
* - convolution
h(t) - temporal response function
f - frequency
I_o(f) - Fourier transform of i_o(t)
I_i(f) - Fourier transform of i_i(t)
H(f) - transfer function (Fourier transform of h(t))
OTF - optical transfer function
MTF - modulation transfer function
R(τ) - autocorrelation function
S(f) - power spectrum
< > - ensemble average
τ - time difference
σ - rms value of a random process
r - detector response function
v(t) - a (voltage) signal
A·Φ(λ) - watts/micron on the detector
D*(λ) - detector detectivity
D**(λ) - detector detectivity (no cold shielding)
fl - focal length
η_o(λ) - optical transmission
T - temperature
L_λ - radiance from source (target)
HFOV - horizontal field of view
VFOV - vertical field of view
t_E - eye integration time
η_OVSC - overscan ratio
FR - frame rate
n_s - number of detectors in series
n_p - number of detectors in parallel
η_CS - cold shield efficiency
τ_D - picture element delay time
η_SC - scan efficiency
θ - cold shield angle
W - an integral, Eq. (C-24)
i(x,y) - spatial signal
A_T - area (signal)
k - a constant
M - watts/area from display
Δy_s - distance between scan lines
v - scan velocity
f0 - frequency of MRT bar pattern
L - bar length in MRT bar pattern
b - noise function along a line of the display
f_x - spatial frequency (x or horizontal direction)
f_y - spatial frequency (y or vertical direction)
S - a threshold signal-to-noise ratio
q_y - an integral (defined below Eq. (C-45))
p_x - an integral (defined below Eq. (C-45))
p_y - an integral (defined below Eq. (C-45))
q_y′ - an integral (defined below Eq. (C-51))
p_x′ - an integral (defined below Eq. (C-51))
p_y′ - an integral (defined below Eq. (C-51))

Preliminaries

Throughout this section, elementary concepts and analysis techniques employed in electrical communication theory are used. The necessary relationships are presented below; the reader unfamiliar with these relationships could profitably read the first three or four chapters of Wozencraft and Jacobs [C.4]. (It is possible to derive NEAT, MRT, etc., without employing these concepts.)

An output signal from a linear system (circuit, optical device) is equal to the input signal convolved with the response function of that system, i.e.,

i_o(t) = i_i(t) * h(t) = ∫ i_i(t′) h(t − t′) dt′,  (C-1)

where i_o(t), i_i(t), and h(t) equal the output signal, the input signal, and the system response function, respectively. The response function h(t) is simply the system output for an input pulse approximating a Dirac delta function. If both sides of Eq. (C-1) are Fourier transformed, the expression

I_o(f) = I_i(f) H(f)  (C-2)

is obtained. Here I_o(f), I_i(f), and H(f) are the Fourier transforms of i_o(t), i_i(t), and h(t), respectively. The quantity H(f) is referred to as the transfer function of the system. The one-dimensional (spatial) version of H(f) (i.e., the Fourier transform of the line spread function) for an optical system corresponds to the system's optical transfer function (OTF), whose absolute value equals the modulation transfer function (MTF) of the system. In Eq. (C-2), the quantity H(f) is said to "filter" the signal I_i(f). Note that if a signal is passed through two systems in series the output from the first system equals the input to the second; therefore, if i_i(t) is the input signal and h1(t) and h2(t) are the response functions of the two systems, the output is given by

i_o(t) = i_i(t) * h1(t) * h2(t).  (C-3)

Correspondingly, the transform of i_o(t) is given by

I_o(f) = I_i(f) H1(f) H2(f).  (C-4)

Thus the "two-system" response and transfer functions equal h1(t) * h2(t) and H1(f)H2(f), respectively; e.g., the OTF of a complex optical system equals the product of the component OTFs (ignoring component interactions).
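The cascade rule of Eqs. (C-3) and (C-4) is easy to verify numerically. The sketch below uses arbitrary pulse shapes; it checks that convolving the input through two systems in turn equals multiplying the two transfer functions.

```python
import numpy as np

# Illustration of Eqs. (C-3)-(C-4): passing a signal through two linear
# systems in series is the same as multiplying their transfer functions.
# All pulse shapes are arbitrary choices for the demonstration.
n, N = 256, 1024                                # record length and FFT size
t = np.arange(n)
i_i = np.exp(-0.5 * ((t - 40.0) / 6.0) ** 2)    # input i_i(t)
h1 = np.exp(-t / 5.0)                           # first response h1(t)
h2 = np.exp(-t / 9.0)                           # second response h2(t)

# Eq. (C-3): i_o(t) = i_i(t) * h1(t) * h2(t)  (direct convolution)
i_o_direct = np.convolve(np.convolve(i_i, h1), h2)       # length 3n - 2

# Eq. (C-4): I_o(f) = I_i(f) H1(f) H2(f)  (product of transforms)
I_o = np.fft.fft(i_i, N) * np.fft.fft(h1, N) * np.fft.fft(h2, N)
i_o_fft = np.real(np.fft.ifft(I_o))[: 3 * n - 2]

assert np.allclose(i_o_direct, i_o_fft, atol=1e-6)
```

The zero-padded FFT length N must be at least the full linear-convolution length (3n − 2) for the two routes to agree exactly.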

A wide-sense stationary random process (e.g., noise in most electrooptical viewers) can becharacterized by its autocorrelation function

R(τ) = R(t, t+τ) = <n(t) n(t+τ)>,  (C-5)

where n(t) designates the random process and τ represents a time difference. The Fourier transform of this function, called the power spectrum of the process, is given by

S(f) = ∫ R(τ) e^{−i2πfτ} dτ.  (C-6)

The brackets in Eq. (C-5) indicate an average over an ensemble of n(t) functions. The output power spectrum of noise processes passed through (filtered by) a linear system is given by

S", (pf - S, ( f) h." (I-), (C-7)




where S_o and S_i are the output and input power spectra, respectively. An extremely important relationship between the power spectrum and the variance (at a point) of the random process is

σ² = ∫_{−∞}^{∞} S(f) df.  (C-8)

Since engineers are reluctant to employ negative frequencies and since S(f) is an even functionof frequency, it is common practice to redefine the power spectrum such that

σ² = ∫_0^∞ S(f) df.  (C-9)

This latter power spectrum is just twice the one used in Eq. (C-8); this power spectrum is usedfor the temporal voltage noise and the corresponding (horizontal) spatial noise since it is theone commonly employed by thermal viewer engineers. In the vertical direction, however, thepower spectrum in Eq. (C-8) is used.
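A quick numerical sketch of Eqs. (C-7) through (C-9), using an assumed single-pole filter and arbitrary numbers, confirms that the one-sided and two-sided conventions yield the same variance.

```python
import numpy as np

# Sketch of Eqs. (C-7)-(C-9): the variance of a filtered random process is
# the integral of the filtered power spectrum, and the "engineering"
# one-sided spectrum (twice the two-sided one, integrated from 0 upward)
# gives the same variance.  The filter and all numbers are arbitrary.
S_i = 2.0e-3                                   # two-sided white-noise density
f1 = 100.0                                     # filter knee frequency, Hz
f = np.linspace(-1.0e5, 1.0e5, 400_001)        # frequency grid
df = f[1] - f[0]
H2 = 1.0 / (1.0 + (f / f1) ** 2)               # |H(f)|^2, used as in Eq. (C-7)

var_two_sided = np.sum(S_i * H2) * df          # Eq. (C-8)

mask = f >= 0.0
var_one_sided = np.sum(2.0 * S_i * H2[mask]) * df   # Eq. (C-9)

# Analytic check: the integral of 1/(1+(f/f1)^2) over all f equals pi*f1.
assert np.isclose(var_two_sided, S_i * np.pi * f1, rtol=1e-2)
assert np.isclose(var_one_sided, var_two_sided, rtol=1e-2)
```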

A matched filter is a filter whose response function is a delayed (shifted), time-reversed (spatially reversed) version of the signal. Thus, if i(t) is the signal function, the response function of the matched filter is proportional to i(t1 − t). (In discussing time functions, causality becomes a problem; however, the discussion here will not be complicated by this.) The matched filter is the filter which maximizes the signal-to-noise ratio (signal being the magnitude of the output from the matched filter and noise being the standard deviation of the noise fluctuations) at a time t1 for the case that the noise is additive (independent of the signal) and white (the power spectrum equals a constant at all frequencies). Note that for the case of a symmetrical signal and for t1 equal to zero the matched filter has precisely the same shape as the signal. (In general, the matched filter is the mirror image of the signal.) Also note that if

I(f) = ∫ i(t) e^{−i2πft} dt,  (C-10)

then the frequency response of the matched filter is proportional to I*(f), i.e.,

H_M(f) = ∫ i(−t) e^{−i2πft} dt = I*(f).  (C-11)
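The maximizing property claimed for the matched filter can be sketched for a sampled signal in white noise; the pulse shape and the comparison filters below are arbitrary choices.

```python
import numpy as np

# Sketch of the matched-filter property: for additive white noise of
# variance sigma^2 per sample, a filter h applied to signal s gives a peak
# output SNR of  sum(s * h_reversed) / (sigma * sqrt(sum(h^2))),  which by
# the Cauchy-Schwarz inequality is largest when h is the reversed signal.
sigma = 1.0
s = np.array([0.2, 0.5, 1.0, 0.5, 0.2])        # symmetric test signal i(t)

def peak_snr(h):
    """Peak signal over rms noise at the filter output (white input noise)."""
    signal = np.sum(s * h[::-1])                # filter output at alignment
    noise = sigma * np.sqrt(np.sum(h ** 2))     # rms of filtered white noise
    return signal / noise

snr_matched = peak_snr(s[::-1])                 # matched: h(t) = i(t1 - t)
snr_rect = peak_snr(np.ones(5))                 # simple averaging filter
snr_narrow = peak_snr(np.array([0, 0, 1.0, 0, 0]))  # single-sample filter

# The matched filter attains sqrt(sum(s^2))/sigma and beats the others.
assert np.isclose(snr_matched, np.sqrt(np.sum(s ** 2)))
assert snr_matched >= snr_rect and snr_matched >= snr_narrow
```

Because the test signal is symmetric, the matched filter here has the same shape as the signal itself, as noted in the text.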

NEAT Derivation

The noise equivalent temperature is defined as that input temperature difference for a "large" target (a large target being one whose size is large relative to the system response function) which is required to generate a signal (voltage amplitude) just prior to the display (or after the detector preamplifier) which is just equal to the rms noise (voltage) at that point, assuming that the filtering action of the electronics prior to the measurement point corresponds to that of a "standard" filter. The ambiguities in this NEAT definition provide at least part of the reason NEAT is viewed with disfavor in some circles; the precise point of measurement and the "standard" filter are not necessarily identical from one measurement to the next. A second reason NEAT is viewed with disfavor is that it does not relate directly to the signal-to-noise ratios which are fundamental for perception of targets on the device display: it is not a display signal-to-noise ratio, and it is a point signal-to-noise ratio rather than one "averaged" over the target. Nevertheless, NEAT can be a useful indication of system sensitivity, and (although not necessary) it can be used to simplify the MRT and MDT relations; therefore, its derivation follows.



The detector plus its associated preamplifier is assumed to be a linear system with a response function r(λ,t) * h_ELECT(t), where r(λ,t) is the response function of the detector in volts per watt and h_ELECT(t) is the amplifier (and other circuitry) response function. Therefore, if the signal onto the detector equals A·Φ(λ)·i(t) watts per micron, where i(t) is a normalized time function, the response of (i.e., the signal from) the detector-amplifier system is given by

v(t) = ∫ A·Φ(λ) [i(t) * r(λ,t) * h_ELECT(t)] dλ  (C-12)
     = ∫∫ e^{i2πft} I(f) H_ELECT(f) A·Φ(λ) r(λ,f) dλ df,

where I(f), H_ELECT(f), and r(λ,f) are the Fourier transforms of i(t), h_ELECT(t), and r(λ,t), respectively. Assume that r(λ,f) (or r(λ,t)) is separable into a frequency-dependent and a wavelength-dependent part; then

r(λ,f) = r(λ,f0) [r(λ,f)/r(λ,f0)],  (C-13)

where r(λ,f)/r(λ,f0) is a function only of f and r(λ,f0) is a function only of λ. Equation (C-12) giving the signal v(t) can now be simplified to

v(t) = ∫ e^{i2πft} I(f) H_ELECT(f) [r(λ,f)/r(λ,f0)] df · ∫ A·Φ(λ) r(λ,f0) dλ
     = i′(t) ∫ A·Φ(λ) r(λ,f0) dλ,  (C-14)

where i′(t) is defined in an obvious manner.

The rms noise voltage corresponding to v(t) must now be determined. Let S(f) equal the power spectrum of the noise from the detector. Then the power spectrum beyond the preamplifier (i.e., beyond the system with transfer function H_ELECT(f)) equals S(f) H²_ELECT(f), and therefore the desired rms noise is given by

σ² = ∫_0^∞ S(f) H²_ELECT(f) df.  (C-15)

Combining Eqs. (C-14) and (C-15), the signal-to-noise ratio beyond the preamplifier is given by

S/N = i′(t) ∫ A·Φ(λ) r(λ,f0) dλ / [∫_0^∞ S(f) H²_ELECT(f) df]^{1/2}.  (C-16)

Equation (C-16) yields the NEAT once the various variables are recast into more useful forms, the S/N is set equal to 1 (note that the NEAT definition can be recast to "that temperature difference such that S/N = 1"), and i′(t) is set equal to 1. The quantity i′(t) can be set equal to 1 because the signal is measured (determined) at approximately the midpoint of an extended (large) signal; if i(t) = 1 at its midpoint, then i′(t) will also equal one since the signal is of much greater duration than the response function of the detector-amplifier system.




To recast the variables, first note that the detector detectivity D*(λ) is given by

D*(λ, f0) = A_d^{1/2} r(λ, f0) / [S(f0)]^{1/2},  (C-17)

where A_d equals the area of the detector [C.3]. Those familiar with the expression

D* = (A_d Δf_n)^{1/2} / NEP,

where Δf_n is the bandwidth and NEP is the noise-equivalent power, should note the following heuristic derivation:

Detector signal-to-noise ratio (S/N)_D = A·Φ·r / [∫ S(f) df]^{1/2}.

For a small bandwidth around f0, (S/N)_D ≈ A·Φ·r(f0) / [S(f0) Δf_n]^{1/2}.

Now, NEP = A·Φ for (S/N)_D = 1; therefore, NEP = [S(f0) Δf_n]^{1/2}/r(f0), and, therefore,

D* = (A_d Δf_n)^{1/2} r(f0) / [S(f0) Δf_n]^{1/2} = A_d^{1/2} r(f0)/[S(f0)]^{1/2}. Q.E.D.

Solving Eq. (C-17) for r(λ, f0) and inserting into Eq. (C-16), the

S/N = ∫ A·Φ(λ) D*(λ) dλ / { A_d^{1/2} [∫_0^∞ (S(f)/S(f0)) H²_ELECT(f) df]^{1/2} },  (C-18)

where i′(t) has been set equal to 1. Next, note that for a simple imaging system

A·Φ(λ) = (π A_d / 4F²) η_o(λ) (∂L_λ/∂T) ΔT,  (C-19)

where
η_o(λ) = the optical efficiency of the viewer,
F = the f/number,
T = temperature,
L_λ = watts/cm²/steradian/micron from the source.

Finally, using Eq. (C-19) for A·Φ(λ), and defining Δf_n by

" A f, - ; r~c Cdr.f (C-20)

Eq. (C-18) becomes

S/N = π A_d^{1/2} ΔT ∫ η_o(λ) (∂L_λ/∂T) D*(λ) dλ / [4 F² (Δf_n)^{1/2}].  (C-21)

The ΔT in Eq. (C-21) is the desired NEAT provided the S/N is set equal to 1 and provided the bandwidth equals the appropriate reference bandwidth.



The bandwidth to which NEAT is commonly referenced is given by

Δf_n = (π/2) f1 = (π/4) (HFOV)(VFOV)(FR) η_OVSC / (n_p Δx Δy η_SC),  (C-22)

where
HFOV = device horizontal field of view (mrad)
VFOV = device vertical field of view (mrad)
FR = frame rate
η_OVSC = overscan ratio for the device
n_p = number of detectors in parallel
Δx = horizontal detector size (mrad)
Δy = vertical detector size (mrad)
η_SC = scan efficiency (fraction of time spent in actually scanning the field).

The initial form for Δf_n in Eq. (C-22) is obtained from Eq. (C-20), first, by setting the power spectrum ratio equal to 1 (i.e., ignoring any low-frequency 1/f component and high-frequency roll-off) and, second, by equating H²_ELECT to 1/(1 + (f/f1)²), corresponding to an exponential response function for the electronic circuitry. The expression for f1 is simply derived by setting f1 equal to 1/(2τ_D), where τ_D is the delay time for a picture element of size Δx Δy (essentially the time the detector element spends on each picture element). The 1/(2τ_D) corresponds to

∫_0^∞ [sin(π f τ_D)/(π f τ_D)]² df,

which is the bandwidth associated with a rect function of duration τ_D.

The use of the "standard" bandwidth given in Eq. (C-22) in place of the bandwidth given in Eq. (C-20) yields the NEAT values commonly used. Recognize, however, that the bandwidth given by Eq. (C-20) is the true system noise bandwidth; therefore, a measurement of the S/N will yield the value given by Eq. (C-21) using this bandwidth (assuming H_ELECT includes any filtering by the measuring device). The S/N calculated using the "standard" bandwidth of Eq. (C-22) would be measured only if H_ELECT in Eq. (C-20) were adjusted (e.g., by the measuring device) so as to make the true bandwidth of Eq. (C-20) equal to the "standard" bandwidth of Eq. (C-22).
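The reference bandwidth of Eq. (C-22) can be checked by hand. The sketch below uses assumed, illustrative device parameters (not those of any particular system) and verifies the closed form against the defining integral of Eq. (C-20) with the single-pole electronics response.

```python
import numpy as np

# Sketch of Eq. (C-22): for H_ELECT^2 = 1/(1+(f/f1)^2), the reference noise
# bandwidth is (pi/2) f1 with f1 = 1/(2 tau_D).  All parameter values are
# arbitrary illustrative assumptions.
HFOV, VFOV = 100.0, 75.0      # fields of view, mrad
FR = 30.0                     # frame rate, Hz
eta_ovsc, eta_sc = 1.0, 0.75  # overscan ratio, scan efficiency
n_p = 60                      # detectors in parallel
dx = dy = 0.25                # detector subtense, mrad

# Dwell time per picture element (time the scan spends on one dx-by-dy spot)
tau_D = n_p * dx * dy * eta_sc / (HFOV * VFOV * FR * eta_ovsc)
f1 = 1.0 / (2.0 * tau_D)

# Closed form of Eq. (C-22)
df_n = (np.pi / 2.0) * f1

# Numerical check of the defining integral, Eq. (C-20), with S(f)/S(f0) = 1
f = np.linspace(0.0, 200.0 * f1, 2_000_001)
df = f[1] - f[0]
df_n_numeric = np.sum(1.0 / (1.0 + (f / f1) ** 2)) * df

assert np.isclose(df_n_numeric, df_n, rtol=1e-2)
```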

The S/N (and the NEAT) obtained from Eq. (C-21) is that for a single detector; this S/N is appropriate for parallel scanning thermal viewers. However, for systems with detectors in series, the S/N obtained by summing the signals and noises from the number of detectors in series is more useful. In this latter case, a reasonable approximation to the S/N is the S/N given in Eq. (C-21) divided by (n_s)^{1/2}, where n_s equals the number of detectors in series (assuming uniform




D*'s). In general, blind application of Eq. (C-21) (as well as the MRT and MDT equations) to unconventional systems can lead to difficulties and incorrect conclusions; this problem is usually easily circumvented by simple adjustments to the equations which can be made by anyone having an understanding of the material presented herein. (More often than not, one only needs to recognize the fact that the noise variances add directly.)

Prior to summarizing the results, several additional expressions and definitions are useful. First, D*(λ) is given by

D*(λ) = η^{1/2} η_CS D**(λ) / sin(θ/2) = η^{1/2} η_CS (2F) D**(λ),  (C-23)

where D," is D, for no cold shielding and 100% quantum efficiency;

- quantum efficiency

7ics - the cold shield efficiency

0 - the cold shield angle.

Second, the quantity W is defined by

W = ∫ (∂L_λ/∂T) [D*(λ)/D*(λ_p)] dλ,  (C-24)

where λ_p is the wavelength for maximum D*(λ). For hand calculations, Hudson notes that ∫ (∂L_λ/∂T) dλ equals 5.2 × 10⁻⁶ in the midwave band while it equals 7.4 × 10⁻⁵ in the longwave band [C.6]; these quantities are obviously useful approximations to W.

To summarize, then, the NEAT using Eq. (C-24) is given by

NEAT = 4 F² (Δf_n)^{1/2} / [π A_d^{1/2} η_o(λ_p) D*(λ_p) W],  (C-25)

where Eqs. (C-22), (C-23), and (C-24) provide useful expressions for Δf_n, D*(λ), and W. Also, note that A_d^{1/2} can be expressed in terms of focal length and nominal system resolution in milliradians, i.e.,

A_d^{1/2} = (focal length) · (resolution in mrad)/1000.  (C-26)

(If we include atmospheric transmission η_a(λ) over the short path length in the NEAT laboratory experiment, then Eq. (C-24) becomes

W = ∫ η_a(λ) (∂L_λ/∂T) [D*(λ)/D*(λ_p)] dλ

and Eq. (C-25) becomes

NEAT = 4 F² (Δf_n)^{1/2} / [π A_d^{1/2} η_o(λ_p) η_a(λ_p) D*(λ_p) W].)
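A hand calculation of NEAT from Eqs. (C-22), (C-25), and (C-26) might look as follows. Every parameter value is an assumed, illustrative one (a nominal F/2 longwave system); only the 7.4 × 10⁻⁵ value for W is taken from the text.

```python
import numpy as np

# Hand calculation of NEAT per Eqs. (C-22), (C-25), and (C-26).  All the
# device parameters below are assumptions for illustration, not data from
# the report.
HFOV, VFOV = 100.0, 75.0         # mrad
FR, eta_ovsc, eta_sc = 30.0, 1.0, 0.75
n_p, dx, dy = 60, 0.25, 0.25     # detectors in parallel, subtense in mrad

# Reference bandwidth, Eq. (C-22)
df_n = (np.pi / 4.0) * HFOV * VFOV * FR * eta_ovsc / (n_p * dx * dy * eta_sc)

F = 2.0                          # f/number
fl = 10.0                        # focal length, cm
Ad_sqrt = fl * dx / 1000.0       # Eq. (C-26): detector side, cm
eta_o = 0.7                      # optical transmission at the peak (assumed)
D_star = 2.0e10                  # D*(lambda_p), cm Hz^(1/2)/W (assumed)
W = 7.4e-5                       # Hudson's longwave approximation to W

# Eq. (C-25)
NEAT = 4.0 * F**2 * np.sqrt(df_n) / (np.pi * Ad_sqrt * eta_o * D_star * W)

assert 0.3 < NEAT < 0.7          # a few tenths of a kelvin for these inputs
```

For these assumed numbers the result is roughly 0.5 K, a plausible order of magnitude for a scanned longwave imager.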




MRT and MDT Derivations

Basic Concepts

The minimum resolvable temperature (MRT) of a system is defined as the temperature difference relative to a background which the bars of a bar pattern must possess in order for a human observer to detect the individual bars when viewing the pattern through the system. The minimum detectable temperature (MDT) is the temperature difference a square object must possess in order to be detectable. Obviously, the MRT is a function of the bar pattern spatial frequency while the MDT is a function of the object size.

Historically, the MRT bar pattern has been a 4-bar pattern whose bars had lengths equal to 7 times their width; also, the pattern has been oriented such that the bars are perpendicular to the detector scan direction. The derivation presented here assumes that both the pattern and its orientation correspond to these historical precedents. The derivation also assumes that there is no sampling in the direction (horizontal) along which the detectors are scanned. This latter assumption is not valid for all systems; specifically, the signals from the detectors of a parallel scanning system are sometimes multiplexed in a manner which provides a sampling effect in the scan direction. This sampling can introduce noise fold-over and signal aliasing effects; however, if the system is well designed these effects will not be severe and the equations derived herein can be applied to these systems.

The basic hypothesis underlying the theory of MRT and MDT is that visual thresholds correspond to a critical value of "matched filter" signal-to-noise ratios, i.e., the ratio formed from the maximum amplitude of the target and the rms value of the noise obtained by passing the signal (target) and noise which are actually observed by an individual through a filter matched to the observed signal. (Note that the signal and noise are not actually physically filtered by a matched filter; it is just hypothesized that the relevant signal-to-noise ratio for perceptual purposes is the signal-to-noise ratio obtained assuming that the signal and noise are filtered by the matched filter.) Thus, if the viewed object is characterized by the spatial function i(x,y), the signal will be proportional to i(x,y) * i(−x,−y) for x and y equal to zero, which equals

∫∫ I²(f_x, f_y) df_x df_y,

where I(f_x, f_y) is the transform of i(x,y). Correspondingly, the noise will be proportional to

[∫∫ S(f_x, f_y) H_M²(f_x, f_y) df_x df_y]^{1/2},

where S(f_x, f_y) is the power spectrum of the observed noise and H_M is the matched filter. (Throughout this section, the quantity i_T(x,y) representing the undegraded target will be normalized such that its maximum value is 1, while the (matched) filter corresponding to this quantity, H_M(f_x, f_y), will be normalized such that H_M(0,0) = 1. Thus, for a uniform target,

I_T(f_x, f_y) = A_T H_T(f_x, f_y),

where A_T equals the area of the target.)

"Although the determination of MDT is straightforward using the abo•e hypothesis, anextension is required to determine MRT, i.e., the perception threshold for a periodic pattern.Specifically, the nature of the matched filter (and the signal) must be established for the(potentidlly) infinite periouic pattern. The assumption is made that the filter in the periodic

168

Page 170: a073763 the Fundamentals of Theirmal Imaging Systems


direction is a rect function whose width is equal to the width of the bar, while in the other direction the filter is simply the device-degraded rect function corresponding to the length of the bar. (Note that a degraded periodic pattern retains its periodicity with unchanged spatial frequency.) Furthermore, the "signal" is assumed to be the difference between the "signal energy" coming through the filter centered over the bar and a filter centered over the neighboring trough. (Note that in some sense this corresponds to taking the signal for an aperiodic pattern as the difference between the "signal energy" passed through the matched filter centered over the target and the "energy" passed through a filter centered over the background.) With these assumptions, calculation of the MRT becomes very straightforward.

In the above, the implication is made that the object and noise observed are the object and noise existing on the device display. More fundamentally, they are the object and noise projected on the retina of the eye or, still better, the object and noise interpreted by the observer, i.e., after degradation by the retina and nervous system. Given a transfer function for the eye, an effective power spectrum for the internal noise in the eye, and a knowledge of the actual extent of eye signal (and noise) summation, it is possible to extend the calculations to the retina and beyond. This extension will not be pursued here; rather, the assumption will be made that the eye transfer function and noise do not significantly alter the signal-to-noise ratio calculated using the displayed quantities. (In actual calculations, however, an eyeball term is included.)

A few comments concerning the (matched) filter formulation are possibly useful. The matched filter can be thought of as a window over which the signal and noise "energies" are summed to formulate a signal-to-noise ratio. This summation is similar to that performed by Rose in formulating S/N ratios which correlate with Blackwell's visual thresholds [C.5]; in Rose's case, MTF-type degradations were not considered and, consequently, the matched filter was just the target itself. Thus, a matched filter signal and noise are just slightly sophisticated versions of a signal and noise summed over the target; the matched filter procedure merely provides a consistent technique for handling degraded (blurred) targets. An equivalent (but, to this author's thinking, more cumbersome) formulation uses the total signal energy as the signal (i.e., sums all the signal energy) and then sums the noise over an equivalent target area which is larger than the original target as a result of MTF degradations.

The Derivations

The MRT and MDT equations can now be formulated rather easily, the only complicationbeing that introduced by sampling.

To perform a reasonably rigorous derivation, a consistent set of units must be used. Let k be defined such that k ΔT equals the watts emitted by a display element (spot, etc.) for a large target with a temperature difference ΔT. Then, the signal energy per unit area from the display for a single frame will be equal to

M(x, y) = [k ΔT/(Δy_s v)] i(x,y),  (C-27)

where
Δy_s is the distance between scan lines = Δy/η_OVSC,
v is the scan velocity of the display element,
i(x,y) is the spatial distribution function of the degraded target.





The quantity i(x,y) will equal (ignoring sampling effects, a procedure completely legitimate only if Δy_s is very small) the convolution of the original target with the system response function, i.e.,

i(x,y) = i_T(x,y) * h_D(x,y)  (C-28)

or, taking transforms,

I(f_x, f_y) = I_T(f_x, f_y) H_D(f_x, f_y),

where i_T is the target distribution and h_D is the system response function. (Note that for constant i(x,y) the formulation above will yield a uniform display brightness; thus, this formulation uses an average display radiance across scan lines.)

The aperiodic matched filter signal, using Eq. (C-27), is given by

Signal = MAX { [k ΔT/(Δy_s v)] i(x,y) * h_M(x,y) }  (C-29)
       = [k ΔT/(Δy_s v)] ∫∫ I(f_x, f_y) H_M(f_x, f_y) df_x df_y,

where h_M and H_M are, respectively, the real-space and the frequency-space representations of the matched filter. (Note that "MAX" refers to the maximum value of the convolution over x and y.) As indicated previously, H_M is simply the normalized version of I(f_x, f_y) (the degraded target); therefore, the signal for the aperiodic target is

(SIGNAL)_ap = [k ΔT A_T/(Δy_s v)] ∫∫ H_T²(f_x, f_y) H_D²(f_x, f_y) df_x df_y,  (C-30)

where A_T is the area and H_T is the transfer function corresponding to the undegraded target.

The periodic matched filter signal, using Eq. (C-27), is given by

(SIGNAL)_p = [k ΔT/(Δy_s v)] [MAX{i(x,y) * h_M} − MIN{i(x,y) * h_M}],  (C-31)

where i(x,y) is the degraded bar pattern and h_M is the undegraded rect function horizontally and the degraded rect function vertically. The quantity i(x,y) is approximated (horizontally) by the first harmonic of the square wave; therefore, since the amplitude of this harmonic is 4/π times the amplitude of the square wave,

i(x,y) = [square wave with amplitude 0.5 about a mean of 0.5] * h_D(x,y)  (C-32)
       ≈ MTF(f0) (4/π)(0.5) sin(2π f0 x) i_L(y) + 0.5,

where f0 is the frequency of the bar pattern and i_L(y) is the degraded vertical rect function corresponding to the length of the bar. (The fact that i(x,y) will be negative at some points



when MTF(f0) equals approximately unity is an unimportant consequence of using the first harmonic approximation.) Substitution of i(x,y) from Eq. (C-32) into Eq. (C-31) yields (evaluating the horizontal integrals in real space and the vertical integrals in frequency space):

•" i,(2f,,)(SIGNAL), - -k MTFQfo) f sin f21rfox) (2f 0 ) dx f I. H, df,. (C-33)

i'f

In Eq. (C-33), the factor 2f0 in the first integral comes about because the horizontal filter (rect function) of width 1/(2f0) has an amplitude of 2f0 under the normalization convention that H(f_x) = 1 for f_x = 0. Since the first integral in Eq. (C-33) equals 2/π, and since I_L equals L H_L H_D, where H_D is the transfer function of the device in the y direction and L is the length of the bars, Eq. (C-33) can be simplified to

(SIGNAL)_p = (8/π²) [k ΔT/(Δy_s v)] MTF(f0) L ∫ H_L² H_D² df_y.  (C-34)
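The 8/π² factor that emerges in Eq. (C-34) can be checked numerically by applying the rect filter to the first harmonic of an unblurred bar pattern (MTF(f0) = 1); the grid parameters below are arbitrary choices.

```python
import numpy as np

# Numerical check of the horizontal factors in Eqs. (C-31)-(C-34): a rect
# filter of width 1/(2 f0) (amplitude 2 f0, so H(0) = 1) applied to the
# first harmonic of a unit square wave gives a MAX - MIN response of
# 8/pi^2 when MTF(f0) = 1.
f0 = 1.0                                   # bar-pattern frequency
dx_s = 1.0e-3                              # sample spacing
x = np.arange(0.0, 4.0 / f0, dx_s)         # four bar periods
first_harmonic = (4.0 / np.pi) * 0.5 * np.sin(2.0 * np.pi * f0 * x) + 0.5

w = 1.0 / (2.0 * f0)                       # filter width = bar width
taps = int(round(w / dx_s))
rect = np.full(taps, 2.0 * f0) * dx_s      # rect filter; weights sum to 1

filtered = np.convolve(first_harmonic, rect, mode="valid")
signal = filtered.max() - filtered.min()   # the MAX - MIN of Eq. (C-31)

assert np.isclose(signal, 8.0 / np.pi**2, atol=1e-3)
```

The constant 0.5 pedestal cancels in the MAX − MIN operation, which is why only the first-harmonic amplitude matters.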

[Figure: horizontal filtering of the MRT bar pattern, showing the undegraded bar pattern, the degraded bar pattern, and the matched filter superimposed on the degraded bar pattern.]

The noise expressions for MRT and MDT must now be determined; this requires establishing the power spectrum of the noise displayed to the observer. The function describing the noise on the display is given by

n(x,y) = Σ_i b_i(x) δ(y − y_i) * h_d(y) = Σ_i b_i(x) h_d(y − y_i),  (C-35)



where h_d(y) is the impulse response of the display in the y direction and b_i(x) is the function describing the horizontal noise along the ith scan line. The form of Eq. (C-35) arises as an obvious result of the sampled nature of the thermal image, which consists of independent scan lines; the convolution is merely a manifestation of the fact that each line is "spread out" by the display element. The autocorrelation of the noise is given by

"<,"x~y) Mt ,.y," > -

- 0 b,(x)b•,(x')hd (-y,)hdYO -yJ) (C-36)

=••< b,( W bJ W') > h~ (Y Yd,). V -YJ).iJ

Assuming that <b_i(x)> equals zero, note that <b_i(x) b_j(x′)> will equal zero unless i = j, since b_i and b_j are otherwise independent random processes. Thus

R(x,x′,y,y′) = <n(x,y) n(x′,y′)> = Σ_i <b_i(x) b_i(x′)> h_d(y − y_i) h_d(y′ − y_i).  (C-37)

Now <b_i(x) b_i(x′)> is independent of i since all lines are (supposedly) the same and, therefore,

R(x,x′,y,y′) = <b(x) b(x′)> Σ_i h_d(y − y_i) h_d(y′ − y_i).  (C-38)

Approximating the summation by an integral, we have

R(x,x′,y,y′) ≈ <b(x) b(x′)> (1/Δy_s) ∫ h_d(y − y_s) h_d(y′ − y_s) dy_s  (C-39)
            = <b(x) b(x′)> (1/Δy_s) ∫ h_d(ρ) h_d(Y + ρ) dρ ≡ R(X,Y),

where Y = y − y′ and X = x − x′. (The quantity <b(x) b(x′)> is assumed to be a function only of x − x′, which is true if the random process is wide-sense stationary.)
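The replacement of the scan-line sum by an integral (the step from Eq. (C-38) to Eq. (C-39)) can be sketched numerically; the Gaussian display spread and all numbers below are arbitrary choices.

```python
import numpy as np

# Sketch of the step from Eq. (C-38) to Eq. (C-39): when the display spread
# h_d is comparable to or wider than the line spacing, the sum over scan
# lines is well approximated by (1/Dy_s) times the correlation integral.
dy_s = 1.0                                  # scan-line spacing
sig = 1.0                                   # display spread (same units)
h_d = lambda y: np.exp(-0.5 * (y / sig) ** 2)

y, yp = 0.3, 1.1                            # two display points
lines = dy_s * np.arange(-50, 51)           # scan-line positions y_i

lhs = np.sum(h_d(y - lines) * h_d(yp - lines))          # Eq. (C-38) sum

rho = np.linspace(-10.0, 10.0, 200_001)
drho = rho[1] - rho[0]
Y = y - yp
rhs = np.sum(h_d(rho) * h_d(Y + rho)) * drho / dy_s     # Eq. (C-39)

assert np.isclose(lhs, rhs, rtol=1e-2)
```

With a spread much narrower than the line spacing the two sides would differ noticeably, which is exactly the sampling effect the text notes is "approximated away."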

The power spectrum of the noise is just the Fourier transform of R(X, Y), i.e.,

S(f_x, f_y) = [∫ <b(x) b(x′)> e^{−i2πf_x X} dX] (1/Δy_s) H_d²(f_y).  (C-40)

Now b(x) corresponds to the "voltage" noise function which is transformed from a "voltage" to a one-dimensional radiant energy function by the display elements; therefore, the Fourier transform of <b(x) b(x′)> equals the "voltage" noise power spectrum provided the units are




"properly transformed from "voltage" and "voltage" space to radiant energy and display space.(This conversion, itemized below, is based upon the implicit assumption that voltage is linearlyrelated to radiant energy.) As discussed prior to Eq. (C-15), the "voltage" noise power spec-trum equals

S(f) H²_ELECT(f)

or

(constant) [S(f)/S(f0)] H²_ELECT(f).

In the latter expression above, the constant obviously equals S(f0) if the units of this expression are the same as those of the first expression (i.e., (volts)²(second)). Since the signal is given in terms of temperature units, the noise must also be; therefore, the value of the constant is desired which references the power spectrum to temperature units, i.e., (temperature difference)²(second). To establish the value of this constant, note that by definition the square of the NEAT equals the variance of the voltage noise in temperature units. Therefore we have

(NEAT)² = ∫_0^∞ (constant) [S(f)/S(f0)] H²_ELECT(f) df = (constant) Δf_n,

where Δf_n is defined by Eq. (C-20), and therefore,

constant = (NEAT)²/Δf_n.

(The quantity (NEAT)²/Δf_n can be expressed in terms of detector sensitivity and device parameters using Eq. (C-25). Note that although the above discussion uses the true NEAT and Δf_n, not the standardized ones, the last equation is valid regardless of which Δf_n is used provided the Δf_n in the denominator is the same as the Δf_n used to calculate the NEAT.)

Consequently, the voltage noise power spectrum referenced to temperature units equals

[(NEAT)²/Δf_n] [S(f)/S(f0)] H²_ELECT(f).

Now, converting from temperature to radiant energy through use of the correspondence (see the reasoning prior to Eq. (C-27))

NEAT ⇒ k NEAT/v  (energy/cm),

using the relation (valid since f = v f_x)

[S(f)/S(f0)] H²_ELECT(f) ⇒ [S(f_x)/S(f0)] H²_ELECT(f_x),

where S(f_x) ≡ S(v f_x), etc., and using the fact that the Fourier transform of <b(x) b(x′)> corresponds to the voltage power spectrum, the relation



∫ <b(x) b(x′)> e^{−i2πf_x X} dX = [k² (NEAT)²/(v Δf_n)] [S(f_x)/S(f0)] H²_ELECT(f_x)  (C-41)

is obtained (assuming that the display transfer function equals 1). A careful examination of Eq. (C-41) shows that the units are (energy)²/cm, which are those desired of the one-dimensional "display" power spectrum. Combining Eqs. (C-41) and (C-40), and including the display transfer function H_d(f_y), the desired (two-dimensional) noise power spectrum is given by

S(f_x, f_y) = [k² (NEAT)²/(Δy_s v Δf_n)] [S(f_x)/S(f0)] H²_ELECT(f_x) H_d²(f_y).  (C-42)

(The critical step in the derivation of S(f_x, f_y) is Eq. (C-39), where the sampling characteristic of the display is in a sense approximated away. Strictly speaking, the sampled noise process cannot be characterized by a power spectrum.)

Given the power spectrum S(f_x, f_y), the (matched-filtered) noises required to establish MRT and MDT are easily determined. As previously indicated, the matched filter for the MRT calculation is

Hw (f,) H1, (f,) 11D (fy),

where Hw, H,, and HI) are the transfer functions corresponding to the width of the bar, thelength of the bar, and the system impulse function in the y direction, respectively. Therefore,the MRT noise is given by

(Noise)_P = {[k² (NEΔT)²/(Δy_s v Δf_n)] ∫∫ [S̃(f_x)/S(f₀)] H̃_ELECT² H_W² H_L² H_D² df_x df_y}^(1/2).   (C-43)

The filter for the MDT is

H_T(f_x, f_y) H_D(f_x, f_y),

where H_T and H_D are the target and device transfer functions; therefore, the MDT noise is

(Noise)_A = {[k² (NEΔT)²/(Δy_s v Δf_n)] ∫∫ [S̃(f_x)/S(f₀)] H̃_ELECT² H_D² H_T² df_x df_y}^(1/2).   (C-44)

The ratio of the signal given in Eq. (C-34) and the noise given in Eq. (C-43) yields the fundamental signal-to-noise ratio for periodic patterns for a single frame. The MRT is simply the ΔT found by summing the signal and noise over the frames in an eye integration time and setting the signal-to-noise ratio equal to a threshold value S̄_T. Thus, the MRT is given by (from Eqs. (C-34) and (C-43))

MRT = (π²/8) [S̄_T NEΔT/MTF(f₀)] {[2 Δy_s v/(F_R t_E Δf_n)] ∫∫ [S̃(f_x)/S(f₀)] H̃_ELECT² H_W² H_L² H_D² df_x df_y}^(1/2) / [2L ∫₀^∞ H_L H_D df_y],   (C-45)



NRL REPORT 8311

where F_R is the frame rate of the system and t_E is the eye integration time. Similarly, the MDT is given by (using Eqs. (C-30) and (C-44))

MDT = S̄_T NEΔT {[Δy_s v/(F_R t_E Δf_n)] ∫∫ [S̃(f_x)/S(f₀)] H̃_ELECT² H_D² H_T² df_x df_y}^(1/2) / [A_T ∫∫ H_D² H_T² df_x df_y].   (C-46)

The somewhat formidable Eq. (C-45) can be expressed in a much more useful form through use of the following definitions and relations:

q_y = 2L ∫₀^∞ H_L H_D df_y,

p_x = 2W ∫₀^∞ [S̃(f_x)/S(f₀)] H̃_ELECT² H_W²(f_x) df_x,

p_y = 2L ∫₀^∞ H_L² H_D²(f_y) df_y,

L = 7/(2f₀) (assuming the bar length equals 7 times its width),

W = 1/(2f₀).

Employing these last relations, the MRT reduces to

MRT = (π²/8) [S̄_T NEΔT/(MTF(f₀) q_y)] [2 Δy_s v f₀² p_x p_y/(7 F_R t_E Δf_n)]^(1/2).   (C-47)

This expression is further simplified by noting that q_y and p_y will equal approximately 1 for essentially all applications, since the bar length will almost always be large compared to the system response function (for any reasonable f₀) in the y direction; therefore, the MRT is finally given by

MRT = [π²/(4(14)^(1/2))] [S̄_T NEΔT f₀/MTF(f₀)] [Δy_s v p_x/(F_R t_E Δf_n)]^(1/2).   (C-48)

This is the recommended equation for calculating MRT. This last approximate expression can be made plausible by a somewhat simpler argument which is perhaps useful. Calculate the one-dimensional matched-filtered signal and noise for a single scan line, assuming that the bar length is greater than the height of the scan line. This calculation, as is easily seen from the above analysis, yields a signal

(Signal) = (8/π²) MTF(f₀) (k ΔT/v)  (energy/cm)

and a noise

(Noise) = (k NEΔT/v) [v p_x f₀/Δf_n]^(1/2).


Since the "matched" filtering in the y direction corresponds to summing the signal and noise over the length of a bar, since signals add directly and noises add quadratically, and since the bar extends over (L/Δy_s) independent lines, the desired MRT signal-to-noise ratio for a single frame is given by

(S/N) = (8/π²) [MTF(f₀) ΔT/NEΔT] [L Δf_n/(Δy_s v p_x f₀)]^(1/2),

which, after summing over the F_R t_E frames in an eye integration time, directly yields the MRT given in Eq. (C-48).

Each individual concerned with MRT has his own favorite form for the MRT equation, derived by using different definitions and different approximations than those used above. For example, some individuals use a quantity Q formed from the filter integrals defined above; others approximate integrals such as p_x by ratios of noise equivalent bandwidths of the form

f̃₁/(f̃₁ + f̃₂),

where

f̃_L = ∫₀^∞ H_L² df_y

and

f̃_W = ∫₀^∞ H_W² df_x, etc.

To the author's knowledge, however, all the expressions follow directly from Eq. (C-45) using the appropriate definitions and approximations. (In one instance, an equation is used which is derived on the assumption that the "matched filter" for the bar pattern is a square whose side is equal to the width of the bar. Even in this case, the final equation reduces to Eq. (C-48) except for a different constant.)

The use of Eq. (C-48) requires establishing the values of S̄_T and t_E; again, unfortunately, universal values for these constants do not exist. The values recommended at this time are

S̄_T = 2.25,   (C-49)

t_E = 0.2 s.

Several approximations and facts are useful for using Eq. (C-48) to make quick calculations. First, from Eq. (C-22) (and the material following Eq. (C-22)),

v/(2 Δf_n) = Δx,

where Δx is the detector width. Also, Δy_s is given by


Δy_s = Δy/η_ovsc,

where Δy is the detector height and η_ovsc is the overscan ratio. Finally, p_x will equal approximately 1 for small f₀, while for any f₀ a respectable approximation, assuming S̃(f_x)/S(f₀) equals 1, is

p_x ≈ [(4 f₀ Δx)² + 1]^(−1/2).   (C-50)

Therefore, a useful form of Eq. (C-48) for hand calculations is

MRT ≈ 0.66 S̄_T [NEΔT f₀/MTF(f₀)] [2 Δx Δy/(η_ovsc F_R t_E)]^(1/2) [(4 f₀ Δx)² + 1]^(−1/4),   (C-51)

where the last factor can be set equal to 1 for many values of f₀.
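The hand-calculation form of Eq. (C-51) is easy to package as a small routine. The sketch below follows the equation as reconstructed here; the function name, units, and the numerical parameter values are illustrative assumptions, while S̄_T = 2.25 and t_E = 0.2 s are the recommended constants of Eq. (C-49).

```python
import math

def mrt(f0, mtf_f0, ne_dt, dx, dy, eta_ovsc, frame_rate, t_eye, snr_t=2.25):
    """Hand-calculation MRT, Eq. (C-51) as reconstructed in the text.

    f0         target spatial frequency
    mtf_f0     system MTF evaluated at f0
    ne_dt      noise equivalent temperature difference NEdT (K)
    dx, dy     detector width and height (same angular units as 1/f0)
    eta_ovsc   overscan ratio
    frame_rate F_R (frames/s); t_eye: eye integration time t_E (s)
    snr_t      threshold SNR, S_T (recommended value 2.25)
    """
    px_quarter_root = ((4.0 * f0 * dx) ** 2 + 1.0) ** -0.25  # sqrt(p_x), Eq. (C-50)
    return (0.66 * snr_t * ne_dt * f0 / mtf_f0
            * math.sqrt(2.0 * dx * dy / (eta_ovsc * frame_rate * t_eye))
            * px_quarter_root)

# Illustrative (not measured) parameters:
print(mrt(f0=2.0, mtf_f0=0.5, ne_dt=0.2, dx=0.1, dy=0.1,
          eta_ovsc=1.0, frame_rate=30.0, t_eye=0.2))
```

As the text notes, the last factor is close to 1 whenever 4f₀Δx is small, so it may be dropped for quick estimates.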

The MDT given in Eq. (C-46) can be simplified to

MDT = S̄_T NEΔT [Δy_s v p_xA p_yA/(F_R t_E Δf_n)]^(1/2)/(2W q_A)   (C-52)

through use of the definitions

q_A = A_T ∫∫ H_T² H_D² df_x df_y,

p_xA = 2W ∫₀^∞ [S̃(f_x)/S(f₀)] H̃_ELECT² H_W² H_D² df_x,

p_yA = 2W ∫₀^∞ H_W² H_D² df_y,

where H_W is the transfer function corresponding to the side of the test square. Approximations to q_A, p_xA, and p_yA can be formulated similar to those used to simplify the MRT; these will not be pursued here.

VERTICAL MRT

If sampling effects are assumed to be negligible, then a vertical MTF and MRT can be defined and an expression for them derived. A system's performance can then be a function of some combination of horizontal and vertical MRT. As an example, the MRTs in the two directions can be assumed to form an average MRT whose value is

MRT_A(f) = {[MRT²(f_x) + MRT²(f_y)]/2}^(1/2).

A vertical MRT(f_y), similar to the horizontal MRT(f_x), can be derived in the same manner utilized above. The only difference is that the target bar pattern is now oriented with the long dimension parallel to the scan direction. Then, returning to Eq. (C-32), we get

I(x, y) = MTF(f₀y) (4/π)(0.5) sin(2πf₀y y) ĩ(x) + 0.5,   (C-53)

where ĩ(x) is the degraded rect function in the x direction. Eq. (C-33) becomes


(SIGNAL) = [k/(Δy_s v)] (4/π) MTF(f₀y) ΔT ∫₀^(1/(2f₀y)) sin(2πf₀y y)(2f₀y) dy ∫₀^∞ H̃_x df_x,   (C-54)

where now H̃_x = H_L(f_x) H_D(f_x). Hence, the signal for the case of horizontal bars is

(SIGNAL)_y = (8/π²) [k/(Δy_s v)] MTF(f₀y) ΔT ∫₀^∞ H_L(f_x) H_D(f_x) df_x.   (C-55)

In deriving the noise power spectrum, we still get the result

S(f_x, f_y) = [k² (NEΔT)²/(Δy_s v Δf_n)] [S̃(f_x)/S(f₀)] H̃_ELECT²(f_x) H_D²(f_y),   (C-56)

since the target plays no role in the noise at this point. The matched filter for the horizontal case is

H_W(f_y) H_L(f_x) H_D(f_x).

Hence, the noise is

(NOISE)_y = {[k² (NEΔT)²/(Δy_s v Δf_n)] ∫∫ [S̃(f_x)/S(f₀)] H̃_ELECT²(f_x) H_L²(f_x) H_D²(f_x) H_W²(f_y) H_D²(f_y) df_x df_y}^(1/2).   (C-57)

Taking the ratio of Eqs. (C-55) to (C-57), integrating over frames, and solving for MRT yields

MRT(f₀y) = (π²/8) [S̄_T NEΔT/MTF(f₀y)] {[2 Δy_s v/(F_R t_E Δf_n)] ∫∫ [S̃(f_x)/S(f₀)] H̃_ELECT² H_L²(f_x) H_D²(f_x) H_W²(f_y) H_D²(f_y) df_x df_y}^(1/2) / [2L ∫₀^∞ H_L H_D df_x].   (C-58)

Defining the quantities

q_x = 2L ∫₀^∞ H_L H_D df_x,

p_x = 2L ∫₀^∞ [S̃(f_x)/S(f₀)] H̃_ELECT² H_L² H_D² df_x,

p_y = 2W ∫₀^∞ H_W² H_D² df_y,

where L = 7/(2f₀y), then Eq. (C-58) becomes

MRT(f₀y) = (π²/8) [S̄_T NEΔT/(MTF(f₀y) q_x)] [2 Δy_s v f₀y² p_x p_y/(7 F_R t_E Δf_n)]^(1/2).   (C-59)


As in Appendix A, q_x approaches 1; however, p_x will not asymptote as fast as before because of the additional electronic filtering H_ELECT. Using this relation, we get

MRT(f₀y) = [π²/(4(14)^(1/2))] [S̄_T NEΔT f₀y/MTF(f₀y)] [Δy_s v p_x p_y/(F_R t_E Δf_n)]^(1/2).   (C-60)

REFERENCES

C.1 Perkin-Elmer Corporation, "A Symposium on Sampled Images," Report No. IS10763, 1971.

C.2 J.A. Ratches et al., "Night Vision Laboratory Static Performance Model for Thermal Viewing Systems," U.S. Army Electronics Command, Report No. ECOM 7043, April 1975.

C.3 J.A. Jamieson et al., Infrared Physics and Engineering, McGraw-Hill, New York, 1963.

C.4 J.M. Wozencraft and I.M. Jacobs, Principles of Communication Engineering, Wiley, New York, 1965.

C.5 J.A. Jamieson et al., Infrared Physics and Engineering, McGraw-Hill, New York, 1963.

C.6 R.D. Hudson, Jr., Infrared System Engineering, Wiley, New York, 1969.

C.7 Albert Rose, "The Sensitivity Performance of the Human Eye on an Absolute Scale," J. Opt. Soc. Am. 38, 196 (1948).


Appendix D

STATIC PERFORMANCE MODEL BASED ON THE PERFECT SYNCHRONOUS INTEGRATOR MODEL

R.L. Sendall and F.A. Rosell

The perfect synchronous integrator model was originally developed by Otto Schade, Sr., and predates the matched filter concept by 15 to 20 years. Its utility is in predicting sensor resolution-irradiance characteristics with excellent precision. The model presented in this appendix is the result of a collaboration by R.L. Sendall and F.A. Rosell based on an earlier TV/IR comparison study (Ref. D.1) and previous independent efforts. Numerous approximations have been developed in order to make rapid computations on hand calculators possible and to give physical insight to the physical processes involved.

D1. THE ELEMENTARY MODEL

Consider a scene consisting of a uniformly radiant object of area "a" upon a uniformly radiant background. This scene is focused onto a photosurface of an imaging sensor, and the sensor output is used to generate a displayed image which is viewed by an observer. The observer detection theory to be discussed here considers the observer to function as a spatial and temporal integrator, and therefore image signal-to-noise ratios, SNR, will be defined in terms of area and time integrations. The photosurface of the sensor linearly converts a portion of the impinging photons to photoelectrons. The photoconverted image signal is therefore defined as an area and time integration of the photoelectrons derived from the object and is

Signal = Δn = (ṅ_o − ṅ_b) a T_e,   (D-1)

where ṅ_o and ṅ_b are the per unit area and time rates of photoelectron generation due to the object and background images, respectively, and T_e is the time over which photoelectrons are integrated. The signal generated by this photoconversion process is noisy, with the photoelectrons being generated with a Poisson distribution, which results in white noise with an rms value equal to the square root of the average number of photoelectrons generated. That is,

Noise = (n_av)^(1/2) = [a T_e (ṅ_o + ṅ_b)/2]^(1/2).   (D-2)

The image SNR at the output of the photosurface therefore becomes

SNR = Δṅ a T_e/[a T_e (ṅ_o + ṅ_b)/2]^(1/2) = Δṅ [a T_e/ṅ_av]^(1/2).   (D-3)

The image SNR defined in Eq. (D-3) is essentially that proposed by Barnes and Czerny in 1932, and by de Vries in 1943, but has been formulated to agree with Schade. In the early efforts, it was postulated that an image, to be detectable, must have an SNR exceeding some threshold value and that this threshold value is a constant. This has been found to be a reasonable approximation for small images, but as Rosell and Willson (D.2) have shown, the apparent thresholds increase for images which subtend more than about 1/2° in two directions simultaneously at the observer's eye.
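The elementary model of Eqs. (D-1) through (D-3) can be exercised numerically. This is a minimal sketch; the function name and the photoelectron rates used below are illustrative assumptions, not values from the report.

```python
import math

def image_snr(n_dot_obj, n_dot_bkg, area, t_e):
    """Photoelectron-limited image SNR of Eq. (D-3).

    n_dot_obj, n_dot_bkg : photoelectron generation rates per unit
                           area and time for object and background
    area : object area a;  t_e : integration time T_e
    """
    dn = abs(n_dot_obj - n_dot_bkg)        # differential rate
    n_av = 0.5 * (n_dot_obj + n_dot_bkg)   # average rate
    signal = dn * area * t_e               # Eq. (D-1)
    noise = math.sqrt(area * t_e * n_av)   # Eq. (D-2), Poisson noise
    return signal / noise                  # = dn * sqrt(a*T_e / n_av)

# Doubling the integration time improves the SNR by sqrt(2):
s1 = image_snr(1.1e6, 1.0e6, area=1e-4, t_e=0.1)
s2 = image_snr(1.1e6, 1.0e6, area=1e-4, t_e=0.2)
print(s2 / s1)
```

The square-root improvement with area and time is the defining behavior of the synchronous integrator model.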


D2. FOURIER ANALYSIS OF IMAGE SIGNALS

If an imaging system were perfect, point source images would appear as points on the display and all images would be displayed with perfect fidelity. In reality, displayed images differ from the original scene in amplitude, shape, and/or phase due to finite sensor apertures, which may include optical and electron-optical lenses, phosphor particles, electron scanning beams, and the like. Whatever their form, the effect of finite apertures is to blur image detail in a manner analogous to electrical filters in communications systems. Being analogous, the mathematical transform methods of Heaviside, Laplace, and Fourier originally developed for communications systems are directly applicable. Since images are two-dimensional, two-dimensional transforms may be required. However, many images are primarily one-dimensional or functionally independent and separable in two orthogonal dimensions, so that one-dimensional analysis applies. One-dimensional analysis will be employed for the discussion below for the sake of brevity and clarity.

Following standard television practice, the fundamental measure of spatial frequency will be N lines or half cycles per picture height. For a repetitive bar pattern of period 2Δx, as shown in Fig. D-1, the spatial frequency

N = Y/Δx,   (D-4)

where Y is the height of the picture. The virtue of N is that it is dimensionless and eliminates the need for scale changes when multiple imaging and reimaging processes are involved. The applicable Fourier transform pair is

F(N) = ∫ f(x) exp(−jπNx) dx,   (D-5)

f(x) = ∫ F(N) exp(jπNx) dN.   (D-6)

The response of a one-dimensional linear system to a unit area impulse δ(x) is known as the impulse response r₀(x) or, alternatively, as the line spread function for imaging systems. The Fourier transform of r₀(x) is R₀(N) which, for imaging systems, is known as the Optical Transfer Function or OTF. In general,

R₀(N) = |R₀(N)| exp jθ(N).   (D-7)

The modulus |R₀(N)| is the Modulation Transfer Function or MTF, and the argument θ(N) is the Phase Transfer Function or PTF. The conjugate R₀*(N) = R₀(−N) and R₀(0) = 1.0. Furthermore, R₀(N) is always smaller than 1.0 at all N > 0 (Ref. D.3).

Fig. D-1 — Bar pattern geometry


In optical systems, the transmission losses or gains are always separated from the MTF. Hence, when a unit area impulse is used as the input, the integral of the output waveform must equal unity. This places a normalization on the OTF that causes it to have a value of unity at N = 0. It has become common practice to maintain this normalization when analyzing imaging systems; i.e., if the image intensity distribution is r₀(x) for a unit area impulse input, then

∫ r₀(x) dx = 1 = R₀(0),   (D-8)

as can be seen by setting N = 0 in Eq. (D-5).

In the space domain, the input to any component aperture in a linear system will be kf(x), the aperture response will be r₀(x), and the output will be kg(x), where g(x) is the convolution of r₀(x) and f(x), i.e.,

g(x) = r₀(x) * f(x)   (D-9)

and k is an arbitrary constant. In the spatial frequency domain,

G(N) = R₀(N) · F(N),   (D-10)

where G(N), R₀(N), and F(N) are the Fourier transforms of g(x), r₀(x), and f(x), respectively.
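Equations (D-9) and (D-10) can be checked discretely: convolving an input with a unit-area line spread function multiplies the spectra and leaves the total (zero-frequency) signal unchanged. The sketch below uses the ordinary DFT rather than the half-cycle units of Eq. (D-5) — the theorem is the same — and the pulse and Gaussian shapes are illustrative assumptions.

```python
import numpy as np

# Discrete check of Eqs. (D-9) and (D-10): convolution in space
# corresponds to multiplication in the spatial frequency domain.
n = 256
x = np.arange(n)
f = np.where(np.abs(x - n // 2) < 8, 1.0, 0.0)        # rectangular input f(x)
r0 = np.exp(-0.5 * ((x - n // 2) / 4.0) ** 2)         # Gaussian line spread
r0 /= r0.sum()                                        # unit area -> R0(0) = 1
g = np.real(np.fft.ifft(np.fft.fft(r0) * np.fft.fft(f)))  # circular convolution

# G(0) = R0(0) F(0) = F(0): the aperture conserves total signal (Eq. D-14).
print(np.allclose(g.sum(), f.sum()))
```

This is exactly the point exploited in Section D3: blurring redistributes the signal but does not change its integral.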

D3. THE EFFECT OF APERTURES ON APERIODIC IMAGES

An aperiodic image, as the term is used here, is defined to be an isolated object viewed against a uniform background of large extent relative to the object. Suppose that an input image is a rectangular pulse of amplitude k and duration x₀. After passing through a linear system with finite apertures, the output image waveform will be kg(x). This output image will be of greater duration and its amplitude may be altered. It is customary to assume that the observer is able to integrate all of the image signal under the waveform of the output image. This assumption is obviously optimistic, but it is a good approximation for most waveforms. For images with a strong central core but with a long low-amplitude skirt, it would be expected that the eye-brain combination will truncate the integration when the signal-to-noise ratio no longer improves by increasing the integration distance. Using these criteria for limiting the integration distance would be more accurate, but the calculation would be difficult and time consuming.

If we accept an infinite integration distance or duration for the signal, on the basis that the signal would not change considerably even though a different duration is used for the noise, we have that the signal will be equal to

Signal = k ∫ g(x) dx.   (D-11)

Observe that

G(N) = ∫ g(x) exp(−jπNx) dx,   (D-12)

and

G(0) = ∫ g(x) dx.   (D-13)


That is, the integral over the output image area is equal to the value of the Fourier transform of the output image at zero frequency. From Eq. (D-10), we can infer that G(0) = R₀(0)·F(0) and, since R₀(0) = 1.0, G(0) = F(0). Thus,

Signal = kG(0) = kF(0).   (D-14)

By the same argument then,

kF(0) = k ∫ f(x) dx   (D-15)

or the

Signal = k ∫ f(x) dx,   (D-16)

which is equal to kx₀ when kf(x) is a pulse of amplitude k and duration x₀. The implication of the above result is that the area under the output pulse is identical to the area under the input pulse. If the observer is able to integrate all of the image signal under the output waveform, as we have assumed, then the sensor's apertures have no effect on the signal. Note that, to conform with the elementary imaging model, the value of k for our one-dimensional image of constant amplitude and of duration x₀ will be Δṅ T_e, and

Signal = Δṅ T_e x₀.   (D-17)

While an imaging sensor's aperture may not affect the image signal, we would expect the blurring of an image by sensor apertures to adversely affect the image's detectability. On the premise that the important parameter with respect to image detectability is its SNR, we are led to hypothesize that the noise discerned must then be increased by the apertures.

Before continuing, it is worthwhile to review the effects of an integrator. A perfect integrator continuously sums the value of a function for a set period of time or space. For the theory being presented here, it is assumed that the observer's eye-brain combination processes images in this manner, with the duration being synchronous with the image anomaly being detected. There are two effects of interest. The low-frequency gain is proportional to the integration duration, and the response to high frequencies, relative to low frequencies, decreases as the image duration increases. Specifically, the normalized frequency response of an integrator is given by

R(ν)/R(0) = [sin(πνx_i/2)]/(πνx_i/2),   (D-18)

where ν is the dummy frequency variable in lines per unit distance and x_i is the duration variable. The low-frequency response R(0) is equal to x_i.

If noise is being integrated, then the rms value of the variations, σ_I, in the output of the integrator from integration period to integration period, which is the integrator output noise, is predicted by integrating the product of the spectral density of the noise and the response function of the integrator, i.e.,

σ_I² = ∫₀^∞ σ²(ν) R_I²(ν) dν.   (D-19)

For white noise, σ²(ν) = σ², a constant, and

σ_I = x_i σ (N_eI)^(1/2) = σ x_i^(1/2),   (D-20)

since the noise bandwidth of the integrator N_eI is

N_eI = ∫₀^∞ [sin(πνx_i/2)/(πνx_i/2)]² dν = 1/x_i.   (D-21)

When the input signal is of constant amplitude, the signal output of the integrator is proportional to x_i, and therefore the integrator SNR is proportional to x_i^(1/2).

It should be noted that the definition of σ(ν) as a noise spectral density implies an integration for the duration of one cycle. Therefore our integrator is summing up independent samples of one cycle duration and, as is commonly noted, when the signal is coherent and the noise is random, the SNR improves as the square root of the number of samples summed, i.e., the square root of the integration duration. If the noise is not broadband, then the noise bandwidth is not determined solely by the integration duration but also by the spectral characteristics of the noise being integrated. Consider the case of bandlimited white noise which is flat from approximately DC until rolled off by a filter of frequency response R_f(ν) and noise bandwidth N_ef. For this case, the integrated noise is

σ_I = x_i σ [(1/N_eI)² + (1/N_ef)²]^(−1/4) = σ x_i [x_i² + (1/N_ef)²]^(−1/4).   (D-22)

Considering the total noise bandwidth out of the integrator to be that due to the filter and the integrator, we have

(N_eT)^(−2) = (N_eI)^(−2) + (N_ef)^(−2),   (D-23)

and for this case

σ_I = σ x_i (N_eT)^(1/2).   (D-24)

The signal should be the result of integrating over the same x_i duration as the noise. However, as was discussed above, it is easier to assume that the signal integration over x_i is essentially the total integrated signal and therefore is the same as the integral over the total image and, in turn, equal to the integral over the total object x₀. The image SNR for this case becomes proportional to x₀ (1/N_eT)^(1/2).

All photoconversion processes are noisy. The principal noises may be added either prior to or subsequent to a sensor aperture. To begin, let the noise be added after the image has passed through an aperture. Suppose the input pulse is rectangular of duration x₀, as shown in Fig. D-2(a), and then let noise be added, with the result shown in Fig. D-2(b). Assuming a unity OTF, the noise integration distance will then be x₀. If, however, the input image is blurred by apertures as shown in Fig. D-2(c), the effective noise integration distance will be increased as shown in Fig. D-2(d). While the exact size of this integration will involve further assumptions and will be less than that presumed for the signal, it is clear that it can be increased beyond the size of the input object. It is also apparent that an optimum integration distance will exist which is less than the duration we assume for the signal but which would not significantly change the value of the integrated signals and noises. While the optimization of integrator area to maximize SNR may be important in some applications, it is ignored here in favor of a more direct solution.

Fig. D-2 — Noise integration distances with unit MTF (cases a and b) and after passing through a real aperture (cases c and d)

If the input image is of unit amplitude and long duration x₀ relative to the sensor's line spread function, the sensor apertures should have little effect and the rms noise would simply be proportional to the square root of x₀ (or image area in the case of a two-dimensional image). As x₀ is made small, approaching an impulse in dimensions, the noise integration distance becomes the effective dimensions of the line spread function.

We will first consider the case where the sensor apertures precede the point of noise insertion, as shown in Fig. D-3. Assume the input image to be a unit amplitude rectangular pulse of width x₀, and the output pulse is a function g(x) of amplitude g(0). Noise is then linearly added. The noise to be integrated is broadband and independent of the sensor apertures, but the integration duration is determined by the object size and the sensor apertures. The preferred image size approximation can be obtained from

x_n = [x₀² + (1/N_e)²]^(1/2) = [x₀² + δ_e²]^(1/2),   (D-25)

where N_e is the sensor's equivalent bandwidth, i.e.,

N_e = ∫₀^∞ R₀²(N) dN, and δ_e = 1/N_e.   (D-26)

Fig. D-3 — Sensor apertures preceding the point of noise insertion


An alternative approximation that has been investigated is to use the equivalent signal duration, x_a, which is defined as the width of a rectangle of height equal to the maximum amplitude of the output signal waveform g(0) and of area equal to the area under the waveform g(x). This duration x_a can be approximated in a manner similar to that used for x_n by

x_a = [x₀² + δ_a²]^(1/2),   (D-27)

where δ_a is the equivalent duration as x₀ approaches zero and can be exactly determined from this limiting process by having kx₀ = 1. Then,

g(0) δ_a = kx₀ = 1,

δ_a = 1/g(0) = [∫ G(N) dN]^(−1),   (D-28)

and since, as x₀ → 0, F(N) = 1 for all N, G(N) = R₀(N) and

δ_a = [∫ R₀(N) dN]^(−1).   (D-29)

This particular integration duration (area) is less than or equal to δ_e and is of interest not only because it puts a lower bound on the noise integration distance but because it is determined by a limiting case of the area under the MTF curve, which has been described by Snyder (D.4). The use of δ_a rather than δ_e as the noise integration duration, in conjunction with the total signal integration assumption, will result in a higher SNR_D value but has not proved to be as accurate experimentally as the δ_e choice and therefore is only presented as an alternate and limiting case. For images that are much larger than the sensor line spread function, the choice is immaterial, and even for small images the differences will be comparatively small.
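The two equivalent apertures are easy to compare for a concrete MTF. The sketch below assumes a Gaussian R₀(N) with a hypothetical roll-off; since R₀ ≤ 1 everywhere, the area under R₀² is smaller than the area under R₀, which is why δ_a ≤ δ_e as stated above.

```python
import numpy as np

# delta_e = 1/N_e (Eqs. D-25/D-26) uses the area under R0^2;
# delta_a (Eq. D-29) uses the area under R0 itself.
Nc = 300.0                              # hypothetical roll-off, lines
N = np.linspace(0.0, 10 * Nc, 200_000)
dN = N[1] - N[0]
r0 = np.exp(-((N / Nc) ** 2))           # assumed Gaussian MTF

n_e = np.sum(r0 ** 2) * dN              # sensor equivalent bandwidth N_e
delta_e = 1.0 / n_e
delta_a = 1.0 / (np.sum(r0) * dN)

print(delta_e, delta_a, delta_a <= delta_e)
```

For this Gaussian case δ_a is roughly 70% of δ_e; as the text says, the distinction only matters for objects comparable to or smaller than the line spread function.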

Using the δ_e assumption, the mean-square noise for a rectangular image of unit amplitude is

M.S. Noise = σ² x_n = σ² [x₀² + δ_e²]^(1/2).   (D-30)

In two dimensions,

M.S. Noise = σ² x_n y_n = σ² [x₀² + (1/N_ex)²]^(1/2) [y₀² + (1/N_ey)²]^(1/2) = σ² [x₀² + δ_ex²]^(1/2) [y₀² + δ_ey²]^(1/2),   (D-31)

where

x₀ is the input image width,
y₀ is the input image height,
N_ex is the sensor noise equivalent bandwidth in x, and
N_ey is the sensor noise equivalent bandwidth in y.

The image signal for the two-dimensional object is kx₀y₀ = Δṅ T_e x₀ y₀, and σ² = ṅ_av T_e. Therefore, for the case of white noise,

_ 187

Page 188: a073763 the Fundamentals of Theirmal Imaging Systems

APPENDIX D

SNR_D = Δṅ T_e x₀ y₀ / {(ṅ_av T_e)^(1/2) [x₀² + (1/N_ex)²]^(1/4) [y₀² + (1/N_ey)²]^(1/4)}

= (x₀ y₀)^(1/2) Δṅ T_e^(1/2) / {[1 + (1/x₀N_ex)²]^(1/4) [1 + (1/y₀N_ey)²]^(1/4) (ṅ_av)^(1/2)}.   (D-32)

For the case where the system MTF = 1.0 at all spatial frequencies of interest, or where the input image is of large extent, N_e → ∞ and we have agreement with Eq. (D-3) of the elementary model, since

SNR_D = (T_e x₀ y₀)^(1/2) Δṅ/(ṅ_av)^(1/2).   (D-33)

On the other hand, when the object is very small, such that x₀, y₀ → 0,

SNR_D = x₀ y₀ Δṅ T_e^(1/2)/[(δ_ex δ_ey)^(1/2) (ṅ_av)^(1/2)],   (D-34)

showing that sensor apertures degrade the SNR_D. These two expressions show that for large objects SNR_D is proportional to (x₀y₀)^(1/2), while for objects which are smaller than the sensor's equivalent aperture (δ_e = N_e^(−1)) the SNR_D is proportional to x₀y₀. As a result, it is possible to introduce a noise increase factor, ξ, which is a measure of the noise power increase due to integration over the blurred image. These factors are

ξ_x = [1 + (1/x₀N_ex)²]^(1/2),

ξ_y = [1 + (1/y₀N_ey)²]^(1/2).   (D-35)

ξ is therefore a factor for comparing the system with finite apertures to a perfect system and, for any given system, is a function of the input image size, having a minimum value of unity when x₀ >> δ_e and a value ξ_x → δ_ex/x₀, which is greater than unity, as x₀ → 0. The SNR_D for a sensor with an aperture which does not filter the noise becomes

SNR_D = (x₀ y₀)^(1/2) Δṅ T_e^(1/2)/[(ξ_x ξ_y)^(1/2) (ṅ_av)^(1/2)].   (D-36)

We next consider the case where the aperture follows the point of noise insertion, as shown in Fig. D-4(a). In this case, the aperture function r₀(x) can both increase the perceived noise by increasing the noise integration distance and decrease it by virtue of a filtering action. In the previous case (noise added after the aperture), the noise in the image is white in character (though spatially band-limited). In the case now under consideration, the displayed noise will have a finite spectrum due to passing through the aperture.

Conceptually, the processes will be assumed to be of the following nature. The input is first passed through the aperture so as to increase the noise integration distance. Next, the noise is bandpass limited and added to the output signal, as shown in the functional noise diagram of Fig. D-4(b).

For this case, the noise to be integrated is band-limited by the transfer function R₀(N), and the integration duration is determined by the object size and the sensor aperture. The noise integration duration as determined by the sensor and object can be approximated by

x_n = [x₀² + δ_e²]^(1/2).   (D-37)


Fig. D-4 — (a) Aperture following a point of noise insertion and (b) functional diagram for analysis of (a)

The noise to be integrated has already been band-limited by the sensor to N_e = 1/δ_e. The M.S. noise out of the integrator can then be approximated by

M.S. Noise = σ² x_n² ∫₀^∞ R₀²(N) [sin(πNx_n/2)/(πNx_n/2)]² dN

= σ² (x₀² + δ_e²)/(x₀² + 2δ_e²)^(1/2).   (D-38)

The signal is as was presented previously. Consequently, the two-dimensional SNR_D may be written as

SNR_D = x₀ y₀ Δṅ T_e^(1/2) (x₀² + 2δ_ex²)^(1/4) (y₀² + 2δ_ey²)^(1/4)/[(x₀² + δ_ex²)^(1/2) (y₀² + δ_ey²)^(1/2) (ṅ_av)^(1/2)].   (D-39)

Since this expression is cumbersome, it is convenient, as in the previous case, to compare the above SNR_D to the case where the displayed noise is white and to introduce factors to quantify the differences. Therefore, we define a bandwidth reduction factor Γ to quantify the SNR_D improvement occurring due to filtering of the white noise by the sensor apertures. These factors are

Γ_x = {[1 + δ_ex²/x₀²]/[1 + 2δ_ex²/x₀²]}^(1/2),

Γ_y = {[1 + δ_ey²/y₀²]/[1 + 2δ_ey²/y₀²]}^(1/2).   (D-40)

189

Page 190: a073763 the Fundamentals of Theirmal Imaging Systems

APPENDIX D

Using these factors and the noise increase factors of Eq. (D-35), the SNR_D becomes

SNR_D = (x₀ y₀)^(1/2) Δṅ T_e^(1/2)/[(ξ_x Γ_x ξ_y Γ_y)^(1/2) (ṅ_av)^(1/2)].   (D-41)

Γ_x is a function of x₀ and has a maximum value of 1 when x₀ >> δ_e. When x₀ << δ_e, Γ in the limit becomes (1/2)^(1/2). Thus Γ can often be ignored in first-order calculations. It should be noted that while ξ, the noise increase factor, is always ≥ 1, and Γ, the noise bandwidth reduction factor, is always ≤ 1, the product ξΓ is always ≥ 1. Hence, the effect of an aperture following a point of noise insertion is to increase the noise, but by an amount that is smaller than if the aperture preceded the point of noise insertion.
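The claims about Γ — its unity ceiling, its (1/2)^(1/2) floor, and the fact that ξΓ never drops below 1 — follow directly from Eqs. (D-35) and (D-40) and can be verified numerically. A minimal sketch with an assumed δ_e:

```python
import math

def gamma(x0, delta_e):
    """Bandwidth reduction factor of Eq. (D-40)."""
    d = (delta_e / x0) ** 2
    return math.sqrt((1.0 + d) / (1.0 + 2.0 * d))

def xi(x0, delta_e):
    """Noise increase factor of Eq. (D-35), with delta_e = 1/N_e."""
    return math.sqrt(1.0 + (delta_e / x0) ** 2)

for x0 in (100.0, 1.0, 0.01):           # large, comparable, tiny objects
    g, x = gamma(x0, 1.0), xi(x0, 1.0)
    print(x0, g, x * g)                 # gamma <= 1, xi*gamma >= 1
```

Algebraically, ξΓ = (1 + d)/(1 + 2d)^(1/2) with d = (δ_e/x₀)², and (1 + d)² ≥ 1 + 2d for all d ≥ 0, which is the inequality the text asserts.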

Fig. D-5 — Aperture R₀₁(N) precedes noise source 1, and aperture R₀₂(N) follows noise source 1 and precedes noise source 2

To complete the discussion of aperiodic objects, we consider the case of two system apertures: one preceding a point of noise insertion, followed by a second aperture which is followed in turn by a second point of noise insertion, as shown schematically in Fig. D-5. When multiple noise sources are involved, they are not usually related to ṅ_av alone. Hence, the following will be presented in terms of two noise spectral densities σ₁ and σ₂ referred to the input, so that sensor gain parameters can be ignored. If the two apertures of Fig. D-5 did not exist, the two noise sources would add in quadrature and

σ = [σ₁² + σ₂²]^(1/2).   (D-42)

If the constant amplitude input signal with duration x₀ were used and both noise sources were white, then the perceived mean square noise would be simply

M.S. Noise = [σ₁² + σ₂²] T_e x₀,   (D-43)

but both apertures increase the noise integration distance. Since the apertures are in series and are independent, the noise increase factor can be calculated from the combined transfer function, and

ξ_x1 = [1 + δ_e1²/x₀² + δ_e2²/x₀²]^(1/2),   (D-44)

where δ_e1 and δ_e2 are the noise equivalent apertures of R₀₁(N) and R₀₂(N), respectively. Since the first noise source precedes the second aperture, it must be corrected for noise bandwidth reduction due to the second aperture. Γ can be determined by considering the image at the input to R₀₂(N) to be of width (δ_e1² + x₀²)^(1/2) and by use of Eq. (D-40),


Γ_x1 = {[1 + δ_e2²/(δ_e1² + x₀²)]/[1 + 2δ_e2²/(δ_e1² + x₀²)]}^(1/2)

= {[x₀² + δ_e1² + δ_e2²]/[x₀² + δ_e1² + 2δ_e2²]}^(1/2),   (D-45)

and the noise becomes

M.S. Noise = [σ₁² ξ_x1 Γ_x1 ξ_y1 Γ_y1 + σ₂² ξ_x2 ξ_y2] T_e x₀ y₀,   (D-46)

and the SNR_D is written as

SNR_D = (x₀ y₀)^(1/2) Δṅ T_e^(1/2)/[σ₁² ξ_x1 Γ_x1 ξ_y1 Γ_y1 + σ₂² ξ_x2 ξ_y2]^(1/2).   (D-47)

Before proceeding, initial experimental efforts to confirm the theory will be presented.

D4. PSYCHOPHYSICAL EXPERIMENTATION - APERIODIC IMAGES

The theory for the elementary case, where the input image is large with respect to the sensor's noise equivalent aperture, has been experimentally verified, and the equations presented herein are consistent with the elementary theory. The psychophysical experiments reported herein are concerned with the case where the input image width becomes small with respect to the sensory system's noise equivalent aperture, δ_e. The results obtained are in general consonance with the theory, but some differences are noted which are probably due to secondary observer effects which have not yet been included in the overall model. The primary parameter tested was the noise increase factor ξ, since the bandwidth reduction factor Γ has a weak dependence which would be difficult to test at best and has been bounded by the analysis.

The experimental setup for the television camera generated images is shown in Fig. D-6. The test images consisted of a series of well spaced, vertically oriented white bars against a black background. The individual bars were of constant height but of varying widths, and their widths are described in terms of their line number N, equal to Y (the picture height) divided by Δx (the bar width). These bars were projected onto the faceplate of a high resolution 1.5-in. vidicon operated at 25 frames/s with 875 scan lines (825 active) interlaced 2:1 and at a broad-area video SNR of 50:1 or more. The display brightness was 1 ft-Lambert, and the displayed picture height was 8 in. located 28 in. from the observer's eye. Both signals and noises were passed through identical low pass filters of 12.5 MHz bandwidth, which have negligible effect on the results reported here. White noise of Gaussian distribution was linearly added to the signals prior to display.

The overall sensor MTF was varied by defocusing the lens, with the results shown in Fig. D-7. Since neither of these curves approaches a Gaussian (for which the approximations of the theory hold), some experimental deviations from the theory could be expected. The noise equivalent passbands of the lens and camera are indicated in Table D-1. In the measurement, the camera N_eC was calculated from a measured MTF curve, as was that of the overall sensory

system, N_eT. The lens N_eL was calculated from the relation N_eL = [N_eC² N_eT² / (N_eC² − N_eT²)]^1/2 to show the degree of defocus necessary. The quantity Δθ_T is the angular subtense of δ_eT relative to the observer's eye.


APPENDIX D

Fig. D-6 - Experimental setup for the television camera generated imagery

Fig. D-7 - Modulation transfer functions for Cases A and B

Table D-1 - Sensory System Characteristics

Case | N_eL | N_eC | N_eT | Δθ_T (mrad)
  A  | 1266 | 261  | 256  | 1.1
  B  | 123  | 261  | 69   | 4.14


The line numbers of the bars used are summarized in Table D-2. The angular subtense of each bar was 20.6 mrad in the vertical and variable in the horizontal (relative to the observer's eye). Δθ_b represents the angular subtense which would be observed with a unity MTF, while Δθ_e represents the noise effective width including the sensor's noise equivalent aperture. When the display luminance is about 1 ft-Lambert, the noise equivalent aperture of the eye is about 1 mrad (D-5) and therefore should be included as a filter, but it would have very little effect on Case B and an almost uniform effect on Case A, since it increases every Δθ_e except the largest by less than √2. Observe that in Case B, Δθ_e is very nearly a constant over the entire range of bar line numbers.

Table D-2 - Bar Number Summary

Bar Line No. (L/PH) | Δθ_b (mrad) | Δθ_e, Case A (mrad) | Δθ_e, Case B (mrad)
         74         |    3.86     |        4.02         |        5.66
        261         |    1.10     |        1.56         |        4.28
        357         |    0.80     |        1.37         |        4.22
        494         |    0.58     |        1.25         |        4.18
        625         |    0.46     |        1.20         |        4.17
        760         |    0.38     |        1.17         |        4.16
        900         |    0.32     |        1.15         |        4.15
       1090         |    0.26     |        1.14         |        4.15
       1205         |    0.24     |        1.14         |        4.15
       1605         |    0.18     |        1.12         |        4.14
       1780         |    0.16     |        1.12         |        4.14
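The Δθ_e columns of Table D-2 are consistent with combining the bar subtense and the sensor's noise equivalent aperture in quadrature. The sketch below spot-checks that reading of the table; the δ_eT values are taken from the large-N limits of the table and the quadrature rule itself is an inference, not a formula stated in the text:

```python
import math

# Inferred sensor noise equivalent apertures (mrad), read from the
# large-N limits of Table D-2
DELTA_ET = {"A": 1.12, "B": 4.14}

def noise_effective_width(dtheta_b, case):
    # Quadrature combination of bar subtense and sensor aperture
    return math.sqrt(dtheta_b**2 + DELTA_ET[case]**2)

# Spot checks against the first row of the table
print(round(noise_effective_width(3.86, "A"), 2))  # table gives 4.02
print(round(noise_effective_width(3.86, "B"), 2))  # table gives 5.66
```

The agreement across the whole table (within rounding) supports treating Δθ_e as a root-sum-square of the geometric and aperture widths.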

In the experiment all of the bars were simultaneously imaged. The observer's task was to select the bar of highest line number which he could just barely discern as the bar images' SNR were randomly varied through increase or decrease of noise. For Case A, 6 observers were used to make 1730 trials, while 5 observers made 1400 observations for Case B. To calculate the threshold SNR_D, we note that the photoelectron rate ṅ is related to the signal current as measured in the video channel by

ṅ = i/eA,   (D-48)

where e is the charge of an electron and A is the effective area of the photosurface. Using this equation, the elementary Eq. (D-3) may be written as

SNR_D = [Tₑ a / (eA)]^1/2 Δi / (i_av)^1/2.   (D-49)

Next, we note that mean square shot noise has the form Iₙ² = 2 e i Δf_v, where Δf_v is the video bandwidth. Then the above equation can be written as

SNR_D = [2 Tₑ Δf_v a / A]^1/2 (Δi/Iₙ) = [2 Tₑ Δf_v a / A]^1/2 SNR_vo.   (D-50)


In the above equation, Δi/Iₙ is the peak-to-peak signal to rms noise measured for a broad area image when the noise is white, and SNR_vo is therefore the video signal-to-noise ratio. For convenience, we will let A = αY², where α is the picture aspect ratio (H:V) and Y is the picture height. We will also describe the bar dimensions in the form Δx Δy = εΔx², where Δx is the bar width and ε is the ratio of bar length-to-width. Finally, we will note that

a/A = Δx Δy / (αY²) = εΔx² / (αY²) = ε / (αN²),   (D-51)

where N = Y/Δx. Now, Eq. (D-50) may be written as

SNR_D = (1/N) [2 Tₑ Δf_v ε / α]^1/2 SNR_vo.   (D-52)

The effect of an aperture is to increase the noise. Using the formulation of Eq. (D-37), we note this fact by modifying the above equation to read

SNR_D = [2 Tₑ Δf_v ε / α]^1/2 SNR_vo / [N f^1/2(N)].   (D-53)

We observe that the aperture also increases perceived noise along the length of the bar, but this effect is negligible compared to the increase across the bar widths. The noise increase factor is given by

f(N) = [1 + (N/N_eT)²]^1/2.   (D-54)

Since f is the primary new parameter in this experiment, its value is presented in Fig. D-8. It is a significant factor for both cases, and in Case A, adding the effect of the eye MTF would have small effect.
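With N_eT read from Table D-1, Eq. (D-54) can be evaluated directly. A minimal sketch; the interpretation of N_eT as the relevant passband is an assumption based on Table D-1:

```python
import math

def noise_increase(N, NeT):
    """Noise increase factor f(N) of Eq. (D-54); NeT is taken as the
    overall noise equivalent passband from Table D-1 (an assumption)."""
    return math.sqrt(1.0 + (N / NeT)**2)

# Evaluated at the largest bar line number used in the experiment
for case, NeT in (("A", 256.0), ("B", 69.0)):
    print(case, round(noise_increase(1780.0, NeT), 1))  # A: 7.0, B: 25.8
```

The much larger factor for Case B reflects its heavily defocused lens: at high line numbers the blurred bar forces the observer to integrate noise over many times the bar's geometric width.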

Fig. D-8 - Noise increase factor for Cases A and B. Dashed curve for Case A includes the noise increase due to the eye MTF


The results of the experiment are shown in Fig. D-9. According to the theory, the SNR_DT, as calculated using Eq. (D-53) and the measured SNR_vo, should be a constant over all line numbers. This is nearly true for Case A except at the lowest line number. For Case B, the SNR_DT appears consistent with that of Case A up to bar line numbers of about 500 to 700 lines/picture height, but increases linearly with bar number thereafter. The increase at very low line numbers has been previously noted, and while the cause is unknown, the result was not unexpected. The theory appears valid for Case A over a wide range of line numbers but does not completely account for Case B. One possible explanation is the difference in displayed image amplitude, which is plotted in Fig. D-10 for the two cases. The noise equivalent pulse

Fig. D-9 - Threshold perceived signal-to-noise ratio vs spatial frequency for Case A and Case B MTFs

Fig. D-10 - Relative output image amplitude vs bar line number for Case A and B MTFs


shapes at N = 2000 are shown in Fig. D-11. In Case B, the image amplitude, and hence the image contrast, is a factor of 3.65 lower. In the sensor system model, we have ignored the possibility of a photoconversion noise in the observer's retina, which surely must exist. When video gain is insufficient, images of small amplitude (or incremental luminance) may become perception limited by a retinal photoconversion noise component, and this effect may account for the departure from theory noted in Case B. In summary, the theory appears to be valid, but the range of validity needs further investigation and other effects apparently must be taken into account.

Fig. D-11 - Noise equivalent output pulse amplitude vs distance for Case A and B MTFs at bar line number 2000

D5. THE EFFECT OF APERTURES ON PERIODIC BAR PATTERN PERCEPTION

A periodic bar pattern consists of an alternating series of black and white bars as shown in Fig. D-1. In practice, the number of black and white bars used is usually 5 to 11, but the number should be much larger to approach true periodicity. It is well known that if the input image to a linear system is periodic, the displayed image will also be periodic. It is postulated that to perceive the presence of a bar pattern, the observer must make the decision on the basis of resolving a single bar. Thus the problem becomes one of calculating the perceived SNR based upon a single bar. Again, it is assumed that the observer functions as a spatial and temporal integrator and makes a detection decision when the SNR_D exceeds a threshold value. The effect of a sensor aperture is to smear periodic images as in the aperiodic case, but one noticeable difference is that adjacent bars may become blurred into one another. At very low spatial frequencies, the effect of an aperture may be only one of slightly rounding the edges of the input square wave train, while at high spatial frequencies, the displayed waveform will be sinusoidal as shown in Fig. D-12.

There are at least two plausible methods of calculating the SNR_D. One method is to assume the image runs from trough-to-trough of the periodic image, in which case the width will vary from 1/N to 2/N as the spatial frequency of the pattern increases. Alternatively, it can be considered to be the 50% width, which is equal to 1/N. The second method is suggested by O. Schade and will be used herein, but the first method will be discussed for the purpose of giving perspective.


Fig. D-12 - Waveform diagram for periodic waves. Dashed line is input waveform; solid line is output waveform

In the first case, the signal is the integral from trough-to-trough and will be the Area B in Fig. D-12. Observe that Area A = Area C, and Area B equals the output peak-to-peak value, g(x)p-p, times the duration 1/N. That is,

Integrated Signal = Area B = g(x)p-p / N.   (D-55)

g(x)p-p is often referred to as the square wave amplitude response R_sq(N) and is related to the MTF, or R₀(N), by

R_sq(N) = (4/π) Σₖ [(−1)^(k+1)/(2k − 1)] R₀[(2k − 1)N],  k = 1, 2, 3, …,   (D-56)

which is equal to (4/π)R₀(N) when N ≥ N_c/3. Here, N_c is the "cutoff" frequency where R₀(N) becomes zero. The noise integration distance will vary from 1/N at very low spatial frequencies to 2/N when N ≥ N_c/3. Actually the very low spatial frequencies are not usually of much interest, and 2/N is a useful approximation for the integration distance. The above method is of interest because it is a straightforward extension of the aperiodic theory and it relates directly to the square wave amplitude response (which is most often measured). The answers obtained will not be far different from those obtained using the preferred method below, particularly if the thresholds are measured and specified using the parameters of the square wave train.
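The collapse of the series in Eq. (D-56) to (4/π)R₀(N) for N ≥ N_c/3 is easy to verify numerically, since every harmonic beyond the first then lies past cutoff. The linear-falloff MTF below is purely illustrative:

```python
import math

def mtf(N, Nc=800.0):
    """Illustrative linear-falloff MTF, zero at cutoff Nc (an assumption)."""
    return max(0.0, 1.0 - N / Nc)

def r_sq(N, Nc=800.0, kmax=400):
    """Square wave amplitude response built from the MTF per Eq. (D-56)."""
    return (4.0 / math.pi) * sum(
        (-1.0)**(k + 1) * mtf((2*k - 1) * N, Nc) / (2*k - 1)
        for k in range(1, kmax + 1))

# For N >= Nc/3 all harmonics beyond the first lie past cutoff, so the
# series collapses to (4/pi) * R_o(N):
assert abs(r_sq(300.0) - (4.0 / math.pi) * mtf(300.0)) < 1e-12

# Below Nc/3 the higher harmonics subtract, so R_sq < (4/pi) * R_o:
assert r_sq(100.0) < (4.0 / math.pi) * mtf(100.0)
```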

The accepted method considers the signal and noise duration to remain constant and equal to a bar width, 1/N. The integrated signal then becomes the difference between the flux integrated over the area of the bar and that obtained by integrating an immediately adjacent area (background) of equal duration. This assumption leads directly to the concept of the square wave flux response function R_sf(N), which represents the peak-to-peak amplitude of an equivalent square wave function which would have the same integral difference as the actual signal. To derive R_sf(N), each term of the square wave transfer function is integrated for a duration 1/N. Since the square wave transfer function has only odd harmonics of the input spatial frequency, each odd harmonic adds twice the integral of 1/2 cycle to the total integral difference. The value of twice the integral of 1/2 cycle of frequency ν in lines is 2/π times the


peak-to-peak value times the half period (2ν)⁻¹. As a result, the square wave flux transfer function can be written as

R_sf(N) = (8/π²) Σₖ R₀[(2k − 1)N] / (2k − 1)²,  k = 1, 2, 3, …,   (D-57)

and R_sf(N) = (8/π²) R₀(N) for N ≥ N_c/3. The above function can then be used to predict the integrated signal compared to an equal neighboring area over a duration of 1/N. If the peak-to-peak signal is Δṅ, the perceived signal for the one-dimensional case with bars of constant height is given by

Signal = R_sf(N) Δṅ Tₑ / N.   (D-58)

The noise integration distance used in this method is constant and equal to N⁻¹. If the noise is white at the input to the integrator, which is the observer, we can refer to the aperiodic image discussion of an integrator and write the noise expression as

M.S. Noise = σ² Tₑ / N.   (D-59)

Consequently, the SNR_D for the simplest one-dimensional case becomes

SNR_D = R_sf(N) Δṅ Tₑ^1/2 / (N^1/2 σ).   (D-60)

Extending this expression to the two-dimensional case with bars of length y₀, we find that

SNR_D = R_sf(N) Δṅ (Tₑ y₀ / N)^1/2 / σ.   (D-61)

The bars are aperiodic in the y direction, and therefore we introduce the noise increase factor f_y as previously discussed. The bar length y₀ can be written as ε/N as described in connection with Eq. (D-51), and then

SNR_D = R_sf(N) Δṅ (Tₑ ε)^1/2 / [N f_y^1/2(N) σ],   (D-62)

and as can be seen, SNR_D is proportional to N⁻¹ at low spatial frequencies for bar charts when ε is a constant.

In the aperiodic case, the sensor apertures have no effect on the perceived signal, but the perceived noise does increase due to image blurring and the resulting increase in the observer's integration distance. Also, the apertures can band-limit and, therefore, reduce the effect of the noise. In the periodic image case, the image size does not increase due to apertures, so there is no noise increase factor f. Instead, the integrated signal is reduced rather than being maintained at a constant level. While the sensor apertures do not increase noise in the periodic case, the apertures can band-limit the noise, so there will be a factor similar to F.

When white noise is filtered by the sensor apertures as in Fig. D-4, the mean square noise is

M.S. Noise = σ² Tₑ ∫₀^∞ R₀²(ν) R_I²(ν/N) dν,   (D-63)


where ν is introduced as a dummy spatial frequency variable since the integration is keyed to the duration, which is now 1/N. There is no generality lost in assuming white noise as an input, since an equivalent system model can be constructed for any colored noise by adding dummy transfer functions to shape the noise spectrum. As before, R₀(ν) is the sensor MTF and R_I(ν/N) represents the integrator. When the aperture has very little effect on the noise, R₀(ν) → 1 and ∫ R_I²(ν/N) dν = 1/N, which agrees with Eq. (D-59). It is convenient to separate R_I(ν/N) into gain and bandwidth terms as was done originally, i.e.,

R_I(ν/N) = (1/N) [sin(πν/2N) / (πν/2N)].   (D-64)

Then,

M.S. Noise = (σ² Tₑ / N²) ∫₀^∞ R₀²(ν) [sin(πν/2N) / (πν/2N)]² dν,   (D-65)

and for the general two-dimensional case

SNR_D = R_sf(N) Δṅ (Tₑ ε)^1/2 / {N f_y^1/2(N) σ [(1/N) ∫₀^∞ R₀²(ν) (sin(πν/2N)/(πν/2N))² dν]^1/2},   (D-66)

in analogy with Eq. (D-62). As in the aperiodic case, it is convenient to introduce a bandwidth reduction term β(N) which relates the decrease in integrated noise that occurs due to filtering. For the unfiltered case, the noise bandwidth is N, and for the filtered cases it is as given by Eq. (D-63), and

β(N) = (1/N) ∫₀^∞ R₀²(ν) [sin(πν/2N) / (πν/2N)]² dν,   (D-67)

whence Eq. (D-66) becomes

SNR_D = R_sf(N) Δṅ (Tₑ ε)^1/2 / {N σ [f_y(N) β(N)]^1/2}.   (D-68)

A Gaussian approximation for β(N) is given by

β(N) = 1 / [1 + (N/Nₑ)²]^1/2,   (D-69)

and an alternative approximation which is often used (D-6) is based on the notion that the integrator limits the region of integration to N, so that

β(N) = (1/N) ∫₀^N R₀²(ν) dν.   (D-70)

In general, β(N) has a stronger effect than F(N), the bandwidth reduction factor for the aperiodic case, but even so, the effect is not large except at high line numbers.
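The exact and approximate bandwidth reduction terms of Eqs. (D-67) and (D-70) can be compared by direct numerical integration. The Gaussian MTF and its passband below are assumptions for illustration only:

```python
import math

def mtf(v, Ne=256.0):
    """Illustrative Gaussian MTF; the scale Ne is an assumed value."""
    return math.exp(-(v / Ne)**2)

def beta_exact(N, dv=0.5, vmax=4000.0):
    """Eq. (D-67): (1/N) * integral of Ro^2(v) * sinc^2(pi v / 2N) dv,
    evaluated with a simple midpoint rule."""
    total, v = 0.0, dv / 2.0
    while v < vmax:
        arg = math.pi * v / (2.0 * N)
        total += mtf(v)**2 * (math.sin(arg) / arg)**2 * dv
        v += dv
    return total / N

def beta_approx(N, dv=0.5):
    """Eq. (D-70): confine the noise integral to frequencies below N."""
    steps = int(N / dv)
    return sum(mtf((i + 0.5) * dv)**2 for i in range(steps)) * dv / N

# Filtering barely matters at low line numbers, strongly at high ones
print(round(beta_exact(50.0), 2), round(beta_exact(1000.0), 2))
```

Both forms stay near unity until N approaches the sensor passband, which is why the text notes the effect is small except at high line numbers.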

D6. PSYCHOPHYSICAL EXPERIMENTATION - PERIODIC IMAGES

The mathematical model developed herein has been extensively used to predict the threshold resolution vs input image irradiance level or input photocurrent characteristic of a wide variety of imaging sensors. As will be seen, the predicted performance closely correlates with


that measured. To show the prediction method, we start with Eqs. (D-49) and (D-51) of the aperiodic case, which give

SNR_D = (1/N) [2 Tₑ Δf_v ε / α]^1/2 SNR_vo.   (D-71)

Following the discussion of section D5, we modify the above equation for the periodic case as follows:

SNR_D = (R_sf(N)/N) [2 Tₑ Δf_v ε / α]^1/2 SNR_vo / [f_y(N) β(N)]^1/2.   (D-72)

If the bars are long with respect to their width, f_y and F_y can be neglected, and by use of the contrast definition C_in = Δi/i_h = (i_h − i_o)/i_h, we may write

SNR_D = (R_sf(N)/N) [2 Tₑ ε / (α β(N) e)]^1/2 C_in i_h^1/2 / (2 − C_in)^1/2,   (D-73)

where note is made of the fact that i_av = [(2 − C_in)/2] i_h, where i_h is the input photocurrent due to the "bright" bars in the bar pattern. Sensor gains and other sources of noise such as preamp noise can also be included in the above expression (D-7), but with low-light-level television cameras with very high prestorage gain, the above equation applies to a very good approximation.

In a particular LLLTV camera design, the square wave flux response R_sf(N) and noise bandwidth reduction terms were estimated to be as shown in Fig. D-13. These were obtained by measuring the square wave amplitude response, and by mathematically calculating the MTF, R_sf(N), and β(N). Note that two sets of curves are shown. The TV camera in use employs horizontal aperture correction, which is an electronic boost of midband spatial frequencies. The boost has a peak response of about 2.5 at 400 lines/picture height and is unity at 0 and 800 lines. The square wave amplitude response with boost is shown in Fig. D-14. The effect of the aperture correction is to both improve the sensor MTF and to decrease the noise bandwidth reduction, i.e., the noise increases. The preamp noise which precedes the aperture corrector is also increased by a considerable amount, but when the sensor is operated at maximum gain, it can still be neglected except at the lowest light levels. In Fig. D-15, we show the SNR_D calculation normalized to the contrast function noted for the uncompensated case. Also shown is the threshold SNR_DT for an input image of unity contrast. The decrease in threshold with increase in spatial frequency is an experimentally noted effect (D-4). The intersections of the SNR_D with the SNR_DT curves give the threshold resolution vs highlight input photocurrent characteristic shown in Fig. D-16 as the dot-dash curve.

The predicted curve with aperture correction is shown as the dashed line, and the measured curve with correction is shown as the solid line and is seen to agree quite closely with the predicted curve except at the lowest and the highest photocurrents. At low photocurrents, the departure is due to the neglect of preamp noise, while at high photocurrents, the difference is attributed to a lack of precision in measuring the sensor's square wave amplitude response at the higher line numbers.


Fig. D-13 - Square wave flux response and noise bandwidth reduction factor for the (U) Uncompensated and (C) Compensated 25/25/27 mm I-Zoom EBSICON

Fig. D-14 - Square wave amplitude response for the I-Zoom EBSICON with horizontal aperture compensation


Fig. D-15 - SNR_D vs spatial frequency for the uncompensated I-Zoom EBSICON. Also shown is the observer's threshold SNR_DT for C = 1.0. Dashed lines are 10 times lower in photocurrent than the solid curve directly above


Fig. D-16 - Threshold resolution vs highlight photocurrent for the I-Zoom EBSICON: (solid) measured, (dashed) calculated with aperture correction, and (dot-dash) calculated without aperture correction

REFERENCES

D-1. R.L. Sendall and F.A. Rosell, E/O Sensor Performance Analysis and Synthesis (TV/IR Comparison Study), Air Force Avionics Laboratory Report No. AFAL-TR-72-374, April 1973.

D-2. F.A. Rosell and R.H. Willson, Psychophysical Experiments and the Display-SNR Concept, Chapt. 5, Perception of Displayed Information (Edited by L.M. Biberman), Plenum Press (1973): 167-232.

D-3. J.W. Goodman, Introduction to Fourier Optics, Chapt. 1, McGraw-Hill (1968).

D-4. H.L. Snyder, Image Quality and Observer Performance, Chapt. 3, Perception of Displayed Information (Edited by L.M. Biberman), Plenum Press (1973): 87-118.

D-5. O.H. Schade, Sr., Optical and Photoelectric Analog of the Eye, J. Opt. Soc. Am. 46(9) (1956): 721-739.

D-6. O.H. Schade, Sr., The Resolving-Power Functions and Quantum Processes of Television Cameras, RCA Review 28(3) (1967): 460-535.

D-7. F.A. Rosell, E.L. Svensson, and R.H. Willson, Performance of the Intensified Electron-Bombarded Silicon Camera Tube in Low-Light-Level Television Systems, Applied Optics 11(5), May 1972: 1058-1067.


Appendix E

THE COLTMAN AND ANDERSON EXPERIMENT

F.A. Rosell

The Coltman and Anderson experiment (Ref. E-1) is often cited to prove that the eye uses up to 14 lines in the process of detecting a bar pattern. This conclusion is believed to be an error in interpretation, as will be discussed. It is our view that the eye integrates along the length of the bar and that only one bar is used by the observer in discerning the bar pattern. Coltman and Anderson derived Eq. (E-1) below through a thought argument combined with a psychophysical experiment in order to obtain a necessary constant. Their equation is

N_w = 615 (Δf_v)^1/2 SNR_rms,   (E-1)

where N_w is the threshold resolution in line pairs per picture width, Δf_v is the video bandwidth in MHz, and SNR_rms is the rms threshold video SNR.

In our formulation,

N = (1/SNR_DT) [2 Tₑ ε Δf_v / α]^1/2 SNR_vo,   (E-2)

where N is the threshold resolution in lines per picture height, Tₑ is the observer's integration time, ε is the bar length-to-width ratio, α is the picture aspect ratio, Δf_v is the video bandwidth in Hz, SNR_vo is the broad area image peak-to-peak signal to rms noise ratio, and SNR_DT is the threshold SNR_D. The purpose of the following analysis is to determine whether Eq. (E-2) matches the Coltman and Anderson equation, Eq. (E-1). To do this, it is necessary to convert units. Since Coltman and Anderson's bandwidth is in MHz rather than hertz,

Δf_v = Δf_v(MHz) × 10⁶,   (E-3)

and they use rms signal to rms noise rather than peak-to-peak signal to rms noise,

SNR_vo = √8 SNR_rms,   (E-4)

and then spatial frequency is in line pairs per picture width rather than lines per picture height, so that

N_w = N/2.   (E-5)

Inserting these changes into Eq. (E-2),

N_w = [8 × 10⁶ Tₑ ε Δf_v / α]^1/2 SNR_rms / (2^1/2 SNR_DT).   (E-6)

With α = 4/3, Tₑ = 0.1 s, ε = 14, and SNR_DT = 3.0,

N_w = 644 (Δf_v)^1/2 SNR_rms,   (E-7)

which is only 5% different from Coltman and Anderson's experimentally derived value. The length-to-width ratio of 14 used is an approximation, since the actual value used is unknown. However, it is not critical, as can be inferred from the discussion below.


Coltman and Anderson did not take the bar aspect ratio into account but only stipulated that bar length should be large relative to its width.

To show the impact of reducing the number of bars available to the observer, Coltman and Anderson devised the experiment shown in Fig. E-1. The displayed pattern "was left fixed and a series of cardboard apertures were employed to vary the number of lines seen by the observer." The mask was presumably of square aspect ratio. The results, as shown in Fig. E-1, "show that the observer probably uses no more than seven line pairs in making an identification. As the number which he is permitted to see is decreased, the signal required rises rapidly, being greater by a factor of four when only one line pair is presented."

Fig. E-1 - Video SNR required to detect pattern as a function of the number of line pairs visible through mask

Suppose that instead of the main effect of the cardboard aperture being one of reducing the number of bars, the main effect was really one of reducing the bar length, or aspect ratio. If this latter interpretation were correct, then the SNR_rms should decrease with increase in bar length as

SNR_rms = N_w / {615 [Δf_v ε / 14]^1/2}.   (E-8)

With a mask of square aspect ratio, ε = 2 times the number of line pairs seen through the mask. The ε/14 term under the radical in Eq. (E-8) results from an assumption that the SNR_rms thresholds were measured with bars of 14:1 aspect ratio. Using Eq. (E-8), we plot the threshold SNR_rms in Fig. E-2 as a dashed curve. Also shown on this curve are the experimentally measured data points from Fig. E-1. It appears clear that the change in aspect ratio hypothesis is a reasonable interpretation of the measured results, rather than the number of lines that are visible through the mask.
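The aspect-ratio reading of the masking experiment can be quantified with Eq. (E-8). Only ratios of thresholds matter for the comparison, so the N_w and bandwidth values below are placeholders:

```python
import math

def snr_threshold(line_pairs, Nw=500.0, df_mhz=10.0):
    """Eq. (E-8) with eps = 2 * (line pairs visible through a square
    mask); Nw and df_mhz are placeholder values."""
    eps = 2.0 * line_pairs
    return Nw / (615.0 * math.sqrt(df_mhz * eps / 14.0))

# Required SNR with one visible line pair vs. a wide 14-line-pair mask
ratio = snr_threshold(1) / snr_threshold(14)
print(round(ratio, 2))  # sqrt(14) = 3.74, near the reported "factor of four"
```

That the 1/√ε scaling alone reproduces the roughly fourfold rise Coltman and Anderson measured at one line pair is the quantitative core of the argument above.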

For the past two decades or more, threshold resolution measurements have been made by various television camera manufacturers. The number of bars used in the experiments is not reported, showing a lack of concern for bar number. On the other hand, bar aspect ratio is not reported either. As is evident from Fig. E-2, the bar aspect ratio is not particularly significant


Fig. E-2 - Video SNR required to detect bar patterns interpreted as being a function of bar aspect ratio

if the ratio is above about 10:1. In the particular case of TIS, the bar pattern to be used has been specified to consist of four bars, with each bar being of 7:1 aspect. In general, more bars would be desired in order to avoid transient end effects, but the temperature variant bar patterns are difficult to construct at best.

REFERENCE

E-1 Coltman, J.W., and Anderson. A.E., Noise Limitations to Resolving Power in ElectronicImaging, Proc. IRE, 48(5), 1960.


Appendix F

BASIC SNR AND DETECTIVITY RELATIONS

F.A. Rosell

A. INTRODUCTION

In this appendix, a number of basic relations will be derived and discussed. These include the relations between the display SNR_D and the video SNR_vo, the channel SNR_CO and the detector's detectivity, the detector's responsivity and detectivity, and various detector parameters such as cold shielding. This appendix is written in part to increase the reader's understanding of the basic sensor parameters, but certain equations, such as those relating detector responsivity and detectivity, are required when multiple noise sources must be considered.

B. RELATIONSHIP BETWEEN SNR_D AND SNR_vo FOR TIS

In Chapter IV.B, it was stated that Eq. (4-8), which is repeated below,

SNR_D = [2 Δf_v Tₑ a / A]^1/2 SNR_vo,   (F-1)

holds for thermal imaging and television systems if A is interpreted as the total effective image plane area regardless of the total area of the detectors within that area. To find this result, observe that the signal current from an array of n_x n_y detectors of size δ_x δ_y will be

i = ṅ e n_x n_y δ_x δ_y,   (F-2)

regardless of the total scan area. By analogy to Eqs. (4-3), (4-7), and (4-8),

SNR_D = [2 Δf_v Tₑ a / (n_x n_y δ_x δ_y)]^1/2 SNR_vo.   (F-3)

The quantity in the denominator represents the total sensitive area of the detectors. However, this equation does not take into account the fact that the image plane area A is larger than the total detector area. The scanning required to image the entire area results in a loss of spatial integration by a factor of exactly [n_x n_y δ_x δ_y / A]^1/2, because scene photons are "raining" on portions of the image plane where the detectors are not always present at any instant of time. By including this spatial integration factor in Eq. (F-3), Eq. (F-1) results. If desired, the above result can be interpreted as a loss of temporal integration within the sensor (but not in the eye, which is unaware of the source of the displayed imagery).
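The bookkeeping between Eqs. (F-1) and (F-3) can be checked numerically: multiplying Eq. (F-3) by the square root of the detector fill factor reproduces Eq. (F-1). The array geometry below is hypothetical:

```python
import math

def snr_f3(dfv, Te, a, n, dx, dy, snr_vo):
    """Eq. (F-3): spatial integration over the detectors' sensitive area."""
    return math.sqrt(2.0 * dfv * Te * a / (n * dx * dy)) * snr_vo

def snr_f1(dfv, Te, a, A, snr_vo):
    """Eq. (F-1): A is the full effective image plane area."""
    return math.sqrt(2.0 * dfv * Te * a / A) * snr_vo

# Scanning costs exactly the square root of the fill factor n*dx*dy/A
n, dx, dy, A = 720, 5e-3, 5e-3, 1.0      # hypothetical array; areas in cm^2
fill = n * dx * dy / A
lhs = snr_f3(1e5, 0.1, 1e-2, n, dx, dy, 10.0) * math.sqrt(fill)
rhs = snr_f1(1e5, 0.1, 1e-2, A, 10.0)
assert abs(lhs - rhs) < 1e-9
```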

C. DETECTOR CHANNEL SNR_CO

The incremental signal current out of any given detector channel (which may contain n_f TDI elements) is equal to


Δi = S n_f δ_x δ_y ΔE η_sc,   (F-4)

where S is the specific responsivity of the detector(s), n_f is the number of detectors in the scan direction, δ_x and δ_y are the detector dimensions, and η_sc is the scan efficiency defined as the ratio (T_f − T_R)/T_f, where T_f is the frame time and T_R is the time during each frame that the detectors are not viewing the scene. The rms noise current, Iₙ, is equal to

Iₙ = (2 e Δf_c i_c)^1/2,   (F-5)

where e is the charge of an electron, Δf_c is the bandwidth of each detector channel, and i_c, for detectors sensitive to the longer infrared bandwidths, is primarily photoelectron noise and is usually of near constant magnitude. Numerically,

i_c = S n_f δ_x δ_y η_sc E_B,   (F-6)

where E_B is taken to be the average value of the background irradiance which the detector views and to which it is sensitive. By use of Eqs. (F-4), (F-5), and (F-6), the channel signal-to-noise ratio for broad area objects which are unattenuated by sensor apertures is given by†

SNR_CO = S n_f δ_x δ_y η_sc ΔE / [2 e S n_f δ_x δ_y E_B η_sc Δf_c]^1/2.   (F-7)

If we set SNR_CO = 1 and solve for ΔE, the value of ΔE becomes ΔE_min, the minimum detectable irradiance, which is equal to

ΔE_min = [2 e E_B Δf_c / (S n_f δ_x δ_y η_sc)]^1/2.   (F-8)

When the detector area, channel bandwidth, and scan efficiency are set equal to 1 cm², 1 Hz, and 1, respectively, ΔE_min is defined as the detector's noise equivalent power, or NEP*. Thus,

NEP* = [2 e E_B / S]^1/2.   (F-9)

Now, Eq. (F-7) may be written as

SNR_CO = (n_f δ_x δ_y η_sc)^1/2 ΔE / [(Δf_c)^1/2 NEP*].   (F-10)
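Equations (F-7) and (F-10) are algebraically identical, which a short sketch can confirm; all parameter values below are hypothetical:

```python
import math

E_CHG = 1.602e-19   # electron charge (C)

def snr_direct(S, dE, EB, n_f, dx, dy, eta_sc, dfc):
    """Eq. (F-7) evaluated directly from signal and noise currents."""
    di = S * n_f * dx * dy * eta_sc * dE
    ic = S * n_f * dx * dy * eta_sc * EB
    return di / math.sqrt(2.0 * E_CHG * dfc * ic)

def snr_via_nep(S, dE, EB, n_f, dx, dy, eta_sc, dfc):
    """Eq. (F-10): the same quantity routed through NEP* of Eq. (F-9)."""
    nep = math.sqrt(2.0 * E_CHG * EB / S)
    return math.sqrt(n_f * dx * dy * eta_sc) * dE / (math.sqrt(dfc) * nep)

ARGS = (2.0, 1e-9, 1e-4, 2, 5e-3, 5e-3, 0.8, 5e4)   # hypothetical values
assert abs(snr_direct(*ARGS) - snr_via_nep(*ARGS)) < 1e-6
```

The responsivity S cancels only partially: it survives inside NEP*, which is why NEP* (and D*) is a signal-to-noise figure rather than a pure sensitivity, as section D below emphasizes.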

Detector sensitivity is usually specified in terms of its specific detectivity, which is the inverse of NEP*, or

D* = 1/NEP* = [S / (2 e E_B)]^1/2,   (F-11)

*The specific responsivity is defined as S = ∫S_λ E_λ dλ / ∫E_λ dλ, where S_λ is the spectral responsivity of the detector (A/W·μm) and E_λ is the detector irradiance (W/cm²·μm) due to a specific source such as a 300 K blackbody. Detectivity, to be defined, is most often given as an integrated value over wavelength for a specific source, and to be precise, it should be designated as a specific detectivity when so given.
†Again, the subscript O used in connection with SNR_CO is used to indicate a broad area image which is unaffected by sensor apertures, as in the case of the SNR_vo discussed in Chapter IV.


with which, Eq. (F-10) becomes

SNR_CO = (n_f δ_x δ_y η_sc)^1/2 D* ΔE / (Δf_c)^1/2.   (F-12)

The incremental irradiance level ΔE can be related to the incremental scene radiance ΔL by the relation

ΔE = π η₀ ΔL / (4 f#²),   (F-13)

where η₀ is the light gathering efficiency of the objective lens and f# is its focal ratio. For a Lambertian, or diffusely emitting, scene, we can write

ΔL = (1/π) ∫ [∂M_λ(T)/∂T] dλ ΔT,   (F-14)

which relates the incremental scene radiance to the incremental scene temperature difference ΔT. M_λ(T) is the spectral radiant exitance of the scene. For a blackbody,

M_λ(T) = C₁ × 10⁸ / {λ⁵ [e^(C₂/λT) − 1]},   (F-15)

where C₁ = 3.7412 × 10⁻⁴ W·μm² and C₂ = 14,388 μm·K. For λT < 6200 μm·K, the error will be less than 10% if we assume that

M_λ(T) ≈ C₁ × 10⁸ λ⁻⁵ e^(−C₂/λT).   (F-16)

For λT < 3100 μm·K, the error in using Eq. (F-16) will be less than 1%. Integrating M_λ with respect to wavelength, we have

∫₀^λ M_λ(T) dλ = [C₁ × 10⁸ e^(−C₂/λT) / λ⁴] [λT/C₂ + 3(λT)²/C₂² + 6(λT)³/C₂³ + 6(λT)⁴/C₂⁴],   (F-17)

and by inserting Eq. (F-17) in Eq. (4-28) and by performing the indicated differentiation we obtain

ΔL = [C₁ × 10⁸ T³ e^{−C₂/λT} / (π C₂⁴ (λT)⁴)] [ C₂⁴ + 4C₂³(λT) + 12C₂²(λT)² + 24C₂(λT)³ + 24(λT)⁴ ] ΔT

   ≡ K_W ΔT.   (F-18)

Calculated values for K_W are provided in Table 4-1 of Chapter IV.
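As a numerical check on Eq. (F-18), the closed-form expression may be evaluated directly. The sketch below (not from the report; the band edges, temperature, and function names are illustrative) computes K_W in the Wien approximation, taking the band value as the difference of the cutoff expression at the two band edges.

```python
import math

C1 = 3.7412e-4   # first radiation constant, W*um^2 (as in Eq. F-15)
C2 = 1.4388e4    # second radiation constant, um*K

def kw_cutoff(lam_um, T):
    """Eq. (F-18) evaluated at a single cutoff wavelength lam_um (um):
    (1/pi) d/dT of the exitance integrated from 0 to lam_um,
    in W/(cm^2*sr*K), Wien approximation."""
    x = lam_um * T   # lambda*T, um*K
    poly = C2**4 + 4*C2**3*x + 12*C2**2*x**2 + 24*C2*x**3 + 24*x**4
    return C1 * 1e8 * T**3 * math.exp(-C2 / x) * poly / (math.pi * C2**4 * x**4)

def kw_band(lam1, lam2, T):
    """K_W for the spectral band lam1..lam2 (um) at scene temperature T (K)."""
    return kw_cutoff(lam2, T) - kw_cutoff(lam1, T)

print(kw_band(8.0, 14.0, 300.0))   # on the order of 8e-5 W/(cm^2*sr*K)
```

For an 8-14 μm band at a 300 K scene temperature this gives roughly 8 × 10⁻⁵ W/(cm²·sr·K), which is the expected order of magnitude for the LWIR band.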

By combining Eqs. (F-12), (F-13), and (F-18), we have

SNR_co = π η_0 (η_sc δ_x δ_y)^{1/2} D* K_W ΔT / [4f² (Δf)^{1/2}],   (F-19)

which is the Eq. (4-9) desired when a_d is substituted for δ_x δ_y.
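Equation (F-19) is straightforward to evaluate once K_W is known. The sketch below plugs in assumed, purely illustrative values (f/2 optics, 50 μm square detectors, D* = 2 × 10¹⁰ cm·Hz^{1/2}/W, K_W = 8 × 10⁻⁵ W/cm²·sr·K); none of these numbers come from the report.

```python
import math

def snr_co(eta0, f_no, eta_sc, dx_cm, dy_cm, d_star, k_w, delta_T, delta_f):
    """Broad-area SNR of Eq. (F-19):
    SNR_co = pi*eta0*(eta_sc*dx*dy)^(1/2)*D*'*K_W*dT / (4*f^2*df^(1/2))."""
    return (math.pi * eta0 * math.sqrt(eta_sc * dx_cm * dy_cm)
            * d_star * k_w * delta_T) / (4 * f_no**2 * math.sqrt(delta_f))

# Illustrative (assumed) values: a 0.1 K temperature difference,
# 100 kHz channel bandwidth
snr = snr_co(eta0=0.7, f_no=2.0, eta_sc=0.75, dx_cm=50e-4, dy_cm=50e-4,
             d_star=2e10, k_w=8e-5, delta_T=0.1, delta_f=1e5)
print(round(snr, 2))   # about 0.30
```

Note that SNR_co scales linearly with D*, ΔT, and η_0, and inversely with the square of the focal ratio.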


APPENDIX F

D. DETECTIVITY

The fundamental measure of infrared detector sensitivity is detectivity, as previously discussed. Detectivity is actually a signal-to-noise measure rather than a pure sensitivity measure as in the case of responsivity, discussed in connection with Eq. (F-4). When normalized to unit area and bandwidth, detectivity is primarily a function of the quantum efficiency of the detector material and of the noise, which may be generated either in the primary photon-to-electron conversion process, in the mechanics of the semiconductor process, or in external circuits such as the preamplifier. In the simplest case, the detector is limited primarily by fluctuation noise due to its photoconversion of background photons. When this is the case, the detector is said to be background limited and the detectivity is then sometimes written as D*_BLIP.

The spectral detectivity of a BLIP photoconductive detector is given by

D*_λ = (λ/2hc) (η/p_B)^{1/2}   (photoconductive),   (F-20)

where h is Planck's constant, c is the velocity of light, λ is the radiation wavelength, η is the detector's spectral quantum efficiency, and p_B is the rate at which photons fall on the detector. For a photovoltaic detector, the detectivity is greater by √2, so that

D*_λ = (λ/hc) [η/(2p_B)]^{1/2}   (photovoltaic).   (F-21)

Observe that the quantum efficiency is given by

η_λ = S_λ hc / (eλ),   (F-22)

where e is the electronic charge, and the photon arrival rate due to background is equal to

p_B = ∫ E_Bλ (λ/hc) dλ.   (F-23)

By insertion of Eqs. (F-22) and (F-23) in Eq. (F-21), we see that

D*_λ = [ S_λ λ / (2ehc p_B) ]^{1/2}   (photovoltaic),   (F-24)

which is analogous to the inverse of Eq. (F-9).

The radiation to which a BLIP detector responds may come in part from outside the field of view, such as from the objective lens housing. These photons from outside the field of view introduce an additive fluctuation noise in the detector, but this noise can be reduced considerably by cold shielding the detector. With optimum cold shielding, the field of view of a detector will include only the objective lens, and the number of background photons detected will be proportional to sin²θ, where sin θ ≈ D_o/2F_L: half the lens diameter, D_o, divided by its focal length, F_L. Since the lens focal ratio is f = F_L/D_o, the number of background photons detected is proportional to (1/2f)², and consequently a detector with a cold shield limiting the field of view to a solid angle of Ω sr will have a detectivity equal to

D*_λ(Ω) = 2f D*_λ(2π).   (F-25)


where D*_λ(2π) implies the detectivity of a detector viewing a solid angle of background of 2π sr. The detectivity of a perfect detector viewing 2π sr of background is defined as D**(2π), and in this case,

D*_λ(Ω) = 2f η_λ^{1/2} D**(2π),   (F-26)

where η_λ is the detector's quantum efficiency. If the cold shielding is not optimum, it is common practice to define a cold shield efficiency η_cs which is equal to

η_cs = Ω/Ω_a,   (F-27)

where Ω is the solid angle which the detector would view with perfect cold shielding and Ω_a is the actual solid angle being viewed. With Eq. (F-27), Eq. (F-25) becomes

D*_λ(Ω_a) = 2f η_cs^{1/2} D*_λ(2π).   (F-28)

In the case where system-generated noise is a factor, an effective D* which includes the added noise is sometimes reported. As an example, consider the case where preamp noise is comparable to or larger than the photoelectron noise. Let D*(f) be the detector detectivity as a function of frequency and note that

NEP(f) = A_d^{1/2} / D*(f),   (F-29)

where A_d is the area of the detector. The noise voltage can be calculated from

V_d = [A_d^{1/2} / D*(f)] R(f),   (F-30)

where R(f) is the detector responsivity in volts/watt. Let the preamp noise spectral density be white and referred to the preamp input (or detector output); the noise voltage is then

V_p = F_N^{1/2} V_f,   (F-31)

where F_N is the preamp noise figure and V_f² is the mean-square noise spectral density. The overall system mean-square noise voltage V_s²(f) is then

V_s²(f) = A_d R²(f) / [D*(f)]² + F_N V_f²,   (F-32)

and since

NEP_s(f) = V_s(f) / R(f),   (F-33)

it is found that

[NEP_s(f)]² = A_d / [D*(f)]² + F_N V_f² / R²(f),   (F-34)

and by use of Eq. (F-29),

D*_s(f) = D*(f) [1 + F_N V_f² [D*(f)]² / (A_d R²(f))]^{−1/2}.   (F-35)


The noise terms must of course be appropriately integrated to find D*_s, the effective overall "system" detectivity. The above example is primarily for the purpose of illustrating a method of handling multiple noise sources.

Finally, it is noted that D* is usually quoted in terms of a specific test-source background, such as a 300 K background, i.e.,

D*(300 K) = ∫ D*_λ E_λ(300 K) dλ / ∫ E_λ(300 K) dλ,   (F-36)

where E_λ(300 K) is the detector irradiance due to a 300 K blackbody.


Appendix G

EFFECTS OF IMAGE SAMPLING

F.A. Rosell

Commercial television broadcast systems form images by a line raster process and thus fall in the category of sampled data systems. The effects of sampling in the television process have been quite thoroughly analyzed and discussed by Schade (Ref. G-1). While the sampling process can have a noticeable effect on televised pictures, the effects are not as objectionable as they are in either FLIRs or some of the newer solid-state imagers, which may be sampled in two directions instead of one, and because the amount of prefiltering of input spatial frequencies is generally less for systems using discrete detectors. It is noted that the so-called bad effects of sampling, i.e., moiré, spurious responses, and information loss, are felt by some to be overemphasized in military and industrial applications. However, this view has not been quantitatively supported, and it can be shown that image quality improvements can be obtained by proper pre- and postfiltering of the sampled images in many cases.

A high sensor MTF is generally desired. Detector MTF can be improved by reducing the detector dimensions, but if the detectors are made smaller than the interline spacing, for example, losses in information and in sensitivity will result. Further, it is clearly evident that continuity in the displayed image requires that the display aperture fill the interline spaces. The inference which can be drawn is that picture quality and overall resolution may be improved by purposely degrading certain system component apertures.

It is common experience that the majority of FLIR and television monitors have a pronounced visible line structure. Images with line structure may actually appear to be sharper while, in fact, more image detail will be perceived when the line structure is removed by aperture overlap. An alternative, most commonly employed method of eliminating line structure is simply to increase the observer-to-display viewing distance, whereupon the blur circle of the eye smears out the line structure to the point of bare perceptibility.

The ideal sampled data system is shown schematically in Fig. G-1. An ideal low-pass filter prefilters the input image function f₁(y) prior to sampling. The high-frequency cutoff should be numerically equal to the sampling frequency N_s, and thus the image to be sampled will contain no frequencies higher than N_s. The sampling frequency results in a line spacing Δy_s = 1/N_s. Each sample duration should be infinitesimally short so that only the signal amplitude is preserved. The output of the sampler is then passed through an ideal low-pass filter which will then perfectly reconstruct the function f₁(y) to within a constant multiplier.

As we noted, the highest spatial frequency which can be reproduced is equal to half the sampling frequency. Stated differently, the sample frequency must be equal to or greater than twice the highest spatial frequency of interest, and this means that at least two samples per cycle (or one sample per half-cycle) must be taken. This may be visualized by reference to Fig. G-2. If every other sample were eliminated, the output of the sampler would be indistinguishable from that which one would obtain from a uniform background. A frequency corresponding to half the sampling frequency is sometimes called the Nyquist frequency limit.
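The two-samples-per-cycle requirement can be demonstrated numerically: a cosine at 0.75 of the sampling frequency produces exactly the same sample values as its alias at 0.25 of the sampling frequency. A minimal sketch (the frequencies and sample count are arbitrary, chosen only for illustration):

```python
import math

fs = 64.0            # sampling frequency, arbitrary units
samples = range(32)

# A cosine above the Nyquist limit (0.75*fs) and its alias below it (0.25*fs)
above = [math.cos(2 * math.pi * 0.75 * fs * n / fs) for n in samples]
alias = [math.cos(2 * math.pi * 0.25 * fs * n / fs) for n in samples]

# The two sample sets are indistinguishable, since
# cos(2*pi*(fs - f)*n/fs) = cos(2*pi*f*n/fs) at integer n
print(max(abs(a - b) for a, b in zip(above, alias)))   # ~0 (round-off only)
```

Once the samples are taken, no postfilter can tell the two inputs apart; only prefiltering before the sampler prevents the ambiguity.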


APPENDIX G

Fig. G-1 — Block diagram of the ideal sampling and reconstruction process

Fig. G-2 — The highest spatial frequency of interest must be sampled at least twice per cycle

The ideal sampled data system of Fig. G-1 is typified by the electronically multiplexed system of Fig. 4-4. In the basic FLIR configuration of Fig. 4-8, the detectors themselves perform the sampling process. In Fig. G-3, we show three line-array detector configurations. In all cases, the detector widths are identical. Also, the Nyquist frequency is the same for all three because the scan line pitch (or center-to-center detector spacing), P, is the same. However, the sensitivities, which are proportional to the square root of detector area, are clearly different, and the MTFs in the vertical direction are widely different, as shown in Fig. G-4. With δ_y = P/2, the MTF is 0.90 at the Nyquist limit, while with δ_y = P and 2P, the corresponding MTFs are 0.64 and 0, respectively.
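The quoted vertical-direction MTF values follow from the sinc-shaped aperture response of a rectangular detector of height δ_y on a pitch P. A small sketch (the function name is ours) reproduces the 0.90, 0.64, and 0 figures at the Nyquist limit:

```python
import math

def detector_mtf(n_over_nyq, dy_over_p):
    """Geometrical MTF of a rectangular detector of height dy on pitch P,
    at spatial frequency N given as a fraction of the Nyquist limit:
    sinc of (pi/2)*(N/Nyq)*(dy/P)."""
    x = (math.pi / 2) * n_over_nyq * dy_over_p
    return 1.0 if x == 0 else math.sin(x) / x

# At the Nyquist limit (N/Nyq = 1) for dy = P/2, P, and 2P:
for dy in (0.5, 1.0, 2.0):
    print(dy, round(detector_mtf(1.0, dy), 2))   # 0.9, 0.64, 0.0
```

The δ_y = 2P detector nulls exactly at the Nyquist limit because a full cycle of the pattern then falls within one detector.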

The combination of the lens and the detectors forms the prefilter of the sampled data system. The δ_y = P/2 case provides the least prefiltering and will be the most subject to spurious responses and aliasing. Furthermore, scene information will be lost, since only half of the area of the vertical field is scanned. The eye can of course perform an image reconstruction process provided that details such as horizontal edges are not completely lost.

The detector MTFs shown in Fig. G-4 represent the maximum MTF obtainable in the sampling direction. Actually, the word MTF is not really appropriate after a sampling process has taken place, since sampling is not a linear operation, but we shall ignore this distinction for the present discussion purposes. By maximum MTF obtainable, it is meant that this "MTF" is obtained only when the sine wave pattern is aligned for maximum response. At the Nyquist frequency, this occurs when the positive swing of a half cycle of the pattern falls exactly on one detector and the negative half cycle falls exactly on the adjacent detector. On the other hand, if this same pattern is shifted one-quarter cycle or 90°, the measured "MTF" will be zero. In general, the "MTF" is given by

Fig. G-3 — Three line-array detector configurations with the same Nyquist spatial frequency limit and the same resolution in the horizontal scan direction

Fig. G-4 — Geometrical MTF of detector elements as a function of spatial frequency normalized to the Nyquist limit for detectors of size equal to 2P, P, and P/2

"MTF" = R_od(N) cos θ.   (G-1)

The angle θ will have a maximum value which is different for each spatial frequency. At the Nyquist limit, θ_max = 90°, and at half the Nyquist limit, θ_max = 45°. In Fig. G-5, we plot Eq. (4-58) as modified by Eq. (G-1) as Curve 1 with θ = 0, as Curve 2 with θ = θ_max/2, and as Curve 4 with θ = θ_max. If any value of θ between θ = 0 and θ = θ_max/2 is equally probable, then the apparent "MTF" will be greater than that of Curve 3 half the time and less the other half of the time. It has also been suggested that the "average" detector "MTF" in a cross-scan direction be given by

"MTF" = R_od(N) · sin(πN/2N_yq) / (πN/2N_yq),   (G-2)

which is plotted as Curve 3 and is seen to be only slightly less than that given by Curve 2. Because of its simplicity, this latter approach is suggested as a reasonable approximation.

Fig. G-5 — Detector "MTF" in the cross-scan direction for various phase angles and for the detector MTF squared

In the particular case where the detector array performs the sampling function, the input signal f₁(y) may be considered to be prefiltered by the detector to form a function f₂(y), where

f₂(y) = r_od(y) * f₁(y),   (G-3)

in which r_od(y) is the impulse response of the detector, and the sampling function is

f₃(y) = Σ_n δ(y − nΔy_s),   (G-4)

where the δ are unit impulses and Δy_s is the sampling period. The Fourier transform of f₃(y) is a set of unit-amplitude impulses in the spatial frequency domain which repeat at spatial frequencies 2nN_yq. The output of the sampler is f₄(y) = f₂(y) · f₃(y), and it will be found that the amplitude spectrum F(N) will consist of a band about zero frequency and identical bands about ±2nN_yq, as shown in Fig. G-6 (a) and (b). In case (a), it is assumed that the input signal f₁(y) has been bandpass limited to N_yq and thus none of the spectra overlap. In case (b), there is considerable overlap due to the sidebands, which will result in spurious responses and aliasing.

The detectors are designated the analyzing aperture by Schade (Ref. G-1), while the electron beam of the display, along with the phosphor, is designated the synthesizing aperture. The function of the display is of course to reconstruct the image, but it also serves a postfiltering function. Consider a CRT display operated in a conventional manner. A Fourier analysis of the luminance distribution L_v in the vertical direction (across the scan) shows that L_v contains the dc level L and a series of harmonic cosine waves given by

L_v = L [1 + 2 Σ_k R_od(2kN_r) cos(2kπN_r y)],   k = 1, 2, 3, ...   (G-5)

Fig. G-6 — Repetitive amplitude spectra due to sampling. Case (a): spectra bandpass limited to the Nyquist frequency; Case (b): overlapping spectra due to inadequate prefiltering

where |R_od(2kN_r)| represents the "MTF" of the display at integral multiples of the raster frequency N_r. This luminance distribution exists without any added signal and thus is independent of the sensor MTF. The cosine waves are in phase on the centers of the raster lines when the display's aperture has axial symmetry and its response decreases asymptotically to zero. A line structure in the vertical direction, without signal being present, is undesirable. Perfect continuity, i.e., a flat field, is restored when none of the carrier wave components are reproduced by the display's aperture. For this to be true, the display MTF must be equal to zero at line numbers above 2N_r. A substantially flat field condition is obtained when

R_od(2N_r) ≤ 0.005,   (G-6)

which, as can be seen from Eq. (G-5), gives a ripple amplitude of 1% and a peak-to-peak luminance amplitude of 2%. Most television monitors show a pronounced line structure. A flat field condition can still be achieved by moving away from the screen until the lines can no longer be resolved by the observer's eye.

Suppose next that we modulate the display with a periodic signal of frequency N_m in the vertical direction. The display luminance terms will then be given by

L_v = L [1 + 2 Σ_k R_od(2kN_r) cos(2kπN_r y)]   (carrier)

    + L_m R_oF(N_m) R_od(N_m) cos(πN_m y + θ)   (signal)

    + L_m R_oF(N_m) Σ_k R_od(2kN_r + N_m) cos[π(2kN_r + N_m)y + θ]   (sum)

    + L_m R_oF(N_m) Σ_k R_od(2kN_r − N_m) cos[π(2kN_r − N_m)y + θ],   (difference)

    k = 1, 2, 3, 4, ...   (G-7)

where L is the average luminance of the test signal waveform, L_m is the crest luminance of the signal waveform, and θ is the displacement between the peak of the L_m waveform and the raster lines.


The first term is the carrier waveform discussed in connection with Eq. (G-5), while the second, the signal term, is recognized as nearly equal to the normal term expected without a raster process; i.e., the displayed image waveform amplitude is proportional to the amplitude of the test image times the product of the camera and display MTFs. Note, however, that the output amplitude can be modified by the phase angle between the input waveform and the raster, which is a consequence of the sampling process.

The third and fourth terms are sum and difference modulation products of the carrier frequencies with the test object signal. In communications theory, these sum and difference products are called sidebands, while in imaging practice their effects are more often referred to as aliasing. Patterns generated by the sum and difference frequencies when periodic patterns are being viewed are called moiré patterns. The difference term is more likely to present problems. Consider the case where N_m = 1.5N_r and k = 1. The crest amplitude of the spurious response is then equal to the crest amplitude of the original test image multiplied by the MTF of the sensor at 1.5N_r and the MTF of the display at 0.5N_r. It is clearly evident that with perfect prefiltering no aliasing results, but a perfect postfilter would not attenuate the alias frequency at all. Postfiltering cannot eliminate the effects of imperfect prefiltering with respect to the first difference, low-frequency terms.
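The sum- and difference-frequency bookkeeping of Eq. (G-7) is easy to mechanize. The sketch below lists the sideband frequencies and, for the N_m = 1.5N_r, k = 1 case above, evaluates the spurious-response amplitude; the Gaussian MTF shapes are assumed for illustration and are not the report's measured curves.

```python
import math

def sidebands(n_m, n_r, k_max=3):
    """Sum and difference frequencies (2k*Nr + Nm, |2k*Nr - Nm|) of Eq. (G-7)."""
    return [(2*k*n_r + n_m, abs(2*k*n_r - n_m)) for k in range(1, k_max + 1)]

# Assumed illustrative MTF shapes for the sensor prefilter R_oF(N)
# and the display postfilter R_od(N):
sensor_mtf = lambda n: math.exp(-(n / 2.0)**2)
display_mtf = lambda n: math.exp(-(n / 1.2)**2)

n_r, n_m = 1.0, 1.5                    # frequencies in units of N_r
s, d = sidebands(n_m, n_r)[0]          # k = 1 sum and difference frequencies
print(s, d)                            # 3.5 0.5

# Crest amplitude of the k = 1 difference term: sensor MTF at N_m
# times display MTF at the alias frequency
spurious = sensor_mtf(n_m) * display_mtf(d)
print(round(spurious, 3))
```

Because the alias lands at 0.5N_r, well inside the display passband, only better prefiltering (a lower sensor MTF at 1.5N_r) reduces it.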

The various terms of Eq. (G-7) are depicted in graphical form in Fig. G-7. The sensor MTF, or R_oF(N), is the product of all the MTFs in the y direction which precede the raster process, including the scanning detectors, which are part of the prefiltering process. The display MTF, or R_od(N), includes all of the MTFs following the raster process, including R_od1(N) and R_od2(N), but R_od2(N) has essentially zero response for the difference frequency D₁.

The higher MTF of R_od1(N) reproduces the carrier, C, with a modulation amplitude of 36%, which corresponds to a 72% peak-to-peak luminance variation on a uniform field. The R_od2(N) provides a nearly flat field, but the MTF is reduced at line numbers less than N_r. As a compromise, a display MTF of 0.025 at 2N_r is often selected. This gives a peak-to-peak luminance variation of 10% and still provides a reasonable MTF at the lower spatial frequencies.

In Fig. G-8, the MTFs for five sensors are shown together with the display MTF referenced to the first carrier frequency (the display MTF repeats at each carrier frequency). The second difference terms (D₂) can be neglected. The MTF diagram is used to evaluate the R_oF(N)·R_od(N) products, which are shown in Fig. G-8. Note that the zero frequency of the cross products occurs when N_m = 2N_r and that the spurious modulation frequencies are higher than N_m for N_m < N_r. A spurious response of R_os(N) equal to 15% is considered to be an upper limit for good system design according to Schade. The value is a worst case and occurs occasionally for periodic image inputs of 100% contrast. A maximum spurious response of 15% is obtained with the R_oF3(N) curve in combination with the assumed display. The sensors with higher MTFs would exceed the 15% criterion, and thus it follows that the number of raster lines should be equal to or greater than

N_r ≥ N_m(0.4),   (4-89)

where N_m(0.4) is the spatial frequency where the sensor MTF is equal to 0.4. An overall system response R_oF(N)·R_od(N) = 0.15 is then obtained with a flat-field display having a sine wave response of 0.38 at the theoretical limit N_m = N_r.

Fig. G-7 — Graphical representation of the carrier and sideband terms of Eq. (G-7)

Fig. G-8 — Evaluation of spurious cross products for a display MTF giving a flat field

REFERENCE

G-1. Schade, Otto H., Sr., "Image Reproduction by a Line Raster Process," Chapter 6 in Perception of Displayed Information, Plenum Press, 1973.


Appendix H

PSYCHOPHYSICAL EXPERIMENTATION

F.A. Rosell

A. INTRODUCTION

A number of psychophysical experiments performed for the purpose of determining observer thresholds are summarized in this appendix. The test objects include both rectangles and bar patterns. The observer effects studied include the limits to the eye's ability to spatially integrate, the effects of observer-to-display viewing distance, and retinal fluctuation noise considerations.

B. THRESHOLD SIGNAL-TO-NOISE RATIOS FOR APERIODIC IMAGES

Aperiodic images are defined here as rectangles imaged against a uniform background. To perform psychophysical experimentation to determine observer thresholds, use is made of Eq. (4-8), which is repeated below:

SNR_D = [2 t_e Δf_v (a/A)]^{1/2} SNR_V,   (H-1)

where t_e is the eye integration time, Δf_v is the video bandwidth, a is the image area, and A is the total image-plane area. This equation holds for band-limited white noise and for the case where the observer is not limited by image magnification or retinal fluctuation noise. The experimental setup used is shown in Fig. H-1. In the experiment, a single pulse of rectangular waveshape but variable duration is electronically generated and mixed with band-limited white noise of Gaussian distribution. The spatial image displayed on the screen is a rectangle which can appear in any of four quadrants (but always in the same position in the quadrant selected). The observer is asked to specify the quadrant in which the image is located as the video signal-to-noise ratio and the image locations are randomly varied. The observer is asked to specify the image location whether he could see it or not. The probability of detection, determined in this manner, was then corrected for chance. The observation times were usually 10 seconds. For a complete description and discussion of this experiment and the other experiments discussed below, the reader is referred to Ref. 5-1.

For the rectangle experiments, it was found to be convenient to define the image size in terms of the dimensions of a single scan line. Thus, we define the quantities A_x and A_y as

A_x = 490 Δx / Y,   A_y = 490 Δy / Y,

where Δx·Δy equals the image area a, 490 is the number of active lines in a conventional 525-line television display, and Y is the image-plane height. Since the image-plane area A can be written as αY², where α is the width-to-height (aspect) ratio,

a/A = A_x A_y / (490² α),   (H-2)

and Eq. (H-1) may be written as

SNR_D = (1/490) [2 t_e A_x A_y Δf_v / α]^{1/2} SNR_V.   (H-3)


APPENDIX H

Fig. H-1 — The display signal-to-noise ratio experiment. Display background luminance was 0.2-0.3 or 1 ft-L. TV camera operated at 30 frames/s with a 525-line raster scan.

This equation is used to calculate the SNR_D for the rectangular images used in the experimental program reported below. The numerical values used for t_e and α were 0.1 s and 4/3, respectively.
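With these constants, Eq. (H-3) can be evaluated directly. In the sketch below the video noise bandwidth Δf_v and the example rectangle are assumed illustrative values, not the ones used in the experiments.

```python
import math

def snr_d(a_x, a_y, snr_v, df_v, t_e=0.1, alpha=4.0/3.0):
    """Display SNR of Eq. (H-3):
    SNR_D = (1/490) * sqrt(2 * t_e * Ax * Ay * df_v / alpha) * SNR_V,
    with Ax, Ay the image dimensions in scan lines."""
    return math.sqrt(2 * t_e * a_x * a_y * df_v / alpha) * snr_v / 490.0

# A 4 x 64 scan-line rectangle at video SNR_V = 0.3, with an assumed
# 4 MHz video noise bandwidth:
print(round(snr_d(4, 64, 0.3, 4e6), 1))
```

Note how a barely visible video SNR of 0.3 can still correspond to a display SNR well above threshold once the eye's spatiotemporal integration over the image area is accounted for.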

It is generally true that the probability of detecting a signal in the presence of noise is related to the signal-to-noise ratio, and the purpose of the first experiment was to determine the form of the probability function when the signal is a visually observed image. We hypothesized that the probability of detection would be a constant for any given value of SNR_D when the image SNR is quantitatively described by Eq. (H-1), as opposed to being described by the video signal-to-noise ratio associated with the image. To test this concept, the probability of detection, corrected for chance, was measured as a function of the video SNR, or SNR_V, with the results shown in Fig. H-2 for images of size 4 × 4, 4 × 64, 4 × 128, and 4 × 180 scan lines. It is seen that the larger the image length (and therefore the image area), the smaller the SNR_V required for a given value of detection probability. Thus, a given value of probability cannot be associated with a given value of SNR_V, because the SNR_V required at the given probability level is image-area dependent. However, when the probability is plotted vs SNR_D, which includes image area in its definition, the dependence of detection probability on area disappears, as shown in Fig. H-3, where we have plotted Eq. (H-1) using the probabilities and SNR_V values of Fig. H-2. This is interpreted as a confirmation of the original hypothesis.

The angular extent of the test images relative to the observer's eye varied from 0.13° × 0.13° for the smallest rectangle to 0.13° × 6.02° for the largest rectangle. This experiment implies that the eye can integrate over very large angles in space; angles which are much larger than were previously thought to be the case.

The most probable explanation for this effect is that the eye, through a differentiating action, is more sensitive to edges, and the test images of large angular extent in one dimension were nearly all edge, being long thin rectangles. To test this hypothesis, a second experiment was performed using a rectangle of length 96 scan lines (or angular subtense 3.2°) and of variable widths of 4, 8, 16, and 32 scan lines, corresponding to angular subtenses of 0.13°, 0.267°, 0.534°, and 1.07° relative to the observer. The corrected probability of detection for this case is shown for the various rectangles in Fig. H-4, and a plot of the thresholds as a function of image


Fig. H-2 — Probability of detection vs video signal-to-noise ratio required for rectangular images of size 4 × 4, 4 × 64, 4 × 128, and 4 × 180 scan lines

Fig. H-3 — Corrected probability of detection vs SNR_D required for rectangular images of size 4 × 4, 4 × 64, 4 × 128, and 4 × 180 scan lines

size is shown in Fig. H-5. For long narrow rectangles, the same threshold value of SNR_D is obtained as was obtained for narrow rectangles of various lengths (Fig. H-4), and we conclude that for narrow widths (angular subtense of up to 0.5°) the eye fully integrates the whole area of the rectangle, but for wider rectangles of angular subtense larger than 0.5°, the eye is apparently less efficient in utilizing image area.


Fig. H-4 — Probability of detection vs display signal-to-noise ratio for a rectangle of height 96 scan lines and widths 4, 8, 16, and 32 scan lines

Fig. H-5 — Threshold SNR_D as a function of linear and angular extent of a rectangle of height 96 scan lines (3.2°) and variable width 4, 8, 16, and 32 scan lines (0.13°, 0.267°, 0.534°, and 1.07°). Dashed curve is theoretical.

To further verify the area effect, square images of size 2 × 2, 4 × 4, 8 × 8, and 16 × 16 scan lines were used. The measured threshold SNR_DT values, calculated on the assumption that the eye integrates over the total image area, are shown in Fig. H-6. The increase noted for the 2 × 2


line image is thought to be due to the aperture response of the eye, while the apparent threshold increases for the 16 × 16, 32 × 32, and 64 × 64 line squares are thought to be due to the edge differentiation effect.

Fig. H-6 — Threshold SNR_D required to detect square images of various size and angular extent

C. THRESHOLD SIGNAL-TO-NOISE RATIOS FOR PERIODIC BAR PATTERNS

The relationship between the video signal-to-noise ratio and the detectability of bar patterns was investigated and reported by Coltman and Anderson in 1960 (Ref. H-1). A review of the Coltman and Anderson experiment has historical merit and lends credence to the current approaches being used. In addition, the Coltman and Anderson experiment is often cited to prove that the eye uses up to 14 lines in the process of detecting a bar pattern. This result is believed to be an error in interpretation, as was discussed in Appendix E. Rather than integrating over a number of bars, it is hypothesized that the observer, to detect the presence of a bar pattern, must detect the presence of a single bar.

Psychophysical threshold SNR experiments using bar pattern test images were performed using the experimental setup of Fig. H-7. The periodic test images were bar patterns of various height-to-width ratios and spatial frequencies and were projected on the faceplate of a high-resolution 1-1/2-inch vidicon operated at highlight video signal-to-noise ratios of 50:1 or better. Band-limited white noise of Gaussian distribution was mixed with the camera-generated signal.

In the experiment, the observer was required to state whether or not the pattern displayed was resolvable as the image SNR_D was randomly varied. Chance was not involved, since the patterns were always present on the display. The experimental constants for the various experiments are given in Ref. 5-1. The purpose of the experiment was to determine the effect of bar


Fig. H-7 — Experimental setup for the television-camera-generated imagery. TV camera was operated at 25 frames/s and 875 scan lines (825 active); display luminance was 1 ft-L.

height-to-width ratio on the bar pattern detectability, with the results shown in Figs. H-8 through H-10. At a low spatial frequency of 104 lines/picture height, the threshold SNR_DT is seen to increase slightly with bar height-to-width ratio (Fig. H-8), while at the highest spatial frequency of 396 lines/picture height the SNR_DT required is very nearly independent of bar height-to-width ratio, as shown in Fig. H-10. By threshold SNR_DT it is implied that the observer discerned the pattern 50% of the time when the image SNR was at the threshold value.

Fig. H-8 — Probability of bar pattern detection vs SNR_D for a 104-line bar pattern of bar height-to-width ratio (□) 5:1, (○) 10:1, and (△) 20:1


Fig. H-9 — Probability of bar pattern detection vs SNR_D for a 200-line bar pattern of bar height-to-width ratio (□) 5:1, (○) 10:1, and (△) 20:1

Fig. H-10 — Probability of bar pattern detection vs SNR_D for a 396-line bar pattern of bar height-to-width ratio (□) 5:1, (○) 10:1, and (△) 20:1

The SNR_DT values are summarized in Fig. H-11 as a function of spatial frequency and are seen to decrease slowly with line number for all the patterns.

The angular subtenses of the bars used in the experiment, relative to the observer's eye, are given in Table H-1. It is seen that a bar of the 104-line pattern with 20:1 aspect subtended


Fig. H-11 — Threshold SNR_DT vs bar pattern spatial frequency for three bar height-to-width ratios of 5:1, 10:1, and 20:1

Table H-1 — Angular Subtense (degrees) of a Bar in Each Experiment
Relative to the Observer as a Function of the Bar Height-to-Width Ratio

Bar Pattern Spatial   Angular Subtense   Angular Subtense in the Vertical for
Frequency             in the             Bar Height-to-Width Ratios of
(Lines/P.H.)          Horizontal          5:1       10:1      20:1
--------------------------------------------------------------------
104                   0.157              0.785     1.57      3.14
200                   0.0818             0.409     0.818     1.636
396                   0.0413             0.2065    0.413     0.826

3.14°. Apparently the observer was not able to fully integrate over the entire bar length, as evidenced by the higher SNR_DT required. For an isolated bar, the observer was able to spatially integrate over at least an angle of 6° as was discussed previously. This marks at least one difference between the detection of isolated bars and bar patterns.
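The entries of Table H-1 follow directly from the display geometry: a bar of a pattern at N lines per picture height is (picture height)/N wide, and its angular subtense is that width divided by the viewing distance. A minimal sketch, assuming the 8-in.-high display and 28-in. viewing distance cited for these experiments (function and parameter names are illustrative):

```python
import math

def bar_subtense_deg(display_height, viewing_distance, lines_per_ph, aspect=1.0):
    """Angular subtense (degrees) of one bar: the bar is (picture height / N)
    wide and aspect times that tall; small-angle approximation."""
    bar_width = display_height / lines_per_ph
    return math.degrees(bar_width * aspect / viewing_distance)

# Horizontal subtense, then vertical subtenses for 5:1, 10:1, and 20:1 bars
for n in (104, 200, 396):
    row = [bar_subtense_deg(8.0, 28.0, n, e) for e in (1, 5, 10, 20)]
    print(n, [round(v, 3) for v in row])
```

Under these assumed viewing conditions the computed values match Table H-1 to within rounding.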

The probability of bar pattern detection is plotted in Fig. H-12 vs SNR_D for spatial frequencies equal to 104, 200, 329, 396, 482, and 635 lines/picture height. Bar length-to-width ratio was 5:1 in all cases. The corresponding threshold SNR_DT values are plotted in Fig. H-13. Again the falloff in SNR_DT with increase in spatial frequency is noted. The effect of observer-to-display viewing


Fig. H-15 — Threshold display signal-to-noise ratio vs bar pattern spatial frequency for optimum viewing distances and a 28-in. viewing distance, for one observer

distance is shown in Figs. H-14 and H-15. In Fig. H-14, the thresholds were measured for viewing distances of 14, 28, and 56 in. from an 8-in.-high display. As can be seen, low spatial frequency patterns are discerned with the lowest SNR_D at the largest viewing distances, while the higher spatial frequencies are discerned with the lowest SNR_D at shorter viewing distances. In Fig. H-15, we compare the thresholds measured with a viewing distance of 28 in. to those measured at an optimum distance, i.e., the observer was permitted to choose the viewing distance at will during the experiment.

D. RETINAL FLUCTUATION NOISE LIMITATIONS

Both the MRT and MDT are measured under optimum laboratory conditions. In general, the dynamic temperature range of the test patterns is small and the display gain (contrast) control can be set at a very high value. When viewing a real scene with a wide dynamic temperature range, the ability to adjust display gain may become limited. In Case 1 of Fig. H-16, the

incremental luminance swing ΔL_H is not far different from that of ΔL_L. The average display luminance L_av can be set at some low comfortable level just high enough to prevent clipping the scene blacks. If desired, both L_av and G_v, the display (or video) gain, can be increased to improve the contrast of the displayed images, but the exact values, so long as they are high enough, are not critical. In Case 2, we assume that the incremental luminance swing ΔL_H required to image large scene temperature swings has become so large that the observer must both increase L_av and decrease gain to stay within the display's dynamic range.

It is postulated that a retinal fluctuation noise exists due to photoconversion of display photons to sensory impulses within the observer's retina. The effect of increasing display luminance is to increase the retinal fluctuation noise, and the decrease in gain decreases the incremental signal ΔL_L; therefore, ΔL_L may become imperceptible because of the overall decrease in signal-to-noise ratio.

A number of psychophysical experiments were performed to determine the effect of display luminance on the detection of square images and the interaction of display luminance with image detection and video gain. The result of one such experiment is shown in Fig. H-17. As can be seen, an increase in display gain must accompany an increase in display luminance when operating in the retinal fluctuation noise limited region. From the experiments it was found to be possible to estimate the individual contributions of the system and retinal fluctuation noise to the total noise, as shown in Fig. H-18.
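The decomposition of Fig. H-18 amounts to combining a fixed system noise with a retinal photoconversion term in quadrature. A sketch of that bookkeeping, where the constant k_ret and the square-root dependence of the retinal term on display luminance are assumptions for illustration, not values from the experiment:

```python
import math

def total_noise(luminance_ftl, n_system=1.0, k_ret=0.8):
    """Quadrature sum of a fixed system noise and a retinal photoconversion
    noise assumed to grow as the square root of display luminance."""
    n_retinal = k_ret * math.sqrt(luminance_ftl)
    return math.hypot(n_system, n_retinal)

for lum in (0.1, 0.5, 1.0, 5.0, 10.0):
    print(f"{lum:5.1f} ft-L -> total noise {total_noise(lum):.2f}")
```

At low luminance the total is system-noise limited; at high luminance the retinal term dominates, which is the behavior Fig. H-18 displays.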


Fig. H-16 — Case 1: displayed image has small dynamic range, moderately low brightness, and adequate video gain. Case 2: a large input signal L_H requires an increase in display brightness and a reduction of video gain; L_L/L_H is reduced in turn.

The conclusion drawn from the experimental program is that a retinal fluctuation noise exists, and the implication is that the MRT and MDT values obtained will not always be achieved in imaging real scenes.

The SNR_P equation, including retinal fluctuation noise, will take the form (Ref. H-12)

SNR_P = (t_e a Δf_v)^(1/2) (R_o(N)/N) Δi / [i_b^2 + e Φ_av / (η_e K T (A_i/A_d) G_v^2 K_d^2)]^(1/2),   (H-4)


Fig. H-18 — Estimated fixed system noise, estimated retinal photoconversion noise, and total noise; measured noise data for a 4 x 4 and an 8 x 8 scan-line square image

where the terms are as before except that

η_e is the quantum efficiency of the observer's eye,
K is the conversion constant (photons/lumen),
T is the T-stop of the observer's eye,
(A_i/A_d) is the ratio of picture area on the retina to that on the display,
G_v is the video gain,
K_d is the phosphor conversion constant (lumens/ampere),
Φ_av is the average luminous flux received from the display (lumens),
Δi is the incremental signal current (A), and
i_b is the rms noise current due to scene background photons.

The Φ_av term above can be converted to a current in the video channel by dividing by K_d, the display conversion gain. It is therefore clear that we can refer the eye noise term to an equivalent noise current in the video channel. When the perceived SNR is sensory system noise limited, Eq. (H-4) reduces to

SNR_P = (t_e a Δf_v)^(1/2) (R_o(N)/N) (Δi/i_b),   (H-5)

which is seen to be independent of operator-to-display viewing distance (except to the extent that the eye MTF effects are included in R_o(N)).

When the SNR is primarily retinal fluctuation noise limited, Eq. (H-4) becomes

SNR_P = (η_e K T)^(1/2) (t_e a Δf_v)^(1/2) (R_o(N)/N) G_v K_d (F_L/D_v) Δi / (e Φ_av)^(1/2),   (H-6)

where F_L/D_v = (A_i/A_d)^(1/2), F_L being the air-equivalent focal length of the eye and D_v the display viewing distance. As can be seen, the SNR_P is now inversely proportional to viewing distance.


In addition to retinal fluctuation noise, there are generally other image defects which may cause the MRT to be higher than predicted. Some causes are unbalance between adjacent detector channels, due to either image cell detectivity differences or preamp channel gain variations, and extraneous noise such as switching transients resulting from the sampling processes.

REFERENCES

H-1. Coltman, J.W., and Anderson, A.E., "Noise Limitations to Resolving Power in Electronic Imaging," Proc. IRE 48(5), 1960.

H-2. Rosell, F.A., "Performance Synthesis of Electro-Optical Sensors," EOTM No. 575, Final Technical Report, Contract No. DAAK-53-75-C-0225, Night Vision Laboratory, Ft. Belvoir, Va., Feb. 1977.


Appendix I

OBSERVER RESOLUTION REQUIREMENTS

F.A. Rosell

A. INTRODUCTION

In this appendix, it is shown that the equivalent bar pattern approach, which has seen long use in predicting the range at which scene objects can be visually discriminated on a sensor's display, has restricted validity. It was formerly assumed that the number of resolution lines required per minimum object dimension was relatively constant for a given object. More recently it has been found that the number varies with signal level and that the variability is considerable. It is also shown that the resolution required can be a strong function of scene object viewing aspect angle.

B. HISTORICAL APPROACHES

One of the earliest attempts to functionally relate threshold resolution with the visual discrimination of images of real scenes is attributed to John Johnson (Ref. 6-1). The levels of visual discrimination were arbitrarily divided into four categories: detection, orientation, recognition, and identification, with detection being the lowest and identification the highest discrimination level. The basic experimental scheme was to move a real scene object such as a vehicle out in range until it could be just barely discerned on a sensor's display at a given discrimination level. Then the real scene object was replaced by a bar pattern of contrast similar to that of the scene object. The number of bars in the pattern per minimum object dimension was then increased until the bars could just barely be individually resolved. In this way the detectability, recognizability, etc., of the scene object can presumably be correlated with the sensor's threshold bar pattern resolution. The basic idea makes sense: the better the sensor's resolution, the higher the level of visual discrimination should be.

In addition to sufficient resolution, Johnson noted that image SNR had to be sufficient, but the definition of image SNR was not clear.

By use of the methods of Chapters IV and V, image SNR values can be defined for bar patterns and other simple geometric shapes. By 1970, it appeared appropriate to test the Johnson criteria further by using improved image SNR concepts. For analysis purposes, an equivalent bar pattern was defined. This bar pattern concept is essentially the same as proposed by Johnson except that the bar lengths are made equal to the length of the scene object, whereas in the Johnson criteria the lengths are unspecified. The justification for this approach is that many objects will be more easily detected, recognized, or identified when viewed broadside as opposed to head-on, and this notion can be quantitatively noted by including the maximum object dimension in the image SNR definition. The Navy, as will be discussed, has used a pixel approach in an attempt to realize the same objective. To test the Johnson concept by using the equivalent bar pattern, images of four different vehicles were televised against a uniform background. Additive white noise was added to the displayed image and background to


vary the SNR. The probabilities of correct recognition were measured as the type of vehicle and the image SNR_D was randomly varied. The result is shown in Fig. I-1. The SNR_D for the real object was calculated on the basis of the peak-to-peak signal excursion within the vehicle outline, the area of a bar of width equal to 1/8 the object's minimum dimension and of length equal to the object's longest dimension, and the rms noise. The vehicles were then replaced by a bar pattern of the appropriate spatial frequency for recognition, and the probability of discerning the bar pattern was determined and plotted in Fig. I-2. The results appear to confirm the Johnson hypothesis. However, when the same vehicles were imaged against a more complex background (though not so complex as to obscure the vehicle outline), the SNR_D needed at a given probability and the spread in the data increased, as shown in Fig. I-3. The increase in SNR_D required could alternatively be treated as a need for an increase in resolution. Note also that it is difficult to define the signal excursion and average contrast for real scene objects.
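The equivalent bar pattern geometry described above can be sketched as follows: the bar width is one-eighth of the object's minimum dimension, and the corresponding spatial frequency in lines per picture height follows from the range and the vertical field of view. All numeric values below are illustrative, not data from the experiment:

```python
import math

def equivalent_bar_frequency(min_dim_m, range_m, vfov_deg):
    """Lines/picture height of the equivalent bar pattern whose bar width
    is 1/8 of the object's minimum dimension (small-angle approximation)."""
    bar_angle = (min_dim_m / 8.0) / range_m       # one bar, radians
    picture_height = math.radians(vfov_deg)       # vertical field of view, radians
    return picture_height / bar_angle

# e.g., a 2.4-m-high vehicle at 1 km in a 2.1-deg vertical field of view
print(round(equivalent_bar_frequency(2.4, 1000.0, 2.1)), "lines/picture height")
```

Doubling the range doubles the equivalent spatial frequency, which is why range predictions hinge on where this frequency crosses the sensor's threshold resolution curve.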

Fig. I-1 — Probability of recognition vs SNR_D for a tank, radar half-track, van truck, and derrick bulldozer. Background was uniform.

In the concepts discussed above, resolution is not defined by the number of scan lines or detectors but rather by the threshold resolution as defined by an MRT curve. For an ideal sensor, which is defined as one with a unity MTF, the SNR_D can be written as

SNR_D = (t_e a Δf_v)^(1/2) (1/N) (Δi/i_n),   (I-1)

where the terms are as defined in Chapter IV and i_n is the rms noise.

Two SNR_D curves for an ideal sensor are plotted in Fig. I-4. Also shown is the observer's threshold SNR_DT, which is very nearly a constant at optimum viewing distance. The intersection of the SNR_D and SNR_DT curves gives the threshold resolution for the sensor-aided observer. Note that doubling the SNR_D by increasing the incremental signal current from level 1 to level 2 doubles the threshold resolution from N_1 to N_2.
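For the ideal sensor, SNR_D varies inversely with N at fixed signal, so its intersection with a constant observer threshold SNR_DT gives a threshold resolution that is linear in the signal. A sketch of that intersection (the constants c and the example currents are illustrative):

```python
def threshold_resolution(delta_i, snr_dt=2.8, c=1.0):
    """Solve c*delta_i/N = snr_dt for N: the ideal-sensor threshold
    resolution is proportional to the incremental signal current."""
    return c * delta_i / snr_dt

n1 = threshold_resolution(280.0)
n2 = threshold_resolution(560.0)   # doubled signal doubles the threshold
print(n1, n2)
```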


Fig. I-2 — Probability of bar pattern detection vs SNR_D for patterns of area equal to the average real object area, of N = 329, 396, 482, and 635 lines/picture height

Fig. I-3 — Probability of recognition vs SNR_D (based on total area divided by 8) for a tank, radar half-track, van truck, and derrick bulldozer; grass-trees background; televised imagery of 875 lines, 25 frames/s


Fig. I-4 — SNR_D vs spatial frequency for an ideal sensor. Doubling SNR_D from L1 to L2 doubles threshold resolution from N_1 to N_2.

In Fig. I-5, we plot three SNR_D curves for a real, nonideal sensor at three different light levels. At some low light level, assume that the threshold resolution is N_1. Doubling the SNR_D increases the threshold resolution from N_1 to N_2. For Case 1, the resolution increases from 80 to 140 lines/picture height, not quite 2 times. For Case 2, the increase is from 190 to 290, about 1.5 times. For Case 3, the increase is only about 1.15 times. Thus, for a system including finite apertures, it would appear that an increase in SNR_D at low spatial frequencies would be much more effective than a similar increase at high spatial frequencies, since at low spatial frequencies an increase in SNR_D results in a greater increase in threshold resolution. An increase in SNR_D at high spatial frequency may result in no improvement in threshold resolution at all. However, an improvement in overall image quality will result because of the increase in SNR_D at all of the spatial frequencies below the threshold.

The probability of detecting simple geometric shapes such as bar patterns generally follows a normal cumulative probability curve with unit variance, as shown in Fig. I-6(a). It has been assumed that the probability of recognizing real scene objects will follow the same functional relationship. In Fig. I-6(a) we see that increasing the SNR_D by about 2 increases the probability of discerning a given bar pattern from 50% to near 100%. Also, as discussed above, an increase in SNR_D of 2 at line number N_1 will increase the system's limiting resolution to 2N_1. It can therefore be argued that the increase in probability of detecting the bar pattern of frequency N_1 is due to the increase in limiting resolution. Since limiting resolution and SNR_D are linearly related in 1:1 correspondence for the ideal sensor, we could alternatively plot the cumulative probability curve as a function of threshold resolution, as shown in Fig. I-6(b).
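The unit-variance normal cumulative curve of Fig. I-6(a) can be written down directly: the detection probability is 50% when SNR_D equals the threshold SNR_DT and rises to roughly 98% two units above it. A minimal sketch:

```python
import math

def detection_probability(snr_d, snr_dt):
    """Normal cumulative probability with unit variance, centered on SNR_DT."""
    return 0.5 * (1.0 + math.erf((snr_d - snr_dt) / math.sqrt(2.0)))

print(detection_probability(2.8, 2.8))              # 0.5 at threshold
print(round(detection_probability(4.8, 2.8), 3))    # about 0.977 two units above
```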

The Night Vision Laboratory hypothesized that the probability of correct recognition vs threshold resolution was the correct approach (Ref. 6-3) and attempted, through field tests, to prove its validity. Although the results were somewhat subject to interpretation and not entirely conclusive, the probability vs threshold resolution approach appeared to be superior to the probability vs SNR_D approach. It was stated, however, that both approaches give the same result at the 50% level of probability.


Fig. I-5 — SNR_D vs spatial frequency for a typical LLLTV sensor for three values of input photocurrent (Cases 1, 2, and 3). If N_1 is the threshold frequency at one level, N_2 is the new threshold frequency when SNR_D at N_1 is doubled.

Fig. I-6 — Probability of discerning bar pattern vs (a) SNR_D normalized to the threshold SNR_DT and (b) spatial frequency N normalized to the threshold resolution


The work of Johnson and the previously described work of Rosell did not broadly range over the methods of limiting image quality, i.e., aperture effects (MTF limitations) versus noise limitations. Discrepancies were noted between the crisp image quality of broadcast images aperture limited to 340 TV lines per picture height and the poor image quality of low-light-level imagery noise limited to 340 TV lines per picture height.

In an effort to understand the difference in image quality when both images yielded identical values of limiting resolution, O'Neill and Rosell, in independent experiments, showed that neither approach is valid (Refs. 6-4 and 6-5). Even further, it was shown that the original Johnson criteria and the equivalent bar pattern approaches are also untenable because the number of lines required per minimum object dimension is a variable depending on some function of the video signal-to-noise ratio. This result should have been foreseen. Its reasonableness can be inferred from the following simple illustration. The maximum resolution of a standard television broadcast camera is bandwidth or aperture limited to approximately 340 lines per picture height. Picture quality can be very high indeed, with good gray shade rendition, and the picture may be near noise-free. Contrast this with the picture obtainable from a low-light-level television sensor which is irradiance level limited to 340 lines per picture height. The picture observed is very noisy, and only a few shades of gray can be differentiated. Clearly, the picture obtained from the aperture limited broadcast camera is substantially superior to that obtained from the noise limited LLLTV sensor. Thus, the threshold resolution observable is not necessarily a reliable guide to a sensory system's performance.

Rosell's experiments were conducted to determine the resolution required to identify televised human faces under two extremes: the case where scene light level is more than adequate, as in sunlight, and where the scene light level is very low, as at night. Under high light level conditions the sensor resolution is primarily aperture response limited, while under low light level conditions the sensor resolution was primarily noise limited. It was found that when the resolution was primarily aperture response limited, 5.8 lines per face width were needed for a 50% probability of correct identification, while when the resolution was primarily noise limited, 12.4 lines were required for the same probability. About 1.8 to 2.0 times more resolution, measured in the threshold sense, was required for 100% identification probability in both cases, i.e., about 11 lines per face width in the aperture limited case and 23 lines per face width in the noise limited case.

O'Neill performed extensive experiments by using televised ship silhouette images. The televised images were photographed and later sorted according to the level of visual discrimination each photograph provided. Seven discrimination tasks are defined as shown in Table I-1. In Naval Air Development Center models, pixels are used to describe resolution requirements. The number of pixels required to perform a given task is found by creating two equivalent bar patterns: one with horizontal and the other with vertical bars. The product of horizontal and vertical resolutions represents the number of pixels. For example, if 3-foot resolution is needed to achieve a given level of discrimination and the ship is 30 x 300 feet in size, the number of pixels is 10 x 100 or 1,000. However, pixels are converted to equivalent bar pattern resolution at the ship for discussion purposes.
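The NADC pixel computation described above is simply the product of the two resolution-cell counts on the target; a minimal sketch using the example from the text:

```python
def pixels_required(ship_height_ft, ship_length_ft, resolution_ft):
    """Product of vertical and horizontal resolution cells on the target."""
    return (ship_height_ft / resolution_ft) * (ship_length_ft / resolution_ft)

# 3-ft resolution on a 30 x 300 ft ship: 10 x 100 = 1,000 pixels
print(pixels_required(30.0, 300.0, 3.0))
```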

To continue, O'Neill's results were evaluated in terms of the resolution required in the vertical or ship height direction, with the results shown in Table I-1. The ship was 46 ft high and 520 ft long. The data in Table I-1 are the results obtained when the light level was very


Table I-1 — Resolution Required for the Various Levels of Ship Classification

                                     Pixels     Resolution at Ship     No. of Lines
Discrimination Task                  Required   (feet)     (meters)    per Ship Height
---------------------------------------------------------------------------------------
0  Detect object on horizon sky          36      25.78       7.86          1.78
1  Recognize as vessel                  100      15.47       4.72          2.97
2  Recognize ship structure             500       6.9        2.10          6.65
3  Recognize ship type                1,000       4.89       1.49          9.4
4  Classify king posts                2,000       3.46       1.05         13.3
5  Discern radar detail               4,300       2.36       0.72         19.5
6  Detect 40 mm gun barrel           12,000       1.41       0.43         32.6

high so that the sensor was primarily aperture response rather than noise limited. For initial analytical purposes, it is assumed that the resolution required by the observer at the ship is a constant independent of light level. O'Neill provides data showing the range at which the observer visually discriminated the targets at various light levels, which we can use to calculate the angular subtense of the assumed required resolution. Then, knowing the camera field of view (2.1° in the vertical), we can calculate the apparent TV camera angular resolution as the ratio of the resolution required (from Table I-1) for any given visual discrimination level to the threshold detection range, which is shown in Table I-2. Also shown are the spatial frequencies corresponding to a bar pattern with bar of angular extent equal to the apparent angular resolution. We plot these spatial frequencies in Fig. I-7 along with the threshold TV camera resolution measured using bar patterns. As can be seen, the apparent resolution is much less than the threshold camera resolution at the lower light levels. It should be noted that the TV camera had to be quite highly aperture corrected in order to achieve the measured curve shown. It is possible that aperture correction is not very effective in improving the visual discrimination of real scene objects (as opposed to bar patterns). In Fig. I-8, we plot the ratio of apparent to measured threshold resolution.

In the above discussion it was assumed that the resolution required for a given visual discrimination level is a constant independent of light level, which leads to the conclusion that the apparent resolution is less than measured. Alternatively, it could be assumed that more lines are required per ship height at the 50% probability level to discriminate an object under low light level (noise limited) conditions.

As the level of visual discrimination increases at a given light level, the resolution measured in feet or meters at the scene object must decrease, as can be seen from column 2 of Table I-2. Observe that at 10^-6 ft-c, the apparent angular resolution, corresponding to the assumed resolution required divided by the range, is approximately the same for all the discrimination levels. As the light level increases to 3 x 10^-6 ft-c, the apparent angular resolution required for any given discrimination level decreases, but again the apparent angular resolution required is approximately a constant independent of the discrimination level. This same result is observed as the light level increases to 10^-5 ft-c. In Fig. I-9, the apparent angular resolution required is plotted as a function of the visual discrimination levels to emphasize these results.
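The Table I-2 entries follow from its footnote: the apparent angular resolution in microradians is 10^3 times the assumed resolution (meters) divided by the range (km). A sketch, checked against two rows of the table:

```python
def apparent_angular_resolution_urad(resolution_m, range_km):
    """Footnote to Table I-2: 10**3 * column 2 / column 3, in microradians."""
    return 1.0e3 * resolution_m / range_km

# Level-0 row at 10**-6 ft-c: 7.86 m discriminated at 45 km
print(round(apparent_angular_resolution_urad(7.86, 45.0)))   # 175 urad
```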


Table I-2 — Apparent Threshold TV Resolution vs Visual Discrimination
Level at Various Photosurface Light Levels

Visual           Assumed Resolution           Apparent Angular   Apparent TV
Discrimination   at Ship*             Range   Resolution†        Resolution
Level            (meters)             (km)    (urad)             (Lines/Picture Height)

Photosurface Illuminance = 10^-6 ft-candles
0                7.86                 45.0    175                197
1                4.72                 23.5    201                172
2                2.10                 14.0    151                229
3                1.49                  6.8    219                157
                                                                 Avg = 189

Photosurface Illuminance = 3 x 10^-6 ft-candles
0                7.86                 78.0    100                342
1                4.72                 44.0    107                322
2                2.10                 26.5     80                433
3                1.49                 20.0     75                462
4                1.05                  8.2    128                269
5                0.71                  5.4    132                262
6                0.43                  3.0    143                241
                                                                 Avg = 333

Photosurface Illuminance = 10^-5 ft-candles
0                7.86                120.0     65.5              526
1                4.72                 68.0     69.3              498
2                2.10                 33.0     63.9              539
3                1.49                 20.0     74.5              463
4                1.05                 13.0     80.7              423
5                0.71                  9.4     75.7              455
6                0.43                  5.6     76.7              449
                                                                 Avg = 479

*From Table I-1.
†10^3 times column 2 divided by column 3.


Fig. I-7 — Threshold resolution vs photosurface illuminance (ft-c, 2854 K): measured using bar patterns, and inferred from real object discrimination measurements assuming a resolution requirement which is independent of light level

Fig. I-8 — Ratio of apparent to measured angular resolution required to visually discriminate the ship silhouette as a function of photosurface illuminance (ft-c, 2854 K)


Fig. I-9 — Angular subtense of the apparent spatial resolution at the ship for threshold visual discrimination vs the visual discrimination level

The tentative conclusion is that as the signal levels decrease, the number of lines required per minimum object dimension to perform a specified visual discrimination task must increase, but the relative increase is nearly the same for all the discrimination levels.

A third set of data is shown in Fig. I-10, which represents data measured at the Night Vision Laboratory by R. Flaherty and which again shows that the number of resolvable lines per target height required to recognize a real scene object increases with decreasing light level. The recognition task in this experiment consisted of discriminating front and side views of a tank, an APC, and a truck (9 signatures in all).

The above data conclusively show that an equivalent bar pattern approach cannot employ a fixed line number criterion for any given level of visual discrimination; the criterion will vary depending on the degree to which the sensor's resolution is noise or aperture response limited.

To make use of the O'Neill data, the broad area video SNR_VO was estimated by using the relation

SNR_VO = (S A E_h / ε_sc) / [2 e S A E_av Δf_v / ε_sc]^(1/2),   (I-2)

where, for the case under consideration, S is the photosurface response to 2854 K light, A is the effective photosurface area, E_h is the highlight and E_av is the average photosurface irradiance, ε_sc is the scan efficiency, e is the charge of an electron, and Δf_v is the video bandwidth defined by

Δf_v = a N_e N_s / (2 t_f ε_sc),   (I-3)


Fig. I-10 — Lines required per object height for 50% probability of correct object recognition vs scene luminance level (ft-L, 2854 K)

where a is the picture aspect ratio (H/V), N_e is the noise equivalent bandwidth, N_s is the number of active scan lines, and t_f is the frame time. In the above, we have assumed that the sensor is photoelectron noise limited. The ratio of apparent to measured angular resolution is plotted vs the SNR_VO calculated using Eqs. (I-2) and (I-3) and the available data in Fig. I-11. It can be seen that with broad area video signal-to-noise ratios below about 3 to 5, substantial increases in the number of lines per minimum scene object dimension may be required for any visual discrimination task.
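Assuming forms of Eqs. (I-2) and (I-3) consistent with the term definitions in the text (an assumption — the OCR of the equations is damaged, so verify against the original report), the broad area video SNR computation can be sketched as follows. Every parameter value below is purely illustrative:

```python
import math

E_CHARGE = 1.602e-19  # electron charge, coulombs

def video_bandwidth(aspect, n_e, n_s, t_f, eps_sc):
    """Eq. (I-3) as reconstructed: delta_f_v = a * N_e * N_s / (2 * t_f * eps_sc)."""
    return aspect * n_e * n_s / (2.0 * t_f * eps_sc)

def snr_vo(s, area, e_high, e_avg, df_v, eps_sc):
    """Eq. (I-2) as reconstructed: photoelectron-noise-limited broad area video SNR."""
    signal = s * area * e_high / eps_sc
    noise = math.sqrt(2.0 * E_CHARGE * s * area * e_avg * df_v / eps_sc)
    return signal / noise

df = video_bandwidth(4.0 / 3.0, 490.0, 490.0, 1.0 / 30.0, 0.9)
print(f"bandwidth = {df:.2e} Hz, SNR_VO = {snr_vo(4e-3, 2.2e-4, 1e-6, 5e-7, df, 0.9):.1f}")
```

Note the shot-noise behavior: scaling both irradiances up by a factor of four doubles the SNR, which is why broad area SNR degrades only as the square root of falling light level.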

It is seen that the equivalent bar pattern approach as used in the past is not valid. However, there is no proven alternative at present. One possible near-term solution is to adjust the number of lines required per minimum object dimension at the 50% probability level on the basis of the broad area SNR, in accord with the curve of Fig. I-11. It should be emphasized that the curve of Fig. I-11 is based on very little data, derived from an experiment which was not specifically designed to determine a correlation between video SNR and resolution criteria. However, it is believed that the curve is of the correct form if not of precise values. A method of using the data will be developed and discussed in Chapter 6, including methods of estimating higher levels of probability.
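One way to apply the proposed adjustment is a piecewise-linear lookup on a Fig. I-11-style curve: the ratio of lines required approaches 1 at broad area SNRs above roughly 5 and grows rapidly below about 3. The sample points below are hypothetical placeholders shaped like the described curve, not values read from the figure:

```python
# Hypothetical (snr_vo, ratio) points with a Fig. I-11-like shape
SAMPLE_CURVE = ((0.3, 5.0), (1.0, 3.0), (3.0, 1.5), (5.0, 1.1), (10.0, 1.0))

def resolution_correction(snr_vo, curve=SAMPLE_CURVE):
    """Piecewise-linear interpolation of the ratio of lines required
    (noise limited) to lines required (aperture limited)."""
    pts = sorted(curve)
    if snr_vo <= pts[0][0]:
        return pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if snr_vo <= x1:
            return y0 + (y1 - y0) * (snr_vo - x0) / (x1 - x0)
    return pts[-1][1]

# Adjusting a 50%-probability criterion of 7 lines per minimum dimension:
print(7.0 * resolution_correction(2.0))
```

With real curve data substituted for the placeholder points, the same lookup would implement the adjustment the text proposes.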

C. SCENE OBJECT VIEWING ASPECT VS RESOLUTION REQUIRED

The results of an NVL aspect angle experiment are partially shown in Figs. I-12 through I-15. These experiments were performed using scale models viewed through an image intensifier. The resolution of the intensifier was varied through light level control and was measured by viewing a repetitive square wave pattern of 7:1 aspect and contrast equal to that of the object to be recognized or identified. The resolution was set to zero at the beginning of each set of trials and increased until the object was correctly recognized at the 80% probability


level. As noted in the previous section, the number of lines needed per minimum object dimension may be light level dependent, but this aspect of the problem was not investigated in the experiment.

Fig. I-11 — Ratio of effective to measured angular resolution required to visually discriminate the ship silhouette as a function of the broad area video SNR_VO

Fig. I-12 — Number of lines or half cycles per minimum dimension required for recognition and identification of a tank as a function of viewing aspect angle


Fig. I-13 — Number of lines or half cycles per minimum dimension required for recognition and identification of a MIG 21 as a function of viewing aspect angle

Fig. 1-14 - Number of lines or half cycles per minimum dimension required for recognition and identification of an aircraft carrier as a function of viewing aspect angle


Fig. 1-15 - Number of lines or half cycles per minimum dimension required for recognition and identification of a destroyer as a function of viewing aspect angle

Table 1-3 - Number of Lines or Half Cycles Required per Minimum Object Dimension for Recognition

    Object                       Length-to-Width    Lines or Half Cycles/
                                 Aspect Ratio       Minimum Object Dimension
    -------------------------------------------------------------------------
    Battleship                       16/1                   5.6
    Destroyer                        12/1                   9.0
    Aircraft Carrier                 15/1                   9.0
    P.T. Boat                        12/1                   6.0
    Passenger Ship                   12/1                   6.0
    Fighter                           6/1                   6.0
    Jet Transport                     9/1                   5.0
    Tank                              2/1                   7.0
    Armored Personnel Carrier         2/1                   7.0


As can be seen in Fig. 1-12, the number of lines needed for recognition of a tank is about 7 in the side aspect and 8 head-on, while identification requires about 13 on the side and 20 head-on. The results for the MIG 21 are similar to the aircraft carrier except for nose-on recognition. The destroyer is characterized by a generally greater discrimination difficulty. In Table 1-3, the resolution required for side aspect recognition, as measured in the same manner as for the above results, is given.

No single, unique number can be associated with any given level of visual discrimination. The analyst must therefore exercise judgment. It is clear that generally higher levels of visual discrimination will require generally higher levels of sensor resolution, but overlap in resolution requirement between levels will exist. For example, 21 lines are required to recognize a destroyer in the frontal aspect while only 8 lines are needed to identify an aircraft carrier in the beam aspect.
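Criteria of the Table 1-3 type feed directly into a simple range estimate: if a sensor resolves one half cycle per theta radians and recognition of an object of minimum dimension h meters requires N half cycles across that dimension, the maximum recognition range is R = h / (N * theta). The sketch below applies this relation with the table's side-aspect tank value; the function name, the tank's 2.3 m minimum dimension, and the sensor resolution are assumed for illustration and are not values given in the report.

```python
# Sketch of a Johnson-criteria style recognition-range estimate using the
# side-aspect half-cycle requirements of Table 1-3.  The object dimension
# and sensor resolution below are ASSUMED, illustrative values.

# Half cycles required per minimum object dimension for recognition (Table 1-3)
CYCLES_REQUIRED = {
    "destroyer": 9.0,
    "aircraft carrier": 9.0,
    "fighter": 6.0,
    "tank": 7.0,
}

def recognition_range(min_dimension_m, half_cycles, resolution_rad):
    """Range (m) at which `half_cycles` half cycles just span an object of
    minimum dimension `min_dimension_m` for a sensor resolving
    `resolution_rad` radians per half cycle:  R = h / (N * theta)."""
    return min_dimension_m / (half_cycles * resolution_rad)

# Example: a tank with a 2.3 m minimum dimension (assumed) viewed by a
# sensor resolving 0.1 mrad per half cycle (assumed)
r = recognition_range(2.3, CYCLES_REQUIRED["tank"], 0.1e-3)  # ~3286 m
```

As the text cautions, the half-cycle counts themselves shift with aspect angle and discrimination level, so any such estimate inherits that spread.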


INDEX

A
Aerosols, 25, 31
    urban, 25
    rural, 25
    maritime, 26
    continental, 26
Anderson, A.E., 205
Aperiodic object, 65
Aperture, aperture response, 51, 64, 206
Atmospheric transmission, 30
    tables, 32

B
Backgrounds
    sea, 10
Bandwidth, reference, 59, 71
Bar pattern, 66
Barnard, 162
Barnes, R.M., 50, 111
Blur circle, 69

C
Carbon dioxide, 23
Carbon monoxide, 24
Channel, detector, 49
Channel SNR, 49, 79
Characterization of scene, 119
CIE, 143
Clear line of sight, 119
Cold shield, 61
Coltman, J.W., 212
Conversions, 33
    relative humidity, 33, 43
    absolute humidity, 33, 43
    dew point, 33
    torr, 33
Coordinate systems, 113
Czerny, 30, 111

D
D*, detectivity, 61, 210
deVries, Hessel, 33, 111
Diffraction limit, 69
Display search, 127
Display SNR, 49, 79
Dwell time, 61
Dynamic model, 111

E
Equivalent Temperature Difference, 15
Extinction coefficient, 30

F
Flaherty, R., 246
FLIR, 55
Fourier methods, 62, 64, 162

G
Glimpse, definition, 122
Grafenwehr field measurements, 26

H
Hudson, R.D., Jr., 167

J
Jacobs, I.M., 162
Jamieson, J.A., 160
Johnson, J., 237

K
Keller, R.B., 125
Koschmieder relation, 31

L
Lamar, E.S., 126
Lavin, H.P., 53
LED, 55
Legault, R., 91
LIDAR, 27
Line pairs, 63
Line spread function, 64
LOWTRAN code, 21, 31

M
Marsam II model, 128
Matched filter concept, 181
Methane, 24
Mie scattering, 26, 31
Minimum Detectable Temperature (MDT), 90, 160, 175
Minimum Resolvable Temperature (MRT), 50, 88, 94, 160, 175, 178, 179
Mixed gases, 31
Molecular absorption, 29

N
NEΔT, 49, 61, 79, 160, 163
Nitrogen, 24
Nitrous oxide, 24
Noise equivalent bandwidth, 69, 81
Noise equivalent power (NEP), 210, 213
Noise filtering factor, 69, 82
Noise increase function, 81
Nomenclature, 149
Nomogram, atmospheric transmittance, 34, 40
Nyquist frequency, 215

O
Observer threshold, 90
O'Neill, G., 104, 243
Oxygen, 24
Ozone, 24

P
Parallel scan, 56
Pearson, G.E., 128
Perceived SNR, 49
Periodic bar pattern, 65, 87, 199
Photoconductive, 212
Photopic, 143
Photovoltaic, 212
Physical target acquisition, 113
Pixel, 102
Point spread function, 64
Probability functions, 112
Probability of visual discrimination, 107
Psychophysical experiment, 191, 232

R
Range interpolation table, 42, 44
Range prediction, 102
Reference bandwidth, 59
Responsivity, 210
Retinal fluctuation, 232
Rose, Albert, 53, 59
Rosell, F.A., 159

S
Saccade, 122
Scene characteristics, 7
Schade, Otto, Sr., 50, 156
Scotopic, 143
Search field, 114
Sendall, R., 125, 159, 181
Serial scan, 56
Shumaker, D.L., 125
SI units, 143
Sky
    overcast, 12, 14
SNR, channel, 49, 58, 78, 210
    display, 49, 76, 77, 80, 81, 209
    image, 52
    video, 50, 54, 79, 209
    perceived, 3
Spatial frequency, 71, 72, 73, 74, 76, 80
Symbols, 149
Synchronous integrator, 65, 189

T
Thermal images, 7
Tomasi and Tampieri, 27
Trucks as targets, 13

V
Vapor pressure of water, 39
Visual-lobe models, 124
Visual range, 31
Visual search, 119

W
Water vapor, 22
    continuum, 31
Williams, L.E., 122
Willson, 125
Worthy, N., 125
Wozencraft, J.M., 162

Y
Yarbus, A.L., 122


DEPARTMENT OF THE NAVY
NAVAL MATERIAL COMMAND

ELECTRO-OPTICAL TECHNOLOGY PROGRAM OFFICE
NAVAL RESEARCH LABORATORY
WASHINGTON, D.C. 20375

IN REPLY REFER TO
140g-197:RAS:cr
6 August 1979

From: Electro-Optical Technology Program Office
To:   Distribution List

Subj: "The Fundamentals of Thermal Imaging"

1. The Electro-Optical Technology Program Office (EOTPO) has received numerous requests for multiple copies of "The Fundamentals of Thermal Imaging Systems", F. Rosell and G. Harvey, editors. Due to the limited size of the first printing and the large initial distribution, we are unable to honor all such requests at this time.

2. Individuals and organizations desiring additional copies of the Rosell/Harvey report are requested to indicate their requirements in writing to the EOTPO. The written requests will be used to justify a second printing and to reserve copies from the second printing for the requestors.

J. M. MacCALLUM, JR.
Head, Electro-Optical
Technology Program Office


August, 1979

REVISED MAILING LIST

NRL REPORT 8311

"THE FUNDAMENTALS OF THERMAL IMAGING SYSTEMS"

Commander
Naval Air Development Center
(Steve Campana, Code 3011)
Warminster, PA 18974

Commander
Naval Air Systems Command
(AIR-03)
Washington, D.C. 20361

Commanding Officer
US Army Missile R&D Command
DRDMI-TEI
(Tracey Jackson)
Redstone Arsenal
Huntsville, AL 35809

Chief of Naval Operations
Navy Department (OP-98)
Washington, D.C. 20360

Director
Night Vision & E-O Laboratory
AMSEL-NV-VI
(Russell Moulton)
Fort Belvoir, VA 22060

Commander
Naval Sea Systems Command
Naval Sea Systems Command Hdqtrs
(CAPT L.R. Patterson, PM5405)
Washington, D.C. 20362

Commander
Naval Electronic Systems Command
Naval Electronic Command Hdqtrs
(Albert Ritter, PME107-52)
Washington, D.C. 20361

Grumman Aerospace Corporation
Research Department
(J. L. Sulby, N i/11/141)
Bethpage, NY 11714


Revised Mailing List, NRL Report 8311, p. 2

Commanding Officer
US Army Aviation Research & Dev. Command
(Steven Smith)
DRCPM-ASE-TM
PO Box 209
St. Louis, MO 63166

Commanding Officer
Naval Material Command
Naval Material Command Hdqtrs
(Clinton Spindler, MAT 08T2211)
Washington, D.C. 20350

MRJ, Inc.
(Mel Watkins)
7929 Westpark Drive
McLean, VA 20901

Def. Adv. Research Projects Agency
(Steve Zakanyscz, STO)
1400 Wilson Blvd.
Arlington, VA 22209

Commanding Officer
AFAL
(Bill Lloyd, AFAL/WRE-3)
Wright Patterson AFB
Dayton, OH 45433

Commanding Officer
AFAL
(Roger Cranos, AFAL/NVN-1)
Wright Patterson AFB
Dayton, OH 45433

Commanding Officer
Eustis Directorate
USA Air Mobility R&D Laboratory
(J. P. Ladd, SAVDL-EU-MOS)
Ft. Eustis, VA 23604

Mr. Earl J. McCartney
2 Winding Road
Rockville Centre, NY 11570


Mailing List, NRL Report 8311, p. 3

Commanding Officer
Office of Naval Research
(CDR Stanley E. Sokol)
223 Old Marylebone Rd.
London, NW1 5TH, England

Commander
Naval Air Systems Command Hdqtrs
(E. Beggs, AIR 360)
Washington, D.C. 20361

Commander
Naval Air Systems Command Hdqtrs
(Webb Whiting, 633365C)
Washington, D.C. 20361

Commander
Naval Air Systems Command Hdqtrs
(E.V. Cosgrove, 533351)
Washington, D.C. 20361

Institute for Defense Analysis
(V. Corcoran)
400 Army Navy Drive
Arlington, VA 22202

Superintendent
Naval Post Graduate School
(Dr. Allen E. Fuhs, Code 57FU)
Monterey, CA 93940

Commander
Naval Air Development Center
(Nancy MacMeekin, Code 3011)
Warminster, PA 18974

Officer in Charge
Naval Surface Weapons Center
(L. J. Fontenot, N-54)
Dahlgren Laboratory
Dahlgren, VA 22448

Director
Night Vision & EO Laboratory
(R. J. Bergemann)
Fort Belvoir, VA 22060


Mailing List, NRL Report 8311, p. 4

Chief of Naval Operations
Navy Department (OP-03)
Washington, D.C. 20350

Chief of Naval Material
Naval Material Command Hdqtrs
(R. E. Gakird, 08TE)
Washington, D.C. 20360

Commander
Naval Weapons Center
(Sterling Haaland, 39012)
China Lake, CA 93555

Commander
Naval Ocean Systems Center
(Dr. J. Richter, 532)
San Diego, CA 92152

Commander
Naval Weapons Center
(S. T. Smith, 39403)
China Lake, CA 93555

Commander
Naval Sea Systems Command
Naval Sea Systems Command Hdqtrs
(Toshio Tasaka, 03415)
Washington, D.C. 20362

Defense Advanced Research Projects Agency
(CDR Thomas Wiener)
1400 Wilson Blvd.
Arlington, VA 22209

Hughes Aircraft Co.
(Ken Powers, Rm. 9208, Bldg. 369)
PO Box 92426
Los Angeles, CA 90009

Commanding Officer
Air Force Avionics Laboratory
(Louis Meuser)
AFAL/RWP
Wright Patterson AFB
Dayton, Ohio 45433

Superintendent
Naval Post Graduate School
(LT John Nute, Code 32)
Monterey, CA 93940


Mailing List, NRL Report 8311, p. 5

Defense Intelligence Agency
(Seymour Berler, DTIA)
Washington, D.C. 20301

Chief of Naval Operations
Navy Department (OP-05)
Washington, D.C. 20350

Commanding Officer
Commander Operational Test Forces
(LCDR C. L. Sale, Code 63)
Norfolk, VA 23511

Commanding Officer
AFAL-RWI-3
Air Force Systems Laboratory
(CAPT James D. Pryce)
Wright Patterson AFB
Dayton, OH 45433

Chief of Naval Operations
Navy Department
(CAPT J. H. Eckhart, OP506G)
Washington, D.C. 20350

Commander
Naval Electronic Systems Command
(CAPT L. E. Pellock, PME-107-5R)
Washington, D.C. 20360

Chief of Naval Operations
Navy Department
(B. R. Petrie, OP 987P4)
Washington, D.C. 20350

Commander
Naval Air Systems Command Hdqtrs
(V. A. Tarulis, 360E)
Washington, D.C. 20361

Commander
Naval Sea Systems Command
(CAPT A. Skolnick, PMS-405)
Washington, D.C. 20362

Commanding Officer
National Naval Medical Center
(Elliot Postow, 43)
Bethesda, MD 20014


Mailing List, NRL Report 8311, p. 6

Commanding Officer
Naval Weapons Support Center
(D. E. Douda, 50421)
Crane, IN 47522

Commanding Officer
Naval Avionics Center
(Ronald Wesolowski, 813)
21st & Arlington Avenue
Indianapolis, IN 46218

Commander
Naval Air Test Center
(Lynn C. Krouse, SA83)
Patuxent River, MD 20670

Commanding Officer
Naval Coastal Systems Laboratory
(H. Larrimore, 751)
Panama City, FL 32401

Commanding Officer
Naval Intelligence Support Center
(C. E. Field, 53)
Washington, D.C. 20390

Commander
Pacific Missile Test Center
(Milton R. Marson, 1230)
Point Mugu, CA 93042

Commander
Naval Surface Weapons Center
White Oak Laboratory
(Paul R. Wessel, Code R40)
Silver Spring, MD 20910

Commander
Naval Surface Weapons Center
White Oak Laboratory
(O. Dengel, CR-22)
Silver Spring, MD 20910

Superintendent
Naval Postgraduate School
(E. C. Crittenden, 61Ct)
Monterey, CA 93940

Officer in Charge
Naval Ship R&D Center
(J. W. Dickey, 2732)
Annapolis, MD 21402

Mailing List, NRL Report 8311, p. 7

Commanding Officer
Naval Training Equipment Center
(George Derderian, 73)
Orlando, FL 32813

Commanding Officer
Naval Ocean Systems Center
(Gary Gilbert, 6514)
San Diego, CA 92152

Commanding Officer
Naval Ocean R&D Activity
(R. R. Goodman, 110)
NSTL Station, MS 39529

Commander
Naval Weapons Center
(E. E. Benton, 39401)
China Lake, CA 93555

Chief of Naval Research
(W. J. Condell, 421)
800 N. Quincy Street
Arlington, VA 22217

Director
Strategic Systems Project Office
(D. A. Rogers, SP-2025)
Department of the Navy
Washington, D.C. 20376