
Sensor Fault Detection Using Noise Analysis

Chao-Ming Ying and Babu Joseph*

Department of Chemical Engineering, Washington University, Campus Box 1198, St. Louis, Missouri 63130

The feasibility of sensor fault detection using noise analysis is evaluated. The noise powers at various frequency bands present in the sensor output are calculated using power spectrum density estimation and compared with historically established noise patterns to identify any abnormalities. The method is applicable to systems for which the noise is stationary under normal operating conditions. Principal component analysis (PCA) is used to reduce the space of secondary variables derived from the power spectrum. T2 statistics are used to detect deviations from the norm. We take advantage of the low-pass filtering characteristics exhibited by most process plants and closed-loop control systems, which allows the noise power at higher frequency bands to be used in the fault detection. The algorithm does not require a process model because it focuses on characterization of each individual sensor and the measurement it generates. Experimental studies with two kinds of garden-variety sensors (off-the-shelf temperature and pressure sensors) are used to validate the feasibility of the proposed approach.

Introduction

Sensors form the crucial link in providing controllers and operators with a view of the process status. Sensor failures can cause process disturbances, loss of control, profit loss, or catastrophic accidents. In process control, many (reportedly up to 60%) of the perceived malfunctions in a plant are found to stem from the lack of credibility of sensor data.1 While hard (catastrophic) failure of a sensor is easy to detect, the degradation of a sensor, such as noncatastrophic damage, error, or change in calibration and/or physical deterioration of the sensor itself, can be hard to detect during normal operation. Reliable on-line detection mechanisms would be useful for the operators and the control systems that must rely on the information provided by the sensor for decision making. Sensor fault, as used in this paper, is defined as a change in the sensor operation that causes it to send erroneous information regarding the measured variable. It could be a catastrophic fault such as a break in the connection, or it could be a more subtle change such as the signal conditioner going out of calibration. Because the former types of faults are easy to detect, we focus more on the latter type of faults in this paper. Generally, sensor fault detection and isolation (FDI) techniques can be classified as follows,2 with the assumption that the process is in normal condition:

Analytical Redundancy. This approach exploits the implicit redundancy in the static and dynamic relationships between measurements and process inputs using a mathematical process model.3-5 Different models of residual generation have been proposed:5 the parity space approach,6-8 the dedicated observer approach,9,10 the fault detection filter,11 the parameter identification method,12 and artificial neural networks.13-16 Implementation of analytical-redundancy (AR) fault detection algorithms is usually expensive because of the time and effort needed to develop, test, and identify a good process model. The method is plant-dependent; that is, for each process plant, a unique model must be constructed. An alternative to the development of an accurate dynamic process model is to use past historical data for construction of data-driven models. Using the identified relationships between different sensor measurements, principal component analysis (PCA) and partial least squares (PLS) can identify sensor faults.17-26 Although dynamic PCA is currently available in the literature,27,28 most of the application studies to date have been limited to steady-state PCA. Good steady-state detection algorithms are needed to prevent missed or false alarms. Wise and Gallagher22 give an extensive review of the process chemometrics approach to process monitoring and fault detection.

Knowledge-Based Methods. This approach uses qualitative models of the plant combined with experience-based heuristics to infer a fault. The models might include knowledge concerning patterns of signal behavior, operating conditions, or historical fault statistics.29,30 The method is also plant-dependent. It has found applications in the process industry for the diagnosis of commonly encountered (repeatedly occurring) faults. The method requires the construction and maintenance of an accurate knowledge base, and extensive knowledge of the process is required to build a comprehensive one. The method primarily focuses on catastrophic failures of sensors and/or equipment.

Measurement Aberration Detection (MAD). The two approaches discussed above are usually based on the centralized control computer or distributed control system (DCS) host computer. MAD tries to identify faults locally, at the sensor level. Yung and Clarke31 proposed the so-called local sensor validation method. However, the analysis is based on steady state. Henry and Clarke2 proposed a self-validating (SEVA) scheme. They suggest introducing some additional signals to monitor the health of a sensor as well as to identify a fault. Yang and Clarke1 describe a self-validating thermocouple according to the SEVA rationale. This approach requires additional signals complementing the sensor data; additional circuits are attached to enable the self-validation capability of the sensor.1

The concept of using noise to detect faults is not new. Recently, Luo et al.32 proposed using multiscale analysis to detect faults in a sensor. The high-frequency noise is removed first (using wavelet-based filters), and wavelet decomposition is then applied to the filtered signal. Various statistical methods are then applied to the wavelet-decomposed signal. Their recent paper28 uses dynamic PCA on the wavelet decomposition, where the dynamics of the noise power at some specific frequency band are modeled by PCA. While they focus on modeling the dynamics of the noise component in the mid-frequency band to detect faults, we propose using several high-frequency bands by sampling the signal at a high rate and monitoring their variations from normal conditions. This method does not require a dynamic process model. Also, no knowledge database is required.

* To whom correspondence should be addressed. Phone: 314-935-6076. Fax: 314-935-7211. E-mail: [email protected].

396 Ind. Eng. Chem. Res. 2000, 39, 396-407

10.1021/ie9903341 CCC: $19.00 © 2000 American Chemical Society. Published on Web 12/28/1999.

Most DCS and programmable logic control (PLC) systems employed in the process industry use fairly low sampling rates (of the order of 1 Hz). However, many sensors are capable of sampling and processing at much faster rates with the availability of low-cost A/D converters. A major impetus to explore this concept is the introduction (and expected widespread usage soon) of intelligent (microprocessor-based) sensor/transmitter devices. With the current industry trend toward elimination of field sensor wiring through the introduction of local area sensor networks such as Fieldbus and Hart-bus, it is now possible to require that a sensor supply not just data but also information regarding its validity (such as accuracy, level of confidence, expected error, possibility of total or partial failure, etc.). The methodology proposed here would not be possible without the availability of embedded microprocessors and signal processing in the sensor/transmitter unit.

The objective, then, is to explore an FDI approach using signal-processing techniques. A high sampling rate is used to capture the frequency spectrum of the measurement over a wide frequency band. Usually, the process variations are of low frequencies, below the bandwidth of the process and the control system. Hence, there is reason to suspect that the high-frequency band variations are caused by noise. By isolation and analysis of the measurement noise (which includes contributions from the process noise, the instrument noise from the signal conditioning unit, and electromagnetic interference), the method attempts to detect variations caused by defects in the sensor or signal-conditioning unit. The method isolates and monitors the high-frequency bands, in a region away from the frequency bands normally excited by process disturbances and control system actions. A major assumption used is that the noise is stationary. Using this assumption, the measurement noise can be characterized by its past history. The proposed method can be implemented locally in a signal-processing module associated with the sensor. This allows application of the algorithm to existing sensors currently in use by industry. Another advantage of this approach is that it adopts the idea of distributed computing and intelligence; that is, fault detection is performed at each local sensor, and the risk of possible system failure is distributed.

Figure 1 shows the basic structure of the proposed FDI system. Briefly, the methodology can be summarized as follows. First, a sample window of small size (small in comparison to the time constants of the process) is chosen. The measurement is sampled at a very high frequency in this window, allowing us to capture the noise present in the measurement. Next, we remove any outliers in the measurement using the criteria described below. A multiresolution of the noise spectrum using the short-time Fourier transform (STFT) is used to break up the noise into various frequency scales so that a change in the noise pattern can be detected more easily. The power spectrum densities (PSDs) at various frequency bands are calculated. The PSD represents the average noise power of all frequencies within each of the bands. This PSD is then compared against the PSD of the noise obtained when the sensor was operating normally (no-fault condition), which is stored in memory. The multiresolution allows us to capture even small changes in the noise pattern that cannot be detected by looking at the change in the mean of the power spectral density averaged over all the high-frequency bands. Instead, we look at changes in the mean of the power spectral density within each band. This results in a multivariate monitoring problem. Therefore, we will need to use multivariate statistical monitoring tools to detect changes. Because the noise in different frequency bands is likely to be correlated, PCA must also be used in the monitoring scheme. Finally, a Hotelling T2 chart allows us to graph and hence detect the change introduced by a sensor fault or sensor degradation.
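The window-by-window band-power computation just described can be sketched in Python (an illustrative sketch, not the authors' code: a single periodogram per window stands in for the STFT multiresolution, and the scaling, equal-width band layout, and numeric parameters are all assumptions):

```python
import numpy as np

def band_powers(x, fs, n_bands, f_min):
    """Average noise power in n_bands equal-width bands above f_min.
    Illustrative sketch; band layout and periodogram scaling are assumptions."""
    x = x - x.mean()                                   # remove the slow signal level
    spec = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x)) # one-sided periodogram
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    # Split the region above f_min (outside the closed-loop bandwidth)
    # into n_bands equal-width bands and average the power within each.
    edges = np.linspace(f_min, fs / 2, n_bands + 1)
    return np.array([spec[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

rng = np.random.default_rng(0)
x = rng.normal(size=5000)           # stand-in for one 5000-point window
p = band_powers(x, fs=1000.0, n_bands=16, f_min=10.0)
```

Each window thus yields a vector of band powers, which becomes one sample for the multivariate monitoring described below.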

Sensors are an integral part of a control system. The closed-loop system will have its own set of natural harmonics, which will always be present in the signal. Consider a local control system as shown in Figure 2. Usually, the process-transfer function acts as a low-pass

Figure 1. Overview of the proposed FDI system. The signal conditioner should use a wide-bandwidth filter. Otherwise, FDI should be performed before the signal is filtered.



filter; that is, the time constants in Gp are large when compared with the response time of the sensor itself. By closure of the control loop, a well-tuned controller may increase the bandwidth of the system by a factor of 4 or more. However, for most chemical systems, the closed-loop system bandwidth (less than 1 Hz in most cases) is generally small compared with the noise frequency. In this case the dynamics of the control system will not significantly alter the PSD at higher frequency bands, thus enabling correct sensor FDI even under closed-loop regulation in the presence of disturbances or under setpoint transfer.

The outline of the paper is as follows: first, the power spectrum density estimation used is discussed. Second, the advantages and disadvantages of Hotelling T2 are discussed. Third, the idea of using PCA to improve Hotelling T2 is presented. Fourth, the fault detection and isolation algorithm is outlined. Finally, to verify the methodology, experimental studies on a temperature sensor and a pressure sensor are presented.

1. Power Spectrum Density (PSD) Estimation

The PSD is the power of the noise as a function of frequency (or frequency bands). There are many ways to estimate the PSD based on a finite length of data.33,34 In this paper we use the Welch method35 because the confidence limits are easily computed.
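A minimal NumPy sketch of the Welch estimator defined by eqs 1-4 below, together with the chi-square confidence limits (eq 6) that motivate its use here; the nonoverlapping segment layout, the Hann window, and the 99.7% level are assumptions:

```python
import numpy as np
from scipy import stats  # used only for the chi-square quantiles

def welch_psd(y, Q, K, window):
    """Welch PSD over K nonoverlapping segments of length Q (eqs 1-4)."""
    U = np.sum(window ** 2) / Q                        # eq 3: window power
    J = np.empty((K, Q // 2 + 1))
    for i in range(K):
        seg = y[i * Q:(i + 1) * Q] * window            # eq 1: i-th segment
        J[i] = np.abs(np.fft.rfft(seg)) ** 2 / (Q * U) # eq 2: modified periodogram
    return J.mean(axis=0)                              # eq 4: average over segments

def confidence_limits(Bxx, K, alpha=0.997):
    """Two-sided 100*alpha% limits on Bxx, using EDF = 2K (eq 6)."""
    lo = 2 * K * Bxx / stats.chi2.ppf(0.5 + alpha / 2, 2 * K)
    hi = 2 * K * Bxx / stats.chi2.ppf(0.5 - alpha / 2, 2 * K)
    return lo, hi

rng = np.random.default_rng(1)
y = rng.normal(size=5000)           # one window of 5000 points, as in the paper
Q, K = 500, 10
Bxx = welch_psd(y, Q, K, np.hanning(Q))
lo, hi = confidence_limits(Bxx, K)
```

In practice `scipy.signal.welch` provides the same estimate; the explicit loop is shown only to mirror the equations.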

Welch's algorithm for PSD estimation is as follows. Suppose we have sampled a window with N data points: y(0), y(1), ..., y(N - 1). Welch divided the N data into K = N/Q segments, possibly overlapping, of length Q:

$y^{(i)}(n) = y(n + iQ - Q), \quad 0 \le n \le Q - 1, \quad 1 \le i \le K$  (1)

A window, w(n), is applied directly to the data segments before computation of the periodogram. K modified periodograms are then computed:

$J_Q^{(i)}(\omega) = \frac{1}{QU} \left| \sum_{n=0}^{Q-1} y^{(i)}(n)\, w(n)\, e^{-j\omega n} \right|^2, \quad i = 1, 2, ..., K$  (2)

where

$U = \frac{1}{Q} \sum_{n=0}^{Q-1} w^2(n)$  (3)

The PSD is defined as

$B_{xx}^w(\omega) = \frac{1}{K} \sum_{i=1}^{K} J_Q^{(i)}(\omega)$  (4)

which stands for the power of the noise at frequency ω (the average noise power in the frequency band centered around ω). This method of PSD estimation is asymptotically unbiased. Welch showed that if the segments of y(n) are nonoverlapping, then

$\mathrm{var}(B_{xx}^w(\omega)) = \frac{1}{K}\, p_{xx}^2(\omega)$  (5)

where pxx(ω) is the true spectral density. If y(n) is sampled from a Gaussian process, the equivalent degrees of freedom of the approximating chi-square distribution of $B_{xx}^w(\omega)$ is35

$\mathrm{EDF}\{B_{xx}^w(\omega)\} = 2K$  (6)

This is used to calculate the 100α% confidence limit on $B_{xx}^w(\omega)$. $B_{xx}^w(\omega)$ may be estimated from old data and updated with new data to improve accuracy. We can approximate

$\mathrm{var}(B_{xx}^w(\omega)) \approx \tfrac{1}{2}\left(\mathrm{var}(B_{xx}^w(\omega)\,|\,\mathrm{old}) + \mathrm{var}(B_{xx}^w(\omega)\,|\,\mathrm{new})\right)$  (7)

As can be seen from eq 5, the variance of $B_{xx}^w(\omega)$ is inversely proportional to K; that is, to keep the same spectral frequency resolution (Q), if more data are available by increasing the window size N, a more accurate estimate of the PSD with a smaller confidence limit is obtained, thus enabling the detection of faults which may not be detectable by short-length PSD analysis. However, a larger window size increases the delay in fault detection. A longer data length also requires more computation time. Because all the calculations must be done on-line, very long data PSD analysis is not feasible. We used 5000 data points to estimate the PSD in the example problems studied. Figure 3 shows a typical PSD curve (middle one) obtained along with its 99.7% confidence limits. The noise is assumed to be Gaussian.

Figure 2. A local control system.

Figure 3. A typical noise PSD pattern in thermocouple measurement data collected in a window of 20 s. The circles are the PSD of the frequency band centered around that frequency.

2. Hotelling T2 Control Chart

Consider p frequency bands, centered around ωi, i = 1, 2, ..., p. At each time interval, p PSD estimates are



available, namely, $B_{xx}^w(\omega_i)$, i = 1, 2, ..., p. If each PSD is plotted vs time, for p frequency bands, we will get p PSD plots. The PSDs at the various frequency bands are usually correlated with each other. It is better to consolidate them to determine the sensor status. One simple way to do this is the Hotelling T2 control chart.20,36

Suppose the PSD data we obtain at time k are expressed in vector form as

$B_k = [B_{xx}^w(\omega_1,k),\ B_{xx}^w(\omega_2,k),\ ...,\ B_{xx}^w(\omega_p,k)]^T$  (8)

The purpose is to test whether Bk deviates significantly from the mean μ, where μ = [μ1, μ2, ..., μp]T is the vector of means for each frequency band under normal conditions. The test statistic plotted on the control chart for each sample is

$T^2 = (B_k - \mu)^T \Sigma^{-1} (B_k - \mu)$  (9)

where Σ is the covariance matrix. Both μ and Σ are estimated from samples of data in the learning stage:

$\mu = \frac{1}{m} \sum_{k=1}^{m} B_k, \qquad \Sigma = \frac{1}{m-1} \sum_{k=1}^{m} (B_k - \mu)(B_k - \mu)^T$  (10)

The test is

$T^2 > \frac{(m-1)(m+1)p}{m(m-p)}\, F_\alpha(p, m-p)$  (11)

where Fα(p,m-p) is the upper 100α% critical point of the F distribution with p and m - p degrees of freedom, and m is the number of samples used to estimate μ and Σ. If eq 11 is true, a fault is detected. Otherwise, we conclude that the sensor is in normal condition. The T2 test is applied on a periodic basis to see if the sensor noise characteristics have changed. Because we are focusing on frequencies much higher than the bandwidth of the process and control system, a change in this T2 implies that the sensor characteristics have changed. If this change is persistent, then the T2 chart will reflect this over subsequent samples.

In practice, the percent level of confidence (α) must be selected with care to balance Type I and Type II errors. Rather than use this as an absolute yardstick of the existence of a fault in a sensor, we suggest that it be used to determine a degree of confidence in the measurement itself. Hence, the further away the T2 chart moves from the expected value, the higher the probability that a fault has occurred. This is verified in the experimental tests described below.

3. Principal Component Analysis (PCA)

The Hotelling T2 control chart works fine when the correlation between variables is not very high. If the correlation is very high, then Σ will be close to singular. In this case, the Hotelling T2 control chart will not work well because a small estimation error in Σ will change the T2 value greatly, causing false alarms. One remedy here is to extract the uncorrelated part of the data using principal component analysis (PCA).

PCA finds combinations of variables that describe major trends in the data, thus reducing the actual number of variables to be monitored. Suppose the PSD data we obtained at time k are denoted by

$B_k = [B_{xx}^w(\omega_1,k),\ B_{xx}^w(\omega_2,k),\ ...,\ B_{xx}^w(\omega_p,k)]^T$  (12)

and m samples are available in the learning stage to build a PCA model. These m samples construct an m × p matrix X with each variable being a column and each sample a row:

$X = \begin{bmatrix} B_{xx}^w(1,\omega_1) & \cdots & B_{xx}^w(1,\omega_p) \\ \vdots & & \vdots \\ B_{xx}^w(m,\omega_1) & \cdots & B_{xx}^w(m,\omega_p) \end{bmatrix}$  (13)

PCA decomposes the data matrix X as the sum of the outer products of the vectors ti (scores) and pi (loads) plus a residual matrix, E:22

$X = TP^T + E = \sum_{i=1}^{l} t_i p_i^T + E$  (14)

Principal component projection reduces the original set of variables to l principal components (PCs). In practice, 2 or 3 PCs are often sufficient to explain most of the predictable variations in the sensor performance.20 The Hotelling T2 statistic can now be applied to the l PCs (note that the covariance matrix of the ti's is a diagonal matrix because of orthogonality):

$T_l^2 = \sum_{i=1}^{l} \frac{t_i t_i^T}{\lambda_i}$  (15)

where ti in this instance refers to the ith row of T. Statistical confidence limits for the values of T2 can be calculated by means of the F distribution as follows:20

$T_{l,m,\alpha}^2 = \frac{l(m-1)(m+1)}{(m-l)m}\, F_\alpha(l, m-l)$  (16)

where m is the number of samples used to develop the PCA model in the learning stage.

It is important to note that T2 statistics make some assumptions regarding the distribution of the data, specifically, that it is multivariate normal.22 Clearly, the PSD data are often not normally distributed (chi-square distributed if the noise is Gaussian). However, the central limit theorem states that sums of several different groups will tend to be normally distributed, regardless of the probability distribution of the individual groups.22 This suggests that the PCA-based method, where scores are a weighted sum of individual variables, will tend to produce measures that are more normally distributed than the original data. This is verified by the experimental studies presented later.

4. Sensor Fault Detection and Isolation Algorithm

The proposed sensor FDI algorithm can be divided into several stages: the learning stage, the fault detection stage, and the fault isolation stage.

4.1. Learning Stage. The learning stage is triggered when the FDI module is first installed into the sensor or when the process changes its operating conditions significantly, so that the observed noise PSD pattern also changes. The purpose of the learning stage is to build the PCA model and T2 statistics.

(1) Determine the data length of each batch, the sampling rate, the confidence percentage α, the number of learning samples, and the frequency bands to monitor. The lowest frequency band should be well above the control system bandwidth.

(2) Monitor the process for a specified time under normal sensor conditions to get learning samples. Calculate the PSD distribution as well as the 100α% confidence limit. The confidence limit can be calculated directly if the noise distribution is known; otherwise, an estimation method such as a histogram should be used instead.

(3) Build a PCA model from the sampled PSD data set and use 2 or 3 principal components to form the Hotelling T2 statistic.
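The learning-stage steps can be sketched as follows, assuming the learning samples are already collected into an m × p matrix X of band powers (NumPy's SVD stands in for the PLS_Toolbox model building the authors used; all sizes are illustrative):

```python
import numpy as np
from scipy import stats

def learn_model(X, l=2, alpha=0.997):
    """Fit an l-component PCA model to an m x p matrix of normal-condition
    PSD samples and return what the detection stage needs (a sketch)."""
    m, p = X.shape
    mu = X.mean(axis=0)
    Xc = X - mu
    # PCA via SVD: rows of Vt are the loadings, s**2/(m-1) the PC variances
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:l].T                                   # p x l loading matrix
    lam = s[:l] ** 2 / (m - 1)                     # retained eigenvalues
    # T2 control limit from the F distribution (the paper's eq 16 form)
    T2_lim = l * (m - 1) * (m + 1) / ((m - l) * m) * stats.f.ppf(alpha, l, m - l)
    return mu, P, lam, T2_lim

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 16))      # 100 learning samples, 16 frequency bands
mu, P, lam, T2_lim = learn_model(X)
```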

4.2. Fault Detection Stage. This is the working stage of the proposed method. It reads some length (window size) of data and tests the health of the sensor. Fatal failures such as an open circuit (zero reading) or hard-over (full-scale reading) are easily detectable using conventional methods and therefore are not considered here.

(1) Read a batch of data at each time interval; perform PSD estimation to form a new PSD sample.

(2) Use the PCA model developed in the learning stage and monitor the T2 value of the new sample. If T2 is above the confidence limit (learned in the learning stage), give a warning. If T2 continues to be over the limit, a sensor fault alarm should be activated.

(3) If there is a sensor fault, trigger the sensor fault isolation stage to determine possible faults.
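The detection steps can be sketched as follows (a toy-model sketch: the identity loadings, unit variances, limit value, and the three-consecutive-violations alarm rule are all our illustrative assumptions for "continues to be over the limit"):

```python
import numpy as np

def t2_score(b_new, mu, P, lam):
    """Hotelling T2 of one new PSD sample on the l retained PCs."""
    t = (b_new - mu) @ P              # scores of the new sample
    return float(np.sum(t ** 2 / lam))

def monitor(samples, mu, P, lam, T2_lim, n_persist=3):
    """Warn on any violation; alarm only when it persists (assumed rule)."""
    run = 0
    for b in samples:
        run = run + 1 if t2_score(b, mu, P, lam) > T2_lim else 0
        if run >= n_persist:
            return "fault alarm"
    return "normal"

# Toy model: identity loadings on the first 2 of 16 bands, unit variances
mu, P, lam = np.zeros(16), np.eye(16)[:, :2], np.ones(2)
T2_lim = 13.8                          # roughly a 99.9% chi-square(2) point
rng = np.random.default_rng(3)
healthy = rng.normal(size=(50, 16))            # normal noise pattern
faulty = rng.normal(size=(5, 16)) + 10.0       # shifted noise pattern
```

Sporadic single exceedances only warn; a persistent shift in the band powers drives T2 above the limit on consecutive samples and raises the alarm.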

Setpoint transfers and process disturbances should not trigger a fault alarm because they affect only the low-frequency bands (see experimental results below). Middle- to high-frequency bands are monitored to detect sensor faults. If the process operating conditions have changed significantly, then there is a possibility that the assumption of noise stationarity is no longer valid. In this case, it may be necessary to invoke the learning stage again. Field tests are necessary to determine the range of validity of the stationarity assumption.

4.3. Fault Isolation Stage. Fault isolation is not as critical as fault detection. However, information on the possible fault type may help the operator remove the fault quickly. A simple way is to read the PSD plots with confidence limits based on Welch's method. If different faults cause the PSD plots to change in different ways, then we may be able to identify the fault from this pattern. We do not investigate this possibility in detail at this time.

5. Parameters Used in the Algorithm

The algorithm set forth above requires a number of design parameters to be specified. The important ones are discussed below.

(1) Sampling Frequency: A high sampling frequency (≥1000 Hz) is generally preferred to get sufficient information on the noise power distribution. For most chemical processes, sampling rates from several hundred to several thousand Hz may be used. The minimum sampling frequency should be higher than the closed-loop process stopband cutoff frequency (≤10 Hz). This minimum requirement can be easily met with current A/D and microchip technology.

(2) Data Window Size: This is the size of the data sampled during one fault detection interval. Usually, this should be large enough (∼1000 points) to ensure sufficiently accurate estimation of the PSD. A longer window size is preferred but is limited by the computation power and the promptness requirement of fault detection.

(3) Frequency Band Selection: The minimum frequency band used for fault detection should be higher than the closed-loop process stopband cutoff frequency. (For the examples we studied in our laboratory, we used frequencies higher than 10 Hz.) However, the minimum frequency band should also be close to the process stopband cutoff frequency to capture most of the noise not affected by the process dynamics. The number of bands used cannot be too small, as that would obscure variations. A very large number of bands is also not desirable because the accuracy of the PSD estimation is reduced. In our examples, a number of frequency bands larger than 8 and less than 32 proved to be satisfactory.

(4) Number of Principal Components: The number of PCs that provide an adequate description of the data can be assessed using a number of methods.37 Some general rules of thumb based on inspecting the eigenvalue versus PC plot are available.38 Principal components with large variance are usually retained. However, exceptions do exist where some PCs with small variances may prove to be more physically significant than those with large variances.39-41 For many cases, however, 2-3 principal components are usually enough to describe the relationships existing in the data.

(5) Percent Level of Confidence α: The higher the percent level of confidence, the lower the probability of Type I errors (false alarms) and the higher the probability of Type II errors (missed alarms). The percent level of confidence is usually determined by the importance of a sensor. The most crucial sensors should use the lowest percent level of confidence in fault detection because one cannot afford a Type II error. However, too low a percent level of confidence will result in short average run lengths (ARLs). In this regard, we suggest that rather than using the confidence limit as an absolute criterion for determining the existence of a fault, one should use the T2 variation as a measure of the confidence we have in a change in the sensor characteristics. Thus, large variations in T2 values should be taken as an indication that the noise pattern has changed. The engineer monitoring the process can then decide whether this change justifies further field tests or calibration of the sensor.

(6) Outlier Detection and Removal: With the assumption that the noise is Gaussian, the probability of a measurement falling outside 3σ is 0.3%. If a measurement falls outside that range, one may discard it as an outlier. For non-Gaussian signals, higher limits may be used. Because the measurement is assumed to change slowly compared with the sampling frequency, outlier detection and removal can be done in a piecewise manner, that is, window by window.
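The window-by-window 3σ rule can be sketched as follows; the simulated spikes stand in for the occasional A/D glitches seen in the experiments below, and the 500-point window mirrors the piecewise estimation used there.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=2000)             # simulated noise record
x[100] = 25.0                         # inject spikes such as A/D glitches
x[1500] = -30.0

def remove_outliers(x, win=500, k=3.0):
    # Estimate mean and sigma per window; discard |x - mean| > k*sigma.
    kept = []
    for i in range(0, len(x), win):
        w = x[i:i + win]
        mu, sigma = w.mean(), w.std()
        kept.append(w[np.abs(w - mu) <= k * sigma])
    return np.concatenate(kept)

clean = remove_outliers(x)
```

A handful of legitimate Gaussian samples (about 0.3%) are discarded along with the spikes, which has negligible effect on the PSD estimate.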

6. Experimental Studies Using a Temperature Sensor

Experiments were conducted using an off-the-shelf temperature sensor and a pressure sensor to test the validity of the proposed scheme. The temperature sensor used is a type K thermocouple. The experimental setup shown in Figure 4 was used to test fault detection in the temperature sensor. Note the closed-loop nature of

400 Ind. Eng. Chem. Res., Vol. 39, No. 2, 2000


the process. PLS•Toolbox 2.0.1 (ref 38) was used to build the PCA model.

The temperature of the air exiting from a hot air blower was measured using a thermocouple. A well-tuned controller was commissioned to read the sensor measurement and manipulate the heater power to control the temperature.

A discrete PID controller with a sample time of 1 s was used in the control system. This introduced discrete step changes in the manipulated variable because of the zero-order hold present. To verify that the control system affects only the low-frequency range, the frequency response of the closed-loop system transfer function is plotted in Figure 5. This plot was generated using a first-order plus dead-time (FOPDT) model of the process. The process time constant was estimated to be 55 s and the process delay to be 1 s. (These numbers are relatively small in comparison with the time constants encountered in actual industrial processes.) Despite the discrete nature of the PID controller used, the high-frequency bands were well-suppressed in the closed-loop system. For example, at frequency = 125/16 × 4 = 31.25 Hz, the amplitude of the process output will be less than 0.001 of the original amplitude at the setpoint. A unit step function has an amplitude of 0.0051 at a frequency of 31.25 Hz. Thus, a unit step setpoint change will cause a process output amplitude change of less than 5.1 × 10^-6, that is, a power change of 2.6 × 10^-11 at a frequency of 31.25 Hz. It was determined experimentally that the noise power is mostly on the order of 1 × 10^-6 (see Figure 6). Thus, the setpoint change will have virtually no effect on the noise component of the measurement.
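The attenuation arithmetic in this paragraph can be checked directly. The 1/(2πf) step magnitude and the 0.001 closed-loop gain bound are taken from the text; this script only reproduces the multiplication, not the FOPDT model itself.

```python
import math

f_band = 125.0 / 16.0 * 4.0                  # lowest monitored band, 31.25 Hz
step_amp = 1.0 / (2.0 * math.pi * f_band)    # |unit step| at f_band, ~0.0051
gain_bound = 1e-3                            # closed-loop |G| bound at f_band

out_amp = gain_bound * step_amp              # amplitude change, ~5.1e-6
out_power = out_amp ** 2                     # power change, ~2.6e-11
```

Both numbers agree with the values quoted in the text, confirming that a setpoint step is negligible relative to the ~1e-6 noise power in these bands.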

The following design parameters were selected:

(1) Sampling Frequency: 250 Hz. This is high enough to capture most of the noise, and this sampling frequency is well above the closed-loop process stopband cutoff frequency, which is around 10 Hz.

(2) Data Window Size: 5000 data points, corresponding to 20 s. This amount of data is enough to provide a good estimation of the noise power. Also, a 20-s delay in fault detection was considered acceptable.

(3) Frequency Band Selection: The whole frequency range is divided into 16 bands, of which the 10 frequency bands centered around [125/16 × 4, 125/16 × 5, ..., 125/16 × 13] Hz are used. The lowest frequency band, centered at 31.25 Hz, is higher than the process stopband cutoff frequency and is low enough to capture a wide range of noise.

(4) Outlier Detection and Removal: The output from the sensor was seen to have occasional random spikes (probably introduced by the signal-conditioning unit or the A/D unit), which might be outliers and not part of the inherent noise coming from the process and instrument (see Figure 11). These spikes were detected and removed using a simple 3σ threshold check. Every 2 s (500 data points), the variance of the data is estimated, and measurements above 3σ or below -3σ are discarded.

(5) Percent Level of Confidence: 95%. The noise is assumed to be Gaussian.

First, in the learning stage, the PCA model and T2 statistics are built from 20 samples (corresponding to 400 s). Figure 6 shows the PSD data used for training from the 10 frequency bands we selected. Table 1 shows the percent variance captured by the PCA model. It is clear that 2 PCs are enough to express the data well.

The PCA model and T2 statistics built from the learning stage are subsequently tested in the fault detection stage. To verify that the noise is stationary at different times and different temperature levels with normal sensor health, the experiment was repeated 5 days later, once at the same operating temperature and once at a different operating temperature. T2 values are plotted using the PCA model from the learning stage.

Figure 4. Air temperature control setup.

Figure 5. Frequency response of the temperature control system (closed-loop, response to setpoint changes).

Figure 6. PSD data used for training the PCA model. The vertical axis is the PSD estimate at the frequency band centered at f, and the horizontal axis is the sample number. The dashed lines are the estimated upper and lower 99.7% confidence limits on the PSD estimation based on eq 7 at the specified frequency band.


Figures 7 and 8 show that the noise is indeed time- and temperature-independent over this range of operation.

For a setpoint change under normal conditions, the PSD in the middle- to high-frequency bands is not affected; thus, T2 should remain below the confidence limit. Figure 9 shows the result when the controller setpoint changes suddenly from 50 to 70 °C. T2 stays below the confidence limit (with only one exception, which can be attributed to the random nature of the noise). The setpoint change did raise some T2 values for a short period immediately after the change occurred, and some points are close to the confidence limit. A possible cause is violation of the noise stationarity assumption during the transition; thus, the percent level of confidence should be set carefully. On the whole, however, this change in T2 is still small compared with that for a sensor fault.

Next, we deliberately introduced some faults to the sensor. Again, the PCA model from the learning stage is used to calculate the T2 values. Three types of faults are considered here: coating fault, dislocation fault, and span error.

(1) Coating Fault: Sensor performance degradation caused by scaling deposits on the sensor is a frequent occurrence. With coating, the time constant of the thermocouple will increase, thus filtering out some of the high-frequency components. Figure 10 plots the T2 chart when the thermocouple is coated with an inert material instantaneously. The T2 value goes over the confidence limit and stays there persistently (even after sample number > 70), indicating the presence of a fault. The coating acts as a filter for the noise and hence manifests itself in the PSD. Scale buildup usually occurs gradually over a period of time. This will be manifested as an upward trend in the T2 plot, and when the buildup reaches steady state, T2 will remain above the confidence limit. Here, we show the T2 plot before and after the fault in Figure 10. This figure also shows that the fault has led to a persistent excursion of the T2 plot beyond its normal range. In this case, the deviation will depend on the extent of coating present. We suggest that the T2 plot be used very much like a control chart, showing possible deviations in the sensor behavior. Although we have not shown the result here, the noise pattern remained stationary after the coating fault was applied. If the coating

Table 1. Percent Variance Captured by the PCA Model

principal   eigenvalue of            % variance         % variance
component   Cov(X) × (1.0 × 10^12)   captured this PC   captured total
    1           6.13 × 10^-1              72.78              72.78
    2           1.89 × 10^-1              22.43              95.21
    3           1.88 × 10^-2               2.23              97.44
    4           1.25 × 10^-2               1.49              98.93
    5           4.72 × 10^-3               0.56              99.49
    6           1.63 × 10^-3               0.19              99.68
    7           1.48 × 10^-3               0.18              99.86
    8           7.04 × 10^-4               0.08              99.94
    9           3.70 × 10^-4               0.04              99.98
   10           1.35 × 10^-4               0.02             100.00

Figure 7. Noise stationarity verification: the testing data are sampled 5 days later.

Figure 8. Noise stationarity verification: the testing data are sampled at a higher temperature (70 °C) than that at the learning stage (50 °C).

Figure 9. T2 chart when the process undergoes a step setpoint change (from 50 to 70 °C). The step change occurred at time 0.

Figure 10. T2 chart under coating fault (fault occurs at sample no. = 20).


occurred gradually over a period of time, we would have observed a gradual shift in the T2 chart. Hence, the method is independent of the dynamics of the fault itself. Figure 11 shows the sensor reading before and after the coating fault is introduced. The fault does not manifest itself clearly in the time domain.

(2) Sensor Dislocation: Sometimes a sensor may be dislocated because of mechanical damage or the impact of the medium in which it is located. In such a case, the measurement will not reflect the true temperature at the original location. Figure 12 shows the T2 chart when the sensor is dislocated by 129 mm. The noise characteristics have changed, and the T2 chart shows a fault.

(3) Span (Gain) Error: The span of the signal-conditioning unit usually changes with time, requiring periodic recalibration of the instrument. A change in span will magnify or reduce the signal power, causing error in the measurement. The noise PSD will also be magnified or reduced as a result of any change in the gain of the transmitter. T2 charts are plotted in Figures 13 and 14 for different span changes. It can be seen that the method detects both +20% and -20% span changes in the sensor.
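A quick numerical illustration of why a span error shows up in the noise PSD: a pure gain change of (1 + δ) scales the signal, and therefore every PSD band, by (1 + δ)², shifting all 10 secondary variables at once. The signal here is simulated.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=4096)              # simulated noise through the transmitter
delta = 0.20                           # +20% span (gain) error

def power(x):
    # Total power estimated as the mean square of the signal.
    return float(np.mean(x ** 2))

# A gain of (1 + delta) scales the power, and each PSD band, by (1 + delta)^2.
ratio = power((1.0 + delta) * x) / power(x)
```

A +20% span change thus raises every band power by 44%, a coherent shift across all frequency bands that the T2 statistic detects readily.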

The above tests show the feasibility of this fault detection method. In case one needs to analyze the fault type, PSD plots versus time may be helpful. Figure 15 is an example of the PSD plots when the sensor is covered by cotton. After the coating, the PSD drops below the lower confidence limit. Given such a graph, the possible faults could be a coating fault, a stuck fault, or a span error. The method cannot distinguish the kind

Figure 11. Time-domain signal when there is a coating fault in the temperature sensor. (10 000 data points are plotted. The spikes show the possible existence of outliers.)

Figure 12. T2 chart under dislocation fault (fault occurs at sample no. = 20).

Figure 13. T2 chart under -20% span change (fault occurs at sample no. = 20).

Figure 14. T2 chart under +20% span change (fault occurs at sample no. = 20).

Figure 15. PSD plots versus time (10 frequency bands are plotted). The vertical axis is the PSD estimate at the frequency band centered at f, and the horizontal axis is the sample number. The dashed lines are the estimated upper and lower 99.7% confidence limits on the PSD estimation based on eq 7 at the specified frequency band.


of fault that caused the observed variations in the PSD plot.

7. Experimental Studies Using a Pressure Sensor

We have also tested the proposed FDI method on a pressure sensor (Omega PX26-015GV). The experimental setup is shown in Figure 16.

The sensor is used to measure the pressure at the bottom of the tank. A well-tuned controller is used to maintain the pressure inside the tank. The following design parameters are selected:

(1) Sampling Frequency: 1000 Hz. This is high enough to capture most of the noise, and this sampling frequency is well above the closed-loop process stopband cutoff frequency, which is around 50 Hz.

(2) Data Window Size: 5000 data points, corresponding to 5 s. This amount of data is enough to provide a good estimation of the noise power. Also, a 5-s delay in fault detection is not a major concern.

(3) Frequency Band Selection: The whole frequency range is divided into 16 bands, of which the 10 frequency bands centered around [500/16 × 4, 500/16 × 5, ..., 500/16 × 13] Hz are used. The lowest frequency band, centered at 125 Hz, is higher than the process stopband cutoff frequency and is low enough to capture a wide range of noise.

(4) Outlier Detection and Removal: The same outlier detection algorithm as in the temperature sensor experiment is used. Every 0.5 s (500 data points), the variance of the data is estimated, and measurements above 3σ or below -3σ are discarded.

(5) Percent Level of Confidence: 95%. The noise is assumed to be Gaussian.

(6) Number of Principal Components: 2 PCs are kept, which capture 80% of the total variance.

Noise Stationarity Verification. The sensed pressure is maintained at 5.85 psi (or 6.67 psi) for a long time, and the noise stationarity property is checked. Figures 17 and 18 show that the noise characteristics do not change with time or pressure. Also, the setpoint change does not affect fault detection much, as shown in Figure 19. As discussed in the former example, the noise may not be stationary during a transition, but the T2 change is small compared with that of a fault. Although one point in Figure 17 lies above the limit, this is to be expected occasionally because of the random nature of the noise.

Several kinds of faults were introduced deliberately. The PCA model from the learning stage is used to calculate the T2 values. Figures 20 and 21 show T2 charts when a span error is introduced. The proposed method detects this change quite well.

Figures 22 and 23 show the effect of an air bubble introduced deliberately into a 30-cm-long tube connecting the tank and the sensor. Apparently, a small air bubble (5 cm long) does not activate the alarm, while a large air bubble (15 cm long) does. When the separate Fluke calibration unit (see Figure 16) was used, it was seen that the small bubble did not appreciably affect the pressure reading, whereas the larger air bubble caused a measurement error of around 4% (at steady state).

Figure 24 plots the T2 chart when the sensor has been slightly damaged as a result of overpressurization. The full scale of the sensor is 15 psi, and the burst pressure

Figure 16. Pressure sensor experiment setup.

Figure 17. T2 chart after 7 days (at pressure 5.85 psi).

Figure 18. T2 chart at a higher pressure (6.67 psi).

Figure 19. T2 chart under setpoint change (from 5.85 to 6.67 psi).


is 50 psi. The actual pressure was kept at 25-30 psi for 30 min, causing slight damage to the sensor. The sensor no longer functions normally and has an actual error of about 0.3-0.7 psi, as measured independently using the Fluke calibration unit. See Figure 25 for the calibration before and after the damage. The proposed method detects such a fault well. At the conditions of operation, the damage caused an error of 0.35 psi in the measurement (verified using the Fluke pressure calibration unit).

8. Conclusions

In this paper, we proposed an approach to detect sensor faults by analyzing the high-frequency noise present in the measurement. Experimental studies with a temperature sensor and a pressure sensor show the feasibility of the method.

The method takes advantage of the emergence of significant local computing power expected to be embedded in future generations of intelligent sensors.

Figure 20. T2 chart under -20% span change (fault occurs at sample no. = 20).

Figure 21. T2 chart under +20% span change (fault occurs at sample no. = 20).

Figure 22. T2 chart when there is an air bubble (length = 5 cm) in the tube connecting the sensor to the tank ("fault" occurs after sample no. = 20).

Figure 23. T2 chart when there is an air bubble (length = 15 cm) in the tube connecting the sensor to the tank (fault occurs after sample no. = 20).

Figure 24. T2 chart after the sensor has been slightly damaged by overpressurization.

Figure 25. Pressure sensor calibration before and after slight damage caused by overpressurization of the sensor.


It requires sampling the signal at a high frequency, isolating the noise content in the high-frequency bands through standard signal-processing techniques, and then monitoring these signals using multivariable statistical control charts. Principal component analysis is used to remove the correlation of the noise among the frequency bands.

One key assumption used is that the noise is stationary and that any change in the noise characteristics can be attributed to a sensor fault. If the noise is nonstationary, then this method will not be applicable. For the two examples studied, the assumption was verified experimentally. Significant changes in the process operation will require relearning the noise characteristics at the new operating conditions.

Only certain types of faults can be detected using this method. It may not be possible to uniquely identify the detectable faults, although some classification is possible.

While we have established the feasibility of the approach, some questions remain unanswered. Because our experiments were carried out under laboratory conditions, there is a need to verify the validity of the assumptions used in industrial environments. The most important of these is the assumption of stationarity of the noise. Although numerous authors have used this assumption in the past, it is necessary to verify it before the proposed technique can be used.

We have also addressed some issues concerning the choice of parameters in the algorithm. These include the sampling frequency, the data window size, the frequency band selection, and the percent level of confidence. While many of the parameters can be selected on the basis of an analysis of the application at hand, definite guidelines and a thorough analysis of the consequences of each selection need to be developed.

On the whole, the method looks promising, especially because of the inherent advantages it possesses, such as localizing the self-diagnosis process, the ability to embed the scheme into the signal-conditioning module hardware, the ease of training the fault detection algorithm, and the ability to complement more powerful fault detection algorithms that are based on multiple sensors and a process model.

Acknowledgment

Support provided by NSF grant CTS-95-29578 is gratefully acknowledged.

Notation

Bxx = averaged PSD estimate from K segments
E = residual matrix
J = PSD estimate with one segment
K = number of segments
l = number of principal components
m = number of samples
N = number of data points in one window (window size)
p = number of frequency bands
pi = loading vector
pxx = true PSD
ti = score vector
T2 = defined in eq 9
Q = number of data points in each segment
U = defined in eq 3
w(n) = window function
X = data matrix
y(i) = the ith data point
µ = estimated mean
Σ = estimated covariance matrix
λi = the ith eigenvalue

Literature Cited

(1) Yang, J. C.-Y.; Clarke, D. W. A Self-Validating Thermocouple. IEEE Trans. Control Syst. Technol. 1997, 5 (2), 239.

(2) Henry, M. P.; Clarke, D. W. The Self-Validating Sensor: Rationale, Definitions and Examples. Control Eng. Practice 1993, 1 (4), 585.

(3) Isermann, R. Process Fault Detection Based on Modeling and Estimation Methods - A Survey. Automatica 1984, 20 (4), 387-404.

(4) Patton, R.; Frank, P.; Clark, R. Fault Diagnosis in Dynamic Systems: Theory and Application; Prentice Hall International (UK) Ltd.: Hertfordshire, UK, 1989.

(5) Frank, P. M. Fault Diagnosis in Dynamic Systems Using Analytical and Knowledge-Based Redundancy - A Survey and Some New Results. Automatica 1990, 26 (3), 459-474.

(6) Gertler, J.; Singer, D. A New Structural Framework for Parity Equation-Based Failure Detection and Isolation. Automatica 1990, 26, 381-388.

(7) Tsai, T. M.; Chou, H. P. Sensor Fault Detection with the Single Sensor Parity Relation. Nucl. Sci. Eng. 1993, 114, 141-148.

(8) Patton, R. J.; Chen, J. Review of Parity Space Approaches to Fault Diagnosis for Aerospace Systems. J. Guidance Control Dynam. 1994, 17 (2), 278-284.

(9) Keller, J. Y.; Summerer, L.; Boutayeb, M.; Darouach, M. Generalized Likelihood Ratio Approach for Fault Detection in Linear Dynamic Stochastic Systems with Unknown Inputs. Int. J. Syst. Sci. 1996, 27 (12), 1237-1241.

(10) Chang, C. T.; Hwang, J. I. Simplification Techniques for EKF Computations in Fault Diagnosis: Model Decomposition. AIChE J. 1998, 44 (6), 1392-1403.

(11) Wilbers, D. N.; Speyer, J. L. Detection Filters for Aircraft Sensor and Actuator Faults. In ICCON '89: Proceedings of the IEEE International Conference on Control and Applications, Jerusalem, April 1989; IEEE: New York, 1989.

(12) Kage, K.; Joseph, B. Measurement Selection and Detection of Measurement Bias in the Context of Model-Based Control and Optimization. Ind. Eng. Chem. Res. 1990, 29 (10), 2037-2044.

(13) Watanabe, K.; Hirota, S.; Hou, L.; Himmelblau, D. M. Diagnosis of Multiple Simultaneous Fault via Hierarchical Artificial Neural Networks. AIChE J. 1994, 40 (5), 839-848.

(14) Tsai, C. S.; Chang, C. T. Dynamic Process Diagnosis via Integrated Neural Networks. Comput. Chem. Eng. 1995, 19 (Suppl.), S747-S752.

(15) Tzafestas, S. G.; Dalianis, P. J. Artificial Neural Networks in the Fault Diagnosis of Technological Systems: A Case Study in Chemical Engineering Process. Eng. Simul. 1996, 13, 939-954.

(16) Brydon, D. A.; Cilliers, J. J.; Willis, M. J. Classifying Pilot-Plant Distillation Column Faults Using Neural Networks. Control Eng. Practice 1997, 5 (10), 1373-1384.

(17) Leonard, J. A.; Kramer, M. A. Radial Basis Function Networks for Classifying Process Faults. IEEE Control Syst. Magazine 1991, 11 (3), 31-38.

(18) Kramer, M. A. Nonlinear Principal Component Analysis Using Auto Associative Neural Networks. AIChE J. 1991, 37 (2), 233-243.

(19) MacGregor, J. F.; Jaeckle, C.; Kiparissides, C.; Koutodi, M. Process Monitoring and Diagnosis by Multiblock PLS Methods. AIChE J. 1994, 40, 826.

(20) MacGregor, J. F.; Kourti, T. Statistical Process Control of Multivariate Processes. Control Eng. Practice 1995, 3 (3), 403-414.

(21) Dunia, R.; Qin, S. J.; Edgar, T. F.; McAvoy, T. J. Identification of Faulty Sensors Using Principal Component Analysis. AIChE J. 1996, 42, 2797-2812.

(22) Wise, B. M.; Gallagher, N. B. The Process Chemometrics Approach to Process Monitoring and Fault Detection. J. Process Control 1996, 6 (6), 329-348.

(23) Raich, A.; Cinar, A. Statistical Process Monitoring and Disturbance Diagnosis in Multivariable Continuous Processes. AIChE J. 1996, 42 (4), 995-1009.


(24) Maryak, J. L.; Hunter, L. W.; Favin, S. Automated System Monitoring and Diagnosis via Singular Value Decomposition. Automatica 1997, 33 (11), 2059-2063.

(25) Dunia, R.; Qin, S. J. A Unified Geometric Approach to Process and Sensor Fault Identification and Reconstruction: The Unidimensional Fault Case. Comput. Chem. Eng. 1998, 22 (7-8), 927-943.

(26) Qin, S. J.; Yue, H.; Dunia, R. Self-Validating Inferential Sensors with Application to Air Emission Monitoring. Ind. Eng. Chem. Res. 1997, 36, 1675-1685.

(27) Bakshi, B. R. Multiscale PCA with Application to Multivariate Statistical Process Monitoring. AIChE J. 1998, 44 (7), 1596-1610.

(28) Luo, R.; Misra, M.; Himmelblau, D. Sensor Fault Detection via Multiscale Analysis and Dynamic PCA. Ind. Eng. Chem. Res. 1999, 38 (4), 1489-1495.

(29) Nam, D. S.; Jeong, C. W.; Choe, Y. J.; Yoon, E. S. Operation-Aided System for Fault Diagnosis of Continuous and Semi-Continuous Processes. Comput. Chem. Eng. 1996, 20 (6-7), 793-803.

(30) Guan, J.; Graham, J. H. An Integrated Approach for Fault Diagnosis with Learning. Comput. Ind. 1996, 32, 33-51.

(31) Yung, S. K.; Clarke, D. W. Sensor Validation. Meas. Control 1989, 22, 132-150.

(32) Luo, R.; Misra, M.; Qin, S. J.; Barton, R.; Himmelblau, D. Sensor Fault Detection via Multiscale Analysis and Nonparametric Statistical Inference. Ind. Eng. Chem. Res. 1998, 37 (3), 1024-1032.

(33) Rabiner, L. R.; Gold, B. Theory and Application of Digital Signal Processing; Prentice-Hall: Englewood Cliffs, NJ, 1975.

(34) Oppenheim, A. V.; Schafer, R. W. Digital Signal Processing; Prentice-Hall: Englewood Cliffs, NJ, 1975.

(35) Welch, P. D. The Use of Fast Fourier Transform for the Estimation of Power Spectra: A Method Based on Time Averaging Over Short, Modified Periodograms. IEEE Trans. Audio Electroacoust. 1967, AU-15 (2), 70-73.

(36) Montgomery, D. C. Introduction to Statistical Quality Control, 2nd ed.; John Wiley & Sons: New York, 1991.

(37) Jackson, J. E. A User's Guide to Principal Components; John Wiley & Sons: New York, 1991.

(38) Wise, B. M.; Gallagher, N. B. PLS•Toolbox 2.0 for Use with MATLAB; Eigenvector Research, Inc.: Manson, WA, 1998.

(39) Mandel, J. Principal Components, Analysis of Variance, and Data Structure. Stat. Neerland. 1972, 26 (3), 119-129.

(40) Jolliffe, I. T. A Note on the Use of Principal Components in Regression. Appl. Stat. 1982, 31 (3), 300-303.

(41) Thomas, M. M. Quality Control of Batch Chemical Processes with Application to Autoclave Curing of Composite Laminate Materials. D.Sc. Dissertation, Washington University, St. Louis, MO, Dec 1995.

(42) Bakshi, B. R.; Locher, G.; Stephanopoulos, G.; Stephanopoulos, G. Analysis of Operating Data for Evaluation, Diagnosis and Control of Batch Operations. J. Process Control 1994, 4 (4), 179-194.

(43) Garcia, E. A.; Frank, P. M. Analysis of a Class of Dedicated Observer Schemes to Sensor Fault Isolation. UKACC International Conference on Control, Sept 2-5, 1996.

(44) Gross, K. C.; Hoyer, K. K.; Humenik, K. E. System for Monitoring an Industrial Process and Determining Sensor Status. U.S. Patent 5,459,675, 1995.

(45) Henry, M. Automatic Sensor Validation. Control Instrum. 1995, 27, 60.

(46) Himmelblau, D. M.; Mohan, B. On-line Sensor Validation of Single Sensors Using Artificial Neural Networks. American Control Conference, Seattle, WA, 1995; IEEE: New York, 1995; pp 766-770.

(47) Hsiung, J. T.; Himmelblau, D. M. Detection of Leaks in a Liquid-Liquid Heat Exchanger Using Passive Acoustic Noise. Comput. Chem. Eng. 1996, 20 (9), 1101-1111.

(48) Jenkins, G. M.; Watts, D. G. Spectral Analysis and Its Applications; Holden-Day Inc.: San Francisco, CA, 1968.

(49) Karjala, T. W.; Himmelblau, D. M. Dynamic Data Rectification by Recurrent Neural Networks vs. Traditional Methods. AIChE J. 1994, 40 (11), 1865-1875.

(50) Rengaswamy, R.; Venkatasubramanian, V. A Syntactic Pattern-Recognition Approach for Process Monitoring and Fault Diagnosis. Eng. Appl. Artif. Intell. 1995, 8 (1), 35-51.

(51) Stockdale, R. B. New Standards Squeeze Temperature Measurements and Control Specifications. Control Eng. 1992, 39, 89-92.

(52) Watanabe, K.; Koyama, H.; Tanoguchi, H.; Ohma, T.; Himmelblau, D. M. Location of Pinholes in a Pipeline. Comput. Chem. Eng. 1993, 17 (1), 61-70.

(53) Ying, C.-M. Issues in Process Monitoring and Control: Identification, Model Predictive Control, Optimization and Fault Detection. D.Sc. Thesis, Washington University, St. Louis, MO, 1999.

(54) Ying, C.-M.; Joseph, B. Sensor Fault Detection Using Power Spectrum Density Analysis. AIChE Annual Meeting, Miami, FL, 1998.

Received for review May 13, 1999
Revised manuscript received November 5, 1999

Accepted November 9, 1999

IE9903341


