Approach for detection and identification of multiple faults in satellite navigation

Lucila Patino-Studencka and Günter Rohmer
Fraunhofer Institut IIS, Nuremberg, Germany

Email: {pat,rmr}@iis.fraunhofer.de

Jörn Thielecke
Friedrich-Alexander University of Erlangen-Nuremberg

Email: [email protected]

Abstract—Fault detection and identification (FDI) techniques recognize faulty or corrupted navigation satellite signals using the redundancy provided by measurements of multiple satellite signals. Most FDI approaches assume only one faulty measurement; this assumption is no longer valid as the number of satellite signals increases, as expected with the inclusion of the new satellite systems. This paper presents a new, efficient FDI approach for the identification of multiple simultaneous failures, which delivers quality values for each measurement based on statistical concepts such as the test statistic and the observed significance level.

Index Terms—Satellite navigation systems; positioning; fault detection; multiple detection.

I. INTRODUCTION

Position determination in satellite navigation systems is based upon pseudorange and phase measurements. These measurements are often affected by phenomena like reflection and diffraction, which are not always detected or corrected during the signal processing steps that precede position determination. Such disturbances result in biases associated with the measurements. With the introduction of new Global Navigation Satellite Systems such as Galileo, it can be expected that on average 18 satellites, and a minimum of 13, will be in view [1]. This not only leads to higher redundancy for position determination, but also to more possible error sources. Therefore, the assumption of a single faulty signal is no longer valid.

The first task of FDI techniques, the detection, consists in determining whether at least one faulty measurement is present. It is treated as a hypothesis test problem. The null hypothesis (H_0) corresponds to the "no error" case, whereas the alternative hypothesis (H_1) corresponds to the "faulty" case. The detection is accomplished by comparing the current test statistic with a threshold derived from a maximum allowable false alarm rate. In past years several approaches under the scope of Receiver Autonomous Integrity Monitoring (RAIM) were presented, most of which were developed for safety-critical applications. The primary emphasis of these methods is to detect failures in order to protect against excessive position errors [2] by, for instance, switching to alternative navigation means. Moreover, traditional RAIM algorithms assume that all measurements have the same statistical behavior.

Besides detecting the presence of measurement errors, FDI algorithms are also responsible for identifying the particular faulty measurements. Identification of a single failure is achieved by comparing the parity vector with the characteristic bias line of each satellite [3], [4], [5]. Identification of multiple failures can be performed by testing measurement subsets. A recently presented approach, the so-called RANCO [1], is able to detect multiple satellite failures by range comparison of possible position solutions calculated with subsets of four measurements. In this approach two thresholds are necessary: one to determine the inliers and outliers of each subset, and a second one for selecting the faulty satellites in the end. The faulty satellites are not used for the final position calculation.

The selection of the thresholds determines the quality of the identification. Moreover, the use of hard thresholds in the evaluation of the subsets can lead to both missed and false detections. In addition, the final exclusion of faulty measurements degrades the dilution of precision (DOP).

In this work, first a test statistic is derived that takes into account that the noise affecting each measurement can be different (have different variances), and then a new algorithm for the identification of simultaneous multiple failures is proposed. The algorithm consists of a subset checking strategy. Unlike a hard detection approach, this algorithm assigns a quality value to each measurement. The quality value represents the congruency of each single measurement with all others. It can be used to exclude faulty measurements or to give them less weight in the position calculation. This information can be exploited in order to increase both robustness and position accuracy.

Section II describes the mathematical formulation of the position estimation process, exploiting the relationship between the least squares residual vector and the parity vector. In Section III the Rao test statistic, which depends on the measurement noise, is derived and the observed significance level is presented. Based on these concepts, an FDI approach is described in detail in Section IV. Simulation tests and results are discussed in Sections V and VI, respectively. Finally, conclusions are drawn in Section VII.

II. POSITION CALCULATION AND PARITY SPACE

The position calculation in satellite navigation systems is based on the linearized measurement equation given by

y = H x + e (1)


where y is the vector of measurement residuals, calculated as the difference between the measured and the predicted pseudoranges. For N observed satellites, y has dimension N×1, H is the observation matrix (N×4), x is a vector of dimension 4×1 containing the user position and clock error relative to the linearization point, and e is the error in the measurements (N×1), which is composed of noise and bias.

Since the state vector x has four unknown variables, at least four measurements are necessary to obtain a solution. All other N − 4 measurements provide redundancy in the solution and contribute to improving the DOP.

The optimal solution of (1) is obtained through the Weighted Least Squares (WLS) method:

x = (H^T W^{-1} H)^{-1} H^T W^{-1} y   (2)

where W^{-1} is a weight matrix, which relates the relative noise levels.
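As a small illustration of (2), the following Python/NumPy sketch solves the weighted least squares problem; the function name and array shapes are assumptions made for this example and are not part of the paper.

```python
import numpy as np

def wls_solution(H, W, y):
    """Weighted least squares solution of y = Hx + e, cf. (2).

    H: (N, 4) observation matrix, W: (N, N) measurement covariance
    (its inverse acts as the weight matrix), y: (N,) residual vector.
    """
    W_inv = np.linalg.inv(W)
    # x = (H^T W^-1 H)^-1 H^T W^-1 y
    return np.linalg.solve(H.T @ W_inv @ H, H.T @ W_inv @ y)
```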

If more than four measurements are present, the matrix H contains not only the information necessary for the position calculation (geometrical information) but also redundancy information. A separation of the geometrical information and the redundancy in the measurements can be obtained by transforming (1) from the position space into the parity space through the parity matrix P. It can be obtained by, e.g., performing a QR decomposition of the matrix H:

Q^T y = R x + Q^T e   (3)

\begin{bmatrix} Q_1 \\ P \end{bmatrix} y = \begin{bmatrix} R_1 \\ 0 \end{bmatrix} x + \begin{bmatrix} Q_1 \\ P \end{bmatrix} e   (4)

where Q_1 and P are the upper and lower submatrices of Q^T, respectively, and R_1 is the upper matrix of R. The rows of P are mutually orthogonal, of unit magnitude, and orthogonal to the columns of H. The corresponding parity vector p is given in [6]:

p = P y = P e   (5)

The parity vector does not contain geometrical information, but only information about the biases and the noise present in the measurements; it is a linear combination of them. Assuming that the measurement residuals are uncorrelated Gaussian random variables, the resulting parity vector is also Gaussian.

Since the covariance matrix C of the measurements is assumed to be known a priori, the covariance of p can be calculated. In the case of fault-free measurements the mean value of p is zero; otherwise it is a linear combination of the measurement bias vector b and therefore unknown.

e ∼ N(b, C)   (6)

p ∼ N(Pb, PCP^T)   (7)

Thus, the probability density function (PDF) of the parity vector is given by:

f_p(p) = \frac{1}{(2\pi)^{N/2}\,\left|PCP^T\right|^{1/2}} \exp\!\left(-\frac{1}{2}(p-Pb)^T (PCP^T)^{-1}(p-Pb)\right)   (8)
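Numerically, the parity matrix, the parity vector and its covariance of (4), (5) and (7) can be obtained from a full QR decomposition of H, for example as in the following Python/NumPy sketch (the helper names are illustrative assumptions):

```python
import numpy as np

def parity_matrix(H):
    """Parity matrix P: its rows are orthonormal and orthogonal to the
    columns of H, obtained from a full QR decomposition, cf. (3)-(4)."""
    Q, _ = np.linalg.qr(H, mode="complete")
    return Q.T[H.shape[1]:, :]      # lower (N-4) rows of Q^T

def parity_vector(H, y):
    """Parity vector p = P y = P e, cf. (5)."""
    return parity_matrix(H) @ y

def parity_covariance(H, C):
    """Covariance of the parity vector, P C P^T, cf. (7)."""
    P = parity_matrix(H)
    return P @ C @ P.T
```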

III. STATISTICAL BACKGROUND

A. Test statistics

Since each measurement can be in one of two possible states, namely faulty or fault-free, it is possible to define two hypotheses for the parity vector: all measurements are free of faults (H_0), or at least one measurement is faulty (H_1).

H_0 : b = 0
H_1 : b ≠ 0   (9)

For this hypothesis test the PDFs under H_0 and H_1 are identical except for the value of the unknown parameter vector b.

If the PDFs under both H_0 and H_1 are completely known, it is possible to design optimal detectors. If the PDFs have unknown parameters, these must be accommodated in a composite hypothesis test. The Bayesian approach models the unknown parameters as realizations of a random vector to which prior PDFs are assigned. This approach was followed by [3]. However, setting up the prior probability models for biases for single and multiple faults is challenging. The Generalized Likelihood Ratio Test (GLRT) replaces the unknown parameters by their maximum likelihood estimates (MLE), as used in [5]. The Rao test is asymptotically equivalent to the GLRT and was chosen in this paper to calculate the test statistic of the hypothesized model described by (9). The main advantage of this test is that it only requires assumptions about the null hypothesis H_0, which is completely known, and does not need an MLE evaluation under H_1.

The Rao test has the form [7]:

T_R(p) = \left. \frac{\partial \ln f_p(p; b)}{\partial b} \right|_{b=0}^{T} I^{-1}(b = 0) \left. \frac{\partial \ln f_p(p; b)}{\partial b} \right|_{b=0}   (10)

where I denotes the Fisher information matrix and f_p(p) is given by (8).

The Rao test of (10) applied to the parity vector p yields:

T_R(p) = p^T (PCP^T)^{-1} p   (11)

This test statistic can be used to test the current measurement set when the measurements have different noise levels, as in the case of combined GPS and Galileo measurements. Information about the variances of the measurements and their correlations is included in C. If all measurements are uncorrelated and have the same standard deviation σ, the covariance matrix C takes the form:

C = σ^2 I   (12)

Since PP^T = I, the test statistic in this case reduces to the traditional chi-square test statistic used by GPS RAIM, as presented in [6]:

T_R(p) = \left\| \frac{p}{\sigma} \right\|^2   (13)

For the null hypothesis the Rao test leads to a central chi-square distribution with N − 4 degrees of freedom. For the alternative hypothesis H_1 it results in a non-central distribution with non-centrality parameter λ depending on the bias b.

T_R(p) \sim \begin{cases} \chi^2_{N-4} & \text{under } H_0 \\ \chi^2_{N-4}(\lambda) & \text{under } H_1 \end{cases}   (14)

The test statistic (11) takes into account that the standard deviation of the noise affecting each measurement can be different. Therefore, it can be applied in RAIM algorithms addressing multiple navigation systems with different noise variances. The test statistic evaluates the complete measurement set, but it does not provide information about individual measurements.
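A minimal sketch of evaluating (11), reusing the parity-space helpers sketched at the end of Section II; the names are again illustrative assumptions:

```python
import numpy as np

def rao_test_statistic(H, y, C):
    """Rao test statistic T_R(p) = p^T (P C P^T)^-1 p, cf. (11)."""
    P = parity_matrix(H)           # parity-space sketch in Section II
    p = P @ y
    S = P @ C @ P.T                # covariance of the parity vector, cf. (7)
    return float(p @ np.linalg.solve(S, p))
```

For uncorrelated measurements with a common standard deviation σ, i.e. C = σ²I, the returned value coincides with the classical statistic (13).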

B. Observed significance level

Besides the test statistic, another important statistical concept is the observed significance level, or p-value (m). For a given observation p_0, the significance level is defined as:

m(p) = F_0(T_R(p) ≥ T_R(p_0))   (15)

where F_0 is the probability under the null hypothesis. The p-value m is defined as the probability that the outcome of an experiment under the null hypothesis equals or exceeds the value just observed. It can be interpreted as a measure of the extent to which the observation contradicts or supports the null hypothesis in a single experiment [8]. A small value of m indicates strong evidence against the null hypothesis.

Clearly, the observed significance level can only take values between zero and one. If the null hypothesis is true, m can be considered as a realization of a random variable that is uniformly distributed on the interval [0, 1] [9].
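Under H_0 the statistic follows a central chi-square distribution with N − 4 degrees of freedom, cf. (14), so the observed significance level of (15) can be computed as a right-tail probability, for instance with SciPy (an illustrative sketch, not code from the paper):

```python
from scipy.stats import chi2

def observed_significance(T_R, dof):
    """Observed significance level m: right-tail probability of a central
    chi-square distribution with the given degrees of freedom, cf. (14)-(15)."""
    return float(chi2.sf(T_R, df=dof))
```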

IV. ALGORITHM

As mentioned before, position determination (in stand-alone mode) requires a minimum of four measurements. Any additional measurement provides redundancy and allows measurement errors to be detected. Traditional hypothesis testing approaches use a fixed threshold T_RF and its associated significance level α_RF, represented by the shaded area in Fig. 1. The threshold is derived from a desired false alarm probability. If the observed significance level is smaller than the corresponding significance level α_RF, the hypothesis H_0 is rejected; otherwise it is accepted. In the example presented in Fig. 1 the value T_Ri would lead to accepting the hypothesis, while the value T_Rj would lead to rejecting it. Rejection based on a minimization of false detection probabilities subject to a fixed significance level applies only to long-run repeated sampling situations, not to individual experiments [10]. Considering that our goal is to make a decision about each single measurement given the current measurement set, the traditional approach does not seem adequate.
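For comparison with the proposed soft strategy, such a fixed threshold T_RF can be derived from the chi-square quantile for a chosen false alarm probability; the sketch below is illustrative and its parameter names are assumptions.

```python
from scipy.stats import chi2

def fixed_threshold(p_fa, dof):
    """Threshold T_RF such that P(T_R > T_RF | H0) equals the false alarm
    probability p_fa (traditional fixed-threshold detection)."""
    return float(chi2.isf(p_fa, df=dof))

# Example: for N = 9 satellites (dof = 5) and p_fa = 1e-3,
# fixed_threshold(1e-3, dof=5) gives the detection threshold.
```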

The algorithm proposed here is based on a subset test strategy, whose goal is to find a quality value for each measurement, depending on its congruence with the remaining measurements. The algorithm consists of three main steps, as shown in Fig. 2: subset arrangement, subset check, and decision.

1) Subset arrangement: Subsets are built by dividing the initial set of N satellite measurements (sv_1, sv_2, ..., sv_k, ..., sv_N) into groups of at least 5 measurements, in order to assure redundancy. The L resulting subsets (S_1, S_2, S_3, ..., S_L) are analyzed independently. L corresponds to the total number of subsets and is given as:

L = \binom{N}{5} = \frac{N!}{(N-5)!\,5!}   (16)

2) Subset check: For each subset (S_1, S_2, ..., S_i, ..., S_L) a test statistic value T_Ri based on (11) is calculated. For the obtained value the observed significance level according to (15) is determined, i.e., the right-tail probability of a central chi-square distribution is calculated. The chi-square distribution is completely described by the degrees of freedom given by the redundancy count; in the case of subsets with 5 measurements the distribution has one degree of freedom. Because the significance level (m) is a monotonically decreasing function of the test statistic, it can be considered as a measure of the evidence of the subset being faulty or error-free: a small value is an indication that at least one measurement present in the current subset is faulty or has an extremely high noise level. The evidence value obtained for the complete subset is assigned to all measurements comprising the subset.

3) Decision: In order to obtain an evaluation of each single measurement, the observed significance levels assigned to the subsets need to be combined. The proposed strategy and its differences from traditional approaches can be explained using Fig. 1. Assume the observed significance value for a subset with test statistic T_Ri is close to that of a subset with test statistic T_Rj. In this case, making a hard decision, i.e., assigning confidence value 1 (inlier) to subset S_i and confidence value 0 (outlier) to subset S_j, can lead to undesirable results. The approach proposed here is to calculate, collect and combine the observed significance levels of each subset. Instead of adding up the number of times that a measurement was in a subset containing at least one faulty measurement, the corresponding observed significance values are added. This value is from here on called the quality value (r_k).

Obviously, the obtained quality value of a measurement sv_k is not only dependent on its own error but is also highly dependent on the other measurements. Therefore, this quality value is a relative rather than an absolute statement about the measurement error. Nevertheless, the larger the error present in the measurement, the smaller its quality value. This information can be applied as a weight factor in the position calculation, can be used to exclude defective measurements, or both in a combined strategy.
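The three steps can be summarized in the following sketch (Python with NumPy/SciPy, reusing rao_test_statistic from Section III). The subset size of 5 and the plain summation of observed significance levels follow the description above; the final normalization is an assumption made for illustration.

```python
from itertools import combinations
import numpy as np
from scipy.stats import chi2

def quality_values(H, y, C, subset_size=5):
    """Quality value r_k per measurement: sum of the observed significance
    levels of all subsets containing measurement k (steps 1-3 above)."""
    N = len(y)
    r = np.zeros(N)
    dof = subset_size - 4                                  # subset redundancy
    for subset in combinations(range(N), subset_size):     # 1) subset arrangement
        idx = list(subset)
        T_R = rao_test_statistic(H[idx, :], y[idx],        # 2) subset check
                                 C[np.ix_(idx, idx)])
        m = chi2.sf(T_R, df=dof)                           # observed significance level
        r[idx] += m                                        # 3) decision: accumulate
    return r / r.max()                                     # normalization (assumed)
```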

For the results presented in the next sections a combined strategy was chosen: measurements with quality values below a threshold were excluded, and the remaining measurements were used in the position calculation, with a weight based on the quality value applied.
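One plausible way to wire the quality values into the WLS solution of (2) is sketched below, reusing wls_solution from Section II. The exclusion threshold of 0.4 is taken from Section V, while the mapping of quality values into the weight matrix W is an assumption for illustration; the paper does not specify it in detail.

```python
import numpy as np

def fdi_position(H, y, C, r, threshold=0.4):
    """Combined strategy: exclude measurements whose normalized quality value
    is below the threshold and weight the rest by their quality values.
    The mapping of quality values into W is an assumed illustration."""
    keep = np.where(r >= threshold)[0]
    # Assumed weighting: inflate the noise variance of low-quality measurements
    W = np.diag(np.diag(C)[keep] / r[keep])
    return wls_solution(H[keep, :], W, y[keep])
```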

Fig. 1. Representation of the observed significance level (PDF of the test statistic with threshold T_RF and observed value T_Rj).

Fig. 2. FDI-Algorithm: 1) Subset arrangement, 2) Subset check, 3) Decision.

V. SIMULATIONS

In order to evaluate the performance of the proposed algorithm two types of tests were performed. First, a test using data from a Spirent STR 4760 GPS simulator was executed. As scenario, a static receiver capturing signals from nine satellites was chosen. Due to reflections, the pseudoranges of some satellites are temporarily biased, as shown in Fig. 3 (time is counted in epochs). Between epochs 860 and 970 the satellite sv2 fails completely, so that only eight satellites were available. The benefit of applying the proposed FDI algorithm to the position calculation was evaluated. For the position calculation a combined strategy was followed: measurements with normalized quality values below 0.4 were excluded, and for the rest of the measurements the quality value was taken as a weight factor. The position was calculated using a WLS approach as described by (2), using the quality value in the matrix W.

Fig. 3. Bias magnitude (m) vs. epoch for the defective satellites sv1, sv2, sv8, sv10 and sv20.

Second, a MATLAB simulation with 10^6 measurement sets based on a fixed satellite geometry and a static receiver was performed. Pseudorange measurements affected by noise and biases were simulated. Several tests varying the bias magnitude and the number of satellites were performed in order to evaluate the performance of the algorithm. The effect of these parameters on the magnitude of the quality values as well as on the false alarm rate was studied.
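A rough sketch of such a Monte Carlo evaluation of the false alarm rate is given below; the geometry matrix, noise level, number of runs and the per-measurement definition of a false alarm are all assumptions for illustration rather than the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def false_alarm_rate(H, sigma, threshold, runs=1000):
    """Estimate the false alarm rate with fault-free measurements: the fraction
    of good measurements whose normalized quality value falls below the
    exclusion threshold (assumed definition of a false alarm)."""
    N = H.shape[0]
    C = sigma**2 * np.eye(N)
    alarms = 0
    for _ in range(runs):
        y = rng.normal(0.0, sigma, size=N)   # noise-only pseudorange residuals
        r = quality_values(H, y, C)          # algorithm sketch in Section IV
        alarms += np.count_nonzero(r < threshold)
    return alarms / (runs * N)
```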

VI. RESULTS

For the first scenario the proposed FDI algorithm provides the normalized quality values shown in Fig. 4. The smaller the quality value, the higher the error in the measurement. It can be seen from these results that the magnitude of the quality value is influenced by both the magnitude of the bias and the number of defective satellites. For example, the normalized quality value for satellite sv1 is about 0.15 if no other satellite measurements are corrupted (epoch 300). This value increases to 0.25 if one additional faulty measurement is present (epoch 600) and to 0.35 if three satellites are defective (epoch 700). This effect can be explained by the reduction of the fault-free measurements and therefore of the number of subsets with a good observed significance level. Between epochs 860 and 970 eight satellites are available and four faulty measurements are present. In this case all subsets contain at least one corrupted measurement and therefore their significance levels tend to zero. The algorithm then calculates quality values which are not correct, because there is not enough redundancy.

The advantage of using the proposed FDI approach on the position error for the same scenario is shown in Fig. 5 and contrasted with the case without FDI. The small error (5 m) on the measurement of satellite sv2 between epochs 50 and 200 is not identified as such, but because of its lower quality value (ca. 0.5) in comparison with the other measurements, the position error is lower with FDI than without it. Additionally, the case of including all measurements, using the quality value as a weight factor without eliminating any measurements, is presented. In this scenario the use of weights alone is not sufficient to correct the errors. The erroneous weight calculation between epochs 860 and 970 causes a higher position error if the FDI approach is used.

Fig. 4. Normalized quality values for sv1, sv2 and sv10 vs. epoch.

Fig. 5. Position error vs. epoch with and without FDI (using FDI: only weighting; using FDI: exclusion and weighting; without FDI).

Fig. 6. Mean value of the quality value for one defective measurement vs. bias magnitude (σ) and number of measurements.

Fig. 6 illustrates the dependency of the quality value on the magnitude of the bias and the number of measurements. The larger the bias and the higher the number of fault-free measurements, the smaller the quality value of the faulty measurement. Clearly, the magnitude of the bias has more impact in this case than the number of measurements.

Fig. 7. False alarm rate vs. number of satellites and threshold.

This FDI approach includes the quality value in the position calculation process: if a defective measurement is not detected, i.e., its quality value is not smaller than the threshold, its weight in the position calculation is still small compared with the other, good measurements. Thus, a false alarm, i.e., declaring a good measurement as faulty, has a worse impact on the position error than failing to exclude a faulty measurement. For a fixed geometry, using fault-free measurements, the false alarm rate depending on the threshold and the number of satellites is presented in Fig. 7. It is clear that the smaller the threshold, the smaller the false alarm rate. Simulations showed that for threshold values below 0.4 the false alarm rate is less than 5·10^{-4}.

VII. CONCLUSION

A new algorithm for multiple fault identification was presented. The algorithm provides a quality value for each measurement based on the statistical behavior of the noise. This quality value can be interpreted as the congruence between the measurements. In general, if a measurement is definitely corrupted, it is more advantageous to exclude it than to include it with a low weight factor. On the other hand, excluding a good measurement can lead to poor results because of the loss of dilution of precision. The presented approach allows extremely corrupted measurements to be excluded and the remaining measurements to be weighted in the position calculation to improve the accuracy. This algorithm can be used as an enhancement of traditional RAIM algorithms to protect against excessive position errors, but also to increase the accuracy and robustness of the position solution.

REFERENCES

[1] G. Schroth, M. Rippl, A. Ene, J. Blanch, B. Belabbas, T. Walter, P. Enge, and M. Meurer, "Enhancements of the Range Consensus Algorithm (RANCO)," in Proceedings of the ION GNSS Conference, 2008.

[2] H. Kuusniemi, "User-level reliability and quality monitoring in satellite-based personal navigation," Ph.D. dissertation, Tampere University of Technology, 2005.

[3] B. Pervan, D. Lawrence, C. Cohen, and B. Parkinson, "Parity space methods for autonomous fault detection and exclusion using GPS carrier phase," in IEEE 1996 Position Location and Navigation Symposium, 1996, pp. 649–656.


[4] Y. Lee, "Receiver autonomous integrity monitoring (RAIM) capability for sole-means GPS navigation in the oceanic phase of flight," IEEE Aerospace and Electronic Systems Magazine, vol. 7, no. 5, pp. 29–36, 1992.

[5] I. Nikiforov and B. Roturier, "Statistical analysis of different RAIM schemes," in Proceedings of ION GPS 2002, 15th International Technical Meeting of the Satellite Division of The Institute of Navigation, Portland, OR, 2002.

[6] E. Kaplan and C. Hegarty, Eds., Understanding GPS: Principles and Applications, 2nd ed. Artech House, 2006.

[7] S. M. Kay, Fundamentals of Statistical Signal Processing: Detection Theory. Upper Saddle River, NJ: Prentice-Hall, 2008.

[8] J. Gibbons and J. Pratt, "P-values: Interpretation and methodology," The American Statistician, vol. 29, no. 1, pp. 20–25, 1975.

[9] R. Elston, "On Fisher's method of combining p-values," Biometrical Journal, vol. 33, no. 3, pp. 339–345, 2007.

[10] R. Hubbard and M. Bayarri, "Confusion over measures of evidence (p's) versus errors (α's) in classical statistical testing," The American Statistician, vol. 57, no. 3, pp. 171–178, 2003.
