Page 1: Founded 1348

Founded 1348
Charles University

Page 2: Founded 1348

Johann Kepler University of Linz

FSV UK

STAKAN III

Institute of Economic Studies
Faculty of Social Sciences
Charles University, Prague

Jan Ámos Víšek

ROBUST STATISTICS - BASIC IDEAS

Austria, Linz, 16.–18. 6. 2003

Page 3: Founded 1348

Schedule of today's talk

A motivation for robust studies

Huber’s versus Hampel’s approach

Prokhorov distance - qualitative robustness

Influence function - quantitative robustness
• gross-error sensitivity
• local shift sensitivity
• rejection point

Breakdown point

Recalling linear regression model

Scale and regression equivariance

Page 4: Founded 1348

Introducing robust estimators

Schedule of today's talk (continued)

Maximum likelihood(-like) estimators - M-estimators

Other types of estimators - L-estimators - R-estimators - minimum distance - minimum volume

Advanced requirements on the point estimator

Page 5: Founded 1348

AN EXAMPLE FROM READING THE MATH

Having explained what a limit is, an example was presented:

$$\lim_{x\to 8}\ \frac{1}{x-8}\;=\;\infty\,.$$

To be sure that the students really understood what was in question, they were asked to solve the exercise:

$$\lim_{x\to 5}\ \frac{1}{x-5}\;=\;?$$

The answer was as follows: the digit 5 laid on its side, in analogy with $\infty$ being a laid-down 8.

Page 6: Founded 1348

Why should robust methods also be used?

Fisher, R. A. (1922): On the mathematical foundations of theoretical statistics.

Philos. Trans. Roy. Soc. London Ser. A 222, pp. 309--368.

When the data actually come from a Student $t(\nu)$ distribution, compare the classical estimators (the ML estimators under normality, i.e. $\bar x_n$ and $s_n^2$) with the ML estimators under $t(\nu)$:

$$\lim_{n\to\infty}\ \frac{\operatorname{var}\big(\hat\mu_n^{\,t(\nu)}\big)}{\operatorname{var}\big(\bar x_n\big)}\;=\;1-\frac{6}{\nu(\nu+1)}\,,
\qquad\qquad
\lim_{n\to\infty}\ \frac{\operatorname{var}\big(\hat\sigma_n^{2,\,t(\nu)}\big)}{\operatorname{var}\big(s_n^{2}\big)}\;=\;1-\frac{12}{\nu(\nu-1)}\,,$$

where $\hat\mu_n^{\,t(\nu)}$ and $\hat\sigma_n^{2,\,t(\nu)}$ denote the maximum likelihood estimators of location and scale under $t(\nu)$, and all the variances are evaluated for $t(\nu)$ data.

Page 7: Founded 1348

Why should robust methods also be used? (continued)

The above ratios evaluated for $\nu = 9,\,5,\,3$:

               $t_9$     $t_5$     $t_3$
  $\bar x_n$    0.93      0.80      0.50
  $s_n^2$       0.83      0.40      0

In particular, under $t_3$ the fourth moment is infinite, so

$$\lim_{n\to\infty}\ \frac{\operatorname{var}_{N(0,1)}\big(s_n^2\big)}{\operatorname{var}_{t_3}\big(s_n^2\big)}\;=\;0\ !$$

i.e. $\operatorname{var}_{t_3}\big(s_n^2\big)$ is asymptotically infinitely larger than $\operatorname{var}_{N(0,1)}\big(s_n^2\big)$.
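A small Monte Carlo sketch of the $\bar x_n$ row of the table (my own illustration, not part of the slides): it assumes known unit scale, fixes $\nu=9$, and compares the sample mean with the location ML estimator under $t_9$, computed numerically with scipy.

```python
# Monte Carlo sketch: efficiency of the sample mean under Student t_9 data.
# Assumptions (not from the slides): unit scale, df known, scipy available.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import t

rng = np.random.default_rng(0)
nu, n, reps = 9, 200, 2000

def t_mle_location(x, nu):
    """ML estimator of location for t_nu data with known df and unit scale."""
    nll = lambda mu: -t.logpdf(x - mu, df=nu).sum()     # negative log-likelihood
    return minimize_scalar(nll, bounds=(x.min(), x.max()), method="bounded").x

means, mles = [], []
for _ in range(reps):
    x = t.rvs(df=nu, size=n, random_state=rng)
    means.append(x.mean())
    mles.append(t_mle_location(x, nu))

print(f"empirical   var(MLE)/var(mean) ~ {np.var(mles) / np.var(means):.2f}")
print(f"theoretical 1 - 6/(nu(nu+1))   = {1 - 6 / (nu * (nu + 1)):.2f}")   # 0.93
```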

Page 8: Founded 1348

Standard normal density and Student density with 5 degrees of freedom (figure).

Is it easy to distinguish between the normal and the Student density?
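A quick numerical look (my own sketch, not part of the slides): near the centre the two densities are almost indistinguishable; the difference sits in the tails.

```python
# Compare the N(0,1) and t_5 densities on a coarse grid.
import numpy as np
from scipy.stats import norm, t

for x in np.linspace(-4.0, 4.0, 9):
    print(f"x = {x:5.1f}   N(0,1): {norm.pdf(x):.4f}   t_5: {t.pdf(x, df=5):.4f}")
```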

Page 9: Founded 1348

Why should robust methods also be used? (continued)

Huber, P. J. (1981): Robust Statistics. New York: J. Wiley & Sons.

Consider the contaminated normal distribution

$$F_\varepsilon(x)\;=\;(1-\varepsilon)\,\Phi(x)\;+\;\varepsilon\,\Phi(x/3)$$

and the two estimators of scale

$$s_n\;=\;\Big(\frac1n\sum_{i=1}^{n}\big(x_i-\bar x_n\big)^2\Big)^{1/2},
\qquad\qquad
d_n\;=\;\frac1n\sum_{i=1}^{n}\big|x_i-\bar x_n\big|\,.$$

Their asymptotic relative efficiency is

$$ARE(\varepsilon)\;=\;\lim_{n\to\infty}\ \frac{\operatorname{var}(s_n)\big/\big[E\,s_n\big]^2}{\operatorname{var}(d_n)\big/\big[E\,d_n\big]^2}\,.$$

Page 10: Founded 1348

Why should robust methods also be used? (continued)

  $\varepsilon$          0       .001      .002      .05
  $ARE(\varepsilon)$   .876      .948     1.016     2.035

So, only 5% of contamination makes $d_n$ two times better than $s_n$.

Is 5% of contamination much or little?

E.g., Switzerland has 6% of errors in its mortality tables; see Hampel et al. (1986).

Hampel, F.R., E.M. Ronchetti, P. J. Rousseeuw, W. A. Stahel (1986):

Robust Statistics - The Approach Based on Influence Functions. New York: J.Wiley & Sons.
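A short numerical check of the $ARE(\varepsilon)$ table above (my own sketch, not part of the slides): it assumes the standardized asymptotic variances $\operatorname{var}(T_n)/[E\,T_n]^2$ from the formula above and the closed-form moments of the normal mixture $F_\varepsilon$.

```python
# Reproduce the ARE table for F_eps = (1 - eps) * Phi(x) + eps * Phi(x / 3).
import math

def are(eps):
    sigma2 = 1 + 8 * eps                              # variance of the mixture
    mu4 = 3 * (1 - eps) + 243 * eps                   # E X^4 = 3(1-eps) + 3*81*eps
    delta = math.sqrt(2 / math.pi) * (1 + 2 * eps)    # E |X|
    v_s = (mu4 - sigma2 ** 2) / (4 * sigma2 ** 2)     # lim n*var(s_n)/(E s_n)^2
    v_d = (sigma2 - delta ** 2) / delta ** 2          # lim n*var(d_n)/(E d_n)^2
    return v_s / v_d

for eps in (0, 0.001, 0.002, 0.05):
    print(f"eps = {eps:5.3f}   ARE = {are(eps):.3f}")   # 0.876, 0.948, 1.016, 2.035
```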

Page 11: Founded 1348

Conclusion: We have developed efficient monoposts which, however, work only on special F1 circuits.

A proposal: Let us use both. If both work, thank God: we are on an F1 circuit. If not, let us try to learn why.

What about utilizing, if necessary, a comfortable sedan? It can “survive” even ordinary roads.

Page 12: Founded 1348

Huber’s approach

One possible framework for statistical problems is to consider a parameterized family of distribution functions.

Huber's proposal: let us consider the same structure of the parameter space, but instead of each single distribution function let us consider a whole neighborhood of d.f.'s.

Finally, let us employ the usual statistical techniques for solving the problem in question.

Page 13: Founded 1348

Huber's approach - continued, an example

Let us look for an (unbiased, consistent, etc.) estimator of location with minimal (asymptotic) variance for the family $\{F_\theta\}$, $F_\theta(x)=F(x-\theta)$, i.e. consider instead of a single d.f. the whole family.

Huber's modification: for each $F_\theta$ let us define

$$Q_\theta\;=\;\big\{\,G_{\varepsilon,H}(x)\;=\;(1-\varepsilon)\,F_\theta(x)\;+\;\varepsilon\,H(x)\;:\;H(x)\ \text{an arbitrary d.f.}\,\big\}$$

and look for an (unbiased, consistent, etc.) estimator of location with minimal (asymptotic) variance for the family of families $\{Q_\theta\}$.

Finally, solve the same problem as at the beginning of the task.

Page 14: Founded 1348

Hampel’s approach

The information in the data $(x_1, x_2, \dots, x_n)$ is the same as the information in the empirical d.f. $F_n$.

An estimate of a parameter of a d.f. can then be considered as a functional $T_n(F_n)$, which frequently has a (theoretical) counterpart $T(F)$.

An example:

$$\bar x_n\;=\;\frac1n\sum_{i=1}^{n}x_i\;=\;\int x\,dF_n(x)\;=\;T(F_n)\,,
\qquad\qquad
E\,X\;=\;\int x\,dF(x)\;=\;T(F)\,.$$

Page 15: Founded 1348

Hampel's approach - continued

Expanding the functional $T$ at $F$ in the direction of $F_n$, we obtain

$$T(F_n)\;=\;T(F)\;+\;\int T'(F,x)\,\big(dF_n(x)-dF(x)\big)\;+\;R_n\,,$$

where $T'(F,x)$ is e.g. the Fréchet derivative - details below.

Message: Hampel's approach is an infinitesimal one, employing a “differential calculus” for functionals.

Local properties of $T_n(F_n)$ can be studied through the properties of $T'(F)$.

Page 16: Founded 1348

Qualitative robustness

Let us consider a sequence of “green” d.f.'s which coincide with the red one except within the distance $1/n$ from the Y-axis (figure).

Does the “green” sequence converge to the red d.f.?

Page 17: Founded 1348

Qualitative robustness - continued

Let us consider the Kolmogorov-Smirnov distance, i.e.

$$d_{K}(F_n,F)\;=\;\max_{x\in R}\ \big|F_n(x)-F(x)\big|\,.$$

The K-S distance of any “green” d.f. from the red one is equal to the length of the yellow segment in the figure.

CONCLUSION: Independently of $n$, the “green” sequence does not converge in the K-S metric to the red d.f., unfortunately.

Page 18: Founded 1348

Qualitative robustness - continued

Prokhorov distance:

$$\pi(F,G)\;=\;\inf\big\{\,\varepsilon>0\;:\;F(A)\le G(A^{\varepsilon})+\varepsilon\ \ \text{for all events }A\,\big\}\,.$$

In words: we look for the minimal amount $\varepsilon$ by which we have to move the green d.f. - to the left and up - to get above the red one.

CONCLUSION: Now the sequence of green d.f.'s converges to the red one.
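A minimal worked example in the spirit of the picture (the concrete d.f.'s are my assumption, since the figure is not reproduced in the transcript): take the “red” d.f. to be the point mass at $0$, $F=\delta_0$, and the “green” ones $F_n=\delta_{1/n}$. Then $d_K(F_n,F)=1$ for every $n$ (the full height of the jump), while $\pi(F_n,F)=1/n\to 0$: shifting the green d.f. by $1/n$ and allowing an error of $1/n$ already puts it above the red one.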

Page 19: Founded 1348

Qualitative robustness

DEFINITION. Let $x_1, x_2, \dots, x_n$ be i.i.d., $\mathcal L(x_1)=F$, and $\hat\theta_n=T(F_n)$. The sequence $\{T(F_n)\}$ is qualitatively robust at $F$ if

$$\forall\,\varepsilon>0\ \ \exists\,\delta>0\ \ \forall\,G\,:\qquad
\pi(F,G)<\delta\ \ \Rightarrow\ \ \pi\big(\mathcal L_F(T(F_n)),\,\mathcal L_G(T(F_n))\big)<\varepsilon\ \ \text{for all }n\,.$$

In words: qualitative robustness is continuity with respect to the Prokhorov distance.

E.g., the arithmetic mean is qualitatively robust at the normal d.f. !?!

Conclusion: For practical purposes we need something “stronger” than qualitative robustness.

Page 20: Founded 1348

Quantitative robustness

The expansion from the previous slides can be written as

$$T(F_n)\;=\;T(F)\;+\;\int IF(x,T,F)\,dF_n(x)\;+\;R_n\,,$$

i.e.

$$n^{1/2}\big(T(F_n)-T(F)\big)\;=\;n^{-1/2}\sum_{i=1}^{n} IF(x_i,T,F)\;+\;n^{1/2}R_n\,.$$

Influence function:

$$IF(x,T,F)\;=\;\lim_{h\to 0}\ \frac{T\big((1-h)F+h\,\delta_x\big)-T(F)}{h}\,,$$

where $\delta_x$ is the d.f. of the unit point mass at $x$. The influence function is defined wherever the limit exists.

Page 21: Founded 1348

Quantitative robustness - continued

Characteristics derived from the influence function:

Gross-error sensitivity: $\displaystyle \gamma^{*}\;=\;\sup_{x\in R}\,\big|IF(x,T,F)\big|$

Local shift sensitivity: $\displaystyle \lambda^{*}\;=\;\sup_{x\neq y}\,\big|IF(x,T,F)-IF(y,T,F)\big|\,\big/\,|x-y|$

Rejection point: $\displaystyle \rho^{*}\;=\;\inf\big\{\,r>0\;:\;IF(x,T,F)=0\ \ \text{for}\ |x|>r\,\big\}$
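A standard worked example (added here for illustration, not reproduced on the slide): for the mean, $T(F)=\int x\,dF(x)$, we have $T((1-h)F+h\delta_x)=(1-h)T(F)+hx$, hence $IF(x,T,F)=x-T(F)$ and $\gamma^{*}=\infty$: a single gross error has unbounded influence. For the median of an $F$ with density $f$, $IF(x,T,F)=\operatorname{sign}\big(x-F^{-1}(\tfrac12)\big)\big/\big(2f(F^{-1}(\tfrac12))\big)$, so $\gamma^{*}$ is finite, but $\lambda^{*}=\infty$ (the influence function jumps at the median) and $\rho^{*}=\infty$.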

Page 22: Founded 1348

Breakdown point

(The formal definition is given here only to show that the description of breakdown below has a sound mathematical basis - please, don't read it.)

$$\varepsilon^{*}\big(\hat\beta^{(n)},F\big)\;=\;\sup\Big\{\,0\le\varepsilon\le 1:\ \exists\,K(\varepsilon)\subset R^{p}\ \text{compact},\ \ \pi(F,G)<\varepsilon\ \Rightarrow\ G\big(\hat\beta^{(n)}\in K(\varepsilon)\big)\longrightarrow 1\ \text{for}\ n\to\infty\,\Big\}$$

In words: $\varepsilon^{*}\big(\hat\beta^{(n)},F\big)$ is the smallest (asymptotic) ratio of contamination which can destroy the estimate, in the sense that the estimate tends (in absolute value) to infinity or to zero.
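A tiny finite-sample illustration of what “breakdown” means (my own sketch, not part of the slides): one wild observation out of a hundred already carries the arithmetic mean wherever it pleases, while the median barely moves - the finite-sample counterpart of asymptotic breakdown points $0$ and $1/2$.

```python
# One contaminated observation vs. the mean and the median.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=99)                       # 99 "good" observations
for bad in (10.0, 1e3, 1e6):
    y = np.append(x, bad)                     # contaminate a single observation
    print(f"outlier = {bad:>9.0f}   mean = {y.mean():12.2f}   median = {np.median(y):6.2f}")
```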

Page 23: Founded 1348

Robust estimators of parameters - an introduction and motivation

Let us have a family of densities $\{f(x,\theta)\}$ and data $x_1, x_2, \dots, x_n$. Of course, we want to estimate $\theta$.

Maximum likelihood estimators:

$$\hat\theta\;=\;\arg\max_{\theta}\ \prod_{i=1}^{n} f(x_i,\theta)\;=\;\arg\max_{\theta}\ \sum_{i=1}^{n}\log f(x_i,\theta)\,.$$

What can cause a problem?

Page 24: Founded 1348

Robust estimators of parameters - what can cause a problem?

An example: consider the normal family with unit variance,

$$f(x,\theta)\;=\;(2\pi)^{-1/2}\exp\big\{-\tfrac12(x-\theta)^2\big\},
\qquad
\log f(x,\theta)\;=\;-\tfrac12\log(2\pi)\;-\;\tfrac12(x-\theta)^2$$

(notice that $\log(2\pi)$ does not depend on $\theta$). So we solve the extremal problem

$$\hat\theta\;=\;\arg\min_{\theta\in R}\ \sum_{i=1}^{n}(x_i-\theta)^2
\;=\;\arg_{\theta\in R}\Big\{\sum_{i=1}^{n}(x_i-\theta)=0\Big\}
\;=\;\frac1n\sum_{i=1}^{n}x_i\;=\;\bar x_n\,.$$

Page 25: Founded 1348

Robust estimators of parameters - a proposal of a new estimator

Once again: what caused the problem in the previous example? The quadratic function $\rho(x)=x^2$ is unbounded, so the resulting estimator $\hat\theta=\frac1n\sum_{i=1}^{n}x_i$ lets even a single far-away observation exert an arbitrarily large influence.

So what about

$$\rho(x)\;=\;\begin{cases} x^{2} & \text{for}\ |x|\le k\,,\\ k\,(2|x|-k) & \text{for}\ |x|>k\ ? \end{cases}$$

Maximum likelihood-like estimators (M-estimators):

$$\hat\theta\;=\;\arg\min_{\theta}\ \sum_{i=1}^{n}\rho(x_i-\theta)
\qquad\text{or, equivalently,}\qquad
\hat\theta\;=\;\arg_{\theta}\Big\{\sum_{i=1}^{n}\psi(x_i-\theta)=0\Big\}\,,\qquad \psi=\rho'\,.$$

Page 26: Founded 1348

Robust estimators of parameters - continued

$$\rho(x)\;=\;\begin{cases} x^{2} & \text{for}\ |x|\le k\,,\\ k\,(2|x|-k) & \text{for}\ |x|>k\,, \end{cases}
\qquad\qquad
\tfrac12\,\psi(x)\;=\;\tfrac12\,\rho'(x)\;=\;\begin{cases} x & \text{for}\ |x|\le k\,,\\ k\cdot\operatorname{sign}(x) & \text{for}\ |x|>k\,, \end{cases}$$

i.e. $\rho$ has a quadratic part in the middle and a linear part in the tails, so $\psi$ is bounded.
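A minimal sketch of how such a location M-estimator can be evaluated (my own code, not from the slides; it assumes the usual Huber $\psi$ with a conventional tuning constant $k=1.345$ and known unit scale), solving $\sum_i\psi(x_i-\theta)=0$ by simple iteratively reweighted averaging:

```python
# Huber M-estimator of location via IRLS (iteratively reweighted averaging).
import numpy as np

def psi(u, k=1.345):
    # Huber psi: identity in the middle, constant +-k in the tails
    return np.clip(u, -k, k)

def huber_location(x, k=1.345, tol=1e-8, max_iter=100):
    theta = np.median(x)                      # robust starting point
    for _ in range(max_iter):
        d = x - theta
        w = np.where(d == 0, 1.0, psi(d, k) / np.where(d == 0, 1.0, d))  # psi(d)/d
        new = np.sum(w * x) / np.sum(w)       # one reweighted-mean step
        if abs(new - theta) < tol:
            return new
        theta = new
    return theta

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(size=95), rng.normal(loc=20.0, size=5)])  # 5% outliers
print("mean  :", round(x.mean(), 3))
print("median:", round(np.median(x), 3))
print("Huber :", round(huber_location(x), 3))
```

On the 5% contaminated sample the mean is dragged towards the outliers, while the median and the Huber estimate stay near the true location 0.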

Page 27: Founded 1348

Robust estimators of parameters - the most popular estimators

M-estimators (maximum likelihood-like estimators):

$$\hat\theta\;=\;\arg\max_{\theta}\ \sum_{i=1}^{n}\rho(x_i,\theta)$$

(with $\rho$ playing the role of the log-likelihood; for a loss function, such as the one above, one minimizes instead).

L-estimators (based on order statistics):

$$\hat\theta\;=\;\arg\max_{\theta}\ \sum_{i=1}^{n} w_i\,\rho(x_{(i)},\theta)$$

R-estimators (based on rank statistics):

$$\hat\theta\;=\;\arg\max_{\theta}\ \sum_{i=1}^{n} w(R_i)\,\rho(x_i,\theta)$$

Page 28: Founded 1348

Robust estimators of parameters - less popular, but still well-known, estimators

Minimum distance estimators - based on minimizing a distance between the empirical d.f. and the theoretical one:

$$\hat\theta\;=\;\arg\min_{\theta}\ d\big(F_n,F_\theta\big)\,.$$

Minimum volume estimators - based on minimizing the volume containing a given part of the data and then applying a “classical” (robust) method to the points inside it:

$$\hat\theta\;=\;\arg\min\Big\{\,\sum_{i=1}^{n} w_i\,\rho(x_i,\theta)\;:\;w_i=I\{x_i\in V\}\,\Big\}\,,$$

where $V$ is the minimal volume containing the prescribed part of the data.

Page 29: Founded 1348

Algorithms for evaluating robust estimators

The classical estimator, e.g. the ML-estimator, typically has a closed formula by which it can be evaluated. The extremal problems by which robust estimators are defined typically do not have a solution in the form of a closed formula. So we face two tasks:

Firstly, to find an algorithm that evaluates an approximation to the precise solution.

Secondly, to find a trick to verify that the approximation is tight to the precise solution.

Page 30: Founded 1348

Hereafter let us keep in mind that we speak implicitly about the high breakdown point obsession (especially in regression - discussion below).

Page 31: Founded 1348

Linear regression model - recalling the model

$$Y\;=\;X\beta^{0}+\varepsilon\,,
\qquad\qquad
Y_i\;=\;X_i^{T}\beta^{0}+\varepsilon_i\;=\;\sum_{j=1}^{p}X_{ij}\,\beta^{0}_{j}+\varepsilon_i\,,\quad i=1,2,\dots,n\,.$$

Put $(X)_{ij}=x_{ij}$ for $i=1,2,\dots,n$, $j=1,2,\dots,p$ (with $x_{i1}=1$, $i=1,2,\dots,n$, if an intercept is included), where $X_i=(x_{i1},x_{i2},\dots,x_{ip})^{T}$, $Y=(Y_1,Y_2,\dots,Y_n)^{T}$, $\beta=(\beta_1,\beta_2,\dots,\beta_p)^{T}$ and $\varepsilon=(\varepsilon_1,\varepsilon_2,\dots,\varepsilon_n)^{T}$.

Page 32: Founded 1348

Linear regression model - recalling the model graphically

So we look for a model “reasonably” explaining the data (figure).

Page 33: Founded 1348

Linear regression model - recalling the model graphically

In the figure, one observation is a leverage point (outlying in the direction of the explanatory variable) and another is an outlier (outlying in the response); a small numerical sketch follows below.
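A small numerical sketch of the two situations (hypothetical data, not from the slides): a single leverage point is enough to rotate the least-squares fit, while a y-outlier mainly shifts it.

```python
# Least squares with clean data, with one leverage point, and with one y-outlier.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 10.0, 30)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=30)       # true line: 1 + 2x

def ls_fit(x, y):
    X = np.column_stack([np.ones_like(x), x])             # intercept + slope
    return np.linalg.lstsq(X, y, rcond=None)[0]

print("clean data       :", ls_fit(x, y))
x_lev, y_lev = np.append(x, 50.0), np.append(y, 0.0)      # leverage point far in x
print("with leverage pt :", ls_fit(x_lev, y_lev))
x_out, y_out = np.append(x, 5.0), np.append(y, 60.0)      # outlier in y only
print("with y-outlier   :", ls_fit(x_out, y_out))
```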

Page 34: Founded 1348

Equivariance of regression estimators

We can probably easily agree that the estimates of the parameters of the model should not depend on the system of coordinates. Formally it means:

Equivariance in scale (scale equivariant): if for data $(Y,X)$ the estimate is $\hat\beta$, then for data $(aY,X)$ the estimate is $a\hat\beta$.

Equivariance in regression (affine equivariant): if for data $(Y,X)$ the estimate is $\hat\beta$, then for data $(Y+X\gamma,\,X)$ the estimate is $\hat\beta+\gamma$ (for any fixed vector $\gamma$).
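Least squares itself satisfies both properties; a quick numerical check (my own sketch, not from the slides):

```python
# Scale and regression equivariance of the least-squares estimator.
import numpy as np

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(20), rng.normal(size=20)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=20)
beta = np.linalg.lstsq(X, y, rcond=None)[0]

a, gamma = 3.0, np.array([-1.0, 0.5])
print(np.allclose(np.linalg.lstsq(X, a * y, rcond=None)[0], a * beta))              # scale
print(np.allclose(np.linalg.lstsq(X, y + X @ gamma, rcond=None)[0], beta + gamma))  # regression
```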

Page 35: Founded 1348

Advanced (modern?) requirements on the point estimator - still not exhaustive:

• Unbiasedness
• Consistency
• Asymptotic normality
• Gross-error sensitivity
• Reasonably high efficiency
• Low local shift sensitivity
• Finite rejection point
• Controllable breakdown point
• Scale- and regression-equivariance
• Algorithm with acceptable complexity and reliability of evaluation
• The heuristics the estimator is based on should really work

Page 36: Founded 1348

THANKS for ATTENTION

