
Lecture 13: L1, L∞ Norm Problems and Linear Programming

Transcript
Page 1: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

Lecture 13

L1, L∞ Norm Problems and

Linear Programming

Page 2: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

Syllabus
Lecture 01 Describing Inverse Problems
Lecture 02 Probability and Measurement Error, Part 1
Lecture 03 Probability and Measurement Error, Part 2
Lecture 04 The L2 Norm and Simple Least Squares
Lecture 05 A Priori Information and Weighted Least Squares
Lecture 06 Resolution and Generalized Inverses
Lecture 07 Backus-Gilbert Inverse and the Trade Off of Resolution and Variance
Lecture 08 The Principle of Maximum Likelihood
Lecture 09 Inexact Theories
Lecture 10 Nonuniqueness and Localized Averages
Lecture 11 Vector Spaces and Singular Value Decomposition
Lecture 12 Equality and Inequality Constraints
Lecture 13 L1, L∞ Norm Problems and Linear Programming
Lecture 14 Nonlinear Problems: Grid and Monte Carlo Searches
Lecture 15 Nonlinear Problems: Newton's Method
Lecture 16 Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
Lecture 17 Factor Analysis
Lecture 18 Varimax Factors, Empirical Orthogonal Functions
Lecture 19 Backus-Gilbert Theory for Continuous Problems; Radon's Problem
Lecture 20 Linear Operators and Their Adjoints
Lecture 21 Fréchet Derivatives
Lecture 22 Exemplary Inverse Problems, incl. Filter Design
Lecture 23 Exemplary Inverse Problems, incl. Earthquake Location
Lecture 24 Exemplary Inverse Problems, incl. Vibrational Problems

Page 3: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

Purpose of the Lecture

Review Material on Outliers and Long-Tailed Distributions

Derive the L1 estimate of the mean and variance of an exponential distribution

Solve the Linear Inverse Problem under the L1 normby Transformation to a Linear Programming Problem

Do the same for the L∞ problem

Page 4: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

Part 1

Review Material on Outliers and Long-Tailed Distributions

Page 5: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

Review of the Ln family of norms

Page 6: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

[Figure: the error measures e, |e|, |e|^2, and |e|^10 plotted against z]

higher norms give increasing weight to the largest element of e
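To make the point concrete, here is a small MATLAB sketch (not from the lecture; the vector e is an arbitrary example) showing how the contribution of the largest element of e grows with the order of the norm:

% illustrate how higher norms weight the largest element of e
e = [0.1; 0.2; 0.5; 1.0];
for n = [1 2 10]
    En = sum(abs(e).^n)^(1/n);                 % the Ln norm of e
    frac = abs(e(end))^n / sum(abs(e).^n);     % share contributed by the largest element
    fprintf('n = %2d:  ||e||_n = %.3f, largest element contributes %.0f%%\n', n, En, 100*frac);
end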

Page 7: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

limiting case

Page 8: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

but which norm to use?

it makes a difference!

Page 9: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

[Figure: data d versus z containing one outlier, with fits obtained under the L1, L2, and L∞ norms]

Page 10: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

[Figure: two example probability density functions p(d), panels (A) and (B)]

The answer is related to the distribution of the error. Are outliers common or rare?

long tails: outliers are common but unimportant, so use a low norm, which gives low weight to outliers

short tails: outliers are uncommon but important, so use a high norm, which gives high weight to outliers

Page 11: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

as we showed previously …

use the L2 norm when the data have Gaussian-distributed error

as we will show in a moment …

use the L1 norm when the data have exponentially-distributed error

Page 12: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

comparison of p.d.f.’s

[Figure: Gaussian and Exponential p.d.f.'s p(d) plotted against d]

Page 13: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

to make realizations of an exponentially-distributed random variable in MatLab

mu = sd/sqrt(2);                                 % scale of the one-sided exponential part
rsign = (2*(random('unid',2,Nr,1)-1)-1);         % random signs, each +1 or -1
dr = dbar + rsign .* random('exponential',mu,Nr,1);
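As a quick check (assuming sd, dbar, and Nr are defined as above and the Statistics Toolbox is available), the realizations should reproduce the requested mean and standard deviation for large Nr:

% sanity check: sample statistics should approach dbar and sd for large Nr
disp([mean(dr), std(dr)])    % expect approximately [dbar, sd]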

Page 14: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

Part 2

Derive the L1 estimate of the mean and variance of an exponential distribution

Page 15: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

use of Principle of Maximum Likelihood

maximize L = log p(dobs), the log-probability that the observed data were in fact observed,

with respect to unknown parameters in the p.d.f.

e.g. its mean m1 and variance σ²

Page 16: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

Previous Example: Gaussian p.d.f.

Page 17: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

solving the two equations

Page 18: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

solving the two equations

usual formula for the sample mean

almost the usual formula for the sample standard deviation
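The equations themselves do not survive in this transcript; the following is a standard reconstruction (a sketch, not copied from the slides) of the Gaussian maximum-likelihood result, assuming N independent data di:

\[
p(d_i) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(d_i-m_1)^2}{2\sigma^2}\right),\qquad
L = -N\log(\sqrt{2\pi}\,\sigma) - \frac{1}{2\sigma^2}\sum_{i=1}^{N}(d_i-m_1)^2
\]
\[
\frac{\partial L}{\partial m_1}=0 \;\Rightarrow\; m_1^{est}=\frac{1}{N}\sum_i d_i,\qquad
\frac{\partial L}{\partial \sigma}=0 \;\Rightarrow\; (\sigma^2)^{est}=\frac{1}{N}\sum_i \bigl(d_i-m_1^{est}\bigr)^2
\]

The factor 1/N (rather than 1/(N-1)) is why this is only "almost" the usual sample standard deviation formula.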

Page 19: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

New Example: Exponential p.d.f.

Page 20: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

solving the two equations

m1est = median(d) and the corresponding estimate of σ (see the sketch below)
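The exponential (two-sided, i.e. Laplace) p.d.f. used here is parameterized so that σ² is its variance, consistent with the MATLAB snippet above (scale σ/√2); under that assumption, the maximum-likelihood calculation sketches as:

\[
p(d_i)=\frac{1}{\sqrt{2}\,\sigma}\exp\!\left(-\frac{\sqrt{2}\,|d_i-m_1|}{\sigma}\right),\qquad
L=-N\log(\sqrt{2}\,\sigma)-\frac{\sqrt{2}}{\sigma}\sum_{i=1}^{N}|d_i-m_1|
\]
\[
\max_{m_1} L \;\Leftrightarrow\; \min_{m_1}\sum_i |d_i-m_1| \;\Rightarrow\; m_1^{est}=\operatorname{median}(d),\qquad
\frac{\partial L}{\partial\sigma}=0 \;\Rightarrow\; \sigma^{est}=\frac{\sqrt{2}}{N}\sum_i\bigl|d_i-m_1^{est}\bigr|
\]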

Page 21: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

solving the two equations

m1est = median(d) is more robust than the sample mean, since an outlier moves it by only "one data point"
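A small numerical illustration (an assumed example, not from the lecture) of that robustness:

% compare how one outlier perturbs the sample mean versus the median
d = [1.1; 0.9; 1.0; 1.2; 0.8];
dout = d; dout(1) = 10;                        % replace one value with an outlier
fprintf('mean:   %.2f -> %.2f\n', mean(d), mean(dout));
fprintf('median: %.2f -> %.2f\n', median(d), median(dout));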

Page 22: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

[Figure: panels (A), (B), (C) showing the error E(m) (and, in one panel, sqrt E(m)) plotted against m, with the estimate mest marked in each panel]

Page 23: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

observations

1. When the number of data is even, the solution is non-unique but bounded

2. The solution exactly satisfies one of the data

these properties carry over to the general linear problem

1. In certain cases, the solution can be non-unique but bounded

2. The solution exactly satisfies M of the data equations

Page 24: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

Part 3

Solve the Linear Inverse Problem under the L1 norm by Transformation to a Linear Programming Problem

Page 25: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

the Linear Programming problem

review

Page 26: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

Case A

The Minimum L1 Length Solution

Page 27: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

minimize

subject to the constraint

Gm=d

Page 28: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

minimize

subject to the constraint

Gm = d

(the quantity being minimized is the weighted L1 solution length, weighted by σm^-1; the constraint Gm = d is the usual set of data equations)

Page 29: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

transformation to an equivalent linear programming problem

Page 30: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.
Page 31: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

all variables are required to be positive

Page 32: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

usual data equations, with m = m′ − m′′

Page 33: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

"slack variables": a standard trick in linear programming to allow m to have any sign while m′ and m′′ are non-negative (e.g. m = −3 is represented as m′ = 0, m′′ = 3)

Page 34: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

same as

Page 35: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

if (m − <m>) is positive:
α ≥ (m − <m>), since x ≥ 0
(the other equation can always be satisfied by choosing an appropriate x′)

Page 36: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

if (m − <m>) is negative:
α ≥ −(m − <m>), since x′ ≥ 0
(the other equation can always be satisfied by choosing an appropriate x)

Page 37: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

taken together: α ≥ |m − <m>|

Page 38: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

minimizing z is the same as minimizing the weighted solution length
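A minimal MATLAB sketch of this Case A transformation, under assumed variable names (G, dobs, sm for the σm values, mprior for <m>); it mirrors the variable layout of the Case B code shown later in the lecture:

% Case A: minimum weighted L1 solution length subject to Gm = d
% variables: x = [mp', mpp', alpha', x', xp']' with m = mp - mpp, all >= 0
M = size(G,2); N = size(G,1);
L = 5*M;
f = zeros(L,1);
f(2*M+1:3*M) = 1./sm;                    % minimize sum( alpha_i / sigma_mi )
Aeq = zeros(N+2*M,L); beq = zeros(N+2*M,1);
% data equations: G(mp-mpp) = dobs
Aeq(1:N,1:M) = G; Aeq(1:N,M+1:2*M) = -G; beq(1:N) = dobs;
% (mp-mpp) - alpha + x = mprior    (i.e. alpha - x = m - <m>)
Aeq(N+1:N+M,1:M) = eye(M); Aeq(N+1:N+M,M+1:2*M) = -eye(M);
Aeq(N+1:N+M,2*M+1:3*M) = -eye(M); Aeq(N+1:N+M,3*M+1:4*M) = eye(M);
beq(N+1:N+M) = mprior;
% (mp-mpp) + alpha - xp = mprior   (i.e. alpha - xp = -(m - <m>))
Aeq(N+M+1:N+2*M,1:M) = eye(M); Aeq(N+M+1:N+2*M,M+1:2*M) = -eye(M);
Aeq(N+M+1:N+2*M,2*M+1:3*M) = eye(M); Aeq(N+M+1:N+2*M,4*M+1:5*M) = -eye(M);
beq(N+M+1:N+2*M) = mprior;
lb = zeros(L,1);                          % all variables non-negative
% (in practice, upper bounds on mp and mpp may be added, as in the Case B code)
[xsol, zmin] = linprog(f,[],[],Aeq,beq,lb,[]);
mest = xsol(1:M) - xsol(M+1:2*M);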

Page 39: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

Case B

Least L1 error solution (analogous to least squares)

Page 40: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

transformation to an equivalent linear programming problem

Page 41: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.
Page 42: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

same as: α − x = Gm − d and α − x′ = −(Gm − d), so the previous argument applies

Page 43: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

MatLab

% variables
% m = mp - mpp
% x = [mp', mpp', alpha', x', xp']'
% mp, mpp len M and alpha, x, xp len N
L = 2*M+3*N;
x = zeros(L,1);
f = zeros(L,1);
f(2*M+1:2*M+N) = 1./sd;

Page 44: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

% equality constraints
Aeq = zeros(2*N,L);
beq = zeros(2*N,1);

% first equation: G(mp-mpp)+x-alpha=d
Aeq(1:N,1:M) = G;
Aeq(1:N,M+1:2*M) = -G;
Aeq(1:N,2*M+1:2*M+N) = -eye(N,N);
Aeq(1:N,2*M+N+1:2*M+2*N) = eye(N,N);
beq(1:N) = dobs;

% second equation: G(mp-mpp)-xp+alpha=d
Aeq(N+1:2*N,1:M) = G;
Aeq(N+1:2*N,M+1:2*M) = -G;
Aeq(N+1:2*N,2*M+1:2*M+N) = eye(N,N);
Aeq(N+1:2*N,2*M+2*N+1:2*M+3*N) = -eye(N,N);
beq(N+1:2*N) = dobs;

Page 45: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

% inequality constraints A x <= b

% part 1: everything positive
A = zeros(L+2*M,L);
b = zeros(L+2*M,1);
A(1:L,:) = -eye(L,L);
b(1:L) = zeros(L,1);

% part 2: mp and mpp have an upper bound
A(L+1:L+2*M,:) = eye(2*M,L);
mls = (G'*G)\(G'*dobs);     % L2 (least squares) solution
mupperbound = 10*max(abs(mls));
b(L+1:L+2*M) = mupperbound;

Page 46: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

% solve linear programming problem
[x, fmin] = linprog(f,A,b,Aeq,beq);
fmin = -fmin;
mest = x(1:M) - x(M+1:2*M);
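A hypothetical way to exercise this code (the straight-line G, the data values, and the noise level are assumed for illustration, not taken from the lecture):

% assumed synthetic example: straight line d = m1 + m2*z with one outlier
N = 11; M = 2;
z = linspace(0,1,N)';
G = [ones(N,1), z];
dobs = 2 + 8*z;
dobs(6) = dobs(6) + 5;           % introduce a single outlier
sd = 0.2*ones(N,1);
% ... then run the setup and linprog code above; the L1 estimate mest
% should be much less affected by the outlier than the L2 solution mls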

Page 47: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

[Figure: observed data di plotted against zi, including one outlier]

Page 48: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

the mixed-determined problem of

minimizing L + E can also be solved via transformation

but we omit it here

Page 49: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

Part 4

Solve the Linear Inverse Problem under the L∞ norm by Transformation to a Linear Programming Problem

Page 50: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

we’re going to skip all the details

and just show the transformation for the overdetermined case

Page 51: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

minimize E = maxi ( |ei| / σdi ) where e = dobs − Gm

Page 52: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

note α is a scalar
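Since the lecture skips the details, here is a sketch of one way to pose this L∞ problem as a linear program in MATLAB (variable names G, dobs, sd are assumptions for illustration; unlike the L1 case, α is a single scalar):

% L-infinity misfit as a linear program: variables [m; alpha], m free, alpha a scalar
% minimize alpha  subject to  (Gm - dobs)./sd <= alpha  and  -(Gm - dobs)./sd <= alpha
M = size(G,2); N = size(G,1);
Gs = diag(1./sd)*G;             % row-scale by the data standard deviations
ds = dobs./sd;
f = [zeros(M,1); 1];            % objective: minimize alpha only
A = [ Gs, -ones(N,1); ...
     -Gs, -ones(N,1)];
b = [ ds; -ds];
lb = [-Inf(M,1); 0];            % m unbounded, alpha >= 0
[xsol, Emin] = linprog(f,A,b,[],[],lb,[]);
mest = xsol(1:M);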

Page 53: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.
Page 54: Lecture 13 L 1, L ∞ Norm Problems and Linear Programming.

[Figure: observed data di plotted against zi, including one outlier]

