
2 Linear Transformations


Transcript

Linear Transformations and Least Squares: Hilbert Spaces


Linear Transformations

A transformation from a vector space $X$ to a vector space $Y$ with the same scalar field, denoted by $L: X \to Y$, is linear when

$$L(x_1 + x_2) = L(x_1) + L(x_2)$$
$$L(ax) = aL(x)$$

where $x, x_1, x_2 \in X$. We can think of the transformation as an operator.


Linear Transformations

Example: A mapping from the vector space $\mathbb{R}^n$ to $\mathbb{R}^m$ can be expressed as an $m \times n$ matrix. Thus the transformation

$$L(x_1, x_2, x_3) = (x_1 + 2x_2,\ 3x_2 + 4x_3)$$

can be written as

$$\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} =
\begin{bmatrix} 1 & 2 & 0 \\ 0 & 3 & 4 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$
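For a quick numerical check (a minimal NumPy sketch added here, not part of the original slides), the matrix above reproduces the componentwise definition of $L$:

```python
import numpy as np

# Matrix representation of L(x1, x2, x3) = (x1 + 2*x2, 3*x2 + 4*x3)
A = np.array([[1, 2, 0],
              [0, 3, 4]])

x = np.array([1.0, -1.0, 2.0])       # an arbitrary test vector in R^3

y_matrix = A @ x                     # y = A x
y_direct = np.array([x[0] + 2*x[1], 3*x[1] + 4*x[2]])

print(y_matrix)                           # [-1.  5.]
print(np.allclose(y_matrix, y_direct))    # True
```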


Range space & Null space

The range space of a transformation $L: X \to Y$ is the set of all vectors that can be reached by the transformation:

$$R(L) = \{\, \mathbf{y} = L(\mathbf{x}) : \mathbf{x} \in X \,\}$$

The null space of the transformation is the set of all vectors in $X$ that are transformed to the null vector in $Y$:

$$N(L) = \{\, \mathbf{x} \in X : L(\mathbf{x}) = \mathbf{0} \,\}$$


Range space & Null space

If $P$ is a projection operator then so is $I - P$. Hence we have

$$\mathbf{x} = P\mathbf{x} + (I - P)\mathbf{x}$$

Thus the vector $\mathbf{x}$ is decomposed into two disjoint parts. These parts are not necessarily orthogonal. If the range and null space are orthogonal then the projection is said to be orthogonal.
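As a concrete illustration (a small NumPy sketch with an example projection chosen for this note, not taken from the slides), the orthogonal projection onto the first coordinate axis in $\mathbb{R}^2$ shows all three properties:

```python
import numpy as np

# Orthogonal projection onto the x-axis in R^2 (illustrative choice)
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])
I = np.eye(2)

x = np.array([3.0, 4.0])

print(np.allclose(P @ P, P))                   # True: P is a projection
print(np.allclose((I - P) @ (I - P), I - P))   # True: I - P is also a projection
print(np.allclose(P @ x + (I - P) @ x, x))     # True: x = Px + (I - P)x
```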


Linear Transformations

Example: Let $\mathbf{x} = [x_1\ x_2\ x_3\ \ldots\ x_m]^T$ and let the transformation be an $n \times m$ matrix $\mathbf{A}$ with columns $\mathbf{p}_1, \ldots, \mathbf{p}_m$. Then

$$\mathbf{A}\mathbf{x} = x_1\mathbf{p}_1 + x_2\mathbf{p}_2 + x_3\mathbf{p}_3 + \ldots + x_m\mathbf{p}_m$$

Thus, the range of the linear transformation (or column space of the matrix $\mathbf{A}$) is the span of these column vectors. The null space is the set of vectors $\mathbf{x}$ which yield $\mathbf{A}\mathbf{x} = \mathbf{0}$.
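To make this concrete, SciPy's `orth` and `null_space` return orthonormal bases for the two spaces; the matrix below is just an illustrative choice (a hedged sketch, not from the slides):

```python
import numpy as np
from scipy.linalg import orth, null_space

# Example matrix whose third column is the sum of the first two,
# so the range is 2-dimensional and the null space is 1-dimensional.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

R = orth(A)          # orthonormal basis for the range (column space) of A
N = null_space(A)    # orthonormal basis for the null space of A

print(R.shape)                   # (3, 2): range has dimension 2
print(N.shape)                   # (3, 1): null space has dimension 1
print(np.allclose(A @ N, 0))     # True: A x = 0 for null-space vectors
```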


A Problem

Given a signal vector $\mathbf{x}$ in the vector space $S$, we want to find the point $\mathbf{v}$ in the subset $V$ of the space nearest to $\mathbf{x}$.

[Figure: the vector $\mathbf{x}$, candidate points $\mathbf{v}_0, \mathbf{v}_1, \mathbf{v}_2$ in the subspace $V$, and the error $\mathbf{w}_0 = \mathbf{x} - \mathbf{v}_0$.]


A Problem

Let us agree that "nearest to $\mathbf{x}$" in the figure is taken in the Euclidean distance sense. The projection $\mathbf{v}_0$ orthogonal to the set $V$ gives the desired solution. Moreover the error of representation is

$$\mathbf{w}_0 = \mathbf{x} - \mathbf{v}_0$$

This vector is clearly orthogonal to the set $V$ (more on this later).


Another perspective

We can look at the above problem as seeking a solution to the set of linear equations

$$\mathbf{A}\mathbf{v} = \mathbf{x}$$

where the given vector $\mathbf{x}$ is not in the range of $\mathbf{A}$, as is the case with an overspecified set of equations. There is no exact solution. If we project the given vector orthogonally into the range of $\mathbf{A}$, we obtain the solution whose error has the smallest Euclidean norm.


Another perspective

The least error is then orthogonal to the data into which we are projecting. Set

$$\mathbf{A} = [\mathbf{p}_1\ \mathbf{p}_2\ \mathbf{p}_3\ \ldots\ \mathbf{p}_m], \qquad \mathbf{v} = [v_1\ v_2\ v_3\ \ldots\ v_m]^T$$

Then as in the above figure we can write

$$\mathbf{x} = \mathbf{A}\mathbf{v} + \mathbf{w}$$

where $\mathbf{w}$ is the error, which is orthogonal to each of the columns $\mathbf{p}_i$ of $\mathbf{A}$ above.


Another perspective

Thus we can write

$$\langle \mathbf{x} - \mathbf{A}\mathbf{v},\ \mathbf{p}_j \rangle = 0, \qquad j = 1, 2, 3, \ldots, m$$

or

$$\begin{bmatrix} \mathbf{p}_1^H \\ \mathbf{p}_2^H \\ \vdots \\ \mathbf{p}_m^H \end{bmatrix}
(\mathbf{x} - \mathbf{A}\mathbf{v}) = \mathbf{0}
\quad\Longrightarrow\quad
\mathbf{A}^H(\mathbf{x} - \mathbf{A}\mathbf{v}) = \mathbf{0}$$

and hence

$$\mathbf{v} = (\mathbf{A}^H\mathbf{A})^{-1}\mathbf{A}^H\mathbf{x}$$


Another perspective

Thus

$$\mathbf{A}\mathbf{v} = \mathbf{A}(\mathbf{A}^H\mathbf{A})^{-1}\mathbf{A}^H\mathbf{x} = \mathbf{P}\mathbf{x}$$

and hence the projection matrix is

$$\mathbf{P} = \mathbf{A}(\mathbf{A}^H\mathbf{A})^{-1}\mathbf{A}^H$$

i.e. this is the matrix that projects orthogonally into the column space of $\mathbf{A}$.
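These formulas are straightforward to check numerically; the NumPy sketch below uses random data purely for illustration (it is not part of the original slides):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))      # tall matrix: overspecified system
x = rng.standard_normal(6)           # given vector, generally not in range(A)

# Coefficients and projection matrix as on the slides:
# v = (A^H A)^{-1} A^H x,   P = A (A^H A)^{-1} A^H
v = np.linalg.solve(A.conj().T @ A, A.conj().T @ x)
P = A @ np.linalg.inv(A.conj().T @ A) @ A.conj().T

print(np.allclose(P @ x, A @ v))                   # projection equals A v
print(np.allclose(P @ P, P))                       # P is a projection
print(np.allclose(A.conj().T @ (x - A @ v), 0))    # error orthogonal to columns of A
print(np.allclose(v, np.linalg.lstsq(A, x, rcond=None)[0]))   # matches least squares
```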


Another perspective

If we adopt the weighted inner product

$$\langle \mathbf{x}, \mathbf{y} \rangle_\mathbf{W} = \mathbf{y}^H\mathbf{W}\mathbf{x}$$

the induced norm is

$$\|\mathbf{x}\|_\mathbf{W}^2 = \mathbf{x}^H\mathbf{W}\mathbf{x}$$

Then the projection matrix is

$$\mathbf{P} = \mathbf{A}(\mathbf{A}^H\mathbf{W}\mathbf{A})^{-1}\mathbf{A}^H\mathbf{W}$$

where $\mathbf{W}$ is positive definite.
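The same checks go through for the weighted case; in the sketch below $\mathbf{W}$ is just a randomly generated positive definite matrix chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))
x = rng.standard_normal(6)

B = rng.standard_normal((6, 6))
W = B @ B.T + 6 * np.eye(6)          # a positive definite weighting matrix

# Weighted projection matrix from the slide: P = A (A^H W A)^{-1} A^H W
P = A @ np.linalg.inv(A.conj().T @ W @ A) @ A.conj().T @ W

print(np.allclose(P @ P, P))                          # still a projection
print(np.allclose(A.conj().T @ W @ (x - P @ x), 0))   # error W-orthogonal to columns of A
```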


Least Squares Projection

PROJECTION THEOREM

In a Hilbert space the orthogonal projection of a signal onto a smaller-dimensional subspace minimises the norm of the error, and the error vector is orthogonal to the data (i.e. to the smaller-dimensional subspace).


Orthogonality Principle

Let $\{\mathbf{p}_1, \mathbf{p}_2, \mathbf{p}_3, \ldots, \mathbf{p}_m\}$ be a set of independent vectors in a vector space $S$. We wish to express any vector $\mathbf{x}$ in $S$ as

$$\mathbf{x} = x_1\mathbf{p}_1 + x_2\mathbf{p}_2 + x_3\mathbf{p}_3 + \ldots + x_m\mathbf{p}_m$$

If $\mathbf{x}$ is in the span of the independent vectors then the representation will be exact. If on the other hand it is not, then there will be an error.


Orthogonality Principle

In the latter case we can write

$$\mathbf{x} = \hat{\mathbf{x}} + \mathbf{e}, \qquad \hat{\mathbf{x}} = \sum_{i=1}^{m} x_i \mathbf{p}_i$$

where $\hat{\mathbf{x}}$ is an approximation to the given vector with error $\mathbf{e}$. We wish to find the approximation which minimises the Euclidean error norm (squared)

$$J(x_1, \ldots, x_m) = \left\langle \mathbf{x} - \sum_{i=1}^{m} x_i\mathbf{p}_i,\ \mathbf{x} - \sum_{i=1}^{m} x_i\mathbf{p}_i \right\rangle$$


Orthogonality Principle

Expanding,

$$J(x_1, \ldots, x_m) = \langle \mathbf{x}, \mathbf{x} \rangle
- 2\,\mathrm{Re}\sum_{i=1}^{m} x_i^{*}\langle \mathbf{x}, \mathbf{p}_i \rangle
+ \sum_{i=1}^{m}\sum_{j=1}^{m} x_i^{*} x_j \langle \mathbf{p}_j, \mathbf{p}_i \rangle$$

i.e.

$$J(x_1, \ldots, x_m) = \|\mathbf{x}\|^2 - 2\,\mathrm{Re}(\boldsymbol{\xi}^H \mathbf{p}) + \boldsymbol{\xi}^H \mathbf{R}\, \boldsymbol{\xi}$$

where

$$\boldsymbol{\xi} = [x_1\ x_2\ \ldots\ x_m]^T, \qquad
\mathbf{p} = [\langle \mathbf{x}, \mathbf{p}_1 \rangle\ \ \langle \mathbf{x}, \mathbf{p}_2 \rangle\ \ \ldots\ \ \langle \mathbf{x}, \mathbf{p}_m \rangle]^T$$


Reminders

$$\frac{\partial}{\partial \mathbf{x}^*}(\mathbf{a}^H\mathbf{x}) = \mathbf{0} \qquad
\frac{\partial}{\partial \mathbf{x}^*}(\mathbf{x}^H\mathbf{a}) = \mathbf{a} \qquad
\frac{\partial}{\partial \mathbf{x}^*}\,\mathrm{Re}(\mathbf{x}^H\mathbf{a}) = \frac{\mathbf{a}}{2} \qquad
\frac{\partial}{\partial \mathbf{x}^*}(\mathbf{x}^H\mathbf{R}\mathbf{x}) = \mathbf{R}\mathbf{x} \ \ (\mathbf{R}^H = \mathbf{R})$$


Orthogonality Principle

Setting the gradient of $J$ (with respect to $\boldsymbol{\xi}^*$, using the reminders above) to zero gives

$$\mathbf{R}\boldsymbol{\xi} = \mathbf{p}$$

and hence the solution

$$\boldsymbol{\xi} = \mathbf{R}^{-1}\mathbf{p}$$

This is a minimum because on differentiating again we obtain the positive definite matrix $\mathbf{R}$.


Alternatively

The norm squared of the error is

$$J = \mathbf{e}^T\mathbf{e}$$

where

$$\mathbf{e} = \mathbf{x} - \sum_{i=1}^{m} x_i\mathbf{p}_i$$

We note that

$$\frac{\partial J}{\partial x_i} = 2\mathbf{e}^T \frac{\partial \mathbf{e}}{\partial x_i}
\qquad \text{and} \qquad
\frac{\partial \mathbf{e}}{\partial x_i} = -\mathbf{p}_i$$


Orthogonality Principle

At the minimum

$$\frac{\partial J}{\partial x_i} = 2\mathbf{e}^T\frac{\partial \mathbf{e}}{\partial x_i} = -2\mathbf{e}^T\mathbf{p}_i = 0$$

Thus we have

$$\langle \mathbf{p}_i, \mathbf{e} \rangle = \langle \mathbf{p}_i, \mathbf{x} - \hat{\mathbf{x}} \rangle = 0$$

and hence

$$\langle \mathbf{p}_i, \mathbf{x} \rangle = \sum_{j=1}^{m} x_j \langle \mathbf{p}_i, \mathbf{p}_j \rangle, \qquad i = 1, \ldots, m$$

Thus,

1) At the optimum the error is orthogonal to the data (principle of orthogonality).

2) The optimal coefficients satisfy the set of equations above.


Orthogonality Principle

Thus, for

$$\mathbf{p} = [\langle \mathbf{x}, \mathbf{p}_1 \rangle\ \ \langle \mathbf{x}, \mathbf{p}_2 \rangle\ \ \ldots\ \ \langle \mathbf{x}, \mathbf{p}_m \rangle]^T, \qquad
\boldsymbol{\xi} = [x_1\ x_2\ \ldots\ x_m]^T$$

$$\mathbf{R} = \begin{bmatrix}
\langle \mathbf{p}_1, \mathbf{p}_1 \rangle & \langle \mathbf{p}_2, \mathbf{p}_1 \rangle & \ldots & \langle \mathbf{p}_m, \mathbf{p}_1 \rangle \\
\langle \mathbf{p}_1, \mathbf{p}_2 \rangle & \langle \mathbf{p}_2, \mathbf{p}_2 \rangle & \ldots & \langle \mathbf{p}_m, \mathbf{p}_2 \rangle \\
\vdots & \vdots & & \vdots \\
\langle \mathbf{p}_1, \mathbf{p}_m \rangle & \langle \mathbf{p}_2, \mathbf{p}_m \rangle & \ldots & \langle \mathbf{p}_m, \mathbf{p}_m \rangle
\end{bmatrix}$$

we have

$$\mathbf{R}\boldsymbol{\xi} = \mathbf{p} \qquad \text{or} \qquad \boldsymbol{\xi} = \mathbf{R}^{-1}\mathbf{p}$$
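The normal equations $\mathbf{R}\boldsymbol{\xi} = \mathbf{p}$ can be verified numerically; the sketch below uses real-valued random vectors chosen for illustration, so the inner products reduce to ordinary dot products:

```python
import numpy as np

rng = np.random.default_rng(2)
P = rng.standard_normal((5, 3))      # columns are the vectors p_1, p_2, p_3
x = rng.standard_normal(5)           # the vector to approximate

# Gram matrix R and right-hand side p (real case: R = P^T P, p = P^T x)
R = P.T @ P
p = P.T @ x

xi = np.linalg.solve(R, p)           # xi = R^{-1} p, the optimal coefficients
x_hat = P @ xi                       # the approximation
e = x - x_hat                        # the error

print(np.allclose(P.T @ e, 0))       # error orthogonal to every p_i
print(np.allclose(xi, np.linalg.lstsq(P, x, rcond=None)[0]))   # same as least squares
```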


Orthogonalisation

A signal may be projected into any linear space. The computation of its coefficients with respect to the various vectors of the selected space is easier when the vectors in the space are orthogonal, in that they are then non-interacting, i.e. the evaluation of one such coefficient will not influence the others. The error norm is also easier to compute. Thus it makes sense to use an orthogonal set of vectors in the space into which we are to project a signal.


Orthogonalisation

Given any set of linearly independent vectors that span a certain space, there is another set of independent vectors of the same cardinality, pairwise orthogonal, that spans the same space. We can think of the given set as linear combinations of the orthogonal vectors. Hence, because of independence, each orthogonal vector is a linear combination of the given vectors. This is the basic idea behind the Gram-Schmidt procedure.


Gram-Schmidt Orthogonalisation

The problem (we consider finite-dimensional spaces only): given a set of linearly independent vectors $\{\mathbf{x}_i\}$, determine a set of vectors $\{\mathbf{p}_i\}$ that are pairwise orthogonal.

Write the $i$th vector as

$$\mathbf{x}_i = x_1^{(i)}\mathbf{p}_1 + x_2^{(i)}\mathbf{p}_2 + x_3^{(i)}\mathbf{p}_3 + \ldots + x_m^{(i)}\mathbf{p}_m, \qquad i = 1, \ldots, m$$


Gram-Schmidt Orthogonalisation

If we knew the orthogonal set $\{\mathbf{p}_j\}$ then the coefficients of the expression could be determined from the inner products

$$x_j^{(i)} = \frac{\langle \mathbf{x}_i, \mathbf{p}_j \rangle}{\|\mathbf{p}_j\|^2}$$

Step (1): The unknown orthogonal set can be oriented such that one of its members coincides with one of the members of the given set. Choose $\mathbf{p}_1$ to be coincident with $\mathbf{x}_1$.


Gram-Schmidt Orthogonalisation

Step (2): Each member of $\{\mathbf{x}_i\}$ has a projection onto $\mathbf{p}_1$ given by

$$x_1^{(i)} = \frac{\langle \mathbf{x}_i, \mathbf{p}_1 \rangle}{\|\mathbf{p}_1\|^2}$$

Step (3): We construct

$$\mathbf{u}_i = \mathbf{x}_i - x_1^{(i)}\mathbf{p}_1, \qquad i = 2, \ldots, m$$

Step (4): Repeat the above on the set $\{\mathbf{u}_i\}$.


Gram-Schmidt Orthogonalisation

Example: Let

$$\mathbf{x}_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad \mathbf{x}_2 = \begin{bmatrix} -2 \\ 1 \end{bmatrix}$$

Then

$$\mathbf{p}_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$

and the projection of $\mathbf{x}_2$ onto $\mathbf{p}_1$ is

$$\frac{\langle \mathbf{x}_2, \mathbf{p}_1 \rangle}{\|\mathbf{p}_1\|^2}\,\mathbf{p}_1 = -\frac{1}{2}\begin{bmatrix} 1 \\ 1 \end{bmatrix}$$


Gram-Schmidt Orthogonalisation

Form

$$\mathbf{p}_2 = \mathbf{x}_2 - \frac{\langle \mathbf{x}_2, \mathbf{p}_1 \rangle}{\|\mathbf{p}_1\|^2}\,\mathbf{p}_1
= \begin{bmatrix} -2 \\ 1 \end{bmatrix} + \frac{1}{2}\begin{bmatrix} 1 \\ 1 \end{bmatrix}$$

Then

$$\mathbf{p}_2 = \begin{bmatrix} -1.5 \\ 1.5 \end{bmatrix}$$
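The example above can be reproduced with a short classical Gram-Schmidt sketch (illustrative code, not from the slides):

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: return a pairwise orthogonal set spanning the same space."""
    ortho = []
    for x in vectors:
        p = x.astype(float)
        for q in ortho:
            p = p - (np.dot(x, q) / np.dot(q, q)) * q   # subtract projection of x onto q
        ortho.append(p)
    return ortho

# The example from the slides: x1 = [1, 1]^T, x2 = [-2, 1]^T
x1 = np.array([1.0, 1.0])
x2 = np.array([-2.0, 1.0])

p1, p2 = gram_schmidt([x1, x2])
print(p1)                                 # [1. 1.]
print(p2)                                 # [-1.5  1.5]
print(np.isclose(np.dot(p1, p2), 0.0))    # True: p1 and p2 are orthogonal
```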


Gram-Schmidt

[Figure: the vectors $\mathbf{x}_1$ and $\mathbf{x}_2$, with $\mathbf{p}_1 = \mathbf{x}_1$, $\mathbf{p}_2 = \mathbf{x}_2 - a\,\mathbf{x}_1$, and $a\,\mathbf{x}_1$ the projection of $\mathbf{x}_2$ onto $\mathbf{x}_1$.]


3-D G-S Orthogonalisation

[Figure: a three-dimensional illustration of Gram-Schmidt orthogonalisation.]


Gram-Schmidt Orthogonalisation

Note that in the previous 4 steps we have considerable freedom at Step (1) to choose any vector $\mathbf{p}_1$, not necessarily coincident with one from the given set of data vectors. This enables us to avoid certain numerical ill-conditioning problems that may arise in the Gram-Schmidt case.

Can you suggest when we are likely to have ill-conditioning in the G-S procedure?
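One (non-authoritative) illustration: ill-conditioning tends to appear when the given vectors are nearly linearly dependent. The sketch below uses a Läuchli-style example, chosen for this note rather than taken from the slides, in which classical Gram-Schmidt loses orthogonality badly in finite precision:

```python
import numpy as np

# Three nearly linearly dependent vectors (all close to [1, 0, 0, 0]).
eps = 1e-8
x1 = np.array([1.0, eps, 0.0, 0.0])
x2 = np.array([1.0, 0.0, eps, 0.0])
x3 = np.array([1.0, 0.0, 0.0, eps])

def proj(v, p):
    """Projection of v onto p."""
    return (np.dot(v, p) / np.dot(p, p)) * p

# Classical Gram-Schmidt: each new vector is orthogonalised against the
# previously computed p's using the *original* input vector.
p1 = x1
p2 = x2 - proj(x2, p1)
p3 = x3 - proj(x3, p1) - proj(x3, p2)

cos23 = np.dot(p2, p3) / (np.linalg.norm(p2) * np.linalg.norm(p3))
print(cos23)   # about 0.5: p2 and p3 are far from orthogonal in finite precision
```

Exploiting the freedom mentioned above, for example choosing a better-conditioned starting vector or re-orthogonalising the intermediate results, is one way such problems can be mitigated.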

