
Machine Learning for Signal Processing

Fundamentals of Linear Algebra. Class 2, 22 Jan 2015

Instructor: Bhiksha Raj

Overview

• Vectors and matrices
• Basic vector/matrix operations
• Various matrix types
• Projections

Book
• Fundamentals of Linear Algebra, Gilbert Strang

• Important to be very comfortable with linear algebra
  – Appears repeatedly in the form of eigen analysis, SVD, factor analysis
  – Appears through various properties of matrices that are used in machine learning
  – Often used in the processing of data of various kinds
  – Will use sound and images as examples

• Today's lecture: definitions
  – A very small subset of all that's used
  – An important subset, intended to help you recollect

Incentive to use linear algebra

• Simplified notation!

• Easier intuition
  – Really convenient geometric interpretations

• Easy code translation!

c = 0;
for i = 1:n
  for j = 1:m
    c = c + x(i)*a(i,j)*y(j);
  end
end

c = x*A*y      (with x a row vector, y a column vector)

sum_i sum_j x_i a_ij y_j  =  x^T A y

And other things you can do

• Manipulate data
• Extract information from data
• Represent data
• Etc.

[Figure: rotation + projection + scaling + perspective applied to an image; a time-frequency spectrogram from Bach's Fugue in G minor and its decomposition (NMF)]

Scalars, vectors, matrices, …
• A scalar a is a number

– a = 2, a = 3.14, a = -1000, etc.

• A vector a is a linear arrangement of a collection of scalars

• A matrix A is a rectangular arrangement of a collection of scalars

Examples: a = [1 2 3] (a vector);  A = [3.12 10; 10.0 2] (a matrix)

Vectors in the abstract
• Ordered collection of numbers
  – Examples: [3 4 5], [a b c d], ..
  – [3 4 5] != [4 3 5]: order is important

• Typically viewed as identifying (the path from origin to) a location in an N-dimensional space

[Figure: the points (3,4,5) and (4,3,5) plotted in x-y-z space]

Vectors in reality
• Vectors usually hold sets of numerical attributes
  – X, Y, Z coordinates: [1, 2, 0]
  – [height(cm) weight(kg)]: [175 72]
  – A location in Manhattan: [3av 33st]
• A series of daily temperatures
• Samples in an audio signal
• Etc.

[Figure: locations in Manhattan as [avenue street] vectors, e.g. [2av 4st], [1av 8st], [-2.5av 6st]]

Matrices
• Matrices can be square or rectangular
  – Can hold data
    • Images, collections of sounds, etc.
    • Or represent operations, as we shall see
  – A matrix can be a vertical stacking of row vectors
  – Or a horizontal arrangement of column vectors

S = [a b; c d],  R = [a b c; d e f],  M = (e.g. an image such as the pacman picture)

R = [a b c; d e f] is a stacking of the row vectors [a b c] and [d e f],
or an arrangement of the column vectors [a; d], [b; e], [c; f]

Dimensions of a matrix
• The matrix size is specified by the number of rows and columns
  – c = 3 x 1 matrix: 3 rows and 1 column
  – r = 1 x 3 matrix: 1 row and 3 columns
  – S = 2 x 2 matrix
  – R = 2 x 3 matrix
  – Pacman = 321 x 399 matrix

c = [a; b; c],  r = [a b c],  S = [a b; c d],  R = [a b c; d e f]

Representing an image as a matrix
• 3 pacmen in the image
• A 321 x 399 matrix
  – Row and column = position
• A 3 x 128079 matrix
  – Triples of x, y and pixel value
• A 1 x 128079 vector
  – "Unraveling" the matrix
• Note: all of these can be recast as the matrix that forms the image
  – Representations 2 and 4 are equivalent
    • The position is not explicitly represented

[Example matrices: pixel values indexed by row and column; (X, Y, value) triples; values only, with X and Y implicit]

Basic arithmetic operations

• Addition and subtraction
  – Element-wise operations

a + b = [a1; a2; a3] + [b1; b2; b3] = [a1+b1; a2+b2; a3+b3]
a - b = [a1; a2; a3] - [b1; b2; b3] = [a1-b1; a2-b2; a3-b3]
A + B = [a11 a12; a21 a22] + [b11 b12; b21 b22] = [a11+b11  a12+b12; a21+b21  a22+b22]

Vector Operations

• Operations tell us how to get from the origin to the result of the vector operation
  – (3,4,5) + (3,-2,-3) = (6,2,2)

[Figure: adding (3,4,5) and (3,-2,-3) head to tail gives (6,2,2)]

Vector norm
• Measure of how long a vector is:
  – Represented as ||x||
• Geometrically, the shortest distance to travel from the origin to the destination
  – As the crow flies
  – Assuming Euclidean geometry
• MATLAB syntax: norm(x)

||[a b ...]|| = sqrt(a^2 + b^2 + ...)

[Figure: the vector (3,4,5) has length sqrt(3^2 + 4^2 + 5^2); distances in Manhattan, e.g. between [-2av 17st] and [-6av 10st]]
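A quick check of the norm in MATLAB (a minimal sketch, using the vector from the figure):

x = [3 4 5];
sqrt(sum(x.^2))   % 7.0711
norm(x)           % same value: the built-in Euclidean norm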

Transposition
• A transposed row vector becomes a column (and vice versa)
• A transposed matrix gets all its row (or column) vectors transposed in order
• MATLAB syntax: a'

x = [a; b; c],  x^T = [a b c]
y = [a b c],    y^T = [a; b; c]
X = [a b c; d e f],  X^T = [a d; b e; c f]
M = (an image),  M^T = (the transposed image)

Vector multiplication
• Multiplication by scalar
• Dot product, or inner product
  – Vectors must have the same number of elements
  – Row vector times column vector = scalar
• Outer product or vector direct product
  – Column vector times row vector = matrix

Scalar multiplication:  d . [a; b; c] = [ad; bd; cd]
Inner product:  [a b c] [d; e; f] = ad + be + cf
Outer product:  [a; b; c] [d e f] = [ad ae af; bd be bf; cd ce cf]
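A small MATLAB illustration of these three products (values chosen arbitrarily):

a = [1; 2; 3];     % column vector
b = [4 5 6];       % row vector
3 * a              % scalar multiplication: [3; 6; 9]
b * a              % inner product (1x3 times 3x1): 4 + 10 + 18 = 32
a * b              % outer product (3x1 times 1x3): a 3 x 3 matrix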

Vector dot product
• Example:
  – Coordinates are yards, not ave/st
  – a = [200 1600], b = [770 300]
  – norm(a) ≈ 1612, norm(b) ≈ 826
• The dot product of the two vectors relates to the length of a projection
  – How much of the first vector have we covered by following the second one?
  – Must normalize by the length of the "target" vector:
    a . b / ||a|| = ([200 1600] . [770 300]) / 1612 ≈ 393 yd
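The same computation in MATLAB, as a quick check of the numbers on the slide:

a = [200 1600];  b = [770 300];
norm(a)               % ≈ 1612
norm(b)               % ≈ 826
dot(a, b) / norm(a)   % ≈ 393: length of b's projection onto a, in yards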

Vector dot product
• Vectors are spectra
  – Energy at a discrete set of frequencies
  – Actually 1 x 4096
  – X axis is the index of the number in the vector
    • Represents frequency
  – Y axis is the value of the number in the vector
    • Represents magnitude

[Figure: sqrt(energy) vs. frequency spectra of the notes C, E and C2]

Vector dot product
• How much of C is also in E?
  – How much can you fake a C by playing an E?
  – C.E / (|C||E|) = 0.1
  – Not very much
• How much of C is in C2?
  – C.C2 / (|C||C2|) = 0.5
  – Not bad, you can fake it
• To do this, C, E, and C2 must be the same size

[Figure: spectra of C, E and C2, as before]
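A sketch of this normalized dot-product similarity in MATLAB; the spectra here are random stand-ins, not the actual note spectra from the slide:

c  = rand(1, 4096);   % stand-in for the 1 x 4096 spectrum of C
e  = rand(1, 4096);   % stand-in for E
c2 = rand(1, 4096);   % stand-in for C2
sim = @(u, v) dot(u, v) / (norm(u) * norm(v));   % normalized dot product
sim(c, e)    % "how much of C is in E"
sim(c, c2)   % "how much of C is in C2"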

C E C2

Vector outer product

• The column vector is the spectrum
• The row vector is an amplitude modulation
• The outer product is a spectrogram
  – Shows how the energy in each frequency varies with time
  – The pattern in each column is a scaled version of the spectrum
  – Each row is a scaled version of the modulation
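A minimal MATLAB sketch of this idea, with an invented spectrum and modulation:

spectrum   = [1; 4; 9; 2];        % hypothetical column spectrum (4 frequencies)
modulation = [0 .5 1 .75 .5 0];   % hypothetical amplitude envelope over 6 frames
S = spectrum * modulation;        % 4 x 6 "spectrogram": column k = modulation(k) * spectrum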

Multiplying a vector by a matrix
• Generalization of vector scaling
  – Left multiplication: dot product of each vector pair
  – Dimensions must match!!
    • No. of columns of matrix = size of vector
    • Result inherits the number of rows from the matrix

A b = [a1; a2] b = [a1 . b; a2 . b]    (a1, a2 are the row vectors of A)
d . [a; b; c] = [ad; bd; cd]           (compare: scaling a vector)

Multiplying a vector by a matrix
• Generalization of vector multiplication
  – Right multiplication: dot product of each vector pair
  – Dimensions must match!!
    • No. of rows of matrix = size of vector
    • Result inherits the number of columns from the matrix

a B = a [b1 b2] = [a . b1   a . b2]    (b1, b2 are the column vectors of B)
[a b c] . d = [ad bd cd]               (compare: scaling a row vector)
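Both cases in MATLAB, with small made-up numbers:

A = [1 2 3; 4 5 6];   % 2 x 3
b = [1; 0; 2];        % 3 x 1 column
A * b                 % 2 x 1, [7; 16]: each row of A dotted with b
a = [1 2];            % 1 x 2 row
a * A                 % 1 x 3, [9 12 15]: a dotted with each column of A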

Multiplication of vector space by matrix

• The matrix rotates and scales the space
  – Including its own vectors

[Example: a 2 x 2 transform Y with entries 0.3, 0.7, 1.3, 1.6; the figure shows the rotated and scaled plane]

Multiplication of vector space by matrix

• The normals to the row vectors in the matrix become the new axes
  – X axis = normal to the second row vector
    • Scaled by the inverse of the length of the first row vector

[Example: the same 2 x 2 transform Y as above]

Matrix Multiplication

• The k-th axis corresponds to the normal to the hyperplane represented by the 1..k-1, k+1..N-th row vectors in the matrix
  – Any set of K-1 vectors represents a hyperplane of dimension K-1 or less
• The distance along the new axis equals the inner product with the k-th row vector
  – Expressed in inverse lengths of the vector

[Example: a generic 3 x 3 matrix with entries a through i]

Matrix Multiplication: Column space

• So much for spaces .. what does multiplying a matrix by a vector really do?
• It mixes the column vectors of the matrix using the numbers in the vector
• The column space of the matrix is the complete set of all vectors that can be formed by mixing its columns

[a b c; d e f] [x; y; z] = x [a; d] + y [b; e] + z [c; f]
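A quick MATLAB check that a matrix-vector product is a weighted mix of the columns (arbitrary numbers):

A = [1 2 3; 4 5 6];
v = [2; -1; 3];
A * v                              % [9; 21]
2*A(:,1) - 1*A(:,2) + 3*A(:,3)     % the same thing, built by mixing the columns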

Matrix Multiplication: Row space

• Left multiplication mixes the row vectors of the matrix.

• The row space of the Matrix is the complete set of all vectors that can be formed by mixing its rows

[x y] [a b c; d e f] = x [a b c] + y [d e f]

Matrix multiplication: Mixing vectors

• A physical example
  – The three column vectors of the matrix X are the spectra of three notes
  – The multiplying column vector Y is just a mixing vector
  – The result is a sound that is the mixture of the three notes

[Example: X (spectra of three notes, one per column) times Y = [1; 2; 1] gives the mixed spectrum X Y]

Matrix multiplication: Mixing vectors

• Mixing two images
  – The images are arranged as columns
    • position value not included
  – The result of the multiplication is rearranged as an image

[Example: two 200 x 200 images unraveled into a 40000 x 2 matrix; multiplying by a 2 x 1 mixing vector (weights 0.25 and 0.75) gives a 40000 x 1 vector, rearranged back into a 200 x 200 image]
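A sketch of that image-mixing operation in MATLAB; the two images are random stand-ins rather than the pictures on the slide:

I1 = rand(200, 200);              % stand-ins for two 200 x 200 grayscale images
I2 = rand(200, 200);
X  = [I1(:) I2(:)];               % each image unraveled into one column (40000 x 2)
y  = [0.25; 0.75];                % mixing weights
mix = reshape(X * y, size(I1));   % 40000 x 1, reshaped back to 200 x 200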

Multiplying matrices

• Simple vector multiplication: Vector outer product

a b = [a1; a2] [b1 b2] = [a1 b1   a1 b2; a2 b1   a2 b2]

Multiplying matrices

• Generalization of vector multiplication
  – Each entry is a dot product of a row of the first matrix with a column of the second
  – Dimensions must match!!
    • Columns of first matrix = rows of second
    • Result inherits the number of rows from the first matrix and the number of columns from the second matrix

A B = [a1; a2] [b1 b2] = [a1 . b1   a1 . b2; a2 . b1   a2 . b2]
      (a1, a2 are the rows of A; b1, b2 are the columns of B)

Multiplying matrices: Another view

• Simple vector multiplication: Vector inner product

a b = [a1 a2] [b1; b2] = a1 b1 + a2 b2

Matrix multiplication: another view

A B = (column 1 of A)(row 1 of B) + (column 2 of A)(row 2 of B) + ... + (column N of A)(row N of B)

• The outer product of the first column of A and the first row of B, plus the outer product of the second column of A and the second row of B, and so on
• Matrix multiplication is a sum of outer products:
  A B = [a1 a2] [b1; b2] = a1 b1 + a2 b2   (a1, a2 columns of A; b1, b2 rows of B)
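A numerical check of the sum-of-outer-products view in MATLAB (arbitrary small matrices):

A = [1 2; 3 4; 5 6];            % 3 x 2
B = [7 8 9; 1 0 2];             % 2 x 3
P1 = A(:,1) * B(1,:);           % outer product: column 1 of A with row 1 of B
P2 = A(:,2) * B(2,:);           % outer product: column 2 of A with row 2 of B
max(max(abs(A*B - (P1 + P2))))  % 0: the two computations agree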

Why is that useful?

• Sounds: Three notes modulated independently

[Example: X holds the spectra of three notes (one per column); Y holds three rows of time-varying amplitudes; X Y is the spectrogram of the three notes modulated independently]

Matrix multiplication: Mixing modulated spectra
• Sounds: Three notes modulated independently
  – Each column of X holds a note's spectrum; each row of Y holds that note's amplitude over time
  – The product X Y is the spectrogram of the mixture

[The same example is built up over several slides, adding one modulated note at a time]


Matrix multiplication: Image transition

• Image 1 fades out linearly
• Image 2 fades in linearly

[Example: the two images are unraveled into the columns of a matrix; multiplying by the 2 x 11 weight matrix [1 .9 .8 .. .1 0; 0 .1 .2 .. .9 1] produces the sequence of cross-fade frames]

Matrix multiplication: Image transition

• Each column is one image
  – The columns represent a sequence of images of decreasing intensity
• Image 1 fades out linearly

[Example: the first image's column, scaled by 1, 0.9, 0.8, .., produces progressively dimmer copies of image 1]

Matrix multiplication: Image transition

• Image 2 fades in linearly

[Example: the second image's column, scaled by 0, 0.1, 0.2, .., 1, produces progressively brighter copies of image 2]

Matrix multiplication: Image transition

• Image 1 fades out linearly
• Image 2 fades in linearly

[Example: combining both columns with the fade weights gives the full cross-fade sequence]
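A sketch of the whole cross-fade as one matrix product; the two images are random stand-ins for the pictures on the slide:

I1 = rand(200, 200);                    % stand-ins for two equal-sized grayscale images
I2 = rand(200, 200);
X  = [I1(:) I2(:)];                     % each image is one column (40000 x 2)
W  = [linspace(1, 0, 11); linspace(0, 1, 11)];   % 2 x 11 fade-out / fade-in weights
F  = X * W;                             % each column of F is one frame of the transition
frame5 = reshape(F(:,5), size(I1));     % e.g. the 5th frame, back in image shape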

The Identity Matrix

• An identity matrix is a square matrix where
  – All diagonal elements are 1.0
  – All off-diagonal elements are 0.0
• Multiplication by an identity matrix does not change vectors

Y = [1 0; 0 1]

Diagonal Matrix

• All off-diagonal elements are zero
• Diagonal elements are non-zero
• Scales the axes
  – May flip axes (if a diagonal entry is negative)

Y = [2 0; 0 1]

Diagonal matrix to transform images

• How?

Stretching
• Location-based representation: each pixel is a column [x; y; value]
• Scaling matrix [2 0 0; 0 1 0; 0 0 1] only scales the X axis
  – The Y axis and pixel value are scaled by identity
• Not a good way of scaling.

Stretching

• Better way: interpolate

Newpic (twice as wide) = A D, where A is the original image and D is an N x 2N
interpolation matrix with entries 1 and 0.5 that blend neighbouring columns
(N is the width of the original image)
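A minimal sketch of such an interpolating stretch in MATLAB; the exact D on the slide may differ, this version just doubles the width by averaging neighbouring columns:

A = rand(50, 50);                 % stand-in for an image, N = 50 columns
N = size(A, 2);
D = zeros(N, 2*N);
for k = 1:N
    D(k, 2*k-1) = 1;              % odd output columns copy the original column
    D(k, 2*k)   = 0.5;            % even output columns average neighbours
    if k < N
        D(k+1, 2*k) = 0.5;
    end
end
Newpic = A * D;                   % 50 x 100: image stretched to twice the width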

Modifying color

• Scale only Green

P = [R G B]   (the image's red, green and blue channels as columns)
Newpic = P [1 0 0; 0 2 0; 0 0 1]   (scales only the green channel)

Permutation Matrix

• A permutation matrix simply rearranges the axes
  – The row entries are axis vectors in a different order
  – The result is a combination of rotations and reflections
• The permutation matrix effectively permutes the arrangement of the elements in a vector

[0 1 0; 0 0 1; 1 0 0] [x; y; z] = [y; z; x]

[Figure: the point (3,4,5) plotted against the original axes and against the permuted axes X (old Y), Y (old Z), Z (old X)]

Permutation Matrix

• Reflections and 90 degree rotations of images and objects

[Example: two permutation matrices applied to the (x, y, value) representation of the image produce reflections and 90-degree rotations]

Permutation Matrix

• Reflections and 90 degree rotations of images and objects– Object represented as a matrix of 3-Dimensional “position” vectors– Positions identify each point on the surface

[Example: the object is a 3 x N matrix whose columns are (x, y, z) surface points; multiplying by a permutation matrix P reflects the object or rotates it by 90 degrees]

Rotation Matrix

• A rotation matrix rotates the vector by some angle θ
• Alternately viewed, it rotates the axes
  – The new axes are at an angle θ to the old ones

X = [x; y],  Xnew = [x'; y'],  R = [cos θ  -sin θ; sin θ  cos θ]

Xnew = R X:
x' = x cos θ - y sin θ
y' = x sin θ + y cos θ

[Figure: the point (x, y) rotated to (x', y')]

Rotating a picture

• Note the representation: a 3-row matrix
  – Rotation only applies to the "coordinate" rows
  – The value does not change
  – Why is pacman grainy?

R = [cos 45  -sin 45  0; sin 45  cos 45  0; 0  0  1]

[Example: R applied to the (x, y, value) matrix of the image rotates the pixel positions by 45 degrees; the values are unchanged]
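A sketch of rotating pixel coordinates in MATLAB; the [x; y; value] columns here are made up, not the actual image data:

theta = 45 * pi / 180;
R = [cos(theta) -sin(theta) 0;
     sin(theta)  cos(theta) 0;
     0           0          1];
coords = [10 20 30; 5 5 5; 128 64 255];   % hypothetical [x; y; value] columns
rotated = R * coords;                     % x and y rows rotate, value row is untouched
% the rotated x, y are no longer integers: rounding them to pixel positions
% leaves holes, which is why the rotated pacman looks grainy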

3-D Rotation

• 2 degrees of freedom
  – 2 separate angles
• What will the rotation matrix be?

[Figure: 3-D axes X, Y, Z rotated to new axes Xnew, Ynew, Znew]

Orthogonal/Orthonormal vectors

• Two vectors are orthogonal if they are perpendicular to one another
  – A.B = 0
  – A vector that is perpendicular to a plane is orthogonal to every vector on the plane
• Two vectors are orthonormal if
  – They are orthogonal
  – The length of each vector is 1.0
  – Orthogonal vectors can be made orthonormal by normalizing their lengths to 1.0

A = [x y z], B = [u v w]:  A.B = xu + yv + zw = 0

Orthogonal matrices

• Orthogonal matrix: A A^T = A^T A = I
  – The matrix is square
  – All row vectors are orthonormal to one another
    • Every vector is perpendicular to the hyperplane formed by all the other vectors
  – All column vectors are also orthonormal to one another
  – Observation: in an orthogonal matrix, if the length of the row vectors is 1.0, the length of the column vectors is also 1.0
  – Observation: in an orthogonal matrix, no more than one row can have all entries with the same polarity (+ve or -ve)

[Example: a 3 x 3 matrix whose three rows are mutually at 90 degrees to one another]
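A quick MATLAB check that a rotation matrix is orthogonal (the specific 3 x 3 example on the slide is not reproduced here):

theta = 30 * pi / 180;
A = [cos(theta) -sin(theta); sin(theta) cos(theta)];   % a rotation: orthogonal
A * A'          % identity, up to rounding
A' * A          % identity as well
norm(A(1,:))    % each row has length 1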

Orthogonal and Orthonormal Matrices

• Orthogonal matrices retain the lengths of, and relative angles between, transformed vectors
  – Essentially, they are combinations of rotations, reflections and permutations
  – Rotation matrices and permutation matrices are all orthonormal

[Figure: a vector x and its transform Ax have the same length]

Orthogonal and Orthonormal Matrices

• If the vectors in the matrix are not unit length, it cannot be orthogonal
  – A A^T != I and A^T A != I
  – A A^T = diagonal or A^T A = diagonal, but not both
  – If all the vectors are the same length, we can get A A^T = A^T A = diagonal, though
• A non-square matrix cannot be orthogonal
  – A A^T = I or A^T A = I, but not both

[Example: 2 x 3 and 3 x 2 matrices with orthogonal but non-unit-length rows or columns]

Matrix Operations: Properties

• A + B = B + A
• A B != B A   (in general)

Matrix Inversion
• A matrix transforms an N-dimensional object to a different N-dimensional object
• What transforms the new object back to the original?
  – The inverse transformation
• The inverse transformation is called the matrix inverse

[Example: a 3 x 3 transform T and its as-yet-unknown inverse Q = T^-1 = [? ? ?; ? ? ?; ? ? ?]]

Matrix Inversion

• The product of a matrix and its inverse is the identity matrix
  – Transforming an object, and then inverse-transforming it, gives us back the original object

T^-1 T D = D,  T^-1 T = I
T T^-1 D = D,  T T^-1 = I

Matrix inversion (division)
• The inverse of matrix multiplication
  – Not element-wise division!!
• Provides a way to "undo" a linear transformation
  – Inverse of the unit (identity) matrix is itself
  – Inverse of a diagonal matrix is diagonal
  – Inverse of a rotation is a (counter)rotation (its transpose!)
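A few of these facts checked in MATLAB (arbitrary example matrices):

A = [2 1; 1 3];
A * inv(A)                    % identity, up to rounding
D = diag([2 5]);
inv(D)                        % diag([0.5 0.2]): inverse of a diagonal is diagonal
R = [cosd(30) -sind(30); sind(30) cosd(30)];
max(max(abs(inv(R) - R')))    % ~0: inverse of a rotation is its transpose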

Projections

• What would we see if the cone to the left were transparent and we looked at it from above the plane shown by the grid?
  – Looking along the normal to the plane
  – Answer: the figure to the right
• How do we get this? Projection

Projection Matrix

• Consider any plane specified by a set of vectors W1, W2 ..
  – Or the matrix [W1 W2 ..]
  – Any vector can be projected onto this plane
  – The matrix A that rotates and scales the vector so that it becomes its projection is a projection matrix

[Figure: a vector, its projection onto the plane spanned by W1 and W2, and the 90-degree angle between the error and the plane]

Projection Matrix

• Given a set of vectors W1, W2, .. which form a matrix W = [W1 W2 ..]
• The projection matrix that transforms any vector X to its projection on the plane is
  – P = W (W^T W)^-1 W^T
    • We will visit matrix inversion shortly
• Magic: any set of vectors from the same plane, expressed as a matrix V, gives the same projection matrix
  – P = V (V^T V)^-1 V^T

[Figure: projection onto the plane spanned by W1 and W2]
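A small MATLAB sketch of the projection-matrix formula, with two arbitrary plane-spanning vectors:

W = [1 0; 0 1; 1 1];            % columns W1, W2 span a plane in 3-D
P = W * inv(W' * W) * W';       % P = W (W^T W)^-1 W^T
x = [3; 4; 5];
p = P * x;                      % projection of x onto the plane
P * p - p                       % ~0: projecting a projection changes nothing
max(max(abs(P*P - P)))          % ~0: P is idempotent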

Projections

• HOW?


Projections

• Draw any two vectors W1 and W2 that lie on the plane
  – ANY two, so long as they have different angles
• Compose a matrix W = [W1 W2]
• Compose the projection matrix P = W (W^T W)^-1 W^T
• Multiply every point on the cone by P to get its projection
• View it
  – I'm missing a step here – what is it?

Projections

• The projection actually projects the cone onto the plane, but you're still seeing the plane in 3-D
  – The result of the projection is a 3-D vector
  – P = W (W^T W)^-1 W^T is 3x3, P*vector is 3x1
  – The image must be rotated until the plane is in the plane of the paper
    • The Z axis in this case will always be zero and can be ignored
    • How will you rotate it? (remember you know W1 and W2)

Projection matrix properties

• The projection of any vector that is already on the plane is the vector itself
  – Px = x if x is on the plane
  – If the object is already on the plane, there is no further projection to be performed
• The projection of a projection is the projection
  – P (Px) = Px
  – That is because Px is already on the plane
• Projection matrices are idempotent
  – P^2 = P
  – Follows from the above

Projections: A more physical meaning
• Let W1, W2 .. Wk be "bases"
• We want to explain our data in terms of these "bases"
  – We often cannot do so exactly
  – But we can explain a significant portion of it
• The portion of the data that can be expressed in terms of the vectors W1, W2, .. Wk is the projection of the data onto the W1 .. Wk (hyper)plane
  – In our previous example, the "data" were all the points on a cone, and the bases were vectors on the plane

Projection : an example with sounds

• The spectrogram (matrix) of a piece of music

How much of the above music was composed of the above notes? I.e., how much of it can be explained by the notes?

Projection: one note

• The spectrogram (matrix) of a piece of music

M = spectrogram;  W = note
P = W (W^T W)^-1 W^T
Projected Spectrogram = P * M
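A MATLAB sketch of projecting a spectrogram onto one note; M and W here are random stand-ins for the spectrogram and note spectrum on the slide:

M = abs(randn(1024, 200));      % stand-in spectrogram: frequencies x frames
W = abs(randn(1024, 1));        % stand-in note spectrum as a column
P = W * inv(W' * W) * W';       % P = W (W^T W)^-1 W^T
projected = P * M;              % the part of each frame explained by the note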

Projection: one note – cleaned up

• The spectrogram (matrix) of a piece of music

(All matrix values below a threshold have been floored to zero)

Projection: multiple notes

• The spectrogram (matrix) of a piece of music

P = W (W^T W)^-1 W^T,  where W now has one column per note
Projected Spectrogram = P * M

Projection: multiple notes, cleaned up

• The spectrogram (matrix) of a piece of music

P = W (W^T W)^-1 W^T
Projected Spectrogram = P * M

Projection and Least Squares
• Projection actually computes a least squared error estimate
• For each vector V in the music spectrogram matrix
  – Approximation: Vapprox = a*note1 + b*note2 + c*note3 ..
  – Error vector: E = V - Vapprox
  – Squared error energy for V: e(V) = norm(E)^2
  – Total error = sum over all V of e(V)
• Projection computes Vapprox for all vectors such that the total error is minimized
  – It does not give you "a", "b", "c" .. though
    • That needs a different operation – the inverse / pseudo-inverse

[Figure: V approximated as a combination of note1, note2, note3 with weights a, b, c]

Perspective

• The picture is the equivalent of "painting" the viewed scenery on a glass window
• Feature: the lines connecting any point in the scenery and its projection on the window merge at a common point
  – The eye
  – As a result, parallel lines in the scene appear to merge at a point

An aside on Perspective..

• Perspective is the result of convergence of the image to a point
• Convergence can be to multiple points
  – Top left: one-point perspective
  – Top right: two-point perspective
  – Right: three-point perspective

Representing Perspective

• Perspective was not always understood.
• Carefully represented perspective can create illusions.

Central Projection

• The positions on the "window" are scaled along the line of sight
• To compute the (x, y) position on the window, we need z (distance of the window from the eye) and (x', y', z') (the location being projected)

x/x' = y/y' = z/z'   (property of a line through the origin)
=>  x = x' (z/z'),  y = y' (z/z')

[Figure: the eye at the origin, the window at distance z, and a scene point (x', y', z') projecting to (x, y) on the window]

Homogeneous Coordinates

• Represent points by a triplet
  – Using the yellow window as reference:
  – (x, y) = (x, y, 1)
  – (x', y') = (x, y, c'),  c' = z/z'
  – Locations on the line are generally represented as (x, y, c)
    • x' = x/c',  y' = y/c'

[Figure: the reference window and a point (x', y', z') on the same line of sight as (x, y)]

Homogeneous Coordinates in 3-D

• Points are represented using FOUR coordinates
  – (X, Y, Z, c)
  – "c" is the "scaling" factor that represents the distance of the actual scene point
• Actual Cartesian coordinates:
  – Xactual = X/c,  Yactual = Y/c,  Zactual = Z/c

[Figure: scene points (x1', y1', z1') and (x2', y2', z2') and their scaled representations (x1, y1, z1), (x2, y2, z2) on the reference window]

Homogeneous Coordinates

• In both cases, the constant "c" represents distance along the line with respect to a reference window
  – In 2-D, the reference is the plane in which all points have the form (x, y, 1)
• Changing the reference plane changes the representation
• I.e. there may be multiple homogeneous representations (x, y, c) that represent the same Cartesian point (x', y')

Matrix Rank and Rank-Deficient Matrices

• Some matrices will eliminate one or more dimensions during transformation
  – These are rank-deficient matrices
  – The rank of the matrix is the dimensionality of the transformed version of a full-dimensional object

[Figure: P * Cone = a flattened, lower-dimensional version of the cone]

Matrix Rank and Rank-Deficient Matrices

• Some matrices will eliminate one or more dimensions during transformation
  – These are rank-deficient matrices
  – The rank of the matrix is the dimensionality of the transformed version of a full-dimensional object

[Figure: two transformed cones, one of rank 2 (flattened to a plane) and one of rank 1 (flattened to a line)]

Non-square Matrices

• Non-square matrices add or subtract axes
  – More rows than columns add axes
    • But do not increase the dimensionality of the data
  – May reduce the dimensionality of the data

[Example: X = 2-D data (a 2 x N matrix of x and y coordinates); a 3 x 2 transform P maps it to P X = 3-D data of rank 2]

Non-square Matrices

• Non-square matrices add or subtract axes
  – More rows than columns add axes
    • But do not increase the dimensionality of the data
  – Fewer rows than columns reduce axes
    • May reduce the dimensionality of the data

[Example: X = 3-D data of rank 3 (a 3 x N matrix of x, y, z coordinates); a 2 x 3 transform P maps it to P X = 2-D data of rank 2]

The Rank of a Matrix

• The matrix rank is the dimensionality of the transformation of a full-dimensioned object in the original space

• The matrix can never increase dimensions
  – Cannot convert a circle to a sphere or a line to a circle

• The rank of a matrix can never be greater than the lower of its two dimensions

[Example: the 3 x 2 and 2 x 3 transforms from the previous slides both have rank 2]
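Checking ranks in MATLAB; the matrices below are invented for illustration, not the exact ones on the slide:

P1 = [.8 .9; .1 .9; .6 0];    % 3 x 2: maps 2-D data into 3-D, rank at most 2
rank(P1)                      % 2
P2 = [.3 1 .2; .5 1 1];       % 2 x 3: maps 3-D data down to 2-D
rank(P2)                      % 2
rank(P2')                     % 2: rank is unchanged by transposition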

Are Projections Full-Rank?

P = W (W^T W)^-1 W^T;  Projected Spectrogram = P*M
• The original spectrogram can never be recovered
  – P is rank deficient
• P explains all vectors in the new spectrogram as a mixture of only the 4 vectors in W
  – There is a maximum of 4 linearly independent bases
  – The rank of P is 4

The Rank of a Matrix
• Projected Spectrogram = P * M
  – Every vector in it is a combination of only 4 bases
• The rank of the matrix is the smallest number of bases required to describe the output
  – E.g. if note no. 4 in P could be expressed as a combination of notes 1, 2 and 3, it provides no additional information
  – Eliminating note no. 4 would give us the same projection
  – The rank of P would then be 3!

Matrix rank is unchanged by transposition

• If an N-dimensional object is compressed to a K-dimensional object by a matrix, it will also be compressed to a K-dimensional object by the transpose of the matrix

[Example: a 3 x 3 matrix and its transpose compress a 3-D object to the same dimensionality]

Inverting rank-deficient matrices

• Rank-deficient matrices "flatten" objects
  – In the process, multiple points in the original object get mapped to the same point in the transformed object
• It is not possible to go "back" from the flattened object to the original object
  – Because of the many-to-one forward mapping
• Rank-deficient matrices have no inverse

[Example: the rank-2 matrix [1 0 0; 0 .25 .433; 0 .433 .75] flattens 3-D objects onto a plane]

Rank Deficient Matrices

• The projection matrix is rank deficient
• You cannot recover the original spectrogram from the projected one
• Rank-deficient matrices have no inverse

Matrix Determinant

• The determinant is the "volume" of a matrix
  – Actually the volume of the parallelepiped formed from its row vectors
  – Also the volume of the parallelepiped formed from its column vectors
• Standard formula for the determinant: in the textbook

[Figure: the parallelogram formed by row vectors r1 and r2, with diagonal r1 + r2]

Matrix Determinant: Another Perspective

• The determinant is the ratio of N-volumes
  – If V1 is the volume of an N-dimensional sphere "O" in N-dimensional space
    • O is the complete set of points or vertices that specify the object
  – If V2 is the volume of the N-dimensional ellipsoid specified by A*O, where A is a matrix that transforms the space
  – |A| = V2 / V1

[Figure: a sphere of volume V1 transformed by a 3 x 3 matrix into an ellipsoid of volume V2]
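A MATLAB sanity check of the volume-ratio interpretation, using a unit cube instead of a sphere and an arbitrary matrix:

A = [2 0 0; 0 3 0; 1 0 1];    % lower-triangular: scales and shears the space
det(A)                        % 6: the volume of the transformed unit cube
det(diag([2 3 1]))            % also 6: a diagonal matrix scales volume by the product of its entries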

Matrix Determinants
• Matrix determinants are only defined for square matrices
  – They characterize volumes in a linearly transformed space of the same dimensionality as the vectors
• Rank-deficient matrices have determinant 0
  – Since they compress full-volumed N-dimensional objects into zero-volume N-dimensional objects
    • E.g. a 3-D sphere into a 2-D ellipse: the ellipse has 0 volume (although it does have area)
• Conversely, all matrices of determinant 0 are rank deficient
  – Since they compress full-volumed N-dimensional objects into zero-volume objects

Multiplication properties
• Properties of vector/matrix products
  – Associative: A (B C) = (A B) C
  – Distributive: A (B + C) = A B + A C
  – NOT commutative!!!  A B != B A
    • Left multiplications != right multiplications
  – Transposition: (A B)^T = B^T A^T

Determinant properties
• Associative for square matrices: |A B C| = |A| |B| |C|
  – Scaling volume sequentially by several matrices is equal to scaling once by the product of the matrices
• Volume of a sum != sum of volumes: |B + C| != |B| + |C|
• Commutative: |A B| = |B A| = |A| |B|
  – The order in which you scale the volume of an object is irrelevant

Revisiting Projections and Least Squares
• Projection computes a least squared error estimate
• For each vector V in the music spectrogram matrix
  – Approximation: Vapprox = a*note1 + b*note2 + c*note3 ..
  – Error vector: E = V - Vapprox
  – Squared error energy for V: e(V) = norm(E)^2
• Projection computes Vapprox for all vectors such that the total error is minimized
• But WHAT ARE "a", "b" and "c"?

Vapprox = T [a; b; c],  where T = [note1 note2 note3]

The Pseudo Inverse (PINV)

• We are approximating spectral vectors V as the transformation of the vector [a b c]^T
  – Note: we're viewing the collection of bases in T as a transformation
• The solution is obtained using the pseudo-inverse
  – This gives us a LEAST SQUARES solution
    • If T were square and invertible, Pinv(T) = T^-1, and V = Vapprox

Vapprox = T [a; b; c],   [a; b; c] = Pinv(T) V
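A MATLAB sketch of the least-squares solution via the pseudo-inverse; T and V are random stand-ins for the note spectra and the spectral vector:

T = abs(randn(1024, 3));      % stand-in: three note spectra as columns
V = abs(randn(1024, 1));      % stand-in: one spectral vector to explain
abc = pinv(T) * V;            % least-squares weights [a; b; c]
Vapprox = T * abc;            % best approximation of V using the three notes
norm(V - Vapprox)^2           % the squared error that the projection minimizes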

Explaining music with one note

Recap: P = W (W^T W)^-1 W^T,  Projected Spectrogram = P*M
• Approximation: M = W*X
• The amount of W in each vector: X = PINV(W)*M
• W*Pinv(W)*M = Projected Spectrogram
  – W*Pinv(W) = projection matrix!!
• PINV(W) = (W^T W)^-1 W^T

Explanation with multiple notes

X = Pinv(W) * M;  Projected matrix = W*X = W*Pinv(W)*M

How about the other way?

W = M Pinv(V);  U = W V
(Now W is the unknown: given the spectrogram M and the activations V, estimate the note spectra W)

Pseudo-inverse (PINV)
• Pinv() applies to non-square matrices
• Pinv(Pinv(A)) = A
• A*Pinv(A) = projection matrix!
  – Projection onto the columns of A
• If A is a K x N matrix with K > N, A projects N-D vectors into a higher-dimensional K-D space
  – Pinv(A) is an N x K matrix
  – Pinv(A)*A = I in this case
• Otherwise A*Pinv(A) = I
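A quick check of these properties in MATLAB (arbitrary tall matrix):

A = randn(5, 3);                      % K = 5 > N = 3
max(max(abs(pinv(A)*A - eye(3))))     % ~0: Pinv(A)*A = I when K > N
P = A * pinv(A);                      % 5 x 5 projection onto the columns of A
max(max(abs(P*P - P)))                % ~0: idempotent, as a projection should be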

Matrix inversion (division)
• The inverse of matrix multiplication
  – Not element-wise division!!
• Provides a way to "undo" a linear transformation
  – Inverse of the unit (identity) matrix is itself
  – Inverse of a diagonal matrix is diagonal
  – Inverse of a rotation is a (counter)rotation (its transpose!)
  – Inverse of a rank-deficient matrix does not exist!
    • But a pseudo-inverse exists
• For square matrices: pay attention to the multiplication side!
    A = B C  =>  B = A C^-1,  C = B^-1 A
• If the matrix is not square, use the matrix pseudo-inverse:
    A = B C  =>  B = A Pinv(C),  C = Pinv(B) A