
Eigenvectors & Eigenvalues: The Road to Diagonalisation

Page 1: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Eigenvectors and Eigenvalues

By Christopher Gratton

Page 2: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Introduction: Diagonal Matrices

Before beginning this topic, we must first clarify the definition of a “Diagonal Matrix”.

A Diagonal Matrix is an n by n Matrix whose off-diagonal entries are all zero.

Page 3: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Introduction: Diagonal Matrices

In this presentation, all Diagonal Matrices will be denoted as:

diag(d11, d22, ..., dnn)

where dnn is the entry in the n-th row and n-th column of the Diagonal Matrix.

Page 4: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Introduction: Diagonal Matrices

For example, the previously given Matrix, with diagonal entries 5, 4, 1 and 9, can be written in the form: diag(5, 4, 1, 9)
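As an aside not in the original slides, here is a minimal NumPy sketch of the same notation; numpy.diag builds exactly this kind of Matrix:

import numpy as np

# Build the 4 by 4 diagonal matrix diag(5, 4, 1, 9); every
# off-diagonal entry is zero.
D = np.diag([5, 4, 1, 9])
print(D)
# [[5 0 0 0]
#  [0 4 0 0]
#  [0 0 1 0]
#  [0 0 0 9]]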

Page 5: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Introduction: Diagonal Matrices

The Effects of a Diagonal Matrix

The Identity Matrix is an example of a Diagonal Matrix; it has the effect of leaving any Vector in a given System unchanged. For example, Iv = v for every Vector v.

Page 6: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Introduction: Diagonal Matrices

The Effects of a Diagonal Matrix

However, any other Diagonal Matrix will have the effect of scaling a Vector along the coordinate axes. For example, the Diagonal Matrix diag(2, −1, 3) has the effect of stretching a Vector by a Scale Factor of 2 in the x-Axis, 3 in the z-Axis, and reflecting the Vector in the y-Axis.
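A minimal NumPy sketch of this effect, using the Matrix diag(2, −1, 3) described above:

import numpy as np

D = np.diag([2, -1, 3])        # stretch x by 2, reflect y, stretch z by 3
v = np.array([1.0, 1.0, 1.0])
print(D @ v)                   # [ 2. -1.  3.]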

Page 7: Eigenvectors & Eigenvalues: The Road to Diagonalisation

The Goal

By the end of this PowerPoint, we should be able to understand and apply the idea of Diagonalisation, using Eigenvalues and Eigenvectors.

Page 8: Eigenvectors & Eigenvalues: The Road to Diagonalisation

The Matrix Point of View

By the end, we should be able to understand how, given an n by n Matrix, A, we can say that A is Diagonalisable if and only if there is an invertible Matrix, δ, that allows the following Matrix to be Diagonal:

δ⁻¹Aδ

And why this knowledge is significant.

The Goal

Page 9: Eigenvectors & Eigenvalues: The Road to Diagonalisation

The Square Matrix, A, may be seen as a Linear Operator, F, defined by:

F(X) = AX

Where X is a Column Vector.

The Points of View

Page 10: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Furthermore:

δ⁻¹Aδ

Represents the Linear Operator, F, relative to the Basis, or Coordinate System, S, whose Elements are the Columns of δ.

The Points of View

Page 11: Eigenvectors & Eigenvalues: The Road to Diagonalisation

If we are given A, an n by n Matrix of any kind, then it is possible to interpret it as a Linear Transformation in a given Coordinate System of n Dimensions.

For example, the Matrix

[cos 45°  −sin 45°]
[sin 45°   cos 45°]

Has the effect of a 45 degree Anticlockwise Rotation, in this case applied to the Identity Matrix.

The Effects of a Coordinate System
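A small NumPy sketch of this rotation, using the standard 2-D rotation Matrix given above:

import numpy as np

theta = np.pi / 4                        # 45 degrees, anticlockwise
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Applying R to the identity matrix rotates both basis vectors
# 45 degrees anticlockwise.
print(R @ np.eye(2))
# [[ 0.70710678 -0.70710678]
#  [ 0.70710678  0.70710678]]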

Page 12: Eigenvectors & Eigenvalues: The Road to Diagonalisation

However, it is often possible to represent this Linear Transformation as a Diagonal Matrix within another, different Coordinate System.

We define the effect upon a given Vector in this new Coordinate System as a scalar multiplication of the Vector, relative to all the axes, by an unknown Scale Factor, without affecting its direction or other properties.

The Effects of a Coordinate System

Page 13: Eigenvectors & Eigenvalues: The Road to Diagonalisation

This process can be summarised by the following definition:

Av = λv

Where:
A is the Transformation Matrix (in the current Coordinate System)
v is a non-zero Vector to be transformed
λ is a Scalar (in the new Coordinate System) that has the same effect on v as A

The Effects of a Coordinate System

Page 14: Eigenvectors & Eigenvalues: The Road to Diagonalisation

This process can be summarised by the following definition:

Av = λv

Where:
Av: the Matrix A is a Linear Transformation upon the Vector v (the current Coordinate System)
λv: λ is the Scalar which results in the same Transformation on v as A (the new Coordinate System)

The Effects of a Coordinate System

Page 15: Eigenvectors & Eigenvalues: The Road to Diagonalisation

This can be applied in the following example:

The Effects of a Coordinate System

(Worked example on the slide: a Matrix A is applied to a Vector, steps 1 and 2.)

Page 16: Eigenvectors & Eigenvalues: The Road to Diagonalisation

This can be applied in the following example:

The Effects of a Coordinate System

(Worked example continued on the slide: applying the Matrix A to the Vector gives twice the Vector, steps 3 and 4.)

Thus, when Av = 2v, A is equivalent to the Diagonal Matrix 2I = diag(2, 2, ..., 2), which, as discussed previously, has the effect of enlarging the Vector by a Scale Factor of 2 in each Dimension.
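A minimal sketch of this check in NumPy; the Matrix and Vector below are stand-ins chosen so that Av = 2v, since the slide's own example is not reproduced in the transcript:

import numpy as np

A = np.array([[3.0, -1.0],
              [-1.0, 3.0]])    # stand-in matrix with an eigenvalue of 2
v = np.array([1.0, 1.0])       # stand-in eigenvector

print(A @ v)                   # [2. 2.]
print(2 * v)                   # [2. 2.]  -- A acts on v exactly like 2I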

Page 17: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Thus, if Av = λv is true, we call:

v the Eigenvector, and λ the Eigenvalue of A corresponding to v.

Definitions

Page 18: Eigenvectors & Eigenvalues: The Road to Diagonalisation

• We do not count v = 0 as an Eigenvector, as A0 = λ0 holds for all values of λ.
• λ = 0, however, is allowed as an accepted Eigenvalue.

• If v is a known Eigenvector of a Matrix, then so is kv, for all non-zero values of k.
• If the Vectors u and v are both Eigenvectors of a given Matrix, and both have the same resultant Eigenvalue, then u + v will also be an Eigenvector of the Matrix (provided u + v is non-zero).

Exceptions and Additions

Page 19: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Establishing the Essentials

λ is an Eigenvalue for the Matrix A, relative to the Eigenvector v. I is the Identity Matrix of the same Dimensions as A.

Thus:

Av = λv = λIv, so (λI − A)v = 0

Characteristic Polynomials

Since v is non-zero, this is only possible if the Matrix λI − A is not Invertible.

Page 20: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Application of the Knowledge

What this leads to, essentially, is a way of finding all the Eigenvalues and Eigenvectors of a specific Matrix.

This is done by considering the Matrix, A, alongside the Identity Matrix, I. We then multiply the Identity Matrix by the unknown quantity, λ.

Characteristic Polynomials

Page 21: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Application of the Knowledge

Following this, we take λ lots of the Identity Matrix, λI, and subtract the Matrix A from it. We then take the Determinant of the result, det(λI − A), which ends up as a Polynomial equation in λ, and solve it to find the possible values of λ, the Eigenvalues. This can be seen in the following example:

Characteristic Polynomials

Page 22: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Calculating Eigenvalues from a Matrix

To find the Eigenvalues of the example Matrix A, we must consider the Matrix λI − A:

Characteristic Polynomials

Page 23: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Calculating Eigenvalues from a Matrix

Then, det(λI − A) equals:

λ³ − 12λ² + 44λ − 48

Which factorises to: (λ − 4)(λ − 2)(λ − 6)

Therefore, the Eigenvalues of the Matrix are: λ = 4, λ = 2 and λ = 6

Characteristic Polynomials
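The same procedure can be sketched in NumPy. The 3 by 3 Matrix below is a stand-in with the same Eigenvalues (4, 2 and 6), since the slide's own Matrix appears only as an image; np.poly returns the coefficients of det(λI − A) and np.roots solves the resulting Polynomial equation:

import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 6.0]])   # stand-in matrix with eigenvalues 4, 2 and 6

char_poly = np.poly(A)      # coefficients of det(lambda*I - A), highest power first
print(char_poly)            # [  1. -12.  44. -48.]
print(np.roots(char_poly))  # the eigenvalues: 4, 2 and 6 (in some order)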

Page 24: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Calculating Eigenvectors from the Values

With the Eigenvalues known, we need to solve (λI − A)v = 0 for each of the given values of λ.

This is done by solving a Homogeneous System of Linear Equations. In other words, we must turn λI − A into Echelon Form and solve for the entries of v.

Characteristic Polynomials

Page 25: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Calculating Eigenvectors from the Values

For example, we will take λ = 4 from the previous example:

Characteristic Polynomials

Page 26: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Calculating Eigenvectors from the Values

For example, we will take λ = 4 from the previous example:

Therefore, the result is that:

Characteristic Polynomials

Page 27: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Calculating Eigenvectors from the Values

For example, we will take λ = 4 from the previous example:

Therefore, this is the set of general Eigenvectors for the Eigenvalue of 4.

Characteristic Polynomials
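A sketch of this substitution step, again with the stand-in Matrix used above; scipy.linalg.null_space solves the Homogeneous System (λI − A)v = 0:

import numpy as np
from scipy.linalg import null_space

A = np.array([[4.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 6.0]])      # stand-in matrix with eigenvalues 4, 2 and 6
lam = 4.0

M = lam * np.eye(3) - A              # substitute the eigenvalue back in
v = null_space(M)[:, 0]              # a basis vector of the solution set
print(v)                             # a unit eigenvector for the eigenvalue 4
print(np.allclose(A @ v, lam * v))   # True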

Page 28: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Mentioned earlier was the ultimate goal of Diagonalisation; that is to say, finding a Matrix, δ, such that the following can be formed from a given Matrix, A:

δ⁻¹Aδ

Where the result is a Diagonal Matrix.

Diagonalisation

Page 29: Eigenvectors & Eigenvalues: The Road to Diagonalisation

There are a few rules that can be derived from this:

Firstly, δ must be an Invertible Matrix, as the Inverse δ⁻¹ is necessary to the calculation.

Secondly, the Eigenvectors of A must necessarily be Linearly Independent for this to work. Linear Independence will be covered later.

Diagonalisation

Page 30: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Eigenvectors, Eigenvalues & Diagonalisation

It turns out that the Columns of the Matrix δ are the Eigenvectors of the Matrix A. This is why they must be Linearly Independent: the Matrix δ must be Invertible.

Furthermore, the Diagonal Entries of the resultant Matrix δ⁻¹Aδ are the Eigenvalues associated with the corresponding Columns of Eigenvectors.

Diagonalisation

Page 31: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Eigenvectors, Eigenvalues & Diagonalisation

For example, from the previous example, we can create a Matrix δ whose Columns are the Eigenvectors corresponding to the Eigenvalues 4, 2 and 6, respectively.

It is as follows:

Diagonalisation

Page 32: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Eigenvectors, Eigenvalues & Diagonalisation

Furthermore, we can calculate its Inverse, δ⁻¹:

Diagonalisation

Page 33: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Eigenvectors, Eigenvalues & Diagonalisation

Thus, the Diagonalisation of A can be created by forming δ⁻¹Aδ.

Solving this gives: diag(4, 2, 6)

The Eigenvalues, in the order given!

Diagonalisation
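A NumPy sketch of the whole Diagonalisation, using the same stand-in Matrix as in the earlier sketches; np.linalg.eig returns the Eigenvalues together with a Matrix whose Columns are the Eigenvectors, which plays the role of δ:

import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 6.0]])      # stand-in matrix with eigenvalues 4, 2 and 6

eigenvalues, delta = np.linalg.eig(A)   # columns of delta are the eigenvectors
D = np.linalg.inv(delta) @ A @ delta    # delta^-1 A delta

print(np.round(D, 10))   # diagonal matrix with the eigenvalues on the diagonal
print(eigenvalues)       # the eigenvalues, in the same order as delta's columns

The diagonal entries come out in whatever order the eigenvector columns were placed in δ, which is the point made on the slide.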

Page 34: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Introduction

This will be a brief section on Linear Independence, to reinforce that the Eigenvectors of A must be Linearly Independent for Diagonalisation to be implemented.

Linear Independence

Page 35: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Linear Independency in x-Dimensions

The Vectors v1, v2, ..., vn are classified as a Linearly Independent set of Vectors if the following rule applies:

The only values of the Scalars, ki, which make the equation:

k1v1 + k2v2 + ... + knvn = 0

True are ki = 0 for all instances of i.

Linear Independence


Page 37: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Linear Independency in x-Dimensions

If there are any non-zero values of ki, at any instance of i, for which the equation holds, then this set of Vectors, v1, v2, ..., vn, is considered Linearly Dependent. Note that only one instance of ki being non-zero is needed to make the set Dependent.

Linear Independence

Page 38: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Linear Independency in x-Dimensions

Therefore, if, say, at some instance i the value of ki is non-zero, then the Vector set is Linearly Dependent. But if vi were to be omitted from the set, given all other instances of kj were zero, then the remaining set would, therefore, become Linearly Independent.

Linear Independence

Page 39: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Implications of Linear Independence

If a set of Vectors is Linearly Independent, then it is not possible to write any of the Vectors in the set as a Linear Combination of the other Vectors within the same set.

Conversely, if a set of Vectors is Linearly Dependent, then it is possible to write at least one Vector as a Linear Combination of the other Vectors.

Linear Independence

Page 40: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Implications of Linear Independence

For example, the following Vector set is Linearly Dependent, as one of its Vectors can be written as a Linear Combination of the others:

Linear Independence

Page 41: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Implications of Linear Independence

Returning to the same Vector set, we can say that it may be considered Linearly Independent if the Dependent Vector were omitted from the set.

Linear Independence

Page 42: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Finding Linear Independency

The previous equation can be more usefully written in Matrix form, with the Vectors as the Columns of a Matrix multiplying a Column Vector of the unknown Scalars ki, set equal to the zero Vector.

More significantly, this can be translated into a Homogeneous System of x Linear Equations, where x is the number of Dimensions of the System.

Linear Independence


Page 44: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Finding Linear Independency

Therefore, the Matrix of Coefficients is an x by n Matrix, where n is the number of Vectors in the System and x is the number of Dimensions of the System. The Columns of this Matrix are the Vectors of the System, v1, v2, ..., vn.

Linear Independence

Page 45: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Finding Linear Independency

To observe whether the set of Vectors is Linearly Independent or not, we need to put the Matrix into Echelon Form. If, when in Echelon Form, each Column of Unknowns has a Leading Entry, then the set of Vectors is Linearly Independent.

Linear Independence

Page 46: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Finding Linear Independency

If not, then the set of Vectors is Linearly Dependent. To find the Coefficients, we can put the Matrix into Reduced Echelon Form and consider the general solutions.

Linear Independence

Page 47: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Finding Linear Independency: Example

Let us consider whether the following set of Vectors is Linearly Independent:

Linear Independence

Page 48: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Finding Linear Independency: Example

These Vectors can be written in the following form:

Linear Independence

Page 49: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Finding Linear Independency: Example

The following Elementary Row Operations (EROs) put this Matrix into Echelon Form:

As the resulting Matrix has a Leading Entry in every Column, we can conclude that the set of Vectors is Linearly Independent.

Linear Independence
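A quick NumPy check of the same idea; the three Vectors below are stand-ins, since the slide's Vectors appear only as an image. A set of Column Vectors is Linearly Independent exactly when the rank of the Matrix they form equals the number of Vectors, which is what a Leading Entry in every Column of the Echelon Form expresses:

import numpy as np

# Stand-in vectors, stacked as the columns of a matrix.
C = np.column_stack([[1, 0, 2],
                     [0, 1, 1],
                     [1, 1, 0]])

independent = np.linalg.matrix_rank(C) == C.shape[1]
print(independent)   # True: every column has a leading entry in echelon form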

Page 50: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Thus, to conclude:

Av = λv is the formula for Eigenvectors and Eigenvalues.

A is a Matrix whose Eigenvectors and Eigenvalues are to be calculated; v is an Eigenvector of A; λ is an Eigenvalue of A, corresponding to v.

Summary

Page 51: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Thus, to conclude:

Av = λv is the formula for Eigenvectors and Eigenvalues.

Given A and v, we can find λ by Matrix-multiplying Av and observing how many times larger the result is, relative to v.

Summary

Page 52: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Thus, to conclude:

det(λI − A) is the Characteristic Polynomial of A. This is used to find the general set of Eigenvalues of A, and thus its Eigenvectors. This is done by finding the Determinant of λI − A and solving the resultant Polynomial equation to isolate the Eigenvalues.

Summary

Page 53: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Thus, to conclude:

det(λI − A) is the Characteristic Polynomial of A. This is used to find the general set of Eigenvalues of A, and thus its Eigenvectors. Then, by substituting the Eigenvalues back into λI − A and reducing the Matrix to Echelon Form, we can find the general set of Eigenvectors for each Eigenvalue.

Summary

Page 54: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Thus, to conclude:

δ⁻¹Aδ is the Diagonalisation of A. δ is a Matrix created from the Eigenvectors of A, where each Column is an Eigenvector. In order for δ⁻¹ to exist, δ must necessarily be Invertible, which requires the Eigenvectors of A to be Linearly Independent.

Summary

Page 55: Eigenvectors & Eigenvalues: The Road to Diagonalisation

Thus, to conclude:

δ⁻¹Aδ is the Diagonalisation of A. δ is a Matrix created from the Eigenvectors of A, where each Column is an Eigenvector. The resultant δ⁻¹Aδ is a Diagonal Matrix whose diagonal values are the Eigenvalues, each in the same Column as its associated Eigenvector.

Summary

