A Comprehensive Summary on Introductory Linear Algebra

mathvault.ca

Table of Contents

1 Matrix Terminologies
2 Operations on Matrices
  2.1 Scalar Multiplication
  2.2 Addition
  2.3 Multiplication
  2.4 Transposition
  2.5 Trace
3 Elementary Row Operations
4 Row Reduction on Matrices
  4.1 Preliminaries
  4.2 Procedure
  4.3 Comparison of Different Forms
5 System of Linear Equations
  5.1 Procedure for Solving a System
  5.2 Facts
6 Determinant
  6.1 Definition
  6.2 Facts
  6.3 Properties on Matrix Rows and Columns
  6.4 Properties on Matrices
7 Inverse
  7.1 Invertibility
  7.2 Procedures for Finding Inverses
    7.2.1 Terminologies
    7.2.2 Procedures
  7.3 Properties on Matrices
8 Elementary Matrices
  8.1 Definition
  8.2 Facts
9 Diagonalization
  9.1 Definitions
  9.2 Characteristic Polynomial
  9.3 Properties of Eigenvectors
  9.4 Diagonalizable Matrices
    9.4.1 Definition and Procedure
    9.4.2 Properties of Diagonalizable Matrices
10 Basis and Related Topics
  10.1 Definitions
  10.2 Facts
  10.3 Procedure for Basis Extraction
  10.4 Equivalences
11 Subspaces
  11.1 Definition
  11.2 Examples of Subspace
  11.3 Standard Subspaces
    11.3.1 Definitions
    11.3.2 Bases of Standard Subspaces
12 Operations on Vectors
  12.1 Preliminaries
  12.2 Length
  12.3 Dot Product
    12.3.1 Definition and Facts
    12.3.2 Properties
  12.4 Cross Product
    12.4.1 Definition and Facts
    12.4.2 Properties
  12.5 Projection-Related Operations
13 2D/3D Vector Geometry
  13.1 Equations
    13.1.1 Lines
    13.1.2 Planes
  13.2 Point vs. Point
  13.3 Point vs. Line
  13.4 Point vs. Plane
  13.5 Line vs. Line
  13.6 Line vs. Plane
  13.7 Plane vs. Plane
14 Matrix Transformation
  14.1 Preliminaries
  14.2 Standard Matrix Transformations in 2D

1 Matrix Terminologies

• Diagonal matrix: A matrix whose non-zero entries are found in the main diagonal only.

• Identity matrix: A diagonal n × n matrix with 1 across the main diagonal. Usually denoted by I.

• Upper-triangular matrix: A matrix whose non-zero entries are found at or above the main diagonal only.

• Lower-triangular matrix: A matrix whose non-zero entries are found at or below the main diagonal only.

2 Operations on Matrices

2.1 Scalar Multiplication

• Preserves the dimension of the matrix.

• Scalar division can be defined similarly (i.e., A/k := (1/k)A).

• k1(k2A) = (k1k2)A

2.2 Addition

• Requires two matrices of the same dimension.

• Preserves the dimension of the matrices.

• A+B = B + A

• (A+B) + C = A+ (B + C)

• k(A+B) = kA+ kB

2.3 Multiplication

• Requires two matrices with matching “inner dimensions”.

• Produces a matrix with the corresponding “outer dimensions” (i.e., m × n times n × p → m × p).

• The ij entry of AB results from dot-multiplying the ith row of A with the jth column of B.

• (AB)C = A(BC), but AB ≠ BA in general.

• (kA)B = k(AB) = A(kB)

• A(B+C) = AB+AC, (A+B)C = AC+BC
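As a sanity check, the dimension rules and identities above can be verified numerically — a minimal sketch, assuming NumPy is available (the summary itself is software-agnostic):

```python
import numpy as np

# A is 2 x 3 and B is 3 x 4: the inner dimensions (3) match,
# so the product has the outer dimensions 2 x 4.
A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)
C = np.arange(16).reshape(4, 4)

AB = A @ B
assert AB.shape == (2, 4)

# Associativity holds, but commutativity generally fails.
assert np.array_equal((A @ B) @ C, A @ (B @ C))

S = np.array([[1, 2], [3, 4]])
T = np.array([[0, 1], [1, 0]])
assert not np.array_equal(S @ T, T @ S)  # AB != BA in general

# Distributivity: A(B + C) = AB + AC (with compatible dimensions).
B2 = np.ones((3, 4), dtype=int)
assert np.array_equal(A @ (B + B2), A @ B + A @ B2)
```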

2.4 Transposition

• Reverses the dimension of the matrix (m × n → n × m).

• (Aᵀ)ij = Aji

• The ith row of Aᵀ corresponds to the ith column of A.

• The jth column of Aᵀ corresponds to the jth row of A.

• (Aᵀ)ᵀ = A, (kA)ᵀ = k(Aᵀ)

• (A + B)ᵀ = Aᵀ + Bᵀ, (AB)ᵀ = BᵀAᵀ


2.5 Trace

Given an n × n matrix A, the trace of A — or Tr(A) for short — is the sum of all the entries in A's main diagonal.

• Tr(kA) = kTr(A)

• Tr(A+B) = Tr(A) + Tr(B)

• Tr(Aᵀ) = Tr(A), Tr(AB) = Tr(BA)
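The trace identities above can be spot-checked numerically; a small sketch assuming NumPy:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 5], [6, 7]])
k = 3

assert np.trace(A) == 1 + 4                        # sum of main-diagonal entries
assert np.trace(k * A) == k * np.trace(A)          # Tr(kA) = k Tr(A)
assert np.trace(A + B) == np.trace(A) + np.trace(B)
assert np.trace(A.T) == np.trace(A)                # Tr(A^T) = Tr(A)
assert np.trace(A @ B) == np.trace(B @ A)          # Tr(AB) = Tr(BA)
```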

3 Elementary Row Operations

The three elementary row operations are:

• Row Multiplication (Ri → kRi, k ≠ 0)

• Row Swapping (Ri ↔ Rj)

• Row Absorption (Ri → Ri + kRj)

Note that each elementary row operation can be reversed by another elementary row operation of the same type.

4 Row Reduction on Matrices

4.1 Preliminaries

At the most fundamental level, to perform row reduction on a matrix is to alternate between the following two processes:

1. Finding a leading number (i.e., a small, non-zero number) in a column — whenever applicable.

2. Nullifying all other entries in the same column.

By running through all the columns this way from left to right, the final matrix — also known as the reduced matrix — can then be obtained.

4.2 Procedure

To search for the ith leading number in a column, check the column from the ith entry onwards:

• If all said entries are zero, search for the ith leading number in the next column instead.

• If not, use elementary row operations to:

  • Create a leading number in the ith entry of the column.

  • Reduce all entries beneath it to zero.

  and proceed to search for the (i + 1)th leading number in the next column.

Once all the columns are handled, the matrix would be in a ladder form, where:

• Dividing each non-zero row by its leading number would put the matrix into a row-echelon form.

• From here, further reducing all non-leading-one entries in each column to zero would put the matrix into the reduced row-echelon form.

4.3 Comparison of Different Forms

• A ladder form is similar to a row-echelon form, except that a non-zero row need not start with 1.

• In a row-echelon form, all entries beneath a leading one are 0. In a reduced row-echelon form, all entries above and beneath a leading one are 0.

• While a matrix can have several ladder forms and row-echelon forms, it can only have one reduced row-echelon form.

5 System of Linear Equations


5.1 Procedure for Solving a System

To solve a system of linear equations with m equations and n variables using matrices, proceed as follows:

1. Convert the equations into an augmented matrix — with the m × n coefficient matrix on the left and the m × 1 constant matrix on the right.

2. Reduce the augmented matrix into a ladder form, or — if needed — a row-echelon form or a reduced row-echelon form. Once there, three scenarios ensue:

  • If an inconsistent row — a row with zero everywhere except the last entry — pops up during the row reduction process, then the original system is unsolvable.

  • If the reduced matrix has n leading numbers and no inconsistent row exists, then the system has a unique solution.

  • If the reduced matrix has fewer than n leading numbers and no inconsistent row exists, then the system has infinitely many solutions, in which case:

    • When converted back into equation form, the system will have fewer than n leading variables.

    • By turning the non-leading variables into parameters and applying back substitution, we can then find the general solution to the system — along with the basic vectors that generate it.

5.2 Facts

• For an n-variable system with infinitely many solutions:

  # of parameters = # of non-leading variables = n − # of leading variables

• A homogeneous system — a system whose constant terms are all zero — has at least the trivial solution.
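For the unique-solution scenario, the whole procedure is automated by any linear-algebra library; a minimal sketch assuming NumPy (the example system is illustrative):

```python
import numpy as np

# The system  x + 2y = 5,  3x + 4y = 11  has m = 2 equations, n = 2 variables.
A = np.array([[1.0, 2.0], [3.0, 4.0]])   # coefficient matrix
b = np.array([5.0, 11.0])                # constant matrix

# The reduced matrix has n = 2 leading numbers and no inconsistent row,
# so the solution is unique.
x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)
assert np.allclose(x, [1.0, 2.0])
```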

6 Determinant

6.1 Definition

Given an n × n matrix A, the determinant of A — or det(A) for short — is a scalar quantity which can be defined recursively:

• For a 2 × 2 matrix (with rows separated by semicolons below):

  det[ a b ; c d ] := ad − bc

• For an n × n matrix A, Caij — the cofactor of the ij entry of A — is defined to be the signed determinant of the matrix resulting from removing the ith row and the jth column of A (the sign being (−1)^(i+j)).

• For a general n × n matrix A (with n ≥ 3), the determinant can be defined as the cofactor expansion along the first row of A:

  det(A) := a11 Ca11 + · · · + a1n Ca1n
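The recursive definition translates directly into code; a deliberately naive sketch in plain Python (cofactor expansion is O(n!), so this is for small matrices only):

```python
def det(M):
    """Determinant via cofactor expansion along the first row,
    following the recursive definition above."""
    n = len(M)
    if n == 1:
        return M[0][0]
    if n == 2:
        return M[0][0] * M[1][1] - M[0][1] * M[1][0]
    total = 0
    for j in range(n):
        # Remove row 0 and column j; the sign (-1)^j turns the
        # resulting minor into the cofactor of entry (1, j+1).
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

assert det([[1, 2], [3, 4]]) == -2
assert det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]) == 24  # triangular: product of diagonal
```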

6.2 Facts

Given an n × n matrix A, det(A) can be obtained by cofactor-expanding along any row or any column of A. As a result:

• If A has a row of zeros or a column of zeros,then det(A) = 0.

• If A is upper or lower-triangular, then det(A)is the product of its main-diagonal entries.

6.3 Properties on Matrix Rows and Columns

In what follows:

• All matrices presented are assumed to be n × n matrices.

• Ri and Rj are assumed to be n-entry row vectors.

• Ci and Cj are assumed to be n-entry column vectors.

(Below, rows stacked vertically are written as det[…; Ri; …], and columns side by side as det[… Ci …].)

Row/Column Multiplication

  det[…; kRi; …] = k · det[…; Ri; …]

  det[… kCi …] = k · det[… Ci …]

Row/Column Addition

  det[…; Ri + Rj; …] = det[…; Ri; …] + det[…; Rj; …]

  det[… Ci + Cj …] = det[… Ci …] + det[… Cj …]

Row/Column Swapping

  det[…; Rj; …; Ri; …] = − det[…; Ri; …; Rj; …]

  det[… Cj … Ci …] = − det[… Ci … Cj …]

Row/Column Absorption

  det[…; Ri; …; Rj + kRi; …] = det[…; Ri; …; Rj; …]

  det[… Ci … Cj + kCi …] = det[… Ci … Cj …]

6.4 Properties on Matrices

Given an n × n matrix A:

• If A has a duplicate row or a duplicate column, then det(A) = 0.

• det(kA) = k^n det(A)

• det(Aᵀ) = det(A)

• det(A⁻¹) = 1 / det(A)

• det(Adj(A)) = (det(A))^(n−1)

In addition, if B is also an n × n matrix, then:

  det(AB) = det(A) det(B)

In particular:

  det(A^m) = (det(A))^m (where m ∈ ℕ)
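These determinant properties are easy to spot-check numerically; a sketch assuming NumPy (the random test matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.random((n, n))
B = rng.random((n, n))
k = 2.5

assert np.isclose(np.linalg.det(k * A), k**n * np.linalg.det(A))       # det(kA) = k^n det(A)
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))                # det(A^T) = det(A)
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))                 # det(AB) = det(A) det(B)
assert np.isclose(np.linalg.det(np.linalg.inv(A)),
                  1 / np.linalg.det(A))                                # det(A^-1) = 1/det(A)
assert np.isclose(np.linalg.det(np.linalg.matrix_power(A, 3)),
                  np.linalg.det(A) ** 3)                               # det(A^m) = det(A)^m
```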

7 Inverse

7.1 Invertibility

Given an n × n matrix A and the n × n identity matrix I, A is said to be invertible if and only if there is an n × n matrix B such that:

  AB = I and BA = I


In which case, since B is the only matrix with such properties, it is referred to as the inverse of A — or A⁻¹ for short.

The following claims are all equivalent:

• A is invertible.

• det(A) ≠ 0

• The equation Ax = 0 (where x and 0 are n-entry column vectors denoting the variable vector and the zero vector, respectively) has only the trivial solution.

• The reduced row-echelon form of A is I.

• The equation Ax = b has a unique solution for each n-entry column vector b.

• The rows of A are linearly independent.

• The columns of A are linearly independent.

7.2 Procedures for Finding Inverses

7.2.1 Terminologies

Given an n × n matrix A:

• The cofactor matrix of A is the n × n matrix whose ij entry is the cofactor of aij.

• Adj(A), the adjoint of A, is the transpose of the cofactor matrix of A.

7.2.2 Procedures

Given an invertible n × n matrix A, two methods for finding A⁻¹ exist:

• Adjoint method: Since A Adj(A) = det(A) I and det(A) ≠ 0, A⁻¹ can be determined using the following formula:

  A⁻¹ = Adj(A) / det(A)

  In particular, in the case of a 2 × 2 matrix (rows separated by semicolons):

  [ a b ; c d ]⁻¹ = (1 / (ad − bc)) · [ d −b ; −c a ]

• Row reduction method: By writing A and I alongside each other and carrying out row reduction until A is reduced to the identity matrix, the original I would be reduced to A⁻¹ as well:

  [ A | I ]  — Row Reduction →  [ I | A⁻¹ ]
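The 2 × 2 case of the adjoint method can be written out directly; a sketch assuming NumPy, with `inverse_2x2` being an illustrative helper name:

```python
import numpy as np

def inverse_2x2(M):
    """Adjoint-method inverse of a 2x2 matrix, per the formula above."""
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible")
    adj = np.array([[d, -b], [-c, a]], dtype=float)  # Adj(M) in the 2x2 case
    return adj / det

A = np.array([[4.0, 7.0], [2.0, 6.0]])
A_inv = inverse_2x2(A)
assert np.allclose(A @ A_inv, np.eye(2))   # AB = I
assert np.allclose(A_inv @ A, np.eye(2))   # BA = I
```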

7.3 Properties on Matrices

Given an invertible n × n matrix A:

• (A⁻¹)⁻¹ = A

• (kA)⁻¹ = A⁻¹ / k (where k ≠ 0)

• (Aᵀ)⁻¹ = (A⁻¹)ᵀ

In addition, if B is also an invertible n × n matrix, then:

  (AB)⁻¹ = B⁻¹A⁻¹

In particular:

  (A^m)⁻¹ = (A⁻¹)^m (where m ∈ ℕ)

8 Elementary Matrices

8.1 Definition

An n × n elementary matrix is a matrix obtainable by performing one elementary row operation on the n × n identity matrix I.

As a result, three types of elementary matrices exist:

• Those resulting from row multiplication

• Those resulting from row swapping

• Those resulting from row absorption

8.2 Facts

Given a matrix A with n rows:

• Performing an elementary row operation on A is equivalent to left-multiplying A by the n × n elementary matrix associated with the said operation.

• Since each elementary row operation is reversible, each elementary matrix is invertible.

In particular, if A is an n × n invertible matrix, then:

• A⁻¹ can be conceived as the series of elementary row operations leading A to I (i.e., A⁻¹ = En · · · E1).

• Similarly, A can be conceived as the series of elementary row operations leading I to A (i.e., A = E1⁻¹ · · · En⁻¹).

More schematically:

  A —E1→ · · · —En→ I        I —En⁻¹→ · · · —E1⁻¹→ A
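The left-multiplication fact can be illustrated concretely; a sketch assuming NumPy, using the row absorption R2 → R2 − 3R1 as the example operation:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
I = np.eye(2)

# Elementary matrix for the row absorption R2 -> R2 - 3*R1:
# perform that operation on the identity matrix.
E1 = I.copy()
E1[1, 0] = -3.0

# Left-multiplying by E1 performs the same row operation on A.
assert np.allclose(E1 @ A, [[1.0, 2.0], [0.0, -2.0]])

# Each elementary matrix is invertible; its inverse undoes the operation.
E1_inv = np.linalg.inv(E1)
assert np.allclose(E1_inv @ (E1 @ A), A)
```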

9 Diagonalization

9.1 Definitions

Given an n × n matrix A:

• A number λ is called an eigenvalue of A if and only if the equation Ax = λx has a non-zero solution. In which case:

  • The set of all the solutions is called the eigenspace of A with eigenvalue λ.

  • Each non-zero solution is called an eigenvector of A with eigenvalue λ.

9.2 Characteristic Polynomial

Given an n × n matrix A, the following claims are equivalent:

• λ is an eigenvalue of A.

• The equation Ax = λx has a non-zero solution.

• The equation (λI − A)x = 0 has a non-zero solution.

• det(λI − A) = 0

In other words:

• If we define det(xI − A) as the characteristic polynomial of A, then its roots are precisely the eigenvalues of A.

• Once an eigenvalue λ is determined, its eigenspace and basic eigenvectors can also be found by solving the equation (λI − A)x = 0.

9.3 Properties of Eigenvectors

• Since eigenspaces of distinct eigenvalues are — apart from the zero vector — disjoint from each other, it follows that every eigenvector is associated with a unique eigenvalue.

• The collection of all basic eigenvectors (from the distinct eigenspaces) forms a linearly independent set.

• As a result, an n × n matrix can have at most n basic eigenvectors.

9.4 Diagonalizable Matrices

9.4.1 Definition and Procedure

Given an n × n matrix A, A is said to be diagonalizable if and only if there exist an n × n diagonal matrix D and an n × n invertible matrix P such that:

  P⁻¹AP = D

In fact, it can be shown that:

  A is diagonalizable ⇐⇒ A has n linearly independent eigenvectors.

For example:

• If A has n distinct eigenvalues, then it is automatically diagonalizable.

• If A has fewer than n distinct eigenvalues, but nevertheless possesses n basic eigenvectors, then it is still diagonalizable.

In which case, if we let:

• v1, . . . , vn be the n basic eigenvectors associated with the eigenvalues λ1, . . . , λn, respectively,

• P be the n × n matrix [ v1 · · · vn ],

• D be the n × n diagonal matrix with λ1, . . . , λn in the main diagonal,

then it would follow that:

  P⁻¹AP = D, or equivalently, A = PDP⁻¹

9.4.2 Properties of Diagonalizable Matrices

Given an n × n diagonalizable matrix A with eigenvalues λ1, . . . , λn (with repeating multiplicities):

• A^m = PD^mP⁻¹ (where m ∈ ℕ)

  (Note: This formula can be used to compute any power of A quickly.)

• Tr(A) = λ1 + · · · + λn

• det(A) = λ1 × · · · × λn
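These identities can be demonstrated with a numerical eigendecomposition; a sketch assuming NumPy (whose `eig` returns the eigenvectors as the columns of P):

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])   # eigenvalues 5 and 2

# Columns of P are eigenvectors; D holds the eigenvalues on its diagonal.
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)
assert np.allclose(P @ D @ np.linalg.inv(P), A)      # A = P D P^-1

# A^m = P D^m P^-1: powering D just powers its diagonal entries.
m = 5
Am = P @ np.diag(eigvals**m) @ np.linalg.inv(P)
assert np.allclose(Am, np.linalg.matrix_power(A, m))

# Trace and determinant from the eigenvalues.
assert np.isclose(np.trace(A), eigvals.sum())
assert np.isclose(np.linalg.det(A), eigvals.prod())
```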

10 Basis and Related Topics

10.1 Definitions

Given a set of vectors v, v1, . . . , vn from a vector space V:

• A linear combination of v1, . . . , vn := a vector of the form k1v1 + · · · + knvn (for some numbers k1, . . . , kn).

• Span(v1, . . . , vn) := the set of all linear combinations of v1, . . . , vn.

  (In other words, to show that v is in the span of v1, . . . , vn is to show that v can be expressed as a linear combination of v1, . . . , vn.)

• v1, . . . , vn is a spanning set of V (equivalently, v1, . . . , vn span V) :⇐⇒ Span(v1, . . . , vn) = V.

• v1, . . . , vn are linearly independent :⇐⇒ the equation x1v1 + · · · + xnvn = 0 has only the trivial solution.

  (In other words, the zero vector in V can be expressed as a linear combination of v1, . . . , vn in a unique way.)

• v1, . . . , vn is a basis of V :⇐⇒ v1, . . . , vn span V and are linearly independent.

10.2 Facts

Given a series of vectors v1, . . . , vn from a vector space V:

• v1, . . . , vn are linearly dependent ⇐⇒ one of the vectors vi can be expressed as a linear combination of the other vectors.

• v1, . . . , vn is a basis of V =⇒ every vector in V can be expressed as a linear combination of v1, . . . , vn in a unique way.

In general, given two sets A and B:

  If A is a linearly independent set in V and B is a spanning set of V, then |A| ≤ |B|.


In particular:

• If A and B are both bases of V, then |A| = |B|.

• In other words, any basis of V will have the same number of vectors. This number is known as the dimension of V — or dim(V) for short.

As a result, given a series of vectors v1, . . . , vn in V, we have that:

• n < dim(V) =⇒ v1, . . . , vn do not span V.

• n > dim(V) =⇒ v1, . . . , vn are not linearly independent.

• n = dim(V) and v1, . . . , vn are linearly independent =⇒ v1, . . . , vn is a basis of V.

10.3 Procedure for Basis Extraction

Given a series of vectors v1, . . . , vn from a vector space V, a basis of Span(v1, . . . , vn) can be determined as follows:

1. Create the matrix [ v1 ; . . . ; vn ] whose rows are v1, . . . , vn.

2. Reduce the matrix to a ladder form (equivalently, a row-echelon form or a reduced row-echelon form). Once there:

  • The non-zero rows of the reduced matrix will form a basis of Span(v1, . . . , vn).

  • The number of those non-zero rows will be the dimension of Span(v1, . . . , vn).
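The procedure above can be carried out with any routine that computes a reduced row-echelon form; a sketch assuming SymPy's `Matrix.rref` is available:

```python
from sympy import Matrix

# Three vectors in R^3; the third is the sum of the first two,
# so Span(v1, v2, v3) is only 2-dimensional.
v1, v2, v3 = [1, 0, 1], [0, 1, 1], [1, 1, 2]

M = Matrix([v1, v2, v3])        # one vector per row
R, pivots = M.rref()            # reduced row-echelon form + pivot columns

# The non-zero rows of R form a basis of the span.
basis = [list(R.row(i)) for i in range(len(pivots))]
assert basis == [[1, 0, 1], [0, 1, 1]]
assert len(pivots) == 2         # dim Span = number of non-zero rows
```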

10.4 Equivalences

Given a series of vectors v, v1, . . . , vn, u1, . . . , um from a vector space V:

• v is a linear combination of v1, . . . , vn ⇐⇒ the augmented matrix [ v1 · · · vn | v ] is solvable.

• v1, . . . , vn are linearly independent ⇐⇒ the matrix [ v1 · · · vn ] can be reduced to a ladder form (equivalently, a row-echelon form or a reduced row-echelon form) with n leading numbers.

• Span(u1, . . . , um) = Span(v1, . . . , vn) ⇐⇒ each ui can be expressed as a linear combination of v1, . . . , vn, and each vi can be expressed as a linear combination of u1, . . . , um.

11 Subspaces

11.1 Definition

Given a subset S of a vector space V, S is called a subspace of V if and only if all of the following three conditions hold:

1. 0V ∈ S.

2. For all v1,v2 ∈ S, v1 + v2 ∈ S.

3. For all v ∈ S and each number k, kv ∈ S.

11.2 Examples of Subspace

Given a vector space V and a series of vectors v1, . . . , vn in V, some examples of subspaces include:

• The trivial subspaces (i.e., {0V} and V)

• Span(v1, . . . ,vn)

• A line through the origin (in R2 or R3)

• A plane through the origin (in R3)

11.3 Standard Subspaces

11.3.1 Definitions

Given an m × n matrix A:


• The row space of A — or Row(A) for short — is the span of the rows of A.

• The column space of A — or Col(A) for short — is the span of the columns of A.

• The null space of A — or Null(A) for short — is the set of all solutions to the homogeneous system Ax = 0.

  • In particular, if A is an n × n matrix and λ is an eigenvalue of A, then the eigenspace of A (with eigenvalue λ) := Null(λI − A).

11.3.2 Bases of Standard Subspaces

Given an m × n matrix A, when A is reduced to a ladder form (equivalently, a row-echelon form or a reduced row-echelon form) A′:

• The non-zero rows of A′ form a basis of Row(A).

• The columns of A corresponding to the leading columns of A′ form a basis of Col(A).

• The basic vectors in the general solution of the augmented matrix [ A′ | 0 ] form a basis of Null(A), where:

  # of basic vectors = dim(Null(A)) := Nullity(A)

Since in A′, the number of non-zero rows is equal to the number of leading numbers, we have that:

  dim(Row(A)) = dim(Col(A)) := Rank(A)

Furthermore, since in the homogeneous system associated with A′, the numbers of leading variables and non-leading variables add up to n, we also have that:

  Rank(A) + Nullity(A) = n
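The rank-nullity relation can be spot-checked numerically; a sketch assuming NumPy (the test matrix is illustrative):

```python
import numpy as np

# A 3 x 5 matrix, so n = 5 columns; the third row is the sum of
# the first two, so only 2 rows are independent.
A = np.array([[1.0, 2.0, 0.0, 1.0, 3.0],
              [0.0, 1.0, 1.0, 0.0, 1.0],
              [1.0, 3.0, 1.0, 1.0, 4.0]])

rank = np.linalg.matrix_rank(A)     # dim Row(A) = dim Col(A)
assert rank == 2

n = A.shape[1]
nullity = n - rank                  # Rank(A) + Nullity(A) = n
assert nullity == 3
```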

12 Operations on Vectors

12.1 Preliminaries

Given a number k and two vectors u = (u1, . . . , un) and v = (v1, . . . , vn) in Rⁿ:

• u + v := (u1 + v1, . . . , un + vn)

• kv := (kv1, . . . , kvn)

• 0 := (0, . . . , 0) (n zeros)

In the case where u and v are non-zero vectors:

  u and v are parallel :⇐⇒ u = kv for some number k.

12.2 Length

Given a vector v = (v1, . . . , vn) in Rⁿ, the length of v — or |v| for short — is defined as follows:

  |v| := √(v1² + · · · + vn²)

Note that:

• |v| = 0 ⇐⇒ v = 0

• |kv| = |k||v|

12.3 Dot Product

12.3.1 Definition and Facts

Given two vectors u = (u1, . . . , un) and v = (v1, . . . , vn) in Rⁿ:

  u · v := u1v1 + · · · + unvn

In the case where u and v are non-zero vectors in R³ (or in R²), we have that:

  u · v = |u| |v| cos θ

  (θ := the angle between u and v)

From which it follows that:


• u · v = 0 ⇐⇒ u and v are perpendicular.

• u · v > 0 ⇐⇒ u and v form an acute angle.

• u · v < 0 ⇐⇒ u and v form an obtuse angle.

12.3.2 Properties

Given a number k and three vectors u, v, w in Rⁿ, we have that:

• u · u = |u|2

• u · 0 = 0 · u = 0

• u · v = v · u

• (ku) · v = k(u · v) = u · (kv)

• u · (v + w) = u · v + u ·w

• (u + v) ·w = u ·w + v ·w

12.4 Cross Product

12.4.1 Definition and Facts

Given two vectors u = (u1, u2, u3) and v = (v1, v2, v3) in R³ (with rows separated by semicolons below):

  u × v := ( det[ u2 u3 ; v2 v3 ], − det[ u1 u3 ; v1 v3 ], det[ u1 u2 ; v1 v2 ] )

In the case where u and v are non-zero vectors, we have that:

  |u × v| = |u| |v| sin θ

  (θ := the angle between u and v)

From which it follows that:

  u × v = 0 ⇐⇒ u and v are parallel.

In the case where u and v are non-parallel vectors:

• u × v is a vector perpendicular to both u and v.

• |u × v| is the area of the parallelogram spanned by u and v.

12.4.2 Properties

Given a number k and three vectors u, v, w in R³, we have that:

• 0× v = v × 0 = 0

• v × v = 0

• u× v = −(v × u)

• (ku)× v = k(u× v) = u× (kv)

• u× (v + w) = u× v + u×w

• (u + v)×w = u×w + v ×w

• (u× v) · u = 0, (u× v) · v = 0

• u · (v × w) = det[ u ; v ; w ] (the 3 × 3 matrix whose rows are u, v and w)

12.5 Projection-Related Operations

Given a vector v and a non-zero directional vector d in R³ (or in R²):

• proj_d v := the projection of v onto d

• oproj_d v := the orthogonal projection of v onto d

• refl_d v := the reflection of v about d

(Figure: a vector v, a direction d, and the resulting proj_d v, oproj_d v and refl_d v.)

In addition:

• proj_d v = ((v · d) / (d · d)) d


• oproj_d v = v − proj_d v

  (since proj_d v + oproj_d v = v)

• refl_d v = 2 proj_d v − v

  (since v + refl_d v = 2 proj_d v)
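The three projection-related operations reduce to a few dot products; a sketch assuming NumPy, with `proj` as an illustrative helper:

```python
import numpy as np

def proj(v, d):
    """Projection of v onto the direction d: ((v.d)/(d.d)) d."""
    return (np.dot(v, d) / np.dot(d, d)) * d

v = np.array([3.0, 1.0])
d = np.array([1.0, 1.0])       # direction of the line y = x

p = proj(v, d)                 # proj_d v
o = v - p                      # oproj_d v = v - proj_d v
r = 2 * p - v                  # refl_d v = 2 proj_d v - v

assert np.allclose(p, [2.0, 2.0])
assert np.allclose(p + o, v)           # the two projections recompose v
assert np.isclose(np.dot(o, d), 0.0)   # oproj_d v is perpendicular to d
assert np.allclose(r, [1.0, 3.0])      # reflecting about y = x swaps coordinates
```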

13 2D/3D Vector Geometry

13.1 Equations

13.1.1 Lines

Given a point P = (x0, y0, z0) and a directional vector d = (dx, dy, dz) in R³, the line spanned by d passing through P can be expressed in various forms:

• Point-Direction Form: P + dt

  (t being a scalar parameter)

• Component Form:

  x = x0 + dx·t
  y = y0 + dy·t
  z = z0 + dz·t

• Symmetric Form:

  (x − x0)/dx = (y − y0)/dy = (z − z0)/dz

  (provided that dx, dy, dz ≠ 0)

Note that each of the forms above has an analogue in R² as well.

13.1.2 Planes

Given a point P = (x0, y0, z0), two non-parallel directional vectors d1, d2 and an associated normal vector n in R³, the plane spanned by d1 and d2 passing through P can be expressed in various forms:

• Point-Direction Form: P + d1s + d2t

  (s, t being scalar parameters)

• Point-Normal Form: (X − P) · n = 0

  (X := (x, y, z) being the variable vector)

• Standard Form: ax + by + cz = k

  (where (a, b, c) = n and k = ax0 + by0 + cz0)

13.2 Point vs. Point

Given two points P and Q in R² (or in R³):

• PQ (the vector from P to Q) = Q − P

• The distance between P and Q = |PQ|

13.3 Point vs. Line

Given a point Q and a line ℓ : P + dt in R² (or R³):

  Q is on ℓ :⇐⇒ Q = P + dt for some number t.

In the case where Q is not on ℓ, the distance between Q and ℓ can be determined in three ways:

• Orthogonal Projection Approach

  1. Compute v := oproj_d(QP). Once there:

    • |v| would give the distance between Q and ℓ.

    • Q + v would give the point on ℓ closest to Q.

  (Figure: the line ℓ through P with direction d, the point Q off the line, the vector v, and the closest point Q + v.)

• Dot Product Approach

  1. Let X = P + dt be the point on ℓ where the shortest distance occurs.

  2. By plugging X (in the above form) into the equation QX · d = 0, we can solve for the missing value of t — and hence determine the coordinates of X as well.

  3. Once there, |QX| would give the distance between Q and ℓ.

• Cross Product Approach

  The distance between Q and ℓ can be calculated as the height of the parallelogram spanned by QP and d:

    distance = |QP × d| / |d|

  (the numerator being the area of the parallelogram and the denominator its base)
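The orthogonal-projection and cross-product approaches should agree; a sketch assuming NumPy, using the x-axis as the line for easy checking:

```python
import numpy as np

def proj(v, d):
    """Projection of v onto the direction d."""
    return (np.dot(v, d) / np.dot(d, d)) * d

# Line l : P + dt (here the x-axis), and a point Q off the line.
P = np.array([0.0, 0.0, 0.0])
d = np.array([1.0, 0.0, 0.0])
Q = np.array([2.0, 3.0, 4.0])

# Orthogonal projection approach: v = oproj_d(QP).
QP = P - Q
v = QP - proj(QP, d)
dist_proj = np.linalg.norm(v)
closest = Q + v
assert np.allclose(closest, [2.0, 0.0, 0.0])   # the point on l closest to Q

# Cross product approach: |QP x d| / |d|.
dist_cross = np.linalg.norm(np.cross(QP, d)) / np.linalg.norm(d)

assert np.isclose(dist_proj, 5.0)              # sqrt(3^2 + 4^2)
assert np.isclose(dist_cross, dist_proj)
```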

13.4 Point vs. Plane

Given a point Q = (x1, y1, z1) and a plane 𝒫 in R³:

• If 𝒫 is in the point-direction form P + d1s + d2t:

  Q is on 𝒫 :⇐⇒ Q = P + d1s + d2t for some numbers s and t.

• If 𝒫 is in the standard form ax + by + cz = k:

  Q is on 𝒫 :⇐⇒ ax1 + by1 + cz1 = k.

In the case where Q is not on 𝒫, the distance between Q and 𝒫 can be determined in three ways:

• Projection Approach

  1. Compute v := proj_n(QP). Note that:

    • If 𝒫 is in point-direction form, the cross product of the directional vectors can be used as n.

    • If 𝒫 is in standard form, then any point on the plane can be used as P.

  2. Once there:

    • |v| would give the distance between Q and 𝒫.

    • Q + v would give the point on 𝒫 closest to Q.

  (Figure: the plane 𝒫 with normal n, the point Q off the plane, the vector v, and the closest point Q + v.)

• Dot Product Approach (for 𝒫 : P + d1s + d2t)

  1. Let X = P + d1s + d2t be the point on 𝒫 where the shortest distance occurs.

  2. By plugging X (in the above form) into the system:

    QX · d1 = 0
    QX · d2 = 0

  we can solve for the missing values of s and t — and hence determine the coordinates of X as well.

  3. Once there, |QX| would give the distance between Q and 𝒫.

• Intersection Point Approach (for 𝒫 in standard form)

  1. Let X be the point on 𝒫 closest to Q. Since QX is parallel to n, X must be of the form Q + nt for some number t.

  2. By plugging X (in the above form) into the equation of 𝒫, we can solve for the missing value of t — and hence determine the coordinates of X as well.

  3. Once there, |QX| would give the distance between Q and 𝒫.
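The projection approach for a plane in standard form can be sketched as follows, assuming NumPy (the plane and point are illustrative):

```python
import numpy as np

def proj(v, d):
    """Projection of v onto the direction d."""
    return (np.dot(v, d) / np.dot(d, d)) * d

# Plane in standard form: x + 2y + 2z = 5, so n = (1, 2, 2).
n = np.array([1.0, 2.0, 2.0])
k = 5.0
Q = np.array([3.0, 4.0, 5.0])
assert not np.isclose(np.dot(n, Q), k)    # Q is not on the plane

# Any point P on the plane works, e.g. (5, 0, 0); compute v = proj_n(QP).
P = np.array([5.0, 0.0, 0.0])
v = proj(P - Q, n)
dist = np.linalg.norm(v)
closest = Q + v

assert np.isclose(np.dot(n, closest), k)  # Q + v lands on the plane
# Agrees with the classic |ax1 + by1 + cz1 - k| / |n| formula.
assert np.isclose(dist, abs(np.dot(n, Q) - k) / np.linalg.norm(n))
```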


13.5 Line vs. Line

In R², a pair of lines falls into exactly one of the following categories:

• Parallel intersecting (i.e., overlapping)

• Parallel non-intersecting

• Non-parallel intersecting

In contrast, a pair of lines in R³ falls into exactly one of the following four categories:

• Parallel intersecting (i.e., overlapping)

• Parallel non-intersecting

• Non-parallel intersecting

• Non-parallel non-intersecting (i.e., skew)

Given two lines ℓ1 : P1 + d1s and ℓ2 : P2 + d2t:

• ℓ1 and ℓ2 are parallel ⇐⇒ d1 and d2 are parallel.

• ℓ1 and ℓ2 are intersecting ⇐⇒ the equation P1 + d1s = P2 + d2t is solvable for some numbers s and t.

  (In the case where such an (s, t) pair exists and is unique, the coordinates of the intersection point can be determined by, say, back-substituting the value of s into P1 + d1s.)

In the case where ℓ1 and ℓ2 don't intersect, the distance between them can be determined as follows:

• If ℓ1 and ℓ2 are parallel, then the distance between them is simply the distance between ℓ1 and any point on ℓ2.

• If ℓ1 and ℓ2 are non-parallel, then:

  1. The shortest distance must occur between a point X1 = P1 + d1s on ℓ1 and a point X2 = P2 + d2t on ℓ2.

  2. By plugging X1 and X2 (in the above forms) into the following system:

    X1X2 · d1 = 0
    X1X2 · d2 = 0

  we can solve for the missing values of s and t — and hence determine the coordinates of X1 and X2 as well.

  3. Once there, the distance between ℓ1 and ℓ2 is simply |X1X2|.
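The system in step 2 is linear in s and t, so the skew-line case reduces to a 2 × 2 solve; a sketch assuming NumPy:

```python
import numpy as np

# Two skew lines: l1 along the x-axis, l2 along the y-axis lifted to z = 2.
P1, d1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
P2, d2 = np.array([0.0, 0.0, 2.0]), np.array([0.0, 1.0, 0.0])

# With X1 = P1 + d1 s and X2 = P2 + d2 t, we have
# X1X2 = (P2 - P1) + d2 t - d1 s, so the two perpendicularity
# conditions X1X2 . d1 = 0 and X1X2 . d2 = 0 become a 2x2 system in (s, t).
w = P2 - P1
M = np.array([[np.dot(d1, d1), -np.dot(d2, d1)],
              [np.dot(d1, d2), -np.dot(d2, d2)]])
rhs = np.array([np.dot(w, d1), np.dot(w, d2)])
s, t = np.linalg.solve(M, rhs)

X1, X2 = P1 + d1 * s, P2 + d2 * t
dist = np.linalg.norm(X2 - X1)
assert np.isclose(dist, 2.0)               # the vertical gap between the lines
assert np.isclose(np.dot(X2 - X1, d1), 0.0)
assert np.isclose(np.dot(X2 - X1, d2), 0.0)
```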

13.6 Line vs. Plane

In R³, a line ℓ and a plane 𝒫 must fall into exactly one of the following categories:

• Parallel intersecting (i.e., overlapping)

• Parallel non-intersecting

• Non-parallel intersecting

In what follows, we assume that a plane is always converted into standard form for easier analysis. More specifically, if ℓ is in the form (x(t), y(t), z(t)) with directional vector d and 𝒫 is in the form ax + by + cz = k with n = (a, b, c), then:

• ℓ and 𝒫 are parallel ⇐⇒ d · n = 0

• ℓ and 𝒫 are intersecting ⇐⇒ the equation ax(t) + by(t) + cz(t) = k is solvable for some t. Moreover:

  • If the equation holds for all t, then ℓ and 𝒫 are overlapping.

  • If the equation holds for a single t, then ℓ and 𝒫 intersect at a unique point — whose coordinates can be determined by back-substituting the value of t into ℓ : (x(t), y(t), z(t)).

In the case where ℓ and 𝒫 are non-intersecting (hence parallel), the distance between ℓ and 𝒫 is simply the distance between 𝒫 and any point on ℓ.


13.7 Plane vs. Plane

In R³, any pair of planes must fall into exactly one of the following categories:

• Parallel intersecting (i.e., overlapping)

• Parallel non-intersecting

• Non-parallel intersecting

In what follows, we assume that a plane is always converted into standard form for easier analysis. More specifically, given 𝒫1 : a1x + b1y + c1z = k1 and 𝒫2 : a2x + b2y + c2z = k2:

• 𝒫1 and 𝒫2 are parallel ⇐⇒ (a1, b1, c1) and (a2, b2, c2) are parallel (i.e., scalar multiples of each other).

• 𝒫1 and 𝒫2 are intersecting ⇐⇒ the system

    a1x + b1y + c1z = k1
    a2x + b2y + c2z = k2

  is solvable for some x, y and z. In which case:

  • If the solution set is generated by one parameter, then 𝒫1 and 𝒫2 intersect at a line.

  • If not, then 𝒫1 and 𝒫2 are overlapping.

In the case where 𝒫1 and 𝒫2 are non-intersecting (hence parallel), the distance between 𝒫1 and 𝒫2 is simply the distance between 𝒫1 and any point on 𝒫2.

14 Matrix Transformation

14.1 Preliminaries

A function f from Rⁿ to Rᵐ is a matrix transformation if and only if there exists an m × n matrix M such that:

  f(v) = Mv

In which case, f is also a linear transformation in the sense that:

• f(v1 + v2) = f(v1) + f(v2) for all n-entry vectors v1 and v2.

• f(kv) = kf(v) for all numbers k and n-entry vectors v.

In fact, any function from Rⁿ to Rᵐ with these two properties must be a matrix transformation as well.

14.2 Standard Matrix Transformations in 2D

If f is a linear transformation on 2D vectors, then f must be a matrix transformation induced by the 2 × 2 matrix

  [ f(1, 0)  f(0, 1) ]

whose columns are the images of the standard basis vectors.

The following matrices operate on 2D vectors based on the line ℓ : y = mx (each matrix scaled by 1/(1 + m²); rows separated by semicolons):

  Operation                        Matrix (× 1/(1 + m²))          Eigenvectors

  Projection (onto ℓ)              [ 1  m ; m  m² ]               (1, m) (λ = 1); (m, −1) (λ = 0)

  Orthogonal Projection (onto ℓ)   [ m²  −m ; −m  1 ]             (1, m) (λ = 0); (m, −1) (λ = 1)

  Reflection (about ℓ)             [ 1 − m²  2m ; 2m  m² − 1 ]    (1, m) (λ = 1); (m, −1) (λ = −1)

The following matrix operates on 2D vectors by applying a counter-clockwise rotation with angle θ:

  [ cos θ  −sin θ ; sin θ  cos θ ]
