8/11/2019 MCLA Concise Review
MULTIVARIABLE CALCULUS AND LINEAR ALGEBRA CONCISE REVIEW
VARUN DANDA AND SIDDHARTH KURELLA
Part 1. Multivariable Calculus
1. Parametric Equations
Suppose $x = f(t)$ and $y = g(t)$; then the variable $t$ is considered a parameter and the equations are called parametric equations.
As $t$ varies, the point $(x, y)$ varies and traces out a curve that is called a parametric curve.
The slope of a parametric curve is
$$\frac{dy}{dx} = \frac{dy/dt}{dx/dt} \quad (1)$$
Proof: $\dfrac{dy}{dt} = \dfrac{dy}{dx}\dfrac{dx}{dt}$ (Chain Rule)
Given $x(t)$ and $y(t)$, the area under a parametric curve is
$$A = \int_a^b y\,dx = \int_\alpha^\beta y(t)\,x'(t)\,dt \quad (2)$$
where $t$ increases from $\alpha$ to $\beta$.
The length of a curve in the form $y = f(x)$ is defined as
$$L = \int_a^b \sqrt{1 + \left(\frac{dy}{dx}\right)^2}\,dx \quad (3)$$
Substituting equation 1 into equation 3, one can find the equation for the length of a parametric curve:
$$L = \int_\alpha^\beta \sqrt{\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2}\,dt \quad (4)$$
The surface area of a parametric curve rotated around the $x$- or $y$-axis is
$$x\text{-axis: } S = \int 2\pi y\,ds; \qquad y\text{-axis: } S = \int 2\pi x\,ds \quad (5)$$
where $ds = \sqrt{\left(\dfrac{dx}{dt}\right)^2 + \left(\dfrac{dy}{dt}\right)^2}\,dt$.
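As a sanity check on equation 4, the arc length of the unit circle $x = \cos t$, $y = \sin t$ over one full turn can be approximated numerically. This is a minimal sketch; the midpoint rule and the step count are arbitrary choices:

```python
import math

# Equation 4: L = integral of sqrt((dx/dt)^2 + (dy/dt)^2) dt over [t0, t1].
def parametric_length(dx, dy, t0, t1, n=100_000):
    h = (t1 - t0) / n
    total = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * h  # midpoint rule
        total += math.hypot(dx(t), dy(t)) * h
    return total

# Unit circle: x = cos t, y = sin t; circumference should be 2*pi.
L = parametric_length(lambda t: -math.sin(t), lambda t: math.cos(t),
                      0.0, 2 * math.pi)
```

The same routine applies to any smooth parametrization once $dx/dt$ and $dy/dt$ are supplied.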
2. Polar Coordinates
The Cartesian coordinates $(x, y)$ of a polar coordinate $(r, \theta)$ are defined by
$$x = r\cos\theta; \qquad y = r\sin\theta \quad (6)$$
Therefore $r^2 = x^2 + y^2$ and $\tan\theta = \dfrac{y}{x}$.
Figure 1. Polar Coordinate Representation
The graph of a polar equation $r = f(\theta)$ consists of all points $P$ that have at least one polar representation $(r, \theta)$ whose coordinates satisfy the equation. To find a tangent line to a polar curve $r = f(\theta)$, $\theta$ is regarded as a parameter and the parametric equations are written:
$$x = r\cos\theta = f(\theta)\cos\theta \quad \text{and} \quad y = r\sin\theta = f(\theta)\sin\theta$$
To find the slope, use equation 1 in terms of polar coordinates:
$$\frac{dy}{dx} = \frac{dy/d\theta}{dx/d\theta} = \frac{\frac{dr}{d\theta}\sin\theta + r\cos\theta}{\frac{dr}{d\theta}\cos\theta - r\sin\theta}$$
The area of a sector of a circle is
$$A = \tfrac{1}{2}r^2\theta \quad (7)$$
Considering infinitesimally small sectors of circles, the area of a polar curve between two angles is
$$A = \int_a^b \tfrac{1}{2}r^2\,d\theta \quad (8)$$
To find the length of a polar curve, simplify $\left(\frac{dx}{d\theta}\right)^2 + \left(\frac{dy}{d\theta}\right)^2$:
$$\frac{dx}{d\theta} = \frac{dr}{d\theta}\cos\theta - r\sin\theta; \qquad \frac{dy}{d\theta} = \frac{dr}{d\theta}\sin\theta + r\cos\theta$$
$$\left(\frac{dx}{d\theta}\right)^2 + \left(\frac{dy}{d\theta}\right)^2 = \left(\frac{dr}{d\theta}\cos\theta - r\sin\theta\right)^2 + \left(\frac{dr}{d\theta}\sin\theta + r\cos\theta\right)^2 = r^2 + \left(\frac{dr}{d\theta}\right)^2$$
Therefore, plugging into equation 4,
$$L = \int_a^b \sqrt{r^2 + \left(\frac{dr}{d\theta}\right)^2}\,d\theta \quad (9)$$
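Equation 9 can be checked numerically: the cardioid $r = 1 + \cos\theta$ has total arc length 8. A minimal sketch (the sample count is an arbitrary choice):

```python
import math

# Polar arc length, equation 9: L = integral of sqrt(r^2 + (dr/dtheta)^2) dtheta.
def polar_length(r, dr, a, b, n=200_000):
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        th = a + (i + 0.5) * h  # midpoint rule
        total += math.sqrt(r(th) ** 2 + dr(th) ** 2) * h
    return total

# Cardioid r = 1 + cos(theta), dr/dtheta = -sin(theta); known length 8.
L = polar_length(lambda th: 1 + math.cos(th), lambda th: -math.sin(th),
                 0.0, 2 * math.pi)
```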
3. Sequences and Series
A sequencecan be thought of as a list of numbers written in a definite order
If a sequence approaches a certain value it is considered convergent, otherwise it is divergent
A sequence is considered increasing if every term after the first is greater than the preceding term, and decreasing if every term after the first is less than the preceding term. A sequence is considered monotonic if it is either increasing or decreasing.
A sequence is bounded above if there exists a number $M$ such that $a_n \le M$ for all $n \ge 1$. A sequence is bounded below if there exists a number $m$ such that $a_n \ge m$ for all $n \ge 1$.
Theorem: Every bounded, monotonic sequence is convergent.
A series is the sum of the terms of an infinite sequence and is denoted by
$$\sum_{n=1}^{\infty} a_n = a_1 + a_2 + a_3 + \cdots + a_n + \cdots$$
Theorem (Test for Divergence): If $\lim_{n\to\infty} a_n$ does not exist or $\lim_{n\to\infty} a_n \neq 0$, then the series $\sum_{n=1}^{\infty} a_n$ is divergent. (The converse does not hold: $\lim_{n\to\infty} a_n = 0$ alone does not guarantee convergence.)
The Integral Test: If $f$ is a continuous, positive, decreasing function on $[1, \infty)$ and $a_n = f(n)$, then the series $\sum_{n=1}^{\infty} a_n$ is convergent if and only if the improper integral $\int_1^{\infty} f(x)\,dx$ is convergent.
P-Series: $\sum_{n=1}^{\infty} \dfrac{1}{n^p}$ is convergent if $p > 1$ and divergent if $p \le 1$; proved by the integral test.
The Comparison Test: Suppose that $\sum a_n$ and $\sum b_n$ are series with positive terms.
(i) If $\sum b_n$ is convergent and $a_n \le b_n$ for all $n$, then $\sum a_n$ is also convergent.
(ii) If $\sum b_n$ is divergent and $a_n \ge b_n$ for all $n$, then $\sum a_n$ is also divergent.
An alternating series is a series whose terms are alternately positive and negative.
Example: $\sum_{n=1}^{\infty} \dfrac{(-1)^n}{n}$
The Alternating Series Test: If the alternating series $\sum_{n=1}^{\infty} (-1)^{n-1} b_n$ (with $b_n > 0$) satisfies (i) $b_{n+1} \le b_n$ and (ii) $\lim_{n\to\infty} b_n = 0$, then the alternating series is convergent.
A series $\sum a_n$ is absolutely convergent if the series of the absolute values $\sum |a_n|$ is convergent, and is called conditionally convergent if it is convergent but not absolutely convergent.
The Ratio Test: Let
$$\lim_{n\to\infty} \left|\frac{a_{n+1}}{a_n}\right| = L$$
(i) If $L < 1$, then the series $\sum_{n=1}^{\infty} a_n$ is absolutely convergent and therefore convergent.
(ii) If $L > 1$ or $L = \infty$, then the series $\sum_{n=1}^{\infty} a_n$ is divergent.
(iii) If $L = 1$, then the ratio test is inconclusive.
The Root Test: Let
$$\lim_{n\to\infty} \sqrt[n]{|a_n|} = L$$
(i) If $L < 1$, then the series $\sum_{n=1}^{\infty} a_n$ is absolutely convergent and therefore convergent.
(ii) If $L > 1$ or $L = \infty$, then the series $\sum_{n=1}^{\infty} a_n$ is divergent.
(iii) If $L = 1$, then the root test is inconclusive.
A power series is a series of the form
$$\sum_{n=0}^{\infty} c_n(x - a)^n = c_0 + c_1(x - a) + c_2(x - a)^2 + \cdots$$
Theorem: For a power series $\sum_{n=0}^{\infty} c_n(x - a)^n$ there are only three options:
(i) The series converges only when $x = a$.
(ii) The series converges for all $x$.
(iii) There is a positive number $R$, called the radius of convergence, such that the series converges if $|x - a| < R$ and diverges if $|x - a| > R$.
The interval of convergence of a power series is the interval that consists of all values of $x$ for which the series converges.
A Taylor Series centered at $a$ is given by:
$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x - a)^n, \quad \text{where } c_n = \frac{f^{(n)}(a)}{n!} \text{ in the power series form}$$
The special case where a=0 is called a Maclaurin Series
Important Maclaurin Series:
$$\frac{1}{1 - x} = \sum_{n=0}^{\infty} x^n$$
$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}$$
$$\sin x = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n+1}}{(2n+1)!}$$
$$\cos x = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n}}{(2n)!}$$
$$\tan^{-1} x = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n+1}}{2n+1}$$
The above series were derived by differentiating and integrating other series.
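The Maclaurin series for $\sin x$ can be checked by summing its first few terms against the library value. A minimal sketch; ten terms is an arbitrary cutoff:

```python
import math

def maclaurin_sin(x, terms=10):
    """Partial sum of sin(x) = sum over n of (-1)^n x^(2n+1) / (2n+1)!."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

approx = maclaurin_sin(1.0)  # should agree with math.sin(1.0) very closely
```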
4. Vectors
Avector is a quantity that has both a magnitude and a direction
Vector Addition: To add two vectors geometrically, place the tail of the second vector at the head of the first and draw a line from the tail of the first to the head of the second.
Scalar Multiplication: The scalar multiple of a vector is $c\mathbf{v}$, whose length is $|c|$ times the length of $\mathbf{v}$ and whose direction is the same as $\mathbf{v}$ if $c > 0$ and opposite if $c < 0$.
The magnitude of a vector is its length and is represented by either $|\mathbf{v}|$ or $\|\mathbf{v}\|$. In terms of its components, the magnitude of a vector is $|\mathbf{a}| = \sqrt{a_1^2 + a_2^2 + \cdots + a_n^2}$, where $n$ is the number of dimensions the vector is in.
Vectors can also be added algebraically by adding their components. For example, if $\mathbf{u} = [u_1, u_2]$ and $\mathbf{v} = [v_1, v_2]$, then $\mathbf{u} + \mathbf{v} = [u_1 + v_1, u_2 + v_2]$.
The standard basis vectors $\mathbf{i}, \mathbf{j}, \mathbf{k}$ can be used to express any vector in $V_3$:
$$\mathbf{i} = \langle 1, 0, 0\rangle; \quad \mathbf{j} = \langle 0, 1, 0\rangle; \quad \mathbf{k} = \langle 0, 0, 1\rangle$$
For example, $\mathbf{a} = \langle a_1, a_2, a_3\rangle = \langle a_1, 0, 0\rangle + \langle 0, a_2, 0\rangle + \langle 0, 0, a_3\rangle = a_1\mathbf{i} + a_2\mathbf{j} + a_3\mathbf{k}$.
A unit vector is a vector whose length is 1, such as $\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$. The unit vector that has the same direction as another vector $\mathbf{a}$ is $\mathbf{u} = \frac{1}{|\mathbf{a}|}\mathbf{a}$.
The dot product of two vectors $\mathbf{a} = \langle a_1, a_2, a_3\rangle$ and $\mathbf{b} = \langle b_1, b_2, b_3\rangle$ is given by $\mathbf{a} \cdot \mathbf{b} = a_1b_1 + a_2b_2 + a_3b_3$, which gives a scalar.
The dot product can also be expressed by using the law of cosines:
$$\mathbf{a} \cdot \mathbf{b} = |\mathbf{a}||\mathbf{b}|\cos\theta \quad (10)$$
Theorem: From the above definition, two vectors $\mathbf{a}$ and $\mathbf{b}$ are orthogonal if and only if $\mathbf{a} \cdot \mathbf{b} = 0$.
The scalar projection and vector projection of a vector $\mathbf{b}$ onto another vector $\mathbf{a}$ are defined as
$$\text{comp}_{\mathbf{a}}\mathbf{b} = \frac{\mathbf{a}\cdot\mathbf{b}}{|\mathbf{a}|} \quad \text{and} \quad \text{proj}_{\mathbf{a}}\mathbf{b} = \frac{\mathbf{a}\cdot\mathbf{b}}{|\mathbf{a}|^2}\,\mathbf{a} \quad (11)$$
The cross product of two vectors $\mathbf{a} = \langle a_1, a_2, a_3\rangle$ and $\mathbf{b} = \langle b_1, b_2, b_3\rangle$ is given by
$$\mathbf{a} \times \mathbf{b} = \langle a_2b_3 - a_3b_2,\; a_3b_1 - a_1b_3,\; a_1b_2 - a_2b_1\rangle$$
which results in a vector.
An easier way to view the cross product is to use determinants:
$$\mathbf{a} \times \mathbf{b} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix} \quad (12)$$
The magnitude of the cross product is also expressed as
$$|\mathbf{a} \times \mathbf{b}| = |\mathbf{a}||\mathbf{b}|\sin\theta \quad (13)$$
The length of the cross product $\mathbf{a} \times \mathbf{b}$ is equal to the area of the parallelogram determined by $\mathbf{a}$ and $\mathbf{b}$.
Properties of Dot Products: Suppose $\mathbf{a}$, $\mathbf{b}$, and $\mathbf{c}$ are vectors and $c$ is a scalar.
$$\mathbf{a} \cdot \mathbf{a} = |\mathbf{a}|^2 \qquad \mathbf{a} \cdot (\mathbf{b} + \mathbf{c}) = \mathbf{a}\cdot\mathbf{b} + \mathbf{a}\cdot\mathbf{c} \qquad \mathbf{a} \cdot \mathbf{b} = \mathbf{b} \cdot \mathbf{a}$$
$$(c\mathbf{a}) \cdot \mathbf{b} = c(\mathbf{a}\cdot\mathbf{b}) = \mathbf{a}\cdot(c\mathbf{b}) \qquad \mathbf{0} \cdot \mathbf{a} = 0$$
Properties of Cross Products: Suppose $\mathbf{a}$, $\mathbf{b}$, and $\mathbf{c}$ are vectors and $c$ is a scalar.
$$\mathbf{a} \times \mathbf{b} = -\mathbf{b} \times \mathbf{a} \qquad (c\mathbf{a}) \times \mathbf{b} = c(\mathbf{a}\times\mathbf{b}) = \mathbf{a}\times(c\mathbf{b})$$
$$\mathbf{a} \times (\mathbf{b} + \mathbf{c}) = \mathbf{a}\times\mathbf{b} + \mathbf{a}\times\mathbf{c} \qquad \mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = (\mathbf{a}\times\mathbf{b})\cdot\mathbf{c}$$
$$\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = (\mathbf{a}\cdot\mathbf{c})\mathbf{b} - (\mathbf{a}\cdot\mathbf{b})\mathbf{c}$$
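A quick numeric check of these identities, in pure Python (the sample vectors are arbitrary): the cross product is orthogonal to both factors, and Lagrange's identity $|\mathbf{a}\times\mathbf{b}|^2 = |\mathbf{a}|^2|\mathbf{b}|^2 - (\mathbf{a}\cdot\mathbf{b})^2$ is equations 10 and 13 combined via $\sin^2\theta + \cos^2\theta = 1$.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

a, b = (1.0, 2.0, 3.0), (4.0, -5.0, 6.0)
c = cross(a, b)
# a x b is orthogonal to both a and b, so both dot products vanish.
orth_a, orth_b = dot(a, c), dot(b, c)
# Lagrange's identity: |a x b|^2 = |a|^2 |b|^2 - (a . b)^2.
lagrange_lhs = dot(c, c)
lagrange_rhs = dot(a, a) * dot(b, b) - dot(a, b) ** 2
```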
5. Vector Functions
A vector function is a function whose domain is a set of real numbers and whose range is a set of vectors:
$$\mathbf{r}(t) = \langle f(t), g(t), h(t)\rangle = f(t)\,\mathbf{i} + g(t)\,\mathbf{j} + h(t)\,\mathbf{k}$$
The components of a vector function can be considered parametric equations that can be used to sketch the curve.
The derivative of a vector function is defined as follows:
$$\frac{d\mathbf{r}}{dt} = \mathbf{r}'(t) = \langle f'(t), g'(t), h'(t)\rangle = f'(t)\,\mathbf{i} + g'(t)\,\mathbf{j} + h'(t)\,\mathbf{k}$$
The integral of a vector function is defined as follows:
$$\int_a^b \mathbf{r}(t)\,dt = \left(\int_a^b f(t)\,dt\right)\mathbf{i} + \left(\int_a^b g(t)\,dt\right)\mathbf{j} + \left(\int_a^b h(t)\,dt\right)\mathbf{k}$$
The length of a vector function can be determined by using equation 4 in three dimensions:
$$L = \int_a^b \sqrt{[f'(t)]^2 + [g'(t)]^2 + [h'(t)]^2}\,dt, \quad \text{where } \mathbf{r}(t) = \langle f(t), g(t), h(t)\rangle$$
or the more compact form:
$$L = \int_a^b |\mathbf{r}'(t)|\,dt \quad (14)$$
The unit tangent vector indicates the direction of the curve at a certain point and is given by
$$\mathbf{T}(t) = \frac{\mathbf{r}'(t)}{|\mathbf{r}'(t)|} \quad (15)$$
The curvature $\kappa$ of a curve at a given point is a measure of how quickly the curve changes direction at that point:
$$\kappa = \left|\frac{d\mathbf{T}}{ds}\right| \quad (16)$$
It is, however, easier to compute curvature in terms of the parameter $t$ instead of $s$:
$$\frac{d\mathbf{T}}{dt} = \frac{d\mathbf{T}}{ds}\frac{ds}{dt}, \quad \text{and therefore} \quad \kappa = \frac{|d\mathbf{T}/dt|}{|ds/dt|}$$
So
$$\kappa(t) = \frac{|\mathbf{T}'(t)|}{|\mathbf{r}'(t)|} \quad (17)$$
The unit normal vector is orthogonal to the unit tangent vector and points in the direction that the curve is curving towards:
$$\mathbf{N}(t) = \frac{\mathbf{T}'(t)}{|\mathbf{T}'(t)|} \quad (18)$$
The binormal vector can be found by using the right-hand rule on the tangent and normal vectors:
$$\mathbf{B}(t) = \mathbf{T}(t) \times \mathbf{N}(t) \quad (19)$$
Figure 2. Tangent, Normal, and Binormal Vectors
6. Cylindrical and Spherical Coordinates
Apart from the commonly used Cartesian coordinates, cylindrical coordinates and spherical coordinates are used in special cases to easily solve certain problems.
In a cylindrical coordinate system, a point $P$ is represented by $(r, \theta, z)$, where $r$ and $\theta$ are the polar coordinates of the projection of $P$ onto the $xy$-plane and $z$ is the distance from the point to the $xy$-plane.
Using geometry, the conversions from cylindrical to Cartesian coordinates and vice versa are below:
$$x = r\cos\theta; \quad y = r\sin\theta; \quad z = z \qquad \Big|\qquad r^2 = x^2 + y^2; \quad \tan\theta = \frac{y}{x}; \quad z = z$$
In a spherical coordinate system, a point $P$ is represented by $(\rho, \theta, \phi)$, where $\rho$ is the distance from the origin to point $P$, $\theta$ is the angle that the projection of $P$ onto the $xy$-plane makes with the $x$-axis, and $\phi$ is the angle that the line from the origin to $P$ makes with the $z$-axis; $\rho \ge 0$ and $0 \le \phi \le \pi$.
The conversions from spherical to Cartesian coordinates and vice versa are below:
$$x = \rho\sin\phi\cos\theta; \quad y = \rho\sin\phi\sin\theta; \quad z = \rho\cos\phi \qquad \Big|\qquad \rho^2 = x^2 + y^2 + z^2$$
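The spherical conversions can be exercised with a round trip through Cartesian coordinates. A minimal sketch (the function names and the sample point are my own choices):

```python
import math

def spherical_to_cartesian(rho, theta, phi):
    """x = rho sin(phi) cos(theta), y = rho sin(phi) sin(theta), z = rho cos(phi)."""
    return (rho * math.sin(phi) * math.cos(theta),
            rho * math.sin(phi) * math.sin(theta),
            rho * math.cos(phi))

def cartesian_to_spherical(x, y, z):
    rho = math.sqrt(x * x + y * y + z * z)
    theta = math.atan2(y, x)   # angle of the xy-projection with the x-axis
    phi = math.acos(z / rho)   # angle with the z-axis, in [0, pi]
    return rho, theta, phi

pt = spherical_to_cartesian(2.0, math.pi / 3, math.pi / 4)
back = cartesian_to_spherical(*pt)  # should recover (2, pi/3, pi/4)
```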
7. Partial Derivatives
Suppose $f$ is a function of two variables, $x$ and $y$, and we let only $x$ vary while keeping $y$ fixed at a constant $b$. Then we are really considering a function of a single variable $x$, namely $g(x) = f(x, b)$. If $g$ has a derivative at $a$, then we call it the partial derivative of $f$ with respect to $x$ at $(a, b)$. Therefore:
$$f_x(a, b) = g'(a), \quad \text{where } g(x) = f(x, b)$$
Partial Derivative Notations:
$$f_x(x, y) = f_x = \frac{\partial f}{\partial x} = D_x f \qquad\qquad f_y(x, y) = f_y = \frac{\partial f}{\partial y} = D_y f$$
To find partial derivatives with respect to a variable, treat the other variables as constants and differentiate the function with respect to that variable. Note: partial derivatives can be found for functions with more than 2 variables.
Higher derivatives can also be applied to partial derivatives:
$$(f_x)_x = f_{xx} = \frac{\partial}{\partial x}\left(\frac{\partial f}{\partial x}\right) = \frac{\partial^2 f}{\partial x^2} \qquad (f_x)_y = f_{xy} = \frac{\partial}{\partial y}\left(\frac{\partial f}{\partial x}\right) = \frac{\partial^2 f}{\partial y\,\partial x}$$
$$(f_y)_x = f_{yx} = \frac{\partial}{\partial x}\left(\frac{\partial f}{\partial y}\right) = \frac{\partial^2 f}{\partial x\,\partial y} \qquad (f_y)_y = f_{yy} = \frac{\partial}{\partial y}\left(\frac{\partial f}{\partial y}\right) = \frac{\partial^2 f}{\partial y^2}$$
Partial derivatives can help to compute the tangent plane to a surface, which can be used to perform linear approximations.
The equation of a tangent plane to the surface $z = f(x, y)$ at the point $P(x_0, y_0, z_0)$ is
$$z - z_0 = f_x(x_0, y_0)(x - x_0) + f_y(x_0, y_0)(y - y_0)$$
This can be used to find approximate values of points near the one used for the tangent plane approximation.
The chain rule can also be applied to derivatives of functions with more than one variable. Suppose that $z = f(x, y)$ is differentiable and $x = g(t)$ and $y = h(t)$. Then:
$$\frac{dz}{dt} = \frac{\partial f}{\partial x}\frac{dx}{dt} + \frac{\partial f}{\partial y}\frac{dy}{dt} \quad (20)$$
If $x = g(s, t)$ and $y = h(s, t)$, then:
$$\frac{\partial z}{\partial s} = \frac{\partial z}{\partial x}\frac{\partial x}{\partial s} + \frac{\partial z}{\partial y}\frac{\partial y}{\partial s} \quad \text{and} \quad \frac{\partial z}{\partial t} = \frac{\partial z}{\partial x}\frac{\partial x}{\partial t} + \frac{\partial z}{\partial y}\frac{\partial y}{\partial t} \quad (21)$$
Example: Write out the chain rule for the case where $w = f(x, y, z, t)$ and $x = x(u, v)$, $y = y(u, v)$, $z = z(u, v)$, and $t = t(u, v)$:
$$\frac{\partial w}{\partial u} = \frac{\partial w}{\partial x}\frac{\partial x}{\partial u} + \frac{\partial w}{\partial y}\frac{\partial y}{\partial u} + \frac{\partial w}{\partial z}\frac{\partial z}{\partial u} + \frac{\partial w}{\partial t}\frac{\partial t}{\partial u}$$
$$\frac{\partial w}{\partial v} = \frac{\partial w}{\partial x}\frac{\partial x}{\partial v} + \frac{\partial w}{\partial y}\frac{\partial y}{\partial v} + \frac{\partial w}{\partial z}\frac{\partial z}{\partial v} + \frac{\partial w}{\partial t}\frac{\partial t}{\partial v}$$
The directional derivative of $f$ at $(x_0, y_0)$ in the direction of a unit vector $\mathbf{u} = \langle a, b\rangle$ is
$$D_{\mathbf{u}}f(x_0, y_0) = \lim_{h\to 0} \frac{f(x_0 + ha,\, y_0 + hb) - f(x_0, y_0)}{h} \quad (22)$$
If $\mathbf{u} = \mathbf{i}$ then $D_{\mathbf{i}}f = f_x$, and if $\mathbf{u} = \mathbf{j}$ then $D_{\mathbf{j}}f = f_y$. Therefore the partial derivatives of $f$ with respect to $x$ and $y$ are special cases of the directional derivative. An easier way to represent the directional derivative in the direction of any unit vector $\mathbf{u} = \langle a, b\rangle$ is
$$D_{\mathbf{u}}f(x, y) = f_x(x, y)\,a + f_y(x, y)\,b, \quad \text{or more simply} \quad D_{\mathbf{u}}f(x, y) = \langle f_x(x, y), f_y(x, y)\rangle \cdot \mathbf{u}$$
The term $\langle f_x(x, y), f_y(x, y)\rangle$ is used commonly and is called the gradient vector, denoted by $\nabla f(x, y)$.
The gradient vector represents the direction of fastest increase of $f$. It is easy to see that
$$D_{\mathbf{u}}f(x, y) = \nabla f(x, y) \cdot \mathbf{u} \quad (23)$$
The directional derivative and gradient can be applied in three dimensions as well:
$$\nabla f = \frac{\partial f}{\partial x}\mathbf{i} + \frac{\partial f}{\partial y}\mathbf{j} + \frac{\partial f}{\partial z}\mathbf{k} \quad \text{and} \quad D_{\mathbf{u}}f(x, y, z) = \nabla f(x, y, z) \cdot \mathbf{u}$$
Just as with single-variable functions, the maxima and minima of multivariable functions can also be found.
Theorem: If $f$ has a local maximum or minimum at $(a, b)$ and the first-order partial derivatives exist there, then $f_x(a, b) = 0$ and $f_y(a, b) = 0$. The points at which $f_x(a, b) = 0$ and $f_y(a, b) = 0$ are called critical points.
Second Derivatives Test: Suppose $(a, b)$ is a critical point. Let
$$D = \begin{vmatrix} f_{xx} & f_{xy} \\ f_{yx} & f_{yy} \end{vmatrix} = f_{xx}f_{yy} - (f_{xy})^2$$
(a) If $D > 0$ and $f_{xx}(a, b) > 0$, then $f(a, b)$ is a local minimum.
(b) If $D > 0$ and $f_{xx}(a, b) < 0$, then $f(a, b)$ is a local maximum.
(c) If $D < 0$, then $f(a, b)$ is neither a local minimum nor maximum but a saddle point.
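As a worked sketch of the test (the function $f(x, y) = x^4 + y^4 - 4xy + 1$ is my own choice of example; its critical points are $(0,0)$, $(1,1)$, and $(-1,-1)$), the classification can be computed with hand-coded second partials:

```python
# f(x, y) = x^4 + y^4 - 4xy + 1; second partials computed by hand.
def fxx(x, y): return 12 * x ** 2
def fyy(x, y): return 12 * y ** 2
def fxy(x, y): return -4.0

def classify(x, y):
    """Second derivatives test: D = fxx*fyy - fxy^2 at a critical point."""
    D = fxx(x, y) * fyy(x, y) - fxy(x, y) ** 2
    if D > 0:
        return "local minimum" if fxx(x, y) > 0 else "local maximum"
    if D < 0:
        return "saddle point"
    return "inconclusive"

results = {p: classify(*p) for p in [(0.0, 0.0), (1.0, 1.0), (-1.0, -1.0)]}
```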
8. Multiple Integrals
The concept of an integral can be applied to multiple dimensions. Double Integrals are integralsof two-variable functions that represent the volume under this curve. Riemann sums can be used to
estimate these integrals by adding up rectangular prisms instead of rectangles.
Figure 4. Riemann Sum of a 3-Dimensional function
Double integrals can be solved one integral at a time as an iterated integraldc
ba
fpx, yqdxdydc
ba
fpx, yqdx
dy
Theorem: If the rectangle R px, yq|a x b, c y d, thenR
fpx, yq dA dc
ba
fpx, yq dxdyba
dc
fpx, yq dy dx
Theorem:
$$\iint_R g(x)h(y)\,dA = \left(\int_a^b g(x)\,dx\right)\left(\int_c^d h(y)\,dy\right), \quad \text{where } R = [a, b] \times [c, d] \text{ is a rectangle}$$
Although these calculations may seem simple, the regions being integrated over are not always rectangles. A plane region $D$ is said to be Type I if it lies between the graphs of two continuous functions of $x$. Similarly, it is said to be Type II if it lies between two continuous functions of $y$.
Figure 5. Type I and Type II Regions
If $f$ is continuous on a Type I region $D$ such that $D = \{(x, y)\,|\,a \le x \le b,\; g_1(x) \le y \le g_2(x)\}$, then
$$\iint_D f(x, y)\,dA = \int_a^b \int_{g_1(x)}^{g_2(x)} f(x, y)\,dy\,dx \quad (24)$$
If $f$ is continuous on a Type II region $D$ such that $D = \{(x, y)\,|\,c \le y \le d,\; h_1(y) \le x \le h_2(y)\}$, then
$$\iint_D f(x, y)\,dA = \int_c^d \int_{h_1(y)}^{h_2(y)} f(x, y)\,dx\,dy$$
Example: Evaluate $\iint_D (x + 2y)\,dA$, where $D$ is the region bounded by the parabolas $y = 2x^2$ and $y = 1 + x^2$.
From the graph, the region $D$ can be described as
$$D = \{(x, y)\,|\,{-1} \le x \le 1,\; 2x^2 \le y \le 1 + x^2\}$$
Figure 6. $y = 2x^2$ and $y = 1 + x^2$
Therefore
$$\iint_D (x + 2y)\,dA = \int_{-1}^{1} \int_{2x^2}^{1+x^2} (x + 2y)\,dy\,dx = \int_{-1}^{1} \Big[xy + y^2\Big]_{y=2x^2}^{y=1+x^2}\,dx = \frac{32}{15}$$
Therefore the volume formed by $z = x + 2y$ over region $D$ is $\frac{32}{15}$.
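The value $\frac{32}{15}$ can be double-checked with a brute-force Riemann sum over the Type I region. A minimal sketch; the grid resolution is arbitrary:

```python
# Midpoint Riemann sum for the double integral of (x + 2y) over
# D = {(x, y) : -1 <= x <= 1, 2x^2 <= y <= 1 + x^2}.
def double_integral(n=400):
    total = 0.0
    hx = 2.0 / n
    for i in range(n):
        x = -1.0 + (i + 0.5) * hx
        y_lo, y_hi = 2 * x * x, 1 + x * x   # Type I bounds depend on x
        hy = (y_hi - y_lo) / n
        for j in range(n):
            y = y_lo + (j + 0.5) * hy
            total += (x + 2 * y) * hx * hy
    return total

approx = double_integral()  # exact value is 32/15 = 2.1333...
```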
Triple integrals are used similarly to double integrals:
$$\iiint_E f(x, y, z)\,dV = \iint_D \left[\int_{u_1(x, y)}^{u_2(x, y)} f(x, y, z)\,dz\right] dA$$
$$A(D) = \iint_D dA \quad \text{and} \quad V(E) = \iiint_E dV$$
Multiple integrals can also be applied to regions defined by polar, cylindrical, and spherical coordinates.
If $f$ is continuous on a polar region of the form $D = \{(r, \theta)\,|\,\alpha \le \theta \le \beta,\; h_1(\theta) \le r \le h_2(\theta)\}$, then
$$\iint_D f(x, y)\,dA = \int_\alpha^\beta \int_{h_1(\theta)}^{h_2(\theta)} f(r\cos\theta,\, r\sin\theta)\,r\,dr\,d\theta \quad (25)$$
Figure 7. $D = \{(r, \theta)\,|\,\alpha \le \theta \le \beta,\; h_1(\theta) \le r \le h_2(\theta)\}$
Example: Use a double integral to find the area enclosed by one loop of the four-leaved rose $r = \cos 2\theta$.
Figure 8. $r = \cos 2\theta$
$$A(D) = \iint_D dA = \int_{-\pi/4}^{\pi/4} \int_0^{\cos 2\theta} r\,dr\,d\theta = \int_{-\pi/4}^{\pi/4} \Big[\tfrac{1}{2}r^2\Big]_0^{\cos 2\theta}\,d\theta = \frac{\pi}{8}$$
The formula for triple integration in cylindrical coordinates:
$$\iiint_E f(x, y, z)\,dV = \int_\alpha^\beta \int_{h_1(\theta)}^{h_2(\theta)} \int_{u_1(r\cos\theta,\, r\sin\theta)}^{u_2(r\cos\theta,\, r\sin\theta)} f(r\cos\theta,\, r\sin\theta,\, z)\,r\,dz\,dr\,d\theta \quad (26)$$
Figure 9. Triple Integral in Cylindrical Coordinates
Example: Evaluate
$$\int_{-2}^{2} \int_{-\sqrt{4-x^2}}^{\sqrt{4-x^2}} \int_{\sqrt{x^2+y^2}}^{2} (x^2 + y^2)\,dz\,dy\,dx$$
This iterated integral is a triple integral over the solid region
$$E = \{(x, y, z)\,|\,{-2} \le x \le 2,\; -\sqrt{4 - x^2} \le y \le \sqrt{4 - x^2},\; \sqrt{x^2 + y^2} \le z \le 2\}$$
Using cylindrical coordinates and the fact that $r = \sqrt{x^2 + y^2}$, the solid region can be represented as
$$E = \{(r, \theta, z)\,|\,0 \le \theta \le 2\pi,\; 0 \le r \le 2,\; r \le z \le 2\}$$
Therefore
$$\int_{-2}^{2} \int_{-\sqrt{4-x^2}}^{\sqrt{4-x^2}} \int_{\sqrt{x^2+y^2}}^{2} (x^2 + y^2)\,dz\,dy\,dx = \int_0^{2\pi} \int_0^2 \int_r^2 r^2 \cdot r\,dz\,dr\,d\theta = \frac{16\pi}{5}$$
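After integrating over $z$ and $\theta$ analytically, the example reduces to $2\pi\int_0^2 r^3(2 - r)\,dr$, which a one-dimensional midpoint sum confirms equals $\frac{16\pi}{5}$. A minimal sketch:

```python
import math

# In cylindrical coordinates, integrating r^3 over r <= z <= 2 gives
# r^3 * (2 - r); the theta integral contributes a factor of 2*pi.
def reduced_integral(n=100_000):
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * h  # midpoint rule on [0, 2]
        total += r ** 3 * (2 - r) * h
    return 2 * math.pi * total

value = reduced_integral()  # exact value is 16*pi/5
```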
The formula for triple integration in spherical coordinates:
$$\iiint_E f(x, y, z)\,dV = \int_c^d \int_\alpha^\beta \int_a^b f(\rho\sin\phi\cos\theta,\; \rho\sin\phi\sin\theta,\; \rho\cos\phi)\,\rho^2\sin\phi\,d\rho\,d\theta\,d\phi \quad (27)$$
where $E$ is given by $E = \{(\rho, \theta, \phi)\,|\,a \le \rho \le b,\; \alpha \le \theta \le \beta,\; c \le \phi \le d\}$.
Figure 11. Spherical Volume Element: $dV = \rho^2\sin\phi\,d\rho\,d\theta\,d\phi$
When using spherical coordinates for triple integrals, use the fact that $\rho = \sqrt{x^2 + y^2 + z^2}$ to simplify the integral.
To determine limits for triple integrals in spherical or cylindrical coordinates, use a process similar to the one below:
Figure 12. Process for Triple Integrals in Spherical Coordinates
9. Change of Variables in Multiple Integrals
Consider the displacement vector $\mathbf{r}(x, y)$ and the new set of variables $u$ and $v$, where $x = x(u, v)$ and $y = y(u, v)$:
$$\iint_R f(x, y)\,dx\,dy = \iint_S f(x(u, v),\, y(u, v)) \left|\frac{\partial(x, y)}{\partial(u, v)}\right| du\,dv \quad (28)$$
where
$$\frac{\partial(x, y)}{\partial(u, v)} = \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} \\[4pt] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} \end{vmatrix}$$
and is called the Jacobian.
Proof:
$$dA = \left|\frac{\partial\mathbf{r}}{\partial u}\,du \times \frac{\partial\mathbf{r}}{\partial v}\,dv\right| = \left|\frac{\partial\mathbf{r}}{\partial u} \times \frac{\partial\mathbf{r}}{\partial v}\right| du\,dv$$
In Cartesian coordinates $dA = dx\,dy$. Therefore
$$dx\,dy = \left|\frac{\partial\mathbf{r}}{\partial u} \times \frac{\partial\mathbf{r}}{\partial v}\right| du\,dv, \quad \text{where} \quad \left|\frac{\partial(x, y)}{\partial(u, v)}\right| = \left|\frac{\partial\mathbf{r}}{\partial u} \times \frac{\partial\mathbf{r}}{\partial v}\right|$$
Similarly, in three dimensions:
$$\iiint_R f(x, y, z)\,dx\,dy\,dz = \iiint_S f(x(u, v, w),\, y(u, v, w),\, z(u, v, w)) \left|\frac{\partial(x, y, z)}{\partial(u, v, w)}\right| du\,dv\,dw \quad (29)$$
where
$$\frac{\partial(x, y, z)}{\partial(u, v, w)} = \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} & \dfrac{\partial x}{\partial w} \\[4pt] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} & \dfrac{\partial y}{\partial w} \\[4pt] \dfrac{\partial z}{\partial u} & \dfrac{\partial z}{\partial v} & \dfrac{\partial z}{\partial w} \end{vmatrix}$$
Example: Use formula 29 to derive the formula for triple integration in spherical coordinates.
$$x = \rho\sin\phi\cos\theta \qquad y = \rho\sin\phi\sin\theta \qquad z = \rho\cos\phi$$
$$\frac{\partial(x, y, z)}{\partial(\rho, \theta, \phi)} = \begin{vmatrix} \sin\phi\cos\theta & -\rho\sin\phi\sin\theta & \rho\cos\phi\cos\theta \\ \sin\phi\sin\theta & \rho\sin\phi\cos\theta & \rho\cos\phi\sin\theta \\ \cos\phi & 0 & -\rho\sin\phi \end{vmatrix} = -\rho^2\sin\phi$$
Taking the absolute value gives the spherical volume element $\rho^2\sin\phi$.
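The $\rho^2\sin\phi$ result can be verified numerically by building the Jacobian matrix of the spherical map with central finite differences at a sample point. A minimal sketch; the point and step size are arbitrary:

```python
import math

def spherical(rho, theta, phi):
    return (rho * math.sin(phi) * math.cos(theta),
            rho * math.sin(phi) * math.sin(theta),
            rho * math.cos(phi))

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def jacobian_det(f, p, h=1e-6):
    """Central-difference Jacobian determinant of f: R^3 -> R^3 at point p."""
    cols = []
    for k in range(3):
        lo, hi = list(p), list(p)
        lo[k] -= h
        hi[k] += h
        flo, fhi = f(*lo), f(*hi)
        cols.append([(fhi[i] - flo[i]) / (2 * h) for i in range(3)])
    # cols[k][i] = d f_i / d p_k; transpose so rows index components.
    m = [[cols[k][i] for k in range(3)] for i in range(3)]
    return det3(m)

rho, theta, phi = 2.0, 0.7, 1.1
J = jacobian_det(spherical, (rho, theta, phi))
# Signed value is -rho^2 sin(phi); |J| is the volume element rho^2 sin(phi).
```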
10. Vector Calculus
A vector field on two-dimensional space is a function $\mathbf{F}$ that assigns to each point $(x, y)$ a two-dimensional vector $\mathbf{F}(x, y)$.
$\mathbf{F}$ can be written in terms of its component functions $P$ and $Q$ as follows:
$$\mathbf{F}(x, y) = P(x, y)\,\mathbf{i} + Q(x, y)\,\mathbf{j} \quad (30)$$
Similarly, for three-dimensional vector fields:
$$\mathbf{F}(x, y, z) = P(x, y, z)\,\mathbf{i} + Q(x, y, z)\,\mathbf{j} + R(x, y, z)\,\mathbf{k} \quad (31)$$
Figure 13. Vector Fields: (a) a 2-dimensional vector field; (b) a 3-dimensional vector field
It is important to note that the gradient vector is also a vector field, called the gradient vector field:
$$\nabla f(x, y) = f_x(x, y)\,\mathbf{i} + f_y(x, y)\,\mathbf{j}$$
If $f$ is defined on a smooth curve $C$ in two dimensions, then the line integral of $f$ along $C$ is
$$\int_C f(x, y)\,ds, \quad \text{where } ds = \sqrt{\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2}\,dt \quad (32)$$
A line integral can be interpreted as the area under the field carved out by a particular curve.
Line integrals can also be taken with respect to $x$ and $y$:
$$\int_C f(x, y)\,dx = \int_a^b f(x(t), y(t))\,x'(t)\,dt \quad (33)$$
$$\int_C f(x, y)\,dy = \int_a^b f(x(t), y(t))\,y'(t)\,dt \quad (34)$$
Suppose that the curve $C$ is given by a vector function $\mathbf{r}(t)$, $a \le t \le b$. Then:
$$\int_C \mathbf{F} \cdot d\mathbf{r} = \int_a^b \mathbf{F}(\mathbf{r}(t)) \cdot \mathbf{r}'(t)\,dt = \int_C \mathbf{F} \cdot \mathbf{T}\,ds \quad (35)$$
Line integrals are treated similarly in three-dimensional space.
Figure 14. Line Integral
$$\int_C \mathbf{F} \cdot d\mathbf{r} = \int_C P\,dx + Q\,dy + R\,dz$$
Theorem: Let $C$ be a smooth curve given by the vector function $\mathbf{r}(t)$, $a \le t \le b$. Then
$$\int_C \nabla f \cdot d\mathbf{r} = f(\mathbf{r}(b)) - f(\mathbf{r}(a)) \quad (36)$$
Proof:
$$\int_C \nabla f \cdot d\mathbf{r} = \int_a^b \nabla f(\mathbf{r}(t)) \cdot \mathbf{r}'(t)\,dt = \int_a^b \left(\frac{\partial f}{\partial x}\frac{dx}{dt} + \frac{\partial f}{\partial y}\frac{dy}{dt} + \frac{\partial f}{\partial z}\frac{dz}{dt}\right) dt$$
$$= \int_a^b \frac{d}{dt}\,f(\mathbf{r}(t))\,dt \;\;\text{(Chain Rule)} \;=\; f(\mathbf{r}(b)) - f(\mathbf{r}(a)) \;\;\text{(Fundamental Theorem of Calculus)}$$
Theorem: $\int_C \mathbf{F} \cdot d\mathbf{r}$ is independent of path in $D$ if and only if $\int_C \mathbf{F} \cdot d\mathbf{r} = 0$ for every closed path $C$ in $D$.
Theorem: If $\int_C \mathbf{F} \cdot d\mathbf{r}$ is independent of path in $D$, then $\mathbf{F}$ is a conservative vector field on $D$; that is, there exists a function $f$ such that $\nabla f = \mathbf{F}$.
Theorem: If $\mathbf{F}(x, y) = P(x, y)\,\mathbf{i} + Q(x, y)\,\mathbf{j}$ is a conservative vector field, then:
$$\frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x} \quad (37)$$
Proof:
$$\mathbf{F} = \nabla f \implies P\,\mathbf{i} + Q\,\mathbf{j} = \frac{\partial f}{\partial x}\mathbf{i} + \frac{\partial f}{\partial y}\mathbf{j}$$
Therefore $P = \dfrac{\partial f}{\partial x}$ and $Q = \dfrac{\partial f}{\partial y}$, so
$$\frac{\partial P}{\partial y} = \frac{\partial^2 f}{\partial y\,\partial x} = \frac{\partial^2 f}{\partial x\,\partial y} = \frac{\partial Q}{\partial x} \quad \text{(Clairaut's Theorem)}$$
Green's Theorem: Let $C$ be a positively oriented, simple closed curve in the plane and let $D$ be the region bounded by $C$. Then
$$\oint_C P\,dx + Q\,dy = \iint_D \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dA \quad (38)$$
If $\mathbf{F} = P\,\mathbf{i} + Q\,\mathbf{j} + R\,\mathbf{k}$, then the curl of $\mathbf{F}$ is
$$\nabla \times \mathbf{F} = \text{curl}\,\mathbf{F} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ P & Q & R \end{vmatrix}$$
Theorem: If $\mathbf{F}$ is a vector field defined on all of $\mathbb{R}^3$ whose components have continuous partial derivatives and $\text{curl}\,\mathbf{F} = \mathbf{0}$, then $\mathbf{F}$ is a conservative vector field.
If $\mathbf{F} = P\,\mathbf{i} + Q\,\mathbf{j} + R\,\mathbf{k}$, then the divergence of $\mathbf{F}$ is
$$\text{div}\,\mathbf{F} = \nabla \cdot \mathbf{F} = \frac{\partial P}{\partial x} + \frac{\partial Q}{\partial y} + \frac{\partial R}{\partial z}$$
Gradient: acts on scalars, gives vectors.
Divergence: acts on vectors, gives scalars.
Curl: acts on vectors, gives vectors.
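Both operators can be approximated with central finite differences. A minimal sketch (the test field $\mathbf{F} = \langle -y, x, z^2\rangle$ is my own choice; its exact curl is $\langle 0, 0, 2\rangle$ and its divergence is $2z$):

```python
def _partial(F, p, i, j, h=1e-5):
    """Central-difference partial of component i of F with respect to coordinate j."""
    hi, lo = list(p), list(p)
    hi[j] += h
    lo[j] -= h
    return (F(*hi)[i] - F(*lo)[i]) / (2 * h)

def curl(F, p):
    d = lambda i, j: _partial(F, p, i, j)
    return (d(2, 1) - d(1, 2),   # dR/dy - dQ/dz
            d(0, 2) - d(2, 0),   # dP/dz - dR/dx
            d(1, 0) - d(0, 1))   # dQ/dx - dP/dy

def div(F, p):
    return sum(_partial(F, p, i, i) for i in range(3))

F = lambda x, y, z: (-y, x, z * z)
p = (1.0, 2.0, 3.0)
c = curl(F, p)   # exact: (0, 0, 2)
dv = div(F, p)   # exact: 2*z = 6
```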
Similar to line integrals, surface integrals can be thought of as the volume of the space carved out by a surface over a region:
$$\iint_S f(x, y, z)\,dS = \iint_D f(x, y, g(x, y)) \sqrt{\left(\frac{\partial z}{\partial x}\right)^2 + \left(\frac{\partial z}{\partial y}\right)^2 + 1}\;dA = \iint_D f(\mathbf{r}(u, v))\,|\mathbf{r}_u \times \mathbf{r}_v|\,dA \quad (39)$$
The surface integral of a continuous vector field $\mathbf{F}$ over $S$ is called the flux of $\mathbf{F}$ over $S$:
$$\iint_S \mathbf{F} \cdot d\mathbf{S} = \iint_D \mathbf{F} \cdot (\mathbf{r}_u \times \mathbf{r}_v)\,dA$$
Stokes' Theorem: Let $S$ be a positively oriented, piecewise-smooth surface that is bounded by a simple closed curve $C$ with positive orientation. Let $\mathbf{F}$ be a vector field whose components have continuous partial derivatives. Then
$$\oint_C \mathbf{F} \cdot d\mathbf{r} = \iint_S \text{curl}\,\mathbf{F} \cdot d\mathbf{S} \quad (40)$$
Proof (sketch): Around an infinitesimal loop,
$$\mathbf{F} \cdot d\mathbf{r} = P\,dx + Q\,dy + R\,dz = \frac{\partial P}{\partial y}\,\partial y\,\partial x + \frac{\partial P}{\partial z}\,\partial z\,\partial x + \frac{\partial Q}{\partial x}\,\partial x\,\partial y + \frac{\partial Q}{\partial z}\,\partial z\,\partial y + \frac{\partial R}{\partial x}\,\partial x\,\partial z + \frac{\partial R}{\partial y}\,\partial y\,\partial z$$
Writing the oriented area elements as $\partial A_{\mathbf{i}}, \partial A_{\mathbf{j}}, \partial A_{\mathbf{k}}$,
$$= \frac{\partial P}{\partial y}(-\partial A_{\mathbf{k}}) + \frac{\partial P}{\partial z}(\partial A_{\mathbf{j}}) + \frac{\partial Q}{\partial x}(\partial A_{\mathbf{k}}) + \frac{\partial Q}{\partial z}(-\partial A_{\mathbf{i}}) + \frac{\partial R}{\partial x}(-\partial A_{\mathbf{j}}) + \frac{\partial R}{\partial y}(\partial A_{\mathbf{i}})$$
$$= \left(\frac{\partial R}{\partial y} - \frac{\partial Q}{\partial z}\right)\partial A_{\mathbf{i}} + \left(\frac{\partial P}{\partial z} - \frac{\partial R}{\partial x}\right)\partial A_{\mathbf{j}} + \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right)\partial A_{\mathbf{k}} = \text{curl}\,\mathbf{F} \cdot d\mathbf{A}$$
Example: Use Stokes' Theorem to evaluate $\oint_C \mathbf{F} \cdot d\mathbf{r}$, where $\mathbf{F}(x, y, z) = y^2\,\mathbf{i} + x\,\mathbf{j} + z^2\,\mathbf{k}$ and $C$ is the curve of intersection of the plane $y + z = 4$ and the cylinder $x^2 + y^2 = 1$.
Figure 15. Intersection of $y + z = 4$ and $x^2 + y^2 = 1$
$$\text{curl}\,\mathbf{F} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ y^2 & x & z^2 \end{vmatrix} = (1 - 2y)\,\mathbf{k}$$
$$\oint_C \mathbf{F} \cdot d\mathbf{r} = \iint_S \text{curl}\,\mathbf{F} \cdot d\mathbf{S} = \iint_D (1 - 2y)\,dA = \int_0^{2\pi} \int_0^1 (1 - 2r\sin\theta)\,r\,dr\,d\theta = \pi$$
Divergence Theorem: Let $E$ be a simple solid region and let $S$ be the boundary surface of $E$, given with positive (outward) orientation. Let $\mathbf{F}$ be a vector field whose component functions have continuous partial derivatives. Then
$$\iint_S \mathbf{F} \cdot d\mathbf{S} = \iiint_E \text{div}\,\mathbf{F}\,dV \quad (41)$$
Proof (sketch): Over an infinitesimal box, splitting the flux into components,
$$\mathbf{F} \cdot d\mathbf{S} = F_x\,dS_x + F_y\,dS_y + F_z\,dS_z$$
Opposite faces contribute the change in $\mathbf{F}$ across the box, so
$$= \frac{\partial F_x}{\partial x}\,\partial x\,\partial y\,\partial z + \frac{\partial F_y}{\partial y}\,\partial y\,\partial z\,\partial x + \frac{\partial F_z}{\partial z}\,\partial z\,\partial x\,\partial y = \left(\frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z}\right)\partial V = (\nabla \cdot \mathbf{F})\,dV$$
Summing over the boxes that fill $E$ gives equation 41.
Example: Find the flux of the vector field $\mathbf{F} = z\,\mathbf{i} + y\,\mathbf{j} + x\,\mathbf{k}$ through the spherical surface of radius $a$ centered at the origin.
$$\text{div}\,\mathbf{F} = \frac{\partial z}{\partial x} + \frac{\partial y}{\partial y} + \frac{\partial x}{\partial z} = 0 + 1 + 0 = 1$$
$$\iint_S \mathbf{F} \cdot d\mathbf{S} = \iiint_B \text{div}\,\mathbf{F}\,dV = \iiint_B 1\,dV = V(B) = \frac{4}{3}\pi a^3$$
11. Second-Order Differential Equations
A second-order linear differential equation has the form
$$P(x)\frac{d^2y}{dx^2} + Q(x)\frac{dy}{dx} + R(x)y = G(x) \quad (42)$$
The cases where $G(x) = 0$ are called homogeneous linear equations, and the cases where $G(x) \neq 0$ are called non-homogeneous linear equations.
Theorem: If $y_1(x)$ and $y_2(x)$ are both solutions of the linear homogeneous equation 42 and $c_1$ and $c_2$ are any constants, then the function
$$y(x) = c_1y_1(x) + c_2y_2(x)$$
is also a solution of equation 42.
Proof: Since $y_1$ and $y_2$ are solutions,
$$P(x)y_1'' + Q(x)y_1' + R(x)y_1 = 0 \quad \text{and} \quad P(x)y_2'' + Q(x)y_2' + R(x)y_2 = 0$$
Therefore
$$P(x)y'' + Q(x)y' + R(x)y = P(x)(c_1y_1 + c_2y_2)'' + Q(x)(c_1y_1 + c_2y_2)' + R(x)(c_1y_1 + c_2y_2)$$
$$= c_1\big[P(x)y_1'' + Q(x)y_1' + R(x)y_1\big] + c_2\big[P(x)y_2'' + Q(x)y_2' + R(x)y_2\big] = c_1(0) + c_2(0) = 0$$
The term $y(x) = c_1y_1(x) + c_2y_2(x)$ is also called the general solution. If the second-order linear equation has constant coefficients, $ay'' + by' + cy = 0$, then the auxiliary equation (or characteristic equation) is
$$ar^2 + br + c = 0$$
The roots of this equation, $r_1$ and $r_2$, are used to find the general solution:
Case 1: $b^2 - 4ac > 0$ (distinct real roots): $y = c_1e^{r_1x} + c_2e^{r_2x}$
Case 2: $b^2 - 4ac = 0$ (repeated root $r$): $y = c_1e^{rx} + c_2xe^{rx}$
Case 3: $b^2 - 4ac < 0$ (complex roots $\alpha \pm i\beta$): $y = e^{\alpha x}(c_1\cos\beta x + c_2\sin\beta x)$
On the other hand, the general solution of a nonhomogeneous differential equation can be written as
$$y(x) = y_p(x) + y_c(x)$$
where $y_p$ is a particular solution and $y_c$ is the general solution of the homogeneous form of the equation.
Procedure to find the particular solution: substitute a trial $y_p(x)$ of the same form as $G$ (for example, a polynomial of the same degree) into the differential equation and determine the coefficients.
Example: Solve $y'' + 4y = e^{3x}$.
$$y_c = c_1\cos 2x + c_2\sin 2x \quad \text{(using the method for the homogeneous solution)}$$
For the particular solution we try $y_p = Ae^{3x}$. Then $y_p' = 3Ae^{3x}$ and $y_p'' = 9Ae^{3x}$.
Therefore $9Ae^{3x} + 4Ae^{3x} = e^{3x}$, so $13Ae^{3x} = e^{3x}$ and $A = \frac{1}{13}$:
$$y_p(x) = \tfrac{1}{13}e^{3x}$$
The general solution is thus $y = y_c + y_p$:
$$y(x) = \tfrac{1}{13}e^{3x} + c_1\cos 2x + c_2\sin 2x$$
If initial conditions were provided, the constants $c_1$ and $c_2$ could be found.
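A numeric residual check of this general solution: for any choice of constants, plugging $y$ into $y'' + 4y - e^{3x}$ should give approximately zero. A minimal sketch (the sample constants $c_1 = 1$, $c_2 = -2$ and test points are arbitrary):

```python
import math

# Candidate general solution of y'' + 4y = e^(3x):
c1, c2 = 1.0, -2.0
def y(x):
    return math.exp(3 * x) / 13 + c1 * math.cos(2 * x) + c2 * math.sin(2 * x)

def residual(x, h=1e-4):
    """Central-difference estimate of y'' + 4y - e^(3x); ~0 for a true solution."""
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)
    return ypp + 4 * y(x) - math.exp(3 * x)

max_res = max(abs(residual(0.1 * k)) for k in range(10))
```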
Part 2. Linear Algebra
12. Linear Combinations
A linear combination is when a vector is equal to the sum of scalar multiples of other vectors. In mathematics, this is:
$$\mathbf{v} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3 + \ldots$$
The scalars $c_1$, $c_2$, $c_3$, etc. are called the coefficients of the linear combination. Using linear combinations, one can create a new coordinate grid expressing all points as linear combinations of two initial vectors.
13. Vector Inequalities
The Triangle Inequality states the following: for all vectors $\mathbf{u}$ and $\mathbf{v}$ in $\mathbb{R}^n$,
$$\|\mathbf{u} + \mathbf{v}\| \le \|\mathbf{u}\| + \|\mathbf{v}\|$$
Proof: This inequality will be proven by showing that the square of the left-hand side is less than or equal to the square of the right-hand side.
$$\|\mathbf{u} + \mathbf{v}\|^2 = (\mathbf{u} + \mathbf{v}) \cdot (\mathbf{u} + \mathbf{v}) = \mathbf{u}\cdot\mathbf{u} + 2(\mathbf{u}\cdot\mathbf{v}) + \mathbf{v}\cdot\mathbf{v}$$
$$\le \|\mathbf{u}\|^2 + 2|\mathbf{u}\cdot\mathbf{v}| + \|\mathbf{v}\|^2 \quad \text{(absolute value is non-negative)}$$
$$\le \|\mathbf{u}\|^2 + 2\|\mathbf{u}\|\|\mathbf{v}\| + \|\mathbf{v}\|^2 \quad \text{(Cauchy-Schwarz Inequality)} \;=\; (\|\mathbf{u}\| + \|\mathbf{v}\|)^2$$
Since the squares of both sides satisfy the inequality, and both sides are non-negative, the inequality must be true.
The Cauchy-Schwarz Inequality states the following: for all vectors $\mathbf{u}$ and $\mathbf{v}$ in $\mathbb{R}^n$,
$$|\mathbf{u}\cdot\mathbf{v}| \le \|\mathbf{u}\|\|\mathbf{v}\|$$
Proof: $|\mathbf{u}\cdot\mathbf{v}| = \|\mathbf{u}\|\|\mathbf{v}\||\cos\theta|$. Since $|\cos\theta| \le 1$, the inequality is true, as $\|\mathbf{u}\|\|\mathbf{v}\||\cos\theta| \le \|\mathbf{u}\|\|\mathbf{v}\|$.
14. Lines and Planes
In $\mathbb{R}^n$, the normal form of the equation for a line $\ell$ is:
$$\mathbf{n} \cdot (\mathbf{x} - \mathbf{p}) = 0 \quad \text{or} \quad \mathbf{n} \cdot \mathbf{x} = \mathbf{n} \cdot \mathbf{p}$$
where $\mathbf{p}$ is a specific point on the line and $\mathbf{n}$, which is not $\mathbf{0}$, is a normal vector for the line $\ell$.
The general form for the equation of a line is:
$$ax + by = c$$
where a normal vector $\mathbf{n}$ for the line is equal to $\begin{bmatrix} a \\ b \end{bmatrix}$.
The vector form of the equation of a line in $\mathbb{R}^2$ or $\mathbb{R}^3$ is:
$$\mathbf{x} = \mathbf{p} + t\mathbf{d}$$
where $\mathbf{p}$ is a specific point on the line and $\mathbf{d}$, which is not $\mathbf{0}$, is a direction vector for the line $\ell$. By taking each component of the vectors, the parametric equations of a line are obtained.
The normal form of the equation of a plane $\mathcal{P}$ in $\mathbb{R}^3$ is:
$$\mathbf{n} \cdot (\mathbf{x} - \mathbf{p}) = 0 \quad \text{or} \quad \mathbf{n} \cdot \mathbf{x} = \mathbf{n} \cdot \mathbf{p}$$
where $\mathbf{p}$ is a specific point on $\mathcal{P}$ and $\mathbf{n}$, which is not $\mathbf{0}$, is a normal vector for $\mathcal{P}$.
The vector form of the equation of a plane $\mathcal{P}$ in $\mathbb{R}^3$ is:
$$\mathbf{x} = \mathbf{p} + s\mathbf{u} + t\mathbf{v}$$
where $\mathbf{p}$ is a point on $\mathcal{P}$ and $\mathbf{u}$ and $\mathbf{v}$ are direction vectors for the plane $\mathcal{P}$ (both nonzero and parallel to $\mathcal{P}$, but not parallel to each other).
The equations that result from each component of the vectors are called the parametric equations of the plane.
15. Systems of Linear Equations and Matrices
A linear equation in $n$ variables is an equation in the following form:
$$a_1x_1 + a_2x_2 + a_3x_3 + \ldots + a_nx_n = b$$
where $a_1, \ldots, a_n$ are the coefficients and $b$ is the constant term. The coefficients and the constant term must be constants.
A solution of a linear equation is a vector whose components satisfy the equation (substituting each $x_i$ with the corresponding component).
Asystem of linear equations is a finite set of linear equations with the same variables. A
solutionof the system is a vector that is a solution of all linear equations in the system. Thesolution set is the set of all solutions for the system. Finding the solution set is called solvingthe system.
A system of linear equations with real coefficients has either:
(1) a unique solution (a consistent system),
(2) infinitely many solutions (a consistent system), or
(3) no solutions (an inconsistent system).
Two linear systems are equivalentif their solution sets are the same.
Linear systems are generally solved by utilizing Gaussian elimination. Take the linear system:
$$ax + by = m \qquad cx + dy = n$$
Solving this using Gaussian elimination involves creating an augmented matrix:
$$\left[\begin{array}{cc|c} a & b & m \\ c & d & n \end{array}\right]$$
and putting it in row echelon form. Using back substitution from there, all the variables can be solved for, and the solution vector $\begin{bmatrix} x \\ y \end{bmatrix}$ can be found.
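A minimal sketch of elimination plus back substitution for the $2 \times 2$ case (the coefficient values are my own example, and the code assumes a unique solution exists):

```python
# Solve  a*x + b*y = m
#        c*x + d*y = n   by Gaussian elimination on the augmented matrix.
def solve_2x2(a, b, m, c, d, n):
    # Eliminate x from row 2: R2 <- R2 - (c/a) * R1  (assumes a != 0).
    factor = c / a
    d2 = d - factor * b
    n2 = n - factor * m
    # Back substitution from the row echelon form.
    y = n2 / d2            # assumes d2 != 0 (unique solution)
    x = (m - b * y) / a
    return x, y

# 2x + 3y = 8 and x - y = -1 has solution x = 1, y = 2.
x, y = solve_2x2(2, 3, 8, 1, -1, -1)
```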
A matrix is in row echelon form if it follows these guidelines:
(1) Rows consisting of zeros only are at the bottom.
(2) The first nonzero element in each nonzero row (called the leading entry) is in a column to the right of the leading entry of the row above it.
This is an example of a matrix in row echelon form:
$$\begin{bmatrix} 2 & 4 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}$$
Elementary row operations are used to put matrices into row echelon form. The following areelementary row operations:
Interchange two rows
Multiply a row by a constant (nonzero)
Add a row (or any multiple of it) to another
The process of putting a matrix in row echelon form is called row reduction. Matrices are considered row equivalent if a series of elementary row operations can convert one matrix into the other; equivalently, two matrices are row equivalent if they share a row echelon form, since the row operations can be reversed to recover the original matrix.
The rank of a matrix is the number of nonzero rows it has in row echelon form.
The Rank Theorem: Let $A$ be the coefficient matrix of a system of linear equations in $n$ variables. If it is a consistent system (that is, it has at least one solution), then:
$$\text{number of free variables} = n - \text{rank}(A)$$
In other words, a unique solution requires as many independent equations (the rank) as variables.
To simplify back substitution, reduced row echelon form can be used.
A matrix is in reduced row echelon form if:
(1) It is in row echelon form.
(2) The leading entry in each nonzero row is a 1 (called a leading 1).
(3) Each column containing a leading 1 has zeros everywhere else.
Gauss-Jordan elimination is similar to Gaussian elimination, except instead of stopping at row echelon form, it proceeds to reduced row echelon form.
Homogeneous systems of linear equations are systems where the constant term is zero in each equation. The augmented matrix of such a system has the form $[A \mid \mathbf{0}]$.
Homogeneous systems have at least one solution (the trivial solution, $\mathbf{x} = \mathbf{0}$).
If a homogeneous system has fewer equations than it does variables, it must have infinitely many solutions.
16. Spanning Sets
Theorem: A system of linear equations with augmented matrix $[A \mid \mathbf{b}]$ is consistent if and only if $\mathbf{b}$ is a linear combination of the columns of $A$.
The span of a set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \ldots, \mathbf{v}_k\}$ is the set of all linear combinations of that set. If the span is equal to $\mathbb{R}^n$, the set is referred to as a spanning set for $\mathbb{R}^n$.
17. Linear Independence
A set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \ldots, \mathbf{v}_k\}$ is linearly dependent if scalars $c_1, \ldots, c_k$ exist such that
$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \ldots + c_k\mathbf{v}_k = \mathbf{0}$$
and at least one of the scalars is not 0. Additionally, a set of vectors is linearly dependent if and only if one of the vectors can be expressed as a linear combination of the others. If a set of vectors is not linearly dependent, they are said to be linearly independent.
Theorem: If a matrix $A$ is formed from a set of $m$ row vectors, the set of row vectors is linearly dependent if and only if $\text{rank}(A) < m$.
Therefore, any set of $m$ vectors in $\mathbb{R}^n$ is linearly dependent if $m > n$.
18. Matrices and Matrix Algebra
A matrix is a rectangular array of numbers. These numbers are called entries or elements. This is an example of a matrix:
$$\begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}$$
The size of a matrix is represented as the number of rows ($m$) $\times$ the number of columns ($n$). The above matrix has a size of $3 \times 3$.
A $1 \times m$ matrix is called a row matrix (and is also a row vector). An $n \times 1$ matrix is called a column matrix (and is also a column vector).
The diagonal entries of a matrix are those whose row and column indices are the same, for example $a_{11}, a_{22}, a_{33}, \ldots, a_{nn}$.
If the matrix has the same number of rows as it does columns, it is a square matrix. A square matrix whose non-diagonal entries are all zero is a diagonal matrix. A diagonal matrix with all diagonal entries the same is a scalar matrix. Lastly, if the scalar on the diagonal is 1, it is an identity matrix.
Two matrices are equal if their sizes and corresponding entries are the same.
Adding matrices is as simple as adding each pair of corresponding entries. Scalar multiplication is just as easy: multiply every entry in the matrix by the scalar.
Multiplying two matrices is more complex. If $C = AB$, where $A$ has size $m \times n$ and $B$ has size $n \times r$, then the size of $C$ is $m \times r$. Each element $c_{ij}$ in $C$ is equal to the dot product of row $i$ of $A$ and column $j$ of $B$. Another way to write this is:
$$c_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj}$$
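The entry formula translates directly into code. A minimal sketch in pure Python (real work would use a linear-algebra library):

```python
def matmul(A, B):
    """C = AB with c_ij = sum over k of a_ik * b_kj; A is m x n, B is n x r."""
    m, n, r = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(r)]
            for i in range(m)]

C = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])  # [[19, 22], [43, 50]]
```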
Matrices can be divided into submatrices by partitioning them into blocks.
Take, for example, the following matrix:
$$\begin{bmatrix} 1 & 0 & 0 & 2 & 1 \\ 0 & 1 & 0 & 1 & 3 \\ 0 & 0 & 1 & 4 & 0 \\ 0 & 0 & 0 & 1 & 7 \\ 0 & 0 & 0 & 7 & 2 \end{bmatrix}$$
It can be partitioned as
$$\begin{bmatrix} I & B \\ O & C \end{bmatrix}$$
where $I$ is the $3 \times 3$ identity matrix, $B$ is a $3 \times 2$ matrix, $O$ is the $2 \times 3$ zero matrix, and $C$ is a $2 \times 2$ matrix.
Just like with scalar numbers, the matrix A^k is equal to the matrix A multiplied by itself k times.
If A is a square matrix, and r and s are non-negative integers, then the following is true:
(1) A^r A^s = A^(r+s)
(2) (A^r)^s = A^(rs)
The transpose of a matrix, A^T, is obtained by interchanging the rows and columns of the matrix. Therefore, (A^T)_ij = A_ji.
A matrix is symmetric if its transpose is equal to itself.
Algebraic Properties of Matrix Addition and Scalar Multiplication:
If A, B, and C are matrices of the same size and c and d are scalars, then the following is true:
- A + B = B + A (Commutativity)
- (A + B) + C = A + (B + C) (Associativity)
- A + O = A
- A + (−A) = O
- c(A + B) = cA + cB (Distributivity)
- (c + d)A = cA + dA (Distributivity)
- c(dA) = (cd)A
- 1A = A
Matrices can form linear combinations in the same way vectors do. Similarly, the concept of linear independence also applies.
Lastly, just as we define the span of a set of vectors, the span of a set of matrices is the set of all linear combinations of the matrices.
Properties of Matrix Multiplication:
If A, B, and C are matrices of the appropriate sizes so that the operations can be performed, and k is a scalar, then the following is true:
- A(BC) = (AB)C (Associativity)
- A(B + C) = AB + AC (Left Distributivity)
- (A + B)C = AC + BC (Right Distributivity)
- k(AB) = (kA)B = A(kB)
- I_m A = A = A I_n (if A is m × n)
Transpose Properties:
If A and B are matrices of the appropriate sizes so that the operations can be performed, and k is a scalar, then the following is true:
- (A^T)^T = A
- (A + B)^T = A^T + B^T
- (kA)^T = k(A^T)
- (AB)^T = B^T A^T
- (A^r)^T = (A^T)^r for all non-negative integers r
If A is a square matrix, then A + A^T is symmetric. For any matrix A, both AA^T and A^T A are symmetric.
19. The Inverse of a Matrix
If A is a square, n × n matrix, its inverse A^-1, if it exists, is the unique n × n matrix satisfying:
AA^-1 = I = A^-1 A
If this matrix A^-1 exists, A is invertible.
If A is a matrix of the form

[ a b ]
[ c d ]

then if ad − bc ≠ 0, A is invertible and the following is true:

A^-1 = 1/(ad − bc) [ d −b ]
                   [ −c  a ]

- If A is an invertible matrix, then A^-1 is invertible and (A^-1)^-1 = A.
- If A is invertible and c is a nonzero scalar, then cA is invertible and (cA)^-1 = (1/c)A^-1.
- If A and B are invertible and have the same size, then AB is invertible and (AB)^-1 = B^-1 A^-1.
- If A is invertible, then so is A^T, and (A^T)^-1 = (A^-1)^T.
- If A is invertible, then A^n is invertible for all non-negative integers n, and (A^n)^-1 = (A^-1)^n.
If A is invertible, we can define A^-n as (A^-1)^n = (A^n)^-1. Therefore, all properties of matrix powers hold for negative powers, provided that the matrix is invertible.
Elementary matrices are those that can be obtained by performing a single elementary row operation on an identity matrix.
Performing a row operation on a matrix can then be expressed as left-multiplying that matrix by the corresponding elementary matrix, provided the elementary matrix is of the correct size.
All elementary matrices are invertible, and each inverse is itself an elementary matrix, corresponding to the same type of row operation.
If A is a square matrix, and a series of elementary row operations reduces it to I, the same series of row operations changes I into A^-1.
Using this, we can compute the inverse using Gauss-Jordan elimination. This is done by reducing [A | I] to [I | A^-1].
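The [A | I] → [I | A^-1] reduction can be carried out directly with NumPy; a minimal sketch (the 2 × 2 matrix is illustrative, and no pivoting safeguards are included):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
n = A.shape[0]
aug = np.hstack([A, np.eye(n)])    # form the augmented matrix [A | I]

# Gauss-Jordan elimination, column by column
for col in range(n):
    aug[col] = aug[col] / aug[col, col]           # scale the pivot row so the pivot is 1
    for row in range(n):
        if row != col:
            aug[row] -= aug[row, col] * aug[col]  # clear the other entries in this column

A_inv = aug[:, n:]                 # the right half is now A^-1
```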
If AB = I or BA = I, then A is invertible and B = A^-1.
20. LU Factorization
Just as we can factor natural numbers, such as 10 = 2 · 5, we can factor matrices as a product of other matrices. This is called a matrix factorization.
If A is a square matrix that can be reduced to row echelon form without interchanging any rows, then A has an LU factorization.
An LU factorization is a matrix factorization of the form:
A = LU
where L is a unit lower triangular matrix, and U is upper triangular.
If U is the upper triangular matrix obtained when A is put into row echelon form, then L is the product of the inverses of the elementary matrices representing each row operation.
L can also be obtained from the multipliers of the row operations: for each operation of the form R_i − kR_j, the multiplier k becomes the (i, j) entry of L.
The row operations must be done in top-to-bottom, left-to-right order for this process to apply.
A matrix P that arises from interchanging two rows of the identity matrix is called a permutation matrix.
The transpose of a permutation matrix is its inverse.
If A is a square matrix, then it has a factorization A = P^T LU, where P is a permutation matrix, and L and U are as defined above.
Every square matrix has a P^T LU factorization.
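The multiplier bookkeeping described above can be written as a bare-bones Doolittle LU routine (a sketch; it assumes the matrix reduces without any row interchanges, and the example matrix is illustrative):

```python
import numpy as np

def lu_no_pivot(A):
    """Return L (unit lower triangular) and U (upper triangular) with A = L @ U.
    Assumes no row interchanges are needed."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for j in range(n):
        for i in range(j + 1, n):
            mult = U[i, j] / U[j, j]   # multiplier k in R_i <- R_i - k*R_j
            L[i, j] = mult             # the multiplier becomes the (i, j) entry of L
            U[i] -= mult * U[j]
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, -6.0, 0.0],
              [-2.0, 7.0, 2.0]])
L, U = lu_no_pivot(A)
```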
21. Subspaces, Dimensions and Basis
A subspace is a collection of vectors inside the space R^n that contains the zero vector 0, is closed under addition, and is closed under scalar multiplication.
Therefore, for any set of vectors in a subspace, all linear combinations of those vectors are in thesame subspace.
For this reason, the span of a set of vectors forms a subspace in Rn itself.The subspace formed by the span of a set of vectors is the subspace spanned by that set.
The row space is the subspace spanned by the rows of a matrix, and the column space, by the columns.
Matrices that are row equivalent have the same row space. A basis for a subspace is a set of vectors that spans it and is linearly independent.
The standard basis is a basis that consists of the standard unit vectors e1, e2, . . . , en for the space R^n.
The set of all solutions to an equation of the form Ax = 0 is referred to as the null space of the matrix A, and is a subspace of R^n, where n is the number of columns of the matrix.
For any system of linear equations of the form Ax = b, it either has no solutions, one unique solution, or infinitely many solutions.
Two bases for a subspace must have the same number of vectors.
This leads to the definition of dimension, which is the number of vectors in the basis for asubspace.
The row and column spaces of a matrix must have the same dimension.
We can now redefine rank as the dimension of a matrix's row/column space. A matrix's transpose has the same rank as the matrix itself.
The nullity of a matrix is defined as the dimension of its null space. rank(A) + nullity(A) = n, where n is the number of columns.
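The rank-nullity relationship can be observed numerically; a short sketch (the matrix is illustrative, and the nullity is counted from the singular values):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])    # the second row is twice the first

n = A.shape[1]                      # number of columns
rank = np.linalg.matrix_rank(A)     # dimension of the row/column space

# Count zero singular values: each one contributes a null-space direction
_, s, _ = np.linalg.svd(A)
nullity = n - np.sum(s > 1e-10)
```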
Using the notion of basis, one can denote any vector in a subspace as a linear combination of its basis vectors. The coefficients of the linear combination form the coordinates of the vector with respect to the basis.
22. Linear Transformations
A transformation T : R^n → R^m is a linear transformation if
T(u + v) = T(u) + T(v)
and
T(cv) = cT(v).
A matrix transformation of the form T_A(x) = Ax is a linear transformation as well.
All linear transformations can be expressed as a matrix transformation where each column of the standard matrix is the transformation of the corresponding standard unit vector.
Composite transformations arise when one transformation is applied after another. Their standard matrices are related in that the matrix of the composite is the product of the individual matrices.
Two transformations areinverse to each other if both composites (in either order) yield theidentity matrix.
The matrix of an inverse transformation is the inverse of the original matrix.
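Building a standard matrix column by column from the images of the standard unit vectors can be seen concretely; a sketch using a 90° rotation in R^2 as the (illustrative) transformation:

```python
import numpy as np

def T(v):
    """Rotate a vector in R^2 by 90 degrees counterclockwise."""
    x, y = v
    return np.array([-y, x])

# The columns of the standard matrix are T(e1) and T(e2)
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A = np.column_stack([T(e1), T(e2)])

# The matrix transformation agrees with T on any vector
v = np.array([3.0, 4.0])
```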
23. Determinant of a Matrix
Only square matrices have determinants.
The determinant of a square matrix is:

det A = Σ_{j=1}^{n} a_ij C_ij (expansion along row i)

or

det A = Σ_{i=1}^{n} a_ij C_ij (expansion along column j)

where C_ij is the (i, j) cofactor. The cofactor is defined as (−1)^(i+j) det A_ij, where A_ij is the matrix A with the ith row and jth column omitted.
The determinant of a 1 × 1 matrix is the value of its single entry.
For square matrices A and B of the same size,
det(AB) = (det A)(det B).
A matrix is invertible if and only if its determinant is nonzero.
For any square matrix A:
det A = det A^T
and
det(A^-1) = 1/det A
(if the matrix is invertible)
The inverse of any invertible matrix is:
A^-1 = (1/det A) adj A
The adjoint matrix, adj A, is the transpose of the cofactor matrix.
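The cofactor expansion translates almost literally into a recursive routine (fine for small matrices; the cost grows like n!, so it is for illustration only, and the example matrix is arbitrary):

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # A_0j: the matrix with row 0 and column j omitted
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)  # (-1)^(0+j) a_0j det A_0j
    return total

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
```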
24. Eigenvalues and Eigenvectors
An eigenvalue is a scalar (denoted by λ) such that:
Ax = λx
x is called an eigenvector of the matrix A.
All eigenvectors corresponding to an eigenvalue, together with the zero vector, form an eigenspace. The eigenvalues of a matrix A are the solutions to:
det(A − λI) = 0
The algebraic multiplicity of an eigenvalue is the number of times it is a root of the characteristic equation.
The geometric multiplicity is the dimension of its eigenspace.
Since the determinant of a triangular matrix is equal to the product of its diagonal entries, the eigenvalues of a triangular matrix are the values on its diagonal.
If λ is an eigenvalue of A, then λ^n is an eigenvalue of A^n for any integer n, with the same corresponding eigenvector.
If a vector x can be expressed as a linear combination of the eigenvectors of A, x = c1v1 + c2v2 + . . ., the following is true:
A^k x = c1 λ1^k v1 + c2 λ2^k v2 + . . .
Eigenvectors corresponding to distinct eigenvalues are linearly independent.
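The defining relation Ax = λx can be checked numerically with NumPy's eigensolver; a short sketch using a triangular matrix, whose eigenvalues should be its diagonal entries:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])          # triangular, so the eigenvalues are 2 and 3

eigvals, eigvecs = np.linalg.eig(A)

# Each column of eigvecs is an eigenvector for the matching eigenvalue,
# so A @ V should equal V scaled column-wise by the eigenvalues.
residual = A @ eigvecs - eigvecs * eigvals
```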
25. Similarity of Matrices
If an invertible matrix P exists so that P^-1 AP = B, then A ~ B. This is similarity of matrices.
Similarity is transitive. The following is true of similar matrices:
- Their determinants are the same.
- They are either both invertible or both not invertible.
- Their ranks are the same.
- Their characteristic polynomials, and therefore their eigenvalues, are the same.
A matrix is diagonalizable if it is similar to a diagonal matrix.
An n × n matrix is diagonalizable if and only if it has n linearly independent eigenvectors; having n distinct eigenvalues is sufficient.
If a matrix is diagonalizable, its diagonal matrix D has entries that are its eigenvalues, and the columns of its P matrix are the corresponding eigenvectors, in order.
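The relation P^-1 AP = D, with P built from eigenvectors, can be verified directly; a sketch (the 2 × 2 matrix is illustrative and has distinct eigenvalues, so it is diagonalizable):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # eigenvalues 5 and 2 (distinct)

eigvals, P = np.linalg.eig(A)       # columns of P are the eigenvectors
D = np.diag(eigvals)                # D carries the eigenvalues, in order

check = np.linalg.inv(P) @ A @ P    # should reproduce D
```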
26. Orthogonality

A set of vectors is an orthogonal set if every vector is orthogonal (its dot product is zero) to every other vector in the set.
An orthogonal set of nonzero vectors is linearly independent.
An orthogonal basis is a basis that is an orthogonal set.
For any vector x in a subspace with orthogonal basis v1, v2, . . ., x is equal to
c1v1 + c2v2 + . . .
where:
ci = (x · vi)/(vi · vi)
An orthonormal set is an orthogonal set of unit vectors. An orthonormal basis is defined similarly.
The columns of a square matrix Q are an orthonormal set if and only if
Q^T Q = I
and such a matrix is called an orthogonal matrix.
For every orthogonal matrix Q:
- Q^-1 = Q^T
- ||Qx|| = ||x||
- Qx · Qy = x · y
- Q^-1 is orthogonal
- det Q = ±1
- The absolute value of each of its eigenvalues is one.
- Q1 Q2 is orthogonal if Q1 and Q2 are.
A vector is orthogonal to a subspace if it is orthogonal to every vector inside of the subspace (which is equivalent to being orthogonal to each of its basis vectors).
The set of all vectors orthogonal to a subspace is its orthogonal complement.
For a matrix A:
(row(A))^⊥ = null(A)
and
(col(A))^⊥ = null(A^T)
These four subspaces are called the fundamental subspaces of A.
The orthogonal projection of a vector onto a subspace is the sum of the projections of the vector onto each of the orthogonal basis vectors.
The component of the vector orthogonal to the subspace is the difference between the vector and its projection.
The Gram-Schmidt Process: this process takes a basis for a subspace and produces an orthogonal one. It is done by taking each basis vector one at a time and subtracting its projections onto the previous basis vectors (keeping the component of the vector perpendicular to the previous basis vectors).
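The Gram-Schmidt step described above can be written out directly (a plain sketch with illustrative input vectors; it is not numerically robust for ill-conditioned bases):

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a list of linearly independent vectors into an orthogonal basis."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for u in basis:
            w -= (w @ u) / (u @ u) * u   # subtract the projection onto u
        basis.append(w)
    return basis

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0])]
q1, q2 = gram_schmidt(vs)
```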
This leads to the QR factorization. The QR factorization can be done with any matrix with linearly independent columns.
It is factored A = QR, where Q is a matrix with orthonormal columns and R is an invertible upper triangular matrix.
A matrix is orthogonally diagonalizable if the diagonalizing matrix is orthogonal.
Orthogonally diagonalizable matrices are symmetric, and vice versa. Distinct eigenvalues of a symmetric matrix have orthogonal eigenvectors.
Based on the Spectral Theorem, any real symmetric matrix A can be written as A = QDQ^T, and since the diagonal entries of D are the eigenvalues of A, the matrix A can be written as:
A = λ1 q1 q1^T + λ2 q2 q2^T + . . .
This is called the spectral decomposition of A.
27. Vector Spaces
A vector space is a set on which addition and scalar multiplication are defined in some fashion, and the following axioms are true for all vectors u, v, and w in the set and for all scalars c and d:
(1) It is closed under addition.
(2) Addition is commutative.
(3) Addition is associative.
(4) A zero vector exists, and is the additive identity.
(5) For each vector u, there is an opposite, −u, such that their sum is 0.
(6) It is closed under scalar multiplication.
(7) It is distributive such that a scalar can be distributed over a sum of vectors.
(8) It is distributive such that a vector can be distributed over a sum of scalars.
(9) c(du) = (cd)u
(10) 1u = u
The operations addition and scalar multiplication can be defined in any way such that the axioms are fulfilled, and the vectors in the set may be any objects for which the axioms hold.
For any vector space containing a vector u and scalar c, the following are true as well:
- 0u = 0
- c0 = 0
- (−1)u = −u
- If cu = 0, either c = 0 or u = 0.
A subset is called a subspace of a vector space if it is also a vector space, with the same scalars, addition, and scalar multiplication definitions.
Following from this, if V is a vector space, and W is a nonempty subset of V, then W is a subspace of V if and only if W is closed under addition and scalar multiplication.
A subspace has the same zero vector as its containing space(s).
The span of a set of vectors in a vector space is the smallest subspace containing those vectors.
Linear combinations, linear independence, and bases are defined in the same way as for conventional vectors.
For a basis of a vector space: any set with more vectors than the basis is linearly dependent, and any set with fewer cannot span the vector space.
Vector spaces are finite-dimensional if their basis has a finite number of vectors; otherwise, they are infinite-dimensional.
28. Change of Basis
The change-of-basis matrix from B to C, P_{C←B}, is the matrix whose columns are the coordinate vectors of the original basis vectors with respect to the new basis.
The matrix P_{C←B} has the following properties:
- P_{C←B} [x]_B = [x]_C
- P_{C←B} is the unique matrix with the above property.
- P_{C←B} is invertible and its inverse is P_{B←C}.
By expressing the basis vectors for the two bases B and C in terms of a third basis (call these expressions B and C), row reduction of [C | B] yields [I | P_{C←B}].
29. Kernel and Range
The kernel of a linear transformation is the set of all vectors that the transformation maps to the zero vector.
The range is the set of all vectors that are images under the linear transformation.
The rank of a linear transformation is the dimension of its range. The nullity of a linear transformation is the dimension of its kernel.
The sum of the rank and the nullity is equal to the dimension of the domain of the transformation.
A transformation is called one-to-one if distinct vectors have distinct images. If the range of a linear transformation is equal to its codomain, it is onto.
A linear transformation is one-to-one if and only if its kernel is {0}.
If the dimension of both spaces is the same for a linear transformation, it is either both one-to-one and onto, or neither.
A linear transformation is invertible if and only if it is one-to-one and onto.
A linear transformation that is both one-to-one and onto is an isomorphism.
30. Inner Products
An inner product is an operation on a vector space that assigns a real number ⟨u, v⟩ to every pair of vectors so that the following are true for any vectors u, v, and w, and for any scalar c:
(1) ⟨u, v⟩ = ⟨v, u⟩
(2) ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩
(3) ⟨cu, v⟩ = c⟨u, v⟩
(4) ⟨u, u⟩ ≥ 0, with equality only when u = 0
The length or norm of a vector v is ||v|| = √⟨v, v⟩.
The distance between two vectors is the norm of their difference. Two vectors are orthogonal if their inner product is zero.
31. Norms
A norm on a vector space assigns each vector a real number so that the following are true:
(1) The norm of a vector is always greater than or equal to zero. The only time it is zero is when the vector is the zero vector.
(2) ||cv|| = |c| ||v||
(3) The norm of the sum of two vectors is less than or equal to the sum of the norms of those two vectors.
The sum norm, also called the 1-norm, is the sum of the absolute values of the vector's components. The ∞-norm, or max norm, is the greatest of the absolute values of the components of a vector.
A distance function is defined as:
distance(u, v) = ||u − v||
A matrix norm associates each square matrix A with a real number so that the following is true, in addition to the vector norm conditions (with matrices instead):
||AB|| ≤ ||A|| ||B||
32. Least Squares Approximation
The best approximation to a vector in a finite-dimensional subspace is its projection onto the subspace.
If A is an m × n matrix and b is in R^m, a least squares solution of Ax = b is a vector x̄ in R^n such that:
||b − Ax̄|| ≤ ||b − Ax||
for all x in R^n.
The least squares solution x̄ is unique if A^T A is invertible. If so, the following is true:
x̄ = (A^T A)^-1 A^T b
For a matrix with QR factorization A = QR, the least squares solution is:
x̄ = R^-1 Q^T b
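The normal-equations formula can be checked against NumPy's least squares solver; a short sketch with illustrative data (fitting a line through three points):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])          # m x n, linearly independent columns
b = np.array([1.0, 2.0, 2.0])

# Normal equations: x = (A^T A)^-1 A^T b
x_normal = np.linalg.inv(A.T @ A) @ A.T @ b

# NumPy's built-in least squares solution, for comparison
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
```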
The pseudoinverse of a matrix A with linearly independent columns is:
A^+ = (A^T A)^-1 A^T
The pseudoinverse has the following properties:
- AA^+ A = A
- A^+ AA^+ = A^+
- AA^+ and A^+ A are symmetric.
33. Singular Value Decomposition
The singular values of a matrix A are the square roots of the eigenvalues of A^T A. They are denoted by σ1 ≥ σ2 ≥ . . . ≥ σn ≥ 0.
The singular value decomposition is a way of factoring an m × n matrix A into the form A = UΣV^T, where U is an m × m orthogonal matrix, V is an n × n orthogonal matrix, and Σ is an m × n matrix of the form:

[ D O ]
[ O O ]

where D is a diagonal matrix whose entries are the nonzero singular values of A.
V is constructed from the eigenvectors of A^T A so that each column is the eigenvector corresponding to the matching singular value.
U is constructed from the eigenvectors of AA^T in the same way.
The matrix A can then be written as:
A = σ1 u1 v1^T + σ2 u2 v2^T + . . .
(similar to the Spectral Theorem)
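The term-by-term sum above can be verified by rebuilding a matrix from its SVD; a sketch with an illustrative 2 × 2 matrix:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

U, s, Vt = np.linalg.svd(A)         # A = U @ diag(s) @ Vt

# Rebuild A as sigma_1 u1 v1^T + sigma_2 u2 v2^T + ...
A_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
```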
34. Fundamental Theorem of Invertible Matrices
Each of the following statements implies the others:
- A is invertible, where A is an n × n matrix.
- Ax = b has a unique solution for every b in R^n.
- Ax = 0 has only the trivial solution.
- The reduced row echelon form of A is I.
- A is a product of elementary matrices.
- rank(A) = n.
- nullity(A) = 0.
- The column vectors of A are linearly independent.
- The column vectors of A span R^n.
- The column vectors of A form a basis for R^n.
- The row vectors of A are linearly independent.
- The row vectors of A span R^n.
- The row vectors of A form a basis for R^n.
- det A ≠ 0.
- 0 is not an eigenvalue of A.
- T is invertible, where T : V → W is the linear transformation whose matrix with respect to bases B and C is A.
- T is one-to-one.
- T is onto.
- ker(T) = {0}.
- range(T) = W.
- 0 is not a singular value of A.