Post on 04-Jun-2018
transcript
8/13/2019 maths notes 1
Lecture 2: Elimination with Matrices
Elimination is the way every software package solves equations. If the elimination succeeds, it gets the answer. If the matrix A in Ax=b is a "good" matrix (we'll see later what a good matrix is), then elimination works and we get the answer efficiently. It's also always good to ask how elimination can fail. We'll see in this lecture how elimination decides whether the matrix A is good or bad. After the elimination there is a step called back-substitution to complete the answer.
Okay, here is a system of equations. Three equations in three unknowns:

x + 2y + z = 2
3x + 8y + z = 12
4y + z = 2
Remember from lecture one that every such system can be written in the matrix form Ax=b, where A is the matrix of coefficients, x is the column vector of unknowns and b is the column vector on the right-hand side. Therefore the matrix form of this example is the following:

[ 1 2 1 ] [ x ]   [  2 ]
[ 3 8 1 ]·[ y ] = [ 12 ]
[ 0 4 1 ] [ z ]   [  2 ]
For the elimination process we need the matrix A and the column vector b. The idea is very simple. First we write them down in the augmented matrix form A|b:

[ 1 2 1 |  2 ]
[ 3 8 1 | 12 ]
[ 0 4 1 |  2 ]
Next we subtract rows from one another in such a way that the final result is an upper triangular
matrix (a matrix with all the elements below the diagonal being zero).
So the first step is to subtract the first row multiplied by 3 from the second row. This gives us the following matrix:

[ 1 2  1 | 2 ]
[ 0 2 -2 | 6 ]
[ 0 4  1 | 2 ]
The next step is to subtract the second row multiplied by 2 from the third row. This is the final step and produces the upper triangular matrix that we needed:

[ 1 2  1 |   2 ]
[ 0 2 -2 |   6 ]
[ 0 0  5 | -10 ]
http://www.catonmat.net/blog/mit-linear-algebra-part-one/
Now let's write down the equations that resulted from the elimination:

x + 2y + z = 2
2y - 2z = 6
5z = -10
Working from the bottom up we can immediately find the solutions z, y, and x. From the last equation,
z = -10/5 = -2. Now we put z in the middle equation and solve for y. 2y = 6 + 2z = 6 + 2(-2) = 6 - 4 = 2
=> y = 1. And finally, we can substitute y and z in the first equation and solve for x. x = 2 - 2y - z = 2 -
2(1) - (-2) = 2.
We have found the solution: (x=2, y=1, z=-2). The process we used to find it is called back-substitution.
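Here is a quick sketch of the whole process in Python. The augmented matrix is the one for the system above (its entries are inferred from the elimination and back-substitution steps described in the text, so treat it as an illustration):

```python
# Gaussian elimination (no row exchanges) followed by back-substitution.
# `aug` is the augmented matrix [A|b] as a list of rows of floats.

def solve_by_elimination(aug):
    n = len(aug)
    # Forward elimination: zero out the entries below each pivot.
    for col in range(n):
        for row in range(col + 1, n):
            factor = aug[row][col] / aug[col][col]
            for k in range(col, n + 1):
                aug[row][k] -= factor * aug[col][k]
    # Back-substitution: solve from the last equation upward.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = aug[i][n] - sum(aug[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / aug[i][i]
    return x

print(solve_by_elimination([[1, 2, 1, 2], [3, 8, 1, 12], [0, 4, 1, 2]]))
# [2.0, 1.0, -2.0]
```

The two loops mirror exactly the two phases of the lecture: elimination down to an upper triangular system, then back-substitution from the bottom up.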
The elimination would fail if taking a multiple of one row and subtracting it from the next produced a zero on the diagonal (and there were no other row to exchange the failing row with).
The lecture continues with figuring out how to do the elimination by using matrices. In the first lecture we learned that a matrix times a column vector gives us a combination of the columns of the matrix. Similarly, a row times a matrix gives us a combination of the rows of the matrix.
Let's look at our first step of elimination again. It was to subtract 3 times the first row from the second row. This can be expressed as matrix multiplication (forget the column b for a while):

[  1 0 0 ] [ 1 2 1 ]   [ 1 2  1 ]
[ -3 1 0 ]·[ 3 8 1 ] = [ 0 2 -2 ]
[  0 0 1 ] [ 0 4 1 ]   [ 0 4  1 ]

Let's call the matrix on the left E, an elimination matrix (or elementary matrix), and give it the subscript E21, for making a zero in the resulting matrix at row 2, column 1.
The next step was to subtract 2 times the second row from the third row:

[ 1  0 0 ] [ 1 2  1 ]   [ 1 2  1 ]
[ 0  1 0 ]·[ 0 2 -2 ] = [ 0 2 -2 ]
[ 0 -2 1 ] [ 0 4  1 ]   [ 0 0  5 ]

The matrix on the left is again an elimination matrix. Let's call it E32, for giving a zero at row 3, column 2.
But notice that these two operations can be combined:
And we can write E32(E21A) = U. Now remember that matrix operations are associative; therefore we can move the parentheses: (E32E21)A = U. If we multiply out (E32E21) we get a single matrix E that we will call the elimination matrix. What we have done is express the whole elimination process in matrix language!
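We can check this numerically. The matrices E21 and E32 below are the standard elementary matrices for the two row operations described above (subtract 3×row 1 from row 2, then 2×row 2 from row 3), written out as an illustration:

```python
# Verifying E32·E21·A = U for the elimination example above.
import numpy as np

A = np.array([[1, 2, 1], [3, 8, 1], [0, 4, 1]])
E21 = np.array([[1, 0, 0], [-3, 1, 0], [0, 0, 1]])   # row2 -= 3*row1
E32 = np.array([[1, 0, 0], [0, 1, 0], [0, -2, 1]])   # row3 -= 2*row2

U = E32 @ E21 @ A       # upper triangular
E = E32 @ E21           # one matrix performing the whole elimination
print(U)
print(np.array_equal(E @ A, U))
```

Associativity is what lets us collapse E32·(E21·A) into (E32·E21)·A = E·A.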
Lecture 5: Vector Spaces and Subspaces
The lecture starts by reminding us of some facts about permutation matrices. Remember from the previous lecture that permutation matrices P execute row exchanges, and they are identity matrices with reordered rows.
Let's count how many permutation matrices are there for an nxn matrix.
For a matrix of size 1x1, there is just one permutation matrix - the identity matrix.
For a matrix of size 2x2 there are two permutation matrices - the identity matrix and the identity matrix
with rows exchanged.
For a matrix of size 3x3 we may have the rows of the identity matrix rearranged in 6 ways - {1,2,3},
{1,3,2}, {2,1,3}, {2,3,1}, {3,1,2}, {3,2,1}.
For a matrix of size 4x4 the number of ways to reorder the rows is the same as the number of ways to
rearrange numbers {1,2,3,4}. This is the simplest possible combinatorics problem. The answer is 4! =
24 ways.
In general, for an nxn matrix, there are n! permutation matrices.
Another key fact to remember about permutation matrices is that their inverse P-1 is their transpose PT. Or algebraically, PT·P = I.
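Both facts are easy to check by brute force for a small n. Here is a sketch that builds every permutation matrix of size 3x3 as the identity with reordered rows:

```python
# There are n! permutation matrices of size n×n, and each satisfies PT·P = I.
from itertools import permutations
import numpy as np

def permutation_matrices(n):
    eye = np.eye(n, dtype=int)
    # Each permutation of the row indices gives one permutation matrix.
    return [eye[list(p)] for p in permutations(range(n))]

mats = permutation_matrices(3)
print(len(mats))                                                   # 3! = 6
print(all(np.array_equal(P.T @ P, np.eye(3, dtype=int)) for P in mats))
```

For n = 4 the same function produces 4! = 24 matrices, matching the count in the text.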
The lecture proceeds to transpose matrices. The transpose of a matrix exchanges its columns with rows. Another way to think about it is that it flips the matrix over its main diagonal. The transpose of matrix A is denoted by AT.
Here is an example of the transpose of a 3-by-3 matrix. I color coded the columns to better see how they get exchanged:
A matrix does not have to be square for its transpose to exist. Here is another example, the transpose of a 3-by-2 matrix:
http://www.catonmat.net/blog/mit-linear-algebra-part-four/
In algebraic notation the transpose is expressed as (AT)ij = Aji, which says that an element aij at position ij gets transposed into the position ji.
Here are the rules for matrix transposition:
The transpose of A + B is (A + B)T = AT + BT.
The transpose of A·B is (A·B)T = BT·AT.
The transpose of A·B·C is (A·B·C)T = CT·BT·AT.
The transpose of A-1 is (A-1)T = (AT)-1.
Next the lecture continues with symmetric matrices. A symmetric matrix has its transpose equal to
itself, i.e., AT = A. It means that we can flip the matrix along the diagonal (transpose it) but it won't
change.
Here is an example of a symmetric matrix. Notice that the elements on opposite sides of the diagonal
are equal:
Now check this out. If you have a matrix R that is not symmetric and you multiply it with its transpose RT as R·RT, you get a symmetric matrix! Here is an example:
Are you wondering why it's true? The proof is really simple. Remember that a matrix is symmetric if its transpose is equal to itself. Now what's the transpose of the product R·RT? It's (R·RT)T = (RT)T·RT = R·RT - it's the same product, which means that R·RT is always symmetric.
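A quick numerical check, using an arbitrary non-square (and hence non-symmetric) matrix picked for illustration:

```python
# R·RT is symmetric even when R itself is not.
import numpy as np

R = np.array([[1, 2], [3, 4], [5, 6]])   # an arbitrary 3×2 matrix
S = R @ R.T                              # a 3×3 product
print(np.array_equal(S, S.T))            # True: S equals its own transpose
```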
Here is another cool fact - the inverse of a symmetric matrix (if it exists) is also symmetric. Here is the proof. Suppose A is symmetric. The transpose of A-1 is (A-1)T = (AT)-1. But AT = A, therefore (A-1)T = A-1, which means that A-1 is symmetric.
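The same fact checked numerically, on a small invertible symmetric matrix chosen for illustration:

```python
# The inverse of a symmetric matrix is symmetric.
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric, det = 3, so invertible
A_inv = np.linalg.inv(A)
print(np.allclose(A_inv, A_inv.T))       # True
```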
At this point the lecture finally reaches the fundamental topic of linear algebra - vector spaces. As usual, it introduces the topic by examples.
Example 1: Vector space R2 - all 2-dimensional vectors. Some of the vectors in this space are (3, 2),
(0, 0), (π, e) and infinitely many others. These are all the vectors with two components and they
represent the xy plane.
Example 2: Vector space R3 - all vectors with 3 components (all 3-dimensional vectors).
Example 3: Vector space Rn - all vectors with n components (all n-dimensional vectors).
What makes these sets of vectors vector spaces is that they are closed under multiplication by a scalar and under addition, i.e., a vector space must be closed under linear combinations of its vectors. What I mean by that is: if you take two vectors and add them together, or multiply a vector by a scalar, the result is still in the same space.
For example, take the vector (1,2,3) in R3. If we multiply it by any number α, it's still in R3 because α·(1,2,3) = (α, 2α, 3α). Similarly, if we take any two vectors (a, b, c) and (d, e, f) and add them together, the result is (a+d, b+e, c+f) and it's still in R3.
There are actually 8 axioms that the vectors must satisfy for them to make a space, but they are not
listed in this lecture.
Here is an example of not-a-vector-space. It's 1/4 of R2 (the 1st quadrant). The green vectors are in the 1st quadrant but the red one is not:
An example of not-a-vector-space.
This is not a vector space because it is not closed under multiplication by a scalar. If we take the vector (3,1) and multiply it by -1, we get the red vector (-3, -1), which is not in the 1st quadrant; therefore the quadrant is not a vector space.
Next, Gilbert Strang introduces subspaces of vector spaces.
For example, any line in R2 that goes through the origin (0, 0) is a subspace of R2. Why? Because if we take any vector on the line and multiply it by a scalar, it's still on the line. And if we take any two vectors on the line and add them together, the result is also still on the line. The requirement for a subspace is that the vectors in it do not go outside of it when added together or multiplied by a number.
Here is a visualization. The blue line is a subspace of R2 because the red vectors on it can't go outside of the line:
http://mathworld.wolfram.com/VectorSpace.html
An example of a subspace of R2.
An example of not-a-subspace of R2 is any line that does not go through the origin. If we take any vector on the line and multiply it by 0, we get the zero vector, but it's not on the line. Also, if we take two vectors on the line and add them together, the sum is not on the line. Here is a visualization:
An example of not-a-subspace of R2.
Why not list all the subspaces of R2? They are:
R2 itself,
any line through the origin (0, 0),
the zero vector (0, 0).
And all the subspaces of R3 are:
R3 itself,
any line through the origin (0, 0, 0),
any plane through the origin (0, 0, 0),
the zero vector.
The last 10 minutes of the lecture are spent on column spaces of matrices.
The column space of a matrix is made out of all the linear combinations of its columns. For example, given this matrix:

    [ 1 3 ]
A = [ 2 3 ]
    [ 4 1 ]
The column space C(A) is the set of all vectors {α·(1,2,4) + β·(3,3,1)}. In fact, this column space is a
subspace of R3 and it forms a plane through the origin.
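We can confirm that this column space is a plane (not a line or all of R3) by checking the rank of the matrix with columns (1,2,4) and (3,3,1):

```python
# The column space C(A) is spanned by the two columns of A.
import numpy as np

A = np.array([[1, 3], [2, 3], [4, 1]])
print(np.linalg.matrix_rank(A))   # 2: the columns are independent,
                                  # so C(A) is a plane through the origin
```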
Lecture 3: Matrix Multiplication and Inverse Matrices
Lecture three starts with five ways to multiply matrices.
The first way is the classical way. Suppose we are given a matrix A of size mxn with elements aij and a matrix B of size nxp with elements bjk, and we want to find the product A·B. Multiplying matrices A and B will produce a matrix C of size mxp with elements cik = ai1·b1k + ai2·b2k + ... + ain·bnk.
Here is how this sum works. To find the first element c11 of matrix C, we sum over the 1st row
of A and the 1st column of B. The sum expands to c11 = a11·b11 + a12·b21 + a13·b31 + ... + a1n·bn1. Here
is a visualization of the summation:
We continue this way until we find all the elements of matrix C. Here is another visualization of finding
c23:
The second way is to take each column in B, multiply it by the whole matrix A and put the resulting
column in the matrix C. The columns of C are combinations of columns of A. (Remember
from previous lecture that a matrix times a column is a column.)
For example, to get column 1 of matrix C, we multiply A·(column 1 of matrix B):
http://www.catonmat.net/blog/mit-linear-algebra-part-two/
The third way is to take each row in A, multiply it by the whole matrix B and put the resulting row in
the matrix C. The rows of C are combinations of rows of B. (Again, remember from previous
lecture that a row times a matrix is a row.)
For example, to get row 1 of matrix C, we multiply row 1 of matrix A with the whole matrix B:
The fourth way is to look at the product of A·B as a sum of (columns of A) times (rows of B).
Here is an example:
The fifth way is to chop matrices in blocks and multiply blocks by any of the previous methods.
Here is an example. Matrix A gets subdivided in four submatrices A1 A2 A3 A4, matrix B gets divided in
four submatrices B1 B2 B3 B4 and the blocks get treated like simple matrix elements.
Here is the visualization:
Element C1, for example, is obtained by multiplying A1·B1 + A2·B3.
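Here is a sketch comparing four of the five views on a small example of my own; all of them produce the same product A·B:

```python
# Four ways of computing A·B give identical results.
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
C = A @ B                                    # way 1: the classical rule

# Way 2: each column of C is A times the corresponding column of B.
by_cols = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])

# Way 3: each row of C is the corresponding row of A times B.
by_rows = np.vstack([A[i, :] @ B for i in range(A.shape[0])])

# Way 4: C is the sum of (column k of A) times (row k of B).
by_outer = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))

print(np.array_equal(C, by_cols), np.array_equal(C, by_rows),
      np.array_equal(C, by_outer))
```

The block way (way 5) is the same idea applied with submatrices in place of scalar entries.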
Next the lecture proceeds to finding inverse matrices. An inverse of a matrix A is another matrix, denoted A-1, such that A-1·A = I, where I is the identity matrix. In fact, if A-1 is the inverse matrix of a square matrix A, then it's both the left-inverse and the right-inverse, i.e., A-1·A = A·A-1 = I.
If a matrix A has an inverse then it is said to be invertible or non-singular.
Matrix A is singular if we can find a non-zero vector x such that A·x = 0. The proof is easy. Suppose such an A had an inverse A-1. Then multiplying A·x = 0 by A-1 on the left gives x = A-1·0 = 0, which contradicts x being non-zero. Therefore A has no inverse, i.e., A is singular.
Another way of saying that matrix A is singular is to say that the columns of matrix A are linearly dependent (one or more columns can be expressed as a linear combination of the others).
Finally, the lecture shows a deterministic method for finding the inverse matrix. This method is called the Gauss-Jordan elimination. In short, Gauss-Jordan elimination transforms the augmented matrix (A|I) into (I|A-1) by using only row eliminations.
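Here is a minimal sketch of that idea, assuming no row exchanges are needed (so every pivot it meets must be nonzero):

```python
# Gauss-Jordan: transform (A|I) into (I|A^-1) by row operations only.

def gauss_jordan_inverse(A):
    n = len(A)
    # Build the augmented matrix (A|I).
    aug = [row[:] + [float(i == j) for j in range(n)]
           for i, row in enumerate(A)]
    for col in range(n):
        # Normalize the pivot row so the pivot becomes 1.
        pivot = aug[col][col]
        aug[col] = [v / pivot for v in aug[col]]
        # Eliminate the pivot column from every other row (above and below).
        for row in range(n):
            if row != col:
                factor = aug[row][col]
                aug[row] = [v - factor * p for v, p in zip(aug[row], aug[col])]
    # The right half of the augmented matrix is now A^-1.
    return [row[n:] for row in aug]

print(gauss_jordan_inverse([[2.0, 1.0], [1.0, 1.0]]))
# [[1.0, -1.0], [-1.0, 2.0]]
```

Eliminating above the pivots as well as below is what distinguishes Gauss-Jordan from plain Gaussian elimination.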
Linear Independence and Span
Span
We have seen in the last discussion that the span of vectors v1, v2, ... , vn is the set of linear
combinations
c1v1 + c2v2 + ... + cnvn
and that this is a vector space.
We now take this idea further. If V is a vector space and S = {v1, v2, ... , vn} is a subset of V, then is Span(S) equal to V?
Definition
Let V be a vector space and let S = {v1, v2, ... , vn} be a subset of V. We say that S spans V if every vector v in V can be written as a linear combination of vectors in S:
v = c1v1 + c2v2 + ... + cnvn
Example
Show that the set
S = {(0,1,1), (1,0,1), (1,1,0)}
spans R3 and write the vector (2,4,8) as a linear combination of vectors in S.
Solution
A vector in R3 has the form
v = (x, y, z)
Hence we need to show that every such v can be written as
(x,y,z) = c1(0, 1, 1) + c2(1, 0, 1) + c3(1, 1, 0)
= (c2 + c3, c1 + c3, c1 + c2)
This corresponds to the system of equations
c2 + c3 = x
c1 + c3 = y
c1 + c2 = z
which can be written in matrix form

[ 0 1 1 ] [ c1 ]   [ x ]
[ 1 0 1 ]·[ c2 ] = [ y ]
[ 1 1 0 ] [ c3 ]   [ z ]

We can write this as

Ac = b

Notice that

det(A) = 2

Hence A is nonsingular and

c = A-1b

so a solution exists for every b, and S spans R3. To write (2,4,8) as a linear combination of vectors in S, we solve Ac = (2,4,8) and find that

c = (5, 3, -1)
We have
(2,4,8) = 5(0,1,1) + 3(1,0,1) + (-1)(1,1,0)
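We can verify the coefficients numerically by solving Ac = b with the vectors of S as the columns of A:

```python
# Solving A·c = (2, 4, 8) for the span example above.
import numpy as np

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
b = np.array([2, 4, 8], dtype=float)
c = np.linalg.solve(A, b)
print(c)   # c = (5, 3, -1)
```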
Example
Show that if

v1 = t + 2 and v2 = t2 + 1
and S = {v1, v2}
then
S does not span P2
Solution
A general element of P2 is of the form

v = at2 + bt + c

We set

v = c1v1 + c2v2

or

at2 + bt + c = c1(t + 2) + c2(t2 + 1) = c2t2 + c1t + 2c1 + c2

Equating coefficients gives

a = c2
b = c1
c = 2c1 + c2
Notice that if
a = 1 b = 1 c = 1
there is no solution to this (c2 = 1 and c1 = 1 force c = 3, not 1). Hence S does not span P2.
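The coefficient-matching system above is overdetermined, and we can confirm it is inconsistent for (a, b, c) = (1, 1, 1) by checking that its least-squares residual is nonzero:

```python
# Columns are the coefficient vectors (t^2, t, constant) of v1 and v2.
import numpy as np

M = np.array([[0.0, 1.0],    # t^2 coefficients of v1, v2
              [1.0, 0.0],    # t coefficients
              [2.0, 1.0]])   # constant terms
target = np.array([1.0, 1.0, 1.0])
_, residual, _, _ = np.linalg.lstsq(M, target, rcond=None)
print(residual)   # nonzero, so the system is inconsistent
```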
Example
Let
Find a spanning set for the null space of A.
Solution
We want the set of all vectors x with
Ax = 0
We find that the nonzero rows of the rref of A are

[ 1 0 -7 -6 ]
[ 0 1  4  5 ]

The parametric equations are

x1 = 7s + 6t
x2 = -4s - 5t
x3 = s
x4 = t
We can get the span in the following way. We first let
s = 1 and t = 0
to get
v1 = (7,-4,1,0)
and let
s = 0 and t = 1
to get
v2 = (6,-5,0,1)
If we let S = {v1,v2} then S spans the null space of A.
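Both spanning vectors can be checked against the rref rows implied by the parametric equations above (the original matrix A was lost in conversion, but any x in the null space of A must also satisfy Rx = 0 for its rref R):

```python
# v1 and v2 should be annihilated by the nonzero rref rows.
import numpy as np

R = np.array([[1, 0, -7, -6],
              [0, 1, 4, 5]])
v1 = np.array([7, -4, 1, 0])
v2 = np.array([6, -5, 0, 1])
print(R @ v1, R @ v2)   # both products are zero vectors
```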
Linear Independence
We now know how to find out whether a collection of vectors spans a vector space. It should be clear that if S = {v1, v2, ... , vn} then Span(S) is spanned by S. The question that we ask next is whether there are any redundancies. That is, is there a smaller subset of S that also spans Span(S)? If so, then one of the vectors can be written as a linear combination of the others:
vi = c1v1 + c2v2 + ... + ci-1vi-1 + ci+1vi+1 + ... + cnvn
If this is the case then we call S a linearly dependent set. Otherwise, we say that S is linearly independent. There is another way of checking that a set of vectors is linearly dependent.
Theorem
Let S = {v1, v2, ... , vn} be a set of vectors. Then S is linearly dependent if and only if 0 is a nontrivial linear combination of vectors in S. That is, there are constants c1, ..., cn, with at least one of the constants nonzero, with
c1v1 + c2v2 + ... + cnvn = 0
Proof
Suppose that S is linearly dependent. Then

vi = c1v1 + c2v2 + ... + ci-1vi-1 + ci+1vi+1 + ... + cnvn

Subtracting vi from both sides, we get

c1v1 + c2v2 + ... + ci-1vi-1 - vi + ci+1vi+1 + ... + cnvn = 0

In the above equation the coefficient of vi is -1, which is nonzero, so 0 is a nontrivial linear combination of vectors in S.
Now let
c1v1 + c2v2 + ... + ci-1vi-1 + civi + ci+1vi+1 + ... + cnvn = 0

with ci nonzero. Divide both sides of the equation by ci and let aj = -cj/ci to get

-a1v1 - a2v2 - ... - ai-1vi-1 + vi - ai+1vi+1 - ... - anvn = 0

Finally, move all the other terms to the right side of the equation to get

vi = a1v1 + a2v2 + ... + ai-1vi-1 + ai+1vi+1 + ... + anvn
Example
Show that the set of vectors
S = {(1, 1, 3, 4), (0, 2, 3, 1), (4, 0, 0, 2)}
is linearly independent.
Solution
We write
c1(1, 1, 3, 4) + c2(0, 2, 3, 1) + c3(4, 0, 0, 2) = 0

We get four equations:

c1 + 4c3 = 0
c1 + 2c2 = 0
3c1 + 3c2 = 0
4c1 + c2 + 2c3 = 0

The matrix corresponding to this homogeneous system is

[ 1 0 4 ]
[ 1 2 0 ]
[ 3 3 0 ]
[ 4 1 2 ]

and its rref is

[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]
[ 0 0 0 ]
Hence
c1 = c2 = c3 = 0
and we can conclude that the vectors are linearly independent.
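Numerically, the same conclusion follows from the rank of the matrix with the three vectors as columns: the homogeneous system has only the trivial solution exactly when the rank equals the number of columns.

```python
# Rank test for the independence example above.
import numpy as np

M = np.column_stack([(1, 1, 3, 4), (0, 2, 3, 1), (4, 0, 0, 2)])
print(np.linalg.matrix_rank(M))   # 3: full column rank, so independent
```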
Example
Let
S = {cos2t, sin2t, 4}

then S is a linearly dependent set of vectors, since

4 = 4cos2t + 4sin2t
Linear Independence
Definition. Let V be a vector space over a field F, and let S be a subset of V. S is linearly independent if whenever v1, ..., vn are distinct vectors in S and c1, ..., cn are elements of F with

c1v1 + c2v2 + ... + cnvn = 0,

then c1 = c2 = ... = cn = 0.

An equation like the one above is called a linear relationship among the vi; if at least one of the coefficients is nonzero, it is a nontrivial linear relationship. Thus, a set of vectors is independent if there is no nontrivial linear relationship among finitely many of the vectors.

A set of vectors which is not linearly independent is linearly dependent. (I'll usually say "independent" and "dependent" for short.) Thus, a set of vectors S is dependent if there are vectors v1, ..., vn in S and numbers c1, ..., cn, not all of which are 0, such that

c1v1 + c2v2 + ... + cnvn = 0
Example. Any set containing the zero vector is dependent. For if 0 is in S, then 1·0 = 0 is a nontrivial linear relationship in S.
Example. Consider R2 as a vector space over R in the usual way.
The vectors (1, 0) and (0, 1) are independent in R2. To prove this, suppose

a(1, 0) + b(0, 1) = (0, 0)

Then (a, b) = (0, 0), so a = b = 0.
More generally, if F is a field, the standard basis vectors e1, ..., en
are independent in Fn.
Example. The vectors and are dependent in . To show this, I
have to find numbers a and b, not both 0, such that
There are many pairs of numbers that work. For example,
Likewise,
More generally, two vectors u and v are dependent if and only if they are
multiples of one another.
Example. The set of vectors
is a dependent set in . For
(One of the things I'll discuss shortly is how you find numbers like 3, 4, and 1
which give a linear combination which equals .)
Example. The vectors
are independent in .
Suppose
This is equivalent to the following set of linear equations:
You can verify that the only solution is the zero solution. Hence, the vectors are independent.
The previous example illustrates an algorithm for determining whether a set of
vectors is independent. To determine whether vectors v1, v2, ..., vn in a vector space V are independent, I try to solve

c1v1 + c2v2 + ... + cnvn = 0

If the only solution is c1 = c2 = ... = cn = 0, then the vectors are independent;
otherwise, they are dependent.
Here's the most important special case of this.
Suppose that v1, v2, ... are vectors in Fn, where F is a field. The vector equation above is equivalent to the matrix equation
I'll obtain only the zero solution if and only if the coefficient matrix row reduces this way:
To test whether vectors v1, v2, ..., vm in Fn are independent, form the
matrix with the vectors as columns and row reduce.
The vectors are independent if and only if the row reduced
echelon matrix has the identity matrix as its upper block (with rows of zeros below):
I've drawn this picture as if m ≤ n --- that is, as if the number of vectors is no
greater than their dimension. If m > n, row reduction as above cannot produce the identity matrix as an upper block.
Corollary. If m > n, a set of m vectors in Fn is dependent.
Example. I know that the set of vectors
in R2 is dependent without doing any computation. Any set of three (or more)
vectors in R2 is dependent.
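The corollary is visible in the rank: any matrix whose columns are three vectors in R2 has rank at most 2, so the homogeneous system always has a nontrivial solution. The particular vectors below are my own illustration:

```python
# More vectors than dimensions forces dependence.
import numpy as np

M = np.array([[1, 4, 2],
              [2, 5, 7]])   # three arbitrary vectors in R^2 as columns
print(np.linalg.matrix_rank(M) < M.shape[1])   # True: rank < #columns
```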
Example. Determine whether the set of vectors
is independent in .
I'll work this example from scratch and show the steps in the reasoning, then connect the result with the algorithm I gave above.
The question is whether you can find numbers, not all 0, such that
This amounts to solving the system
The augmented matrix is
However, I can see that row operations will never change the fourth column, so I
can omit it. So the row reduction is
Remembering that there is a fourth all-zero column that I'm not writing, this says
The parametrized solution is
So, for example, I can get a nonzero solution by setting .
Then and . And in fact,
In the row-reduced echelon matrix, you can see that I didn't get a copy of
the identity matrix. If that had happened, the reduced system would have said that each coefficient is 0, and the vectors would have been independent.
You can just do the algorithm if you wish, but it's always better to understand where it's coming from.
Example. is a vector space over the reals. The set is independent.
For if
it follows that for all i.
The next lemma says that an independent set can be thought of as a set without
"redundancy", in the sense that you can't build any one of the vectors out of the
others.
Lemma. Let V be an F-vector space, and let S be a subset of V. S is linearly independent if
and only if no v in S can be expressed as a linear combination of other vectors in S.
("Other" means vectors other than v itself.)
Proof. Suppose S is linearly independent. Suppose that some v in S can be written as

v = a1v1 + a2v2 + ... + anvn,

where the vi are vectors in S and vi ≠ v for all i. Then

a1v1 + a2v2 + ... + anvn - v = 0
is a nontrivial linear relation among elements of S. This contradicts linear independence. Hence, v cannot be a linear combination of other vectors in S.
Conversely, suppose no v in S can be expressed as a linear combination of other
vectors in S. Suppose

c1v1 + c2v2 + ... + cnvn = 0,

where the vi are distinct vectors in S. I want to show that ci = 0 for all i.

Suppose that at least one ci is nonzero. Assume without loss of generality
that c1 ≠ 0. Write

v1 = (-c2/c1)v2 + ... + (-cn/c1)vn

I've expressed v1 as a linear combination of other vectors in S, a contradiction.
Hence, ci = 0 for all i, and S is independent.
Recall that if S is a set of vectors in a vector space V over F, the span of S is

Span(S) = {c1v1 + ... + cnvn : vi in S, ci in F}

That is, the span consists of all linear combinations of vectors in S.
Definition. A set of vectors S spans a subspace W if Span(S) = W; that is, if every element of W is a linear combination of elements of S.
Example. Let
Then
For example, consider . Is this vector in the span of S?
I need numbers a and b such that
This is equivalent to the matrix equation
Row reduce the augmented matrix:
The solution is , . That is,
The vector is in the span of S.
On the other hand, try the same thing with the vector . Solving
amounts to the following row reduction:
But the last matrix says " ", a contradiction. The system is inconsistent, so
there are no such numbers a and b. Therefore, is not in the span of S.
To determine whether the vector b is in the span of v1, v2, ..., vn in Fm, form the augmented matrix

[ v1 v2 ... vn | b ]
If the system has a solution, b is in the span, and the coefficients of a
linear combination of the v's which adds up to b are given by a
solution to the system. If the system has no solutions, then b is not in the span of the v's.
Example. The span of the set of vectors
is all of R3. In other words, if v is any vector in R3, there are real numbers
a, b, c such that
In words, any vector is a linear combination of the three vectors in the set.
Obviously, there are sets of three vectors in R3 which don't span. (For example,
take three vectors which are multiples of one another.) Geometrically, the span of a
set of vectors in R3 can be a line through the origin, a plane through the origin, and so on. (The span must contain the origin, because a subspace must contain the
zero vector.)
Example. Determine whether the vector is in the span of the set
(a) .
I want to find a and b such that
This is the system
Form the augmented matrix and row reduce:
The last matrix says and . Therefore, is in the span of S.
(b) .
I want to find a and b such that
This is the system
Form the augmented matrix and row reduce:
The last row of the row reduced echelon matrix says " ". This contradiction
implies that the system has no solutions. Therefore, the vector is not in the span of
S.
Testing for Linear Dependence of Vectors
There are many situations when we might wish to know whether a set of vectors is
linearly dependent, that is, whether one of the vectors is some combination of the others.
Two vectors u and v are linearly independent if the only numbers x and y
satisfying xu + yv = 0 are x = y = 0. If we let
then xu+yv=0 is equivalent to
If u and v are linearly independent, then the only solution to this system of
equations is the trivial solution, x = y = 0. For homogeneous systems this happens
precisely when the determinant is non-zero. We have now found a test for determining whether a given set of vectors is linearly independent: A set of n
vectors of length n is linearly independent if the matrix with these vectors as columns has a non-zero determinant. The set is of course dependent if the determinant is zero.
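The determinant test in code, with small vectors of my own choosing (the vectors in the examples below were lost in conversion):

```python
# n vectors of length n are independent iff det != 0.
import numpy as np

u, v = np.array([1.0, 2.0]), np.array([3.0, 4.0])
d_indep = np.linalg.det(np.column_stack([u, v]))
print(d_indep)            # nonzero, so u and v are independent

w = 2 * u                 # a multiple of u
d_dep = np.linalg.det(np.column_stack([u, w]))
print(d_dep)              # zero, so u and w are dependent
```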
Example
The vectors and are linearly independent since the matrix
has a non-zero determinant.
Example
The vectors u=, v=, and w= are dependent since the
determinant
is zero. To find the relation between u, v, and w we look for constants x, y, and z
such that
This is a homogeneous system of equations. Using Gaussian Elimination, we see
that the matrix
in row-reduced form is
http://www.math.oregonstate.edu/home/programs/undergrad/CalculusQuestStudyGuides/vcalc/system/system.html#hsoln
http://www.math.oregonstate.edu/home/programs/undergrad/CalculusQuestStudyGuides/vcalc/gauss/gauss.html
Thus, y = -3z and 2x = -3y - 5z = -3(-3z) - 5z = 4z, so x = 2z. This implies 0 = xu + yv + zw = 2z·u - 3z·v + z·w, or equivalently w = -2u + 3v. A quick arithmetic check verifies that the vector w is indeed equal to -2u + 3v.