12. AN INDEX TO MATRICES
- definitions, facts and rules -
This index is based on the following goals and observations:
- To give the user quick reference to an actual matrix definition or rule, the index form is preferred. However, the index should to a large extent be self-explanatory.
- The contents are selected in relation to their importance for matrix formulations in solid mechanics.
- The existence of good computer software for the numerical calculations diminishes the need for details on specific procedures.
- The existence of good computer software for formula manipulations means that extended analytical work is possible.
- The index is written by a non-mathematician (but hopefully without errors), and is written for readers with a primary interest in applying the matrix formulation without studying the matrix theory itself.
- Available chapters or appendices in books on solid mechanics are not found extensive enough, and good classic books on linear algebra are found too extensive. For further reference, see e.g.
250
Pauli Pedersen: 12. An index to matrices
Gantmacher, F.R. (1959) ‘The Theory of Matrices’, Chelsea Publ. Co., Vol. I, 374 p., Vol. II, 276 p.
Gel’fand, I.M. (1961) ‘Lectures on Linear Algebra’, Interscience Publ. Inc., 185 p.
Muir, T. (1928) ‘A Treatise on the Theory of Determinants’, Dover Publ. Inc., 766 p.
Noble, B. and Daniel, J.W. (1988) ‘Applied Linear Algebra’, Prentice-Hall, third ed., 521 p.
Strang, G. (1988) ‘Linear Algebra and its Applications’, Harcourt Brace Jovanovich, 505 p.
Strang, G. (1986) ‘Introduction to Applied Mathematics’, Wellesley-Cambridge Press, 758 p.
It will be noticed that the rather lengthy notation with [ ] for matrices and { } for vectors (column matrices) is preferred to the simpler boldface or underscore notations. The reason for this is that the brackets constantly remind the reader that we are dealing with a block of quantities. To miss this point is catastrophic in matrix calculations. Furthermore, the lengthy notation adds to the possibilities for direct graphical interpretation of the formulas.
Cross-references in the index are indicated by boldface. The preliminary advice from colleagues and students is very much appreciated, and I shall be grateful for further criticism and comments that can improve the index.
ADDITION of matrices
Matrices are added by adding the corresponding elements

[C] = [A] + [B] with Cij = Aij + Bij

The matrices must have the same order.
ANTI-METRIC or ANTI-SYMMETRIC matrix
See skew-symmetric matrix.
BILINEAR FORM
For a matrix [A] we define the bilinear form by

{X}T[A]{Y}
BILINEAR INEQUALITY
For a symmetric, positive definite matrix [A] we have by definition the two quadratic forms

{Xa}T[A]{Xa} = ua > 0 for {Xa} ≠ {0}
{Xb}T[A]{Xb} = ub > 0 for {Xb} ≠ {0}

The bilinear form fulfills the inequality

{Xa}T[A]{Xb} ≤ (ua + ub)/2

i.e. it is less than or equal to the mean value of the two quadratic forms.
This follows directly from

({Xa} – {Xb})T[A]({Xa} – {Xb}) ≥ 0

with equality only for {Xa} = {Xb}. Expanding, and using [A]T = [A], we get with the definitions above

ua + ub – 2{Xa}T[A]{Xb} ≥ 0
BIORTHOGONALITY conditions
From the description of the generalized eigenvalue problem (see this), with right and left eigenvectors {Φ}i and {Ψ}i, we have

{Ψ}jT([A] – λi[B]){Φ}i = 0

and

{Ψ}jT([A] – λj[B]){Φ}i = 0

which by subtraction gives

(λi – λj)({Ψ}jT[B]{Φ}i) = 0

For different eigenvalues λi ≠ λj this implies

{Ψ}jT[B]{Φ}i = 0

and thus also

{Ψ}jT[A]{Φ}i = 0

which is termed the biorthogonality conditions.
For a symmetric eigenvalue problem {Ψ}i = {Φ}i (see orthogonality conditions).
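As a numerical illustration (a sketch added here, not part of the original index), the snippet below sets up a small, assumed non-symmetric generalized eigenvalue problem with SciPy. SciPy returns left eigenvectors satisfying {Ψ}H[A] = λ{Ψ}H[B], so the biorthogonality check uses the conjugate transpose.

```python
import numpy as np
from scipy.linalg import eig

# assumed example matrices: [A] non-symmetric, [B] symmetric positive definite
A = np.array([[2.0, 1.0, 0.0],
              [0.5, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
B = np.eye(3) + 0.1 * np.ones((3, 3))

lam, Psi, Phi = eig(A, B, left=True, right=True)  # left and right eigenvectors

# Biorthogonality: {Psi}_j^T [B] {Phi}_i = 0 for lambda_i != lambda_j,
# so this matrix is diagonal (up to round-off) for distinct eigenvalues.
print(np.round(Psi.conj().T @ B @ Phi, 10))
```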
CHARACTERISTIC POLYNOMIAL (generalized)
From the determinant condition

|[A]λ² + [B]λ + [C]| = 0

with the square matrices [A], [B] and [C] all of order n, we obtain a polynomial of order 2n in λ. This polynomial is termed the characteristic polynomial of the triple ([A], [B], [C]).
Specific cases such as

|[A]λ² + [C]| = 0
|[I]λ + [C]| = 0

are often encountered.
CHOLESKI factorization / triangularization
See factorization of a matrix.
COEFFICIENTS of a matrix
See elements of a matrix.
COFACTOR of a matrix element
The cofactor of a matrix element is the corresponding minor with an appropriate sign. If the sum of row and column indices for the matrix element is even, the cofactor is equal to the minor. If this sum is odd, the cofactor is the minor with reversed sign, i.e.

Cofactor(Aij) = (–1)^(i+j) Minor(Aij)
COLUMN matrix
A column matrix is a matrix with only one column, i.e. of order m × 1. The notation { } is used for a column matrix. The name column vector or just vector is also used.
CONGRUENCE transformation
A congruence transformation of a square matrix [A] to a square matrix [B] of the same order is by the regular transformation matrix [T] of the same order

[B] = [T]T[A][T]

Matrices [A] and [B] are said to be congruent matrices; they have the same rank and the same definiteness, but not necessarily the same eigenvalues. A congruence transformation is also an equivalence transformation.
CONJUGATE TRANSPOSE
The conjugate transpose is a transformation of matrices with complex elements. Complex conjugate is denoted by a bar and transpose by a superscript T. With a short notation (from the name Hermitian) we denote the combined transformation as

[A]H = [Ā]T
CONTRACTED NOTATION for a symmetric matrix
For a symmetric matrix, a simpler contracted notation in terms of a row or column matrix is possible. Of the notations which keep the orthogonal transformation, we choose the form with √2-factors multiplied on the off-diagonal elements of the matrix, i.e.

{B} from [A] with

Bi = Aii for i = 1, 2, ..., n
Bn+... = √2 Aij for j > i

(The ordering within {B}, symbolized by n+..., is not specified.)
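A minimal NumPy sketch (added here for illustration; the helper name and the chosen off-diagonal ordering are assumptions, since the text leaves the ordering unspecified) showing why the √2-factors are used: the 2-norm of the contracted vector equals the Frobenius norm of the matrix.

```python
import numpy as np

def contract(A):
    """Contract a symmetric n x n matrix to a vector, with sqrt(2) on the
    off-diagonal terms so that the Euclidean length of the vector equals
    the Frobenius norm of the matrix."""
    n = A.shape[0]
    diag = [A[i, i] for i in range(n)]
    off = [np.sqrt(2.0) * A[i, j] for i in range(n) for j in range(i + 1, n)]
    return np.array(diag + off)

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 5.0],
              [3.0, 5.0, 6.0]])
b = contract(A)
print(np.linalg.norm(b), np.linalg.norm(A))  # equal: |{B}| = Frobenius norm of [A]
```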
CONVEX SPACE by positive definite matrix
For a symmetric, positive definite matrix [A] we have by definition the two quadratic forms

{Xa}T[A]{Xa} = ua ; 0 < ua
{Xb}T[A]{Xb} = ub ; 0 < ub ≤ ua

The matrix [A] describes a convex space such that for

{Xα} = α{Xa} + (1 – α){Xb} ; 0 ≤ α ≤ 1

we have for all values of α

{Xα}T[A]{Xα} = uα ≤ ua

Inserting directly we have with [A]T = [A]

(α{Xa}T + (1 – α){Xb}T)[A](α{Xa} + (1 – α){Xb})
= α²{Xa}T[A]{Xa} + (1 – α)²{Xb}T[A]{Xb} + 2α(1 – α){Xa}T[A]{Xb}
= α²ua + (1 – α)²ub + 2α(1 – α){Xa}T[A]{Xb}

From the bilinear inequality we have

{Xa}T[A]{Xb} ≤ (ua + ub)/2

and thus, with ub ≤ ua, we can substitute greater values and obtain

{Xα}T[A]{Xα} ≤ α²ua + (1 – α)²ua + 2α(1 – α)ua = ua
DEFINITENESS
For a symmetric matrix the following notions are used:

- positive definite: all eigenvalues are positive
- positive semi-definite: all eigenvalues are non-negative
- negative definite: all eigenvalues are negative
- negative semi-definite: all eigenvalues are non-positive
- indefinite: both positive and negative eigenvalues

See specifically positive definite, negative definite and indefinite for alternative statements of these conditions.
DETERMINANT of a matrix
The determinant of a square matrix is a scalar, calculated as a sum of products of elements from the matrix. The symbol of two vertical lines

det([A]) = |[A]|

is used for this quantity.
For a square matrix of order two the determinant is

|[A]| = A11A22 – A12A21

For a square matrix of order three the determinant is

|[A]| = A11A22A33 + A12A23A31 + A13A21A32 – A31A22A13 – A32A23A11 – A33A21A12

We note that for each product the number of elements is equal to the order of the matrix, and that in each product a row or a column is only represented by one element. In total, for a matrix of order n there are n! terms to be summed.
For further calculation procedures see determinants by minors/cofactors.
DETERMINANTS BY MINORS / COFACTORS
A determinant can be calculated in terms of cofactors (or minors), by expansion in terms of an arbitrary row or column.
As an example, for a matrix of order three, expansion of the third column yields

|[A]| = A13 Minor(A13) – A23 Minor(A23) + A33 Minor(A33)

See determinant of a matrix for direct comparison.
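A small recursive sketch of this expansion (added for illustration, not from the original text; expansion is along the first row rather than a column, which is an equally valid choice). It is O(n!) work, so a library routine is preferred in practice.

```python
import numpy as np

def det_by_cofactors(A):
    """Determinant by recursive cofactor expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # drop row 0, column j
        total += (-1) ** j * A[0, j] * det_by_cofactors(minor)
    return total

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
print(det_by_cofactors(A), np.linalg.det(A))  # both -3.0
```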
DETERMINANT OF AN INVERSE matrix
The product of the determinants for a regular matrix [A] and its inverse [A]–1 is equal to 1:

|[A]–1| = 1 / |[A]|
DETERMINANT OF A PRODUCT of matrices
The determinant of a product of square matrices is equal to the product of the individual determinants, i.e.

|[A][B]| = |[A]||[B]|
DETERMINANT OF A TRANSPOSED matrix
The determinant of a transposed square matrix is equal to the determinant of the matrix itself, i.e.

|[A]T| = |[A]|
DIAGONAL matrix
A diagonal matrix is a matrix where all off-diagonal elements have the value zero

[A] is a diagonal matrix when Aij = 0 for i ≠ j

and at least one diagonal element is non-zero. This definition also holds for non-square matrices, as in singular value decomposition.
DIFFERENTIAL matrix
See functional matrix.
DIFFERENTIATION of a matrix
Differentiation of a matrix is carried out by differentiation of each element

[C] = d[A]/db with Cij = dAij/db
DIMENSIONS of a matrix
See order of a matrix.
DOT PRODUCT of two vectors
See scalar product of two vectors.
DYADIC PRODUCT of two vectors
The dyadic product of two vectors {A} and {B} of the same order n results in a square matrix [C] of order n × n, but only with rank 1

[C] = {A}{B}T with Cij = AiBj

Dyadic products of vectors of different order can also be defined, resulting in a matrix of order m × n.
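A one-line NumPy illustration (added here; the example vectors are arbitrary) of the dyadic product and its rank:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

C = np.outer(a, b)                 # dyadic product {A}{B}^T, a 3 x 3 matrix
print(np.linalg.matrix_rank(C))    # 1: every row is a multiple of {B}^T
```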
EIGENPAIR
The eigenpair λi, {Φ}i is a solution to an eigenvalue problem. The eigenvector {Φ}i corresponds to the eigenvalue λi.
EIGENVALUES of a matrix
The eigenvalues λi of a square matrix [A] are the solutions to the standard form of the eigenvalue problem, with

([A] – λi[I]){Φ}i = {0} ⇒ |[A] – λi[I]| = 0

which gives a characteristic polynomial.
EIGENVALUE PROBLEM
With [A] and [B] being two square matrices of order n, the generalized eigenvalue problem is defined by

([A] – λi[B]){Φ}i = {0} for i = 1, 2, ..., n

or by

{Ψ}iT([A] – λi[B]) = {0}T for i = 1, 2, ..., n

The pairs of eigenvalue and eigenvector are λi, {Φ}i and λi, {Ψ}i, with {Φ}i as right eigenvector and {Ψ}i as left eigenvector. The eigenvalue problem has n solutions, with the possibility of multiplicity.
With [B] being an identity matrix we have the standard form of the eigenvalue problem, while for [B] not being an identity matrix the name generalized eigenvalue problem is used.
EIGENVECTOR
An eigenvector {Φ}i is the vector part of a solution to an eigenvalue problem. The word eigen reflects the fact that the vector is transformed into itself except for a factor, the eigenvalue λi.
ELEMENTS of a matrix
The elements of a matrix [A] are the individual entries Aij. In a matrix of order m × n there are mn elements Aij, for i = 1, 2, ..., m, j = 1, 2, ..., n. Elements are also called the members or the coefficients of the matrix.
EQUALITY of matrices
Two matrices of the same order are equal if the corresponding elements of the two matrices are equal, i.e.

[A] = [B] if Aij = Bij for all ij
EQUIVALENCE transformations
An equivalence transformation of a matrix [A] to a matrix [B] (not necessarily square matrices) by the two square, regular transformation matrices [T1] and [T2] is

[B] = [T1][A][T2]

Matrices [A] and [B] are said to be equivalent matrices and have the same rank.
EXPONENTIAL of a matrix
The exponential of a square matrix [A] is defined by its power series expansion

e^([A]t) := [I] + [A]t + [A]²t²/2! + [A]³t³/3! + ···

The series always converges, and the exponential properties are kept, i.e.

e^([A]t) e^([A]s) = e^([A](t+s)) ; e^([A]t) e^(–[A]t) = [I] ; d(e^([A]t))/dt = [A] e^([A]t)
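A short SciPy check of these properties (added for illustration; the example matrix is an arbitrary assumption). SciPy's expm evaluates the matrix exponential numerically rather than by summing the series.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])   # assumed example: a rotation generator
t, s = 0.3, 0.5

print(np.allclose(expm(A * t) @ expm(A * s), expm(A * (t + s))))  # True
print(np.allclose(expm(A * t) @ expm(-A * t), np.eye(2)))         # True
```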
FACTORIZATION of a matrix
A symmetric, regular matrix [A] of order n can be factorized into the product of a lower triangular matrix [L], a diagonal matrix [B] and the upper triangular matrix [L]T, all of order n

[A] = [L][B][L]T

In a Gauss factorization the diagonal elements of [L] are all 1.
A Choleski factorization is only possible for positive semi-definite matrices; then [B] = [I] and we get

[A] = [L][L]T

with Lii not necessarily equal to 1.
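A minimal NumPy sketch of the Choleski case (added here; the example matrix is assumed, and NumPy's routine requires it to be strictly positive definite):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])       # symmetric, positive definite

L = np.linalg.cholesky(A)        # lower triangular, [A] = [L][L]^T
print(L)
print(np.allclose(L @ L.T, A))   # True
```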
FROBENIUS norm of a matrix
The Frobenius norm of a matrix [A] is defined as the square root of the sum of the squares of all the elements of [A].
For a square matrix of order 2 we get

Frobenius = (A11² + A22² + A12² + A21²)^½

and thus for a symmetric matrix equal to the square root of the invariant I3.
For a square matrix of order 3 we get

Frobenius = ((A11² + A21² + A31²) + (A22² + A12² + A32²) + (A33² + A13² + A23²))^½

and thus for a symmetric matrix equal to the square root of the invariant I4.
FULL RANK
See rank of a matrix.
FUNCTIONAL MATRIX
The functional matrix [G] consists of the partial derivatives of the elements of a vector {A} of order m with respect to the elements of a vector {B} of order n. Thus the functional matrix is of order m × n

[G] = ∂{A}/∂{B} with Gij = ∂Ai/∂Bj

The name gradient matrix is also used. A square functional matrix is named a Jacobi matrix, and the determinant of this matrix the Jacobian.
GAUSS factorization / triangularization
See factorization of a matrix.
GENERALIZED EIGENVALUE PROBLEM
See eigenvalue problem.
GEOMETRIC vector
A vector of order two or three in the Euclidean plane or space. By a geometric vector we mean an oriented piece of a line (an "arrow"). See vectors.
GRADIENT matrix
See functional matrix.
HERMITIAN matrix
A square matrix [A] is termed Hermitian if it is not changed by the conjugate transpose transformation, i.e.

[A]H = [A]

Every eigenvalue of a Hermitian matrix is real, and the eigenvectors are mutually orthogonal, as for symmetric real matrices.
HESSIAN matrix
A Hessian matrix [H] is a square, symmetric matrix containing the second order derivatives of a scalar F with respect to the vector {A}

[H] = ∂²F/(∂{A}∂{A}) with Hij = ∂²F/(∂Ai∂Aj)
HURWITZ determinants
The Hurwitz determinants up to order eight are defined from the matrix

| a1  a3  a5  a7  0   0   0   0  |
| a0  a2  a4  a6  a8  0   0   0  |
| 0   a1  a3  a5  a7  0   0   0  |
| 0   a0  a2  a4  a6  a8  0   0  |
| 0   0   a1  a3  a5  a7  0   0  |
| 0   0   a0  a2  a4  a6  a8  0  |
| 0   0   0   a1  a3  a5  a7  0  |
| 0   0   0   a0  a2  a4  a6  a8 |

to be read in the sense that Hi is the determinant of order i defined in the upper left corner (principal submatrix). More specifically,

H1 = a1
H2 = a1a2 – a0a3
H3 = H2a3 – (a1a4 – a0a5)a1
···

If the highest order is n, then am = 0 for m > n, and therefore the highest Hurwitz determinant is given by

Hn = Hn–1 an
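A small NumPy sketch (an assumed implementation, not from the text) that builds the Hurwitz matrix entry-wise and evaluates the leading determinants. The coefficient ordering a0, a1, ..., an for a0x^n + a1x^(n-1) + ... + an is an assumption of this sketch.

```python
import numpy as np

def hurwitz_determinants(a):
    """Hurwitz determinants H1..Hn for coefficients a = [a0, a1, ..., an].
    Entry (i, j) of the Hurwitz matrix is a_(2j - i), 1-based indices,
    and zero outside the range 0..n."""
    n = len(a) - 1
    H = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            k = 2 * j - i
            if 0 <= k <= n:
                H[i - 1, j - 1] = a[k]
    return [np.linalg.det(H[:i, :i]) for i in range(1, n + 1)]

# x^3 + 6x^2 + 11x + 6 = (x+1)(x+2)(x+3): all roots in the left half plane,
# so all Hurwitz determinants are positive.
print(hurwitz_determinants([1.0, 6.0, 11.0, 6.0]))  # [6.0, 60.0, 360.0]
```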
IDENTITY matrix
An identity matrix [I] is a square matrix where all diagonal elements have the value one and all off-diagonal elements have the value zero

[I] := [A] with Aii = 1, Aij = 0 for i ≠ j

The name unit matrix is also used for the identity matrix.
INDEFINITE matrix
A square, real matrix [A] is called indefinite if positive as well as negative values of {X}T[A]{X} exist, i.e.

{X}T[A]{X} ≷ 0

depending on the actual vector (column matrix) {X}.
INTEGRATION of a matrix
The integral of a matrix is the integral of each element

[C] = ∫[A]dx with Cij = ∫Aij dx
INVARIANTS of similar matrices
For matrices which transform by similarity transformations we can determine a number of invariants, i.e. scalars which do not change under the transformation. The number of independent invariants is equal to the order of the matrix, and as any combination of invariants is also an invariant, many different forms are possible. Important invariants include the eigenvalues, the trace, the determinant and the Frobenius norm. The principal invariants are the coefficients of the characteristic polynomial.
INVARIANTS of symmetric, similar matrices of order 2
For the square, symmetric matrix [A] of order 2

[A] = | A11 A12 |
      | A12 A22 |

the invariants are the trace I1

I1 = A11 + A22

and the determinant I2

I2 = A11A22 – A12²

Taking as an alternative the invariant I3

I3 = (I1)² – 2I2 = A11² + A22² + 2A12²

we get the squared length of the vector {A} contracted from [A] by

{A}T = {A11, A22, √2 A12}

Setting up the polynomial for the eigenvalues of [A] we find

λ² – I1λ + I2 = 0

and again see the importance of the invariants I1 and I2, termed the principal invariants.
INVARIANTS of symmetric, similar matrices of order 3
For the square, symmetric matrix [A] of order 3

      | A11 A12 A13 |
[A] = | A12 A22 A23 |
      | A13 A23 A33 |

the invariants are the trace I1

I1 = A11 + A22 + A33

the norm I2

I2 = (A11A22 – A12²) + (A22A33 – A23²) + (A11A33 – A13²)

and the determinant I3

I3 = |[A]|

These three invariants are the principal invariants, and they give the characteristic polynomial by

λ³ – I1λ² + I2λ – I3 = 0

The squared length of the vector {A} contracted from [A] by

{A}T = {A11, A22, A33, √2 A12, √2 A13, √2 A23}

is

I4 = A11² + A22² + A33² + 2A12² + 2A13² + 2A23²

related to the principal invariants by

I4 = (I1)² – 2I2

and therefore another invariant, equal to the squared Frobenius norm.
INVERSE of a matrix
The inverse of a square, regular matrix is the square matrix for which the product of the two matrices is the identity matrix. The notation [ ]–1 is used for the inverse

[A]–1[A] = [A][A]–1 = [I]
INVERSE OF A PARTITIONED matrix
From the matrix product in partitioned form

| [A] [B] | | [E] [F] |   | [I] [0] |
| [C] [D] | | [G] [H] | = | [0] [I] |

follow the four matrix equations

[A][E] + [B][G] = [I] ; [A][F] + [B][H] = [0]
[C][E] + [D][G] = [0] ; [C][F] + [D][H] = [I]

Solving these we obtain (in two alternative forms)

[E] = ([A] – [B][D]–1[C])–1          [E] = [A]–1 – [A]–1[B][G]
[F] = – [E][B][D]–1                  [F] = – [A]–1[B][H]
[G] = – [D]–1[C][E]                  [G] = – [H][C][A]–1
[H] = [D]–1 – [D]–1[C][F]            [H] = ([D] – [C][A]–1[B])–1

The special case of an upper triangular matrix, i.e. [C] = [0], gives

[E] = [A]–1                          [F] = – [A]–1[B][D]–1
[G] = [0]                            [H] = [D]–1

The special case of a symmetric matrix, i.e. [C] = [B]T, gives

[E] = ([A] – [B][D]–1[B]T)–1         [E] = [A]–1 – [A]–1[B][G]
[F] = – [E][B][D]–1 = [G]T           [F] = – [A]–1[B][H]
[G] = – [D]–1[B]T[E]                 [G] = – [H][B]T[A]–1 = [F]T
[H] = [D]–1 – [D]–1[B]T[F]           [H] = ([D] – [B]T[A]–1[B])–1

The matrices to be inverted are assumed to be regular.
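A NumPy verification of the symmetric case (added here for illustration; the block matrices are arbitrary assumed examples):

```python
import numpy as np

# block matrix [[A, B], [B^T, D]] and its inverse via the formulas above
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0], [2.0]])
D = np.array([[5.0]])

E = np.linalg.inv(A - B @ np.linalg.inv(D) @ B.T)  # [E] = ([A]-[B][D]^-1[B]^T)^-1
F = -E @ B @ np.linalg.inv(D)                      # [F] = -[E][B][D]^-1
G = F.T                                            # [G] = [F]^T (symmetric case)
H = np.linalg.inv(D) - np.linalg.inv(D) @ B.T @ F  # [H] = [D]^-1 - [D]^-1[B]^T[F]

M = np.block([[A, B], [B.T, D]])
Minv = np.block([[E, F], [G, H]])
print(np.allclose(M @ Minv, np.eye(3)))            # True
```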
INVERSE OF A PRODUCT
The inverse of a product of square, regular matrices is the product of the inverses of the individual multipliers, but in reverse sequence

([A][B])–1 = [B]–1[A]–1

It follows directly from

([B]–1[A]–1)([A][B]) = [I]
INVERSE OF ORDER TWO
The inverse of a matrix of order two is given by

| A11 A12 |–1              |  A22  –A12 |
| A21 A22 |    = 1/|[A]| · | –A21   A11 |

with the determinant given by

|[A]| = A11A22 – A21A12
INVERSE OF ORDER THREE
The inverse of a matrix of order three is given by

                 | (A22A33 – A32A23)  (A32A13 – A12A33)  (A12A23 – A22A13) |
[A]–1 = 1/|[A]| ·| (A31A23 – A21A33)  (A11A33 – A31A13)  (A21A13 – A11A23) |
                 | (A21A32 – A31A22)  (A31A12 – A11A32)  (A11A22 – A21A12) |

with the determinant given by

|[A]| = A11A22A33 + A12A23A31 + A13A21A32 – A31A22A13 – A32A23A11 – A33A21A12
INVERSE OF TRANSPOSED matrix
The inverse and the transpose transformations can be interchanged

([A]T)–1 = ([A]–1)T = [A]–T

from which follows the definition of the symbol [ ]–T.
JACOBI matrix
The Jacobi matrix [J] is a square functional matrix. We define it here as the matrix containing the derivatives of the elements of a vector {A} with respect to the elements of a vector {B}, both of order n

[J] = ∂{A}/∂{B} with Jij = ∂Ai/∂Bj
JACOBIAN determinant
The Jacobian J is the determinant of the Jacobi matrix, i.e.

J = |[J]|

and thus a scalar.
JORDAN BLOCKS
A Jordan block is a square upper triangular matrix of order equal to the multiplicity of an eigenvalue with a single corresponding eigenvector. All diagonal elements are the eigenvalue, all elements of the first upper codiagonal are 1, and the remaining elements are zero. Thus the Jordan block [Jλ] of order 3 corresponding to the eigenvalue λ is

       | λ 1 0 |
[Jλ] = | 0 λ 1 |
       | 0 0 λ |

Multiple eigenvalues with linearly independent eigenvectors belong to different Jordan blocks.
Jordan blocks of order 1 are most common, as these result for eigenvalue problems described by symmetric matrices.
JORDAN FORM
The Jordan form of a square matrix [A] is the similar matrix [J] consisting of Jordan blocks along the diagonal (block diagonal), with the remaining elements equal to zero.
Only when we have multiple eigenvalues with a single eigenvector will the Jordan form be different from a pure diagonal form. The Jordan form represents the closest-to-diagonal outcome of a similarity transformation.
LAPLACIAN EXPANSION of determinants
See determinants by minors/cofactors.
LEFT eigenvector
The left eigenvector {Ψ}i (as row matrix {Ψ}iT) corresponding to the eigenvalue λi is defined by

{Ψ}iT([A] – λi[B]) = {0}T

see eigenvalue problem.
LENGTH of a vector
The length |{A}| of a vector is the square root of the scalar product of the vector with itself

|{A}| = ({A}T{A})^½

A geometric vector has an invariant length, but this does not hold for all algebraic vector definitions.
LINEAR DEPENDENCE / LINEAR INDEPENDENCE
Consider a matrix [A] of order m × n, constituting the n vectors {A}i for i = 1, 2, ..., n. If there exists a non-zero vector {B} of order n such that

[A]{B} = [{A}1 {A}2 ··· {A}n]{B} = {0}

then the vectors {A}i are said to be linearly dependent. The vector {B} contains a set of linear combination factors.
If on the other hand

[A]{B} = {0} only for {B} = {0}

then the vectors {A}i are said to be linearly independent.
MEMBERS of a matrix
See elements of a matrix.
MINOR of a matrix element
The minor of a matrix element is a determinant, i.e. a scalar. The actual square matrix corresponding to this determinant is obtained by omitting the row and the column corresponding to the actual element. Thus, for a matrix of order 3, the minor corresponding to the element A12 becomes

Minor(A12) = | A21 A23 |
             | A31 A33 | = A21A33 – A31A23
MODAL matrix
The modal matrix corresponding to an eigenvalue problem is a square matrix constituting all the linearly independent eigenvectors

[Φ] = [{Φ}1 {Φ}2 ··· {Φ}n]

and the generalized eigenvalue problem can then be stated as

[A][Φ] – [B][Φ][Γ] = [0]

Note that the diagonal matrix [Γ] of eigenvalues must be post-multiplied.
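A SciPy sketch of this matrix statement (added for illustration; the matrices are arbitrary assumed examples):

```python
import numpy as np
from scipy.linalg import eig

A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 0.2], [0.2, 1.0]])

lam, Phi = eig(A, B)      # columns of Phi are the eigenvectors {Phi}_i
Gamma = np.diag(lam)      # diagonal matrix of eigenvalues

# [A][Phi] - [B][Phi][Gamma] = [0], with [Gamma] post-multiplied
print(np.allclose(A @ Phi - B @ Phi @ Gamma, 0.0))  # True
```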
MULTIPLICATION of two matrices
The product of two matrices is a matrix where the resulting element (ij) is the scalar product of the i-th row of the first matrix with the j-th column of the second matrix

[C] = [A][B] with Cij = Σ(k=1..K) AikBkj

The number of columns in the first matrix must be equal to the number of rows in the second matrix (here K).
MULTIPLICATION BY SCALAR
A matrix is multiplied by a scalar by multiplying each element by the scalar

[C] = b[A] with Cij = bAij
MULTIPLICITY OF EIGENVALUES
In eigenvalue problems the same eigenvalue may be a multiple solution, mostly (but not always) corresponding to linearly independent eigenvectors. As an example, a bimodal solution is a solution where two eigenvectors correspond to the same eigenvalue. Multiplicity of eigenvalues is also named algebraic multiplicity.
For non-symmetric eigenvalue problems multiple eigenvalues may correspond to the same eigenvector. We then talk about, e.g., a double eigenvalue/eigenvector solution (by contrast to a bimodal solution, where only the eigenvalue is the same). This multiplicity is described by the geometric multiplicity of the eigenvalue. For a specific eigenvalue we have

1 ≤ geometric multiplicity ≤ algebraic multiplicity

Note that the geometric multiplicity of an eigenvalue counts the number of linearly independent eigenvectors for this eigenvalue, and not the number of times that the eigenvector is a solution.
NEGATIVE DEFINITE matrix
A square, real matrix [A] is called negative or negative definite if for any non-zero vector (column matrix) {X} we have

{X}T[A]{X} < 0

The matrix is called negative semi-definite if

{X}T[A]{X} ≤ 0
NORMALIZATION of a vector
Eigenvectors can be multiplied by an arbitrary constant (even a complex constant). Thus we have the possibility of a convenient scaling, and often we choose the weighted norm. Here we scale the vector {A}i to the normalized vector {Φ}i

{Φ}i = {A}i / ({A}iT[B]{A}i)^½

by which we obtain

{Φ}iT[B]{Φ}i = 1

Alternative normalizations are by other norms, such as the 2-norm

{Φ}i = {A}i / ({A}iT{A}i)^½

or by the ∞-norm

{Φ}i = {A}i / Max|Aj|
NULL matrix
A null matrix (symbolized [0]) is a matrix where all elements have the value zero

[0] := [A] with Aij = 0 for all ij

A null matrix is also called a zero matrix. The null vector is a special case.
ONE matrix
A one matrix (symbolized [1]) is a matrix where all elements have the value one

[1] := [A] with Aij = 1 for all ij

The one vector is a special case. Note the contrast to the identity (unit) matrix [I], which is a diagonal matrix.
ORDER of a matrix
The order of a matrix is (number of rows) × (number of columns). Usually the letters m × n are used; a row matrix then has the order 1 × n while a column matrix has the order m × 1. For square matrices a single number gives the order. The order of a matrix is also called the dimensions or the size of the matrix.
ORTHOGONALITY conditions
For an eigenvalue problem ([A] – λi[B]){Φ}i = {0} with symmetric matrices [A] and [B], the biorthogonality conditions simplify to

{Φ}jT[B]{Φ}i = 0 , {Φ}jT[A]{Φ}i = 0

for non-equal eigenvalues, i.e. λi ≠ λj.
For standard form eigenvalue problems with [A] symmetric this further simplifies to

{Φ}jT{Φ}i = 0 , {Φ}jT[A]{Φ}i = 0 for λi ≠ λj

Using normalization of the eigenvectors we can obtain

{Φ}iT[B]{Φ}i = 1 or {Φ}iT{Φ}i = 1

and thus

{Φ}iT[A]{Φ}i = λi

Orthogonal, normalized eigenvectors are termed orthonormal.
ORTHOGONAL transformations
An orthogonal transformation of a square matrix [A] to a square matrix [B] of the same order is by an orthogonal transformation matrix

[T]–1 = [T]T

and thus the transformation is both a congruence transformation and a similarity transformation

[B] = [T]T[A][T] = [T]–1[A][T]

Matrices [A] and [B] are said to be orthogonally similar, and have the same rank, same eigenvalues, same trace and same determinant (same invariants).
If matrix [A] is symmetric, matrix [B] is also symmetric, which does not hold generally for similar matrices.
ORTHONORMAL
An orthonormal set of vectors {X}i fulfills the conditions

{X}iT[A]{X}j = 0 for i ≠ j
{X}iT[A]{X}j = 1 for i = j
PARTITIONING of matrices
Partitioning of matrices is a very important tool for gaining insight and overview. By the example

[A] = | [A]11 [A]12 |
      | [A]21 [A]22 |

we see that the submatrices are given indices exactly like the matrix elements themselves.
Multiplication at submatrix level is identical to multiplication at element level. For an example see inverse of a partitioned matrix.
POSITIVE DEFINITE matrix
A square, real matrix [A] is called positive or positive definite if for any non-zero vector (column matrix) {X} we have

{X}T[A]{X} > 0

The matrix is called positive semi-definite if

{X}T[A]{X} ≥ 0
POSITIVE DEFINITE matrix conditions
The conditions for a square matrix [A] to be positive definite can be stated in many alternative forms. From the Routh-Hurwitz-Lienard-Chipart theorem we can, directly in terms of Hurwitz determinants, obtain the necessary and sufficient conditions for eigenvalues with positive real part.
For a matrix of order 2 we get that

[A] = | A11 A12 |
      | A21 A22 |

has positive real part of all eigenvalues if and only if

(A11 + A22) > 0 and A11A22 – A12A21 > 0

and the conditions for a symmetric matrix (A21 = A12) to be positive definite are then

A11 > 0 , A22 > 0 and A11A22 – A12² > 0

For a matrix of order 3 we get that

      | A11 A12 A13 |
[A] = | A21 A22 A23 |
      | A31 A32 A33 |

has positive real part of all eigenvalues if and only if

I1 = (A11 + A22 + A33) > 0
I2 = (A11A22 – A21A12) + (A22A33 – A32A23) + (A11A33 – A31A13) > 0
I3 = |[A]| > 0 and I1I2 – I3 > 0

and the conditions for a symmetric matrix to be positive definite are then

A11 > 0 , A22 > 0 , A33 > 0
A11A22 – A12² > 0 , A22A33 – A23² > 0 , A11A33 – A13² > 0 , |[A]| > 0
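A NumPy sketch of a related standard determinant test for symmetric matrices, the leading-principal-minor (Sylvester) criterion, checked against the eigenvalue definition (added for illustration; the example matrix is assumed):

```python
import numpy as np

def leading_minors_positive(A):
    """Symmetric matrix is positive definite iff every leading
    principal minor is positive (Sylvester criterion)."""
    n = A.shape[0]
    return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, n + 1))

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
print(leading_minors_positive(A))          # True
print(np.all(np.linalg.eigvalsh(A) > 0))   # True: all eigenvalues positive
```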
POSITIVE DEFINITE SUM of matrices
Assume that the two square, real matrices [A] and [B] of the same order are positive definite; then their sum is also positive definite. Using the symbol ≻ for positive definite, we have

[A] ≻ 0 , [B] ≻ 0 ⇒ ([A] + [B]) ≻ 0

It follows directly from the definition

{X}T([A] + [B]){X} = {X}T[A]{X} + {X}T[B]{X} > 0

because both terms are positive for {X} ≠ {0}.
From this it also follows directly that

α[A] + (1 – α)[B] ≻ 0 for 0 ≤ α ≤ 1

which implies that [A] ≻ 0 is a convex condition.
Identical relations hold for negative definite matrices.
POWER of a matrix
The power of a square matrix [A] is symbolized by

[A]^p = [A][A] ··· [A] (p times)
[A]^(–p) = [A]–1[A]–1 ··· [A]–1 (p times)
[A]^0 = [I] ; [A]^p [A]^r = [A]^(p+r) ; ([A]^p)^r = [A]^(pr)
PRINCIPAL INVARIANTS
The principal invariants are the coefficients of the characteristic polynomial for similar matrices.
PRINCIPAL SUBMATRIX
The principal submatrices of the square matrix [A] of order n are the n square matrices of order k (1 ≤ k ≤ n) found in the upper left corner of [A].
PRODUCT of two matrices
See multiplication of two matrices.
PRODUCTS of two vectors
Three different products of vectors are defined: the scalar product or dot product, resulting in a scalar; the vector product or cross product, resulting in a vector and especially used for vectors of order three; and finally the dyadic product, resulting in a matrix.
PROJECTION matrix
A projection matrix different from the identity matrix [I] is a square, singular matrix that is unchanged when multiplied by itself

[P][P] = [P] , [P]–1 non-existent
PSEUDOINVERSE of a matrix
The pseudoinverse [A+] of a rectangular matrix [A] of order m × n always exists. When [A] is a regular matrix the pseudoinverse is the same as the inverse. Given the singular value decomposition of [A] by

[A] = [T1][B][T2]T

then with the diagonal matrix [C] of order n × m defined from the diagonal matrix [B] of order m × n by

Cii = 1/Bii for Bii ≠ 0 (other Cij = 0)

the pseudoinverse [A+] is given by the product

[A+] = [T2][C][T1]T

Case 1: [A] is an n × m matrix with n > m. The solution to [A]{X} = {B} with the objective of minimizing the error ({e}T{e})^½, {e} = [A]{X} – {B}, is given by

{X} = ([A]T[A])–1[A]T{B}

Case 2: [A] is an n × m matrix with n < m. The solution to [A]{X} = {B} with the objective of minimizing the length of the solution ({X}T{X})^½, is given by

{X} = [A]T([A][A]T)–1{B}
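A NumPy illustration of Case 1 (added here; the overdetermined system is an assumed example): the pseudoinverse reproduces the normal-equation least-squares solution.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])      # n x m with n > m (overdetermined)
b = np.array([1.0, 2.0, 2.0])

x_pinv = np.linalg.pinv(A) @ b              # via the pseudoinverse [A+]
x_ls = np.linalg.inv(A.T @ A) @ A.T @ b     # ([A]^T[A])^-1 [A]^T {B}
print(np.allclose(x_pinv, x_ls))            # True: same least-squares solution
```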
QUADRATIC FORM
By a symmetric matrix [A] of order n we define the associated quadratic form

{X}T[A]{X}

which gives a homogeneous, second order polynomial in the n parameters constituting the vector {X}. The quadratic form is used in many applications, and thus knowledge about its transformations, definiteness etc. is of vital importance.
RANK of a matrix
The rank of a matrix is equal to the number of linearly independent rows (or columns) of the matrix. The rank is not changed by the transpose transformation.
From a matrix [A] of order m × n we can, by omitting a number of rows and/or a number of columns, get square matrices of any order from 1 to the minimum of m, n. Normally there will be several different matrices of each order.
The rank r is defined as the largest order of these square matrices for which the determinant is non-zero, i.e. the order of the "largest" regular matrix we can extract from [A].
Only a zero matrix has the rank 0. The rank of any other matrix will be

1 ≤ r ≤ min(m, n)

If r = min(m, n) we say that the matrix has full rank.
REAL EIGENVALUES
With [A] and [B] being two real and symmetric matrices, then for the eigenvalue problem

([A] – λi[B]){Φ}i = {0}

- if λi is complex, then {Φ}i is also complex ([A] and [B] regular)
- if λi, {Φ}i is a complex pair of solution, then the complex conjugated pair λ̄i, {Φ̄}i is also a solution.

The condition derived under biorthogonality conditions for these two pairs is

(λi – λ̄i)({Φ̄}iT[B]{Φ}i) = 0

which expressed in real and imaginary parts is

2 Im(λi)(Re({Φ}iT)[B] Re({Φ}i) + Im({Φ}iT)[B] Im({Φ}i)) = 0

It now follows that if [B] is a positive definite matrix, then Im(λi) = 0 and we have real eigenvalues.
REGULAR matrix
A non-singular matrix, see singular matrix.
RIGHT eigenvector
The right eigenvector {Φ}i (column matrix) corresponding to the eigenvalue λi is defined by

([A] – λi[B]){Φ}i = {0}

see eigenvalue problem.
ROTATIONAL transformation matrices
For two dimensional problems we shall list some important orthogonal transformation matrices. The elements of these matrices involve trigonometric functions of the angle θ between the two Cartesian coordinate systems. (Figure omitted: the two Cartesian coordinate systems with the definition of the angle θ.) For short notation we define

c1 = cos θ , s1 = sin θ ; c2 = cos 2θ , s2 = sin 2θ ; c4 = cos 4θ , s4 = sin 4θ

We then have for rotation of a geometric vector {V} of order 2

{V}y = [Γ]{V}x with [Γ] = | c1 –s1 |
                          | s1  c1 | ; [Γ]–1 = [Γ]T

For a symmetric matrix [A] of order 2 × 2, contracted with the √2-factor to the vector {A}T = {A11, A22, √2 A12}, we have

{A}y = [T]{A}x with [T]–1 = [T]T and

[T] = (1/2) | 1+c2     1–c2     √2 s2 |
            | 1–c2     1+c2    –√2 s2 |
            | –√2 s2   √2 s2    2c2   |

For a symmetric matrix [B] of order 3 × 3, contracted with the √2-factor to the vector {B}T = {B11, B22, B33, √2 B12, √2 B13, √2 B23}, we have

{B}y = [R]{B}x with [R]–1 = [R]T and

[R] = (1/8) | 3+4c2+c4   3–4c2+c4   2–2c4        √2–√2 c4      4s2+2s4    4s2–2s4  |
            | 3–4c2+c4   3+4c2+c4   2–2c4        √2–√2 c4     –4s2+2s4   –4s2–2s4  |
            | 2–2c4      2–2c4      4+4c4       –2√2+2√2 c4   –4s4        4s4      |
            | √2–√2 c4   √2–√2 c4  –2√2+2√2 c4   6+2c4        –2√2 s4     2√2 s4   |
            | –4s2–2s4   4s2–2s4    4s4          2√2 s4        4c2+4c4    4c2–4c4  |
            | –4s2+2s4   4s2+2s4   –4s4         –2√2 s4        4c2–4c4    4c2+4c4  |

Note that the listed orthogonal transformation matrices [Γ], [T] and [R] only refer to two dimensional problems, where the rotation is specified by a single parameter (the angle θ).
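A small NumPy sketch of the matrix [T] (added for illustration). Since the figure defining the sense of θ is omitted, only convention-independent properties are checked: orthogonality and invariance of the contracted vector's length.

```python
import numpy as np

def T_contracted(theta):
    """[T] of the text for the contracted vector {A11, A22, sqrt(2) A12};
    the sign convention of theta follows the omitted figure."""
    c2, s2 = np.cos(2 * theta), np.sin(2 * theta)
    r2 = np.sqrt(2.0)
    return 0.5 * np.array([[1 + c2, 1 - c2, r2 * s2],
                           [1 - c2, 1 + c2, -r2 * s2],
                           [-r2 * s2, r2 * s2, 2 * c2]])

T = T_contracted(0.4)
print(np.allclose(T @ T.T, np.eye(3)))   # True: [T]^-1 = [T]^T
a = np.array([3.0, 2.0, 1.4])            # some contracted vector
print(np.isclose(np.linalg.norm(T @ a), np.linalg.norm(a)))   # True
```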
ROW matrix
A row matrix is a matrix with only one row, i.e. of order 1 × n. The notation { }T is used for a row matrix ({ } for column matrix and T for transpose). The name row vector or just vector is also used.
SCALAR PRODUCT of two vectors (standard Euclidean norm)
The scalar product of two vectors {A} and {B} of the same order n results in a scalar C

C = {A}T{B} = Σ(i=1..n) AiBi

The scalar product is also called the dot product.
SCALAR PRODUCT of two complex vectors (standard norm)
The scalar product of two complex vectors {A} and {B} of the same order n involves the conjugate transpose transformation

C = {A}H{B} = Σ(i=1..n) (Re(Ai) – i Im(Ai))(Re(Bi) + i Im(Bi))

With this definition the length of a complex vector {A} is obtained by

|{A}|² = {A}H{A} = Σ(i=1..n) ((Re(Ai))² + (Im(Ai))²)
SIMILARITY transformations
A similarity transformation of a square matrix [A] to a square matrix [B] of the same order is by the regular transformation matrix [T] of the same order

[B] = [T]–1[A][T]

Matrices [A] and [B] are said to be similar matrices; they have the same rank and the same eigenvalues, i.e. the same invariants, but different eigenvectors, related by [T]. A similarity transformation is also an equivalence transformation.
SINGULAR matrix
A singular matrix is a square matrix for which the corresponding determinant has the value zero, i.e.

[A] is singular if |[A]| = 0 , i.e. [A]–1 does not exist

If not singular, the matrix is called regular or non-singular.
SINGULAR VALUE DECOMPOSITION
Any matrix [A] of order m × n can be factorized into the product of an orthogonal matrix [T1] of order m, a rectangular, diagonal matrix [B] of order m × n and an orthogonal matrix [T2]T of order n

[A] = [T1][B][T2]T

The r singular values (positive values) on the diagonal of [B] are the square roots of the non-zero eigenvalues of both [A][A]T and [A]T[A]; the columns of [T1] are the eigenvectors of [A][A]T and the columns of [T2] are the eigenvectors of [A]T[A].
SIZE of a matrix
See order of a matrix.
SKEW matrix
A skew matrix is a specific skew symmetric matrix of order 3, defined to have a more workable notation for the vector product of two vectors of order 3. From the vector {A} the corresponding skew matrix is defined by

      |  0   –A3   A2 |
[Ã] = |  A3   0   –A1 |
      | –A2   A1   0  |

by which {A} × {B} = [Ã]{B}.
The tilde is normally used to indicate this specific matrix.
From {B} × {A} = – {A} × {B} follows

[B̃]{A} = – [Ã]{B}
SKEW SYMMETRIC matrix
A square matrix is termed skew-symmetric if the transpose transformation only changes the sign of the matrix

[A]T = – [A] , i.e. Aji = – Aij for all ij (Aii = 0)

The skew symmetric part of a square matrix [B] is obtained by the difference ½([B] – [B]T).
SPECTRAL DECOMPOSITION of a symmetric matrix
For a symmetric matrix a spectral decomposition is possible. The eigenvalues λi of the matrix [A] are factors in this decomposition

[A] = Σ(i=1..n) λi[B]i = Σ(i=1..n) λi{Φ}i{Φ}iT

where {Φ}i is the eigenvector corresponding to λi (orthonormal eigenvectors).
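A NumPy sketch (added here; the example matrix is assumed) rebuilding a symmetric matrix from its eigenpairs:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, Phi = np.linalg.eigh(A)   # orthonormal eigenvectors for a symmetric matrix

# [A] = sum_i lambda_i {Phi}_i {Phi}_i^T
A_rebuilt = sum(lam[i] * np.outer(Phi[:, i], Phi[:, i]) for i in range(len(lam)))
print(np.allclose(A_rebuilt, A))   # True
```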
SQUARE matrix
A square matrix is a matrix where the number of rows equals the number of columns; the order of the matrix is then n × n or simply n.
STANDARD FORM for eigenvalue problem
The standard form for an eigenvalue problem is

[A]{Φ}i = λi{Φ}i

or

{Ψ}iT[A] = λi{Ψ}iT

see eigenvalue problem.
SUBTRACTION of matrices
Matrices are subtracted by subtracting the corresponding elements

[C] = [A] – [B] with Cij = Aij – Bij

The matrices must have the same order.
SYMMETRIC EIGENVALUE PROBLEM
With [A] and [B] being two symmetric matrices of order n, the left eigenvectors will be equal to the right eigenvectors. From the description of the eigenvalue problem this means

{Ψ}i = {Φ}i

and thus the biorthogonality conditions simplify to the orthogonality conditions. The symmetric eigenvalue problem has only real eigenvalues and real eigenvectors.
SYMMETRIC matrix
A square matrix is termed symmetric if the transpose transformation does not change the matrix

[A]T = [A] , i.e. Aji = Aij for all ij

The symmetric part of a square matrix [B] is obtained by the sum ½([B] + [B]T).
TRACE of a square matrix
The trace of a square matrix [A] of order n is the sum of the diagonal elements

trace([A]) = Σ(i=1..n) Aii
TRANSFORMATION matrices
The different transformations, like equivalence, congruence, similarity and orthogonal, are characterized by the involved square, regular transformation matrices. The equivalence transformation

[B] = [T1][A][T2]

is a congruence transformation if [T1] = [T2]T, and it is a similarity transformation if [T1] = [T2]–1. The orthogonal transformation, which at the same time is a congruence and a similarity transformation, thus assumes [T1] = [T2]T = [T2]–1.
TRANSPOSE of a matrix
The transpose of a matrix is the matrix with interchanged rows and columns. The superscript T is used as notation for this transformation

[B] = [A]T with Bij = Aji for all ij

The transpose of a row matrix is a column matrix, and vice versa. The transpose of a transposed matrix is the matrix itself

([A]T)T = [A]
TRANSPOSE OF A PRODUCT
The transpose of a product of matrices is the product of the transposes of the individual multipliers, but in reverse sequence

([A][B])T = [B]T[A]T

It follows directly from

Cij = Σ(k=1..K) AikBkj and Cji = Σ(k=1..K) AjkBki = Σ(k=1..K) BkiAjk
TRIANGULAR matrix
A triangular matrix is a square matrix with only zeros above the diagonal (lower triangular matrix)

[L] with Lij = 0 for j > i

or only zeros below the diagonal (upper triangular matrix)

[U] with Uij = 0 for j < i
TRIANGULARIZATION of a matrix
See factorization of a matrix.
UNIT matrix
See identity matrix.
VECTORS
As a common name for row matrices and column matrices, the name vector is used.
Some authors distinguish between geometric vectors (oriented pieces of a line) of order two or three and algebraic vectors. Algebraic vectors are column matrices and row matrices of any order.
VECTOR PRODUCT of two vectors
The vector product of two vectors {A} and {B}, both of order 3, is a vector {C} defined by

{C} = {A} × {B} with C1 = A2B3 – A3B2 , C2 = A3B1 – A1B3 , C3 = A1B2 – A2B1

The vector product is also called the cross product. See skew matrix for an easier notation.
ZERO matrix
See null matrix.