  • UUM 526 Optimization Techniques in Engineering, Lecture 3: Vector Spaces and Matrices

    Asst. Prof. N. Kemal Ure

    Istanbul Technical University

    [email protected]

    October 8th, 2015


  • Overview

    1 Vectors and Matrices

    2 Rank of a Matrix

    3 Inner Products and Norms


  • Vectors and Matrices

    Linear Combination

    Abstract Definition: Let $V$ be a set and $F$ be a field, with $+ : V \times V \to V$ and $\cdot : F \times V \to V$. Then $\langle V, F, +, \cdot \rangle$ is a vector space if: $\alpha v_1 + \beta v_2 \in V, \ \forall v_1, v_2 \in V, \ \forall \alpha, \beta \in F$ (1)

    We will be mainly dealing with $V = \mathbb{R}^n$ and $F = \mathbb{R}$; $+$ will be the usual vector addition and $\cdot$ will be the usual scalar multiplication. If $S \subseteq V$ satisfies Eq. 1, then $S$ is called a subspace of $V$.

    Definition (Linear Combination)

    Let $a_k$, $k = 1, \dots, n$ be a finite number of vectors. A vector $b$ is said to be a linear combination of the vectors $a_k$ if there exist scalars $\alpha_k$ such that:

    $b = \alpha_1 a_1 + \alpha_2 a_2 + \dots + \alpha_n a_n = \sum_{k=1}^{n} \alpha_k a_k$
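
    As a concrete illustration, a minimal NumPy sketch of forming a linear combination (the vectors and coefficients below are made up for illustration, not taken from the lecture):

    import numpy as np

    # Illustrative vectors a_1, a_2, a_3 in R^3 and coefficients alpha_k.
    a = [np.array([1.0, 0.0, 0.0]),
         np.array([1.0, 1.0, 0.0]),
         np.array([1.0, 1.0, 1.0])]
    alpha = [2.0, -1.0, 3.0]

    # b = sum_k alpha_k * a_k
    b = sum(alpha_k * a_k for alpha_k, a_k in zip(alpha, a))
    print(b)  # [4. 2. 3.]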


  • Vectors and Matrices

    Linear Dependency

    Definition (Linearly Dependent Set)

    The set of vectors $A = \{a_k : k = 1, \dots, n\}$ is said to be linearly dependent if one of the vectors is a linear combination of the others.

    If the set $A$ is not linearly dependent, then it is called linearly independent.

    Theorem (Test for linear independence)

    The set $A = \{a_k : k = 1, \dots, n\}$ is linearly independent iff the equality $\sum_{k=1}^{n} \alpha_k a_k = 0$ (2)

    implies that $\alpha_k = 0$ for $k = 1, \dots, n$.
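
    In practice, one way to run this test numerically is to stack the vectors as columns and compare the matrix rank with the number of vectors; a minimal sketch with illustrative vectors:

    import numpy as np

    # Columns: a_1 = (1,0,0), a_2 = (1,1,0), a_3 = a_1 + a_2 (dependent on purpose).
    A = np.column_stack([[1.0, 0.0, 0.0],
                         [1.0, 1.0, 0.0],
                         [2.0, 1.0, 0.0]])

    independent = np.linalg.matrix_rank(A) == A.shape[1]
    print(independent)  # False: the third column equals the sum of the first two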

  • Vectors and Matrices

    Span, Basis and Dimension

    Definition (Span)

    The set of all linear combinations of the set of vectors $\{a_k\}$ is called the span of $\{a_k\}$:

    $\mathrm{span}[a_1, \dots, a_n] = \left\{ \sum_{k=1}^{n} \alpha_k a_k : \alpha_k \in \mathbb{R} \right\}$

    The span is always a subspace!

    Definition (Basis)

    If the set $\{a_k\}$ is linearly independent and $\mathrm{span}[a_1, \dots, a_n] = V$, then $\{a_k\}$ is a basis for $V$. By fixing a basis, we can represent other vectors in $V$ in terms of that basis.


  • Vectors and Matrices

    Span, Basis and Dimension

    Theorem (Unique Coordinates for a Fixed Basis)

    Let $\{a_k\}$ be a basis for $V$. Then any vector $v \in V$ can be represented uniquely as

    $v = \sum_{k=1}^{n} \alpha_k a_k.$

    There are usually infinitely many bases for a subspace. For instance, if $\{a_k\}$ is a basis, so is $\{c a_k\}$ for any nonzero $c \in \mathbb{R}$. What about the number of vectors in a basis?

    Theorem (Unique Number of Vectors in a Basis)

    Let $a_k$, $k = 1, \dots, n$ and $b_i$, $i = 1, \dots, m$ be two different bases for $V$. Then $n = m$.

    Hence every space (or subspace) $V$ has the same number of vectors in each of its bases. That number is called the dimension of $V$. We call the $\alpha_k$ the coordinates of the vector $v$ in the basis $\{a_k\}$. For $\mathbb{R}^n$ the usual choice of basis is the natural basis $\{e_i\}$.
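
    To make the coordinate idea concrete, a minimal sketch with a made-up basis of $\mathbb{R}^2$: stacking the basis vectors as the columns of a matrix $A$, the coordinates $\alpha$ of $v$ solve $A\alpha = v$.

    import numpy as np

    # Illustrative basis a_1 = (1, 1), a_2 = (1, -1) and a vector v.
    A = np.column_stack([[1.0, 1.0], [1.0, -1.0]])
    v = np.array([3.0, 1.0])

    alpha = np.linalg.solve(A, v)
    print(alpha)      # [2. 1.]  ->  v = 2*a_1 + 1*a_2
    print(A @ alpha)  # recovers v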

  • Vectors and Matrices

    Vector and Matrix Notation

    For $a \in \mathbb{R}^n$ we will write vectors in column notation:

    $a = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} = [a_1 \ a_2 \ \dots \ a_n]^T, \quad a_i \in \mathbb{R}$

    A matrix $A \in \mathbb{R}^{m \times n}$ is a rectangular collection of real numbers $a_{ij} \in \mathbb{R}$, $i = 1, \dots, m$, $j = 1, \dots, n$:

    $A = \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{bmatrix}$

    It is often much more useful to think of $A$ as a collection of $n$ vectors lying in $\mathbb{R}^m$: $A = [a_1, a_2, \dots, a_n]$, $a_k \in \mathbb{R}^m$. Matrix-vector multiplication $Av$ also makes more sense this way.
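
    A minimal sketch of this column view (illustrative numbers): $Av$ is the linear combination of the columns of $A$ with the entries of $v$ as coefficients.

    import numpy as np

    A = np.array([[1.0, 0.0, 2.0],
                  [0.0, 1.0, 3.0]])   # A in R^{2x3} with columns a_1, a_2, a_3
    v = np.array([1.0, 2.0, -1.0])

    direct = A @ v
    by_columns = sum(v[k] * A[:, k] for k in range(A.shape[1]))
    print(direct, by_columns)  # both give [-1. -1.]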


  • Rank of a Matrix

    Matrix Rank

    Definition (Rank of a Matrix)

    Let $A \in \mathbb{R}^{m \times n}$. The rank $r$ is the maximal number of linearly independent columns of $A$.

    Notice that $r \le n$. When $r = n$, we say that the matrix has full rank.

    Theorem (Invariance of Rank)

    The rank of a matrix $A \in \mathbb{R}^{m \times n}$ is invariant under the following operations:

    1 Multiplication of columns of $A$ by nonzero scalars.

    2 Interchange of columns.

    3 Addition to a given column of a linear combination of the other columns.

    Nice, but is there a formula for testing whether a matrix has full rank? Is there a scalar quantity that measures the independence of the columns? For square matrices ($m = n$), the answer is yes! It is called the determinant of the matrix.
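
    A minimal numerical sketch of rank and its invariance (the matrix below is illustrative):

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0],
                  [7.0, 8.0, 9.0]])

    print(np.linalg.matrix_rank(A))  # 2: the columns are dependent, so A is not full rank

    # Adding to column 3 a combination of columns 1 and 2 leaves the rank unchanged.
    B = A.copy()
    B[:, 2] += 2.0 * A[:, 0] - 1.0 * A[:, 1]
    print(np.linalg.matrix_rank(B))  # still 2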


  • Rank of a Matrix

    Determinant

    The determinant of a matrix (denoted by $\det A$) is a confusing concept at first and has many different interpretations. The properties of the determinant are more important than its explicit formula.

    Definition (Determinant)

    The determinant is a function $\det : \mathbb{R}^{n \times n} \to \mathbb{R}$ that possesses the following properties:

    1 The determinant is linear in the matrix's columns:

    $\det[a_1, \dots, \alpha a_k + \beta b_k, \dots, a_n] = \alpha \det[a_1, \dots, a_k, \dots, a_n] + \beta \det[a_1, \dots, b_k, \dots, a_n]$

    2 If for some $k$, $a_k = a_{k+1}$, then $\det A = 0$.

    3 The determinant of the identity matrix is 1, that is, $\det I_n = 1$.
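
    A minimal sketch checking these defining properties numerically on made-up data:

    import numpy as np

    rng = np.random.default_rng(0)
    a1, a2, b = rng.standard_normal((3, 3))   # three random vectors in R^3
    alpha, beta = 2.0, -0.5

    # 1) Linearity in a single column (here the second column).
    lhs = np.linalg.det(np.column_stack([a1, alpha * a2 + beta * b, b]))
    rhs = (alpha * np.linalg.det(np.column_stack([a1, a2, b]))
           + beta * np.linalg.det(np.column_stack([a1, b, b])))
    print(np.isclose(lhs, rhs))  # True

    # 2) Two equal adjacent columns give determinant zero.
    print(np.isclose(np.linalg.det(np.column_stack([a1, a2, a2])), 0.0))  # True

    # 3) det(I) = 1.
    print(np.isclose(np.linalg.det(np.eye(3)), 1.0))  # True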

  • Rank of a Matrix

    Consequences of Determinant Definition

    If there is a zero column in the matrix, the determinant is zero: $\det[a_1, \dots, 0, \dots, a_n] = 0$.

    If we add to a column a linear combination of the other columns, the determinant does not change:

    $\det\left[a_1, \dots, a_k + \sum_{j=1, j \ne k}^{n} \alpha_j a_j, \dots, a_n\right] = \det[a_1, \dots, a_k, \dots, a_n]$

    The determinant changes sign if we interchange columns:

    $\det[a_1, \dots, a_{k-1}, a_k, \dots, a_n] = -\det[a_1, \dots, a_k, a_{k-1}, \dots, a_n].$

    Most importantly, if the columns of $A$ are not linearly independent, then $\det A = 0$. Hence for a square matrix: full rank $\iff$ nonzero determinant.


  • Rank of a Matrix

    Determinant and Rank

    Only square matrices have determinants. What if I want to test the rank of a rectangular matrix? Rectangular matrices have square sub-matrices! Let's see whether their determinants are useful. First we need to define them:

    Definition (Minor)

    A $p$th-order minor of a matrix $A \in \mathbb{R}^{m \times n}$ is the determinant of the sub-matrix formed by deleting $m - p$ rows and $n - p$ columns.

    Then we have this cool theorem:

    Theorem (Minors and Rank)

    If $A \in \mathbb{R}^{m \times n}$ ($m \ge n$) has a nonzero $n$th-order minor, then $\operatorname{rank} A = n$.

    It is straightforward to show that the rank of a matrix is the maximal order of its nonzero minors.
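
    A minimal brute-force sketch of this fact on an illustrative $4 \times 3$ matrix: enumerate all $p \times p$ sub-matrices and record the largest order with a nonzero determinant.

    import numpy as np
    from itertools import combinations

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0]])

    m, n = A.shape
    rank_by_minors = 0
    for p in range(1, min(m, n) + 1):
        has_nonzero_minor = any(
            not np.isclose(np.linalg.det(A[np.ix_(rows, cols)]), 0.0)
            for rows in combinations(range(m), p)
            for cols in combinations(range(n), p)
        )
        if has_nonzero_minor:
            rank_by_minors = p

    print(rank_by_minors, np.linalg.matrix_rank(A))  # both 3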


  • Rank of a Matrix

    Nonsingular matrices and Inverses

    A square matrix $A$ with $\det A \ne 0$ is called nonsingular.

    A matrix $A \in \mathbb{R}^{n \times n}$ is nonsingular if and only if there exists a matrix $B \in \mathbb{R}^{n \times n}$ such that:

    $AB = BA = I_n$

    The matrix $B$ is called the inverse of $A$ and is denoted $A^{-1}$. It shows up in the solution of the linear equations $Ax = b$: the unique solution exists if $A$ is nonsingular ($x = A^{-1}b$). What about non-square linear systems?

    Theorem (Existence of Solution in a Linear System)

    The set of equations represented by $Ax = b$ has a solution if and only if $\operatorname{rank} A = \operatorname{rank}[A, b]$.

    If $\operatorname{rank} A = m < n$, then we have infinitely many solutions.
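
    A minimal sketch (illustrative system): check solvability via the rank condition, then solve when $A$ is square and nonsingular.

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([3.0, 5.0])

    solvable = np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.column_stack([A, b]))
    print(solvable)  # True

    if not np.isclose(np.linalg.det(A), 0.0):
        x = np.linalg.solve(A, b)        # unique solution x = A^{-1} b
        print(x, np.allclose(A @ x, b))  # [0.8 1.4] True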

  • Inner Products and Norms

    Euclidean Inner Product

    We need to turn our vector space into a metric space by adding a length function.

    Such a function already exists for $V = \mathbb{R}$: the absolute value function $|\cdot|$. Some very useful properties: $-|a| \le a \le |a|$, $|ab| = |a|\,|b|$. The most useful property: $|a + b| \le |a| + |b|$.

    For $\mathbb{R}^n$, before defining the length function, it is helpful to define the inner product first.

    Definition (Euclidean Inner Product)

    The Euclidean inner product of two vectors $x, y \in \mathbb{R}^n$ is defined as $\langle x, y \rangle = \sum_{i=1}^{n} x_i y_i = x^T y$
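
    A minimal sketch with made-up vectors, computing the inner product three equivalent ways:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([4.0, -1.0, 2.0])

    print(np.sum(x * y), np.dot(x, y), x @ y)  # all 8.0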

  • Inner Products and Norms

    Inner Product Properties

    The inner product has the following properties:

    Positivity: $\langle x, x \rangle \ge 0$, and $\langle x, x \rangle = 0$ iff $x = 0$.

    Symmetry: $\langle x, y \rangle = \langle y, x \rangle$.

    Additivity: $\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$.

    Homogeneity: $\langle r x, y \rangle = r \langle x, y \rangle$, $r \in \mathbb{R}$.

    These properties also hold for the second argument. Vectors are orthogonal if $\langle x, y \rangle = 0$. Now we can define the length; it is called the Euclidean norm: $\|x\| = \sqrt{\langle x, x \rangle} = \sqrt{x^T x}$

    Theorem (Cauchy-Schwarz Inequality)

    For any $x, y \in \mathbb{R}^n$, $|\langle x, y \rangle| \le \|x\|\,\|y\|$. Equality holds only if $x = \alpha y$ for some $\alpha \in \mathbb{R}$.
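
    A minimal numerical check of the inequality on made-up vectors, including the equality case for scalar multiples:

    import numpy as np

    x = np.array([1.0, 2.0, 2.0])
    y = np.array([3.0, 0.0, 4.0])

    print(abs(np.dot(x, y)) <= np.linalg.norm(x) * np.linalg.norm(y))  # True (11 <= 15)

    z = 2.5 * y  # scalar multiple of y -> equality holds
    print(np.isclose(abs(np.dot(z, y)), np.linalg.norm(z) * np.linalg.norm(y)))  # True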

  • Inner Products and Norms

    Norm Properties

    The norm possesses many properties of the absolute value function:

    Positivity: $\|x\| \ge 0$, and $\|x\| = 0$ iff $x = 0$.

    Homogeneity: $\|r x\| = |r|\,\|x\|$, $r \in \mathbb{R}$.

    Triangle inequality: $\|x + y\| \le \|x\| + \|y\|$.

    There are many other vector norms. In fact, any function that satisfies the properties above is a norm.

    $p$-norm: $\|x\|_p = \left(|x_1|^p + |x_2|^p + \dots + |x_n|^p\right)^{1/p}$. For $p = 2$ we recover the Euclidean norm. What do $p = 0$ or $p = 1$ correspond to? What happens as $p \to \infty$? (A small numerical sketch follows below.)

    Continuity of $f : \mathbb{R}^n \to \mathbb{R}^m$ can be formulated in terms of norms: $f$ is continuous at $x_0 \in \mathbb{R}^n$ if and only if for all $\epsilon > 0$ there exists a $\delta > 0$ such that $\|x - x_0\| < \delta \implies \|f(x) - f(x_0)\| < \epsilon$.

    If $x \in \mathbb{C}^n$ (complex vectors), the inner product is defined as $\sum_{i=1}^{n} x_i \bar{y}_i$; hence $\langle x, y \rangle = \overline{\langle y, x \rangle}$ and $\langle x, r y \rangle = \bar{r} \langle x, y \rangle$.
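
    A minimal sketch of a few $p$-norms via np.linalg.norm on an illustrative vector:

    import numpy as np

    x = np.array([3.0, -4.0, 0.0])

    print(np.linalg.norm(x, 1))       # 7.0  (sum of absolute values)
    print(np.linalg.norm(x, 2))       # 5.0  (the Euclidean norm)
    print(np.linalg.norm(x, np.inf))  # 4.0  (max absolute value, the p -> inf limit)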



