    Introduction to Tensors

    Sumeet Khatri


    Table of Contents

Introduction

1 The Index Notation and Einstein Summation Convention

2 Covariant and Contravariant Vectors

3 Introducing Tensors

3.1 The Inner Product and the First Tensor

3.2 Creating Tensors from Vectors

4 Tensor Definition and Properties

4.1 Symmetry and Anti-Symmetry

4.2 Contraction of Indices


    Introduction

Tensors are geometric objects that describe linear relations between vectors, scalars, and other tensors. Elementary examples of such relations include the dot product, the cross product, and linear mappings. We will see that in fact vectors and scalars are also tensors.

Tensors are important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as elasticity, fluid mechanics, and special and general relativity.


    1 The Index Notation and Einstein Summation Convention

Let us first introduce a new notation for vectors and matrices and their algebraic manipulations, called the index notation.

Let us take a manifold with dimension $n$. We will denote the components of a vector $\mathbf{v}$ with the numbers $v_1, v_2, \ldots, v_n$ in some basis $\{\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n\}$. If one modifies the vector basis in which the components of $\mathbf{v}$ are expressed, then these components will also change. Such a transformation is described by a change-of-basis matrix, say $A$, in which the columns are the old basis vectors $\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n$ expressed in the new basis, say $\{\mathbf{e}'_1, \mathbf{e}'_2, \ldots, \mathbf{e}'_n\}$. So we have

$$\begin{pmatrix} v'_1 \\ v'_2 \\ \vdots \\ v'_n \end{pmatrix} = \begin{pmatrix} A_{11} & \cdots & A_{1n} \\ \vdots & & \vdots \\ A_{n1} & \cdots & A_{nn} \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}, \tag{1.1}$$

taking note of the fact that the first index denotes the row of the matrix $A$ and the second index the column.

According to the rules of matrix multiplication, the above matrix equation is the system of equations

$$\begin{aligned} v'_1 &= A_{11} v_1 + A_{12} v_2 + \cdots + A_{1n} v_n, \\ &\ \ \vdots \\ v'_n &= A_{n1} v_1 + A_{n2} v_2 + \cdots + A_{nn} v_n, \end{aligned} \tag{1.2}$$

    or equivalently,

$$\begin{aligned} v'_1 &= \sum_{\nu=1}^n A_{1\nu} v_\nu, \\ &\ \ \vdots \\ v'_n &= \sum_{\nu=1}^n A_{n\nu} v_\nu, \end{aligned} \tag{1.3}$$

    or even more succinctly,

$$v'_\mu = \sum_{\nu=1}^n A_{\mu\nu} v_\nu \qquad (\mu \in \mathbb{N},\ 1 \le \mu \le n). \tag{1.4}$$


Each of the three systems above is written in the index notation. In (1.4), we call $\nu$ a dummy index and $\mu$ a running index or a free index. Keep in mind that $\mu$ and $\nu$ are merely labels; we could have equally well called them whatever else we like, say $\alpha$ and $\beta$.

Usually, the conditions for $\mu$ in (1.4) are not explicitly stated because they should be obvious from the context. We therefore have

$$\mathbf{v} = \mathbf{y} \iff v_\mu = y_\mu, \qquad \mathbf{v} = A\mathbf{y} \iff v_\mu = \sum_{\nu=1}^n A_{\mu\nu} y_\nu. \tag{1.5}$$

The index notation is also applicable to operations such as the dot product (and indeed inner products in general), so that if $\mathbf{v}$ and $\mathbf{w}$ are any two vectors, then

$$\mathbf{v} \cdot \mathbf{w} = v_1 w_1 + v_2 w_2 + \cdots + v_n w_n = \sum_{\mu=1}^n v_\mu w_\mu. \tag{1.6}$$

    We also have

$$C = A + B \iff C_{\mu\nu} = A_{\mu\nu} + B_{\mu\nu}, \qquad \mathbf{z} = \mathbf{v} + \mathbf{w} \iff z_\mu = v_\mu + w_\mu. \tag{1.7}$$
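To make the index notation concrete, here is a short NumPy sketch (an illustration of mine, not part of the original notes; the dimension, seed, and random data are arbitrary) that computes the transformation (1.4) once with explicit loops over the indices and once as the matrix product (1.1):

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
A = rng.normal(size=(n, n))   # change-of-basis matrix A_{mu nu}
v = rng.normal(size=n)        # components v_nu in the old basis

# Explicit index notation: v'_mu = sum_nu A_{mu nu} v_nu, as in (1.4)
v_prime = np.zeros(n)
for mu in range(n):
    for nu in range(n):
        v_prime[mu] += A[mu, nu] * v[nu]

# The same computation written as matrix multiplication, as in (1.1)
assert np.allclose(v_prime, A @ v)
```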

    Example 1.0.1: Working with index notation.

1. $A$, $B$, and $C$ are matrices of appropriate dimensions. Assume that $A = BC$. Write out this matrix multiplication using index notation.

2. $A$ and $B$ are matrices and $\mathbf{x}$ is a vector. Show that

$$\sum_{\nu=1}^n A_{\mu\nu} \left( \sum_{\sigma=1}^n B_{\nu\sigma} x_\sigma \right) = \sum_{\nu=1}^n \sum_{\sigma=1}^n \left( A_{\mu\nu} B_{\nu\sigma} x_\sigma \right) = \sum_{\sigma=1}^n \sum_{\nu=1}^n \left( A_{\mu\nu} B_{\nu\sigma} x_\sigma \right) = \sum_{\sigma=1}^n \left( \sum_{\nu=1}^n A_{\mu\nu} B_{\nu\sigma} \right) x_\sigma.$$

    3. Which of the following statements is true?

(a) The summation signs in an expression can always be moved to the far left without changing the meaning of the expression.

(b) If all summation signs are on the far left of an expression, you can exchange their order without changing the meaning of the expression.


(c) If all summation signs are on the far left of an expression, you cannot just change the order of the variables in the expression because this changes the order in which matrices are multiplied, and generally $AB \neq BA$ for two arbitrary matrices $A$ and $B$ of appropriate dimensions.

(d) $A_{\mu\nu} = \left(A^T\right)_{\nu\mu}$.

(e) $A_{\mu\nu} = \left(A^T\right)_{\mu\nu}$.

    Solution:

1. Let

$$B = \begin{pmatrix} B_{11} & \cdots & B_{1n} \\ \vdots & & \vdots \\ B_{m1} & \cdots & B_{mn} \end{pmatrix} \quad \text{and} \quad C = \begin{pmatrix} C_{11} & \cdots & C_{1m} \\ \vdots & & \vdots \\ C_{n1} & \cdots & C_{nm} \end{pmatrix}.$$

Then,

$$A = \begin{pmatrix} B_{11}C_{11} + \cdots + B_{1n}C_{n1} & B_{11}C_{12} + \cdots + B_{1n}C_{n2} & \cdots & B_{11}C_{1m} + \cdots + B_{1n}C_{nm} \\ \vdots & \vdots & & \vdots \\ B_{m1}C_{11} + \cdots + B_{mn}C_{n1} & B_{m1}C_{12} + \cdots + B_{mn}C_{n2} & \cdots & B_{m1}C_{1m} + \cdots + B_{mn}C_{nm} \end{pmatrix},$$

so that

$$A_{\mu\nu} = \sum_{i=1}^n B_{\mu i} C_{i\nu}.$$

2. We have

$$\begin{aligned} \sum_{\nu=1}^n A_{\mu\nu} \left( \sum_{\sigma=1}^n B_{\nu\sigma} x_\sigma \right) &= A_{\mu 1} \left( \sum_{\sigma=1}^n B_{1\sigma} x_\sigma \right) + \cdots + A_{\mu n} \left( \sum_{\sigma=1}^n B_{n\sigma} x_\sigma \right) \\ &= \sum_{\sigma=1}^n A_{\mu 1} B_{1\sigma} x_\sigma + \sum_{\sigma=1}^n A_{\mu 2} B_{2\sigma} x_\sigma + \cdots + \sum_{\sigma=1}^n A_{\mu n} B_{n\sigma} x_\sigma \\ &= \sum_{\nu=1}^n \sum_{\sigma=1}^n A_{\mu\nu} B_{\nu\sigma} x_\sigma, \end{aligned}$$

proving the first equality. The second equality follows from the commutativity of addition (i.e., elements can be added in different orders without altering the result), and

$$\sum_{\sigma=1}^n \left( \sum_{\nu=1}^n A_{\mu\nu} B_{\nu\sigma} \right) x_\sigma = \sum_{\sigma=1}^n \sum_{\nu=1}^n \left( A_{\mu\nu} B_{\nu\sigma} \right) x_\sigma = \sum_{\sigma=1}^n \sum_{\nu=1}^n A_{\mu\nu} B_{\nu\sigma} x_\sigma.$$

    3.


(a) True, since doing so simply changes the order in which terms are added, and since addition of numbers is commutative, this is no problem.

    (b) True, since this also merely changes the order of addition.

(c) False, as long as the summand contains only expressions involving commutative operations, like multiplication of numbers. As for the matrix multiplication mentioned, it is not the order of the multiplication/addition of elements that makes matrix multiplication non-commutative, but rather the definition of matrix multiplication.

    (d) True

    (e) False
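As a quick numerical sanity check of statements (b) and (d), here is an illustrative Python sketch (my addition, not from the original notes; the dimension, seed, and fixed free index are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 3
A, B = rng.normal(size=(n, n)), rng.normal(size=(n, n))
x = rng.normal(size=n)
mu = 0   # an arbitrary fixed value of the free index

# (b): with all sums on the left, their order does not matter.
s1 = sum(A[mu, i] * B[i, j] * x[j] for i in range(n) for j in range(n))
s2 = sum(A[mu, i] * B[i, j] * x[j] for j in range(n) for i in range(n))
assert np.isclose(s1, s2)

# (d): A_{mu nu} = (A^T)_{nu mu} holds entrywise.
assert all(np.isclose(A[m, nu], A.T[nu, m])
           for m in range(n) for nu in range(n))
```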

We have seen in the above example that the summation symbol can always be put at the start of any expression and that if there is more than one summation sign then their order is irrelevant.

It is therefore convenient to omit the summation signs as long as we make it clear in advance which index is being summed over, for instance, by putting it beside the formula as shown below:

$$\sum_{\nu=1}^n A_{\mu\nu} v_\nu \longrightarrow A_{\mu\nu} v_\nu \quad \{\nu\}, \qquad \sum_{\nu=1}^n \sum_{\sigma=1}^n A_{\mu\nu} B_{\nu\sigma} C_{\sigma\rho} \longrightarrow A_{\mu\nu} B_{\nu\sigma} C_{\sigma\rho} \quad \{\nu, \sigma\}. \tag{1.8}$$

    The above example also indicates to us the following:

    It appears that if an index only appears once in a summand then this index is not summed;

    It appears that if an index appears at least twice in a summand then that index is summed.

After making routine usage of the index notation, however, indicating the summation index every time might become irritating. Our two observations above allow us to easily determine the summation index (or indices), so that indicating it is no longer necessary. This leads to the Einstein summation convention:

    Einstein Summation Convention

In a summation over one or more indices, the summation sign and the index of summation may be omitted with the following conventions:

A summation is assumed over all indices that appear twice in a summand; and

No summation is assumed over indices that appear only once in the summand.


We will use index notation with the Einstein summation convention from now on. So we will write

$$\sum_{\nu=1}^n A_{\mu\nu} v_\nu \equiv A_{\mu\nu} v_\nu, \qquad \sum_{\nu=1}^n \sum_{\sigma=1}^n A_{\mu\nu} B_{\nu\sigma} C_{\sigma\rho} \equiv A_{\mu\nu} B_{\nu\sigma} C_{\sigma\rho}. \tag{1.9}$$

    Also,

$$\sum_{i=1}^n A_i A_i \equiv A_i A_i, \qquad \sum_{i=1}^n \sum_{j=1}^n A_{ijk} B_{ij} \equiv A_{ijk} B_{ij}, \tag{1.10}$$

    and by extension we shall also understand summation in such expressions as

$$\frac{\partial u_i}{\partial x_i}, \qquad \frac{\partial q}{\partial x_i} \frac{dx_i}{dt}, \quad \text{etc.} \tag{1.11}$$
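NumPy's einsum function mirrors this convention exactly: in its subscript string, an index that appears twice is summed over, and an index that appears once is free. A small illustrative sketch (my addition, with arbitrary test data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A, B, C = (rng.normal(size=(n, n)) for _ in range(3))
v = rng.normal(size=n)

# A_{mu nu} v_nu : 'j' appears twice, so it is summed; 'i' is free.
w = np.einsum('ij,j->i', A, v)          # equals A @ v
# A_{mu nu} B_{nu sigma} C_{sigma rho} : two repeated indices, two sums.
D = np.einsum('ij,jk,kl->il', A, B, C)  # equals A @ B @ C
assert np.allclose(w, A @ v) and np.allclose(D, A @ B @ C)
```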

    Example 1.0.2: Working with the Einstein summation convention.

1. Write as matrix multiplication:

(a) $D_{\mu\rho} = A_{\mu\nu} B_{\nu\sigma} C_{\rho\sigma}$;

(b) $D_{\mu\rho} = A_{\mu\nu} B_{\rho\sigma} C_{\nu\sigma}$;

(c) $D_{\mu\sigma} = A_{\mu\nu} \left( B_{\nu\sigma} + C_{\nu\sigma} \right)$.

2. Consider a vector field in an $n$-dimensional space, $\mathbf{F}(\mathbf{x})$. We perform a coordinate transformation $\mathbf{x}' = A\mathbf{x}$, where $A$ is an $n \times n$ change-of-basis matrix. Show that $\mathbf{F}' = A\mathbf{F}$.

3. For a change of basis, we have $\mathbf{x}' = A\mathbf{x}$. This corresponds to $x'_\mu = \sum_{\nu=1}^n A_{\mu\nu} x_\nu$. Can you understand the expression $\sum_{\nu=1}^n x_\nu A_{\mu\nu}$, and how can you construct the matrix multiplication equivalent of $\sum_{\mu=1}^n x_\mu A_{\mu\nu}$?

    Solution:

1. (a) We have by the summation convention

$$\begin{aligned} D_{\mu\rho} &= \sum_{\nu=1}^n \sum_{\sigma=1}^n A_{\mu\nu} B_{\nu\sigma} C_{\rho\sigma} \\ &= \sum_{\sigma=1}^n (AB)_{\mu\sigma} C_{\rho\sigma} \quad \text{(previous example)} \\ &= \sum_{\sigma=1}^n (AB)_{\mu\sigma} \left(C^T\right)_{\sigma\rho} \quad \text{(previous example)}, \end{aligned}$$

so that $D = ABC^T$.


(b) By the summation convention, and since the order of elements in a summand is irrelevant (as multiplication is commutative), we have

$$D_{\mu\rho} = \sum_{\nu=1}^n \sum_{\sigma=1}^n A_{\mu\nu} C_{\nu\sigma} B_{\rho\sigma} = \sum_{\sigma=1}^n (AC)_{\mu\sigma} B_{\rho\sigma} = \sum_{\sigma=1}^n (AC)_{\mu\sigma} \left(B^T\right)_{\sigma\rho},$$

so that $D = ACB^T$.

(c) We have

$$D_{\mu\sigma} = A_{\mu\nu} B_{\nu\sigma} + A_{\mu\nu} C_{\nu\sigma} = (AB)_{\mu\sigma} + (AC)_{\mu\sigma},$$

so that $D = AB + AC = A(B + C)$.

    2.

3. Because we can change the order of elements in the summand, we have simply

$$\sum_{\nu=1}^n x_\nu A_{\mu\nu} = \sum_{\nu=1}^n A_{\mu\nu} x_\nu = x'_\mu.$$

And if we let

$$\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} x_1 & x_2 & \cdots & x_n \end{pmatrix}^T,$$

then

$$\mathbf{x}^T A = \begin{pmatrix} x_1 & x_2 & \cdots & x_n \end{pmatrix} \begin{pmatrix} A_{11} & \cdots & A_{1n} \\ \vdots & & \vdots \\ A_{n1} & \cdots & A_{nn} \end{pmatrix} = \begin{pmatrix} x_1 A_{11} + \cdots + x_n A_{n1} & \ \cdots\ & x_1 A_{1n} + \cdots + x_n A_{nn} \end{pmatrix},$$

so that the matrix multiplication equivalent of $\sum_{\mu=1}^n x_\mu A_{\mu\nu}$ is $\mathbf{x}^T A$.
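The identities of this example are easy to verify numerically. The following sketch (an illustration of mine, not part of the original notes, using arbitrary random matrices) checks part 1(a) and part 3:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A, B, C = (rng.normal(size=(n, n)) for _ in range(3))
x = rng.normal(size=n)

# (a) D_{mu rho} = A_{mu nu} B_{nu sigma} C_{rho sigma}  =>  D = A B C^T
D = np.einsum('ij,jk,lk->il', A, B, C)
assert np.allclose(D, A @ B @ C.T)

# (3) sum_mu x_mu A_{mu nu}: summation over the *row* index of A  =>  x^T A
row = np.einsum('i,ij->j', x, A)
assert np.allclose(row, x @ A)   # x @ A treats x as a row vector
```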


    2 Covariant and Contravariant Vectors

In this chapter we will describe how vectors change under a coordinate transformation, i.e., under a change of basis. Doing this will allow us to make a distinction between two types of vectors, which we will call contravariant vectors and covariant vectors (the latter sometimes shortened to covectors).

In physics, a vector typically arises as the outcome of a measurement or series of measurements and is represented as a tuple of numbers, such as $(v_1, v_2, v_3)$. This tuple of numbers, each of which is called a coordinate, depends on the choice of coordinate system. Let us assume that we use a linear coordinate system, so that we can use linear algebra to describe it. The position of a physical object, for example, can be specified using a Cartesian coordinate system and is often represented as an arrow from its origin. We can then use a chosen set of basis vectors belonging to the coordinate system, for example, the standard basis $\{\mathbf{i} = (1, 0, 0), \mathbf{j} = (0, 1, 0), \mathbf{k} = (0, 0, 1)\}$ in $\mathbb{R}^3$, to specify the location of the object as $r_1\mathbf{i} + r_2\mathbf{j} + r_3\mathbf{k}$, so that $(r_1, r_2, r_3)$ is the 3-tuple of the coordinates of the object.

In such a description of objects with coordinates, we must be fully aware that the coordinates themselves have no meaning. Only with the corresponding basis vectors do these numbers acquire meaning. Remember that the object one describes is independent of the coordinate system (and hence the set of basis vectors) chosen. We are thus interested in how the coordinates of an object transform when the original coordinate system is changed to a new one.

Now, suppose we have two bases in a three-dimensional vector space $V$ (indeed, we could generalise this to $n$ dimensions, but three makes the subsequent equations less cumbersome to write down), $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ and $\{\mathbf{e}'_1, \mathbf{e}'_2, \mathbf{e}'_3\}$. Suppose every basis vector in the primed basis can be written as a linear combination of the basis vectors of the unprimed basis, i.e.,

$$\begin{aligned} \mathbf{e}'_1 &= a_{11}\mathbf{e}_1 + a_{12}\mathbf{e}_2 + a_{13}\mathbf{e}_3, \\ \mathbf{e}'_2 &= a_{21}\mathbf{e}_1 + a_{22}\mathbf{e}_2 + a_{23}\mathbf{e}_3, \\ \mathbf{e}'_3 &= a_{31}\mathbf{e}_1 + a_{32}\mathbf{e}_2 + a_{33}\mathbf{e}_3, \end{aligned} \tag{2.1}$$

or

$$\begin{pmatrix} \mathbf{e}'_1 \\ \mathbf{e}'_2 \\ \mathbf{e}'_3 \end{pmatrix} = \Lambda \begin{pmatrix} \mathbf{e}_1 \\ \mathbf{e}_2 \\ \mathbf{e}_3 \end{pmatrix}, \quad \text{where} \quad \Lambda = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}, \tag{2.2}$$

where we assume that $\Lambda$ is non-singular (and hence invertible).

Now, let us take a vector $\mathbf{v} = (v_1, v_2, v_3)$ in the unprimed basis and see how its coordinates transform when written in terms of the primed basis. From linear algebra, we know that the change-of-basis matrix is constructed by letting the columns be the old basis vectors expressed


in terms of the new ones, i.e., the columns will be the unprimed basis vectors expressed in terms of the primed basis vectors. Now, since $\Lambda$ above is invertible, we have

$$\begin{pmatrix} \mathbf{e}_1 \\ \mathbf{e}_2 \\ \mathbf{e}_3 \end{pmatrix} = \Lambda^{-1} \begin{pmatrix} \mathbf{e}'_1 \\ \mathbf{e}'_2 \\ \mathbf{e}'_3 \end{pmatrix},$$

and hence the columns of $\left(\Lambda^{-1}\right)^T$ will contain the unprimed basis vectors in terms of the primed basis vectors.

Remark: To see why we must take the transpose, note that

$$\Lambda^T = \begin{pmatrix} a_{11} & a_{21} & a_{31} \\ a_{12} & a_{22} & a_{32} \\ a_{13} & a_{23} & a_{33} \end{pmatrix},$$

so that the columns of $\Lambda^T$ are the primed basis vectors expressed in terms of the unprimed basis vectors. In the same way, $\Lambda^{-1}$ will have as its rows the coordinates of the unprimed basis vectors in terms of the primed basis vectors, so that its transpose will contain these coordinates as its columns, as required.

    Therefore,

$$\mathbf{v}' = \left(\Lambda^{-1}\right)^T \mathbf{v} \iff v'_\mu = \left(\left(\Lambda^{-1}\right)^T\right)_{\mu\nu} v_\nu. \tag{2.3}$$

Vectors that transform in this manner are called contravariant vectors, and the transformation $\left(\Lambda^{-1}\right)^T$ represents a contravariant transformation. These are the vectors that we typically deal with, which is why we almost always simply call them vectors. Observe that the coordinates of $\mathbf{v}$ transform in the opposite way (i.e., contrary) to the basis vectors that describe it. This means that the vector itself does not change, i.e., if we use an arrow to indicate our vector, then physically this arrow will be unchanged, as we require. Instead, the components of the vector make a change that cancels the change in the basis vectors, resulting in a change of coordinates. In other words, if the basis vectors were rotated in one direction, the component representation of the vector would rotate in exactly the opposite way (with the effect seen in the values of the coordinates). Similarly, if the basis vectors were stretched in one direction, the components of the vector, like the coordinates, would reduce in an exactly compensating way.

Let us now look at vectors that transform in the same way as the basis vectors. Consider an $n$-dimensional manifold $V$ with coordinates $x_1, x_2, \ldots, x_n$. Let $f$ be some scalar function. Then the gradient of $f(x_1, x_2, \ldots, x_n)$ is

$$\left(\boldsymbol{\nabla} f\right)_\mu = \frac{\partial f}{\partial x_\mu}, \quad \text{i.e.,} \quad \boldsymbol{\nabla} f = \frac{\partial f}{\partial x_1}\mathbf{e}_1 + \frac{\partial f}{\partial x_2}\mathbf{e}_2 + \cdots + \frac{\partial f}{\partial x_n}\mathbf{e}_n. \tag{2.4}$$

Suppose we have a vector field defined on this manifold $V$, $\mathbf{V} = \mathbf{V}(\mathbf{x})$. Let us perform a homogeneous linear transformation of the coordinates:

$$\mathbf{x}' = A\mathbf{x}. \tag{2.5}$$


    As we saw in the previous example, we thus have a corresponding change in the vector field V:

$$\mathbf{V}'(\mathbf{x}') = A\mathbf{V}(\mathbf{x}), \tag{2.6}$$

where $A$ is the same matrix as in (2.5). Note that this matrix describes the transformation of the vector components, while previously our matrix $\Lambda$ described the transformation of the basis vectors, so that $A = \left(\Lambda^{-1}\right)^T$.

Now, take the function $f(x_1, x_2, \ldots, x_n)$ and the gradient $\mathbf{w}$ at a point $P$,

$$w_\mu = \frac{\partial f}{\partial x_\mu}; \tag{2.7}$$

    and in the new coordinate system,

$$w'_\mu = \frac{\partial f}{\partial x'_\mu}. \tag{2.8}$$

(That is, the $w'_\mu$ are the components of the gradient vector in the new coordinate system.) Then, by the chain rule,

$$\frac{\partial f}{\partial x'_1} = \frac{\partial f}{\partial x_1}\frac{\partial x_1}{\partial x'_1} + \frac{\partial f}{\partial x_2}\frac{\partial x_2}{\partial x'_1} + \cdots + \frac{\partial f}{\partial x_n}\frac{\partial x_n}{\partial x'_1},$$

    that is,

$$\frac{\partial f}{\partial x'_\mu} = w'_\mu = \frac{\partial f}{\partial x_\nu}\frac{\partial x_\nu}{\partial x'_\mu} = w_\nu \frac{\partial x_\nu}{\partial x'_\mu} \implies w'_\mu = \frac{\partial x_\nu}{\partial x'_\mu} w_\nu. \tag{2.9}$$

Now, take (2.5) and rewrite it as

$$\mathbf{x} = A^{-1}\mathbf{x}'.$$

    Then,

$$\frac{\partial x_\nu}{\partial x'_\mu} = \frac{\partial \left( \left(A^{-1}\right)_{\nu\sigma} x'_\sigma \right)}{\partial x'_\mu} = \left(A^{-1}\right)_{\nu\sigma} \frac{\partial x'_\sigma}{\partial x'_\mu} + \frac{\partial \left(A^{-1}\right)_{\nu\sigma}}{\partial x'_\mu} x'_\sigma. \tag{2.10}$$

Because in this case $A$ does not depend on $\mathbf{x}'$, the last term on the right-hand side of the above equation vanishes. Also,

$$\frac{\partial x'_\sigma}{\partial x'_\mu} = \delta_{\sigma\mu}, \qquad \delta_{\sigma\mu} = \begin{cases} 1 & \text{when } \sigma = \mu, \\ 0 & \text{when } \sigma \neq \mu. \end{cases} \tag{2.11}$$

    Therefore, what remains is

$$\frac{\partial x_\nu}{\partial x'_\mu} = \left(A^{-1}\right)_{\nu\sigma} \delta_{\sigma\mu} = \left(A^{-1}\right)_{\nu\mu}. \tag{2.12}$$


Finally, in combination with (2.9), we get the following transformation of the components of the gradient:

$$w'_\mu = \left(\left(A^{-1}\right)^T\right)_{\mu\nu} w_\nu \iff \boldsymbol{\nabla}' f = \left(A^{-1}\right)^T \boldsymbol{\nabla} f. \tag{2.13}$$

But remember that $A = \left(\Lambda^{-1}\right)^T$, so that $\left(A^{-1}\right)^T = \Lambda$. Therefore,

$$\boldsymbol{\nabla}' f = \Lambda \boldsymbol{\nabla} f, \tag{2.14}$$

i.e., we have shown that the components of the gradient vector transform in exactly the same way as the basis vectors. Vectors that transform in this manner are called covariant vectors or simply covectors, and the matrix $\Lambda$ represents a covariant transformation.
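The covariant rule (2.13) can also be checked numerically. In the sketch below (an illustration of mine, not part of the original notes; the test function $f(\mathbf{x}) = \sin(\mathbf{c} \cdot \mathbf{x})$ and all random data are arbitrary choices), the analytically transformed gradient is compared against a finite-difference gradient taken with respect to the new coordinates:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
A = rng.normal(size=(n, n))         # coordinate transformation x' = A x
c = rng.normal(size=n)

f = lambda x: np.sin(c @ x)         # a scalar function of the old coordinates
grad_f = lambda x: np.cos(c @ x) * c

x = rng.normal(size=n)
x_prime = A @ x
Ainv = np.linalg.inv(A)

# In the new coordinates f becomes f(A^-1 x'); by the chain rule its
# gradient picks up a factor (A^-1)^T, the covariant rule (2.13).
w_prime = Ainv.T @ grad_f(x)

# Finite-difference check of the gradient with respect to x'.
eps = 1e-6
fd = np.array([(f(Ainv @ (x_prime + eps * np.eye(n)[k])) -
                f(Ainv @ (x_prime - eps * np.eye(n)[k]))) / (2 * eps)
               for k in range(n)])
assert np.allclose(w_prime, fd, atol=1e-5)
```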

To distinguish contravariant vectors from covariant vectors, we will write the indices of contravariant vectors as superscripts and the indices of covariant vectors as subscripts.

$y^\mu$: contravariant vector; $w_\mu$: covariant vector.

In addition, we will denote contravariant vectors in boldface ($\mathbf{v}$) and write them explicitly as column matrices, as we have been doing all along,

$$\mathbf{v} = \begin{pmatrix} v^1 \\ v^2 \\ \vdots \\ v^n \end{pmatrix} \quad \text{(contravariant vector)}, \tag{2.15}$$

and we will denote covariant vectors in boldface with a tilde ($\tilde{\mathbf{v}}$) and write them explicitly as row matrices,

$$\tilde{\mathbf{v}} = \begin{pmatrix} v_1 & v_2 & \cdots & v_n \end{pmatrix} \quad \text{(covariant vector)}. \tag{2.16}$$

(Note the position of the index in both cases, following the convention in the box above.) We also introduce a similar notation convention for matrices, which we will regard as an extension of the Einstein summation convention. Instead of the usual index notation $A_{mn}$ used to refer to the $m$th row and $n$th column of a matrix $A$, we will write $A^m{}_n$, which means that the transpose is $\left(A^T\right)_m{}^n = A^n{}_m$. Then the transformation rules for contravariant vectors and covariant vectors, respectively, are

$$v'^\mu = A^\mu{}_\nu v^\nu \quad \text{(contravariant vectors)}, \qquad w'_\mu = \left(A^{-1}\right)^\nu{}_\mu w_\nu \quad \text{(covariant vectors)}. \tag{2.17}$$

    As for matrix multiplication, we get

$$(AB)^i{}_k = A^i{}_j B^j{}_k. \tag{2.18}$$


This new notation will be useful later because it will indicate that such matrices have mixed contravariant and covariant transformation properties.


    3 Introducing Tensors

    3.1 The Inner Product and the First Tensor

The dot product is very important in physics. In classical mechanics, for example, the work that is done when an object is moved equals the dot product of the force $\mathbf{F}$ acting on the object and the displacement vector $\mathbf{x}$ of the object: $W = \mathbf{F} \cdot \mathbf{x}$. As we know from linear algebra, the dot product is just a special case of the inner product (the dot product is often called the standard inner product on $\mathbb{R}^n$), so we might also write $W = \langle \mathbf{F}, \mathbf{x} \rangle$. The work must of course be independent of the coordinate system in which the vectors $\mathbf{F}$ and $\mathbf{x}$ are expressed.

However, the dot product

$$s = \langle \mathbf{a}, \mathbf{b} \rangle = a_\mu b_\mu$$

does not in general have this invariance property for arbitrary vectors $\mathbf{a}$ and $\mathbf{b}$ and arbitrary linear transformations $a'_\mu = A_{\mu\nu} a_\nu$ and $b'_\mu = A_{\mu\nu} b_\nu$:

$$s' = \langle \mathbf{a}', \mathbf{b}' \rangle = A_{\mu\nu} a_\nu A_{\mu\sigma} b_\sigma = \left(A^T A\right)_{\nu\sigma} a_\nu b_\sigma.$$

So we see that $s' = s$ if and only if $A^{-1} = A^T$, i.e., if and only if we are dealing with an orthogonal transformation (i.e., $A$ is an orthogonal matrix). However, we would like $s' = s$ for any transformation matrix $A$. To try to accomplish this, notice that the dot product between a (contravariant) vector $\mathbf{x}$ and a covector $\tilde{\mathbf{y}}$, $s = x^\mu y_\mu$, is invariant under all transformations, since

    for all transformation matrices A

$$s' = x'^\mu y'_\mu = A^\mu{}_\nu x^\nu \left(A^{-1}\right)^\sigma{}_\mu y_\sigma = \left(A^{-1}\right)^\sigma{}_\mu A^\mu{}_\nu x^\nu y_\sigma = \delta^\sigma{}_\nu x^\nu y_\sigma = x^\nu y_\nu = s.$$

With the help of this dot product, we can introduce a new standard inner product between two contravariant vectors that also has the invariance property. Let us define the inner product as

$$s = g_{\mu\nu} x^\mu y^\nu, \tag{3.1}$$

where, in $\mathbb{R}^3$,

$$g = \begin{pmatrix} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \end{pmatrix}. \tag{3.2}$$

Now, we must make sure that this object $g$ is chosen so that our new inner product reproduces the old one if we choose an orthonormal coordinate system. So, in $\mathbb{R}^3$, we should get

$$s = g_{\mu\nu} x^\mu y^\nu = \begin{pmatrix} x^1 & x^2 & x^3 \end{pmatrix} \begin{pmatrix} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \end{pmatrix} \begin{pmatrix} y^1 \\ y^2 \\ y^3 \end{pmatrix} = x^1 y^1 + x^2 y^2 + x^3 y^3 \quad \text{(in an orthonormal system)}.$$


    This implies that

$$g = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad \text{in an orthonormal coordinate system.} \tag{3.3}$$

Note, however, that $g$ does not have the transformation properties of an ordinary matrix. Remember that the matrix $A$ of the previous chapter had one index up and one index down, like $A^\mu{}_\nu$, indicating that it has mixed contravariant and covariant transformation properties. This new object $g_{\mu\nu}$, however, has been written with both indices down, so it transforms in a covariant manner. This object, which looks like a matrix but does not transform like one, is an example of a tensor. A matrix is also a tensor, as are vectors and covectors. Matrices, vectors, and covectors are special cases of the more general class of objects called tensors. The object $g_{\mu\nu}$ is a kind of tensor that is neither a matrix nor a vector nor a covector. It is a new kind of object for which only tensor mathematics has a proper description. It is called a metric tensor or simply a metric.
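The invariance that motivated the metric can be verified numerically: if the contravariant components pick up a factor of $A$ while the metric, carrying two covariant indices, picks up two factors of $A^{-1}$, the inner product (3.1) is unchanged. An illustrative sketch (my addition, not from the original notes, with arbitrary random data):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3
A = rng.normal(size=(n, n))         # an arbitrary (invertible) transformation
Ainv = np.linalg.inv(A)

g = np.eye(n)                       # metric in the orthonormal system, (3.3)
x, y = rng.normal(size=n), rng.normal(size=n)

# Contravariant components transform with A; the metric, having two
# covariant indices, transforms with two factors of A^-1.
x_p, y_p = A @ x, A @ y
g_p = Ainv.T @ g @ Ainv

s   = np.einsum('ij,i,j->', g, x, y)        # s = g_{mu nu} x^mu y^nu
s_p = np.einsum('ij,i,j->', g_p, x_p, y_p)
assert np.isclose(s, s_p)                   # the inner product is invariant
```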

    3.2 Creating Tensors from Vectors

    We have seen that the inner product of a vector with a covector is

$$s = x^\mu y_\mu.$$

In this case the indices are paired, indicating by the Einstein convention a summation over all possible values of the index. We can also multiply vectors and covectors without pairing the indices, and therefore without summation. For example, in three dimensions, we get

$$s^\mu{}_\nu = x^\mu y_\nu = \begin{pmatrix} x^1 y_1 & x^1 y_2 & x^1 y_3 \\ x^2 y_1 & x^2 y_2 & x^2 y_3 \\ x^3 y_1 & x^3 y_2 & x^3 y_3 \end{pmatrix}.$$

This object still looks very much like a matrix, since a matrix is also nothing more or less than an array of numbers labelled with two indices. To check if this is a true matrix, or something else, we need to see how it transforms. From linear algebra, we know that if $A$ is a matrix representing a linear mapping, and $S$ is a change-of-basis matrix (from the unprimed to the primed coordinate system), then $A' = SAS^{-1}$, where $A'$ represents the matrix $A$ in the primed coordinate system. Now,

$$s'^\mu{}_\nu = x'^\mu y'_\nu = A^\mu{}_\sigma x^\sigma \left(A^{-1}\right)^\rho{}_\nu y_\rho = A^\mu{}_\sigma \left(x^\sigma y_\rho\right) \left(A^{-1}\right)^\rho{}_\nu = A^\mu{}_\sigma \, s^\sigma{}_\rho \left(A^{-1}\right)^\rho{}_\nu,$$

so that $s$ transforms like an ordinary matrix ($s' = A s A^{-1}$), which means that $s^\mu{}_\nu$ is indeed an ordinary matrix. But if we instead use two covectors,

$$t_{\mu\nu} = x_\mu y_\nu = \begin{pmatrix} x_1 y_1 & x_1 y_2 & x_1 y_3 \\ x_2 y_1 & x_2 y_2 & x_2 y_3 \\ x_3 y_1 & x_3 y_2 & x_3 y_3 \end{pmatrix},$$

then we get a tensor with different transformation properties:

$$t'_{\mu\nu} = x'_\mu y'_\nu = \left(A^{-1}\right)^\sigma{}_\mu x_\sigma \left(A^{-1}\right)^\rho{}_\nu y_\rho = \left(A^{-1}\right)^\sigma{}_\mu \left(x_\sigma y_\rho\right) \left(A^{-1}\right)^\rho{}_\nu = \left(A^{-1}\right)^\sigma{}_\mu \, t_{\sigma\rho} \left(A^{-1}\right)^\rho{}_\nu.$$


The difference here lies in the first matrix of the transformation equation. For $s$ it is the transformation matrix for contravariant vectors, while for $t$ it is the transformation for covariant vectors. The tensor $t$ is clearly not a matrix, so we indeed created something new here. The metric tensor $g$ of the previous section is of the same type as $t$.

The beauty of tensors is that they can have an arbitrary number of indices. One can also produce, for instance, a tensor with three indices,

$$A_{\mu\nu\sigma} = x_\mu y_\nu z_\sigma.$$

    In three dimensions, this gives an ordered array of 27 elements, a kind of super matrix.
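Such outer products are one line in einsum notation; note that no index is repeated, so nothing is summed. An illustrative sketch (my addition, with arbitrary test data):

```python
import numpy as np

rng = np.random.default_rng(6)
x, y, z = (rng.normal(size=3) for _ in range(3))

# A_{mu nu sigma} = x_mu y_nu z_sigma: no repeated index, so no summation.
T = np.einsum('i,j,k->ijk', x, y, z)
assert T.shape == (3, 3, 3)            # 27 elements in three dimensions
assert np.isclose(T[0, 1, 2], x[0] * y[1] * z[2])
```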

Let us now introduce some terminology.

The tensor $A_{\mu\nu\sigma}$ is a rank 3 tensor. Tensors of rank 0 are scalars, tensors of rank 1 are vectors and covectors, and tensors of rank 2 are matrices and other types of tensors (such as the metric tensor).

In general, in $n$-dimensional space, a tensor of rank $r$ has $n^r$ elements.

We can distinguish between the contravariant rank and covariant rank of a tensor. $A_{\mu\nu\sigma}$ is a tensor of covariant rank 3 and contravariant rank 0. Its total rank is 3. One can also produce tensors of, for instance, contravariant rank 2 and covariant rank 3, $B^{\mu\nu}{}_{\sigma\rho\tau}$, with total rank 5.

Typically, when tensor mathematics is applied, the meaning of each index has been defined beforehand: the first index means this, the second means that, etc. As long as this is well-defined, one can have covariant and contravariant indices in any order.

Remark: Although a multiplication (without summation) of $m$ vectors and $n$ covectors produces a tensor of rank $m + n$, not every tensor of rank $m + n$ can be constructed as such a product. Tensors are much more general than these simple products of vectors and covectors. It is therefore important to step away from this picture of combining vectors and covectors into a tensor, and to consider this construction as nothing more than a simple example.

Remark: We have said that tensors of rank 2 are matrices. It is not true, however, that all tensors of rank 2 are matrices, as we have seen already with the object $t_{\mu\nu}$.


    4 Tensor Definition and Properties

    Let us now formally define a tensor.

    Definition of Tensor

An $(n, m)$ tensor $t^{\mu_1 \cdots \mu_n}{}_{\nu_1 \cdots \nu_m}$ at a given point in space can be described by an array of numbers with $n + m$ indices that transforms, upon coordinate transformation by a given matrix $A$, in the following way:

$$t'^{\mu_1 \cdots \mu_n}{}_{\nu_1 \cdots \nu_m} = A^{\mu_1}{}_{\sigma_1} \cdots A^{\mu_n}{}_{\sigma_n} \left(A^{-1}\right)^{\rho_1}{}_{\nu_1} \cdots \left(A^{-1}\right)^{\rho_m}{}_{\nu_m} \, t^{\sigma_1 \cdots \sigma_n}{}_{\rho_1 \cdots \rho_m}.$$

An $(n, m)$ tensor in a $k$-dimensional manifold therefore has $k^{n+m}$ elements. It is contravariant in $n$ components and covariant in $m$ components.
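As an illustration of the definition (my addition, not part of the original notes; the dimension and random data are arbitrary), the sketch below transforms a $(1, 1)$ tensor and a $(0, 2)$ tensor for $k = 3$; for the $(1, 1)$ case the law reduces to the familiar matrix rule $AtA^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(7)
k = 3
A = rng.normal(size=(k, k))
Ainv = np.linalg.inv(A)

t = rng.normal(size=(k, k))   # a (1,1) tensor t^mu_nu, e.g. a linear map

# One factor of A per contravariant index, one of A^-1 per covariant index:
# t'^mu_nu = A^mu_sigma (A^-1)^rho_nu t^sigma_rho.
t_prime = np.einsum('ms,rn,sr->mn', A, Ainv, t)
assert np.allclose(t_prime, A @ t @ Ainv)   # the familiar matrix rule

# A (0,2) tensor such as the metric instead picks up two factors of A^-1:
g = rng.normal(size=(k, k))
g_prime = np.einsum('sm,rn,sr->mn', Ainv, Ainv, g)
assert np.allclose(g_prime, Ainv.T @ g @ Ainv)
```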

    4.1 Symmetry and Anti-Symmetry

In practice it often happens that tensors display a certain amount of symmetry, like what we know from matrices. Such symmetries have a strong effect on the properties of tensors. Often, many of these properties, or even tensor equations, can be derived solely on the basis of these symmetries.

A tensor $t$ is called symmetric in the indices $\mu$ and $\nu$ if the elements are equal upon exchange of the index values. So, for a second-rank contravariant tensor,

$$t^{\mu\nu} = t^{\nu\mu} \quad \text{(symmetric (2,0) tensor)}. \tag{4.1}$$

A tensor $t$ is called anti-symmetric in the indices $\mu$ and $\nu$ if the elements are equal in absolute value but opposite in sign upon exchange of the index values. So, for a second-rank contravariant tensor,

$$t^{\mu\nu} = -t^{\nu\mu} \quad \text{(anti-symmetric (2,0) tensor)}. \tag{4.2}$$

It is not useful to speak of symmetry or anti-symmetry in a pair of indices that are not of the same type (either both covariant or both contravariant), i.e., we can only consider symmetry of a tensor with respect to two indices that are either both covariant or both contravariant. The reason for this is that the symmetry properties only remain invariant upon basis transformation if the indices are of the same type.
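Any second-rank tensor with same-type indices splits into a symmetric and an anti-symmetric part, and the symmetry survives a basis transformation, as the following illustrative sketch (my addition, not from the original notes) checks numerically:

```python
import numpy as np

rng = np.random.default_rng(8)
t = rng.normal(size=(3, 3))         # a generic (2,0) tensor t^{mu nu}

t_sym  = 0.5 * (t + t.T)            # symmetric part
t_anti = 0.5 * (t - t.T)            # anti-symmetric part
assert np.allclose(t, t_sym + t_anti)

# Symmetry is preserved under transformation because both indices are of
# the same (contravariant) type: t'^{mu nu} = A^mu_s A^nu_r t^{sr}.
A = rng.normal(size=(3, 3))
t_sym_prime = np.einsum('ms,nr,sr->mn', A, A, t_sym)
assert np.allclose(t_sym_prime, t_sym_prime.T)
```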


    4.2 Contraction of Indices

With tensors of at least one covariant and at least one contravariant index
