    LECTURE NOTES ON

    MATHEMATICAL METHODS

Mihir Sen
Joseph M. Powers

Department of Aerospace and Mechanical Engineering
University of Notre Dame
Notre Dame, Indiana 46556-5637
USA

updated 29 July 2012, 2:31 pm


    CC BY-NC-ND. 29 July 2012, Sen & Powers.

http://creativecommons.org/licenses/by-nc-nd/3.0/

Contents

Preface

1 Multi-variable calculus
   1.1 Implicit functions
   1.2 Functional dependence
   1.3 Coordinate transformations
      1.3.1 Jacobian matrices and metric tensors
      1.3.2 Covariance and contravariance
      1.3.3 Orthogonal curvilinear coordinates
   1.4 Maxima and minima
      1.4.1 Derivatives of integral expressions
      1.4.2 Calculus of variations
   1.5 Lagrange multipliers
   Problems

2 First-order ordinary differential equations
   2.1 Separation of variables
   2.2 Homogeneous equations
   2.3 Exact equations
   2.4 Integrating factors
   2.5 Bernoulli equation
   2.6 Riccati equation
   2.7 Reduction of order
      2.7.1 y absent
      2.7.2 x absent
   2.8 Uniqueness and singular solutions
   2.9 Clairaut equation
   Problems

3 Linear ordinary differential equations
   3.1 Linearity and linear independence
   3.2 Complementary functions
      3.2.1 Equations with constant coefficients
         3.2.1.1 Arbitrary order
         3.2.1.2 First order
         3.2.1.3 Second order
      3.2.2 Equations with variable coefficients
         3.2.2.1 One solution to find another
         3.2.2.2 Euler equation
   3.3 Particular solutions
      3.3.1 Method of undetermined coefficients
      3.3.2 Variation of parameters
      3.3.3 Green's functions
      3.3.4 Operator D
   Problems

4 Series solution methods
   4.1 Power series
      4.1.1 First-order equation
      4.1.2 Second-order equation
         4.1.2.1 Ordinary point
         4.1.2.2 Regular singular point
         4.1.2.3 Irregular singular point
      4.1.3 Higher order equations
   4.2 Perturbation methods
      4.2.1 Algebraic and transcendental equations
      4.2.2 Regular perturbations
      4.2.3 Strained coordinates
      4.2.4 Multiple scales
      4.2.5 Boundary layers
      4.2.6 WKBJ method
      4.2.7 Solutions of the type e^{S(x)}
      4.2.8 Repeated substitution
   Problems

5 Orthogonal functions and Fourier series
   5.1 Sturm-Liouville equations
      5.1.1 Linear oscillator
      5.1.2 Legendre's differential equation
      5.1.3 Chebyshev equation
      5.1.4 Hermite equation
         5.1.4.1 Physicists'
         5.1.4.2 Probabilists'
      5.1.5 Laguerre equation
      5.1.6 Bessel's differential equation


         5.1.6.1 First and second kind
         5.1.6.2 Third kind
         5.1.6.3 Modified Bessel functions
         5.1.6.4 Ber and bei functions
   5.2 Fourier series representation of arbitrary functions
   Problems

6 Vectors and tensors
   6.1 Cartesian index notation
   6.2 Cartesian tensors
      6.2.1 Direction cosines
         6.2.1.1 Scalars
         6.2.1.2 Vectors
         6.2.1.3 Tensors
      6.2.2 Matrix representation
      6.2.3 Transpose of a tensor, symmetric and anti-symmetric tensors
      6.2.4 Dual vector of an anti-symmetric tensor
      6.2.5 Principal axes and tensor invariants
   6.3 Algebra of vectors
      6.3.1 Definition and properties
      6.3.2 Scalar product (dot product, inner product)
      6.3.3 Cross product
      6.3.4 Scalar triple product
      6.3.5 Identities
   6.4 Calculus of vectors
      6.4.1 Vector function of single scalar variable
      6.4.2 Differential geometry of curves
         6.4.2.1 Curves on a plane
         6.4.2.2 Curves in three-dimensional space
   6.5 Line and surface integrals
      6.5.1 Line integrals
      6.5.2 Surface integrals
   6.6 Differential operators
      6.6.1 Gradient of a scalar
      6.6.2 Divergence
         6.6.2.1 Vectors
         6.6.2.2 Tensors
      6.6.3 Curl of a vector
      6.6.4 Laplacian
         6.6.4.1 Scalar
         6.6.4.2 Vector
      6.6.5 Identities
      6.6.6 Curvature revisited
   6.7 Special theorems
      6.7.1 Green's theorem
      6.7.2 Divergence theorem
      6.7.3 Green's identities
      6.7.4 Stokes' theorem
      6.7.5 Leibniz's rule
   Problems

7 Linear analysis
   7.1 Sets
   7.2 Differentiation and integration
      7.2.1 Fréchet derivative
      7.2.2 Riemann integral
      7.2.3 Lebesgue integral
      7.2.4 Cauchy principal value
   7.3 Vector spaces
      7.3.1 Normed spaces
      7.3.2 Inner product spaces
         7.3.2.1 Hilbert space
         7.3.2.2 Non-commutation of the inner product
         7.3.2.3 Minkowski space
         7.3.2.4 Orthogonality
         7.3.2.5 Gram-Schmidt procedure
         7.3.2.6 Projection of a vector onto a new basis
            7.3.2.6.1 Non-orthogonal basis
            7.3.2.6.2 Orthogonal basis
         7.3.2.7 Parseval's equation, convergence, and completeness
      7.3.3 Reciprocal bases
   7.4 Operators
      7.4.1 Linear operators
      7.4.2 Adjoint operators
      7.4.3 Inverse operators
      7.4.4 Eigenvalues and eigenvectors
   7.5 Equations
   7.6 Method of weighted residuals
   7.7 Uncertainty quantification via polynomial chaos
   Problems

8 Linear algebra
   8.1 Determinants and rank
   8.2 Matrix algebra

      8.2.1 Column, row, left and right null spaces
      8.2.2 Matrix multiplication
      8.2.3 Definitions and properties
         8.2.3.1 Identity
         8.2.3.2 Nilpotent
         8.2.3.3 Idempotent
         8.2.3.4 Diagonal
         8.2.3.5 Transpose
         8.2.3.6 Symmetry, anti-symmetry, and asymmetry
         8.2.3.7 Triangular
         8.2.3.8 Positive definite
         8.2.3.9 Permutation
         8.2.3.10 Inverse
         8.2.3.11 Similar matrices
      8.2.4 Equations
         8.2.4.1 Over-constrained systems
         8.2.4.2 Under-constrained systems
         8.2.4.3 Simultaneously over- and under-constrained systems
         8.2.4.4 Square systems
   8.3 Eigenvalues and eigenvectors
      8.3.1 Ordinary eigenvalues and eigenvectors
      8.3.2 Generalized eigenvalues and eigenvectors in the second sense
   8.4 Matrices as linear mappings
   8.5 Complex matrices
   8.6 Orthogonal and unitary matrices
      8.6.1 Orthogonal matrices
      8.6.2 Unitary matrices
   8.7 Discrete Fourier transforms
   8.8 Matrix decompositions
      8.8.1 L D U decomposition
      8.8.2 Cholesky decomposition
      8.8.3 Row echelon form
      8.8.4 Q R decomposition
      8.8.5 Diagonalization
      8.8.6 Jordan canonical form
      8.8.7 Schur decomposition
      8.8.8 Singular value decomposition
      8.8.9 Hessenberg form
   8.9 Projection matrix
   8.10 Method of least squares
      8.10.1 Unweighted least squares
      8.10.2 Weighted least squares

   8.11 Matrix exponential
   8.12 Quadratic form
   8.13 Moore-Penrose inverse
   Problems

9 Dynamical systems
   9.1 Paradigm problems
      9.1.1 Autonomous example
      9.1.2 Non-autonomous example
   9.2 General theory
   9.3 Iterated maps
   9.4 High order scalar differential equations
   9.5 Linear systems
      9.5.1 Homogeneous equations with constant A
         9.5.1.1 N eigenvectors
         9.5.1.2 < N eigenvectors
         9.5.1.3 Summary of method
         9.5.1.4 Alternative method
         9.5.1.5 Fundamental matrix
      9.5.2 Inhomogeneous equations
         9.5.2.1 Undetermined coefficients
         9.5.2.2 Variation of parameters
   9.6 Non-linear systems
      9.6.1 Definitions
      9.6.2 Linear stability
      9.6.3 Lyapunov functions
      9.6.4 Hamiltonian systems
   9.7 Differential-algebraic systems
      9.7.1 Linear homogeneous
      9.7.2 Non-linear
   9.8 Fixed points at infinity
      9.8.1 Poincaré sphere
      9.8.2 Projective space
   9.9 Fractals
      9.9.1 Cantor set
      9.9.2 Koch curve
      9.9.3 Menger sponge
      9.9.4 Weierstrass function
      9.9.5 Mandelbrot and Julia sets
   9.10 Bifurcations
      9.10.1 Pitchfork bifurcation
      9.10.2 Transcritical bifurcation

      9.10.3 Saddle-node bifurcation
      9.10.4 Hopf bifurcation
   9.11 Lorenz equations
      9.11.1 Linear stability
      9.11.2 Non-linear stability: center manifold projection
      9.11.3 Transition to chaos
   Problems

10 Appendix
   10.1 Taylor series
   10.2 Trigonometric relations
   10.3 Hyperbolic functions
   10.4 Routh-Hurwitz criterion
   10.5 Infinite series
   10.6 Asymptotic expansions
   10.7 Special functions
      10.7.1 Gamma function
      10.7.2 Beta function
      10.7.3 Riemann zeta function
      10.7.4 Error functions
      10.7.5 Fresnel integrals
      10.7.6 Sine-, cosine-, and exponential-integral functions
      10.7.7 Elliptic integrals
      10.7.8 Hypergeometric functions
      10.7.9 Airy functions
      10.7.10 Dirac distribution and Heaviside function
   10.8 Total derivative
   10.9 Leibniz's rule
   10.10 Complex numbers
      10.10.1 Euler's formula
      10.10.2 Polar and Cartesian representations
      10.10.3 Cauchy-Riemann equations
   Problems

Bibliography

    Preface

These are lecture notes for AME 60611 Mathematical Methods I, the first of a pair of courses on applied mathematics taught in the Department of Aerospace and Mechanical Engineering of the University of Notre Dame. Most of the students in this course are beginning graduate students in engineering coming from a variety of backgrounds. The course objective is to survey topics in applied mathematics, including multidimensional calculus, ordinary differential equations, perturbation methods, vectors and tensors, linear analysis, linear algebra, and non-linear dynamic systems. In short, the course fully explores linear systems and considers effects of non-linearity, especially those types that can be treated analytically. The companion course, AME 60612, covers complex variables, integral transforms, and partial differential equations.

These notes emphasize method and technique over rigor and completeness; the student should call on textbooks and other reference materials. It should also be remembered that practice is essential to learning; the student would do well to apply the techniques presented by working as many problems as possible. The notes, along with much information on the course, can be found at http://www.nd.edu/~powers/ame.60611 . At this stage, anyone is free to use the notes under the auspices of the Creative Commons license below.

These notes have appeared in various forms over the past years. An especially general tightening of notation and language, improvement of figures, and addition of numerous small topics was implemented in 2011. Fall 2011 students were also especially diligent in identifying additional areas for improvement. We would be happy to hear further suggestions from you.

Mihir Sen
[email protected]
http://www.nd.edu/~msen

Joseph M. Powers
[email protected]
http://www.nd.edu/~powers

Notre Dame, Indiana; USA
29 July 2012

The content of this book is licensed under Creative Commons Attribution-Noncommercial-No Derivative Works 3.0.

    Chapter 1

    Multi-variable calculus

    see Kaplan, Chapter 2: 2.1-2.22, Chapter 3: 3.9,

    Here we consider many fundamental notions from the calculus of many variables.

    1.1 Implicit functions

    The implicit function theorem is as follows:

    Theorem

For a given f(x, y) with f = 0 and ∂f/∂y ≠ 0 at the point (x_0, y_0), there corresponds a unique function y(x) in the neighborhood of (x_0, y_0).

More generally, we can think of a relation such as

    f(x_1, x_2, \ldots, x_N, y) = 0,    (1.1)

also written as

    f(x_n, y) = 0, \quad n = 1, 2, \ldots, N,    (1.2)

in some region as an implicit function of y with respect to the other variables. We cannot have ∂f/∂y = 0, because then f would not depend on y in this region. In principle, we can write

    y = y(x_1, x_2, \ldots, x_N), \quad \text{or} \quad y = y(x_n), \quad n = 1, \ldots, N,    (1.3)

if ∂f/∂y ≠ 0.

The derivative ∂y/∂x_n can be determined from f = 0 without explicitly solving for y. First, from the definition of the total derivative, we have

    df = \frac{\partial f}{\partial x_1} dx_1 + \frac{\partial f}{\partial x_2} dx_2 + \ldots + \frac{\partial f}{\partial x_n} dx_n + \ldots + \frac{\partial f}{\partial x_N} dx_N + \frac{\partial f}{\partial y} dy = 0.    (1.4)

Differentiating with respect to x_n while holding all the other x_m, m ≠ n, constant, we get

    \frac{\partial f}{\partial x_n} + \frac{\partial f}{\partial y} \frac{\partial y}{\partial x_n} = 0,    (1.5)


so that

    \frac{\partial y}{\partial x_n} = -\frac{\partial f / \partial x_n}{\partial f / \partial y},    (1.6)

which can be found if ∂f/∂y ≠ 0. That is to say, y can be considered a function of x_n if ∂f/∂y ≠ 0.
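Eq. (1.6) is easy to check with a computer algebra system. The sketch below applies it to f(x, y) = x^2 + y^2 - 1, the unit circle; this particular f is an assumed illustration, not an example from the notes:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Assumed illustration: the unit circle f(x, y) = 0 defines y(x)
# implicitly wherever df/dy = 2*y is nonzero.
f = x**2 + y**2 - 1

# Eq. (1.6): dy/dx = -(df/dx)/(df/dy).
dydx = -sp.diff(f, x) / sp.diff(f, y)
print(dydx)  # -x/y

# Cross-check against the explicit upper branch y = sqrt(1 - x**2).
y_branch = sp.sqrt(1 - x**2)
print(sp.simplify(sp.diff(y_branch, x) - dydx.subs(y, y_branch)))  # 0
```

The same formula applies to any branch of the implicit relation, as long as ∂f/∂y ≠ 0 on that branch.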

Let us now consider the equations

    f(x, y, u, v) = 0,    (1.7)
    g(x, y, u, v) = 0.    (1.8)

Under certain circumstances, we can unravel Eqs. (1.7-1.8), either algebraically or numerically, to form u = u(x, y), v = v(x, y). The conditions for the existence of such a functional dependency can be found by differentiation of the original equations; for example, differentiating Eq. (1.7) gives

    df = \frac{\partial f}{\partial x} dx + \frac{\partial f}{\partial y} dy + \frac{\partial f}{\partial u} du + \frac{\partial f}{\partial v} dv = 0.    (1.9)

Holding y constant and dividing by dx, we get

    \frac{\partial f}{\partial x} + \frac{\partial f}{\partial u} \frac{\partial u}{\partial x} + \frac{\partial f}{\partial v} \frac{\partial v}{\partial x} = 0.    (1.10)

Operating on Eq. (1.8) in the same manner, we get

    \frac{\partial g}{\partial x} + \frac{\partial g}{\partial u} \frac{\partial u}{\partial x} + \frac{\partial g}{\partial v} \frac{\partial v}{\partial x} = 0.    (1.11)

Similarly, holding x constant and dividing by dy, we get

    \frac{\partial f}{\partial y} + \frac{\partial f}{\partial u} \frac{\partial u}{\partial y} + \frac{\partial f}{\partial v} \frac{\partial v}{\partial y} = 0,    (1.12)

    \frac{\partial g}{\partial y} + \frac{\partial g}{\partial u} \frac{\partial u}{\partial y} + \frac{\partial g}{\partial v} \frac{\partial v}{\partial y} = 0.    (1.13)

Equations (1.10,1.11) can be solved for ∂u/∂x and ∂v/∂x, and Eqs. (1.12,1.13) can be solved for ∂u/∂y and ∂v/∂y by using the well known Cramer's¹ rule; see Eq. (8.93). To solve for ∂u/∂x and ∂v/∂x, we first write Eqs. (1.10,1.11) in matrix form:

    \begin{pmatrix} \frac{\partial f}{\partial u} & \frac{\partial f}{\partial v} \\ \frac{\partial g}{\partial u} & \frac{\partial g}{\partial v} \end{pmatrix} \begin{pmatrix} \frac{\partial u}{\partial x} \\ \frac{\partial v}{\partial x} \end{pmatrix} = \begin{pmatrix} -\frac{\partial f}{\partial x} \\ -\frac{\partial g}{\partial x} \end{pmatrix}.    (1.14)

¹Gabriel Cramer, 1704-1752, well-traveled Swiss-born mathematician who did enunciate his well known rule, but was not the first to do so.
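The Cramer's rule solution of Eq. (1.14) can be sketched symbolically. In the sketch below, f_u, f_v, g_u, g_v, f_x, g_x are free symbols standing in for the partial derivatives; the column-replacement step is Cramer's rule itself, and the result is checked against a direct linear solve:

```python
import sympy as sp

# Free symbols standing in for the partial derivatives in Eq. (1.14).
fu, fv, gu, gv, fx, gx = sp.symbols('f_u f_v g_u g_v f_x g_x')

A = sp.Matrix([[fu, fv], [gu, gv]])   # coefficient matrix
b = sp.Matrix([-fx, -gx])             # right-hand side

# Cramer's rule: replace one column of A by b and take the ratio of
# determinants; the denominator is the Jacobian d(f, g)/d(u, v).
J = A.det()
Au = A.copy(); Au[:, 0] = b           # numerator matrix for du/dx
Av = A.copy(); Av[:, 1] = b           # numerator matrix for dv/dx
du_dx = sp.cancel(Au.det() / J)
dv_dx = sp.cancel(Av.det() / J)

# The same answer comes from solving the linear system directly.
sol = A.solve(b)
assert sp.simplify(du_dx - sol[0]) == 0
assert sp.simplify(dv_dx - sol[1]) == 0
```

The rule fails exactly where the Jacobian determinant f_u g_v - f_v g_u vanishes, which is the degeneracy discussed below.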


Using the formula from Eq. (1.15) to solve for the desired derivative, we get

    \frac{\partial u}{\partial x} = \frac{\begin{vmatrix} -\frac{\partial f}{\partial x} & \frac{\partial f}{\partial v} \\ -\frac{\partial g}{\partial x} & \frac{\partial g}{\partial v} \end{vmatrix}}{\begin{vmatrix} \frac{\partial f}{\partial u} & \frac{\partial f}{\partial v} \\ \frac{\partial g}{\partial u} & \frac{\partial g}{\partial v} \end{vmatrix}}.    (1.23)

Substituting, we get

    \frac{\partial u}{\partial x} = \frac{\begin{vmatrix} -1 & 1 \\ -y & u \end{vmatrix}}{\begin{vmatrix} 6u^5 + 1 & 1 \\ v & u \end{vmatrix}} = \frac{y - u}{u(6u^5 + 1) - v}.    (1.24)

Note when

    v = 6u^6 + u,    (1.25)

that the relevant Jacobian determinant is zero; at such points we can determine neither ∂u/∂x nor ∂u/∂y; thus, for such points we cannot form u(x, y).

At points where the relevant Jacobian determinant ∂(f, g)/∂(u, v) ≠ 0 (which includes nearly all of the (x, y) plane), given a local value of (x, y), we can use algebra to find a corresponding u and v, which may be multivalued, and use the formula developed to find the local value of the partial derivative.

    1.2 Functional dependence

Let u = u(x, y) and v = v(x, y). If we can write u = g(v) or v = h(u), then u and v are said to be functionally dependent. If functional dependence between u and v exists, then we can consider f(u, v) = 0. So,

    \frac{\partial f}{\partial u} \frac{\partial u}{\partial x} + \frac{\partial f}{\partial v} \frac{\partial v}{\partial x} = 0,    (1.26)

    \frac{\partial f}{\partial u} \frac{\partial u}{\partial y} + \frac{\partial f}{\partial v} \frac{\partial v}{\partial y} = 0.    (1.27)

    In matrix form, this is ux

    vx

    uy

    vy

    fufv

    =

    00

    . (1.28)

    Since the right hand side is zero, and we desire a non-trivial solution, the determinant of thecoefficient matrix must be zero for functional dependency, i.e. ux vxu

    yvy

    = 0. (1.29)CC BY-NC-ND. 29 July 2012, Sen & Powers.


Note, since $\det J = \det J^T$, that this is equivalent to

$$J = \begin{vmatrix} \frac{\partial u}{\partial x} & \frac{\partial u}{\partial y} \\ \frac{\partial v}{\partial x} & \frac{\partial v}{\partial y} \end{vmatrix} = \frac{\partial(u,v)}{\partial(x,y)} = 0. \qquad (1.30)$$

That is, the Jacobian determinant $J$ must be zero for functional dependence.

Example 1.2
Determine if

$$u = y + z, \qquad (1.31)$$
$$v = x + 2z^2, \qquad (1.32)$$
$$w = x - 4yz - 2y^2, \qquad (1.33)$$

are functionally dependent.

The determinant of the resulting coefficient matrix, by extension to three functions of three variables, is

$$\frac{\partial(u,v,w)}{\partial(x,y,z)} = \begin{vmatrix} \frac{\partial u}{\partial x} & \frac{\partial u}{\partial y} & \frac{\partial u}{\partial z} \\ \frac{\partial v}{\partial x} & \frac{\partial v}{\partial y} & \frac{\partial v}{\partial z} \\ \frac{\partial w}{\partial x} & \frac{\partial w}{\partial y} & \frac{\partial w}{\partial z} \end{vmatrix} = \begin{vmatrix} \frac{\partial u}{\partial x} & \frac{\partial v}{\partial x} & \frac{\partial w}{\partial x} \\ \frac{\partial u}{\partial y} & \frac{\partial v}{\partial y} & \frac{\partial w}{\partial y} \\ \frac{\partial u}{\partial z} & \frac{\partial v}{\partial z} & \frac{\partial w}{\partial z} \end{vmatrix}, \qquad (1.34)$$

$$= \begin{vmatrix} 0 & 1 & 1 \\ 1 & 0 & -4(y+z) \\ 1 & 4z & -4y \end{vmatrix}, \qquad (1.35)$$
$$= (-1)\left(-4y + 4(y+z)\right) + (1)(4z), \qquad (1.36)$$
$$= 4y - 4y - 4z + 4z, \qquad (1.37)$$
$$= 0. \qquad (1.38)$$

So, $u, v, w$ are functionally dependent. In fact $w = v - 2u^2$.
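Both the vanishing Jacobian determinant and the explicit dependence $w = v - 2u^2$ can be spot-checked at arbitrary points; a short sketch:

```python
import random

def det3(m):
    """Determinant of a 3x3 nested-list matrix by cofactor expansion."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

random.seed(0)
for _ in range(5):
    x, y, z = (random.uniform(-2, 2) for _ in range(3))
    # u, v, w from Eqs. (1.31)-(1.33)
    u, v, w = y + z, x + 2*z**2, x - 4*y*z - 2*y**2
    jac = [[0, 1, 1],                      # du/dx, du/dy, du/dz
           [1, 0, 4*z],                    # dv/dx, dv/dy, dv/dz
           [1, -4*z - 4*y, -4*y]]          # dw/dx, dw/dy, dw/dz
    assert abs(det3(jac)) < 1e-12          # determinant vanishes everywhere
    assert abs(w - (v - 2*u**2)) < 1e-12   # explicit dependence w = v - 2u**2
print("functionally dependent")
```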

Example 1.3
Let

$$x + y + z = 0, \qquad (1.39)$$
$$x^2 + y^2 + z^2 + 2xz = 1. \qquad (1.40)$$

Can $x$ and $y$ be considered as functions of $z$?

If $x = x(z)$ and $y = y(z)$, then $dx/dz$ and $dy/dz$ must exist. If we take

$$f(x,y,z) = x + y + z = 0, \qquad (1.41)$$
$$g(x,y,z) = x^2 + y^2 + z^2 + 2xz - 1 = 0, \qquad (1.42)$$

$$df = \frac{\partial f}{\partial z}\,dz + \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy = 0, \qquad (1.43)$$


Figure 1.1: Surfaces of $x + y + z = 0$ and $x^2 + y^2 + z^2 + 2xz = 1$, and their loci of intersection.

then the solution matrix $(dx/dz,\, dy/dz)^T$ is found as before:

$$\frac{dx}{dz} = \frac{\begin{vmatrix} -\frac{\partial f}{\partial z} & \frac{\partial f}{\partial y} \\ -\frac{\partial g}{\partial z} & \frac{\partial g}{\partial y} \end{vmatrix}}{\begin{vmatrix} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \\ \frac{\partial g}{\partial x} & \frac{\partial g}{\partial y} \end{vmatrix}} = \frac{\begin{vmatrix} -1 & 1 \\ -(2z+2x) & 2y \end{vmatrix}}{\begin{vmatrix} 5 & 1 \\ 2x+2z & 2y \end{vmatrix}} = \frac{-2y + 2z + 2x}{10y - 2x - 2z}, \qquad (1.56)$$

$$\frac{dy}{dz} = \frac{\begin{vmatrix} \frac{\partial f}{\partial x} & -\frac{\partial f}{\partial z} \\ \frac{\partial g}{\partial x} & -\frac{\partial g}{\partial z} \end{vmatrix}}{\begin{vmatrix} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \\ \frac{\partial g}{\partial x} & \frac{\partial g}{\partial y} \end{vmatrix}} = \frac{\begin{vmatrix} 5 & -1 \\ 2x+2z & -(2z+2x) \end{vmatrix}}{\begin{vmatrix} 5 & 1 \\ 2x+2z & 2y \end{vmatrix}} = \frac{-8x - 8z}{10y - 2x - 2z}. \qquad (1.57)$$

The two original functions and their loci of intersection are plotted in Fig. 1.2. Straightforward algebra in this case shows that an explicit dependency exists:

$$x(z) = \frac{-6z \pm \sqrt{2}\sqrt{13 - 8z^2}}{26}, \qquad (1.58)$$

$$y(z) = \frac{4z \mp 5\sqrt{2}\sqrt{13 - 8z^2}}{26}. \qquad (1.59)$$

These curves represent the projection of the curve of intersection on the $x,z$ and $y,z$ planes, respectively. In both cases, the projections are ellipses.
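The Cramer's-rule derivative of Eq. (1.56) can be compared against the derivative of the explicit branch of Eq. (1.58); a numerical sketch (the plus branch is chosen):

```python
import math

# Explicit branch of Eqs. (1.58)-(1.59) for 5x + y + z = 0, x^2 + y^2 + z^2 + 2xz = 1:
def x_of_z(z): return (-6*z + math.sqrt(2)*math.sqrt(13 - 8*z**2)) / 26
def y_of_z(z): return ( 4*z - 5*math.sqrt(2)*math.sqrt(13 - 8*z**2)) / 26

z = 0.3
x, y = x_of_z(z), y_of_z(z)
assert abs(5*x + y + z) < 1e-12                      # first constraint satisfied
assert abs(x**2 + y**2 + z**2 + 2*x*z - 1) < 1e-12   # second constraint satisfied

cramer = (-2*y + 2*z + 2*x) / (10*y - 2*x - 2*z)     # dx/dz from Eq. (1.56)
h = 1e-6
numeric = (x_of_z(z + h) - x_of_z(z - h)) / (2*h)    # centered difference
print(cramer, numeric)
```

The two slopes agree, confirming that the implicit-function calculation matches the explicit solution.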

1.3 Coordinate transformations

Many problems are formulated in three-dimensional Cartesian³ space. However, many of these problems, especially those involving curved geometrical bodies, are more efficiently

³René Descartes, 1596-1650, French mathematician and philosopher.


Figure 1.2: Surfaces of $5x + y + z = 0$ and $x^2 + y^2 + z^2 + 2xz = 1$, and their loci of intersection.

posed in a non-Cartesian, curvilinear coordinate system. To facilitate analysis involving such geometries, one needs techniques to transform from one coordinate system to another.

For this section, we will utilize an index notation, introduced by Einstein.⁴ We will take untransformed Cartesian coordinates to be represented by $(\xi^1, \xi^2, \xi^3)$. Here the superscript is an index and does not represent a power of $\xi$. We will denote this point by $\xi^i$, where $i = 1, 2, 3$. Because the space is Cartesian, we have the usual Euclidean⁵ distance from Pythagoras'⁶ theorem for a differential arc length $ds$:

$$(ds)^2 = \left(d\xi^1\right)^2 + \left(d\xi^2\right)^2 + \left(d\xi^3\right)^2, \qquad (1.60)$$

$$(ds)^2 = \sum_{i=1}^{3} d\xi^i\, d\xi^i \equiv d\xi^i\, d\xi^i. \qquad (1.61)$$

Here we have adopted Einstein's summation convention that when an index appears twice, a summation from 1 to 3 is understood. Though it makes little difference here, to strictly adhere to the conventions of the Einstein notation, which require a balance of sub- and superscripts, we should more formally take

$$(ds)^2 = d\xi^j\, \delta_{ji}\, d\xi^i = d\xi_i\, d\xi^i, \qquad (1.62)$$

⁴Albert Einstein, 1879-1955, German/American physicist and mathematician.
⁵Euclid of Alexandria, c. 325 B.C.-c. 265 B.C., Greek geometer.
⁶Pythagoras of Samos, c. 570-c. 490 B.C., Ionian Greek mathematician, philosopher, and mystic to whom this theorem is traditionally credited.


where $\delta_{ji}$ is the Kronecker⁷ delta,

$$\delta_{ji} = \delta^{ji} = \delta^j_i = \begin{cases} 1, & i = j, \\ 0, & i \neq j. \end{cases} \qquad (1.63)$$

In matrix form, the Kronecker delta is simply the identity matrix $I$, e.g.

$$\delta_{ji} = \delta^{ji} = \delta^j_i = I = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \qquad (1.64)$$

Now let us consider a point $P$ whose representation in Cartesian coordinates is $(\xi^1, \xi^2, \xi^3)$ and map those coordinates so that it is now represented in a more convenient $(x^1, x^2, x^3)$ space. This mapping is achieved by defining the following functional dependencies:

$$x^1 = x^1(\xi^1, \xi^2, \xi^3), \qquad (1.65)$$
$$x^2 = x^2(\xi^1, \xi^2, \xi^3), \qquad (1.66)$$
$$x^3 = x^3(\xi^1, \xi^2, \xi^3). \qquad (1.67)$$

We note that in this example we make the common presumption that the entity $P$ is invariant and that it has different representations in different coordinate systems. Thus, the coordinate axes change, but the location of $P$ does not. This is known as an alias transformation. This contrasts another common approach in which a point is represented in an original space, and after application of a transformation, it is again represented in the original space in an altered state. This is known as an alibi transformation. The alias approach transforms the axes; the alibi approach transforms the elements of the space.

Taking derivatives can tell us whether the inverse exists.

$$dx^1 = \frac{\partial x^1}{\partial \xi^1} d\xi^1 + \frac{\partial x^1}{\partial \xi^2} d\xi^2 + \frac{\partial x^1}{\partial \xi^3} d\xi^3 = \frac{\partial x^1}{\partial \xi^j} d\xi^j, \qquad (1.68)$$

$$dx^2 = \frac{\partial x^2}{\partial \xi^1} d\xi^1 + \frac{\partial x^2}{\partial \xi^2} d\xi^2 + \frac{\partial x^2}{\partial \xi^3} d\xi^3 = \frac{\partial x^2}{\partial \xi^j} d\xi^j, \qquad (1.69)$$

$$dx^3 = \frac{\partial x^3}{\partial \xi^1} d\xi^1 + \frac{\partial x^3}{\partial \xi^2} d\xi^2 + \frac{\partial x^3}{\partial \xi^3} d\xi^3 = \frac{\partial x^3}{\partial \xi^j} d\xi^j, \qquad (1.70)$$

$$\begin{pmatrix} dx^1 \\ dx^2 \\ dx^3 \end{pmatrix} = \begin{pmatrix} \frac{\partial x^1}{\partial \xi^1} & \frac{\partial x^1}{\partial \xi^2} & \frac{\partial x^1}{\partial \xi^3} \\ \frac{\partial x^2}{\partial \xi^1} & \frac{\partial x^2}{\partial \xi^2} & \frac{\partial x^2}{\partial \xi^3} \\ \frac{\partial x^3}{\partial \xi^1} & \frac{\partial x^3}{\partial \xi^2} & \frac{\partial x^3}{\partial \xi^3} \end{pmatrix} \begin{pmatrix} d\xi^1 \\ d\xi^2 \\ d\xi^3 \end{pmatrix}, \qquad (1.71)$$

$$dx^i = \frac{\partial x^i}{\partial \xi^j} d\xi^j. \qquad (1.72)$$

In order for the inverse to exist we must have a non-zero Jacobian determinant for the transformation, i.e.

$$\frac{\partial(x^1, x^2, x^3)}{\partial(\xi^1, \xi^2, \xi^3)} \neq 0. \qquad (1.73)$$

⁷Leopold Kronecker, 1823-1891, German/Prussian mathematician.


As long as Eq. (1.73) is satisfied, the inverse transformation exists:

$$\xi^1 = \xi^1(x^1, x^2, x^3), \qquad (1.74)$$
$$\xi^2 = \xi^2(x^1, x^2, x^3), \qquad (1.75)$$
$$\xi^3 = \xi^3(x^1, x^2, x^3). \qquad (1.76)$$

Likewise then,

$$d\xi^i = \frac{\partial \xi^i}{\partial x^j} dx^j. \qquad (1.77)$$

1.3.1 Jacobian matrices and metric tensors

Defining the Jacobian matrix⁸ $J$ to be associated with the inverse transformation, Eq. (1.77), we take

$$J = \frac{\partial \xi^i}{\partial x^j} = \begin{pmatrix} \frac{\partial \xi^1}{\partial x^1} & \frac{\partial \xi^1}{\partial x^2} & \frac{\partial \xi^1}{\partial x^3} \\ \frac{\partial \xi^2}{\partial x^1} & \frac{\partial \xi^2}{\partial x^2} & \frac{\partial \xi^2}{\partial x^3} \\ \frac{\partial \xi^3}{\partial x^1} & \frac{\partial \xi^3}{\partial x^2} & \frac{\partial \xi^3}{\partial x^3} \end{pmatrix}. \qquad (1.78)$$

We can then rewrite $d\xi^i$ from Eq. (1.77) in Gibbs⁹ vector notation as

$$d\xi = J \cdot dx. \qquad (1.79)$$

Now for Euclidean spaces, distance must be independent of coordinate systems, so we require

$$(ds)^2 = d\xi^i\, d\xi^i = \left(\frac{\partial \xi^i}{\partial x^k} dx^k\right)\left(\frac{\partial \xi^i}{\partial x^l} dx^l\right) = dx^k \underbrace{\frac{\partial \xi^i}{\partial x^k}\frac{\partial \xi^i}{\partial x^l}}_{g_{kl}} dx^l. \qquad (1.80)$$

In Gibbs vector notation Eq. (1.80) becomes¹⁰

$$(ds)^2 = d\xi^T \cdot d\xi, \qquad (1.81)$$
$$= (J \cdot dx)^T \cdot (J \cdot dx). \qquad (1.82)$$

⁸The definition we adopt influences the form of many of our formulæ given throughout the remainder of these notes. There are three obvious alternates: i) An argument can be made that a better definition of $J$ would be the transpose of our Jacobian matrix: $J \to J^T$. This is because when one considers that the differential operator acts first, the Jacobian matrix is really $\partial \xi^j/\partial x^i$, and the alternative definition is more consistent with traditional matrix notation, which would have the first row as $(\partial \xi^1/\partial x^1,\ \partial \xi^2/\partial x^1,\ \partial \xi^3/\partial x^1)$; ii) Many others, e.g. Kay, adopt as $J$ the inverse of our Jacobian matrix: $J \to J^{-1}$. This Jacobian matrix is thus defined in terms of the forward transformation, $\partial x^i/\partial \xi^j$; or iii) One could adopt $J \to (J^T)^{-1}$. As long as one realizes the implications of the notation, however, the convention adopted ultimately does not matter.

⁹Josiah Willard Gibbs, 1839-1903, prolific American mechanical engineer and mathematician with a lifetime affiliation with Yale University as well as the recipient of the first American doctorate in engineering.

¹⁰Common alternate formulations of vector mechanics of non-Cartesian spaces view the Jacobian as an intrinsic part of the dot product and would say instead that by definition $(ds)^2 = dx \cdot dx$. Such formulations have no need for the transpose operation, especially since they do not carry forward simply to non-Cartesian systems. The formulation used here has the advantage of explicitly recognizing the linear algebra operations necessary to form the scalar $ds$. These same alternate notations reserve the dot product for that between a vector and a vector and would hold instead that $d\xi = J\,dx$. However, this could be confused with raising the dimension of the quantity of interest; whereas we use the dot to lower the dimension.


Now, it can be shown that $(J \cdot dx)^T = dx^T \cdot J^T$ (see also Sec. 8.2.3.5), so

$$(ds)^2 = dx^T \cdot \underbrace{J^T \cdot J}_{G} \cdot dx. \qquad (1.83)$$

If we define the metric tensor, $g_{kl}$ or $G$, as follows:

$$g_{kl} = \frac{\partial \xi^i}{\partial x^k}\frac{\partial \xi^i}{\partial x^l}, \qquad (1.84)$$
$$G = J^T \cdot J, \qquad (1.85)$$

then we have, equivalently in both Einstein and Gibbs notations,

$$(ds)^2 = dx^k\, g_{kl}\, dx^l, \qquad (1.86)$$
$$(ds)^2 = dx^T \cdot G \cdot dx. \qquad (1.87)$$

Note that in Einstein notation, one can loosely imagine super-scripted terms in a denominator as being sub-scripted terms in a corresponding numerator. Now $g_{kl}$ can be represented as a matrix. If we define

$$g = \det g_{kl}, \qquad (1.88)$$

it can be shown that the ratio of volumes of differential elements in one space to that of the other is given by

$$d\xi^1\, d\xi^2\, d\xi^3 = \sqrt{g}\; dx^1\, dx^2\, dx^3. \qquad (1.89)$$

Thus, transformations for which $g = 1$ are volume-preserving. Volume-preserving transformations also have $J = \det J = 1$. It can also be shown that if $J = \det J > 0$, the transformation is locally orientation-preserving. If $J = \det J < 0$, the transformation is orientation-reversing, and thus involves a reflection. So, if $J = \det J = 1$, the transformation is volume- and orientation-preserving.
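The relations $G = J^T \cdot J$ and $\sqrt{g} = |\det J|$ (since $\det G = (\det J)^2$) are easy to confirm in exact rational arithmetic; the sketch below uses a hypothetical constant non-orthogonal Jacobian matrix:

```python
from fractions import Fraction as F

# A sample constant, non-orthogonal Jacobian matrix (hypothetical values):
J = [[F(1, 2), F(-1), F(0)],
     [F(1),    F(1),  F(0)],
     [F(0),    F(0),  F(1)]]

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# Metric tensor G = J^T . J, Eq. (1.85):
G = [[sum(J[k][i]*J[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
g = det3(G)
# det G = (det J)^2, so the volume ratio sqrt(g) equals |det J|:
print(g, det3(J)**2)   # → 9/4 9/4
```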

We also require dependent variables and all derivatives to take on the same values at corresponding points in each space, e.g. if $\phi = f(\xi^1, \xi^2, \xi^3) = h(x^1, x^2, x^3)$ is a dependent variable defined at $(\xi^1, \xi^2, \xi^3)$, and $(\xi^1, \xi^2, \xi^3)$ maps into $(x^1, x^2, x^3)$, we require $f(\xi^1, \xi^2, \xi^3) = h(x^1, x^2, x^3)$. The chain rule lets us transform derivatives to other spaces:

$$\left( \frac{\partial \phi}{\partial x^1}\ \ \frac{\partial \phi}{\partial x^2}\ \ \frac{\partial \phi}{\partial x^3} \right) = \left( \frac{\partial \phi}{\partial \xi^1}\ \ \frac{\partial \phi}{\partial \xi^2}\ \ \frac{\partial \phi}{\partial \xi^3} \right) \underbrace{\begin{pmatrix} \frac{\partial \xi^1}{\partial x^1} & \frac{\partial \xi^1}{\partial x^2} & \frac{\partial \xi^1}{\partial x^3} \\ \frac{\partial \xi^2}{\partial x^1} & \frac{\partial \xi^2}{\partial x^2} & \frac{\partial \xi^2}{\partial x^3} \\ \frac{\partial \xi^3}{\partial x^1} & \frac{\partial \xi^3}{\partial x^2} & \frac{\partial \xi^3}{\partial x^3} \end{pmatrix}}_{J}, \qquad (1.90)$$

$$\frac{\partial \phi}{\partial x^i} = \frac{\partial \phi}{\partial \xi^j}\frac{\partial \xi^j}{\partial x^i}. \qquad (1.91)$$

Equation (1.91) can also be inverted, given that $g \neq 0$, to find $(\partial \phi/\partial \xi^1, \partial \phi/\partial \xi^2, \partial \phi/\partial \xi^3)$.


Employing Gibbs notation¹¹ we can write Eq. (1.91) as

$$\nabla_x^T \phi = \nabla_\xi^T \phi \cdot J. \qquad (1.92)$$

The fact that the gradient operator required the use of row vectors in conjunction with the Jacobian matrix, while the transformation of distance, earlier in this section, Eq. (1.79), required the use of column vectors is of fundamental importance, and will be soon examined further in Sec. 1.3.2 where we distinguish between what are known as covariant and contravariant vectors.

Transposing both sides of Eq. (1.92), we could also say

$$\nabla_x \phi = J^T \cdot \nabla_\xi \phi. \qquad (1.93)$$

Inverting, we then have

$$\nabla_\xi \phi = (J^T)^{-1} \cdot \nabla_x \phi. \qquad (1.94)$$

Thus, in general, we could say for the gradient operator

$$\nabla_\xi = (J^T)^{-1} \cdot \nabla_x. \qquad (1.95)$$

Contrasting Eq. (1.95) with Eq. (1.79), $d\xi = J \cdot dx$, we see the gradient operation transforms in a fundamentally different way than the differential operation $d$, unless we restrict attention to an unusual $J$, one whose transpose is equal to its inverse. We will sometimes make this restriction, and sometimes not. When we choose such a special $J$, there will be many additional simplifications in the analysis; these are realized because it will be seen for many such transformations that nearly all of the original Cartesian character will be retained, albeit in a rotated, but otherwise undeformed, coordinate system. We shall later identify a matrix whose transpose is equal to its inverse as an orthogonal matrix, $Q$: $Q^T = Q^{-1}$, and study it in detail in Secs. 6.2.1 and 8.6.

One can also show the relation between $\partial \xi^i/\partial x^j$ and $\partial x^i/\partial \xi^j$ to be

$$\frac{\partial \xi^i}{\partial x^j} = \left(\left(\left(\frac{\partial x^i}{\partial \xi^j}\right)^{T}\right)^{-1}\right)^{T} = \left(\frac{\partial x^i}{\partial \xi^j}\right)^{-1}, \qquad (1.96)$$

$$\begin{pmatrix} \frac{\partial \xi^1}{\partial x^1} & \frac{\partial \xi^1}{\partial x^2} & \frac{\partial \xi^1}{\partial x^3} \\ \frac{\partial \xi^2}{\partial x^1} & \frac{\partial \xi^2}{\partial x^2} & \frac{\partial \xi^2}{\partial x^3} \\ \frac{\partial \xi^3}{\partial x^1} & \frac{\partial \xi^3}{\partial x^2} & \frac{\partial \xi^3}{\partial x^3} \end{pmatrix} = \begin{pmatrix} \frac{\partial x^1}{\partial \xi^1} & \frac{\partial x^1}{\partial \xi^2} & \frac{\partial x^1}{\partial \xi^3} \\ \frac{\partial x^2}{\partial \xi^1} & \frac{\partial x^2}{\partial \xi^2} & \frac{\partial x^2}{\partial \xi^3} \\ \frac{\partial x^3}{\partial \xi^1} & \frac{\partial x^3}{\partial \xi^2} & \frac{\partial x^3}{\partial \xi^3} \end{pmatrix}^{-1}. \qquad (1.97)$$

¹¹In Cartesian coordinates, we take $\nabla_\xi \equiv \begin{pmatrix} \partial/\partial \xi^1 \\ \partial/\partial \xi^2 \\ \partial/\partial \xi^3 \end{pmatrix}$. This gives rise to the natural, albeit unconventional, notation $\nabla_\xi^T = \left( \partial/\partial \xi^1\ \ \partial/\partial \xi^2\ \ \partial/\partial \xi^3 \right)$. This notion does not extend easily to non-Cartesian systems, for which index notation is preferred. Here, for convenience, we will take $\nabla_x^T \equiv \left( \partial/\partial x^1\ \ \partial/\partial x^2\ \ \partial/\partial x^3 \right)$, and a similar column version for $\nabla_x$.


Thus, the Jacobian matrix $J$ of the transformation is simply the inverse of the Jacobian matrix of the inverse transformation. Note that in the very special case for which the transpose is the inverse, we can replace the inverse by the transpose. Note that the transpose of the transpose is the original matrix, and this determines that $\partial \xi^i/\partial x^j = \partial x^j/\partial \xi^i$. This allows the $i$ to remain "upstairs" and the $j$ to remain "downstairs". Such a transformation will be seen to be a pure rotation or reflection.

Example 1.4
Transform the Cartesian equation

$$\frac{\partial \phi}{\partial \xi^1} + \frac{\partial \phi}{\partial \xi^2} = \left(\xi^1\right)^2 + \left(\xi^2\right)^2 \qquad (1.98)$$

under the following:

1. Cartesian to linearly homogeneous affine coordinates.

Consider the following linear non-orthogonal transformation:

$$x^1 = \frac{2}{3}\xi^1 + \frac{2}{3}\xi^2, \qquad (1.99)$$
$$x^2 = -\frac{2}{3}\xi^1 + \frac{1}{3}\xi^2, \qquad (1.100)$$
$$x^3 = \xi^3. \qquad (1.101)$$

This transformation is of the class of affine transformations, which are of the form

$$x^i = A^i_j \xi^j + b^i, \qquad (1.102)$$

where $A^i_j$ and $b^i$ are constants. Affine transformations for which $b^i = 0$ are further distinguished as linear homogeneous transformations. The transformation of this example is both affine and linear homogeneous.

Equations (1.99-1.101) form a linear system of three equations in three unknowns; using standard techniques of linear algebra allows us to solve for $\xi^1, \xi^2, \xi^3$ in terms of $x^1, x^2, x^3$; that is, we find the inverse transformation, which is

$$\xi^1 = \frac{1}{2}x^1 - x^2, \qquad (1.103)$$
$$\xi^2 = x^1 + x^2, \qquad (1.104)$$
$$\xi^3 = x^3. \qquad (1.105)$$

Lines of constant $x^1$ and $x^2$ in the $\xi^1, \xi^2$ plane as well as lines of constant $\xi^1$ and $\xi^2$ in the $x^1, x^2$ plane are plotted in Fig. 1.3. Also shown is a unit square in the Cartesian $\xi^1, \xi^2$ plane, with vertices $A, B, C, D$. The image of this square is plotted as a parallelogram in the $x^1, x^2$ plane. It is seen the orientation has been preserved in what amounts to a clockwise rotation accompanied by stretching; moreover, the area (and thus the volume in three dimensions) has been decreased.

The appropriate Jacobian matrix for the inverse transformation is

$$J = \frac{\partial \xi^i}{\partial x^j} = \begin{pmatrix} \frac{\partial \xi^1}{\partial x^1} & \frac{\partial \xi^1}{\partial x^2} & \frac{\partial \xi^1}{\partial x^3} \\ \frac{\partial \xi^2}{\partial x^1} & \frac{\partial \xi^2}{\partial x^2} & \frac{\partial \xi^2}{\partial x^3} \\ \frac{\partial \xi^3}{\partial x^1} & \frac{\partial \xi^3}{\partial x^2} & \frac{\partial \xi^3}{\partial x^3} \end{pmatrix}, \qquad (1.106)$$

$$J = \begin{pmatrix} \frac{1}{2} & -1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \qquad (1.107)$$


Figure 1.3: Lines of constant $x^1$ and $x^2$ in the $\xi^1, \xi^2$ plane and lines of constant $\xi^1$ and $\xi^2$ in the $x^1, x^2$ plane for the homogeneous affine transformation of the example problem.

The Jacobian determinant is

$$J = \det J = (1)\left(\left(\frac{1}{2}\right)(1) - (-1)(1)\right) = \frac{3}{2}. \qquad (1.108)$$

So a unique transformation, $\xi = J \cdot x$, always exists, since the Jacobian determinant is never zero. Inversion gives $x = J^{-1} \cdot \xi$. Since $J > 0$, the transformation preserves the orientation of geometric entities. Since $J > 1$, a unit volume element in $\xi$ space is larger than its image in $x$ space. The metric tensor is

$$g_{kl} = \frac{\partial \xi^i}{\partial x^k}\frac{\partial \xi^i}{\partial x^l} = \frac{\partial \xi^1}{\partial x^k}\frac{\partial \xi^1}{\partial x^l} + \frac{\partial \xi^2}{\partial x^k}\frac{\partial \xi^2}{\partial x^l} + \frac{\partial \xi^3}{\partial x^k}\frac{\partial \xi^3}{\partial x^l}. \qquad (1.109)$$

For example for $k = 1$, $l = 1$ we get

$$g_{11} = \frac{\partial \xi^i}{\partial x^1}\frac{\partial \xi^i}{\partial x^1} = \frac{\partial \xi^1}{\partial x^1}\frac{\partial \xi^1}{\partial x^1} + \frac{\partial \xi^2}{\partial x^1}\frac{\partial \xi^2}{\partial x^1} + \frac{\partial \xi^3}{\partial x^1}\frac{\partial \xi^3}{\partial x^1}, \qquad (1.110)$$

$$g_{11} = \left(\frac{1}{2}\right)\left(\frac{1}{2}\right) + (1)(1) + (0)(0) = \frac{5}{4}. \qquad (1.111)$$

Repeating this operation for all terms of $g_{kl}$, we find the complete metric tensor is

$$g_{kl} = \begin{pmatrix} \frac{5}{4} & \frac{1}{2} & 0 \\ \frac{1}{2} & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad (1.112)$$

$$g = \det g_{kl} = (1)\left(\left(\frac{5}{4}\right)(2) - \left(\frac{1}{2}\right)\left(\frac{1}{2}\right)\right) = \frac{9}{4}. \qquad (1.113)$$
This is equivalent to the calculation in Gibbs notation:

$$G = J^T \cdot J, \qquad (1.114)$$

$$G = \begin{pmatrix} \frac{1}{2} & 1 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \frac{1}{2} & -1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad (1.115)$$

$$G = \begin{pmatrix} \frac{5}{4} & \frac{1}{2} & 0 \\ \frac{1}{2} & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \qquad (1.116)$$

Distance in the transformed system is given by

$$(ds)^2 = dx^k\, g_{kl}\, dx^l, \qquad (1.117)$$
$$(ds)^2 = dx^T \cdot G \cdot dx, \qquad (1.118)$$

$$(ds)^2 = \left( dx^1\ \ dx^2\ \ dx^3 \right) \begin{pmatrix} \frac{5}{4} & \frac{1}{2} & 0 \\ \frac{1}{2} & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} dx^1 \\ dx^2 \\ dx^3 \end{pmatrix}, \qquad (1.119)$$

$$(ds)^2 = \underbrace{\left( \left(\tfrac{5}{4}\,dx^1 + \tfrac{1}{2}\,dx^2\right)\ \ \left(\tfrac{1}{2}\,dx^1 + 2\,dx^2\right)\ \ dx^3 \right)}_{dx_l = dx^k g_{kl}} \begin{pmatrix} dx^1 \\ dx^2 \\ dx^3 \end{pmatrix} = dx_l\, dx^l, \qquad (1.120)$$

$$(ds)^2 = \frac{5}{4}\left(dx^1\right)^2 + 2\left(dx^2\right)^2 + \left(dx^3\right)^2 + dx^1\, dx^2. \qquad (1.121)$$

Detailed algebraic manipulation employing the so-called method of quadratic forms, to be discussed in Sec. 8.12, reveals that the previous equation can be rewritten as follows:

$$(ds)^2 = \frac{9}{20}\left(dx^1 + 2\,dx^2\right)^2 + \frac{1}{5}\left(2\,dx^1 - dx^2\right)^2 + \left(dx^3\right)^2. \qquad (1.122)$$

Direct expansion reveals the two forms for $(ds)^2$ to be identical. Note:

• The Jacobian matrix $J$ is not symmetric.
• The metric tensor $G = J^T \cdot J$ is symmetric.
• The fact that the metric tensor has non-zero off-diagonal elements is a consequence of the transformation being non-orthogonal.
• We identify here a new representation of the differential distance vector in the transformed space: $dx_l = dx^k g_{kl}$, whose significance will soon be discussed in Sec. 1.3.2.

• The distance is guaranteed to be positive. This will be true for all affine transformations in ordinary three-dimensional Euclidean space. In the generalized space-time continuum suggested by the theory of relativity, the generalized distance may in fact be negative; this generalized distance $ds$ for an infinitesimal change in space and time is given by $(ds)^2 = \left(d\xi^1\right)^2 + \left(d\xi^2\right)^2 + \left(d\xi^3\right)^2 - \left(d\xi^4\right)^2$, where the first three coordinates are the ordinary Cartesian space coordinates and the fourth is $\left(d\xi^4\right)^2 = (c\, dt)^2$, where $c$ is the speed of light.

Also we have the volume ratio of differential elements as

$$d\xi^1\, d\xi^2\, d\xi^3 = \sqrt{\frac{9}{4}}\; dx^1\, dx^2\, dx^3 \qquad (1.123)$$
$$= \frac{3}{2}\, dx^1\, dx^2\, dx^3. \qquad (1.124)$$

Now we use Eq. (1.94) to find the appropriate derivatives of $\phi$. We first note that

$$(J^T)^{-1} = \begin{pmatrix} \frac{1}{2} & 1 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} \frac{2}{3} & -\frac{2}{3} & 0 \\ \frac{2}{3} & \frac{1}{3} & 0 \\ 0 & 0 & 1 \end{pmatrix}. \qquad (1.125)$$
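Two of the intermediate claims of this example, the inverse in Eq. (1.125) and the equivalence of the two quadratic forms for $(ds)^2$, can be confirmed in exact rational arithmetic:

```python
from fractions import Fraction as F

# Check of Eq. (1.125): J^T times the claimed (J^T)^{-1} gives the identity.
JT    = [[F(1, 2), F(1),     F(0)],
         [F(-1),   F(1),     F(0)],
         [F(0),    F(0),     F(1)]]
JTinv = [[F(2, 3), F(-2, 3), F(0)],
         [F(2, 3), F(1, 3),  F(0)],
         [F(0),    F(0),     F(1)]]
prod = [[sum(JT[i][k]*JTinv[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
assert prod == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# The two forms of (ds)^2, Eqs. (1.121) and (1.122), agree for sample increments:
dx1, dx2, dx3 = F(1), F(1), F(3)
form1 = F(5, 4)*dx1**2 + 2*dx2**2 + dx3**2 + dx1*dx2
form2 = F(9, 20)*(dx1 + 2*dx2)**2 + F(1, 5)*(2*dx1 - dx2)**2 + dx3**2
print(form1, form2)   # → 53/4 53/4
```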


Figure 1.4: Lines of constant $x^1$ and $x^2$ in the $\xi^1, \xi^2$ plane and lines of constant $\xi^1$ and $\xi^2$ in the $x^1, x^2$ plane for the cylindrical coordinates transformation of the example problem.

$$J = \begin{pmatrix} \cos x^2 & -x^1 \sin x^2 & 0 \\ \sin x^2 & x^1 \cos x^2 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \qquad (1.138)$$

The Jacobian determinant is

$$J = x^1 \cos^2 x^2 + x^1 \sin^2 x^2 = x^1. \qquad (1.139)$$

So a unique transformation fails to exist when $x^1 = 0$. For $x^1 > 0$, the transformation is orientation-preserving. For $x^1 = 1$, the transformation is volume-preserving. For $x^1 < 0$, the transformation is orientation-reversing. This is a fundamental mathematical reason why we do not consider negative radius: it fails to preserve the orientation of a mapped element. For $x^1 \in (0,1)$, a unit element in $\xi$ space is smaller than a unit element in $x$ space; the converse holds for $x^1 \in (1, \infty)$.

The metric tensor is

$$g_{kl} = \frac{\partial \xi^i}{\partial x^k}\frac{\partial \xi^i}{\partial x^l} = \frac{\partial \xi^1}{\partial x^k}\frac{\partial \xi^1}{\partial x^l} + \frac{\partial \xi^2}{\partial x^k}\frac{\partial \xi^2}{\partial x^l} + \frac{\partial \xi^3}{\partial x^k}\frac{\partial \xi^3}{\partial x^l}. \qquad (1.140)$$

For example for $k = 1$, $l = 1$ we get

$$g_{11} = \frac{\partial \xi^i}{\partial x^1}\frac{\partial \xi^i}{\partial x^1} = \frac{\partial \xi^1}{\partial x^1}\frac{\partial \xi^1}{\partial x^1} + \frac{\partial \xi^2}{\partial x^1}\frac{\partial \xi^2}{\partial x^1} + \frac{\partial \xi^3}{\partial x^1}\frac{\partial \xi^3}{\partial x^1}, \qquad (1.141)$$

$$g_{11} = \cos^2 x^2 + \sin^2 x^2 + 0 = 1. \qquad (1.142)$$

Repeating this operation, we find the complete metric tensor is

$$g_{kl} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \left(x^1\right)^2 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad (1.143)$$

$$g = \det g_{kl} = \left(x^1\right)^2. \qquad (1.144)$$


This is equivalent to the calculation in Gibbs notation:

$$G = J^T \cdot J, \qquad (1.145)$$

$$G = \begin{pmatrix} \cos x^2 & \sin x^2 & 0 \\ -x^1 \sin x^2 & x^1 \cos x^2 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \cos x^2 & -x^1 \sin x^2 & 0 \\ \sin x^2 & x^1 \cos x^2 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad (1.146)$$

$$G = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \left(x^1\right)^2 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \qquad (1.147)$$

Distance in the transformed system is given by

$$(ds)^2 = dx^k\, g_{kl}\, dx^l, \qquad (1.148)$$
$$(ds)^2 = dx^T \cdot G \cdot dx, \qquad (1.149)$$

$$(ds)^2 = \left( dx^1\ \ dx^2\ \ dx^3 \right) \begin{pmatrix} 1 & 0 & 0 \\ 0 & \left(x^1\right)^2 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} dx^1 \\ dx^2 \\ dx^3 \end{pmatrix}, \qquad (1.150)$$

$$(ds)^2 = \underbrace{\left( dx^1\ \ \left(x^1\right)^2 dx^2\ \ dx^3 \right)}_{dx_l = dx^k g_{kl}} \begin{pmatrix} dx^1 \\ dx^2 \\ dx^3 \end{pmatrix} = dx_l\, dx^l, \qquad (1.151)$$

$$(ds)^2 = \left(dx^1\right)^2 + \left(x^1\, dx^2\right)^2 + \left(dx^3\right)^2. \qquad (1.152)$$

Note:

• The fact that the metric tensor is diagonal can be attributed to the transformation being orthogonal.
• Since the product of any matrix with its transpose is guaranteed to yield a symmetric matrix, the metric tensor is always symmetric.

Also we have the volume ratio of differential elements as

$$d\xi^1\, d\xi^2\, d\xi^3 = x^1\, dx^1\, dx^2\, dx^3. \qquad (1.153)$$
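The cylindrical Jacobian, its determinant, and the diagonal metric tensor can be checked numerically at a sample point:

```python
import math

# Cylindrical transformation: xi1 = x1*cos(x2), xi2 = x1*sin(x2), xi3 = x3,
# with J from Eq. (1.138).
def J(x1, x2):
    return [[math.cos(x2), -x1*math.sin(x2), 0.0],
            [math.sin(x2),  x1*math.cos(x2), 0.0],
            [0.0,           0.0,             1.0]]

x1, x2 = 2.0, 0.7
A = J(x1, x2)
# G = J^T . J, Eq. (1.145):
G = [[sum(A[k][i]*A[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
# det J via the upper-left 2x2 block; compare with Eq. (1.139), det J = x1:
detJ = A[0][0]*A[1][1] - A[0][1]*A[1][0]
print(detJ)      # ~ x1
print(G[1][1])   # ~ x1**2; off-diagonal entries vanish
```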

Now we use Eq. (1.94) to find the appropriate derivatives of $\phi$. We first note that

$$(J^T)^{-1} = \begin{pmatrix} \cos x^2 & \sin x^2 & 0 \\ -x^1 \sin x^2 & x^1 \cos x^2 & 0 \\ 0 & 0 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} \cos x^2 & -\frac{\sin x^2}{x^1} & 0 \\ \sin x^2 & \frac{\cos x^2}{x^1} & 0 \\ 0 & 0 & 1 \end{pmatrix}. \qquad (1.154)$$

So

$$\begin{pmatrix} \frac{\partial \phi}{\partial \xi^1} \\ \frac{\partial \phi}{\partial \xi^2} \\ \frac{\partial \phi}{\partial \xi^3} \end{pmatrix} = \underbrace{\begin{pmatrix} \cos x^2 & -\frac{\sin x^2}{x^1} & 0 \\ \sin x^2 & \frac{\cos x^2}{x^1} & 0 \\ 0 & 0 & 1 \end{pmatrix}}_{(J^T)^{-1}} \begin{pmatrix} \frac{\partial \phi}{\partial x^1} \\ \frac{\partial \phi}{\partial x^2} \\ \frac{\partial \phi}{\partial x^3} \end{pmatrix}. \qquad (1.155)$$

Thus, by inspection,

$$\frac{\partial \phi}{\partial \xi^1} = \cos x^2 \frac{\partial \phi}{\partial x^1} - \frac{\sin x^2}{x^1} \frac{\partial \phi}{\partial x^2}, \qquad (1.156)$$

$$\frac{\partial \phi}{\partial \xi^2} = \sin x^2 \frac{\partial \phi}{\partial x^1} + \frac{\cos x^2}{x^1} \frac{\partial \phi}{\partial x^2}. \qquad (1.157)$$


So the transformed version of Eq. (1.98) becomes

$$\left(\cos x^2 \frac{\partial \phi}{\partial x^1} - \frac{\sin x^2}{x^1}\frac{\partial \phi}{\partial x^2}\right) + \left(\sin x^2 \frac{\partial \phi}{\partial x^1} + \frac{\cos x^2}{x^1}\frac{\partial \phi}{\partial x^2}\right) = \left(x^1\right)^2, \qquad (1.158)$$

$$\left(\cos x^2 + \sin x^2\right)\frac{\partial \phi}{\partial x^1} + \frac{\cos x^2 - \sin x^2}{x^1}\frac{\partial \phi}{\partial x^2} = \left(x^1\right)^2. \qquad (1.159)$$
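The chain-rule identity underlying Eq. (1.159) holds for any smooth scalar field; a finite-difference sketch with an arbitrary test function (the particular $\phi$ below is hypothetical, chosen only for the check):

```python
import math

def phi(xi1, xi2): return xi1**3 - 2*xi1*xi2   # arbitrary test field

def phibar(x1, x2):
    """Same field expressed in cylindrical coordinates."""
    return phi(x1*math.cos(x2), x1*math.sin(x2))

x1, x2 = 1.3, 0.4
xi1, xi2 = x1*math.cos(x2), x1*math.sin(x2)
h = 1e-6
# Cartesian left side of Eq. (1.98): dphi/dxi1 + dphi/dxi2
lhs = ((phi(xi1 + h, xi2) - phi(xi1 - h, xi2))
       + (phi(xi1, xi2 + h) - phi(xi1, xi2 - h))) / (2*h)
# Transformed left side, Eq. (1.159):
dp1 = (phibar(x1 + h, x2) - phibar(x1 - h, x2)) / (2*h)
dp2 = (phibar(x1, x2 + h) - phibar(x1, x2 - h)) / (2*h)
rhs = (math.cos(x2) + math.sin(x2))*dp1 + (math.cos(x2) - math.sin(x2))/x1*dp2
print(lhs, rhs)
```

Both evaluations agree, confirming that Eq. (1.159) is the faithful cylindrical form of the Cartesian operator.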

1.3.2 Covariance and contravariance

Quantities known as contravariant vectors transform locally according to

$$\bar{u}^i = \frac{\partial \bar{x}^i}{\partial x^j} u^j. \qquad (1.160)$$

We note that "local" refers to the fact that the transformation is locally linear. Eq. (1.160) is not a general recipe for a global transformation rule. Quantities known as covariant vectors transform locally according to

$$\bar{u}_i = \frac{\partial x^j}{\partial \bar{x}^i} u_j. \qquad (1.161)$$

Here we have considered general transformations from one non-Cartesian coordinate system $(x^1, x^2, x^3)$ to another $(\bar{x}^1, \bar{x}^2, \bar{x}^3)$. Note that indices associated with contravariant quantities appear as superscripts, and those associated with covariant quantities appear as subscripts.

In the special case where the barred coordinate system is Cartesian, we take $U$ to denote the Cartesian vector and say

$$U^i = \frac{\partial \xi^i}{\partial x^j} u^j, \qquad U_i = \frac{\partial x^j}{\partial \xi^i} u_j. \qquad (1.162)$$

Example 1.5
Let's say $(x, y, z)$ is a normal Cartesian system and define the transformation

$$\bar{x} = \lambda x, \qquad \bar{y} = \lambda y, \qquad \bar{z} = \lambda z. \qquad (1.163)$$

Now we can assign velocities in both the unbarred and barred systems:

$$u^x = \frac{dx}{dt}, \qquad u^y = \frac{dy}{dt}, \qquad u^z = \frac{dz}{dt}, \qquad (1.164)$$

$$\bar{u}^{\bar{x}} = \frac{d\bar{x}}{dt}, \qquad \bar{u}^{\bar{y}} = \frac{d\bar{y}}{dt}, \qquad \bar{u}^{\bar{z}} = \frac{d\bar{z}}{dt}, \qquad (1.165)$$

$$\bar{u}^{\bar{x}} = \frac{\partial \bar{x}}{\partial x}\frac{dx}{dt}, \qquad \bar{u}^{\bar{y}} = \frac{\partial \bar{y}}{\partial y}\frac{dy}{dt}, \qquad \bar{u}^{\bar{z}} = \frac{\partial \bar{z}}{\partial z}\frac{dz}{dt}, \qquad (1.166)$$

$$\bar{u}^{\bar{x}} = \lambda u^x, \qquad \bar{u}^{\bar{y}} = \lambda u^y, \qquad \bar{u}^{\bar{z}} = \lambda u^z, \qquad (1.167)$$

$$\bar{u}^{\bar{x}} = \frac{\partial \bar{x}}{\partial x} u^x, \qquad \bar{u}^{\bar{y}} = \frac{\partial \bar{y}}{\partial y} u^y, \qquad \bar{u}^{\bar{z}} = \frac{\partial \bar{z}}{\partial z} u^z. \qquad (1.168)$$


This suggests the velocity vector is contravariant.

Now consider a vector which is the gradient of a function $f(x,y,z)$. For example, let

$$f(x,y,z) = x + y^2 + z^3, \qquad (1.169)$$

$$u_x = \frac{\partial f}{\partial x}, \qquad u_y = \frac{\partial f}{\partial y}, \qquad u_z = \frac{\partial f}{\partial z}, \qquad (1.170)$$

$$u_x = 1, \qquad u_y = 2y, \qquad u_z = 3z^2. \qquad (1.171)$$

In the new coordinates

$$f\left(\frac{\bar{x}}{\lambda}, \frac{\bar{y}}{\lambda}, \frac{\bar{z}}{\lambda}\right) = \frac{\bar{x}}{\lambda} + \frac{\bar{y}^2}{\lambda^2} + \frac{\bar{z}^3}{\lambda^3}, \qquad (1.172)$$

so

$$\bar{f}(\bar{x}, \bar{y}, \bar{z}) = \frac{\bar{x}}{\lambda} + \frac{\bar{y}^2}{\lambda^2} + \frac{\bar{z}^3}{\lambda^3}. \qquad (1.173)$$

Now

$$\bar{u}_{\bar{x}} = \frac{\partial \bar{f}}{\partial \bar{x}}, \qquad \bar{u}_{\bar{y}} = \frac{\partial \bar{f}}{\partial \bar{y}}, \qquad \bar{u}_{\bar{z}} = \frac{\partial \bar{f}}{\partial \bar{z}}, \qquad (1.174)$$

$$\bar{u}_{\bar{x}} = \frac{1}{\lambda}, \qquad \bar{u}_{\bar{y}} = \frac{2\bar{y}}{\lambda^2}, \qquad \bar{u}_{\bar{z}} = \frac{3\bar{z}^2}{\lambda^3}. \qquad (1.175)$$

In terms of $x, y, z$, we have

$$\bar{u}_{\bar{x}} = \frac{1}{\lambda}, \qquad \bar{u}_{\bar{y}} = \frac{2y}{\lambda}, \qquad \bar{u}_{\bar{z}} = \frac{3z^2}{\lambda}. \qquad (1.176)$$

So it is clear here that, in contrast to the velocity vector,

$$\bar{u}_{\bar{x}} = \frac{1}{\lambda} u_x, \qquad \bar{u}_{\bar{y}} = \frac{1}{\lambda} u_y, \qquad \bar{u}_{\bar{z}} = \frac{1}{\lambda} u_z. \qquad (1.177)$$

More generally, we find for this case that

$$\bar{u}_{\bar{x}} = \frac{\partial x}{\partial \bar{x}} u_x, \qquad \bar{u}_{\bar{y}} = \frac{\partial y}{\partial \bar{y}} u_y, \qquad \bar{u}_{\bar{z}} = \frac{\partial z}{\partial \bar{z}} u_z, \qquad (1.178)$$

which suggests the gradient vector is covariant.
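The covariant scaling of Eq. (1.177) can be observed directly with finite differences; a short sketch (writing `l` for $\lambda$):

```python
# Stretching transformation from the example: xbar = l*x, ybar = l*y, zbar = l*z.
l = 3.0
def fxyz(x, y, z): return x + y**2 + z**3              # Eq. (1.169)
def fbar(xb, yb, zb):                                  # Eq. (1.173)
    return xb/l + yb**2/l**2 + zb**3/l**3

x, y, z = 0.5, -1.2, 0.8
xb, yb, zb = l*x, l*y, l*z
h = 1e-6
u_x  = (fxyz(x + h, y, z) - fxyz(x - h, y, z)) / (2*h)       # covariant u_x
ub_x = (fbar(xb + h, yb, zb) - fbar(xb - h, yb, zb)) / (2*h) # barred component
print(ub_x, u_x / l)   # gradient components scale by 1/l: covariant
```

By contrast, a velocity component scales by $\lambda$ rather than $1/\lambda$, which is the contravariant behavior seen above.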

Contravariant tensors transform locally according to

$$\bar{v}^{ij} = \frac{\partial \bar{x}^i}{\partial x^k}\frac{\partial \bar{x}^j}{\partial x^l} v^{kl}. \qquad (1.179)$$

Covariant tensors transform locally according to

$$\bar{v}_{ij} = \frac{\partial x^k}{\partial \bar{x}^i}\frac{\partial x^l}{\partial \bar{x}^j} v_{kl}. \qquad (1.180)$$

Mixed tensors transform locally according to

$$\bar{v}^i_j = \frac{\partial \bar{x}^i}{\partial x^k}\frac{\partial x^l}{\partial \bar{x}^j} v^k_l. \qquad (1.181)$$
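For a linear map with constant $\partial \bar{x}^i/\partial x^k = A^i_k$, the rank-2 contravariant rule of Eq. (1.179) reduces to the matrix product $A\,v\,A^T$; a two-dimensional sketch with hypothetical values:

```python
# Hypothetical constant transformation matrix A[i][k] = d(xbar^i)/d(x^k):
A = [[2.0, 1.0],
     [0.0, 3.0]]
v = [[1.0, 4.0],
     [2.0, 5.0]]   # components v^{kl} in the unbarred frame

# Eq. (1.179): vbar^{ij} = A^i_k A^j_l v^{kl}
vbar = [[sum(A[i][k]*A[j][l]*v[k][l] for k in range(2) for l in range(2))
         for j in range(2)] for i in range(2)]
print(vbar)   # → [[21.0, 39.0], [27.0, 45.0]]
```

The same result is obtained from the matrix product $A\,v\,A^T$, which is the compact Gibbs form of the index rule.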


Figure 1.5: Contours for the transformation $x^1 = \xi^1 + (\xi^2)^2$, $x^2 = \xi^2 + (\xi^1)^3$ (left) and a blown-up version (right) including a pair of contravariant basis vectors, which are tangent to the contours, and covariant basis vectors, which are normal to the contours.

Recall that variance is another term for gradient, and that co- denotes with. A vector which is co-variant is aligned with the variance, or the gradient. Recalling next that contra- denotes against, a vector which is contra-variant is aligned against the variance, or the gradient. This results in a set of contravariant basis vectors being tangent to lines of xⁱ = C, while covariant basis vectors are normal to lines of xⁱ = C. A vector in space has two natural representations, one on a contravariant basis, and the other on a covariant basis. The contravariant representation seems more natural because it is similar to the familiar i, j, and k for Cartesian systems, though both can be used to obtain equivalent results.

For the transformation x¹ = ξ¹ + (ξ²)², x² = ξ² + (ξ¹)³, Figure 1.5 gives a plot of a set of lines of constant x¹ and x² in the Cartesian ξ¹, ξ² plane, along with a local set of contravariant and covariant basis vectors. Note the covariant basis vectors, because they are directly related to the gradient vector, point in the direction of most rapid change of x¹ and x², and are orthogonal to contours on which x¹ and x² are constant. The contravariant vectors are tangent to the contours. It can be shown that the contravariant vectors are aligned with the columns of J, and the covariant vectors are aligned with the rows of J⁻¹. This transformation has some special properties. Near the origin, the higher-order terms become negligible, and the transformation reduces to the identity mapping x¹ ≈ ξ¹, x² ≈ ξ². As such, in the neighborhood of the origin, one has J = I, and there is no change in area or orientation of an element. Moreover, on each of the coordinate axes x¹ = ξ¹ and x² = ξ²; additionally, on each of the coordinate axes J = 1, so in those special locations the transformation is area- and orientation-preserving. This non-linear transformation can be


shown to be singular where J = 0; this occurs along the curve ξ² = 1/(6(ξ¹)²). As J → 0, the contours of x¹ align more and more with the contours of x², and thus the contravariant basis vectors come closer to paralleling each other. When J = 0, the two contours osculate. At such points there is only one linearly independent contravariant basis vector, which is not enough to represent an arbitrary vector in a linear combination. An analog holds for the covariant basis vectors. Since J = 1 − 6ξ²(ξ¹)², the singular curve lies in the upper half-plane; the transformation is orientation-reversing in the portions of the first and second quadrants above that curve, and orientation-preserving elsewhere, including throughout the third and fourth quadrants.
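A quick numerical sketch (illustrative, not from the notes) of the Jacobian determinant for this transformation confirms the sign pattern labeled in Figure 1.5:

```python
# Illustrative sketch: sign of J = det(dx/dxi) for the Figure 1.5 transformation
# x1 = xi1 + xi2**2, x2 = xi2 + xi1**3.
def det_jacobian(xi1, xi2):
    # J = [[1, 2*xi2], [3*xi1**2, 1]], so det J = 1 - 6*xi2*xi1**2
    return 1.0 - 6.0 * xi2 * xi1**2

print(det_jacobian(0.0, 0.0))    # 1.0: identity-like near the origin
print(det_jacobian(1.0, 1.0))    # -5.0: orientation-reversing, above the singular curve
print(det_jacobian(1.0, -1.0))   # 7.0: orientation-preserving, lower half-plane
```

The determinant vanishes exactly on the curve ξ² = 1/(6(ξ¹)²), where the transformation is singular.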

Example 1.6
Consider the vector fields defined in Cartesian coordinates by

    a) Uⁱ = (ξ¹, ξ²)ᵀ,   b) Uⁱ = (ξ¹, 2ξ²)ᵀ. (1.182)

At the point

    P : (ξ¹, ξ²)ᵀ = (1, 1)ᵀ, (1.183)

find the covariant and contravariant representations of both cases of Uⁱ in cylindrical coordinates.

a) At P in the Cartesian system, we have the contravariant

    Uⁱ = (ξ¹, ξ²)ᵀ |ξ¹=1, ξ²=1 = (1, 1)ᵀ. (1.184)

For a Cartesian coordinate system, the metric tensor g_{ij} = δ_{ij} = g_{ji} = δ_{ji}. Thus, the covariant representation in the Cartesian system is

    U_j = g_{ji} Uⁱ = δ_{ji} Uⁱ = [1 0; 0 1] (1, 1)ᵀ = (1, 1)ᵀ. (1.185)

Now consider cylindrical coordinates: ξ¹ = x¹ cos x², ξ² = x¹ sin x². For the inverse transformation, let us insist that J > 0, so x¹ = √((ξ¹)² + (ξ²)²), x² = tan⁻¹(ξ²/ξ¹). Thus, at P we have a representation of

    P : (x¹, x²)ᵀ = (√2, π/4)ᵀ. (1.186)
    For the transformation, we have

    J = cos x2 x1 sin x2sin x2 x1 cos x2 , G = JT J = 1 00 (x1)2 . (1.187)At P, we thus have

    J =

    2

    21

    22 1

    , G = JT J =

    1 00 2

    . (1.188)

    Now, specializing Eq. (1.160) by considering the barred coordinate to be Cartesian, we can say

    Ui =i

    xjuj . (1.189)


Locally, we can use the Gibbs notation and say U = J·u, and thus get u = J⁻¹·U, so that the contravariant representation is

    (u¹, u²)ᵀ = [√2/2  −1; √2/2  1]⁻¹ (1, 1)ᵀ = [√2/2  √2/2; −1/2  1/2] (1, 1)ᵀ = (√2, 0)ᵀ. (1.190)

In Gibbs notation, one can interpret this as 1i + 1j = √2 e_r + 0 e_θ. Note that this representation is different than the simple polar coordinates of P given by Eq. (1.186). Let us look closer at the cylindrical basis vectors e_r and e_θ. In cylindrical coordinates, the contravariant representations of the unit basis vectors must be e_r = (1, 0)ᵀ and e_θ = (0, 1)ᵀ. So in Cartesian coordinates those basis vectors are represented as

    e_r = J (1, 0)ᵀ = [cos x²  −x¹ sin x²; sin x²  x¹ cos x²] (1, 0)ᵀ = (cos x², sin x²)ᵀ, (1.191)

    e_θ = J (0, 1)ᵀ = [cos x²  −x¹ sin x²; sin x²  x¹ cos x²] (0, 1)ᵀ = (−x¹ sin x², x¹ cos x²)ᵀ. (1.192)

In general a unit vector in the transformed space is not a unit vector in the Cartesian space. Note that e_θ is a unit vector in Cartesian space only when x¹ = 1; this is also the condition for J = 1. Lastly, we see the covariant representation is given by u_j = uⁱ g_{ij}. Since g_{ij} is symmetric, we can transpose this to get u_j = g_{ji} uⁱ:

    (u₁, u₂)ᵀ = G (u¹, u²)ᵀ = [1 0; 0 2] (√2, 0)ᵀ = (√2, 0)ᵀ. (1.193)

This simple vector field has an identical contravariant and covariant representation. The appropriate invariant quantities are independent of the representation:

    Uⁱ U_i = (1  1) (1, 1)ᵀ = 2, (1.194)

    uⁱ u_i = (√2  0) (√2, 0)ᵀ = 2. (1.195)

Though tempting, we note that there is no clear way to form the representation xⁱ x_i to demonstrate any additional invariance.

b) At P in the Cartesian system, we have the contravariant

    Uⁱ = (ξ¹, 2ξ²)ᵀ |ξ¹=1, ξ²=1 = (1, 2)ᵀ. (1.196)

In the same fashion as demonstrated in part a), we find the contravariant representation of Uⁱ in cylindrical coordinates at P is

    (u¹, u²)ᵀ = [√2/2  −1; √2/2  1]⁻¹ (1, 2)ᵀ = [√2/2  √2/2; −1/2  1/2] (1, 2)ᵀ = (3/√2, 1/2)ᵀ. (1.197)

In Gibbs notation, we could interpret this as 1i + 2j = (3/√2) e_r + (1/2) e_θ. The covariant representation is given once again by u_j = g_{ji} uⁱ:

    (u₁, u₂)ᵀ = G (u¹, u²)ᵀ = [1 0; 0 2] (3/√2, 1/2)ᵀ = (3/√2, 1)ᵀ. (1.198)


This less simple vector field has distinct contravariant and covariant representations. However, the appropriate invariant quantities are independent of the representation:

    Uⁱ U_i = (1  2) (1, 2)ᵀ = 5, (1.199)

    uⁱ u_i = (3/√2  1/2) (3/√2, 1)ᵀ = 5. (1.200)
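The arithmetic in both parts of this example is easy to check by machine. The sketch below (illustrative, using numpy) recomputes the cylindrical representations and the invariants at P:

```python
import numpy as np

# Illustrative sketch: redo Example 1.6 numerically at P, where the
# cylindrical coordinates are (x1, x2) = (sqrt(2), pi/4).
x1, x2 = np.sqrt(2.0), np.pi / 4.0
J = np.array([[np.cos(x2), -x1 * np.sin(x2)],
              [np.sin(x2),  x1 * np.cos(x2)]])
G = J.T @ J                                  # metric tensor, diag(1, (x1)**2)

for U in (np.array([1.0, 1.0]), np.array([1.0, 2.0])):   # cases a) and b)
    u_contra = np.linalg.solve(J, U)         # u = J^{-1} U, the contravariant form
    u_co = G @ u_contra                      # u_j = g_{ji} u^i, the covariant form
    # the invariant u^i u_i must match the Cartesian U^i U_i (2 and 5 here)
    print(u_contra, u_co, u_contra @ u_co, U @ U)
```

Case a) returns (√2, 0) for both representations with invariant 2; case b) returns (3/√2, 1/2) and (3/√2, 1) with invariant 5, matching Eqs. (1.190) through (1.200).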

The ideas of covariant and contravariant derivatives play an important role in mathematical physics, namely in that the equations should be formulated such that they are invariant under coordinate transformations. This is not particularly difficult for Cartesian systems, but for non-orthogonal systems, one cannot use differentiation in the ordinary sense but must instead use the notion of covariant and contravariant derivatives, depending on the problem. The role of these terms was especially important in the development of the theory of relativity.

Consider a contravariant vector uⁱ defined in xⁱ which has corresponding components Uⁱ in the Cartesian ξⁱ. Take w^i_j and W^i_j to represent the covariant spatial derivative of uⁱ and Uⁱ, respectively. Let us use the chain rule and definitions of tensorial quantities to arrive at a formula for covariant differentiation. From the definition of contravariance, Eq. (1.160),

    U^i = (∂ξ^i/∂x^l) u^l. (1.201)

Take the derivative in Cartesian space and then use the chain rule:

    W^i_j = ∂U^i/∂ξ^j = (∂U^i/∂x^k)(∂x^k/∂ξ^j), (1.202)

          = (∂/∂x^k)((∂ξ^i/∂x^l) u^l) (∂x^k/∂ξ^j), (1.203)

          = ((∂²ξ^i/∂x^k∂x^l) u^l + (∂ξ^i/∂x^l)(∂u^l/∂x^k)) (∂x^k/∂ξ^j), (1.204)

    W^p_q = ((∂²ξ^p/∂x^k∂x^l) u^l + (∂ξ^p/∂x^l)(∂u^l/∂x^k)) (∂x^k/∂ξ^q). (1.205)

From the definition of a mixed tensor, Eq. (1.181),

    w^i_j = W^p_q (∂x^i/∂ξ^p)(∂ξ^q/∂x^j), (1.206)


          = ((∂²ξ^p/∂x^k∂x^l) u^l + (∂ξ^p/∂x^l)(∂u^l/∂x^k)) (∂x^k/∂ξ^q)(∂x^i/∂ξ^p)(∂ξ^q/∂x^j), (1.207)

          = (∂²ξ^p/∂x^k∂x^l)(∂x^k/∂ξ^q)(∂ξ^q/∂x^j)(∂x^i/∂ξ^p) u^l + (∂ξ^p/∂x^l)(∂x^k/∂ξ^q)(∂ξ^q/∂x^j)(∂x^i/∂ξ^p)(∂u^l/∂x^k), (1.208)

          = (∂²ξ^p/∂x^k∂x^l)(∂x^k/∂x^j)(∂x^i/∂ξ^p) u^l + (∂x^i/∂x^l)(∂x^k/∂x^j)(∂u^l/∂x^k), (1.209)

          = (∂²ξ^p/∂x^k∂x^l) δ^k_j (∂x^i/∂ξ^p) u^l + δ^i_l δ^k_j (∂u^l/∂x^k), (1.210)

          = (∂²ξ^p/∂x^j∂x^l)(∂x^i/∂ξ^p) u^l + ∂u^i/∂x^j. (1.211)

Here, we have used the identity that

    ∂x^i/∂x^j = δ^i_j, (1.212)

where δ^i_j is another form of the Kronecker delta. We define the Christoffel¹² symbols Γ^i_{jl} as follows:

    Γ^i_{jl} = (∂²ξ^p/∂x^j∂x^l)(∂x^i/∂ξ^p), (1.213)

and use the term ∇_j to represent the covariant derivative. Thus, the covariant derivative of a contravariant vector u^i is as follows:

    ∇_j u^i = w^i_j = ∂u^i/∂x^j + Γ^i_{jl} u^l. (1.214)
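Equation (1.213) can be evaluated mechanically. As a sketch (illustrative only, and assuming sympy is available), the following recovers the familiar cylindrical Christoffel symbols Γ^r_{θθ} = −r and Γ^θ_{rθ} = 1/r:

```python
import sympy as sp

# Illustrative sketch (assumes sympy): evaluate Eq. (1.213) for cylindrical
# coordinates, xi = (r cos th, r sin th, z), x = (r, th, z).
r, th, z = sp.symbols('r th z', positive=True)
x = sp.Matrix([r, th, z])
xi = sp.Matrix([r * sp.cos(th), r * sp.sin(th), z])
J = xi.jacobian(x)      # dxi^p/dx^l
Jinv = J.inv()          # dx^i/dxi^p

def gamma(i, j, l):
    # Gamma^i_{jl} = (d^2 xi^p / dx^j dx^l)(dx^i / dxi^p), summed over p
    return sp.simplify(sum(sp.diff(xi[p], x[j], x[l]) * Jinv[i, p] for p in range(3)))

print(gamma(0, 1, 1))   # -r:  Gamma^r_{theta theta}
print(gamma(1, 0, 1))   # 1/r: Gamma^theta_{r theta}
```

All other cylindrical Christoffel symbols vanish except the symmetric partner Γ^θ_{θr} = 1/r, consistent with the symmetry of Eq. (1.213) in its lower indices.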

Example 1.7
Find ∇ᵀ·u in cylindrical coordinates. The transformations are

    x¹ = +√((ξ¹)² + (ξ²)²), (1.215)
    x² = tan⁻¹(ξ²/ξ¹), (1.216)
    x³ = ξ³. (1.217)

The inverse transformation is

    ξ¹ = x¹ cos x², (1.218)
    ξ² = x¹ sin x², (1.219)
    ξ³ = x³. (1.220)

    12Elwin Bruno Christoffel, 1829-1900, German mathematician.


    curl u = ∇ × u = (1/(h₁h₂h₃)) det [h₁e₁  h₂e₂  h₃e₃; ∂/∂q¹  ∂/∂q²  ∂/∂q³; u₁h₁  u₂h₂  u₃h₃], (1.266)

    div grad φ = ∇²φ = (1/(h₁h₂h₃)) (∂/∂q¹((h₂h₃/h₁) ∂φ/∂q¹) + ∂/∂q²((h₃h₁/h₂) ∂φ/∂q²) + ∂/∂q³((h₁h₂/h₃) ∂φ/∂q³)). (1.267)
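Equation (1.267) can be exercised numerically. The sketch below (illustrative only, not from the notes) evaluates it by nested central differences for the cylindrical scale factors h = (1, r, 1), applied to φ = r², i.e. x² + y² in Cartesian coordinates, whose Laplacian is 4 everywhere:

```python
# Illustrative sketch: evaluate the curvilinear Laplacian (1.267) by nested
# central differences for cylindrical coordinates q = (r, theta, z), with
# scale factors h = (1, r, 1), applied to phi = r**2 (Cartesian x**2 + y**2).
def laplacian_cyl(phi, q0, d=1e-4):
    def h(q):
        return (1.0, q[0], 1.0)           # (h1, h2, h3) at q

    def flux(q, i):
        # (h1 h2 h3 / h_i**2) * dphi/dq^i, the quantity differentiated in (1.267)
        h1, h2, h3 = h(q)
        qp, qm = list(q), list(q)
        qp[i] += d
        qm[i] -= d
        return (h1 * h2 * h3 / h(q)[i]**2) * (phi(qp) - phi(qm)) / (2 * d)

    h1, h2, h3 = h(q0)
    total = 0.0
    for i in range(3):
        qp, qm = list(q0), list(q0)
        qp[i] += d
        qm[i] -= d
        total += (flux(qp, i) - flux(qm, i)) / (2 * d)
    return total / (h1 * h2 * h3)

print(laplacian_cyl(lambda q: q[0]**2, (1.5, 0.3, 0.0)))   # ~ 4.0
```

The code uses the identity h₂h₃/h₁ = h₁h₂h₃/h₁², which lets one helper serve all three terms.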

Example 1.9
Find expressions for the gradient, divergence, and curl in cylindrical coordinates (r, θ, z) where

    x¹ = r cos θ, (1.268)
    x² = r sin θ, (1.269)
    x³ = z. (1.270)

The 1, 2 and 3 directions are associated with r, θ, and z, respectively. From Eq. (1.263), the scale factors are

    h_r = √((∂x¹/∂r)² + (∂x²/∂r)² + (∂x³/∂r)²), (1.271)
        = √(cos²θ + sin²θ), (1.272)
        = 1, (1.273)

    h_θ = √((∂x¹/∂θ)² + (∂x²/∂θ)² + (∂x³/∂θ)²), (1.274)
        = √(r² sin²θ + r² cos²θ), (1.275)
        = r, (1.276)

    h_z = √((∂x¹/∂z)² + (∂x²/∂z)² + (∂x³/∂z)²), (1.277)
        = 1, (1.278)

so that

    grad φ = (∂φ/∂r) e_r + (1/r)(∂φ/∂θ) e_θ + (∂φ/∂z) e_z, (1.279)

    div u = (1/r)(∂/∂r(u_r r) + ∂/∂θ(u_θ) + ∂/∂z(u_z r)) = ∂u_r/∂r + u_r/r + (1/r) ∂u_θ/∂θ + ∂u_z/∂z, (1.280)

    curl u = (1/r) det [e_r  r e_θ  e_z; ∂/∂r  ∂/∂θ  ∂/∂z; u_r  u_θ r  u_z]. (1.281)
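As a numerical check of Eq. (1.280) (an illustrative sketch, not part of the notes), consider the field with u_r = r and u_θ = u_z = 0, which is (x, y, 0) in Cartesian coordinates and so has divergence 2 everywhere:

```python
# Illustrative sketch: check the cylindrical divergence (1.280) on a field with
# u_r = r, u_theta = u_z = 0 (Cartesian (x, y, 0)), whose divergence is 2.
def div_cyl(ur, utheta, uz, r, th, zz, d=1e-5):
    ddr = (ur(r + d, th, zz) - ur(r - d, th, zz)) / (2 * d)
    ddth = (utheta(r, th + d, zz) - utheta(r, th - d, zz)) / (2 * d)
    ddz = (uz(r, th, zz + d) - uz(r, th, zz - d)) / (2 * d)
    return ddr + ur(r, th, zz) / r + ddth / r + ddz

val = div_cyl(lambda r, t, z: r,     # u_r
              lambda r, t, z: 0.0,   # u_theta
              lambda r, t, z: 0.0,   # u_z
              2.0, 0.7, 0.0)
print(val)   # ~ 2.0
```

The ∂u_r/∂r and u_r/r terms each contribute 1, and the angular and axial terms vanish.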


    1.4 Maxima and minima

Consider the real function f(x), where x ∈ [a, b]. Extrema are at x = x_m, where f′(x_m) = 0, if x_m ∈ [a, b]. It is a local minimum, a local maximum, or an inflection point according to whether f″(x_m) is positive, negative, or zero, respectively.

Now consider a function of two variables f(x, y), with x ∈ [a, b], y ∈ [c, d]. A necessary condition for an extremum is

    ∂f/∂x (x_m, y_m) = ∂f/∂y (x_m, y_m) = 0, (1.282)

where x_m ∈ [a, b], y_m ∈ [c, d]. Next, we find the Hessian¹⁴ matrix:

    H = [∂²f/∂x²  ∂²f/∂x∂y; ∂²f/∂x∂y  ∂²f/∂y²]. (1.283)

We use H and its elements to determine the character of the local extremum: f is a maximum if ∂²f/∂x² < 0, ∂²f/∂y² < 0, and (∂²f/∂x∂y)² < (∂²f/∂x²)(∂²f/∂y²); f is a minimum if ∂²f/∂x² > 0, ∂²f/∂y² > 0, and (∂²f/∂x∂y)² < (∂²f/∂x²)(∂²f/∂y²). Equivalently, expanding f in a Taylor¹⁵ series about the critical point shows that the character is set by the definiteness of H: positive definite H corresponds to a minimum. For negative definite H, we have a maximum.
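The classification above can be sketched in code (illustrative only; it also includes the indefinite case det H < 0, which gives the saddle point seen in Example 1.10):

```python
# Illustrative sketch: classify a critical point of f(x, y) from its second
# derivatives, using det H = fxx*fyy - fxy**2 as stated above.
def classify(fxx, fyy, fxy):
    det = fxx * fyy - fxy**2
    if det > 0:
        return 'minimum' if fxx > 0 else 'maximum'
    if det < 0:
        return 'saddle'
    return 'degenerate'   # det H = 0: higher-order terms are needed

print(classify(2.0, 2.0, 0.0))    # f = x**2 + y**2 -> 'minimum'
print(classify(2.0, -2.0, 0.0))   # f = x**2 - y**2 -> 'saddle'
```

When det H > 0 the diagonal entries necessarily share a sign, so checking fxx alone distinguishes minimum from maximum.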

14Ludwig Otto Hesse, 1811-1874, German mathematician, studied under Jacobi.
15Brook Taylor, 1685-1731, English mathematician, musician, and painter.


Example 1.10
Consider extrema of

    f = x² − y². (1.286)

Equating partial derivatives with respect to x and to y to zero, we get

    ∂f/∂x = 2x = 0, (1.287)
    ∂f/∂y = −2y = 0. (1.288)

This gives x = 0, y = 0. For these values we find that

    H = [∂²f/∂x²  ∂²f/∂x∂y; ∂²f/∂x∂y  ∂²f/∂y²], (1.289)
      = [2  0; 0  −2]. (1.290)

Since det H = −4 ≠ 0, and ∂²f/∂x² and ∂²f/∂y² have different signs, the equilibrium is a saddle point.

1.4.1 Derivatives of integral expressions

Often functions are expressed in terms of integrals. For example

    y(x) = ∫_{a(x)}^{b(x)} f(x, t) dt. (1.291)

Here t is a dummy variable of integration. Leibniz's¹⁶ rule tells us how to take derivatives of functions in integral form:

    y(x) = ∫_{a(x)}^{b(x)} f(x, t) dt, (1.292)

    dy(x)/dx = f(x, b(x)) db(x)/dx − f(x, a(x)) da(x)/dx + ∫_{a(x)}^{b(x)} ∂f(x, t)/∂x dt. (1.293)
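Leibniz's rule can be verified numerically. The sketch below (illustrative only) applies Eq. (1.293) to the integrand used in Example 1.11 below, evaluating the remaining integral with composite Simpson quadrature, which is exact for a quadratic integrand:

```python
# Illustrative sketch: apply Leibniz's rule (1.293) to y(x) = int_x^{x**2} (x+1) t**2 dt,
# whose derivative is 7x**6/3 + 2x**5 - 4x**3/3 - x**2 (worked in Example 1.11 below).
def integrand(x, t):
    return (x + 1.0) * t**2

def simpson(g, a, b, n=200):
    # composite Simpson quadrature (n even); exact here, since g is quadratic
    step = (b - a) / n
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * step) for k in range(1, n))
    return s * step / 3.0

def leibniz_dydx(x):
    a, b = x, x**2                  # moving limits a(x) = x, b(x) = x**2
    da, db = 1.0, 2.0 * x           # their derivatives
    boundary = integrand(x, b) * db - integrand(x, a) * da
    interior = simpson(lambda t: t**2, a, b)   # integral of df/dx = t**2 over [a, b]
    return boundary + interior

xv = 1.5
exact = 7 * xv**6 / 3 + 2 * xv**5 - 4 * xv**3 / 3 - xv**2
print(leibniz_dydx(xv), exact)   # both ~ 35.015625
```

The boundary terms carry the contribution of the moving limits; the interior term carries the explicit x-dependence of the integrand.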

Inverting this arrangement in a special case, we note if

    y(x) = y(x₀) + ∫_{x₀}^{x} f(t) dt, (1.294)

    then

    16Gottfried Wilhelm von Leibniz, 1646-1716, German mathematician and philosopher of great influence;co-inventor with Sir Isaac Newton, 1643-1727, of the calculus.


    dy(x)/dx = f(x) dx/dx − f(x₀) dx₀/dx + ∫_{x₀}^{x} ∂f(t)/∂x dt, (1.295)

    dy(x)/dx = f(x). (1.296)

Note that the integral expression naturally includes the initial condition that when x = x₀, y = y(x₀). This needs to be expressed separately for the differential version of the equation.

Example 1.11
Find dy/dx if

    y(x) = ∫_x^{x²} (x + 1) t² dt. (1.297)

Using Leibniz's rule we get

    dy(x)/dx = ((x + 1)x⁴)(2x) − ((x + 1)x²)(1) + ∫_x^{x²} t² dt, (1.298)

             = 2x⁶ + 2x⁵ − x³ − x² + [t³/3]_x^{x²}, (1.299)

             = 2x⁶ + 2x⁵ − x³ − x² + x⁶/3 − x³/3, (1.300)

             = 7x⁶/3 + 2x⁵ − 4x³/3 − x². (1.301)

In this case it is possible to integrate explicitly to achieve the same result:

    y(x) = (x + 1) ∫_x^{x²} t² dt, (1.303)

         = (x + 1) [t³/3]_x^{x²}, (1.304)

         = (x + 1) (x⁶/3 − x³/3), (1.305)

    y(x) = x⁷/3 + x⁶/3 − x⁴/3 − x³/3, (1.306)

    dy(x)/dx = 7x⁶/3 + 2x⁵ − 4x³/3 − x². (1.307)

