AEM ADV03 Introductory Mathematics 2019/2020

Table of Contents

Introductory Mathematics
1 Function Expansion & Transforms
  1.1 Power Series
    1.1.1 Taylor Series
    1.1.2 Fourier Series
    1.1.3 Complex Fourier Series
    1.1.4 Termwise Integration and Differentiation
    1.1.5 Fourier Series of Odd and Even Functions
  1.2 Integral Transform
    1.2.1 Fourier Transform
    1.2.2 Laplace Transform
2 Vector Spaces, Vector Fields & Operators
  2.1 Scalar (inner) product of vector fields
    2.1.1 Lp norms
  2.2 Vector product of vector fields
  2.3 Vector operators
    2.3.1 Gradient of a scalar field
    2.3.2 Divergence of a vector field
    2.3.3 Curl of a vector field
  2.4 Repeated Vector Operations – The Laplacian
3 Linear Algebra, Matrices & Eigenvectors
  3.1 Basic definitions and notation
  3.2 Multiplication of matrices and multiplication of vectors and matrices
    3.2.1 Matrix multiplication
    3.2.2 Traces and determinants of square Cayley products
    3.2.3 The Kronecker product
  3.3 Matrix Rank and the Inverse of a full rank matrix
    3.3.1 Full Rank matrices
    3.3.2 Solutions of linear equations
    3.3.3 Preservation of positive definiteness
    3.3.4 A lower bound on the rank of a matrix product
    3.3.5 Inverse of products and sums of matrices
  3.4 Eigensystems
  3.5 Diagonalisation of symmetric matrices
4 Generalised Vector Calculus – Integral Theorems
  4.1 The gradient theorem for line integrals
  4.2 Green's Theorem
  4.3 Stokes' Theorem
  4.4 Divergence Theorem
5 Ordinary Differential Equations
  5.1 First-Order Linear Differential Equations
  5.2 Second-Order Linear Differential Equations
  5.3 Initial-Value and Boundary-Value Problems
  5.4 Non-homogeneous linear differential equations
6 Partial Differential Equations
  6.1 Introduction to Differential Equations
  6.2 Initial Conditions and Boundary Conditions
  6.3 Linear and Nonlinear Equations
  6.4 Examples of PDEs
  6.5 Three types of Second-Order PDEs
  6.6 Solving PDEs using the Separation of Variables Method
    6.6.1 The Heat Equation
    6.6.2 The Wave Equation


Introductory Mathematics

What is Mathematics?

Different schools of thought, particularly in philosophy, have put forth radically different definitions of mathematics. All are controversial and there is no consensus.

Leading definitions

1. Aristotle defined mathematics as: The science of quantity. In Aristotle's classification of the sciences, discrete quantities were studied by arithmetic, continuous quantities by geometry.

2. Auguste Comte's definition tried to explain the role of mathematics in coordinating phenomena in all other fields: The science of indirect measurement, 1851. The "indirectness" in Comte's definition refers to determining quantities that cannot be measured directly, such as the distance to planets or the size of atoms, by means of their relations to quantities that can be measured directly.

3. Benjamin Peirce: Mathematics is the science that draws necessary conclusions, 1870.

4. Bertrand Russell: All Mathematics is Symbolic Logic, 1903.

5. Walter Warwick Sawyer: Mathematics is the classification and study of all possible patterns, 1955.

Most contemporary reference works define mathematics mainly by summarizing its main topics and methods:

6. Oxford English Dictionary: The abstract science which investigates deductively the conclusions implicit in the elementary conceptions of spatial and numerical relations, and which includes as its main divisions geometry, arithmetic, and algebra, 1933.

7. American Heritage Dictionary: The study of the measurement, properties, and relationships of quantities and sets, using numbers and symbols, 2000.

Other playful, metaphorical, and poetic definitions

8. Bertrand Russell: The subject in which we never know what we are talking about, nor whether what we are saying is true, 1901.

9. Charles Darwin: A mathematician is a blind man in a dark room looking for a black cat which isn't there.

10. G. H. Hardy: A mathematician, like a painter or poet, is a maker of patterns. If his patterns are more permanent than theirs, it is because they are made with ideas, 1940.

Field of Mathematics

Mathematics can, broadly speaking, be subdivided into the study of quantity, structure, space, and change (i.e. arithmetic, algebra, geometry, and analysis). In addition to these main concerns, there are also subdivisions dedicated to exploring links from the heart of mathematics to other fields: to logic, to set theory (foundations), to the empirical mathematics of the various sciences (applied mathematics), and more recently to the rigorous study of uncertainty.

Mathematical awards

Arguably the most prestigious award in mathematics is the Fields Medal, established in 1936 and now awarded every four years. The Fields Medal is often considered a mathematical equivalent to the Nobel Prize. The Wolf Prize in Mathematics, instituted in 1978, recognizes lifetime achievement, and another major international award, the Abel Prize, was introduced in 2003. The Chern Medal was introduced in 2010 to recognize lifetime achievement. These accolades are awarded in recognition of a particular body of work, which may be innovational, or provide a solution to an outstanding problem in an established field. A famous list of 23 open problems, called Hilbert's problems, was compiled in 1900 by the German mathematician David Hilbert. This list achieved great celebrity among mathematicians, and at least nine of the problems have now been solved. A new list of seven important problems, titled the Millennium Prize Problems, was published in 2000. A solution to each of these problems carries a $1 million reward, and only one (the Riemann hypothesis) is duplicated in Hilbert's problems.

Mathematics in Aeronautics

Mathematics in aeronautics includes calculus, differential equations, and linear algebra, among other subjects.

Calculus [1]

Calculus has been an integral part of man's intellectual training and heritage for the last twenty-five hundred years. Calculus is the mathematical study of change, in the same way that geometry is the study of shape and algebra is the study of operations and their application to solving equations.
It has two major branches: differential calculus (concerning rates of change and slopes of curves) and integral calculus (concerning accumulation of quantities and the areas under and between curves); these two branches are related to each other by the fundamental theorem of calculus. Both branches make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit. Modern calculus is generally considered to have been developed in the 17th century by Isaac Newton and Gottfried Leibniz; today calculus has widespread uses in science, engineering and economics and can solve many problems that algebra alone cannot.

[1] Extracted from: Boyer, Carl Benjamin. The History of the Calculus and Its Conceptual Development. Courier Dover Publications, 1949.


Differential and integral calculus is one of the great achievements of the human mind. The fundamental definitions of the calculus, those of the derivative and the integral, are now so clearly stated in textbooks on the subject, and the operations involving them are so readily mastered, that it is easy to forget the difficulty with which these basic concepts were developed. Frequently a clear and adequate understanding of the fundamental notions underlying a branch of knowledge has been achieved comparatively late in its development. This has never been more aptly demonstrated than in the rise of the calculus. The precision of statement and the facility of application which the rules of the calculus early afforded were in a measure responsible for the fact that mathematicians were insensible to the delicate subtleties required in the logical development of the discipline. They sought to establish the calculus in terms of the conceptions found in the traditional geometry and algebra which had been developed from spatial intuition. During the eighteenth century, however, the inherent difficulty of formulating the underlying concepts became increasingly evident, and it then became customary to speak of the "metaphysics of the calculus", thus implying the inadequacy of mathematics to give a satisfactory exposition of its own foundations. With the clarification of the basic notions (which, in the nineteenth century, was given in terms of precise mathematical terminology) a safe course was steered between the intuition of the concrete in nature (which may lurk in geometry and algebra) and the mysticism of imaginative speculation (which may thrive on transcendental metaphysics). The derivative has throughout its development been thus precariously situated between the scientific phenomenon of velocity and the philosophical noumenon of motion. The history of integration is similar.

On the one hand, integration has offered ample opportunity for interpretations by positivistic thought in terms either of approximations or of the compensation of errors, views based on the admitted approximate nature of scientific measurements and on the accepted doctrine of superimposed effects. On the other hand, it has at the same time been regarded by idealistic metaphysics as a manifestation that beyond the finitism of sensory percipiency there is a transcendent infinite which can be but asymptotically approached by human experience and reason. Only the precision of their mathematical definition (the work of the nineteenth century) enables the derivative and the integral to maintain their autonomous position as abstract concepts, perhaps derived from, but nevertheless independent of, both physical description and metaphysical explanation.


1 Function Expansion & Transforms

A series expansion is a representation of a particular function by a sum of powers of one of its variables, or by a sum of powers of another function f(x). There are many areas in engineering, such as the motion of fluids, the transfer of heat, or the processing of signals, where the quantities of interest are expressed as functions of independent variables. Therefore, it is important for us to understand how to handle each function in the equations. In this chapter, we will cover infinite series, convergence and power series.

Furthermore, in engineering, transforming a problem from one form to another plays a major role in analysis and design. An area of continuing importance is the use of Laplace, Fourier, and other transforms in fields such as communication, control and signal processing. These will be covered later in this chapter.

1.1 Power Series

We must first give meaning to an infinite sum of constants, and then use this to give meaning to an infinite sum of functions. When the functions being added are the simple powers (x - x_0)^k, the sum is called a Taylor (power) series, and if x_0 = 0, a Maclaurin series. When the functions are trigonometric terms such as sin(kx) or cos(kx), the series might be a Fourier series: certain infinite sums of trigonometric functions that can be made to represent arbitrary functions, even functions with discontinuities. This type of infinite series is also generalised to sums of other functions such as Legendre polynomials. Eventually, solutions of differential equations will be given in terms of infinite sums of Bessel functions, themselves infinite series.

1.1.1 Taylor Series

Having understood sequences, series and power series, we will now focus on one of the main topics: Taylor polynomials. The Taylor polynomial approximation is given by:

f(x) = p_n(x) + \frac{1}{n!} \int_a^x (x - t)^n f^{(n+1)}(t)\, dt    (1)

where the n-th degree Taylor polynomial p_n(x) is given by:

p_n(x) = f(a) + \frac{f'(a)}{1!}(x - a) + \cdots + \frac{f^{(n)}(a)}{n!}(x - a)^n    (2)

When a = 0, the series is also called a Maclaurin series.


There are two conditions that apply:

1. f(x), f^{(1)}(x), ..., f^{(n+1)}(x) are continuous in a closed interval containing x = a.
2. x is any point in that interval.

A Taylor series represents a function near a given point as an infinite sum of terms calculated from the values of the function's derivatives at that point. Therefore, the Taylor series of a function f(x) about the point a is the power series:

f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x - a)^n    (3)

Example 1.1: Find the Maclaurin series of the function f(x) = e^x and its radius of convergence.

Solution: If f(x) = e^x, then f^{(n)}(x) = e^x, so f^{(n)}(0) = e^0 = 1 for all n. Therefore, the Taylor series for f at 0 (which is the Maclaurin series) is:

f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} x^n = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots

To find the radius of convergence, let a_n = x^n / n!. Then,

\left| \frac{a_{n+1}}{a_n} \right| = \left| \frac{x^{n+1}}{(n+1)!} \cdot \frac{n!}{x^n} \right| = \frac{|x|}{n+1} \to 0 < 1

So, by the Ratio Test, the series converges for all x and the radius of convergence is R = ∞. The conclusion we can draw from Example 1.1 is that if e^x has a power series expansion at 0, then:

e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}
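This convergence can be illustrated numerically: the partial sums of the Maclaurin series approach e^x for any fixed x. A minimal sketch in Python (the helper name `maclaurin_exp` is ours, purely illustrative):

```python
import math

def maclaurin_exp(x, n_terms):
    """Partial sum of the Maclaurin series e^x = sum_{n} x^n / n!."""
    return sum(x**n / math.factorial(n) for n in range(n_terms))

# The series converges for every x (R = infinity): the partial sums
# approach math.exp(x) as more terms are added.
approx = maclaurin_exp(2.0, 20)
error = abs(approx - math.exp(2.0))
```

With 20 terms at x = 2 the truncation error is already far below 1e-10, consistent with the Ratio Test result that the series converges everywhere.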

So now, under what circumstances is a function equal to the sum of its Taylor series? Or, if f has derivatives of all orders, when is equation (3) true?


With any convergent series, this means that f(x) is the limit of the sequence of partial sums. In the case of the Taylor series, the partial sums can be written as in equation (2), where:

p_n(x) = f(a) + \frac{f'(a)}{1!}(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x - a)^n

For the example of the exponential function f(x) = e^x, the results from Example 1.1 show that the Taylor polynomials at 0 (or Maclaurin polynomials) with n = 1, 2 and 3 are:

p_1(x) = 1 + x

p_2(x) = 1 + x + \frac{x^2}{2!}

p_3(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!}

In general, f(x) is the sum of its Taylor series if

f(x) = \lim_{n \to \infty} p_n(x)    (4)

If we let

R_n(x) = f(x) - p_n(x) \quad \text{so that} \quad f(x) = p_n(x) + R_n(x)    (5)

then R_n(x) is called the remainder of the Taylor series. If we can show that \lim_{n \to \infty} R_n(x) = 0, then it follows from equation (5) that:

\lim_{n \to \infty} p_n(x) = \lim_{n \to \infty} [f(x) - R_n(x)] = f(x) - \lim_{n \to \infty} R_n(x) = f(x)

We have therefore proved the following: if f(x) = p_n(x) + R_n(x), where p_n is the n-th degree Taylor polynomial of f at a, and

\lim_{n \to \infty} R_n(x) = 0    (6)

for |x - a| < R, then f is equal to the sum of its Taylor series on the interval |x - a| < R.

Therefore, if f has n + 1 derivatives in an interval I that contains the number a, then for x in I there is a number z strictly between x and a such that the remainder term in the Taylor series can be expressed as

R_n(x) = \frac{f^{(n+1)}(z)}{(n+1)!} (x - a)^{n+1}    (7)
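Equation (7) gives a computable bound on the truncation error. A sketch, taking f(x) = e^x on [0, 2] (so that every derivative is e^x and is bounded by e^2 there); the helper name `lagrange_bound` is ours:

```python
import math

def lagrange_bound(x, a, n, deriv_max):
    """Upper bound on |R_n(x)| from equation (7):
    |R_n(x)| <= max|f^(n+1)(z)| / (n+1)! * |x - a|^(n+1)."""
    return deriv_max / math.factorial(n + 1) * abs(x - a) ** (n + 1)

# For f(x) = e^x every derivative is e^x, so on [0, 2] the (n+1)-th
# derivative is at most e^2.  The actual truncation error of the
# degree-n Maclaurin polynomial must stay below this bound.
x, n = 2.0, 10
p_n = sum(x**k / math.factorial(k) for k in range(n + 1))
actual_error = abs(math.exp(x) - p_n)
bound = lagrange_bound(x, 0.0, n, math.exp(2.0))
```

The actual error of the degree-10 polynomial at x = 2 is a few times 1e-5, comfortably below the Lagrange bound.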

Example 1.2: Find the Maclaurin series for sin x and prove that it represents sin x for all x.

Solution: First, we arrange our computation in two columns as follows:

f(x) = sin x        f(0) = 0
f^{(1)}(x) = cos x      f^{(1)}(0) = 1
f^{(2)}(x) = -sin x     f^{(2)}(0) = 0
f^{(3)}(x) = -cos x     f^{(3)}(0) = -1
f^{(4)}(x) = sin x      f^{(4)}(0) = 0

Since the derivatives repeat in a cycle of four, we can write the Maclaurin series as follows:

f(0) + \frac{f^{(1)}(0)}{1!} x + \frac{f^{(2)}(0)}{2!} x^2 + \frac{f^{(3)}(0)}{3!} x^3 + \frac{f^{(4)}(0)}{4!} x^4 + \cdots = 0 + \frac{1}{1!} x + \frac{0}{2!} x^2 + \frac{-1}{3!} x^3 + \frac{0}{4!} x^4 + \cdots

= x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots

= \sum_{k=0}^{\infty} (-1)^k \frac{x^{2k+1}}{(2k+1)!}


You can try this with different types of functions, and you will obtain a table of Maclaurin series that looks like this:

\frac{1}{1 - x} = \sum_{n=0}^{\infty} x^n = 1 + x + x^2 + x^3 + \cdots    R = 1

e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots    R = ∞

\sin x = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n+1}}{(2n+1)!} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots    R = ∞

\cos x = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n}}{(2n)!} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots    R = ∞

\tan^{-1} x = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n+1}}{2n+1} = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots    R = 1

\ln(1 + x) = \sum_{n=1}^{\infty} (-1)^{n-1} \frac{x^n}{n} = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots    R = 1
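The entries in this table can be spot-checked numerically at points inside their radii of convergence. A minimal sketch (the helper name `partial` is ours, purely illustrative):

```python
import math

def partial(terms, n):
    """Sum the first n terms produced by the term function `terms`."""
    return sum(terms(k) for k in range(n))

# Spot checks at x = 0.5, which lies inside every radius of convergence above.
geo   = partial(lambda n: 0.5**n, 60)                              # 1/(1-x)
atan_ = partial(lambda n: (-1)**n * 0.5**(2*n + 1) / (2*n + 1), 60)  # arctan x
log1p = sum((-1)**(n - 1) * 0.5**n / n for n in range(1, 60))      # ln(1+x); sum starts at n = 1
```

Each partial sum agrees with the corresponding closed form (2, arctan 0.5, ln 1.5) to near machine precision.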

Example 1.3: Find the first three non-zero terms of the Taylor series for the function sin πx centred at a = 0.5. Use your answer to find an approximate value of sin(π/2 + π/10).

Solution: Let us first compute the derivatives of the given function f(x) = sin πx:

f^{(1)}(x) = \pi \cos \pi x, \quad f^{(2)}(x) = -\pi^2 \sin \pi x, \quad f^{(3)}(x) = -\pi^3 \cos \pi x, \quad f^{(4)}(x) = \pi^4 \sin \pi x

At a = 1/2 the odd-order derivatives vanish, since cos(π/2) = 0. Substituting back into equation (2), we get:

\sin \pi x = \sin \frac{\pi}{2} - \pi^2 \frac{\left(x - \frac{1}{2}\right)^2}{2!} + \pi^4 \frac{\left(x - \frac{1}{2}\right)^4}{4!} - \cdots = 1 - \pi^2 \frac{\left(x - \frac{1}{2}\right)^2}{2!} + \pi^4 \frac{\left(x - \frac{1}{2}\right)^4}{4!} - \cdots

Therefore,

\sin \pi \left( \frac{1}{2} + \frac{1}{10} \right) = 1 - \pi^2 \frac{(1/10)^2}{2!} + \pi^4 \frac{(1/10)^4}{4!} - \cdots

= 1 - 0.0493 + 0.0004 = 0.9511
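The three-term approximation in Example 1.3 can be verified directly:

```python
import math

x = 0.5 + 0.1  # the point where pi*x = pi/2 + pi/10
u = x - 0.5    # distance from the expansion centre a = 0.5

# Three-term Taylor expansion of sin(pi*x) about a = 0.5:
# only even-order terms survive, since cos(pi/2) = 0.
approx = 1 - math.pi**2 * u**2 / math.factorial(2) \
           + math.pi**4 * u**4 / math.factorial(4)
exact = math.sin(math.pi * x)
```

The approximation agrees with the exact value to about six decimal places, and rounds to the quoted 0.9511.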

1.1.2 Fourier Series

As mentioned previously, a Fourier series decomposes a periodic function into a sum of sines and cosines (trigonometric terms or complex exponentials). For a function f(x), periodic on [-L, L], its Fourier series representation is:

f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos \left( \frac{n \pi x}{L} \right) + b_n \sin \left( \frac{n \pi x}{L} \right) \right]    (8)

where a_0, a_n and b_n are the Fourier coefficients, given by:

a_0 = \frac{1}{L} \int_{-L}^{L} f(x)\, dx    (9)

a_n = \frac{1}{L} \int_{-L}^{L} f(x) \cos \left( \frac{n \pi x}{L} \right) dx    (10)

b_n = \frac{1}{L} \int_{-L}^{L} f(x) \sin \left( \frac{n \pi x}{L} \right) dx    (11)

where the period is p = 2L. Equation (8) is also called the real Fourier series. There are two conditions that apply:

1. f(x) is piecewise continuous on the closed interval [-L, L]. A function is said to be piecewise continuous on the closed interval [a, b] provided that it is continuous there, with at most a finite number of exceptions where, at worst, we find a removable or jump discontinuity. At both a removable and a jump discontinuity, the one-sided limits f(t+) = \lim_{x \to t^+} f(x) and f(t-) = \lim_{x \to t^-} f(x) exist and are finite.

2. A sum of continuous periodic functions may converge pointwise to a discontinuous function. This was a startling realisation for mathematicians of the early nineteenth century.

Example 1.4: Find the Fourier series of f(x) = x^2, -1 < x < 1.

Solution: In this example the period is p = 2, and since p = 2L, we have L = 1. First, let us find a_0. From equation (9),

a_0 = \frac{1}{L} \int_{-L}^{L} f(x)\, dx = \frac{1}{1} \int_{-1}^{1} x^2\, dx = \frac{2}{3}

so the constant term in equation (8) is a_0/2 = 1/3. Next, let us find b_n. From equation (11), since x^2 is even and sin nπx is odd, the integrand is odd and

b_n = \frac{1}{1} \int_{-1}^{1} x^2 \sin n \pi x\, dx = 0

Finally, we will find a_n. From equation (10),

a_n = \frac{1}{1} \int_{-1}^{1} x^2 \cos n \pi x\, dx

Solving by integration by parts (twice), the only surviving boundary term is

a_n = \left. \frac{2 x \cos n \pi x}{n^2 \pi^2} \right|_{-1}^{1} = \frac{2}{n^2 \pi^2} \left[ (-1)^n + (-1)^n \right] = \frac{4 (-1)^n}{n^2 \pi^2}

Therefore, the Fourier series can be written as:

x^2 = \frac{1}{3} + \sum_{n=1}^{\infty} \frac{4 (-1)^n}{n^2 \pi^2} \cos(n \pi x)
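The coefficients found above can be confirmed with a crude numerical quadrature. A sketch (the midpoint rule, the step count, and the helper name `fourier_coeff_a` are our arbitrary choices):

```python
import math

def fourier_coeff_a(f, n, L, steps=20000):
    """a_n = (1/L) * integral_{-L}^{L} f(x) cos(n*pi*x/L) dx, midpoint rule."""
    h = 2 * L / steps
    total = 0.0
    for k in range(steps):
        x = -L + (k + 0.5) * h
        total += f(x) * math.cos(n * math.pi * x / L) * h
    return total / L

f = lambda x: x * x
a0 = fourier_coeff_a(f, 0, 1.0)   # should be 2/3, so the constant term a0/2 = 1/3
a3 = fourier_coeff_a(f, 3, 1.0)   # should be 4*(-1)^3 / (3^2 * pi^2)
```

Both numerical coefficients match the closed forms derived in Example 1.4 to well within the quadrature error.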

1.1.3 Complex Fourier Series

A function f(x) can also be expressed as a complex Fourier series, defined to be:

f(x) = \sum_{n=-\infty}^{+\infty} c_n e^{i n \pi x / L}    (12)

where, on the interval [-π, π] (i.e. L = π),

c_n = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) e^{-i n x}\, dx    (13)

We know that:

e^{ix} = \cos x + i \sin x
e^{-ix} = \cos x - i \sin x
e^{ix} - e^{-ix} = 2 i \sin x
e^{ix} + e^{-ix} = 2 \cos x    (14)

Therefore, from equation (13),

c_n = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) e^{-i n x}\, dx = \frac{1}{2} \left[ \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos n x\, dx - i\, \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin n x\, dx \right]

Hence, we can write:

c_n = \frac{1}{2} (a_n - i b_n), \quad n > 0

c_n = \frac{1}{2} (a_{-n} + i b_{-n}), \quad n < 0

c_0 = \frac{1}{2} a_0, \quad n = 0

with a_0 as defined in equation (9).

Example 1.5: Write the complex Fourier series of f(x) = 2 \sin x - \cos 10x.

Solution: Here, we can expand the function by substituting for the sin and cos functions from equation (14):

f(x) = 2\, \frac{e^{ix} - e^{-ix}}{2i} - \frac{e^{10ix} + e^{-10ix}}{2}

f(x) = \frac{1}{i} e^{ix} - \frac{1}{i} e^{-ix} - \frac{1}{2} e^{10ix} - \frac{1}{2} e^{-10ix}

Therefore, since 1/i = -i:

c_1 = \frac{1}{i} = -i, \quad c_{-1} = -\frac{1}{i} = i, \quad c_{10} = -\frac{1}{2}, \quad c_{-10} = -\frac{1}{2}
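The coefficients in Example 1.5 can be recovered numerically from equation (13). A sketch using a midpoint Riemann sum (the step count and helper name `complex_coeff` are our arbitrary choices):

```python
import cmath, math

def complex_coeff(f, n, steps=4096):
    """c_n = (1/2pi) * integral_{-pi}^{pi} f(x) e^{-inx} dx, midpoint rule."""
    h = 2 * math.pi / steps
    total = 0j
    for k in range(steps):
        x = -math.pi + (k + 0.5) * h
        total += f(x) * cmath.exp(-1j * n * x) * h
    return total / (2 * math.pi)

f = lambda x: 2 * math.sin(x) - math.cos(10 * x)
c1 = complex_coeff(f, 1)    # expected -i
c10 = complex_coeff(f, 10)  # expected -1/2
```

For a trigonometric polynomial like this, the uniform-grid quadrature is exact up to rounding, so both coefficients match the hand calculation essentially to machine precision.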

1.1.4 Termwise Integration and Differentiation

Parseval's Identity

Consider a Fourier series (written here with the constant term denoted a_0) and expand it:

f(x) = a_0 + \sum_{n=1}^{\infty} \{ a_n \cos n x + b_n \sin n x \} = a_0 + a_1 \cos x + b_1 \sin x + a_2 \cos 2x + b_2 \sin 2x + \cdots

Squaring it, we get:

f^2(x) = a_0^2 + \sum_{n=1}^{N} \left( a_n^2 \cos^2 n x + b_n^2 \sin^2 n x \right) + 2 a_0 \sum_{n=1}^{N} ( a_n \cos n x + b_n \sin n x ) + 2 a_1 \cos x\, b_1 \sin x + 2 a_1 \cos x \sum_{n=2}^{N} ( a_n \cos n x + b_n \sin n x ) + \cdots + 2 a_N \cos N x\, b_N \sin N x

Integrating both sides over [-π, π], every cross term vanishes by orthogonality, and we get:

\int_{-\pi}^{\pi} f^2(x)\, dx = \int_{-\pi}^{\pi} \left[ a_0^2 + \sum_{n=1}^{N} \left( a_n^2 \cos^2 n x + b_n^2 \sin^2 n x \right) + \cdots \right] dx

\Rightarrow \int_{-\pi}^{\pi} f^2(x)\, dx = 2 \pi a_0^2 + \sum_{n=1}^{N} \left( \pi a_n^2 + \pi b_n^2 \right) + 0

On a general interval [-L, L], Parseval's Identity can be written as:

\frac{1}{L} \int_{-L}^{L} |f(x)|^2\, dx = 2 |a_0|^2 + \sum_{n=1}^{\infty} \left( |a_n|^2 + |b_n|^2 \right)    (15)

If:

a) f(x) is continuous and f'(x) is piecewise continuous on [-L, L],
b) f(L) = f(-L),
c) f''(x) exists at x in (-L, L),

then the Fourier series may be differentiated term by term:

f'(x) = \frac{\pi}{L} \sum_{n=1}^{\infty} n \left[ -a_n \sin \frac{n \pi x}{L} + b_n \cos \frac{n \pi x}{L} \right]    (16)
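As a concrete check of equation (16): the series for x^2 from Example 1.4 satisfies conditions (a)-(c) on [-1, 1], and its termwise derivative should converge to f'(x) = 2x inside the interval. A sketch (note the differentiated series is a sine series and converges only slowly):

```python
import math

def dseries(x, n_terms):
    """Termwise derivative of the Fourier series of x^2 on [-1, 1]:
    d/dx [ 1/3 + sum 4(-1)^n/(n^2 pi^2) cos(n pi x) ]
      = sum -4(-1)^n/(n pi) sin(n pi x)."""
    return sum(-4 * (-1)**n / (n * math.pi) * math.sin(n * math.pi * x)
               for n in range(1, n_terms + 1))

# Inside (-1, 1) the termwise derivative converges to f'(x) = 2x.
val = dseries(0.5, 20000)   # expect close to 2 * 0.5 = 1
```

With 20000 terms at x = 0.5 the partial sum is within about 1e-4 of the exact derivative 1, consistent with the slow (1/n) decay of the differentiated coefficients.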

Example 1.6: From Example 1.4, we found the Fourier series

x^2 = \frac{1}{3} + \sum_{n=1}^{\infty} \frac{4 (-1)^n}{n^2 \pi^2} \cos(n \pi x)

Use Parseval's Identity to evaluate \sum_{n=1}^{\infty} 1/n^4.

Solution: Applying Parseval's Identity (15) with L = 1 and constant term a_0 = 1/3, we get:

2 \left( \frac{1}{3} \right)^2 + \sum_{n=1}^{\infty} \frac{16}{n^4 \pi^4} = \int_{-1}^{1} x^4\, dx = \frac{2}{5}

\Rightarrow \sum_{n=1}^{\infty} \frac{16}{n^4 \pi^4} = \frac{2}{5} - \frac{2}{9} = \frac{8}{45}

\Rightarrow \sum_{n=1}^{\infty} \frac{1}{n^4} = \frac{\pi^4}{90}
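The closing identity can be confirmed by summing directly:

```python
import math

# Partial sums of sum 1/n^4 approach pi^4/90, the value obtained from
# Parseval's Identity applied to the Fourier series of x^2.
zeta4 = sum(1 / n**4 for n in range(1, 100000))
target = math.pi**4 / 90
```

The tail beyond n = 10^5 is of order 1/(3 * 10^15), so the partial sum already agrees with π^4/90 to about 15 decimal places.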

1.1.5 Fourier Series of Odd and Even Functions

A function f(x) is called an even or symmetric function if it has the property

f(-x) = f(x)    (17)

i.e. the function value for a particular negative value of x is the same as that for the corresponding positive value of x. The graph of an even function is therefore reflection-symmetric about the y-axis.

Figure 1.1: Square waves showing an even function

A function f(x) is called an odd or antisymmetric function if

f(-x) = -f(x)    (18)

i.e. the function value for a particular negative value of x is numerically equal to that for the corresponding positive value of x but opposite in sign. Such functions are said to be symmetric about the origin.

Figure 1.2: Example of an odd function

A function that is neither even nor odd can be represented as the sum of an even and an odd function. Cosine waves are even, so any Fourier cosine series representation of a periodic function has even symmetry. A function f(x) defined on [0, L] can be extended as an even periodic function (b_n = 0). Therefore, the Fourier series representation of an even function is:

f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos \left( \frac{n \pi x}{L} \right), \quad a_n = \frac{2}{L} \int_{0}^{L} f(x) \cos \left( \frac{n \pi x}{L} \right) dx    (19)

Similarly, sine waves are odd, so any Fourier sine series representation of a periodic function has odd symmetry. Therefore, a function f(x) defined on [0, L] can be extended as an odd periodic function (a_n = 0 for all n, including a_0), and the Fourier series representation of an odd function is:

f(x) = \sum_{n=1}^{\infty} b_n \sin \left( \frac{n \pi x}{L} \right), \quad b_n = \frac{2}{L} \int_{0}^{L} f(x) \sin \left( \frac{n \pi x}{L} \right) dx    (20)

Example 1.7: If f(x) is even, show that

(a) a_n = \frac{2}{L} \int_{0}^{L} f(x) \cos \left( \frac{n \pi x}{L} \right) dx

(b) b_n = 0

Solution: For an even function, we can split the integral as:

a_n = \frac{1}{L} \int_{-L}^{L} f(x) \cos \frac{n \pi x}{L}\, dx = \frac{1}{L} \int_{-L}^{0} f(x) \cos \frac{n \pi x}{L}\, dx + \frac{1}{L} \int_{0}^{L} f(x) \cos \frac{n \pi x}{L}\, dx

Letting x = -u in the first integral, we can rewrite:


\frac{1}{L} \int_{-L}^{0} f(x) \cos \frac{n \pi x}{L}\, dx = \frac{1}{L} \int_{0}^{L} f(-u) \cos \left( \frac{-n \pi u}{L} \right) du = \frac{1}{L} \int_{0}^{L} f(u) \cos \left( \frac{n \pi u}{L} \right) du

since, by the definition of an even function, f(-u) = f(u) (and cosine is itself even). Then:

a_n = \frac{1}{L} \int_{0}^{L} f(u) \cos \left( \frac{n \pi u}{L} \right) du + \frac{1}{L} \int_{0}^{L} f(x) \cos \frac{n \pi x}{L}\, dx = \frac{2}{L} \int_{0}^{L} f(x) \cos \frac{n \pi x}{L}\, dx

To show that b_n = 0, we can write the expression as

b_n = \frac{1}{L} \int_{-L}^{L} f(x) \sin \left( \frac{n \pi x}{L} \right) dx = \frac{1}{L} \int_{-L}^{0} f(x) \sin \left( \frac{n \pi x}{L} \right) dx + \frac{1}{L} \int_{0}^{L} f(x) \sin \left( \frac{n \pi x}{L} \right) dx

If we make the transformation x = -u in the first integral on the right of the equation above, we obtain (using f(-u) = f(u) and the oddness of sine):

\frac{1}{L} \int_{-L}^{0} f(x) \sin \left( \frac{n \pi x}{L} \right) dx = \frac{1}{L} \int_{0}^{L} f(-u) \sin \left( -\frac{n \pi u}{L} \right) du = -\frac{1}{L} \int_{0}^{L} f(u) \sin \left( \frac{n \pi u}{L} \right) du = -\frac{1}{L} \int_{0}^{L} f(x) \sin \left( \frac{n \pi x}{L} \right) dx

Therefore, substituting this into the equation for b_n, we get

b_n = -\frac{1}{L} \int_{0}^{L} f(x) \sin \left( \frac{n \pi x}{L} \right) dx + \frac{1}{L} \int_{0}^{L} f(x) \sin \left( \frac{n \pi x}{L} \right) dx = 0
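A quick numerical check of this result, using the even function f(x) = |x| on [-1, 1] as an arbitrary test case (the helper name `b_coeff` is ours):

```python
import math

def b_coeff(f, n, L, steps=20000):
    """b_n = (1/L) * integral_{-L}^{L} f(x) sin(n*pi*x/L) dx, midpoint rule."""
    h = 2 * L / steps
    total = 0.0
    for k in range(steps):
        x = -L + (k + 0.5) * h
        total += f(x) * math.sin(n * math.pi * x / L) * h
    return total / L

# For an even function every sine coefficient vanishes; with symmetric
# quadrature nodes the cancellation is exact up to rounding.
b1 = b_coeff(abs, 1, 1.0)
b7 = b_coeff(abs, 7, 1.0)
```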

1.2 Integral Transform

An integral transform is any transform of the following form

F(w) = \int_{x_1}^{x_2} K(w, x)\, f(x)\, dx    (21)

with the following inverse transform

f(x) = \int_{w_1}^{w_2} K^{-1}(w, x)\, F(w)\, dw    (22)

where K(w, x) is called the kernel of the transform.

1.2.1 Fourier Transform

A Fourier series expansion of a function 𝑓𝑓(𝑥𝑥) of a real variable 𝑥𝑥 with a period of 2𝐿𝐿 is defined over a finite interval −𝐿𝐿 ≤ 𝑥𝑥 ≤ 𝐿𝐿 . If the interval becomes infinite and we sum over infinitesimals, we then obtain the Fourier integral

$$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(w)e^{iwx}\,dw \qquad (23)$$

with the coefficients

$$F(w) = \int_{-\infty}^{\infty} f(x)e^{-iwx}\,dx \qquad (24)$$

Equation (24) is the Fourier transform of $f(x)$. The Fourier integral is also known as the inverse Fourier transform of $F(w)$. In this example, $x_1 = w_1 = -\infty$, $x_2 = w_2 = \infty$ and $K(w, x) = e^{-iwx}$. The Fourier transform maps a function of one variable (e.g. time in seconds) which lives in the time domain to a second function which lives in the frequency domain, changing the basis of the function to cosines and sines.

Example 1.8: Find the Fourier transform of

$$f(x) = \begin{cases} 1 & : -2 < x < 2 \\ 0 & : \text{otherwise} \end{cases}$$

Solution: We can write the Fourier transform as in equation (24):

$$F(w) = \int_{-\infty}^{\infty} f(x)e^{-iwx}\,dx = \int_{-2}^{2} e^{-iwx}\,dx = \left[\frac{e^{-iwx}}{-iw}\right]_{-2}^{2}$$

$$= \frac{-1}{iw}\left(e^{-2iw} - e^{2iw}\right) = \frac{-1}{iw}\left[(\cos 2w - i\sin 2w) - (\cos 2w + i\sin 2w)\right]$$

$$= \frac{-1}{iw}\left(-2i\sin 2w\right) = \frac{2\sin 2w}{w}$$
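The closed form can be checked by direct numerical integration. A small sketch, using the unnormalised convention of equation (24) under which the pulse integrates to $2\sin(2w)/w$ (the test frequency $w = 1.7$ and step count are arbitrary choices):

```python
import math

# Numerically evaluate F(w) = ∫ f(x) e^{-iwx} dx for the pulse
# f(x) = 1 on (-2, 2), and compare against 2*sin(2w)/w.
def fourier_rect(w, steps=20000):
    a, b = -2.0, 2.0
    h = (b - a) / steps
    total = 0 + 0j
    for k in range(steps):
        x = a + (k + 0.5) * h            # midpoint rule node
        total += complex(math.cos(-w * x), math.sin(-w * x)) * h
    return total

w = 1.7                                  # arbitrary test frequency
F = fourier_rect(w)
closed_form = 2 * math.sin(2 * w) / w
print(F.real, closed_form)
```

The imaginary part vanishes because the pulse is even, so only the cosine part of $e^{-iwx}$ survives.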

Example 1.9: Find the Fourier transform of

$$f(t) = \begin{cases} t & : -1 \le t \le 1 \\ 0 & : \text{elsewhere} \end{cases}$$

Solution: Recalling the Fourier transform in equation (24), we can write

$$F(w) = \int_{-\infty}^{\infty} f(t)e^{-iwt}\,dt = \int_{-1}^{1} t\,e^{-iwt}\,dt$$

By applying integration by parts, we get:

$$= \left[\frac{t}{-iw}e^{-iwt}\right]_{-1}^{1} - \int_{-1}^{1}\frac{1}{-iw}e^{-iwt}\,dt$$

We can also rewrite $-\frac{1}{i} = i$, therefore:

$$= \left[\frac{it}{w}e^{-iwt}\right]_{-1}^{1} + \frac{1}{iw}\left[\frac{1}{-iw}e^{-iwt}\right]_{-1}^{1} = \left[\frac{it}{w}e^{-iwt}\right]_{-1}^{1} + \left[\frac{1}{w^2}e^{-iwt}\right]_{-1}^{1}$$

$$= \frac{i}{w}\left(e^{-iw} + e^{iw}\right) + \frac{1}{w^2}\left(e^{-iw} - e^{iw}\right)$$

$$= \frac{2i}{w}\cdot\frac{1}{2}\left(e^{-iw} + e^{iw}\right) - \frac{2i}{w^2}\cdot\frac{1}{2i}\left(e^{iw} - e^{-iw}\right)$$

$$= \frac{2i}{w}\cos w - \frac{2i}{w^2}\sin w = \frac{2i}{w}\left(\cos w - \frac{1}{w}\sin w\right)$$

1.2.2 Laplace Transform

The Laplace transform is an example of an integral transform that converts a differential equation into an algebraic equation. The Laplace transform of a function $f(t)$ of a variable $t$ is defined as the integral

$$F(s) = \mathcal{L}\{f(t)\} = \int_0^{\infty} f(t)e^{-st}\,dt \qquad (25)$$

where $s$ is a real, positive parameter that serves as a supplementary variable. The conditions are: if $f(t)$ is piecewise continuous on $(0, \infty)$ and of exponential order ($|f(t)| \le Ke^{\alpha t}$ for some $K$ and $\alpha > 0$), then $F(s)$ exists for $s > \alpha$. Several Laplace transforms are given in the table below, where $a$ is a constant and $n$ is an integer.

Example 1.10: Find the Laplace transform of the following function:

$$f(t) = \begin{cases} 3 & : 0 < t < 5 \\ 0 & : t > 5 \end{cases}$$

Solution:

$$\mathcal{L}\{f(t)\} = \int_0^{\infty} f(t)e^{-st}\,dt = \int_0^5 3e^{-st}\,dt + \int_5^{\infty} 0 \cdot e^{-st}\,dt$$

$$= 3\left[\frac{e^{-st}}{-s}\right]_0^5 + 0 = 3\left(\frac{e^{-5s}}{-s} - \frac{1}{-s}\right) = \frac{3}{s}\left(1 - e^{-5s}\right)$$
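The definite integral above is easy to verify numerically. A quick sketch (the value $s = 0.8$ and the midpoint integrator are arbitrary choices for the check, not part of the notes):

```python
import math

# Numerically evaluate L{f}(s) = ∫_0^5 3 e^{-st} dt for the piecewise
# constant f of Example 1.10 and compare against 3(1 - e^{-5s})/s.
def laplace_piecewise(s, steps=20000):
    h = 5.0 / steps
    # midpoint rule over [0, 5]; f is zero beyond t = 5
    return sum(3 * math.exp(-s * (k + 0.5) * h) for k in range(steps)) * h

s = 0.8
numeric = laplace_piecewise(s)
closed = 3 / s * (1 - math.exp(-5 * s))
print(numeric, closed)
```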

Example 1.11: Find the Laplace transform of the following function:

$$f(t) = \begin{cases} t & : 0 < t < a \\ b & : t > a \end{cases}$$

Solution:

$$\mathcal{L}\{f(t)\} = \int_0^{\infty} f(t)e^{-st}\,dt = \int_0^a t\,e^{-st}\,dt + \int_a^{\infty} b\,e^{-st}\,dt$$

$$= \left[t\,\frac{e^{-st}}{-s} - \frac{e^{-st}}{s^2}\right]_0^a + b\left[\frac{e^{-st}}{-s}\right]_a^{\infty}$$

$$= e^{-as}\left(-\frac{a}{s} - \frac{1}{s^2}\right) - e^0\left(0 - \frac{1}{s^2}\right) - \frac{b}{s}\left(0 - e^{-as}\right)$$

$$= \frac{1}{s^2} + \left(\frac{b - a}{s} - \frac{1}{s^2}\right)e^{-as}$$

Example 1.12: Determine the Laplace transform of the function below:

$$f(t) = 5 - 3t + 4\sin 2t - 6e^{4t}$$

Solution: First, taking the transform of each term separately, we get:

$$\mathcal{L}\{5\} = \frac{5}{s}, \quad \mathrm{Re}(s) > 0$$

$$\mathcal{L}\{t\} = \frac{1}{s^2}, \quad \mathrm{Re}(s) > 0$$

$$\mathcal{L}\{\sin 2t\} = \frac{2}{s^2 + 4}, \quad \mathrm{Re}(s) > 0$$

$$\mathcal{L}\{e^{4t}\} = \frac{1}{s - 4}, \quad \mathrm{Re}(s) > 4$$


Therefore, by the linearity property,

$$\mathcal{L}\{f(t)\} = \mathcal{L}\{5 - 3t + 4\sin 2t - 6e^{4t}\} = \mathcal{L}\{5\} - 3\mathcal{L}\{t\} + 4\mathcal{L}\{\sin 2t\} - 6\mathcal{L}\{e^{4t}\}$$

$$= \frac{5}{s} - \frac{3}{s^2} + \frac{8}{s^2 + 4} - \frac{6}{s - 4}$$
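The linearity computation above can be checked symbolically, for example with SymPy's `laplace_transform` (a sanity-check sketch, not part of the original notes):

```python
import sympy as sp

# Symbolic check of Example 1.12: L{5 - 3t + 4 sin 2t - 6 e^{4t}}.
t, s = sp.symbols('t s', positive=True)
f = 5 - 3*t + 4*sp.sin(2*t) - 6*sp.exp(4*t)

# noconds=True drops the region-of-convergence conditions
F = sp.laplace_transform(f, t, s, noconds=True)
expected = 5/s - 3/s**2 + 8/(s**2 + 4) - 6/(s - 4)
diff = sp.simplify(F - expected)
print(diff)
```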

LAPLACE TRANSFORMS

$f(t) = \mathcal{L}^{-1}\{F(s)\}$        $F(s) = \mathcal{L}\{f(t)\}$

$a$            $\dfrac{a}{s}$

$t$            $\dfrac{1}{s^2}$

$t^n$          $\dfrac{n!}{s^{n+1}}$

$e^{at}$       $\dfrac{1}{s - a}$

$\sin at$      $\dfrac{a}{s^2 + a^2}$

$\cos at$      $\dfrac{s}{s^2 + a^2}$

$\sinh at$     $\dfrac{a}{s^2 - a^2}$

$\cosh at$     $\dfrac{s}{s^2 - a^2}$


2. Vector Spaces, Vector Fields & Operators

In the context of physics we are often interested in a quantity or property which varies in a smooth and continuous way over some one-, two-, or three-dimensional region of space. This constitutes either a scalar field or a vector field, depending on the nature of the property. In this chapter, we consider the relationship between a scalar field involving a variable potential and a vector field involving a 'field', where this means force per unit mass or charge. The properties of scalar and vector fields are described, along with how they lead to important concepts, such as that of a conservative field, and to the important and useful Gauss and Stokes theorems. Finally, examples will be given to demonstrate the ideas of vector analysis. There are basically four types of functions involving scalars and vectors:

• Scalar function of a scalar, $f(x)$
• Vector function of a scalar, $\boldsymbol{r}(t)$
• Scalar function of a vector, $\varphi(\boldsymbol{r})$
• Vector function of a vector, $\boldsymbol{A}(\boldsymbol{r})$

1. The vector $x$ is normalised if $x^T x = 1$.
2. The vectors $x$ and $y$ are orthogonal if $x^T y = 0$.
3. The vectors $x_1, x_2, \ldots, x_n$ are linearly independent if the only numbers which satisfy the equation $a_1 x_1 + a_2 x_2 + \ldots + a_n x_n = 0$ are $a_1 = a_2 = \ldots = a_n = 0$.
4. The vectors $x_1, x_2, \ldots, x_n$ form a basis for an $n$-dimensional vector space if any vector $x$ in the vector space can be written as a linear combination of vectors in the basis, thus $x = a_1 x_1 + a_2 x_2 + \cdots + a_n x_n$, where $a_1, a_2, \cdots, a_n$ are scalars.

Figure 2.1: Components of a vector


For example, a vector $\boldsymbol{A}$ from the origin in the figure above to a point P in 3 dimensions takes the form

$$\boldsymbol{A} = a_x\hat{\imath} + a_y\hat{\jmath} + a_z\hat{k} \qquad (26)$$

where $\{\hat{\imath}, \hat{\jmath}, \hat{k}\}$ are unit vectors along the $\{x, y, z\}$ axes, respectively. The vector components $\{a_x, a_y, a_z\}$ are the corresponding distances along the axes. The length or magnitude of the vector $\boldsymbol{A}$ is

$$|\boldsymbol{A}| = \sqrt{a_x^2 + a_y^2 + a_z^2} \qquad (27)$$

2.1 Scalar (inner) product of vector fields

The scalar product of vector fields is also called the dot product. For example, if we have two vectors $\boldsymbol{A} = (A_1, A_2, A_3)$ and $\boldsymbol{B} = (B_1, B_2, B_3)$, then

$$\langle\boldsymbol{A}, \boldsymbol{B}\rangle = \boldsymbol{A}\cdot\boldsymbol{B} = \boldsymbol{A}^T\boldsymbol{B} = A_1B_1 + A_2B_2 + A_3B_3 \qquad (28)$$

We can also write

$$\boldsymbol{A}\cdot\boldsymbol{B} = \|\boldsymbol{A}\|\|\boldsymbol{B}\|\cos\theta \qquad (29)$$

where $\theta$ is the angle between $\boldsymbol{A}$ and $\boldsymbol{B}$ satisfying $0 \le \theta \le \pi$. The inner product of vectors is a scalar. The scalar product obeys the product laws listed below:

1. Commutative: $\boldsymbol{A}\cdot\boldsymbol{B} = \boldsymbol{B}\cdot\boldsymbol{A}$
2. Associative: $m\boldsymbol{A}\cdot n\boldsymbol{B} = mn\,\boldsymbol{A}\cdot\boldsymbol{B}$
3. Distributive: $\boldsymbol{A}\cdot(\boldsymbol{B} + \boldsymbol{C}) = \boldsymbol{A}\cdot\boldsymbol{B} + \boldsymbol{A}\cdot\boldsymbol{C}$
4. Cauchy-Schwarz inequality: $\boldsymbol{A}\cdot\boldsymbol{B} \le (\boldsymbol{A}\cdot\boldsymbol{A})^{1/2}(\boldsymbol{B}\cdot\boldsymbol{B})^{1/2}$

Note that a relation such as $\boldsymbol{A}\cdot\boldsymbol{B} = \boldsymbol{A}\cdot\boldsymbol{C}$ does not imply that $\boldsymbol{B} = \boldsymbol{C}$, as

$$\boldsymbol{A}\cdot\boldsymbol{B} - \boldsymbol{A}\cdot\boldsymbol{C} = \boldsymbol{A}\cdot(\boldsymbol{B} - \boldsymbol{C}) = 0 \qquad (30)$$

Therefore, the correct conclusion is that $\boldsymbol{A}$ is perpendicular to the vector $\boldsymbol{B} - \boldsymbol{C}$.

Example 2.1: Determine the angle between $\boldsymbol{A} = \langle 1, 3, -2\rangle$ and $\boldsymbol{B} = \langle -2, 4, -1\rangle$.


Solution: All we need to do here is rewrite equation (29) as:

$$\cos\theta = \frac{\boldsymbol{A}\cdot\boldsymbol{B}}{\|\boldsymbol{A}\|\|\boldsymbol{B}\|}$$

We first have to compute the individual quantities:

$$\boldsymbol{A}\cdot\boldsymbol{B} = 12, \qquad \|\boldsymbol{A}\| = \sqrt{14}, \qquad \|\boldsymbol{B}\| = \sqrt{21}$$

Hence, the angle between the vectors is:

$$\cos\theta = \frac{12}{\sqrt{14}\sqrt{21}} = 0.69985 \implies \theta = \cos^{-1}(0.69985) = 45.58°$$
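The arithmetic of Example 2.1 can be reproduced in a few lines of plain Python (a sanity-check sketch, not part of the original notes):

```python
import math

# Recompute Example 2.1: cos(theta) = A·B / (||A|| ||B||).
A = (1, 3, -2)
B = (-2, 4, -1)

dot = sum(a * b for a, b in zip(A, B))
norm = lambda v: math.sqrt(sum(c * c for c in v))

cos_theta = dot / (norm(A) * norm(B))
theta_deg = math.degrees(math.acos(cos_theta))
print(dot, cos_theta, theta_deg)
```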

2.1.1 Lp norms

There are many norms that could be defined for vectors. One type of norm is called the $L_p$ norm, often denoted as $\|\cdot\|_p$. For $p \ge 1$, it is defined as the $p$-norm and can be written as:

$$\|x\|_p = \left(\sum_{i=1}^{n} |x_i|^p\right)^{1/p}, \qquad x = [x_1, \cdots, x_n]^T \qquad (31)$$

There are a few commonly used norms, such as the following:

1. $\|x\|_1 = \sum_i |x_i|$, also called the Manhattan norm because it corresponds to sums of distances along coordinate axes, as one would travel along the rectangular street plan of Manhattan.
2. $\|x\|_2 = \sqrt{\sum_i |x_i|^2}$, also called the Euclidean norm, the Euclidean length, or just the length of the vector.
3. $\|x\|_\infty = \max_i |x_i|$, also called the max norm or the Chebyshev norm.


Some relationships between norms are as follows:

$$\|x\|_\infty \le \|x\|_2 \le \|x\|_1$$

$$\|x\|_\infty \le \|x\|_2 \le \sqrt{n}\,\|x\|_\infty \qquad (32)$$

$$\|x\|_2 \le \|x\|_1 \le \sqrt{n}\,\|x\|_2$$

If we define the inner product induced norm $\|x\| = \sqrt{\langle x, x\rangle}$, then:

$$(\|x\| + \|y\|)^2 \ge \|x + y\|^2, \qquad \|x + y\|^2 = \|x\|^2 + \|y\|^2 + 2\langle x, y\rangle \qquad (33)$$

Example 2.2: Given a vector $\vec{v} = \hat{\imath} - 4\hat{\jmath} + 5\hat{k}$, determine the Manhattan norm, Euclidean length and Chebyshev norm.

Solution: If we rewrite the vector as $\vec{v} = (1, -4, 5)$, then we can calculate the norms easily.

A. Manhattan norm (one norm):

$$\|\vec{v}\|_1 = \sum_i |v_i| = |1| + |-4| + |5| = 10$$

B. Euclidean norm (two norm):

$$\|\vec{v}\|_2 = \sqrt{\sum_i |v_i|^2} = \sqrt{|1|^2 + |-4|^2 + |5|^2} = \sqrt{42}$$

C. Chebyshev norm (infinity norm):

$$\|\vec{v}\|_\infty = \max_i |v_i| = \max\{|1|, |-4|, |5|\} = 5$$

Therefore, we can see that

$$\|\vec{v}\|_\infty \le \|\vec{v}\|_2 \le \|\vec{v}\|_1$$


$$5 \le \sqrt{42} \le 10$$
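The three norms of Example 2.2 and the ordering from equation (32) can be checked directly (a minimal sketch, not part of the original notes):

```python
import math

# Compute the three norms of v = (1, -4, 5) and check the ordering
# ||v||_inf <= ||v||_2 <= ||v||_1 from equation (32).
v = (1, -4, 5)

one_norm = sum(abs(c) for c in v)            # Manhattan norm
two_norm = math.sqrt(sum(c * c for c in v))  # Euclidean norm
inf_norm = max(abs(c) for c in v)            # Chebyshev norm

print(one_norm, two_norm, inf_norm)
assert inf_norm <= two_norm <= one_norm
```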

2.2 Vector product of vector fields

The vector product of vector fields is also called the cross product. For example, if we have two vectors $\boldsymbol{A} = (A_1, A_2, A_3)$ and $\boldsymbol{B} = (B_1, B_2, B_3)$, then

$$\boldsymbol{A}\times\boldsymbol{B} = (A_2B_3 - A_3B_2,\; A_3B_1 - A_1B_3,\; A_1B_2 - A_2B_1) \qquad (34)$$

The cross product of the vectors $\boldsymbol{A}$ and $\boldsymbol{B}$ is orthogonal to both $\boldsymbol{A}$ and $\boldsymbol{B}$, forms a right-handed system with $\boldsymbol{A}$ and $\boldsymbol{B}$, and has length given by:

$$\|\boldsymbol{A}\times\boldsymbol{B}\| = \|\boldsymbol{A}\|\|\boldsymbol{B}\|\sin\theta \qquad (35)$$

where $\theta$ is the angle between $\boldsymbol{A}$ and $\boldsymbol{B}$ satisfying $0 \le \theta \le \pi$. The vector product of two vectors is a vector. A few additional properties of the cross product are listed below:

1. Scalar multiplication: $(a\boldsymbol{A})\times(b\boldsymbol{B}) = ab(\boldsymbol{A}\times\boldsymbol{B})$
2. Distributive law: $\boldsymbol{A}\times(\boldsymbol{B} + \boldsymbol{C}) = \boldsymbol{A}\times\boldsymbol{B} + \boldsymbol{A}\times\boldsymbol{C}$
3. Anticommutation: $\boldsymbol{B}\times\boldsymbol{A} = -\boldsymbol{A}\times\boldsymbol{B}$
4. Nonassociativity: $\boldsymbol{A}\times(\boldsymbol{B}\times\boldsymbol{C}) = (\boldsymbol{A}\cdot\boldsymbol{C})\boldsymbol{B} - (\boldsymbol{A}\cdot\boldsymbol{B})\boldsymbol{C}$

If we break down equation (34), we can rewrite the cross product of vectors $\boldsymbol{A}$ and $\boldsymbol{B}$ as:

$$\boldsymbol{A}\times\boldsymbol{B} = \begin{vmatrix} A_2 & A_3 \\ B_2 & B_3 \end{vmatrix}\hat{\imath} - \begin{vmatrix} A_1 & A_3 \\ B_1 & B_3 \end{vmatrix}\hat{\jmath} + \begin{vmatrix} A_1 & A_2 \\ B_1 & B_2 \end{vmatrix}\hat{k} = \begin{vmatrix} \hat{\imath} & \hat{\jmath} & \hat{k} \\ A_1 & A_2 & A_3 \\ B_1 & B_2 & B_3 \end{vmatrix}$$

Example 2.3: If $\boldsymbol{A} = (3, -2, -2)$ and $\boldsymbol{B} = (-1, 0, 5)$, compute $\boldsymbol{A}\times\boldsymbol{B}$ and find the angle between the two vectors.

Solution: All we have to do is compute the cross product first:

$$\boldsymbol{A}\times\boldsymbol{B} = \begin{vmatrix} -2 & -2 \\ 0 & 5 \end{vmatrix}\hat{\imath} - \begin{vmatrix} 3 & -2 \\ -1 & 5 \end{vmatrix}\hat{\jmath} + \begin{vmatrix} 3 & -2 \\ -1 & 0 \end{vmatrix}\hat{k} = -10\hat{\imath} - 13\hat{\jmath} - 2\hat{k}$$


The angle between the two vectors is given by $\|\boldsymbol{A}\times\boldsymbol{B}\| = \|\boldsymbol{A}\|\|\boldsymbol{B}\|\sin\theta$. Rearranging equation (35), we get:

$$\sin\theta = \frac{\|\boldsymbol{A}\times\boldsymbol{B}\|}{\|\boldsymbol{A}\|\|\boldsymbol{B}\|} = \frac{\sqrt{(-10)^2 + (-13)^2 + (-2)^2}}{\sqrt{(3)^2 + (-2)^2 + (-2)^2}\sqrt{(-1)^2 + (0)^2 + (5)^2}} = \frac{\sqrt{273}}{\sqrt{17}\sqrt{26}}$$

$$\theta = 51.80°$$

Note that the sine alone cannot distinguish $\theta$ from $180° - \theta$; here $\boldsymbol{A}\cdot\boldsymbol{B} = -13 < 0$, so the angle between the vectors is in fact obtuse, $\theta = 180° - 51.80° = 128.20°$.
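Example 2.3 can be reproduced directly, including the sine/cosine ambiguity (a sanity-check sketch, not part of the original notes):

```python
import math

# Recompute Example 2.3. ||A x B|| only determines sin(theta), which
# cannot distinguish theta from 180° - theta; the dot product resolves
# the ambiguity (here A·B < 0, so the angle is obtuse).
A = (3, -2, -2)
B = (-1, 0, 5)

cross = (A[1]*B[2] - A[2]*B[1],
         A[2]*B[0] - A[0]*B[2],
         A[0]*B[1] - A[1]*B[0])
dot = sum(a * b for a, b in zip(A, B))
norm = lambda v: math.sqrt(sum(c * c for c in v))

theta_from_sin = math.degrees(math.asin(norm(cross) / (norm(A) * norm(B))))
theta_actual = math.degrees(math.acos(dot / (norm(A) * norm(B))))
print(cross, theta_from_sin, theta_actual)
```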

2.3 Vector operators

Certain differential operations may be performed on scalar and vector fields. These have a wide range of applications in the physical sciences. The most important operations are those of finding the gradient of a scalar field and the divergence and curl of a vector field. In the following topics, we will discuss the mathematical and geometrical definitions of these, which rely on concepts of integrating vector quantities along lines and over surfaces. At the heart of these differential operations is the vector operator $\nabla$, called del (or nabla), which in Cartesian coordinates is defined as:

$$\nabla \equiv \frac{\partial}{\partial x}\hat{\imath} + \frac{\partial}{\partial y}\hat{\jmath} + \frac{\partial}{\partial z}\hat{k} \qquad (36)$$

2.3.1 Gradient of a scalar field

The gradient of a scalar field $\varphi(x, y, z)$ is defined as

$$\operatorname{grad}\varphi = \nabla\varphi = \frac{\partial\varphi}{\partial x}\hat{\imath} + \frac{\partial\varphi}{\partial y}\hat{\jmath} + \frac{\partial\varphi}{\partial z}\hat{k} \qquad (37)$$

Clearly, $\nabla\varphi$ is a vector field whose $x$, $y$ and $z$ components are the first partial derivatives of $\varphi(x, y, z)$ with respect to $x$, $y$ and $z$.


Example 2.4: Find the gradient of the scalar field $\varphi = xy^2z^3$.

Solution: We can easily solve this problem using equation (37); the gradient of the scalar field $\varphi = xy^2z^3$ is

$$\operatorname{grad}\varphi = \frac{\partial\varphi}{\partial x}\hat{\imath} + \frac{\partial\varphi}{\partial y}\hat{\jmath} + \frac{\partial\varphi}{\partial z}\hat{k} = y^2z^3\hat{\imath} + 2xyz^3\hat{\jmath} + 3xy^2z^2\hat{k}$$
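Gradients like the one in Example 2.4 are easy to check symbolically, for example with SymPy (a sanity-check sketch, not part of the original notes):

```python
import sympy as sp

# Symbolic check of Example 2.4: grad(phi) for phi = x*y^2*z^3.
x, y, z = sp.symbols('x y z')
phi = x * y**2 * z**3

grad = [sp.diff(phi, v) for v in (x, y, z)]
print(grad)
```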

If we consider a surface in 3D space with $\varphi(\boldsymbol{r}) = \text{constant}$, then the direction normal (i.e. perpendicular) to the surface at the point $\boldsymbol{r}$ is the direction of $\operatorname{grad}\varphi$. The greatest rate of change of $\varphi(\boldsymbol{r})$ is the magnitude of $\operatorname{grad}\varphi$.

Figure 2.2: Direction of gradient

In physical situations, we may have a potential, $\varphi$, which varies over a particular region and this constitutes a field $E$, satisfying:

$$E = -\nabla\varphi = -\left(\frac{\partial\varphi}{\partial x}\hat{\imath} + \frac{\partial\varphi}{\partial y}\hat{\jmath} + \frac{\partial\varphi}{\partial z}\hat{k}\right)$$

Example 2.5: Calculate the electric field at the point $(x, y, z)$ due to a charge $q_1$ at $(2, 0, 0)$ and a charge $q_2$ at $(-2, 0, 0)$, where charges are in coulombs and distances are in metres.

Solution: We need the equation for the potential of a point charge $q$, which is given by:

$$\varphi = \frac{k_c q}{r}$$

where $r$ is the distance from the charge and $k_c$ is the Coulomb constant, given by:

$$k_c = \frac{1}{4\pi\epsilon_0}$$

Therefore, the potential at the point $(x, y, z)$ is

$$\varphi(x, y, z) = \frac{q_1}{4\pi\epsilon_0\sqrt{(x-2)^2 + y^2 + z^2}} + \frac{q_2}{4\pi\epsilon_0\sqrt{(x+2)^2 + y^2 + z^2}}$$

As a result, the components of the field $E = -\nabla\varphi$ are

$$E_x = \frac{q_1(x-2)}{4\pi\epsilon_0\{(x-2)^2 + y^2 + z^2\}^{3/2}} + \frac{q_2(x+2)}{4\pi\epsilon_0\{(x+2)^2 + y^2 + z^2\}^{3/2}}$$

$$E_y = \frac{q_1 y}{4\pi\epsilon_0\{(x-2)^2 + y^2 + z^2\}^{3/2}} + \frac{q_2 y}{4\pi\epsilon_0\{(x+2)^2 + y^2 + z^2\}^{3/2}}$$

$$E_z = \frac{q_1 z}{4\pi\epsilon_0\{(x-2)^2 + y^2 + z^2\}^{3/2}} + \frac{q_2 z}{4\pi\epsilon_0\{(x+2)^2 + y^2 + z^2\}^{3/2}}$$

Example 2.6: The function that describes the temperature at any point in a room is given by:

$$T(x, y, z) = 100\cos\left(\frac{x}{10}\right)\sin\left(\frac{y}{10}\right)\cos z$$

Find the gradient of $T$, the direction of greatest change in temperature in the room at the point $(10\pi, 10\pi, \pi)$ and the rate of change of temperature at this point.

Solution: First, let's find the gradient of the function $T$, which is given by equation (37):

$$\nabla T = \frac{\partial T}{\partial x}\hat{\imath} + \frac{\partial T}{\partial y}\hat{\jmath} + \frac{\partial T}{\partial z}\hat{k}$$

$$= \left(-10\sin\left(\frac{x}{10}\right)\sin\left(\frac{y}{10}\right)\cos z\right)\hat{\imath} + \left(10\cos\left(\frac{x}{10}\right)\cos\left(\frac{y}{10}\right)\cos z\right)\hat{\jmath} - \left(100\cos\left(\frac{x}{10}\right)\sin\left(\frac{y}{10}\right)\sin z\right)\hat{k}$$

Therefore, at the point $(10\pi, 10\pi, \pi)$ in the room, the direction of the greatest change in temperature is:

$$\nabla T = 0\hat{\imath} - 10\hat{\jmath} + 0\hat{k}$$

And the rate of change of temperature at this point is the magnitude of the gradient, which is

$$|\nabla T| = \sqrt{(-10)^2} = 10$$

2.3.2 Divergence of a vector field

The divergence of a vector field $\boldsymbol{A}(x, y, z)$ is defined as the dot product of the operator $\nabla$ and $\boldsymbol{A}$:

$$\operatorname{div}\boldsymbol{A} = \nabla\cdot\boldsymbol{A} = \frac{\partial A_1}{\partial x} + \frac{\partial A_2}{\partial y} + \frac{\partial A_3}{\partial z} \qquad (38)$$

where $A_1$, $A_2$ and $A_3$ are the $x$-, $y$- and $z$-components of $\boldsymbol{A}$. Clearly, $\nabla\cdot\boldsymbol{A}$ is a scalar field. Any vector field $\boldsymbol{A}$ for which $\nabla\cdot\boldsymbol{A} = 0$ is said to be solenoidal.

Example 2.7: Find the divergence of the vector field $\boldsymbol{A} = x^2y^2\hat{\imath} + y^2z^2\hat{\jmath} + x^2z^2\hat{k}$

Solution: This is a straightforward example; using equation (38) we can solve this easily:

$$\nabla\cdot\boldsymbol{A} = \frac{\partial A_1}{\partial x} + \frac{\partial A_2}{\partial y} + \frac{\partial A_3}{\partial z} = 2(xy^2 + yz^2 + x^2z)$$
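The divergence of Example 2.7 can be checked symbolically (a sanity-check sketch, not part of the original notes):

```python
import sympy as sp

# Symbolic check of Example 2.7: div(A) for A = (x^2 y^2, y^2 z^2, x^2 z^2).
x, y, z = sp.symbols('x y z')
A = (x**2 * y**2, y**2 * z**2, x**2 * z**2)

div_A = sp.diff(A[0], x) + sp.diff(A[1], y) + sp.diff(A[2], z)
print(sp.factor(div_A))
```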

Example 2.8: Find the divergence of the vector field $\boldsymbol{F} = (yze^{xy}, xze^{xy}, e^{xy} + 3\cos 3z)$

Solution: Again, using equation (38) we can solve this easily:

$$\nabla\cdot\boldsymbol{F} = \frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y} + \frac{\partial F_3}{\partial z} = y^2ze^{xy} + x^2ze^{xy} - 9\sin 3z$$

The value of the scalar $\operatorname{div}\boldsymbol{A}$ at a point $\boldsymbol{r}$ gives the rate at which the material is expanding or flowing away from the point $\boldsymbol{r}$ (outward flux per unit volume).


2.3.2.1 Theorem involving divergence

The divergence theorem, also known as the Gauss theorem, relates a volume integral and a surface integral within a vector field. Let $\boldsymbol{F}$ be a vector field, $S$ be a closed surface and $\mathcal{R}$ be the region inside of $S$; then:

$$\oint_S \boldsymbol{F}\cdot d\boldsymbol{A} = \int_{\mathcal{R}} \nabla\cdot\boldsymbol{F}\,dV \qquad (39)$$

Example 2.9: Evaluate the following

$$\oint_S (3x\hat{\imath} + 2y\hat{\jmath})\cdot d\boldsymbol{A}$$

where $S$ is the sphere $x^2 + y^2 + z^2 = 9$.

Solution: We could parameterise the surface and evaluate the surface integral, but it is much faster to use the divergence theorem. Since:

$$\operatorname{div}(3x\hat{\imath} + 2y\hat{\jmath}) = \frac{\partial}{\partial x}(3x) + \frac{\partial}{\partial y}(2y) + \frac{\partial}{\partial z}(0) = 5$$

the divergence theorem gives:

$$\oint_S (3x\hat{\imath} + 2y\hat{\jmath})\cdot d\boldsymbol{A} = \int_{\mathcal{R}} 5\,dV = 5\times(\text{Volume of sphere}) = 5\times\frac{4}{3}\pi(3)^3 = 180\pi$$

Example 2.10: Evaluate the following

$$\oint_S (y^2z\hat{\imath} + y^3\hat{\jmath} + xz\hat{k})\cdot d\boldsymbol{A}$$

where $S$ is the boundary of the cube defined by $-1 \le x \le 1$, $-1 \le y \le 1$, and $0 \le z \le 2$.


Solution: First let's take the divergence of the given field:

$$\operatorname{div}(y^2z\hat{\imath} + y^3\hat{\jmath} + xz\hat{k}) = \frac{\partial}{\partial x}(y^2z) + \frac{\partial}{\partial y}(y^3) + \frac{\partial}{\partial z}(xz) = 3y^2 + x$$

The divergence theorem gives:

$$\oint_S (y^2z\hat{\imath} + y^3\hat{\jmath} + xz\hat{k})\cdot d\boldsymbol{A} = \int_{\mathcal{R}} (3y^2 + x)\,dV = \int_0^2\int_{-1}^{1}\int_{-1}^{1} (3y^2 + x)\,dx\,dy\,dz$$

$$= 2\int_{-1}^{1} 6y^2\,dy = 8$$
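The triple integral in Example 2.10 can be checked symbolically (a sanity-check sketch, not part of the original notes):

```python
import sympy as sp

# Check the volume integral in Example 2.10: the integral of (3y^2 + x)
# over the cube -1<=x<=1, -1<=y<=1, 0<=z<=2 should equal 8.
x, y, z = sp.symbols('x y z')

result = sp.integrate(3*y**2 + x, (x, -1, 1), (y, -1, 1), (z, 0, 2))
print(result)
```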

2.3.3 Curl of a vector field

The vector product (cross product) of the operator $\nabla$ and the vector $\boldsymbol{A}$ is known as the curl or rotation of $\boldsymbol{A}$. Thus in Cartesian coordinates, we can write:

$$\operatorname{curl}\boldsymbol{A} = \nabla\times\boldsymbol{A} = \begin{vmatrix} \hat{\imath} & \hat{\jmath} & \hat{k} \\ \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z} \\ A_1 & A_2 & A_3 \end{vmatrix} \qquad (40)$$

Therefore:

$$\operatorname{curl}\boldsymbol{A} = \nabla\times\boldsymbol{A} = \left(\frac{\partial A_3}{\partial y} - \frac{\partial A_2}{\partial z}\right)\hat{\imath} + \left(\frac{\partial A_1}{\partial z} - \frac{\partial A_3}{\partial x}\right)\hat{\jmath} + \left(\frac{\partial A_2}{\partial x} - \frac{\partial A_1}{\partial y}\right)\hat{k} \qquad (41)$$

where 𝑨𝑨 = (𝐴𝐴1,𝐴𝐴2,𝐴𝐴3). The vector curl 𝑨𝑨 at point r gives the local rotation (or vorticity) of the material at point r. The direction of curl 𝑨𝑨 is the axis of rotation and half the magnitude of curl 𝑨𝑨 is the rate of rotation or angular frequency of the rotation.


Example 2.11: Find the curl of the vector field $\boldsymbol{a} = x^2y^2z^2\hat{\imath} + y^2z^2\hat{\jmath} + x^2z^2\hat{k}$

Solution: This is a straightforward question. All we have to do is put the field into the determinant form of equation (40):

$$\nabla\times\boldsymbol{a} = \begin{vmatrix} \hat{\imath} & \hat{\jmath} & \hat{k} \\ \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z} \\ x^2y^2z^2 & y^2z^2 & x^2z^2 \end{vmatrix}$$

$$= \left(\frac{\partial}{\partial y}(x^2z^2) - \frac{\partial}{\partial z}(y^2z^2)\right)\hat{\imath} - \left(\frac{\partial}{\partial x}(x^2z^2) - \frac{\partial}{\partial z}(x^2y^2z^2)\right)\hat{\jmath} + \left(\frac{\partial}{\partial x}(y^2z^2) - \frac{\partial}{\partial y}(x^2y^2z^2)\right)\hat{k}$$

$$= -2\left(y^2z\hat{\imath} + (xz^2 - x^2y^2z)\hat{\jmath} + x^2yz^2\hat{k}\right)$$
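The curl of Example 2.11 can be checked symbolically via equation (41) (a sanity-check sketch, not part of the original notes):

```python
import sympy as sp

# Symbolic check of Example 2.11:
# curl(a) for a = (x^2 y^2 z^2, y^2 z^2, x^2 z^2).
x, y, z = sp.symbols('x y z')
a1, a2, a3 = x**2*y**2*z**2, y**2*z**2, x**2*z**2

curl = (sp.diff(a3, y) - sp.diff(a2, z),
        sp.diff(a1, z) - sp.diff(a3, x),
        sp.diff(a2, x) - sp.diff(a1, y))
print(curl)
```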

2.3.3.1 Theorem involving curl

The theorem involving the curl of vectors is better known as Stokes' theorem. Consider a surface $S$ in $\mathbb{R}^3$ that has a closed non-intersecting boundary $C$ — the topology of, say, one half of a tennis ball. That is, "if we move along C and fall to our left, we hit the side of the surface where the normal vectors are sticking out". Stokes' theorem states that for a vector field $\boldsymbol{F}$ within which the surface is situated:

$$\oint_C \boldsymbol{F}\cdot d\boldsymbol{r} = \int_S (\nabla\times\boldsymbol{F})\cdot\vec{n}\,dS \qquad (42)$$

The theorem can be useful in either direction: sometimes the line integral is easier than the surface integral, and sometimes vice versa.

Example 2.12: Evaluate the line integral of the function $\boldsymbol{F}(x, y, z) = \langle x^2y^3, e^{xy+z}, x + z^2\rangle$ around the circle $x^2 + z^2 = 1$ in the plane $y = 0$, oriented counterclockwise as viewed from the positive $y$-direction.

Solution: Whenever we want to integrate a vector field around a closed curve, and it looks like the computation might be messy, think of applying Stokes' theorem. The circle $C$ in question is the positively-oriented boundary of the disc $S$ given by $x^2 + z^2 \le 1$, $y = 0$, with the unit normal vector $\vec{n}$ pointing in the positive $y$-direction. That is $\vec{n} = \hat{\jmath} = \langle 0, 1, 0\rangle$.


Stokes' theorem tells us that:

$$\oint_C \boldsymbol{F}\cdot d\boldsymbol{r} = \int_S (\nabla\times\boldsymbol{F})\cdot\vec{n}\,dS$$

Evaluating the curl of $\boldsymbol{F}$, we get:

$$\nabla\times\boldsymbol{F} = \begin{vmatrix} \hat{\imath} & \hat{\jmath} & \hat{k} \\ \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z} \\ x^2y^3 & e^{xy+z} & x + z^2 \end{vmatrix} = -e^{xy+z}\hat{\imath} - \hat{\jmath} + (ye^{xy+z} - 3x^2y^2)\hat{k}$$

$$(\nabla\times\boldsymbol{F})\cdot\vec{n} = \left(-e^{xy+z}\hat{\imath} - \hat{\jmath} + (ye^{xy+z} - 3x^2y^2)\hat{k}\right)\cdot\langle 0, 1, 0\rangle = -1$$

$$\oint_C \boldsymbol{F}\cdot d\boldsymbol{r} = \int_S (\nabla\times\boldsymbol{F})\cdot\vec{n}\,dS = \int_S -1\,dS = -\text{area}(S) = -\pi$$
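The result $-\pi$ can be confirmed by evaluating the line integral directly. A numerical sketch (the parametrisation $r(t) = (\cos t, 0, -\sin t)$ is an assumption chosen so the enclosed normal points along $+y$ by the right-hand rule):

```python
import math

# Numerically evaluate the line integral of F = (x^2 y^3, e^{xy+z}, x + z^2)
# around the unit circle x^2 + z^2 = 1 in the plane y = 0.
def F(x, y, z):
    return (x**2 * y**3, math.exp(x*y + z), x + z**2)

steps = 5000
total = 0.0
for k in range(steps):
    t = 2 * math.pi * (k + 0.5) / steps
    x, y, z = math.cos(t), 0.0, -math.sin(t)
    dx, dy, dz = -math.sin(t), 0.0, -math.cos(t)   # dr/dt
    Fx, Fy, Fz = F(x, y, z)
    total += (Fx*dx + Fy*dy + Fz*dz) * (2 * math.pi / steps)

print(total, -math.pi)
```

The midpoint rule is spectrally accurate on smooth periodic integrands, so even a modest step count reproduces the exact value closely.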

2.4 Repeated Vector Operations – The Laplacian

So far, note the following:

i. grad must operate on a scalar field and gives a vector field in return
ii. div operates on a vector field and gives a scalar field in return, and
iii. curl operates on a vector field and gives a vector field in return

In addition to the vector relations involving del ($\nabla$) mentioned above, there are six other combinations in which del appears twice. The most important one, which involves a scalar, is:


$$\operatorname{div}\operatorname{grad}\varphi = \nabla\cdot\nabla\varphi = \nabla^2\varphi \qquad (43)$$

where $\varphi(x, y, z)$ is a scalar point function. The operator $\nabla^2 = \nabla\cdot\nabla$, also known as the Laplacian, takes a particularly simple form in Cartesian coordinates:

$$\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} \qquad (44)$$

When applied to a vector, it yields a vector, given in Cartesian coordinates by:

$$\nabla^2\boldsymbol{A} = \frac{\partial^2\boldsymbol{A}}{\partial x^2} + \frac{\partial^2\boldsymbol{A}}{\partial y^2} + \frac{\partial^2\boldsymbol{A}}{\partial z^2} \qquad (45)$$

The cross product of two dels operating on a scalar function yields

$$\nabla\times\nabla\varphi = \operatorname{curl}\operatorname{grad}\varphi = \begin{vmatrix} \hat{\imath} & \hat{\jmath} & \hat{k} \\ \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z} \\ \dfrac{\partial\varphi}{\partial x} & \dfrac{\partial\varphi}{\partial y} & \dfrac{\partial\varphi}{\partial z} \end{vmatrix} = 0 \qquad (46)$$

If $\nabla\times\boldsymbol{A} = 0$ for a vector $\boldsymbol{A}$, then $\boldsymbol{A} = \nabla\varphi$ for some scalar $\varphi$. In this case, $\boldsymbol{A}$ is irrotational. Similarly,

$$\nabla\cdot\nabla\times\boldsymbol{A} = \operatorname{div}\operatorname{curl}\boldsymbol{A} = 0 \qquad (47)$$

Finally, a useful expansion is given by:

$$\nabla\times(\nabla\times\boldsymbol{A}) = \operatorname{curl}\operatorname{curl}\boldsymbol{A} = \nabla(\nabla\cdot\boldsymbol{A}) - \nabla^2\boldsymbol{A} \qquad (48)$$

Other forms of $\nabla^2$ for other coordinate systems are as follows:

1. Spherical polar coordinates:

$$\nabla^2 = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial}{\partial r}\right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2}{\partial\phi^2} \qquad (49)$$


2. Two-dimensional polar coordinates:

$$\nabla^2 = \frac{\partial^2}{\partial r^2} + \frac{1}{r}\frac{\partial}{\partial r} + \frac{1}{r^2}\frac{\partial^2}{\partial\theta^2} \qquad (50)$$

3. Cylindrical coordinates:

$$\nabla^2 = \frac{\partial^2}{\partial r^2} + \frac{1}{r}\frac{\partial}{\partial r} + \frac{1}{r^2}\frac{\partial^2}{\partial\theta^2} + \frac{\partial^2}{\partial z^2} \qquad (51)$$

Several other useful relations are summarised below:

DEL OPERATOR RELATIONS

Let $\varphi$ and $\psi$ be scalar fields and $\boldsymbol{A}$ and $\boldsymbol{B}$ be vector fields.

Sum of fields:

$\nabla(\varphi + \psi) = \nabla\varphi + \nabla\psi$

$\nabla\cdot(\boldsymbol{A} + \boldsymbol{B}) = \nabla\cdot\boldsymbol{A} + \nabla\cdot\boldsymbol{B}$

$\nabla\times(\boldsymbol{A} + \boldsymbol{B}) = \nabla\times\boldsymbol{A} + \nabla\times\boldsymbol{B}$

Product of fields:

$\nabla(\varphi\psi) = \varphi(\nabla\psi) + \psi(\nabla\varphi)$

$\nabla\cdot(\varphi\boldsymbol{A}) = \varphi(\nabla\cdot\boldsymbol{A}) + (\nabla\varphi)\cdot\boldsymbol{A}$

$\nabla\times(\varphi\boldsymbol{A}) = \varphi(\nabla\times\boldsymbol{A}) + (\nabla\varphi)\times\boldsymbol{A}$

$\nabla\cdot(\boldsymbol{A}\times\boldsymbol{B}) = \boldsymbol{B}\cdot(\nabla\times\boldsymbol{A}) - \boldsymbol{A}\cdot(\nabla\times\boldsymbol{B})$

$\nabla\times(\boldsymbol{A}\times\boldsymbol{B}) = \boldsymbol{A}(\nabla\cdot\boldsymbol{B}) + (\boldsymbol{B}\cdot\nabla)\boldsymbol{A} - \boldsymbol{B}(\nabla\cdot\boldsymbol{A}) - (\boldsymbol{A}\cdot\nabla)\boldsymbol{B}$

$\nabla(\boldsymbol{A}\cdot\boldsymbol{B}) = \boldsymbol{A}\times(\nabla\times\boldsymbol{B}) + \boldsymbol{B}\times(\nabla\times\boldsymbol{A}) + (\boldsymbol{B}\cdot\nabla)\boldsymbol{A} + (\boldsymbol{A}\cdot\nabla)\boldsymbol{B}$

Laplacian:

$\nabla\cdot(\nabla\varphi) = \nabla^2\varphi$

$\nabla\times(\nabla\times\boldsymbol{A}) = \nabla(\nabla\cdot\boldsymbol{A}) - \nabla^2\boldsymbol{A}$


Example 2.13: If $\boldsymbol{A} = 2yz\hat{\imath} - x^2y\hat{\jmath} + xz^2\hat{k}$, $\boldsymbol{B} = x^2\hat{\imath} + yz\hat{\jmath} - xy\hat{k}$ and $\phi = 2x^2yz^3$, find

(a) $(\boldsymbol{A}\cdot\nabla)\phi$
(b) $\boldsymbol{A}\cdot\nabla\phi$
(c) $\boldsymbol{B}\times\nabla\phi$
(d) $\nabla^2\phi$

Solution:

(a)

$$(\boldsymbol{A}\cdot\nabla)\phi = \left[\left(2yz\hat{\imath} - x^2y\hat{\jmath} + xz^2\hat{k}\right)\cdot\left(\frac{\partial}{\partial x}\hat{\imath} + \frac{\partial}{\partial y}\hat{\jmath} + \frac{\partial}{\partial z}\hat{k}\right)\right]\phi$$

$$= \left(2yz\frac{\partial}{\partial x} - x^2y\frac{\partial}{\partial y} + xz^2\frac{\partial}{\partial z}\right)2x^2yz^3$$

$$= 2yz(4xyz^3) - x^2y(2x^2z^3) + xz^2(6x^2yz^2) = 8xy^2z^4 - 2x^4yz^3 + 6x^3yz^4$$

(b)

$$\nabla\phi = \frac{\partial}{\partial x}(2x^2yz^3)\hat{\imath} + \frac{\partial}{\partial y}(2x^2yz^3)\hat{\jmath} + \frac{\partial}{\partial z}(2x^2yz^3)\hat{k} = 4xyz^3\hat{\imath} + 2x^2z^3\hat{\jmath} + 6x^2yz^2\hat{k}$$

Therefore

$$\boldsymbol{A}\cdot\nabla\phi = \left(2yz\hat{\imath} - x^2y\hat{\jmath} + xz^2\hat{k}\right)\cdot\left(4xyz^3\hat{\imath} + 2x^2z^3\hat{\jmath} + 6x^2yz^2\hat{k}\right) = 8xy^2z^4 - 2x^4yz^3 + 6x^3yz^4$$

(c) With $\nabla\phi = 4xyz^3\hat{\imath} + 2x^2z^3\hat{\jmath} + 6x^2yz^2\hat{k}$:

$$\boldsymbol{B}\times\nabla\phi = \begin{vmatrix} \hat{\imath} & \hat{\jmath} & \hat{k} \\ x^2 & yz & -xy \\ 4xyz^3 & 2x^2z^3 & 6x^2yz^2 \end{vmatrix} = (6x^2y^2z^3 + 2x^3yz^3)\hat{\imath} + (-4x^2y^2z^3 - 6x^4yz^2)\hat{\jmath} + (2x^4z^3 - 4xy^2z^4)\hat{k}$$

(d)

$$\nabla^2\phi = \frac{\partial^2}{\partial x^2}(2x^2yz^3) + \frac{\partial^2}{\partial y^2}(2x^2yz^3) + \frac{\partial^2}{\partial z^2}(2x^2yz^3) = 4yz^3 + 0 + 12x^2yz$$
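Parts (b) and (d) of Example 2.13 can be checked symbolically (a sanity-check sketch, not part of the original notes):

```python
import sympy as sp

# Symbolic check of Example 2.13 for phi = 2 x^2 y z^3 and
# A = (2yz, -x^2 y, x z^2).
x, y, z = sp.symbols('x y z')
phi = 2 * x**2 * y * z**3
A = (2*y*z, -x**2*y, x*z**2)

grad_phi = [sp.diff(phi, v) for v in (x, y, z)]
A_dot_grad = sum(a * g for a, g in zip(A, grad_phi))   # A · grad(phi)
laplacian = sum(sp.diff(phi, v, 2) for v in (x, y, z)) # Laplacian of phi
print(sp.expand(A_dot_grad), laplacian)
```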


3. Linear Algebra, Matrices & Eigenvectors

In many practical systems, there naturally arises a set of quantities that can conveniently be represented as a certain dimensional array, referred to as a matrix. If matrices were simply a way of representing arrays of numbers, then they would have only a marginal utility as a means of visualising data. However, a whole branch of mathematics has evolved, involving manipulation of matrices, which has become a powerful tool for the solution of many problems. For example, consider the set of $n$ linear equations with $n$ unknowns

$$a_{11}Y_1 + a_{12}Y_2 + \cdots + a_{1n}Y_n = 0$$
$$a_{21}Y_1 + a_{22}Y_2 + \cdots + a_{2n}Y_n = 0 \qquad (52)$$
$$\vdots$$
$$a_{n1}Y_1 + a_{n2}Y_2 + \cdots + a_{nn}Y_n = 0$$

The necessary and sufficient condition for the set to have a non-trivial solution (other than $Y_1 = Y_2 = \cdots = Y_n = 0$) is that the determinant of the array of coefficients is zero: $\det(A) = 0$.

3.1 Basic definitions and notation

A matrix is an array of numbers with $m$ rows and $n$ columns. The $(i, j)$th element is the element found in row $i$ and column $j$. For example, have a look at the matrix below. This matrix has $m = 2$ rows and $n = 3$ columns, and therefore the matrix order is $2\times 3$. The $(i, j)$th element is $a_{ij}$.

$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{pmatrix} \qquad (53)$$

Matrices may be categorised based on the properties of their elements. Some basic definitions include:

1. The transpose of matrix $A$ (written $A^T$) is formed by interchanging element $a_{ij}$ with element $a_{ji}$. Therefore:

$$A^T = (a_{ji}), \quad (A + B)^T = A^T + B^T, \quad (AB)^T = B^T A^T \qquad (54)$$

A symmetric matrix equals its transpose, $A = A^T$.


2. A diagonal matrix is a square matrix ($m = n$) whose only non-zero elements lie along the leading diagonal. For example:

$$\operatorname{diag} A = \begin{pmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ 0 & 0 & a_{33} \end{pmatrix}$$

A diagonal can also be written for a list of matrices as:

$$\operatorname{diag}(a_{11}, a_{22}, \cdots, a_{nn})$$

which denotes the block diagonal matrix with elements $a_{11}, a_{22}, \cdots, a_{nn}$ along the diagonal and zeros elsewhere. A matrix formed in this way is sometimes called a direct sum of $a_{11}, a_{22}, \cdots, a_{nn}$ and the operation is denoted by $\oplus$:

$$a_{11}\oplus\cdots\oplus a_{nn} = \operatorname{diag}(a_{11}, a_{22}, \cdots, a_{nn})$$

3. In a square matrix of order $n$, the diagonal containing elements $a_{11}, a_{22}, \cdots, a_{nn}$ is called the principal or leading diagonal. The sum of the elements in this diagonal is called the trace of the $n\times n$ square matrix $A$, hence:

$$\operatorname{Trace}(A) = \operatorname{Tr}(A) = \sum_i a_{ii} \qquad (55)$$

We can also note a few more properties of the trace:

$$\operatorname{Tr}(A) = \operatorname{Tr}(A^T), \quad \operatorname{Tr}(cA) = c\operatorname{Tr}(A), \quad \operatorname{Tr}(A + B) = \operatorname{Tr}(A) + \operatorname{Tr}(B) \qquad (56)$$

4. The determinant of a square $n\times n$ matrix $A$ is denoted as $\det(A)$ or $|A|$. It is determined by:

$$|A| = \sum_{j=1}^{n} a_{ij}a^{(ij)} \qquad (57)$$

where:

$$a^{(ij)} = (-1)^{i+j}\left|A_{(i)(j)}\right| \qquad (58)$$

with $A_{(i)(j)}$ denoting the submatrix formed from $A$ by removing the $i$th row and the $j$th column. The determinant also satisfies the following:


$$|AB| = |A||B|, \quad |A| = |A^T|, \quad |cA| = c^n|A| \qquad (59)$$

5. The adjugate of an $n\times n$ matrix $A$ is defined as the $n\times n$ matrix of the cofactors of the elements of the transposed matrix. Therefore we can write the adjugate of $A$ as:

$$\operatorname{adj}(A) = \left(a^{(ji)}\right) = \left(a^{(ij)}\right)^T \qquad (60)$$

The adjugate has an interesting property:

$$A\operatorname{adj}(A) = \operatorname{adj}(A)A = |A|I \qquad (61)$$

3.2 Multiplication of matrices and multiplication of vectors and matrices

3.2.1 Matrix multiplication

Let $A$ be of order $m\times n$ and $B$ of order $n\times p$. Then the product of the two matrices $A$ and $B$ is

$$C = AB \qquad (62)$$

or

$$c_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj} \qquad (63)$$

where the resulting matrix $C$ is of order $m\times p$. Square matrices obey the laws below:

Associative: $A(BC) = (AB)C$ \qquad (64)

Distributive: $(A + B)C = AC + BC, \quad (B + C)A = BA + CA$ \qquad (65)

Matrix Polynomials

Polynomials in square matrices are similar to the more familiar polynomials in scalars. Let us consider:

$$p(A) = b_0I + b_1A + \cdots + b_kA^k \qquad (66)$$

The value of this polynomial is a matrix. The theory of polynomials in general holds, and we have the useful factorisations of monomials:


For any positive integer $k$,

$$I - A^k = (I - A)(I + A + \cdots + A^{k-1}) \qquad (67)$$

For an odd positive integer $k$,

$$I + A^k = (I + A)(I - A + \cdots + A^{k-1}) \qquad (68)$$

3.2.2 Traces and determinants of square Cayley products

A useful property of the trace, for matrices $A$ and $B$ that are conformable for both multiplications $AB$ and $BA$, is

$$\operatorname{Tr}(AB) = \operatorname{Tr}(BA) \qquad (69)$$

This is obvious from the definitions of matrix multiplication and the trace. Due to the associativity of matrix multiplication, equation (69) can be further extended to:

$$\operatorname{Tr}(ABC) = \operatorname{Tr}(BCA) = \operatorname{Tr}(CAB) \qquad (70)$$

If $A$ and $B$ are square matrices conformable for multiplication, then an important property of the determinant is

$$|AB| = |A||B| \qquad (71)$$

or we can write the equation as:

$$\begin{vmatrix} A & 0 \\ -I & B \end{vmatrix} = |A||B| \qquad (72)$$

3.2.3 The Kronecker product

The Kronecker multiplication, denoted by $\otimes$, is not commutative, but it is associative. Therefore, $A\otimes B$ may not equal $B\otimes A$. Let us have an $m\times m$ matrix $A$ and an $n\times n$ matrix $B$. We can then form an $mn\times mn$ matrix $C$ by defining the direct product as:


$$C = A\otimes B = \begin{bmatrix} a_{11}B & a_{12}B & \cdots & a_{1m}B \\ a_{21}B & a_{22}B & \cdots & a_{2m}B \\ \vdots & \vdots & & \vdots \\ a_{m1}B & a_{m2}B & \cdots & a_{mm}B \end{bmatrix} \qquad (73)$$

To be more specific, let $A$ and $B$ be $2\times 2$ matrices

$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \qquad B = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}$$

The Kronecker product matrix $C$ is the $4\times 4$ matrix

$$C = A\otimes B = \begin{pmatrix} a_{11}b_{11} & a_{11}b_{12} & a_{12}b_{11} & a_{12}b_{12} \\ a_{11}b_{21} & a_{11}b_{22} & a_{12}b_{21} & a_{12}b_{22} \\ a_{21}b_{11} & a_{21}b_{12} & a_{22}b_{11} & a_{22}b_{12} \\ a_{21}b_{21} & a_{21}b_{22} & a_{22}b_{21} & a_{22}b_{22} \end{pmatrix}$$

The determinant of the Kronecker product of two square matrices, an $m\times m$ matrix $A$ and an $n\times n$ matrix $B$, has a simple relationship to the determinants of the individual matrices:

$$|A\otimes B| = |A|^n|B|^m \qquad (74)$$
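The determinant identity $\det(A\otimes B) = \det(A)^n\det(B)^m$ for $A$ of size $m\times m$ and $B$ of size $n\times n$ is easy to check numerically with NumPy (a sanity-check sketch with $m = 2$, $n = 3$ and a fixed random seed, not part of the original notes):

```python
import numpy as np

# Numerical check of equation (74): for A (m x m) and B (n x n),
# det(A ⊗ B) = det(A)^n · det(B)^m. Here m = 2, n = 3.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((3, 3))

lhs = np.linalg.det(np.kron(A, B))
rhs = np.linalg.det(A)**3 * np.linalg.det(B)**2
print(lhs, rhs)
```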

Assuming the matrices are conformable for the indicated operations, some additional properties of Kronecker products are as follows:

$$(aA)\otimes(bB) = ab(A\otimes B) = (abA)\otimes B = A\otimes(abB) \qquad (75)$$

where $a$ and $b$ are scalars.

$$(A + B)\otimes C = A\otimes C + B\otimes C \qquad (76)$$
$$(A\otimes B)\otimes C = A\otimes(B\otimes C) \qquad (77)$$
$$(A\otimes B)^T = A^T\otimes B^T \qquad (78)$$
$$(A\otimes B)(C\otimes D) = AC\otimes BD \qquad (79)$$


3.3 Matrix rank and the inverse of a full rank matrix

The linear dependence or independence of the vectors forming the rows or columns of a matrix is an important characteristic of the matrix. The maximum number of linearly independent vectors is called the rank of the matrix, $\operatorname{rank}(A)$. Multiplication by a non-zero scalar does not change the linear dependence of vectors. Therefore, for a scalar $a$ with $a \ne 0$, we have

$$\operatorname{rank}(aA) = \operatorname{rank}(A) \qquad (80)$$

For an $m\times n$ matrix $A$,

$$\operatorname{rank}(A) \le \min(m, n) \qquad (81)$$

Example 3.1: Find the rank of the matrix $A$ below:

$$A = \begin{pmatrix} 1 & 2 & 1 \\ -2 & -3 & 1 \\ 3 & 5 & 0 \end{pmatrix}$$

Solution: First we note that this is a $3\times 3$ matrix. If we look closely we can see that the first two rows are linearly independent. However, the third row is dependent on the first and second rows, since Row 1 $-$ Row 2 $=$ Row 3. Therefore, the rank of matrix $A$ is 2.
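Example 3.1 can be checked numerically with NumPy (a sanity-check sketch, not part of the original notes):

```python
import numpy as np

# Check Example 3.1: the matrix has rank 2, and Row 1 - Row 2
# reproduces Row 3.
A = np.array([[ 1,  2, 1],
              [-2, -3, 1],
              [ 3,  5, 0]])

print(np.linalg.matrix_rank(A))
print(A[0] - A[1])
```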

3.3.1 Full rank matrices

If the rank of a matrix is the same as its smaller dimension, we say the matrix is of full rank. In the case of a non-square matrix, we say the matrix is of full row rank or full column rank to emphasise which dimension is the smaller one. A matrix is of full row rank when its rows are linearly independent, while a matrix is of full column rank when its columns are linearly independent. For a square matrix, the matrix is of full rank when all rows and columns are linearly independent, equivalently when the determinant of the matrix is not zero. The rank of a product of two matrices is less than or equal to the lesser rank of the two:

$$\operatorname{rank}(AB) \le \min\left(\operatorname{rank}(A), \operatorname{rank}(B)\right) \qquad (82)$$

The rank of a sum of two matrices is less than or equal to the sum of their ranks:

$$\operatorname{rank}(A + B) \le \operatorname{rank}(A) + \operatorname{rank}(B) \qquad (83)$$


From equation (83), we can also write:

$$|\operatorname{rank}(A) - \operatorname{rank}(B)| \le \operatorname{rank}(A + B) \qquad (84)$$

3.3.2 Solutions of linear equations

An application of vectors and matrices involves systems of linear equations:

$$a_{11}x_1 + \cdots + a_{1m}x_m = b_1$$
$$\vdots \qquad\qquad \vdots \qquad (85)$$
$$a_{n1}x_1 + \cdots + a_{nm}x_m = b_n$$

or

𝐴𝐴𝑥𝑥 = 𝑏𝑏 (86) In this system, 𝐴𝐴 is called the coefficient matrix. The 𝑥𝑥 that satisfied this system of equation is then called the solution to the system. For a given 𝐴𝐴 and 𝑏𝑏, a solution may or may not exist. A system for which a solution exist, is said to be consistent; otherwise, it is inconsistent. A linear system 𝐴𝐴𝑛𝑛𝑥𝑥𝑚𝑚𝑥𝑥 = 𝑏𝑏 is consistent if and only if:

𝑅𝑅𝑎𝑎𝑠𝑠𝑘𝑘([𝐴𝐴|𝑏𝑏]) = 𝑠𝑠𝑎𝑎𝑠𝑠𝑘𝑘 (𝐴𝐴) (87) Namely, the space spanned by the columns of 𝐴𝐴 is the same as that spanned by the columns of 𝐴𝐴 and the vector 𝑏𝑏; therefore, 𝑏𝑏 must be a linear combination of the columns of 𝐴𝐴. A special case that yields equation (87) for any 𝑏𝑏 is:

𝑅𝑅𝑎𝑎𝑠𝑠𝑘𝑘(𝐴𝐴𝑛𝑛𝑥𝑥𝑚𝑚) = 𝑠𝑠 (88) And so if 𝐴𝐴 is of full row rank, the system is consistent regardless of the value of 𝑏𝑏. In this case, of course, the number of rows of 𝐴𝐴 must not be greater than the number of columns. A square system in which 𝐴𝐴 is non-singular is clearly consistent, and the solution is given by:

A square system in which A is non-singular is clearly consistent, and the solution is given by:

x = A⁻¹b   (89)

3.3.3 Preservation of positive definiteness

A certain type of product of a full rank matrix and a positive definite matrix preserves not only the rank but also the positive definiteness. If C is n × n and positive definite and A is n × m of rank m (m ≤ n), then AᵀCA is positive definite. To understand this, let C and A be as described. Let x be any m-vector such that x ≠ 0, and let y = Ax. Because A has full column rank, y ≠ 0, and we then have:

xᵀ(AᵀCA)x = (Ax)ᵀC(Ax) = yᵀCy > 0   (90)

Therefore, since AᵀCA is symmetric:

1. If 𝐶𝐶 is positive definite and 𝐴𝐴 is of full column rank, then 𝐴𝐴𝑇𝑇𝐶𝐶𝐴𝐴 is positive definite.

Furthermore, we then have the converse:

2. If AᵀCA is positive definite, then A is of full column rank. For otherwise there exists an x ≠ 0 such that Ax = 0, and so xᵀ(AᵀCA)x = 0.

3.3.4 A lower bound on the rank of a matrix product

Equation (82) gives an upper bound on the rank of the product of two matrices: the rank cannot be greater than the rank of either factor. We now develop a lower bound on the rank of the product of two matrices when one of them is square. If A is n × n (square) and B is a matrix with n rows, then:

rank(AB) ≥ rank(A) + rank(B) − n   (91)

3.3.5 Inverse of products and sums of matrices

The inverse of the Cayley product of two nonsingular matrices of the same size is particularly easy to form. If A and B are square full rank matrices of the same size, then:

(AB)⁻¹ = B⁻¹A⁻¹   (92)
A(I + A)⁻¹ = (I + A⁻¹)⁻¹   (93)
(A + BBᵀ)⁻¹B = A⁻¹B(I + BᵀA⁻¹B)⁻¹   (94)
(A⁻¹ + B⁻¹)⁻¹ = A(A + B)⁻¹B   (95)

Page 50: AEM ADV03 Introductory Mathematics€¦ ·

50

A − A(A + B)⁻¹A = B − B(A + B)⁻¹B   (96)
A⁻¹ + B⁻¹ = A⁻¹(A + B)B⁻¹   (97)
(I + AB)⁻¹ = I − A(I + BA)⁻¹B   (98)
(I + AB)⁻¹A = A(I + BA)⁻¹   (99)
(A ⊗ B)⁻¹ = A⁻¹ ⊗ B⁻¹   (100)

Note: When 𝐴𝐴 and/or 𝐵𝐵 are not full rank, the inverse may not exist.
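As a quick sanity check on two of these identities, here is a small sketch (the helper functions are illustrative, not from the notes) that verifies Equations (92) and (99) exactly for a pair of invertible 2 × 2 matrices using rational arithmetic.

```python
from fractions import Fraction as F

def mul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def inv2(M):
    # Closed-form inverse of a 2x2 matrix; det must be non-zero (full rank)
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

I = [[F(1), F(0)], [F(0), F(1)]]
A = [[F(2), F(1)], [F(1), F(1)]]   # det = 1, invertible
B = [[F(1), F(2)], [F(0), F(1)]]   # det = 1, invertible

# Equation (92): (AB)^-1 = B^-1 A^-1
assert inv2(mul(A, B)) == mul(inv2(B), inv2(A))
# Equation (99): (I + AB)^-1 A = A (I + BA)^-1
assert mul(inv2(add(I, mul(A, B))), A) == mul(A, inv2(add(I, mul(B, A))))
```

Because `Fraction` arithmetic is exact, the equalities hold literally, with no floating-point tolerance needed.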

3.4 Eigensystems

Suppose A is an n × n matrix. The number λ is said to be an eigenvalue of A if, for some non-zero vector x, Ax = λx. Any non-zero vector x for which this equation holds is called an eigenvector for eigenvalue λ, or an eigenvector of A corresponding to eigenvalue λ. How do we find eigenvalues and eigenvectors? To determine whether λ is an eigenvalue of A, we need to determine whether there are any non-zero solutions to the matrix equation Ax = λx. To do this, we can use the following:

(a) The eigenvalues of a symmetric matrix A are the numbers λ that satisfy |A − λI| = 0.
(b) The eigenvectors of a symmetric matrix A are the vectors x that satisfy (A − λI)x = 0.

There are two theorems involved in the eigensystems and they are:

1. The eigenvalues of any real symmetric matrix are real.
2. The eigenvectors of any real symmetric matrix corresponding to different eigenvalues are orthogonal.

Example 3.2: Let A be the square matrix below. Find the eigenvalues and eigenvectors of matrix A.

A = [1  1]
    [2  2]

Solution: To find the eigenvalues, we need to solve |A − λI| = 0, therefore:

|A − λI| = | [1  1] − λ [1  0] |
           | [2  2]     [0  1] |

         = | 1 − λ    1    |
           |   2    2 − λ  |

         = (1 − λ)(2 − λ) − 2 = λ² − 3λ

So the eigenvalues are the solutions of λ² − 3λ = 0. This simplifies to λ(λ − 3) = 0, with solutions λ = 0 and λ = 3. Hence the eigenvalues of A are 0 and 3. Now, to find the eigenvectors for eigenvalue 0, we solve the system (A − 0I)x = 0, that is, Ax = 0, or

Ax = 0

[1  1] [x₁]   [0]
[2  2] [x₂] = [0]

We then have to solve

x₁ + x₂ = 0 ,  2x₁ + 2x₂ = 0

which gives x₁ = −x₂. Choosing x₂ = 1, an eigenvector for eigenvalue 0 is:

x = [−1]
    [ 1]

Similarly, to find the eigenvector for eigenvalue 3, we will solve (𝐴𝐴 − 3𝐼𝐼)𝒙𝒙 = 0 which is:

(A − 3I)x = 0

[−2   1] [x₁]   [0]
[ 2  −1] [x₂] = [0]

This is equivalent to the equations

−2x₁ + x₂ = 0 ,  2x₁ − x₂ = 0

which give x₂ = 2x₁. If we choose x₁ = 1, we obtain the eigenvector

x = [1]
    [2]
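The two eigenpairs can be confirmed numerically. A minimal sketch (not part of the notes): compute the roots of λ² − 3λ from the trace and determinant of A, then check Ax = λx for both eigenvectors.

```python
import math

def matvec(M, v):
    # 2x2 matrix-vector product
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

A = [[1, 1], [2, 2]]

# For a 2x2 matrix the characteristic polynomial is
# lambda^2 - tr(A) lambda + det(A); here lambda^2 - 3 lambda.
tr = A[0][0] + A[1][1]                        # 3
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # 0
disc = math.sqrt(tr * tr - 4 * det)
eigs = sorted([(tr - disc) / 2, (tr + disc) / 2])
print(eigs)  # -> [0.0, 3.0]

# Verify A x = lambda x for the eigenvectors found above
for lam, x in [(0, [-1, 1]), (3, [1, 2])]:
    assert matvec(A, x) == [lam * xi for xi in x]
```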


Example 3.3: Suppose that

A = [4  0  4]
    [0  4  4]
    [4  4  8]

Find the eigenvalues of A and obtain one eigenvector for each eigenvalue.

Solution: To find the eigenvalues, we solve |A − λI| = 0, so we can write:

|A − λI| = | 4 − λ    0      4   |
           |   0    4 − λ    4   |
           |   4      4    8 − λ |

         = (4 − λ) | 4 − λ    4   | + 4 | 0  4 − λ |
                   |   4    8 − λ |     | 4    4   |

         = (4 − λ)[(4 − λ)(8 − λ) − 16] + 4[−4(4 − λ)]
         = (4 − λ)[(4 − λ)(8 − λ) − 16] − 16(4 − λ)
         = (4 − λ)[(4 − λ)(8 − λ) − 16 − 16]
         = (4 − λ)(32 − 12λ + λ² − 32)
         = (4 − λ)(λ² − 12λ)
         = (4 − λ) λ (λ − 12)

Therefore, solving |A − λI| = 0, the eigenvalues are 4, 0 and 12. To find the eigenvectors for eigenvalue 4, we solve the equation (A − 4I)x = 0, that is,

(A − 4I)x = 0

[0  0  4] [x₁]   [0]
[0  0  4] [x₂] = [0]
[4  4  4] [x₃]   [0]


The equations we get out of the equation above are:

4x₃ = 0
4x₃ = 0
4x₁ + 4x₂ + 4x₃ = 0

Therefore, x₃ = 0 and x₂ = −x₁. Choosing x₁ = 1, we get the eigenvector

x = [ 1]
    [−1]
    [ 0]

Similarly, for λ = 0, an eigenvector is:

x = [ 1]
    [ 1]
    [−1]

And for λ = 12, an eigenvector is:

x = [1]
    [1]
    [2]
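These three eigenpairs can be verified directly by checking Ax = λx for each; a short sketch (not from the notes):

```python
def matvec(M, v):
    # 3x3 matrix-vector product
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[4, 0, 4],
     [0, 4, 4],
     [4, 4, 8]]

# Check A x = lambda x for each claimed eigenpair
pairs = [(4, [1, -1, 0]), (0, [1, 1, -1]), (12, [1, 1, 2])]
for lam, x in pairs:
    assert matvec(A, x) == [lam * xi for xi in x]
print("all three eigenpairs check out")
```

Integer arithmetic keeps the check exact, since the eigenvalues and eigenvectors here are all integers.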

3.5 Diagonalisation of symmetric matrices

A square matrix U is said to be orthogonal if its inverse equals its transpose. Therefore:

U⁻¹ = Uᵀ, or equivalently, UUᵀ = UᵀU = I   (101)

If U is a real orthogonal matrix of order n × n and A is a real matrix of the same order, then UᵀAU is called the orthogonal transform of A. Note: since U⁻¹ = Uᵀ for orthogonal U, the equality UᵀAU = D is the same as U⁻¹AU = D; the diagonal entries of D are the eigenvalues of A, and the columns of U are the corresponding eigenvectors. The theorems involving diagonalisation of a symmetric matrix are as follows:


1. If A is a symmetric matrix of order n × n, then it is possible to find an orthogonal matrix U of the same order such that the orthogonal transform of A with respect to U is diagonal, and the diagonal elements of the transform are the eigenvalues of A.

2. Cayley-Hamilton Theorem: A real square matrix satisfies its own characteristic equation (i.e. its own eigenvalue equation):

Aⁿ + aₙ₋₁Aⁿ⁻¹ + aₙ₋₂Aⁿ⁻² + ⋯ + a₁A + a₀I = 0

where

a₀ = (−1)ⁿ|A| ,  aₙ₋₁ = (−1)ⁿ⁻¹ tr(A)

3. Trace Theorem: The sum of the eigenvalues of matrix A equals the sum of the diagonal elements of A, denoted tr(A).

4. Determinant Theorem: The product of the eigenvalues of A equals the determinant of A.

Example 3.4: For the same matrix as in Example 3.3, find the orthogonal matrix U and show that UᵀAU = D:

A = [4  0  4]
    [0  4  4]
    [4  4  8]

Solution: As we already observed, matrix A is symmetric, and we have calculated the three distinct eigenvalues 4, 0, 12 (in that order) and the eigenvectors associated with them are:

[ 1]   [ 1]   [1]
[−1] , [ 1] , [1]
[ 0]   [−1]   [2]

Now these eigenvectors are not of length 1. For example, the first eigenvector has a length of √(1² + (−1)² + 0²) = √2. So, if we divide each component by √2, we obtain an eigenvector of length 1:

[ 1/√2]
[−1/√2]
[  0  ]

We can similarly normalize the other two vectors and therefore we obtain:

[ 1/√3]   [1/√6]
[ 1/√3] , [1/√6]
[−1/√3]   [2/√6]

Now we can form the matrix 𝑈𝑈 whose columns are these normalized eigenvectors:

U = [ 1/√2   1/√3   1/√6]
    [−1/√2   1/√3   1/√6]
    [  0    −1/√3   2/√6]

Therefore, U is orthogonal and 𝑈𝑈𝑇𝑇𝐴𝐴𝑈𝑈 = 𝐷𝐷 = diag(4, 0, 12).
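This claim is easy to verify numerically. A minimal floating-point sketch (not from the notes) checking both UᵀU = I and UᵀAU = diag(4, 0, 12):

```python
import math

def mul(X, Y):
    # General rectangular matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(M):
    return [list(col) for col in zip(*M)]

s2, s3, s6 = math.sqrt(2), math.sqrt(3), math.sqrt(6)
A = [[4, 0, 4], [0, 4, 4], [4, 4, 8]]
U = [[1/s2,  1/s3, 1/s6],
     [-1/s2, 1/s3, 1/s6],
     [0,    -1/s3, 2/s6]]

UtU = mul(transpose(U), U)        # should be the 3x3 identity
D = mul(transpose(U), mul(A, U))  # should be diag(4, 0, 12)

for i in range(3):
    for j in range(3):
        assert abs(UtU[i][j] - (1.0 if i == j else 0.0)) < 1e-9
        target = [4.0, 0.0, 12.0][i] if i == j else 0.0
        assert abs(D[i][j] - target) < 1e-9
```

The tolerance absorbs floating-point rounding; with exact arithmetic the off-diagonal entries would be exactly zero.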


4. Generalised Vector Calculus – Integral Theorems

The four fundamental theorems of vector calculus are generalisations of the fundamental theorem of calculus, which equates the integral of the derivative G′(t) to the values of G(t) at the interval boundary points:

∫ₐᵇ G′(t) dt = G(b) − G(a)   (102)

Similarly, the fundamental theorems of vector calculus state that an integral of some type of derivative over some object is equal to the values of the function along the boundary of that object. The four fundamental theorems are the gradient theorem for line integrals, Green's theorem, Stokes' theorem and the divergence theorem.

4.1 The gradient theorem for line integrals

The Gradient Theorem is also referred to as the Fundamental Theorem of Calculus for Line Integrals. It generalises integration along an axis, e.g. dx or dy, to the integration of vector fields along arbitrary curves C in their base space. It is expressed by

∫_C ∇f ∙ ds = f(q) − f(p)   (103)

where p and q are the endpoints of C. This means the line integral of the gradient of some function is just the difference of the function evaluated at the endpoints of the curve. In particular, this means that the integral of ∇f does not depend on the curve itself. A few notes to remember when using this theorem:

i. For closed curves, the line integral is zero:

∮_C ∇f ∙ ds = 0

ii. Gradient fields are path independent: if F = ∇ f, then the line integral between two points P and Q does not depend on the path connecting the two points.

iii. The theorem holds in any dimension. In one dimension, it reduces to the fundamental theorem of calculus, Equation (102) above.

iv. The theorem justifies the name conservative for gradient vector fields.


Example 4.1: Let f(x, y, z) = x² + y⁴ + z. Find the line integral of the vector field F(x, y, z) = ∇f(x, y, z) along the path s(t) = ⟨cos(5t), sin(2t), t²⟩ from t = 0 to t = 2π.

Solution: At t = 0, s(0) = ⟨1, 0, 0⟩, therefore f(s(0)) = 1. At t = 2π, s(2π) = ⟨1, 0, 4π²⟩, therefore f(s(2π)) = 1 + 4π². Hence:

∫_C ∇f ∙ ds = f(s(2π)) − f(s(0)) = 1 + 4π² − 1 = 4π²
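Assuming the path and function above, the theorem can also be checked numerically: integrating the derivative of the composite f(s(t)) over [0, 2π] with the midpoint rule should reproduce 4π² without using the endpoint shortcut. A sketch:

```python
import math

def f(x, y, z):
    return x**2 + y**4 + z

def s(t):
    return (math.cos(5*t), math.sin(2*t), t**2)

# Midpoint rule for the integral of d/dt f(s(t)) over [0, 2*pi];
# the derivative of the composite is taken by central finite difference.
n, eps = 40000, 1e-6
h = 2 * math.pi / n
total = 0.0
for i in range(n):
    t = (i + 0.5) * h
    total += (f(*s(t + eps)) - f(*s(t - eps))) / (2 * eps) * h

assert abs(total - 4 * math.pi**2) < 1e-3
```

The agreement illustrates path independence: only the endpoint values of f matter, however wildly the path oscillates in between.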

4.2 Green's Theorem

Let's first define some notation. Consider a domain 𝒟 whose boundary 𝒞 is a simple closed curve – that is, a closed curve that does not intersect itself (see Figure 4.1 below). We follow standard usage and denote the boundary curve 𝒞 by ∂𝒟. The counterclockwise orientation of ∂𝒟 is called the boundary orientation. When you traverse the boundary in this direction, the domain lies to your left (see Figure 4.1).

Figure 4.1. The boundary of 𝒟𝒟 is a simple closed curve 𝒞𝒞 that is denoted by 𝜕𝜕𝒟𝒟. The boundary is

oriented in the counterclockwise direction.

We have two notations for the line integral of F = ⟨F1, F2⟩, which are:

∫_C F ∙ ds   and   ∫_C F₁ dx + F₂ dy   (104)

If 𝒞𝒞 is parametrized by c (t) = (x (t), y (t)) for a ≤ t ≤ b, then

dx = x′(t) dt ,  dy = y′(t) dt

∫_C F₁ dx + F₂ dy = ∫ₐᵇ [F₁(x(t), y(t)) x′(t) + F₂(x(t), y(t)) y′(t)] dt   (105)

In this section, we will assume that the components of all vector fields have continuous partial derivatives, and also that 𝒞𝒞 is smooth (𝒞𝒞 has a parametrization with derivatives of all orders) or piecewise smooth (a finite union of smooth curves joined together at corners). Green’s Theorem: Let 𝒟𝒟 be a domain whose boundary 𝜕𝜕𝒟𝒟 is a simple closed curve, oriented counterclockwise. Then:

∮_{∂𝒟} F₁ dx + F₂ dy = ∬_𝒟 (∂F₂/∂x − ∂F₁/∂y) dA   (106)

Proof: A complete proof is quite technical, so we shall make the simplifying assumption that the boundary of 𝒟𝒟 can be described as the union of two graphs y = g (x) and y = f (x), with g (x) ≤ f (x), as in figure 4.2 and also as the union of two graphs x = g1 (y) and x = f1 (y), with g1 (y) ≤ f1 (y), as in Figure 4.3. Green’s Theorem splits up into two equations, one for F1 and one for F2:

∮_{∂𝒟} F₁ dx = −∬_𝒟 (∂F₁/∂y) dA   (107)

∮_{∂𝒟} F₂ dy = ∬_𝒟 (∂F₂/∂x) dA   (108)

In other words, Green’s Theorem is obtained by adding equations (107) and (108). To prove equation (107), we write:

∮_{∂𝒟} F₁ dx = ∫_{𝒞₁} F₁ dx + ∫_{𝒞₂} F₁ dx   (109)

where 𝒞₁ is the graph of y = g(x) and 𝒞₂ is the graph of y = f(x), oriented as in Figure 4.2. To compute these line integrals, we parametrize the graphs from left to right using t as parameter:

Graph of y = g (x): c1 (t) = (t, g (t)), a ≤ t ≤ b Graph of y = f (x): c2 (t) = (t, f (t)), a ≤ t ≤ b

Since 𝒞𝒞2 is oriented from right to left, the line integral over 𝜕𝜕𝒟𝒟 is the difference

∮_{∂𝒟} F₁ dx = ∫_{𝒞₁} F₁ dx − ∫_{𝒞₂} F₁ dx

In both parametrizations, x = t, so dx = dt and by Equation (105),

∮_{∂𝒟} F₁ dx = ∫_{t=a}^{b} F₁(t, g(t)) dt − ∫_{t=a}^{b} F₁(t, f(t)) dt   (110)

Figure 4.2. The boundary curve 𝜕𝜕𝒟𝒟 is the union of the graphs of y = g (x) and y = f (x) oriented counterclockwise.

Figure 4.3. The boundary curve ∂𝒟 is also the union of the graphs of x = g₁(y) and x = f₁(y), oriented counterclockwise.

Now, the key step is to apply the Fundamental Theorem of Calculus to ∂F₁/∂y (t, y) as a function of y with t held constant:

F₁(t, f(t)) − F₁(t, g(t)) = ∫_{y=g(t)}^{f(t)} (∂F₁/∂y)(t, y) dy

Substituting the integral on the right in Equation (110), we obtain Equation (107)

∮_{∂𝒟} F₁ dx = −∫_{t=a}^{b} ∫_{y=g(t)}^{f(t)} (∂F₁/∂y)(t, y) dy dt = −∬_𝒟 (∂F₁/∂y) dA

Equation (108) is proved in a similar fashion, by expressing 𝜕𝜕𝒟𝒟 as the union of the graphs of x = f1 (y) and x = g1 (y). Recall that if curl F = 0 in a simply connected region, then the line integral along a closed curve is zero. If two curves connect two points then the line integral along those curves agrees. Therefore, Equation (106) becomes:

∂F₂/∂x − ∂F₁/∂y = 0

Example 4.2: Verify Green’s Theorem for the line integral along the unit circle 𝒞𝒞, oriented counterclockwise

∮_𝒞 xy² dx + x dy

Solution: Step 1. Evaluate the line integral directly. We use the standard parametrization of the unit circle:

x = cos θ ,  y = sin θ
dx = −sin θ dθ ,  dy = cos θ dθ

The integrand in the line integral is

xy² dx + x dy = cos θ sin² θ (−sin θ dθ) + cos θ (cos θ dθ) = (−cos θ sin³ θ + cos² θ) dθ

And

∮_𝒞 xy² dx + x dy = ∫₀^{2π} (−cos θ sin³ θ + cos² θ) dθ

= −(sin⁴ θ)/4 |₀^{2π} + (1/2)[θ + (1/2) sin 2θ]₀^{2π}

= 0 + (1/2)(2π + 0)

= π

Step 2: Evaluate the line integral using Green's Theorem. In this example, F₁ = xy² and F₂ = x, so

∂F₂/∂x − ∂F₁/∂y = (∂/∂x) x − (∂/∂y) xy² = 1 − 2xy

According to Green’s Theorem, from Equation (106):

∮_𝒞 xy² dx + x dy = ∬_𝒟 (∂F₂/∂x − ∂F₁/∂y) dA = ∬_𝒟 (1 − 2xy) dA

where 𝒟 is the disk x² + y² ≤ 1 enclosed by 𝒞. The integral of 2xy over 𝒟 is zero by symmetry – the contributions for positive and negative x cancel. We can check this directly:

−2 ∫_{x=−1}^{1} ∫_{y=−√(1−x²)}^{√(1−x²)} xy dy dx = −∫_{x=−1}^{1} x y² |_{y=−√(1−x²)}^{√(1−x²)} dx = 0

Therefore,

∬_𝒟 (∂F₂/∂x − ∂F₁/∂y) dA = ∬_𝒟 1 dA = Area(𝒟) = π
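Both sides of Green's Theorem for this example can also be approximated numerically; a sketch (midpoint rules, not part of the notes) confirming that each side comes out to π:

```python
import math

# Step 1 check: the line integral of x y^2 dx + x dy around the unit circle
n = 20000
h = 2 * math.pi / n
line = 0.0
for i in range(n):
    th = (i + 0.5) * h
    x, y = math.cos(th), math.sin(th)
    dx, dy = -math.sin(th), math.cos(th)
    line += (x * y**2 * dx + x * dy) * h

# Step 2 check: the double integral of (1 - 2xy) over the unit disk,
# in polar coordinates with area element r dr d(theta)
m = 400
dr, dth = 1.0 / m, 2 * math.pi / m
dbl = 0.0
for i in range(m):
    r = (i + 0.5) * dr
    for j in range(m):
        th = (j + 0.5) * dth
        x, y = r * math.cos(th), r * math.sin(th)
        dbl += (1 - 2 * x * y) * r * dr * dth

assert abs(line - math.pi) < 1e-6
assert abs(dbl - math.pi) < 1e-3
```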


4.3 Stokes' Theorem

Stokes' Theorem is an extension of Green's Theorem to three dimensions, in which circulation is related to a surface integral in ℝ³ (rather than to a double integral in the plane). In order to state it, let's first introduce some definitions and terminology. Figure 4.4 shows three surfaces with different types of boundaries. The boundary of a surface is denoted ∂S. Observe that the boundary in (A) is a single, simple closed curve and the boundary in (B) consists of three closed curves. The surface in (C) is called a closed surface because its boundary is empty. In this case, we write ∂S = 0.

Figure 4.4. Surfaces and their boundaries.

Recall that an orientation is a continuously varying choice of unit normal vector at each point of a surface S. When S is oriented, we can specify an orientation of 𝜕𝜕𝑆𝑆, called the boundary orientation. Imagine that you are a unit vector walking along the boundary curve. The boundary orientation is the direction for which the surface is on your left as you walk. For example, the boundary of the surface in Figure 4.5 consists of two curves, 𝒞𝒞1 and 𝒞𝒞2. In Figure 4.5 (A), the normal vector points to the outside. The woman (representing the normal vector) is walking along 𝒞𝒞1 and has the surface to her left, so she is walking in the positive direction. The curve 𝒞𝒞2 is oriented in the opposite direction because she would have to walk along 𝒞𝒞2 in that direction to keep the surface to her left. The boundary orientations in Figure 4.5 (B) are reversed because the opposite normal has been selected to orient the surface.


Figure 4.5. The orientation of the boundary ∂S for each of the two possible orientations of the surface S.

Recall from Chapter 2: all that is left to do is to define curl. The curl of a vector field F = ⟨F₁, F₂, F₃⟩ is a vector field defined by the symbolic determinant

curl(F) = | i⃗      j⃗      k⃗   |
          | ∂/∂x   ∂/∂y   ∂/∂z |
          | F₁     F₂     F₃   |

        = (∂F₃/∂y − ∂F₂/∂z) i⃗ − (∂F₃/∂x − ∂F₁/∂z) j⃗ + (∂F₂/∂x − ∂F₁/∂y) k⃗

Recall from Chapter 2, the curl is the symbolic cross product

curl(F) = ∇ × F

where ∇ is the del "operator" (also called "nabla"):

∇ = ⟨∂/∂x, ∂/∂y, ∂/∂z⟩

It is straightforward to check that curl obeys the linearity rules:

curl (F + G) = curl (F) + curl (G)

curl (cF) = c curl (F) (c being any constant)


Now, going back to Stokes’ Theorem, let’s assume that S is an oriented surface with parametrization G : 𝒟𝒟 → S, where 𝒟𝒟 is a domain in the plane bounded by smooth, simple closed curves, and G is one-to-one and regular, except possibly on the boundary of 𝒟𝒟. More generally, S may be a finite union of surfaces of this type. The surfaces in applications we consider, such as spheres, cubes and graphs of functions, satisfy these conditions. For surface S described above, Stokes’ Theorem gives:

∮_{∂S} F ∙ ds = ∬_S curl(F) ∙ dS   (111)

The integral on the left is defined relative to the boundary orientation of 𝜕𝜕𝑆𝑆. If S is closed (that is, 𝜕𝜕𝑆𝑆 is empty), then the surface integral on the right is zero. Proof: Each side of Equation (111) is equal to a sum over the components of F:

∮_𝒞 F ∙ ds = ∮_𝒞 F₁ dx + F₂ dy + F₃ dz

∬_S curl(F) ∙ dS = ∬_S curl(F₁ i⃗) ∙ dS + ∬_S curl(F₂ j⃗) ∙ dS + ∬_S curl(F₃ k⃗) ∙ dS

The proof consists of showing that the F₁-, F₂-, and F₃- terms are separately equal. We will prove this under the simplifying assumption that S is the graph of a function z = f(x, y) lying over a domain in the xy-plane. Furthermore, we will carry out the details only for the F₁- term; the calculations for the F₂- and F₃- components are similar. Thus we shall prove that

∮_𝒞 F₁ dx = ∬_S curl(F₁(x, y, z) i⃗) ∙ dS   (112)


Figure 4.6.

Orient S with upward-pointing normal as in Figure 4.6 and let 𝒞𝒞 = 𝜕𝜕𝑆𝑆 be the boundary curve. Let 𝒞𝒞0 be the boundary of 𝒟𝒟 in the xy-plane, and let c0 (t) = (x(t), y(t)) (for a ≤ t ≤ b) be a counterclockwise parametrization of 𝒞𝒞0 as in Figure 4.6. The boundary curve 𝒞𝒞 projects onto 𝒞𝒞0 so 𝒞𝒞 has parametrization

c(t) = (x(t), y(t), f(x(t), y(t)))

and thus

∮_𝒞 F₁(x, y, z) dx = ∫ₐᵇ F₁(x(t), y(t), f(x(t), y(t))) (dx/dt) dt

The integral on the right is precisely the integral we obtain by integrating 𝐹𝐹1�𝑥𝑥,𝑠𝑠,𝑓𝑓(𝑥𝑥,𝑠𝑠)�𝑑𝑑𝑥𝑥 over the curve 𝒞𝒞0 in the plane ℝ2. In other words,

∮_𝒞 F₁(x, y, z) dx = ∮_{𝒞₀} F₁(x, y, f(x, y)) dx

Applying Green's Theorem to the integral on the right,

∮_𝒞 F₁(x, y, z) dx = −∬_𝒟 (∂/∂y) F₁(x, y, f(x, y)) dA

By the Chain Rule,

(∂/∂y) F₁(x, y, f(x, y)) = F₁y(x, y, f(x, y)) + F₁z(x, y, f(x, y)) f_y(x, y)

So, we finally obtain

∮_𝒞 F₁ dx = −∬_𝒟 [F₁y(x, y, f(x, y)) + F₁z(x, y, f(x, y)) f_y(x, y)] dA   (113)

To finish the proof, we compute the surface integral of curl(F₁ i⃗) using the parametrization G(x, y) = (x, y, f(x, y)) of S, where n is the upward-pointing normal:

n = ⟨−f_x(x, y), −f_y(x, y), 1⟩

curl(F₁ i⃗) ∙ n = ⟨0, F₁z, −F₁y⟩ ∙ ⟨−f_x(x, y), −f_y(x, y), 1⟩ = −F₁z(x, y, f(x, y)) f_y(x, y) − F₁y(x, y, f(x, y))

∬_S curl(F₁ i⃗) ∙ dS = −∬_𝒟 [F₁y(x, y, f(x, y)) + F₁z(x, y, f(x, y)) f_y(x, y)] dA   (114)

The right-hand sides of Equation (113) and Equation (114) are equal. This proves Equation (112).

Example 4.3: Let F(x, y, z) = −y² i⃗ + x j⃗ + z² k⃗, and let 𝒞 be the curve of intersection of the plane y + z = 2 and the cylinder x² + y² = 1 (orient 𝒞 counterclockwise when viewed from above). Evaluate

∮_𝒞 F ∙ dr

Solution: We first compute curl F for F(x, y, z) = −y² i⃗ + x j⃗ + z² k⃗:

curl F = | i⃗      j⃗      k⃗   |
         | ∂/∂x   ∂/∂y   ∂/∂z |  = (1 + 2y) k⃗
         | −y²    x      z²   |

If we look at the figure above, there are many surfaces with boundary 𝒞𝒞. The most convenient choice, though, is the elliptical region S in the plane y + z = 2 that is bounded by 𝒞𝒞. If we orient S upward, 𝒞𝒞 has the induced positive orientation. The projection 𝒟𝒟 of S on the xy-plane is the disk x2 + y2 ≤ 1, so by using the equation z = 2 – y and applying the Stokes’ Theorem, we obtain:

∮_𝒞 F ∙ dr = ∬_S curl F ∙ dS = ∬_𝒟 (1 + 2y) dA

= ∫₀^{2π} ∫₀^1 (1 + 2r sin θ) r dr dθ

= ∫₀^{2π} [r²/2 + (2r³/3) sin θ]₀^1 dθ

= ∫₀^{2π} (1/2 + (2/3) sin θ) dθ

= (1/2)(2π) + 0 = π
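The circulation side of this example can be checked directly by parametrising 𝒞 as r(θ) = (cos θ, sin θ, 2 − sin θ) and integrating F ∙ r′(θ) numerically; a sketch (not part of the notes):

```python
import math

def F(x, y, z):
    return (-y**2, x, z**2)

# C: intersection of y + z = 2 with x^2 + y^2 = 1, counterclockwise from above
def r(t):
    return (math.cos(t), math.sin(t), 2 - math.sin(t))

def rprime(t):
    return (-math.sin(t), math.cos(t), -math.cos(t))

# Midpoint rule for the circulation integral of F around C
n = 20000
h = 2 * math.pi / n
total = 0.0
for i in range(n):
    t = (i + 0.5) * h
    Fx, Fy, Fz = F(*r(t))
    dx, dy, dz = rprime(t)
    total += (Fx * dx + Fy * dy + Fz * dz) * h

assert abs(total - math.pi) < 1e-6
```

Because the integrand is smooth and periodic, the midpoint rule converges very quickly here, so a tight tolerance is safe.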

4.4 Divergence Theorem

In Section 4.2, Green's Theorem was given; in its vector (flux) form it can be written as:

∮_C F ∙ n ds = ∬_𝒟 div F(x, y) dA

where 𝐶𝐶 is the positively oriented boundary curve of the plane region 𝒟𝒟. If we were seeking to extend this theorem to vector fields on ℝ3, we might make the guess that

∬_S F ∙ n dS = ∭_E div F(x, y, z) dV   (115)

where S is the boundary surface of the solid region E. Let E be a simple solid region, let S be the boundary surface of E with positive (outward) orientation, and let F be a vector field whose component functions have continuous partial derivatives on an open region that contains E. Then the Divergence Theorem can be written as:

∬_S F ∙ dS = ∭_E div F dV   (116)

Note that the Divergence Theorem is also often called Gauss' Theorem.

Example 4.4: Evaluate

∬_S F ∙ dS

where F(x, y, z) = xy i⃗ + (y² + e^{xz²}) j⃗ + sin(xy) k⃗ and S is the surface of the region E bounded by the parabolic cylinder z = 1 − x² and the planes z = 0, y = 0, y + z = 2.

Solution: It would be extremely difficult to evaluate the given surface integral directly: we would have to evaluate four surface integrals corresponding to the four pieces of S. Also, the divergence of F is much less complicated than F itself:

div F = (∂/∂x)(xy) + (∂/∂y)(y² + e^{xz²}) + (∂/∂z) sin(xy) = y + 2y = 3y

So we will use the Divergence Theorem to transform the given surface integral into a triple integral. The easiest way to evaluate the triple integral is to express E as a type 3 region:

E = {(x, y, z) | −1 ≤ x ≤ 1, 0 ≤ z ≤ 1 − x², 0 ≤ y ≤ 2 − z}

Then, using Equation (116), we have:

∬_S F ∙ dS = ∭_E div F dV = ∭_E 3y dV

= 3 ∫_{−1}^{1} ∫₀^{1−x²} ∫₀^{2−z} y dy dz dx

= 3 ∫_{−1}^{1} ∫₀^{1−x²} (2 − z)²/2 dz dx

= (3/2) ∫_{−1}^{1} [−(2 − z)³/3]₀^{1−x²} dx

= −(1/2) ∫_{−1}^{1} [(x² + 1)³ − 8] dx

= −∫₀^{1} (x⁶ + 3x⁴ + 3x² − 7) dx

= 184/35
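The value 184/35 can be confirmed with a numerical double integral after performing the inner y-integration analytically (∫₀^{2−z} 3y dy = 3(2 − z)²/2); a sketch (not part of the notes):

```python
# Midpoint sums over x and z for the triple integral of 3y over E,
# with the inner y-integral done analytically: 3 (2 - z)^2 / 2.
n = 400
total = 0.0
dx = 2.0 / n
for i in range(n):
    x = -1.0 + (i + 0.5) * dx
    zmax = 1.0 - x * x          # upper z-limit depends on x
    dz = zmax / n
    for j in range(n):
        z = (j + 0.5) * dz
        total += 1.5 * (2.0 - z)**2 * dz * dx

assert abs(total - 184.0 / 35.0) < 1e-3
```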


5. Ordinary Differential Equations

5.1 First-Order Linear Differential Equations

A first-order linear differential equation takes the form

dy/dx + P(x) y = Q(x)   (117)

where P and Q are continuous functions on a given interval. Let's take an easy example of a linear equation, xy′ + y = 2x for x ≠ 0. We can rewrite this equation as:

y′ + (1/x) y = 2   (118)

Using the Product Rule, we can rewrite the original equation as

xy′ + y = (xy)′

Now we can rewrite the above equation as

(xy)′ = 2x

If we integrate both sides, we get

xy = x² + C  or  y = x + C/x

We can solve every first-order linear differential equation in a similar fashion, by multiplying both sides of Equation (117) by a suitable function I(x) called an integrating factor. We try to find I so that the left side of Equation (117), when multiplied by I(x), becomes the derivative of the product I(x)y:

I(x)(y′ + P(x)y) = (I(x)y)′   (119)

If we can find such a function I, then Equation (117) becomes

(I(x)y)′ = I(x)Q(x)

Integrating both sides, we would have

I(x)y = ∫ I(x)Q(x) dx + C

So the solution would be

y(x) = (1/I(x)) [∫ I(x)Q(x) dx + C]   (120)

To find such an 𝐼𝐼, we expand Equation (119) and cancel terms

I(x)y′ + I(x)P(x)y = (I(x)y)′ = I′(x)y + I(x)y′
I(x)P(x) = I′(x)

This is a separable differential equation for 𝐼𝐼, which we solve as follows:

∫ dI/I = ∫ P(x) dx

ln|I| = ∫ P(x) dx

I = A e^{∫P(x)dx}

where A = ±e^c. Let's take A = 1, as we are looking for a particular integrating factor:

I(x) = e^{∫P(x)dx}   (121)

Therefore, to solve a linear differential equation y′ + P(x)y = Q(x), multiply both sides by the integrating factor I(x) = e^{∫P(x)dx} and integrate both sides.

Example 5.1: Find the solution of the initial-value problem

x²y′ + xy = 1 ,  x > 0 ,  y(1) = 2

Solution: We must first divide both sides by the coefficient of y′ to put the differential equation into standard form:

y′ + (1/x) y = 1/x² ,  x > 0   (122)

The integrating factor is

I(x) = e^{∫(1/x)dx} = e^{ln x} = x

Multiplication of Equation (122) by x gives

xy′ + y = 1/x  or  (xy)′ = 1/x

Then:

xy = ∫ (1/x) dx = ln x + C

y = (ln x + C)/x

Since y(1) = 2, we have

2 = (ln 1 + C)/1 = C

Therefore, the solution to the initial-value problem is

y = (ln x + 2)/x
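The solution can be verified numerically: a sketch (finite-difference derivative, not from the notes) checking y(1) = 2 and the residual of x²y′ + xy = 1 at a few sample points.

```python
import math

def y(x):
    # Claimed solution of x^2 y' + x y = 1 with y(1) = 2
    return (math.log(x) + 2) / x

assert abs(y(1.0) - 2.0) < 1e-12

# Residual of the ODE at sample points, with y' by central finite difference
eps = 1e-6
for x in [0.5, 1.0, 2.0, 5.0]:
    yprime = (y(x + eps) - y(x - eps)) / (2 * eps)
    assert abs(x**2 * yprime + x * y(x) - 1.0) < 1e-6
```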

5.2 Second-Order Linear Differential Equations

A second-order linear differential equation has the form

P(x) d²y/dx² + Q(x) dy/dx + R(x) y = G(x)   (123)

where P, Q, R and G are continuous functions. In this section, we will only cover the case where G(x) = 0 for all x in Equation (123). Such equations are called homogeneous linear differential equations. Hence, the form of the second-order linear homogeneous differential equation is

P(x) d²y/dx² + Q(x) dy/dx + R(x) y = 0   (124)

If 𝐺𝐺(𝑥𝑥) ≠ 0 for some 𝑥𝑥, Equation (123) is nonhomogeneous and will be dealt with in section 5.3. Two basic facts enable us to solve homogeneous linear differential equations.

A. If we know two solutions y₁ and y₂ of such an equation, then the linear combination y = c₁y₁(x) + c₂y₂(x) is also a solution. Therefore, if y₁(x) and y₂(x) are both solutions of the linear homogeneous equation and c₁ and c₂ are any constants, then the function in Equation (125) below is also a solution of Equation (124):

y(x) = c₁y₁(x) + c₂y₂(x)   (125)

Let's prove this. Since y₁ and y₂ are solutions of Equation (124), we have

P(x)y₁″ + Q(x)y₁′ + R(x)y₁ = 0

P(x)y₂″ + Q(x)y₂′ + R(x)y₂ = 0

Therefore, using the basic rules for differentiation, we have

P(x)y″ + Q(x)y′ + R(x)y
= P(x)(c₁y₁ + c₂y₂)″ + Q(x)(c₁y₁ + c₂y₂)′ + R(x)(c₁y₁ + c₂y₂)
= P(x)(c₁y₁″ + c₂y₂″) + Q(x)(c₁y₁′ + c₂y₂′) + R(x)(c₁y₁ + c₂y₂)
= c₁[P(x)y₁″ + Q(x)y₁′ + R(x)y₁] + c₂[P(x)y₂″ + Q(x)y₂′ + R(x)y₂]
= c₁(0) + c₂(0) = 0

Thus, y = c₁y₁ + c₂y₂ is a solution of Equation (124).

B. The second means of solving the equation says that the general solution is a linear combination of two linearly independent solutions y₁ and y₂. This means that neither y₁ nor y₂ is a constant multiple of the other. For instance, the functions f(x) = x² and g(x) = 5x² are linearly dependent, but f(x) = eˣ and g(x) = xeˣ are linearly independent. Therefore, if y₁ and y₂ are linearly independent solutions of Equation (124), and P(x) is never 0, then the general solution is given by:

y(x) = c₁y₁(x) + c₂y₂(x)   (126)

where c₁ and c₂ are arbitrary constants. In general, it is not easy to discover solutions to a second-order linear differential equation. But it is always possible to do so if the coefficients P, Q and R are constant functions, i.e., if the differential equation has the form

ay″ + by′ + cy = 0   (127)

where a, b and c are constants and a ≠ 0. We know that the exponential function y = e^{rx} (where r is a constant) has the property that its derivative is a constant multiple of itself, i.e., y′ = r e^{rx}. Furthermore, y″ = r² e^{rx}. If we substitute these expressions into Equation (127), we get:

a r² e^{rx} + b r e^{rx} + c e^{rx} = 0
(a r² + b r + c) e^{rx} = 0

But e^{rx} is never 0. Therefore, y = e^{rx} is a solution of Equation (127) if r is a root of the equation

a r² + b r + c = 0   (128)

Equation (128) is called the auxiliary equation (or characteristic equation) of the differential equation ay″ + by′ + cy = 0. Notice that it is an algebraic equation obtained from the differential equation by replacing y″ by r², y′ by r and y by 1. Sometimes the roots r₁ and r₂ of the auxiliary equation can be found by factoring; sometimes they are found by using the quadratic formula:

r₁ = (−b + √(b² − 4ac)) / (2a) ,  r₂ = (−b − √(b² − 4ac)) / (2a)   (129)

From Equation (129), let's look at the expression b² − 4ac.

Case A. If b² − 4ac > 0

In this case, the roots r₁ and r₂ of the auxiliary equation are real and distinct. If the roots r₁ and r₂ of the auxiliary equation ar² + br + c = 0 are real and unequal, then the general solution of ay″ + by′ + cy = 0 is

y = c₁ e^{r₁x} + c₂ e^{r₂x}   (130)

Case B. If b² − 4ac = 0

In this case, r₁ = r₂; that is, the roots of the auxiliary equation are real and equal. If the auxiliary equation ar² + br + c = 0 has only one real root r, then the general solution of ay″ + by′ + cy = 0 is

y = c₁ e^{rx} + c₂ x e^{rx}   (131)

Case C. If b² − 4ac < 0

In this case, the roots r₁ and r₂ of the auxiliary equation are complex numbers, and we can write

r₁ = α + iβ ,  r₂ = α − iβ

where α and β are real numbers. In fact, we can write:

α = −b/(2a) ,  β = √(4ac − b²)/(2a)

Then, using Euler’s equation

e^{iθ} = cos θ + i sin θ

So, we can write the solution of the differential equation as

y = C₁ e^{r₁x} + C₂ e^{r₂x} = C₁ e^{(α+iβ)x} + C₂ e^{(α−iβ)x}
= C₁ e^{αx}(cos βx + i sin βx) + C₂ e^{αx}(cos βx − i sin βx)
= e^{αx}[(C₁ + C₂) cos βx + i(C₁ − C₂) sin βx]

Page 77: AEM ADV03 Introductory Mathematics€¦ ·

77

$$= e^{\alpha x}[c_1 \cos\beta x + c_2 \sin\beta x]$$

where $c_1 = C_1 + C_2$ and $c_2 = i(C_1 - C_2)$. This gives all solutions (real and complex) of the differential equation. The solution is real when the constants $c_1$ and $c_2$ are real. Therefore, if the roots of the auxiliary equation $ar^2 + br + c = 0$ are the complex numbers $r_1 = \alpha + i\beta$, $r_2 = \alpha - i\beta$, then the general solution of $ay'' + by' + cy = 0$ is

$$y = e^{\alpha x}(c_1 \cos\beta x + c_2 \sin\beta x) \tag{132}$$
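The three cases above are easy to mechanise. The following short Python sketch (the function name and return convention are our own, not part of the notes) classifies the auxiliary equation by its discriminant and returns the corresponding root data:

```python
import math

def classify_and_roots(a, b, c):
    """Classify the auxiliary equation a*r^2 + b*r + c = 0 by its
    discriminant b^2 - 4ac and return the data needed for Cases A-C."""
    disc = b * b - 4 * a * c
    if disc > 0:                      # Case A: real, distinct roots
        sq = math.sqrt(disc)
        return "real-distinct", ((-b + sq) / (2 * a), (-b - sq) / (2 * a))
    if disc == 0:                     # Case B: one repeated real root
        return "real-repeated", (-b / (2 * a),)
    # Case C: complex roots alpha +/- i*beta
    alpha = -b / (2 * a)
    beta = math.sqrt(-disc) / (2 * a)
    return "complex", (alpha, beta)
```

For instance, $y'' + y' - 6y = 0$ gives the real roots $2$ and $-3$ (Case A), while $y'' + y = 0$ gives $\alpha = 0$, $\beta = 1$ (Case C).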

5.3 Initial-Value and Boundary-Value Problems

An initial-value problem for the second-order Equation (124) or Equation (125) consists of finding a solution $y$ of the differential equation that also satisfies initial conditions of the form

$$y(x_0) = y_0, \qquad y'(x_0) = y_1$$

where $y_0$ and $y_1$ are given constants. If $P$, $Q$, $R$ and $G$ are continuous on an interval and $P(x) \neq 0$ there, then the existence and uniqueness of a solution to this initial-value problem is guaranteed.

Example 5.2: Solve the initial-value problem

$$y'' + y' - 6y = 0, \qquad y(0) = 1, \qquad y'(0) = 0$$

Solution: The auxiliary equation is

$$r^2 + r - 6 = (r - 2)(r + 3) = 0$$

Therefore the roots are $r = 2$ and $r = -3$, so the general solution (given by Equation (130)) is

$$y(x) = c_1 e^{2x} + c_2 e^{-3x}$$

Differentiating this equation, we get

$$y'(x) = 2c_1 e^{2x} - 3c_2 e^{-3x}$$

To satisfy the initial conditions, we require that

$$y(0) = c_1 + c_2 = 1$$


$$y'(0) = 2c_1 - 3c_2 = 0$$

Solving for $c_1$ and $c_2$, we get

$$c_1 = \frac{3}{5}, \qquad c_2 = \frac{2}{5}$$

Substituting these values, the solution of the initial-value problem is

$$y(x) = \frac{3}{5}e^{2x} + \frac{2}{5}e^{-3x}$$
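As a quick numerical sanity check (our own illustration, not part of the notes), the closed-form solution of Example 5.2 can be verified with finite differences:

```python
import math

def y(x):
    """Solution of y'' + y' - 6y = 0, y(0) = 1, y'(0) = 0 (Example 5.2)."""
    return (3 / 5) * math.exp(2 * x) + (2 / 5) * math.exp(-3 * x)

def residual(x, h=1e-4):
    """Central-difference estimate of y'' + y' - 6y at x; should be near zero."""
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    return d2 + d1 - 6 * y(x)
```

Evaluating, $y(0) = 1$ exactly, the derivative estimate at $0$ is near zero, and the residual of the differential equation stays near zero across the interval.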

Example 5.3: Solve the initial-value problem

$$y'' + y = 0, \qquad y(0) = 2, \qquad y'(0) = 3$$

Solution: The auxiliary equation here is $r^2 + 1 = 0$, or $r^2 = -1$, whose roots are $\pm i$. Thus $\alpha = 0$, $\beta = 1$, and since $e^{0x} = 1$, the general solution is

$$y(x) = c_1 \cos x + c_2 \sin x \tag{133}$$

Differentiating Equation (133), we get

$$y'(x) = -c_1 \sin x + c_2 \cos x$$

The initial conditions become

$$y(0) = c_1 = 2, \qquad y'(0) = c_2 = 3$$

Therefore, the solution of the initial-value problem is

$$y(x) = 2\cos x + 3\sin x$$

A boundary-value problem, in contrast, consists of finding a solution $y$ of the differential equation that also satisfies boundary conditions of the form

$$y(x_0) = y_0, \qquad y(x_1) = y_1$$


In contrast with the situation for initial-value problems, a boundary-value problem does not always have a solution.

Example 5.4: Solve the boundary-value problem

$$y'' + 2y' + y = 0, \qquad y(0) = 1, \qquad y(1) = 3$$

Solution: The auxiliary equation is

$$r^2 + 2r + 1 = 0 \quad \text{or} \quad (r + 1)^2 = 0$$

whose only root is $r = -1$. Therefore, the general solution is

$$y(x) = c_1 e^{-x} + c_2 x e^{-x}$$

The boundary conditions are satisfied if

$$y(0) = c_1 = 1$$

$$y(1) = c_1 e^{-1} + c_2 e^{-1} = 3$$

The first condition gives $c_1 = 1$, so the second condition becomes

$$e^{-1} + c_2 e^{-1} = 3$$

Solving this equation for $c_2$ by first multiplying through by $e$, we get

$$1 + c_2 = 3e, \quad \text{so} \quad c_2 = 3e - 1$$

Thus, the solution of the boundary-value problem is

$$y(x) = e^{-x} + (3e - 1)x e^{-x}$$
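A similar check (ours, not the text's) confirms that this function satisfies both boundary conditions of Example 5.4:

```python
import math

def y(x):
    """Solution of the boundary-value problem y'' + 2y' + y = 0,
    y(0) = 1, y(1) = 3 (Example 5.4)."""
    return math.exp(-x) + (3 * math.e - 1) * x * math.exp(-x)
```

Note that $y(1)$ collapses to $e^{-1}(1 + 3e - 1) = 3$, as required.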


Summary: Solutions of $ay'' + by' + cy = 0$ are as follows, according to the roots of $ar^2 + br + c = 0$:

• $r_1$, $r_2$ real and distinct: $y = c_1 e^{r_1 x} + c_2 e^{r_2 x}$
• $r_1 = r_2 = r$: $y = c_1 e^{rx} + c_2 x e^{rx}$
• $r_1$, $r_2$ complex, $\alpha \pm i\beta$: $y = e^{\alpha x}(c_1 \cos\beta x + c_2 \sin\beta x)$

5.4 Non-homogeneous linear differential equations

Recall that the second-order nonhomogeneous linear differential equation with constant coefficients has the form

$$ay'' + by' + cy = G(x) \tag{134}$$

where $a$, $b$ and $c$ are constants and $G$ is a continuous function. The related homogeneous equation (Equation (127)) is also called the complementary equation and is important in solving the nonhomogeneous equation. The general solution of the nonhomogeneous differential equation (Equation (134)) can be written as

$$y(x) = y_p(x) + y_c(x) \tag{135}$$

where $y_p$ is a particular solution of Equation (134) and $y_c$ is the general solution of the complementary Equation (127).

Example 5.5: Solve the equation $y'' + y' - 2y = x^2$

Solution: The auxiliary equation for $y'' + y' - 2y = 0$ is

$$r^2 + r - 2 = (r - 1)(r + 2) = 0$$

with roots $r = 1$ and $r = -2$. So the solution of the complementary equation is


$$y_c = c_1 e^x + c_2 e^{-2x}$$

Since $G(x) = x^2$ is a polynomial of degree 2, we seek a particular solution of the form

$$y_p(x) = Ax^2 + Bx + C$$

Then

$$y_p' = 2Ax + B, \qquad y_p'' = 2A$$

Substituting these into the given differential equation, we get

$$(2A) + (2Ax + B) - 2(Ax^2 + Bx + C) = x^2$$

$$-2Ax^2 + (2A - 2B)x + (2A + B - 2C) = x^2$$

Polynomials are equal when their coefficients are equal. Thus

$$-2A = 1, \qquad 2A - 2B = 0, \qquad 2A + B - 2C = 0$$

The solution of this system of equations is

$$A = -\frac{1}{2}, \qquad B = -\frac{1}{2}, \qquad C = -\frac{3}{4}$$

A particular solution is therefore

$$y_p(x) = -\frac{1}{2}x^2 - \frac{1}{2}x - \frac{3}{4}$$

and the general solution, according to Equation (135), is

$$y = y_p + y_c = c_1 e^x + c_2 e^{-2x} - \frac{1}{2}x^2 - \frac{1}{2}x - \frac{3}{4}$$
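The coefficient matching above can be verified directly. A small sketch (our own illustration) checks that $y_p$ reproduces the right-hand side $x^2$ of Example 5.5:

```python
def residual(x):
    """Evaluate (y_p'' + y_p' - 2*y_p) - x^2 for y_p = -x^2/2 - x/2 - 3/4.
    The derivatives are written out by hand; the result should be zero."""
    yp = -0.5 * x * x - 0.5 * x - 0.75
    yp1 = -x - 0.5   # y_p'
    yp2 = -1.0       # y_p''
    return (yp2 + yp1 - 2 * yp) - x * x
```

The residual vanishes identically, confirming that $A = -1/2$, $B = -1/2$, $C = -3/4$ is correct.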


Example 5.6: Solve $y'' + 4y = e^{3x}$

Solution: The auxiliary equation is $r^2 + 4 = 0$ with roots $\pm 2i$, so the solution of the complementary equation is

$$y_c = c_1 \cos 2x + c_2 \sin 2x$$

For a particular solution, we try $y_p(x) = Ae^{3x}$. Then $y_p'(x) = 3Ae^{3x}$ and $y_p''(x) = 9Ae^{3x}$. Substituting into the differential equation, we have

$$9Ae^{3x} + 4(Ae^{3x}) = e^{3x}$$

So $13Ae^{3x} = e^{3x}$, and

$$A = \frac{1}{13}$$

Therefore,

$$y_p(x) = \frac{1}{13}e^{3x}$$

and the general solution is

$$y(x) = c_1 \cos 2x + c_2 \sin 2x + \frac{1}{13}e^{3x}$$


6. Partial Differential Equations

6.1 Introduction to Differential Equations

Although we introduced ordinary differential equations in Chapter 5, let's recap and go into a bit more detail about differential equations in general. A differential equation is an equation that relates the derivatives of a (scalar) function depending on one or more variables. For example,

$$\frac{d^4 u}{dx^4} + \frac{d^2 u}{dx^2} + u^3 = \cos x \tag{136}$$

is a differential equation for a function $u(x)$ depending on a single variable $x$, while

$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} - u \tag{137}$$

is a differential equation involving a function $u(t, x, y)$ of three variables.

A differential equation is called ordinary if the function $u$ depends on only a single variable, and partial if it depends on more than one variable. The order of a differential equation is that of the highest-order derivative that appears in the equation. Thus, Equation (136) is a fourth-order ordinary differential equation (ODE), while Equation (137) is a second-order partial differential equation (PDE).

There are two common notations for partial derivatives, and we shall use them interchangeably. The first, used in Equation (136) and Equation (137), is the familiar Leibniz notation, which employs a $d$ to denote ordinary derivatives of a function of a single variable and the $\partial$ symbol (usually pronounced "dee") for partial derivatives of functions of more than one variable. An alternative, more compact notation employs subscripts to indicate partial derivatives. For example, $u_t$ represents $\partial u/\partial t$, $u_{xx}$ is used for $\partial^2 u/\partial x^2$, and $u_{xxy}$ for $\partial^3 u/\partial x^2 \partial y$. Thus, in subscript notation, the partial differential Equation (137) is written as

$$u_t = u_{xx} + u_{yy} - u \tag{138}$$

6.2 Initial Conditions and Boundary Conditions

How many solutions does a partial differential equation have? In general, lots! The solutions to dynamical ordinary differential equations are singled out by the imposition of initial conditions, resulting in an initial value problem. On the other hand, equations modelling equilibrium phenomena require boundary conditions to specify their solutions uniquely, resulting in a boundary-value problem.

For partial differential equations modelling dynamical processes, the number of initial conditions required depends on the highest-order time derivative that appears in the equation. On bounded domains, one must also impose suitable boundary conditions in order to uniquely characterise the solution and hence the subsequent dynamical behaviour of the physical system. The combination of the partial differential equation, the initial conditions, and the boundary conditions leads to an initial-boundary value problem. We will encounter and solve many important examples of such problems throughout this section.

6.3 Linear and Nonlinear Equations

Linearity means that all instances of the unknown and its derivatives enter the equation linearly. We can use the concept of a linear differential operator $\mathcal{L}$. Such an operator is assembled by summing the basic partial derivative operators, with either constant coefficients or, more generally, coefficients depending on the independent variables. A linear differential equation has the form:

$$\mathcal{L}[u] = 0 \tag{139}$$

For example, if $\mathcal{L} = \frac{\partial^2}{\partial x^2} + 1$, then $\mathcal{L}[u] = u_{xx} + u$.

The operator $\mathcal{L}$ is called linear if

$$\mathcal{L}(u + v) = \mathcal{L}u + \mathcal{L}v \quad \text{and} \quad \mathcal{L}(cu) = c\mathcal{L}u \tag{140}$$

for any functions $u$, $v$ and any constant $c$.

Example 6.1: Is the heat equation $u_t - u_{xx} = 0$ linear or non-linear?

Solution:

$$\mathcal{L}(u + v) = (u + v)_t - (u + v)_{xx} = u_t + v_t - u_{xx} - v_{xx} = (u_t - u_{xx}) + (v_t - v_{xx}) = \mathcal{L}u + \mathcal{L}v$$

and

$$\mathcal{L}(cu) = (cu)_t - (cu)_{xx} = cu_t - cu_{xx} = c(u_t - u_{xx}) = c\mathcal{L}u$$

Therefore, the heat equation is a linear equation, since it is given by a linear operator.


Example 6.2: Is Burgers' equation $u_t + uu_x = 0$ linear or non-linear?

Solution:

$$\mathcal{L}(u + v) = (u + v)_t + (u + v)(u + v)_x = u_t + v_t + (u + v)(u_x + v_x)$$

$$= (u_t + uu_x) + (v_t + vv_x) + uv_x + vu_x \neq \mathcal{L}u + \mathcal{L}v$$

Therefore, Burgers' equation is a non-linear differential equation.

Equation (139) is also called a homogeneous linear PDE, while Equation (141) below

$$\mathcal{L}[u] = f(x, y) \tag{141}$$

is called an inhomogeneous linear equation. If $u_h$ is a solution to the homogeneous Equation (139), and $u_p$ is a particular solution to the inhomogeneous Equation (141), then $u_h + u_p$ is also a solution to the inhomogeneous Equation (141). Indeed,

$$\mathcal{L}(u_h + u_p) = \mathcal{L}u_h + \mathcal{L}u_p = 0 + f = f$$

Therefore, in order to find the general solution to the inhomogeneous Equation (141), it is enough to find the general solution of the homogeneous Equation (139) and add to it a particular solution of the inhomogeneous equation (check that the difference of any two solutions of the inhomogeneous equation is a solution of the homogeneous equation). In this sense there is a similarity between ODEs and PDEs, since this principle relies only on the linearity of the operator $\mathcal{L}$. Notice that where the solution of an ODE contains arbitrary constants, the solution of a PDE contains arbitrary functions.

The potential degree of non-linearity embedded in a PDE of first order leads to the following distinctions:

• Linear, constant coefficient: $a$, $b$, $c$ are constant functions
• Linear: $a$, $b$, $c$ are functions of $x$ and $y$ only
• Semi-linear: $a$, $b$ are functions of $x$ and $y$; $c$ may depend on $u$
• Quasi-linear: $a$, $b$, $c$ are functions of $x$, $y$ and $u$
• Non-linear: the derivatives carry exponents, e.g. $(u_x)^2$, or derivative cross-terms exist, e.g. $u_x u_y$


Let's assume a first-order PDE in the form

$$a(x, y)\frac{\partial u(x, y)}{\partial x} + b(x, y)\frac{\partial u(x, y)}{\partial y} = c(x, y, u(x, y)) \tag{142}$$

Equation (142) represents a semi-linear PDE, because it permits mild non-linearities in the source term $c(x, y, u(x, y))$.

6.4 Examples of PDEs

Some examples of PDEs of physical significance are listed below:

$u_x + u_y = 0$  Transport equation (143)
$u_t + uu_x - \nu u_{xx} = 0$  Viscous Burgers' equation (144)
$u_t + uu_x = 0$  Inviscid Burgers' equation (145)
$u_{xx} + u_{yy} = 0$  Laplace's equation (146)
$u_{tt} - u_{xx} = 0$  Wave equation (147)
$u_t - u_{xx} = 0$  Heat equation (148)
$u_t + uu_x + u_{xxx} = 0$  Korteweg-de Vries equation (149)

6.5 Three Types of Second-Order PDEs

The classification theory of real linear second-order PDEs for a scalar-valued function $u(t, x)$ depending on two variables proceeds as follows. The most general such equation has the form

$$\mathcal{L}[u] = Au_{tt} + Bu_{tx} + Cu_{xx} + Du_t + Eu_x + Fu = G \tag{150}$$

where the coefficients $A, B, C, D, E, F$ are all allowed to be functions of $(t, x)$, as is the inhomogeneity or forcing function $G(t, x)$. The equation is homogeneous if and only if $G \equiv 0$. We assume that at least one of the leading coefficients $A, B, C$ is not identically zero, since otherwise the equation degenerates to a first-order equation. The key quantity that determines the type of such a PDE is its discriminant:


$$\Delta = B^2 - 4AC \tag{151}$$

This should (and for good reason) remind you of the discriminant of the quadratic equation

$$Q(x, y) = Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 \tag{152}$$

Therefore, at a point $(t, x)$, the linear second-order PDE Equation (150) is called:

i. Hyperbolic, if $\Delta(t, x) > 0$
ii. Parabolic, if $\Delta(t, x) = 0$ but $A^2 + B^2 + C^2 \neq 0$
iii. Elliptic, if $\Delta(t, x) < 0$

In particular:

• The wave equation (Equation (147)) $u_{tt} - u_{xx} = 0$ has discriminant $\Delta = 4$ and is hyperbolic.
• The heat equation (Equation (148)) $u_{xx} - u_t = 0$ has discriminant $\Delta = 0$ and is parabolic.
• The Laplace equation (Equation (146)) $u_{xx} + u_{yy} = 0$ has discriminant $\Delta = -4$ and is elliptic.

Example 6.3: The Tricomi equation from the theory of supersonic aerodynamics is written as

$$x\frac{\partial^2 u}{\partial t^2} - \frac{\partial^2 u}{\partial x^2} = 0$$

Comparing this equation to Equation (150), we find that $A = x$, $B = 0$, $C = -1$, while $D = E = F = G = 0$. The discriminant in this particular case is

$$\Delta = B^2 - 4AC = 4x$$

Hence the equation is hyperbolic when $x > 0$, elliptic when $x < 0$, and parabolic on the transition line $x = 0$. In this physical model, the hyperbolic region corresponds to supersonic flow, while the subsonic region is of elliptic type. The transitional parabolic boundary represents the shock line between the sub- and supersonic regions: the familiar sonic boom as an airplane crosses the sound barrier.
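The classification rule is mechanical, so it is easy to express in code. This helper (names are our own) mirrors the discriminant test of Equation (151):

```python
def classify_pde(A, B, C):
    """Classify A*u_tt + B*u_tx + C*u_xx + (lower-order terms) = G
    at a point by the sign of the discriminant B^2 - 4*A*C."""
    delta = B * B - 4 * A * C
    if delta > 0:
        return "hyperbolic"
    if delta < 0:
        return "elliptic"
    if A != 0 or B != 0 or C != 0:
        return "parabolic"
    return "degenerate"   # all leading coefficients vanish: first-order
```

For the Tricomi equation, `classify_pde(x, 0, -1)` returns "hyperbolic" for $x > 0$ and "elliptic" for $x < 0$.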

6.6 Solving PDEs Using the Separation of Variables Method

The separation of variables method is used for solving key PDEs in their two-independent-variable incarnations. For the wave and heat equations (Equations (147) and (148), respectively), the variables are time, $t$, and a single space coordinate, $x$, leading to initial-boundary value problems modelling the dynamic behaviour of a one-dimensional medium. For the Laplace equation (Equation (146)), the variables represent space coordinates, $x$ and $y$, and the associated boundary value problems model the equilibrium configuration of a planar body, e.g. the deformation of a membrane.

In order to use the separation of variables method, we must be working with a linear homogeneous PDE with linear homogeneous boundary conditions. The method relies upon the assumption that a function of the form

$$u(x, t) = \varphi(x)G(t) \tag{153}$$

will be a solution to a linear homogeneous PDE in $x$ and $t$. This is called a product solution and, provided the boundary conditions are also linear and homogeneous, it will satisfy the boundary conditions as well.

6.6.1 The Heat Equation

Let's start with the one-dimensional heat equation:

$$\frac{\partial u}{\partial t} = k\frac{\partial^2 u}{\partial x^2} \tag{154}$$

Let the initial and boundary conditions be

$$u(x, 0) = f(x), \qquad u(0, t) = 0, \qquad u(L, t) = 0$$

So we have the heat equation with fixed (and homogeneous) boundary conditions and an initial condition. The separation of variables method tells us to assume that the solution will take the form of the product (Equation (153)),

$$u(x, t) = \varphi(x)G(t)$$

Substituting Equation (153) into Equation (154), we obtain

$$\frac{\partial}{\partial t}(\varphi(x)G(t)) = k\frac{\partial^2}{\partial x^2}(\varphi(x)G(t))$$

$$\varphi(x)\frac{dG}{dt} = kG(t)\frac{d^2\varphi}{dx^2}$$


Here we can factor $\varphi(x)$ out of the time derivative and, similarly, factor $G(t)$ out of the spatial derivative. Note that after factoring these out, we no longer have partial derivatives in the problem: in the time derivative we are only differentiating $G(t)$ with respect to $t$, so this is now an ordinary derivative; likewise, in the spatial derivative we are only differentiating $\varphi(x)$ with respect to $x$, so again we have an ordinary derivative.

Now, to solve the equation, we want to get all the $t$'s on one side of the equation and all the $x$'s on the other side. In other words, we want to "separate the variables". In this case, we can just divide both sides by $\varphi(x)G(t)$, but this is not always possible. Dividing gives us:

$$\frac{1}{G}\frac{dG}{dt} = k\frac{1}{\varphi}\frac{d^2\varphi}{dx^2} \implies \frac{1}{kG}\frac{dG}{dt} = \frac{1}{\varphi}\frac{d^2\varphi}{dx^2}$$

Let's pause here for a moment. How can a function of $t$ only be equal to a function of $x$ only, regardless of the choice of $t$ and/or $x$? There is only one way this can be true: both functions (i.e. both sides of the equation) must be equal to the same constant. So we must have

$$\frac{1}{kG}\frac{dG}{dt} = \frac{1}{\varphi}\frac{d^2\varphi}{dx^2} = -\lambda \tag{155}$$

where $-\lambda$ is called the separation constant and is arbitrary. The next step is to acknowledge that we can take Equation (155) and split it into the following two ordinary differential equations:

$$\frac{dG}{dt} = -k\lambda G, \qquad \frac{d^2\varphi}{dx^2} = -\lambda\varphi$$

Both of these are very simple differential equations. However, since we do not know what $\lambda$ is, we cannot solve them yet. The last step in the process is to make sure our product solution (Equation (153)) satisfies the boundary conditions, so let's substitute it into both of them:

$$u(0, t) = \varphi(0)G(t) = 0, \qquad u(L, t) = \varphi(L)G(t) = 0$$

Consider the first one. We have two options: either $\varphi(0) = 0$, or $G(t) = 0$ for every $t$. However, if $G(t) = 0$ for every $t$, then we also have $u(x, t) = 0$, the trivial solution. Instead, let's assume that we must have


$\varphi(0) = 0$. Likewise, from the second boundary condition we get $\varphi(L) = 0$ to avoid having a trivial solution.

Now let's try to solve the problem. Note the general solution of the spatial differential equation in each case for $\lambda$:

• Case (i): $\lambda > 0$: $\varphi(x) = c_1\cos(\sqrt{\lambda}x) + c_2\sin(\sqrt{\lambda}x)$
• Case (ii): $\lambda = 0$: $\varphi(x) = a + bx$, $G(t) = c$
• Case (iii): $\lambda < 0$: as we will see, this case gives only the trivial solution satisfying the PDE and boundary conditions.

Let's look at case (i), $\lambda > 0$. We now know that the solution to the differential equation is

$$\varphi(x) = c_1\cos(\sqrt{\lambda}x) + c_2\sin(\sqrt{\lambda}x)$$

Applying the first boundary condition gives:

$$0 = \varphi(0) = c_1$$

Now, applying the second boundary condition and using the above result gives:

$$0 = \varphi(L) = c_2\sin(L\sqrt{\lambda})$$

We are after non-trivial solutions, and therefore we must have:

$$\sin(L\sqrt{\lambda}) = 0 \implies L\sqrt{\lambda} = n\pi, \qquad n = 1, 2, 3, \ldots$$

The positive eigenvalues and their corresponding eigenfunctions of this boundary value problem are:

$$\lambda_n = \left(\frac{n\pi}{L}\right)^2, \qquad \varphi_n(x) = \sin\left(\frac{n\pi x}{L}\right), \qquad n = 1, 2, 3, \ldots$$

Let's look at case (ii), $\lambda = 0$. The solution to the differential equation is:


$$\varphi(x) = c_1 + c_2 x$$

Applying the boundary conditions, we get

$$0 = \varphi(0) = c_1, \qquad 0 = \varphi(L) = c_2 L \implies c_2 = 0$$

So in this case the only solution is the trivial one, and $\lambda = 0$ is not an eigenvalue for this boundary value problem.

Let's look at case (iii), $\lambda < 0$. Here the solution to the differential equation is

$$\varphi(x) = c_1\cosh(\sqrt{-\lambda}x) + c_2\sinh(\sqrt{-\lambda}x)$$

Applying the first boundary condition gives:

$$0 = \varphi(0) = c_1$$

Now, applying the second boundary condition gives:

$$0 = \varphi(L) = c_2\sinh(L\sqrt{-\lambda})$$

Since we are assuming $\lambda < 0$, we have $L\sqrt{-\lambda} \neq 0$, and this means that $\sinh(L\sqrt{-\lambda}) \neq 0$. Therefore we must have $c_2 = 0$, and again we get only the trivial solution in this case. Hence there are no negative eigenvalues for this boundary value problem, and the complete list of eigenvalues and eigenfunctions for this problem is:

$$\lambda_n = \left(\frac{n\pi}{L}\right)^2, \qquad \varphi_n(x) = \sin\left(\frac{n\pi x}{L}\right), \qquad n = 1, 2, 3, \ldots$$

Now, let's solve the time differential equation,

$$\frac{dG}{dt} = -k\lambda_n G$$


This is a simple linear first-order differential equation, and its solution is:

$$G(t) = ce^{-k\lambda_n t} = ce^{-k\left(\frac{n\pi}{L}\right)^2 t}$$

Now that we have solved both ordinary differential equations, we can finally write down a solution. The product solution is

$$u_n(x, t) = B_n\sin\left(\frac{n\pi x}{L}\right)e^{-k\left(\frac{n\pi}{L}\right)^2 t}, \qquad n = 1, 2, 3, \ldots$$

Please note that we have denoted the product solution $u_n$ to acknowledge that each value of $n$ results in a different solution. Also note that we have changed $c$ to $B_n$ to denote that it may also differ for each value of $n$.

Example 6.4: Solve the initial-boundary value problem

$$u_t = u_{xx}, \qquad 0 < x < 2, \quad t > 0$$

$$u(x, 0) = x^2 - x + 1, \qquad 0 \le x \le 2$$

$$u(0, t) = 1, \quad u(2, t) = 3, \qquad t > 0$$

and find $\lim_{t\to+\infty} u(x, t)$.

Solution: First, we need to obtain a function $v$ that satisfies $v_t = v_{xx}$ and takes zero boundary conditions. So let

$$v(x, t) = u(x, t) + (ax + b) \tag{156}$$

where $a$ and $b$ are constants to be determined. Then,

$$v_t = u_t, \qquad v_{xx} = u_{xx}$$

Thus,

$$v_t = v_{xx}$$


We need Equation (156) to take zero boundary conditions for $v(0, t)$ and $v(2, t)$:

$$v(0, t) = 0 = u(0, t) + b = 1 + b \implies b = -1$$

$$v(2, t) = 0 = u(2, t) + 2a + b = 2a + 2 \implies a = -1$$

Therefore, Equation (156) becomes

$$v(x, t) = u(x, t) - x - 1 \tag{157}$$

The new problem is now

$$v_t = v_{xx}$$

$$v(x, 0) = (x^2 - x + 1) - x - 1 = x^2 - 2x$$

$$v(0, t) = v(2, t) = 0$$

Let's solve the problem for $v$ using the separation of variables method. Let

$$v(x, t) = \varphi(x)G(t)$$

which gives (Equation (155)):

$$\frac{1}{G}\frac{dG}{dt} = \frac{1}{\varphi}\frac{d^2\varphi}{dx^2} = -\lambda$$

From

$$\frac{d^2\varphi}{dx^2} + \lambda\varphi = 0,$$

we get

$$\varphi_n(x) = a_n\cos(\sqrt{\lambda}x) + b_n\sin(\sqrt{\lambda}x)$$

Using the boundary conditions, we have

$$v(0, t) = \varphi(0)G(t) = 0, \qquad v(2, t) = \varphi(2)G(t) = 0$$


Therefore $\varphi(0) = \varphi(2) = 0$. Hence,

$$\varphi_n(0) = a_n = 0, \qquad \varphi_n(x) = b_n\sin(\sqrt{\lambda}x)$$

$$\varphi_n(2) = b_n\sin(2\sqrt{\lambda}) = 0 \implies 2\sqrt{\lambda} = n\pi \implies \lambda_n = \left(\frac{n\pi}{2}\right)^2$$

Therefore,

$$\varphi_n(x) = b_n\sin\frac{n\pi x}{2}, \qquad \lambda_n = \left(\frac{n\pi}{2}\right)^2$$

With these values of $\lambda_n$, we solve

$$\frac{dG}{dt} + \lambda G = 0$$

which can be written as

$$\frac{dG}{dt} + \left(\frac{n\pi}{2}\right)^2 G = 0$$

and we get:

$$G_n(t) = c_n e^{-\left(\frac{n\pi}{2}\right)^2 t}$$

Therefore,

$$v(x, t) = \sum_{n=1}^{\infty}\varphi_n(x)G_n(t) = \sum_{n=1}^{\infty}\tilde{c}_n e^{-\left(\frac{n\pi}{2}\right)^2 t}\sin\frac{n\pi x}{2}$$

The coefficients $\tilde{c}_n$ are obtained using the initial condition:

$$v(x, 0) = \sum_{n=1}^{\infty}\tilde{c}_n\sin\frac{n\pi x}{2} = x^2 - 2x$$


$$\tilde{c}_n = \int_0^2 (x^2 - 2x)\sin\frac{n\pi x}{2}\,dx = \begin{cases} 0 & n \text{ even} \\[4pt] \dfrac{-32}{(n\pi)^3} & n \text{ odd} \end{cases}$$

Therefore,

$$v(x, t) = \sum_{n \text{ odd}}\frac{-32}{(n\pi)^3}e^{-\left(\frac{n\pi}{2}\right)^2 t}\sin\frac{n\pi x}{2}$$

We now use Equation (157) to convert back to the function $u$:

$$u(x, t) = v(x, t) + x + 1$$

$$u(x, t) = \sum_{n \text{ odd}}\frac{-32}{(n\pi)^3}e^{-\left(\frac{n\pi}{2}\right)^2 t}\sin\frac{n\pi x}{2} + x + 1$$

And finally, since every exponential factor decays to zero,

$$\lim_{t\to+\infty} u(x, t) = x + 1$$
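A truncated version of this series (an illustrative sketch, with the truncation level chosen by us) shows both the initial condition and the long-time limit numerically:

```python
import math

def u(x, t, n_max=799):
    """Partial sum of the Example 6.4 solution; only odd n contribute."""
    s = 0.0
    for n in range(1, n_max + 1, 2):
        s += (-32 / (n * math.pi) ** 3
              * math.exp(-(n * math.pi / 2) ** 2 * t)
              * math.sin(n * math.pi * x / 2))
    return s + x + 1
```

At $t = 0$ the sum reproduces $x^2 - x + 1$, and for large $t$ it settles to the steady state $x + 1$.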

6.6.2 The Wave Equation

Let's start with a wave equation as follows:

$$\frac{\partial^2 u}{\partial t^2} = c^2\frac{\partial^2 u}{\partial x^2} \tag{158}$$

The initial and boundary conditions are as follows:

$$u(x, 0) = f(x), \qquad \frac{\partial u}{\partial t}(x, 0) = g(x)$$

$$u(0, t) = 0, \qquad u(L, t) = 0$$

One of the main differences is that we now have two initial conditions. So let's start with the product solution:


$$u(x, t) = \varphi(x)h(t)$$

Substituting into the two boundary conditions gives:

$$\varphi(0) = 0, \qquad \varphi(L) = 0$$

Substituting the product solution into the differential equation (Equation (158)), separating, and introducing a separation constant gives:

$$\frac{\partial^2}{\partial t^2}(\varphi(x)h(t)) = c^2\frac{\partial^2}{\partial x^2}(\varphi(x)h(t))$$

$$\varphi(x)\frac{d^2h}{dt^2} = c^2h(t)\frac{d^2\varphi}{dx^2}$$

$$\frac{1}{c^2h}\frac{d^2h}{dt^2} = \frac{1}{\varphi}\frac{d^2\varphi}{dx^2} = -\lambda$$

We moved the $c^2$ to the left side for convenience and chose $-\lambda$ for the separation constant so the differential equation for $\varphi$ would match a known (and solved) case. The two ordinary differential equations we get from the separation of variables method are:

$$\frac{d^2h}{dt^2} + c^2\lambda h = 0$$

$$\frac{d^2\varphi}{dx^2} + \lambda\varphi = 0, \qquad \varphi(0) = 0, \quad \varphi(L) = 0$$

We have solved this boundary value problem in the heat-equation example of section 6.6.1, so the eigenvalues and eigenfunctions for this problem are:

$$\lambda_n = \left(\frac{n\pi}{L}\right)^2, \qquad \varphi_n(x) = \sin\left(\frac{n\pi x}{L}\right), \qquad n = 1, 2, 3, \ldots$$

The first ordinary differential equation is now

$$\frac{d^2h}{dt^2} + \left(\frac{n\pi c}{L}\right)^2 h = 0$$

and because the coefficient of $h$ is clearly positive, the solution is


$$h(t) = c_1\cos\left(\frac{n\pi ct}{L}\right) + c_2\sin\left(\frac{n\pi ct}{L}\right)$$

Because there is no reason to think that either of the coefficients above is zero, we get two product solutions,

$$u_n(x, t) = A_n\cos\left(\frac{n\pi ct}{L}\right)\sin\left(\frac{n\pi x}{L}\right)$$

$$u_n(x, t) = B_n\sin\left(\frac{n\pi ct}{L}\right)\sin\left(\frac{n\pi x}{L}\right), \qquad n = 1, 2, 3, \ldots$$

The solution is then,

$$u(x, t) = \sum_{n=1}^{\infty}\left[A_n\cos\left(\frac{n\pi ct}{L}\right)\sin\left(\frac{n\pi x}{L}\right) + B_n\sin\left(\frac{n\pi ct}{L}\right)\sin\left(\frac{n\pi x}{L}\right)\right]$$

Now, in order to apply the second initial condition, we need to differentiate this with respect to $t$:

$$\frac{\partial u}{\partial t} = \sum_{n=1}^{\infty}\left[-\frac{n\pi c}{L}A_n\sin\left(\frac{n\pi ct}{L}\right)\sin\left(\frac{n\pi x}{L}\right) + \frac{n\pi c}{L}B_n\cos\left(\frac{n\pi ct}{L}\right)\sin\left(\frac{n\pi x}{L}\right)\right]$$

If we now apply the initial conditions, we get,

$$u(x, 0) = f(x) = \sum_{n=1}^{\infty}\left[A_n\cos(0)\sin\left(\frac{n\pi x}{L}\right) + B_n\sin(0)\sin\left(\frac{n\pi x}{L}\right)\right] = \sum_{n=1}^{\infty}A_n\sin\left(\frac{n\pi x}{L}\right)$$

$$\frac{\partial u}{\partial t}(x, 0) = g(x) = \sum_{n=1}^{\infty}\frac{n\pi c}{L}B_n\sin\left(\frac{n\pi x}{L}\right)$$

Both of these are Fourier sine series: the first for $f(x)$ on $0 \le x \le L$, the second for $g(x)$ on $0 \le x \le L$ with a slightly messy coefficient. Using the formula for Fourier sine series coefficients, we get

$$A_n = \frac{2}{L}\int_0^L f(x)\sin\left(\frac{n\pi x}{L}\right)dx, \qquad n = 1, 2, 3, \ldots$$

$$\frac{n\pi c}{L}B_n = \frac{2}{L}\int_0^L g(x)\sin\left(\frac{n\pi x}{L}\right)dx, \qquad n = 1, 2, 3, \ldots$$


Upon solving, we get:

$$A_n = \frac{2}{L}\int_0^L f(x)\sin\left(\frac{n\pi x}{L}\right)dx, \qquad n = 1, 2, 3, \ldots$$

$$B_n = \frac{2}{n\pi c}\int_0^L g(x)\sin\left(\frac{n\pi x}{L}\right)dx, \qquad n = 1, 2, 3, \ldots$$
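These integrals are often evaluated numerically in practice. A small sketch (our own, using the trapezoid rule) computes the sine-series coefficient $\frac{2}{L}\int_0^L f(x)\sin(n\pi x/L)\,dx$; $B_n$ then follows by scaling with $L/(n\pi c)$:

```python
import math

def sine_coefficient(func, n, L, num=2000):
    """Trapezoid-rule estimate of (2/L) * integral_0^L func(x)*sin(n*pi*x/L) dx."""
    h = L / num
    total = 0.0
    for i in range(num + 1):
        w = 0.5 if i in (0, num) else 1.0  # half weight at the endpoints
        total += w * func(i * h) * math.sin(n * math.pi * i * h / L)
    return (2 / L) * h * total

# If f(x) = sin(pi*x/L), only the n = 1 mode survives: A_1 = 1, A_2 = 0.
L = 2.0
A1 = sine_coefficient(lambda x: math.sin(math.pi * x / L), 1, L)
A2 = sine_coefficient(lambda x: math.sin(math.pi * x / L), 2, L)
```

The orthogonality of the sine modes is visible in the result: the $n = 1$ coefficient comes out as $1$ and every other coefficient as $0$.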

