
Revised November 2008

Chapter 2: Spectral Analysis

Chapter Contents

1. Signals and vectors, Orthogonal Functions
2. Energy and Power Signals
3. Fourier series, Fourier Transforms, Signal Spectrum
4. Correlation, Spectral Density

Objective: To answer the following queries and study the related topics.

1. Signals and vectors, Orthogonal Functions

• What is meant by component of one vector / signal in another?
• What is Minimum Mean Square Error (MMSE) criterion?
• What is the condition for two signals to be orthogonal?
• Example of some commonly encountered orthogonal functions

2. Energy and Power Signals

• The difference between energy and power signals
• Familiarization of terms related to power and energy

3. Fourier Series, Fourier Transforms, Signal Spectrum

• What is Fourier series representation?
• What is generalized Fourier series representation?
• Frequency analysis of periodic signals: find the Fourier series representation of a function in trigonometric or exponential form.
• Understand the concept of frequency contents of a signal.
• Single and double sided plots of amplitude and phase spectrum
• Frequency analysis of aperiodic signals: find the Fourier transform of a signal
• Understand the concept of impulse: defined by its properties
• Recognize some important properties of Fourier transform

4. Correlation, Spectral Density

• What is correlation of power and energy signals?
• What is power and energy spectral density?

Learning outcome of Chapter 2: At the completion of the subject, students should be able to: Understand the concepts of signals and vectors, orthogonal function, Fourier series, Fourier transform, convolution and correlation for spectral analysis.


In this chapter, we review the basics of signal and linear systems in the frequency domain. The

motivation for studying these fundamental concepts stems from the basic role they play in modeling

various types of communication systems. In transmission of an information signal over a

communication channel, the shape of the signal is changed, or distorted, by the channel. The

communication channel is an example of a system; i.e., an entity that produces an output signal when

excited by an input signal. A large number of communication channels can be modeled closely by a

subclass of systems called linear systems. Linear systems arise naturally in many practical

applications. We have devoted this entire chapter to the study of the basics of

signals and linear systems in the frequency domain.

2.1 SIGNALS AND VECTORS

2.1.1 Component of a Vector

A vector, V, is specified by its magnitude and direction.

Consider two vectors, V1 and V2.

a) Suppose we want to write V1 in terms of V2. The component of V1 along V2 is C12V2, where C12

is chosen such that the error vector, Ve, is minimum.

V1 = C12 V2 + Ve (2.1)

Figure 2.1: Vectors

b) Physical interpretation:

the larger the component of a vector along the other vector, the more closely do the two

vectors resemble each other in their directions, and the smaller is the error vector.

c) C12 is given by :

C_{12} = (V_1 · V_2) / (V_2 · V_2) = (1/|V_2|^2) (V_1 · V_2)   (2.2)



d) Orthogonal vectors: if C12 = 0 or

V_1 · V_2 = |V_1| |V_2| cos θ = 0   (2.3)

The vectors are independent of each other.

2.1.2 Component of a Signal

Consider two signals, f1(t) and f2(t).

a) Suppose we want to approximate f1(t) in terms of f2(t) over a certain interval t1 < t < t2 :

f_1(t) ≈ C_{12} f_2(t)   (2.4)

b) The error function is given by f_e(t) = f_1(t) − C_{12} f_2(t).   (2.5)

This error function in the approximation can be minimized by minimizing the average

(mean) of the square of the error :

ε = (1/(t_2 − t_1)) ∫_{t_1}^{t_2} f_e^2(t) dt   (2.6)

c) To find the value of C12 which will minimize ε , we must have

dε/dC_{12} = 0

The solution is:

C_{12} = [ ∫_{t_1}^{t_2} f_1(t) f_2(t) dt ] / [ ∫_{t_1}^{t_2} f_2^2(t) dt ]   (2.7)

d) From the previous section on Vectors, we say that a signal V1 contains a component C12V2.

Note that in vector terminology, C12V2 is the projection of V1 on V2. Continuing with that

analogy, we say that if the component of a signal V1 of the form V2 is zero (that is C12 = 0),

the signals V1 and V2 are orthogonal. Therefore, we define the real signals f1(t) and f2(t) to be

orthogonal over an interval (t1, t2) if :

C_{12} = 0   or   ∫_{t_1}^{t_2} f_1(t) f_2(t) dt = 0   (2.8)


Example 2.1: Minimum mean square error

A rectangular function f(t) is defined by

f(t) = {  1,   0 < t < π
         −1,   π < t < 2π

Approximate this function by a waveform sin t over the interval (0,2π) such that the mean square

error is minimal.

Solution:

Let f(t) ≈ C_{12} sin t

To minimize the mean square error:

C_{12} = [ ∫_0^{2π} f(t) sin t dt ] / [ ∫_0^{2π} sin^2 t dt ]
       = (1/π) [ ∫_0^{π} sin t dt − ∫_{π}^{2π} sin t dt ] = 4/π

Thus, f(t) ≈ (4/π) sin t represents the best approximation of f(t) by the function sin t in the

minimum mean square error (MMSE) sense.

Figure 2.2: Example of MMSE approximation

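The projection formula (2.7) can be checked numerically. The following short sketch is not part of the original notes and assumes Python with NumPy; it approximates the two integrals of Example 2.1 with the trapezoidal rule and should reproduce C12 ≈ 4/π ≈ 1.273.

import numpy as np

# Discretise the interval (0, 2*pi) used in Example 2.1.
t = np.linspace(0.0, 2.0 * np.pi, 200001)
f = np.where(t < np.pi, 1.0, -1.0)   # f(t): +1 on (0, pi), -1 on (pi, 2*pi)
g = np.sin(t)                        # approximating waveform sin(t)

# Equation (2.7): C12 = integral(f*g) / integral(g^2)
c12 = np.trapz(f * g, t) / np.trapz(g * g, t)
print(c12, 4.0 / np.pi)              # the two printed values should agree closely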


2.1.3 Orthogonal Functions

Functions ϕn and ϕm are said to be orthogonal with respect to each other over the interval (a, b) if they

satisfy the condition

∫_a^b ϕ_n(t) ϕ_m^*(t) dt = 0,   where m ≠ n   (2.9)

The zero result implies that these functions are independent. If the result is not zero, the two functions

have some dependence on each other.

If the functions in the set { ϕ_n(t) } are orthogonal, then they also satisfy the relation

∫_a^b ϕ_n(t) ϕ_m^*(t) dt = K_n δ_{nm} = {  K_n,  n = m
                                            0,   n ≠ m        (2.10)

where

δ_{nm} = {  1,  n = m
            0,  n ≠ m

δ_{nm} is called the Kronecker delta. If the constants K_n are all equal to 1, the functions ϕ_n(t) are said to

be orthonormal functions.

Examples of orthogonal functions: trigonometric functions [sin(kt), cos(kt)] and complex exponential

functions [exp(jkt)]
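As a small illustration of condition (2.9) (not part of the original notes; it assumes Python with NumPy and uses harmonics of the interval (0, 2π) as an example), the inner products below should evaluate to approximately zero for different indices and to the constant K_n for equal indices.

import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 100001)

def inner(u, v):
    # Inner product over (0, 2*pi): integral of u(t) * conj(v(t)) dt
    return np.trapz(u * np.conj(v), t)

print(abs(inner(np.sin(2 * t), np.sin(3 * t))))        # ~0, orthogonal
print(abs(inner(np.sin(2 * t), np.cos(2 * t))))        # ~0, orthogonal
print(abs(inner(np.exp(2j * t), np.exp(3j * t))))      # ~0, orthogonal
print(abs(inner(np.exp(2j * t), np.exp(2j * t))))      # ~2*pi, i.e. K_n for this set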


2.2 SIGNAL MODELS

2.2.1 Deterministic and Random Signals

In communication systems we are concerned with two broad classes of signals, referred to as

deterministic and random. Deterministic signals can be modeled as completely specified functions of

time. For example, the signal x(t) = A cos(ω_0 t), where A and ω_0 are constants.

Random signals are signals that take on random values at any given time instant and must be

modeled probabilistically.

2.2.2 Periodic and Aperiodic Signals

A signal x(t) is periodic if and only if

x(t + T_0) = x(t),   −∞ < t < ∞   (2.11)

Any signal not satisfying equation (2.11) is called aperiodic

2.2.3 Phasor Signals and Spectra

An important periodic signal in system analysis is the complex signal

x̃(t) = A e^{j(ω_0 t + θ)},   −∞ < t < ∞   (2.12)

We will refer to equation (2.12) as a rotating phasor to distinguish it from the phasor A e^{jθ}. Using

Euler's theorem, we may show that equation (2.12) is periodic with period T_0 = 2π/ω_0. The

rotating phasor A e^{j(ω_0 t + θ)} can be related to a real sinusoidal signal in two ways. The first is:

x(t) = A cos(ω_0 t + θ) = Re x̃(t) = Re[ A e^{j(ω_0 t + θ)} ]   (2.13)

The second is:

A cos(ω_0 t + θ) = (1/2) x̃(t) + (1/2) x̃^*(t) = (1/2) A e^{j(ω_0 t + θ)} + (1/2) A e^{−j(ω_0 t + θ)}   (2.14)

The equivalent representation of equation (2.13) in frequency domain is shown in Figure 2.3(a). The

resulting plots are referred to as the amplitude line spectrum and the phase line spectrum for x(t), or

the single-sided amplitude and phase spectra of x(t).

The spectrum of equation (2.14) is shown in Figure 2.3(b) and referred to as the double-sided

amplitude and phase spectra.


Figure 2.3: Amplitude and phase spectra for the signal A cos(ω_0 t + θ). (a) Single-sided. (b) Double-sided.

2.2.4 Singularity Functions

An important subclass of aperiodic signals is the singularity functions. Here, we will be concerned

with only two: the unit impulse function δ(t) and the unit step function u(t). The unit impulse function

is defined in terms of the integral

∫_{−∞}^{∞} x(t) δ(t) dt = x(0)   (2.15)

A change of variables and redefinition of x(t) results in the shifting property

∫_{−∞}^{∞} x(t) δ(t − t_0) dt = x(t_0)   (2.16)

The other singularity function may be defined as

u(t) = ∫_{−∞}^{t} δ(λ) dλ = {  0,           t < 0
                               1,           t > 0
                               undefined,   t = 0        (2.17)


Table 2.1: Some useful function definitions

rect(t/T) or Π_T(t): rectangular pulse of unit height, equal to 1 for −T/2 < t < T/2 and 0 elsewhere (width T)

sgn(t): signum function, equal to −1 for t < 0 and +1 for t > 0

u(t): unit step function, equal to 0 for t < 0 and 1 for t > 0

Λ_T(t) (triangular pulse): unit-height triangle extending from −T to T. Note this is twice as wide (2T) as rect(t/T), which is only T wide.


2.3 SIGNAL CLASSIFICATIONS

2.3.1 Energy and Power Signals

In this chapter we will be considering two signal classes, those with finite energy and those with finite

power. As a specific example, suppose e(t) is the voltage across a resistance R producing a current

i(t). The instantaneous power per ohm is p(t) = e(t)i(t)/R = i^2(t). The total energy and the average

power on a per-ohm basis are obtained as the limits

E = lim_{T→∞} ∫_{−T/2}^{T/2} i^2(t) dt   (2.18)

and

P = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} i^2(t) dt   (2.19)

For an arbitrary signal x(t), which may in general be complex, we define the total (normalized) energy as

E = lim_{T→∞} ∫_{−T/2}^{T/2} |x(t)|^2 dt = ∫_{−∞}^{∞} |x(t)|^2 dt   (2.20)

and the (normalized) power as

P = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|^2 dt   (2.21)

Based on these definitions, we can define two distinct classes of signals:

1. x(t) is an energy signal if and only if 0 < E < ∞ (finite and nonzero), so that P = 0. An aperiodic signal is usually an energy waveform (its energy is finite, so averaging over an infinite interval gives zero power).

2. x(t) is a power signal if and only if 0 < P < ∞ (finite and nonzero), implying that E = ∞. A periodic signal is a power waveform (its energy is infinite because it lasts for an infinite duration).

Note: Every signal that can be generated in a lab has finite energy. In other words, every signal

observed in real life is an energy signal. A power signal must necessarily have an infinite duration.

Example 2.2: Energy and Power Signals

As an example of determining the classification of a signal, consider

x_1(t) = A e^{−αt} u(t),   α > 0

where A and α are constants. Using equation (2.20), we may readily verify that x_1(t) is an energy

signal, since E = A^2/2α. Letting α → 0, we obtain the signal x_2(t) = A u(t), which has infinite

energy. Applying equation (2.21), we find that P = A^2/2, thus verifying that x_2(t) is a power signal.
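Example 2.2 can also be checked numerically. The sketch below is illustrative only (Python with NumPy assumed; A = 2 and α = 0.5 are arbitrary choices) and estimates E for x1(t) and P for x2(t) over a long symmetric window, to be compared with A^2/2α and A^2/2.

import numpy as np

A, alpha = 2.0, 0.5                      # arbitrary illustrative constants
T = 400.0                                # length of the symmetric observation window
t = np.linspace(-T / 2, T / 2, 2000001)
u = (t >= 0).astype(float)               # unit step u(t)

x1 = A * np.exp(-alpha * t) * u          # x1(t) = A e^(-alpha t) u(t), energy signal
x2 = A * u                               # x2(t) = A u(t), power signal

E1 = np.trapz(x1**2, t)                  # total energy of x1
P2 = np.trapz(x2**2, t) / T              # average power of x2 over (-T/2, T/2)

print(E1, A**2 / (2 * alpha))            # ~4.0 in both cases
print(P2, A**2 / 2)                      # ~2.0 in both cases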


2.4 FOURIER SERIES

2.4.1 Fourier analysis of signals

The analysis of signals and linear systems in the frequency domain is based on representing signals in

terms of their frequency content, and this is done by employing the Fourier series and the Fourier transform.

The Fourier series applies to periodic signals, whereas the Fourier transform can be applied to both periodic

and aperiodic signals.

2.4.2 Complex Exponential Fourier Series

The complex Fourier series uses the orthogonal exponential functions ϕ_n = e^{jnω_0 t}, where n ranges

over all possible integer values, negative, positive, and zero, and ω_0 = 2π/T_0.

A signal x(t) can be expressed over an interval of duration T0 seconds as an exponential Fourier series

x(t) = Σ_{n=−∞}^{∞} c_n e^{jnω_0 t}   (2.22)

where the complex Fourier coefficients c_n are given by

c_n = (1/T_0) ∫_{t_0}^{t_0+T_0} x(t) e^{−jnω_0 t} dt   (2.23)

for some arbitrary t0 , by setting t0 = −T0/2, we have

c_n = (1/T_0) ∫_{−T_0/2}^{T_0/2} x(t) e^{−jnω_0 t} dt   (2.24)

and ω_0 = 2πf_0 = 2π/T_0. The frequency f_0 = 1/T_0 is said to be the fundamental frequency, and

nf_0 = n/T_0 is said to be the nth harmonic frequency, when n > 1.

The Fourier coefficient c_0 is equivalent to the dc value of the waveform x(t) when n = 0. In general,

c_n is a complex number and can be expressed as c_n = |c_n| e^{jθ_n}, which gives the magnitude and

phase angle of the nth harmonic.

Some properties of the complex Fourier series

• If x(t) is real, then c_{−n} = c_n^*, therefore |c_{−n}| = |c_n| and θ_{−n} = −θ_n.

• If x(t) is real and even, then Im[c_n] = 0; if x(t) is real and odd, then Re[c_n] = 0.


Example 2.3: Exponential Fourier series

Let x(t) denote the periodic signal depicted in Figure 2.4 and described analytically by

x(t) = Σ_{n=−∞}^{∞} Π( (t − nT_0)/τ ) = Σ_{n=−∞}^{∞} rect( (t − nT_0)/τ )

where Π(t) or rect(t) is a rectangular pulse of width τ. Determine the Fourier series expansion for this

signal.

Figure 2.4: A periodic rectangular waveform

Solutions

We observe that the period is T_0 and

c_n = (1/T_0) ∫_{−T_0/2}^{T_0/2} x(t) e^{−jnω_0 t} dt = (1/T_0) ∫_{−τ/2}^{τ/2} (1) e^{−j2πnt/T_0} dt
    = (1/(j2πn)) ( e^{jπnτ/T_0} − e^{−jπnτ/T_0} ) = (1/(nπ)) sin(πnτ/T_0) = (τ/T_0) sinc(nτ/T_0)

where we use the definition of sinc(X) = sin(πX)/(πX).

Figure 2.5: Fourier series coefficients for the periodic rectangular waveform
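The coefficients of Example 2.3 can be verified by direct numerical integration of equation (2.24). The sketch below is not from the original notes (Python with NumPy assumed; T0 = 1 and τ = 0.25 are arbitrary illustrative values); np.sinc uses the same definition sinc(X) = sin(πX)/(πX) as the text.

import numpy as np

T0, tau = 1.0, 0.25                          # arbitrary period and pulse width
t = np.linspace(-T0 / 2, T0 / 2, 200001)
x = (np.abs(t) < tau / 2).astype(float)      # one period of the rectangular pulse train

for n in range(5):
    # Equation (2.24): c_n = (1/T0) * integral of x(t) exp(-j n w0 t) over one period
    cn = np.trapz(x * np.exp(-2j * np.pi * n * t / T0), t) / T0
    print(n, cn.real, (tau / T0) * np.sinc(n * tau / T0))   # matches (tau/T0) sinc(n tau/T0)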


2.4.3 Fourier series for Real signals – Trigonometric Fourier series

For real x(t), we have c_{−n} = c_n^*; by taking the complex conjugate inside the integral and noting

that the same result is obtained by replacing n by −n in c_n = |c_n| e^{jθ_n}, we obtain

|c_{−n}| = |c_n|   and   θ_{−n} = −θ_n

We can now regroup the complex exponential Fourier series by pairs of terms of the form

c_n e^{jnω_0 t} + c_{−n} e^{−jnω_0 t} = |c_n| e^{j(nω_0 t + θ_n)} + |c_n| e^{−j(nω_0 t + θ_n)} = 2|c_n| cos(nω_0 t + θ_n)

Hence equation (2.22) can be written in the equivalent compact trigonometric form

x(t) = c_0 + Σ_{n=1}^{∞} 2|c_n| cos(nω_0 t + θ_n)   (2.25)

Expanding the cosine in equation (2.25), we obtain still another equivalent series of the form

x(t) = c_0 + Σ_{n=1}^{∞} A_n cos(nω_0 t) + Σ_{n=1}^{∞} B_n sin(nω_0 t)   (2.26)

where

A_n = 2|c_n| cos θ_n = (2/T_0) ∫_{t_0}^{t_0+T_0} x(t) cos(nω_0 t) dt   (2.27)

and

B_n = −2|c_n| sin θ_n = (2/T_0) ∫_{t_0}^{t_0+T_0} x(t) sin(nω_0 t) dt   (2.28)

and also

c_0 = (1/T_0) ∫_{t_0}^{t_0+T_0} x(t) dt   (2.29)

c_n and θ_n are also related to A_n and B_n by the equations

|c_n| = (1/2) √(A_n^2 + B_n^2)   and   θ_n = −tan^{−1}(B_n/A_n)   (2.30a & 2.30b)

In either the trigonometric or the exponential form of the Fourier series, c_0 represents the average or dc

component of x(t). The term for n = 1 is called the fundamental, the term for n = 2 is called the

second harmonic, and so on.


2.4.3.1 Trigonometric Fourier series for Even and Odd functions

In some of the problems encountered, the Fourier coefficients c_0, A_n or B_n become zero after

integration. Finding zero coefficients is time consuming and can be avoided. With the knowledge of

even and odd functions, a zero coefficient may be predicted without performing the integration. A

function x(t) is said to be even, if x(− t) = x(t) for all values of t. The graph of an even function is

always symmetrical about the y-axis (a mirror image). For an even function x(t), defined over the

range T, the zero coefficient is Bn = 0.

A function x(t) is said to be odd, if x(− t) = − x(t) for all values of t. The graph of an odd function is

always symmetrical about the origin. For an odd function x(t), defined over the range T, the zero

coefficients are c0 = 0 and An = 0.

2.4.4 Frequency or Line Spectra

Plots of |c_n| and θ_n versus frequency f are called the amplitude spectrum and the phase spectrum of

the periodic signal x(t), respectively. These plots are referred to as the frequency spectra of x(t). Since the

index n assumes only integer values, the frequency spectra of a periodic signal exist only at the discrete

frequencies nf_0. These are therefore referred to as discrete frequency spectra or line spectra.

(i) Single-sided plot of Fourier series coefficients

Write the function in F.S. trigonometric form:

x(t) = c_0 + Σ_{n=1}^{∞} 2|c_n| cos(nω_0 t + θ_n)

Each frequency component is represented by a line of amplitude 2|c_n| (c_0 at dc) at frequency f = nf_0.

Figure 2.6. Single-sided line spectra


(ii) Double-sided plot of Fourier series coefficients

Write the function in F.S. exponential form:

x(t) = Σ_{n=−∞}^{∞} c_n e^{jnω_0 t}

Figure 2.7. Double-sided line spectra

Example 2.4: Trigonometric Fourier series

Evaluate the trigonometric Fourier Series expansion for the periodic function x(t) of a Triangular

waveform with period T0 = 1 and x(0) = 1. Sketch the magnitude spectrum for n = 3.

Figure 2.8. Triangular waveform, x(t)

Solutions:

x(t) = c_0 + Σ_{n=1}^{∞} A_n cos(nω_0 t) + Σ_{n=1}^{∞} B_n sin(nω_0 t),   ω_0 = 2π/T_0 = 2π,   n = 1, 2, ...

c_0 = (1/T_0) ∫_{−T_0/2}^{T_0/2} x(t) dt,   A_n = (2/T_0) ∫_{−T_0/2}^{T_0/2} x(t) cos(2πnt/T_0) dt,   B_n = (2/T_0) ∫_{−T_0/2}^{T_0/2} x(t) sin(2πnt/T_0) dt = 0   (even function)

c_0 = (1/T_0) ∫_{−1/2}^{1/2} x(t) dt = 2 ∫_0^{1/2} (1 − 2t) dt = 2 [ t − t^2 ]_0^{1/2} = 1/2


A_n = 2 ∫_{−1/2}^{1/2} x(t) cos(2πnt) dt = 4 ∫_0^{1/2} (1 − 2t) cos(2πnt) dt
    = 4 [ sin(2πnt)/(2πn) ]_0^{1/2} − 8 [ t sin(2πnt)/(2πn) + cos(2πnt)/(2πn)^2 ]_0^{1/2}
    = (2/(nπ)^2) (1 − cos nπ)

∴ A_n = {  0,          for n even
           4/(nπ)^2,   for n odd

Hence:

x(t) = c_0 + Σ_{n=1}^{∞} A_n cos(nω_0 t) = 1/2 + Σ_{n odd} (4/(nπ)^2) cos(2πnt)
     = 1/2 + (4/π^2) [ cos 2πt + (1/9) cos 6πt + (1/25) cos 10πt + ... ]

Figure 2.9 : Amplitude (or Magnitude) spectrum (a) Single-sided (b) Double-sided
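A quick numerical check of Example 2.4 (illustrative only, Python with NumPy assumed): summing the series up to n = 5 already tracks the triangular waveform closely, with a maximum error of roughly 0.03.

import numpy as np

t = np.linspace(-0.5, 0.5, 10001)
x = 1.0 - 2.0 * np.abs(t)                  # one period of the triangular wave (T0 = 1, x(0) = 1)

# Partial Fourier series: c0 plus the odd harmonics n = 1, 3, 5
approx = 0.5 + sum((4.0 / (n * np.pi)**2) * np.cos(2 * np.pi * n * t) for n in (1, 3, 5))

print(np.max(np.abs(x - approx)))          # maximum error of the 3-term approximation (~0.03)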


2.4.5 Parseval’s Theorem

A periodic signal is a power signal, and every term in its Fourier series is also a power signal. The

power of x(t) is equal to the power of its Fourier series, which is the sum of the powers of its Fourier

components. Parseval's theorem for the Fourier series states that the average power of a periodic

signal x(t) is the sum of the powers in the phasor components of its Fourier series

P = (1/T) ∫_{−T/2}^{T/2} |x(t)|^2 dt = Σ_{n=−∞}^{∞} |c_n|^2   (2.31)
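Parseval's theorem (2.31) can be illustrated with the rectangular pulse train of Example 2.3. The sketch below is not part of the original notes (Python with NumPy assumed; T0 = 1, τ = 0.25 and the truncation at |n| ≤ 2000 are arbitrary choices); both printed values should be close to τ/T0.

import numpy as np

T0, tau = 1.0, 0.25
t = np.linspace(-T0 / 2, T0 / 2, 400001)
x = (np.abs(t) < tau / 2).astype(float)        # one period of the pulse train of Example 2.3

P_time = np.trapz(x**2, t) / T0                # left side of (2.31): average power over a period

n = np.arange(-2000, 2001)
cn = (tau / T0) * np.sinc(n * tau / T0)        # c_n from Example 2.3
P_series = np.sum(np.abs(cn)**2)               # right side of (2.31): sum of |c_n|^2

print(P_time, P_series)                        # both approximately tau/T0 = 0.25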


2.5 FOURIER TRANSFORMS

2.5.1 Frequency Domain Representation of Aperiodic Signals

A non-periodic signal can be viewed as a limiting case of a periodic signal whose period approaches infinity.

Since the period approaches infinity, the fundamental frequency f_0 approaches zero.

The harmonics get closer and closer together, and in the limit the Fourier series summation

representation becomes an integral. In this manner, we could develop the Fourier integral (transform)

theory.

The Fourier transform (FT) of a waveform x(t), symbolized by F, is defined by:

X(f) = F[x(t)] = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt   (2.32)

The Inverse Fourier transform (IFT) of X(f), symbolized by F^{−1}, is defined by:

x(t) = F^{−1}[X(f)] = ∫_{−∞}^{∞} X(f) e^{j2πft} df   (2.33)

The equations (2.32) and (2.33) are called the Fourier transform pair, denoted by x(t) ⇔ X(f)

• By using Fourier transformation, an energy signal x(t) is represented by the Fourier transform X(f), which is a function of the frequency variable f.

2.5.2 Amplitude and Phase Spectra

In general, the Fourier transform X(f) is a complex function of the continuous frequency f. We may

therefore express it in the form

X(f) = |X(f)| e^{jθ(f)},   where θ(f) = ∠X(f)   (2.34)

where |X(f)| is called the continuous amplitude spectrum of x(t), and θ(f) is called the continuous phase spectrum of x(t).

If x(t) is a real function of time, we have

X(−f) = X^*(f) = |X(f)| e^{−jθ(f)}   (2.35)

or

|X(−f)| = |X(f)|   and   θ(−f) = −θ(f)   (2.36)

Thus, just as for the complex Fourier series, the amplitude spectrum |X(f)| is an even function of f

and the phase spectrum θ(f) is an odd function of f.


Example 2.5: Fourier transform of rectangular function

Find the Fourier transform for the following gate function:

g_τ(t) = Π(t/τ) = rect(t/τ) = {  1,  |t| < τ/2
                                 0,  |t| > τ/2

Figure 2.10: A rectangular gate function

Solutions

G(f) = F[g_τ(t)] = ∫_{−∞}^{∞} g_τ(t) e^{−j2πft} dt = ∫_{−τ/2}^{τ/2} (1) e^{−j2πft} dt = sin(πfτ)/(πf) = τ sinc(fτ)

Therefore, Π(t/τ) ⇔ τ sinc(fτ)

Figure 2.11: A sinc function
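The pair Π(t/τ) ⇔ τ sinc(fτ) of Example 2.5 can be reproduced by integrating equation (2.32) numerically. This sketch is illustrative only (Python with NumPy assumed; τ = 1 and the chosen frequencies are arbitrary).

import numpy as np

tau = 1.0
t = np.linspace(-2.0, 2.0, 400001)
g = (np.abs(t) < tau / 2).astype(float)        # gate function of width tau

for f in (0.0, 0.5, 1.0, 1.5, 2.5):
    # Equation (2.32): G(f) = integral of g(t) exp(-j 2 pi f t) dt
    G = np.trapz(g * np.exp(-2j * np.pi * f * t), t)
    print(f, G.real, tau * np.sinc(f * tau))   # should match tau * sinc(f * tau)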


Example 2.6: Fourier transform of exponential function

Find the Fourier transform of x(t) = e^{−at} u(t), where a > 0.

Solutions

The Fourier transform of x(t) is

X(f) = ∫_{−∞}^{∞} e^{−at} u(t) e^{−j2πft} dt = ∫_0^{∞} e^{−(a + j2πf)t} dt = [ −e^{−(a + j2πf)t}/(a + j2πf) ]_0^{∞} = 1/(a + j2πf)

Expressing in polar form, we obtain

X(f) = ( 1/√(a^2 + (2πf)^2) ) e^{−j tan^{−1}(2πf/a)}

where |X(f)| = 1/√(a^2 + (2πf)^2)   and   θ(f) = −tan^{−1}(2πf/a)

2.5.3 Energy Spectral Density (ESD) and Parseval’s Theorem

Parseval's theorem for the Fourier transform states that if x(t) is an energy signal, then

E = ∫_{−∞}^{∞} |x(t)|^2 dt = ∫_{−∞}^{∞} |X(f)|^2 df   (2.37)

Equation (2.37) gives an alternative method for evaluating the energy by using the frequency domain description instead of the time domain definition. Examining |X(f)|^2, we note that it has units of (volt-seconds)^2 or, since we are considering power on a per-ohm basis, watt-seconds per hertz = joules per hertz. Thus, we can see that |X(f)|^2 has the units of energy density. So, we can now define the energy spectral density (ESD), with units of joules per hertz, for an energy signal by:

G(f) = |X(f)|^2   (2.38)

By integrating G(f) over all frequency, we can obtain the total energy as

E = ∫_{−∞}^{∞} G(f) df   (2.39)


2.5.4 Properties of Fourier Transform

Superposition Theorem

If x_1(t) ⇔ X_1(f) and x_2(t) ⇔ X_2(f), then a_1 x_1(t) + a_2 x_2(t) ⇔ a_1 X_1(f) + a_2 X_2(f)   (2.40)

Proof: trivial

Frequency Translation Theorem

If x(t) ⇔ X(f), then x(t) e^{j2πf_0 t} ⇔ X(f − f_0)   (2.41)

Proof:

F[ x(t) e^{j2πf_0 t} ] = ∫_{−∞}^{∞} x(t) e^{j2πf_0 t} e^{−j2πft} dt = ∫_{−∞}^{∞} x(t) e^{−j2π(f − f_0)t} dt = X(f − f_0)

Time-Delay Theorem

If x(t) ⇔ X(f), then x(t − t_0) ⇔ X(f) e^{−j2πft_0}   (2.42)

Proof:

F[ x(t − t_0) ] = ∫_{−∞}^{∞} x(t − t_0) e^{−j2πft} dt

Let y = t − t_0; then

F[ x(t − t_0) ] = ∫_{−∞}^{∞} x(y) e^{−j2πf(y + t_0)} dy = e^{−j2πft_0} ∫_{−∞}^{∞} x(y) e^{−j2πfy} dy = X(f) e^{−j2πft_0}

Scale-Change Theorem

If x(t) ⇔ X(f), then x(at) ⇔ (1/|a|) X(f/a)   (2.43)

Proof: Assume a > 0.

F[ x(at) ] = ∫_{−∞}^{∞} x(at) e^{−j2πft} dt

Let y = at; then

F[ x(at) ] = ∫_{−∞}^{∞} x(y) e^{−j2π(f/a)y} (dy/a) = (1/a) X(f/a)

Duality Theorem

If x(t) ⇔ X(f), then X(t) ⇔ x(−f)   (2.44)

Proof: The proof of this theorem follows from the fact that the only difference between the Fourier

transform integral and the inverse Fourier transform integral is a minus sign in the exponent of the

integrand.


Modulation Theorem

If x(t) ⇔ X(f), then x(t) cos(2πf_0 t) ⇔ (1/2) X(f − f_0) + (1/2) X(f + f_0)   (2.45)

Proof: The proof of this theorem follows by writing cos(2πf_0 t) in exponential form as

(1/2) e^{j2πf_0 t} + (1/2) e^{−j2πf_0 t} and applying the superposition and frequency translation theorems.

Differentiation Theorem

If x(t) ⇔ X(f), then d^n x(t)/dt^n ⇔ (j2πf)^n X(f)   (2.46)

Proof: We prove the theorem for n = 1 by using integration by parts on the defining Fourier transform

integral as follows:

F[ dx(t)/dt ] = ∫_{−∞}^{∞} (dx(t)/dt) e^{−j2πft} dt = [ x(t) e^{−j2πft} ]_{−∞}^{∞} + j2πf ∫_{−∞}^{∞} x(t) e^{−j2πft} dt = j2πf X(f)

where the definitions u = e^{−j2πft} and dv = (dx/dt) dt have been used in the integration-by-parts

formula and the first term of the middle equation vanishes at each end point by virtue of x(t) being an

energy signal. The proof for values of n > 1 follows by induction.

Convolution Theorem

If x_1(t) ⇔ X_1(f) and x_2(t) ⇔ X_2(f), then

x_1(t) * x_2(t) = ∫_{−∞}^{∞} x_1(τ) x_2(t − τ) dτ = ∫_{−∞}^{∞} x_1(t − τ) x_2(τ) dτ ⇔ X_1(f) X_2(f)   (2.47)

When the convolution (denoted by "*") of two signals, x_1(t) and x_2(t), is conducted, a new function of

time, x(t), is produced and is given by

x(t) = x_1(t) * x_2(t) = ∫_{−∞}^{∞} x_1(τ) x_2(t − τ) dτ

The integrand is formed from x1 and x2 by three operations:

1. Time reversal to obtain x2(−τ)

2. Time shifting to obtain x2(t−τ)

3. Multiplication of x1(τ) and x2(t−τ) to form the integrand.


Figure 2.12: The convolution process of two exponentially decaying signals.

Proof:

F[ x_1(t) * x_2(t) ] = ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} x_1(τ) x_2(t − τ) dτ ] e^{−j2πft} dt = ∫_{−∞}^{∞} x_1(τ) [ ∫_{−∞}^{∞} x_2(t − τ) e^{−j2πft} dt ] dτ

By the time-delay theorem,

∫_{−∞}^{∞} x_2(t − τ) e^{−j2πft} dt = X_2(f) e^{−j2πfτ}

Thus, we have

F[ x_1(t) * x_2(t) ] = ∫_{−∞}^{∞} x_1(τ) X_2(f) e^{−j2πfτ} dτ = X_2(f) ∫_{−∞}^{∞} x_1(τ) e^{−j2πfτ} dτ = X_1(f) . X_2(f)

− ττττ τπτπ

In communication systems, the output (response) of a (stationary, or time- or space-invariant) linear system is the convolution of the input (excitation) with the system's response to an impulse or Dirac delta function.

Multiplication Theorem

If x_1(t) ⇔ X_1(f) and x_2(t) ⇔ X_2(f), then

x_1(t) . x_2(t) ⇔ X_1(f) * X_2(f) = ∫_{−∞}^{∞} X_1(τ) X_2(f − τ) dτ   (2.48)

Proof: The proof of the multiplication theorem proceeds in a manner analogous to the proof of the

convolution theorem.
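The convolution theorem (2.47) lends itself to a simple numerical check. The sketch below is not part of the original notes (Python with NumPy assumed; the two decaying exponentials, echoing Figure 2.12, and the sample frequencies are arbitrary choices): the transform of the convolution should equal the product of the individual transforms, up to discretisation error.

import numpy as np

dt = 1e-3
t = np.arange(0.0, 40.0, dt)
x1 = np.exp(-1.0 * t)                     # e^(-t) u(t)
x2 = np.exp(-2.0 * t)                     # e^(-2t) u(t)

# Time-domain convolution, approximating the convolution integral by a Riemann sum
y = np.convolve(x1, x2)[:t.size] * dt

def ft(x, f):
    # Numerical Fourier transform of x (defined on the grid t) at a single frequency f
    return np.trapz(x * np.exp(-2j * np.pi * f * t), t)

for f in (0.0, 0.2, 1.0):
    print(f, ft(y, f), ft(x1, f) * ft(x2, f))   # the two complex values should agree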


2.5.5 Impulse or Dirac delta function, δ(t)

Figure 2.13: Rectangular pulse → impulse ( when ∆T → 0)

The Dirac delta function is not a true function. It can be obtained by letting the pulse width ∆T of a

rectangular pulse with unit area go to zero (so that its amplitude grows without bound). The normal concept of amplitude does not apply

here. Alternatively, the concept of weight is introduced.

The impulse, δ(t), has the following features:

i) unit area, which depends on location:

∫_{t_1}^{t_2} δ(t − t_0) dt = {  1,  t_1 < t_0 < t_2
                                 0,  elsewhere          (2.49)

ii) ability to "weight" the impulse with a resulting area other than unity:

∫_{t_1}^{t_2} A δ(t − t_0) dt = {  A,  t_1 < t_0 < t_2
                                   0,  elsewhere        (2.50)

iii) sampling property:

x(t) δ(t − t_0) = x(t_0) δ(t − t_0)   (2.51)

iv) shifting property:

∫_{−∞}^{∞} x(t) δ(t − t_0) dt = x(t_0)   (2.52)

v) scaling property:

∫_{−∞}^{∞} δ(at) dt = (1/|a|) ∫_{−∞}^{∞} δ(t) dt = 1/|a|   (2.53)

vi) Fourier transform pair:

F{ δ(t − t_0) } = ∫_{−∞}^{∞} δ(t − t_0) e^{−j2πft} dt = e^{−j2πft_0},   = 1 for t_0 = 0   (2.54)

Figure 2.14: Impulse representation


Example 2.7 : Fourier transform properties

Find the Fourier transform for the following signals:

1) v(t) = δ(t)

2) v(t) = 1

3) v(t) = cos(ω_0 t)

4) v(t) = m(t) cos(ω_0 t), where m(t) ↔ M(f)

Solutions

1.) v(t) = δ(t):   V(f) = ∫_{−∞}^{∞} δ(t) e^{−j2πft} dt = 1

2.) v(t) = 1:   V(f) = ∫_{−∞}^{∞} e^{−j2πft} dt, and δ(t) = ∫_{−∞}^{∞} e^{j2πft} df

Therefore, V(f) = ∫_{−∞}^{∞} e^{−j2πft} dt = δ(−f) = δ(f)

where the scaling property of the impulse function has been applied in the last step.

3.) v(t) = cos(ω_0 t)  =>  v(t) = (1/2)( e^{jω_0 t} + e^{−jω_0 t} )

From the frequency shift property: if 1 ↔ δ(f), then e^{jω_0 t} = e^{j2πf_0 t} ↔ δ(f − f_0), centred at f_0, and e^{−jω_0 t} = e^{−j2πf_0 t} ↔ δ(f + f_0), centred at −f_0.

From the linearity property: V(f) = (1/2)[ δ(f − f_0) + δ(f + f_0) ]

4.) v(t) = m(t) cos(ω_0 t) with m(t) ↔ M(f)  =>  v(t) = (1/2)( e^{jω_0 t} + e^{−jω_0 t} ) m(t)

From the frequency shift property: if m(t) ↔ M(f), then e^{jω_0 t} m(t) ↔ M(f − f_0), centred at f_0, and e^{−jω_0 t} m(t) ↔ M(f + f_0), centred at −f_0.

From the linearity property: V(f) = (1/2)[ M(f − f_0) + M(f + f_0) ]

Modulation theorem: m(t) cos(ω_0 t) ↔ (1/2)[ M(f − f_0) + M(f + f_0) ]


Table 2.2: Short Table of Fourier Transforms

x(t)                              X(f)
rect(t/T)                         T sinc(fT)
sinc(2Wt)                         (1/2W) rect(f/2W)
e^{−πt^2}                         e^{−πf^2}
Λ(t/T)                            T sinc^2(fT)
sinc^2(t)                         Λ(f)
δ(t)                              1
1                                 δ(f)
δ(t − t_0)                        e^{−j2πft_0}
e^{j2πf_c t}                      δ(f − f_c)
cos(2πf_c t)                      (1/2)[ δ(f − f_c) + δ(f + f_c) ]
sin(2πf_c t)                      (1/2j)[ δ(f − f_c) − δ(f + f_c) ]
x(t) cos(2πf_c t)                 (1/2)[ X(f − f_c) + X(f + f_c) ]
sgn(t)                            1/(jπf)
1/(πt)                            −j sgn(f)
u(t)                              (1/2)δ(f) + 1/(j2πf)
Σ_{i=−∞}^{∞} δ(t − iT_0)          (1/T_0) Σ_{n=−∞}^{∞} δ(f − n/T_0)
e^{−at} u(t)                      1/(a + j2πf),   for a > 0
e^{at} u(−t)                      1/(a − j2πf),   for a > 0
e^{−a|t|}                         2a/(a^2 + (2πf)^2),   for a > 0
t e^{−at} u(t)                    1/(a + j2πf)^2,   for a > 0
e^{−at} cos(2πf_0 t) u(t)         (a + j2πf)/[ (a + j2πf)^2 + (2πf_0)^2 ],   for a > 0
e^{−at} sin(2πf_0 t) u(t)         2πf_0/[ (a + j2πf)^2 + (2πf_0)^2 ],   for a > 0


2.5.6 Fourier Transform of Periodic Signals

The Fourier transform of a periodic signal, in a strict mathematical sense, does not exist because

periodic signals are not energy signals. However, we could, in a formal sense write down the Fourier

transform of a periodic signal by Fourier transforming its complex Fourier series term by term (using

the properties of impulse function).

For a periodic signal x(t) with period T, we express x(t) as

x(t) = Σ_{n=−∞}^{∞} c_n e^{jnω_0 t},   where ω_0 = 2π/T   (2.55)

Taking the Fourier transform of both sides and using e^{jω_0 t} = e^{j2πf_0 t} ↔ δ(f − f_0), we obtain

X(f) = Σ_{n=−∞}^{∞} c_n δ(f − nf_0)   (2.56)

Note that the Fourier transform of a periodic signal consists of a sequence of equidistant impulses

located at the harmonic frequencies of the signal.

2.6 CORRELATION FUNCTIONS

2.6.1 Correlation of Energy Signals

Correlation is a process of comparing two signals in order to measure the degree of similarity

(agreement or alignment) between them. It is usually used in signal processing applications in

radar, sonar, digital communications, electronic warfare and many others. In signal

processing, the cross-correlation (or sometimes "cross-covariance") is a measure of

similarity of two signals, commonly used to find features in an unknown signal by comparing

it to a known one. The cross-correlation function of two energy signals, x(t) and y(t), is defined as

φ_xy(τ) = ∫_{−∞}^{∞} x^*(t) y(t + τ) dt = ∫_{−∞}^{∞} x(t) y^*(t − τ) dt   (2.57)

Autocorrelation is a mathematical tool used frequently in signal processing for analysing

functions or series of values, such as time domain signals. It is the cross-correlation of a

signal with itself. Autocorrelation is useful for finding repeating patterns in a signal, such as

determining the presence of a periodic signal which has been buried under noise, or

identifying the fundamental frequency of a signal which doesn't actually contain that


frequency component, but implies it with many harmonic frequencies. The time-

autocorrelation function for a given energy signal x(t) is defined as

φ_x(τ) = ∫_{−∞}^{∞} x^*(t) x(t + τ) dt = ∫_{−∞}^{∞} x(t) x^*(t − τ) dt   (2.58)

For a real signal x(t), the autocorrelation function is given by

φ_x(τ) = ∫_{−∞}^{∞} x(t) x(t + τ) dt   (2.58a)

Setting y = t + τ in equation (2.58) yields

φ_x(τ) = ∫_{−∞}^{∞} x(y) x(y − τ) dy

where y is a dummy variable and could be replaced by t. Thus

φ_x(τ) = ∫_{−∞}^{∞} x(t) x(t ± τ) dt   (2.59)

This shows that for a real x(t), the autocorrelation function is an even function of τ, that is

φ_x(τ) = φ_x(−τ)   (2.60)

The Fourier transform of φ_x(τ) in equation (2.58a) is

F[φ_x(τ)] = ∫_{−∞}^{∞} e^{−j2πfτ} [ ∫_{−∞}^{∞} x(t) x(t + τ) dt ] dτ = ∫_{−∞}^{∞} x(t) [ ∫_{−∞}^{∞} x(t + τ) e^{−j2πfτ} dτ ] dt

where F[x(t + τ)] = X(f) e^{j2πft} (transforming with respect to τ), so that

F[φ_x(τ)] = ∫_{−∞}^{∞} x(t) X(f) e^{j2πft} dt = X(f) ∫_{−∞}^{∞} x(t) e^{j2πft} dt = X(f) X(−f) = X(f) X^*(f) = |X(f)|^2

This shows that:

F[φ_x(τ)] = G(f) = |X(f)|^2   (2.61)

where the energy spectral density (ESD), G(f), is the Fourier transform of the autocorrelation

function. Although this result is proved here for a real signal, it is valid for complex signals also.

Note that the autocorrelation function is a function of τ, not t. Hence, its Fourier transform is

F[φ_x(τ)] = ∫_{−∞}^{∞} φ_x(τ) e^{−j2πfτ} dτ
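Relation (2.61) can be verified numerically for the energy signal x(t) = e^{−at} u(t) that is used in Example 2.8 below. The sketch is illustrative only (Python with NumPy assumed; a = 1 and the test frequencies are arbitrary): the Fourier transform of the estimated autocorrelation is compared with |X(f)|^2 = 1/(a^2 + (2πf)^2).

import numpy as np

a, dt = 1.0, 1e-3
t = np.arange(0.0, 30.0, dt)
x = np.exp(-a * t)                              # x(t) = e^(-at) u(t)

# Autocorrelation phi_x(tau) = integral of x(t) x(t + tau) dt, estimated by a correlation sum
phi = np.correlate(x, x, mode="full") * dt
lags = (np.arange(phi.size) - (t.size - 1)) * dt

for f in (0.0, 0.1, 0.5):
    lhs = np.trapz(phi * np.exp(-2j * np.pi * f * lags), lags).real   # F[phi_x(tau)]
    rhs = 1.0 / (a**2 + (2 * np.pi * f)**2)                           # |X(f)|^2
    print(f, lhs, rhs)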


Example 2.8 : Autocorrelation of energy signals

Find the time autocorrelation function of the signal x(t) = e^{−at} u(t) (where a > 0) and from it

determine the ESD of x(t).

Solutions

We have x(t) = e^{−at} u(t) and x(t − τ) = e^{−a(t−τ)} u(t − τ). The autocorrelation function φ_x(τ) is

given by the area under the product x(t) x(t − τ), as shown in Figure 2.15. Therefore,

φ_x(τ) = ∫_{−∞}^{∞} x(t) x(t − τ) dt = ∫_{τ}^{∞} e^{−at} e^{−a(t−τ)} dt = e^{aτ} ∫_{τ}^{∞} e^{−2at} dt = (1/2a) e^{−aτ}

This is valid for positive τ. We can perform a similar procedure for negative τ. However, we know that

for a real x(t), φ_x(τ) is an even function of τ. Therefore,

φ_x(τ) = (1/2a) e^{−a|τ|}

Figure 2.15 shows the autocorrelation function φx(τ). The ESD, G(f) is the Fourier transform of φx(τ).

From Table 2.2, G(f) is:

G(f) = F[φ_x(τ)] = F[ (1/2a) e^{−a|τ|} ] = (1/2a) . [ 2a/(a^2 + (2πf)^2) ] = 1/[ a^2 + (2πf)^2 ]

Figure 2.15 : Computation of the time autocorrelation function

2.6.2 Power Spectral Density (PSD)

The power spectral density (PSD) is very useful in describing how the power content of

signals and noise is affected by filters and other communication systems. It is very useful in

solving communication problems since power-type models are usually used.

To develop a frequency-domain description of a power signal, we need to know the Fourier transform of the

signal x(t). However, this may pose a problem, because power signals have infinite energy and may

therefore not be Fourier transformable.


To overcome the problem, we consider a truncated version of the signal x(t) (Figure 2.16):

x_T(t) = {  x(t),  −T/2 < t < T/2
            0,     elsewhere          = x(t) . rect(t/T)

Figure 2.16: Limiting process in derivation of PSD

Using equation (2.21) and Figure 2.16, we obtain the average normalized power:

P = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|^2 dt = lim_{T→∞} E_T/T = lim_{T→∞} (1/T) ∫_{−∞}^{∞} |x_T(t)|^2 dt

From Parseval's theorem, equation (2.37) (with x_T(t) ⇔ X_T(f)), the average normalized power is

P = lim_{T→∞} (1/T) ∫_{−∞}^{∞} |X_T(f)|^2 df = ∫_{−∞}^{∞} lim_{T→∞} [ |X_T(f)|^2 / T ] df   (2.62)

The integrand of the right-hand integral has units of watts per hertz and can be defined as the PSD.

Therefore, the PSD, with units of watts per hertz, for a deterministic power signal is defined as

S(f) = lim_{T→∞} |X_T(f)|^2 / T   (2.63)

Note that the PSD is always a real non-negative function of frequency. By integrating S(f) over all

frequency, we can obtain the normalized average power as:


P = ∫_{−∞}^{∞} S(f) df   (2.64)

This result is parallel to the result obtained for energy signals in equation (2.39). The area under the

PSD function is the normalized average power. Observe that the PSD is the time average of ESD of

xT(t) (equation 2.63).

As is the case with ESD, the PSD is also a positive, real and even function of f. If x(t) is a voltage

signal, the units of PSD are volts squared per hertz.

2.6.3 Time-Autocorrelation Function of Power Signals

The autocorrelation function R_x(τ) of a power signal x(t) is defined as

R_x(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x^*(t) x(t + τ) dt = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t) x^*(t − τ) dt   (2.65)

For a real signal x(t), the autocorrelation function is given by

R_x(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t) x(t + τ) dt   (2.65a)

Using the same argument as that used for energy signals (derived in the previous section), we

can show that R_x(τ) is an even function of τ. This means that for a real x(t)

R_x(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t) x(t − τ) dt   and   R_x(τ) = R_x(−τ)

If x(t) is periodic with period T, then

R_x(τ) = (1/T) ∫_{−T/2}^{T/2} x(t) x(t − τ) dt   (2.65b)

For energy signals, the ESD G(f) is the Fourier transform of the autocorrelation function φ_x(τ). A

similar result applies to power signals, where the PSD S(f) is the Fourier transform of the

autocorrelation function R_x(τ). From equation (2.65a) and Figure 2.16, we have

R_x(τ) = lim_{T→∞} (1/T) ∫_{−∞}^{∞} x_T(t) x_T(t + τ) dt = lim_{T→∞} φ_{x_T}(τ)/T   (2.66)

Recalling that F[φ_{x_T}(τ)] = G_T(f) = |X_T(f)|^2, the Fourier transform of the preceding

equation yields


F[R_x(τ)] = lim_{T→∞} |X_T(f)|^2 / T = S(f)   (2.67)

The above proofs are also valid for complex signals. The autocorrelation function and power spectral

density are important tools for system analysis involving random signals.

Some properties of the autocorrelation function R_x(τ):

• R_x(0) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} [x(t)]^2 dt = P_x, the average power   (2.68)

• |R_x(τ)| ≤ R_x(0), a relative maximum exists at the origin   (2.69)

• R_x(−τ) = R_x^*(τ)   (2.70)
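Properties (2.68) and (2.69) are easy to illustrate numerically. The sketch below is not from the original notes (Python with NumPy assumed; the unit square wave used as the test power signal and the chosen lags are arbitrary).

import numpy as np

dt = 1e-3
t = np.arange(0.0, 100.0, dt)
x = np.sign(np.sin(2 * np.pi * t))             # unit square wave, a power signal
T = t[-1]

def R(tau):
    # R_x(tau) ~ (1/T') * integral of x(t) x(t + tau) dt over the available overlap
    k = int(round(tau / dt))
    return np.trapz(x[:x.size - k] * x[k:], dx=dt) / (T - k * dt)

print(R(0.0), np.trapz(x**2, dx=dt) / T)       # (2.68): R_x(0) equals the average power (~1)
print(all(abs(R(tau)) <= R(0.0) + 1e-6 for tau in (0.1, 0.3, 0.7)))   # (2.69): prints True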

Example 2.9 : Autocorrelation of power signals

Find the average autocorrelation function for the sinusoidal signal x(t) = A sin(ω_1 t + θ), where

ω_1 = 2π/T_1. Determine also the power spectral density.

Solutions

From equation 2.65b, the autocorrelation function is given by

R_x(τ) = (1/T_1) ∫_{−T_1/2}^{T_1/2} x(t) x(t − τ) dt = (A^2/T_1) ∫_{−T_1/2}^{T_1/2} sin(ω_1 t + θ) sin[ω_1 (t − τ) + θ] dt
       = (A^2/2T_1) ∫_{−T_1/2}^{T_1/2} [ cos(ω_1 τ) − cos(2ω_1 t − ω_1 τ + 2θ) ] dt
       = (A^2/2) cos(ω_1 τ)

The power spectral density is the Fourier transform of the autocorrelation. Therefore S(f) is

S(f) = F[R_x(τ)] = F[ (A^2/2) cos(ω_1 τ) ] = (A^2/2) . (1/2)[ δ(f − f_1) + δ(f + f_1) ] = (A^2/4)[ δ(f − f_1) + δ(f + f_1) ]
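Example 2.9 can be cross-checked numerically. The following sketch is illustrative only (Python with NumPy assumed; A = 2, T1 = 1 and θ = 0.4 are arbitrary): the time average over one period is compared with (A^2/2) cos(ω_1 τ).

import numpy as np

A, T1, theta = 2.0, 1.0, 0.4            # arbitrary amplitude, period and phase
w1 = 2 * np.pi / T1
dt = 1e-4
t = np.arange(0.0, T1, dt)
x = A * np.sin(w1 * t + theta)

for tau in (0.0, 0.1, 0.25, 0.4):
    k = int(round(tau / dt))
    # Equation (2.65b): average of x(t) x(t - tau) over one period (np.roll wraps periodically)
    Rx = np.mean(x * np.roll(x, k))
    print(tau, Rx, (A**2 / 2) * np.cos(w1 * tau))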


Selected Questions and Answers

Question 1

(a) A periodic signal g(t) is represented by its Fourier series

g(t) = 2 + 4cos(2t - π) + 2cos(3t - π/2)

(i) Draw the trigonometric Fourier spectra of g(t), |Cn| and ∠Cn.

(ii) Sketch the exponential Fourier spectra of g(t).

(iii) By inspection of the exponential Fourier spectra obtained in part a(ii), find the exponential

Fourier series for g(t).

(b) Fourier Transform is the common method used to analyze a non-periodic signal in frequency

domain.

(i) If g(t) has the Fourier transform G(f), show that

g(t) cos(2πf_c t) ⇔ (1/2) G(f − f_c) + (1/2) G(f + f_c)

(ii) Name the theorem for the Fourier transform properties in Part b(i). Using this theorem, find

the Fourier transform of a signal

p(t) = Π(t/T) cos(2πf_c t)

(iii) Is the signal p(t) in Part b(ii) a power or an energy signal? Explain

(iv) Using the result in Part b(iii), determine the power or energy of p(t) if f_c = 1/(2T).


Solution Question 1

Solution a(i) C0 =2, C1 = 0, C2 = 4, C3 = 2, C4 = 0 θ0 = 0, θ1 = 0, θ2 = -π, θ3 = -π/2, θ4 = 0

a(ii) (Sketch: the double-sided exponential Fourier spectra of g(t), amplitude |Vn| and phase ∠Vn plotted against n.)


a(iii)

g(t) = Σ_{n=−4}^{4} V_n e^{jnt}

where V_0 = 2, V_1 = V_{−1} = 0, V_2 = 2e^{−jπ}, V_{−2} = 2e^{jπ}, V_3 = e^{−jπ/2}, V_{−3} = e^{jπ/2}, V_4 = V_{−4} = 0.

Therefore

g(t) = e^{jπ/2} e^{−j3t} + 2e^{jπ} e^{−j2t} + 2 + 2e^{−jπ} e^{j2t} + e^{−jπ/2} e^{j3t}

(b)(i)

F{ g(t) cos(2πf_c t) } = G(f) * F{ (1/2)( e^{j2πf_c t} + e^{−j2πf_c t} ) }
                       = G(f) * (1/2)[ δ(f − f_c) + δ(f + f_c) ]
                       = (1/2) G(f − f_c) + (1/2) G(f + f_c)

(b)(ii)

Since Π(t/T) ↔ T sinc(fT),

P(f) = (T/2) sinc( T(f − f_c) ) + (T/2) sinc( T(f + f_c) )

(b)(iii) p(t) is an energy signal since it has finite total energy and zero average power.

(b)(iv)

E = ∫_{−T/2}^{T/2} cos^2(2πf_c t) dt = T/2 = 1/(4f_c)


Question 2

(a) Determine the Fourier transforms of the signals shown in Figure S2(i) and Figure S2(ii).

(i)

(ii)

Figure S2

(b) Determine whether the following signals are energy-type or power-type. In each case determine the energy or power-spectral density and also the energy or power content of the signal.

(i) x(t) = sinc(t)

(ii) · ∞

−∞=−Λ=

nnttx )2()(

Solution Question 2

(a)

(i)

(ii)


(b) (i) Energy signal

The energy content of the signal is

(ii)

