
DC Lecture3


    Lecture #3

Review of Lecture 2

Random Signals

Signal Transmission Through Linear Systems

Bandwidth of Digital Data


    Averages

Time average: the averaged quantity of a single system over a time interval (directly related to the real experiment).

Ensemble average: the averaged quantity of many identical systems at a certain time (a theoretical concept).


    1.1 Ensemble Averages

The first moment of the probability distribution of a random variable X is called the mean value m_X, or expected value, of the random variable X:

m_X = E\{X\} = \int_{-\infty}^{\infty} x\, p_X(x)\, dx

The second moment of the probability distribution is the mean-square value of X:

E\{X^2\} = \int_{-\infty}^{\infty} x^2\, p_X(x)\, dx

Central moments are the moments of the difference between X and m_X; the second central moment is the variance of X:

\mathrm{Var}(X) = E\{(X - m_X)^2\} = \int_{-\infty}^{\infty} (x - m_X)^2\, p_X(x)\, dx

The variance is equal to the difference between the mean-square value and the square of the mean:

\mathrm{Var}(X) = E\{X^2\} - E\{X\}^2
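A minimal numerical check of these definitions, assuming Python with NumPy/SciPy (not part of the lecture) and an arbitrary Gaussian pdf as the example distribution:

```python
# Sketch: verify Var(X) = E{X^2} - E{X}^2 for an assumed Gaussian pdf
# (illustrative only; any pdf could be substituted).
import numpy as np
from scipy.integrate import quad

m_true, sigma = 1.5, 2.0          # assumed mean and standard deviation

def pdf(x):
    """Gaussian pdf p_X(x) with mean m_true and standard deviation sigma."""
    return np.exp(-(x - m_true) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

mean, _ = quad(lambda x: x * pdf(x), -np.inf, np.inf)          # E{X}
mean_sq, _ = quad(lambda x: x ** 2 * pdf(x), -np.inf, np.inf)  # E{X^2}
var, _ = quad(lambda x: (x - mean) ** 2 * pdf(x), -np.inf, np.inf)

print(f"E{{X}} = {mean:.4f}, E{{X^2}} = {mean_sq:.4f}")
print(f"Var(X) = {var:.4f}, E{{X^2}} - E{{X}}^2 = {mean_sq - mean**2:.4f}")
```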


2. Random Processes

A random process X(A, t) can be viewed as a function of two variables: an event A and time t.


2.1 Statistical Averages of a Random Process

A random process whose distribution functions are continuous can be described statistically with a probability density function (pdf).

A partial description consisting of the mean and the autocorrelation function is often adequate for the needs of communication systems.

Mean of the random process X(t):

E\{X(t_k)\} = \int_{-\infty}^{\infty} x\, p_{X_{t_k}}(x)\, dx = m_X(t_k)   (1.30)

Autocorrelation function of the random process X(t):

R_X(t_1, t_2) = E\{X(t_1)\, X(t_2)\} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_{t_1} x_{t_2}\, p_{X_{t_1} X_{t_2}}(x_{t_1}, x_{t_2})\, dx_{t_1}\, dx_{t_2}   (1.31)


    2.2 Stationarity

A random process X(t) is said to be stationary in the strict sense if none of its statistics are affected by a shift in the time origin.

A random process is said to be wide-sense stationary (WSS) if two of its statistics, its mean and its autocorrelation function, do not vary with a shift in the time origin:

E\{X(t)\} = m_X = \text{a constant}   (1.32)

R_X(t_1, t_2) = R_X(t_1 - t_2)   (1.33)


2.3 Autocorrelation of a Wide-Sense Stationary Random Process

For a wide-sense stationary process, the autocorrelation function is only a function of the time difference \tau = t_1 - t_2:

R_X(\tau) = E\{X(t)\, X(t + \tau)\}  for -\infty < \tau < \infty   (1.34)

Properties of the autocorrelation function of a real-valued wide-sense stationary process are:

1. R_X(\tau) = R_X(-\tau): symmetrical in \tau about zero.
2. R_X(\tau) \le R_X(0) for all \tau: the maximum value occurs at the origin.
3. R_X(\tau) \leftrightarrow G_X(f): autocorrelation and power spectral density form a Fourier transform pair.
4. R_X(0) = E\{X^2(t)\}: the value at the origin is equal to the average power of the signal.


    3. Time Averaging and Ergodicity

When a random process belongs to a special class, known as an ergodic process, its time averages equal its ensemble averages.

The statistical properties of such a process can be determined by time averaging over a single sample function of the process.

A random process is ergodic in the mean if

m_X = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t)\, dt   (1.35a)

It is ergodic in the autocorrelation function if

R_X(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t)\, x(t + \tau)\, dt   (1.35b)
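A minimal sketch of these time averages, assuming Python/NumPy and using a random-phase sinusoid as the example ergodic process; the process choice and all parameter values are assumptions for illustration:

```python
# Sketch: time-average estimates (1.35a, 1.35b) for an assumed ergodic process,
# a random-phase sinusoid x(t) = A*cos(2*pi*f0*t + theta), theta ~ U[0, 2*pi).
# Its ensemble mean is 0 and its autocorrelation is (A**2/2)*cos(2*pi*f0*tau).
import numpy as np

rng = np.random.default_rng(0)
A, f0, fs, T = 1.0, 5.0, 1000.0, 20.0      # amplitude, Hz, sample rate, duration (s)
t = np.arange(0, T, 1 / fs)
x = A * np.cos(2 * np.pi * f0 * t + rng.uniform(0, 2 * np.pi))

# Time-average mean over one sample function
m_hat = np.mean(x)

# Time-average autocorrelation for a few lags tau = k/fs
for k in [0, 10, 25, 50]:
    tau = k / fs
    R_hat = np.mean(x[:len(x) - k] * x[k:])
    R_theory = (A ** 2 / 2) * np.cos(2 * np.pi * f0 * tau)
    print(f"tau = {tau:.3f} s   R_hat = {R_hat:+.4f}   ensemble value = {R_theory:+.4f}")

print(f"time-average mean = {m_hat:+.5f} (ensemble mean = 0)")
```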


    5. Noise in Communication Systems

The term noise refers to unwanted electrical signals that are always present in electrical systems, e.g., spark-plug ignition noise, switching transients, and other radiating electromagnetic signals, or natural sources such as the atmosphere, the sun, and other galactic sources.

Thermal noise can be described as a zero-mean Gaussian random process.

A Gaussian process n(t) is a random function whose amplitude at any arbitrary time t is statistically characterized by the Gaussian probability density function

p(n) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left[ -\frac{1}{2} \left( \frac{n}{\sigma} \right)^2 \right]   (1.40)

where \sigma^2 is the variance of n.


    Noise in Communication Systems

The normalized or standardized Gaussian density function of a zero-mean process is obtained by assuming unit variance (\sigma = 1).

Central limit theorem: the sum of a large number of independent random variables tends toward a Gaussian distribution, which is why thermal noise is well modeled as a Gaussian process.


    5.1 White Noise

The primary spectral characteristic of thermal noise is that its power spectral density is the same for all frequencies of interest in most communication systems.

Power spectral density of white noise:

G_n(f) = \frac{N_0}{2} \ \text{watts/hertz}   (1.42)

The autocorrelation function of white noise is a delta function, meaning that any two samples of white noise taken at different times are uncorrelated:

R_n(\tau) = \frac{N_0}{2}\, \delta(\tau)   (1.43)

The average power P_n of white noise is infinite:

P_n = \int_{-\infty}^{\infty} \frac{N_0}{2}\, df = \infty   (1.44)
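A small sketch of these two properties for sampled noise, assuming Python/NumPy; the per-sample variance stands in for the flat density level and all parameters are illustrative:

```python
# Sketch: discrete-time white Gaussian noise.  Samples at different times are
# uncorrelated, so the estimated autocorrelation is ~sigma2 at lag 0 and ~0 at
# other lags, and the averaged periodogram is approximately flat.
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 1.0                                  # assumed per-sample noise variance
n = rng.normal(0.0, np.sqrt(sigma2), size=100_000)

# Sample autocorrelation at a few lags
for k in [0, 1, 5, 20]:
    Rk = np.mean(n[:len(n) - k] * n[k:])
    print(f"lag {k:2d}: R_hat = {Rk:+.4f}")

# Crude PSD estimate (averaged periodogram): should be roughly flat
segs = n.reshape(100, 1000)
psd = np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0) / 1000
print(f"PSD estimate: mean = {psd.mean():.3f}, spread (std) = {psd.std():.3f}")
```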


The effect on the detection process of a channel with additive white Gaussian noise (AWGN) is that the noise affects each transmitted symbol independently.

Such a channel is called a memoryless channel.

The term additive means that the noise is simply superimposed or added to the signal.
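A minimal sketch of such a channel, assuming Python/NumPy, BPSK symbols, and assumed values of Eb and N0 (none of these choices come from the slides):

```python
# Sketch: a memoryless AWGN channel, y = x + n, acting on BPSK symbols.
# Each symbol is hit by an independent Gaussian noise sample (variance N0/2),
# illustrating the "additive" and "memoryless" statements above.
import numpy as np

rng = np.random.default_rng(2)
num_bits = 100_000
Eb, N0 = 1.0, 0.5                                  # assumed energy per bit, noise density

bits = rng.integers(0, 2, num_bits)
x = np.sqrt(Eb) * (2 * bits - 1)                   # BPSK: 0 -> -sqrt(Eb), 1 -> +sqrt(Eb)
n = rng.normal(0.0, np.sqrt(N0 / 2), num_bits)     # independent noise per symbol
y = x + n                                          # additive, memoryless channel

bits_hat = (y > 0).astype(int)                     # threshold detector
ber = np.mean(bits_hat != bits)
print(f"Eb/N0 = {10 * np.log10(Eb / N0):.1f} dB, simulated BER = {ber:.4f}")
```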


Signal Transmission through Linear Systems

A system can be characterized equally well in the time domain or the frequency domain; techniques will be developed in both domains.

The system is assumed to be linear and time invariant.

It is also assumed that there is no stored energy in the system at the time the input is applied.


1. Impulse Response (A.5)

The linear time-invariant system or network is characterized in the time domain by its impulse response h(t), the response to a unit impulse input \delta(t):

h(t) = y(t) when x(t) = \delta(t)   (1.45)

The response of the network to an arbitrary input signal x(t) is found by the convolution of x(t) with h(t):

y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(\tau)\, h(t - \tau)\, d\tau   (1.46)

The system is assumed to be causal, which means that there can be no output prior to the time t = 0 when the input is applied, so the convolution integral can be expressed as

y(t) = \int_{0}^{t} x(\tau)\, h(t - \tau)\, d\tau, \quad t \ge 0   (1.47a)
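A minimal numerical sketch of the convolution, assuming Python/NumPy; the exponential impulse response (the RC low-pass discussed later) and the rectangular pulse input are assumptions made for illustration:

```python
# Sketch: computing y(t) = x(t) * h(t) with a discrete approximation of (1.46).
# h(t) = (1/RC)*exp(-t/RC) for t >= 0 is the causal RC low-pass impulse response.
import numpy as np

fs = 10_000.0                         # sample rate (Hz)
dt = 1 / fs
t = np.arange(0, 0.05, dt)            # 50 ms of time

RC = 1e-3                             # assumed time constant (1 ms)
h = (1 / RC) * np.exp(-t / RC)        # causal impulse response, zero for t < 0

x = np.where(t < 0.01, 1.0, 0.0)      # 10 ms rectangular pulse input

# Discrete approximation of the convolution integral: multiply by dt
y = np.convolve(x, h)[:len(t)] * dt

print(f"peak of y(t) = {y.max():.3f} (approaches 1 because the pulse lasts >> RC)")
```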


    2. Frequency Transfer Function

The frequency-domain output signal Y(f) is obtained by taking the Fourier transform:

Y(f) = H(f)\, X(f)   (1.48)

The frequency transfer function, or frequency response, is defined as

H(f) = \frac{Y(f)}{X(f)}   (1.49)

H(f) = |H(f)|\, e^{j\theta(f)}   (1.50)

The phase response is defined as

\theta(f) = \tan^{-1} \frac{\mathrm{Im}\{H(f)\}}{\mathrm{Re}\{H(f)\}}   (1.51)
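A small sketch relating (1.46) and (1.48)-(1.51), assuming Python/NumPy; the short FIR "system" h and the random input are arbitrary assumptions chosen so the FFT product equals the linear convolution:

```python
# Sketch: the time-domain convolution y = x*h and the frequency-domain product
# Y(f) = H(f) X(f) (1.48) give the same output when both are zero-padded to N.
import numpy as np

rng = np.random.default_rng(3)
N = 256
x = rng.normal(size=64)                 # arbitrary input record
h = np.array([0.5, 0.3, 0.15, 0.05])    # arbitrary causal impulse response

# Time domain: y(t) = x(t) * h(t)
y_time = np.convolve(x, h)              # length 64 + 4 - 1 = 67

# Frequency domain: zero-pad both to N, multiply transforms, invert
X = np.fft.rfft(x, n=N)
H = np.fft.rfft(h, n=N)
Y = H * X                               # (1.48)
y_freq = np.fft.irfft(Y, n=N)[:len(y_time)]

print("max |difference| =", np.max(np.abs(y_time - y_freq)))   # ~machine precision

# Magnitude and phase of the transfer function, (1.50)-(1.51)
mag, phase = np.abs(H), np.angle(H)
print(f"|H| at f = 0: {mag[0]:.3f}, phase at f = 0: {phase[0]:.3f} rad")
```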


    2.1. Random Processes and Linear Systems

If a random process forms the input to a time-invariant linear system, the output will also be a random process.

The input power spectral density G_X(f) and the output power spectral density G_Y(f) are related as

G_Y(f) = G_X(f)\, |H(f)|^2   (1.53)
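A rough numerical check of (1.53), assuming Python/NumPy; the FIR filter taps, record lengths, and the averaged-periodogram PSD estimate are all assumptions made for this illustration:

```python
# Sketch: pass white noise (flat G_X) through a simple FIR filter and check that
# the estimated output PSD matches G_X(f) * |H(f)|^2.
import numpy as np

rng = np.random.default_rng(4)
sigma2 = 1.0                                   # input variance => G_X(f) = sigma2 (flat)
h = np.array([0.4, 0.3, 0.2, 0.1])             # assumed LTI filter

nseg, L = 400, 1024
x = rng.normal(0.0, np.sqrt(sigma2), size=nseg * L)
y = np.convolve(x, h, mode="same")

# Averaged periodograms as PSD estimates (per-sample normalization)
Gx = np.mean(np.abs(np.fft.rfft(x.reshape(nseg, L), axis=1)) ** 2, axis=0) / L
Gy = np.mean(np.abs(np.fft.rfft(y.reshape(nseg, L), axis=1)) ** 2, axis=0) / L

H = np.fft.rfft(h, n=L)                        # filter response on the same frequency grid
ratio = Gy / (Gx * np.abs(H) ** 2)
print(f"median of G_Y / (G_X |H|^2) = {np.median(ratio):.3f}  (should be close to 1)")
```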


3. Distortionless Transmission

What is the required behavior of an ideal transmission line?

The output signal from an ideal transmission line may have some time delay and a different amplitude than the input.

It must have no distortion: it must have the same shape as the input.

For ideal distortionless transmission:

Output signal in the time domain: y(t) = K\, x(t - t_0)   (1.54)

Output signal in the frequency domain: Y(f) = K\, X(f)\, e^{-j 2\pi f t_0}   (1.55)

System transfer function: H(f) = K\, e^{-j 2\pi f t_0}   (1.56)
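A small sketch of a distortionless channel, assuming Python/NumPy; K, t0, the sample rate, and the Gaussian test pulse are assumed values chosen for illustration:

```python
# Sketch: applying H(f) = K * exp(-j*2*pi*f*t0) from (1.56) to a test signal and
# confirming that the output is just a scaled, delayed copy, y(t) = K*x(t - t0).
import numpy as np

fs = 1000.0                                  # samples per second
N = 1024
t = np.arange(N) / fs

x = np.exp(-0.5 * ((t - 0.2) / 0.01) ** 2)   # Gaussian test pulse centred at 0.2 s

K, t0 = 0.5, 0.1                             # gain and delay of the "channel"
f = np.fft.rfftfreq(N, d=1 / fs)
H = K * np.exp(-1j * 2 * np.pi * f * t0)     # constant |H|, linear phase

y = np.fft.irfft(np.fft.rfft(x) * H, n=N)

expected = K * np.exp(-0.5 * ((t - 0.3) / 0.01) ** 2)   # K * x(t - t0)
print("max |y - K*x(t - t0)| =", np.max(np.abs(y - expected)))   # ~machine precision
```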

  • 8/12/2019 DC Lecture3

    24/35

What is the required behavior of an ideal transmission line?

The overall system response must have a constant magnitude response.

The phase shift must be linear with frequency.

All of the signal's frequency components must also arrive with identical time delay in order to add up correctly.

The time delay t_0 is related to the phase shift \theta and the radian frequency \omega = 2\pi f by

t_0 \ \text{(seconds)} = \frac{\theta \ \text{(radians)}}{2\pi f \ \text{(radians/second)}}   (1.57a)

Another characteristic often used to measure delay distortion of a signal is the envelope delay, or group delay:

\tau(f) = -\frac{1}{2\pi} \frac{d\theta(f)}{df}   (1.57b)


3.1. Ideal Filters

For the ideal low-pass filter, the transfer function with bandwidth W_f = f_u hertz can be written as

H(f) = |H(f)|\, e^{-j\theta(f)}   (1.58)

where

|H(f)| = 1 for |f| < f_u, and |H(f)| = 0 for |f| \ge f_u   (1.59)

e^{-j\theta(f)} = e^{-j 2\pi f t_0}   (1.60)

Figure 1.11(b): Ideal low-pass filter.


3.1. Ideal Filters

The impulse response of the ideal low-pass filter:

h(t) = \mathcal{F}^{-1}\{H(f)\} = \int_{-\infty}^{\infty} H(f)\, e^{j 2\pi f t}\, df   (1.61)

= \int_{-f_u}^{f_u} e^{-j 2\pi f t_0}\, e^{j 2\pi f t}\, df = \int_{-f_u}^{f_u} e^{j 2\pi f (t - t_0)}\, df

= 2 f_u\, \frac{\sin 2\pi f_u (t - t_0)}{2\pi f_u (t - t_0)} = 2 f_u\, \mathrm{sinc}\, 2 f_u (t - t_0)   (1.62)
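A minimal sketch evaluating this impulse response, assuming Python/NumPy; f_u and t_0 are assumed values, and np.sinc(x) = sin(pi*x)/(pi*x) matches the sinc definition used above:

```python
# Sketch: evaluating h(t) = 2*fu*sinc(2*fu*(t - t0)) from (1.62).
# Note h(t) != 0 for t < 0, so the ideal filter is noncausal and not realizable.
import numpy as np

fu, t0 = 100.0, 0.02                          # assumed cutoff (Hz) and delay (s)
t = np.linspace(-0.05, 0.1, 1501)

h = 2 * fu * np.sinc(2 * fu * (t - t0))       # (1.62)

peak_t = t[np.argmax(h)]
zero_spacing = 1 / (2 * fu)                   # nulls every 1/(2*fu) seconds around t0
print(f"peak h = {h.max():.1f} at t = {peak_t * 1e3:.1f} ms "
      f"(expected 2*fu = {2 * fu:.0f} at t0 = {t0 * 1e3:.0f} ms)")
print(f"nulls occur every {zero_spacing * 1e3:.1f} ms around the peak")
```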


3.1. Ideal Filters

For the ideal band-pass filter and the ideal high-pass filter, the transfer functions are shown in the corresponding figures.

Figure 1.11(a): Ideal band-pass filter. Figure 1.11(c): Ideal high-pass filter.


3.2. Realizable Filters

The simplest example of a realizable low-pass filter is an RC filter:

H(f) = \frac{1}{1 + j 2\pi f R C} = \frac{e^{-j\theta(f)}}{\sqrt{1 + (2\pi f R C)^2}}   (1.63)

Figure 1.13.


3.2. Realizable Filters

Phase characteristic of the RC filter (Figure 1.13).
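A short sketch of the magnitude and phase of (1.63), assuming Python/NumPy and an assumed time constant RC; the half-power (3 dB) point falls at f = 1/(2*pi*RC):

```python
# Sketch: magnitude and phase of the RC low-pass filter H(f) = 1 / (1 + j*2*pi*f*RC).
import numpy as np

RC = 1.0e-3                                   # assumed time constant: 1 ms
f = np.logspace(1, 5, 9)                      # 10 Hz .. 100 kHz

H = 1.0 / (1.0 + 1j * 2 * np.pi * f * RC)
mag_db = 20 * np.log10(np.abs(H))
phase_deg = np.degrees(np.angle(H))

f3dB = 1 / (2 * np.pi * RC)
print(f"half-power frequency = {f3dB:.1f} Hz")
for fi, m, p in zip(f, mag_db, phase_deg):
    print(f"f = {fi:9.1f} Hz   |H| = {m:6.2f} dB   phase = {p:6.1f} deg")
```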


3.2. Realizable Filters

There are several useful approximations to the ideal low-pass filter characteristic, and one of these is the Butterworth filter:

|H_n(f)| = \frac{1}{\sqrt{1 + (f/f_u)^{2n}}}, \quad n \ge 1   (1.65)

Butterworth filters are popular because they are the best approximation to the ideal, in the sense of maximal flatness in the filter passband.
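A minimal sketch evaluating (1.65) for several orders, assuming Python/NumPy; f_u and the frequency grid are assumptions, and larger n visibly approaches the ideal brick-wall characteristic:

```python
# Sketch: Butterworth magnitude response |H_n(f)| = 1/sqrt(1 + (f/fu)**(2n)) from (1.65).
import numpy as np

fu = 1000.0                                   # assumed upper cutoff frequency (Hz)
f = np.array([100.0, 500.0, 900.0, 1000.0, 1100.0, 2000.0, 5000.0])

print("f (Hz):", f.astype(int).tolist())
for n in (1, 2, 4, 8):
    Hn = 1.0 / np.sqrt(1.0 + (f / fu) ** (2 * n))
    print(f"n = {n}: |H_n| =", np.round(Hn, 3).tolist())   # always 0.707 at f = fu
```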


4. Bandwidth of Digital Data

4.1 Baseband versus Bandpass

An easy way to translate the spectrum of a low-pass or baseband signal x(t) to a higher frequency is to multiply or heterodyne the baseband signal with a carrier wave \cos 2\pi f_c t.

The resulting signal x_c(t) is called a double-sideband (DSB) modulated signal:

x_c(t) = x(t) \cos 2\pi f_c t   (1.70)

From the frequency shifting theorem:

X_c(f) = \frac{1}{2}\left[ X(f - f_c) + X(f + f_c) \right]   (1.71)

Generally the carrier-wave frequency is much higher than the bandwidth of the baseband signal, f_c \gg f_m, and therefore W_{DSB} = 2 f_m.
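A small sketch of (1.70)-(1.71), assuming Python/NumPy; the baseband tone frequency f_m, the carrier f_c, and the sample rate are assumed values chosen so the tones fall on exact FFT bins:

```python
# Sketch: DSB modulation x_c(t) = x(t)*cos(2*pi*fc*t); the spectrum shows
# components at fc - fm and fc + fm, i.e., a bandwidth of 2*fm around the carrier.
import numpy as np

fs = 100_000.0                                # sample rate (Hz)
N = 10_000                                    # => frequency resolution of 10 Hz
t = np.arange(N) / fs

fm, fc = 500.0, 10_000.0                      # baseband tone and carrier (fc >> fm)
x = np.cos(2 * np.pi * fm * t)                # baseband signal
xc = x * np.cos(2 * np.pi * fc * t)           # DSB modulated signal

Xc = np.abs(np.fft.rfft(xc)) / N
f = np.fft.rfftfreq(N, d=1 / fs)

peaks = np.sort(f[np.argsort(Xc)[-2:]])       # two strongest spectral lines
print("spectral peaks at:", peaks.tolist(), "Hz  (expected", [fc - fm, fc + fm], ")")
```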


    4.2 Bandwidth

Theorems of communication and information theory are based on the assumption of strictly bandlimited channels.

The mathematical description of a real signal does not permit the signal to be both strictly duration limited and strictly bandlimited.


    4.2 Bandwidth

All bandwidth criteria have in common the attempt to specify a measure of the width, W, of a nonnegative real-valued spectral density defined for all frequencies |f| < \infty.

The single-sided power spectral density for a pulse x_c(t) takes the analytical form given in (1.73).


    Different Bandwidth Criteria

(a) Half-power bandwidth.

(b) Equivalent rectangular or noise-equivalent bandwidth.

(c) Null-to-null bandwidth.

(d) Fractional power containment bandwidth.

(e) Bounded power spectral density.

(f) Absolute bandwidth.
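As a small numerical illustration of two of these criteria, the sketch below (an assumption-laden example, not taken from the slides) uses the sinc-squared spectral shape produced by a single rectangular pulse of duration T on a carrier f_c, and computes its half-power and null-to-null bandwidths:

```python
# Sketch: half-power and null-to-null bandwidth of an assumed sinc^2 spectrum,
# G(f) = [sin(pi*(f - fc)*T) / (pi*(f - fc)*T)]^2.  T and fc are assumptions.
import numpy as np

T = 1e-3                                       # assumed pulse duration: 1 ms
fc = 100e3                                     # assumed carrier: 100 kHz
f = np.linspace(fc - 5 / T, fc + 5 / T, 200_001)

G = np.sinc((f - fc) * T) ** 2                 # np.sinc(x) = sin(pi*x)/(pi*x)

# (a) Half-power bandwidth: width of the region where G(f) >= 0.5 * max
above = f[G >= 0.5 * G.max()]
half_power_bw = above[-1] - above[0]

# (c) Null-to-null bandwidth: distance between the first nulls, at fc +/- 1/T
null_to_null_bw = 2 / T

print(f"half-power bandwidth  ~ {half_power_bw:.0f} Hz  (~0.88/T)")
print(f"null-to-null bandwidth = {null_to_null_bw:.0f} Hz  (= 2/T)")
```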


Michael Rice, Digital Communications: A Discrete-Time Approach, International Edition. Pearson Higher Education, 2009. Paper, 800 pp. ISBN-10: 0138138222; ISBN-13: 9780138138226.

