Quantifying Stimulus Frequency Otoacoustic Emissions Empirically

    C. Bergevin

This case study is designed with a practical emphasis on how to quantify uncertainty in experimental data. While the focus here is on a specific type of experiment related to auditory physiology, the general principles described will potentially be of value in a wide range of different empirical contexts (i.e., the basic ideas discussed here apply broadly). In terms of mathematical content, this case study explores the following topics: Fourier analysis, complex variables and propagation of errors.

    1 The Auditory Periphery

The primary purpose of the ear is to convert incoming acoustic stimuli into electrical signals that can subsequently be passed on to the central nervous system (i.e., it acts as a transducer). However, in accomplishing such a task, the ear exhibits remarkable frequency discrimination as well as sensitivity to stimuli encompassing a very large range of amplitudes. For example, the range of amplitudes to which the ear normally responds represents roughly 12 orders of magnitude in energy!

The ear can be roughly divided into three different parts: the outer, middle and inner ear (Fig. 1). Sound enters the external auditory meatus, or ear canal (outer ear), and sets the tympanic membrane (or eardrum) into motion. Coupled to the other side of the tympanic membrane are the three small bones (ossicles) which span the middle ear. To a first approximation, these bones serve as impedance matchers between the air-filled outer world and the fluid-filled inner ear. The final bone, the stapes, is coupled to the oval window, the entrance to the snail-shaped cochlea (inner ear).

The cochlea is a long coiled tube composed of three different chambers. A flexible partition called the basilar membrane (BM) separates the top two chambers (scala vestibuli and scala media) from the bottom one (scala tympani), except at the most apical region of the cochlea, where there is a hole called the helicotrema. The width and thickness of the BM change along the cochlear length, as does the cross-sectional area of the chambers. As the stapes footplate moves in and out in response to sound, pressure variations are set up in the fluid inside the cochlea.

A flexible membrane called the round window allows for outward volume displacements as the stapes pushes inwards (since the cochlear fluid is largely incompressible). This creates a pressure difference across the basilar membrane. As a result, a traveling wave propagates along the BM, as shown in Fig. 2. Each point along the BM resonates at a particular frequency due to its graded mass and stiffness. It is this tonotopic organization that allows the cochlea to function as a spectrum analyzer (see the subsequent section for further discussion). Higher frequencies excite basal regions (near the stapes) while lower frequencies stimulate more apical ones.

Sitting on top of the BM in the scala media is a remarkable structure called the organ of Corti. This structure contains the hair cells (HCs), which effectively act as mechano-electrical transducers, mapping BM motion to action potentials in the auditory nerve fibers (ANFs) innervating the HCs (Fig. 3). A stereociliary bundle, containing a unique set of transduction channels, extends out of the epithelial surface of the HC. As the BM is displaced and moves upwards in the transverse direction, there is a shearing between the BM and the overlying tectorial membrane (TM). This shearing causes a deflection of the stereociliary bundle (shown by the bi-directional arrow in Fig. 1), thereby stimulating the transduction channels [Corey and Hudspeth, 1979]. The scala media is unique in that its fluid composition creates an extremely high potential of about +80 mV (the largest resting potential in the entire body) due to the pumping action of the stria vascularis, which also maintains a large K+ concentration. This potential is quite high compared to the resting potential of the hair cell, which is at about -60 mV. As a result of this large potential difference, small bundle deflections cause appreciable changes in HC membrane potential, triggering synaptic release of neurotransmitter to the innervating neurons.


    Figure 1: Cross-section of the human auditory periphery [A. Greene].

[Figure 2 image (C.D. Geisler, modified): mammalian cochlea uncoiled. Labels: stapes, round window, cochlear partition, helicotrema; vestibular system; to middle ear; acoustic energy; BM stiff & thin at the base, pliant & massive at the apex.]

    Figure 2: Schematic showing BM traveling wave along a straightened (uncoiled) cochlea [Geisler, 1990].


[Figure 3 image. Labels: K+ ions, Ca2+ ions; +80 mV (scala media), -60 mV (inner hair cell); afferent auditory nerve fiber (ANF).]

Figure 3: Schematic showing that the hair cells act as mechano-electrical transducers that convert BM displacement into electrical signals that trigger the synapsed neurons.

There are two distinct types of HCs in the mammalian inner ear: inner hair cells (IHCs) and outer hair cells (OHCs). There are roughly three OHCs for every IHC. IHCs receive the bulk of the afferent innervation (going to the brain) while OHCs receive the bulk of the efferent innervation (coming back down from the brain). Mammalian OHCs appear unique in that they exhibit somatic motility, a process by which the cell changes its length in response to mechanical or electrical stimulation [Brownell et al., 1985]. It is commonly believed that somatic motility plays an important role in cochlear mechanics, acting in some way to provide an amplification mechanism and boost the response of the BM to low-level signals.

    2 Otoacoustic Emissions

In addition to being responsive to sound, the inner ear also emits sound [Kemp, 1978]. These otoacoustic emissions (OAEs) are believed to be a by-product of processes occurring in the inner ear that allow it to achieve its exquisite sensitivity and frequency selectivity. OAEs can arise both with and without an evoking stimulus and can be measured non-invasively. An example of spontaneous emissions (SOAEs) is shown in Fig. 4. A time waveform was recorded from a sensitive microphone placed in the ear canal of an adult with normal hearing. Taking the Fourier transform of the time waveform (see the subsequent section), the spectrum in Fig. 4 shows the various frequency components present in the signal. The tall narrow peaks indicate almost tone-like sounds that are spontaneously (i.e., with no stimulating sound present) coming from the ear.

Given the difficulty of direct physiological measurements (e.g., the fragile mammalian cochlea is completely encased by the hardest bone in the body), OAEs provide a direct and objective measure of the processes of the inner ear. In general, healthy ears emit while impaired ones (i.e., those of individuals with some form of hearing loss) do not. As a result, OAEs have been utilized extensively in clinical settings (e.g., diagnostic hearing screening for adults and newborns, monitoring intra-cranial pressure in head trauma patients). However, further applications are limited by our incomplete understanding of the actual mechanisms in the inner ear that give rise to the emissions. Thus, there is significant motivation to further study OAEs and characterize their underlying generation mechanisms.


[Figure 4 axes: Measured Level (dB SPL), -30 to 40; Frequency (Hz), 0 to 4500.]

    Figure 4: Spontaneous otoacoustic emissions (SOAEs) from an individual human ear.

One type of evoked emission that is of significant interest is the stimulus-frequency emission (SFOAE). This is an emission that is evoked by a single tone. While relatively straightforward to interpret1, SFOAEs are somewhat difficult to measure in that they occur at the same frequency as the stimulus being used to evoke them, but typically at a much lower level. Thus the microphone signal in the ear canal is typically dominated by the stimulus tone. At lower stimulus levels, interference between the SFOAE and the stimulus can be observed as ripples in the measured microphone response (Fig. 5).

    3 Empirical Determination of SFOAEs

3.1 Measuring Time Waveforms From the Ear Canal

In order to measure SFOAEs, the proper equipment and acoustic environment are required. As schematized in Fig. 6, a subject sits in an acoustic isolation booth (to minimize the effects of external noise) with a sensitive probe placed gently in their ear (similar to an insert earphone used with an iPod). This probe contains a microphone (to measure the emission) and two earphones (to provide a stimulus to evoke the emission). Further technical details with regard to measuring SFOAEs are provided in the appendix. An example of an SFOAE frequency sweep in a normal-hearing adult is shown in Fig. 7.

3.2 Fourier Transform

It is worthwhile to provide a brief review of the concept of the Fourier transform. The basic idea is that a signal can be expressed as a sum of linearly-independent basis functions. Similar to a Taylor series expansion (which describes a function, within some region of convergence, as a sum of polynomials), the Fourier transform uses sinusoids as the underlying basis functions. Given that many acoustic sounds in nature derive from some sort of oscillatory behavior (e.g., vocal fold vibration, oscillation of a piano string), it intuitively makes sense that sinusoids are an appropriate choice for the underlying basis function.

In our specific case of measuring OAEs, the Fourier transform allows one to go back and forth between describing a signal in either the time domain or the frequency domain.

1 Many other types of evoked emissions use multi-tone (DPOAEs) or broadband (CEOAEs) stimuli. Furthermore, these emissions are thought to be composed of contributions from different generator regions throughout the cochlea, while SFOAEs are thought to arise in a single localized region.


Figure 5: Frequency dependence of ear-canal pressure and its variation with sound level in a normal human ear. The curves, offset vertically from one another for clarity, represent the normalized amplitude of the ear-canal pressure, Pec, measured with a constant voltage applied to the earphone driver [Shera and Zweig, 1993]. The approximate sensation level of the stimulus tone at 1300 Hz is indicated on the right. At the highest level the pressure amplitude varies relatively smoothly with frequency. As the stimulus level is lowered, sound generated within the cochlea combines with the stimulus tone to create an oscillatory acoustic interference pattern that appears superposed on the smoothly varying background seen at high levels. Near 1500 Hz, the frequency spacing fOAE between adjacent spectral maxima is approximately 100 Hz. [figure and caption taken from Shera and Guinan, 1999]

[Figure 6 image: OAE measurement system. A computer (all acquisition/analysis software coded manually in C) with 24-bit A/D and D/A drives an ER-10C (amplifier); the OAE probe, containing both a microphone and two earphones (to minimize system distortion), is coupled tightly to the ear of a human subject seated in a noise reduction chamber. Signal paths: from mic, to earphones.]

    Figure 6: Schematic showing setup for measuring SFOAEs.


[Figure 7 axes: Magnitude (dB SPL), -40 to 10 (top panel); Phase (cycles), -30 to 0 (bottom panel); Probe Frequency (kHz), 0.5 to 5.]

Figure 7: SFOAE frequency sweep from a single individual human ear. Both magnitude (top) and phase (bottom) are shown. The noise floor is shown by the dashed lines. Error bars indicate the standard error of the mean across the 35 measurements taken at a given frequency. [Lp = 40 dB SPL, Ls = Lp + 15 dB, fs = fp + 40 Hz]

As indicated in Fig. 4, the time waveform from the probe microphone was transformed into its spectral representation, thereby allowing us to see the different frequency components making up the signal. Each component has a unique frequency, amplitude and phase2. In the most general case, we can express a periodic time waveform f(t) as

f(t) = a_0 + Σ_{n=1}^{∞} a_n cos(nt) + Σ_{n=1}^{∞} b_n sin(nt)    (1)

where the coefficients (a_n and b_n) can be explicitly expressed as definite integrals involving f(t). In the continuous case, f(t) can be expressed as

f(t) = ∫_{−∞}^{∞} F(ω) e^{iωt} dω    (2)

where F(ω) is the (complex) Fourier coefficient describing the amplitude and phase of the sinusoidal component at frequency ω = 2πf. Similarly, the transform also goes from the time domain back to the frequency domain:

F(ω) = ∫_{−∞}^{∞} f(t) e^{−iωt} dt    (3)

One technique, called the fast Fourier transform (FFT), allows for a rapid transformation between the time- and frequency-domain representations of a signal. In this case, one uses the discrete Fourier transform (since the time waveform is necessarily sampled at a finite rate over a finite interval). As shown in Fig. 4, the FFT was used to determine F(ω) from the time waveform f(t), of which the magnitude is plotted.

2 Conversely, a component with a given amplitude and phase can be considered as the sum of a cosine and a sine. For example, consider Euler's formula: Ae^{iθ} = A cos(θ) + iA sin(θ).
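As a concrete illustration, the sketch below (in Python with NumPy; the tone frequency, amplitude, and noise level are made-up values, not taken from the text) builds a sampled tone, computes its discrete Fourier transform, and reads off the magnitude and phase at the probe-frequency bin. The probe frequency is quantized so that an integer number of periods fits in the segment, as the appendix recommends.

```python
import numpy as np

sr = 44100                          # sample rate (Hz), as used in the appendix
N = 8192                            # segment length in samples (~186 ms)
T = N / sr                          # segment duration (s)
t = np.arange(N) / sr

# Quantize the probe frequency so an integer number of periods fits in
# the segment; 1.3 kHz is just an illustrative value.
fp = round(1300 * T) / T

# Toy "microphone" signal: a probe tone plus background noise
rng = np.random.default_rng(0)
x = 0.02 * np.cos(2 * np.pi * fp * t + 0.7) + 0.001 * rng.standard_normal(N)

# Discrete Fourier transform; normalize so the bin value reflects amplitude
X = np.fft.rfft(x) / (N / 2)
freqs = np.fft.rfftfreq(N, d=1 / sr)

k = int(round(fp * T))              # exact bin index of the probe frequency
P, theta = np.abs(X[k]), np.angle(X[k])
print(f"fp = {freqs[k]:.1f} Hz: magnitude = {P:.4f}, phase = {theta:.3f} rad")
```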

3.3 Two-Tone Suppression

Given that SFOAEs tend to be small relative to the stimulus used to evoke them (called the probe tone), they are typically hard to measure. However, several techniques have been developed that make use of the nonlinearity present in the ear in order to measure SFOAEs. We will focus on one known as two-tone suppression (2TS), schematized in Fig. 8 [Brass and Kemp, 1993; Shera and Guinan, 1999].



The basic idea is as follows. We present the probe tone under two different conditions. For condition 1, the probe is presented alone, the microphone picking up both the stimulus and the evoked SFOAE. In condition 2, an additional tone (called the suppressor tone) is simultaneously presented. In this case, the SFOAE is effectively removed (due to nonlinearities in the inner ear) and thus the microphone only picks up the stimulus at the probe frequency. By comparing the signal at the probe frequency across the two conditions, the stimulus can effectively be subtracted out and the SFOAE revealed.

Let's flesh this out in further detail. During condition 1, you measure a time waveform. Taking the fast Fourier transform (FFT) of this signal, you can extract the magnitude and phase at the probe frequency (P_n and θ_n respectively). We can express this n'th measurement as the complex quantity P̄_n = P_n e^{iθ_n} for the probe-alone condition. You repeat this process a total of N times to minimize random background noise. Thus, across your N averages, you end up with mean values for the magnitude and phase (P and θ), such that

P̄ = P e^{iθ}    (4)

Furthermore, from the averages you have estimates of the standard deviations (σ_P and σ_θ), such that P ± σ_P and θ ± σ_θ. Similarly for condition 2, you will end up with the magnitude and phase for the probe-plus-suppressor condition, which can be expressed as S̄ = S e^{iφ}, and also have values for σ_S and σ_φ.
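A minimal sketch of this averaging step, again in Python/NumPy (the simulated "true" emission and noise level are hypothetical, and the phase unwrapping assumes the measured phases do not straddle ±π):

```python
import numpy as np

def mean_and_std(Pbar_n):
    """Given N complex FFT coefficients at the probe frequency (one per
    repeated measurement), return the mean magnitude/phase and their
    standard deviations, following the notation of Eq. 4."""
    mags = np.abs(Pbar_n)
    phases = np.unwrap(np.angle(Pbar_n))   # guard against +/- pi jumps
    return mags.mean(), phases.mean(), mags.std(ddof=1), phases.std(ddof=1)

# Toy example: N = 35 noisy repetitions about an underlying P*exp(i*theta)
rng = np.random.default_rng(1)
N = 35
true = 0.02 * np.exp(1j * 0.7)
Pbar_n = true + 0.002 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

P, theta, sigP, sigTheta = mean_and_std(Pbar_n)
print(f"P = {P:.4f} +/- {sigP:.4f}, theta = {theta:.3f} +/- {sigTheta:.3f} rad")
```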

We now define the emission as

P̄_OAE ≡ P̄ − S̄    (5)

that is, the complex difference between the probe-alone and probe-plus-suppressor conditions. As shown in Fig. 8, this essentially amounts to subtracting two vectors in the complex plane. Let

α ≡ |P̄_OAE|,    β ≡ arg(P̄_OAE)    (6)

and let P̄ = a + ib and S̄ = c + id. Then

α = √[(a−c)² + (b−d)²]    (7)

and

β = arctan[(b−d)/(a−c)]    (8)

So now, based upon our two (time-averaged) measurements from the two stimulus conditions, we have an expression for the magnitude and phase of the SFOAE.
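In code, Eqns. 5–8 reduce to a few lines. The sketch below (input values are hypothetical) uses arctan2 rather than a bare arctan so that β lands in the correct quadrant even when a − c is negative:

```python
import numpy as np

def sfoae(P, theta, S, phi):
    """Compute SFOAE magnitude alpha and phase beta (Eqs. 5-8) from the
    mean magnitude/phase of the probe-alone (P, theta) and
    probe-plus-suppressor (S, phi) conditions."""
    a, b = P * np.cos(theta), P * np.sin(theta)   # Pbar = a + ib
    c, d = S * np.cos(phi), S * np.sin(phi)       # Sbar = c + id
    alpha = np.hypot(a - c, b - d)                # Eq. 7
    beta = np.arctan2(b - d, a - c)               # Eq. 8, quadrant-safe
    return alpha, beta

alpha, beta = sfoae(P=0.020, theta=0.70, S=0.019, phi=0.62)
print(f"alpha = {alpha:.5f}, beta = {beta:.3f} rad")
```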

    4 Quantifying Uncertainty in SFOAE Measurements

The basis for determining uncertainty in SFOAE measurements stems from two considerations. First, there is some degree of inherent uncertainty in any particular measurement, most notably due to the influence of background noise. Thus, there is some degree of random fluctuation (which we assume to be normally distributed) from one measurement to another for a given set of stimulus conditions. This is the main reason why we average: to minimize the confounding factor of random error/fluctuations. Averaging also provides the opportunity to quantify how big said fluctuations are (i.e., to determine the standard deviation over the course of the averages). Second, SFOAEs are determined through a suppression paradigm (as outlined in the previous section) that involves a functional composition across two distinct conditions. Thus, we will need to consider propagation of error in order to quantify the desired uncertainty. Keep in mind that our goal is to determine σ_α and σ_β (i.e., to determine the error bars as shown in Fig. 7).


[Figure 8 panels: magnitude-vs-frequency spectra and complex-plane (Re/Im) phasor diagrams for Condition 1 (probe alone; magnitude P, phase θ) and Condition 2 (probe + suppressor). Step 1: present two different conditions: (A) probe tone alone, (B) probe and suppressor tones. Step 2: take the Fourier transform of both (time-averaged) waveforms to obtain magnitude and phase (considered as vectors in the complex plane at the probe frequency only). Step 3: subtract the phasors to obtain the SFOAE (nonlinear suppression): SFOAE revealed!]

Figure 8: Schematic outlining the basic 2TS paradigm. Note that the phasors in the complex plane represent the magnitude and phase at the probe frequency.


4.1 Propagation of Error

Let us first consider the general case3. Suppose that we wish to determine a quantity x that depends upon at least two measured values, u and v, such that

x = f(u, v, ···)    (9)

Suppose that you make N measurements of u and v. Then

x̄ ≈ f(ū, v̄, ···)    (10)

where ū = (1/N) Σ_{i=1}^{N} u_i (and similarly for v̄). From our N measurements, you also have empirical estimates for the variances σ_u² and σ_v². What we now wish to determine is the variance σ_x². In the case that N → ∞,

σ_x² = lim_{N→∞} [ (1/N) Σ_{i=1}^{N} (x_i − x̄)² ]    (11)

where x_i = f(u_i, v_i, ···). Now we wish to express the deviations in x_i as deviations in u_i and v_i (i.e., the quantities we actually measure). To first order,

x_i − x̄ ≈ (u_i − ū)(∂x/∂u) + (v_i − v̄)(∂x/∂v) + ···    (12)

Combining Eqns. 11 and 12, one can show that

σ_x² ≈ σ_u²(∂x/∂u)² + σ_v²(∂x/∂v)² + 2σ_uv²(∂x/∂u)(∂x/∂v) + ···    (13)

where the covariance is σ_uv² = lim_{N→∞} [ (1/N) Σ_{i=1}^{N} (u_i − ū)(v_i − v̄) ]. Equation 13 is commonly known as the error propagation formula.
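Eq. 13 is easy to sanity-check numerically. The sketch below (the function x = uv and the means/spreads are illustrative choices, not from the text) compares the propagated uncertainty against the empirically observed spread of a large simulated sample:

```python
import numpy as np

# Monte Carlo check of the error propagation formula (Eq. 13) for the
# illustrative function x = f(u, v) = u * v with independent u and v.
rng = np.random.default_rng(2)
u_mean, v_mean, sig_u, sig_v = 2.0, 3.0, 0.05, 0.08

u = rng.normal(u_mean, sig_u, size=100_000)
v = rng.normal(v_mean, sig_v, size=100_000)
x = u * v

# Eq. 13 with zero covariance: sigma_x^2 = sigma_u^2 (dx/du)^2 + sigma_v^2 (dx/dv)^2,
# where dx/du = v and dx/dv = u, evaluated at the means.
sig_x_pred = np.sqrt(sig_u**2 * v_mean**2 + sig_v**2 * u_mean**2)
print(f"predicted sigma_x = {sig_x_pred:.4f}, empirical = {x.std(ddof=1):.4f}")
```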

4.2 SFOAEs

For simplicity in deriving the uncertainty in our SFOAE measurements, we assume that there is no covariance, that is, no correlation amongst the various measured quantities4.

Combining Eqns. 7 and 13, one can show that

σ_α² = (1/α²)[ σ_a²(a−c)² + σ_b²(b−d)² + σ_c²(c−a)² + σ_d²(d−b)² ]    (14)

Similarly, combining Eqns. 8 and 13,

σ_β² = [1/((a−c)² + (b−d)²)²][ σ_a²(b−d)² + σ_b²(a−c)² + σ_c²(b−d)² + σ_d²(a−c)² ]    (15)

So now we can express the SFOAE magnitude as α ± σ_α and phase as β ± σ_β based solely upon our measured quantities (i.e., we know the appropriate error bars to plot)!

Also note that in many contexts, the standard error (sometimes also referred to as the standard error of the mean) is commonly plotted to visualize the uncertainty. The standard error is simply the standard deviation divided by the square root of the number of trials:

SE = σ/√N    (16)

3 See Data Reduction and Error Analysis by P.R. Bevington and D.K. Robinson for further reference.

4 One needs to be careful here. In our present case with regard to SFOAEs, this simplifying assumption turns out to be fairly reasonable. However, in many other cases this assumption is not valid and the covariation across the measured quantities needs to be accounted for.
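Putting Eqns. 14 and 15 into code completes the pipeline: from the per-condition means and standard deviations of the real/imaginary parts, we get the error bars on α and β. A sketch (the input numbers are hypothetical, chosen to match the earlier examples in scale):

```python
import numpy as np

def sfoae_uncertainty(a, b, c, d, sig_a, sig_b, sig_c, sig_d):
    """Propagate the uncertainties in the real/imaginary parts of
    Pbar = a + ib and Sbar = c + id into the SFOAE magnitude and phase
    (Eqs. 14 and 15, assuming zero covariance)."""
    alpha2 = (a - c)**2 + (b - d)**2          # alpha^2
    sig_alpha = np.sqrt((sig_a**2 * (a - c)**2 + sig_b**2 * (b - d)**2 +
                         sig_c**2 * (c - a)**2 + sig_d**2 * (d - b)**2) / alpha2)
    sig_beta = np.sqrt((sig_a**2 * (b - d)**2 + sig_b**2 * (a - c)**2 +
                        sig_c**2 * (b - d)**2 + sig_d**2 * (a - c)**2) / alpha2**2)
    return sig_alpha, sig_beta

sa, sb = sfoae_uncertainty(a=0.0153, b=0.0129, c=0.0155, d=0.0110,
                           sig_a=3e-4, sig_b=3e-4, sig_c=3e-4, sig_d=3e-4)
print(f"sigma_alpha = {sa:.5f}, sigma_beta = {sb:.4f} rad")
```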


5 Summary

Using a specific example relevant to auditory physiology, and incorporating aspects of Fourier transforms and complex numbers, we were able to use the propagation of errors to derive the appropriate expression for the uncertainty associated with a specific type of measurement. Since the SFOAE is derived from different intervals (i.e., the two conditions of the suppression paradigm) and thus depends upon multiple variables, we saw how the uncertainty in the various measured quantities propagates through to an uncertainty in the final, desired quantity5. Hopefully it is apparent how easily this routine can be automated by the use of a computer (i.e., you have to do the initial coding, but after that, the computer does all the work for you!).

5 Note that since acoustic sound pressure is typically plotted on a logarithmic scale, the error bars will not be symmetric, as shown in Fig. 7.
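To make footnote 5 concrete: converting α ± σ_α from linear pressure to dB yields asymmetric bars, since the logarithm compresses the upper side more than the lower. A sketch with made-up numbers (p_ref = 20 µPa is the standard reference pressure for dB SPL):

```python
import numpy as np

p_ref = 20e-6                       # reference pressure for dB SPL (Pa)
alpha, sig_alpha = 1.0e-4, 2.5e-5   # hypothetical emission pressure +/- sigma

level = 20 * np.log10(alpha / p_ref)
upper = 20 * np.log10((alpha + sig_alpha) / p_ref) - level
lower = level - 20 * np.log10((alpha - sig_alpha) / p_ref)
print(f"{level:.1f} dB SPL, +{upper:.1f}/-{lower:.1f} dB")  # bars differ
```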


Potential Questions to Explore

• So if we measure the magnitude and phase at the probe frequency for both stimulus conditions, then we have measured values P ± σ_P, θ ± σ_θ, S ± σ_S, and φ ± σ_φ. However, our expressions for the SFOAE and its uncertainty (Eqns. 7 and 8) are in terms of a ± σ_a, b ± σ_b, etc. What is the connection between these two sets of representations, and how does that specifically relate to the numerical estimates for the uncertainties?

    • Measure SFOAEs in your own ear [contact C. Bergevin about this ([email protected])].

• Why might the OAE probe used to measure the emissions have two earphones (instead of just one)?

• Explicitly derive Eqn. 13 from Eqns. 11 and 12.

• Explicitly derive Eqns. 14 and 15.

• Consider Eqn. 14. At what values of α, a, b, etc. is the equation evaluated? Can you explain why?

• In many contexts, data are reported with their 95% confidence intervals. Explain what these intervals represent and (clearly) explain how they relate to the variance and standard error.

• Explain the difference between a sample distribution and a parent distribution. Go one step further to explain how the standard error is connected to both of these.

Appendix - Technical Details On Measuring SFOAEs

For those interested in more specific details, one possible paradigm for measuring SFOAEs is as follows (see Sec. 3.3 first for a description of the suppression paradigm). A desktop computer houses a 24-bit soundcard (Lynx TWO-A, Lynx Studio Technology), whose synchronous input/output is controlled using a custom data-acquisition system. A sample rate of 44.1 kHz is used to transduce signals to/from an Etymotic ER-10C (+40 dB gain). The microphone signal is high-pass filtered with a cut-off frequency of 0.41 kHz to minimize the effects of noise.

The probe earphones are calibrated in situ using flat-spectrum, random-phase noise. Calibrations are typically verified repeatedly throughout the experiment. Re-calibration is performed if the presented level differs by more than 3 dB from the specified value. The stimulus frequency range employed (fp for SFOAEs) is typically 0.5–5 kHz. The suppressor stimulus parameters are commonly as follows: fs = fp + 40 Hz, Ls = Lp + 15 dB. One earphone produces a sinusoidal signal over a 464 ms time window at the probe frequency fp, ramped on/off over 12 ms at the ends of the window. The other earphone also produces a 464 ms signal, but at the suppressor frequency fs, which is ramped on only during the latter half of the window (the first half being silence). The microphone response is extracted from two 186 ms segments of the total waveform, one from the probe alone and one with the probe+suppressor. These segments are extracted at least 20 ms after the end of the ramping-on periods to allow any transient behavior to decay. Thus, the measurements are for the steady-state condition. The Fourier transform of each segment is then computed, and the complex difference of the two Fourier coefficients at fp is defined to be the SFOAE.
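The timing just described is straightforward to script. The sketch below generates the two stimulus waveforms and pulls out the two analysis segments; the raised-cosine ramp shape, the probe frequency, and the amplitude scaling are assumptions for illustration (only the durations, the fs = fp + 40 Hz offset, and the +15 dB suppressor level come from the text):

```python
import numpy as np

sr = 44100                            # sample rate (Hz), per the appendix
total, ramp = 0.464, 0.012            # window and ramp durations (s)
n = int(total * sr)
t = np.arange(n) / sr

def ramped(env, n_ramp):
    """Apply a raised-cosine onset/offset ramp to an envelope, in place."""
    r = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    env[:n_ramp] *= r
    env[-n_ramp:] *= r[::-1]
    return env

fp = 1300.0                           # probe frequency (Hz); illustrative value
f_supp = fp + 40.0                    # suppressor frequency, fs = fp + 40 Hz
Ap = 0.02                             # probe amplitude; arbitrary units
As = Ap * 10**(15 / 20)               # suppressor 15 dB above probe
n_ramp = int(ramp * sr)

probe = Ap * np.cos(2 * np.pi * fp * t) * ramped(np.ones(n), n_ramp)

supp_env = np.zeros(n)                # silent first half, ramped on in second
supp_env[n // 2:] = ramped(np.ones(n - n // 2), n_ramp)
suppressor = As * np.cos(2 * np.pi * f_supp * t) * supp_env

# Extract 186 ms steady-state segments, >= 20 ms after each ramp-on completes
seg = int(0.186 * sr)
i1 = int((ramp + 0.020) * sr)         # condition 1: probe alone (first half)
i2 = n // 2 + i1                      # condition 2: probe + suppressor
segment1 = probe[i1:i1 + seg]
segment2 = (probe + suppressor)[i2:i2 + seg]
```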

For SFOAEs, 35 waveforms are typically averaged, excluding any flagged by an artifact-rejection paradigm [Shera and Guinan, 1999]. Furthermore, all stimulus frequencies are quantized so that an integral number of stimulus periods fits in the response segment. The frequency step-size during sweeps should be small enough to avoid ambiguity during phase unwrapping. Delays associated with the measurement system (such as the analog/digital converters on the sound card) can readily be corrected for in the fp phase before taking the complex difference in the SFOAE suppression paradigm. The noise floor can be defined as the average sound-pressure level centered about (but excluding) the frequency of interest. It is found by averaging the Fourier amplitudes in the ±3 frequency bins adjacent to the OAE frequency. For the stimulus levels commonly used to measure SFOAEs, artifactual system distortion is small compared to the signal levels of the SFOAEs (≈70–80 dB below the evoking stimulus levels and typically beneath the acoustic noise floor).
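A minimal sketch of that noise-floor estimate, assuming X is the FFT of an averaged segment and k is the bin index of the emission frequency (the example spectrum below is synthetic, not measured data):

```python
import numpy as np

def noise_floor(X, k, nbins=3):
    """Estimate the noise floor as the average Fourier magnitude in the
    +/- nbins bins adjacent to (but excluding) bin k, per the text above."""
    idx = np.r_[k - nbins:k, k + 1:k + nbins + 1]
    return np.abs(X[idx]).mean()

# Usage with a synthetic spectrum: broadband noise plus a hypothetical
# emission "peak" placed at bin 1000
rng = np.random.default_rng(3)
X = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)
X[1000] += 50
print(f"peak = {abs(X[1000]):.1f}, noise floor = {noise_floor(X, 1000):.2f}")
```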
