
Hearing aids - a development with digital signal processing devices

by H. G. McAllister, N. D. Black and N. Waterman

Current hearing aids based on analogue technology are often unsuitable for correction of dysfunction and use in particular environments. This article describes a development using digital signal processing devices. Two devices have evolved: a bench resident master hearing aid and a body worn aid. An audiogram matching algorithm is used with the development and the devices have been tested using normal audiometric practice.

The ear1 is the organ of hearing. It is supplied by the eighth cranial nerve, which connects the inner ear to the cerebellum in the brain. The cochlear nerves form part of the vestibulocochlear nerve and are stimulated by air vibrations caused by sound waves. This nerve conveys the decoded sound impulses, detected by the mechanism of the ear, to hearing areas in the cerebral cortex - the area of the brain where sound is perceived. The ear is very sensitive and highly selective. Its sensitivity is demonstrated by the ability to respond to sounds from those barely perceptible to those that can set the whole body vibrating. Its selectivity is demonstrated by, for example, the ability to single out one person speaking in a noisy crowded room, or one musician in an orchestra who is not quite in tune or time.

The ear structure is shown in Fig. 1 and is divided into three distinct parts:

1 The external ear consisting of the auricle (pinna) and external acoustic meatus (auditory canal).

2 The middle ear, an irregular shaped cavity which contains the auditory ossicles, three very small bones called the malleus, incus and stapes.

3 The inner ear comprising the coiled structure called the cochlea, one end of which is connected to the oval window and the other end to the round window.

Acoustic signals presented to the external ear travel up the ear canal and cause movement of the tympanic membrane. This in turn causes mechanical movement of the ossicles, which transmits to the cochlea through the oval window. The cochlea itself is filled with fluid and contains small hair cells. Movement of the oval window causes displacement of the cochlear fluids and movement of the small hair cells. These hair cells are connected to nerve fibres and on movement send an electric signal to the auditory nerve. This nerve impulse is then carried to the brain where it is perceived as sound.

Of all the human body's sensory functions the ear is one of the most prone to being defective. Deafness as a sensory impairment is estimated to affect one in six adults, with the elderly section of the population particularly vulnerable. It results in moderate to severe communication difficulties, dependent on the degree of malfunction, and can lead in the extreme to social isolation. Deafness can result from disease, abnormality, or obstruction in one or more of a number of different parts of the hearing/sensory pathway.2

The human ear, like many electronic devices, possesses its own frequency response. This response varies from person to person and changes dynamically with time. Hearing response is measured in a hospital audiology unit and recorded on a chart known as an audiogram (Fig. 2) which reflects threshold of hearing across the

COMPUTING & CONTROL ENGINEERING JOURNAL DECEMBER 1995


Fig. 1 Ear structure. Reproduced by kind permission of Starkey Laboratories Ltd.

spectrum of speech, normally assumed to be from 125 to 8000 Hz.

Conventional hearing aids

Hearing loss can be partially compensated for through the use of a hearing aid which amplifies acoustic signals with a frequency/gain characteristic which best compensates for the deficiency in hearing. These devices have been reduced in size and power consumption to an extent that they are available in different moulds which fit behind the ear, in the ear at the pinna, or are inserted in the auditory canal. Power is supplied by small zinc-air or mercury cells, which add minimally to the size and weight of the device. They have thus become cosmetically acceptable to many users.

The main problems identified by the deaf community, however, indicate that simple amplification of the signal is not the only criterion necessary for compensation of hearing loss. The problems which must be considered in the development of better hearing aids are:

• Current hearing aids have traditional analogue amplifiers as their compensating element. These amplifiers, while allowing emphasis of high or low frequencies, have otherwise largely fixed frequency responses, whose spectrum cannot always match the hearing loss.

• Most, if not all, patients with a cochlear hearing loss have a smaller than normal dynamic range between the threshold for detecting sounds and the level at which sounds become uncomfortably loud.3 This is referred to as recruitment and results in a signal such as speech being only audible and comfortable over a small range of loudness levels.

• Hearing aid users often complain about the inability to discriminate between two signals of similar loudness levels,4 for example understanding the content of a speech signal in the presence of background noise.

Fig. 2 Audiogram (threshold of hearing in dB against frequency, 250-8000 Hz; air and bone conduction, ISO standard)


In principle, amplification with a hearing aid can overcome loss of sensitivity. To do this effectively, however, amplification must vary with frequency in a manner tailored to the individual loss. Loudness recruitment can be partially overcome by hearing aids that incorporate automatic gain control or compression. This has been incorporated in the more sophisticated (and expensive) hearing aids, but it has been found there are many ways of implementing compression and different methods have varying degrees of effectiveness. Finally, the problem of existing hearing aids amplifying background noise along with the speech often makes it impossible to pick out the desired sound. This ability in normal hearing is identified with an active mechanism within the cochlea5 and its loss results in perceptual distortion which today's hearing aids do nothing to compensate for.

Fig. 3 Hearing-aid components

Digital signal processing based hearing aids

While current hearing aids, based on analogue technology, have the size and cosmetic advantages previously mentioned, it is recognised that to improve the quality of these devices some processing of the signal information will be necessary to overcome the identified deficiencies. Digital signal processing (DSP) devices would appear to offer the best platform to design hearing aids which can process information in real time and hence allow the development of programmable, adaptive, digital hearing aids. The availability of such a device offers the potential for algorithms such as compression and noise reduction to be developed and tested on patients with differing hearing deficiencies. Feedback from such patients, and easy modification of hearing aid parameters by software means, would allow hearing aids to be programmed to individual needs.

Digital hearing aids operate on the principle of converting a band-limited analogue signal from the microphone into discrete time samples. A digital signal processor can either mathematically process the samples directly in the time domain or manipulate them in the frequency domain through spectral transformation. The output is transmitted to the tympanic membrane via acoustical tubing and an earmould. This complete process is shown in the block diagram in Fig. 3.

Audiogram matching

The amplification required from a hearing aid to overcome deficiency in hearing varies across the spectrum of normal hearing frequencies. A severe or mild loss in hearing at any particular frequency point indicates the need for a high or low gain, respectively, to compensate. The characteristic pattern of the patient's audiogram indicates the gain needed to achieve an equalised response across the range of hearing. The frequency/gain characteristic of the hearing aid should thus reflect to some extent the inverse of the audiogram characteristic. With DSP circuitry it is possible to accurately match any audiogram by using digital filtering. The frequency sampling filter6 is one method which allows recursive realisation of finite impulse response (FIR) filters, greatly reducing the number of arithmetic operations and leading to a computationally efficient implementation.
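The article contains no code, so a minimal Python sketch may help fix the idea of the inverse-audiogram gain target. The threshold values and the simple half-gain fraction below are illustrative assumptions, not the authors' data or a formal prescription rule such as NAL:

```python
# Illustrative sketch: deriving a hearing-aid gain target as a scaled
# inverse of the audiogram. Thresholds (dB HL) and the "half gain"
# fraction are assumptions for illustration only.
audiogram = {250: 20, 500: 30, 1000: 45, 2000: 60, 4000: 70}  # dB HL

def target_gain(thresholds, fraction=0.5):
    # Gain rises where the measured loss is greatest, i.e. the
    # frequency/gain characteristic mirrors the audiogram.
    return {f: fraction * loss for f, loss in thresholds.items()}

gains = target_gain(audiogram)
```

Real prescription rules shape this target further (for example to respect the reduced dynamic range caused by recruitment), but the mirroring principle is the same.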

While DSP offers the possibility of a host of algorithms for noise reduction, non-linear amplification and others, the audiogram matching algorithm was chosen as an ‘anchor’ algorithm in order to develop and test the hardware for a digital hearing aid.

Structure of frequency sampling filter

The comb-resonator filter is an example of a filter designed using the frequency sampling method. The basic elements of a frequency sampling filter (FSF) are a digital transversal filter known as a 'comb filter' cascaded with a digital resonator, as shown in Fig. 4.

Mathematical theory of comb-resonator filter

(i) Digital resonator

Consider a digital resonator with a complex conjugate pole-pair on the unit circle in the z-plane and a second-order zero at the origin. The transfer function of such a device can be expressed as:

H(z) = Y(z)/W(z) = z^2 / {[z - exp(jθ)][z - exp(-jθ)]}

Fig. 4 Comb-filter resonator


Such a resonator, with its poles exactly on the unit circle, is at the very limit of stability and thus prone to instability due to noise or other artefact. Therefore, to ensure stability of the system, the poles are moved inside the circle to a radius r, where r < 1, to give:

H(z) = Y(z)/W(z) = z^2 / {[z - r exp(jθ)][z - r exp(-jθ)]}

Transferring this into the time domain as a recurrence formula, successive values of y can be iterated using the difference equation:

y[n] = w[n] + 2r cos(θ)·y[n-1] - r^2·y[n-2]   (1)

The resonator by itself represents an infinite impulse response (IIR) system, with a continuous oscillation whose frequency is determined by the pole positions. Its impulse response can, however, be made finite by cascading with the nonrecursive comb filter.
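The resonator recurrence can be sketched in a few lines of Python (an illustration, not the authors' C/assembler implementation); the values of r and θ are examples chosen to show the near-undamped oscillation:

```python
import math

# Minimal sketch of the second-order resonator recurrence
# y[n] = w[n] + 2r·cos(theta)·y[n-1] - r^2·y[n-2],
# driven here by a unit impulse. r and theta are illustrative.
def resonator(w, r, theta):
    a1, a2 = 2.0 * r * math.cos(theta), -r * r
    y1 = y2 = 0.0               # y[n-1] and y[n-2] state
    out = []
    for wn in w:
        yn = wn + a1 * y1 + a2 * y2
        out.append(yn)
        y1, y2 = yn, y1
    return out

impulse = [1.0] + [0.0] * 63
y = resonator(impulse, r=0.999, theta=2 * math.pi * 0.1)
# With the poles just inside the unit circle the impulse response is a
# very slowly decaying oscillation: an IIR system, as the text notes.
```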

(ii) Comb filter

In analysing this very simple form of non-recursive filter, consider a system where an impulse input causes it to produce one positive and one negative output impulse, separated by m sampling intervals. The transfer function of such a filter in the z-domain is:

W(z)/X(z) = 1 - r^m z^(-m)


Fig. 5 Filter output

where X(z) and W(z) are input and output, respectively. Expressed as a time domain signal this yields:

w[n] = x[n] - r^m x[n-m]   (2)

Composite filter

Substituting the value for w[n] obtained in eqn. 2 into the expression for the resonator, eqn. 1, yields the composite expression for the complete comb filter-resonator combination:

y[n] = x[n] - r^m x[n-m] + 2r cos(θ)·y[n-1] - r^2·y[n-2]

In the z-plane the comb filter effectively places m zeros equally spaced around a circle of radius r. Combining the two responses causes the complex conjugate pole-pair of the resonator to be cancelled by two of the filter's zeros, resulting in a filter which has only z-plane zeros. The first, positive, output from the comb filter starts the resonator oscillating, while the negative impulse stops the oscillation m sampling intervals later. The resultant output is a filter with a sin(x)/x (sinc) frequency response whose centre frequency corresponds to the resonator's pole location, as shown in Fig. 5.
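The pole-zero cancellation can be demonstrated numerically with a short Python sketch (illustrative values of m, r and the pole index, not the article's parameters): cascading the comb filter with a resonator whose pole angle lies on one of the comb zeros produces a response that dies away after m samples.

```python
import math

# Sketch of the comb-resonator cascade: the comb's delayed negative
# impulse cancels the resonator's oscillation after m samples, so the
# overall impulse response is finite. m, r and k are illustrative.
m, r, k = 20, 0.999, 2
theta = 2 * math.pi * k / m          # pole aligned with one comb zero

def comb(x):
    # w[n] = x[n] - r^m x[n - m]
    return [xn - (r ** m) * (x[n - m] if n >= m else 0.0)
            for n, xn in enumerate(x)]

def resonator(w):
    # y[n] = w[n] + 2r·cos(theta)·y[n-1] - r^2·y[n-2]
    a1, a2 = 2.0 * r * math.cos(theta), -r * r
    y1 = y2 = 0.0
    out = []
    for wn in w:
        yn = wn + a1 * y1 + a2 * y2
        out.append(yn)
        y1, y2 = yn, y1
    return out

h = resonator(comb([1.0] + [0.0] * 99))
# h is non-negligible only over roughly the first m samples: the IIR
# resonator has been turned into a finite (sinc-shaped) response.
```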

A complete frequency sampling filter to accommodate the spectrum in question consists of one comb filter exciting a bank of resonators in parallel, each resonator having a different pole location. A close approximation to any frequency-response magnitude characteristic can be achieved by first sampling the frequency function and then superimposing a weighted set of sinc functions around each sample. The outputs of all comb/resonator combinations are then summed to produce the desired response. The composite difference equation is thus:

y[n] = G_1·y_1[n] + G_2·y_2[n] + ... + G_P·y_P[n]

where
P = number of resonators
G_k = gain factor of the kth resonator
y_k[n] = output of the kth resonator (eqn. 1, with pole angle θ_k)

In this application 100 resonators were chosen, giving a frequency resolution of 50 Hz over a bandwidth of 0-5 kHz. The weighting coefficient associated with adjacent resonators is inverted due to phase reversal between the outputs.

Development tools and methods

Development environment

Processors whose instruction execution times are fast enough to perform the multiplications necessary for DSP real-time applications are now available. This technology can be implemented in future hearing aids, if size and power consumption can be significantly reduced. Texas Instruments provides such a set of processors ranging


from the earlier integer TMS320C1x and TMS320C2x processors to the floating-point TMS320C3x range and, more recently, TMS320C5x fixed-point devices. These are available as development kits for use with personal computer systems.

The toolkit used comprises hardware devices and software utilities. The hardware includes the TMS320Cxx itself, an evaluation EVM (development) board on which the TMS320Cxx is mounted, and the computer interface board. The software encompasses utilities for C and assembler code development, and tools for the generation of PC-based applications.

Fig. 6 Digital hearing aid hardware

Hardware

In addition to the cost of the device the main obstacles to be overcome in order to develop a wearable digital hearing aid are:

• reduction in size
• reduction in power consumption.

The frequency sampling filter for this digital aid has been implemented and tested on the Texas Instruments TMS320C30, C31 and C50 digital signal processors. Additionally, two generations of the digital hearing aid (DHA) have been developed and tested. The initial prototype is a bench-resident programmable aid built around the TMS320C30 processor, which can be described as a master digital hearing aid. It is powered by a ±5 V power supply with microphone and receiver housed in separate behind-the-ear (BTE) casings to reduce feedback. It is not suitable for portable patient trials. The second prototype is a wearable hearing aid (Fig. 6) which uses the TMS320C31 processor. The device is small and lightweight enough to be carried on the body, being belt worn, with microphone and receiver again mounted inside the casing of a conventional BTE aid. Power is supplied by four AA-size cells and one PP3 battery, giving a useful working period of up to eight hours without replacement. A block diagram of the portable system is shown in Fig. 7.

The analogue interface is programmed to sample at 12.5 kHz and is connected to the processor through the serial interface. The A/D and D/A converters include anti-aliasing and smoothing switched-capacitor filters, whose clock rate is a ratio of the sampling frequency. The resolution of the devices is 14-bit. Input and output amplifiers provide gain for the analogue signals received at the microphone and delivered to the receiver.

The TMS320C31 is a 32-bit floating-point processor, fabricated in CMOS technology. The device includes 2K of internal RAM and is connected to an external 32 K x 8-bit flash memory. This is used to store the bootstrap code and is accessed by a special boot loader program.


Fig. 7 Wearable hearing aid


Fig. 8 Hearing aid prescription

Since the DSP chip is a CMOS device, the power supply current is related to the rate of switching and inherently only draws current when in transition through the linear region. Therefore the supply current can be reduced by running the processor at a lower clock rate. A signal bandwidth of 5 kHz was considered adequate and a sampling frequency of 12.5 kHz used. With the processor clock rate reduced to 12 MHz, it was possible to implement 479 instructions per sample (two clock cycles per instruction). The algorithm thus only required the processor to run at 45% of its maximum clock rate.
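The instruction budget quoted above can be checked directly (the article's figure of 479 usable instructions per sample presumably allows for a small per-sample overhead within the 480 available slots):

```python
# Instruction-slot arithmetic from the text: a 12 MHz processor clock
# at two clock cycles per instruction gives 6 million instructions per
# second, i.e. 480 instruction slots per sample at 12.5 kHz.
clock_hz = 12e6
cycles_per_instruction = 2
sample_rate_hz = 12.5e3

slots_per_sample = clock_hz / cycles_per_instruction / sample_rate_hz
```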

Software filtering algorithms

An FSF can be realised using either the difference equation directly or through a convolution process, which results in a more efficient implementation. A system's frequency response or transfer function H(f) is the Fourier transform of its impulse response h(t). Furthermore, the system's output signal y(t) is the convolution of h(t) and the input signal x(t). Expressed mathematically:

y(t) = ∫ h(τ) x(t - τ) dτ   (integral over τ from -∞ to ∞)

where τ is a dummy variable that facilitates time shifting in the convolution operation.7

For a discrete impulse response h(n) and input signal x(n), each of length N, the time domain convolution y(n) is expressed as:

y(n) = Σ (k = 0 to N-1) h(k) x(n - k)

Thus each input sample x(n) can result in a corresponding output sample y(n) when convolved with h(n).
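This sample-by-sample view of the convolution can be sketched in Python (illustrative short filter, not the aid's stored impulse response):

```python
# Direct time-domain convolution as in the text: each new input sample
# x[n], convolved with the stored impulse response h, yields one output
# sample y[n]. The short h and x used here are illustrative.
def convolve_sample(h, history):
    # history[k] holds x[n - k]; history[0] is the newest sample
    return sum(hk * xk for hk, xk in zip(h, history))

h = [0.5, 0.3, 0.2]
x = [1.0, 0.0, 0.0, 0.0]
history = [0.0] * len(h)
y = []
for sample in x:
    history = [sample] + history[:-1]   # shift in the newest sample
    y.append(convolve_sample(h, history))
# An impulse input reproduces the impulse response itself.
```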

Filter implementation

Implementation of the digital hearing aid software consists of two separate stages:

• Offline installation of the algorithms: setting up the hearing aid to suit the characteristics of the individual patient audiogram.

• Real-time implementation of the algorithms to process and reproduce the auditory signals.

(i) Hearing aid set-up

The stages involved in prescribing a hearing aid to a patient are shown in Fig. 8. The audiogram provides the weights to be applied to the FSF. Amplitude points from the audiogram at 0.125, 0.25, 0.5, 1, 1.5, 2, 3, 4 and 5 kHz are input to the system from a menu-driven display. At these values the gain of the hearing aid can be adjusted in 5 dB steps. Linear interpolation between input points is used for intermediate points at 50 Hz intervals. Other formal hearing aid prescription rules8 (such as NAL or half gain) can also be applied to the individual audiometric data. Thus the resonator gain factor Gk in the FSF is made available. The filter is simulated in a C coded program which calculates the impulse response values and stores these in the flash ROM.
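The interpolation step above can be sketched as follows; the gain values are illustrative (though kept to 5 dB steps as in the text), and the 250-5000 Hz grid is an assumption about the range covered:

```python
# Sketch of the set-up stage: gains entered at the article's nine
# audiogram frequencies are linearly interpolated to 50 Hz intervals
# to weight the resonators. The gain values are illustrative.
freqs = [125, 250, 500, 1000, 1500, 2000, 3000, 4000, 5000]   # Hz
gains = [5, 10, 20, 30, 35, 40, 45, 45, 40]                   # dB, 5 dB steps

def interpolate(f):
    """Linearly interpolated gain (dB) at frequency f (Hz)."""
    for (f0, g0), (f1, g1) in zip(zip(freqs, gains),
                                  zip(freqs[1:], gains[1:])):
        if f0 <= f <= f1:
            return g0 + (g1 - g0) * (f - f0) / (f1 - f0)
    raise ValueError("frequency outside audiogram range")

# One weight per 50 Hz resonator channel across an assumed 250-5000 Hz grid
grid = [interpolate(f) for f in range(250, 5001, 50)]
```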

(ii) Hearing aid real-time implementation

Operation of the hearing aid in real time is illustrated in Fig. 9. A single push switch on the hearing aid causes the processor to be reset and an interrupt generated. The impulse response values are then downloaded from


Fig. 9 Hearing aid operation



Fig. 10 2 cc coupler match

the flash ROM into on-chip RAM and the device is in operation.

In the executable code two main buffers are used: an impulse response buffer and an input samples buffer. In executing the convolution algorithm, two features of the TMS320C31 which allow very efficient implementation are:

• parallel multiply/add operations
• circular addressing.

The first feature permits a multiplication and an addition in one machine cycle, while the second allows finite buffer lengths for continuous data.
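These two features can be mimicked in Python for illustration (this is a sketch of the idea, not the authors' TMS320C31 assembler): a fixed-length buffer with a wrapping write pointer stands in for circular addressing, and the summed products stand in for the parallel multiply/accumulate.

```python
# Sketch of convolution with circular addressing: instead of shifting
# the whole sample history each period, one write pointer wraps around
# a fixed-length buffer. Filter values are illustrative.
class CircularFir:
    def __init__(self, h):
        self.h = h
        self.buf = [0.0] * len(h)   # input samples buffer
        self.pos = 0                # write pointer, wraps modulo N

    def process(self, x):
        n = len(self.h)
        self.buf[self.pos] = x      # overwrite the oldest sample in place
        # multiply/accumulate h[k] with x[n - k], indices wrapping
        y = sum(self.h[k] * self.buf[(self.pos - k) % n] for k in range(n))
        self.pos = (self.pos + 1) % n
        return y

fir = CircularFir([0.4, 0.3, 0.2, 0.1])
out = [fir.process(v) for v in [1.0, 0.0, 0.0, 0.0, 0.0]]
```

The payoff is that each sample period costs one write plus N multiply/adds, with no data movement; on the DSP the modulo arithmetic is free because the address generator wraps in hardware.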

Calibration and assessment procedures

Acoustic assessment of hearing aids

Hearing aids are usually described in terms of their 'electroacoustic characteristics'. The procedures for testing are standardised by IEC 118 and ANSI S3.22. These standards require that hearing aids be tested in a sound-treated chamber with the aid output attached to a microphone in the chamber using a 2 cc coupler. This coupler, universally used, is a cavity of predetermined shape and volume (2 cm3) used in conjunction with a calibrated microphone adapted to measure the pressure developed within the chamber. A loudspeaker is encased in the lid of the chamber and measurements are made by sweeping the input frequency from 100 Hz to 8 kHz. The output from the coupler unit is measured in dB SPL (sound pressure level) and presented in graphical form against frequency. The performance of a hearing aid is assessed by measuring the following features:

• maximum power output
• gain
• frequency response
• total harmonic distortion
• equivalent input noise.

Hearing-aid test equipment is readily available commercially and contains the acoustic cavity to house the aid under test. Such equipment provides a practically automatic test of all these parameters.

Real ear testing of hearing aids

While a 2 cc coupler provides the aid characteristics, it does not accurately mimic the action of the hearing aid in the ear. Furthermore, since all ears are different, a standard model to mimic this is not possible. The real ear test procedure allows the hearing aid dispenser to see precisely what the gain and output of the hearing aid are while it is inserted in the patient's ear. This is achieved by inserting a probe microphone in the ear canal together with the patient's hearing aid mould.

Subjective testing

Acoustic cavity and real-ear testing yield quantitative values and comparisons. Only patient trials and subjective feedback, however, indicate the users' preferences. This testing must be carried out in a clinical setting, with patients being asked to compare the aid, programmed to fit their needs, with their own currently used hearing aid. This is achieved using word discrimination or speech intelligibility tests. Patients are presented with words and sentences embedded in background noise at varying levels and asked to identify the correct word or sequence of words.

Results

Hearing impaired patients visiting audiology outpatient clinics in a local hospital served as subjects for testing the device. A pure tone audiogram was obtained


Table 1 Characteristics of seven trial subjects

Table 2 Pure-tone thresholds (dB HL) for each subject. A dash indicates that the threshold could not be determined within the limitations of the equipment.

and the results entered into a hearing aid dispensing system. Using a particular prescription method (NAL), the required frequency response was determined. This response was then used to set the weighting parameters for the frequency sampling filter. The hearing aid and appropriate real-ear responses were checked using the probe microphone. Adjustments could be made in situ if necessary.

Figs. 10 and 11 show comparisons between desired points and actual points achieved over the range. Fig. 10 shows the points to be matched across the spectrum and the resultant aid response, measured with the 2 cc coupler. A prescription rule is applied to this response and the resulting match between this target and the real-ear response is shown in Fig. 11. It is evident from these responses that:

• The device is capable of matching any response up to 4 kHz; thereafter it requires more gain than is currently available. With the target points at a frequency resolution of 50 Hz this is much more selective than existing analogue aids and can be more easily and accurately tuned to peaks and troughs across the audiogram spectrum.

• Finer adjustment is needed to provide a correct match with the real-ear system. The significant factor in this measurement is the resonance characteristic of the ear canal, which varies from person to person. This can provide a gain of 10-20 dB at frequencies around 2-3 kHz. A hearing mould inserted into the ear canal causes an occlusion, the effect of which is to change this resonance and reduce the natural gain at these frequencies. This demands a corresponding increase in gain of the aid to compensate.

Fig. 11 Real-ear match

For clinical trials (subjective testing) seven hearing-impaired patients, ranging in age from 47 to 76 years, were selected. The characteristics of this group are indicated in Table 1 and pure tone thresholds in Table 2. All subjects were experienced hearing aid users. Speech perception for sentences rather than isolated words was measured, the measurements being limited to the average level at which 50% of the sentences were repeated correctly. The Institute of Hearing Research Adaptive Sentence List (IHRASL) was used as the test stimulus in a background noise of 60 dB(A). Fig. 12 shows the results of these tests.



Conclusions and future considerations

The overall results show the performance of the aid to be compatible with the NHS aid. However, it is noted that the patients studied had previously been prescribed with an NAL response and fitted with the closest matching NHS aid to this response. This similarity is thus reassuring in assessing aid performance against existing devices.

In considering the future, this development with a basic frequency-matching algorithm, which allows the instrument to be programmed for individual patients, proves the ability of digital technology to perform adequately as a hearing aid. However, many other types of more complex signal-processing algorithms can be developed and implemented on this device, the flexibility of the tool being provided by the software. This creates the opportunity of providing the hearing aid dispenser with a more diverse array of intervention techniques than is currently possible. Examples of differing needs in hearing aids, which may occur individually or in combination, and are not all possible within any one currently available aid, are:

• differing prescription rules
• noise reduction
• loudness control
• speech cue enhancement
• frequency shaping
• frequency shifting
• distortion control.

The need for more accurate fitting procedures is demonstrated by the difference between the gain predicted from coupler measurements and the true in situ gain. This change in functional frequency response by the real ear means there is in fact no single frequency response for a particular aid. An important feature of digital systems is the inclusion of the aid during testing of the patient. Thus these systems offer great flexibility in terms of


Fig. 12 Subjective results

optimising performance with regard to the patient's impairment, the need for prescriptive fitting procedures and easily programmed frequency responses. Size and power consumption can continue to decrease as technology develops. This will be driven not only by hearing aid needs but also by the commercial interest in producing other portable devices, such as personal communication systems, which demand ever smaller, lightweight packaging, and thus low-power-consumption semiconductor chips and more efficient battery power supplies.

Acknowledgments

The authors wish to acknowledge the financial support provided by the Research Corporation Trust, the International Fund for Ireland and the EC STRIDE initiative.

References

1 ROSS and WILSON: 'Anatomy and physiology in health and illness' (Churchill Livingstone, 1987, pp. 253-255)
2 FREELAND, A.: 'Deafness: the facts' (Oxford University Press, 1989, pp. 19-22)
3 MOORE, B. C. J., GLASBERG, B. R., and STONE, M. A.: 'Optimization of a slow-acting automatic gain control system for use in hearing aids', British Journal of Audiology, 1991, 25, pp. 171-182
4 MOORE, B. C. J., LAURENCE, R. F., and WRIGHT, D.: 'Improvements in speech intelligibility in quiet and in noise produced by two-channel compression hearing aids', British Journal of Audiology, 1985, 19, pp. 175-187
5 MOORE, B.: 'The shape of sounds to come', MRC News, Winter 1995, pp. 32-35
6 LYNN, P. A., and FUERST, W.: 'Introductory digital signal processing with computer applications' (John Wiley and Sons, 1992, pp. 194-201)
7 RAMIREZ, W. R.: 'The FFT: fundamentals and concepts' (Prentice Hall, 1985, pp. 50-53)
8 BYRNE, D., and COTTON, S.: 'Evaluation of the National Acoustic Laboratories' new hearing aid selection procedure', J. Speech Hearing Research, 1988, 31, pp. 178-186

© IEE: 1995

H. G. McAllister and N. Waterman are with the School of Information and Software Engineering and N. D. Black is with the School of Electrical & Mechanical Engineering and Northern Ireland Biomedical Engineering Centre, University of Ulster, Jordanstown, Co. Antrim BT37 0QB, UK.
