Perceptual Modeling Through an Auditory-Inspired Sparse Representation EPFL-Idiap-ETH Sparsity Workshop 2015 Raphael Ullmann 1,2 and Hervé Bourlard 1,2 1 Idiap Research Institute and 2 EPFL
Page 1

Perceptual Modeling Through an Auditory-Inspired Sparse Representation

EPFL-Idiap-ETH Sparsity Workshop 2015

Raphael Ullmann1,2 and Hervé Bourlard1,2

1Idiap Research Institute and 2EPFL

Page 2

Perceptual Modeling Through an Auditory-Inspired Sparse Representation — [email protected]

Prediction of Perceived Noise Intrusiveness

[Diagram: Clean Speech Signal → Telecommunication System (+ Noise) → Test Speech Signal → Mean Opinion Score (MOS)]

Page 3


Prediction of Perceived Noise Intrusiveness

[Diagram: Clean Speech Signal → Telecommunication System (+ Noise) → Test Speech Signal → Mean Opinion Score (MOS)]

• 10 real-world noise types at signal-to-noise ratios (SNRs) of 3–40 dB

• 3 datasets (500+ recordings)

• Noise intrusiveness ratings from listeners on a 5-point scale

Page 4


Prediction of Perceived Noise Intrusiveness

[Diagram: Clean Speech Signal → Telecommunication System (+ Noise) → Test Speech Signal → Mean Opinion Score (MOS); an Objective Quality Measure computes an objective (predicted) Mean Opinion Score from the test speech signal]

Page 5


Why Sparsity?

• Traditional approach

• Combine acoustic features (noise level, variance, spectral composition)

• Our study: Focus on low-level sensory coding principles

• Efficient Coding Hypothesis: “(...) our perceptions are caused by the activity of a rather small number of neurons selected from a very large population (...)” [Barlow, 1972]

• Redundancy reduction helps make sense of sensory inputs [Olshausen & Field, 1996]

• Barlow H. B. (1972) Single units and sensation: A neuron doctrine for perceptual psychology? Perception.
• Olshausen B. A., Field D. J. (1996) Sparse Coding with an Overcomplete Basis Set: A Strategy Employed by V1? Neural Computation.

Page 6


Efficient Auditory Coding — Model

• Generative waveform model [Lewicki & Sejnowski,1999]:

• Shiftable kernels φm(t), which can have different lengths

• Use Matching Pursuit to approximate x(t); the pursuit includes translation of the kernels

• One may think of each kernel instance as a population of spiking auditory neurons ➔ “Spike Coding”

•Lewicki M. S., Sejnowski T. J. (1999) Coding time-varying signals using sparse, shift-invariant representations. Adv. NIPS 11.

\hat{x}(t) = \sum_{m=1}^{M} \sum_{i=1}^{I_m} \alpha_{im}\,\phi_m(t - \tau_{im}), \qquad \Phi = \{\phi_1(t), \ldots, \phi_m(t), \ldots, \phi_M(t)\}

or, collecting all shifted kernels into the columns of a matrix, \hat{x} = \Phi\alpha
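The decomposition above can be sketched as a shift-invariant matching pursuit. This is a minimal illustration under simplifying assumptions (fixed number of iterations, brute-force correlation), not the authors' implementation; all function and variable names here are illustrative.

```python
import numpy as np

def matching_pursuit(x, kernels, n_spikes):
    """Greedy shift-invariant matching pursuit (illustrative sketch).

    x        : 1-D signal
    kernels  : list of 1-D unit-norm kernel arrays (may differ in length)
    n_spikes : number of atoms ("spikes") to extract
    Returns a list of (kernel_index, shift, amplitude) triples and the residual.
    """
    residual = x.astype(float).copy()
    spikes = []
    for _ in range(n_spikes):
        best = None
        for m, phi in enumerate(kernels):
            # Correlate the residual with each unit-norm kernel at all shifts.
            corr = np.correlate(residual, phi, mode="valid")
            tau = int(np.argmax(np.abs(corr)))
            if best is None or abs(corr[tau]) > abs(best[2]):
                best = (m, tau, corr[tau])
        m, tau, amp = best
        # Subtract the best-matching shifted, scaled kernel from the residual.
        residual[tau:tau + len(kernels[m])] -= amp * kernels[m]
        spikes.append((m, tau, amp))
    return spikes, residual
```

For example, a signal built from a single shifted, scaled kernel is recovered in one iteration, leaving an essentially zero residual.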

Page 7


Efficient Auditory Coding — Dictionary

• How to choose the dictionary Φ?

➔Learn a dictionary from natural environmental noises [Smith & Lewicki, 2006]

Φ = {φ1(t), …, φm(t), …, φM(t)}

•Smith E. C., Lewicki M. S. (2006) Efficient Auditory Coding. Nature.

[Excerpt from Smith & Lewicki (2006): Figure 1 shows a segment of the word ‘canteen’ represented with ~60 spikes, each marking the temporal position and centre frequency of a kernel function, yielding an accurate reconstruction with little residual. Figure 2 shows that kernel functions optimized for an ensemble of natural sounds (mammalian vocalizations plus transient and ambient environmental sounds) become asymmetric sinusoids with sharp attacks and gradual decays, closely matching revcor filters from cat auditory nerve fibres. © 2006 Nature Publishing Group]

Page 8


Perceptual Model — Dictionary

• Use a dictionary of analytically defined auditory filter shapes (“gammatones”)

• We use 32 gammatones sampled at 16 kHz, generated with Slaney’s toolbox

•Slaney M. (1998) Auditory Toolbox — Version 2. Technical Report. Interval Research Corp.

[Figures: magnitude responses of the gammatone filters (0–30 dB over 100 Hz–10 kHz) and an example gammatone impulse response (amplitude ±0.2 over 0–0.04 s)]
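The slides use gammatones from Slaney's Auditory Toolbox; the sketch below instead uses the textbook 4th-order gammatone formula with the Glasberg & Moore ERB approximation, and log-spaced centre frequencies for simplicity (the toolbox spaces them on an ERB scale). Function names and parameter choices are illustrative.

```python
import numpy as np

def gammatone(fc, fs=16000, dur=0.04, order=4):
    """Unit-norm 4th-order gammatone kernel at centre frequency fc (Hz)."""
    t = np.arange(int(dur * fs)) / fs
    erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)   # Glasberg & Moore ERB (Hz)
    g = (t ** (order - 1)
         * np.exp(-2 * np.pi * 1.019 * erb * t)
         * np.cos(2 * np.pi * fc * t))
    return g / np.linalg.norm(g)

def make_dictionary(n_kernels=32, f_lo=100.0, f_hi=8000.0, fs=16000):
    """Dictionary of gammatones with log-spaced centre frequencies."""
    freqs = np.geomspace(f_lo, f_hi, n_kernels)
    return [gammatone(fc, fs) for fc in freqs]
```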

Page 9


Perceptual Model — Noise Signal Analysis

[Spikegram: spike centre frequency (CF) [Hz], log axis 100–10,000, vs. time [s], 0–0.6]

Kernel instances are localized in time and frequency

Kernel instances are called atoms or “spikes”, each a shifted kernel φj(k)(t)

Page 10


Perceptual Model — Noise Signal Analysis


Compute the number of spikes, i.e., the ℓ0 norm

Page 11


Perceptual Model — Noise Signal Analysis


Compute the ℓ0 norm over time, then take its 5th percentile
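The per-frame spike count and its low percentile can be computed as follows. The slides specify the 5th percentile but not the framing, so the frame length and the histogram-based counting here are illustrative assumptions.

```python
import numpy as np

def intrusiveness_feature(spike_times, duration, frame=0.5, pct=5):
    """Per-frame spike counts (the l0 norm of each frame's coefficient set),
    summarized by a low percentile over time.

    spike_times : times (s) of the atoms found by matching pursuit
    duration    : signal length in seconds
    """
    edges = np.arange(0.0, duration + frame, frame)
    counts, _ = np.histogram(spike_times, bins=edges)
    rate = counts / frame              # spikes per second in each frame
    return float(np.percentile(rate, pct))
```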

Page 12


Perceptual Model — Evaluation

The 5th percentile of the “spike” count over time correlates strongly with subjective noise-intrusiveness scores

[Figure: subjective noise intrusiveness (1–5) vs. 5th-percentile count of kernel instances [“spikes”/s] (0–4,000), shown separately for Sets 1, 2, and 3]
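A generic way to quantify this relationship is to fit a least-squares mapping from the feature to the MOS scale and report correlation and RMSE. This is only a sketch of the evaluation idea; the study's actual mapping and error metric may differ.

```python
import numpy as np

def fit_and_evaluate(feature, mos):
    """Fit a least-squares line from an objective feature to MOS and return
    the Pearson correlation and the RMSE of the fitted predictions."""
    feature = np.asarray(feature, dtype=float)
    mos = np.asarray(mos, dtype=float)
    a, b = np.polyfit(feature, mos, 1)        # linear mapping feature -> MOS
    pred = a * feature + b
    r = float(np.corrcoef(feature, mos)[0, 1])
    rmse = float(np.sqrt(np.mean((pred - mos) ** 2)))
    return r, rmse
```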

Page 13


Why Does It Work? — Because of Greedy Pursuit

• The decrease of spike energies over iterations (black line) depends on the signal type

• White noise is a kind of “worst case”: it does not correlate well with any kernel in the dictionary

• Logarithmic changes in sound energy produce linear changes in spike counts

• Greedy decomposition captures high-energy sounds first
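The log-to-linear relationship can be illustrated with an idealized model in which each greedy iteration removes a fixed fraction of the remaining energy: the number of spikes needed to reach a fixed residual threshold then grows linearly with the input level in dB. The fraction and threshold below are arbitrary illustrative values, not parameters from the study.

```python
def iterations_to_threshold(energy0, fraction=0.9, threshold=1e-7):
    """Assuming a greedy pursuit removes a constant fraction of the residual
    energy per spike (an idealization), count the spikes needed to bring the
    residual energy below a fixed threshold."""
    n, e = 0, float(energy0)
    while e > threshold:
        e *= (1.0 - fraction)   # each spike removes `fraction` of the energy
        n += 1
    return n
```

Scaling the input energy by a constant factor (a fixed dB increase) adds a constant number of iterations, i.e., spike count is linear in log-energy.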

[Figure: energy of spikes/residual [Pa²] (10⁻⁷ to 10³, log scale) vs. MP iteration (0–30,000), for white noise, a pure tone, and speech]

Page 14


Why Does It Work? — Because of the Dictionary

Some tests with narrowband noises

[Figure: equivalent spike increment [spikes/s] (0–1,500) vs. noise CF [Hz] (left) and noise BW [Hz] (right), 100–8,000 Hz, comparing sparse spike coding with ITU-R 468 weighting and ISO 532 B loudness; dotted line marks 1 ERB]

Page 15


Why Does It Work? — Because of the Dictionary

Some tests with narrowband noises

• ERB-wide noise at varying center frequencies (CF)

➔Spike count similar to noise weighting curves


Page 16


Why Does It Work? — Because of the Dictionary

Some tests with narrowband noises

• ERB-wide noise at varying center frequencies (CF)

➔Spike count similar to noise weighting curves

• Fixed center frequency, increasing noise bandwidth

➔Spike count increases above auditory bandwidth (dotted line)
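The auditory bandwidth referred to here (the dotted 1-ERB line) follows the standard Glasberg & Moore approximation of the equivalent rectangular bandwidth; this is a sketch of that textbook formula, not code from the study.

```python
def erb_bandwidth(fc_hz):
    """Equivalent rectangular bandwidth (Hz) of the auditory filter at centre
    frequency fc_hz, per the Glasberg & Moore approximation:
    ERB(f) = 24.7 * (4.37 * f/1000 + 1)."""
    return 24.7 * (4.37 * fc_hz / 1000.0 + 1.0)
```

At 1 kHz this gives roughly 133 Hz, so noise wider than that engages additional auditory filters, consistent with the spike count rising once the noise bandwidth exceeds 1 ERB.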


Page 17


Results — Comparison to Other Measures

• Comparison to widely used acoustic indicators

• Noise level in decibels with “A” frequency weighting, denoted “dB(A)”

• Loudness (a psychoacoustic model of perceived sound intensity)

➔ Significantly lower prediction error (** p < 0.01) on 2 datasets

Measure (Prediction Error — lower is better)    Set 1     Set 2     Set 3
Weighted Level [dB(A) SPL]                      0.230     0.277     0.234
Mean Loudness [sone]                            0.257     0.206     0.197
5th Percentile Loudness [sone]                  0.191     0.234     0.270
5th Percentile Density [spikes/s]               0.087**   0.117**   0.231

** p < 0.01

Page 18


Results — (In)sensitivity to Parameters

• Robust to changes in dictionary design

[Figure: stacked prediction errors (RMSE, Sets 1–3) vs. the number and type of kernels in the dictionary — gammatones (ERB-spaced), gammatones (log-spaced), and Gabors (log-spaced), each with 16, 32, or 64 kernels]

Page 19


Conclusion

• We are doing audio processing, not speech processing

• The number of “spikes” reflects the level and the type of noise

• The sparsity of the noise over time correlates strongly with perceived intrusiveness

• Efficient coding hypothesis offers a different interpretation of intrusiveness:

• Complexity of the input stream to the auditory system

• Activations of nerve spike populations in response to noise


Page 20

Thank You for Your Attention

Thanks to:
• Laboratory of Electromagnetics and Acoustics (LEMA), EPFL
• SwissQual AG
• Dr. Marc Ferras and Dr. Mathew Magimai-Doss for useful discussions


