Phonological Encoding II: Producing connected speech

Page 1

Phonological Encoding II

Producing connected speech

Page 2

Producing words: Lecture 2

[Diagram: lexical concept TIGER(X) -> lemma Tijger -> word form <Tijger>]

Page 3

Producing words: Lecture 3

[Diagram, as before: lexical concept TIGER(X) -> lemma Tijger -> word form <Tijger>]

Page 4

Producing words: Lecture 4

[Diagram: lexical concept TIGER(X) -> lemma Tijger -> word form <Tijger>, now spelled out into segments /t/ /EI/ /x/ /@/ /r/ and structure 's1(onset nucleus coda) s2(onset nucleus coda)]

Page 5

So what have we got at the end of the day?

[Diagram, as on the previous page: lexical concept TIGER(X) -> lemma Tijger -> word form <Tijger>, with segments /t/ /EI/ /x/ /@/ /r/ and structure 's1(onset nucleus coda) s2(onset nucleus coda)]

Page 6

Lecture 5

[Diagram: as before, plus a further level labelled "Er… Word Forms": the segments are associated with the structure, giving 's1(onset /t/ nucleus /EI/) s2(onset /x/ nucleus /@/ coda /r/)]

Page 7

Levelt’s paradox

• All models of phonological encoding distinguish between the retrieval of content (segments) and the retrieval of structure (a word or syllable template).

• Evidence: properties of speech errors.

• But what is the point of re-ordering the segments if you have already stored their order in the lexicon (in the word form)?

• Answer: the domain of syllabification (and thus of structure) is the phonological word.

Page 8

Phonological word

• Content morpheme, preceded and/or followed by 0 or more closed class morphemes (e.g., inflections, pronouns).

• Examples:

– <understand> + <ing>: un der stan ding

– <understand> + <er>: un der stan der

– <understand> + <her>: un der stan der

Page 9

Syllabification Rules

• Principle of Maximal Onset (Dutch, English)

• Principle of Minimal Coda (Dutch)

• Sonority hierarchy (Universal?): the ideal syllable has a maximal rise in sonority in the onset and a minimal decline in sonority in the coda.

– Vowels > liquids, nasals, glides > the rest
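Not part of the lecture, but as a concrete illustration of the Maximal Onset principle, here is a toy syllabifier sketch in Python; the VOWELS and ONSETS sets are invented stand-ins for the real, language-specific inventories.

# Toy maximal-onset syllabifier (illustration only; real phonotactics are richer).
VOWELS = {"i", "EI", "a", "@", "o"}                                     # rough SAMPA-style vowel set
ONSETS = {("t",), ("x",), ("r",), ("d",), ("m",), ("h",), ("s", "t")}   # toy legal onsets

def syllabify(segments):
    """Split a segment list into syllables, giving each syllable the longest legal onset."""
    vowel_idx = [i for i, s in enumerate(segments) if s in VOWELS]
    syllables, start = [], 0
    for j, v in enumerate(vowel_idx):
        if j + 1 == len(vowel_idx):
            syllables.append(segments[start:])              # last syllable keeps the rest as coda
            break
        next_v = vowel_idx[j + 1]
        boundary = next_v                                   # walk left while the onset stays legal
        while boundary > v + 1 and tuple(segments[boundary - 1:next_v]) in ONSETS:
            boundary -= 1
        syllables.append(segments[start:boundary])
        start = boundary
    return syllables

print(syllabify(["t", "EI", "x", "@", "r"]))                # [['t', 'EI'], ['x', '@', 'r']]

With this toy inventory, <Tijger> comes out as 's1(onset /t/ nucleus /EI/) s2(onset /x/ nucleus /@/ coda /r/), as on Page 6.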

Page 10

How does it work in Levelt et al. (1999)?

• Word form(s) are retrieved

• Word forms are spelled out

– Spell-out of segments

– Spell-out of structure (number of syllables and stress)

• Frames are merged

• Segments are placed in the frames, respecting language-specific rules of syllabification

• Syllable nodes are retrieved (from a syllabary)
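A rough sketch of these steps in code, reusing the toy syllabify() from Page 9; the LEXICON entries, the weak (h-less) form of <her>, and the helper names are illustrative assumptions, not the actual WEAVER++ machinery.

# Illustrative pipeline for "demand her", mirroring the steps listed above.
LEXICON = {
    "demand": {"segments": ["d", "i", "m", "a", "n", "d"], "frame": ["S", "S'"]},
    "her":    {"segments": ["@", "r"],                     "frame": ["S"]},   # weak, h-less clitic form
}

def encode(words):
    forms = [LEXICON[w] for w in words]                      # 1. retrieve word forms
    segments = [s for f in forms for s in f["segments"]]     # 2a. spell out segments
    structure = [s for f in forms for s in f["frame"]]       # 2b. spell out structure
    # 3. the merged frame is the phonological word: W(S S' S)
    syllables = syllabify(segments)                          # 4. place segments, resyllabifying
    assert len(syllables) == len(structure)
    return [retrieve_program(syl) for syl in syllables]      # 5. retrieve syllable programs

def retrieve_program(syllable):
    return "[" + "".join(syllable) + "]"                     # stand-in for syllabary access

print(encode(["demand", "her"]))                             # ['[di]', '[man]', '[d@r]']

This reproduces the [di] [man] [d@r] example worked through on Pages 11-17.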

Pages 11-17

Thus (built up step by step across these pages):

[Diagram: the word forms <demand> and <her> are spelled out into their segments (/d/ /i/ /d/ … /h/ /@/ /r/) and their metrical frames, W(S S') and W(S). The two frames are merged into a single phonological-word frame, W(S1 S2' S3). The segments are then attached to the frame slot by slot (onset of S1, nucleus of S1, onset of S2, …), yielding the resyllabified phonological word [di] [man] [d@r], whose syllable programs are finally retrieved from the SYLLABARY.]

Page 18

Properties of the model

• The segments connected to the word form are numbered. Numbers specify attachment order.

• Segments know where to go, and can look at their neighbours:

– If I am a vowel: put me in the nucleus of the next available syllable.

– If I am a consonant: put me in the onset of the next syllable.

– If there is no next syllable: put me in the coda of the current syllable.
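A toy, segment-by-segment rendering of these attachment rules; the LEGAL_ONSETS set is a made-up stand-in for the language-specific legality check that the slide glosses over.

# Left-to-right attachment: each segment chooses its slot by looking at what follows it.
VOWELS = {"i", "a", "@", "EI", "o"}
LEGAL_ONSETS = {("d",), ("m",), ("t",), ("x",), ("r",), ("h",), ("s", "t")}

def attach(segments):
    """Return a (syllable_index, slot) label for every segment."""
    labels, syll = [], 0
    for k, seg in enumerate(segments):
        if seg in VOWELS:
            if labels and labels[-1][1] in ("nucleus", "coda"):
                syll += 1                                   # vowel: nucleus of the next available syllable
            labels.append((syll, "nucleus"))
            continue
        cluster = []                                        # consonants from here up to the next vowel
        for s in segments[k:]:
            if s in VOWELS:
                break
            cluster.append(s)
        vowel_follows = any(s in VOWELS for s in segments[k:])
        if vowel_follows and tuple(cluster) in LEGAL_ONSETS:
            if labels and labels[-1][1] != "onset":         # a new syllable opens here
                syll += 1
            labels.append((syll, "onset"))                  # consonant: onset of the next syllable
        else:
            labels.append((syll, "coda"))                   # no next syllable (or illegal onset): coda
    return labels

print(attach(["d", "i", "m", "a", "n", "d", "@", "r"]))
# -> d i | m a n | d @ r, labelled onset-nucleus | onset-nucleus-coda | onset-nucleus-coda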

Page 19

Properties of the model (2)

• There is a verification mechanism, preventing errors. Thus, if phoneme /d/ is selected, only syllable programs [d*] can be selected.

• There is a suspension/resumption mechanism, allowing for incrementality. Thus, even if /m/, /ae/, etc. are not selected yet, the model can already build the part of the phonological word that corresponds to the first syllable.
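A minimal sketch of the verification step, assuming a hypothetical syllabary keyed by syllable strings (the entries and the helper name are illustrative).

# Verification: a selected phoneme licenses only matching syllable programs.
SYLLABARY = {"di": "gestural score 1", "man": "gestural score 2", "d@r": "gestural score 3"}

def candidate_programs(selected_onset):
    """Return the syllable programs that are compatible with the selected phoneme."""
    return [syl for syl in SYLLABARY if syl.startswith(selected_onset)]

print(candidate_programs("d"))      # ['di', 'd@r'] - only [d*] programs survive the check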

Page 20

Meyer's paradox

• Meyer & Schriefers (1991): picture-word interference, with phonological relatedness.

– TAFEL with tapir vs. jofel

– Early SOA: Effect of Begin-relatedness

– Late SOA: Effect of END-relatedness

• Meyer (1990, 1991): implicit priming with begin- and end-homogeneous sets:

– Lotus, loner, local; murder, ponder, boulder

– Effect of Begin-relatedness only.

Page 21

Explanation

• Explicit priming (p/w interference) speeds up the retrieval of segments. This depends on the time-course of the spoken distractor.

• Implicit priming does not speed up the retrieval of segments. But the participant, when doing a homogeneous set, can prepare part of the phonological word (suspension/resumption mechanism).

Page 22

The Syllabary

• Stored programs for entire syllables, specified as sets of articulatory gestures. That is, abstract instructions to the articulators.

• For example, one such instruction could be to "close the lips" (but not: move the upper lip -8 mm AND move the lower lip +5 mm, following velocity trajectories v1 and v2).

• Thus, these instructions are abstract: they are not sensitive to the external context.
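As a toy illustration of what one stored entry might look like (the gesture labels are invented for exposition, not taken from the lecture or from Levelt et al.):

# A syllabary entry as an abstract gestural score: task-level instructions, not millimetres.
SYLLABARY_ENTRY = {
    "syllable": "d@r",
    "gestures": [
        "tongue-tip closure and release (/d/)",   # abstract task, like "close the lips" above
        "mid-central vowel target (/@/)",
        "tongue-tip constriction (/r/)",
    ],
}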

Page 23

Why a syllabary?

• Phonetic accommodation in speech errors: if phonemes end up in the wrong place, they are pronounced correctly for their environment:

• E.g., tab stops -> tap [stabz] (Fromkin, 1971)

Page 24

Why a syllabary (2)?

• If you do something really often, it is better to store and reuse it than to start from scratch every time.

• The top 500 syllables (out of roughly 12,000) cover 80% of words in English, 85% in Dutch.
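A sketch of how such a coverage figure can be computed from a syllable frequency list; the counts below are made up, and a real corpus list would stand in for them.

# Cumulative token coverage of the N most frequent syllable types (toy counts).
from collections import Counter

syllable_counts = Counter({"d@": 900, "t@": 850, "di": 400, "man": 120, "d@r": 60})   # illustrative

def coverage(counts, top_n):
    """Fraction of syllable tokens covered by the top_n most frequent syllable types."""
    top = [c for _, c in counts.most_common(top_n)]
    return sum(top) / sum(counts.values())

print(round(coverage(syllable_counts, 2), 2))    # 0.75 with these toy counts; ~0.80 for the top 500 in English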

Page 25

Why a syllabary (3)?

• Levelt & Wheeldon (1994): Frequency effects in word production.

• Practice phase: symbol-to-word association.

– %%%%% = Tiger, ***** = Lotus

• Test phase: symbol cue for production.

– %%%%% -> TIGER

Pages 26-27

Why a syllabary (3)?

• Additive effects of word frequency and syllable frequency

• Especially the frequency of the SECOND syllable was important

• Not reducible to syllabic complexity

• HOWEVER: there were confounding factors in the experiment, so the conclusions should not be taken at face value! (Levelt et al., 1999)

Page 28

What about errors?

• WEAVER++ does not make ANY errors: it always ensures that the selected unit at level n+1 is connected to the selected unit(s) at level n.

• Errors were simulated by assuming that this checking mechanism sometimes produces false positives at the level of the syllabary.

• Thus, for the target "red sock": if the syllable program [sed] slips through the check -> anticipation; if [rok] slips through -> perseveration; if both do -> exchange.
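The same logic as a tiny sketch (the function and argument names are mine, not WEAVER++'s):

# Which error results from false positives in the syllabary check, for target "red sock".
def classify(false_positive_sed, false_positive_rok):
    if false_positive_sed and false_positive_rok:
        return "exchange"          # "sed rock"
    if false_positive_sed:
        return "anticipation"      # "sed sock"
    if false_positive_rok:
        return "perseveration"     # "red rock"
    return "correct"               # "red sock"

print(classify(True, False))       # anticipation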

Page 29

Exchange rate: sed rock

• In WEAVER, the probability of a false positive for [sed] is independent of that for [rok]. Both p's are extremely small, so the probability of both occurring together is vanishingly small => effectively 0% exchanges.

• In Dell's model, selected phonemes are turned off. Thus, if /r/ is not selected in word 1, it has an advantage over /s/ for word 2 (because /s/ has been set to 0). See also Dell, Burger, & Svec (Psych. Rev., 1997).
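To put illustrative numbers on the WEAVER argument (the 0.01 figure is made up, not from the paper): if p([sed] false positive) = p([rok] false positive) = 0.01 and the two checks are independent, then p(exchange) = 0.01 x 0.01 = 0.0001, a hundred times rarer than either single error, which is why WEAVER predicts essentially no exchanges.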

Page 30

Exchange rate

• Fromkin (1971) (and Matt, last week): Anticipations could be half-way corrected exchanges! Yew…New York

• Nooteboom (in press): if we assume that the detection probability is the same for anticipations and perseverations, we can estimate the proportion of half-way corrected exchanges.

Page 31

Nooteboom (in press)

(P = perseverations, A = anticipations, E = exchanges; ? = counts to be re-estimated)

                   P       A       E    Total
Corrected        103    442?     42?     587
Not corrected    153     238     175     566
Total            256    680?    217?    1153
                 22%    59%?    19%?    100%

103 : 153 = Acor : 238  =>  Acor = 160
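That is, assuming anticipations are corrected at the same rate as perseverations (103 corrected per 153 uncorrected):

Acor = 238 x (103 / 153) ≈ 160

Ecor = 587 - 103 - 160 = 324, i.e. the 42 observed corrected exchanges plus the 442 - 160 = 282 reclassified "corrected anticipations".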

Pages 32-33

Nooteboom (in press), with the re-estimated counts:

                   P       A       E    Total
Corrected        103     160     324     587
Not corrected    153     238     175     566
Total            256     398     499    1153
                 22%     35%     43%    100%

Weaver           19%     80%      1%

Page 34

Conclusions

• WEAVER++ (as opposed to Dell's model) accounts for resyllabification in running speech.

• Like Dell's model, it captures seriality effects.

• It accounts for the paradoxical RT data found in implicit and explicit priming.

• Its syllable theory is supported by theoretical arguments, but not by conclusive data.

• Unlike Dell's model, it does not predict the occurrence of exchange errors.

