
JSLHR

Article

Hearing Loss Is Negatively Related to Episodic and Semantic Long-Term Memory but Not to Short-Term Memory

Jerker Rönnberg,a,b,c Henrik Danielsson,a,b,c Mary Rudner,a,b,c Stig Arlinger,a,c Ola Sternäng,d,e Åke Wahlin,d,e and Lars-Göran Nilssond,e

Purpose: To test the relationship between degree of hearing loss and different memory systems in hearing aid users.
Method: Structural equation modeling (SEM) was used to study the relationship between auditory and visual acuity and different cognitive and memory functions in an age-heterogeneous subsample of 160 hearing aid users without dementia, drawn from the Swedish prospective cohort aging study known as Betula (L.-G. Nilsson et al., 1997).
Results: Hearing loss was selectively and negatively related to episodic and semantic long-term memory (LTM) but not short-term memory (STM) performance. This held true for both ears, even when age was accounted for. Visual acuity alone, or in combination with auditory acuity, did not contribute to any acceptable SEM solution.

Conclusions: The overall relationships between hearing loss and memory systems were predicted by the ease of language understanding model (J. Rönnberg, 2003), but the exact mechanisms of episodic memory decline in hearing aid users (i.e., mismatch/disuse, attentional resources, or information degradation) remain open for further experiments. The hearing aid industry should strive to design signal processing algorithms that are cognition friendly.

Key Words: hearing loss, episodic and semantic long-term memory, short-term memory, ease of language understanding (ELU), structural equation modeling (SEM)

There is growing consensus that sensory decline is associated with cognitive decline in old age (Arlinger, 2003; Lindenberger & Baltes, 1994; Nilsson et al., 1997). This finding is well established in studies that analyze age-heterogeneous cross-sectional data, but the picture presented by longitudinal studies is less conclusive (for reviews, see Christensen & Mackinnon, 2004, or Hofer, Berg, & Era, 2003; Li & Lindenberger, 2002; Lövdén, Ghisletta, & Lindenberger, 2005). Furthermore, it is not clear whether a decline in a specific sensory function such as auditory acuity—and associated hearing impairment—is selectively related to cognitive decline or whether the association is dependent on general sensory decline (e.g., a combined auditory and visual factor).

In the seminal studies by Lindenberger and Baltes (1994; Baltes & Lindenberger, 1997), an overall correlation was found between general sensory deficits and age-related cognitive deficits, suggesting some common cause of the decrements rather than specific sensory mechanisms. Both studies used acuity tests of hearing and vision, and the correlations grew stronger with increasing age. Because the correlations with cognitive functions were of similar magnitude for both vision and hearing, it was argued that the simultaneous deficits in sensory and cognitive function reflect widespread neural degeneration—that is, the so-called common-cause hypothesis (Baltes & Lindenberger, 1997).

aLinnaeus Centre HEAD (HEaring and Deafness), Linköping University, Sweden
bThe Swedish Institute for Disability Research, Linköping University
cLinköping University
dStockholm University, Sweden
eStockholm Brain Institute, Sweden
Correspondence to Jerker Rönnberg: [email protected]

Editor: Robert Schlauch
Associate Editor: Stanley Gelfand

Received May 8, 2009
Revision received February 24, 2010
Accepted September 14, 2010
DOI: 10.1044/1092-4388(2010/09-0088)

Journal of Speech, Language, and Hearing Research • Vol. 54 • 705–726 • April 2011 • © American Speech-Language-Hearing Association

Arlinger (2003) pointed out that only 17% of the participants in the Baltes and Lindenberger studies wore hearing aids; thus, it remains to be investigated whether systematic use of hearing aids, leading to improved hearing, can slow cognitive decline.

Studies by Appolonio, Carabellese, Frattola, and Trabucchi (1996) and Cacciatore et al. (1999) suggest that correction of hearing loss has positive effects on cognitive function, but the studies were based on questionnaire data only and need to be verified by behavioral data. Results from the Maastricht Aging Study (Jolles, van Boxtel, Ponds, Metsemakers, & Houx, 1998) show that there is a covariation of sensory decline and deteriorating cognitive function (Valentijn et al., 2005). In particular, the study showed that decline in auditory acuity (dB loss) during a 6-year follow-up study predicted a decline in visual verbal learning performance. However, in this study and in another study by the same research group, hearing aid intervention—thus, improvement of hearing—left cognitive function relatively unaffected (van Hooren et al., 2005; Valentijn et al., 2005).

Other studies that have focused specifically on hearing impairment and cognitive function have shown that the introduction of a hearing aid may actually improve cognitive function. Lehrl, Funk, and Seifert (2005) found that for 70-year-old people with hearing impairment, the introduction of a hearing aid for 2–3 months improved working memory (WM) capacity compared to controls with unaided hearing and those who were matched on IQ, chronological age, and level of hearing loss. However, an earlier study evaluating the effects of 6 months of first-time hearing aid use on cognition was unsuccessful in detecting improvements compared with two age-matched control groups: one group with impaired hearing and one control group with normal hearing (Tesch-Römer, 1997). Finally, a study of the effects of hearing aid use among patients with both hearing impairment and dementia showed no signs of cognitive improvement (Allen et al., 2003). Taken together, these intervention studies are not conclusive.

Thus far, we have concluded that (a) there is a sensory–cognitive connection mainly based on cross-sectional data, where decrements in sensory acuity and cognitive function seem to go hand in hand with chronological aging; (b) we do not know whether there are specific sensory mechanisms that explain the connection or whether this connection is just a manifestation of a general overall decline of neural functions in the brain; and (c) the effects of hearing aid intervention may mitigate cognitive deficits, but the overall findings in the literature are inconclusive.

The overall purpose of the present study was to investigate the link among sensory function, hearing loss in particular, and cognition in hearing aid users. This study focused on statistical modeling of selective relationships between hearing loss and different memory systems. The study used data from the Swedish prospective cohort aging study called Betula (Nilsson et al., 1997, 2004). These data relate to the third test occasion (T3), which was the test occasion with the largest subsample of hearing aid wearers not suffering from dementia. Data were collected from 1998 to 2000. The overall structural equation modeling (SEM) strategy was to use the objective measurements of hearing threshold levels as well as assessments of visual acuity, and a battery of cognitive and memory tests, to evaluate the main hypothesis that there is a sensory-specific link between hearing loss and cognitive functions, even when hearing impairment is corrected using hearing aids. Visual acuity was included as a control variable to be accounted for in the SEM analyses and in order to evaluate different models pertinent to the common-cause hypothesis (Baltes & Lindenberger, 1997).

One general prediction was that even though all participants had used hearing aids—some for extended periods of time (see Method section)—hearing problems would continue to occur in noisy, or otherwise taxing, everyday listening conditions. This is because signal processing algorithms in hearing aids generally cannot provide an optimum listening situation in taxing conditions (Lunner, Rudner, & Rönnberg, 2009). Thus, by including only hearing aid users in the present study, we made a conservative test of the hypothesis that hearing loss is related to memory deficits. However, the focus of the study was not on evaluating the effects of the hearing aids, per se.

Hearing Loss and Memory Systems: Predictions

Although we were generally interested in whether there is a specific link between hearing loss and cognition in hearing aid users, the Betula database allowed us to test further predictions regarding the specificity of the relationship between hearing loss and type of memory system. We focused particularly on the short-term memory (STM) as well as the episodic and semantic long-term memory (LTM) systems (Tulving, 1983). This allowed us not only to be more analytical about the putative relationships of auditory loss and cognition but also to derive several predictions from current models.

One set of predictions was derived from a WM model for ease of language understanding (ELU; Rönnberg, 2003; Rönnberg, Rudner, Foo, & Lunner, 2008). This model proposes that perceived linguistic signals are based on multimodal information, and under optimal conditions, information processing is assumed to involve rapid, automatic, and multimodal binding of phonological information (RAMBPHO). RAMBPHO generates a stream of phonological information that implicitly unlocks the lexicon by matching extracted phonological input (typically syllabic information; Rönnberg, 2003) with stored phonological representations in semantic LTM (Rönnberg et al., 2008; Stenfelt & Rönnberg, 2009). However, when suboptimum conditions prevail (e.g., hearing impairment, inefficient signal processing in a hearing instrument, or noisy conditions), the probability of a mismatch between input and stored phonological representations increases. In the ELU model (Rönnberg, 2003), the mismatch effect is assumed to occur at the syllable level in lexical access and retrieval (cf. Pulvermüller et al., 2001).

Given that mismatch occurs, the model further assumes that explicit resources are invoked to infer and construct the meaning of a message, partly in further interaction with phonological, lexical, and semantic representations retrieved from LTM and partly on the basis of the actual information already decoded and currently held in WM. Such explicit resources involve storage and processing functions of working memory (Daneman & Carpenter, 1980) and have proven to be crucial under suboptimal speech understanding conditions (i.e., Foo, Rudner, Rönnberg, & Lunner, 2007; Lunner, 2003; Lunner & Sundewall-Thorén, 2007; Rudner, Foo, Rönnberg, & Lunner, 2009). In addition, WM supports verbal inference making (i.e., inferring missing pieces of information; Lyxell & Rönnberg, 1989) and is assumed to facilitate executive functioning in terms of updating and switching between linguistic fragments (Lyxell, Andersson, Borg, & Ohlsson, 2003) and between prediction and repair in a dialogue (Ibertsson, Hansson, Asker-Árnason, Sahlén, & Mäki-Torkko, 2009).

For old persons, WM deteriorates (e.g., Bopp & Verhaegen, 2005), and especially for old persons with hearing impairment, explicit WM resources will therefore—by implication of the ELU model (Rönnberg, 2003) and by data obtained (see Rudner et al., 2009)—be crucial for compensating for hearing loss. Thus, WM is important for attaining a reasonable flow, or ease of language understanding, under adverse dialogue conditions (cf. DeDe, Caplan, Kemtes, & Waters, 2004; van der Linden et al., 1999).

Phonological–lexical–conceptual representations belong to the semantic portion of LTM (Tulving, 1983). The ELU model postulates that unlocking of the lexicon is done via phonological representations in semantic memory, which are matched to RAMBPHO-delivered phonological representations abstracted from the perceived input signal. Therefore, to be able to encode verbal information into episodic LTM, an interaction between RAMBPHO-delivered phonological representations abstracted from the perceived input signal and phonological–lexical representations in semantic LTM is a prerequisite. When mismatches occur between the RAMBPHO-delivered information and phonological–semantic representations in LTM, lexical access suffers, and less information is therefore encoded into episodic LTM. In the long term, this is assumed to lead to an increasing disuse of episodic LTM and a subsequent decline of episodic LTM function (Tulving, 1983).

In fact, independent neuroimaging data verify that semantic LTM interacts with episodic LTM by influencing both the encoding and retrieval stages. For example, ventrolateral prefrontal cortex may play a domain-general role in the encoding of item–item associations (phonological or semantic) for later episodic retrieval (Park & Rugg, 2008). In addition, encoding and retrieval operations of episodic memory rely on left and right prefrontal cortex, respectively (see Wheeler, Stuss, & Tulving, 1997, for a comprehensive review)—and dynamic switching between memory systems at retrieval, as a function of pre-training for certain items, also reflects the interaction between semantic and episodic LTM (Kompus, Olsson, Larsson, & Nyberg, 2009).

However, because the mismatch function in the ELU model is dependent both on what is perceived from the signal and the continuous use of the phonological–semantic representation in LTM to match the perceived input, a direct prediction of decline in semantic LTM from a disuse concept is not applicable or logically reasonable (see Alternative Semantic LTM Predictions subsection below).

Apart from the ELU prediction of differences in use between the episodic and semantic LTM systems, there are a number of independent reasons for predicting that episodic memory will suffer relatively more than semantic memory. For example, there is an overall higher sensitivity of episodic memory compared with semantic memory to hippocampal damage (e.g., Vargha-Khadem et al., 1997) and to frontal lobe damage. There is also a stronger association of episodic memory with source amnesia (Janowsky, Shimamura, & Squire, 1989). Episodic memory declines earlier and more rapidly than does semantic memory as a function of age (e.g., Luo & Craik, 2008; Rönnlund, Nyberg, Bäckman, & Nilsson, 2005). Because of the age-related nature of both episodic LTM and hearing impairment—and to assess more rigorously the direct link between hearing impairment and episodic LTM—in statistical modeling, we account for the separate contribution of age to episodic LTM deficits.

Alternative Semantic LTM Predictions

An alternative to conceiving of relative disuse effects on episodic and semantic memory is to consider how the lexicon (i.e., part of semantic LTM) is structurally organized in terms of "neighbors" competing for selection in speech perception tasks. In Luce and Pisoni's (1998) neighborhood activation model (NAM; see also Pisoni, Nusbaum, Luce, & Slowiaczek, 1985), the probability of, for example, perception of words in noise is dependent on neighborhood density—that is, acoustic–phonetic similarity to other words—as well as word frequency. In other words, the neighborhood structure of the mental lexicon in semantic LTM interacts with stimulus words to determine "ease" of lexical discrimination and subsequent word identification.
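For readers unfamiliar with the density notion, the toy sketch below (in Python) counts one-edit "neighbors" in a small invented lexicon and weights them by log frequency; the word forms, frequencies, and the single-edit neighbor rule are illustrative assumptions, not the actual NAM computations of Luce and Pisoni (1998).

```python
# Toy sketch (not the actual NAM): estimate neighborhood density for a word
# as the number of lexicon entries within one edit (substitution, insertion,
# or deletion), plus a log-frequency-weighted variant. All word forms and
# frequencies below are invented for illustration.
import math

def is_neighbor(a: str, b: str) -> bool:
    """True if b can be reached from a by one substitution, insertion, or deletion."""
    if a == b:
        return False
    la, lb = len(a), len(b)
    if abs(la - lb) > 1:
        return False
    if la == lb:  # exactly one substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = (a, b) if la < lb else (b, a)
    # one insertion/deletion: drop each character of the longer form once
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

def neighborhood_stats(word: str, lexicon: dict[str, int]) -> tuple[int, float]:
    """Return (neighbor count, log-frequency-weighted density)."""
    neighbors = [w for w in lexicon if is_neighbor(word, w)]
    weighted = sum(math.log(lexicon[w] + 1) for w in neighbors)
    return len(neighbors), weighted

# Hypothetical mini-lexicon: orthographic forms standing in for phonological ones.
lexicon = {"cat": 900, "bat": 300, "cap": 250, "cut": 700, "dog": 800, "cart": 40}
print(neighborhood_stats("cat", lexicon))  # a "hard" word has many dense, frequent neighbors
```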

The insights offered by the NAM have proven to be important when considering speech perception capabilities in cochlear implantees, especially for word identification but not for mere phoneme identification (Kirk, Pisoni, & Osberger, 1995). Recent research (see, e.g., Lyxell et al., 2009) confirms that children who are prelingually deaf and who have cochlear implants experience problems developing appropriate phonological skills when compared with age-matched controls, whereas visuospatial WM and reading skills are on par with those of age-matched controls. Lazard and colleagues (2010) recently suggested that for adults who are postlingually deaf, the kind of neural networks recruited for phonological processing in rhyme tasks are predictive of later outcome. Furthermore, it has also been shown that neighborhood structures are already picked up in early infancy (Jusczyk, Luce, & Charles-Luce, 1994) and that declines in speech perception ability in old persons may be attributed to a loss of cognitive resources to utilize such lexical structure (Sommers, 1996). It is particularly clear that the effect of "easy" (i.e., high-frequency words in sparse, low-frequency neighborhoods) and "hard" words (i.e., low-frequency words in dense, high-frequency neighborhoods) is exacerbated in old age. Thus, there is reason to expect that the phonological representations in the lexicon may deteriorate with increasing age and hearing impairment and that, as predicted by NAM, phonologically mediated abilities will decline as a function of hearing loss in old persons (cf. Luce & Pisoni, 1998).

In fact, at least for persons with more profound hearing impairments, there is a direct correlation between the deterioration of semantic LTM phonological representations and the number of years a person has spent being hearing impaired (Andersson, 2002; Lyxell et al., 2009). However, phonological skills seem to be sensitive even to temporary fluctuations in hearing ability due to otitis media (Majerus et al., 2005). The question was whether impairment of phonologically related skills would be observed in the present sample of old persons who have only moderate to severe hearing impairment (see Method section).

Alternative Episodic LTM Predictions

In a recent account of episodic LTM function in participants with hearing impairment, Tun, McCoy, and Wingfield (2009) advocate the hypothesis that sensory–perceptual decoding of words costs more, in terms of attentional resources, in old persons than in young persons, especially when hearing acuity is poor. It was shown that recall of word lists was inferior for old participants with poor hearing. The attention costs were measured by means of a visuomotor tracking method, showing that in the dual-task condition of both recalling and tracking, tracking performance was also much lower in the old group with poor hearing. The findings are not a result of audibility issues, as a pretest confirmed that all participants could correctly identify the stimulus words.

The information degradation hypothesis has received empirical support (e.g., Schneider, Daneman, & Pichora-Fuller, 2002) and states that lower episodic memory—including episodic LTM—is explained by impoverished stimulus conditions rather than by actual cognitive deficits in the old participants with hearing impairment (for a review, see Gallacher, 2005). Failure to equate different groups on the perceptual difficulty or the perceptual stress of encoding to-be-remembered items in experimental studies will lead to apparently compromised memory performance in the groups with hearing impairment (Pichora-Fuller, Schneider, & Daneman, 1995; Schneider et al., 2002). When listeners with hearing impairment encode to-be-recalled items, there is always the possibility that recall performance may be affected by perceptual stress, even when their hearing aids have been turned on.

In sum, apart from the common-cause prediction (Baltes & Lindenberger, 1997) that cognitive decline is connected to a general sensory decline, several theoretical accounts predict episodic LTM deficits related to hearing impairment. The proposed mechanisms vary, and they include mismatch/disuse (Rönnberg et al., 2008), attentional resources (Tun et al., 2009), and information degradation (e.g., Schneider et al., 2002). Semantic LTM deficits—as a function of hearing impairment and old age—were assumed to be predicted by the NAM only (Luce & Pisoni, 1998; Sommers, 1996). For STM, the ELU model predicts an intact STM because of the continuous use of WM in reconstruction and repair of messages and dialogical contents in conditions of mismatch. By implication, the attentional resources hypothesis would also predict problems in divided attention conditions of STM. The ELU model predicts a relationship between semantic and episodic LTM because the encoding of items into episodic memory is critically dependent on the match/mismatch between incoming information and the quality of phonological representations in semantic LTM. None of the other accounts are explicit on the relationship between semantic and episodic LTM.

Hearing Loss, Common Cause, and Memory Systems: Tests

In the present study, the hearing impairment latent variables for the best and worst ear were composed of the average hearing losses at four frequencies (500, 1000, 2000, and 4000 Hz). Using loss in the most severely impaired ear optimizes the impairment variability in the sample and reflects subsequent degree of disability under environmentally challenging circumstances. However, it is the better ear that compensates for the loss. Therefore, we compared the two ears in the analyses using SEM.
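A minimal sketch of how such an average-loss predictor can be derived from audiogram data is given below; the data structures and threshold values are hypothetical, and only the four frequencies follow the description above.

```python
# Minimal sketch: pure-tone average over 500, 1000, 2000, and 4000 Hz for each
# ear, then split into "better" and "worse" ear as described above.
# The threshold values below are invented for illustration.
FREQS = (500, 1000, 2000, 4000)

def pta4(thresholds_db_hl: dict[int, float]) -> float:
    """Average hearing level (dB HL) across the four standard frequencies."""
    return sum(thresholds_db_hl[f] for f in FREQS) / len(FREQS)

def better_worse_ear(left: dict[int, float], right: dict[int, float]) -> tuple[float, float]:
    """Return (better-ear average, worse-ear average); lower dB HL = better hearing."""
    left_pta, right_pta = pta4(left), pta4(right)
    return min(left_pta, right_pta), max(left_pta, right_pta)

left = {500: 35, 1000: 40, 2000: 50, 4000: 65}   # hypothetical audiogram, left ear
right = {500: 45, 1000: 55, 2000: 60, 4000: 75}  # hypothetical audiogram, right ear
print(better_worse_ear(left, right))             # (47.5, 58.75)
```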

To test the hypothesis of a common-cause, sensory–memory relationship (Baltes & Lindenberger, 1997), we also constructed a latent visual variable consisting of visual acuity estimates of the left and right eyes, respectively. Thus, analyses could also be based on a general sensory factor, or on a mediating sensory variable, in turn based on the joint contributions by the visual and auditory latent variables. For comparison purposes, we also computed the models for visual acuity and how they were related to STM and episodic and semantic LTM, respectively. The main visual test parameter used was collected with corrections (e.g., wearing eyeglasses) for the impairment. Other visual data were also collected without corrections. Hearing loss was estimated by means of standard pure-tone audiometric measurements without hearing aids (see Method section).

Thus, to test the specific hearing loss–memory hypotheses, we constructed three main latent variables—one STM variable, one episodic LTM variable, and one semantic LTM variable—on the basis of the memory tests used in the Betula project (Nilsson et al., 1997, 2004). The STM latent variable included indices of verbal STM based on the following tasks: free recall of visually presented lists of action imperatives with a verbal retrieval task (e.g., "comb your hair" or "roll a dice"; Nyberg, Nilsson, & Bäckman, 1992) that were either verbally or motorically encoded. A third task was free recall of auditory word lists. We also included a dual-task, divided attention condition of the word recall task. Thus, by varying the modality of encoding conditions (visual, auditory, motor) as well as demands on attention (in the divided attention condition), we could start evaluating the disuse, attention allocation, and information degradation hypotheses.

The STM (or primary memory) latent variable and the episodic LTM (secondary memory) latent variable of the recall data were defined by means of the well-established Tulving and Colotla (1970) lag measure (cf. Wahlin, Bäckman, & Winblad, 1995), based on the number of intervening items (presented and recalled) since the start of presentation of a particular item until recall of the same item (i.e., LTM > 7 items; STM ≤ 7 items). The episodic LTM latent variable included the LTM components of the same tasks.
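A minimal sketch of this lag scoring, using the ≤ 7 / > 7 split described above, is given below; the presented list and recall protocol are invented for illustration and do not reproduce the Betula scoring procedure itself.

```python
# Sketch of the Tulving and Colotla (1970) lag measure: for each recalled item,
# the lag is the number of other items presented after it plus the number of
# items recalled before it. Lags of 7 or fewer are scored as STM (primary
# memory); larger lags as episodic LTM (secondary memory).
def lag_scores(presented: list[str], recalled: list[str]) -> dict[str, int]:
    stm, ltm = 0, 0
    for recall_pos, word in enumerate(recalled):
        if word not in presented:
            continue  # intrusions are ignored in this sketch
        presented_after = len(presented) - presented.index(word) - 1
        lag = presented_after + recall_pos
        if lag <= 7:
            stm += 1
        else:
            ltm += 1
    return {"STM": stm, "LTM": ltm}

# Invented 12-word list and recall protocol for illustration only.
presented = ["sun", "barn", "rope", "milk", "frog", "lamp", "sand", "coin",
             "bird", "nail", "rice", "wolf"]
recalled = ["wolf", "rice", "nail", "sun", "barn"]
print(lag_scores(presented, recalled))  # {'STM': 3, 'LTM': 2}
```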

Although the Betula project did not include a WM measure that was explicitly designed to tap both storage and processing components, as in more complex versions of WM tests (cf. reading span; Daneman & Carpenter, 1980), the Tulving–Colotla (1970) distinction was assumed to capture an important aspect of the dichotomy on which the disuse hypothesis is based. Keeping items in mind with a lag of ≤ 7 entails an online STM storage capacity that represents part of a more complex storage and processing WM concept (Daneman & Carpenter, 1980), which is relevant to ELU. Processing items with a lag > 7 requires long-term encoding and storage. Thus, the Tulving–Colotla distinction allows us to begin to test the potentially relevant selective associations between the STM and episodic LTM memory systems on the one hand and hearing loss on the other. The STM and episodic LTM latent memory constructs were included to make an evaluation of the predictions of the ELU model—that is, that the degree of hearing impairment is related to retrieval of episodes from LTM, whereas STM performance is unrelated to hearing impairment.

The semantic LTM latent variable was based primarily on two tests: word fluency and vocabulary. Fluency may be important because time-constrained retrieval operations from lexical/semantic portions of LTM are likely to contribute to explicit online computations of language understanding in the course of successive mismatches. In the present study, we selected the initial letter fluency test (Hultsch, Hertzog, Small, McDonald-Miszczak, & Dixon, 1992) from the Betula battery because it mimics the phonological–lexical access process of the ELU model. The vocabulary test in this study was chosen for two reasons: First, it was used as a proxy for verbal IQ (see Table 1), and secondly, it plays an important role in semantic memory studies showing relationships to hearing impairment (e.g., Davis, Elfenbein, Schum, & Bentler, 1986; Kiese-Himmel, 2008).

Table 1. Sample characteristics for the included sample (without dementia, having hearing aids, an audiogram, and long-term memory data at T3) compared to the Betula sample.

Sample                | n     | Age M (SD) | % females | MMT M (SD) | Block M (SD) | Vocabulary M (SD)
Entire Betula sample  | 2,756 | 64 (15)    | 54        | 27.5 (2.0) | 25 (11)      | 22 (5.2)
Hearing aid wearers   | 160   | 75 (9)     | 50        | 27.3 (1.8) | 21 (9)       | 21 (5.6)

Note. Persons with dementia have been removed from all samples.


For control purposes, three additional tasks were used as simple test parameters to assess the boundaries and selectivity of the basic audioverbal and episodic/semantic memory predictions: (a) the classic Tower of Hanoi task (ToH); (b) a visual, nonverbal episodic LTM task (i.e., episodic face recognition; Nilsson et al., 1997); and (c) the recall task under divided attention encoding conditions (Baddeley, Lewis, Eldridge, & Thomson, 1984). We expected that explicit functions other than the ones assumed within the ELU model (i.e., storage and processing; Daneman & Carpenter, 1980) may be involved in language understanding. These may include the executive functions involved in ToH, which may be involved in the inference-making processes needed for disambiguation online (cf. van der Sluis, de Jong, & van der Leij, 2007). We also predicted that the cognitive processes involved in face recognition of unfamiliar faces would not be related to hearing loss, as the ELU model is about retrieval of linguistic information and not the processing of visuospatial information required for encoding and retrieval of unfamiliar faces. Finally, we expected that the divided attention condition of the word recall task would correlate with hearing loss (Tun et al., 2009).

Thus, the sensory latent variables on the one hand, and the STM and episodic/semantic LTM latent variables on the other hand—as well as the executive, face recognition, and divided attention parameters—were used to test the different predictions regarding STM and episodic and semantic LTM while also evaluating the prediction based on the common-cause hypothesis (i.e., where both sensory variables are assumed to contribute to memory/cognitive deficits).

Method

Participants

The participants in the present study were a subsample of participants in the Betula study. All participants in the subsample were hearing aid wearers. The Betula study is a prospective cohort study where the participants take part in extensive health and memory examinations as well as interviews about social factors (Nilsson et al., 1997). The main purpose is to study the development of health and memory functions in adulthood and old age, risk factors of dementia, and premorbid memory functions.

One sample in Betula was tested the first time at T1 (Sample 1 [S1]). One thousand persons were randomly selected from the population registry of Umeå, a city in northern Sweden with a population of about 110,000 inhabitants. The participants in S1 were 35, 40, 45, 50, 55, 60, 65, 70, 75, and 80 years of age when tested at T1 in 1988–1990. There were 100 persons in each of these 10 age cohorts. Participants in S1 were tested again at T2 (1993–1995), at T3 (1998–2000), and at T4 (2003–2005). The data used in the present study were from T3. There were 716 participants still remaining in S1 at T3. The main cause of attrition at each wave of data collection was death (about 10%). Some participants had moved from Umeå (about 2%), and some did not want to take part again or were unable to participate due to illness (2%). An additional two samples, S2 and S3, were tested for the first time at T2 (1993–1995). These participants were independently and randomly selected from the population registry of Umeå. Participants in S2 were of the same age at T2 as S1 participants were at T1—that is, 35, 40, 45, … 80 years of age. Participants in S3 were of the same age as S1 participants were at T2—that is, 40, 45, 50, … 85 years of age. There were approximately 1,000 participants in each of these two samples, with about 100 participants in each age cohort in each of the two samples. The eight oldest age cohorts of S2 participants were called back for testing at T3. Attrition rates for those who were called back were similar to those for S1 (14%), leaving 665 S2 participants being tested at T3. In S3, 812 participants returned for testing at T3. Not only were the overall attrition rates for S2 and S3 similar to those for S1, but the rates for the three types of attrition categories were also about the same as for S1. In S4, 563 persons were participating for the first time. Summing up the number of participants from each sample, there were 2,756 participants available at T3.

Exclusion criteria for participation in the Betula project are severe visual or auditory handicaps, mental retardation or dementia, and a mother tongue other than Swedish. In addition, all participants undergo repeated medical and neurological examinations throughout the project (Nilsson, 2003). Prosopagnosia was not tested for, and this should be taken into account in interpretation of face recognition performance.

We chose T3 as our main test occasion for this study, as the number of participants (n = 160) who did not have dementia and who had hearing aids, an audiogram, and LTM data at T3 was larger than at any other test occasion. Descriptive data for this subsample of hearing aid wearers are reported in Table 1.

The research reported in this paper was approved by the Ethics Committee of Umeå University, Sweden (Forskningsetisk kommitté, dnr x 7/870303) on March 3, 1987, and by the Regional Ethics Committee of Northern Sweden, Department of Medical Research (Regionala Etikprövningsnämnden Umeå, avdelningen för medicinsk forskning, dnr 08-132M) on October 7, 2008. Basic ethical considerations for the protection of human participants in research are the key issue in both of these committees.

General Description of Betula Participants at T3 and Included Subsample of Hearing Aid Wearers

Overall cognitive status (Mini-Mental State Examination [MMSE]; Folstein, Folstein, & McHugh, 1975), block design performance (Wechsler, 1991), and performance on a vocabulary test (Dureman, 1960) were used as background data. Block design was used as a proxy for nonverbal IQ, and vocabulary was used as a proxy for verbal IQ. These data, for both the sample and the subsample, can be seen in Table 1. The hearing aid wearers are representative of the Betula participants at T3 in that test performances and gender distribution are similar. The subsample of hearing aid wearers is older, which is to be expected because the prevalence of hearing impairment increases with age (Morton, 2006). Nevertheless, all cognitive test differences fell well within 1 SD of Betula participants at T3. Thus, there was no selective impairment of verbal and visuospatial test performance to indicate neurological or other medical conditions that might bias performance on verbal and/or face recognition tasks.

Hearing and Visual Impairments of the Subsample

Data on the sensory impairments of the subsample of hearing aid wearers are presented in Table 2. These include standard audiometric (International Organization for Standardization [ISO] 8253-1, 1989) data and visual acuity data obtained by means of the Jaeger eye chart (see Procedure subsection below). The average hearing loss for the poorer ear was moderate to severe (44 dB to 73 dB), with a successive increase in the loss going from low to high frequencies. The corresponding data for the better ear were 34 dB to 63 dB. The overall hearing impairment profile is characteristic of presbyacusis. The average hearing thresholds correspond approximately to the 80th percentile of a nonscreened Swedish population at the age of 75 years (Johansson & Arlinger, 2002)—that is, 80% of the population have better hearing thresholds and 20% have worse hearing thresholds.

Some participants received their hearing aids after their present audiogram had been collected. Others had been fitted with their aids earlier, and they were asked how long they had been using hearing aids before the present audiogram. The duration of the participants' use of hearing aids was therefore calculated as the time between the date of the audiogram and the date of the memory tests plus the time of use before the audiogram date, if any. The median time of use was 2.0 years, the mean was 6.4 years (SD = 11.3 years), and the maximum was 58 years.

Visual acuity was assessed as the smallest font size a participant could read, going from the smallest text (e.g., the font size typically used in small bibles [5 points] or the telephone directory [6 points]) with several intervening steps to a medium difficulty level (e.g., the font size typically used in paperbacks [10 points]) followed by several intervening steps to the largest text size (e.g., the font size typically used in newspaper headings [24 points]). The values reported are average point values for the difficulty levels at which the participants were able to read—8.1 for the right eye and 6.2 for the left eye on a scale ranging from 5 to 24. The values were collected using the participants' own corrective aids (i.e., eyeglasses, lenses, or a magnifying glass). Of the participants, 89% used corrective aids for their visual impairments, 21% reported that they were able to read newspaper text without their corrective aids, and 98% had no problems counting fingers at a distance of 1 m without corrective aids.

The participants were instructed to wear their hearing aids and to use their visual corrective aids at the time of memory testing. Test lists were used to enable audibility for the participants, and similar measures were taken to ensure individual visibility of the stimulus lists. There were no reports of problems with the visual or auditory aspects of testing for the included sample.

Procedure

Memory and Cognitive Tests

Three episodic memory tests formed the basis of the episodic LTM and STM latent variables. These tests were free recall of subject-performed tasks (SPTs), free recall of sentences, and free word recall. Vocabulary and fluency formed the semantic LTM latent variable.

Table 2. Sensory impairments for included sample at T3.

Sensory variable    | M   | SD
500 Hz better ear   | 34  | 17
500 Hz worse ear    | 44  | 21
1000 Hz better ear  | 40  | 17
1000 Hz worse ear   | 51  | 20
2000 Hz better ear  | 50  | 16
2000 Hz worse ear   | 60  | 16
4000 Hz better ear  | 63  | 18
4000 Hz worse ear   | 73  | 18
Right eye           | 8.1 | 6.0
Left eye            | 6.2 | 2.8

Note. Hearing thresholds in dB HL. Visual acuity in point values for smallest readable font. Means and standard deviations are presented for n = 160.


Episodic Memory, STM, and LTM

Free recall of SPTs. The participant was presented with a list of Swedish two-word imperatives including a verb and a noun with a suffixed definitive article (e.g., roll the ball, break the match; cf. Nilsson et al., 1997) and enacted the imperatives at encoding. The imperatives were printed on index cards that were presented at a rate of one action per 8 s, for a total of 16 actions. Free oral recall of the list of imperatives was allowed during 2 min. Duration of the response period was chosen on the basis of extensive piloting work prior to the commencement of the Betula study, which showed that no participant gave a response after 1.5 min. Experience from other studies shows that this response period is adequate (Fahlander, Wahlin, Almkvist, & Bäckman, 2002; Nilsson, 2003; Wahlin et al., 1995). The length of the response period must be seen in relation to the rate of the presentation. The LTM and STM components were derived with the Tulving and Colotla (1970) lag method.

Free recall of sentences (verbal tasks). In a condition parallel to the SPT condition, similar imperatives were encoded verbally (verbal tasks [VTs])—that is, the participants listened to the experimenter reading the sentences out loud while they were shown the same sentences on an index card. The LTM and STM components were derived by means of the Tulving and Colotla (1970) lag method. Free oral recall was allowed during 2 min.

Free word recall. The participants listened to lists of words (12 words × 4 lists) presented auditorily at a rate of one word every 2 s. Care was taken to ensure that the participants could repeat the words with their hearing aids turned on. Immediately after termination of each list presentation, participants were asked to orally recall as many words as possible in any order that they desired. The LTM and STM components were derived by means of the Tulving and Colotla (1970) lag method. The recall period was 45 s (Baddeley et al., 1984).

Semantic LTM

Vocabulary. Vocabulary was estimated by a commonly used Swedish vocabulary test (Dureman, 1960).

Fluency. This test involved lexical retrieval from semantic LTM on the basis of an initial letter. Given the initial letter A, the participants orally retrieved as many words as possible starting with the letter A from LTM during one minute. This test taps the phonological-to-lexical access mechanism proposed in the ELU model (Rönnberg et al., 2008), and is, together with the vocabulary test, part of the semantic long-term memory latent variable (SLTM).

Additional Tasks

Tower of Hanoi (ToH). The equipment consisted of a horizontal stand with three vertical rods attached and three disks of different sizes, which can be slid onto any rod. The test starts with the disks stacked in order of size on one rod, the smallest at the top. The task is to move the entire stack to another rod, obeying the following rules: Only one disk may be moved at a time; each move consists of taking the upper disk from one of the rods and sliding it onto another rod, on top of the other disks that may already be present on that rod; and no disk may be placed on top of a smaller disk. Number of moves was the dependent variable. This test provides a measure of executive functioning. The time allowed to solve the test was limited to 20 min.
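For reference, the sketch below generates one legal solution under these rules and counts the moves; the 2^n − 1 minimum (7 moves for the three-disk version described above) is the benchmark against which a participant's move count can be compared.

```python
# Sketch: recursively solve the three-rod Tower of Hanoi and count moves.
# The minimum for n disks is 2**n - 1, so the three-disk puzzle used here
# can be solved in 7 moves; larger move counts indicate a less efficient solution.
def hanoi(n: int, source: str, target: str, spare: str, moves: list[tuple[str, str]]) -> None:
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the way
    moves.append((source, target))              # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)  # restack the smaller disks on top

moves: list[tuple[str, str]] = []
hanoi(3, "A", "C", "B", moves)
print(len(moves), moves)  # 7 optimal moves for 3 disks
```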

Face recognition. The participants were presented with 16 color pictures of unfamiliar faces of 10-year-old children for later episodic recognition. The children depicted were from a small place in a southern province of Sweden, which makes it extremely unlikely that the participants from a northern province around Umeå would recognize them. Presentation rate was 8 s per item. At encoding of the faces, the participants were also instructed to encode the picture together with a fictitious name. Approximately 1 hr later, 12 of the pictures were presented again, randomly interspersed with 12 distracters. The rate of presentation was one picture per 15 s. Participants were instructed to point out the faces that they recognized from the previous presentation. The name recognition part of the test was not included in the present study.

Word recall—divided attention. This was the same task as the free word recall, with the addition of a divided attention condition. The divided attention condition involved asking participants to carry out a secondary task (i.e., sorting red and black cards into two stacks according to color) while performing the word-encoding task (Baddeley et al., 1984). We used the LTM component only (Tulving & Colotla, 1970).

SEM

SEM was performed with AMOS 17.0 using maximum likelihood estimation. Factor scaling was accomplished by fixing one item for each factor to a value of 1 in the pattern matrix, and the same items were used to scale factors among all models. The most important criterion for a model to be accepted is that all pathways in the model must be significant (p < .05). This is in contrast to an exploratory approach, where insignificant pathways are deleted. With the confirmatory approach to testing different models adopted here, pathways cannot be deleted without changing the model. Thus, proposed models with insignificant pathways are considered not to stand the test. Model fit was evaluated by examining the χ2, the comparative fit index (CFI; Bentler, 1990), and the root-mean-square error of approximation (RMSEA; Browne & Cudeck, 1993). Values > .90 are generally considered to be acceptable for CFI. For the RMSEA, values ≤ .05 are preferred. Chen, Curran, Bollen, Kirby, and Paxton (2008) have shown that the cutoff for RMSEA is dependent on sample size, where smaller sample sizes should have a higher cutoff. For our small sample size, it meant that a cutoff higher than .05 would be more accurate. We kept the .05 cutoff but considered the RMSEA to be the least important of the fit indices and, thus, did not reject models based on only a slightly higher RMSEA. We reported the χ2 fit statistic for all models. This statistic is sensitive to sample size; therefore, because we had a relatively small sample size, we chose p > .05 as the criterion for significance.
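As a rough illustration of how these fit indices relate to the reported χ2 values, the sketch below applies common textbook formulas for RMSEA and CFI; the baseline (independence-model) values and the N − 1 convention are assumptions, since they are not reported here, and the computation is a sketch rather than the AMOS output itself.

```python
# Sketch: compute RMSEA and CFI from model and baseline (independence-model)
# chi-square statistics, using common textbook formulas. Only the Model 1
# values (chi-square = 18.84, df = 13, n = 143) come from Table 6; the
# baseline chi-square and df below are invented for illustration.
import math
from scipy.stats import chi2

def rmsea(chisq: float, df: int, n: int) -> float:
    return math.sqrt(max(chisq - df, 0.0) / (df * (n - 1)))

def cfi(chisq_m: float, df_m: int, chisq_0: float, df_0: int) -> float:
    d_m = max(chisq_m - df_m, 0.0)
    d_0 = max(chisq_0 - df_0, d_m)
    return 1.0 - d_m / d_0 if d_0 > 0 else 1.0

chisq_m, df_m, n = 18.84, 13, 143          # Model 1, better ear (Table 6)
chisq_0, df_0 = 450.0, 21                  # hypothetical independence model
print(round(chi2.sf(chisq_m, df_m), 2))    # ~0.13, matching the reported p
print(round(rmsea(chisq_m, df_m, n), 3))   # ~0.056
print(round(cfi(chisq_m, df_m, chisq_0, df_0), 2))
```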

For 44 of the 160 participants, there were one or more missing values. The AMOS program substituted these values by means of a standard regression method. However, where values were missing for all three indices making up the episodic LTM construct, individuals were excluded. For 17 of the participants, audiograms were available only for the worse ear because these participants had had no measurable hearing loss (i.e., they had normal hearing) for the better ear. Thus, the analyses are based on a subsample size of 143 (160 participants – 17 participants = 143) for the better ear.

Results and Discussion

Memory and Cognitive Performance Data

Memory and cognitive test performances are presented in Table 3, and the hearing aid wearers (n = 160) are compared to the Betula participants at T3 (n = 2,756). The general observation is that the hearing aid wearers perform at a lower level, which is to be expected because the sample is, on average, 11 years older (Nilsson et al., 1997) but still within 1 SD of the larger group on all test parameters. It is interesting to note that they perform on a par with the Betula sample on all STM indices, ToH, and face recognition, whereas their performance on the episodic LTM components is relatively lower than in the STM and semantic LTM conditions. This pattern of results is also in line with the general literature on the selective effects of aging on different memory systems and cognition (Fahlander et al., 2002; Nilsson, 2003; Nilsson et al., 1997; Sternäng et al., 2008; Wahlin et al., 1995).

Hearing Loss and Memory Systems: Correlation Analyses

Initial steps of correlation analyses (see Table 4) showed that there was a pattern of negative associations between the hearing loss latent variables for both ears and episodic LTM measures: More severe hearing loss was associated with lower levels of verbal episodic LTM performance but not with nonverbal LTM for faces. Significant associations were not present for the STM tests. Semantic LTM performance (i.e., verbal fluency and vocabulary) also showed significant negative relationships to hearing impairment. For the visual impairment latent variable, there were no significant correlations with any of the STM and episodic/semantic LTM measures. The ToH test was not significantly related to any of the sensory latent variables.

Age correlated systematically and more strongly with episodic LTM than with STM and semantic LTM measurements. Age displayed moderate but significant correlations with all three sensory latent variables, which seems reasonable from a general view of aging sensory systems (Fahlander et al., 2002; Nilsson, 2003; Nilsson et al., 1997; Sternäng et al., 2008). Vision was not correlated with either of the two hearing latent variables. Age was also correlated with fluency and face recognition. Note that significance levels for the correlation coefficients are Bonferroni corrected for each latent sensory variable.

Table 3. Ms and SDs for memory and cognitive tests for hearing aid wearers (n = 160) and for all the Betula participants at T3.

                               | Included sample | Entire Betula sample
Memory and cognitive tests     | M      SD       | M      SD
ELTM subject-performed tasks   | 4.9    2.7      | 6.4    3.2
ELTM verbal sentence task      | 2.4    1.9      | 3.4    2.5
ELTM word recall               | 1.8    1.3      | 2.6    1.7
STM subject-performed tasks    | 1.9    1.0      | 1.9    1.0
STM verbal sentence task       | 1.4    0.8      | 1.6    0.9
STM word recall                | 2.4    1.0      | 2.5    1.0
SLTM vocabulary                | 20.8   5.6      | 22.1   5.2
SLTM fluency                   | 9.9    4.5      | 11     4.8
ToH task                       | 60     30       | 61     27
Face recognition               | 8.4    2.3      | 8.8    2.4
Word recall–divided attention  | 0.77   0.88     | 1.2    1.1

Note. ELTM = episodic long-term memory; STM = short-term memory; SLTM = semantic long-term memory. Face recognition, word recall, and divided attention performance are represented by the number of items recalled correctly. Tower of Hanoi (ToH) performance is measured by the number of moves used to solve the puzzle.


We also computed point-biserial correlation coefficients between the hearing aid wearers' own rating of their ability to "read a newspaper without visual corrections" and the latent episodic and semantic LTM and STM factors. This resulted in the following correlation coefficients: rpbis = .01 for STM, rpbis = .09 for episodic LTM, and rpbis = .06 for semantic LTM. Thus, no significant correlations were found between either objective or subjective visual measures and LTM performance.
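As a sketch, a point-biserial coefficient of this kind is simply a Pearson correlation between a dichotomous rating and a continuous score; the example below uses invented 0/1 ratings and factor scores rather than the Betula variables themselves.

```python
# Sketch: point-biserial correlation between a yes/no self-rating
# (can read a newspaper without visual correction: 1 = yes, 0 = no)
# and a continuous memory factor score. All data below are invented.
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(0)
can_read_unaided = rng.integers(0, 2, size=160)       # dichotomous rating
episodic_ltm_score = rng.normal(0.0, 1.0, size=160)   # latent factor score

r_pb, p_value = pointbiserialr(can_read_unaided, episodic_ltm_score)
print(round(r_pb, 2), round(p_value, 3))  # expected to be near zero for unrelated variables
```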

To further pursue the possibility of an association between visual acuity and episodic memory performance, we divided the hearing aid wearers into two groups by means of a median split procedure based on visual acuity: The low-performing 50% of the subsample had a mean number of points summed across both eyes of 18.4 (SD = 8.0), thus defining the general level of legible text. For the better performing half, the M was 10.0 and the SD was 0. Obviously, testing the hypothesis of an association with visual acuity for the low-performing sample is more relevant and theoretically appropriate when variability is more substantial. The consistency of the correlation pattern for the low-performing sample with the earlier data was even more pronounced (see Table 5): Hearing loss for this particular subsample of participants with a relatively lower visual acuity still correlated with an episodic LTM deficit, especially for the better ear, whereas visual acuity was generally not indicative of any episodic memory deficit.

Taken together, the correlation analyses showed that episodic and semantic LTM performance were sensitive to hearing loss but not to visual acuity. The lack of correlations for visual impairment may be a function of the low resolution of the visual impairment variable (corrected). However, we compensated for this with the analysis of uncorrected vision estimates (ability to read a newspaper without visual correction) and with a separate analysis of half the sample where corrected vision is lower and more variable (see Table 5). None of these analyses gave any indications that memory measures are sensitive to changes in the visual sense. Corresponding analyses on a further subsample (n = 116), where no statistical estimations of missing values were required, showed similar results. Thus, although a better measure of uncorrected or corrected vision (with a better overall resolution) potentially may have produced other results, the data from the present study showed no such indications.

Hearing Loss and Memory Systems: SEM

Episodic LTM and STM

The initial SEM models that we tested relied on the predicted link between the hearing latent variables and episodic LTM (based on SPT + VT + word recall) in the whole subsample of hearing aid wearers, starting with the better ear (n = 143; see Table 6). In Table 7, the models for the worse ear are presented.

Table 4. Correlations among the hearing loss and vision acuity latent variables, age, and the tests used to construct the ELTM, SLTM, and STM latent variables for hearing aid wearers (n = 160).

Variable                       | Hearing loss (WE) | Hearing loss (BE) | Vision | Age
Hearing loss (BE)              | .77**             | —                 | .03    | .30**
Vision                         | .02               | .03               | —      | .22
Age                            | .24*              | .30**             | .22    | —
ELTM subject-performed tasks   | –.28*             | –.37**            | –.05   | –.49**
ELTM verbal sentence task      | –.26*             | –.32**            | –.18   | –.41**
ELTM word recall               | –.26*             | –.30**            | –.17   | –.26*
STM subject-performed tasks    | –.10              | –.07              | –.18   | –.26*
STM verbal sentence task       | –.20              | –.14              | –.18   | –.18
STM word recall                | –.09              | –.12              | –.05   | –.11
SLTM vocabulary                | –.26*             | –.25*             | –.23   | –.26*
SLTM fluency                   | –.29**            | –.30*             | –.11   | –.23
ToH                            | –.10              | –.09              | –.13   | –.09
Face recognition               | –.01              | .01               | –.12   | –.23
Word recall–divided attention  | –.11              | –.21              | –.08   | –.21

Note. Additional correlations with parameters relevant to the ELU model are presented. SLTM = semantic long-term memory. WE = worse ear; BE = better ear.

*Correlation significant at p < α = .05/14 = .004 (Bonferroni correction based on the number of tests per sensory variable). **Correlation significant at p < α = .01/14 = .0007.

Table 5. Correlations among the hearing loss and vision acuity latent variables and the tests used to construct the ELTM, SLTM, and STM latent variables for the 50% of hearing aid wearers who had poor vision.

Variable                       | Hearing loss (WE) | Hearing loss (BE) | Vision
Hearing loss (BE)              | .75**             | —                 | .09
Vision                         | .01               | .09               | —
ELTM subject-performed tasks   | –.28              | –.32              | –.04
ELTM verbal sentence task      | –.31              | –.39*             | –.15
ELTM word recall               | –.28              | –.43**            | –.19
STM subject-performed tasks    | .03               | .03               | –.18
STM verbal sentence task       | –.23              | –.20              | –.16
STM word recall                | .00               | –.02              | –.11
SLTM vocabulary                | –.33              | –.28              | –.22
SLTM fluency                   | –.35              | –.44**            | –.13
ToH                            | –.15              | –.10              | –.20
Face recognition               | .06               | .10               | –.08
Word recall–divided attention  | –.12              | –.11              | –.31

Note. Additional correlations with parameters relevant to the ELU model are presented.

*Correlation significant at p < α = .05/13 = .004 (Bonferroni correction based on the number of tests per sensory variable). **Correlation significant at p < α = .01/13 = .0008.


As can be seen in Table 6 for the better ear (see Model 1), the main and straightforward model between hearing loss and episodic LTM (standardized regression weight = –.46, p < .001) showed that the fit parameters were all acceptable according to the SEM criteria. However, for the corresponding model between hearing loss and STM, not all fit parameters were acceptable, and the modeled link between hearing loss and STM was nonsignificant (see Model 5). The selective dependence on hearing loss for episodic LTM but not for STM is a pattern of data that confirms the ELU use–disuse prediction. A similar result was obtained for the worse ear (see Table 7), although with a slightly weaker regression weight (–.37, p < .001) between hearing loss and the deficit in episodic LTM. The difference between ears should be understood in relation to the fact that the better ear mainly determines auditory function in daily life, as illustrated by rules applied in insurance compensation for, say, occupational hearing loss (cf. Dobie, 1996). For SEM models with age-related variance included, see the section titled Age and the Hearing Loss–Episodic LTM Link.

Apart from the disuse prediction, the attention resource and information degradation hypotheses also receive support from the general results of Model 1. However, a more detailed look at the data may warrant some critical comments (see Tables 4 and 5). In the word recall—divided attention conditions (calculated on the basis of the LTM component), the simple correlations were nonsignificant, whereas the correlations were generally significant in the nondivided recall conditions (i.e., in the episodic LTM word recall condition). It would have been expected that the dependence on hearing impairment would be stronger, not weaker, in the divided attention condition (Tun et al., 2009).

An alternative interpretation is that because age is a strong underlying factor, it obliterates the effects of hearing loss (e.g., Anderson, Craik, & Naveh-Benjamin, 1998). Partialing out age in a separate analysis did not alter the results in the divided attention task; the correlations were still nonsignificant for both the worse (r = –.07) and better (r = –.17) ears. An age-dependent correlation should have been expected, given that the aging factor dominates.

For the sake of completeness, we also added an analysis of the STM component of the divided attention task and found that the hearing loss correlations were significant (r = –.23 and r = –.31 for the worse and better ears, respectively). Also, after partialing out age, the hearing loss correlations remained significant, although somewhat deflated (r = –.19 and r = –.27 for the worse and better ears, respectively). Thus, the results for the STM component of the divided attention task, but not for the LTM component, support the Tun et al. (2009) prediction. The ELU model does not include an explicit mechanism for predicting a dependence on hearing loss in divided attention conditions; it predicts intact performance under nondivided STM conditions.
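The age-partialed correlations referred to above can in principle be reproduced with any partial-correlation routine. A sketch using the pingouin package, with hypothetical column names standing in for the Betula scores:

    import pandas as pd
    import pingouin as pg

    df = pd.read_csv("hearing_memory.csv")     # hypothetical file: one row per participant

    # Zero-order correlation between better-ear hearing loss and the STM
    # component of the divided attention task.
    zero_order = df["hl_better_ear"].corr(df["stm_divided_attention"])

    # The same association with age partialed out, mirroring the analyses in the text.
    partial = pg.partial_corr(data=df, x="hl_better_ear",
                              y="stm_divided_attention", covar="age")

    print(zero_order, partial["r"].iloc[0])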

Furthermore, if we consider the information degradation hypothesis, we can note that the episodic LTM concept is composed of an auditory component in two of the three episodic LTM tasks used (i.e., in the word recall and the VT tasks) and, hence, information degradation

Table 6. Twelve SEM models and fit parameters relating to the better ear.

Model                                χ²      df   p     RMSEA   CFI    Pathway hearing/senses > memory system   No. of insignificant pathways (p > .05)
1. Hearing > ELTM                    18.84   13   .13   .06     .99    –.46                                      0
2. Hearing/vision > ELTM             36.34   26   .09   .05     .97    –.44                                      2
3. Senses > ELTM                     40.85   26   .03   .06     .96    –.46                                      2
4. Hearing/vision > Senses > ELTM    37.00   26   .08   .06     .97    –.85                                      3
5. Hearing > STM                     12.73   13   .47   .00     1.00   ns                                        3
6. Hearing/age > ELTM                19.64   18   .35   .02     1.00   –.30                                      0
7. Vision > ELTM                     5.8     5    .32   .03     .98    ns                                        2
8. Vision > STM                      1.9     4    .75   .00     1.00   ns                                        4
9. Hearing > SLTM                    15.2    8    .06   .08     .98    –.34                                      0
10. Hearing/vision > SLTM            29.8    19   .05   .06     .97    –.30                                      2
11. Hearing > SLTM > ELTM            30.53   24   .17   .04     .99    –.22                                      0
12. Hearing > LTM                    37.8    26   .06   .06     .97    –.43                                      0

Note. The model “Hearing > ELTM” is understood to mean “Hearing loss is related to ELTM,” and similarly for the other 11 models and memory systems. Calculations are based on n = 143. Unfulfilled criteria for a good model are marked in bold (p < .05, RMSEA < .05, CFI < .95, insignificant pathways > 0). RMSEA = root-mean-square error of approximation; CFI = comparative fit index.
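The acceptance logic applied to each row of Tables 6 and 7 can be written out as a small checklist. In the sketch below, the chi-square p rule and the "no nonsignificant pathways" rule follow the table note, whereas the RMSEA and CFI cutoffs are passed in explicitly because conventional values differ between sources (e.g., .05 vs. .08 for RMSEA; cf. Browne & Cudeck, 1993; Chen et al., 2008); the .08 value used in the example is an assumption, not a value taken from the article.

    def model_is_acceptable(p_chi2, rmsea, cfi, n_ns_paths,
                            rmsea_cutoff, cfi_cutoff=0.95):
        """Return (ok, reasons) for one row of Table 6 or 7."""
        reasons = []
        if p_chi2 < 0.05:
            reasons.append("chi-square test rejects the model (p < .05)")
        if rmsea > rmsea_cutoff:
            reasons.append("RMSEA above cutoff")
        if cfi < cfi_cutoff:
            reasons.append("CFI below cutoff")
        if n_ns_paths > 0:
            reasons.append("nonsignificant pathway(s) in the model")
        return len(reasons) == 0, reasons

    # Model 1, better ear (Table 6): chi-square p = .13, RMSEA = .06, CFI = .99, 0 ns paths.
    # With the lenient .08 RMSEA cutoff this model passes, as reported in the text.
    print(model_is_acceptable(0.13, 0.06, 0.99, 0, rmsea_cutoff=0.08))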


could have played a role here, even if the participants had their hearing aids turned on and had heard and perceived the stimuli. We further examined a set of SEM models (not presented in table format) based on only two of the LTM tasks at a time (i.e., letting two task indices of episodic LTM be the basis of the episodic LTM latent variable). The results were clear in the sense that the episodic LTM deficit was present in all three cases (SPT/VT, word recall/VT, and word recall/SPT) for both ears, with one minor exception (for the model with SPT/VT for the better ear, the overall fit was too low [χ2 test, p = .032], but the model met all of the other criteria). Thus, the analysis showed that the link between hearing loss and episodic LTM is relatively general and robust. However, this does not rule out potential information degradation, as there is still at least one task in the three different episodic LTM latent variables that potentially involves a degradation of information at encoding.

Nevertheless, if in a separate analysis we examine the simple correlation with the SPT condition—where no auditory information is present at encoding (see Table 4) and, hence, there is no degradation of stimulus information—we obtain a high simple correlation with hearing loss for the better ear. Partialing out age left us with a pattern of significant correlations that is relatively stable across the three episodic LTM tests (better ear: r = –.22, r = –.26, and r = –.23 for the SPT, VT, and word recall tasks, respectively; worse ear: r = –.18, r = –.18, and r = –.19 for the same tasks). This fits in with the overall SEM analyses (see Figures 1 and 2). Because the SPT partial correlation is now on par with the other correlations, we argue that hearing loss has a general impact on tasks involving linguistic information, including tasks with potential sources of information degradation as well as tasks without such degradation.

Thus, it seems that the connection between degree of hearing loss and episodic LTM performance can be established with quite different measures of the episodic LTM construct by means of simple correlations, provided that those measures rely on linguistic information. These findings are generally consonant with the ELU prediction as well as with the attention resource and information degradation accounts (i.e., the Model 1 result from both ears). But some of the results at the level of simple and partial correlations warrant further study to pinpoint the explanatory value of all three predictions involved. It should also be added that the general result is corroborated by the fact that episodic LTM for unfamiliar faces—which depends on nonlinguistic visuospatial processing and therefore did not directly involve a phonological analysis—did not show a relationship with hearing loss.

In Table 6, we also tested the models of the direct relationships between the latent visual variable and STM and episodic LTM (Models 7 and 8). We found no significant links for visual acuity and STM or for visual acuity and episodic LTM. This held true for both ears (see also Table 7). We also found other nonsignificant pathways for Models 7 and 8, clearly producing unacceptable models. Finally, an unacceptable model was also produced by the relationship between hearing loss and STM (Model 5, Tables 6 and 7), hence supporting the ELU model.

Thus, the overall results support the ELU model that predicts an audioverbal link to an episodic LTM deficit (Model 1) while leaving STM intact relative to hearing loss (Model 5), unaffected by visual acuity for both the

Table 7. Twelve SEM models and fit parameters relating to the worse ear.

Model                                χ²      df   p     RMSEA   CFI    Pathway hearing/senses > memory system   No. of insignificant pathways (p > .05)
1. Hearing > ELTM                    11.36   13   .58   .00     1.00   –.37                                      0
2. Hearing/vision > ELTM             26.03   26   .46   .00     1.00   –.36                                      1
3. Senses > ELTM                     34.78   26   .12   .05     .98    –.37                                      2
4. Hearing/vision > Senses > ELTM    36.16   26   .09   .05     .98    ns                                        3
5. Hearing > STM                     13.54   13   .40   .02     1.00   ns                                        3
6. Hearing/age > ELTM                11.97   18   .85   .00     1.00   –.23                                      0
7. Vision > ELTM                     5.85    5    .32   .04     .98    ns                                        2
8. Vision > STM                      6.33    5    .28   .04     1.00   ns                                        4
9. Hearing > SLTM                    10.4    8    .24   .04     1.00   –.33                                      0
10. Hearing/vision > SLTM            22.3    19   .27   .03     .99    –.31                                      1
11. Hearing > SLTM > ELTM            21.36   24   .62   .00     1.00   ns                                        1
12. Hearing > LTM                    29.11   26   .31   .03     .99    –.38                                      0

Note. The model “Hearing > ELTM” should be read “Hearing loss is related to ELTM,” and similarly for the other 11 models and memory systems. Calculations are based on n = 160. Unfulfilled criteria for a good model are marked in bold (p < .05, RMSEA < .05, CFI < .95, insignificant pathways > 0).


STM and episodic LTM systems (Models 7 and 8). Although it may be argued that the attention resource and information degradation accounts gain support from the episodic LTM data, the overall STM and episodic LTM pattern is predicted only by the ELU model. However, recent research has proposed a development of the attention and information degradation account that predicts selective effects on memory for early and late serial positions of the serial position curve (Heinrich & Schneider, 2010; see further under General Discussion).

Alternative Common-Cause Models

To explore the boundaries of the model relating hearing loss to episodic LTM and to test the notion that the association between sensory decline and episodic LTM performance is not specific to decline in hearing loss but also involves visual decline (i.e., a common cause), we computed a set of three alternative models that incorporate visual acuity data (i.e., Table 6, Models 2–4). The first alternative model was one in which we let the latent visual and hearing impairment variables contribute separately to episodic LTM (see Table 6, Model 2). This model was not acceptable because the visual latent variable was not significantly linked to episodic LTM. In a second alternative model (Model 3), we constructed a more abstract, latent sensory variable based on the joint contributions from the hearing loss and the visual impairment variables. Using such a general sensory–episodic LTM connection, we were still unsuccessful in producing an acceptable model, as the basic link between sensory impairment and visual subscores was nonsignificant. A third model (Model 4) that we tested included an abstract sensory variable, which became a mediating latent variable between the separate sensory variables and episodic LTM. Again, this version did not produce an acceptable model, as the relationship between the latent visual impairment variable and the abstract sensory variable was nonsignificant. The same general pattern was shown for the worse ear (see Table 7).
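For comparison with the Model 1 sketch above, a Model 2 style common-cause specification simply adds a visual acuity latent variable as a second predictor of episodic LTM; Models 3 and 4 instead collapse hearing and vision into a joint sensory factor. Again, the indicator names below are hypothetical and the snippet only illustrates the structure, not the authors' analysis script.

    from semopy import Model, calc_stats

    # Model 2 style: hearing loss and visual acuity as separate predictors of ELTM.
    model_2 = Model("""
    HearingLoss =~ pta500 + pta1000 + pta2000 + pta4000
    Vision      =~ acuity_corrected + acuity_uncorrected
    ELTM        =~ spt_recall + sentence_recall + word_recall
    ELTM ~ HearingLoss + Vision
    """)

    # model_2.fit(data) followed by calc_stats(model_2) would then show whether the
    # Vision -> ELTM pathway is significant, which is what Models 2-4 turn on.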

Thus, the modeling results are not consonant with a common-cause hypothesis of a deficit in cognitive function (Baltes & Lindenberger, 1997). It is important to note that the separate correlation analyses of uncorrected visual acuity (point-biserial)—and the median split data on lower visual acuity (corrected)—give further credence to the conclusions drawn: A sensory-specific mechanism, conceptualized in terms of the ELU model, receives strong support. A common-cause hypothesis is not supported, as visual acuity data—when selected to vary despite

Figure 1. Visualized structural equation modeling (SEM) for the hearing losses of the better ear (BE; see upper panel) and worse ear (WE; see lower panel) and their relation to episodic long-term memory (ELTM) performance when taking age into account. SPTs = subject-performed tasks; Rec = recall.


corrections—do not parallel the hearing loss predictions of deficits. However, as already discussed, a measure with better overall resolution for visual acuity (when corrected) may alter the picture, but the total empirical picture on visual acuity does not seem to support that possibility.

Age and the Hearing Loss–Episodic LTM Link

As can also be seen in Table 6 (Model 6), the basic model relating hearing loss to episodic LTM showed good fit parameters when chronological age was allowed to affect both latent variables (standardized regression weight = –.30, p < .05). The link between hearing loss and episodic LTM was somewhat weaker for the worse ear but still significant (standardized regression weight = –.23, p < .05), apart from fulfilling the remaining criteria for an acceptable model. Thus, introducing age-related variance into the SEM models did not eliminate the link between hearing loss and episodic LTM (see Figure 1). But including age weakened this link considerably, suggesting that part of the relationship is due to a covariation with age that is independent of hearing status.

However, one impressive aspect of the data from the perspective of hearing loss is that at the same time as there were relatively strong regression weights between age and episodic LTM (r = –.60 for the worse ear and r = –.58 for the better ear), there were also significant beta weights between age and hearing loss (worse ear, r = .22, and better ear, r = .26; cf. Johansson & Arlinger, 2002). Despite these significant regression weights and constraints on the SEM solution, the predicted link between hearing loss and episodic LTM survived correction for age for both the worse ear and better ear cases (see Figure 1). This further validates the conclusion that hearing loss has an independent and significant effect on episodic LTM.

Finally, we computed an alternative kind of model, where age and hearing loss were pitted against each other as the two predictor variables of episodic LTM performance. As can be seen in Figure 2, the hearing loss–episodic LTM link was again significant for both ears in this type of model.

Thus, the general message is that hearing loss is not related to STM. This is true despite the fact that the STM performance of the hearing aid wearers was on a par with that of all Betula participants at T3 and also demonstrated larger variability than their episodic LTM performance. Age was moderately related to STM but was substantially related to episodic LTM (see Table 4).

Figure 2. Visualized SEMs for the hearing losses of the BE (see upper panel) and WE (see lower panel) and their relation to ELTM performance when age is not corrected for in the latent hearing loss variables.


Despite the relatively high age–episodic LTM correlations, the effect of hearing loss, corrected for age, was empirically present for all age-related episodic LTM models in Figures 1 and 2, as predicted by the ELU model.

Semantic LTM

Model 9 (see Table 6) tested whether there was a significant link between hearing loss and semantic LTM (i.e., fluency and vocabulary). This model was acceptable, and only the prediction derived from NAM (Luce & Pisoni, 1998) received support. In Model 10, we tested whether auditory and visual acuity together (as one common-cause model) could predict a semantic LTM deficit. Neither the better nor the worse ear prediction gave acceptable Model 10 solutions.

Semantic LTM and Episodic LTM

In Model 11 (see Table 6), we tested whether hearing loss could independently predict (a) episodic LTM deficits and (b) semantic LTM deficits, as well as (c) a significant relationship between semantic and episodic LTM. The model was found acceptable for the better ear, with a strong link between the semantic and episodic latent variables (see Figure 3). Regarding this model, only the ELU model predicts a relationship between semantic LTM and episodic LTM. The vocabulary and fluency tests obviously tap into two important aspects of semantic LTM that mediate—and dynamically interact with—encoding of verbal items into episodic LTM.
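A Model 11 style specification adds the semantic LTM construct as an intermediate latent variable, with hearing loss predicting both memory constructs and semantic LTM predicting episodic LTM. As before, this is a hypothetical sketch of the structure rather than the authors' actual analysis script.

    from semopy import Model

    # Model 11 style: hearing loss -> semantic LTM and episodic LTM,
    # with semantic LTM also predicting episodic LTM.
    model_11 = Model("""
    HearingLoss =~ pta500 + pta1000 + pta2000 + pta4000
    SLTM        =~ vocabulary + fluency
    ELTM        =~ spt_recall + sentence_recall + word_recall
    SLTM ~ HearingLoss
    ELTM ~ SLTM + HearingLoss
    """)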

For control purposes, we computed two other models pertinent and complementary to the overall model presented in Figure 3. The significant relationship between hearing loss and semantic LTM remained (a) in a model including age and hearing loss as predictor variables and semantic LTM as the dependent latent variable (cf. the analogous model for episodic LTM presented in Figure 2) and (b) in a model where age was allowed to affect hearing loss, semantic LTM, and episodic LTM, and where semantic LTM was the link to episodic LTM. In fact, semantic LTM was the most powerful variable predicting episodic LTM, when compared with hearing loss and age, and hearing loss exerted its largest influence via semantic LTM. Thus, the overall interpretation of the model presented in Figure 3 is not affected by the inclusion of age-related variance.

Finally, Model 12 summarized the empirical picture by showing that hearing loss predicts LTM deficits in general, with the latent LTM construct now building on all five memory indices. Interestingly, there was a substantial link to this latent construct (standardized regression weight = –.43, p < .05), independent of whether the tests load on a particular memory system. This result is not clearly predicted by any hypothesis or model advanced here (see Figure 4) but is consonant with accounts that emphasize the similarities between the memory systems, such as that of Squire, Clark, and Bayley (2004), in which episodic and semantic memory are proposed to belong to the same memory system, viz. declarative memory.

General Discussion

Overall, Models 1–8 (cf. Tables 6–7 and Figures 1–2) support the ELU predictions. This is because the hearing loss–episodic LTM link is significant in all cases, even when age is accounted for, while at the same time the hearing loss–STM model (Model 5) implies that hearing loss is not related to STM performance. It should be noted

Figure 3. Visualized SEM for the BE and its relation to ELTM and SLTM.


that age accounts for a substantial part of the variance in episodic LTM that is independent of hearing status. Age-related variance was not the main focus of the present article but was included in a few strategically chosen models (Model 6, Figures 1 and 2) and in separate analyses of partial correlations.

Visual acuity was related neither to STM nor to episodic LTM (Models 7 and 8). Thus, the prediction of a selective empirical association between hearing loss and episodic LTM is confirmed, and the ELU-derived disuse hypothesis is supported. In general modeling terms, the sensory-specific model that was based on hearing loss gave a better fit than the models that included both hearing loss and visual acuity—that is, the common-cause models (Models 2–4; cf. Lindenberger & Ghisletta, 2009).

Theoretically, when we consider the predictions of an episodic LTM deficit seen in isolation, alternative predictions based on attention allocation and information degradation also receive support at the level of SEM. When we consider the semantic LTM results, only the derived NAM prediction receives support (Models 9–10), whereas only the ELU model predicted the interaction between episodic and semantic LTM (Model 11). No model predicts the overall link between hearing loss and long-term memory (episodic and semantic components; Model 12).

Although several parts of the overall pattern of results support the ELU model, there are critical theoretical issues that remain to be discussed, some of which support alternative mechanisms.

Episodic LTM and STM Mechanisms

The ELU prediction is based on a mechanism that relies on RAMBPHO-extracted information and how that information matches or mismatches with phonological representations in semantic LTM (Rönnberg, 2003; Rönnberg et al., 2008). The derived prediction was that disuse would not affect STM because, according to the model, mismatches push the system into the explicit mode of processing and cause STM (or WM) to be constantly occupied with retrospective disambiguation of what has been said in a conversation as well as with prospective guessing of what is to come (see Foo et al., 2007; Lunner & Sundewall-Thorén, 2007; Rudner et al., 2009, for detailed analyses of mismatch conditions and WM correlations). Therefore, STM is assumed to be forced into use more than episodic LTM. Speculatively, this works in two ways: Mismatches disrupt episodic LTM encoding and retrieval while simultaneously causing STM to be engaged to a larger extent when the system shifts into the explicit mode. Even though the eventual disambiguation of message content may have to rely on successive additional retrievals from episodic and semantic LTM, the net activity of STM will always be higher. This is because it involves all the verbal fragments that have already been decoded, plus the additional explicit inference making that is necessary when perceived and retrieved information are combined.

An interesting corollary of the disuse hypothesis is that the STM system should actually improve, in terms of a positive correlation between degree of hearing loss and STM performance, because STM practice increases when mismatches increase. This effect may hold for individuals who are younger than those taking part in the present study. For individuals who are older than those taking part in the present study, it could be argued that even STM should decline (e.g., Luo & Craik, 2008), despite mismatch-induced STM practice effects (Roche et al., 2009), because effects of practice are smaller in old age (Dahlin, Nyberg, Bäckman, & Neely, 2008). The key prediction of the current disuse hypothesis is that there is a relative difference between STM and episodic LTM: As long as hearing loss contributes to an increase in the everyday count of mismatches between the incoming language signal and semantic LTM, episodic LTM should suffer relatively more than STM.

The lack of improvement in STM as a function of hearing loss could also be due to the fact that Tulving and Colotla's (1970) definition of primary memory (i.e., STM) does not capture all the relevant components of what is

Figure 4. Visualized SEM for the BE and its relation to long-term memory (LTM; which is composed of episodic and semantic components).


meant by the complex working memory function—that is, a function involving both the storage and processing components relevant to ELU (Daneman & Carpenter, 1980). If the STM measure in future studies were replaced by a more complex dual-task WM measure, an improvement of WM as a function of hearing loss could perhaps be expected on the basis of use/disuse logic.

A further issue with the mismatch mechanism is that it has been measured in experimental studies in which we (so far) have changed only some aspects of phonological processing (i.e., manipulation of compression release times in hearing aids) prior to testing (Foo, Rudner, Rönnberg, & Lunner, 2007; Rudner et al., 2009). This means that we have not exactly defined the minimal change or disruption in phonological processing that is required to push processing into an explicit mode. We have also opened up the possibility that mismatch could be induced by nonphonological contextual or semantic factors (Stenfelt & Rönnberg, 2009).

Here, the alternative theoretical mechanisms are more delimited and exact when it comes to episodic LTM but, at the same time, are less clear regarding the other memory systems.

Attention Resources

If attention resources were more easily depleted in participants with poor hearing, hearing loss would be related to performance in STM conditions involving effort- or attention-demanding activities (Tun et al., 2009). This prediction is not borne out for the STM components in the current data set because the hearing aid wearers were very similar in performance to the whole T3 Betula sample and, in addition, no correlations with hearing loss were statistically reliable. In the divided attention task in the present study, where attention demands were explicitly manipulated, the STM component—not the episodic LTM component—was related to hearing loss. This STM result supports the Tun et al. (2009) prediction. But it is not clear why divided attention should not be related to hearing loss for the episodic LTM component in the present study. The Tun et al. (2009) account does not address STM/episodic LTM distinctions, and hence, a firm conclusion cannot be drawn.

In a recent account of attentional resources and information segregation (Heinrich & Schneider, 2010), it has been argued that information segregation in old people (e.g., segregating word pairs from continuous babble) is relatively more optimal than distorting the words during word pair presentation only (e.g., by means of jittering). The argument was that speed of accessing the lexicon is facilitated by segregation, improving encoding and memory performance for recency (late) positions of the serial position curve. Memory was probed at different serial positions from list to list, with one word serving as a cue for the other. The downside of information segregation in continuous babble is that attentional costs accrue relatively more as list items unfold, hence affecting episodic storage for pre-recency (early) positions (Heinrich & Schneider, 2010; see also Sarampalis, Kalluri, Edwards, & Hafter, 2009).

To the extent that early and late serial positions map onto our measurement of STM and episodic LTM, the Heinrich and Schneider (2010) account presents a precise mechanism for encoding and episodic memory storage, given different ways of experimentally manipulating attentional allocation. However, the ELU model prediction is primarily about hearing loss–related and selective effects on STM and episodic LTM, whereas the Heinrich and Schneider (2010) account is about attentional capacities in old age and does not focus on the effects of hearing impairment (i.e., hearing was controlled for by individually adjusting the noise backgrounds such that the individuals could hear the stimulus words equally well).

Information Degradation

If information degradation related to hearing impairment were the only cause of the deficit, it should cut across memory systems—including STM—rather than only predict the loss in those episodic LTM tasks in which an auditory component was involved. Furthermore, comparison of the different test combinations making up the episodic LTM construct showed that the disuse effect was not confined to a particular test or to a particular type of encoding but was rather general, provided that linguistic components were involved. However, in this particular study, we could not rule out the possibility that information degradation also played a role, because all linguistic latent episodic LTM constructs had an auditory component included in the SEM analysis. In the simple correlation analyses, we observed that when age was partialed out, SPT performance still correlated to the same degree with hearing loss as the other two tests (i.e., VTs and word recall). This is an interesting indication that the hearing loss effects on episodic LTM may extend to nonauditory linguistic stimuli, and it supports the generality of the ELU prediction rather than disqualifying the information degradation account.

A more detailed look at the episodic, verbal LTM tests opens the possibility for a theoretical refinement of the episodic LTM effect: Action memory is overtly verbal only at retrieval, but whether the motor encoding of imperatives is only nonverbal in nature is still far from decided (see Nilsson, 2000). Still, in the present study, action memory is significantly related to hearing loss in the same way as the two verbal encoding–verbal retrieval episodic LTM tests. Thus, the data may suggest that the locus of a putative causal mechanism is the


verbal retrieval stage of episodic LTM. Insofar as the attention allocation hypothesis predicts the episodic LTM effect, the experimental manipulation of tracking/recalling in the Tun et al. (2009) study also targeted the retrieval phase.

The data pattern from simple correlations among the additional test parameters further adds to the specificity of an audioverbal or phonologically mediated episodic LTM mechanism: In particular, face recognition ability, which presumably taps nonphonological episodic LTM, was not compromised by, or related to, hearing loss. However, age correlated with face recognition, which, again, attests to the specificity and invariance of the hearing loss–episodic LTM deficit. Further support for selectivity was provided by the fact that the results from the executive ToH test were not correlated with degree of hearing loss. To the extent that the ToH test mimics actual executive functions that contribute to online processing of speech, the ELU model prediction was not supported.

Cognitive Aging and Episodic LTM

The present study was based on cross-sectional data and needs to be followed up by longitudinal studies of correlation patterns at different points in time in the same sample. Predictions of decline from changes in acuity over time also need to be tested. We do not know from this study whether mean rates of decline are different for hearing loss and visual acuity in different age groups. Given such a possibility, it may be the case that the sensitivity and selectivity of the different sensory systems with respect to declines in memory and cognitive systems operate differently at different ages, and that other samples could show different SEM models. Therefore, we want to be cautious with respect to any general "falsification" of a common-cause hypothesis, but even recent longitudinal follow-up studies do not give unequivocal support for a common-cause interpretation of data (Lindenberger & Ghisletta, 2009).

A further aspect of a cognitive aging hypothesis is that the negative correlations between hearing loss and episodic LTM are robust despite the fact that the variability in episodic LTM performance is lower than that for STM in the included sample compared with the Betula sample (Ms and SDs are lower/smaller). Still, the aging effects in the current data set affect the episodic LTM component relatively more than the STM component, which is in line with several hypotheses of cognitive aging (Fahlander et al., 2002; Nilsson, 2003; Nilsson et al., 1997; Wahlin et al., 1995). This means that estimates of the disuse effect in this study are probably conservative.

We have shown in this study that there is a sensory-specific mechanism that relates hearing loss to episodic LTM, and we argue that this mechanism operates via mismatch at the phonological level. Whether cognitive aging, directly or indirectly, alters the mismatch function as well is a question for future studies. One indication of differences in their respective influence on episodic LTM is the fact that age correlated with nonverbal episodic LTM, whereas hearing loss did not.

Semantic and Episodic LTM Mechanisms

Our measures constituting the semantic memory construct that is relevant for the lexical access mediated by RAMBPHO are indirect. For example, other kinds of phonological–lexical assessment (measuring the effects of phonological neighborhoods; cf. NAM [Luce & Pisoni, 1998]) or use of sensitive rhyme tests (e.g., Andersson, 2002) might have been more appropriate.

However, we deemed it important—within the constraints of the Betula test battery—to create a semantic LTM construct that at least should have some bearing on the mismatch mechanism. Therefore, verbal fluency was chosen because it depends on precise retrieval mechanisms from semantic LTM under time pressure and is assumed to mimic some of the phonological and lexical access speed aspects of retrieval. Vocabulary was used as a proxy for verbal IQ but has also been found to be sensitive to hearing impairment (e.g., Davis et al., 1986; Kiese-Himmel, 2008). Together, they form a semantic LTM construct that was sensitive to hearing loss in the SEM.

This effect of hearing loss on semantic LTM was not predicted by the ELU model, because disuse effects can be assumed to affect semantic LTM only indirectly. The assumption of ELU is that the match/mismatch function is partly dependent on the status of the information delivered by RAMBPHO and partly due to the phonological–lexical representations in semantic LTM, and that mismatch causes disuse of episodic LTM. Semantic LTM is always used in language understanding—even in conditions of mismatch—and, hence, no direct disuse effects are predicted by ELU. The NAM model, on the other hand, was assumed to predict the negative relationship because of a relative inability in old people with hearing impairment to use the neighborhood structure in word identification under adverse conditions.

Nevertheless, the semantic–episodic LTM relationship was predicted by the ELU model, as there is a tight relationship between the status of semantic memory and how well those structures mediate encoding of lexically retrieved items: Without a match, no successful encoding into episodic LTM will occur. This is a strong prediction and a strong result: The standardized beta weight was high indeed (β = .70). It is possible that the NAM or similar models would predict this effect by extrapolation, but


they have not focused on the possibility. Submodels including age-related variance did not affect the overall interpretation of the semantic–episodic LTM relationship.

Finally, the overall memory composite latent construct was also related to hearing impairment. By virtue of the above reasoning, this was not predicted by ELU or by any semantic or lexical model, nor by any of the hypotheses related to episodic memory.

Acclimatization and Hearing Compensation

It is important to note that the duration of hearing aid use in the present study did not break the link between auditory impairment and episodic LTM. This was true despite reasonable acclimatization to the aid. We know that, despite the use of hearing aids, suboptimal speech understanding may arise due to ecologically relevant variations of signal and noise variables—for example, in rate of speech (Tun & Wingfield, 1999; Wingfield, 1996); levels of context (Pichora-Fuller, 2008); reverberation and background noise (informational and energetic masking; Brungart, 2001); and various spatial listening conditions (Blauert, 1997; Freyman, Helfer, McCall, & Clifton, 1999; Li, Daneman, Qi, & Schneider, 2004). All of these ecological factors may, in one way or another, contribute to mismatches depending on the adequacy of signal processing, the type of hearing impairment, and how these factors interact with cognition (Stenfelt & Rönnberg, 2009).

Although acclimatization to the hearing aids could be assumed in the majority of cases in the present sample, these acclimatization periods are relatively short in relation to the time span across which an age-related hearing loss typically progresses. The time period during which participants did not use a hearing aid—despite the fact that they could have benefited from amplification—may also be significant (Davis, Smith, Ferguson, Stephens, & Gianopoulos, 2007). This implies that a hearing compensation hypothesis may have to be tested by studying participants who have worn their hearing aids for much longer periods of time than in the current sample. Therefore, some caution is warranted before we draw the conclusion that hearing aid use does not prevent the cognitive deterioration associated with hearing loss. One related general consequence of the present results is that the future design of hearing aid signal processing algorithms should consider the potential benefits to be reaped in terms of preserved episodic and semantic LTM function, were hearing acuity to be adequately preserved. We therefore also need long-term evaluations of the kinds of signal processing in hearing aids that actually minimize the number of mismatches with semantic LTM in everyday life.

Conclusions

This study led to four major conclusions, which are detailed in the paragraphs that follow.

First, the key result is that in the sample of 160 hearing aid users tested in the present study, degree of hearing loss (in both ears, analyzed separately) is related to episodic LTM performance, whereas STM performance is unrelated to hearing loss. This result holds true even though the participants had become accustomed to their hearing aids, but the hearing compensation hypothesis needs further evaluation. Furthermore, it applies when we include age in the SEM models—that is, the models are still statistically acceptable after controlling for age, especially for the better ear. It should also be noted that age, as such, accounts for a substantial portion of the variance in episodic LTM, but the focus of the present article was on hearing loss–related variance. This overall result of different relations between hearing loss and the memory systems (i.e., STM and episodic LTM) supports the ELU prediction.

Second, the hypothesis of a general sensory decline that is paralleled by a cognitive decline (Baltes & Lindenberger, 1997; Lindenberger & Baltes, 1994) is not supported by SEM model testing of episodic LTM based on either visual or auditory factors, or even on a combined visual plus auditory latent sensory factor. The data from a test with uncorrected vision do not support the common-cause hypothesis, and neither do the separate correlations for the half of the subsample with poorer visual acuity (cf. Lindenberger & Ghisletta, 2009).

Third, the more precise nature of the mechanisms behind the hearing loss–episodic LTM deficit link could not be unequivocally determined. Information degradation (e.g., Schneider et al., 2002) or attention allocation (Tun et al., 2009) could not be ruled out at the level of SEM, but when age was partialed out at the level of simple correlations, the hearing loss–episodic LTM effect was observed even for nonauditory tasks (i.e., SPTs), supporting the generality of the ELU concept. Some support for the Tun et al. (2009) prediction was obtained in the divided attention task, and recent theoretical developments (Heinrich & Schneider, 2010) may prove to have integrative power when it comes to different kinds of attention manipulations and how they selectively relate to involvement of the STM and episodic LTM systems.

Fourth, the relationship between hearing loss and semantic memory was predicted by the NAM (Luce & Pisoni, 1998) but not by the ELU model. However, only the ELU model predicted the dependence between semantic LTM and episodic LTM. Including age-related variance did not affect the overall interpretation of the episodic–semantic LTM relationship.


Acknowledgments

This research was supported by Linnaeus Centre HEAD Grant 349-2007-8654 from the Swedish Research Council, awarded to the first author, and by grants from the Swedish Council for Research in the Humanities and Social Sciences (Grant F337/1988–2000), the Swedish Council for Social Research (Grants 1988–1990: 88-0082, 311/1991–2000), and the Swedish Research Council (Grants 2001-6654, 2002-3794, and 2003-3883), awarded to the last author. We thank Kathy Pichora-Fuller at the University of Toronto, Ontario, Canada, for her constructive comments on this article.

References

Allen, N. H., Burns, A., Newton, V., Hickson, F., Ramsden, R., Rogers, J., . . . Morris, J. (2003). The effects of improving hearing in dementia. Age and Ageing, 32, 189–193.
Anderson, N. D., Craik, F. I., & Naveh-Benjamin, M. (1998). The attentional demands of encoding and retrieval in younger and older adults: 1. Evidence from divided attention costs. Psychology and Aging, 13, 405–423.
Andersson, U. (2002). Deterioration of the phonological processing skills in adults with an acquired hearing loss. European Journal of Cognitive Psychology, 14, 335–352.
Appolonio, I., Carabellese, C., Frattola, L., & Trabucchi, M. (1996). Effects of sensory aids on the quality of life and mortality of elderly people: A multivariate analysis. Age and Ageing, 25, 89–96.
Arlinger, S. (2003). Negative consequences of uncorrected hearing loss—a review. International Journal of Audiology, 42, S17–S20.
Baddeley, A. D., Lewis, V., Eldridge, M., & Thomson, N. (1984). Attention and retrieval from long-term memory. Journal of Experimental Psychology: General, 113, 518–540.
Baltes, P., & Lindenberger, U. (1997). Emergence of a powerful connection between sensory and cognitive functions across the adult life span: A new window to the study of cognitive aging? Psychology and Aging, 12, 12–21.
Bentler, P. M. (1990). Comparative fit indexes in structural models. Psychological Bulletin, 107, 238–246.
Blauert, J. (1997). Spatial hearing: The psychophysics of human sound localization. Cambridge, MA: MIT Press.
Bopp, K. L., & Verhaegen, P. (2005). Aging and verbal memory span: A meta-analysis. Journals of Gerontology, Series B: Psychological Sciences, 60, 223–233.
Browne, M. W., & Cudeck, R. (1993). Alternate ways of assessing model fit. In K. A. Bollen & J. S. Long (Eds.), Testing structural equation models (pp. 136–162). Newbury Park, CA: Sage.
Brungart, D. S. (2001). Informational and energetic masking effects in the perception of two simultaneous talkers. The Journal of the Acoustical Society of America, 109, 1101–1109.
Cacciatore, F., Napoli, C., Abete, P., Marciano, E., Triassi, M., & Rengo, F. (1999). Quality of life determinants and hearing function in an elderly population: Osservatorio Geriatrico Campano Study Group. Gerontology, 45, 323–328.
Chen, F., Curran, P. J., Bollen, K. A., Kirby, J., & Paxton, P. (2008). An empirical evaluation of the use of fixed cutoff points in RMSEA test statistic in structural equation models. Sociological Methods & Research, 36, 462–494.
Christensen, H., & Mackinnon, A. J. (2004). Exploring the relationships between sensory, psychological, genetic, and health measures in relation to the common cause hypothesis. In R. A. Dixon, L. Bäckman, & L.-G. Nilsson (Eds.), New frontiers in cognitive aging (pp. 217–234). New York, NY: Oxford University Press.
Dahlin, E., Nyberg, L., Bäckman, L., & Neely, A. S. (2008). Plasticity of executive functioning in young and older adults: Immediate training gains, transfer, and long-term maintenance. Psychology and Aging, 23, 720–730.
Daneman, M., & Carpenter, P. A. (1980). Individual differences in working memory and reading. Journal of Verbal Learning and Verbal Behavior, 19, 450–466.
Davis, J., Elfenbein, J., Schum, R., & Bentler, R. (1986). Effects of mild and moderate hearing impairments on language, educational, and psychosocial behavior of children. Journal of Speech and Hearing Disorders, 51, 53–62.
Davis, A., Smith, P., Ferguson, M., Stephens, D., & Gianopoulos, I. (2007). Acceptability, benefit and costs of early screening for hearing disability: A study of potential screening tests and models. Health Technology Assessment, 11, 1–294.
DeDe, G., Caplan, D., Kemtes, K., & Waters, G. (2004). The relationship between age, verbal working memory, and language comprehension. Psychology and Aging, 19, 601–616.
Dobie, R. A. (1996). Compensation for hearing loss. Audiology, 35, 1–7.
Dureman, I. (1960). SRB: 1. Stockholm, Sweden: Psykologiförlaget.
Fahlander, K., Wahlin, Å., Almkvist, O., & Bäckman, L. (2002). Cognitive functioning in Alzheimer's disease and vascular dementia: Further evidence for similar patterns of deficits. Journal of Clinical and Experimental Neuropsychology, 24, 734–744.
Folstein, M. F., Folstein, S. E., & McHugh, P. R. (1975). "Mini-mental state": A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12, 189–198.
Foo, C., Rudner, M., Rönnberg, J., & Lunner, T. (2007). Recognition of speech in noise with new hearing instrument compression release settings requires explicit cognitive storage and processing capacity. Journal of the American Academy of Audiology, 18, 553–566.
Freyman, R. L., Helfer, K. S., McCall, D. D., & Clifton, R. K. (1999). The role of perceived spatial separation in the unmasking of speech. The Journal of the Acoustical Society of America, 106, 3578–3588.
Gallacher, J. (2005). Hearing, cognitive impairment and aging: A critical review. Reviews in Clinical Gerontology, 14, 1–11.
Heinrich, A., & Schneider, B. A. (2010). Elucidating the effects of ageing on remembering perceptually distorted word pairs. Quarterly Journal of Experimental Psychology, 64, 186–205. doi:10.1080/17470218.2010.492621
Hofer, S. M., Berg, S., & Era, P. (2003). Evaluating the interdependence of aging-related changes in visual and auditory acuity, balance, and cognitive functioning. Psychology and Aging, 18, 285–305.


Hultsch, D. F., Hertzog, C., Small, B. J., McDonald-Miszczak, L., & Dixon, R. A. (1992). Short-term longitudinal change in cognitive performance in later life. Psychology and Aging, 7, 571–584.
Ibertsson, T., Hansson, K., Asker-Àrnason, L., Sahlén, B., & Mäki-Torkko, E. (2009). Speech recognition, working memory and conversation in children with cochlear implants. Deafness and Education International, 11, 132–151.
International Organization for Standardization. (1989). ISO 8253-1:1989. Acoustics—Audiometric test methods—Part 1: Basic pure tone air and bone conduction threshold audiometry. Geneva, Switzerland: Author.
Janowsky, J. S., Shimamura, A. P., & Squire, L. R. (1989). Source memory impairments in patients with frontal lobe lesions. Neuropsychologia, 27, 1043–1056.
Johansson, M. S. K., & Arlinger, S. (2002). Hearing threshold levels for an otologically unscreened, non-occupationally noise-exposed population in Sweden. International Journal of Audiology, 41, 180–194.
Jolles, J., van Boxtel, M. P., Ponds, R. W., Metsemakers, J. F., & Houx, P. J. (1998). Cognitive aging in a longitudinal perspective: The Maastricht Aging Study [MAAS]. Acta Neuropsychiatrica, 10, 81–83.
Jusczyk, P. W., Luce, P. A., & Charles-Luce, J. (1994). Infants' sensitivity to phonotactic patterns in the native language. Journal of Memory and Language, 33, 630–645.
Kiese-Himmel, C. (2008). Receptive (aural) vocabulary development in children with permanent bilateral sensorineural hearing impairment. Journal of Laryngology & Otology, 122, 458–465.
Kirk, K. I., Pisoni, D. B., & Osberger, M. J. (1995). Lexical effects on spoken word recognition by pediatric cochlear implant users. Ear and Hearing, 16, 470–481.
Kompus, K., Olsson, C.-J., Larsson, A., & Nyberg, L. (2009). Dynamic switching between semantic and episodic memory systems. Neuropsychologia, 47, 2252–2260.
Lazard, D. S., Lee, H. J., Gaebler, M., Kell, C. A., Truy, E., & Giraud, A. L. (2010). Phonological processing in post-lingual deafness and cochlear implant outcome. NeuroImage, 49, 3443–3451.
Lehrl, S., Funk, R., & Seifert, K. (2005). The first hearing aid increases mental capacity. Open controlled clinical trial as a pilot study. Hals- Nasen- und Ohrenheilkunde, 53, 852–862.
Li, L., Daneman, M., Qi, J. G., & Schneider, B. A. (2004). Does the information content of an irrelevant source differentially affect spoken word recognition in younger and older adults? Journal of Experimental Psychology: Human Perception and Performance, 30, 1077–1091.
Li, K. Z., & Lindenberger, U. (2002). Relations between aging sensory/sensorimotor and cognitive functions. Neuroscience and Biobehavioral Reviews, 26, 777–783.
Lindenberger, U., & Baltes, P. B. (1994). Sensory functioning and intelligence in old age: A strong connection. Psychology and Aging, 9, 339–355.
Lindenberger, U., & Ghisletta, P. (2009). Cognitive and sensory declines in old age: Gauging the evidence for a common cause. Psychology and Aging, 24, 1–16.
Lövdén, M., Ghisletta, P., & Lindenberger, U. (2005). Social participation attenuates decline in perceptual speed in old and very old age. Psychology and Aging, 20, 423–434.
Luce, P. A., & Pisoni, D. B. (1998). Recognizing spoken words: The neighborhood activation model. Ear and Hearing, 19, 1–36.
Lunner, T. (2003). Cognitive function in relation to hearing aid use. International Journal of Audiology, 42(Suppl. 1), S49–S58.
Lunner, T., Rudner, M., & Rönnberg, J. (2009). Cognition and hearing aids. Scandinavian Journal of Psychology, 50, 395–403.
Lunner, T., & Sundewall-Thorén, E. (2007). Interactions between cognition, compression, and listening conditions: Effects on speech-in-noise performance in a two-channel hearing aid. Journal of the American Academy of Audiology, 18, 539–552.
Luo, L., & Craik, F. I. M. (2008). Aging and memory: A cognitive approach. Canadian Journal of Psychiatry, 53, 346–353.
Lyxell, B., Andersson, U., Borg, E., & Ohlsson, I.-S. (2003). Working-memory capacity and phonological processing in deafened adults and individuals with a severe hearing impairment. International Journal of Audiology, 42(Suppl. 1), S86–S89.
Lyxell, B., & Rönnberg, J. (1989). Information-processing skill and speech-reading. British Journal of Audiology, 23, 339–347.
Lyxell, B., Wass, M., Sahlén, B., Samuelsson, C., Asker-Àrnarson, L., Ibertsson, T., . . . Hällgren, M. (2009). Cognitive development, reading and prosodic skills in children with cochlear implants. Scandinavian Journal of Psychology, 50, 463–474.
Majerus, S., Amand, P., Boniver, V., Demanez, J.-P., Demanez, L., & Van der Linden, M. (2005). A quantitative and qualitative assessment of verbal short-term memory and phonological processing in 8-year-olds with a history of repetitive otitis media. Journal of Communication Disorders, 38, 473–498.
Morton, N. E. (2006). Genetic epidemiology of hearing impairment. Annals of the New York Academy of Sciences, 630, 16–31.
Nilsson, L.-G. (2000). Memory of actions and words. In E. Tulving & F. I. M. Craik (Eds.), The Oxford handbook of memory (pp. 137–148). Oxford, United Kingdom: Oxford University Press.
Nilsson, L.-G. (2003). Memory function in normal aging. Acta Neurologica Scandinavica, 107(Suppl. 179), 7–13.
Nilsson, L.-G., Adolfsson, R., Bäckman, L., de Frias, C. M., Molander, B., & Nyberg, L. (2004). Betula: A prospective cohort study on memory, health and aging. Aging, Neuropsychology, and Cognition, 11(2–3), 134–148.
Nilsson, L.-G., Bäckman, L., Erngrund, K., Nyberg, L., Adolfsson, R., Bucht, G., . . . Winblad, B. (1997). The Betula prospective cohort study: Memory, health, and aging. Aging, Neuropsychology, and Cognition, 4, 1–32.
Nyberg, L., Nilsson, L.-G., & Bäckman, L. (1992). Recall of actions, sentences, and nouns: Influences of adult age and passage of time. Acta Psychologica, 79, 245–254.
Park, H., & Rugg, M. D. (2008). Neural correlates of successful encoding of semantically and phonologically mediated inter-item associations. NeuroImage, 43, 165–172.


Pichora-Fuller, M. K. (2008). Use of supportive context by younger and older adult listeners: Balancing bottom-up and top-down information processing. International Journal of Audiology, 47(Suppl. 2), S72–S82.
Pichora-Fuller, M. K., Schneider, B. A., & Daneman, M. K. (1995). How young and old adults listen to and remember speech in noise. The Journal of the Acoustical Society of America, 97, 593–608.
Pisoni, D. B., Nusbaum, H. C., Luce, P. A., & Slowiaczek, M. L. (1985). Speech perception, word recognition and the structure of the lexicon. Speech Communication, 4, 75–95.
Pulvermuller, F., Kujala, T., Shtyrov, Y., Simola, J., Tiitinen, H., Alku, P., . . . Näätänen, R. (2001). Memory traces for words as revealed by the mismatch negativity. NeuroImage, 14, 607–616.
Roche, R. A. P., Mullally, S. L., McNulty, J. P., Hayden, J., Brennan, P., Doherty, C. P., et al. (2009, January 19). Prolonged rote learning produces delayed memory facilitation and metabolic changes in the hippocampus of the ageing human brain. BMC Neuroscience, 10, Article 136. doi:10.1186/1471-2202-10-136
Rönnberg, J. (2003). Cognition in the hearing impaired and deaf as a bridge between signal and dialogue: A framework and a model. International Journal of Audiology, 42, S68–S76.
Rönnberg, J., Rudner, M., Foo, C., & Lunner, T. (2008). Cognition counts: A working memory system for ease of language understanding (ELU). International Journal of Audiology, 47, S99–S105.
Rönnlund, M., Nyberg, L., Bäckman, L., & Nilsson, L.-G. (2005). Stability, growth, and decline in adult life span development of declarative memory: Cross-sectional and longitudinal data from a population-based study. Psychology and Aging, 20, 3–18.
Rudner, M., Foo, C., Rönnberg, J., & Lunner, T. (2009). Cognition and aided speech recognition in noise: Specific role for cognitive factors following nine-week experience with adjusted compression settings in hearing aids. Scandinavian Journal of Psychology, 50, 405–418.
Sarampalis, A., Kalluri, S., Edwards, B., & Hafter, E. (2009). Objective measures of listening effort: Effects of background noise and noise reduction. Journal of Speech, Language, and Hearing Research, 52, 1230–1240.
Schneider, B. A., Daneman, M., & Pichora-Fuller, M. K. (2002). Listening in aging adults: From discourse comprehension to psychoacoustics. Canadian Journal of Experimental Psychology, 56, 139–152.
Sommers, M. S. (1996). The structural organization of the mental lexicon and its contribution to age-related declines in spoken-word recognition. Psychology and Aging, 11, 333–341.
Squire, L. R., Clark, R. E., & Bayley, P. J. (2004). Medial temporal lobe function and memory. In M. S. Gazzaniga (Ed.), The cognitive neurosciences (pp. 691–708). Cambridge, MA: MIT Press.
Stenfelt, S., & Rönnberg, J. (2009). The signal–cognition interface: Interactions between degraded auditory signals and cognitive processes. Scandinavian Journal of Psychology, 50, 385–393.
Sternäng, O., Wahlin, Å., & Nilsson, L.-G. (2008). Examination of the processing speed account in a population-based longitudinal study with narrow age cohort design. Scandinavian Journal of Psychology, 49, 419–428.
Tesch-Römer, C. (1997). Psychological effects of hearing aid use in older adults. The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences, 52, 127–138.
Tulving, E. (1983). Elements of episodic memory. Oxford, United Kingdom: Oxford University Press.
Tulving, E., & Colotla, V. A. (1970). Free recall of trilingual lists. Cognitive Psychology, 1, 86–98.
Tun, P. A., McCoy, S., & Wingfield, A. (2009). Aging, hearing acuity, and the attentional costs of effortful listening. Psychology and Aging, 24, 761–766.
Tun, P. A., & Wingfield, A. (1999). One voice too many: Adult age differences in language processing with different types of distracting sounds. The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences, 54, P317–P327.
Valentijn, S. A. M., van Boxtel, M. P. J., van Hooren, S. A. H., Bosma, H., Beckers, H. J. M., Ponds, R. V. H. M., & Jolles, J. (2005). Change in sensory functioning predicts change in cognitive functioning: Results from a 6-year follow-up in the Maastricht aging study. Journal of the American Geriatrics Society, 53, 374–380.
Van der Linden, M., Hupet, M., Feyereisen, P., Schelstraete, M., Bestgen, M., Bruyer, G. L., . . . Seron, X. (1999). Cognitive mediators of age-related differences in language comprehension and verbal processing. Aging, Neuropsychology, and Cognition, 6, 32–55.
Van der Sluis, S., de Jong, P. F., & van der Leij, A. (2007). Executive functioning in children, and its relations with reasoning, reading, and arithmetic. Intelligence, 35, 427–449.
Van Hooren, S. A., Anteunis, L. J., Valentijn, S. A., Bosma, H., Ponds, R. W., Jolles, J., & van Boxtel, M. P. (2005). Does cognitive function in older adults with hearing impairment improve by hearing aid use? International Journal of Audiology, 44, 265–271.
Vargha-Khadem, F., Gadian, D. G., Watkins, K. E., Connelly, A., Van Paesschen, W., & Mishkin, M. (1997, July 18). Differential effects of early hippocampal pathology on episodic and semantic memory. Science, 277, 376–380.
Wahlin, Å., Bäckman, L., & Winblad, B. (1995). Free recall and recognition of slowly and rapidly presented words in very old age: A community based study. Experimental Aging Research, 21, 251–271.
Wechsler, D. (1991). Manual for the Wechsler Adult Intelligence Scale—Revised. New York, NY: The Psychological Corporation.
Wheeler, M. A., Stuss, D. T., & Tulving, E. (1997). Toward a theory of episodic memory: The frontal lobes and autonoetic consciousness. Psychological Bulletin, 121, 331–354.
Wingfield, A. (1996). Cognitive factors in auditory performance: Context, speed of processing, and constraints of memory. Journal of the American Academy of Audiology, 7, 175–182.

