Psychological Bulletin, 1974, Vol. 81, No. 5, 284-310

HUMAN INFORMATION PROCESSING AND SENSORY MODALITY:

CROSS-MODAL FUNCTIONS, INFORMATION COMPLEXITY, MEMORY, AND DEFICIT 1

DAVID FREIDES 2

Emory University

The relationship between sensory modality and information processing was examined. Available hypotheses concerning modality influence were not supported, but an interaction between modality and information complexity could be discerned. With simpler information (reaction signals and unidimensional discriminations) modalities are equivalent; contextual and parametric factors determine intermodal relations. With complex information (spatial or temporal patterns) specific modalities are adept with certain kinds of information, and what is received in nonadept modalities is recoded in the most adept. Generalizations about modality-information interactions depend on control of response factors. Implications of data and generalizations for theories of memory, for measuring deficit in pathology, and for resolving the nativism-empiricism controversy are discussed.

1 The author would like to thank several of his students for their reactions to this article and Boyd McCandless, Donald Schurman, Martha Wilson, and Thelma Freides for their thoughtful criticism and editorial suggestions.

2 Requests for reprints should be sent to David Freides, Department of Psychology, Emory University, Atlanta, Georgia 30322.

There are many views concerning differences among sensory modalities. One, here called "modal specific," is the view that each modality has distinct patterns of transduction, specific sites of neural transmission and processing, and unique sensory and perceptual qualities. Even though each sensory system is separate, vision is often treated as though it represented all of perception. The other extreme, here termed "nonmodal," is one in which the specific qualities of each modality are disregarded and a single mode of information processing is assumed. In this approach, common in the literature on memory and information processing, materials are usually verbal in nature. To the extent that language is an auditory code, there is an inadvertent emphasis on the auditory mode that contrasts with the more explicit emphasis given to vision within the modal-specific framework.

The applied literature dealing with educational, emotional, and behavioral disorders largely reflects the modal-specific orientation, but there is also a special emphasis on integration among sensory modalities. Reading, for example, is a task that requires translation from an auditory to a visual code and vice versa. Reading impairment, it has been suggested, is a failure of cross-modal integration. This hypothesis has been investigated in research on cross-modal functions, in which the extent to which information conveyed to one sensory modality is made available to another is examined.

This review was undertaken to evaluate the promising sensory integration hypothesis and was approached without prior notions as to its limitations or implications. It extends to related questions concerning modality in the English language literature of the last 15 to 20 years. A review of a much earlier literature can be found in Ryan (1940).

It appears that the evidence does not support the sensory integration hypothesis or either of the more general modal-specific or nonmodal approaches. However, if information complexity is analyzed, a set of relations can be discerned fitting a coherent pattern. Briefly, the modal-specific position, with some modifications, is consistent with the data on complex pattern information processing, whereas the nonmodal position is supported by evidence for simpler information loads.



However, there are further complications because incapacity in one modality may be associated with compensatory enhancement in others, and patterns of functioning depend on the subject's prior experience and response biases.

These broad generalizations were distilled from many diverse studies, none of which stated explicitly the points put forward. The issues may be examined in a stepwise survey of the relevant research.

THE SENSORY INTEGRATION HYPOTHESIS AND ITS EMPIRICAL SUPPORT

Birch and his colleagues proposed that sensory integration is critical in human information processing. Their point of view, as stated by Birch and Lefford (1967), incorporated the classic empiricist position about perceptual development:

information derived from proximoceptive input is dominant in controlling the actions of infants. However, with age, proximoception comes to be increasingly replaced by teloreceptor control systems. Simultaneously with the emergence of teloreceptor preeminence, a second mechanism of input organization seems to be evolving. It consists of the increasing tendency of the separate sensory modalities to integrate with one another and of organized and directed action to be subserved by intersensory or multimodal rather than unimodal patterning [italics added; pp. 5-7].3

3 The writers attributed this point of view to Sherrington (1951). In an earlier article (Birch & Lefford, 1963), they quoted Sherrington (1951), specifying it to be from pp. 287-289. Part of this quotation was repeated, the reference citing the same 1951 edition, in the later article: "Not new senses but better liaison between old senses is what the developing nervous system . . . has stood for [Belmont, Birch, & Karp, 1965, p. 410]." However, these quotations are not found in the 1951 volume but are in an earlier edition (Sherrington, 1941), on p. 279. Because Sherrington's text adjacent to the quotation was retained in the later edition, but the quotation was eliminated and apparently not developed elsewhere, it appears that Sherrington was reconsidering his emphasis, perhaps finding his statement misleading. It is not feasible here to assess Birch's treatment of Sherrington, but it is clear that even in the 1941 edition only phylogenetic development and not ontogeny was under consideration.

Birch and L. Belmont (1964) proposed an auditory-visual, matching-to-sample method for studying sensory integration (hereafter referred to as the Birch-Belmont procedure).

A sound pattern tapped by the examiner is matched to one of three printed dot patterns. It was claimed that this procedure yielded a measure of auditory-visual4 integration and that cross-modal tasks were intrinsically more difficult than comparable intramodal tasks. Using this procedure, Birch and Belmont (1965b) found improvement in accuracy and decreased variability with increasing age in normal children between the ages of 5 and 11. In other studies, the same or similar procedures revealed deficiency (relative to control groups) of intersensory function in retarded readers (Birch & Belmont, 1964, 1965b; Muehl & Kremenak, 1966), brain-damaged persons (Aten & Davis, 1968; Birch & I. Belmont, 1964; Birch & L. Belmont, 1965a), psychiatrically disturbed adolescents (Hertzig & Birch, 1966, 1968), malnourished children (Cravioto, Gaona, & Birch, 1967), and schizophrenic children (Walker & Birch, 1970).

In another procedure (Birch & Lefford, 1963) intersensory equivalences among the haptic, visual, and kinesthetic modalities were examined. Eight of the geometric forms from the Seguin Form Board were presented visually, haptically (by allowing exploration with a concealed hand), and kinesthetically (by moving a concealed arm gripping a stylus through an outline of the stimulus). Visual-haptic, visual-kinesthetic, and haptic-kinesthetic combinations were studied in children between the ages of 5 and 11. The comparison stimuli were two forms identical to and seven forms different from the standard. The standard was re-presented before each match in a same-different procedure.

Birch and Lefford (1963, 1967) found and then replicated an increase in accuracy with age for all intersensory tasks, the visual-haptic task being the easiest.

4 The sequence in which modalities are mentioned designates standard or match status. Thus an auditory-visual procedure specifies an auditory standard and a visual match, whereas a visual-auditory procedure involves a visual standard and an auditory match. Visual-auditory and auditory-visual are "converse" sequences. If a set of converse sequences are of equivalent difficulty, the adjective "symmetric" is employed; if not, "asymmetric."


With these tasks Cravioto, DeLicardie, and Birch (1966) differentiated malnourished from more adequately nourished (defined in terms of height) children. Although there is less procedural uniformity among studies on haptic and visual matching of forms, further evidence of deficit has been obtained for brain-damaged (Conners & Barta, 1967; Rudel & Teuber, 1971), emotionally disturbed (Conners & Barta, 1967), disadvantaged (Conners, Schuette, & Goldman, 1967), and immature (Abravanel, 1968; Connolly & Jones, 1970; Millar, 1971, 1972b; Milner & Bryant, 1970; Rudel & Teuber, 1964; Zaporozhets, 1965) subjects.

In general these data support the sensory integration hypothesis and give promise of providing measures of the impairment present in a number of pathological conditions. In a number of articles methodological questions were raised regarding details of the auditory-visual procedure (Beery, 1967; Bryden, 1972; Ford, 1967; Goodnow, 1971b; Kahn & Birch, 1968; Rudnick, Sterritt, & Flax, 1967; Sterritt & Rudnick, 1966), the visual-haptic procedures (Milner & Bryant, 1970), or the use of incentives and feedback (DeLeon, Raskin, & Gruen, 1970; Muehl & Kremenak, 1966). These yielded procedural improvements, but no disconfirmation of the original findings by Birch and L. Belmont (1964) was discovered; the implicated subjects always did worse on the cross-modal task than their controls. A more substantive challenge to the sensory integration hypothesis came from an analysis of research design.

Cross-modal versus Intramodal Difficulty

Birch and L. Belmont (1964) assumed that intersensory tasks are intrinsically more difficult than intrasensory and claimed that the auditory-visual matching procedure revealed a selective intersensory deficit. Bryant (1968) pointed out that intrasensory controls had not been run, so there was no evidence that intramodal functions were not impaired.

Studies of auditory-visual pattern matching by Muehl and Kremenak (1966), McGrady and Olson (1970), Zurif and Carson (1970), Rudel and Teuber (1971), Sterritt, Martin, and Rudnick (1971), Bryden (1972), Kuhlman and Wolking (1972), Rudnick, Martin, and Sterritt (1972), and Vande Voort, Senf, and Benton (1972) have substantiated the challenge: Retarded readers, brain-damaged, and middle-class and impoverished children were as impaired intramodally as cross-modally. Furthermore, studies examining vision and touch in normal children and adults (Abravanel, 1972; Cashdan, 1968; Millar, 1972b; Milner & Bryant, 1970; Rose, Blank, & Bridger, 1972) found cross-modal performance equivalent to that of the slowest modality, touch.

A further challenge to the Birch position comes from studies that examined the empiricist assumption in sensory integration theory that visual perception depends on prior tactual knowledge. DeLeon, Raskin, and Gruen (1970) and Millar (1971) evaluated visual and tactile matching when the standard was explored both visually and haptically as compared with controls who had either visual or haptic information.

The nursery-school subjects in both instances demonstrated that tactile information contributed nothing to the visual, but there was some evidence for visual enhancement of tactual information. Intramodal tactile performance typically was the most difficult. Fico and Brodsky (1972) obtained similar results from college students using measures of both recognition and recall. Cashdan and Zung (1970) partially replicated this design and found recognition errors were greater with haptic than with visual standards.

A possible exception to the disconfirmation of empiricist assumptions comes from the study by Denner and Cashdan (1967), which compared visual recognition memory for two-dimensional forms after exposure with vision alone, vision plus haptic exploration, and vision plus manipulation without haptic exploration (where the forms were encased in a transparent ball). Vision alone was found to be inferior to the others, and the authors suggested that activity facilitated memory. However, Weiner and Goodnow (1970) arranged for the manipulated and vision-alone stimuli to be equally interesting and found no differences in recognition following the two exposure conditions.


Butter and Zung (1970) and Cronin (1973), using a somewhat different procedure, also found no advantage of manipulation plus vision over purely visual inspection for later recognition.

Another test of the Birch position is the degree of symmetry in converse cross-modal sequences. The Birch-Belmont procedure, an auditory-visual sequence, was offered as an absolute measure of sensory integration, but there was no reason given why the reverse sequence could not serve as well. Beery (1967) found that the visual-auditory procedure was easier than the auditory-visual procedure for both poor readers and controls, the controls excelling in both conditions. However, with correction for a differential opportunity for correct guessing, the difference disappeared. On the other hand, Muehl and Kremenak (1966), Rudel and Teuber (1971), and Bryden (1972) found significant asymmetries, the easier task being the one with a visual standard. In studies of visual-tactile matching (Abravanel, 1972; Cashdan, 1968; Millar, 1972b; Milner & Bryant, 1970; Rose, Blank, & Bridger, 1972; Rudel & Teuber, 1971), the tactile-visual sequence was never found to be easier than its converse in normal subjects; it was either equally or more difficult. Rudel and Teuber (1964) and Zung (1971) reported no differences between any modality combination with simple materials.

The evidence from studies using the kinds of tasks investigated by Birch and his colleagues disconfirms important features of the sensory integration hypothesis. Cross-modal problems were not more difficult than intramodal problems, converse sequences were not symmetrical, and there was no evidence that visual perception depends on tactile functions. What is discernible is that visual tasks were easier than auditory and haptic tasks, and that cross-modal matches with visual standards were easier than other combinations. Cross-modal matches were either intermediate between their components' intramodal levels of difficulty or as difficult as the most difficult intramodal combination.
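The guessing correction Beery applied is not reproduced in this review; purely as an illustration of the kind of adjustment involved, a standard correction for chance success with k response alternatives (an assumption here, not Beery's reported formula) is

p_{\text{corrected}} = \frac{p_{\text{observed}} - 1/k}{1 - 1/k},

so that with the three comparison patterns of the Birch-Belmont procedure (k = 3) an observed accuracy of .33 corrects to 0, that is, to chance-level performance.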

In the studies previously reviewed, citations were given to articles with discrepant findings in which unidimensional discriminations or signal detections were required. These tasks appeared to be simpler than the pattern discriminations employed in the research directly examining Birch's hypotheses; hence it was suggested that informational complexity might influence the pattern of cross-modal relations. A review of these data, classified according to type of stimulus, suggests an amendment to the generalization concerning visual functions and modal relationships previously described.

MODALITY DIFFERENCES WITH UNIDIMENSIONAL STIMULI

Length

Connolly and Jones (1970) hypothesized (like Birch, though for reasons to be described in the section on memory) that cross-modal tasks should be more difficult than intramodal. They used a length estimation task, and the modalities were kinesthetic (drawing a line) and visual (having a tape measure set to a visually determined length). The subjects were normal 5-, 8-, and 11-year-olds and adults. They found that intramodal matches in each modality were of equivalent difficulty and were easier than the cross-modal, and that the ease of cross-modal matches differed depending on which modality came second. Contrary to findings with shapes, in which cross-modal combinations with visual standards were easier than the converse, they found that the kinesthetic-visual combination was easier than the visual-kinesthetic.

Kelvin (1954) ran intramodal and cross-modal comparisons of length (longer or shorter), a single standard against a series of comparison stimuli symmetrically longer and shorter than the standard, and found no differences among modality combinations. A later study (Kelvin & Mulik, 1958) replicated this finding with symmetric comparisons. With an asymmetric comparison series, there were intramodal trends that reflected the altered context, and the cross-modal comparisons selectively showed a significant effect.

Davidon and Mather (1966) pursued Kelvin's (1954) work, investigating context effects on both the standard and comparison series while manipulating psychophysical method and modality combinations.


They found that changes in the standard equally affected evaluation of the standard in all modality combinations but that the comparison series adaptation level (in all modality combinations) was not affected by changes in the standard. On the other hand, variability in intermodal data was significantly greater than in intramodal data. They also partly repeated Kelvin and Mulik's (1958) judgmental procedure and found that a tactual standard and visual comparison series did not show the consistent changes in equivalence found in all other modality combinations when the standard was changed. Visual functions were more resistant than tactual to anchoring effects, but the anchors present in the two modalities were probably not equivalent because the visual stimuli were viewed in a detailed context not apparent when the stimulation was tactile.

Teghtsoonian and Teghtsoonian (1965) compared magnitude estimates5 of length in two modalities and found the resulting functions to be highly comparable intramodally and cross-modally. Plots of visual and felt estimates of length fell on a straight line with an approximate slope of 1, indicating direct comparability in both modalities and no asymmetry. Their later study (Teghtsoonian & Teghtsoonian, 1970) yielded different results when the stimulus was grasped between the thumb and index finger of one hand rather than between the index fingers of two hands.

Abravanel (1971) found no differential effects for modality combinations in a length estimation problem. For the standard, subjects had to combine separate lengths presented either intramodally or cross-modally (haptic-visual sequence only), but the comparison was the same throughout; a variable length was adjusted by the examiner to the subject's visual judgment of equivalence to the combined standard. Similarly, Millar (1972a) made inspection time differences between visual and haptic stimulation comparable by segmenting the visual stimulus and administering the parts sequentially at a rate comparable to what is required for haptic exploration.

5 Only two of many articles using Stevens's power law magnitude estimation procedure are mentioned here. The reader is referred to Stevens (1970) for a recent review of extensive cross-modal work within that frame of reference.

However, Millar used a joystick that was moved, felt, and seen (by means of a mounted light) as appropriate by the subject but operated, when used to present stimulation, by the examiner through a tandem arrangement in the apparatus. There were many motor aspects to the response required and many uncontrolled factors in the tandem arrangement. Millar's (1972a) results for children aged six and eight were similar to those of Connolly and Jones (1970) in that the cross-modal tasks were more difficult than the intramodal (though these differences were not entirely reliable), and they were different in that the visual-kinesthetic combination was significantly easier than its converse. Among four-year-olds the pattern was entirely different. In view of the novelty and uncontrolled complexity of the measurement procedures, Millar was unsure as to how to interpret the results, and their place in the overall picture is unclear.
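As background for the magnitude estimation results above (see Footnote 5), the slope of a cross-modal matching function has a standard interpretation under Stevens's power law; the symbols below are generic, not parameters reported in the studies reviewed. If magnitude estimates in the two modalities both follow

\psi_{v} = k_{v}\,\phi^{\beta_{v}}, \qquad \psi_{h} = k_{h}\,\phi^{\beta_{h}},

then equating the two sensations in a cross-modal match implies, in log-log coordinates,

\log \phi_{h} = \frac{\beta_{v}}{\beta_{h}}\,\log \phi_{v} + \text{constant},

so a matching function with a slope near 1, as Teghtsoonian and Teghtsoonian (1965) report for length, indicates nearly equal exponents, that is, directly comparable growth of apparent length in the two modalities.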

These diverse experiments suggest that length is associated with a different pattern of modality relationships than shape and pattern. There is more evidence for intramodal and cross-modal equivalence, although such equivalence appears to be differentially influenced by context effects and by response requirements, the influence being greater in the cross-modal than the intramodal situation.

Texture

Bjorkman (1967) compared visual and tactual intramodal and cross-modal judgments of roughness in adults. The subject compared a sandpaper grit standard with a range of comparison stimuli, judging either same or different. Separate psychometric functions were computed for all modal combinations. The intramodal plots were linear with a slope of 1, indicating subjects could match textures in either modality with great accuracy. There were, however, nonlinear exponential relationships when matching across modalities. In a sense, then, cross-modal operations were more difficult than intramodal.

Rate

Gebhard and Mowbray (1959) studied visual flicker and auditory flutter.


They reported earlier studies of changes in difference limen for the detection of frequency shifts as frequency increased from 1 to 45 cycles per second. The curve for flutter was an orderly linear function, whereas that for flicker was irregular and difficult to interpret. Gebhard and Mowbray then reported a cross-modal procedure. The subject matched flicker to flutter and the converse, both simultaneously and successively. Threshold functions were found to increase markedly over previous intramodal determinations. Furthermore, the task was easier with simultaneous matches, and the relations between modalities were asymmetric. It was easier to match flicker to flutter than the converse, and observers reported that when sound was under their control it appeared to drive the light, but the reverse was not true.

Duration

This section relies on the summaries by Goldstone and Lhamon (1971, 1972) of their own research program; specific studies and other works are cited only where necessary. The basic finding was that subjects judged auditory stimuli longer than visual stimuli of equal duration. Comparably, in reproducing a given duration (one to four seconds), auditory durations were significantly shorter than visual. If intramodal and cross-modal combinations were run, the extreme results occurred with cross-modal combinations, the comparison series determining which intramodal pattern was exceeded. That is, visual reproductions of visual stimuli yielded longer durations than auditory reproductions of auditory stimuli; visual reproductions of auditory stimuli (an auditory-visual sequence in this article's terminology) yielded the longest durations of the group (exceeding the visual-visual combination), whereas auditory reproductions of visual stimuli (a visual-auditory sequence) were the shortest (less than the auditory-auditory combination).

The differences appear to be robust, recurring when measurements are made with different kinds of subjects, techniques, and contexts. The exceptions involve very brief inputs.

If empty intervals bounded by auditory clicks or visual flashes are employed, the modality effect is found with the method of absolute judgment (9-point scale) but not with the method of comparative judgment (judging longer or shorter, given two durations). In recent work using information transmission as a measure of performance, the reverse has emerged; that is, paired comparisons revealed striking modality differences, although the differences were attenuated with absolute judgments. In two of the studies using unfilled intervals, complex interactions with modality order occurred. Goldstone and Goldfarb (1963) found the relative duration of filled and unfilled visual, but not auditory, intervals to be influenced by the sequence of modalities. Goldstone and Lhamon (1971) reported greater information transmission with the visual-auditory sequences than with the converse. Similarly, Behar and Bevan (1961), who confirmed that auditory durations were judged longer than visual durations, found that the effects of visual anchors on auditory stimulation were proportional to anchor magnitude but that this was not true of the converse. Also, visual anchors enhanced modality differences, whereas auditory anchors reduced them.

Other discrepant data were reported by Tanner, Patton, and Atkinson (1965), who employed a two-category, forced-choice, direct comparison of intramodal and cross-modal lights and sounds. In any single session subjects had to choose the longer of duration pairs that were .5 and .6, 1.0 and 1.1, or 1.5 and 1.6 seconds. Discrimination was found to be most accurate in the intramodal auditory condition, next in the intramodal visual condition, and least (and symmetrically) in the cross-modal presentations. Also, with cross-modal comparisons at the .5-second duration, it was significantly more likely that the visual signal would be judged longer than the auditory, whereas at the 1- and 1.5-second durations (when the comparisons were more difficult), this difference essentially disappeared.


Tanner et al. (1965) stated that the order of presentation of modalities exerted no influence, but close inspection of the two separate curves reveals an asymmetry in the effects of modality sequence on the influence of stimulus durations, a result similar to previous reports. However, the failure to replicate modality duration differences and one reversal were major discrepancies from the Goldstone and Lhamon (1971) results. One possible explanation would relate to the recurrent finding that with unidimensional problems, procedural and contextual variations influence the pattern of results, especially with cross-modal stimuli. In this light, Tanner et al. may have diminished the auditory-visual difference found by Goldstone by manipulating the ratio of ambient and focal stimulation in the two modalities. The basis for this conjecture follows.

Goldstone and Lhamon (1971, 1972) viewed their modality difference finding as consistent with replicated data showing auditory reaction time to be 30-50 milliseconds faster than visual reaction time. Recently, however, Kohfeld (1971) has shown that if stimuli in the two modalities were equated on the basis of decibel level (the auditory reference .0002 dyne per square centimeter, the visual reference 10^-10 lambert) and ambient stimulation was minimal, then reaction times to visual stimuli in the photopic range would not differ from reaction times to comparable auditory stimuli. Unfortunately, Kohfeld compared signals that differed in frequency complexity, a white light and a 1,000-cycle tone, and the greater frequency range of the visual stimulus may have permitted "compensation" for any intrinsic "slowness" in the visual modality. If with proper controls Kohfeld's findings are verified, then it is possible that modality differences in duration will disappear or attenuate. Kohfeld already has shown that subjective matching of intensity, which Goldstone and others have employed, is not the equivalent of decibel matches. Berglund, Berglund, Ekman, and Frankenhaeuser (1969) have shown that at brief intervals, apparent duration is influenced by stimulus intensity.
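As a gloss on what "equated on the basis of decibel level" involves (stated here in the usual conventions rather than as Kohfeld's exact computation), each stimulus is expressed as a level above its modality's reference, and the two signals are then presented at the same number of decibels:

L_{\text{auditory}} = 20 \log_{10}\!\left(\frac{p}{p_{0}}\right) \text{ dB}, \quad p_{0} = 0.0002 \text{ dyne/cm}^{2}; \qquad L_{\text{visual}} = 10 \log_{10}\!\left(\frac{I}{I_{0}}\right) \text{ dB}, \quad I_{0} = 10^{-10} \text{ lambert}.

The factor of 20 for sound reflects the convention of squaring a pressure (amplitude) ratio to obtain an intensity ratio; luminance is already an intensity-like quantity.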

MODALITY DIFFERENCES WITH REACTION SIGNALS

Brown and Hopkins (1967) determined the detectability of a signal across a range of signal-to-noise ratios in both auditory and visual modalities in the individual subject.

The results for both modalities predicted, in simple additive fashion, the detectability of signals presented simultaneously to both modalities. Similarly, Morrell (1968b) showed that constant auditory clicks administered at brief intervals (20-120 milliseconds) following a visual reaction signal speeded the simple reaction time to the light. The relation between interstimulus interval and reaction time was linear. With a choice reaction time format, the reactions were slowed, but the pattern of relationship remained. Such additive relationships may appear only under special circumstances even with an information load of reaction signals. Morrell (1968a) found that visual flashes facilitated reaction time to auditory signals much less (20-40 milliseconds) than the converse. Earlier, Hershenson (1962) showed that reducing light intensity affected facilitation of simultaneous bisensory signals, but reducing sound intensity did not.
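To make "simple additive fashion" concrete, one way such a combination rule can be written, in signal detection terms, is that bimodal detectability equals the sum of the unimodal values; this is offered only as an illustration of an additive rule, not necessarily the index Brown and Hopkins computed:

d'_{AV} = d'_{A} + d'_{V},

which predicts more bimodal benefit than, for example, independent-decisions (probability) summation, P_{AV} = 1 - (1 - P_{A})(1 - P_{V}).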

There is a large literature related to the additive relationship described by Brown and Hopkins (1967). A recent extensive review (Loveless, Brebner, & Hamilton, 1970) provides many details regarding the circumstances under which additive relations can be found among modalities and the more frequent circumstances in which contextual events complicate the results.

Relevant to the question of deficit is a group of studies on cross-modal reaction time in subjects with different impairments. In this procedure, reaction signals from different modalities were randomly interspersed across trials. Intramodal reaction times, when the prior stimulus was in the same modality, were compared with cross-modal reaction times. In general, reaction time increased whenever the stimulus modality changed; however, the difference between cross-modal and intramodal reaction times was insignificant or minimal among normal subjects (Dinnerstein & Zlotogura, 1968; Kriegel, Sutton, & Kerr, 1973; Zubin & Sutton, 1970). Raab, Deutsch, and Freedman (1960) compared fourth- and fifth-grade good and poor readers and found the good readers' cross-modal reaction times to be faster. Good readers were also superior in overall reaction time to light but not to sound.


Katz and Deutsch (1963) compared first-, third-, and fifth-grade good and poor readers from black ghetto schools. Reaction time was faster with increase in grade and was slower cross-modally than intramodally. In the interaction between reading status and intramodal or cross-modal measure, the poor readers showed twice the cross-modal retardation of the good readers. It is not clear whether patterns of results for specific modalities did not occur or whether the question simply was not raised.

The specific modality in the cross-modal condition was a significant factor in two studies of adult pathological subjects. Sutton, Hakerem, Zubin, and Portnoy (1961) found schizophrenics to be selectively deficient when the cross-modal stimulus was auditory. By contrast, Benton, Sutton, Kennedy, and Brokaw (1962) found patients with cerebral disease deficient in cross-modal reaction time to visual, but not auditory, stimuli, the same pattern seen in some reading retardates. However, Kriegel, Sutton, and Kerr (1973) replicated intramodal and cross-modal reaction time differences in schizophrenics, but not the modality differences.

With unidimensional stimuli and reaction signals, cross-modal problems at times are more difficult than intramodal problems, a finding not encountered in studies of pattern matching. Generally, modal relationships with simpler informational loads appear to be strongly influenced by response requirements, intensity parameters, sequence effects, and anchoring stimuli, variables that may be termed context effects. (These variables are examined in adaptation level theory by Helson, 1964.) Further summary and integration follows consideration of implicated topics such as memory delay, response requirements, and the differentiation of spatial and temporal patterns.

MEMORY DELAY AND SENSORY MODALITY

Goodnow (1971c) pointed out that memory delays in matching tests vary as a function of the specific methods used.

For example, with matching-to-sample procedures, the delay is influenced by the size of the comparison array and whether the stimuli are presented simultaneously or successively.

Goodnow (1971a) examined the effect of delay in a study of visual and tactual matching of forms in which a single match for a same-different judgment was compared against matches of either three or five alternatives. For visual intramodal comparisons she found no change in accuracy with increasing delay, but accuracy diminished with intramodal tactual matches. Converse patterns of cross-modal matching were equally difficult when the delay was minimal, but with increasing delay problems with tactual standards were much more difficult than those with visual standards.

Similarly, in data obtained from children, Milner and Bryant (1970) and Rose, Blank, and Bridger (1972), using a same-different procedure, and Rudel and Teuber (1971), using a matching-to-sample procedure (five comparison objects), all found visual tasks easier than haptic unless the task was so easy there were no differences at all. Cross-modal tasks with visual standards generally were easier than the converse, and delay interfered more with tactual functions than with visual, both in intramodal and cross-modal combinations. Where the task was so difficult that tactual performance was limited at even zero delay, increased delays affected visual performance as well. These findings also parallel those of Cashdan (1968) and Abravanel (1971), where segmentation, which increases delay, did not seriously affect visual functions.

Vande Voort, Senf, and Benton (1972) interposed either a three- or six-second delay between standard and comparison using a Birch-Belmont auditory-visual matching procedure along with intramodal auditory and visual conditions. Subjects were a younger group of 9-year-olds and an older group of 11.5 years, divided among retarded readers and controls. Auditory-visual matching was equivalent to auditory intramodal and inferior to visual intramodal matching accuracy. In the normal group alone, the longer delay selectively lowered cross-modal performance in younger subjects.


Although the studies on matching of form suggested that visual information, more than tactual, was resistant to the deleterious effects of delay, Goodnow (1971c) cautioned about viewing these results as a specific property of the modalities themselves, citing data that indicated the effect of delay on haptic functions was not found in blind subjects. She suggested that it is not modality per se but modality experience that matters.

A limitation on Goodnow's (1971c) suggestion comes from the data of Rudel and Teuber (1971), who found that brain-damaged children excelled the controls on haptic-haptic matching and generally showed less advantage to visual functions than did normals, a pattern of results found earlier by Hermelin and O'Connor (1961) with retarded children compared to normals matched for mental age. These children had visual experience, yet they did relatively poorly on visual tasks and excelled on tactual tasks. This suggests both impairment in visual processing and, as with blind children, compensatory augmentation of tactile functions. But compensation for what? The issue is considered again after the evidence for the specialization of modalities with respect to information is examined.

Jones and Connolly (1970) analyzed their findings (Connolly & Jones, 1970) that the kinesthetic-visual order was easier than its converse and that both were more difficult than the intramodal matches (themselves of equivalent difficulty). They hypothesized that the standard must be immediately recoded into the modality of the match for a comparison to be possible when subsequently inspecting the match and that success in matching is a function of the rate of memory decay (e.g., visual memories decay at a slower rate than kinesthetic memories). From these assumptions, kinesthetic-visual superiority over the converse was interpreted as a consequence of coding in the visual, decay-resistant modality. They tested these ideas by introducing two kinds of delay between standard and match that differentially influenced visual and kinesthetic memory. It was predicted that cross-modal results would be determined by the modality of the second stimulus and the effect of delay on that stimulus. This effect could be predicted from intramodal tests. In a study of length matching with normal adults, these expectations were confirmed.
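A minimal formalization of the Jones and Connolly account, with the functional form chosen here only for illustration, is that accuracy in any matching task tracks the decay of a trace held in the match modality m over the delay t:

A_{s \to m}(t) \approx g\!\left(e^{-r_{m} t}\right), \qquad r_{\text{visual}} < r_{\text{kinesthetic}},

where r_m is a modality-specific decay rate and g is an increasing function. On this reading, the kinesthetic-visual advantage over its converse follows from the slower visual decay, and the effect of a delay manipulation on a cross-modal task should be predictable from the corresponding intramodal task in the match modality, which is what Jones and Connolly report.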

Jones and Connolly's (1970) analysis of memory factors, which is concerned with context and sequence, appears to have heuristic value. The relations between intramodal and cross-modal findings in Bjorkman's (1967) data for roughness, Goldstone and Lhamon's (1972) data for time estimation, or Gebhard and Mowbray's (1959) data on rate discrimination coincide with the Jones and Connolly model or are closer to it than to any other. In Goldstone and Lhamon's time estimation data, the direction of asymmetry among cross-modal relations was determined by the comparison stimulus, as Jones and Connolly suggested. A similar result is in the data of Tanner et al. (1965) and in that of Millar (1972a), although the evidence is incomplete. All of these findings are with unidimensional stimuli. Pattern stimuli, it will be recalled, are associated with a different set of modal relationships. Modality effects on memory are considered again after further examination of response requirements and informational demands.

VERBAL FACTORS IN CROSS-MODAL FUNCTIONS

It will be recalled that variations in response requirements were often associated with changes in cross-modal relations. This was particularly true of unidimensional stimuli, in which, for example, paired comparisons might yield different results from magnitude estimates. Later, evidence is reviewed that indicates response biases are modality related and certain tasks influence response bias. Such considerations, plus the general principle that the response to be examined must be carefully specified to obtain reliable results in behavioral research (Goodnow, 1971b), all bring into focus the role of response influence in sensory or information processing research. In this section, data on the response factor of language in cross-modal functions are examined.

Ettlinger (1967) argued that only a uniquely human capability, language, could mediate cross-modal functions because efforts to demonstrate such capabilities in animals had failed. He viewed the cross-modal problem as a discrete, emergent, "higher" level of function rather than as a phenomenon on a continuum with other kinds of information processing.


Although persisting, this view was tempered in an article written in the light of later evidence (Drewe, Ettlinger, Milner, & Passingham, 1970). Most important were the discoveries of Davenport and Rogers (1970) and Davenport, Rogers, and Russell (1973), who found the means with apes to demonstrate cross-modal matching of form and transfer to new problems.

Bridger (1970) was responsible for perhaps the most categorical statement of the verbal mediation position. Discussing the Birch-Belmont procedure, he wrote,

The temporal spatial matching test can only be solved by means of verbal coding of the temporal stimuli, suggesting that the deficit in the brain-injured children might be cognitive rather than perceptual [italics added; p. 258].

This statement is something of a surprise because Bridger (Blank & Bridger, 1966) had already published evidence for cross-modal transfer in the absence of evidence for verbal mediation.

There has actually been relatively little research on the role of verbalization in cross-modal functions. Belmont, Birch, and Belmont (1968) showed there were no differences in cross-modal matching between aphasic and nonaphasic brain-damaged patients, but Bridger (1970) pointed out that both groups were operating at chance levels. The most frequently cited work is by Blank and Bridger (1964), who used a cross-modal transfer design approximating Ettlinger's (1967) specifications. Subjects were presented with a simultaneous two-choice discrimination problem in which the discriminanda were placed above wells that could contain a reward. The stimuli during the "pretraining" phase were either one or two prisms; in the "training" phase, bulbs that emitted one or two two-second flashes; and in the "transfer" phase, one or two sound stimuli. In each instance, the rewarded response was to pick the discriminandum associated with 2. A control group was given a problem discriminating long versus short lengths during the pretraining phase, long versus short lights during the training phase, and the same auditory problem as the experimental group.

Studying children between the ages of three and five and using several priming procedures, Blank and Bridger (1964) found that solution of the conceptual problem, 2 versus 1, depended on verbalization. However, the ability to verbalize the concept did not assure that the problem would be solved or that it would transfer across modalities, and the interpretation of the data was not clear. Subsequent attempts (Blank & Bridger, 1966; Blank & Klig, 1970) to obtain evidence about verbalization failed. Recently Rose et al. (1972) studied preschoolers' intermodal functions and verbalizations about the stimuli. Correlations between verbal measures and cross-modal behavior were insignificant, and it was concluded that the verbal mediation explanation was disconfirmed. Goodnow (1971b) also failed to find evidence of verbal mediation. Finally, Bryant, Jones, Claxton, and Perkins (1972) found a way to demonstrate cross-modal influence in 6- to 11-month-old nonverbal infants.

That cross-modal functions do not depend on verbalization does not imply that language factors have no influence. Promising innovations for studying verbal factors in cross-modal matching were introduced by Koen (1971). He conducted a matching study using all modal combinations in touch and vision but varied the nameability of the stimuli (abstract figures or representations of common objects), the tendency of the subject to use labels (based on postexperimental report), and the type of error (false negative or false positive). A same-different procedure was used in which the standard was matched three times in every six with itself (same) and three times with differing transformations (different).

Koen (1971) found that the intramodal visual combination was easiest, followed by the visual-tactual, intramodal tactual, and tactual-visual. Cross-modal combinations were associated with more false positive errors (wrong "same" judgments), and intramodal combinations were associated with false negative errors (wrong "different" judgments).

With visual standards, high labelers made more false negative responses than low labelers; with tactual standards, low labelers made more errors. Stimuli with low nameability elicited more errors than nameable stimuli, but low-labeling subjects made fewest errors on low-nameability items.


Overall, false positive errors were little influenced by verbal factors, whereas false negatives were. These results indicated that verbal mediation, generally assumed to facilitate problem solution, could increase error depending on the type of problem. It followed, then, that the tendency to use labels would differentially bias the types of errors made.

SPATIAL VERSUS TEMPORAL ANALYSIS AND SENSORY MODALITY

In the Birch-Belmont procedure, the information in the visual stimulus is spatial (an array of dots spread across the page) and in the auditory stimulus is temporal (the pattern of time between signals). This was noted by Sterritt and Rudnick (1966), who suggested that the test of auditory-visual integration might really be a test of spatial-temporal integration and that modality might be of little relevance.

To test the idea, comparable procedures requiring temporal and spatial analysis in both visual and auditory modalities were needed, and a visual temporal task using flashing lights was devised that complemented the Birch-Belmont procedure. In the initial studies (Rudnick et al., 1967; Sterritt & Rudnick, 1966), the original Birch-Belmont pencil-tapping procedure was compared with standards that were either tape-recorded sound patterns or patterns of flashing lights. The match was a choice among three spatial dot patterns. In later studies (Rudnick et al., 1972; Sterritt et al., 1971), the Birch-Belmont procedure was dropped and one auditory and two visual same-different tasks (two temporal and one spatial) were used, combined in all possible orders and in all intramodal and intermodal pairs to yield nine different matching procedures. This format was employed in studies of preschool and primary-grade, minority-group, deprived children, using separate subjects for each procedure.

Sterritt et al. (1971) imposed a task too difficult for most of the preschool children (error levels exceeded chance except for the intramodal visual-spatial matches).

With primary-grade children, Rudnick et al. (1972) found that the purely spatial tests were the simplest, those with both temporal and spatial features were intermediate, and the purely temporal were the most difficult. They concluded that integrational factors, whether across sensory modalities or across the temporal-spatial distinction, did not influence task difficulty.

These results may be contrasted with those reported by Bryden (1972), who used an identical design but compared older, sixth-grade good and poor readers, testing all subjects on each procedure. The good readers excelled on all nine tasks, significantly on four, including the auditory-temporal intramodal tasks. Bryden found asymmetry among cross-modal matches; the easiest occurred with visual-spatial standards. Cross-modal matches were, in general, more difficult than intrasensory matches, but the spatial-temporal equivalences were the most difficult.

Examination of the order of task difficulty in the two studies indicates that deprived youngsters were more like the poor than the good middle-class readers, showing difficulty on temporal tasks in all combinations of modalities. Such comparisons were extended in several other reports. Rudel and Teuber (1971) evaluated brain-damaged and normal children, who were younger than those in Bryden's (1972) study, on a visual-temporal and auditory-temporal matching task. The effect of the visual-temporal procedure was to reduce the functioning of the normal children to the level of the brain-damaged children. Klapper and Birch (1971) compared normal children's intramodal matching of visual-temporal and auditory-temporal sequences. Both auditory-temporal and visual-temporal intramodal matches showed a gradual developmental course. However, auditory-temporal matches were more accurate at early ages, up to the ninth year. Similar results were obtained by Goldstone and Goldfarb (1966) for the ability to estimate the temporal characteristics of auditory and visual stimulation. Rubinstein and Gruenberg (1971) demonstrated auditory-temporal tasks to be easier than a visual-temporal task in adults. The relative difficulty of intramodal and cross-modal problems could be manipulated by changing the speed with which the stimuli were administered.


Garner and Gottwald (1968) studied the parameters of temporal pattern identification, and their methodology was used in part by Handel and Buffardi (1969), who combined spatial and temporal inputs. Space is not available for a detailed review of this complex work, but the results indicated that temporal pattern recognition varied as a function of modality of input, rate of stimulation, and starting point in the sequence. Auditory patterns were more readily identified at high rates of presentation than visual, and tactile information was preferred to visual in making temporal judgments.

The severe impairment of auditory-temporal analysis found by Rudnick et al. (1972) in impoverished children and by Bryden (1972) in poor readers differs from the general finding that temporal analysis is easier in audition. A suggested resolution is put forward in the concluding section. For the present, it is clear that the superiority of the visual modality, found in many cross-modal studies on patterned information, does not hold when the problem is temporal analysis.

Related evidence can be found in an incisive article on modality interactions in spatial localization by Pick (1970). He summarized a research program (see Pick's review for references, but the bulk of the work is in Warren, 1970, and Warren & Pick, 1970) in which the extent to which displacement in one modality affected localization in another was examined. Using ingenious techniques, Pick and his colleagues ascertained that there were characteristic patterns of intermodal influence for spatial localization. For example, they found that vision biased proprioception 60%, whereas proprioception biased vision approximately 35%. This meant that the visual image biased judgments of where a distorted image was felt to be approximately twice as much as the proprioceptive image distorted judgments of where a felt target was seen to be. Each of these modalities exerted an effect on spatial localization in the other. Vision exerted a much stronger effect, although not to the extent that has been claimed (Rock & Harris, 1967).

Such reciprocity, even uneven reciprocity, did not hold in each combination of modalities.

For example, subjects localized a sound stimulus strongly (55%) toward where they saw its (displaced) source, but displaced sound had no effect on visual judgment. These data seem consistent with the idea that the senses are specialized for different tasks. The direction of influence (vision influences sound and proprioception, but sound does not influence vision) is consistent with the notion that spatial localization is a task for which vision, and to a lesser extent touch, is well suited, whereas audition is not.

Further, Pick (1970) cited developmental data showing changes in modal relationships with age, beginning about the seventh year. Most combinations show a decrease in influence, although there were no developmental trends in the influence of vision on proprioception and vice versa. This suggests that differentiation, a diminution of intermodal influence rather than integration, is what occurs in development.

Of course, differentiation takes place only when materials for specialization are present. The ear, which is not well equipped for spatial localization, depends on the eye, and only because there is a visual system attending to spatial needs can the ear increasingly attend to its own function. In individuals blind from birth, the pattern of decreasing intersensory bias is not found. On the other hand, blind individuals with a history of previous sight behave in this respect like sighted people: the more experience, the more the pattern for sighted individuals is approximated. Pick (1970) argued that this implies that vision exerts an influence on all nonvisual localization. He stated a hypothesis, "that for all localization judgments, stimuli become transformed into a visual code before decision making [p. 217]."

Studies employing three different research procedures offer independent verification of the Pick (1970) hypothesis. Simpson (1972) showed, by means of reaction time, that auditory localization processes are more complex than visual. Such data are consistent with an asymmetric translation system for spatial information as between visual and auditory modalities. Platt and Warren (1972) showed that auditory localization accuracy in children is better in the light than in the dark.


Finally, in an experiment of elegant precision, Lackner (1973) showed that during the period prior to adaptation to prism-induced visual spatial rearrangement (when visual localization is distorted), sound localization is equally distorted. He suggested that precise measurements of auditory mislocalization can be used to monitor visual adaptation to the prisms.

Pick (1970), like Jones and Connolly (1970), suggested that cross-modal functions involved recoding information received in one modality to that of another. For Jones and Connolly, the direction depended on context and order. For Pick, translation was to the modality best for the job, irrespective of order. Thus vision best suits a spatial task, and this analysis can be extended to state that auditory coding best suits a temporal task.

There is no conflict between the two theories if it is recalled that Jones and Connolly (1970) verified their ideas with simple information loads and Pick (1970) verified his with complex spatial information. Generalizing, it appears that modality plays a nonspecialized role with simple information but a selective role with more complex information, actually requiring translation to the adept modality for the most proficient processing to occur. Where there are deficiencies, less adequate means are employed. This may explain the results obtained by Sterritt, Camp, and Lipman (1966), who compared hearing-impaired children and normal children with equal visual-spatial skills for their ability to reproduce temporal patterns presented aurally and visually. Although the sound patterns were audible to them, the impaired children had great difficulty matching the auditory patterns and, more significantly, they were worse than the controls in matching visual-temporal patterns. The Pick (1970) translation model, which would account for temporal deficits in any sensory system if the specialized modality is disabled, fits these data very well.

Research on information processing would be facilitated if temporal and spatial configurations could be measured. Research programs having such goals have been reported but have had little influence on cross-modal research.

The reader is referred to Michels and Zusne (1965), Brown and Owen (1967), Massaro (1972), and Leeuwenberg (1971) for examples of this work.

Next to be considered are studies of attention and memory in which modality of input is varied. Bisensory memory research is reviewed in detail because the technique has been suggested as a measure of sensory integration (Senf, 1969). There is also some consideration of modality differences in short-term memory (see Tulving & Madigan, 1970, for a recent review).

MODALITY AND MEMORY

Lindsay, Cuddy, and Tulving (1965) and Tulving and Lindsay (1967) provided long or short inputs of different intensities simultaneously to two modalities and required the subject to judge their intensities. A significant modality effect in information transmission was found, vision being superior to audition. Because ambient levels of stimulation were not specified and the stimuli were unidimensional, it is not clear whether the results are attributable to characteristic differences in the modalities, to unspecified contextual variables, or to the response measure. Note should be taken, however, that these studies are the only instances found in the simultaneous bisensory literature in which the measure of information processing favored vision over audition. In the remaining work, digits or letters (verbal material) were used as stimuli, and the consistent finding was the superiority of auditory over visual input in retention.
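For reference, the information transmission measure used in studies of this kind is the standard one from the absolute judgment literature, computed from the stimulus-response confusion matrix as

T(S;R) = H(S) + H(R) - H(S,R),

where H denotes Shannon entropy in bits over the stimulus categories, the response categories, and their joint distribution; the computational details of the particular studies cited are not reproduced in this review.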

Dornbush (1968a, 1968b, 1970, 1971) compared recall of auditory and visual inputs from subjects who simultaneously heard and saw lists of digits. She persistently found superior recall of auditory input and, in addition, found a difference in pattern of recall in the two modalities as a function of the rate of administration. Visual inputs were recalled better as durations increased, whereas recall of auditory inputs improved with decreasing durations. Dornbush's findings recurred with shadowing procedures (1968b), special instructions to attend (1970), and delay of the auditory input (1971).

  • 7/27/2019 Cross Modal Function

    14/27

    INFORMATION PROCESSING AND MODALITY 297tinguish good from poor readers, and theadvantage of shorter durations fo r auditoryinputs was not present, but other results werereplicated.

Margrain (1967), using similar procedures and materials, arrived at similar conclusions. She also found that speaking and writing responses were not equivalent because they interacted selectively with modality of stimulus administration.

Modality differences may be an artifact of the competitive aspects of the bisensory technique. However, superior recall of auditory input is found with serial and paired-associate learning procedures when modalities are stimulated on separate occasions. Murdock (1968, 1969) has shown that the effect is found with recognition procedures as well as recall and that it is not a function of some of the temporal (successiveness) and spatial characteristics possessed by the stimuli often employed in short-term memory research. Burrows (1972) examined several hypotheses from the verbal learning, short-term memory literature regarding the superiority of recall for auditory inputs. The results largely disconfirmed these hypotheses but replicated modality differences. He also showed that auditory superiority was greatest with brief presentation intervals and short delays and that at long intervals, modality differences actually disappeared at times. Burrows concluded that visual memory decays more rapidly than auditory memory.

A review of some recent studies in which the modality and informational qualities of the material to be stored were manipulated calls Burrows's conclusion into question. Scarborough (1972) found that visual administration of the Peterson and Peterson (1959) procedure, a technique that interferes with retention, yielded about half the decay obtained with auditory input. Ternes and Yuille (1972) found that recognition of visually presented words designating objects was more accurate than recognition of pictures of those objects, but with 15 seconds of verbal interference, word recognition declined and picture recognition improved. Earlier, Jenkins, Neale, and Deno (1967) found recognition of visually presented words and pictures to be greatest with picture-picture comparisons
followed by significantly lower word-word and picture-word comparisons. Lowest retention was found in the word-picture condition.

Substantial literature on the recognition of visual materials has recently appeared. Articles by Shepard (1967) and others have indicated a surprisingly large retention for complex visual spatial inputs. Brown and Scott (1971) showed that young children have capacities about as large as adults'. In this vein, Shaffer and Shiffrin (1972) found that an exposure of as little as two seconds led to a very high rate of recognition over long intervals, days and weeks. Furthermore, there was no serial position effect, a finding confirmed with a recall procedure in a later study (Shiffrin, 1973). Contrary to Burrows's (1972) suggestion, these studies suggest that vision has substantial mnemonic capability. Better retrieval from auditory, as compared with visual, inputs with verbal information suggests that language is in an auditory-temporal code and that the modality, as such, is a significant variable.

The evidence for modality effects recurs in related research. Using simultaneous bisensory inputs, Senf, Rollins, and Madsen (1967) and Madsen, Rollins, and Senf (1970) proceeded from the observation that rate of presentation influenced the sequence in which normal adults recalled simultaneous digit pairs. When pairs of stimuli were presented rapidly (one-half second per pair), subjects spontaneously recalled digits from one modality before recalling the other (modality order). With slower rates, they tended to recall the pairs of digits that had been presented simultaneously (pair order). However, the response pattern (pair or modality order) could be manipulated either way by experimenter-induced sets or by specific instructions. Furthermore, subjects continued to employ whichever response pattern they happened to use first, no matter how the rate of presentation was subsequently varied.

There is further evidence (Savin, 1967) that results obtained with simultaneous stimulation procedures have more to do with readily manipulable response biases than with information processing and that the bias is modality related. Two articles comparing
modalities (Rollins, Everson, & Schurman, 1972; Schurman, Everson, & Rollins, 1972) are reported here. In both studies, two sequences of inputs were presented simultaneously in the same modality. Retrieval in simultaneous order (i.e., initial inputs prior to subsequent inputs, like pair order) or in successive order (i.e., an initial input followed by a later one, then another round, like modality order) was evaluated under conditions of free or directed recall. In the first article, when double stimulation was auditory, subjects preferred a successive mode of recall. In the second article, when the stimulation was visual, the preferred mode of recall was simultaneous. Subjects could switch to the alternate pattern of recall on instruction and attain comparable success.

Ingersoll and Di Vesta (1972) studied response bias due to individual modality preferences. College students were preclassified on the basis of recall scores on a visually and aurally presented digit span test and compared on a short-term memory test (Yntema & Trask, 1963). The visual attenders recalled more seen than audited words, and the reverse was true for auditory attenders. Furthermore, differences between the auditory and visual items were in early serial positions for visual attenders and late serial positions for the auditory attenders. This suggested that modality preference influenced information processing strategy.

Kress and Cross (1969) demonstrated task-subject interactions by studying the effects of vision and touch on setting a rod to the vertical in college students tested on an embedded figures test. They found fewest errors with visual-visual and tactual-visual combinations and most errors with visual-tactual and tactual-tactual combinations. In effect, what mattered was whether the rod to be set could be seen. However, the difficulty in setting the rod in the tactual modality occurred primarily among field-dependent subjects.

Despite prior evidence that findings with bisensory stimulation were strongly influenced by readily manipulable response biases, the order-of-recall variable was selected as a means of studying possible deficits in reading disability. Senf (1969) found that poor
readers were inferior to controls when pair-order responses were required. This finding was interpreted to indicate a deficiency in sensory integration because pair order required joint responses in both modalities. Because there were no differences between good and poor readers in modality order, Senf concluded that among the learning disabled "specific modality deficits or generalized disability is not present . . . [p. 27]." Actually this pattern of results characterized older children but not the youngest subjects, approximately nine years old, among whom the poor readers had no more difficulty with pair order than the controls. However, the nine-year-old poor readers did have significantly more difficulty with modality order. Modality-order inferiority in younger poor readers was replicated in a later article (Senf & Freundl, 1971), but it was never made explicit that modality deficits, and not integration deficits, were the major finding. However, the specific measure that differentiated the groups differed in the two studies, leaving the reliability of the method in doubt.

There were modality differences between reading skill groups, and these differed in the two studies. Poor readers in the first study had more trouble recalling visual items. Senf (1969) speculated that this represents "conditioned avoidance responses to visual stimuli due to their constant failure with reading materials [p. 22]." In the second study, on subjects from a different locale, Senf and Freundl (1971) came to the same conclusion, but the contention is not warranted. Graphs depicting differences between poor readers and controls show the discrepancy between groups is greater to auditory than to visual stimuli. Furthermore, the children in the second study did much better on visual order errors (the only direct comparison available) than those in the first. It is possible that two populations among the reading disabled, differing in modality of impairment, have been studied. However, the control groups in the two studies differed so much that the critical issues remain obscure.

There is another uninterpreted modality-relevant finding. Senf (1969) reported that when digits were used as stimuli, complex
interactions were found between experimental variables and individual differences in the visual but not the auditory modality. When pictorial stimuli were used, the opposite occurred.

The usefulness of the bisensory memory task as a means of studying sensory integration (as claimed by Senf and his colleagues) seems dubious, and its suitability for studying the distribution of attention (as Dornbush and others have done) is under challenge. In both types of research, audition and vision were treated as alternative equivalent channels, an assumption disconfirmed by evidence of modality differences. Further discussion is postponed to the concluding section of the present article.

CROSS-MODAL TRANSFER

Several studies investigated symmetry between modalities in the transfer of verbal-associate learning to forms. Gaydos (1956) had adult subjects learn verbal associates to nonsense shapes in one modality and then learn the same responses to the same stimuli in another. There were no significant differences in initial learning between groups trained with vision or touch. When modalities were switched, the touch-vision group did better than the vision-touch group. Gaydos claimed an asymmetry in transfer capabilities between modalities: touch, so to speak, could inform vision much better than the reverse. This was evidence for classic empiricism. Using a similar design, Walk (1965) found symmetric transfer effects between modal combinations. He employed stimuli described as physically symmetric and asymmetric and found that performance was better when symmetric stimuli were presented visually and asymmetric stimuli were presented tactually. Eastman (1967, 1968), also using a similar design, evaluated transfer to the second modality by examining accuracy of response on the first transfer series alone. Like Gaydos, he found more transfer in the touch-vision group than in the converse. Gaydos and Eastman both used familiar words as their associates, whereas Walk used nonsense syllables, a design difference that may have something to do with the discrepancies in their findings.

Garvill and Molander (1971) used a paired-associate chaining paradigm to study transfer between visual and tactual modalities. Nonsense syllables of high association value were associated in a two-step chain to the same tactually and then visually perceived abstract three-dimensional form in one group and the converse sequence in the other. The authors interpreted the learning curves as indicating more transfer from touch to vision than the converse.

Lobb (1968) suggested that where asymmetries were found they could more parsimoniously be viewed as poorer learning in the tactual than in the visual modality at each stage, including the final test. The presumed superiority of transfer in the tactual-visual sequence was an artifact of visual superiority for form perception. Lobb (1965, 1970) reported two studies that employed random abstract shapes as standards and systematic transforms of these stimuli as comparisons. In the earlier study (a matching design) with eighth-grade subjects, the most successful combination was the visual-visual and the least successful was the tactual-tactual. The tactual-visual cross-modal combination approximated the intramodal tactual, whereas the visual-tactual was intermediate over most of a sequence of trials. In the later study (a properly controlled transfer design) with college students, modalities were equated for difficulty by reducing exposure of the visual stimuli to 10 milliseconds. Transfer occurred more readily from vision to touch than the reverse, an especially striking finding because tactile stimulation was 400 times longer than visual.

Clark, Warm, and Schumsky (1972) investigated whether cross-modal transfer was limited to stimuli on which subjects were trained or had greater generality. The materials were unfamiliar shapes in a punctiform version for touch administration and familiar words for paired associates. Two sets of equivalent stimulus materials were prepared. Cross-modal transfer with materials used in original learning was compared to that found with new, although comparable, stimuli. The subjects were college students. The authors concluded, "Positive transfer effects that were symmetrical across modalities were noted
[p. 187]" and also that no transfer had occurred to novel stimuli.

These conclusions were based on faulty data analysis in which modality rather than subject was the frame of reference. That is to say, Clark et al. (1972) compared original learning in the visual modality with the visual learning that occurred when transfer was tested (i.e., after tactual learning). Transfer was not evaluated because subjects with different histories were being compared. When the data are rearranged to examine what happened to the subjects, it is apparent that in the first phase tactual learning was approximately twice as difficult as visual learning. In the tactual-visual sequence, visual scores were substantially better than tactual, the difference being greatest where the stimuli were familiar. In the visual-tactual sequence, tactual scores were equivalent to visual scores where the stimuli were familiar and substantially worse where the stimuli were different. Examined in this way, the results are consistent with the data for the effects of increasing memory delay on tactual and visual matching for form and with Lobb's (1968) suggestion that form perception is easier in vision than in touch.

Blank, Altman, and Bridger (1968), Krauthamer (1968), and Blank and Klig (1970) examined shape discrimination transfer; Weissman and Crockett (1957), Cole, Chorover, and Ettlinger (1961), and Holmgren, Arnoult, and Manning (1966) examined auditory-visual transfer; and Houck, Gardner, and Ruhl (1965) and Blank and Bridger (1966) examined transfer of abstract information. These studies are so diverse in aims, methods, and subjects that an inordinate amount of space would be required to summarize them. Overall the cross-modal transfer literature, if more fragmented, seems to show continuity with the rest. There is evidence of transfer across modalities, but the transfer depends on which modalities are involved, the response requirements in the two tasks, and the degree of similarity between the training and transfer problems. Language is not necessary for cross-modal transfer; the interesting study by Holmgren et al. seems compelling in this respect. Study of transfer of more "abstract" information loads has barely begun, and the relation between verbalization and transfer is not at all clear from available data, but techniques such as those of Koen (1971) should be useful in elucidating the matter. Finally, the superior performance of deaf children on some tests provides a basis for further investigations that may clarify the process of compensation.

SUMMARY AND CONCLUSIONS

Theories and Their Evidence

Birch and coworkers emphasized the integration of separate sensory modalities and the superseding of proximal receptors by distal receptors (classic empiricism) in the course of development. The evidence does not support either aspect of this position. Birch's ideas can be viewed historically as a stimulant to research on the role of sensory modalities in information processing.

Gibson (1969, chap. 11) emphasized the information extraction and processing characteristics of sensory systems and de-emphasized sensory modalities per se. Indeed she has proposed the existence of an amodal information processing capability dealing with form, temporal pattern, and the like, and suggested that cross-modal functions are mediated by this capability. This position was supported in a review by von Wright (1970). If amodal capabilities exist, equivalent performance in each modality for each type of problem would be expected, but this is denied by the evidence for complex information loads. Gibson did emphasize the role of experience in shaping information extraction ability and, in this respect, the data are consistent with her position.

The two theories that appear to be most favored by the data are the Connolly and Jones (1970) hypothesis for simpler information loads and the Pick (1970) hypothesis for pattern analysis. To repeat, the Connolly and Jones hypothesis is that information is translated into the form required immediately by the task and that this is a function of order of presentation. Translation can be from any modality to any modality. Pick suggested that spatial information in all modalities is translated into a visual code, and this conception was extended here for other
modalities and other kinds of information to the concept of modality adeptness for specific kinds of information. Information reaching a less adept modality is translated into the code of the most adept modality. This implies that the development of cross-modal functions consists of learning to perform the translation.

There is suggestive evidence that receptor-orienting acts and other motor responses play a role in mediating the translation, at least with regard to spatial information. For example, Platt and Warren (1972) compared auditory localization when subjects were required to move their eyes toward the sound source before pointing, when required to maintain straight-ahead fixation, or when in darkness. The eye-movement group was found to be significantly more accurate than the others, and the effect, in another experiment, was shown to be dependent on the availability of a patterned visual environment. Consequently it appears that auditory localization is under the influence of a form of visual-motor (oculomotor) integration, data that reinforce consideration of response factors. This issue is considered further below.

Goodnow did not really offer a theory, but she did emphasize what might be called the equivalence of modalities. She pointed out related issues about memory and, like Gibson (1969), the "educability" of modalities. In general the data have been consistent with her ideas in that there is much evidence of plasticity in the information-handling capabilities of all modalities. However, her discussion of memory and the history of modality function implied a degree of plasticity that precluded any notion of modality adeptness and further implied that memory is unrelated to modality, only to experience. In this respect, she reflects a prevailing point of view.

However, there is much evidence that modality and memory are interrelated and that memory is not an independent property of processed information. If retention over time is considered as memory, then auditory-temporal memory, which shows decay with time, fits the concept of a separate attribute. Visual-spatial recognition memory, however,
follows different decay patterns. Often, a delay of one hour produces effects no different from those of one minute or one day.

There are many other respects in which complex auditory and visual "stores" differed. Visual memory for words favored the early inputs of a sequence (primacy), whereas auditory memory favored the later inputs (recency); simultaneous visual inputs were preferentially recalled in simultaneous orders, whereas simultaneous auditory inputs were recalled in successive orders; retrieval of information increased with faster auditory inputs and slower visual inputs; auditory memory showed serial position interference effects, visual memory did not; visual mnemonic capacity was extensive, auditory mnemonic capacity was limited. Consequently, reference to inputs from different modalities as memory stores, with the implication that these are equivalent types of information merely located in different places, is not warranted. The evidence suggests that modalities, at least in part, are different information processing systems. Conceptually, memory ceases to be a separate attribute of earlier registered information; rather it becomes one of the attributes of the information registered. (See Flavell, 1971, for a developmental view and Craik and Lockhart, 1972, for a short-term memory approach to the issue.) Bower (1970) reviewed research in which both modalities were combined to accomplish impressive feats of memory. The method is to attach verbal concepts to visual images. It is the visual images that are readily retrievable.

Related Data and Extrapolations

Somatosensory information, response processes, and the nativism-empiricism controversy. Recurrent findings in the cross-modal literature show the influence of response processes. Among the modalities, the somatosenses,6 the detectors of response activity, appeared to have a secondary role in that no primary adeptness was displayed although compensatory function was. This "back-up" status for the somatosensory modalities may have been an artifact of the spatial or temporal tasks used in the studies reviewed. It seems plausible that somatosensory adeptness has to do with recording motor behavior and hence generating motor memory. It follows that the somatosensory modalities are not oriented toward information about the outside world, although latent capability of this sort exists. Rather, they inform the individual about his own behavior. Related functions may be to provide a sensory context for inputs from the external world (Hein & Diamond, 1972) and to perform receptor-orienting acts. It will be recalled that there is evidence that receptor-orienting acts mediate cross-modal translations (Platt & Warren, 1972).

6 Somatosensory modalities are treated as though they are one sensory system, even though several separate sensing systems are involved. This is an expository device within the framework of the present article, hopefully introducing no undue distortion.

Taking recent data into account, it is possible to suggest some further resolution of the nativism-empiricism controversy. There is evidence, consistent with nativism, that the infant possesses operative feature detectors (Bower, 1966, 1971; Hubel & Wiesel, 1963) and can process complex information at some level of proficiency. This capability improves in the course of development with stimulation and exercise (Gibson, 1969, chap. 9) and will atrophy if stimulation is missing (Hirsch & Spinelli, 1971) or decrease in acuity if it is distorted (Freeman, Mitchell, & Millodot, 1972; McLaughlin, 1964). Because experience is crucial, there is evidence for the empiricism viewpoint, but the experience under discussion is within the modality involved; vision requires visual stimulation, audition requires auditory stimulation, and the like.

Aronson and Rosenbloom (1971) have shown that if infants receive conflicting spatial information in the visual and auditory modalities, distress ensues. A similar result was obtained for visual and tactile information in a study by Bower, Broughton, and Moore (1970). This suggests that the means of resolving conflicting sources of information must be acquired. The resolution suggested by the present review, of course, is translation to the most adept modality. Bower et al. provided supportive evidence in showing that the tactile system lags behind the visual in
providing information about object characteristics.

Experience (empiricism) then should be crucial to the development of cross-modal functions. It is not a matter of one modality teaching another, as in classical empiricism, but learning to translate selected classes of information into the modality of greatest adeptness. Because receptor orientation and motor functions appear to play an important role in these developments, data and theory on the development of sensory motor competence (Held, 1970) may be relevant. Although these conjectures regarding motor involvement extend to all cross-modal functions, available data concern only space, form, and location. It remains to be determined whether the modality adeptness concept is confirmed with temporal and motor information.

Modality and cognition. The selective adeptness of vision and audition for dealing with certain kinds of information may be related to differences in their mode of operation, which in turn may be of importance in understanding higher cognitive operations. Consider that several light sources projecting hues of different colors to the eye will be seen as a single color, whereas several different tones sounding simultaneously will be heard as a chord whose components can be discriminated: Simultaneous stimulation at different frequencies is integrated by the eye and differentiated by the ear.7

7 The author does not know who first articulated this observation, but it was pointed out to him many years ago by Albert Rodwan, and there is a comment along these lines in Stagner (1971).

The possibility exists, therefore, that the prototype, if not the actual mechanics, for integrating and differentiating information in concept formation exists separately in the adept operations of vision and audition. Data have been reviewed that indicate individual differences in modality preference influence the quantitative and qualitative results of memory tests. Is it possible that modality preference (verbalizers versus visualizers) interacts with informational demands to determine the pattern of differentiation or integration at higher levels of cognitive processing? Do genetic factors or early childhood
experiences differentially bias or organize modality preferences? The questions do not appear to have been asked in this way, so the data are not available. But it is conceivable that when a child is instructed to take the time to think, when he hears verbal explanations, and when his own verbal output receives serious consideration, selective reinforcement of both the use of the auditory modality and a differentiating cognitive style is occurring. This might explain why Rudnick et al. (1972) and Bryden (1972), with impoverished children and poor readers, found auditory-temporal performance to be very poor, whereas Klapper and Birch (1971) and Goldstone and Goldfarb (1966), with middle-class children, found auditory-temporal skills to develop earlier and be more advanced than visual-temporal skills.

There have been claims that the human being is primarily a visual animal, just as some theories equate human information processing with verbal thinking. These are one-sided views. Human beings are both visual and auditory, spatial and temporal, integrating and differentiating. It follows that research designs should include specification or control of the information to be processed (Garner, 1970), the adeptness of the input modality for dealing with that information, and the modality-response biases of the individual.

Unselective use of language stimuli, assuming their information to be amodal, may be a source of confusion in information processing research. Data have supported the idea that words are processed in an auditory-temporal code. For the most part, however, these patterns are the only auditory-temporal patterns studied. More extensive analysis of nonverbal auditory processing may be useful in understanding linguistic functions.

Deficient Populations

The studies by Birch and his associates indicated that the auditory-visual integrative task was sensitive to just about any form of disorder, including youth. Although a beginning, these results do little to illuminate the nature of any disorder. At present, none of the evidence about pathology is very convincing, but it does appear that some conditions are associated with modal and not with cross-modal deficits. Several studies indicated that some reading-disabled and brain-damaged subjects demonstrated impaired visual functions, whereas schizophrenics demonstrated impaired auditory functions. However, the tendency for brain damage to be selectively associated with visual impairment is not substantiated by other studies. For example, Deutsch and Schumer (1970), using a variety of measurement procedures (including many intramodal matching techniques), found significant differences between brain-damaged children and controls in each sensory system. The literature for reading-disabled children is even more contradictory in this respect.

The problem may be that researchers have not thought through the relations between their hypotheses and subject selection procedures and instead have chosen subjects solely on the basis of the existing nosological system. For example, astute clinical observation (Cole & Kraft, 1964; Denckla, 1972; Holroyd & Riess, 1968) indicates that the population of reading-disabled subjects includes at least four conditions: auditory deficits, visual deficits, both, and neither. Studies of modal function in which subjects are not subclassified either a priori or postexperimentally will yield data with much random variance. Significant outcomes may be due to institutional diagnostic biases (which may be what happened in Senf's work in California and Iowa).

Subject segregation problems go beyond the failure to subclassify. Existing diagnostic procedures may inadvertently yield classifications based on information processing characteristics. This possibility was suggested by the implausible frequency with which brain damage is associated with deficits in the visual modality but not in others. An explanation may lie first, in the hoary tradition that arbitrarily equates brain damage with deficit in visual-spatial tasks such as the Bender-Gestalt Test; second, in the large number of visual-spatial assessment procedures and the few techniques for assessing auditory-temporal or somatosensory motor skills; and third, in the assumption that verbal functions are nonmodal and therefore unrelated to
organic factors. Visual problems may simply be herded into the brain-damage category and problems in other modalities excluded.

Toward the Evaluation of Information Processing Capability

Systematic assessment requires evaluation of the processing of the three primary categories of pattern information (space, time, and movement), each in its own most adept modality and also in the others. The processing in each modality of less specific information would also need evaluation. It should then be possible to inquire meaningfully into the information processing status of any condition of adaptational inadequacy. Are deficits present in all functions, in one or more sensory modalities, in one or more information categories? Is the deficit in the most adept modality or in the ability to transfer to or from that modality?

Subjects with appropriate symptoms need to be chosen, but

