
INTELLIGENCE AND ABILITY

CHAPTER 2

Intelligence, Competence, and Expertise

ROBERT J. STERNBERG

For roughly 100 years, psychologists have been administering tests of intelligence. These tests are supposed to measure a construct that is (1) unified (so-called general intelligence), (2) relatively fixed by genetic endowment, and (3) distinct from and precedent to the competencies that schools develop (see, e.g., Carroll, 1993). All three of these assumptions are questioned in this chapter.

An alternative view, consistent with the topic of competence highlighted in this volume, is that intelligence represents a set of competencies in development, and that these competencies in turn represent expertise in development. Thus, intelligence tests measure developing competencies on the way toward developing expertise. Rather than intelligence (and other sets of abilities), competencies, and expertise being viewed as relatively distinct, as they tend to be in the literature of cognitive psychology, they are viewed as regions along a developmental continuum. Thus, whereas a cognition textbook might have separate chapters, say, on intellectual abilities, various kinds of competencies (e.g., memory and reasoning competencies), and expertise (e.g., Sternberg & Ben-Zeev, 2001), the three levels of skill development psychologically should not be viewed as distinct. A major goal of work under the point of view presented here is to integrate the study of intelligence and related abilities (see reviews in Sternberg, 1990, 1994, 2000) with the study of competence (Sternberg & Kolligian, 1990), and in turn to link the study of these two constructs to the study of expertise (Chi, Glaser, & Farr, 1988; Ericsson, 1996; Ericsson, Krampe, & Tesch-Römer, 1993; Ericsson & Smith, 1991; Hoffman, 1992). These literatures, typically viewed as distinct, are here viewed as ultimately involved with the same psychological mechanisms.

Developing competence is defined here as the ongoing process of the acquisition and consolidation of a set of skills needed for performance in one or more life domains at the journeyman level or above. Developing expertise is defined here as the ongoing process of the acquisition and consolidation of a set of skills needed for a high level of mastery in one or more domains of life performance. Experts, then, are people who have developed their competencies to a high level; competent individuals are people


who have developed their abilities to a high level. Abilities, competencies, and expertise are on a continuum. One moves along the continuum as one acquires a broader range of skills, a deeper level of the skills one already has, and increased efficiency in the utilization of these skills.

According to this view, good performance on intelligence tests requires certain kinds of competencies (in test-taking skills, understanding word meanings, being able to do basic arithmetic, visualizing spatial relations, etc.), and to the extent that these competencies overlap with the competencies required by schooling or by the workplace, there will be a correlation between the tests and performance in school or in the workplace. Some people are experts in taking intelligence tests and receive very high scores. Because the same skills on which they have shown expertise are also required in school and the workplace (e.g., reading, arithmetic), they will also be expert in school and on the job. Generally, there is more overlap between the kinds of competencies and expertise required on intelligence tests and in schooling than between those required on intelligence tests and in job performance. Hence, typically, intelligence test scores will show somewhat more correlation with school than with job performance. But many factors, such as range of scores and complexity of the work done in school or on the job, can affect this correlation, so it is difficult to speak in totally general terms.

According to the view of the measurement of intelligence representing the measurement of competencies in development, such correlations represent no intrinsic relation between intelligence and other kinds of performance, but rather overlap in the kinds of competencies needed to perform well under different kinds of circumstances. The greater the overlap in skills, in general, the higher the correlations.

There is nothing mystical or privileged about the intelligence tests. One could as easily use, say, academic or job performance to predict intelligence-related scores and vice versa. For example, many tests of intelligence contain items requiring memory skills, vocabulary, reading, arithmetic skills, and reasoning skills. Tests of achievement require the same skills. Both kinds of tests, therefore, measure competencies, albeit at different levels of development. In summary, what distinguishes ability tests from other kinds of assessments is how the ability tests are used (usually predictively) rather than what they measure. There is no qualitative distinction among the various kinds of assessments.

According to this view, the main thing that distinguishes ability tests from achievement tests is not the tests themselves, but rather how psychologists, educators, and others interpret the scores on these tests. The ability tests are viewed as measuring something psychologically distinct from the achievement tests, hence the use of different labels to describe the tests. But the distinction is quantitative, not qualitative. A testing company that seems to have recognized this fact is the College Board, which originally called its test the Scholastic Aptitude Test, then changed the name to Scholastic Assessment Test, and finally just to its acronym, SAT. Indeed, items on the SAT-I (formerly the ability test) and the SAT-II (formerly the achievement tests) are often, for all intents and purposes, indistinguishable. The various kinds of assessments are of the same kind psychologically.

Conventional tests of intelligence and related abilities measure achievement that individuals should have accomplished several years back (see also Anastasi & Urbina, 1997). In other words, the tests are measuring competencies at a somewhat less developed level. Tests such as vocabulary, reading comprehension, verbal analogies, arithmetic problem solving, and the like, are all, in part, tests of achievement. Even abstract reasoning tests measure achievement in dealing with geometric symbols, skills taught in Western schools (Laboratory of Comparative Human Cognition, 1982; Serpell, 2000). One might as well use academic performance to predict ability test scores. The conventional view infers some kind of causation (abilities cause achievement) from correlation, but the inference is not justified from the correlational data.

The view of intelligence and other abilities as a set of competencies in development is not inconsistent with there being a contribution of genetic factors as a source of individual differences in who will be able to develop given amounts of competence or expertise. Many human attributes, including intelligence, reflect the covariation and interaction of genetic and environmental factors. But the contribution of genes to an individual's intelligence cannot be directly measured or even directly estimated. Rather, what is measured is a portion of what is expressed, namely, manifestations of developing competencies and expertise.

According to this view, measures of intelligence should be correlated with later success, because both measures of intelligence and various measures of success require developing expertise of related types. For example, both typically require what can be referred to as metacomponents of thinking: recognition of problems, definition of problems, formulation of strategies to solve problems, representation of information, allocation of resources, and monitoring and evaluation of problem solutions. These skills develop as results of gene-environment covariation and interaction. If we wish to call them intelligence, that is certainly fine, so long as we recognize that what we are calling intelligence is a form of developing competencies that can lead to expertise.

HOW ABILITIES DEVELOP INTO COMPETENCIES, AND COMPETENCIES INTO EXPERTISE

The specifics of a model for how abilities can develop into competencies, and competencies into expertise, are shown in Figure 2.1. At the heart of the model is the notion that individuals are constantly in a process of developing expertise when they work within a given domain. They may and do, of course, differ in rate and asymptote of development. The main constraint in achieving expertise is not some fixed prior level of capacity, but purposeful engagement involving direct instruction, active participation, role modeling, and reward.


FIGURE 2.1. The development of abilities into competencies, and competencies into expertise.


Elements of the Model

The model of developing expertise has five key elements (although they certainly do not constitute an exhaustive list of elements in the ultimate development of expertise from abilities): metacognitive skills, learning skills, thinking skills, knowledge, and motivation. Although it is convenient to separate these five elements, they are fully interactive, as shown in Figure 2.1. They influence each other, both directly and indirectly. For example, learning leads to knowledge, but knowledge facilitates further learning.

These elements are, to some extent, domain specific. The development of competencies or expertise in one area does not necessarily lead to the development of competencies or expertise in another area, although there may be some transfer, depending upon the relationship of the areas, a point that has been made with regard to intelligence by others as well (e.g., Gardner, 1983, 1999; Sternberg, 1994).

In the theory of successful intelligence (Sternberg, 1985, 1997, 1999), intelligence is viewed as having three aspects: analytical, creative, and practical. Our research suggests that the development of competencies or even expertise in one creative domain (Sternberg & Lubart, 1995) or in one practical domain (Sternberg, Wagner, Williams, & Horvath, 1995) shows modest to moderate correlations with the development of competencies or expertise in other such domains. Psychometric research suggests more domain generality for the analytical domain (Jensen, 1998; Sternberg & Grigorenko, 2002b). Moreover, people can show analytical, creative, or practical expertise in one domain without showing all three of these kinds of expertise, or even two of the three.

Metacognitive Skills

Metacognitive skills (or metacomponents; Sternberg, 1985) refer to people's understanding and control of their own cognition. For example, such skills would encompass what an individual knows about writing papers or solving arithmetic word problems, both with regard to the steps that are involved and to how these steps can be executed effectively. Seven metacognitive skills are particularly important: problem recognition, problem definition, problem representation, strategy formulation, resource allocation, monitoring of problem solving, and evaluation of problem solving (Sternberg, 1985, 1986). All of these skills are modifiable (Sternberg, 1986, 1988; Sternberg & Grigorenko, 2000; Sternberg & Spear-Swerling, 1996).

Learning Skills

Learning skills (knowledge-acquisition components) are essential to the model (Sternberg, 1985, 1986), although they are certainly not the only learning skills that individuals use. Learning skills are sometimes divided into explicit and implicit ones. Explicit learning is what occurs when we make an effort to learn; implicit learning is what occurs when we pick up information incidentally, without any systematic effort. Examples of learning skills are selective encoding, which involves distinguishing relevant from irrelevant information; selective combination, which involves putting together the relevant information; and selective comparison, which involves relating new information to information already stored in memory (Sternberg, 1985).

Thinking Skills

There are three main kinds of thinking skills (or performance components) that individuals need to master (Sternberg, 1985, 1986, 1994). It is important to note that these are sets of, rather than individual, thinking skills. Critical (analytical) thinking skills include analyzing, critiquing, judging, evaluating, comparing and contrasting, and assessing. Creative thinking skills include creating, discovering, inventing, imagining, supposing, and hypothesizing. Practical thinking skills include applying, using, utilizing, and practicing (Sternberg, 1997). They are the first step in the translation of thought into real-world action.

Knowledge

Two main kinds of knowledge are relevant in academic situations. Declarative knowledge is of facts, concepts, principles, laws, and the like. It is "knowing that." Procedural knowledge is of procedures and strategies. It is "knowing how." Of particular importance is procedural tacit knowledge, which involves knowing how the system functions in which one is operating (Sternberg et al., 2000; Sternberg et al., 1995).

Motivation

One can distinguish among several different kinds of motivation. A first kind of motivation is achievement motivation (McClelland, 1985; McClelland, Atkinson, Clark, & Lowell, 1976). People who are high in achievement motivation seek moderate challenges and risks. They are attracted to tasks that are neither very easy nor very hard. They are strivers, constantly trying to better themselves and their accomplishments. A second kind of motivation is competence (self-efficacy) motivation, which refers to persons' beliefs in their own ability to solve the problem at hand (Bandura, 1977, 1996). Experts need to develop a sense of their own efficacy to solve difficult tasks in their domain of expertise. This kind of self-efficacy can result both from intrinsic and extrinsic rewards (Amabile, 1996; Sternberg & Lubart, 1996). Of course, other kinds of motivation are important too. Indeed, motivation is perhaps the indispensable element needed for school success. Without it, the student never even tries to learn. And, of course, if a test is not important to the examinee, he or she may do poorly simply through a lack of effort to perform well.

Dweck (1999, 2002; Dweck & Elliott, 1983) has shown that one of the most important sources of motivation is individuals' need to enhance their intellectual skills. What Dweck has shown is that some individuals are entity theorists with respect to intelligence: They believe that to be smart is to show oneself to be smart, and that means not making mistakes or otherwise showing intellectual weakness. Incremental theorists, in contrast, believe that to be smart is to learn and to increase one's intellectual skills. These individuals are not afraid to make mistakes and even believe that making mistakes can be useful, because it is a way to learn. Dweck and her colleagues' research suggests that, under normal conditions, entity and incremental theorists perform about the same in school. But under conditions of challenge, incremental theorists do better, because they are more willing to undertake difficult challenges and to seek mastery of new, difficult material.

Context

All of the elements discussed earlier are characteristics of the learner. Returning to the issues raised at the beginning of this chapter, a problem with conventional tests is that they assume that individuals operate in a more or less decontextualized environment (see Grigorenko & Sternberg, 2001b; Sternberg, 1985, 1997; Sternberg & Grigorenko, 2001). A test score is interpreted largely in terms of the individual's internal attributes. But a test measures much more, and the assumption of a fixed or uniform context across test-takers is not realistic. Contextual factors that can affect test performance include native language, family background, emphasis of the test on speedy performance, and familiarity with the kinds of material on the test, among many other things.

Interactions of Elements

The novice works toward competence and then expertise through deliberate practice (Ericsson, 1996). But this practice requires an interaction of all five of the key elements. At the center, driving the elements, is motivation. Without it, the elements remain inert. Eventually, one reaches a kind of expertise, at which one becomes a reflective practitioner of a certain set of skills. But expertise occurs at many levels. The expert first-year graduate or law student, for example, is still a far cry from the expert professional. People thus cycle through many times, on the way to successively higher levels of expertise. They do so through the elements in Figure 2.1.

Motivation drives metacognitive skills, which in turn activate learning and thinking skills, which then provide feedback to the metacognitive skills, enabling one's level of expertise to increase (see also Sternberg, 1985). The declarative and procedural knowledge acquired through the extension of the thinking and learning skills also results in these skills being used more effectively in the future.

All of these processes are affected by, and can in turn affect, the context in which they operate. For example, if a learning experience is in English but the learner has only limited English proficiency, his or her learning will be inferior to that of someone with more advanced English-language skills. Or if material is presented orally to someone who is a better visual learner, that individual's performance will be reduced.

How does this model of developing competencies and expertise relate to the construct of intelligence?

THE g FACTOR AND THE STRUCTURE OF ABILITIES

Some intelligence theorists point to the stability of the alleged general (g) factor of human intelligence as evidence for the existence of some kind of stable and overriding structure of human intelligence (e.g., Bouchard, 1998; Kyllonen, 2002; Petrill, 2002). But the existence of a g factor may reflect little more than an interaction between whatever latent (and not directly measurable) abilities individuals may have and the kinds of competencies and expertise that are developed in school. With different forms of schooling, g could be made either stronger or weaker. In effect, Western forms and related forms of schooling may, in part, create the g phenomenon by providing a kind of schooling that teaches in conjunction the various kinds of skills measured by tests of intellectual abilities.

Suppose, for example, that children were selected from an early age to be schooled for a certain trade. Throughout most of human history, this is in fact the way most children were schooled. Boys, at least, were apprenticed at an early age to a master who would teach them a trade. There was no point in their learning skills that would be irrelevant to their lives.

To bring the example into the present, imagine that we decided, from an early age, that certain students would study English (or some other native language) to develop language expertise; other students would study mathematics to develop their mathematical expertise. Still other students might specialize in developing spatial expertise to be used in flying airplanes or doing shop work, or whatever. Instead of beginning at the university level, specialization would begin from the age of first schooling.

This point of view is related to, but different from, that typically associated with the theory of crystallized and fluid intelligence (Cattell, 1971; Horn, 1994). In that theory, fluid ability is viewed as an ability to acquire and reason with information, whereas crystallized ability is viewed as the information so acquired. According to this view, schooling primarily develops crystallized ability, based in part on the fluid ability the individual brings to bear upon school-like tasks. In the theory proposed here, however, both fluid and crystallized ability are roughly equally susceptible to development through schooling or other means that societies create for developing expertise. One could argue that the greater validity of the position presented here is shown by the near-ubiquitous Flynn effect (Flynn, 1987, 1998; Neisser, 1998), which documents massive gains in IQ around the world throughout most of the 20th century. The effect must be due to environment, because large genetic changes worldwide in such a short time frame are virtually impossible. Interestingly, gains are substantially larger in fluid abilities than in crystallized abilities, suggesting that fluid abilities are likely to be as susceptible as, or probably more susceptible than, crystallized abilities to environmental influences. Clearly, the notion of fluid abilities as some basic genetic potential one brings into the world, whose development is expressed in crystallized abilities, does not work.

These students then would be given an omnibus test of intelligence or any broad-ranging measure of intelligence. There would be no g factor, because people schooled in one form of expertise would not have been schooled in others. One can imagine even negative correlations between subscores on the so-called intelligence test. The reason for the negative correlations would be that developing expertise in one area might preclude developing expertise in another because of the form of schooling.
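A toy model can make the negative-correlation scenario concrete (my sketch, with invented numbers, not anything from the chapter): give each child a fixed schooling budget split between two trades, so that practice in one necessarily displaces practice in the other.

```python
import numpy as np

rng = np.random.default_rng(1)
n_children = 5_000

# Hypothetical fixed schooling budget, split between two trades.
verbal_share = rng.uniform(0.1, 0.9, size=n_children)  # fraction spent on language
learning_rate = rng.normal(1.0, 0.2, size=n_children)  # shared aptitude for learning

verbal_score = learning_rate * verbal_share + rng.normal(scale=0.05, size=n_children)
trade_score = learning_rate * (1 - verbal_share) + rng.normal(scale=0.05, size=n_children)

r = np.corrcoef(verbal_score, trade_score)[0, 1]
print(f"correlation between the two subscores: r = {r:.2f}")
```

Even though every child has the same kind of positive learning rate, the time trade-off dominates and the two subscores correlate strongly negatively. Uniform schooling, which removes the trade-off, is exactly the condition that lets a positive manifold, and hence an apparent g, emerge.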

Lest this tale sound far-fetched, I hasten to add that it is a true tale of what is happening now in some places. In the United States and most of the developed world, of course, schooling takes a fairly standard course. But this standard course and the value placed upon it are not uniform across the world. And we should not fall into the ethnocentric trap of believing that the way Western schooling works is the way all schooling should work.

In a collaborative study among children near Kisumu, Kenya (Sternberg et al., 2001), we devised a test of practical intelligence that measures informal knowledge for an important aspect of adaptation to the environment in rural Kenya, namely, knowledge of the identities and use of natural herbal medicines that could be used to combat illnesses. The children use this informal knowledge an average of once a week in treating themselves or suggesting treatments to other children, so this knowledge is a routine part of their everyday existence. By informal knowledge, we are referring to kinds of knowledge not taught in schools, and not assessed on tests given in the schools.

The idea of our research was that children who knew what these medicines were, what they were used for, and how they should be dosed would be in a better position to adapt to their environments than would children without this informal knowledge. We do not know how many, if any, of these medicines actually work, but from the standpoint of measuring practical intelligence in a given culture, the important thing is that the people in Kenya believe that the medicines work. For that matter, it is not always clear how effective are the medicines used in the Western world.

We found substantial individual differences in the tacit knowledge of like-age and schooled children about these natural herbal medicines. More important, however, was the correlation between scores on this test and scores on an English-language vocabulary test (the Mill Hill), a Dholuo equivalent (Dholuo is the community and home language), and the Raven Coloured Progressive Matrices. We found significantly negative correlations between our test and the English-language vocabulary test. Correlations of our test with the other tests were trivial. The better children did on the test of indigenous tacit knowledge, the worse they did on the test of vocabulary used in school, and vice versa. Why might we have obtained such a finding?

Based on ethnographic observation, we believe a possible reason is that parents in the village may emphasize either a more indigenous or a more Western education. Some parents (and their children) see little value to school. They do not see how success in school connects with the future of children who will spend their whole lives in a village, where they do not believe they need the expertise the school teaches. Other parents and children seem to see Western schooling as being of value in itself or potentially as a ticket out of the confines of the village. The parents thus tend to emphasize one type of education or the other for their children, with corresponding results. The kinds of developing expertise the families value differ and so, therefore, do scores on the tests. From this point of view, the intercorrelational structure of tests tells us nothing intrinsic about the structure of intelligence per se, but rather something about the way abilities as developing forms of expertise structure themselves in interaction with the demands of the environment.

In a more recent study (Grigorenko et al., 2004), we studied the academic and practical skills of Yupik Eskimo children who live in the southwestern portion of Alaska. The Yupik generally live in geographically isolated villages along waterways that are accessible primarily by air. Most of us would have no choice in traveling from one village to another, because we would be unable to navigate the terrain using, say, a dogsled.

These villages are embedded in mile after mile of frozen tundra that, to us, would all look relatively the same. The Yupik, however, can navigate this terrain, because they learn to find landmarks that most of us would never see. They also have extremely impressive hunting and gathering skills that almost none of us would have. Yet most of the children do quite poorly in school. Their teachers often think that they are rather hopeless students. The children thus have developed extremely impressive competencies and even expertise for surviving in a difficult environment, but because these skills often are not ones valued by teachers (who typically are not from the Yupik community), the children are viewed as not very competent.

Nuñes (1994) has reported related findings based on a series of studies she conducted in Brazil (see also Ceci & Roazzi, 1994). Street children's adaptive intelligence is tested to the limit by their ability to form and successfully run a street business. If they fail to run such a business successfully, they risk either starvation or death at the hands of death squads, should they resort to stealing. Nuñes and her collaborators have found that the same children who are doing the mathematics needed for running a successful street business cannot do well the same types of mathematics problems presented in an abstract, paper-and-pencil format.

From a conventional abilities standpoint, this result is puzzling. From a standpoint of intelligence as developing competencies and competencies as developing expertise, it is not. Street children grow up in an environment that fosters the development of practical but not academic mathematical skills. We know that even conventional academic kinds of expertise often fail to show transfer (e.g., Gick & Holyoak, 1980). It is scarcely surprising, then, that there would be little transfer here. The street children have developed the kinds of practical arithmetical expertise they need for survival and even success, but they will get no credit for these skills when they take a conventional abilities test.

It also seems likely that if the scales were reversed, and privileged children who do well on conventional ability tests or in school were forced out on the street, many of them would not survive long. Indeed, in the ghettoes of urban America, many children and adults who for one reason or another end up on the street in fact barely survive or do not make it at all.

Jean Lave (1989) has reported similar findings with Berkeley housewives shopping in supermarkets. There just is no correlation between their ability to do the mathematics needed for comparison shopping and their scores on conventional paper-and-pencil tests of comparable mathematical skills. And Ceci and Liker (1986) found, similarly, that expert handicappers at race tracks generally had only average IQs. There was no correlation between the complexity of the mathematical model they used in handicapping and their scores on conventional tests. In each case, important kinds of developing expertise for life were not adequately reflected by the kinds of developing expertise measured by the conventional ability tests.

One could argue that these results merely reflect the fact that the problem these studies raise is not with conventional theories of abilities, but with the tests that are loosely based on these theories: These tests do not measure street math, but more abstracted forms of mathematical thinking. But psychometric theories, I would argue, deal with a similarly abstracted g factor. The abstracted tests follow largely from the abstracted theoretical constructs. In fact, our research has shown that tests of practical intelligence generally do not correlate with scores on these abstracted tests (e.g., Sternberg et al., 1995, 2000).

The problem with the conventional model of abilities does not just apply in what to us are exotic cultures or exotic occupations. In one study (Sternberg, Ferrari, Clinkenbeard, & Grigorenko, 1996; Sternberg, Grigorenko, Ferrari, & Clinkenbeard, 1999), high school students were tested for their analytical, creative, and practical abilities via multiple-choice and essay items. The multiple-choice items were divided into three content domains: verbal, quantitative, and figural (pictures). Students' scores were factor-analyzed and then later correlated with their performance in a college-level introductory psychology course.

We found that when students were tested not only for analytical abilities but also for creative and practical abilities (as follows from the model of successful intelligence; Sternberg, 1985, 1997), the strong g factor that tends to result from multiple-ability tests becomes much weaker. Of course, there is always some general factor when one factor-analyzes but does not rotate the factor solution, but the general factor was weak and, of course, disappeared with a varimax rotation. We also found that all of analytical, creative, and practical abilities predicted performance in the introductory psychology course (which itself was taught analytically, creatively, or practically, with assessments to match). Moreover, although the students who were identified as high analytical were the traditional population (primarily white, middle- to upper-middle-class, and well educated), the students who were identified as high creative or high practical were much more diverse in all of these attributes. Most importantly, students whose instruction better matched their triarchic pattern of abilities outperformed those students whose instruction more poorly matched their triarchic pattern of abilities.

Thus, conventional tests may unduly favor a small segment of the population by virtue of the narrow kind of developing expertise they measure. When one measures a broader range of developing competencies and expertise, the results look quite different. Moreover, the broader range of expertise includes kinds of skills that will be important in the world of work and in the world of the family.

Even in developed countries, practical competencies probably matter as much as or more than do academic ones for many aspects of life success. Goleman (1995), for example (see also Salovey & Mayer, 1990; Mayer, Salovey, & Caruso, 2000), has claimed that emotional competencies are more important than academic ones, although he has offered no direct evidence. In a study we did in Russia (Grigorenko & Sternberg, 2001a), although both academic and practical intelligence predicted measures of adult physical and mental health, the measures of practical intelligence were the better predictors.

Analytical, creative, and practical abilities, as measured by our tests or anyone else's, are simply forms of developing competencies and ultimately of developing expertise. All are useful in various kinds of life tasks. But conventional tests may unfairly disadvantage those students who do not do well in a fairly narrow range of kinds of expertise. By expanding the range of developing expertise we measure, we discover that many children not now identified as able have, in fact, developed important kinds of expertise. The abilities that conventional tests measure are important for school and life performance, but they are not the only abilities that are important.

Teaching in a way that departs from notions of abilities based on a g factor also pays dividends. In a recent set of studies, we have shown that generally lower-socioeconomic-class third-grade and generally middle-class eighth-grade students who are taught social studies (a unit on communities) or science (a unit on psychology) for successful intelligence (analytically, creatively, and practically, as well as for memory) outperform students who are taught just for analytical (critical) thinking or just for memory (Sternberg, Torff, & Grigorenko, 1998a, 1998b). The students taught triarchically outperform the other students not only on performance assessments that look at analytical, creative, and practical kinds of achievements, but even on tests that measure straight memory (multiple-choice tests already being used in the courses). None of this is to say that analytical abilities are not important in school and life; obviously, they are. Rather, what our data suggest is that other types of abilities (creative and practical ones) are important as well, and that students need to learn how to use all three kinds of abilities together.

Thus, teaching students in a way that takes into account their more highly developed expertise, and that also enables them to develop other kinds of expertise, results in superior learning outcomes, regardless of how these learning outcomes are measured.

The children taught in a way that enables them to use kinds of expertise other than memory actually remember better, on average, than do children taught for memory.

We have also done studies in which we have measured informal procedural knowledge in children and adults. We have done such studies with business managers, college professors, elementary school students, salespeople, college students, and general populations. This important aspect of practical intelligence, in study after study, has been found to be uncorrelated with academic intelligence, as measured by conventional tests, in a variety of populations and occupations, and at a variety of age levels (Sternberg et al., 1995, 2000). Moreover, the tests predict job performance as well as or better than do tests of IQ. The lack of correlation of the two kinds of ability tests suggests that the best prediction of job performance will result when both academic and practical intelligence tests are used as predictors. Most recently, we have developed a test of common sense for the workplace (for example, how to handle oneself in a job interview) that predicts self-ratings of common sense but not self-ratings of various kinds of academic abilities (Sternberg & Grigorenko, 1998).
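The statistical logic behind that claim about prediction is that two valid but uncorrelated predictors each explain unique variance in the criterion, so the squared multiple correlation of the pair is roughly the sum of their individual values. A small simulation (purely synthetic numbers, not data from any of the studies cited) illustrates the point:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two uncorrelated predictors, standing in for academic and practical
# test scores; the effect sizes are illustrative assumptions only.
academic = rng.standard_normal(n)
practical = rng.standard_normal(n)
performance = 0.4 * academic + 0.4 * practical + rng.standard_normal(n)

def r_squared(predictors, outcome):
    """Proportion of outcome variance explained by a least-squares fit."""
    X = np.column_stack([np.ones(len(outcome))] + predictors)
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    residuals = outcome - X @ beta
    return 1 - residuals.var() / outcome.var()

r2_academic = r_squared([academic], performance)
r2_practical = r_squared([practical], performance)
r2_both = r_squared([academic, practical], performance)

print(f"academic alone:  {r2_academic:.3f}")
print(f"practical alone: {r2_practical:.3f}")
print(f"both together:   {r2_both:.3f}")
```

Because the predictors are uncorrelated, the combined R² comes out close to the sum of the two individual R² values; to the extent that academic and practical tests are uncorrelated, each adds incremental validity.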

Although the kind of informal procedural expertise we measure in these tests does not correlate with academic expertise, it does correlate across work domains. For example, we found that subscores (for managing oneself, managing others, and managing tasks) on measures of informal procedural knowledge are correlated with each other, and that scores on the test for academic psychology are moderately correlated with scores on the test for business managers (Sternberg et al., 1995). So the kinds of developing expertise that matter in the world of work may show certain correlations with each other that are not shown with the kinds of developing expertise that matter in the world of the school.

It is even possible to use these kinds of tests to predict effectiveness in leadership. Studies of military leaders showed that tests of informal knowledge for military leaders predicted the effectiveness of these leaders, whereas conventional tests of intelligence did not. We also found that although the test for managers was significantly correlated with the test for military leaders, only the latter test predicted superiors' ratings of leadership effectiveness (Sternberg et al., 2000).

Both conventional academic tests and our tests of practical intelligence measure forms of developing expertise that matter in school and on the job. The two kinds of tests are not qualitatively distinct. The reason the correlations are essentially null is that the kinds of developing expertise they measure are quite different. The people who are good at abstract, academic kinds of expertise are often people who have not emphasized learning practical, everyday kinds of expertise, and vice versa, as we found in our Kenya study. Indeed, children who grow up in challenging environments such as the inner city may need to develop practical over academic expertise as a matter of survival. As in Kenya, this practical expertise may better predict their survival than do academic kinds of expertise. The same applies in business, where tacit knowledge about how to perform on the job is as likely or more likely to lead to job success than is the academic expertise that in school seems so important.

The practical kinds of expertise matter in school too. In a study at Yale, Wendy Williams and I (cited in Sternberg, Wagner, & Okagaki, 1993) found that a test of tacit knowledge for college predicted grade-point average as well as did an academic ability test. But a test of tacit knowledge for college life better predicted adjustment to the college environment than did the academic test.

    TAKING TESTS

One of the best ways of measuring abilities as developing competencies is through dynamic assessment (Sternberg & Grigorenko, 2002a), which has been proposed as a way of uncovering such developing competencies. What is dynamic assessment? Dynamic assessment is testing plus an instructional intervention. In other words, the instructional and assessment functions, instead of being separated, are integrated. In a conventional assessment, sometimes called a static assessment, individuals receive a set of test items and solve these items with little or no feedback. Often, giving feedback is viewed as a source of error of measurement, and therefore as something to be avoided at all costs. In a dynamic assessment, individuals receive a set of test items with explicit instruction (Grigorenko & Sternberg, 1998; Lidz, 1987, 1997; Sternberg & Grigorenko, 2002b; Wiedl, Guthke, & Wingenfeld, 1995). Dynamic assessments have been found to reveal developing expertise in members of underrepresented minority groups around the world that is not revealed by conventional static tests (see, e.g., Feuerstein, Rand, & Hoffman, 1979; Lidz & Elliott, 2000; Sternberg & Grigorenko, 2002a).

Dynamic assessment is far from perfect. Scores on dynamic assessments can be influenced by many factors, such as the kinds of instruction given, the match between the kind of instruction given and the test-taker's existing pattern of skills, the relationship between the examiner and the examinee, and so on. No method of assessment gives a totally accurate picture of a person's potentials.

Why should dynamic instruction and assessment tend to benefit members of underrepresented minority groups in particular? There are at least four reasons.


1. Members of such groups may have less tacit knowledge about how to manage themselves in schools, which often reflect middle-class values. Moreover, they may have less knowledge of how to take tests (test-wiseness), due to lesser experience with tests. Dynamic instruction and assessment help make this tacit knowledge explicit.

2. The coldness and interpersonal distance characteristic of static learning and assessment situations may be more threatening to members of underrepresented minority groups than to others.

3. Members of underrepresented minority groups may have less cognitive scaffolding than do members of other groups. Dynamic instruction and assessment help provide this missing scaffolding.

4. Members of underrepresented minority groups who might disidentify with a static assessment situation may identify with the situation when they are given an opportunity not only to show what they have learned in the assessment situation, but also to learn in this situation.

Members of underrepresented minority groups may actually have less developed expertise than do members of other groups. But they may have as great or greater developing expertise, or at least capacity to develop expertise. Dynamic instruction and assessment help elucidate this developing expertise and the capacity to acquire developing expertise.

There are two common formats for dynamic assessments. In the first format, the instruction is sandwiched between a pretest and a posttest. In the second format, the instruction is given in response to the examinee's solution to each test item. Note that these are not the only possible formats, just the two most commonly used ones. Here, I use two terms of our own invention to describe them: the sandwich format and the cake format.

In the first format, examinees take a pretest, which is essentially equivalent to a static test. After they complete the pretest, they are given instruction in the skills measured by the pretest. The instruction may be given in an individual or a group setting. If it is in an individual setting, it may or may not be individualized to reflect a particular examinee's strengths and weaknesses. If it is individualized, then the amount as well as the type of feedback can be individualized. If it is in a group setting, then the instruction typically is the same for all examinees. After instruction, the examinees are tested again on a posttest. The posttest is typically an alternate form of the pretest, although, less commonly, it may be exactly the same test. For convenience, this is referred to as the sandwich format. In individual testing settings, the exact contents of the sandwich (type of instruction), as well as its thickness (amount of instruction), can be varied to suit the individual. In group testing settings, the contents and thickness of the sandwich are typically uniform.

In the second format, which is always done individually, examinees are given instruction item by item. An examinee is given an item to solve. If he or she solves it correctly, then the next item is presented. But if the examinee does not solve the item correctly, he or she is given a graded series of hints. The hints are designed to make the solution successively more nearly apparent. The examiner then determines how many and what kinds of hints the examinee needs in order to solve the item correctly. Instruction continues until the examinee is successful, at which time the next item is presented. The successive hints are presented like successive layers of icing on a cake. For convenience, this is referred to as the cake format. In the cake format, the number of layers of the cake is almost always varied (i.e., the amount of feedback depends on how quickly the examinee is able to use the format to reach a correct solution). The contents of the layers, however (i.e., the type of feedback), may or may not be constant. Most often, they are constant: The number of hints varies across examinees, but not the content of them.
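The administration logic of the cake format amounts to a loop over graded hints, with the hint count serving as the score. A minimal sketch follows; the function name and hint texts are hypothetical illustrations, not part of any published instrument:

```python
from typing import Callable, List

def administer_cake_item(hints: List[str],
                         solves_after_hint: Callable[[int], bool]) -> int:
    """Present graded hints for one item until the examinee succeeds.

    `hints` is the fixed, graded series of hints for the item;
    `solves_after_hint(k)` stands in for the examinee, returning True
    once he or she solves the item after k hints (0 = unaided).
    Returns the number of hints needed, the quantity the cake format records.
    """
    for hints_given in range(len(hints) + 1):
        if solves_after_hint(hints_given):
            return hints_given
        if hints_given < len(hints):
            # Each hint makes the solution successively more nearly apparent.
            print(f"Hint {hints_given + 1}: {hints[hints_given]}")
    return len(hints)  # all hints exhausted without success

# Example: an examinee who needs two hints on a three-hint analogy item.
needed = administer_cake_item(
    ["Restate the problem in your own words.",
     "Work out the relation in the first pair of terms.",
     "Map that relation onto the second pair."],
    lambda k: k >= 2,
)
print(needed)  # → 2
```

Instruction then continues with the next item, so the amount of feedback (the "number of layers") varies per examinee while the hint contents stay fixed.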

There are three major differences between the static and dynamic paradigms. The differences are best viewed as ones of emphasis rather than as dichotomous differences. A static test can have dynamic elements, just as a dynamic test can have static elements.

The first difference regards the respective roles of static states versus dynamic processes. Static assessment emphasizes products formed as a result of preexisting skills, whereas dynamic assessment emphasizes quantification of the psychological processes involved in learning and change. In other words, static testing taps more into a developed state, whereas dynamic testing taps more into a developing process. In both of the formats of dynamic testing described, the examiner is able to assess how the problem-solving process develops as a result of instruction. In the sandwich format of dynamic testing, the instruction is given all at once between the pretest and the posttest. In the cake format of dynamic testing, the instruction is given in graded bits after each test item, as needed. Static testing typically does not allow the examiner to draw such inferences.

The second difference regards the role of feedback. In static assessment, an examiner presents a graded sequence of problems, and the test-taker responds to each of the problems. There is no feedback from examiner to test-taker regarding quality of performance. In dynamic assessment, feedback is given, either explicitly or implicitly.

The type of feedback depends on which kind of dynamic assessment is used. In the sandwich format described above, the feedback may be explicit if the testing is individual, but will probably be implicit if the testing is in a group. The instruction sandwiched between the pretest and the posttest gives each examinee an opportunity to see which skills he or she has mastered and which skills he or she has not mastered. But in a group testing situation, the examiner is not able explicitly to tell each examinee about these skills. In an individual testing situation with the sandwich format, it is possible to provide explicit feedback, should the examiner decide to give it.

In the cake format, the examiner presents a sequence of progressively more challenging tasks, but after the presentation of each task, the examiner gives the test-taker feedback, continuing with this feedback in successive iterations until the examinee either solves the problem or gives up. Testing thus joins with instruction, and the test-taker's ability to learn is quantified while he or she learns.

The third difference between static and dynamic assessment pertains to the quality of the examiner–examinee relationship. In static testing, the examiner attempts to be as neutral and as uninvolved as possible toward the examinee. The examiner wants to have good rapport, but nothing more. Involvement beyond good rapport risks the introduction of error of measurement. In dynamic assessment, the assessment situation and the type of examiner–examinee relationship are modified from the one-way traditional setting of the conventional psychometric approach to form a two-way, interactive relationship between the examiner and the examinee.

In individual dynamic assessment, this tester–testee interaction is individualized for each child: The conventional attitude of neutrality is thus replaced by an atmosphere of teaching and helping. In group dynamic assessment using the sandwich format, the examiner is still helpful, although at a group rather than an individual level. The examiner is giving instruction in order to help the examinees improve on the posttest. As in the individual assessment format, he or she is anything but neutral.

Thus, dynamic assessment is based on the link between testing and intervention, and examines the processes of learning as well as its products. By embedding learning in evaluation, dynamic assessment assumes that the examinee can start at the zero (or almost zero) point of having certain developed skills to be assessed, and that teaching will provide all the necessary information for mastery of the assessed skills. In other words, what is assessed, in theory, is not just previously acquired skills, but the capacity to master, apply, and reapply skills taught in the dynamic assessment situation. In practice, results of dynamic assessments can be affected by many things, such as the match between tester and test-taker, the sensitivity of the tester to the test-taker, the tester's expectations for the child, and so forth. Thus, the tests may be less than perfect. The view of dynamic tests as measuring learning skills at the time of test underlies the use of the term test of learning potential, which is often applied to dynamic assessment.

In a study near Bagamoyo, Tanzania, we investigated dynamic tests administered to children. Although dynamic tests have been developed for a number of purposes (see Grigorenko & Sternberg, 1998; Sternberg & Grigorenko, 2002a), one of our particular purposes was to look at how dynamic testing affects score patterns. In particular, we developed more or less conventional measures but administered them in a dynamic format. In an experimental group, first, students took a pretest. Then they received a short period of instruction (generally no more than 10–15 minutes) on how to improve their performance in the expertise measured by these tests. Then the children took a posttest. In a control group, children took the pretest and posttest but did not receive instruction in between.

A first finding was that the correlation between pretest and posttest scores, although statistically significant, was relatively weak (about .3) in the experimental group but strong (about .8) in the control group. In other words, even a short period of instruction fairly drastically changed the rank orders of the students on the test.
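The inference here, that a weak pretest–posttest correlation implies shuffled rank orders, can be made concrete with a toy simulation (synthetic numbers, not the Tanzanian data): if instruction produces large, student-specific gains, the correlation drops sharply.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300

pretest = rng.standard_normal(n)

# Control condition: no instruction, so the posttest is essentially
# the pretest plus measurement noise, and rank orders are preserved.
post_control = pretest + 0.5 * rng.standard_normal(n)

# Experimental condition: a short intervention that benefits different
# students to very different degrees (an illustrative assumption).
gain = 3.0 * rng.standard_normal(n)
post_experimental = pretest + gain

r_control = np.corrcoef(pretest, post_control)[0, 1]
r_experimental = np.corrcoef(pretest, post_experimental)[0, 1]

print(f"control pretest-posttest r:      {r_control:.2f}")       # strong
print(f"experimental pretest-posttest r: {r_experimental:.2f}")  # weak
```

The simulated correlations reproduce the qualitative pattern (strong in the control group, much weaker in the experimental group), not the specific .8 and .3 values reported above.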

We again interpret these results in terms of the model of abilities as developing competencies and expertise. The Tanzanian students had developed very little expertise in the skills required to take American-style intelligence tests. Thus, even a short intervention could have a fairly substantial effect on their scores. When the students developed somewhat more of this test-taking expertise through a short intervention, their scores changed and became more reflective of their true capabilities for cognitive work.

Sometimes the expertise children learn that is relevant for in-school tests may actually hurt them on conventional ability tests. In one example, we studied the development of children's analogical reasoning in a country day school, where teachers taught in English in the morning and in Hebrew in the afternoon (Sternberg & Rifkin, 1979). We found a number of second-grade students who got no problems right on our test. They would have seemed, on the surface, to be rather stupid. We discovered the reason why, however. We had tested in the afternoon, and in the afternoon, the children always read in Hebrew. So they read our problems from right to left, and got them all wrong. The expertise that served them so well in their normal environment utterly failed them on the test.

Our sample was of upper-middle-class children who, in a year or two, would know better. But imagine what happens with other children in less supportive environments who develop kinds of expertise that may serve them well in their family or community lives, or even school life, but not on the tests. They will appear to be stupid, rather than lacking the kinds of expertise the tests measure.

Greenfield (1997), who has done a number of studies in a variety of cultures, found that the kinds of test-taking expertise assumed to be universal in the United States and other Western countries are by no means universal. She found, for example, that children in Mayan cultures (and probably in other highly collectivist cultures as well) were puzzled when they were not allowed to collaborate with parents or others on test questions. In the United States, of course, such collaboration would be viewed as cheating. But in a collectivist culture, someone who had not developed this kind of collaborative expertise, and moreover, someone who did not use it, would be perceived as lacking important adaptive skills (see also Laboratory of Comparative Human Cognition, 1982).

    CONCLUSIONS

Intelligence tests measure developing competencies, and these developing competencies can be transformed into the development of expertise. Tests can be created that favor the kinds of developing expertise formed in any kind of cultural or subcultural milieu. Those who have created conventional tests of abilities have tended to value the kinds of skills most valued by Western schools. This system of valuing is understandable, given that Binet and Simon (1905) first developed intelligence tests for the purpose of predicting school performance. Moreover, these skills are important in school and in life. But in the modern world, the conception of abilities as fixed or even as predetermined is an anachronism. Moreover, our research and that of others (reviewed more extensively in Sternberg, 1997) shows that the set of abilities assessed by conventional tests measures only a small portion of the kinds of developing expertise that are relevant for life success. It is for this reason that conventional tests predict only about 10% of individual-difference variation in various measures of success in adult life (Herrnstein & Murray, 1994).


Not all cultures value equally the kinds of expertise measured by these tests. In a study comparing Latino, Asian, and Anglo subcultures in California, for example, we found that Latino parents valued social kinds of expertise as more important to intelligence than did Asian and Anglo parents, who placed more value on cognitive kinds of expertise (Okagaki & Sternberg, 1993). Predictably, teachers also placed more value on cognitive kinds of expertise, with the result that the Anglo and Asian children would be expected to do better in school, and did. Of course, cognitive expertise matters in school and in life, but so does social expertise. Both need to be taught in the school and the home to all children. This latter kind of expertise may become even more important in the workplace. Until we expand our notions of abilities and recognize that when we measure them, we are measuring developing forms of expertise, we will risk consigning many potentially excellent contributors to our society to bleak futures. We will also be potentially overvaluing students with expertise for success in a certain kind of schooling, but not necessarily with equal expertise for success later in life.

    ACKNOWLEDGMENTS

Preparation of this chapter was supported under the Javits Act Program (Grant No. R206R00001) as administered by the Institute of Education Sciences, U.S. Department of Education. Grantees undertaking such projects are encouraged to express freely their professional judgment. This chapter, therefore, does not necessarily represent the position or policies of the Institute of Education Sciences or the U.S. Department of Education, and no official endorsement should be inferred.

    REFERENCES

Amabile, T. M. (1996). Creativity in context. Boulder, CO: Westview.
Anastasi, A., & Urbina, S. (1997). Psychological testing (7th ed.). Upper Saddle River, NJ: Prentice-Hall.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191–215.
Bandura, A. (1996). Self-efficacy: The exercise of control. New York: Freeman.
Binet, A., & Simon, T. (1905). Méthodes nouvelles pour le diagnostic du niveau intellectuel des anormaux [New methods for the diagnosis of the intellectual level of the abnormal]. L'Année psychologique, 11, 191–336.
Bouchard, T. J., Jr. (1998). Genetic and environmental influences on adult intelligence and special mental abilities. Human Biology, 70, 257–279.
Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. New York: Cambridge University Press.
Cattell, R. B. (1971). Abilities: Their structure, growth, and action. Boston: Houghton Mifflin.

Ceci, S. J., & Liker, J. (1986). Academic and nonacademic intelligence: An experimental separation. In R. J. Sternberg & R. K. Wagner (Eds.), Practical intelligence: Nature and origins of competence in the everyday world (pp. 119–142). New York: Cambridge University Press.
Ceci, S. J., & Roazzi, A. (1994). The effects of context on cognition: Postcards from Brazil. In R. J. Sternberg & R. K. Wagner (Eds.), Mind in context: Interactionist perspectives on human intelligence (pp. 74–101). New York: Cambridge University Press.

Chi, M. T. H., Glaser, R., & Farr, M. J. (Eds.). (1988). The nature of expertise. Hillsdale, NJ: Erlbaum.
Dweck, C. S. (1999). Self-theories: Their role in motivation, personality, and development. Philadelphia: Psychology Press/Taylor & Francis.
Dweck, C. S. (2002). Messages that motivate: How praise molds students' beliefs, motivation, and performance (in surprising ways). In J. Aronson (Ed.), Improving academic achievement: Impact of psychological factors on education (pp. 37–60). San Diego: Academic Press.
Dweck, C. S., & Elliott, E. S. (1983). Achievement motivation. In P. H. Mussen (General Ed.) & E. M. Hetherington (Vol. Ed.), Handbook of child psychology: Socialization, personality, and social development (4th ed., Vol. 4, pp. 644–691). New York: Wiley.
Ericsson, K. A. (Ed.). (1996). The road to excellence: The acquisition of expert performance in the arts and sciences, sports and games. Hillsdale, NJ: Erlbaum.

Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100, 363–406.
Ericsson, K. A., & Smith, J. (Eds.). (1991). Toward a general theory of expertise: Prospects and limits. New York: Cambridge University Press.
Feuerstein, R., Rand, Y., & Hoffman, M. B. (1979). The dynamic assessment of retarded performers: The Learning Potential Assessment Device: Theory, instruments, and techniques. Baltimore: University Park Press.
Flynn, J. R. (1987). Massive IQ gains in 14 nations: What IQ tests really measure. Psychological Bulletin, 101, 171–191.


Flynn, J. R. (1998). WAIS-III and WISC-III gains in the United States from 1972 to 1995: How to compensate for obsolete norms. Perceptual and Motor Skills, 86, 1231–1239.
Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York: Basic Books.
Gardner, H. (1999). Intelligence reframed: Multiple intelligences for the 21st century. New York: Basic Books.
Gick, M. L., & Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12, 306–355.
Goleman, D. (1995). Emotional intelligence. New York: Bantam.
Greenfield, P. M. (1997). You can't take it with you: Why ability assessments don't cross cultures. American Psychologist, 52, 1115–1124.
Grigorenko, E. L., Meier, E., Lipka, J., Mohatt, G., Yanez, E., & Sternberg, R. J. (2004). The relationship between academic and practical intelligence: A case study of the tacit knowledge of Native American Yupik people in Alaska. Learning and Individual Differences, 14, 185–207.
Grigorenko, E. L., & Sternberg, R. J. (1998). Dynamic testing. Psychological Bulletin, 124, 75–111.
Grigorenko, E. L., & Sternberg, R. J. (2001a). Analytical, creative, and practical intelligence as predictors of self-reported adaptive functioning: A case study in Russia. Intelligence, 29, 57–73.
Grigorenko, E. L., & Sternberg, R. J. (Eds.). (2001b). Family environment and intellectual functioning: A life-span perspective. Mahwah, NJ: Erlbaum.
Herrnstein, R. J., & Murray, C. (1994). The bell curve. New York: Free Press.
Hoffman, R. R. (Ed.). (1992). The psychology of expertise: Cognitive research and empirical AI. New York: Springer-Verlag.
Horn, J. L. (1994). Fluid and crystallized intelligence, theory of. In R. J. Sternberg (Ed.), Encyclopedia of human intelligence (Vol. 1, pp. 443–451). New York: Macmillan.
Jensen, A. R. (1998). The g factor: The science of mental ability. Westport, CT: Praeger/Greenwood.
Kyllonen, P. C. (2002). g: Knowledge, speed, strategies, or working-memory capacity?: A systems perspective. In R. J. Sternberg & E. L. Grigorenko (Eds.), The general factor of intelligence: How general is it? (pp. 415–445). Mahwah, NJ: Erlbaum.
Laboratory of Comparative Human Cognition. (1982). Culture and intelligence. In R. J. Sternberg (Ed.), Handbook of human intelligence (pp. 642–719). New York: Cambridge University Press.
Lave, J. (1989). Cognition in practice. New York: Cambridge University Press.

Lidz, C. S. (Ed.). (1987). Dynamic assessment. New York: Guilford Press.
Lidz, C. S. (1997). Dynamic assessment approaches. In D. P. Flanagan, J. L. Genshaft, & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (pp. 281–296). New York: Guilford Press.
Lidz, C. S., & Elliott, J. (Eds.). (2000). Dynamic assessment: Prevailing models and applications. Greenwich, CT: Elsevier/JAI Press.
Mayer, J. D., Salovey, P., & Caruso, D. (2000). Emotional intelligence. In R. J. Sternberg (Ed.), Handbook of intelligence (pp. 396–421). New York: Cambridge University Press.
McClelland, D. C. (1985). Human motivation. New York: Scott, Foresman.
McClelland, D. C., Atkinson, J. W., Clark, R. A., & Lowell, E. L. (1976). The achievement motive. New York: Irvington.

    Neisser, U. (Ed.). (1998). The rising curve. Washington,DC: American Psychological Association.

    Nues, T. (1994). Street intelligence. In R. J. Sternberg(Ed.), Encyclopedia of human intelligence (Vol. 2,pp. 10451049). New York: Macmillan.

    Okagaki, L., & Sternberg, R. J. (1993). Parental beliefsand childrens school performance. Child Develop-ment, 64, 3656.

Petrill, S. A. (2002). The case for general intelligence: A behavioral genetic perspective. In R. J. Sternberg & E. L. Grigorenko (Eds.), The general factor of intelligence: How general is it? (pp. 281–298). Mahwah, NJ: Erlbaum.

Salovey, P., & Mayer, J. D. (1990). Emotional intelligence. Imagination, Cognition, and Personality, 9, 185–211.

Serpell, R. (2000). Intelligence and culture. In R. J. Sternberg (Ed.), Handbook of intelligence (pp. 549–580). New York: Cambridge University Press.

Sternberg, R. J. (1985). Beyond IQ: A triarchic theory of human intelligence. New York: Cambridge University Press.

Sternberg, R. J. (1986). Intelligence applied. Orlando, FL: Harcourt Brace College.

Sternberg, R. J. (1988). The triarchic mind: A new theory of human intelligence. New York: Viking Penguin.

Sternberg, R. J. (1990). Metaphors of mind. New York: Cambridge University Press.

Sternberg, R. J. (1994). Cognitive conceptions of expertise. International Journal of Expert Systems: Research and Application, 7, 1–12.

Sternberg, R. J. (1997). Successful intelligence. New York: Plume.

Sternberg, R. J. (1999). The theory of successful intelligence. Review of General Psychology, 3, 292–316.

Sternberg, R. J. (Ed.). (2000). Handbook of intelligence. New York: Cambridge University Press.

Sternberg, R. J., & Ben-Zeev, T. (2001). Complex cognition: The psychology of human thought. New York: Oxford University Press.

Sternberg, R. J., Ferrari, M., Clinkenbeard, P. R., & Grigorenko, E. L. (1996). Identification, instruction, and assessment of gifted children: A construct validation of a triarchic model. Gifted Child Quarterly, 40, 129–137.

Sternberg, R. J., Forsythe, G. S., Hedlund, J., Horvath, J., Snook, S., Williams, W. M., Wagner, R. K., & Grigorenko, E. L. (2000). Practical intelligence in everyday life. New York: Cambridge University Press.

Sternberg, R. J., & Grigorenko, E. L. (1998). Measuring common sense for the work place. Unpublished manuscript.

Sternberg, R. J., & Grigorenko, E. L. (2000). Teaching for successful intelligence. Arlington Heights, IL: Skylight Training and Publishing.

Sternberg, R. J., & Grigorenko, E. L. (Eds.). (2001). Environmental effects on cognitive abilities. Mahwah, NJ: Erlbaum.

Sternberg, R. J., & Grigorenko, E. L. (2002a). Dynamic testing. New York: Cambridge University Press.

Sternberg, R. J., & Grigorenko, E. L. (Eds.). (2002b). The general factor of intelligence: How general is it? Mahwah, NJ: Erlbaum.

Sternberg, R. J., Grigorenko, E. L., Ferrari, M., & Clinkenbeard, P. (1999). A triarchic analysis of an aptitude–treatment interaction. European Journal of Psychological Assessment, 15(1), 1–11.

Sternberg, R. J., & Kolligian, J., Jr. (Eds.). (1990). Competence considered. New Haven, CT: Yale University Press.

Sternberg, R. J., & Lubart, T. I. (1995). Defying the crowd: Cultivating creativity in a culture of conformity. New York: Free Press.

Sternberg, R. J., & Lubart, T. I. (1996). Investing in creativity. American Psychologist, 51, 677–688.

Sternberg, R. J., Nokes, K., Geissler, P. W., Prince, R., Okatcha, F., Bundy, D. A., et al. (2001). The relationship between academic and practical intelligence: A case study in Kenya. Intelligence, 29, 401–418.

Sternberg, R. J., & Rifkin, B. (1979). The development of analogical reasoning processes. Journal of Experimental Child Psychology, 27, 195–232.

Sternberg, R. J., & Spear-Swerling, L. (1996). Teaching for thinking. Washington, DC: American Psychological Association Books.

Sternberg, R. J., Torff, B., & Grigorenko, E. L. (1998a). Teaching for successful intelligence raises school achievement. Phi Delta Kappan, 79, 667–669.

Sternberg, R. J., Torff, B., & Grigorenko, E. L. (1998b). Teaching triarchically improves school achievement. Journal of Educational Psychology, 90, 1–11.

Sternberg, R. J., Wagner, R. K., & Okagaki, L. (1993). Practical intelligence: The nature and role of tacit knowledge in work and at school. In H. Reese & J. Puckett (Eds.), Advances in lifespan development (pp. 205–227). Hillsdale, NJ: Erlbaum.

Sternberg, R. J., Wagner, R. K., Williams, W. M., & Horvath, J. A. (1995). Testing common sense. American Psychologist, 50, 912–927.

Wiedl, K. H., Guthke, J., & Wingenfeld, S. (1995). Dynamic assessment in Europe: Historical perspectives. In J. S. Carlson (Ed.), Advances in cognition and educational practice (Vol. 3, pp. 33–82). Greenwich, CT: JAI Press.


