
Chapter 6.27.4 (formerly titled, General Psychology as a common basis...)

EXPERIMENTAL PSYCHOLOGY AND ITS IMPLICATIONS FOR HUMAN DEVELOPMENT

Nelson Cowan, Department of Psychological Sciences, University of Missouri - Columbia, USA

Mailing Address
Nelson Cowan
Department of Psychological Sciences
University of Missouri
210 McAlester Hall
Columbia, MO 65211
USA

E-mail: [email protected]
Web: http://web.missouri.edu/~psycowan/
Phone: +573-882-4232

Contents

Summary
1. Book Section Introduction
2. Chapter Introduction: Of Human Successes and Failures
3. Strengths and Limits of Human Information Processing

3.1. Some strengths and limits of perception
3.2. Some strengths and limits of long-term memory and learning
3.3. Some strengths and limits of attention and working memory
3.4. Some strengths and limits of reasoning, problem-solving, and decision-making
3.5. Some strengths and limits of human communication
3.6. Some strengths and limits of human emotions
3.7. Some strengths and limits of human social cognition
3.8. Developmental changes in strengths and weaknesses

Bibliography for Further Reading

Acknowledgments

This chapter was produced with funding from the National Institutes of Health through Grant R01-HD21338. Address correspondence to Nelson Cowan, Department of Psychological Sciences, University of Missouri, 210 McAlester Hall, Columbia, MO 65211, USA. E-mail: [email protected].

Reference:

Cowan, N. (2002). Experimental psychology and its implications for human development. Encyclopedia of Life Support Systems (EOLSS). Oxford, U.K. [http://www.eolss.net]

Glossary of Terms

Attention: The extra processing of information contained in some stimuli at the cost of poorer processing of other, concurrent stimuli, using any resources within the human mind that are limited in how much information they can process in a fixed time period.

Cognitive Neuroscience: The use of neuroimaging and other physiological evidence to improve our understanding of cognition.

Cognitive Psychology: The study of the mind using techniques from experimental psychology, and other scientific techniques, to examine and understand behavior. There is a special emphasis on understanding the mental processes underlying perception, memory, attention, and thought.

Communication: Information processing resulting from stimuli presented by one organism to another.

Developmental Psychology: The study of the change in mental processes and behavior across the life span.

Emotions: Mental states, including fear, anger, happiness, and so on, that strongly motivate the individual experiencing the emotions to carry out certain actions, usually related to their own self-interest. Such mental states result from a combination of temporary physiological changes in the brain and body and attributions as to the origin of these physiological changes.

Experimental Psychology: The discipline within the scientific study of the mind in which there is an emphasis on discovering principles of mental operation through the controlled observation of behaviors. There is a special emphasis on determining cause-and-effect relationships by manipulating certain variables and determining the effects on certain others, with an attempt to keep all other variables constant across conditions of the experiment.

Information: Data that allow choices between multiple options, labels, or categories.

Information Processing: The way in which information from the senses is used and combined with previously obtained information in the human mind, to permit perception, memory, and thought.

Learning: Changes in behavior resulting from newly-formed memories.

Long-term Memory: The sum of all information previously recorded in one's mind. It is not a perfect record of environmental stimulation, but rather an imperfect record involving one's interpretation of past events, which can be recalled only if adequate environmental cues or hints are present.

Memory: A change in the brain resulting from stimulation that results in a representation of the information contained in that stimulation, which has the capability of affecting future behavior.

Neuroimaging: The use of technical devices to observe or infer where and when brain activity occurs in response to different kinds of stimulation.

Perception: The processing of sensory data ranging from the initial detection of stimuli to the identification and interpretation of the stimuli.

Physiological Psychology: The study of the chemical and electrical processes of the nervous system that underlie behavior and mental processes.

Problem-solving: The use of cognitive processes to find an answer to a question or a method to accomplish a task, especially when the answer or method is new.

Social Cognition: The study of thought processes contributing to social behavior, with an emphasis on principles related to cognitive psychology.

Social Psychology: The study of how behavior occurs in social settings and how human beings respond to one another.

Symbolic Communication: Communication in which the stimuli used to communicate are arbitrarily assigned particular meanings by social convention.

Working Memory: The small amount of information that remains in one's attention or remains especially accessible at the moment. Such information is especially important for the completion of various mental tasks (e.g., the partial results of an arithmetic problem that one is completing). This type of memory is limited to a small number of separate concepts in attention, and unattended information in working memory typically remains especially accessible for less than a minute.

Summary

This chapter examines experimental psychology, the controlled study of behavior with an emphasis on determining cause-and-effect relationships. The following chapters within this section are introduced and then a single theme is pursued with reference to the material to appear in these chapters. The theme is that there are both strengths and limits in human information processing. Humans use shortcuts or heuristics to think about information of various sorts, often without realizing that they are doing so. These heuristics often are useful in producing answers quickly but they are not infallible. As a result, there are illusions and mistakes that occur, often without one's awareness, in all areas of human thought. Limitations in human thought are discussed with reference to phenomena in the areas of perception; long-term memory and learning; attention and working memory; reasoning, problem-solving, and decision-making; human communication; human emotions; and human social cognition. Finally, shortcomings in adult cognition are compared to shortcomings that emerge during childhood development. In both cases the shortcomings can be considered types of self-centeredness in which one is insufficiently aware of information that one cannot get from one's own point of view (both physically and metaphorically speaking). Many of the world's problems are caused at least partly by people's general lack of awareness of the extent to which human thought processes are imperfect and subject to error, and education in this regard seems essential for establishing the sustainable growth or maintenance of the human species in a healthy and happy state.

1. Book Section Introduction

In order to achieve a sustainable way of life on Earth, toward which this entire volume is aimed, there will have to be cycles involving motivation followed by action. Specifically, there will have to be adequate motivation to understand the world better, followed by information-gathering activities; adequate motivation to digest the information gathered, followed by the construction of sound conclusions from that information; and adequate motivation to act upon the conclusions so formed in order to implement changes that will improve our way of life. The relevance of psychology is as a means to understand what it will take to get people to make all of this happen. One must study the characteristics of the human mind to understand (1) how people can be motivated to proceed with an important action, and what can go wrong with people's motivation; as well as (2) how people gather evidence and form conclusions from that evidence, and what can go wrong with the thought processes whereby conclusions are formed. The chapters subsumed under this section heading are pulled together from many subdisciplines pertaining to these questions. All of them may be described as facets of experimental psychology, defined loosely as controlled empirical observations, but with special emphasis on laboratory experimentation in which manipulations yield evidence of cause-and-effect relationships between stimuli and responses (see Section 6.27.3 of this volume).

The present chapter provides brief overviews of the chapters to come within this section, in order to put them within a coherent framework, and then discusses some general principles of experimental and developmental psychology that should apply to all of them. These principles lead to a view of how psychology might be used to help create a sustainable way of life on our planet.

Chapter 6.27.4.1 treats the processes that occur as information from the outside world reaches the brain and the individual must perceive and identify it, decide which aspects of it to attend, retain it in memory, and learn it for future encounters with similar situations.

(Reference to Chapter 6.27.4.1)

Chapter 6.27.4.2 treats the higher-level mental processes that are not always driven by a single present stimulus in the outside world, but rather by the whole constellation of events and motivations: namely, thinking, creativity, and problem-solving. All of the processes in these two chapters ordinarily come under the rubric of cognitive psychology. They help with theoretically-driven predictions as to how individuals will behave in particular situations.

(Reference to Chapter 6.27.4.2)

Chapter 6.27.4.3 deals with an aspect of the motivation underlying thoughts and behaviors, namely, human emotions and their relation to health. Traditionally, motivations and emotions have been investigated separately from cognitive psychology but we are entering an era in which more investigators are considering thoughts and emotions together. A good source for that type of effort is a recent book by Antonio Damasio, The feeling of what happens: Body and emotion in the making of consciousness (1999, London, William Heinemann), as well as the journal Cognition & Emotion (Psychology Press, U.K.).

(Reference to Chapter 6.27.4.3)

Chapter 6.27.4.4 treats the neurological and biological bases underlying the processes of thinking and feeling mentioned so far. This is an area in which an increasing amount of grant funding and research activity is being channeled in order to make the most of recent technological advances in neuroimaging, in which pictorial representations of brain structures and neural processes within the brain are constructed along with statistical analyses of differences between the patterns of neural activity occurring under different task demands.

(Reference to Chapter 6.27.4.4)

Finally, Chapter 6.27.4.5 takes a more traditional approach that has been complementary to the experimental approach for many years: an approach in which psychometric testing is used to examine individual differences in cognition and emotion. In some recent research, though, experimental techniques and psychometric techniques are being used together more closely than in the past so as to make use of both the precision in understanding what mental processes a particular task requires, which the experimental approach offers, and the statistical power to understand the similarities and differences reflected in individuals' abilities, which the psychometric approach offers. For examples of this dual approach, see the articles by Cowan, Engle, and Salthouse cited in the further readings.

(Reference to Chapter 6.27.4.5)

One topic seemingly related to the approach outlined here that is absent from the present section of the encyclopedia is social psychology, which has incorporated principles of cognitive and experimental psychology in order to gain a better understanding of social cognition and interaction. Also missing is developmental psychology, in which the normal adult state is seen to arise from changes occurring in childhood and is seen to decline in some ways in old age (along with a continuing increase in wisdom in some areas). These important directions in which one can go in applying basic experimental methods and results, for a more complete understanding of human changes and interactions in the natural settings of family and social life, are represented in separate sections of the volume, Section 6.27.5 (on developmental psychology) and Section 6.27.6 (on social psychology). The final section on psychology (6.27.7) then goes further in making a transition between disciplines of theoretical science and disciplines of applied science within psychology.

(Reference to Psychology, Social Psychology, and Developmental Psychology)

2. Chapter Introduction: Of Human Successes and Failures

One of the earliest and best known self-help books in psychology is the 1936 book by Dale Carnegie, How to win friends and influence people (New York: Simon & Schuster). Within that book, a simple but important truth was articulated: that the only way to get someone to do something is to get them to want to do it. From the point of view of a cognitive psychologist, one might hasten to add that one must also get them to be able to do it. From that starting point, let us define the question to be addressed here as how one would be able to create the motivation and ability in humans to improve their world. Given that people may become more set in their ways as they grow up, any successful approach would be best begun within a childhood education program.

The reason some kind of major, new education program is needed is that what one encounters presently in the world is vast human potential mixed with vast political obstacles taking the form of disagreements between individuals, which thwart progress. The disagreements appear to stem from two sources: differences in people's beliefs, and differences in their perceived self-interests. It is often difficult to tell the two apart. Someone may say that he or she holds a particular belief regarding what is best for the nation or world, whereas in fact the person only espouses that belief for reasons of self-benefit. It may also occur that someone who has ambivalent or conflicting thoughts or emotions (as people are not always internally consistent) actually is able to convince him- or herself that there is no conflict between his or her self-interest and the common good. The psychological literature, especially within the field of social cognition, is full of examples of experiments showing that motivation often can control reasoning processes and beliefs rather than logic being the controlling factor.

Another factor that aggravates this tendency toward self-interest is that people's ability to process information is limited, so that they may not have sufficient means to arrive at the right answers. It may be especially when correct information is lacking that self-interest rushes in. For example, in the United States, as of the beginning of the presidency of George W. Bush in 2001, the Republican and Democratic parties had different views of how to stimulate the economy (by returning more money to taxpayers, or by using more of it to support public programs and to reduce the public debt faster?). This difference in political philosophy also tends to correspond to self-interest, with more Republicans in the country tending to be wealthy people who would benefit more from the proposed tax reductions, and more Democrats in the country tending to be less well-off people who would benefit more from public programs. The question of how best to stimulate the economy is one that should be open to empirical testing, but there is no practical means to carry out the appropriate empirical tests.

What a new educational program should emphasize, however, is the areas of strength and weakness in people's thinking, i.e., limits in human mental processes. Across the areas of perception, memory, reasoning, emotions, communication, and social interaction, a large variety of studies has shown that people tend to be overly confident of their own mental processes and overly dismissive of the mental processes of other people who disagree with them. We might be able to get people to work together more efficiently if they were made much more aware of these human shortcomings, which often amount to an egocentric frame of reference and a failure to see things from other points of view. What follows, therefore, is a description of strengths and limits in human mental processes of different types, why they may occur, what ill effects may come from a poor understanding of the limits, and how these ill effects might be counteracted.

3. Strengths and Limits of Human Information Processing

Across various areas of human cognition, similar patterns emerge. One finds that cognitive processes are very keen but were not designed to be perfect. Perception is not completely veridical, one cannot attend to all of the stimuli in the environment at once, memories sometimes fail or are misleading, reasoning processes sometimes are illogical, and so on. Instead of perfection, one can speculate from an evolutionary perspective that mental processes were designed to be good enough to get by without using up too many of the organism's physiological resources. The way in which that is accomplished is through shortcuts or "heuristics" that generally give the right answer though they sometimes err. Careful attention to the ways in which cognitive processes break down can lead to a deeper understanding of the shortcuts that the mind uses to carry out its functions. An understanding of the failings of the mind can lead to an appreciation that the wise person must allow for mistakes in his or her own mental processes. The result of this appreciation can only be greater tolerance of differences among individuals and groups, which is helpful in promoting peace and harmony throughout the world and in working together to solve any problems that humans face. Let us now contemplate strengths and weaknesses in various areas of human cognition.

3.1. Some strengths and limits of perception

Like many other animals, humans are able to perceive as little as a single photon of light, a very soft sound, or a very gentle touch, and are able to perceive an incredible range of stimulus intensities and make fine distinctions between stimuli. Where humans truly excel in perception is in using previous knowledge to interpret the meaning of environmental stimuli. For example, one may categorize a stranger as someone who is likely to be friendly versus unfriendly to one's own group, on the basis of subtle differences in clothing, demeanor, speech accent or topic, or facial expression. (Other species carry out similar categorization processes but not on as many kinds of information as humans do.) As another example, one may recognize an object as a "chair" even if it does not resemble any chair that one has ever seen before, on the basis of its properties that would allow comfortable sitting for a human being.

Perception occurs quite rapidly, resulting in the identification of an object usually within a quarter of a second or less. This kind of finding comes from "backward recognition masking" studies in which two brief stimuli of the same sensory modality are presented in rapid succession; when the onsets of the two stimuli are less than a quarter second apart, recognition of the first stimulus in the pair is impaired, whereas recognition of the second stimulus remains unimpaired unless the stimuli are practically on top of one another. It is thought that a mental afterimage of a brief stimulus is used to continue the recognition process for a short while after the stimulus has ended, unless that afterimage is interrupted by another stimulus in the same modality.

Unfortunately, though, perception is fraught with illusions. Gestalt psychologists pointed out some of these illusions as a means of considering how the mind works. The illusions are not restricted only to very special situations. Instead, illusions are common in daily life. For example, consider the shape represented by an upside-down letter T. The length of the vertical bar typically is overestimated relative to the length of the crossbar. No one knows the entire reason for this effect, but part of it is that divided lines seem shorter than undivided lines. An irrelevant distance between the intersection and the end of the crossbar may be improperly taken into account, lowering the perceived length of the crossbar. Another part of the illusion is that vertical lines seem slightly longer than horizontal ones, even in an L shape that is constructed so that the vertical and horizontal actually are the same length. Some of these illusions may contribute to important errors, such as pilot errors leading to airplane crashes in manually-directed flights.

Illusions are common in ordinary objects. The moon looks much larger at the horizon than when high in the sky but a photograph will show no difference in the size of the image. One theory of why this illusion occurs is related to the use of distance to gauge the size of an object. A penny held up at the right distance can completely block out the moon because these objects cover the same area of the retina in the eye. Yet, the penny is judged to be much smaller than the moon because it is judged to be much closer. People generally maintain perceptual constancy by the implicit (not conscious) use of Emmert's Law, in which the perceived size of an object is said to be proportional to the perceived distance times the retinal area covered. However, for objects as far away as the moon, there is no way to judge the perceived distance and so Emmert's Law breaks down. The moon may be judged as if it were a flock of birds, which is much farther away when viewed at the horizon than when viewed overhead. However, the moon is so far away that there is very little proportional change in its distance from the observer when it is at the horizon versus overhead. If it is perceived to be closer when overhead it will be perceived to be smaller, also.
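To make the reasoning behind this account concrete, Emmert's Law can be written as a simple proportionality; the labels below are chosen here for convenience and are not notation from the original literature.

    % Emmert's Law as a proportionality (illustrative labels only):
    %   S = perceived size, D = perceived distance, \theta = retinal (angular) size
    \[ S \propto \theta \times D \]
    % The moon's angular size is essentially the same at the horizon and overhead:
    \[ \theta_{\mathrm{horizon}} \approx \theta_{\mathrm{overhead}} \]
    % If the horizon moon is assigned a greater perceived distance, the law forces a
    % larger perceived size, which is the moon illusion:
    \[ D_{\mathrm{horizon}} > D_{\mathrm{overhead}} \;\Rightarrow\; S_{\mathrm{horizon}} > S_{\mathrm{overhead}} \]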

One can classify illusions in several ways. One way is to make a distinction between contrast illusions and assimilation illusions. In a contrast illusion, an object looks opposite from the context. In an assimilation illusion, it looks similar to the context. The difference may be in whether the context is perceived as part of the figure being judged (leading to assimilation) or a separate figure (leading to contrast). For example, if a small circle is surrounded by a larger circle, this makes the small circle seem larger than it really is (i.e., there is assimilation). However, if a small circle has a larger circle next to it, this makes the small circle seem smaller than it really is (i.e., there is contrast).

Another way to classify illusions is to distinguish between bottom-up sources of illusion and top-down sources. Bottom-up sources result from the way in which the nervous system is hooked together, whereas top-down sources result from knowledge that the individual has acquired over a lifetime. A bottom-up source is present, for example, when one views a black square with vertical and horizontal white bars forming a grid. The intersections of the grid then will look dark. The dark spots seem to be seen in most of the intersections, although less so in the particular intersection at which one is staring. This effect can be attributed to contrast effects between adjoining areas of brightness. The intersections are surrounded by more white than the lines in other places, and by contrast they look less white. At the center of one's vision the visual receptor cells take in less context and have higher acuity, explaining why the illusion is weaker at that point.

One can create a very powerful bottom-up perceptual effect with no equipment other than a well-lit room that can be darkened. Hold your hand over one eye in the lit room, preferably for 20 minutes. Then turn out the lights while still covering the one eye. Taking turns looking out with one eye and then the other, you will find that you are able to see much better with the eye that had been covered than with the other eye. The covered eye was allowed to adapt to the dark, a process that ordinarily takes place gradually in the outdoors as the sun sets and results in much better sensitivity to small amounts of light.

Top-down sources of illusion often involve misleading the subject about a form of information that ordinarily is used to create perceptual constancy. Ordinarily, we use information about the perceived distance of an object, along with the retinal size (the portion of the eye's retina that is covered by the object) in order to judge the perceived size. Distance cues can fool us about the size of a drawing. Thus, two identical pictures of a man can be made to look very different in size if they are placed at different apparent distances in a perspective drawing. The one that looks closer will be judged smaller, given that the retinal sizes are identical.

A top-down source of illusion in audition, the phonemic restoration effect, strongly shows the role of knowledge. If one uses a computer to remove several phonemes from a spoken sentence, replacing them with silence, the gap in the sentence will be clearly audible. However, if one then fills in the silent gap with an extraneous noise such as a cough, one hears the nonexistent phonemes behind the cough, making the speech sound complete again. It appears that our knowledge of the complete speech is enough to make us hear it that way, provided that the acoustic stimulus provides a good reason why the phonemic information did not actually reach the ear.

The implications of these illusions for ordinary perception are striking. It is fully possible for two normal individuals to witness the same event and to perceive something very different. The most likely reason for this to happen is that the two individuals may have different top-down information in memory, therefore interpreting the event differently. For example, upon witnessing a robbery, a person prejudiced against blacks is more likely to perceive that the robber was black, given only fleeting information (or no relevant information) about the race of the robber. People also differ in how self-confident they are about their own perceptions being correct, as opposed to someone else's perceptions. It stands to reason that knowledge about the possibility of perceptual illusions would reduce a person's self-confidence of judgment and that this generally would be a good thing, allowing people who perceived an event differently to resolve their differences and arrive closer to the truth.

Illusions cannot be easily overcome through free will. Time and again, subjects who have been informed about the existence of an illusion and who have been shown the basis of the illusion through a demonstration of it, continue to be susceptible to the illusion. Although it can be somewhat reduced through an awareness-raising session, often it cannot be eliminated. Thus, the best we can do is to keep in mind that our perceptions do not always accurately mirror reality. We can have the confidence of knowing that our perceptions typically capture what is important to us in the environment.

An exception to the ecological strength of perception is that it was designed through millions of years of evolution and cannot cope as well with the modern world. For example, we do not have a reliable perceptual means to detect harmful radiation from non-natural sources, harmful substances in the air and water, and so on. It takes a scientifically trained mind to be aware that such harmful forces and substances may exist even though they cannot be seen. An untrained individual may believe that if nothing harmful is detected through the senses, then nothing harmful can exist. Diseases carried by bacteria and viruses are a middle case in which the cause itself cannot be detected, although its effect is soon detected, so that the disease is likely to be attributed to the wrong cause. Education cannot make our perceptions totally veridical but it can help us not to rely too directly on our perceptions in understanding the world. Perceptions can be interpreted correctly only with extended observations, logically considered in light of the possibility of illusions, which always must be taken into account.

3.2. Some strengths and limits of long-term memory and learning

We are able to retain a vast amount of information from our experiences in life. However, this is not to say that our memories always are accurate. Our memories seem very good at preserving the gist of what happened. People remember important incidents (or some that may be less important) that occurred long ago, sometimes even from when they were 4 years old or younger.

However, human memory is not so good in retaining incidental details correctly. The memories tend to be simplified in a way that makes them more consistent with our preconceived notions (schemas). Also, our motivations may cause us to remember certain events in a distorted manner that reduces our negative feelings such as guilt, emotional pain, or anxiety. Here, as in perception, our top-down information (in the form of our notions of how things ordinarily work in life, or "schemas") plays a large role. A personal example illustrates this well. When the present author was about 9 years old, he deliberately slid on the kitchen floor, which had just been waxed. The sliding action was a little too successful and he put up his arm to stop the slide; it crashed through the kitchen window, causing a long, shallow scrape to the arm. Discussing this incident about 37 years later, he found that his mother certainly remembered the incident, but had encoded it differently. She insisted that she would never wax her floor to the point of danger and that her son actually had pushed against the window to turn into the corridor to the left. However, the physical evidence supported the author's childhood memory inasmuch as there was a long scar on his left arm and no scar on his right arm, which would have been the arm used to turn to the left as in his mother's interpretation.

Elizabeth Loftus has done some innovative research showing that our memories are pliable and open to corruption from misleading suggestions. Subjects who witness a film showing an automobile accident are then questioned on the accident in a courtroom-like setting. It turns out that misleading information alters subjects' recollections. If asked questions like, "How fast was the car going when it passed the yield sign?" they are more likely to state later on that they saw a yield sign (as opposed to, say, a stop sign). If asked questions like, "How fast was the car going when it smashed into the truck?" they give faster estimates than if a more neutral term for movement is used, such as, "How fast was the car going when it collided with the truck?" Loftus also has shown that one can ask a child about a particular memory that is fabricated (e.g., "Do you remember the time you were lost in a department store?") and implant the memory in the child's mind. Each successive time that the child was asked, he reported more and more details about this non-existent episode. Henry Roediger III and colleagues also have shown many times that people can strongly "remember" words that were not presented within a list. This is accomplished in a procedure adapted from James Deese. A word list is presented that contains many words from a particular category, but not a central word in the category. For example, it may contain words like shade, drapes, glass, sill, and so on but not the word "window." Subjects often are adamant that they remember hearing "window."

How should one alter his or her discourse with other people, knowing about such fallacies of memory? It would make sense to lower one's confidence that one's own memory is necessarily correct. On the other hand, it does not make sense to hold any more confidence in someone else's memory of an event than of one's own. When memories of the same event differ widely, the recourse should be to seek hard evidence of what transpired and to maintain some tolerance about another individual's view. In the absence of this knowledge, one might jump to the conclusion that another individual is lying about the event. Individuals do lie, but they also differ in their perceptions and memories. Heightened awareness of this point seems critical to improve the tone of personal, public, and political dialogues.

Not only are there limits in what people remember; we generally seem unaware of these limits. In several studies, subjects have been asked not only to indicate what they remember, but also to rate how confident they are of the correctness of their memory. An example would be a task in which a familiar person was to be pointed out within a set, much like suspects within a police lineup. When none of the suspects in the lineup is familiar, subjects often err and identify one of the suspects as familiar anyway. What is worse, the level of confidence subjects have in their answers is almost completely uncorrelated with the correctness of the answers. Misleading or suggestive information introduced during eyewitness questioning not only can make people have false memories; it can result in subjects being very confident in these false memories. We do not seem to have introspective awareness of the correctness of our memories.
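As a purely illustrative sketch, not an analysis from any study cited here, the claim that confidence and accuracy are nearly uncorrelated is the kind of statement that can be checked by correlating lineup accuracy with confidence ratings; the numbers below are arbitrary example inputs.

    # Illustrative sketch only: quantifying a confidence-accuracy relation.
    # The values are arbitrary example inputs, not data from any study cited here.
    from statistics import correlation  # requires Python 3.10 or later

    accuracy   = [1, 0, 1, 0, 0, 1, 0, 1, 0, 0]   # 1 = correct identification, 0 = error
    confidence = [4, 5, 2, 5, 4, 3, 5, 2, 4, 5]   # self-rated confidence, 1 (low) to 5 (high)

    # A coefficient near zero would mean that confidence says little about correctness,
    # the pattern described in the eyewitness literature summarized above.
    print(round(correlation(accuracy, confidence), 2))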

The same absence of insight characterizes people's understanding of their own learning. B.L. Schwartz, A.S. Benjamin, and R.A. Bjork (1997, The inferential and experiential bases of metamemory, Current Directions in Psychological Science, 6, 132-137) recently have summarized studies indicating that even adults misjudge the effectiveness of learning situations they have encountered. For example, when asked to make a certain kind of response to a word, such as judging what category it belongs in, responses can be much faster for some words than for others. Subjects seem to predict that words that they can respond to quickly also will be remembered better, when often it is the words that require a slower, more effortful response that are remembered better; the effort helps the memory encoding. As another example that Robert Bjork has emphasized, one can draw an important distinction between massed and spaced practice. Massed practice is a situation in which one type of information is learned through numerous presentations all at once. One might, for example, learn how to do one type of arithmetic problem and, after completing a large number of practice problems, go on to the next type of arithmetic problem. In contrast, spaced practice is a situation in which any one type of information is distributed across a longer time period. One might learn to do one type of arithmetic problem and, after completing several problems of that type, go on to the next type of problem; only to have the various types of problems mixed together later on so that any particular type of problem recurs from time to time.

Although many studies have shown that spaced practice generally leads to much better long-term learning than massed practice, students (even adults) who are questioned generally think that they have learned more when exposed to massed practice. This misjudgment probably occurs because massed practice requires less effort at the time of learning, leading to the impression that the task has already been well-learned. Unfortunately, given massed practice, successive items soon interfere with one another and the supposedly well-learned information no longer can be retrieved. This kind of result emphasizes the importance of psychological research on learning, in which data on learning outcomes are collected systematically. An educational policy based totally on "common sense" is likely to fail because the feeling of learning satisfaction does not reliably correspond to the efficiency or successfulness of learning.
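The difference between the two kinds of practice can be made concrete with a small scheduling sketch; the problem types and counts below are arbitrary examples, not materials from any study discussed here.

    # Illustrative sketch: a massed (blocked) versus a spaced (interleaved) practice
    # schedule for several problem types. Types and counts are arbitrary examples.
    import random

    problem_types = ["addition", "subtraction", "multiplication"]
    problems_per_type = 4

    # Massed practice: all problems of one type are completed before the next type begins.
    massed_schedule = [t for t in problem_types for _ in range(problems_per_type)]

    # Spaced practice: the same problems are mixed so that each type recurs from time to time.
    spaced_schedule = massed_schedule.copy()
    random.shuffle(spaced_schedule)

    print("Massed:", massed_schedule)
    print("Spaced:", spaced_schedule)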

3.3. Some strengths and limits of attention and working memory

In contrast to the vast amount of information in memory, there is only a small amount that an individual can process fully at one time. This limit in attention was pointed out early in the history of experimental psychology by William James. At the beginning of the cognitive psychology movement, Donald Broadbent emphasized it by recounting numerous studies of selective attention in his book, Perception and communication (1958, New York: Pergamon Press). For example, a person exposed to multiple conversations at the same time (as at a cocktail party) generally can only follow the gist of one of those conversations at any moment.

It is not clear whether to count the attentional limit as a strength or weakness of the human information processing system. It is a weakness inasmuch as it would be helpful to be able to understand multiple conversations at once (or, say, to read multiple books or watch multiple athletic games at the same time). However, it is a strength inasmuch as it allows one to filter out all but the information channel of most relevance and to achieve a deeper, more complete understanding of that message than would otherwise be possible. Moreover, we are not entirely deaf or blind to what is going on in the unattended channels. An abrupt change in the voice quality or spoken pitch of a message, or any other abrupt physical change such as the appearance of a bright light, recruits attention temporarily away from what we were attending voluntarily and toward the changed item. Such abrupt changes may signal occurrences that often are of special ecological significance (such as the sudden appearance of a predator at the periphery of one's vision).

"Working memory" is a term that has been used to refer to the small amount of information that we keep readily available in mind while that information is of special use in the completion of a cognitive task such as solving a problem or comprehending a sentence. See, for example, Alan Baddeley's book, Working memory (1986, Oxford Psychology Series #11, Oxford, UK, Clarendon Press). One way to understand these limitations in information processing is to propose that attention is limited in the same way no matter whether it is applied to external stimuli or internal thoughts. When it is applied to internal thoughts, however, it would typically go by the name "working memory" rather than attention. Thus, devout attention to an external channel of stimuli would limit one's working memory, and the attempt to remember a list of items for a short time would limit one's processing of the fine points of an external message.

There are other limits to both perception and working memory, however, that lie outside of attention. Perception depends on processes that are automatic and do not require attention, sometimes because they seem hard-wired into the human mind since birth (like the ability to notice abrupt shifts in the physical properties of stimuli) and sometimes because the need for the involvement of attention has withered away with practice with the stimuli (like the ability to read many of the common words in one's language, although not the ability to put the words together into the correct meaningful statement). Thus, if one is presented with printed color names in ink that conflicts with the printed names, and if the task is to name the ink colors, the printed color names cannot be ignored and the task is very difficult compared to a situation in which non-color words or nonwords are presented. John Ridley Stroop showed this in a classic 1935 study.

To the extent that two external stimuli depend on some of the same types of automatic processing, they may interfere with one another. Norman and Bobrow (1975, Cognitive Psychology, 7, 44-64) referred to this as a data limitation, as opposed to a resource limitation, in which it is attention, rather than automatic processing, that is the bottleneck in processing. Similarly, in working memory, to the extent that various items to be remembered are similar in a type of feature that is important for memory maintenance, they will be harder to remember. It is much easier, for example, to remember the order of words in a list such as "hat, brick, ball," etc., than in a list such as "rat, bat, cat," etc., in which the phonological features are similar. Thus, one can make the case that both attention to external stimuli and working memory for previously-perceived items are limited both by a common resource (i.e., attention) and by limits in the abilities of particular automatic data-processing subsystems or modules in the brain to handle multiple inputs at once. Accordingly, Cowan (1995, Attention and memory: An integrated framework, Oxford Psychology Series #26, New York: Oxford University Press) presented a theoretical view in which working memory is composed of an embedded system: the subset of elements within the human memory system that is in a temporarily heightened state of activation and accessibility at any moment, and the subset of that activated memory that is in the focus of attention at any one moment.

It typically is difficult to know how much actually is held in the focus of attention because the capacity depends on how information is grouped together or "chunked." The important role of chunking was pointed out in a classic article by George Miller (1956, Psychological Review, 63, 81-97) entitled, "The magical number seven, plus or minus two: Some limits on our capacity for processing information." According to the principle of chunking, if one knew and recognized the American acronyms IBM, FBI, and CIA, then one could easily recall the letter string IBM-FBI-CIA as three independent chunks rather than nine. Given that we ordinarily don't know how individuals are chunking incoming stimuli, it is difficult to measure the limit on working memory capacity.
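A small sketch can illustrate the arithmetic of chunking; the acronym list is an arbitrary example and the counting rule is a simplification chosen here, not a measurement procedure from the literature.

    # Illustrative sketch: counting chunks when familiar acronyms can serve as single units.
    known_chunks = {"IBM", "FBI", "CIA"}  # arbitrary example of well-learned groupings

    def count_chunks(letter_string: str) -> int:
        """Count one chunk per familiar acronym and one per leftover letter."""
        chunks = 0
        for group in letter_string.split("-"):
            if group in known_chunks:
                chunks += 1           # a familiar acronym is recalled as one chunk
            else:
                chunks += len(group)  # unfamiliar letters are counted individually
        return chunks

    print(count_chunks("IBM-FBI-CIA"))  # 3 chunks rather than 9 letters
    print(count_chunks("QZW-XKP-JVD"))  # 9 chunks: no familiar groupings available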

Cowan (in press, The magical number 4 in short-term memory: A reconsideration of mental storage capacity, Behavioral and Brain Sciences) claimed to find a way to estimate the capacity limit in certain circumstances. Many situations in the literature were examined in which chunking was presumably prevented (or limited severely, at least) because there were distracting elements of the task or the presentation of many stimuli at once. In such circumstances, adult subjects tended to recall about 3 or 4 items on the average, conforming to a view that had been espoused previously, on the basis of less evidence, by Donald Broadbent and several others. The 3 or 4 items presumably were separate chunks, and Miller's previous estimate of about 7 was taken to reflect the usual amount of chunking that people do when a serial list is presented; on average people form about 1 chunk for every 2 items, so that 7 items equals about 3.5 chunks on average. Cowan suggested that this capacity limit of 3 or 4 chunks reflects a limit in how much information can be held in the focus of attention.

The same limit should apply to perception and memory except that, in perceptual situations, it is often very difficult to count the chunks. In both perception and memory as well, there are types of information that can be processed automatically without a capacity limit in terms of chunks, although there are other limits in terms of decay and interference from other input.

The attention limit suggests the need to go sufficiently slowly when presenting important information to another person. However, the working memory limit suggests that some individuals will not easily comprehend particular sets of information regardless of how slowly the information is presented. On an optimistic note, the complexity of a problem can be made simpler if some concepts are committed to memorization, essentially forming larger chunks that can then be put together in working memory without exceeding capacity. However, this process depends on the people on both ends of the information exchange having the patience to go through such a process.

Concepts of attention and working memory both have been tapped as potential explanations for why some individuals are able to carry out a wide range of scholastic or academic tasks more successfully than others; that is, for a general factor or "g" factor in intelligence, reflecting the ability of individuals to learn complex new information. Many studies have shown that performance on such aptitude tasks is well correlated with a complex sort of working memory task. In the complex working memory tasks, both processing and storage of information are required at the same time. Daneman and Merikle (1996, Psychonomic Bulletin & Review, 3, 422-433) have reviewed many such tasks. In them, the subject carries out an operation (such as comprehension of a sentence or solution of an arithmetic problem) and then is to remember an item (a word at the end of the sentence or a word placed after the arithmetic problem). This process is repeated several times and then the subject is to recall the to-be-remembered items that had occurred at the end of each problem. The number of problem-final items that can be successfully recalled, following successful processing, is the working-memory span. Such complex tasks clearly do correlate with aptitude tests better than do simple memory span tests in which subjects must try simply to repeat a list of items, with no intervening processing tasks. The important question that remains is why these complex working-memory span tasks are successful at predicting intellectual aptitude in various tests.
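The trial structure of such a complex span task can be sketched as follows; an arithmetic-verification version is assumed here, and the items, words, and scoring are arbitrary simplifications rather than the procedure of any particular study.

    # Illustrative sketch of a complex working-memory span trial: the participant
    # alternates between a processing judgment and storing a word, then recalls the
    # stored words. Materials and scoring below are arbitrary simplifications.

    trials = [
        ("3 + 4 = 7 ?", True,  "hat"),
        ("6 - 2 = 3 ?", False, "brick"),
        ("5 + 1 = 6 ?", True,  "ball"),
    ]

    def score_block(responses, recalled):
        """responses: True/False processing answers; recalled: words reported at the end."""
        processing_correct = sum(
            resp == answer for (_, answer, _), resp in zip(trials, responses)
        )
        to_remember = [word for (_, _, word) in trials]
        storage_correct = sum(word in recalled for word in to_remember)
        return processing_correct, storage_correct

    # Example: all three processing judgments correct, two of the three words recalled.
    print(score_block([True, False, True], ["hat", "ball"]))  # -> (3, 2)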

Randall Engle and his colleagues have argued that the critical factor tapped in complex working-memory span tasks is how well an individual controls his or her attention. They have shown this (1999, Journal of Experimental Psychology: General, 128, 309-331) by separating the variability between individuals that is in common with simple span tasks (repetition of lists) and the variability that is unique to the more complex working-memory span task. Only the latter source of variability correlates well with the g factor common across several intelligence tests.

Although it seems reasonable that the control of attention is very much a part of intelligence and working-memory tasks, there is a remaining question as to how this control of attention operates in complex working-memory tasks. In several recent studies, Graham Hitch and John Towse have brought up a distinction between attention-sharing and attention-switching. One way in which attention could be controlled is that there could be a sharing of attention between the processing task and the maintenance in memory of the items to be recalled. This has been the original assumption in working-memory tasks. A different way in which attention could be controlled, however, is that there could be a focusing of attention on each processing problem and then a switch of attention to memorization of each item to be recalled, before attention is switched to the next processing problem. At least some of the time, attention-switching seems to be what takes place. One reason to believe this is that an attention-sharing process should predict that, as additional items to be recalled are added, they should interfere more with the processing task. Yet, reaction times for finishing each processing problem often seem not to get any longer as the trial goes on, in contrast to what the attention-sharing process predicts.

Beyond this question, there also is the question of why one person finds it easier to control attention than another person. There could be a mechanism that is specific to the control of attention, and that is superior in some individuals. However, it also is possible that the control of attention depends on other factors. If an individual is better at the processing task or the storage task that enters into a particular working memory task, he or she would have an advantage also when the two tasks are combined. If an individual can process information faster, he or she might be able to switch attention from processing to storage and back again efficiently. Indeed, Timothy Salthouse (1996, Psychological Review, 103, 403-428) has argued that the speed of processing is critical in understanding working memory ability. It also is possible that a larger memory capacity would allow the coordination of processing and storage tasks more successfully, no matter how the task is accomplished. If there is attention-sharing, it seems obvious that attention could be shared better with a higher capacity. If there is attention-switching, there might be less memory loss at the point of transition from processing to storage and back again. However, we do not know if there is enough individual variation in capacity to account for individual variation in the control of attention. It is possible that young children with a larger working-memory capacity are the ones who tend to develop superior attentional control when they get older. So there are many research issues remaining to be examined. A large part of the g factor is inherited (for example, making it more similar in identical twins than in nonidentical twins) and it makes sense that the part that is inherited is also a part that depends largely on working memory, which would then also be largely inherited. However, it is not yet clear from the literature in this area whether working memory has a high degree of heritability. So the issue of what skill or characteristic distinguishes better thinkers from poorer ones generally has not yet been worked out. We know that it has something to do with skills measured in working memory tasks but we are not yet sure which skills are fundamental within those tasks.

What is clear is that many important oversights in mental work occur as a result of working-memory limitations. Imagine that an air traffic controller suddenly is confronted with five or six aircraft all at the same time. It is unlikely that the controller would be able to consider the well-being of all of these aircraft at the same time. Working-memory limitations also are likely to contribute to mistakes when one is dealing with a complex instrumentation panel that requires that certain critical manipulations be made. There is a limit to how many operations can be held in mind at once. That limit may shrink when the equipment operator is tired or preoccupied.

Another problem is that routine activities are difficult to place in one's conscious mind. A benefit of establishing a routine is that one can carry out the activity automatically, without using up one's limited attention on the activity. That saves attention to be devoted to more creative endeavors. However, it comes with the drawback that one has to be vigilant if one wants to make sure that the action is not completed automatically, outside of one's attention. For example, a person driving a car from work to a neighbor's house that happens to be close to his or her own house is quite likely to forget to monitor that action and will accidentally drive part-way to his or her own house before realizing what has been done. William James pointed out a similar episode of his own in which he went upstairs to dress for dinner and unthinkingly put on his pajamas.

A related problem, often a more serious one, is that a person who has carried out an important action automatically may have no clear memory of having carried it out. For example, people often return home because they are not sure if they turned off the stove. Some individuals may be inclined to assume that they have carried out an action automatically when in fact they may not have; the automatic routine may have failed. This, of course, can contribute to momentous accidents such as airplane crashes and nuclear power plant accidents.

Individuals with low working memory may be more distractible than individuals with higher working memory. It is not entirely clear whether higher distractibility is completely a shortcoming or if it is a different style of cognition that is advantageous for some purposes. Consider the case of one of the first-studied and best-known phenomena in cognitive psychology, the "cocktail party phenomenon" studied by several individuals, including Colin Cherry and Neville Moray, in the 1950s. In an environment with multiple stimuli, such as a cocktail party, it is possible for an individual to focus on one channel of stimulation (e.g., one human voice, probably along with accompanying hand gestures) at the expense of other channels (other voices, visual gestures made by various individuals, and so on). It is impossible for several channels to be fully processed at once. Typically, individuals remain totally ignorant of what is being said in all but the single, attended message. On the other hand, certain aspects of ignored channels still do get processed. If an individual who was not the focus of one's attention at the cocktail party suddenly spoke much louder or squealed with delight, that would capture one's attention. If an individual standing behind you at a cocktail party mentioned your name, that sometimes (but not always) would be noticed as well. Moray, and a more recent study by Noelle Wood and colleagues, found that about a third of subjects notice their names in a situation in which a message presented to one ear (through headphones) was to be repeated and a different message, presented concurrently to the other ear, was to be ignored. At one point, unbeknownst to the subject, the unattended message contained his or her first name. Although the name was sometimes noticed, individuals almost never noticed other people's names in the message. Subjects who noticed their own name temporarily hesitated or made mistakes in the attended speech-repetition task. Thus, it seems that a stimulus of special relevance (one's own name) sometimes can be processed even without the benefit of attention, and that it recruits attention as a result.

R. A. Conway, N. Cowan, and M.F. Bunting (in press, The cocktail party phenomenon revisited: The importance of working memory capacity, Psychonomic Bulletin & Review) have carried out a study on the cocktail party phenomenon that helps to show that distractibility is related to working memory. In a selective listening task modeled after the ones of Neville Moray and of Noelle Wood and colleagues, individuals with a good working memory (in the highest quartile of the subject sample in working memory) noticed their names only 20% of the time. In contrast, those with a poor working memory (in the lowest quartile) noticed their names 65% of the time. They did so, however, at a slight cost. The subjects who noticed their names made errors or hesitations in their attended task (repetition of a message) immediately after the name was presented, as their attention was recruited to the message containing the name and away from the message that was supposed to be attended. High working-memory subjects usually avoided that problem and performed more reliably on the assigned task.
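In the spirit of that quartile comparison, a minimal analysis sketch might look like the following; the participant records are arbitrary example inputs, not data from the study.

    # Illustrative sketch: name-detection rates for high- versus low-working-memory
    # groups. The records below are arbitrary example inputs, not data from the study.

    participants = [
        {"wm_quartile": "high", "noticed_name": False},
        {"wm_quartile": "high", "noticed_name": True},
        {"wm_quartile": "low",  "noticed_name": True},
        {"wm_quartile": "low",  "noticed_name": True},
        {"wm_quartile": "low",  "noticed_name": False},
    ]

    def detection_rate(group):
        members = [p for p in participants if p["wm_quartile"] == group]
        return sum(p["noticed_name"] for p in members) / len(members)

    print("High-span detection rate:", detection_rate("high"))
    print("Low-span detection rate: ", detection_rate("low"))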

Perhaps there is a type of job in which distractibility is a good thing rather than a bad thing. It might allow an individual to do a job while also noticing an unexpected event in the environment. Whether high working memory is always a strength or partly a matter of cognitive style has not often been debated; investigators typically assume that it is always a strength, but that is not clear.

The basis of the stronger attentional control in high working-memory individuals also remains unclear. In the study of Conway et al., the message to be attended was presented to the right ear, sending it predominantly to the left hemisphere of the brain, which in most individuals is specialized for language. There is some indication from other work that high-span individuals may not be able to control attention better than low-span individuals when the task requires that the right hemisphere be the one predominantly receiving the message to be repeated. It may be that high-span individuals have a particularly efficient neural circuit for the processing of speech by the left hemisphere, which may make it easier to ignore extraneous verbal material presented to the right hemisphere. Thus, better control of attention may, to some degree at least, be an effect of efficient processing rather than the basic cause of efficient processing.

Alan Baddeley's research has shown that there are other processes, not so dependent on attention, that distinguish between more efficient and less efficient processors in simple tasks. For example, in serial recall of a spoken list of words, people seem able to recall lists that contain about as many words as they can say quickly in about 2 seconds. This is attributed to a rehearsal process in which items are refreshed in short-term memory before they can fade away. People who can speak more quickly presumably can rehearse more quickly also, and therefore can rehearse more items before they fade. Although this specific explanation is open to question, the fascinating correlation between the rate of speaking and memory for lists shows that there are some important individual differences in automatic processing that make some people better suited for certain tasks than others.
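To make the arithmetic behind this rehearsal account concrete, the following is a minimal sketch (not taken from the chapter or from Baddeley's work): it simply treats predicted span as the number of words a person can articulate in roughly 2 seconds. The speech rates used are invented, illustrative values, not measured data.

# Minimal sketch of the rehearsal-loop arithmetic described above (illustrative only).
REHEARSAL_WINDOW_S = 2.0  # approximate time before an unrehearsed item fades

def predicted_span(words_per_second: float, window_s: float = REHEARSAL_WINDOW_S) -> float:
    """Predicted number of list words recallable under the simple rehearsal account."""
    return words_per_second * window_s

if __name__ == "__main__":
    # Hypothetical articulation rates for slow, average, and fast speakers.
    for label, rate in [("slow speaker", 1.5), ("average speaker", 2.5), ("fast speaker", 3.5)]:
        print(f"{label}: about {predicted_span(rate):.1f} words")

On this simple account, a speaker who articulates words faster is predicted to recall proportionally longer lists, which is the correlation the paragraph above describes.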

Personnel management may be influenced by a consideration of skills such as those we have been discussing. It presumably requires weighing two factors: which jobs are best accomplished by each available person, and which people are best suited for a particular job. Because of the g factor in intelligence, some people (those with high g) would be better suited than certain other people for a wide array of jobs, but we cannot expect high-g individuals (or those with high working memory) to carry out all of the jobs that need doing. There will thus be some compromise in deciding which people are assigned to which jobs. An ameliorating factor is that intelligence and preferences also are multidimensional: Some people are better suited to one type of task whereas other people are better suited to another type of task. Perhaps there are even important types of task for which low working-memory individuals, or those with high distractibility, excel. To the extent that we cannot change individuals' working memory, we should figure out what those jobs are. Perhaps, for example, some low-span individuals (presumably high in distractibility) excel at noticing untoward events in the environment, such as the beginning of a forest fire that a park ranger might notice.
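One standard way to formalize the kind of matching described above, though it is not discussed in this chapter, is the assignment problem, which can be solved with the Hungarian algorithm. The sketch below uses scipy.optimize.linear_sum_assignment; the people, jobs, and suitability scores are entirely hypothetical and are chosen only to illustrate the idea that a low-span person might be the best match for a vigilance-type job.

# Hypothetical sketch: job placement as an assignment problem (illustrative values only).
import numpy as np
from scipy.optimize import linear_sum_assignment

people = ["high-span analyst", "low-span monitor", "generalist"]
jobs = ["air-traffic control", "wildfire lookout", "data entry"]

# Invented suitability scores (higher = better fit); note the low-span person's
# relative strength on the monitoring job, per the speculation in the text.
suitability = np.array([
    [9, 4, 7],
    [3, 8, 5],
    [6, 6, 6],
])

# linear_sum_assignment minimizes total cost, so negate the scores to maximize suitability.
rows, cols = linear_sum_assignment(-suitability)
for r, c in zip(rows, cols):
    print(f"{people[r]} -> {jobs[c]} (score {suitability[r, c]})")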

3.4. Some strengths and limits of reasoning, problem-solving, and decision-making

It is well known that there are limits in the ability of humans to come up with logical solutions to problems, especially if those problems are stated in an abstract manner. I will touch on these limitations only briefly. Some such limitations occur because the individual does not have enough working memory to keep in mind all parts of a problem at once, which makes the correct solution impossible to calculate. Daniel Kahneman, Amos Tversky, and others have demonstrated that in such situations, people often unknowingly use simplifying rules of thumb, or heuristics, that serve as shortcuts in the thought process. These heuristics usually yield useful answers, but they can be fooled. One heuristic that is not foolproof is to take arguments that are stated in grammatically similar terms and to treat them as logically similar. For example, evaluate the validity of the following line of logical reasoning, or syllogism: "All A are B, and all B are C. Therefore, all A are C." This sounds right, and it is valid. Now how about the following one: "No A are B, and no B are C. Therefore, no A are C." This sounds good but is completely invalid. Consider it in a concrete context: "No men are women, and no women are males. Therefore, no men are males." An invalid syllogism that is even more difficult to detect is, "Some A are B and some B are C. Therefore, some A are C." This need not be the case (e.g., it is false in, "Some men are soldiers and some soldiers are women. Therefore, some men are women"). If one is listening with only part of one's attention or one is not expecting falsehoods, the incorrect arguments may pass unnoticed. On a more positive note, the heuristics usually allow easy comprehension of the types of arguments that are most often made in daily life, in familiar contexts.
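The difference between the valid and invalid forms above can be checked mechanically. The following small sketch, which is not part of the chapter, simply enumerates every way of assigning a tiny universe of elements to the sets A, B, and C, and asks whether the premises can all be true while the conclusion is false; the set names and functions are invented for illustration.

# Brute-force check of the syllogism forms discussed above (illustrative sketch).
from itertools import product

UNIVERSE = range(3)  # three elements suffice to expose the counterexamples in the text

def models():
    """Yield every possible assignment of the universe's elements to sets A, B, and C."""
    for memb in product(product([False, True], repeat=3), repeat=len(UNIVERSE)):
        yield ({x for x in UNIVERSE if memb[x][0]},
               {x for x in UNIVERSE if memb[x][1]},
               {x for x in UNIVERSE if memb[x][2]})

def all_are(X, Y): return X <= Y        # "All X are Y"
def no_are(X, Y): return not (X & Y)    # "No X are Y"
def some_are(X, Y): return bool(X & Y)  # "Some X are Y"

def valid(premises, conclusion):
    """A form is valid if the conclusion holds in every model in which all premises hold."""
    return all(conclusion(A, B, C)
               for A, B, C in models()
               if all(p(A, B, C) for p in premises))

# "All A are B, all B are C; therefore all A are C" -- valid.
print(valid([lambda A, B, C: all_are(A, B), lambda A, B, C: all_are(B, C)],
            lambda A, B, C: all_are(A, C)))   # True
# "No A are B, no B are C; therefore no A are C" -- invalid.
print(valid([lambda A, B, C: no_are(A, B), lambda A, B, C: no_are(B, C)],
            lambda A, B, C: no_are(A, C)))    # False
# "Some A are B, some B are C; therefore some A are C" -- invalid.
print(valid([lambda A, B, C: some_are(A, B), lambda A, B, C: some_are(B, C)],
            lambda A, B, C: some_are(A, C)))  # False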

Often, the way in which a problem is expressed influences the type of judgment that an individual gives. For example, in one type of problem, imagine that you are on the way to buy tickets to a show (for $30) and you lose the money before you buy the tickets. You have to return to the bank for more money; how much are you now willing to pay for the tickets? This may seem like a silly question, as the tickets are still worth $30. Now, however, suppose instead that you buy the tickets and lose them. How much are you willing to pay to buy them again? Although logically it is the same question, inasmuch as you have lost something worth $30, subjects who are asked this question generally are more hesitant to say that they would pay the same amount again. They feel as if they would be paying a total of $60 into an account that is of use only for this show, and therefore that they are overpaying for a single purpose. Salespeople often take advantage of this illogical aspect of human financial thinking. Reducing the price of a car by $500 is less appealing to the potential customer than charging the full amount but then offering a $500 rebate in cash because, in the latter case, the $500 seems more salient and separate.

It seems fair to say that we believe we are being logical when, more often than we would like to admit, we are being mentally lazy or are controlled by emotions rather than reasoning. Lines of logical argument that benefit us personally seem more logical than those that do not. There is probably no way to change this basic aspect of human behavior, but the world might be a better place if such limitations were more widely understood.

Another conclusion that can be drawn is that reasoning is done with less effort when the reasoning is about a familiar situation. Unfortunately, this also tends to condemn individuals to reason in ways that match what they have done in the past, making it difficult to change opinions on the basis of new evidence. To the extent that people can come to appreciate how difficult it is to draw valid conclusions from novel material, it may be possible to make people more open to logical persuasive arguments and more wary of appeals to stereotypes with which they already are familiar. Thus, there is the hope that general, liberal-arts education can promote social change in beneficial directions.

3.5. Some strengths and limits of human communication

Any problem or strength in any area of psychology is likely to be understood through the process of communication, by which the thoughts and feelings of one individual can become known, albeit imperfectly, to another individual. Therefore, communication will be considered in some detail within this chapter.

Spoken language is the natural, traditional form of symbolic communication that humans have used for many thousands of years. The communication is considered "symbolic" because the correspondences between the physical objects of communication and the meanings they represent are, for the most part, arbitrary. These objects include phonemes, spoken word forms, and aspects of word stress and intonation, such as the increase in pitch at the end of a question in English (but not in some languages, such as Finnish). Because of the symbolic nature of language, different languages can have very different words for the same concepts.

Another form of communication that is symbolic is sign language. There are many forms of communication that are not symbolic, such as the aspects of tone of voice and body language that are controlled by emotions, as well as acts that people carry out for purposes of communication (for instance, acts of vandalism meant to express hostility). Although nonsymbolic forms of communication sometimes can be interpreted more easily, symbolic communication is advantageous in that it allows ideas to be combined in new ways, assisting humans in societal innovations. However, there are many pitfalls in the production and perception of symbolic communications, which will be discussed. These take the form both of accidental miscommunications and of deliberate prevarications and lies.

Written communication supplements spoken language and, to a large extent, serves as a substitute for it when speech is impossible (for example, when the message sender and receiver are geographically separated). However, the written media bring different strengths and weaknesses to the process of communication. These media can convey ideas from one adult to another that young children cannot decode. Written messages can be more easily remembered because the words remain present after they are read and can be reviewed at the reader's discretion. They can be used in conjunction with a spatial layout such as a map, blueprint, or technical diagram, so that the spatial location of a word becomes important. They can be duplicated many times. As the media for writing have become less and less permanent, the process of duplication has become easier. Thus, when written communication required stone or clay tablets, only a few copies were made; when it required only hand inscription on parchment, several more copies could be made; when it required paper and a printing press, many more copies could be made; and in the modern world, when messages and images can be saved as microscopic magnetic or electronic traces in computer storage, electronic messages can be disseminated widely, at little cost. The unfortunate aspect of this ability is that miscommunications of some consequence can be spread widely as well. The same is true of other forms of mass communication, most notably television and radio.

An important difference between face-to-face communication (in speech or sign language) and most other types is that only face-to-face communication allows the message to be adjusted quickly on the basis of the speaker's assessment of what the listener is understanding. To some extent, electronic exchanges do allow this type of adjustment, the most commonly used examples being electronic mail and electronic messaging. Another such example, used in rare but important circumstances, is teleconferencing, in which a face-to-face interaction is simulated electronically through a long-distance hookup of the two or more parties who wish to communicate.

Typically, the psychology of communication can be treated as a part of cognitive psychology. However, it often is treated instead as a separate area. For example, under the current organization of study sections for the review of grant proposals to the National Institutes of Health in the United States, most of the cognitive psychology and neuroscience of normal individuals has been reviewed by one study section, whereas language and communication issues have been reviewed by a separate study section. This separate treatment probably stems at least in part from the scholastic history of the psychology of language. The field of linguistics was basically considered separate from psychology until Noam Chomsky and his colleagues argued, starting in the mid-1950s, that linguistic structures reveal regularities in language behavior and therefore should be counted among the behaviors of interest to the psychological sciences. Thus began the interdisciplinary field of psycholinguistics.

Human beings are well equipped to communicate the ideas that they need for a rich life. Other species communicate through various means, ranging from the dances of bees to the wagging of a dog's tail and the gesturing of chimpanzees. However, as far as we know, naturally occurring communication among non-human animals involves relations between the communication vehicle and the meaning being conveyed that are non-arbitrary. For example, a dog cannot decide to growl when happy and to wag its tail when angry. Although humans also have non-arbitrary means of communication (smiling, scowling, and various types of "body language," for example), we also have languages in which the signs are arbitrary. Thus, the word for a particular idea differs from one language to the next, and these aspects of language are determined by social convention. That is to say, human language includes symbols, which are elements that are meant to stand for other things. The use of symbolic communication allows new ideas to be expressed easily as new combinations of old symbols, along with the occasional introduction of new symbols (such as a new word) built upon the base of the already-agreed-upon symbols. Thus, as Noam Chomsky pointed out starting in the 1950s, when linguistics was first considered to be a branch of psychology dealing with an important human behavior, human language is creative and productive, allowing limitless new combinations of ideas.

There are, however, important limits to the use of human language. These limits come down largely to restrictions in the amount of computation that the speaker and listener are willing to devote to communication, as well as limits in the time available. A speaker may not realize that the listener knows less than the speaker does about a particular topic and therefore may make statements that the listener is unable to interpret correctly. A listener often may not wish to devote the effort necessary for a complete comprehension of what the speaker is saying, especially if the speaker has not succeeded in phrasing the speech very eloquently or accurately. Recent research, for example by Fernanda Ferreira, suggests that listeners often treat a sentence simply as a "bag of words" that they put together in a way that matches already-existing schemata rather than making a difficult attempt to put the words together in exactly the way that was intended. One can see this with four-year-old children, who may hear a sentence like "The horse was pushed by the cow" and misinterpret it to mean that the horse pushed the cow, rather than the other way around.

In adults' comprehension, even though adults supposedly know better, similar mistakes are made when the most plausible interpretation does not fit the sentence. When the correct interpretation of a sentence requires considerable computation, that computation often is not actually done, and the listener or reader instead relies on existing knowledge to help interpret what is being said. Usually, this works out all right. It tends to fail when what is being said differs from what might be expected. For example, adults often misjudge the sentence "The dog was bitten by the man" to be plausible, or misjudge "The man was bitten by the dog" to be implausible, when actually it is the first sentence that is less plausible. For some sentences, no consistent interpretation is reached. Consider, for example, the sentence, "While Anna dressed the baby played in the crib." Some readers interpret this sentence to mean that Anna dressed the baby and that the baby played in the crib. A careful reading shows that Anna dressing the baby is likely to be a misleading initial interpretation, formed before the sentence has been completely read; sentences that cause such misinterpretations are called garden-path sentences, leading the comprehender down a garden path, as it were. However, even after the sentence is completely read, a casual reader may never correct the misleading first impression and therefore may never form a coherent, correct interpretation of the sentence. Correct comprehension requires extra effort that listeners and readers do not always exert, in contrast to a less effortful style of comprehension that is generally sufficient but is based on the criterion of plausibility only.

In many such subtle instances, the speaker may fail to get the message across, either through a limitation in the speaker's skill or effort or a limitation in the listener's skill or effort. Often, these limitations come about partly because the speaker or listener is unaware of the deficiency. If one of them is unaware, it probably leads to frustration on the part of the other partner in the communication failure whereas, if both are unaware at once, it may simply lead to confusion or an uncorrected miscommunication.

Finally, miscommunication may come about purposely; language can be, and often is, used to lie or prevaricate. People differ in their ability to detect lies and a variety of skills can be used to detect them. One may focus intently on the facial expressions and voice intonation of the speaker, and these factors can give emotional clues that the speaker is lying. Alternatively, one may focus on the logic of what is being said and deduce that it would be impossible for the speaker's statements to be true (either because they are inconsistent with known facts or because they are inconsistent with other things that the speaker has said). Lying can be used for purposes that range from the socially malicious to the socially helpful, so it is not necessarily a weakness. One would suppose that the imperfect ability to detect lies is a weakness; however, it is possible that people would be too demoralized if they always detected lies and that, therefore, it is in their best interest not to detect some lies. Of course, if it were common knowledge that lies always were detected, people probably wouldn't lie. Perhaps the most ironic aspect of human communication is that many people habitually lie and apparently fool themselves into believing that other people do not notice, or that it matters less than it actually does if they are found out.

Thus, people may communicate either less or more than they think they are communicating. As in other areas of human cognition, people have only limited insight into their own successes and failures at communication.

We can understand more about the use of language by considering it at several levels of analysis. Spoken human language is generally considered to be formed from the combination of structure at five different levels of analysis: phonology, morphology, syntax, semantics, and pragmatics. Breakdowns in communication can occur at any of these levels of analysis.

Phonology encompasses the sound and pronunciation systems of a language. A phone is an individual speech sound that can be studied acoustically. A more critical concept for language, though, is the phoneme, which is a category of sounds making a meaningful difference within a language. For example, the phones [l] and [r] are different phonemes in English; pairs of words such as "lice" and "rice" have completely different meanings depending on which phoneme is used. In contrast, these same two phones represent the same phoneme in many Asian languages; they are called allophones. (Other phonemic contrasts are present in the Asian languages, or in other languages, but are absent in English.) Language differences in phonology go far to explain why it is difficult to communicate with a speaker who has learned a language only in adulthood. Such speakers typically find it difficult both to produce and to perceive phonemic differences that are not present in their own language because, early in development, we learn to group together sounds that are allophonic and ignore the distinctions between them, while becoming very sensitive to distinctions that are important in our own language. That tuning process appears to be difficult to modify in adulthood, resulting in foreign "accents."

Morphology is the way in which meaningful units, or morphemes, are put together to form words. Here there are important language differences in how much meaning is agglutinated together. In English, words typically consist of a root plus optional modifiers, which are added on (as in the word "houses," composed of "house" plus a plural morpheme) or mixed in (as in the word "geese," in which the word "goose" is modified to make it plural). There can be compound words such as "fireman" (= fire + man). However, some languages combine many more morphemes together to form words, including both objects and the actions associated with them. German tends in this direction, and some American Indian languages do so even more. It seems likely that such differences between languages often add a layer of difficulty to the task of learning a new language.

Syntax is the way in which words are put together to form sentences. Morphology and syntax are theoretically not as separate as they may at first seem because both are used together to modify meaning. For example, to create tenses of the verb "to be," one sometimes needs a single word (e.g., "is," "was") and one sometimes needs multiple words ("has been," "would have been"). Thus, morphology and syntax often are considered together under the heading of grammar.

There are enormous differences between the grammars of one language and another. Among the languages of the world, some require that words be in a fixed order to convey a certain meaning. Thus, in English, there are different meanings for the sentences "The boy kissed the girl" versus "The girl kissed the boy." The normal word order for basic sentences in English is subject, then verb, then object (SVO). Another common order is SOV; both allow the subject to come first. However, there also are languages in which the basic order is VSO or, occasionally, VOS; OVS is very rare, and there is perhaps no known example of OSV. The basic word orders can be conceived as resulting from a tendency to place S before O and a tendency to place V next to O, perhaps because of the logical connections between the types of concepts that these parts of speech ordinarily represent. There are still other languages (such as Japanese) in which one uses word endings to indicate what grammatical role a word fills (e.g., subject or object), which allows the word order to be left up to the discretion of the speaker. This flexibility in word order can then be used to create different emphases, which must be accomplished in strict-word-order languages with some other device (such as the passive voice in English; "The dog bit the man" highlights the dog whereas "The man was bitten by the dog" highlights the man). Of course, such differences between languages imply that learning a new language involves, to a considerable extent, learning a new way to think.

People typically focus on the meaning and implications of language and often are unaware of its grammar. Noam Chomsky and some of his colleagues pointed out regularities of English at a level that previously had not been noticed in any language. One interesting example is the word "been." Without careful analysis, it is not obvious that this word consists of two morphemes, be + en, in nearly the same way that the word "taken" can be decomposed into take + en. Many studies have shown that people remember primarily the gist (meaning) of language and quickly forget most of the details about the grammatical devices that were used to convey that gist.

Semantics refers to the meanings conveyed by language. Just as phonology is used as a vehicle to combine speech sounds to form grammatical utterances, grammar is used as a vehicle to convey meaning. However, there is not a simple, one-to-one correspondence between grammatical structures and meanings. One can create a "garden-path sentence" that may cause the listener to construct the wrong interpretation at first, requiring the listener to go back and modify the interpretation on the basis of information occurring later in the sentence. For example, consider the sentence, "The man walked the dog while riding in his golf cart." At first, one gets the impression of a man walking, which later must be revised on the basis of the rest of the sentence. The sentence "The woman looked up the street" can mean that the woman looked in a phone book, if the underlying grouping of words is "The woman (looked up) the street," with a verb phrase in parentheses followed by a direct object; or it can mean that the woman stood in the street and stared along it, if the underlying grouping of words is "The woman looked (up the street)," with a prepositional phrase in parentheses.

There is plenty of cause for miscommunication within a language and probably even more cause for it between languages. Early in computer history, a computer programmed to translate from English to Russian and back again took the phrase "The spirit is strong but the flesh is weak" and mistranslated it by taking the words too literally, producing "The vodka is good but the meat is rotten." Humorous examples such as this must be complemented with other examples that were far less humorous at the time, however. When Nikita Khrushchev stated to the United States, "We will bury you," the statement apparently was mistranslated; it was intended to indicate that the Soviet Union would outlast the United States, not that it would destroy the United States, as was commonly supposed by Americans.

Pragmatics is the use to which the meanings of language are put. A sentence may be used to convey information, make a request for an action to be carried out, threaten the listener, express emotion, and so on. Just as there is no one-to-one mapping between grammar and semantics, there are many different literal meanings by which a particular pragmatic aim can be carried out. To request that a window be opened one can say, "Would you open the window?", "It's hot in here", "Open the window!", "Oh, I guess I'm fine with the window closed" while actually implying the opposite, and so on. Also, just as different grammatical devices used for the same concept carry slightly different semantic emphases, different semantic means of achieving a pragmatic aim do so with different pragmatic implications (e.g., one can ask politely, subtly, brashly, sarcastically, and so on).

There is the possibility of using a meaning that, on the surface, is exactly the opposite of what one is trying to say (as in the sarcastic example above, "I guess I'm fine...") and the possibility that one's aim is to convey false information. Given that listeners understand the possibility of lying and misrepresentation, it often becomes a difficult task to convince the listener that one is honest, whether one really is or not; and it may not be possible to know if the listener is convinced. Given the possibility of both deliberate and accidental miscommunications, it seems clear that personal, national, and international relations face important challenges in communication.

So far, we mostly have discussed spoken communication. It is obvious that other forms of symbolic communication and mass communication have greatly changed the nature of human society. One change is that they allow a smaller number of individuals (those who control the means of communication) to have their thoughts received by large numbers of people. When using these forms of communication, compared to face-to-face spoken language, it is also easier to omit nonsymbolic aspects of communication, including tone of voice and sometimes-inadvertent gestures or "body language." These can reveal emotions and are partly out of the deliberate control of the speaker, so prevarication is easier to accomplish in the newer forms of communication.

Efficient, truthful communication is only part of what is important to improve the sustainability of the earth. Much of the rest, of course, is to have the right ideas to share and communicate. The forms of communication available to humans are changing so fast that there has not yet been time for the latest ones to be studied carefully. There are bound to be some consequences of new forms of communication that are difficult to anticipate. It seems doubtful that people understood in advance how important the printing press or radio would turn out to be in producing a world population with a large base of common scientific and cultural knowledge, what Marshall McLuhan called the global village. Electronic communication, also, probably is having unanticipated consequences. For example, in 1991, when hard-liners in the Communist Party of the Soviet Union attempted a coup to restore the old political order, they carried out the trusted routine of cutting off the traditional forms of electronic communication (including television and radio). However, unlike in earlier times, it was possible for people to communicate and organize in opposition to the coup using electronic mail and facsimile transmissions. It is also the case that, perhaps for the first time in history, the largest single source of printed information about any topic (the World Wide Web) is factually unreliable because there is little editorial control over the quality of its entries. The problems and successes of communication seem like reflections of the problems and successes of the human species.

3.6. Some strengths and limits of human emotions

Human emotions are striking in their ability to motivate behavior. Often, this motivation helps the individual's survival. For example, a feeling of romantic love may make an individual bond to a sexual partner who then provides various types of support and assists in procreation of the species. Such feelings help them protect, and provide for, their offspring. A feeling of anger or fear may help an individual take an action that protects his or her life.

Sometimes, though, feelings undeniably are too strong for the occasion or seem unhelpful. This may occur because the emotions come from situations other than the ones that the individual faces. Modern life includes many situations that, perhaps, are new to the process of evolution. A very strong fear response may be unhelpful if the individual is trying to get along with the boss at work or give a speech to a group of people. The emotion can be so strong that it prevents the individual from carrying out the desired activity efficiently; it can distract attention from performance. Habitual stress, anger, or anxiety lowers one's resistance to illness.

These strong emotions may have come about because a more dramatic response (fighting or running away) was more helpful in the typical situation faced by our primitive ancestors. An emotion also can be transferred from one situation to another. If an individual has strong negative feelings about his or her parents (e.g., has been abused by them), these feelings may transfer inappropriately. One's boss at work may be a perfectly gentle individual but may be in somewhat of a parental role, which may be enough to trigger emotions that originated with one's parents rather than with anything the boss did. The inappropriate or unhelpful interaction that results then may be compounded by other feelings that occur as one or both individuals fail to understand the source of the bizarre quality in the interaction.

Ironically, as strong as emotions are, we cannot be sure that we understand our own emotions. Our understanding of our emotions is an attribution process. The effects of a drug such as adrenaline, which is a strong physiological stimulant, depend on our knowledge of the source of the physiological arousal. If a subject in a social psychology experiment does not know that the drug caused the arousal, the arousal can lead to the experience of either a pleasant, euphoric state or a tense, angry state, depending on the mood exuded by a confederate in the experiment, as first shown by S. Schachter and J. Singer (1962, Cognitive, social, and physiological determinants of emotional state, Psychological Review, 69, 379-399). Subjects who were aware of the effects of the drug were not as susceptible to these effects. It appeared that the experience and display of emotion did not depend directly upon the physiological effects of the drug, but rather upon the subject's attribution of those effects as arising from the environmental stimulation. Thus, individuals do not seem to have a foolproof way to know what they are feeling apart from their observation and interpretation of their own physiological reactions. This finding has implications for daily life. A person who drinks too much coffee (containing caffeine, another stimulant) may be angrier than a person who does not drink coffee, without realizing that the coffee has contributed to the sense of anger and to angry responses. This is likely to be quite an important issue given the remarkably high proportion of the world's adult population that uses such stimulants daily.

Pleasant emotions are essential for good physical health, as Chapter 6.27.4.3 indicates. Yet, unpleasant emotions are essential if one is attempting to overcome an injustice or threat in the environment. Thus, negative emotions cannot safely be abolished from one's life. The art of healthy living probably involves learning how to balance positive and negative emotions so that the negative emotions serve one's purposes without causing too much damage to one's self or one's allies. The best balance must depend partly on what threats the individual faces and partly on how much the individual realistically could do to minimize those threats.

3.7. Some strengths and limits of human social cognition

One might view social cognition as an area in which general cognitive principles are applied to social situations. The cognitive principles (such as the use of schemas and rules of category formation) may be quite similar in social- and non-social situations. However, social situations bring with them a special set of motivations, rewards, and so on. For example, it is only in an interaction with another person that one would have a motivation to act in a way that conceals a true motive or misleads another individual.

Given the extensive treatment of social cognition in Section 6.27.6, it will be discussed here only briefly. On the side of strengths, it can be said that many or most people are excellent at perceiving social cues from other individuals, whether these cues be verbal, acoustic, facial/visual, tactile, or abstract (e.g., objects left as hints) in nature.

Regarding limits, one can make the general point that people often do not possess full insight into the reasons why they act as they do in social situations. A good example is the concept of cognitive dissonance, developed by Leon Festinger. The idea is that we feel uncomfortable if our actions do not match our beliefs and, consequently, we often change our beliefs to be more in line with our past actions. Festinger and others carried out experiments in which subjects were questioned on their beliefs about a controversial topic and were then paid to write an essay supporting a particular view. After writing the essay, they would be questioned again, and the general finding was that their beliefs shifted toward the view that they had been paid to espouse. The notion was that they experienced dissonance between the prior view and the view taken for cash. This dissonance caused anxiety and, to reduce the anxiety, they shifted their belief, thereby reducing the dissonance. In one study it was found that, if subjects were paid a large sum, the discrepancy between the two beliefs was easier to justify (the payment itself explained the behavior) and therefore produced less of a shift than if they were paid only a small amount. They were generally unaware of having shifted beliefs, at least judging from their answers to questions. The idea is that the subject who had been paid only a little might implicitly (not explicitly) reason something like this: "I am basically a good person and I would not lie for a small sum of money. It seems appealing to accept this new belief (as it makes me feel good to do so, for reasons I do not fully comprehend) so I will do so and even forget about the prior belief, which was apparently ill-thought-out."

Daryl Bem challenged this interpretation and suggested instead that people have no direct way to know what they believe. The alternative account states that they simply look at how they have behaved and assume that that is what they must believe. Both accounts hold that people do not have direct access to their full thought processes, but in Festinger's theory it is their own motivations that subjects do not understand. M.P. Zanna and J. Cooper (1974, Journal of Personality and Social Psychology, 29, 703-709) carried out an interesting experiment to distinguish between these views. They administered placebos (sugar pills) to subjects and led some of them to expect that the pill would cause them to feel aroused and tense. In support of Festinger, the subjects misled in this way did not show the cognitive-dissonance shift in their beliefs, whereas subjects who were not misled about the inactive nature of the pills did show the effect. The implication is that the misled subjects could form an account of the cause of their own anxiety that had nothing to do with the essays they wrote. Therefore, even though there were conflicts between their a priori views and the views they espoused for payment, and even though they were presumably anxious as a result of the dissonance, the source of the anxiety was misattributed to the pills and no shift in beliefs occurred. This area of research, and many others in social cognition, together illustrate that motivations and emotions often affect thoughts, even though subjects often have no knowledge of this and believe that the direction of causation is the opposite or that there is no connection. Pure, self-negating objectivity is certainly a myth.

All of this is not to say that the situation is hopeless. Some studies have shown that discrepancies between people's beliefs and actions, or hypocrisy, can be alleviated by pointing out the distinction forcefully. See, for example, J. Stone, A.W. Wiegand, J. Cooper, and E. Aronson (1997, Journal of Personality and Social Psychology, 72, 54-65). This seems to work only under certain circumstances, such as when subjects cannot avoid focusing their attention on both their past statements of belief and their present, discrepant behaviors. The amount of cognitive dissonance also is influenced by cultural factors, such as how much the culture links one's identity to individual actions as opposed to the actions of the larger group and one's role in it.

3.8. Developmental changes in strengths and weaknesses

All of the above discussions within this chapter have indicated that the mind is a marvelous mechanism. The brain provides reactions to the environment that often swiftly enact the appropriate thought processes and recruit the person's attention when necessary, as shown in Chapter 6.27.4.4. However, the discussion has pointed to numerous shortcomings in human mental processes, also. These shortcomings are related to the mind's use of heuristics that typically produce the right results with limited effort, but that fail in certain important situations. These failures can occur in perception, memory, attention, problem-solving, emotions, communication, and social interaction.

A common reason for the failure of the heuristics is that they do not fully take into account a point of view different from one's own. One cannot see that the landscape would look different from a different perspective, that one's memory of an event would be quite different if one had started out with different preconceptions, that one's opinions would be different if one had a different history of reinforcement, and so on.

This, too, is a key variable in the childhood development of human beings. A central aspect of perceptual and cognitive development pointed out by the famous developmental psychologist Jean Piaget was the progression from an egocentric mind to a less egocentric view. According to the theory, young infants are unable to keep in mind even the existence of the mother when she is not present. As children grow, they resolve one type of egocentrism but still are susceptible to other types. A three-year-old speaker may not understand that a listener does not know the key background information behind what is being said. A child of that age sometimes does not realize that their own sibling has a sibling (themselves). An older child (perhaps a 7-year-old) may understand that sort of thing but displays other types of egocentrism. The child may play a game and apply different rules to themselves versus others, perhaps without realizing it. An adolescent may no longer do that but may feel as if each new experience (such as falling in love) is unique to him- or herself. This feeling is well expressed by a line in a song by the Beatles regarding love: "This could only happen to me; can't you see; can't you see?" The journey away from egocentrism also is repeated in the history of science, which started with a belief in Earth as the center of the universe, moved to a belief in a non-central Earth within a fixed time and space, and then moved (with Albert Einstein) to a belief that even our measurements of time and space are relative to our point of view in the universe rather than stable across the universe.

From the previous discussion within this chapter, it is clear that developmental maturity is not a release from all forms of egocentrism. Time and again we see that adult human thought processes have limitations of which individuals are often unaware. Thus, there are forms of egocentrism in adulthood. The problems we face when we raise our children are therefore often similar to the problems we face when interacting with other adults. An understanding of this seems to underlie the simple book by Robert Fulghum, All I really need to know I learned in kindergarten: Uncommon thoughts on common things (1988, New York: Villard Books). It seems likely that the most important activity for the sustainable growth of the human race is education, and one of the most important aspects of education is coming to an understanding of respect for others; this, in turn, depends on the understanding that one's own personal point of view is limited and is not a dependable indication of what is true.

Bibliography for Further Reading

Baddeley, A.D. (1999). Essentials of human memory. Hove, UK: Psychology Press. [A good recent summary of research on human memory by a leading researcher.]

Baron, R.A., & Byrne, D. (2000). Social psychology. Boston: Allyn & Bacon. [A good recent textbook on social psychology.]

Borod, J.C. (Ed.) (2000). The neuropsychology of emotion. New York: Oxford University Press. [A good recent account of how the brain handles emotions.]

Clark, H.H. (1996). Using language. Cambridge, UK: Cambridge University Press. [A good recent textbook on the psychology of language.]

Cowan, N. (1995). Attention and memory: An integrated framework. Oxford Psychology Series, No. 26. New York: Oxford University Press. [A book describing the author's theoretical view and reviewing literature on human attention and memory, explaining the relation between them.]

Cowan, N. (1999). The differential maturation of two processing rates related to digit span. Journal of Experimental Child Psychology, 72, 193-209. [A study indicating that multiple processing rates must be taken into account in order to understand thought processes.]

Cowan, N. (in press). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences. [A review of literature on limits in how much people can keep in mind at one time.]

Engle, R.W., Tuholski, S.W., Laughlin, J.E., & Conway, A.R.A. (1999). Working memory, short-term memory, and general fluid intelligence: A latent-variable approach. Journal of Experimental Psychology: General, 128, 309-331. [A recent study showing how working memory tasks can be used to predict intelligence.]

Ferreira, F., Bailey, K.G.D., & Ferraro, V. (in press). Good enough representations in language comprehension. Current Directions in Psychological Science. [A good review of studies showing that people often do not completely process information in sentences.]

Gardner, H. (1985). The mind's new science: A history of the cognitive revolution. New York: Basic Books. [A good history of the issues and methods of cognitive psychology.]

Gazzaniga, M.S. (Ed.) (2000). The new cognitive neurosciences (2nd ed.). Cambridge, MA: The MIT Press. [A good recent textbook on the relation between the mind and brain.]

Goldstein, E.B. (1999). Sensation & perception. (Fifth edition). Pacific Grove, CA: Brooks/Cole. [A good recent textbook on the way in which stimuli are perceived.]

Hitch, G.J., Towse, J.N., & Hutton, U. (in press). What limits children's working memory span? Theoretical accounts and applications for scholastic development. Journal of Experimental Psychology: General. [A study showing that working memory tasks involve more than just one strategy and may involve switching of attention, not just sharing of attention between tasks.]

Kunda, Z. (1999). Social cognition: Making sense of people. Cambridge, MA: MIT Press. [A good, detailed review of studies indicating that people often are unaware of their motives.]

Lane, R.D., & Nadel, L. (Eds.) (2000). Cognitive neuroscience of emotion. New York: Oxford University Press. [A good review of the role of the brain in emotions.]

Miyake, A., & Shah, P., eds. (1999). Models of Working Memory: Mechanisms of active maintenance and executive control. Cambridge, U.K.: Cambridge University Press. [A set of chapters explaining most of the main theories of working memory and attention.]

Salthouse, T.A. (1996). The processing-speed theory of adult age differences in cognition. Psychological Review, 103, 403-428. [A good summary of research indicating that the speed of processing is important for good-quality information processing by humans.]

Schachtman, T.R., & Reed, P. (1998). Optimization: Some factors that facilitate and hinder optimal performance in animals and humans. In W. O'Donohue (Ed.), Learning and behavior therapy. Boston, MA: Allyn and Bacon. [A broad consideration of human cognitive strengths and weaknesses that emphasizes principles that apply across species.]

Sternberg, R.J. (1999). Cognitive psychology. Fort Worth, TX: Harcourt Brace. [A good recent textbook of cognitive psychology, the science of the mind and behavior.]

