
New Ideas Psychol. Vol. 1, No. 3, pp. 263-279, 1983. 0732-118X/83 $3.00 + 0.00. Printed in Great Britain. © 1983 Pergamon Press Ltd

ARTIFICIAL INTELLIGENCE AND ANIMAL PSYCHOLOGY: A RESPONSE TO BODEN, MARGOLIS AND SLOMAN

GUY CELLÉRIER and ARIANE ETIENNE

FPSE, Université de Genève, 1211 Genève 4, Switzerland

Abstract -- We propose the elements of an epistemological framework in which to situate the fundamental problem of ascribing mentalistic concepts to animals and machines that is posed by Margaret Boden's, Aaron Sloman's and Joseph Margolis' discussion of artificial intelligence and animal psychology.

" . . . Thus the mobile robot S H A K E Y , while executing a plan for moving blocks from one room to another , asks itself at each step whether the plan . . . has produced the expected resul ts ." writes Marga re t Boden. But does S H A K E Y really plan? Does it plan " i n its h e a d " ? as children would ask.

Notice SHAKEY's "head" (if it really has one?) is a few rooms away from its sensori-motor "body". It is located in (or around?) the much larger computer that is currently running (being run by?) the ABSTRIPS program that executes the symbolic computations that its human programmers interpret as representing the planning activity. But does the computer really compute? Or should we say that it goes into electromagnetic states and transformations which we interpret, but it does not, as representing the planning activity? "(The bee's) ability . . . to store information about routes and to communicate this to others through its 'dance' on return to the hive," writes Aaron Sloman. But again, does the bee really store and communicate information about routes, or should we say the bees go into "bio-electro-proteic" state transitions that we interpret as representing, for us but not for the bees, the storage and communication of messages about routes?

In brief, as Joseph Margolis aptly puts it, is "the ascription of mental states to animals" (and here we add: to machines) epistemologically legitimate, or, if one prefers, is it "scientific"? If we understand him correctly, his answer is yes, but on certain rather strict conditions. Elsewhere Boden [1], Sloman [2], and we [3] have argued in the same direction, though we would differ with Margolis on the nature of some of the conditions. We would like to propose here the elements of an epistemological framework in which to discuss the question of the ascription of computational concepts to animals and machines.

To do this we first examine the consequences of a negative answer to the preceding question. This implies that we must refuse to ascribe to entities other than human cognitive systems any of the properties such systems have evolved to describe themselves. What epistemological alternatives does this refusal leave us? One of those, as Margolis points out, "depends on its being possible to establish any form of eliminative materialism or physicalism, which would ultimately justify treating 'representations', 'symbols', 'languages' and the rest as expendable, as (at best) mere 'façons de parler'". This would not only mean that SHAKEY does not really plan and the bee does not really represent and communicate, but that we do not really think. The concepts and vocabulary we use to describe our subjective experience become a sort of pre-scientific mythological framework, perhaps a useful shorthand, that allows us to refer to unknown entities and phenomena that we would describe in physico-chemical terms if we knew better. Thought, consciousness, meaning etc. now have no objective existence; they are either "epiphenomena" or subjective illusions. This reduction has a strange self-referential antireductionist twist to it: if consciousness does not really exist, then who, what metaphysical singularity, is left to entertain the subjective illusions, and specifically the illusions and doubts about its own nature and existence? The reductionist himself vanishes into his epistemologically self-generated metaphysical vacuum, together with his now self-avowedly meaningless theory.

Commentary on M. A. Boden (1983) Artificial intelligence and animal psychology, Vol. 1, No. 1, pp. 11-33; J. Margolis (1983) Commentary on M. A. Boden, Artificial intelligence and animal psychology, Vol. 1, No. 1, pp. 35-40; and A. Sloman (1983) Commentary on M. A. Boden, Artificial intelligence and animal psychology, Vol. 1, No. 1, pp. 41-50.

ASCRIPTIONS IN PHYSICS

Let us first point out that if the aim of the reductionist programme is greater "scientific objectivity", then even if it succeeds, it fails. This is because all theories, be they mentalistic or materialistic, psychological or physical, are man- and mind-made. Thus they can only ascribe objective existence in space and time, outside the mind, to symbolic entities that function and evolve within the human mind. So when we describe SHAKEY and the bees, etc., within our "mentalistic" framework, we must ask whether they really plan and symbolize etc. But when we describe them within our "materialistic" framework in terms of physical concepts like atoms and macromolecules, electrons and ions, forces and energies, we must also ask of all these theoretical entities that exist in our minds whether they also exist "out there", beyond our sensorimotor interface with our environment and the technological prosthetic devices like microscopes and accelerators that we build to extend the scope of our observations and interventions. And the answer is the same in both cases: the theoretical entities in our models are ascribed to "reality" under certain consensual conditions. So materialistic descriptions and ascriptions are not intrinsically more objective than mentalistic ones; they are equally consensual. Therefore even if a reductionistic programme succeeded, it would fail to be more objective than a mentalistic one.

Our next point is that all allegedly non-mentalistic accounts of cognitive phenomena must fail in that they will be ultimately mentalistic. By this we mean the phenomena they will account for will necessarily be defined as the extension of some perhaps complex reformulation, reconfiguration and refinement of concepts that the human cognitive system has evolved to describe itself. This descriptive conceptual framework covers the intensional subjective phenomenology of its internal mental activity, the biological functions that sustain and embody it, its external finalized or purposive sensori-motor activities, and the functional artifacts, instruments and machines that result from these activities.

We submit that any account of cognitive, biological, and in general symbolic computational phenomena that does not use this descriptive framework must fail because it cannot discern its objects: computational systems, natural or artificial, within the continual flow of physico-chemical phenomena on the planet. The basic reason for this, stated at the highest level of generality, is that material phenomena that realize a computation are not physically distinguishable from those that do not. This has been observed and expressed historically in all the domains of the life sciences that we are concerned with here.

ASCRIPTION IN BIOLOGY

"Chemistry is the same in vivo and in vitro", so there is no way to distinguish the organism from its environment on the basis of chemistry alone. The "semi-permanent" organisations of chemical interactions that realize the webs of interrelated functions that we ascribe to life (self repair and self maintenance functionally subordinated to differential self reproduction) are not different, at the chemical level of description, from random non-living combinations of chemical reactions. What distinguishes "living" combinations from non-living ones is that they realize "higher level laws" (defined on the laws of chemical combination) such as the "law of invariant reproduction", which are in no way laws of chemistry, and which are not derivable from them alone. Reproduction is not a law of organic chemistry, neither is it logically derivable from the laws of chemistry. More generally: there is no logical way to derive a system's function bottom-up from its physical structure. This function must be ascribed, top-down, from a higher level of description. These higher level descriptions allow us to define the objects of biology, as distinct from those of chemistry. They allow us to see some subsets of chemical reactions as living and reproducing organisms and to ascribe to them these higher level descriptions.

We may now ask where these descriptions originate. Our answer is: within the mentalistic framework wherein we describe ourselves as living and thinking mind-body entities. Thus the biological notion of reproduction is ultimately derived, through a sequence of psycho-genetic reelaborations, from the mentalistic "naive computational concepts" that we have evolved to describe our activities of copying symbols (texts, pictures etc.), imitating the activities of others, manufacturing multiple copies of objects, instruments, machines, monuments etc. Within biology, a particular reconfiguration of this web of concepts allows us to see ourselves as reproducing organisms. This allows us to elaborate a chain of progressively deeper functional and structural homologies that discerns in, and transfers to, our genes some of the properties of our macroscopic reproduction by ascribing to them a description in terms of "replication". From this viewpoint it is revealing of our unequally distributed naive realism that we are easily prompted to ask whether animals really think, as we do, but rarely to ask whether they are really alive, as we are. Or perhaps -- do they really reproduce -- or is this a mere façon de parler?

The preceding argument must now be generalized to the other levels of description of cognitive systems.

ASCRIPTION IN PSYCHOLOGY

Behaviourism, as an allegedly non-mentalistic alternative account of psychology, is an astonishingly transparent case of the use of a mentalistic concept to define its object. A behaviour can only be defined by discriminating it from the general flow of material interactions on the planet, in the same way that we discern the organism from its environment. There is no "objective" way to define a behaviour "bottom up" from the laws of physics, because the material phenomena that realize a behaviour are not physically different from those that do not. So behaviour must be defined "top down" from a level of description that describes organisations of physical phenomena. In this case, the higher level concept that allows this definition is the glaringly mentalistic one of "behaviour". Again it is the naive computational concept we entertain about our own behaviour, as an externalization of other internal mentalistic constructs (emotions, needs, desires, feelings, purposes, intentions, goals, plans, decisions, etc.), that allows us to ascribe behaviours to our conspecifics, and not to inanimate objects. And again, to the exact degree that behaviourism succeeds in discerning behaviours, it fails as a non-mentalistic theory. More generally, from this viewpoint, any psychological theory that does not endeavour to elucidate the mechanisms underlying our naive mentalistic phenomenology has in a literal sense lost its subject matter.

ASCRIPTION IN ETHOLOGY

The same observations apply mutatis mutandis to ethology. Here we ascribe behaviours to other animals by extending to them the concept of behaviour we ascribe to our conspecifics and ourselves. We start with a naive homology that ascribes to animals our subjective states and behavioural manifestations of hunger, thirst, aggression, fear etc. "My dog has a sense of humour" says the protoethologist. This homology is then restricted and refined, while it is extended to deeper levels. On the basis of our experience of the relations between vision and prehension, for instance, and the organs that for us sustain and produce these behaviours, we seek to extend the ascription to functionally homologous organs in animals. The homology is refined on the structural neuroanatomical dimension to allow us to ascribe a sensori-motor system to unicellular organisms with no nervous systems, no eyes but a photosensitive patch, no legs but motile cilia etc. It is refined on the psycho-functional dimension to allow for (hypothetically) non-conscious, central, symbolic manipulations within the sensori-motor behavioural loop. To do this, theoretical concepts like "innate releasing mechanisms" replace our conscious perceptual experience, "fixed action patterns" replace our voluntary activity, and physiologically based "motivational states" replace our conscious feelings, emotions, goals, etc.

ASCRIPTIONS IN NEUROPHYSIOLOGY

Finally the growing chain of homologies induced by the initial top level ascription of behaviour to animals reaches the level of neurophysiology. Here again we must point out that there is no "objective" way of defining the object of neurophysiology "bottom up" from our knowledge of chemistry and physics alone. Neurons are defined functionally, top down, from the initial behavioural ascription. Taken in isolation a neuron has no function; worse, it is not even justified to describe it as a physical entity, a figure distinct from the physico-chemical background. It is only when we have strong reasons to believe so-called "bioelectrical" activities at the cellular level of description participate in the production of wing movements in an insect, for instance, that we describe the physical structure responsible for this as a neural cell, and ascribe it the function of a motor neuron. But there is no reason within physics for choosing this cellular level of description; that of molecules, atoms or even elementary or sub-elementary particles would also be physically correct. The reason that we define the neural cell (and today its intracellular functional subdescriptions) as the unit of description is the functional one that sub-elementary phenomena do not seem to have any direct motor effects on the wing. Notice that the same neuron, with the same bioelectrical activity, could in principle belong to an innate releasing mechanism, in which case we would ascribe it a sensory function, not a motor one. This is not as implausible as it sounds; after all, ordinary addition is used as a subunit in a variety of functionally different computations, from combinatorics to bookkeeping. Here again we conclude that the physical states and transformations that constitute the object of neurophysiology are defined "top down" by the function they realise.

ASCRIPTIONS IN A.I.

Let us start with a central thesis, which, as Margolis points out, has been expressed most effectively by Hilary Putnam: "everything is a Probabilistic Automaton under some Description." The formal framework of A.I. thus allows us to perceive and express an underlying unity behind the activities of genes, cells, neurons, organs, organisms and cognitive systems, that are the objects of separate branches of the life sciences. They are all particular realizations of abstract symbol manipulation machines. A.I. in this general sense is thus an attempt to articulate, systematize, and unify the variety of naive computational concepts that are the basis of ascription of function in the life sciences. The central unifying notion of A.I., that of symbolic computation, has its precursors in our preformal mentalistic concepts of computation. Their prototype may be found in activities like mental numeration and arithmetic. A direct homology extends the notion to external written computation in a notation system, which partially exteriorizes the "unwritten" and in part unconscious rules of numerical symbol manipulation. The homology extends to simple machines like the abacus, where part of the symbol structures and their manipulation rules are embodied in the device's mechanical structure. This progressive transfer of the computational process from the human computer's mind to an external device results in a variety of computing machines, from Pascal's adding machine to the contemporary electronic computer. The web of naive computational concepts is by no means restricted to numerical computation. Many of them cluster around the activities of symbolizing: drawing maps, plans, blueprints, writing and transcribing messages and texts, storing and filing them for retrieval in libraries etc. Simultaneously the concepts elaborated around the experiences of external symbol manipulation are "retro-ascribed" to the mind that originates them and its material substrate. Thus many of the intuitive models of psychology (internal representations as cognitive maps, memory as a filing system etc.) have obvious external origins.

In the computational framework the relation between hardware and software condenses these various ways of seeing physical systems as the material substrate of meaningful symbols and functional activities. Hardware is the set of physical parameters of a material system whose states and transformations realize, or code for, a software computation. Hardware is thus defined, top down, by the computation, and all the functional ascriptions in biology, ethology, etc. that we have examined reduce to the ascription of a computation to a physical substrate: this is the basic unifying ascription of A.I. It is important to notice that the concept of hardware is subordinate to that of software: there can exist no hardware without a software computation scheme that it embodies. Software defines hardware. So the status of hardware cannot be ascribed to systems like galaxies and meteorology, to which we do not ascribe a computational function. This also means that hardware is not equivalent to physics and chemistry: the hardware of a device is only the very small subset of physical properties that have a function in realizing a computation. Within limits, the property that computers have of giving off heat is not part of their hardware, because it plays no role in their computational function. When it does, however, this intrusion of a new hardware parameter is interpreted at the functional level as a dysfunction.

If there can be no hardware without software to define it, the converse is true: software computational procedures cannot exist independently of a physical substrate. These two basic postulates imply that we cannot ascribe computational properties to metaphysical systems like immaterial vital or spiritual principles, and we can only do so, as we shall argue, to organisms, organizations and the machines they build.

These postulates again originate in our informal experience of external symbol manipulation. The basic problem which technology solves again and again is that of externalizing one of our (mentalistic) functions and embodying it in a physical hardware. But this is precisely the technical counterpart of an ascription of function. The technical problem of building hardware to realize a given computation is thus the converse of the biological one, where the intuitively ascribed hardware is already built and the problem is to reconstruct a software to ascribe to it. This is where the basic indetermination of the hardware-software relation appears: the same function may be realized in many computationally different ways, and the same computation may be realized in many different hardwares, at the atomic, molecular, macromolecular or macroscopic levels (cf. atomic, quartz, and pendulum clocks). The converse is true: the same physical phenomenon (e.g. the rotation of a wheel) may be ascribed many different computations, each of which may be given a role in many different functions. So given a function it is impossible a priori to derive its hardware realization, and given a physical phenomenon it is impossible to derive the function it realizes. This is obvious at the level of ordinary means-ends coordinations; the same end may be attained by many different means, and conversely the same actions participate in many ends. de Saussure's arbitraire du signe extends this basic indetermination to symbols: the same sign may be ascribed different meanings, the same meaning different signs. The hardware-software relation thus condenses these interrelated common-sense observations.

Practical and technical experience thus shows that computations can be transferred outside the mind into organizations of actions that in turn may transfer them into the organization of inanimate material devices, and that they also can be transferred back to the mind when the device's activities are perceived and interpreted as symbol manipulations. Further, the fact that the same computation can be transferred onto a variety of different physical substrates allows computation to be seen as a new independent conceptual entity that is invariant over its physical realizations. We suggest that this experience induces us to see the two-way transfers of computation between the mind and external devices as not intrinsically different from the transfers that we can effect between external devices. This is the basis of what we have called the "retro-ascription" of computational concepts, constructed on these external transfers, to the mind itself. The mind-body relation is thus assimilated to a software-hardware relation between thought and neurophysiology, mediated by "many layers of 'virtual machine'", to use Sloman's concise formulation. This is the fundamental ascription that relates A.I. to psychology.

HEURISTIC AND REALISTIC ASCRIPTIONS

To recapitulate, the main trend of our argument is as follows. In the process of understanding and controlling ourselves, other human beings, then other organisms and the machines we build to augment our sensori-motor and cognitive systems, we ascribe to them intuitive mentalistic functional and semantic predicates. A.I. is a recent attempt to elaborate these intuitions into a less informal, unified, positive, systematic, intersubjective "scientific" form. The programmatic conjecture is that all the former intuitive predicates may be uniformly constructed in terms of the new computational ones. "We must distinguish between merely heuristic and (fully) realistic uses of A.I. in the empirical study of psychological states and processes" writes Margolis. We not only agree but would formulate the problem in the following more radical manner: is the ascription of the new computational predicates of A.I. in any way more legitimate than that of the former intuitive ones?

Taken in an absolute sense, no ascription is ever valid, all ascriptions are equally heuristic; but some are more equal than others. One of the unavoidable heuristics of limited human (as opposed to superhuman) understanding is the assimilation of the unknown to the known. This means that we see no possibility that "the study of animal intelligence and cognition", or of any other subject for that matter, "can be freed from an extended, altered, adjusted use of the categories of human intelligence and cognition" (Margolis). As long as we are stuck with the human (epistemic) condition, there is little hope in this direction. But even if some day, after "cracking the neural codes" of our layers of virtual machine, we evolve a machine-augmented conscious field, this will still be true of the categories of this new epistemic subject. Once we have assimilated an internal retinal display, say, to our concept of a cube, we then ascribe this cube an external objective existence in "reality", and behave accordingly. The result of our actions may support this ascription; we feel the expected cube, or we don't. Notice our sensory and motor channels could be plugged into a total simulator and we would be none the wiser; our "ontology" would be different for an outside observer, but not for us.

This heuristic is called naive realism. It has proved so successful over the eons of phylogenetic evolution that it is innately wired into the nervous system of most animals. They and human babies both react to certain inner sensory displays as if there were real moving objects "out there". From this heuristic viewpoint it is not surprising that interacting animals, such as predators and their prey, evolve reciprocally translatable inner descriptive frameworks that allow them to share partially the same "reality", though this does not make these descriptions more "ontologically true". To an observer with yet another descriptive framework they may well seem to share the same illusions. Of course, all this is also true for animals, such as human beings, that construct some of their own descriptive frameworks. We are phylogenetically and psychogenetically naive realistic machines with an inbuilt tendency to project our mental constructs outside our minds, or worse, as with the concept of self, we project a concept in a part of the mind onto the whole mind (including the projected part). So, valid or not, heuristic naive realistic ascriptions cannot be dispensed with. We may only strive to refine the conditions of ascription.

This is precisely what needs to be done for the new computational concepts, and we believe Margolis' various caveats about their indiscriminate or irrelevant use rightly point in this direction. An important one is the following, which concerns "the sense in which Putnam's thesis threatens to trivialize the use of A.I. in the study of animal and human psychology". It "is simply that it is always possible to provide a description of any system in terms of a machine table or of the processing of information, without that description's actually functioning in a genuine explanation of that system". This is not as bad as it seems. By machine table Margolis means Turing machine table. It is a matter of record that Turing invented Turing machines as a formalism for representing any computation that a human computer could execute. It turned out that this formalism is (perhaps) capable of representing any effective symbol manipulation process. So if we have a mathematical model of a physical process, say hydraulic flow, it can be translated into Turing's formalism. But this does not mean that we now have to ascribe this translation the new interpretation of representing a human computation. It can still be interpreted as representing hydraulic flow. A formal language is formal in part precisely because it can be given many interpretations, and is tied to none, not even to the one that motivated its invention.
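
As an aside, it may help to make "machine table" concrete. A Turing machine table is a finite lookup from (state, scanned symbol) to (symbol to write, head move, next state); nothing in the table itself says what the computation is "about". The sketch below is our own illustrative example, not anything from Margolis' or Turing's texts: a two-entry table that appends a mark to a block of 1s (unary increment), simulated in a few lines of Python.

```python
# Illustrative only: a two-entry Turing machine table and a tiny simulator.
# The table maps (state, symbol) -> (symbol to write, head move, next state).
TABLE = {
    ("scan", "1"): ("1", +1, "scan"),   # keep moving right over the 1s
    ("scan", "_"): ("1", +1, "halt"),   # first blank: write one more 1, stop
}

def run(tape_string, state="scan", head=0):
    tape = dict(enumerate(tape_string))          # sparse tape; "_" is blank
    while state != "halt":
        write, move, state = TABLE[(state, tape.get(head, "_"))]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape))

print(run("111"))   # prints "1111": unary 3 incremented to unary 4
```

The point of the toy matches the one in the text: the same table could equally be read as a bookkeeping rule for adding a unit to a tally; the formalism fixes the symbol manipulations, not their interpretation.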

However, what Putnam's thesis indicates, and what Margolis rightly emphasizes, is the need not to ascribe computational A.I. interpretations to some systems. But this means that we need epistemological criteria that discriminate physical information processing systems from physical systems in general, that discriminate organisms and brains from cyclones and stars, for instance. We have proposed elsewhere [3] a morphogenetic criterion. Briefly, we argue that the mechanism of cosmogenetic evolution of physical systems (including proto-biotic ones on our planet) is qualitatively different from that of the phylogenetic evolution of biological systems to which it gives rise. In the cosmogenetic sequence new forms are produced through the random interactions of elements under combinatorial laws that are successively made possible by the progressive dilution of energy caused by the expansion of the universe. This expansion is thus the primum mobile of physical evolution. In the biological sequence, forms are not produced by a chance interaction of components, but by an anti-chance construction procedure that assembles these components in a prescribed (pre-described) manner. The most frequent forms are not, as in physical evolution, those that are most produced, but those that are most reproduced. This differential reproduction is the primum mobile of biological evolution. We suggest that this profound difference in morphogenetic mechanism be used as an epistemological criterion to define physical information processing systems and thus to distinguish them, top down, from other physical systems. This criterion ascribes a "fully realistic" computational status to all the physical systems whose construction directly or indirectly originates in the phylogenetic sequence. This means genetic systems, organisms and organizations, together with the artifacts they construct, can be ascribed computational properties. This both coincides with our intuitive ascription of functional and semantic properties to such systems and allows for a unified computational treatment of the whole range of the life sciences. We suggest that this theoretical simplification and unification, which subsumes a wide range of different and separate accounts under a common more general one, makes A.I. ascriptions heuristically more powerful than the preceding, disparate, intuitive ascriptions it replaces.

Using Minsky's notion of model [4], Margaret Boden proposes an ascription criterion that enriches and refines the morphogenetic one [1]. To quote Minsky: "If a creature can answer a question about a hypothetical experiment without actually performing that experiment, then the answer must have been obtained from some submachine inside the creature". This submachine "acts like the environment, and so it has the character of a model . . . To the extent that the creature's actions affect the environment, this model of the world will need to include some representation of the creature itself." Margaret Boden proposes that what we have called the A.I. ascription be limited to systems that possess such models of their interactions with their environment, and that use them to perceive and interpret the environment and to guide their action in it.

ASCRIPTION OF COGNITION

Taken together, these two criteria suggest a phylogenetic approach to the problem of ascribing cognitive capacities to biological systems and to machines, and to the correlative one of ascribing them a range comprising "many different forms of consciousness" (Sloman) that may have arisen during evolution. And one should also include new forms that may arise during (accelerated) technological evolution. "A.I. modelling may be instructive in explaining animal behaviour even if animals are not ascribed psychological states . . . The explanatory uses of an information-processing model need not be confined at all to cognitive phenomena. For example it may be applied to the behaviour of DNA. But such a model is itself modelled on human language (propositionally for instance)" (Margolis). We agree, but again would situate these remarks in a more systematic theoretical frame of reference.

What we have named the basic programmatic conjecture of A.I. is that all the intuitive mentalistic concepts (like language, behaviour, psychological states, cognition etc.) that we have evolved to describe and control our own cognitive system can effectively be reconstructed in computational terms, and thus "incarnated", to use Margolis' term, in computer programs. This means that if we ascribe psychological states to an animal, and we propose a computationally isomorphic reconstruction of this animal's symbolic activity, then we must also ascribe, in the same "fully realistic" manner, the same set of potential psychological states to the program (or more likely to the web of simultaneously interacting programs) that incarnates this computational isomorph. But, as we pointed out in our discussion of Putnam's thesis, this does not mean that we must ascribe conscious states to any biological system's computational reconstruction just because it is computational. Applying our two ascription criteria precisely subdivides "the basic philosophical conundrum of how a phenomenal experience can possibly arise from an assemblage of material stuff (whether the stuff be protoplasm or anything else)" (Boden) into two subproblems. The first is: how can a non-conscious symbol-manipulation system arise from an assemblage of material stuff? We submit that the morphogenetic criterion assigns this problem to the theory of the chemical origins of life. Its original basic conjecture is that life is an inevitable consequence of the evolution of inanimate matter on this planet. It then endeavours to show how a class of "replicator" self-reproducing combinations can, by repeated re-elaboration of its products, ultimately be produced by a huge parallel combinatorial generator, realized by billions of components of carbon chemistry in solution in the primeval ocean, and driven during billions of years by solar and geothermal energy. Chemical replicators may have evolved that did not use the replication of genes. We will limit our considerations to those that do, and that eventually supplanted them. Before the first replicator was assembled, there was no living system on the planet. But by our criteria there was also no information processing system on the planet. This makes life an essentially computational phenomenon, not a physical one, and equates the origin of life with the origin of computational systems on the planet. A.I. began three billion years ago: its first experiment was in robotics. Though it does not satisfy the phylogenetic criterion, because it is assembled by a random set of combinations, the first replicator, or in von Neumann's sense, the first self-reproducing automaton, does satisfy the model criterion. To assemble a copy of itself, a replicator must select, i.e. perceptually recognize, its particular components in the vast warehouse of possible "organic" parts that float around it in the primeval ocean. It must then pick up a part and put it in its proper place on the construct it is assembling. This non-random sequence of chemical combinations is prescribed by the genes in today's organisms. This implies that the construction behaviour is pre-described in the genes. This pre-description satisfies Minsky's definition of a model: it contains a description of the replicator's interactions with its environment; it allows it to solve without trial and error the problem of building a replicator; it contains descriptions of objects in the environment (its "parts"), of its own actions (e.g. commands to "pick up a part") and partial descriptions of "itself" (specifying 'where' to insert a new part). Notice that the description must also describe how to produce a copy of itself.

As a model, it is a "self model" in that it describes how to produce a product that happens to be a structural isomorph of its producer, though the latter cannot notice this relation, and in fact does not notice when it is violated by errors that were made in the copying of its own new pseudo self-description.

This self model thus does not contain a structural description of its isomorph as a final product, as a simultaneous configuration of parts and relations, but a procedural one that describes the successive parts and relations that must be inserted to produce this simultaneous final configuration. In such a procedural model, the global structural description is "implicit": it is distributed over the set of local descriptions that specify what action to do next, under what condition. Satisfying the model criterion ascribes to genes the hardware-software relation. Some of their chemical parameters must now be assigned the function of being the hardware, the material substrate, of a symbol or "signifier". The rise of the replicator thus answers the question: how did a non-conscious symbol manipulation system arise from an assemblage of material stuff? Before there were "living" replicators there was no hardware, no software, no symbol and no information on the face of the planet -- a slightly deviant reformulation of the in principio erat verbum. The basic distinction between a program in its stored "passive" prescription form and its active ("incarnate"?) form running in a computer arises at the same time. The material interactions that constitute physical phenomena are not pre-described in some Laplacian memory; the universe's physical evolution is not seen as the execution of some pre-existing (metaphysical?) program. With replication, for the first time on the planet, the simultaneous structure of a material aggregate strictly correlates with the sequential structure of other, different material phenomena to produce material aggregates that are "arbitrarily related to the laws of physics", as Jacques Monod once put it. Some configurations, those of subatomic particles in atoms for instance, are specified by the laws of physics. Others, like the sequence of amino acids in a protein, are allowed by the laws of physics (obviously), but not specified by them; thus they are related to them by an "arbitrary" choice amongst a huge set of possibilities. Replication, then, introduces novel, arbitrarily pre-scripted processes and phenomena into the realm of physics on the planet. These pre-scripted processes are described in the software framework as those produced by the execution of a program. This we believe is also in support of choosing the model criterion to define the differences between physical symbol systems and non-symbolic ones. So, an information processing model "may be applied to the behaviour of DNA", but if it is "modelled on human language" (Margolis) this language now sounds very much like the one Turing must have used, but only when he was talking about his machines.

The second part of Boden's conundrum now becomes: how can a variety of conscious symbolic systems arise from various aggregations of non-conscious ones?

The morphogenetic criterion suggests that we look for a functional answer: how could such a conscious emergence in a cognitive system serve the greater reproductivity of the genes that support it? To do this, we briefly indicate how, from the interaction of components that only have the replicator property, a huge new parallel computer emerges that has the novel "emergent" problem solving and learning properties of a self-programming cognitive system. In a finite physical medium, interaction between replicators inevitably arises because their number grows without upper limit, while the medium stays finite. There inevitably comes a point where all replicators cannot replicate, because this would use more matter and energy than are available in the medium. Those that replicate faster or more economically, i.e. that have higher (re)productivity, now compete for the use of limited resources with those that have lower productivity. "Natural selection" through differential reproduction inevitably arises, as a new higher order property of the new system formed by the set of parallel interacting replicators. Correlatively, errors in reproduction are unavoidable in a physically realized replicator automaton, because its hardware will be subjected to the random physical interactions of its "spare parts" environment. Thus variant copies will necessarily emerge from replication, providing a continual source of novel, higher and lower productivity replicators to compete with the existing ones. The reciprocal coupling of variation and selection thus endows the higher order "genetic system" formed by the interacting replicators with new trial and error problem-solving functions, together with positive and negative reinforcement learning capacities that are the basic prerequisites for a cognitive system. We suggest this bottom-up construction of new computational functions from parallel interactions of entities that do not, in isolation, realize these functions is the basic evolutionary mechanism of cognitive systems, both at the biological level of genes and at the psychological level of acquired computational schemes. Further, we suggest that these "emergences" are related to inevitably recurring periods of computational overload in a physically realized cognitive system with limited computational resources. In the present, a genetic system continually accumulates reproductivity-enhancing inventions that it has evolved one after the other over long periods in the past. To make a functional reproducing organism, these simultaneously present inventions must be integrated or coordinated. Such an integrated whole becomes progressively more rigid (harder to modify or add to) because the number of possible interactions between its components, that may have to be changed when say one component is added, grows much faster than the number of its components. So a time must come when a genetic system's computational resources are exceeded by the accumulated complexity of its own production. New emergent breakthroughs are then needed. They may involve reorganization of the genetic system's computational structure itself, such as subdividing the organism into fewer, relatively modular, higher level functions (like respiration, circulation etc.) that may be reconfigured into new organisms by modifying only their less numerous higher level interactions. The invention of behaviour may have been another answer to the overload problem: organisms can now be improved by leaving their anatomical structure practically unmodified and changing only their behaviour.
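
A caricature may make the variation-selection coupling described above more tangible. The sketch below is purely illustrative and ours alone: the bit-string "replicators", the arbitrary productivity target and the population size are modelling assumptions, not anything the genetic system "knows". Imperfect copying supplies variants, a finite medium forces differential reproduction, and the population as a whole behaves like a trial-and-error search that none of its members performs individually.

```python
import random

# Illustrative toy only.  Each "replicator" is a bit-string standing in for a
# procedural description; copying is error-prone; the finite medium supports
# only POP_SIZE replicators, so higher-productivity variants crowd out others.
POP_SIZE, LENGTH = 30, 16
TARGET = [1] * LENGTH   # arbitrary stand-in for "what replicates best here"

def productivity(replicator):
    return sum(1 for a, b in zip(replicator, TARGET) if a == b)

def copy_with_errors(replicator, error_rate=0.05):
    return [bit ^ 1 if random.random() < error_rate else bit for bit in replicator]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP_SIZE)]
for generation in range(200):
    population.sort(key=productivity, reverse=True)      # differential reproduction
    survivors = population[: POP_SIZE // 2]
    population = survivors + [copy_with_errors(r) for r in survivors]

print(max(productivity(r) for r in population), "of", LENGTH)   # typically climbs to 16
```

No individual replicator searches for anything; the problem-solving function belongs to the higher order system formed by their parallel interaction.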

But again, the progressive accumulation of layer upon layer of more numerous, complex and diverse innate schemes of behaviour ultimately leads to integration problems and computational overload of the genetic system. At this point, superimposing a faster, neurally based problem solving system and delegating to it the computational load of constructing and coordinating behavioural schemes is a powerful form of computational reorganization, with far-reaching emergent consequences. A new "innate" behaviour needs generations of natural selection to evolve, whereas thousands of new "acquired" behaviours may be evolved in one generation, during one organism's life span. Of course, the drawback is that a psychological cognitive system also produces and accumulates complexity at a much faster rate, and must therefore, according to our conjecture, go into periodic Piagetian stage-like phases of reorganization.

At this point the Darwinian evolution of genes may offer some insights into the Piagetian evolution of schemes. A primitive cognitive system may look very much like a "scheme pool" of procedural models with gene-like, local trial and error, learning and problem solving capacities. When a given scheme seizes control of the organism's behaviour, it accomplishes some adaptive task. This functionally subdivides the organism into a population of specialized motivational suborganisms. Some of the power of division of labour, that is some coordination of these specialists, may initially be obtained through their parallel interaction on a common task. This is because their learning capacity allows them to adapt not only to the task itself, but to a new compound task formed by another specialist working on the initial task. However, coordination implies the subordination of the activities of two specialists to a common goal. But, in a procedural model, as we pointed out earlier, goals (as prescribed final results of a procedure) are broken up into local sub-descriptions and distributed over the various conditions that trigger the proper actions. So procedural specialists, of which our example of the replicator procedure is an instance, have no centralized representation of their intended results. Thus any coordination and division of labour amongst many schemes (each of which may be repeatedly activated to realize a potentially different subgoal each time) over long term projects, involving the sharing of an invariant common goal and the elaboration of a multiplicity of different transitory sub-goals, will be very hard to attain. For example: "It is hard to see how a collection of condition-action rules, where the conditions refer only to current states, with no representation of states to be achieved (or prevented), i.e. without explicit goals, could, for example, produce effective nest building behaviour in such varied conditions" (Sloman). How could a primitive, condition-action based cognitive system attain goal representation? We suggest it might do so by partial self-representation, that is by evolving amongst its reserve pool of unspecialized schemes a new caste of higher order descriptive specialists whose task would be to learn what the lower level schemes do, and produce simplified functional descriptions of the activities and interactions of these motivational suborganisms. To illustrate this idea, suppose we had a very simple condition-action scheme that attached to sequences like 3,9,4,2,0,1,8 and put into ascending order any couple [like (9,4)] it found in descending order. Its higher order description would ignore the many possible sequences of states and transformations, and mainly represent its empirical effects on this micro-environment, in terms say of an initial "disordered" state and a final normal or prescribed "ordered" state, related by an "act": a global motivated transformation produced by the organism's activity.
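
A minimal sketch of this example may be useful (ours, in present-day code rather than anything the authors implemented). The scheme itself is a single condition-action rule with no stored goal; the "ordered" end state is implicit in the fact that the rule eventually finds nothing left to fire on. The higher order descriptive specialist is stood in for by a function that ignores the intermediate states and reports only the global "act".

```python
# Condition-action scheme: put into ascending order any adjacent couple
# found in descending order.  No representation of the goal "sorted" exists.
def scheme_fires_once(seq):
    for i in range(len(seq) - 1):
        if seq[i] > seq[i + 1]:                      # condition: descending couple
            seq[i], seq[i + 1] = seq[i + 1], seq[i]  # action: swap it
            return True
    return False                                     # condition matches nowhere

def run_scheme(seq):
    while scheme_fires_once(seq):                    # the goal state is never named;
        pass                                         # it is simply where firing stops
    return seq

# Higher order description: ignore the step-by-step activity, report the "act".
def describe(before, after):
    return {"initial": "disordered" if before != sorted(before) else "ordered",
            "final": "ordered" if after == sorted(after) else "disordered",
            "act": "ordering"}

before = [3, 9, 4, 2, 0, 1, 8]
after = run_scheme(before.copy())
print(after)                      # [0, 1, 2, 3, 4, 8, 9]
print(describe(before, after))    # {'initial': 'disordered', 'final': 'ordered', 'act': 'ordering'}
```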

Further elaborations of such self-descriptive specialists might ultimately provide the semantic basis of a "mentalistic" framework. The descriptive schemes in such a network would allow the organism to interpret the activities of lower level schemes in terms of a self-ascribed subjective phenomenology of evaluations (emotions, drives, impulses, needs, feelings, interests etc.) and goals (intentions, purposes, ideals etc.), to combine self-descriptions in "hypothetico-deductive" simplified anticipatory self-simulations producing imaginary "plans", and eventually to intervene "top down" on the spontaneous reactive "bottom up" reorganizations of lower level schemes, by modulating, focussing and rechannelling their activities into the monitored execution of plans. In turn, the use of such a mentalistic framework to understand, predict and control itself would give rise to activities of self-interpretation, imagination, monitoring and intervention that might provide the phenomenological basis for the elaboration of concepts like awareness, consciousness, intention, attention, self-control and will.

We suspect, however, that another emergent factor is needed to drive a cognitive system toward the construction of such an interpretation and control level of virtual machine. This factor is the parallel emergence and evolution of other cognitive systems. Mechanisms that evolve for control, communication and division of labour among schemes inside a cognitive system may serve the dual purpose of understanding, predicting and controlling other cognitive systems in the cooperative execution of common tasks. Language may thus well have dual functional roots as a high level symbolic system for both intrasubjective and intersubjective control.

The emergent social strategies of tradition and communication that we have developed for sharing knowledge are devices that enhance the computational power of individual cognitive systems, and the main invention that has ensured our species' biological dominance (in the sense that we may invade the ecological niche of any other species). Tradition, the transmission of the knowledge of one generation to the next, both accumulates complexity and allows an individual cognitive system to use the cumulative efforts of preceding ones to solve, in one lifetime, problems that it could not have solved (let alone recognized as pertinent) otherwise. This diachronic division of labour is completed by the classical synchronic one: communication and cooperation have the same multiplication effect on the power of the individual cognitive system as tradition. If two specialists are each, say, ten times faster in their own specialization than the other one would be, they can ideally solve the same problem eleven times faster together than each could alone; cooperation is not wholly altruistic.
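
The "eleven times faster" figure can be recovered from a simple division-of-labour calculation, under the simplifying assumptions (ours, for illustration) that the problem splits into two equal specialist halves and that each worker proceeds at speed 10v on their own half but only v on the other's:

\[
T_{\mathrm{alone}} = \frac{1/2}{10v} + \frac{1/2}{v} = \frac{11}{20v},
\qquad
T_{\mathrm{together}} = \frac{1/2}{10v} = \frac{1}{20v},
\qquad
\frac{T_{\mathrm{alone}}}{T_{\mathrm{together}}} = 11 .
\]

Each partner thus gains far more from the exchange than the service rendered costs them, which is the sense in which cooperation is not wholly altruistic.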

Both these multiplicator strategies are based on the ascription to others of a subjective cognitive field similar to one's own that can be driven by language into similar state transitions of its interpretation and control virtual machine, which in turn may activate a similar range of behaviours. We suggest that the successful use of language that these social strategies induce, to generate and describe these similar state transitions and ranges of behaviour, has become one of the intuitive criteria that we use to ascribe consciousness to others. Another, and perhaps more basic one, is that of a similar range of reactions in similar circumstances. It is at the root of the powerful learning by imitation that we use to acquire know-how from our conspecifics, and we use it to extend our own conscious experience, and the behaviours that we relate to it, to the "perception" of situations and to the "behaviours" that we ascribe to other, non-conspecific, animals. Making these intuitive criteria explicit and theoretically systematizing them is thus one of the tasks in the study of animal awareness. However, we should look for deeper, functional criteria. We conjecture that the individually acquired adaptations of a primitive psychogenetic cognitive system to its environment may not be sufficient to drive it toward mentalization. This is because neither the laws of physics, nor the innate behavioural rules of the other cognitive systems that constitute the environment of an emerging psychogenetic system, significantly change over its life span. By contrast, the emerging parallel communicative interaction flowing in a population of psychogenetic systems institutes a totally new internal and external symbolic environment, whose laws of interaction constantly change as its components adapt to these changes, thereby initiating new changes in the rules of their activity. Mentalization may be a strategy specific to interacting evolving cognitive systems, allowing them to keep up with the evolution of their conspecifics, and with the correlative evolution of their own internal symbolic universe. This suggests that we should at first restrict our proposed ascriptions of mentalization to animals whose socio- and psychogenetic structure exhibits the same mutual agonistic relations. It has often been pointed out that the neural phenomena that underlie conscious and non-conscious activities are not physically different, so the difference should be sought at a higher level of description, in the intrinsic structure of the symbolic computations that these neural activities realize. Our criterion suggests that intrinsic structure and complexity may be the same in "conscious" and "non-conscious" symbol manipulations, and that the difference is extrinsic, i.e. it lies in the functional "self-interpretative" relation that some (not necessarily neuroanatomically bound) computations entertain with other computational activities in the brain. This also implies that what we consciously experience increases qualitatively and quantitatively with the construction of this self-interpretative network. Conscious experience is thus a construct. We learn to experience our inner reality in terms of the nuances and variety of emotion etc. just as we learn to experience our outer reality in terms of the nuances of colour or the conceptual variety of geometrical shapes. Learning to experience, i.e. recognize, evoke, and exteriorize "the subtle differences between reproach, resentment, and censure" (Boden) changes our cognitive field in the same way as learning to discriminate and produce chrome yellow or regular convex polygons. Emotion is not, as Boden and Sloman point out, divorced from cognition.

We suggest that although emotion may find its origin in the physiological "'array of values' representation" (Sloman) of the organism's homeostatic sensory-motor loops, this evaluative dimension is an essential constituent of any scheme. Any such self-modifying specialist needs some first order evaluation of success and failure on the task, and some second order evaluation of the growth of its expertise. So any cognitive activity simultaneously takes place on the evaluative dimensions of emotion.

It follows from our conjecture that an artificial intelligence would pass or fail the Turing test mainly on its mastery of this evaluative and telic dimension. It would thus need to recognize its human interlocutor as the same over a sequence of conversational episodes, that is, ascribe to him or her the mentalistic concept of "permanent subject". Given no other cues, this is a hard task involving the construction of a concept like "style" (cognitive or otherwise) that allows us, sometimes, to ascribe a class of sculptures or sonnets or sonatas or sophisms to the same author. Then it would have to keep track of these thematic interactions on some evaluative scale inferred and ascribed as common to "itself" and "its interlocutor", in terms of goal satisfactions and frustrations, both given and received. At this point it could begin to manifest some descriptive but non-Proustian, or active but non-Machiavellian (as this would disqualify most of us) mastery of conversation. According to the relative status it ascribes itself and the interlocutor (who needs whom in terms of goal satisfaction; for instance, can the present interlocutor pull my plug?) it can, when in the frustration range of its scale, pertinently activate retributive or compensatory interaction schemes producing scenarios like reproaching past wrongs and asking for future redress as a condition of continued interaction; or resenting wrongs unexpressed because of unequal status while rendering less valuable service; or censuring wrongs and unilaterally ordering redress, etc. To master such interactions it would thus need a simplified representation of itself, to recognize its present mental states and activities, and to evoke its past and future ones, and to effectively use it to produce its "impersonation" of a human intelligence. Thus to trick a human interlocutor into ascribing it consciousness, it would have effectively tricked itself into such a self-ascription.

REFERENCES

1. Boden M. A. Purposive Explanation in Psychology. Harvard University Press, Cambridge, MA (1972).

2. Sloman A. The Computer Revolution in Philosophy. Harvester Press, Sussex (1978).

3. Cellérier G. La genèse historique de la cybernétique. In Cahiers Vilfredo Pareto, Vol. XIV. Droz, Geneva (1976).

4. Minsky M. Steps toward artificial intelligence. In Computers and Thought (eds. Feigenbaum E. & Feldman J.). McGraw-Hill, New York (1963).

