
I, Robot Pat Hayes, IHMC, U. West Florida. CAP-2000.

Transcript
Page 1

I, Robot

Pat Hayes, IHMC, U. West Florida. CAP-2000

Page 2

I, Robot

or: What would it take to make a robot with a self?

Page 3

I, Robot

or: What would it take to make a robot with a sense of itself?

Page 4

Philosophical debate about consciousness

• Maybe THIS is how consciousness works (yadda, yadda)….

• Pshaw! I can imagine something just like that without it being conscious.

• I don’t think you can.

• Oh no? Let me tell you, I can imagine something which is just like you, an exact copy right down to the atoms, and it behaves just like you and it even believes what you believe and wants what you want, but it’s not conscious. It’s just a zombie. So there.

• That seems impossible to me.

• You just haven’t got enough imagination, that’s all.

Pages 5-6

Philosophical debate about consciousness

• You just haven’t got enough imagination, that’s all.

• It’s hard to see quite how to argue against this claim directly, so rather than try to give SUFFICIENT conditions for consciousness, I’m going to sketch some NECESSARY conditions, to try to raise the imagination-jump bar a little higher.

• Basic idea is that consciousness requires a self.

Page 7

methodology

• Want to give a functional account of what is essentially a matter of phenomenology.

• Danger of vacuous functional structure (e.g. a C-box).

• Some disciplinary rigor provided by the requirement of evolutionary plausibility. No epiphanies.

• Humans are complicated beasties, but we don’t have subjective reports from nonhumans. So we have to be willing to extrapolate to simpler cases.

Page 8

GOFCogSci Standard Model

• Anything known is somehow internally represented as propositions expressed in a ‘language of thought’ (LoT).

• Senses keep the internal world-description up to date.

• World-knowledge is used to plan, react, navigate, etc.

• Awareness is restricted to the content of the LoT.

• Cognitive activity involves ‘information processing’ in the LoT.


Pages 9-12

GOFCogSci Standard Model (with a small addition)

• Propositions in the LoT come with provenances attached, i.e. information about where the proposition came from.

on(cup,table)

this was seen

Page 13

[Diagram: a proposition P, with arrows to its possible provenances: “registered by sense S”, “recorded in memory”, “explanation of Q”, “confirmed by Q, R, ...”, “inferred from Q, R, ...”]
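
The slides leave the representation abstract, so here is a minimal sketch of what a provenance-tagged proposition might look like; the class names and tag vocabulary are illustrative assumptions, not part of the talk.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass(frozen=True)
class Provenance:
    kind: str                        # 'sense', 'memory', 'inference', 'confirmation', ...
    sources: Tuple[str, ...] = ()    # the sense, or the supporting propositions

@dataclass
class Proposition:
    content: str                                      # e.g. "on(cup, table)"
    provenances: List[Provenance] = field(default_factory=list)

# One belief can carry several provenances at once, as in the diagram:
p = Proposition("on(cup, table)")
p.provenances.append(Provenance("sense", ("vision",)))   # "this was seen"
p.provenances.append(Provenance("memory"))               # recorded in memory
```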

Pages 14-15

GOFCogSci Standard Model (with a small addition)

• Provenances are under the control of the machinery.

• They are needed for ‘truth maintenance’, i.e. keeping track of corrections.

• (Philosophical aside) Knowing a set of propositions might involve more than just knowing their conjunction.

• (This solves the Problem of Mary, by the way.)
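
Continuing the sketch above, ‘truth maintenance’ can then be a matter of chasing provenances: when a belief is withdrawn, anything whose only support traces back to it goes too. The store layout is an assumption for the example.

```python
def retract(kb, doubted):
    """kb: dict mapping content -> Proposition. Remove 'doubted', then any
    belief left with no surviving provenance (keeping track of corrections)."""
    removed = {doubted}
    changed = True
    while changed:
        changed = False
        for prop in list(kb.values()):
            # Drop provenances that cite a removed belief as a source
            prop.provenances = [pr for pr in prop.provenances
                                if not any(s in removed for s in pr.sources)]
            if not prop.provenances and prop.content not in removed:
                removed.add(prop.content)   # no support left: retract this too
                changed = True
    for content in removed:
        kb.pop(content, None)
```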

Pages 16-17

One approach to creating a Self

• If something which can represent things needs to know about itself, just give it a way to represent itself to itself.

• Details get complicated. (Need a meta-theoretic self-description supported by a reflexive architectural layer…)

[Diagram, from A. Sloman 1999: a three-layer architecture with Meta-management above Deliberative Reasoning above Reactive Mechanisms]

Pages 18-20

One approach to creating a Self

• If something which can represent things needs to know about itself, just give it a way to represent itself to itself.

• BUT what is being described by this meta-theory?

• What does ‘I’ refer to? (Body, mind, soul, ego, Will, …?) Certainly not our own inference processes.

• Are you the same “I” you were yesterday?

Page 21

“I look at those old movies, and I wonder how I did them. It was someone else who made them, not me. I can recognise part of me in them, but they were made by someone else, not by me.”

- Terry Gilliam

Pages 22-29

• The human self-concept has several aspects:

• bodily location (I am not in Kansas)

• locus of narrative memory (I recall reading Proust)

• epistemic agent (I know I left it here somewhere.)

• social agent (Do I know you?)

• source of intentionality (I was referring to the mint sauce)

• the ‘free will’ (I’m in charge here.)

• …and probably more.


Pages 31-32

bodily location

A ‘mental map’ requires a ‘thishere’ token to relate perceptual input to the position of the body in the terrain.

This is a primitive ‘sense of self’.

Purely geographical, it has no implications for mental state or agency. It is required in some form by anything which navigates using a non-egocentric spatial model.

This is routine in AI robotics and probably evolved fairly early in animals. For things with an articulated body it gets quite complicated.
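
A toy rendering of the ‘thishere’ token for the simplest case, a rigid body on a flat terrain; the map contents and pose format are invented for the sketch.

```python
import math

# Non-egocentric map: landmark -> (x, y) in terrain coordinates
terrain = {"tree": (4.0, 7.0), "rock": (10.0, 2.0)}

# The 'thishere' token: where the body is (and which way it faces) on that map
thishere = {"x": 3.0, "y": 2.0, "heading": math.pi / 2}

def expected_percept(landmark):
    """Relate the map to the senses: the range and egocentric bearing
    at which the landmark should appear from this-here."""
    lx, ly = terrain[landmark]
    dx, dy = lx - thishere["x"], ly - thishere["y"]
    return math.hypot(dx, dy), math.atan2(dy, dx) - thishere["heading"]

print(expected_percept("tree"))   # what vision should report if the map is right
```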

Page 33

locus of narrative memory

We humans certainly have a well-developed narrative (episodic) memory; but what is it for?

Pages 34-38

locus of narrative memory

Episodic memory provides a source from which causal explanations can be extracted, providing a ‘temporal map’: a way to make predictions in the future. It adds ‘now’ to ‘thishere’.

….abbfacytbbhabghjbaabbhafcasghbbrajkbbdaojkkllaa

bb leads to a after a short delay

….ghfklbnmsdfbb (now I can see ahead)

Delicate balance needed; too general means weak predictions, too specific means narrow applicability.

This is still a research area in AI.
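
As a toy version of this kind of induction over an experience stream, here is a sketch that recovers “bb leads to a after a short delay” from the string above; the window size and scoring are arbitrary choices, not anything proposed in the talk.

```python
from collections import Counter

experience = "abbfacytbbhabghjbaabbhafcasghbbrajkbbdaojkkllaa"

def delayed_effects(stream, cue="bb", max_delay=3):
    """Count which symbols follow the cue within a short delay."""
    effects = Counter()
    for i in range(len(stream) - len(cue) + 1):
        if stream[i:i + len(cue)] == cue:
            effects.update(stream[i + len(cue):i + len(cue) + max_delay])
    return effects

print(delayed_effects(experience).most_common(3))
# 'a' dominates: after seeing 'bb', the creature can see a little way ahead.
```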

Page 39

WARNING

Here we enter somewhat wilder areas of speculation, where AI has never ventured.

Please follow me carefully and stay alert.

Page 40

stability and fickleness

• Unlike AI systems, organisms must eat, and are liable to get eaten. So they have a standing requirement to treat other organisms in a rather special way, one that may require sudden and precipitate action.

• It would be folly to rely solely on induction to learn the causal habits of things that were liable to eat you.

• Beasties need to make a conceptual division of the things in their surroundings into at least two categories: things which are causally predictable, and things which aren’t, but which require immediate attention when detected.

Pages 41-42

stability and fickleness

• Something is causally stable when one can reliably predict its future behavior on the basis of past experience with things of that sort, i.e. when it is reasonable to learn about its behavior by induction.

• It is causally fickle when one knows that it is not causally stable.

This distinction is probably very old; examples from human experience include the surprise when you find someone (but not someTHING) in your personal space unexpectedly (“making someone jump”). It seems to mark a crucial distinction between other ‘agents’ and other things.
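
A minimal sketch of how the stable/fickle division might be drawn in practice, with invented numbers: induction is trusted only for things whose predictions have been panning out.

```python
def classify(track_record, threshold=0.8):
    """track_record: outcomes of induction-based predictions about one thing
    (True = the prediction came out right). The threshold is arbitrary."""
    if not track_record:
        return "fickle"    # nothing to induce from yet: treat with caution
    hit_rate = sum(track_record) / len(track_record)
    return "stable" if hit_rate >= threshold else "fickle"

print(classify([True, True, True, True, False]))    # stable: predict by habit
print(classify([True, False, False, True, False]))  # fickle: demands attention
```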

Page 43

animacy

Being causally fickle is a basic aspect of animacy. Animate entities do things for their own reasons, not because they are causally influenced by other things.

The ‘intentional stance’ (Dennett) or a description at the ‘knowledge level’ (Newell) represents one way to gain some predictive power over animate entities (and it’s pretty useful even for complicated inanimate ones.)

Evidence of agency in unexpected places is often perceived as highly startling (e.g. movies, automobiles, reactive automata) until one gets used to their repertoire and feels able to recognise them.

We are not very good at integrating these frameworks, e.g. the tensions felt by surgeons. I suspect that notions like ‘agency’ and ‘intentionality’ in their full-blooded senses evolved only recently (humans and chimps may be the only creatures who attribute mental states to others), but causal fickleness is likely to be much older.

Pages 44-47

Knowing about knowing

The creature so far knows quite a lot about its world, and can learn more from its experience.

But it doesn’t yet KNOW that it knows anything. It is not reflexively aware….

…but its provenance machinery ‘knows’ something about its own knowledge.

Epistemic access to its own truth-adjusting machinery would be one way to achieve reflexivity of knowledge, i.e. knowing that it knows some of what it in fact knows.

Pages 48-49

Knowing about knowing

‘Reflexivity’ of knowledge, i.e. knowing that it knows some of what it in fact knows, could be of actual practical use (unlike reflexive knowledge of its own cognitive machinery).

E.g. one can take actions to fill gaps in one’s own knowledge (exploration), or avoid taking actions whose outcome might depend critically on information known to be missing (not stepping into the dark).

This is current AI research, e.g. NASA ‘reactive planners’.
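
A minimal sketch of that gap-driven behavior, with an invented knowledge store and action repertoire: the creature explores precisely where it knows its knowledge runs out.

```python
known = {"door_is_open": True}                       # what the creature knows
plan_needs = ["door_is_open", "floor_beyond_door"]   # facts the next action rests on

def next_move(known, plan_needs):
    for fact in plan_needs:
        if fact not in known:
            return ("explore", fact)   # known gap: go and look first
    return ("act",)                    # everything critical is known

print(next_move(known, plan_needs))    # ('explore', 'floor_beyond_door')
```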

Page 50

Epistemic gradients

The creature so far knows quite a lot about its world, and can learn more from its experience.

On the whole, it knows more about things closer to it in space and time, and less about things which are further away. There is an epistemic gradient with itself at the peak.

The gradient can provide another way to identify a ‘self’: the self is the agent which knows things about this-here-now which nothing else knows.
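
A cartoon of the gradient, assuming we could simply count what is known about each region of space and time (the regions and counts are invented):

```python
# Invented counts of how much the creature knows about each region
knowledge_density = {"this-here-now": 40, "next room": 12,
                     "across town": 3, "last century": 1}

# The self sits at the peak of the epistemic gradient
peak = max(knowledge_density, key=knowledge_density.get)
print(peak)   # 'this-here-now': the region only this agent knows this well
```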

Pages 51-52

Epistemic gradients

The self is the agent which knows things about this-here-now which nothing else knows.

This also can be of direct practical use, e.g. knowing that nobody else knows where this-here-now is.

Some mental illness seems to be associated with a breakdown of this, e.g. feelings of ‘ego transparency’ in schizophrenia.

(This also fixes the van Fraassen ‘two gods’ argument.)

Page 53

Epistemic gradients

Notice that the provenance of a reflexive belief is simply the presence of a (closely related) belief. Provenances of reflexive beliefs are something like ‘simple introspection’.

- How do you know the cup is on the table?

- Because I saw it.

- How do you know that you know that?

- ?? I just know, that’s all. (What else can I say?)
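
Continuing the toy provenance classes sketched earlier, a reflexive belief can be given exactly this kind of ‘immediate’ provenance; the function and tag names are my own.

```python
def reflect(kb, content):
    """Form 'I know that <content>' by simple introspection: its provenance
    is nothing more than the presence of the closely related belief."""
    if content in kb:
        meta = Proposition(f"know({content})")
        meta.provenances.append(Provenance("immediate", (content,)))
        kb[meta.content] = meta
        return meta

# With kb = {"on(cup, table)": p}, reflect(kb, "on(cup, table)") adds
# know(on(cup, table)), whose only support is the belief itself.
```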

Pages 54-55

sketch of overall picture

Suppose the ‘thisherenow’ is treated as a first-class entity in the world model.

The creature must either have a very complete understanding of its own inner functioning (which would be of no practical use), or treat itself as causally fickle.

It knows that it does not know why it does what it does. In its own view of itself, its actions necessarily have no causes. It believes itself to have ‘free will’.

If it ever goes to graduate school, it will probably think of itself as having ‘original intentionality’ as well.

Pages 56-57

sketch of overall picture

This creature knows that it is an agent, and it knows quite a lot about itself which (it knows) isn’t known to other agents. Much of this knowledge has a characteristic kind of ‘immediate’ provenance. All its ‘private’ beliefs about its self have a recursive provenance, in that they are derived from other beliefs about the self, or are ‘immediate’.

One might characterize self-beliefs as a system of stable orbits forming the origin of the provenance field.

Cogito, ergo sum.

Pages 58-59

What kind of game are we playing here?

• We are talking about creatures as though they were robots.

• We are using ideas from semantics, evolutionary biology and philosophy, but talking in a technical vocabulary rooted in computer science.

• Broader question: is this way of talking legitimate, and why (or why not)?

• Now, THERE is a topic, surely, where philosophy should have something to say about computers.

