
2101INT – Principles of Intelligent Systems

Lecture 2


Last week we covered

– History of automatons, in literature and in reality
– The birth of AI and the Dartmouth conference
– Some applications and domains of AI since then:
   – Planning
   – Natural Language Processing
   – Expert Systems
   – Neural Networks
– Videos demonstrating AIs operating in “microworlds”
– Discussion about what we think constitutes intelligent behaviour


The Turing Test

Proposed by Alan Turing in 1950 as an operational definition of intelligence

A human “interrogator” asks questions of two entities – one computer, one human – behind screens and using a console, to conceal which is which

If the interrogator cannot determine which is which, then the computer is said to be intelligent

An extension of this is the total Turing Test, which adds a one-way video feed and provision for haptic interaction
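As an aside, the protocol itself is easy to sketch in code. The Python sketch below is purely illustrative and not part of the lecture: the respond callables are hypothetical stand-ins for the two hidden entities.

```python
import random

def imitation_game(questions, human_respond, machine_respond):
    """Collect answers labelled A/B with the machine's position concealed."""
    # The "screen": randomly hide which entity answers as A and which as B.
    a, b = ((human_respond, machine_respond) if random.random() < 0.5
            else (machine_respond, human_respond))
    transcript = [(q, a(q), b(q)) for q in questions]
    # A human interrogator then reads the transcript; the machine passes
    # if guesses about which column is the computer are no better than chance.
    return transcript
```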


The Turing Test cont.

Physical simulation of a human is not considered necessary for intelligence and is deliberately avoided by the screen/console/1-way link etc.

The test is purely a behavioural test of intelligence, relying only on the external behaviour of the entity and not on its internal mental states.

Many philosophers claim that passing the Turing Test does not prove that a machine is thinking, just that it can simulate thinking.

Turing’s response to this objection is simply that in ordinary life, we never have any direct evidence about the internal mental states of other humans.


The Schools of Thought

At the highest level, AI can be divided into two schools of thought:

– Weak AI: “the principal value of the computer in the study of the mind is that it gives us a very powerful tool”

– Strong AI: “the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind” [Searle, 1980]

These relate to the question of consciousness of an intelligent system


Weak AI: Can Machines Act Intelligently?

Weak AI argues that machines are not actually conscious and only appear to think

Computers can do many things as well as or better than humans – including tasks considered to require insight and understanding

But the weak AI position holds that computers do not actually use insight and understanding in performing these tasks


The mathematical objection

Certain mathematical questions are unanswerable by particular formal systems

Gödel’s Incompleteness Theorem (GIT) and the Halting Problem being two relevant examples

Some philosophers – J. R. Lucas being one – argue that these demonstrate that machines are mentally inferior to humans: a machine is a formal system, and so cannot establish the truth of its own Gödel sentence
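The Halting Problem half of this objection can be made concrete. Below is a minimal Python sketch of the standard diagonalisation, where `halts` is a hypothetical oracle rather than a real function:

```python
def halts(program, arg):
    """Hypothetical oracle: True iff program(arg) eventually halts.
    The argument below shows no such total function can exist."""
    raise NotImplementedError

def diagonal(program):
    # Do the opposite of whatever the oracle predicts for a program
    # applied to its own source.
    if halts(program, program):
        while True:      # loop forever if predicted to halt
            pass
    return "halted"      # halt if predicted to loop

# Contradiction: if halts(diagonal, diagonal) is True, diagonal loops
# forever; if False, it halts. Either way the oracle is wrong, so no
# formal system can decide halting for all programs.
```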


Arguments against Lucas

1. Computers are not Turing Machines, only approximations of them

2. There are statements whose truth humans likewise cannot assert

3. It is impossible to prove that humans are not subject to GIT, since any rigorous proof would require formalising the supposedly unformalisable human talent


The argument from informality

Originally raised by Turing, this argument claims that human behaviour is far too complex to be captured by any simple set of rules

Since computers can only follow a set of rules, they cannot generate behaviour as intelligent as that of humans

What this argument is really arguing against is GOFAI – “Good Old-Fashioned AI” – the view that all intelligent behaviour can be captured by a system that reasons logically with symbols

Merely acting intelligently isn’t precluded by this


On the nature of human consciousness

Descartes considered how the soul (let’s say consciousness) interacts with the physical body

This has come to be termed the mind-body problem

He concluded that mind and body must be distinct – two different things – hence dualism

Materialism is a monist theory: it denies that mind and body are different things, and that there are such things as immortal souls

On this view, it is simply that brains cause minds


Relationship to AI

Dualism per se disallows strong AI, since consciousness is not a consequence of the physical system

Materialism, on the other hand, does allow strong AI


Explaining away the mind-body problem

Descartes managed to contradict himself when he proposed interactionism. In this he said that mind and body do interact, but that this interaction is restricted to the pineal gland at the base of the brain

Less contradictory is the theory of parallelism. According to this, mind and matter are entirely separate; each obeys its own laws, but each keeps time perfectly with the other – due to the marvellous planning of a greater being.


Similar to the last is the doctrine of occasionalism: although mind and matter cannot interact directly, providence intervenes on each “occasion” when they need to.

And finally, epiphenomenalism. This holds that minds play no role in the everyday running of the universe. Consciousness is merely an incidental by-product of the system. Matter “causes” mind as in materialism, but thought has no effect on matter. So we can watch the world go by, but can’t do anything about it – any impressions to the contrary are simply an illusion.


Strong AI: Can Machines Actually Think?

Strong AI argues that a properly programmed computer is a conscious mind in the fullest sense


Functionalism

The rise of functionalism is often attributed to the influence of computers on modern society

Functionalism is still materialism, but:
– Brain states are distinct from mental states
– Behaviour is not directly related to stimulus

It does not matter what the physical cause of the mental (functional) state is

These functional states are responsible for our outward behaviour
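This “multiple realizability” point can be illustrated with a toy example. The classes below are invented for illustration only: two systems with entirely different internals but identical stimulus-response behaviour, which functionalism would credit with the same functional state.

```python
class NeuronAdder:
    """Adds two numbers via a (pretend) spike-counting mechanism."""
    def respond(self, a, b):
        spikes = [1] * a + [1] * b    # internal state: a train of spikes
        return sum(spikes)

class PipeAdder:
    """Adds two numbers by (pretend) pooling water from two pipes."""
    def respond(self, a, b):
        litres = float(a) + float(b)  # internal state: a volume of water
        return int(litres)

# Different "brain states", same causal role, same outward behaviour:
assert NeuronAdder().respond(2, 3) == PipeAdder().respond(2, 3) == 5
```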


Functionalism cont.

Any two systems with isomorphic causal processes will have the same mental states – even if they don’t have the same brain states

Recall from last week:
– “The conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”

McCarthy et al. (1955) “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence”


Biological Naturalism

The opposite view is termed biological naturalism

It argues that mental states are emergent features caused by low-level neurological processes inside the neurons

It is the unspecified properties of the neurons that matter – equivalent mental states are not produced by something else merely having the same functional behaviour

Of course, this theory doesn’t explain why neurons have this power, but the concept of a soul comes to mind


The Chinese Room Argument

– “No one supposes that a computer simulation of a storm will leave us all wet… Why on earth would anyone … suppose a computer simulation of mental processes actually had mental processes?”

Searle (1980) “Minds, Brains, and Programs”

Searle’s conclusion is that running an appropriate program is not a sufficient condition for being a mind


The Chinese Room Argument cont.

Consider that you have a system consisting of a human who only understands English

He is placed inside a box with a rule book and some blank pieces of paper

The box has an opening, through which appear slips of paper with indecipherable symbols written on them

The instructions in the rule book direct the human to write symbols on the blank pieces of paper, to look at slips of paper previously used, and eventually to write some of the indecipherable symbols on paper and pass them back out


The Analogy of the Chinese Room

The human is playing the role of the CPU

The rule book is his program

The pieces of paper are his memory
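Taken literally, then, the room is a lookup-driven interpreter. Here is a minimal Python sketch, with an invented two-entry rule book standing in for Searle's (the Chinese strings are arbitrary examples):

```python
# Invented rule book: incoming symbols -> symbols to pass back out.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I am fine, thanks."
    "你会中文吗？": "当然会。",     # "Do you understand Chinese?" -> "Of course."
}

def room(slip: str) -> str:
    """The man (the CPU) matches the incoming slip against the rule book
    (the program) and copies out the reply, understanding none of it."""
    return RULE_BOOK.get(slip, "请再说一遍。")  # default: "Please say that again."

print(room("你会中文吗？"))  # fluent-looking answer, no understanding inside
```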


The Chinese Room Argument cont.

What does this system look like from the outside?

The indecipherable symbols are really questions, and with the help of the rule book the symbols written out are the appropriate answers

So it appears that the system can “understand” Chinese, giving as it does answers appropriate to the questions asked

It therefore passes the Turing Test


The Chinese Room Argument cont.

Does the person in the room understand Chinese?
– No.

Do the rule book and the stacks of paper understand Chinese?
– No.

So if none of the components of the system understand Chinese, how can the system understand Chinese?

Therefore, running the right program does not necessarily generate understanding.


The Systems Reply

– "While it is true that the individual person who is locked in the room does not understand the story, the fact is that he is merely part of a whole system, and the system does understand the story. The person has a large ledger in front of him in which are written the rules, he has a lot of scratch paper and pencils for doing calculations, he has 'data banks' of sets of Chinese symbols. Now, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part.“

Searle citing ‘Berkeley’ (1980) “Minds, Brains, and Programs”


The Systems Reply - Response

Although none of the components understand Chinese, the system as a whole does.

It could be said to be an emergent property of the system.

After all, if you ask it “Do you understand Chinese?” it will of course respond (in Chinese) that it does

Searle’s counter-argument is that if the human were to memorise the rule book and not use pieces of paper he still wouldn’t understand Chinese.


The Robot Reply

– "Suppose we wrote a different kind of program. Suppose we put a computer inside a robot, and this computer would not just take in formal symbols as input and give out formal symbols as output, but rather would actually operate the robot in such a way that the robot does something very much like perceiving, walking, moving about, hammering nails, eating drinking -- anything you like. The robot would, for example have a television camera attached to it that enabled it to 'see,' it would have arms and legs that enabled it to 'act,' and all of this would be controlled by its computer 'brain.' Such a robot would have genuine understanding and other mental states."

Searle citing ‘Yale’ (1980) “Minds, Brains, and Programs”


The Robot Reply - Response

Searle notes that this argument concedes that intelligence is more than symbol manipulation and must relate to the physical world

But the addition of perceptual and motor capabilities does not add understanding

Suppose you leave the human inside, but now some of the incoming symbols (still indecipherable to him) come from the camera, and some of the symbols he passes out go to the motors.

The human still knows nothing about what any of these mean, and still does not understand Chinese.


The Brain Simulator Reply

– "Suppose we design a program that doesn't represent information that we have about the world, but simulates the actual sequence of neuron firings at the synapses of the brain of a native Chinese speaker when he understands stories in Chinese and gives answers to them. The machine takes in Chinese stories and questions about them as input, it simulates the formal structure of actual Chinese brains in processing these stories, and it gives out Chinese answers as outputs. We can even imagine that the machine operates, not with a single serial program, but with a whole set of programs operating in parallel, in the manner that actual human brains presumably operate when they process natural language. Now surely in such a case we would have to say that the machine understood the stories; and if we refuse to say that, wouldn't we also have to deny that native Chinese speakers understood the stories? At the level of the synapses, what would or could be different about the program of the computer and the program of the Chinese brain?" Searle citing ‘Berkeley and MIT’ (1980) “Minds, Brains, and Programs”


The Brain Simulator Reply - Response

Searle notes (once again) that intelligence must therefore be more than symbol manipulation.

Even getting this close to the operation of the brain is not sufficient to produce understanding

Imagine that the neurons are simulated by water pipes and that a little man runs around turning valves on and off according to rules in the book.

After all the right “neural firings” the Chinese answer pops out

Still the man doesn’t understand, the water pipes don’t understand, and if we credit the whole ensemble instead we are back at the Systems Reply


Summary of the Chinese Room

Strong AI claims that instantiating a formal program with the right input is a sufficient condition of intelligence/understanding/intentionality

Attributing these things to the Chinese Room is based on the assumption that if something looks and behaves sufficiently like something else with the same function, then it must have corresponding mental states

If we knew what to attribute its behaviour to (little man, pipes, formal program) we would not make this assumption and would not attribute intentionality to it.


References

Haugeland, John (1985) “Artificial Intelligence: The Very Idea”, MIT Press

“Philosophy of Mind – Functionalism”, http://www.philosophyonline.co.uk/pom/pom_functionalism_introduction.htm

Searle, John R. (1980) “Minds, Brains, and Programs”, http://www.bbsonline.org/documents/a/00/00/04/84/bbs00000484-00/bbs.searle2.html

