
CS 4100 Artificial Intelligence

Prof. C. Hafner – Class Notes Jan 10, 2012

Goals of artificial intelligence (AI) field

1. Artificial systems with humanlike ability to understand and reason (cf. cognitive science)

2. Solve problems that are too large to find the best answer algorithmically, using heuristic (incomplete) methods

3. Solve problems that are not well-understood

Artificial systems with humanlike ability to understand and reason

• Main techniques: ontology, automated reasoning, formal logic, state-space search

• Uses: problem-solving/planning, natural language processing, intelligent HCI

• Applications: game characters, search engines, recommender systems

Artificial systems with humanlike ability to understand and reason (cont.)

• Main techniques: evidential logics (probability, fuzzy logic, . . .), Bayesian inference nets, Markov models

• Uses: Problem solving under uncertainty, decision support systems (“expert systems”)

• Application: medical diagnosis and advice

Solve problems that are too large to find the best answer algorithmically, so require incomplete methods

• Main techniques: heuristic search; dependency-directed backtracking

• Uses: scheduling, resource allocation

• Application: factory production

Solve problems that are not well-understood

• Main techniques: weighted linear models; Bayesian inference nets; statistical induction and machine learning in general

• Uses: finance, computational science (discovery), data mining

• Application: oil exploration

Go over syllabus

History of AI

• Initial optimism – 1960s
• Samuel's checker player – early ML
• General Problem Solver (Simon & Newell)

Employed means-ends analysis (a precursor of the backward chaining now used in many systems)

(Flowchart) Goal: Transform situation A into situation B
• Match A to B to find difference D
  – If there is no difference: success
  – Otherwise: Subgoal: Reduce D (fail → Fail; succeed → A')
  – Then: Subgoal: Transform A' into B (fail → Fail; succeed → success)

Means-ends analysis (cont.)

(Flowchart) Goal: Reduce difference D between situations A and B
• Search for an operator Q relevant to reducing D
  – If none: Fail
  – Otherwise: Subgoal: Apply Q to A, producing A' (fail → Fail; succeed → A')

(Flowchart) Goal: Apply operator Q to A
• Match A to the conditions of Q, finding difference F
  – If there is no difference: success, producing A'
  – Otherwise: Subgoal: Reduce F (fail → Fail; succeed → A')
  – Then: Subgoal: Apply Q to A', producing A'' (fail → Fail; succeed → success)
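The same control structure can be written as mutually recursive functions. The sketch below is a minimal Python illustration of the three goals in the flowcharts above, assuming a toy state representation (a set of facts) and operators given as dictionaries with preconds/adds/deletes keys; these names and the depth cutoff are illustrative assumptions, not the original GPS code.

def transform(state, goal, operators, depth=5):
    """Goal: transform `state` so that every fact in `goal` holds."""
    differences = goal - state
    if not differences:
        return state                       # no difference: success
    if depth == 0:
        return None                        # cut off to avoid endless regress
    d = next(iter(differences))            # pick one difference D
    new_state = reduce_difference(state, d, operators, depth)
    if new_state is None:
        return None                        # Subgoal: Reduce D failed
    return transform(new_state, goal, operators, depth - 1)

def reduce_difference(state, d, operators, depth):
    """Goal: reduce difference d by finding a relevant operator Q."""
    for q in operators:
        if d in q["adds"]:                 # Q is relevant to reducing d
            result = apply_operator(state, q, operators, depth)
            if result is not None:
                return result
    return None                            # no relevant operator: fail

def apply_operator(state, q, operators, depth):
    """Goal: apply operator Q to state, first achieving Q's preconditions."""
    ready = transform(state, q["preconds"], operators, depth - 1)
    if ready is None:
        return None
    return (ready - q["deletes"]) | q["adds"]

# Tiny usage example with a single made-up operator:
ops = [{"preconds": {"at-home"}, "adds": {"at-school"}, "deletes": {"at-home"}}]
print(transform({"at-home"}, {"at-school"}, ops))   # {'at-school'}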

• Knowledge-based systems – 1970s to mid 80s
• "Micro-world" experiments
  • SHRDLU (Terry Winograd)
• Rule-based "expert" systems
  • Mycin, Dendral (Ed Feigenbaum)
• Acceptance by industry – huge oversell

• The knowledge acquisition bottleneck

History of AI (cont.)

• Late 80s – mid 90s – AI winter

• Hopes pinned on neural nets/ML to overcome KA bottleneck

• Late 90s to present – more computing power
• Rise of probabilistic approaches
• Lexical tagging breakthrough in NLP
• More rigorous experiments/evaluation methods

• 2000s – influence of the Web revives AI
• Massive text corpora and the need for better web browsers inspire NLP
• Hardware advances inspire robotics
• Intelligent Agents / Web Bots – applications to e-commerce

Agents and environments

Black box vs. glass box approach

(Diagram) Black box: a function f maps Input (stimulus i) to Output (response j)

• Example black-box models: weighted linear functions, Bayesian models, neural nets, support vector machines

Black box vs. glass box approach

(Diagram) Glass box: the agent's internal structure, including:
• Goals; strategies, plans
• General knowledge: concepts/categories, relationships, schemas/scripts, rules of behavior
• The current environment (situation); models of other agents
• Example goal: tell Sam the details of a social event

Designing an Intelligent Agent

• What can the agent do? (range of possible actions)

• What about the environment?
  • Inputs to the program are called percepts
  • Symbolic input from keyboard, files, networks
  • Sensor data from the physical world
  • Often must be interpreted into meaningful concepts

• What can the agent know?
  • History of its own previous inputs and actions
  • Properties of the environment + world knowledge
  • Knowledge of its own goals, preferences, etc.
  • Strategies for its behavior

Vacuum-cleaner world

• Percepts: location and contents, e.g., [A, Dirty]
• Actions: Left, Right, Suck, NoOp
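As a concrete reference point, here is a minimal Python sketch of this two-square world; the class name VacuumWorld, the method names, and the all-dirty starting layout are assumptions made for illustration, not part of the lecture.

class VacuumWorld:
    def __init__(self, dirt=None, location="A"):
        # Both squares start dirty unless told otherwise.
        self.dirt = dict(dirt) if dirt is not None else {"A": True, "B": True}
        self.location = location

    def percept(self):
        # Percept = (location, contents), e.g. ("A", "Dirty")
        return (self.location, "Dirty" if self.dirt[self.location] else "Clean")

    def execute(self, action):
        if action == "Suck":
            self.dirt[self.location] = False
        elif action == "Right":
            self.location = "B"
        elif action == "Left":
            self.location = "A"
        # "NoOp": do nothing

env = VacuumWorld()
print(env.percept())      # ('A', 'Dirty')
env.execute("Suck")
print(env.percept())      # ('A', 'Clean')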

Types of Agents

• Reflex agent:
  • No "state" or memory
  • Reacts to current input according to its program (condition → action rules)

• Knowledge-based agent:
  • Uses an explicit knowledge base
  • Exhibits "understanding" of its input by relating it to prior knowledge
  • Reacts according to rules, but the conditions may be complex and require inference to evaluate

Types of Agents (cont.)

• Planning agents
  • Explicitly represent their own goals and/or preferences ("utilities") and can reason about them (i.e., planning)
  • Exhibit a kind of autonomy – actions do not follow directly from a table lookup

• Learning agents
  • Learn from positive and negative examples – "supervised learning"
  • Learn from experience to improve their outcomes – "reinforcement learning"

How do we judge whether we have succeeded?
• Getting the "right answer"?
• Having a good outcome? (using some "utility" function)
• The Turing Test? (and modified versions)
• We know it when we see it?

Types of Agents (cont.): Reflex agent

• Behavior depends only on current input (no history or model of the overall environment)

• Example: Vacuum Agent
  – Percepts: a 2-tuple: A or B, Clean or Dirty
  – Actions: Left, Right, Suck, NoOp

Vacuum agent’s behavior

• The vacuum agent might follow this simple strategy:
  – If the current location is dirty, clean it (Suck)
  – If the current location is clean, move to the other location

Knowledge Representation: Two Approaches

• Procedural representation
  – The program's statements directly encode the knowledge (for example, of the strategy to follow)

• Declarative representation
  – A data structure encodes the knowledge and the program's statements act like an interpreter

Implementing the Vacuum Agent

Approach 1 (procedural):
if status = Dirty then return Suck
else if location = A then return Right
else if location = B then return Left
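In Python (the course language), the procedural approach might look like the sketch below; the function name and the string encodings of percepts and actions are illustrative assumptions.

def reflex_vacuum_agent(location, status):
    # The strategy itself is hard-wired into the program's statements.
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:                       # location == "B"
        return "Left"

print(reflex_vacuum_agent("A", "Dirty"))   # Suck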

Approach 2 (declarative):
state ← perceive()
rule ← rule-match(state, rule-base)
action ← RHS(rule)
return action

Implementing the Vacuum Agent (cont.)

Rule Base:

Percept        Action
[A, Clean]     Right
[A, Dirty]     Suck
[B, Clean]     Left
[B, Dirty]     Suck
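A minimal Python sketch of the declarative approach, assuming the rule base above is stored as a percept-to-action dictionary; the names rule_base and table_driven_agent are illustrative, not from the lecture. Note that the agent program is now a generic interpreter and all vacuum-specific knowledge lives in the data structure.

rule_base = {
    ("A", "Clean"): "Right",
    ("A", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
    ("B", "Dirty"): "Suck",
}

def table_driven_agent(percept):
    # Look the percept up in the rule base; the program itself encodes
    # no vacuum-specific strategy.
    return rule_base.get(percept, "NoOp")

print(table_driven_agent(("A", "Dirty")))   # Suck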

Simple reflex agents

Production Rule Systems

• Behavior is expressed as a set of production rules (called "table-driven" by R&N)
  1. Condition → Action
  2. Condition → Action
  . . .
• The Condition is called the left-hand side (LHS); the Action is called the right-hand side (RHS)
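A minimal Python sketch of such a production rule interpreter, assuming each rule is an (LHS, RHS) pair whose LHS is a predicate over the percept; firing the first rule whose condition holds is one simple, assumed answer to the conflict question raised below, not the lecture's prescription.

rules = [
    (lambda loc, status: status == "Dirty", "Suck"),   # LHS (condition) -> RHS (action)
    (lambda loc, status: loc == "A", "Right"),
    (lambda loc, status: loc == "B", "Left"),
]

def run_rules(percept, rules):
    loc, status = percept
    for lhs, rhs in rules:
        if lhs(loc, status):        # first matching rule fires
            return rhs
    return "NoOp"

print(run_rules(("B", "Clean"), rules))   # Left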

In what sense is an agent that uses declarative knowledge more intelligent?

Production rule systems

• Drawbacks:
  – Huge rule base (time-consuming to build by hand)
  – What if more than one condition is satisfied?
  – Inflexible (no adaptation or learning)

Analyzing Agent Performance

• Discussion of "rationality"
  – Must define a performance measure
    • How clean? (but a penalty for extra work?)
  – Rationality maximizes the expected value of the measure
  – Depends on knowledge of the environment
    • Can a clean square get dirty again? At what rate?
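As a concrete illustration, a performance measure could score each time step, rewarding clean squares and charging a small cost for movement; the +1 and -0.5 weights below are illustrative assumptions, not values from the notes.

def performance(history):
    # history: a list of (number_of_clean_squares, action) pairs, one per time step
    score = 0.0
    for clean_squares, action in history:
        score += clean_squares              # reward cleanliness at every step
        if action in ("Left", "Right"):
            score -= 0.5                    # penalty for extra work (movement)
    return score

print(performance([(1, "Suck"), (2, "Right"), (2, "NoOp")]))   # 4.5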

Knowledgeable agents

Q: What formal language(s) can we use to represent:
• Current facts about the state of the world
• General facts about how the world behaves
• General facts about the effects of actions that the agent can perform
• Condition → action rules that specify how the agent is to behave

A: Formal logic
• Syntax and semantics are well understood
• Computational tractability is known for important subsets (Horn clause logic)
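For a feel of why Horn clauses are tractable, here is a minimal Python sketch of forward chaining over propositional Horn rules; representing facts as strings and rules as (premises, conclusion) pairs is an assumption for illustration. Each pass only adds facts, so the loop finishes in time polynomial in the number of rules and facts.

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known facts."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known

# Example: condition -> action knowledge about the vacuum world
rules = [
    (["at-A", "A-dirty"], "do-suck"),
    (["at-A", "A-clean"], "do-move-right"),
]
print(forward_chain(["at-A", "A-dirty"], rules))   # includes 'do-suck'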

Introduce Python
Assignment 1

http://relationalagents.com/demos/index.html

Relational Agent Systems

How are you feeling today?

