Lecture 05-06: Intelligent Agents (AI, UAAR)

Transcript
  • INTELLIGENT AGENTS

  • LECTURE OBJECTIVES
    Agents and environments
    Rationality
    PEAS (Performance measure, Environment, Actuators, Sensors)
    Environment types
    Agent types

  • AGENTS
    An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
    Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators.
    Robotic agent: cameras and infrared range finders for sensors; various motors for actuators.
    Software agents (softbots) have some functions as sensors and some functions as actuators; Askjeeves.com is an example of a softbot.
    Expert systems, such as a cardiologist expert system, are agents.
    Autonomous spacecraft are agents.


  • GLOSSARY
    Percept: the agent's perceptual inputs.
    Percept sequence: the history of everything the agent has perceived.
    Agent function: describes the agent's behaviour; maps any percept sequence to an action.
    Agent program: implements the agent function.

  • AGENTS AND ENVIRONMENTS
    The agent function maps from percept histories to actions:
      f: P* -> A
    The agent program runs on the physical architecture to produce f:
      agent = architecture + program
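    The distinction can be sketched in code. Below is an illustrative Python fragment (all names are hypothetical, not from the lecture): the program is called with one percept at a time, while the function f it realizes is defined over whole percept histories.

      def table_driven_agent_program(percept, history, table):
          """Realizes the agent function f: P* -> A by looking up
          the full percept sequence seen so far in a table."""
          history.append(percept)
          # Unknown percept sequences fall back to doing nothing.
          return table.get(tuple(history), "NoOp")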

  • EXAMPLE: VACUUM-CLEANER AGENT
    Percepts: location and contents, e.g., [A, Dirty]
    Actions: Left, Right, Suck, NoOp
    A simple agent function: if the current square is dirty, then suck; otherwise move to the other square.
    Agent program?
    What is the right way to fill in the actions in such a table? What makes an agent good or bad, intelligent or stupid?
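    A minimal sketch of this agent function in Python (the percept and action encodings are assumptions for illustration, not from the lecture):

      def vacuum_agent(percept):
          """Percept is a (location, status) pair, e.g. ('A', 'Dirty')."""
          location, status = percept
          if status == "Dirty":
              return "Suck"
          # Otherwise move to the other square: A -> Right, B -> Left.
          return "Right" if location == "A" else "Left"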

  • INTELLIGENT AGENTS
    The fundamental faculties of intelligence are:
    Acting
    Sensing
    Understanding, reasoning, learning

    An intelligent agent must sense, must act, and must be autonomous (to some extent). It must also be rational. AI is about building rational agents.

  • RATIONAL AGENT
    An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform.
    What is the right thing? It is whatever causes the agent to be most successful, which requires a way to measure success.
    Rationality is not the same as perfection: rationality maximizes expected performance, while perfection maximizes actual performance.
    A performance measure is a criterion for evaluating an agent's behaviour. E.g., the performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up, the amount of time taken, the amount of electricity consumed, the amount of noise generated, etc.
    A performance measure is not always easy to select, because measuring performance/behaviour raises deep philosophical questions.

  • RATIONALITY CONTD
    What is rational depends on four things:
    The performance measure
    The agent's prior knowledge of the environment
    The actions the agent can perform
    The agent's percept sequence to date

    Definition of a rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has. Can you describe an example of a rational agent?
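    The definition can be written compactly (the notation is ours, not from the slides): with percept sequence e, built-in knowledge K, and performance measure U, a rational agent selects

      a^* = \arg\max_{a \in A} \, \mathbb{E}[\, U \mid e, K, a \,]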

  • RATIONAL AGENTS CONTD
    Rationality is distinct from omniscience (all-knowing with infinite knowledge).
    Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering/exploration is an important part of rationality).
    An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt). A rational agent should be autonomous.

  • IS OUR VACUUM-CLEANER AGENT RATIONAL?
    Keeping in view the four aspects used to assess rationality:
    The performance measure awards one point for each clean square at each time step, over 10000 time steps.
    The geography of the environment is known. Clean squares stay clean, and sucking cleans the current square.
    The only actions are Left, Right, Suck, and NoOp.
    The agent correctly perceives its location and whether that location contains dirt.
    What's your answer: yes or no?

  • BUILDING RATIONAL AGENTS: PEAS DESCRIPTION TO SPECIFY TASK ENVIRONMENTS
    We must first specify the setting for intelligent agent design.
    PEAS: Performance measure, Environment, Actuators, Sensors. Together these specify the task environment for a rational agent.
    Consider, e.g., the task of designing an automated taxi driver:
    Performance measure: safe, fast, legal, comfortable trip; maximize profits
    Environment: roads, other traffic, pedestrians, customers
    Actuators: steering wheel, accelerator, brake, signal, horn
    Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
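    Since a PEAS description is just structured data, it can be written down directly; a small Python sketch for the taxi (the class and field names are our own):

      from dataclasses import dataclass

      @dataclass
      class PEAS:
          performance_measure: list
          environment: list
          actuators: list
          sensors: list

      taxi = PEAS(
          performance_measure=["safe", "fast", "legal", "comfortable", "maximize profits"],
          environment=["roads", "other traffic", "pedestrians", "customers"],
          actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
          sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
                   "engine sensors", "keyboard"],
      )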

  • PEAS
    Agent: medical diagnosis system
    Performance measure: healthy patient, minimize costs and lawsuits
    Environment: patient, hospital, staff
    Actuators: screen display (questions, tests, diagnoses, treatments, referrals)
    Sensors: keyboard (entry of symptoms, findings, patient's answers)

  • PEAS
    Agent: part-picking robot
    Performance measure: percentage of parts in correct bins
    Environment: conveyor belt with parts, bins
    Actuators: jointed arm and hand
    Sensors: camera, joint angle sensors

  • PEAS
    Agent: interactive English tutor
    Performance measure: maximize student's score on test
    Environment: set of students
    Actuators: screen display (exercises, suggestions, corrections)
    Sensors: keyboard

  • ENVIRONMENT TYPES / PROPERTIES OF TASK ENVIRONMENTS
    Fully observable (vs. partially observable): the agent's sensors give it access to the complete state of the environment at each point in time. Fully observable environments are convenient, as the agent need not maintain internal state to keep track of the world. Partial observability may be due to noisy or inaccurate sensors, or because part of the state is missing; e.g., a taxi agent has no sensor to see what other drivers are doing or thinking.
    Deterministic (vs. stochastic): the next state of the environment is completely determined by the current state and the action executed by the agent. (If the environment is deterministic except for the actions of other agents, then the environment is strategic.) The vacuum world is deterministic, while taxi driving is stochastic, as one cannot exactly predict the behaviour of traffic.

  • ENVIRONMENT TYPES
    Episodic (vs. sequential): the agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself; the next episode does not depend on the actions taken in previous episodes. E.g., an agent sorting defective parts on an assembly line is episodic, while a taxi-driving agent or a chess-playing agent is sequential.
    Static (vs. dynamic): the environment is unchanged while the agent is deliberating. (The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.) Taxi driving is dynamic; a crossword-puzzle solver is static.

  • ENVIRONMENT TYPES CONTD
    Discrete (vs. continuous): a limited number of distinct, clearly defined percepts and actions. E.g., a chess game has a finite number of states, while taxi driving is a continuous-state and continuous-time problem.
    Single agent (vs. multiagent): an agent operating by itself in an environment. An agent solving a crossword puzzle is in a single-agent environment; an agent playing chess is in a two-agent environment.

  • EXAMPLES
    The environment type largely determines the agent design. The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.
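    These properties can be recorded as simple structured data; the sketch below (representation assumed) captures only the classifications stated on these slides for the taxi-driving task.

      # Environment properties of the automated taxi, as classified above.
      taxi_environment = {
          "observable": "partially",  # cannot see what other drivers are thinking
          "deterministic": False,     # stochastic: traffic cannot be predicted exactly
          "episodic": False,          # sequential: earlier actions affect later ones
          "static": False,            # dynamic: the world changes while deliberating
          "discrete": False,          # continuous state and continuous time
          "single_agent": False,      # multiagent: other traffic, pedestrians
      }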

  • AGENT TYPES
    Four basic types, in order of increasing generality:
    Simple reflex agents
    Model-based reflex agents
    Goal-based agents
    Utility-based agents
    Learning agents
    (All of the above can be turned into learning agents.)

  • SIMPLE REFLEX AGENTS
    Information comes from the sensors as percepts, which update the agent's current view of the world. These agents select actions on the basis of the current percept only; condition-action rules let the agent connect percepts to actions, which are triggered through the effectors.
    Characteristics:
    Such agents have limited intelligence.
    They are efficient.
    No internal representation for reasoning or inference.
    No strategic planning or learning.
    Not good for multiple, opposing goals.
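    As an illustration, a condition-action rule agent fits in a few lines of Python (the rule format and names are assumptions, not from the lecture):

      # Each rule pairs a condition (a predicate on the current percept)
      # with the action to take when that condition holds.
      rules = [
          (lambda p: p["status"] == "Dirty", "Suck"),
          (lambda p: p["location"] == "A", "Right"),
          (lambda p: p["location"] == "B", "Left"),
      ]

      def simple_reflex_agent(percept):
          """Selects an action from the current percept only; no memory."""
          for condition, action in rules:
              if condition(percept):
                  return action
          return "NoOp"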

  • SIMPLE REFLEX AGENTS [architecture diagram]

  • MODEL BASED REFLEX AGENTS
    These agents keep track of the part of the world they cannot currently see. The agent maintains internal state using knowledge about how the world evolves independently of the agent, and how the agent's actions affect the world.
    Thus a model-based agent works as follows: information comes from the sensors as percepts; based on these, the agent updates its picture of the current state of the world; based on that state and its knowledge (memory), it triggers actions through the effectors.
    E.g., for the taxi-driving agent: knowing where the other car is in an overtaking scenario, and knowing that when the agent turns the steering wheel clockwise, the car turns right.
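    A minimal sketch of this update loop (the state representation and helper names are ours):

      def model_based_reflex_agent(percept, state, last_action, update_state, rules):
          # Fold the predicted effect of the last action and the new percept
          # into the internal state, which tracks what the agent cannot see.
          state = update_state(state, last_action, percept)
          for condition, action in rules:
              if condition(state):
                  return action, state
          return "NoOp", state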

  • MODEL-BASED REFLEX AGENTS [architecture diagram]

  • GOAL BASED AGENTS
    The current state of the environment is not always enough: e.g., at a road junction the taxi can turn left, turn right, or go straight on, and the correct decision depends on where the taxi is trying to get to.
    Agents therefore also need goal information describing desirable situations. Knowledge supporting decisions (according to the goals set) is explicitly modelled and can be modified as well. The taxi-driving agent has the goal of not hitting the other car, so it will have rules set to support the right actions in different situations.
    Such agents work as follows: information comes from the sensors as percepts; these update the agent's picture of the current state of the world; based on that state, its knowledge (memory), and its goals/intentions, it chooses actions and performs them through the effectors.
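    An illustrative sketch of goal-based action selection (all names assumed):

      def goal_based_agent(percept, state, goal, update_state, predict, actions):
          """Chooses an action whose predicted outcome satisfies the goal."""
          state = update_state(state, percept)
          for action in actions:
              # predict() answers: "what will the world be like if I do this?"
              if goal(predict(state, action)):
                  return action, state
          return "NoOp", state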

  • GOAL-BASED AGENTS [architecture diagram]

  • UTILITY BASED AGENTS
    Goals alone are not always enough to generate high-quality behaviour: e.g., different action sequences can take the taxi agent to its destination (thereby achieving the goal), but some may be quicker, safer, or more economical.
    A general performance measure is therefore required to compare different world states. A utility function maps a state (or sequence of states) to a real number; it lets the agent take rational decisions and specify trade-offs when goals conflict (like speed and safety), or when there are several goals, none of which can be achieved with certainty.
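    Utility-based selection can be sketched as choosing the action with the highest utility of the predicted outcome (an illustrative fragment; the names are ours):

      def utility_based_agent(percept, state, update_state, predict, utility, actions):
          """Picks the action whose predicted resulting state scores highest."""
          state = update_state(state, percept)
          # utility() maps a (predicted) world state to a real number,
          # encoding trade-offs such as speed vs. safety.
          best = max(actions, key=lambda a: utility(predict(state, a)))
          return best, state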

  • UTILITY-BASED AGENTS [architecture diagram]

  • LEARNING AGENTS
    A learning agent can be divided into four conceptual components:
    Learning element: responsible for making improvements; uses feedback from the critic to determine how the performance element should be modified to do better.
    Performance element: responsible for selecting external actions.
    Critic: tells the learning element how well the agent is doing with respect to a fixed performance standard.
    Problem generator: responsible for suggesting actions, especially exploratory actions.

  • For a taxi-driver agent:
    The performance element consists of the collection of knowledge and procedures for selecting driving actions.
    The critic observes the world and passes information to the learning element, e.g., the reactions of other drivers when the agent takes a quick left turn from the top lane.
    The learning element can then formulate a rule marking that as a bad action.
    The problem generator identifies areas of behaviour in need of improvement and suggests experiments, such as trying the brakes on different road conditions.
    The learning element can also change the knowledge components by observing pairs of successive states; this lets the agent learn, e.g., what happens when a strong brake is applied on a wet road.
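    The interaction of the four components can be sketched as a loop (entirely illustrative; the component interfaces are assumptions):

      def learning_agent_step(percept, performance, critic, learner, problem_generator):
          """One step of a learning agent: get feedback, improve, then act."""
          feedback = critic(percept)              # how well are we doing vs. the standard?
          learner.improve(performance, feedback)  # modify the performance element
          action = performance.select_action(percept)
          # The problem generator may override with an exploratory action (or None).
          return problem_generator.suggest() or action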

  • LEARNING AGENTS [architecture diagram]

  • SUMMARY
    An agent is something that perceives and acts in an environment. The agent function specifies the action taken by the agent in response to any percept sequence. The performance measure evaluates the behaviour of the agent in the environment. A rational agent acts to maximise the expected value of the performance measure. Task environments can be fully or partially observable, deterministic or stochastic, episodic or sequential, static or dynamic, discrete or continuous, and single-agent or multiagent.

  • SUMMARY
    Simple reflex agents respond directly to percepts, whereas model-based reflex agents maintain internal state to track aspects of the world that are not evident in the current percept. Goal-based agents act to achieve their goals, and utility-based agents try to maximize their own expected happiness. All agents can improve their performance through learning.


