
Jochen Triesch, UC San Diego, http://cogsci.ucsd.edu/~triesch

Attention

Outline:

• Overview

• bottom-up attention

• top-down attention

• physiology of attention and awareness

• inattention and change blindness


Credits: major sources of material, including figures and slides:

• Itti and Koch. Computational Modeling of Visual Attention. Nature Reviews Neuroscience, 2001.

• Sprague, Ballard, and Robinson. Modeling Attention with Embodied Visual Behaviors, 2005.

• Fred Hamker. A dynamic model of how feature cues guide spatial attention. Vision Research, 2004.

• Frank Tong. Primary Visual Cortex and Visual Awareness. Nature Reviews Neuroscience, 2003.

• and various resources on the WWW


How to think about attention?

• William James: “Everyone knows what attention is”

• overt vs. covert attention

• attention as a filter

• attention as enhancing the signal produced by a stimulus

• attention as tuning the system to a specific stimulus attribute

• attention as a spotlight

• location-, feature-, object-, modality-, or task-based

• attention as binding together features

• attention as something that speeds up processing

• attention as distributed competition


Important Questions

• what is affected by attention?

• where in the brain do we see differences between attended/unattended conditions?

• what controls attention?

• how many things can you attend to?

• is attention a useful notion at all? Or is it too blunt and unspecific?


Bottom-up Attention


Points to note:

• the saliency of a location depends on its surround

• integration into a single saliency map (where in the brain?)

• inhibition of return is important

• how are things updated across eye movements?

• purely bottom-up models provide a very poor fit to most experimental data (a minimal saliency sketch follows this list)
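To make the saliency map, winner-take-all, and inhibition-of-return ideas concrete, here is a minimal Python sketch in the spirit of the Itti & Koch model. The intensity-only channel, the scale pairs, and the Gaussian inhibition bump are simplifying assumptions, not the published model's parameters.

```python
# Minimal Itti & Koch-style bottom-up saliency sketch (intensity channel only)
# with winner-take-all selection and inhibition of return.
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(image):
    """Center-surround saliency from a 2-D intensity array."""
    sal = np.zeros_like(image, dtype=float)
    # center-surround: difference of Gaussians at several scale pairs (assumed)
    for center, surround in [(1, 4), (2, 8), (3, 12)]:
        cs = np.abs(gaussian_filter(image, center) - gaussian_filter(image, surround))
        if cs.max() > 0:
            cs /= cs.max()                  # simple per-map normalization
        sal += cs
    return sal / sal.max() if sal.max() > 0 else sal

def scan_path(image, n_fixations=5, ior_sigma=6.0):
    """Attended locations via winner-take-all plus inhibition of return:
    suppress a Gaussian region around each winner so attention moves on."""
    sal = saliency_map(image)
    fixations = []
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(sal), sal.shape)
        fixations.append((y, x))
        yy, xx = np.mgrid[0:sal.shape[0], 0:sal.shape[1]]
        bump = np.exp(-((yy - y)**2 + (xx - x)**2) / (2 * ior_sigma**2))
        sal = np.clip(sal - bump, 0, None)  # inhibition of return
    return fixations

# usage: a bright blob on a dark background attracts the first fixation
img = np.zeros((64, 64)); img[20:26, 40:46] = 1.0
print(scan_path(img, n_fixations=3))
```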


Looking to maximize visual reward

• infants may be primarily driven by visual saliency

• at about a year of age they start with gaze following: "looking where somebody else is looking"

• a foundational skill important for learning language, ...

• does not emerge normally in certain developmental disorders

• theory: infants learn to exploit the caregiver's direction of gaze as a cue to where interesting things are

(G. Deák, R. Flom, and A. Pick; 18- and 12-month-olds)


Carlson & Triesch (2003):

• discrete regions of space (N = 10)

• an interesting object/event in one location, sometimes moving randomly

• the caregiver (CG) looks at the object with probability p_valid

Infant:

• can look at the CG or any region of space

• only sees what is in the region it looks at

• decides when and where to shift gaze


Overview of Infant Model

The infant model is a simple two-agent system (cf. Findlay & Walker, 1999):

• the "when" agent decides when to shift gaze (output: yes/no); its inputs are the fixation time, whether an object is in view, and the instantaneous reward

• the "where" agent decides where to look next (output: the new location); its inputs are whether the CG is in view, the CG's head pose, and the instantaneous (habituating) reward


Infant Model Details

Habituation: the reward for looking at an object decreases over time:

r_t = R · h_fix(0) · exp(−β t)

β: habituation rate, h_fix(0): habituation level at the beginning of the fixation, t: time since the start of the fixation.

Softmax action selection balances exploration and exploitation (τ > 0: temperature):

p(s_t, a_t) = exp(Q̃(s_t, a_t)/τ) / Σ_{b=1..n} exp(Q̃(s_t, b)/τ), where Q̃(s_t, a_t) = Q(s_t, a_t) / max_a Q(s_t, a)

The agents learn with the tabular SARSA algorithm (Q: state-action value, α: learning rate, γ: discount factor):

Q(s_t, a_t) ← Q(s_t, a_t) + α [ r_{t+1} + γ Q(s_{t+1}, a_{t+1}) − Q(s_t, a_t) ]

The term in brackets is the TD error.
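A minimal sketch of the machinery on this slide: tabular SARSA with softmax action selection over max-normalized Q-values and a habituating reward. The toy environment (random transitions, treating action 0 as a gaze shift) is a placeholder assumption, not the Carlson & Triesch gaze-following setup.

```python
# Tabular SARSA with softmax action selection and a habituating reward.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 10, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, tau = 0.1, 0.9, 0.2   # learning rate, discount, temperature
R, h0, beta = 1.0, 1.0, 0.5         # base reward, initial habituation, habituation rate

def habituating_reward(t_fix):
    """Reward decays exponentially with time t_fix since fixation onset."""
    return R * h0 * np.exp(-beta * t_fix)

def softmax_action(s):
    """Softmax over max-normalized Q-values; tau trades exploration
    against exploitation."""
    q = Q[s]
    if q.max() > 0:
        q = q / q.max()             # normalization as on the slide
    p = np.exp(q / tau)
    return rng.choice(n_actions, p=p / p.sum())

s, a, t_fix = 0, softmax_action(0), 0
for step in range(10000):
    s2 = rng.integers(n_states)         # placeholder transition dynamics
    t_fix = 0 if a == 0 else t_fix + 1  # pretend action 0 is a gaze shift
    r = habituating_reward(t_fix)
    a2 = softmax_action(s2)
    # SARSA update: move Q(s, a) toward r + gamma * Q(s', a'); the
    # bracketed quantity is the TD error from the slide.
    Q[s, a] += alpha * (r + gamma * Q[s2, a2] - Q[s, a])
    s, a = s2, a2
```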


Simulation Results

• the basic setup is indeed sufficient for gaze following to emerge

• the model first learns to look at the CG, then learns gaze following

Caregiver Index (CGI): ratio of gaze shifts directed to the CG

Gaze Following Index (GFI): ratio of gaze shifts following the CG's line of regard (see the sketch below)

[Figure: CGI and GFI over learning time; error bars are standard deviations of 10 runs]
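For concreteness, here is one way the two indices could be computed from a log of gaze shifts; the log format and its keys are hypothetical, not the paper's data structure.

```python
# Sketch: computing CGI and GFI from a (hypothetical) gaze-shift log.
def gaze_indices(shifts):
    """shifts: list of dicts with boolean keys
    'to_cg' (shift landed on the caregiver) and
    'followed_cg' (shift landed where the caregiver was looking)."""
    n = len(shifts)
    cgi = sum(s["to_cg"] for s in shifts) / n        # Caregiver Index
    gfi = sum(s["followed_cg"] for s in shifts) / n  # Gaze Following Index
    return cgi, gfi

# usage
log = [{"to_cg": True,  "followed_cg": False},
       {"to_cg": False, "followed_cg": True},
       {"to_cg": False, "followed_cg": False}]
print(gaze_indices(log))
```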


Variation of Reward Structure

• no learning if the things that the CG looks at are not rewarding

• learning is poor if the CG itself is too rewarding (Williams syndrome?)

• no learning if the CG is aversive (autism?)

[Figure: time until GFI > 0.3 under different reward structures]


Scheduling Visual Routines

Sprague, Ballard, and Robinson (2005):

• a VR platform to study visual attention in complex behaviors where several goals have to be negotiated ("Walter")

• rewards are coupled to the successful completion of behaviors


Abstraction hierarchy:


Behaviors modeled as RL agents:


Maximum Q values and best actions for each behavior:

[Figure panels: obstacle avoidance, sidewalk following, litter pickup]


Growing uncertainty about state unless you look:

• eye gaze is controlled by the behavior that would suffer the biggest loss due to uncertain state information (see the sketch below)
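A minimal sketch of this arbitration scheme: each behavior keeps a belief over its task-relevant state, uncertainty grows while the behavior is not looking, and gaze goes to the behavior with the largest expected loss from acting on an uncertain state. The Gaussian belief, the sampled loss estimate, and all constants are illustrative assumptions, not Sprague et al.'s exact Kalman-filter formulation.

```python
# Uncertainty-driven gaze arbitration between competing behaviors.
import numpy as np

rng = np.random.default_rng(1)

class Behavior:
    def __init__(self, name, q_func, noise_growth):
        self.name = name
        self.q = q_func               # q(state, action) -> value
        self.mu, self.var = 0.0, 0.0  # Gaussian belief over a 1-D state
        self.noise_growth = noise_growth

    def expected_loss(self, actions, n_samples=200):
        """Expected value lost by acting on the mean state instead of the
        (unknown) true state, estimated by sampling from the belief."""
        a_star = max(actions, key=lambda a: self.q(self.mu, a))
        states = rng.normal(self.mu, np.sqrt(self.var), n_samples)
        best = np.mean([max(self.q(s, a) for a in actions) for s in states])
        chosen = np.mean([self.q(s, a_star) for s in states])
        return best - chosen

actions = [-1.0, 0.0, 1.0]
behaviors = [Behavior("obstacle avoidance", lambda s, a: -(s - a)**2, 0.3),
             Behavior("sidewalk following", lambda s, a: -(0.5*s - a)**2, 0.1)]

for step in range(5):
    # the behavior with the biggest expected loss gets to look this step
    looker = max(behaviors, key=lambda b: b.expected_loss(actions))
    for b in behaviors:
        if b is looker:
            b.var = 0.01              # looking collapses uncertainty
        else:
            b.var += b.noise_growth   # uncertainty grows without a look
    print(step, looker.name)
```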


• contexts are switched with a state machine (see the sketch below):
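A minimal sketch of such a context state machine; the context names, triggering events, and active-behavior sets are hypothetical stand-ins for Walter's actual contexts.

```python
# Context switching via a simple table-driven state machine.
TRANSITIONS = {
    ("on sidewalk", "reached crosswalk"): "crossing street",
    ("crossing street", "reached far side"): "on sidewalk",
    ("on sidewalk", "litter spotted"): "picking up litter",
    ("picking up litter", "litter collected"): "on sidewalk",
}

# which behaviors compete for gaze in each context (hypothetical sets)
ACTIVE_BEHAVIORS = {
    "on sidewalk": ["sidewalk following", "obstacle avoidance", "litter pickup"],
    "crossing street": ["obstacle avoidance"],
    "picking up litter": ["litter pickup", "obstacle avoidance"],
}

def step_context(context, event):
    """Return the next context, staying put if the event is irrelevant."""
    return TRANSITIONS.get((context, event), context)

# usage
ctx = "on sidewalk"
for ev in ["litter spotted", "litter collected", "reached crosswalk"]:
    ctx = step_context(ctx, ev)
    print(ev, "->", ctx, ACTIVE_BEHAVIORS[ctx])
```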


• comparing Walter to human subjects in the same task: how often does each behavior control gaze in the "on sidewalk" context?


• comparing Walter to human subjects in the same task: which behavior controls eye gaze across different contexts?


Modulation of V4 activity

Motter (1994)


Model

Hamker (2004)


Feedback from higher levels exerts input gain control (see the sketch below):
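A minimal sketch of the idea: top-down feedback multiplicatively amplifies feedforward inputs that match the attended feature, followed by divisive normalization. The specific gain equation and constants are illustrative assumptions, not Hamker's exact dynamics.

```python
# Feature-based input gain control with divisive normalization.
import numpy as np

def gain_modulated_response(inputs, feedback, gain=1.5):
    """inputs: feedforward drive per feature channel (1-D array);
    feedback: top-down weight per channel (0..1, peaking at the attended
    feature). Each input is scaled by (1 + gain * feedback), then the
    responses are divisively normalized across channels."""
    modulated = inputs * (1.0 + gain * feedback)
    return modulated / modulated.sum()

# usage: attending "red" boosts the red channel's share of the response
inputs = np.array([0.5, 0.5])        # equal drive from red and green
attend_red = np.array([1.0, 0.0])
print(gain_modulated_response(inputs, attend_red))  # red channel dominates
```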


a. switching attention from red to green

b. spatial effects due to feedback from premotor areas


Model vs. Experiment:

[Figure panels: model predictions alongside experimental data]


Detection of stimuli and V1 activity

Supèr, Spekreijse, and Lamme (2001):

• monkey's task: detect a texture-defined region and saccade to it

• record from orientation-selective cells in V1

• how is the cell's response correlated with the monkey's percept?


• enhancement of the late (80-100 ms) response occurs only if the target is actually detected by the monkey

[Figure: V1 responses on "seen" vs. "not seen" trials]