
Reading Between the Lines: Object Localization Using Implicit Cues from Image Tags

Date post: 17-Jan-2016
Sung Ju Hwang and Kristen Grauman University of Texas at Austin
Page 1: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Sung Ju Hwang and Kristen Grauman, University of Texas at Austin

Page 2: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Images tagged with keywords clearly tell us which objects to search for

Detecting tagged objects

Dog, Black lab, Jasper, Sofa, Self, Living room, Fedora, Explore #24

Page 3: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Duygulu et al. 2002

Detecting tagged objects

Previous work using tagged images focuses on the noun ↔ object correspondence.

Fergus et al. 2005

Li et al. 2009

Berg et al. 2004

[Lavrenko et al. 2003, Monay & Gatica-Perez 2003, Barnard et al. 2004, Schroff et al. 2007, Gupta & Davis 2008, Vijayanarasimhan & Grauman 2008, …]

Page 4: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Mug, Key, Keyboard, Toothbrush, Pen, Photo, Post-it

Computer, Poster, Desk, Bookshelf, Screen, Keyboard, Screen, Mug, Poster

? ?

Based on tags alone, can you guess where and what size the mug will be in each image?

Our Idea

The list of human-provided tags gives useful cues beyond just which objects are present.

Page 5: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Computer, Poster, Desk, Bookshelf, Screen, Keyboard, Screen, Mug, Poster

Mug, Key, Keyboard, Toothbrush, Pen, Photo, Post-it

Our Idea

Absence of larger objects: Mug is named first

Presence of larger objects: Mug is named later

The list of human-provided tags gives useful cues beyond just which objects are present.

Page 6: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Our Idea

We propose to learn the implicit localization cues provided by tag lists to improve object detection.

Page 7: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Woman, Table, Mug, Ladder

Approach overview

Mug, Key, Keyboard, Toothbrush, Pen, Photo, Post-it

Object detector

Implicit tag features

Computer, Poster, Desk, Screen, Mug, Poster

Training: Learn object-specific connection between localization parameters and implicit tag features.

Mug, Eiffel

Desk, Mug, Office

Mug, Coffee

Testing: Given novel image, localize objects based on both tags and appearance.

P (location, scale | tags)

Implicit tag features


Page 9: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Feature: Word presence/absence

Mug, Key, Keyboard, Toothbrush, Pen, Photo, Post-it

Computer, Poster, Desk, Bookshelf, Screen, Keyboard, Screen, Mug, Poster

Presence or absence of other objects affects the scene layout → record bag-of-words frequency.

W(im) = [w_1, …, w_N], where w_i = count of the i-th word.

          Mug  Pen  Post-it  Toothbrush  Key  Photo  Computer  Screen  Keyboard  Desk  Bookshelf  Poster
W(im1)     1    1     1          1        1     1       0         0        1       0       0         0
W(im2)     1    0     0          0        0     0       1         2        1       1       1         1
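As a concrete sketch, the bag-of-words feature above takes only a few lines. The vocabulary and tag lists mirror the slide's two example images; the variable and function names are our own, not the authors':

```python
from collections import Counter

# Hypothetical vocabulary and tag lists mirroring the slide's two example images.
vocab = ["Mug", "Pen", "Post-it", "Toothbrush", "Key", "Photo",
         "Computer", "Screen", "Keyboard", "Desk", "Bookshelf", "Poster"]
im1_tags = ["Mug", "Key", "Keyboard", "Toothbrush", "Pen", "Photo", "Post-it"]
im2_tags = ["Computer", "Poster", "Desk", "Bookshelf", "Screen", "Keyboard",
            "Screen", "Mug", "Poster"]

def word_presence_feature(tags, vocabulary):
    """W(im): count of each vocabulary word in the image's tag list."""
    counts = Counter(tags)
    return [counts[w] for w in vocabulary]

W_im1 = word_presence_feature(im1_tags, vocab)
# im1 mentions only small desk objects, so the large-object entries stay 0:
# → [1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0]
```

Duplicate tags contribute counts greater than 1, as with "Screen" in the second image.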

Page 10: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Feature: Word presence/absence

Mug, Key, Keyboard, Toothbrush, Pen, Photo, Post-it

Computer, Poster, Desk, Bookshelf, Screen, Keyboard, Screen, Mug, Poster

Presence or absence of other objects affects the scene layout → record bag-of-words frequency.

Small objects mentioned (im1); large objects mentioned (im2).

W(im) = [w_1, …, w_N], where w_i = count of the i-th word.

          Mug  Pen  Post-it  Toothbrush  Key  Photo  Computer  Screen  Keyboard  Desk  Bookshelf  Poster
W(im1)     1    1     1          1        1     1       0         0        1       0       0         0
W(im2)     1    0     0          0        0     0       1         2        1       1       1         1

Page 11: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Feature: Rank of tags

People tag the "important" objects earlier → record the rank of each tag compared to its typical rank.

R(im) = [r_1, …, r_N], where r_i = percentile rank of the i-th word.

Mug, Key, Keyboard, Toothbrush, Pen, Photo, Post-it

Computer, Poster, Desk, Bookshelf, Screen, Keyboard, Screen, Mug, Poster

          Mug  Computer  Screen  Keyboard  Desk  Bookshelf  Poster  Photo  Pen   Post-it  Toothbrush  Key
R(im1)   0.80     0        0       0.51     0        0        0      0.28  0.72   0.82        0       0.90
R(im2)   0.23   0.62     0.21      0.13   0.48     0.61     0.41      0     0       0          0        0
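One way to realize this feature is to compare a word's rank in the current tag list against the ranks it received across training images. This is an illustrative sketch with our own names; `training_ranks` is a hypothetical word → list-of-ranks table, and the authors' exact percentile computation is not reproduced here:

```python
def rank_feature(tags, vocabulary, training_ranks):
    """R(im): per-word percentile comparing this image's tag rank against
    the ranks the word received in training images (0 when absent)."""
    feature = []
    for word in vocabulary:
        if word not in tags:
            feature.append(0.0)
            continue
        rank = tags.index(word) + 1  # 1-based position in this tag list
        history = training_ranks.get(word, [rank])
        # High value = the word was tagged earlier than is typical for it.
        feature.append(sum(1 for h in history if h >= rank) / len(history))
    return feature

# Toy example: "Mug" is usually tagged 4th-8th; here it is tagged first.
training_ranks = {"Mug": [4, 5, 6, 7, 8]}
R = rank_feature(["Mug", "Key"], ["Mug", "Desk"], training_ranks)
# → [1.0, 0.0]: Mug ranks earlier than all its training ranks; Desk is absent.
```

A mug tagged unusually early is evidence that it is prominent, i.e. large and central, in this image.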

Page 12: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Feature: Rank of tags

People tag the "important" objects earlier → record the rank of each tag compared to its typical rank.

R(im) = [r_1, …, r_N], where r_i = percentile rank of the i-th word.

Mug, Key, Keyboard, Toothbrush, Pen, Photo, Post-it

Computer, Poster, Desk, Bookshelf, Screen, Keyboard, Screen, Mug, Poster

Relatively high rank

          Mug  Computer  Screen  Keyboard  Desk  Bookshelf  Poster  Photo  Pen   Post-it  Toothbrush  Key
R(im1)   0.80     0        0       0.51     0        0        0      0.28  0.72   0.82        0       0.90
R(im2)   0.23   0.62     0.21      0.13   0.48     0.61     0.41      0     0       0          0        0

Page 13: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Feature: Proximity of tags

People tend to move their eyes to nearby objects after the first fixation → record the proximity of all tag pairs.

P(im) = {p_ij}, where p_ij is derived from the rank difference between tags i and j.

1) Mug  2) Key  3) Keyboard  4) Toothbrush  5) Pen  6) Photo  7) Post-it

1) Computer  2) Poster  3) Desk  4) Bookshelf  5) Screen  6) Keyboard  7) Screen  8) Mug  9) Poster


im1:        Mug  Screen  Keyboard  Desk  Bookshelf
Mug          1     0       0.5      0       0
Screen             0        0       0       0
Keyboard                    1       0       0
Desk                                0       0
Bookshelf                                   0

im2:        Mug  Screen  Keyboard  Desk  Bookshelf
Mug          1     1       0.5     0.2     0.25
Screen             1        1      0.33    0.5
Keyboard                    1      0.33    0.5
Desk                                1       1
Bookshelf                                   1
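One plausible reading of the tables above is reciprocal rank distance: 1 on the diagonal, 0 when either word is absent, and 1/|rank difference| otherwise. The sketch below implements that reading with our own names; it is an interpretation of the slide, not necessarily the paper's exact definition:

```python
def proximity_feature(tags, vocabulary):
    """P(im): pairwise proximity scores from rank differences in the tag list
    (reciprocal rank distance; 1 on the diagonal; 0 when a word is absent)."""
    ranks = {}
    for pos, tag in enumerate(tags, start=1):
        ranks.setdefault(tag, []).append(pos)  # a tag can appear twice
    n = len(vocabulary)
    P = [[0.0] * n for _ in range(n)]
    for i, wi in enumerate(vocabulary):
        for j, wj in enumerate(vocabulary):
            if wi not in ranks or wj not in ranks:
                continue
            if i == j:
                P[i][j] = 1.0
            else:
                # Use the nearest pair of mentions for repeated tags.
                d = min(abs(a - b) for a in ranks[wi] for b in ranks[wj])
                P[i][j] = 1.0 / d if d else 1.0
    return P

pair_vocab = ["Mug", "Screen", "Keyboard", "Desk", "Bookshelf"]
im1 = ["Mug", "Key", "Keyboard", "Toothbrush", "Pen", "Photo", "Post-it"]
P1 = proximity_feature(im1, pair_vocab)
# Mug (rank 1) and Keyboard (rank 3) are two apart: P1[0][2] == 0.5
```

High proximity between two tags suggests the objects may be close to each other in the image.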

Page 14: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Feature: Proximity of tags

People tend to move their eyes to nearby objects after the first fixation → record the proximity of all tag pairs.

P(im) = {p_ij}, where p_ij is derived from the rank difference between tags i and j.

1) Mug  2) Key  3) Keyboard  4) Toothbrush  5) Pen  6) Photo  7) Post-it

1) Computer  2) Poster  3) Desk  4) Bookshelf  5) Screen  6) Keyboard  7) Screen  8) Mug  9) Poster


im1:        Mug  Screen  Keyboard  Desk  Bookshelf
Mug          1     0       0.5      0       0
Screen             0        0       0       0
Keyboard                    1       0       0
Desk                                0       0
Bookshelf                                   0

im2:        Mug  Screen  Keyboard  Desk  Bookshelf
Mug          1     1       0.5     0.2     0.25
Screen             1        1      0.33    0.5
Keyboard                    1      0.33    0.5
Desk                                1       1
Bookshelf                                   1

May be close to each other

Page 15: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Woman, Table, Mug, Ladder

Approach overview

Mug, Key, Keyboard, Toothbrush, Pen, Photo, Post-it

Object detector

Implicit tag features

Computer, Poster, Desk, Screen, Mug, Poster

Mug, Eiffel

Desk, Mug, Office

Mug, Coffee

P (location, scale | W,R,P)

Implicit tag features

Training:

Testing:

Page 16: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Modeling P(X|T)

We need the PDF for the location and scale of the target object, given the tag feature:

P(X = (scale, x, y) | T = tag feature)

We model it directly using a mixture density network (MDN) [Bishop, 1994]:

Input tag feature (Words, Rank, or Proximity) → neural network → mixture model with parameters (α, µ, Σ) per component.
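The MDN head can be sketched in a few lines: a small network maps the tag feature to mixing weights, means, and spreads of a Gaussian mixture over X = (scale, x, y). The layer sizes, random weights, and spherical-covariance choice here are illustrative assumptions, not the authors' trained architecture:

```python
import numpy as np

def mdn_forward(tag_feature, W1, b1, W2, b2, k=3, d=3):
    """Map a tag feature to k-component Gaussian mixture parameters over
    X = (scale, x, y). Illustrative one-hidden-layer sketch."""
    h = np.tanh(W1 @ tag_feature + b1)            # hidden activations
    z = W2 @ h + b2                               # raw mixture parameters
    alpha = np.exp(z[:k]) / np.exp(z[:k]).sum()   # mixing weights (softmax)
    mu = z[k:k + k * d].reshape(k, d)             # component means
    sigma = np.exp(z[k + k * d:])                 # positive spreads
    return alpha, mu, sigma

rng = np.random.default_rng(0)
m, hidden = 12, 8                                 # tag-feature dim, hidden units (assumed)
out = 3 + 3 * 3 + 3                               # k alphas + k*d means + k sigmas
W1 = rng.normal(size=(hidden, m)); b1 = np.zeros(hidden)
W2 = rng.normal(size=(out, hidden)); b2 = np.zeros(out)
alpha, mu, sigma = mdn_forward(rng.normal(size=m), W1, b1, W2, b2)
```

In training, the weights would be fit by maximizing the mixture's likelihood of the ground-truth (scale, x, y) annotations; here they are random only to show the shapes.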

Page 17: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Lamp

Car, Wheel, Wheel, Light

Window, House, House

Car, Car, Road, House, Lightpole

Car, Windows, Building, Man, Barrel, Car, Truck, Car

Boulder

Car

Modeling P(X|T)

Example: Top 30 most likely localization parameters sampled for the object “car”, given only the tags.



Page 20: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Integrating with object detector

How to exploit this learned distribution P(X|T)? 1) Use it to speed up the detection process (location priming).

Page 21: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Integrating with object detector

How to exploit this learned distribution P(X|T)? 1) Use it to speed up the detection process (location priming).

(a) Sort all candidate windows according to P(X|T).

Most likely → Less likely → Least likely

(b) Run detector only at the most probable locations and scales.
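Steps (a) and (b) can be sketched as follows. `tag_prior` and `detector` are stand-in callables (not the paper's actual models), and the 30% budget matches the search fraction reported later in the deck:

```python
def primed_detection(windows, tag_prior, detector, budget=0.3):
    """Location priming: score every candidate window under the learned prior
    P(X|T), then run the expensive appearance detector only on the top
    `budget` fraction of windows."""
    ranked = sorted(windows, key=tag_prior, reverse=True)  # (a) most likely first
    n_keep = max(1, int(budget * len(ranked)))
    return [(w, detector(w)) for w in ranked[:n_keep]]     # (b) detect on top slice

# Toy run: windows are (x, y, scale); this made-up prior favors large scales.
windows = [(0, 0, s) for s in (1, 2, 3, 4, 5, 6, 7, 8, 9, 10)]
hits = primed_detection(windows, tag_prior=lambda w: w[2],
                        detector=lambda w: 0.9 if w[2] == 10 else 0.1)
# Only 3 of the 10 windows are scanned, and the most probable one comes first.
```

The detector itself is unchanged; the prior only decides which windows it ever sees, which is where the speedup comes from.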

Page 22: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Integrating with object detector

How to exploit this learned distribution P(X|T)? 1) Use it to speed up the detection process (location priming). 2) Use it to increase detection accuracy (modulate the detector output scores).

Predictions from object detector: 0.7, 0.8, 0.9

Predictions based on tag features

0.3, 0.2, 0.9

Page 23: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Integrating with object detector

How to exploit this learned distribution P(X|T)? 1) Use it to speed up the detection process (location priming). 2) Use it to increase detection accuracy (modulate the detector output scores).

Modulated scores: 0.63, 0.24, 0.18
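Pairing the scores per window, a simple product reproduces the modulated values shown on the slides (0.9 × 0.7 = 0.63, and so on). The product is one plausible combination rule for illustration; the paper's exact rule may differ:

```python
def modulate(detector_scores, tag_scores):
    """Combine per-window appearance scores with tag-based location scores
    via an elementwise product (illustrative combination rule)."""
    return [round(d * t, 2) for d, t in zip(detector_scores, tag_scores)]

# Detector scores 0.7/0.8/0.9 paired per window with tag-based scores
# 0.9/0.3/0.2 reproduce the slide's modulated values.
modulate([0.7, 0.8, 0.9], [0.9, 0.3, 0.2])  # → [0.63, 0.24, 0.18]
```

Note the effect: the highest raw detection (0.9) is demoted because the tags say that location and scale are implausible, while the 0.7 detection in a tag-plausible window now ranks first.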

Page 24: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Experiments: Datasets

LabelMe: street and office scenes; ordered tag lists via the labels added; 5 classes; 56 unique taggers; 23 tags/image; Dalal & Triggs' HOG detector.

PASCAL VOC 2007: Flickr images; tag lists obtained on Mechanical Turk; 20 classes; 758 unique taggers; 5.5 tags/image; Felzenszwalb et al.'s LSVM detector.

Page 25: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Experiments

We evaluate: detection speed and detection accuracy.

We compare: the raw detector (HOG, LSVM) vs. the raw detector + our tag features.

We also show the results when using Gist [Torralba 2003] as context, for reference.

Page 26: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

We search fewer windows to achieve the same detection rate.

We know which detection hypotheses to trust most.

PASCAL: Performance evaluation

[Precision-recall curves over all 20 PASCAL classes: LSVM (AP=33.69), LSVM+Tags (AP=36.79), LSVM+Gist (AP=36.28)]

A naïve sliding window search must scan 70% of windows; we search only 30%.

Page 27: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

PASCAL: Accuracy vs Gist per class


Page 29: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Lamp, Person, Bottle, Dog, Sofa, Painting, Table

Person, Table, Chair, Mirror, Tablecloth, Bowl, Bottle, Shelf, Painting, Food

Bottle

Car, License Plate, Building

Car

LSVM+Tags (Ours)

LSVM alone

PASCAL: Example detections

Car, Door, Door, Gear, Steering Wheel, Seat, Seat, Person, Person, Camera

Page 30: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Dog, Floor, Hairclip

Dog, Dog, Dog, Person, Person, Ground, Bench, Scarf

Person, Microphone, Light

Horse, Person, Tree, House, Building, Ground, Hurdle, Fence

PASCAL: Example detections

Dog

Person

LSVM+Tags (Ours)

LSVM alone

Page 31: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Aeroplane, Sky, Building, Shadow

Person, Person, Pole, Building, Sidewalk, Grass, Road

Dog, Clothes, Rope, Rope, Plant, Ground, Shadow, String, Wall

Bottle, Glass, Wine, Table

PASCAL: Example failure cases

LSVM+Tags (Ours)

LSVM alone

Page 32: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Results: Observations

Often our implicit features predict scale well for indoor objects and position well for outdoor objects.

Gist is usually better for y position, while our tags are generally stronger for scale.

The features must be learned from a variety of examples showing the target objects in different contexts.

Visual and tag context are complementary.

Page 33: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Summary

We want to learn what is implied (beyond objects present) by how a human provides tags for an image.

Approach translates existing insights about human viewing behavior (attention, importance, gaze, etc.) into enhanced object detection.

Novel tag cues enable effective localization prior.

Significant gains with state-of-the-art detectors and two datasets.

Page 34: Reading Between the Lines:  Object Localization Using Implicit Cues from Image Tags

Joint multi-object detection

From tags to natural language sentences

Image retrieval applications

Future work


