

TRECVID Evaluations

Mei-Chen Yeh
05/25/2010

Introduction

• Text REtrieval Conference (TREC)
  – Organized by the National Institute of Standards and Technology (NIST)
  – Support from interested government agencies
  – Annual evaluation (NOT a competition)
  – Different “tracks” over the years, e.g. web retrieval, email spam filtering, question answering, routing, spoken documents, OCR, and video (a standalone conference since 2001)

• TREC Video Retrieval Evaluation (TRECVID)

Introduction

• Objectives of TRECVID
  – Promote progress in content-based analysis and retrieval from digital videos
  – Provide open, metrics-based evaluation
  – Model real-world situations

Introduction

• Evaluation is driven by participants
• The collection is fixed and available in the spring
  – 50% of the data is used for development, 50% for testing
• Test queries available in July, with one month until submission
• More details: http://www-nlpir.nist.gov/projects/trecvid

TRECVID Collections

• Test data
  – Broadcast news
  – TV programs
  – Surveillance videos
  – Video rushes provided by the BBC (before 2006)
  – Documentary and educational materials supplied by the Netherlands Institute for Sound and Vision (2007-2009)
  – The Gatwick airport surveillance videos provided by the UK Home Office (2009)
  – Web videos (2010)

• Languages
  – English
  – Arabic
  – Chinese

Collection History

Tasks

• Semantic indexing (SIN)
• Known-item search (KIS)
• Content-based copy detection (CCD)
• Surveillance event detection (SED)
• Instance search (INS)
• Event detection in Internet multimedia (MED)

Semantic indexing

• System task:
  – Given the test collection, master shot reference, and feature definitions, return for each feature a list of at most 2000 shot IDs from the test collection, ranked according to the likelihood that the feature is present (a minimal sketch of this output format follows below)
• 130 features (2010)
  – Full list
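The run format can be made concrete with a small sketch. Only the 2000-shot cap comes from the task description; the feature names, shot IDs, and scores below are invented for illustration.

# Minimal sketch of assembling one semantic-indexing run, assuming a detector
# has already produced a confidence score per shot for each feature.
# Feature names, shot IDs, and scores are hypothetical.
MAX_SHOTS = 2000  # each feature's result list is capped at 2000 shot IDs

def rank_shots(scores, limit=MAX_SHOTS):
    """scores: {shot_id: confidence that the feature is present}."""
    return sorted(scores, key=scores.get, reverse=True)[:limit]

detector_scores = {
    "animal":  {"shot1_23": 0.91, "shot4_07": 0.40, "shot2_11": 0.88},
    "singing": {"shot3_02": 0.15, "shot1_23": 0.72},
}
run = {feature: rank_shots(scores) for feature, scores in detector_scores.items()}
print(run["animal"])  # ['shot1_23', 'shot2_11', 'shot4_07']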

Examples (1)

• Shots depicting an animal (no humans)
• Pictures of flowers
• Exterior shots of a shopping mall
• Images of people who appear to be scientists
• One or more male children
• Teenagers
• Shots depicting a crowd
• …

Examples (2)

• One or more people running
• One or more people walking
• One or more people swimming
• One or more people singing
• One or more people playing basketball
• A person throwing some object
• A person riding a bicycle
• Putting food or drink in his/her mouth
• …

Known-item search

• Models the situation in which someone knows of a video, has seen it before, believes it is contained in a collection, but doesn't know where to look.

• Inputs
  – A text-only description of the video desired
  – A test collection of videos

• Outputs
  – Top-ranked videos (automatic or interactive mode; a toy sketch of the automatic mode follows below)
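As a rough illustration of the automatic mode, the sketch below assumes each test video has some associated text (e.g., metadata or an ASR transcript) and ranks videos against the text-only query with TF-IDF and cosine similarity. The video IDs and texts are invented; this is not an official baseline.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical text associated with each test video.
videos = {
    "vid_001": "a man complains that it just keeps raining all week",
    "vid_002": "two roommates argue about the apartment cleaning schedule",
    "vid_003": "a boy in a red cap throws a baseball at a wall repeatedly",
}
query = "the video with the guy talking about how it just keeps raining"

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(videos.values())
query_vec = vectorizer.transform([query])

# Rank videos by cosine similarity to the text-only query.
scores = cosine_similarity(query_vec, doc_matrix).ravel()
ranking = sorted(zip(videos, scores), key=lambda x: x[1], reverse=True)
print(ranking[0][0])  # expected: vid_001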

Examples

• Find the video with the guy talking about how it just keeps raining.

• Find the video about some guys in their apartment talking about some cleaning schedule.

• Find the video with the boy in a red cap throwing a baseball over and over again.

• Find the video with the guy in a yellow T-shirt with the big letter M on it.

• …

http://www-nlpir.nist.gov/projects/tv2010/ki.examples.html

Content-based copy detection

Surveillance event detection

• Detects human behaviors in vast amounts of surveillance video, in real time!
• For public safety and security
• Event examples (a sketch of one detection record follows below)
  – Person runs
  – Cell to ear
  – Object put
  – People meet
  – Embrace
  – Pointing
  – …
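For concreteness, one way a single detected event could be represented is sketched here. The field names are assumptions for illustration, not the official SED submission format; the event names follow the list above.

from dataclasses import dataclass

@dataclass
class EventObservation:
    event: str          # e.g. "PersonRuns", "CellToEar", "ObjectPut"
    start_frame: int    # first frame of the observation
    end_frame: int      # last frame of the observation
    score: float        # detector confidence in [0, 1]
    decision: bool      # the system's yes/no alert for real-time use

obs = EventObservation("PersonRuns", start_frame=1200, end_frame=1275,
                       score=0.83, decision=True)
print(obs)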

Instance search

• Finds video segments of a specific person, object, or place, given a visual example.
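A toy query-by-example sketch, purely for illustration: it ranks candidate shots by grey-level histogram similarity to the visual example. Real instance-search systems use far stronger features (e.g., local descriptors); all data below is synthetic.

import numpy as np

rng = np.random.default_rng(0)

def grey_histogram(image, bins=16):
    """Normalized grey-level histogram of an image array."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256), density=True)
    return hist

query = rng.integers(0, 256, size=(64, 64))   # the visual example (synthetic image)
shots = {f"shot_{i}": rng.integers(0, 256, size=(64, 64)) for i in range(5)}

q_hist = grey_histogram(query)
# Rank candidate shots by histogram intersection with the query example.
scores = {sid: float(np.minimum(q_hist, grey_histogram(img)).sum())
          for sid, img in shots.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)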

Event detection in Internet multimedia

• System task
  – Given a collection of test videos and a list of test events, indicate whether each of the test events is present (and the strength of evidence) in each of the test videos (a minimal sketch of this output follows below)
• In 2010 this task will be treated as exploratory!
  – Emphasis on supporting initial exploration of the new video collection, task definition, evaluation framework, and a variety of technical approaches to the system task
  – Not much information available so far
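The expected output shape can still be sketched: one evidence score and a presence decision per (event, video) pair. The event names, video IDs, threshold, and random scores below are all stand-ins for illustration.

import random

events = ["making_a_cake", "batting_a_run"]   # hypothetical test events
videos = ["HVC001", "HVC002", "HVC003"]       # hypothetical test video IDs
THRESHOLD = 0.5                               # stand-in decision threshold

random.seed(0)
results = {}
for event in events:
    for vid in videos:
        score = random.random()               # stand-in for a real event detector
        results[(event, vid)] = {"score": score, "present": score >= THRESHOLD}

for (event, vid), r in results.items():
    print(f"{event:15s} {vid}  score={r['score']:.2f}  present={r['present']}")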

Call for partners

• Schedule (July-September)

Conclusions

• Standardized evaluations and comparisons
• Test on large collections
• Failures are not embarrassing, and can be presented at the TRECVID workshop!
• Anyone can participate!
  – A “priceless” resource for researchers