
Reconstructing the World* in Six Days
*(As Captured by the Yahoo 100 Million Image Dataset)

Jared Heinly, Johannes L. Schönberger, Enrique Dunn, Jan-Michael Frahm

This material is based upon work supported by the National Science Foundation under Grant No. IIS-1252921, No. IIS-1349074, and No. CNS-1405847 as well as by the US Army Research, Development and Engineering Command Grant No. W911NF-14-1-0438.

Yahoo® Flickr® Dataset

100 Million Images, 14 TB, 640x480 Resolution

Results

Berlin, Germany (2.7M images)

                          Frahm et al., 2010    Ours
Registered                8.7%                  26%
Reconstructed             1.1%                  4.6%
Data Association Time*    13.3 Hours            7.9 Hours

*Equivalent Hardware Configuration

1.5 Million Images Registered, 105 Hours

[Plot] Effect of Matching to k Neighbors (Discard Rate = 200K): Number of Registered Images vs. k, compared against the Baseline

[Plot] Effect of Discard Rate (Match to k = 2 Neighbors): Number of Registered Images vs. Discard Rate, compared against the Baseline and 30K Rare Connections

Berlin Cathedral, 26K Cameras

Trafalgar Square, 2.4K Cameras

Notre Dame, 126K Cameras

Streaming Connected Component Discovery

100M Images

For Each Streamed Image:

• Retrieve k nearest neighbors using a bag-of-words representation

• Attempt registration to the set of k nearest neighbors

If No Successful Registration:

• Create a new single-image cluster in the database

If 1 Successful Registration:

• Add the image to the matching cluster

If 2+ Successful Registrations:

• Add the image to the best-matching cluster

• Link clusters into a connected component

• Avoid matching streamed images to the same connected component twice

If 2 Clusters Are Linked into a Component:

• Attempt direct registration between the clusters

• If successful, merge the clusters into a single representation (the full streaming loop is sketched in code below)
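The streaming loop above can be summarized in code. The following is a minimal sketch only: retrieve_knn, try_register, and score stand in for bag-of-words retrieval, geometric verification, and a registration quality measure, and are assumptions rather than the authors' actual functions.

from dataclasses import dataclass, field

@dataclass
class Cluster:
    id: int
    iconic: object                     # the cluster's first image serves as its iconic image
    images: list = field(default_factory=list)

class Components:
    """Union-find over cluster ids; links clusters into connected components."""
    def __init__(self):
        self.parent = {}

    def find(self, c):
        self.parent.setdefault(c, c)
        while self.parent[c] != c:
            self.parent[c] = self.parent[self.parent[c]]   # path halving
            c = self.parent[c]
        return c

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def stream(images, k, retrieve_knn, try_register, score):
    clusters, components = [], Components()
    for image in images:                                    # each image is processed once
        candidates, seen = [], set()
        for cluster in retrieve_knn(image, clusters, k):    # bag-of-words k-NN retrieval
            comp = components.find(cluster.id)
            if comp in seen:                                # same component tried at most once
                continue
            seen.add(comp)
            if try_register(image, cluster.iconic):         # geometric verification
                candidates.append(cluster)
        if not candidates:                                  # no registration: new single-image cluster
            clusters.append(Cluster(id=len(clusters), iconic=image))
        else:
            best = max(candidates, key=lambda c: score(image, c))
            best.images.append(image)                       # join the best-matching cluster
            for other in candidates:                        # link matched clusters into one component
                if other is not best:
                    components.union(best.id, other.id)
                    if try_register(best.iconic, other.iconic):   # direct cluster-to-cluster registration
                        best.images.extend([other.iconic] + other.images)
                        other.images.clear()                # merged cluster now lives in `best`
    return clusters, components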

Motivation

• We push 3D modeling from city-scale (~1M images) to world-scale datasets (~100M images)

• Data association is the biggest challenge at this scale

3D Modeling Pipeline

Data Association → Sparse Modeling → Dense Modeling

Streaming Paradigm

• Tackle robustness, scalability, and completeness of data association

• Read images sequentially from disk

• Read each image only once

• Keep images in memory only as long as necessary
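A minimal sketch of this read-once pattern, assuming a plain list of image file paths and a placeholder process() step:

def image_stream(paths):
    """Yield images sequentially from disk; each file is read exactly once."""
    for path in paths:
        with open(path, "rb") as f:
            yield f.read()                  # raw bytes; decoding/feature extraction omitted here

def run(paths, process):
    for image in image_stream(paths):       # the image lives only for this loop iteration
        process(image)                      # afterwards the buffer can be reclaimed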

Cluster Discarding

• Some clusters are less important than others

• Discard clusters from memory that do not grow in size fast enough

• Discarding enables scalability to world-scale datasets
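One possible discard policy, sketched here as an illustration; the growth criterion and the check interval tied to the discard rate are assumptions, not the exact rule from the poster.

def discard_stale_clusters(clusters, images_streamed, discard_rate):
    """Every `discard_rate` streamed images, keep only clusters that grew
    since the previous check and drop the rest from memory (illustrative)."""
    if images_streamed % discard_rate != 0:
        return clusters
    kept = []
    for cluster in clusters:
        size = 1 + len(cluster.images)                  # iconic image plus member images
        if size > getattr(cluster, "last_size", 1):     # did the cluster grow fast enough?
            cluster.last_size = size                    # reset the growth baseline
            kept.append(cluster)
    return kept                                         # discarded clusters free their memory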


Cluster Representation

[Diagram] Iconic Image → Bag of Visual Words; Cluster Images → Registered Visual Words

• Use cluster images to create an adaptive cluster representation (sketched below)
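A small sketch of how such an adaptive representation might be maintained; the update rule (augmenting the iconic image's bag of visual words with the words registered from cluster images) is an assumed reading of the diagram above.

def adaptive_representation(iconic_words, registered_words_per_image):
    """Combine the iconic image's bag of visual words with the visual words
    confirmed by images registered to the cluster (illustrative rule)."""
    representation = set(iconic_words)
    for words in registered_words_per_image:
        representation |= set(words)        # each registered image contributes its words
    return representation

# Hypothetical visual-word ids:
# adaptive_representation([1632, 8249], [[7405, 7189], [94, 3832]])
# returns {1632, 8249, 7405, 7189, 94, 3832}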