Transcript
Page 1: cs-people.bu.edu/fcakir/eccv2018/hbmp-poster.pdf

European Conference on Computer Vision (ECCV) 2018

Hashing with Binary Matrix Pursuit
Fatih Cakir†, Kun He‡, Stan Sclaroff‡

†FirstFuel, [email protected]
‡Computer Science, Boston University, {hekun,sclaroff}@cs.bu.edu

Summary

Main contribution: technical and empirical improvements for two-stage hashing methods.

Two-stage hashing breaks the problem into two stages:

i. Binary embedding / code inference (an affinity matching task)

[Diagram: binary codes b in Hamming space are fit to the affinity matrix]

ii. Hash function learning (binary classification tasks)

[Diagram: a classifier is trained to map inputs to the target binary code]
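
To make the two stages concrete, here is a minimal sketch of the pipeline, assuming a generic `infer_codes` routine for stage (i) and one logistic-regression classifier per bit for stage (ii); all function names are illustrative, not the poster's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def two_stage_hashing(X, A, num_bits, infer_codes):
    """Illustrative two-stage hashing pipeline (names are hypothetical).

    Stage i:  infer target binary codes B that match the affinity matrix A.
    Stage ii: learn one binary classifier per bit to predict B from features X.
    """
    # Stage i: binary code inference (affinity matching task).
    B = infer_codes(A, num_bits)              # shape (n_samples, num_bits), entries in {-1, +1}

    # Stage ii: hash function learning as num_bits binary classification tasks.
    hash_functions = []
    for t in range(num_bits):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X, (B[:, t] > 0).astype(int))  # predict bit t from the input features
        hash_functions.append(clf)
    return hash_functions

def hash_new_points(X_new, hash_functions):
    """Apply the learned per-bit hash functions to map new points to 0/1 codes."""
    bits = [clf.predict(X_new) for clf in hash_functions]
    return np.stack(bits, axis=1)
```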

Hashing aims to learn binary embeddings while preserving the “neighborhood structure” of the data, mapping the input space into Hamming space.

Neighborhood structure is generally defined via an affinity matrix.
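
As one minimal example of such an affinity matrix (assuming label supervision; this is a common choice, not necessarily the construction the poster proposes), affinities can encode label agreement:

```python
import numpy as np

def label_affinity(labels):
    """Affinity matrix from class labels: +1 for same-class pairs, -1 otherwise."""
    labels = np.asarray(labels)
    same = (labels[:, None] == labels[None, :])
    return np.where(same, 1.0, -1.0)
```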

We propose:

1. Theoretical properties for the binary code inference stage.
2. How to construct the affinity matrix.
3. Given the insights from (1)-(2), the novel hashing method “Hashing with Binary Matrix Pursuit”, which achieves state-of-the-art results on retrieval benchmarks.

Formulation

Ordinary Hamming distances are unable to reconstruct certain affinity matrices.

With the projected gradient method, the norm of the residual matrix monotonically decreases.

If, at each iteration, the selected direction is not orthogonal to the current residual, then the residual norm strictly decreases.
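
As a rough illustration of these statements, the sketch below greedily fits a symmetric affinity matrix with weighted rank-one binary terms. The adaptive step size is the least-squares projection of the residual onto the chosen rank-one direction, so the residual norm is non-increasing and strictly decreases whenever that direction is not orthogonal to the residual; a constant step corresponds to ordinary (unweighted) Hamming distances. The direction-selection heuristic (sign of the leading eigenvector) is an assumption for illustration, not the poster's inference procedure.

```python
import numpy as np

def binary_matrix_pursuit(A, num_bits, adaptive=True):
    """Greedily approximate a symmetric affinity matrix A as sum_t alpha_t * b_t b_t^T.

    Adaptive step sizes correspond to weighted Hamming distances; a constant
    step size corresponds to ordinary Hamming distances.
    Hypothetical sketch -- not the authors' exact inference procedure.
    """
    n = A.shape[0]
    R = A.astype(float).copy()               # residual matrix
    codes, weights, residual_norms = [], [], []
    for t in range(num_bits):
        # Heuristic direction: sign of the leading eigenvector of the residual.
        eigvals, eigvecs = np.linalg.eigh(R)
        b = np.sign(eigvecs[:, np.argmax(np.abs(eigvals))])
        b[b == 0] = 1
        outer = np.outer(b, b)
        if adaptive:
            # Least-squares step: alpha = <R, b b^T>_F / ||b b^T||_F^2 = <R, b b^T>_F / n^2.
            alpha = np.sum(R * outer) / n**2
        else:
            alpha = 1.0 / num_bits            # constant step (ordinary Hamming)
        R -= alpha * outer
        codes.append(b)
        weights.append(alpha)
        residual_norms.append(np.linalg.norm(R, "fro"))
    return np.stack(codes, axis=1), np.array(weights), residual_norms
```

Since b_t[i] * b_t[j] = 1 - 2 * [b_t[i] != b_t[j]], the fitted affinity is an affine function of the weighted Hamming distance with bit weights alpha_t, which is why the adaptive steps correspond to weighted Hamming distances.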

Experiments

Benchmarks: CIFAR-10, NUSWIDE, ImageNet100, LabelMe.

i. Reconstructing the affinity matrix with ordinary and weighted Hamming distances

[Figure: norm of the residual matrix vs. iteration, when step sizes in the projected gradient descent are constant and adaptive, corresponding to ordinary and weighted Hamming distances, respectively.]

ii. Image retrieval experiments with Hamming rankings (see the ranking sketch below)

• 5K/1K train and test split

• 50K/10K train and test split
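
A minimal sketch of retrieval by Hamming ranking, assuming binary codes are stored as 0/1 arrays of shape (num_items, num_bits); function and variable names here are hypothetical.

```python
import numpy as np

def hamming_rank(query_codes, db_codes):
    """Rank database items by ordinary Hamming distance to each query (closest first)."""
    # Hamming distance = number of differing bits.
    dists = (query_codes[:, None, :] != db_codes[None, :, :]).sum(axis=2)
    return np.argsort(dists, axis=1, kind="stable")

def weighted_hamming_rank(query_codes, db_codes, weights):
    """Weighted Hamming ranking: each differing bit t contributes weights[t] to the distance."""
    diffs = (query_codes[:, None, :] != db_codes[None, :, :]).astype(float)
    dists = diffs @ weights
    return np.argsort(dists, axis=1, kind="stable")
```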

https://github.com/fcakir/
