Page 1: Social Media Based Scalable Concept Detectionepubs.surrey.ac.uk/855179/1/27558479.pdf · 2020-04-24 · Social Media Based Scalable Concept Detection SURREYUNIVERSITY OF Elisavet

Social Media Based Scalable Concept Detection

UNIVERSITY OF SURREY

Elisavet Chatzilari Centre for Vision, Speech and Signal Processing

University of Surrey

Co-supervisors: Prof. Josef Kittler & Dr. Ioannis (Yiannis) Kompatsiaris

PhD Thesis

September 2014

© Elisavet Chatzilari 2014

ProQuest Number: 27558479

All rights reserved

INFORMATION TO ALL USERS
The quality of this reproduction is dependent upon the quality of the copy submitted.

In the unlikely event that the author did not send a complete manuscript and there are missing pages, these will be noted. Also, if material had to be removed, a note will indicate the deletion.

ProQuest 27558479

Published by ProQuest LLC (2019). Copyright of the Dissertation is held by the Author.

All rights reserved. This work is protected against unauthorized copying under Title 17, United States Code.

Microform Edition © ProQuest LLC.

ProQuest LLC
789 East Eisenhower Parkway
P.O. Box 1346
Ann Arbor, MI 48106-1346


Abstract

Although over the past decades there has been remarkable progress in the field of computer vision, scientists are still confronted with the problem of designing techniques and frameworks that can easily scale to many different domains and disciplines. It is true that state-of-the-art approaches cannot produce highly effective models unless there is dedicated, and thus costly, human supervision in the process of learning. Recently, we have been witnessing the rapid growth of social media (e.g. images, videos, etc.) that emerged as the result of users' willingness to communicate, socialize, collaborate and share content. The outcome of this massive activity was the generation of a tremendous volume of user-contributed data available on the Web, usually along with an indication of their meaning (i.e. tags). This has motivated researchers to investigate whether the Collective Intelligence that emerges from the users' contributions inside a Web 2.0 application can be used to remove or ease the burden of dedicated human supervision. By doing so, this social content can facilitate scalable but also effective learning. In this thesis we contribute towards this goal by tackling scalability in two ways. First, we opt to gather high-quality training content effortlessly in order to facilitate scalable learning of numerous concepts, which will be referred to as system scalability. Towards this goal, we examine the potential of exploiting user-tagged images for concept detection under both unsupervised and semi-supervised frameworks. Second, we examine the scalability issue from the perspective of computational complexity, which we will refer to as computational scalability. In this direction, we opt to minimize the computational cost while at the same time minimizing the inevitable performance loss by predicting the most prominent concepts to process further.


To my parents ...


Acknowledgements

I would like to acknowledge the opportunity that has been given to me by the Information Technologies Institute to work in a stimulating environment and collaborate with many respected researchers around Europe, helping me to identify my research interests, as well as the University of Surrey, which helped me to organize my research effort and work towards my PhD thesis. I would particularly like to thank Dr. Ioannis Kompatsiaris and Prof. Josef Kittler, who took the initiative to establish a radical new form of collaboration between these institutes, allowing me to get the best out of both worlds.

During my thesis I have received significant help from a number of people. First of all, I would like to acknowledge the help received from my two supervisors, Prof. Josef Kittler and Dr. Ioannis Kompatsiaris, who contributed with their ideas, guidance, stimulating debates and critical feedback on my research outcomes. Moreover, I would like to thank my colleague Dr. Spiros Nikolopoulos, whose guidance throughout my PhD study was critical both with respect to evolving the ideas presented in this thesis and towards writing reports (e.g. articles, conference papers, this thesis). The quality of this work would have been compromised without their help.

Finally, I must also acknowledge the contribution of my fellow researchers working in the same research team, who have turned my working environment into a continuous source of inspiration.


Contents

List of Figures vii
List of Tables xi

1 Introduction 1
1.1 Focus of the thesis 2
1.2 Motivation 5
1.3 Challenges 5
1.4 Outline 6
1.5 Contributions of this thesis 8
1.5.1 On the System Scalability 8
1.5.2 On the Computational Scalability 9

2 Scalable object detection with unsupervised learning techniques 11
2.1 Introduction 12
2.2 Related Work 14
2.3 Problem Formulation 17
2.4 Framework Description 17
2.4.1 General Framework Architecture 17
2.4.2 Analysis Components 19
2.4.2.1 Construction of an appropriate image set 19
2.4.2.2 Segmentation 23
2.4.2.3 Visual Descriptors 23
2.4.2.4 Clustering 24
2.4.2.5 Learning Model Parameters 25
2.5 Rationale of our approach 26
2.5.1 Problem Formulation 26
2.5.2 Image set construction 28
2.5.3 Clustering 29
2.6 Experimental study 33
2.6.1 Datasets 33
2.6.2 Objects' distribution based on the size of the image set 34
2.6.3 Clustering assessment 35
2.6.4 Comparing object detection models 37
2.6.5 Scaling in various types of objects 40
2.6.6 Comparison with existing methods 48
2.6.7 Discussion of the results 50
2.7 Guided cluster selection strategy 51
2.7.1 Clustering 52
2.7.2 Cluster selection strategy 52
2.7.3 Experimental Study 54
2.7.3.1 Comparing object detection models 55
2.7.3.2 Generalizing from the validation to the test set 56
2.8 Discussion and conclusions 57

3 Using Tagged Images of Low Visual Ambiguity to Boost the Learning Efficiency of Object Detectors 67
3.1 Introduction 68
3.2 Related work 69
3.3 Approach 71
3.3.1 Segmentation and feature extraction 71
3.3.2 Visual and Textual Scores Estimation 71
3.3.3 Visual Ambiguity and Image Trustworthiness 74
3.3.4 Region relevance and selection of training samples 75
3.4 Experimental results 76
3.4.1 Datasets 76
3.4.2 Evaluation of different textual similarity estimation approaches 77
3.4.3 Sample Selection Performance 81
3.4.4 Retrained Models Performance 81
3.4.5 Comparing with existing methods 86
3.5 Discussion of the results 87

4 Active learning in social context 89
4.1 Introduction 90
4.2 Related Work 91
4.3 Selective sampling in social context 93
4.3.1 Measuring informativeness 93
4.3.2 Measuring oracle's confidence 95
4.3.3 Sample ranking and selection 97
4.4 Experiments 98
4.4.1 Datasets and implementation details 98
4.4.2 Evaluation of the proposed selective sampling approach 99
4.4.3 Comparing with state-of-the-art 102
4.5 Discussion of the results 105

5 Performance Prediction of bootstrapping for Image Classification 109
5.1 Introduction 110
5.2 Selective model retraining 112
5.2.1 Oracle reliability 113
5.2.2 Model maturity 113
5.2.3 Regression model 113
5.3 Experiments 114
5.3.1 Datasets and implementation details 114
5.3.2 Impact of maturity and oracle reliability 114
5.3.3 Performance gain prediction 115
5.4 Discussion of the results 118

6 Conclusions and Future Work 119
6.1 Discussion and Conclusions 120
6.2 Contributions 121
6.3 Plans for future extensions 122

Bibliography 123


List of Figures

1.1 What happens in an internet minute. (Image from http://scoop.intel.com/what-happens-in-an-internet-minute/) 3
2.1 Framework Objective 18
2.2 Proposed framework for leveraging a set of user tagged images to train a model for detecting the object sky 19
2.3 Examples of image sets generated using SEMSOC (in caption the corresponding most frequent tag). It is clear that the majority of images in each set include instances of the object that is linguistically described by the most frequent tag. The image is best viewed in colour and with magnification. 22
2.4 a) Distribution of #appearances ∀ci ∈ C based on their frequency rank, for n=100 and pc1=0.9, pc2=0.7, pc3=0.5, pc4=0.3, pc5=0.1. b) Difference of #appearances between c1, c2, using fixed values for pc1=0.8 and pc2=0.6 and different values for n. 29
2.5 Distribution of objects' #appearances in an image group S_g, generated from (left) and (right) using SEMSOC 36
2.6 a) Diagram showing the (FP,FN) pairs for the two most populated clusters of all objects. It is evident that the vast majority of pairs are closer to (0,0) than (500,500). b) Diagram showing the F-Measure scores exhibited for the most populated cluster of each object, against the observed |DRij| value of this cluster normalized with the total number of true positives TCi. The qualitative aspect of |DRij| derives from the observation that the F-measure tends to decrease as the ratio |DRij|/TCi increases. 38
2.7 Performance comparison between four object recognition models that are learned using images of different annotation quality (i.e. strongly, roughly and weakly) 39
2.5 Experiments on the 21 objects of MSRC dataset. In each bar diagram the nine first bars (coloured in black) show the object recognition rates (measured using F1 metric) for the models trained using as positive samples the members of each of the nine most populated (in descending order) clusters. The last bar (coloured in gray) in each diagram corresponds to the performance of the model trained using strongly annotated samples. 45
2.6 Indicative regions from the clusters generated by applying our approach for the object sky. The regions that are not covered in red are the ones that have been assigned to the corresponding cluster. 47
2.7 Indicative regions from the clusters generated by applying our approach for the object tree. The regions that are not covered in red are the ones that have been assigned to the corresponding cluster. 49
2.8 Cluster selection algorithm diagram 54
2.9 Comparative performance of the object detection models 57
2.10 Performance of every model generated in each iteration on the validation and test set for (a) Grass (b) Road and (c) Sky 65
3.1 System Overview 72
3.2 Distribution of images according to the number of meaningful tags they have 78
3.2 Performance of the three examined textual similarity estimation approaches 80
3.3 The distribution of the RR scores (Eq. 3.6) based on the configurations a) V, b) VT and c) VTA 82
3.3 Performance of the initial and the enhanced classifiers using the V, VT and VTA configurations 85
3.4 Indicative regions for the concept grass selected using the configurations (a) V, (b) VT and (c) VTA. A blue bounding box indicates a false positive result. 86
4.1 System Overview 91
4.2 Informativeness 95
4.3 Probability of selecting a sample based on its distance to the hyperplane 96
4.4 Probability of selecting a sample based on the oracle's confidence 97
4.4 Per concept comparison of the two best performing approaches (i.e. the naïve oracle and the proposed approach) to the baseline (best viewed in colour) 104
5.1 System Overview 112
5.2 The effect of the oracle reliability and the classifiers' maturity to the performance gain 116
5.3 Actual cumulative gain 118


List of Tables

2.1 Legend of used notation 59
2.2 Qualitative cases for clustering 60
2.3 Datasets Information 61
2.4 Clustering Output Insights 62
2.5 Comparing with existing methods in object detection. The reported scores are the classification rates (i.e. number of correctly classified cases divided by the total number of correct cases) per object for each method. 63
3.1 Datasets 77
3.2 Comparing Performance of the proposed approach with [1] 87
4.1 Datasets 99
4.2 Performance scores 101
4.3 Comparison with ImageClef 2012 106
5.1 Prediction performance comparison between the proposed approach and the random baseline 117


Chapter 1

Introduction


1.1 Focus of the thesis

In the 90s, the second generation (2G) cellular technology limited the functionalities of mobile phones to the very basics, i.e. making calls and sending text messages (SMS). Rapidly, mobile networks and devices began to evolve to higher-speed networks (GPRS and WAP) and smaller devices with new functionalities (MMS and emails). In the last decade, the third generation (3G) was launched, which in turn gave its place to 4G in 2009. In parallel with the developments in network capabilities, mobile devices evolved to smartphones, typically equipped with more processing power and high-quality cameras. These recent advances have effectively turned ordinary people into active members of the Web, who generate, share, contribute and exchange various types of information. This has led to the huge growth of the available information over the internet in the form of documents, images and videos. For example, every minute 3000 images and 30 hours of video are uploaded on flickr and YouTube respectively (Fig. 1.1).

However, as more and more information is becoming available day by day, the efficient retrieval, indexing and categorization of the content becomes a difficult task. Driven by this need, and given that machine perception is limited to numbers and strings, there has been increasing research effort to map semantic concepts or events to multimedia content. Towards this goal, the use of image visual characteristics has been proposed. In this case, the visual content is utilized by extracting a set of visual features from each image or image region. Additionally, in an effort to simulate the functionality of the human visual system, machine learning algorithms have been proposed and extensively used. The idea behind machine learning algorithms is to mimic the way that a human learns to recognize visual objects by using a number of samples to train a model for a semantic concept. The efficient estimation of model parameters mainly depends on two factors: the quality and the quantity of the training examples. High quality is usually accomplished through manual annotation, which is a laborious and time-consuming task. This has a direct impact on the second factor, since it inevitably leads to a small number of training examples and limits the performance of the generated models. This has been approached by researchers either by proposing

http://scoop.intel.com/what-happens-in-an-internet-minute/

Figure 1.1: What happens in an internet minute. (Image from http://scoop.intel.com/what-happens-in-an-internet-minute/)

more sophisticated machine learning algorithms or by aiming to find additional training content effortlessly and cheaply.

With respect to the algorithmic-based approach, semi-supervised learning algorithms were proposed in order to ease the tedious effort of manual annotation [2]. In this case, the objective is to exploit unlabelled data, which are usually of low cost and can be obtained in high quantities, in conjunction with a small amount of labelled data. As a special case of semi-supervised learning, the bootstrapping technique was designed to augment the training set with additional training samples [3]. In a similar endeavour, active learning was later proposed, aspiring to minimize the annotation cost by enhancing the initial training set with the most informative samples [4]. These samples are actively selected by the algorithm and they are typically annotated by a human oracle. Their addition to the training set is expected to be the most beneficial for boosting the performance of the initial classifiers.


In an effort to find cheaper ways for annotating multimedia content, researchers proposed the use of online annotation games. The task of annotation was presented as a game to the web users and, while they were playing the game for entertainment, the collection of valuable metadata was a side effect. In this category, Google Image Labeler¹ [5] and Peekaboom [6] were some of the most popular games used for global and regional image annotation respectively. Following a similar idea, crowdsourcing was introduced and quickly attracted researchers' interest. The idea behind crowdsourcing is to leverage the knowledge of the crowds by splitting the annotation workload into tasks and assigning them to workers. In this way, one can get thousands of tasks completed within minutes and obtain annotations of comparable quality to the annotations of experts [7] for very large datasets in reasonable times. For example, ImageNet [8], which is currently the largest annotated image database, consisting of 14 million images and 21841 concepts, was annotated using Amazon's Mechanical Turk (MTurk)² service, without which it would require approximately 19 years to annotate the whole database.

While crowdsourcing has emerged as a popular method for easily obtaining high-level manual annotations, it cannot be considered either free or fully automatic. On the other hand, Web 2.0 applications have attracted the interest of web users who contribute content to such sites for their personal use. More specifically, flickr hosts billions of images with associated tags and, although this content is of admittedly lower quality in terms of annotation precision, it has been obtained completely free. The challenge of using this kind of content to alleviate the annotation burden has been an important research direction for the past years. Towards effectively exploiting this free user-generated content, researchers have been trying to overcome the known problems of social tagging systems, such as tag spamming, tag ambiguity, tag synonymy and granularity variation (i.e. different description level). Nevertheless, the employment of user-contributed content is leading the recent research efforts, mainly because of its ability to offer more information than the mere image visual content, coupled with the potential to grow almost without limits. Considering these benefits, the authors of [9] claim that with the availability of overwhelming amounts of data many problems can be solved without the need for sophisticated algorithms.

¹http://images.google.com/imagelabeler/
²https://www.mturk.com


1.2 Motivation

The utilization of user generated content obtained from social media is the motiva­

tion of this thesis, aiming to use this content in order to provide solutions for scalable semantic image annotation. In this thesis, we target towards two different forms of

scalability; a) the system scalability., i.e. on how many concepts the utilized algorithm

can be applied regarding the availability of appropriate training content and b) the

computational scalability, i.e. how much it costs to train and apply these algorithms on

those concepts regarding computational complexity. With respect to the system scalability, we investigate whether the user tagged images found in abundance on the web can reliably substitute the laborious task of manual annotation so that we can achieve

robust object detection for numerous concepts. We also study under which circum­

stances they can completely substitute or minimize the required manual annotation by

testing both unsupervised (Chapter 2) and semi-supervised techniques (Chapters 3, 4).

With respect to the computational scalability, i.e. the computational cost of a system, which rises proportionally to the number of examined concepts, we consider the typical trade-off between the computational cost and the performance of the algorithms.

In this direction, we present a method for predicting the concepts for which adding

more training data is expected to provide significant benefit in terms of performance (Chapter 5). Having this knowledge, we can choose to process further only the most

prominent concepts, avoiding in this way the computational cost of processing the whole

set of concepts.

1.3 Challenges

One of the most difficult challenges that one has to face when dealing with user generated content is noise. Web users tend to tag their uploaded multimedia content for personal use (e.g. vacation, instagramapp, iphoneography) and not necessarily based

on the objects they depict. This disqualifies them from being directly usable training

content and calls for more sophisticated methods to deal with the noise. In this thesis,

we approach this challenge both algorithmically and intuitively. With respect to the algorithmic approaches, we tested various textual analysis algorithms that are based

on either the expert knowledge encapsulated in lexicons (i.e. the strict definitions of WordNet), or the collective intelligence of the crowds (i.e. using co-occurrence metrics on


large textual databases such as flickr), or the contextual information of the tags (i.e.

using bag-of-words schemes). With respect to the intuitive approach, we opt to leverage

the noise reduction properties that large amounts of data tend to exhibit. Moreover,

by using large scale datasets, we have the luxury to discard the ambiguous content and

still be able to obtain significant amounts of training data.
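As a concrete illustration of the co-occurrence idea mentioned above, the relatedness of two tags can be gauged by how often they appear on the same images. The following is a minimal sketch, assuming toy tag lists rather than the datasets used in this thesis; the function names and the Jaccard-style measure are illustrative choices, not the exact metric employed here.

```python
from collections import Counter
from itertools import combinations

def tag_cooccurrence(image_tags):
    """Count single-tag and pairwise tag frequencies over a collection."""
    single, pair = Counter(), Counter()
    for tags in image_tags:
        unique = set(tags)
        single.update(unique)
        pair.update(frozenset(p) for p in combinations(sorted(unique), 2))
    return single, pair

def jaccard(tag_a, tag_b, single, pair):
    """Jaccard coefficient: co-occurrences over occurrences of either tag."""
    co = pair[frozenset((tag_a, tag_b))]
    union = single[tag_a] + single[tag_b] - co
    return co / union if union else 0.0

# Toy collection: a personal-use tag ("vacation") co-occurs incidentally,
# while "sea" and "beach" co-occur systematically.
images = [
    ["sea", "beach", "vacation"],
    ["sea", "beach"],
    ["sea", "beach", "sky"],
    ["city", "vacation"],
]
single, pair = tag_cooccurrence(images)
print(jaccard("sea", "beach", single, pair))      # 1.0: always together
print(jaccard("sea", "vacation", single, pair))   # 0.25: weak association
```

Tags whose pairwise scores against the concept's tag-group fall below a threshold can then be treated as personal-use noise and discarded.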

Additionally, processing large scale multimedia content can be computationally demanding, especially when considering that the complexity of the visual analysis and

machine learning algorithms usually rises proportionally to the size of the dataset.

However, although the performance of machine learning algorithms highly depends on

the size of the utilized training set, adding large chunks of data at random does not guarantee a proportional boost in the performance of the models. For this reason, in this

thesis we look for the optimal ways to select training data, the addition of which to the

training set is expected to maximally boost the performance of the classifiers, with the

minimum computational cost. Nevertheless, it is imperative to consider that, even with

optimal selection of data, significantly boosting the performance might not be feasible

for all cases. Given that and towards minimizing the unnecessary processing load for

these cases, we propose a novel method that can predict when adding more data is

expected to be beneficial.

1.4 Outline

In Chapter 2, we aim at system scalability by proposing a method for automatically gathering training samples from user generated content using unsupervised techniques (i.e.

clustering). The problem we consider is essentially multiple-instance learning in a noisy

context, where we try to exploit the high volume that characterizes the user generated

content. The objective is to automatically extract a training set from this user generated

content that can be used to learn an object detection model for a certain concept. More

specifically, drawing from a large pool of user tagged images, our goal is to determine a

set of image regions that can be associated with a certain object in an automatic way.

We examine under which circumstances this is possible and we prove, both theoretically

and empirically, our claim that the success of the proposed framework is correlated to

the size of the dataset and the quality of the visual analysis algorithms.


The term “object” refers to the visual representation of a visual entity, while “concept” is defined to be a perceptual representation that can be expressed both linguistically (i.e. by textual words) and visually (i.e. by visual objects). For example, the concept grass is represented by a set of textual words (e.g. grass, lawn, etc.) in the linguistic domain and by a variety of objects in the visual domain (e.g. Fig 2.7 #2). Furthermore, the term “concept” is also used to describe more abstract concepts, such as happy, which are also expressed by a set of words in the linguistic domain (e.g. happy, cheerful, etc.) but do not correspond to a specific visual object in the visual domain.

In Chapter 3, we aim at system scalability using semi-supervised techniques (i.e.

bootstrapping). In this case, there is an initial manually annotated set of regions and

the goal is to optimally select regions from user tagged images in order to enhance the

training set and build more effective object detectors. However, the nature of these

annotations (i.e. global level) and the noise existing in the associated information, as

well as the ambiguity that characterizes these examples, disqualifies them from being

directly appropriate learning samples. Nevertheless, the tremendous volume of data

that is currently hosted in social networks gives us the luxury to disregard a substantial

number of candidate learning examples, provided we can devise a gauging mechanism

that could filter out any ambiguous or noisy samples. Our objective in this work is

to define a measure for visual ambiguity, which is caused by the visual similarity of

semantically dissimilar concepts, in order to help in the process of selecting positive

training regions from user tagged images. This is done by limiting the search space of

the potential images to the ones yielding a higher probability to contain the desired

regions, while at the same time not including visually ambiguous objects that could

confuse the selection algorithm.

In Chapter 4, we investigate the extent to which the user tagged images that are

found in social networks can be used as a reliable substitute for the human oracle

in the context of active learning for image classification. Given that the oracle is

not expected to reply to the queries submitted by the selective sampling mechanism

with 100% accuracy, we expect to face a number of implications that will question

the effectiveness of active learning in this noisy context. The novelty of this work, in

contrast to what has been considered so far in active learning, is to propose a sample


selection strategy that maximizes not only the informativeness of the selected samples but also the oracle’s confidence about their actual content.

Finally, in Chapter 5, we tackle the computational scalability issue by proposing a

method that predicts the concepts for which the enhancement of the initial classifier

with additional training images is not expected to provide significant improvements.

More specifically, we adopt a regression model for predicting the performance gain of

the bootstrapping process prior to actually applying it. This is particularly useful in

the context of recent trends in the image classification domain, where the scalability of

methods to numerous concepts is now considered an important element of the proposed solutions. For example, in the ImageCLEF competition [10], the organizers introduced

this scalability requirement by adding the concept as an input to the participants’

systems rather than giving a pre-defined vocabulary of concepts, while in the ImageNet

competition they had to classify images with respect to a vocabulary of 1000 concepts.
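The prediction step described above can be sketched with a toy regression. All feature values, concept names and the selection threshold below are invented for illustration; the actual predictor and its features (initial-model maturity, oracle reliability) are developed in Chapter 5.

```python
import numpy as np

# Hypothetical training records: for each past concept, two predictor
# features (initial-model maturity, oracle reliability) and the measured
# performance gain after bootstrapping. Values are made up for illustration.
X = np.array([
    [0.9, 0.9],   # mature model, reliable tags   -> little gain
    [0.8, 0.7],
    [0.4, 0.8],   # immature model, reliable tags -> large gain
    [0.3, 0.9],
    [0.5, 0.3],   # unreliable tags               -> modest gain
])
y = np.array([0.01, 0.03, 0.12, 0.15, 0.04])

# Fit gain ~ w0 + w1*maturity + w2*reliability by ordinary least squares.
A = np.hstack([np.ones((len(X), 1)), X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predicted_gain(maturity, reliability):
    return float(w[0] + w[1] * maturity + w[2] * reliability)

# Process further only the concepts whose predicted gain clears a threshold.
candidates = {"dog": (0.35, 0.85), "sky": (0.92, 0.90)}
selected = [c for c, f in candidates.items() if predicted_gain(*f) > 0.05]
print(selected)   # ['dog']: the immature model is worth bootstrapping
```

Concepts that do not clear the threshold are simply skipped, saving the full bootstrapping cost for them.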

1.5 Contributions of this thesis

1.5.1 On the System Scalability

Towards the objective of system scalability, i.e. robust object detection for numerous

concepts, we investigate whether social media and more specifically the user tagged

images that can be found in abundance on the web can effectively be leveraged for

obtaining large amounts of training data effortlessly. We examine this in the context

of both unsupervised (i.e. clustering) and semi-supervised (i.e. bootstrapping and

active learning) machine learning. The contribution of this thesis towards automatically

gathering training examples can be summarized in the following:

• We present a completely unsupervised framework that associates image regions

with tags by correlating the most populated visual cluster of regions to the most

frequently appearing group of tags. We make the assumption that the success

of this correlation mainly depends on the size of the processed dataset and the

amount of the visual analysis error. We provide both theoretical and empirical

proof that the aforementioned assumption holds.

• We propose a method for modelling and utilizing visual ambiguity that is inherent in multimedia content by explicitly inserting it into the classifier under a


bootstrapping scheme, where the objective is to select additional positive regions

from a pool of candidate images. The proposed approach optimizes the selection process by limiting the search space of the potential images to the ones yielding

a higher probability to contain the desired regions, while at the same time not

including visually ambiguous objects that could confuse the selection algorithm.

Experimental results show that the employment of visual ambiguity allows for

better separation between the targeted true positive and the undesired negative regions.

• We examine how the known principles of active learning for image classification

fit in a social context. We show empirically that in the social context, where the

pool of candidates is replaced by user tagged images and the human oracle by

web users, it is important to take into consideration both the informativeness of

new samples and the confidence of the oracle. Towards this goal, we propose a

novel probabilistic fusion method for combining the aforementioned quantities.
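One way such a fusion could look is sketched below. This is an illustrative toy, not the thesis's actual formula: it combines an entropy-based informativeness score with the oracle's confidence multiplicatively, and all candidate names and numbers are invented.

```python
import math

def informativeness(p_positive):
    """Uncertainty-style informativeness: the entropy of a Bernoulli
    prediction, highest when the classifier output is near 0.5."""
    p = min(max(p_positive, 1e-9), 1 - 1e-9)
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def selection_score(p_positive, oracle_confidence):
    """Fuse the two quantities by treating them as independent evidence."""
    return informativeness(p_positive) * oracle_confidence

# Candidate user tagged images: (classifier score, estimated tag reliability).
candidates = {
    "imA": (0.51, 0.4),   # very informative, but unreliably tagged
    "imB": (0.55, 0.9),   # informative and reliably tagged
    "imC": (0.95, 0.9),   # confident prediction: little left to learn
}
ranked = sorted(candidates, key=lambda k: selection_score(*candidates[k]),
                reverse=True)
print(ranked)   # ['imB', 'imA', 'imC']
```

Ranking by the product rather than by informativeness alone demotes samples whose tags the noisy web oracle is likely to have gotten wrong.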

1.5.2 On the Computational Scalability

Towards minimizing the computational cost for scalable concept detection, the contribution of this thesis can be summarized in the following:

• We propose a method that is able to exploit the correlation between the expected performance gain in a bootstrapping context and two novel features: the maturity of the initial model and the reliability of the oracle. As a result, we can considerably improve the scalability properties of bootstrapping techniques by concentrating on the most prominent models and thus reducing the overall processing load.


Chapter 2

Scalable object detection with unsupervised learning techniques


2.1 Introduction

Humans can classify visual objects through models that are built using examples for

every single semantic concept. Based on this assumption, researchers have been trying

to simulate the human visual system by using machine learning algorithms to classify visual content. A set of training samples plays the role of the examples in the case of object detection schemes. These schemes typically employ some form of supervision in the process of gathering the required training samples, as it is practically impossible to learn how to recognize an object without using any kind of semantic information during training. However, semantic labels may be provided at different levels of granularity (global or region level) and preciseness (one-to-one, one-to-many, or many-to-many relation between objects and labels), imposing different requirements on the effort needed to generate them. In this chapter we will use the term weakly annotated images and weakly supervised learning when there is a one-to-many or many-to-many relation between the image regions and the provided labels [11]. This is usually the kind

of annotation that we get from search engines or collaborative tagging environments.

Equivalently, we will use the term strongly annotated images and strongly supervised

learning when there is one-to-one relation between the image regions and the provided

labels [12]. This is usually the kind of annotation resulting from dedicated, manual

annotation efforts. The annotation cost is a critical factor when designing an object

detection scheme with the intention of scaling to many different objects and domains.

With this in mind, our goal is to highlight the trade-off between the annotation cost

for preparing the necessary training samples and the quality of the resulting models.

While model parameters can be estimated more efficiently from strongly annotated

samples, such samples are very expensive to obtain, raising scalability problems. On

the contrary, weakly annotated samples can be easily obtained in large quantities from

social networks. Social tagging systems like flickr^ accommodate image corpora that

are being populated with thousands of user tagged images on a daily basis. Motivated

by this fact, our work aims at combining the advantages of both strongly supervised

(learn model parameters more efficiently) and weakly supervised (learn from samples

obtained at low cost) methods, by allowing the strongly supervised methods to learn

from training samples that can be mined from collaborative tagging environments. The

^www.flickr.com


problem we consider is essentially multiple-instance learning in a noisy context, where we try to exploit the noise reduction properties that characterize massive user contributions, given that they encode the collective knowledge of multiple users. Specifically,

drawing from a large pool of weakly annotated images, our goal is to benefit from the

knowledge aggregated in social tagging systems in order to automatically determine

a set of image regions that can be associated with a certain object. In order to do

this, we hypothesize that if the set of weakly annotated images is properly selected, the

most populated tag-“term” and the most populated visual-“term” will be two different

representations (i.e. textual and visual) of the same object. We define tag-“terms”

to be sets of tag instances grouped based on their semantic affinity (e.g. synonyms,

derivatives, etc). Respectively, we define visual-“terms” to be sets of region instances

grouped on the basis of their visual similarity (e.g. clustering using the regions’ visual

features). The most populated tag-“term” (i.e. the most frequently appearing tag,

counting also its synonyms, derivatives, etc) is used to provide the semantic label of

the object that the developed classifier is trained to recognize, while the most populated

visual-“term” (i.e. the most populated cluster of image regions) is used to provide the

set of positive samples for training the classifier in a strongly supervised manner. It is

expected that as the pool of the weakly annotated images grows, the most frequently

appearing “term” in both tag and visual information space will converge into the same

object.
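The core of this hypothesis reduces to counting modes in the two information spaces. The following is a minimal numeric illustration, assuming the regions have already been segmented and clustered; the image entries, tag names and cluster ids are invented, and the grouping of synonyms into tag-“terms” is omitted for brevity.

```python
from collections import Counter

# Toy, pre-processed input: each image contributes its tag set and the
# cluster ids that its segmented regions were assigned to by visual
# clustering. (All names and ids are illustrative only.)
images = [
    {"tags": {"sky", "clouds"},   "region_clusters": [0, 0, 2]},
    {"tags": {"sky", "vacation"}, "region_clusters": [0, 1]},
    {"tags": {"sky", "sea"},      "region_clusters": [0, 2]},
    {"tags": {"car", "street"},   "region_clusters": [1, 2]},
]

tag_counts = Counter(t for img in images for t in img["tags"])
cluster_counts = Counter(c for img in images for c in img["region_clusters"])

top_tag = tag_counts.most_common(1)[0][0]          # the semantic label
top_cluster = cluster_counts.most_common(1)[0][0]  # the positive regions
print(top_tag, top_cluster)   # sky 0
```

In this toy set the most frequent tag (“sky”) and the most populated cluster (id 0) indeed coincide on the same object; the chapter studies when this coincidence can be relied upon at scale.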

The contribution of this work is on studying theoretically and experimentally the

conditions under which this expectation is verified. The verification is evident in the

ideal case where tags are accurate and free of ambiguity, and no error is introduced by

the visual analysis algorithms. However, since this case is very unusual, we examine

how convergence is influenced both by the size of the processed dataset as well as by

the accuracy of the visual analysis algorithms (i.e. segmentation accuracy, clustering

efficiency). A large dataset size favours convergence since a statistically significant

number of samples can compensate for the error introduced by noisy tagging. On

the contrary, the amount of error introduced by the visual analysis algorithms hinders

convergence since the formulated clusters of image regions may not be consistent in a

semantic sense.

Part of the work presented in this chapter was done in collaboration with my co-author and was published in a journal paper [13]. Some of the text that appeared in


that paper was reused in this chapter. My contribution to that work was the design,

development and execution of the experiments as well as co-writing significant parts

of the manuscript. My additional contribution to this chapter is the work presented

as a conference paper in [14] (here in Section 2.7). In this work, a guided cluster

selection strategy is proposed to correlate the examined concept with the combination

of clusters containing the desired regions. Moreover, a novel graph based clustering

algorithm, which does not force all regions into clusters, is proposed. The aim is to

leave out noisy regions.

2.2 Related Work

During the past decade, there has been considerable interest in weakly labelled data

and their potential to serve as training samples for various computer vision tasks. The

common objective of these approaches is to compensate for the loss in learning from

weakly annotated and noisy training data, by exploiting the arbitrary large amount

of available samples. Web 2.0 and collaborative tagging environments have further

boosted the interest in this idea by making available plentiful user tagged data.

Our work can be considered to relate to various works in the literature in different

aspects. From the perspective of exploring the trade-off between analysis efficiency

and the characteristics of the dataset we find similarities with [15], [16]. In [15] the

authors explore the trade-offs in acquiring training data for image classification models

through automated web search as opposed to human annotation. The authors try

to learn a model that operates on prediction features (i.e. cross-domain similarity,

model generalization, concept frequency, within-training-set model quality) and provide

quantitative measures to gauge when the cheaply obtained data is of sufficient quality

for training robust object detectors. In [16] the authors investigate both theoretically

and empirically when effective learning is possible from ambiguously labelled images.

They formulate the learning problem as partially-supervised multi-class classification

and provide intuitive assumptions under which they expect learning to succeed. This

is done by using convex formulation and showing how to extend a general multi-class

loss function to handle ambiguity.

There are also works [17], [18], [19] that rely on the same principal assumption

used in our work, stating that users tend to contribute similar tags when faced with


similar type of visual content. In [17], the authors base their work on social data to

introduce the concept of flickr distance. Flickr distance is a measure of the semantic

relation between two concepts using their visual characteristics. The authors rely on

the assumption that images containing the same concept share similar appearance features and use images obtained from flickr and visual language modelling (VLM) [20]

to represent a concept. Subsequently, the distance between two concepts is measured

using the Jensen-Shannon (JS) divergence between the constructed models. Although

different in purpose from our approach, the authors present some very interesting results demonstrating that collaborative tagging environments like flickr can be used to

facilitate various computer vision tasks. In [18], the authors make the assumption that

semantically related images usually include one or several common regions (objects)

with similar visual features. Based on this assumption they build classifiers using as

positive examples the regions assigned to a cluster, which is deemed to be representative of the concept. They use multiple region-clusters per concept and eventually they

construct an ensemble of classifiers. They are not concerned with object detection but

rather with concept detection modelled as a mixture/constellation of different object detectors. Along the same lines, the work presented in [19] investigates inexpensive ways to generate annotated training samples for building concept classifiers using supervised learning. The authors utilize click-through data logged by retrieval systems

that consist of the queries submitted by the users, together with the images in the

retrieval results that these users selected to click on in response to their queries. Although the training data collected in this way can be potentially noisy, the authors rely on the fact that click-through data exhibits noise reduction properties, given that it encodes the collective knowledge of multiple users. The method is evaluated using global

concept detectors and the conclusion that can be drawn from the experimental study is

that although the automatically generated data cannot surpass the performance of the

manually produced ones, combining both automatically and manually generated data

consistently gives the best results.

The employment of unsupervised methods (e.g. clustering) for mining images depicting certain objects is the attribute that relates our work with [21]. In [21] the

authors make use of community contributed collections and demonstrate a location-

tag-vision-based approach for retrieving images of geography-related landmarks. They

use clustering for detecting representative tags for landmarks, based on their location


and time information. Subsequently, they combine this information with a vision-assisted process for presenting the user with a representative set of images. Clusters are formed

in the visual space and various scores are used to gauge the representativeness of clusters

as well as the images within a cluster. Eventually, the goal is to sample the resultant

clusters with the most representative images for the selected landmark.

Lately, with the impressive results of Convolutional Neural Networks (CNNs) in

both image annotation and object detection [22], many works have been investigating

their potential to facilitate various computer vision tasks [23; 24; 25; 26]. The authors

of [23] present an extensive study and comparison between features originating from

CNNs and SIFT-alike features followed by encoding algorithms (e.g. Bag-of-Words,

Fisher encoding, etc). Similarly, the authors of [25] show that the parameters of CNNs

can be learnt on independent large-scale annotated datasets (such as ImageNet [8])

and can be efficiently transferred to other visual recognition tasks with limited amount

of training data (i.e. object and action detection). Towards the same objective of

object detection, the authors of [24] present a method also based on CNNs, which

simultaneously segments and detects objects in images. Finally, based on weak but

noise-free annotations, the authors of [26] present a weakly supervised CNN for object

recognition that does not rely on detailed object annotations and show that it can

perform equally well when strong annotations are present.

Finally, our work also bears similarities with works like [27; 28] that operate on segmented images with associated text and perform annotation using the joint distribution of image regions and words. In [27], the problem of object recognition is viewed as a process of translating image regions to words, much as one might translate from one language to another. The authors develop a number of models for the joint distribution of image regions and words, using weak annotations. In [28], the authors

propose a fully automatic learning framework that learns models from noisy data such

as images and user tags from flickr. Specifically, using a hierarchical generative model

the proposed framework learns the joint distribution of a scene class, objects, regions,

image patches and annotation tags, as well as all the latent variables. Based on this distribution the authors support the tasks of image classification, annotation and semantic

segmentation by integrating out of the joint distribution the corresponding variables.


2.3 Problem Formulation

We use the notation of Table 2.1 to provide technical details, formalize the functionality

and describe the links between the components employed by our framework.

Our goal is to use tagged images from flickr and transform the one-to-many or

many-to-many relations that characterize their label-to-region annotations into one-to-

one relationships. One way to achieve this is through the semantic clustering of image

regions to objects (i.e. each cluster consists of regions that depict a specific object).

Semantic clustering can only be made feasible in the ideal case where the image analysis

techniques work perfectly. However, as this is highly unlikely, instead of requiring that

each cluster is mapped to a label in a one-to-one relationship, we select an image group

that focuses on c_k and we only search for the cluster or clusters where the majority of the regions contained in them depict the focused object c_k (Fig. 2.1). Thus the problem can be viewed as follows. Given a group of images I_G ⊆ S with information of the type {(id(r_1), ..., id(r_n)), c_k}, we search for the group of regions that can be mapped to object c_k in a one-to-one relation.

2.4 Framework Description

2.4.1 General Framework Architecture

The framework we propose for leveraging social media to train object detection models

is depicted in Fig. 2.2. The analysis components that can be identified in our framework

are: a) construction of an appropriate image set, b) image segmentation, c) extraction

of visual features from image regions, d) clustering of regions using their visual features and e) supervised learning of object recognition models using strongly annotated

samples.

More specifically, given an object c_k that we wish to train a detector for (e.g. sky in

Fig. 2.2), our method starts from a large collection of user tagged images and performs

the following actions. Images are appropriately selected so as to create a set of images

that is biased to emphasize object c_k. By emphasizing we refer to the case where the majority of the images within the image set depict a certain object and that the

linguistic description of that object can be obtained from the most frequently appearing

tag (see Section 2.4.2.1 for more details). Subsequently, clustering is performed on


Figure 2.1: Framework Objective.

all regions extracted from the images of the image set, that have been determined

by applying an automatic segmentation algorithm on those images. During region

clustering the image regions are represented by their visual features and each of the

generated clusters typically contains visually similar regions. Since the majority of the

images within the selected set depict instances of the desired object c_k, we anticipate

that the majority of regions representing the object of interest will be gathered in the

most populated cluster, pushing all irrelevant regions to the other clusters. Eventually,

we use as positive samples the visual features extracted from the regions belonging to

the most populated cluster, to train an SVM-based binary classifier in a supervised

manner for recognizing instances of c_k. After training the classifier, object detection is


performed on unseen images by using the automatic segmentation algorithm to segment the unseen image into regions, and then applying the classifier to decide whether these regions depict c_k.

[Figure: pipeline from flickr images through tag-based image selection, segmentation, visual feature extraction, clustering and supervised learning to an object detection model for sky. In the depicted tag cloud each word corresponds to a tag-“term”, with sky the most populated tag-“term” in this group; in the visual features space each group of coloured points corresponds to a visual-“term”, with the most populated visual-“term” highlighted.]

Figure 2.2: Proposed framework for leveraging a set of user tagged images to train a model for detecting the object sky.

2.4.2 Analysis Components

2.4.2.1 Construction of an appropriate image set

In this section we refer to the techniques that we use in order to construct a set of images emphasizing on object c_k, based on the associated textual information (i.e. annotations). If we define ling(c_k) to be the linguistic description of c_k (e.g. the words "sky", "heaven", "atmosphere" for the object sky), a function describing the


functionality of this component takes as input a large set of images and ling(c_k), and returns a set of images S^{c_k}, a subset of the initial set, that emphasizes on object c_k:

imageSet(S, ling(c_k)) = S^{c_k} ⊆ S   (2.1)

For the purposes of our work we use three different implementations of this function based on the type of associated annotations.

Keyword-based selection  This approach is used for selecting images from strongly

annotated datasets. These datasets are hand-labeled at region detail and the labels

provided by the annotators can be considered to be mostly accurate and free of ambiguity. Thus, in order to create S^{c_k} we need only to select the images where at least one of their regions is labeled with ling(c_k). In this case the social aspect of the data is not exploited, since the keyword-based selection approach is only applied on datasets

that are strongly annotated. However, as will become apparent in Section 2.6, we use

this approach to provide a reference point (i.e. manually extracted training samples)

for comparing the quality of the training samples produced by our framework.
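As a minimal sketch of this selection rule (the data layout and function name are hypothetical; the thesis does not prescribe an implementation), an image qualifies for S^{c_k} when at least one of its region labels matches a word in ling(c_k):

```python
# Keyword-based selection over a strongly annotated dataset.
# Hypothetical layout: dataset maps image ids to their region-level labels.

def image_set(dataset, ling_ck):
    """Keep an image iff at least one of its regions is labeled with
    one of the linguistic descriptions of the target object c_k."""
    return {img for img, region_labels in dataset.items()
            if any(label in ling_ck for label in region_labels)}

dataset = {
    "img1": ["sky", "building"],
    "img2": ["grass", "person"],
    "img3": ["heaven", "sea"],
}
selected = image_set(dataset, {"sky", "heaven", "atmosphere"})
# selected == {"img1", "img3"}
```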

Flickr groups  Flickr groups¹ are virtual places hosted in collaborative tagging environments that allow social users to share content on a certain topic, which can also be an

object. Although managing flickr groups still involves some type of human annotation

(i.e. a human assigns an image to a specific flickr group) it can be considered weaker

than the previous case since this type of annotation does not provide any information

about the boundaries of the object depicted in the image. From here on we will refer to

the images obtained from flickr groups as roughly-annotated images. In this case, S^{c_k} is created by taking a predefined number of images from a flickr group that is titled with ling(c_k). Here, the tags of the images are not used as selection criteria. One drawback of flickr groups derives from the fact that, since they are essentially virtual

places they are not guaranteed to increase their size constantly and therefore cater for

datasets of arbitrary scale. Indeed, the total number of positive samples that can be

extracted from the images of a flickr group has an upper limit on the total number of images that have been included in this group by the users, which is typically much

smaller than the total number of flickr images that actually depict this object. This

¹ http://www.flickr.com/groups/


is the reason that we also investigate the following selection technique, that operates

on image tags and therefore is capable of producing considerably larger sets of images

emphasizing on a certain object.

SEMSOC  SEMSOC stands for SEmantic, SOcial and Content-based clustering and

is applied by our framework on weakly annotated images (i.e. images that have been

tagged by humans in the context of a collaborative tagging environment, but no rigid

annotations have been provided) in order to create sets of images emphasizing on different topics. SEMSOC was introduced by Giannakidou et al. in [29] and is an unsupervised model for the efficient and scalable mining of multimedia social-related

data that jointly considers social and semantic features. Given the tendency of social

tagging systems to formulate knowledge patterns that reflect the way content is perceived by the web users, SEMSOC aims at identifying these patterns and creating an image set emphasizing on c_k. The reason for adopting this approach in our framework

is to overcome the limitations that characterize collaborative tagging systems such as

tag spamming, tag ambiguity, tag synonymy and granularity variation (i.e. different

description level). The outcome of applying SEMSOC on a large set of images S, is a

number of image sets S_i ⊆ S, i = 1, ..., m, where m is the number of created sets.

This number is determined empirically, as described in [29]. Then in order to obtain

the image set S^{c_k} that emphasizes on object c_k, we select the SEMSOC-generated set whose most frequent tag closely relates with ling(c_k). Although the image sets

generated by SEMSOC are not of the same quality as those obtained from flickr groups,

they can be significantly larger, favoring the convergence between the most populated visual- and tag-"term". In this case, the total number of positive samples that can be

obtained is only limited by the total number of images that have been uploaded on

the entire flickr repository and depict the object of interest. Moreover, since SEMSOC

considers also the social and semantic features of tags when creating the sets of images,

the resulting sets are expected to be of higher semantic coherence than the sets created

using, for instance, a straightforward tag-based search. Fig. 2.3 shows four examples

of image clusters generated by SEMSOC, along with the corresponding most frequent

tag.


(a) Vegetation  (b) Sky  (c) Sea  (d) Person

Figure 2.3: Examples of image sets generated using SEMSOC (in caption the corresponding most frequent tag). It is clear that the majority of images in each set include instances of the object that is linguistically described by the most frequent tag. The image is best viewed in colour and with magnification.
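The final selection step, picking the SEMSOC-generated set whose most frequent tag relates to ling(c_k), can be sketched as follows (SEMSOC itself is described in [29]; the toy clusters and the exact-match test below are illustrative assumptions):

```python
from collections import Counter

def select_semsoc_set(clusters, ling_ck):
    """clusters: list of image sets, each mapping image id -> tag list.
    Returns (index, most_frequent_tag) of the first cluster whose most
    frequent tag is a linguistic description of c_k, else None."""
    for i, cluster in enumerate(clusters):
        tag_counts = Counter(t for tags in cluster.values() for t in tags)
        top_tag, _ = tag_counts.most_common(1)[0]
        if top_tag in ling_ck:
            return i, top_tag
    return None

clusters = [
    {"a": ["sea", "beach"], "b": ["sea", "sun"]},
    {"c": ["sky", "clouds"], "d": ["sky"], "e": ["sky", "blue"]},
]
chosen = select_semsoc_set(clusters, {"sky", "heaven"})
# chosen == (1, "sky"): the second cluster's most frequent tag is "sky"
```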


2.4.2.2 Segmentation

Segmentation is applied to all images in S^{c_k} with the aim of extracting the spatial masks of visually meaningful regions. In our work we have used a segmentation algorithm based on K-means with connectivity constraint (KMCC), presented in [30]. Initially,

for each pixel of the image, intensity, texture and spatial feature vectors are extracted.

Then, the initial number of regions and their intensity, texture and spatial centers

are estimated using a variation of the maximin algorithm. These values are given as

input to the KMCC algorithm, which classifies the pixels to the different regions. This

algorithm was chosen over more popular ones (such as Normalized Cuts [31], Mean

Shift [32] and Superpixels [33]) because of its ability to create fewer and larger regions

which are more likely to depict visual objects. The output of this algorithm applied on

a single image is a set of segments which roughly correspond to meaningful objects, as

shown in Fig. 2.2. Thus, the segmentation analysis component takes as input the full

set of images that are included in S^{c_k}, and generates an extensive set of independent image regions:

seg(S^{c_k}) = {r_i ∈ R : ∀ I ∈ S^{c_k}}   (2.2)

2.4.2.3 Visual Descriptors

In order to visually describe the segmented regions we have employed an approach

similar to the one described in [34], with the important difference that in our case descriptors are extracted to represent each of the identified image regions, rather than

the whole image. More specifically, for detecting interest points we have applied the

Harris-Laplace point detector on intensity channel, which has shown good performance

for object recognition [35]. In addition, we have also applied a dense-sampling approach, where interest points are taken every 6th pixel in the image. For each interest

point (identified both using the Harris-Laplace and dense sampling approach) the 128-

dimensional SIFT descriptor is computed using the version described by Lowe [36].

Then, a Visual Word Vocabulary (codebook) is created by using the K-Means algorithm to cluster approximately 1 million SIFT descriptors that were sub-sampled from a total number of 28 million SIFT descriptors extracted from 5 thousand training images. The codebook allows the SIFT descriptors of all interest points enclosed by an


image region, to be vector quantized against the set of Visual Words (300 in our case)

and their occurrence summarized in a histogram [37]. Thus, ∀ r_i ∈ R a 300-dimensional feature vector f(r_i) is extracted, that contains information about the presence or absence of the Visual Words included in the codebook. Then, all feature vectors are normalized so that the sum of all elements of each feature vector is equal to 1. Thus, the visual descriptor component of the system takes as input the full set of independent image regions R extracted from all images in S^{c_k} and generates an equivalent number of feature vectors:

vis(R) = F = {f(r_i) : ∀ r_i ∈ R}   (2.3)
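The region-level bag-of-visual-words step above (vector quantization against the codebook, followed by an L1-normalized occurrence histogram) can be sketched in a few lines; the toy 8-word codebook and random descriptors are stand-ins for the 300-word vocabulary and SIFT features of the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 128))   # stand-in for a 300-word SIFT codebook

def region_bovw(region_descriptors, codebook):
    # Vector quantization: assign each descriptor to its nearest visual word.
    d2 = ((region_descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    words = d2.argmin(axis=1)
    # Occurrence histogram over the vocabulary, normalized so it sums to 1.
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

f = region_bovw(rng.normal(size=(50, 128)), codebook)   # one region's feature vector
```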

2.4.2.4 Clustering

For performing feature-based region clustering, we applied the affinity propagation

clustering algorithm on all extracted feature vectors F. This is an algorithm that takes

as input the measures of similarity between pairs of data points and exchanges messages

between data points, until a high-quality set of centers and corresponding clusters is

found. Affinity propagation, proposed by Frey and Dueck [38], was selected in this

work for the following reasons:

a) The requirements of our framework imply that in order to learn an efficient object

detection model, clustering will have to be performed on a considerably large number

of regions, making computational efficiency an important issue. The common approach

followed by most clustering algorithms is to determine a set of centers such that the sum

of squared errors between data points and their nearest centers is minimized. This is

done by starting with an initial set of randomly selected centers and iteratively refining

this set so as to decrease the sum of squared errors. However, such approaches are

sensitive to the initial selection of centers, and work well only when the number of

clusters is small and the random initialization is close to a good solution. This is the

reason why these algorithms need to re-run many times with different initializations in

order to find a good solution. In contrast to this, affinity propagation simultaneously

considers all data points as potential centers. By viewing each data point as a node in

a network, affinity propagation recursively transmits real-valued messages along edges

of the network until a good set of centers and corresponding clusters emerges. In this


way, it removes the need to re-run the algorithm with different initializations, which is very beneficial in terms of computational efficiency.

b) The fact that the number of objects depicted in the images of an image set

cannot be known in advance poses the requirement for the clustering procedure to automatically determine the appropriate number of clusters based on the analyzed data.

Affinity propagation, rather than requiring that the number of clusters is pre-specified,

takes as input a real number for each data point. This number is called “preference” and its meaning is that data points with larger values for “preference” are more likely to

be chosen as centers. In this way the number of identified centers (number of clusters)

is influenced by the values of the input preferences but also emerges from the message-

passing procedure. If a priori, all data points are equally suitable as centers (as in our case) the preferences should be set to a common value. This value can be varied

to produce different numbers of clusters and taken for example to be the median of

the input similarities (resulting in a moderate number of clusters) or their minimum (resulting in a small number of clusters). Given that it is better for our framework to

handle noisy rather than inadequate (in terms of indicative examples) training sets, we opt for the minimum value in our experiments.

Thus, the clustering component takes as input the full set of feature vectors ex­

tracted by the visual descriptors component and generates clusters of feature vectors

based on a similarity distance between those vectors. These clusters of feature vectors

can be directly translated to clusters of regions, since there is a one-to-one correspondence between regions and feature vectors. Thus, the functionality of the clustering component can be described as follows:

clust(F) = {r_i : r_i ⊆ R}   (2.4)

Out of the generated clusters of regions we select the most populated, r_v, as described in detail in Section 2.5, and we use the regions included in this cluster to learn the parameters of a model recognizing c_k.
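Once affinity propagation has assigned each region's feature vector to a cluster, selecting the most populated cluster as the positive pool is straightforward; the label array below is a made-up stand-in for the clustering output:

```python
from collections import Counter

def most_populated_cluster(labels):
    """labels[i] = cluster id of region i.
    Returns (cluster_id, indices of its member regions)."""
    top_id, _ = Counter(labels).most_common(1)[0]
    members = [i for i, c in enumerate(labels) if c == top_id]
    return top_id, members

labels = [0, 2, 1, 1, 1, 0, 1]          # hypothetical affinity-propagation output
cluster_id, positives = most_populated_cluster(labels)
# cluster_id == 1; regions 2, 3, 4 and 6 become the positive training samples
```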

2.4.2.5 Learning Model Parameters

Support Vector Machines (SVMs) [39] were chosen for generating the object detection

models, due to their ability to generalize smoothly and cope efficiently with high-

dimensionality pattern recognition problems. All feature vectors corresponding to the


regions assigned to the most populated of the generated clusters, are used as positive

samples for training a binary classifier. Negative examples are chosen arbitrarily from

the remaining dataset. Tuning choices include the selection of a Gaussian radial basis function kernel and the use of cross-validation for selecting the kernel parameters. Thus, the functionality of the model learning component can be described by the following function:

svm(vis(r_v), c_k) = m_{c_k}   (2.5)
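A sketch of this training step with scikit-learn (an assumed implementation choice, not the thesis's own code; the toy 10-d vectors stand in for the 300-d BoVW features, and the parameter grid is illustrative):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Positives: vectors from the most populated cluster; negatives: arbitrary rest.
pos = rng.normal(loc=1.0, size=(40, 10))
neg = rng.normal(loc=-1.0, size=(40, 10))
X = np.vstack([pos, neg])
y = np.array([1] * 40 + [0] * 40)

# Gaussian RBF kernel, with C and gamma selected by cross-validation.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]}, cv=3)
grid.fit(X, y)
model = grid.best_estimator_            # m_{c_k}: the binary detector for c_k
```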

2.5 Rationale of our approach

2.5.1 Problem Formulation

The goal of our framework is to find a set of image regions depicting the object c_k, (r⁺, c_k), from a set of user tagged images. However, the annotations found in social networks are in the form of image-level tags {I, (t_1, t_2, ..., t_n)}, which can be transformed to {(r_1, r_2, ..., r_m), (t_1, t_2, ..., t_n)} after segmenting I into regions. Ideally, the tagged images could be used to extract the positive samples for every concept c_i, i = 1, ..., t, depicted in S if we could perfectly cluster the visual and tag information

space. More specifically, if we take R and T to be the total set of regions and tags

extracted from all images in S respectively, by performing clustering based on the similarity between the individuals of the same population (i.e. visual similarity for image regions and semantic affinity for contributed tags), we are able to generate clusters of individuals in each population as shown below:

visualCluster(R) = r_i, r_i ⊆ R (visual-terms),  tagCluster(T) = t_j, t_j ⊆ T (tag-terms)   (2.6)

Now, given a large set of tagged images I ∈ S, this process would produce for each object c_l depicted by the images of S a triplet of the form (r_i, t_j, c_l). Ideally, in each triplet, r_i is the set of regions extracted from all images in S that depict c_l, and t_j is the set of tags from all images in S that were contributed to describe c_l linguistically. We consider that an object c_l may have many different instantiations in both visual

(e.g. different angle, illumination, etc) and tag (e.g. synonyms or derivatives of the

words expressing the object; for instance the object sea can be linguistically described


using many different words such as "sea", "seaside", "ocean", etc.) information space.

Thus, r_i can be used to provide the positive samples needed to train the SVM-based classifier, while t_j can be used to provide the linguistic description of the object that

the classifier is trained to recognize. However, the aforementioned process can only be

made feasible in the ideal case where the image analysis works perfectly and there is

no noise in the contributed tags, which is highly unlikely.

For this reason, in our work, we relax the constraints of the aforementioned problem

and instead of requiring that one triplet is extracted for every object c_l depicted by the images of S, we only aim at extracting the triplet corresponding to object c_k, which

is the object emphasized by the processed image set. Thus, the first step is to create

an appropriate set of images S^{c_k} that emphasizes on object c_k. Then, based on the assumption that there will be a connection between what is depicted by the majority of the images in S^{c_k} and what is described by the majority of the contributed tags, we investigate the level of semantic consistency (i.e. the level at which the majority of regions included in r_v depict c_k and the majority of tags included in t_g are linguistically related with c_k) of the triplet (r_v, t_g, c_k), if v and g are selected as follows. Since both r_v and t_g are clusters (of image regions and tags, respectively), we can apply the Pop(·) function on them, that calculates the population of a cluster (i.e. the number of instances included in the cluster). Then v and g are selected such that the corresponding clusters are the most populated of all clusters generated by the clustering functions of eq. (2.6), that is v = argmax_i(Pop(r_i)) and g = argmax_j(Pop(t_j)).

Although the errors generated from imperfect visual analysis may have different

causes (e.g. segmentation error, imperfect discrimination between objects), they all

hinder the creation of semantically consistent region clusters. Therefore, in our work,

we consider that the error generated from the inaccurate clustering of image regions

with respect to the existing objects {errord-obj), incorporates all other types of visual

analysis error. Similarly, although the contributed tags may incorporate different types

of noise (i.e. ambiguity, redundancy, granularity variation, etc) they all hinder the

process of associating a tag with the objects that are depicted in the image, and this is reflected on the level of emphasis that is given on object c_k when collecting S^{c_k}.

Eventually, the problem addressed in this work is what should be the characteristics of S^{c_k} and error_{d-obj} so that the triplet (r_v, t_g, c_k), determined as described above,


satisfies our objective, i.e. that the majority of regions included in r_v depict c_k and the majority of tags included in t_g are linguistically related with c_k.

2.5.2 Image set construction

Let us assume that we construct an image set S^{c_k} ⊆ S that emphasizes on object c_k. We can view the process of constructing S^{c_k} as the act of populating an image set with images selected from a large database S using certain criteria. In this case, the number of images depicting object c_i in S^{c_k} can be considered to be equal to the number of successes in a sequence of n independent success/failure trials, each one yielding success with probability p_{c_i}. Given that S is sufficiently large, drawing an image from this dataset can be considered as an independent trial. Thus, the number of images in S^{c_k} that depict object c_i ∈ C can be expressed by a random variable K following the binomial distribution with probability p_{c_i}. Eq. (2.7) shows the probability mass function of a random variable following the binomial distribution:

Pr(K = k) = (n choose k) · p_{c_i}^k · (1 − p_{c_i})^{n−k}   (2.7)

Given the above, we can use the expected value E(K) of a random variable following the binomial distribution to estimate the expected number of images in S^{c_k} that depict object c_i ∈ C, if they are drawn from the initial dataset S with probability p_{c_i}. This is actually the value of k maximizing the corresponding probability mass function, which is:

E(K) = n p_{c_i}   (2.8)

If we consider γ to be the average number of times an object appears in an image, then the number of appearances (#appearances) of an object in S^{c_k} is:

TC_i = γ n p_{c_i}   (2.9)

Moreover, based on the assumption mentioned earlier in this section, we accept that there will be an object c_1 that is drawn (i.e. appears in the selected image) with probability p_{c_1} higher than p_{c_2}, which is the probability that an image depicting c_2 is drawn, and so forth for the remaining c_i ∈ C. This assumption is experimentally

verified in Section 2.6.2 where the frequency distribution of objects for different image


Figure 2.4: (a) Distribution of #appearances ∀ c_i ∈ C based on their frequency rank, for n = 100 and p_{c_1} = 0.9, p_{c_2} = 0.7, p_{c_3} = 0.5, p_{c_4} = 0.3, p_{c_5} = 0.1. (b) Difference of #appearances between c_1 and c_2, using fixed values p_{c_1} = 0.8 and p_{c_2} = 0.6 and different values for n.

sets are measured in a manually annotated dataset. Finally, using eq. (2.9) we can

estimate the expected number of appearances (#appearances) of an object in S^{c_k}, ∀ c_i ∈ C. Fig. 2.4(a) shows the #appearances ∀ c_i ∈ C against their frequency rank, given some example values for p_{c_i} with p_{c_1} > p_{c_2} > .... It is clear from eq. (2.9) that if we consider the probabilities p_{c_i} to be fixed, the expected difference, in absolute terms, in the #appearances between the first and the second most highly ranked objects c_1 and c_2 increases as a linear function of n (see Fig. 2.4(b) for some examples). Based on this observation, and given the fact that as N increases n will also increase, we examine how the population of the generated region clusters relates to error_{d-obj} and n.
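The linear growth of the expected gap can be checked numerically with eq. (2.9); γ = 1 and the probabilities p_{c_1} = 0.8, p_{c_2} = 0.6 of Fig. 2.4(b) are example values:

```python
def expected_appearances(n, p_ci, gamma=1.0):
    # Eq. (2.9): TC_i = gamma * n * p_ci, the expected number of
    # appearances of object c_i in an image set of n images.
    return gamma * n * p_ci

gaps = [expected_appearances(n, 0.8) - expected_appearances(n, 0.6)
        for n in (100, 200, 400)]
# the gap (about 20, 40, 80) doubles whenever n doubles
```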

2.5.3 Clustering

The purpose of this section is to help the reader draw some intuitive conclusions about

the impact of the dataset size and the error introduced by the visual analysis algorithms, error_{d-obj}, on the success probability of our approach. In order to do this we

examine the clustering part of the proposed framework from the perspective of how

much a possible solution deviates from the perfect case. This allows us to approximate

error_{d-obj} with a measurable quantity and derive an analytical form of the association


between the visual analysis error, the size of the dataset and an indicator of the success

probability of our approach.

Without loss of generality we work under the assumption that, due to the error_{d-obj}, it is more likely for the cluster corresponding to the second most frequently appearing object to become more populated than the cluster corresponding to the first most frequently appearing object, than a cluster corresponding to any other object. A cluster that corresponds to an object c_i is considered to be the cluster that exhibits the highest F-measure (F_1) score, with respect to that object, among all generated clusters. Thus, the cluster corresponding to object c_i is found using function Z defined as:

Z(c_i, R) = r_K, K = argmax_j(F_1(c_i, r_j))   (2.10)

where F_1 is the harmonic mean of precision (prec) and recall (rec) and is calculated using the following equation:

F_1(c_i, r_j) = 2 · prec_ij · rec_ij / (prec_ij + rec_ij), with rec_ij = TP_ij / (TP_ij + FN_ij) and prec_ij = TP_ij / (TP_ij + FP_ij)   (2.11)

where TP_ij, FP_ij and FN_ij denote, respectively, the regions of cluster r_j that depict c_i, the regions of r_j that do not depict c_i, and the regions depicting c_i that lie outside r_j.
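Function Z can be sketched directly from the F-measure criterion above; the region-to-object ground truth below is illustrative:

```python
def f1(tp, fp, fn):
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def corresponding_cluster(obj, clusters, truth):
    """Z(c_i, R): index of the cluster with the highest F1 w.r.t. obj.
    clusters: list of region-id lists; truth: region_id -> object."""
    total = sum(1 for o in truth.values() if o == obj)
    scores = []
    for cluster in clusters:
        tp = sum(1 for r in cluster if truth[r] == obj)
        scores.append(f1(tp, len(cluster) - tp, total - tp))
    return max(range(len(clusters)), key=scores.__getitem__)

truth = {0: "sky", 1: "sky", 2: "sea", 3: "sky", 4: "sea"}
clusters = [[0, 1, 3], [2, 4]]
# cluster 0 corresponds to "sky", cluster 1 to "sea"
```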

Then, given that r_K has been decided to be the corresponding cluster of c_i, the population Pop_K of the cluster is equal to the number of regions TC_i depicting c_i, adding the number of false positives FP_{i,K} and removing the number of false negatives FN_{i,K} that have been generated from the error_{d-obj}. Thus, we have:

Pop_K = TC_i + FP_{i,K} − FN_{i,K} ⇒ Pop_K = TC_i + DR_{i,K}   (2.12)

DR_{i,K} is defined to be the displacement of r_K with respect to c_i and is an indicator of how much the content of r_K deviates from the perfect solution. DR_{i,K} shows how the Pop_K of cluster r_K is modified according to the error_{d-obj} introduced by the visual analysis algorithms. Positive values of DR_{i,K} indicate inflows in population, while negative values indicate leakages. In the typical case where the clustering result does not exhibit high values for FP_{i,K} and FN_{i,K} simultaneously (see Section 2.6.3), DR_{i,K} is also an indicator of the result's quality, since it shows how much the content of a cluster has been changed with respect to the perfect case. Let us denote r_α = Z(c_1, R) and r_β = Z(c_2, R) the clusters corresponding to c_1 (i.e. the most frequently appearing object in S^{c_k})


and c_2 (i.e. the second most frequently appearing object in S^{c_k}), respectively. We are interested in the relation connecting Pop_α and Pop_β given DR_{1,α} and DR_{2,β}. Thus we have:

Pop_α − Pop_β = TC_1 + DR_{1,α} − TC_2 − DR_{2,β} ⇒ Pop_α − Pop_β = (TC_1 − TC_2) + (DR_{1,α} − DR_{2,β})   (2.13)

We know about the first parenthesis on the right hand side of the equation that, since S^{c_k} emphasizes on c_1, this object will appear more frequently than any other object in S^{c_k}; thus TC_1 − TC_2 > 0. In the case where the second parenthesis on the right hand side of the equation is also positive (i.e. DR_{1,α} − DR_{2,β} > 0), the value Pop_α − Pop_β will be greater than zero, since it is the sum of two positive numbers. This indicates that, despite the error_{d-obj}, cluster r_α remains the most populated of the generated clusters and continues to be the most appropriate (i.e. in terms of the maximum F_1 criterion) cluster for training a model detecting object c_1. When DR_{1,α} − DR_{2,β} > 0 we can distinguish between the three qualitative cases for clustering that are described

in Table 2.2. The superscripts are used to indicate the sign (i.e. positive or negative)

of the corresponding displacement in each case.

If DR_{1,α} − DR_{2,β} < 0, the two parentheses of the right hand side of eq. (2.13) have different signs, and the sign of the value Pop_α − Pop_β depends on the difference between the absolute values |TC_1 − TC_2| and |DR_{1,α} − DR_{2,β}|. In this case, one of the factors controlling whether the most populated cluster will be the most appropriate cluster for training a model detecting c_1 is the absolute difference between TC_1 and TC_2, which, according to our analysis in Section 2.5.2, depends largely on the number of images n in S^{c_k}. The three qualitative cases for clustering that we can identify when DR_{1,α} − DR_{2,β} < 0 are shown in Table 2.2.

In order to get an intuitive view of the relation between n and the probability

of selecting the most appropriate cluster when DR_{1,α} − DR_{2,β} < 0, we approximate the effect of error_{d-obj} on the distribution of the generated clusters' population by measuring how much a certain clustering solution deviates from the perfect solution. In

order to do this, we view clustering as a recursive process with the starting point at the

perfect solution. Then, the deviation of some clustering solution t + 1 from the perfect

solution depends on the deviation of the previous solution t from the perfect solution.

Respectively, the population of a cluster in solution t + 1 is equal to the population of


this cluster in the previous solution t, adding the number of false positives and removing the number of false negatives that have been generated from the transition t → t + 1. This can be expressed using the following recursive equation:

Pop_K^{t+1} = Pop_K^t + FP_K^{tr} − FN_K^{tr} = Pop_K^t + DR_K^{tr}   (2.14)

If we take as a starting point the perfect solution, we have Pop_K^0 = TC_i. If we also consider DR_K^{tr} to be constant for all transitions, we can find a closed-form solution for the recursive equation, which is:

Pop_K^{t+q} = TC_i + q DR_K^{tr}   (2.15)

where q is the number of transitions that have taken place and provides an intuitive measure of how much distance there is between the current clustering solution and the perfect solution. However, TC_i denotes the number of times the object c_i appears in S^{c_k} (#appearances) and, according to eq. (2.9), we have TC_i = γ n p_{c_i}. By substituting TC_i in eq. (2.15) we have:

Pop_K^{t+q} = γ n p_{c_i} + q DR_K^{tr}   (2.16)

Given that DR_{1,α} − DR_{2,β} < 0, the population of cluster r_α is increasing/decreasing with a rate lower/higher than the rate with which r_β increases/decreases. So, we are interested in the number of transitions that are needed for causing the population of r_α to become equal to or less than the population of r_β. The equality corresponds to the minimum number of transitions.

Pop_α^{t+q} − Pop_β^{t+q} ≤ 0 ⇒ γ n p_{c_1} + q DR_α^{tr} − γ n p_{c_2} − q DR_β^{tr} ≤ 0 ⇒ q ≥ γ n (p_{c_1} − p_{c_2}) / (DR_β^{tr} − DR_α^{tr})   (2.17)

In order to draw some conclusions from this equation we need to note the following.

Given our basic assumption we have p_{c_1} > p_{c_2}. Moreover, given that DR_{1,α} − DR_{2,β} < 0, we can also accept that DR_α^{tr} − DR_β^{tr} < 0. Thus, all terms on the right hand side


of eq. (2.17) are positive. It is clear from eq. (2.17) that the number of transitions q

required for causing r_α not to be the most populated of the generated clusters increases proportionally to the dataset size n and the difference of probabilities (p_{c_1} − p_{c_2}). It is important to note that q does not correspond to any physical value, since clustering is not a recursive process; it is just an elegant way to help us reach the intuitive conclusion that as n increases, there is a higher probability of r_α (i.e. the most populated of the generated clusters) being the most appropriate cluster for learning c_1, due to the increased amount of deviation from the perfect solution that can be tolerated.
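A quick numeric reading of eq. (2.17) makes the same point; the displacement rates DR_α^{tr} = −2 and DR_β^{tr} = 3 are made-up values satisfying DR_α^{tr} − DR_β^{tr} < 0:

```python
def min_transitions(n, p_c1, p_c2, dr_alpha, dr_beta, gamma=1.0):
    # Eq. (2.17): smallest q for which Pop_alpha drops to Pop_beta,
    # assuming dr_alpha - dr_beta < 0 (so the denominator is positive).
    return gamma * n * (p_c1 - p_c2) / (dr_beta - dr_alpha)

qs = [min_transitions(n, 0.8, 0.6, -2.0, 3.0) for n in (100, 200, 400)]
# the tolerated number of transitions q (about 4, 8, 16) grows linearly with n
```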

2.6 Experimental study

The goal of our study is to experimentally validate, using real social data, our expectations on the required size of the processed dataset and the error introduced by the visual analysis algorithms. We examine the conditions under which the most populated visual- and tag-"term" converge into the same object and evaluate the efficiency of the object detection models generated by our framework. To this end, in Section 2.6.2 we

experimentally verify that the absolute difference between the first and second most

frequently appearing objects in a dataset constructed to emphasize on the former, in­

creases as the size of the dataset grows. Section 2.6.3 provides an experimental insight

on the errord-obj introduced by the visual analysis algorithms and examines whether our expectation regarding the most populated cluster holds. In Section 2.6.4 we com­

pare the quality of object models trained using flickr images leveraged by the proposed

framework, against the models trained using manually provided, strongly annotated

samples. Moreover, we also examine how the volume of the initial dataset affects the

efficiency of the resulting models. In addition to the above, in Section 2.6.5 we examine

the ability of our framework to scale in various types of objects. We close our experi­

mental study in Section 4.4.3 where we compare our work with other existing methods

in the literature.

2.6.1 Datasets

To carry out our experiments we have relied on three different types of datasets. The first type includes the strongly annotated datasets, constructed by asking people to provide region-detail annotations of images. To acquire comparable measures over


the experiments, the images of the strongly annotated datasets were segmented by the segmentation algorithm described in Section 2.4.2.2, and the ground-truth label of each segment was taken to be the label of the hand-labelled region that overlapped with the segment by more than 2/3 of the segment's area. As strongly annotated datasets, we have used a collection of 536 images from the Seaside domain annotated in our lab¹ and the publicly available MSRC dataset² consisting of 591 images. The second type refers to roughly-annotated datasets like the ones obtained from flickr groups. In order to create a dataset of this type, for each object of interest we have downloaded 500 member images from a flickr group titled with a name related to the name of the object, resulting in 25 groups of 500 images each (12500 in total). The third type refers to weakly annotated datasets like the ones that can be collected freely from collaborative tagging environments. For this case, we have crawled 3000 and 10000 images from flickr using the wget³ utility and the flickr API facilities, in order to investigate the impact of the dataset size on the efficiency of the generated models. Depending on the annotation type, we use the tag-based selection approaches presented in Section 2.4.2.1 to construct the necessary image sets. Table 2.3 summarizes the information of the datasets used in our experimental study.
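The 2/3-overlap rule described above can be sketched as follows; regions are represented here as plain pixel-coordinate sets, a simplification of the actual segmentation masks, and the toy regions are hypothetical.

```python
def label_segment(segment_pixels, ground_truth_regions):
    """Assign to a segment the label of the hand-labelled region that
    overlaps more than 2/3 of the segment's area; None if no region does."""
    seg = set(segment_pixels)
    for label, region_pixels in ground_truth_regions.items():
        overlap = len(seg & set(region_pixels))
        if overlap > (2 / 3) * len(seg):
            return label
    return None

# Toy example: a 6-pixel segment, 5 pixels of which fall inside "sky".
regions = {"sky": [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1)], "sea": [(2, 0)]}
segment = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (2, 0)]
print(label_segment(segment, regions))  # sky (5/6 > 2/3)
```

Segments overlapping no hand-labelled region by more than 2/3 are left unlabelled, which mirrors the filtering effect of the rule.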

2.6.2 Objects' distribution based on the size of the image set

As claimed in Section 2.5.2, we expect the absolute difference between the number of appearances (#appearances) of the first (c_1) and second (c_2) most highly ranked objects within an image set to increase as the volume of the dataset increases. This is evident in the case of keyword-based selection since, due to the fact that the annotations are ground truth, the probability that the selected image depicts the intended object is equal to 1, much greater than the probability of depicting the second most frequently appearing object. Similarly, in the case of flickr groups, since a user has decided to assign an image to the flickr group titled with the name of the object, the probability of this image depicting the intended object should be close to 1. On the contrary, for the case of SEMSOC, which operates on ambiguous and misleading tags, this claim is not

¹http://mklab.iti.gr/project/scef
²http://research.microsoft.com/vision/Cambridge/recognition
³wget: http://www.gnu.org/software/wget


evident. For this reason, and in order to verify our conjecture experimentally, we plot the distribution of objects' #appearances in four image sets that were constructed to emphasize the objects sky, sea, vegetation and person, respectively. These image sets were generated from both the 3000-image and the 10000-image flickr datasets using SEMSOC. Each of the bar diagrams depicted in Fig. 2.5 describes the distribution of objects' #appearances inside an image set, as evaluated by humans. This annotation effort was carried out in our lab and its goal was to provide weak but noise-free annotations, in the form of labels, for the content of the images included in both datasets. It is clear that as we move from the 3000-image to the 10000-image dataset, the difference, in absolute terms, between the number of images depicting c_1 and c_2 increases in all four cases, supporting our claim about the impact of the dataset size on the distribution of objects' #appearances when using SEMSOC.
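The #appearances gap can be computed with a simple counting sketch; the annotation lists below are hypothetical, but they illustrate how the absolute gap between the two most frequent objects grows with the set size.

```python
from collections import Counter

def appearance_gap(image_labels):
    """Absolute #appearances difference between the two most frequent
    objects; each image contributes its set of labels once."""
    counts = Counter(label for labels in image_labels for label in set(labels))
    (c1, n1), (c2, n2) = counts.most_common(2)
    return c1, c2, n1 - n2

# Hypothetical weak annotations for image sets focused on "sky".
small_set = [["sky", "sea"]] * 5 + [["sky"]] * 20 + [["sea"]] * 15
large_set = [["sky", "sea"]] * 10 + [["sky"]] * 80 + [["sea"]] * 55
print(appearance_gap(small_set), appearance_gap(large_set))
```

With these toy numbers the gap between c_1 and c_2 grows from 5 to 25 as the set grows, mirroring the trend observed in Fig. 2.5.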

2.6.3 Clustering assessment

The purpose of this experiment is to provide insight into the validity of our approach of selecting the most populated cluster in order to train a model recognizing the most frequently appearing object. To do so, we evaluate the content of each of the formulated clusters using the strongly annotated seaside and MSRC datasets. More specifically, for each object depicted in either dataset, we obtain an image subset using keyword-based search and apply clustering on the extracted regions. Then, for each subset we calculate the values $TC_1$, $DR_{1,\alpha}$ and $Pop_\alpha$ for the most frequently appearing object $c_1$ and its corresponding cluster $r_\alpha$, and $TC_2$, $DR_{2,\beta}$ and $Pop_\beta$ for the second most frequently appearing object $c_2$ and its corresponding cluster $r_\beta$. Both $r_\alpha$ and $r_\beta$ are determined based on eq. (2.10) of Section 2.5.3. Subsequently, we examine whether $r_\alpha$ is the most populated among all the clusters generated by the clustering algorithm, not only among $r_\alpha$ and $r_\beta$ (i.e. we examine if $Pop_\alpha = \max_i Pop_i$ over all generated clusters). If this is the case, we consider that our framework has succeeded in selecting the most appropriate cluster for training a model to recognize $c_1$ (a ✓ is inserted in the corresponding entry of the Suc. column of Table 2.4). If $r_\alpha$ is not the most populated cluster, we consider that our framework has failed in selecting the appropriate cluster (a ✗ is inserted in the corresponding entry of the Suc. column). Table 2.4 summarizes the results for the 7 objects of the seaside dataset and the 19 objects of the MSRC dataset (the objects bicycle and cat were omitted since only one cluster was generated for them). We notice that the appropriate cluster is selected in 21 out of 26 cases, supporting our expectation that the error_{d-obj}


[Figure 2.5 consists of four bar diagrams: (a) Sky, (b) Vegetation, (c) Sea, (d) Person.]

Figure 2.5: Distribution of objects' #appearances in an image set generated from the 3000-image (left) and the 10000-image (right) flickr datasets using SEMSOC


introduced by the visual analysis process is usually limited and allows our framework to work efficiently. By examining the figures of Table 2.4 more thoroughly, we realize that $DR_{1,\alpha} - DR_{2,\beta} > 0$ for all success cases, with the only exception of the object sky in the seaside dataset. This is in accordance with the theoretical analysis of Section 2.5.3, which showed that if the relative inflow from $r_\beta$ to $r_\alpha$ is positive, our framework will succeed in selecting the appropriate cluster. In the case of the object sky our analysis does not hold due to the excessive level of over-segmentation. Indeed, by examining the content of the images belonging to this image set, we realize that despite the fact that sky is the most frequently appearing object in the image set, after segmenting all images and manually annotating the extracted regions, the number of regions depicting sky ($TC_1 = 470$) is less than the number of regions depicting sea ($TC_2 = 663$). This is a clear indication that the effect of over-segmentation has inverted the objects' distribution, making sea the most frequently appearing object. The fail cases where the relative inflow from $r_\beta$ to $r_\alpha$ is negative (i.e. $DR_{1,\alpha} - DR_{2,\beta} < 0$) are also consistent with our analysis. In none of these 5 cases was the difference $(TC_1 - TC_2)$ high enough to compensate for the error introduced by the visual analysis process.

Additionally, we have used the experimental observations of Table 2.4 in order to verify the qualitative aspect of $|DR_{i,j}|$ mentioned in Section 2.5.3. More specifically, we have plotted the (FP, FN) pairs exhibited by the $r_\alpha$ and $r_\beta$ clusters of the 7 seaside and 19 MSRC objects. Fig. 2.6(a) verifies the tendency of the clustering algorithm not to deviate substantially from the perfect case, since the majority of (FP, FN) pairs are closer to (0,0) than to (500,500). Moreover, the fact that no (FP, FN) pairs lie close to the diagonal (FP = FN), apart from the ones that are very close to (0,0), renders $|DR_{i,j}|$ a valid indicator of the quality of the result. This qualitative aspect of $|DR_{i,j}|$ was also verified by the diagram of Fig. 2.6(b). In this diagram we have plotted the F-measure score for the most populated cluster of each object against the observed $|DR_{i,j}|$ value of this cluster, normalized by the total number of true positives $TC_1$. It is evident that the F-measure tends to decrease as the ratio $|DR_{i,j}|/TC_1$ increases, showing that high values of $|DR_{i,j}|$ indicate low quality of the result.
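A minimal sketch of the quality indicator discussed above, under the assumption (suggested by the (FP, FN) discussion, not stated here) that DR can be taken as the net inflow FP − FN; the (TP, FP, FN) counts are hypothetical.

```python
def f_measure(tp, fp, fn):
    """Standard F-measure from true-positive, false-positive and
    false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def quality_ratio(tp, fp, fn):
    """|DR|/TC with DR taken as the net inflow FP - FN (an assumption)."""
    return abs(fp - fn) / tp

# A cluster close to the perfect case vs. one far from it (toy counts).
good = (f_measure(450, 30, 20), quality_ratio(450, 30, 20))
bad = (f_measure(200, 400, 50), quality_ratio(200, 400, 50))
print(good, bad)  # the higher the ratio, the lower the F-measure
```

This reproduces the trend of Fig. 2.6(b): the cluster with the larger normalized |DR| scores a markedly lower F-measure.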

2.6.4 Comparing object detection models

In order to compare the efficiency of the models generated using training samples with different annotation types (i.e. strongly, roughly, weakly), we need a set of objects that


[Figure 2.6 panels: (a) (FP,FN) pairs for the two most populated clusters of each object; (b) F-measure vs |DR|/TC diagram for the most populated cluster of each object.]

Figure 2.6: a) Diagram showing the (FP,FN) pairs for the two most populated clusters of all objects. It is evident that the vast majority of pairs are closer to (0,0) than to (500,500). b) Diagram showing the F-measure scores exhibited for the most populated cluster of each object, against the observed $|DR_{i,j}|$ value of this cluster normalized by the total number of true positives $TC_1$. The qualitative aspect of $|DR_{i,j}|$ derives from the observation that the F-measure tends to decrease as the ratio $|DR_{i,j}|/TC_1$ increases.


[Figure 2.7: model comparison bar diagram; models: Flickr 3k, Flickr 10k, Flickr Groups, Manual; objects: person, sea, sky, vegetation.]

Figure 2.7: Performance comparison between four object recognition models that are learned using images of different annotation quality (i.e. strongly, roughly and weakly)

are common in all three types of datasets. For this reason, after examining the contents of the strongly annotated dataset, reviewing the availability of groups in flickr and applying SEMSOC on the 3000- and 10000-image flickr datasets, we determined 4 object categories {sky, sea, vegetation, person}. These objects exhibited significant presence in all the different datasets and served as benchmarks for comparing the quality of the different models. For each object, one model was trained using the strong annotations, one model was trained using the roughly-annotated images contained in the flickr groups, and two models were trained using the weak annotations of the 3000- and 10000-image datasets, respectively. In order to evaluate the performance of these models, we test them using a subset (i.e. 268 images) of the strongly annotated dataset not used during training. The $F_1$ metric was used for measuring the efficiency of the models.

By looking at the bar diagram of Fig. 2.7, we draw the following conclusions: a)


Model parameters are estimated more efficiently when trained with strongly annotated samples, since in 3 out of 4 cases they outperform the other models, sometimes by a significant amount (e.g. sky, person). b) Flickr groups can serve as a less costly alternative for learning the model parameters, since using the roughly-annotated samples we get comparable, and sometimes even better (e.g. vegetation), performance than manually trained models, while requiring considerably less effort to obtain the training samples. c) The models learned from weakly annotated samples are usually inferior to the other cases, especially where the proposed approach for leveraging the data has failed in selecting the appropriate cluster (e.g. sea and sky for the 3000-image dataset).

However, the efficiency of the models trained using weakly annotated samples improves when the size of the dataset increases. From the bar diagram of Fig. 2.7 it is clear that when using the 10000-image dataset, the incorporation of a larger number of positive samples into the training set improves the generalization ability of the generated models in all four cases. Moreover, in the case of the object sea we also note a drastic improvement in the model's efficiency. This is attributed to the fact that the increment of the dataset size, as explained in Section 2.5, compensates for the error_{d-obj} and allows the proposed method to select the appropriate cluster. On the other hand, in the case of the object sky it seems that the correct cluster is still missed despite the use of a larger dataset. The correct cluster is also missed for the object sky when the weakly annotated samples are obtained from flickr groups. This shows that error_{d-obj} is considerably high for this object and does not allow our framework to select the correct cluster.

2.6.5 Scaling in various types of objects

In order to test the ability of our approach to apply successfully to various types of objects, we have performed experiments using the MSRC dataset¹. MSRC is a publicly available dataset that has been widely used to evaluate the performance of many object detection methods. The reason for choosing MSRC over other publicly available benchmarking datasets, such as the PASCAL VOC dataset [40], was its adoption by the works in the literature that are most relevant to the proposed approach (i.e. using weakly annotated data to train object detectors), allowing us to compare our work with state of the art methods (see Section 2.6.6). MSRC consists of 591

¹http://research.microsoft.com/vision/Cambridge/recognition


hand-segmented images annotated at region detail for 23 objects. Due to their particularly small number of samples, the objects horse and mountain were ignored in our study. All images were segmented by the segmentation algorithm described in Section 2.4.2.2 and the ground-truth labels were extracted as in Section 2.6.1. The dataset was split randomly into 295 training and 296 testing images, ensuring approximately proportional presence of each object in both sets. The training set was used to train the strongly supervised classifiers for comparison reasons (i.e. as a baseline) and the testing set was used for evaluating the classifiers. In order to test our approach for these objects we have relied on flickr groups to obtain 21 image groups, with 500 members each, suitable for training models for the 21 objects of MSRC.
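The thesis does not spell out the exact splitting procedure; the sketch below shows one simple way to obtain an approximately proportional train/test split, under the simplifying assumption that each image is associated with a single object.

```python
import random

def proportional_split(images_by_object, train_frac=0.5, seed=0):
    """Split images into train/test so each object keeps roughly
    proportional presence in both sets (a simplified sketch)."""
    rng = random.Random(seed)
    train, test = [], []
    for obj, images in images_by_object.items():
        imgs = images[:]
        rng.shuffle(imgs)
        cut = round(len(imgs) * train_frac)
        train += imgs[:cut]
        test += imgs[cut:]
    return train, test

# Hypothetical image identifiers grouped by their (single) object.
images_by_object = {"grass": [f"g{i}" for i in range(40)],
                    "cow": [f"c{i}" for i in range(10)]}
train, test = proportional_split(images_by_object)
print(len(train), len(test))  # 25 25
```

In the real dataset images contain several objects, so a split like the 295/296 one above can only be approximately proportional per object.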

In an attempt not only to evaluate the efficiency of the developed models, but also to discover whether the root cause for learning a bad model is the selection of an inappropriate set of training samples or the deficiency of the employed visual feature space to discriminate the examined object, we perform the following. Since we do not have strong annotations for the images obtained from flickr groups, and it is therefore impossible to assess the quality of the generated clusters as performed in Section 2.6.3, we train as many models as the number of generated clusters (not only using the most populated one) and test them on the MSRC test set. Our aim is to assess the quality of the generated clusters indirectly, by looking at the recognition rates of the models trained with the member regions of each cluster. The bar diagrams of Fig. 2.5 show the object recognition rates (measured using the $F_1$ metric) for the models trained using as positive samples the members of each of the nine most populated (in descending order) clusters. The last bar in each diagram corresponds to the performance of the model trained using the strong annotations of the training set and tested on the test set. Moreover, in order to visually inspect the content of the generated clusters, we have implemented a viewer that is able to read the clustering output and simultaneously display all regions included in the same cluster. By having an overall view of the regions classified in each cluster, we can better understand the distribution of clusters to objects and derive some conclusions on the reasons that make the proposed approach succeed or fail. By looking at the bar diagrams of Fig. 2.5 we can distinguish between four cases.

In the first case we classify the objects bird, boat, cat, dog and face, which are too diversiform with respect to the employed visual feature space and, as a consequence, none of the developed models (not even the one trained using the manual annotations)


[Figure 2.5 consists of 21 bar diagrams, one per MSRC object: (a) aeroplane, (b) bicycle, (c) bird, (d) boat, (e) body, (f) book, (g) cat, (h) chair, (i) cow, (j) dog, (k) face, (l) flower, (m) road, (n) sheep, (o) sign, (p) water, (q) car, (r) grass, (s) tree, (t) building, (u) sky.]

Figure 2.5: Experiments on the 21 objects of the MSRC dataset. In each bar diagram the first nine bars (colored in black) show the object recognition rates (measured using the $F_1$ metric) for the models trained using as positive samples the members of each of the nine most populated (in descending order) clusters. The last bar (colored in gray) in each diagram corresponds to the performance of the model trained using strongly annotated samples.


manage to achieve good recognition rates. In addition, the particularly small number of relevant regions in the test set renders most of these objects inappropriate for deriving useful conclusions.

In the second case we classify the objects bicycle, body, chair, flower and sign which, in spite of seeming to be adequately discriminated in the visual feature space (i.e. the model trained using the manually annotated samples performs relatively well), yield clusters none of whose models manages to deliver significantly better recognition rates than the others. Thus, none of the generated clusters contains good training samples, which indicates that the images included in the selected flickr group are not representative of the examined object, as perceived by the MSRC annotators.

Aeroplane, book, car, grass, sky and sheep are classified in the third case, which includes the objects that are effectively discriminated in the visual feature space (i.e. the model trained using the manually annotated samples performs relatively well) and for which there is at least one cluster that delivers performance comparable with the manually trained model. However, the increased error_{d-obj} has prevented this cluster from being the most populated, since the regions representing the examined object are split into two or more clusters. Indeed, if we take for instance the object sky and use the viewer to visually inspect the content of the formulated clusters, we realize that clustering has generated many different clusters containing regions depicting sky. As a result, the cluster containing the regions of textured objects has become the most populated. Fig. 2.6 shows indicative images for some of the generated clusters for the object sky. The clusters' rank (#) refers to their population. We can see that the clusters ranked #2, #3, #6 and #7 contain sky regions, while the most populated cluster #1 is the cluster that contains the regions primarily depicting statues and buildings. Consistently, we can see in Fig. 2.5 that the performance of the models trained using clusters #2 and #3 is much better than the performance of the model trained using cluster #1.

Finally, in the last case we classify the objects cow, road, water, tree and building, where our proposed approach succeeds in selecting the appropriate cluster and allows the classifier to learn an efficient model. Fig. 2.7 presents some indicative regions for 6 out of the 9 clusters generated by applying the proposed approach on the images downloaded from the flickr group titled tree. For each cluster we present five indicative images in order to show the tendency, in a semantic sense, of the regions aggregated in each


[Figure 2.6 panels: #1 Cluster - architecture (statues, buildings); #2 Cluster - sky (but a bit noisy); #3 Cluster - sky (best performing model); #5 Cluster - noise; #6 Cluster - sky (mostly dark); #7 Cluster - sky (mostly light).]

Figure 2.6: Indicative regions from the clusters generated by applying our approach for the object sky. The regions that are not covered in red are the ones that have been assigned to the corresponding cluster.


cluster. It is interesting to see that most of the formulated clusters tend to include regions of a certain semantic object, such as tree (#1), grass (#2), sky (#5), water (#9), or noise regions. In such cases, where the error_{d-obj} is limited, it is clear that the regions of the object that appears most frequently in the dataset (tree in this case) are gathered in the most populated cluster.

2.6.6 Comparison with existing methods

Our goal in the previous experiments was to highlight the potential of social media to serve as the source of training samples for object recognition models. Thus, we have focused on the relative loss in performance that results from the use of leveraged rather than manually annotated training samples, and not on the absolute performance values of the developed models. However, in order to provide an indicative measure of the loss in performance compared with other existing works in the literature, we calculate the classification rate (i.e. the number of correctly classified cases divided by the total number of cases) of our framework for the 21 objects of MSRC. Then, we compare the results with two methods [12], [41] that are known to deliver state of the art performance on this dataset. Textonboost [12] uses conditional random fields to obtain accurate image segmentation and is based on textons, which jointly model shape and texture. This work relies on manually annotated regions. The combination of Markov Random Fields (MRF) and aspect models is the approach followed in [41] in order to produce aspect-based spatial field models for object detection. This work (from now on PLSA-MRF/I) provides results using manually annotated images (i.e. weak but noise-free annotations). Note that the reported classification rates are not directly comparable, since the methods do not rely on the same set of visual features, the training/test split is likely to be different, and the results are reported at different levels (in [12] at pixel level, in [41] at the level of 20x20 image patches, and in our case at the level of arbitrarily shaped segments extracted by an automatic segmentation algorithm). However, the comparison of these methods allows us to draw some useful conclusions about the trade-off between the annotation cost for training and the efficiency of the developed models. Table 2.5 summarizes the classification rates per object for each method.
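The classification rate used in this comparison is simply the fraction of correctly classified cases; a minimal sketch with hypothetical region-level predictions:

```python
def classification_rate(predictions, ground_truth):
    """Fraction of cases classified correctly."""
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# Hypothetical region-level predictions against toy ground truth.
pred = ["tree", "grass", "sky", "tree", "water"]
gold = ["tree", "grass", "tree", "tree", "sky"]
print(classification_rate(pred, gold))  # 0.6
```

The per-object rates of Table 2.5 are obtained by restricting this computation to the cases of each object.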

On average, the accuracy obtained from our approach (45%) is inferior to that obtained from PLSA-MRF/I (50%), which is in turn inferior to the accuracy obtained


[Figure 2.7 panels: #1 Cluster - trees; #2 Cluster - grass; #3 Cluster - mountain with noise; #4 Cluster - noise; #5 Cluster - cloudy sky; #9 Cluster - water.]

Figure 2.7: Indicative regions from the clusters generated by applying our approach for the object tree. The regions that are not covered in red are the ones that have been assigned to the corresponding cluster.


from Textonboost (58%). The performance scores obtained by the three methods are ranked proportionally to the amount of annotation effort required to train their models. Indeed, Textonboost [12] requires strongly annotated images that can only be produced manually; PLSA-MRF/I [41] requires weakly but noise-free annotated images, the generation of which typically involves light human effort; and our framework operates on weakly but noisily annotated images that can be automatically collected from social sites at no cost. The costless nature of our approach motivated the execution of two additional experiments that are essentially variations of our original approach, mixing manually labelled data from MSRC and noisy data from flickr for training. More specifically, the first variation, Prop.Fram./M-F/W, mixes MSRC and flickr data at the level of images. Initially, the strong region-to-label associations provided by MSRC are relaxed to become weak associations of the form image-to-label(s). Then, these weakly annotated MSRC images are mixed with images from flickr and the proposed framework is applied to the mixed set of images. Finally, the samples used for training the object recognition models consist of the regions belonging to the most populated of the clusters generated from the mixed set. The Prop.Fram./M-F/W variation uses the MSRC annotations in the same way as PLSA-MRF/I. The second variation, Prop.Fram./M-F/S, mixes MSRC and flickr data at the level of regions. The samples used for training the object recognition models consist of the strongly annotated regions from MSRC plus the regions belonging to the most populated of the clusters generated from the flickr data. The Prop.Fram./M-F/S variation uses the MSRC annotations in the same way as Textonboost. Table 2.5 shows that both variations of our approach mixing MSRC and flickr data (i.e. Prop.Fram./M-F/W and Prop.Fram./M-F/S) outperform the corresponding state-of-the-art approaches (i.e. PLSA-MRF/I and Textonboost, respectively). In the case of Prop.Fram./M-F/W the obtained average accuracy (57%) outperforms PLSA-MRF/I by 7%, while in the case of Prop.Fram./M-F/S the obtained average accuracy (62%) outperforms Textonboost by 4%.

2.6.7 Discussion of the results

We have presented a framework for the automatic creation of a training set from user-tagged images in order to train object detectors. Experimentally, we have seen that by increasing the number of utilized images we manage to improve the performance of the


generated detectors, providing supporting evidence for the potential of social media to facilitate the creation of reliable and effective object detectors. Moreover, despite the fact that there will always be a strong dependence between the discriminative power of the employed feature space and the efficiency of the proposed approach in selecting the appropriate set of training samples, our analysis has shown that we can maximize the probability of success by using large volumes of user-contributed content.

On the other hand, analysing the experimental results on a per-concept basis, we have seen object categories for which, although there is at least one cluster that delivers performance comparable with the manually trained model, it was not the most populated, since the regions representing the examined object are split into two or more clusters. This could be fixed by adding some supervision to the cluster selection process. With this in mind, in the following section we present a guided cluster selection strategy that can optimally choose the combination of region clusters depicting the target object using a small set of manually labelled examples.

2.7 Guided cluster selection strategy

Previously we proposed to achieve one-to-one region-to-label mapping by correlating the most populated visual cluster with the concept that the constructed image set was selected to focus on. However, our experiments have shown that, for some object categories, either the regions depicting the object of interest were split among many of the formulated clusters, or noisy regions populated an irrelevant cluster and as a consequence caused our correlation mechanism to fail. For this reason, in this section, we propose two alterations to the aforementioned framework. Firstly, we utilize a novel graph based clustering algorithm that is not forced to assign the noisy regions into clusters [42]. Moreover, we propose a semi-supervised strategy to associate the appropriate cluster or combination of clusters with the examined concept, alleviating the effect of splitting the relevant regions into multiple clusters. A validation set of strongly annotated samples guides the selection strategy to decide which of the generated clusters are most likely to contain regions depicting the object of interest. This is essentially a post-clustering process that iteratively merges the clusters exhibiting the highest performance on the validation set and re-evaluates the performance of the merged cluster. In


2. SCALABLE OBJECT DETECTION WITH UNSUPERVISED LEARNING TECHNIQUES

the end, all regions included in the merged cluster with the highest performance among all iterations are mapped in a one-to-one relation with the object of interest.

2.7.1 Clustering

In order to compensate for the noise introduced by the visual analysis algorithms (i.e.

segmentation and feature extraction) and boost the efficiency of the proposed cluster

selection strategy, we have employed a noise resilient clustering algorithm that does

not forcefully assign all regions into clusters but leaves the noisy regions out of the

clusters’ distribution. More specifically, we have applied a novel graph based clustering

algorithm [42] that takes as input a portion of the similarity measure values between

pairs of data points, constructs the network between the data points (regions in our

case) and acquires a seed set of densely connected nodes. Then, starting from the

community seed set the algorithm expands the communities by adding nodes to the

communities which maximize a sub-graph modularity function subject to the constraint

that their degree does not belong to the top 10 percentile of the node degree distribution

(this implies that a single pass over the graph nodes is conducted in order to derive the

node degree distribution) [42]. As the outcome of applying the community detection

with expansion algorithm, every data point can belong to zero, one or more clusters.

Thus, we obtain an overlapping distribution of the regions' feature vectors over the communities.
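The seed expansion step can be sketched as follows. This is a minimal illustration, not the exact algorithm of [42]: the local modularity used here (the ratio of internal to total incident edges of the community) is an assumed stand-in for the sub-graph modularity function, and the degree cap approximates the top-10-percentile constraint.

```python
# Hedged sketch of seed-set community expansion with a degree cap.
# `adj` is an adjacency dict {node: set(neighbours)} built by thresholding
# pairwise region similarities; `seeds` are densely connected seed groups.

def local_modularity(community, adj):
    # Ratio of internal edge endpoints to all edge endpoints of the community
    # (a simple stand-in for the sub-graph modularity function of [42]).
    internal = boundary = 0
    for u in community:
        for v in adj[u]:
            if v in community:
                internal += 1
            else:
                boundary += 1
    return internal / (internal + boundary) if internal + boundary else 0.0

def expand(seeds, adj, degree_cap_pct=90):
    # A single pass over the nodes gives the degree distribution and the cap.
    degrees = sorted(len(nb) for nb in adj.values())
    cap = degrees[int(len(degrees) * degree_cap_pct / 100) - 1]
    communities = []
    for seed in seeds:
        comm = set(seed)
        improved = True
        while improved:
            improved = False
            # Candidate nodes: neighbours of the community whose degree
            # does not exceed the cap (top-percentile nodes are skipped).
            frontier = {v for u in comm for v in adj[u]
                        if v not in comm and len(adj[v]) <= cap}
            base = local_modularity(comm, adj)
            best, best_gain = None, 0.0
            for v in frontier:
                gain = local_modularity(comm | {v}, adj) - base
                if gain > best_gain:
                    best, best_gain = v, gain
            if best is not None:
                comm.add(best)
                improved = True
        communities.append(comm)
    return communities  # a node may end up in zero, one or more communities
```

Because only nodes that improve the community score are ever added, noisy regions that connect weakly to every community simply remain unassigned, which is the property the selection strategy relies on.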

2.7.2 Cluster selection strategy

We can represent the cluster selection strategy as a function r_positive = SelectRegions(R) that takes as input the set of generated clusters and selects the ones that represent the

object of interest. Previously, we relied on the intuition of perfect clustering, dictating

that the distribution of clusters’ population based on their population rank will coincide

with the distribution of objects' appearances based on their frequency rank. Motivated by this, we selected the most populated of the generated clusters to be correlated with the object of interest. Eq. 2.18 shows this functionality by considering Pop(·) to be a function that calculates the population of a cluster.

r_positive = argmax_i(Pop(r_i))    (2.18)


However, the errors introduced by the visual analysis algorithms had a high impact

on the success or failure of (2.18). For this reason, in this work we propose an adapted

version of the self-training technique that aims to boost the efficiency of the cluster

selection strategy using a small set of strongly annotated regions (i.e. validation set).

Let us denote F_score(r_i) to be the performance (measured by the F1 score that is achieved on the validation set) of an object detection model which was generated using the regions of r_i as positive examples. Our approach starts by using the validation set to calculate the F_score(r_i) of all models created using each time the regions of a different cluster as positive examples. Then, starting from the best performing cluster, an iterative merging process is performed. In each iteration the algorithm merges the cluster exhibiting the next highest value for F_score into the existing set of selected clusters and re-evaluates the performance of the newly created cluster F_score(r_rank_1 ∪ r_rank_2 ∪ ... ∪ r_rank_{i+1}), where r_rank_1 is the cluster exhibiting the highest F_score, r_rank_2 the cluster with the second highest F_score and so on. The iterations stop when the F_score of the next cluster to be merged is zero. Finally, the combination of clusters (i.e. merged cluster) with optimal performance is chosen to be the one correlated with the object of interest. In this case the functionality of the cluster selection strategy can be represented as follows:

r_positive = ⋃_{i=1}^{x} r_rank_i    (2.19)

where x = argmax_m F_score(⋃_{j=1}^{m} r_rank_j)
and F_score(r_rank_1) ≥ F_score(r_rank_2) ≥ ... > 0

Following the running example of Fig. 2.8, let us assume that R consists of four clusters so that F_score(Cluster 1) > F_score(Cluster 2) > F_score(Cluster 3) > F_score(Cluster 4) = 0. In the first iteration, the algorithm merges clusters 1 and 2, which yield the two highest values for F_score. In the second iteration it adds cluster 3, which yields the next best performance. In iteration three, the next best F_score is zero, so the algorithm stops the merging procedure. The decision is made to select the combination of clusters 1 and 2, which yields the highest performance of all examined combinations.
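The iterative merging procedure can be sketched as follows; `train_and_fscore` is a hypothetical callback, assumed here, that trains a detector with the given regions as positives and returns its F-score on the validation set:

```python
# Hedged sketch of the guided cluster selection strategy (eq. 2.19).
# `clusters` maps a cluster id to its set of regions; `train_and_fscore`
# is a hypothetical helper supplied by the caller.

def select_regions(clusters, train_and_fscore):
    # Rank clusters by the F-score of a model trained on each one alone.
    scores = {cid: train_and_fscore(regions) for cid, regions in clusters.items()}
    ranked = sorted(clusters, key=scores.get, reverse=True)

    merged, best_regions, best_score = set(), set(), 0.0
    for cid in ranked:
        if scores[cid] == 0:           # stop when the next cluster scores zero
            break
        merged = merged | clusters[cid]
        score = train_and_fscore(merged)   # re-evaluate the merged cluster
        if score > best_score:
            best_score, best_regions = score, set(merged)
    return best_regions
```

On the running example of Fig. 2.8, with F_score(Cluster 1) > F_score(Cluster 2) > F_score(Cluster 3) > F_score(Cluster 4) = 0, the sketch evaluates cluster 1, then 1+2, then 1+2+3, never touches cluster 4, and returns whichever combination scored highest.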


Figure 2.8: Cluster selection algorithm diagram (clusters ranked by performance are merged iteratively; the procedure stops when no cluster with a non-zero F_score remains, and the best-performing combination is selected).

2.7.3 Experimental Study

The goal of our experimental study is twofold. First, we want to compare the quality of the training samples acquired by the proposed semi-supervised approach with the population based selection strategy and the manually selected samples. In order to assess the quality of the different selection types, Support Vector Machines (SVMs) were chosen to train the models for object localization and recognition. The feature vectors of the regions associated with the object of interest were used as positive samples for training a binary classifier. Negative examples were chosen arbitrarily from the remaining dataset. Second, we wanted to verify that the proposed cluster selection algorithm generalizes when moving from the validation to the test set.
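The training and evaluation setup can be sketched with scikit-learn; the Gaussian feature vectors below are stand-ins for the real region descriptors, and the train/test split is simulated:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Stand-ins for region feature vectors: positives selected by the cluster
# selection strategy, negatives drawn arbitrarily from the remaining dataset.
pos = rng.normal(loc=1.0, size=(100, 16))
neg = rng.normal(loc=-1.0, size=(100, 16))

X = np.vstack([pos, neg])
y = np.array([1] * len(pos) + [0] * len(neg))

# A binary SVM, as used for object localization and recognition.
clf = SVC(kernel="rbf").fit(X, y)

# Evaluate on held-out samples with the F1 score, as in the thesis.
X_test = np.vstack([rng.normal(1.0, size=(50, 16)),
                    rng.normal(-1.0, size=(50, 16))])
y_test = np.array([1] * 50 + [0] * 50)
print(f1_score(y_test, clf.predict(X_test)))
```

In the actual experiments the positives come from the selected region clusters and the F1 score is computed per concept on the validation or test subset of SAIAPR TC-12.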

To carry out our experim ents we have used a m anually annotated and a social


dataset. The first dataset is the publicly available SAIAPR TC-12 dataset [1] consisting

of 20000 strongly annotated images. The dataset was split into 3 parts (70% train, 10%

validation and 20% test). As previously (see Section 3.4.1), the images of the manually

annotated dataset were segmented by the automatic segmentation algorithm. In order

to create the second dataset, we downloaded images from flickr groups for 15 of the

concepts included in the SAIAPR TC-12 dataset. For each object of interest, we have

downloaded 500 member images from a flickr group that is titled with a name related

to the name of the object, resulting in 15 groups of 500 images each (7500 in total).

2.7.3.1 Comparing object detection models

Our goal is to compare the efficiency of the models trained using a set of regions selected

according to:

1. the population-based method (eq. 2.18). The training set consists of flickr images only.

2. the proposed semi-supervised approach (eq. 2.19). Models were trained using only the flickr images, and 2000 manually annotated images (10% of the SAIAPR TC-12 dataset) were used for selecting the appropriate cluster.

3. the proposed approach adding to each model the images of the validation set. Models were trained using both the flickr images and 2000 manually annotated images (10% of the SAIAPR TC-12 dataset).

4. the strongly supervised strategy. The training set consists of 14000 manually annotated images (70% of the SAIAPR TC-12 dataset).

In order to evaluate the performance of the models, we test them using the testing

subset (i.e. 4000 images) of the strongly annotated dataset, not used during training

or validation. Fig. 2.9 shows the F1 score of the generated models for each of the 15

concepts.

By looking at the bar diagrams of Fig. 2.9 we can distinguish between three cases. In

the first case we classify the objects airplane, bicycle, bird, boat, chair and flower that

are diversiform with respect to the employed visual feature space and as a consequence,

none of the developed models (not even the one trained using the manual annotations)


manage to achieve good recognition rates. In the second case we classify the objects

building, car and sign that despite being adequately discriminated in the visual feature space (i.e. the model trained using the manually annotated samples performs relatively

well), none of the other selection algorithms was able to select the regions depicting the

examined concept. In the last case we classify the concepts water, road, person, sky,

tree and grass where the proposed approach performs well. We can also notice that

for the cases of water and road the population based selection algorithm fails to select

the proper cluster but the semi-supervised selection algorithm manages to merge the

appropriate clusters. Finally, in an effort to boost the performance of the generated

detectors, we have trained the models using as training examples both the regions

selected by our framework and the manually selected regions included in the validation

set. We can see that the performance of the models generated by the combination of

the datasets is greatly increased.

2.7.3.2 Generalizing from the validation to the test set

The purpose of this experiment is to verify that the proposed selection algorithm can

generalize from the validation to the test set. For this reason, we have calculated the

performance (F_score) of every model generated at each iteration of the algorithm on the validation and test set. Due to lack of space we chose to show only three of the

concepts that were classified in the last case of Section 2.7.3.1 (Fig. 2.10). Black and

grey bars indicate the performance of every merged model generated at each iteration

step of the selection algorithm on the validation and test set, respectively. By this

figure, it is obvious that the models perform similarly both on validation and test set in

all three cases. We chose these concepts because we are able to draw safer conclusions

for the generalization ability of our framework, as it is impossible to generalize in

cases where the visual diversity of the concepts did not allow the algorithm to produce

a model that would perform well even in the validation set. For example, for the

concepts building, car and sign the highest Fscore achieved on the validation set for all the combinations of the generated clusters was lower than 5%. Moreover, this allows us

to assume that our approach fails in these cases because of the different nature of the

training and testing set (e.g. flickr images might depict modern buildings and SAIAPR

TC-12 monuments). We expect that increasing the size of the training set would allow

visually diverse categories of the same concept to exist in the same training set.



Figure 2.9: Comparative performance of the object detection models (weakly supervised, semi-supervised, semi-supervised + validation, and strongly supervised).

2.8 Discussion and conclusions

In this chapter, we have presented an algorithm for extracting semantically coherent groups of regions depicting a certain object. Starting from a set of flickr images that focus on the desired object, we proposed an algorithm that is able to select the regions depicting this object using either an automatic or a guided selection strategy. The experimental results have demonstrated that although the performance of the detectors trained using leveraged social media is inferior to the one achieved by manually trained detectors, there are cases where the gain in effort compensates for the small loss in performance.

On the other hand, we have seen that there were cases for which no cluster or


combination of clusters achieved performance comparable with the manually trained models. This can be attributed to the fact that unsupervised machine learning such as clustering is a rather difficult and error-prone task. In addition to that, the guided cluster selection approach, which was more promising than the unsupervised variation, can be computationally demanding. For example, in order to select the appropriate combination of clusters we need to train and evaluate approximately 30 SVM classifiers (Fig. 2.10), which is computationally expensive. On the contrary, considering that some supervision was already added in the guided cluster selection strategy, semi-supervised approaches, such as self-learning, are expected to incorporate the manually labelled set in a more efficient and effective way. For this reason, the approaches presented in the following chapters (Chapters 3, 4) are based on the bootstrapping paradigm and their objective is to find the optimal strategy for effortlessly selecting new training data.


Table 2.1: Legend of used notation*

Symbol | Definition
S | The complete social media dataset
N | The number of images in S
S^{c_k} | An image set, subset of S, that emphasizes on object c_k
n | The number of images in S^{c_k}
I | An image from S^{c_k}
R = {r_i, i = 1, ..., m} | Complete set of regions identified in all images of S^{c_k} by an automatic segmentation algorithm
T = {t_i, i = 1, ..., n} | Complete set of tags contributed for all images of S^{c_k} by web users
F = {f_i, i = 1, ..., m} | Complete set of visual features extracted from all regions in R
C = {c_i, i = 1, ..., t} | Set of distinct objects that appear in the image set
R = {r_i, i = 1, ..., o} (bold face) | Set of clusters created by performing clustering on the regions extracted from all images of S^{c_k}, based on their visual similarity (i.e. visual-terms)
T = {t_j, j = 1, ..., d} (bold face) | Set of clusters created by clustering together the tags contributed for all images in S^{c_k}, based on their semantic affinity (i.e. tag-terms)
P_{c_k} | Probability that tag-based image selection draws from S an image depicting c_k
γ | Average number of times an object appears in an image
r_{c_i} | Number of regions depicting object c_i in S^{c_k}
Pop_j or Pop(r_j) | Population of cluster r_j
r_a | The cluster corresponding to the most frequently appearing object in S^{c_k}
r_b | The cluster corresponding to the second most frequently appearing object in S^{c_k}
r_u | The cluster of regions depicting c_k
error_{vis-obj} | The error generated by the visual analysis algorithms
FP_{i,j} | False positives of r_j with respect to c_i
FN_{i,j} | False negatives of r_j with respect to c_j
DR_{i,j} | Displacement of r_j with respect to c_i

*We use normal letters (e.g. z) to indicate individuals of some population and bold face letters (e.g. z) to indicate clusters of individuals of the same population.


Table 2.2: Qualitative cases for clustering*

Case DR_{1,a} − DR_{2,β} > 0:
- DR+_{1,a} > DR+_{2,β}: Both w_a and w_β increase their population, but the inflow of w_a is greater than the inflow of w_β.
- DR+_{1,a}, DR−_{2,β}: w_a increases its population while w_β reduces its own.
- DR−_{1,a} > DR−_{2,β}: Both w_a and w_β reduce their population, but the leakage of w_a is lower than the leakage of w_β.

Case DR_{1,a} − DR_{2,β} < 0:
- DR+_{1,a} < DR+_{2,β}: Both w_a and w_β increase their population, but the inflow of w_a is lower than the inflow of w_β.
- DR−_{1,a}, DR+_{2,β}: w_a reduces its population while w_β increases its own.
- DR−_{1,a} < DR−_{2,β}: Both w_a and w_β reduce their population, but the leakage of w_a is greater than the leakage of w_β.

*The superscripts indicate the sign (i.e. positive or negative) of the corresponding displacement.


Table 2.3: Datasets Information

Symbol | Source | Annotation Type | No. of Images | Objects | Selection approach
internal dataset | | strongly annotated | 536 | sky, sea, vegetation, person, sand, rock, boat | keyword based
MSRC | | strongly annotated | 591 | aeroplane, bicycle, bird, boat, body, book, cat, chair, cow, dog, face, flower, road, sheep, sign, water, car, grass, tree, building, sky | keyword based
 | flickr | roughly annotated | 12500 (500 for each object) | sky, sea, vegetation, person and the 21 MSRC objects | flickr groups
 | flickr | weakly annotated | 3000 | cityscape, seaside, mountain, roadside, landscape, sport-side | SEMSOC
gF10K | flickr | weakly annotated | 10000 | jaguar, turkey, apple, bush, sea, city, vegetation, roadside, rock, tennis | SEMSOC


Table 2.4: Clustering Output Insights

S^{c_k} | n | c_1 | r_{c_1} | DR_{1,a} | Pop_a | c_2 | r_{c_2} | DR_{2,β} | Pop_β | Suc. | sign(DR_{1,a} − DR_{2,β})

(Seaside)
S^sea* | 395 | sea | 732 | −404 | 328 | sky | 395 | −212 | 183 | ✗ | −
S^sand | 359 | sand | 422 | 136 | 558 | sky | 337 | −103 | 234 | | +
S^rock | 53 | rock | 155 | 95 | 250 | sea | 86 | 47 | 133 | ✓ | +
S^boat | 68 | boat | 96 | 120 | 216 | sky | 69 | −57 | 12 | ✓ | +
S^person | 215 | person | 435 | −238 | 198 | sea | 406 | −99 | 307 | ✗ | −
S^vegetation | | vegetation | 157 | 140 | 297 | sea | 114 | 59 | 173 | ✓ | +
S^sky | 418 | sky | 470 | −246 | 224 | sea | 663 | −324 | 339 | ✗ | +

(MSRC)
S^sign | 27 | sign | 65 | 101 | 166 | building | 19 | −10 | 9 | ✓ | +
S^sky | 129 | sky | 139 | −89 | 50 | building | 115 | 119 | 234 | ✗ | −
S^building | 88 | building | 209 | 304 | 513 | sky | 52 | −17 | 35 | ✓ | +
S^car | 6 | car | 6 | 37 | 43 | road | 7 | −3 | 4 | ✓ | +
S^road | 74 | road | 94 | 269 | 363 | sky | 32 | 93 | 125 | | +
S^tree | 100 | tree | 226 | 258 | 484 | sky | 45 | 124 | 169 | ✓ | +
S^body | 32 | body | 54 | 195 | 249 | face | 19 | 4 | 23 | | +
S^face | 21 | face | 35 | 121 | 156 | body | 17 | 10 | 27 | ✓ | +
S^grass | 154 | grass | 221 | 367 | 588 | sky | 48 | 133 | 181 | ✓ | +
S^bird | 29 | bird | 58 | 71 | 129 | grass | 15 | −6 | 9 | ✓ | +
S^dog | 27 | dog | 56 | 84 | 140 | road | 11 | 21 | 32 | ✓ | +
S^water | 62 | water | 113 | 182 | 295 | sky | 19 | 7 | 26 | ✓ | +
S^cow | 43 | cow | 109 | 114 | 223 | grass | 57 | −51 | 6 | ✓ | +
S^sheep | 5 | sheep | 13 | 15 | 28 | grass | 13 | −11 | 2 | ✓ | +
S^flower | 28 | flower | 60 | 103 | 163 | grass | 8 | 12 | 20 | ✓ | +
S^book | 33 | book | 149 | −55 | 94 | face | 5 | 153 | 158 | ✗ | −
S^chair | 19 | chair | 39 | 95 | 134 | road | 9 | −3 | 6 | ✓ | +
S^aeroplane | 18 | aeroplane | 12 | 50 | 68 | sky | 12 | −8 | 4 | ✓ | +
S^boat | 15 | boat | 25 | 45 | 70 | water | 25 | −7 | 18 | ✓ | +

* Although Pop_a > Pop_β in this case, the population Pop_c of the cluster corresponding to the third most frequently appearing object was found to be the highest, which is why we consider this case as a failure.


Table 2.5: Comparing with existing methods in object detection. The reported scores are the classification rates (i.e. number of correctly classified cases divided by the total number of correct cases), averaged over the 21 MSRC object classes for each method.

Method | Average classification rate (%)
Prop. Framework | 45
PLSA-MRF/I [41] | 50
Prop.Fram./M-F/W | 57
Textonboost [12] | 58
Prop.Fram./M-F/S | 62


Figure 2.10: Performance of every model generated in each iteration on the validation and test set for (a) Grass, (b) Road and (c) Sky.


Chapter 3

Using Tagged Images of Low Visual Ambiguity to Boost the Learning Efficiency of Object Detectors


3.1 Introduction

An important factor that affects the quality of supervised classifiers is the size of the

training set. Aiming to improve the performance of the classifiers, the bootstrapping

technique was designed to augment the training set with additional training samples [3].

However, within a typical bootstrapping process, the algorithm searches for the targeted

true positive samples in a large pool of unlabelled data, the majority of which constitute

the undesired negative examples. This precludes accurate selection of true positive

regions, in the case of region level object detection, as the search space is noisy and

dense. In order to thin out the search space of the algorithm, multi-modal selection

strategies have been proposed by replacing the pool of unlabelled examples with user

tagged images and using these tags to refine the pool of candidates into a set of images

that are more probable to contain the targeted object [43; 44]. Following the same idea,

in this work, we present a multi-modal region selection strategy that opts to refine the

search space not only by utilizing textual information for selecting the images that are

more likely to depict the targeted object, but also by modelling visual ambiguity in

order to disregard the ambiguous content.

Towards devising a gauging mechanism that could filter out the ambiguous samples,

the main contribution of this work is to define, model and utilize visual ambiguity,

which arises when two semantically different objects share similar visual stimuli under

the employed representation system. In the proposed approach, visual ambiguity is

modelled through a measure of image trustworthiness, which indicates how much the

initial object detection model is trusted to find the targeted regions within the examined

image. More specifically, for every concept, a set of regions is selected to enhance the

initial training set based on three parameters: a) the visual similarity of the region with

the examined concept as measured by the initial object detection model, b) the textual

similarity of the image tags with the examined concept indicating the possibility of

its existence in the image and, c) the trustworthiness of the image the region belongs

to, as defined by the ambiguity characterizing its content. In this way, the pool of

candidates is limited to the most prominent images that will allow the bootstrapping

algorithm to select accurately true positive examples. Parts of this work were published

as conference papers [45; 46].
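A minimal sketch of how such a gauging mechanism might combine the three parameters follows. The multiplicative combination rule, the trust threshold and all field names are illustrative assumptions, not the exact model developed in this chapter:

```python
# Hedged sketch of ambiguity-aware sample selection. Each candidate region
# carries: a visual score from the initial detector, a textual score derived
# from the image tags, and the trustworthiness of its parent image.

def select_samples(candidates, trust_threshold=0.5, top_k=2):
    scored = []
    for region in candidates:
        if region["trust"] < trust_threshold:
            continue  # discard regions from visually ambiguous images
        # Illustrative combination: weight the detector's visual score by
        # the tag-based evidence and by the image trustworthiness.
        score = region["visual"] * region["textual"] * region["trust"]
        scored.append((score, region["id"]))
    scored.sort(reverse=True)
    return [rid for _, rid in scored[:top_k]]
```

Regions from low-trust images never enter the ranking at all, which is what keeps the bootstrapping pool limited to the most prominent, unambiguous images.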


3.2 Related work

In the area of object detection, datasets of manually annotated image regions have been

widely used [1],[12]. The authors of [1] present a new benchmark for evaluating pixel-

based or region-based methods, which consists of 20000 images manually annotated

at pixel level. They also apply and evaluate a variety of known machine learning

algorithms (e.g. Support Vector Machines, naive Bayes classifier, random forests, etc.).

In [12] semantic segmentation is achieved by learning the conditional distribution of the

class labels given an image, using a Conditional Random Field (CRF) model. However,

given that manual annotation of image regions is a time consuming task, approaches

that operate on weak annotations (i.e. global image level annotations) were proposed.

In this case, the image level keywords are associated with the image regions by either

relying on aspect models like probabilistic Latent Semantic Analysis (pLSA) [11] or by

incorporating multiple instance learning [47]. The authors of [41] propose a method

that combines aspect models (pLSA) with spatial models (Markov Random Fields) with

the aim of labelling image regions. In [48], based also on weak annotations, the authors

present a unified probabilistic generative model capable of jointly learning objects,

attributes and their associations, as well as their location and segmentation. Finally,

considering that the performance of pattern recognition systems is highly influenced by

the number of the training samples [49] and that manual annotation, even at a global

level, is very expensive, the semi supervised approaches became the subject of intense

research efforts [50].

In an attempt to minimize the labelling effort, approaches that rely on active learning (i.e. selectively sampling and annotating examples based on their informativeness as

they are expected to improve the model performance) have recently been presented [51],

[43]. The authors of [51] introduce the concept of live learning and propose to replace

the human oracle in the typical active learning method with a crowdsourcing service

like the MTurk to provide annotations for the selected informative samples. On the

other hand, social networks and user contributed content are leading the recent research

efforts, mainly because of their ability to offer more information than the mere image

visual content, coupled with their potential to cope with almost unlimited growth. In

this direction, the authors of [43] propose a solution for actively sampling the most mis-

classified user tagged images to enrich the negative training set of a concept classifier.


The authors claim that the tags of such images can reliably determine if an image does

not include a concept, thus making social sites a reliable pool of negative examples.

However, active learning without an expert oracle is feasible in these cases because they

either rely on non-expert, but still manual annotations (MTurk), or are applied to im­

age level classifiers, which removes the additional factor of localization (i.e. finding the

exact location of the object within the image). On the contrary, the proposed approach

utilizes user tagged images which are provided at no cost and operates on segmented

regions instead of global images.

Towards fully unsupervised object detection exploiting user tagged images, the au­

thors of [52] propose a multiple instance learning algorithm that operates on one million

flickr images. They incorporate the various ambiguities between classes by constructing

an object correlation network that models the inter-object visual similarities and the

co-occurrences of the classes. Visual ambiguity is also considered in [53], where soft

assignment of visual words is proposed by considering the visual word uncertainty (i.e.

an image feature may have more than one candidates in the visual word vocabulary)

and the visual word plausibility (i.e. when there is no suitable visual word for the image

feature).

The proposed approach is essentially a method for object detection that operates on user tagged images and uses the associated textual information to optimize the selection of training samples in a modified version of self-training. In contrast to active learning, where the goal is to select the most informative samples so as to minimize the required human effort for annotation, the goal of the proposed approach is to be completely relieved of the laborious task of manual annotation. In order to do this, the human annotator is replaced by an automatic region selection strategy that exploits the textual information carried by the images in social networks. Moreover, we opt to enhance the training set with positive samples, instead of negative ones as in [43], allowing for a higher performance boost of the final classifiers. For the same reason, a semi-supervised learning algorithm was chosen instead of the multiple instance learning algorithm that is utilized in [52]. Additionally, the visual ambiguity between regions is defined and modelled. Unlike other works, this measure is exploited directly in the classification scheme for discarding the misleading images that contain ambiguous concepts, as in these cases selecting the targeted region would be rather difficult.



3.3 Approach

The proposed approach for extracting training samples from unambiguous user tagged images is depicted in Fig. 3.1. Given a concept c_k, an initial classifier is trained on a set of regions that are labelled with this concept, and additional regions representing this concept are chosen from a pool of user tagged images harvested from the web. In these images, there is no knowledge of the real objects depicted, or of their exact location within the image. To overcome this obstacle, the following process takes place. The user tagged images are automatically segmented into regions that roughly correspond to semantic objects and visual features are extracted to represent each region. SVMs are utilized to train initial classifiers using the visual features that were extracted from the labelled regions. Applying these classifiers to the unlabelled regions provides the visual scores. Next, the textual scores are extracted from the textual information that accompanies the user tagged images. Finally, visual ambiguity is modelled and transformed into image trustworthiness scores, which practically indicate how much a classifier is trusted to classify the regions that have been extracted from a specific image. In this way, regions are selected so that they represent the concept c_k while, at the same time, the ambiguous content is identified and discarded. This luxury is afforded by the abundance of the available user contributed content.

3.3.1 Segmentation and feature extraction

Segmentation is applied to all images used by this framework, aiming to extract spatial masks of visually meaningful regions. Afterwards, visual features are extracted representing the detected visual regions. In this work we used the same segmentation and feature extraction pipeline as in Sections 2.4.2.2 and 2.4.2.3, with the only difference that in this case we used 500 visual words, resulting in a 500-dimensional feature vector for each region.

3.3.2 Visual and Textual Scores Estimation

For every concept c_k, an object detection model (SVM_ck) is trained using as positive examples the regions that are labelled with c_k, while the rest are used as negative examples (One-Versus-All / OVA approach). For each region extracted from the user tagged images, we use the corresponding feature vector in order to estimate its similarity


[Figure: pipeline from the initial training set and initial model, through visual analysis (visual scores), textual analysis (textual scores) and visual ambiguity (image trustworthiness), to the selection process that yields the selected samples and the enhanced model.]

Figure 3.1: System Overview

to a given concept. A score for every unlabelled region is extracted using the SVM_ck classifier. This score is based on the distance of the feature vector that represents this region from the margin of the SVM_ck model [54]. The higher the outcome of the model for a specific region, the higher the possibility that this region depicts the concept c_k. From now on, we will refer to this score as the visual score VS_ck(r_m^I) of region r_m^I of image I with respect to the model SVM_ck.
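As an illustration, this one-versus-all scoring step can be sketched as follows; the linear decision function merely stands in for the margin distance of a trained SVM_ck, and all model weights and feature values are hypothetical:

```python
# One-versus-all visual scoring sketch: each concept model is a linear
# decision function w.x + b, standing in for a trained SVM margin distance.

def decision_value(weights, bias, features):
    """Signed distance-like score of a region's feature vector from the margin."""
    return sum(w * f for w, f in zip(weights, features)) + bias

def visual_scores(models, region_features):
    """models: {concept: (weights, bias)}; returns one {concept: score} per region."""
    return [{c: decision_value(w, b, x) for c, (w, b) in models.items()}
            for x in region_features]

# Hypothetical 2-dimensional models and regions:
models = {"grass": ([0.9, -0.2], -0.1), "fence": ([-0.4, 0.8], 0.0)}
regions = [[1.0, 0.1], [0.2, 0.9]]
scores = visual_scores(models, regions)
# The first region scores higher for "grass", the second for "fence".
```

A region is then a candidate positive example for the concept whose model assigns it the highest score.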

In addition, user tagged images contain textual information which can guide the training sample selection process. Although these tags describe the images globally and do not provide any information about the location of the objects within an image, they can still be used as an additional criterion besides the visual score of the region. For example, if a region with a high visual score for the concept grass belongs to an image which is not tagged with the word grass, the region can be disregarded. However, in order to exploit this textual information, we need to overcome the well-known problems of social tagging systems (i.e., lack of structure, ambiguity, redundancy, emotional tagging, etc.). To this end we use three approaches in order to measure the semantic relatedness between the image tags and the concepts' lexical description. Firstly, an adapted version of the Google Similarity Distance [55] was used. The original Google


Similarity Distance between words x and y is given by the following expression:

GD(x, y) = ( max{log f(x), log f(y)} - log f(x, y) ) / ( log N - min{log f(x), log f(y)} )   (3.1)

where f(x) denotes the number of pages containing x and f(x, y) denotes the number of pages containing both x and y, as reported by Google. N is a normalization factor that is typically equal to the maximum possible value of the function f(x). In our case, where the objective is to measure the distance between image tags, the Google Similarity Distance was modified in order to rely on the co-occurrence of two tags in the space of social networks, rather than the co-occurrence of two words in the general space of web documents. From now on we will refer to it as the Google-Flickr Distance (GFD). Finally, all extracted distances were normalized to the [0,1] range and the similarity between two tags was calculated as 1 - norm(GFD).
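A minimal sketch of this distance and its conversion to a [0,1] similarity is given below; the tag counts and the normalizing maximum distance are hypothetical:

```python
import math

def google_flickr_distance(fx, fy, fxy, n):
    """Adapted Google Similarity Distance on tag co-occurrences: fx and fy are
    the numbers of images tagged with x and y, fxy the number tagged with both,
    and n the normalization factor (corpus size)."""
    num = max(math.log(fx), math.log(fy)) - math.log(fxy)
    den = math.log(n) - min(math.log(fx), math.log(fy))
    return num / den

def gfd_similarity(fx, fy, fxy, n, max_gfd=1.0):
    """Similarity = 1 - normalized distance, as in the text; max_gfd is the
    maximum distance observed, used for normalization to [0,1]."""
    return 1.0 - google_flickr_distance(fx, fy, fxy, n) / max_gfd

# Two tags that always co-occur have distance 0 (hence similarity 1):
d = google_flickr_distance(100, 100, 100, 10**6)  # -> 0.0
s = gfd_similarity(500, 400, 20, 10**6, max_gfd=1.0)
```

Note that the distance is 0 whenever the two tags appear on exactly the same images, and grows as their co-occurrence count shrinks relative to their individual counts.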

Alternatively, the widely known lexical database WordNet [56] was used in order to measure the semantic relatedness between image tags and concepts. More specifically, we employ the vector similarity metric [57] that combines the benefits of using the strict definitions of WordNet along with the knowledge of the concepts' co-occurrence, which is derived from a large data corpus. Finally, an extra manual step is taken towards disambiguating the textual information. More specifically, when judging the relatedness score between two words, WordNet considers all different "meanings" for each word and outputs the maximum score among all possible combinations. This is an undesirable behaviour, especially in cases where the examined words, apart from the "meaning" intended during the manual annotation process, happen to have other "meanings" that cause a severe misinterpretation of their semantic relatedness. For example, the word "palm" has five different meanings in the WordNet database. The first meaning of the word is the inner surface of the hand from the wrist to the base of the fingers, while another one refers to any plant of the family Palmae having an unbranched trunk crowned by large pinnate or palmate leaves. In order to tackle this problem, while querying WordNet about the similarity between a concept and a tag, we manually select the intended meaning of the concept, resulting in more accurate similarities. In this example, if we intended to search for palm trees we would manually select the second of the two aforementioned meanings of that word. Eventually, the use of any of these three approaches (i.e. GFD, WordNet and disambiguated WordNet) results in a


textual similarity score TSim(tag_j, c_k) between an image tag tag_j and a concept c_k. For every concept, its maximum similarity with the tags of the image I is chosen to gauge the possibility that the concept exists in the specific image:

t_ck^I = max_j { TSim(tag_j, c_k) }   (3.2)

Here, t_ck^I is a number in the [0,1] range and indicates the possibility that the concept c_k is present in the image I.
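The per-image textual score of eq. 3.2 can be sketched as below; the pairwise similarity values are a hypothetical stand-in for the WordNet vector metric, not its actual output:

```python
# Textual score sketch (eq. 3.2): the WordNet vector similarity is replaced
# here by a hypothetical lookup table, purely for illustration.

TSIM = {("lawn", "grass"): 0.9, ("sky", "grass"): 0.2, ("meadow", "grass"): 0.7}

def tsim(tag, concept):
    """Stand-in for TSim(tag_j, c_k); unknown pairs score 0."""
    return TSIM.get((tag, concept), 0.0)

def textual_score(image_tags, concept):
    """t_ck^I = max_j TSim(tag_j, c_k); 0.0 if the image has no tags."""
    return max((tsim(t, concept) for t in image_tags), default=0.0)

score = textual_score(["sky", "lawn", "meadow"], "grass")  # -> 0.9
```

Taking the maximum over the tags means a single strongly related tag is enough to raise the image's textual score for the concept.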

3.3.3 Visual Ambiguity and Image Trustworthiness

In order to model the visual ambiguity that arises between visually similar concepts, visual ambiguity scores are introduced and estimated using the following process. For a concept c_k, given its model SVM_ck, the visual scores of all the regions that have been used to train this model are determined. In the ideal case, the visual scores of all the regions depicting c_k should be much higher than the visual scores of all other regions. When regions that do not depict c_k are associated with high visual scores by SVM_ck, the discriminative ability of SVM_ck is low. This is considered as the visual ambiguity between c_k and the concept c_l, l != k, which is the actual concept depicted by the examined region. The visual ambiguity of c_k and c_l is selected to be the average of the visual scores that the regions belonging to the c_l class received:

VA(c_k, c_l) = (1/M) * sum_{m=1}^{M} VS_ck(r_m^l)   (3.3)

where r_m^l, m = 1, ..., M, are the regions that depict c_l. The visual ambiguity between two concepts c_k and c_l is high when the model that is trained to detect c_k produces high confidence scores for the c_l regions, which practically means that our system tends to confuse the visual information that depicts c_k with the visual information that depicts c_l. For example, the visual ambiguity scores of the closely related couples of concepts grass-plant (0.824) and grass-bush (0.874) are higher than the visual ambiguity score of the couple grass-fence (0.638).

The visual ambiguity scores indicate how much a specific classifier is trusted to distinguish between two concepts when asked to classify a region. Having this knowledge for every couple of concepts, it could be applied to every image separately if the objects present in the image were known. This information might not be available explicitly.


But an indication about the existence of an object within an image is provided by the textual score of the image. If the textual score of a concept in the image is above a threshold th, we consider that the concept is present in the image. After this textual pre-processing step, for every loosely tagged image I we have the 1 x N_c binary matrix indicating the existence or not of each concept:

T_th^I = [ t_th,c1^I   t_th,c2^I   ...   t_th,cNc^I ]   (3.4)

where t_th,ck^I = 1 if t_ck^I >= th, and t_th,ck^I = 0 if t_ck^I < th.

The trustworthiness of the classifier SVM_ck to classify the regions of an image I is defined as the complement of the visual ambiguity of the image I with respect to the concept c_k, which is calculated as the maximum visual ambiguity of the SVM_ck classifier with respect to the concepts that exist in image I, as indicated by the textual scores:

Trust_ck^I = 1 - max_l ( t_th,cl^I * VA(c_k, c_l) )   (3.5)

The trustworthiness score of an image I with respect to c_k gauges how much the classifier SVM_ck can be trusted to classify the regions of the image I and depends on the existence of ambiguous concepts (i.e. c_l) in the image I. In the previous example for the concept grass, the classifier is trusted more to detect the grass regions within images that contain fence than within images that contain bush (i.e. because VA(grass, fence) = 0.638 < VA(grass, bush) = 0.874).
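As a toy sketch of the computations in eqs. 3.3-3.5, with all region scores and tag scores below being hypothetical values chosen to reproduce the grass example:

```python
def visual_ambiguity(scores_of_cl_regions):
    """VA(c_k, c_l): average visual score that model SVM_ck assigns to the
    training regions depicting c_l (eq. 3.3)."""
    return sum(scores_of_cl_regions) / len(scores_of_cl_regions)

def trustworthiness(concept, va, textual_scores, th):
    """Trust_ck^I = 1 - max_l( t_th,cl^I * VA(c_k, c_l) ) (eqs. 3.4-3.5):
    only concepts whose textual score passes the threshold th count."""
    present = {cl: 1 if s >= th else 0 for cl, s in textual_scores.items()}
    return 1.0 - max(present[cl] * va[(concept, cl)] for cl in present)

# Hypothetical visual scores of the 'grass' model on bush and fence regions:
va = {("grass", "bush"): visual_ambiguity([0.9, 0.848]),   # -> 0.874
      ("grass", "fence"): visual_ambiguity([0.7, 0.576])}  # -> 0.638
# An image tagged 'fence' but not 'bush': the grass model is trusted more here.
trust = trustworthiness("grass", va, {"bush": 0.1, "fence": 0.9}, th=0.5)
# -> 1 - 0.638 = 0.362
```

The thresholding step means that an ambiguous concept only lowers the trustworthiness of an image when its tags suggest that concept is actually present.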

3.3.4 Region relevance and selection of training samples

In order to combine the three aforementioned independent scores into a single region relevance score, the geometric mean is chosen over the more typical arithmetic mean due to its robustness when multiplying quantities with different normalizations:

RR_ck(r_m^I) = ( VS_ck(r_m^I) * t_ck^I * Trust_ck^I )^(1/3)   (3.6)


The regions of the user tagged images are ranked according to their region relevance

score, and finally the top N regions with the highest relevance scores are selected to enhance the initial training set.
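The relevance scoring and top-N selection can be sketched as follows, with hypothetical candidate scores:

```python
def region_relevance(vs, ts, trust):
    """Geometric mean of visual, textual and trustworthiness scores (eq. 3.6)."""
    return (vs * ts * trust) ** (1.0 / 3.0)

def select_top_regions(candidates, n):
    """candidates: [(region_id, vs, ts, trust)]; returns the n best region ids
    ranked by their region relevance score."""
    ranked = sorted(candidates, key=lambda c: region_relevance(*c[1:]), reverse=True)
    return [region_id for region_id, _, _, _ in ranked[:n]]

# Hypothetical candidate regions with (visual, textual, trust) scores:
candidates = [("r1", 0.9, 0.8, 0.9), ("r2", 0.9, 0.1, 0.9), ("r3", 0.5, 0.7, 0.4)]
top = select_top_regions(candidates, 2)  # -> ["r1", "r3"]
```

Note how r2, despite its high visual score, is demoted below r3 because its textual score is low; the geometric mean penalizes any single weak factor.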

3.4 Experimental results

The objective of the experimental setup is to show the benefits of modelling visual ambiguity and applying this knowledge in the training sample selection process during self-training. To accomplish that, three configurations based on the calculation of the RR function (eq. 3.6) have been examined.

1. In the first case, RR is calculated using only the visual scores (V), which corresponds to a typical self-training approach.

RR_ck(r_m^I) = VS_ck(r_m^I)   (3.7)

2. In the second case, RR is the geometric mean of the visual and textual scores (VT).

RR_ck(r_m^I) = ( VS_ck(r_m^I) * t_ck^I )^(1/2)   (3.8)

3. In the third case, the proposed approach is evaluated (VTA).

RR_ck(r_m^I) = ( VS_ck(r_m^I) * t_ck^I * Trust_ck^I )^(1/3)   (3.9)

The first and the second cases are essentially used as baselines for measuring the improvement introduced by the incorporation of textual and ambiguity information, respectively.

3.4.1 Datasets

The datasets that were used in our experimental study are shown in Table 3.1. The MIRFLICKR-1M dataset [58] consists of one million user tagged images harvested from flickr. The images of MIRFLICKR-1M were tagged with 862115 distinct tags, of which 46937 were meaningful (i.e. included in WordNet). After the textual preprocessing, 131302 images had no meaningful tag, 825365 images were described with one to 16 meaningful tags and 43333 images had more than 16 meaningful tags. The distribution


Name | Source | Size | Annotation Type | Usage
MIRFLICKR-1M | flickr | 1 million | Loose tags | 100% training images
SAIAPR TC-12 | imageCLEF 2006 | 20000 | Manual region-level annotations | 70-10-20% training-validation-testing images

Table 3.1: Datasets

of the number of images with respect to how many meaningful tags they have can be seen in Fig. 3.2. This dataset constitutes the pool of user tagged images, from which the training regions were selected to enhance the manually trained models. The second dataset, the SAIAPR TC-12 dataset [1], consists of 20000 images labelled at region detail and was split into 3 parts (70% train, 10% validation and 20% test). To acquire comparable measures over the experiments, the images of the SAIAPR TC-12 dataset were segmented and the ground truth label of each segment was taken to be the label of the hand-labelled region that overlapped with the segment by more than 2/3 of the segment's area. The concepts that had fewer than 15 instances were removed to ensure statistical reliability. The mean average precision (mAP) served as the metric for evaluating the proposed approach.
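The 2/3-overlap rule used to transfer ground truth labels to the automatically produced segments can be sketched as follows (the pixel sets and labels are hypothetical, with regions represented as sets of pixel coordinates):

```python
def segment_label(segment_pixels, labelled_regions):
    """Assign to a segment the label of the ground-truth region that covers
    more than 2/3 of the segment's area; None if no region does.
    segment_pixels: set of (row, col); labelled_regions: {label: set of (row, col)}."""
    for label, region_pixels in labelled_regions.items():
        overlap = len(segment_pixels & region_pixels)
        if overlap > 2 * len(segment_pixels) / 3:
            return label
    return None

# A 4-pixel segment, 3 of whose pixels fall inside the hand-labelled grass region:
seg = {(0, 0), (0, 1), (1, 0), (1, 1)}
regions = {"grass": {(0, 0), (0, 1), (1, 0)}, "sky": {(1, 1)}}
label = segment_label(seg, regions)  # -> "grass" (3/4 > 2/3 overlap)
```

Because the threshold exceeds 1/2, at most one region can satisfy it, so the assignment is unambiguous; segments with no dominant region are left unlabelled.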

3.4.2 Evaluation of different textual similarity estimation approaches

In order to investigate the impact of textual analysis on the process of optimizing the region selection, we have comparatively evaluated the performance of the three methods that were described in Section 3.3.2 for calculating the textual scores. To this end, the textual based region selection approach VT was applied three times, each time using a different textual similarity estimation method. The results for each concept are shown in Fig. 3.2. The first bar shows the results using WordNet, the second using the manual disambiguation process with WordNet, and the third bar using the Google-Flickr Distance. In general, we can see that for the majority of concepts all three methods perform equivalently, with WordNet and disambiguated WordNet performing slightly better than the Google-Flickr Distance. This was expected, since the Google-Flickr Distance is based solely on the words' co-occurrences while


Figure 3.2: Distribution of images according to the number of meaningful tags they have

the WordNet-based metric also draws on the information provided by the WordNet lexical database. However, the benefit of the Google-Flickr Distance is that it is fully automatic and can be estimated for any word as long as it exists in flickr, while WordNet limits the concepts and tags to the words included in its lexical database. Finally, by looking more closely at the results obtained using WordNet and its disambiguated version, it is interesting to note that the performance of some ambiguous concepts like palm and branch was boosted by the use of this extra disambiguation step. Based on the above, it is evident that the quality of textual scores largely depends on the nature of the considered concept (e.g. ambiguous concepts, concepts with overlapping WordNet glosses, concepts that can be better explained through their co-occurrence than their meaning) and different methods can be used to cover all existing cases. In the following, the WordNet-based approach was chosen since it performed favourably compared to the Google-Flickr Distance and does not require any human supervision like its disambiguated version.


Figure 3.2: Performance of the three examined textual similarity estimation approaches.


3.4.3 Sample Selection Performance

The objective of this experiment is to show the impact of employing visual ambiguity, in the form of image trustworthiness scores, on the ranking of the regions. In order to be able to evaluate the selection process directly, the user tagged images should be annotated at region level. For this reason, the training set of the SAIAPR TC-12 dataset (14k images) was used by loosening the region labels to image tags-keywords (i.e. if the regions r1, r2 and r3 belonging to an image I are annotated as sky, sea and sand respectively, then we consider that the tags for image I are also sky, sea and sand). The initial models were trained using the validation set (2k images) and were applied to the regions of the training set of SAIAPR TC-12. In Fig. 3.3, the distribution of the region relevance scores, calculated as explained for each configuration (i.e. V, VT and VTA), is shown for the concept grass. The black solid line is the distribution of the positive examples, i.e. the targeted regions which we opt to select, and the red dashed line is the distribution of the negative examples. It is obvious that without the auxiliary information the classifier performs poorly (Fig. 3.3(a)), since the two distributions overlap significantly. Moreover, we can see that the textual information has eliminated a large number of non-relevant regions (Fig. 3.3(b)), which was expected since in this case the tags are accurate. Finally, the impact of visual ambiguity is clearly shown in Fig. 3.3(c), where part of the black distribution, i.e. the true positives, now stands out, receiving much higher region relevance scores compared to the rest. This effect would be ideal in the case of user tagged images since it makes the selection of the top N regions more accurate. Additionally, the mAP over all concepts is measured and reported in the figure captions. The numerical results validate the aforementioned conclusions as well.

3.4.4 Retrained Models Performance

In this experiment, the performance of the initial classifiers, which were trained using the manually labelled regions, is compared to the performance of the enhanced classifiers (i.e. the ones trained on the combination of the labelled and the selected regions). The initial classifiers were enriched with the top 1k regions as they were ranked based on the configurations V, VT and VTA. The validation set of the SAIAPR TC-12 dataset (2k images) is used for training the initial models and the test set (4k images)


[Figure: histograms of region relevance scores for positive (solid black) and negative (dashed red) examples under each configuration: (a) V (4.56% mAP), (b) VT (58.78% mAP), (c) VTA (65.55% mAP).]

Figure 3.3: The distribution of the RR scores (Eq. 3.6) based on the configuration a) V, b) VT and c) VTA.


is used to evaluate the performance of all generated models. The mAP of the initial models is 5.9%, while adding regions ranked based on the V configuration degraded the models to 4.9% mAP. Using the VT and VTA configurations, the enhanced models increased their performance to 6% and 6.3%, respectively. These results comply with the conclusions reached previously, showing the positive impact of ambiguity on the sample selection process. Examining each concept independently, as shown in Fig. 3.3, the configuration incorporating visual ambiguity exhibits the highest performance in 26 out of the 62 examined concepts, compared to 19 for the VT configuration, 3 for the V configuration and 14 for the configuration based on the initial classifiers.

Overall, the proposed approach manages to increase the mAP score of the initial classifiers by 0.4% units. In Fig. 3.3, the first bar (black) is the performance of the initial classifiers, and the second bar (red) is the performance of the classifiers enhanced with the regions that were selected by the baseline configuration V using eq. 3.7. For the third bar (yellow), visual and textual scores contributed to the region relevance scores (VT configuration using eq. 3.8), while for the fourth bar (white) all the scores were used (VTA configuration using eq. 3.9).

In an attempt not only to evaluate the efficiency of the developed models but also to get an insight into which regions were selected, we perform the following. We visually examine some of the regions ranked amongst the top N places. Examples of the regions selected based on eqs. 3.7, 3.8 and 3.9 are shown in Figures 3.4(a), (b) and (c), respectively. It is obvious that in the V case, where only the visual scores were used, the performance of the initial model is very poor and the selected regions are very noisy (Fig. 3.4(a)). Adding the textual information allows us to select a number of grass regions, and the addition of visual ambiguity greatly increases the quality of the selected regions. Note that in the experiment of Section 3.4.3, this specific model of the concept grass was ranking the new unlabelled samples with a success rate of 17.14% in terms of average precision (AP), while when adding the textual information the performance rose to 72.63%, and finally to 76.07% when incorporating the visual ambiguity as well.


[Figure: per-concept AP bars for the initial classifiers (Validation) and for the classifiers enhanced using the V (Visual), VT (Textual) and VTA (Ambiguity) configurations.]

Figure 3.3: Performance of the initial and the enhanced classifiers using the V, VT and VTA configurations.


Figure 3.4: Indicative regions for the concept grass selected using the configurations (a) V, (b) VT and (c) VTA. A blue bounding box indicates a false positive result.

3.4.5 Comparing with existing methods

In order to compare the proposed approach with existing methods, the results of [1] were used. The authors introduce the SAIAPR TC-12 dataset and evaluate seven different classification schemes. More specifically, they compare the performance of a basic linear classification model called Zarbi [59], the kernel variants of logistic and ridge regression [60] called klogistic [59] and kridge [61], and the popular Naive Bayes [62], Neural Networks [63] and Random Forests [64] classifiers. In all cases, the manually labelled regions of the training set were used to train the classifiers following the OVA approach. Every test region was classified by all the classifiers and their outputs were merged by selecting the prediction of the classifier with the highest confidence. In order to compare our approach with the various classification schemes, the same merging procedure was applied. The classification accuracy served as the evaluation measure. Table 3.2 shows the results. We can see that the performance of the proposed approach is higher in three of the seven examined cases, i.e. when using the Zarbi, Naive Bayes and SVM classifiers. However, given that our purpose is not to evaluate the performance of different classification schemes but to assess the improvement introduced by optimizing


Classifier | Classification Accuracy
Zarbi [59] | 6.4
Naive Bayes [62] | 14.8
Klogistic [59] | 35
Neural Net [63] | 22.9
SVM [65] | 6.2
Kridge [60; 61] | 30.3
Random Forest [64] | 36.8
Proposed Approach | 20.6

Table 3.2: Comparing the performance of the proposed approach with [1]

the sample selection process, the only value that can be considered directly comparable with our case is the one obtained using the SVM classification scheme. For this case, it is evident that the proposed approach significantly outperforms the SVM classifier that was evaluated in [1].
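The max-confidence merging of the one-versus-all outputs and the accuracy measure used above can be sketched as follows (all scores and labels are hypothetical):

```python
def merge_ova_predictions(scores_per_classifier):
    """Label a test region with the concept whose one-versus-all classifier
    produced the highest confidence score."""
    return max(scores_per_classifier, key=scores_per_classifier.get)

def classification_accuracy(predictions, ground_truth):
    """Fraction of regions whose merged prediction matches the true label."""
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# Per-region confidence scores from two hypothetical OVA classifiers:
scores = [{"grass": 0.7, "fence": 0.2}, {"grass": 0.3, "fence": 0.6}]
preds = [merge_ova_predictions(s) for s in scores]  # -> ["grass", "fence"]
acc = classification_accuracy(preds, ["grass", "grass"])  # -> 0.5
```

This winner-takes-all merging is what turns the per-concept detectors into a single multi-class labelling whose accuracy can be compared against the classifiers of [1].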

3.5 Discussion of the results

In this work we have presented a means to quantify and utilize the visual ambiguity that characterizes image content, with a view to boosting the efficiency of object detection classifiers. More specifically, we have relied on the self-training paradigm to validate the merit of using visual ambiguity for the optimization of the sample selection process. Our experimental results have shown that by using the proposed approach to cope with the existing ambiguities, the improvement in performance is higher than the one achieved using a typical self-training approach, where the sample selection process is based solely on the visual information of the initial models. Moreover, although the employed visual analysis scheme performed satisfactorily on limited-size datasets, the situation was rather different when the size of the employed dataset reached the order of one million images. According to our experimental observations (Section 3.4.3), the level of noise that is hidden in this vast amount of content sometimes made the selection of the relevant samples impossible, especially when the employed models worked without any constraint. This was in fact an additional argument in favour of the proposed approach, since it can be used to reduce the level of noise and


help the distinction between noisy and relevant samples.An interesting observation that came out of our experimental study relates to the

use of WordNet and the fact that this similarity metric does not take into account the

context of the words to disambiguate their meaning. For example, the words palm and tree would always yield a very high similarity score regardless if the intended meaning

for palm was the tree or the hand. In these cases our approach was heavily misled,

making impossible the extraction of a reliable score for image trustworthiness. In the

work presented in the next chapter, the WordNet based textual analysis is replaced with

the popular bag of textual words, which also takes into account the context of each

tag in order to measure the semantic relatedness. Another observation concerns the

impact of adding more data to the training set; although using auxiliary information

significantly improved the accuracy of the selected samples, this was not reflected in

the performance of the retrained models to a similar degree. This can be attributed to the fact that the informativeness of the selected samples was not taken into account.

For this reason, in the next chapter we examine how the known principles of active

learning apply in the context of user tagged images.


Chapter 4

Active learning in social context


4.1 Introduction

In the typical version of active learning, the pool of candidates usually consists of

unlabelled examples that are annotated upon request by an error-free oracle. This requirement, which implies the involvement of a human annotator, renders active learning

impractical in cases where the initial set needs to be enhanced with a significantly high

number of additional samples while, at the same time, limiting the scalability of this

approach. On the other hand, the widespread use of Web 2.0 has made available large

amounts of user tagged images that can be obtained at almost no cost and offer more

information than their mere visual content. Our goal in this work is to examine active learning in a rather different context from what has been considered so far. More

specifically, if we could leverage these tags to become indicators of the images' actual

content, we could potentially remove the need for a human annotator and automate

the whole process. This, however, adds a new parameter, the oracle’s confidence about

the actual image content, that should also be considered when actively selecting new

samples. Additionally, even though in our case there is no annotation effort, adding

informative instead of random samples is still important to minimize the complexity

of the classification models (i.e. achieve the same robustness with significantly fewer

images).

The novelty of this work, in contrast to what has been considered so far in active learning, is to propose a sample selection strategy that maximizes not only the informativeness of the selected samples but also the oracle's confidence about their actual content. Towards this goal, we quantify the sample informativeness by measuring their distance from the separating hyperplane of the visual model, while the oracle's confidence is measured based on the prediction of a textual classifier trained on a set of descriptors extracted using a typical bag-of-words approach [66]. Joint maximization is then accomplished by ranking the samples based on the probability of selecting a sample given the two aforementioned quantities (see Fig. 4.1). This probability indicates

the benefit that our system is expected to have if the examined sample is selected

to enhance the initial model. The work presented in this chapter was published as a

conference paper [44].


Figure 4.1: System overview. The initial training set yields an initial visual model whose analysis provides the informativeness of each candidate; textual analysis of flickr tags provides the oracle's confidence; the two are combined into a selection index used for ranking, and the selected samples produce the enhanced model.

4.2 Related Work

The examined context of this work combines three topics: active learning, the multimedia domain and noisy data. During the past decade there have been many works exploring a subset of these topics, e.g. active learning in the multimedia domain [67; 68], active learning with noisy data [69; 70; 71], or even non-active learning from noisy data in the multimedia domain [46; 72; 73; 74; 75; 76; 77]. However, only recently has the scientific community started to investigate the implications of substituting the human oracle with a less expensive and less reliable source of annotations in the multimedia domain. There have been only a few attempts to combine active learning with user contributed images, and most of them rely on either a human annotator or on the use of active crowdsourcing (i.e. a service like MTurk) and not on passive crowdsourcing (i.e. the user provided tags that are typically found in social networks like flickr). In this direction, the authors of [78] propose to use flickr notes in the typical active learning framework with the purpose of obtaining a training dataset for object localization. In a similar endeavour, the authors of [51] introduce the concept of live learning, where they attempt to combine active learning with crowdsourced labelling. More specifically,


rather than filling the pool of candidates with some canned dataset, the system itself

gathers possibly relevant images via keyword search on flickr. Then, it repeatedly

surveys the data to identify the samples that are most uncertain according to the

current model, and generates tasks on MTurk to get the corresponding annotations.

On the other hand, social networks and user contributed content are leading most of

the recent research efforts, mainly because of their ability to offer more information than

the mere image visual content, coupled with the potential to grow almost unlimitedly.

In this direction, the authors of [43] propose a solution for sampling user-tagged images

to enrich the negative training set of an object classifier. The presented approach is

based on the assumption that the tags of such images can reliably determine if an image

does not include a concept, thus making social sites a reliable pool of negative examples.

The selected negative samples are further sampled by a two stage sampling strategy.

First, a subset is randomly selected and then, the initial classifier is applied on the

remaining negative samples. The examples that are most misclassified are considered

as the most informative negatives and are finally selected to boost the classifier.

Our aim in this work is to investigate the extent to which the user tagged images

that are found in social networks can be used as a reliable substitute of the human

oracle in the context of active learning. Given that the oracle is not expected to reply

with 100% correctness to the queries submitted by the selective sampling mechanism,

we expect to face a number of implications that will question the effectiveness of active

learning in noisy context. In this respect our work differs from the large body of

methods found in the literature that invariably exhibit undue sensitivity to label noise.

In most of the works that do not use an expert as the oracle, MTurk is used instead

to annotate the datasets. However, although active crowdsourcing services like MTurk

are closer to expert’s annotation [7] with respect to noise, they cannot be considered

fully automated. In this work we rely on data originating from passive crowdsourcing

(flickr images and tags) that although noisier, can be used to support a fully automatic

active learning framework. The work presented in [43] is examined in the same context

as this work (i.e. active learning in the multimedia domain using data from passive

crowdsourcing). However, [43] focuses on enriching the negative training set, whereas

our work focuses on enriching the positive training set that is more complex, since

negative training samples are generally easier to harvest. Moreover, most of the existing


datasets already contain a large number of negative examples but lack positives, which

renders a positive sample selection strategy more applicable to a real world scenario.

4.3 Selective sampling in social context

Let us consider a typical case where, given a concept ck, a base classifier is trained on the initial set of labelled images using SVMs. We follow the popular rationale of SVM-based active learning methods ([79], [80], [81]), which quantify the informativeness of a sample based on its distance from the separating hyperplane of the visual model (Section 4.3.1).

In the typical active learning paradigm, a human oracle is employed to decide which

of the selected informative samples are positive or negative. However, in the proposed

scheme the human oracle is replaced with user contributed tags. Thus, in order to

decide about a sample’s actual label we utilize a typical bag-of-words classification

scheme based on the image tags and the linguistic description of ck. The outcome of this process is a confidence score for each image-concept pair (i.e. the oracle's confidence), which we consider a strong indicator of the presence or absence of ck in

the image content (Section 4.3.2). Finally, the candidate samples are ranked based on

the probability of selecting a new image given the two aforementioned quantities. The

samples with the highest probability are considered the ones that jointly maximize the

samples’ informativeness and oracle’s confidence, and are selected to enhance the initial

training set.

4.3.1 Measuring informativeness

As already mentioned, the informativeness of an image is measured using the distance

of its visual representation from the hyperplane of the visual model. For the visual

representation of the images, we have used the approach that was shown to perform

best in [82]. More specifically, gray SIFT features were extracted at densely selected

key-points at four scales, using the vl-feat library [83]. Principal component analysis

was applied on the SIFT features, decreasing their dimensionality from 128 to 80. The

parameters of a Gaussian mixture model with K = 256 components were learned by

expectation maximization from a set of descriptors, which were randomly selected from

the entire set of descriptors extracted from an independent set of images. The descriptors

were encoded in a single feature vector using the Fisher vector encoding [84]. Moreover,


each image was divided into 1x1, 3x1 and 2x2 regions, resulting in 8 regions in total. A feature vector was extracted for each region by the Fisher vector encoding and the feature vector of the whole image (1x1) was calculated using sum pooling [82]. Finally, the feature vectors of all 8 regions were l2-normalized and concatenated into a single 327680-dimensional feature vector, which was again power- and l2-normalized.
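The aggregation and normalization steps above can be sketched with plain numpy, assuming the per-region Fisher vectors are already computed (the thesis uses vl-feat for this; the function names below are ours):

```python
import numpy as np

def power_l2_normalize(v, alpha=0.5):
    """Signed power normalization followed by l2 normalization."""
    v = np.sign(v) * np.abs(v) ** alpha
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def aggregate_regions(region_fvs):
    """l2-normalize each region's Fisher vector, concatenate them,
    then power- and l2-normalize the concatenated vector."""
    parts = [fv / (np.linalg.norm(fv) or 1.0) for fv in region_fvs]
    return power_l2_normalize(np.concatenate(parts))

# 8 regions (1x1 + 3x1 + 2x2), each 2 * 80 * 256 = 40960-dimensional
# (two Fisher statistics per PCA-reduced SIFT dimension per GMM component).
rng = np.random.default_rng(0)
region_fvs = [rng.standard_normal(2 * 80 * 256) for _ in range(8)]
image_vector = aggregate_regions(region_fvs)
assert image_vector.shape == (327680,)
```

Note that 8 regions of 40960 dimensions each recover the 327680-dimensional image vector stated above.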

For every concept ck, a linear SVM classifier (wk, bk), where wk is the normal vector to the hyperplane and bk the bias term, was trained using the labelled training set. The images labelled with ck were chosen as positive examples while all the rest were used as negative examples (One Versus All / OVA approach). For each candidate image Ii, represented by a feature vector xi, the distance from the hyperplane V(Ii, ck) is extracted by applying the SVM classifier:

V(Ii, ck) = wk · xi^T + bk    (4.1)

Using Eq. 4.1 we obtain the prediction scores, which indicate the certainty of the SVM

model that the image depicts the concept ck. In the typical self-training paradigm [3],

this certainty score is used to rank the samples in the pool of candidates and the samples

with the highest certainty scores are chosen to enhance the models. However, as claimed

and proven by the active learning theory [69], [79] these samples do not provide more

information to the classifiers in order to alter significantly the classification boundaries.

Alternatively, as suggested by the active learning theory [69], the samples for which

the initial classifier is more uncertain are more likely to increase the classifier's performance if selected. In the case of an SVM classifier, the margin around the hyperplane forms an uncertainty area and the samples that are closer to the hyperplane are considered to be the most informative ones (Fig. 4.2) [79]. Based on the above, the samples that we want to select (i.e. the most informative) are the ones with the minimum distance to the hyperplane. Additionally, we only consider samples that lie in the margin area, since the rest of the samples are not expected to have any impact on the

enhanced classifiers. We denote the probability to select an image given its distance

to the hyperplane V(Ii, ck) as P(S|V). Based on our previous observations, shown in

Fig. 4.2, this probability can be formulated as a function of the sample’s distance to


Figure 4.2: Informativeness. Samples on or beyond the margin boundaries (w·x + b = ±1) have informativeness 0 (min); informativeness is 1 (max) on the hyperplane and takes intermediate values inside the margin.

the hyperplane, which can be seen in Fig. 4.3:

P(S|V) = 1 − |V|,  if |V| ≤ 1
P(S|V) = 0,        otherwise        (4.2)
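Eq. 4.2 is straightforward to implement; the following sketch (function name ours) maps SVM decision values to informativeness scores:

```python
import numpy as np

def informativeness(V):
    """P(S|V) from Eq. 4.2: decreases linearly with the absolute
    distance to the hyperplane inside the margin (|V| <= 1) and is
    zero for samples on or beyond the margin boundaries."""
    V = np.asarray(V, dtype=float)
    return np.where(np.abs(V) <= 1.0, 1.0 - np.abs(V), 0.0)

# A sample on the hyperplane (V = 0) is maximally informative, while
# samples outside the margin are never selected.
assert float(informativeness(0.0)) == 1.0
assert float(informativeness(-0.5)) == 0.5
assert float(informativeness(2.3)) == 0.0
```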

4.3.2 Measuring oracle's confidence

In order to measure the oracle's confidence about the existence of the concept ck in each tagged image, a typical bag-of-words scheme is utilized [66]. The vocabulary is extracted from a large independent image dataset crawled from flickr. Initially, the distinct tags of all the images are gathered. The tags that are not included in WordNet are removed and the remaining tags compose the vocabulary, consisting of 46937 distinct tags. Then, in order to represent each image with a vector, a histogram is calculated by assigning the value 1 at the bins of the image tags in the vocabulary. Finally, PCA was applied to the resulting histograms in order to reduce the dimensionality of the vector from 46937 to 7000 dimensions. The number of the reduced dimensions (i.e. 7000)



Figure 4.3: Probability of selecting a sample based on its distance to the hyperplane

was chosen so that 99.5% of the data variance was kept. This scheme was chosen in

preference to the ones presented in Section 3.3.2 due to its ability to take into account

the context of the tags.
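A minimal sketch of the histogram construction, using a toy vocabulary in place of the 46937-tag one (the PCA projection to 7000 dimensions is omitted here):

```python
import numpy as np

def tag_histogram(tags, vocabulary):
    """Binary bag-of-words histogram: assign 1 at the bin of every
    image tag found in the vocabulary; tags outside the vocabulary
    (i.e. not in WordNet) are simply ignored."""
    index = {tag: i for i, tag in enumerate(vocabulary)}
    h = np.zeros(len(vocabulary))
    for tag in tags:
        if tag in index:
            h[index[tag]] = 1.0
    return h

# Toy vocabulary standing in for the 46937 WordNet-filtered flickr tags.
vocab = ["beach", "dog", "sunset", "tree", "water"]
h = tag_histogram(["dog", "sunset", "dslr"], vocab)
assert h.tolist() == [0.0, 1.0, 1.0, 0.0, 0.0]
```

In the full pipeline the resulting 46937-dimensional histograms would then be PCA-projected down to 7000 dimensions before training the textual SVM.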

Afterwards, for every concept ck, a linear SVM model (wk^t, bk^t) is trained using the tag histograms as the feature vectors. In order to do this, a training set of images that contains both tags and ground truth information is utilized. The tags are required in order to calculate the feature vectors and the ground truth information to provide the class labels for training the model. In the testing procedure, for every tagged image Ii the feature vector fi is calculated as above and the SVM model is applied. This results in a value T(Ii, ck) for each tagged image, which corresponds to the distance of fi from the hyperplane:

T(Ii, ck) = wk^t · fi^T + bk^t    (4.3)

This distance indicates the oracle's confidence that the examined image Ii depicts the concept ck.

We denote the probability to select an image Ii given the oracle's confidence T(Ii, ck) as P(S|T). In order to transform the oracle's confidence T(Ii, ck) (which corresponds to the distance of Ii to the SVM hyperplane) into a probability, we use a modification of Platt's algorithm [85] proposed by Lin et al. [86]. Thus, the probability P(S|T) can



Figure 4.4: Probability of selecting a sample based on the oracle's confidence

be formulated as a function of the oracle’s confidence using the sigmoid function as

shown in Fig. 4.4:

P(S|T) = exp(−AT − B) / (1 + exp(−AT − B)),  if AT + B ≥ 0
P(S|T) = 1 / (1 + exp(AT + B)),              if AT + B < 0        (4.4)

The parameters A and B are learned on the training set using cross validation.
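Eq. 4.4 can be sketched directly; the two branches are the numerically stable evaluation of the same sigmoid 1/(1 + exp(AT + B)), and the parameter values below are illustrative rather than the cross-validated ones:

```python
import math

def oracle_confidence(T, A, B):
    """P(S|T) from Eq. 4.4: the Platt-style sigmoid 1/(1 + exp(A*T + B))
    evaluated in the numerically stable two-branch form of Lin et al."""
    z = A * T + B
    if z >= 0:
        return math.exp(-z) / (1.0 + math.exp(-z))
    return 1.0 / (1.0 + math.exp(z))

# With a negative slope A, a larger (more confident) textual SVM score T
# maps to a selection probability closer to 1.
assert abs(oracle_confidence(0.0, -2.0, 0.0) - 0.5) < 1e-12
assert oracle_confidence(3.0, -2.0, 0.0) > 0.99
assert oracle_confidence(-3.0, -2.0, 0.0) < 0.01
```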

4.3.3 Sample ranking and selection

Our aim is to calculate the probability P(S = 1|V,T) that an image is selected (S = 1) given the distance V of the image to the hyperplane and the oracle's confidence T. Considering that V and T originate from different modalities (i.e. visual and textual respectively), we regard them as independent. Using the basic rules of probability (e.g. the Bayes rule) and based on our assumption that V and T are independent, we can express the probability P(S|V,T) as follows:

P(S|V,T) = P(V,T|S) P(S) / P(V,T)
         = [P(S|V) P(V) / P(S)] [P(S|T) P(T) / P(S)] P(S) / P(V,T)
         = P(S|V) P(S|T) P(V) P(T) / (P(V,T) P(S))

In order to calculate the probability P(S = 1|V,T) and eliminate the probabilities P(V), P(T) and P(V,T), we divide the probability of selecting an image by the probability of not selecting it:

P(S=1|V,T) / P(S=0|V,T) = [P(S=1|V) P(S=1|T) / P(S=1)] / [P(S=0|V) P(S=0|T) / P(S=0)]

since the factors P(V), P(T) and P(V,T) cancel out. Then we use the basic rule that the probability of an event's complement equals 1 minus the probability of the event (P(S=0|V,T) = 1 − P(S=1|V,T)):

P(S=1|V,T) / (1 − P(S=1|V,T)) =
    P(S=1|V) P(S=1|T) (1 − P(S=1)) / [(1 − P(S=1|V)) (1 − P(S=1|T)) P(S=1)]

Solving for P(S=1|V,T) yields:

P(S=1|V,T) = P(S=1|V) P(S=1|T) (1 − P(S=1)) /
    [P(S=1) − P(S=1) P(S=1|T) − P(S=1) P(S=1|V) + P(S=1|V) P(S=1|T)]    (4.5)

Thus, we only need to estimate three probabilities: P(S = 1), P(S = 1|V) and P(S = 1|T). The first one is set to 0.5, as the probability of selecting an image without any prior knowledge is the same as the probability of dismissing it. For the estimation of the other two probabilities, we use equations 4.2 and 4.4 (shown in Fig. 4.3 and 4.4). Finally, the top N images with the highest probability P(S = 1|V,T) are selected to enhance the initial training set.
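The ranking and selection step can be sketched as follows, with P(S = 1) fixed to 0.5 as above (the image identifiers and probability values are illustrative):

```python
def fusion_probability(p_v, p_t, p_s=0.5):
    """P(S=1|V,T) from Eq. 4.5: fuse the informativeness-based
    probability p_v = P(S=1|V) and the oracle-based probability
    p_t = P(S=1|T) under the independence assumption."""
    num = p_v * p_t * (1.0 - p_s)
    den = p_s - p_s * p_t - p_s * p_v + p_v * p_t
    return num / den if den > 0 else 0.0

def select_top(pool, n):
    """Rank a pool of (image_id, p_v, p_t) triples by P(S=1|V,T)
    and keep the n highest-scoring images."""
    ranked = sorted(pool, key=lambda s: fusion_probability(s[1], s[2]),
                    reverse=True)
    return [image_id for image_id, _, _ in ranked[:n]]

pool = [("img1", 0.9, 0.2), ("img2", 0.6, 0.8),
        ("img3", 0.1, 0.95), ("img4", 0.5, 0.5)]
assert select_top(pool, 2) == ["img2", "img1"]
```

Note how "img1", which the visual model alone would rank first, is demoted because the oracle's confidence in it is low; the fusion prefers "img2", which scores reasonably on both quantities.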

4.4 Experiments

4.4.1 Datasets and implementation details

Two datasets were employed for the purpose of our experiments (Table 4.1). The

imageCLEF dataset I C [87] consists of 25000 labelled images and was split into two


Notation | Name         | Source | Size | Annotation type          | Usage
IC       | imageCLEF    | flickr | 25k  | Ground truth & user tags | 15k training and 10k testing images
F        | MIRFLICKR-1M | flickr | 975k | User tags                | Pool of candidates

Table 4.1: Datasets

parts (15k train and 10k test images). The ground truth labels were gathered using

Amazon’s crowdsourcing service MTurk. The dataset was annotated by a vocabulary

of 94 concepts which belong to 19 general categories (age, celestial, combustion, fauna,

flora, gender, lighting, quality, quantity, relation, scape, sentiment, setting, style, time

of day, transport, view, water, weather). On average there are 934 positive images per concept, while the minimum and the maximum number of positive images for a single

concept is 16 and 10335 respectively. In our experimental study the 15k training images

were used to train the initial classifiers.

The MIRFLICKR-1M dataset F [58] consists of one million user tagged images

harvested from flickr. The images of F were tagged with 862115 distinct tags of which 46937 were meaningful (included in WordNet). After the textual preprocessing, i.e.

removing the tags that were not included in WordNet, 131302 images had no meaningful

tags, 825365 images were described by 1 to 16 meaningful tags and 43333 images had

more than 16 meaningful tags. Given that the IC dataset is a subset of F, the images

that are included in both sets were removed from F. In our experiments, this dataset

constitutes the pool of user tagged images, out of which the top N = 500 images ranked

by Eq. 4.5 are selected for each concept (i.e. 94 concepts * 500 images per concept =

47k images total) to act as the positive examples enhancing the initial training set. Finally, mean average precision (mAP) served as the metric for measuring the models

classification performance and evaluating the proposed approach.

4.4.2 Evaluation of the proposed selective sampling approach

The objective of this section is to compare the proposed active sample selection strategy

against various baselines. The first baseline is the initial models that were generated


using only the ground truth images from the training set (15k images). Afterwards, the

initial models are enhanced with positive samples from F using the following sample

selection strategies:

Self-training [3]: The images that maximize the certainty of the SVM model trained on visual information (i.e. maximize the visual distance to the hyperplane as measured by Eq. 4.1) are chosen.

Textual based: The images that maximize the oracle's confidence are selected (Eq. 4.4).

Max informativeness: The images that maximize the informativeness (i.e. are closer to the hyperplane) are chosen (Eq. 4.2).

Naïve oracle: The images that maximize the informativeness (Eq. 4.2) and explicitly contain the concept of interest in their tag list are chosen (i.e. plain string matching is used).

Proposed approach: The images that jointly maximize the sample's informativeness and the oracle's confidence are chosen (Eq. 4.5).
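The five strategies differ only in the ranking key applied to the pool of candidates; a hypothetical sketch (the field names are ours) makes the contrast explicit:

```python
# Each candidate s is a dict with the visual SVM score V (Eq. 4.1), the
# oracle probability p_t (Eq. 4.4), a binary tag_match flag (plain string
# matching) and the joint probability p_joint (Eq. 4.5).
def margin_informativeness(V):
    return max(0.0, 1.0 - abs(V))  # Eq. 4.2

ranking_keys = {
    "self_training":       lambda s: s["V"],            # max SVM certainty
    "textual_based":       lambda s: s["p_t"],          # max oracle confidence
    "max_informativeness": lambda s: margin_informativeness(s["V"]),
    "naive_oracle":        lambda s: margin_informativeness(s["V"]) * s["tag_match"],
    "proposed":            lambda s: s["p_joint"],      # joint maximization
}

s = {"V": 0.2, "p_t": 0.7, "tag_match": 0, "p_joint": 0.62}
assert abs(ranking_keys["max_informativeness"](s) - 0.8) < 1e-12
assert ranking_keys["naive_oracle"](s) == 0.0  # concept absent from tag list
```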

The average performance of the enhanced classifiers using the aforementioned sample

selection strategies is shown in Table 4.2. In all cases, 500 samples were selected to

enhance the training but the ranking function was different (i.e. Self-training, Textual

based, Max informativeness, Naïve oracle and Proposed approach). We can see that

in all cases the enhanced classifiers outperform the baseline. Moreover, the approaches

relying on active learning yield a higher performance gain compared to the typical

self-training approach, showing that the informativeness of the selected samples is

a critical factor. The same conclusion is drawn when comparing the textual based

approach to the proposed method, showing that informativeness is crucial to optimize

the learning curve, i.e. achieve higher improvement when adding the same number of

images. On the other hand, the fact that the proposed sample selection strategy and the

string matching variation (i.e. naïve oracle) outperform significantly the visual-based

variations verifies that the oracle’s confidence is a critical factor when applying active

learning in social context, and unless we manage to consider this value jointly with


informativeness, the selected samples are inappropriate for improving the performance

of the initial classifiers.

Additionally, we note that the naïve oracle variation performs relatively well, which

can be attributed to the high prediction accuracy achieved by string matching. Nevertheless, the recall of string matching is expected to be lower than the textual similarity

synonyms, plural versions and the context of the tags. This explains the superiority

of our method compared to the naïve oracle variation. In order to verify that the

performance improvement of the proposed approach compared to the naïve oracle is

statistically significant, we apply the Student’s t-test to the results, as it was proposed

for significance testing in the information retrieval field [88]. The obtained p-value is

2.58e-5, significantly smaller than 0.05, which is typically the limit for rejecting the

null hypothesis (i.e. the results are obtained from the same distribution and thus the

improvement is random), in favour of the alternative hypothesis (i.e. that the obtained

improvement is statistically significant).
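For reference, the paired t statistic underlying the test can be computed as follows (the AP lists below are toy values, not the per-concept results reported here):

```python
import math

def paired_t_statistic(a, b):
    """t statistic of the paired (dependent) Student's t-test applied
    to two equal-length lists of per-concept AP scores."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean / math.sqrt(var / n)

# Toy per-concept APs: the first system is consistently (if slightly)
# better, which yields a large positive t statistic.
proposed = [0.30, 0.40, 0.50, 0.60]
naive = [0.25, 0.35, 0.48, 0.55]
t = paired_t_statistic(proposed, naive)
assert t > 5.0
```

The p-value is then obtained from the Student's t distribution with n − 1 degrees of freedom (e.g. via scipy.stats.ttest_rel).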

Table 4.2: Performance scores

Model               | mAP (%)
Baseline            | 28.06
Self-training       | 28.68
Textual based       | 28.89
Max informativeness | 28.93
Naïve oracle        | 30
Proposed approach   | 31.22

Moreover, a per concept comparison of the enhanced models generated by the two

best performing approaches of Table 4.2 (i.e. the proposed approach and the naïve

oracle variation) to the initial classifiers can be seen in the bar diagram shown in Fig.

4.4. We can see that the proposed approach outperforms the naïve oracle in 70 concepts

out of 94. It is also interesting to note that the naïve oracle outperforms the proposed

approach mostly in concepts that depict objects such as amphibian-reptile, rodent,

baby, coast, cycle and rail. This can be attributed to the fact that web users tend

to use the same keywords to tag images with concepts depicting strong visual content,

which are typically the object of interest in an image. In such cases, the string matching


oracle can be rather accurate, providing valid samples for enhancing the classifiers. On the other hand, the proposed approach copes better with more abstract and ambiguous

concepts for which the context is a crucial factor (e.g. flames, smoke, lens effect, small

group, co-workers, strangers, circular warp and overlay).

A closer look at the results obtained by the proposed approach shows that the concept with the most notable increase in performance is spider, initially trained with 16 positive examples and yielding only 5.48% AP. After adding the samples that were indicated by the proposed oracle, the classifier gains 23.31 units of performance, resulting

in 28.79% average precision. Similarly, other concepts yielding a performance gain in

the range of 5 and more units include stars, rainbow, flames, fireworks, underwater,

horse, insect, baby, rail and air. For most of these concepts, the initial classifiers yield

a low performance. Another category of concepts are the ones with slight variations

in performance, below 0.1%. This category includes the concepts cloudy sky, coast,

city, tree, none, adult, female, no blur and city life whose initial classifiers yield a

rather high performance and are trained with 3600 positive images on average. This

shows that the proposed method, as it could be expected, is more beneficial for difficult

concepts, i.e. for which initial classifiers perform poorly. Finally, there are also the

concepts that either yield minor variations or even decrease in performance, namely melancholic, unpleasant and big group. This can be attributed to the ambiguous

nature of these concepts which renders the oracle unable to effectively determine their

existence.

4.4.3 Comparing with state-of-the-art

In this section the proposed approach is compared to the methods submitted to the

2012 ImageClef competition [87] and specifically in the concept annotation task for

visual concept detection, annotation, and retrieval using Flickr photos^. Since the proposed approach is only using the visual information of the test images without taking

into account the associated tags, it is only compared to the visual-based approaches

submitted in the competition. The performance scores for the three metrics utilized by the competition organizers (miAP, GmiAP and F-ex) are reported in Table 4.3 for each

of the 14 participating teams, along with the baselines of Table 4.2 and the proposed

approach. The metric miAP (mean interpolated Average Precision) is calculated as the

^ http://imageclef.org/2012/photo-flickr



Figure 4.4: Per concept comparison of the two best performing approaches (i.e. the naïve oracle and the proposed approach) to the baseline (best viewed in colour)


common mAP with the only difference that precision is calculated and averaged only

at interpolated recall values (from 0.0 to 1.0 with steps of 0.1). Geometric mean inter­

polated Average Precision (GMAP) is an extension on miAP and in order to calculate

it, the logs of the average precision for each concept are averaged and afterwards we

exponentiate the resulting average back to obtain the GmiAP. The F-ex score is the

harmonic mean of precision and recall and is calculated by giving annotations instead

of ranking scores for every image and every concept. In order to measure the F-ex score,

the threshold for the positive-negative class separation was set to zero, i.e. images with

an SVM prediction score greater than zero were annotated as positive and negative

otherwise. We can see that our approach is ranked third in terms of miAP, first in

terms of GmiAP and fifth in terms of F-ex. Additionally, we note that the proposed

approach outperforms the rest in terms of GmiAP, which according to [87] is a metric

susceptible to better performance on difficult concepts. This explains the superiority of

our approach and the higher performance gain compared to our baseline since it tends

to improve the performance of the difficult concepts, as it was also observed in Sec­

tion 4.4.2 (see Fig. 4.4). Moreover, it is important to note that the proposed approach

has achieved these very competitive scores by using a single feature space (gray SIFT

features), which was not the case for the other participants that relied on more than

one feature spaces [87].
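The three metrics can be illustrated with a short sketch; the per-concept AP values below are hypothetical, and the official challenge scores were of course produced with the organizers' evaluation tools:

```python
import math

def interpolated_ap(precisions, recalls):
    """Interpolated AP: precision is averaged only at the 11 recall
    levels 0.0, 0.1, ..., 1.0, taking at each level the maximum
    precision achieved at any recall >= that level."""
    levels = [i / 10.0 for i in range(11)]
    total = 0.0
    for level in levels:
        ps = [p for p, r in zip(precisions, recalls) if r >= level]
        total += max(ps) if ps else 0.0
    return total / len(levels)

def miap(per_concept_ap):
    """Mean interpolated AP: arithmetic mean over concepts."""
    return sum(per_concept_ap) / len(per_concept_ap)

def gmiap(per_concept_ap):
    """Geometric mean interpolated AP: average the logs of the
    per-concept APs, then exponentiate back. Rewards improvement on
    difficult (low-AP) concepts far more than the arithmetic mean does.
    All APs must be > 0 (in practice a small floor is applied)."""
    logs = [math.log(ap) for ap in per_concept_ap]
    return math.exp(sum(logs) / len(logs))

# Hypothetical per-concept APs: the difficult concept (0.05) pulls
# GmiAP down much more strongly than it pulls miAP down.
aps = [0.80, 0.60, 0.05]
print(round(miap(aps), 4))   # -> 0.4833
print(round(gmiap(aps), 4))  # -> 0.2884
```

This makes the behaviour discussed above concrete: a method that lifts the 0.05 concept to 0.10 roughly doubles its contribution to GmiAP while barely moving miAP.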

4.5 Discussion of the results

In this chapter, we have examined an automatic variation of active learning for image

classification adjusted to the context of social media. This adjustment consists in replacing the typical human oracle with user-tagged images obtained from social sites

and in using a probabilistic approach for jointly maximizing the informativeness of the

samples and the oracle's confidence. The results show that in this context it is critical to consider these two quantities jointly for successfully selecting additional samples to

enhance the initial training set. Additionally, we noticed that the naïve oracle performs

very well on concepts that depict strong visual content corresponding to typical fore­

ground visual objects (e.g fish, spider, bird and baby), while the proposed approach

copes better with more abstract and ambiguous concepts (e.g. flames, smoke, strangers


Table 4.3: Comparison with ImageCLEF 2012

Team                  miAP     GmiAP    F-ex
LIRIS                 34.81%   28.58%   54.37%
NPDILIP6              34.37%   28.15%   41.99%
NII                   33.18%   27.03%   55.49%
ISI                   32.43%   25.90%   54.51%
MLKD                  31.85%   25.67%   55.34%
CERTH                 26.28%   19.04%   48.38%
UAIC                  23.59%   16.85%   43.59%
BUAA AUDR             14.23%    8.18%   21.67%
UNED                  10.20%    5.12%   10.81%
DBRIS                  9.76%    4.76%   10.06%
PRA                    9.00%    4.37%   25.29%
MSATL                  8.68%    4.14%   10.69%
IMU                    8.19%    3.87%    4.29%
URJCyUNED              6.22%    2.54%   19.84%
Baseline              30.37%   24.21%   48.60%
Self-training         30.77%   24.41%   49.63%
Textual based         32.48%   26.84%   51.70%
Max informativeness   30.83%   24.48%   52.24%
Naive oracle          32.18%   26.53%   51.66%
Proposed approach     33.84%   29.17%   52.64%


and circular warp), since the utilized textual classifier accounts for the context of the tags as well.

Finally, an interesting note is that the difficult concepts (i.e. models with low performance) tend to gain much more in terms of effectiveness from such bootstrapping

methods, as shown in Fig. 4.4. Similar conclusions are drawn when comparing the proposed approach, which trained a simple SVM classifier using a single feature space, to

the more sophisticated approaches of the ImageCLEF 2012 challenge, which typically

used many feature spaces. Especially in the case of difficult concepts, as shown by the

superiority of the proposed approach based on the GmiAP metric, we can also conclude that it is more important to find more positive samples than to devise more sophisticated

algorithms. On the other hand, when the initial classifiers of the concepts performed

relatively well, the addition of training samples resulted in minor fluctuations of the

performance. This can be attributed to the fact that these classifiers have reached a

saturation point, i.e. the trained hyperplane has converged to the optimal one. In

addition, there were concepts that were too ambiguous and the oracle was not able to

provide correct annotations for the new samples, e.g. melancholic, unpleasant. In these

cases, augmenting the dataset with new samples can be deemed unnecessary since it

only increases the computational complexity of the system without providing any benefit in terms of performance. This observation is particularly relevant to the computational

complexity of bootstrapping, which can be rather high when the pool of candidates consists of a large number of images and the dimensionality of the feature vectors is

high as in the presented work. For this reason, in the following chapter, we opt to

predict the prominent concepts, for which adding more data is expected to increase

their performance and, in this way, remove the computational load of updating the

whole set of concepts.


Chapter 5

Performance Prediction of Bootstrapping for Image Classification


5.1 Introduction

An interesting aspect of most bootstrapping approaches is that they have been tested using very few examples to train the initial model; many of them even start with just two [80; 89; 90]. However, with the widespread adoption of crowdsourcing, collecting medium-scale datasets with ground truth annotations has become a realistic scenario for a rather high number of concepts. Prominent examples of such datasets are the 25000 images used for the 2012 ImageCLEF photo annotation task [87], which were annotated for 94 concepts using Amazon's Mechanical Turk (MTurk) service, as well as the 14 million images provided by ImageNet [8], which is currently the largest annotated image database, covering 21841 concepts.

Considering the scale of such datasets, it is natural to wonder whether bootstrapping

techniques could still benefit the cases where the initial training set consists of a few

hundred instances rather than just a couple. More specifically, it becomes particularly

important to examine the learning capacity of the initial model with the aim to identify

its saturation point, i.e. a point beyond which adding more samples does not really

cause the model to perform better. As it was shown by the experiments in the previous

chapter (Section 4.4.2), for certain types of concepts the saturation point was already

reached using the few hundred examples included in the initial training set. In these

cases, the model can be considered to have reached a level of maturity at which adding

more training samples would only result in marginal performance changes. In our

work, we define the model maturity to be the distance of the current model from the

optimal hyperplane. However, since this distance cannot be directly calculated, we approximate its value using the classification performance of the model applied on a large set of images with ground truth annotations.

In addition to the model's maturity, another critical aspect that is expected to determine whether adding more samples will cause the model's performance to improve is

the oracle's reliability, which depends on how accurately the oracle can label new training data (i.e. how accurately they have been annotated through active (e.g. MTurk),

or passive (e.g. flickr tags) crowdsourcing). This is due to the fact that adding a set of

examples, the majority of which has been falsely labelled by an unreliable oracle, will

most probably cause the model to deteriorate. The oracle’s reliability can be considered

as an indicator of how much we trust the oracle's decisions and, among other factors, depends


on the nature of the examined concept. Indeed, there are some inherently ambiguous

concepts that are not easy to distinguish using words (e.g. palm-hand and palm-tree)

and there are others that can be quite clearly described linguistically (e.g. snow).

For example, as it was shown in Section 4.4.2, bootstrapping did not have significant

impact when applied to abstract concepts such as melancholic and unpleasant. Thus,

motivated by the expectation that the oracle will be more accurate when labelling simpler rather than more ambiguous concepts, we formulate the oracle's reliability as a

function of the concept of interest. More specifically, reliability is approximated by the

success rate of the oracle in labelling a set of samples with ground truth annotations,

which is calculated using the average precision metric.

Based on the above, we propose the utilization of these two features, i.e. the model’s

maturity and the oracle's reliability, for predicting the performance gain expected from

enhancing the models. Then, based on these predictions, we can select to enhance only

the most prominent models, avoiding in this way the computational cost that would

be required to enhance the full set of models (Fig. 5.1). This is particularly useful in

the context of recent trends in the image classification domain, where the scalability of

methods to numerous concepts is now considered an important element of the proposed

solutions. For example, in the ImageCLEF competition [10], the organizers introduced

this scalability requirement by adding the concept as an input to the participants’

systems rather than giving a pre-defined vocabulary of concepts, while in the ImageNet competition participants had to classify images with respect to a vocabulary of 1000 concepts.

There are only a few works in the literature dealing with the prediction of the expected learning performance. The authors of [16] investigate both theoretically and

empirically when effective learning is possible from ambiguously labelled images. They

formulate the learning problem as partially-supervised multi-class classification and

postulate intuitive assumptions under which they expect learning to succeed. Similarly,

the authors of [77] examine the trade-off between performance, memory footprint and

speed towards an on-the-fly large-scale concept retrieval system. On the other hand, we

formally formulate the expected performance gain as a function of two pre-computed

features and estimate this function using a regression model. More closely related to

our approach is the work presented in [15], where the objective is to predict the performance difference between automatically created and manually annotated datasets. On

the contrary, our approach is designed for the bootstrapping technique and its scope


Figure 5.1: System Overview

is to reduce both the annotation effort and the computational complexity, by intelligently selecting the most prominent concepts for which bootstrapping is expected to be beneficial. The work presented in this chapter was published as a conference paper [91].

5.2 Selective model retraining

As already mentioned, the purpose of our work is to examine the correlation of the expected performance gain with the maturity of the model and the reliability of the oracle, in order to build a classifier trained on these two aspects. However, before expressing the performance boost of the initial classifier as a function of the oracle's reliability and the classifier's maturity, we should first define the approach followed for measuring these quantities.


5.2.1 Oracle reliability

The reliability of the oracle, R, is defined on a per concept basis and indicates the

quality of the oracle. A less reliable oracle will tend to make more mistakes, feeding the classifiers with wrongly selected images and misleading them away from their optimal target. In order to model this property, we quantify the oracle reliability as the

performance of the oracle as it is measured by average precision. More specifically, the

oracle is asked to rank the images of a manually annotated dataset for the examined concept and the average precision is calculated based on this ranking.

5.2.2 Model maturity

A more mature classifier, i.e. closer to the optimal model, is expected to exhibit small

fluctuations in terms of performance, even if it is guided accurately, since it is closer

to its saturation point. On the other hand, an immature model has more potential

for increasing its performance, although it would need more accurate guidance, as it is

expected to be highly susceptible to false positives. In this case, the maturity of the

model M is essentially the quality of the initial classifier, which can be measured by

its performance tested on a manually annotated dataset and quantified by the average

precision metric.
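Both the reliability R and the maturity M thus reduce to an average-precision computation over a ranked, manually annotated set. A minimal pure-Python sketch (the labels and ranking scores below are hypothetical):

```python
def average_precision(labels, scores):
    """Non-interpolated AP: rank items by score (descending) and
    average the precision at each rank where a positive is retrieved."""
    ranked = [l for _, l in sorted(zip(scores, labels), key=lambda p: -p[0])]
    hits, total = 0, 0.0
    for i, label in enumerate(ranked, start=1):
        if label == 1:
            hits += 1
            total += hits / i
    return total / hits if hits else 0.0

# Hypothetical annotated set: 1 = concept present, 0 = absent.
y = [1, 0, 1, 1, 0, 0]

# R: the oracle's ranking of the annotated set (one mistake near the top).
oracle_scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
print(round(average_precision(y, oracle_scores), 3))  # -> 0.806

# M: the initial classifier's ranking of the same set (perfect here).
model_scores = [0.9, 0.1, 0.8, 0.7, 0.3, 0.2]
print(round(average_precision(y, model_scores), 3))   # -> 1.0
```

The same routine serves both roles: run on oracle scores it yields R, run on the initial classifier's prediction scores it yields M.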

5.2.3 Regression model

Based on the assumption that the performance gain g is correlated both with the

maturity M of the initial classifier and the reliability R of the oracle, we propose to train a regression model using these two features (i.e. M and R):

g = f(M, R)    (5.1)

In the training phase, we provide pairs {g(i), (M(i), R(i))} for every concept c_i and the objective is to map the features (M, R) to the performance gain g by estimating the mapping function f. The two proposed features, reliability and maturity, are computed for every concept as explained in Sections 5.2.1 and 5.2.2 respectively, by applying

three fold cross validation on a manually annotated training set. In order to compute

the output values g(i), the initial classifiers are trained on the manually annotated training set. Additional training samples are selected from a pool of candidates using the

113

Page 131: Social Media Based Scalable Concept Detectionepubs.surrey.ac.uk/855179/1/27558479.pdf · 2020-04-24 · Social Media Based Scalable Concept Detection SURREYUNIVERSITY OF Elisavet

5. PERFORMANCE PREDICTION OF BOOTSTRAPPING FORIMAGE CLASSIFICATION

bootstrapping technique and the enhanced models are trained using the initial training set augmented with the additional training samples. Afterwards, both the initial and

the enhanced classifiers are applied on a manually annotated evaluation set and their

performance, AP_init(i) and AP_fin(i) respectively, is estimated by the average precision

between the enhanced and the initial classifiers:

g(i) = AP_fin(i) - AP_init(i)    (5.2)

In the testing phase, given a new unseen concept c_j and an initial classifier recognizing this concept, we compute as previously the proposed features {M(j), R(j)}, while the expected performance gain g(j) is computed by applying the mapping function f. Based on the predicted gain, we can choose whether it is worthwhile to further enhance

the classifier for the specific concept or retain the initial classifier.
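The training and testing phases above can be sketched as follows. This is a minimal illustration: all (M, R) and AP values are hypothetical, and the toy nearest-neighbour regressor merely stands in for the actual mapping function f (an SVR in the experiments of Section 5.3.3):

```python
def training_pairs(ap_init, ap_fin, maturity, reliability):
    """Build the {g(i), (M(i), R(i))} pairs of Eqs. 5.1-5.2: the target
    is the AP difference between enhanced and initial classifiers."""
    gains = [fin - init for init, fin in zip(ap_init, ap_fin)]
    features = list(zip(maturity, reliability))
    return features, gains

def nearest_neighbour_predict(train_X, train_y, x):
    """Toy stand-in regressor for f: predict the gain of the closest
    (M, R) training pair in squared Euclidean distance."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    best = min(range(len(train_X)), key=lambda j: dist(train_X[j], x))
    return train_y[best]

# Hypothetical per-concept values for four training concepts.
ap_init = [0.20, 0.25, 0.80, 0.30]
ap_fin  = [0.26, 0.30, 0.80, 0.28]   # mature concept (0.80) did not move
M = ap_init                           # maturity approximated by initial AP
R = [0.90, 0.85, 0.90, 0.20]          # per-concept oracle reliability
X, g = training_pairs(ap_init, ap_fin, M, R)

# Testing phase: an unseen concept with an immature model and a
# reliable oracle is predicted to benefit from bootstrapping.
print(round(nearest_neighbour_predict(X, g, (0.22, 0.88)), 2))  # -> 0.06
```

In the thesis the regressor is trained once per held-out concept; swapping the toy predictor for a proper SVR changes nothing in the surrounding protocol.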

5.3 Experiments

5.3.1 Datasets and implementation details

Two datasets were employed for the purpose of our experiments. The ImageCLEF dataset IC [87], annotated for 94 concepts, was used as the manually annotated dataset and was split into three parts: T1, T2 and Test, consisting of 5k, 10k and 10k images respectively. The MIRFLICKR-1M dataset S [58] constitutes the pool of user-tagged images, out of which 500 images are selected for each concept to act as the positive examples enhancing the initial training set during the bootstrapping approach. The bootstrapping technique presented in [44] was employed in our experiments. The code and the data for the following experiments are available at

5.3.2 Impact of maturity and oracle reliability

In this experiment we investigate how the classifier maturity and the oracle reliability affect the learning process by artificially simulating different levels of reliability for the oracle and examining the susceptibility of classifiers with various levels of maturity to noisy examples (i.e. false positives). For this experiment the IC dataset is used.

http://mklab.iti.gr/project/PerformancePrediction
https://github.com/ehatzi/PerformancePrediction


Initially, the classifiers are trained using the 5k images of the T1 set. Afterwards, in order to simulate an unreliable oracle, the initial training set is augmented with a

combination of true and false positive images from the T2 set. The final classifiers

are retrained using the augmented dataset and are evaluated on the Test set. We

consider five augmented datasets, each one constructed by an oracle that adds samples

with 100%, 80%, 67%, 50% and 0% accuracy, simulating this way different levels of the

oracle’s reliability.

In Figure 5.2, we plot the performance gain between the enhanced and initial classifiers with respect to the maturity of the initial classifier. Initially, for each concept

c_i of IC the maturity M(i) is calculated. Then the examined oracle proposes new

training samples, the enhanced classifiers are trained and the performance gain g(i) is

calculated. Finally, for every level of reliability we have a set of 94 points described by

{M{i),g{i)) pairs. For better visualization, we applied a smoothing filter on the data

points and produced an interpolated line for each oracle. The expected correlation becomes obvious if we make the following observations; (a) as the percentage of noisy

data included in the augmentation dataset increases, the classifiers’ performance dete­

riorates (higher decrease in performance for the magenta line, i.e. adding 100% false

positive examples, than the black line, i.e. adding 50% true and 50% false positive ex­

amples), and (b) the classifiers that exhibit a high level of maturity are not affected by

the addition of the augmentation sets, neither positively when the oracle is perfectly

reliable (i.e. red line) nor negatively when adding only false positive examples (i.e.

magenta line). More specifically, there are only small fluctuations of the performance

gain when the maturity of the classifier is high (e.g. over 50%). All the above verify

our expectation that the performance gain is correlated with both the maturity of the

classifier and reliability of the oracle. This justifies the selection of these two features

to train the proposed regression model for predicting the expected performance gain.

5.3.3 Performance gain prediction

Our goal in this section is to verify whether the proposed regression model can effectively predict the performance gain of bootstrapping. For this purpose we train the

regression model as specified in Section 5.2. The initial classifiers are trained using

the combination of T1 and T2 datasets and afterwards the classifiers are enhanced by the

images of the S dataset using the approach presented in [44]. The different concepts


Figure 5.2: The effect of the oracle reliability and the classifiers' maturity on the performance gain

(i.e. the 94 concepts of IC) constitute the instances for training the regression model. In order to predict the expected gain g(i) for an instance, the leave-one-out protocol is used (i.e. the regression model is trained on the 93 remaining concepts and it is used to predict the expected gain g(i) of the held-out concept i).

We tested two different modelling approaches, an epsilon-SVR and a nu-SVR regression model, while both linear and RBF kernels were considered. The best performing approach, the epsilon-SVR regression model with an RBF kernel, was chosen using cross validation.

In order to visualize the results, the concepts are ranked based on the predicted gain ĝ and the cumulative actual gain is calculated for every concept. If we denote as c'_1, c'_2, ..., c'_N the sorted concepts, so that ĝ(c'_1) ≥ ĝ(c'_2) ≥ ... ≥ ĝ(c'_N), we define the cumulative gain function cg(k) as:

cg(k) = Σ_{i=1}^{k} g(c'_i)    (5.3)

This function indicates the total actual gain of the bootstrapping algorithm if the classifiers representing the top k concepts are enhanced, while the initial classifiers


Table 5.1: Prediction performance comparison between the proposed approach and the random baseline

                   Proposed            Random
# concepts      Abs.     Perc.      Abs.     Perc.
10              0.93     31.39      0.26      8.71
20              1.36     45.58      0.62     21.00
40              1.81     60.87      1.21     40.71
60              2.19     73.47      1.71     57.38
80              2.70     90.81      2.35     79.04
94              2.97    100.00      2.97    100.00

are maintained for the rest N - k cases. In the optimal scenario, the predicted top k concepts yield the highest improvement in the bootstrapping process (i.e. g(c'_1) ≥ g(c'_2) ≥ ... ≥ g(c'_N)). In Fig. 5.3, the function of Eq. 5.3 is plotted for every k. In addition to the proposed approach, three baselines are plotted as well: (a) Random: The instances are ranked randomly, (b) Upper Baseline: The instances are ranked based on the actual gain g(i), simulating the best possible regression model (i.e. best case scenario), (c) Lower Baseline: The instances are inversely ranked based on the actual gain g(i), simulating the worst regression model (i.e. worst case scenario). It is obvious that the proposed regression model significantly outperforms the random baseline and lies quite close to the upper baseline.
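The cumulative gain curve of Eq. 5.3 and its baselines can be sketched as follows (the per-concept gain values are hypothetical):

```python
def cumulative_gain(ranking_gain, actual_gain):
    """cg(k) from Eq. 5.3: rank concepts by `ranking_gain` (descending)
    and accumulate the *actual* gain of the top-k ranked concepts."""
    order = sorted(range(len(ranking_gain)),
                   key=lambda i: -ranking_gain[i])
    cg, total = [], 0.0
    for i in order:
        total += actual_gain[i]
        cg.append(total)
    return cg

# Hypothetical predicted and actual gains for five concepts.
predicted = [0.04, 0.01, 0.05, 0.00, 0.02]
actual    = [0.05, 0.00, 0.04, 0.01, -0.02]

# Proposed approach: rank by predictions, accumulate actual gains.
print([round(v, 4) for v in cumulative_gain(predicted, actual)])
# -> [0.04, 0.09, 0.07, 0.07, 0.08]

# Upper baseline: rank by the actual gains themselves.
print([round(v, 4) for v in cumulative_gain(actual, actual)])
# -> [0.05, 0.09, 0.1, 0.1, 0.08]
```

Both curves end at the same total (enhancing all concepts yields all of the gain); the closer the predicted ranking tracks the upper baseline, the earlier the curve approaches that total.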

In order to provide an indication of the benefits that could be gained in terms of

processing load by the employment of the proposed approach, we provide Table 5.1. In

this table, we can see the achieved performance gain if we decide to enhance the top 10,

20, 40, 60, 80 and 94 (all) concepts as they were ranked by the proposed approach and

the random baseline. The first column (Abs.) refers to the absolute performance gain

achieved, while the second column (Perc.) is the percentage of this value with respect

to the maximum possible achieved gain, which occurs if we enhance all the 94 concepts.

We can see that using the proposed approach we can achieve the same performance

gain with significantly less processing load. For example, if we decide to enhance the

40 most prominent concepts as ranked by the proposed approach, we can achieve more

than half of the total performance boost, while we need to enhance around 60 concepts

to achieve a similar boost if the random model decides the ranking.


Figure 5.3: Actual cumulative gain

5.4 Discussion of the results

In this chapter, in an effort to improve the scalability properties of the computationally expensive approaches that follow the bootstrapping paradigm, we investigate the

with the expected performance gain. This correlation can be exploited to devise mechanisms appropriate for ruling out the cases that are not expected to substantially benefit

from augmenting the training set. Our experiments have shown that by exploiting this correlation we can achieve approximately 60% of the performance gain by enhancing

less than half of the concepts. Our plans for future work include the investigation of

additional features for predicting the expected performance gain.


Chapter 6

Conclusions and Future Work


6.1 Discussion and Conclusions

In concluding this thesis we would like to provide a walk-through of our motivations, the

key choices that we have made in designing the proposed approaches and the conclusions

we have reached. The starting point of our work was to tackle the limitations of the

example-based learning that occur due to the lack of training examples. In particular,

considering the increasing adoption of Web 2.0 applications like flickr and the resulting

huge number of user-tagged images that can be obtained at no cost, the objective was to

exploit this cheap content in order to tackle the problems originating from the limited

size of manually annotated image sets (i.e. system scalability).

Towards this objective, our first approach was completely unsupervised (Chapter 2),

working under the reasonable expectation that when one can have huge amounts of data

there is no need for sophisticated methods. Indeed, after our theoretical and empirical

analysis we concluded that simple and intuitive methods were more successful as the

size of the dataset grew. On the other hand, there were cases where some supervision

was required, since the methods failed even with larger dataset sizes. Thus, initially,

we sought to develop a guided cluster selection strategy by adding the knowledge from

a small manually annotated set of images.

A key observation in our experimental results was that many regions depicting visually similar but semantically dissimilar concepts (e.g. sky and sea) were confusing the

clustering algorithm and ending up in the same cluster. For this reason, we attempted

to model this visual ambiguity characterizing the multimedia content and utilized it

within a bootstrapping scenario (Chapter 3). Incorporating visual ambiguity allowed

for gathering significantly more accurate regions to enhance the initial models.

On the other hand, the improvement in the accuracy of the selected samples did

not have the expected impact on the performance of the enhanced models. This can be

attributed to a critical factor that was not taken into account, the informativeness of the

selected samples. In order to examine the effect of incorporating the informativeness of

a sample in the bootstrapping process, we investigated the active learning theory but

for global image classification instead of object detection, since this choice removed the

additional parameter of localizing the objects within an image. An important factor for

examining the active learning principles in this social context was to find a method in

order to consider jointly the informativeness of the samples and the oracle’s confidence


about their content. Towards this direction, we proposed a probabilistic approach

that could optimally maximize both the informativeness and the confidence indicators.

Experimental results have shown that it is critical to use both types of information to

achieve a higher increase in performance. Moreover, an interesting observation is that for

the difficult concepts, which are usually trained with a small number of initial samples,

enhancing the training set with additional data is more beneficial than searching for

more sophisticated methods.

Finally, we noticed that there were cases for which bootstrapping did not provide any

benefit, which usually happens when the initial classifiers have already reached a level

of maturity or when the oracle, which in our case is the textual analysis algorithm,

does not provide new samples accurately. Taking this observation into account and

aiming to minimize the computational complexity, we proposed a regression model

that exploits the correlation of the expected performance gain of bootstrapping

with the maturity of the initial classifiers and the reliability of the oracle (Chapter 5).

Applying this model to each concept, we can select only the most prominent cases to

enhance, avoiding in this way the computational cost of enhancing the whole set of

classifiers.

6.2 Contributions

The contribution of this thesis can be summarized as follows:

• An unsupervised framework for automatically gathering training data for object detection models.

• A method for modelling and utilizing visual ambiguity.

• A novel probabilistic fusion method for combining the informativeness of new

samples and the confidence of the oracle.

• A regression model that can predict the expected performance gain of bootstrapping prior to actually applying it.


6.3 Plans for future extensions

Possible future paths to explore include the use of flickr groups as a richer and more

large-scale pool of candidates for positive samples and the extension of the approach presented in Chapter 4 to an on-line continuous learning scheme both for global image annotation and local object detection. Towards this goal, an interesting route is to

optimize the strategy for the selection of additional samples to the initial training set,

considering combinations of both positive and negative examples. Moreover, with the

help of the regression model presented in Chapter 5, this continuous learning framework

can be efficiently applied to very large databases such as ImageNet, which consists of a vocabulary of circa 20000 concepts.

It would also be interesting to investigate whether and to what extent the training

set augmentation processes are able to boost the performance of initial classifiers, when

they are trained on far more compact signatures. For example, the current state-of- the-art CNN-based features can achieve surprisingly high performance with merely 128-

dimensional vectors. It is natural to wonder whether similar benefits can be achieved

when the number of the training samples can easily surpass the dimensionality of the

feature vectors. Moreover, considering that the training phase of CNNs requires large

amounts of manually annotated data in order to learn the millions of parameters that

are required, it would be interesting to investigate whether social media can substitute the human supervision in such tasks as well.



Author's publication list covering this thesis

Journal articles

• E. Chatzilari, S. Nikolopoulos, I. Patras, I. Kompatsiaris, "Leveraging social media for scalable object detection", Pattern Recognition Journal, Volume 45, Issue 8, August 2012, Pages 2962-2979, DOI: 10.1016/j.patcog.2012.02.006

Book chapters

• S. Nikolopoulos, E. Chatzilari, E. Giannakidou, S. Papadopoulos, I. Kompatsiaris, A. Vakali, "Leveraging Massive User Contributions for Knowledge Extraction", in Next Generation Data Technologies for Collective Computational Intelligence, Bessis, Nik; Xhafa, Fatos (Eds.), book series: Studies in Computational Intelligence, vol. 352, 1st Edition, XVIII, 638 p. 211, Springer 2011, ISBN 978-3-642-20343-5.

• E. Chatzilari, S. Nikolopoulos, I. Patras and I. Kompatsiaris, "Enhancing Computer Vision Using the Collective Intelligence of Social Media", in New Directions in Web Data Management 1, Athena Vakali, Lakhmi C. Jain (Eds.), book series: Studies in Computational Intelligence, vol. 331, Springer 2011, ISBN: 978-3-642-17550-3.

Conference papers

• E. Chatzilari, S. Nikolopoulos, Y. Kompatsiaris, J. Kittler, "How Many More Images Do We Need? Performance Prediction of Bootstrapping for Image Classification", IEEE International Conference on Image Processing, Paris, France, 27-30 October 2014.

• E. Chatzilari, S. Nikolopoulos, Y. Kompatsiaris, J. Kittler, "Active Learning in Social Context for Image Classification", 9th International Conference on Computer Vision Theory and Applications (VISAPP) 2014, Lisbon, Portugal, 5-8 January 2014.

• E. Chatzilari, "Using Tagged Images of Low Visual Ambiguity to Boost the Learning Efficiency of Object Detectors", 21st ACM International Conference on Multimedia, Doctoral Symposium, Barcelona, Spain, October 21-25, 2013.

• E. Chatzilari, S. Nikolopoulos, Y. Kompatsiaris, J. Kittler, "Multi-Modal Region Selection Approach for Training Object Detectors", 2nd ACM International Conference on Multimedia Retrieval (ICMR '12), ACM, New York, NY, USA, Article 5, 8 pages.

• E. Chatzilari, S. Nikolopoulos, S. Papadopoulos, C. Zigkolis, Y. Kompatsiaris, "Semi-Supervised object recognition using flickr images", 9th International Workshop on Content-Based Multimedia Indexing (CBMI 2011), Madrid, Spain, June 2011.

• E. Chatzilari, S. Nikolopoulos, E. Giannakidou and I. Kompatsiaris, "Leveraging Social Media For Training Object Detectors", 16th International Conference on Digital Signal Processing (DSP'09), Special Session on Social Media, 5-7 July 2009, Santorini, Greece.

• S. Nikolopoulos, E. Chatzilari, E. Giannakidou and I. Kompatsiaris, "Towards fully un-supervised methods for generating object detection classifiers using social data", 10th International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS 2009), 6-8 May 2009, London, UK.
