    Volume 5, Number 2
    June 2013

    Published by the Association for Computing Machinery
    Special Interest Group on Multimedia

    ISSN 1947-4598
    http://sigmm.org/records

    Table of Contents

    Volume 5, Issue 2, June 2013 (ISSN 1947-4598)

    Editorial
    Introducing the new Board of ACM SIG Multimedia
    Open Source Column: OpenIMAJ – Intelligent Multimedia Analysis in Java
    Papers
        FXPAL hires Dick Bulterman to replace Larry Rowe as President
        Call for Bids: ACM Multimedia 2016
        MediaEval Multimedia Benchmark: Highlights from the Ongoing 2013 Season
        MPEG Column: Press release for the 104th MPEG meeting
    PhD Thesis Summaries
        Johannes Schels
        Tiia Ojanperä
        Tomas Kupka
        Ulrich Lampe
    Recently published
        MMSJ Volume 19, Issue 3
        MMSJ Volume 19, Issue 4
        TOMCCAP, Volume 9, Issue 2
        TOMCCAP, Volume 9, Issue 3
    Job Opportunities
        PhD position in HDR imaging
        Research Assistants/PhD Positions at University of Vienna
    Calls for Contribution
        CFPs: Sponsored by ACM SIGMM
        CFPs: Sponsored by ACM
        CFPs: Not ACM-sponsored
    Back Matter
        Notice to Contributing Authors to SIG Newsletters
        Impressum

    SIGMM Records, Volume 5, Issue 2, June 2013 (ISSN 1947-4598)

    Editorial

    Dear Member of the SIGMM Community, welcome to the second issue of the SIGMM Records in 2013.

    SIGMM has elected a new board to guide the SIG through the next couple of years and develop it further. The new board, under the chairmanship of Professor Shih-Fu Chang, introduces itself in this issue of the Records.

    Among the first acts of the new board was the call for bids for ACM Multimedia 2016, also announced in this issue.

    Of course, we also have several other contributions: the Open Source column introduces OpenIMAJ, while the MPEG column brings the press release for the 104th MPEG meeting. We can also reveal a change of leadership at FXPAL, a research company with many SIGMM members in its ranks and a former SIGMM chair as departing president.

    We also put a spotlight on the ongoing season of MediaEval, the multimedia benchmarking initiative, and we include four PhD thesis summaries in this issue.

    Of course, we also include a variety of calls for contribution. Please give attention to two particular ones: TOMCCAP has chosen its special issue topic for 2014 and includes a call for papers in this issue of the Records, and MTAP has also issued a special issue call for papers.

    Last but most certainly not least, you will find pointers to the latest issues of TOMCCAP and MMSJ, and several job announcements.

    We hope that you enjoy this issue of the Records.

    The Editors

    Stephan Kopf, Viktor Wendel, Lei Zhang, Pradeep Atrey, Christian Timmerer, Pablo Cesar, Mathias Lux, Carsten Griwodz

    Introducing the new Board of ACM SIG Multimedia

    Statement of the new Board

    As we celebrate the 20th anniversary of the ACM Multimedia conference, we are thrilled to see the rapidly expanding momentum in developing innovative multimedia technologies in both the commercial and academic worlds. This can easily be seen in the continuously growing attendance at the SIGMM flagship conference (ACMMM) and the action-packed technical demo session contributed by many researchers and practitioners from around the world each year.

    However, the community is also faced with a set of new challenges in the coming years:

    • how to solidify the intellectual foundation of the SIGMM community and gain broader recognition as a field;

    • how to rebuild the strong relations with other communities closely related to multimedia;

    • how to broaden the scope of the multimedia conferences and journals and encourage participation of researchers from other disciplines;

    • how to develop a stimulating environment for nurturing next-generation leaders, including a stronger female participation in the community.

    To respond to these challenges and explore new opportunities, we will actively pursue the following initiatives:

    • create opportunities for stronger collaboration with relevant fields through joint workshops, tutorials, and special issues;

    • recruit active contributions from broad technical, geographical, and organizational areas;

    • strengthen the SIGMM brand by establishing community-wide resources such as a SIGMM distinguished lecture series, open source software, and online courses;

    • expand the mentoring and educational activities for students and minority members.

    Chair: Shih-Fu Chang

    Shih-Fu Chang

    [email protected]

    Shih-Fu Chang is the Richard Dicker Professor and Director of the Digital Video and Multimedia Lab at Columbia University. He is an active researcher leading development of innovative technologies for multimedia information extraction and retrieval, while contributing to fundamental advances in the fields of machine learning, computer vision, and signal processing. Recognized by many paper awards and citation impact, his scholarly work set trends in several important areas, such as content-based visual search, compressed-domain video manipulation, image authentication, large-scale high-dimensional data indexing, and semantic video search. He co-led the ADVENT university-industry research consortium with participation of more than 25 industry sponsors. He has received the IEEE Signal Processing Society Technical Achievement Award, the ACM SIG Multimedia Technical Achievement Award, the IEEE Kiyo Tomiyasu Award, Service Recognition Awards from IEEE and ACM, and the Great Teacher Award from the Society of Columbia Graduates. He served as Editor-in-Chief of the IEEE Signal Processing Magazine (2006-8), Chairman of the Columbia Electrical Engineering Department (2007-2010), Senior Vice Dean of the Columbia Engineering School (2012-date), and advisor for several companies and research institutes. His research has been broadly supported by government agencies as well as many industry sponsors. He is a Fellow of the IEEE and of the American Association for the Advancement of Science.

    Vice Chair: Rainer Lienhart

    Rainer Lienhart

    [email protected]

    Rainer Lienhart is a full professor in the computer science department of the University of Augsburg, heading the Multimedia Computing & Computer Vision Lab (MMC Lab). His group is focusing on all aspects of very large-scale image, video, and audio mining algorithms, including feature extraction, image/video retrieval, object detection, and human pose estimation. Rainer Lienhart has been an active contributor to OpenCV. From August 1998 to July 2004 he worked as a Staff Researcher at Intel's Microprocessor Research Lab in Santa Clara, CA, on transforming a network of heterogeneous, distributed computing platforms into an array of audio/video sensors and actuators capable of performing complex DSP tasks such as distributed beamforming, audio rendering, audio/visual tracking, and camera array processing. At the same time, he was also continuing his work on media mining. He is well known for his work in image/video text detection/recognition, commercial detection, face detection, shot and scene detection, automatic video abstraction, and large-scale image retrieval. The scientific work of Rainer Lienhart covers more than 80 refereed publications and more than 20 patents. He was a general co-chair of ACM Multimedia 2007 and of SPIE Storage and Retrieval of Media Databases 2004 & 2005. He serves on the editorial boards of three international journals. For more than a decade he has been a committee member of ACM Multimedia. Since July 2009 he has been the vice chair of SIGMM. He has also been the executive director of the Institute for Computer Science at the University of Augsburg since April 2010.

    Director of Conferences: Nicu Sebe

    Nicu Sebe

    [email protected]

    Nicu Sebe is Associate Professor with the University of Trento, Italy, where he is leading the research in the areas of multimedia information retrieval and human-computer interaction in computer vision applications. He was involved in the organization of major conferences and workshops addressing the computer vision and human-centered aspects of multimedia information retrieval, among others as a General Co-Chair of the IEEE Automatic Face and Gesture Recognition Conference (FG 2008) and of the ACM International Conference on Image and Video Retrieval (CIVR) 2007 and 2010. He is the general chair of ACM Multimedia 2013 and a program chair of ECCV 2016, and was a program chair of ACM Multimedia 2011 and 2007. He has been a visiting professor at the Beckman Institute, University of Illinois at Urbana-Champaign, and in the Electrical Engineering Department of Darmstadt University of Technology, Germany. He is a co-chair of the IEEE Computer Society Task Force on Human-centered Computing and is an associate editor of IEEE Transactions on Multimedia, Computer Vision and Image Understanding, Machine Vision and Applications, Image and Vision Computing, and the Journal of Multimedia.

    Open Source Column: OpenIMAJ – Intelligent Multimedia Analysis in Java

    Introduction

    Multimedia analysis is an exciting and fast-moving research area. Unfortunately, historically there has been a lack of software solutions in a common programming language for performing scalable integrated analysis of all modalities of media (images, videos, audio, text, web-pages, etc.). For example, in the image analysis world, OpenCV and Matlab are commonly used by researchers, whilst many common Natural Language Processing tools are built using Java. The lack of coherency between these tools and languages means that it is often difficult to research and develop rational, comprehensible and repeatable software implementations of algorithms for performing multimodal multimedia analysis. These problems are also exacerbated by the lack of any principled software engineering (separation of concerns, minimised code repetition, maintainability, understandability, avoidance of premature and over-optimisation) often found in research code.

    OpenIMAJ is a set of libraries and tools for multimedia content analysis and content generation that aims to fill this gap and address these concerns. OpenIMAJ provides a coherent interface to a very broad range of techniques, and contains everything from state-of-the-art computer vision (e.g. SIFT descriptors, salient region detection, face detection and description, etc.) and advanced data clustering and hashing, through to software that performs analysis on the content, layout and structure of webpages. A full list of all the modules and an overview of their functionalities for the latest OpenIMAJ release can be found at http://openimaj.org/tutorial/modules.html.

    OpenIMAJ is primarily written in Java and, as such, is completely platform independent. The video-capture and hardware libraries contain some native code, but Linux, OSX and Windows are supported out of the box (under both 32 and 64 bit JVMs; ARM processors are also supported under Linux). It is possible to write programs that use the libraries in any JVM language that supports Java interoperability, for example Groovy and Scala. OpenIMAJ can even be run on Android phones and tablets. As it's written using Java, you can run any application built using OpenIMAJ on any of the supported platforms without even having to recompile the code.

    Some simple programming examples

    The following code snippets and illustrations aim to give you an idea of what programming with OpenIMAJ is like, whilst showing some of the powerful features.

    ...
    // A simple Haar-Cascade face detector
    HaarCascadeDetector det1 = new HaarCascadeDetector();
    DetectedFace face1 = det1.detectFaces(img).get(0);
    new SimpleDetectedFaceRenderer().drawDetectedFace(mbf, 10, face1);

    // Get the facial keypoints
    FKEFaceDetector det2 = new FKEFaceDetector();
    KEDetectedFace face2 = det2.detectFaces(img).get(0);
    new KEDetectedFaceRenderer().drawDetectedFace(mbf, 10, face2);

    // With the CLM Face Model
    CLMFaceDetector det3 = new CLMFaceDetector();
    CLMDetectedFace face3 = det3.detectFaces(img).get(0);
    new CLMDetectedFaceRenderer().drawDetectedFace(mbf, 10, face3);
    ...

    Face detection, keypoint localisation and model fitting

    ...
    // Find the features
    DoGSIFTEngine eng = new DoGSIFTEngine();
    List sourceFeats = eng.findFeatures(sourceImage);
    List targetFeats = eng.findFeatures(targetImage);

    // Prepare the matcher
    final HomographyModel model = new HomographyModel(5f);
    final RANSAC ransac = new RANSAC(model, 1500,
        new RANSAC.BestFitStoppingCondition(), true);
    ConsistentLocalFeatureMatcher2d matcher =
        new ConsistentLocalFeatureMatcher2d(
            new FastBasicKeypointMatcher(8));

    // Match the features
    matcher.setFittingModel(ransac);
    matcher.setModelFeatures(sourceFeats);
    matcher.findMatches(targetFeats);
    ...

    Finding and matching SIFT keypoints

    ...
    // Access the first webcam
    VideoCapture cap = new VideoCapture(640, 480);

    // Grab a frame
    MBFImage last = cap.nextFrame().clone();

    // Process the video
    VideoDisplay
        .createOffscreenVideoDisplay(cap)
        .addVideoListener(
            new VideoDisplayListener() {
                public void beforeUpdate(MBFImage frame) {
                    frame.subtractInplace(last).abs();
                    last = frame.clone();
                }
    ...

    Webcam access and video processing

    The OpenIMAJ design philosophy

    One of the main goals in the design and implementation of OpenIMAJ was to keep all components as modular as possible, providing a clear separation of concerns whilst maximising code reusability, maintainability and understandability. At the same time, this makes the code easy to use and extend. For example, the OpenIMAJ difference-of-Gaussian SIFT implementation allows different parts of the algorithm to be replaced or modified at will without having to modify the source code of the existing components; an example of this is our min-max SIFT implementation [1], which allows more efficient clustering of SIFT features by exploiting the symmetry of features detected at minima and maxima of the scale-space. Implementations of commonly used algorithms are also made as generic as possible; for example, the OpenIMAJ RANSAC implementation works with generic Model objects and doesn't care whether the specific model implementation is attempting to fit a homography to a set of point-pair matches or a straight line to samples in a space. Primitive media types in OpenIMAJ are also kept as simple as possible: images are just an encapsulation of a 2D array of pixels; videos are just encapsulated iterable collections/streams of images; audio is just an encapsulated array of samples.

    The speed of individual algorithms in OpenIMAJ has not been a major development focus; however, OpenIMAJ cannot be called slow. For example, most of the algorithms implemented in both OpenIMAJ and OpenCV run at similar rates, and things such as SIFT detection and face detection can be run in real-time. Whilst the actual algorithm speed has not been a particular design focus, scalability of the algorithms to massive datasets has. Because OpenIMAJ is written in Java, it is trivial to integrate it with tools for distributed data processing, such as Apache Hadoop. Using the OpenIMAJ Hadoop tools [3] on our small Hadoop cluster, we have extracted and indexed visual term features from datasets with sizes in excess of 50 million images. The OpenIMAJ clustering implementations are able to cluster larger-than-memory datasets by reading data from disk as necessary.
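    To make the "generic model" idea concrete, the following is a minimal, self-contained sketch of how a RANSAC-style fitter can stay agnostic about what it is fitting. Note that this is not the actual OpenIMAJ API; the interface and class names here are illustrative only.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.Random;

    // Hypothetical minimal "model" contract: estimate parameters from a
    // small sample and decide whether a data point agrees with them.
    interface Model<T> {
        void estimate(List<T> sample);          // fit parameters to a minimal sample
        boolean validate(T point, double tol);  // is this point an inlier?
        int minimumSampleSize();
    }

    // One possible model: a 2D line y = a*x + b estimated from two points.
    class LineModel implements Model<double[]> {
        double a, b;
        public int minimumSampleSize() { return 2; }
        public void estimate(List<double[]> s) {
            double[] p = s.get(0), q = s.get(1);
            a = (q[1] - p[1]) / (q[0] - p[0]);
            b = p[1] - a * p[0];
        }
        public boolean validate(double[] pt, double tol) {
            return Math.abs(a * pt[0] + b - pt[1]) < tol;
        }
    }

    // A deliberately simple RANSAC loop: it only talks to the Model interface,
    // so the same code fits a line, a homography, or anything else that can
    // implement the contract. It returns the best consensus (inlier count) seen.
    class SimpleRansac {
        static <T> int fit(Model<T> model, List<T> data, int iters, double tol, Random rng) {
            int best = 0;
            for (int i = 0; i < iters; i++) {
                List<T> sample = new ArrayList<>(data);
                Collections.shuffle(sample, rng);
                model.estimate(sample.subList(0, model.minimumSampleSize()));
                int inliers = 0;
                for (T pt : data)
                    if (model.validate(pt, tol)) inliers++;
                best = Math.max(best, inliers);
            }
            return best;
        }
    }

    Swapping LineModel for a homography model would leave SimpleRansac untouched, which is exactly the kind of separation the paragraph above describes.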

    A history of OpenIMAJ

    OpenIMAJ was first made public in May 2011, just in time to be entered into the 2011 ACM Multimedia Open-Source Software Competition [2], which it went on to win. OpenIMAJ was not written overnight, however. As shown in the following picture, parts of the original codebase came from projects as long ago as 2005. Initially, the features were focused around image analysis, with a concentration on image features used for CBIR (i.e. global histogram features), features for image matching (i.e. SIFT) and simple image classification (i.e. cityscape versus landscape classification).

    A visual history of OpenIMAJ

    As time went on, the list of features began to grow; firstly with more implementations of image analysis techniques (i.e. connected components, shape analysis, scalable bags-of-visual-words, face detection, etc.). This was followed by support for analysing more types of media (video, audio, text, and web-pages), as well as implementations of more general techniques for machine learning and clustering. In addition, support for various hardware devices and video capture was added. Since its initial public release, the community of people and organisations using OpenIMAJ has continued to grow, and includes a number of internationally recognised companies. We also have an active community of people reporting (and helping to fix) any bugs or issues they find, and suggesting new features and improvements. Last summer, we had a single intern working with us, using and developing new features (in particular with respect to text analysis and mining functionality). This summer we're expecting two or three interns who will help us leverage OpenIMAJ in the 2013 MediaEval campaign. From the point of view of the software itself, the number of features in OpenIMAJ continues to grow on an almost daily basis. Since the initial release, the core codebase has become much more mature and we've added new features and implementations of algorithms throughout. We've picked a couple of the highlights from the latest release version and the current development version below:

    Reference Annotations

    As academics we are quite used to the idea of thoroughly referencing the ideas and work of others when we write a paper. Unfortunately, this is not often carried forward to other forms of writing, such as the writing of the code for computer software. Within OpenIMAJ, we implement and expand upon much of our own published work, but also the published work of others. For the 1.1 release of OpenIMAJ we decided that we wanted to make it explicit where the idea for an implementation of each algorithm and technique came from. Rather than haphazardly adding references and citations in the Javadoc comments, we decided that the process of referencing should be more formal, and that the references should be machine readable. These machine-readable references are automatically inserted into the generated documentation, and can also be accessed programmatically. It's even possible to automatically generate a bibliography of all the techniques used by any program built on top of OpenIMAJ. For more information, take a look at this blog post: http://blog.openimaj.org/2012/08/28/adding-references-to-code/.

    The reference annotations are part of a bigger framework currently under development that aims to encourage better code development for experimentation purposes. The overall aim of this is to provide the basis for repeatable software implementations of experiments and evaluations, with automatic gathering of the basic statistics that all experiments should have, together with more specific statistics based on the type of evaluation (i.e. ROC statistics for classification experiments; TREC-style Precision-Recall for information retrieval experiments, etc.).
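    The idea is easiest to see with a small sketch. The annotation below is hypothetical (OpenIMAJ's actual reference annotation lives in its citation module and its name and fields may differ), but it shows the principle: the citation is structured data attached to the class, so documentation tools and programs can read it back and build a bibliography.

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // A hypothetical machine-readable citation annotation; retained at
    // runtime so a bibliography can be generated programmatically.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface Cite {
        String[] authors();
        String title();
        String year();
        String venue();
    }

    // The implementation class carries its own reference.
    @Cite(
        authors = { "Lowe, David G." },
        title = "Distinctive image features from scale-invariant keypoints",
        year = "2004",
        venue = "International Journal of Computer Vision"
    )
    class MySiftDetector { /* ... */ }

    class BibliographyDemo {
        public static void main(String[] args) {
            // Read the reference back off the class at runtime.
            Cite c = MySiftDetector.class.getAnnotation(Cite.class);
            System.out.println(String.join(", ", c.authors()) + ". \""
                    + c.title() + "\". " + c.venue() + ", " + c.year() + ".");
        }
    }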

    Stream Processing Framework

    Processing streaming data is currently a hot topic. We wanted to provide a way in OpenIMAJ to experiment with the analysis of streaming multimedia data (see the description of the "Twitter's visual pulse" application below for an example). The OpenIMAJ Stream classes in the development trunk of OpenIMAJ provide a way to effectively gather, consume, process and analyse streams of data. For example, in just a few lines of code it is possible to get and display all the images from the live Twitter sample stream:

    // Construct your Twitter API key
    TwitterAPIToken token = ...

    // Create a Twitter dataset instance connected
    // to the live Twitter sample stream
    StreamingDataset dataset =
        new TwitterStreamingDataset(token, 1);

    // Use the Stream#map() method to transform the
    // stream so we get images
    dataset
        // process tweet statuses to produce
        // a stream of URLs
        .map(new TwitterLinkExtractor())
        // filter URLs to just get those that
        // are URLs of images
        .map(new ImageURLExtractor())
        // consume the stream and display images
        .forEach(new Operation() {
            public void perform(URL url) {
                DisplayUtilities.display(
                    ImageUtilities.readMBF(url));
            }
        });

    The stream processing framework handles a lot of the hard work for you. For example, it can optionally drop incoming items if you are unable to consume the stream at a fast enough rate (in this case it will gather statistics about what it has dropped). In addition to the Twitter live stream, we've provided a number of other stream source implementations, including one based on the Twitter search API and one based on IRC chat. The latter was used to produce a simple visualisation of a world map that shows where current Wikipedia edits are happening.
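    To illustrate the "drop items when the consumer is too slow" behaviour described above, here is a toy, self-contained buffer (this is not the OpenIMAJ Stream API; the class name and exact behaviour are assumptions made for the example): the producer never blocks, items that do not fit are discarded, and a counter records how much was dropped.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.atomic.AtomicLong;

    // A toy bounded buffer that drops on overflow and keeps drop statistics.
    class DroppingBuffer<T> {
        private final ArrayBlockingQueue<T> queue;
        private final AtomicLong dropped = new AtomicLong();

        DroppingBuffer(int capacity) {
            queue = new ArrayBlockingQueue<>(capacity);
        }

        // Producer side: never blocks; counts what had to be thrown away.
        void offer(T item) {
            if (!queue.offer(item))
                dropped.incrementAndGet();
        }

        // Consumer side: blocks until an item is available.
        T take() throws InterruptedException {
            return queue.take();
        }

        long droppedCount() {
            return dropped.get();
        }
    }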

    Improved face pipeline

    The initial OpenIMAJ release contained some support for face detection and analysis; however, this has been and continues to be improved. The key advantage OpenIMAJ has over other libraries such as OpenCV in this area is that it implements a complete pipeline with the following components:

    1. Face Detection

    2. Face Alignment

    3. Facial Feature Extraction

    4. Face Recognition/Classification

    Each stage of the pipeline is configurable, and OpenIMAJ contains a number of different algorithm implementations for each stage, as well as offering the possibility to easily implement more. The pipeline is designed to allow researchers to focus on a specific area of the pipeline without having to worry about the other components. At the same time, it is fairly easy to modify and evaluate a complete pipeline. In addition to the parts of the recognition pipeline, OpenIMAJ also includes code for tracking faces in videos and comparing the similarity of faces.
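    The following sketch shows how such a four-stage pipeline can be expressed so that each stage is swappable independently of the others. The interfaces and the FacePipeline class here are hypothetical illustrations of the design, not OpenIMAJ's actual classes (OpenIMAJ's real detectors, such as HaarCascadeDetector and FKEFaceDetector, would play the detector role).

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical interfaces for the four configurable stages.
    interface FaceDetector<I, F> { List<F> detect(I image); }
    interface FaceAligner<F, A> { A align(F detectedFace); }
    interface FacialFeatureExtractor<A, V> { V extract(A alignedFace); }
    interface FaceClassifier<V> { String classify(V features); }

    // A pipeline is just the four stages composed; swapping one stage
    // (say, the aligner) does not touch the other three.
    class FacePipeline<I, F, A, V> {
        private final FaceDetector<I, F> detector;
        private final FaceAligner<F, A> aligner;
        private final FacialFeatureExtractor<A, V> extractor;
        private final FaceClassifier<V> classifier;

        FacePipeline(FaceDetector<I, F> d, FaceAligner<F, A> a,
                     FacialFeatureExtractor<A, V> e, FaceClassifier<V> c) {
            detector = d; aligner = a; extractor = e; classifier = c;
        }

        List<String> recognise(I image) {
            List<String> labels = new ArrayList<>();
            for (F face : detector.detect(image))
                labels.add(classifier.classify(extractor.extract(aligner.align(face))));
            return labels;
        }
    }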

    Improved audio processing & analysis functionality

    When OpenIMAJ was first made public, there was little support for audio processing and analysis beyond playback, resampling and mixing. As OpenIMAJ has matured, the audio analysis components have grown, and now include standard audio feature extractors for things such as Mel-Frequency Cepstrum Coefficients (MFCCs), and higher-level analysers for performing tasks such as beat detection and determining whether an audio sample is human speech. In addition, we've added a large number of generation, processing and filtering classes for audio signals, and also provided an interface between OpenIMAJ audio objects and the CMU Sphinx speech recognition engine.

    Example applications

    Every year our research group holds a 2-3 day Hackathon where we stop normal work and form groups to do a mini-project. For the last two years we've built applications using OpenIMAJ as the base. We've provided a short description together with some links so that you can get an idea of the varied kinds of application OpenIMAJ can be used to rapidly create.

    Southampton Goggles

    In 2011 we built "Southampton Goggles". The ultimate aim was to build a geo-localisation/geo-information system based on content-based matching of images of buildings on the campus taken with a mobile device; the idea was that one could take a photo of a building as a query, and be returned relevant information about that building as a response (i.e. which faculty/school is located in it, whether there are vending machines/cafes in the building, the opening times of the building, etc.). The project had two parts: the first part was data collection in order to collect and annotate the database of images which we would match against. The second part involved indexing the images, and making the client and server software for the search engine. In order to rapidly collect images of the campus, we built a hand-portable streetview-like camera device with 6 webcams, a GPS and a compass. The software for controlling this used OpenIMAJ to interface with all the hardware and record images, location and direction at regular time intervals. The camera rig and software are shown below:

    The Southampton Goggles Capture Rig

    The Southampton Goggles Capture Software, built using OpenIMAJ

    For the second part of the project, we used the SIFT feature extraction, clustering and quantisation abilities of OpenIMAJ to build visual-term representations of each image, and used our ImageTerrier software [3,4] to build an inverted index which could be efficiently queried. For more information on the project, see this blog post: http://blog.openimaj.org/2011/08/01/goggles/.
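    To give a flavour of what the inverted index does, here is a miniature, self-contained sketch of an index over quantised "visual terms". This is just the underlying idea, not ImageTerrier itself; the class name, the toy term ids and the image names are illustrative only. Each term id maps to the images containing it, and a query image's terms vote for candidate matches.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // A toy inverted index from visual-term id to the images containing that term.
    class VisualTermIndex {
        private final Map<Integer, List<String>> postings = new HashMap<>();

        void add(String imageId, int[] visualTerms) {
            for (int term : visualTerms)
                postings.computeIfAbsent(term, t -> new ArrayList<>()).add(imageId);
        }

        // Score images by how many query terms they share (a crude overlap count).
        Map<String, Integer> query(int[] queryTerms) {
            Map<String, Integer> scores = new HashMap<>();
            for (int term : queryTerms)
                for (String img : postings.getOrDefault(term, Collections.emptyList()))
                    scores.merge(img, 1, Integer::sum);
            return scores;
        }

        public static void main(String[] args) {
            VisualTermIndex index = new VisualTermIndex();
            index.add("building-a.jpg", new int[] { 3, 17, 42, 99 });
            index.add("building-b.jpg", new int[] { 5, 17, 63 });
            // A query sharing terms 17 and 42 ranks building-a.jpg first.
            System.out.println(index.query(new int[] { 17, 42 }));
        }
    }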

    Twitter’s visual pulse

    Last year, we decided that for our mini-project we'd explore the wealth of visual information on Twitter. Specifically, we wanted to look at which images were trending based not on counts of repeated URLs, but on the detection of near-duplicate images hosted at different URLs. In order to do this, we used what has now become the OpenIMAJ stream processing framework, described above, to:

    1. ingest the Twitter sample stream,

    2. process the tweet text to find links,

    3. filter out links that weren't images (based on a set of patterns for common image hosting sites),

    4. download and resample the images,

    5. extract SIFT features,

    6. use locality sensitive hashing to sketch each SIFT feature and store it in an ensemble of temporal hash-tables.

    This process happens continuously in real-time. At regular intervals, the hash-tables are used to build a duplicates graph, which is then filtered and analysed to find the largest clusters of duplicate images, which are then visualised. OpenIMAJ was used for all the constituent parts of the software: stream processing, feature extraction and LSH. The graph construction and filtering uses the excellent JGraphT library that is integrated into the OpenIMAJ core-math module. For more information on the "Twitter's visual pulse" application, see the paper [5] and this video: http://www.youtube.com/watch?v=CBk5nDd6CLU.
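    The locality-sensitive-hashing step (step 6 above) can be illustrated with a toy random-hyperplane sketcher: descriptors whose sketches collide land in the same bucket, so candidate near-duplicates can be grouped without comparing every pair of features. This is a self-contained illustration of the idea only, not OpenIMAJ's LSH implementation; the bit count, dimensionality and class names are assumptions.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Random;

    // Toy random-hyperplane LSH: one bit per hyperplane side.
    class LshSketcher {
        private final double[][] hyperplanes;

        LshSketcher(int bits, int dims, long seed) {
            Random rng = new Random(seed);
            hyperplanes = new double[bits][dims];
            for (double[] h : hyperplanes)
                for (int d = 0; d < h.length; d++)
                    h[d] = rng.nextGaussian();
        }

        int sketch(double[] feature) {
            int code = 0;
            for (int b = 0; b < hyperplanes.length; b++) {
                double dot = 0;
                for (int d = 0; d < feature.length; d++)
                    dot += hyperplanes[b][d] * feature[d];
                if (dot >= 0)
                    code |= (1 << b);
            }
            return code;
        }
    }

    class NearDuplicateBuckets {
        // Group feature vectors (keyed by image URL) into buckets by their sketch;
        // buckets with more than one image are candidate duplicate groups.
        static Map<Integer, List<String>> bucket(Map<String, double[]> imageFeatures,
                                                 LshSketcher sketcher) {
            Map<Integer, List<String>> table = new HashMap<>();
            for (Map.Entry<String, double[]> e : imageFeatures.entrySet())
                table.computeIfAbsent(sketcher.sketch(e.getValue()), k -> new ArrayList<>())
                     .add(e.getKey());
            return table;
        }
    }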

    Erica the Rhino

    This year, we're involved in a longer-running hackathon activity to build an interactive artwork for a mass public art exhibition called Go! Rhinos that will be held throughout Southampton city centre over the summer. The Go! Rhinos exhibition features a large number of rhino sculptures that will inhabit the streets and shopping centres of Southampton. Our school has sponsored a rhino sculpture called Erica, which we've loaded with Raspberry Pi computers, sensors and physical actuators. Erica is still under construction, as shown in the picture below:

    Erica, the OpenIMAJ-powered interactive rhino sculpture

    OpenIMAJ is being used to provide visual analysis from the webcams that we've installed as eyes in the rhino sculpture (shown below). Specifically, we're using a Java program built on top of the OpenIMAJ libraries to perform motion analysis, face detection and QR-code recognition. The rhino-eye program runs directly on a Raspberry Pi mounted inside the sculpture.

    Erica's eye is a servo-mounted webcam, powered by software written using OpenIMAJ and running on a Raspberry Pi

    For more information, check out Erica's website (http://www.ericatherhino.org) and YouTube channel (http://www.youtube.com/user/ericatherhino), where you can see a prototype of the OpenIMAJ-powered eye in action.

    Conclusions

    For software developers, the OpenIMAJ library facilitates the rapid creation of multimedia analysis, indexing, visualisation and content generation tools using state-of-the-art techniques in a coherent programming model. The OpenIMAJ architecture enables scientists and researchers to easily experiment with different techniques, and provides a platform for innovating new solutions to multimedia analysis problems. The OpenIMAJ design philosophy means that building new techniques and algorithms, combining different approaches, and extending and developing existing techniques are all achievable. We welcome you to come and try OpenIMAJ for your multimedia analysis needs. To get started, watch the introductory videos, try the tutorial (http://www.openimaj.org/tutorial/), and look through some of the examples. If you have any questions, suggestions or comments, then don't hesitate to get in contact.

    Acknowledgements

    Early work on the software that formed the nucleus of OpenIMAJ was funded by the European Union's 6th Framework Programme, the Engineering and Physical Sciences Research Council, the Arts and Humanities Research Council, Ordnance Survey and the BBC. Current development of the OpenIMAJ software is primarily funded by the European Union Seventh Framework Programme under the ARCOMEM and TrendMiner projects. The initial public releases were also funded by the European Union Seventh Framework Programme under the LivingKnowledge project, together with the LiveMemories project, funded by the Autonomous Province of Trento.

    Papers

    • [1] Hare, Jonathon, Samangooei, Sina and Lewis, Paul (2011) Efficient clustering and quantisation of SIFT features: Exploiting characteristics of the SIFT descriptor and interest region detectors under image inversion. At the ACM International Conference on Multimedia Retrieval (ICMR 2011), Trento, Italy, 17-20 Apr 2011. ACM Press.

    • [2] Hare, Jonathon, Samangooei, Sina and Dupplaw, David (2011) OpenIMAJ and ImageTerrier: Java Libraries and Tools for Scalable Multimedia Analysis and Indexing of Images. At ACM Multimedia 2011, Scottsdale, Arizona, USA, 28 Nov - 01 Dec 2011. ACM, 691-694.

    • [3] Hare, Jonathon, Samangooei, Sina, Dupplaw, David and Lewis, Paul H. (2012) ImageTerrier: an extensible platform for scalable high-performance image retrieval. At the ACM International Conference on Multimedia Retrieval (ICMR'12), Hong Kong, HK, 05-08 Jun 2012. 8pp.

    • [4] Hare, Jonathon, Samangooei, Sina and Lewis, Paul (2012) Practical scalable image analysis and indexing using Hadoop. Multimedia Tools and Applications, 1-34. (doi:10.1007/s11042-012-1256-0)

    • [5] Hare, Jonathon, Samangooei, Sina, Dupplaw, David and Lewis, Paul (2013) Twitter's visual pulse. In the 3rd ACM International Conference on Multimedia Retrieval (ICMR 2013), Dallas, US, pp. 297-298. (doi:10.1145/2461466.2461514)

    FXPAL hires Dick Bulterman to replace Larry Rowe as President

    FX Palo Alto Laboratory announced on July 1, 2013, the hiring of Prof. dr. D.C.A. (Dick) Bulterman to be President and COO. He will replace Dr. Lawrence A. Rowe on October 1, 2013. Dick is currently a Research Group Head of Distributed and Interactive Systems at Centrum Wiskunde & Informatica (CWI) in Amsterdam, The Netherlands. He is also a Full Professor of Computer Science at Vrije Universiteit, Amsterdam. His research interests are multimedia authoring and document processing. His recent research concerns socially-aware multimedia, interactive television, and media analysis. Together with his CWI colleagues, he has won a series of best paper and research awards within the multimedia community, and has managed a series of successful large-scale European Union projects. He was also the founder and first managing director of Oratrix Development BV.

    Dick commented on his appointment: "I am very pleased to be able to join an organization with FXPAL's reputation and scope. Becoming President is a privilege and a challenge. I look forward to working with the rest of the FXPAL team to continue to build a first-class research organization that addresses significant problems in information capture, sharing, archiving and visualization. I especially hope to strengthen ties with academic research teams around the world via joint projects and an active visitor's program in Palo Alto."

    Larry Rowe has led FXPAL for the past six years. During his tenure, he significantly upgraded the physical facilities and computing infrastructure and hired six PhD researchers. Working with VP Research Dr. Lynn Wilcox, he led the lab in pursuing a variety of research projects involving distributed collaboration (e.g., virtual and mixed reality systems), image and multimedia processing (e.g., Embedded Media Markers), information access (e.g., the TalkMiner lecture video search system and the Querium exploratory search system), video authoring and playback, awareness and presence systems (e.g., MyUnity), and mobile/cloud applications. He worked with VP Marketing & Business Development Ram Sriram to develop the Fuji Xerox SkyDesk cloud applications product offering. Larry commented, "It has been an honor to lead FXPAL during this exciting time, particularly the changes brought about by the internet, mobile devices, and cloud computing."

    Larry will spend the next several months working on applications using media wall technology to simplify presentations and demonstrations that incorporate multiple windows displaying content from mobile devices and cloud applications.

    FXPAL is a leading multimedia and human-computer interface research laboratory established in 1995 by Fuji Xerox Co., Ltd. FXPAL researchers invent technologies that address key issues affecting businesses and society. For more information, visit http://www.fxpal.com.

    Call for Bids: ACM Multimedia 2016

    The ACM Special Interest Group on Multimedia is inviting bids for holding its flagship "ACM International Conference on Multimedia". The bids are invited for holding the 2016 conference in EUROPE.

    The details of the two required bid documents, the evaluation procedure and the deadlines are explained below (find updates and corrections at http://disi.unitn.it/~sebe/2016_bid.txt). The bids are due on September 2, 2013.

    Required Bid Documents

    Two documents are required:

    1. Bid Proposal: This document outlines all of the details except the budget. The proposal should contain:

    a. The organizing team: Names and brief bios of General Chairs, Program Chairs and Local Arrangements Chairs. Names and brief bios of at least one chair (out of the two) each for Workshops, Panels, Video, Brave New Ideas, Interactive Arts, Open Source Software Competition, Multimedia Grand Challenge, Tutorials, Doctoral Symposium, Preservation and Technical Demos. It is the responsibility of the General Chairs to obtain consent of all of the proposed team members. Please note that the SIGMM Executive Committee may suggest changes in the team composition for the winning bids. Please make sure that everyone who has been initially contacted understands this.

    b. The Venue: the details of the proposed conference venue, including the location, layout and facilities. The layout should facilitate maximum interaction between the participants. It should provide the normal required facilities for multimedia presentations, including internet access. Please note that the 2016 ACM Multimedia Conference will be held in Europe.

    c. Accommodation: the bids should indicate a range of accommodations, catering for student, academic and industry attendees, with easy as well as quick access to the conference venue. Indicative costs should be provided. Indicative figures for lunches/dinners and local transport costs for the location must be provided.

    d. Accessibility: the venue should be easily accessible to participants from the Americas, Europe and Asia (the primary sources of attendees). Indicative cost of travel from these major destinations should be provided.

    e. Other aspects:

    i. commitments from the local government and organizations

    ii. committed financial and in-kind sponsorships

    iii. institutional support for local arrangement chairs

    iv. conference date in September/October/November which does not clash with any major holidays or other major related conferences

    v. social events to be held with the conference

    vi. possible venue(s) for the TPC Meeting

    vii. any innovations to be brought into the conference

    viii. cultural/scenic/industrial attractions

    2. Tentative Budget: The entire cost of holding the conference with realistic estimated figures should be provided. This template budget sheet should be used for this purpose: http://disi.unitn.it/~sebe/ACMMM_Budget_Template.xls. Please note that the sheet is quite detailed and you may not have all of the information. Please try to fill it in as much as possible. All committed sponsorships for conference organization, meals, student subsidy and awards must be highlighted. Please note that estimated registration costs for ACM members, non-members and students will be required for preparing the budget. Estimates of the number of attendees will also be required.

    Feedback from ACM Multimedia Steering Committee:

    The bid documents will also be submitted to the ACM Multimedia Steering Committee. The feedback of this committee will have to be incorporated in the final submission of the proposal.

    Bid Evaluation Procedure

    Bids will be evaluated on the basis of:

    1. Quality of the Organizing Team (both technical strengths and conference organization experience)

    2. Quality of the Venue (facilities and accessibility)

    3. Affordability of the Venue (travel, stay and registration) to the participants

    4. Viability of the Budget: Since SIGMM fully sponsors this conference and it does not have reserves, the aim is to minimize the probability of making a loss and maximize the chances of making a small surplus.

    The winning bid will be decided by the SIGMM Executive Committee by vote.

    Bid Submission Procedure

    Please upload the two required documents and any other supplementary material to a web site. The general chairs should then email the formal intent to host, along with the URL of the bid documents web site, to the SIGMM Chair ([email protected]) and the Director of Conferences ([email protected]) by Sep 02, 2013.

    Time-line

    Sep 02, 2013: Bid URL to be submitted to SIGMM Chair and Director of Conferences

    Sep 2013: Bids open for viewing by SIGMM Executive Committee and ACM Multimedia Steering Committee

    Oct 01, 2013: Feedback from SIGMM Executive Committee and ACM Multimedia Steering Committee made available

    Oct 15, 2013: Bid documents to be finalized

    Oct 15, 2013: Bids open for viewing by all SIGMM Members

    Oct 24, 2013: 10-min presentation of each bid at ACM Multimedia 2013

    Oct 25, 2013: Decision by the SIGMM Executive Committee

    Please note that there is a separate conference organization procedure which kicks in for the winning bids, whose details can be seen at http://www.acm.org/sigs/volunteer_resources/conference_manual

    MediaEval Multimedia Benchmark: Highlights from the Ongoing 2013 Season

    MediaEval is an international multimedia benchmarking initiative that offers tasks to the multimedia community that are related to the human and social aspects of multimedia. The focus is on addressing new challenges in the area of multimedia search and indexing that allow researchers to make full use of multimedia techniques that simultaneously exploit multiple modalities.

    A series of interesting tasks is currently underway in MediaEval 2013. As every year, the selection of tasks is made using a community-wide survey that gauges what multimedia researchers would find most interesting and useful. A new task to watch closely this year is Search and Hyperlinking of Television Content, which follows on the heels of a very successful pilot last year. The other main tasks running this year are:

    • Social Event Detection for Social Multimedia

    • Placing: Geo-coordinate Prediction for Social Multimedia

    • Violent Scenes Detection in Film (Affect Task)

    • Visual Privacy: Preserving Privacy in Surveillance Videos

    • Spoken Web Search: Spoken Term Detection for Low Resource Languages

    The tagline of the MediaEval Multimedia Benchmark is: "The 'multi' in multimedia: speech, audio, visual content, tags, users, context". This tagline explains the inspiration behind the choice of the Brave New Tasks, which are running for the first time this year. Here, we would like to highlight Question Answering for the Spoken Web, which builds on the Spoken Web Search tasks mentioned above. This task is a joint effort between MediaEval and the Forum for Information Retrieval Evaluation, an India-based information retrieval benchmark. MediaEval believes strongly in collaboration and complementarity between benchmarks, and we hope that this task will help us to better understand how joint tasks should best be designed and coordinated.

    The other Brave New Tasks at MediaEval this year are:

    • Soundtrack Selection for Commercials (a task from MusiClef)

    • Similar Segments of Social Speech

    • Retrieving Diverse Social Images

    • Emotion in Music

    • Crowdsourcing for Social Multimedia

    The MediaEval 2013 season culminates with the MediaEval 2013 workshop, which will take place in Barcelona, Catalunya, Spain, on Friday-Saturday 18-19 October 2013. Note that this is just before ACM Multimedia 2013, which will be held Monday-Friday 21-25 October 2013, also in Barcelona. We are currently working on finalizing the registration site for the workshop; it will open very soon and will be announced on the MediaEval website.

    In order to further foster our understanding and appreciation of user-generated multimedia, each year we designate a MediaEval filmmaker to make a YouTube video about the workshop. The MediaEval 2012 workshop video was made by John N.A. Brown and has recently appeared online (http://www.youtube.com/watch?v=q62fwLU6WT4).

    John decided to focus on the sense of community and the laughter that he observed at the workshop. Interestingly, his focus recalls the work done at MediaEval 2010 on the role of laughter in social video, see: http://www.youtube.com/watch?v=z1bjXwxkgBs&feature=youtu.be&t=1m29s

    We hope that this video inspires you to join us in Barcelona.

    MPEG Column: Press release for the 104th MPEG meeting

    Multimedia ecosystem event focuses on a broader scope of MPEG standards

    The 104th MPEG meeting was held in Incheon, Korea, from 22 to 26 April 2013.

    MPEG hosts Multimedia Ecosystem 2013 Event

    During its 104th meeting, MPEG hosted the MPEG Multimedia Ecosystem event to raise awareness of MPEG's activities in areas not directly related to compression. In addition to world-class standards for compression technologies, MPEG has developed media-related standards that enrich the use of multimedia, such as MPEG-M for Multimedia Service Platform Technologies, MPEG-U for Rich Media User Interfaces, and MPEG-V for interfaces between real and virtual worlds. Also, new activities such as the MPEG Augmented Reality Application Format, Compact Descriptors for Visual Search, Green MPEG for energy-efficient media coding, and MPEG User Description are currently in progress. The event was organized as two sessions: a workshop and demonstrations. The workshop session introduced the seven standards described above, while the demonstration session showed 17 products based on these standards.

    MPEG issues CfP for Energy-Efficient Media Consumption (Green MPEG)

    At the 104th MPEG meeting, MPEG issued a Call for Proposals (CfP) on energy-efficient media consumption (Green MPEG), which is available in the public documents section at http://mpeg.chiariglione.org/. Green MPEG is envisaged to provide interoperable solutions for energy-efficient media decoding and presentation as well as energy-efficient media encoding based on encoder resources or receiver feedback. The CfP solicits responses that use compact signaling to facilitate reduced consumption from the encoding, decoding and presentation of media content without any degradation in the Quality of Experience (QoE). When power levels are critically low, consumers may prefer to sacrifice their QoE for reduced energy consumption. Green MPEG will provide this capability by allowing energy consumption to be traded off against the QoE. Responses to the call are due at the 105th MPEG meeting in July 2013.

    APIs enable access to other MPEG technologies via MXM

    The MPEG eXtensible Middleware (MXM) API technology specifications (ISO/IEC 23006-2) have reached the status of International Standard at the 104th MPEG meeting. MXM specifies the means to access individual MPEG tools through standardized APIs and is expected to help the creation of a global market of MXM applications that can run on devices supporting MXM APIs in addition to the other MPEG technologies. The MXM standard should also help the deployment of innovative business models because it will enable the easy design and implementation of media-handling value chains. The standard also provides reference software as open source with a business-friendly license. The introductory part of the MXM family of specifications, 23006-1 MXM architecture and technologies, will soon also be freely available on the ISO web site.

    MPEG introduces MPEG 101 with multimedia

    MPEG has taken a further step toward communicating information about its standards in an easy and user-friendly manner: MPEG 101 with multimedia. MPEG 101 with multimedia will provide video clips containing overviews of individual standards along with explanations of the benefits that can be achieved by each standard, and will be available from the MPEG web site (http://mpeg.chiariglione.org/). During this 104th MPEG meeting, the first video clip, on the Unified Speech and Audio Coding (USAC) standard, was prepared. USAC is the newest MPEG Audio standard, issued in 2012. It provides performance as good as or better than state-of-the-art codecs that are designed specifically for a single class of content, such as just speech or just music, and it does so for any content type, such as speech, music or a mix of speech and music. Over its target operating bit rates, 12 kb/s for mono signals through 32 kb/s for stereo signals, USAC provides significantly better performance than the benchmark codecs, and continues to provide better performance as the bitrate is increased to higher rates. MPEG will apply the MPEG 101 with multimedia communication tool to other MPEG standards in the near future.

    Digging Deeper – How to Contact MPEG

    Communicating the large and sometimes complex array of technology that the MPEG Committee has developed is not a simple task. Experts, past and present, have contributed a series of tutorials and vision documents that explain each of these standards individually. The repository is growing with each meeting, so if something you are interested in is not yet there, it may appear shortly – but you should also not hesitate to request it. You can start your MPEG adventure at http://mpeg.chiariglione.org/

    Further Information

    Future MPEG meetings are planned as follows:

    • No. 105, Vienna, AT, 29 July – 2 August 2013

    • No. 106, Geneva, CH, 28 October – 1 November 2013

    • No. 107, San Jose, CA, USA, 13 – 17 January 2014

    • No. 108, Valencia, ES, 31 March – 04 April 2014

    For further information about MPEG, please contact:

    Dr. Leonardo Chiariglione (Convenor of MPEG, Italy)
    Via Borgionera, 103
    10040 Villar Dora (TO), Italy
    Tel: +39 011 935 04
    [email protected]

    or

    Dr. Arianne T. Hinds
    Cable Television Laboratories
    858 Coal Creek Circle
    Louisville, Colorado 80027, USA
    Tel: +1 303 661
    [email protected]

    The MPEG homepage also has links to other MPEG pages that are maintained by the MPEG subgroups. It also contains links to public documents that are freely available for download by those who are not MPEG members. Journalists who wish to receive MPEG Press Releases by email should contact Dr. Arianne T. Hinds at [email protected].

    PhD Thesis Summaries

    Johannes Schels

    Object Class Detection UsingPart-Based Models Trained fromSynthetically Generated Images

    Supervisor(s) and Committee member(s): RainerLienhart (advisor), Bernhard Möller (committee member),Eckehard Steinbach (committee member)URL: http://opus.bibliothek.uni-augsburg.de/opus4/frontdoor/index/index/docId/2269

This thesis presents part-based approaches to object class detection in single 2D images, relying on pre-built CAD models as a source of synthetic training data. Part-based models, representing an object class as a deformable constellation of object parts, have demonstrated state-of-the-art results with respect to object class detection. Typically, the majority of part-based approaches rely on real training images of publicly available image data sets and consequently, the positive output of those detectors is restricted to the viewpoints which are represented by those real training images. However, progress in the domain of computer graphics enables the generation of photo-realistic renderings on demand from a database of CAD models, which can serve as a training source for learning an object class detection approach. In this thesis, we present part-based object class detection methods which are based on synthetically generated positive training images and real negative training images, thereby combining the advantages of the two domains described above. More specifically, photo-realistic object parts, representing the object class being trained, are learnt in an unsupervised way without requiring any manual bounding box, object part, or viewpoint annotations during the training process. The established object parts are efficiently combined into an object class detection framework relying on two part-based models with different learning paradigms. In addition, we outline an extension of our detection framework which is able to cope with multiple object classes. The approaches to object class detection are evaluated on standard benchmark data sets and achieve state-of-the-art results with respect to object class detection in single 2D images.

Multimedia Computing & Computer Vision Lab (MMC)
URL: http://www.multimedia-computing.de/

    Tiia Ojanperä

Cross-layer Optimized Video Streaming in Heterogeneous Wireless Networks

Supervisor(s) and Committee member(s): Prof. Timo Ojala (supervisor), Prof. Jörg Ott (reviewer), Prof. Christer Åhlund (reviewer), Prof. Gerald C. Maguire Jr. (opponent)
URL: http://urn.fi/urn:isbn:9789526201511
ISBN: 978-952-62-0151-1

The volume of data transmitted across wireless and mobile networks continues to grow at a rapid rate. Videos already account for most of this data traffic, and their share is expected to grow even bigger in the near future. The thesis presents a novel video service architecture designed to optimise video streaming according to wireless network capacity in order to achieve improved quality and efficient network resource usage. The solution essentially enables Quality of Service (QoS) sensitive video streaming services to take full advantage of the access diversity of heterogeneous wireless networks and to adapt efficiently to available network capacity. This nevertheless requires support beyond the layered communication architecture of today’s Internet, as proposed in the thesis.

The proposed architecture relies on end-to-end cross-layer signaling and control for video streaming optimization. The architecture supports extensive context information collection and transfer in heterogeneous networks, thus enabling the efficient management of video stream adaptation and user terminal mobility in a diverse and dynamic network environment. The thesis also studies and proposes cross-layer enhancements for adaptive video streaming and mobility management functions enabled by the cross-layer architecture. These include cross-layer video adaptation, congestion-triggered handovers, and concurrent utilization of multiple access networks in the video stream transport. For the video adaptation and multipath transmission, the flexible adaptation and transmission capabilities of the novel scalable video coding technology are used. Regarding the mobility management, the proposed solutions essentially enhance the handover decision-making of the Mobile IP protocol to better support QoS-sensitive video streaming. Finally, the thesis takes a holistic view on the application adaptation and mobility management, and proposes a solution for coordinated control of these two operations in order to achieve end-to-end optimization.
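
To give a flavour of what a congestion-triggered handover rule can look like, the following minimal Python sketch keeps a stream on its current access network until packet loss indicates congestion and a clearly better alternative exists. All names, thresholds and the decision rule itself are illustrative assumptions; they do not reproduce the mechanisms proposed in the thesis.

    LOSS_THRESHOLD = 0.05   # assumed packet-loss ratio taken to indicate congestion
    MIN_GAIN_KBPS = 300     # assumed throughput margin before a handover pays off

    def choose_network(current, candidates):
        # current/candidates are dicts with 'name', 'loss' and 'throughput_kbps'.
        # Stay on the current access network unless it looks congested.
        if current["loss"] < LOSS_THRESHOLD:
            return current["name"]
        # Hand over only if some alternative offers clearly more capacity.
        best = max(candidates, key=lambda n: n["throughput_kbps"])
        if best["throughput_kbps"] >= current["throughput_kbps"] + MIN_GAIN_KBPS:
            return best["name"]
        return current["name"]

    # Example: congested WLAN, healthier cellular link -> the rule hands over.
    print(choose_network(
        {"name": "wlan0", "loss": 0.08, "throughput_kbps": 900},
        [{"name": "lte0", "loss": 0.01, "throughput_kbps": 2500}]))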

The resulting mobile video streaming system architecture and the cross-layer control algorithms are evaluated using network simulations and real prototypes. Based on the results, the proposed mechanisms can be seen to be viable solutions for improving video streaming performance in heterogeneous wireless networks. They require changes in the communication end-points and the access network but support gradual deployment. This allows service providers and operators to select only a subset of the proposed mechanisms for implementation. The results also support the need for cross-layer signaling and control in facilitating efficient management and utilization of heterogeneous wireless networks.

VTT Technical Research Centre of Finland
URL: http://www.vtt.fi/

    Tomas Kupka

On the HTTP segment streaming potentials and performance improvements

Supervisor(s) and Committee member(s): Pål Halvorsen (advisor), Carsten Griwodz (advisor), Niklas Carlsson (opponent), Jörg Ott (opponent), Stein Gjessing (opponent)
URL: http://heim.ifi.uio.no/~paalh/students/TomasKupka-phd.pdf

Video streaming has gone a long way from its early years in the 90’s. Today, the prevailing technique to stream live and video on demand (VoD) content is adaptive HTTP segment streaming as used by the solutions from, for example, Apple, Microsoft, and Adobe. The reasons are its simple deployment and management. The HTTP infrastructure, including HTTP proxies, caches and in general Content Delivery Networks (CDNs), is already deployed. Furthermore, HTTP is the de facto standard protocol of the Internet and is therefore allowed to pass through most firewalls and Network Address Translation (NAT) devices. The goal of this thesis is to investigate the possible uses of adaptive HTTP segment streaming beyond the classical linear streaming and to look at ways to make HTTP servers dealing with HTTP segment streaming traffic more efficient.

In addition to the deployment and management benefits, the segmentation of video opens new application possibilities. In this thesis, we investigate those first. For example, we demonstrate on-the-fly creation of custom video playlists containing only content relevant to a user query. Using user surveys, we show that automatically created playlists of relevant video excerpts not only save time, but also increase the user experience significantly.

However, already the basic capabilities of HTTP segment streaming, i.e., streaming of live and on demand video, are very popular and are creating a huge amount of network traffic. Our analysis of logs provided by the Norwegian streaming provider Comoyo indicates that a substantial amount of the traffic data must be served from places other than the origin server. Since a substantial part of the traffic comes from places other than the origin server, it is important that effective and efficient use of resources not only takes place on the origin server, but also on other, possibly HTTP segment streaming unaware servers.

The HTTP segment streaming unaware servers handle segment streaming data as any other type of web data (HTML pages, images, CSS files, JavaScript files etc.). It is important to look at how the effectiveness of data delivery from this kind of server can be improved, because there might be potentially many “off the shelf” servers serving video segments (be it a homemade solution or an HTTP streaming-unaware CDN server). In general, there are three possible places to improve the situation: on the server, in the network and on the client. Improving the situation in the network between the server and the client is generally impossible for a streaming provider. Improving things on the server is possible, but difficult because the serving server might be out of the control of the streaming provider. The best chances are to improve things on the client. Therefore, the major part of this thesis deals with the proposal and evaluation of different modifications to the client side and only some light modifications to the server side. In particular, the thesis looks at two types of bottlenecks that can occur. The thesis shows how to deal with a client-side bottleneck using multiple links. In this context, we propose and evaluate a scheduler for partial segment requests. After that, we discuss different techniques for dealing with a server-side bottleneck, for example different modifications on the transport layer (TCP congestion control variant, TCP Congestion Window (CWND) limitation) and the application layer (the encoding of segments, segment request strategy).

The driving force behind many of these modifications is the on-off traffic pattern that HTTP segment streaming traffic exhibits. In many live streaming cases, the on-off traffic leads to request synchronization, as explained in this thesis. The synchronization in turn leads to increased packet loss and hence to a drop in throughput, which manifests itself in decreased segment bitrate, i.e., lower quality of experience. We find that distributing client requests over time by means of a different client request strategy yields good results in terms of quality and the number of clients a server can handle. Other modifications, like limiting the CWND or using a different congestion control algorithm, can also help in many cases.
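
The request-distribution idea can be illustrated with a small client-side sketch in Python. The segment naming, the two-second segment duration and the URL scheme below are assumptions made purely for illustration; this is not the request strategy evaluated in the thesis.

    import random
    import time
    import urllib.request

    SEGMENT_DURATION = 2.0  # assumed seconds of media per segment

    def fetch_segment(url):
        # Plain HTTP GET of a single media segment.
        with urllib.request.urlopen(url) as response:
            return response.read()

    def stream(base_url, num_segments, stream_start):
        # Each client draws one fixed random offset within a segment duration,
        # so a population of clients spreads its requests over the whole
        # interval instead of hitting the server in one synchronized burst
        # whenever a new live segment is published.
        offset = random.uniform(0.0, SEGMENT_DURATION)
        for i in range(num_segments):
            publish_time = stream_start + (i + 1) * SEGMENT_DURATION
            delay = publish_time + offset - time.time()
            if delay > 0:
                time.sleep(delay)
            data = fetch_segment("%s/segment_%d.ts" % (base_url, i))
            # ... hand 'data' to the playout buffer ...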

All in all, this thesis explores the potential of adaptive HTTP segment streaming beyond linear video streaming, and it explores ways to increase the performance of HTTP segment streaming servers.

    Media Performance Group

    URL: http://simula.no/research/mpg

    Ulrich Lampe

Monetary Efficiency in Infrastructure Clouds - Solution Strategies for Workload Distribution and Auction-based Capacity Allocation

Supervisor(s) and Committee member(s): Ralf Steinmetz (Supervisor), Schahram Dustdar (Rapporteur), Klaus Hofmann (Examiner), Helmut Schlaak (Examiner)
URL: http://d-nb.info/1037291751
ISBN: 978-3843911016

Since the early days of computing, a vision has been to provide Information Technology services in the form of a utility, just like water, electricity, or telephony. With the advancement of the cloud computing paradigm since the mid-2000s, this vision has been put into realization. Cloud computing builds on and combines multiple existing technologies and paradigms, such as virtualization and Service-Oriented Architecture, to deliver various forms of Information Technology services over the Internet. In this thesis, our focus is on the most elementary class of Information Technology: computing infrastructure. In this context, we examine two important research problems and propose solution strategies, based on the conjoint objective of monetary efficiency.

As the first major contribution, we introduce the so-called Cloud-oriented Workload Distribution Problem (CWDP). This problem concerns the distribution of a workload, which comprises multiple computational jobs, across leased infrastructure. We assume the position of a cloud user, who aims at cost-minimal deployment under consideration of resource constraints. On the basis of a mathematical optimization model, we propose the exact solution approach CWDP-EXA.KOM. Given its high time complexity, we further propose the heuristic optimization approach CWDP-HEU.KOM, which is complemented by the improvement procedure CWDP-IMP.KOM. The practical applicability and performance of these optimization approaches is demonstrated using a quantitative evaluation, based on realistic data from the cloud computing market.
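
For readers unfamiliar with this class of problems, a cost-minimal assignment of jobs j to leased instance types t can be sketched as the following textbook-style integer program; this is an illustration only and not the CWDP formulation defined in the thesis.

    \min \sum_{j} \sum_{t} c_{t}\, x_{jt}
    \quad \text{subject to} \quad
    \sum_{t} x_{jt} = 1 \;\; \forall j, \qquad
    \sum_{j} r_{j}\, x_{jt} \le R_{t} \;\; \forall t, \qquad
    x_{jt} \in \{0, 1\}

Here c_t stands for the (assumed) cost of placing a job on instance type t, r_j for the resource demand of job j, R_t for the capacity of type t, and x_{jt} for the placement decision; the precise model used in the thesis is given in the thesis itself.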

As the second key contribution, we examine the Equilibrium Price Auction Allocation Problem (EPAAP). This problem refers to the allocation of Virtual Machine instances based on an equilibrium price auction scheme. For that matter, we focus on the role of a cloud provider, who pursues the aim of profit maximization. We formalize the problem as an optimization model, from which we deduce the exact optimization approach EPAAP-EXA.KOM. We further propose a heuristic optimization approach, named EPAAP-HEU.KOM, and the improvement procedure EPAAP-IMP.KOM. All three approaches are thoroughly analyzed through a quantitative evaluation.

Service-oriented Computing, KOM, TU Darmstadt
URL: http://www.kom.tu-darmstadt.de/

    Recently published

    MMSJ Volume 19, Issue 3

Editor-in-Chief: Thomas Plagemann
URL: http://www.springer.de/
Published: June 2013

Special Issue on Network and Systems Support for Games

Guest Editors: Shervin Shirmohammadi, Carsten Griwodz, Grenville Armitage

• Shervin Shirmohammadi, Carsten Griwodz, Grenville Armitage: Guest editorial for special issue on network and systems support for games
• Meng Zhu, Alf Inge Wang, Hong Guo: From 101 to nnn: a review and a classification of computer game architectures
• Mirko Suznjevic, Maja Matijasevic: Player behavior and traffic characterization for MMORPGs: a survey
• A. L. Cricenti, P. A. Branch: The Ex-Gaussian distribution as a model of first-person shooter game traffic
• Mirko Suznjevic, Ivana Stupar, Maja Matijasevic: A model and software architecture for MMORPG traffic generation based on player behavior
• Amir Yahyavi, Kévin Huguenin, Bettina Kemme: Interest modeling in games: the case of dead reckoning
• Cheryl Savery, T. C. Nicholas Graham: Timelines: simplifying the programming of lag compensation for the next generation of networked games
• Juan M. Silva, Abdulmotaleb El Saddik: Exertion interfaces for computer videogames using smartphones as input controllers
• Daniel Pittman, Chris GauthierDickey: Match+Guardian: a secure peer-to-peer trading card game protocol

    MMSJ Volume 19, Issue 4

Editor-in-Chief: Thomas Plagemann
URL: http://www.springer.de/
Published: July 2013

• Chung-Hua Chu: Super-resolution image reconstruction for mobile devices
• Chandan Singh, Pooja Sharma: Performance analysis of various local and global shape descriptors for image retrieval
• Zhang Liu, Chaokun Wang, Jianmin Wang, Hao Wang, Yiyuan Bai: Adaptive music resizing with stretching, cropping and insertion
• Jun-Bin Yeh, Chung-Hsien Wu, Shi-Xin Mai: Multiple visual concept discovery using concept-based visual word clustering

    TOMCCAP, Volume 9, Issue 2

Editor-in-Chief: Ralf Steinmetz
URL: http://tomccap.acm.org/
Published: May 2013
Sponsored by ACM SIGMM

The Transactions on Multimedia Computing, Communications and Applications are the SIGMM’s own Transactions. As a service to Records readers, we provide direct links to the ACM Digital Library for the papers of the latest TOMCCAP issue.

• Juan M. Silva, Mauricio Orozco, Jongeun Cha, Abdulmotaleb El Saddik, Emil M. Petriu: Human perception of haptic-to-video and haptic-to-audio skew in multimedia applications

• Chidansh A. Bhatt, Pradeep K. Atrey, Mohan S. Kankanhalli: A reward-and-punishment-based approach for concept detection using adaptive ontology rules
• Fawaz A. Alsulaiman, Nizar Sakr, Julio J. Valdés, Abdulmotaleb El Saddik: Identity verification based on handwritten signatures with haptic information using genetic programming
• Qianni Zhang, Ebroul Izquierdo: Multifeature analysis and semantic context learning for image classification
• Zhen Wei Zhao, Sameer Samarth, Wei Tsang Ooi: Modeling the effect of user interactions on mesh-based P2P VoD streaming systems
• Yang Yang, Yi Yang, Heng Tao Shen: Effective transfer tagging from image to video
• Zhen Wei Zhao, Wei Tsang Ooi: APRICOD: An access-pattern-driven distributed caching middleware for fast content discovery of noncontinuous media access

    TOMCCAP, Volume 9, Issue 3

Editor-in-Chief: Ralf Steinmetz
URL: http://tomccap.acm.org/
Published: June 2013
Sponsored by ACM SIGMM

The Transactions on Multimedia Computing, Communications and Applications are the SIGMM’s own Transactions. As a service to Records readers, we provide direct links to the ACM Digital Library for the papers of the latest TOMCCAP issue.

• Tao Mei, Lin-Xie Tang, Jinhui Tang, Xian-Sheng Hua: Near-lossless semantic video summarization and its applications to video analysis
• Oluwakemi A. Ademoye, Gheorghita Ghinea: Information recall task impact in olfaction-enhanced multimedia
• Lo-Yao Yeh, Jiun-Long Huang: A conditional access system with efficient key distribution and revocation for mobile pay-TV systems
• Ruchira Naskar, Rajat Subhra Chakraborty: A generalized tamper localization approach for reversible watermarking algorithms
• Jonathan Doherty, Kevin Curran, Paul Mckevitt: A self-similarity approach to repairing large dropouts of streamed music
• Edmond S. L. Ho, Jacky C. P. Chan, Taku Komura, Howard Leung: Interactive partner control in close interactions for real-time applications

    Job Opportunities

    PhD position in HDR imaging

High Dynamic Range (HDR) imaging is believed to be the next frontier in imaging, similar to the transition from gray level to color or from 2D to 3D. However, despite many recent developments, there is still no standard approach for compression of HDR images and video and no standard way in which quality is assessed in this imaging modality. The successful PhD candidate will address these shortcomings by determining, via proposed new subjective evaluations and objective metrics, the influence of context and environmental parameters on HDR image and video content. The outcome of these evaluations will be used to design new HDR compression algorithms for images and video sequences. The compression algorithms will later be extended to 3D image and video content.

Applicants should have experience in image/video signal processing and hold an MSc in a related field (electrical engineering, mathematics, or computer science). Experience in scientific programming (Matlab/C/C++) is essential. Strong knowledge of video and image compression, HDR imaging, 3D video, and computer vision would be a plus. Applicants should be very well versed in both oral and written English.

    Starting date is September 1, 2013.

Employer: Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
Expiration date: Thursday, August 1, 2013
More information: http://mmspg.epfl.ch/cms/op/edit/lang/en/pid/58408

Research Assistants/PhD Positions at University of Vienna

The University of Vienna (15 faculties, 4 centers, about 180 fields of study, approx. 9.400 members of staff, approx. 90.000 students) is inviting applications for three (3) positions of Research Assistants (PhD Positions) at the Research Group Multimedia Information Systems (headed by Prof. Dr. Wolfgang Klas) at the Faculty of Computer Science. Details for the job announcement are available at: http://cs.univie.ac.at/uploads/media/OpenPosition_PraeDoc__4198_-_2013-07.pdf

Employer: University of Vienna
Expiration date: Sunday, September 1, 2013
More information: http://cs.univie.ac.at/uploads/media/OpenPosition_PraeDoc__4198_-_2013-07.pdf

    "http://dx.doi.org/10.1145/2457450.2457451:"http://dx.doi.org/10.1145/2457450.2457451:"http://dx.doi.org/10.1145/2457450.2457452:"http://dx.doi.org/10.1145/2457450.2457452:"http://dx.doi.org/10.1145/2457450.2457452:"http://dx.doi.org/10.1145/2457450.2457453:"http://dx.doi.org/10.1145/2457450.2457453:"http://dx.doi.org/10.1145/2457450.2457453:"http://dx.doi.org/10.1145/2457450.2457454:"http://dx.doi.org/10.1145/2457450.2457454:"http://dx.doi.org/10.1145/2457450.2457454:"http://dx.doi.org/10.1145/2457450.2457455:"http://dx.doi.org/10.1145/2457450.2457455:"http://dx.doi.org/10.1145/2457450.2457456:"http://dx.doi.org/10.1145/2457450.2457456:"http://dx.doi.org/10.1145/2457450.2457457:"http://dx.doi.org/10.1145/2457450.2457457:"http://dx.doi.org/10.1145/2457450.2457457:"http://dx.doi.org/10.1145/2457450.2457457:http://records.sigmm.ndlab.net/?eventjournal-toc=tomccap-volume-9-issue-3http://tomccap.acm.org/http://dx.doi.org/10.1145/2487268.2487269http://dx.doi.org/10.1145/2487268.2487269http://dx.doi.org/10.1145/2487268.2487270http://dx.doi.org/10.1145/2487268.2487270http://dx.doi.org/10.1145/2487268.2487271http://dx.doi.org/10.1145/2487268.2487271http://dx.doi.org/10.1145/2487268.2487271http://dx.doi.org/10.1145/2487268.2487272http://dx.doi.org/10.1145/2487268.2487272http://dx.doi.org/10.1145/2487268.2487272http://dx.doi.org/10.1145/2487268.2487273http://dx.doi.org/10.1145/2487268.2487273http://dx.doi.org/10.1145/2487268.2487273http://dx.doi.org/10.1145/2487268.2487274http://dx.doi.org/10.1145/2487268.2487274http://records.sigmm.ndlab.net/?job-opp=phd-position-in-hdr-imaging-mmspg-epflhttp://records.sigmm.ndlab.net/?job-opp=research-assistantsphd-positions-at-university-of-viennahttp://records.sigmm.ndlab.net/?job-opp=research-assistantsphd-positions-at-university-of-viennahttp://records.sigmm.ndlab.net/?job-opp=research-assistantsphd-positions-at-university-of-viennahttp://cs.univie.ac.at/uploads/media/OpenPosition_PraeDoc__4198_-_2013-07.pdfhttp://cs.univie.ac.at/uploads/media/OpenPosition_PraeDoc__4198_-_2013-07.pdf

  • .

    Calls for Contribution

    ISSN 1947-4598http://sigmm.org/records 18

    ACM SIGMM RecordsVol. 5, No. 2, June 2013

    Calls for Contribution

CFPs: Sponsored by ACM SIGMM

    ACM MMSys 2014

    ACM Multimedia Systems Conference

Submission deadline: 27. September 2013
Location: Singapore
Dates: 19. March 2014 - 21. March 2014
More information: http://www.mmsys.org/
Sponsored by ACM SIGMM

Call for Papers: The ACM Multimedia Systems Conference provides a forum for researchers, engineers, and scientists to present and share their latest research findings in multimedia systems. While research about specific aspects of multimedia systems is regularly published in the various proceedings and transactions of the networking, operating system, real-time system, and database communities, MMSys aims … Read more

    ACM MMSys 2014 Dataset Track

    MMSys'14 Call for Papers: Dataset Track

Submission deadline: 20. December 2013
Location: Singapore
Dates: 19. March 2014 - 21. March 2014
More information: http://www.mmsys.org/
Sponsored by ACM SIGMM

As an integral part of the conference since 2012, the Dataset Track provides an opportunity for researchers and practitioners to make their work available to the multimedia community.

TOMCCAP Special Issue on “Multiple Sensorial (MulSeMedia) Multi-modal Media: Advances and Applications”

Transactions on Multimedia Computing, Communications and Applications

    Multiple Sensorial (MulSeMedia) Multi-modal Media: Advances and Applications

    Submission deadline: 14. October 2013

Special issue
Sponsored by ACM SIGMM

Multimedia applications have primarily engaged two of the human senses – sight and hearing. With recent advances in computational technology, however, it is possible to develop applications that also consider, integrate and synchronize inputs across all senses, including tactile, olfaction, and gustatory. This integration of multiple senses leads to a … Read more

    CFPs: Sponsored by ACM

    ICMR 2014

ACM International Conference on Multimedia Retrieval (ICMR)

Submission deadline: 02. December 2013
Location: Glasgow, UK
Dates: 01. April 2014 - 05. April 2014
More information: http://www.icmr2014.org/
Sponsored by ACM

The annual ACM International Conference on Multimedia Retrieval offers a great opportunity for exchanging leading-edge multimedia retrieval ideas among researchers, practitioners and other potential users of multimedia retrieval systems.

    MMM 2014

The 20th Anniversary International Conference on MultiMedia Modeling (MMM)

Submission deadline: 01. December 2019
Location: Dublin, Ireland
Dates: 08. January 2014 - 10. January 2014
More information: http://mmm2014.org/
Sponsored by ACM

MMM is a leading international conference for researchers and industry practitioners for sharing new ideas, original research results and practical development experiences from all MMM related areas.

    CFPs: Not ACM-sponsored

CVIU Special Issue on “Large Scale Multimedia Semantic Indexing”

Elsevier Computer Vision and Image Understanding


    Large Scale Multimedia Semantic Indexing

Submission deadline: 15. August 2013
Special issue
More information: http://ees.elsevier.com/cviu

    IEEE ISM 2013

IEEE International Symposium on Multimedia (ISM2013)

Submission deadline: 01. August 2013
Location: Anaheim, CA, USA
Dates: 09. December 2013 - 11. December 2013
More information: http://ism.eecs.uci.edu/ISM2013/
Sponsored by IEEE

MTAP Special Issue on “Content Based Multimedia Indexing”

    Multimedia Tools and Applications

    Content Based Multimedia Indexing

Submission deadline: 01. September 2013
Special issue
More information: http://cbmi2013.mik.uni-pannon.hu/index.php/cfp

    MUM 2013

The 12th International Conference on Mobile and Ubiquitous Multimedia (MUM 2013)

Submission deadline: 17. August 2013
Location: Lulea, Sweden
Dates: 02. December 2013 - 05. December 2013
More information: http://www.mum2013.org/
In cooperation with ACM

    NetGames 2013

The 12th Annual Workshop on Network and Systems Support for Games (NetGames)

Submission deadline: 30. August 2013
Location: Denver, CO, USA
Dates: 09. December 2013 - 10. December 2013
More information: http://netgames2013.cs.du.edu/
In cooperation with ACM SIGMM

    Back Matter

Notice to Contributing Authors to SIG Newsletters

By submitting your article for distribution in this Special Interest Group publication, you hereby grant to ACM the following non-exclusive, perpetual, worldwide rights:

• to publish in print on condition of acceptance by the editor
• to digitize and post your article in the electronic version of this publication
• to include the article in the ACM Digital Library and in any Digital Library related services
• to allow users to copy and distribute the article for noncommercial, educational or research purposes

However, as a contributing author, you retain copyright to your article and ACM will refer requests for republication directly to you.

    Impressum

    Editor-in-Chief

    Carsten Griwodz, Simula Research Laboratory

    Editors

Stephan Kopf, University of Mannheim
Viktor Wendel, Darmstadt University of Technology
Lei Zhang, Microsoft Research Asia
Pradeep Atrey, University of Winnipeg
Christian Timmerer, Klagenfurt University
Pablo Cesar, CWI
Mathias Lux, Klagenfurt University
