
Pixelated Image Abstraction With Integrated User Constraints

Special Section on Expressive Graphics

Timothy Gerstner a,*, Doug DeCarlo a, Marc Alexa c, Adam Finkelstein b, Yotam Gingold a,d, Andrew Nealen a
a Rutgers University, United States
b Princeton University, United States
c TU Berlin, Germany
d Columbia University, United States

Article history: Received 31 August 2012; Received in revised form 22 November 2012; Accepted 18 December 2012; Available online 15 January 2013.

Keywords: Pixel art; Image abstraction; Non-photorealistic rendering; Image segmentation; Color quantization

Abstract

We present an automatic method that can be used to abstract high resolution images into very low resolution outputs with reduced color palettes in the style of pixel art. Our method simultaneously solves for a mapping of features and a reduced palette needed to construct the output image. The results are an approximation to the results generated by pixel artists. We compare our method against the results of two naive methods common to image manipulation programs, as well as the hand-crafted work of pixel artists. Through a formal user study and interviews with expert pixel artists we show that our results offer an improvement over the naive methods. By integrating a set of manual controls into our algorithm, we give users the ability to add constraints and incorporate their own choices into the iterative process.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

We see pixel art every day. Modern handheld devices such as the iPhone, Android devices and the Nintendo DS regularly use pixel art to convey information on compact screens. Companies like Coca-Cola, Honda, Adobe, and Sony use pixel art in their advertisements [1]. It is used to make icons for desktops and avatars for social networks. While pixel art stems from the need to optimize imagery for low resolution displays, it has emerged as a contemporary art form in its own right.
For example, it has been featured by the Museum of Modern Art, and there are a number of passionate online communities devoted to it. The "Digital Orca" by Douglas Coupland is a popular sight at the Vancouver Convention Center. France was recently struck by a "Post-it War",1 where people use Post-it notes to create pixel art on their windows, competing with their neighbors across workplaces, small businesses, and homes.

What makes pixel art both compelling and difficult is the limitations imposed on the medium. With a significantly limited palette and resolution to work with, the task of creating pixel art becomes one of carefully choosing the set of colors and placing each pixel such that the final image best depicts the original subject. This task is particularly difficult as pixel art is typically viewed at a distance where the pixel grid is clearly visible, which has been shown to contribute to the perception of the image [2]. As seen in Fig. 1, creating pixel art is not a simple mapping process. Features such as the eyes and mouth need to be abstracted and resized in order to be represented in the final image. The end product, which is no longer physically accurate, still gives the impression of an identifiable person.

However, few, if any, methods exist to automatically or semi-automatically create effective pixel art. Existing downsampling methods, two of which are shown in Fig. 2, do not accurately capture the original subject. Artists often turn to making pieces by hand, pixel by pixel, which can take a significant amount of time and requires a certain degree of skill not easily acquired by novices of the art. Automated and semi-automated methods have been proposed for other popular art forms, such as line drawing [3,4] and painting [5]. Methods such as those proposed by DeCarlo and Santella [6] and Winnemöller et al. [7] not only abstract images, but do so while retaining salient features.
We introduce an entirely automated process that transforms high resolution images into low resolution, small palette outputs in a pixel art style. At the core of our algorithm is a multi-step iterative process that simultaneously solves for a mapping of features and a reduced palette to convert an input image into a pixelated output image. In the first part of each iteration we use a modified version of an image segmentation proposed by Achanta et al. [8] to map regions of the input image to output pixels. In the second step, we use an adaptation of mass-constrained deterministic annealing [9] to find an optimal palette and its association to output pixels.

1 http://www.postitwar.com/

Computers & Graphics 37 (2013) 333–347. http://dx.doi.org/10.1016/j.cag.2012.12.007. Corresponding author: T. Gerstner, [email protected].
These steps are interdependent, and the final solution is an optimization of both the spatial and palette sizes specified by the user. Throughout this process we use the perceptually uniform CIELAB color space [10]. The end result serves as an approximation to the process performed by pixel artists (Fig. 2, right).

This paper presents an extended edition of "Pixelated Image Abstraction" [11]. In addition to an expanded results section, we have added a set of user controls to bridge the gap between the manual process of an artist and the automated process of our algorithm. These controls allow the user to provide as much or as little input into the process as desired, to produce a result that leverages both the strengths of our automated algorithm and the knowledge and personal touch of the user.

Aside from assisting a class of artists in this medium, applications for this work include automatic and semi-automatic design of low-resolution imagery in handheld, desktop, and online contexts like Facebook and Flickr, wherever iconic representations of high-resolution imagery are used.

2. Related work

One aspect of our problem is to reproduce an image as faithfully as possible while constrained to just a few output colors. Color quantization is a classic problem wherein a limited color palette is chosen based on an input image for indexed color displays.
A variety of methods were developed in the 1980s and early 1990s prior to the advent of inexpensive 24-bit displays, for example [12–15]. A similar problem is that of selecting a small set of custom inks to be used in printing an image [16]. These methods rely only on the color histogram of the input image, and are typically coupled to an independent dithering (or halftoning) method for output in a relatively high resolution image. In our problem, where the spatial resolution of the output is also highly constrained, we simultaneously optimize the selection and placement of colors in the final image.

The problem of image segmentation has been extensively studied. Proposed solutions include graph-cut techniques, such as the method proposed by Shi and Malik [17], and superpixel-based methods such as QuickShift [18], Turbopixels [19], and SLIC [8]. In particular, SLIC (Simple Linear Iterative Clustering) produces regularly sized and spaced regions with low computational overhead given very few input parameters. These characteristics make SLIC an appropriate starting point for parts of our method.

Mass-constrained deterministic annealing (MCDA) [9] is a method that uses a probabilistic assignment while clustering. Similar to k-means, it uses a fixed number of clusters, but unlike k-means it is independent of initialization. Also, unlike simulated annealing [20], it does not randomly search the solution space and will converge to the same result every time. We use an adapted version of MCDA for color palette optimization.

Puzicha et al. [21] proposed a method that reduces the palette of an image and applies half-toning using a model of human visual perception. While their method uses deterministic annealing and the CIELAB space to find a solution that optimizes both color reduction and dithering, our method instead emphasizes palette reduction in parallel with the reduction of the output resolution.

Recently, Inglis and Kaplan [22] proposed a method to convert vector line art into pixel art. At low resolutions, a single misplaced pixel can drastically change the perception of a curve.
Their method removes these disruptive artifacts from the output by shifting each path's end points and local extrema to pixel centers, and by enforcing that a curve whose slope is monotonic is represented by monotonic pixel spans.

Kopf and Lischinski [23] proposed a method that extracts vector art representations from pixel art. This problem is almost the inverse of the one presented in this paper. However, while their solution focuses on interpolating unknown information, converting an image to pixel art requires compressing known information.

Finally, we show that with minor modification our algorithm can produce posterized images, wherein large regions of constant color are separated by vectorized boundaries. To our knowledge, little research has addressed this problem, though it shares some aesthetic concerns with the artistic thresholding approach of Xu and Kaplan [24].

3. Background

Our method for making pixel art builds upon two existing techniques, which we briefly describe in this section.

Fig. 1. Examples of pixel art. "Alice Blue" and "Kyle Red" by Alice Bartlett. Notice how faces are easily distinguishable even with this limited resolution and palette. The facial features are no longer proportionally accurate, similar to deformation in a caricature. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)

Fig. 2. Pixel art images simultaneously use very few pixels and a tiny color palette. Attempts to represent image (a) using only 22 × 32 pixels and eight colors using (b) nearest-neighbor or (c) cubic downsampling (both followed by median cut color quantization) result in detail loss and blurriness. We optimize over a set of superpixels (d) and an associated color palette to produce output (e) in the style of pixel art. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)

SLIC: Achanta et al.
[8] proposed an iterative method to segment an image into regions termed superpixels. The algorithm is analogous to k-means clustering [25] in a five-dimensional space (three color and two positional dimensions), discussed for example in Forsyth and Ponce [26]. Pixels in the input image p_i are assigned to superpixels p_s by minimizing

d(p_i, p_s) = d_c(p_i, p_s) + m · sqrt(N / M) · d_p(p_i, p_s)    (1)

where d_c is the color difference, d_p is the positional difference, M is the number of pixels in the input image, N is the number of superpixels, and m is some value in the range [0, 20] that controls the relative weight that color similarity and pixel adjacency have on the solution. The color and positional differences are measured using Euclidean distance (as are all distances in our paper, unless otherwise noted), and the colors are represented in LAB color space. Upon each iteration, superpixels are reassigned the average color and position of the associated input pixels.

Mass-constrained deterministic annealing: MCDA [9] is a global optimization method for clustering that draws upon an analogy with the process of annealing a physical material. We use this method both for determining the colors in our palette, and for assigning one of these palette colors to each pixel; each cluster corresponds to a palette color.

MCDA is a fuzzy clustering algorithm that probabilistically assigns objects to clusters based on their distance from each cluster. It relies on a temperature value T, which can be viewed as proportional to the expected variance of the clusters. Initially, T is set to a high value T_0, which makes each object equally likely to belong to any cluster. Each time the system locally converges, T is lowered (and the variance of each cluster decreases). As this happens, objects begin to favor particular clusters, and as T approaches zero each object becomes effectively assigned to a single cluster, at which point the final set of clusters is produced.
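The assignment rule of Eq. (1) is compact enough to sketch directly. The following Python is a minimal illustration with our own helper names (not the authors' code); pixels and superpixel centers are represented as ((L, a, b), (x, y)) tuples:

```python
import math

def slic_distance(pixel, superpixel, m, N, M):
    """Eq. (1): color distance plus positionally weighted distance.
    m weights position vs. color; N = number of superpixels,
    M = number of input pixels."""
    (c1, p1), (c2, p2) = pixel, superpixel
    dc = math.dist(c1, c2)  # Euclidean distance in LAB
    dp = math.dist(p1, p2)  # Euclidean distance in (x, y)
    return dc + m * math.sqrt(N / M) * dp

def assign_pixel(pixel, superpixels, m, M):
    """Assign a pixel to the superpixel minimizing Eq. (1)."""
    N = len(superpixels)
    return min(range(N),
               key=lambda s: slic_distance(pixel, superpixels[s], m, N, M))
```

With a large m, as the modified method of Section 4.2 uses, the positional term dominates and superpixels stay compact.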
In Section 4.3 we provide a formal definition of the conditional probability we use to assign superpixels to colors in the palette.

Since at high T having multiple clusters is redundant, MCDA begins with a single cluster, represented internally by two sub-clusters. At the beginning of each iteration these sub-clusters are set to slight permutations of their mean. At a high T these clusters converge to the same value after several iterations, but as the temperature is lowered they begin to naturally separate. When this occurs, the cluster is split into two separate clusters (each represented by their own sub-clusters). This continues recursively until the (user specified) maximum number of clusters is reached.

4. Method

Our automated algorithm is an iterative procedure; an example execution is shown in Fig. 3. The process begins with an input image of width w_in and height h_in and produces an output image of width w_out and height h_out which contains at most K different colors, the palette size. Given the target output dimensions and palette size, each iteration of the algorithm segments the pixels in the input into regions corresponding to pixels in the output and solves for an optimal palette. Upon convergence, the palette is saturated to produce the final output. In this section, we describe our algorithm in terms of the following:

Input pixels: The set of pixels in the input image, denoted as p_i where i ∈ [1, M], and M = w_in × h_in.

Output pixels: The set of pixels in the output image, denoted as p_o where o ∈ [1, N], and N = w_out × h_out.

Superpixels: A region of the input image, denoted as p_s where s ∈ [1, N]. The superpixels are a partition of the input image.

Palette: A set of K colors c_k, k ∈ [1, K], in LAB space.

Our algorithm constructs a mapping for each superpixel that relates a region of input pixels with a single pixel in the output, as in Fig. 4. The algorithm proceeds similarly to MCDA, with a superpixel refinement and palette association step performed upon each iteration, as summarized in Algorithm 1.
Section 5.1 describes how the algorithm can be expanded to allow a user to indicate important regions in the input image.

Algorithm 1.

  initialize superpixels, palette and temperature T (Section 4.1)
  while (T > T_f)
    refine superpixels with 1 step of modified SLIC (Section 4.2)
    associate superpixels to colors in the palette (Section 4.3)
    refine colors in the palette (Section 4.3)
    if (palette converged)
      reduce temperature T ← αT
      expand palette (Section 4.3)
  post-process (Section 4.4)

Fig. 3. The pipeline of the algorithm. The superpixels (a) are initialized in a regular grid across the input image, and the palette is set to the average color of the M input pixels. The algorithm then begins iterating (b). Each iteration has two main steps: (c) the assignment of input pixels to superpixels and (d) the assignment of superpixels to colors in the palette and updating the palette. This not only updates each color, but may also add new colors to the palette. After convergence, the palette is saturated (e), producing the final output. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)

4.1. Initialization

The N superpixel centers are initialized in a regular grid across the input image, and each input pixel is assigned to the nearest superpixel (in (x, y) space, measured to the superpixel center). The palette is initialized to a single color, which is set to the mean value of the M input pixels. All superpixels are assigned this mean color. See Fig. 3, step (a).

The temperature T is set to 1.1 T_c, where T_c is the critical temperature of the set of M input pixels, defined as twice the variance along the major principal component axis of the set in LAB space [9]. The T_c of a set of objects assigned to a cluster is the temperature at which the cluster will naturally split. Therefore, this policy ensures that the initial temperature is safely above the temperature at which more than one color in the palette would exist.
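The temperature initialization of Section 4.1 can be sketched as follows. This is a pure-Python illustration under our own naming, not the authors' code; it estimates T_c as twice the variance along the major principal component axis of the LAB colors, using power iteration on the 3×3 covariance matrix:

```python
import math

def critical_temperature(colors):
    """T_c of a set of LAB colors: twice the variance along the major
    principal component axis (Rose [9])."""
    n = len(colors)
    mean = [sum(c[i] for c in colors) / n for i in range(3)]
    centered = [[c[i] - mean[i] for i in range(3)] for c in colors]
    # 3x3 covariance matrix of the centered colors
    cov = [[sum(x[i] * x[j] for x in centered) / n for j in range(3)]
           for i in range(3)]
    # power iteration for the dominant eigenvector
    v = [1.0, 1.0, 1.0]
    for _ in range(100):
        w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = math.sqrt(sum(x * x for x in w))
        if norm == 0.0:
            return 0.0  # all colors identical
        v = [x / norm for x in w]
    # variance along that axis is the dominant eigenvalue v^T C v
    lam = sum(v[i] * sum(cov[i][j] * v[j] for j in range(3)) for i in range(3))
    return 2.0 * lam
```

The initial temperature is then `1.1 * critical_temperature(colors)` for the list of input-pixel LAB colors.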
4.2. Superpixel refinement

This stage of the algorithm assigns pixels in the input image to superpixels, which correspond to pixels in the output image; see steps (b) and (d) in Fig. 3.

To accomplish this task, we use a single iteration of our modified version of SLIC. In the original SLIC algorithm, upon each iteration, every input pixel is assigned to the superpixel that minimizes d(p_i, p_s), and the color of each superpixel is set to the mean color value of its associated input pixels, m_s. However, in our implementation, the color of each superpixel is set to the palette color that is associated with the superpixel (the construction of this mapping is explained in Section 4.3). This interdependency with the palette forces the superpixels to be optimized with respect to the colors in the palette rather than the colors in the input image. Fig. 5 shows the results of using the mean color value instead of our optimized palette used in Fig. 2.

However, this also means the color error will be generally higher. As a result, we have found that minimizing d(p_i, p_s) using a value of m = 45 is more appropriate in this case (Achanta et al. [8] suggest m = 10). This increases the weight of the positional distance and results in a segmentation that contains superpixels with relatively uniform size.

Next, we perform two steps: one modifies each superpixel's (x, y) position for the next iteration, and one changes each superpixel's representative color. Each step is an additional modification to the original SLIC method and significantly improves the final result.

As seen in Fig. 6 (left), SLIC results in superpixel regions which tend to be organized in 6-connected neighborhoods (i.e. a hexagonal grid). This is caused by how the (x, y) position of each superpixel is defined as the average position of the input pixels associated with it. This hexagonal grid does not match the neighborhoods of the output pixels, which are 8-connected (i.e. a rectangular grid), and will give rise to undesirable distortions of image features and structures in the output, as seen in Fig.
6 (center). We address this problem with Laplacian smoothing. Each superpixel center is moved a percentage of the distance from its current position to the average position of its 4-connected neighbors (using the neighborhoods at the time of initialization). We use 40%. As seen in Fig. 2(d), this improves the correspondence between the superpixel and output pixel neighborhoods. Specifically, it helps ensure that superpixel regions that are adjacent in the input map are also adjacent pixels in the output. To be clear, it is only in the next iteration that the superpixels will be reassigned based on this new center, due to the interleaved nature of our algorithm.

In our second additional step, the color representatives of the superpixels are smoothed. In the original SLIC algorithm, the representative color for each superpixel is the average color m_s of the input pixels associated with it. However, simply using the mean color can become problematic for continuous regions in the image that contain a color gradient (such as a smooth shadowed surface). While this gradient appears natural in the input image, the region will not appear continuous in the pixelated output. To remedy this, our algorithm adjusts the values of m_s using a bilateral filter.

Fig. 4. Pixels in the input image (left) are associated with superpixel regions (middle). Each superpixel region corresponds to a single pixel in the output image (right).

Fig. 5. Our method uses palette colors when finding superpixels. Using the mean color of a superpixel works when the palette is unconstrained (left), but fails when using a constrained palette (middle). This is because the input pixels cluster into superpixels based on colors that do not exist in the final image, which creates a discrepancy. Using the palette colors to represent the superpixels (right) removes this discrepancy. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)
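The center-smoothing step can be sketched in a few lines. This is our own illustration (names are ours); `neighbors` maps each superpixel index to its 4-connected neighbors as fixed at initialization:

```python
def smooth_centers(centers, neighbors, factor=0.4):
    """Move each superpixel center `factor` (the paper uses 40%) of the way
    toward the average position of its 4-connected neighbors."""
    out = []
    for s, (x, y) in enumerate(centers):
        nb = neighbors[s]
        ax = sum(centers[i][0] for i in nb) / len(nb)
        ay = sum(centers[i][1] for i in nb) / len(nb)
        out.append((x + factor * (ax - x), y + factor * (ay - y)))
    return out
```

All centers are read from the previous iteration's positions, so the update is simultaneous rather than sequential.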
We construct an image of size w_out × h_out where each superpixel is assigned the same position as its corresponding output pixel, with value m_s. The colors that result from bilaterally filtering this image, m′_s, are used while iterating the palette.

4.3. Palette refinement

Palette iteration is performed using MCDA [9]. Each iteration of the palette, as seen in step (c) in Fig. 3, can be broken down into three basic steps: associate superpixels to colors in the palette, refine the palette, and expand the palette. The associate and refine steps occur every iteration of our algorithm. When the palette converges for the current temperature T, the expand step is performed.

It is important to note how we handle the sub-clusters mentioned in Section 3: we treat each sub-cluster as a separate color in the palette, and keep track of the pairs. The color of each c_k is the average color of its two sub-clusters. When the maximum size of the palette is reached (in terms of the number of distinct colors c_k), we eliminate the sub-clusters and represent each color in the palette as a single cluster.

Associate: The MCDA algorithm requires a probability model that states how likely a particular superpixel will be associated with each color in the palette. See Fig. 7. The conditional probability P(c_k | p_s) of a superpixel p_s being assigned color c_k depends on the color distance in LAB space and the current temperature, and is given by (after suitable normalization)

P(c_k | p_s) ∝ P(c_k) e^(−||m′_s − c_k|| / T)    (2)

P(c_k) is the probability that color c_k is assigned to any superpixel, given the existing assignment. Upon initialization, there is only one color, and thus this value is initialized to 1. As more colors are introduced into the palette, the value of this probability is computed by marginalizing over p_s:

P(c_k) = Σ_{s=1}^{N} P(c_k | p_s) P(p_s)    (3)

For the moment, P(p_s) simply has a uniform distribution. This will be revisited in Section 5.1 when incorporating user-specified importance. The values of P(c_k) are updated after the values of P(c_k | p_s) are computed using Eq. (2). Each superpixel is assigned to the color in the
palette that maximizes P(c_k | p_s). Intermediate results of this assignment can be seen in Fig. 3 (bottom row). The exponential distribution in Eq. (2) tends towards a uniform distribution for large values of T, in which case each superpixel will be evenly associated with every palette color. As T decreases, superpixels favor colors in the palette that are less distant. At the final temperature, the generic situation after convergence has P(c_k | p_s) = 1 for a single color in the palette and P(c_k | p_s) = 0 for the rest. In this case, deterministic annealing is equivalent to k-means clustering.

Fig. 6. Without the Laplacian smoothing step, the superpixels (left) tend to have 6-connected neighborhoods. This causes small distortions in the output (center), which are particularly noticeable on the ear, eye and mouth, when compared to the original output that uses the superpixels that included the smoothing step (right).

Fig. 7. Each superpixel (left) is associated by some conditional probability P(c_k | p_s) to each color in the palette (middle). The color with the highest probability is assigned to the superpixel and its associated output pixel in the final image (right). (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)

Refine: The next step is to refine the palette by reassigning each color c_k to a weighted average of all superpixel colors, using the probability of association with that color:

c_k = ( Σ_{s=1}^{N} m′_s P(c_k | p_s) P(p_s) ) / P(c_k)    (4)

This adapts the colors in the existing palette given the revised superpixels. Such changes in the palette can be seen in Fig. 3 as the computation progresses.

Expand: Expansion only occurs during an iteration if the palette has converged for the current temperature T (convergence is measured by the total change in the palette since the last iteration being less than some small value ε_palette). First, the temperature is lowered by some factor α (we use 0.7). Next, the palette is expanded if the number of colors is less than the number specified by the user.
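Eqs. (2)–(4) translate almost line for line into code. The sketch below uses our own function and variable names (not the authors' implementation); colors are (L, a, b) tuples, `p_c` holds the priors P(c_k), and `p_s` holds the superpixel priors P(p_s):

```python
import math

def associate(palette, p_c, sp_colors, T):
    """Eq. (2): P(c_k | p_s) proportional to P(c_k) * exp(-||m'_s - c_k|| / T),
    normalized over the palette. Returns one probability row per superpixel."""
    rows = []
    for ms in sp_colors:
        w = [pk * math.exp(-math.dist(ms, ck) / T)
             for ck, pk in zip(palette, p_c)]
        z = sum(w)
        rows.append([x / z for x in w])
    return rows

def marginalize(rows, p_s):
    """Eq. (3): P(c_k) = sum_s P(c_k | p_s) P(p_s)."""
    K = len(rows[0])
    return [sum(rows[s][k] * p_s[s] for s in range(len(rows)))
            for k in range(K)]

def refine(rows, p_s, sp_colors, p_c):
    """Eq. (4): move each palette color to the weighted average of the
    (bilaterally filtered) superpixel colors m'_s."""
    new = []
    for k, pk in enumerate(p_c):
        acc = [0.0, 0.0, 0.0]
        for s, ms in enumerate(sp_colors):
            w = rows[s][k] * p_s[s]
            for i in range(3):
                acc[i] += ms[i] * w
        new.append(tuple(a / pk for a in acc))
    return new
```

At small T each row of `associate` approaches a one-hot assignment, matching the k-means limit described above.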
For each c_k we check to see if the color needs to be split into two separate colors in the palette. As per MCDA, each color in the palette is represented by two cluster points c_k1 and c_k2. We use ||c_k1 − c_k2|| > ε_cluster (where ε_cluster is a sufficiently small number) to check for palette separation. If so, the two cluster points are added to the palette as separate colors, each with its own pair of cluster points. As seen in Fig. 3, over the course of many iterations the palette grows from a single color to a set of eight (which is the maximum number specified by the user in this example).

After resolving any splits, each color is represented by two sub-clusters with the same value (unless the maximum number of colors has been reached). In order for any color's sub-clusters to separate in the following iterations, c_k1 and c_k2 must be made distinctly different. To do so, we perturb the sub-clusters of each color by a small amount along the principal component axis of the cluster in LAB space. Rose [9] has shown this to be the direction along which a cluster will split. This perturbation allows the sub-clusters of each color to merge when T > T_c and separate when T < T_c.

Algorithm 1 is defined so that the superpixel and palette refinement steps are iterated until convergence. The system converges when the temperature has reached the final temperature T_f and the palette converges. We use T_f = 1 to avoid truncation errors as the exponential component of Eq. (2) becomes small.

4.4. Palette saturation

As a post-processing step, we provide the option to saturate the palette, which is a typical pixel artist technique, by simply multiplying the a and b channels of each color by a parameter β > 1. The value used in all our results is β = 1.1. Lastly, by converting from LAB to RGB space, our algorithm outputs the final image.

5. User controls

The algorithm described in Section 4 completely automates the selection of the color palette. This stands in marked contrast to the traditional, manual process of creating pixel art, where the artist carefully selects each color in the palette and its placement in the image.
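The saturation post-process of Section 4.4 amounts to one line per color. A minimal sketch (our own function name; the LAB-to-RGB conversion that follows it is omitted):

```python
def saturate_palette(palette, beta=1.1):
    """Multiply the a and b channels of each LAB color by beta > 1,
    mimicking the saturated palettes typical of pixel artists.
    The paper uses beta = 1.1 for all results."""
    return [(L, a * beta, b * beta) for (L, a, b) in palette]
```

Only the chroma channels are scaled; lightness L is untouched, so the overall tonal structure of the image is preserved.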
Therefore, we propose a set of user controls that leverage the results of our algorithm and bridge the gap between these two extremes. These controls allow the user to have as much or as little control over the process as they want. This combines the power and speed of our automated method with the knowledge and creativity of the user.

The first user control, originally proposed in "Pixelated Image Abstraction" [11], is an importance map that acts as an additional input to our algorithm and lets the user emphasize areas of the image they believe to be important. The second and third controls we propose, pixel and palette constraints, are used after the automated algorithm initially converges. Using these two controls, the user can directly edit the palette colors and their assignment in the output image, giving them full control over the result. After each set of edits, the user can choose to have our automated algorithm continue to iterate, using the current result as its starting point with the user's edits as constraints (see Section 5.4). To demonstrate the effectiveness of these user controls, we developed a user interface that was used to generate the results in Fig. 16.

5.1. Importance map

As stated in Section 4, our automated method does not favor any image content. For instance, nothing is in place that can distinguish between foreground and background objects, or treat them separately in the output. However, user input (or the output of a computer vision system) can easily be incorporated into our algorithm to prioritize the foreground in the output. Thus, our system allows additional input at the beginning of our method. Users can supply a w_in × h_in grayscale image of weights W_i ∈ [0, 1], i ∈ [1, M], used to indicate the importance of each input pixel p_i. In our interface, this is done by using a simple brush to mark areas with the desired weight.
We incorporate this map when iterating the palette (Section 4.3) by adjusting the prior P(p_s). Given the importance map, the value P(p_s) for each superpixel is given by the average importance of all input pixels contained in superpixel p_s (and suitable normalization across all superpixels):

P(p_s) ∝ (1 / |p_s|) Σ_{pixel i ∈ p_s} W_i    (5)

P(p_s) thus determines how much each superpixel affects the resulting palette, through Eqs. (3) and (4). This results in a palette that can better represent colors in the regions of the input image marked as important.

5.2. Pixel constraints

In traditional pixel art, the artist needs to manually choose the color of each pixel in the output. In contrast, our automated algorithm makes the choice entirely for the user. By adding a simple constraint into our program, we can allow the user to work in the area between these two extremes. For each pixel in the output, we allow the user to select a subset of colors in the palette. For each color not in this subset, we set the conditional probability of that color for this pixel, P(c_k | p_s), to zero. This restricts the color assigned to the output pixel to the color with the highest conditional probability within the subset. Note that this has the convenient property of being equivalent to the manual process when the subset is a single color, and to the automatic process when the subset is the entire palette.

As explained in Section 4.2, superpixels are represented using the color in the palette with the highest conditional probability, P(c_k | p_s). Therefore, adding these constraints will affect the assignment of input pixels to superpixels in future iterations. As a result, when constraints are added by the user, neighboring superpixels will naturally compensate as the algorithm attempts to decrease error under these constraints, as seen in Fig. 8.

In our interface, we implement this tool as a paint brush, and allow the user to select one or more colors from the palette to form the subset as they paint onto the output image.
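Both Eq. (5) and the pixel constraint of Section 5.2 are straightforward to sketch. The helper names below are ours; `weights_per_superpixel` lists the brush weights W_i of the pixels in each superpixel, and `probs_row` is one row of conditional probabilities P(c_k | p_s):

```python
def superpixel_priors(weights_per_superpixel):
    """Eq. (5): P(p_s) proportional to the mean importance weight W_i over
    the input pixels of each superpixel, normalized across superpixels."""
    means = [sum(w) / len(w) for w in weights_per_superpixel]
    z = sum(means)
    return [m / z for m in means]

def constrain_and_assign(probs_row, allowed):
    """Section 5.2: zero P(c_k | p_s) for palette colors outside the
    user-chosen subset, then assign the remaining color with the highest
    conditional probability."""
    masked = [p if k in allowed else 0.0 for k, p in enumerate(probs_row)]
    return max(range(len(masked)), key=lambda k: masked[k])
```

When `allowed` contains a single index this reduces to a manual pixel edit; when it contains every palette index it reduces to the fully automatic assignment.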
Using the brush, they are able to choose the precise color of specific pixels, restrict the range of colors for others, and leave the rest entirely to our algorithm.

5.3. Palette constraints

Similarly, in traditional pixel art the artist needs to manually choose each color of the palette. We again provide a set of constraints to give the user control over this process while using our algorithm. After the palette has initially converged, the user has the option to edit and fix colors in the palette. This is done in one of two ways. The first is a trivial method; the user directly modifies a specific color in the palette. The second utilizes the information already gathered by our algorithm. By choosing a color in the palette c_k, and then a superpixel p_s formed by our algorithm, we set c_k to the mean color of that region, m_s, as found in Section 4.2. While the first method allows the user to have direct control, the second provides them with a way of selecting a relatively uniform area of the original image from which to sample a color, without having to specify specific values.

In addition to changing the color, the user has the option to keep these colors fixed or free during any future iterations of the algorithm. If they are fixed, they will remain the same color for the rest of the process. If they are not fixed, they will be free to converge to a new value as our algorithm iterates, starting with the initial color provided by the user's edit. This gives the user another dimension of palette control in addition to the ability to manually choose the colors.

Note that when a color is changed in the palette, areas of the original image may no longer be well represented in the palette. Fortunately, during any future iterations, our algorithm will naturally seek to reduce this discrepancy by updating the unfixed colors in the palette as it attempts to minimize error and converge to a new local minimum.
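One way the fixed/free distinction might interact with a palette refinement pass is sketched below; the `targets` array stands in for whatever colors the update step would move each entry toward (e.g. weighted mean superpixel colors), and all names are illustrative rather than the paper's implementation:

```python
import numpy as np

def update_palette(palette, targets, fixed):
    """One palette refinement pass that respects user-fixed colors.

    palette: current palette colors, shape (K, 3)
    targets: the colors the update step would move each entry toward, (K, 3)
    fixed:   boolean mask; fixed entries keep their user-chosen color,
             free entries converge toward their targets.
    """
    palette = palette.copy()
    palette[~fixed] = targets[~fixed]
    return palette

palette = np.array([[255.0, 0.0, 0.0], [0.0, 255.0, 0.0]])
targets = np.array([[200.0, 10.0, 10.0], [10.0, 200.0, 10.0]])
fixed = np.array([True, False])           # user pinned the first color
new_palette = update_palette(palette, targets, fixed)
# first row unchanged, second row moves to its target
```

Unfixed entries continue to converge as usual, which is how the algorithm absorbs the discrepancy introduced by a user edit.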
5.4. Reiterating

After using any of the tools described in this section, the user has the option of rerunning our algorithm. However, rather than starting from scratch, the algorithm begins with the results of the previous iteration, subject to the constraints specified by the user. When rerunning the algorithm, the temperature remains at the final temperature T_f it reached at convergence, and continues until the convergence condition described in Section 4.3 is met again. Note that while iterating, the algorithm maintains the user's constraints. Therefore the user can decide what the algorithm can and cannot update. Also note that since the algorithm is not starting from scratch, it is generally close to the next solution, and convergence occurs rapidly (usually less than a second). After the algorithm has converged, the user can continue making edits and rerunning the algorithm until satisfied. In this way the user becomes a part of the iterative loop, and both user and algorithm work to create a final solution.

Fig. 8. When the user provides constraints to the image, future iterations of the algorithm will update the superpixels in a way that seeks to decrease error under the new constraints. In this example, the original image (top left) is modified by constraining several pixels of the ear to the background color (bottom left). As a result, the superpixels (top right) are redistributed to match the constraints (bottom right). The superpixels that used to be part of the ear now form segments of the background, and neighboring pixels in the output have changed to accommodate the new superpixel distribution. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)

6. Results

We tested our algorithm on a variety of input images at various output resolutions and color palette sizes (Figs. 9-17). For each example, we compare our method to two naive approaches:
nearest method: a bilateral filter followed by median cut color quantization, followed by nearest-neighbor downsampling;
cubic method: cubic downsampling followed by median cut color quantization.

Fig. 9. Varying the palette size (32, 16, and 8 colors; output images are 64 × 58). (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)

Fig. 10. Varying the output resolution (48 × 36, 64 × 48, 80 × 60; palette has 16 colors). (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)

Unless otherwise stated, the results of our method are generated using only our automated algorithm with the parameter settings from Section 4, and no user input was integrated into the result. Each result was produced in generally less than a minute on an Intel 2.67 GHz i7 processor with 4 GB memory. Each naive result is saturated using the same method described in Section 4.4. Note that it is best to view the results up-close or zoomed-in, with each pixel being distinctly visible.

In Fig. 9, we show the effects of varying the total number of colors in the output palette. Our automatic method introduces fewer isolated colors than the nearest method, while looking less washed out than the cubic method. As the palette size shrinks, our method is better able to preserve salient colors, such as the green in the turban. Our method's palette assignment also improves the visibility of the eyes and does not color any of the face pink.

Similar results are seen in Fig. 10 when we vary the output resolution. Again we see that the cubic method produces washed-out images and the nearest method has a speckled appearance.
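For readers who want to reproduce the baselines, a rough numpy sketch of both is given below. It is deliberately simplified: the bilateral pre-filter of the nearest method is omitted, bicubic downsampling is approximated by block averaging, and all function names are our own. It assumes the input has enough distinct colors for the requested palette size:

```python
import numpy as np

def median_cut(pixels, n_colors):
    """Median-cut palette: repeatedly split the pixel box with the widest
    channel range at the median of that channel, then average each box."""
    boxes = [pixels]
    while len(boxes) < n_colors:
        i = max(range(len(boxes)), key=lambda j: np.ptp(boxes[j], axis=0).max())
        box = boxes.pop(i)
        ch = np.ptp(box, axis=0).argmax()        # widest channel
        order = box[:, ch].argsort()
        half = len(box) // 2
        boxes += [box[order[:half]], box[order[half:]]]
    return np.array([b.mean(axis=0) for b in boxes])

def quantize(pixels, palette):
    """Map each pixel to its nearest palette color (Euclidean in RGB)."""
    idx = np.argmin(((pixels[:, None] - palette) ** 2).sum(-1), axis=1)
    return palette[idx]

def nearest_method(img, out_h, out_w, n_colors):
    """Quantize (median cut), then nearest-neighbor downsample.
    The paper's bilateral pre-filter is omitted in this sketch."""
    palette = median_cut(img.reshape(-1, 3).astype(float), n_colors)
    rows = (np.arange(out_h) * img.shape[0]) // out_h
    cols = (np.arange(out_w) * img.shape[1]) // out_w
    small = img[rows][:, cols].reshape(-1, 3).astype(float)
    return quantize(small, palette).reshape(out_h, out_w, 3)

def cubic_method(img, out_h, out_w, n_colors):
    """Downsample first (block average standing in for bicubic), then quantize."""
    h, w = img.shape[:2]
    small = img.astype(float).reshape(out_h, h // out_h, out_w, w // out_w, 3)
    small = small.mean(axis=(1, 3)).reshape(-1, 3)
    palette = median_cut(small, n_colors)
    return quantize(small, palette).reshape(out_h, out_w, 3)

img = np.random.default_rng(0).integers(0, 256, (64, 64, 3))
out_nearest = nearest_method(img, 16, 16, 8)
out_cubic = cubic_method(img, 16, 16, 8)
```

Averaging before quantizing (cubic) mixes colors across regions, which is consistent with the washed-out appearance described above; sampling after quantizing (nearest) preserves saturated colors but picks them at isolated grid positions, producing speckle.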
At all resolutions, our method preserves features such as the goggles more faithfully, and consistently chooses more accurate skin tones for the faces, whereas both naive methods choose gray.

Using our automated algorithm, the image of Barack Obama is recognizable even at extremely small output resolutions and palette sizes (Fig. 11). At 22 × 32 and 11 × 16, our method more clearly depicts features such as the eyes while coloring regions such as the hair and tie more consistently. At 11 × 16, the nearest method produces a result that appears to distort facial features, while the cubic method produces a result that loses the eyes. At 4 × 6, results are very abstract, but our method's output could still be identified as a person or as having originated from the input.

Fig. 11. Examples of very low resolutions and palette sizes (22 × 32 with 8 colors, 11 × 16 with 6 colors, 4 × 6 with 4 colors).

In Fig. 12, we compare our automated output to manual results created by expert pixel artists. While our results exhibit the same advantages seen in the previous figures over the naive methods, they do not match the results made by artists. Expert artists are able to heavily leverage their human understanding of the scene to emphasize and de-emphasize features and make use of techniques such as dithering and edge highlighting. While there are many existing methods to automatically dither an image, at these resolutions the decision on when to apply dithering is nontrivial, and uniform dithering can introduce undesired textures to surfaces (such as skin).

Fig. 13 contains additional results computed using various input images. Overall, our automated approach is able to produce less noisy, sharper images with a better selection of colors than the naive techniques we compared against.

To verify our analysis, we conducted a formal user study with 100 subjects using Amazon Mechanical Turk. Subjects were shown the original image and the results of our automated method and the two naive methods.
The results were scaled to approximately 256 pixels along their longest dimension using nearest-neighbor upsampling, so that users could clearly see the pixel grid. We asked subjects the question, "Which of the following best represents the image above?" Subjects responded by choosing a result image. The stimulus sets and answer choices were randomized to remove bias. The study consisted of the example images and parameters shown in our paper, excluding the results generated using user input, and each stimulus was duplicated four times (60 total).

We accounted for users answering randomly by eliminating the results of any subject who gave inconsistent responses (choosing the same answer for fewer than three of the four duplicates) on more than a third of the stimuli. This reduced the number of valid responses to 40. The final results show that users chose our results 41.49% of the time, the nearest method 34.52% of the time, and the cubic method 23.99% of the time. Using a one-way analysis of variance (ANOVA) on the results, we found a p value of 2.12 × 10^-6, which leads us to reject the null hypothesis that subjects all chose randomly. Using Tukey's range test we found that our automated method is significantly different from the nearest method with a 91% confidence interval, and from the cubic method with a 99% confidence interval. While we acknowledge that the question asked is a difficult one given that it is an aesthetic judgment, we believe that the results of this study still show subjects prefer the results of our method over the results of either naive method.

We also received feedback from three expert pixel artists on our automated method; each concluded that the automated results are, in general, an improvement over the naive approaches. Ted Martens, creator of the Pixel Fireplace, said that our algorithm "chooses better colors for the palette, groups them well, and finds shapes better." Adam Saltsman, creator of Canabalt and Flixel, characterized our
results as "more uniform, more reasonable palette, better forms, more readable." Craig Adams, art director of Superbrothers: Sword & Sworcery EP, observed that "essential features seem to survive a bit better [and] shapes seem to come through a bit more coherently. I think the snowboarder's goggles are the clearest example of an essential shape: the white rim of the goggle being coherently preserved in your process, while it decays in the naive process."

Fig. 12. Comparing to the work of expert pixel artists (64 × 43). The results generated by our method and the naive methods use 16 colors in the first example, 12 in the second. The pixel artists use eight colors in the first example, 11 in the second. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)

In Section 2, we mentioned that the method of Kopf and Lischinski [23] is essentially the inverse process of our method; it takes a pixel art piece and converts it to a smooth, vectorized output. Additionally, their method's process of reshaping pixels can be seen as a similar approach to our method of segmentation. Their algorithm computes a mapping between each pixel and a segment, and, like our method, does so by utilizing both neighborhood and color information.

To see how well our method actually serves as an inverse process to their depixeling algorithm, we took the vectorized output of their method as the input of our automated algorithm, setting the size of the output image and palette to the same as their input, and compared the results to those of the naive methods. We used the 54 images found in their paper's supplemental material. An example can be seen in Fig. 14. Results were compared by taking the sum of the per-pixel Euclidean distance in LAB color space between each method's result and the original input to their algorithm.
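The LAB comparison can be reproduced with a short numpy sketch using the standard sRGB-to-CIELAB conversion (D65 white point). The helper below returns the mean per-pixel distance; the paper reports a sum / mean squared variant, so the aggregation would need to be adjusted to match the numbers exactly:

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert sRGB values in [0, 1], shape (..., 3), to CIE LAB (D65)."""
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    m = np.array([[0.4124, 0.3576, 0.1805],     # sRGB -> XYZ matrix
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ m.T / np.array([0.95047, 1.0, 1.08883])  # D65 white point
    d = 6.0 / 29.0
    f = np.where(xyz > d ** 3, np.cbrt(xyz), xyz / (3 * d * d) + 4.0 / 29.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def mean_lab_error(result, reference):
    """Mean per-pixel Euclidean distance in LAB between two RGB images."""
    diff = rgb_to_lab(result) - rgb_to_lab(reference)
    return np.sqrt((diff ** 2).sum(axis=-1)).mean()
```

Measuring in LAB rather than RGB is the natural choice here because Euclidean distances in LAB approximate perceptual color differences.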
We found the mean squared per-pixel error and standard deviation for each method was 17.03 and 2.08 (our method), 18.0 and 2.85 (nearest), and 118.71 and 8.08 (cubic). We hypothesize that the lack of difference between our method and the nearest method is due to the fact that the original input to the depixelizing algorithm was pixel art, and while the method creates a smooth output, it does not offset the reshaped pixels far from their original color or position, which means that the results still generally align with a pixel grid. We believe that the poor performance of the cubic method is attributable to its tendency to blur and wash out colors, as seen in Fig. 14.

In Fig. 15, we present results from our method using an importance map as an additional input to the algorithm. The results are closer to those created by expert pixel artists. Fig. 15 (left) allocates more colors in the palette to the face of the person, similar to the manual result in Fig. 12. Fig. 15 (right) also shows an improvement over the non-weighted results in Fig. 9. For both examples, the importance map emphasizes the face and de-emphasizes the background; consequently, more colors are allocated to the face in each example at the expense of the background.

Fig. 13. Additional results at various resolutions and palette sizes. Columns (left to right): input image, output of our algorithm, output of the nearest method, output of the cubic method.

In Fig. 16, we demonstrate the results of also incorporating pixel and palette user constraints during our iterative process. For each result, we recorded the total number of pixel and palette constraints used to create the final product. In each case, the user spent less than 2 min creating an importance map. Fig. 16 (top) was made using 18 pixel constraints (2.6% of the total number of pixels in the image) and one palette constraint. Fig. 16 (middle) was created using 766 pixel constraints (27.8%) and eight palette constraints.
However, we count each new constraint during the process, even if it overwrites a previous constraint. In the final result, only 483 pixels (17.6%) and zero colors were constrained. This discrepancy is due to the user trying several colors for each pixel, which is typical when creating pixel art manually. Fig. 16 (bottom) was created using 2732 pixel constraints (66.7%) and zero color constraints. Of the 2732 pixel constraints, 2140 were created in 27 user operations to make the background a solid color.

Incorporating pixel and palette constraints into the automated process gives users the ability to work anywhere between a fully automated and a fully manual process. The advantage of this approach can be seen in the results of Figs. 16 and 17. In Fig. 16 (top), the user provides minimal but effective changes, such as improving the jawline, and removing a skin color in favor of a blue in the palette for the tie. They also introduce a simple striped pattern into the tie, which still represents the original image, but no longer has a direct correspondence. In Fig. 17 (left), the user incorporates several of the high-level techniques observed in Fig. 12, such as dithering and edge highlighting, and choices such as removing the background, none of which are natively built into our automated algorithm. The image in Fig. 16 (bottom) is a failure case for our automated algorithm, due to the lighting and high variation in the background. However, even with this initially poor output, interleaving the iterative process with user constraints significantly improves the results.

Finally, while not the direct goal of our work, we briefly mention a secondary application of our method, image posterization. This artistic technique uses just a few colors (originally motivated by the use of custom inks in printing) and typically seeks a vectorized output. Adobe Illustrator provides a feature called LiveTrace that can posterize the image in Fig. 2(a), yielding Fig. 18(a) with only six colors.

Fig. 14. The original pixel art image (© Nintendo Co., Ltd.)
is converted to a vectorized version using Kopf and Lischinski's method [23]. The vectorized version is then converted back to a pixelated version using our automated method and the two naive methods.

Fig. 15. Results using an importance map. (top) 64 × 58, 16 colors; (bottom) 64 × 43, 12 colors. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)

To our knowledge, little research has addressed this problem, though it shares some aesthetic concerns with the artistic thresholding approach of Xu and Kaplan [24]. A simple modification to our optimization that omits the smoothing step (Fig. 6 (left)) and then colors the original image via associated superpixels gives us Fig. 18(b), which makes a more effective starting point for vectorization. The resulting Fig. 18(c) offers improved spatial and color fidelity, based on a good-faith effort to produce a similar style in Illustrator.

7. Conclusion, limitations and future work

We present a multi-step iterative process that simultaneously solves for a mapping of features and a reduced palette to convert an input image to a pixelated output image. Our method demonstrates several advantages over the naive methods. Our results have a more vibrant palette, retain more features of the original image, and produce a cleaner output with fewer artifacts. While the naive methods produce unidentifiable results at very low resolutions and palette sizes, our approach is still able to create iconic images that conjure the original image. Thus our method makes a significant step towards the quality of images produced by pixel artists.

Nevertheless, our automated method has several limitations. While pixel artists viewed the results of our automated algorithm as an improvement, they also expressed the desire to have greater control over the final product.

To address these concerns, we implemented several controls that allow the user to give as much or as little feedback into the automated process as they desire.
By incorporating an importance map, we give the user the ability to guide the palette selection, and by giving the user the ability to provide pixel and palette constraints and interleave them with our algorithm, we remove the gap between the manual and automated methods of producing pixel art.

The results of combining these user constraints into our iterative algorithm are encouraging. For future work, we wish to expand on our proposed method and user controls to increase the interaction between the automated algorithm and the user. Our goal is to create a complete system that incorporates the speed and power of an automated method to assist artists in their entire process, without restricting the control of the artists over the final result.

As such, the next step is to explore how the user's feedback can help inform more advanced pixel art techniques in our algorithm, such as those that would produce edge highlighting and dithering. We would also like to look into ways of automatically performing palette transfers, which would allow potential applications of this work to include, for example, reproduction of an image in repeating tiles like Lego, or design for architectural facades composed of particular building materials like known shades of brick. Currently, our algorithm is limited to working with colors that are similar to the original image due to the nature of how we minimize error, and such an application is not possible without the user applying a large number of constraints.

Fig. 16. The results of the automatic method compared to the results obtained by integrating user input into the iterative process with our interface (22 × 32 with 8 colors; 64 × 43 with 16 colors; 64 × 64 with 16 colors). The pixel and palette constraints give users the ability to incorporate high-level information that the algorithm does not. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)

Acknowledgments

We thank the anonymous reviewers for their helpful feedback.
We also wish to thank pixel artists Craig Adams, Ted Martens, and Adam Saltsman for their advice and comments. This research is supported in part by the Sloan Foundation, the NSF (CAREER Award CCF-06-43268 and Grants IIS-09-16129, IIS-10-48948, IIS-11-17257, CMMI-11-29917, IIS-09-16845, DGE-05-49115), and generous gifts from Adobe, Autodesk, Intel, mental images, Microsoft, NVIDIA, Side Effects Software, and the Walt Disney Company. The following copyrighted images are used with permission: Fig. 1 by Alice Bartlett, Fig. 9 by Louis Vest, Fig. 13 (giraffe) by Paul Adams, and Fig. 13 (pagoda) by William Warby. The pixel art in Fig. 12 is copyright Adam Saltsman (top) and Ted Martens (bottom).

References

[1] Vermehr K, Sauerteig S, Smital S. eboy. http://hello.eboy.com; 2012.
[2] Marr D, Hildreth E. Theory of edge detection. Proc R Soc London Ser B 1980;207:187-217.
[3] DeCarlo D, Finkelstein A, Rusinkiewicz S, Santella A. Suggestive contours for conveying shape. ACM Trans Graph 2003;22(3):848-55.
[4] Judd T, Durand F, Adelson EH. Apparent ridges for line drawing. ACM Trans Graph 2007;26(3):19.
[5] Gooch B, Coombe G, Shirley P. Artistic vision: painterly rendering using computer vision techniques. In: Non-photorealistic animation and rendering (NPAR), 2002. p. 83-90.

Fig. 17. (left) Using our proposed tools, the user can incorporate high-level techniques such as dithering and edge highlighting into the final result (64 × 59, 12 colors). (right) An example of a failure case for the automated algorithm, the results of which are drastically improved when augmented with user constraints (28 × 32, 8 colors).

Fig. 18. (a) Vectorized photo posterized with Illustrator (six colors). (b) Optimization without Laplacian smoothing, coloring associated input pixels (six colors). (c) Vectorizing (b) in Illustrator yields a similar style with better spatial and color fidelity than (a).
(For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)

[6] DeCarlo D, Santella A. Stylization and abstraction of photographs. ACM Trans Graph 2002;21:769-76.
[7] Winnemöller H, Olsen SC, Gooch B. Real-time video abstraction. ACM Trans Graph 2006;25:1221-6.
[8] Achanta R, Shaji A, Smith K, Lucchi A, Fua P, Süsstrunk S. SLIC superpixels. Technical Report. IVRG CVLAB; 2010.
[9] Rose K. Deterministic annealing for clustering, compression, classification, regression, and related optimization problems. Proc IEEE 1998;86(11):2210-39.
[10] Sharma G, Trussell HJ. Digital color imaging. IEEE Trans Image Process 1997;6:901-32.
[11] Gerstner T, DeCarlo D, Alexa M, Finkelstein A, Gingold Y, Nealen A. Pixelated image abstraction. In: Proceedings of the international symposium on non-photorealistic animation and rendering (NPAR), 2012.
[12] Gervautz M, Purgathofer W. A simple method for color quantization: octree quantization. In: Graphics gems, 1990. p. 287-93.
[13] Heckbert P. Color image quantization for frame buffer display. SIGGRAPH Comput Graph 1982;16:297-307.
[14] Orchard M, Bouman C. Color quantization of images. IEEE Trans Signal Process 1991;39:2677-90.
[15] Wu X. Color quantization by dynamic programming and principal analysis. ACM Trans Graph 1992;11:348-72.
[16] Stollnitz EJ, Ostromoukhov V, Salesin DH. Reproducing color images using custom inks. In: Proceedings of SIGGRAPH, 1998. p. 267-74.
[17] Shi J, Malik J. Normalized cuts and image segmentation. IEEE Trans Pattern Anal Mach Intell 1997;22:888-905.
[18] Vedaldi A, Soatto S. Quick shift and kernel methods for mode seeking. In: European conference on computer vision, vol. IV, 2008. p. 705-18.
[19] Levinshtein A, Stere A, Kutulakos KN, Fleet DJ, Dickinson SJ, Siddiqi K. TurboPixels: fast superpixels using geometric flows.
IEEE Trans Pattern Anal Mach Intell 2009;31:2290-7.
[20] Kirkpatrick S, Gelatt CD, Vecchi MP. Optimization by simulated annealing. Science 1983;220:671-80.
[21] Puzicha J, Held M, Ketterer J, Buhmann JM, Fellner DW. On spatial quantization of color images. IEEE Trans Image Process 2000;9:666-82.
[22] Inglis TC, Kaplan CS. Pixelating vector line art. In: Proceedings of the symposium on non-photorealistic animation and rendering (NPAR), 2012. p. 21-8.
[23] Kopf J, Lischinski D. Depixelizing pixel art. ACM Trans Graph 2011;30(4):99.
[24] Xu J, Kaplan CS, Mi X. Computer-generated papercutting. In: Proceedings of Pacific Graphics, 2007. p. 343-50.
[25] MacQueen JB. Some methods for classification and analysis of multivariate observations. In: Proceedings of 5th Berkeley symposium on mathematical statistics and probability, 1967. p. 281-97.
[26] Forsyth DA, Ponce J. Computer vision: a modern approach. Prentice Hall; 2002.

