
Graphical Models xxx (2012) xxx–xxx

Contents lists available at SciVerse ScienceDirect

Graphical Models

journal homepage: www.elsevier.com/locate/gmod

Efficient synthesis of gradient solid textures

Guo-Xin Zhang a,*, Yu-Kun Lai b, Shi-Min Hu a

a TNList, Department of Computer Science and Technology, Tsinghua University, China
b School of Computer Science and Informatics, Cardiff University, UK

Article info

Article history:
Received 30 August 2012
Received in revised form 17 October 2012
Accepted 26 October 2012
Available online xxxx

Keywords: Solid textures, Synthesis, Vector representation, Gradient, Vectorization, Distance fields, Tricubic interpolation, Editing propagation, Real-time rendering, 2D exemplars

1524-0703/$ - see front matter © 2012 Elsevier Inc. All rights reserved.
http://dx.doi.org/10.1016/j.gmod.2012.10.006

* Corresponding author.
E-mail addresses: [email protected] (G.-X. Zhang), [email protected] (Y.-K. Lai), [email protected] (S.-M. Hu).

Please cite this article in press as: G.-X. Zhang et al., Efficient synthesis of gradient solid textures, Graph. Models (2012), http://dx.doi.org/10.1016/j.gmod.2012.10.006

Abstract

Solid textures require large storage and are computationally expensive to synthesize. In this paper, we propose a novel solid representation called gradient solids to compactly represent solid textures, including a tricubic interpolation scheme of colors and gradients for smooth variation and a region-based approach for representing sharp boundaries. We further propose a novel approach to directly synthesize gradient solid textures from exemplars. Compared to existing methods, our approach avoids the expensive step of synthesizing the complete solid textures at the voxel level and produces optimized solid textures using our representation. This avoids a significant amount of unnecessary computation and storage involved in voxel-level synthesis while producing solid textures of comparable quality to the state of the art. The algorithm is much faster than existing approaches for solid texture synthesis and makes it feasible to synthesize high-resolution solid textures in full. We also propose a novel application: instant editing propagation on full solids.

© 2012 Elsevier Inc. All rights reserved.

1. Introduction

Textures are essential for current rendering techniques, as they bring in richness without involving overly complicated geometry. Most previous work on texture synthesis focuses on synthesizing 2D textures, which require texture mapping with almost unavoidable distortions when they are applied to 3D objects. Solid textures represent color (or other attributes) over 3D space, providing an alternative to 2D textures that avoids complicated texture mapping and allows real solid objects to be represented with consistent textures on the surface and in the interior alike.

Due to the extra dimension, solid textures represented as attributes sampled at regular 3D voxel grids are extremely expensive to synthesize and store. To provide sufficient resolution in practice, a typical solution is to synthesize only a small cube (e.g. 128³) and tile the cube to cover the 3D space. However, tiling may cause visual repetition (see Fig. 8). While repetitions could be alleviated with some rotations, they cannot be eliminated completely when the volumes are sliced with certain planes. Further, tiling is possible only when the solid textures have no interaction with the underlying objects, and thus cannot respect any model features or user design intentions. To address this, previous approaches [4,42] synthesize solid textures on demand; however, handling high-resolution solid textures is still expensive in both computation and storage.

Vectorized graphics provide significant advantages for pixels (or voxels) with dominantly smooth color variations within each homogeneous region, such as being compact, resolution independent and easy to edit. Inspired by image vectorization, the possibility and effectiveness of vectorizing solid textures have recently been studied in [33]. That work is essentially a 3D generalization of image vectorization: it requires voxel-level (raster) solid textures as input and inherits similar advantages over traditional raster solid textures. It remains computationally costly, however, and involves large intermediate storage for raster solid textures when synthesizing high-resolution solid textures with a nonhomogeneous spatial distribution (e.g. [42]).

This paper is an extended version of the conference paper [43], with substantially extended technical details, experimental results, evaluation and applications including solid vectorization and instant editing. In this paper, instead of first synthesizing the full voxel solid textures before vectorizing them [33], we propose a novel approach to directly synthesize vectorized solid textures from exemplars. Inspired by gradient meshes in image vectorization [29], we propose a novel gradient solid representation that uses a tricubic interpolation scheme for smooth color variations within a region, and a region-based approach to represent sharp boundaries with separated colors. This representation is compact, more regular than Radial Basis Functions (RBFs) [33] and thus particularly suitable for real-time rendering and efficient solid texture synthesis. Our approach can be used to vectorize input solids, which is over 100 times faster than [33] and leads to reduced approximation errors in most practical cases, as shown later by extensive comparative experiments. As discussed later in the paper, while the proposed representation is not suitable for all textures, it is sufficient to represent a variety of practical solid textures in high quality, in particular those having dominantly smooth color variations within each homogeneous region.

We further treat solid texture synthesis as an optimization process over the control points of gradient solids, producing synthesized solids whose sectional images are similar to the given exemplars. Compared with traditional solid texture synthesis, we have far fewer control points than voxels, leading to a much more efficient algorithm. Our method solves bitmap solid synthesis and solid vectorization together and produces solid textures of comparable quality to the state of the art, while being over 10 times faster than existing synthesis methods.

The main contributions of this paper are:

• A new gradient solid representation with regular structure that is compact, resolution-independent and capable of representing smooth solids and solids with separable regions.
• A novel optimization-based algorithm for direct synthesis of high quality solid textures and vectorization of high resolution solids, which is efficient in both computation and storage.
• A novel application, instant solid editing, as demonstrated in the paper.

To the best of our knowledge, this is the first algorithm that synthesizes vector solid textures directly from exemplars, allowing high resolution, potentially spatially nonhomogeneous solid textures to be synthesized in full. Thanks to the new compact representation, solid textures can be directly synthesized in this representation, significantly reducing the computational and memory costs. Our representation also allows instant editing without resorting to time-consuming conversion between vector and raster solids. Both of these would be difficult to achieve, if possible at all, by previous methods. This addresses major drawbacks of using solid textures in practical applications, namely large storage requirements and synthesis time. Various techniques have also been developed to effectively improve the quality or reduce the computational cost.

A typical example of high-resolution gradient solid texture synthesis and editing is given in Fig. 1. In Section 2, we review prior work in texture synthesis and vectorization. Our vector solid representation is described in Section 3 and the algorithm details are given in Section 4. Experimental results, applications and discussions are presented in Section 5, and finally concluding remarks are given in Section 6.

2. Related work

Our work is closely related to example-based texture synthesis and to vector images/textures.

Solid texture synthesis: Texture synthesis has been an active research direction in computer graphics for many years. Please refer to [35] for a comprehensive survey of example-based 2D texture synthesis and to [28] for a recent survey of solid texture synthesis from 2D exemplars.

Early work on solid texture synthesis focuses on procedural approaches [26,27]. Since rules are used to generate solid textures, very little storage is needed, and procedural solid textures can be generated in real time [2]. However, only restricted classes of textures can be effectively synthesized, and it is inconvenient to tune the parameters. Exemplar-based approaches do not suffer from these problems and have thus received more attention. 2D exemplar images are popular due to their wide availability. Wei [34] extends non-parametric 2D texture synthesis algorithms to synthesize solid textures. An improved algorithm is proposed in [13] to generate solid textures based on texture optimization [14] and histogram matching [8]. Further extended work [3] considers k-coherent search and combined position and index histograms to improve the results. To synthesize high resolution solid textures, Dong et al. [4] propose an efficient synthesis-on-demand algorithm based on deterministic synthesis of only those windows of the whole space [16] necessary for rendering, exploiting the fact that only 2D slices are needed at a time for normal displays. This work is extended in [42], which introduces user-provided tensor fields as guidance for solid texture synthesis. This approach allows synthesizing solid textures with nonhomogeneous spatial distributions, which cannot be achieved by tiling small fixed cubes.

Alternative approaches for solid texture synthesis exist. Jagnow et al. [10,11] propose an algorithm based on stereological analysis which provides more precise modeling of solid textures. Du et al. [5] synthesize solid textures by analyzing the shapes and colors of particles from 2D exemplars and appropriately placing particles to form sectional images consistent with the exemplars. This is conceptually similar to salient structural element analysis in 2D texture synthesis [24]. The method is particularly suitable for semi-regular solid texture synthesis. However, these approaches only work for restricted types of solid textures with well separable pieces. Lapped textures have been extended to synthesize 3D volumetric textures [30]; 3D volumetric exemplars instead of 2D image exemplars are needed as input. Solid texture synthesis has also been used for other applications; Ma et al. [21] use similar techniques for motion synthesis.

Fig. 1. High-resolution gradient solid texture synthesis and editing. From left to right: the input exemplar, the synthesized gradient solid texture following a given directional field, a closeup, internal slices and instant editing (user interaction and the output). Part of the figure was previously published in [43] and is republished with permission by Springer.

Unlike previous methods, our approach directly synthesizes gradient solid textures from 2D exemplars. This provides the benefits of both procedural and exemplar-based approaches: the representation is more compact, and high resolution solid textures can be synthesized in full efficiently. The algorithm is flexible enough to synthesize various solid textures using 2D exemplars and to follow given tensor fields if specified by the user. The whole solid texture needs to be synthesized only once, which reduces overall computation.

Vector images and vector solid textures: Different from raster images, vector graphics use geometric primitives along with attributes such as colors and their gradients to represent images. Due to the advantages of vector graphics, plenty of recent work focuses on generating vector representations from raster images. Recent work proposes automatic or semi-automatic approaches to high-quality image vectorization using quadrilateral gradient meshes [29,15] or curvilinear triangle meshes for better feature alignment [37]. Diffusion curves [23] model vector images as a collection of colors diffusing around curves. Some works consider combining raster images with extra geometric primitives [1,32,25] to obtain benefits such as improved editing and resizing.

Vector graphics have recently been generalized to solid textures [33,31]. Compared to raster solids, vector solids have the advantages of compact storage and efficient rendering. Wang et al. [33] propose an automatic approach to vectorize given solid textures using an RBF-based representation. However, this approach relies on raster solids as input, so an expensive raster solid texture synthesis algorithm [13] needs to be performed first if only 2D exemplars are given. Diffusion surfaces [31], a generalization of diffusion curves [23], have been used to represent vector solids; their focus, however, is user design of solids rather than automatic generation.

Vector representation is loosely related to volume compression techniques (e.g. [22,41]), as both consider representations more compact than raster solids. Vector representation, however, aims at a compact and resolution-independent representation suitable for graphics applications, producing visually similar and pleasing results even when magnified, while the purpose of volume compression is to reconstruct large volumes as close as possible to the original even under significant compression. Research on volume compression tends to use blocks and block-based coding, which leads to less smooth reconstruction.

We propose a novel algorithm that synthesizes gradient solids directly from 2D exemplars, bypassing intermediate bitmap solid synthesis and subsequent bitmap-to-vector conversion, leading to an algorithm efficient in both computation and storage that produces high quality solid textures. The representation, although designed with a somewhat different aim, may also be useful for certain volume compression applications.

3. Gradient solid representation

We give details of the gradient solid representation, which allows efficient representation of smooth regions and of regions with boundaries.

3.1. Representing smooth regions

We first consider representing regions with smoothly varying colors. We use an n × n × n grid of control points with axes u, v, w to represent the solid textures. At each control point (i, j, k), we store a feature vector f including r, g, b color components and additional feature channels such as a signed distance measuring both the distance and the inside/outside relation to surfaces that separate the volume into two sides. This is useful for better structure preservation [17] as well as region separation; the latter use will be detailed in the next subsection. In addition, the gradients of f, i.e. $\frac{\partial f}{\partial u}, \frac{\partial f}{\partial v}, \frac{\partial f}{\partial w}$, are also stored, allowing flexible control of variations in 3D space. 3D tricubic interpolation with gradients [7,18] is used to obtain the feature vector $\tilde{f}$ for any voxel inside the grid. Similar tricubic interpolation has been used in isosurface extraction from volumetric data for visualization [12]. Assume that p = 1, 2, …, 8 represents the 8 control points of the cube that covers the voxel, and assume second and higher order derivatives of f to be zero; then $\tilde{f}$ at parameter (u, v, w) (0 ≤ u, v, w ≤ 1) can be evaluated as

$\tilde{f}(u,v,w) = \sum_{i,j,k=0}^{3} a_{ijk}\, u^i v^j w^k.$   (1)


The coefficients $a_{ijk}$ are determined by requiring the interpolated function to reproduce the stored values, gradients and some selected higher order derivatives at each corner of the cube. The higher order constraints are selected to be isotropic (consistent across axes) and are introduced to ensure uniqueness of the solution. As proved in [18], all 64 coefficient vectors $a_{ijk}$ are weighted sums of the entries of the 32-dimensional vector $V = \left(\ldots, f(p), \frac{\partial f(p)}{\partial u}, \frac{\partial f(p)}{\partial v}, \frac{\partial f(p)}{\partial w}, \ldots\right)$, and the interpolation is C¹ continuous not only at cube corners but over the whole volume.
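As a concrete illustration of Eq. (1), the sketch below evaluates the tricubic polynomial inside one cube. The coefficient tensor `a` (shape 4 × 4 × 4 × C, one C-channel vector per monomial) is assumed to have already been derived from the corner values and gradients in V as in [18]; the function name and array layout are our own, not the paper's.

```python
import numpy as np

def eval_tricubic(a, u, v, w):
    """Evaluate Eq. (1): f~(u,v,w) = sum_{i,j,k=0}^{3} a[i,j,k] * u^i v^j w^k.

    `a` is a hypothetical (4, 4, 4, C) array of per-cube coefficient
    vectors (C channels, e.g. r, g, b and a signed distance), assumed
    to have been derived from the 8 corner values and gradients.
    """
    pu = u ** np.arange(4)          # [1, u, u^2, u^3]
    pv = v ** np.arange(4)
    pw = w ** np.arange(4)
    # Contract the monomial weights against the coefficient tensor.
    return np.einsum('i,j,k,ijkc->c', pu, pv, pw, a)
```

In practice the paper replaces this per-voxel evaluation by a pre-computed look-up table of weights, so only a weighted sum over V remains at run time.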

The geometric positions of control points in our representation are fixed; however, these points still carry other attributes such as colors and gradients which control the appearance of the solids. Assuming the displacement between adjacent control points is d, the geometric position of control point (i, j, k) is (id, jd, kd). The displacement determines the number of voxels located within each cube of the control grid: larger d leads to more compression, while smaller d captures details better. In all of our experiments we use d = 4, which means that the number of control points is roughly 1/64 = 1.56% of the number of voxels.

This simple representation has several significant advantages. For any fixed point with known parameter (u, v, w), since $u^i v^j w^k$ can be pre-computed, the expensive evaluation of Eq. (1) can be reduced to a weighted sum of elements of V. In practice, we pre-compute these coefficients for a regular grid with 33³ samples in each cube, i.e. with an interval of 1/8 voxel, for accuracy. A fixed look-up table independent of the input is pre-computed and stored, with 33³ × 32 entries (about 4.4 MB), and the interpolated feature at any spatial position can be computed as a linear combination of V with these prebuilt weights.

The interpolation is achieved in rendering via GPU acceleration, as detailed in Section 5.3. This allows efficient evaluation, which is particularly important as solid textures are computationally intensive. The look-up table does not need to be stored and can be calculated on the fly. It is of fixed size even for very large volumes (equivalent to e.g. 512³ or 1024³), in which cases its size becomes negligible. Compared with the RBF-based representation [33], we have regular structures suitable for texture synthesis. As demonstrated in Figs. 11 and 12, our local interpolation representation has much better color reproduction. There is no need to store the positions of control points, which further saves storage. The regularity also enables efficient direct solid texture synthesis and supports other applications such as instant editing propagation, as detailed later.

3.2. Representing region boundaries

If the given texture only contains gradual changes of color, the representation described in Section 3.1 is sufficient (e.g. Fig. 11). If the texture contains sharp boundaries that need to be preserved, a feature mask image is often used in texture synthesis as an additional component (besides color) to better preserve structures. Similar to previous work on both 2D and 3D textures [17,33], we assume regions can be separated using a binary mask. To represent the boundary in the solid textures, we also use a signed distance field stored at the same regular n × n × n grid. We store both the signed distance D and its gradients $\frac{\partial D}{\partial u}, \frac{\partial D}{\partial v}, \frac{\partial D}{\partial w}$, and use the same tricubic interpolation as in Section 3.1 to calculate the interpolated signed distance $\tilde{D}$ at each voxel. The sign of $\tilde{D}$ indicates which side of the regions in the binary mask the voxel belongs to. Different from [33], gradients are stored in addition to the distance; thus we process the distance field consistently with colors and represent region boundaries flexibly. For each control point adjacent to at least one cube with both positive and negative distances, besides the distance component (for which one version is sufficient), two feature vectors fP (positive distance) and fN (negative distance) and their gradients are stored. Any voxel with positive (or negative) distance is evaluated using the same interpolation as in Section 3.1 but with fP (or fN) and their gradients instead. This guarantees C¹ smoothness within each region while also allowing sharp boundaries between regions. Our gradient solid representation is easy to evaluate yet sufficient to represent various solid textures, as demonstrated in Section 5. Although the representation is more restrictive than gradient meshes in that control points are located at fixed positions, it allows more efficient evaluation and synthesis. The representation still bears the major properties of traditional vector representations, such as being resolution independent and more compact than raster solids.
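The two-sided evaluation can be sketched as follows: interpolate the signed distance first, then evaluate the color with the fP or fN coefficients depending on its sign. The coefficient tensors and names here are hypothetical, and `tricubic` is the same monomial evaluation as Eq. (1).

```python
import numpy as np

def eval_with_boundary(a_dist, a_pos, a_neg, u, v, w):
    """Pick the color side from the sign of the interpolated distance.

    a_dist, a_pos, a_neg are hypothetical per-cube tricubic coefficient
    tensors for the signed distance D and the two color features fP / fN,
    each shaped (4, 4, 4, C); tricubic() evaluates Eq. (1).
    """
    def tricubic(a):
        pu, pv, pw = (t ** np.arange(4) for t in (u, v, w))
        return np.einsum('i,j,k,ijkc->c', pu, pv, pw, a)

    d = tricubic(a_dist)[0]              # interpolated signed distance
    return tricubic(a_pos if d >= 0 else a_neg)
```

Because both sides are C¹ tricubic fields, the only discontinuity a viewer can see is the intended one along the zero level set of D.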

4. Gradient solid texture synthesis

Our algorithm synthesizes gradient solid textures directly from 2D exemplars, which may include optional binary masks (if sharp boundaries exist between regions). In addition, a smooth tensor field may be given to specify the local coordinate systems with which the exemplar images align [42]. We use an optimization based approach to synthesize gradient solid textures, with local patches aligned to the field if given.

4.1. Algorithm overview

Fig. 2. Algorithm pipeline of gradient solid texture synthesis (input 2D texture and optional tensor field; initialization; iterative patch matching and representation update; representation refinement; generated solid textures). The figure was previously published in [43] and is republished with permission by Springer.

The algorithm pipeline is summarized in Fig. 2 and involves several key steps: initialization, iterative optimization and final gradient solid refinement. Our gradient solid representation is first initialized based on the input exemplar. The synthesis is then carried out using a multi-resolution approach from coarse to fine. At each level, an optimization based approach is used that first identifies similar patches from the exemplar that best match the current gradient solid; the gradient solid is then updated based on the samples in those patches. An approximate but sufficiently fast evaluation of the vector solid representation is used in the intermediate stages. In the last stage, the accurate evaluation given in Eq. (1) is used to optimize the control points for the best approximation. We will also discuss techniques to ensure efficiency in both computation and storage. If a binary mask is given, we pre-compute a signed distance field for the image, with the absolute value at each pixel being the distance to the region boundary and different signs (positive or negative) for different regions. This signed distance is treated as an extra component of the feature vector f [17].
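The signed distance pre-computation for a binary mask can be sketched with SciPy's Euclidean distance transform. The sign convention (positive inside the masked region) is our assumption; the paper only specifies different signs for different regions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt as edt

def signed_distance(mask):
    """Signed distance to the region boundary of a binary mask.

    Positive inside the mask, negative outside; the magnitude is the
    Euclidean distance to the nearest pixel of the other region. This
    is a common construction, sketched here as one plausible reading
    of the pre-computation described above.
    """
    inside = edt(mask)      # distance of inside pixels to the outside
    outside = edt(~mask)    # distance of outside pixels to the inside
    return np.where(mask, inside, -outside)
```

The resulting field (and its finite-difference gradients) can then be sampled at the control points alongside the color channels.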

4.2. Initialization

We simply start from a randomized initialization. For each control point, we randomly select a pixel from the exemplar image and assign the feature vector of that pixel to the control point. All gradients are initialized to zero.
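A minimal sketch of this initialization, with our own array layout (a feature grid plus three zero gradient fields per control point):

```python
import numpy as np

def init_control_points(exemplar, n, rng=None):
    """Randomly initialize an n x n x n grid of control points.

    exemplar: (H, W, C) feature image (colors plus an optional signed
    distance channel). Each control point copies the feature vector of
    a random exemplar pixel; all gradients start at zero, as described
    in Section 4.2. Array shapes are our own choice.
    """
    rng = rng or np.random.default_rng()
    H, W, C = exemplar.shape
    ys = rng.integers(0, H, size=(n, n, n))
    xs = rng.integers(0, W, size=(n, n, n))
    f = exemplar[ys, xs]                  # (n, n, n, C) feature vectors
    grads = np.zeros((n, n, n, 3, C))     # df/du, df/dv, df/dw
    return f, grads
```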

4.3. Optimization-based synthesis

Optimization is the key step in our gradient solid texture synthesis pipeline. It iterates two alternating steps: choosing optimal patches from the exemplars that best match the current representation, and updating the representation to better approximate the exemplar patches. Unlike traditional texture optimization [14,13], we optimize the feature vectors at the control points of the gradient solids, a much more compact representation than voxels. New challenges arise from the different nature of the representation, which we address with various technical solutions. We apply N_O iterations at each synthesis level, and use a modified coarse-to-fine strategy detailed in Section 4.3.3. N_O = 3 is sufficient and is used for all the experiments in the paper.

4.3.1. Finding matched patches from exemplars

We first identify the local patches from the exemplars that best match the current gradient solid. These patches will then be used to improve the representation. Since gradient solids have much sparser control points than voxels, we randomly choose a small number N_C of check points within each cube of the grid (N_C = 3 provides a good balance and is used for all the examples in the paper). At each check point, we sample three orthogonal planes, each with N × N samples (denoted s_x, s_y and s_z respectively), which are evaluated based on our representation (as illustrated in Fig. 3). A fast approximate evaluation is used in intermediate synthesis to significantly improve performance without visually degrading quality (see Section 4.3.4).

Fig. 3. Illustration of crossbars (sampled slices s_x, s_y, s_z and matched exemplar patches E_x, E_y, E_z).

We then find three local patches from the exemplars that best match these sampled patches. If all three slices are equally important, we use three independent searches as in [13]. However, many practical solid textures are anisotropic, and it is not possible to keep all three slices well matched with a single exemplar image. In such cases, it is known that matching two slices instead of three may lead to better results [13]. We propose a new approach that takes crossbar consistency into account, which works best when two slices are matched. Crossbars are the voxels shared by two or three slices (see Fig. 3), and inconsistent crossbars may result from independent best-match searches. For computational efficiency, we first search for the patch E_x from the exemplars that best matches s_x, as usual. We then search for the patch E_y that best matches s_y from a set of N_1 candidates with the most consistent crossbar voxels with respect to E_x. If three slices are matched, we similarly search for the best match E_z of s_z from a set of N_2 candidates with the most consistent crossbars with respect to E_x and E_y. N_1 = 20 and N_2 = 50 are used for all the experiments in the paper. This leads to improved synthesis results with better structure preservation, which shows the importance of crossbar consistency, as demonstrated in Fig. 4. While crossbar matching has been used in correction-based synthesis [4], using it in optimization based synthesis is new.
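A simplified two-slice reading of this search can be sketched as below. We treat the crossbar as the center column shared by the x- and y-slice patches, score everything by sum of squared differences, and use brute-force search in place of the accelerated search; all of these simplifications are ours, not the paper's exact procedure.

```python
import numpy as np

def match_with_crossbar(sx, sy, patches, n1=20):
    """Crossbar-consistent patch search (simplified two-slice sketch).

    sx, sy: (N, N, C) sampled slice patches at a check point.
    patches: (M, N, N, C) candidate exemplar patches.
    E_x is the plain best match for sx. For s_y we first keep the n1
    candidates whose center column (standing in for the crossbar voxels
    shared with the x-slice) agrees best with E_x's center column, then
    pick the one closest to s_y in feature space.
    """
    N = sx.shape[0]
    flat = patches.reshape(len(patches), -1)
    ex = patches[np.argmin(((flat - sx.ravel()) ** 2).sum(1))]
    # Crossbar disagreement of every candidate against E_x's center column.
    bar = ((patches[:, :, N // 2] - ex[:, N // 2]) ** 2).sum(axis=(1, 2))
    cand = np.argsort(bar)[:n1]
    best = cand[np.argmin(((flat[cand] - sy.ravel()) ** 2).sum(1))]
    return ex, patches[best]
```

Constraining the E_y search to crossbar-consistent candidates is what keeps the shared voxels of the two matched slices from disagreeing.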

To speed up the computation, a PCA projection of the matching vectors is used [9], which effectively reduces the dimension from hundreds to 10–20 while keeping most of the energy. After this, the searches can be effectively accelerated with the ANN approximate nearest neighbor library.
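The PCA acceleration can be sketched as follows; we substitute a brute-force search in the reduced space for the ANN library, and the dimension and names are illustrative rather than the paper's.

```python
import numpy as np

def pca_project(vectors, dim=16):
    """Project patch matching vectors to `dim` dimensions with PCA.

    vectors: (M, D) matching vectors (D is typically hundreds). The
    projection keeps most of the energy, so nearest-neighbor search in
    the reduced space closely approximates search in the full space.
    """
    mean = vectors.mean(axis=0)
    centered = vectors - mean
    # Principal axes from the SVD of the centered data matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:dim]
    return centered @ basis.T, mean, basis

def nearest(query, projected, mean, basis):
    """Brute-force stand-in for the ANN search in the reduced space."""
    q = (query - mean) @ basis.T
    return int(np.argmin(((projected - q) ** 2).sum(axis=1)))
```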

4.3.2. Representation update

Each matched patch at every check point gives N × N samples, which are used to update the gradient solid representation. To efficiently collect samples, we conceptually build a bucket for each voxel in the grid that holds all the samples located in that voxel. After considering the check points in all the cubes, each bucket may end up with none or a few samples. For buckets with more than one sample, simply averaging all the samples in the bucket to determine the feature vector tends to produce blurred voxels. Previous methods [36,13] use mean shift clustering to avoid blurring, which is expensive, as all the samples in the buckets need to be kept and the clustering algorithm needs to be run many times. We propose two novel approaches to significantly improve efficiency.

Please cite this article in press as: G.-X. Zhang et al., Efficient synthesis of gradient solid textures, Graph. Models (2012), http://dx.doi.org/10.1016/j.gmod.2012.10.006

Fig. 4. Results without (left) and with (right) crossbar matching. The figure was previously published in [43] and is republished with permission by Springer.

Fig. 5. Illustration of bucket reuse.


Quantization. To avoid blurring without storing all the samples in each bucket, we propose a novel approach based on vector quantization. We preprocess the given exemplar to quantize the colors of all the pixels into NT clusters. A small NT (e.g. 12) is sufficient for practical textures. For a texture with a binary mask, we start from two clusters, one for the positive and one for the negative region, and iteratively allocate a new cluster to the region with the most significant average quantization error, until all NT clusters are allocated. We use a two-pass approach in the synthesis. In the first pass, for every bucket, only the number of samples belonging to each cluster is recorded. In the second pass, we compute the average feature vector only for those samples belonging to the two dominant clusters (those with maximum counts in the first pass). Since the dominant clusters are known before the second pass, whenever a sample is generated we test whether it should be included in the average. Only the sum and the number of samples need to be kept, which significantly saves storage. This avoids running a computationally expensive clustering algorithm for each voxel while still significantly reducing blurring, as demonstrated in various results in Section 5. In our experience, applying quantization at the finest level only is sufficient.
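The two-pass scheme can be sketched as follows (hypothetical helper names; a real implementation streams pass-1 counts and pass-2 sums instead of materializing the sample array):

```python
import numpy as np

def quantize_labels(samples, centers):
    # Assign each sample to its nearest quantization cluster.
    d = ((samples[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def bucket_average(samples, centers):
    # Two-pass bucket averaging: pass 1 counts samples per cluster, pass 2
    # averages only samples from the two dominant clusters, which avoids
    # blurring without storing all samples or running mean-shift clustering.
    labels = quantize_labels(samples, centers)
    counts = np.bincount(labels, minlength=len(centers))
    dominant = set(np.argsort(counts)[-2:])
    keep = np.array([l in dominant for l in labels])
    return samples[keep].mean(axis=0)
```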

Bucket reuse. Although conceptually the number of buckets is the same as the number of voxels, i.e. O(n³), we can significantly reduce the memory requirement by bucket reuse. We update our representation in the 3D scanline order of control points. Depending on the template size N, check points more than (√2/2)N voxels away will not produce any sample in the current bucket, where N/2 is half the template size and √2 is introduced due to rotation. As illustrated in 2D in Fig. 5, we keep track of two references in the dominant dimension (one of the three dimensions, chosen arbitrarily) that mark the boundaries of the open region (where new samples will be generated) and the closed region (where no more samples will be produced and we can safely update the gradient solid representation). If the two-pass algorithm in quantization is used, this buffer needs to be doubled, i.e. a span of up to 2√2·N in the dominant dimension is sufficient, so the memory cost is O(n²N). This is because either pass has an affected region as discussed, and the second pass relies on the results collected in the first pass. The required buffering space does not increase with more synthesis iterations, as buckets are cleared after each synthesis iteration and no further propagation as in [4] happens. Since N ≪ n and N is typically constant across examples, this effectively reduces the storage complexity from n³ to n², without any extra recomputation. This is possible because after each iteration of optimization only a very compact gradient solid representation is kept, while traditional solid texture synthesis requires the whole dense volume to be accessible. By using this technique, we can synthesize gradient solid textures corresponding to 1024³ voxels within 2 GB of memory, less than even storing the voxels alone.
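The sliding-window bookkeeping can be sketched as below (a hypothetical data structure, not the authors' code; `flush` stands for fitting the closed slices into the gradient solid and freeing them):

```python
import math

class BucketWindow:
    # Sliding window of bucket slices along the dominant dimension: only
    # O(n^2 * N) buckets are live at once.  Slices further than `span`
    # behind the frontier are "closed": flushed into the gradient solid
    # representation and freed.
    def __init__(self, template_n, flush):
        self.span = math.ceil(2 * math.sqrt(2) * template_n)
        self.slices = {}          # slice index -> accumulated samples
        self.frontier = 0
        self.flush = flush        # callback receiving (index, slice_data)

    def add(self, x, sample):
        self.slices.setdefault(x, []).append(sample)
        if x > self.frontier:
            self.frontier = x
            for xc in [k for k in self.slices if k < x - self.span]:
                self.flush(xc, self.slices.pop(xc))
```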

After obtaining the average feature vector for any bucket with at least one sample, we assign each non-empty bucket to the closest control point. The feature vector as well as the gradients of the control point are updated by minimizing the fitting error in the least-squares sense. For a particular control point, assume s buckets are related, with relative coordinates du_t, dv_t, dw_t and feature vectors f_t (1 ≤ t ≤ s); we find f^c, f^c_{du}, f^c_{dv}, f^c_{dw} that minimize

E_C = \sum_{t=1}^{s} \left\| f^c + f^c_{du}\, du_t + f^c_{dv}\, dv_t + f^c_{dw}\, dw_t - f_t \right\|^2.   (2)

This can be considered as a local first-order Taylor expansion of our representation, and the minimization can be solved efficiently via small linear systems. This approximation is sufficient for intermediate computation, and we optionally use the accurate evaluation in the final stage.
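Per control point, the fit of Eq. (2) is a tiny least-squares problem with four unknown vectors; a sketch (assumed naming, using a dense solver for the small system):

```python
import numpy as np

def fit_control_point(offsets, features):
    # Least-squares fit of Eq. (2): find f_c and its three gradients so that
    # f_c + f_du*du + f_dv*dv + f_dw*dw best matches the bucket features.
    # offsets: (s, 3) relative coordinates; features: (s, C) feature vectors.
    A = np.hstack([np.ones((len(offsets), 1)), offsets])   # (s, 4) design matrix
    X, *_ = np.linalg.lstsq(A, features, rcond=None)       # (4, C) solution
    f_c, f_du, f_dv, f_dw = X
    return f_c, f_du, f_dv, f_dw
```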

4.3.3. Multi-resolution synthesis

To capture features at multiple scales, a multi-resolution approach is also used in our algorithm. However, since a sparse control grid is used, reducing the resolution of the grid is not feasible, as it would be too coarse at low resolutions to effectively capture details. Instead, inspired by fractional sampling [16], at each successive coarser level we keep the resolution of the control grid unchanged and double the spacing between sample pixels in the exemplar image and voxels in 3D space. From coarse to fine we use three levels of synthesis with N = 9, 11, 21, respectively. The finest level uses a significantly larger neighborhood in order to cover at least a few control points in our sparse representation.

4.3.4. Fast approximate evaluation

Our gradient solid representation is relatively easy to evaluate; however, the solid texture synthesis process requires many evaluations. We suggest two approximations for improved performance. In the intermediate synthesis process, instead of evaluating accurate values at each sample point, we use a first-order Taylor expansion as an approximation. For any point p whose closest control point is c, with feature vector f^c and gradients f^c_{du}, f^c_{dv}, f^c_{dw}, the approximate feature vector at p with relative coordinates du_p, dv_p, dw_p is evaluated as f_p = f^c + f^c_{du} du_p + f^c_{dv} dv_p + f^c_{dw} dw_p. This approximation does not ensure smoothness, but involves only 3 multiplications and 3 additions per component of the feature vector, and thus takes only about 1/10 of the computation of a full evaluation. In the iterative synthesis, another approximation is to ignore the region-based calculation given in Section 3.2 (as if there were no separate regions, as in the single-channel case such as Fig. 6 (middle)). This may mix up voxels of different regions within the same cube and would visually degrade the final results; the impact on intermediate synthesis, however, is negligible, as it is restricted to a couple of voxels due to the cube size.

4.4. Gradient solid representation refinement

As the final step, we further optimize the gradient solid representation to better represent the synthesized gradient solids.

Region separation. For solids with smooth variation of colors (e.g. Fig. 11), our algorithm does not require a binary mask as input and can effectively reproduce the solids with a single region. For solids with sharp region boundaries that need to be preserved, we differentiate regions with

Fig. 6. Comparison of results using direct upscaling (left) and our algorithm without (middle) and with (right) region separation.


positive and negative signed distances for the computation of control point parameters described in Section 4.3.2. For each control point, we compute positive parameters (f^P and gradients) using samples with positive signed distance. Similarly, samples with negative signed distance contribute to negative parameters (f^N and gradients). To improve reliability in the fitting of boundary control points, we propagate boundary samples (samples whose neighbors have different signs of distance) into the nearby space, similar to dilation in mathematical morphology. This mainly ensures that cubes near region boundaries have sufficient samples to make the fitting reliable.

Control point optimization. To further improve quality, instead of fitting with the first-order approximation, we can also minimize the fitting error between the sample values and those interpolated using Eq. (1) over all samples. For a sample point with sampled feature vector f_i, located in cube c_i with corner control points collected as V_i and parameters (u_i, v_i, w_i), the evaluated feature vector \tilde{f}_i is a linear function of V_i, denoted as f(V_i; u_i, v_i, w_i). We minimize the following quadratic energy:

E_C = \sum_{i=1}^{N_S} \| \tilde{f}_i - f_i \|^2 = \sum_{i=1}^{N_S} \| f(V_i; u_i, v_i, w_i) - f_i \|^2,   (3)

where N_S is the number of sample points. Minimization of E_C leads to a sparse linear system. As we have a good estimate from the previous approximation, the linear system can be solved effectively in a few iterations. As demonstrated in Table 1, control point optimization reduces the approximation error but also takes some extra time. Our method without this optimization is sufficiently good in many cases, so it is offered as an option to trade off quality against speed.
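Since f(V_i; u_i, v_i, w_i) is linear in the control parameters, minimizing E_C is a linear least-squares problem. A toy dense, single-channel version (our naming; the real system is sparse, with one block of weights per cube, and is solved iteratively from the first-order estimate):

```python
import numpy as np

def refine_controls(weights, samples):
    # weights[i, j] = interpolation weight of control parameter j for sample i,
    # i.e. the coefficients of the linear map f(V_i; u_i, v_i, w_i).
    # Minimizing sum_i (weights[i] @ V - samples[i])^2 is linear least squares.
    V, *_ = np.linalg.lstsq(weights, samples, rcond=None)
    return V
```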

4.5. Instant solid editing

Editing propagation often takes a sparse set of user inputs as constraints and extends them to similar regions, avoiding otherwise labor-intensive procedures. Editing propagation has been studied for image/video processing (e.g. [39,19]), Bidirectional Texture Function editing (e.g. [40]), etc. Similarly, 3D solids are expensive to store and time-consuming to edit. We achieve instant solid editing by adapting a recent development [19] for images and videos. Alternative approaches for texture editing may involve texture classification (e.g. [38]) to identify similar


Table 1
Statistics of solid vectorization results.

Example                 Error                                     Time
                        Wang's   Ours w/o opt.   Ours w/ opt.    Wang's        Ours w/o opt. (s)   Ours w/ opt. (s)
Fig. 11 ('caustics')    6.14     2.62            1.50            8 min 25 s    1.08                5.10
Fig. 12 ('balls')       8.59     5.83            4.21            24 min 21 s   2.67                11.46


patterns. In this work, we restrict the propagation to color similarity and location closeness, which is much more efficient and thus suitable for solid textures, and more robust as no classification is needed. In a typical scenario, the user first draws a few strokes with different intensities indicating how strongly the selected voxels will be affected by the editing. The user then selects a reference color, and voxels are affected based on the similarity of their position and appearance (color) to those with user specifications. While the editing in [19] is generally efficient, dealing with large volumes is still relatively slow. Worse still, if the volume is in some vector representation, naive application of this method involves converting to a raster representation before editing and back to a vector representation afterwards. We show below that our adaptation of the editing algorithm is instant with a virtually equivalent solution; this cannot be achieved with Wang's representation.

For each control point i with color c_i = (r, g, b)^T and position p_i = (x, y, z)^T, we need to know the influence h_i. This is effectively modeled with m RBFs, whose centers are randomly selected from the stroke voxels:

h_i = \sum_{k=1}^{m} \omega_k h_{i,k} = \sum_{k=1}^{m} \omega_k \exp\left\{ -\alpha \left( \beta |p_i - \bar{p}_k|^2 + |c_i - \bar{c}_k|^2 \right) \right\},   (4)

where \bar{p}_k and \bar{c}_k are the position and color of the k-th stroke voxel selected as an RBF center. The weights \omega_k, restricted to be non-negative, can be obtained by solving a linear programming problem that minimizes the strength deviation for user-specified voxels [19]. Parameters \alpha and \beta control the propagation; \alpha = 10^{-4} and \beta = 0.1 work well in many cases. Assuming the reference color is c_{ref}, to compute the edited gradient solids, c_i and dc_i/dp_i need to be updated for each control point, which can be calculated effectively as follows. We define c'_i = (1 - h_i) c_i + h_i c_{ref}, and thus we have

\frac{dc'_i}{dp_i} = (1 - h_i) \frac{dc_i}{dp_i} - (c_i - c_{ref}) \left( \frac{dh_i}{dp_i} \right)^T,   (5)

where

\frac{dh_i}{dp_i} = -\sum_{k=1}^{m} 2\alpha\, \omega_k h_{i,k} \left\{ \beta (p_i - \bar{p}_k) + \left( \frac{dc_i}{dp_i} \right)^T (c_i - \bar{c}_k) \right\}.   (6)
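Eqs. (4) and (5) reduce to a few vector operations per control point; a sketch of the influence and color update (our naming; the nonnegative weights ω_k come from the linear program in [19]):

```python
import numpy as np

def influence(p, c, centers_p, centers_c, w, alpha=1e-4, beta=0.1):
    # Eq. (4): influence h_i at a control point with position p and color c,
    # as a weighted sum of m RBFs centered at selected stroke voxels.
    d = beta * ((p - centers_p) ** 2).sum(axis=1) + ((c - centers_c) ** 2).sum(axis=1)
    return float(w @ np.exp(-alpha * d))

def edited_color(c, h, c_ref):
    # Blend toward the reference color according to the influence (Eq. (5),
    # value part): c' = (1 - h) c + h c_ref.
    return (1.0 - h) * c + h * c_ref
```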

The editing is demonstrated in Fig. 1, where a fish is turned purple instantly. A few strokes are drawn on the fish object to indicate the effect of the change, and a purplish color sample is chosen (as in the box). Another example is in Fig. 9, where the 'dinopet' is turned pink instantly with a few strokes and a pink reference color (as in the box). The editing algorithm


only takes about 0.1 s, providing instant feedback on such large volumes. In comparison, direct application to a raster solid of equivalent resolution takes about 1 s, and a naive implementation on vector solids takes a few minutes.

5. Results and discussions

Our algorithm is useful either for direct synthesis of solid textures or for vectorizing input solids. We carried out our experiments on a computer with a 2 × 2.26 GHz quad-core CPU and an NVIDIA GTS 450 GPU. Our algorithm involves a few parameters for the various stages of the pipeline. We used the following settings for the experiments in the paper: grid size d = 4, number of iterations N_O = 3, number of check points N_C = 3, number of quantization clusters N_T = 12, numbers of crossbar matching candidates N_1 = 20 and N_2 = 50, neighborhood sizes for the three levels N = 9, 11, 21, and editing propagation parameters α = 10⁻⁴, β = 0.1.

5.1. Solid texture synthesis

Our algorithm directly synthesizes compact, resolution-independent gradient solid textures from 2D exemplars. Solids with quality comparable to the state of the art can be synthesized, as shown in Figs. 1, 4–8 and 10. For other CPU-based algorithms that synthesize full solids of a 128³ cube, typical reported times are tens of minutes, e.g. [13] uses 10–90 min (without tensor fields) and [21] (a CPU-based implementation similar to [4] with direction fields considered) reports about 30 min on a single core. Our results are vector solid textures, which are resolution-independent. For simplicity, when a resolution is referred to in the following discussion, we mean solid textures with detail resolution equivalent to raster solid textures of that resolution. Our current implementation, after about 10 s of preprocessing of the input exemplar (which is the same for arbitrarily sized output volumes), takes only 13 s. Even accounting for differing CPU performance, our algorithm is over 10 times faster. Due to the compactness of the representation and the memory reuse technique, we can synthesize high-resolution solid textures in full: 512³ solids can be synthesized within 15 min (Fig. 8). Other examples throughout the paper with about 512 samples in the longest dimension take 3–7 min, while the example in Fig. 10 at a resolution of 1024 takes 35 min and less than 900 MB of memory. Region separation is not needed if the input texture does not contain sharp boundaries, as in the 'vase' and 'tree' examples in Fig. 9. In these examples, the binary mask is used only as part of the feature vector, not for region separation. The 'tree' example shows that our synthesis algorithm can be


Fig. 7. Synthesized object without (a) and with the field (c); the field is given in (b).

Fig. 8. Synthesized solids without fields. First row: tiled low-resolution (128³) solids. Second row: high-resolution (512³) solids.


generalized to synthesize solids with different exemplars covering different spaces, mimicking the real structure of a tree.


We demonstrate the effectiveness of our algorithm with various examples. Although our method uses a rather sparse set of control points, they are much more expressive


Fig. 9. Synthesized high-resolution solids (about 512 samples in the longest dimension) following given directional fields with our algorithm: 'vase', 'horse', 'tree' and 'dinopet' with synthesized solids, close-ups and internal slices. 'Dinopet' is turned pink with instant editing. Part of the figure was previously published in [43] and is republished with permission by Springer.


than voxels at the same resolution. An example is given in Fig. 6. The left result is synthesized with [42] (using a proportionally downsized exemplar image as input) and looks sensible at the original 32³ resolution. We use tricubic interpolation to upscale the volume to 128³, and clear artifacts appear, indicating that a 32³ volume is not sufficient to capture the structure of the solid. Our result, also with 32³ control points, is significantly better, and sharp region boundaries can be recovered with region separation. Tiling small cubes, e.g. of 128³ size, to cover the whole space is commonly used, due to the prohibitively expensive computation with most previous algorithms. Synthesizing high-resolution solids is essential to avoid visual repetition (as demonstrated by 'table', 'cake' and 'statue' in Fig. 8) or to produce solids following certain direction fields (see Fig. 9). A comparison of results without and with a field is given in Fig. 7. High-resolution solid textures with 512 and 1024 samples in the longest dimension are shown in Figs. 1 and 10, respectively. Note that in all the results we synthesize the full solids rather than only the visible voxels [4,42]. This is preferred since in many applications objects are synthesized once but rendered many times on lower-end systems. Our representation makes the rendering algorithm both efficient and simple to implement.

Fig. 10. Synthesized high-resolution solid texture (1024 samples in the longest dimension) with a field. (a) Input user-specified tensor field; (b) synthesized solid texture; (c) close-up; and (d) internal slices.


5.2. Vectorization of solid textures

Our approach can also be used for solid texture vectorization. In this application, we take each voxel as a sample and produce gradient solids with the method in Section 4.4 if optimization is used, or otherwise with the first-order approximation of Section 4.3.4. We performed comparative experiments on the same computer, using the code directly from [33]. 5000 RBFs are used to provide sufficient flexibility, more than for most examples in [33], for a fair comparison. Although our algorithm is highly parallel, we use only a single core, again for fairness. Detailed running times and fitting errors are given in Table 1. Whilst pixel-wise error measured before and after vectorization may not be the best perceptual criterion, it is widely used in image vectorization. For most solids suitable for vectorization, our method produces results with lower per-pixel error and avoids the spotty artifacts caused by the use of RBFs. Although RBFs seem more flexible, unless a (potentially impractically) large number of RBFs is used for relatively complicated input, the radial bases show through as spotty artifacts and large approximation errors result. We also experimented with varying the number of RBFs from 3000 to 5000, but the approximation errors in our experiments only


Fig. 11. Solid vectorization of the input volume 'caustic' without a binary mask. (a) Input volume; (b) and (c) our results without and with further optimization; and (d) result using [33].

Fig. 12. Solid vectorization results with a binary mask. (a) Input volume 'balls'; (b) input volume rendered in transparency; (c) input mask; (d) vectorized solid with our algorithm without optimization; (e) our result with optimization; and (f) result using [33].


drop marginally. Wang’s algorithm may also get stuck atsuboptimal solutions due to the highly non-linear nature.Our vectorization does not suffer from these problemsand is much simpler to optimize as only sparse linear sys-tems need to be solved.

Our method without control point optimization is on average 500 times faster and has much lower reconstruction error and better color reproduction than [33], as shown in Figs. 11 and 12 as well as Table 1. If the optional control point optimization is used, the error can be further reduced at a small cost. This shows that we currently achieve interactive performance for vectorization of moderately sized volumes. Direct synthesis of gradient solid textures requires many rounds of intermediate vectorization and evaluation and would become impractically slow without this speedup. Since the algorithm is highly parallel, a GPU-based implementation may further improve the speed.

We use regions to represent sharp boundaries (Fig. 12), but our method can also deal with input solids that cannot be naturally separated into multiple regions (see Fig. 11). In


this case, no binary mask image needs to be provided. A more thorough evaluation on the whole dataset provided by [13], with 21 solid textures, shows that for more than 75% of the examples, especially those more suitable for vectorization (with lower approximation error for both methods), our method outperforms [33] in fitting error (see the accompanying supplementary material for detailed statistics).

We quantize each value with 8 bits, and our representation, without further careful coding, takes only 6.5% (without region separation) or 15% (with region separation) of the storage of the voxel solids, while 17–26% is reported in [33]. The size of the look-up table is not counted because it does not depend on the input and thus does not need to be stored in external files, and it is small enough to be kept on current graphics cards without any problem.

5.3. Rendering

While our current synthesis implementation is CPU-based, gradient solids are rendered efficiently with


Fig. 13. An example where a single distance field is not sufficient to fully recover sharp boundaries. (a) Input solid; (b) vectorized solid with a single distance field; (c) close-up of (b); and (d) vectorized solid with an additional distance field to recover sharp boundaries (close-up).


commodity GPUs. For each visible pixel, we obtain the interpolated texture coordinate using the vertex shader and evaluate the color with Eq. (1) in the fragment shader; the colors and gradients at control points are stored as textures for efficient GPU access. The color at any continuous position is calculated by linear interpolation of entries in the look-up table described in Section 3 through hardware-supported texture fetches, a commonly used technique for real-time rendering. For solid textures with binary masks, the relevant set of feature vectors is selected based on the evaluated signed distance. This is both efficient and accurate, as exact values are obtained at 8³ times higher resolution than raster solids. The linear interpolation is accurate at 1/8 voxel resolution and is sufficiently close to the real function that no visible artifact is produced, even under extreme magnification. In most practical applications, a precomputed look-up table at 1/4 voxel resolution is sufficient, which leads to a look-up table with 17³ × 32 entries taking less than 0.6 MB of storage. To avoid jagged boundaries when gradient solids with two regions are rasterized, we use an antialiasing technique similar to that in [33]: for pixels close to boundaries, the colors evaluated for the positive and negative regions are linearly blended.

Our representation has real-time rendering performance similar to [33]. For a fair comparison, in the performance measurement we disabled mipmapping for [33] and enabled antialiasing for both methods. For a 128³ solid with a mesh containing 70 K vertices rendered at 1024 × 768 resolution, our average frame rate is 80 fps, while the rendering algorithm from [33] achieves on average 75 fps. The high-resolution solid textures in this paper render at 30–60 fps; the slightly lower frame rates are due to the relatively complicated geometry and large textures with lower cache performance.

5.4. Discussions and limitations

Although we can represent sharp boundaries with regions, like Wang et al. [33] we use a single distance field and thus cannot in general recover sharp boundaries where more than two regions touch. An example is given in Fig. 13. The input solid (a) can be vectorized with our algorithm, producing the reconstructed solid (b) with close-up (c). Sharp boundaries between triangles cannot be preserved with the single binary mask. Compared with [33],


our blurring effects are much more local. If such blur is not acceptable, our algorithm can be augmented with another distance field to separate adjacent triangle pairs, as shown in (d) (a close-up view).

Another limitation is that although our fitting error is usually lower than that of Wang et al. [33] for typical input, fine details within a region may not be fully reproduced; this, however, is a limitation of virtually all vectorization methods. To simulate fine texture details without excessive storage, the approximation error at any position is modeled as a Gaussian distribution. Assume that for each position x and an arbitrary channel c (r, g or b), with sample pixel value p_c(x) and corresponding value \tilde{p}_c(x) reconstructed from the vector representation, the residual r_c(x) = p_c(x) - \tilde{p}_c(x) follows a Gaussian distribution with probability

p(r_c(x) = y) = G(0, \sigma_c(x)) = \frac{1}{\sqrt{2\pi \sigma_c(x)^2}} \exp\left\{ -\frac{y^2}{2\sigma_c(x)^2} \right\},   (7)

where \sigma_c(x) is the standard deviation and y an arbitrary value. We optimize \sigma_c(x) such that p(r_c(x) = p_c(x) - \tilde{p}_c(x)) is maximized, which works out as \sigma_c(x) = |p_c(x) - \tilde{p}_c(x)|. For efficiency, \sigma_c is also compactly represented using our vector representation, treated as an additional channel. When rendering at any position, the residual r is randomly sampled from the distribution. To ensure consistent results, a position-determined hash function, as in Perlin noise [27], is used. With similar look-up-table-based GPU acceleration, the extra computation is efficient, keeping the rendering algorithm real-time (the current implementation runs at 30–50% of the original fps). An example is shown in Fig. 14, where richer details are recovered without losing the benefits of the vector representation, such as resolution independence.
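A sketch of this detail model (the maximum-likelihood estimate of Eq. (7) plus a position-determined hash; the hash construction here is illustrative, not the Perlin hash):

```python
import numpy as np

def residual_sigma(p, p_tilde):
    # Maximum-likelihood std-dev per Eq. (7): sigma_c(x) = |p_c(x) - p~_c(x)|.
    return np.abs(p - p_tilde)

def hashed_residual(pos, sigma, channel):
    # Position-determined hash so that the sampled residual is identical
    # every time the same position is rendered (hash choice is illustrative).
    h = hash((int(pos[0]), int(pos[1]), int(pos[2]), channel)) % 10000
    rng = np.random.default_rng(h)
    return float(rng.normal(0.0, sigma))
```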

As a method producing vectorized solid textures, our method is not suitable for all textures. Even with noise modeling, for textures with a large amount of high-frequency detail, the method may not reproduce such textures well in the synthesized solids, as shown in Fig. 15. Nevertheless, we have demonstrated that our method works well on a variety of textures throughout the paper. Our representation is particularly suitable for solid textures with dominantly smooth color variation within each homogeneous region, as assumed by virtually all the


Fig. 14. Vector solid textures without (left) and with added details (right).

Fig. 15. Our method may not perform well on exemplar images with significant high-frequency details. Left: input exemplar image; right: synthesized solid textures.


vectorization methods. Even when textures contain textons at varying scales, reasonable synthesis results can be achieved as long as they do not have significant high-frequency details, as demonstrated in Fig. 8, where textures contain elementary pieces of different sizes. A regular grid is used for simplicity, which may not be very efficient if the level of detail changes dramatically over the volume; adaptive sampling may alleviate this.

The instant editing algorithm in this paper does not consider the texton structures of the solid textures and thus may not provide semantically coherent editing results. Within our general framework, this could be achieved with texton analysis, which we expect to explore in the future.

6. Conclusions and future work

In this paper, we propose a novel gradient solid representation for compactly representing solids. We also propose an efficient algorithm for direct synthesis of gradient solid textures from 2D exemplars. Our algorithm is very efficient in both computation and storage compared with previous voxel-level solid texture synthesis methods, and thus allows high-resolution solid textures to be synthesized in full. The algorithm can be generalized to take 3D solids as exemplars, which will also benefit from the compactness of our representation. The representation is also potentially useful for accelerating volume processing. We have demonstrated instant editing of large volumes, and we would like to explore other applications such as efficient volumetric rendering and manipulation of (solid) textures (e.g. [6,20]) in the future. Our current implementation of the synthesis algorithm is purely CPU-based. The algorithm is highly parallel, and we expect a GPU implementation to further improve the performance. Our rendering implementation can be further augmented with mipmapping for adaptive scaling (especially minification) and with texture composition to produce richer fractal-like boundaries, using techniques similar to those in [33]. The instant solid editing algorithm could be improved for more semantically meaningful editing by taking into account texton structures.

Acknowledgments

This work was supported by the National Basic Research Project of China (Project Number 2012CB316400), the Natural Science Foundation of China (Project Numbers 61120106007 and 61170153), the National High Technology Research and Development Program of China (Project Number 2011AA010503) and the National Science and Technology Key Projects of China (2011ZX01042-001-002).

Appendix A. Supplementary material

Supplementary data associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/j.gmod.2012.10.006.

References

[1] W. Barrett, A.S. Cheney, Object-based image editing, ACM Trans.Graph. 21 (3) (2002) 777–784.

[2] Nathan A. Carr, John C. Hart, Meshed atlases for real-time proceduralsolid texturing, ACM Trans. Graph. 21 (2) (2002) 106–131.

[3] J. Chen, B. Wang, High quality solid texture synthesis using positionand index histogram matching, Visual Comput. 26 (4) (2010) 253–262.

[4] Yue Dong, Sylvain Lefebvre, Xin Tong, George Drettakis, Lazy solidtexture synthesis, Comput. Graph. Forum 27 (4) (2008) 1165–1174.

[5] Song-Pei Du, Shi-Min Hu, Ralph R. Martin, Semi-regular solidtexturing from 2D exemplars, IEEE Trans. Vis. Comput. Graph. (inpress), http://dx.doi.org/10.1109/TVCG.2012.129.

[6] Hui Fang, John C. Hart, Textureshop: texture synthesis as aphotograph editing tool, in: ACM SIGGRAPH, 2004, pp. 354–359.

[7] James Ferguson, Multivariable curve interpolation, J. ACM 11 (2)(1964) 221–228.

[8] D.J. Heeger, J.R. Bergen, Pyramid-based texture analysis/synthesis,in: Proc. ACM SIGGRAPH, 1995, pp. 229–238.

[9] A. Hertzmann, C.E. Jacobs, N. Oliver, B. Curless, D.H. Salesin, Imageanalogies, in: Proc. ACM SIGGRAPH, 2001, pp. 327–340.

[10] R. Jagnow, J. Dorsey, H. Rushmeier, Stereological techniques for solidtextures, in: Proc. ACM SIGGRAPH, 2004, pp. 329–335.


[11] R. Jagnow, J. Dorsey, H. Rushmeier, Evaluation of methods for approximating shapes used to synthesize 3D solid textures, ACM Trans. Appl. Perception 4 (4) (2008) (article 24).

[12] Arie Kadosh, Daniel Cohen-Or, Roni Yagel, Tricubic interpolation of discrete surfaces for binary volumes, IEEE Trans. Vis. Comput. Graph. 9 (4) (2003) 580–586.

[13] Johannes Kopf, Chi-Wing Fu, Daniel Cohen-Or, Oliver Deussen, Dani Lischinski, Tien-Tsin Wong, Solid texture synthesis from 2D exemplars, ACM Trans. Graph. 26 (3) (2007) (article 2).

[14] Vivek Kwatra, Irfan Essa, Aaron Bobick, Nipun Kwatra, Texture optimization for example-based synthesis, ACM Trans. Graph. 24 (3) (2005) 795–802.

[15] Yu-Kun Lai, Shi-Min Hu, Ralph R. Martin, Automatic and topology-preserving gradient mesh generation for image vectorization, ACM Trans. Graph. 28 (3) (2009) (article 85).

[16] Sylvain Lefebvre, Hugues Hoppe, Parallel controllable texture synthesis, ACM Trans. Graph. 24 (3) (2005) 777–786.

[17] Sylvain Lefebvre, Hugues Hoppe, Appearance-space texture synthesis, ACM Trans. Graph. 25 (2006) 541–548.

[18] F. Lekien, J. Marsden, Tricubic interpolation in three dimensions, J. Numer. Methods Eng. 63 (2005) 455–471.

[19] Yong Li, Tao Ju, Shi-Min Hu, Instant propagation of sparse edits on images and videos, Comput. Graph. Forum 29 (7) (2010) 2049–2054.

[20] Jianye Lu, A.S. Georghiades, A. Glaser, H. Wu, L.-Y. Wei, B. Guo, J. Dorsey, H. Rushmeier, Context-aware textures, ACM Trans. Graph. 26 (1) (2007) (article 3).

[21] C. Ma, L.-Y. Wei, B. Guo, K. Zhou, Motion field texture synthesis, ACM Trans. Graph. 28 (5) (2009) (article 110).

[22] Paul Ning, Lambertus Hesselink, Fast volume rendering of compressed data, in: Proc. IEEE Visualization, 1993, pp. 11–18.

[23] A. Orzan, A. Bousseau, H. Winnemöller, P. Barla, J. Thollot, D. Salesin, Diffusion curves: a vector representation for smooth-shaded images, ACM Trans. Graph. 27 (3) (2008) (article 92).

[24] Bin Pan, Fan Zhong, Shuai Wang, Wei Chen, Qunsheng Peng, Salient structural elements based texture synthesis, Sci. China Inform. Sci. 54 (6) (2011) 1199–1206.

[25] Darko Pavic, Leif Kobbelt, Two-colored pixels, Comput. Graph. Forum 29 (2) (2010) 743–752.

[26] D.R. Peachey, Solid texturing of complex surfaces, in: Proc. ACM SIGGRAPH, 1985, pp. 279–286.

[27] K. Perlin, An image synthesizer, in: Proc. ACM SIGGRAPH, 1985, pp. 287–296.

Please cite this article in press as: G.-X. Zhang et al., Efficient synthesis of gradient solid textures, Graph. Models (2012), http://dx.doi.org/10.1016/j.gmod.2012.10.006

[28] N. Pietroni, P. Cignoni, M.A. Otaduy, R. Scopigno, Solid-texture synthesis: a survey, IEEE Comput. Graph. Appl. 30 (4) (2010) 74–89.

[29] J. Sun, L. Liang, F. Wen, H.-Y. Shum, Image vectorization using optimized gradient meshes, ACM Trans. Graph. 26 (3) (2007) (article 11).

[30] Kenshi Takayama, Makoto Okabe, Takashi Ijiri, Takeo Igarashi, Lapped solid textures: filling a model with anisotropic textures, ACM Trans. Graph. 27 (3) (2008) (article 53).

[31] Kenshi Takayama, Olga Sorkine, Andrew Nealen, Takeo Igarashi, Volumetric modeling with diffusion surfaces, ACM Trans. Graph. 29 (6) (2010) (article 180).

[32] J. Tumblin, P. Choudhury, Bixels: picture samples with sharp embedded boundaries, in: Proc. Eurographics Symposium on Rendering, 2004, pp. 186–196.

[33] Lvdi Wang, Kun Zhou, Yizhou Yu, Baining Guo, Vector solid textures, in: Proc. ACM SIGGRAPH, 2010 (article 86).

[34] L.-Y. Wei, Texture synthesis from multiple sources, in: SIGGRAPH 2003 Sketch, 2003.

[35] L.-Y. Wei, S. Lefebvre, V. Kwatra, G. Turk, State of the art in example-based texture synthesis, in: Eurographics State-of-the-Art Report, 2009.

[36] Y. Wexler, E. Shechtman, M. Irani, Space-time completion of video, IEEE Trans. PAMI 29 (3) (2007) 463–476.

[37] T. Xia, B. Liao, Y. Yu, Patch-based image vectorization with automatic curvilinear feature alignment, ACM Trans. Graph. 28 (5) (2009) (article 115).

[38] Tian Xia, Qing Wu, Chun Chen, Yizhou Yu, Lazy texture selection based on active learning, Visual Comput. 26 (3) (2010) 157–169.

[39] Kun Xu, Yong Li, Tao Ju, Shi-Min Hu, Tian-Qiang Liu, Efficient affinity-based edit propagation using k-d tree, ACM Trans. Graph. 28 (5) (2009) 118:1–118:6.

[40] Kun Xu, Jiaping Wang, Xin Tong, Shi-Min Hu, Baining Guo, Edit propagation on bidirectional texture functions, Comput. Graph. Forum 28 (7) (2009) 1871–1877.

[41] Boon-Lock Yeo, Bede Liu, Volume rendering of DCT-based compressed 3D scalar data, IEEE Trans. Vis. Comput. Graph. 1 (1) (1995) 29–43.

[42] Guo-Xin Zhang, Song-Pei Du, Yu-Kun Lai, Tianyun Ni, Shi-Min Hu, Sketch guided solid texturing, Graph. Models 73 (3) (2011) 59–73.

[43] Guo-Xin Zhang, Yu-Kun Lai, Shi-Min Hu, Efficient solid texture synthesis using gradient solids, Lect. Notes Comput. Sci. 7633 (2012) 67–74.

