
The Visual Computer manuscript No. (will be inserted by the editor)

Parag Chaudhuri · Prem Kalra · Subhashis Banerjee

Reusing View-Dependent Animation

Abstract In this paper we present techniques for reusing view-dependent animation. First, we provide a framework for representing view-dependent animations. We formulate the concept of a view space - the space formed by the key views and their associated character poses. Tracing a path on the view space generates the corresponding view-dependent animation in real time. We then demonstrate that the framework can be used to synthesize new stylized animations by reusing view-dependent animations. We present three types of novel reuse techniques. In the first we show how to animate multiple characters from the same view space. Next, we show how to animate multiple characters from multiple view spaces. We use this technique to animate a crowd of characters. Finally, we draw inspiration from cubist paintings and create their view-dependent analogues, by using different cameras to control different body parts of the same character.

Keywords view-dependent character animation · stylized animation · animation reuse

1 Introduction

Reuse of previously created animation to synthesize new animation is a very attractive alternative to creating animation from scratch. Reuse of stylized animation, however, is a very challenging problem because the stylizations are often generated for a particular viewpoint. View-dependent animation allows us to overcome this limitation.

View-dependent animation is a technique of stylized animation which captures the association between the camera and the character pose. Since the character's action depends on the view, changing the viewpoint generates a view-dependent instance of the character. These can be reused to synthesize new animation. We show that view-dependent variations can be reused to animate multiple instances of the

Department of Computer Science and Engineering
Indian Institute of Technology Delhi
Hauz Khas, New Delhi - 110016, India.
E-mail: {parag, pkalra, suban}@cse.iitd.ernet.in

same character, a group of different characters, and even different body parts of the same character.

The view-dependent approach, however, demands that we define a formal representation of the camera-character pose association. We introduce the concept of a view space, defined by the key views and associated key character poses, which captures all the information required to produce a view-dependent animation. The framework allows the creation of a view-dependent animation in real time, whenever the animator traces out a new camera path on the view space. The animator can explore all the view-dependent variations quickly.

We present three broad classes of reuse methods. In the first we show how to animate multiple characters from the same view space. Next, we show how to animate multiple characters from multiple view spaces. We use this technique to animate a crowd of characters. Finally, we draw inspiration from cubist paintings and create their view-dependent analogues. We use different cameras to control different body parts of the same character. We combine the different body parts to form a single character in the final animation.

We begin by providing the background for our work, examining and comparing related techniques in Section 2. Next, we present our framework for view-dependent animation in Section 3. We then present our techniques for reusing view-dependent animations in Section 4. Section 5 concludes with a summary of the work done.

2 Background

We start by exploring the work that has been done toward capturing the relationship between the pose of the character and the view.

2.1 Associating the camera and the character pose

The idea of dependence of the character's geometry on the view direction was introduced by Rademacher [16] in his work on View-Dependent Geometry (VDG). In this work,


the animator manually matches the view direction and the shape of a base mesh model with the sketched poses of the character and creates a view-sphere. Tracing any camera path on this view-sphere generates the appropriate animation with view-dependent deformations. Chaudhuri et al. [6] present a system for doing view-dependent animation from sketches, using the VDG formulation. Our framework is more general and we demonstrate that it reduces to the VDG formulation as a special case. Martín et al. [14] use hierarchical extended non-linear transformations to produce observer dependent deformations in illustrations, capturing their expressive capabilities; however, they do not present any method for authoring such transformations to obtain a desired animation.

Since we reuse view-dependent animation to synthesize new animations, our work bridges the two themes of stylized animation and animation synthesis. We discuss the related work pertaining to these two areas in the next section.

2.2 Synthesis of stylized animation

Several artwork styles have been explored in the stylized animation and rendering literature, such as [15], [12], and [10]. Stylizations based on innovative use of the camera have also been researched. Agrawala et al. [1] present a multiprojection rendering algorithm for creating multiprojection images and animations. Singh [17] also presents a technique for constructing a nonlinear projection as a combination of multiple linear perspectives. Coleman and Singh [7] make one of Singh's [17] exploratory cameras a boss (or primary) camera; this camera represents the default linear perspective view used in the animation. All other exploratory (or secondary) cameras, when activated, deform objects such that when viewed from the primary camera, the objects appear nonlinearly projected. Though this type of camera-based stylization can produce striking effects, which can be aesthetically harnessed by an artist to create interesting animations, there is no direct one-to-one correspondence between the viewpoint and the pose of the character. Reusing view-dependent variations allows us to produce effects very similar to those produced in [7]. Glassner [9] talks about using cubist principles in animation, i.e., rendering simultaneously from multiple points of view in an animation using an abstract camera model. In one of our reuse techniques, we draw inspiration from cubist paintings, and synthesize a new animation where different parts of the same character are controlled by separate cameras.

Synthesis of animations using motion synthesis techniques has been widely researched. Previous work on animation reuse has generally focused on creating newer, meaningful motions given a database of previously recorded motion capture data [13], [11], [2]. Brand and Hertzmann [3] describe Style Machines for stylistic motion synthesis, learning motion patterns from a highly varied set of motion capture sequences. Bregler et al. [4] describe a method to capture the motion from a cartoon animation and retarget it to newer characters. Chaudhuri et al. [5] present a technique for stylistic reuse of view-dependent animation which uses Rademacher's [16] view-sphere formulation to generate an animation. Their method is a subset of the reuse method we present in Section 4.1; our method is a more general formulation.

Contributions We present a novel framework for representing view-dependent animation. We then formalize the concept of reuse of view-dependent animations in terms of our framework. We present three broad classes of novel reuse methods. In the first we show how to animate multiple characters from the same view space. Next, we show how to animate multiple characters from multiple view spaces. We use this technique to animate a crowd of characters. Finally, we use different cameras to control and animate different body parts of the same character. To the best of our knowledge, no other work allows reuse of stylized animation in so many ways.

3 Our Framework

In this section we present our framework for view-dependent animation.

3.1 The view space

Fig. 1 Different character poses associated with each viewpoint

At a given instant of time the character may potentially be viewed from a set of different viewpoints. The character may possibly have a different pose associated with each of these viewpoints (see Figure 1). We consider this set of viewpoints and associated character poses as one sample. We assume, for simplicity of explanation, that we are animating a single character and that the camera is looking toward the character (i.e., the character is in the field of view of the camera). We also assume that the view direction is a unit vector.

We define a representation that enables aggregation of such samples as an ordered sequence. These sets of viewpoints and associated character poses sampled (or ordered)


across time form a view space (see Figure 2). Every point on the envelope of this view space represents a viewpoint (and a unit view direction), $v$. If we do not consider the sampling order then the view space is simply a collection of viewpoints. Since for every viewpoint there is a unique view direction, we use these terms interchangeably. We denote the pose of the character, associated with a view direction $v$, as $m_v$. A character pose, in this paper, is the resulting mesh model of the character having undergone any change that may be rigid or non-rigid, i.e., it includes mesh deformations as well as changes in the mesh due to articulation. We couple the character pose to the view direction. Hence, changing the view direction changes the pose of the character.

Fig. 2 Tracing a camera path on the envelope of the view space generates an animation. One character pose is shown for each set of viewpoints.

An animation is generated by tracing a path, $P$, on the envelope (see Figure 2). A point $p$ on this path consists of the view direction associated with the point on the envelope, $v_p$, and is indexed by time (runtime of the animation) along that camera path, $t_p$, measured from the start of the camera path. Note that the runtime of the animation should not be confused with the sampling time. We refer to points on a camera path $P$ as $p \equiv (v_p, t_p)$. The animation generated is the sequence of the poses $m_{v_p}$ associated with $v_p$ on the path $P$, viewed along the direction $v_p$. Every distinct camera path generates a distinct animation. This is the basic idea behind our framework.
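To make this representation concrete, the following is a minimal sketch of a view space and the path-tracing loop. The names (ViewSpace, trace_path) and the storage of poses as mesh vertex arrays are our illustrative assumptions, not code from the paper.

```python
import numpy as np

class ViewSpace:
    """A view space: key view directions (unit vectors) with their
    associated key character poses (here, mesh vertex arrays of
    identical shape, so poses can be blended arithmetically)."""
    def __init__(self, key_views, key_poses):
        self.key_views = [v / np.linalg.norm(v) for v in key_views]
        self.key_poses = key_poses

def trace_path(view_space, camera_path, pose_fn):
    """Generate one animation frame per path point p = (v_p, t_p).
    pose_fn computes the character pose for a view direction,
    e.g., the r-closest blend of Equation 1 below."""
    frames = []
    for v_p, t_p in camera_path:          # points ordered by runtime t_p
        v_p = v_p / np.linalg.norm(v_p)
        frames.append((t_p, pose_fn(view_space, v_p)))
    return frames                         # render each pose along its v_p
```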

In order to create the view space, the animator provides a set of key viewpoints, or key view directions, and the associated key poses. Let $v^k$ represent a key viewpoint, and $m_{v^k}$ represent the associated key character pose. The animator can provide these in the form of a set of sketches, a video, or a mix of the two. As an example consider the Hugo's High Jump animation (Hugo is the character used in the animation), where the animator provides the sketches for the keyframes. These key views and key poses form the view space on which the animation is generated. Figure 5 shows some of the sketches used for the purpose. The animation is provided separately as a supplementary video to this paper.

Now for each key view, the sphere centered at the look-at point (in this case the end of the unit length view direction vector) is the set of all possible view directions from which one can look toward that point. Hence, such a sphere may be thought of as a view space generated by just one view. The complete view space is, therefore, the union of the view spaces generated by all the views, i.e., a union of all the spheres (see Figure 3).

In order to generate an animation along a camera path $P(v_p, t_p)$ on the envelope of the view space, we need to generate the associated character pose, $m_{v_p}$, for every point $p$ on $P$. To do this, for any view direction $v_p$, we determine the $r$ closest key viewpoints (closest in the metric defined on the envelope), $v^k_j$. The character pose $m_{v_p}$ is then given by:

$$m_{v_p} = \sum_{j=1}^{r} w_{v^k_j}\, m_{v^k_j} \qquad (1)$$

Thus, $m_{v_p}$ is a weighted blend of the corresponding $m_{v^k_j}$'s (i.e., the poses at the $r$ closest key viewpoints). The $w_{v^k_j}$'s are the corresponding blending weights. An example of such a path is shown in Figure 3. Figure 4 shows the selection of the $r$ closest key viewpoints for a given position of the rendering camera on the path.
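A sketch of Equation 1 against the ViewSpace structure above. The paper fixes only the $r$-closest selection on the envelope, not the weighting scheme, so the inverse-angular-distance weights (normalized to sum to one) used here are an assumption.

```python
import numpy as np

def blend_pose(view_space, v_p, r=2):
    """Equation 1: m_{v_p} = sum_j w_j * m_j over the r closest key views."""
    # Angular distance on the envelope between v_p and each key view.
    dists = np.array([np.arccos(np.clip(np.dot(v_p, vk), -1.0, 1.0))
                      for vk in view_space.key_views])
    nearest = np.argsort(dists)[:r]        # indices of the r closest keys
    w = 1.0 / (dists[nearest] + 1e-8)      # inverse-distance weights (assumed)
    w /= w.sum()                           # blending weights must sum to 1
    # Weighted blend of the corresponding key poses (vertex arrays).
    return sum(wj * view_space.key_poses[j] for wj, j in zip(w, nearest))
```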

Fig. 3 The left image shows the path traced on the envelope of the view space. The right image shows a close-up view of the path.

Fig. 4 Blending between the $r$ closest key views.

The path shown in Figure 3 is obtained by smoothly joining the key viewpoints. Some frames from the animation obtained from this path are shown in Figure 5. Here we see that the generated animation matches the planned storyboard frames very closely and the path generates the animation originally intended by the animator. In this animation, we have generated the actual jump as a view-dependent animation. Hugo's run-up before the jump, however, is generated using simple keyframed animation and it blends in seamlessly with the view-dependent portion. The character is a 3D mesh model, with an embedded articulation skeleton. We use inverse kinematics to pose this skeleton. The character is also enclosed in a control lattice made up of tetrahedral cells, with each cell associated with one skeleton bone. This allows us to deform the character's mesh using direct free-form deformation. We use a combination of these methods


and robust computer vision techniques, similar to those used by [6] and [8], in order to recover the key view directions and the key poses from the sketches given by the animator. The complete animation is included as a supplementary video with this paper.
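The rig described above also yields a natural vertex grouping that Section 4.3 reuses: every mesh vertex lies in exactly one tetrahedral lattice cell, and every cell is bound to one skeleton bone. A minimal sketch of that bookkeeping, where the cell containment test is an assumed helper:

```python
def group_vertices_by_bone(mesh_vertices, lattice_cells, cell_to_bone):
    """Map each mesh vertex to the bone of its containing lattice cell.
    lattice_cells[c].contains(v) is an assumed tetrahedron containment
    test; cell_to_bone[c] is the bone bound to cell c."""
    vertex_to_bone = {}
    for i, v in enumerate(mesh_vertices):
        for c, cell in enumerate(lattice_cells):
            if cell.contains(v):           # each vertex lies in exactly one cell
                vertex_to_bone[i] = cell_to_bone[c]
                break
    return vertex_to_bone
```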

This complete process is very intuitive for the animator as she does not have to worry about the camera and the character separately, once the view space has been created. We assume coherence over a local neighbourhood around any viewpoint, both in terms of the view direction and the character pose, i.e., the pose specified by the animator for any viewpoint is similar to the pose specified for any other viewpoint in its small neighbourhood. This guarantees spatio-temporal continuity in the generated animation, i.e., the animation will not have any sudden unwanted changes in the view or pose between successive frames.

Fig. 5 Some frames from the planned storyboard and the final rendered animation.

The view space for this example (shown in Figure 3) is an instance of the general view space formulation. The view space can have other forms depending on the spatial location and sampling order of the sets of viewpoints used to construct it. The conditions under which they are generated are enumerated below:

1. If all the view directions, corresponding to a set of viewpoints sampled at a given instant of time, intersect at a common point (i.e., they share a common look-at point), then the instantaneous view space is a single sphere (also called a view-sphere) centered at the point of intersection. This is trivially true if there is only one view direction for some time instant. If this condition holds for all sampling time instants, then the view space is an aggregation of view-spheres. The spatial location and sampling order of these sets of viewpoints (i.e., view-spheres) gives rise to the following view space configurations:

(a) If there is only one set of viewpoints (i.e., there is only one sample), then the view space is a single view-sphere.

(b) If there are multiple sets of viewpoints and each set is located at a different point in space and sampled at a different time instant, then the view space is an aggregation of view-spheres separated both in space and time. The view space shown in Figure 3 is an example of such a case (with only one view direction for each time instant).

(c) If there are multiple sets of viewpoints at the same spatial location, sampled at different time instants, then the view space is an aggregation of view-spheres separated only in time and not in space.

2. If all the view directions, corresponding to a set of viewpoints sampled at a given time instant, do not intersect at a common point, then the instantaneous view space is not a single sphere. It can be considered as a collection of spheres (one centered at each distinct look-at point). Then the complete view space is an aggregation of such instantaneous view spaces. The view space may have any of the three configurations analogous to the ones described above.

In the work by Rademacher [16], the view-sphere formed by view-dependent models is a special case of our view space. Here, a convex hull of the viewpoints is computed. This partitions the view space by imposing a triangulation on it. A novel view-dependent model for any new viewpoint is generated by a barycentric blend of the key deformations at the vertices of the triangle in which the new viewpoint lies. This is clearly a special case of our novel view generation strategy on the envelope. Here, the $r = 3$ closest key viewpoints set up a local barycentric basis for the novel viewpoint. The new character pose associated with this viewpoint is computed as a weighted blend of the key poses at the selected key viewpoints, using the barycentric coordinates of the novel viewpoint as weights. The major limitations of Rademacher's formulation are:

– It does not handle the distance of the viewpoint, which is crucial for incorporating zoom effects.

– It cannot handle cases where all the camera view directions do not intersect at a single look-at point (the center of a view-sphere), thereby limiting the method considerably.

Our framework can deal with both these conditions.
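For intuition, the $r = 3$ special case amounts to computing barycentric coordinates of the new viewpoint inside the triangle of key viewpoints and using them as the blending weights of Equation 1. A small illustrative sketch (not the paper's code), using the standard area-ratio formulation for 3D points:

```python
import numpy as np

def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of point p inside triangle (a, b, c),
    all 3D points, computed from sub-triangle areas via cross products."""
    area = lambda u, v, w: 0.5 * np.linalg.norm(np.cross(v - u, w - u))
    total = area(a, b, c)
    return (area(p, b, c) / total,   # weight of the key pose at a
            area(a, p, c) / total,   # weight of the key pose at b
            area(a, b, p) / total)   # weight of the key pose at c

# The VDG pose is then wa * m_a + wb * m_b + wc * m_c,
# i.e., the r = 3 case of Equation 1.
```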

3.2 Distance of viewpoint

In the previous discussion we developed our framework considering only the view direction, without the distance of the viewpoint. Now we add the component of distance to our framework, i.e., we want the character's pose to change as the distance of the viewpoint changes (with or without an accompanying change in view direction).

We assume that a tuple list $(d^l_v, m^l_v)$ is associated with every view direction, $v$, forming the view space. Here, $d^l_v$ is the distance of viewing and the associated character pose is $m^l_v$. The list is sorted on the distance field of each tuple. If the list has $L$ elements, then $1 \le l \le L$. So the $m^l_v$'s are the different poses of the character along a view direction at various distances $d^l_v$. As we change the distance, $d$, along a view direction, $v$, the resulting character pose is a blend of the character poses $m^{l_1}_v$ and $m^{l_2}_v$ such that $d^{l_1}_v \le d < d^{l_2}_v$ (see Figure 6).
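A minimal sketch of the lookup over such a tuple list, assuming the distances for one view direction are kept in a sorted Python list; it returns the bracketing indices $l_1$, $l_2$ and the blend parameter:

```python
import bisect

def distance_blend_params(dist_list, d):
    """dist_list: distances d_v^l, sorted ascending, for one view direction.
    Returns (l1, l2, beta) with dist_list[l1] <= d < dist_list[l2] and
    d = (1 - beta) * dist_list[l1] + beta * dist_list[l2]."""
    l2 = bisect.bisect_right(dist_list, d)
    l1 = max(l2 - 1, 0)
    l2 = min(l2, len(dist_list) - 1)
    if l1 == l2:                      # d outside the sampled range: clamp
        return l1, l2, 0.0
    beta = (d - dist_list[l1]) / (dist_list[l2] - dist_list[l1])
    return l1, l2, beta
```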


Fig. 6 Change of character pose with change of distance of the current viewpoint along a view direction.

Now, given a set of key viewpoints, $v^k$, and the associated tuple lists, $(d^l_{v^k}, m^l_{v^k})$, we want to generate an animation for a camera path $P(v_p, d_{v_p}, t_p)$. The added parameter $d_{v_p}$ is the distance of the viewpoint along the unit view direction $v_p$. The vector $q_{v_p} = d_{v_p} v_p$ gives the position of the current viewpoint (see Figure 7). We determine the $r$ closest key viewpoints to $v_p$ on the envelope as before. Now for every key viewpoint, $v^k_j$, in the $r$-closest set of $v_p$, we project the vector $q_{v_p}$ on $v^k_j$ and find the length of the projected vector. The projected length, $d_{v_p} v_p \cdot v^k_j$, is the distance $d_{v_p}$ projected along $v^k_j$. Find $d^{l_1}_{v^k_j}$ and $d^{l_2}_{v^k_j}$ from the tuple list of $v^k_j$ such that $d^{l_1}_{v^k_j} \le d_{v_p} v_p \cdot v^k_j < d^{l_2}_{v^k_j}$. It is always possible to find a $\beta_{v^k_j}$ such that $d_{v_p} v_p \cdot v^k_j = (1-\beta_{v^k_j})\, d^{l_1}_{v^k_j} + \beta_{v^k_j}\, d^{l_2}_{v^k_j}$. The computed $\beta_{v^k_j}$ locates a point, $q_{v^k_j}$, along the corresponding $v^k_j$ vector. The pose at each $q_{v^k_j}$ is given by:

$$m_{q_{v^k_j}} = (1-\beta_{v^k_j})\, m^{l_1}_{v^k_j} + \beta_{v^k_j}\, m^{l_2}_{v^k_j} \qquad (2)$$

where $m^{l_1}_{v^k_j}$ and $m^{l_2}_{v^k_j}$ are the poses associated with $d^{l_1}_{v^k_j}$ and $d^{l_2}_{v^k_j}$. Then the pose corresponding to the current viewpoint, $q_{v_p}$, is given as a weighted blend of the pose at each $q_{v^k_j}$, as:

$$m_{q_{v_p}} = \sum_{j=1}^{r} w_{q_{v^k_j}}\, m_{q_{v^k_j}} \qquad (3)$$

where $w_{q_{v^k_j}}$ are the weights used for the blending. The process is shown schematically in Figure 7.

In order to illustrate this concept, we augment the view space, shown in Figure 3, by adding two more poses for a view direction at different distances. The poses are reconstructed from sketches given by the animator, and the camera center is recovered along with the distance of viewing. Figure 8 shows two camera positions on the left that differ in the distance from the character, and not the view direction. On the right the corresponding character pose is shown, as seen from their associated cameras.
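Putting Equations 2 and 3 together, a hedged sketch of the full pose computation with distance: project the current viewpoint onto each of the $r$ closest key directions, blend the two bracketing poses along each direction, then blend across directions. The per-direction tuple lists and the projection follow the text; the cross-direction weights are again an assumed inverse-distance choice.

```python
import numpy as np

def pose_with_distance(key_views, key_tuple_lists, v_p, d_vp, r=2):
    """key_tuple_lists[j] is a list of (distance, pose) pairs, sorted by
    distance, for unit key view direction key_views[j]."""
    q_vp = d_vp * np.asarray(v_p)                    # current viewpoint position
    ang = np.array([np.arccos(np.clip(np.dot(v_p, vk), -1.0, 1.0))
                    for vk in key_views])
    nearest = np.argsort(ang)[:r]                    # r closest key viewpoints
    w = 1.0 / (ang[nearest] + 1e-8)
    w /= w.sum()                                     # assumed weights w_{q_vk_j}
    pose = 0.0
    for wj, j in zip(w, nearest):
        d_proj = float(np.dot(q_vp, key_views[j]))   # d_vp * (v_p . v^k_j)
        tuples = key_tuple_lists[j]
        ds = [t[0] for t in tuples]
        if len(ds) == 1:                             # single sample: nothing to blend
            m_qj = tuples[0][1]
        else:
            l2 = int(np.clip(np.searchsorted(ds, d_proj, side='right'),
                             1, len(ds) - 1))
            l1 = l2 - 1
            beta = np.clip((d_proj - ds[l1]) / (ds[l2] - ds[l1]), 0.0, 1.0)
            m_qj = (1.0 - beta) * tuples[l1][1] + beta * tuples[l2][1]  # Eq. 2
        pose = pose + wj * m_qj                      # Eq. 3 accumulation
    return pose
```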

Fig. 7 Generating a new character pose for the current viewpoint from key viewpoints after incorporating distance.

Now we trace another path for the rendering camera, specifying $d_{v_p}$ for all points on the path, and the required animation is generated as explained above. The complete animation is included in the supplementary material provided with this paper. This also illustrates that there exist other paths which are capable of generating interesting animations. Our framework can generate animation in real time as the animator traces out a path on the view space, thus making it possible for the animator to explore the view space very easily.

Fig. 8 Varying the character pose with change in distance of the viewpoint.

Thus, in our framework we incorporate both the view direction and the distance of a viewpoint. It is easy to incorporate changes in the character pose with changes in focal length of the camera in a manner similar to the one used for distance of the viewpoint. Since the view space is an abstract representation, it can be easily used with all the view parameters encoded in the form of a camera matrix. We now present our methods for reusing view-dependent animation, using the framework we have developed, to synthesize new animations.


4 Reusing View-Dependent Animations

We consider the different view-dependent animations made possible by changing the rendering camera as variations of each other. We are interested in reusing these variations to synthesize novel animations. We categorize these ways of reusing view-dependent animation into three broad categories. In the subsequent sections, we discuss these categories in terms of their representation in the machinery of our framework.

4.1 Animating multiple characters from the same view space

We want to reuse the view-dependent variations of a character to animate multiple characters and create a novel animation.

Fig. 9 Animating multiple characters from the same view space

Let us assume that a camera, $C_1$, traces a path $P_1(v_p, d_{v_p}, t_p)$ on the view space, $\mathcal{VS}$. A second camera, $C_2$, traces another distinct path $P_2(v_p, d_{v_p}, t_p)$ on $\mathcal{VS}$. The animation generated by $C_1$ can be thought of as an ordered set of $n$ frames $\mathcal{P}_1$, given by $\mathcal{P}_1 = \{p^i_1 : 1 \le i \le n\}$, where $p^i_1$ is the pose of the character in the $i$-th frame. The order implicitly imposed on the set is the temporal sequence of the frames in the animation. Similarly, we have, for the animation generated by $C_2$, another ordered set of $m$ frames $\mathcal{P}_2$, given by $\mathcal{P}_2 = \{p^j_2 : 1 \le j \le m\}$. The animations $\mathcal{P}_1$ and $\mathcal{P}_2$ are view-dependent variations of each other, i.e., they are generated from the same view space. The poses, $p^i_1 \in \mathcal{VS}$ and $p^j_2 \in \mathcal{VS}$, are view-dependent variations, or instances, of each other.

We then define a novel animation with two characters as an ordered set $\mathcal{Q}$, given by

$$\mathcal{Q} = \{\langle q^k_1 \oplus q^k_2 \rangle : q^k_1 = p^k_1 \text{ and } q^k_2 = p^k_2 \;\forall k,\; 1 \le k \le \min(n,m)\} \qquad (4)$$

where $\langle q^k_1 \oplus q^k_2 \rangle$ indicates that a frame $k$ in $\mathcal{Q}$ consists of two character poses (see Figure 9). The $\oplus$ operator indicates that the two poses are being composed together to form the synthesized animation frame. The composition can be done in 3D space if the two poses are registered to a common coordinate system. The composition can also be done in 2D image space, by compositing the poses after they have been rendered into the framebuffer. The novel animation has $\min(n,m)$ frames.
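A sketch of Equation 4, with the $\oplus$ operator passed in as a compositing callback. The render and alpha_over names in the usage comment are hypothetical stand-ins for a renderer and a 2D compositor; a 3D composition after registration would fit the same shape.

```python
def compose_animations(P1, P2, compose):
    """Equation 4: frame k of the novel animation Q pairs pose k from each
    view-dependent variation; 'compose' realizes the (+)-operator."""
    n = min(len(P1), len(P2))            # Q has min(n, m) frames
    return [compose(P1[k], P2[k]) for k in range(n)]

# e.g., with assumed image-space composition of rendered frames:
#   frames = compose_animations(render(path1), render(path2), alpha_over)
```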

In this manner, we can reuse the view-dependent variations of a character to animate multiple characters and create a new animation. As an example of this method of reuse, we create a view space and plan the movement of two cameras on it. Figure 10 shows the final frame generated by compositing the two view-dependent instances of the character. Note that the compositing is done in the image space, i.e., in 2D.

Fig. 10 Two view-dependent instances composited together

4.2 Animating multiple characters from multiple view spaces

The reuse strategy presented in Section 4.1 uses multiple instances of the same character, each from the same view space. We want to further expand this idea and look at animating groups of distinct characters together.

Consider that we have $N$ distinct characters and we have constructed a view space for each. Then we can generate the distinct animations, $\mathcal{P}_r$, with $1 \le r \le N$. Note that the generated $\mathcal{P}_r$'s are distinct even if the path traced on each view space is the same, because the character in each is distinct. Each $\mathcal{P}_r$ is an ordered set of $n_r$ frames and is given by $\mathcal{P}_r = \{p^i_r : 1 \le i \le n_r\}$. We can now construct a new animation of a group of these distinct characters as

$$\mathcal{Q} = \{\langle \textstyle\bigoplus_{l=1}^{N} q^k_l \rangle : q^k_l = p^k_l \;\forall k,\; 1 \le k \le \min_{l=1}^{N}(n_l)\} \qquad (5)$$

where the $\bigoplus$ operator indicates that $N$ poses are being composed together to form the synthesized animation frame.

We now look at the problem of how to control the paths we want to trace on the $N$ distinct view spaces. Let a camera be associated with every view space. We call this camera the local camera for the corresponding view space. Let the path traced by this camera be $P_r(v_{p_r}, d_{v_{p_r}}, t_{p_r})$. Now, we define a single global camera and the path traced by this camera as $P$. The path $P$ is not a path on any view space, but is the trajectory of the global camera in 3D space, defined in the global coordinate system.


Fig. 12 The Mexican Wave Animation

Fig. 11 Animating multiple characters from multiple view spaces.

We can define a path-mapping function $f_r : P_r = f_r(P)$. The function $f_r$ maps the global path to the corresponding local path on the view space. The function $f_r$ is a coordinate system transfer function, from the global coordinate system to the local coordinate system of each view space. In order to create the novel animation, the animator has to plan the camera trajectory only for the global camera and define the various $f_r$'s. Then moving the global camera along $P$ will cause each of the local cameras to move along the corresponding $P_r$ on their respective view spaces. This will generate the distinct $\mathcal{P}_r$'s. These can be composited together to generate the final animation (see Figure 11). A straightforward choice for the compositing method is to render the various poses as they appear when viewed through the global camera. This technique automatically composites them in the rendered frame. The animator, however, can use any other compositing method as required for the animation. Before starting the animation process, the animator has to place the various characters in the global coordinate system as a part of the global scene definition. Hence, the animator already knows the coordinate system transfer function, $g_r$, from the global coordinate system to the local coordinate system of each character. The mapping from the local coordinate system of the character to the coordinate system of the view space, $h_r$, is easily recovered during view space construction. Thus, we have $f_r = g_r \circ h_r$ (where $\circ$ represents function composition).
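A sketch of the path mapping as transform composition, representing cameras as 4x4 homogeneous matrices (an assumed convention) and $g_r$, $h_r$ as the transfer functions described above, applied in the order global frame, character frame, view-space frame:

```python
import numpy as np

def map_global_path(global_path, g_r, h_r):
    """f_r maps every global camera sample into view-space coordinates:
    g_r takes global coordinates to the character's local frame, then
    h_r takes those to the view space's frame. Cameras and transfer
    functions are 4x4 homogeneous matrices here (assumed)."""
    f_r = lambda cam: h_r @ g_r @ cam     # apply g_r, then h_r
    return [f_r(cam) for cam in global_path]

# Moving the global camera along its path then drives every local camera
# (g, h, and N below are hypothetical scene data):
#   local_paths = [map_global_path(P_global, g[r], h[r]) for r in range(N)]
```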

We have used this reuse technique to animate a crowd of characters in the Mexican Wave animation. In this example the same character is replicated many times to generate a crowd (shown in Figure 12(b)). Each character has a local view space as shown in Figure 12(a). The local key viewpoints are shown in blue and red, while the current local camera is shown in green. Moving this local camera on the path shown (in green) causes the single character's pose to change as it is supposed to change during the crowd animation. The movement of the global camera is mapped to each of these view spaces, to move the corresponding local cameras, which generates the final animation. The path of the global camera and its current look-at are shown in green in Figure 12(b). Note that the crest of the Mexican wave is in front of the current camera look-at. We also perform conservative view culling to efficiently render the crowd. A final frame of the animation is shown in Figure 12(c). The animation of a single character due to camera movement in a local view space, the generation of the crowd, and its animation due to the global camera movement are shown in a supplementary video provided with this paper, along with the final animation.

In this example, finding the path-mapping function, $f_r$, which will generate the wave in the crowd for a specific movement of the global camera is not difficult. Figure 13 shows the position of the local cameras in their respective local view spaces for a given position of the global camera. The mapping ensures that the local cameras in local view spaces outside the bounds of the current view frustum do not move. This mapping function can be constructed intuitively. For a general case, however, designing a path-mapping function to get a desired animation may not always be easy.
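One way to realize the wave's mapping as just described: advance a local camera along its local path only while its character lies inside the global view frustum, so characters outside it hold their pose. The in_frustum test and the phase bookkeeping are illustrative assumptions, not the paper's implementation.

```python
def step_local_cameras(local_phase, char_positions, local_paths, in_frustum):
    """Advance each local camera one step along its local path, but only
    for characters inside the global view frustum; others hold their pose.
    local_phase[r] indexes the current point on local_paths[r]."""
    for r, pos in enumerate(char_positions):
        if in_frustum(pos):               # the wave crest follows the look-at
            local_phase[r] = (local_phase[r] + 1) % len(local_paths[r])
    return [local_paths[r][local_phase[r]] for r in range(len(local_paths))]
```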

4.3 Animating different parts of a single character from a single view space

In the previous sections we have looked at the problem of synthesizing a novel animation with multiple characters using view-dependent variations of one or many characters. Now we draw inspiration from cubist paintings, which

Fig. 13 Mapping the movement of the global camera to the local cameras.

portray the parts of the same character in a painting from different perspectives. Many such paintings by Pablo Picasso are perfect examples of a scene that can be visually thought of as broken into disjoint parts that are viewed from different perspectives and then patched back together. Similarly, we want to generate a new animation, where different parts of the same character are controlled by separate cameras. All the cameras move on the same view space. The final animation will have the character with each separately animated body part, blended together.

In order to do this we consider a pose, $p$, to be made up of a union of $M$ body parts, $b_u$, i.e., $p = \bigcup_{u=1}^{M} b_u$. We assume there is no overlap between the body parts, i.e., they are distinct and mutually exclusive. Now, we associate a camera $C_u$ with a body part $b_u$. Each camera traces a path, $P_u(v_{p_u}, d_{v_{p_u}}, t_{p_u})$, on the view space. The synthesized animation of $n$ frames, $\mathcal{Q}$, is then given by

$$\mathcal{Q} = \{q^i : q^i = p^i,\; 1 \le i \le n\} \qquad (6)$$

At any point, $p_u$, on a camera path, the configuration of the corresponding body part, $b_u$, is computed by using a process which is analogous to the pose computation at $p_u$ for a normal view-dependent animation as given in Section 3. We can also associate other parameters, like scaling of each body part, with their respective cameras. We can then vary these parameters as their corresponding cameras move. The various body parts are then composited together to form the final pose (see Figure 14). The compositing method used is the animator's prerogative.
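A sketch of this per-frame computation: each body part is posed from its own camera's current path point using the Section 3 machinery (represented here by a pose_for_view callback), and the posed parts are composed into the frame. The part/camera pairing is the paper's; the function names are ours.

```python
def frame_from_body_cameras(parts, cameras, pose_for_view, compose):
    """Equation 6: a frame q^i composes M disjoint body parts b_u, each
    posed by its own camera C_u tracing a path on the shared view space."""
    posed_parts = []
    for b_u, C_u in zip(parts, cameras):
        # Configuration of part b_u at C_u's current path point (v, d, t),
        # computed exactly as for a whole-character view-dependent pose.
        posed_parts.append(pose_for_view(b_u, C_u))
    return compose(posed_parts)   # e.g., 2D compositing or 3D blending
```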

We present two variations of this reuse technique as examples. In the first, different body parts are viewed from their respective cameras and the views are composited in 2D image space to generate a multi-perspective image. This compositing technique is similar to the one given by Coleman and Singh [7]. We associate six body cameras, one each with the head, torso, two arms and two legs. We explicitly associate the cameras with the bones of the embedded skeleton for each body part, which automatically groups the mesh vertices into various body parts, as each mesh vertex is uniquely contained in a control lattice cell, which in turn is associated with exactly one bone of the embedded skeleton. We also associate scaling parameters of the various body parts with the position of their respective body cameras. Since each body camera is at a different position, each body part

Fig. 14 Animating different parts of a single character from a single view space

Fig. 15 A multi-perspective image

is scaled differently, in addition to having a different perspective. We then composite the view from each to get the final image shown in Figure 15. In this image the head of the character is seen from the right, the torso from the front, the left hand from the top, the right hand from the left-bottom, the left foot from the front, while the right foot is seen from the right side. This may be thought of as the view-dependent analogue of cubist paintings.

In the second variation, we again associate six body cameras with the various body parts. The composition of the body parts is, however, done in object space, i.e., in 3D. This is done by taking one model of the character and posing the various body parts as per the associated camera. The connectivity of the body parts is not disturbed, and hence they can be blended in object space. The rendered viewpoint is that of a master camera. The body cameras follow the movement of the master camera. Figure 16 shows frames from three animations we have generated using this technique, each with a different set of scaling parameters for the various body parts. Figure 16(a) has no scaling. Figure 16(b) has scaling which exaggerates the perspective effect, i.e., the part closer to the camera appears very big while the part farther away appears very small.


Fig. 16 Compositing in object space and rendering from the master camera

This effect can be seen in the legs and the head, as the camera moves from below the character to the top. As the camera moves, the scaling associated with the body parts changes to maintain the exaggerated perspective effect. The hands and the torso are not scaled. Figure 16(c) has scaling which counters the perspective effect (i.e., the head appears larger). The final animation for each of these three cases is provided in a supplementary video to this paper.

As a final example of the elegance of our reuse technique, we stylize the Hugo's High Jump animation by associating different cameras with different body parts of the character. Sample frames from this animation are shown in the title image on the first page. In this animation, as Hugo jumps, his limbs stretch and his head becomes larger. This is made possible by the scaling parameters associated with the various moving body cameras. The final animation is provided as a supplementary video to this paper.

5 Conclusion

We have formulated the concept of a view space of key views and associated key character poses as a framework for representing view-dependent animations. It captures all the information about the views and character poses efficiently and concisely. The animator can trace camera paths on the view space and the corresponding animation is generated in real time. The ability to understand and explore view-dependent animation using our framework gives us an insight into the reuse of view-dependent animation.

We have formalized the concept of reusing view-dependent animations in terms of our framework. We have presented three novel reuse strategies. In the first we show how to animate multiple characters from the same view space. Next, we show how to animate multiple characters from multiple view spaces. We use this technique to animate a crowd of characters. Finally, we have drawn inspiration from cubist paintings and created their view-dependent analogues, by using different cameras to control various body parts of the same character. We have thus shown that reusing view-dependent animation is possible using our framework and that it can be used to synthesize a variety of interesting stylized animations.

For future work, we would like to analyze under what conditions a suitable mapping function can be designed, given any desired combination of paths of the global and local cameras.

Acknowledgments

Hugo’s mesh is courtesy Laurence Boissieux, INRIA.

References

1. Agrawala, M., Zorin, D., Munzner, T.: Artistic multiprojection rendering. In: Proceedings of the Eurographics Workshop on Rendering Techniques, pp. 125–136 (2000)

2. Arikan, O., Forsyth, D.A., O'Brien, J.F.: Motion synthesis from annotations. ACM Transactions on Graphics 22(3), 402–408 (2003)

3. Brand, M., Hertzmann, A.: Style Machines. In: Proceedings of SIGGRAPH, pp. 183–192 (2000)

4. Bregler, C., Loeb, L., Chuang, E., Deshpande, H.: Turning to the masters: Motion capturing cartoons. In: Proceedings of SIGGRAPH, pp. 399–407 (2002)

5. Chaudhuri, P., Jindal, A., Kalra, P., Banerjee, S.: Stylistic reuse of view-dependent animations. In: Proceedings of Indian Conference on Vision, Graphics and Image Processing, pp. 95–100 (2004)

6. Chaudhuri, P., Kalra, P., Banerjee, S.: A system for view-dependent animation. Computer Graphics Forum 23(3), 411–420 (2004)

7. Coleman, P., Singh, K.: Ryan: Rendering your animation nonlinearly projected. In: Proceedings of NPAR, pp. 129–156 (2004)

8. Davis, J., Agrawala, M., Chuang, E., Popović, Z., Salesin, D.: A sketching interface for articulated figure animation. In: Proceedings of SCA, pp. 320–328 (2003)

9. Glassner, A.S.: Cubism and cameras: Free-form optics for computer graphics. Technical Report MSR-TR-2000-05 (2000)

10. Hays, J., Essa, I.: Image and video based painterly animation. In: Proceedings of NPAR, pp. 113–120 (2004)

11. Kovar, L., Gleicher, M., Pighin, F.: Motion Graphs. In: Proceedings of SIGGRAPH, pp. 473–482 (2002)

12. Kowalski, M.A., Markosian, L., Northrup, J.D., Bourdev, L., Barzel, R., Holden, L.S., Hughes, J.F.: Art-based rendering of fur, grass, and trees. In: Proceedings of SIGGRAPH, pp. 433–438 (1999)

13. Li, Y., Wang, T., Shum, H.Y.: Motion texture: a two-level statistical model for character motion synthesis. In: Proceedings of SIGGRAPH, pp. 465–472 (2002)

14. Martín, D., García, S., Torres, J.C.: Observer dependent deformations in illustration. In: Proceedings of NPAR, pp. 75–82 (2000)

15. Meier, B.J.: Painterly rendering for animation. In: Proceedings of SIGGRAPH, pp. 477–484 (1996)

16. Rademacher, P.: View-dependent geometry. In: Proceedings of SIGGRAPH, pp. 439–446 (1999)

17. Singh, K.: A fresh perspective. In: Proceedings of Graphics Interface, pp. 17–24 (2002)

