Page 1: Humanized Robot Dancing: Humanoid Motion …paginas.fe.up.pt/~niadr/PUBLICATIONS/LIACC_publications...Humanized Robot Dancing: Humanoid Motion Retargeting based in a Metrical Representation

Humanized Robot Dancing: Humanoid Motion Retargeting based in a Metrical Representation of Human Dance Styles

Authors

Institute

Abstract. Expressiveness and naturalness in robotic motions and behaviors can be replicated through the use of captured human movements. Considering dance as a complex and expressive type of motion, in this paper we propose a method for generating humanoid dance motions transferred from human motion capture data. Motion data of samba dance was synchronized to samba music manually annotated by experts in order to build a spatiotemporal representation of the dance movement, with its variability, in relation to the respective musical temporal structure (the musical meter). This enabled the determination and generation of variable dance key-poses according to a human body model. In order to retarget these key-poses from the original human model onto the considered humanoid morphology, we propose methods for resizing and adapting the original trajectories to the robot joints, overcoming its various kinematic constraints. Finally, a method for generating the angle of each robot joint is presented, enabling the reproduction of the desired poses. The achieved results validate our approach, suggesting that it is possible to generate poses from motion capture and reproduce them on a humanoid robot with a good degree of similarity.

Keywords: Humanoid Robot Motion Generation and Motion Retargeting

1 Introduction

Robotics applications grow daily, and the creation of realistic motion for humanoid robots increasingly plays a key role. Since motion can be regarded as a form of interaction and expression that enriches communication, improving the expressiveness and realism of humanoid robot motion is a way to accomplish better and richer human-robot interaction. One way of achieving more humanized robotic motion is to feasibly reproduce, and imitate, the motions performed by humans. This would allow not only more expressiveness, diversity and realism in humanoid robot motion, but also a simple, less time-consuming and automatic way of creating diverse robot motion by converting human motion. Considering dance as a rich and expressive type of motion, constituting a form of non-verbal communication in social interactions that also transmits emotion, it makes a good case study of clearly humanized motion.


This paper presents methods for generating humanoid robot dancing movement from human motion capture data. The process is based on [9]: this method was applied to motion data of the samba dance style synchronized to samba music (manually annotated by experts), building a spatiotemporal representation of the original dance movement, with its variability, in relation to the respective music temporal structure (musical meter), and allowing the determination of the fundamental key-poses of the dance style. Using this representation as a starting point, we present methods for resizing the body segments and retargeting the joint trajectories towards different humanoid body morphologies. As such, we firstly synthesize stochastic variations of the determined key-poses, then resize these according to the targeted segment lengths, and finally process the necessary adjustments to retarget the generated joint trajectories onto the considered humanoid morphologies. The method is tested on the NAO robot [1], using the SimSpark simulation environment, in order to generate and reproduce the original samba dance.

The remainder of this paper is structured as follows: Section 2 presents a review of related work on motion analysis and motion generation. Section 3 presents the proposed methods and the developed work. In Section 4, the main results are presented and discussed, and an evaluation of the similarity between the original human motion and the generated humanoid motion is made. Finally, Section 5 presents the conclusions and future work.

2 Related Work

Many attempts have been made to achieve realistic humanoid motion based on human motion. The techniques reviewed below use dance motion capture data and synthesize new motion from it. The first step in the process consists of determining the most important key-poses from the motion capture data. This choice impacts the overall aspect of the final motion, and must be accurate so that the determined key-poses have real value and constitute a meaningful representation of the dance. From these key-poses, the motion is then transferred to the robot, trying to achieve the greatest possible similarity with the original motion capture data.

2.1 Motion Analysis

One of the traditional methods to generate dance motion, both in computer animation and in robotics, is to interpolate transition motion between key-poses, so the key-poses must show representative instances of the motion. The techniques to analyze and determine the appropriate key-poses are based on the analysis of the body motion [4] [3], on the analysis of the dance music [9], and in some cases on a simultaneous analysis of both aspects [15] [16] [14] [8]. Working on Latin dances, [4] uses information about the main characteristics of the Merengue dance style, focusing only on the rotation components of the shoulders and hips. Analyzing Japanese folk dance, [6] and [5] segmented the dance into key-poses in terms of


minimum velocity of the end-effectors (hands and feet). These key-poses were then clustered and interpolated to regenerate the original dance. [3] extracts rapid directional changes in the motion. A combination of music and motion analysis is applied in [15] [16] [14] [8]. [15] identifies key-points of the motion rhythm as local minima of the 'weight effort' (a linear sum of the rotation of each body joint), indicating stop motions, recognized as key-poses, and motion intensity points as the average of instant motion since the previous key-pose. Its music analysis focuses on the music intensity (the sound whose spectral power is strongest among the neighboring frequencies) and the music rhythm, which is found by analyzing the repetition of several phrases and patterns present in the music structure. In [16], stop frames of the hand motion are considered key-frames, and motion intensity is determined as the difference in the velocity of the hands between frames. The musical analysis is similar to [15], extracting the music beat and the degree of chord change for the beat-structure analysis, and the music intensity for mood analysis. There are also works based on temporal scaling techniques: [14] (for upper-body motion) and [8] (for leg motion). In [14], dance motion is captured at different speeds, and the comparison of the variance in the motion shows that some poses stay preserved. These poses are considered key-poses since they tend to represent moments that are important to the music. The analysis in [8] is similar to [14], but focuses on the step motion. The key-poses are determined using the observation that the original timings of tasks around key-poses are maintained, and that stride length also stays close to the original, even at different speeds. In [9], a spatiotemporal dance analysis model is presented, based on the Topological Gesture Analysis (TGA) [7], which conveys a discrete point-cloud representation of the dance. The model describes the spatiotemporal variability of the dance gestural trajectories as spherical distributions, according to the musical metric classes. This method describes the space that the dancer occupies at each metric class (1 beat, half-beat, 2 beats, etc.) in terms of point-clouds, generating a spatiotemporal representation of the occupied points by projecting musical cues onto the spatial trajectories. We followed the dance motion analysis method of [9] since it conveys a parameterizable representation of the original dance, incorporating its intrinsic variability.

2.2 Dance Motion Generation

Dance motion generation techniques typically aim at generating motion from the key-poses extracted in the motion analysis phase. [11], [12], [2], [8] and [6] apply inverse kinematics to transform the marker positions from the motion capture data into robot joint angles. In [2], inverse kinematics is only applied to the upper body, while the pelvis, leg and feet motion is generated by optimization based on the Zero Moment Point (ZMP) trajectory and dynamic mapping. [8] checks the intervals between steps to keep a stable ZMP and then applies inverse kinematics to map the leg joint positions to robot joint angles. [11] applies optimization to ensure the physical restrictions of the robot, guaranteeing that limits on angles, velocities and accelerations are met. [12] applies sequential motion restrictions by optimization, limiting angles first, then solves collision


avoidance and overcomes velocity and dynamic force constraints. Solving collision avoidance by increasing the critical distances for the periods that present collisions, [17] uses kinematic mapping to translate the motion capture data to the humanoid, using similarity functions based on the joint angles of the original data. To improve balance, an algorithm based on the number of feet in contact with the ground is used, modifying the hip trajectory to satisfy a constraint based on the ZMP criterion. [6] also limits the angles and the angular velocity, and finally modifies the ZMP trajectory in order to keep balance. [3] and [15] use motion graphs to represent the motion. [3] aims to generate motion on the fly, using a library of motion graphs and matching the motion represented by the graph with the input sound signal. In turn, [15] traces the motion graph based on the correlation between the music and motion features, and determines the best graph path by choosing the highest value from the correlation between music and motion intensity and the correlation between music beats and motion key-frames. In terms of motion retargeting, [3] uses a real-time algorithm to adapt the motion to the target character. [14] applies optimization to overcome the joint angle limitations of the target character. [13] presents a way to extract the joint angles from the three-dimensional point representation of a pose; this method can be applied not only to computer animation but also to robotics. After generating the motion, [11] applies a motion refinement phase to detect and correct trajectory errors. In turn, [8] makes a final refinement of the generated movement in order to keep the robot's balance and avoid self-collisions. To generate the robot joint angles from the pose point representation obtained from the previous dance movement representation [9], we base our approach on [13], extracting Euler angles in the three dimensions based on a body-centered axis system.

3 Methodology

3.1 Dance Movement Analysis

Our motion analysis stage is based on the approach presented in [9]. As such, we resorted to the same dance sequences of Afro-Brazilian samba, which were captured with a MoCap (Motion Capture) system and synchronized to the same genre of samba music (manually annotated by experts). Upon these, we also applied the TGA (Topological Gesture Analysis) method [7] for building a spatiotemporal representation of the original dance movement in relation to the respective music temporal structure (musical meter). This method relies on the projection of musical metric classes onto the motion joint trajectories, and generates a spherical distribution of the three-dimensional space occupied by each body joint for every represented metric class. In this way, the representation model offers a parameterizable spatiotemporal description of the original dance, which translates both the musical qualities and the variability of the considered movement.


3.2 Dance Movement Generation

The actual dance movement generation is based on three steps: (a) key-pose synthesis from the given representation model; (b) morphological adaptation of the key-poses to the used body model, in terms of segment lengths and number of joints; and (c) the actual key-pose retargeting from the used character to the simulated NAO robot.

Key-Poses Synthesis Following [10], the synthesis of key-poses consists of calculating a set of full-body joint positions, one for each considered metric class. In order to translate the variability present in the original dance, the joint positions of every key-pose were calculated by randomly choosing rotations circumscribed by each joint's TGA distribution, without violating the fixed geometry of the human body.

As described in [10], each key-pose is split into 5 kinematic chains. From the anchor to the extremity of each kinematic chain, each joint position p_j^m is calculated based on a random rotation circumscribed by the possible variations of its rotation quaternion q_{v_{s_j}}^m (i.e. the 3D rotation of a target unit vector v'_{s_j}^m around its base unit vector v_{s_j}^m) between every two body segments:

    p_j^m = p_{j-1}^m + l_{j-1,j} * v'_{s_j}^m,  with p_j^m ∈ T_j^m,    (1)

where p_{j-1}^m is the previously calculated joint position, l_{j-1,j} is the current segment length, and T_j^m is the current TGA spherical distribution.
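This sampling step can be sketched as follows. This is an illustrative simplification under assumed inputs: uniform sampling inside each TGA sphere stands in for sampling the quaternion distributions, and `sample_in_sphere`/`synthesize_chain` are hypothetical helper names, not from the paper.

```python
import numpy as np

def sample_in_sphere(center, radius, rng):
    """Uniformly sample a point inside a sphere (a simplified stand-in
    for sampling the TGA spherical distribution T_j^m)."""
    while True:
        p = rng.uniform(-1.0, 1.0, 3)
        if np.dot(p, p) <= 1.0:
            return np.asarray(center, float) + radius * p

def synthesize_chain(anchor, lengths, spheres, rng, max_tries=1000):
    """Build one kinematic chain of a key-pose following eq. 1:
    p_j = p_{j-1} + l_{j-1,j} * v'_j, accepting only p_j inside T_j^m.
    `spheres` is a list of (center, radius) pairs, one per joint."""
    positions = [np.asarray(anchor, float)]
    for (center, radius), length in zip(spheres, lengths):
        center = np.asarray(center, float)
        for _ in range(max_tries):
            # draw a candidate direction towards a point in the sphere
            target = sample_in_sphere(center, radius, rng)
            v = target - positions[-1]
            v /= np.linalg.norm(v)            # random unit vector v'_j
            p = positions[-1] + length * v    # fixed segment length
            # accept only if the joint lands inside its distribution
            if np.linalg.norm(p - center) <= radius:
                positions.append(p)
                break
        else:
            raise RuntimeError("could not satisfy the TGA constraint")
    return positions
```

Rejection sampling keeps the fixed body geometry (segment lengths) while respecting each joint's spherical distribution, mirroring the constraint p_j^m ∈ T_j^m of eq. 1.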

Morphological Adaptation In order to obtain a representation of the key-poses in the target humanoid morphology, the point-cloud representation of the dance must be adapted, maintaining the spatial relationship and expression of the poses across all the represented metrical classes. To achieve this, we must look at the target morphology in terms of size, joints' degrees of freedom and other kinematic and physical constraints of the target. Prior to the actual humanoid key-pose generation, the length of each body segment must be changed to that of the target body model. For each joint j, l_{j-1,j} is the length of the segment that connects j-1 to j, and D_j^m is the spherical distribution with radius r_j^m and center o_j^m. The distance from o_{j-1}^m to o_j^m is d_{j-1,j}^m, and the direction vector from o_{j-1}^m to o_j^m is v_{o,j-1,j}^m. In order to change the segment length from l_{j-1,j} to l'_{j-1,j}:

    redim = l'_{j-1,j} / l_{j-1,j}
    d'_{j-1,j}^m = d_{j-1,j}^m * redim
    r'_j^m = r_j^m * redim
    o'_j^m = o_{j-1}^m + d'_{j-1,j}^m * v_{o,j-1,j}^m    (2)

where d'_{j-1,j}^m is the new distance from o_{j-1}^m to o_j^m, and r'_j^m is the adapted radius of the spherical distribution D_j^m, with o'_j^m as its new center point. The translation from o_j^m to o'_j^m is then applied to all the following joint centers in the considered kinematic chain. This method allows resizing any body segment by manipulating the spherical distributions of the movement representation according to the target segment lengths. Only the anchor sphere of the body model is neither resized nor moved. The relation between the segment length and the radius was considered linear, as expressed by redim in eq. 2. The method only performs a translation of the spherical distribution centers, maintaining the relation between them; the change in the spherical distribution radius is regarded as an adaptation of the segment reach.
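The per-segment scaling and propagated center translation of eq. 2 can be sketched as below. This is a minimal illustration that assumes, for simplicity, that the original segment lengths equal the distances between consecutive sphere centers; the function and variable names are ours, not the paper's.

```python
import numpy as np

def resize_chain(centers, radii, new_lengths):
    """Resize the spherical distributions of one kinematic chain (eq. 2).

    centers[0] is the anchor sphere, which is neither resized nor moved;
    new_lengths[k] is the target length l' of the segment ending at
    joint k+1.  Each radius scales by the same linear factor `redim`,
    and each center's translation propagates down the chain."""
    centers = [np.asarray(c, float) for c in centers]
    new_centers, new_radii = [centers[0]], [radii[0]]
    for k in range(1, len(centers)):
        d_vec = centers[k] - centers[k - 1]     # o_{j-1} -> o_j
        d = np.linalg.norm(d_vec)               # d_{j-1,j}
        redim = new_lengths[k - 1] / d          # l'/l (l taken as d here)
        # o'_j = o'_{j-1} + d' * v_o, with d' = d * redim
        new_centers.append(new_centers[-1] + redim * d_vec)
        new_radii.append(radii[k] * redim)      # r'_j = r_j * redim
    return new_centers, new_radii
```

Because each new center is computed from the previous *new* center, the translation of one sphere automatically carries over to all following spheres in the chain, as the text describes.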

In order to obtain the correct body segment lengths, it may be necessary to remove joints that the target humanoid body does not have. To choose a point in D_j^m and D_{j+2}^m while eliminating D_{j+1}^m, we calculate the intersection, in the form of the spherical cap C_j^m, of the spherical distribution of the target joint, D_{j+2}^m, with a sphere S_j^m centered at the position of the previous joint, p_j^m, and with radius equal to l_{j,j+1} + l_{j+1,j+2}. If C_j^m is an empty set, the center of D_{j+2}^m is translated in the direction of the vector from p_j^m to o_{j+2}^m, increasing or decreasing the distance from p_j^m to o_{j+2}^m until an intersection between D_{j+2}^m and S_j^m is obtained. Finally, we search for the point in C_j^m that, when connected to p_j^m, is closest to the sphere center of the eliminated joint, o_{j+1}^m. In the special case where p_j^m is the anchor point of the model (and the first point to be determined in the model), we can also move p_j^m, keeping it inside D_j^m, in the direction of the vector from p_j^m to o_{j+2}^m to enable the intersection, and then move p_j^m closer to p_{j+1}^m to allow a better fit of the segment that will be traced.
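The intersection test at the heart of this joint-elimination step can be sketched as follows. This is an illustrative stand-in with hypothetical names; it only handles the translation of the distribution center so that the cap becomes non-empty, not the final search for a point inside the cap.

```python
import numpy as np

def ensure_cap_nonempty(p_j, o_target, r, reach):
    """Make sure the spherical cap C_j = S_j ∩ D_{j+2} is non-empty.

    S_j is centered at p_j with radius reach = l_{j,j+1} + l_{j+1,j+2};
    D_{j+2} is centered at o_target with radius r.  Two sphere surfaces
    intersect iff |reach - r| <= |o - p_j| <= reach + r.  If they do
    not, translate o_target along the p_j -> o direction until the
    surfaces at least touch, and return the (possibly moved) center."""
    p_j = np.asarray(p_j, float)
    o = np.asarray(o_target, float)
    d = np.linalg.norm(o - p_j)
    u = (o - p_j) / d                      # direction from p_j to o
    if abs(reach - r) <= d <= reach + r:
        return o                           # cap already non-empty
    if d > reach + r:
        return p_j + (reach + r) * u       # too far: pull closer
    return p_j + abs(reach - r) * u        # one inside the other: push out
```

The returned center moves the minimum amount along the p_j → o direction, matching the text's "increasing or decreasing the distance" until the two distributions intersect.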

Another problem faced in the motion retargeting was the need to ensure that certain body segments are collinear. In order to ensure that the segment from j to j' is collinear with the segment from j2 to j2', we first generate, for the segment from j to j', the random quaternion and its corresponding direction vector v_{j,j'}^m, and try to apply it to the other segment involved:

    (p_j^m + v_{j,j'}^m * l_{j,j'}) ∈ D_{j'}^m    (3)

    (p_j^m + v_{j,j'}^m * l_{j,j'}) ∈ D_{j'}^m  ∧  (p_{j2}^m + v_{j,j'}^m * l_{j2,j2'}) ∈ D_{j2'}^m    (4)

Condition 3 is the constraint applied to accept a generated point in [10]; the algorithm tries to ensure this condition without exceeding a maximum number of tries. To use symmetric vectors in both segments, condition 4 must be ensured instead. If the method cannot generate a vector that satisfies condition 4, it generates a vector that satisfies condition 3 and calculates a new point from it:

    pf_{j2'}^m = p_{j2}^m + v_{j,j'}^m * l_{j2,j2'}    (5)

This new point will generally be outside the spherical distribution of j2', D_{j2'}^m, so the translation from pf_{j2'}^m to the center of D_{j2'}^m is assumed as the translation that must be applied to all the following spherical distribution centers in the kinematic chain. This translation ensures that the remaining centers still maintain their spatial relation and that, up to the extremity of the chain, the pose remains similar to the original.


Key-Poses Retargeting To generate the actual robot joint angles from the representation obtained by the previous methods, we applied a motion retargeting technique based on [13], extracting the Euler angles of each joint, in the three dimensions, based on a body-centered axis system.

Fig. 1. Considered body models: (left) motion capture original; (right) humanoid robot NAO.

The starting point of this method is to determine a local coordinate system in the chest of the previously resized and adapted body model:

    y = p_LSH − p_RSH
    z_tmp = p_CSH − p_SHIP
    x = y × z_tmp
    z = x × y
    R_axis = [norm(x), norm(y), norm(z)],    (6)

where norm(X) = X / |X| and × is the cross product between two vectors.

Now, for each vector in the global coordinate system, we calculate the corresponding vector in the local coordinate system and extract the angles about each required axis. This calculation is specific to each segment of the considered humanoid morphology, due to the presence of singularities of different natures. For the left arm we start at the upper-arm segment LELB by extracting the angle about the y axis (which moves the arm forwards or backwards), and retarget it to the pitch rotation of the robot's left shoulder, LShoulderPitch:

    v0 = p_LELB − p_LSH
    v1 = R_axis^T × v0
    LShoulderPitch = atan2(v1_z, v1_x).    (7)


The rotation of the left shoulder about the z axis (which moves the arm left or right) is extracted in the same way, by applying the previously calculated y rotation to the previous vector v1:

    v2 = R_y(LShoulderPitch) × v1
    LShoulderRoll = atan2(v2_y, v2_x),    (8)

where R_y(LShoulderPitch) is the rotation matrix about y by LShoulderPitch degrees.

We then apply the same method to extract the angles for the left elbow:

    v3t = p_LHND − p_LELB
    v3 = R_z(LShoulderRoll) × R_y(LShoulderPitch) × R_axis^T × v3t
    LElbowRoll = atan2(v3_z, −v3_y)
    v4 = R_x(LElbowRoll) × v3
    LElbowYaw = atan2(v4_y, v4_x).    (9)
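Eqs. 6–8 can be sketched numerically as below, assuming illustrative marker positions and working in radians. The marker labels (LSH, RSH, CSH, SHIP, LELB) follow the paper, but the implementation is our reading of the formulas, not the authors' code.

```python
import numpy as np

def chest_frame(p_lsh, p_rsh, p_csh, p_ship):
    """Local chest coordinate system R_axis of eq. 6.
    LSH/RSH = left/right shoulder, CSH = chest, SHIP = spine/hip."""
    y = np.asarray(p_lsh, float) - np.asarray(p_rsh, float)
    z_tmp = np.asarray(p_csh, float) - np.asarray(p_ship, float)
    x = np.cross(y, z_tmp)
    z = np.cross(x, y)
    norm = lambda v: v / np.linalg.norm(v)
    return np.column_stack([norm(x), norm(y), norm(z)])   # R_axis

def rot_y(a):
    """Rotation matrix about the y axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def left_shoulder_angles(p_lelb, p_lsh, R_axis):
    """LShoulderPitch and LShoulderRoll via eqs. 7-8."""
    v0 = np.asarray(p_lelb, float) - np.asarray(p_lsh, float)
    v1 = R_axis.T @ v0                   # express v0 in the chest frame
    pitch = np.arctan2(v1[2], v1[0])     # rotation about y (front/back)
    v2 = rot_y(pitch) @ v1               # remove the pitch component
    roll = np.arctan2(v2[1], v2[0])      # rotation about z (left/right)
    return pitch, roll
```

With the chest frame aligned to the global axes, an arm hanging straight down yields a pitch of −π/2 and zero roll, while an arm raised sideways yields zero pitch and a roll of π/2, matching the axis semantics described in the text.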

The extraction of the leg angles uses the same local coordinate system, R_axis, starting with the angles for the hip joints and proceeding to the knee and foot joints:

    v5t = p_LHIP − p_LKNEE
    v5 = R_axis^T × v5t    (10)

    RHipRoll = atan2(v5_z, −v5_y)
    v6 = R_x(RHipRoll) × v5    (11)

    RHipYawPitch = atan2(v6_y, v6_x)
    v7 = R_z(RHipYawPitch) × v6    (12)

    RHipPitch = atan2(v7_x, −v7_z)
    v8t = p_LKNEE − p_LANK
    v8 = R_y(RHipPitch) × R_z(RHipYawPitch) × R_x(RHipRoll) × R_axis^T × v8t    (13)

    RKneePitch = atan2(v8_x, −v8_z)
    v9t = p_LANK − p_LFOOT
    v9 = R_y(RKneePitch) × R_y(RHipPitch) × R_z(RHipYawPitch) × R_x(RHipRoll) × R_axis^T × v9t    (14)

    RAnkleP itch = atan2(−v9_z, v9_x)    (15)

The angle RHipYawPitch is not a simple rotation about the z axis: it is a joint shared with the right side, so it does not allow easily giving the desired orientation to the legs. Since the calculated angle is about the z axis, while the joint actually moves about z rotated by 45°, we opted to use the value closest to 0 between the calculated RHipYawPitch and LHipYawPitch. Finally, the robot presents another joint, RAnkleRoll, that was not determined in these calculations and was considered important only for the maintenance of balance, which is outside the scope of this paper. The joint angles for the right arm and leg are calculated in a similar way.
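The closest-to-zero selection for the shared joint can be stated in a couple of lines (a hypothetical helper, not the authors' code):

```python
def shared_hip_yaw_pitch(left_candidate, right_candidate):
    """NAO's HipYawPitch is a single physical joint shared by both legs,
    so only one of the two computed hip yaw values can be applied.
    Following the heuristic described above, keep the candidate whose
    magnitude is smallest, i.e. the value closest to 0."""
    if abs(left_candidate) <= abs(right_candidate):
        return left_candidate
    return right_candidate
```

As the discussion in Section 4 notes, this heuristic works well when at least one candidate is small, but introduces error when both computed values are large.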


4 Evaluation and Results

4.1 Key-poses Comparison

The following images present a comparison between the generated poses and their representation in the NAO humanoid (see figures 2 and 3).

Fig. 2. Comparison of pose 1 (two left images) and pose 3 (two right images). In each set, the left image is the pose generated by the method and the right image is the pose reproduced on the robot.

Fig. 3. Comparison of pose 5 (two left images) and pose 6 (two right images).

4.2 Level of Similarity

In order to compare similarity, we created a distance-based measure that evaluates how close the generated pose points are to the spatial positions of the same body parts on the robot. For each pair of joints, i and j, we determine the distance between them on the robot, d_robot_{i,j}, and the expected distance in the pose, d_pose_{i,j}. We then evaluate the similarity of the pose presented by the robot versus the expected pose as the sum of the differences between these two distances:


    Sim_{a,b} = Σ_{j=1}^{n} Σ_{i=j}^{n} | d_pose_{i,j} − d_robot_{i,j} |    (16)
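Eq. 16 can be sketched directly (the function name is ours; joints are given as ordered 3D point lists, so the same index refers to the same body part on both models):

```python
import numpy as np

def pose_similarity(pose_pts, robot_pts):
    """Similarity measure of eq. 16: sum over all joint pairs (i, j) of
    |d_pose_{i,j} - d_robot_{i,j}|, where d_{i,j} is the Euclidean
    distance between joints i and j.  Lower values mean more similar;
    0 means all pairwise distances match exactly."""
    pose = np.asarray(pose_pts, float)
    robot = np.asarray(robot_pts, float)
    total = 0.0
    n = len(pose)
    for j in range(n):
        for i in range(j, n):              # i = j terms contribute 0
            d_pose = np.linalg.norm(pose[i] - pose[j])
            d_robot = np.linalg.norm(robot[i] - robot[j])
            total += abs(d_pose - d_robot)
    return total
```

Because the measure compares pairwise distances rather than absolute positions, it is invariant to rigid translation of either body, but, as the text notes next, not to differences in overall body size, which motivates the adjustment factor of eq. 17.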

Using this measure, we calculated the similarity between the poses with morphological adaptation, without morphological adaptation, with resizing of the body, without resizing, and finally for poses generated using only the centers of the spherical distributions of each joint (see Table 1). Since the bodies being compared have different sizes, the evaluation yields higher values for larger bodies; in order to have a comparable measure, we considered the height, h_b, and the wingspan, w_b, of the different bodies and created an adjustment factor:

    f = (h_b * w_b) / (h'_b * w'_b)    (17)

With this factor we aim to compare the similarity of the reproduction of the poses from a resized body and from a normal human body. The factor approximates the linear relation between the maximum area covered by the different morphologies. Using both the evaluation function and the adjustment factor, we calculated the similarity between the generated key-poses and the key-poses reproduced on the humanoid. The similarity was calculated for the upper body (mainly the arms) and for the legs, which allows observing whether the algorithm generates better angles for one part of the body. A third value considering the whole body was also calculated (presented in Table 1).

Table 1. With Resizing and Morphological Adaptation (left); no method applied (right)

    Pose   Arm      Leg      Total        Pose   Arm      Leg      Total
    1      459.566  666.27   2742.05      1      2768.45  3716.05  21657.68
    2      373.065  1038.88  2742.05      2      2808.8   3499.0   21019.6
    3      175.81   1038.88  4645.39      3      2766.87  3529.73  21116.14
    4      437.592  987.604  3887.03      4      2837.17  3723.22  21116.14
    5      584.048  583.348  3093.91      5      2689.28  3633.93  21871.87
    6      452.228  734.822  2974.51      6      2807.12  3423.22  22211.47
    7      326.499  550.647  1904.72      7      2579.38  3756.02  20365.76
    8      276.419  599.648  2093.8       8      2713.37  3766.74  205648.66

In order to compare the values in both tables, the adjustment factor was calculated as 2.602, and all the values of the evaluation without any resizing were divided by it.

4.3 Discussion

Both the figures presented and the numerical evaluation of the similarity show good results for the arm posture, and somewhat worse results for the legs. This difference is mostly due to problems in the generation of the angles for the hip of the robot. In this area there are 3 different joints that work about 3 different axes: two of them control the rotation about x and y respectively, while the third controls the movement about a z axis rotated by 45° and is shared by both legs, which makes the determination of its rotation a problem. We tried to minimize the value given to this joint, as explained, but in cases where both candidate values are large, the joint induces too much error in the pose. The principal error in the poses lies in that section, leading to smaller but still present errors in the remaining joints of the leg.

The numerical evaluation of the similarity shows that there is an advantage in performing the morphological adaptation, with better similarity results for the poses that underwent the adaptation process. Even so, the results of this evaluation have some error associated with them, since the measurements on the robot present variations in the positions even when the body is still, and the adjustment factor used may not provide the best relation for comparing the adapted and non-adapted poses.

5 Conclusions and Future Work

In this study we proposed and evaluated methods for resizing a robot model and retargeting the joint trajectories. The process starts from information extracted from the dance motion analysis, then generates random joint positions that satisfy the desired model, and finally the key-poses are extracted and reproduced in the target humanoid. The overall results seem to show that the methods used for resizing and retargeting work, although they need further evaluation with a different method and application to other dance motions. The angle extraction method appears to be valid and obtains good results for the upper body, but still only offers an approximate pose for the legs; a revision of the extraction of the hip angles may prove to be an improvement to the method. To improve the similarity further, the evaluation function may be used as the objective function in an optimization process. Finally, in order to obtain stable movements, motion refinement should be performed, ensuring balance and avoiding collisions.

References

1. David Gouaillier, Vincent Hugel, Pierre Blazevic, Chris Kilner, Jerome Monceaux, Brice Marnier, and Julien Serre. The NAO humanoid: a combination of performance and affordability. Computing Research Repository (CoRR), abs/0807.3:1–10, 2008.

2. Seungsu Kim, ChangHwan Kim, Bumjae You, and Sangrok Oh. Stable Whole-Body Motion Generation for Humanoid Robots to Imitate Human Motions. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2518–2524, St. Louis, MO, USA, October 2009. IEEE.

3. Tae-hoon Kim, Sang Il Park, and Sung Yong Shin. Rhythmic-Motion Synthesis based on Motion-Beat Analysis. ACM Transactions on Graphics, 22(3):392, July 2003.

4. Noriko Nagata, Kazutaka Okumoto, Daisuke Iwai, Felipe Toro, and Seiji Inokuchi. Analysis and Synthesis of Latin Dance Using Motion Capture Data. Advances in Multimedia Information Processing, 3333:39–44, 2005.

5. Shin'ichiro Nakaoka, Atsushi Nakazawa, Kazuhito Yokoi, Hirohisa Hirukawa, and Katsushi Ikeuchi. Generating Whole Body Motions for a Biped Humanoid Robot from Captured Human Dances. In IEEE International Conference on Robotics and Automation (ICRA), pages 3905–3910 vol. 3. IEEE, 2003.

6. Atsushi Nakazawa, Shinichiro Nakaoka, Katsushi Ikeuchi, and Kazuhito Yokoi. Imitating human dance motions through motion structure analysis. In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), pages 2539–2544, 2002.

7. Luiz Naveda and Marc Leman. The spatiotemporal representation of dance and music gestures using Topological Gesture Analysis (TGA). Music Perception, 28(1):93–111, 2010.

8. Takahiro Okamoto, Takaaki Shiratori, Shunsuke Kudoh, and Katsushi Ikeuchi. Temporal Scaling of Leg Motion for Music Feedback System of a Dancing Humanoid Robot. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2256–2263, Taipei, Taiwan, 2010.

9. Removed for blind review.

10. Removed for blind review.

11. Miti Ruchanurucks, Shin'ichiro Nakaoka, Shunsuke Kudoh, and Katsushi Ikeuchi. Generation of Humanoid Robot Motions with Physical Constraints using Hierarchical B-Spline. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 674–679. IEEE, 2005.

12. Miti Ruchanurucks, Shin'ichiro Nakaoka, Shunsuke Kudoh, and Katsushi Ikeuchi. Humanoid Robot Motion Generation with Sequential Physical Constraints. In IEEE International Conference on Robotics and Automation (ICRA), pages 2649–2654, Orlando, FL, USA, 2006. IEEE.

13. Takaaki Shiratori. Synthesis of Dance Performance Based on Analyses of Human Motion and Music. PhD thesis, University of Tokyo, 2006.

14. Takaaki Shiratori, Shunsuke Kudoh, Shin'ichiro Nakaoka, and Katsushi Ikeuchi. Temporal Scaling of Upper Body Motion for Sound Feedback System of a Dancing Humanoid Robot. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3251–3257, San Diego, CA, USA, October 2007. IEEE.

15. Takaaki Shiratori, Atsushi Nakazawa, and Katsushi Ikeuchi. Dancing-to-Music Character Animation. In EUROGRAPHICS, volume 25, pages 449–458, September 2006.

16. Takaaki Shiratori, Atsushi Nakazawa, and Katsushi Ikeuchi. Synthesizing Dance Performance using Musical and Motion Features. In IEEE International Conference on Robotics and Automation, pages 3654–3659. IEEE, 2006.

17. Xiaojun Zhao, Qiang Huang, Peng Du, Dongming Wen, and Kejie Li. Humanoid Kinematics Mapping and Similarity Evaluation Based on Human Motion Capture. In International Conference on Information Acquisition (ICIA), pages 426–431. IEEE, 2004.

