
LIDAR-based Body Orientation Estimation by Integrating Shape and Motion Information

Masanobu Shimizu†, Kenji Koide†, Igi Ardiyanto‡, Jun Miura†, and Shuji Oishi†

†Department of Computer Science and Engineering, Toyohashi University of Technology

‡Department of Electrical Engineering and Information Technology, Faculty of Engineering, Universitas Gadjah Mada

Abstract— Body orientation gives useful information for assessing a human's state and/or predicting his/her future actions. This paper presents a method of reliably estimating human body orientation using a LIDAR on a mobile robot by integrating shape and motion information. A shape database is constructed by collecting body section shape data from various viewpoints. The result of matching an input shape against the database is combined with a UKF-based tracker that utilizes the relationship between the body orientation and the motion direction. The experimental results show the effectiveness of the proposed method.

I. INTRODUCTION

There is an increasing demand for personal service robots that attend people and provide use-specific services. Such a robot has to have various functions such as person detection and identification. There are many works on human detection and tracking using LIDAR (Laser Imaging Detection and Ranging) [1], [2], images [3], [4], or depth cameras [5], [6]. These works are mainly concerned with human position and motion. In addition to these kinds of information, human body orientation could provide additional cues such as a possible direction of movement. Therefore, this paper deals with body orientation estimation using LIDAR for mobile robots.

Glas et al. [7] developed a method of estimating body orientation using multiple LIDARs observing a human from various directions. They use a person model with arms and estimate the pose by combining model-based shape prediction with laser scan data. Matsumoto et al. [8] proposed to use sectional contours, obtained from multiple LIDARs, for determining human posture such as standing, sitting, and bowing. Since the LIDARs in these works are supposed to be fixed at different positions, these methods are not directly applicable to mobile robots. Weinrich et al. [9] developed a method of classifying human upper body orientation using HOG [3] and an SVM decision tree. The resolution of image-based orientation estimation is usually not as high as that of LIDAR-based methods.

When a person walks, his/her moving direction and body orientation are correlated to some extent, and this correlation generally becomes stronger at higher speeds. Chen et al. [10] used this observation in combined person tracking and body orientation estimation using a particle filter in visual surveillance. Ardiyanto and Miura [11] dealt with a similar problem by appearance-motion integration using a UKF (unscented Kalman filter). Svenstrup et al. [12] used the observation that a person walking at a higher speed is less likely to change his/her orientation to make the orientation estimate more accurate, although they did not use appearance or shape information.

In this paper, we propose a method of estimating human body orientation by integrating shape information obtained by a LIDAR and motion information obtained by a UKF (unscented Kalman filter) tracker. The shape-motion integration part is based on our previous method [11].

The rest of the paper is organized as follows. Sec. II describes shape-based orientation estimation using a 2D LIDAR. Sec. III describes our UKF formulation for shape-motion integration for more reliable body orientation estimation. Sec. IV shows experiments for accuracy validation and experiments in a human tracking situation. Sec. V concludes the paper and discusses future work.

II. SHAPE-BASED BODY ORIENTATION ESTIMATION

A. Body shape models

We use the visible part of the section of a human body observed by a 2D LIDAR for orientation estimation. We make a set of such data in advance as model data. The set is generated by observing a human body at a certain height from omnidirectional viewpoints. Supposing a robot is following an adult, we put a LIDAR at a height of 85 cm and at a 2 m distance. We take data at 36 viewpoints with a 10 deg interval. We compare input shape data with those in the database to estimate the body orientation.

We developed a data acquisition system composed of a motorized turntable with visual angular position estimation. The error of the turntable angle measurement is less than one degree.

B. Body orientation estimation by shape matching

The first step of body orientation estimation is to match an input shape with those in the model database. All models and input data are represented as binary images. Fig. 1(a) shows a scene of obtaining a scan and its image representation, in which one pixel corresponds to a 1 cm × 1 cm square region.
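To make this representation concrete, the following minimal sketch (our illustration, not code from the paper) rasterizes a segmented body scan into such a binary image; the image size, the centering on the points' mean, and the function name are our assumptions.

```python
import numpy as np

def scan_to_binary_image(ranges, angles, resolution=0.01, size=64):
    """Rasterize a segmented 2D LIDAR scan (body points) into a binary image.

    ranges/angles: polar measurements of the points belonging to the body.
    resolution: 0.01 m per pixel (1 cm x 1 cm squares, as in the paper).
    size: side length of the image in pixels (chosen here for illustration).
    """
    ranges = np.asarray(ranges, dtype=float)
    angles = np.asarray(angles, dtype=float)

    # Polar -> Cartesian coordinates in the sensor frame.
    xs = ranges * np.cos(angles)
    ys = ranges * np.sin(angles)

    # Center the points so the image is roughly centered on the body.
    xs -= xs.mean()
    ys -= ys.mean()

    # Quantize to pixel indices and mark occupied cells.
    img = np.zeros((size, size), dtype=np.uint8)
    cols = np.clip((xs / resolution + size // 2).astype(int), 0, size - 1)
    rows = np.clip((ys / resolution + size // 2).astype(int), 0, size - 1)
    img[rows, cols] = 1
    return img
```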

An input image is compared with a set of images in the database. Since each model image is specific to a certain

Proceedings of the 2016 IEEE International Conference on Robotics and Biomimetics, Qingdao, China, December 3-7, 2016. 978-1-5090-4363-7/16/$31.00 ©2016 IEEE.


Fig. 1. Shape model acquisition and representation. (a) Observation of body shape (green line) and its binary image representation. (b) Model image after applying the distance transform.

viewing direction, only translation is considered in matching the input image and each model image. We use a simplified matching measure, namely the sum of two distances: d_m, the average distance from each input edge point to its nearest model edge point, and d_i, the average distance from each model edge point to its nearest input edge point.

After aligning the input and the model image at their center-of-mass positions, we search a certain region (currently, a 7×7 pixel region) for the displacement giving the minimum distance. This can be viewed as a simplified version of the Hausdorff distance-based edge matching strategy [13].
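As a rough sketch of this matching measure, the code below computes the sum d_m + d_i using SciPy's Euclidean distance transform and searches a 7×7 displacement window; the use of np.roll (which wraps around the image border) and the assumption of non-empty edge images are our simplifications, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def symmetric_edge_distance(input_img, model_img, search=3):
    """Average nearest-edge distance (d_m + d_i) between two binary edge
    images, minimized over small translations, used as a matching score.

    search=3 gives a 7x7 displacement window; center-of-mass alignment
    is assumed to have been done by the caller.
    """
    # Distance from every pixel to the nearest edge pixel of each image.
    dist_to_model = distance_transform_edt(model_img == 0)
    dist_to_input = distance_transform_edt(input_img == 0)

    best = np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted_input = np.roll(input_img, (dy, dx), axis=(0, 1))
            shifted_dist = np.roll(dist_to_input, (dy, dx), axis=(0, 1))
            d_m = dist_to_model[shifted_input > 0].mean()  # input -> model
            d_i = shifted_dist[model_img > 0].mean()       # model -> input
            best = min(best, d_m + d_i)
    return best
```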

One way to determine the body orientation is to choose the model image with the lowest distance and thus the corresponding viewing angle. This, however, has two drawbacks: the estimated angle becomes discrete, and the estimate is sensitive to noise. We therefore introduce a weighting function which converts a distance to a weight and calculate the weighted mean as the estimate. We also calculate the variance of the estimate; such variance information is necessary for use in a statistical framework like the Kalman filter. We use the following equations to calculate the estimated orientation \theta and its variance \sigma_\theta^2:

w_i = e^{-\kappa (d_m + d_i)},   (1)

\theta = \frac{\sum_{i=1}^{N} w_i \, i}{\sum_{i=1}^{N} w_i},   (2)

\sigma_\theta^2 = \frac{\sum_{i=1}^{N} w_i (i - \theta)^2}{\sum_{i=1}^{N} w_i},   (3)

where i is a discrete orientation (i = 1, ..., N; N = 36) and w_i is the weight for i. Fig. 2 shows an example histogram of normalized weights for an observation. Note that the weights are calculated only for orientations within the ±40 deg range from the previous orientation. At the same time, the periodicity of angles is taken into account by computing a circular mean from directional statistics.
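For concreteness, a small sketch of eqs. (1)-(3) with the circular-mean handling mentioned above is given below; the value of κ and the mapping from index i to an angle in degrees are placeholders of ours.

```python
import numpy as np

def weighted_orientation_estimate(distances, kappa=1.0, step_deg=10.0):
    """Weighted circular mean and variance over discrete model orientations.

    distances: array of N matching distances d_m + d_i (one per model view).
    kappa, step_deg: weighting constant and angular spacing; the 10 deg
    spacing follows the paper, kappa's value here is only a placeholder.
    """
    distances = np.asarray(distances, dtype=float)
    weights = np.exp(-kappa * distances)                     # eq. (1)
    angles = np.deg2rad(np.arange(len(distances)) * step_deg)

    # Circular (directional) weighted mean, respecting angle periodicity.
    c = np.sum(weights * np.cos(angles)) / np.sum(weights)
    s = np.sum(weights * np.sin(angles)) / np.sum(weights)
    mean = np.rad2deg(np.arctan2(s, c)) % 360.0              # eq. (2)

    # Weighted variance of the wrapped residuals around the mean, eq. (3).
    residuals = (np.rad2deg(angles) - mean + 180.0) % 360.0 - 180.0
    var = np.sum(weights * residuals**2) / np.sum(weights)
    return mean, var
```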

C. Evaluation of body orientation estimation using one scan of LIDAR data

We here evaluate the accuracy of shape-based body orientation estimation by considering several factors which may affect the accuracy, namely, distance, body size, and clothing.

Fig. 2. An example histogram of normalized weights (horizontal axis: orientation in deg.; vertical axis: normalized weight).

Fig. 3. Comparison of LIDAR data for a person at different distances: (a) LIDAR scan at 2 m; (b) LIDAR scan at 5 m.

1) Effect of distance to the person: The number of measured 3D points decreases as the distance to the person becomes larger. We compare the accuracy when the person is at either 2 m or 5 m from the robot. Fig. 3 shows an example of LIDAR data for these distances.

For the person with whom the set of model data was acquired, we acquired a test data set for each orientation, ranging from 0 deg to 350 deg with a 10 deg interval (as in the model data acquisition), and performed scan-to-scan comparisons to estimate the body orientation. For each set of input LIDAR scans, the discrete orientation supported by the highest number of estimations is selected as the result. Considering that the estimation is sometimes reversed by 180 deg because the body shape looks very similar when observed from the front and the back, we classify the difference between the input and the model into five cases: 0 deg, ±10 deg, 180 deg, 180 ± 10 deg, and otherwise. Table I summarizes the results. Since we can use face recognition to discriminate between the 0 deg and 180 deg cases, as explained later, we can conclude that even in the 5 m case, the orientation error is expected to be mostly within 10 deg.

2) Effect of clothing: Changing clothing may change the body shape. We compared the three cases shown in Fig. 4. Using clothing A as the model and the others for testing, we estimate the mean and the maximum absolute error. Table II summarizes the results, showing that a clothing change can decrease the accuracy of the body orientation estimation; however, the proposed method still works well.

3) Effect of person-to-person difference: We took shape data for three persons with heights of 160 cm, 170 cm, and 180 cm. For each person, we collected two sets of shape data for all orientations, one for making a model and the other for testing. We then calculate the absolute estimation error for every combination of model person and test person. Table III summarizes the results, showing that the height difference between the model and the test data degrades the body


TABLE I
CLASSIFICATION OF ESTIMATION RESULTS FOR DIFFERENT DISTANCES TO THE PERSON.

difference (deg)  |  0 | ±10 | 180 | 180 ± 10 | others
person is at 2 m  | 27 |   7 |   2 |        0 |      0
person is at 5 m  | 16 |  10 |   6 |        3 |      1

Fig. 4. Three different clothing (A, B, and C).

orientation estimation. However, the table also includes the case where the averaged image of all model images is used as the model, which is shown to reduce the errors on average. That result is similar to the case where the subject wears different clothing.

III. SHAPE-MOTION INTEGRATION FOR BODY ORIENTATION ESTIMATION

A. UKF-based integration

Integration of shape and motion is performed by introducing body orientation terms and information integration rules into a human tracker. The tracker uses the Unscented Kalman Filter (UKF) [14], which is a non-linear statistical filter known to be more accurate in many cases than the Extended Kalman filter. Another advantage of the UKF is that it does not require Jacobians, which are sometimes hard to derive.

B. Formulation

We basically follow the formulation by Ardiyanto and Miura [11] but modify it so that the robot-centered coordinate system is used.

1) State transition equation: The state vector x_t is defined by:

x_t = (x_t, y_t, \dot{x}_t, \dot{y}_t, \phi^M_t, \phi^S_t, \theta_t)^T,   (4)

where (x_t, y_t) and (\dot{x}_t, \dot{y}_t) are the position and the velocity of the person in the robot's local coordinates, \phi^M_t and \phi^S_t are the orientations estimated from the motion information and the shape information, respectively, and \theta_t is the orientation estimation result that integrates \phi^M_t and \phi^S_t as described later.

Fig. 5 shows the relationships between the robot and the person positions and the robot motion. Supposing a constant velocity model, the person position/velocity part of the state

TABLE II
THE MEAN AND THE MAXIMUM ESTIMATION ERROR FOR DIFFERENT CLOTHING WHEN CLOTHING A IS USED AS MODEL.

clothing for testing          | clothing B | clothing C
mean absolute error (deg)     |       3.21 |       5.96
maximum absolute error (deg)  |       8.21 |       17.0

TABLE III
COMPARISON OF ERRORS (IN DEG.) FOR VARIOUS COMBINATIONS OF MODEL AND TEST PERSON.

model person | test person | averaged error | maximum error
A            | A           |           2.96 |          5.98
A            | B           |           3.98 |          8.48
A            | C           |           8.32 |          15.1
B            | A           |           4.26 |          10.7
B            | B           |           1.70 |          3.35
B            | C           |           6.09 |          15.3
C            | A           |           6.66 |          17.9
C            | B           |           3.95 |          7.52
C            | C           |           2.86 |          6.41
Averaged     | A           |           3.33 |          10.4
Averaged     | B           |           2.32 |          4.41
Averaged     | C           |           5.16 |          12.3

equation is given by:

\hat{x}_{t+1} = x_t + \dot{x}_t \Delta t,
\hat{y}_{t+1} = y_t + \dot{y}_t \Delta t,   (5)
\hat{\dot{x}}_{t+1} = \dot{x}_t,
\hat{\dot{y}}_{t+1} = \dot{y}_t,

where \Delta t is the cycle time and \hat{\cdot} indicates the position/velocity in the robot coordinates at time t. Then, applying the coordinate transformation due to the robot motion gives the following relationship:

x_{t+1} = (\hat{x}_{t+1} - \Delta x)\cos\Delta\theta + (\hat{y}_{t+1} - \Delta y)\sin\Delta\theta,
y_{t+1} = -(\hat{x}_{t+1} - \Delta x)\sin\Delta\theta + (\hat{y}_{t+1} - \Delta y)\cos\Delta\theta,   (6)
\dot{x}_{t+1} = \hat{\dot{x}}_{t+1}\cos\Delta\theta + \hat{\dot{y}}_{t+1}\sin\Delta\theta,
\dot{y}_{t+1} = -\hat{\dot{x}}_{t+1}\sin\Delta\theta + \hat{\dot{y}}_{t+1}\cos\Delta\theta.

Combining eqs. (5) and (6) with additional noise terms, we obtain:

x_{t+1} = (x_t + \dot{x}_t \Delta t - \Delta x)\cos\Delta\theta + (y_t + \dot{y}_t \Delta t - \Delta y)\sin\Delta\theta + \varepsilon_x,
y_{t+1} = -(x_t + \dot{x}_t \Delta t - \Delta x)\sin\Delta\theta + (y_t + \dot{y}_t \Delta t - \Delta y)\cos\Delta\theta + \varepsilon_y,   (7)
\dot{x}_{t+1} = \dot{x}_t\cos\Delta\theta + \dot{y}_t\sin\Delta\theta + \varepsilon_{\dot{x}},
\dot{y}_{t+1} = -\dot{x}_t\sin\Delta\theta + \dot{y}_t\cos\Delta\theta + \varepsilon_{\dot{y}}.

For orientation elements, we use the following equations:

\phi^M_{t+1} = \arctan\!\left(\frac{-\dot{x}_t\sin\Delta\theta + \dot{y}_t\cos\Delta\theta}{\dot{x}_t\cos\Delta\theta + \dot{y}_t\sin\Delta\theta}\right) + \varepsilon_{\phi^M}(v_t),

\phi^S_{t+1} = \phi^S_t - \Delta\theta + \varepsilon_{\phi^S},   (8)

\theta_{t+1} = \phi^S_t + \omega\,\frac{1 - e^{-v_t}}{1 + e^{-v_t}}\left(\phi^M_t - \phi^S_t\right) + \varepsilon_\theta,


Fig. 5. Robot and person motion and coordinate transformation (robot frames X_t Y_t and X_{t+1} Y_{t+1}, person positions (x_t, y_t) and (x_{t+1}, y_{t+1}), robot motion \Delta x, \Delta y, \Delta\theta).

where v_t is the speed of the person calculated from (\dot{x}_t, \dot{y}_t) and \omega is a constant. The last line of eq. (8) realizes the shape-motion integration; as the person's speed increases, the motion information is used more for orientation estimation. The constant \omega controls the influence of the speed on the orientation estimation. When \omega = 0, only shape information is used, while when \omega = 1, motion information is maximally utilized. We determine the value of \omega experimentally, as described later. \varepsilon_x, \varepsilon_y, \varepsilon_{\dot{x}}, \varepsilon_{\dot{y}}, \varepsilon_{\phi^S}, and \varepsilon_\theta are fixed Gaussian noise terms. \varepsilon_{\phi^M}(v_t) follows a zero-mean Gaussian with variance \sigma_v^2 e^{-v_t}; the error in the motion orientation prediction becomes smaller as the person's speed increases.
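The deterministic part of this state transition (eqs. (5)-(8), without the noise terms) could look roughly as follows; the variable names, the angle-wrapping helper, and the default ω = 0.3 (taken from the experiments in Sec. IV) are our choices for illustration.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def state_transition(state, dt, dx, dy, dtheta, omega=0.3):
    """Noise-free part of the UKF state transition, eqs. (5)-(8).

    state = [x, y, vx, vy, phi_M, phi_S, theta] in the robot frame at time t;
    (dx, dy, dtheta) is the robot motion between t and t+1.
    """
    x, y, vx, vy, phi_m, phi_s, theta = state
    c, s = np.cos(dtheta), np.sin(dtheta)

    # Constant-velocity prediction, then transform into the new robot frame.
    xp, yp = x + vx * dt - dx, y + vy * dt - dy
    x_new = xp * c + yp * s
    y_new = -xp * s + yp * c
    vx_new = vx * c + vy * s
    vy_new = -vx * s + vy * c

    # Motion-based orientation from the transformed velocity direction.
    phi_m_new = np.arctan2(vy_new, vx_new)
    # Shape-based orientation, compensated for the robot rotation.
    phi_s_new = wrap(phi_s - dtheta)

    # Speed-dependent blending of shape and motion orientations.
    speed = np.hypot(vx, vy)
    blend = omega * (1.0 - np.exp(-speed)) / (1.0 + np.exp(-speed))
    theta_new = wrap(phi_s + blend * wrap(phi_m - phi_s))

    return np.array([x_new, y_new, vx_new, vy_new,
                     phi_m_new, phi_s_new, theta_new])
```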

2) Observation equation: The robot measures the person position (x^L_t, y^L_t) and the body orientation \phi^L_t from LIDAR data; the observation vector y_t is given by:

y_t = (x^L_t, y^L_t, \phi^L_t)^T.   (9)

Then the observation equation is defined as:

y_t = (x_t + \varepsilon_{L_x},\; y_t + \varepsilon_{L_y},\; \phi^S_t + \varepsilon_{\phi^L})^T,   (10)

where \phi^S_t and the variance of \varepsilon_{\phi^L} are given by the weighted mean and the weighted variance explained in Sec. II-B.
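A minimal sketch of the corresponding observation function and its residual is shown below; wrapping the orientation residual is our assumption to keep the UKF update well behaved across the ±180 deg boundary, and is not spelled out in the paper.

```python
import numpy as np

def observation(state):
    """Observation model h(x) of eq. (10), without noise: the tracked
    position and the shape-based orientation are what the LIDAR observes."""
    x, y, _, _, _, phi_s, _ = state
    return np.array([x, y, phi_s])

def innovation(z, state):
    """Measurement residual z - h(x) with the orientation component wrapped
    to (-pi, pi] so the update does not jump across the angle boundary."""
    r = z - observation(state)
    r[2] = (r[2] + np.pi) % (2.0 * np.pi) - np.pi
    return r
```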

IV. EXPERIMENTAL RESULTS

A. Accuracy evaluation with motion capture device

Sec. II-C described the evaluation results for the case where one scan of LIDAR data is used, that is, for the shape-only estimation. To evaluate the effect of shape-motion integration, we used a motion capture device (VICON M2 camera system) for measuring the body orientation on-line.

Fig. 6 shows the experimental environment. Three markers are attached to the subject at waist level. From the positions of the markers, we can measure the body orientation as well as the body position. Four motion patterns were tested: stand still, turn, straight, and circular. Example motion measurements are shown in Fig. 7.

We first examined the effect of ω in eq. (8), which controls how shape and motion information are integrated, on the

Fig. 6. Accuracy evaluation environment with VICON.

Fig. 7. Example motion measurements: (a) straight movement; (b) circular movement.

estimation accuracy for the straight and circular motions. Fig. 8 summarizes the averaged error for various values of ω. Larger values of ω are effective for the straight motion while smaller values are effective for the circular one. Based on this result, we decided to use ω = 0.3.

Table IV compiles the accuracy data for all motion patterns. For the straight and the circular motions, we also compare the estimation with shape information only and that with shape-motion integration. Since these experiments were done in an environment different from the one used for generating the shape models, the averaged error for "stand still" is regarded as an offset between the environments and subtracted from the averaged errors of the other motion patterns. The results show that shape-motion integration is effective especially when a person walks relatively straight.

B. On-line tracking and orientation estimation experiments

1) Implementation on the robot: We implemented the proposed method on a real mobile robot, GRACE-I. The robot can detect and follow a target person while estimating his/her body orientation. Fig. 9 shows the configuration of processing steps.

GRACE-I is equipped with four LIDARs (Hokuyo UTM-30LX) at heights of 30 cm and 85 cm so that 360 deg scans at two levels are obtained. These two heights are chosen to correspond to the body and the feet positions. The robot also has cameras for person identification based on face, clothing, and other appearance features.

We measured the actual processing time in the case where


Fig. 8. Effect of ω on the body orientation estimation accuracy (averaged error [deg.] vs. weight ω, for the straight and circular motions).

TABLE IV
ERRORS IN VARIOUS MOTION PATTERNS (IN DEG.).

motion pattern                      | averaged error | std. dev.
stand still                         |           6.34 |      0.83
turn                                |           7.40 |      4.89
forward, shape only                 |           8.70 |      7.01
forward, shape-motion integration   |           7.28 |      5.90
circular, shape only                |           12.2 |      8.88
circular, shape-motion integration  |           11.2 |      8.74

three to four persons exist in the environment. The average processing time for one frame was 37.6 msec, and that for the shape matching and the UKF update per person was 2.39 msec.

2) Face recognition and orientation correction: As described in Sec. II-C, we use a face recognition method to discriminate between the 0 deg and 180 deg cases. We apply the Viola-Jones face detector [15] with our illumination normalization method [16] for detecting faces under various illumination conditions. When a face is detected in the predicted head region and the estimated body orientation is within the range between −30 deg and 30 deg, as shown in Fig. 10, we reverse the orientation, that is, add 180 deg to the estimation result.
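A small sketch of this correction rule, with a hypothetical signature of our own, might look as follows.

```python
def correct_orientation(theta_deg, face_detected, half_range_deg=30.0):
    """Reverse the estimated body orientation when a face is detected while
    the estimate lies within ±30 deg of 0 deg (the range shown in Fig. 10)."""
    # Wrap the estimate to (-180, 180] so the ±30 deg test is straightforward.
    theta = (theta_deg + 180.0) % 360.0 - 180.0
    if face_detected and -half_range_deg <= theta <= half_range_deg:
        # Add 180 deg and re-wrap.
        theta = ((theta + 180.0) + 180.0) % 360.0 - 180.0
    return theta
```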

3) Tracking and orientation estimation: Fig. 11 shows snapshots of tracking a person while estimating his body orientation. The left column shows the images from a camera on the robot, in which cylinders and arrows indicate the person positions and body orientations, respectively. The right column shows the maps with person information. Circles, the red circle among them, and arrows indicate the detected persons, the target person, and the target's body orientation, respectively. In Fig. 11(a), the target person is standing still and facing the robot. In Figs. 11(b)-(d), the robot is following the target person and his body orientations are well estimated.

Fig. 12 shows the results of tracking and body orientation estimation of multiple persons. The top-right corner of each image shows the 2D mapping of persons and obstacles. The results show that the body orientations are well estimated.

V. CONCLUSIONS AND FUTURE WORK

This paper has described a body orientation estimation method that integrates shape and motion information. The shape information is extracted from a 2D LIDAR scan and then compared with

Fig. 9. Outline of tracking and orientation estimation (block diagram: LIDARs and odometry feed person detection, body shape extraction, shape-based orientation estimation using the shape model database, and motion estimation; person tracking with shape-motion integration using the UKF outputs the position, velocity, and body orientation).

Fig. 10. Orientation range (−30 deg to +30 deg) for reversing the measured orientation.

the shape models to generate an orientation estimate with its variance. Considering the fact that the body orientation and the motion direction are correlated to some extent, the shape-based orientation information is combined with the motion information in a UKF framework to produce a better orientation estimate. We evaluated the method with a motion capture system as ground truth and have shown that the shape-motion integration increases the accuracy of body orientation estimation. The method was also implemented on a real robot and successfully applied to a person-following scenario.

Although we have examined the effects of body size and clothing on the estimation accuracy, the effects of larger disturbances such as belongings (e.g., a shoulder bag or a backpack) have not been evaluated. In such a case, if the change of the body shape at the height of the 2D LIDAR scan is large, the estimation could become much worse. We are also planning to use a 3D LIDAR so that informative parts of scans can be selectively utilized for better estimation. Application of the body orientation estimate to the recognition of various human interactions, such as two persons talking closely, is also future work.

ACKNOWLEDGMENT

This work was supported by JSPS KAKENHI Grant Number 25280093.

REFERENCES

[1] K.O. Arras, O.M. Mozos, and W. Burgard. Using Boosted Features for the Detection of People in 2D Range Data. In Proceedings of the 2007 IEEE Int. Conf. on Robotics and Automation, pp. 3402–3407, 2007.

[2] Z. Zainudin, S. Kodagoda, and G. Dissanayake. Torso Detection and Tracking using a 2D Laser Range Finder. In Proceedings of the Australasian Conf. on Robotics and Automation 2010, 2010.

[3] N. Dalal and B. Triggs. Histograms of Oriented Gradients for Human Detection. In Proceedings of the 2005 IEEE Conf. on Computer Vision and Pattern Recognition, pp. 886–893, 2005.


Fig. 11. Tracking and body orientation estimation by the robot (scenes 1-4).

[4] J. Satake and J. Miura. Robust Stereo-Based Person Detection and Tracking for a Person Following Robot. In Proceedings of the ICRA-2009 Workshop on Person Detection and Tracking, 2009.

[5] L. Spinello and K.O. Arras. People Detection in RGB-D Data. In Proceedings of the 2011 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 3838–3843, 2011.

[6] M. Munaro and E. Menegatti. Fast RGB-D People Tracking for Service Robots. Autonomous Robots, Vol. 37, No. 3, 2014.

[7] D.F. Glas, T. Miyashita, H. Ishiguro, and N. Hagita. Laser-Based Tracking of Human Position and Orientation using Parametric Shape Modeling. Advanced Robotics, Vol. 23, No. 4, pp. 405–428, 2009.

[8] T. Matsumoto, M. Shimosaka, H. Noguchi, T. Sato, and T. Mori. Pose Estimation of Multiple People using Contour Features from Multiple Laser Range Finders. In Proceedings of the 2009 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 2190–2196, 2009.

[9] C. Weinrich, C. Vollmer, and H.-M. Gross. Estimation of Human Upper Body Orientation for Mobile Robotics using an SVM Decision Tree on Monocular Images. In Proceedings of the 2012 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 2147–2152, 2012.

[10] C. Chen, A. Heili, and J.-M. Odobez. Combined Estimation of Location and Body Pose in Surveillance Video. In Proceedings of the 2011 IEEE Int. Conf. on Advanced Video and Signal-Based Surveillance, pp.

Fig. 12. Tracking and body orientation estimation of multiple persons (scenes 1-4).

5–10, 2011.

[11] I. Ardiyanto and J. Miura. Partial Least Squares-based Human Upper Body Orientation Estimation with Combined Detection and Tracking. Image and Vision Computing, Vol. 32, No. 11, pp. 904–915, 2014.

[12] M. Svenstrup, S. Tranberg, H.J. Andersen, and T. Bak. Pose Estimation and Adaptive Robot Behavior for Human-Robot Interaction. In Proceedings of the 2009 IEEE Int. Conf. on Robotics and Automation, pp. 3571–3576, 2009.

[13] D.P. Huttenlocher, G.A. Klanderman, and W.J. Rucklidge. Comparing Images using the Hausdorff Distance. IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 15, No. 9, pp. 850–863, 1993.

[14] S.J. Julier and J.K. Uhlmann. Unscented Filtering and Nonlinear Estimation. Proceedings of the IEEE, Vol. 92, No. 3, pp. 401–422, 2004.

[15] P. Viola and M.J. Jones. Robust Real-Time Face Detection. Int. J. of Computer Vision, Vol. 57, No. 2, pp. 137–154, 2004.

[16] B.S.B. Dewantara and J. Miura. Fuzzy-based Illumination Normalization for Face Recognition. In Proceedings of the 2013 IEEE Workshop on Advanced Robotics and its Social Impacts, pp. 131–136, 2013.
