Range Data Approximation for Mobile Robot by Using CAN2
Takeshi NISHIDA, Shuichi KUROGI, Yuji TAKEMURA, Hirohito FUKUMOTO, Shota OKADA
Abstract— In this article, we apply the competitive associative net called CAN2 to the processing of range data of an indoor environment acquired by a mobile robot, where the CAN2 is a neural net or a learning machine which performs piecewise linear approximation. After introducing several methods for dealing with range data by the CAN2, we show the following results: (1) an original range image involving lack of data or jump edges can be learned and recalled as a natural range image by modifying the learning and recalling procedure of the CAN2; (2) a high data compression ratio can be achieved by the CAN2 without greatly reducing the quality of the range image.
I. INTRODUCTION
The competitive associative net called CAN2 is a neural net or a learning machine which utilizes competitive and associative schemes to achieve piecewise linear approximation of nonlinear functions, and its effectiveness in various applications such as function approximation, control, rainfall estimation, time-series prediction, etc. has been shown so far (see [1]-[5]). Here, we apply the CAN2 to the processing of range data for a running mobile robot in order to take advantage of its functions, such as piecewise linear approximation, noise reduction, and data compression. The range data can be obtained via a Laser Rangefinder (LRF) without being affected by environmental light [6]; however, such data have several disadvantages [7]-[9]:
(1) The range data sometimes involve lack of data, called black spots, around the corners or edges of objects.
(2) The range data involve quantization errors owing to the measurement resolution (e.g. 10 [mm]).
(3) The number of data points in a range data set is large.
However, the landmarks for mapping or self-position estimation are often the edges of objects or the boundary between the floor and the wall [10], [11]. Therefore, in this article, we show that the CAN2 can be used for overcoming those disadvantages of the range data and for extracting such landmarks.
In the next section, we describe the CAN2 and its batch learning method. In Sec. III, the relation of each coordinate system is shown, and in Sec. IV, we describe the structure of the range data and present a new recalling method of the CAN2 for smoother function approximation. In Sec. V, we show several experimental results that manifest the effectiveness of the CAN2 in processing the range images.
Takeshi NISHIDA, Shuichi KUROGI, Yuji TAKEMURA, Hirohito FUKUMOTO, and Shota OKADA are with the Department of Control Engineering, Kyushu Institute of Technology, 1-1 Sensui Tobata Kitakyushu Fukuoka 884-8550, Japan (phone: +81-93-884-3190; fax: +81-93-861-1159; email: [email protected]).
II. BATCH LEARNING COMPETITIVE ASSOCIATIVE NET 2
First, we consider an input-output system with a $k$-dimensional input vector $\boldsymbol{x}_j \triangleq (x_{j1},\cdots,x_{jk})^T$ and an output $y_j$ given by

$$y_j \triangleq f(\boldsymbol{x}_j) + \delta_j, \tag{1}$$

where $j = 1, 2, \cdots$ indicates the index to distinguish the data, $f(\boldsymbol{x}_j)$ is a nonlinear function of $\boldsymbol{x}_j$, and $\delta_j$ represents zero-mean noise with variance $\sigma_d^2$. We use $k = 1$ for the application to the 3D range data shown below. A CAN2 has $N$ units (Fig. 1), and the $i$th ($i \in I = \{1, 2, \cdots, N\}$) unit has a weight (column) vector $\boldsymbol{w}_i \triangleq (w_{i1},\cdots,w_{ik})^T \in \mathbb{R}^{k \times 1}$ and an associative matrix (or a row vector) $M_i \triangleq (M_{i0},\cdots,M_{ik}) \in \mathbb{R}^{1 \times (1+k)}$. The CAN2 approximates the above function by

$$\hat{y} \triangleq M_c \widetilde{\boldsymbol{x}}, \tag{2}$$

where $\widetilde{\boldsymbol{x}} \triangleq (1, \boldsymbol{x}^T)^T$ denotes the extended input vector whose first element generates the bias term of the approximation, and the $c$th unit is the winner of the following minimization or competition:

$$c \triangleq \operatorname*{argmin}_{i \in I} \|\boldsymbol{x} - \boldsymbol{w}_i\|. \tag{3}$$

Note that the above function approximation partitions the input space $V = \mathbb{R}^k$ into $N$ Voronoi regions

$$V_i \triangleq \{\boldsymbol{x} \mid i = \operatorname*{argmin}_{j \in I} \|\boldsymbol{x} - \boldsymbol{w}_j\|\}, \tag{4}$$

and performs piecewise linear approximation of $y = f(\boldsymbol{x})$. Further, we have developed an efficient batch learning method (see [2] for details), and in the following we suppose the CAN2 is used after batch learning.
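The recall computation of Eqs. (2)-(3) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the unit weights below are hypothetical values chosen so that two units approximate $y = |x|$ by the lines $y = -x$ and $y = x$.

```python
import numpy as np

def can2_recall(x, W, M):
    """CAN2 recall: x is a (k,) input, W an (N, k) array of weight
    vectors w_i, and M an (N, 1+k) array of associative vectors M_i."""
    c = np.argmin(np.linalg.norm(W - x, axis=1))  # competition, Eq. (3)
    x_ext = np.concatenate(([1.0], x))            # extended input (1, x^T)^T
    return M[c] @ x_ext                           # winner's linear output, Eq. (2)

# Hypothetical example with k = 1 and N = 2: unit 0 covers x < 0 with the
# line y = -x, unit 1 covers x > 0 with the line y = x.
W = np.array([[-1.0], [1.0]])
M = np.array([[0.0, -1.0], [0.0, 1.0]])  # rows (M_i0, M_i1)
print(can2_recall(np.array([-0.5]), W, M))  # unit 0 wins: 0.5
print(can2_recall(np.array([2.0]), W, M))   # unit 1 wins: 2.0
```

The competition selects the Voronoi region of Eq. (4) containing the input, and the associated linear map yields the piecewise linear output.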
III. MOBILE ROBOT AND LRF
We use a URG-04LX laser rangefinder (LRF) for scanning the horizontal 2D plane to measure the distance to an object, where the maximum measurable distance is 4095 [mm] with a resolution from 10 [mm] to 1% of the distance, and the scanning range is 240 [deg] with an angular resolution of 0.36 [deg]. In order to scan 3D space, we installed the LRF on a mobile robot (Fig. 2).
First of all, the definition of each coordinate system and the relations between them are described here. Let $\Sigma_l = \{{}^l x, {}^l y\}$, $\Sigma_r = \{{}^r x, {}^r y, {}^r z\}$ and $\Sigma_w = \{{}^w x, {}^w y, {}^w z\}$ be the coordinate systems of the LRF, the robot and the world, respectively. The relation between the LRF and robot coordinate systems is shown in Fig. 3. The LRF yields polar range data $(\theta_k, d_{k,t})$ consisting of the horizontal angle $\theta_k$ and the measured distance $d_{k,t}$ at time $t$. Specifically, we use the range data in the polar form given as

$$D_{\mathrm{polar}} = \{(\theta_k, d_{k,t}) \mid 0 \le \theta_k \le \pi,\ 20 \le d_{k,t} \le 4095,\ k = 0,\cdots,K,\ t = 0,\cdots,T\}, \tag{5}$$

and transform it to the rectangular form given as

$$D_{\mathrm{rect}} = \{({}^l x_{k,t}, {}^l y_{k,t}) = (d_{k,t}\cos\theta_k,\ d_{k,t}\sin\theta_k) \mid k = 0,\cdots,K,\ t = 0,\cdots,T\} \subset \Sigma_l, \tag{6}$$
where $K = 512$. Further, the 3D rectangular (Cartesian) position can be obtained by

$${}^r\boldsymbol{x}_{k,t} \triangleq ({}^r x_{k,t}, {}^r y_{k,t}, {}^r z_{k,t})^T = ({}^l x_{k,t},\ {}^l y_{k,t}\cos\varphi,\ {}^l y_{k,t}\sin\varphi)^T \in \Sigma_r, \tag{7}$$
where $\varphi$ is the angle of depression of the LRF (we set $\varphi = \pi/4$ in the following experiments).

Fig. 1. Schematic diagram of the CAN2.

Fig. 2. Mobile robot with LRF.

Fig. 3. LRF and robot coordinate system.

Fig. 4. LRF coordinate system.

Furthermore, since the mobile robot measures the environment while running, we can obtain the 3D position ${}^w\boldsymbol{x}_k(t) \in \Sigma_w$ as follows:

$$({}^w\boldsymbol{x}_k(t)^T, 1)^T \triangleq ({}^w x_k(t), {}^w y_k(t), {}^w z_k(t), 1)^T = {}^w_r H(t) \cdot ({}^r\boldsymbol{x}_{k,t}^T, 1)^T, \tag{8}$$

where the homogeneous coordinate transformation matrix ${}^w_r H(t)$ is

$${}^w_r H(t) \triangleq \begin{pmatrix} {}^w_r R(t) & \boldsymbol{a}(t) \\ \boldsymbol{0} & 1 \end{pmatrix}, \tag{9}$$

and the rotation matrix ${}^w_r R(t)$ and the position $\boldsymbol{a}(t)$ of the robot at time $t$ are

$${}^w_r R(t) \triangleq \begin{pmatrix} \sin\psi(t) & \cos\psi(t) & 0 \\ -\cos\psi(t) & \sin\psi(t) & 0 \\ 0 & 0 & 1 \end{pmatrix}, \tag{10}$$

$$\boldsymbol{a}(t) \triangleq (a_x(t), a_y(t), a_z(t))^T, \tag{11}$$

where $\psi(t)$ is the rotation angle of the robot about the vertical axis.
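The chain of transformations in Eqs. (6)-(11) can be sketched as follows. This is an illustrative implementation under the paper's conventions; the pose values (rotation angle $\psi$ and position $\boldsymbol{a}$) used in the example are hypothetical.

```python
import numpy as np

def to_world(theta, d, phi, psi, a):
    """Map one polar LRF measurement (theta, d) to world coordinates.
    phi: depression angle of the LRF; psi, a: robot pose at time t."""
    lx, ly = d * np.cos(theta), d * np.sin(theta)            # Eq. (6), LRF frame
    rx = np.array([lx, ly * np.cos(phi), ly * np.sin(phi)])  # Eq. (7), robot frame
    R = np.array([[ np.sin(psi), np.cos(psi), 0.0],          # Eq. (10)
                  [-np.cos(psi), np.sin(psi), 0.0],
                  [         0.0,         0.0, 1.0]])
    H = np.eye(4)                                            # Eq. (9), homogeneous
    H[:3, :3], H[:3, 3] = R, a
    return (H @ np.concatenate((rx, [1.0])))[:3]             # Eq. (8), world frame

# A point straight ahead (theta = pi/2) at 1000 mm, with phi = pi/4 and the
# robot at the origin with psi = pi/2 (so R is the identity).
p = to_world(np.pi / 2, 1000.0, np.pi / 4, np.pi / 2, np.zeros(3))
```

With these hypothetical pose values the point maps to $(0, 1000/\sqrt{2}, 1000/\sqrt{2})$ [mm], i.e. the depression angle splits the LRF's $y$ range equally between the robot's $y$ and $z$ axes.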
IV. PIECEWISE LINEAR APPROXIMATION OF RANGE DATA SEQUENCE VIA THE CAN2
Suppose we train the CAN2 with the data ${}^l\boldsymbol{x}_{k,t} \triangleq ({}^l x_{k,t}, {}^l y_{k,t})^T \in D_{\mathrm{rect}}$, where the relation ${}^l y_{k,t} = f({}^l x_{k,t}) + \delta_{k,t}$ is supposed to hold as the input-output system given by Eq. (1). Furthermore, we suppose that the range data sequence obtained by one scan of the LRF consists of several line segments $L_l$ ($l = 1, 2, \cdots$); that is, we suppose that the environment surrounding the robot consists of several planar segments. Then the CAN2 after training approximates each $L_l$ by piecewise lines $L^{\mathrm{CAN2}}_i$ for $i \in J_l$, where $J_l$ denotes the set of indices $i$ of the $L^{\mathrm{CAN2}}_i$ which approximate the same line $L_l$.
The algorithm for piecewise linear approximation of the data $D_{\mathrm{polar}}$ by the CAN2 is given by the following steps (Fig. 5):
Step 1: Let $(\theta_k, d_{k,t})$ ($k = 1, 2, \cdots$) be a dataset obtained by the LRF. It is divided into $m = 2^n$ ($n \in \mathbb{N}^+$) blocks based on $\theta_k$. Each data block is rotated so that its central angle becomes $\pi/2$ [rad], and is transformed to the rectangular coordinate system.

Step 2: The datasets are linearly approximated by CAN2s, and $N$ lines are extracted. As a result, the weight vectors $\boldsymbol{w}_o$ ($o \in I$) and the associative vectors $M_o = (M_{o0}, M_{o1})$ of each CAN2 are obtained.

Step 3: The weights $\boldsymbol{w}_o$ are rearranged in order of the value of ${}^l x$ and given a new index $r \in I$; i.e., segments with similar associative vectors are connected and integrated into a line segment $S_l$ ($l = 1, 2, \cdots, s_l$) according to the following condition:

$$\|M_o - M_{o+1}\| \le M_{\mathrm{th}}. \tag{12}$$

Step 4: Each processed block is rotated back and the blocks are integrated. The gaps between blocks are interpolated.
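The merging condition of Step 3 can be sketched as follows. This is an illustrative sketch, not the authors' code: the associative vectors and the threshold `M_th` below are hypothetical, and the units are assumed to be already sorted by ${}^l x$ as Step 3 requires.

```python
import numpy as np

def merge_segments(M_sorted, M_th):
    """Group adjacent CAN2 units into line segments via Eq. (12).
    M_sorted: (N, 2) array of associative vectors (M_o0, M_o1),
    sorted by the l_x position of the corresponding weights.
    Returns one list of unit indices per merged segment S_l."""
    segments = [[0]]
    for o in range(len(M_sorted) - 1):
        if np.linalg.norm(M_sorted[o] - M_sorted[o + 1]) <= M_th:
            segments[-1].append(o + 1)  # nearly the same line: extend segment
        else:
            segments.append([o + 1])    # line changed: start a new segment
    return segments

# Four units: the first two and last two lie on nearly the same lines.
M = np.array([[0.0, 1.0], [0.05, 1.0], [5.0, -1.0], [5.02, -1.0]])
print(merge_segments(M, M_th=0.1))  # → [[0, 1], [2, 3]]
```

Since each unit's associative vector holds the intercept and slope of its local line, thresholding the difference of adjacent vectors merges units that approximate the same underlying line $L_l$.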
V. EXPERIMENTS
The mobile robot with the LRF ran at a speed of 100 [mm/sec] in a corridor (Fig. 6), and range data sets were obtained. The LRF was installed at a height of 830 [mm] from the floor and its angle of depression was 45 [deg]. The measurement resolution is 0.36 [deg] (so $K = 512$) and the LRF scans within 180 [deg] at 10 [Hz]. The parameters are as follows: the number of CAN2 units was $N = 100$, the number of learning iterations was 20 per scanning data sequence, and $s_l = 5$. The range data sets and their processing results are shown in Fig. 7. All scanning data sequences, transformed into the world coordinate system and arranged, are shown in Fig. 7(a), (c), (e), and (g), and Fig. 7(b), (d), (f), and (h) show the results of applying the above-mentioned procedure to each scanning data sequence.
We see from these results that the range data sets were appropriately approximated and that comprehensible maps were formed. The data were compressed to about 4% of their original size: each scan line contains about 500 data points and is approximated on average by about five line segments, so the ratio of the data sizes is about 4%. Thus, the robot can obtain much information, e.g. the positions of the floor, the walls and the obstacles, and the distances to them, by applying the proposed method.
Fig. 5. Flow of piecewise linear approximation by using CAN2 (Step 1: partition and rotation; Step 2: coordinate transformation and piecewise linear approximation; Step 3: segment integration; Step 4: rotation and integration).
Furthermore, although obstacles are not easily identified in the raw range data, which contain a huge amount of information, they become easy to recognize visually by applying the proposed procedure. Similarly, a difference in floor level in a surrounding passage is also easily identified. Therefore, these results also suggest the advantage of the proposed method as a user interface.

Fig. 7. Range data sets and processing results; (a), (b) a straight corridor, (c), (d) a straight corridor with an obstacle, (e), (f) a dead end of a corridor, and (g), (h) a difference in level. (a), (c), (e) and (g) are raw range data, and (b), (d), (f) and (h) are piecewise linear approximation results.
Fig. 6. Mobile robot in a corridor.
VI. CONCLUSION
In this research, we proposed a technique for piecewise linear approximation, interpolation, and compression of the range data acquired by an LRF installed on a mobile robot by using the CAN2, and evaluated it experimentally. As a result, it was shown that interpolation with high accuracy and high compressibility were achieved. Moreover, it was shown that much environmental information can be obtained by applying the proposed technique, and its extensibility to a user interface was also demonstrated.
ACKNOWLEDGMENT
This research was partially supported by the Grant-in-Aid for Scientific Research (B) 16300070 of the Japanese Ministry of Education, Science, Sports and Culture.
REFERENCES
[1] S. Kurogi and S. Ren, "Competitive associative network for function approximation and control of plants," Proc. of NOLTA'97, pp. 775–778, 1997.
[2] S. Kurogi, M. Sawa, T. Ueno, and Y. Fuchikawa, "A batch learning method for competitive associative net and its application to function approximation," Proc. of SCI2004, Vol. 5, pp. 24–28, 2004.
[3] S. Kurogi, "Asymptotic Optimality of Competitive Associative Nets and Its Application to Incremental Learning of Nonlinear Functions," IEICE(D-II), Vol. J86-D-II, No. 2, pp. 184–194, 2003.
[4] S. Kurogi, N. Araki, H. Miyamoto, Y. Fuchikawa and T. Nishida, "Temperature Control of RCA Cleaning Solutions Using Batch Learning Competitive Associative Net," Proc. of SCI2004, Vol. 5, pp. 18–23, 2004.
[5] S. Kurogi, T. Ueno, M. Sawa, "Batch learning competitive associative net and its application to time series prediction," Proc. of IJCNN2004, 2004.
[6] G. N. DeSouza and A. C. Kak, "Vision for Mobile Robot Navigation: A Survey," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 24, No. 2, pp. 237–267, 2002.
[7] D. Hähnel, W. Burgard and S. Thrun, "Learning compact 3D models of indoor and outdoor environments with a mobile robot," Robotics and Autonomous Systems, Vol. 44, pp. 15–27, 2003.
[8] H. Surmann, K. Lingemann, A. Nüchter and J. Hertzberg, "A 3D laser range finder for autonomous mobile robots," Proc. of the 32nd ISR, pp. 153–158, 2001.
[9] H. Surmann, A. Nüchter and J. Hertzberg, "An autonomous mobile robot with a 3D laser range finder for 3D exploration and digitalization of indoor environments," Robotics and Autonomous Systems, Vol. 45, pp. 181–198, 2003.
[10] J. Neira, J. D. Tardos, J. Horn and G. Schmidt, "Fusing Range and Intensity Images for Mobile Robot Localization," IEEE Trans. on Robotics and Automation, Vol. 15, No. 1, pp. 76–84, 1999.
[11] M. M. Nevado, J. G. Garcia-Bermejo and E. Z. Casanova, "Obtaining 3D models of indoor environments with a mobile robot by estimating local surface directions," Robotics and Autonomous Systems, Vol. 48, pp. 131–143, 2004.