
Student ID : M9820202

Student : Chung-Chieh Lien

Teacher : Ming-Yuan Shieh

An Illumination Adaptive Color Object Recognition Method in Robot Soccer Match

1

Proceedings of the 2008 IEEE International Conference on Robotics and Biomimetics Bangkok, Thailand, February 21 - 26, 2009

OUTLINE

ABSTRACT
INTRODUCTION
DESCRIPTION OF THE METHOD
  - YUV space and primary colors’ distribution
  - Histogram of image consisting of primary colors
  - Histogram on real images
  - Two stage method
EXPERIMENTAL RESULTS
CONCLUSION AND FUTURE WORK
REFERENCES

2

ABSTRACT

Generally, colors are identified by referring to pre-defined bounds on the components of each color.

However, it is not easy to define non-interfering bounds for different colors, and the bounds are sensitive to illumination conditions.

Instead, in this work, different colors are discriminated by comparing their chrominance components in the YUV color space.

3

Combined with the geometric properties of the objects’ color labels, the recognition process consists of two stages.

In each stage, recognition of the color labels is realized by simply comparing the average chrominance components of the separated regions.

Thus the system can adapt to variations in ambient illumination.

4

INTRODUCTION

In robot soccer competitions, there are mainly two kinds of vision systems. One is to mount the camera, or even the whole vision system, on each robot [2][3][4]; the other is to mount the camera at a fixed position with respect to the match field [6].

In this work, we focus on the latter system, in which the camera is mounted above the match field and points downward to the field to capture images of the whole field.

5

The captured images are analyzed to identify the orientation and position of the robots and the ball in the field, and the derived information is then transmitted to the decision module.

One of the most important steps in color object recognition is color classification [5], that is, separating the colors in the image into clusters.

6

Fig. 1. Overview of the field

7

The original method of object recognition in this system is to train the system by collecting RGB colors of the labels and the ball before the match starts.

The training process is time-consuming and sensitive to changes in ambient illumination.

The user needs to collect as many samples as possible for each color, at different locations in the field, to realize reliable recognition, and has to repeat this tedious work whenever the ambient illumination changes.

8

In some works, the YUV color space was used for color classification [1][3][6].

The advantage over the RGB color space is that the Y (luminance) component can be ignored in the classification process, so the classification problem lies in a 2D space rather than a 3D space.

Even so, it is still hard to minimize the influence of the illumination condition and realize effective and robust recognition.

9

DESCRIPTION OF THE METHOD
YUV space and primary colors’ distribution

When referring to signals in video or digital form, the YUV color model is often expressed in YCbCr format:

$$
\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix}
=
\begin{bmatrix}
0.2549 & 0.5059 & 0.0980 \\
-0.1451 & -0.2902 & 0.4392 \\
0.4392 & -0.3647 & -0.0706
\end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix}
+
\begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix}
$$
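As an illustration, here is a minimal Python sketch of this conversion, assuming 8-bit RGB input; the signs of the Cb and Cr coefficients follow the standard ITU-R BT.601 form, which the slide’s coefficient magnitudes match:

```python
import numpy as np

# RGB -> YCbCr conversion (ITU-R BT.601, 8-bit), coefficients as above.
M = np.array([[ 0.2549,  0.5059,  0.0980],
              [-0.1451, -0.2902,  0.4392],
              [ 0.4392, -0.3647, -0.0706]])
OFFSET = np.array([16.0, 128.0, 128.0])

def rgb_to_ycbcr(rgb):
    """Convert an (..., 3) array of 8-bit RGB values to YCbCr."""
    return np.asarray(rgb, dtype=np.float64) @ M.T + OFFSET
```

For example, rgb_to_ycbcr([255, 0, 0]) gives approximately (81, 91, 240) for pure red.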

10

DESCRIPTION OF THE METHOD
Histogram of an image consisting of primary colors

In the ideal case, if only the above-mentioned colors are captured in the image, a simple thresholding method can realize the color classification.

Fig. 3. Histogram of the image consisting of primary colors

11

DESCRIPTION OF THE METHOD
Histogram on real images

Fig. 4. Histograms of real images

12

This is mainly due to two reasons:

1. under general ambient illumination, it is difficult to obtain a pure black color even if the object is black;

2. cameras working in RGGB (Bayer) mode interpolate the RGB values of each pixel, so color deviation is inevitable, especially at object boundaries, as shown in Fig. 5.

13

Fig. 5. Zoomed parts of the real image

14

There are a number of background subtraction methods, such as [7] and [8].

In this work, we rely on the intensity component Y to filter out the pixels belonging to the background; only the small portion of pixels with high Y values is retained for further processing.

After background subtraction, the retained pixels are easier to cluster according to the histogram on the Cb-Cr plane, even under weak illumination conditions.

15
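A minimal sketch of this Y-based background filtering, assuming a NumPy image already converted to YCbCr (the threshold value here is hypothetical, not the one used in the system):

```python
import numpy as np

def foreground_mask(ycbcr, y_threshold=120.0):
    """Keep only pixels whose luminance Y exceeds the threshold.

    ycbcr: (H, W, 3) float array; y_threshold is an illustrative
    value and would be tuned to the actual lighting.
    """
    return ycbcr[..., 0] > y_threshold
```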

Fig. 6. Histogram of the foreground in the image

16

In this histogram, we can see that there are several sharp peaks, and in fact they correspond to the main colors in the image.

Then we truncate the histogram with a threshold, and derive binary images as shown in Fig. 7.

In our experiments, the threshold is generally set to four.
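A sketch of this step, building a 256×256 Cb-Cr histogram of the foreground pixels and truncating it at the threshold of four (helper names are illustrative):

```python
import numpy as np

def binarized_cbcr_histogram(ycbcr, mask, threshold=4):
    """Accumulate foreground pixels into a 256x256 Cb-Cr histogram,
    then binarize it at the given count threshold (cf. Fig. 7)."""
    cb = np.clip(np.round(ycbcr[..., 1][mask]), 0, 255).astype(np.intp)
    cr = np.clip(np.round(ycbcr[..., 2][mask]), 0, 255).astype(np.intp)
    hist = np.zeros((256, 256), dtype=np.int64)
    np.add.at(hist, (cb, cr), 1)   # per-bin pixel counts
    return hist > threshold        # binary image on the Cb-Cr plane
```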

17

The colors of the regions shown in Fig. 7 are not the colors of the objects; they merely label the different regions in the binary images.

Fig. 7 (a) is an image derived under weaker illumination than Fig. 7 (b).

Fig. 7. Binarized histogram images: (a) weaker illumination; (b) brighter illumination

18

It can be observed that the regions are separated by a larger distance in Fig. 7 (b).

This suggests that, if possible, the system should work under bright ambient illumination, so that the colors are easier to separate.

At the same time, it should be remembered that if the illumination is too bright, the colors will approach white, as mentioned before.

19

DESCRIPTION OF THE METHOD
Two stage method

Variation of the illumination over the field makes labels of the same color look quite different in the captured image.

In Fig. 8 (a), it can be seen that the three orange labels are not of a uniform color.

20

This phenomenon causes the Cb and Cr values of the same color to spread over a wide range in the Cb-Cr plane, so in the binarized histogram image they often form several disconnected regions.

From Fig. 8 (b) and (c) we can see that, under a fixed threshold, the orange color in Fig. 8 (a) is separated into three clusters (marked in red, black and magenta).

21

Since the color labels for discriminating the teams of the robots (the rectangular labels) are larger than the labels for identifying the members of each team (the triangular labels), and also larger than the ball, we first search for the six largest regions in the mapped image.

Referring to Fig. 2, we know that blue has a larger Cb value than orange.

22


The three regions with the largest Cb values belong to one team, and the other three regions belong to the other team.
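A sketch of this first-stage team assignment, assuming the connected regions of the binarized histogram image have already been extracted (e.g. with scipy.ndimage.label) and carry their pixel lists and mean Cb values; the region record layout is illustrative:

```python
def split_teams(regions):
    """Pick the six largest regions (the rectangular team labels)
    and split them into two teams by mean Cb value."""
    six = sorted(regions, key=lambda r: len(r['pixels']), reverse=True)[:6]
    by_cb = sorted(six, key=lambda r: r['mean_cb'], reverse=True)
    blue_team, other_team = by_cb[:3], by_cb[3:]
    return blue_team, other_team
```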

The center of each rectangular region is the corresponding robot’s center, and the long axis of each region can be used to find the robot’s orientation.

The long axis is the axis about which the region has the least second moment [5].

24

Assume there is a binary image B[i, j], where i ∈ [0, n−1] and j ∈ [0, m−1], containing a connected region; we need to calculate the region’s long axis.

Firstly, the region’s area is calculated.

25

Then, the mean center of the region is calculated.

Finally, we calculate the second-order moments.

where x′ = x − x̄ and y′ = y − ȳ.

26

The axis’ direction α is then derived from these moments.
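The equations themselves were figures in the original slides; a standard reconstruction, following the least-second-moment formulation in [5], is:

$$A = \sum_{i=0}^{n-1} \sum_{j=0}^{m-1} B[i,j], \qquad
\bar{x} = \frac{1}{A} \sum_{i} \sum_{j} j\,B[i,j], \qquad
\bar{y} = \frac{1}{A} \sum_{i} \sum_{j} i\,B[i,j]$$

$$a = \sum_{i} \sum_{j} (x')^{2} B[i,j], \qquad
b = 2 \sum_{i} \sum_{j} x' y'\, B[i,j], \qquad
c = \sum_{i} \sum_{j} (y')^{2} B[i,j]$$

$$\tan 2\alpha = \frac{b}{a - c}$$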

Since the long axis has two possible directions and forms an angle of π/4 or 3π/4 with the robot’s orientation, we need to find out how to rotate the axis to get the robot’s orientation.

This task can be completed by searching the square region occupied by the robot in the image and checking on which side of the long axis more foreground pixels are retained.
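A Python sketch of this computation, combining the moment-based axis with the side-count check; names and conventions are illustrative, not the authors’ code, and the final ±π/4 rotation to the robot’s heading is omitted:

```python
import numpy as np

def long_axis_angle(xs, ys):
    """Direction of the least-second-moment axis of a pixel set."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    xp, yp = xs - xs.mean(), ys - ys.mean()
    a, b, c = (xp ** 2).sum(), 2 * (xp * yp).sum(), (yp ** 2).sum()
    return 0.5 * np.arctan2(b, a - c)

def disambiguate(alpha, fg_xs, fg_ys, cx, cy):
    """Resolve the two possible axis directions by checking on which
    side of the axis more foreground pixels lie, as described above."""
    nx, ny = -np.sin(alpha), np.cos(alpha)          # axis normal
    d = (np.asarray(fg_xs) - cx) * nx + (np.asarray(fg_ys) - cy) * ny
    return alpha if (d > 0).sum() >= (d < 0).sum() else alpha + np.pi
```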

27

Up to now, we have obtained each robot’s team, location and orientation, but we still do not know their member identities.

For each team, we search the square region occupied by each robot and record the pixels that belong to the foreground but not to the team label.

Then we calculate these pixels’ average Cr values. By comparing these values across the robots, we can identify which robot is member one, two or three of each team.
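A sketch of this second stage for one team, assuming the non-team-label foreground pixels of each robot have been collected; the record layout and the sort direction are illustrative:

```python
def rank_members(robots):
    """robots: list of dicts with 'id' and 'cr_values', the Cr values
    of foreground pixels that are not part of the team label.

    Sorting the robots by average Cr yields member numbers 1..3."""
    def avg_cr(r):
        return sum(r['cr_values']) / len(r['cr_values'])
    ranked = sorted(robots, key=avg_cr, reverse=True)
    return {r['id']: number for number, r in enumerate(ranked, start=1)}
```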

28

Now, the final problem is to identify the location of the ball.

Since the ball’s color is not homogeneous due to shading and low imaging quality (refer to Fig. 9 (a) and (b)), here we also do not use upper and lower color thresholds to identify it.

29

Instead, we eliminate the regions occupied by all robots from the foreground of the image; in the remaining binary image, the largest region corresponds to the ball (refer to Fig. 9 (c) and (d)).
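A sketch of this elimination step using scipy.ndimage (the library choice is an assumption; any connected-component labeling would do):

```python
import numpy as np
from scipy import ndimage

def find_ball(foreground, robot_mask):
    """Largest connected foreground region once all robot regions
    are masked out; returns the ball's centroid as (row, col)."""
    remaining = foreground & ~robot_mask
    labels, n = ndimage.label(remaining)
    if n == 0:
        return None
    sizes = ndimage.sum(remaining, labels, index=range(1, n + 1))
    ball_label = int(np.argmax(sizes)) + 1
    return ndimage.center_of_mass(remaining, labels, ball_label)
```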

30

THE PROCESS OF OUR RECOGNITION METHOD IN DETAIL.

31

EXPERIMENTAL RESULTS

In this section, we show the system’s precision and accuracy.

In the experiments, the robots are positioned against the sides of the field, in specific orientations, as shown in Fig. 11 (a).

The detected data are drawn as graphics, as shown in Fig. 11 (b). The resulting image shows that the detection is reliable.

32

EXPERIMENTAL RESULTS

33

In Table I and Table II, we record the detected data of the robots and the ball in ten frames under different illumination conditions. Each frame has a size of 640×480 pixels.

The camera’s shutter time was varied from 5 ms to 13 ms with the other parameters fixed. In the process, the average intensity of the image more than doubled.

34

TABLE I. ORIENTATION (θ) OF THE ROBOTS (IN RADIANS)

*180 degrees = 3.14 radians

35

TABLE II. POSITION (X, Y) OF THE ROBOTS AND THE BALL (IN PIXELS)

36

From the tables, it can be seen that the accuracy of the position is within 3 pixels and that of the orientation is within 3 degrees; the standard deviation of the orientation is less than 2 degrees, and that of the position is not more than 1 pixel.

These data show the robustness of the method. On a Pentium IV computer, the recognition runs at over 30 Hz.

37

CONCLUSION AND FUTURE WORK

In this work, we presented a two-stage method for color classification and object detection in robot soccer matches.

In each stage, a simple comparison of the Cb or Cr component of the regions’ average colors realizes team or member identification.

Thus we avoid the difficult task of defining non-overlapping clustering regions for a number of colors.

38

The process of background subtraction simplifies the extraction of color clusters and improves the efficiency of the system.

The experiments and real applications have proved the reliability and robustness of this method. In future work, we would like to explore the case with more robots in one team.

39

REFERENCES

1. J. Bruce, T. Balch and M. Veloso, “Fast and Inexpensive Color Image Segmentation for Interactive Robots,” Proceedings of International Conference on Intelligent Robots and Systems, 2000

2. I. Dahm, S. Deutsch, M. Hebbel and A. Osterhues, “Robust Color Classification for Robot Soccer,” Robot World Cup VII, Lecture Notes in Artificial Intelligence, 2004

3. G. Yasuda and B. Ge, “Object Recognition and Self-Localization for Interactive Soccer Robots,” Proceedings of IEEE International Conference on Robotics and Biomimetics, 2004

4. R. A. Palma-Amestoy, P. A. Guerrero, P. A. Vallejos and J. Ruiz-del-Solar, “Context-dependent Color Segmentation for Aibo Robots,” IEEE 3rd Latin American Robotics Symposium, 2006

5. R. Jain, R. Kasturi and B. G. Schunck, “Machine Vision,” McGraw-Hill Companies, Inc., 1995

6. Q. Zhang, B. Zhong and Y. Yang, “Method of Soccer Robot Visual Tracing Based on Action-vision Coordination,” Computer Engineering and Applications, 2003

7. D. Lee, J. J. Hull and B. Erol, “A Bayesian Framework for Gaussian Mixture Background Modeling,” Proceedings of International Conference on Image Processing, 2003

8. P. KaewTraKulPong and R. Bowden, “An Improved Adaptive Background Mixture Model for Real-time Tracking with Shadow Detection,” Proceedings of 2nd European Workshop on Advanced Video Based Surveillance Systems, 2001.

40