An Effective Background Subtraction Method Based on Pixel Change Classification

Songyin Fu, Gangyi Jiang, Mei Yu
Faculty of Information Science and Engineering, Ningbo University, Ningbo, China
[email protected]

Abstract—Background subtraction is an effective approach commonly used in intelligent monitoring systems to extract moving objects. However, its key step requires a precise and time-varying background model. In this paper, we describe an improved background model together with its updating method, and apply it to a computer vision-based motion detection system able to detect moving objects in real time. Our system first establishes a background model based on a Gaussian model; it then computes the set of statistical parameters of the background model according to the way each pixel changes, and extracts moving objects using the updated background model. Finally, experimental results and a performance measure establishing the confidence of the method are presented.

Keywords—Background subtraction; background updating; traffic and performance monitoring; motion detection

I. INTRODUCTION

Fast and accurate extraction of moving objects in a video sequence is the crucial step in most intelligent monitoring systems. Among moving object detection methods, background subtraction is widely used. The basic process of this method includes three steps. First, a background model is established according to the temporal sequence of the frames. Second, the moving objects are detected based on the difference between the current frame and the background model. Finally, the background model is updated periodically to adapt to changes in the monitored scene. Although the approach is widely applicable, it is difficult to extract the moving objects because of many factors [1], such as noise, background motion, abnormal motion of the objects of interest, and cast shadows. A wide variety of techniques try to improve the performance of extraction, in both accuracy and speed.

Gil Jiménez [2] classified background pixels into four categories based on the way pixel gray levels change, and designed a different background model for each category. However, most methods establish a unified statistical model to represent the background, and a single Gaussian model is commonly selected. Additionally, some researchers have proposed methods to improve the performance of the background model [3-6]. Furthermore, the HSV (hue, saturation, value) and HSI (hue, saturation, intensity) color spaces have also been used to establish background models closer to the human visual system [7-8]. The speed of these methods is crucial for practical application systems [9]. Therefore, a number of techniques, such as updating the background only partially and simplifying the background model [10-11], were put forward to speed up monitoring systems. Unfortunately, accuracy cannot be guaranteed as the extraction speed rises, because of the limitation of system resources.

In this paper, an improved background model, together with an updating method, is presented. The main goal of this method is to classify, in real time, the way each pixel changes when different motion occurs, and to update the background model accordingly. The remainder of this paper is organized as follows. First, a method overview is presented. Second, we describe the background model. Then, the details of its updating process are analyzed. Finally, the experimental results and the conclusions are given.

II. THE PROPOSED METHOD

The algorithm comprises four steps: (1) a temporal differencing method, used to detect intense global gray-level change (IGGC); (2) classification of the way of change for each pixel (PCWC); (3) background model updating based on the result of (2); and (4) motion area detection using background subtraction. A refinement method is also proposed in this part. The detailed flow chart of the method is illustrated in Fig. 1.

A. Background Model

A Gaussian mixture model (GMM) is often used to model the pixels of a complex background, but it entails a high computational cost. So, we simply model each color channel of a pixel with two statistical parameters,

{ u_B^k(x, y), σ_B^k(x, y) },  (1)

where k is the index of the color channel, k ∈ {R, G, B}; u_B^k(x, y) denotes the running expectation of the k-th color channel of the pixel at (x, y), and σ_B^k(x, y) is the standard deviation of the k-th color channel of the pixel at (x, y).

2010 International Conference on Electrical and Control Engineering (ICECE), Wuhan, China. 978-0-7695-4031-3/10 $26.00 © 2010 IEEE. DOI 10.1109/iCECE.2010.1120

Figure 1. Flow chart of the proposed method.

The initialization of the parameters comprises two steps. In the first step, we learn N_int1 frames and calculate the mean of each color channel of a pixel to represent u_B^k(x, y); σ_B^k(x, y) is set to 0 in this step. In the second step, over a further N_int2 frames, we continue calculating u_B^k(x, y) as in the first step, while taking the maximum difference between u_B^k(x, y) and f_n^k(x, y) as σ_B^k(x, y), where f_n^k(x, y) is the intensity of the k-th color channel of the pixel at (x, y) in the n-th frame. Before updating the two parameters when a new frame comes, we must make sure that IGGC has not occurred.
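The two-step initialization above can be sketched in NumPy. The function name, array shapes, and the simplification of measuring deviations against the final mean (rather than the running mean the paper implies) are illustrative assumptions, not from the paper.

```python
import numpy as np

def init_background(frames_step1, frames_step2):
    """Two-step initialization of the per-pixel parameters of Eq. (1).
    frames_step1: the N_int1 learning frames, shape (N1, H, W, 3).
    frames_step2: the N_int2 learning frames, shape (N2, H, W, 3).
    Returns (u_B, sigma_B), each of shape (H, W, 3)."""
    frames_step1 = np.asarray(frames_step1, dtype=np.float64)
    frames_step2 = np.asarray(frames_step2, dtype=np.float64)
    # Step 1 + 2: running expectation over all learning frames
    # (sigma_B is implicitly 0 during step 1).
    all_frames = np.concatenate([frames_step1, frames_step2], axis=0)
    u_B = all_frames.mean(axis=0)
    # Step 2: sigma_B is the maximum deviation |f_n - u_B| observed
    # over the N_int2 frames (simplification: final mean is used).
    sigma_B = np.abs(frames_step2 - u_B).max(axis=0)
    return u_B, sigma_B
```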

B. Intense Global Gray-level Change Detection

A typical intense global gray-level change consists in a fast alteration of the illumination conditions in the scene; this problem is very common in outdoor environments. Under such abnormal illumination changes, the current background model is no longer effective, so the background model should be rebuilt. IGGC is detected by

occurred = { true, if N/M > 70%; false, otherwise },  (2)

where M is the number of pixels selected from a frame (to keep computational complexity low), and N is the number of pixels with an intense inter-frame absolute difference of color intensity (IFdiff), calculated by

N = { N + 1, if ∃k: |f_{n+1}^k(x, y) − f_n^k(x, y)| > Th_intense; N, otherwise },  (3)

where Th_intense is a threshold on the IFdiff. As long as the IFdiff of one channel is greater than Th_intense, the system detects an intense illumination change of the pixel at (x, y).
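Eqs. (2)-(3) amount to counting, over M sampled pixels, how many show a large inter-frame difference in at least one channel. A possible NumPy sketch follows; the threshold values are illustrative, since the paper does not give them.

```python
import numpy as np

def detect_iggc(prev, curr, sample_idx, th_intense=40.0, ratio=0.70):
    """Sketch of Eqs. (2)-(3). IGGC 'occurred' when more than `ratio`
    of the M sampled pixels have an inter-frame absolute difference
    (IFdiff) above th_intense in some color channel.
    sample_idx: (rows, cols) arrays selecting M pixels of the frame."""
    prev = np.asarray(prev, dtype=np.float64)
    curr = np.asarray(curr, dtype=np.float64)
    rows, cols = sample_idx
    diff = np.abs(curr[rows, cols] - prev[rows, cols])  # shape (M, 3)
    # Eq. (3): a pixel counts toward N if IFdiff exceeds the threshold
    # in at least one channel (the "exists k" condition).
    intense = (diff > th_intense).any(axis=-1)
    n = int(intense.sum())   # N
    m = len(rows)            # M
    return n / m > ratio     # Eq. (2)
```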

C. Moving Object Detection

As mentioned above, IGGC detection is performed when a new frame arrives. If IGGC has not occurred, moving object detection (MOD) is the following process. The movement of objects in the two-dimensional image is observed as continuous intensity changes of the pixels they pass over; this is the principle of MOD. Whether a pixel fits the background is determined by the background subtraction test

meet = { true, if ∀k: |f^k(x, y) − u_B^k(x, y)| < λ·σ_B^k(x, y); false, otherwise },  (4)

where λ is a constant used to make the system more robust to noise and to background motion such as swaying leaves; pixels for which meet is false are taken as foreground. However, we still need to refine the result with some morphological operations (shown in Fig. 2).
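The per-pixel test of Eq. (4) vectorizes naturally. A minimal sketch, where the λ value and the small epsilon guarding against a zero standard deviation are our illustrative assumptions, not from the paper:

```python
import numpy as np

def foreground_mask(frame, u_B, sigma_B, lam=2.5):
    """Sketch of Eq. (4): a pixel 'meets' the background when, for
    every channel k, |f^k - u_B^k| < lam * sigma_B^k. The returned
    mask is True where the pixel does NOT fit the background
    (i.e., candidate foreground, before morphological refinement)."""
    frame = np.asarray(frame, dtype=np.float64)
    # Guard against sigma_B == 0 (e.g., right after initialization);
    # the epsilon is an assumption of this sketch.
    sigma = np.maximum(sigma_B, 1e-6)
    meet = (np.abs(frame - u_B) < lam * sigma).all(axis=-1)
    return ~meet
```

A refinement pass (e.g., morphological opening/closing, as the paper suggests via Fig. 2) would normally follow this raw mask.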

Figure 2. Moving object detection. (a) real image (b) initial result (c) refined result.

D. Pixel Change Classification

As the movement of an object is caused by continuous intensity changes of pixels, rather than by the movement of pixels themselves, we divide the ways pixel intensities change into four categories.

1) Changing pixel: caused by IGGC or by an object moving through the pixel. The intensity change of the latter forms the curve illustrated in Fig. 3(a): a step change in intensity, followed by a period of instability (because of the texture inside the object), and then another step back to the original background model.

2) Static pixel: an object moves through the pixel and stays at the location. The intensity change forms the curve illustrated in Fig. 3(b): a step change in intensity, followed by a period of instability; it then settles to a new model as the object stops.

3) Re-move pixel: an object stays at the pixel and then starts to move. Fig. 3(c) gives the intensity change curve of this kind of pixel: a period of instability at the beginning, followed by a step change in intensity; finally, it settles to a new model (the model of the current background).

4) Background pixel: intensity changes caused by lighting or meteorological effects tend to be smooth and do not exhibit large steps. Fig. 3(d) shows the curve of this intensity change process.

Figure 3. Pixel change classification.

Two factors are important for classifying the pixel intensity curve: the existence of a significant step change in intensity, and a stable intensity after passing through a period of instability. The step change is detected by

changed = { true, if ∃k: |f_{n+1}^k(x, y) − f_n^k(x, y)| ≥ Th_change; false, otherwise },  (5)

where Th_change is a threshold. In the real system, the IFdiff of 2-4 consecutive frames is taken into consideration for stability. The key step is to determine, in real time, whether the intensity of a pixel is stable. Paper [12] introduces a time-delay and variance calculation method; unfortunately, it is space- and time-intensive, and the moment at which the intensity becomes stable cannot be captured in time. In our system, a fast accumulation method is used for stability detection:

isstable = { true, if count ≥ S_stable; false, otherwise },  (6)

where S_stable is a number of consecutive frames, and count is calculated by

count = { count + 1, if ∀k: |f_{n+1}^k(x, y) − f_n^k(x, y)| < Th_change; 0, otherwise }.  (7)

If the intensities of all the color channels at pixel location (x, y) exhibit no step change over S_stable consecutive frames, the pixel is considered stable. Then, we can finish the classification according to a rule similar to Eq. (4).
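The accumulation test of Eqs. (6)-(7) can be kept as per-pixel state across frames. A hypothetical NumPy sketch, with illustrative threshold values not given in the paper:

```python
import numpy as np

class StabilityDetector:
    """Sketch of Eqs. (6)-(7): a per-pixel counter grows while no
    channel shows a step change (|f_{n+1} - f_n| < Th_change for all
    k) and resets to 0 otherwise; a pixel is stable once
    count >= S_stable consecutive quiet frames have accumulated."""

    def __init__(self, shape, th_change=30.0, s_stable=5):
        self.count = np.zeros(shape, dtype=int)
        self.prev = None
        self.th_change = th_change
        self.s_stable = s_stable

    def update(self, frame):
        """Feed the next frame (H, W, 3); returns the boolean
        stability map of Eq. (6)."""
        frame = np.asarray(frame, dtype=np.float64)
        if self.prev is not None:
            # "for all k" condition of Eq. (7)
            quiet = (np.abs(frame - self.prev) < self.th_change).all(axis=-1)
            self.count = np.where(quiet, self.count + 1, 0)
        self.prev = frame
        return self.count >= self.s_stable
```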

E. Background Updating

To keep the system functional in the case of dynamic scenes, periodic updating of the background is necessary. When a new frame comes, the updating process is performed according to the result of the classification described in subsection D.

For pixels with a step change, or that are unstable, the parameters of the model are maintained and not updated at that location. For background pixels, an infinite impulse response (IIR) filter is used to update the parameters:

u_{B,n+1}^k(x, y) = α·u_{B,n}^k(x, y) + (1 − α)·f_{n+1}^k(x, y),  (8)

σ_{B,n+1}^k(x, y) = MAX( |f_{n+1}^k(x, y) − u_{B,n+1}^k(x, y)|, σ_{B,n}^k(x, y) ),  (9)

where α is a time constant that specifies how fast new information supplants old observations.
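Eqs. (8)-(9) are a one-line update per parameter. A sketch, with an illustrative α (the paper does not report its value):

```python
import numpy as np

def update_background(u_B, sigma_B, frame, alpha=0.95):
    """Sketch of the IIR update for background pixels, Eqs. (8)-(9):
    the running expectation blends in the new frame with time
    constant alpha, and sigma keeps the larger of its old value and
    the deviation of the new frame from the updated mean."""
    frame = np.asarray(frame, dtype=np.float64)
    u_new = alpha * u_B + (1.0 - alpha) * frame               # Eq. (8)
    sigma_new = np.maximum(np.abs(frame - u_new), sigma_B)    # Eq. (9)
    return u_new, sigma_new
```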

A robust detection system should be able to discriminate stopped objects and even disambiguate overlapping objects. So, the static pixels and the re-move pixels are marked, and at the same time a new model is added for each of them in our system. The new model is initialized while the original one is still used, without being updated, for a period of time (which we call "the observation step") in which no step change happens. Then, the original model is replaced with the new model. If a step change happens during the observation step, we discard the new model and proceed with the processing described before.
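The observation-step logic can be sketched as a small per-pixel state machine. All names and the frame counter below are illustrative, since the paper does not specify the bookkeeping:

```python
class DualModel:
    """Sketch of the observation step for static / re-move pixels:
    a new (candidate) model is initialized alongside the original.
    If no step change happens for `observe_len` consecutive frames,
    the candidate replaces the original model; a step change during
    the observation step discards the candidate instead."""

    def __init__(self, original, candidate, observe_len=25):
        self.model = original
        self.candidate = candidate
        self.observe_len = observe_len
        self.quiet_frames = 0

    def step(self, step_change_happened):
        """Advance one frame; returns the model currently in use."""
        if step_change_happened:
            # Step change during observation: drop the new model.
            self.candidate = None
            self.quiet_frames = 0
        elif self.candidate is not None:
            self.quiet_frames += 1
            if self.quiet_frames >= self.observe_len:
                # Observation step completed: promote the new model.
                self.model = self.candidate
                self.candidate = None
        return self.model
```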

Figure 4. Background updating.

Fig. 4 shows the result of the background updating method based on pixel change classification. Different colors in the pictures of the second row indicate different types of pixels: changing and background pixels are marked blue and white, respectively; static and re-move pixels are marked red; and unstable pixels are marked green. In Fig. 4(a), changing or unstable pixels do not contribute to background updating. In Fig. 4(b), a stopped object is captured while the original background model is maintained. After the observation step, the new model replaces the original one, and the background absorbs the stopped object; Fig. 4(c) shows this updated result.


III. EXPERIMENTAL RESULTS

The proposed background subtraction algorithm was successfully tested on an outdoor traffic scene. The system runs on an Intel Core2 2.0 GHz PC, with a Dell laptop's integrated web camera used to capture video. The processing speed is up to 25 frames per second at a frame resolution of 320×240. A sample of the extraction results is shown in Fig. 5.

Figure 5. Moving object extraction results.

When a car passes through the monitored scene, it is correctly extracted in each frame.

Fig. 6 illustrates a special case of moving object extraction: a stopped car is detected when it starts to move, and the system quickly restores the background after the car leaves.

Figure 6. Detection of re-move object and background restoring.

IV. CONCLUSION

In this paper, an effective background subtraction method is proposed for real-time intelligent monitoring systems. The core of the algorithm comprises (1) a simple and effective background model, and (2) an updating process for the background model based on an advanced classification of pixel changes. The system can extract moving objects in real time, and can detect stopped objects and other special changes of the monitored scene, such as re-moving objects and intense global gray-level change. Experimental results show that the proposed algorithm is promising.

However, we found in our experiments that the method may leave "holes" in the extracted results, especially when the color of an object is similar to the background. Furthermore, "ghosts" appear in some cases, which leads to mis-extraction. To overcome these problems, more sophisticated techniques and further study are required.

REFERENCES

[1] F. Y. Hu, Y. N. Zhang, L. Yao, and J. Q. Sun, "A new method of moving object detection and shadow removing," Journal of Electronics (China), vol. 24, no. 4, pp. 528–536, 2007.

[2] P. Gil Jiménez, S. Maldonado-Bascón, R. Gil-Pita, and H. Gómez-Moreno, "Background pixel classification for motion detection in video image sequences," Lecture Notes in Computer Science, vol. 2686, Springer Berlin/Heidelberg, pp. 718–725, 2003.

[3] C. Stauffer, and W. Grimson. “Adaptive background mixture models for real-time tracking,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 246–252, 1999.

[4] H. Fujiyoshi, A. J. Lipton, and T. Kanade, “Real-time human motion analysis by image skeletonization,” IEICE Transactions on Information and Systems, vol. E87-D(1), pp. 113–120, 2004.

[5] Y. Li, X. He, Z. H. Wei, and J. Wang, “Real-time extraction of object in clutter background,” Optical Technique, vol. 34, no.3, pp. 372–374, 2008.

[6] P. Spagnolo, T. D'Orazio, M. Leo, and A. Distante, "Advances in background updating and shadow removing for motion detection algorithms," Lecture Notes in Computer Science, vol. 3691, Springer Berlin/Heidelberg, pp. 398–406, 2005.

[7] M. A. Nehme, W. Khoury, B. Yameen, and M. A. Al-Alaoui. “Real time color based motion detection and tracking,” Proceedings of the 3rd IEEE International Symposium on Signal Processing and Information Technology, IEEE Press, pp. 696–700, 2003.

[8] C. Rambabu and W. Woo, "Robust and accurate segmentation of moving objects in real-time video," The 4th International Symposium on Ubiquitous VR, pp. 75–78, 2006.

[9] T. Rodríguez and N. García, "An adaptive real-time traffic monitoring system," Machine Vision and Applications, Springer Berlin/Heidelberg, pp. 1–22, 2009.

[10] Y. L. Yu, H. Z. Lu, “Video object segmentation technology based on background construction,” Computer Engineering & Science, vol. 28, no. 1, pp. 36–38, 2006.

[11] H. J. Elias, O. U. Carlos, and S. Jesus, "Detected motion classification with a double-background and a neighborhood-based difference," Pattern Recognition Letters, vol. 24, pp. 2079–2092, 2003.

[12] R. Collins, A. Lipton, T. Kanade, “A system for video surveillance and monitoring,” VSAM final report, Carnegie Mellon University, Technical Report: CMU-RI-TR-00-12, 2000.
