Page 1: Difference and accumulative difference pictures in dynamic scene analysis

Difference and accumulative difference pictures in dynamic scene analysis

Ramesh Jain

___

Recognition of objects in a computer vision system requires extraction of the images of all objects. If the object and camera are moving then the task may be to extract images of only the moving objects. We discuss methods for combining information obtained from the positive, negative and absolute difference and accumulative difference pictures. We present an approach for the segmentation of dynamic scenes using information available in the difference and accumulative difference pictures. In addition to the efficacy of the approach, a very attractive feature is the possibility of implementing it in real time using special hardware.

Keywords: image extraction, difference pictures, segmentation

Analysis of a frame sequence to find the motion characteristics of objects, and the structure of the objects and the background, is attracting increasing attention from researchers in computer vision [1]. Most techniques in dynamic scene analysis exploit changes taking place in frames due to motion of the objects and the observer. Some early techniques segmented each frame of the sequence and used matching techniques to detect changes in the position of segments [2]. Some success can be achieved using these techniques in the two-dimensional world, where motion of the object does not change the shape of the segment too much. In the case of three-dimensional motion of objects some success may be achieved using sophisticated matching techniques [3]. A major problem with these approaches is the segmentation of each frame. Segmentation of images is not yet well understood except for some highly constrained domains.

The next most obvious scheme for change detection is to compare consecutive images using some similarity measure. If the similarity measure is defined at pixel level then the corresponding pixels of consecutive frames are compared using this measure. In many applications it is desirable to prepare a binary difference picture to represent the changes in the frames under comparison. The similarity measure used for comparison of the corresponding pixels may be based on the intensity of the pixel or the local characteristics of the intensity values for the pixels. After a difference picture has been formed, other processes may be used for the extraction of information from the frames. Jain [6] has shown that simple features in difference pictures may be used to extract some approximate, but useful, information about the objects and their motion characteristics. This information may be later used by a system for dynamic scene analysis to constrain searches or in segmentation of the scene. Segmentation of dynamic scenes to extract images of moving objects has received attention from several researchers [7-14]. Some approaches were proposed for the segmentation of a dynamic scene starting with the difference pictures [7-10].

Department of Computer Science, General Motors Research Laboratories, Warren, MI 48090, USA. Permanent address: Electrical and Computer Engineering Department, University of Michigan, Ann Arbor, MI 48109, USA

In this paper we suggest the use of positive difference pictures (PDPs), negative difference pictures (NDPs) and absolute difference pictures (DPs), and the corresponding accumulative difference pictures (ADPs), for the segmentation of dynamic scenes. These pictures allow a very simple algorithm for the segmentation of scenes. Problems due to running occlusion are indicated and approaches to segmentation in the presence of running occlusion are proposed. It is shown that frame-to-frame displacement of an object image may be obtained easily by tracking difference regions.

DIFFERENCE PICTURES

In the simplest form, a difference picture can be prepared by comparing corresponding pixels of the two frames under consideration. The method of comparison may be a simple comparison of intensities or a comparison based on the characteristics of intensities of the neighbourhood around the point. Nagel proposed the use of the likelihood ratio used by Yakimovsky [15, 16]. By considering m columns and n rows to be a superpixel we can compute the second-order statistics of the intensity values for a superpixel and compute the likelihood ratio

Image and Vision Computing, vol 2 no 2, May 1984. 0262-8856/84/02099-10 $03.00 © 1984 Butterworth & Co (Publishers) Ltd. 99


λ = {(S1 + S2)/2 + [(m1 - m2)/2]^2}^2 / (S1 S2)

where mk and Sk denote the mean and variance of the intensities for the superpixel in the kth frame.
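As an illustration, the ratio can be computed per superpixel in a few lines (a sketch, not the paper's code; the function name is ours, and we take a large ratio to indicate change):

```python
import numpy as np

def likelihood_ratio(block1, block2):
    """Likelihood ratio for two corresponding m x n superpixels,
    following the formula above: a value near 1 suggests no change,
    a large value suggests the superpixel has changed."""
    m1, m2 = block1.mean(), block2.mean()
    S1, S2 = block1.var(), block2.var()
    num = ((S1 + S2) / 2 + ((m1 - m2) / 2) ** 2) ** 2
    return num / (S1 * S2)
```

Applied to an unchanged superpixel the ratio is 1; a shift in mean intensity between the two frames drives it up rapidly.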

This ratio has been used by Jain and Nagel [7], and by Jain et al. [8], for the segmentation of images in real-world frame sequences. Recently, Nagel [17] has argued that the likelihood ratio based on an approximation of intensity by quadratic surfaces results in very reliable change detection and computation of velocity at a point in a sequence.

In all these methods the aim is to detect whether the frames of the sequence have changed or not; no effort is made to retain the relative intensity characteristics of the two frames. Yachida et al. [18] used the knowledge that the objects are darker than the background to extract information from the sign of the difference of intensities. We show that the information about the relative intensity changes allows extraction of more pertinent information in a system for dynamic scene analysis without any complex computation.

Let us define the PDP, NDP and DP for a pair of frames F1(x, y) and F2(x, y) as follows:

if F1(x, y) - F2(x, y) > T then PDP(x, y) = 1, otherwise PDP(x, y) = 0

if F1(x, y) - F2(x, y) < -T then NDP(x, y) = 1, otherwise NDP(x, y) = 0

if ABS[F1(x, y) - F2(x, y)] > T then DP(x, y) = 1, otherwise DP(x, y) = 0

Here T is a threshold. By using the sign of the difference, we may be able to differentiate easily between the region of a frame that is covered by a moving object and the region where the background is uncovered by the object. In applications where the background has a definite relationship with the objects, ie objects are darker than the background, the covering and uncovering information is obtained just by inspecting the sign of the differences, as in the work of Yachida et al. [18]. In more complex cases involving a complex background and many objects of different intensities, it may still be possible to get such information by using the processing discussed below.
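The three definitions can be sketched directly in array code (assuming, as the definitions seem to intend despite the garbled threshold symbols in the scan, a single positive threshold T with -T as the NDP bound; the function name is ours):

```python
import numpy as np

def difference_pictures(f1, f2, T):
    """Binary positive, negative and absolute difference pictures for
    two intensity frames, per the definitions in the text (a sketch)."""
    d = f1.astype(int) - f2.astype(int)
    pdp = (d > T).astype(np.uint8)    # frame 1 brighter than frame 2
    ndp = (d < -T).astype(np.uint8)   # frame 2 brighter than frame 1
    dp = (np.abs(d) > T).astype(np.uint8)
    return pdp, ndp, dp
```

With a symmetric threshold the DP is exactly the union of the PDP and the NDP, and the sign carries the covering/uncovering information discussed above.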

SIZE OF DIFFERENCE PICTURES

For a regular difference picture obtained by comparing two contiguous frames, the positive and negative regions should be the same size for a rigid object undergoing only planar translation. If the frame rate is such that there is only limited motion between two contiguous frames, then in the case of general three-dimensional motion the difference in the size of the positive and negative regions will not be significant either.

This can be verified by considering Figure 1. If the area of the object is A, then

A = A_P + A_C = A_C + A_N

Hence

A_P = A_N

Here A_P, A_N and A_C respectively are the positive difference area, the negative difference area, and the area of the object mask where no changes have taken place yet. If an object motion results in several positive and negative difference regions, then the sum of positive areas will be equal to the sum of negative areas.

Figure 1. A composite frame showing two positions of an object superimposed (the positive and negative difference areas are the result of covering and uncovering of the background by the object)
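The equality can be checked numerically; a minimal sketch with a bright 10 x 10 square translated three pixels to the right (all values here are our own test data):

```python
import numpy as np

f1 = np.zeros((32, 32), dtype=np.uint8)
f1[10:20, 8:18] = 200           # object intensity s = 200 > background b = 0
f2 = np.zeros_like(f1)
f2[10:20, 11:21] = 200          # the same object displaced by 3 pixels

d = f1.astype(int) - f2.astype(int)
T = 50
A_P = int((d > T).sum())        # strip vacated by the object (positive)
A_N = int((d < -T).sum())       # strip newly covered by the object (negative)
assert A_P == A_N == 30         # 10 rows x 3 columns on each side
```

For displacements larger than the object's width both areas grow to the full object area, but they remain equal, as the derivation requires.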

ACCUMULATIVE DIFFERENCE PICTURES

For a sequence of frames, ADPs may be prepared by comparing the reference frame with each frame of the sequence. Usually the first frame of the sequence is considered as the reference frame in the initial, maybe transitory, phase of analysis. This frame may be modified on the basis of the subsequent analysis. An ADP contains a partial history of changes taking place at a point in the sequence. Jain and Nagel [7] used ADPs for the extraction of the images of moving objects. They computed some properties of regions in an ADP and some properties of a difference picture formed by comparing the reference frame and the current frame. They used the knowledge derived from these properties for the recovery of masks of moving objects in the reference and the current frames.

Jain and Nagel found that by discriminating between positive and negative changes and preparing a positive accumulative difference picture (PADP), a negative accumulative difference picture (NADP) and an absolute accumulative difference picture (AADP), more information is retained. The ADPs are prepared using the difference pictures:

PADP(x, y) ← PADP(x, y) + PDP(x, y)

NADP(x, y) ← NADP(x, y) + NDP(x, y)

AADP(x, y) ← AADP(x, y) + DP(x, y)

Note that the difference pictures are binary, but the ADPs are not. A very important characteristic of the ADPs is that no entry in an ADP can decrease in value in the normal processing.
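The update rules translate directly into an accumulation loop (a sketch; the function name is ours, and with a symmetric threshold the AADP equals the sum of the PADP and the NADP):

```python
import numpy as np

def accumulate_adps(frames, T):
    """Accumulate PADP, NADP and AADP by comparing the reference frame
    (frames[0]) with every later frame, per the update rules above."""
    ref = frames[0].astype(int)
    padp = np.zeros(ref.shape, dtype=int)
    nadp = np.zeros(ref.shape, dtype=int)
    aadp = np.zeros(ref.shape, dtype=int)
    for f in frames[1:]:
        d = ref - f.astype(int)
        padp += d > T            # PADP(x, y) <- PADP(x, y) + PDP(x, y)
        nadp += d < -T
        aadp += np.abs(d) > T
    return padp, nadp, aadp
```

Since each frame can only add 0 or 1 to an entry, no entry ever decreases, which is the monotonicity property exploited below.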

Let us consider a surface of uniform intensity s against a uniform background of intensity b such that s > b. The motion of the surface results in displacement of the surface in the frames of the sequence. In Figure 2 we show four frames of the sequence representing uniform planar motion of the object. Consider the first frame of the sequence as the reference frame. The ADPs after the tenth and twentieth frames are shown in Figure 3 and Figure 4 respectively. Jain and Nagel [7] exploited the monotonicity of entries in the AADP to determine the direction of motion and to separate moving components of the scene from stationary components. The direction of motion was determined by computing properties of a region on the basis of entries in the region. The separation could only be performed by computing properties of regions in the ADP and in a difference picture obtained by comparing the reference frame with the current frame. This approach has been applied to many real scenes [7, 19]. In complex situations, running occlusion (discussed in a later section) posed a serious problem for this approach.

Figure 4. The ADPs after the twentieth frame

On considering the PADP and the NADP we find that the PADP contains nonzero entries only in the area occupied by the moving surface in the reference frame and the NADP contains entries only in the area occupied by the part of the surface in the other frames. Thus the regions containing nonzero entries due to the motion of the surface in the PADP and the NADP are disjoint and remain disjoint for any unidirectional motion of the surface. When the surface is completely displaced from its position the region in the PADP stops growing in size, ie no new nonzero entries are added to the region, but the region in the NADP continues to grow. We call a region that has stopped growing in size, but whose entries are still increasing in value, a mature region. A pixel, and a region formed therefrom, is called stale if the entry at the pixel has stopped growing.

Figure 2. Four frames of a synthetic scene

SIMPLE SEGMENTATION OF THE REFERENCE FRAME

Segmentation of the reference frame to obtain images of the moving objects can be done using ADPs. As shown earlier, the motion of a surface results in entries in every ADP. The region in one ADP stops growing in size, although the entries continue increasing in value, but the regions in the other two ADPs continue growing in size. The absence of growth of a region in an ADP indicates that the surface has moved out of its original position in the reference frame. The mature region thus formed represents a mask for the image of the surface in the reference frame. An algorithm for the extraction of masks of the moving objects in the reference frame is shown in Figure 5. The current frame mask for the object may be obtained from the other ADP for the object, as discussed later.

Figure 3. The ADPs after the tenth frame



Figure 5. Algorithm for the extraction of masks of moving objects

This method of mask extraction for moving objects using ADPs is simpler than the one used by Jain and Nagel [7]. Since no region in an ADP can reduce in size, maturity of a region can be detected simply by comparing the size of the region after the previous and the current frames: if the size has not changed, the region has matured. Another possibility is to compare the ADPs at the end of successive frames to find whether a nonzero term has been added to the region. This approach can be implemented in hardware and hence can give masks of objects quickly.
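A minimal sketch of the size-based maturity test on a synthetic sequence (a 4-pixel-wide bright bar moving one pixel per frame; all names and sizes here are our own):

```python
import numpy as np

def make_frame(col):
    """Frame with a bright 4-row x 4-column bar starting at `col`."""
    f = np.zeros((8, 16), dtype=int)
    f[2:6, col:col + 4] = 100
    return f

frames = [make_frame(c) for c in range(8)]    # frames[0] is the reference
ref, T = frames[0], 50
padp = np.zeros((8, 16), dtype=int)
sizes = []
for f in frames[1:]:
    padp += (ref - f) > T
    sizes.append(int((padp > 0).sum()))       # region size after each frame

# The region grows by 4 pixels per frame until the bar has fully left its
# reference position, then the size freezes: the region has matured, even
# though its entries keep increasing in value.
assert sizes == [4, 8, 12, 16, 16, 16, 16]
```

Only the previous and current region sizes are needed, which is why the test lends itself to a hardware implementation.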

A serious limitation of the method is that, to recover the mask of a moving object, the system has to wait for the object to move out of its original position. In many applications the time required for the object to move may not be much, but running occlusion may take place, making segmentation difficult.

RUNNING OCCLUSION

By running occlusion we mean a situation where a moving object image covers the area occupied by another moving object image in the reference frame before the latter image has completely moved out of its position in the reference frame. In the proposed scheme for extracting the masks of moving objects a problem may occur if, adjacent to a region formed in an ADP due to an uncovering of the background, another region is created due to some other moving object. This may take place in the following situation. Suppose that an object O1 with intensity s1 > b is moving to the right and another object O2 with intensity s2 < b is moving towards the location of O1 in the reference frame. The motion of O1 creates a region in the NADP due to the uncovering of the background. Object O2 creates a region in the NADP due to covering of the background. The region due to O1 will mature when O1 has moved out but the region due to O2 will continue to grow. If O2 moves to the area occupied by O1 in the reference frame, the two regions will merge. Depending on the relative sizes and movements of O1 and O2, the resulting region either may not mature at the correct time or may mature but result in a bad mask.

Running occlusion is an undesirable event. To prevent running occlusion, knowledge of the cause behind the formation of the region (ie covering or uncovering of the background) would be useful. The maturity test allows determination of the cause but takes place too late to prevent running occlusion. The rate of growth of regions and the rate of increase in value of some entries give the required information about the covering and uncovering in all but exceptional cases.

Assuming simple and uniform translation for movement of an object we find that the number of new pixels added to the region as a result of covering of the background equals the size of the corresponding difference picture with the previous frame. The increase in size of the uncovering region, however, is usually less than the size of the corresponding regions from the difference pictures. It can be seen that the rate of growth of the covering region remains uniform but the rate decreases uniformly for the uncovering region. Consider a convex surface translating uniformly in a plane parallel to the image plane. In Figure 6, three superimposed positions of the surface are shown. From this figure we find that

P1 - P2 = X1 + X2

and

O2 - O1 = X1 + X2

where P1 and P2 are the still increasing entries in the PADP after frames 2 and 3, O1 and O2 are the still increasing entries in the NADP after frames 2 and 3, and X1 and X2 are the stale regions at the two sides of the surface, respectively.

This shows that every new area added to the PADP is less than the addition after the previous frame, and stale entries are created in the NADP after each frame. The number of stale entries created in the NADP is the same as the number by which the growth of the PADP decreases. Thus, by keeping track of the rate of growth of a region, we may find the regions due to uncovering of the background. A nice feature of this property is that the test does not require any correspondence: it is based only on the property of the region.
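The rate-of-growth property can be illustrated with a convex (diamond-shaped) bright surface translating one pixel per frame (a sketch on our own synthetic data; as in the text, the PADP collects the uncovering entries and the NADP the covering entries for a surface brighter than the background):

```python
import numpy as np

ys, xs = np.mgrid[0:16, 0:24]

def diamond(cx):
    """Convex diamond-shaped mask of 'radius' 4 centred at (cx, 8)."""
    return (np.abs(xs - cx) + np.abs(ys - 8)) <= 4

ref = diamond(6)                      # bright surface in the reference frame
padp = np.zeros((16, 24), dtype=bool)
nadp = np.zeros((16, 24), dtype=bool)
padp_growth, nadp_growth = [], []
for t in range(1, 7):
    obj = diamond(6 + t)              # surface displaced t pixels right
    p_new = ref & ~obj & ~padp        # uncovering: surface left its old place
    n_new = obj & ~ref & ~nadp        # covering: surface over new background
    padp |= p_new
    nadp |= n_new
    padp_growth.append(int(p_new.sum()))
    nadp_growth.append(int(n_new.sum()))

assert nadp_growth == [9, 9, 9, 9, 9, 9]      # covering: uniform growth
assert padp_growth == [9, 7, 7, 5, 5, 3]      # uncovering: decreasing growth
```

The covering region gains a constant 9 pixels per frame, while the uncovering region's growth falls off, which is the signature used to tell the two causes apart without any correspondence.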

An exceptional case is that of a rectangular surface moving parallel to its side.

One other property that may be exploited is the fact that, in the region due to the covering, there will be some stale entries. These are the areas in a part of the picture which the object entered during its motion but which it has since cleared. Thus in this area of the scene, in both the reference frame and the current frame, the background exists, resulting in no new difference entries. In the uncovering region there will be no stale entries.

Figure 6. A composite frame showing superimposed positions of a moving object in three frames

CORRESPONDENCE

For understanding motion it may be necessary to find the image of a moving object in the reference and current frames. Using the maturity test we may obtain the image of the object in the reference frame. To find the image of the object in the current frame, we use the fact that the AADP is a superregion for an object in the sense that it contains both regions due to the object, in the PADP as well as in the NADP. As we discussed above, the covering region contains the current-frame image of the moving object together with the stale regions. These facts allow extraction of the current-frame image using the following strategy.

After a region in an ADP matures, extract the image of the object from the reference frame. Find the superregion in the AADP and from this find the region in the remaining ADP due to the covering of the background by the same object. From this region remove the stale subregions to obtain the mask of the moving object image in the current frame.
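The last step can be sketched with our own bookkeeping: a hypothetical `last_hit` array records, for every pixel, the index of the last frame whose difference picture touched it, so stale pixels are those last touched before the current frame.

```python
import numpy as np

def current_frame_mask(region, last_hit, t):
    """Remove the stale subregions (entries that stopped increasing
    before frame t) from a covering region, leaving the object's mask
    in the current frame (a sketch of the strategy above)."""
    stale = last_hit < t
    return region & ~stale
```

For example, if a covering region spans six pixels of a row and the first three were last touched at frame 3, only the last three survive at frame 5 as the current-frame mask.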

FINDING A TRAJECTORY BY TRACKING DIFFERENCE REGIONS

If our aim is to find the trajectory of a surface then we can track the difference pictures. For a sequence P1, P2, P3, ..., Pn, ... a sequence of difference pictures may be formed. Let us assume for the time being that we have a method to determine the correspondence of regions. Thus by finding the centroid of regions in each frame we find a trajectory of the surface. The trajectory may be approximated using curve fitting techniques.

This approach gives disturbing results due to the digital nature of the pictures. If the frame-to-frame displacement is not an integer multiple of the pixel size, the difference regions are not of the same size and shape even for regular-shaped objects like rectangles. The centroids are influenced by the shape and size of the resulting difference regions and hence give wrong results. The effect of quantization on the trajectory determination may be reduced if the motion of the surfaces is uniform and its nature is known. In such cases least-squares methods can be used to approximate the trajectory. If the trajectory is determined over a long sequence, the error due to quantization will be reduced.
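A small numerical sketch of both effects (our own data, not the paper's; the true horizontal displacement is 0.7 pixel per frame, and centroids are rounded to the pixel grid):

```python
import numpy as np

t = np.arange(15)
true_x = 0.7 * t
measured_x = np.round(true_x)            # quantized centroid positions

steps = np.diff(measured_x)              # frame-to-frame estimates
# `steps` fluctuates between 0 and 1 instead of staying at 0.7

slope = np.polyfit(t, measured_x, 1)[0]  # least-squares estimate
# with 15 frames the fitted slope is within a few percent of 0.7
```

The raw frame-to-frame differences are useless for estimating the displacement, while the least-squares slope over the whole sequence recovers it closely, and the fit improves as more frames are used.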

Consider a rectangular object with uniform translational motion. In Figure 7, we show three different motions of a rectangular object. The coordinates of points on the trajectories are shown in Table 1. Note the wild fluctuations in the trajectories when the translational components are not multiples of the pixel size. In Table 2 we show the least-squares fit to the trajectories. Note the improvement in the approximation of motion parameters with the increase in the number of frames.

It is possible to obtain a two-dimensional trajectory of the object using curve fitting techniques. If a sufficient number of frames is available such a trajectory may be used for obtaining information about the three-dimensional motion of the object. Trajectory-based approaches for this have been proposed by Todd [20] and by Sethi and Jain [21]. In a general dynamic scene analysis system, the frame-to-frame displacement thus obtained may be used for prediction of the object in the next frame. Although initially this prediction may not be precise, the accuracy will improve significantly if the number of frames is increased.

It is possible to extract a feature of the difference picture and to obtain the trajectory of the object by tracking the feature rather than the difference region. A problem with tracking a feature is that, in the case of three-dimensional motion of the objects, features may appear and disappear depending on the motion and the shape of the object. The feature-based approach will have to use a sophisticated matching and book-keeping method for tracking. The centroid-based approach absorbs noise much better. Dreschler and Nagel [19] tracked a region corresponding to a car over a sequence in which the size of the region changed by a factor of 2.

SEGMENTATION

For the segmentation of a frame sequence we exploit the properties of the ADPs discussed above. It is possible to find whether a region in an ADP is due to covering or uncovering after three frames. Once it is known that a region is due to uncovering, the region may be transferred to another frame called the segmented picture (SP). The SP contains masks for the objects that are not mature and contains images for the objects that are mature. All regions transferred to the SP from an ADP are reinitialized to zero in the ADP. This guarantees


Figure 7. A square object moving with different velocities (the frame-to-frame displacements for the three sequences are (a) (1, 1), (b) (0.7, 1.5) and (c) (2.8, ...))

Table 1. Coordinates of points on the trajectories shown in Figure 7 ((x, y) pairs)

Trajectory 1: (36.12, 36.12), (37.12, 37.12), (38.12, 38.12), (39.12, 39.12), (40.12, 40.12), (41.12, 41.12), (42.12, 41.12)
Trajectory 2: (36.88, 34.77), (36.12, 38.12), (42.00, 34.50), (37.12, 41.12), (39.88, 40.77), (44.00, 39.00), (40.88, 43.77)
Trajectory 3: (37.84, 38.84), (40.84, 41.84), (44.59, 44.46), (47.54, 47.81), (48.84, 51.84), (52.59, 54.46), (54.84, 58.84)

Table 2. Least-squares fit to the trajectories in Figure 7

Trajectory  Number of frames  Slope  Error
1           5                 1.00   0.00
1           15                1.00   0.00
2           5                 0.196  2.75
2           15                0.311  3.32
3           5                 0.754  1.00
3           15                0.843  0.94

that the region will not merge with a region in the same ADP due to covering by some other object. After every modification of the SP, it is processed to find any mature segment. A mature segment in the SP is a signal that the projection of the object in the reference frame has been uncovered completely. If desired, we may substitute background from the current frame into the location of the object in the reference frame, provided there is no other moving object in the current frame at the current time. If there is any other object covering the area, as indicated by an ADP, then the algorithm waits until the area has been cleared for the substitution.

Although the above approach is feasible, a minor problem in the implementation is to detect whether there is any possibility of running occlusion. We decided to follow a slightly different approach that has some resemblance to the approach proposed by Jain et al. [22]. We process five consecutive frames of the sequence and generate difference pictures and ADPs for these frames. The regions in the PADPs and NADPs are analysed to study their growth characteristics. The growth characteristics help us in determining whether a region is maturing or not.

For a maturing region we can copy the reference frame mask from the corresponding position in the reference frame into the SP, copy the background for the part that has been uncovered by the moving object from the current frame into the reference frame, and initialize the entries to zero. This strategy of updating the reference frame as soon as the maturing region has been determined is intended to solve the problems due to running occlusion. The current-frame masks can be determined using the fact that the nonmaturing region is due to two different types of entries: the stale entries or the current-frame mask entries. Thus the current-frame masks of the objects may be determined by copying the difference picture contributing to the nonmaturing region.

EXPERIMENTS

We report our experience with a synthetic sequence and a laboratory sequence. The synthetic sequence demonstrates the efficacy of the algorithm in the case of running occlusion. The laboratory sequence shows the potential of the approach in real applications.

To implement the ideas discussed in the previous section we defined the average maturity (Avmature) and average staleness (Avstale) of a region by

Avmature = Avchange / Size

Avstale = Nstale / Size

where Avchange is the average change in the number of new entries added to the region after each frame, Nstale is the number of stale entries in the region, and Size is the size of the region. A maturing region is expected to have positive Avmature. A growing region is expected to have negative Avmature and positive Avstale. To filter the effects of noise we decided that a region will be considered to be maturing if it has Avmature > Th1; a region will be considered growing if it has Avmature < -Th1, or if it has Avmature < 0 and Avstale > Th1. The algorithm is shown in Figure 8.
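A sketch of the resulting decision rule (names are ours; we read Avchange as the average per-frame decrease in the number of new entries, so that a maturing region scores positive as the text expects, and we return 'undecided' when neither test fires):

```python
def classify_region(new_entries, n_stale, size, th):
    """Classify an ADP region as maturing or growing from the average
    maturity and staleness defined above. `new_entries` lists the
    number of new entries added to the region after each frame."""
    steps = len(new_entries) - 1
    avchange = sum(a - b for a, b in
                   zip(new_entries, new_entries[1:])) / steps
    avmature = avchange / size           # Avmature = Avchange / Size
    avstale = n_stale / size             # Avstale = Nstale / Size
    if avmature > th:
        return "maturing"
    if avmature < -th or (avmature < 0 and avstale > th):
        return "growing"
    return "undecided"
```

A region whose growth falls off frame after frame is reported as maturing, while one whose growth holds steady or increases, with stale entries accumulating inside it, is reported as growing.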

Four frames of the synthetic scene are shown in Figure 9. The PADP, the NADP, and the current and reference segmented frames after the fifth, tenth and fifteenth frames are shown in Figures 10, 11 and 12 respectively. In Figure 9, two objects are moving such that running occlusion takes place. The brighter object is displaced from its position in ten frames and the other object is displaced from its position in 15 frames. The objects are extracted incrementally. The running occlusion does not cause any problems because of the incremental extraction.

The laboratory scene m Figure 13 contains a con- necting rod moving on a overhead conveyor. There is some extraneous equipment visible in the lower left- hand part of the scene, and in the later part of the sequence the top part of the scene was changed between two frames. Figure 13 shows four frames of the sequence. The PADP, the NADP, and the reference and current segmented frames after the fifth, tenth and fiE teenth frames are shown in Figures 14,15 and 16 respec- tively. A complete image of the connecting rod was not obtained in the fifth frame because part of the connect- ing rod was not recovered by the algorithm; it had not

6 PADP 6 NADP 6 ADP

Figure 8. Algorithm for the extraction of the masks of moving objects in the reference and current frames

Figure 9. Fourframes of a synthetic scene (the brighter object is displaced from its position in ten frames, the darker object in 1.5 frames)

yet moved out of its projection in the reference frame. In all other segmented frames, however, the images of the connecting rod recovered by the algorithm capture all the details of the connecting rod. The change in background at the top of the image between the tenth frame

vol 2 no 2 may 1984 105


Figure 10. The PADP, NADP, segmented reference frame and segmented current frame (a, b, c and d respectively) after the fifth frame (the object masks have been only partially recovered)

Figure 12. The PADP, NADP, segmented reference frame and segmented current frame (a, b, c and d respectively) after the fifteenth frame (the object masks have been completely recovered)


Figure 11. The PADP, NADP, segmented reference frame and segmented current frame (a, b, c and d respectively) after the tenth frame (the mask for the brighter object has been completely recovered but that for the darker object has been only partially recovered)

and the eleventh frame does not affect the performance of the algorithm. The ADPs may have a region corresponding to the change, but the region will not be considered as due to a moving object.


Figure 13. Frames of a sequence: a, frame 1; b, frame 5; c, frame 10; d, frame 15 (the white strip in the top part of the fifteenth frame was introduced in the eleventh frame)

106 image and vision computing



Figure 14. The PADP, NADP, reference segmented frame and current segmented frame (a, b, c and d respectively) after the fifth frame

DISCUSSION

Difference pictures and ADPs have been used in dynamic scene analysis by many researchers. In this paper we showed that we can extract more information by considering the sign of the difference and thereby preparing P(A)DPs, N(A)DPs, DPs and ADPs. This information plays a very important role in the segmentation of dynamic scenes and the extraction of the motion characteristics of moving objects. It was also shown that by tracking difference pictures we can obtain the frame-to-frame displacement of the images of a moving object. The segmentation algorithm proposed in this paper works in the presence of running occlusion. The algorithm also gives images whose quality is satisfactory for recognition of a moving object. Most of the number crunching required by the segmentation algorithm appears to be directly implementable in hardware. The amount of computation to be performed on the processing unit for real-time segmentation of a dynamic scene in

Figure 15. The PADP, NADP, reference segmented frame and current segmented frame (a, b, c and d respectively) after the tenth frame

many industrial environments is well within the capability of a processor such as the Motorola 68000.
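The sign-separated accumulation discussed above can be illustrated with a short sketch that compares every frame against a fixed reference (first) frame. The threshold value T, the function name and the convention that a positive entry means the reference frame is brighter than the current frame are assumptions for this example, not the paper's exact definitions.

```python
import numpy as np

def accumulate(frames, T=10):
    """Return (PADP, NADP, ADP) after comparing every frame with the first."""
    ref = frames[0].astype(np.int64)             # fixed reference frame
    padp = np.zeros(ref.shape, dtype=np.int64)   # reference brighter than current
    nadp = np.zeros(ref.shape, dtype=np.int64)   # reference darker than current
    for f in frames[1:]:
        d = ref - f.astype(np.int64)             # signed difference picture
        padp += d > T                            # positive entries accumulate
        nadp += d < -T                           # negative entries accumulate
    return padp, nadp, padp + nadp               # ADP ignores the sign
```

A pixel covered by a bright object in the reference frame keeps incrementing its PADP entry once the object has moved away, which is what makes the accumulated counts usable for incremental mask extraction.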

ACKNOWLEDGEMENTS

I am sure that many ideas in this work are direct or indirect results of my discussions with Susan Haynes, H-H Nagel, B Neumann and Douglas P Rheaume at different times. I am grateful to them.

REFERENCES

1 Huang, T Image sequence analysis Springer-Verlag, Heidelberg, FRG (1981)

2 Aggarwal, J K and Duda, R O 'Computer analysis of moving polygonal images' IEEE Trans. Comput. Vol 24 (1975) pp 966-976





Figure 16. The PADP, NADP, reference segmented frame and current segmented frame (a, b, c and d respectively) after the fifteenth frame

3 Barnard, S T and Thompson, W B 'Disparity analysis of images' IEEE Trans. Pattern Anal. Mach. Intell. Vol 2 (1980) pp 333-340

4 Roach, J W and Aggarwal, J K 'Computer tracking of objects moving in space' IEEE Trans. Pattern Anal. Mach. Intell. Vol 1 (1979) pp 127-135

5 Roach, J W and Aggarwal, J K 'Determining the movement of objects from a sequence of images' IEEE Trans. Pattern Anal. Mach. Intell. Vol 2 (1980) pp 554-562

6 Jain, R 'Extraction of motion information from peripheral processes' IEEE Trans. Pattern Anal. Mach. Intell. Vol 3 (1981) pp 489-503

7 Jain, R, Militzer, D and Nagel, H-H 'Separation of stationary from nonstationary components of a scene' Proc. Int. Joint Conf. on Artificial Intelligence (1977) pp 612-618

8 Jain, R, Martin, W N and Aggarwal, J K 'Segmentation through the detection of changes due to motion' Comput. Graphics Image Process. Vol 11 (1979) pp 13-34

19 Dreschler, L and Nagel, H-H 'Volumetric model and 3-D trajectory of a moving car derived from monocular TV-frame sequences of a street scene' Proc. Int. Joint Conf. on Artificial Intelligence (1981)

20 Todd, J T 'Visual information about rigid and nonrigid motions: a geometric analysis' J. Exp. Psychol. (1982) pp 238-252

21 Jain, R and Nagel, H-H 'On the analysis of accumulative difference pictures from image sequences of real world scenes' IEEE Trans. Pattern Anal. Mach. Intell. Vol 1 (1979) pp 206-213

22 Sethi, I K and Jain, R 'Determining three dimensional structure of rotating objects using a sequence of monocular views' Tech. Rep. 73-1, Department of Computer Science, Wayne State University, Detroit, MI, USA (1983)

9 Jain, R 'Segmentation of frame sequences obtained by a moving observer' Publ. GMR-4247, General Motors Research Laboratories (January 1983)

10 Jayaramamurthy, S N and Jain, R 'An approach to the segmentation of textured dynamic scenes' Comput. Vision, Graphics Image Process. Vol 21 (1983) pp 239-261

BIBLIOGRAPHY

Ballard, D and Brown, C M Computer vision Prentice-Hall, Englewood Cliffs, NJ, USA (1982)

Jain, R 'Dynamic scene analysis' in Kanal, L and Rosenfeld, A (eds) Progress in pattern recognition North-Holland, Amsterdam, Netherlands (in the press)

Haynes, S and Jain, R 'Time varying edge detection' Comput. Graphics Image Process. Vol 21 (1982) pp 345-367


11 Fennema, C L and Thompson, W B 'Velocity determination in scenes containing several moving objects' Comput. Graphics Image Process. Vol 9 (1979) pp 301-315

12 Potter, J L 'Motion as a cue to segmentation' Proc. Milwaukee Symp. on Automatic Control (1974) pp 100-104

13 Prager, J M 'Segmentation of static and dynamic scenes' Computers and Information Science Tech. Rep. COINS TR79-7, University of Massachusetts, Amherst, MA, USA (May 1979)

14 Thompson, W B 'Combining contrast and motion for fragmentation' IEEE Trans. Pattern Anal. Mach. Intell. Vol 2 (1980)

15 Nagel, H-H 'Formation of an image concept by analysis of systematic variations in the optically perceptible environment' Comput. Graphics Image Process. Vol 7 (1978) pp 149-194

16 Yakimovsky, Y 'Boundary and object detection in real world images' J. Assoc. Comput. Mach. Vol 23 (1976) pp 599-618

17 Nagel, H-H 'On change detection and displacement vector estimation in image sequences' Pattern Recognition Lett. (1982) pp 55-59

18 Yachida, M, Asada, M and Tsuji, S 'Automatic motion analysis system of moving objects from the records of natural processes' Proc. Int. Joint Conf. on Pattern Recognition, Kyoto, Japan (1978) pp 726-730

