Hough Transform Run Length Encoding for Real-Time Image Processing

Christopher H. Messom, Member, IEEE, Gourab Sen Gupta, Senior Member, IEEE, and Serge N. Demidenko, Fellow, IEEE

Abstract—This paper introduces a real-time image processing algorithm based on run length encoding (RLE) for a vision-based intelligent controller of a humanoid robot system. The RLE algorithms identify objects in the image, providing their size and position. The RLE Hough transform is also presented for recognition of landmarks in the image to aid robot localization. The vision system presented has been tested by simulating the dynamics of the robot system as well as the image processing subsystem.

Index Terms—Edge detection, Hough transform, real-time image processing, run length encoding.

I. INTRODUCTION

A VISION-BASED humanoid robot system requires a high-speed vision system that does not introduce significant delays in the control loop. This paper presents a vision system for biped control that performs in real time. The humanoid robot used to test the system is a 12-degree-of-freedom biped robot. The image from the camera attached to the top of the robot is processed to identify positions of obstacles as well as any landmarks in the field of view. Obstacles can be accurately placed relative to the robot, and with the identification of landmarks, the robot can be accurately localized and a map of the obstacles developed in world coordinates.

Once the objects have been located in the two-dimensional image, a coordinate transformation, based on the fact that the ground is level and all the joint angles are available, allows us to determine the object's position relative to the camera. If the joint angles are not available, an approximation of the camera position and orientation must be calculated based on the image from the camera. Visual features that will contribute to this calculation include the position of the horizon and any gravitationally vertical lines in the image.
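
As a rough sketch of such a transformation, the following assumes a simple pinhole camera above a level ground plane; the focal length, principal point, camera height, and tilt parameters are illustrative assumptions rather than values taken from the paper.

import math

def pixel_to_ground(u, v, cx, cy, f, cam_height, tilt):
    """Project pixel (u, v) onto a level ground plane.
    cx, cy: principal point (pixels); f: focal length (pixels);
    cam_height: camera height above the ground; tilt: downward pitch of
    the optical axis (radians), with image v increasing downward.
    Returns (forward, lateral) ground distances relative to the camera."""
    # Depression angle of the viewing ray below the horizontal.
    depression = tilt + math.atan2(v - cy, f)
    if depression <= 0:
        raise ValueError("pixel lies at or above the horizon")
    forward = cam_height / math.tan(depression)
    # Small-angle approximation for the sideways offset (ignores roll).
    lateral = forward * (u - cx) / f
    return forward, lateral

With the joint angles known, cam_height and tilt follow from the robot's kinematic chain; otherwise, as noted above, they would have to be estimated from features such as the horizon.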

This paper discusses the localization problem based on landmark identification using edge detection and run length encoding (RLE) [1]–[3].

Manuscript received June 15, 2005; revised September 16, 2006.
C. H. Messom is with the IIMS, Massey University, Albany, Auckland, New Zealand (e-mail: [email protected]).
G. Sen Gupta is with the IIS&T, Massey University, Palmerston North, New Zealand, and also with the School of Electrical and Electronics Engineering, Singapore Polytechnic, Singapore (e-mail: [email protected]).
S. N. Demidenko is with the School of Engineering and Science, Monash University, Kuala Lumpur, Malaysia (e-mail: [email protected]).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.org.
Digital Object Identifier 10.1109/TIM.2006.887687

Fig. 1. Biped robot view.

Given large landmarks such as the horizon or large obstacles, filtering short lines effectively removes noise due to multiple small objects in the field of view. One disadvantage with the edge detection algorithms is the computational time associated with detecting and processing the edge image. This paper introduces the RLE edge representation to improve the performance of edge processing.

A. Background

Much of the early work in biped and humanoid robotics has focused on the basic dynamics and control of the biped robot system [4]–[7]. However, more recently researchers have started to address the higher level functionality such as biped robot vision for navigation and localization. To test and develop this functionality, toolkits that support full simulation of the vision and control system have been developed [8]. This study builds upon the vision-enabled robot simulation environment using the 12-degree-of-freedom m2 biped robot [5], [6], [9]. A typical view from the robot in an environment with obstacles is shown in Fig. 1. The key objective of the vision system is to identify the objects in view, given changing viewing angles and lighting conditions. This requires the object's characteristic color and size to be continuously updated based on current conditions.

Recently, researchers have investigated biped vision strategies based on both simulation [10], [11] and real robot systems [10]–[13]. Braunl reported some of the problems associated with a “reality gap” when transferring results from a simulated system to a real robot system. Ensuring that the systems developed in the simulation are not dependent on specifics of the simulation system ensures that this “reality gap” can be closed to the point that simulated solutions are useful for solving the real problem.


Fig. 2. Image processing pipeline.

II. IMAGE PROCESSING PIPELINE

Fig. 2 shows the image processing pipeline for the system presented in this paper. The core image processing algorithm used is the RLE-based image segmentation and object tracking. This subsystem [3] provides a real-time object tracking algorithm. Its weakness is that it needs the range of color space occupied by the objects being tracked to be specified. This requires an environment with uniform lighting and little variation over time in order for robust performance to be achieved. To use the RLE algorithm in an environment with unknown objects and varying light conditions, the required color space range values must be dynamically updated. This study uses a Hough transform-based edge detection technique [14] to identify new objects and their associated color space range values. This phase of the processing is slow, and so it runs in a separate low-priority thread concurrent to the real-time RLE image processing.
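
The paper does not give an implementation of this two-rate arrangement; a minimal sketch, assuming Python threading and hypothetical placeholder functions (estimate_ranges for the slow edge/Hough analysis, rle_track and act_on for the fast tracking loop), could look as follows.

import threading

class SharedColorRanges:
    """Color space range values shared between the two threads."""
    def __init__(self, initial):
        self._lock = threading.Lock()
        self._ranges = dict(initial)

    def get(self):
        with self._lock:
            return dict(self._ranges)

    def update(self, new_ranges):
        with self._lock:
            self._ranges = dict(new_ranges)

def color_update_loop(ranges, latest_frame, estimate_ranges, stop):
    """Low-priority thread: slow edge/Hough analysis refreshes the ranges."""
    while not stop.is_set():
        ranges.update(estimate_ranges(latest_frame()))

def control_loop(ranges, next_frame, rle_track, act_on, stop):
    """Real-time loop: per-frame RLE segmentation using the current ranges."""
    while not stop.is_set():
        act_on(rle_track(next_frame(), ranges.get()))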

III. EDGE DETECTION

Edge detection is often used to identify objects and regions of interest in an image where there can be significant variation in size and colors of the objects of interest or the colors of the objects of interest are not known. In this paper, edge detection is used to identify landmarks in the image, particularly the horizon and any unknown large obstacles.

A 5 × 5 RGB Sobel edge-detection filter, with horizontal and vertical kernels given by (1), can be applied to the raw RGB image (Fig. 1), producing the edge-detected image (Fig. 3). This edge detection technique is computationally relatively expensive as compared to RLE; however, in the situation where color identifiers are unknown, it provides a suitable image that can be further processed to find information about the environment in which the robot is operating.
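
The exact 5 × 5 kernel coefficients are not reproduced in this transcript, so the sketch below uses the common 3 × 3 Sobel kernels purely to illustrate the per-channel gradient computation; the threshold value and function names are assumptions.

import numpy as np

# Standard 3x3 Sobel kernels (the paper uses a 5x5 variant whose
# coefficients are not reproduced here; 3x3 shown for illustration).
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(channel, kernel):
    """Naive 'valid' 2-D convolution of one image channel."""
    kh, kw = kernel.shape
    h, w = channel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(channel[y:y + kh, x:x + kw] * kernel)
    return out

def sobel_edges(rgb, threshold=128.0):
    """Per-channel Sobel gradient magnitude, combined by taking the maximum."""
    mags = []
    for c in range(3):
        gx = convolve2d(rgb[:, :, c].astype(float), SOBEL_X)
        gy = convolve2d(rgb[:, :, c].astype(float), SOBEL_Y)
        mags.append(np.hypot(gx, gy))
    magnitude = np.maximum.reduce(mags)
    return magnitude > threshold   # boolean edge map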

In the simulated domain studied, one of the key features that can be identified from the edge-detected image is the horizon, from which the body position of the robot can be inferred (this is useful if the joint angles are not explicitly available to the system).

Fig. 3. Edge-detected image.

Fig. 4. x and y axis intercepts for lines of angle π/4 and -π/4.

The second type of feature available in the edge-filtered image is landmarks such as large obstacles, which can be used to aid robot localization.

Identifying the horizon means that a long almost-horizontal (at least not near-vertical) line must be identified. A similar approach will need to be adopted if there are walls or corridors in the image; that is, long lines in the image are identified before further processing.

IV. HOUGH TRANSFORM OF RLE EDGES

Identifying straight lines in an image requires a first-order Hough transform to be applied. Normally, this transfers the image into the parameter space of straight lines, that is, from the position (x, y) of a pixel to the parameter space (m, c) of the lines in the image, where y = mx + c represents the equations of the lines in the image. Where the lines in the parameter space intersect represents the equations of the lines in the image. This is also the position where there is a peak of data points in the parameter space.

This study uses a polar representation of the first-order Hough transform rather than the normal gradient-intercept (m, c) format. This is so that singularities associated with vertical lines (infinite gradient and no finite intercept with the y axis) are removed. The angle θ in the polar representation is the angle of the line to the horizontal (range from -π/2 to π/2), and the intercept represents the intercept of the line with the x or y axis. The intercept with the y axis is used for -π/4 < θ < π/4, while the intercept with the x axis is used for π/4 ≤ θ ≤ π/2 and -π/2 ≤ θ ≤ -π/4. With this parameter space, when θ = -π/4, the x and y intercepts are equal (see Fig. 4), so the representation is continuous as the angle changes across this boundary. When θ = π/4, the x and y intercepts are additive inverses (see Fig. 4), which means that there is a discontinuity in the representation as the angle changes across this boundary [see Fig. 5(b)].
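
As an illustrative sketch of this parameterisation (function and variable names are assumptions, not from the paper), the following computes the (θ, intercept) pair for the line through two sampled edge pixels.

import math

def line_params(p1, p2):
    """(theta, intercept) for the line through pixels p1 and p2, using the
    polar-style parameterisation described above: theta is the angle to
    the horizontal in (-pi/2, pi/2]; the y intercept is stored for
    |theta| < pi/4 and the x intercept otherwise."""
    (x1, y1), (x2, y2) = p1, p2
    theta = math.atan2(y2 - y1, x2 - x1)
    # Fold the angle into (-pi/2, pi/2] so each line has a unique angle.
    if theta > math.pi / 2:
        theta -= math.pi
    elif theta <= -math.pi / 2:
        theta += math.pi
    if abs(theta) < math.pi / 4:
        # Near-horizontal: y intercept (where the line crosses x = 0).
        intercept = y1 - math.tan(theta) * x1
    else:
        # Near-vertical: x intercept (where the line crosses y = 0).
        intercept = x1 - math.tan(math.pi / 2 - theta) * y1
    return theta, intercept

Storing the y intercept for near-horizontal lines and the x intercept for near-vertical ones keeps both parameters finite, which is the point of the polar form described above.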


Fig. 5. (a) Maximum x and y axis intercepts. (b) Range values in parameter space showing discontinuity at θ = π/4, where α = -max(height, width) and β = height + width. (c) Topological view of the polar parameter space, showing continuity of the π/2, -π/2 boundary and the join of the π/4 boundary, where α = -max(height, width) and β = height + width. The figure shows that the topology of the parameter space is finite, limited by the values of α and β.

However, with correct implementation of the neighborhood grouping this is not a problem, since a topological mapping of the parameter space is possible [see Fig. 5(c)]. The polar representation has the additional advantage of bounding the size of the range of intercept values by 2 max(height, width) + min(height, width); see Fig. 5(a) and (b).

If the polar parameter space has a high resolution, it will be necessary to group neighboring peaks in the parameter space so that similar lines in the image are amalgamated into one. A contiguous near-neighbor grouping algorithm [14] is applied to the parameter space to combine similar lines in the image; in this way the number of candidate lines in the image is reduced. The peaks in parameter space are used to identify the straight lines in each object in the image.
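
The grouping used in the paper is the near-neighbor method of [14]; purely as a simplified illustration of amalgamating nearby accumulator peaks (thresholds and names are arbitrary assumptions, not the authors' algorithm):

def group_peaks(peaks, angle_tol=0.05, intercept_tol=5.0):
    """Greedily merge accumulator peaks whose (theta, intercept) values
    are close, combining their vote counts.
    peaks: list of (theta, intercept, votes) tuples."""
    groups = []  # each entry is a mutable [theta, intercept, votes]
    for theta, intercept, votes in sorted(peaks, key=lambda p: -p[2]):
        for g in groups:
            if abs(theta - g[0]) < angle_tol and abs(intercept - g[1]) < intercept_tol:
                total = g[2] + votes
                # Vote-weighted running average of the merged line parameters.
                g[0] = (g[0] * g[2] + theta * votes) / total
                g[1] = (g[1] * g[2] + intercept * votes) / total
                g[2] = total
                break
        else:
            groups.append([theta, intercept, votes])
    return groups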

The linear Hough transform is computationally expensive, especially if all the combinations of pixels that have been edge-detected are considered. Even if a statistical approach is adopted, the computational time complexity can still be high if a large number of pixels are tested to ensure that no edges in the image are missed.

This paper proposes modifying the linear Hough transform algorithm by applying it to the run length encoded image of the edge-filtered image. This requires a class of color identifiers to be supplied for identifying the lines in the edge-filtered image. In this study, a class of sharp lines close to white (255, 255, 255) in the edge-filtered image and a wide gray band were suitable to detect both the edges of the obstacles and the edges of the horizon.

Having run length encoded the edge-filtered image, connected lines of contiguous edges are detected as single objects.
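
A minimal sketch of the encoding step, assuming the edge-filtered image has already been thresholded into a boolean mask (the function name and output format are illustrative):

import numpy as np

def run_length_encode(mask):
    """Return (row, start, end) triples for contiguous runs of True
    pixels in a 2-D boolean edge mask (end index inclusive)."""
    runs = []
    height, width = mask.shape
    for y in range(height):
        x = 0
        while x < width:
            if mask[y, x]:
                start = x
                while x < width and mask[y, x]:
                    x += 1
                runs.append((y, start, x - 1))
            else:
                x += 1
    return runs

Runs that touch in adjacent rows can then be linked to form the single objects referred to above.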

Fig. 6. Grouped edge-detected pixels.

Fig. 7. Candidate lines in an object.

In the example illustrated in Fig. 3, this is equivalent to three obstacles (note the obstacles in the distance are viewed as a single object using this algorithm since the edge maps overlap). Several vertical lines are detected due to edges caused by rapid variation in the object colors due to shading and shadow effects. Four horizon elements are identified, one of which is very small and is filtered off as noise. Fig. 6 illustrates a region of the edge-detected image that has been run length encoded.

The Hough transform of the RLE edges is performed on each object separately. This reduces the computational complexity of the algorithm as interactions between the objects are not added to the parameter space model, reducing the interference effect between the different lines in the image. With run length encoding, only the start and end positions of the pixels in each horizontal row are recorded, so random pixels within the run length are selected for the Hough transform to parameter space. This is done by choosing a particular run length randomly, weighted by the number of pixels in each run length, and then selecting a random value between the start and end position of the selected run length.
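
A sketch of this weighted random selection, assuming each object is held as the (row, start, end) runs produced earlier (names are illustrative):

import random

def sample_pixel(runs):
    """Pick one edge pixel from an RLE object: choose a run with
    probability proportional to its pixel count, then a column
    uniformly within that run.  runs: list of (row, start, end)."""
    lengths = [end - start + 1 for _, start, end in runs]
    row, start, end = random.choices(runs, weights=lengths, k=1)[0]
    return row, random.randint(start, end)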

Each obstacle in the example image (Fig. 3) consists of two long straight lines and two short straight lines. The Hough transform is biased towards the long lines as they provide more candidate points in the parameter space. The two short lines are also identified since they provide two peaks in parameter space after applying the neighbor grouping algorithm. This is the case for the base line of the obstacles even though this line is barely straight. Fig. 7 illustrates the selection of the candidate lines in the given object.

The RLE edges of the horizon form three objects which are also transformed individually using a linear Hough transform. Each object produces a single candidate line in parameter space after grouping neighboring candidate lines in parameter space.


Fig. 8. Candidate line intersection and enclosed pixels.

Since we are looking for only one horizon line, the three candidate lines are compared to see whether they can also be amalgamated into a single overall candidate line. In this case (Fig. 3), the three candidate lines provided are collinear and so produce only one candidate line.

Given the positions of the obstacles, obtained either from the edge-detected image or the original run length encoded image, as well as the position of the horizon, the robot can be localized. The angle of the horizon gives the rotation of the camera, and based on the height of the robot and the flat environment, the positions of the obstacles relative to the robot can be calculated.

V. OBJECT IDENTIFICATION

The lines and points formed by the intersection of the lines in each object define the boundaries of the objects of interest in the image. The pixels within this boundary are used to calculate the color space range values to be used by the RLE algorithm. See Fig. 8.

The mean value of the color components of the pixels that form the object is calculated so that outlier pixels can be eliminated. Outliers occur near the edges and are not representative of the object under study. Typically, variations of more than three standard deviations from the mean represent outliers. For objects that are larger than 10 pixels, plus or minus three times the standard deviation of the pixel values is used to calculate the maximum and minimum range values to be used by the RLE algorithm. For small objects, the mean plus or minus 15 is used by the RLE algorithm as the standard deviation may not be reliable.
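
A sketch of this range-update rule (the 10-pixel threshold, the three-standard-deviation band, and the ±15 fallback are from the text above; the clipping to [0, 255] and the parameter names are assumptions):

import numpy as np

def color_range(pixels, min_pixels=10, small_margin=15.0):
    """Per-channel (low, high) color range values for the RLE tracker.
    pixels: (N, 3) array of RGB values inside the object boundary.
    Large objects use mean +/- 3 standard deviations; small objects
    fall back to mean +/- a fixed margin."""
    pixels = np.asarray(pixels, dtype=float)
    mean = pixels.mean(axis=0)
    if len(pixels) > min_pixels:
        margin = 3.0 * pixels.std(axis=0)
    else:
        margin = small_margin
    low = np.clip(mean - margin, 0, 255)
    high = np.clip(mean + margin, 0, 255)
    return low, high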

If particular features and landmarks in the environment are known, they can be identified from the objects that have been located above.

VI. ANALYSIS

The standard Hough transform applied to this problem without using the RLE and aggregation of contiguous pixels is very slow. This is due to the fact that we need to identify all lines, even short ones, in the image.

The probability of selecting a pixel for the Hough transform is given by

p_i = n_i / N    (2)

where p_i is the probability of selecting a pixel in the line of interest i, n_i is the number of points in the line of interest, and N is the number of edge/line points in the image.

The probability of selecting two pixels that are in the same line of interest is given by

q_i = p_i^2 = (n_i / N)^2    (3)

where q_i is the probability of selecting two pixels in the line of interest i.

Equation (4) gives the number of sample pairs that must be taken to reliably achieve the given number of candidate lines from the Hough transform

S_i = c / q_i = c (N / n_i)^2    (4)

where S_i is the required number of sample pairs to reliably result in c candidate lines that match the given line of interest i.
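
Evaluating (4) for the scenarios discussed in Section VII reproduces the sample counts quoted there (a small illustrative sketch; the function name is not from the paper):

def sample_pairs(line_pixels, total_edge_pixels, candidates=5):
    """Required sample pairs from (4): c * (N / n_i)**2."""
    return candidates * (total_edge_pixels / line_pixels) ** 2

print(sample_pairs(320, 1000))  # horizon line: about 50 pairs
print(sample_pairs(5, 1000))    # 5-pixel line: 200,000 pairs
print(sample_pairs(3, 18))      # Fig. 6 object, shortest line: 180 pairs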

VII. RESULTS AND DISCUSSION

The edge detection takes about three times as long as the standard RLE algorithm. In addition, the Hough transform of the RLE edge-detected image depends on the length of the lines and the size of the objects in the image.

A long horizon line in the 320 × 240 image will consist of about 320 pixels. If the image has about 1000 edge/line pixels, then the chance of selecting two pixels from the horizon is (320/1000)^2 ≈ 0.1. This means the chance of retrieving the horizon line from the Hough transform is about 10%, so we need to take ten pairs of points before we find a candidate line that would represent the horizon. If we want to have at least five candidate lines before taking that as a legitimate line in the image, we need to take at least 50 sample pairs.

For smaller lines, say 5 pixels long, a significantly larger number of sample pairs is required: 5 × (1000/5)^2 = 200,000. As the number of edge/line points increases, the time that the algorithm takes grows significantly, as O(N^2), where N is the number of edge/line points in the image.

This means that for small objects the standard Hough transform would not be able to update the RLE color space range values regularly. The result of this is that as lighting and objects in the image change, they will not be correctly identified by the real-time RLE system. In the biped robot scenario, this results in collisions with moving obstacles and selection of nonoptimal paths through obstacles.

For the RLE-augmented Hough transform, the contiguous pixels that form one object are used in identifying the required straight lines. If we take the example illustrated in Fig. 6, the total number of edge/line pixels is 18, and the shortest line is 3 pixels. This requires only 5 × (18/3)^2 = 180 sample pairs to reliably identify the four straight lines with at least five candidate lines. This means that the RLE-augmented Hough transform is significantly faster than standard approaches and so can be used in adaptive vision systems.

Fig. 9 shows the variation of the required number of samples to identify various line sizes when the number of available edge pixels ranges from 10 to 80.


Fig. 9. Samples required to identify lines of given size (with at least five candidate lines) for varying number of edge pixels available.

It can be seen that, for shorter lines, the reduction in the number of edge pixels given by the RLE-augmented Hough transform yields significant improvements.

VIII. CONCLUSION

This paper has presented a real-time image-processing algorithm based on run length encoding for a simulated biped robot system. This system can be implemented on real biped robot systems to detect obstacles quickly in the field of view. This paper has also presented an edge-detection algorithm that uses a Sobel edge-detection algorithm augmented with run length encoding to improve post-processing of the image using a linear Hough transform.

Although run length encoding and the RLE-augmented Hough transform have shown promise, significant research effort needs to be directed at real-time generation of the color identifiers used in the RLE component of the vision system. In real-world environments with highly varying lighting conditions, this dynamic update of color identifiers will be essential. Environments with gradual variations in color across the object will require additional approaches, such as modeling objects with multiple colors, to apply this technique successfully.

The simulated system used in this study provides clean images and so does not reflect reality, where there are often variations in the image due to sensor noise. Localization of the robot relative to the obstacles and mapping the environment with sensor noise require particle filtering and optimal filtering approaches as discussed in [15] and [16]. Future research will model sensor noise and will require these additional techniques to provide reliable recognition of obstacle positions and localization of the robot.

ACKNOWLEDGMENT

The authors would like to acknowledge the use of Massey University's parallel computer, the Helix, for the computational experiments that supported the results presented in this paper.

REFERENCES

[1] G. Sen Gupta, D. Bailey, and C. Messom, “A new colour space for efficient and robust segmentation,” in Proc. IVCNZ 2004, pp. 315–320.

[2] J. Bruce, T. Balch, and M. Veloso, “Fast and inexpensive colour image segmentation for interactive robots,” presented at the IROS 2000, San Francisco, CA.

[3] C. H. Messom, S. Demidenko, K. Subramaniam, and G. Sen Gupta, “Size/position identification in real-time image processing using run length encoding,” in Proc. IEEE Instrum. Meas. Technol. Conf., 2002, pp. 1055–1060.

[4] C. Zhou and Q. Meng, “Dynamic balance of a biped robot using fuzzy reinforcement learning agents,” Fuzzy Sets Syst., vol. 134, no. 1, pp. 169–187, 2003.

[5] K. Jagannathan, G. Pratt, J. Pratt, and A. Persaghian, “Pseudo-trajectory control scheme for a 3-D model of a biped robot,” in Proc. ACRA, 2001, pp. 223–229.

[6] ——, “Pseudo-trajectory control scheme for a 3-D model of a biped robot, Part 2—Body trajectories,” in Proc. CIRAS, 2001, pp. 239–245.

[7] J. Baltes, S. McGrath, and J. Anderson, “Feedback control of walking for a small humanoid robot,” presented at the FIRA World Congr., Vienna, Austria, 2003.

[8] C. H. Messom, “Vision controlled humanoid toolkit,” in Knowledge-Based Intelligent Information and Engineering Systems. Berlin, Germany: Springer-Verlag, 2004, vol. 3213, Lecture Notes in Artificial Intelligence, pp. 218–224.

[9] J. Pratt and G. Pratt, “Exploiting natural dynamics in the control of a 3-D bipedal walking simulation,” in Proc. Int. Conf. Climbing and Walking Robots, Portsmouth, U.K., 1999 [Online]. Available: http://www.ihmc.us/~jpratt/publications/3d_sim_clawar99.pdf

[10] A. Boeing, S. Hanham, and T. Braunl, “Evolving autonomous biped control from simulation to reality,” in Proc. 2nd Int. Conf. Autonomous Robots and Agents, 2004, pp. 440–445.

[11] J. Chestnutt, J. Kuffner, K. Nishiwaki, and S. Kagami, “Planning biped navigation strategies in complex environments,” presented at the Int. Conf. Humanoid Robotics, Karlsruhe, Germany, Oct. 2003.

[12] M. Ogino, Y. Katoh, M. Aono, M. Asada, and K. Hosoda, “Vision-based reinforcement learning for humanoid behaviour generation with rhythmic walking parameters,” in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, 2003, pp. 1665–1671.

[13] O. Lorch, A. Albert, J. Denk, M. Gerecke, R. Cupec, J. F. Seara, W. Gerth, and G. Schmidt, “Experiments in vision-guided biped walking,” in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, 2003, pp. 2484–2490.

[14] D. C. I. Walsh and A. E. Raftery, “Accurate and efficient curve detection in images: the importance sampling Hough transform,” Pattern Recognit., vol. 35, pp. 1421–1431, 2002.

[15] G. Sen Gupta, C. H. Messom, and S. Demidenko, “Real-time identification and predictive control of fast mobile robots using global vision sensing,” IEEE Trans. Instrum. Meas., vol. 54, no. 1, pp. 200–214, Feb. 2005.

[16] D. C. K. Yuen and B. A. MacDonald, “Theoretical considerations of multiple particle filters for simultaneous localisation and map-building,” in Knowledge-Based Intelligent Information and Engineering Systems. Berlin, Germany: Springer-Verlag, 2004, vol. 3213, Lecture Notes in Artificial Intelligence, pp. 203–209.

Christopher H. Messom (M’96) received the M.Sc. and Ph.D. degrees in computer science from Loughborough University, Loughborough, U.K., in 1992 and 1989, respectively.

He was a Lecturer at Singapore Polytechnic from 1993 to 1997, Senior Lecturer at the Dubai University College from 1998 to 1999, and now is a Senior Lecturer and Director of the Centre for Parallel Computing at Massey University, Auckland, New Zealand. His research focus is in intelligent robotics and control systems, particularly automatic learning of control systems. The computational complexity of intelligent robotics and automatic learning has led to his interest in parallel implementations of machine learning algorithms as well as distributed processing in robot simulation and control.


Gourab Sen Gupta (M’89–SM’05) received the B.E. degree in electronics from the University of Indore, India, in 1982, and the M.E.E. degree from Philips International Institute, Eindhoven, The Netherlands, in 1984. He is currently pursuing the Ph.D. degree in advanced control of robots in a dynamic collaborative system environment at Massey University, Palmerston North, New Zealand.

After working for five years as a Software Engineer at Philips India, Pune, India, in the Consumer Electronics division, he joined Singapore Polytechnic, Singapore, in 1989, where he is currently a Senior Lecturer in the School of Electrical and Electronic Engineering. He is also a Visiting Senior Lecturer with the Institute of Information Sciences and Technology, Massey University. He has over 40 publications in various journals and conference proceedings. He has authored two books and edited three conference proceedings. His current research interests are in the area of embedded systems, robotics, real-time vision processing, behavior programming for multi-agent collaboration, and automated testing and measurement systems.

Serge N. Demidenko (M’91–SM’94–F’04) received the M.E. degree from the Belarusian State University of Informatics and Radio Electronics, Minsk, Belarus, in 1977, and the Ph.D. degree from the Belarusian Academy of Sciences, Minsk, in 1984.

He is a Chair of Electronic Engineering and Associate Head of the Institute of Information Sciences and Technology at the Wellington Campus of Massey University, New Zealand. During his career, he progressed from an engineer to Head of the Joint (Industry-Academy) Test Laboratory of a large electronic manufacturing company (around 12000 staff) and Head of Department posts by working for academia and industry. Starting in the 1990s, he has been on the academic staff of institutions of higher learning of several countries. His research areas include electronic design and test, fault tolerance, and signal processing. He is an author of four books and more than 80 papers, and holds 25 patents.

Dr. Demidenko is an Associate Editor of five international journals including JETTA: Journal of Electronic Testing: Theory and Applications and the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT. He is a Co-Chair of TC-32 of the IEEE Instrumentation and Measurement (I&M) Society, a member of the Board of Directors of IMTC, and a Chair of the IEEE I&M Malaysia Chapter. He is a Fellow of IEEE and IEE, and a U.K. Chartered Engineer.

