Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 9-15, 2006, Beijing, China

Sensor Network and Robot Interaction Using Coarse Localization ∗

Imad H. Elhajj and Jason Gorski
Oakland University
Computer Science and Engineering Department
Rochester, MI 48309

{elhajj, jd2gorsk}@oakland.edu

1 Abstract

Rapid advances in the field of sensing and sensor networks are opening the door to many new possibilities. One of the main challenges is the localization of the sensor nodes. In this paper we discuss the issue of localization in the context of sensor network assisted telerobotics. The sensor network is used to feed back information to the robot or remote operator to be used to navigate efficiently and safely. We show that coarse localization results in performance comparable to accurate and fine localization. This finding relaxes the requirement on localization algorithms and opens the door to a new class of reduced-complexity and reduced-overhead localization algorithms. Experimental results are provided to highlight the concepts developed and compare performance.

2 Introduction

The popularity of sensor networks stems from the broad range of potential applications [2]-[6]. These range from space exploration to remote information gathering and monitoring [2]-[7]. Sensor networks provide the capability to survey and monitor different parameters in “areas of interest”. They can be deployed in a number of ways, ranging from accurate placement to random scattering, and in some of the most extreme environments and terrains. Although sensor networks are gaining significant attention from the research community, there are still many technical challenges that must be overcome for their application to become a reality [8]-[10].

The main challenge addressed in this paper is the localization of sensor nodes. Specifically, this is considered in the context of sensor network assisted robotics. This interfacing allows the sensor network to provide vital information to the robot or remote operator to be used in navigational decisions [1].

∗ Research supported in part by the Michigan Space Grant Consortium.

Several systems have been developed providing sensor network and robot interaction. In [13], a robot was programmed to survey an area to determine appropriate sensor node placement, and also to serve as an automatic calibration device by taking readings at various points in the network and comparing them to readings obtained from nodes in that area. The framework addressed in this paper differs by providing bidirectional interaction between the network, the robot, and the remote operator.

Previous work by R. Peterson and D. Rus [14] outlines the use of a sensor network to guide a human or robot along a path that avoids dangerous obstacles detected by the network. Communication between the human or robot and the sensor network was accomplished through a flashlight device developed for that purpose. However, there was no interaction between the human and the robot, and that work assumes that the sensors are uniformly distributed.

In either of these frameworks, one of the main challenges is the localization of the sensor nodes [11, 12, 13]. Therefore, there is a need to determine the location of the sensor nodes so that location-based information can be relayed back and used in navigation decisions by the vehicle. In this paper we do not develop a localization scheme; however, we illustrate that only the “rough” and coarse position of the nodes needs to be known. By rough position we mean that the sensor nodes do not need to be localized, but only need to know which one of a limited number of sectors they are in relative to the robot’s position. This transforms localization from a continuous domain to a discrete one. This finding relaxes the localization requirement for sensor network assisted robotics and justifies the development of less complex and less demanding discrete localization algorithms.



In the next section we describe the localization requirements for the framework, followed by the experimental results.

3 Localization Requirements

The integration framework presented in [1] allowed a sensor network to provide real-time feedback regarding the environment in the form of haptic feedback to the operator. This force assists the operator in navigating away from areas of potential danger detected by the sensor network. A motivating example is in the military field, where such a capability would allow chemical or radiation sensors to be deployed by air-dropping them onto a field to be traversed by military vehicles. These sensors would then provide “look ahead” capabilities for the drivers of the vehicles regarding any chemical or radiation risks.

To illustrate this framework, consider Figure 1, which shows the robot navigating within the environment and the resultant force vector fed back at each location. As shown, when the robot enters the monitored area, a force is applied to direct the robot away from a node whenever danger is detected by that node. Obviously, if the application demanded seeking instead of avoiding, the force would be made attractive instead of repulsive. In either case, the force is considered assistive and can be overridden by the driver for high-priority tasks such as obstacle avoidance. The generated force is described as:

$f_i = \frac{I_i \, v_i}{|v_i|^2}$   (1)

where $I_i$ is the intensity of the phenomenon detected by sensor $i$, $v_i$ is the unit vector between node $i$ and the robot, and $|v_i|$ is the distance between node $i$ and the robot. The intensity value is derived from incoming sensor data. The distance and bearing of the sensor node in relation to the mobile robot must be determined. In a region covered by multiple sensor nodes, the force is simply the resultant vector due to all of them, obtained by simple vector addition.
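To make this concrete, the following Python sketch (our illustration, not code from the paper) evaluates equation (1) for each node and combines the contributions by vector addition; the node positions, intensities, and robot position are hypothetical values.

    import math

    def force_from_node(intensity, node_xy, robot_xy):
        # Repulsive direction: from the node toward the robot.
        dx = robot_xy[0] - node_xy[0]
        dy = robot_xy[1] - node_xy[1]
        dist = math.hypot(dx, dy)
        if dist == 0.0:
            return (0.0, 0.0)  # force undefined at zero distance
        # Equation (1): magnitude I_i / |v_i|^2 along the unit vector.
        scale = intensity / dist**2
        return (scale * dx / dist, scale * dy / dist)

    # Hypothetical readings: (intensity, node position).
    readings = [(5.0, (1.0, 2.0)), (2.5, (4.0, 0.5))]
    robot = (3.0, 3.0)
    fx = sum(force_from_node(i, p, robot)[0] for i, p in readings)
    fy = sum(force_from_node(i, p, robot)[1] for i, p in readings)
    print((fx, fy))  # resultant force by simple vector addition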

Clearly this scheme requires the nodes and the robot to be localized. This assumption is limiting, as accurate localization can be costly for the network [15]. We will illustrate that a modified scheme can be developed which only requires sensor nodes to know their rough and coarse position with respect to the mobile robot. By rough position we mean that the sensor nodes do not need to be localized, but only need to know which one of a limited number of sectors they are in relative to the robot’s position.

Figure 1: Resultant force vectors with accurately localized sensor nodes.

Figure 2: Sector area around the mobile robot.

The experimental results will show that this scheme results in performance comparable to that of accurately localized sensors.

This “loose” localization can be achieved by dividing, or discretizing, the area around the robot into sectors or quadrants as shown in Figure 2. Therefore, the sensors need only identify which quadrant they are located in, within the frame of the robot, at any given point in time. This information can then be used to determine the direction of the force vector component contributed by that particular sensor node. Specifically, the force is derived as follows:

$f_q = g_q(C, I) = C \cdot I$   (2)

where $C$ is the “confidence level” of all sensor readings within sector $q$, and $I$ is the average intensity reported by all sensor nodes across that sector. The choice of $C$ can be defined according to the application.


Figure 3: Resultant force vectors with coarsely localized sensor nodes.

In our case, it is related to the number of sensors in a quadrant. In this approach, the distances between the sensor nodes and the mobile robot are not needed. Figure 3 shows the resultant force vector as the robot traverses areas being monitored. In this modified framework, since a sensor node can be in one of four quadrants, there are only four possible force vectors: the force vector due to nodes in Q1 is at 157.5 degrees, due to nodes in Q2 at 22.5 degrees, due to nodes in Q3 at 67.5 degrees, and due to nodes in Q4 at 112.5 degrees, where the robot’s direction of motion is assumed to be at 90 degrees. This specific choice was motivated by requiring the robot to steer at a sharper angle when danger is in front of it than when it is behind it. It also resulted in a smooth trajectory and did not require reversal of motion. The choice of these angles is flexible and can be modified according to the application.
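A minimal Python sketch of this quadrant-based force (ours, not from the paper): it applies equation (2) per quadrant using the fixed angles above. Taking the confidence $C$ to be the node count in the quadrant follows the paper’s remark that $C$ is related to the number of sensors, but that exact definition, and the sample intensities, are assumptions; quadrant membership itself would come from the sector assignment of Figure 2.

    import math

    # Fixed force directions per quadrant in degrees (robot heading = 90).
    QUADRANT_ANGLE = {1: 157.5, 2: 22.5, 3: 67.5, 4: 112.5}

    def quadrant_force(quadrant, intensities):
        # Equation (2): f_q = C * I, with C taken as the node count
        # and I as the mean reported intensity (both app-dependent).
        if not intensities:
            return (0.0, 0.0)
        c = len(intensities)
        i_avg = sum(intensities) / len(intensities)
        theta = math.radians(QUADRANT_ANGLE[quadrant])
        magnitude = c * i_avg
        return (magnitude * math.cos(theta), magnitude * math.sin(theta))

    # Hypothetical per-quadrant intensity readings.
    readings = {1: [0.8, 0.6], 2: [], 3: [0.3], 4: [0.9]}
    fx = sum(quadrant_force(q, r)[0] for q, r in readings.items())
    fy = sum(quadrant_force(q, r)[1] for q, r in readings.items())
    print((fx, fy))  # resultant assistive force in the robot frame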

In both cases of localization, the fusion of the force vectors can be accomplished in either a centralized or a distributed fashion. In the centralized approach using accurate localization, the resultant force $F$ is simply given by:

$\vec{F} = \sum_i \vec{f}_i(p)$   (3)

where $\vec{f}_i$ is the force vector contribution from sensor $i$, and $p$ is the list of parameters defining the force vector resulting from each sensor. The advantage of using force vectors to represent the measurements of nodes is that fusion is scalable.

Instead of centralized processing, the sum of vectors can be computed on clusters of nodes and forwarded, reducing the amount of data to be communicated. This results in distributed processing, which is power efficient [16]. In the distributed approach, nodes calculate a partial resultant vector based on all the measurements they have available. As a result, nodes only have to forward resultant vectors instead of each measurement they receive. If the nodes have access to robot position information, in-network computation of the total equivalent vector can be done. The location of the robot can either be broadcast as it travels through the network, or nodes can estimate it from signal characteristics originating from the robot.

Figure 4: Illustration of a sensor network and mobile vehicle transmission coverage.

As such, the force vector calculation is done in-network and forwarded to the robot directly, either by a node or via a centralized unit. However, this approach has to consider the different cases of signal coverage shown in Figure 4. The solid line shows the transmission range of the mobile vehicle and the dotted line shows the transmission range of each node; the two typically differ.

As shown, nodes 1, 2, 3, and 4 can receive distance information from the vehicle directly. However, the vehicle can only receive information from nodes 1, 2, and 4 directly. Node 4 has to forward information on behalf of node 3 to the mobile vehicle. Instead of forwarding f3, which is the vector from node 3, it would forward f4 + f3, the vector due to both nodes 3 and 4. All nodes fuse information before forwarding it.


Figure 5: Results of teleoperation with accurately localized sensor nodes.

This approach would be inaccurate in cases where a sensor forwards information on behalf of a node which the vehicle already has access to. For example, if node 1 sends f1 + f2, then the vehicle would actually receive f2 twice: once directly from node 2 and again via node 1. To resolve this, each node conducting in-network processing (vector addition) has to also report the nodes included in that addition. This way the mobile robot does not count any vector more than once.
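One way to realize this bookkeeping (a sketch under our own assumptions; the paper does not specify the mechanism) is to tag every partial resultant with the set of node IDs already folded into it. Because fusion is plain vector addition, the robot can subtract contributions it has already resolved, so no node is counted twice.

    def resolve_forces(reports):
        # reports: list of ((fx, fy), frozenset of node ids included).
        # Repeatedly peel off reports whose unknown part is a single
        # node, subtracting already-resolved contributions.
        resolved = {}  # node id -> (fx, fy)
        pending = list(reports)
        progress = True
        while pending and progress:
            progress = False
            for rep in list(pending):
                (fx, fy), ids = rep
                unknown = [n for n in ids if n not in resolved]
                if len(unknown) <= 1:
                    for n in ids:
                        if n in resolved:
                            fx -= resolved[n][0]
                            fy -= resolved[n][1]
                    if unknown:
                        resolved[unknown[0]] = (fx, fy)
                    pending.remove(rep)
                    progress = True
        # Reports left in `pending` cannot be disambiguated; ignored here.
        total = (sum(v[0] for v in resolved.values()),
                 sum(v[1] for v in resolved.values()))
        return total, resolved

    # Node 2 heard directly; node 1 forwards f1 + f2 with its node list.
    reports = [((0.0, 1.0), frozenset({2})), ((1.0, 2.0), frozenset({1, 2}))]
    print(resolve_forces(reports))  # f2 counted once; f1 recovered as (1.0, 1.0)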

Figure 6: Results of teleoperation with coarsely localized sensor nodes.

Under coarse localization, data fusion can also be carried out in either a centralized or a distributed fashion. The centralized approach is straightforward as long as each node sends the quadrant it is in along with its measurements. The robot can then calculate the average and standard deviation for each quadrant and add the resultant forces due to each. In the distributed approach, nodes can still fuse in-network by calculating the average and standard deviation of the measurements available; in this case, the fusing node also has to send the list of nodes being accounted for. By receiving averages and standard deviations of subsets of nodes, together with the number of nodes fused in each, it is possible to calculate a cumulative average and standard deviation for each quadrant and thus the resultant force. Note that with distributed fusion, nodes can only fuse data from nodes within their own quadrant.
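The cumulative quadrant statistics can be computed without ever shipping raw measurements. This sketch (our illustration; the paper gives no formulas for this step) merges per-subset counts, means, and standard deviations using the standard pooled-moments identity, assuming population (not sample) standard deviations.

    def merge_stats(subsets):
        # subsets: list of (n, mean, std) tuples from in-network fusion.
        # Recover the sums of x and x^2 from each tuple, then recombine.
        n_total = sum(n for n, _, _ in subsets)
        if n_total == 0:
            return 0, 0.0, 0.0
        s1 = sum(n * m for n, m, _ in subsets)                   # sum of x
        s2 = sum(n * (sd * sd + m * m) for n, m, sd in subsets)  # sum of x^2
        mean = s1 / n_total
        var = max(s2 / n_total - mean * mean, 0.0)  # guard rounding error
        return n_total, mean, var ** 0.5

    # Hypothetical quadrant reports: (node count, mean intensity, std).
    print(merge_stats([(3, 0.6, 0.1), (2, 0.9, 0.05)]))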

In the next section it is shown that discrete or coarse localization results in performance comparable to the case where the nodes are accurately localized. This is illustrated with both teleoperated and autonomous robots.

4 Experimental Results

The experimental setup consists of two scenarios. In the first, a robot was teleoperated and the force was rendered as haptic feedback to the operator through a Microsoft Force Feedback joystick. This scenario was tested using both accurate and coarse localization. In the second, the robot was autonomous and the force was used by the robot to make autonomous navigation decisions. This was also tested using both accurate and coarse localization.

Figure 7: Results of autonomous operation.

The results of the first scenario are shown in Figure 5 and Figure 6 for accurately and coarsely localized sensors, respectively. These figures show the path taken by the robot as it is teleoperated, along with the force vectors at the locations where forces existed. Each figure shows the results of two different trials. Comparing the two sets of figures, it is clear that the performance is comparable, as evidenced by the levels of overshoot and the smoothness of the paths.

The results of two trials of the second scenario are shown in Figure 7. The top row shows the path of the autonomous robot when the sensor nodes are accurately localized; the bottom row shows the path when the sensor nodes are coarsely localized. Comparing, for each trial, the top and bottom figures corresponding to the two localization cases, the similarity is clear. This shows that in autonomous operation as well, using coarse localization of the nodes does not affect performance.

5 Conclusion

In this paper we studied the effect of the accuracy of node localization on the performance of sensor network assisted robotics. We demonstrated that coarsely localizing sensor nodes is sufficient to accomplish performance comparable to when they are accurately localized. The experiments showed that, with both teleoperation and autonomous operation, robots are able to use information provided by the sensor network to navigate safely and efficiently even with very coarse localization of the sensors. This result will facilitate the development of sensor network systems due to the reduced localization requirements.

References

[1] Jason Gorski and Imad Elhajj, “Sensor Network Assisted Teleoperation”, IEEE/ASME International Conference on Advanced Intelligent Mechatronics, pp. 735-740, California, 2005.

[2] H. Qi, S. S. Iyengar, K. Chakrabarty, “Distributed sensor networks - a review of recent research”, Journal of the Franklin Institute, 338(6), pp. 655-668, 2001.

[3] D. Chang, G. Asada, W. Kaiser, O. Stafsudd, “Micropower High-Detectivity Infrared Sensor System”, Proceedings of the 1998 IEEE Solid-State Sensor and Actuator Workshop, 1998.

[4] J. Aslam, Z. Butler, F. Constantin, V. Crespi, G. Cybenko, D. Rus, “Tracking a Moving Object with a Binary Sensor Network”, Conference on Embedded Networked Sensor Systems, Los Angeles, CA, 2003.

[5] S. Pattem, B. Krishnamachari, “Energy-Quality Tradeoffs in Sensor Tracking: Selective Activation with Noisy Measurements”, SPIE’s 17th Annual International Symposium on Aerospace/Defense Sensing, Simulation, and Controls, Orlando, Florida, April 2003.

[6] S. Feller, E. Cull, D. Kowalski, K. Farlow, J. Burchett, J. Adleman, C. Lin, D. Brady, “Tracking and imaging humans on heterogeneous infrared sensor array for tactical applications”, SPIE Aerosense, 2002.

[7] A. Brooks, S. Williams, “Tracking People with Networks of Heterogeneous Sensors”, Proceedings of the Australasian Conference on Robotics and Automation, 2003.

[8] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, “Wireless sensor networks: a survey”, Computer Networks, 38(4), pp. 393-422, 2002.

[9] L. Kleeman, “Real Time Mobile Robot Sonar with Interference Rejection”, Sensor Review, 19(3), pp. 214-221, 1999.

[10] K. Chakrabarty, S. S. Iyengar, H. Qi, E. Cho, “Coding Theory Framework for Target Location in Distributed Sensor Networks”, Proceedings of the International Symposium on Information Technology: Coding and Computing, pp. 130-134, 2001.

[11] Lingxuan Hu and David Evans, “Localization for Mobile Sensor Networks”, Annual International Conference on Mobile Computing and Networking, 2004.

[12] Sameer Tilak, Vinay Kolar, Nael B. Abu-Ghazaleh and Kyoung-Don Kang, “Dynamic Localization Protocols for Mobile Sensor Networks”, IEEE International Workshop on Strategies for Energy Efficiency in Ad Hoc and Sensor Networks, 2005.

[13] A. LaMarca, W. Brunette, et al., “Making Sensor Networks Practical with Robots”, 2002 International Conference on Pervasive Computing.

[14] R. Peterson and D. Rus, “Interacting with a Sensor Network”, Dartmouth, Hanover, NH, 2002.

[15] James Aspnes, David Goldenberg, Yang Richard Yang, “On the Computational Complexity of Sensor Network Localization”, ALGOSENSORS 2004, LNCS 3121, pp. 32-44, 2004.

[16] Feng Zhao, Leonidas Guibas, Wireless Sensor Networks: An Information Processing Approach, Morgan Kaufmann, May 2004.
