
A Theoretical Model, Representing the Sensor Based Sheet for Machines, in Object Detection/Perception

Muhammad Bilal Khan*

*Department of Electrical and Electronic Engineering, Namal College, Mianwali 42250, Pakistan.

(Associate College of Bradford University, United Kingdom) [email protected]

Abstract—In this paper, we introduce the framework of a virtual theoretical model that represents a combination of many densely placed sensors, mainly common types such as ultrasonic, infrared, and laser range finders, mounted on a designed sheet, for algorithm testing and development in academic research on human-machine interaction. By using a number of different sensors in different combinations, we can form an idea of the environment in front of the sheet and, using software, turn the sensors' feedback into an image, or even extract certain sets of information about the objects placed in front of it. We can also gain slightly better insight into a challenging part of perception-based design, namely edge detection. This paper also provides a detailed discussion of possible applications of this sensor-based sheet, with a major example being a futuristic feedback-giving skin for machines such as robots, and the way it can serve as an additional tool for computer science alongside the established computer vision toolset. The effectiveness of the work is demonstrated throughout the explanation segments, with attention to both real-world and virtual-world mapping. As a possible addition to vision-based systems, this paper also discusses the impact of energy losses, a drawback caused by the noise that a large number of sensors adds to the system. Problems of a similar nature are discussed in the final segments.

Keywords—Machine Skin, Vision-based systems, Human machine interaction, Object perception, Algorithm-development, Feedback Sensors.

I. INTRODUCTION

In recent years, there has been a great turnaround in mapping environmental factors with different machines, for example autonomous and semi-autonomous robots and unmanned aerial vehicles (UAVs) equipped with various sensors [2]. One of the important tasks of an autonomous system of any kind is to acquire knowledge about its environment. This is done by taking measurements using various sensors and then extracting meaningful data from these measurements [1]. Understanding the dynamics of a specific environment, by mapping its characteristics, has great value for researchers in all fields. A map can easily tell us the location-based unknown characteristics of different objects placed within a certain area under observation.

To give visual capabilities to a machine, and then perceive data from it, a number of tools are available and in use by researchers worldwide. Many researchers build large-scale digital maps using available mapping technologies, including GPS, RF beacons, encoders, sonar, etc. [2]. In this paper we present the theoretical background of the concept of a sensor-based sheet, which can be used as an additional tool to vision systems for image building and for mapping the data of a specific environment. By placing different sensors on a sheet in different patterns, we can plot certain characteristics of an environment. Based on the sensors' output values, we can allocate dots representing the intensity of those values through a graphical user interface (GUI) in a designed software environment. This gives us a virtual 3D map of the area with various information about objects and surroundings. We will also examine the performance parameters for different sheets with different sensors placed on them. Our proposed model consists of the following basic functionalities:

- A multi-sensor design model, based on the environmental dimensions in which a machine is used, to fetch the digitally available outputs from the sensors in order to perceive the required unknown entities of the domain area.
- A power supply and a control unit for the physical interfaces (sensors and systems).
- A feedback control unit to collect the digital data from the sensors and then process it.
- Two separate storages: one for feedback and one for our predefined 3D intensity masks.
- Comparator and display units to analyze the feedback and, after comparing it to our predefined masks, map it in 3D.
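The functional blocks above can be sketched as a minimal processing loop. This is an illustrative assumption of how the pieces might fit together; the function names, the noise floor, and the scalar mask levels are invented here and are not taken from the paper.

```python
# Minimal sketch of the processing chain described above; every name and
# threshold is an illustrative assumption, not taken from the paper.

NOISE_FLOOR = 0.05          # feedback control unit: reject sub-threshold readings
MASKS = [0.2, 0.5, 0.9]     # predefined 3D intensity masks (here: scalar levels)

def feedback_control(samples):
    """Keep only readings above the noise floor (good-quality signals)."""
    return [v for v in samples if v >= NOISE_FLOOR]

def comparator(reading):
    """Match a stored reading to the nearest predefined intensity mask."""
    return min(MASKS, key=lambda m: abs(m - reading))

def map_frame(raw_samples):
    """One full cycle: fetch -> filter -> store -> compare -> display values."""
    stored = feedback_control(raw_samples)      # goes to feedback storage
    return [comparator(v) for v in stored]      # values sent to display unit

print(map_frame([0.01, 0.48, 0.95, 0.22]))  # -> [0.5, 0.9, 0.2]
```

In a real system the comparator would match vectors of sensor outputs against full 3D masks rather than scalars, but the filter-store-compare-display order is the same.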

In the following sections, our proposed model is explained in detail. After that, the possible applications of our model in real-world scenarios are discussed comprehensively. Later, some of the performance parameters of our model are explained. The last section covers the conclusion and possible future directions for this work.


2014 International Conference on Robotics and Emerging Allied Technologies in Engineering (iCREATE) Islamabad, Pakistan, April 22-24, 2014

978-1-4799-5132-1/14/$31.00 ©2014 IEEE

II. DETAILS OF THE MAPPING-MODEL

Through this study, we can see the impact of using a number of different sensors to map feedback data with a theoretical model. Each segment builds the concept by comparison with previously published research.

A. Multi-Sensor Model

To perceive useful information from the environment, much work has been done in the field of perception-based mapping, and a number of approaches are in use. The most popular are computer and machine vision [6],[8], in which an image is processed pixel by pixel to learn the unknown aspects of an arena, after which all the findings are mapped, if needed, for further enhancement [10],[12]. Another way to map and interact with the environment is to use sensors such as sonar and laser range finders, as in [1-5]. All these models show that an environment can be mapped using sonar or laser range finders, but we are quite limited when it comes to handling large-scale map data, because few sensors are used, and so we are restricted in plotting more accurate readings. For this reason, tackling issues like edge detection becomes very difficult at times.

To sort out these issues, we propose a method that can further improve sensor-based mapping of unknown environmental entities. In this work, we assume a flat, non-metallic sheet of dimensions L×W, where L is any supposed length and W is any supposed width, with boxes to carry the sensors and tunnels on the back to route the wires (in general: ground, VCC/supply, and output-signal wires). The area of the sheet depends on the characteristics of the environment in which it is used. Each sensor is contained in a sub-cell of the sheet occupying an area X×Y, which depends on the dimensions of the sensor used plus the additional area needed to provide a full-length spectrum path for the sensor's ray transmission. A typical overview of the model is shown in Fig. 1. The model in Fig. 1 shows an assumed scenario in which the sheet contains a total of 24 sensors in 4 rows of 6 sensors each. In this study, we consider just two sub-scenarios: one with sonar sensors and the other with laser range finders. Both scenarios tell the same story about the model but with different performance parameters, especially with respect to their effective working ranges. The power and control unit shown in Fig. 1 is responsible for power management of the whole networked sensor system and depends directly on the overall power consumption.
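The L×W sheet and X×Y cell relationship above reduces to simple grid arithmetic. The sketch below uses the assumed 4-row, 6-column scenario; the cell dimensions are placeholder values, since the paper leaves X and Y dependent on the sensor chosen.

```python
# Sheet-layout arithmetic for the assumed Fig. 1 scenario (4 rows x 6 sensors).
# Cell dimensions X, Y are placeholders; actual values depend on the sensor used.

ROWS, COLS = 4, 6           # 24 sensors total, as in the assumed scenario
X, Y = 0.05, 0.05           # per-sensor cell size in metres (illustrative)

def sheet_dimensions(rows=ROWS, cols=COLS, x=X, y=Y):
    """Sheet L x W follows directly from the cell grid: L = cols*x, W = rows*y."""
    return cols * x, rows * y

def cell_centre(row, col, x=X, y=Y):
    """Centre coordinates of the sensor in (row, col), origin at a sheet corner."""
    return ((col + 0.5) * x, (row + 0.5) * y)

L, W = sheet_dimensions()
print(round(L, 3), round(W, 3))  # -> 0.3 0.2
```

The same arithmetic gives each sensor a fixed position, which the mapping software needs in order to place its intensity boxes at the right spot in the 3D view.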

Fig. 1. Overview of the Model.

Our initial output from each sensor is fetched by the feedback control unit, which is responsible for sorting out the ratio between poor-quality and good-quality incoming signals. It also watches for any uncertain additive factors that can affect the outputs, e.g. noise of any nature. After this initial filtering and reception at the feedback control unit, the output signals of all sensors are stored separately, in digital form, based on the real-time output values, as illustrated in Fig. 1. The system also requires a second sub-storage, which works as a memory box for our predefined 3D masks. These 3D masks are our own assumptions, made on the basis of a supposed look-up table of different supposed subsets of sensor output values. Although they appear as boxes on the GUI and provide a graphical interface to the environment data, they are based on the intensity of the sensor's reflected ray. This intensity-based mapping depends on the distance of the object from the sensor in both cases, whether sonar or long-range laser finders. A sonar sensor can measure average distances and works perfectly in a closed indoor environment [3],[19], while a sophisticated laser range finder can work quite sufficiently in different scenarios and arenas, with its maximum effective distance tracking [16]. At the end of the overview model, the comparator and display units work solely to match and compare the values fetched by the sensors and stored in digital form in the feedback storage against our predefined 3D-mask storage. When a larger value arrives, in other words, when an object is located within the vicinity of the sensor, the unit takes the more intense box from the memory of intuitively designed 3D masks and displays the output graphically by mapping it in any developed graphical user interface environment.
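The look-up table behind the 3D masks can be sketched as a small distance-to-intensity mapping. The distance bands and mask levels below are invented for illustration; the paper only specifies that nearer objects map to more intense boxes.

```python
# Hedged sketch of the distance -> 3D-mask look-up described above.
# The distance bands and mask levels are assumptions, not from the paper.

# (max_distance_m, mask_level): nearer objects map to more intense masks
MASK_TABLE = [(0.5, 3), (1.5, 2), (3.0, 1)]

def mask_for_distance(d):
    """Return the intensity-mask level for a measured distance; 0 = out of range."""
    for max_d, level in MASK_TABLE:
        if d <= max_d:
            return level
    return 0  # beyond the effective range of the sensor

print([mask_for_distance(d) for d in (0.3, 1.0, 2.5, 5.0)])  # -> [3, 2, 1, 0]
```

A sonar-based sheet would use short bands like these, while a laser-range-finder sheet would stretch the same table over its much larger effective range.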

B. Extension of Previous Methods

We see our work as an extension of earlier work on mapping an environment using sensors, as in [1-5],[11],[13],[14], and [16-19]. In [3], a way is proposed to build a line map from range results using a single ultrasonic sonar sensor mounted on a mobile robot. But using a single sensor to map an unknown indoor environment, with the greater uncertainty that a single sonar brings, is a hurdle when it comes to real-time interaction between the machine and the data it acquires from its sensors. Reference [2] proposes a methodology to detect lines or edges for later construction of geometric maps using a single laser range finder: "In this proposed work, first, the algorithm decodes the raw data from the sensor, using URG series, into a list of points in polar coordinates. Afterward, it converts to x-y coordinates in order to have the first image. The list of points is reduced using sub sampling, and then it applies Hough Transform to obtain line segments. The Hough Transform is applied in several cycles; using masks to erase lines already found and detect new lines segments. At the end, the algorithm is able to group up to 90.32% of the points in the original image." The mapping works by taking multiple real-time snaps. But again, because a single laser range finder is used, the domain in which the method works under various circumstances is quite restricted. In both of the discussed methods a single sensor is used, and the results were fine but restricted, especially when the same method was applied on mobile machines. To overcome this limitation, our model can work as an extension of these proposed methods as well. By using a sensor sheet, we easily obtain rich information about the unknown environment as well as the freedom to work with more dynamic environments. Fig. 2 shows a sample scenario in which a part of the sensor sheet, containing six sensors, S1 to S6, interacts with an edgy surface. Each sensor has an active arc angle of 30°, with maximum possible power emission and absorption back to the sensor (assuming an ideal state). The feedback control unit sees the output coming to it in the form of sets. When sensors 3 and 4 send it almost equally strong signals, the feedback control unit assigns them a set number based on their intensity level compared with the other sensors in the scenario; these become Set-1. The same process applies to sensors 2 and 5, which show a slight variation in signal strength and an abrupt change due to the non-uniform surface, and are therefore grouped as Set-2. Sensors 1 and 6 are grouped as Set-3 because of the weak signal strength recorded at the feedback control (FC) unit.
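One possible way the feedback control unit could form these sets is to cluster sensors whose signal strengths lie within a tolerance of each other, strongest first. The tolerance value, the sample readings, and the set-numbering scheme below are all assumptions chosen to reproduce the Fig. 2 grouping.

```python
# Hedged sketch of set formation at the feedback control unit: group sensors
# of near-equal signal strength. Tolerance and readings are assumptions.

def assign_sets(strengths, tol=0.1):
    """Map each sensor index to a set number; Set-1 holds the strongest signals."""
    order = sorted(range(len(strengths)), key=lambda i: -strengths[i])
    sets, prev, set_no = {}, None, 1
    for i in order:
        if prev is not None and strengths[prev] - strengths[i] > tol:
            set_no += 1        # strength dropped past the tolerance: new set
        sets[i] = set_no
        prev = i
    return sets

# S1..S6 readings for the Fig. 2 scenario (indices 0..5, illustrative values)
readings = [0.2, 0.55, 0.9, 0.88, 0.5, 0.15]
print(assign_sets(readings))
# S3, S4 -> Set-1; S2, S5 -> Set-2; S1, S6 -> Set-3
```

With these values, sensors 3 and 4 (indices 2 and 3) land in Set-1, sensors 2 and 5 in Set-2, and sensors 1 and 6 in Set-3, matching the grouping described for the edgy surface.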

Fig. 2. A sample scenario, showing interaction of the sensors, to a surface.

Fig. 3 shows a typical layout of the map when the sensors interact with the object's dimensions; the intensity of the color map shows the respective distance patterns. Here Set-1 has the more intense color-box representation, due to the nearness of sensors 3 and 4 to the object/surface. Sensors 2 and 5 are detected at the feedback control unit as Set-2 (giving an idea of the edges when we view the overall output of the system) while sending slightly weaker signals to the feedback control unit; hence their map is projected on the axis with two major characteristics, one being a weak signal, and the assumption is that the object is at a greater distance than for Set-1, as they are all placed in line. Set-3 (allocated to sensors 1 and 6 by the FC unit on the basis of the arriving signal) sends the weakest signals and is therefore represented on the axis as the weakest. On the basis of these allocations at the feedback control unit, the comparator and display units display the respectively allocated masks graphically.

Fig. 3. A typical layout of the map, based on the assumed scenario.

III. POSSIBLE APPLICATIONS

This model can make an innovative impact on a number of current and futuristic applications. We have already discussed how mapping performance improves when we use a network of sensors instead of a single sensor or a few. Bearing in mind the potential of our proposed system, a number of things can be modified, or in other words boosted in performance, using this layout of the sensor-system design. The model can serve as a tool for undergraduate- and graduate-level academic research in machine vision. With this method, a vision-based research model can be more interactive and can engage researchers by giving them the freedom of manual adjustment of the whole system. This also adds more variety to algorithm-development techniques, with hands-on interfacing as part of a human-machine-interaction tool, and can be a smart addition to the work proposed in [6-10],[12]. The whole process of acquiring, processing, and further analyzing data in real-world scenarios can be carried out in a more engaging way. Another possible application of this model is as skin for machines such as robots. Robots currently use only a few sensors and are therefore quite limited at times unless they use a camera vision system for full image fetching [6]. A lot of research is under way on artificial skin that responds to certain signals for further manipulation [20-22], and skin for machines such as humanoid robots can be the ultimate goal in making them able to meet real-world challenges at work. Although our model can give a glimpse of a possible skin for robots, more work is still required on the material properties of the sensors, as well as on the carrying sheet, in terms of flexibility and strength. The model can work effectively on machines with suitable dimensions. Besides mapping and object recognition, the model can also help improve the functionality of 3D-printing systems by adding accuracy when dealing with massive objects in near-future research and applications. Apart from real-world system integration, this work can also make a real impact when dealing with or analyzing a virtually designed machine in a virtual environment.

IV. DISCUSSION ON PERFORMANCE BASED DRAWBACKS

Using this model we can find many applications, as explained earlier, but some drawbacks remain to be tackled. The major drawback is higher power consumption, because we use more sensors. We cannot change much in our model to deal with this issue; the only real option is to be more selective about the material properties of the sensors and components used. Another issue that can affect the performance of our model is collision among the ray spectra of different sensors. As a sensor sends its ray/beam signal, there is a significant probability of spectral collision: a beam can deviate from its path due to an uncertain angled deflection off the surface and engage another sensor on the bounce back or in receive mode, adding uncertainty to the system's overall output. A proposed way to solve this problem is the conventional one of providing a full path for a sensor's ray spectrum by increasing the dimensions of the sensor's enclosing cell on the sheet, so that a given spectrum can make a full path to and from its origin and thus avoid collision at any point during signal transmission. This scheme can work quite well when the sensor sheet carries sonar sensors. With laser range finders there is no such restriction on handling the spectrum, as these sensors already confine their signal spectrum tightly [2].
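The cell-enlargement remedy above reduces to a small geometry calculation: a beam with arc angle θ spreads to a width of 2·d·tan(θ/2) at depth d, so the enclosing cell must be at least that wide. The 30° arc comes from the paper's sample scenario; the depth value is an assumption for illustration.

```python
# Back-of-envelope cell sizing for the collision-avoidance scheme above:
# a beam with arc angle theta needs a cell at least 2*d*tan(theta/2) wide
# to contain the cone out to depth d. The 30-degree arc is from the paper's
# scenario; the depth is an assumed value.

import math

def min_cell_width(arc_deg, depth_m):
    """Minimum cell width that encloses a beam of the given arc out to depth_m."""
    return 2 * depth_m * math.tan(math.radians(arc_deg / 2))

print(round(min_cell_width(30, 0.10), 4))  # -> 0.0536 (at 10 cm depth)
```

This also shows why the problem is sonar-specific: a laser range finder's beam divergence is tiny, so its tan(θ/2) term, and hence the required extra cell width, is negligible.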


V. CONCLUSION AND FUTURE WORKS

In this paper, the framework theory of a virtual model representing a sensor-based sheet for machines in object detection and perception has been presented. We have discussed in detail how, by arranging various sensors in a pattern on a sheet, we can map certain environmental factors or unknowns graphically. We have also discussed the relation of our proposed framework to previous research on mapping with various sensors. We then discussed the working conditions of our model with a sample scenario, using a sonar sensor to map short-range, indoor, unknown environmental entities, and the same scenario with more effective range-handling capabilities using laser range finders. This study gives a clear idea of the use of the proposed sensor network as an additional tool for machine vision. By overcoming the energy losses and the additive noise at the system's feedback unit, we can make the system more economical and suitable for any working conditions. Further study of the material properties of the system can also improve its effectiveness.

ACKNOWLEDGMENT

The author would like to thank Dr. Amir Khurrum Rashid, Dr. Aamir Shahzad, M. Fayyaz Kashif, and Mr. Hassaan Saadat for their support and encouragement during this study. The author would also like to thank the anonymous reviewers for their valuable comments. Additionally, the author acknowledges the support of the IEEE chapter at Namal College for the relevant facilitation.

REFERENCES

[1] M. Farid, NM. Arshad, and A. Razak, “Construction Sonar Sensor Model of Low Altitude Field Mapping Sensors for Application on a UAV”, Proceedings of the 8th IEEE International Colloquium on Signal Processing and its Applications, 2012.

[2] Marcos Ogaz, Rafael Sandoval and Mario Chacon, “Data Processing from a Laser Range Finder Sensor for the Construction of Geometric Maps of an Indoor Environment”, IEEE transactions tracking number 978-1-4244-4480-9/09.

[3] M. Kareem Jaradat and Reza Langari, “Line Map Construction using a Mobile Robot with a Sonar Sensor”, Proceedings of the 2005 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Monterey, California, USA, 24-28 July, 2005.

[4] D. Lee, “The map-building and exploration strategies of a simple sonar-equipped robot”, Cambridge university press, Cambridge, 1996.

[5] V. Vassilis, “Robot localization and map construction using sonar data”, http://rossum.sourceforge.net, 2001.

[6] G. Miller, S. Fels and Steve Oldridge, “A Conceptual Structure for Computer Vision”, Proceedings of Canadian Conference on Computer and Robot Vision, 2011.

[7] U. Frese, P. Larsson, and T. Duckett, “A multilevel relaxation algorithm for simultaneous localisation and mapping”, IEEE Transactions on Robotics, vol. 21, no. 2, pp. 1–12, 2005.

[8] P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox, “RGB-D mapping: Using depth cameras for dense 3D modeling of indoor environments”, in Proc. of the Int. Symposium on Experimental Robotics (ISER), 2010.

[9] M. Kaess, A. Ranganathan, and F. Dellaert, “iSAM: Incremental smoothing and mapping,” IEEE Trans. on Robotics, vol. 24, no. 6, pp. 1365–1378, Dec 2008.

[10] R. Kümmerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard, “g2o: A general framework for graph optimization,” in Proc. of the IEEE Int. Conf. on Robotics & Automation (ICRA), 2011.

[11] T. Nakamura and H. Ishiguro, “Automatic 2D Map Construction using a Special Catadioptric Sensor”, Proceedings of the 2002 IEEE/RSJ Intl. Conference on Intelligent Robots and Systems, EPFL, Lausanne, Switzerland, October 2002.

[12] J. Borenstein and U. Raschke, “A comparison of grid type map-building techniques by index of performance”, Proceedings of IEEE International Conference on Robotics and Automation, pp.1828-1832, 1990.

[13] O. Bozma and R. Kuc, “Single sensor sonar map building based on physical principles of reflection”, Proceedings of IEEE/RSJ International Workshop on Intelligent Robots and Systems '91, pp. 1038-1043, 1991.

[14] G. Oriolo, M. Vendittelli and G. Ulivi, “On-line map building and navigation for autonomous mobile robots”, Proceedings IEEE International Conference on Robotics and Automation, pp. 2900-2906, 1995.

[15] G.C. Anousaki and K.J. Kyriakopoulos, “Simultaneous localization and map building for mobile robot navigation”, IEEE Robotics & Automation Magazine, pp.42-53, 1999.

[16] L. Zhang and B.K. Ghosh, “Line segment based map building and localization using 2D laser range finder” , Proceedings ICRA '00, IEEE International Conference on Robotics and Automation, pp. 2538 -2543, 2000.

[17] Lindsay Kleeman and Roman Kuc, “Mobile Robot Sonar for Target Localization and Classification”, International Journal of Robotics Research. Volume 14, Issue 4, August 1995, Pages 295-318.

[18] Hans Jacob S. Feder, John J. Leonard, Chris M. Smith, “Adaptive Concurrent Mapping and Localization Using Sonar”, Proceedings of the 1998 IEEE Intl. Conference on Intelligent Robots and Systems, Victoria, B.C., Canada, October 1998.

[19] Yu-Cheol Lee, Wonpil Yu, Jong-Hwan Lim, Wan-Kyun Chung and Dong-Woo Cho. (2008). “Sonar Grid Map Based Localization for Autonomous Mobile Robots”, 2008 IEEE/ASME International conference on Mechatronics and Embedded Systems and Applications, MESA 2008.

[20] Lei Sun, Jian Hua Shan, Max Q-H. Meng, Donfeng Zhang, Tao Mei, “Application of Intelligent Flexible Skin Sensors for Interfacing with Robotic Pets”, 2006 Proceedings of the 1st IEEE International Conference on Nano/Micro Engineered and Molecular Systems, January 18 - 21, 2006, Zhuhai, China.

[21] Jian Hua Shan, Tao Mei, Lei Sun, et al., "The design and fabrication of a flexible three-dimensional force sensor skin", Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), pp. 1965-1970.

[22] An Yong Lee and Doik Kim, “Detachable Tactile Sensor Skin Module for Robotic Applications”, 2013 10th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI) October 31-November 2, 2013, Jeju, Korea.
