Turtlebot Laser Tag

Turtlebot Laser Tag was a collaborative project between Team 1 and Team 7 to create an interactive and autonomous game of laser tag. Turtlebots communicated through a central ROS server and used the Kinect to track opponents and emulate shooting.

Jason Grant, Joe Thompson
{jgrant3, jthomp11}@nd.edu

University of Notre Dame
Notre Dame, IN 46556

Artists

Kayla Wolter, Chelsea Young
Saint Mary's College

Notre Dame, IN 46556


I. Executive Summary

This project pits two robots in a spectator-thrilling laser-tag fight to the death (or battery discharge). The robots compete in an arena filled with obstacles. These obstacles, along with an avatar for each of the Turtlebots, were designed by the artistic teams. Robot control and game management were handled by the engineers. The two aspects of the project were envisioned to be reasonably separable to allow for remote development of each of the solutions. The robots were designed to be robust to the environments presented by the artists.

II. Introduction

Why Laser Tag?

Collaboratively, Teams 1 and 7 decided to simulate the game of laser tag for our final project. Laser tag was chosen for several reasons. Firstly, laser tag fits within the constraints of the Turtlebot: Turtlebots do two things well, move and think about moving. Secondly, laser tag makes use of the available sensors on the robots (vision/3D sensing and cliff/bump sensors). The concept of shooting is also easily simulated with a vision system and proper sound effects. We believed that laser tag would be fun for kids to watch and play, and that the game was conceptually easy to follow.

Goals

Our goal was to have multiple robots autonomously play a game of laser tag in a constrained environment. This included moving quickly through the environment while avoiding obstacles, tracking the opponent when it was discovered, "shooting" the opponent when in range, and scattering across the map after being hit. Furthermore, each team was responsible for developing its own algorithm for independent gameplay.

Game Rules

The game has a predefined time limit during which each robot searches for and shoots its opponents. Robots are required to obey the referee (the ROS central server), which determines hits and misses for both teams. After a robot is hit by the opposition, the hit robot is responsible for exhibiting a hit behavior (shaking, spinning, etc.). At this time, a ceasefire begins and lasts for 20 seconds. During this time, the robots scatter and no shots may be fired. When the ceasefire concludes, the game resumes as normal. Teams score one point for a hit and lose a point after 3 misses.
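As a rough illustration of these rules, the sketch below keeps score the way the referee might: one point per hit, a point deducted after every third miss (one possible reading of the rule above), and a 20-second ceasefire after each hit. The class and method names are assumptions for illustration, not the actual gamemaster code described in Section III.

```python
import time

CEASEFIRE_SECONDS = 20

class Referee:
    """Toy bookkeeping for the laser-tag rules described above."""

    def __init__(self, teams=("team1", "team7")):
        self.score = {t: 0 for t in teams}
        self.misses = {t: 0 for t in teams}
        self.ceasefire_until = 0.0

    def in_ceasefire(self):
        return time.time() < self.ceasefire_until

    def register_shot(self, shooter, hit):
        # Shots fired during the ceasefire are ignored (loose rule enforcement).
        if self.in_ceasefire():
            return
        if hit:
            self.score[shooter] += 1
            # A hit triggers the opponent's hit behavior and a 20 s ceasefire.
            self.ceasefire_until = time.time() + CEASEFIRE_SECONDS
        else:
            self.misses[shooter] += 1
            if self.misses[shooter] % 3 == 0:  # lose a point after every 3 misses
                self.score[shooter] -= 1
```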

Artistic Aspect

The robot teams took on the appearance of either a dinosaur or a robot. The robots interacted with each other in a space modeled after a city-skyline battlefield. We created objects in the environment that served both as obstacles for the robots to utilize over the course of battle and as a fusion of past and futuristic themes. The objects for the course are portable and lightweight, as Pepakura was used to construct these forms. In regard to surface design, we used a latex covering on the cardboard to create a more durable form. The forms were also weighted down using plaster and wood. Viewers were able to control the robots, thus engaging in a battle of past vs. future taking place in the present.

III. Methodology

Collaborative Material

A central server was needed to enforce game rules, determine hits and misses from shooting solutions provided by the competitors, and communicate game state to each of the robots. A separate ROS node, called the gamemaster, was created for this purpose. It ran on its own dedicated system and communicated with each robot over a wireless network using a custom ROS message. As stated, the chief responsibility of this node was to manage the game state and enforce the rules. This was implemented through accurate management of the game state: by tracking the state accurately, the gamemaster could leave it up to each of the robots to follow the basic rules. The game state consisted of the actual state (hunt or scatter), the number of hits registered by each team, and the number of misses registered by each team. Through updates of this state, the robots were able to deduce whether a given shot was registered as a hit or a miss and which behavior the robot should currently be following. Loose rule enforcement was implemented by ignoring shots fired in the scatter state and by penalizing robots that missed very often. The latter encouraged the control algorithms to shoot only when they could reasonably expect a hit.

Determining hits and misses was also a critical part of the gamemaster. In order to hit an opponent, a robot competitor sends a shooting solution to the gamemaster. This solution is composed of the image-coordinate center of the blob corresponding to the target on the opponent, as well as the depth at that point in the image. This information was then combined in the server to determine a hit probability. The probability of a hit falls off as the opponent gets farther away or more off-center; thus, it would be very unlikely to hit an opponent in the corner of the RGB image that is 3.5 meters away.

The gamemaster communicated score information to a scoreboard database application. The only purpose of this application was to track the score for each team. This data was then easily queried by a PHP web page and displayed for the observers. An example display is shown in Figure 1.


Figure 1. A scoreboard depicting Team 7 dominating a game of laser tag.
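The report describes the hit-probability falloff only qualitatively (lower probability farther away and farther off-center), not its exact form. The sketch below is one plausible toy model; the constants, thresholds, and function name are assumptions for illustration, not the gamemaster's actual implementation.

```python
import math

def hit_probability(blob_x, blob_y, depth_m,
                    image_width=640, image_height=480,
                    max_range_m=3.5):
    """Toy model: probability decays as the target blob moves off-center
    in the RGB image and as the target gets farther away.
    All constants are illustrative assumptions."""
    # Normalized offset of the blob center from the image center (0 to ~0.7)
    dx = (blob_x - image_width / 2.0) / image_width
    dy = (blob_y - image_height / 2.0) / image_height
    off_center = math.hypot(dx, dy)

    # Normalized range penalty, capped at 1.0 beyond max_range_m
    range_penalty = min(depth_m / max_range_m, 1.0)

    # Simple multiplicative falloff; clamp to [0, 1]
    p = (1.0 - off_center) * (1.0 - 0.8 * range_penalty)
    return max(0.0, min(1.0, p))

# A centered target at 1 m is a near-certain hit, while a corner target
# at 3.5 m is very unlikely to register.
print(hit_probability(320, 240, 1.0))   # high probability
print(hit_probability(620, 470, 3.5))   # low probability
```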

Independent Algorithm Development

Each group was tasked with developing an algorithm to control a single competitor in the game of robot laser tag. These algorithms were developed without collaboration between the groups.

Separation of Perception / Control

To facilitate modular development, we decided to separate the perception functionality from the movement-control functionality. This allowed all of the code for perceiving the environment to be contained within its own nodelet, which would then generate high-level messages for the movement controller to use. By doing this, we could develop the controller independently of the perception code, which allowed for easier debugging and more maintainable code files.

The perception nodelet is responsible for handling the sensor input from the environment after preprocessing by ROS. The ROS system processes the depth-disparity data coming from the Kinect sensor using the Point Cloud Library (PCL) and passes the resulting three-dimensional point cloud to the perception nodelet. The RGB data from the Kinect is first processed by CMVision, and the resulting color-blob information is sent to the nodelet. The nodelet is responsible for combining these two sensor sources into meaningful information for use by the movement controller. This is done by linking the real-world three-dimensional position data with the color-blob data. The linking is possible because the OpenNI drivers provide a flag to automatically register the depth camera with the RGB camera. This process is outlined in Figure 2.

Figure 2. Flow from the environment to the perception nodelet to the custom messages.
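As a rough illustration of the linking step, the sketch below looks up the registered depth at a blob's pixel center and back-projects it to a 3-D point with the standard pinhole model. The intrinsics, the input structures, and the helper name are assumptions for illustration; this is not the project's actual perception nodelet code.

```python
import numpy as np

# Assumed Kinect RGB intrinsics (approximate defaults, for illustration only)
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point

def blob_to_3d(blob_cx, blob_cy, depth_image):
    """Given a color blob's pixel center and a depth image already registered
    to the RGB camera, return the (x, y, z) position in meters in the camera
    frame, or None if depth is missing at that pixel."""
    z = float(depth_image[int(blob_cy), int(blob_cx)])  # depth in meters
    if z <= 0.0 or np.isnan(z):
        return None  # no depth (too close, too far, or IR shadow)
    # Pinhole back-projection from pixel coordinates to camera coordinates
    x = (blob_cx - CX) * z / FX
    y = (blob_cy - CY) * z / FY
    return (x, y, z)

# Example: a blob centered at pixel (400, 250) with a synthetic depth image
depth = np.full((480, 640), 2.0)     # pretend everything is 2 m away
print(blob_to_3d(400, 250, depth))   # -> roughly (0.31, 0.04, 2.0)
```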

Custom Messages

As stated in the previous section, our perception nodelet returned four custom messages: obstacle, collision, opponent, and target. Obstacle messages were used in our medium- to long-range planning; this message indicated that an object was approximately 1-3 meters ahead of us and within half a meter to the right or left, which allowed us to prepare to avoid the obstacle. Collision messages required immediate attention, either by stopping, backing up, or turning to avoid an obstacle less than 1 meter ahead. Opponent messages were sent when the avatar of the opponent was spotted; after receiving this message, the Turtlebot could follow its opponent around the map. The last message is the target message, which contained the blob information so that we could align with and shoot the target.

Figure 3. Custom messages from the perception nodelet.
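For illustration only, the sketch below writes the four message types out as plain Python data classes with plausible fields inferred from the descriptions above. The project defined these as custom ROS messages; the field names here are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    """Object roughly 1-3 m ahead and within ~0.5 m laterally;
    used for medium- to long-range planning."""
    distance_m: float        # forward distance to the object
    lateral_offset_m: float  # negative = left of center, positive = right

@dataclass
class Collision:
    """Object less than 1 m ahead; requires an immediate stop, back-up, or turn."""
    distance_m: float

@dataclass
class Opponent:
    """The opponent's avatar has been spotted; used to follow the opponent."""
    bearing_rad: float       # angle to the opponent relative to robot heading
    distance_m: float

@dataclass
class Target:
    """Blob information for the shot: used to align with and shoot the target."""
    blob_center_x: int       # image column of the blob center
    blob_center_y: int       # image row of the blob center
    depth_m: float           # registered depth at the blob center
```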

State Machine

A high-level state machine controlled the goals and shot firing of the robot, as shown in Figure 4. At all times, the robot is in either the hunt or the scatter state. This state is communicated to the robot by the central gamemaster. If the robot is in the hunt state, it will attempt to follow the opponent in order to shoot it at some point in the future. If the robot is in the scatter state, it will flee by turning away if the opponent is found. If the target is located and the game state is hunt, the robot will fire a shot. At any point in the game, if the hit and miss counts from the gamemaster indicate that the robot has been hit with a shot, the robot will perform a predefined hit behavior and then resume in the current state. Notice that this state machine does not control movement at all; it only manages high-level goals, which are communicated to the actual controller as discussed in the next section.

Figure 4. Goal management state machine.
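A minimal sketch of this goal logic, assuming simplified game-state and message inputs; the names and return values are illustrative, not the project's actual controller interface.

```python
HUNT, SCATTER = "hunt", "scatter"

class GoalStateMachine:
    """High-level goal management only: decides whether to follow or flee,
    whether to fire, and when to run the hit behavior. It never issues
    movement commands itself."""

    def __init__(self):
        self.state = HUNT
        self.last_hits_on_me = 0

    def update(self, game_state, hits_on_me, opponent_seen, target_locked):
        """Return a high-level goal for the movement controller."""
        self.state = game_state  # hunt or scatter, as sent by the gamemaster

        # A newly registered hit against us triggers the predefined hit
        # behavior, after which we resume the current state.
        if hits_on_me > self.last_hits_on_me:
            self.last_hits_on_me = hits_on_me
            return "hit_behavior"

        if self.state == HUNT:
            if target_locked:
                return "fire"            # shooting solution goes to the gamemaster
            if opponent_seen:
                return "follow_opponent"
        elif self.state == SCATTER and opponent_seen:
            return "flee_opponent"

        return "wander"                  # default: keep moving and searching
```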

Movement Controller

The movement controller was developed as a reactive controller in that different high-level sensory inputs are linked directly to motor control. The exact nature of this linkage can change depending on the robot's state. For example, in the scatter state, opponent messages will cause the robot to turn away in an attempt to flee, whereas in the hunt state the robot will attempt to follow the opponent based upon information coming from the opponent messages. Regardless of the state, the forces coming from opponent sensing are combined with the forces generated by the other types of messages, and movement occurs.

The combination of the various movement forces occurs through the action of a "movement arbitrator". This function accepts a variety of desired movements generated by reactions to the various sensory messages and outputs a single movement command to the robot's drivers. Each desired movement sent to the arbitrator has a weight associated with it. These weights are used by the arbitrator to create a normalized weighted average of all of the desired movements input in a single turn. This simple mechanism proved to be quite effective and allows the relative importance of each reactive function to be set at the function level; the arbitrator need not know the source of each desired movement. With this established, the controller is just a set of functions generating movement commands that all feed into the arbitrator. The arbitrator is the only entity in the program that can give movement control commands to the robot. The reactive functions are then given relative importance, which is coded by adjusting the weights of the movements input to the arbitrator.

Our movements were based on the following desired behavior. As the weakest goal, the robot wants to seek obstacles that could be used for cover: if an obstacle is detected, the robot will attempt to approach the object in an effort to hide. This is given a low weight compared to the other reactions. As the next weakest goal, the robot wants to either hunt the opponent by following it (hunt mode) or turn away from it (scatter mode). Movements generated by this reaction are given a stronger weight than the obstacle reaction, which allows the hunting or scattering behavior to subsume control over the obstacle-seeking behavior. The strongest reactions are those that avoid collisions; these are caused by collision events and result in a command with a very high weight being sent to the arbitrator. The default behavior, if no reaction occurs, is to move forward. The process is outlined in Figure 5, and a sketch of the arbitration itself follows the summary list below.

Figure 5. Movement controller arbitration layout.

In summary, the reaction arbitration controller creates the following behavior:

1. The robot will move forward.
2. If an obstacle is seen, the robot will attempt to hide by it.
3. If the opponent is seen, the robot will attempt to follow it or flee from it depending upon the game state.
4. If a collision is imminent, the robot will address that immediately.

Finally, if at any time the robot can legally make a shot at a target within range, the controller will do so.
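A minimal sketch of the normalized weighted-average arbitration described above, assuming each reactive function proposes a (weight, linear velocity, angular velocity) triple; the specific weights and values are illustrative assumptions rather than the project's actual code.

```python
def arbitrate(desired_movements):
    """Combine weighted desired movements into one command.

    Each entry is (weight, linear_velocity, angular_velocity). The output is
    the normalized weighted average, so heavily weighted reactions (collision
    avoidance) dominate lightly weighted ones (obstacle seeking)."""
    total_weight = sum(w for w, _, _ in desired_movements)
    if total_weight == 0.0:
        return (0.2, 0.0)  # default behavior: move forward when nothing reacts
    linear = sum(w * v for w, v, _ in desired_movements) / total_weight
    angular = sum(w * a for w, _, a in desired_movements) / total_weight
    return (linear, angular)

# Illustrative weights: collision >> opponent reaction >> obstacle seeking
proposals = [
    (1.0,  0.3,  0.2),   # weak: turn toward an obstacle to hide behind it
    (5.0,  0.4, -0.5),   # stronger: follow (or flee) the opponent
    (50.0, -0.2, 1.0),   # strongest: back away and spin to avoid a collision
]
print(arbitrate(proposals))  # the collision reaction dominates the result
```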


IV. Results

Our demonstration at Robotics Day turned out to be quite successful. We were able to achieve both remotely controlled and autonomous gameplay. Attendees enjoyed the interactive aspect, and eventually we had lines forming to play the game. Nevertheless, there were several issues that we faced while our robots were deployed.

The largest issue we dealt with was poor WiFi coverage in the Joyce Center. When first proposing the project, we expected that WiFi might be an issue in the arena. Our initial solution was to bring our own wireless router on the day of the event. Unfortunately, this did not help much; we believe that the size of the arena and the unavailability of an open channel caused poor coverage. Because of the poor WiFi coverage, the Turtlebots often lost connection to the ROS master node. When this happened, the Turtlebots were no longer able to register new commands and continued to loop the last issued command. When our robot navigated in autonomous mode, its default motion was to move forward, and the loss of connection to the master node meant the robot could not register bumps on the front bump sensor or see objects ahead. This caused the Turtlebot to run into objects and into the walls. The WiFi issue became worse when users controlled the robots with the PlayStation 3 controller: when the connection failed, the Turtlebots became unresponsive to the controller, which discouraged the user.

Some of the artistic aspects also caused issues in the gameplay. The artwork that donned the robots presented a challenge while the game was played interactively. The avatars were able to spin freely on top of the robot; this disoriented users at times, and gamers were not sure which side of the robot was the front. In one instance, a player continually drove in the wrong direction and eventually drove through a wall. The color used for the ballerina robot was also the same color used for several buildings, so Team 1 did not have the option to track the robot throughout the arena. The material used for some of the buildings was reflective, which caused several issues for the Kinect; most noticeably, a team could see its own shooting target in a reflection and then proceed to shoot itself.

We, like most other teams, experienced battery issues, but we did not expect our robots to last throughout the entire event. Robot laser tag operated for about 2 hours before the batteries died. We were able to change batteries and operate for about another 2 hours. Total downtime was close to one hour.

V. Discussion and Future Work

Our previous lab work had adequately prepared us for this project, and we felt that this project was simply an extension of our previous work, almost like a final lab assignment. In an earlier lab we had already developed the system that separates perception from control, so we only had to develop the movement controller; very little modification was needed to our previous perception nodelet except to extract the specific information needed for our new messages. This allowed us to focus our attention on communicating with the central gamemaster, controlling our robot with the reactive controller, and figuring out how to get two robots operating on the same ROS master.

This last point represented the bulk of our development problems. It was not discussed previously because it is viewed as basic robot startup protocol required to do any other task and is taken as a very basic requirement for the project. Theoretically, it should have been easy: the two robots must not listen to the same topics for movement and sensor information, so these topic names need to be changed. However, because Turtlebots were developed with ease of use in mind, the developers created a number of scripts to "bring up" the robots automatically. This ended up hiding many of the finer details of Turtlebot operation, which we had to figure out by reading the scripts. Nevertheless, we eventually configured the robots to listen and publish on their own ROS topics so that their communications would not overlap.

We feel that an idea of this nature could be incorporated into labs as future work. Specifically, the use of multiple Turtlebots on a single master would be an excellent concept to cover and could lead into a discussion of multi-robot systems. Because of the configuration needed to get this working, the lab could also introduce the lower-level concepts of ROS as well as the procedures necessary for starting the robots. The tasks that the multi-robot system performs need not be complex, as the takeaways from this lab are the ideas needed to get a system like this working.
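One common way to keep two robots from sharing topics on a single master is to push each robot's nodes into their own namespace, so that relative topic names resolve to per-robot topics. The sketch below is a minimal rospy illustration of that idea, not the bring-up configuration the project actually used; the node name, topic name, and invocation are assumptions.

```python
#!/usr/bin/env python
# Run one copy per robot, each in its own namespace, e.g. (assumed invocation):
#   ROS_NAMESPACE=robot1 ./controller.py
#   ROS_NAMESPACE=robot2 ./controller.py
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('laser_tag_controller')

# A relative topic name resolves inside the node's namespace, so this becomes
# /robot1/cmd_vel for the first robot and /robot2/cmd_vel for the second;
# the two robots never overlap on the shared ROS master.
cmd_pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)

rate = rospy.Rate(10)        # 10 Hz control loop
while not rospy.is_shutdown():
    cmd = Twist()
    cmd.linear.x = 0.2       # default behavior: creep forward
    cmd_pub.publish(cmd)
    rate.sleep()
```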
