Robotic Guidance
Joe Stawicki
Transcript


Robotic Guidance
Joe Stawicki

Project Description
- Teach a robot to guide a person to a predefined destination.
- The robot must use a camera and vision algorithm(s) as the main guidance.
- Sensors such as bump, infrared, and ultrasound could provide direction and safety for the robot.
- The solution should be easily modified for alternative routes.
- Check out Alexander Popov's 2011 senior project for techniques for driving the robot.

My Solution
- Turtlebot
- Kinect Sensor / Gyroscope
- ROS (Robot Operating System)
  - Moves the robot
  - Gathers data (Kinect Sensor, gyroscope)
  - Calculates the next position
  - User interface
  - SLAM (Simultaneous Localization and Mapping)

Originally I was going to use a webcam and Bluetooth along with the SURF algorithm (image matching). After talking to Sasha, who worked with the robot a couple of years ago and is currently working in a robotics lab in graduate school, he suggested using the Robot Operating System (ROS) along with a Kinect sensor. This type of robot has a name: the Turtlebot. It uses a Kinect sensor (the one used for the Xbox 360) and a gyroscope for visual/position information. In addition, it uses two laptops: one is mounted on the Turtlebot itself, and the other communicates with the one on the robot via wifi. The Turtlebot uses ROS to do many activities that would typically be very difficult to do from scratch. This operating system provides functionality out of the box for driving the robot, getting data from the various sensors like the Kinect and gyroscope, calculating where the robot needs to be next from the data gathered (and moving the robot to that position), and it also provides a user interface to work with. In order to navigate an area, the Turtlebot (with ROS) uses a technique called SLAM, or simultaneous localization and mapping, to map out an area/room/building/etc. I will explain what this is in a bit.
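To make the "out of the box" driving concrete, here is a minimal sketch of commanding the base from Python with rospy. This is not the project's code: the cmd_vel topic name, the speed, and the 3-second run time are assumptions, and the standard ROS 1 API is used.

#!/usr/bin/env python
# Minimal sketch: drive the base forward for ~3 seconds by publishing
# velocity commands. Assumes the base listens on a 'cmd_vel' Twist topic
# (the exact topic name varies between Turtlebot/ROS releases).
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('drive_forward_sketch')
pub = rospy.Publisher('cmd_vel', Twist)  # newer ROS releases also want queue_size=
rate = rospy.Rate(10)  # resend the command at 10 Hz

cmd = Twist()
cmd.linear.x = 0.1   # forward at 0.1 m/s
cmd.angular.z = 0.0  # no turning

start = rospy.Time.now()
while not rospy.is_shutdown() and rospy.Time.now() - start < rospy.Duration(3.0):
    pub.publish(cmd)  # most bases stop if commands stop arriving
    rate.sleep()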

Kinect Sensor

Variety of sensors:
- 3D depth sensors (laser)
- RGB camera
- Array of microphones

The Turtlebot uses the Kinect sensor as its eyes, as can be seen in yellow. A little about the sensor: it includes a variety of different sensors, among them 3D depth sensors (using a laser) (1); this is what the Turtlebot mainly uses to see any obstacles/boundaries. The laser scanner creates a 3D view of an area using these sensors. The Kinect also includes an RGB camera (for taking pictures, etc.) (2); this isn't used for the Turtlebot. It also includes an array of microphones (3), also not used. In addition, the Kinect can tilt, which is not used for the Turtlebot.

SLAM
- Simultaneous Localization and Mapping
- Build a map of an unknown environment
- Uses sensors
- Keeps track of current location based on what it sees/has seen
- Navigate to a particular spot on the map

SLAM: simultaneous localization and mapping. This technique is used by the Turtlebot in order to navigate in a particular area/building/room, etc. First, a map needs to be created of the area the robot will be navigating in. I use the keyboard to drive the Turtlebot around a room or area. Data is captured by the sensors as I drive the robot; while this is happening, the algorithm keeps track of where the robot is located in relation to what it sees, and a basic map is created based on boundaries (like walls) and the localization. An example of what a map may look like can be seen above. Now we have our base and can navigate from it. When starting up the navigation, the robot needs to know where it is on the map. A starting pose is given so the robot can locate itself on this map and the navigation can proceed directly. It would be very difficult for the robot to guess where it is starting due to many variables; for instance, two different places may look the same to the robot when it is started up, and it wouldn't know which one to use. Now that I have the map and the robot knows where it is, I can tell the robot where I would like it to go on the map. It then uses the map and localizes itself as it drives to the destination based on what its sensors are seeing. In addition, new objects or obstacles may be present that the map doesn't include. The robot takes in these objects with its sensors as well and figures out how to navigate around obstacles to ultimately get where it needs to go. This solution isn't always perfect; sometimes the robot gets confused and doesn't know where it is, or its sensors may malfunction, but generally SLAM is very useful. It has even been used with driverless vehicles. Next we will see some examples of the sensor data and mapping in action.
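As a sketch of the two steps just described (give a starting pose, then a destination), the stock ROS navigation stack exposes an 'initialpose' topic and a 'move_base' action. The coordinates below are made-up placeholders for a real map; this is an illustration rather than the project's own code.

#!/usr/bin/env python
# Sketch: localize the robot on an existing map, then ask the navigation
# stack to drive to a goal. Positions are placeholder values.
import rospy
import actionlib
from geometry_msgs.msg import PoseWithCovarianceStamped
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('navigate_sketch')

# 1) Tell the localizer roughly where the robot is starting on the map.
init_pub = rospy.Publisher('initialpose', PoseWithCovarianceStamped)
rospy.sleep(1.0)  # give the connection a moment to come up
start = PoseWithCovarianceStamped()
start.header.frame_id = 'map'
start.pose.pose.position.x = 1.0     # placeholder starting spot
start.pose.pose.orientation.w = 1.0  # facing along the map's x-axis
init_pub.publish(start)

# 2) Send the destination and wait until the robot arrives (or gives up).
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()
goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.pose.position.x = 4.0   # placeholder destination
goal.target_pose.pose.orientation.w = 1.0
client.send_goal(goal)
client.wait_for_result()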

Left: how we would see what the robot is looking at. Right: how the robot perceives the data its sensors gather. The white lines represent what the laser scanner is currently seeing; this is a 3D perspective. The darker blue represents an object based on what the sensors see. The software then inflates this object to a larger size to give the robot a safety net; the lighter blue around the darker blue is this inflated object. The black around the edges is what the sensors perceive as a boundary (like a wall), and the lighter gray is considered open space for the robot. Sometimes not all objects can be seen by the robot; it does have a wide-angle lens, but it cannot always see shorter objects or things below it. In this image, it can see the garbage can (yellow arrow) and the recycle bin (orange arrow); in addition, the bed is seen as an object, as is the dresser.
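The "inflation" idea is easy to show on a toy occupancy grid. The sketch below is not the navigation stack's actual code, just an illustration: every free cell within a safety radius of an obstacle cell gets marked as costly so the planner keeps the robot away from it.

# Toy illustration of obstacle inflation on a small occupancy grid.
from collections import deque

FREE, OBSTACLE, INFLATED = 0, 100, 50

def inflate(grid, radius):
    """Mark free cells within `radius` steps of any obstacle as INFLATED."""
    rows, cols = len(grid), len(grid[0])
    queue = deque((r, c, 0) for r in range(rows) for c in range(cols)
                  if grid[r][c] == OBSTACLE)
    seen = set((r, c) for r, c, _ in queue)
    while queue:
        r, c, d = queue.popleft()
        if d == radius:
            continue  # reached the edge of the safety net
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                seen.add((nr, nc))
                if grid[nr][nc] == FREE:
                    grid[nr][nc] = INFLATED
                queue.append((nr, nc, d + 1))
    return grid

# A lone obstacle (100) picks up a ring of inflated cells (50) around it.
for row in inflate([[0, 0, 0, 0],
                    [0, 100, 0, 0],
                    [0, 0, 0, 0]], radius=1):
    print(row)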

Now I put the recycling bin right in front of the sensor. First of all, the dresser can be seen as an object on the left side of the screen, and the bed can as well. The recycling bin is seen by the laser scanner at the yellow arrow (it is known to be an object). The sensor can't see anything behind the recycling bin, so the algorithm interprets that as a bunch of objects behind it, even though there is nothing there; sensor data can be misinterpreted.

This picture is the most interesting. I opened my door and put the recycling bin and laptop case at angles. First, the yellow arrow shows the recycling bin, and it is interpreted as an object. There are also many objects on the right-hand side that are not there (the recycling bin causes distortion). The laptop case can be seen as an object as well, at the orange arrow. The bed is also an object on the right-hand side. The door is now seen as an object and is pointed to by the green arrow. Finally, since the door is open, the laser scanner can see outside the door, and the back wall outside the door is noted by the blue arrow. Notice that the open space of the map does not extend out of the door even though there is open space, and the black boundary does not get extended outside the door; the software uses prior information for the boundary. The boundary only gets updated if the robot moves at least 5 feet; then the map gets updated. That is a little information about what the sensors see and how it gets interpreted. This information is used to create a map and then to navigate that map.

Problems
- Network issues (SNC firewall)
- Using ROS (open source)
- Weight balance issues

I had many different problems along the way that created many learning experiences. One big issue was dealing with the network. Since this robot operates using two laptops and wifi for communication, I needed a network between the two computers. The SNC network caused many headaches because the firewall did not allow communication between the two laptops. I then went to an ad hoc network (a network that uses the laptops' wireless cards to communicate directly between the two laptops). I had many problems with this due to latency; there was too much data being sent between the laptops for everything to keep up. I then got a router from Dr. Pankratz and connected the two laptops through it. What a difference that made: everything was live, there was no latency, and everything worked as it should. My next problem was dealing with the Robot Operating System. I went through many tutorials learning how this operating system worked. There are many different components that have to work together for everything to run smoothly, and I didn't (and still don't) know how everything works. Since it is open source, the code can be changed by users. That was a problem for me: one time I updated the software, the new release had some bugs in it, and the robot no longer worked. I had to wait for a new release for this to be fixed; sometimes open source does have its drawbacks (however, it is very flexible). Finally, I had problems with weight balance. The laptop is a little heavy, and the robot itself was back-heavy, causing the cliff sensor to sometimes go off even with no cliff (stairs, etc.) present. This posed a problem. However, Dr. Pankratz found a fourth wheel for the back and a couple of bungee cords for the laptop, and everything worked as it should.
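For the network problem, the usual ROS 1 setup is that one laptop runs the master (roscore) and both machines point ROS_MASTER_URI at it (they also need resolvable hostnames or ROS_IP set so they can reach each other). The sketch below is a quick connectivity check added here for illustration, not something from the project; it relies on the fact that the ROS master answers XML-RPC calls such as getSystemState.

#!/usr/bin/env python
# Quick check (run on either laptop): can we reach the ROS master?
# A firewall blocking this port produces exactly the symptom described above.
import os
import xmlrpclib  # Python 2; on Python 3 use xmlrpc.client

master_uri = os.environ.get('ROS_MASTER_URI', 'http://localhost:11311')
print('Trying the master at %s ...' % master_uri)
try:
    master = xmlrpclib.ServerProxy(master_uri)
    code, status, state = master.getSystemState('/connectivity_check')
    topics = [topic for topic, _ in state[0]]
    print('Reachable: %d published topics visible.' % len(topics))
except Exception as e:
    print('Cannot reach the master (firewall? wrong ROS_MASTER_URI?): %s' % e)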

More Work
- Calibration of the gyroscope

I wasn't able to calibrate the gyroscope as should have been done, I think due to a bug in the software itself. Things are pretty accurate with the way the gyroscope is set up now; however, they would probably be a little more accurate with a calibration (the gyroscope aids the Turtlebot in placing itself on the map correctly).

Methodology
- Moving the robot using Sasha's project
- Assembling the Turtlebot
- ROS installation
- ROS tutorials
- Teleop/navigation
- Adaptation to new environments
- Documentation

When I first got this project, I was set on using the SURF algorithm (picture matching) with a webcam. The first thing I did was make sure I could move the robot. Sasha's project with the robot a couple of years ago dealt with driving the robot, so I used part of his project to make sure I could make the robot move. I had no idea how I was going to put together picture matching with moving the robot, and then I talked with Sasha; he suggested using the Robot Operating System with the Kinect sensor. So I talked with Dr. Pankratz, and he ordered a kit to convert the iRobot into a Turtlebot. The first task was assembling the Turtlebot. Next I installed Ubuntu onto two laptops, along with the Robot Operating System and the Turtlebot stacks (basically plugins for the operating system). There was a big learning curve in learning how the Robot Operating System worked, how it is modified, etc. I went through many different tutorials and eventually was able to move the robot with the keyboard. I was also having some problems with hardware, so there was both software and hardware debugging (I had never really had to debug hardware, so it was a challenge testing whether the hardware or software was at fault; it ended up being the software). After being able to make the robot move with the keyboard, I had to start creating maps so I could navigate. This was a lot of trial and error, as I didn't know how anything worked (from the user interface to starting up the robot) or how things were displayed on the user interface (the network problem definitely made this harder than it should have been). After successful navigation of an area, I had to make sure I could easily adapt this solution to a new environment (part of my requirements). The idea of creating a map and navigating that map lent itself well to this part of the project, and I had no problems creating new maps and navigating them. The final part of my project was creating documentation so the next person who works on this project doesn't have any of the problems I had; there is definitely a lot I have learned and needed to explain. Overall, everything went pretty well.
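While driving the robot around to build a map, it helps to watch the map grow. As a small illustrative sketch (not the project's code), the SLAM node publishes the standard nav_msgs/OccupancyGrid on the map topic, so a node can report how much of the area has been explored:

#!/usr/bin/env python
# Sketch: report the size and explored fraction of the map being built.
# Assumes the standard 'map' topic; cell values are -1 unknown, 0 free,
# 100 occupied.
import rospy
from nav_msgs.msg import OccupancyGrid

def on_map(msg):
    known = [c for c in msg.data if c >= 0]
    rospy.loginfo('map %dx%d at %.2f m/cell, %.0f%% explored',
                  msg.info.width, msg.info.height, msg.info.resolution,
                  100.0 * len(known) / len(msg.data))

rospy.init_node('map_check_sketch')
rospy.Subscriber('map', OccupancyGrid, on_map)
rospy.spin()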

Demo

Resources
- Sasha Popov
- Professors: Dr. Pankratz and Dr. McVey
- Classmates
- ROS.org: many tutorials/how-tos/answers
- Other online resources
- Trial and error

Sasha Popov helped me decide what to use for this project. After chatting with him about the requirements and his project, he definitely pointed me in the right direction. I would not have known about the Turtlebot if it wasn't for him. I also used his project when I first started to get the robot to move. He was very supportive in everything. Dr. Pankratz and Dr. McVey helped me out tremendously. Dr. Pankratz helped me with my long list of problems and helped me explore how the Turtlebot worked and what it was doing. Dr. McVey had many ideas about robotic navigation and how this could be extended. Thank you very much! In addition, I found many different online resources when dealing with this open source project. ROS.org was the biggest help; it provided tutorials on how to use the Robot Operating System and the different components that make it up. Other online resources also provided help, troubleshooting, and best practices when it came to using the Robot Operating System and the Turtlebot. In addition, there was a lot of trial and error of the "what would happen if I did this?" variety. I learned quite a bit through trial and error.

Knowledge
- Programming Languages (CS 322): Turtlebot software written in Python
- Analysis of Algorithms (CS 321): navigation
- Operating Systems (CS 370): Linux and ROS
- All classes

I used a variety of concepts from my courses here at SNC for this project. I used Programming Languages when dealing with the Turtlebot code. The Turtlebot stacks used in the navigation are all written in Python. I had never seen Python before, so I used the concepts learned in Programming Languages to investigate the code and learn the basic outline of how it worked and what it was doing. Analysis of Algorithms was used a bit when dealing with the algorithm used for navigation (for instance, avoiding obstacles and trying a different path). Operating Systems was used in running the Linux operating system, Ubuntu, and terminal windows. In addition, Operating Systems helped in dealing with the Robot Operating System itself and how its different parts work together. Overall, all classes were utilized in some aspect of problem solving and the process of completing the project.

Extensions
- Voice activation
- Other Turtlebot applications
- Object avoidance
- Mobile device operation
- Turtlebot arm

The Robot Operating System is very flexible and allows for much expansion; new stacks and functionality for the Turtlebot are open ended. Some ideas include adding voice activation to the Turtlebot navigation, or experimenting with other Turtlebot applications like the panorama creator or the follower application, in which the Turtlebot follows an object. Better object avoidance could be worked on, including finding a way to look at shorter objects. There are applications to control the Turtlebot and use navigation from a smartphone, which would be cool. In addition, the Turtlebot has an arm extension, as shown, which may be interesting to look at as well. There are many different possibilities.

Advice for Upcoming Seniors
- Start early
- Do little bits
- Weekly meetings are very helpful
- Document as you go
- When stuck, ask for help
- Enjoy it!!

Make sure to start your project early; no procrastination. You will hit an unexpected roadblock or problem, and the earlier you start, the better your project will go. Don't try to do everything at once; do little chunks at a time, get that section working, then move on. Weekly meetings with Dr. Pankratz and/or Dr. McVey are VERY helpful. Both professors have many ideas and want to see you succeed; you just need to ask for their help. Document things as you go, so you are not stuck at the end with documentation in which you don't remember what you did or how things work. It saves a lot of time at the end if you document right away. When stuck, do not get discouraged; ask for help. You will feel a lot better about your project when progress is made, so don't mull around when stuck. Chances are someone will have ideas, and the project will be much more successful. Finally, enjoy your time working on the project. It really is a lot of fun, and there is a lot of learning involved. You will be proud of what you can accomplish, so enjoy your time working on the project.

Questions???
Any Questions???

