
Table of Contents

    CH#1: INTRODUCTION.........09

    1-1. INTRODUCTION .............................................................................................................10

    1-2. PROBLEM STATEMENT.................................................................................................10

    1-3. METHODOLOGY............................................................................................................11

    1-3.1. HARDWARE DEVELOPMENT................................................................................11

    1-3.2. SOFTWARE DEVELOPMENT..................................................................................11

    1-4. THESIS ORGANIZATION ..............................................................................................13

    CH#2 : LITERATURE STUDY....................................................................................................14

    2-1. MANIPULATORS .............................................................................................................15

    2-2. WHAT IS A ROBOT?........................................................................................................16

    2-2.1. CLASSIFICATION OF ROBOTS..............................................................................16

2-2.2. ROBOT COMPONENTS .............................................................................................16
2-2.2.1. Manipulator or Rover.......................................................................................17
2-2.2.2. End Effecter......................................................................................................17
2-2.2.3. Actuators ...........................................................................................................17

2-2.2.4. Sensors .............................................................................................................17
2-2.2.5. Controller...........................................................................................................17
2-2.2.6. Processor..........................................................................................................18
2-2.2.7. Software ............................................................................................................18

2-2.3. ROBOT COORDINATES............................................................................................18
2-2.3.1. Cartesian/Rectangular/Gantry (3P)...............................................................18
2-2.3.2. Cylindrical (R2P) ..............................................................................................18
2-2.3.3. Spherical (2RP)................................................................................................18
2-2.3.4. Articulated/Anthropomorphic (3R) .................................................................18
2-2.3.5. Selective Compliance Assembly Robot Arm (SCARA) .............................19

    2-2.4. ROBOT REFERENCE FRAMES................................................................................19

2-2.4.1. World Reference Frame .................................................................................19
2-2.4.2. Joint Reference Frame ...................................................................................20
2-2.4.3. Tool Reference Frame ....................................................................................20

2-2.5. ROBOT CHARACTERISTICS...................................................................................20
2-2.5.1. Payload .............................................................................................................20
2-2.5.2. Reach ................................................................................................................21
2-2.5.3. Precision ...........................................................................................................21
2-2.5.4. Repeatability (Variability)................................................................................21
2-2.5.5. Robot Workspace ............................................................................................21
2-2.5.6. Volume And Shape Of The Workspace .......................................................21

    2-2.6. ROBOT DEGREES OF FREEDOM ..........................................................................22

2-2.7. ROBOT JOINTS ........................................................................................................24

2-2.8. PROGRAMMING MODES.....................................................................................24
2-2.8.1. Physical Setup .................................................................................................24
2-2.8.2. Lead Through or Teach Mode .......................................................................24
2-2.8.3. Continuous Walk-Through Mode ..................................................................24
2-2.8.4. Software Mode .................................................................................................25

2-2.9. ROBOT LANGUAGES.............................................................................................25
2-2.9.1. Microcomputer Machine Language Level....................................................25
2-2.9.2. Point-to-Point....................................................................................................26
2-2.9.3. Primitive Motion Level.....................................................................................26

2-2.9.4. Structured Programming Level......................................................................26
2-2.9.5. Task-Oriented Level ........................................................................................26


    2-3. ROBOT KINEMATICS & POSITION ANALYSIS .........................................................26

2-3.1. MATRIX REPRESENTATION ....................................................................................27
2-3.1.1. Representation of a Point in Space ..............................................................27
2-3.1.2. Representation of a Vector in Space............................................................27
2-3.1.3. Representation of a Frame at the Origin of a Fixed-Reference Frame ..28
2-3.1.4. Representation of a Frame in a Fixed Reference Frame..........................28

2-3.1.5. Representation of a Rigid Body ....................................................................29
2-4. HOMOGENEOUS TRANSFORMATION MATRICES..................................................31

2-4.1. REPRESENTATION OF TRANSFORMATIONS ......................................................31
2-4.1.1. Representation of a Pure Translation...........................................................31
2-4.1.2. Representation Of Pure Rotation About An Axis........................................32
2-4.1.3. Representation Of Combined Transformations ..........................................34
2-4.1.4. Transformations Relative To The Rotating Frame .....................................35

2-4.2. INVERSE OF TRANSFORMATION MATRICES.....................................................35
2-5. VISION SYSTEMS ............................................................................................................37

    2-5.1. IMAGE ACQUISITION ............................................................................................37

    2-5.2. IMAGE PROCESSING .............................................................................................38

    2-5.3. IMAGE ANALYSIS....................................................................................................38

2-5.4. IMAGE UNDERSTANDING .....................................................................................38
2-6. WHAT IS AN IMAGE.......................................................................................................38

    2-6.1. TWO-AND THREE-DIMENSIONAL IMAGES........................................................39

2-6.2. ACQUISITION OF IMAGES....................................................................................39
2-6.2.1. Vidicon Camera ...............................................................................................39
2-6.2.2. Digital Camera .................................................................................................41
2-6.2.3. Analog-to-Digital Conversion.........................................................................41
2-6.2.4. Pixels .................................................................................................................42
2-6.2.5. Digital Images...................................................................................................42

2-6.3. IMAGE PROCESSING .............................................................................................43
2-6.4. IMAGE-PROCESSING TECHNIQUES ...................................................................43

2-6.4.1. Preprocessing ..................................................................................................43
2-6.4.2. Lighting..............................................................................................................44
2-6.4.3. Frequency Content Of An Image ..................................................................44
2-6.4.4. Windowing ........................................................................................................45
2-6.4.5. Sampling And Quantization ...........................................................................45
2-6.4.6. Sampling Theorem ..........................................................................................46
2-6.4.7. Histogram Of Images ......................................................................................49
2-6.4.8. Histogram Flattening .......................................................................................49
2-6.4.9. Thresholding.....................................................................................................50

2-6.4.10. Connectivity ......................................................................................................50
2-6.4.11. Neighborhood Averaging................................................................................52
2-6.4.12. Image Averaging..............................................................................................53
2-6.4.13. Median Filters...................................................................................................54

2-6.5. IMAGE ANALYSIS....................................................................................................54
2-6.5.1. Object Recognition By Features ...................................................................55
2-6.5.2. Basic Features Used for Object Identification.............................................56
2-6.5.3. Binary Morphology Operations ......................................................................56
2-6.5.4. Thickening Operation ......................................................................................57
2-6.5.5. Dilation...............................................................................................................57

2-6.5.6. Erosion ..............................................................................................................57
2-6.5.7. Skeletonization.................................................................................................58


2-6.5.8. Open Operation ...............................................................................................59
2-6.5.9. Close Operation ...............................................................................................59
2-6.5.10. Fill Operation ....................................................................................................59
2-6.5.11. Edge Detection.................................................................................................59
2-6.5.12. Roberts Cross-Operator..................................................................................61

    2-6.6. IMAGE UNDERSTANDING .....................................................................................62

2-6.6.1. Template Matching ..........................................................................................62
2-6.6.2. Other Techniques ............................................................................................63
2-6.6.3. Limitations And Improvements ......................................................................64
2-6.6.4. Detecting Motion ..............................................................................................65

    2-7. MATLAB & DIGITAL IMAGE REPRESENTATION....................................................65

    COORDINATE CONVENTIONS IN MATLAB ......................................................................66

IMAGES AS MATRICES ..........................................................................................................66
2-8. READING IMAGES .........................................................................................................67

    2-9. DISPLAYING IMAGES ....................................................................................................68

    2-10. WRITING IMAGES ..........................................................................................................69

    2-11. DATA CLASSES ................................................................................................................72

    2-12. IMAGE TYPES..................................................................................................................73

    2-12.1. INTENSITY IMAGES..............................................................................................73

2-12.2. BINARY IMAGES..................................................................................................73
2-13. CONVERTING BETWEEN DATA CLASSES AND IMAGE TYPES ...........................74

    2-13.1. CONVERTING BETWEEN DATA CLASSES......................................................74

2-13.2. CONVERTING BETWEEN IMAGE CLASSES AND TYPES..............................74
2-14. FLOW CONTROL.............................................................................................................76

    2-14.1. IF, ELSE, AND ELSEIF ...........................................................................................76

    2-14.2. FOR LOOP............................................................................................................78

    2-14.3. WHILE ....................................................................................................................79

2-14.4. BREAK....................................................................................................................79
2-14.5. CONTINUE............................................................................................................79

2-14.6. SWITCH .................................................................................................................79
2-15. CODE OPTIMIZATION ..................................................................................................80

    2-15.1. VECTORIZING LOOPS........................................................................................80

2-15.2. PREALLOCATING ARRAYS...............................................................................84
2-16. EDGE DETECTION METHODS ...................................................................................84

    2-16.1. DESCRIPTION.......................................................................................................85

    2-16.2. SOBEL METHOD ..................................................................................................85

    2-16.3. PREWITT METHOD ...............................................................................................86

2-16.4. ROBERTS METHOD..............................................................................................86
2-16.5. LAPLACIAN OF GAUSSIAN METHOD ............................................................86

    2-16.6. ZERO-CROSS METHOD......................................................................................86

    2-16.7. CANNY METHOD................................................................................................86

2-16.8. CLASS SUPPORT..................................................................................................87
2-17. PERFORMING MORPHOLOGICAL OPERATIONS ON IMAGES ............................87

2-17.1. SYNTAX & DESCRIPTION ..................................................................................87

2-17.2. CLASS SUPPORT..................................................................................................89
2-18. CAPTURING IMAGE FROM WEBCAM USING MATLAB ..........................................90

    2-18.1. SYNTAX .................................................................................................................91

2-19. REFERENCES .................................................................................................................91


CH#3: OUR APPROACH.............................................................................................................93

    3-1. OUR APPROACH .............................................................................................................94

3-1.1. SELECTION OF ROBOT..........................................................................................94
3-1.1.1. Our Selected Robot.........................................................................................95

3-1.2. INTERFACING OF ROBOT.....................................................................................95
3-1.2.1. PROGRAM .......................................................................................................95

3-1.3. SOFTWARE DEVELOPMENT..................................................................................98
3-1.4. TESTING AND TROUBLESHOOTING .....................................................................98

3-1.4.1. During Hardware Development: ....................................................................98
3-1.4.2. During Software Implementation:..................................................................98

CH#4: HARDWARE COMPONENTS........................................................................................99

    4-1. INTRODUCTION ...........................................................................................................100

    4-2. HARDWARE BUILDING BLOCKS ..............................................................................100

    4-2.1. DUAL POLARITY POWER SUPPLY ......................................................................100

4-2.2. INTERFACING CIRCUITRY OF ROBOT...............................................................101

4-2.3. PARALLEL PORT....................................................................................................101
4-2.4. OPTOCOUPLER.....................................................................................................102

    4-2.5. TRANSISTOR ...........................................................................................................103

4-2.6. OVERALL CIRCUIT SCHEME.......................................................................................103
4-3. ROBOTIC ARM TRAINER (OWI-007) ..........................................................................104

    4-3.1. PRODUCT INFORMATION...................................................................................104

4-3.2. SPECIFICATIONS...................................................................................................104
4-3.2.1. Five Axis Of Motion .......................................................................................104
4-3.2.2. Product Dimensions ......................................................................................104
4-3.2.3. Power Source.................................................................................................104

    4-4. ROBOT EYE ...................................................................................................................105

4-5. CPU REQUIREMENTS...................................................................................................105

4-6. REFERENCES ..................................................................................................................105

Table of Figures

    FIGURE 1-1: Our machine vision, robot system...........................................................................12

    FIGURE 2-1: A Fanuc M-410iww palletizing robotic Arm..........................................................17

    FIGURE 2-2: Robot types..............................................................................................................19

FIGURE 2-3: Some possible robot coordinate frames..................................................................19

FIGURE 2-4: A robot's reference frame........................................................................................20

    FIGURE 2-5: Workspaces for common robot configurations. ......................................................22

FIGURE 2-6: A Fanuc P-15 robot. ................................................................................................23
FIGURE 2-7: Representation of a point in space...........................................................................27

    FIGURE 2-8: Representation of a vector in space.........................................................................28

    FIGURE 2-9: Representation of a frame in a frame ......................................................................29

    FIGURE 2-10: Representation of a rigid body in a frame .............................................................29

    FIGURE 2-11: Representation of a pure Translation in space.......................................................32

    FIGURE 2-12: Coordinates of a point relative to the reference frame and rotating frame............33

    FIGURE 2-13: The Universe, robot, hand, part, and end effecter frames .....................................35

    FIGURE 2-14: Gray intensity creation in printed images..............................................................39

FIGURE 2-15: Schematic of a vidicon camera..............................................................................40

    FIGURE 2-16: Raster scan depiction of a vidicon camera. ...........................................................40

FIGURE 2-17: (a) Image data collection model. (b) The CCD element of a VHS. ......................41
FIGURE 2-18: Sampling a Video Signal. ......................................................................................42

    FIGURE 2-19: Sample/Hold Amplifiers........................................................................................42

    FIGURE 2-20: Noise and edge information in an image. ..............................................................44

FIGURE 2-21: Reconstruction of a discrete signal........................................................................45

    FIGURE 2-22: Different sampling rates for an image...................................................................46

FIGURE 2-23: An image quantized at 2, 4, 8 and 44 gray levels. .................................................46

    FIGURE 2-24: A low resolution (16x16) image............................................................................47

    FIGURE 2-25: Sinusoidal signal with a frequency of fs ...............................................................47

    FIGURE 2-26: Reconstruction of signals from sampled data........................................................47

    FIGURE 2-27: Sampling rate comparison of two signals..............................................................48

FIGURE 2-28: The image of Figure 2-24 presented at higher resolution......................................48
FIGURE 2-29: Effect of histogram equalization. ..........................................................................49

FIGURE 2-30: Histogram Flattening to improve detail. ................................................................50

    FIGURE 2-32: Neighborhood connectivity of pixels ....................................................................51

    FIGURE 2-33: The image for Example 2.3 ...................................................................................52

    FIGURE 2-34: The results of the connectivity searches for Example 2.3.....................................52

    FIGURE 2-35: Neighborhood averaging of an image. ..................................................................52

    FIGURE 2-36: Neighborhood averaging mask..............................................................................52

    FIGURE 2-37: Neighborhood averaging of image. .......................................................................53

    FIGURE 2-38: 5x5 and 3x3 Gaussian averaging Filter. ................................................................53

    FIGURE 2-39: Improvement of corrupted image with 7X7 median filter. ..................................54

    FIGURE 2-40: Consult [21] for more details.................................................................................55

    FIGURE 2-41: Image analysis scheme. .........................................................................................55

    FIGURE 2-42: Aspect ratio of an object........................................................................................56

    FIGURE 2-43: Binary image of a bolt and its skeleton. ................................................................57

    FIGURE 2-44: Removal of threads by thickening operation.........................................................57

    FIGURE 2-45: Effect of dilation operations. .................................................................................57

    FIGURE 2-46: Effect of erosion on objects (a) with (b) 3 and (c) 7 repetitions. ..........................58

    FIGURE 2-47: Skeletonization of an image without thickening. ..................................................58

    FIGURE 2-48: The skeleton of the object after thickening. ..........................................................59

FIGURE 2-49: Result of a fill operation. .......................................................................................59

FIGURE 2-50: Edge detection algorithm.......................................................................................60
FIGURE 2-51: Flow chart for edge detection. ...............................................................................61


    FIGURE 2-52: The Roberts cross-operator. ..................................................................................61

FIGURE 2-53: Types of Vertexes That Form Junctions Between Objects in a Scene ..................64

    FIGURE 2-54: Other Types of Vertexes........................................................................................64

    FIGURE 2-55: Coordinate convention. .........................................................................................66

FIGURE 2-56: Result of scaling by using imshow(h, [ ]) .............................................................69

    FIGURE 2-57: Effect of storing image at different quality values................................................71

TABLE 2-1: Data classes ...............................................................................................................73
TABLE 2-2: Functions in IPT for converting between image classes and types ..........................75

    TABLE 2-3: Flow control statements ............................................................................................76

    FIGURE 2-58: Sinusoidal image generated in previous example. ................................................83

    FIGURE 2-59: Edge detection methods.........................................................................................87

    TABLE 2-4: Morphological operations .........................................................................................88

    FIGURE 2-60: Before morphological operation............................................................................89

    FIGURE 2-61: After morphological operation. .............................................................................89

    FIGURE 2-62: Skeletonization. .....................................................................................................90

    FIGURE 2-63: Screen shot of capturing window. .........................................................................90

    FIGURE 3-1: Arm Trainer Model: OWI-007. ...............................................................................95

FIGURE 4-1: Circuit diagram of the dual power supply.............................................................100
FIGURE 4-2: Actual view of our power supply. .........................................................................101

    FIGURE 4-3: Pin configuration of the parallel port. ...................................................................101

    TABLE 4-1: Detailed description of parallel port. ......................................................................102

    FIGURE 4-4: View of the optocouplers. .....................................................................................102

    FIGURE 4-5: Component view of the C1383C. ..........................................................................103

    FIGURE 4-6: Actual view of our PCB. .......................................................................................103

    FIGURE 4-7: Overall view of our actual circuit scheme.............................................................104

    FIGURE 4-8: View of the robot arm trainer model: OWI-007....................................................105


To our Parents and Teachers, whose love and guidance have been instrumental and the driving force behind our effort.

Acknowledgements

IN THE NAME OF ALLAH, THE MOST GRACIOUS, THE MOST MERCIFUL

The first and foremost thanks to Allah, the Almighty, for the shower of blessings that He has enriched us with. It is by the Creator's will that we succeed, and nothing can be done if Allah's will is not there. May our Lord give us the courage to be good Muslims and engineers. Amen!

Thanks to the human ingenuity and curiosity that have enabled us to decipher the code of natural entities and to study and develop new things beyond any boundary, surpassing any barrier coming in our way. Basically, what we would like to say is that we thank all the scientists, researchers, teachers and writers courtesy of whom we are writing this thesis.

A heartiest salute and warmest expression of gratitude to Mr. Khalid Mahmood Arif and Mr. Samsoon Inayat, our project advisers, for being the benign guardians of the project. They have been very persuasive in making us do the work in the best way and have been a guiding presence through the whole endeavor. Teachers are like spiritual parents, and their interaction makes us idealize them. We have idealized Mr. Samsoon and thank him for his support even in absence, which has been fuel for our motivation.

The authors would like to thank Mr. Ali Raza for letting us use his lab and its equipment for our work; it has been instrumental in helping us complete this project. Throughout its duration he has kept encouraging us, which has helped us fight off desperation.

We would like to thank our dear compatriot engineers for their help and support, especially Mr. Mohammad Ayyub and Mr. Ahmed Ali Qadeer.

Last but not least, we would like to thank our parents for providing for our daily needs by the grace of God. Their love has been instrumental in providing us the courage and the sole inspiration for doing this work; it has kept us feeling positive.

Chapter 1: Introduction


    1-1. INTRODUCTION

Man has always been fascinated by his five basic senses: sight, hearing, smell, taste and touch. The understanding of these functions has gone through many a mile of development as we have tried to comprehend and emulate them. The whole theme behind this effort is to provide more leisure time to humans. It is for this reason that man has tried to develop these qualities in machines, and most of the work has been done in the fields of sight, hearing and touch. We have sensors to mimic touch, microphones and speech-recognition software to imitate hearing, and cameras and machine vision systems to develop sight.

Scientists have always been fascinated by the human thinking and reasoning process. But, as we know, the human brain does more than pure thinking and reasoning: it is the control interpreter for all of our senses. The brain must have inputs about its surroundings to control the body, and our senses must provide those inputs. In short, we want to develop a machine that sees, hears and feels like a human, but never tires. It is smart enough to deal with changing scenarios. It may work in a hazardous environment and take decisions and make inferences like a human. This is very important in saving human lives and sparing the suffering of people who become ill working in such environments. It is with this desire that people have worked in this field, and it is the same passion that we earnestly pursue in developing machine vision. Whatever work we have done may not have much significance in the vast scientific world, as people may already have achieved what we have done here. But it is our belief that every beginning is a new beginning; in the midst of the whole enclave there may be something new which we may achieve.

It is very difficult to explain in true spirit what we see, why we see it and how we comprehend it. The human brain is a complex circuit of neurons. Scientists have worked extensively on the procedures going on behind the scenes inside the brain, yet we are still perplexed by how a complex, entangled bunch of neurons is able to think so diversely. The understanding that has led us to develop the microprocessor uses the same logic of reasoning as a human. Vision is nothing more than mental reasoning and understanding of the surroundings. In humans the understanding is done by the brain, whereas in computers it is done by a computer program, which is in turn created by a human.

1-2. PROBLEM STATEMENT

In order to have a critical appreciation of the challenging task we had selected to pursue, the logical methodology we adopted was to approach it by devising certain aims and objectives and accomplishing the mission with the highest degree of compactness, completeness and correctness. It is our intrinsic desire that everything be done with the highest degree of perspicacity and perspicuity.

The problem at hand is as follows.

There is a conveyor belt on which products of different shapes arrive. This conveyor belt is located in a hazardous environment. There are two collection boxes in which the products are to be placed and sealed before being brought out. The products' material and other specifications are similar; therefore they cannot be differentiated by


conventional methods; however, if a robotic vision system is developed, the products can be distinguished.

Machine vision is a very complex field, and there is no boundary as to what one can achieve. But at the graduate level we had to limit ourselves to a great extent due to time constraints and the level of our expertise. It is for that reason that we have limited ourselves to the two-dimensional view, though we plan in the future to go into the third dimension.

1-3. METHODOLOGY

People have used various methods to emulate vision. For this they have used languages such as Visual C++, Visual Basic and Java. But these languages are very specific and need proper training and intensive study; even then, people do not master these languages, let alone use them for vision systems. Our aim was to use a programming environment that is simpler than these languages and is also fast enough. It is for this reason that we have chosen MATLAB.

Some people may argue that MATLAB is not fast enough to run vision systems in real time. Although we agree with this to some extent, we have managed to create certain ideal conditions that have enabled us to use MATLAB for vision and to obtain robot movement comparable to that of a robot using Visual C in its background, though we must concede that it is not yet as fast. In view of such difficulties, we decided to follow a modular approach for the completion of our project. The first step was the study of the basics of robot design; using that knowledge we defined our design parameters and, keeping market availability and cost factors in mind, selected a robotic arm for our project. For simplicity, and due to the enormity of this field, we have limited ourselves to two dimensions. For vision we have used a PC camera which connects to the computer through a USB cable. The camera produces images of 160x120 pixels at a refresh rate of 30 frames per second, which allows it to capture images in real time.

    1-3.1. HARDWARE DEVELOPMENT

The next step was to develop an interface between the computer and the robot. We selected the parallel port to communicate with the robot, and a special circuit was designed using a methodology which will be discussed in detail in Chapter 3.
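For illustration, a minimal sketch of writing a bit pattern to the parallel port from MATLAB is given below. It assumes the legacy Data Acquisition Toolbox (digital I/O on LPT1); the particular bit pattern that energizes a given motor is hypothetical and depends on the driving circuitry described in Chapter 4.

% A minimal sketch: drive the robot's motor lines through the parallel port.
% Assumes the legacy Data Acquisition Toolbox; the bit pattern shown is a
% hypothetical example and depends on the actual interfacing circuit.
dio = digitalio('parallel', 'LPT1');     % open the parallel port
addline(dio, 0:7, 'out');                % configure data pins D0-D7 as outputs
putvalue(dio, [1 0 0 0 0 0 0 0]);        % energize the line wired to one motor driver
pause(0.5);                              % let the joint move briefly
putvalue(dio, zeros(1, 8));              % stop all motors
delete(dio);                             % release the port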

    1-3.2. SOFTWARE DEVELOPMENT

Programming in MATLAB is similar to programming in C, yet it uses simple English words to carry out complex tasks. What we have done here is to emulate the process of edge detection for image understanding.

We used a simple white background (again to keep things simple at the start), placed our robot in front of that white background, and placed the camera in front of the robot to give two-dimensional image feedback to the computer. To understand the image, we have written algorithms to reduce the different links of the robot to straight lines. We may add here that


this process of converting a real-time image into a line representation goes through many stages.

After obtaining the line image, the algorithms calculate the angles between the different lines. These angles help us to position the robot. The line representation is stored in the computer's memory, so that the computer knows the difference between the robot and the rest of the image. If any other object is then placed in the environment, the computer is able to differentiate between the object and the robot.
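A minimal sketch of this image-to-lines step is given below, assuming the Image Processing Toolbox; the Canny edge detector and Hough transform are used here only for illustration, and the thresholds and number of extracted lines are illustrative rather than the values used in our actual program (Chapter 3).

% A minimal sketch: edge detection, straight-line extraction and link angles.
% Assumes the Image Processing Toolbox; parameter values are illustrative.
gray  = rgb2gray(frame);                  % frame grabbed from the PC camera
bw    = edge(gray, 'canny');              % binary edge map of the robot links
[H, theta, rho] = hough(bw);              % Hough transform of the edge map
peaks = houghpeaks(H, 4);                 % four strongest candidate lines
lines = houghlines(bw, theta, rho, peaks, 'FillGap', 5, 'MinLength', 20);

ang = zeros(1, numel(lines));             % orientation of each segment (degrees)
for k = 1:numel(lines)
    d      = lines(k).point2 - lines(k).point1;
    ang(k) = atan2(d(2), d(1)) * 180 / pi;
end
jointAngles = diff(ang);                  % relative angles between successive links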

In the future, the computer may be made to learn objects of different shapes, differentiate them from the robot, and make the robot pick the exact object even if many objects are present in the environment. When an object is detected, the software tells the robot to move its extreme point, that is, the gripper, towards the object. Meanwhile there is continuous feedback from the camera to the computer. The camera image and the angles between the different links guide the movements of the robot until its extreme point reaches the object and the gripper comes in line with the center point of the object. Once this happens, a command is given to the gripper to grip the object.

The robot then moves the object to another position within the two-dimensional frame according to previously supplied target coordinates. The arm then moves down towards the floor; when it touches the floor the movement stops, the gripper loosens, and the product is placed on the floor. Then the cycle starts again.
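The whole cycle can be summarized in pseudo-MATLAB as below; every helper named in the sketch (captureFrame, findObjectCentroid, gripperTip, stepTowards, gripperCommand) and the constants TOL and DROP_POINT are hypothetical placeholders for the routines developed in Chapter 3, not functions of any toolbox.

% Pseudo-MATLAB outline of the pick-and-place cycle; all helpers and the
% constants TOL and DROP_POINT are hypothetical placeholders.
while true
    img    = captureFrame();                   % continuous camera feedback
    target = findObjectCentroid(img);          % center point of a detected object
    if isempty(target), continue; end
    while norm(gripperTip(img) - target) > TOL % servo until the gripper lines up
        stepTowards(target);                   % small joint move via the parallel port
        img = captureFrame();                  % re-acquire the scene
    end
    gripperCommand('close');                   % grip the object
    stepTowards(DROP_POINT);                   % move to the supplied target coordinates
    gripperCommand('open');                    % release on the floor, then repeat
end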

The overview of the whole system is given below. It is quite evident from the diagram that the camera is the eye of the robot, the computer is its brain, and the effector is the robotic arm itself.

    FIGURE 1-1: Our machine vision, robot system.


1-4. THESIS ORGANIZATION

This project is one of the few addressing machine vision systems for robots. Contrary to other projects, which emphasize machine vision for typical applications, it addresses the more basic applications of machine vision that are frequently overlooked in traditional study projects, such as the study of a machine vision system with a computer-camera interface using Visual C, as well as some specific issues of critical importance for optimizing the processing speed of the processor when interfaced with a camera. It is intended to help readers, practicing and/or future engineers, deal with the specifics of robot vision systems. Since the project is addressed, first of all, to mechatronics engineers, it covers the basics of robotic systems with the purpose of developing an understanding of specific phenomena in such systems and the specific requirements for their vision.

The contents of the thesis are organized accordingly. A short Chapter 1 gives the basic introduction, the problem statement and the methodology adopted; a critical overview of the whole thesis is also given. A complete literature study, along with some related work, resides in Chapter 2. It covers different robot parameters with some details of the kinematic and dynamic analysis of robot movements. Next comes the vision system, with details of image acquisition, image processing, image analysis and image understanding, each using more than one technique. In the next part of that chapter we introduce the implementation of the above-described techniques using MATLAB; we have consulted the MathWorks toolbox documentation for the different commands used.

The following chapters, that is, Chapters 3, 4 and 5, all relate to our practical work. Chapter 3 provides the complete description of our methodology, including all computer programs developed in C++ and in MATLAB. Chapter 4 contains the complete description of the hardware used in the project, including the power supply, the driving circuitry, the robot specifications, etc. The fifth chapter presents the experimental results along with some troubleshooting issues and their possible solutions. The last chapter contains the conclusions and some references. This is the scheme that we will follow in the succeeding chapters.

The authors hope that not only students and established robot vision system designers but also designers of general-purpose machinery will benefit from the logical analysis of a typical manipulator vision system and from an infusion of some high-technology information on critical vision components.

Chapter 2: Literature Study


2-1. MANIPULATORS

An industrial robot is defined by the U.S. Robotic Industries Association (RIA) as a "reprogrammable multifunctional manipulator designed to move material, parts, tools or specialized devices through variable programmed motions for the performance of a variety of tasks." Similar definitions are adopted by the British Robot Association, the Japanese Robot Association, etc. [13]

There are several more or less clearly distinguished generations of industrial robots. The first-generation robots are fixed-sequence robots, which can repeat a sequence of operations once they have been programmed to do so; to carry out a different job, they have to be reprogrammed, often by "training" or "education."

The second-generation robots are equipped with sensory devices which allow a robot to act in a not-completely-defined environment, e.g. to pick up a part that is misplaced from its ideal position, pick up a needed part from a batch of mixed parts, recognize the need to switch from one succession of motions to another, etc.

The third-generation robots, which are emerging now, have the intelligence to make decisions, such as those necessary in assembly operations (assembling a proper combination of parts, rejecting faulty parts, selecting necessary combinations of tolerances, etc.). Robots of the first and the so-called "1.5" generation, with some sensing devices, constitute the overwhelming majority of robots now in use and in production.

However, regardless of the generation, industrial robots are built of three basic systems:

The "mechanical structure," consisting of mechanical linkages and joints capable of various movements. Additional movements are made possible by end effectors fitted at the arm end.

The "control system," which can be of "fixed" or "servo" type. Robots with fixed control systems have fixed (but possibly adjustable) mechanical stops, limit switches, etc., for positioning and informing the controller. Servo-controlled robots can be either point-to-point (PTP) controlled, where only specified point coordinates are under control and not the path between them, or continuous-path (CP) controlled, thus achieving a smooth transition between the critical points.

The "power units," which can be hydraulic, pneumatic, electrical, or a combination of these, with or without mechanical transmissions.

If we consider a human being as a manipulator, it would be a very effective and efficient one. With a total mass of 68 to 90 kg (150 to 200 lb) and a "linkage" (lower and upper arm and wrist) mass of 4.5 to 9.0 kg (10 to 20 lb), this manipulator can precisely handle, at rather high speed, loads up to 4.5 to 9.0 kg (10 to 20 lb); at slightly lower speeds it can handle loads up to 15 to 25 kg (30 to 50 lb), or about one-fifth to one-quarter of its overall mass, far exceeding the "linkage" mass; and it can make simple movements with loads exceeding its overall mass, up to 90 to 135 kg (200 to 300 lb), and in the case of trained athletes, much more. On the other hand, industrial robots have payload limitations (and, in this case, the payload includes the mass of the gripper or end effector) which amount to one-twentieth to one-fiftieth of their total mass, more than 10 times less effective than a human being. Moreover, such massive structures cannot move with the required speeds [13]. It was found that human operators can handle loads up to 1.5 kg (3 lb) faster than existing robots, in the 1.5 to 9 kg (3 to 20 lb) range they are very competitive, and only above 9 kg (20 lb) are robots technically more capable.

If the mass of end effectors or grippers is considered, which the human operator has built in but which consumes up to half of the maximum payload mass in robots, then one can come to the conclusion that robots with maximum rated loads below 3 kg (6 lb) are mechanically inferior to


human operators, in the 3 to 20 kg (6 to 40 lb) range they are comparable, and only at higher loads are they superior.

2-2. WHAT IS A ROBOT?

If you compare a conventional robotic manipulator with a crane attached to, say, a utility or towing vehicle, you will notice that the robot manipulator is very similar to the crane. Both possess a number of links attached serially to each other with joints, where each joint can be moved by some type of actuator. In both systems, the "hand" of the manipulator can be moved in space and placed in any desired location within the workspace of the system, each one can carry a certain amount of load, and each one is controlled by a central controller which controls the actuators. However, one is called a robot and the other a manipulator (or, in this case, a crane). The fundamental difference between the two is that the crane is controlled by a human who operates and controls the actuators, whereas the robot manipulator is controlled by a computer that runs a program. This difference determines whether a device is a simple manipulator or a robot. In general, robots are designed, and meant, to be controlled by a computer or similar device. The motions of the robot are controlled through a controller that is under the supervision of the computer, which, itself, is running some type of program. Thus, if the program is changed, the actions of the robot will change accordingly. The intention is to have a device that can perform many different tasks, and thus is very flexible in what it can do, without having to redesign the device. The robot is therefore designed to be able to perform any task that can be programmed (within limits, of course) simply by changing the program. The simple manipulator (or the crane) cannot do this without an operator running it all the time [1].

Different countries have different standards for what they consider to be a robot. By American standards, a device must be easily reprogrammable to be considered a robot. Thus, manual-handling devices (i.e., a device that has multiple degrees of freedom and is actuated by an operator) or fixed-sequence robots (i.e., any device controlled by hard stops to control actuator motions on a fixed sequence and difficult to change) are not considered to be robots.

2-2.1. CLASSIFICATION OF ROBOTS

The following is the classification of robots according to the Japanese Industrial Robot

    Association (JIRA):

    Class 1: Manual-Handling Device: A device with multiple degrees of freedom that is

    actuated by an operator.

Class 2: Fixed-Sequence Robot: A device that performs the successive stages of a task according to a predetermined, unchanging method and is hard to modify.

    Class 3: Variable-Sequence Robot: Same as class 2, but easy to modify.

Class 4: Playback Robot: A human operator performs the task manually by leading the robot, which records the motions for later playback. The robot repeats the same motion according

    to the recorded information.

Class 5: Numerical Control Robot: The operator supplies the robot with a movement

    program rather than teaching it the task manually.

    Class 6: Intelligent Robot: A robot with the means to understand its environment and

    the ability to successfully complete a task despite changes in the surrounding conditions under

    which it is to be performed [1].

2-2.2. ROBOT COMPONENTS

A robot, as a system, consists of the following elements, which are integrated together to form a whole:


    2-2.2.1. Manipulator or Rover

This is the main body of the robot and consists of the links, the joints, and other structural elements of the robot. Without other elements, the manipulator alone is not a robot (Figure 2.1).

    2-2.2.2. End Effecter

This is the part that is connected to the last joint (hand) of a manipulator, which generally handles objects, makes connections to other machines, or performs the required tasks (Figure 2.1). Robot manufacturers generally do not design or sell end effectors; in most cases, all they supply is a simple gripper. Generally, the hand of a robot has provisions for connecting specialty end effectors that are specifically designed for a purpose. It is the job of a company's engineers or outside consultants to design and install the end effecter on the robot and to make it work for the given situation. A welding torch, a paint spray gun, a glue-laying device, and a parts handler are but a few of the possibilities. In most cases, the action of the end effecter is either controlled by the robot's controller, or the controller communicates with the end effecter's controlling device (such as a PLC).

    FIGURE 2-1: A Fanuc M-410iww palletizing robotic Arm

    2-2.2.3. Actuators

Actuators are the "muscles" of the manipulator. Common types of actuators are

    servomotors, stepper motors, pneumatic cylinders, and hydraulic cylinders. There are also other

    actuators that are more novel and are used in specific situations. Actuators are controlled by the

    controller.

    2-2.2.4. Sensors

    Sensors are used to collect information about the internal state of the robot or to

    communicate with the outside environment as in humans. Robots are often equipped with external

sensory devices such as a vision system, touch and tactile sensors, speech synthesizers, etc., which enable the robot to communicate with the outside world.

    2-2.2.5. Controller

    The controller receives its data from the computer, controls the motions of the actuators,

and coordinates the motions with the sensory feedback information. Suppose that, in order for the robot to pick up a part from a bin, it is necessary that its first joint be at 35°. If the joint is not already at this angle, the controller will send a signal to the actuator (a current to an electric motor, air to a pneumatic cylinder, or a signal to a hydraulic servo valve), causing it to move. It will then measure the change in the joint angle through the feedback sensor attached to the joint (a potentiometer, an encoder, etc.). When the joint reaches the desired value, the signal is stopped.


    2-2.2.6. Processor

The processor is the brain of the robot. It calculates the motions of the robot's joints, determines how much and how fast each joint must move to achieve the desired location and speed, and oversees the coordinated actions of the controller and the sensors. The processor is generally a computer which works like all other computers, but is dedicated to a single purpose. It requires an operating system, programs and peripheral equipment such as monitors, and it has many of the same limitations and capabilities as a PC processor.

    2-2.2.7. Software

    There are perhaps three groups of software that are used in a robot. One is the operating

    system, which operates the computer. The second is the robotic software, which calculates the

    necessary motions of each joint based on the kinematics equations of the robot. This information

is sent to the controller. This software may be at many different levels, from machine language to

    sophisticated languages used by modern robots. The third group is the collection of routines and

application programs that are developed in order to use the peripheral devices of the robot, such as vision routines, or to perform specific tasks.

    It is important to note that in many systems, the controller and the processor are placed in

    the same unit. Although these two units are in the same box and even if they are integrated into

    the same circuit, they have two separate functions.

2-2.3. ROBOT COORDINATES

Robot configurations generally follow the coordinate frames with which they are defined, as shown in Figure 2.2. Prismatic joints are denoted by P, revolute joints by R, and spherical joints by S. Robot configurations are specified by a succession of P's, R's, and S's. For example, a robot with three prismatic and three revolute joints is specified as 3P3R. The following configurations are common for positioning the hand of the robot:

2-2.3.1. Cartesian/Rectangular/Gantry (3P)

These robots are made of three linear joints that position the end effecter, usually followed by additional revolute joints that orientate the end effecter.

    2-2.3.2. Cylindrical (R2P)

Cylindrical coordinate robots have two prismatic joints and one revolute joint for positioning the part, plus revolute joints for orientating the part.

    2-2.3.3. Spherical (2RP)

Spherical coordinate robots follow a spherical coordinate system, which has one prismatic and two revolute joints for positioning the part, plus an additional revolute joint for orientation.

2-2.3.4. Articulated/Anthropomorphic (3R)

An articulated robot's joints are all revolute, similar to a human's arm. They are perhaps the most common configuration for industrial robots.


    2-2.3.5. Selective Compliance Assembly Robot Arm (SCARA)

SCARA robots have two revolute joints that are parallel and allow the robot to move in a horizontal plane, plus an additional prismatic joint that moves vertically. SCARA robots are very common in assembly operations. Their specific characteristic is that they are more compliant in the x-y plane but very stiff along the z-axis, and thus have selective compliance.
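To make the positional part of these configurations concrete, the following sketch (illustrative only; the joint names and angle conventions are assumptions, not taken from the thesis) computes the hand position in Cartesian coordinates from the positioning joints of Cartesian, cylindrical, and spherical arms.

    import math

    def cartesian_position(x, y, z):
        # 3P: the three prismatic joint values are the hand coordinates themselves
        return (x, y, z)

    def cylindrical_position(theta, r, z):
        # R2P: one revolute joint (theta) and two prismatic joints (r, z)
        return (r * math.cos(theta), r * math.sin(theta), z)

    def spherical_position(theta, phi, rho):
        # 2RP: two revolute joints (theta about z, phi as elevation) and one prismatic joint (rho)
        return (rho * math.cos(phi) * math.cos(theta),
                rho * math.cos(phi) * math.sin(theta),
                rho * math.sin(phi))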

    FIGURE 2-2: Robot types.

    FIGURE 2-3: Some possible robot coordinate frames.

    2-2.4. ROBOT REFERENCE FRAMES

Robots may be moved relative to different coordinate frames. In each type of coordinate frame, the motions will be different. Usually, robot motions are accomplished in the following three coordinate frames (Figure 2.4):

    2-2.4.1. World Reference Frame

It is a universal coordinate frame, as defined by the x-, y-, and z-axes. In this case, the joints of the robot move simultaneously so as to create motions along the three major axes. In this frame, for example, no matter where the arm is, a positive x-axis movement is always in the positive direction of the x-axis. This coordinate frame is used to define the motions of the robot relative to other objects, to define other parts and machines that the robot communicates with, and to define motion paths.


    2-2.4.2. Joint Reference Frame

It is used to specify movements of each individual joint of the robot. Suppose that you want to move the hand of a robot to a particular position. You may decide to move one joint at a time in order to direct the hand to the desired location. In this case, each joint may be accessed individually, and, thus, only one joint moves at a time. Depending on the type of joint used (prismatic, revolute, or spherical), the motion of the robot hand will be different. For instance, if a revolute joint is moved, the hand will move around a circle defined by the joint axis.

    2-2.4.3. Tool Reference Frame

It specifies movements of the robot's hand relative to a frame attached to the hand. The x-, y-, and z-axes attached to the hand define the motions of the hand relative to this local frame. Unlike the universal World frame, the local Tool frame moves with the robot. Suppose that the hand is pointed as shown in Figure 2.3. Moving the hand relative to the positive x-axis of the local Tool frame will move the hand along the x-axis of the Tool frame. If the arm were pointed elsewhere, the same motion along the local x'-axis of the Tool frame would be completely different from the first motion. The same +x'-axis movement would be upward if the x'-axis were pointed upward, and downward if the x'-axis were pointed downward. As a result, the Tool reference frame is a moving frame that changes continuously as the robot moves, and the ensuing motions relative to it are also different, depending on where the arm is and what orientation the Tool frame has.
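As a small illustration of the difference between World-frame and Tool-frame motion, the sketch below (written for this discussion, not taken from any controller) maps a step given in Tool-frame coordinates into the World frame through the rotation matrix that describes the hand's current orientation; the example orientation is an assumption chosen so that the Tool x'-axis points straight up.

    import numpy as np

    def world_step_from_tool_step(hand_rotation, tool_step):
        """Map a displacement given in the Tool frame into World-frame coordinates.

        hand_rotation: 3x3 matrix whose columns are the Tool frame's x'-, y'-, z'-axes
                       expressed in the World frame.
        tool_step:     displacement along the Tool frame's x'-, y'-, z'-axes.
        """
        return hand_rotation @ np.asarray(tool_step)

    # Example: hand pointing straight up (Tool x'-axis along World +z).
    R = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, 0.0, 0.0]])   # columns: x' -> +z, y' -> +x, z' -> +y
    print(world_step_from_tool_step(R, [1.0, 0.0, 0.0]))   # a +x' move is upward: [0, 0, 1]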

    FIGURE 2-4: A robot's reference frames.

2-2.5. ROBOT CHARACTERISTICS

The following definitions are used to characterize robot specifications:

    2-2.5.1. Payload

Payload is the weight a robot can carry and still remain within its other specifications. For example, a robot's maximum load capacity may be much larger than its specified payload, but at the maximum level it may become less accurate, may not follow its intended path accurately, or may have excessive deflections. The payload of robots compared with their own weight is usually very small. For example, the Fanuc Robotics LR Mate robot has a mechanical weight of 86 lbs and a payload of 6.6 lbs, and the M-16i robot has a mechanical weight of 594 lbs and a payload of 35 lbs.

    2-2.5.2. Reach

Reach is the maximum distance a robot can reach within its work envelope. As we will see later, many points within the work envelope of the robot may be reached with any desired orientation (these are called dexterous points). However, for other points, close to the limit of the robot's reach capability, the orientation cannot be specified as desired (these are called nondexterous points). Reach is a function of the robot's joint lengths and its configuration.

    2-2.5.3. Precision

Precision is defined as how accurately a specified point can be reached. This is a function of the resolution of the actuator, as well as its feedback devices. Most industrial robots can have a precision of 0.001 inch or better.

    2-2.5.4. Repeatability (Variability)

Repeatability is how accurately the same position can be reached if the motion is repeated many times. Suppose that a robot is driven to the same point 100 times. Since many factors may affect the accuracy of the position, the robot may not reach the same point every time, but will be within a certain radius from the desired point. The radius of the circle that is formed by this repeated motion is called repeatability. Repeatability is much more important than precision. If a robot is not precise, it will generally show a consistent error, which can be predicted and thus corrected through programming. If the error is random, however, it cannot be predicted and thus cannot be easily eliminated; repeatability defines the extent of this random error.

Repeatability is usually specified for a certain number of runs. More tests yield larger (bad for manufacturers) and more realistic (good for the users) results. Manufacturers must specify repeatability in conjunction with the number of tests, the applied payload during the tests, and the orientation of the arm. For example, the repeatability of an arm in a vertical direction will be different from when the arm is tested in a horizontal configuration. Most industrial robots have repeatability in the 0.001-inch range.
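As a simple worked example of this definition (an illustrative calculation, not a standardized test procedure), repeatability can be estimated from a set of repeated measured positions as the radius of the smallest sphere, centred on the mean reached point, that encloses all the trials:

    import numpy as np

    def repeatability_radius(points):
        """Estimate repeatability as the largest distance from the mean reached position.

        points: an (N, 3) array of positions reached in N repeated runs.
        """
        points = np.asarray(points, dtype=float)
        mean_position = points.mean(axis=0)                    # centre of the cluster of runs
        distances = np.linalg.norm(points - mean_position, axis=1)
        return distances.max()                                 # radius enclosing every run

    # Example: five runs scattered around the commanded point (units: inches).
    runs = [[10.0002, 5.0001, 2.0000],
            [ 9.9999, 4.9998, 2.0003],
            [10.0001, 5.0002, 1.9999],
            [ 9.9998, 5.0000, 2.0001],
            [10.0000, 4.9999, 2.0002]]
    print(repeatability_radius(runs))   # a few ten-thousandths of an inch for this data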

    2-2.5.5. Robot Workspace

Depending on their configuration and the size of their links and wrist joints, robots can reach a collection of points called a workspace. The shape of the workspace for each robot is uniquely related to its characteristics. The workspace may be found mathematically by writing equations that define the robot's links and joints, including their limitations, such as the range of motion for each joint [13]. Alternatively, the workspace may be found empirically, by moving each joint through its range of motion, combining all the space it can reach, and subtracting what it cannot reach. Figure 2.5 shows the approximate workspace for some common configurations. When a robot is being considered for a particular application, its workspace must be studied to ensure that the robot will be able to reach the desired points. For accurate workspace determination, please refer to manufacturers' data sheets.
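The empirical approach can be illustrated with a short sketch (a simplified planar example assumed for this discussion, not from the thesis): sweep each joint of a two-revolute-joint arm through its range and record the reachable hand positions; the set of recorded points approximates the workspace.

    import math

    def planar_2r_workspace(l1=1.0, l2=0.7, step_deg=5,
                            joint1_range=(-90, 90), joint2_range=(-120, 120)):
        """Sample the reachable (x, y) points of a planar 2R arm by sweeping both joints."""
        points = []
        for t1 in range(joint1_range[0], joint1_range[1] + 1, step_deg):
            for t2 in range(joint2_range[0], joint2_range[1] + 1, step_deg):
                a1, a2 = math.radians(t1), math.radians(t2)
                x = l1 * math.cos(a1) + l2 * math.cos(a1 + a2)   # forward kinematics
                y = l1 * math.sin(a1) + l2 * math.sin(a1 + a2)
                points.append((x, y))
        return points

    workspace = planar_2r_workspace()
    print(len(workspace), "sampled reachable points")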

    2-2.5.6. Volume And Shape Of The Workspace

The volume and shape of the workspace are very important for applications, since they determine the capabilities of the robot. The use of a robot might be severely limited for some applications, since the workspace usually has voids ("dead zones") which cannot be reached. Frequently, the actual size and shape of the workspace depend significantly on parameters other than basic kinematics, such as structural tolerances, the thermal condition of the linkage, the payload being manipulated, velocities and accelerations, etc.

An important parameter of the workspace is its degree of redundancy. A human being can reach most of the workspace of his or her arm from several directions, thus overcoming possible obstacles. Some robots also demonstrate such ability thanks to redundant degrees of freedom. While extra degrees of freedom significantly complicate programming and control strategies, as well as structural design, the use of at least one redundant degree of freedom is advocated to overcome degeneracy problems [7]. The angle at which the arm reaches a certain point is not routinely specified by the robot manufacturer; however, it can be a critical factor for certain applications.

    FIGURE 2-5: Workspaces for common robot configurations.

    2-2.6. ROBOT DEGREES OF FREEDOM

In order to locate a point in space, one needs to specify three coordinates, such as the x-, y-, and z-coordinates along the three Cartesian axes. Three coordinates are necessary and sufficient to define the location of the point. Although the three coordinates may be expressed in terms of different coordinate systems, they are always necessary. It is not possible to use two or four coordinates, since two are inadequate to locate a point in space, and four are impossible in three dimensions. Similarly, if you consider a three-dimensional device with three degrees of freedom, you should be able to place any point at any desired location within the workspace of the device. For example, a gantry (x, y, z) crane can place a ball at any location within its workspace as specified by the operator.

Similarly, to locate a rigid body (a three-dimensional object rather than a point) in space, one needs to specify the location of a selected point on it, and thus it requires three pieces of information to be located as desired. However, although the location of the object is specified, there are infinitely many possible ways to orientate the object about the selected point. To fully specify the object in space, in addition to the location of a selected point on it, one needs to specify the orientation of the object. This means that a total of six pieces of information is needed to fully specify the location and orientation of a rigid body. By the same token, six degrees of freedom must be available to fully place the object in space and also orientate it as desired. If there are fewer than six degrees of freedom, the robot's capabilities are limited.

To demonstrate this, consider a robot with three degrees of freedom that can only move along the x-, y-, and z-axes. In this case, no orientation can be specified; all the robot can do is pick up the part and move it in space, parallel to the reference axes. The orientation always remains the same. Now consider another robot with five degrees of freedom, capable of rotating about the three axes, but only moving along the x- and y-axes. Although you may specify any orientation desired, the positioning of the part is only possible along the x- and y-axes, but not the z-axis.


A system with seven degrees of freedom does not have a unique solution. This means that if a robot has seven degrees of freedom, there are an infinite number of ways it can position a part and orientate it at the desired location. For the controller to know what to do, there must be some additional decision-making routine that allows it to pick only one of the infinite ways. As an example, one may use an optimization routine to pick the fastest or the shortest path to the desired destination. The computer then has to check all solutions to find the shortest or fastest response and perform it. Due to this additional requirement, which can take much computing power and time, no seven-degree-of-freedom robot is used in industry. A similar issue arises when a manipulator robot is mounted on a moving base such as a mobile platform or a conveyor belt (Figure 2.6). The robot then has an additional degree of freedom, which, based on the preceding discussion, is impossible to control. The robot can be at a desired location and orientation from infinitely many distinct positions on the conveyor belt or the mobile platform. However, in this case, although there are too many degrees of freedom, generally the additional degrees of freedom are not solved for.

In other words, when a robot is mounted on a conveyor belt or is otherwise mobile, the location of the base of the robot relative to the belt or other reference frame is known. Since this location does not need to be determined by the controller, the remaining number of degrees of freedom is still six, and thus unique. So long as the location of the base of the robot on the belt or the location of the mobile platform is known (or picked), there is no need to find it by solving a set of equations of robot motions, and, thus, the system can be solved.
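A minimal sketch of such a decision-making routine (illustrative only; the candidate solutions are assumed to come from an inverse-kinematics solver, which is not shown) is to evaluate every joint configuration that reaches the desired pose and keep the one requiring the least total joint motion from the current configuration:

    def pick_closest_solution(current_joints, candidate_solutions):
        """Among all joint configurations reaching the target, choose the one
        that minimizes the total joint motion from the current configuration."""
        def total_motion(candidate):
            return sum(abs(c - q) for c, q in zip(candidate, current_joints))
        return min(candidate_solutions, key=total_motion)

    # Example with made-up joint values (degrees) for a redundant arm:
    current = [0, 45, -30, 0, 60, 0, 10]
    candidates = [[10, 40, -25, 5, 55, 0, 15],
                  [170, -40, 25, -5, 125, 180, -15]]
    print(pick_closest_solution(current, candidates))   # the first: far less joint travel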

    FIGURE 2-6: A Fanuc P-15 robot.

Can you determine how many degrees of freedom the human arm has? This should exclude the hand (palm and fingers), but should include the wrist. Before you go on, please try to see if you can determine it.

You will notice that the human arm has three joint clusters in it: the shoulder, the elbow, and the wrist. The shoulder has three degrees of freedom, since the upper arm (humerus) can rotate in the sagittal plane (parallel to the mid-plane of the body), in the coronal plane (a plane from shoulder to shoulder), and about the humerus. (Verify this by rotating your arm about the three different axes.) The elbow has only one degree of freedom; it can only flex and extend about the elbow joint. The wrist also has three degrees of freedom. It can abduct and adduct, flex and extend, and, since the radius bone can roll over the ulna bone, it can rotate longitudinally (pronate and supinate). Thus, the human arm has a total of seven degrees of freedom, even if the ranges of some movements are small. Since a seven-degree-of-freedom system does not have a unique solution, how do you think we can use our arms?

You must realize that in a robot system, the end effecter is never considered as one of the degrees of freedom. All robots have this additional capability, which may appear to be similar to a degree of freedom. However, none of the movements in the end effecter are counted towards the robot's degrees of freedom.

There are many robots in industry that possess fewer than six degrees of freedom. In fact, robots with 3.5, 4, and 5 degrees of freedom are very common. So long as there is no need for the additional degrees of freedom, these robots perform very well. As an example, suppose that you desire to insert electronic components into a circuit board. The circuit board is always laid flat on a known work surface; thus, its height (z-value) relative to the base of the robot is known. Therefore, only two degrees of freedom are needed along the x- and y-axes to specify any location on the board for insertion. Additionally, suppose that the components may be inserted in any direction on the board, but that the board is always flat. In that case, one degree of freedom is needed to rotate about the vertical axis (z) in order to orientate the component above the surface. Since a half degree of freedom is also needed to fully extend the end effecter to insert the part, or to fully retract it to lift the robot before moving, all that is needed is 3.5 degrees of freedom: two to move over the board, one to rotate the component, and a half to insert or retract. Insertion robots are very common and are used extensively in the electronics industry. Their advantage is that they are simple to program, less expensive, smaller, and faster. Their disadvantage is that, although they may be programmed to insert components on any size of board in any direction, they cannot perform other jobs. They are limited to what 3.5 degrees of freedom can achieve, but they can perform a variety of functions within this design limit.

    2-2.7. ROBOT JOINTS

Robots may have different types of joints, such as linear, rotary, sliding, or spherical. Although spherical joints are common in many systems, they possess multiple degrees of freedom and are thus difficult to control; consequently, spherical joints are not common in robotics, except in research. Most robots have either a linear (prismatic) joint or a rotary (revolute) joint.

Prismatic joints are linear; there is no rotation involved. They are either hydraulic or pneumatic cylinders, or they are linear electric actuators. These joints are used in gantry, cylindrical, or similar joint configurations.

Revolute joints are rotary, and although hydraulic and pneumatic rotary joints are common, most rotary joints are electrically driven, either by stepper motors or, more commonly, by servomotors.

2-2.8. PROGRAMMING MODES

Robots may be programmed in a number of different modes, depending on the robot and its sophistication. The following programming modes are very common:

    2-2.8.1. Physical Setup

In this mode, an operator sets up switches and hard stops that control the motion of the robot. This mode is usually used along with other devices, such as Programmable Logic Controllers (PLCs).

    2-2.8.2. Lead Through or Teach Mode

In this mode, the robot's joints are moved with a teach pendant. When the desired location and orientation is achieved, the location is entered (taught) into the controller. During playback, the controller will move the joints to the same locations and orientations. This mode is usually point to point, where the motion between points is not specified or controlled; only the points that are taught are guaranteed to be reached.
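A toy sketch of this record-and-playback idea (purely illustrative; move_joints_to is an assumed placeholder for the real joint-level command) might store taught joint sets and replay them point to point:

    class TeachPendantProgram:
        """Record taught joint positions and replay them point to point."""

        def __init__(self, move_joints_to):
            self.move_joints_to = move_joints_to   # placeholder for the controller command
            self.taught_points = []

        def teach(self, joint_values):
            """Store the current joint values as one taught point."""
            self.taught_points.append(list(joint_values))

        def playback(self):
            """Move through the taught points in order; the path between them is not controlled."""
            for point in self.taught_points:
                self.move_joints_to(point)

    # Example: teach two poses (joint angles in degrees) and replay them.
    program = TeachPendantProgram(move_joints_to=print)
    program.teach([0, 45, -30, 0, 60, 0])
    program.teach([10, 30, -20, 5, 45, 0])
    program.playback()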

    2-2.8.3. Continuous Walk-Through Mode

In this mode, all robot joints are moved simultaneously, while the motion is continuously sampled and recorded by the controller. During playback, the exact motion that was recorded is executed. The motions are taught by an operator, either through a model, by physically moving the end effecter, or by directing the robot arm and moving it through its workspace. Painting robots, for example, are programmed by skilled painters through this mode.

2-2.8.4. Software Mode

In this mode of programming, a program is written off-line or on-line and is executed by the controller to control the motions. This programming mode is the most sophisticated and versatile, and it can include sensory information, conditional statements (such as if...then statements), and branching. However, it requires knowledge of the operating system of the robot before any program is written. Most industrial robots can be programmed in more than one mode.

    2-2.9. ROBOT LANGUAGES

There are perhaps as many robotic languages as there are robots. Each manufacturer designs its own robotic language, and thus, in order to use any particular robot, its brand of programming language must be learned. Many robot languages are based on some other common language, such as Cobol, Basic, C, and Fortran. Other languages are unique and not directly related to any other common language.

Robotic languages are at different levels of sophistication, depending on their design and application. This ranges from machine level to a proposed human-intelligence level [2][3][4]. High-level languages are either interpreter based or compiler based.

Interpreter-based languages execute one line of the program at a time, and each line has a line number. The interpreter interprets the line every time it is encountered (by converting the line to a machine language that the processor can understand and execute) and executes each line sequentially. The execution continues until the last line is encountered or until an error is detected. The advantage of an interpreter-based language is its ability to continue execution until an error is detected, which allows the user to run and debug the program portion by portion. Thus, debugging programs is much faster and easier. However, because each line is interpreted every time, execution is slower and not very efficient. Many robot languages, such as Unimation's VAL and IBM's AML (A Manufacturing Language), are interpreter based [5][6].

Compiler-based languages use a compiler to translate the whole program into machine language (which creates an object code) before it is executed. Since the processor executes the object code, these programs are much faster and more efficient. However, since the whole program must first be compiled, it is impossible to run any part of the program if any error is present. As a result, debugging compiler-based programs is much more difficult. Certain languages, such as AL, are more flexible; they allow the user to debug the program in interpreter mode, while the actual execution is in compiler mode.

The following is a general description of the different levels of robotic languages [2]:

    2-2.9.1. Microcomputer Machine Language Level

At this level, the programs are written in machine language. This level of programming is the most basic and is very efficient, but it is difficult to understand and to follow. All languages will eventually be interpreted or compiled to this level. However, in the case of higher-level programs, the user writes the programs in a higher-level language, which is easier to follow and understand.


2-2.9.2. Point-to-Point Level

At this level (such as in Funky and Cincinnati Milacron's T3), the coordinates of the points are entered sequentially, and the robot follows the points as specified. This is a very primitive and simple type of programming; it is easy to use, but not very powerful. It also lacks branching, sensory information, and conditional statements.

    2-2.9.3. Primitive Motion Level

In these languages, it is possible to develop more sophisticated programs, including sensory information, branching, and conditional statements (such as VAL by Unimation). Most languages of this level are interpreter based.

    2-2.9.4. Structured Programming Level

Most languages of this level are compiler based, are powerful, and allow more sophisticated programming. However, they are also more difficult to learn.

    2-2.9.5. Task-Oriented Level

Currently, there are no actual languages of this level in existence. Autopass, proposed by IBM in the 1980s, never materialized. Autopass was supposed to be task oriented. This means that instead of programming a robot to perform a task by programming each and every step necessary to complete the task, the user would simply mention the task, and the controller would create the necessary sequence. Imagine that a robot is to sort three boxes by size. In all existing languages, the programmer has to tell the robot exactly what to do, which means that every step must be programmed: the robot must be told how to go to the largest box, how to pick up the box, where to place it, how to go to the next box, etc. In Autopass, the user would only indicate "sort," while the robot controller would create this sequence automatically.

2-3. ROBOT KINEMATICS & POSITION ANALYSIS

In this section, we will study the forward and inverse kinematics of robots. Forward kinematics will enable us to determine where the robot's end (hand) will be if all joint variables are known. Inverse kinematics will enable us to calculate what each joint variable must be if we desire that the hand be located at a particular point and have a particular orientation. Using matrices, we will first establish a way of describing objects, locations, orientations, and movements. Then we will study the forward and inverse kinematics of different configurations of robots, such as Cartesian, cylindrical, and spherical coordinates. Finally, we will use the Denavit-Hartenberg representation to derive forward and inverse kinematic equations of all possible configurations of robots.

It is important to realize that, in reality, manipulator-type robots are delivered with no end effecter. In most cases, there may be a gripper attached to the robot. However, depending on the actual application, different end effectors are attached to the robot by the user. Obviously, the end effector's size and length determine where the end of the robot is. For a short end effecter, the end will be at a different location than for a long end effecter. In this chapter, we will assume that the end of the robot is a plate to which the end effecter can be attached, as necessary. We will call this the "hand" or the "end-plate" of the robot. If necessary, we can always add the length of the end effecter to the robot for determining the location and orientation of the end effecter.
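Before the formal treatment, the underlying idea can be previewed with a small planar two-link example (an illustrative sketch with assumed link lengths L1 and L2, not the general formulation derived later): forward kinematics maps the joint angles to the hand position, while inverse kinematics recovers a set of joint angles that places the hand at a desired point.

    import math

    L1, L2 = 1.0, 0.7   # assumed link lengths of a planar 2R arm

    def forward_kinematics(theta1, theta2):
        """Hand (x, y) position from the two joint angles (radians)."""
        x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
        y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
        return x, y

    def inverse_kinematics(x, y):
        """One of the two joint solutions that places the hand at (x, y)."""
        c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
        theta2 = math.acos(max(-1.0, min(1.0, c2)))            # clamp for numerical safety
        k1, k2 = L1 + L2 * math.cos(theta2), L2 * math.sin(theta2)
        theta1 = math.atan2(y, x) - math.atan2(k2, k1)
        return theta1, theta2

    x, y = forward_kinematics(math.radians(30), math.radians(45))
    print(inverse_kinematics(x, y))   # recovers joint angles equivalent to (30 deg, 45 deg)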


2-3.1. MATRIX REPRESENTATION

Matrices can be used to represent points, vectors, frames, translations, rotations, and transformations, as well as objects and other kinematic elements in a frame. We will use this representation throughout this work to derive the equations of motion for robots.

2-3.1.1. Representation of a Point in Space

A point P in space (Figure 2.7) can be represented by its three coordinates relative to a reference frame:

    P = a_x i + b_y j + c_z k        (2.1)

where a_x, b_y, and c_z are the three coordinates of the point represented in the reference frame. Obviously, other coordinate representations can also be used to describe the location of a point in space.

    FIGURE 2-7: Representation of a point in space.

    2-3.1.2. Representation of a Vector in Space

A vector can be represented by the three coordinates of its tail and of its head. If the vector starts at a point A and ends at a point B, then it can be represented by

    P_AB = (B_x - A_x) i + (B_y - A_y) j + (B_z - A_z) k

Specifically, if the vector starts at the origin (Figure 2.8), then:

    P = a_x i + b_y j + c_z k

where a_x, b_y, and c_z are the three components of the vector in the reference frame. In fact, point P in the previous section is in reality represented by a vector whose tip is at point P, expressed by the three components of the vector.

The three components of the vector can also be written in matrix form, as in Equation (2.5); this format will be used throughout this work to represent all kinematic elements:

    P = [a_x  b_y  c_z]^T        (2.5)


This representation can be slightly modified to also include a scale factor w, such that if x, y, and z are divided by w, they yield a_x, b_y, and c_z. Thus, the vector can be written as

    P = [x  y  z  w]^T,   where a_x = x/w, b_y = y/w, c_z = z/w

Variable w may be any number, and as it changes, it can change the overall size of the vector. This is similar to zooming a picture in computer graphics. As the value of w changes, the size of the vector changes accordingly. If w is greater than unity, all vector components enlarge; if w is less than unity, all vector components become smaller. This is also used in computer graphics for changing the size of pictures and drawings.

If w is unity, the size of the components remains unchanged. However, if w = 0, then a_x, b_y, and c_z will be infinity. In this case, x, y, and z (as well as a_x, b_y, and c_z) will represent a vector whose length is infinite but which is nonetheless in the direction represented by the vector. This means that a directional vector can be represented by a scale factor of w = 0, where the length is not of importance, but the direction is represented by the three components of the vector. This representation will be used throughout this work to represent directional vectors.
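A small sketch (illustrative only, written for this discussion) of this scaled representation: dividing the first three components by w recovers the physical components, and w = 0 is treated as a pure direction.

    def to_physical(p):
        """Convert a scaled vector [x, y, z, w] to its physical components (a_x, b_y, c_z).

        If w == 0 the vector is directional: its length is irrelevant and only the
        direction given by (x, y, z) is meaningful, so those components are returned as-is.
        """
        x, y, z, w = p
        if w == 0:
            return (x, y, z)          # a direction vector, not a located point
        return (x / w, y / w, z / w)

    print(to_physical([3.0, 5.0, 7.0, 1.0]))    # (3.0, 5.0, 7.0)
    print(to_physical([6.0, 10.0, 14.0, 2.0]))  # same physical vector, scaled by w = 2
    print(to_physical([0.0, 0.0, 1.0, 0.0]))    # pure direction along z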

    FIGURE 2-8: Representation of a vector in space

    2-3.1.3. Representation of a Frame at the Origin of a Fixed-Reference Frame

A frame centered at the origin of a reference frame is represented by three vectors, usually mutually perpendicular to each other, called the unit vectors n, o, a, for the normal, orientation, and approach vectors. Each unit vector is represented by its three components in the reference frame. Thus, a frame F can be represented by three vectors in matrix form as:

    F = | n_x  o_x  a_x |
        | n_y  o_y  a_y |
        | n_z  o_z  a_z |

    2-3.1.4. Representation of a Frame in a Fixed Reference Frame

If a frame is not at the origin (or, in fact, even if it is at the origin), then the location of the origin of the frame relative to the reference frame must also be expressed. In order to do this, a vector is drawn between the origin of the frame and the origin of the reference frame, describing the location of the frame (Figure 2.9). This vector is expressed through its components relative to the reference frame. Thus, the frame can be expressed by three vectors describing its directional unit vectors, as well as a fourth vector describing its location, as follows:

    F = | n_x  o_x  a_x  P_x |
        | n_y  o_y  a_y  P_y |
        | n_z  o_z  a_z  P_z |
        |  0    0    0    1  |
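As an illustration of this representation (a sketch written for this discussion, using NumPy), a frame can be assembled from its n, o, a unit vectors and its origin vector P into a single 4x4 matrix:

    import numpy as np

    def make_frame(n, o, a, p):
        """Build the 4x4 matrix of a frame from its n, o, a unit vectors and origin p,
        all expressed in the reference frame."""
        F = np.eye(4)
        F[:3, 0] = n          # normal vector
        F[:3, 1] = o          # orientation vector
        F[:3, 2] = a          # approach vector
        F[:3, 3] = p          # location of the frame's origin
        return F

    # Example: a frame rotated 90 degrees about the reference z-axis, located at (3, 5, 7).
    F = make_frame(n=[0, 1, 0], o=[-1, 0, 0], a=[0, 0, 1], p=[3, 5, 7])
    print(F)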


    FIGURE 2-9: Representation of a frame in a frame

    2-3.1.5. Representation of a Rigid Body

An object can be represented in space by attaching a frame to it and representing the frame in space. Since the object is permanently attached to this frame, its position and orientation relative to this frame are always known. As a result, so long as the frame can be described in space, the object's location and orientation relative to the fixed frame will be known (Figure 2.10). As before, a frame in space can be represented by a matrix in which the origin of the frame, as well as the three vectors representing its orientation relative to the reference frame, are expressed. Thus,

    F_object = | n_x  o_x  a_x  P_x |
               | n_y  o_y  a_y  P_y |
               | n_z  o_z  a_z  P_z |
               |  0    0    0    1  |

    FIGURE 2-10: Representation of a rigid body in a frame

A rigid body in space has six degrees of freedom, meaning that not only can it move along the three axes X, Y, and Z, but it can also rotate about these

