
International Master’s Thesis

Vision Based Grasp Planning for Robot Assembly

Naresh Marturi

Technology

Studies from the Department of Technology at Örebro University, Örebro 2010


Vision Based Grasp Planning for Robot Assembly


Studies from the Department of Technology at Örebro University

Naresh Marturi

Vision Based Grasp Planning for Robot Assembly

Supervisor: Prof. Ivan Kalaykov


© Naresh Marturi, 2010

Title: Vision Based Grasp Planning for Robot Assembly

ISSN 1404-7225


Abstract

This thesis demonstrates an industrial assembly task in which several parts are assembled into a product using vision information. Due to their lack of sensory capabilities, most assembly cells cannot act intelligently in recognizing workpieces and perceiving the task space. Such systems lack the flexibility and the capability to automatically modify their trajectories to accommodate changes in the task. This flexibility was achieved here by integrating vision sensing into the assembly cell.

For this work, we prototyped an assembly cell consisting of an ABB IRB 140 robot equipped with a flexible gripper, a flexible fixture and a camera fixed at the midpoint of the gripper. The flexibility of the assembly cell is provided by its main components: the gripper and the fixture, which were already designed and prototyped at the AASS IC Laboratory, and the vision system, developed during this project. The image information from the camera is used to perceive the robot's task space and to recognize the workpieces; in turn, this information is used to compute the spatial position and orientation of the workpieces. Based on this information, an automatic assembly grasp planner was designed and developed to compute the possible stable grasps and to select and execute the posture of the entire robot arm plus gripper. To recognize the workpieces, different low-level object recognition algorithms were developed based on their geometrical models. Once the workpieces are identified and grasped by the robot, the vision system is no longer used and the robot executes the predefined sequence of assembly operations. In this system, the assembly process of every product is described as an assembly tree, e.g. a precedence graph, for all parts in the product.

The entire work was assessed by evaluating the individual modules of the project against a set of goal-based criteria and using those results to judge the overall merit of the project. The tests conducted on the developed system showed that it is capable of grasping and assembling workpieces regardless of their initial position and orientation. In addition, a simple and reliable communication scheme was developed to connect the components of the assembly cell and to provide flexible process execution.

Keywords: Flexible assembly cell, Grasp planner, Object identification


Acknowledgements

First of all I would like to gratefully acknowledge my supervisor Prof. Ivan Kalaykov for his abundant help and prolific suggestions. I especially thank him for his infinite patience. The discussions I had with him were invaluable.

I would like to say a big thanks to Assoc. Prof. Anani Ananiev for his support in fixing the hardware problems with the gripper.

I am grateful to all my friends for being my surrogate family during the two years I stayed in Örebro.

My final words go to my family. I want to thank my mom, dad, sister and Bujji, whose love and guidance are with me in whatever I pursue.


Contents

1 Introduction
  1.1 Background
    1.1.1 Eye-in-hand for robot manipulators
  1.2 Project goal
    1.2.1 Project evaluation
  1.3 Contributions
  1.4 Thesis structure

2 Previous study
  2.1 Related study
    2.1.1 Visual servoing
    2.1.2 Grasping with visual servoing
    2.1.3 Vision system
    2.1.4 Grasping module
  2.2 Resource description
    2.2.1 Robot
    2.2.2 Grasping system
    2.2.3 Camera
  2.3 Functions in FMS
  2.4 Summary

3 Developed system
  3.1 Experimental setup
  3.2 System architecture
    3.2.1 Work flow
  3.3 Object identification
    3.3.1 Shaft recognition
    3.3.2 Gear recognition
  3.4 Automatic planner for arm posture control
  3.5 Synchronizing arm and gripper motions
    3.5.1 Arm motion control
    3.5.2 Gripper motion control
    3.5.3 Interface to synchronize motions
  3.6 Limitations

4 Experiments with the system
  4.1 Test environment
  4.2 Assembly representation
    4.2.1 Assembly sequence selection
  4.3 Test scenario
    4.3.1 Special cases
    4.3.2 Test results
  4.4 Analysis

5 Conclusions
  5.1 Summary
  5.2 Future work

A Source code for shaft identification
B Source code for gear identification
C Camera calibration procedure
  C.1 Camera intrinsic parameters
  C.2 Camera extrinsic parameters
D Source code for client and server
E Source code for Galil controller


List of Figures

1.1  FMS schematic diagram
1.2  System architecture of robotic vision based control
1.3  Typical eye-in-hand system

2.1  Visual servoing control schema
2.2  Grasping control architecture
2.3  Pinhole camera model
2.4  Contour image
2.5  Contour image
2.6  ABB IRB 140B robotic manipulator
2.7  Robot geometry
2.8  Robot mechanical structure
2.9  Robot base and tool coordinate systems
2.10 Robot wrist coordinate system
2.11 Flexible gripper
2.12 Finger configuration 1
2.13 Finger configuration 2
2.14 Finger configuration 3
2.15 Flexible fixture
2.16 Galil motion controller
2.17 Camera
2.18 Simulated view of robot-centered FMS

3.1  Test assembly
3.2  Test assembly parts sequence
3.3  System architecture
3.4  Work flow diagram
3.5  (A) Background image (B) Current frame
3.6  Subtracted image
3.7  Curve pixels of the shaft
3.8  (A) Original image (B) Edge image
3.9  Recognized gears
3.10 Sample code for robot motion control
3.11 Robot zone illustration diagram
3.12 Robot controller communication architecture
3.13 Sequential diagram for client–server model

4.1  Test environment
4.2  Graph structure of test assembly
4.3  Precedence diagram of test assembly
4.4  (A) Initialized gripper (B) Arm at home position
4.5  (A) Arm at position 1 (B) Arm at position 2
4.6  Arm at position 3
4.7  Screen-shot of execution window
4.8  Workpieces in robot task space
4.9  Recognized shaft
4.10 Grasping shaft
4.11 (A) Robot fixing shaft (B) Fixed shaft
4.12 (A) Searching at POS2 (B) Searching at POS3
4.13 Recognized small gear
4.14 Grasping small gear
4.15 (A) Robot fixing gear (B) Assembled gear
4.16 (A) Searching for the gear (B) Robot grasping the gear
4.17 (A) Fixing the gear (B) Assembled big gear
4.18 (A) Robot grasping the pin (B) Fixing the pin
4.19 Assembled pin
4.20 Final assembled product

C.1  Pinhole camera geometric projection


List of Tables

2.1 Robot axis specifications
2.2 Flexible gripper technical configuration
2.3 Flexible gripper finger configuration
2.4 Flexible fixture technical configuration
2.5 Flexible fixture grasp dimensions range
2.6 Camera specifications

3.1 Pseudo code for shaft recognition
3.2 Pseudo code for gear recognition
3.3 Robot positioning instructions

4.1 Test assembly task description


Chapter 1

Introduction

1.1 Background

This thesis demonstrates a simple industrial task of assembling different workpieces into a single product using image information from a vision system integrated with a high-precision articulated¹ robotic manipulator. The main goal of this work is to develop a successful grasp planning methodology using vision information for a flexible assembly cell.

Figure 1.1: FMS schematic diagram

In the past two decades, a number of technological advancements have been made in the development of flexible manufacturing systems (FMS). An FMS can be described as a system consisting of one or more handling devices, such as robotic manipulators, together with robot controllers and machine tools, arranged so that it can handle the different families of parts for which it has been designed and developed [Rezaie et al., 2009].

¹ A robot with rotary joints and a fixed base is called an articulated robot.


Figure 1.1 shows an example of an FMS. Nowadays, these systems play a vital role in industrial applications such as welding and assembly operations. A simple FMS used for assembly operations, also termed a flexible assembly cell, is an arrangement of one or more Computer Numerically Controlled (CNC) machines together with a robot manipulator and a cell computer. The main tasks of the CNC machines include workload balancing and task scheduling; they are also responsible for handling machine breakdowns and tool breakage. The cell computer is responsible for the supervision and coordination of the various operations in the manufacturing cell. The end effector of the robot manipulator is fitted with specific machine tools (e.g. two- or three-fingered grippers) depending on the task to be performed. The functions of these manufacturing cells and the technical specifications of the tools used are described in more detail in the preparation study chapter of this thesis.

Due to the lack of sensory capabilities, most assembly cells cannot act intelligently in recognizing workpieces and perceiving the task space. For example, a typical robotic assembly cell requires that the workpieces presented to the robot be placed at predefined, precise locations and with a known orientation that is fixed for the overall task. Such systems lack the flexibility and the capability to automatically modify their trajectories to accommodate changes in the task. This flexibility is achieved for assembly cells through the integration of vision sensing, since visual sensors provide richer and more complete information about the task space than any other sensing device. Moreover, with the help of the information received from integrated vision, robots can act intelligently, deal with imprecisely positioned workpieces, and handle uncertainties and variations in the work environment. The vision information is also useful for enhancing the capability of the robot by continuously updating its view of the world. This type of architecture is termed an eye-in-hand or eye-to-hand configuration. The basic building blocks of the whole system are shown

Figure 1.2: System architecture of robotic vision based control

in Figure 1.2 and can be described as follows:

Work space includes fixtures, workpieces and tools.


Sensory system allows the robot to perceive the work environment and to recognize the workpieces.

Control system contains the cell computer and a robot controller to organize the tasks and to control the robot, respectively.

Robotic manipulator is used to perform the appropriate actions under the control of the robot controller.

1.1.1 Eye-in-hand for robot manipulators

For the last couple of decades, eye-in-hand visual servoing has been studied extensively because of its importance in industrial assembly operations. An eye-in-hand system can be described as a robot end effector equipped with a close-range camera, as shown in Figure 1.3. The camera selection is based on the task complexity. An illumination source is attached to the gripper along with the camera in order to capture good images in dim lighting conditions and to counteract lighting changes in some areas of the real scene of view. The camera has a lens that can be adjusted for proper focus to minimize the positioning error [Eye].

These types of systems are mainly employed to guide robot end effectors and grippers in performing a specific task. The images acquired by the camera are processed using specific algorithms on a computer system in order to recognize the object of interest and to find its spatial information. This information can then be used to guide the robot movement in a specific workspace.

Figure 1.3: Typical Eye-in-hand system


1.2 Project goal

The main objective of this project is to develop and demonstrate a pilot robotic system implementing new basic functionalities in the assembly cell. These functionalities include:

• Investigating and implementing an eye-in-hand vision system for perceiving the working environment of the robot and recognizing the workpieces to be manipulated.

• Developing and implementing an automatic grasp planner, which computes possible stable grasps and sets the gripper posture accordingly.

• Developing and implementing an automatic grasping planner, which selects and executes the posture of the entire robot arm plus the flexible gripper.

• All of the above functionalities have to be supported by a corresponding visual perception system.

1.2.1 Project evaluation

The general approach for evaluating this thesis work involves evaluating the individual modules of the project against a set of goal-based criteria and using those results to judge the overall merit of the project. These individual modules include eye-in-hand visual servoing, the grasp planner, motion control and an interface to integrate all of these modules. The overall process time is not considered a research topic in this thesis.

1.3 Contributions

With the completion of this thesis, the overall goal has been achieved and a grasp planner for robot assembly operations has been successfully developed. The main contributions of this thesis are:

• Developing a methodology of autonomous planning and replanning of assembly operations.

• Intelligent use of the Flexible Gripper.

• Integration and demonstration of the above two functions in a heterogeneous environment involving plenty of particular limitations and problems.


1.4 Thesis structure

This section describes the contents of the following chapters.

Chapter 2 Provides an overview of the relevant background concepts along with a survey of existing techniques. This chapter also provides all the technical information regarding the various components used in this project, such as the robot, gripper, fixture and camera, and introduces the reader to the problem.

Chapter 3 Proposes a solution to the problem introduced in the previous chapter. This chapter also provides a detailed description of the implementation procedure.

Chapter 4 Describes the test scenario used to demonstrate the capacity of the developed system.

Chapter 5 Presents conclusions along with suggestions for future work.


Chapter 2

Previous study

This chapter provides an overview of the related background concepts along with a survey of existing techniques. The main goal of this chapter is to give the reader all the technical information regarding the various components used in this project, such as the robot, gripper, fixture and camera, along with the basic functionalities of an FMS.

2.1 Related study

2.1.1 Visual servoing

Visual servoing refers to the use of vision data to control the motion of a robot. It can be described as a closed-loop control algorithm in which the error is defined in terms of visual measurements. The main goal of this control scheme is to reduce the error by driving the robot joint angles as a function of the pose error [Taylor and Kleeman, 2006]. This type of control scheme is mainly applied in object manipulation tasks, which require object detection, servoing, alignment and grasping. Figure 2.1 shows the basic building blocks of a visual servoing control scheme. Visual servoing systems are generally classified into two types: position-based visual servoing (PBVS) and image-based visual servoing (IBVS).

Figure 2.1: Visual servoing control schema


In the former, the error is calculated after reconstructing the pose of the gripper from visual measurements, whereas in the latter the error is formulated directly as the difference between the observed and desired locations of the image features. In both cases, image information is used to compute the error.
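To make the IBVS error formulation concrete, the sketch below implements the classical point-feature control law v = -λ L⁺ (s - s*) with the standard interaction matrix. It is a textbook-style illustration with made-up feature values and depths, not code from this thesis.

import numpy as np

def interaction_matrix(x, y, Z):
    # Interaction (image Jacobian) matrix of a normalized point feature (x, y) at depth Z,
    # relating feature velocity to camera velocity (vx, vy, vz, wx, wy, wz).
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,     -(1 + x * x),  y],
        [0.0,     -1.0 / Z,  y / Z, 1 + y * y, -x * y,       -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    # Classical IBVS law: v = -gain * L^+ * (s - s*).
    error = (features - desired).reshape(-1)            # stacked feature error s - s*
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -gain * np.linalg.pinv(L) @ error             # 6-vector camera velocity command

# Example: two image points slightly off their desired locations, assumed depth 0.5 m.
s = np.array([[0.10, 0.05], [-0.08, 0.12]])
s_star = np.array([[0.00, 0.00], [-0.10, 0.10]])
print(ibvs_velocity(s, s_star, depths=[0.5, 0.5]))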

2.1.2 Grasping with visual servoing

Grasping with multi-fingered grippers has been an important topic of research in the field of manipulation for many years. Reaching a particular position and grasping an object is a complex task that requires many sensing activities. These activities need to be performed in the right sequence and at the right time in order to make a smooth and stable grasp. Most studies state that in order to provide a stable grasp, the grasping system requires complete information about the robot kinematics, the gripper capabilities, the sensor capabilities and the workspace where the objects are placed [Diankov et al., 2009]. Little work, however, has been done on integrating vision sensors for grasping and manipulation tasks. For grasping stationary objects, which is the setting of this thesis, the object pose can be computed from the available image frames and the motion of the arm can be planned accordingly. One such approach, in which a robotic arm picks up a stationary object and places it in a specified location, is described by Chiu et al. [1986]. Figure 2.2 shows a theoretical control program by Arbib [1981] for grasping a stationary object using vision data. According to this control program, when moving the arm towards a stationary target object, the spatial location of the target needs to be known in advance. The required spatial information, i.e. the object's location, size and orientation, is provided by the vision system. As the arm approaches the target, it needs to correct its orientation towards the target. At the point of grasping, the arm should be aligned with the target so that it grasps around the longest axis in order to obtain a stable hold of the object.

Generally, any grasping model using the common manipulation framework has three phases, namely the specification, planning and execution phases. The specification phase is responsible for supplying enough information to the robot to perform a specific task. The planning phase is responsible for producing a global plan based on the information received from the specification phase; a collision-free path for the robot end effector to achieve the final goal is produced in this phase. Finally, the execution phase is responsible for continuously updating the information from the planning phase and for executing the right grasping action. The performance of the overall model depends mainly on these three phases. When planning a grasp based on visual servoing, the final goal of the robot is defined with respect to the task frame, i.e. the information acquired by the camera. The overall plan of the robot is updated every time the task frame updates [Prats et al., 2007].


Figure 2.2: Grasping control architecture

One fundamental observation about robotic systems that use vision for grasping is that a robot cannot determine the accurate spatial location of an object if the camera is located far away. Therefore, the robot should move closer to the object for better accuracy.

Eye-hand coordination

Eye-hand coordination is a typical scenario that links perception with action [Xie, 1997]. It is the integration of a visual perception system with an arm manipulation system; the overall performance of the model depends on how these two systems are integrated. The simplest approach to eye-hand coordination is to use the vision system to compute the pose of the target object in the robot workspace and pass this information to the robotic manipulator to plan and execute the specific actions required for the task. In order to make this coordination model more reliable, an incremental mapping principle that maps the visual sensory data to the hand's motions should be established [Xie, 2002]. Most recent studies on eye-hand coordination focus on the estimation of the feature Jacobian matrix¹ for mapping the motion of the robot arm to changes in the image frame. To model a reliable incremental mapping principle, the camera should be calibrated using a high-precision calibration rig.

¹ The image feature Jacobian matrix is a linear transformation matrix that transforms the task space to the image space [Hutchinson et al., 1996].
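The incremental mapping mentioned above can be stated compactly; the following relation is the standard one behind the feature Jacobian and is added here for reference rather than taken from the thesis:

\dot{\mathbf{s}} \;=\; J_s(\mathbf{q})\,\dot{\mathbf{q}},
\qquad
\Delta\mathbf{q} \;\approx\; J_s^{+}(\mathbf{q})\,\Delta\mathbf{s}

where s denotes the image features, q the arm configuration, and J_s^{+} the pseudo-inverse used to convert a desired feature change into an arm increment.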


2.1.3 Vision system

The vision system mainly comprises a normal pinhole camera along with an illumination source. In order to integrate this vision system with a manipulation framework, it should be capable of tracking the multiple objects that need to be manipulated. Functionally, the vision system can be decomposed into two subsystems:

1. Object tracking

2. Pose² estimation

Object tracking

Tracking multiple objects and representing their shapes is a vital task when assisting a robotic manipulator in assembly operations using a vision system. Extensive research has been carried out in this field for many years and various algorithms have been developed. The common approach in most object tracking applications is to train the vision system with an image dataset containing the object and to match the object in the real scene with the trained object [Bastanlar et al., 2010]. However, this type of tracking is impractical for vision-based manipulation, as it consumes too much time in performing the overall task. Another approach is to find different characterizing features of the objects from the image frames; the most preferred feature of this kind is the object centroid. Koivo and Houshangi [1991] used this type of feature in their work for tracking different objects. Hunt and Sanderson [1982] proposed various algorithms for object tracking based on mathematical predictions of the centroid location of an object from the image frames. Huang et al. [2002] proposed an object tracking method based on image warping and Kalman filtering. H. Yoshimi and Allen [1997] used fiducial marks³ and Allen et al. [1993] used the snakes⁴ approach to trace various objects in a given workspace. In order to reduce the computational time for object tracking, many algorithms have been developed based on image background subtraction; one such approach was used by Wiklund and Granlund [1987] for tracking multiple objects. A variety of techniques based on blob detection, contour fitting, image segmentation and object feature extraction are in use for low-level object identification and geometrical shape description. One such approach is used in this thesis.

Pose estimation

After detecting the object to be manipulated, it is necessary to compute its pose in order to find a set of valid grasping points.

² Pose can be described as the combination of the position and orientation of an object.
³ A fiducial mark is a black dot on a white background.
⁴ A snake is an energy-minimizing spline which can detect objects in an image and track non-occluded objects in a sequence of images [Tabb et al., 2000].


Figure 2.3: Pinhole camera model

The pose of an object can be computed either from a single image or from a stereo pair of images. Most vision-based manipulation applications use the POSIT algorithm [Dementhon and Davis, 1995] to estimate the pose of an object using a single pinhole camera. In general, vision-based pose estimation algorithms rely on a set of image features such as corner points, edge lines and curves. In order to estimate the pose of an object from an image, prior knowledge of the 3D locations of the object features is necessary. The pose of an object is the combination of its rotation R (3×3) and its translation T (3×1) with respect to the camera, so the pose can be written mathematically as [R|T], which is a 3×4 matrix. For a given 3D point on the object, its corresponding 2D projection (perspective projection⁵) in the image, the camera's focal length and the principal point are used to compute the pose. The camera focal length and principal point can be obtained by following a standard calibration technique such as the one proposed by Chen and He [2007]. Figure 2.3 shows the normal pinhole camera model and the related coordinate systems. The projection of a 3D point M = [X, Y, Z]^T onto the image plane at a point m = [x, y]^T can be represented in homogeneous matrix form as

\lambda \underbrace{\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}}_{m}
= \underbrace{\begin{bmatrix} f & 0 & \alpha_x & 0 \\ 0 & f & \alpha_y & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}}_{K}
\underbrace{\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}}_{M}
\qquad (2.1)

where K is the intrinsic parameter matrix of the camera, f is the camera focal length, λ = Z is the homogeneous scaling factor, and α_x and α_y give the principal point of the camera. These parameters are used to find the point projection.

⁵ Perspective projection is the mapping from three dimensions onto two dimensions.


The complete pixel position of a point M can now be written as

\lambda m =
\begin{bmatrix} K & 0_3 \end{bmatrix}
\begin{bmatrix} R & 0_3 \\ 0_3^T & 1 \end{bmatrix}
\begin{bmatrix} I_3 & -T \\ 0_3^T & 1 \end{bmatrix} M
\qquad (2.2)

Alternatively, after combining the matrices, equation 2.2 can be written as

\lambda m =
\begin{bmatrix} K & 0_3 \end{bmatrix}
\begin{bmatrix} R & -RT \\ 0_3^T & 1 \end{bmatrix} M
\qquad (2.3)

From the corresponding mapping of 3D feature points to 2D image points, the pose is estimated.
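As an illustration of this 3D-to-2D mapping, the sketch below recovers [R|T] with OpenCV's solvePnP, used here in place of POSIT purely for illustration; the model points, image points and intrinsic values are placeholders, not the calibrated data of this thesis.

import numpy as np
import cv2

# 3D model points of known object features (object frame, metres) and their
# detected 2D projections (pixels); the values are illustrative only.
object_pts = np.array([[0, 0, 0], [0.04, 0, 0], [0.04, 0.03, 0], [0, 0.03, 0]],
                      dtype=np.float64)
image_pts = np.array([[322, 241], [368, 244], [365, 278], [320, 275]],
                     dtype=np.float64)

K = np.array([[800.0, 0, 320.0],      # intrinsic matrix from calibration (placeholder values)
              [0, 800.0, 240.0],
              [0, 0, 1.0]])
dist = np.zeros(5)                     # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)             # rotation matrix R (3x3)
pose = np.hstack([R, tvec])            # [R | T], the 3x4 pose matrix of Eq. (2.3)
print(ok, pose)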

2.1.4 Grasping module

The grasping module used for robotic manipulation tasks mainly comprises the following two sub-modules:

1. Grasp planning

2. Grasp execution

Grasp planning has been an important research topic in the field of dexterous manipulation for many years. Okamura et al. [2000] presented a survey of existing techniques in dexterous manipulation along with a comparison between robotic and human manipulation. Rao et al. [1989] proposed 8 possible grasps using a three-fingered gripper. Most researchers assume that the geometry of the object is known before the grasping process starts. A very important concept in grasp planning is how to use this information to filter out unstable grasp points, such as the corners of the object. The main task is to select a particular type of grasp, i.e. to take the final decision of using two or three fingers for a particular object based on its geometry.

In general, grasping an object means building a relationship between the manipulator and the object model. Sometimes the complete information about the object model is hardly known; in that case grasp planning is imprecise and unreliable. So instead of using information about the object model directly, object features obtained by the vision system and a binary image containing the object contours can be used. The contour images serve as input to the grasp computing block inside the planning module, and a corresponding output is produced from a database of grasps, which in turn serves as input to the planning module. The database of grasps contains the following kinds of grasps:

1. Valid grasps – those that satisfy a particular grasping criterion and can be used for grasping, although stable grasping is not ensured and needs verification.

2. Best grasps – a subset of the valid grasps that ensure stable grasping.


3. Invalid grasps – those that do not satisfy the grasping criterion and cannot be used for grasping.

Finally, the planning module selects an appropriate grasp from this database and passes this information to the execution module. The grasp execution module is a control module responsible for executing the selected grasp. It is also responsible for initializing both the gripper and the manipulator arm before grasping and for moving the arm towards the object location [Morales et al., 2006].
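A minimal sketch of how such a grasp database could be represented and queried is given below; the class and field names are hypothetical and only mirror the valid/best/invalid categories described above.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Grasp:
    points: tuple        # gripper contact points on the contour
    quality: float       # score from the grasping criteria
    stable: bool         # True if stability has been verified

def select_grasp(database: List[Grasp], quality_min: float = 0.0) -> Optional[Grasp]:
    # Pick a 'best' grasp if one exists, otherwise the highest-scoring valid grasp.
    valid = [g for g in database if g.quality > quality_min]   # invalid grasps are dropped
    if not valid:
        return None
    best = [g for g in valid if g.stable]                      # best grasps ensure stability
    return max(best or valid, key=lambda g: g.quality)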

Grasp region determination

“The contiguous regions on the contour that comply with the finger adaptation criterion are called grasp regions [Morales et al., 2006].”

The preliminary step in grasp point determination is the extraction of grasping regions from a binary image containing the object contours. An example contour image is shown in Figure 2.4. The contour-based approach is one of the most commonly used techniques for shape description and grasping region selection. The main idea behind this approach is to extract meaningful information from the curves, such as the vertices and line segments in the image that are highly significant for grasp point selection. Some other works prefer polygonal approximation using linking and merging algorithms [Rosin, 1997].

Figure 2.4: Contour image

A curvature function along with a fixed threshold (τ1) is used to process all the contour points [Morales et al., 2006]. τ1 helps in finding the continuous edge points for which a line segment can be approximated; the outcome of this step is a set of line segments. These line segments are processed further using another threshold, τ2 (selected based on the finger properties), such that all segments below τ2 are rejected. The remaining edge line segments are the valid grasping regions. However, this type of approach fails for objects such as those shown in Figure 2.5, for which most of the edge points lie below τ1,


so the approximation of edge line segments is not possible. For such objects, the longer regions are broken down into small pieces, and these pieces can be approximated as straight lines.

Figure 2.5: Contour image
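A rough OpenCV sketch of this region extraction is given below; it substitutes polygonal approximation (approxPolyDP) for the curvature function of Morales et al., and the pixel thresholds standing in for τ1 and τ2 are arbitrary.

import cv2
import numpy as np

def grasp_regions(binary_img, eps_px=3.0, min_len_px=25.0):
    # Approximate object contours by line segments and keep those long enough
    # for a finger to adapt to (the role of threshold tau_2 in the text).
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    regions = []
    for c in contours:
        poly = cv2.approxPolyDP(c, eps_px, True).reshape(-1, 2)   # polygonal approximation
        for p, q in zip(poly, np.roll(poly, -1, axis=0)):         # consecutive vertices
            if np.linalg.norm(q - p) >= min_len_px:               # discard short segments
                regions.append((p.astype(float), q.astype(float)))
    return regions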

Once the grasp regions are selected, the next step is to compute the grasp points where the robot can take hold of the object. In order to find good grasp points, we need to find compatible regions among all the available valid regions. For a two-finger grasp, finding the compatible grasp regions can be performed iteratively by selecting two regions at a time and validating them. The validation is performed by finding the normal vectors of the selected regions and projecting the regions in the direction of the normal vectors; if there exists any intersection between the projected regions, the regions are said to be compatible for grasping. Once the compatible regions are selected, the midpoints of these regions serve as the grasping points for the gripper. For a three-finger grasp, a similar approach is used for finding the compatible regions and grasp points, as explained by Morales et al. [2006].
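The compatibility test can be sketched as follows for the two-finger case. This simplified version only checks that the two regions face each other (nearly anti-parallel normals) instead of performing the full projection-intersection test described above, and all names are illustrative.

import numpy as np

def grasp_points(region_a, region_b, min_opposition=0.9):
    # Check whether two grasp regions (2D line segments) oppose each other well
    # enough for a two-finger grasp; if so, return their midpoints as grasp points.
    (a0, a1), (b0, b1) = region_a, region_b

    def unit_normal(p, q):
        d = (q - p) / np.linalg.norm(q - p)
        return np.array([-d[1], d[0]])                  # in-plane normal of the segment

    if np.dot(unit_normal(a0, a1), unit_normal(b0, b1)) > -min_opposition:
        return None                                     # regions do not face each other
    return (a0 + a1) / 2.0, (b0 + b1) / 2.0             # midpoints serve as finger contact points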

2.2 Resource description

2.2.1 Robot

The robot used in this thesis is the ABB IRB 140B, shown in Figure 2.6. This is a six-axis articulated robotic manipulator which allows arbitrary positioning and orientation within the robot's workspace. The geometrical and mechanical structures of the robot are shown in Figures 2.7 and 2.8, respectively. The accuracy of this robot is very high, with a position repeatability of ±0.03 mm. The maximum payload of the robot is 6 kg. The axis specifications and joint velocities of the robot are given in Table 2.1.


Figure 2.6: ABB IRB 140B robotic manipulator [ABB, 2004]

Figure 2.7: Robot geometry [ABB, 2004]

Figure 2.8: Robot mechanical structure [ABB, 2004]


Axis No.   Axis          Range                       Velocity
1          C, Rotation   360°                        200°/s
2          B, Arm        200°                        200°/s
3          A, Arm        280°                        260°/s
4          D, Wrist      Unlimited (400° default)    360°/s
5          E, Bend       240°                        360°/s
6          P, Turn       Unlimited (800° default)    450°/s

Table 2.1: Robot axis specifications

Robot coordinate systems

The position and motion of the robot are always related to the robot Tool Center Point (TCP), which is located in the middle of the defined tool. For any application several tools can be defined, but only one tool or TCP is active at any point in time or point of a move. The coordinates of the TCP are defined with respect to the robot base coordinate system, or the tool can use its own coordinate system. Figure 2.9 shows the robot base and tool coordinate systems. For many applications, TCP coordinates are recorded with respect to the robot's base coordinate system.

Figure 2.9: Robot base and tool coordinate systems

The base coordinate system of the robot can be described as follows:

• It is located on the base of the robot.

• The origin is located at the intersection of axis 1 and the robot's base mounting surface.

• The xy-plane coincides with the base mounting surface, such that the x-axis points forward from the base and the y-axis points to the left (from the robot's perspective).


• The z-axis points upward and coincides with axis-1 of the robot.

The tool is always mounted on the mounting flange of the robot and requires its own coordinate system to define its TCP. The orientation of the tool at any programmed position is determined by the orientation of the tool coordinate system. This coordinate system is also used to obtain appropriate motion directions when jogging the robot. The tool coordinate system is often referenced to the wrist coordinate system of the robot (see Figure 2.10). The coordinate system of the wrist can be described as follows:

Figure 2.10: Robot wrist coordinate system [ABB, 2004]

• The wrist coordinate system always remains the same as that of the mounting flange of the robot.

• The origin(TCP) is located at the center of the mounting flange.

• The z-axis points outwards from the mounting flange.

• At the calibration position, the x-axis points in the opposite direction, towards the origin of the base coordinate system.

• The y-axis points to the left and can be seen as parallel to the y-axis of the base coordinate system.
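The relation between the base, mounting-flange (wrist) and tool coordinate systems is a simple composition of homogeneous transforms; the sketch below shows the idea with illustrative numbers, not the actual tool offset used in this work.

import numpy as np

def homogeneous(R, t):
    # Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,).
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Pose of the mounting flange in the base frame (e.g. read from the controller) and the
# fixed offset of the tool centre point in the flange frame; the values are illustrative.
T_base_flange = homogeneous(np.eye(3), np.array([0.45, 0.00, 0.60]))
T_flange_tool = homogeneous(np.eye(3), np.array([0.00, 0.00, 0.12]))

T_base_tool = T_base_flange @ T_flange_tool   # TCP pose expressed in the base coordinate system
print(T_base_tool[:3, 3])                     # TCP position in base coordinates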

Robot motion controller

The motion of the ABB IRB 140 is controlled by a special-purpose fifth-generation robot controller, the IRC5, designed by ABB Robotics. The IRC5 controller is embedded with all the functions and controls needed to move and control the robot. It combines motion control, flexibility and safety with PC tool support and optimizes the robot performance for short cycle times and precise movements. Because of its MultiMove function, it is capable of synchronizing the control of up to four robots. The standard IRC5 controller supports


the high-level robot programming language RAPID and also features a well-designed hand-held interface unit called the FlexPendant (teach pendant), which is connected to the controller by an integrated cable and connector.

Robot software and programming

RobotStudio is a computer application for the offline creation, programming and simulation of robot cells. It is used to simulate the robot in offline mode and to work on the controller directly in online mode, as a complement to the FlexPendant. Both the FlexPendant and RobotStudio are used for programming: the FlexPendant is best suited for modifying positions and path sequences in a program, whereas RobotStudio is used for more complex programming (e.g. socket programming).

2.2.2 Grasping system

The grasping system used in this thesis consists of two components, namely:

1. Flexible gripper for grasping objects.

2. Flexible fixture for fixing the grasped objects.

Both these components were designed and prototyped at the AASS research laboratory.

Flexible gripper

The gripper prototype shown in Figure 2.11 has three identical single-joint fingers, providing a balance between functionality and an increased level of dexterity compared to standard industrial grippers. The base (palm) increases the functionality of the gripper by providing the possibility of different relative orientations of the fingers. One of the fingers is fixed to the base, while the other two can symmetrically rotate up to 90° each. Driven by four motors, the gripper is capable of:

• Grasping work pieces of various shapes and sizes (4 – 300 mm).

• Holding the part rigidly after grasping.

Table 2.2 provides the technical configuration of the flexible gripper.


Figure 2.11: Flexible gripper

Flexible Gripper – FG 2.0

Type                               3-fingered, servo driven
Gripping method                    Outside
Payload                            2.5 kg
Gripping force for every finger    max. 30 N
Course for every finger            145 mm
Time for total finger course       10 s
Rotation speed of the finger       max. 14.5 mm/s
Rotation degree of moved fingers   max. 90°
Dimensions                         397 × 546 × 580

Table 2.2: Flexible gripper technical configuration [Ananiev, 2009]

Sensory system

The sensory system of the gripper prototype provides vital feedback information for implementing closed-loop control of the gripper (finger) movement. Two types of sensors are used in this system:

1. Tactile sensors for touch information.

2. Limit switches⁶ for position information.

⁶ Limit switches are switching devices designed to cut off power automatically at the limit of travel of a moving object.


In the latest prototype of the gripper, the contact surfaces of the fingers are covered with tactile sensing pads to enable tactile feedback during grasping movements. These sensors are based on force-sensing resistors, whose resistance changes whenever a finger comes into contact with an object.

Limit switches, which are commonly used to control the movement of mechanical parts, are the second type of sensor used in this system. The limit switches mounted on the three fingers of the gripper are used to determine the coarse finger positions, while the attached optical shaft encoders estimate the precise positions of the fingers.

Finger configurations

The three available configurations for the finger movement are shown in Figures 2.12, 2.13 and 2.14. Table 2.3 shows the number of fingers used and the suitable type of grasping for each finger configuration.

Figure 2.12: Finger configuration 1

Figure 2.13: Finger configuration 2


Figure 2.14: Finger configuration 3

Configuration   No. of fingers used   Type
1               3                     Grasp long objects
2               3                     Grasp circular objects
3               2                     Grasp small objects

Table 2.3: Flexible gripper finger configuration

Flexible fixture

Figure 2.15: Flexible fixture

The fixture prototype shown in Figure 2.15 features 4 DOF and consists of one driving module and two pairs of connecting and grasping modules. Table 2.4 gives the complete technical configuration of the fixture. The driving module consists of two identical parts connected by an orienting couple. Two types of movement are defined for the driving module: linear and rotary.


Linear movement is accomplished using a gear motor and a ball screw pair, which converts the rotary motion of the nut into reciprocal motion through a sliding block connected to it, whereas rotary movement is accomplished by a gear motor passing rotary motion via a toothed-belt connection to the ball-linear pairs. The connecting modules are responsible for holding the grasping modules containing the finger pairs. The fixture is designed in such a way that it can open and close the finger pairs independently of each other. The main benefits of this fixture architecture are:

• Grasping a wide range of objects with different shapes and sizes (see Table 2.5 for the allowed dimensions).

• Holding the objects firmly.

• Self centering of the objects.

• Precise control over the horizontal movement of the holding modules and the vertical movement of the fingers.

Type                                  4-fingered flexible fixture
Type of grasp                         Inside and outside
Driving                               Electromechanical, 24 V
Run of every holder                   50 mm
Force of grasping                     2500 N
Time for full travel of the holders   2.5 s
Operational time for grasping         0.5 s
Maximum rotational speed              360°/s
Maximum torque                        7.5 N·m
Maximum angle of rotation             210°
Positioning accuracy                  ±0.05
Max. weight of the grasped detail     10 kg
Dimensions                            550 mm × 343 mm × 150 mm
Weight                                22 kg

Table 2.4: Flexible fixture technical configuration

Motion control

The motions of both the gripper and the fixture are controlled by two separate Galil DMC-21x3 Ethernet motion controllers. Figure 2.16 shows the Galil DMC-21x3 motion controller. With a 32-bit microcomputer, the DMC-21x3 provides advanced features such as PID compensation with velocity and acceleration, program memory with multitasking, and uncommitted I/O for synchronizing motion with external events.


Shape                Min. size (mm)   Max. size (mm)
Cylindrical          30               165
Square / rectangle   30               160
Hexagonal            25               85
Inside grasping      90               220

Table 2.5: Flexible fixture grasp dimensions range

Figure 2.16: Galil motion controller

The encoder and limit switch information can be accessed through Galil's special-purpose software tools (GalilTools) for motion controllers. This tool is also used for sending and receiving Galil commands. The integrated Watch Tool is used to monitor the controller status, such as the I/O and the motion, throughout the operation. The GalilTools C++ communication library (the Galil class is compatible with the g++ compiler in Linux) provides various methods for communicating with the Galil motion controller over Ethernet [gal].
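The thesis communicates with the controller through the GalilTools C++ library. Purely as an illustration of the Ethernet command/response idea, the sketch below sends a raw ASCII command over a TCP socket; the port number, line termination and example command are assumptions and would need to be checked against the controller documentation.

import socket

def galil_command(host, cmd, port=23, timeout=2.0):
    # Send one ASCII command to the controller over Ethernet and return its reply.
    # Port 23 and the carriage-return termination are assumptions, not from the thesis.
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall((cmd + "\r").encode("ascii"))
        return s.recv(1024).decode("ascii", errors="replace").strip()

# Example with a hypothetical controller address: query the reported position of axis A.
# print(galil_command("192.168.0.100", "TPA"))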

2.2.3 Camera

Figure 2.17: Camera

The camera used in this thesis is a Logitech Webcam Pro 9000 (see Figure 2.17), fixed at the midpoint of the flexible gripper. A source of illumination is integrated with the camera in order to provide a uniform light distribution over the scene of view. Table 2.6 gives the technical specifications of the camera.


Focal length         3.7 mm
Lens iris            F/2.0
Megapixels           2 (enhanced up to 8)
Focus adjustment     Automatic
Video resolution     1600 × 1200
Image resolution     1600 × 1200
Optical sensor       CMOS
Frame rate           Up to 30 FPS
Communication        USB 2.0

Table 2.6: Camera specifications

2.3 Functions in FMS

As explained in Section 1.1, a flexible manufacturing system is a

“highly automated group technology (GT) machine cell, consisting of a group of workstations, interconnected by an automated material handling and storage system and controlled by a distributed computer system [Groover, 2000].”

These types of systems are mainly designed to produce various parts defined within a range of different sizes and shapes. They are a group of computer-guided machines used to produce various products based on the controller (CNC machine) instructions [Mitchell]. The important characteristic of these systems is flexibility, which comes from their capability of handling different families of parts and their adaptability to changes in the task. Generally, FMSs can be classified into different types, such as the single machine cell, the flexible manufacturing cell and the flexible manufacturing system, based on the number of CNC machines used for operation [Leicester, 2009]. A single machine cell consists of only one CNC machine, whereas a flexible manufacturing cell consists of two or three CNC-controlled workstations along with the automated handling tools.

The flexible robot assembly cell used for this thesis consists of:

• A CNC machine tool responsible for controlling the overall assembly operation.

• An automated robot whose movements can be controlled by programming.

• A cell computer which is used to control the program flow and to coordinate the activities of the workstation by communicating with the CNC. It is also used to monitor the work status.


Figure 2.18: Simulated view of robot-centered FMS [Festo solution center]

These types of systems are also called robot-centered FMS; a simulated view of one such system is shown in Figure 2.18. All the instructions required for the flexible assembly cell are programmed and loaded into the controllers before starting the assembly operation. All the components in the flexible assembly cell are connected to a common network, so that the communication between the different machines and the cell computer takes place over a high-speed intranet. One main advantage of this model is that all the instructions required for the task can be programmed offline, tested in a simulation environment and then executed online directly on the robot. The basic functionalities provided by this flexible assembly cell are:

• Sensorial perception and status identification.

• Interactive planning in order to assure stable performance under process changes.

• On-line decision-making capabilities such as process monitoring, error recovery and handling of tool breakage.

2.4 Summary

This chapter has provided an overview of the previous work done in the field of vision-based grasping, together with a survey of existing techniques relevant to grasp planning methodologies, eye-in-hand control and the extraction of workpiece geometrical information from images. Technical details of the various resources used in this thesis, such as the robot, CNC controller, camera, flexible gripper and flexible fixture, along with a short description of the flexible assembly cell and its functionalities, were also provided. The concepts mentioned in this chapter are relevant to the main contributions of this thesis.


Chapter 3

Developed system

This chapter proposes a solution to the problem of grasping various workpieces from the robot workspace and assembling them by making use of the eye-in-hand vision system. It also discusses the developed system architecture along with the techniques used to solve the problem.

3.1 Experimental setup

In order to demonstrate the capabilities of the developed system, a test assembly containing four different workpieces has been designed, as shown in Figure 3.1. These four objects differ from each other in size and shape and are initially placed at different locations in the robot workspace. The main goal of the system is to identify the workpieces separately using the information received from the camera and to develop a planning methodology for controlling the entire posture of the arm and gripper in order to assemble them. The part identification sequence is shown in Figure 3.2.

Figure 3.1: Test assembly


Figure 3.2: Test assembly parts sequence

3.2 System architecture

A conceptual diagram of the proposed system is shown in Figure 3.3.

Figure 3.3: System architecture

The complete architecture is divided into four different subsystems, described as follows:


Camera subsystem is responsible for controlling the camera operations and updating the field of view. The Open Computer Vision Library (OpenCV) is used on the Linux platform to capture images from the camera. The images acquired from the camera serve as input to the image processing block inside the controller subsystem.

Controller subsystem is a combination of various blocks and is mainly responsible for synchronizing the different actions inside the system. As previously stated, the image processing and object identification block inside this subsystem receives the image information from the camera subsystem and produces an output containing the object feature information. This output serves as the input to the shape and position identification block, which produces a corresponding output containing the object's spatial information. This information is passed to the grasp planning block, where an overall plan for grasping is developed. The two interface blocks for the arm and Galil motion controllers are used to communicate with the ABB IRB 140 robot arm and with the flexible gripper and fixture, respectively. These two interface blocks serve as a bridge between the software and the hardware.

Robot arm subsystem is responsible for controlling the arm actions based on the information received from the controller subsystem. The task of this subsystem is to send/receive the position information of the robot end effector to/from the controller. This subsystem is developed using ABB's RAPID programming language.

Fixture and gripper subsystems are responsible for controlling the actions of the flexible fixture and gripper, respectively, based on the information received from the controller subsystem. These subsystems were developed using the special-purpose Galil programming language for motion controllers.

3.2.1 Work flow

The flow chart shown in Figure 3.4 provides a step-by-step solution to the problem of vision-based assembly operation. As the system operates in a similar manner for all the parts, a single cycle is displayed in the diagram. The system runs this cycle every time the variable PN (part number) increments.

The work flow starts by initializing the robot, gripper and fixture; in this step, all three devices are moved to a predefined position and orientation. In the next step, the camera device and the MATLAB engine are initialized; if any problem occurs during this step, the system execution is stopped. Even though the fixture and the gripper operate in a similar way, their execution steps are shown separately because the fixture is responsible only for holding the shaft (PN = 1), while the remaining parts are mounted on this shaft. As the camera is fixed inside


the gripper, it cannot cover the total workspace in a single frame. Therefore, the total region is divided into several subregions, for which the predefined positions POS1, POS2 and POS3 are specified. All workpieces should be found in these three subregions.

If the initialization process is successful, the system starts executing the main cycle by setting the variable PN to 1. As the first step of this cycle, the robot arm is moved to the predefined position POS1 in order to search for the particular workpiece in its field of view. If the workpiece is identified at this position, the system starts executing the next steps; otherwise the arm moves to the next predefined position, POS2, to find the part. Once the part is identified, its position and shape details are computed and the arm is commanded to move to a pre-grasping pose. At this point, the gripper's fingers are adjusted according to the received shape information. In the next step, the arm is commanded to move near the object and the gripper grasps the workpiece. Once the grasping operation is successful, the camera is no longer used and the arm is moved to a predefined medium-point position in order to avoid collision with the fixture while fixing the part. At this point of the operation, the fixture jaws are arranged to hold the object. As a final step in this cycle, the arm moves over the fixture and delivers the grasped part. The whole cycle is executed repeatedly for all the parts, and system execution stops once the assembly operation is finished.
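The cycle described above can be summarized by the following sketch; every object and method name (robot.move_to, camera.find_part, etc.) is a hypothetical placeholder for the corresponding interface call, not code from the developed system.

# Minimal sketch of the per-part cycle from the flow chart, assuming hypothetical
# robot/gripper/fixture/camera interface objects.
SEARCH_POSITIONS = ["POS1", "POS2", "POS3"]

def assemble(parts, robot, gripper, fixture, camera):
    robot.initialize(); gripper.initialize(); fixture.initialize()
    for pn, part in enumerate(parts, start=1):          # PN increments once per part
        detection = None
        for pos in SEARCH_POSITIONS:                    # workspace split into subregions
            robot.move_to(pos)
            detection = camera.find_part(part)          # identification + pose computation
            if detection:
                break
        if detection is None:
            raise RuntimeError(f"part {part} not found")
        gripper.set_fingers(detection.shape)            # pre-grasp finger configuration
        robot.move_to(detection.pre_grasp_pose)
        robot.move_to(detection.grasp_pose)
        gripper.grasp()
        robot.move_to("MEDIUM_POINT")                   # avoid collision with the fixture
        if pn == 1:
            fixture.prepare_for(part)                   # only the shaft is held by the fixture
        robot.move_to("FIXTURE")
        gripper.release()                               # deliver the grasped part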


Figure 3.4: Work flow diagram


3.3 Object identification

This section describes the developed methods to identify the workpieces from images. The workpieces in this assembly operation are a cylindrical shaft and two circular gears, which were already shown in the test assembly setup. As stated earlier in Section 3.1, these workpieces have to be identified in a specific order during assembly execution. The implementation procedures for part identification are explained below.

3.3.1 Shaft recognition

The procedure used to recognize the shaft was implemented in MATLAB based on image background subtraction and boundary extraction algorithms. The pseudo code in Table 3.1 presents the basic steps of the procedure.

Initially, a background image of the workspace is captured without any workpieces. For every new frame from the camera, the proposed shaft recognition algorithm subtracts this new frame from the background image. The result of this step is a binary image containing the regions of interest (ROI) of the different objects. Figures 3.5 and 3.6 show the background image, the original image and the resulting binary image after image subtraction respectively. The area of each ROI has to be greater than a certain threshold in order to avoid the recognition of small objects. This threshold is chosen manually by trial and error. Next, in order to determine the type of the workpiece, the curves that compose the boundaries are found in a systematic way. A threshold is obtained to discriminate between the shaft and the remaining work objects: the boundary region whose number of curve pixels is less than this threshold is taken to be the region containing the shaft. Figure 3.7 shows the extracted curve pixels of a boundary region. As a cross-checking step, another threshold is obtained based on the difference between the major axis and the minor axis of the found region. This step is performed in order to eliminate regions misjudged in the previous step. The final region after this step is taken to be the shaft region. The object orientation is determined from the angle between the major axis of the detected region and the X-axis. Source code for shaft identification is given in Appendix A.


Figure 3.5: (A) Background image (B) Current frame

Figure 3.6: Subtracted image

Figure 3.7: Curve pixels of the shaft


Table 3.1: Pseudo code for shaft recognition

1:  Capture the background image BG
2:  Subtract each new frame from BG
3:  for each resulting region
4:      if (area > area_threshold)
5:          add this region to ROI
6:      end if
7:  end for
8:  for each resulting ROI
9:      extract the boundaries
10:     for each resulting boundary
11:         extract the curves
12:     end for
13:     for each extracted curve
14:         find the number of curve pixels NCP
15:     end for
16:     if (NCP < curve_threshold)
17:         find FV^T = [M1, M2, O, A]
18:             where FV is a vector containing object features
19:             M1 is the major axis of the region
20:             M2 is the minor axis of the region
21:             O is the orientation of the major axis
22:             A is the area of the region
23:         find difference D = M1 - M2
24:         set the value of found to one
25:     end if
26: end for
27: if (found = 1 and D > shaft_threshold)
28:     compute the centroid
29:     return shaft found
30: else
31:     return shaft not found
32:     Go To step 2
33: end if
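
For illustration, the same idea can be expressed with OpenCV's C++ API. The sketch below is not the thesis implementation (which is the MATLAB code of Appendix A); it replaces the curve-pixel test of steps 13-16 with a simple elongation test, and all numeric thresholds are placeholder values.

#include <opencv2/opencv.hpp>
#include <algorithm>

// Illustrative restatement of Table 3.1: background subtraction, region filtering and
// an elongation test on the fitted ellipse to single out the shaft region.
bool findShaft(const cv::Mat &background, const cv::Mat &frame,
               cv::Point2f &centroid, double &orientationDeg)
{
    cv::Mat diff, gray, roi;
    cv::absdiff(frame, background, diff);                    // background subtraction
    cv::cvtColor(diff, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, roi, 40, 255, cv::THRESH_BINARY);    // binary ROI image

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(roi, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    double bestDiff = 0.0;
    bool found = false;
    for (const auto &c : contours)
    {
        if (cv::contourArea(c) < 500.0 || c.size() < 5)      // reject small regions
            continue;
        cv::RotatedRect e = cv::fitEllipse(c);               // region major/minor axes
        double M1 = std::max(e.size.width, e.size.height);
        double M2 = std::min(e.size.width, e.size.height);
        if (M1 - M2 > 100.0 && M1 - M2 > bestDiff)           // elongated region -> shaft candidate
        {
            bestDiff = M1 - M2;
            centroid = e.center;
            orientationDeg = e.angle;                        // major-axis angle in the image
            found = true;
        }
    }
    return found;
}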


3.3.2 Gear recognition

The procedure used to recognize the circular gears was developed using OpenCV with C++ on an Ubuntu distribution. The pseudo code in Table 3.2 presents the basic steps of the implementation procedure.

Once the shaft is recognized and fixed in the fixture, the system starts searching for the circular gears. The approach for recognizing both circular gears is the same and is based on the Hough transform algorithm for circle detection [Yi, 1998; CV, 2000]. The pseudo code for gear recognition can be explained as follows. Images from the camera are captured continuously. Before starting the actual search for circles, these captured frames require preprocessing: each captured frame is converted into a gray scale image and undergoes histogram equalization in order to normalize the intensity levels. A median filter of size 11 × 11 is then applied on the equalized image to reduce the noise level. Once the preprocessing stage is completed, edge features are extracted from the image using the Canny edge detection algorithm. The Canny edge detection algorithm applies two thresholds, which are used for edge linking and for finding the initial segments of strong edges respectively. These two thresholds are obtained by trial and error. The result of this step is a binary image containing edges. Figure 3.8 shows the original image along with its computed edges.

Figure 3.8: (A) Original image (B) Edge image

The proposed algorithm uses this edge image to search for the presence of Hough circle candidates with a specific radius. If one or more circle candidates are found, the captured image is considered for the rest of the recognition process; otherwise it is discarded and a new frame is captured. If more than one circle candidate is found, a filter is applied on the found circle candidates in order to limit their total count to two (assuming that both gears are placed in the same subregion). This filter is designed based on the manually measured radii of the two gears. As a next step, the radii of these circles are computed; the circle with the smallest radius that satisfies the small-gear radius condition is taken to be the smaller gear, and the circle with the largest radius that satisfies the big-gear radius condition is taken to be the bigger gear. Figure 3.9 shows the recognized gears. On the other hand, if the gears are placed in different subregions, i.e. if only one circle candidate is found, two different thresholds (based on their radii) are used to recognize the gears. At a particular position, the time to search for and recognize the gears is fixed, and if the gears are not recognized within this time the robot is commanded to move to one of the other predefined positions POS1, POS2 or POS3 as mentioned earlier. If any of the two gears is found missing in the robot workspace, the system produces an assembly error message and the program execution is stopped. Source code for gear identification is given in Appendix B.
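
As an illustration of this pipeline, the sketch below restates the preprocessing and circle search using OpenCV's C++ interface rather than the C API of Appendix B. The radius bounds stand in for the manually measured gear radii and, unlike the thesis code, the Canny step is left to cv::HoughCircles, which applies it internally; all numeric values are placeholders.

#include <opencv2/opencv.hpp>

// Preprocess a frame and search for a gear of the requested size among the Hough
// circle candidates. Returns true and the centre in pixels when a candidate's radius
// falls inside the assumed bounds.
bool findGear(const cv::Mat &frame, bool smallGear, cv::Point &centre)
{
    cv::Mat gray, eq, filtered;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, eq);                      // normalize intensity levels
    cv::medianBlur(eq, filtered, 11);                // 11x11 median filter against noise

    std::vector<cv::Vec3f> circles;                  // each candidate is (x, y, radius)
    cv::HoughCircles(filtered, circles, cv::HOUGH_GRADIENT,
                     1, filtered.rows / 8,           // accumulator resolution, min distance
                     120, 30,                        // Canny high threshold, accumulator threshold
                     40, 100);                       // radius search range in pixels

    const float rMin = smallGear ? 50.0f : 70.0f;    // assumed small/big gear radius bounds
    const float rMax = smallGear ? 70.0f : 90.0f;
    for (const auto &c : circles)
    {
        if (c[2] > rMin && c[2] < rMax)
        {
            centre = cv::Point(cvRound(c[0]), cvRound(c[1]));
            return true;
        }
    }
    return false;                                    // caller moves the arm to the next subregion
}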

Figure 3.9: Recognized gears


Table 3.2: Pseudo code for gear recognition

1:  while (found = false)
2:      Capture frame from the camera
3:      convert image to gray scale
4:      equalize the image
5:      filter the image with a median filter of size 11 × 11
6:      find edges in the image using the Canny edge detector
7:      search for Hough circles in the binary edge image
8:      if (circles_found > 1)
9:          for (circle_candidate = 1 to 2)
10:             compute the radii R1 and R2
11:             if (SMALL GEAR)
12:                 if (R1 < R2) and (small_min < R1 < small_max)
13:                     compute centroid of the circle
14:                     return centroid pixel coordinates
15:                     found = true
16:                     break the for loop
17:                 else if (R2 < R1) and (small_min < R2 < small_max)
18:                     do steps 13 to 16
19:                 end if
20:             else if (BIG GEAR)
21:                 if (R1 > R2) and (big_min < R1 < big_max)
22:                     do steps 13 to 16
23:                 else if (R2 > R1) and (big_min < R2 < big_max)
24:                     do steps 13 to 16
25:                 end if
26:         end for
27:         break the while loop
28:     end if
29:     else if (circles_found == 1)
30:         compute radius R1
31:         if (SMALL GEAR)
32:             if (small_min < R1 < small_max)
33:                 do steps 13 to 16
34:         else if (BIG GEAR)
35:             if (big_min < R1 < big_max)
36:                 do steps 13 to 16
37:         end if
38:     else
39:         Go To step 2
40: end while


3.4 Automatic planner for arm posture control

Once a workpiece is identified in the robot workspace, the next step is to compute its spatial location with respect to the robot and its possible stable grasping state depending on its orientation. Based on this information the arm's posture can be controlled automatically. This is performed in several steps.

1. Initially the robot is moved to a predefined position in the robot's workspace. As the robot is calibrated, its TCP position coordinates are available for the following computations.

2. The next step is to find the camera location in robot world coordinates. This is performed by calibrating the camera with respect to the robot. The camera calibration procedure used for this thesis is described in Appendix C.

3. From the above step, a final transformation matrix containing the camera rotation R (3 × 3) and translation T (3 × 1) with respect to the robot coordinate system is computed. This transformation matrix, along with the camera intrinsic parameter matrix K, is used to compute the object's location in the robot frame. This can be explained as follows. Let us consider a 2D point m(u, v) in the image which corresponds to the centroid of the recognized object. For a 2D point m in an image, there exists a collection of 3D points that are mapped onto the same point m. This collection of 3D points constitutes a ray P(λ) connecting the camera center O_c = (x, y, z)^T and m = (x, y, 1)^T, where λ is a positive scaling factor that defines the position of the 3D point on the ray. The value of λ is taken as the average back-projection error of a set of known points in 3D. This value is used to obtain the X and Y coordinates of the 3D point using

\[
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}_{M} = T + \lambda\, R^{-1} K^{-1} m \qquad (3.1)
\]

As the vision system used is monocular, it is not possible to compute the value of Z, i.e. the distance between the object and the camera, from the image alone; instead it is computed using the robot TCP coordinates and the object model.
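
For clarity, equation (3.1) can be evaluated directly once K, R, T and λ are available. The short sketch below uses the Eigen library (which the thesis does not use) and an illustrative function name; as noted above, the Z component obtained this way is replaced by the value derived from the robot TCP coordinates and the object model.

#include <Eigen/Dense>

// Numerical form of equation (3.1): back-project the image centroid m = (u, v, 1)^T
// into the robot frame. K, R and T come from the calibration of Appendix C; lambda is
// the average back-projection scale estimated from known 3D points and is an input here.
Eigen::Vector3d backProject(const Eigen::Matrix3d &K, const Eigen::Matrix3d &R,
                            const Eigen::Vector3d &T, double u, double v, double lambda)
{
    Eigen::Vector3d m(u, v, 1.0);                        // homogeneous image point
    return T + lambda * R.inverse() * K.inverse() * m;   // [X, Y, Z]^T in robot coordinates
}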

4. The next step is to compute the orientation of the object in the robot workspace. This is performed by fitting the detected object region (in the image) to an ellipse. The orientation of the object then corresponds to the orientation of the major axis of the ellipse with respect to its x-axis. Based on this orientation, a final rotation matrix, also called the direction cosine matrix, is computed using the current camera rotation. This rotation matrix is converted into quaternions, which are used to command the robot's final orientation. As the vision system used is monocular, the system has some limitations, which are explained in Section 3.6.
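
A minimal sketch of this conversion is given below, assuming the Eigen library and treating the detected angle as an in-plane rotation about the optical axis; the function name and the exact axis convention are illustrative, not the thesis implementation.

#include <Eigen/Geometry>
#include <cmath>

// Combine the current camera rotation with the in-plane rotation given by the ellipse
// major-axis angle, then convert the resulting direction cosine matrix to the quaternion
// that is sent to the robot as the target orientation.
Eigen::Quaterniond graspOrientation(const Eigen::Matrix3d &cameraRotation,
                                    double majorAxisAngleDeg)
{
    double angle = majorAxisAngleDeg * M_PI / 180.0;
    Eigen::Matrix3d inPlane =
        Eigen::AngleAxisd(angle, Eigen::Vector3d::UnitZ()).toRotationMatrix();
    Eigen::Matrix3d dcm = cameraRotation * inPlane;      // final direction cosine matrix
    return Eigen::Quaterniond(dcm).normalized();         // quaternion for the motion target
}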

5. Once the object orientation is computed, the next step is to drive the robot towards the object. The final position coordinates (X, Y, Z) of the object computed in the third step, along with the orientation quaternions computed in the previous step, are used to move the robot to a particular location in order to grasp the object. The robot movement and positioning instructions are explained in Section 3.5.1.

6. Once the robot has moved to the object grasping location, a suitable grasping type (2- or 3-fingered) is selected automatically based on the object's orientation.

3.5 Synchronizing arm and gripper motions

The motions of the robot, the gripper and the fixture are controlled independently by a set of predefined functions that are defined in their motion controllers. As the gripper and fixture motions are controlled in a similar manner, the fixture is not discussed separately in this section.

3.5.1 Arm motion control

As explained in Section 2.2, the robot's motion is controlled by the ABB IRC5 controller. The software embedded in this controller has libraries containing predefined functions and instructions developed in RAPID for arm motion control. For this project, a software program was developed in RAPID containing all the required instructions to control the robot's motion autonomously and to communicate with the cell computer. The movements of the robot are programmed as pose-to-pose movements, i.e. move from the current position to a new position, and the robot automatically calculates the path between these two positions. The basic motion characteristics (e.g. type of path) are specified by choosing appropriate positioning instructions. Both the robot and the external axes are positioned by the same instructions. Some of the positioning instructions used for this thesis are shown in Table 3.3 and an example program is shown in Figure 3.10.


Instruction   Type of movement (TCP)

MoveC         Moves along a circular path
MoveJ         Joint movement
MoveL         Moves along a linear path
MoveAbsJ      Absolute joint movement

Table 3.3: Robot positioning instructions

Figure 3.10: Sample code for robot motion control

The syntax of a basic positioning instruction is

MoveL p1, v500, z20, tool1

This instruction requires the following parameters in order to move the robot:

• Type of path: linear (MoveL), joint motion (MoveJ) or circular (MoveC).

• The destination position, p1.

• Motion velocity, v500 (velocity in mm/s).

• Zone size (accuracy) at the destination position, z20.

• Tool data currently in use, tool1.

The zone size defines how close the robot TCP has to come to the destination position. If it is defined as fine, the robot moves to the exact position. Figure 3.11 illustrates this.


Figure 3.11: Robot zone illustration diagram

3.5.2 Gripper motion control

Gripper motion refers to the finger movements. In order to control these movements, a Galil motion controller is used with the gripper prototype (see Section 2.2). The main task of this Galil controller is to control the motor speed associated with each finger. Each motor in the prototype is associated with a different axis encoder in order to record the finger positions. Generally, the motor speeds can be controlled by executing specific commands (Galil commands) on the controller using the Galil tools software. For this thesis, a low-level control program containing the required Galil commands was developed and downloaded to the controller to command the finger movements autonomously during the different stages of the process execution. The finger movements are programmed in different steps, and all these steps together constitute a grasp cycle. These steps are described below, followed by a short illustrative sketch.

Initialization is the primary step performed by the gripper fingers before starting the assembly process. During this step all three fingers are commanded to move to their home positions. The information provided by the limit switches (fixed at the home positions) is used to stop the fingers after reaching their home positions. This step is mainly required to provide the camera a clear view of the workspace.

Pregrasping step is executed after a workpiece is identified by the vision system. During this step the fingers are commanded to move to a pregrasping pose, which is computed based on the workpiece size, shape and orientation. This step is mainly required to avoid collision of the gripper fingers with other workpieces. After this step the finger motors are turned off.

Grasping and holding steps are executed when the arm has reached the workpiece and is ready to grasp. During these steps, the fingers start closing and their movement is stopped based on the tactile information received from the force sensors fixed on the finger tips. After this step the motors are kept ON in order to hold the workpiece until it is released.

Releasing is the final and simplest step, executed at the end of every grasp cycle. During this step, the finger motors are turned off in order to release the workpiece.
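
A compact sketch of one grasp cycle is given below; the helper functions that would issue the corresponding Galil commands are hypothetical placeholders, not part of the thesis code.

// One grasp cycle as a fixed sequence of the steps described above.
void homeFingers();                                       // Initialization: fingers to their limit switches
void preshapeFingers(double width, double height, double orientationDeg);  // Pregrasping pose
void closeUntilContact();                                 // Grasping: stop on tactile feedback
void motorsOn(bool on);                                   // Holding (true) / Releasing (false)

void graspCycle(double width, double height, double orientationDeg)
{
    homeFingers();                                   // clear camera view of the workspace
    preshapeFingers(width, height, orientationDeg);  // avoid collisions with other workpieces
    closeUntilContact();                             // force sensors on the finger tips stop the motion
    motorsOn(true);                                  // keep the workpiece held during the move
    // ... arm places the part in the fixture ...
    motorsOn(false);                                 // release the workpiece
}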

3.5.3 Interface to synchronize motions

As the previous subsections show, the control programs for the individual devices are developed in different programming languages tied to their motion controllers, and it is difficult to integrate them in a single program. For a flexible process execution it is therefore necessary to develop a software interface that can interact with all devices simultaneously. For this thesis, a software interface program was developed in C++ to communicate with the robot controller as well as to download control programs to the Galil controllers and to execute Galil commands. This interface is executed from the cell computer, which is a standard PC running Ubuntu. The cell computer is also responsible for:

• Controlling the overall process execution.

• Monitoring the process status.

• Communicating with various devices in the cell for activity coordination.

• Handling assembly errors.

• Executing the object detection programs.

As all the devices and the cell computer are connected to a common LAN with different IP addresses, the interface program can interact with them over Ethernet by connecting to the corresponding IP address.

Interface interaction with robot controller

Figure 3.12: Robot controller communication architecture


Figure 3.12 shows the robot communication architecture diagram. It is a client–server communication model in which the server program runs on the robot controller and the client program on the cell computer. The server program is written in RAPID and contains all the instructions needed to communicate with the client. The client initiates the connection with the server by connecting to a specific socket address¹ using TCP/IP. The communication process between client and server is described by the sequence diagram shown in Figure 3.13 and is explained in the following steps:

Figure 3.13: Sequential diagram for client – server model

• Initially, sockets are created on both the server and the client (by default, a server socket is created when the server program is executed).

• The client requests a socket connection with the server on a specific port.

• If the requested port is free to use, the server establishes a connection and is ready to communicate with the client.

¹The socket address is the combination of an IP address and a port number.


• Once the connection is established, the client program sends/receives position information to/from the server program.

• Both sockets are closed when the data transfer is complete.

The server program is mainly responsible for receiving the position information from the client and executing it on the robot. This position information is transferred from the client in the form of strings. An example line of code is

char pos[256] = "[50.9,39,-10.7,-34.5,18.5,-14.9]1";

These strings are preprocessed in the server and converted to float/double values. The movement type, i.e. linear or joint, is decided by the value after the closing bracket of the passed string (e.g. 1 in the above code): 1 requests a joint movement using MoveAbsJ, 2 a linear movement to the pose computed from the joint target, and 3 a linear movement using MoveL. Once the instruction has been executed on the robot, the server sends the current position information of the TCP to the client in the form

[X,Y,Z][Q1,Q2,Q3,Q4][cf1,cf4,cf6,cfx]...

where [X, Y, Z] are the TCP position coordinates, [Q1, Q2, Q3, Q4] are the TCP quaternions and [cf1, cf4, cf6, cfx] are the robot axis configurations. Source code of the server and the client is given in Appendix D.
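
The sketch below illustrates, in C++, how such strings can be composed on the client side and how the reply can be parsed; the helper names are illustrative and the formats follow the examples above.

#include <cstdio>
#include <string>

// Build the joint-position command string "[j1,...,j6]moveType" sent to the RAPID server.
std::string makeJointCommand(const double j[6], int moveType)   // moveType: 1, 2 or 3
{
    char buf[256];
    std::snprintf(buf, sizeof(buf), "[%.1f,%.1f,%.1f,%.1f,%.1f,%.1f]%d",
                  j[0], j[1], j[2], j[3], j[4], j[5], moveType);
    return std::string(buf);
}

// Parse the reply "[X,Y,Z][Q1,Q2,Q3,Q4][cf1,cf4,cf6,cfx]..." returned by the server.
bool parseTcpReply(const std::string &reply, double xyz[3], double quat[4])
{
    int n = std::sscanf(reply.c_str(), "[%lf,%lf,%lf][%lf,%lf,%lf,%lf]",
                        &xyz[0], &xyz[1], &xyz[2],
                        &quat[0], &quat[1], &quat[2], &quat[3]);
    return n == 7;                        // the axis configuration data is ignored in this sketch
}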

Interface interaction with Galil controller

The interface for the Galil controllers is developed using predefined functions of the Galil communication library. These functions are used to connect to a controller, to download programs to it and to read specific digital outputs. The basic functionalities provided by the communication library are:

• Connecting and Disconnecting with a controller.

• Basic Communication.

• Downloading and uploading embedded programs.

• Downloading and uploading array data.

• Executing Galil commands.

• Access to the data record in both synchronous and asynchronous modes.

It is also possible to connect and communicate with more than one Galil controller at the same time. Source code is given in Appendix E.


3.6 Limitations

The main limitations of the developed system are:

• The proposed object identification techniques can only detect workpieces with particular shapes (circles or rectangles).

• The size and shape of the pin require a small pin holder fixed in the workspace to support grasping. Because of this, the vision system cannot recognize the pin, as 80% of the pin region is covered by the holder, so a fixed assembly plan is used for assembling the pin.

• The developed system cannot identify the pin hole present on the shaft.

• Even though the background subtraction technique produces good results, it is sensitive to illumination changes and noise present in the background.


Chapter 4

Experiments with the system

This chapter describes the test scenario to demonstrate the capabilities of the developed system along with a detailed analysis of the final results.

4.1 Test environment

The test environment used to test the developed system is shown in Figure 4.1.

Figure 4.1: Test environment


4.2 Assembly representation

As stated earlier in Section 3.1, the test assembly consists of a cylindrical shaft, two circular gears and one pin. In order to generate an assembly plan for our test assembly, a computer representation of the mechanical assembly is required. Generally, an assembly of parts can be represented by the individual description of each component and their relationships in the assembly. The knowledge base contains all the related information regarding the geometric models of each component, their spatial orientations and the assembly relationships between them. Based on this knowledge, the assembly model is represented by a graph structure, as shown in Figure 4.2, in which each node represents an individual component of the assembly and the connecting links represent the relationships among them.

Figure 4.2: Graph structure of test assembly

4.2.1 Assembly sequence selection

The entire assembly of any type should follow a predefined assembly sequence, but with content specific to the respective components. In order to find a correct and reliable assembly sequence, one should evaluate all the possible assembly lines of a given product. This task can be accomplished by using precedence diagrams¹ [Y. Nof et al., 1997; Prenting and Battaglin, 1964]. These diagrams are designed based on the assembly knowledge base. Figure 4.3 shows the precedence diagram for our test assembly; it is described below.

¹Precedence diagrams are graphical representations of a set of precedence constraints or precedence relations [Lambert, 2006].


Figure 4.3: Precedence diagram of test assembly

Usually this diagram is organized into different columns: all the assembly operations that can be carried out first are placed in the first column, and so on. Each individual operation is assigned a number and is represented by a circle. The connecting arrows show the precedence relations. Now let us consider the geometrical design of our test assembly components: the shaft contains a pin hole on its top and the small gear has a step on one side. These components can be assembled only if they have a proper orientation. Based on these geometrical constraints, two assembly sequences are derived, as shown in the diagram. In the first sequence the shaft is fixed first and the small gear is mounted on the shaft such that the base of the shaft is fixed and the step side of the gear faces towards the shaft's base. In the second sequence, the small gear is fixed first with the step side facing up, and the shaft is fixed to it such that the base of the shaft remains above the gear, which is exactly the opposite of the first sequence. Next, in order to fix the big gear, the first sequence is a direct assembly sequence where the bigger gear can be mounted directly on the existing components, whereas in the second sequence the complete subassembly needs to be turned upside down and re-fixed. All precedence relations are restricted to simple AND relations. A simple AND relation for a specific component means that it can be assembled only if it has a proper orientation and if the other required operations have been performed beforehand, e.g. the small gear can be assembled only if it has a proper orientation and the shaft is assembled properly. Once the precedence diagram has been designed, the cost of each action is estimated and the correct assembly sequence is selected by applying mixed integer programming² [Lambert and M. Gupta, 2002].

²Mixed integer programming is the minimization or maximization of a linear function subject to linear constraints.


After evaluating both sequences, the first sequence was selected for our test assembly, mainly for the following reasons:

• It is a direct assembly sequence.

• Task complexity is reduced.

• Overall assembly time is reduced.

The final assembly sequence, along with the tasks, is described in Table 4.1.

Table 4.1: Test assembly task description

Task No.   Description

1          Fix shaft's base
2          Mount small gear (step towards shaft's base)
3          Mount big gear
4          Install pin
5          Remove complete assembly

4.3 Test scenario

In order to test the developed system, the following test scenario has been developed:

1. The demonstrator starts the assembly process by executing the main control program on the cell computer.

2. Subsequently, the hardware components, i.e. the robot, the gripper and the fixture, are initialized.

3. Once the hardware component initialization is successful, the software components, i.e. the vision system and the MATLAB engine, are initialized and the process status is displayed in the cell computer execution window (see Figure 4.7).

4. After all the system components are initialized, the robot moves to the three predefined positions (POS1, POS2, POS3; see Section 3.2.1) in order to capture the background images of the workspace.

5. Then the arm moves to its home position and an assembly message stating "submit the workpieces" is displayed in the execution window. During this time the process execution is paused and continues only after the demonstrator gives a command.


6. Now it is the demonstrator’s job to place the workpieces arbitrarily inthe three subregions. It is not restricted to place only one workpiece inone subregion. Once the workpieces are placed, the demonstrator entersa command at the execution window in order to continue the execution.

7. After the command the robot will start searching for the workpiecesbased on the assembly sequence explained in Section 4.2.1 in all sub-regions starting from POS1.

8. If the workpiece needed to be assembled is identified in any of the threepositions, the grasp planner will compute its orientation from the ob-tained image information and compares it with the predefined assemblysequence in order to find whether the workpiece is having a proper orien-tation for assembling or not. If this condition is satisfied, an overall planfor grasping is produced based on the workpiece spatial position and ori-entation. Otherwise an assembly message is displayed in the executionwindow stating the required action to be performed by the demonstrator.This type of check is performed only for the parts that needs specific ori-entation in the assembly (e.g. small gear cannot be assembled if its stepside is facing up).

9. Based on the generated plan from the above step an assembly cycle con-sisting of aligning, grasping, moving, fixing and releasing is executed forevery workpiece in order to assemble it.

10. If the workpiece is not identified in any of the first two positions (POS1and POS2), an assembly message stating “workpiece not found” is dis-played in the execution window and the robot will move to the next po-sition. And if the workpiece is not identified in POS3 an assembly errormessage stating “workpiece not found in the workspace” is displayed inthe execution window and the execution stops.

11. Once the assembly is finished, the arm will move over the fixture to graspthe entire assembly and to place it in a desired location.

12. As a final step, a process completion message is displayed in the executionwindow.

4.3.1 Special cases

• If two similar workpieces are found in one subregion, the system randomly chooses one of them.

• If the recognized small gear has its step side facing up, an assembly message stating "turn the gear upside down and place it again" is displayed in the execution window and the process execution is paused.


• If two small gears are found in one of the subregions, the system searches for the gear with its plain side facing up.

4.3.2 Test results

Based on the above test scenario, tests were conducted on the developed system and the results are shown below.

System initialization

Figure 4.4: (A) Initialized gripper (B) Arm at home position

Figure 4.5: (A) Arm at position 1 (B) Arm at position 2


Figure 4.6: Arm at position 3

Figure 4.7: Screen-shot of execution window

Figure 4.8: Workpieces in robot task space


Assembling shaft

Figure 4.9: Recognized shaft

Figure 4.10: Grasping shaft


Figure 4.11: (A) Robot Fixing shaft (B) Fixed shaft

Assembling small gear

Figure 4.12: (A) Searching at POS2 (B) Searching at POS3


Figure 4.13: Recognized small gear

Figure 4.14: Grasping small gear

Figure 4.15: (A) Robot fixing gear (B) Assembled gear


Assembling big gear

Figure 4.16: (A) Searching for the gear (B) Robot grasping the gear

Figure 4.17: (A) Fixing the gear (B) Assembled big gear


Assembling pin

Figure 4.18: (A) Robot grasping the pin (B) Fixing the pin

Figure 4.19: Assembled pin

Figure 4.20: Final assembled product


4.4 Analysis

• The finger design of the current gripper prototype cannot provide a stable grasp for small objects (width and height less than 3 cm) if they are placed horizontally in the workspace. To work around this problem, such objects are placed on top of small supporting blocks.

• The vision system cannot recognize objects if they are placed vertically (in a standing position) in the line of the camera view.

• As the vision system used is monocular, the system cannot build a 3D model of the object. Because of this limitation, it is not reliable in computing the exact 3D location of objects placed vertically (in a standing position) in the workspace.

• As the system cannot recognize the pin hole in the shaft, the shaft is rotated manually such that the robot can fix the pin.

• As the total workspace is divided into three subregions, the total operational time is increased (total time is not a concern of this thesis).

• Because of hardware problems with the gripper, it cannot hold an object for a long time.


Chapter 5

Conclusions

This chapter presents conclusions along with suggestions for future work.

5.1 Summary

The main goal of this thesis was to investigate and implement an eye-in-hand based automatic grasp planning methodology for an industrial assembly operation. This objective has been achieved by implementing a vision based autonomous grasp planner which computes the possible stable grasps and executes the posture of both the robot arm and the gripper. The developed system is flexible in comparison to conventional assembly cells, and it has been applied in a robot assembly task to recognize, position, manipulate and assemble different types of workpieces. The system successfully grasps and assembles various workpieces regardless of their initial placement and orientation. In this system, every product is described as an assembly tree, i.e. a precedence graph, for all parts in the product. Assembly trees are decomposed into simpler operations (e.g. "grasp", "move", "insert", "release", etc.) for the gripper, fixture and robot respectively.

The object identification techniques developed to identify the workpieces do not require any system training and are capable of recognizing workpieces at any given orientation. The main advantage of the developed vision system is its capability of online tracking of workpieces, i.e. changes in the location of the workpieces do not affect the process execution. As this is an industrial assembly operation, knowledge of the workpieces is available in advance, and this knowledge is used to generate the final execution plan. The interface developed to synchronize the motions of the various machine tools increased the overall flexibility of the process execution.

With the completion of this thesis, a reliable communication model for communicating with the robot arm as well as with the grasping system, i.e. with both the flexible gripper and the fixture, has been successfully implemented and tested under the Linux platform. This model is simple, easy to understand and provides all the necessary functionality.

5.2 Future work

The following advancements can be made to the existing system:

• The system capabilities can be increased by incorporating a stereo vision system to estimate the distance between the robot TCP and the workpiece.

• The overall flexibility can be increased by using a 3D model of the object generated by the vision system.

• A blob detection algorithm that compensates for the background subtraction technique could reduce the vision system's limitations.


Bibliography

Eye-in-hand system. http://whatis.techtarget.com/definition/0,,sid9_gci521680,00.html.

Galil motion control. http://www.galilmc.com/.

OpenCV reference manual. http://www.comp.leeds.ac.uk/vision/opencv/opencvref_cv.html, 2000.

ABB IRB 140 Product manual. ABB Automation Technologies AB, Västerås, Sweden, 2004.

Peter K. Allen, Aleksandar Timcenko, Billibon Yoshimi, and Paul Michelman. Automated tracking and grasping of a moving object with a robotic hand-eye system. IEEE Transactions on Robotics and Automation, April 1993.

Anani Ananiev. Flexible gripper FG2.00.00.00 Operation manual. Örebro, Sweden, 2009.

Arbib. Perceptual structures and distributed motor control. Handbook of Physiology, Section 2: The Nervous System, Motor Control, II:1449–1480, 1981.

Yalın Bastanlar, Alptekin Temizel, and Yasemin Yardımcı. Improved SIFT matching for image pairs with scale difference. IET Electronics Letters, 46(5), March 2010.

Jean-Yves Bouguet. Camera calibration toolbox for MATLAB. http://www.vision.caltech.edu/bouguetj/calib_doc/, July 2010.

Aihua Chen and Bingwei He. A camera calibration technique based on planar geometry feature. In Mechatronics and Machine Vision in Practice, 2007. M2VIP 2007. 14th International Conference on, pages 165–169, 4-6 2007. doi: 10.1109/MMVIP.2007.4430736.

T. H. Chiu, A. J. Koivo, and R. Lewczyk. Experiments on manipulator gross motion using self-tuning controller and visual information. Journal of Robotic Systems, 3(1):59–70, 1986.


Daniel F. Dementhon and Larry S. Davis. Model-based object pose in 25 lines of code. Int. J. Comput. Vision, 15(1-2):123–141, 1995. ISSN 0920-5691. doi: 10.1007/BF01450852.

Rosen Diankov, Takeo Kanade, and James Kuffner. Integrating grasp planning and visual feedback for reliable manipulation. In Proceedings of the 9th IEEE-RAS International Conference on Humanoid Robots, Paris, France, December 2009.

Festo Didactic GmbH & Co. KG, Festo solution center. MicroFMS – flexible manufacturing system. http://www.festo-didactic.com/int-en/.

Mikell P. Groover. Automation, production systems and computer integrated manufacturing. Prentice Hall PTR, Upper Saddle River, NJ, USA, 2000. ISBN 0130889784.

Yu Huang, Thomas S. Huang, and Heinrich Niemann. Segmentation-based object tracking using image warping and Kalman filtering. In Proceedings of the International Conference on Image Processing, volume 3, pages 601–604, 2002.

Alison E. Hunt and Arthur C. Sanderson. Vision-based predictive robotic tracking of a moving target. Technical Report CMU-RI-TR-82-15, Robotics Institute, Pittsburgh, PA, January 1982.

Seth Hutchinson, Gregory D. Hager, and Peter I. Corke. A tutorial on visual servo control. IEEE Transactions on Robotics and Automation, 12(5):651–671, October 1996.

Billibon H. Yoshimi and Peter K. Allen. Integrating real-time vision and manipulation. In Proceedings of the Hawaii International Conference on System Sciences, volume 5, pages 178–187, January 1997.

A. J. Koivo and N. Houshangi. Real-time vision feedback for servoing robotic manipulator with self-tuning controller. IEEE Transactions on Systems, Man and Cybernetics, 21(1):134–142, Jan/Feb 1991. ISSN 0018-9472. doi: 10.1109/21.101144.

A. J. D. Lambert and Surendra M. Gupta. Demand-driven disassembly optimisation for electronic consumer goods. Journal of Electronics Manufacturing, 11(2):121–135, 2002.

Alfred J. D. Lambert. Generation of assembly graphs by systematic analysis of assembly structures. European Journal of Operational Research, 168(3):932–951, February 2006.

College Leicester. Presentation on flexible manufacturing systems. http://www.slideshare.net/, 2009.


John Mitchell. Flexible manufacturing systems for flexible production. http://ezinearticles.com/.

Antonio Morales, Pedro J. Sanz, Angel P. del Pobil, and Andrew H. Fagg. Vision-based three-finger grasp synthesis constrained by hand geometry. Robotics and Autonomous Systems, 54(6):496–512, 2006. ISSN 0921-8890. doi: 10.1016/j.robot.2006.01.002.

A. M. Okamura, N. Smaby, and M. R. Cutkosky. An overview of dexterous manipulation. In Proceedings of the IEEE International Conference on Robotics and Automation, ICRA 2000, volume 1, pages 255–262, 2000.

Mario Prats, Philippe Martinet, A. P. del Pobil, and Sukhan Lee. Vision/force control in task-oriented grasping and manipulation. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, 2007.

T. O. Prenting and R. M. Battaglin. The precedence diagram: a tool for analysis in assembly line balancing. The Journal of Industrial Engineering, 15(4):208–213, 1964.

K. Rao, G. Medioni, H. Liu, and G. A. Bekey. Shape description and grasping for robot hand-eye coordination. Control Systems Magazine, IEEE, 9(2):22–29, February 1989. ISSN 0272-1708. doi: 10.1109/37.16767.

K. Rezaie, S. Nazari Shirkouhi, and S. M. Alem. Evaluating and selecting flexible manufacturing systems by integrating data envelopment analysis and analytical hierarchy process model. Asia International Conference on Modelling and Simulation, pages 460–464, 2009. doi: 10.1109/AMS.2009.68.

Paul L. Rosin. Techniques for assessing polygonal approximations of curves. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(6):659–666, June 1997.

Ken Tabb, Neil Davey, R. G. Adams, and S. J. George. Analysis of human motion using snakes and neural networks. In AMDO '00: Proceedings of the First International Workshop on Articulated Motion and Deformable Objects, pages 48–57, London, UK, 2000. Springer-Verlag. ISBN 3-540-67912-X.

Geoffrey Taylor and Lindsay Kleeman. Visual Perception and Robotic Manipulation. Springer-Verlag Berlin Heidelberg, Germany, 2006.

Johan Wiklund and Gösta H. Granlund. Image Sequence Analysis for Object Tracking. In Proc. of The 5th Scandinavian Conference on Image Analysis, pages 641–648, 1987.


Ming Xie. Robotic hand-eye coordination: New solutions with uncalibrated stereo cameras. Machine Vision and Applications, 10(10):136–143, 1997.

Ming Xie. A developmental principle for robotic hand-eye coordination skill. In Proceedings of the 2nd International Conference on Development and Learning, 2002.

Ming Xie. The fundamentals of robotics: Linking perception to action. World Scientific Publishing Co., 2003.

Shimon Y. Nof, Wilbert E. Wilhelm, and Hans-Jürgen Warnecke. Industrial assembly. Chapman and Hall, UK, 1997.

Wei Yi. Circle detection using improved dynamic generalized Hough transform (IDGHT). In Geoscience and Remote Sensing Symposium Proceedings, 1998. IGARSS '98. 1998 IEEE International, volume 2, pages 1190–1192, 6-10 1998.


Appendix A

Source code for shaft identification

close all;
background = im2double(imread(color_bg));
image = im2double(imread(color_image));
[rois mask num] = getROI(background, image);   % Filter mask

%======== ERODING THE MASK ========
se = strel('square', 4);
erodedBW = imerode(mask, se);
erodedBW = imerode(erodedBW, se);
figure; imshow(mask); axis tight on; hold on;

%======== BOUNDARY EXTRACTION =======
[B, L, N, A] = bwboundaries(erodedBW, 'holes');
tempim = cell(length(B), 1);
diff = zeros(length(B), 1);
S = cell(length(B), 1);

%======== CURVES EXTRACTION =========
if (~length(B)==0)
    for k=1:length(B)
        boundary = B{k};
        R = B{k};
        tempim{k} = zeros(size(erodedBW));
        [M N] = size(R);
        for i=1:M
            tempim{k}(R(i,1), R(i,2)) = 1;
        end
        curve = extract_curve(tempim{k}, 3);
        for no=1:length(curve)
            if (~length(curve{[no]})==0)
                [curve_pixels] = findcurve(curve{no});
                %===== FIND SHAFT SUITABLE REGION ======
                if (~length(curve_pixels)==0 && length(curve_pixels)<200)
                    S{k} = regionprops(tempim{k}, 'all');
                    diff(k) = S{k}.MajorAxisLength - S{k}.MinorAxisLength;
                    str = 1;
                    break;
                else
                    str = 0;
                end
            else
                str = 0;
            end
        end
    end
else
    str = 0;
end

%====== CROSS CHECK FOR SHAFT =========
if (str==1)
    shaft_num = find(diff==max(diff));
    shaft_centroid = S{shaft_num}.Centroid;

    shaft_orient = S{shaft_num}.Orientation;
    if ((shaft_orient<40 && shaft_orient>0) || (shaft_orient>-40 && shaft_orient<0))
        orient = 0;   %HORIZONTAL
    else
        orient = 1;   %VERTICAL
    end
    u = round(shaft_centroid(2));
    v = round(shaft_centroid(1));
    shaft_pix = [shaft_centroid(1), shaft_centroid(2)];
    check = 1;
    plot(v, u, 'r*');
else
    check = 0;
end


Appendix B

Source code for gear identification

1 # inc lude "cv . h"2 # inc lude "highgui . h"34 # de f ine CAMERA 056 / / ===============================================================//7 / /FUNCTION TO DETECT GEARS8 / / INPUT : Type of the ob j e c t (SMALL or BIG )9 / /OUTPUT: Centroid p i x e l c oor d ina t e s and Or ienta t ion

10 / / ===============================================================//1112 void de t e c t_ge a r ( cons t char obj_ type [256] , i n t *u , i n t *v , double * o r i e n t )13 {14 / / ====== Image d e c l a r a t i o n ==============15 Ipl Image* img ;16 Ipl Image* grayImg ;17 Ipl Image* MedianImg ;18 Ipl Image* cannyImg ;19 Ipl Image* c o lo r _ds t ;20 Ip l Image* r e s u l t ;2122 i n t px [ 0 ] , py [ 0 ] , pz [ 0 ] ;23 i n t i t e r =0 , i ,U1, U2,V1 ,V2,R1=0 ,R2=0 , disk_u , d i sk_v ;24 f l o a t * p ;25 double contarea ;2627 CvCapture* cap ;28 CvMemStorage* c s to r age = cvCreateMemStorage (0 ) ;2930 / / ======= Capture frame from camera =========31 cap = cvCaptureFromCAM (CAMERA) ;32 i f ( ! cvGrabFrame ( cap ) )33 {34 p r i n t f ( "Could not grab a frame \ n \7 " ) ;35 e x i t (0 ) ;36 }3738 / / ======== Named windows to d i s p l ay r e s u l t s ========39 cvNamedWindow( "LIVE" , CV_WINDOW_AUTOSIZE ) ;40 cvNamedWindow( "LIVE − Edge Fe a tu r e s" , CV_WINDOW_AUTOSIZE ) ;41

85

Page 86: Vision Based Grasp Planning for Robot Assembly

86 APPENDIX B. SOURCE CODE FOR GEAR IDENTIFICATION

42 / / work loop43 while ( i t e r <50)44 {45 / / ========= r e t r i e v e captured frame =======46 img = cvQueryFrame ( cap ) ;47 i f ( ! img )48 {49 p r i n t f ( "bad video \ n" ) ;50 e x i t (0 ) ;51 }5253 / / ======== Image c r e a t i on =========54 r e s u l t =cvCreateImage ( cvGetS ize ( img ) , 8 , 1 ) ;55 ds t = cvCreateImage ( cvGetS ize ( img ) , 8 , 1 ) ;56 c o lo r _ds t = cvCreateImage ( cvGetS ize ( img ) , 8 , 3 ) ;57 h i s t im = cvCreateImage ( cvGetS ize ( img ) , IPL_DEPTH_8U , 1 ) ;5859 / / Convers ion to gray s c a l e60 grayImg = cvCreateImage ( c vS i z e ( img−>width , img−>he i gh t ) , IPL_DEPTH_8U , 1

) ;61 cvCvtColor ( img , grayImg , CV_BGR2GRAY ) ;6263 / / Histogram e q u a l i z a t i o n64 c vE qua l i z e Hi s t ( grayImg , h i s t im ) ;6566 / / Applying Median F i l t e r67 MedianImg = cvCreateImage ( cvGetS ize ( img ) , IPL_DEPTH_8U , 1) ;68 cvSmooth ( h i s t im , MedianImg ,CV_MEDIAN,11 ,11) ;6970 / / Computing Edge f e a t u r e s71 cannyImg = cvCreateImage ( cvGetS ize ( img ) , IPL_DEPTH_8U , 1) ;72 cvCanny ( MedianImg , cannyImg , 80 , 120 , 3) ;73 cvCvtColor ( cannyImg , color_ds t , CV_GRAY2BGR ) ;7475 / / ========== Search fo r hough c i r c l e s ============76 CvSeq* c i r c l e s = cvHoughCircles ( cannyImg , c s torage , CV_HOUGH_GRADIENT,

cannyImg−>he i gh t /50 , 1 , 35 ) ;7778 i f ( c i r c l e s −>to ta l >=2)79 {80 / / f i l t e r to l i m i t only <=2 c i r c l e s to draw81 fo r ( i = 0; c i r c l e s −>to ta l >=2? i <2: i < c i r c l e s −>t o t a l ; i ++ )82 {83 i f ( i ==0)84 {85 p = ( f l o a t *) cvGetSeqElem ( c i r c l e s , i ) ;86 c vC i r c l e ( img , cvPoint ( cvRound ( p [ 0 ] ) , cvRound ( p [ 1 ] ) ) , 3 ,

CV_RGB(255 ,0 ,0) , −1, 8 , 0 ) ;8788 c vC i r c l e ( img , cvPoint ( cvRound ( p [ 0 ] ) , cvRound ( p [ 1 ] ) ) , cvRound

( p [ 2 ] ) , CV_RGB(255 ,0 ,0) , 1 , 8 , 0 ) ;8990 c vC i r c l e ( ro i , cvPoint ( cvRound ( p [ 0 ] ) , cvRound ( p [ 1 ] ) ) , cvRound

( p [ 2 ] ) , CV_RGB(255 ,255 ,255) , −1, 8 , 0 ) ;9192 U1 = cvRound ( p [ 1 ] ) ;93 V1 = cvRound ( p [ 0 ] ) ;94 R1 = cvRound ( p [ 2 ] ) ;95 }96 e l s e97 {98 p = ( f l o a t *) cvGetSeqElem ( c i r c l e s , i ) ;99 c vC i r c l e ( img , cvPoint ( cvRound ( p [ 0 ] ) , cvRound ( p [ 1 ] ) ) , 3 ,

CV_RGB(255 ,0 ,0) , −1, 8 , 0 ) ;100

Page 87: Vision Based Grasp Planning for Robot Assembly

87

101 c vC i r c l e ( img , cvPoint ( cvRound ( p [ 0 ] ) , cvRound ( p [ 1 ] ) ) , cvRound( p [ 2 ] ) , CV_RGB(255 ,0 ,0) , 1 , 8 , 0 ) ;

102103 c vC i r c l e ( ro i , cvPoint ( cvRound ( p [ 0 ] ) , cvRound ( p [ 1 ] ) ) , cvRound

( p [ 2 ] ) , CV_RGB(255 ,255 ,255) , −1, 8 , 0 ) ;104 U2 = cvRound ( p [ 1 ] ) ;105 V2 = cvRound ( p [ 0 ] ) ;106 R2 = cvRound ( p [ 2 ] ) ;107 }108109 }110 / / ======== Checking fo r smal l gear ============111 i f ( strcmp ( obj_type , "SMALL" ) ==0)112 {113 i f ( fabs (R2−R1) >15 && fabs (R2−R1) < 30)114 {115 i f ( ( R1<R2)&&(R1>50&&R1<70) )116 {117 disk_u =U1 ;118 disk_v=V1 ;119 *check = 1;120 break ;121 }122 e l s e i f ( ( R2<R1)&&(R2>50&&R2<70) )123 {124 disk_u = U2 ;125 disk_v = V2 ;126 *check = 1;127 break ;128 }129 e l s e130 {131 *check =0;132 }133 }134 e l s e i f (R1>50&&R1<70)135 {136 disk_u =U1 ;137 disk_v=V1 ;138 *check = 1;139 break ;140 }141 e l s e i f (R2>50&&R2<70)142 {143 disk_u =U2 ;144 disk_v=V2 ;145 *check = 1;146 break ;147 }148 e l s e149 *check = 0;150 }151 / / ======== Checking fo r big gear ============152 i f ( strcmp ( obj_type , "BIG" ) ==0)153 {154 i f ( fabs (R2−R1) >15 && fabs (R2−R1) < 30)155 {156 i f ( ( R1>R2)&&(R1>70&&R1<90) )157 {158 disk_u =U1 ;159 disk_v=V1 ;160 *check = 1;161 break ;162 }163 e l s e i f ( ( R2>R1)&&(R2>70&&R2<90) )

Page 88: Vision Based Grasp Planning for Robot Assembly

88 APPENDIX B. SOURCE CODE FOR GEAR IDENTIFICATION

164 {165 disk_u = U2 ;166 disk_v = V2 ;167 *check = 1;168 break ;169 }170 e l s e171 {172 *check =0;173 }174 }175 e l s e i f (R1>70&&R1<90)176 {177 disk_u =U1 ;178 disk_v=V1 ;179 *check = 1;180 break ;181 }182 e l s e i f (R2>70&&R2<90)183 {184 disk_u =U2 ;185 disk_v=V2 ;186 *check = 1;187 break ;188 }189 }190 }191192 e l s e i f ( c i r c l e s −>t o t a l ==1)193 {194 fo r ( i = 0; c i r c l e s −>to ta l >=1? i <1: i < c i r c l e s −>t o t a l ; i ++ )195 {196 i f ( i ==0)197 {198 p = ( f l o a t *) cvGetSeqElem ( c i r c l e s , i ) ;199 c vC i r c l e ( img , cvPoint ( cvRound ( p [ 0 ] ) , cvRound ( p [ 1 ] ) ) , 3 ,

CV_RGB(255 ,0 ,0) , −1, 8 , 0 ) ;200201 c vC i r c l e ( img , cvPoint ( cvRound ( p [ 0 ] ) , cvRound ( p [ 1 ] ) ) , cvRound

( p [ 2 ] ) , CV_RGB(255 ,0 ,0) , 1 , 8 , 0 ) ;202203 c vC i r c l e ( ro i , cvPoint ( cvRound ( p [ 0 ] ) , cvRound ( p [ 1 ] ) ) , cvRound

( p [ 2 ] ) , CV_RGB(255 ,255 ,255) , −1, 8 , 0 ) ;204 U1 = cvRound ( p [ 1 ] ) ;205 V1 = cvRound ( p [ 0 ] ) ;206 R1 = cvRound ( p [ 2 ] ) ;207 }208 cvSaveImage ( edgeim , r o i ) ;209 }210 / / ======== Checking fo r smal l gear ============211 i f ( strcmp ( obj_type , "SMALL" ) ==0)212 {213 i f (R1>50&&R1<70)214 {215 disk_u =U1 ;216 disk_v=V1 ;217 *check = 1;218 break ;219 }220 e l s e221 *check = 0;222 }223 / / ======== Checking fo r big gear ============224 e l s e i f ( strcmp ( obj_type , "BIG" ) ==0)225 {

Page 89: Vision Based Grasp Planning for Robot Assembly

89

226 i f (R1>70&&R1<90)227 {228 disk_u =U1 ;229 disk_v=V1 ;230 *check = 1;231 break ;232 }233 e l s e234 *check = 0;235 }236 }237238 e l s e239 *check = 0;240241 / / ====== Display ing r e s u l t s ============242 cvShowImage ( "LIVE" , img ) ;243 cvShowImage ( "LIVE − Edge Fe a tu r e s" , cannyImg ) ;244 cvWaitKey (33) ;245 i t e r ++;246 }247248 *u = disk_u ;249 *v = disk_v ;250 * o r i e n t = 0;251 cvReleaseImage (&img ) ; / / Re lease image252 cvRe leaseCapture(&cap ) ; / / Re lease capture253 }


Appendix C

Camera calibration procedure

The camera calibration procedure described here is used to develop a mathematical model of the transformation between observed image points and real world points. For this thesis, the points in the world are expressed in terms of robot base coordinates. In order to accomplish this task we need to find the camera intrinsic parameter matrix along with the camera's external rotation and translation in the 3D world.

C.1 Camera intrinsic parameters

The intrinsic parameters of any camera are used to describe the internal geometry and optical characteristics of the camera [Xie, 2003]. The intrinsic parameter matrix is computed by projecting a 3D point in the real world onto the image plane.

Let us consider a 3D point M(X, Y, Z) in the robot world coordinate system and assume that the frame w is assigned to the world coordinate system and the frame c is assigned to the camera coordinate system, as shown in Figure C.1.

Figure C.1: Pinhole camera geometric projection

If cQw describes the transformation from the world frame w to the camera frame c, then the coordinates of M with respect to c are given by


\[
\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}
= {}^{c}Q_{w} \cdot
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
\qquad (C.1)
\]

As described in Chapter 2, the relationship between the image coordinates (u, v) and the corresponding coordinates (X_c, Y_c, Z_c) in the camera frame is described by a projective mapping matrix K as follows:

\[
\lambda \cdot
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= K \cdot
\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}
\qquad (C.2)
\]

where

\[
K =
\begin{bmatrix}
\frac{f}{D_x} & 0 & u_0 \\
0 & \frac{f}{D_y} & v_0 \\
0 & 0 & 1
\end{bmatrix}
\]

In the above equation, λ is an unknown scaling factor, f is the focal length of the camera, (D_x, D_y) are the image digitizer sampling steps and (u_0, v_0) are the coordinates of the optical center in the image coordinate system uv. The matrix K is the camera intrinsic parameter matrix. In practice, K is obtained by capturing several images of a chessboard pattern with different orientations and extracting the grid corners in every image using the camera calibration toolbox in MATLAB [Bouguet, 2010].

C.2 Camera extrinsic parameters

By combining C.1 and C.2 we get

\[
\lambda \cdot
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= K \cdot {}^{c}Q_{w} \cdot
\begin{bmatrix} X_r \\ Y_r \\ Z_r \\ 1 \end{bmatrix}
\qquad (C.3)
\]

Equation C.3 describes the forward projective mapping from 3D to 2D, and the elements of the matrix cQw are called the camera extrinsic parameters. These parameters comprise the camera rotation R (3 × 3) and translation T (3 × 1) in the real world.

For this thesis the camera rotation and translation are computed as explained below. Our task is to fix the camera at the midpoint of the gripper, which is fixed to the robot's TCP. In order to perform this, the camera is first placed at the robot's base such that the optical axis points upwards. Next it is flipped (rotated 180° around the y-axis) and translated to the gripper. The following equations show the camera rotation and translation when the arm is at POS1. The rotation matrix R is a rotation about the y-axis by an angle of 180°, that is,

\[
R =
\begin{bmatrix}
\cos(180°) & 0 & \sin(180°) \\
0 & 1 & 0 \\
-\sin(180°) & 0 & \cos(180°)
\end{bmatrix}
=
\begin{bmatrix}
-1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & -1
\end{bmatrix}
\]

The respective translation is computed by subtracting the measured offsets, i.e. the distances from the TCP to the gripper center point (GCP) and from the GCP to the camera center, from the robot TCP coordinates.

The TCP coordinates at POS1 are

\[
\begin{bmatrix} 450 \\ 0 \\ 647 \end{bmatrix}
\]

The measured offset distances are:
From TCP to camera center (z-axis) = 170 mm
From TCP to GCP (z-axis) = 131 mm
From GCP to camera center (x-axis) = 29 mm

Therefore, the final translation is

\[
T =
\begin{bmatrix} 450 - 29 \\ 0 \\ 647 - 170 \end{bmatrix}
=
\begin{bmatrix} 421 \\ 0 \\ 477 \end{bmatrix}
\]

Whenever the arm moves to a new position, the rotation and translation matrices are updated based on the robot's rotation and translation.
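
As a consistency check, the sketch below shows how the extrinsic parameters derived above can be used to map a robot-frame point back to pixel coordinates, i.e. the forward mapping that equation (3.1) inverts, with T taken as the camera position and R as its rotation in the robot frame. It assumes the Eigen library and illustrative names; it is not part of the thesis code.

#include <Eigen/Dense>

// Project a robot-frame point Pw to pixel coordinates using the calibrated K, R and T.
// The camera-frame point is Pc = R * (Pw - T), and lambda * [u, v, 1]^T = K * Pc.
Eigen::Vector2d projectToImage(const Eigen::Matrix3d &K, const Eigen::Matrix3d &R,
                               const Eigen::Vector3d &T, const Eigen::Vector3d &Pw)
{
    Eigen::Vector3d m = K * (R * (Pw - T));                 // homogeneous pixel coordinates
    return Eigen::Vector2d(m.x() / m.z(), m.y() / m.z());   // (u, v)
}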


Appendix D

Source code for client and server

Listing D.1: Client code
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/sendfile.h>
#include <netinet/in.h>
#include <netdb.h>
#include <errno.h>
#include <fcntl.h>
#include <cstdio>    // added: printf, fprintf, perror
#include <cstdlib>   // added: exit
#include <cstring>   // added: strcpy, strlen
#include <strings.h> // added: bzero, bcopy
#include <unistd.h>  // added: read, write, close

using namespace std; // cout ostringstream vector string

// =========== FUNCTION DECLARATIONS =============================//
void error(const char *msg);
void sock_cont(char *pos);

// =========== DECLARING ARM VARIABLES ===========================//
int sockfd, portno = 1300, n;
int teller;
char buffer[256];
char pos0[256] = "[0,0,0,0,0,0]1";                        // Initial position for ARM
char pos1[256] = "[50.9,39,-10.7,-34.5,18.5,-14.9]2";
char pos2[256] = "[8.7,48,-30.7,27.1,48.5,-8.0]1";
char pos3[256] = "[-27.1,25.8,-11.9,-52.8,48.6,-22.4]2";

struct sockaddr_in serv_addr;
struct hostent *server;

// ====== MAIN =========
int main()
{
    sock_cont(pos0);
    sock_cont(pos3);
    sock_cont(pos1);
    sock_cont(pos2);
    return 0;
}

// ===============================================================//
// FUNCTION TO PRINT ERROR MESSAGE
// ===============================================================//
void error(const char *msg)
{
    perror(msg);
    exit(0);
}

// ===============================================================//
// FUNCTION TO CONNECT TO SOCKET AND WRITE POS
// INPUT: STRING (POS)
// ===============================================================//
void sock_cont(char *pos)
{
    sockfd = socket(AF_INET, SOCK_STREAM, 0);
    if (sockfd < 0)
        error("ERROR opening socket");
    server = gethostbyname("192.168.200.91");

    if (server == NULL)
    {
        fprintf(stderr, "ERROR, no such host\n");
        exit(0);
    }

    bzero((char *) &serv_addr, sizeof(serv_addr));
    serv_addr.sin_family = AF_INET;
    bcopy((char *) server->h_addr, (char *) &serv_addr.sin_addr.s_addr, server->h_length);
    serv_addr.sin_port = htons(portno);

    if (connect(sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0)
        error("ERROR connecting");
    bzero(buffer, 256);

    // WRITING TO SOCKET
    strcpy(buffer, pos);
    n = write(sockfd, buffer, strlen(buffer));
    if (n < 0)
        error("ERROR writing to socket");

    // READING FROM SOCKET
    n = read(sockfd, buffer, 255);
    if (n < 0)
        error("ERROR reading from socket");
    printf("%s\n", buffer);

    close(sockfd); // Close socket
}
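Each position string sent by this client ends with a single digit that the RAPID server in Listing D.2 interprets as the movement type (1 - MoveJ, 2 - MoveL computed from joint values, 3 - MoveL to a robtarget). As a purely illustrative sketch (this helper is not part of the original client), such a joint-position string could also be built programmatically:

#include <cstdio>
#include <cstddef>

// Illustrative helper (not in the original client): formats six joint angles
// and a movement-type digit into the string layout expected by the server,
// e.g. "[50.9,39,-10.7,-34.5,18.5,-14.9]2".
void format_joint_pos(char *out, std::size_t len, const double j[6], int move_type)
{
    std::snprintf(out, len, "[%g,%g,%g,%g,%g,%g]%d",
                  j[0], j[1], j[2], j[3], j[4], j[5], move_type);
}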

Listing D.2: Rapid server code

%%%
  VERSION: 1
  LANGUAGE: ENGLISH
%%%

MODULE main_module

  VAR iodev logfile;
  VAR errstr ertitle := "UNDEFINED MOVEMENT TYPE";
  VAR errstr erstr1 := "The last element of the position string should specify the movement type";
  VAR errstr erstr2 := "1 - MoveJ";
  VAR errstr erstr3 := "2 - Compute MoveL from MoveJ";
  VAR errstr erstr4 := "3 - MoveL";
  VAR string receive_trans;
  VAR string receive_rot;
  VAR string receive_string;
  VAR string send_trans;
  VAR string send_rot;
  VAR string send_robconf;
  VAR robtarget Current_position;
  VAR robtarget Pos1;
  VAR bool ok1;
  VAR bool ok2;
  VAR bool ok3;
  VAR bool ok4;
  VAR num C1;
  VAR num found1;
  VAR num found2;
  VAR num len;
  VAR num C2;
  VAR num move_type;
  VAR num message_count;
  VAR socketdev server_socket;
  VAR socketdev client_socket;
  VAR string client_ip;
  VAR string client_message;
  CONST jointtarget jointpos0 := [[0,0,0,0,0,0],[0,9E9,9E9,9E9,9E9,9E9]];
  VAR jointtarget jointpos1;

  PROC main()

    jointpos1.extax := [0,9E9,9E9,9E9,9E9,9E9];
    ConfJ \Off;

    ! open a file for writing logs
    ! Open "HOME:" \File:="simlog.DOC", logfile \Write;

    message_count := 1;
    SocketCreate server_socket;
    ! SocketBind server_socket, "130.243.124.172", 1300;
    SocketBind server_socket, "192.168.200.91", 1300;
    SocketListen server_socket;

    WHILE TRUE DO
      SocketAccept server_socket, client_socket \ClientAddress:=client_ip;

      message_count := message_count + 1;

      SocketReceive client_socket \Str:=receive_string;
      len := StrLen(receive_string);
      ok1 := StrToVal(StrPart(receive_string, len, 1), move_type);

      IF move_type = 1 THEN
        ! MOVEJ
        ok2 := StrToVal(StrPart(receive_string, 1, len-1), jointpos1.robax);

        MoveAbsJ jointpos1, v1000, fine, tool0;
        ! MoveAbsJ jointpos0, v1000, fine, tool0;

      ELSEIF move_type = 2 THEN
        SingArea \Wrist;
        ! Convert from JointTarget to Robtarget
        ok2 := StrToVal(StrPart(receive_string, 1, len-1), jointpos1.robax);
        Pos1 := CalcRobT(jointpos1, tool0, \Wobj:=wobj0);
        MoveL Pos1, v500, fine, tool0;

      ELSEIF move_type = 3 THEN
        ConfL \On;
        ! SingArea \Wrist;
        Pos1.extax := [9E+09,9E+09,9E+09,9E+09,9E+09,9E+09];
        len := StrLen(receive_string);
        found1 := StrMatch(receive_string, 1, "]");
        found2 := StrMatch(receive_string, found1+1, "]");
        ok2 := StrToVal(StrPart(receive_string, 1, found1), Pos1.trans);
        C1 := found2 - found1;
        ok3 := StrToVal(StrPart(receive_string, found1+1, C1), Pos1.rot);
        C2 := len - found2 - 1;
        ok4 := StrToVal(StrPart(receive_string, found2+1, C2), Pos1.robconf);
        MoveL Pos1, v500, fine, tool0;

      ELSE
        ErrLog 4800, ertitle, erstr1, erstr2, erstr3, erstr4;
      ENDIF
      ! Get the current position
      Current_position := CRobT(\Tool:=tool0 \WObj:=wobj0);

      ! Convert it to strings
      send_trans := ValToStr(Current_position.trans);
      send_rot := ValToStr(Current_position.rot);
      send_robconf := ValToStr(Current_position.robconf);

      ! Send the strings through the socket
      SocketSend client_socket \Str:=send_trans;
      SocketSend client_socket \Str:=send_rot;
      SocketSend client_socket \Str:=send_robconf;

      ! send a string with the outcome of the simulation
      SocketClose client_socket;
    ENDWHILE
  ERROR
    IF ERRNO = ERR_SOCK_TIMEOUT THEN
      RETRY;
    ELSEIF ERRNO = ERR_SOCK_CLOSED THEN
      RETURN;
    ELSE
      ! No error recovery handling
    ENDIF
  UNDO
    SocketClose server_socket;
    SocketClose client_socket;
    Close logfile;
  ENDPROC

ENDMODULE
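For movement type 3 the server above expects a Cartesian target rather than joint values: it locates the first two closing brackets in the message and parses, in order, a translation, an orientation quaternion and a robot configuration, with the movement-type digit at the end, i.e. a string of the form "[x,y,z][q1,q2,q3,q4][cf1,cf4,cf6,cfx]3". The formatter below is only an illustrative sketch of a client-side counterpart, not code from the thesis:

#include <cstdio>
#include <cstddef>

// Illustrative sketch: builds a movement-type-3 message in the layout parsed
// by the RAPID server, "[x,y,z][q1,q2,q3,q4][cf1,cf4,cf6,cfx]3".
void format_robtarget(char *out, std::size_t len, const double trans[3],
                      const double rot[4], const int conf[4])
{
    std::snprintf(out, len, "[%g,%g,%g][%g,%g,%g,%g][%d,%d,%d,%d]3",
                  trans[0], trans[1], trans[2],
                  rot[0], rot[1], rot[2], rot[3],
                  conf[0], conf[1], conf[2], conf[3]);
}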


Appendix E

Source code for Galil controller

Listing E.1: Galil interface code
#include "Galil.h"   // vector string Galil
#include <iostream>  // added: cout, endl
#include <vector>    // added: vector

#define GRIPPER        "192.168.200.98"
#define FIXTURE        "192.168.200.99"
#define GRIPPERPROGRAM "/.../.../"
#define FIXTUREPROGRAM "/.../.../"

using namespace std; // cout ostringstream vector string

int main()
{
    try
    {
        // ===== DECLARING IP for both gripper and fixture =====//
        Galil g(GRIPPER);   // Gripper connection
        Galil g1(FIXTURE);  // Fixture connection

        int init_output = 0, init_output1 = 0;
        vector<char> r;
        vector<char> r1;

        // ==== CONNECT TO THE GALIL CONTROLLER ======//
        cout << g.connection() << endl;      // GRIPPER
        cout << g.libraryVersion() << endl;
        cout << g1.connection() << endl;     // FIXTURE
        cout << g1.libraryVersion() << endl;

        g.command("CFA");
        g1.command("CFA");

        // ===== PROGRAM DOWNLOAD TO THE GALIL CONTROLLER =====//
        g.programDownloadFile(GRIPPERPROGRAM);   // Downloads program to controller from file
        g1.programDownloadFile(FIXTUREPROGRAM);

        // ===== MOTION CONTROLLERS INITIALISATION ==========//
        g.command("XQ #INIT");
        g1.command("XQ #INIT");
        while (1)
        {
            r = g.record("QR");
            init_output = g.sourceValue(r, "@OUT[01]");  // Reads digital output
            if (init_output == 1)
            {
                cout << "::::::::: INITIALISATION COMPLETED ::::::::" << endl;
                break;
            }
        }
    }
    catch (string e)  // Catches if error
    {
        cout << e;
    }
} // END MAIN
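By analogy with the initialisation handshake above, a grasp routine in the gripper program can presumably be started and monitored in the same way. The sketch below is an assumption about the intended usage (it is not part of the thesis code); it uses only calls that already appear in Listing E.1 and the label and output bit defined in Listing E.2 below:

#include "Galil.h"
#include <iostream>
#include <vector>

using namespace std;

// Hypothetical usage sketch: execute the two-finger grasp routine on the
// gripper controller and wait for the "grasp done" flag (digital output 5,
// set by "SB 5" in Listing E.2).
void two_finger_grasp(Galil &gripper)
{
    gripper.command("XQ #START2");                        // start the grasp program
    while (1)
    {
        vector<char> rec = gripper.record("QR");          // poll the controller data record
        int done = gripper.sourceValue(rec, "@OUT[05]");  // read digital output 5
        if (done == 1)
        {
            cout << "GRASP DONE" << endl;
            break;
        }
    }
}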

Listing E.2: Gripper control code
#INIT                ' Initialization
AL
_ALA=0
_ALB=0
_ALC=0
_ALD=0
i=0
#CB
i=i+1
CB i
JP#CB,i<9
SHABCD
spd=8000
JG spd,-spd,-spd,-spd
IF (_LFA=1);BG A;ENDIF
IF (_LRB=1);BG B;ENDIF
IF (_LRC=1);BG C;ENDIF
IF (_LRD=1);BG D;ENDIF
'---------------------
IF (_LFA=0);ST A;ENDIF
IF (_LRB=0);ST B;ENDIF
IF (_LRC=0);ST C;ENDIF
IF (_LRD=0);ST D;ENDIF
AM
DP 0,0,0,0
MG "Initialisation Done"
SB 1
WT 1000
JP#MOEN
'
'
#START2              ' For two finger grasp
MOC
a=@IN[1]
b=@IN[2]
SHABD
spd=8000
JG -spd,spd
BG AB
#3
a=@IN[01]
b=@IN[02]
IF (a=0)
STA
MG "A-STOPPED"
ENDIF
IF (b=0)
STB
MG "B-STOPPED"
ENDIF
JP #3,(a=1)|(b=1)
AM
MG "GRASP DONE"
SB 5
JP#EN
JP #3,(a=1)|(b=1)
AM
MG "GRASP DONE"
SB 6
JP#EN
#START3              ' For three finger grasp
a=@IN[1]
b=@IN[2]
SHABCD
spd=8000
JG -spd,spd,spd,-spd
BG ABCD
#4
a=@IN[01]
b=@IN[02]
c=@IN[03]
IF (a=0)
STA
MG "A-STOPPED"
ENDIF
IF (b=0)
STB
MG "B-STOPPED"
ENDIF
IF (c=0)
STC
MG "C-STOPPED"
ENDIF
MOD
JP #4,(a=1)|(b=1)|(c=1)
AM
MG "GRASP DONE"
SB 5
JP#EN
JP #4,(a=1)|(b=1)|(c=1)
AM
MG "GRASP DONE"
SB 6
JP#EN
'
'
#PRGRDIS             ' For small disk pregrasp
CB 4
SH AB
PA -10000,13000
BG AB
AM
MG "DISK PREGRASP READY"
SB 4
JP#EN
'
'
#PRGRDI2             ' For big disk pregrasp
CB 4
SH AB
PA -16000,13000
BG AB
AM
MG "DISK PREGRASP READY"
SB 4
JP#EN
'
'
#PRGRDI3             ' For shaft pregrasp
CB 4
SH AB
PA -10000,10000
BG AB
AM
MG "SHAFT PREGRASP READY"
SB 4
JP#EN
#MOEN
MO
MG "Motors OFF"
'
'
#EN
MG "END"
EN
'

Listing E.3: Fixture control code
#INIT                ' Label for the Initialisation of the Fixture
i=0
#CB                  ' Making all used Digital Outputs "0"
i=i+1
CB i
JP#CB,i<5
SH ACD
spd=70000
JG spd,,-spd,-spd
BG ACD
AM
DP 0,0,0,0
MG "INITIALISATION DONE"
JG -spd,,spd,spd
PR -500000,,80000,80000
BG ACD
AM
DP -500000,,80000,80000
MG "PREFIXING DONE!"
SB 3
JP#EN
#FIXING              ' Label for the fixing of the shaft
SHA
JGA=70000
BGA
JP#FIXING,_TEA<1000
STA
AM
MG "FIXING DONE!"
SB 1
JP#EN
#RELEASE             ' Label for releasing the assembly
SH ACD
spd=90000
JG -spd,,spd,spd
PR -500000,,80000,80000
BG ACD
AM
MG "ASSEMBLY RELEASED!"
SB 2
JP#EN
#EN                  ' Used for turning OFF the motors
MO
EN

