
FPGA-BASED IMAGE PROCESSING SYSTEM FOR QUALITY CONTROL AND PALLETIZATION APPLICATIONS

Abubakar M. Ashir1, Atef A. Ata2 and Mohammad Shukri Salman1 1Department of Electrical and Electronics Engineering 2Departments of Mechatronic Engineering

Mevlana University, Konya-Turkey. email: [email protected], {atef, mssalman}@mevlana.edu.tr

Abstract--This paper proposes a new approach for solving well-known industrial automation problems such as Quality Control and Palletization (QCP). An intelligent four-bar mechanism has been designed as a mechanical palletizer. It has been modelled as a singular quadrilateral mechanism whose intelligence is sourced from an image processing algorithm targeted at a Field Programmable Gate Array (FPGA) real-time processing system. In the proposed approach, the algorithms are implemented using the MATLAB and Simulink packages. The critical system blocks of the Simulink model are the serial pixel-data generator and the thresholder, whose function is to compute the threshold value of all pixels for binarization. All the Simulink system blocks have been designed based on the proposed FPGA architecture and mapped onto the Configurable Logic Blocks of the FPGA. The hardware description language (HDL) code generated from the Simulink model shows no behavioral deviation from the original MATLAB version of the algorithm. The recognition rates are high and the whole system is very fast at a 50 MHz clock frequency.

Keywords—FPGA; Mechanical Palletizer; Edge Detection; BLOB; Robotics and Automation

I. INTRODUCTION

Following compelling advancements in sensors and digital technology, industries have been shifting their focus towards full automation. Processes like Quality Control and Palletization (QCP) are among the most recurrent routines in the manufacturing industries. As manufacturing processes transition towards full automation, numerous approaches have been developed and adopted, using different techniques, for optimal performance and accuracy. Machine vision has been one of the hot research topics receiving a high level of attention in industrial applications. The vision system, in turn, involves extensive calculation and processing on images of a process captured by high-resolution industrial Charge-Coupled Device (CCD) image sensors [1]-[3]. Low processing time is a great necessity for accomplishing a given task. The need becomes even greater when the application runs in real-time, where the stream of image frames must be processed by the hardware and the result passed to the next process. Traditionally, General Purpose Processor (GPP)-based processing has been the norm for years.

However, with high demands for low processing time on colored images, GPP-based processing suffers many challenges [1, 2]. Field Programmable Gate Array (FPGA)-based image processing has shown great processing speed and flexibility, especially with the recent developments in FPGA hardware, and has become a favorite for real-time processing [1]-[4]. Quality-control processes require the extraction of vital features from images to separate good products from defective ones. Meanwhile, the palletization process is only concerned with the geometrical features of the product, such as its global coordinates and orientation. The palletizers available on the market are mechanical and robotic palletizers. For both types, the key performance criteria are the palletizing throughput, operational flexibility, purchase cost and level of training required to operate and maintain the palletizer [5]. Mechanical palletizers offer high-speed palletizing and low initial cost. However, because these machines are designed for a specific range of products and a limited number of patterns, they offer low flexibility. Manual system reconfiguration is required during product changeover, resulting in long changeover downtime and significant productivity losses, especially when production batch sizes become smaller. Frequent set-up errors add to the total downtime and, in extreme situations, result in costly product and machine damage [5, 6]. Typical robotic palletizers offer high flexibility in palletizing and shorter product-changeover downtime. However, they can only achieve low- to medium-speed palletizing and have a higher initial cost than the mechanical types. During product changeover, there are still some mechanisms that need manual adjustment; these frequently result in set-up errors that add to the total downtime and can cause significant losses due to potential product and machine damage [6].
In this design, a simple mechanical device that provides orientation correction is proposed. It can be used in connection with a robot palletizer for higher performance and flexibility. With the high-speed processing capability of the FPGA, it is convenient and efficient to solve QCP processes with this device [7]: the two applications can be implemented together on the FPGA with true parallel processing. The objective of this paper is to present a new approach to handling QCP applications with improved speed, accuracy and cost-effectiveness. To this end, a mechanical palletizer whose intelligence is sourced from image processing is proposed. Image thresholding

2014 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), May 14-15, Espinho, Portugal

978-1-4799-4254-1/14/$31.00 ©2014 IEEE

techniques are used to binarize the image in real-time. Connected component analysis (CCA) of the binarized image is used to extract vital features from the image, which are subsequently used by the cascaded object classifier and for object-orientation computation. For an efficient implementation of this process, an FPGA hardware architectural design is proposed in this paper. The rest of this paper is organized as follows: Section 2 presents an overview of object recognition, CCA and mechanical palletizers. Section 3 presents the proposed system and the implementation methodology. Section 4 demonstrates experimental results of the object recognition and orientation computation. In Section 5, simulation results from the mechanical palletizer model are presented. Finally, Section 6 concludes the findings.

II. THEORETICAL OVERVIEW

In this paper, the critical areas, including object recognition, CCA, the FPGA architecture and the mechanical palletizer modelling, are presented. Usually, the object recognition part provides information for the quality-control actuators. Results obtained from the CCA are used in the object recognition classifier and in the geometrical feature extraction that actuates the mechanical palletizer. The entire process is implemented on an FPGA.

A. Object Recognition

The main strength of any object recognition algorithm is its ability to identify key points on the object and the unique features surrounding those points [8]. These features are stored in feature vectors which are subsequently compared, by a classifier, with an object of interest stored in the database. Features such as HAAR and Local Binary Patterns (LBP) have historically been used for detecting faces, while the Histogram of Oriented Gradients (HOG) is suited to capturing the overall shape of an object such as a car, box or person [9]. Classifiers are the final decision makers of the algorithm and can be either rule-based or artificial intelligence-based.
The most familiar among the rule-based classifiers is the cascaded classifier using the Viola-Jones algorithm [8]. The cascaded classifier is made of stages, and each stage consists of an ensemble of weak learners. The weighted average decision of the learners represents the classification status of that stage [9]. Each stage slides its detection window over the entire image of interest. If at any stage an object is classified as negative, the classification stops there and does not proceed to the next stage. An object is reported positive at the current window location only when the detector of the final stage of the classifier classifies the image region as positive. Three classification outcomes exist: True Positive, False Positive and False Negative. A True Positive occurs when the target object is correctly classified, a False Positive when a non-target object is mistaken for the target, and a False Negative when the target object is classified as non-target (i.e., negative) [9]. To train the classifier, a

sufficient set of positive and negative samples of the target object must be supplied to the classifier. The number of positive samples (PS) required to train each stage is computed by:

PS = floor( TotalPositiveSamples / (1 + (NumStages − 1) × (1 − TruePositiveRate)) )    (1)
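As a minimal sketch, Eqn. (1) can be evaluated directly; the example numbers below are illustrative, not taken from the paper:

```python
import math

def positive_samples_per_stage(total_positive, num_stages, true_positive_rate):
    """Positive samples available for training each cascade stage, Eqn. (1)."""
    return math.floor(
        total_positive / (1 + (num_stages - 1) * (1 - true_positive_rate))
    )

# e.g. 1000 positive samples, 10 stages, 0.995 per-stage true positive rate
print(positive_samples_per_stage(1000, 10, 0.995))  # -> 956
```

The denominator accounts for the small fraction of positives consumed (rejected) at each earlier stage, so each stage sees slightly fewer samples than the total.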

B. Connected Component Analysis

Connected component algorithms can be applied to colored, grayscale and binary images. Initially, the entire image is scanned pixel by pixel, row- and column-wise, from the top-left to the bottom-right corner. If any group of connected pixels shares equal or similar designated pixel-intensity values, it is labeled as an object in the output image [9]. Each set of distinct connected pixels receives a distinct label in the output image. In a binary image, any pixel encountered during the scan with a value of 1 is considered an object pixel, while 0 values indicate non-object pixels. In a grayscale image, a range of intensity values (e.g., [0, 50] or [60, 80]) can be designated; any pixel whose intensity falls within that range is assigned as an object pixel. Modifications of the algorithm make it possible to operate on RGB color-space images and at different levels of connectivity, i.e., 4- and 8-neighborhood connectivity. In a binary image with 4-neighborhood connectivity, if during the scan the connected-component labelling operator finds a pixel, say K, whose logical value is 1, K is labelled in the output image and gets burnt in the input image (i.e., converted to zero), indicating that it has been scanned. The four neighbors of the burnt pixel K are then examined. If all four neighbors have logic-0 values, the pixel of interest is a disconnected object and receives a unique output label. If any of the neighbors is logic 1, it is also burnt and assigned the same label as pixel K; likewise, if all the neighboring pixels are logic 1's, they all get burnt and receive the same label as pixel K. The operation continues until all the pixels are processed, and a second scan is performed to sort the connected objects into equivalence classes, each with a distinct label. The algorithm is often referred to as the recursive Grass-Fire algorithm [9].
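The burn-and-label procedure above can be sketched in a few lines of Python; this is an iterative (stack-based) form of the recursive Grass-Fire algorithm, working on a binary image given as nested lists:

```python
def grass_fire_label(binary, connectivity=4):
    """Label connected components of a binary image (lists of 0/1).
    Each object pixel found during the raster scan is 'burnt' (zeroed)
    in the input copy and its neighbours are given the same label."""
    rows, cols = len(binary), len(binary[0])
    img = [row[:] for row in binary]            # work on a copy; pixels get burnt
    labels = [[0] * cols for _ in range(rows)]
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if connectivity == 8:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if img[r][c] != 1:
                continue
            next_label += 1
            img[r][c] = 0                       # burn the seed pixel
            stack = [(r, c)]
            while stack:
                y, x = stack.pop()
                labels[y][x] = next_label
                for dy, dx in offsets:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols and img[ny][nx] == 1:
                        img[ny][nx] = 0         # burn before queueing
                        stack.append((ny, nx))
    return labels

image = [[1, 1, 0, 0],
         [0, 1, 0, 1],
         [0, 0, 0, 1],
         [1, 0, 0, 0]]
print(grass_fire_label(image))
```

The explicit stack replaces the recursion of the textbook formulation and avoids stack-depth limits on large blobs; the second equivalence-class pass of the two-scan variant is unnecessary here because each flood-fill finishes before the raster scan resumes.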
Each labeled connected-component object is treated as a unique Binary Large Object (BLOB) and stored in a vector with its label values and indices [10]. Consider a BLOB whose minimum and maximum X and Y pixel coordinates are denoted by Xmin, Ymin, Xmax and Ymax, respectively, and let the total number of pixels within the BLOB be N, which is also the BLOB's area. Eqns. (2) and (3) compute the bounding-box area (BB) and the centroid (C) of the BLOB, respectively.

BB = (Xmax − Xmin) × (Ymax − Ymin)    (2)

C = [ (1/N) Σ_{i=1}^{N} x_i , (1/N) Σ_{i=1}^{N} y_i ]    (3)
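Eqns. (2) and (3) translate directly to code; a minimal sketch, taking a BLOB as a list of (x, y) pixel coordinates:

```python
def blob_stats(pixels):
    """Bounding-box area (Eqn. 2) and centroid (Eqn. 3) of a BLOB,
    given its pixel coordinates as (x, y) tuples."""
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    n = len(pixels)                       # N is also the BLOB's area
    bb = (max(xs) - min(xs)) * (max(ys) - min(ys))
    centroid = (sum(xs) / n, sum(ys) / n)
    return bb, centroid

bb, c = blob_stats([(1, 1), (2, 1), (1, 2), (2, 2)])
print(bb, c)  # -> 1 (1.5, 1.5)
```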


The orientation of the BLOB is the angle, in degrees ranging from -90° to +90°, between the X-axis and the major axis of the ellipse that has the same second moments as the BLOB under investigation. Fig. 1 shows a connected binary object with an ellipse of the same second moments. The orientation is the angle between the horizontal line and the major axis of the ellipse.

Figure 1: BLOBs with an Ellipse of the same second Moment.
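The equivalent-ellipse orientation can be computed from the BLOB's second central moments; a sketch using the standard moment formula (the paper does not spell out this computation, so the formula below is the conventional one):

```python
import math

def blob_orientation_deg(pixels):
    """Orientation of a BLOB, in degrees in (-90, 90], as the major-axis
    angle of the ellipse with the same second central moments.
    `pixels` is a list of (x, y) coordinates of the BLOB's pixels."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    mu20 = sum((x - cx) ** 2 for x, _ in pixels) / n
    mu02 = sum((y - cy) ** 2 for _, y in pixels) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in pixels) / n
    # Major-axis angle of the equivalent ellipse.
    return math.degrees(0.5 * math.atan2(2 * mu11, mu20 - mu02))

# A thin blob along the line y = x should report roughly 45 degrees
# (with y measured upward; image coordinates with y downward flip the sign).
diagonal = [(i, i) for i in range(10)]
print(blob_orientation_deg(diagonal))
```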

The extracted geometrical features of the BLOB are housed in object feature vectors, which not only provide statistics about the BLOB but also serve as a simple classifier identifying the type and nature of the BLOB being dealt with. For instance, the circularity measure of a BLOB in Eqn. (4) can help distinguish between circular and non-circular objects.

Circularity = Σ Contour Pixels (Perimeter) / (2 × √(π × BLOB Area))    (4)
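Assuming the reading of Eqn. (4) above (contour length divided by the perimeter of a circle of equal area, P = 2√(πA)), the measure equals 1 for a perfect circle and grows for elongated shapes:

```python
import math

def circularity(perimeter, area):
    """Circularity per Eqn. (4): contour length over the perimeter of a
    circle with the same area; exactly 1.0 for a perfect circle."""
    return perimeter / (2 * math.sqrt(math.pi * area))

r = 10.0
print(round(circularity(2 * math.pi * r, math.pi * r * r), 3))  # -> 1.0
```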

C. Modelling of the Mechanical Palletizer

A four-bar linkage mechanism is modelled as the mechanical palletizer, with a planned trajectory to achieve its purpose. It is a movable chain of four links connected in series by four joints. Each joint has one degree of freedom and can be either revolute or prismatic. In a planar quadrilateral linkage, as adopted in this design, one link is fixed and designated as the fixed link or frame (r1). The two links connected to the ends of the frame are the input link (r2) and the output link (r4), respectively. The link connecting the input and the output is the coupler link (r3). For a proper understanding of the mechanism's operation and of the desired trajectory on the conveyor's plane, the link lengths have to be synthesized so that the angular positions, speeds and accelerations of joints B and C can be deduced. Consider Fig. 2 below, where P is any point that defines the trajectory of the coupler link; its distance from joint B is given as rp.

Figure 2: Schematic Diagram of a Four-bar Mechanism.

For a singular linkage configuration, in which the sum of any two links equals the sum of the remaining links, the following relations hold for the closed-loop linkage [6].

r2 + r3 = r1 + r4

r2·e^(iθ2) + r3·e^(iθ3) = r1·e^(iθ1) + r4·e^(iθ4)    (5)

r3·e^(iθ3) − r4·e^(iθ4) = r1·e^(iθ1) − r2·e^(iθ2)    (6)

With θ2 as a reference (independent variable), and with known values for the lengths of all four links and for θ1, the coupler angle (θ3) and the output angle (θ4) can be determined from (6) as:

r3·e^(iθ3) − r4·e^(iθ4) = X + iY

θ3 = tan⁻¹( (Y + r4·sin θ4) / (X + r4·cos θ4) )    (7)

θ4 = tan⁻¹(Y/X) ± cos⁻¹( (r3² − r4² − X² − Y²) / (2·r4·√(X² + Y²)) )    (8)
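A sketch of the position solution, under the assumption that X + iY denotes the right-hand side of Eqn. (6), i.e. r1·e^(iθ1) − r2·e^(iθ2); the two-argument arctangent avoids quadrant ambiguity:

```python
import math

def coupler_and_output_angles(r1, r2, r3, r4, theta1, theta2, branch=+1):
    """Coupler angle θ3 and output angle θ4 from Eqns. (7) and (8), with
    θ2 as the independent variable. `branch` (+1 or -1) selects the open
    or crossed configuration of the linkage."""
    X = r1 * math.cos(theta1) - r2 * math.cos(theta2)
    Y = r1 * math.sin(theta1) - r2 * math.sin(theta2)
    theta4 = math.atan2(Y, X) + branch * math.acos(
        (r3 * r3 - r4 * r4 - X * X - Y * Y) / (2 * r4 * math.hypot(X, Y)))
    theta3 = math.atan2(Y + r4 * math.sin(theta4), X + r4 * math.cos(theta4))
    return theta3, theta4

# Parallelogram linkage (r1 = r3 = 4, r2 = r4 = 2) at θ2 = 90°: the coupler
# stays parallel to the frame (θ3 = 0) and the output parallel to the input.
t3, t4 = coupler_and_output_angles(4, 2, 4, 2, 0.0, math.pi / 2)
print(round(math.degrees(t3), 3), round(math.degrees(t4), 3))  # -> 0.0 90.0
```

The parallelogram check is a quick sanity test: with equal opposite links, the output angle must track the input angle exactly.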

To compute the velocities of the links, the derivative of (6) is taken, assuming that the angular speed of the input link, ω2, is known.

i·r3·ω3·e^(iθ3) − i·r4·ω4·e^(iθ4) = i·r1·ω1·e^(iθ1) − i·r2·ω2·e^(iθ2),    (9)

where

ω3 = r2·ω2·sin(θ2 − θ4) / (r3·sin(θ4 − θ3))    (10)

ω4 = r2·ω2·sin(θ2 − θ3) / (r4·sin(θ4 − θ3))    (11)
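The velocity relations (10) and (11) are a one-liner each; a minimal sketch, taking the already-solved angles as inputs:

```python
import math

def link_angular_velocities(r2, r3, r4, theta2, theta3, theta4, omega2):
    """Coupler (ω3) and output (ω4) angular velocities of the four-bar
    linkage, following Eqns. (10) and (11), for a given input speed ω2."""
    denom = math.sin(theta4 - theta3)
    omega3 = r2 * omega2 * math.sin(theta2 - theta4) / (r3 * denom)
    omega4 = r2 * omega2 * math.sin(theta2 - theta3) / (r4 * denom)
    return omega3, omega4

# Parallelogram linkage at θ2 = 90°, θ3 = 0°, θ4 = 90°: the coupler only
# translates (ω3 = 0) and the output link mirrors the input (ω4 = ω2).
print(link_angular_velocities(2, 4, 2, math.pi / 2, 0.0, math.pi / 2, 1.0))  # -> (0.0, 1.0)
```

Note that the shared denominator sin(θ4 − θ3) vanishes when the coupler and output links become collinear, the mechanism's singular (dead-point) configuration.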

III. PROPOSED SYSTEM & IMPLEMENTATION METHODOLOGY

The prototype of the proposed system is depicted in Fig. 3. The system consists of a network of three conveyor belts. Each of the products (A, B, and B with defects) is routed onto a separate conveyor after being analyzed. The images of the product moving on the conveyor are captured by camera C. Proximity sensor S1 detects the presence of objects. If the passing object is identified by the image processing algorithm as product B, or as B with a defect, pneumatic cylinder PC1 is actuated to push the product onto the vertical conveyor. If the product is of type A, it continues its journey to the mechanical palletizer. The Orientation Device (OD) is only actuated if A is found to have a distorted orientation with respect to the x-axis. For the products pushed onto the vertical conveyor, proximity sensor S2 detects their presence; if a product B has a defect, PC2 is activated to push it onto the horizontal conveyor that houses defective products, whereas a normal B continues its journey to another terminal. The robot arm picks up product A and stacks it properly after the orientation of A has been taken care of by the OD. Pressure sensor S3 provides feedback to the OD.

Figure 3: Proposed System Diagram.

Figure 4: Control Flow Chart.

It should be noted that the angle θ1 in Fig. 4, measured as the deviation of product A from the x-axis, depends on the minimum error margin within which the robot can still effectively pick the product. The upper limit of this angle (θ2) is bounded by the extent to which the mechanical palletizer can align the distorted product. For a square-shaped product, the OD has no upper limit: it can correct any angular deviation from the x-axis. With the algorithm implemented on the FPGA, the real-time processing requirement is met. Fig. 4 represents the control flow of the proposed algorithm.

A. FPGA Design Architecture

The hardware architecture of the FPGA is shown in Fig. 5. It has been grouped into five system blocks. The critical blocks are the serial data extraction, thresholder, BLOB and classifier blocks. In the serial data extraction, four groups of shift registers are used. Each shift-register block is internally made up of four 1-bit registers. Between two shift-register blocks, a First-In First-Out (FIFO) line buffer is used. Each FIFO is capable of caching 3 bits of image data shifted out of the preceding shift-register block. A total of three FIFOs and four shift-register blocks are required to create a 4x4 image window in 4 clock cycles. The FIFOs are built from dual-port RAM instead of FIFO IP [11]. In each 4x4 window of image data created, a new center-pixel value is computed. At each clock signal, the data advances one step to the right through the shift registers. The 4x4 data window is processed and streamed to the blocks to its right. Each 4x4 window has a modified center-pixel value. The operation continues until all the image pixels are exhausted. At the output, the original image is used as a reference, together with the modified center-pixel values of the streamed windows, to reconstruct the new image containing the BLOBs. A control signal is required to stream each 4x1 column of data into the algorithm. The 4x4 window allows the 4-neighborhood connectivity algorithm to probe the pixels adjacent to the pixel currently operated on. Since 4 clock cycles are used to create a 4x4 window and compute the first center pixel, an additional control signal at the output is required to indicate the validity of the center pixel. Fig. 6 shows the pixel generation; the bottom shift-register block holds the oldest image data.

Figure 5: FPGA Architecture.

B. Data Serialization

The image data needs to be restructured to make it suitable for HDL code generation targeted at FPGA implementation. This is because the data in the algorithm may contain double-precision values, strings and control-flow constructs that do not map well onto the FPGA [3, 7]. For a 1024 × 1024 image in RGB color space, the total size is 1024 × 1024 × 8 × 3 bits (3 MB). This is a large amount of data and would occupy a significant portion of the system memory and line buffers, which are, in turn, in limited supply on the FPGA of concern. Serialization makes it possible for the data to be broken into a stream before being fed to the chip. The serialization of the image data depends on how many of the image pixels need to be available for the algorithm's computation, and on how much memory and how many line buffers are available on the chip to stream the data. In this design, the serialization happens in the Simulink model-based design, mirroring the architecture of the FPGA. The critical parts of the algorithm are the thresholder, BLOB computation and detection blocks. Instead of scanning the entire image with the 4-neighborhood connectivity of the modified Grass-Fire algorithm, the image data is reshaped into column-wise serial data.

(Figure labels, Figs. 3 and 4: camera C; products A and B; proximity sensors S1 and S2; pneumatic cylinders PC1 and PC2; orientation device OD; robot arm; pressure sensor S3. Flow chart: acquire, pre-process and process the image; type-A and type-B checks; activate PC1, PC2 or OD after delays t1, t2 and t3; the OD keeps moving until the pressure on S3 ≥ P1, then deactivates.)

Figure 6: A 4x4 pixel Generation Architecture.
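A minimal software model of the shift-register/line-buffer pipeline in Fig. 6 may help clarify the timing: pixels arrive serially in row-major order, each of the four 4-stage registers holds one row segment, and each FIFO supplies the remaining full-row delay, so a valid 4x4 window emerges every clock once the pipeline is primed. (This is an illustrative behavioral model, not the paper's HDL.)

```python
from collections import deque

def stream_4x4_windows(image):
    """Behavioral model of the 4x4 window pipeline: four 4-stage shift
    registers with three FIFO line buffers (dual-port RAM in hardware)
    between them. Returns {(top_row, left_col): 4x4 window}."""
    rows, cols = len(image), len(image[0])
    regs = [[0] * 4 for _ in range(4)]   # regs[0] holds the newest row segment
    fifos = [deque() for _ in range(3)]  # one line buffer between row pairs
    windows = {}
    for r in range(rows):
        for c in range(cols):
            carry = image[r][c]          # serial pixel input, row-major order
            for i in range(4):
                regs[i].insert(0, carry) # shift in; the oldest value falls out
                out = regs[i].pop()
                if i == 3:
                    break
                fifos[i].append(out)
                if len(fifos[i]) > cols - 4:   # FIFO delay = (cols - 4) stages
                    carry = fifos[i].popleft()
                else:
                    break                # lower rows of the pipeline still priming
            if r >= 3 and c >= 3:        # window fully inside the image: valid
                # Bottom register block holds the oldest row, as in the paper.
                windows[(r - 3, c - 3)] = [list(reversed(regs[3 - k]))
                                           for k in range(4)]
    return windows

img = [[10 * r + c for c in range(6)] for r in range(6)]
print(stream_4x4_windows(img)[(0, 0)])
# -> [[0, 1, 2, 3], [10, 11, 12, 13], [20, 21, 22, 23], [30, 31, 32, 33]]
```

Each stage-to-stage path is 4 register taps plus (cols − 4) FIFO slots, i.e. exactly one image line of delay, which is what keeps the four row segments vertically aligned. The "valid" condition plays the role of the extra control signal mentioned above.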

IV. IMAGE PROCESSING EXPERIMENTAL RESULTS

The Simulink model was evaluated with the ODE45 Simulink solver and, as it conforms to the proposed architecture, HDL code was automatically generated from it. A test bench for the Simulink models, created using the ModelSim software bundled with the Quartus II suite, confirmed the workability of the model with negligible error. The results were obtained by averaging 10 independent simulation runs. The model was targeted at the low-cost Altera Cyclone III EP3C120F780 FPGA with dual on-board oscillators generating 50 MHz and 125 MHz clocks. Even at the lower clock frequency of 50 MHz, the hardware resource utilization is very low and execution is extremely fast. Figs. 7 and 8 show the original image templates used for training the binary object classifier. Figs. 9 to 12 show four different instances of the detection windows. In Fig. 9, a cluttered image containing template 1 was fed into the algorithm; the template was correctly detected and extracted, its exact location in the image was determined, and its inclination to the x-axis was computed as 31.1°. Similarly, Fig. 10 has template 2 in its midst; the detection result is shown, with the inclination of template 2 measured as -1.83°. In Fig. 11, the detection stage was able to recognize the image as matching the target templates even though it carries no marks. The last result was obtained when the image contained a non-target object. For all the scenarios tested, the detection rate was high and robust to scaling and rotation of the input image.

Figure 7: Template 1. Figure 8: Template 2.

Figure 9: Template1 Pre and Post-processing Results.

Figure 10: Template 2 Pre and Post-processing Results.

Figure 11: Pre and Post-processing Results of Unmarked Template.

Figure 12: Post-processing Results of Non-target Image.

V. MECHANICAL PALLETIZER SIMULATION RESULTS

In the palletizer design, a modification was made to the Grashof singular quadrilateral linkage: both the input and the output joints are driven by two similar, synchronized induction motors. The trajectories of the two ends of the coupler link, joints B and C, are considered. The two joints represent the displacement of the device on the conveyor belt. Figs. 13 and 14 show plots of relative displacement, velocity and acceleration versus time for the two points B and C, as measured from a reference point. The reference point is modelled to lie on the same axis as joints A and D of the fixed link. The origin of the reference point is placed at the end wall of the conveyor, and its maximum allowable displacement is equal to the conveyor's width. From both plots it is worth noting that none of the curves is harmonic. As the lengths of the input and output links are made shorter relative to the coupler link, the relative velocities become more sinusoidal while the relative accelerations approach a cosine function.


(Detection overlays, Figs. 9-11: binary target 1 found at 31.1°; binary target 2 found at -1.83°; unmarked template found at -2.25°.)


The mechanism was simulated over a period of 2 s with the angular velocities of both the input and output links set to 3.142 rad/s. At time t = 0, the relative displacement and velocities of both points are zero while the acceleration attains its maximum value. As the acceleration falls, the relative velocities pick up and attain their maximum values of 3.14 m/s at exactly t = 0.5 s. At this point the maximum displacement from the reference point is also attained, corresponding to the opposite wall of the conveyor. Thereafter, the velocity decreases until it becomes zero again at t = 1 s, the point at which the displacement completes a round trip and returns to the origin. At the same time, the acceleration attains another maximum, this time in the direction opposite to that attained at t = 0 s. Between t = 1 s and t = 2 s the process repeats itself, with the acceleration returning from its maximum negative value to its maximum positive value, and the displacement computed as a reduction from the previously accumulated round-trip value. The only difference between Figs. 13 and 14 lies in the acceleration values. In Fig. 13, the maximum acceleration during the forward stroke is 12.29 m/s2 and that during the reverse stroke is -7.46 m/s2. In Fig. 14, the maximum accelerations in the forward and reverse strokes are 10.93 m/s2 and -8.82 m/s2, respectively. This disparity arises because the origin of the output link, D, is farther from the reference point than the origin of the input link, A. Consequently, point B must accelerate faster in the forward stroke, and more slowly in the reverse stroke, than point C for the input and output links to remain synchronized.

Figure 13: Plot of Displacement, Speed and Acceleration of joint B versus Time.

VI. CONCLUSION

In this paper, a system for solving QCP problems is proposed, modeled and simulated. The proposed system provides accurate and fast results (a 99% recognition rate) compared to other systems available in the literature. The object recognition algorithms are shown to be robust to scaling and rotation of the object. Moreover, the FPGA implementation meets real-time requirements. Finally, the integration of the robot arm, cameras and sensors makes it possible to solve the orientation problem of any object easily and accurately.

Figure 14: Plot of Displacement, Speed and Acceleration of joint C versus Time.

REFERENCES

[1] C. Diederichs and S. Fatikow, "FPGA-based Object Detection and Motion Tracking in Micro and Nano Robotics", International Journal of Intelligent Mechatronics and Robotics, March 2013, vol. 3, no. 1, pp. 27-37.

[2] I. A. Qader and M. Maddix, “Real-Time Edge Detection Using TMS320C6711 DSP”, IEEE Transactions on Image Processing, May 2004, vol. 3, pp. 306-309.

[3] R. Harinarayan, R. Pannerselvam and M. Mubarak Ali, “Feature Extraction of Digital Aerial Images by FPGA Based Implementation of edge detection algorithms”, IEEE Proceedings of International Conference on Emerging Trends in Electrical and Computer Technology, ICETECT, September 2011, pp.131-135.

[4] G. Orchard, J. G. Martin and R. J. Vogelstein, "Fast Neuromimetic Object Recognition Using FPGA Outperforms GPU Implementations", IEEE Transactions on Neural Networks and Learning Systems, August 2013, vol. 24, no. 8, pp. 1239-1252.

[5] H. Yu, J. Shan and X. Zhu, "Off-line Programming and Remote Control for a Palletizing Robot", IEEE International Conference on Computer Science and Automation Engineering, CSAE, 2011, vol. 2, pp. 58-589.

[6] P. Dzitac and A. M. Mazid, "An Efficient Control Configuration Development for a High-speed Robotic Palletizing System", IEEE Conference on Robotics, Automation and Mechatronics, 2008, pp. 140-145.

[7] A. Sultana and M. Meenakshi, "Design and Development of FPGA based Adaptive Thresholder for Image Processing Applications", IEEE Recent Advances in Intelligent Computational Systems (RAICS), 2011, pp. 633-637.

[8] Z. Wang, H. Xiao, W. He and F. Wen, "Real-time SIFT-based Object Recognition System", Proceedings of the 2013 IEEE International Conference on Mechatronics and Automation, August 4-7, 2013, vol. 5, pp. 1361-1366.

[9] M. Sonka, V. Hlavac and R. Boyle, Image Processing, Analysis and Machine Vision, Thomson, Toronto, 2008.

[10] A. Bochem, R. Herpers and K. B. Kent, "Hardware Acceleration of BLOB Detection for Image Processing", Third International Conference on Advances in Circuits, Electronics and Micro-electronics, 2010, pp. 28-33.

[11] Z. Guo, W. Xu and Z. Chai, “Image Edge Detection Based on FPGA”, IEEE Ninth International Symposium on Distributed Computing and Applications to Business, Engineering and Science, Feb. 10, 2010, vol. 5, pp. 169-171.
