International Journal of Precision Engineering and Manufacturing, Vol. 16, No. 4, pp. 661-667, April 2015
DOI: 10.1007/s12541-015-0088-z
© KSPE and Springer 2015
Controlled Trigger and Image Restoration for High Speed Probe Card Analysis
Bonghun Shin1, Soo Jeon1,#, Jiwon Lee1, Chung Su Han2, Chang Min Im3, and Hyock-Ju Kwon1
1 Department of Mechanical and Mechatronics Engineering, University of Waterloo, 200 University Avenue West, Waterloo, Ontario, N2L 3G1, Canada
2 Sedicon, 522 Dangjeong-dong, Gunpo-si, Gyeonggi-do, 435-833, South Korea
3 SDA Co. Ltd., 38-16 Ojeon-dong, Uiwang-si, Gyeonggi-do, 483-817, South Korea
# Corresponding Author / E-mail: [email protected], TEL: +1-519-888-4567 (ext. 38898), FAX: +1-519-885-5862
KEYWORDS: Machine vision, Real-time processing, Image restoration, Wafer probing
Latency and image blurring are major limitations of machine vision processes, which often require every target object to pause momentarily so that a still image can be captured and processed. When a large number of objects are to be inspected, such a stop-and-go approach may significantly degrade test efficiency due to the long inspection time. This paper presents a performance and error analysis of a dynamic imaging approach in which the image is captured and processed on-the-fly while the target object is still moving. Taking images of a moving object can substantially enhance the inspection speed but intensifies latency and image blurring. To overcome these issues, we first implement a controlled trigger, i.e., we operate the machine vision in synchrony with position sensing while the target object is moving. We then restore the blurred pixel data through advanced image restoration techniques. The main ideas are applied to a semiconductor test process called probe card analysis, and their performance is experimentally verified.
Manuscript received: April 8, 2014 / Revised: August 4, 2014 / Accepted: January 4, 2015
1. Introduction
Machine vision is widely used in industrial applications as a
primary means for inspection and testing. Examples include textiles,
printed circuit boards (PCB’s), integrated circuits (IC’s), labels,
machine tools, fruits, etc.1 Visual signals represent one of the most informative sensory inputs, yet they have some drawbacks compared to other sensing modalities. First of all, the image data available at the present time step report the status of the object in the previous time step, i.e., an activity's initiation and its result occur at different time instants. This is called latency. Another limitation of vision sensing is that the object has to stay still for a certain duration, called the exposure time. If the object moves during the exposure time, the resulting image will be blurred, degrading the integrity of the inspection. For these reasons, typical machine vision
processes take the stop-and-go approach where the target object is held
stationary during image acquisition. If the machine vision is relatively
fast compared to other process steps that limit the manufacturing line
speed, it may not be a big issue to make frequent stops for visual
inspection. However, in many applications for high precision testing,
halting the process for every object for a still image can become a
major impediment for testing efficiency. One example is the inspection
of a probe card. The probe card is a component used to test
conductivities and functionalities of integrated circuits (IC’s) before
packaging.2
A probe card consists of a large number of probe pins aligned
within a small area of a single IC chip and provides an interface
between the tester and the IC.2 Fig. 1(a) shows an example of probe
cards. Electrical connection from each pin to the tester (i.e., the wafer
prober) is made through the wire branching out from the epoxy center
ring. The cantilever pins are stacked around the center ring and make
a physical contact with wafer dies. The magnified view of the center
ring and pins is shown in Fig. 1(b). During probe card manufacturing, the manufacturer needs to inspect the card against a number of specifications on mechanical (alignment, planarity, tip radius, missing tips, etc.) as well as electrical (leakage, capacitance, etc.) properties. The test
equipment used in this process is called the probe card analyzer. Fig.
2 shows the main inspection module of a probe card analyzer. The
probe card is placed on top of the module with the probe pins pointing
downward so that the camera can take their images from below. The
camera is mounted to the x-y stage and can be moved to different pin
locations. Typically, the inspection time for the probe card analysis
increases in proportion to the number of pins. Therefore, the inspection
time is one of critical factors in evaluating the performance of probe
card analyzers.
Among the items to be inspected by a probe card analyzer, there are two
mechanical properties that are essential but time-consuming to
measure: the planar alignment (x and y coordinates) and the tip
diameter of each probe pin. As in typical machine vision applications,3 such mechanical properties are often inspected optically (thus without contact) using the digital imaging system of the probe card analyzer, where the stop-and-go strategy is taken to inspect one pin at a time. The target point at which the stage is positioned for each pin is
provided by the predefined location map of probe pins. For each pin to
be accurately inspected (requirement: ±1.0 µm for each coordinate), the
stage needs to come to a complete stop before an image is taken and
analyzed. At this rate, a speed of one probe tip per second results in a total inspection time of close to three hours for a 10,000-pin probe card.
Motivated by the need for enhancing the inspection speed in the
above mentioned tasks, this paper addresses the approach to capture the
vision image while the object is moving and to process it on-the-fly.
Such a dynamic imaging approach will clearly manifest two main
drawbacks of machine vision, i.e., (non-real-time) latency and image
blurring, as mentioned in the beginning. To overcome these issues, we
adopt controlled sampling of visual data and image restoration
technique. Firstly, vision images are sampled in a time-critical way
using a real-time trigger such that the visual sensing is synchronized
with the position measurement of the object. With some simple
calibration of event timings based on stage motion data, we can
establish a consistent latency to coordinate the timing of position data
with that of the vision image. Such a hard real-time image capturing
technique has also been used in some of recent motion control and
tracking applications.4,5 Secondly, to recover clear vision data from the imperfect image, advanced image restoration techniques, or deblurring,6-8 have been employed. The deblurring technique has long
been used in various industrial applications, yet its use in micro-scale
inspection is still rare.7
The remainder of this paper is organized as follows. Section 2 explains the technical background on real-time machine vision and the image restoration technique. Sections 3 and 4
present the demonstration of main ideas through experimental results
with the probe card analysis. The concluding remarks are summarized
in Section 5.
2. Technical Background and Main Issues
In this section, we briefly review relevant issues of high-speed real-
time machine vision applications and provide some technical
background on the basic approach we take.
2.1 Real-time coordination of vision and motion data
To use camera images as reference data, we first need to keep track
of the exact moment when each image is taken. This can be achieved
by real-time triggering and compensation of latency. The latency of a vision system mainly comes from the exposure time $t_{ex}$ (accumulating light into electrical charges), the readout time $t_{ro}$ (converting charges into digital data), the transfer time $t_{tr}$ (transferring data into the processor) and the processing time $t_{pr}$. The total latency $T_L$ is thus given by

$$T_L = t_{ex} + t_{ro} + t_{tr} + t_{pr} + \delta t \qquad (1)$$
where δt stands for the jitter, i.e., the uncertainty in latency. The amount
of jitter should be minimized in applications requiring high determinism.
Fig. 3 shows the timing diagram of a vision system synchronized with
the measured position (x) of the target object. Using carefully configured
triggering operation and proper machine vision protocols (e.g., the
Camera Link or GigE Vision), $t_{ex}$, $t_{ro}$, and $t_{tr}$ (which constitute the
hardware latency) can be kept almost constant. On the other hand, the
software latency due to the processing time $t_{pr}$ typically varies with
Fig. 1 A probe card. (SDA Technology Co., Ltd.)
Fig. 2 A probe card analyzer (SDA Technology Co., Ltd.)
INTERNATIONAL JOURNAL OF PRECISION ENGINEERING AND MANUFACTURING Vol. 16, No. 4 APRIL 2015 / 663
each image sample. Let us denote the maximum and minimum values of $t_{pr}$ by $t_{pr}^{\max}$ and $t_{pr}^{\min}$, respectively (i.e., $t_{pr}^{\min} \le t_{pr} \le t_{pr}^{\max}$). As
shown in Fig. 3, the image processing starts right after the completion
of the transmission. As long as the completion of image processing (i.e., the falling edge of the $t_{pr}$ pulse) occurs before the completion of the transmission of the next image data (i.e., the falling edge of the next $t_{tr}$ pulse), image processing does not cause any backlog. In other words, two consecutive $t_{pr}$ pulses do not overlap in the timing diagram of Fig. 3. Therefore, the period of the trigger pulse, denoted by $T_g\,(= t_{k+1} - t_k)$, must be chosen as follows to guarantee real-time image processing:

$$T_g \;\ge\; \max\!\left(t_{pr}^{\max},\; t_{ex} + t_{ro} + t_{tr}\right) \qquad (2)$$
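As a quick sanity check of Eq. (2), a minimal Python sketch (the variable names are ours, and the timing values are those reported later in Tables 2 and 3) is:

```python
# Timing values from Tables 2 and 3 (ms); names are ours, not the paper's.
t_ex, t_ro, t_tr = 10.0, 65.0, 9.87   # exposure, readout, transfer
t_pr_max = 70.0                       # worst-case image processing time

T_g_min = max(t_pr_max, t_ex + t_ro + t_tr)   # lower bound of Eq. (2)
T_g = 130.0                                   # chosen trigger period
assert T_g >= T_g_min                         # 130 >= 84.87: no backlog
```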
If we send a trigger pulse at t = tk, the electronic shutter immediately
starts to integrate the image data. Any information drawn from this
image will be available at t = tk+TL. Hence, by knowing the latency and
the exact time of trigger, we can synchronize the vision image with the
motion data from the stage. On the other hand, movement of the
camera during the exposure time (tex) will cause motion blurring.
Denoting the stage position at time t = tk by xk, the motion blurring
occurs for the period of tk ≤ t ≤ tk+tex, which corresponds to the
displacement δxk as illustrated in Fig. 3.
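A minimal sketch of this synchronization logic in Python (hypothetical names; a zero-order hold on the encoder samples is assumed) is given below:

```python
import bisect

def position_at_trigger(t_k, enc_times, enc_positions):
    """Encoder position x_k at trigger time t_k (start of exposure).
    enc_times must be sorted; the latest sample at or before t_k is used."""
    i = bisect.bisect_right(enc_times, t_k) - 1
    return enc_positions[max(i, 0)]

def pair_image_with_position(t_k, T_L, enc_times, enc_positions):
    """Pair an image with the position logged at its trigger time t_k.
    The processed result only becomes available at t_k + T_L (the latency)."""
    x_k = position_at_trigger(t_k, enc_times, enc_positions)
    return {"trigger_time": t_k, "available_at": t_k + T_L, "position": x_k}
```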
2.2 Restoration of image blurred by linear motion
In our application, the motion blur is the result of relative motion
between the camera and the probe tips during camera exposure. From
a mathematical point of view, the blurred image can be considered as
a convolution of the ideal image with the blur function:6

$$g(n_x, n_y) = d(n_x, n_y) * f(n_x, n_y) + w(n_x, n_y) = \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} d(i, j)\, f(n_x - i,\, n_y - j) + w(n_x, n_y) \qquad (3)$$
where g(nx, ny) and f(nx, ny) denote the blurred image and the ideal
image, respectively, for each pixel at (nx, ny). M and N are the pixel
numbers for x (horizontal) and y (vertical) directions on the image
plane, respectively. d(nx, ny) is called the point spread function (PSF)
which represents the convolution kernel that describes the blurring
mechanism. w(nx, ny) is any additional noise affecting the captured
image, e.g., electrical noise of the image sensors, illumination noise, etc. If we have prior knowledge of how the image is blurred, then we can perform deblurring (or image deconvolution) by inversely applying $d(n_x, n_y)$ to $g(n_x, n_y)$, a process known as non-blind image restoration.8 Measuring the probe tips satisfies the condition for non-blind image restoration because the blurring occurs due to the movement of the x-y stage. The PSF $d(n_x, n_y)$ can thus be described by
the stage motion. If we assume that the stage is running at a constant
speed, the blurring distance δxk in Fig. 3 becomes independent of time,
i.e., δxk=δx for all k = 0, 1, ... In this case, the relative motion between
the camera and the probe tips is linear and d(nx, ny) becomes spatially
shift-invariant, i.e., d(nx, ny)=d(nx−k, ny−k) for all k = 0, 1, ....
Assume that the vertical axis of the image plane is aligned with the
y coordinate of the stage and that the stage moves along y axis with a
constant speed of v µm/s. Then it becomes a linear motion blur with the length of motion $\delta x = v\, t_{ex}$. Denoting the spatial resolution of the image sensor along the y axis by s pixels/µm, the PSF is given by the moving average filter

$$d(n_x, n_y) = \begin{cases} \dfrac{1}{L} & \text{if } n_x = 0 \text{ and } 0 \le n_y \le L - 1 \\ 0 & \text{otherwise} \end{cases} \qquad (4)$$

where $L = [v\, t_{ex}\, s]$ denotes, in number of pixels, the length of motion during exposure.
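A sketch of this PSF construction in NumPy follows (our own helper; rows index $n_y$). With the values used later in the experiments (v = 1 mm/s = 1000 µm/s, $t_{ex}$ = 10 ms, and s = 4 pixels/µm, i.e., the 0.25 µm/pixel resolution quoted in Section 4), L comes out to 40 pixels:

```python
import numpy as np

def motion_blur_psf(v_um_s, t_ex_s, s_px_per_um, shape):
    """Build the PSF of Eq. (4): L equal weights along n_y at n_x = 0."""
    L = max(int(round(v_um_s * t_ex_s * s_px_per_um)), 1)  # blur length (px)
    psf = np.zeros(shape)          # rows index n_y, columns index n_x
    psf[:L, 0] = 1.0 / L           # d = 1/L for n_x = 0, 0 <= n_y <= L-1
    return psf, L

psf, L = motion_blur_psf(1000.0, 0.010, 4.0, (535, 255))
print(L)   # -> 40 pixels of blur at 1 mm/s
```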
By applying the discrete-time Fourier transform to Eq. (3) and denoting the transformed signals by capital letters, we get

$$G(\omega_x, \omega_y) = D(\omega_x, \omega_y)\, F(\omega_x, \omega_y) + W(\omega_x, \omega_y) \qquad (5)$$
for the spatial frequency (ωx, ωy), where the moving average filter is
written as
$$D(\omega_x, \omega_y) = \frac{1}{L}\, \frac{\sin\!\left(\omega_y L / 2\right)}{\sin\!\left(\omega_y / 2\right)}\, e^{-j \omega_y (L - 1)/2} \qquad (6)$$
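Eq. (6) can be verified numerically by comparing the FFT of a one-dimensional slice of the PSF with the closed form (a self-contained sketch; N is an arbitrary transform length):

```python
import numpy as np

L, N = 40, 256
d = np.zeros(N)
d[:L] = 1.0 / L                        # PSF slice along n_y
D_fft = np.fft.fft(d)                  # DTFT samples at w = 2*pi*k/N

w = 2 * np.pi * np.arange(1, N) / N    # skip w = 0 to avoid 0/0
D_closed = np.empty(N, dtype=complex)
D_closed[0] = 1.0                      # limiting value of Eq. (6) at w = 0
D_closed[1:] = (np.sin(w * L / 2) / (L * np.sin(w / 2))
                ) * np.exp(-1j * w * (L - 1) / 2)
assert np.allclose(D_fft, D_closed)    # numerical DTFT matches Eq. (6)
```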
Therefore, the image restoration in this case is to find the ideal
image F(ωx, ωy) from the acquired one G(ωx, ωy) with a priori
knowledge on D(ωx, ωy) in the form of Eq. (6). The PSF D(ωx, ωy) is
non-minimum phase. Hence, the inverse filtering will amplify the noise
$W(\omega_x, \omega_y)$, making it impractical.6 A more practical way is to formulate the above-mentioned deblurring problem as an optimal filter that minimizes the mean square error: i.e., given some observation of $g(n_x, n_y)$, say $g_1$, find the $\hat f$ which minimizes the mean squared error (MSE):

$$\min\; E\!\left[\, \lVert f - \hat f \rVert^2 \;\middle|\; g = g_1 \right] \qquad (7)$$
Note that the blurring model in Eq. (5) is a linear system. If we assume that the noise signal $w(n_x, n_y)$ is zero-mean, Gaussian distributed, and independent of the other signals, the optimal solution to our deblurring problem is given by the minimum mean square error (MMSE) estimator, or Wiener filter:

$$\hat f(n_x, n_y) = h(n_x, n_y) * \left( g(n_x, n_y) - \bar g(n_x, n_y) \right) + \bar f(n_x, n_y) \qquad (8)$$

$\hat f(n_x, n_y)$ denotes the recovered image and the overline represents the mean value of the corresponding signal. Note that $\bar f = \bar g$ if w is zero mean. The effect of the mean value terms in Eq. (8) will show up as an offset in the deblurring process and can be adjusted during calibration.
Fig. 3 Timing diagrams of vision events and the object position
According to the MMSE solution, the Fourier transform of the convolution kernel $h(n_x, n_y)$ for the MMSE estimator is computed as

$$H(\omega_x, \omega_y) = \frac{D^*(\omega_x, \omega_y)}{D^*(\omega_x, \omega_y)\, D(\omega_x, \omega_y) + K} \qquad (9)$$

where $D^*(\omega_x, \omega_y)$ is the complex conjugate of $D(\omega_x, \omega_y)$, and K is the additive factor given by

$$K = \frac{S_w(\omega_x, \omega_y)}{S_f(\omega_x, \omega_y)}$$

with $S_w(\omega_x, \omega_y)$ and $S_f(\omega_x, \omega_y)$ denoting the power spectra of the noise w and the ideal image f, respectively. Since both power spectra are unknown in practice, K is usually treated as a parameter that tunes the trade-off between sharpening and noise.9 In summary, the image restoration is achieved by applying the Wiener filter $h(n_x, n_y)$ to the blurred image $g(n_x, n_y)$ to obtain $\hat f(n_x, n_y)$, which represents the best estimate of the original image $f(n_x, n_y)$ in the sense of minimum mean square error.
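A minimal NumPy sketch of this Wiener deconvolution, under the zero-mean-noise assumption (so that $\bar f$ is approximated by $\bar g$) and with K as a hand-tuned scalar, could read:

```python
import numpy as np

def wiener_deblur(g, psf, K):
    """Restore blurred image g with the Wiener filter of Eqs. (8)-(9).
    psf must be zero-padded to g.shape; K trades sharpening against noise."""
    g_mean = g.mean()                       # g-bar, standing in for f-bar
    G = np.fft.fft2(g - g_mean)             # subtract the mean as in Eq. (8)
    D = np.fft.fft2(psf)
    H = np.conj(D) / (np.abs(D) ** 2 + K)   # Eq. (9): D* / (D* D + K)
    return np.real(np.fft.ifft2(H * G)) + g_mean
```

Comparable routines exist in common libraries (e.g., skimage.restoration.wiener), but the direct form above keeps the role of K explicit.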
3. Experiments
A tabletop probe analysis system has been designed and built to
demonstrate the main ideas given in the previous section. The test bed
comprises three main parts: the micro vision system, the x-y-z stage
with a probe card mount, and a real-time controller. The overall
configuration of the micro vision system and the stage is shown in Fig.
4. The micro vision system is attached to the stage so that the camera
scans the probe tips through the x-y planar motion of the stage during
the inspection of probe pins. Each axis of the stage is equipped with a
linear motion system that consists of a lead screw, a linear guide, and
a stepper motor. The z axis is fixed at the focal distance of the lens and does not move during the measurement. A linear encoder is attached
beside the y axis frame and provides the position and the velocity of the
camera along the y axis. The basic specifications of the x-y stage and
the linear encoder are listed in Table 1.
The real-time controller can communicate with both the vision
camera and the stage independently so that the motion data and the
image data can be synchronized in the way explained in the previous
section.
The designed machine vision system is shown in Fig. 5. The main
functional requirement of the vision system is to capture the image of
the end tips so that we can gauge the (x, y) position and the radius of
each pin tip. For this, the vision camera needs to magnify and focus on a specific point in space. A 20× objective lens has been used in our test bed for magnification. LED (light-emitting diode) illumination is adopted as the light source and is placed on top of the probe card with a diffuser. The camera is oriented at a 90° angle with respect to the LED. A beam splitter is mounted between the LED and the lens; it transmits the light from the LED and projects the reflected image to the camera.
Fig. 6 shows a sample still image of probe pins with 20× objective
lens where the direction of the stage movement (i.e., y direction) is
Fig. 4 Experimental setup of the probe analyzer
Table 1 Specifications of the x-y stage
  Microstep (step size of the stepper): 47.625 nm
  Repeatability: ≤ 1 µm
  Encoder resolution (y axis): 0.5 µm
Fig. 5 Configuration of the machine vision component
Fig. 6 A sample still image of pin tips with 20× lens
indicated by the arrow. Each pin tip has a radius of around 3-8 µm, and the pins are located about 60 µm apart. The camera
model is STC-CL202A manufactured by Sensor Technologies America,
Inc. Specifications of the camera and its image acquisition system are
listed in Table 2. The exposure time has been chosen to be 10 ms based on the intensity of the light source. The transmission time can be computed from the size of the image data (1620 × 1236 pixels × 10 bits) and the speed of the Camera Link communication protocol (255 MB/s).
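As a worked check of the Table 2 entry (our own arithmetic; MB/s is taken as 10^6 bytes/s, and the small gap to the reported 9.87 ms is presumably protocol overhead):

```python
bits = 1620 * 1236 * 10          # image size in bits
t_tr = bits / 8 / 255e6          # transfer time in seconds at 255 MB/s
print(f"{t_tr * 1e3:.2f} ms")    # -> 9.82 ms, close to the 9.87 ms in Table 2
```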
As explained in Section 2.1, we need a deterministic real-time platform that can process the vision data and the motion data in synchrony. For this purpose, we adopted the RTOS (real-time operating system) provided by National Instruments Inc. and converted a regular
PC into an RT processing machine. The overall configuration of the
real-time controller and the associated interface hardware is shown in
Fig. 7 in more detail. The PC houses three separate interface boards to
communicate with the camera (i.e. the frame grabber), the x-y stage and
the linear encoder, respectively. In particular, the interface board for the
linear encoder serves as the main DAQ (Data Acquisition) hardware
and sends out the deterministic trigger signal to the frame grabber such
that the exact timing of image capturing is controlled precisely by the
real-time controller. The original image is converted to 8-bit data to be compatible with the LabVIEW image processing library functions.
4. Results and Discussion
The experimental test has been conducted by running the stage at five different speeds: 0.2, 0.4, 0.6, 0.8, and 1.0 mm/s. The trigger period of $T_g$ = 130 ms for the 1.0 mm/s stage speed is chosen considering the
data size of each image to be processed. Determining the actual size of
pixel data for image processing needs some consideration and will be
explained in the next paragraph. Fig. 8 shows the flowchart of each
experimental trial. As soon as the stage starts to move, images are captured at every trigger period, during which the pixel data are processed with the synchronized encoder signal to compute the x and y coordinates
as well as the radius of each pin tip. The computed pin information is
then stacked into a predefined array to be provided to the user at the
end of the test.
As shown in Fig. 8, the basic image processing algorithm for the
probe tip analysis consists of deblurring (Wiener filtering), thresholding
and finding circles to compute their center coordinates and radii.
Although these are relatively simple image processing algorithms, the total processing time for a 1620×1236 pixel image exceeds 7 seconds on a standard PC, which is not acceptable for our purpose. A simple way to reduce the image processing time is to reduce the size
of image data. From Fig. 6, we can first note that the pin tips are aligned
near the vertical center line and most of the side area is irrelevant to the
test. Secondly, for the range of speed we tested (up to 1 mm/s) and the
trigger period (130 ms), at least half of the total image data turns out
to overlap between two consecutive images. Therefore, we can trim
down some pixels along both x and y directions and, as a result, 255×
535 pixels are taken from the original image of 1620×1236 pixel size.
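The structure of this pipeline can be sketched as follows. The paper's implementation uses LabVIEW image processing library functions; the OpenCV version below, with illustrative threshold and Hough parameters, only mirrors the steps and reuses the wiener_deblur helper sketched in Section 2.2:

```python
import cv2
import numpy as np

def analyze_pins(cropped, psf, K=0.01, thresh=128, um_per_px=0.25):
    """Deblur -> threshold -> find circles, as in Fig. 8 (parameter values
    are illustrative, not those of the actual system)."""
    deblurred = np.clip(wiener_deblur(cropped.astype(float), psf, K),
                        0, 255).astype(np.uint8)
    _, binary = cv2.threshold(deblurred, thresh, 255, cv2.THRESH_BINARY)
    circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=50, param2=15, minRadius=10, maxRadius=40)
    if circles is None:
        return []
    # convert pixel coordinates and radii to micrometers (0.25 um/px)
    return [(x * um_per_px, y * um_per_px, r * um_per_px)
            for x, y, r in circles[0]]
```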
Another point considered in selecting the 255×535 pixel size for
Table 2 Specifications of the machine vision system
Camera:
  Communication protocol: Camera Link
  Resolution: 1620 (H) × 1236 (V) pixels
  Bit depth: 10 bits
  Maximum frame rate: 15 fps
  Sensor type: progressive scan CCD
Latency:
  Exposure time ($t_{ex}$): 10 ms
  Readout time ($t_{ro}$): 65 ms
  Transmission time ($t_{tr}$): 9.87 ms
Fig. 7 Configuration of real-time controller
Fig. 8 Real-time image processing algorithm
Fig. 9 Image overlapping for a cropped pin
online processing is that a small length along the y direction actually needs to overlap with the next image. This is because the image of a pin may end up sitting on the boundary with part of it cut off. Fig. 9 shows such a situation: the lower pin appears in its entirety in the left-hand image, while the same pin is clipped in the right-hand image. As long as the overlap length (denoted by $\Delta L_y$) is large enough to cover the entire length of a pin (including when it is blurred), we can process the pin regardless of clipping.
As a result of the image reduction explained above, the maximum image processing time has been reduced to around 70 ms. Note that the trigger period of our choice, $T_g$ = 130 ms, satisfies Eq. (2) since 130 ≥ max(70, 10 + 65 + 9.87) = 84.87 ms. Table 3 summarizes the parameters for the
experimental results. When the stage speed gets lowered, the trigger
period can be increased accordingly to keep the overlap length ∆Ly
similar for all speeds. For our experiment, the trigger period has been
increased by 65 ms as v is reduced by 0.2 mm/s.
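The trigger schedule of Table 3 follows directly from this rule, as the short check below shows (variable names are ours):

```python
for v in (0.2, 0.4, 0.6, 0.8, 1.0):              # stage speed, mm/s
    T_g = 130 + 65 * round((1.0 - v) / 0.2)      # +65 ms per 0.2 mm/s slower
    print(f"v = {v:.1f} mm/s -> T_g = {T_g} ms")
# prints 390, 325, 260, 195, 130 ms, matching Table 3
```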
In order to evaluate the inspection error of the method proposed in
this paper, still images are first taken for all probe pins to have their
position and radius values computed and saved as reference data. The
pixel resolution is around 0.25 µm/pixel and the repeatability of the x-
y stage is 1 µm at maximum as shown in Table 1. Thus, the base error
margin of the reference data itself can be considered around 1.25 µm
at maximum. Fig. 10 compares three different images of the same pins:
the still image for reference data (Fig. 10(a)), the blurred image captured
during the stage motion (Fig. 10(b)) and the restored image (Fig. 10(c)).
For each pin, the (x, y) coordinates and the radius r are printed in µm
scale on Figs. 10(a) and 10(c) for comparison. For three pins shown
here, the x coordinate data from the restored image coincide with those of the still image, while the y coordinate data and the radius of the bottom pin show some errors. Note that perfect image restoration is not possible in practice due to the unknown noise $w(n_x, n_y)$ (see Eq. (3)).
Experimental results have been collected from a batch of about 80
pins on the probe card. All the pins are inspected at once for each test
speed listed in Table 3. The inspection error for each pin is then
computed using the reference data a priori obtained from still images.
The mean and standard deviation of inspection errors are presented in
Fig. 11 using bar graphs. The circle mark for each speed denotes the
mean value of the inspection error and the line segment between the
circle and the upper (or lower) bar indicates the standard deviation.
Overall, the measurement error for the tip size r shows the smallest
standard deviation which is around 0.3 µm for all test speeds. The
standard deviations for position errors are a bit larger than that of r,
specifically, 0.5 µm for x axis and 0.8 µm for y axis. As expected, the
y direction that causes linear motion blur introduces the largest
inspection error. Nevertheless, the standard deviation is comparable to the base error margin (1.25 µm). The inspection error for the y coordinate may
Fig. 10 Comparison of sample images for three different cases
Table 3 Experimental test conditions
  Image size for online processing: 255 (H) × 535 (V) pixels
  Max. processing time: 70 ms
  Camera speed v: 0.2 / 0.4 / 0.6 / 0.8 / 1.0 mm/s
  Trigger period $T_g$: 390 / 325 / 260 / 195 / 130 ms
Fig. 11 Mean and standard deviation of measurement errors of tip
position and radius
Fig. 12 Histograms that show error distribution of x, y and r for two
different speeds of camera movement
also be attributed to the resolution of the encoder, which is 0.5 µm (see Table 1). One interesting aspect of the results shown in Fig. 11 is that
the standard deviation of inspection errors is not necessarily
proportional to the test speed. This suggests that the Wiener filtering
used for deblurring may perform with similar mean square errors for
the range of speeds tested in this paper.
To take a more detailed look, the actual distributions of inspection
errors are plotted in Fig. 12 by histograms for two test speeds: 0.8 mm/
s and 1.0 mm/s. ex, ey and er denote the estimation errors for the x, y and
r values, respectively. The frequency data are normalized by the total
number of pins. We can see that the inspection errors are close to the
normal (Gaussian) distribution. For the number of pins tested, the
maximum error for the r and x values is around ±1 µm, while that of the y coordinate is around ±1.5 µm.
To compare the inspection speed, a separate test algorithm has been
created and tested based on the primitive stop-and-go approach. The inspection speed in this case is observed to reach at most 2 pins per second for the same probe card. The probe pins are about 60 µm apart, so the proposed probe card analysis technique operated at a stage speed of 1 mm/s yields an inspection speed equivalent to 16.7 pins per second, which corresponds to a more than 8× improvement over the stop-and-go approach.
5. Concluding Remarks
We studied a dynamic imaging approach for fast visual inspection. The main idea is to operate the vision-based inspection
on-the-fly while the camera (or the target object) is continuously
moving. In doing so, the position measurement from the encoder is first synchronized with the image data, which are captured by a controlled trigger signal in a real-time setting. Capturing images from a moving
camera creates blurring in the image. The deblurring technique has
been employed to restore the original still images from blurred ones.
Experimental results with the probe card analysis show that the
inspection speed can be significantly increased compared to the
conventional stop-and-go approach with the inspection error still
contained within the range of base error margins corresponding to the
quantization level of encoder and the pixel resolution. The
implementation shown in this paper can be useful to other high density
visual inspection processes where the inspection time is a critical issue.
It should be mentioned that image blurring can also be mitigated in other ways, such as using a high speed camera with impulse lighting. However, this comes at the expense of hardware modification and increased cost. The image restoration technique introduced in
this paper can be considered as a simple means to improve the
performance of a vision-based inspection system without additional
hardware and system modification. In fact, it can be applied to any
existing vision-based inspection system for further improvement in the
inspection speed by allowing the system to run at higher speeds.
ACKNOWLEDGEMENT
The work in this paper has been supported by the National Research
Foundation (NRF) of Korea under the Global Research Network
Program.
REFERENCES
1. Sun, T. H., Tseng, C. C., and Chen, M. S., “Electric Contacts
Inspection Using Machine Vision,” Image and Vision Computing,
Vol. 28, No. 6, pp. 890-901, 2010.
2. Mann, W. R., Taber, F. L., Seitzer, P. W., and Broz, J. J., “The
Leading Edge of Production Wafer Probe Test Technology,” Proc. of
International Test Conference, pp. 1168-1195, 2004.
3. West, P. C., “Machine Vision in Practice,” IEEE Transactions on
Industry Applications, Vol. IA-19, No. 5, pp. 794-801, 1983.
4. Tao, Y., Hu, H., and Zhou, H., “Integration of Vision and Inertial
Sensors for 3D Arm Motion Tracking in Home-based Rehabilitation,”
The International Journal of Robotics Research, Vol. 26, No. 6, pp.
607-624, 2007.
5. Jeon, S., Tomizuka, M., and Katou, T., “Kinematic Kalman Filter
(KKF) For Robot End-Effector Sensing,” Journal of Dynamic
Systems, Measurement, and Control, Vol. 131, No. 2, Paper No.
021010, 2009.
6. Bovik, A. C., “Handbook of Image and Video Processing,” Academic
Press, pp. 71-257, 2010.
7. Banham, M. R. and Katsaggelos, A. K., “Digital Image Restoration,”
IEEE Signal Processing Magazine, Vol. 14, No. 2, pp. 24-41, 1997.
8. Nayar, S. K. and Ben-Ezra, M., “Motion-based Motion Deblurring,”
IEEE Transactions on Pattern Analysis and Machine Intelligence,
Vol. 26, No. 6, pp. 689-698, 2004.
9. Grimaldi, D., Lamonaca, F., and Macrì, C., “Correction of the
Motion Blur Alteration in the Human Lymphocyte Micro-Nucleus
Image based on Wiener's Deconvolution," Proc. of 16th IMEKO
TC4 Symposium, pp. 22-24, 2008.