Intelligent vision system for grasping error compensation of a wafer manipulation robot.
Stanciu, Mihai; Nicolescu, Adrian; Enciu, George et al.
1. INTRODUCTION
Most of the industrial robots used in silicon wafer manipulation
are SCARA-type articulated arms with 3 or 4 degrees of freedom.
The silicon wafer manipulation process is generally performed in vacuum
or in a clean room environment. At present the semiconductor industry
is optimizing the manufacturing process using 200 mm and 300 mm diameter
wafers, with an average manipulation speed of 1.2 m/s. The silicon wafer
weight is in the range 0.5-0.8 kg, but because of the very high
manipulation speed, which also implies high acceleration values along
the trajectory, a positioning error of the wafer in the grasping
mechanism occurs in 5-8% of the cases (***, 2001). This error affects
the final positioning of the wafer at the dropping point (which can be
a slot in a wafer cassette or a processing station).
The authors of this paper developed a dedicated vision system able
to detect the positioning error and to provide the correction parameters
through a feedback loop to the robot controller (Nicolescu et al., 2002).
The error evaluation and the calculation of the correction parameters
are performed in the final stage of the trajectory, through the dropping
point, and the robot controller compensates the error by changing the
coordinates of the programmed endpoint (Stanciu et al., 1997).
2. VISION SYSTEM ARCHITECTURE
The vision system includes (fig. 1):
* 1 or 2 video cameras installed on the bottom side of the robot
end effector (the 3 degrees of freedom version of the robot requires
only one video camera, since its kinematics allow error compensation in
only one direction, the radial one; for the 4 degrees of freedom robot,
2 cameras are used to detect the two components of the error in a
horizontal plane);
* a computer with the image processing software installed and two
free USB ports;
* 1 or 2 USB cables connecting the cameras to the computer;
* an RS232 serial cable connecting the computer to the robot
controller.
[FIGURE 1 OMITTED]
The positions of the video cameras on the robot end effector are
correlated with the wafer diameter. If the wafer is correctly
positioned, its edge covers half of each camera's visual field, which
allows the detection of both positive and negative error values. This
set-up increases the accuracy of the edge detection routine by
maximizing the image contrast. At the same time, to increase the speed
of the error detection software, grayscale images of reduced size and
with less color information are used, as in the sketch below.
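A minimal sketch of this preprocessing step, assuming OpenCV is
available; the camera index and scale factor are illustrative choices,
not values from the paper:

    import cv2

    def capture_reduced_gray(camera_index=0, scale=0.5):
        # Grab one frame from the camera installed on the end effector.
        cap = cv2.VideoCapture(camera_index)
        ok, frame = cap.read()
        cap.release()
        if not ok:
            raise RuntimeError("camera read failed")
        # Drop the color information and reduce the image size so the
        # edge detection routine has less data to process.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return cv2.resize(gray, None, fx=scale, fy=scale)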
3. VISION SYSTEM CALIBRATION
The calibration process of the vision system presented above is
performed automatically when the working cycle of the robot is started.
In the first step the robot picks up the reference wafer from a wafer
holder. The vision system detects the wafer edge and sets this edge
(Guerra, 2000) as the calibration reference line (the line position is
denoted pr). In the second step the robot picks up the reference wafer
from the wafer holder with an offset of 10 mm. The vision system detects
the wafer edge (p10) and sets the vision system conversion parameter
(cp) as the difference between this edge and the calibration reference
line (fig. 2). In the last step the vision system aspect ratio parameter
(ar) is calculated:

ar = 10 / |cp|  [mm/pixel]  (1)

cp = p_{10} - p_r  [pixels]  (2)
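A minimal sketch of equations (1) and (2), assuming the two edge
positions have already been detected in pixels; the sample values are
illustrative, chosen to match the 3.2 pixels/mm resolution mentioned in
section 4:

    def calibrate(p_r, p_10, offset_mm=10.0):
        # p_r  : reference line of the zero-error wafer [pixels]
        # p_10 : edge detected with the wafer offset by 10 mm [pixels]
        cp = p_10 - p_r            # conversion parameter, eq. (2)
        ar = offset_mm / abs(cp)   # aspect ratio parameter, eq. (1) [mm/pixel]
        return ar

    # Illustrative values: a 32-pixel shift for a 10 mm offset
    # gives ar = 0.3125 mm/pixel, i.e. 3.2 pixels/mm.
    ar = calibrate(p_r=240, p_10=272)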
[FIGURE 2 OMITTED]
[FIGURE 3 OMITTED]
In conclusion, during the calibration stage the reference line of a
zero-error positioned wafer is detected, and the vision system aspect
ratio parameter, used later in the error evaluation, is calculated
(fig. 3). To optimize the error detection process, the reference wafer
holder is used as a working station in the application architecture. At
the same time, the vision system is recalibrated periodically, after
every 50-60 robot working cycles, to maintain the accuracy of the error
evaluation process.
4. ERROR MEASUREMENT
The positioning error evaluation process is performed in two
different ways, depending on the number of robot DOF. A 3-axis robot can
compensate the positioning error in only one direction. In this case one
video camera is used, positioned on the longitudinal symmetry axis of
the robot effector. The edge detection algorithm selects the 2 points
defining the intersection between the wafer edge and the left and right
image limits respectively (Spacek, 1986). Only one point is required to
determine the error value, but using a second one increases the accuracy
of the calculation routine (fig. 4), as in the sketch below.
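A minimal sketch of the one-direction evaluation, assuming the two
intersection points are available as row positions (in pixels) at the
left and right image limits; averaging them is one plausible way to use
the second point, which the paper does not detail:

    def radial_error_mm(p_left, p_right, p_ref, ar):
        # p_left, p_right : edge rows at the left/right image limits [pixels]
        # p_ref           : calibration reference line [pixels]
        # ar              : aspect ratio parameter [mm/pixel], eq. (1)
        edge = (p_left + p_right) / 2.0     # second point improves accuracy
        return (edge - p_ref) * ar          # signed radial error in mm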
A 4-axis robot can compensate both components of the positioning
error. In this case two video cameras are used, positioned on the sides
of the robot effector. In both images, the edge detection algorithm
selects the two points defining the intersection between the wafer edge
and the bottom and, respectively, the right or left image limits. Only
two points are required to determine the error value, but using a second
set results in an increased accuracy of the calculation routine
(fig. 5), as sketched below.
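For the 4-axis case, a short sketch under the same assumptions, reusing
the one-direction routine from the previous sketch once per camera to
obtain the two error components in the horizontal plane (the
camera-to-axis pairing is illustrative):

    def planar_error_mm(points_cam1, points_cam2, refs, ar):
        # points_cam1/2 : (point_a, point_b) edge intersections per image [pixels]
        # refs          : (ref1, ref2) calibration reference lines [pixels]
        e1 = radial_error_mm(points_cam1[0], points_cam1[1], refs[0], ar)
        e2 = radial_error_mm(points_cam2[0], points_cam2[1], refs[1], ar)
        return e1, e2   # two components of the positioning error [mm]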
Usually the aspect ratio parameter corresponds to a vision system
resolution of about 3-3.2 pixels/mm, and the overall accuracy of the
positioning error routine is in the range 0.3-0.35 mm. This value is
very rough compared to the usual 0.1 mm accuracy of this type of robot.
A more advanced video camera can be used to increase the accuracy, but
the vision system resolution will still be limited to 5-6 pixels/mm. To
solve this problem the authors developed a mathematical method that
increases the resolution through the "devised pixel" technique.
Fig. 6 presents an enlarged image of a detected point together with
its 8 neighbors. The light intensity value of every pixel is also marked
on the figure. With a standard resolution algorithm, the detected point
on the edge of the wafer is the pixel marked in fig. 6 with the light
intensity value 155. Assuming also that the vision system resolution is
3 pixels/mm, the resulting reference point position is as shown in
fig. 7.
[FIGURE 4 OMITTED]
[FIGURE 5 OMITTED]
[FIGURE 6 OMITTED]
[FIGURE 7 OMITTED]
[FIGURE 8 OMITTED]
When applying the "devised pixel" technique, it is assumed that the
weight of every pixel is proportional to its light intensity as defined
in fig. 6. In this case the position of the reference point is the
position of the center of gravity of the 3x3 neighborhood in fig. 6. The
equations of the center of gravity are presented in (3) and (4):

x = \frac{\sum_{i=1}^{9} x_i I_i}{\sum_{i=1}^{9} I_i}  (3)

y = \frac{\sum_{i=1}^{9} y_i I_i}{\sum_{i=1}^{9} I_i}  (4)
For the image presented in fig. 6 the results are x = 0.44 mm and
y = 0.59 mm. Assuming the same vision system resolution (3 pixels/mm),
the reference point position determined with the "devised pixel"
technique is as shown in fig. 8, and can be computed as in the sketch
below.
Experimental research revealed that using the "devised pixel"
technique it is possible to achieve an accuracy of 0.05 mm with a
standard video camera (resolution 800x600 pixels).
5. CONCLUSION
Based on the requirement to minimize the positioning errors that
result in wafer damage, the authors developed the vision system
presented in this paper. The error detection model is based on an
original "devised pixel" technique that allows a significant increase
in accuracy with standard hardware and low processing resources.
6. REFERENCES
***. (2001). SHR3000 - Manual, JEL Corporation.
Guerra, C. (2000). A VLSI Algorithm for the Optimal Detection of a
Curve, CAPAIDM, No. 86, pp. 197-202.
Nicolescu, A.; Popescu, D. & Costian, D. (2002). Industrial robot
assisted design using the Catia P3 V5R7 environment, Constructia de
masini, Vol. 54, No. 6, ISSN 0573-7419, OID-ICM, Bucuresti.
Spacek, L. A. (1986). Edge Detection and Motion Detection, IVC, No. 4,
pp. 43-56.
Stanciu, M. D.; Nicolescu, A. & Ispas, C. (1997). Identification of
Joint's Elastic Behavior Influence on Volumetric Accuracy of Industrial
Robots, Proceedings of the IEEE 6th International Workshop RAAD'97,
pp. 163-168, ISBN 88-87054-00-2, STUDIO 22 Edizioni, Cassino, Italy.