Similar Documents
20 similar documents found.
1.
One of the problems that slows the development of off-line programming is the low static and dynamic positioning accuracy of robots. Robot calibration improves the positioning accuracy and can also be used as a diagnostic tool in robot production and maintenance. This work presents techniques for modeling and performing robot calibration processes with off-line programming using a 3D vision-based measurement system. The measurement system is portable, accurate and low cost, consisting of a single CCD camera mounted on the robot tool flange to measure the robot end-effector pose relative to a world coordinate system. Radial lens distortion is included in the photogrammetric model. Scale factors and image centers are obtained with innovative techniques, making use of a multiview approach. Results show that the achieved average accuracy using a common off-the-shelf CCD camera varies from 0.2 to 0.4 mm at distances from 600 to 1000 mm from the target, respectively, with different camera orientations. Experimentation is performed on two industrial robots, an ABB IRB-2400 and a PUMA-500, to test the position accuracy improvement obtained with the proposed calibration system. The robots were calibrated in different regions and volumes within their workspace, achieving locally measured accuracy three to six times better after calibration than before. The proposed off-line robot calibration system is fast, accurate and easy to set up.
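As background to the photogrammetric model mentioned above, the following minimal Python sketch projects a 3D world point into distorted pixel coordinates using a pinhole model with radial lens distortion. All parameter values are hypothetical and purely illustrative; this is not the paper's implementation.

```python
import numpy as np

def project_point(Xw, R, t, fx, fy, cx, cy, k1, k2):
    """Pinhole projection with two radial distortion coefficients (k1, k2)."""
    Xc = R @ Xw + t                       # world frame -> camera frame
    x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]   # normalized image coordinates
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2      # radial distortion factor
    return np.array([fx * d * x + cx, fy * d * y + cy])

# Hypothetical parameters: camera looking straight at a target 800 mm away
R, t = np.eye(3), np.array([0.0, 0.0, 800.0])
print(project_point(np.array([50.0, -20.0, 0.0]), R, t,
                    fx=1200.0, fy=1200.0, cx=320.0, cy=240.0, k1=-0.2, k2=0.0))
```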

2.
Autonomous robot calibration using vision technology
Yan Meng, Hanqi Zhuang. Robotics and Computer-Integrated Manufacturing, 2007, 23(4): 436-446
Unlike traditional robot calibration methods, which need expensive external calibration apparatus and elaborate setups to measure 3D feature points in the reference frame, this paper proposes a vision-based self-calibration method for a serial robot manipulator that requires only a ground-truth scale in the reference frame. The proposed algorithm assumes that the camera is rigidly attached to the robot end-effector, which makes it possible to obtain the pose of the manipulator from the pose of the camera. By designing a manipulator movement trajectory, the camera poses can be estimated up to a scale factor at each configuration with the factorization method, where a nonlinear least-squares algorithm is applied to improve its robustness. An efficient approach is proposed to estimate this scale factor. The great advantage of this self-calibration method is that only image sequences of a calibration object and a ground-truth length are needed, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Simulations and experimental studies on a PUMA 560 robot demonstrate the convenience and effectiveness of the proposed robot self-calibration approach.
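The paper's scale-factor estimator is only summarized above; one straightforward way a metric scale can be fixed from a single ground-truth length is sketched below (illustrative only, with invented numbers).

```python
import numpy as np

def metric_scale(pts_up_to_scale, i, j, true_length):
    """Scale factor that turns an up-to-scale reconstruction into metric units,
    given the known real distance between reconstructed points i and j."""
    return true_length / np.linalg.norm(pts_up_to_scale[i] - pts_up_to_scale[j])

# Hypothetical up-to-scale reconstruction of four calibration-object points
pts = np.array([[0.0, 0.0, 5.0], [0.12, 0.0, 5.0], [0.0, 0.12, 5.1], [0.12, 0.12, 5.1]])
s = metric_scale(pts, 0, 1, true_length=60.0)   # the 0-1 edge is 60 mm in reality
pts_metric = s * pts                            # metric reconstruction (mm)
print(s, pts_metric[1])
```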

3.
A measurement technique for kinematic calibration of robot manipulators, which uses a stereo hand-eye system with moving camera coordinates, is presented in this article. The calibration system consists of a pair of cameras rigidly mounted on the robot end-effector, a camera calibration board, and a robot calibration fixture. The stereo cameras are precalibrated using the camera calibration board so that the 3D coordinates of any object point seen by the stereo cameras can be computed with respect to the camera coordinate frame [C] defined by the calibration board. Because [C] is fixed with respect to the tool frame [T] of the robot, it moves with the robot hand from one calibration measurement configuration to another. On each face of the robot calibration fixture that defines the world coordinate frame [W], there are evenly spaced dot patterns of uniform shape. Each pattern defines a coordinate frame [Ei], whose pose is known in [W]. The dot pattern is designed in such a way that from a pair of images of the pattern, the pose of [Ei] can be estimated with respect to [C] in each robot calibration measurement. By that means the pose of [C] becomes known in [W] at each robot measurement configuration. For a sufficient number of measurement configurations, the homogeneous transformation from [W] to [C] (or equivalently to [T]), and thus the link parameters of the robot, can be identified using the least-squares techniques. Because the cameras perform local measurements only, the field-of-view of the camera system can be as small as 50 × 50 mm², resulting in an overall accuracy of the measurement system as high as 0.05 mm. This is at least 20 times better than the accuracy provided by vision-based measurement systems with a fixed camera coordinate frame using common off-the-shelf cameras. © 1994 John Wiley & Sons, Inc.
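The step in which the pose of a pattern frame [Ei] is estimated with respect to the camera frame [C] from triangulated dot positions amounts to an absolute-orientation (rigid registration) problem. The sketch below shows a generic SVD-based solution (Kabsch algorithm) with made-up data; it is not the authors' code.

```python
import numpy as np

def rigid_transform(P_model, P_meas):
    """Best-fit R, t with R @ p_model + t ~ p_meas (Kabsch algorithm).
    P_model: Nx3 dot positions known in the pattern frame [Ei];
    P_meas:  Nx3 positions of the same dots triangulated in the camera frame [C].
    The result is the pose of [Ei] expressed in [C]."""
    cm, cs = P_model.mean(axis=0), P_meas.mean(axis=0)
    H = (P_model - cm).T @ (P_meas - cs)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    return R, cs - R @ cm

# Hypothetical square dot pattern observed after a known rigid motion
P_model = np.array([[0, 0, 0], [10, 0, 0], [10, 10, 0], [0, 10, 0]], float)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
P_meas = (R_true @ P_model.T).T + np.array([5.0, 2.0, 300.0])
R, t = rigid_transform(P_model, P_meas)
print(np.round(R, 3), np.round(t, 3))
```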

4.
Traditional robot calibration uses either model-based or modeless methods. In the modeless method, position error compensation consists of moving the robot end-effector to a target position in the workspace and estimating the position error at that target by bilinear interpolation of the errors at the four neighboring grid points around it. A camera or other measurement device can be used to measure these position errors, and the interpolated result is then used to compensate the error. This paper presents a novel fuzzy interpolation method that improves on the compensation accuracy obtained with bilinear interpolation. A dynamic online fuzzy inference system is implemented to meet the needs of a fast real-time control system and the calibration environment. Simulation results show that the compensation accuracy can be greatly improved by the fuzzy interpolation method compared with bilinear interpolation.
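For reference, the bilinear-interpolation baseline that the fuzzy method improves on can be written in a few lines. The grid-cell coordinates and error vectors below are hypothetical.

```python
import numpy as np

def bilinear_error(x, y, x0, x1, y0, y1, e00, e10, e01, e11):
    """Position error at (x, y), interpolated from the errors measured at the
    four neighboring grid points (x0,y0), (x1,y0), (x0,y1), (x1,y1)."""
    tx, ty = (x - x0) / (x1 - x0), (y - y0) / (y1 - y0)
    return ((1 - tx) * (1 - ty) * e00 + tx * (1 - ty) * e10 +
            (1 - tx) * ty * e01 + tx * ty * e11)

# Hypothetical 2D error vectors (mm) at the four corners of one grid cell
e00, e10 = np.array([0.30, -0.10]), np.array([0.42, -0.05])
e01, e11 = np.array([0.25, -0.20]), np.array([0.38, -0.12])
err = bilinear_error(12.0, 7.0, 10.0, 20.0, 0.0, 10.0, e00, e10, e01, e11)
print(err)   # estimated error to be compensated at the target position
```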

5.
Mobile manipulators (MMs) have been increasingly adopted for machining large and complex components. In order to ensure machining efficiency and quality, the MMs usually need to cooperate with each other. However, due to the low motion accuracy of the mobile platform, the relative pose accuracy between the coordinated MMs is difficult to guarantee, so an effective calibration method is needed to obtain the relative pose of the MMs on-line. For this purpose, a vision-based fast base frame calibration method is proposed in this paper, which can quickly and accurately obtain the relative pose between the coordinated MMs. The method only requires adding a camera and a marker; a frame network of the calibration system is then generated by installing the marker at three different positions. Based on the Perspective-n-Point principle and the robot forward kinematics, the transformation matrices of the marker frame with respect to the camera frame and the robot base frame can be determined simply by capturing images of the marker at the different positions together with the corresponding robot joint angles. The relative pose between the base frames of the coordinated MMs is then determined from a calibration equation established over a closed chain of frames. In addition, the calibration method is capable of real-time calculation by dividing the calibration process into off-line and on-line stages. Simulation and experimental results have verified the effectiveness of the proposed method.
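A minimal sketch of the per-image measurement such a method relies on, using OpenCV's solvePnP to recover the marker pose in the camera frame, with comments indicating how the transforms chain into the robot base frames. The marker geometry, intrinsics and hand-eye transform names are assumptions for illustration, not values from the paper.

```python
import numpy as np
import cv2

def marker_pose_in_camera(obj_pts, img_pts, K, dist):
    """Pose of the marker frame w.r.t. the camera frame from one image (PnP)."""
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T   # T_camera_marker

# Hypothetical marker: a 100 mm square, observed by a camera with intrinsics K
obj_pts = np.array([[0, 0, 0], [100, 0, 0], [100, 100, 0], [0, 100, 0]], np.float32)
K = np.array([[900.0, 0.0, 640.0], [0.0, 900.0, 360.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)
rvec_true, tvec_true = np.array([0.1, -0.2, 0.05]), np.array([50.0, -30.0, 800.0])
img_pts, _ = cv2.projectPoints(obj_pts, rvec_true, tvec_true, K, dist)
T_cam_marker = marker_pose_in_camera(obj_pts, img_pts, K, dist)
print(np.round(T_cam_marker, 3))

# With a hand-eye transform T_flange_camera and forward kinematics T_base_flange:
#   T_base_marker = T_base_flange @ T_flange_camera @ T_cam_marker
# and, via the marker shared by two robots (closed chain of frames):
#   T_base1_base2 = T_base1_marker @ np.linalg.inv(T_base2_marker)
```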

6.
For most visual servo systems, accurate camera/robot calibration is essential for precision tasks, such as tracking time-varying end-effector trajectories in the image plane of a remote (or fixed) camera. This paper presents details of control-theoretic approaches to the calibration and control of monocular visual servo systems in the case of a planar robot with a workspace perpendicular to the optical axis of the imaging system. An on-line adaptive calibration and control scheme is developed, along with an associated stability and convergence theorem. A redundancy-based refinement of this scheme is proposed and demonstrated via simulation.

7.
Multi-camera vision is widely used for guiding a machining robot to remove flash and burrs on complex automotive castings and forgings with arbitrary initial posture. Aiming at the problems of insufficient field of view and regional occlusion in actual machining, a gradient-weighted multi-view calibration method (GWM-View) is proposed for machining robot positioning based on convergent binocular vision. Specifically, the mapping between each auxiliary camera and the main camera in the multi-view system is calculated from the inverse equation and the intrinsic parameter matrix. Then, a gradient-weighted suppression algorithm is introduced to filter out the errors caused by camera angle variation. Next, the spatial coordinates of the feature points after suppression are used to correct the transformation matrix. Finally, a hand-eye calibration algorithm transforms the corrected data into the robot base coordinate system for accurate positioning of the robot under multiple views. Experiments on an automotive engine flywheel shell show that the average positioning error is kept within 1 mm under different postures. The stability and robustness of the proposed method are improved while the positioning accuracy of the machining robot meets the requirements.

8.
Camera calibration for large-field-of-view binocular stereo vision
For large-field-of-view vision measurement applications, and based on an analysis of the camera imaging model, a freely rotatable cross-shaped target was designed and fabricated to achieve accurate calibration of a large-field-of-view binocular stereo camera pair. The cross target is placed uniformly at multiple positions within the measurement volume, and the two cameras synchronously capture multiple images of the target. Initial values of the camera parameters are obtained from the essential matrix, and the optimal solution is then obtained by self-calibrating bundle adjustment. The method does not require the feature points to be coplanar; only the physical distances between the feature points must be known, which reduces the difficulty of target fabrication. Measurements with the TN3DOMS.S system on a standard reference bar over a 1500 mm × 1500 mm measurement range give an RMS error of 0.06 mm.
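The "initial values from the essential matrix" step can be sketched with OpenCV as below; the self-calibrating bundle adjustment that produces the final solution is not reproduced here, and the function is a generic illustration rather than the paper's code.

```python
import cv2

def initial_relative_pose(pts1, pts2, K):
    """Initial relative pose of the second camera w.r.t. the first from matched
    target points, via the essential matrix (translation known only up to scale)."""
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```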

9.
The paper deals with the geometric and elastostatic calibration of a robotic manipulator using partial pose measurements, which do not provide the end-effector orientation. The main attention is paid to improving the efficiency of the identification procedure. In contrast to previous works, the developed calibration technique is based on direct measurements only. To improve the identification accuracy, it is proposed to use several reference points for each manipulator configuration. This avoids the problem of non-homogeneity of the least-squares objective, which arises in the classical identification technique with full pose information (position and orientation). Its efficiency is confirmed by a comparison analysis that evaluates the accuracy of different identification strategies. The obtained theoretical results have been successfully applied to the geometric and elastostatic calibration of a serial industrial robot employed in a machining work cell for the aerospace industry.
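The idea of a homogeneous, position-only least-squares objective can be illustrated on a toy model. The sketch below identifies the link lengths of a hypothetical planar 2-link arm from reference-point positions; it is not the geometric/elastostatic model of the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def fk(params, q):
    """Planar 2-link forward kinematics; params = (l1, l2) are link lengths."""
    l1, l2 = params
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def residuals(params, qs, measured):
    """Stacked position errors over all configurations; no orientation terms,
    so the objective remains homogeneous."""
    return np.concatenate([fk(params, q) - m for q, m in zip(qs, measured)])

# Hypothetical data: the true link lengths differ slightly from the nominal model
true_params, nominal = np.array([0.502, 0.398]), np.array([0.5, 0.4])
qs = [np.array([a, b]) for a in np.linspace(0, 1.2, 5) for b in np.linspace(-1, 1, 5)]
measured = [fk(true_params, q) for q in qs]
sol = least_squares(residuals, nominal, args=(qs, measured))
print(sol.x)   # recovers the true link lengths
```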

10.
Generic camera calibration is a non-parametric calibration technique that is applicable to any type of vision sensor. However, the standard generic calibration method was developed such that both central and non-central cameras can be calibrated within the same framework. Consequently, existing parametric calibration techniques cannot be applied for the common case of cameras with a single centre of projection (e.g. pinhole, fisheye, hyperboloidal catadioptric). This paper proposes improvements to the standard generic calibration method for central cameras that reduce its complexity, and improve its accuracy and robustness. Improvements are achieved by taking advantage of the geometric constraints resulting from a single centre of projection in order to enable the application of established pinhole calibration techniques. Input data for the algorithm is acquired using active grids, the performance of which is characterised. A novel linear estimation stage is proposed that enables a well established pinhole calibration technique to be used to estimate the camera centre and initial grid poses. The proposed solution is shown to be more accurate than the linear estimation stage of the standard method. A linear alternative to the existing polynomial method for estimating the pose of additional grids used in the calibration is demonstrated and evaluated. Distortion correction experiments are conducted with real data for both an omnidirectional camera and a fisheye camera using the standard and proposed methods. Motion reconstruction experiments are also undertaken for the omnidirectional camera. Results show the accuracy and robustness of the proposed method to be improved over those of the standard method.

11.
Camera calibration is the foundation of precision vision measurement, and traditional calibration methods have many shortcomings. A new binocular stereo camera calibration method is proposed that introduces the gene expression programming (GEP) algorithm to mine the latent functional relationships between coordinates. The GEP calibration method was compared with similar schemes, and the experimental results show that the new algorithm effectively improves calibration accuracy and shortens computation time, giving it high practical value.

12.
Kinematic calibration is an effective and economical way to improve the accuracy of a surgical robot, and in most cases it is a necessary procedure before the robot is put into operation. This study investigates a novel kinematic calibration method in which the effect of controller error is taken into account when formulating the model based on screw theory, applied to the kinematic control of a magnetic-resonance-compatible surgical robot. Based on screw theory, a kinematic error model is established for the relationship between the controller error and the deviation of the measured pose of the end-effector. The controller error can therefore be identified and the controller parameters adjusted accordingly. A control strategy based on the kinematic calibration framework is proposed. Using an artificial neural network, the deviation of the end-effector in an arbitrary configuration can be obtained effectively. Comparative experiments carried out with a commercial visual system and joint encoders show the validity and effectiveness of the proposed framework.
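A screw-theory (product-of-exponentials) forward-kinematics kernel, of the generic kind such error models are built on, is sketched below. It is a standard textbook construction with a hypothetical one-joint example, not the calibration model of the paper.

```python
import numpy as np

def hat(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def exp_twist(xi, theta):
    """Homogeneous transform exp([xi] * theta) for a unit twist xi = (w, v)."""
    w, v = xi[:3], xi[3:]
    W = hat(w)
    R = np.eye(3) + np.sin(theta) * W + (1 - np.cos(theta)) * W @ W
    G = np.eye(3) * theta + (1 - np.cos(theta)) * W + (theta - np.sin(theta)) * W @ W
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, G @ v
    return T

def fk_poe(twists, thetas, M):
    """Product-of-exponentials forward kinematics: T = exp(xi1 t1)...exp(xin tn) M."""
    T = np.eye(4)
    for xi, th in zip(twists, thetas):
        T = T @ exp_twist(xi, th)
    return T @ M

# Hypothetical example: one revolute joint about z through the origin, tool 0.5 m along x
xi = np.array([0, 0, 1, 0, 0, 0])
M = np.eye(4); M[0, 3] = 0.5
print(np.round(fk_poe([xi], [np.pi / 2], M), 3))
```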

13.
A sub-pixel feature point extraction algorithm is proposed and combined with the PnP method and the orthogonal iteration (OI) algorithm to compute the pose of a table-tennis robot body. Exploiting the approximately Gaussian distribution of the blur spot formed during imaging, the edges of the colour-marker blocks are located with sub-pixel accuracy, and high-precision corner points obtained from the intersections of the fitted edge lines are used as feature points. An initial estimate of the robot pose is computed from these feature points with the PnP algorithm and then refined with the OI algorithm to guarantee the orthogonality of the attitude matrix. Experimental results show that the method achieves fast and accurate pose measurement of the robot body.
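The corner-from-edge-intersection idea translates directly into code: fit a line to each set of sub-pixel edge samples and intersect the two lines. The edge samples below are synthetic and purely illustrative.

```python
import numpy as np

def fit_line(pts):
    """Total-least-squares line fit: returns (n, d) with n . p + d = 0 on the line."""
    c = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - c)
    n = Vt[-1]            # direction of smallest variance = line normal
    return n, -n @ c

def corner_from_edges(edge1_pts, edge2_pts):
    """Sub-pixel corner point as the intersection of two fitted edge lines."""
    (n1, d1), (n2, d2) = fit_line(edge1_pts), fit_line(edge2_pts)
    return np.linalg.solve(np.vstack([n1, n2]), -np.array([d1, d2]))

# Hypothetical sub-pixel edge samples along two sides of a colour-marker block
e1 = np.array([[x, 0.37 + 0.01 * x] for x in np.linspace(10, 30, 20)])
e2 = np.array([[20.25 + 0.02 * y, y] for y in np.linspace(5, 25, 20)])
print(corner_from_edges(e1, e2))   # high-precision corner used as a feature point
```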

14.
In existing omnidirectional camera calibration algorithms, prior knowledge of the scene is often difficult to obtain accurately. A HALCON-based calibration algorithm for hyperboloidal catadioptric omnidirectional cameras is proposed. Images of a calibration board at different positions are acquired by moving the camera or the board; corner coordinates are extracted after preprocessing in HALCON; and the intrinsic and extrinsic camera parameters are obtained by solving for the expansion coefficients of the calibration-point projection equations with least squares. The camera only needs to satisfy the single-viewpoint requirement; no prior knowledge of the scene and no special apparatus or equipment are required. Experimental results show that high calibration accuracy can be achieved, and accurate results were obtained when measuring the distance to the sideline for a soccer robot.

15.
Differential-drive mobile robots are usually equipped with video cameras for navigation purposes. In order to ensure proper operational capabilities of such systems, several calibration steps are required to estimate the video-camera intrinsic and extrinsic parameters, the relative pose between the camera and the vehicle frame, and the odometric parameters of the vehicle. In this paper, simultaneous estimation of the aforementioned quantities is achieved by a novel and effective calibration procedure. The proposed calibration procedure needs only a proper set of landmarks, on-board measurements given by the wheel encoders, and the camera (i.e., a number of properly taken camera snapshots of the set of landmarks). A major advantage of the proposed technique is that the robot is not required to follow a specific path: the vehicle is asked to roughly move around the landmarks and acquire at least three snapshots at some approximately known configurations. Moreover, since the whole calibration procedure does not use external measurement devices, it can be used to calibrate, on-site, a team of mobile robots with respect to the same inertial frame, given by the position of the landmarks' tool. Finally, the proposed algorithm is systematic and does not require any iterative step. Numerical simulations and experimental results, obtained using a mobile robot Khepera III equipped with a low-cost camera, confirm the effectiveness of the proposed technique.
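The odometric model whose parameters (wheel radius, axle length) such a procedure estimates can be sketched as follows. The numerical values are rough, Khepera-III-like guesses for illustration only.

```python
import numpy as np

def odometry_step(pose, d_left, d_right, wheel_radius, axle_length):
    """Update the planar pose (x, y, theta) of a differential-drive robot from
    incremental wheel rotations d_left, d_right (radians)."""
    sl, sr = d_left * wheel_radius, d_right * wheel_radius   # wheel arc lengths
    ds, dtheta = 0.5 * (sl + sr), (sr - sl) / axle_length
    x, y, th = pose
    x += ds * np.cos(th + 0.5 * dtheta)
    y += ds * np.sin(th + 0.5 * dtheta)
    return np.array([x, y, th + dtheta])

# Hypothetical odometric parameters and a short arc of motion
pose = np.array([0.0, 0.0, 0.0])
for _ in range(100):
    pose = odometry_step(pose, d_left=0.10, d_right=0.12,
                         wheel_radius=0.021, axle_length=0.0885)
print(pose)
```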

16.
For the zero-point (home position) calibration of a 4-degree-of-freedom high-speed parallel robot (the Cross-IV robot), a fast zero-point calibration method based on end-effector rotation-angle error information is proposed. Based on the closed-loop vector equation of a single limb of the robot, a mapping model between the full set of zero-point errors and the end-effector error is established. By decomposing the error transfer matrix, a fast zero-point error identification model is constructed that requires only the end-effector rotation-angle error to be measured with a rotary encoder. To further maximize measurement efficiency and improve the robustness of the identification matrix, an optimized measurement-point selection scheme is proposed. The robustness and accuracy of the calibration method are verified in detail through simulation. A validation experiment with a laser tracker shows that, after calibration, the end-effector position error is reduced to 1.312 mm and the rotation-angle error to 0.202°, demonstrating that the proposed zero-point calibration method is simple and effective.

17.
Camera calibration with genetic algorithms
We present an approach based on genetic algorithms for performing camera calibration. Contrary to the classical nonlinear photogrammetric approach, the proposed technique can correctly find a near-optimal solution without the need for initial guesses (with only very loose parameter bounds) and with a minimum number of control points (7 points). Results from our extensive study using both synthetic and real image data, as well as a performance comparison with Tsai's procedure (1987), demonstrate the excellent performance of the proposed technique in terms of convergence, accuracy, and robustness.
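A compact genetic-algorithm search of the kind described (loose bounds, no initial guess) is sketched below for a reduced problem: recovering only the pinhole intrinsics from synthetic control points with a known pose. It is illustrative, not the authors' algorithm, and all parameter values are invented.

```python
import numpy as np
rng = np.random.default_rng(0)

def project(params, Xc):
    """Pinhole projection of camera-frame points; params = (fx, fy, cx, cy)."""
    fx, fy, cx, cy = params
    return np.stack([fx * Xc[:, 0] / Xc[:, 2] + cx,
                     fy * Xc[:, 1] / Xc[:, 2] + cy], axis=1)

def fitness(params, Xc, uv):
    return np.mean(np.linalg.norm(project(params, Xc) - uv, axis=1))  # reprojection error

def ga_calibrate(Xc, uv, lo, hi, pop=60, gens=200):
    """Minimal GA: tournament selection, blend crossover, Gaussian mutation, elitism."""
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        f = np.array([fitness(p, Xc, uv) for p in P])
        new = [P[np.argmin(f)]]                                   # keep the best
        while len(new) < pop:
            i, j = rng.integers(pop, size=2), rng.integers(pop, size=2)
            a, b = P[i[f[i].argmin()]], P[j[f[j].argmin()]]       # two tournaments
            child = a + rng.uniform(0.0, 1.0) * (b - a)           # blend crossover
            child += rng.normal(0.0, 0.01, len(lo)) * (hi - lo)   # mutation
            new.append(np.clip(child, lo, hi))
        P = np.array(new)
    f = np.array([fitness(p, Xc, uv) for p in P])
    return P[np.argmin(f)]

# Synthetic data: 7 control points in the camera frame, true intrinsics to recover
Xc = rng.uniform([-1, -1, 4], [1, 1, 6], size=(7, 3))
true = np.array([800.0, 810.0, 320.0, 240.0])
uv = project(true, Xc)
lo, hi = np.array([400.0, 400.0, 200.0, 150.0]), np.array([1200.0, 1200.0, 450.0, 350.0])
print(ga_calibrate(Xc, uv, lo, hi))   # should approach the true parameters
```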

18.
This study aims at jointly controlling two critical process parameters from a remote site: the process capability of robotic assembly operations and the accuracy of vision calibration. The process capability is regarded as an indication of robot positioning accuracy. When the robot is driven by the vision camera, the process capability depends mainly on the calibration accuracy of the vision-guided robot system. Even though newly commissioned, high-precision assembly robots typically display excellent positioning accuracy under normal working conditions, the imperfect mathematical conversion of vision coordinates into robot coordinates introduces accuracy problems. In this study, a novel vision calibration method is proposed that effectively rectifies the complications inherent in lens distortion. Our analysis shows that the degree of lens distortion varies considerably across the camera's field of view. Because of this non-uniform distortion, a single mathematical equation for vision calibration is ineffective. The proposed methodology significantly improves the positioning accuracy, and the calibration can be performed over the network from a remote site. This is better suited for today's global manufacturing companies, where fast product cycles and geographically distributed production lines dictate more efficient and effective quality control strategies.
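One simple way to act on the finding that no single equation fits the whole field of view is to fit a separate local mapping from vision to robot coordinates per image region. The sketch below fits an affine map for one region with synthetic data; it illustrates the idea only and is not the paper's method.

```python
import numpy as np

def fit_affine(uv, xy):
    """Least-squares affine map from vision coordinates uv to robot coordinates xy."""
    A = np.hstack([uv, np.ones((len(uv), 1))])
    M, _, _, _ = np.linalg.lstsq(A, xy, rcond=None)
    return M                                           # [u v 1] @ M -> [x y]

def apply_affine(M, uv):
    return np.hstack([uv, np.ones((len(uv), 1))]) @ M

# Synthetic calibration samples taken inside one image region
rng = np.random.default_rng(1)
uv = rng.uniform(0, 200, size=(12, 2))
xy = uv @ np.array([[0.05, 0.001], [-0.001, 0.05]]) + np.array([100.0, 50.0])
M = fit_affine(uv, xy)
print(np.round(apply_affine(M, uv[:2]) - xy[:2], 6))   # residuals ~ 0 in this region
```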

19.
Implicit and explicit camera calibration: theory and experiments
By implicit camera calibration, we mean the process of calibrating a camera without explicitly computing its physical parameters. Implicit calibration can be used for both three-dimensional (3-D) measurement and generation of image coordinates. In this paper, we present a new implicit model based on the generalized projective mappings between the image plane and two calibration planes. The back-projection and projection processes are modelled separately to ease the computation of distorted image coordinates from known world points. A set of constraints of perspectivity is derived to relate the transformation parameters of the two calibration planes. Under the assumption of the radial distortion model, we present a computationally efficient method for explicitly correcting the distortion of image coordinates in the frame buffer without involving the computation of camera position and orientation. By combining with any linear calibration technique, this method makes explicit the camera physical parameters. An extensive experimental comparison of our methods with the classic photogrammetric method and Tsai's (1986) method, in the aspects of 3-D measurement (both absolute and relative errors), the prediction of image coordinates, and the effect of the number of calibration points, is made using real images from 15 different depth values.
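The generalized projective mapping between the image plane and a single calibration plane is a homography; a standard DLT estimate of such a mapping is sketched below with synthetic correspondences. The two-plane model and the distortion handling of the paper are not reproduced.

```python
import numpy as np

def homography_dlt(plane_pts, img_pts):
    """DLT estimate of H with [u v 1]^T ~ H [X Y 1]^T for plane-to-image mapping."""
    rows = []
    for (X, Y), (u, v) in zip(plane_pts, img_pts):
        rows.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        rows.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.array(rows, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def map_plane_to_image(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Synthetic correspondences between one calibration plane and the image
plane = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5], [0.2, 0.8]], float)
H_true = np.array([[800.0, 20.0, 300.0], [-15.0, 780.0, 250.0], [0.1, 0.05, 1.0]])
img = np.array([map_plane_to_image(H_true, p) for p in plane])
H = homography_dlt(plane, img)
print(np.round(H - H_true / H_true[2, 2], 6))   # recovered up to scale
```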

20.
To achieve online dynamic camera calibration for the AS-R intelligent robot while it is in motion, a new particle-filter-based calibration method for a camera in linear motion is proposed. The linearly moving camera is described with a state-space model, with the camera intrinsic parameters and the position/motion parameters as state variables and the image coordinates of feature points as observations. The particle filter algorithm yields optimal estimates of the intrinsic and motion parameters, and the whole calibration process is implemented with two threads. Online dynamic calibration experiments with the AS-R robot in linear motion show that the algorithm is feasible, achieves high calibration accuracy and good robustness, and is applicable to various types of system noise.
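A toy particle filter for one aspect of the problem (estimating a constant focal length while the camera translates along a line and observes a single known feature point) is sketched below. The state vector, noise levels and geometry are invented and far simpler than the state-space model of the paper.

```python
import numpy as np
rng = np.random.default_rng(2)

P = np.array([500.0, 200.0, 2000.0])     # one fixed 3D feature point (mm)

def observe(f, cam_x):
    """Image u-coordinate of P for a camera at (cam_x, 0, 0), optical axis along +z."""
    return f * (P[0] - cam_x) / P[2]

true_f, speed, sigma = 850.0, 10.0, 1.0  # focal (px), motion per frame (mm), pixel noise
N = 2000
particles = np.column_stack([rng.uniform(500.0, 1200.0, N),   # state 1: focal length (px)
                             rng.normal(0.0, 5.0, N)])        # state 2: camera x (mm)

for k in range(40):
    particles[:, 1] += speed + rng.normal(0.0, 1.0, N)        # predict: camera advances
    particles[:, 0] += rng.normal(0.0, 2.0, N)                # focal length ~ constant
    z = observe(true_f, speed * (k + 1)) + rng.normal(0.0, sigma)   # noisy measurement
    pred = particles[:, 0] * (P[0] - particles[:, 1]) / P[2]
    w = np.exp(-0.5 * ((z - pred) / sigma) ** 2) + 1e-300     # Gaussian likelihood
    w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)]         # multinomial resampling

print(particles[:, 0].mean())   # focal-length estimate, should approach 850
```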
