Similar Literature
20 similar documents retrieved (search time: 15 ms)
1.
李巍  吕乃光  董明利  娄小平 《机器人》2018,40(3):301-308
To address the problem that hand-eye calibration and robot-to-world coordinate calibration cannot converge reliably to the global optimum when camera pose estimates and the robot forward kinematics contain measurement errors, a method based on dual quaternion theory is proposed for simultaneously calibrating the robot-world and hand-eye transformations. The method first parameterizes the rigid-body transformations in the calibration equations by screw axis, rotation angle, and translation, and then refines the translation with a global optimization algorithm. A PUMA560 numerical simulation system and an industrial-robot test platform were built, and the method was compared with the classical quaternion and dual quaternion calibration methods. Simulation and experimental results show that, in the presence of errors in camera pose estimation and robot forward kinematics, the method still guarantees the optimality of the solution without requiring initial estimates or data selection.
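The screw parameterization mentioned in this abstract (screw axis, rotation angle, and translation along the axis) can be extracted directly from a homogeneous transform. A minimal illustrative sketch, not the authors' implementation; function and variable names are assumptions:

```python
import numpy as np

def screw_parameters(T):
    """Decompose a 4x4 rigid transform into screw axis, angle, translation along
    the axis, and a point on the axis (Chasles' theorem). Assumes the rotation
    angle is not close to pi, where the skew-part axis formula degenerates."""
    R, t = T[:3, :3], T[:3, 3]
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if np.isclose(theta, 0.0):                      # pure translation: axis undefined
        d = np.linalg.norm(t)
        axis = t / d if d > 0 else np.array([0.0, 0.0, 1.0])
        return axis, 0.0, d, np.zeros(3)
    # rotation axis from the skew-symmetric part of R
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    d = axis @ t                                    # translation along the screw axis
    t_perp = t - d * axis
    # point on the screw axis, closed form of (I - R) q = t_perp
    q = 0.5 * (t_perp + np.cross(axis, t_perp) / np.tan(theta / 2.0))
    return axis, theta, d, q
```

Applying this decomposition to both sides of the robot-world/hand-eye equations yields the screw-axis correspondences that dual-quaternion methods exploit.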

2.
Because the color camera and the depth camera are not at the same position, and the depth image has poor measurement accuracy, low resolution, and no color or texture information, traditional hand-eye calibration methods are not suitable for RGB-D cameras. This paper proposes a hand-eye calibration method for a manipulator and an RGB-D camera that uses a simple, low-cost 3D-printed sphere as the calibration target. The method only requires measuring the 3D position of the target, avoiding the use of orientation information, which is harder to measure and less accurate. Both a closed-form solution and an iterative refinement are given. Results from 100 simulation runs show that the calibration accuracy is consistent with the measurement accuracy of the RGB-D camera itself; the closed-form solution does not require time synchronization between the manipulator and the camera, while the iterative solution slightly improves accuracy, with stable maximum error and error variance. Finally, hand-eye calibration experiments on a 7-DOF KUKA iiwa manipulator and a Kinect camera agree with the simulations. In summary, the method is simple and reliable and enables rapid hand-eye calibration between a manipulator and an RGB-D camera.
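Measuring only the 3D position of the printed sphere requires estimating its center from the depth point cloud. A hedged sketch of the standard algebraic least-squares sphere fit (not the paper's code; data and names are illustrative):

```python
import numpy as np

def fit_sphere_center(points):
    """Algebraic least-squares sphere fit.
    points: (N, 3) array of 3D points sampled on the sphere surface.
    Solves |p|^2 = 2 c.p + (r^2 - |c|^2) for the center c and radius r."""
    p = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * p, np.ones((p.shape[0], 1))])
    b = np.sum(p * p, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# synthetic check: noisy points on a 0.05 m sphere centered at (0.1, 0.2, 0.8)
rng = np.random.default_rng(0)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([0.1, 0.2, 0.8]) + 0.05 * dirs + 0.001 * rng.normal(size=(500, 3))
print(fit_sphere_center(pts))
```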

3.
The influence of robot hand pose data on hand-eye calibration accuracy cannot be ignored, so the factors affecting the accuracy of calibration based on the hand-eye equation AX = XB are analyzed. Simulated and real hand-eye calibration experiments verify that the influence of two such factors, the camera-to-target distance and the difference between the hand-to-base distances before and after the motion, is consistent with the theoretical analysis. From the simulations and experiments it is concluded that reducing the distance between the camera and the target, and reducing the difference between the distances from the robot hand to the base frame before and after the motion, improves hand-eye calibration accuracy. The quaternion method and the matrix direct (Kronecker) product method confirm that this rule holds generally when solving the AX = XB calibration equation. When the camera-to-target distance is about 230 mm and the difference of the hand-to-base distances before and after the motion is 3.2401 mm, the relative error of the hand-eye translation vector reaches 0.0403% at best.
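The matrix direct (Kronecker) product solution of AX = XB mentioned above can be sketched as follows: the rotation satisfies R_A R_X = R_X R_B, i.e. (I ⊗ R_A - R_B^T ⊗ I) vec(R_X) = 0, and the translation follows from a stacked linear least-squares problem. A hedged sketch, not the authors' code:

```python
import numpy as np

def solve_ax_xb(A_list, B_list):
    """Hand-eye calibration AX = XB via the Kronecker-product formulation.
    A_list, B_list: lists of 4x4 relative motions of the gripper and the camera."""
    I3 = np.eye(3)
    # rotation: stack (I (x) R_A - R_B^T (x) I) vec(R_X) = 0 over all motion pairs
    M = np.vstack([np.kron(I3, A[:3, :3]) - np.kron(B[:3, :3].T, I3)
                   for A, B in zip(A_list, B_list)])
    _, _, Vt = np.linalg.svd(M)
    Rx = Vt[-1].reshape(3, 3, order='F')       # column-stacked null-space vector
    if np.linalg.det(Rx) < 0:                  # null vector is defined up to sign
        Rx = -Rx
    U, _, Vt2 = np.linalg.svd(Rx)
    Rx = U @ Vt2                               # nearest rotation (removes the scale)
    # translation: (R_A - I) t_X = R_X t_B - t_A, stacked least squares
    C = np.vstack([A[:3, :3] - I3 for A in A_list])
    d = np.hstack([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(A_list, B_list)])
    tx, *_ = np.linalg.lstsq(C, d, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, tx
    return X
```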

4.
To address the problem that nonlinear-optimization-based hand-eye calibration cannot guarantee convergence of the objective function to the global minimum when the robot forward kinematics and the camera extrinsic calibration contain errors, a convex-relaxation global optimization hand-eye calibration algorithm based on quaternion theory is proposed. Considering the influence of the angle between the rotation axes of the relative end-effector motions on the accuracy of solving the calibration equation, a random sample consensus (RANSAC) scheme is first used to pre-screen the angles between rotation axes in the calibration data. The rotation matrix is then parameterized with quaternions, a polynomial geometric-error objective function and constraints are built, and a linear matrix inequality (LMI) convex-relaxation global optimization algorithm is used to solve for the globally optimal hand-eye transformation matrix. Experimental results show that the algorithm finds the global optimum, with a mean geometric error of the hand-eye transformation of no more than 1.4 mm and a standard deviation below 0.16 mm, slightly better than quaternion-based nonlinear optimization.
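The pre-screening step exploits the fact that motion pairs whose relative rotation axes are nearly parallel constrain the hand-eye rotation poorly. A minimal sketch that selects well-conditioned motion pairs by the angle between rotation axes (an illustrative filter, not the paper's full RANSAC procedure):

```python
import numpy as np

def rotation_axis(R):
    """Unit rotation axis of a rotation matrix (undefined for the identity)."""
    w, V = np.linalg.eig(R)
    axis = np.real(V[:, np.argmin(np.abs(w - 1.0))])  # eigenvector of eigenvalue 1
    return axis / np.linalg.norm(axis)

def select_motion_pairs(A_list, min_angle_deg=30.0):
    """Keep only pairs of relative gripper motions whose rotation axes are
    separated by at least min_angle_deg degrees."""
    axes = [rotation_axis(A[:3, :3]) for A in A_list]
    keep = []
    for i in range(len(axes)):
        for j in range(i + 1, len(axes)):
            cos_a = np.clip(abs(axes[i] @ axes[j]), -1.0, 1.0)
            if np.degrees(np.arccos(cos_a)) >= min_angle_deg:
                keep.append((i, j))
    return keep
```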

5.
To enable a detailed study of the fundamental properties of robot visual servoing within a unified theoretical framework, a generalized visual servoing system model is established based on the task-function approach. On this basis, the dynamic behavior of position-based visual servoing (PBVS) and image-based visual servoing (IBVS) in Cartesian space and in image space is studied. Simulation results show that, within the same comparison framework, PBVS is also robust to camera calibration errors. Although the two methods are similar in terms of stability and convergence, their dynamic performance in Cartesian space and image space differs greatly. For PBVS, the Cartesian trajectory follows the shortest path, but the corresponding image trajectory is uncontrolled and features may leave the field of view. For IBVS, the image-space trajectory follows the shortest path, but the lack of direct Cartesian-space control can cause Cartesian trajectory deviations such as camera retreat when servoing over large rotations.
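The IBVS behavior described here follows from the classical point-feature control law v = -λ L⁺ (s - s*), where L is the interaction matrix of the image points. A hedged sketch with the standard interaction matrix for normalized image points at estimated depth Z (illustrative, not the paper's simulation code):

```python
import numpy as np

def interaction_matrix(points_xy, depths):
    """Stack the 2x6 interaction matrices of normalized image points (x, y)
    at estimated depths Z (classical IBVS formulation)."""
    rows = []
    for (x, y), Z in zip(points_xy, depths):
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x])
    return np.array(rows)

def ibvs_velocity(points_xy, desired_xy, depths, gain=0.5):
    """Camera velocity screw (v, w) from the control law v = -gain * L^+ (s - s*)."""
    s = np.asarray(points_xy, dtype=float).ravel()
    s_star = np.asarray(desired_xy, dtype=float).ravel()
    L = interaction_matrix(points_xy, depths)
    return -gain * np.linalg.pinv(L) @ (s - s_star)

# example: four points at 1 m depth, slightly off their desired positions
current = [(-0.11, -0.1), (0.1, -0.1), (0.1, 0.1), (-0.1, 0.11)]
desired = [(-0.1, -0.1), (0.1, -0.1), (0.1, 0.1), (-0.1, 0.1)]
print(ibvs_velocity(current, desired, depths=[1.0] * 4))
```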

6.
Hand-eye calibration is a method commonly used in robotics to determine the relationship between the robot end-effector coordinate frame and the camera coordinate frame. Tracking and localization are active topics in augmented reality research; tracking systems are usually hybrid systems that fuse information from multiple sensors so that their strengths complement each other and higher measurement precision and accuracy are achieved. Tracking registration is a basic prerequisite for effective information fusion. This paper introduces robot hand-eye calibration into multi-sensor tracking registration and verifies the effectiveness of this hand-eye-calibration-based registration method through experiments.

7.
Multi-camera vision is widely used for guiding the machining robot to remove flash and burrs on complex automotive castings and forgings with arbitrary initial posture. Aiming at the problems of insufficient field of view and regional occlusion in actual machining, a gradient-weighted multi-view calibration method (GWM-View) is proposed for machining-robot positioning based on convergent binocular vision. Specifically, the mapping between each auxiliary camera and the main camera in the multi-view system is calculated from the inverse equation and the intrinsic parameter matrix. Then, a gradient-weighted suppression algorithm is introduced to filter out the errors caused by camera angle variation. Next, the spatial coordinates of the feature points after suppression are used to correct the transformation matrix. Finally, the hand-eye calibration algorithm is applied to transform the corrected data into the robot base coordinate system for accurate positioning of the robot under multiple views. Experiments on an automotive engine flywheel shell indicate that the average positioning error is controlled within 1 mm under different postures. The stability and robustness of the proposed method are further improved, while the positioning accuracy of the machining robot meets the requirements.

8.
In this article, we present an integrated manipulation framework for a service robot that allows interaction with articulated objects in home environments through the coupling of vision and force modalities. We consider a robot that simultaneously observes its hand and the object to be manipulated using an external camera (i.e., the robot head). Task-oriented grasping algorithms (Proc of IEEE Int Conf on robotics and automation, pp 1794–1799, 2007) are used to plan a suitable grasp on the object according to the task to be performed. A new vision/force coupling approach (Int Conf on advanced robotics, 2007), based on external control, is used first to guide the robot hand towards the grasp position and then to perform the task while taking external forces into account. The coupling between these two complementary sensor modalities provides the robot with robustness against uncertainties in models and positioning. A position-based visual servoing control law has been designed to continuously align the robot hand with the object being manipulated, independently of the camera position. This allows the camera to be moved freely while the task is executed and makes the approach suitable for integration into current humanoid robots without the need for hand-eye calibration. Experimental results on a real robot interacting with different kinds of doors are presented.

9.
陈宝存  吴巍  郭毓  郭健 《计算机仿真》2020,37(2):343-348
To shorten calibration time, an automatic robot hand-eye calibration system based on ROS is designed. By analyzing the transformations between coordinate frames, a mathematical model of hand-eye calibration is established and the two-step solution is derived. To minimize calibration error, a constrained random-increment method is proposed, and invalid images that occur during calibration are rejected in real time according to the corner-detection results. The driver, motion-space planning, image processing, and calibration-solving modules are designed and cascaded on the ROS software platform. A hand-eye calibration test system was built with a 6-DOF UR3 industrial manipulator and a Kinect v2 RGB-D camera. Experimental results show that, compared with the traditional calibration procedure, the automatic hand-eye calibration system is convenient, efficient, accurate, and easily extensible.
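A minimal sketch of the two-step solve inside such a pipeline, using OpenCV's built-in solver (cv2.calibrateHandEye with the Tsai method) rather than the authors' own implementation; the eye-in-hand arrangement and the pose inputs are assumptions:

```python
import cv2
import numpy as np

def calibrate_eye_in_hand(gripper_poses, target_poses):
    """gripper_poses: list of 4x4 gripper->base transforms from robot kinematics.
    target_poses:  list of 4x4 target->camera transforms, e.g. from solvePnP.
    Returns the 4x4 camera->gripper transform."""
    R_g2b = [T[:3, :3] for T in gripper_poses]
    t_g2b = [T[:3, 3] for T in gripper_poses]
    R_t2c = [T[:3, :3] for T in target_poses]
    t_t2c = [T[:3, 3] for T in target_poses]
    # Tsai's method solves the rotation first, then the translation (the two-step solve)
    R_c2g, t_c2g = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                        method=cv2.CALIB_HAND_EYE_TSAI)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_c2g, t_c2g.ravel()
    return X
```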

10.
Active vision sensors are increasingly being employed in vision systems for their greater flexibility. For example, vision sensors in hand-eye configurations with computer controllable lenses (e.g., zoom lenses) can be set to values which satisfy the sensing situation at hand. For such applications, it is essential to determine the mapping between the parameters that can actually be controlled in a reconfigurable vision system (e.g., the robot arm pose, the zoom setting of the lens) and the higher-level viewpoint parameters that must be set to desired values (e.g., the viewpoint location, focal length). In this paper we present calibration techniques to determine this mapping. In addition, we discuss how to use these relationships in order to achieve the desired values of the viewpoint parameters by setting the controllable parameters to the appropriate values. The sensor setup that is considered consists of a camera in a hand-eye arrangement equipped with a lens that has zoom, focus, and aperture control. The calibration techniques are applied to the H6 × 12.5R Fujinon zoom lens and the experimental results are shown.
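One common way to represent the mapping between a controllable lens parameter (e.g. the zoom motor setting) and a viewpoint parameter (e.g. the effective focal length) is a low-order polynomial fitted to calibration samples, which can then be inverted numerically to find the setting that yields a desired value. A hedged sketch with made-up sample data, not the paper's calibration procedure:

```python
import numpy as np

# hypothetical calibration samples: zoom motor setting -> effective focal length (mm)
zoom_settings = np.array([0, 200, 400, 600, 800, 1000], dtype=float)
focal_lengths = np.array([12.6, 18.1, 26.5, 38.9, 55.4, 74.8])

# forward mapping: focal length as a cubic polynomial of the motor setting
fwd = np.polynomial.Polynomial.fit(zoom_settings, focal_lengths, deg=3)

def setting_for_focal_length(f_desired):
    """Invert the fitted mapping numerically: pick the real root of
    fwd(z) - f_desired = 0 that lies inside the motor range."""
    roots = (fwd - f_desired).roots()
    real = roots[np.isreal(roots)].real
    in_range = real[(real >= zoom_settings.min()) & (real <= zoom_settings.max())]
    return float(in_range[0]) if in_range.size else None

print(fwd(500.0))                      # predicted focal length at setting 500
print(setting_for_focal_length(30.0))  # setting that gives roughly 30 mm
```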

11.
12.
In the robotic eye-in-hand measurement system, a hand-eye calibration method is essential. From the perspective of 3D reconstruction, this paper first analyzes the influence of the line laser sensor hand-eye calibration error on the error of the reconstructed 3D point cloud. Based on this, and considering the influence of line laser sensor measurement errors and the need for high efficiency and convenience in robotic manufacturing systems, a 3D reconstruction-based robot line laser hand-eye calibration method is proposed. In this method, combined with point-cloud registration, a newly defined error index reflects the calibration result more intuitively than traditional methods. A Particle Swarm Optimization - Gaussian Process (PSO-GP) method is adopted to improve the efficiency of the calibration algorithm. The experiments show that the Root Mean Square Error (RMSE) of the reconstructed point cloud can reach 0.1256 mm when using the proposed method, and the reprojection error is smaller than that of traditional hand-eye calibration methods.
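The reconstruction-based error index can be illustrated by comparing the reconstructed point cloud against an already-registered reference with a nearest-neighbour RMSE. A hedged sketch using SciPy; the paper couples this with point-cloud registration and PSO-GP, which are not reproduced here:

```python
import numpy as np
from scipy.spatial import cKDTree

def reconstruction_rmse(reconstructed, reference):
    """Root-mean-square nearest-neighbour distance from each reconstructed
    point to the (already registered) reference point cloud."""
    tree = cKDTree(reference)
    dists, _ = tree.query(reconstructed, k=1)
    return float(np.sqrt(np.mean(dists ** 2)))

# toy example: a noisy copy of a reference cloud (units: millimetres)
rng = np.random.default_rng(1)
reference = rng.uniform(-50.0, 50.0, size=(2000, 3))
reconstructed = reference + rng.normal(scale=0.1, size=reference.shape)
print(reconstruction_rmse(reconstructed, reference))   # on the order of 0.17 mm
```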

13.
钟宇  张静  张华  肖贤鹏 《计算机工程》2022,48(3):100-106
Intelligent collaborative robots rely on a vision system to perceive and locate targets in a dynamic workspace within an unknown environment, enabling the manipulator to grasp and retrieve target objects autonomously. An RGB-D camera captures color and depth images of the scene and provides a 3D point cloud of any target in the field of view, helping the robot perceive its surroundings. To obtain the transformation between the grasping robot and the RGB-D camera coordinate frames, a robot hand-eye calibration method based on the YOLOv3 object-detection network is proposed. A 3D-printed sphere is held at the end of the manipulator as the calibration target; an improved YOLOv3 network locates the sphere center in real time, the 3D position of the manipulator end-effector center in the camera frame is computed, and the least-squares solution of the robot-camera transformation matrix is obtained by singular value decomposition (SVD). Experiments on a 6-DOF UR5 manipulator and an Intel RealSense D415 depth camera show that the calibration method requires no auxiliary equipment and that the position error of transformed spatial points is within 2 mm, which satisfies the grasping requirements of typical visual-servoing intelligent robots.
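The SVD least-squares step is the standard rigid alignment of two corresponding 3D point sets, here the sphere-center positions expressed in the robot base frame (from kinematics) and in the camera frame (from detection). A minimal sketch with illustrative variable names:

```python
import numpy as np

def rigid_transform_svd(pts_cam, pts_base):
    """Least-squares R, t with pts_base ~= R @ pts_cam + t (Kabsch/Umeyama).
    pts_cam, pts_base: (N, 3) arrays of corresponding sphere-center positions."""
    P = np.asarray(pts_cam, dtype=float)
    Q = np.asarray(pts_base, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # reflection-safe rotation
    t = cq - R @ cp
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T                                   # camera -> robot base transform
```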

14.
To apply cameras to vision-based measurement, high-accuracy camera calibration must be achieved, and a camera calibration system is designed. Starting from the basic theory of camera calibration, the principles of camera calibration and of Zhang's calibration method are analyzed and summarized. The camera system is then calibrated using the MATLAB calibration toolbox and an 8 × 12 black-and-white checkerboard with 26 mm squares. Finally, the errors of the calibration system are analyzed to verify the design. Experimental results show a principal-point error of [-0.01186; 0.03393] and an extrinsic-parameter calibration pixel error of [0.22981; 0.16846], which basically meets the requirements of vision measurement for stability, reliability, accuracy, interference resistance, and portability, laying a foundation for subsequent application to vision-based measurement.
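An equivalent checkerboard calibration can be run with OpenCV instead of the MATLAB toolbox. A hedged sketch: the image folder is hypothetical, and reading "8 × 12" as the inner-corner grid is an assumption:

```python
import glob
import cv2
import numpy as np

pattern_size = (8, 12)        # inner corners per row/column (assumed)
square_size = 26.0            # mm, square side length from the abstract

# 3D corner coordinates of the board in its own plane (Z = 0)
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):          # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        continue
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                               (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points,
                                                 gray.shape[::-1], None, None)
print("RMS reprojection error (px):", rms)
print("Camera matrix:\n", K)
```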

15.
The 3D positioning problem of a robot hand-eye vision system is studied in depth. First, the hand-eye problem is reformulated to obtain a new system model; on this basis, a practical and effective calibration method is proposed. Its core idea is to map image coordinates directly into the robot base frame and obtain the system parameters as a whole, instead of computing each camera intrinsic parameter separately. Compared with the original approach, the end-effector pose during positioning can be chosen freely, i.e., the camera pose when imaging the target is unconstrained. Experiments show that the method is easy to operate, simple to implement, and accurate. It overcomes the limitations of the original method and greatly extends the range of application of hand-eye vision systems.

16.
The robotic eye-in-hand configuration is very useful in many applications, and hand-eye calibration is essential to its use. The method proposed in this paper, which calibrates the optical axis of the camera and a target pattern, creates an environment for camera calibration without using additional equipment. By using the geometric relation between the optical axis and the robot hand, calibration of the optical axis and the target pattern becomes a feasible process. The proposed method requires identifying the points where the optical axis intersects the target pattern. Image-processing techniques for identifying these intersection points are developed and implemented by designing the target as a checkerboard pattern. The accuracy of the proposed method is discussed. Finally, experimental results are presented to verify the method.

17.
To calibrate a structured-light vision-guided welding robot system and overcome the drawbacks of existing methods, such as complex procedures and demanding calibration targets, a self-calibration method based on active vision is proposed. The method images three feature points in the scene; by precisely controlling the welding robot to perform five translational motions, the camera intrinsic parameters and the rotation part of the hand-eye matrix are calibrated. By performing two additional motions that include rotation, combined with the parametric equation of the laser stripe on the feature-point plane, the translation part of the hand-eye matrix and the equation of the structured-light plane in the camera frame are calibrated, with a correction applied for different torch lengths. Tests on a structured-light vision-guided welding robot system built around a Denso robot give stable results, with a positioning accuracy of ±0.93 mm. The calibration method is simple and its features are easy to select, which is of practical significance for deploying welding robot systems on the factory floor.

18.
《Advanced Robotics》2013,27(5):429-443
We propose a simple visual servoing scheme based on the use of binocular visual space. When we use a hand-eye system whose kinematic structure is similar to a human being's, we can approximate the transformation from the binocular visual space to the joint space of the manipulator as a linear time-invariant mapping. This relationship makes it possible to generate joint velocities from image observations using a constant linear mapping. The scheme is robust to calibration error, especially to camera turning, because it uses neither camera angles nor joint angles. Experimental results are also presented to demonstrate that the positioning precision remains unchanged despite calibration errors.
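The constant linear mapping from binocular image features to joint motion can be estimated from a few exploratory motions by least squares and then used directly in the control loop. A hedged sketch, not the authors' controller; all names are illustrative:

```python
import numpy as np

def estimate_visuomotor_map(delta_features, delta_joints):
    """Least-squares fit of a constant matrix M with delta_q ~= M @ delta_f.
    delta_features: (N, m) changes of binocular image features during exploration.
    delta_joints:   (N, n) corresponding joint displacements."""
    M, *_ = np.linalg.lstsq(delta_features, delta_joints, rcond=None)
    return M.T                                   # shape (n, m)

def joint_velocity(M, features, desired_features, gain=0.3):
    """Resolved-rate style law: drive the binocular feature error to zero
    using the fixed mapping M instead of camera or joint angles."""
    return -gain * M @ (np.asarray(features) - np.asarray(desired_features))
```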

19.
高金锋  梁冬泰  陈叶凯 《机器人》2022,44(3):321-332
In the hand-eye calibration of a robot platform equipped with a line-laser profile sensor, calibrating from the 2D point-cloud output of the sensor is complicated and yields low accuracy. To address these problems, a hand-eye calibration method based on a cylinder-side-surface constraint is proposed. By changing the pose of the scanning robot's end-effector, scan data of the cylinder's side surface are acquired at different poses. For the elliptical profile formed by the intersection of the laser plane with the cylinder surface, the random sample consensus (RANSAC) algorithm is used to estimate the coordinates of the ellipse center. A constrained optimization problem is then built from the distances between the estimated ellipse centers and the cylinder axis, turning hand-eye calibration into a constrained optimization problem, which is solved by a hybrid of particle swarm optimization (PSO) and the generalized Lagrange multiplier method to obtain the hand-eye transformation matrix. Finally, simulations and scanning-reconstruction experiments are carried out based on the proposed method. The influence of calibration data errors, the placement of the calibration object, and the initial values of the calibration parameters on the results is discussed, and the calibration accuracy is verified. The results show that the method is insensitive to the placement of the calibration object and to the initial parameter values; it is simple to operate, general, and accurate, with a calibration accuracy within 0.15 mm, making it suitable for on-site calibration.
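The cylinder-side constraint can be encoded as follows: each estimated ellipse center, once mapped into the robot base frame with a candidate hand-eye transform, must lie on a single straight line (the cylinder axis), so the residual of a 3D line fit to the mapped centers is a natural cost. A simplified sketch that minimizes this cost with scipy.optimize.minimize in place of the paper's PSO/Lagrangian hybrid; the parameterization and names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def line_fit_residual(points):
    """Sum of squared distances of 3D points to their best-fit line (via PCA)."""
    d = points - points.mean(axis=0)
    _, s, _ = np.linalg.svd(d, full_matrices=False)
    return float(np.sum(s[1:] ** 2))   # total variation minus the principal direction

def axis_constraint_cost(x, centers_sensor, flange_poses):
    """x = [rotation vector (3), translation (3)] of the sensor->flange transform.
    centers_sensor: (N, 3) ellipse centers expressed in the sensor frame.
    flange_poses:   list of 4x4 flange->base transforms at each scan pose."""
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    t = x[3:]
    mapped = [T[:3, :3] @ (R @ c + t) + T[:3, 3]
              for c, T in zip(centers_sensor, flange_poses)]
    return line_fit_residual(np.array(mapped))

def calibrate_hand_eye(centers_sensor, flange_poses, x0=np.zeros(6)):
    res = minimize(axis_constraint_cost, x0,
                   args=(centers_sensor, flange_poses), method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-10})
    return Rotation.from_rotvec(res.x[:3]).as_matrix(), res.x[3:]
```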

20.
It is pointed out that the observation errors in camera calibration are well described by an unknown-but-bounded-error (UBBE) model, so the calibration problem can be solved with set-membership uncertainty (SMU) estimation. For a camera intrinsic-parameter calibration problem studied in the literature, a new algorithm that uses SMU estimation for parameter calibration is proposed. Simulation results show that the new algorithm not only provides, along with the calibration result, a guaranteed upper bound on the calibration error, but also achieves good calibration accuracy, and is therefore of practical value.
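For a model that is linear in the parameters, set-membership estimation under unknown-but-bounded errors reduces to linear programs: the guaranteed interval of each parameter is obtained by minimizing and maximizing that parameter subject to |y - Aθ| ≤ ε. A hedged sketch of this idea, not the paper's algorithm for the specific camera-intrinsic model:

```python
import numpy as np
from scipy.optimize import linprog

def ubbe_parameter_bounds(A, y, eps):
    """Guaranteed per-parameter intervals for y = A @ theta + e with |e_i| <= eps.
    Returns (lower, upper), each of length A.shape[1]."""
    n = A.shape[1]
    # feasible set: A theta <= y + eps  and  -A theta <= -(y - eps)
    A_ub = np.vstack([A, -A])
    b_ub = np.concatenate([y + eps, -(y - eps)])
    lower, upper = np.empty(n), np.empty(n)
    for j in range(n):
        c = np.zeros(n); c[j] = 1.0
        lo = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n)
        hi = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n)
        lower[j], upper[j] = lo.fun, -hi.fun
    return lower, upper

# toy example: two-parameter line fit with bounded noise
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 20)
A = np.column_stack([x, np.ones_like(x)])
y = 2.0 * x + 0.5 + rng.uniform(-0.05, 0.05, size=x.size)
print(ubbe_parameter_bounds(A, y, eps=0.05))
```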

