Similar Documents
18 similar documents found (search time: 171 ms)
1.
钟宇  张静  张华  肖贤鹏 《计算机工程》2022,48(3):100-106
Intelligent collaborative robots rely on a vision system to perceive the dynamic workspace of an unknown environment and locate targets, so that the manipulator can autonomously grasp and retrieve target objects. An RGB-D camera captures color and depth images of the scene, yielding a 3D point cloud of any target in the field of view and helping the robot perceive its surroundings. To obtain the transformation between the grasping robot's coordinate frame and that of the RGB-D camera, a robot hand-eye calibration method based on the YOLOv3 object-detection network is proposed. A 3D-printed sphere held at the end of the manipulator serves as the calibration target; an improved YOLOv3 network locates the sphere's center in real time, from which the 3D position of the end-effector center in the camera frame is computed, and singular value decomposition (SVD) yields the least-squares solution of the robot-to-camera transformation matrix. Experiments with a 6-DOF UR5 manipulator and an Intel RealSense D415 depth camera show that the method requires no auxiliary equipment and keeps the position error of transformed spatial points within 2 mm, meeting the grasping requirements of typical visual-servoing robots.
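The SVD step this abstract mentions — the least-squares rigid transform between matched 3D points expressed in the robot and camera frames — is the standard Kabsch/Umeyama solver. A minimal sketch (the function name and point-set layout are illustrative, not the paper's code):

```python
import numpy as np

def rigid_transform_svd(P, Q):
    """Least-squares rigid transform (R, t) with Q_i ~ R @ P_i + t, via SVD."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)      # centroids of each point set
    H = (P - cp).T @ (Q - cq)                    # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # guard against a reflection (det = -1) in degenerate configurations
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

At least three non-collinear correspondences are needed; in the setting of the abstract, P would hold the sphere-center positions in one frame and Q the same centers in the other.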

2.
For the hand-eye calibration of a snake-arm robot with redundant degrees of freedom, a method is proposed that solves the transformation between the vision system and the end flange without relying on the snake arm's kinematic parameters. With the help of an external auxiliary binocular camera system, the desired hand-eye relation is obtained by closing the loop of coordinate transformations among the external cameras, the end flange, and the visual-navigation camera. First, the 3D coordinates of marker points in each camera's frame are obtained by stereo vision; then the quaternion method is used to solve the transformations between the external-camera frame and the flange frame, and between the external-camera frame and the navigation-camera frame; finally, closing the transformation loop yields the hand-eye relation. Experimental results show that this calibration avoids the interference caused by the poor motion-control accuracy of the redundant snake arm, and its accuracy meets the design requirements.
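The quaternion method referred to here — recovering a frame-to-frame transformation from matched 3D marker points — is commonly done with Horn's closed-form absolute orientation: the optimal rotation quaternion is the top eigenvector of a symmetric 4×4 matrix built from the cross-covariance. A sketch under that standard formulation (names are illustrative, not the paper's implementation):

```python
import numpy as np

def absolute_orientation_quat(P, Q):
    """Horn's quaternion method: (R, t) with Q_i ~ R @ P_i + t."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    A, B = P - cp, Q - cq
    S = A.T @ B                                  # cross-covariance matrix
    Sxx, Sxy, Sxz = S[0]; Syx, Syy, Syz = S[1]; Szx, Szy, Szz = S[2]
    # Horn's symmetric 4x4 matrix; its top eigenvector is the rotation quaternion
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz]])
    w, x, y, z = np.linalg.eigh(N)[1][:, -1]     # quaternion (w, x, y, z)
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])
    return R, cq - R @ cp
```

Unlike the SVD route, this parameterization cannot produce a reflection, which is one reason the quaternion method is popular for calibration chains like the one in this abstract.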

3.
Determining the hand-eye relation between a portable measuring arm and its laser scanning probe is a key problem in portable laser-scanning measuring-arm systems. Targeting the two factors that degrade hand-eye calibration accuracy — the angle between the relative rotation axes of successive end-effector motions, and the difference between the relative rotation angles of the arm and the scanning probe — screening rules for the calibration data are established, and an adaptive-threshold hand-eye calibration algorithm based on random sample consensus (RANSAC) is proposed. Monte Carlo simulations and experiments on a real laser-scanning measuring arm show that the algorithm improves the standard deviation of the relative rotation-axis and translation-vector errors obtained by the quaternion method by 2.42% and 4.14% respectively, meeting the measurement accuracy requirements of portable laser-scanning measuring-arm systems.
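The screening idea above — rejecting motion pairs whose relative rotation axes are nearly parallel, since such pairs make the hand-eye equations ill-conditioned — can be sketched as a simple pre-filter. The fixed threshold and names here are illustrative; the paper's actual criterion is adaptive:

```python
import numpy as np

def screen_motion_pairs(axes, min_angle_deg=15.0):
    """Indices (i, j) of motion pairs whose rotation axes are separated by at
    least min_angle_deg; near-parallel (or anti-parallel) axes are dropped
    because they make the hand-eye equations ill-conditioned."""
    axes = np.asarray(axes, float)
    axes = axes / np.linalg.norm(axes, axis=1, keepdims=True)
    keep = []
    for i in range(len(axes)):
        for j in range(i + 1, len(axes)):
            c = abs(float(axes[i] @ axes[j]))   # |cos(angle)|, folds anti-parallel
            if np.degrees(np.arccos(np.clip(c, 0.0, 1.0))) >= min_angle_deg:
                keep.append((i, j))
    return keep
```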

4.
陈宝存  吴巍  郭毓  郭健 《计算机仿真》2020,37(2):343-348
To shorten calibration time, an automatic robot hand-eye calibration system based on ROS was designed. By analyzing the transformations between coordinate frames, a mathematical model of hand-eye calibration was established and a concrete solution of the two-step method was derived. To minimize calibration error, a constrained random-increment method is proposed, and invalid images arising during calibration are discarded in real time according to corner-detection results. The driver, motion-space planning, image-processing, and calibration-solving modules were designed and cascaded on the ROS software platform. A hand-eye calibration experimental system was built with a 6-DOF industrial manipulator (UR3) and an RGB-D camera (Kinect2). Experimental results show that, compared with the traditional calibration workflow, the automatic hand-eye calibration system is convenient, efficient, and accurate, with good extensibility.
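The "two-step method" this abstract derives is the classical decomposition of the hand-eye equation AX = XB: the rotation is solved first from the axes of the relative motions, then the translation from a stacked linear system. A compact sketch under that standard formulation (function names are illustrative, and none of the ROS machinery is shown):

```python
import numpy as np

def log_so3(R):
    """Rotation vector (axis * angle) of a rotation matrix."""
    th = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if th < 1e-12:
        return np.zeros(3)
    return th / (2.0 * np.sin(th)) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])

def hand_eye_two_step(As, Bs):
    """Solve AX = XB for X = (Rx, tx) from motion pairs (A_i, B_i) = (R, t)."""
    # step 1: rotation -- conjugation preserves the angle and rotates the axis,
    # so a_i = Rx @ b_i; fit Rx to the axis pairs with an SVD (Kabsch) step
    a = np.array([log_so3(Ra) for Ra, _ in As])
    b = np.array([log_so3(Rb) for Rb, _ in Bs])
    H = b.T @ a
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    Rx = Vt.T @ D @ U.T
    # step 2: translation from the stacked system (R_A - I) tx = Rx t_B - t_A
    C = np.vstack([Ra - np.eye(3) for Ra, _ in As])
    d = np.concatenate([Rx @ tb - ta for (_, ta), (_, tb) in zip(As, Bs)])
    tx, *_ = np.linalg.lstsq(C, d, rcond=None)
    return Rx, tx
```

At least two motions with non-parallel rotation axes are required for the system to be well posed.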

5.
Objective: The extrinsic parameters of an RGB-D camera transform point clouds from the camera frame into the world frame, with applications in 3D scene reconstruction, 3D measurement, robotics, and object detection. Conventional methods calibrate the extrinsics of the RGB-D color camera with a calibration object (e.g., a checkerboard) but do not use the depth information, so the procedure is hard to simplify; making full use of depth greatly simplifies extrinsic calibration. Moreover, color-image-based methods calibrate the color sensor, whereas most RGB-D applications rely on the depth sensor, and a depth-based method calibrates the depth sensor's pose directly. Method: The depth image is first converted into a 3D point cloud in the camera frame; planes in the cloud are detected automatically with the MLESAC method; using the constraints between the ground plane and the world frame, the candidate planes are traversed and filtered until the ground plane is found; and from the spatial relation between the ground plane and the camera frame, the camera extrinsics — the transformation between camera-frame and world-frame points — are computed. Results: With checkerboard-based extrinsic calibration as the baseline, on RGB-D video streams captured by a PrimeSense camera the mean roll-angle error is -1.14°, the mean pitch-angle error is 4.57°, and the mean camera-height error is 3.96 cm. Conclusion: By automatically detecting the ground plane, the method accurately estimates the camera extrinsics with a high degree of automation; the algorithm is also highly parallel and, after parallel optimization, runs in real time, making it applicable to automatic robot pose estimation.
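Once the ground plane is found, the final step above — turning the plane into camera extrinsics — is a short closed-form computation. A sketch assuming the plane is given as n·x + d = 0 in the camera frame (the function name and frame conventions are illustrative choices, not the paper's):

```python
import numpy as np

def extrinsics_from_ground_plane(n, d):
    """Camera extrinsics from a ground plane n.x + d = 0 in the camera frame.

    Returns (R, t) with x_world = R @ x_cam + t, where the world z-axis is the
    plane's upward normal and the world origin is the camera's vertical
    projection onto the ground."""
    n = np.asarray(n, float)
    n = n / np.linalg.norm(n)
    if d < 0:                       # make the normal point from the plane to the camera
        n, d = -n, -d
    z = n                           # world up-axis, expressed in camera coordinates
    a = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x = np.cross(a, z); x /= np.linalg.norm(x)   # any horizontal direction
    y = np.cross(z, x)
    R = np.vstack([x, y, z])        # rows: world axes in camera coordinates
    t = np.array([0.0, 0.0, d])     # camera sits at height d above the plane
    return R, t
```

The roll, pitch, and camera-height errors reported in the abstract correspond directly to the accuracy of the estimated normal n and offset d.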

6.
In an articulated-arm visual inspection system, the accuracy of the hand-eye calibration between the vision sensor and the arm's end-effector directly affects measurement accuracy. A mathematical model of the articulated-arm hand-eye problem is derived, and a new two-stage hand-eye calibration method based on a genetic algorithm is proposed: a conventional method first provides initial values for the main parameters of the hand-eye transformation matrix, and a genetic algorithm then optimizes all parameters of the hand-eye calibration model starting from those initial values, yielding a genetic-optimization model of the hand-eye relation. Experiments demonstrate the quality and feasibility of the method, which transfers readily to general hand-eye problems; system measurement accuracy is better than 40 μm.

7.
Full camera calibration from a single image
The calibration procedures of existing camera-calibration methods are rather cumbersome, which hinders their wide adoption. Starting from lens-distortion correction and combining calibration-board information with vanishing-point constraints, a camera calibration method based on a single image is proposed. The lens-distortion coefficients are obtained by nonlinear iteration, the intrinsic parameters by a linear solve, and the extrinsic parameters by direct computation, so that full calibration requires only one photograph of the calibration board. Experimental results show that calibration succeeds whenever the angle between the board and the image plane is below 45°, with reprojection error under 0.3 pixels.

8.
《微型机与应用》2015,(17):70-74
A hand-eye system was built from a Googol GRB-400 robot and a camera. For calibrating the rotation matrix of the hand-eye relation, an active-vision-based calibration method is analyzed. To calibrate the translation vector, a laser pointer fixed at the end of the manipulator is used to obtain the base-frame coordinates of feature points on the workpiece platform, which are combined with the already-calibrated rotation matrix to solve for the translation vector. Finally, distances between multiple feature points measured from images are compared with their true values; the length-measurement error between planar feature points lies within ±0.8 mm, indicating that the hand-eye calibration is accurate enough for workpiece localization and automatic grasping.

9.
Camera calibration establishes the mapping between 2D image features and the corresponding 3D features in computer vision. To address the difficulty of accurate calibration, a step-wise calibration algorithm based on the Political Optimizer (PO) is proposed that solves the camera's geometric parameters and lens-distortion coefficients in separate stages, combining parameter calibration with distortion correction: the camera parameters are first optimized under the ideal (distortion-free) model, the distortion coefficients are then corrected with distortion taken into account, and individual positions are updated iteratively to find the best solution, effectively calibrating the camera. Experimental results show that the PO algorithm achieves a mean error of 0.044706 pixels and a good ability to escape local optima, accurately completing the camera-calibration task.

10.
Accurate hand-eye calibration is important for a robot's visual perception of its environment. Existing algorithms usually estimate the transformation parameters of the hand-eye system by least-squares estimation or global nonlinear optimization. When the measurements contain gross errors, plain least squares loses accuracy, and calibration based on global nonlinear optimization tends to converge prematurely under the same outliers, likewise degrading accuracy. To reduce this sensitivity to gross errors, a weighted least-squares robust estimation method based on error-distribution estimation is proposed to improve hand-eye calibration accuracy. First, the hand-eye transformation matrix is computed by least squares; the error of each coordinate pair is then computed; the weight of each pair is initialized from the distribution probability of its error; and the hand-eye matrix is recomputed by weighted least squares. An iterative estimation strategy further improves calibration accuracy. The designed hand-eye calibration experiments show that the algorithm maintains high accuracy under gross data errors and is well suited to robot hand-eye calibration.
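The reweighting loop described above amounts to iteratively reweighted least squares. This generic sketch reuses the SVD point-set solver and Huber-style weights with a median-absolute-deviation scale; the paper derives its weights from an estimated error distribution, so treat the weight formula here as an illustrative stand-in:

```python
import numpy as np

def robust_rigid_transform(P, Q, iters=10, k=1.345):
    """Iteratively reweighted least-squares rigid transform P -> Q.

    Starts from an unweighted SVD solution, then downweights pairs whose
    residual is large (Huber weights scaled by the median absolute deviation),
    so gross outliers barely influence the final estimate."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    w = np.ones(len(P))
    for _ in range(iters):
        cp = (w[:, None] * P).sum(0) / w.sum()      # weighted centroids
        cq = (w[:, None] * Q).sum(0) / w.sum()
        H = (P - cp).T @ (w[:, None] * (Q - cq))    # weighted cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = cq - R @ cp
        r = np.linalg.norm(Q - (P @ R.T + t), axis=1)             # residuals
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale
        w = np.minimum(1.0, k * s / np.maximum(r, 1e-12))         # Huber weights
    return R, t
```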

11.
Visual servoing towards a moving target with hand-eye cameras fixed at the hand is inevitably affected by the hand's dynamical oscillations, so it is difficult to keep the target at the center of the camera's view: the nonlinear dynamics of the whole manipulator work against tracking ability. To overcome this defect of hand-fixed camera systems, an eye-vergence system has been put forward, in which the cameras can rotate to observe the target object. The visual-servoing controllers of the hand and of eye-vergence are installed independently, so the target can be kept at the center of the camera images through the eye-vergence function. The dynamical superiority of the eye-vergence system is verified through frequency-response experiments comparing hand tracking performance with the proposed eye-vergence tracking performance. This paper analyzes 3D object position and orientation tracking, with orientation represented by quaternions, and presents the orientation-tracking results together with a comprehensive analysis of system performance.

12.
The robotic eye-in-hand configuration is very useful in many applications, and robotic hand-eye calibration is essential to its use. The method proposed in this paper, which calibrates the optical axis of the camera against a target pattern, creates an environment for camera calibration without additional equipment. By using the geometric relation between the optical axis and the robot hand, calibration of the optical axis and the target pattern becomes a feasible process. The proposed method requires identifying the points where the optical axis intersects the target pattern; image-processing techniques for identifying these intersection points are developed and implemented by designing the target as a checkerboard pattern. The accuracy of the proposed method is discussed, and experimental results are presented to verify it.

13.
To improve the accuracy of manipulator grasping, a Mask R-CNN-based framework for detecting the optimal grasping position is proposed. From RGB-D images, the framework determines the category, position, and mask of the object to be grasped through accurate instance segmentation; inverse-distance weighting on the denoised depth map then yields the weighted depth of the center point, giving the object's 3D target position, which is converted through the coordinate frames into the final optimal grasping position. Because the framework takes the object's pose and edge information into account, it effectively improves grasping performance. Finally, grasping experiments on a UR3 manipulator verify the framework's effectiveness.
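The inverse-distance-weighted depth lookup described above can be sketched as a small window average over valid depth pixels; the window size, power, and the zero-means-missing convention are illustrative choices, not details from the paper:

```python
import numpy as np

def idw_depth(depth, u, v, radius=3, p=2):
    """Inverse-distance-weighted depth at pixel (u, v) of a depth map.

    Zero depth values are treated as missing; neighbours in the window are
    weighted by 1/dist^p so closer valid pixels contribute more."""
    h, w = depth.shape
    num = 0.0
    den = 0.0
    for dv in range(-radius, radius + 1):
        for du in range(-radius, radius + 1):
            x, y = u + du, v + dv
            if 0 <= x < w and 0 <= y < h and depth[y, x] > 0:
                d2 = float(du * du + dv * dv)
                wt = 1e6 if d2 == 0 else 1.0 / d2 ** (p / 2)  # valid centre dominates
                num += wt * depth[y, x]
                den += wt
    return num / den if den > 0 else 0.0
```

This makes the center depth robust both to sensor noise and to the holes that commonly appear in depth maps near object edges.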

14.
We present a practical system which can provide a textured full-body avatar within 3 s. It uses sixteen RGB-depth (RGB-D) cameras, ten of which are arranged to capture the body, while six target the important head region. The configuration of the multiple cameras is formulated as a constraint-based minimum set space-covering problem, which is approximately solved by a heuristic algorithm. The camera layout determined can cover the full-body surface of an adult, with geometric errors of less than 5 mm. After arranging the cameras, they are calibrated using a mannequin before scanning real humans. The 16 RGB-D images are all captured within 1 s, which both avoids the need for the subject to attempt to remain still for an uncomfortable period, and helps to keep pose changes between different cameras small. All scans are combined and processed to reconstruct the photorealistic textured mesh in 2 s. During both system calibration and working capture of a real subject, the high-quality RGB information is exploited to assist geometric reconstruction and texture stitching optimization.

15.
In a human–robot collaborative manufacturing application where a work object can be placed in an arbitrary position, there is a need to calibrate the actual position of the work object. This paper presents an approach for automatic work-object calibration in flexible robotic systems. The approach consists of two modules: a global positioning module based on fixed cameras mounted around the robotic workspace, and a local positioning module based on the camera mounted on the robot arm. The aim of the global positioning is to detect the work object in the working area and roughly estimate its position, whereas the local positioning is to define an object frame according to the 3D position and orientation of the work object with higher accuracy. For object detection and localization, coded visual markers are utilized. For each object, several markers are used to increase the robustness and accuracy of the localization and calibration procedure. This approach can be used in robotic welding or assembly applications.

16.
To address the limited hand-eye calibration accuracy of a four-DOF robot, a calibration-block-based hand-eye calibration system is proposed. A sub-pixel corner-extraction algorithm is introduced to obtain accurate pixel coordinates of the feature points; combined with translation-only motions of the manipulator, the rotation matrix of the hand-eye system is calibrated. The calibration block is then used to obtain the world coordinates of the projection, onto the work platform, of the center of the robot's third link, from which the system's translation matrix is computed. Experiments show that the method not only improves hand-eye calibration accuracy but also simplifies the extraction of the feature points' world coordinates.

17.
Due to their wide field of view, omnidirectional cameras are becoming ubiquitous in many mobile robotic applications. A challenging problem consists of using these sensors, mounted on mobile robotic platforms, as visual compasses (VCs) to provide an estimate of the rotational motion of the camera/robot from the omnidirectional video stream. Existing VC algorithms suffer from some practical limitations, since they require a precise knowledge either of the camera-calibration parameters, or the 3-D geometry of the observed scene. In this paper we present a novel multiple-view geometry constraint for paracatadioptric views of lines in 3-D, that we use to design a VC algorithm that does not require either the knowledge of the camera calibration parameters, or the 3-D scene geometry. In addition, our algorithm runs in real time since it relies on a closed-form estimate of the camera/robot rotation, and can address the image-feature correspondence problem. Extensive simulations and experiments with real robots have been performed to show the accuracy and robustness of the proposed method.

18.
When the robot's forward kinematics and the camera's extrinsic calibration contain errors, hand-eye calibration algorithms based on nonlinear optimization cannot guarantee that the objective function converges to the global minimum. A globally optimal hand-eye calibration algorithm based on quaternion theory and convex relaxation is therefore proposed. Considering the effect of the angle between the relative rotation axes of successive end-effector motions on the accuracy of the calibration equations, a random sample consensus (RANSAC) scheme first pre-screens the calibration data by these inter-axis angles; the rotation matrix is then parameterized by a quaternion, polynomial geometric-error objectives and constraints are constructed, and a convex-relaxation global optimization based on linear matrix inequalities (LMI) solves for the globally optimal hand-eye transformation matrix. Measured results show that the algorithm attains the global optimum, with a mean geometric error of the hand-eye transformation matrix of at most 1.4 mm and a standard deviation below 0.16 mm, slightly better than quaternion-based nonlinear optimization.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号