Similar Documents
19 similar documents found.
1.
A New Hand-Eye Calibration Method for Robots   Cited 10 times (2 self-citations, 8 by others)
杨广林  孔令富  王洁 《机器人》2006,28(4):400-405
A new calibration method for robot hand-eye systems is presented, based on controlling the motion of a camera-equipped manipulator. Unlike previous algorithms, when the translation vector of the hand-eye relationship is computed, a virtual rotation transformation is applied to the camera coordinate frame so that the rotation is converted into a translation problem. The method requires only two translational motions and one rotational motion of the manipulator platform and only two feature points in the scene, which makes it convenient and practical. An active-vision method for computing the depth of spatial points is also given.
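For context, the classical eye-in-hand relation that active-vision methods of this kind reformulate can be stated as follows (this is the standard AX = XB form, not the paper's specific virtual-rotation derivation):

```latex
% Robot motion B_i, induced camera motion A_i, unknown hand-eye transform X:
A_i X = X B_i, \qquad
X = \begin{bmatrix} R_X & t_X \\ 0 & 1 \end{bmatrix},
\quad\text{which splits into}\quad
R_{A_i} R_X = R_X R_{B_i}, \qquad
(R_{A_i} - I)\, t_X = R_X\, t_{B_i} - t_{A_i}.
```

The rotation R_X is constrained by rotational motions, while t_X becomes observable only when R_{A_i} differs from the identity; the virtual-rotation step mentioned in the abstract is presumably aimed at exactly this translation part.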

2.
A Robot Positioning System Based on Hand-Eye Stereo Vision   Cited 1 time (0 self-citations, 1 by others)
陈锡爱  徐方 《计算机应用》2005,25(Z1):302-304
A hand-eye-based robot positioning system was developed. It uses a monocular eye-in-hand camera and realizes stereo vision through a single movement of the manipulator. A convenient and effective hand-eye calibration method is proposed that avoids the complex traditional hand-eye calibration procedure and does not require solving for the camera extrinsic parameters or the hand-eye transformation matrix. Using only the composite camera parameters and the robot pose recorded at calibration time, any two points within the field of view can be measured in the robot base frame; the position of a target object in the robot base frame is then obtained from the stereo-vision constraints, so the target can be located accurately.

3.
To address the complexity of existing active-vision methods for calibrating the hand-eye matrix and the structured-light plane, a method based on active vision is proposed that calibrates both simultaneously. The rotational part of the hand-eye matrix is solved by precisely controlling the robot to perform two mutually orthogonal translational motions; the translational part of the hand-eye matrix and the light-plane equation are then solved from two or more motions that include rotation. The method is simple, requires no dedicated calibration target, and needs only three feature points during calibration to determine both the robot hand-eye matrix and the light-plane equation. Experimental results demonstrate the effectiveness of the method.

4.
A vision-system calibration method based on a single target point in the field of view is designed. An arbitrary point in the field of view is selected as the target point and, taking it as a reference, the robot performs relative motions to obtain multiple feature points. Geometric constraints between corresponding points in the image sequence and the transformation matrices between the coordinate frames are established, the transformation-matrix relations are determined, and the camera's intrinsic and extrinsic parameters are then solved. The method requires extracting only one scene point, the robot motion control is easy to operate, and the algorithm is simple to implement. Experimental results verify the effectiveness of the method.

5.
Everyday service-robot tasks require the manipulator to grasp different objects and, depending on how each object is placed, to grasp it from the corresponding angle; however, the pose relationship between the manipulator and the held object is often difficult to measure accurately and directly. Taking a cutting tool as the held object, after it is grasped with an eye-in-hand system, the pose relationship between the hand-eye setup and the tool tip is calibrated online. A method for calibrating the focal length before and after refocusing the camera is given first; the focus adjustment is then treated as a translation of the camera along its optical axis, and from the two images captured before and after refocusing, the pose of the tool in both the camera and manipulator coordinate frames is calibrated. A method for computing the guidance vector from the tool tip to the desired machining point is given as well.

6.
钟宇  张静  张华  肖贤鹏 《计算机工程》2022,48(3):100-106
Intelligent collaborative robots rely on a vision system to perceive the dynamic workspace in an unknown environment and locate targets, so that the manipulator can autonomously grasp and retrieve target objects. An RGB-D camera captures color and depth images of the scene and provides 3D point clouds of arbitrary targets within the field of view, helping the robot perceive its surroundings. To obtain the transformation between the grasping robot and the RGB-D camera coordinate frames, a robot hand-eye calibration method based on the yolov3 object-detection neural network is proposed. A 3D-printed ball held at the manipulator end-effector serves as the calibration target; an improved yolov3 network locates the ball center in real time, the 3D position of the end-effector center in the camera frame is computed, and singular value decomposition (SVD) is used to obtain a least-squares solution for the transformation matrix between the robot and camera frames. Experiments with a 6-DOF UR5 manipulator and an Intel RealSense D415 depth camera show that the calibration method requires no auxiliary equipment and that the positional error of transformed spatial points is within 2 mm, which satisfies the grasping requirements of typical visual-servoing intelligent robots.
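The SVD least-squares step described above is, in its standard form, a Kabsch/Umeyama-style rigid-transform fit between corresponding 3D point sets. A minimal sketch follows, assuming the ball-center positions are already available in both frames; function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def rigid_transform_svd(pts_robot, pts_cam):
    """Least-squares rigid transform (R, t) mapping camera-frame points to
    robot-frame points via SVD; pts_* are (N, 3) arrays of matched points."""
    c_robot = pts_robot.mean(axis=0)
    c_cam = pts_cam.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (pts_cam - c_cam).T @ (pts_robot - c_robot)
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so that det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_robot - R @ c_cam
    return R, t

# Illustrative usage with synthetic data standing in for ball-center
# observations (camera frame) and end-effector positions (robot frame).
pts_cam = np.random.rand(10, 3)
t_true = np.array([0.1, -0.2, 0.5])
pts_robot = pts_cam + t_true          # here R_true = identity for simplicity
R, t = rigid_transform_svd(pts_robot, pts_cam)
```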

7.
To calibrate a structured-light vision-guided welding robot system and overcome the drawbacks of existing methods, which are complex and demand high-precision calibration targets, a self-calibration method based on active vision is proposed. The method images three feature points in the scene. The camera intrinsic parameters and the rotational part of the hand-eye matrix are calibrated by precisely controlling the welding robot to perform five translational motions; the translational part of the hand-eye matrix and the equation of the structured-light plane in the camera frame are then calibrated from two motions that include rotation, combined with the parametric equation of the laser stripe on the feature-point plane, and a correction is applied for different welding-torch lengths. Tests on a structured-light vision-guided welding robot system built around a Denso robot gave stable results, with a positioning accuracy of ±0.93 mm. The calibration method is simple and feature selection is easy, which matters for deploying welding robot systems on real industrial sites.

8.
Research on Robot Hand-Eye Stereo Vision Calibration   Cited 1 time (0 self-citations, 1 by others)
A robot hand-eye system is built from a Googol GRB-400 robot and a CCD camera. The camera imaging model is analyzed and, following a "black-box" idea that maps image coordinates directly to robot reference coordinates, the target position is computed directly from image coordinates for camera hand-eye calibration in stereo positioning. The method simplifies the complex camera calibration process by keeping the rotation matrix from the robot end-effector to the robot reference frame fixed. Experiments verify the feasibility of the method, the sources of experimental error are analyzed, and corresponding remedies are proposed.

9.
《微型机与应用》2015,(17):70-74
A hand-eye system is formed from a Googol GRB-400 robot and a camera. For calibrating the rotation matrix of the hand-eye relationship, an active-vision-based calibration method is analyzed. To calibrate the translation vector, a laser pointer fixed at the end of the manipulator is used to obtain the base-frame coordinates of feature points on the workpiece platform, and the translation vector is then calibrated together with the previously calibrated rotation matrix. Finally, distances between multiple feature points are computed from the images and compared with their actual values; the length measurement error between planar feature points is within ±0.8 mm, indicating that the hand-eye calibration is accurate enough for workpiece positioning and automatic grasping by the robot.

10.
《机器人》2015,(3)
Because an underactuated robot cannot attain as many spatial poses as a fully actuated robot, classical hand-eye calibration methods cannot be applied directly. To solve this problem, this paper establishes a mathematical model of the robot and proposes a hand-eye calibration method based on a planar constraint. In this method, the motion of the end-effector is constrained to a known plane in 3D space, and the rotation matrix and translation vector of the hand-eye calibration are solved in a decoupled manner from pure-translation and pure-rotation motion patterns together with the corresponding calibration images. Monte Carlo simulations show that the calibration algorithm is effective, accurate, and fairly robust to noise. In real experiments on the underactuated wall-climbing robot developed in this work, the positioning error was kept within 1.5 mm. The results show that the calibration method is simple, easy to apply, and sufficiently accurate.

11.
A robotic manipulator using a stereo camera mounted on one of its links requires a precise kinematic transformation calibration between the manipulator and the camera coordinate frames, the so-called hand–eye calibration, to achieve high-accuracy end-effector positioning. This paper introduces a new method that performs simultaneous joint angle and hand–eye calibration, based on a traditional method that uses a sequence of pure rotations of the manipulator links. The new method considers an additional joint angle constraint, which improves the calibration accuracy when the circular arc that can be measured by the stereo camera is very limited. Experimental results using a manipulator developed for humanitarian demining demonstrate that relative errors between the end effector and the external points mapped by the stereo camera are greatly reduced compared to traditional methods.

12.
This paper deals with the trajectory planning problem for redundant manipulators. A genetic algorithm (GA) using a floating-point representation is proposed to search for the optimal end-effector trajectory for a redundant manipulator. An evaluation function is defined based on multiple criteria, including the total displacement of the end-effector, the total angular displacement of all the joints, and the uniformity of the Cartesian and joint-space velocities. These criteria result in minimized, smooth end-effector motions. Simulations are carried out for path planning in free space and in a workspace with obstacles. Results demonstrate the effectiveness and capability of the proposed method in generating optimized collision-free trajectories.
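A minimal sketch of a multi-criteria evaluation function of the kind described above is shown below; the weights, discretization, and forward-kinematics interface are assumptions made for illustration, not the paper's exact definition.

```python
import numpy as np

def trajectory_cost(q, fk, w=(1.0, 1.0, 1.0, 1.0)):
    """Multi-criteria cost for a joint-space trajectory q of shape (T, n_joints).

    fk maps one joint vector to a 3D end-effector position. Criteria (weights
    w are illustrative): total end-effector displacement, total joint angular
    displacement, and non-uniformity (variance) of the Cartesian and
    joint-space step lengths."""
    x = np.array([fk(qi) for qi in q])                 # end-effector positions, (T, 3)
    dx = np.linalg.norm(np.diff(x, axis=0), axis=1)    # Cartesian step lengths
    dq = np.linalg.norm(np.diff(q, axis=0), axis=1)    # joint-space step lengths
    return (w[0] * dx.sum() + w[1] * dq.sum()
            + w[2] * dx.var() + w[3] * dq.var())

if __name__ == "__main__":
    # Toy planar 2-link forward kinematics and a straight-line joint trajectory.
    fk = lambda qi: np.array([np.cos(qi).sum(), np.sin(qi).sum(), 0.0])
    q = np.linspace([0.0, 0.0], [1.0, 0.5], 20)
    # A GA with a floating-point chromosome (e.g. via-point joint angles laid
    # out as one real vector) would minimize this cost, for instance using
    # fitness = 1 / (1 + cost).
    print(trajectory_cost(q, fk))
```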

13.
《Advanced Robotics》2013,27(4):473-492
In this paper, we consider the problem of real-time planning and control of a robot manipulator in an unstructured workspace. The task we consider is to control the manipulator such that the end-effector follows a path on an unknown surface, with the aid of a single camera assumed to be uncalibrated with respect to the robot coordinates. To accomplish a task of this kind, we propose a new control strategy based on multisensor fusion. We assume that three different sensors are available: encoders mounted at each joint of the 6-d.o.f. robot, a force-torque sensor mounted at the wrist of the manipulator, and a visual sensor with a single camera fixed to the ceiling of the workcell. We also assume that the contact point between the tool grasped by the end-effector and the surface is frictionless. To describe the proposed algorithm that we have implemented, we first decouple the vector space of control variables into two subspaces, and use one of the subspaces for controlling the magnitude of the contact force on the surface and the other subspace for controlling the constrained motion on the surface. In this way the control synthesis problem is decoupled, and we are able to develop a new scheme that uses sensor fusion to handle uncalibrated parameters in the workcell, where the surface on which the task is to be performed is assumed to be visible but has an a priori unknown position.
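As an illustration of the subspace decoupling described above, in standard hybrid force/motion-control notation (the paper's exact construction may differ), with n the unit surface normal at the contact point:

```latex
P_f = n\, n^{\top}, \qquad
P_m = I - n\, n^{\top}, \qquad
u = P_f\, u_{\text{force}} + P_m\, u_{\text{motion}},
```

where P_f projects commands onto the constrained (force-controlled) direction normal to the surface and P_m onto the tangent plane in which the constrained motion is controlled.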

14.
Pietro Falco 《Advanced Robotics》2014,28(21):1431-1444
The paper proposes a method to improve the flexibility of the motion planning process for mobile manipulators. The approach is based on the exploitation of perception data available only from simple proximity sensors distributed on the robot. Such data are used to correct pre-planned motions to cope with uncertainties and dynamic changes of the scene at execution time. The algorithm computes robot motion commands aimed at fulfilling the mission by combining two tasks at the same time, i.e. following the planned end-effector path and avoiding obstacles in the environment, by exploiting robot redundancy as well as handling priorities among tasks. Moreover, a technique to smoothly switch between the tasks is presented. To show the effectiveness of the method, four experimental case studies are presented, consisting of a place task executed by a mobile manipulator in an increasingly cluttered scene.

15.
Based on previous physiological findings, this paper proposes a model of cerebellar motor learning based on a neuroadaptive robot manipulator controller. Compliance (or impedance) control is chosen as the basis of the model in preference to alternative robot control strategies because muscles do not act like pure force generators such as torque motors, nor as pure displacement devices such as stepper motors, but instead act more like tunable springs or compliance devices. Compliance control has the further advantage that it is applicable to a variety of motor tasks, and is both more robust and simpler than alternative control strategies. Simulation results are presented to verify the performance of the proposed model. Specific results are presented for the application of impedance control to the case where the end-effector is interacting with surfaces. By setting the equilibrium position of the end-effector beyond the obstacle (wall), it can be assured that the end-effector will touch the surface rather than crush it. The power of the phase space to analyze the behavior of the system during movement is demonstrated.

16.
Shot Change Detection Using Scene-Based Constraint   Cited 1 time (0 self-citations, 1 by others)
A key step for managing a large video database is to partition the video sequences into shots. Past approaches to this problem tend to confuse gradual shot changes with changes caused by smooth camera motions, in part because camera motion has not been dealt with in a fundamental way. We propose an approach that is based on a physical constraint used in optical flow analysis, namely that the brightness of a scene point should remain constant across two frames if the change across the frames is a result of smooth camera motion. Since the brightness constraint is violated across a shot change, detection can be based on detecting the violation of this constraint. The approach is robust because it uses only the qualitative aspect of the brightness constraint, detecting a scene change rather than estimating the scene itself. Moreover, by tapping the significant know-how in using this constraint, the algorithm's robustness is further enhanced. Experimental results are presented to demonstrate the performance of various algorithms. Our algorithm is shown to be less likely to interpret gradual camera motions as shot changes, resulting in significantly better precision than most other algorithms.
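The brightness-constancy constraint the abstract refers to is the standard optical-flow relation; a shot change is declared when it is violated for a large fraction of pixels (the detection statistic itself is paper-specific and not reproduced here):

```latex
I(x + \Delta x,\; y + \Delta y,\; t + \Delta t) = I(x, y, t)
\quad\Longrightarrow\quad
I_x\, u + I_y\, v + I_t = 0,
```

where (u, v) is the optical-flow vector of the scene point and I_x, I_y, I_t are the spatial and temporal image derivatives.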

17.
This paper deals with real-time implementation of visual-motor control of a 7-degree-of-freedom (DOF) robot manipulator using a self-organized map (SOM) based learning approach. The robot manipulator considered here is a 7-DOF PowerCube manipulator from Amtec Robotics. The primary objective is to reach a target point in the task space using only a single-step movement from any arbitrary initial configuration of the robot manipulator. A new clustering algorithm using a Kohonen SOM lattice is proposed that maintains the fidelity of the training data. Two different approaches are proposed to find an inverse kinematic solution without using any orientation feedback. In the first approach, the inverse Jacobian matrices are learnt from the training data using function decomposition; it is shown that function decomposition leads to a significant improvement in the accuracy of the inverse kinematic solution. In the second approach, a concept called sub-clustering in configuration space is suggested to provide multiple solutions for the inverse kinematic problem, with redundancy resolved at the position level using several criteria. A redundant manipulator is dexterous owing to the availability of multiple configurations for a given end-effector position; however, existing visual-motor coordination schemes provide only one inverse kinematic solution for every target position even when the manipulator is kinematically redundant. Thus, the second approach provides a learning architecture that can capture redundancy from the training data. The training data are generated using an explicit kinematic model of the combined robot manipulator and camera configuration. The training is carried out off-line, and the trained network is used on-line to compute the joint angle vector to reach a target position in a single step. The accuracy attained is better than the current state of the art.

18.
This article proposes multiple self-organizing maps (SOMs) for control of a visuo-motor system that consists of a redundant manipulator and multiple cameras in an unstructured environment. The maps control the manipulator so that its end-effector reaches targets specified in the camera images, and also make the manipulator take obstacle-free poses. Multiple cameras are introduced to avoid occlusions, and multiple SOMs are introduced to deal with the multiple camera images. Some simulation results are shown.

19.
We consider the self-calibration problem for a generic imaging model that assigns projection rays to pixels without a parametric mapping. We consider the central variant of this model, which encompasses all camera models with a single effective viewpoint. Self-calibration refers to calibrating a camera's projection rays purely from matches between images, i.e. without knowledge about the scene such as a calibration grid. To do this we consider specific camera motions, concretely pure translations and rotations, although without knowledge of the rotation and translation parameters (rotation angles, axis of rotation, translation vector). Knowledge of the type of motion, together with image matches, gives geometric constraints on the projection rays. We show, for example, that with translational motions alone, self-calibration can already be performed, but only up to an affine transformation of the set of projection rays. We then propose algorithms for full metric self-calibration that use rotational and translational motions, or just rotational motions.
