Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
《Advanced Robotics》2013,27(5):429-443
We propose a simple visual servoing scheme based on the use of binocular visual space. When we use a hand-eye system with a kinematic structure similar to a human being's, the transformation from the binocular visual space to the joint space of the manipulator can be approximated as a linear time-invariant mapping. This relationship makes it possible to generate joint velocities from image observations using a constant linear mapping. The scheme is robust to calibration error, especially camera rotation, because it uses neither camera angles nor joint angles. Experimental results demonstrate that positioning precision remains unchanged despite calibration error.
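As a concrete illustration (not the authors' code), the constant-linear-mapping servo loop might be sketched as follows; the gain matrix G and the feature vectors are hypothetical placeholders.

```python
import numpy as np

# Hedged sketch: a fixed matrix G maps the stacked left/right image-feature
# error of two tracked points (8 values) to six joint velocities. In the
# scheme above, G is identified once and then kept time-invariant.
G = np.random.default_rng(0).normal(scale=0.01, size=(6, 8))  # placeholder

def joint_velocity(f_current, f_desired, gain=0.5):
    """Binocular features f = [uL, vL, uR, vR, ...] -> joint velocities dq/dt."""
    error = np.asarray(f_desired, float) - np.asarray(f_current, float)
    return gain * G @ error

# One control step with made-up 8-D binocular feature vectors.
f_cur = np.array([120, 80, 110, 82, 200, 150, 190, 148], float)
f_des = np.array([160, 100, 150, 101, 240, 170, 230, 169], float)
dq = joint_velocity(f_cur, f_des)  # six joint velocities
```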

2.
This article addresses the visual servoing of a rigid robotic manipulator equipped with a binocular vision system in an eye-to-hand configuration. The control goal is to move the robot end-effector to a visually determined target position precisely without knowing the precise camera model. Many vision-based robotic positioning systems have been implemented and validated with supporting experimental results; this research, however, aims at providing a stability analysis for a class of robotic set-point control systems employing image-based feedback laws. Specifically, by exploiting the epipolar geometry of the binocular vision system, a binocular visual constraint is found that helps establish the stability of the feedback system. Any three-degree-of-freedom positioning task satisfying appropriate conditions can be encoded with the image-based encoding approach in such a way that driving the encoded error to zero implies that the original task has been accomplished with precision. The corresponding image-based control law is proposed to drive the encoded error to zero. The overall closed-loop system is exponentially stable provided that the binocular model imprecision is small.
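As a hedged illustration of the binocular constraint the analysis exploits: corresponding left/right projections of one 3-D point satisfy the epipolar relation, so driving the stacked image errors of both views to zero pins down the 3-D target. The fundamental matrix and feature vectors below are placeholders.

```python
import numpy as np

def epipolar_residual(F, xl, xr):
    """Algebraic residual xr^T F xl of the epipolar constraint; zero for a
    true left/right correspondence (F: hypothetical fundamental matrix)."""
    xl_h = np.array([xl[0], xl[1], 1.0])
    xr_h = np.array([xr[0], xr[1], 1.0])
    return float(xr_h @ F @ xl_h)

def encoded_error(f_left, f_right, g_left, g_right):
    """Stacked image-space error of both views; if it is driven to zero and
    the pairs are epipolar-consistent, the 3-D positioning task is done."""
    return np.concatenate([np.subtract(g_left, f_left),
                           np.subtract(g_right, f_right)])
```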

3.
In this paper, the problem of robust relative positioning between a 6-DOF robot camera and an object of interest is considered. Assuming a weak-perspective camera model and a local linear approximation of the visible object surface, an image-based state-space representation of the robot camera–object interaction model is derived, based on the matrix of 2-D affine transformations. A dynamic extension of the visual model permits 3-D parameters to be estimated directly as functions of the state variables. The proposed nonlinear robust control law ensures asymptotic stability except at image singularities, assuming an exact model and exact state measurements. In the presence of bounded uncertainties, under an appropriate choice of control gains, ultimate boundedness of the state error is also formally proved. Simulation results validate the theoretical framework both in terms of system convergence and control robustness.

4.
The goal of this paper is to describe a method to position a robot arm at any visible point of a given workspace without explicit online use of the analytical form of the transformations between real space and camera coordinates (camera calibration) or between Cartesian and joint coordinates (the direct or inverse kinematics of the robot arm). The formulation uses a discrete network of points distributed over the workspace, with a procedure for measuring certain Jacobian matrices that provide a good local linear approximation to the unknown compound transformation between camera and joint coordinates. This approach is inspired by the biological observation of the vestibulo-ocular reflex (VOR) in humans. We show that little space is needed to store the transformation at a given scale, since feedback on the visual coordinates is used to improve precision up to the limit of the visual system. This characteristic also allows the plant to cope with disturbances in camera positioning or robot parameters. Furthermore, if the dimension of the visual space is equal to or greater than that of the motor space, the transformation can be inverted, resulting in a realistic model of the plant that can be used to train other methods for determining the visuo-motor mapping. As a test of the method, an experiment positioning a real robot arm is presented, together with another experiment showing the robot executing a simple task (building a tower of blocks).
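A rough sketch of the measured-Jacobian idea, with hypothetical move/observe interfaces standing in for the robot and camera drivers: the local Jacobian at a network point is estimated by finite differences, and visual feedback then refines the positioning.

```python
import numpy as np

def measure_jacobian(move, observe, q, delta=1e-2):
    """Finite-difference estimate of the visuo-motor Jacobian at joint
    configuration q; move(q) commands the arm, observe() returns the
    visual coordinates (both are placeholder callables)."""
    move(q)
    v0 = observe()
    J = np.zeros((len(v0), len(q)))
    for i in range(len(q)):
        dq = np.zeros(len(q))
        dq[i] = delta
        move(q + dq)
        J[:, i] = (observe() - v0) / delta
    move(q)  # return to the starting configuration
    return J

def servo_step(J, v_target, v_now, q, alpha=0.5):
    """One visual-feedback correction using the stored local Jacobian."""
    return q + alpha * np.linalg.pinv(J) @ (v_target - v_now)
```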

5.
The stereo positioning problem of robotic hand-eye vision systems is studied in depth. First, the hand-eye problem is reformulated to obtain a new system model; on this basis, a practical and effective calibration method is proposed. Its core idea is to map image coordinates directly into the robot base coordinate system, acquiring the system parameters as a whole rather than computing each intrinsic camera parameter separately. Compared with the original method, this method allows the end-effector pose to be changed freely during positioning, i.e., the pose from which the camera images the target is unconstrained. Experiments show that the method is convenient to operate, simple to implement, and achieves high positioning accuracy. It overcomes the limitations of the original method and greatly broadens the range of applications of hand-eye vision systems.
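A hedged sketch of the "image coordinates straight to the base frame" idea: a simple affine least-squares fit over calibration samples stands in here for the paper's full model, which is not reproduced.

```python
import numpy as np

def fit_direct_map(uv_stereo, xyz_base):
    """Fit one overall mapping from stereo pixels to base-frame points.
    uv_stereo: N x 4 array [uL, vL, uR, vR]; xyz_base: N x 3 array of the
    same points measured in the robot base frame."""
    A = np.hstack([uv_stereo, np.ones((len(uv_stereo), 1))])  # N x 5
    M, *_ = np.linalg.lstsq(A, xyz_base, rcond=None)          # 5 x 3
    return M

def locate(M, uv):
    """Map one stereo observation [uL, vL, uR, vR] to base-frame XYZ."""
    return np.append(np.asarray(uv, float), 1.0) @ M
```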

6.
We present a new approach to visual feedback control using image-based visual servoing with stereo vision. In order to control the position and orientation of a robot with respect to an object, a new technique is proposed using binocular stereo vision. Stereo vision enables us to calculate an exact image Jacobian not only around a desired location, but also at other locations. The suggested technique can guide a robot manipulator to the desired location without such a priori knowledge as the relative distance to the desired location or a model of the object, even if the initial positioning error is large. We describe a model of stereo vision and how to generate feedback commands. The performance of the proposed visual servoing system is illustrated by simulation and experimental results, and compared with the conventional method for an assembly robot. This work was presented in part at the Fourth International Symposium on Artificial Life and Robotics, Oita, Japan, January 19–22, 1999.

7.
To address the safety problem when humans and robots share a workspace in human-robot collaboration, a vision-based hand protection system for human-robot collaboration is designed, together with a corresponding validation setup. The system combines a deep-learning object detection algorithm with binocular vision to recognize and locate the operator's hands, and uses hand-eye calibration to transform the visually located hand coordinates into the robot base coordinate system. By computing the distance between the operator's hand and the robot end-effector, the robot autonomously executes safety strategies such as deceleration and emergency stop. Experiments verify that when the operator works inside the robot workspace, monitoring the relative hand/end-effector position effectively prevents collisions between the robot end-effector and the hand during collaboration, protecting the operator.
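A minimal sketch of such a distance-based safety policy; the thresholds and frame conventions are assumptions, not values from the paper.

```python
import numpy as np

SLOW_DIST = 0.50  # m: reduce speed inside this radius (hypothetical)
STOP_DIST = 0.20  # m: emergency stop inside this radius (hypothetical)

def safety_action(hand_xyz_base, tcp_xyz_base):
    """Both points are assumed already transformed into the robot base
    frame (the hand position via the hand-eye calibration above)."""
    d = np.linalg.norm(np.asarray(hand_xyz_base) - np.asarray(tcp_xyz_base))
    if d < STOP_DIST:
        return "emergency_stop"
    if d < SLOW_DIST:
        return "reduce_speed"
    return "normal"
```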

8.
Considering the characteristics of six-degree-of-freedom robot control systems and the influence of the space environment, a distributed visual servo control system based on a dual-redundancy design is proposed. The system consists of a master control computer and multiple nodes such as joint controllers, a gripper controller, and a hand-eye vision controller. A two-level (hot/cold) dual-redundant CAN bus serves as the communication bus between modules, and every node adopts a dual-redundant design. The intelligent nodes carry out the control functions of the robot system under the planning and coordination of the master computer, addressing the processing capability, space adaptability, and real-time communication requirements of space robot control systems. System integration along with performance and functional testing verified the feasibility of the design.

9.
To address the accuracy degradation of SLAM (simultaneous localization and mapping) caused by inaccurate LiDAR-odometry pose estimates when mapping large-scale outdoor scenes with mobile robots, a semantic bag-of-words optimization algorithm based on multi-sensor information fusion, MSW-SLAM (multi-sensor information fusion SLAM based on semantic word bags), is proposed. A visual-inertial system incorporates the raw LiDAR observations, and a sliding window performs joint nonlinear optimization over multi-source data: IMU (inertial measurement unit) measurements, visual features, and LiDAR point-cloud features. Finally, the algorithm exploits the complementary semantic bag-of-words characteristics of vision and LiDAR for loop-closure optimization, further improving the global localization and mapping accuracy of the multi-sensor fusion SLAM system. Experimental results show that, compared with conventional tightly coupled stereo visual-inertial odometry and LiDAR-odometry localization, MSW-SLAM effectively detects loop closures along the trajectory and achieves high-precision global pose-graph optimization; the point-cloud map after loop-closure detection shows good resolution and global consistency.

10.
Han Feng, 《计算机系统应用》 (Computer Systems & Applications), 2015, 24(11): 252-256
To locate a target object in three-dimensional space, a method is proposed that achieves binocular stereo positioning with a single camera and is structurally simple, easy to operate, and cost-effective. For recognizing and locating the target object, the SURF algorithm, which performs well across the relevant metrics, is used to extract and match feature points in the acquired images. Experimental results show that this SURF-based monocular-to-binocular positioning method is both feasible and practical in terms of positioning accuracy as well as speed, and has real practical value.
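For illustration, the match-and-triangulate step might look as follows in OpenCV; SURF requires an opencv-contrib build with non-free modules enabled, and the projection matrices of the two camera positions are assumed given.

```python
import cv2
import numpy as np

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # contrib/non-free

def match_and_triangulate(img1, img2, P1, P2):
    """img1/img2: the two views from the single moved camera;
    P1, P2: their 3x4 projection matrices."""
    k1, d1 = surf.detectAndCompute(img1, None)
    k2, d2 = surf.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches]).T  # 2 x N
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches]).T
    X = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4 x N homogeneous
    return (X[:3] / X[3]).T                        # N x 3 points
```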

11.
This article describes real-time gaze control using position-based visual servoing. The main control objective of the system is to enable a gaze point to track the target so that the image feature of the target is located at each image center. The overall system consists of two parts: the vision process and the control system. The vision system extracts a predefined color feature from images. An adaptive look-up table method is proposed in order to obtain the 3-D position of the feature within the video frame rate under varying illumination. An uncalibrated camera raises the problem that the reconstructed 3-D positions are not correct. To solve the calibration problem in the position-based approach, we constructed an end-point closed-loop system using an active head-eye system. In the proposed control system, the reconstructed position error is used with a Jacobian matrix of the kinematic relation. System stability is locally guaranteed, as in image-based visual servoing, and the gaze position is shown to converge to the feature position. The proposed approach was successfully applied to a tracking task with a moving target in simulations and real experiments, and the processing speed meets real-time requirements. This work was presented in part at the Sixth International Symposium on Artificial Life and Robotics, Tokyo, January 15–17, 2001.

12.
Li Yan, Wu Lin, 《机器人》 (Robot), 1990, 12(3): 1-7
This paper studies the relationships among the base coordinate system, the object coordinate system, and the image coordinate system in the vision system of a general-purpose articulated robot with a "hand-eye" camera. Using projective transformations and mappings between spatial coordinate systems, the mathematical correspondence between camera image points and spatial points in the hand-eye robot system is derived; the fixed-camera case is also discussed. Based on the practical conditions of articulated-robot vision systems, a simple and practical algorithm is derived in the actual robot coordinate system: by operating on the coordinates of points in the images captured by the camera, the coordinates of the corresponding spatial points in the robot base frame can be computed, so the robot can be guided quickly and accurately to the desired spatial position from the visual image.
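The paper's derivation is not reproduced here, but a common simplification of the image-point-to-base-frame computation assumes the target lies on a known base plane (Z = 0), which reduces the mapping to a homography:

```python
import numpy as np

def pixel_to_base(u, v, K, R, t):
    """Back-project pixel (u, v) to the robot base frame, assuming the
    point lies on the base plane Z = 0. K: 3x3 camera intrinsics;
    R, t: base-to-camera rotation and translation."""
    # Homography from the Z = 0 plane to the image: H = K [r1 r2 t].
    H = K @ np.column_stack([R[:, 0], R[:, 1], t])
    p = np.linalg.solve(H, np.array([u, v, 1.0]))
    return np.array([p[0] / p[2], p[1] / p[2], 0.0])  # base-frame XYZ
```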

13.
Color grading method based on a uniform color space (cited by: 4)
Automated visual inspection is an important research area in machine vision, and color grading is a typical problem within it, with wide applications in industries such as ceramics and wood. To achieve fast automatic grading, a surface color grading method based on a uniform color space is proposed according to the characteristics of human vision. The method first converts the data from the RGB color space to the CIE 1976 L*a*b* uniform color space; then, in that space, dominant colors (DC) are extracted with the RWM (radius weighted mean) method and used as color features. A new color distance measure, the mapped color difference, is proposed, and its relationship to the average color difference is analyzed. Finally, with the mapped color difference as the distance metric, a minimum-distance classifier performs the color grading. Experimental results show the method is effective.
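A compact sketch of the grading step under stated simplifications: OpenCV's 8-bit Lab conversion stands in for the exact CIE transform (its scaling differs from the CIE definition), the per-image mean stands in for the RWM dominant-color extraction, and plain Euclidean distance in Lab stands in for the mapped color difference.

```python
import cv2
import numpy as np

def grade(image_bgr, grade_centers_lab):
    """Classify a surface image into the grade with the nearest
    dominant color in L*a*b* space (minimum-distance classifier)."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2Lab)
    dominant = lab.reshape(-1, 3).astype(float).mean(axis=0)  # RWM stand-in
    dists = [np.linalg.norm(dominant - np.asarray(c, float))
             for c in grade_centers_lab]
    return int(np.argmin(dists))  # index of the closest grade
```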

14.
Wei Tong, Li Xu, 《机器人》 (Robot), 2020, 42(3): 336-345
The localization and mapping accuracy of existing simultaneous localization and mapping (SLAM) algorithms usually degrades sharply in dynamic environments. To address this, a stereo visual SLAM algorithm based on dynamic-region removal is proposed. First, dynamic sparse feature points in the scene are identified using stereo-vision geometric constraints, and the scene is segmented into regions according to depth and color information. The dynamic points and segmentation results are then combined to mark dynamic regions, and feature points inside those regions are removed from the existing stereo ORB-SLAM pipeline, eliminating the influence of dynamic objects on SLAM accuracy. Experiments show a dynamic-region segmentation recall of 92.31% on the KITTI dataset. In outdoor dynamic environments, tests on a visual guidance device for the blind reached a recall of 93.62%, straight-line walking localization accuracy improved by 82.75% over the unmodified stereo ORB-SLAM, the quality of the environment map improved notably, and the average processing speed reached 4.6 frames/s. The results show that the algorithm significantly improves the localization and mapping accuracy of stereo visual SLAM in dynamic scenes and meets the real-time requirements of visual guidance for the blind.
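One way to realize the geometric-constraint test for dynamic points, sketched with an assumed ego-motion estimate and a hypothetical pixel threshold:

```python
import numpy as np

THRESH_PX = 2.0  # hypothetical reprojection threshold in pixels

def is_dynamic(p3d_prev, T_rel, K, uv_obs):
    """Flag a feature as dynamic if its position predicted from the camera
    ego-motion disagrees with the observation. p3d_prev: 3-D point in the
    previous camera frame (from stereo depth); T_rel: 4x4 previous-to-
    current camera motion; uv_obs: the pixel observed in the current frame."""
    p = T_rel[:3, :3] @ np.asarray(p3d_prev, float) + T_rel[:3, 3]
    uvw = K @ p
    uv_pred = uvw[:2] / uvw[2]
    return np.linalg.norm(uv_pred - np.asarray(uv_obs, float)) > THRESH_PX
```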

15.
《Advanced Robotics》2013,27(10):1057-1072
It is an easy task for the human visual system to gaze continuously at an object moving in three-dimensional (3-D) space. While tracking the object, human vision seems able to comprehend its 3-D shape with binocular vision. We conjecture that, in the human visual system, the function of comprehending the 3-D shape is essential for robust tracking of a moving object. In order to examine this conjecture, we constructed an experimental system of binocular vision for motion tracking. The system is composed of a pair of active pan-tilt cameras and a robot arm. The cameras are for simulating the two eyes of a human while the robot arm is for simulating the motion of the human body below the neck. The two active cameras are controlled so as to fix their gaze at a particular point on an object surface. The shape of the object surface around the point is reconstructed in real-time from the two images taken by the cameras based on the differences in the image brightness. If the two cameras successfully gaze at a single point on the object surface, it is possible to reconstruct the local object shape in real-time. At the same time, the reconstructed shape is used for keeping a fixation point on the object surface for gazing, which enables robust tracking of the object. Thus these two processes, reconstruction of the 3-D shape and maintaining the fixation point, must be mutually connected and form one closed loop. We demonstrate the effectiveness of this framework for visual tracking through several experiments.

16.
The position and orientation of a moving platform mainly depend on the global positioning system (GPS) and the inertial navigation system in low-altitude surveying, mapping and remote sensing, and land-based mobile mapping systems. However, the GPS signal is unavailable in deep space exploration and indoor robot control. In such circumstances, image-based methods are very important for the self-positioning and orientation of a moving platform. Therefore, this paper first reviews the state of the art of image-based self-position and orientation methods (ISPOM) for moving platforms from the following aspects: 1) a comparison among major image-based methods (i.e., visual odometry, structure from motion, simultaneous localization and mapping) for position and orientation; 2) types of moving platform; 3) integration schemes of the image sensor with other sensors; 4) calculation methodology and the number of image sensors. The paper then proposes a new ISPOM scheme for mobile robots that depends solely on image sensors. It takes advantage of both monocular and stereo vision, and estimates the relative position and orientation of the moving platform with high precision and high frequency. In short, ISPOM will gradually move from research to application and play a vital role in deep space exploration and indoor robot control.

17.
To meet the visual navigation and localization needs of mobile robots, an improved visual odometry scheme based on a binocular camera is proposed. To deal with redundant feature information, the ORB (oriented FAST and rotated BRIEF) algorithm is improved by introducing multi-threshold FAST image segmentation; to reduce mismatches as much as possible, fast nearest-neighbor matching and random sample consensus (RANSAC) are applied. Conventional stereo matching algorithms rely on grayscale as the feature; here a new binocular disparity algorithm that uses descriptors as features is adopted instead to recover the depth of the feature points. To obtain pose coordinates with relatively high accuracy, a specific least-squares problem is constructed to provide an initial value, and the camera motion is then estimated effectively from the 3-D coordinates of the corresponding feature points. Experimental results on public datasets show that the proposed stereo visual odometry achieves comparatively good accuracy and high real-time performance.
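A hedged sketch of such a front end in OpenCV: ORB features, FLANN matching with a ratio test (assuming the "fast nearest neighbor" step is a FLANN-style matcher), and RANSAC PnP providing the initial pose for the least-squares refinement.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(2000)
flann = cv2.FlannBasedMatcher(
    dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1),  # LSH
    dict(checks=50))

def estimate_pose(img_prev, img_cur, pts3d_prev, K):
    """pts3d_prev: dict mapping previous-frame keypoint index to its 3-D
    point recovered from binocular disparity; K: camera intrinsics."""
    k1, d1 = orb.detectAndCompute(img_prev, None)
    k2, d2 = orb.detectAndCompute(img_cur, None)
    obj, img = [], []
    for pair in flann.knnMatch(d1, d2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
            m = pair[0]
            if m.queryIdx in pts3d_prev:  # keep matches with known depth
                obj.append(pts3d_prev[m.queryIdx])
                img.append(k2[m.trainIdx].pt)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.float32(obj), np.float32(img), K, None)
    return rvec, tvec  # initial camera motion for refinement
```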

18.
In this paper, we present a visual servoing method based on a learned mapping between feature space and control space. Using a suitable recognition algorithm, we present and evaluate a complete method that simultaneously learns the appearance and control of a low-cost robotic arm. The recognition part is trained using an "action precedes perception" approach. The novelty of this paper, apart from the visual servoing method per se, is the combination of visual servoing with gripper recognition. We show that we can achieve high-precision positioning without knowing in advance what the robotic arm looks like or how it is controlled.

19.
A general scheme to represent the relation between dynamic images and camera and/or object motions is proposed for application to the visual control of robots. We consider the case where a moving camera observes moving objects in a static scene; the camera obtains images of the objects moving within the scene. The possible combinations of the camera and object poses and the obtained images are then not arbitrary but mutually constrained. Here we represent this constraint as a lower-dimensional hypersurface in the product space of all combinations of their motion control parameters and image data. Visual control is interpreted as finding a path on this surface leading to the poses where a given goal image will be obtained. In this paper, we propose a visual control method that utilizes the tangential properties of this surface. First, we represent images as a composition of a small number of eigen images using the K-L (Karhunen-Loève) expansion. We then reconstruct the eigenspace (the eigen-image space) to achieve efficient and straightforward control; this reconstruction results in the constraint surface being mostly flat within the eigenspace. With this method, visual control of robots in a complex configuration is achieved without image processing to extract and match image features in dynamic images. The method also needs no camera or hand-eye calibration. Experimental results of visual servoing with the proposed method show the feasibility and applicability of our newly proposed approach to simultaneous control of camera self-motion and object motions.
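The K-L expansion step can be sketched with a plain SVD; the eigen-image basis and coefficient encoding below are generic PCA, not the paper's specific reconstruction of the eigenspace.

```python
import numpy as np

def kl_basis(images, k=8):
    """images: N x P matrix, one flattened image per row. Returns the mean
    image and the first k eigen images (rows of the basis)."""
    mean = images.mean(axis=0)
    U, S, Vt = np.linalg.svd(images - mean, full_matrices=False)
    return mean, Vt[:k]

def encode(image, mean, basis):
    """Project one flattened image onto the eigen-image basis."""
    return basis @ (image - mean)  # k-dimensional coefficients
```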

20.
Multi-camera vision is widely used to guide machining robots in removing flash and burrs from complex automotive castings and forgings with arbitrary initial posture. To address the problems of an insufficient field of view and regional occlusion in actual machining, a gradient-weighted multi-view calibration method (GWM-View) is proposed for machining-robot positioning based on convergent binocular vision. Specifically, the mapping between each auxiliary camera and the main camera in the multi-view system is calculated from the inverse equation and the intrinsic parameter matrix. The gradient-weighted suppression algorithm is then introduced to filter out errors caused by camera angle variation, and the spatial coordinates of the feature points after suppression are used to correct the transformation matrix. Finally, a hand-eye calibration algorithm transforms the corrected data into the robot base coordinate system for accurate positioning of the robot under multiple views. An experiment on an automotive engine flywheel shell indicates that the average positioning error is kept within 1 mm under different postures. The stability and robustness of the proposed method are improved while the positioning accuracy of the machining robot meets the requirements.
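The final hand-eye step can be illustrated with OpenCV's solver for the AX = XB calibration problem; the pose lists and the choice of Tsai's method are assumptions for this sketch.

```python
import cv2
import numpy as np

def solve_hand_eye(R_g2b, t_g2b, R_t2c, t_t2c):
    """Paired gripper-to-base and target-to-camera poses (lists of 3x3
    rotations and 3x1 translations) recorded at several robot postures.
    Returns the camera-to-gripper rotation and translation."""
    return cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                method=cv2.CALIB_HAND_EYE_TSAI)

def cam_point_to_base(p_cam, R_c2g, t_c2g, R_g2b_now, t_g2b_now):
    """Chain camera -> gripper -> base for one measured 3-D point."""
    p_g = R_c2g @ np.asarray(p_cam, float) + np.ravel(t_c2g)
    return R_g2b_now @ p_g + np.ravel(t_g2b_now)
```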

