Similar Documents
20 similar documents found
1.
2.
To enable a detailed study of the fundamental characteristics of robot visual servoing within a unified theoretical framework, this paper establishes a generalized visual servoing system model based on the task-function approach. On this model, the dynamic behavior of position-based visual servoing (PBVS) and image-based visual servoing (IBVS) in Cartesian space and in image space is examined in depth. Simulation results show that, within the same comparison framework, the PBVS method is likewise robust to camera calibration errors. Although the two methods are similar in the stability and convergence of the dynamic system, their dynamic performance in Cartesian and image space differs greatly. For PBVS, the Cartesian trajectory follows the shortest path, but the corresponding image trajectory is uncontrolled and features may leave the field of view; for IBVS, the image-space trajectory follows the shortest path, but the lack of direct Cartesian control can cause Cartesian trajectory deviations, such as camera retreat, when servoing over large rotations.
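As a concrete reference point for the IBVS side of such comparisons, the classical image-based law can be sketched as follows. This is a minimal numpy sketch using the standard textbook interaction matrix for a normalized point feature, not code from the paper; the gain `lam` and all feature values are illustrative.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point (x, y) at depth Z;
    rows map camera velocity (vx, vy, vz, wx, wy, wz) to (x_dot, y_dot)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, depths, lam=0.5):
    """Classical IBVS law: v = -lam * pinv(L) @ (s - s_star)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(s.reshape(-1, 2), depths)])
    return -lam * np.linalg.pinv(L) @ (s - s_star)

# two point features at 1 m depth (illustrative values)
s = np.array([0.10, 0.05, -0.10, 0.05])
s_star = np.array([0.00, 0.00, -0.20, 0.00])
v = ibvs_velocity(s, s_star, depths=[1.0, 1.0])
```

Applying the resulting camera velocity makes the feature error decay exponentially in image space, which is exactly why the corresponding Cartesian trajectory is left uncontrolled.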

3.
In this article, we present an integrated manipulation framework for a service robot that allows it to interact with articulated objects in home environments through the coupling of vision and force modalities. We consider a robot that simultaneously observes its hand and the object to be manipulated using an external camera (i.e., the robot head). Task-oriented grasping algorithms (Proc of IEEE Int Conf on Robotics and Automation, pp 1794–1799, 2007) are used to plan a suitable grasp on the object according to the task to perform. A new vision/force coupling approach (Int Conf on Advanced Robotics, 2007), based on external control, is used first to guide the robot hand towards the grasp position and second to perform the task while taking external forces into account. The coupling between these two complementary sensor modalities provides the robot with robustness against uncertainties in models and positioning. A position-based visual servoing control law has been designed to continuously align the robot hand with respect to the object being manipulated, independently of the camera position. This allows the camera to move freely while the task is being executed and makes the approach amenable to integration in current humanoid robots without the need for hand-eye calibration. Experimental results on a real robot interacting with different kinds of doors are presented.

4.
A new uncalibrated eye-to-hand visual servoing scheme based on inverse fuzzy modeling is proposed in this paper. In classical visual servoing, the Jacobian plays a decisive role in the convergence of the controller, as its analytical model depends on the selected image features; this Jacobian must also be inverted online. Fuzzy modeling is applied to obtain an inverse model of the mapping between image feature variations and joint velocities. This approach is independent of the robot's kinematic model and camera calibration, and it also avoids the need to invert the Jacobian online. An inverse model is identified for the robot workspace using measurement data from a robotic manipulator and is used directly as a controller. The inverse fuzzy control scheme is applied to a robotic manipulator performing visual servoing for random positioning in the workspace. The experimental results show the effectiveness of the proposed control scheme: the fuzzy controller can position the manipulator at any point in the workspace with better accuracy than the classical visual servoing approach.

5.
To address the complexity and poor generality of marking, extracting, and matching geometric image features in traditional visual servoing, this paper proposes an image-moment-based four-degree-of-freedom (4-DOF) visual servoing method for robots. First, the nonlinear incremental mapping between image moments and robot pose in an eye-in-hand system is established, providing the theoretical basis for moment-based servo control. Then, without calibrating the camera or the hand-eye relationship, a moment-based visual servo control scheme is designed using the nonlinear mapping capability of a back-propagation (BP) neural network. Finally, the trained network is used for visual servo tracking control. Experiments show that the algorithm achieves a tracking accuracy of 0.5 mm in position and 0.5° in orientation, verifying its effectiveness and good servo performance.
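Image moments of the kind used as features here are straightforward to compute from an image. A minimal numpy sketch (raw and central moments up to second order, centroid, and principal-axis orientation) might look like this; the blob is an illustrative synthetic image, not data from the paper.

```python
import numpy as np

def image_moments(img):
    """Raw moments m_pq = sum x^p y^q I(x,y) of a grayscale/binary image,
    plus centroid and principal-axis orientation, as used in moment-based
    visual servoing."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m = {(p, q): float(np.sum((xs ** p) * (ys ** q) * img))
         for p in range(3) for q in range(3) if p + q <= 2}
    xc, yc = m[1, 0] / m[0, 0], m[0, 1] / m[0, 0]
    mu11 = m[1, 1] - xc * m[0, 1]          # central moments
    mu20 = m[2, 0] - xc * m[1, 0]
    mu02 = m[0, 2] - yc * m[0, 1]
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # orientation angle
    return m, (xc, yc), theta

blob = np.zeros((40, 40)); blob[10:20, 5:35] = 1.0   # wide horizontal blob
_, (xc, yc), theta = image_moments(blob)
```

The area m00, centroid, and orientation give a compact 4-DOF feature vector (x, y, scale, rotation) without any point matching.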

6.
In an intelligent space, the main task of a home service robot is to help people search for, locate, and deliver objects, and visual servoing is an effective means to this end. A home-service-robot visual servoing system consisting of a mobile robot, a manipulator, and a camera was built; its kinematic model was established, and the intrinsic and extrinsic parameters of the vision system mounted on the manipulator's end-effector were calibrated. The pose of the target object is obtained by decomposing the homography of a world plane, and a position-based visual servo control law is designed from the recovered pose. Experiments show that designing the control law via planar homography decomposition accomplishes household-object visual servoing tasks simply and effectively.
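The planar homography being decomposed here is the one induced by a world plane between two camera views. A small sketch of the forward model (the standard formula H = K (R + t nᵀ/d) K⁻¹) shows what the decomposition inverts; the intrinsics `K`, the 0.1 m translation, and the plane z = 1 m are illustrative values, not from the paper.

```python
import numpy as np

def plane_homography(K, R, t, n, d):
    """Homography induced by the world plane n . X = d between two views:
    H = K (R + t n^T / d) inv(K).  Decomposing an estimated H recovers the
    (R, t, n) pose information that the position-based control law uses."""
    return K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)

def apply_h(H, uv):
    """Map a pixel through a homography (homogeneous normalization)."""
    p = H @ np.array([uv[0], uv[1], 1.0])
    return p[:2] / p[2]

# illustrative intrinsics; camera translated 0.1 m along x, plane at z = 1 m
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
H = plane_homography(K, np.eye(3), np.array([0.1, 0.0, 0.0]),
                     np.array([0.0, 0.0, 1.0]), 1.0)
uv = apply_h(H, (320.0, 240.0))
```

Estimating H from point correspondences and decomposing it yields the (R, t, n) used to drive the position-based control law.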

7.
M.T. Hussein, Advanced Robotics, 2013, 27(24): 1575–1585
In this review, recent developments in flexible robot arm control using visual servoing are surveyed. In contrast to rigid robots, the end-effector position of a flexible-link arm cannot be obtained precisely enough for position control from kinematic information and joint variables alone. To solve this, visual servoing with a vision sensor (camera) is proposed to control flexible manipulators to improved quality requirements. The paper is organized as follows: visual servoing architectures are first reviewed for rigid robots, with the advantages, disadvantages, and comparisons between the different approaches laid out. The use of visual servoing to control flexible robots is addressed next. Open problems, such as state-variable estimation and the combination of different sensor properties, as well as some application-oriented points related to flexible robots, are discussed in detail.

8.
2D visual servoing consists in using data provided by a vision sensor to control the motions of a dynamic system. Most visual servoing approaches have relied on geometric features that must be tracked and matched in the image acquired by the camera. Recent work has highlighted the interest of taking into account the photometric information of the entire image. That approach was developed for images from perspective cameras; in this paper, we propose to extend the technique to central cameras. This generalization makes the method applicable to catadioptric cameras and wide-field-of-view cameras. Several experiments were carried out successfully with a fisheye camera to control a 6-degrees-of-freedom robot, and with a catadioptric camera in a mobile robot navigation task.

9.
In image-based visual servoing, image changes are interpreted directly as camera motion rather than as Cartesian velocity commands to the end-effector, which can produce roundabout manipulator trajectories and the camera-retreat phenomenon. To address this, a visual servoing scheme is proposed that decouples rotation from translation and performs the rotation first. The scheme is computationally light and shortens the system response time; it removes the interference between image rotation and translation, overcomes the camera retreat of traditional image-based visual servoing, and achieves time- and path-optimal control. The cause of camera retreat is explained using the traditional IBVS control law and the camera imaging model, and 2-D motion simulations demonstrate the scheme's effectiveness.
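The rotate-first decoupling can be sketched as a two-phase velocity command. This is an illustrative sketch only, not the paper's exact control law; it uses SciPy's axis-angle conversion, and the gain and tolerance values are assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def decoupled_step(R_err, t_err, lam=0.5, rot_tol=1e-2):
    """Rotation-first servo step: command only angular velocity until the
    axis-angle orientation error is small, then command translation.
    Separating the two phases avoids the coupled motion behind camera retreat."""
    theta_u = Rotation.from_matrix(R_err).as_rotvec()  # theta*u error vector
    if np.linalg.norm(theta_u) > rot_tol:
        return np.zeros(3), -lam * theta_u   # phase 1: pure rotation
    return -lam * t_err, np.zeros(3)         # phase 2: pure translation

R_err = Rotation.from_rotvec([0.0, 0.0, 0.5]).as_matrix()  # 0.5 rad about z
v, w = decoupled_step(R_err, np.array([1.0, 0.0, 0.0]))
```

Because the rotation is servoed on its own error vector first, the image translation loop never has to fight the rotational motion.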

10.
In this paper, the visual servoing problem is addressed by coupling nonlinear control theory with a convenient representation of the visual information used by the robot. The visual representation, based on a linear camera model, is extremely compact to comply with active vision requirements. The devised control law is proven to ensure global asymptotic stability in the Lyapunov sense, assuming an exact model and exact state measurements. It is also shown that, in the presence of bounded uncertainties, the closed-loop behavior is characterized by a global attractor. The well-known pose ambiguity arising from the use of linear camera models is solved at the control level by choosing a hybrid visual state vector including both image-space (2D) information and 3D object parameters. A method is expounded for online visual state estimation that avoids camera calibration. Simulation and real-time experiments validate the theoretical framework in terms of both system convergence and control robustness.

11.
Image-based effector servoing is a process of perception-action cycles for handling a robot effector under continual visual feedback. This paper applies visual servoing mechanisms not only to handling objects, but also to camera calibration and object inspection. A 6-DOF manipulator and a stereo camera head are mounted on separate platforms and steered independently. In the first phase (calibration phase), camera features such as the optical axes and the fields of sharp view are determined. In the second phase (inspection phase), the robot hand carries an object into the field of view of one camera, approaches the object along the optical axis to the camera, rotates the object to reach an optimal view, and finally the object shape is inspected in detail. In the third phase (assembly phase), the system localizes a board containing holes of different shapes, determines the hole that best fits the object shape, then approaches and arranges the object appropriately. The final object insertion is based on haptic sensors but is not treated in this paper. At present, the robot system can handle cylindrical and cuboid pegs; for other object categories, the system can be extended with more sophisticated strategies in the inspection and/or assembly phases.

12.
13.
Many issues in robot visual servo control remain open in both theory and application, such as feature selection, system calibration, and servo control algorithms. For an Adept robot, a simple and fast camera calibration method is proposed that does not require accurate calibration of the camera's intrinsic and extrinsic parameters; it provides the coordinate transformation from the visual plane containing the observed object surface to the robot base frame. Global image features, namely image moments, are used for servo tracking. Using the derived image Jacobian, a visual servo controller is designed that combines image feedback with adaptive compensation of target motion. The algorithm was first verified in positioning experiments on a static target and then applied to tracking a moving target; by tuning and selecting the control parameters, stable servo tracking and grasping were achieved. The experimental results show that using image moments as features avoids complex feature matching and yields good tracking accuracy.

14.
Advanced Robotics, 2013, 27(11): 1203–1218
A new visual servoing technique based on two-dimensional (2-D) ultrasound (US) images is proposed to control the motion of a US probe held by a medical robot. Unlike a standard camera, which provides a projection of the three-dimensional (3-D) scene onto a 2-D image, US information is strictly confined to the observation plane of the probe, so visual servoing techniques have to be adapted. In this paper, the coupling between the US probe and a motionless crossed-string phantom used for probe calibration is modeled. A robotic task is then developed that consists of positioning the US image on the intersection point of the crossed-string phantom while moving the probe to different orientations. The goal of this task is to optimize the spatial parameter calibration procedure for 3-D US systems.

15.
Advanced Robotics, 2013, 27(8–9): 1035–1054
Abstract

In this paper we present an image predictive controller for an eye-in-hand-type servoing architecture, composed of a 6-d.o.f. robot and a camera mounted on the gripper. A novel architecture integrating a reference trajectory and image prediction is proposed for predictive control of visual servoing systems. In the proposed method, a new predictor is developed from the relation between the camera velocity and the time variation of the visual features given by the interaction matrix. The image-based predictor generates the future trajectories of a visual feature ensemble when past and future camera velocities are known. In addition, a reference trajectory is introduced to define how the desired features are reached over the prediction horizon, starting from the current features. The advantages of the new architecture are the reference trajectory, used here for the first time in the predictive-control sense, and the predictor based on a local model. Simulations demonstrate the efficiency of the proposed architecture in controlling a 6-d.o.f. robot manipulator.
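The local-model predictor built from the interaction matrix amounts to integrating the relation ṡ = L v forward over the horizon. A minimal sketch (constant interaction matrix and sample time, purely illustrative values, not the paper's implementation):

```python
import numpy as np

def predict_features(s0, L, velocities, T=0.04):
    """Local-model predictor: integrate s_dot = L @ v over the horizon,
    s[k+1] = s[k] + T * (L @ v[k])  (interaction matrix L, sample time T)."""
    traj = [np.asarray(s0, dtype=float)]
    for v in velocities:
        traj.append(traj[-1] + T * (L @ v))
    return np.array(traj)

# constant interaction matrix and camera velocity sequence, illustrative only
L = np.hstack([np.eye(2), np.zeros((2, 4))])
vels = [np.array([1.0, 0, 0, 0, 0, 0])] * 5
traj = predict_features([0.0, 0.0], L, vels, T=0.1)
```

The predicted feature trajectory is what the optimizer compares against the reference trajectory over the prediction horizon.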

16.
Image-based visual servoing is a flexible and robust technique for controlling a robot and guiding it to a desired position using only two-dimensional visual data. However, it is well known that classical visual servoing based on the Cartesian coordinate system has one crucial problem: the camera moves backward to infinity when the camera motion from the initial to the desired pose is a pure rotation of 180° around the optical axis. This paper proposes a new formulation of visual servoing based on a cylindrical coordinate system whose origin can be shifted. The proposed approach maps a pure rotation around an arbitrary axis to the proper camera rotational motion. It is shown that this formulation contains the classical Cartesian-coordinate approach as an extreme case with the origin located at infinity. Furthermore, we propose a method for deciding the origin-shift parameters by estimating the rotational motion from the differences between the initial and desired image-plane positions of the feature points.
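The coordinate change itself is simple: features are expressed as (ρ, φ) about a shiftable origin, so a 180° rotation about an axis through that origin leaves ρ unchanged and appears purely as a φ error rather than as a degenerate Cartesian command. An illustrative sketch (not the paper's implementation):

```python
import numpy as np

def to_cylindrical(pts, origin=(0.0, 0.0)):
    """Image-plane features (x, y) -> (rho, phi) about a shiftable origin.
    A 180-deg rotation about the optical axis through the origin leaves
    rho unchanged and shows up purely as a phi error."""
    d = np.asarray(pts, dtype=float) - np.asarray(origin, dtype=float)
    return np.hypot(d[:, 0], d[:, 1]), np.arctan2(d[:, 1], d[:, 0])

# initial vs desired features related by a 180-deg optical-axis rotation
rho, phi = to_cylindrical(np.array([[1.0, 0.0], [-1.0, 0.0]]))
```

Servoing on (ρ, φ) instead of (x, y) therefore commands the proper rotation instead of driving the camera backward.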

17.
We present a new approach to visual feedback control using image-based visual servoing with stereo vision. To control the position and orientation of a robot with respect to an object, a new technique is proposed that uses binocular stereo vision. Stereo vision enables an exact image Jacobian to be calculated not only around a desired location, but also at other locations. The suggested technique can guide a robot manipulator to the desired location without a priori knowledge such as the relative distance to the desired location or a model of the object, even if the initial positioning error is large. We describe a model of stereo vision and how to generate feedback commands. The performance of the proposed visual servoing system is illustrated by simulation and experimental results, and compared with the conventional method for an assembly robot. This work was presented in part at the Fourth International Symposium on Artificial Life and Robotics, Oita, Japan, January 19–22, 1999.

18.
Detection and tracking for robotic visual servoing systems
Robot manipulators require knowledge about their environment in order to perform their desired actions. In several robotic tasks, vision sensors play a critical role by providing the necessary quantity and quality of information about the robot's environment. For example, "visual servoing" algorithms may control a robot manipulator so as to track moving objects being imaged by a camera. Current visual servoing systems often lack the ability to automatically detect objects that appear within the camera's field of view. In this research, we present a robust "figure/ground" framework for visually detecting objects of interest. An important contribution of this research is a collection of optimization schemes that allow the detection framework to operate within the real-time limits of visual servoing systems. The most significant of these schemes involves the use of "spontaneous" and "continuous" domains. The number and location of continuous domains are allowed to change over time, adjusting to the dynamic conditions of the detection process. We have developed actual servoing systems to test the framework's feasibility and to demonstrate its usefulness for visually controlling a robot manipulator.

19.
This paper addresses the control problem for cooperative visual servoing manipulators on a strongly connected graph with communication delays, in the case where uncertain robot dynamics and kinematics, an uncalibrated camera model, and actuator constraints are considered simultaneously. An adaptive cooperative image-based approach is established to overcome the control difficulty arising from the nonlinear coupling between the visual model and the robot agents. To estimate the coupled camera-robot parameters, a novel adaptive strategy is developed; its superiority mainly lies in containing both the individual image-space errors and the synchronization errors among the networked robots, so the cooperative performance is significantly strengthened. Moreover, the proposed cooperative controller with a Nussbaum-type gain both globally stabilizes the closed-loop systems and achieves the synchronization control objective under unknown and time-varying actuator constraints. Finally, simulations are carried out to validate the developed approach.

20.
This article describes real-time gaze control using position-based visual servoing. The main control objective of the system is to make the gaze point track the target so that the target's image feature is located at each image center. The overall system consists of two parts: the vision process and the control system. The vision system extracts a predefined color feature from images. An adaptive look-up table method is proposed to obtain the 3-D position of the feature within the video frame rate under varying illumination. An uncalibrated camera raises the problem that the reconstructed 3-D positions may not be correct. To solve the calibration problem in the position-based approach, we constructed an end-point closed-loop system using an active head-eye system. In the proposed control system, the reconstructed position error is used with a Jacobian matrix of the kinematic relation. System stability is locally guaranteed, as in image-based visual servoing, and the gaze position is shown to converge to the feature position. The proposed approach was successfully applied to a tracking task with a moving target in simulations and real experiments, and the processing speed meets real-time requirements. This work was presented in part at the Sixth International Symposium on Artificial Life and Robotics, Tokyo, January 15–17, 2001.
