Similar Documents
 20 similar documents found (search time: 10 ms)
1.
Planar Large-Range Visual Servo Control Based on Image Differences   (total citations: 1; self-citations: 1; citations by others: 1)
To solve the control problem under large-range deviations, the desired image is rotated at given angular intervals to generate a series of sub-desired images offline. By comparing the degree of difference between the image captured in real time and the desired sub-images, the rotational motion parameters of the target about its centroid can be obtained. Combined with the translational parameters given by the image-centroid method, the camera is rapidly adjusted to the desired pose even under large deviations. Near the desired pose, direct image feedback is incorporated, achieving planar large-range visual servo control based on image differences.

2.
In image-based visual servoing, changes in the image are interpreted directly as camera motion rather than as Cartesian velocity commands for the manipulator end-effector; this yields a circuitous end-effector trajectory and the camera-retreat phenomenon. To address this problem, a visual servo scheme is proposed that decouples rotation from translation and performs the rotation first. The scheme has low computational cost and short system response time; it eliminates the interference between image rotation and translation, overcomes the camera-retreat phenomenon of traditional image-based visual servoing, and achieves control that is optimal in both time and path. The cause of the retreat phenomenon is also explained using the control law of traditional IBVS and the camera imaging model. Two-dimensional motion simulations demonstrate the effectiveness of the proposed scheme.
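The camera-retreat problem described above stems from the coupling between rotational and translational components in the classic IBVS control law v = -λ L⁺ e. A minimal sketch of that law, and of a rotation-first decoupling in the spirit of the abstract (illustrative only, not the paper's implementation), might look like:

```python
import numpy as np

def ibvs_velocity(L, e, lam=0.5):
    """Classic IBVS law: camera velocity v = -lambda * L^+ * e, where L is
    the image interaction (Jacobian) matrix and e the feature error.
    Coupling between the translational and rotational columns of L is what
    induces the camera-retreat trajectory."""
    return -lam * np.linalg.pinv(L) @ e

def rotation_first_velocity(L, e, lam=0.5, rot_done_tol=1e-3):
    """Rotation-first variant: suppress the translational components
    (v[:3], assuming the ordering [vx, vy, vz, wx, wy, wz]) until the
    commanded rotational velocity is negligible, then translate."""
    v = -lam * np.linalg.pinv(L) @ e
    if np.linalg.norm(v[3:]) > rot_done_tol:
        v[:3] = 0.0  # rotate first; no translation yet
    return v
```

With a well-conditioned interaction matrix this reduces to a damped pseudo-inverse step; the decoupling simply gates the translational half of the command on the rotational error.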

3.
This paper presents a novel approach for image-based visual servoing (IBVS) of a robotic system that accounts for constraints when the camera intrinsic and extrinsic parameters are uncalibrated and the positions of the features in 3-D space are unknown. Based on model predictive control, the robotic system's input and output constraints, such as visibility constraints and actuator limitations, can be taken into account explicitly. Whereas most constrained IBVS controllers use the traditional image Jacobian matrix, the proposed scheme is developed using a depth-independent interaction matrix. The unknown parameters appear linearly in the prediction model and can be estimated effectively by the identification algorithm. In addition, the model predictive controller determines the optimal control input and updates the estimated parameters together with the prediction model. The proposed approach simultaneously handles system constraints, unknown camera parameters, and depth parameters, and achieves the desired performance in both visual positioning and tracking tasks. Simulation results on a 2-DOF planar robot manipulator, for both eye-in-hand and eye-to-hand camera configurations, demonstrate the effectiveness of the proposed method.

4.
A Survey of Monocular Visual Servoing   (total citations: 5; self-citations: 1; citations by others: 5)
徐德 《自动化学报》2018,44(10):1729-1746
Visual servoing is one of the research hotspots in robot vision, with very broad application prospects. Focusing on monocular vision systems, this paper reviews the state of the art in visual servoing from several perspectives: the motion mapping relationship, error representation, control-law design, and key influencing factors. It analyzes the characteristics of different visual servoing methods and presents typical applications in various fields. Finally, the main future research directions of visual servoing are identified.

5.
王婷婷  刘国栋 《控制工程》2013,20(2):334-338
To address the difficulty of current image-based visual servoing (IBVS) methods in handling system constraints, as well as their merely local asymptotic stability, a new parallel distributed compensation (PDC) control method is proposed. First, tensor-product (TP) model transformation converts the visual servo system model into a convex combination of linear time-invariant systems. Then, following the PDC principle, the control variables are obtained by solving a convex optimization problem subject to linear matrix inequalities, whose feasible solution guarantees closed-loop asymptotic stability of the visual servo system. Besides avoiding direct inversion of the image Jacobian matrix, and thus the image-singularity problem, the method handles system constraints easily, shaping the magnitude of the control signal according to the actuators' mechanical limits. Simulation results on a two-DOF link system verify the effectiveness of the method.

6.
A Survey of Robot Visual Servoing   (total citations: 45; self-citations: 0; citations by others: 45)
This paper systematically reviews the history and state of the art of robot visual servoing. Robot visual control systems are classified from different perspectives, with emphasis on position-based and image-based visual servo systems. The application of artificial neural networks to robot visual servoing is introduced, and the selection of image features in visual servoing is discussed. Frontier issues in robot vision are described, and open problems in current research as well as future development directions are pointed out.

7.
8.
《Advanced Robotics》2013,27(12-13):1817-1827
The principal deficiency of an image-based servo is that the induced three-dimensional (3-D) trajectories are not optimal and sometimes, especially when the displacements to realize are large, are not physically valid, leading to failure of the servoing process. In this paper, we address the problem of generating trajectories of image features that correspond to optimal 3-D trajectories, in order to control a robotic system efficiently using an image-based control strategy. First, a collineation path between the given start and end points is obtained, and then the trajectories of the image features are derived. Path planning is formulated as a variational problem that allows optimality and inequality constraints (visibility) to be considered simultaneously. A numerical method is employed to solve the path-planning problem in variational form.

9.
《Advanced Robotics》2013,27(5):547-572
This paper presents the architecture of a feedforward manipulator control strategy based on a belief function that may be appropriate for less controlled environments. In this architecture, the belief about the environmental state, described by a probability density function, is maintained by a recursive Bayesian estimation process. A likelihood is derived from each observation regardless of whether the targeted features of the environmental state have been detected, providing continuously evolving information to the controller and allowing an inaccurate belief to evolve into an accurate one. Control actions are determined by maximizing objective functions using non-linear optimization. Forward models transform control actions into a predicted state so that the objective functions may be expressed in task space. A first set of examples numerically investigates the validity of the proposed strategy by demonstrating control in a two-dimensional scenario; a more realistic application is then presented in which a robotic manipulator executes a searching and tracking task using an eye-in-hand vision sensor.

10.
Visual servoing allows the introduction of robotic manipulation into dynamic and uncontrolled environments. This paper presents a position-based visual servoing algorithm using particle filtering. The objective is the grasping of objects using the 6 degrees of freedom of the robot manipulator in non-automated industrial environments using monocular vision. A particle filter has been added to the position-based visual servoing algorithm to deal with the various noise sources of those industrial environments. Experiments performed in the real industrial scenario of the ROBOFOOT project (http://www.robofoot.eu/) showed accurate grasping and a high level of stability in the visual servoing process.
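The role of the particle filter in such a scheme is to smooth the noisy pose estimate fed to the position-based controller. A generic bootstrap predict/update/resample cycle, under assumed Gaussian noise models (the names, signatures, and parameters below are illustrative, not the ROBOFOOT implementation), could be sketched as:

```python
import numpy as np

def pf_pose_step(particles, meas, meas_fn, motion_std, meas_std, rng):
    """One predict/update/resample cycle of a bootstrap particle filter
    for pose estimation.

    particles: (n, d) array of pose hypotheses
    meas:      vision measurement of the true pose
    meas_fn:   maps a pose hypothesis to a predicted measurement
    """
    n = len(particles)
    # Predict: diffuse hypotheses with process (motion) noise.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: Gaussian log-likelihood of the measurement per hypothesis,
    # max-subtracted before exponentiation for numerical stability.
    residual = meas - np.array([meas_fn(p) for p in particles])
    logw = -0.5 * np.sum((residual / meas_std) ** 2, axis=1)
    weights = np.exp(logw - logw.max())
    weights /= weights.sum()
    # Resample: multinomial resampling proportional to the weights.
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx]
```

The filtered pose (e.g., the particle mean) then replaces the raw vision estimate in the position-based control law, which is what gives the reported stability against industrial noise sources.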

11.
International Journal of Control, Automation and Systems - In this paper, an adaptive switch image-based visual servoing (IBVS) controller for industrial robots is presented. The proposed control...

12.
13.
Research on Robot Visual Servo Systems   (total citations: 31; self-citations: 0; citations by others: 31)
Robot visual servo systems involve content from multiple disciplines. Focusing on the three main aspects of such systems, namely system architecture, image processing, and control methods, this paper introduces the state of research in the field and the achievements obtained, and concludes with an analysis of future development trends.

14.
For microassembly tasks, uncertainty exists at many levels. Single static sensing configurations are therefore unable to provide feedback with the necessary range and resolution for accomplishing many desired tasks. In this paper we present experimental results that investigate the integration of two disparate sensing modalities, force and vision, for sensor-based microassembly. By integrating these sensing modes, we are able to provide feedback in a task-oriented frame of reference over a broad range of motion with extremely high precision. An optical microscope is used to provide visual feedback down to micron resolutions, while an optical beam deflection technique (based on a modified atomic force microscope) is used to provide nanonewton-level force feedback or nanometer-level position feedback. Visually servoed motion at speeds of up to 2 mm/s with a repeatability of 0.17 μm is achieved with vision alone. The optical beam deflection sensor complements the visual feedback by providing positional feedback with a repeatability of a few nanometers; based on the principles of optical beam deflection, this is equivalent to force measurements on the order of a nanonewton. The value of integrating these two disparate sensing modalities is demonstrated during controlled micropart impact experiments, which show micropart approach velocities of 80 μm/s with impact forces of 9 nN and final contact forces of 2 nN. Within our microassembly system this level of performance cannot be achieved using either sensing modality alone. This research will aid the development of complex hybrid MEMS devices in two ways: by enabling the microassembly of more complex MEMS prototypes, and by supporting the development of automatic assembly machines for assembling and packaging future MEMS devices that require increasingly complex assembly strategies.

15.
Research on an Uncalibrated Visual Servo Control Technique   (total citations: 3; self-citations: 0; citations by others: 3)
赵杰  李牧  李戈  闫继宏 《控制与决策》2006,21(9):1015-1019
In visual servo control, the camera and the robot kinematic model cannot be calibrated precisely, while existing uncalibrated visual servoing techniques either handle only static targets, or handle moving targets but cannot cope with large deviations. To address this problem, a dynamic uncalibrated visual servo control method is proposed: the robot is controlled to track a moving target based on nonlinear variance minimization, the image Jacobian matrix is estimated with a dynamic quasi-Newton method, iterative least squares is used to improve system stability, and an uncalibrated control strategy for large deviations is presented. Simulation experiments verify the correctness and effectiveness of the method.

16.
A novel approach to visual servoing is presented, which takes advantage of the structure of the Lie algebra of affine transformations. The aim of this project is to use feedback from a visual sensor to guide a robot arm to a target position. The target position is learned using the principle of teaching by showing in which the supervisor places the robot in the correct target position and the system captures the necessary information to be able to return to that position. The sensor is placed in the end effector of the robot, the camera-in-hand approach, and thus provides direct feedback of the robot motion relative to the target scene via observed transformations of the scene. These scene transformations are obtained by measuring the affine deformations of a target planar contour (under the weak perspective assumption), captured by use of an active contour, or snake. Deformations of the snake are constrained using the Lie groups of affine and projective transformations. Properties of the Lie algebra of affine transformations are exploited to provide a novel method for integrating observed deformations of the target contour. These can be compensated with appropriate robot motion using a non-linear control structure. The local differential representation of contour deformations is extended to allow accurate integration of an extended series of small perturbations. This differs from existing approaches by virtue of the properties of the Lie algebra representation which implicitly embeds knowledge of the three-dimensional world within a two-dimensional image-based system. These techniques have been implemented using a video camera to control a 5 DoF robot arm. Experiments with this implementation are presented, together with a discussion of the results.

17.
Stable Visual Servoing Through Hybrid Switched-System Control   (total citations: 1; self-citations: 0; citations by others: 1)
Visual servoing methods are commonly classified as image-based or position-based, depending on whether image features or the camera position define the signal error in the feedback loop of the control law. Choosing one method over the other gives asymptotic stability of the chosen error but surrenders control over the other. This can lead to system failure if feature points are lost or the robot moves to the end of its reachable space. We present a hybrid switched-system visual servo method that utilizes both image-based and position-based control laws. We prove the stability of a specific, state-based switching scheme and present simulated and experimental results.
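The state-based switching idea can be illustrated with a toy dispatcher between the two control laws (the switching predicates, state fields, and law signatures below are assumptions for illustration, not the authors' proven scheme):

```python
def hybrid_servo_step(state, ibvs_law, pbvs_law):
    """One step of a state-based switch between IBVS and PBVS laws.
    Rationale: the image-based law regulates feature error, so prefer it
    when features drift toward the image border (keeping them visible);
    the position-based law regulates the Cartesian pose, so prefer it
    near the workspace boundary (keeping the robot reachable)."""
    if state["features_near_image_border"]:
        # Image-based law drives feature error to zero, keeping features in view.
        return "ibvs", ibvs_law(state["image_error"])
    if state["near_workspace_limit"]:
        # Position-based law regulates pose directly, keeping the robot reachable.
        return "pbvs", pbvs_law(state["pose_error"])
    # Default regime: position-based control for a straight Cartesian path.
    return "pbvs", pbvs_law(state["pose_error"])
```

The substance of the paper is the stability proof for a specific switching scheme; a sketch like this only conveys the structure of alternating between the two error signals.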

18.
Advances in Robot Visual Servoing Research   (total citations: 36; self-citations: 1; citations by others: 35)
王麟琨  徐德  谭民 《机器人》2004,26(3):277-282
This paper introduces the architecture and main research topics of robot visual servo systems, compares the current major visual servoing methods, and elaborates on recently proposed solutions to the main problems facing robot visual servoing.

19.
We present an approach for controlling robotic interactions with objects using synthetic images generated by morphing shapes. In particular, we address the problem of positioning an eye-in-hand robotic system with respect to objects in the workspace for grasping and manipulation. In our formulation, the grasp position (and consequently the approach trajectory of the manipulator) varies with each object. The proposed solution consists of two parts. First, based on a model-based object recognition framework, images of the objects taken at the desired grasp pose are stored in a database. The recognition and identification of the grasp position for an unknown input object (selected from the family of recognizable objects) occurs by morphing its contour to the templates in the database and using the virtual energy spent during the morph as a dissimilarity measure. In the second step, the images synthesized during the morph are used to guide the eye-in-hand system and execute the grasp. The proposed method requires minimal calibration of the system. Furthermore, it conjoins techniques from shape recognition, computer graphics, and vision-based robot control in a unified engineering framework. Potential applications range from recognition and positioning with respect to partially occluded or deformable objects to planning robotic grasping based on human demonstration.

20.
Virtual Visual Servoing: a framework for real-time augmented reality   (total citations: 1; self-citations: 0; citations by others: 1)
This paper presents a framework for achieving real-time augmented reality applications, based on the visual servoing approach well known in robotics. We treat pose (viewpoint) computation as a problem analogous to visual servoing, which allows us to take advantage of all the research carried out in this domain in the past. The proposed method features simplicity, accuracy, efficiency, and scalability with respect to the camera model as well as to the features extracted from the image. We illustrate the efficiency of our approach on augmented reality applications with various real image sequences.
