Found 20 similar documents; search took 10 ms.
1.
2.
In image-based visual servoing, changes in the image are interpreted directly as camera motion rather than as Cartesian velocity commands at the manipulator end-effector, which produces roundabout end-effector trajectories and the camera-retreat phenomenon. To address this problem, a visual servoing scheme is proposed that decouples rotation from translation and executes the rotation first. The scheme has a small computational cost and a short system response time; it removes the interference between image rotation and translation, overcomes the camera retreat produced by traditional image-based visual servoing, and achieves time- and path-optimal control. The cause of the retreat phenomenon is explained using the traditional IBVS control law and the camera imaging model. Two-dimensional motion simulations demonstrate the effectiveness of the proposed scheme.
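To make the retreat phenomenon described above concrete, here is a minimal sketch of the classic point-feature IBVS law (the standard interaction-matrix form; the function names and the four-point example are illustrative, not taken from the paper):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    # Image Jacobian of a normalized image point (x, y) at depth Z,
    # mapping camera twist (vx, vy, vz, wx, wy, wz) to image velocity.
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_twist(features, desired, depths, lam=0.5):
    # Classic IBVS control law: v = -lam * L^+ (s - s*).
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e
```

For a pure 90-degree rotation of four symmetric image points, the commanded twist contains a negative translation along the optical axis alongside the compensating rotation, which is exactly the camera retreat the abstract attributes to the classic law.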
3.
This paper presents a novel approach to image-based visual servoing (IBVS) of a robotic system that accounts for constraints in the case when the camera intrinsic and extrinsic parameters are uncalibrated and the position parameters of the features in 3-D space are unknown. Based on the model predictive control method, the robotic system's input and output constraints, such as visibility constraints and actuator limitations, can be taken into account explicitly. Whereas most constrained IBVS controllers use the traditional image Jacobian matrix, the proposed scheme is developed using a depth-independent interaction matrix, in which the unknown parameters appear linearly in the prediction model and can be estimated effectively by an identification algorithm. The model predictive controller determines the optimal control input and updates the estimated parameters together with the prediction model. The approach simultaneously handles system constraints, unknown camera parameters, and depth parameters, and achieves the desired performance in both visual positioning and tracking tasks. Simulation results on a 2-DOF planar robot manipulator, for both eye-in-hand and eye-to-hand camera configurations, demonstrate the effectiveness of the proposed method.
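The receding-horizon idea behind constrained IBVS can be sketched as follows. This is only an illustrative one-step-horizon version with a known interaction matrix and box input constraints; the paper's actual scheme uses a depth-independent interaction matrix and online parameter identification, and all names here are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def mpc_ibvs_step(s, s_star, L, v_max, horizon=5, dt=0.05, alpha=1e-3):
    """One receding-horizon step: find a constant camera twist v minimizing
    the predicted terminal feature error plus control effort, subject to
    box actuator limits |v_i| <= v_max (an explicit input constraint)."""
    def cost(v):
        s_pred = s + horizon * dt * (L @ v)   # linear prediction model
        err = s_pred - s_star
        return err @ err + alpha * (v @ v)
    bounds = [(-v_max, v_max)] * L.shape[1]
    res = minimize(cost, np.zeros(L.shape[1]), bounds=bounds)
    return res.x
```

Unlike the plain pseudoinverse law, the optimizer saturates at the actuator bound instead of commanding an infeasible velocity, while still reducing the predicted feature error.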
4.
A Survey of Monocular Visual Servoing Research
Visual servoing is one of the active research topics in robot vision and has broad application prospects. Focusing on monocular vision systems, this paper reviews the state of the art in visual servoing from several perspectives, including the motion mapping relationship, error representation, control-law design, and key influencing factors. It analyzes the characteristics of different visual servoing methods and presents typical applications in various fields. Finally, the main directions for future development of visual servoing are identified.
5.
To address the difficulty current image-based visual servoing (IBVS) methods have with system constraints, as well as their merely local asymptotic stability, a new parallel distributed compensation (PDC) control method is proposed. First, a tensor-product (TP) model transformation converts the visual servoing system model into a convex combination of linear time-invariant systems. Then, following the PDC principle, the control variables are obtained by solving a convex optimization problem over linear matrix inequalities, whose feasible solution guarantees closed-loop asymptotic stability of the visual servoing system. Besides avoiding direct inversion of the image Jacobian, and hence image-singularity issues, the method readily handles system constraints and can shape the control-signal magnitude according to the actuators' mechanical limits. Simulation results on a two-degree-of-freedom link system verify the effectiveness of the method.
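The PDC control law itself is a convex blend of per-vertex state-feedback gains. A minimal sketch is below; in the paper the vertex gains come from solving the LMI feasibility problem, whereas here they are simply assumed given, and all names are illustrative:

```python
import numpy as np

def pdc_control(x, weights, vertex_gains):
    """Parallel distributed compensation: the feedback is the convex blend
    of per-vertex gains, u = -sum_i h_i(z) K_i x, where the h_i are the
    TP-model weighting functions (h_i >= 0, sum_i h_i = 1)."""
    h = np.asarray(weights, dtype=float)
    assert np.all(h >= 0) and abs(h.sum() - 1.0) < 1e-9
    return -sum(hi * (Ki @ x) for hi, Ki in zip(h, vertex_gains))
```

Because the weights form a convex combination, the closed loop inherits the stability certified at the LTI vertices by the shared LMI solution.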
6.
7.
8.
《Advanced Robotics》2013,27(12-13):1817-1827
The principal deficiency of an image-based servo is that the induced three-dimensional (3-D) trajectories are not optimal and sometimes, especially when the displacements to realize are large, are not even physically valid, leading to failure of the servoing process. In this paper, we address the problem of generating trajectories of image features that correspond to optimal 3-D trajectories, in order to control a robotic system efficiently using an image-based control strategy. First, a collineation path between the given start and end points is obtained, and then the trajectories of the image features are derived. Path planning is formulated as a variational problem that allows optimality and inequality constraints (visibility) to be considered simultaneously. A numerical method is employed to solve the path planning problem in variational form.
9.
《Advanced Robotics》2013,27(5):547-572
This paper presents the architecture of a feedforward manipulator control strategy based on a belief function that may be appropriate for less controlled environments. In this architecture, the belief about the environmental state, as described by a probability density function, is maintained by a recursive Bayesian estimation process. A likelihood is derived from each observation regardless of whether the targeted features of the environmental state have been detected or not. This provides continuously evolving information to the controller and allows an inaccurate belief to evolve into an accurate belief. Control actions are determined by maximizing objective functions using non-linear optimization. Forward models are used to transform control actions to a predicted state so that objective functions may be expressed in task space. The first set of examples numerically investigates the validity of the proposed strategy by demonstrating control in a two-dimensional scenario. Then a more realistic application is presented where a robotic manipulator executes a searching and tracking task using an eye-in-hand vision sensor.
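The recursive Bayesian update at the core of such a belief-maintenance architecture can be sketched over a discretized state grid (the grid representation and function name are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def bayes_update(belief, likelihood):
    """Recursive Bayesian update of a discretized belief (pdf over grid
    cells): posterior is proportional to likelihood * prior, renormalized.
    A flat likelihood (uninformative observation) leaves the belief
    unchanged, so the belief evolves whether or not the targeted feature
    was actually detected."""
    post = belief * likelihood
    s = post.sum()
    if s == 0:          # observation ruled out everything: keep the prior
        return belief
    return post / s
```

This is the mechanism by which an initially inaccurate belief, fed a stream of per-observation likelihoods, can concentrate on the true environmental state.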
10.
Aitor Ibarguren José María Martínez-Otzeta Iñaki Maurtua 《Journal of Intelligent and Robotic Systems》2014,74(3-4):689-696
Visual servoing allows the introduction of robotic manipulation in dynamic and uncontrolled environments. This paper presents a position-based visual servoing algorithm using particle filtering. The objective is the grasping of objects using the 6 degrees of freedom of the robot manipulator in non-automated industrial environments using monocular vision. A particle filter has been added to the position-based visual servoing algorithm to deal with the different noise sources of those industrial environments. Experiments performed in the real industrial scenario of the ROBOFOOT (http://www.robofoot.eu/) project showed accurate grasping and a high level of stability in the visual servoing process.
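A generic particle-filter cycle of the kind used to robustify a pose estimate against industrial noise sources can be sketched as follows. The pose is reduced to (x, y, yaw) for brevity, and the noise models, thresholds, and names are illustrative assumptions rather than the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, z, meas_std=0.05, motion_std=0.03):
    """One predict/update/resample cycle over an object pose (x, y, yaw).
    z is a noisy pose measurement from the vision system."""
    # Predict: diffuse particles to model process/motion noise.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: Gaussian measurement likelihood (min subtracted for stability).
    d2 = ((particles - z) ** 2).sum(axis=1)
    weights = weights * np.exp(-0.5 * (d2 - d2.min()) / meas_std**2)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / (weights ** 2).sum() < len(weights) / 2:
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```

Feeding the filtered pose, rather than the raw measurement, into the position-based control law is what smooths out the spurious jumps caused by noisy detections.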
11.
Ghasemi Ahmad Li Pengcheng Xie Wen-Fang 《International Journal of Control, Automation and Systems》2020,18(5):1324-1334
In this paper, an adaptive switch image-based visual servoing (IBVS) controller for industrial robots is presented. The proposed control...
12.
13.
14.
Yu Zhou Bradley J. Nelson Barmeshwar Vikramaditya 《Journal of Intelligent and Robotic Systems》2000,28(3):259-276
For microassembly tasks, uncertainty exists at many levels. Single static sensing configurations are therefore unable to provide feedback with the necessary range and resolution for accomplishing many desired tasks. In this paper we present experimental results that investigate the integration of two disparate sensing modalities, force and vision, for sensor-based microassembly. By integrating these sensing modes, we are able to provide feedback in a task-oriented frame of reference over a broad range of motion with extremely high precision. An optical microscope is used to provide visual feedback down to micron resolutions, while an optical beam deflection technique (based on a modified atomic force microscope) is used to provide nanonewton-level force feedback or nanometric-level position feedback. Visually servoed motion at speeds of up to 2 mm/s with a repeatability of 0.17 μm is achieved with vision alone. The optical beam deflection sensor complements the visual feedback by providing positional feedback with a repeatability of a few nanometers. Based on the principles of optical beam deflection, this is equivalent to force measurements on the order of a nanonewton. The value of integrating these two disparate sensing modalities is demonstrated during controlled micropart impact experiments. These results demonstrate micropart approach velocities of 80 μm/s with impact forces of 9 nN and final contact forces of 2 nN. Within our microassembly system this level of performance cannot be achieved using either sensing modality alone. This research will aid in the development of complex hybrid MEMS devices in two ways: by enabling the microassembly of more complex MEMS prototypes, and by supporting the development of automatic assembly machines for assembling and packaging future MEMS devices that require increasingly complex assembly strategies.
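The coarse-to-fine handoff between the two sensing modalities can be caricatured as a mode switch: vision drives the approach until the beam-deflection sensor detects contact, after which a force loop regulates the nanonewton-level contact force. All thresholds, gains, and names below are illustrative assumptions, not the paper's controller:

```python
def micro_assembly_command(target_err_um, contact_force_nN,
                           contact_threshold_nN=1.0, desired_force_nN=2.0,
                           k_v=0.5, k_f=10.0):
    """Return (mode, command): a vision-driven approach command before
    contact, and a force-regulation command once the measured force
    exceeds the contact threshold. Gains and units are illustrative."""
    if abs(contact_force_nN) < contact_threshold_nN:
        return ('vision', -k_v * target_err_um)                       # approach
    return ('force', -k_f * (contact_force_nN - desired_force_nN))    # contact
```

The point of the switch is that neither modality alone spans the task: vision lacks nanonewton resolution, and the force sensor says nothing until contact.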
15.
16.
A novel approach to visual servoing is presented, which takes advantage of the structure of the Lie algebra of affine transformations. The aim of this project is to use feedback from a visual sensor to guide a robot arm to a target position. The target position is learned using the principle of teaching by showing, in which the supervisor places the robot in the correct target position and the system captures the necessary information to be able to return to that position. The sensor is placed in the end effector of the robot, the camera-in-hand approach, and thus provides direct feedback of the robot motion relative to the target scene via observed transformations of the scene. These scene transformations are obtained by measuring the affine deformations of a target planar contour (under the weak perspective assumption), captured by use of an active contour, or snake. Deformations of the snake are constrained using the Lie groups of affine and projective transformations. Properties of the Lie algebra of affine transformations are exploited to provide a novel method for integrating observed deformations of the target contour. These can be compensated with appropriate robot motion using a non-linear control structure. The local differential representation of contour deformations is extended to allow accurate integration of an extended series of small perturbations. This differs from existing approaches by virtue of the properties of the Lie algebra representation, which implicitly embeds knowledge of the three-dimensional world within a two-dimensional image-based system. These techniques have been implemented using a video camera to control a 5 DoF robot arm. Experiments with this implementation are presented, together with a discussion of the results.
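The Lie-algebra integration idea, accumulating small deformations additively in the algebra and exponentiating once, can be sketched with matrix logarithms and exponentials. This is a generic illustration of the principle, not the paper's snake-based implementation, and the function name is an assumption:

```python
import numpy as np
from scipy.linalg import expm, logm

def integrate_affine_perturbations(small_transforms):
    """Accumulate a series of small affine deformations by summing their
    Lie-algebra representatives (matrix logarithms) and exponentiating
    once. For small, nearly commuting perturbations this is a stable way
    to integrate an extended series of incremental estimates."""
    algebra_sum = sum(logm(np.asarray(A, dtype=float))
                      for A in small_transforms)
    return expm(algebra_sum).real
```

For perturbations that commute exactly, such as successive rotations about the same axis, the composition is recovered exactly; for nearly commuting small perturbations the error is second order in their size.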
17.
Stable Visual Servoing Through Hybrid Switched-System Control
Visual servoing methods are commonly classified as image-based or position-based, depending on whether image features or the camera position define the signal error in the feedback loop of the control law. Choosing one method over the other gives asymptotic stability of the chosen error but surrenders control over the other. This can lead to system failure if feature points are lost or the robot moves to the end of its reachable space. We present a hybrid switched-system visual servo method that utilizes both image-based and position-based control laws. We prove the stability of a specific, state-based switching scheme and present simulated and experimental results.
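A state-based switching rule of the kind described can be sketched as follows: IBVS regulates image error and so is the natural choice when features drift toward the image border, while PBVS regulates camera pose and so is selected when the arm nears the edge of its reachable space. The thresholds, priority, and names here are illustrative assumptions; the paper proves stability only for its specific switching scheme:

```python
import numpy as np

def choose_controller(features, img_size, cam_pos, reach, margin=0.05):
    """Pick IBVS or PBVS from the current state: PBVS when the camera
    approaches the workspace boundary, IBVS otherwise (including when
    features near the image border, since IBVS drives image error)."""
    w, h = img_size
    near_reach_limit = np.linalg.norm(cam_pos) > (1.0 - margin) * reach
    if near_reach_limit:
        return 'PBVS'
    return 'IBVS'
```

The stability question the paper addresses is precisely that such switching, if done naively, can destabilize a system even when both individual laws are stable.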
18.
19.
Rahul Singh Richard M. Voyles David Littau Nikolaos P. Papanikolopoulos 《Autonomous Robots》2001,10(3):317-338
We present an approach for controlling robotic interactions with objects, using synthetic images generated by morphing shapes. In particular, we address the problem of positioning an eye-in-hand robotic system with respect to objects in the workspace for grasping and manipulation. In our formulation, the grasp position (and consequently the approach trajectory of the manipulator) varies with each object. The proposed solution to the problem consists of two parts. First, based on a model-based object recognition framework, images of the objects taken at the desired grasp pose are stored in a database. The recognition and identification of the grasp position for an unknown input object (selected from the family of recognizable objects) occurs by morphing its contour to the templates in the database and using the virtual energy spent during the morph as a dissimilarity measure. In the second step, the images synthesized during the morph are used to guide the eye-in-hand system and execute the grasp. The proposed method requires minimal calibration of the system. Furthermore, it conjoins techniques from shape recognition, computer graphics, and vision-based robot control in a unified engineering framework. Potential applications range from recognition and positioning with respect to partially occluded or deformable objects to planning robotic grasping based on human demonstration.
20.
This paper presents a framework for achieving real-time augmented reality applications. We propose a framework based on the visual servoing approach well known in robotics, treating pose (or viewpoint) computation as a problem analogous to visual servoing. This allows us to take advantage of all the research carried out in that domain in the past. The proposed method features simplicity, accuracy, efficiency, and scalability with respect to both the camera model and the features extracted from the image. We illustrate the efficiency of our approach on augmented reality applications with various real image sequences.
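The "pose computation as visual servoing" view amounts to iterating a servo-style update on a virtual camera until the reprojection error of known model points vanishes. A 2-D toy version (planar pose (tx, ty, theta); the function name, gain, and point set are illustrative assumptions, not the paper's formulation) looks like this:

```python
import numpy as np

def pose_from_points(model, observed, iters=50, lam=0.5):
    """Virtual visual servoing sketch: refine a 2-D pose (tx, ty, theta)
    by applying a damped Gauss-Newton / servo update that drives the
    reprojection error of the model points to zero."""
    tx, ty, th = 0.0, 0.0, 0.0
    for _ in range(iters):
        c, s = np.cos(th), np.sin(th)
        R = np.array([[c, -s], [s, c]])
        proj = model @ R.T + [tx, ty]
        e = (proj - observed).ravel()
        # Jacobian of each projected point w.r.t. (tx, ty, theta).
        J = np.vstack([[[1.0, 0.0, -s * x - c * y],
                        [0.0, 1.0,  c * x - s * y]] for x, y in model])
        step = -lam * np.linalg.pinv(J) @ e
        tx, ty, th = tx + step[0], ty + step[1], th + step[2]
    return tx, ty, th
```

The same control-law machinery that moves a real camera in servoing here moves a virtual one, which is why results from the servoing literature transfer directly.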