Similar Documents
20 similar documents found.
1.
《Advanced Robotics》2013,27(8-9):947-967
Abstract

A wide field of view is required for many robotic vision tasks. Such an aperture may be acquired by a fisheye camera, which provides a full image compared to catadioptric visual sensors and does not increase the size or fragility of the imaging system with respect to perspective cameras. While a unified model exists for all central catadioptric systems, many different models, approximating the radial distortions, exist for fisheye cameras. This paper shows that the unified projection model proposed for central catadioptric cameras is also valid for fisheye cameras in the context of robotic applications. The model consists of a projection onto a virtual unit sphere followed by a perspective projection onto an image plane, and it is shown to be equivalent to almost all existing fisheye models. Calibration with four cameras and partial Euclidean reconstruction are carried out using this model and lead to convincing results. Finally, an application to a mobile robot navigation task is proposed and correctly executed along a 200-m trajectory.
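The unified model mentioned above admits a compact closed form. As a rough illustration (not code from the paper), the sketch below projects a 3-D point onto the virtual unit sphere and then perspectively onto the image plane; the single parameter `xi` (an assumed name here) encodes the distance between the sphere centre and the perspective projection centre, with `xi = 0` reducing to a pinhole camera.

```python
import numpy as np

def unified_projection(X, xi, K):
    """Project a 3-D point X (in the camera frame) with the unified sphere model.

    xi : sphere/mirror parameter (xi = 0 gives the pinhole model).
    K  : 3x3 matrix of intrinsic parameters (focal lengths, principal point).
    """
    # Step 1: central projection onto the virtual unit sphere.
    Xs = X / np.linalg.norm(X)
    # Step 2: perspective projection from a point located xi above the sphere centre.
    m = np.array([Xs[0] / (Xs[2] + xi), Xs[1] / (Xs[2] + xi), 1.0])
    # Step 3: apply the intrinsic parameters to obtain pixel coordinates.
    p = K @ m
    return p[:2]

K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])
print(unified_projection(np.array([0.5, 0.2, 2.0]), xi=0.8, K=K))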

2.
2-D visual servoing consists in using data provided by a vision sensor to control the motions of a dynamic system. Most visual servoing approaches have relied on geometric features that must be tracked and matched in the image acquired by the camera. Recent works have highlighted the interest of taking into account the photometric information of the entire image. This approach was originally developed for images from perspective cameras; in this paper, we propose to extend the technique to central cameras, which makes it applicable to catadioptric cameras and other wide-field-of-view cameras. Several experiments have been carried out successfully with a fisheye camera to control a 6-degrees-of-freedom robot and with a catadioptric camera for a mobile robot navigation task.
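For reference, a classical intensity-based (photometric) servo loop can be summarised by the control law v = -λ L⁺ (I - I*), where the feature vector is simply the stacked pixel intensities. The sketch below is a minimal, hypothetical illustration of that update with a precomputed interaction matrix `L_I`; the paper's contribution is the extension of such a scheme to central (catadioptric/fisheye) projection models, which is not detailed here.

```python
import numpy as np

def photometric_control(I, I_star, L_I, lam=0.5):
    """One iteration of an intensity-based visual servo.

    I, I_star : current and desired images, flattened to vectors.
    L_I       : (N x 6) interaction matrix linking intensity variation
                to the 6-D camera velocity (assumed given here).
    Returns the camera velocity screw [vx, vy, vz, wx, wy, wz].
    """
    error = (I - I_star).ravel()              # photometric error
    v = -lam * np.linalg.pinv(L_I) @ error    # gradient-like descent on the error
    return v
```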

3.
《Advanced Robotics》2013,27(8-9):1035-1054
Abstract

In this paper we present an image predictive controller for an eye-in-hand servoing architecture composed of a 6-d.o.f. robot and a camera mounted on the gripper. A novel architecture integrating a reference trajectory and image prediction is proposed for predictive control of visual servoing systems. In the proposed method, a new predictor is developed from the relation between the camera velocity and the time variation of the visual features given by the interaction matrix. This image-based predictor generates the future trajectories of a visual feature ensemble when past and future camera velocities are known. In addition, a reference trajectory is introduced to define how the desired features are reached over the prediction horizon, starting from the current features. The advantages of the new architecture are the reference trajectory, used here for the first time in the predictive-control sense, and the predictor based on a local model. Simulations demonstrate the efficiency of the proposed architecture in controlling a 6-d.o.f. robot manipulator.
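The predictor described above rests on the first-order relation ṡ = L(s, Z) v between the camera velocity v and the feature velocity ṡ. Below is a minimal sketch of such a local-model predictor, assuming a standard point-feature interaction matrix with known (or estimated) depth Z; the exact predictor and reference-trajectory generation used in the paper may differ.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical interaction matrix of a normalised image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def predict_features(s0, Z, velocities, dt):
    """Propagate a point feature over the prediction horizon.

    s0         : initial feature (x, y) in normalised coordinates.
    velocities : sequence of future camera velocity screws (6-vectors).
    """
    s, trajectory = np.array(s0, dtype=float), []
    for v in velocities:
        L = interaction_matrix(s[0], s[1], Z)
        s = s + dt * L @ np.asarray(v)   # Euler integration of s_dot = L v
        trajectory.append(s.copy())
    return np.array(trajectory)
```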

4.
When navigating in an unknown environment for the first time, a natural behavior consists of memorizing some key views along the performed path, in order to use these references as checkpoints for a future navigation mission. The navigation framework for wheeled mobile robots presented in this paper is based on this assumption. During a human-guided learning step, the robot performs paths which are sampled and stored as a set of ordered key images acquired by an embedded camera. The set of these visual paths is topologically organized and provides a visual memory of the environment. Given an image of one of the visual paths as a target, the robot navigation mission is defined as a concatenation of visual path subsets, called a visual route. When running autonomously, the robot is controlled by a visual servoing law adapted to its nonholonomic constraint. Based on the regulation of successive homographies, this control guides the robot along the reference visual route without explicitly planning any trajectory. The proposed framework has been designed for the entire class of central catadioptric cameras (including conventional cameras). It has been validated on two architectures: in the first, the algorithms run on dedicated hardware and the robot is equipped with a standard perspective camera; in the second, they run on a standard PC and an omnidirectional camera is used.
Youcef Mezouar
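As a rough illustration of the "visual route" idea (a hypothetical sketch, not taken from the paper), the visual memory can be stored as a graph whose nodes are key images and whose edges connect consecutive images of the learned paths; a navigation mission then reduces to a shortest-path query between the node closest to the current view and the target key image.

```python
from collections import deque

def build_visual_memory(visual_paths):
    """visual_paths: list of learned paths, each a list of key-image ids."""
    graph = {}
    for path in visual_paths:
        for a, b in zip(path, path[1:]):
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set()).add(a)
    return graph

def visual_route(graph, current, target):
    """Breadth-first search returning the ordered key images to follow."""
    queue, parents = deque([current]), {current: None}
    while queue:
        node = queue.popleft()
        if node == target:
            route = []
            while node is not None:
                route.append(node)
                node = parents[node]
            return route[::-1]
        for nxt in graph.get(node, ()):
            if nxt not in parents:
                parents[nxt] = node
                queue.append(nxt)
    return None
```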

5.
王婷婷  刘国栋 《控制与决策》2013,28(7):1018-1022
Using the depth information and the pixel coordinates of the feature points as the visual features, a quasi-min-max model predictive control (MPC) method for visual servoing is proposed. Compared with traditional methods, the robot control signal is obtained by solving online a convex optimization problem with linear matrix inequality constraints, whose feasible solution guarantees closed-loop asymptotic stability of the system. The method handles system constraints easily and can effectively plan the image trajectories of the feature points while respecting the mechanical limits of the actuators; introducing the depth feature markedly improves the camera's 3-D trajectory. Simulation results on a 6-degrees-of-freedom industrial robot hand-eye system verify the effectiveness of the proposed algorithm.

6.
This paper deals with the problem of position-based visual servoing in a multiarm robotic cell equipped with a hybrid eye-in-hand/eye-to-hand multicamera system. The proposed approach is based on the real-time estimation of the pose of a target object by using the extended Kalman filter. The data provided by all the cameras are selected by a suitable algorithm on the basis of the prediction of the object self-occlusions, as well as of the mutual occlusions caused by the robot links and tools. Only an optimal subset of image features is considered for feature extraction, thus ensuring high estimation accuracy with a computational cost independent of the number of cameras. A salient feature of the paper is the implementation of the proposed approach to the case of a robotic cell composed of two industrial robot manipulators. Two different case studies are presented to test the effectiveness of the hybrid camera configuration and the robustness of the visual servoing algorithm with respect to the occurrence of occlusions.

7.
Global Path-Planning for Constrained and Optimal Visual Servoing
Visual servoing consists of steering a robot from an initial to a desired location by exploiting the information provided by visual sensors. This paper deals with the problem of realizing visual servoing for robot manipulators while taking into account constraints such as visibility, workspace (i.e., obstacle avoidance) and joint constraints, and minimizing a cost function such as spanned image area, trajectory length or curvature. To solve this problem, a new path-planning scheme is proposed. First, a robust object reconstruction is computed from visual measurements, which allows feasible image trajectories to be obtained. Second, the rotation path is parameterized through an extension of the Euler parameters that yields an equivalent expression of the rotation matrix as a quadratic function of unconstrained variables, hence largely simplifying standard parameterizations involving transcendental functions. Polynomials of arbitrary degree are then used to complete the parameterization and to formulate the desired constraints and costs as a general optimization problem. The optimal trajectory is followed by tracking the image trajectory with an IBVS controller combined with repulsive potential fields in order to fulfill the constraints in real conditions.
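The rotation parameterization mentioned above is related to the classical Euler (quaternion) parameters, for which the rotation matrix is a quadratic function of the parameters. A generic sketch is shown below; the paper's extension to unconstrained variables is only approximated here by normalising an arbitrary 4-vector, so the exact formulation may differ.

```python
import numpy as np

def rotation_from_euler_parameters(q):
    """Rotation matrix as a quadratic function of (unnormalised) Euler parameters.

    q = (q0, q1, q2, q3); dividing by |q|^2 removes the unit-norm constraint,
    so any nonzero 4-vector yields a valid rotation matrix.
    """
    q0, qv = q[0], np.asarray(q[1:], dtype=float)
    skew = np.array([[0.0, -qv[2], qv[1]],
                     [qv[2], 0.0, -qv[0]],
                     [-qv[1], qv[0], 0.0]])
    R = (q0**2 - qv @ qv) * np.eye(3) + 2.0 * np.outer(qv, qv) + 2.0 * q0 * skew
    return R / (q0**2 + qv @ qv)

# Example: a 90-degree rotation about the z-axis.
print(np.round(rotation_from_euler_parameters([1.0, 0.0, 0.0, 1.0]), 3))
```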

8.
This paper presents a real-time architecture for visual servoing of robot manipulators using nonlinear predictive control. In order to increase the robustness of the control algorithm, image moments are chosen as the visual features describing the objects in the image. A visual predictive architecture is designed to solve tasks assigned to robot manipulators with an eye-in-hand configuration. The proposed architecture is implemented so as to extend the capabilities of a 6-d.o.f. robot manipulator. The results of different experiments conducted with two types of image-moment-based controllers (proportional, and predictive with reference trajectory) are presented and discussed.
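For reference, raw and centred image moments of the kind used as visual features here can be computed directly from a binary or grey-level image, as in the hypothetical helper below; which moment-based features are actually fed to the predictive controller is specific to the paper.

```python
import numpy as np

def image_moment(img, i, j):
    """Raw moment m_ij = sum_x sum_y x^i y^j I(x, y)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.sum((xs ** i) * (ys ** j) * img)

def moment_features(img):
    """A few classical moment-based features: area, centroid, orientation."""
    m00 = image_moment(img, 0, 0)
    xg, yg = image_moment(img, 1, 0) / m00, image_moment(img, 0, 1) / m00
    mu20 = image_moment(img, 2, 0) / m00 - xg**2      # normalised centred second-order moments
    mu02 = image_moment(img, 0, 2) / m00 - yg**2
    mu11 = image_moment(img, 1, 1) / m00 - xg * yg
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)   # object orientation
    return m00, xg, yg, theta
```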

9.
In order to study the basic properties of robot visual servoing thoroughly within a unified theoretical framework, this paper establishes a generalized visual servoing system model based on the task-function approach. On the basis of this model, the dynamic behavior of position-based visual servoing (PBVS) and image-based visual servoing (IBVS) in Cartesian space and in image space is studied in detail. Simulation results show that, within the same comparison framework, the PBVS method is also robust to camera calibration errors. Although the two methods behave similarly in terms of the stability and convergence of the dynamic system, their dynamic performance in Cartesian space and image space differs greatly. For PBVS, the Cartesian trajectory follows the shortest path, but the corresponding image trajectory is uncontrolled and features may leave the field of view; for IBVS, the image-space trajectory follows the shortest path, but the lack of direct control in Cartesian space can lead to Cartesian trajectory deviations such as camera retreat when large rotations must be servoed.
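For reference, the two control laws being compared can be sketched as follows (a generic illustration, not the paper's generalized task-function model): PBVS regulates a Cartesian pose error made of a translation and an axis-angle rotation error, whereas IBVS regulates the image-feature error through the pseudo-inverse of the interaction matrix.

```python
import numpy as np

def pbvs_velocity(t_err, axis_angle_err, lam=0.5):
    """PBVS: exponential decrease of the Cartesian pose error (t, theta*u)."""
    return -lam * np.concatenate([t_err, axis_angle_err])

def ibvs_velocity(s, s_star, L, lam=0.5):
    """IBVS: exponential decrease of the image-feature error s - s*."""
    return -lam * np.linalg.pinv(L) @ (s - s_star)
```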

10.
Objective: To address the two problems that image adaptation (retargeting) methods based on the fisheye transformation have difficulty solving, namely focus detection and multi-focus conflict, an image adaptation method based on an improved fisheye transformation is proposed. Method: Based on the energy of the source image, all optimal high-energy lines are computed and grouped into a high-energy line set, which constitutes the high-energy part of the source image, i.e., its focus region; the fisheye transformation is then applied per energy line, rather than per image region as in the traditional approach, to obtain the target image. Results: The transformation mode of the fisheye technique is changed and applied to image adaptation. Experiments show that the method solves the problems of fisheye-transformation-based image adaptation; the resulting target images have good visual quality, with user satisfaction scores close to 4 points. The algorithm is also fast, requiring only about 6 s to reduce the width of a 512×384 source image by half. Conclusion: On the one hand, the method retains the advantage of fisheye-transformation-based image adaptation, highlighting the important parts of the image without discarding the secondary parts; on the other hand, it solves the focus-detection and multi-focus-conflict problems of such methods. The implementation results and subjective user evaluations show that the method is an effective and feasible image adaptation approach.
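A hypothetical sketch of the high-energy-line idea is given below, assuming a seam-carving-style dynamic program: each "line" is an 8-connected top-to-bottom path through the energy map, and the maximal-energy path is taken as (part of) the focus region. The actual energy definition and fisheye warping of the paper are not reproduced.

```python
import numpy as np

def max_energy_line(energy):
    """Return the column indices of a maximal-energy 8-connected vertical path."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    back = np.zeros((h, w), dtype=int)
    for r in range(1, h):
        for c in range(w):
            lo, hi = max(c - 1, 0), min(c + 1, w - 1)
            prev = cost[r - 1, lo:hi + 1]
            k = int(np.argmax(prev))
            back[r, c] = lo + k
            cost[r, c] += prev[k]
    # Backtrack from the best bottom-row cell.
    line, c = [], int(np.argmax(cost[-1]))
    for r in range(h - 1, -1, -1):
        line.append(c)
        c = back[r, c]
    return line[::-1]
```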

11.
To overcome the complexity and poor generality of marking, extracting and matching geometric image features in traditional visual servoing methods, this paper proposes an image-moment-based visual servoing method for a robot with four degrees of freedom (4 DOF). First, the nonlinear incremental mapping between the image moments and the robot pose in an eye-in-hand system is established, providing the theoretical basis for using image moments in robot visual servoing control. Then, without calibrating the camera or the hand-eye relationship, a visual servoing control scheme based on image moments is designed by exploiting the nonlinear mapping capability of a back-propagation (BP) neural network. Finally, visual tracking control is performed with the trained network. Experimental results show that the proposed algorithm achieves a tracking accuracy of 0.5 mm in position and 0.5° in orientation, verifying its effectiveness and good servoing performance.
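A rough sketch of the learning step, using a generic multilayer perceptron as a stand-in for the BP network described above (the actual features, network size, training data and protocol are specific to the paper and the data below are purely placeholder): the network is trained on recorded pairs of image-moment increments and pose increments, then used to map a desired feature change to a 4-DOF pose correction.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder training data: rows of image-moment increments and the
# corresponding 4-DOF pose increments recorded during exploratory motions.
delta_moments = np.random.randn(500, 6)
delta_pose = np.random.randn(500, 4)

net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000)
net.fit(delta_moments, delta_pose)

# At servo time: desired change of the moment features -> pose correction.
desired_delta_s = np.zeros((1, 6))
pose_correction = net.predict(desired_delta_s)
```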

12.
This paper presents a novel approach for image-based visual servoing of a robot manipulator with an eye-in-hand camera when the camera parameters are not calibrated and the 3-D coordinates of the features are not known. Both point and line features are considered. The paper extends the concept of the depth-independent interaction (or image Jacobian) matrix, developed in earlier work for visual servoing using point features and fixed cameras, to the problem with eye-in-hand cameras and point and line features. By using the depth-independent interaction matrix, it is possible to linearly parameterize the closed-loop dynamics of the system by the unknown camera parameters and the unknown coordinates of the features. A new algorithm is developed to estimate the unknown parameters online by combining the Slotine–Li method with the idea of structure from motion in computer vision. By minimizing the errors between the real and estimated projections of the features on multiple images captured during motion of the robot, this adaptive algorithm guarantees convergence of the estimated parameters to the real values up to a scale. On the basis of the nonlinear robot dynamics, asymptotic convergence of the image errors is proved using Lyapunov theory. Experiments have been conducted to demonstrate the performance of the proposed controller.

13.
This paper presents a novel approach for image-based visual servoing, extending existing works that use the trifocal tensor (TT) as the source of image measurements. In the proposed approach, singularities typically encountered in this kind of method are avoided. A formulation of the TT-based control problem with a virtual target, resulting from the vertical translation of the real target, allows us to design a single controller able to regulate the robot pose towards the desired configuration without local minima. In this context, we introduce a super-twisting control scheme that guarantees continuous control inputs while exhibiting strong robustness properties. Our approach is valid for perspective cameras as well as catadioptric systems obeying the central camera model. All these contributions are supported by convincing numerical simulations and experiments in a popular dynamic robot simulator.
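The super-twisting algorithm referred to above is a standard second-order sliding-mode scheme that produces a continuous control signal. A generic single-input sketch (not the paper's TT-based controller) is:

```python
import numpy as np

def super_twisting_step(s, z, k1, k2, dt):
    """One integration step of the super-twisting controller.

    s : sliding variable (here, a scalar error signal).
    z : internal integrator state.
    Returns the continuous control input u and the updated state z.
    """
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + z
    z = z - k2 * np.sign(s) * dt        # integral of the discontinuous term
    return u, z
```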

14.
In this paper, we present a cooperative passer-by tracking system combining fixed, wall-mounted cameras and a mobile robot. The proposed system fuses visual detections from the wall-mounted cameras and detections from the mobile robot, in a centralized manner, employing a "tracking-by-detection" approach within a particle filtering strategy. This tracking information is then used to endow the robot with a passer-by avoidance ability that facilitates its navigation in crowds during the execution of a person-following mission. The multi-person tracker's ability to track passers-by near the robot distinctively is demonstrated through qualitative and quantitative off-line experiments. Finally, the designed perceptual modalities are deployed on our robotic platform, controlling its actuators via visual servoing techniques and free-space diagrams in the vicinity of the robot, to illustrate the robot's ability to follow a given target person in crowded areas.
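A minimal sketch of the tracking-by-detection idea within a particle filter is given below, assuming a single target and a constant-position motion model; the paper's multi-person, multi-sensor formulation is considerably richer.

```python
import numpy as np

def particle_filter_step(particles, weights, detections, motion_std=0.1, meas_std=0.3):
    """One predict/update/resample cycle for a 2-D position tracker.

    particles  : (N, 2) array of hypothesised target positions.
    detections : (M, 2) array of fused detections from cameras and robot.
    """
    n = len(particles)
    # Predict: diffuse particles with the motion model.
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    # Update: weight each particle by its likelihood under the nearest detection.
    if len(detections):
        d = np.linalg.norm(particles[:, None, :] - detections[None, :, :], axis=2).min(axis=1)
        weights = weights * np.exp(-0.5 * (d / meas_std) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = np.random.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```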

15.
Vision-based redundant manipulator control with a neural network based learning strategy is discussed in this paper. The manipulator is visually controlled with stereo vision in an eye-to-hand configuration. A novel Kohonen self-organizing map (KSOM) based visual servoing scheme is proposed for a redundant manipulator with 7 degrees of freedom (DOF). The inverse kinematic relationship of the manipulator is learned using a Kohonen self-organizing map. This learned map is shown to be an approximate estimate of the inverse Jacobian, which can then be used in conjunction with a proportional controller to achieve closed-loop servoing in real time. It is shown through Lyapunov stability analysis that the proposed learning-based servoing scheme ensures global stability. A generalized weight update law is proposed for KSOM-based inverse kinematic control to resolve the redundancy during the learning phase. Unlike existing visual servoing schemes, the proposed KSOM-based scheme eliminates the computation of the pseudo-inverse of the Jacobian matrix in real time, which makes the algorithm computationally more efficient. The proposed scheme has been implemented on a 7-DOF PowerCube robot manipulator with visual feedback from two cameras.

16.
This paper presents a vision-based navigation strategy for a vertical take-off and landing (VTOL) unmanned aerial vehicle (UAV) using a single embedded camera observing natural landmarks. In the proposed approach, images of the environment are first sampled, stored and organized as a set of ordered key images (visual path) which provides a visual memory of the environment. The robot navigation task is then defined as a concatenation of visual path subsets (called visual route) linking the current observed image and a target image belonging to the visual memory. The UAV is controlled to reach each image of the visual route using a vision-based control law adapted to its dynamic model and without explicitly planning any trajectory. This framework is largely substantiated by experiments with an X4-flyer equipped with a fisheye camera.

17.
This paper addresses the challenges of choosing proper image features for planar symmetric-shape objects and of designing a visual servoing controller that enhances tracking performance in image-based visual servoing (IBVS). Six image moments are chosen as the image features, and the analytical image interaction matrix related to these features is derived. A controller is designed to efficiently increase the robustness of the visual servoing system. Experimental results on a 6-DOF robot visual servoing system are provided to illustrate the effectiveness of the proposed method.

18.
19.
This paper studies uncalibrated visual servoing of robots based on intelligent algorithms and proposes a new uncalibrated visual immune control method based on least-squares support vector regression (LSSVR). LSSVR is used to learn the complex nonlinear relationship between robot pose changes and the observed image feature changes, with the LSSVR parameters determined by an adaptive immune algorithm combined with 5-fold cross-validation. On this basis, a visual controller is designed using immune control principles. Experimental results for 4-DOF spatial visual positioning with a 6-DOF industrial robot demonstrate the effectiveness of the method.
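For reference, a least-squares SVR with an RBF kernel reduces to solving a single linear system, as in the bare-bones sketch below (the adaptive-immune parameter search and the immune controller of the paper are not shown; `gamma` and `sigma` are assumed hyperparameter names).

```python
import numpy as np

def lssvr_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVR dual system for a single output dimension."""
    n = len(X)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    K = np.exp(-d2 / (2.0 * sigma**2))                    # RBF kernel matrix
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                                # bias b, multipliers alpha

def lssvr_predict(X_train, x, alpha, b, sigma=1.0):
    k = np.exp(-np.sum((X_train - x) ** 2, axis=1) / (2.0 * sigma**2))
    return k @ alpha + b
```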

20.
《Advanced Robotics》2013,27(7-8):711-734
In robotic applications, pick-and-place tasks are among the most fundamental. For a robot manipulator, recognizing its working environment is also one of the most important requirements for performing intelligent tasks, since this ability enables it to work in a changing environment. This paper presents a new control strategy for robot manipulators which uses visual information to direct the manipulator in its workspace and to pick up an object of known shape but arbitrary position and orientation. During the search for an object to be picked up, vision-based closed-loop feedback control, referred to as visual servoing, is performed to control the motion of the manipulator hand. The system employs a genetic algorithm (GA) and a pattern-matching technique to explore the search space and exploit the best solutions found. The control strategy uses the results of GA pattern matching at every step of the GA evolution to direct the manipulator towards the target object; we name this control strategy step-GA-evolution. This control method can be applied to real-time visual servoing of the manipulator and solves its path-planning problem in real time, i.e., it allows the manipulator to adapt the execution of the task to visual information during execution. Simulations have been performed with a two-link planar manipulator and three image models in order to determine which model is best suited for real-time visual servoing, and the results show the effectiveness of the control method.

