Similar Documents
20 similar documents found (search time: 46 ms)
1.
In a smart space, the main tasks of a home service robot are to help people search for, locate, and deliver objects, and visual servoing is an effective means of accomplishing them. A visual servoing system for a home service robot consisting of a mobile robot, a manipulator, and a camera was built; the kinematic model of the system was established, and the intrinsic and extrinsic parameters of the vision system mounted on the manipulator's end-effector were calibrated. The pose of the target object is obtained by decomposing the homography of the world plane, and a position-based visual servoing control law is designed from the recovered pose parameters. Experimental results show that designing the control law via planar homography decomposition accomplishes visual servoing tasks on household objects simply and effectively.
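The position-based control law mentioned in this abstract can be sketched in a few lines. This is an illustrative sketch only, not the authors' implementation: the gain `lam` and the axis-angle error representation are assumptions, and the pose error is taken as already recovered (e.g., from a planar homography decomposition).

```python
# Sketch of a position-based visual servoing (PBVS) law: once the target's
# pose error is recovered (e.g., from a planar homography decomposition),
# command a camera velocity proportional to that error.
# Illustrative only; lam (gain), t_err, and theta_u are assumed quantities.

def pbvs_law(t_err, theta_u, lam=0.5):
    """Return (linear, angular) velocity commands from the pose error.

    t_err   : translation error [tx, ty, tz] (metres)
    theta_u : axis-angle rotation error theta*u, [rx, ry, rz] (radians)
    lam     : proportional servo gain
    """
    v = [-lam * t for t in t_err]    # drive the translation error to zero
    w = [-lam * r for r in theta_u]  # drive the rotation error to zero
    return v, w

v, w = pbvs_law([0.2, -0.1, 0.4], [0.0, 0.0, 0.1])
print(v, w)  # → [-0.1, 0.05, -0.2] [-0.0, -0.0, -0.05]
```

Both errors decay exponentially under this law when the pose estimate is exact, which is what gives PBVS its predictable straight-line Cartesian trajectory.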

2.
Visual servoing approaches navigate a robot to a desired pose with respect to a given object using image measurements. As a result, these approaches have several applications in manipulation, navigation and inspection. However, existing visual servoing approaches are instance specific, that is, they control camera motion between two views of the same object. In this paper, we present a framework for visual servoing to a novel object instance. We further employ our framework for the autonomous inspection of vehicles using Micro Aerial Vehicles (MAVs), which is vital for day‐to‐day maintenance, damage assessment, and merchandising a vehicle. This visual inspection task comprises the MAV visiting the essential parts of the vehicle, for example, wheels, lights, and so forth, to get a closer look at the damage incurred. Existing methods for autonomous inspection cannot be extended to vehicles for the following reasons: First, several existing methods require a 3D model of the structure, which is not available for every vehicle. Second, existing methods require an expensive depth sensor for localization and path planning. Third, current approaches do not account for the semantic understanding of the vehicle, which is essential for identifying parts. Our instance-invariant visual servoing framework is capable of autonomously navigating to every essential part of a vehicle for inspection and can be initialized from any random pose. To the best of our knowledge, this is the first approach demonstrating fully autonomous visual inspection of vehicles using MAVs. We have validated the efficacy of our approach through a series of experiments in simulation and outdoor scenarios.

3.
Visual servoing is a control method that manipulates the motion of a robot using visual information, aiming to realize "working while watching." However, visual servoing toward a moving target with hand–eye cameras fixed at the hand is inevitably affected by the dynamical oscillation of the hand. To overcome this defect of the fixed hand–eye camera system, an eye-vergence system has been put forward, in which the pose of the cameras can be rotated to observe the target object. The visual servoing controllers of the hand and the eye-vergence system are installed independently, so that the target object can be kept at the center of the camera images through the eye-vergence function. In this research, a genetic algorithm called "Real-Time Multi-step GA" (RM-GA) is used as the pose tracking method, solving the on-line optimization problem of 3D visual servoing. The performance of real-time object tracking with the eye-vergence system and the RM-GA method has been examined, and the pose tracking accuracy has been verified.

4.
Visual servoing is a powerful approach to enlarge the applications of robotic systems by incorporating visual information into the control system. On the other hand, teleoperation – the use of machines in a remote way – is finding an increasing number of applications in many domains. This paper presents a remote visual servoing system using only partial camera calibration and exploiting the high bandwidth of Internet2 to stream video information. The underlying control scheme is based on the image-based philosophy for direct visual servoing – computing the torque inputs applied to the robot from error signals defined in the image plane – and invoking a velocity field strategy for guidance. The novelty of this paper is a remote visual servoing system with the following features: (1) full camera calibration is unnecessary, (2) direct visual servoing does not neglect the robot's nonlinear dynamics, and (3) a novel velocity field control approach is utilized. Experiments carried out between two laboratories demonstrated the effectiveness of the application. Work partially supported by CONACyT grant 45826 and CUDI.

5.
2D visual servoing consists in using data provided by a vision sensor to control the motions of a dynamic system. Most visual servoing approaches rely on geometric features that have to be tracked and matched in the image acquired by the camera. Recent works have highlighted the interest of taking into account the photometric information of the entire image. This approach was originally developed for images from perspective cameras. In this paper, we propose to extend the technique to central cameras. This generalization allows the method to be applied to catadioptric cameras and wide-field-of-view cameras. Several experiments have been carried out successfully with a fisheye camera controlling a 6-degree-of-freedom robot, and with a catadioptric camera for a mobile robot navigation task.

6.
M.T. Hussein 《Advanced Robotics》2013,27(24):1575-1585
In this review, recent developments in the field of flexible robot arm control using visual servoing are surveyed. In contrast to rigid robots, the end-effector position of a flexible link cannot be obtained precisely enough for position control from kinematic information and joint variables alone. To solve this problem, visual servoing with a vision sensor (camera) is proposed to control flexible manipulators with improved accuracy. The paper is organized as follows: visual servoing architectures are first reviewed for rigid robots, and the advantages, disadvantages, and comparisons between the different approaches are presented. The use of visual servoing to control flexible robots is addressed next. Open problems such as state-variable estimation, the combination of different sensor modalities, and some application-oriented issues related to flexible robots are discussed in detail.

7.
Many issues in robot visual servo control, such as feature selection, system calibration, and servo control algorithms, still require study in both theory and application. For the Adept robot, a simple and fast camera calibration method is proposed that does not require accurate calibration of the camera's intrinsic and extrinsic parameters, realizing the coordinate transformation from the visual plane frame on the surface of the observed object to the robot base frame. Global image features, namely image moments, are used for servo tracking; with the derived image Jacobian matrix, a visual servo controller composed of image feedback and adaptive compensation of target motion is designed. The algorithm was first validated in positioning experiments on a static target and then applied to tracking a moving target; by tuning and selecting the control parameters, stable servo tracking and grasping were achieved. Experimental results show that using image moments as image features avoids a complex feature-matching process and yields good tracking accuracy.
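The image-Jacobian feedback described in this abstract can be illustrated with a minimal sketch. The 2×2 toy Jacobian, gain, and feature values below are made up for illustration; a real controller uses the pseudo-inverse of a full interaction matrix relating feature velocities to camera velocities.

```python
# Sketch of an image-based servo update: the commanded velocity is
# q_dot = -lam * J^{-1} * (s - s_star), where s are current image features
# and s_star the desired ones. A 2x2 toy Jacobian is inverted in closed
# form; real systems use the pseudo-inverse of a 2N x 6 interaction matrix.

def ibvs_step(s, s_star, J, lam=1.0):
    e = [s[0] - s_star[0], s[1] - s_star[1]]    # feature error in pixels
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    inv = [[ J[1][1] / det, -J[0][1] / det],
           [-J[1][0] / det,  J[0][0] / det]]    # closed-form 2x2 inverse
    return [-lam * (inv[0][0] * e[0] + inv[0][1] * e[1]),
            -lam * (inv[1][0] * e[0] + inv[1][1] * e[1])]

q_dot = ibvs_step(s=[120.0, 80.0], s_star=[100.0, 100.0],
                  J=[[2.0, 0.0], [0.0, 2.0]], lam=0.5)
print(q_dot)  # → [-5.0, 5.0]
```

Iterating this update drives the feature error, and hence the pose error, toward zero without any 3D reconstruction of the target.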

8.
To study the fundamental characteristics of robot visual servoing thoroughly within a unified theoretical framework, a generalized visual servoing system model is established based on the task-function approach. On this basis, the dynamic behavior of position-based visual servoing (PBVS) and image-based visual servoing (IBVS) in Cartesian space and in image space is investigated. Simulation results show that, within the same comparison framework, the PBVS method is likewise robust to camera calibration errors. Although the two methods are similar in the stability and convergence of the dynamic system, their dynamic performance in Cartesian space and image space differs greatly. For PBVS, the Cartesian trajectory is guaranteed to be the shortest path, but the corresponding image trajectory is uncontrolled and features may leave the field of view; for IBVS, the image trajectory is the shortest path, but, lacking direct control in Cartesian space, large rotational servoing tasks can produce Cartesian trajectory deviations such as camera retreat.

9.
To address the complexity and poor generality of marking, extracting, and matching geometric image features in traditional visual servoing methods, this paper proposes an image-moment-based four-degree-of-freedom (4-DOF) visual servoing method for robots. First, the nonlinear incremental mapping between image moments and robot pose in an eye-in-hand system is established, providing the theoretical basis for robot visual servo control with image moments. Then, without calibrating the camera or the hand-eye relationship, a robot visual servo control scheme based on image moments is designed by exploiting the nonlinear mapping capability of a back-propagation (BP) neural network. Finally, the trained neural network is used for visual servo tracking control. Experimental results show that the algorithm achieves a position tracking accuracy of 0.5 mm and an orientation tracking accuracy of 0.5°, verifying its effectiveness and good servo performance.
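As a hedged illustration of the image-moment features used in the abstracts above, the raw moments and centroid of a binary mask can be computed as follows; the 4×4 mask is a made-up example, and real systems would use a segmented camera image.

```python
# Sketch of image-moment features: area (m00) and centroid (m10/m00, m01/m00)
# computed from a binary mask. Such global features avoid the per-point
# feature matching that the abstract criticizes.

def raw_moments(mask):
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(mask):
        for x, val in enumerate(row):
            if val:
                m00 += 1      # zeroth-order moment: area
                m10 += x      # first-order moments for the centroid
                m01 += y
    return m00, m10 / m00, m01 / m00  # area, centroid x, centroid y

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
area, cx, cy = raw_moments(mask)
print(area, cx, cy)  # → 4.0 1.5 1.5
```

Higher-order and central moments, computed the same way, additionally encode the object's orientation and scale, which is what makes 4-DOF servoing from moments possible.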

10.
This paper presents a novel method to improve the performance of high-DOF image-based visual servoing (IBVS) with an uncalibrated camera. First, point-based and moment-based features are analyzed and compared with respect to a 4-DOF positioning task. Then, an extended interaction matrix (IM) related to the digital image is derived, and a Kalman filter (KF)-based algorithm is proposed to estimate the extended IM without camera calibration or an IM model. Finally, the KF-based algorithm is extended to approximate a decoupled control scheme. Experimental results on an industrial robot show that the proposed methods provide accurate estimation of the IM and achieve performance similar to the traditional calibration-based method. The proposed methods can therefore be applied to robot control systems in changing environments and can operate immediately on planar objects of complex, unknown shape under large displacements.
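A scalar toy version of KF-based interaction-matrix estimation can be sketched as follows. This is not the paper's extended-IM algorithm: the state here is a single Jacobian entry J in the one-dimensional model ṡ = J·v, and all numeric values (noise variances, data) are assumptions for illustration.

```python
# Toy sketch of Kalman-filter estimation of one interaction-matrix entry J
# in the scalar model s_dot = J * v. The state is J itself (random-walk
# model with variance q); each (v, s_dot) pair is a measurement with
# observation matrix H = v and noise variance r.

def kf_estimate(pairs, j0=0.0, p0=10.0, q=1e-4, r=0.01):
    j, p = j0, p0
    for v, s_dot in pairs:
        p += q                          # predict: state assumed ~constant
        k = p * v / (v * v * p + r)     # Kalman gain for H = v
        j += k * (s_dot - v * j)        # update with the innovation
        p *= (1 - k * v)                # covariance update
    return j

# Synthetic data generated with true J = 2.0 and no noise:
pairs = [(1.0, 2.0), (0.5, 1.0), (2.0, 4.0), (1.5, 3.0)]
j_hat = kf_estimate(pairs)
print(j_hat)  # converges close to 2.0
```

The full method stacks every entry of the extended interaction matrix into the filter state in the same way, so the matrix is tracked online without camera calibration.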

11.
This paper presents a 3D contour reconstruction approach employing a wheeled mobile robot equipped with an active laser‐vision system. With observation from an onboard CCD camera, a laser line projector fixed‐mounted below the camera is used for detecting the bottom shape of an object while an actively‐controlled upper laser line projector is utilized for 3D contour reconstruction. The mobile robot is driven to move around the object by a visual servoing and localization technique while the 3D contour of the object is being reconstructed based on the 2D image of the projected laser line. Asymptotical convergence of the closed‐loop system has been established. The proposed algorithm also has been used experimentally with a Dr Robot X80sv mobile robot upgraded with the low‐cost active laser‐vision system, thereby demonstrating effective real‐time performance. This seemingly novel laser‐vision robotic system can be applied further in unknown environments for obstacle avoidance and guidance control tasks. Copyright © 2011 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society

12.
For most visual servo systems, accurate camera/robot calibration is essential for precision tasks, such as tracking time-varying end-effector trajectories in the image plane of a remote (or fixed) camera. This paper presents details of control-theoretic approaches to the calibration and control of monocular visual servo systems in the case of a planar robot with a workspace perpendicular to the optical axis of the imaging system. An on-line adaptive calibration and control scheme is developed, along with an associated stability and convergence theorem. A redundancy-based refinement of this scheme is proposed and demonstrated via simulation.

13.
In this paper, the visual servoing problem is addressed by coupling nonlinear control theory with a convenient representation of the visual information used by the robot. The visual representation, which is based on a linear camera model, is extremely compact to comply with active vision requirements. The devised control law is proven to ensure global asymptotic stability in the Lyapunov sense, assuming exact model and state measurements. It is also shown that, in the presence of bounded uncertainties, the closed-loop behavior is characterized by a global attractor. The well known pose ambiguity arising from the use of linear camera models is solved at the control level by choosing a hybrid visual state vector including both image space (2D) information and 3D object parameters. A method is expounded for on-line visual state estimation that avoids camera calibration. Simulation and real-time experiments validate the theoretical framework in terms of both system convergence and control robustness.

14.
Image-based visual servoing is a flexible and robust technique to control a robot and guide it to a desired position using only two-dimensional visual data. However, it is well known that classical visual servoing based on the Cartesian coordinate system has one crucial problem: the camera moves backward to infinity when the camera motion from the initial to the desired pose is a pure rotation of 180° around the optical axis. This paper proposes a new formulation of visual servoing based on a cylindrical coordinate system whose origin can be shifted. The proposed approach can map a pure rotation around an arbitrary axis to the proper camera rotational motion. It is shown that this formulation contains the classical Cartesian formulation as an extreme case with the origin located at infinity. Furthermore, we propose a method for deciding the origin-shift parameters by estimating the rotational motion from the differences between the initial and desired image-plane positions of the feature points.

15.
16.
《Advanced Robotics》2013,27(3):205-220
In this paper, we describe a visual servoing system developed as a human-robot interface to drive a mobile robot toward any chosen target. An omni-directional camera is used to obtain a 360° field of view, and an efficient tracking technique is developed to track the target. The omni-directional geometry eliminates many of the problems common in visual tracking and makes visual servoing a practical alternative for human-robot interaction. The experiments demonstrate that it is an effective and robust way to guide a robot; in particular, they show robustness of the tracker to loss of template, vehicle motion, and changes in scale and orientation.

17.
In this article, we present an integrated manipulation framework for a service robot that allows it to interact with articulated objects in home environments through the coupling of vision and force modalities. We consider a robot that simultaneously observes its hand and the object to be manipulated using an external camera (i.e., the robot head). Task-oriented grasping algorithms (Proc of IEEE Int Conf on robotics and automation, pp 1794–1799, 2007) are used to plan a suitable grasp on the object according to the task to perform. A new vision/force coupling approach (Int Conf on advanced robotics, 2007), based on external control, is used first to guide the robot hand towards the grasp position and second to perform the task while taking external forces into account. The coupling between these two complementary sensor modalities provides the robot with robustness against uncertainties in models and positioning. A position-based visual servoing control law has been designed to continuously align the robot hand with the object being manipulated, independently of the camera position. This allows the camera to move freely while the task is being executed and makes the approach amenable to integration in current humanoid robots without hand-eye calibration. Experimental results on a real robot interacting with different kinds of doors are presented.

18.
In this work, several robust vision modules are developed and implemented for fully automated micromanipulation: autofocusing, object and end-effector detection, real-time tracking, and optical system calibration. An image-based visual servoing architecture and a path planning algorithm are also proposed on top of these vision modules. Experimental results are provided to assess the performance of the proposed visual servoing approach in positioning and trajectory tracking tasks. The proposed path planning algorithm, in conjunction with visual servoing, enables successful micromanipulation tasks.

19.
A new uncalibrated eye-to-hand visual servoing approach based on inverse fuzzy modeling is proposed in this paper. In classical visual servoing, the Jacobian plays a decisive role in the convergence of the controller, as its analytical model depends on the selected image features; this Jacobian must also be inverted online. Fuzzy modeling is applied to obtain an inverse model of the mapping between image feature variations and joint velocities. This approach is independent of the robot's kinematic model and camera calibration, and it avoids the need to invert the Jacobian online. An inverse model is identified for the robot workspace using measurement data from a robotic manipulator and is used directly as a controller. The inverse fuzzy control scheme is applied to a robotic manipulator performing visual servoing for random positioning in the workspace. The experimental results show the effectiveness of the proposed control scheme: the fuzzy controller can position the manipulator at any point in the workspace with better accuracy than the classical visual servoing approach.

20.
《Advanced Robotics》2013,27(5):403-405
A new adaptive linear robot control system for a robot work cell that can visually track and intercept stationary and moving objects undergoing arbitrary motion anywhere along their predicted trajectories within the robot's workspace is presented in this paper. The proposed system integrates a stationary monocular CCD camera with an off-the-shelf frame grabber and an industrial robot into a single application on the MATLAB platform. A combination of model-based object recognition and a learning vector quantization network is used to classify non-overlapping stationary objects. The optical flow technique and the MADALINE network are used to determine the target trajectory and to generate the predicted robot trajectory based on visual servoing, respectively. The need to model the robot, the camera, the stationary and moving objects, and the environment is eliminated; the locations and image features of these objects need not be preprogrammed, marked, or known beforehand, and a task can be changed without changing the robot program. After the learning process, it is shown that the KUKA robot is capable of tracking and intercepting both stationary and moving objects at an optimal rendezvous point on the conveyor accurately and in real time.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号