Similar Articles
20 similar articles found.
1.
This article introduces a sensor placement measure called vision resolvability. The measure provides a technique for estimating the relative ability of various visual sensors, including monocular systems, stereo pairs, multi-baseline stereo systems, and 3D rangefinders, to accurately control visually manipulated objects. The resolvability ellipsoid illustrates the directional nature of resolvability, and can be used to direct camera motion and adjust camera intrinsic parameters in real time so that the accuracy of the visual servoing system improves with camera-lens motion. The Jacobian mapping from task space to sensor space is derived for a monocular system, a stereo pair with parallel optical axes, and a stereo pair with perpendicular optical axes. Resolvability ellipsoids based on these mappings are presented for various sensor configurations. Visual servoing experiments demonstrate that vision resolvability can be used to direct camera-lens motion to increase the ability of a visually servoed manipulator to precisely servo objects. © 1996 John Wiley & Sons, Inc.
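The resolvability idea can be made concrete with a small numerical sketch: the singular value decomposition of the task-space-to-image-space Jacobian gives the axes of a resolvability ellipsoid, with large singular values marking task directions the sensor resolves well. The sketch below uses the standard point-feature Jacobian for a single monocular camera with normalized image coordinates; the specific feature, depth and focal length are illustrative assumptions, not values from the article.

```python
import numpy as np

def image_jacobian_point(x, y, Z, f=1.0):
    """Jacobian mapping camera-frame task velocity (vx, vy, vz, wx, wy, wz)
    to the image velocity of one point feature at normalized coords (x, y), depth Z."""
    return f * np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def resolvability_ellipsoid(J):
    """SVD of J: the leading rows of Vt pair with the singular values; task directions
    with larger singular values produce larger image motion (better resolved), while
    null-space directions produce none."""
    _, sigma, Vt = np.linalg.svd(J)
    return sigma, Vt

J = image_jacobian_point(x=0.1, y=-0.05, Z=0.8)
sigma, directions = resolvability_ellipsoid(J)
print(sigma)   # larger value -> task direction the sensor resolves more finely
```

Directing camera-lens motion so that poorly resolved task directions gain larger singular values is, in essence, how such a measure is used during servoing.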

2.
We present a new approach to visual feedback control using image-based visual servoing with stereo vision. In order to control the position and orientation of a robot with respect to an object, a new technique is proposed using binocular stereo vision. The stereo vision enables us to calculate an exact image Jacobian not only around a desired location, but also at other locations. The suggested technique can guide a robot manipulator to the desired location without a priori knowledge such as the relative distance to the desired location or a model of the object, even if the initial positioning error is large. We describe a model of stereo vision and how to generate feedback commands. The performance of the proposed visual servoing system is illustrated by simulation and experimental results, and compared with the conventional method for an assembly robot. This work was presented in part at the Fourth International Symposium on Artificial Life and Robotics, Oita, Japan, January 19–22, 1999.
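As a rough illustration of how stereo features enter the control law, the sketch below stacks the left- and right-image feature errors and Jacobians and applies a proportional law through the pseudo-inverse. This is a generic stereo image-based servoing step under the assumption that both image Jacobians are already available (e.g. from a point-feature model), not the authors' exact formulation.

```python
import numpy as np

def stereo_ibvs_velocity(J_left, J_right, s_left, s_right,
                         s_left_des, s_right_des, gain=0.5):
    """Stereo image-based visual servoing step: command velocity from the stacked
    feature error of both views, v = -gain * pinv([J_l; J_r]) @ (s - s*)."""
    J = np.vstack([J_left, J_right])
    error = np.concatenate([s_left - s_left_des, s_right - s_right_des])
    return -gain * np.linalg.pinv(J) @ error
```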

3.
Visual-servoing-based line-grasping control for a power transmission line inspection robot
When an inspection robot crosses obstacles autonomously, its arm must grasp the transmission line accurately. Combining the geometric features of the transmission line with the camera imaging model, a single-camera stereo vision method is proposed to determine the position and orientation of the line. A servo control model for line grasping by the manipulator is then built from this localization method and visual servoing theory. Visual-servoing line-grasping experiments were carried out on a self-developed inspection robot, and the results verify the effectiveness of the method.

4.
M.T. Hussein 《Advanced Robotics》2013,27(24):1575-1585
In this review, recent developments in the field of flexible robot arm control using visual servoing are surveyed. In comparison with rigid robots, the end-effector position of flexible links cannot be obtained precisely enough for position control from kinematic information and joint variables alone. To solve this, the use of a vision sensor (camera) system, i.e. visual servoing, is proposed to control flexible manipulators with improved accuracy. The paper is organized as follows: visual servoing architectures are first reviewed for rigid robots, and the advantages, disadvantages, and comparisons between the different approaches are discussed. The use of visual servoing to control flexible robots is addressed next. Open problems such as state-variable estimation, the combination of different sensor properties, and several application-oriented points related to flexible robots are discussed in detail.

5.
This article describes a connectionist vision system for the precise control of a robot designed to walk on the exterior of the space station. The network learns to use video camera input to determine the displacement of the robot's gripper relative to a hole in which the gripper must be inserted. Once trained, the network's output is used to control the robot, with a resulting factor of five fewer missed gripper insertions than occur when the robot walks without sensor feedback. The neural network visual feedback techniques described could also be applied in domains such as manufacturing, where precise robot positioning is required in an uncertain environment.

6.
Eye-in-hand visual servo control without calibration of the system model
In industrial practice it is impossible to calibrate camera and robot models exactly, yet current visual servo controllers all require a calibrated system model. To address this, a novel dynamic, calibration-free visual servo control algorithm for the eye-in-hand configuration is proposed: it tracks moving objects without calibrating the camera or the robot kinematic model, minimizing a nonlinear objective function so that dynamic images are tracked from visual information alone. Since the composite Jacobian of an eye-in-hand system cannot be computed directly as it changes at every time increment, a method is proposed to estimate the change of the image Jacobian at each time increment. Simulations demonstrate the correctness and effectiveness of the approach.
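Uncalibrated visual servoing of this kind is commonly realized with a secant (Broyden-style) update that re-estimates the image Jacobian from the observed feature change at each time increment. The sketch below shows that generic scheme and should not be read as the paper's exact algorithm; the gains are illustrative assumptions.

```python
import numpy as np

def broyden_update(J, dq, de, alpha=1.0):
    """Secant (Broyden) update of the estimated image Jacobian.

    J  : current estimate of the image Jacobian (m x n)
    dq : change in robot joint coordinates over the last step (n,)
    de : observed change in image features over the same step (m,)
    """
    dq = dq.reshape(-1, 1)
    de = de.reshape(-1, 1)
    denom = float(dq.T @ dq)
    if denom < 1e-12:          # no motion, nothing to learn
        return J
    # correct J so that J @ dq matches the observed feature change
    return J + alpha * ((de - J @ dq) @ dq.T) / denom

def control_step(J, e, gain=0.5):
    """One uncalibrated servoing step: joint velocity from the feature error e."""
    return -gain * np.linalg.pinv(J) @ e

# per control cycle (hypothetical variable names):
#   J  = broyden_update(J, q - q_prev, s - s_prev)
#   dq = control_step(J, s - s_desired)
```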

7.
《Advanced Robotics》2013,27(8-9):1035-1054
In this paper we present an image predictive controller for an eye-in-hand-type servoing architecture, composed of a 6-d.o.f. robot and a camera mounted on the gripper. A novel architecture for integrating reference trajectory and image prediction is proposed for use in predictive control of visual servoing systems. In the proposed method, a new predictor is developed based on the relation between the camera velocity and the time variation of the visual features given by the interaction matrix. The image-based predictor generates the future trajectories of a visual feature ensemble when past and future camera velocities are known. In addition, a reference trajectory is introduced to define how the desired features are reached over the prediction horizon, starting from the current features. The advantages of the new architecture are the reference trajectory, used for the first time in the predictive-control sense, and the predictor based on a local model. Simulations reveal the efficiency of the proposed architecture for controlling a 6-d.o.f. robot manipulator.
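The local-model predictor can be sketched directly from the relation ṡ = L(s, Z) v: given the feature depths and the past/future camera twists, the feature trajectory over the horizon is obtained by forward integration. The sketch below assumes point features in normalized coordinates with known depths, which is an illustrative simplification rather than the paper's exact predictor.

```python
import numpy as np

def predict_features(s0, depths, velocities, Ts):
    """Local-model predictor: s(k+1) = s(k) + Ts * L(s(k)) * v(k), where L is the
    point-feature interaction matrix and v(k) are the (known) future camera twists."""
    def L_point(x, y, Z):
        return np.array([
            [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
            [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
        ])
    s = np.asarray(s0, dtype=float).copy()     # [x1, y1, x2, y2, ...]
    trajectory = [s.copy()]
    for v in velocities:                       # one twist per step of the horizon
        L = np.vstack([L_point(s[2*i], s[2*i + 1], depths[i])
                       for i in range(len(depths))])
        s = s + Ts * (L @ v)
        trajectory.append(s.copy())
    return np.array(trajectory)
```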

8.
An intelligent robotic-arm grasping and sorting system with 3D stereo vision was developed using an industrial camera, an industrial projector, an ordinary webcam, a computer and a robot arm. Self-written software automatically controls and synchronizes the industrial camera and projector; object height information is obtained with the dual-wavelength fringe-projection 3D shape measurement method proposed in earlier work, and, combined with the 2D in-plane information captured by the ordinary webcam and processed with OpenCV, objects are automatically recognized and classified. The processed data are sent to the robot arm over a serial communication protocol, the system solves the geometric grasping pose, and intelligent grasping is achieved; the gripper opening is adjusted automatically according to pressure feedback from the gripper, giving adaptive grasping. Experiments show that, with its built-in fast 3D shape acquisition device, the system can accurately and quickly grasp objects of arbitrary shape within the workspace and sort them intelligently.
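As a hypothetical illustration of the 2D part of such a pipeline (not the authors' software), the sketch below extracts a planar grasp pose from a binary object mask with OpenCV and attaches a separately measured height, e.g. from fringe projection; the resulting pose could then be transmitted to the arm over a serial link.

```python
import cv2

def grasp_pose_from_mask(mask, height_mm):
    """Derive a planar grasp pose (pixel center, rotation angle) from a binary
    object mask, combined with the object height measured by another sensor."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), _, angle = cv2.minAreaRect(largest)   # oriented bounding box of the object
    return {"x_px": cx, "y_px": cy, "angle_deg": angle, "height_mm": height_mm}
```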

9.
Designing a real-time visual tracking system to catch a goldfish is a complex task because of the large amount of streaming video data that must be transmitted and processed immediately when tracking the goldfish. Usually, building such visual servoing systems requires the application of high-cost specialized hardware and the development of complicated visual control software. In this paper, a novel low-cost, real-time visual servo control system is presented. The system uses stereo vision consisting of two calibrated cameras to acquire images of the goldfish, and applies the continuously adaptive mean shift (CAMSHIFT) vision-tracking algorithm to provide feedback of a fish’s real-time position at a high frame rate. It then employs a 5-axis robot manipulator controlled by a fuzzy reasoning system to catch the fish. This visual tracking and servoing system is less sensitive to lighting influences and thus performs more efficiently. Experiments with the proposed method yielded very good results, as the system’s real-time 3D vision successfully tracked two fish and guided the manipulator, which has a net attached to its end effector, to catch one of them.
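A minimal OpenCV sketch of the CAMSHIFT tracking stage is shown below: a hue histogram of the target is back-projected into each frame and cv2.CamShift keeps the search window on the fish. The video source, the initial region of interest and the single-camera setup are illustrative assumptions; the paper combines two calibrated cameras and a fuzzy controller on top of a tracker of this kind.

```python
import cv2

def track_camshift(video_path, roi):
    """Track a colored target (e.g. a goldfish) with CAMSHIFT on the hue channel."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    x, y, w, h = roi                                  # initial bounding box of the fish
    hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    window = (x, y, w, h)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        rot_box, window = cv2.CamShift(backproj, window, term)
        yield rot_box                                 # rotated box: 2D position feedback
    cap.release()
```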

10.
Simulation study on the positioning of a fruit-picking manipulator combined with machine vision
The spatial positioning mechanism of a fruit-picking manipulator is studied. The positioning error of the binocular stereo vision system is analyzed and a vision error compensation scheme is established. A simulation system is built with virtual-manipulator development software and CCD vision hardware: spatial position data acquired by binocular stereo vision are mapped into the virtual environment to guide the manipulator in simulated picking. By fusing knowledge from multiple domains, the system simulates the precise, vision-coupled positioning of the picking mechanism and can effectively guide the optimized design of precise manipulator positioning in real working environments.

11.
A binocular stereo vision system was designed and developed for a tomato-harvesting robot to support automated picking. A real-time acquisition system was designed using the VFW (Video for Windows) method; ripe tomatoes are recognized by image segmentation based on the color difference between ripe fruit and the background; and, after camera calibration and centroid matching, the 3D position of the tomato is obtained by stereo reconstruction. Experimental results show a ripe-fruit recognition rate of up to 98%, with the whole segmentation and recognition process taking 0.21 s on average; at working distances below 500 mm, the absolute distance error can be kept within 14 mm apart from a few outliers, which satisfies practical requirements well.
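The segmentation and reconstruction steps can be sketched with OpenCV as below: ripe fruit is segmented by color in HSV space, the blob centroid is taken as the feature, and matched centroids are triangulated with the calibrated projection matrices. The HSV thresholds and the use of cv2.triangulatePoints are illustrative assumptions rather than the paper's implementation.

```python
import cv2
import numpy as np

def ripe_tomato_centroid(bgr):
    """Segment red (ripe) regions in HSV and return the centroid of the largest blob.
    Thresholds are illustrative, not the values used in the paper."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.bitwise_or(
        cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)),
        cv2.inRange(hsv, (170, 120, 70), (180, 255, 255)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

def triangulate(P_left, P_right, pt_left, pt_right):
    """Reconstruct the 3D fruit position from matched centroids, given the two
    calibrated 3x4 projection matrices."""
    pl = np.asarray(pt_left, dtype=float).reshape(2, 1)
    pr = np.asarray(pt_right, dtype=float).reshape(2, 1)
    X = cv2.triangulatePoints(P_left, P_right, pl, pr)   # homogeneous 4x1
    return (X[:3] / X[3]).ravel()
```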

12.
2D visual servoing consists in using data provided by a vision sensor to control the motions of a dynamic system. Most visual servoing approaches have relied on geometric features that have to be tracked and matched in the image acquired by the camera. Recent works have highlighted the interest of taking into account the photometric information of the entire image. This approach was originally developed for images from perspective cameras. In this paper, we propose to extend the technique to central cameras. This generalization allows the method to be applied to catadioptric cameras and wide-field-of-view cameras. Several experiments have been carried out successfully with a fisheye camera to control a 6-degrees-of-freedom robot, and with a catadioptric camera for a mobile robot navigation task.
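For a conventional perspective camera, photometric visual servoing uses the pixel intensities themselves as features: the error is I − I*, and each pixel contributes a row built from its image gradient and the point interaction matrix. The sketch below is a crude, heavily simplified version of that idea (pixel-unit gradients, rough coordinate normalization, constant assumed depth), so mainly the structure is meaningful; the paper's actual contribution, the extension to central and catadioptric projection models, is not reproduced here.

```python
import cv2
import numpy as np

def photometric_control(I, I_star, Z=1.0, gain=1e-4, step=8):
    """One photometric visual-servoing step on grayscale float images: camera twist
    from the intensity error e = I - I*, with rows -grad(I)^T * L_xy on a pixel grid."""
    gx = cv2.Sobel(I, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(I, cv2.CV_64F, 0, 1, ksize=3)
    h, w = I.shape
    rows, errors = [], []
    for v_pix in range(0, h, step):
        for u_pix in range(0, w, step):
            # crude normalized coordinates (a calibrated model would use fx, fy, cx, cy)
            x, y = (u_pix - w / 2.0) / w, (v_pix - h / 2.0) / h
            Lxy = np.array([[-1/Z, 0, x/Z, x*y, -(1 + x*x), y],
                            [0, -1/Z, y/Z, 1 + y*y, -x*y, -x]])
            rows.append(-(gx[v_pix, u_pix] * Lxy[0] + gy[v_pix, u_pix] * Lxy[1]))
            errors.append(float(I[v_pix, u_pix]) - float(I_star[v_pix, u_pix]))
    L_I = np.array(rows)
    return -gain * np.linalg.pinv(L_I) @ np.array(errors)
```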

13.
In this paper, a teleoperation system for a robot arm with a position measurement function and a visual support function is developed. The working robot arm is remotely controlled by the manual operation of the human operator and by autonomous control via visual servoing. The visual servo employs the template matching technique. The position measurement is realized using a stereo camera based on the angle–pixel characteristic. A visual support function that gives the human operator useful information about the teleoperation is also provided. The usefulness of the proposed teleoperation system is confirmed through experiments using an industrial articulated robot arm.
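The template-matching step of such a visual servo loop can be sketched in a few lines of OpenCV; the matching score threshold and the grayscale inputs are illustrative assumptions.

```python
import cv2

def locate_template(frame_gray, template_gray, threshold=0.8):
    """Template-matching step of a visual servo loop: return the best-match center
    of the target template in the current camera frame, or None if it is lost."""
    result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None                      # target lost; fall back to manual operation
    h, w = template_gray.shape[:2]
    return (max_loc[0] + w // 2, max_loc[1] + h // 2)   # target center in pixels
```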

14.
Image-based effector servoing is a process of perception–action cycles for handling a robot effector under continual visual feedback. This paper applies visual servoing mechanisms not only for handling objects, but also for camera calibration and object inspection. A 6-DOF manipulator and a stereo camera head are mounted on separate platforms and are steered independently. In the first phase (calibration phase), camera features such as the optical axes and the fields of sharp view are determined. In the second phase (inspection phase), the robot hand carries an object into the field of view of one camera, moves the object along the optical axis towards the camera, rotates it to reach an optimal view, and finally the object shape is inspected in detail. In the third phase (assembly phase), the system localizes a board containing holes of different shapes, determines the hole that best fits the object shape, then approaches and arranges the object appropriately. The final object insertion is based on haptic sensors, but is not treated in the paper. At present, the robot system has the competence to handle cylindrical and cuboid pegs. For handling other object categories the system can be extended with more sophisticated strategies in the inspection and/or assembly phase.

15.
Visual servoing allows the introduction of robotic manipulation into dynamic and uncontrolled environments. This paper presents a position-based visual servoing algorithm using particle filtering. The objective is to grasp objects with the 6 degrees of freedom of the robot manipulator in non-automated industrial environments, using monocular vision. A particle filter has been added to the position-based visual servoing algorithm to deal with the different noise sources of those industrial environments. Experiments performed in the real industrial scenario of the ROBOFOOT project (http://www.robofoot.eu/) showed accurate grasping and a high level of stability in the visual servoing process.
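A minimal sketch of how a particle filter can smooth the noisy pose measurements feeding a position-based servo loop is given below; the random-walk motion model, the Gaussian weighting and the noise levels are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def particle_filter_step(particles, weights, measured_pose,
                         motion_noise=0.002, meas_noise=0.01):
    """One predict/update/resample cycle over the object pose (here a 6-vector:
    translation + roll-pitch-yaw), smoothing a noisy pose measurement."""
    n = len(particles)
    # predict: diffuse particles with random-walk motion noise
    particles = particles + np.random.normal(0.0, motion_noise, particles.shape)
    # update: weight particles by how well they explain the measured pose
    sq_err = np.sum((particles - measured_pose) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * sq_err / meas_noise**2)
    weights = weights / (np.sum(weights) + 1e-300)
    # resample (multinomial) when the effective sample size drops too low
    if 1.0 / np.sum(weights**2) < n / 2:
        idx = np.random.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    estimate = np.average(particles, axis=0, weights=weights)   # filtered pose for PBVS
    return particles, weights, estimate
```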

16.
Executing complex robotic tasks including dexterous grasping and manipulation requires a combination of dexterous robots, intelligent sensors and adequate object information processing. In this paper, vision has been integrated into a highly redundant robotic system consisting of a tiltable camera and a three-fingered dexterous gripper, both mounted on a PUMA-type robot arm. In order to condense the image data of the robot working space acquired from the mobile camera, contour image processing is used for offline grasp and motion planning as well as for online supervision of manipulation tasks. The execution of the desired robot and object motions is controlled by a visual feedback system coordinating the motions of hand, arm and eye according to the specific requirements of the respective situation. Experiences and results from several experiments in the field of service robotics show the possibilities and limits of integrating vision and tactile sensors into a dexterous hand-arm-eye system able to assist humans in industrial or servicing environments.

17.
Detection and tracking for robotic visual servoing systems
Robot manipulators require knowledge about their environment in order to perform their desired actions. In several robotic tasks, vision sensors play a critical role by providing the necessary quantity and quality of information regarding the robot's environment. For example, “visual servoing” algorithms may control a robot manipulator in order to track moving objects that are being imaged by a camera. Current visual servoing systems often lack the ability to automatically detect objects that appear within the camera's field of view. In this research, we present a robust “figure/ground” framework for visually detecting objects of interest. An important contribution of this research is a collection of optimization schemes that allow the detection framework to operate within the real-time limits of visual servoing systems. The most significant of these schemes involves the use of “spontaneous” and “continuous” domains. The number and location of continuous domains are allowed to change over time, adjusting to the dynamic conditions of the detection process. We have developed actual servoing systems in order to test the framework's feasibility and to demonstrate its usefulness for visually controlling a robot manipulator.

18.
This paper describes the UJI librarian robot, a mobile manipulator that is able to autonomously locate a book in an ordinary library, and grasp it from a bookshelf, by using eye-in-hand stereo vision and force sensing. The robot is only provided with the book code, a library map and some knowledge about its logical structure and takes advantage of the spatio-temporal constraints and regularities of the environment by applying disparate techniques such as stereo vision, visual tracking, probabilistic matching, motion estimation, multisensor-based grasping, visual servoing and hybrid control, in such a way that it exhibits a robust and dependable performance. The system has been tested, and experimental results show how it is able to robustly locate and grasp a book in a reasonable time without human intervention.

19.
We present a distributed vision-based architecture for smart robotics that is composed of multiple control loops, each with a specialized level of competence. Our architecture is subsumptive and hierarchical, in the sense that each control loop can add to the competence level of the loops below, and in the sense that the loops can present a coarse-to-fine gradation with respect to vision sensing. At the coarsest level, the processing of sensory information enables a robot to become aware of the approximate location of an object in its field of view. On the other hand, at the finest end, the processing of stereo information enables a robot to determine more precisely the position and orientation of an object in the coordinate frame of the robot. The processing in each module of the control loops is completely independent and it can be performed at its own rate. A control Arbitrator ranks the results of each loop according to certain confidence indices, which are derived solely from the sensory information. This architecture has clear advantages regarding overall performance of the system, which is not affected by the "slowest link," and regarding fault tolerance, since faults in one module do not affect the other modules. At this time we are able to demonstrate the utility of the architecture for stereoscopic visual servoing. The architecture has also been applied to mobile robot navigation and can easily be extended to tasks such as "assembly-on-the-fly."
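The arbitration step can be sketched very simply: each loop reports a command together with a confidence index derived from its own sensing, and the arbitrator forwards the highest-confidence command. The sketch below is a schematic reading of that idea, not the authors' implementation.

```python
def arbitrate(loop_outputs):
    """Pick the command of the highest-confidence control loop.
    loop_outputs: list of (command, confidence) pairs, or None for loops whose
    sensing failed this cycle; confidence indices come from the sensory data."""
    valid = [o for o in loop_outputs if o is not None]
    if not valid:
        return None                      # no loop produced a usable command
    return max(valid, key=lambda pair: pair[1])[0]
```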

20.