Similar Documents
20 similar documents found (search time: 125 ms)
1.
《Advanced Robotics》2013,27(10):1041-1056
When considering real-world applications of robot control with visual servoing, both three-dimensional (3-D) information and a high feedback rate are required. We have developed a 3-D target-tracking system with a 1-ms feedback rate using two high-speed vision systems called Column Parallel Vision (CPV) systems. To obtain 3-D information, such as position, orientation and shape parameters of the target object, a feature-based algorithm has been introduced using moment feature values extracted from vision systems for a spheroidal object model. Also, we propose a new 3-D self-windowing method to extract the target in 3-D space using epipolar geometry, which is an extension of the conventional self-windowing method in 2-D images.
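The depth recovery underlying such a stereo tracking system can be sketched as follows. This is a minimal illustration of triangulation in a rectified stereo pair, not the CPV implementation; the focal length, baseline, and pixel coordinates are assumed for the example:

```python
import numpy as np

def triangulate_rectified(xl, yl, xr, f, baseline):
    """Triangulate a 3-D point from a rectified stereo pair.

    xl, yl:   pixel coordinates in the left image (origin at the principal point)
    xr:       x coordinate of the matched point in the right image
    f:        focal length in pixels
    baseline: camera separation in meters
    """
    d = xl - xr                      # disparity (pixels)
    if d <= 0:
        raise ValueError("non-positive disparity")
    Z = f * baseline / d             # depth from similar triangles
    X = xl * Z / f
    Y = yl * Z / f
    return np.array([X, Y, Z])

# A point 2 m ahead, seen with f = 500 px and a 10 cm baseline:
p = triangulate_rectified(xl=100.0, yl=50.0, xr=75.0, f=500.0, baseline=0.1)
```

The same relation, applied per feature at 1-kHz frame rates, is what makes a 1-ms 3-D feedback loop feasible.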

2.
Real-time binocular smooth pursuit
This article examines the problem of a moving robot tracking a moving object with its cameras, without requiring the ability to recognize the target to distinguish it from distracting surroundings. A novel aspect of the approach taken is the use of controlled camera movements to simplify the visual processing necessary to keep the cameras locked on the target. A gaze-holding system implemented on a robot's binocular head demonstrates this approach. Even while the robot is moving, the cameras are able to track an object that rotates and moves in three dimensions.
The central idea is that localizing attention in 3-D space makes precategorical visual processing sufficient to hold gaze. Visual fixation can help separate the target object from distracting surroundings. Converged cameras produce a horopter (surface of zero stereo disparity) in the scene. Binocular features with no disparity can be located with a simple filter, showing the object's location in the image. Similarly, an object that is being tracked is imaged near the center of the field of view, so spatially localized processing helps concentrate visual attention on the target. Instead of requiring a way to recognize the target, the system relies on active control of camera movements and binocular fixation segmentation to locate the target.
This work was largely completed while Coombs was affiliated with the University of Rochester. Robot Systems Division, Manufacturing Engineering Laboratory, National Institute of Standards and Technology, Technology Administration, U.S. Department of Commerce. Product endorsement disclaimer: references to specific brands, equipment, or trade names in this document are made to facilitate understanding and do not imply endorsement by the National Institute of Standards and Technology.
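The zero-disparity idea can be sketched very simply: with converged cameras, the fixated object matches between the two views at zero shift, so a plain per-pixel comparison acts as a crude zero-disparity filter. This toy version with synthetic images is an assumption-laden sketch, not the paper's filter:

```python
import numpy as np

def zero_disparity_mask(left, right, tol=10):
    """Mark pixels whose left/right intensities agree at zero shift.

    With verged cameras the fixated target lies on the horopter and
    appears at (near) zero disparity, so absolute difference between
    the two images highlights it; off-horopter clutter mismatches.
    """
    return np.abs(left.astype(int) - right.astype(int)) <= tol

left = np.zeros((8, 8), dtype=np.uint8)
right = np.zeros((8, 8), dtype=np.uint8)
left[2:5, 2:5] = 200          # target imaged at the same place in both views
right[2:5, 2:5] = 205
right[6, 6] = 180             # background clutter with large disparity
mask = zero_disparity_mask(left, right)
```

A real system would restrict the mask to textured regions, since uniform background also trivially "matches" at zero shift.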

3.
王杰, 蒋明敏, 花晓慧, 鲁守银, 李金屏. 《智能系统学报》2015, 10(5): 775-782
To track and precisely localize a target's spatial position in a binocular visual servoing system for a robot manipulator, a target-tracking method based on projection-histogram matching and epipolar geometry constraints is proposed. The target is manually labeled in both views, and its horizontal and vertical projection histograms in multiple color spaces are extracted as matching templates. In one view, the target is searched for and tracked using the motion-consistency principle together with projection-histogram matching; in the other view, the epipolar geometry of the binocular system restricts the search range, within which the target is searched for and localized. The method uses horizontal and vertical projection histograms to describe the target's structural information, and accomplishes both target tracking and registration in the binocular system, which benefits precise target localization and visual measurement. Experimental results show that the method tracks targets effectively in a binocular vision system, with high computational efficiency and strong robustness.
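The projection-histogram template can be sketched as row and column sums of an image patch, compared by normalized correlation. A minimal sketch; the patch contents and similarity measure are assumptions for illustration:

```python
import numpy as np

def projection_histograms(patch):
    """Horizontal and vertical projection histograms of an image patch
    (row sums and column sums), a compact structural descriptor."""
    return patch.sum(axis=1).astype(float), patch.sum(axis=0).astype(float)

def histogram_similarity(h1, h2):
    """Normalized correlation between two projection histograms."""
    h1 = h1 - h1.mean()
    h2 = h2 - h2.mean()
    denom = np.linalg.norm(h1) * np.linalg.norm(h2)
    return float(h1 @ h2 / denom) if denom > 0 else 0.0

template = np.zeros((10, 10))
template[3:7, 3:7] = 1.0
shifted = np.roll(template, 1, axis=1)       # same structure, shifted one column
hT, vT = projection_histograms(template)
hS, vS = projection_histograms(shifted)
sim_h = histogram_similarity(hT, hS)         # row profile unchanged by a column shift
```

Because row and column profiles respond differently to horizontal and vertical displacement, matching both histograms constrains the target location in each axis.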

4.
A novel approach to 3-D gaze tracking using stereo cameras
A novel approach to three-dimensional (3-D) gaze tracking using 3-D computer vision techniques is proposed in this paper. The method employs multiple cameras and multiple point light sources to estimate the optical axis of the user's eye without using any user-dependent parameters, thereby eliminating the inconvenient system calibration process, which may itself introduce calibration errors. A real-time 3-D gaze tracking system has been developed that provides 30 gaze measurements per second. Moreover, a simple and accurate calibration method is proposed for the gaze tracking system: before using the system, each user only has to stare at a target point for a few (2-3) seconds so that the constant angle between the 3-D line of sight and the optical axis can be estimated. Test results from six subjects showed that the gaze tracking system is very promising, achieving an average estimation error of under 1 degree.

5.
《Advanced Robotics》2013,27(15):2171-2197
This paper presents a novel approach for object tracking with a humanoid robot head. The proposed approach is based on the concept of a virtual mechanism, where the real head is enhanced with a virtual link that connects the eye with a point in three-dimensional space. We tested our implementation on a humanoid head with 7 d.o.f. and two rigidly connected cameras in each eye (wide-angle and telescopic). The experimental results show that the proposed control algorithm can be used to maintain the view of an observed object in the foveal (telescopic) image using information from the peripheral view. Unlike other methods proposed in the literature, our approach indicates how to exploit the redundancy of the robot head. The proposed technique is systematic and can be easily implemented on different types of active humanoid heads. The results show good tracking performance regardless of the distance between the object and the head. Moreover, the uncertainties in the kinematic model of the head do not affect the performance of the system.

6.
The purposeful-gazing capability of active vision offers advantages to many manufacturing tasks. This paper discusses the problems associated with purposeful gazing and fixation of attention for active vision. In binocular active vision, gazing at a selected target refers to directing the visual axes to capture the target in the (appropriate part of the) visual field of both sensors (cameras), and holding gaze refers to directing the visual axes of the sensors so as to maintain the target or point of interest in the visual field of both sensors. This paper proposes solutions to the important problems involved in gaze stabilization by developing techniques for vergence error extraction and vergence servo control. Vergence is the movement of the two visual sensors (in a binocular system) in opposite directions to fixate on a selected point. Binocular gazing is realized by decreasing the disparity, which represents the vergence error. To obtain the disparity for vergence error extraction, a phase-based approach that robustly and efficiently estimates vergence disparity is developed. To control vergence, we present a dual sampling-rate approach for vision-sensor-based dynamic servo control.
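The phase-based disparity idea rests on the Fourier shift theorem: a horizontal shift between the left and right signals appears as a linear phase ramp in their cross-power spectrum. A minimal 1-D sketch under assumed periodic test signals, not the paper's estimator:

```python
import numpy as np

def phase_disparity(left_row, right_row):
    """Estimate the horizontal shift between two 1-D signals from the
    phase of their normalized cross-power spectrum (phase correlation)."""
    n = len(left_row)
    L = np.fft.fft(left_row)
    R = np.fft.fft(right_row)
    cross = L * np.conj(R)
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.fft.ifft(cross).real           # impulse at the shift
    shift = int(np.argmax(corr))
    return shift if shift <= n // 2 else shift - n

k = np.arange(128)
x = np.sin(2 * np.pi * 5 * k / 128) + 0.5 * np.sin(2 * np.pi * 9 * k / 128)
d = phase_disparity(np.roll(x, 3), x)        # left row is x shifted right by 3 px
```

Because only the phase is kept, the estimate is largely insensitive to contrast differences between the two views, which is why phase-based disparity is attractive for vergence control.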

7.
Binocular stereo vision is integrated into the design of a quadrotor aircraft. Two monocular cameras at the same horizontal level mimic human eyes; the stereo ranging system performs image acquisition, binocular rectification, and stereo matching to obtain a disparity map, from which depth information is computed and a three-dimensional reconstruction is built. This enables one-key takeoff, altitude hold, fixed-point cruising, hovering, and autonomous obstacle avoidance for the quadrotor. Experimental results show that the designed aircraft responds quickly, follows well, and achieves relatively high positioning accuracy.

8.
This article describes real-time gaze control using position-based visual servoing. The main control objective of the system is to enable the gaze point to track the target so that the target's image feature is located at each image center. The overall system consists of two parts: the vision process and the control system. The vision system extracts a predefined color feature from images. An adaptive look-up table method is proposed to obtain the 3-D position of the feature within the video frame rate under varying illumination. An uncalibrated camera raises the problem that the reconstructed 3-D positions are not correct. To solve this calibration problem in the position-based approach, we constructed an end-point closed-loop system using an active head-eye system. In the proposed control system, the reconstructed position error is used with a Jacobian matrix of the kinematic relation. System stability is locally guaranteed, as in image-based visual servoing, and the gaze position was shown to converge to the feature position. The proposed approach was successfully applied to a tracking task with a moving target in simulations and real experiments, and the processing speed satisfies real-time requirements. This work was presented in part at the Sixth International Symposium on Artificial Life and Robotics, Tokyo, January 15–17, 2001.
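The position-error-plus-Jacobian control scheme can be sketched as a standard resolved-rate law: joint updates from the pseudo-inverse of the kinematic Jacobian times the reconstructed position error. The pan/tilt forward model, gain, and target below are illustrative assumptions, not the paper's head-eye kinematics:

```python
import numpy as np

def servo_step(q, target, forward, jacobian, gain=0.5):
    """One step of position-based visual servoing: joint correction from
    the pseudo-inverse of the kinematic Jacobian and the position error."""
    error = target - forward(q)              # reconstructed 3-D position error
    dq = gain * np.linalg.pinv(jacobian(q)) @ error
    return q + dq

# Hypothetical pan/tilt head whose gaze direction lies on the unit sphere.
def forward(q):
    pan, tilt = q
    return np.array([np.cos(tilt) * np.sin(pan),
                     np.sin(tilt),
                     np.cos(tilt) * np.cos(pan)])

def jacobian(q):
    pan, tilt = q
    return np.array([[ np.cos(tilt) * np.cos(pan), -np.sin(tilt) * np.sin(pan)],
                     [ 0.0,                          np.cos(tilt)],
                     [-np.cos(tilt) * np.sin(pan), -np.sin(tilt) * np.cos(pan)]])

q = np.array([0.0, 0.0])
target = forward(np.array([0.3, 0.2]))       # desired gaze direction
for _ in range(50):
    q = servo_step(q, target, forward, jacobian)
```

The end-point closed loop is what tolerates the uncalibrated camera: the error is re-measured each cycle, so a moderately wrong Jacobian still yields local convergence.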

9.
In this paper, a new active visual system is developed, which is based on bionic vision and is insensitive to the property of the cameras. The system consists of a mechanical platform and two cameras. The mechanical platform has two degrees of freedom of motion in pitch and yaw, which is equivalent to the neck of a humanoid robot. The cameras are mounted on the platform. The directions of the optical axes of the two cameras can be simultaneously adjusted in opposite directions. With these motions, the object's images can be located at the centers of the image planes of the two cameras. The object's position is determined with the geometry information of the visual system. A more general model for active visual positioning using two cameras without a neck is also investigated. The position of an object can be computed via the active motions. The presented model is less sensitive to the intrinsic parameters of cameras, which promises more flexibility in many applications such as visual tracking with changeable focusing. Experimental results verify the effectiveness of the proposed methods.
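Why centering the object in both images removes dependence on camera intrinsics can be shown with a short geometric sketch: once both optical axes point at the object, its position follows from the two pan angles and the baseline alone. Coordinates and angles below are assumptions for illustration:

```python
import numpy as np

def position_from_pan_angles(theta_l, theta_r, baseline):
    """Locate a point in the plane from the pan angles of two verged cameras.

    Cameras sit at (-b/2, 0) and (+b/2, 0); theta is each optical axis's
    angle from straight ahead (positive toward +x). Because the object is
    centered in both images, intrinsic parameters drop out entirely.
    """
    b = baseline
    # Optical-axis rays: x = -b/2 + z*tan(theta_l) and x = +b/2 + z*tan(theta_r)
    tl, tr = np.tan(theta_l), np.tan(theta_r)
    z = b / (tl - tr)                        # intersection of the two rays
    x = -b / 2 + z * tl
    return x, z

# Symmetric vergence on a point straight ahead of the midpoint:
x, z = position_from_pan_angles(np.arctan(0.5), np.arctan(-0.5), baseline=1.0)
```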

10.
In this paper, a new kind of human-computer interface allowing three-dimensional (3-D) visualization of multimedia objects and eye-controlled interaction is proposed. To explore the advantages and limitations of the concept, a prototype system has been set up. The testbed includes a visual operating system for integrating novel forms of interaction with a 3-D graphic user interface, autostereoscopic (free-viewing) 3-D displays closely adapted to the mechanisms of binocular vision, and solutions for nonintrusive eye-controlled interaction (video-based head and gaze tracking). The paper reviews the system's key components and outlines various applications implemented for user testing. Preliminary results show that most users are impressed by a 3-D graphic user interface and the possibility of communicating with a computer by simply looking at the object of interest. On the other hand, the results emphasize the need for a more intelligent interface agent to avoid misinterpretation of the user's eye-controlled input and to reset undesired activities.

11.
Moving-target tracking is an important research direction for mobile robots in unknown environments. This paper presents a design for moving-target tracking by a mobile robot based on active vision and ultrasonic information. An active vision system was built from a SONY EV-D31 color camera, a self-developed camera control module, and image acquisition and processing units. The mobile robot adopts a behavior-based distributed control architecture: active vision locks onto the moving target while the ultrasonic system senses the external environment, allowing the robot to track moving targets reliably in unknown, dynamic, unstructured, and complex environments. Experiments show that the robot is highly robust and that the moving-target tracking system runs reliably.

12.
In stereoscopic vision, the ability to perceive the three-dimensional structure of the surrounding environment depends on precise and effective motor control for the binocular coordination of the eyes/cameras. If, on the one hand, the binocular coordination of camera movements is a complicating factor, on the other hand a proper vergence control, acting on the binocular disparity, facilitates binocular fusion and the subsequent stereoscopic perception process. In real-world situations, an effective vergence control requires features beyond real-time capability: real robot systems are characterized by mechanical and geometrical imprecision that affects binocular vision, and illumination conditions are changeable and unpredictable. Moreover, to allow effective visual exploration of the peripersonal space, it is necessary to cope with different gaze directions and provide a large working space. The proposed control strategy resorts to a neuromimetic approach that provides a distributed representation of disparity information. The vergence posture is obtained by an open-loop and a closed-loop control, which directly interact with saccadic control. Before a saccade, the open-loop component is computed at the saccade target region to obtain a vergence correction applied simultaneously with the saccade. At fixation, the closed-loop component drives the binocular disparity to zero in a foveal region. The resulting vergence servos actively drive both the horizontal and the vertical alignment of the optical axes on the object of interest, thus ensuring a correct vergence posture. Experimental tests were purposely designed to measure the performance of the control in the peripersonal space, and were performed on three different robot platforms. The results demonstrate that the proposed approach yields real-time, effective vergence camera movements on a visual stimulus over a wide working range, regardless of the illumination in the environment and the geometry of the system.
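The closed-loop component described above can be sketched as a simple disparity-nulling feedback loop. The toy geometry below (disparity proportional to the vergence error, fixed gain) is an assumption for illustration, not the neuromimetic controller of the paper:

```python
import numpy as np

def vergence_step(vergence, disparity, gain=0.3):
    """Closed-loop vergence: nudge the vergence angle in proportion to the
    measured foveal disparity, driving the disparity to zero at fixation."""
    return vergence + gain * disparity

# Toy model: foveal disparity (rad) = required vergence - current vergence.
target_vergence = np.deg2rad(5.0)
v = 0.0
for _ in range(40):
    disparity = target_vergence - v          # simulated foveal disparity
    v = vergence_step(v, disparity)
```

With gain in (0, 1) the loop is a contraction, so the residual disparity shrinks geometrically each iteration; the paper's open-loop component plays the complementary role of a feed-forward jump applied with the saccade.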

13.
For a binocular head to perform optimal 3D tracking, it should be able to verge its cameras actively while maintaining geometric calibration. In this work we introduce a calibration update procedure that allows a robotic head to simultaneously fixate, track, and reconstruct a moving object in real time. The update method is based on a mapping from motor-based to image-based estimates of the camera orientations, estimated in an offline stage. Following this, a fast online procedure is presented to update the calibration of an active binocular camera pair. The proposed approach is ideal for active vision applications because no image processing is needed at runtime to calibrate the system or to maintain the calibration parameters during camera vergence. We show that this homography-based technique allows an active binocular robot to fixate and track an object while performing 3D reconstruction concurrently in real time.
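The homography at the core of such an update can be sketched from the standard result that a purely rotating camera maps image points between orientations by H = K R K⁻¹, so motor-reported rotations suffice to maintain calibration without runtime image processing. The intrinsic matrix and rotation below are assumed values for illustration:

```python
import numpy as np

def rotation_homography(K, R):
    """Homography induced by a pure camera rotation R: H = K R K^{-1}."""
    return K @ R @ np.linalg.inv(K)

def apply_h(H, pt):
    """Apply a homography to a pixel coordinate (homogeneous normalization)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
a = np.deg2rad(2.0)                          # small vergence rotation about y
R = np.array([[ np.cos(a), 0.0, np.sin(a)],
              [ 0.0,       1.0, 0.0      ],
              [-np.sin(a), 0.0, np.cos(a)]])
H = rotation_homography(K, R)
center = apply_h(H, (320.0, 240.0))          # principal point shifts with the pan
```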

14.
This article describes a three-dimensional artificial vision system for robotic applications using an ultrasonic sensor array. The array is placed on the robot gripper so that it is possible to detect the presence of an object, direct the robot tool towards it, and locate the object's position. The system provides visual information about the object's surface by means of superficial scanning and permits reconstruction of the object's shape. It uses an approximation of the ultrasonic radiation and reception beam shape to calculate the first contact points with the object's surface. In addition, the positions of the array's sensors have been selected to provide the sensorial head with other useful capabilities, such as edge detection and edge tracking. Furthermore, the article presents the structure of the sensorial head, designed to avoid successive rebounds between the head and the object surface and to eliminate mechanical vibrations among sensors.

15.
A biomimetic robot is prone to control errors during attitude positioning owing to spatial disturbance factors, so the robot must be calibrated precisely to improve its positioning control accuracy. A robust control algorithm for biomimetic robots based on binocular visual navigation is therefore proposed. An optical CCD binocular dynamic tracking system measures the robot's end-effector pose parameters, and a kinematic model of the controlled plant is established. Taking the six-degree-of-freedom parameters of the robot's rotational joints as control constraints, a hierarchical subspace motion-planning model is built, and binocular visual tracking adaptively corrects the robot's pose to achieve robust control. Simulation results show that with this method the fitting error of the end-effector pose parameters during attitude positioning is low and the dynamic tracking performance is good.

16.
In the real world, point-cloud data are acquired by LiDAR, binocular cameras, and depth cameras, but owing to sensor resolution, the surrounding environment, and other factors during robotic acquisition, the collected point clouds are usually incomplete. To address missing object shapes, a network architecture for automatic 3-D shape completion using local neighborhood information is proposed. The architecture comprises a point-cloud feature-extraction module and a point-cloud generation module: the input is the incomplete point-cloud shape, the output is the missing part, and merging the input and output completes the object's shape. Evaluated with the Chamfer distance and the geodesic distance on the ShapeNet dataset, the mean Chamfer distance and mean geodesic distance are 0.00084 and 0.028 respectively, both lower than those of a multilayer-perceptron feature-extraction network and the PCN network. Completion of point clouds scanned in the real world also met expectations, indicating that the network generalizes well and can repair objects of different categories.
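The Chamfer distance used for evaluation can be sketched directly: the mean nearest-neighbor distance between the two point sets, taken in both directions. Note that conventions vary (some works use squared distances or average the two terms); the sum-of-means form below is one common variant, assumed here for illustration:

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between point sets P (N,3) and Q (M,3):
    mean nearest-neighbor distance from P to Q plus from Q to P."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
Q = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.1]])
cd = chamfer_distance(P, Q)
```

The brute-force pairwise matrix is O(NM); real evaluations on ShapeNet-scale clouds would use a KD-tree nearest-neighbor query instead.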

17.
Vision-based 3-D trajectory tracking for unknown environments
This paper describes a vision-based system for 3-D localization of a mobile robot in a natural environment. The system includes a mountable head with three on-board charge-coupled device cameras that can be installed on the robot. The main emphasis of this paper is on the ability to estimate the motion of the robot independently from any prior scene knowledge, landmark, or extra sensory devices. Distinctive scene features are identified using a novel algorithm, and their 3-D locations are estimated with high accuracy by a stereo algorithm. Using new two-stage feature tracking and iterative motion estimation in a symbiotic manner, precise motion vectors are obtained. The 3-D positions of scene features and the robot are refined by a Kalman filtering approach with a complete error-propagation modeling scheme. Experimental results show that good tracking and localization can be achieved using the proposed vision system.
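The Kalman-filter refinement step can be sketched in miniature: a linear predict/update cycle that fuses noisy per-frame measurements of a feature coordinate into a smoothed estimate. The constant-velocity model, noise levels, and simulated drift below are assumptions for illustration, not the paper's error-propagation scheme:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    x = F @ x                                 # predict state
    P = F @ P @ F.T + Q                       # predict covariance
    y = z - H @ x                             # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Constant-velocity model for one coordinate of a scene feature.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)                          # small process noise
R = np.array([[0.5]])                         # measurement noise covariance

rng = np.random.default_rng(0)
x, P = np.zeros(2), np.eye(2)
truth = 0.0
for _ in range(100):
    truth += 0.1 * dt                         # feature drifts at 0.1 units/step
    z = np.array([truth + rng.normal(0.0, 0.5)])
    x, P = kalman_step(x, P, z, F, H, Q, R)
```

After the loop, the state estimate recovers both the drifting position and the drift rate from the noisy stream, which is the essence of refining feature and robot positions over time.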

18.
Visual navigation is a challenging issue in automated robot control. In many robot applications, such as object manipulation in hazardous environments or autonomous locomotion, it is necessary to automatically detect and avoid obstacles while planning a safe trajectory. In this context, detecting corridors of free space along the robot trajectory is a very important capability that requires nontrivial visual processing. In most cases it is possible to take advantage of the active control of the cameras. In this paper we propose a cooperative scheme in which motion and stereo vision are used to infer scene structure and determine free-space areas. Binocular disparity, computed on several stereo images over time, is combined with optical flow from the same sequence to obtain a relative-depth map of the scene. Both the time to impact and depth scaled by the distance of the camera from the fixation point in space are considered good relative measurements, which are based on the viewer but centered on the environment. The need for calibrated parameters is considerably reduced by using an active control strategy. The cameras track a point in space independently of the robot motion, and the full rotation of the head, which includes the unknown robot motion, is derived from binocular image data. The feasibility of the approach in real robotic applications is demonstrated by several experiments performed on real image data acquired from an autonomous vehicle and a prototype camera head.
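The time-to-impact measurement mentioned above can be sketched from the classic expansion relation: under pure approach, an image feature's distance r from the focus of expansion satisfies τ = r / (dr/dt), with no calibration required. The feature radii and flow values below are synthetic assumptions for the example:

```python
import numpy as np

def time_to_impact(radii, radii_dot):
    """Estimate time to impact from the radial expansion of image features:
    under pure approach, tau = r / (dr/dt), averaged over features."""
    return float(np.mean(np.asarray(radii) / np.asarray(radii_dot)))

# An object 2 s from impact: image radii grow as r(t) = r0 * T / (T - t),
# so dr/dt = r / (T - t), and r / rdot = T - t = 2 s at t = 0.
r = np.array([10.0, 20.0, 35.0])             # pixel distances from the focus of expansion
rdot = r / 2.0                               # measured radial flow (px/s)
tau = time_to_impact(r, rdot)
```

Because τ is a ratio of image measurements, it is viewer-based yet needs no focal length or baseline, which is exactly why the paper favors such relative quantities.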

19.
In this article we present the integration of 3-D shape knowledge into a variational model for level-set-based image segmentation and contour-based 3-D pose tracking. Given the surface model of an object that is visible in the image of one or multiple cameras calibrated to the same world coordinate system, the object contour extracted by the segmentation method is used to estimate the 3-D pose parameters of the object. Vice versa, the surface model projected to the image plane helps, in a top-down manner, to improve the extraction of the contour. While common alternative segmentation approaches that integrate 2-D shape knowledge face the problem that an object can look very different from various viewpoints, a 3-D free-form model ensures that the model can fit the image data well for each view. Moreover, the object's pose in 3-D space is determined at the same time. The performance is demonstrated by numerous experiments with a monocular and a stereo camera system.

20.
This paper presents a 3D contour reconstruction approach employing a wheeled mobile robot equipped with an active laser-vision system. Under observation from an onboard CCD camera, a laser line projector fixed below the camera is used to detect the bottom shape of an object, while an actively controlled upper laser line projector is utilized for 3D contour reconstruction. The mobile robot is driven around the object by a visual servoing and localization technique while the 3D contour of the object is reconstructed from the 2D image of the projected laser line. Asymptotic convergence of the closed-loop system has been established. The proposed algorithm has also been validated experimentally with a Dr Robot X80sv mobile robot upgraded with the low-cost active laser-vision system, demonstrating effective real-time performance. This laser-vision robotic system can be applied further in unknown environments for obstacle avoidance and guidance control tasks. Copyright © 2011 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society
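Reconstructing 3D points from a projected laser line reduces to plane-ray triangulation: the known laser plane constrains the camera ray through each bright pixel. The geometry below (focal length, principal row, baseline, laser angle) is a hypothetical setup for illustration, not the X80sv configuration:

```python
import numpy as np

def laser_depth(pixel_y, f, cy, baseline, laser_angle):
    """Depth of a laser-line point by camera-projector triangulation.

    The projector sits a known baseline below the camera, firing upward
    at laser_angle. Camera ray:  y/z = (pixel_y - cy) / f.
    Laser plane:                 y = -baseline + z * tan(laser_angle).
    Intersecting the two gives the depth z of the illuminated point.
    """
    ray = (pixel_y - cy) / f
    t = np.tan(laser_angle)
    return baseline / (t - ray)

# Hypothetical setup: f = 600 px, principal row cy = 240, projector 5 cm
# below the camera, aimed 10 degrees upward; line seen on the principal row.
z = laser_depth(pixel_y=240.0, f=600.0, cy=240.0,
                baseline=0.05, laser_angle=np.deg2rad(10.0))
```

Sweeping the actively controlled projector changes the laser plane, and repeating this intersection along the imaged line accumulates the object's 3D contour.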


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号