Similar Documents
9 similar documents found
1.
This paper presents a method for estimating the positions and orientations of multiple robots from a set of azimuth angles of landmarks and of other robots, observed by multiple omnidirectional vision sensors. Our method simultaneously performs self-localization by each robot and reconstruction of the relative configuration between robots. Even when it is impossible to identify the correspondence between the indices of the observed azimuth angles and those of the robots, our method reconstructs not only the relative configuration between robots, using 'triangle and enumeration constraints', but also the absolute one, using knowledge of the landmarks in the environment. To show the validity of our method, we apply it to multiple mobile robots, each equipped with an omnidirectional vision sensor, in both simulation and a real environment. The experimental results show that our method is more precise and stable than self-localization by each robot alone, and that it can handle the combinatorial explosion problem. Correspondence to: T. Nakamura (e-mail: ntakayuk@sys.wakayama-u.ac.jp)
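
The paper's full triangle-and-enumeration method is not reproduced here, but its basic ingredient, bearing-only self-localization from azimuth angles to known landmarks, can be sketched. A minimal illustration, assuming a 2D pose (x, y, θ), three made-up landmark positions, and SciPy's least-squares solver:

```python
# Minimal sketch: bearing-only self-localization of one robot from
# azimuth angles to known landmarks (a simplified ingredient of the
# paper's method; landmark positions and noise level are made up).
import numpy as np
from scipy.optimize import least_squares

landmarks = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0]])  # known map

def predicted_azimuths(pose):
    """Azimuth of each landmark as seen from pose = (x, y, theta)."""
    x, y, theta = pose
    d = landmarks - np.array([x, y])
    return np.arctan2(d[:, 1], d[:, 0]) - theta

def residual(pose, observed):
    # Wrap angle differences into (-pi, pi] so the optimizer is not
    # confused by the 2*pi ambiguity of azimuth measurements.
    e = predicted_azimuths(pose) - observed
    return np.arctan2(np.sin(e), np.cos(e))

true_pose = np.array([2.0, 1.0, 0.3])
observed = predicted_azimuths(true_pose) + np.random.normal(0, 0.01, 3)
fit = least_squares(residual, x0=[1.0, 1.0, 0.0], args=(observed,))
print("estimated pose:", fit.x)
```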

2.
Described here is a method for estimating the rolling and swaying motions of a mobile robot using optical flow. We have previously proposed an image sensor with a hyperboloidal mirror, called HyperOmni Vision, for vision-based navigation of a mobile robot. The radial component of optical flow in HyperOmni Vision has a periodic characteristic, while the circumferential component has a symmetric characteristic. The proposed method exploits these characteristics to robustly estimate the rolling and swaying motion of the mobile robot. Correspondence to: Y. Yagi (e-mail: y-yagi@sys.es.osaka-u.ac.jp)
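
The radial/circumferential decomposition the method relies on is straightforward to sketch. A minimal illustration with a synthetic flow field standing in for flow computed from real HyperOmni Vision frames (image size, rotation rate, and image centre are made-up values):

```python
# Minimal sketch: decompose omnidirectional optical flow into radial and
# circumferential components about the image centre, the step that
# precedes exploiting their periodic/symmetric structure.
import numpy as np

h = w = 256
cy, cx = h / 2, w / 2
ys, xs = np.mgrid[0:h, 0:w]
dx, dy = xs - cx, ys - cy
r = np.hypot(dx, dy) + 1e-9

# Synthetic flow: pure image rotation (e.g. induced by robot rolling).
omega = 0.02  # rad/frame
u, v = -omega * dy, omega * dx

# Unit radial and circumferential directions at each pixel.
ur, vr = dx / r, dy / r          # radial
ut, vt = -dy / r, dx / r         # circumferential

radial = u * ur + v * vr         # ~0 for a pure rotation
circum = u * ut + v * vt         # ~omega * r everywhere

print("mean radial component:", radial.mean())
print("roll estimate (rad/frame):", (circum / r).mean())
```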

3.
View- and appearance-based approaches have recently been attracting the interest of computer vision researchers. We have previously proposed a view-based visual navigation method using a model of the route called the "view sequence," which contains a sequence of front views along a route memorized during a recording run. In this paper, we first apply an omnidirectional vision sensor to our view-based navigation method and propose an extended route model called the "omniview sequence." Second, we propose a map called the "view-sequenced map," which represents an entire corridor environment in a building, and describe a method for its automatic acquisition through corridor exploration using both stereo and omnidirectional vision. Finally, experimental results on autonomous navigation and map acquisition are presented to show the feasibility of the proposed methods. Correspondence to: Y. Matsumoto (e-mail: yoshio@is.aist-nara.ac.jp)
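
A toy sketch of the core view-sequence matching step, localizing the current view against a memorized sequence by sum-of-squared differences (the arrays below stand in for real recorded views; this is not the paper's full navigation controller):

```python
# Minimal sketch of view-sequence matching: the robot compares its
# current view against the memorized sequence and localizes at the
# best match. View size and noise level are made-up values.
import numpy as np

def best_view(current, view_sequence):
    """Index of the memorized view most similar to the current image (SSD)."""
    errors = [np.sum((current - v) ** 2) for v in view_sequence]
    return int(np.argmin(errors))

rng = np.random.default_rng(0)
views = [rng.random((48, 64)) for _ in range(10)]   # memorized omniviews
current = views[6] + rng.normal(0, 0.05, (48, 64))  # noisy re-observation
print("matched view index:", best_view(current, views))
```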

4.
Real-time three-dimensional tracking of people is an important requirement for a growing number of applications. In this paper we describe two trackers, both of which use a network of video cameras for person tracking: a rectilinear video array tracker (R-VAT) and an omnidirectional video array tracker (O-VAT), named for their two different modes of video capture. The objectives of this paper are twofold: (i) to present a systematic comparison of the two trackers through an extensive series of experiments conducted in an 'intelligent' room; and (ii) to develop a real-time system for tracking the head and face of a person as an extension of the O-VAT approach. The comparison indicates that O-VAT is more robust to the number of people, less complex, and faster, though it requires manual camera calibration, and that the integrated omnidirectional video network has better reconfigurability. The head and face tracker study shows that such a system can serve as a highly effective input stage for face recognition and facial expression analysis modules.

5.
A new circular scanning technique is proposed for the rapid acquisition of 2D objects. The object is scanned in a circle centered at an arbitrary boundary point. The resulting scan is matched against a set of prototypes obtained by scanning about closely spaced centers on the boundary of a model. Each successful match yields a candidate rotation and translation mapping the model onto the object. The technique requires only 1D boundary detection and thereby gains a speed advantage over many alternative techniques. Both pilot simulations and experiments on real image data are reported that support the utility of the technique.
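
The matching of a circular scan against rotated prototypes can be sketched as circular cross-correlation, where the best circular shift suggests the model-to-object rotation. A minimal illustration on a synthetic 360-sample scan signature (the signal itself is made up):

```python
# Minimal sketch: match a 1D circular scan against a model prototype by
# circular cross-correlation; the best shift suggests the rotation that
# maps the model onto the object.
import numpy as np

def match_scan(scan, prototype):
    """Return (best circular shift, normalized correlation score)."""
    s = (scan - scan.mean()) / scan.std()
    p = (prototype - prototype.mean()) / prototype.std()
    # Circular correlation via FFT: corr[k] = sum_i s[i] * p[i - k].
    corr = np.fft.ifft(np.fft.fft(s) * np.conj(np.fft.fft(p))).real
    k = int(np.argmax(corr))
    return k, corr[k] / len(s)

n = 360  # one sample per degree of the circular scan
prototype = np.sin(np.linspace(0, 4 * np.pi, n, endpoint=False))
scan = np.roll(prototype, 40) + np.random.normal(0, 0.05, n)
shift, score = match_scan(scan, prototype)
print(f"rotation ~ {shift} degrees, score {score:.2f}")
```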

6.
The recovery of 3D shape information (depth) using stereo vision analysis is one of the major areas of computer vision and has given rise to a great deal of literature in the recent past. The widely known stereo vision methods are passive approaches that use two cameras. Obtaining 3D information involves identifying corresponding 2D points between the left and right images. Most existing methods tackle this matching task from singular points, i.e., by finding points in both image planes with more or less the same neighborhood characteristics. A key difficulty is that we cannot know a priori whether a point in the first image has a correspondence at all, whether due to surface occlusion or simply because it projects outside the field of view of the second camera. This makes the matching process very difficult and necessitates an a posteriori stage to remove false matches.

In this paper we are concerned with active stereo vision systems, which offer an alternative to passive ones. In our system, one of the two cameras is replaced by a light projector that illuminates the objects to be analyzed with a pyramid-shaped laser beam. The projections of the laser rays onto the objects are detected as spots in the image. In this case only one image needs to be processed, and the stereo matching problem reduces to associating the laser rays with their corresponding spots in the 2D image. We express this problem as the minimization of a global function, which we propose to perform using genetic algorithms (GAs). We implement two variants: in the first, GAs run after a deterministic search; in the second, the data is partitioned into clusters and GAs are applied independently within each cluster. As a second contribution, we describe an efficient system calibration method. Experimental results are presented to illustrate the feasibility of our approach; the proposed method yields highly accurate 3D reconstruction even for complex objects. We conclude that GAs can be applied effectively to this matching problem.
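
As a rough illustration of the GA formulation only (not the paper's cost function, calibration, or clustering scheme), the sketch below runs a mutation-only genetic algorithm that assigns laser rays to detected spots by minimizing a global matching cost; the predicted landing points and noise level are invented:

```python
# Toy mutation-only GA for the ray-to-spot assignment problem: each
# chromosome is a permutation perm, where perm[i] is the spot assigned
# to laser ray i, and fitness is the total assignment distance.
import numpy as np

rng = np.random.default_rng(1)
n = 12
predicted = rng.random((n, 2))              # where each ray should land
true_perm = rng.permutation(n)
spots = predicted[true_perm] + rng.normal(0, 0.005, (n, 2))

def cost(perm):
    """Total distance of the ray-to-spot assignment encoded by perm."""
    return np.linalg.norm(predicted - spots[perm], axis=1).sum()

def mutate(perm):
    child = perm.copy()
    i, j = rng.choice(len(child), 2, replace=False)
    child[i], child[j] = child[j], child[i]  # swap two assignments
    return child

pop = [rng.permutation(n) for _ in range(60)]
for _ in range(300):
    pop.sort(key=cost)
    elite = pop[:20]                         # truncation selection
    pop = elite + [mutate(elite[rng.integers(20)]) for _ in range(40)]

best = min(pop, key=cost)
print("best assignment cost:", cost(best))
```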

7.
A stereo-vision multi-object tracking method for intelligent vehicles based on an improved joint probabilistic data association filter (JPDAF) is proposed. A stereo camera captures images and video of vehicles and pedestrians; sensor uncertainty is modeled on a Lie group, and a Euclidean-group algorithm performs state filtering on the preprocessed images. Binocular vision removes false detections in candidate vehicle regions and yields vehicle positions; a Kalman filter validates measurement uncertainty and predicted target trajectories; and the improved JPDAF refines the vehicle and pedestrian tracking results. Experimental results show that the proposed method effectively solves the multi-object tracking problem for intelligent vehicles and substantially improves the automation and intelligence of the driving system. Compared with other recent tracking methods, it offers clear advantages in tracking accuracy and speed, exhibits no noticeable drift when tracking vehicles, and does not lose track of pedestrians.
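
The Kalman filtering step of this pipeline can be sketched in isolation. A minimal constant-velocity filter over a simulated position measurement stream (state layout, noise covariances, and target speed are made-up values; the JPDA data-association logic is omitted):

```python
# Minimal sketch: constant-velocity Kalman filter of the kind used to
# validate measurements before data association. State = (x, y, vx, vy).
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])        # position-only observations
Q, R = np.eye(4) * 1e-3, np.eye(2) * 0.05

x, P = np.zeros(4), np.eye(4)
rng = np.random.default_rng(2)
for k in range(50):
    # Predict.
    x, P = F @ x, F @ P @ F.T + Q
    # Simulated position measurement of a target moving at (1, 0.5) m/s.
    z = np.array([1.0 * k * dt, 0.5 * k * dt]) + rng.normal(0, 0.2, 2)
    # Update (innovation gating would go here in a JPDA tracker).
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
print("final state estimate:", x.round(2))
```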

8.
To solve the problem of estimating the velocities of gas bubbles in molten glass, a method based on the optical flow constraint (OFC) has been extended to the 3D case. A single camera, whose distance to the fluid varies in time, captures a sequence of frames at different depths. Since the objects are not static, two frames at different heights cannot be obtained at the same time, which, to our knowledge, prevents the use of common 3D motion estimation techniques. Because the information is rather sparse, our estimator takes several measurements around a given pixel and discards the erroneous ones using a robust estimator. Along with the exposition of the practical application, the proposed estimator is first contrasted in the 2D case against common benchmarks and then evaluated on a synthetic problem where the velocities are known. Received: 9 July 2001 / Accepted: 5 August 2002 / Published online: 3 June 2003. This work has been supported by Saint Gobain Cristaleria S.A., under contract FUO-EM-034-01 with Oviedo University, Spain.
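
A rough sketch of a robustified OFC estimate of the kind described: fit the flow (u, v) to the constraint Ix·u + Iy·v + It = 0 over a neighborhood by least squares, discard large residuals, and re-fit. The gradients below are synthetic stand-ins for values computed from the frame sequence, and the rejection rule is a simple median threshold rather than the paper's robust estimator:

```python
# Minimal sketch: optical-flow-constraint (OFC) fit with crude outlier
# rejection. Each row of G holds the spatial gradients (Ix, Iy) at one
# sample; It holds the temporal gradients.
import numpy as np

rng = np.random.default_rng(3)
true_uv = np.array([0.8, -0.3])
n = 50
G = rng.normal(size=(n, 2))                    # (Ix, Iy) per sample
It = -G @ true_uv + rng.normal(0, 0.01, n)     # OFC: Ix*u + Iy*v + It = 0
It[:5] += rng.normal(0, 2.0, 5)                # a few gross outliers

def ofc_fit(G, It):
    """Least-squares flow estimate from G @ uv = -It."""
    uv, *_ = np.linalg.lstsq(G, -It, rcond=None)
    return uv

uv = ofc_fit(G, It)
res = np.abs(G @ uv + It)
keep = res < 3 * np.median(res)                # discard erroneous measures
uv = ofc_fit(G[keep], It[keep])
print("estimated flow:", uv.round(3), "true:", true_uv)
```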

9.