Similar Documents
 20 similar documents found (search time: 466 ms)
1.
Two relevant issues in vision-based navigation are the field-of-view constraints of conventional cameras and the model and structure dependency of standard approaches. A good solution to these problems is the use of the homography model with omnidirectional vision. However, a plane of the scene will cover only a small part of the omnidirectional image, missing relevant information across the wide field of view, which is the main advantage of omnidirectional sensors. This paper presents a new approach for computing multiple homographies from virtual planes using omnidirectional images and its application in an omnidirectional vision-based homing control scheme. The multiple homographies are robustly computed from a set of point matches across two omnidirectional views, using a method that relies on virtual planes independently of the structure of the scene. The method takes advantage of the planar motion constraint of the platform and computes virtual vertical planes from the scene. The family of homographies is also constrained to be embedded in a three-dimensional linear subspace to improve numerical consistency. Simulations and real experiments are provided to evaluate our approach.
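The virtual-plane construction in this entry is specific to the paper, but its core building block, estimating a homography from point matches across two views, is the standard direct linear transform (DLT). A minimal sketch (the function name and the use of plain perspective coordinates are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate a 3x3 homography H (dst ~ H @ src, in homogeneous
    coordinates) from >= 4 point correspondences via the DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right-singular vector of A with the
    # smallest singular value (the null space of A for exact data).
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalise so H[2, 2] = 1
```

With exact correspondences the ground-truth homography is recovered up to scale, which the final normalisation removes.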

2.
This paper presents an efficient metric for the computation of the similarity among omnidirectional images (image matching). The representation of image appearance is based on feature vectors that include both the chromatic attributes of color sets and their mutual spatial relationships. The proposed metric fits well to robotic navigation using omnidirectional vision sensors, because it has very important properties: it is reflexive, compositional and invariant with respect to image scaling and rotation. The robustness of the metric was repeatedly tested using omnidirectional images for a robot localization task in a real indoor environment.
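The full metric above cannot be reproduced from the abstract, but its rotation-invariance property is easy to illustrate with a simpler stand-in: a global colour-histogram intersection. Rotating an omnidirectional or panoramic image only permutes pixels, so the histogram, and therefore the similarity score, is unchanged (this sketch omits the spatial relationships the paper's metric also encodes):

```python
import numpy as np

def hist_similarity(img_a, img_b, bins=16):
    """Histogram-intersection similarity in [0, 1], averaged over
    colour channels. Reflexive (sim(a, a) == 1) and invariant to any
    pixel permutation, including image rotation."""
    sims = []
    for c in range(img_a.shape[2]):  # per colour channel
        ha, _ = np.histogram(img_a[..., c], bins=bins, range=(0, 256))
        hb, _ = np.histogram(img_b[..., c], bins=bins, range=(0, 256))
        ha = ha / ha.sum()           # normalise: scale invariance
        hb = hb / hb.sum()
        sims.append(np.minimum(ha, hb).sum())  # intersection
    return float(np.mean(sims))
```

For a cylindrical panorama, a rotation of the sensor is a horizontal roll of the image, so `hist_similarity(img, np.roll(img, k, axis=1))` equals 1 for any shift `k`.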

3.
An Implementation of Unwrapping Omnidirectional Images into Panoramic and Perspective Views   (Cited by 3: 0 self-citations, 3 others)
Omnidirectional images produced by an omnidirectional vision system capture 360° of environmental information in the horizontal direction, which makes them well suited to applications that require a large field of view. This paper describes the characteristics of omnidirectional images, the principle of their formation, and a concrete implementation for unwrapping them into panoramic and perspective views. It also describes an implementation of omnidirectional video unwrapping using DirectShow. Finally, experiments demonstrate the effectiveness of the panoramic and perspective view generation methods.
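Unwrapping an omnidirectional image into a panorama is at heart a polar-to-Cartesian change of coordinates: each panorama column samples one radial line of the annular source image. A minimal nearest-neighbour sketch (parameter names and the sampling scheme are assumptions; the paper's DirectShow pipeline is not reproduced):

```python
import numpy as np

def unwrap_panorama(omni, center, r_in, r_out, width, height):
    """Unwrap the annulus [r_in, r_out] around `center` of a
    catadioptric image into a (height x width) cylindrical panorama,
    sampling with nearest-neighbour interpolation."""
    cx, cy = center
    theta = np.linspace(0, 2 * np.pi, width, endpoint=False)
    radius = np.linspace(r_out, r_in, height)  # outer ring -> top row
    # Source pixel coordinates for every destination pixel.
    xs = (cx + radius[:, None] * np.cos(theta[None, :])).round().astype(int)
    ys = (cy + radius[:, None] * np.sin(theta[None, :])).round().astype(int)
    xs = np.clip(xs, 0, omni.shape[1] - 1)
    ys = np.clip(ys, 0, omni.shape[0] - 1)
    return omni[ys, xs]
```

Bilinear interpolation and a mirror-specific radius-to-elevation mapping would improve quality; the coordinate logic stays the same.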

4.
This paper proposes a method for three-dimensional reconstruction from a single catadioptric omnidirectional image of an outdoor scene, which automatically reconstructs a 3D model of the objects within the image's 360° field of view and enables free roaming. By matching the omnidirectional image against remote-sensing imagery, the image is segmented into three types of regions: horizontal ground, vertical building facades, and vertical background surfaces, which yields the basic structure of the scene. On this basis, the 3D position of every pixel in the omnidirectional image is computed from the catadioptric projection model, completing the reconstruction. Experiments show that the method offers simple data acquisition, a large field of view, a fully automatic pipeline, and the ability to reconstruct non-planar scenes.

5.
刘栋栋 《微型电脑应用》2012, 28(3): 43-45, 68-69
This paper designs a multi-camera surveillance network based on panoramic vision. The panoramic camera has a wide field of view and supports wide-area target detection and tracking, while the pan-tilt camera has an adjustable viewing direction and can capture high-resolution images of a target. By coordinating the panoramic and pan-tilt cameras through multi-sensor data fusion, a hierarchical tracking algorithm, and a multi-camera scheduling algorithm, the system detects and tracks multiple moving targets over a wide area and captures clear images of them. Experiments verify the effectiveness and soundness of the system.

6.
Described here is a visual navigation method for navigating a mobile robot along a man-made route such as a corridor or a street. We have proposed an image sensor, named HyperOmni Vision, with a hyperboloidal mirror for vision-based navigation of the mobile robot. This sensing system can acquire an omnidirectional view around the robot in real time. In the case of a man-made route, the road boundaries between the ground plane and the walls appear as a closed curve in the image. By making use of this optical characteristic, the robot can avoid obstacles and move along the corridor by tracking the closed curve with an active contour model. Experiments carried out in a real environment are described.

7.
The problem of catadioptric omnidirectional imaging defocus blur, which is caused by lens aperture and mirror curvature, becomes more severe when high-resolution sensors and large apertures are applied. In order to overcome this problem, a novel method based on coded aperture technique is proposed in this article. Firstly, the defocus blur of catadioptric omnidirectional imaging is analyzed to calculate the point spread function for different scene points. Then, a method of obtaining optimal focused plane is proposed. Lastly, based on the strategies of neighboring annuluses division and stitching of omnidirectional images, a deconvolution algorithm using omni-total variation prior is applied to obtain all-focused/sharp omnidirectional images. Experimental results demonstrate that the proposed method is effective for omnidirectional image defocus deblurring and can be applied to most existing catadioptric omnidirectional imaging systems.

8.
Catadioptric omnidirectional view sensors have found increasing adoption in various robotic and surveillance applications due to their 360° field of view. However, the inherent distortion caused by the sensors prevents their direct utilisation with existing image processing techniques developed for perspective images. Therefore, a correction process known as “unwrapping” is commonly performed. However, the unwrapping process incurs additional computational loads on central processing units. In this paper, a method to reduce this burden in the computation is investigated by exploiting the parallelism of graphical processing units (GPUs) based on the Compute Unified Device Architecture (CUDA). More specifically, we first introduce a general approach of parallelisation to the said process. Then, a series of adaptations to the CUDA platform is proposed to enable an optimised usage of the hardware platform. Finally, the performances of the unwrapping function were evaluated on a high-end and a low-end GPU to demonstrate the effectiveness of the parallelisation approach.

9.
Recently, many virtual reality and robotics applications have been called on to create virtual environments from real scenes. A catadioptric omnidirectional image sensor composed of a convex mirror can simultaneously observe a 360-degree field of view, making it useful for modeling man-made environments such as rooms, corridors, and buildings, because any landmarks around the sensor can be taken in and tracked in its large field of view. However, the angular resolution of the omnidirectional image is low because of the large field of view captured. Hence, the resolution of surface texture patterns on the three-dimensional (3-D) scene model generated is not sufficient for monitoring details. To overcome this, we propose a high resolution scene texture generation method that combines an omnidirectional image sequence using image mosaic and superresolution techniques.

10.
Extrinsic calibration of heterogeneous cameras by line images   (Cited by 1: 0 self-citations, 1 other)
The extrinsic calibration refers to determining the relative pose of cameras. Most of the approaches for cameras with non-overlapping fields of view (FOV) are based on mirror reflection, object tracking or the rigidity constraint of stereo systems, whereas cameras with overlapping FOV can be calibrated using structure-from-motion solutions. We propose an extrinsic calibration method within a structure-from-motion framework for cameras with overlapping FOV and its extension to cameras with partially non-overlapping FOV. Recently, omnidirectional vision has become a popular topic in computer vision, as an omnidirectional camera can cover a large FOV in one image. Combining the good resolution of perspective cameras and the wide observation angle of omnidirectional cameras has been an attractive trend in multi-camera systems. For this reason, we present an approach which is applicable to heterogeneous types of vision sensors. Moreover, this method utilizes images of lines, as these features possess several advantageous characteristics over point features, especially in urban environments. The calibration consists of a linear estimation of the orientation and position of the cameras and, optionally, bundle adjustment to refine the extrinsic parameters.

11.
The growing utilisation of omnidirectional view cameras in robotic applications is mainly owing to their wide 360° field of view. This paper addresses the issue of incorrect aspect ratio often found in cuboid panoramic unwrapped spherical omnidirectional view images. The proposed method consists of an efficient computational technique that utilises only three planar image points obtained from the omnidirectional view’s setting. The correction of the aspect ratio in turn improves the matching performance of Scale-Invariant Feature Transform keypoints in the unwrapped omnidirectional view images. Experimental results are presented towards the end of this paper as an empirical verification of our proposed method.

12.
Omnidirectional vision sensors capture a wide field of view that can benefit many robotic applications. One type of omnidirectional vision sensor is the paracatadioptric sensor, which combines a parabolic mirror and a camera inducing an orthographic projection. This combination provides a wide field of view while maintaining a single center of projection, a desirable property of these sensors. Furthermore, lines are projected as circles on the paracatadioptric image plane. In contrast with traditional perspective cameras, the image formation process of paracatadioptric sensors is no longer linear. However, in this paper we present a model which is able to linearize it. This linearization is based on the fact that the paracatadioptric projection can be represented by a sphere inversion, which belongs to the conformal group of R^n, isomorphic to the Lorentz group of R^(n+1). Thus a nonlinear conformal transformation can be represented by an equivalent linear Lorentz transformation, which can be expressed as a versor in the conformal geometric algebra (CGA). Therefore the present model can be applied algebraically, and in linear form, not only to points but also to point pairs, lines, and circles. The benefits of the proposed method will be reflected in the development of complex applications that use paracatadioptric sensors.
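The claim that lines project to circles on the paracatadioptric image plane can be checked numerically: a 3D line through the viewpoint region maps to a great circle on the viewing sphere, and the parabolic projection acts as a stereographic projection, which sends circles on the sphere to circles in the plane. A hedged sketch under a unit-sphere model (not the paper's CGA machinery; the circle fit is the simple algebraic Kåsa fit):

```python
import numpy as np

def stereographic(points):
    """Paracatadioptric image of unit-sphere points: stereographic
    projection from the pole (0, 0, 1) onto the plane z = 0."""
    x, y, z = points.T
    return np.column_stack([x / (1 - z), y / (1 - z)])

def fit_circle(pts):
    """Algebraic (Kasa) circle fit: solve 2ax + 2by + c = x^2 + y^2
    in least squares; returns the centre (a, b) and radius."""
    x, y = pts.T
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    return np.array([a, b]), np.sqrt(c + a**2 + b**2)
```

Sampling a great circle (unit vectors orthogonal to an arbitrary plane normal) and projecting it yields points whose distances to the fitted centre all equal the fitted radius, confirming the circle-image property.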

13.
Image Mosaicing on Cube Surfaces   (Cited by 3: 3 self-citations, 3 others)
This paper introduces a modeling and implementation approach for omnidirectional panoramas. It presents and implements a new set of coordinate transformations among ordinary images, the cube surface, and the plane of the scene, and proposes a method for stitching color images. Real images are stitched on the cube surface, stitching examples are given, and the work lays a foundation for generating omnidirectional panoramic images.
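The coordinate transformations between images and the cube surface hinge on mapping a viewing direction to a cube face plus in-face coordinates. A minimal sketch under one common convention (the paper's exact transform set is not reproduced; the face labels and coordinate signs here are assumptions):

```python
import numpy as np

def direction_to_cube_face(d):
    """Map a 3D viewing direction to (face, u, v): the face is named
    by the dominant axis and its sign, and (u, v) are the remaining
    two components divided by the dominant magnitude, i.e. central
    projection onto that face of a unit cube."""
    d = np.asarray(d, dtype=float)
    axis = int(np.argmax(np.abs(d)))
    sign = '+' if d[axis] > 0 else '-'
    face = sign + 'xyz'[axis]
    u, v = np.delete(d, axis) / abs(d[axis])
    return face, float(u), float(v)
```

The inverse map (face and (u, v) back to a direction) together with a panorama-to-direction lookup is what lets source images be resampled onto the six cube faces for stitching.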

14.
Mobile robotic devices hold great promise for a variety of applications in industry. A key step in the design of a mobile robot is to determine the navigation method for mobility control. The purpose of this paper is to describe a new algorithm for omnidirectional vision navigation. A prototype omnidirectional vision system and the implementation of the navigation techniques using this modern sensor and an advanced automatic image processor is described. The significance of this work is in the development of a new and novel approach—dynamic omnidirectional vision for mobile robots and autonomous guided vehicles.

15.
Active, optical range imaging sensors   (Cited by 7: 0 self-citations, 0 others)
Active, optical range imaging sensors collect three-dimensional coordinate data from object surfaces and can be useful in a wide variety of automation applications, including shape acquisition, bin picking, assembly, inspection, gaging, robot navigation, medical diagnosis, and cartography. They are unique imaging devices in that the image data points explicitly represent scene surface geometry in a sampled form. At least six different optical principles have been used to actively obtain range images: (1) radar, (2) triangulation, (3) moire, (4) holographic interferometry, (5) focusing, and (6) diffraction. In this survey, the relative capabilities of different sensors and sensing methods are evaluated using a figure of merit based on range accuracy, depth of field, and image acquisition time.
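Of the six principles listed, triangulation is the most common in robotics, and its geometry reduces to one formula: a projector and camera separated by a baseline b observe the same surface point, and the point's image offset d (with focal length f in pixels) gives range z = f·b/d. A toy sketch (names and units are illustrative, not from the survey):

```python
def triangulation_range(focal_px, baseline_m, offset_px):
    """Active triangulation: range is inversely proportional to the
    image offset of the projected spot (z = f * b / d)."""
    if offset_px <= 0:
        raise ValueError("offset must be positive for a finite range")
    return focal_px * baseline_m / offset_px

def range_resolution(focal_px, baseline_m, z, d_offset_px=1.0):
    """Depth change produced by a one-pixel offset change at range z:
    dz ~ z**2 / (f * b), which is why triangulation accuracy degrades
    quadratically with range (the 'figure of merit' trade-off)."""
    return z**2 / (focal_px * baseline_m) * d_offset_px
```

The second function makes the survey's trade-off concrete: doubling the range quadruples the depth uncertainty unless the baseline or focal length grows with it.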

16.
The structural features inherent in the visual motion field of a mobile robot contain useful clues about its navigation. The combination of these visual clues and additional inertial sensor information may allow reliable detection of the navigation direction for a mobile robot and also the independent motion that might be present in the 3D scene. The motion field, which is the 2D projection of the 3D scene variations induced by the camera‐robot system, is estimated through optical flow calculations. The singular points of the global optical flow field of omnidirectional image sequences indicate the translational direction of the robot as well as the deviation from its planned path. It is also possible to detect motion patterns of near obstacles or independently moving objects of the scene. In this paper, we introduce the analysis of the intrinsic features of the omnidirectional motion fields, in combination with gyroscopical information, and give some examples of this preliminary analysis. © 2004 Wiley Periodicals, Inc.
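The "singular point" of a purely translational flow field is the focus of expansion (FOE): every flow vector points radially away from it, so each vector contributes one linear constraint and the FOE follows from least squares. A hedged planar-image sketch (the paper works on omnidirectional fields and adds gyroscope data; only the collinearity constraint is shown here):

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Recover the FOE of a translational flow field by least squares.
    Each flow vector v at pixel p must be collinear with (p - foe),
    i.e. v_y*(p_x - foe_x) - v_x*(p_y - foe_y) = 0, one linear
    equation in (foe_x, foe_y) per measurement."""
    A = np.column_stack([flows[:, 1], -flows[:, 0]])
    b = flows[:, 1] * points[:, 0] - flows[:, 0] * points[:, 1]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe
```

Because the per-pixel flow magnitude depends on unknown scene depth, only the direction of each vector is used; the depth-dependent scale cancels out of the constraint.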

17.
This paper presents a new approach to search for a gas/odor source using an autonomous mobile robot. The robot is equipped with a CMOS camera, gas sensors, and airflow sensors. When no gas is present, the robot looks for a salient object in the camera image. The robot approaches any object found in the field of view, and checks it with the gas sensors to see if the object is releasing gas. On the other hand, if the robot detects the presence of gas while wandering around the area, it turns toward the direction of the wind that carries the gas. The robot then looks for any visible object in that direction. These navigation strategies are implemented into the robot under the framework of the behavior-based subsumption architecture. Experimental results on the search for a leaking bottle in an indoor environment are presented to demonstrate the validity of the navigation strategies.

18.
Safety is undoubtedly the most fundamental requirement for any aerial robotic application. It is essential to equip aerial robots with omnidirectional perception coverage to ensure safe navigation in complex environments. In this paper, we present a light‐weight and low‐cost omnidirectional perception system, which consists of two ultrawide field‐of‐view (FOV) fisheye cameras and a low‐cost inertial measurement unit (IMU). The goal of the system is to achieve spherical omnidirectional sensing coverage with the minimum sensor suite. The two fisheye cameras are mounted rigidly facing upward and downward directions and provide omnidirectional perception coverage: 360° FOV horizontally, 50° FOV vertically for stereo, and whole spherical for monocular. We present a novel optimization‐based dual‐fisheye visual‐inertial state estimator to provide highly accurate state‐estimation. Real‐time omnidirectional three‐dimensional (3D) mapping is combined with stereo‐based depth perception for the horizontal direction and monocular depth perception for upward and downward directions. The omnidirectional perception system is integrated with online trajectory planners to achieve closed‐loop, fully autonomous navigation. All computations are done onboard on a heterogeneous computing suite. Extensive experimental results are presented to validate individual modules as well as the overall system in both indoor and outdoor environments.

19.
Over the past decade or so, computer vision has attracted growing interest from researchers. Omnidirectional cameras in particular, thanks to their large field of view, have been widely applied in many areas, including video surveillance, robot navigation, video conferencing, scene reconstruction, and virtual reality. Camera calibration is an indispensable step in recovering three-dimensional information from two-dimensional images, and its quality directly determines the quality of 3D reconstruction and of other computer vision applications, so research on camera calibration methods has significant theoretical and practical value. This paper classifies catadioptric camera calibration methods published from 2000 to 2012 into five categories according to the calibration primitive used: line-based calibration, calibration with a 2D target, calibration with 3D points, sphere-based calibration, and self-calibration, and briefly analyzes the strengths and weaknesses of each.

20.
Development and Analysis of Visual Navigation Technologies   (Cited by 1: 0 self-citations, 1 other)
To provide a comprehensive picture of research and progress in visual navigation for aircraft applications, this paper analyzes its key technologies in depth. Three techniques are surveyed: extraction of vehicle attitude from sensor images, localization based on image-matching algorithms, and velocity measurement from image sequences. The discussion focuses on the relationship between attitude and reference features, the core algorithms of scene-matching localization, and the two main classes of velocity measurement methods, and concludes by summarizing the open difficulties and development trends of visual navigation.


Copyright © 北京勤云科技发展有限公司  京ICP备09084417号