Similar Literature
1.
Localisation and mapping with an omnidirectional camera becomes more difficult as landmark appearances change dramatically across the omnidirectional image, and conventional techniques struggle to match landmark features against a fixed template. We present a novel robot simultaneous localisation and mapping (SLAM) algorithm with an omnidirectional camera, which uses incremental landmark appearance learning to provide the posterior probability distribution for estimating the robot pose under a particle filtering framework. The major contribution of our work is to represent the posterior estimate of the robot pose by incremental probabilistic principal component analysis, which can be naturally incorporated into the particle filtering algorithm for robot SLAM. Moreover, the method allows the severely distorted landmark appearances viewed with an omnidirectional camera to be used for robot SLAM. The experimental results demonstrate that the localisation error is less than 1 cm in an indoor environment using five landmarks, and that the locations of landmark appearances can be estimated to within 5 pixels of the ground truth in the omnidirectional image at a fairly fast speed.
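A minimal sketch of how an appearance likelihood can weight pose hypotheses in a particle filter, in the spirit of the abstract. The PPCA reconstruction-error likelihood stands in for the paper's incremental PPCA model, and `extract_patch` is a hypothetical helper; this is an illustration, not the authors' implementation.

```python
import numpy as np

def ppca_likelihood(patch, mean, W, sigma2):
    # Reconstruction-error likelihood of a patch under a (P)PCA appearance
    # model; W has orthonormal rows spanning the principal subspace.
    x = patch.ravel() - mean
    z = W @ x                      # coefficients in the subspace
    err = x - W.T @ z              # residual orthogonal to the subspace
    return np.exp(-0.5 * float(err @ err) / sigma2)

def pf_update(particles, weights, extract_patch, mean, W, sigma2):
    """One measurement update; extract_patch(pose) is assumed to crop the
    landmark appearance predicted by `pose` from the omnidirectional image."""
    for i, pose in enumerate(particles):
        weights[i] *= ppca_likelihood(extract_patch(pose), mean, W, sigma2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = np.random.choice(len(particles), len(particles), p=weights)
        particles[:] = particles[idx]
        weights[:] = 1.0 / len(particles)
```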

2.
We describe a pipeline for structure-from-motion (SfM) with mixed camera types, namely omnidirectional and perspective cameras. For the steps of this pipeline, we propose new approaches or adapt existing perspective-camera methods to make the pipeline effective and automatic. We model cameras of both types with the sphere camera model. To match feature points, we describe a preprocessing algorithm which significantly increases scale-invariant feature transform (SIFT) matching performance for hybrid image pairs; with this approach, automatic point matching between omnidirectional and perspective images is achieved. We robustly estimate the hybrid fundamental matrix from the obtained point correspondences. We introduce normalization matrices for lifted coordinates so that normalization and denormalization can be performed linearly for omnidirectional images. We evaluate alternatives for estimating camera poses in hybrid pairs. A weighting strategy is proposed for iterative linear triangulation, which improves structure estimation accuracy. After adding multiple perspective and omnidirectional images to the structure, we perform sparse bundle adjustment on the estimated structure, adapting it to the sphere camera model. Demonstrations of the end-to-end multi-view SfM pipeline on real images of mixed camera types are presented.
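A sketch of two building blocks mentioned above: back-projecting a perspective pixel to a unit ray (the sphere camera model makes both camera types produce such rays) and weighted linear triangulation of a point from several rays. The per-view weights are placeholders; the paper proposes its own weighting strategy.

```python
import numpy as np

def ray_perspective(u, v, K):
    # Back-project a perspective pixel to a unit ray on the sphere.
    r = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return r / np.linalg.norm(r)

def triangulate_weighted(rays, centers, weights):
    """Weighted linear triangulation: each view constrains X to the line
    C + t*r, i.e. [r]_x X = [r]_x C; stack and solve by least squares."""
    A, b = [], []
    for r, C, w in zip(rays, centers, weights):
        S = np.array([[0.0, -r[2], r[1]],
                      [r[2], 0.0, -r[0]],
                      [-r[1], r[0], 0.0]])   # skew matrix, S @ v = r x v
        A.append(w * S)
        b.append(w * (S @ C))
    X, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
    return X
```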

3.
In robot teleoperation, a robot works as a physical agent at a remote site on behalf of an operator. Teleoperation from camera images involves two main tasks: recognizing the environment from visual information, and controlling the robot according to that recognition. In this paper, we propose a gaze-direction-based vehicle teleoperation method with omnidirectional image stabilization and automatic body rotation control. The proposed method handles these two tasks, which are usually treated separately, in a unified manner. The result is an intuitive teleoperation method in which the operator need not be concerned with the vehicle body orientation, and which absorbs differences among vehicle driving mechanisms. That is, the method frees the operator from the burden of steering the vehicle, so that he/she can concentrate on where to go. The method rests on two technologies: omnidirectional image stabilization and automatic body rotation control. The conducted experiments show the effectiveness of the proposed method.
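A simplified sketch of the two ingredients, assuming an equirectangular panorama and a yaw-only body rotation; the paper's actual stabilization and control laws are richer than this stand-in.

```python
import numpy as np

def derotate_panorama(pano, yaw_rad):
    """Shift an equirectangular panorama horizontally so the displayed view
    stays world-fixed while the vehicle body rotates."""
    h, w = pano.shape[:2]
    shift = int(round(yaw_rad / (2 * np.pi) * w))   # yaw -> column offset
    return np.roll(pano, -shift, axis=1)

def body_rotation_command(gaze_azimuth, body_yaw, gain=1.5):
    # Rotate the body toward the operator's gaze; wrap the error to [-pi, pi].
    err = (gaze_azimuth - body_yaw + np.pi) % (2 * np.pi) - np.pi
    return gain * err   # yaw-rate command
```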

4.
In this paper, we present a pipeline for camera pose and trajectory estimation, and for image stabilization and rectification, applicable to dense as well as wide-baseline omnidirectional images. The proposed pipeline transforms a set of images taken by a single hand-held camera into a set of stabilized and rectified images augmented by the computed 3D camera trajectory and a reconstruction of feature points that facilitates visual object recognition. The paper generalizes previous work on camera trajectory estimation from perspective images to omnidirectional images, and introduces a new technique for omnidirectional image rectification suited to recognizing people and cars. The performance of the pipeline is demonstrated on real image sequences acquired in urban as well as natural environments.

5.
A robust topological navigation strategy for an omnidirectional mobile robot using an omnidirectional camera is described. The navigation system consists of off-line and on-line stages. During the off-line learning stage, the robot executes paths based on a motion model of its omnidirectional drive structure and records a set of ordered key images from the omnidirectional camera. From this sequence a topological map is built using a probabilistic technique and a loop-closure detection algorithm, which together handle the perceptual aliasing problem in the mapping process. Each topological node holds a set of omnidirectional images characterized by geometrically affine- and scale-invariant keypoints computed with a GPU implementation. Given a topological node as a target, the robot's navigation mission is a concatenation of topological node subsets. In the on-line navigation stage, the robot hierarchically localizes itself to the most likely node through a robust probability-distribution-based global localization algorithm, and estimates its relative pose within the node using an effective solution to the classical five-point relative pose estimation problem. The robot is then steered by a vision-based control law adapted to omnidirectional cameras to follow the visual path. Experimental results with a real robot in an indoor environment show the performance of the proposed method.
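The global localization step can be pictured as a discrete Bayes filter over map nodes. A minimal sketch follows, with an image-similarity score standing in for the paper's keypoint-based likelihood; the transition matrix would come from the topological map's adjacency.

```python
import numpy as np

def localize_node(belief, T, similarities):
    """One step of a discrete Bayes filter over topological nodes.
    belief: prior P(node); T[i, j]: probability of moving from node i to j;
    similarities: matching scores of the current view against each node."""
    predicted = T.T @ belief                # motion / transition update
    posterior = predicted * similarities    # measurement update
    return posterior / posterior.sum()

# The most likely node is np.argmax(posterior); the relative pose inside
# that node would then follow from five-point relative pose estimation.
```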

6.
In this study, we propose a high-density three-dimensional (3D) tunnel measurement method that estimates camera pose changes with a point set registration algorithm operating on 2D and 3D point clouds. High-density 3D measurements are necessary at tunnel construction sites to detect small deformations and defects. The line-structured light method uses an omnidirectional laser to measure a high-density cross-section point cloud from camera images. To estimate camera pose changes in tunnels, which have few textures and few distinctive shapes, cooperative robots are useful because they estimate pose by aggregating relative poses from the other robots. However, previous studies mounted several sensors for both 3D measurement and pose estimation, increasing the size of the measurement system; moreover, the lack of 3D features makes it difficult to match point clouds obtained from different robots. The proposed measurement system consists of a cross-section measurement unit and a pose estimation unit, with one camera per unit. To estimate the relative pose between the two cameras, we designed a 2D–3D registration algorithm for the omnidirectional laser light, and implemented both hand-truck and unmanned aerial vehicle (UAV) systems. In the measurement of a tunnel 8.8 m wide and 6.4 m high, the point cloud errors of the hand-truck and UAV systems were 162.8 mm and 575.3 mm over 27 m, respectively. In a hallway measurement, the proposed method produced smaller errors on straight sections with few distinctive shapes than a 3D point set registration algorithm using Light Detection and Ranging (LiDAR).
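As background for the registration step, here is the classic least-squares rigid alignment (Kabsch/SVD) between corresponding point sets; it is a standard building block of point set registration, while the paper's 2D–3D variant additionally projects the 3D cross-section model through the camera.

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid fit Q ~ R @ p + t for corresponding point sets
    (rows are points, 2D or 3D), via the SVD/Kabsch solution."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(H.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```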

7.
The single viewpoint constraint is a principal optical characteristic of most catadioptric omnidirectional vision systems. A single-viewpoint catadioptric system is very useful because it allows geometrically correct perspective images to be generated from one omnidirectional image; precise calibration of the single viewpoint constraint is therefore needed during system assembly. However, most image-detection-based calibration methods neglect the nonlinear optical distortion introduced by the lens, so their calibration precision is poor. In this paper, a new calibration method for the single viewpoint constraint of catadioptric omnidirectional vision is proposed. First, an image correction algorithm is obtained by training a neural network. Then, exploiting the properties of the perspective projection of a space circle, the corrected image of the mirror boundary is used to estimate the mirror's position and attitude relative to the camera to guide the calibration. Since the estimate is based on the actual imaging model rather than a simplified one, the estimation error is greatly reduced and the calibration accuracy significantly improved. Experiments on simulated and real images show the accuracy and effectiveness of the proposed method.
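One step of such a boundary-based procedure is fitting the imaged mirror rim. A sketch of an algebraic least-squares circle fit is shown below, applied to boundary points after distortion correction; the pose estimate from the fitted boundary is the paper's contribution and is not reproduced here.

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic LS circle fit x^2 + y^2 + D*x + E*y + F = 0 to detected
    mirror-boundary points; returns center and radius."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = -(xs ** 2 + ys ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r
```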

8.
刘栋栋 《微型电脑应用》2012,28(3):43-45,68,69
A multi-camera surveillance network based on omnidirectional vision is designed. The omnidirectional camera has a wide field of view and supports detection and tracking of targets over a large area, while the pan-tilt camera's adjustable viewing direction lets it capture high-resolution images of a target. By coordinating the omnidirectional and pan-tilt cameras, and combining multi-sensor data fusion, a hierarchical tracking algorithm, and a multi-camera scheduling algorithm, the system detects and tracks multiple moving targets over a wide area and captures clear images of them. Experiments verify the effectiveness and soundness of the system.
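A sketch of the camera handoff at the heart of such a system: converting a target's pixel position in the wide-view image into pan/tilt angles. It assumes an equirectangular panorama and roughly co-located, axis-aligned cameras; a real system needs an extrinsic calibration between the two.

```python
import numpy as np

def panorama_to_pantilt(u, v, width, height):
    """Map a target detected at pixel (u, v) of an equirectangular panorama
    to pan/tilt angles for the PTZ camera."""
    pan = (u / width) * 2 * np.pi - np.pi        # azimuth in [-pi, pi)
    tilt = np.pi / 2 - (v / height) * np.pi      # elevation in [-pi/2, pi/2]
    return pan, tilt
```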

9.
To overcome the inflexibility of road-monitoring systems based on fixed cameras, a real-time vehicle speed detection algorithm using a pan-tilt camera is proposed. A simplified camera parameter model is established, lane-image feature parameters are extracted after linear fitting, and the Kluge curve model with a randomized Hough transform is used to achieve 2D reconstruction of the lane dividing lines in the image plane and calibration of the pan-tilt camera. Adaptive background subtraction and extended Kalman filtering are applied to extract the motion region of each frame and the target contours within it, achieving accurate vehicle localization and tracking, and hence real-time speed detection. The algorithm has been used in engineering practice and shows good robustness.
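Once the camera is calibrated against the road plane, speed follows from projecting tracked image positions to metric ground coordinates. A minimal sketch, assuming a homography H from image to road-plane coordinates (which the calibration above would supply):

```python
import numpy as np

def ground_point(H, u, v):
    # Map an image point to metric road-plane coordinates via homography H.
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

def speed_kmh(H, p_img_prev, p_img_curr, dt):
    """Speed from two tracked image positions of a vehicle (e.g. the
    Kalman-filtered bottom center of its contour), assuming a planar road."""
    d = np.linalg.norm(ground_point(H, *p_img_curr) -
                       ground_point(H, *p_img_prev))
    return d / dt * 3.6   # m/s -> km/h
```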

10.
To stabilize shaky video captured by hand-held mobile devices, a video stabilization algorithm based on feature tracking and mesh-path motion is proposed. Feature points are extracted from video frames with SIFT, tracked with the KLT algorithm, and used with RANSAC to estimate the affine transformation between adjacent frames. Each frame is divided into a uniform mesh and the motion trajectories of the video are computed; multiple mesh paths are then smoothed by minimizing an energy function. Finally, from the relation between the original and the smoothed camera paths, a compensation matrix is computed for each pair of adjacent frames and applied to warp each frame, yielding a stabilized video. Experiments show good results on shaky hand-held video: the average PSNR of the stabilized video is about 11.2 dB higher than that of the original video, and about 2.3 dB higher than the bundled-camera-paths method; the average structural similarity (SSIM) between images improves by about 59%, roughly 3.3% better than the bundled-camera-paths method.
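The path-smoothing step can be illustrated on a single path component. A sketch minimizing a standard quadratic energy (data term plus neighborhood smoothness) by fixed-point iteration is given below; the paper's energy over multiple coupled mesh paths is more elaborate, so treat this as a schematic stand-in.

```python
import numpy as np

def smooth_path(C, lam=20.0, radius=15, iters=20):
    """Smooth one camera-path signal C[t] by minimizing
    sum_t (P_t - C_t)^2 + lam * sum_{|r-t|<=radius} (P_t - P_r)^2
    with Jacobi-style fixed-point updates. The per-frame compensation is
    then derived from the smoothed vs. original path."""
    n = len(C)
    P = C.copy()
    for _ in range(iters):
        Pn = np.empty_like(P)
        for t in range(n):
            lo, hi = max(0, t - radius), min(n, t + radius + 1)
            neigh = np.concatenate([P[lo:t], P[t + 1:hi]])
            Pn[t] = (C[t] + lam * neigh.sum()) / (1.0 + lam * len(neigh))
        P = Pn
    return P
```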

11.
The current work addresses the problem of 3D model tracking in the context of monocular and stereo omnidirectional vision, in order to estimate the camera pose. To this end, we track 3D objects modeled by line segments, because the straight-line feature is often used to model the environment; indeed, we are interested in mobile robot navigation using omnidirectional vision in structured environments. In omnidirectional vision, 3D straight lines project to conics in omnidirectional images, and under certain conditions these conics have singularities. This paper presents two contributions. First, we propose a new spherical formulation of pose estimation that removes these singularities, using an object model composed of lines. The theoretical formulation and validation on synthetic images show that the new formulation clearly outperforms the former image-plane one. The second contribution is the extension of the spherical representation to the stereovision case; we consider a sensor which combines a camera and four mirrors. Results in various situations show robustness to illumination changes and to local mistracking. As a final result, the proposed stereo spherical formulation allows us to localize a robot online, both indoors and outdoors, where the classical formulation fails.
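The geometric intuition behind the spherical formulation: a 3D line and the camera center span a plane, so on the unit sphere the line projects to a great circle, with no conic singularities. A minimal sketch of that representation and a natural tracking residual:

```python
import numpy as np

def line_normal(P1, P2):
    """Unit normal of the plane spanned by a 3D segment and the camera
    center (taken as the origin); on the sphere the segment projects to
    the great circle with this normal."""
    n = np.cross(P1, P2)
    return n / np.linalg.norm(n)

def spherical_line_error(ray, n):
    # Signed angular distance of an observed unit direction to the model
    # line's great circle -- usable as a tracking residual.
    return np.arcsin(np.clip(ray @ n, -1.0, 1.0))
```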

12.
In this study, we present a calibration technique that is valid for all single-viewpoint catadioptric cameras. We represent the projection of 3D points onto a catadioptric image linearly with a 6×10 projection matrix, which uses lifted coordinates for both image and 3D points. This projection matrix can be computed from 3D–2D correspondences (a minimum of 20 points distributed over three different planes), and we show how to decompose it to obtain the intrinsic and extrinsic parameters. Moreover, we use this parameter estimation followed by a non-linear optimization to calibrate various types of cameras. Our results are based on the sphere camera model, which considers that every central catadioptric system can be modeled by two projections: one from 3D points to a unit sphere, followed by a perspective projection from the sphere to the image plane. We test our method with both simulations and real images, and we analyze the results by performing a 3D reconstruction from two omnidirectional images.
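A sketch of the lifted coordinates and a DLT-style estimate of the 6×10 matrix: the 6- and 10-vectors collect the degree-2 monomials of the homogeneous image and 3D points, and each correspondence yields linear proportionality constraints. Data normalization, which the related literature shows is important for lifted coordinates, is omitted here for brevity.

```python
import numpy as np
from itertools import combinations

def lift2(x, y, w=1.0):
    # 6-vector of degree-2 monomials of a homogeneous image point.
    return np.array([x*x, x*y, y*y, x*w, y*w, w*w])

def lift3(X, Y, Z, W=1.0):
    # 10-vector of degree-2 monomials of a homogeneous 3D point.
    return np.array([X*X, X*Y, X*Z, X*W, Y*Y, Y*Z, Y*W, Z*Z, Z*W, W*W])

def estimate_P(img_pts, world_pts):
    """DLT for the 6x10 lifted projection matrix: proportionality of
    lift2(x) and P @ lift3(X) gives, per correspondence and index pair
    (i, j), the linear constraint a_i (P_j . Xh) - a_j (P_i . Xh) = 0.
    Needs >= 20 well-distributed points."""
    rows = []
    for (x, y), (X, Y, Z) in zip(img_pts, world_pts):
        a, Xh = lift2(x, y), lift3(X, Y, Z)
        for i, j in combinations(range(6), 2):
            r = np.zeros((6, 10))
            r[j] = a[i] * Xh
            r[i] -= a[j] * Xh
            rows.append(r.ravel())
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(6, 10)   # null-space solution, up to scale
```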

13.
We address the problem of depth and ego-motion estimation from omnidirectional images. We formulate a correspondence-free structure-from-motion problem for sequences of images mapped onto the 2-sphere. A novel graph-based variational framework is first proposed for depth estimation between pairs of images; the estimation is cast as a TV-L1 optimization problem that is solved by a fast graph-based algorithm. The ego-motion is then estimated directly from the depth information, without explicit computation of the optical flow. Finally, both problems are addressed together in an iterative algorithm that alternates between depth and ego-motion estimation for fast computation of 3D information from motion in image sequences. Experimental results demonstrate the effective performance of the proposed algorithm for 3D reconstruction from synthetic and natural omnidirectional images.
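To make the TV-L1 objective concrete, here is a generic primal-dual (Chambolle-Pock) solver for the planar analogue min_u |grad u|_1 + lam*|u - f|_1. The paper works on the 2-sphere and solves the problem with a graph-based algorithm, so this is only an illustration of the model.

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary.
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    # Negative adjoint of grad (backward differences).
    dx, dy = np.zeros_like(px), np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]
    dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]
    dy[-1, :] = -py[-2, :]
    return dx + dy

def tv_l1(f, lam=1.0, iters=200):
    u, ubar = f.copy(), f.copy()
    px, py = np.zeros_like(f), np.zeros_like(f)
    tau = sigma = 1.0 / np.sqrt(8.0)       # step sizes with ||grad||^2 <= 8
    for _ in range(iters):
        gx, gy = grad(ubar)
        px += sigma * gx; py += sigma * gy
        mag = np.maximum(1.0, np.sqrt(px**2 + py**2))
        px /= mag; py /= mag               # project dual onto unit ball
        u_old = u
        v = u + tau * div(px, py)
        u = np.where(v - f > tau * lam, v - tau * lam,
             np.where(v - f < -tau * lam, v + tau * lam, f))  # L1 data prox
        ubar = 2 * u - u_old
    return u
```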

14.
《Advanced Robotics》2013,27(6-7):731-747
This paper describes an outdoor positioning system for vehicles in urban canyons, using an omnidirectional infrared (IR) camera and a digital surface model (DSM). By means of omnidirectional IR images, the system achieves robust positioning in urban areas where satellite invisibility caused by buildings hampers high-precision GPS measurement. The omnidirectional IR camera captures IR images covering elevations of 20–70° over the full 360° surround, and these images are highly robust to light disturbances in the outdoor environment. In the IR band the sky appears distinctively dark, which makes it easy to detect the border between the sky and the buildings (rendered white) owing to the difference in atmospheric transmittance between visible light and IR rays. The omnidirectional image, which includes several building profiles, is compared with building-restoration images produced from the corresponding DSM in order to determine the vehicle's position. Field experiments in an urban area show that the proposed outdoor positioning method is valid and effective even when high-rise buildings cause satellite blockage that degrades GPS measurements.
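A sketch of the skyline-matching idea: extract the sky/building border per azimuth column from the IR image, then score it against DSM-rendered skylines at candidate positions. The threshold and the rendering of candidates are assumptions standing in for the paper's processing.

```python
import numpy as np

def skyline(ir_pano, sky_thresh):
    """Per-column skyline row from an omnidirectional IR image: the sky is
    distinctively dark, so the first bright pixel from the top marks the
    building border (threshold is scene-dependent)."""
    bright = ir_pano > sky_thresh          # buildings appear white in IR
    return np.argmax(bright, axis=0)       # first bright row per column

def best_position(observed, candidates):
    # Compare the observed skyline with DSM-rendered skylines at candidate
    # positions; pick the best by sum of absolute differences over azimuth.
    costs = [np.abs(observed - c).sum() for c in candidates]
    return int(np.argmin(costs))
```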

15.
In this paper, an image fusion algorithm is proposed for a multi-aperture camera. Such a camera is a feasible alternative to a traditional Bayer-filter camera in terms of image quality, camera size, and camera features. The camera consists of several camera units, each with dedicated optics and its own color filter. The main challenge of a multi-aperture camera arises from the fact that each camera unit has a slightly different viewpoint. Our image fusion algorithm corrects the parallax error between the sub-images using a disparity map estimated from the single-spectral images. We improve the disparity estimation by combining matching costs over multiple views using trifocal tensors. Images are matched using two alternative matching costs, mutual information and the Census transform, and we compare two disparity estimation methods, graph cuts and semi-global matching. The results show that the overall quality of the fused images is close to that of the reference images.
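The Census transform matching cost named above is standard; a minimal sketch follows. Its radiometric invariance is what makes it suitable here, since each aperture sees the scene through a different color filter.

```python
import numpy as np

def census3(img):
    """3x3 Census transform: an 8-bit signature per pixel encoding which
    neighbours are darker than the center pixel."""
    h, w = img.shape
    c = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    offs = [(-1,-1), (-1,0), (-1,1), (0,-1), (0,1), (1,-1), (1,0), (1,1)]
    for bit, (dy, dx) in enumerate(offs):
        c |= ((img[1+dy:h-1+dy, 1+dx:w-1+dx] < center).astype(np.uint8) << bit)
    return c

def hamming_cost(c1, c2):
    # Matching cost between two Census images: per-pixel Hamming distance.
    x = np.bitwise_xor(c1, c2)
    return np.unpackbits(x[..., None], axis=-1).sum(-1)
```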

16.
《Real》1996,2(5):271-284
This paper describes a method of stabilizing image sequences obtained by a camera carried by a ground vehicle. The motion of the vehicle can usually be regarded as a desired smooth motion combined with an undesired non-smooth motion that includes impulsive or high-frequency components. The goal of the stabilization process is to correct the images so that they are approximately the same as the images that would have been obtained had the motion of the vehicle been smooth. We analyse the smooth and non-smooth motions of a ground vehicle and show that only the rotational components of the non-smooth motion have significant perturbing effects on the images. We show how to identify image points at which rotational image flow is dominant, and how to use such points to estimate the vehicle's rotation. Finally, we describe an algorithm that fits smooth (ideally, piecewise constant) rotational motions to these estimates; the residual rotational motion can then be used to correct the images. We have obtained good results for several image sequences acquired from a camera carried by a ground vehicle moving across bumpy terrain.
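A schematic sketch of the last step: split per-frame rotation estimates into a smooth intended component and a jitter residual used for correction. A running median is used here as a cheap approximation of the piecewise-constant fit the paper describes.

```python
import numpy as np

def smooth_rotation(theta, win=15):
    """theta: per-frame rotation estimates (e.g. from points where
    rotational flow dominates). Returns (smooth component, residual);
    each frame is corrected by derotating with the residual."""
    pad = win // 2
    padded = np.pad(theta, pad, mode='edge')
    smooth = np.array([np.median(padded[i:i + win])
                       for i in range(len(theta))])
    return smooth, theta - smooth
```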

17.
One method of detecting obstacles from a vehicle moving on a planar road surface is the analysis of motion-compensated difference images. In this contribution, a motion compensation algorithm is presented which computes the required image-warping parameters from an estimate of the relative motion between the camera and the ground plane. The proposed algorithm estimates the warping parameters from displacements at image corners and image edges, exploiting the estimated confidence of the displacements to cope robustly with outliers. Knowledge of the camera calibration, measurements from odometry, and the previous estimate are used for motion prediction and to stabilize the estimation process when not enough information is available in the measured image displacements. The motion compensation algorithm has been integrated with modules for obstacle detection and lane tracking; this system has been deployed in experimental vehicles and runs in real time with an overall cycle of 12.5 Hz on low-cost standard hardware.
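The core of motion-compensated differencing is easy to sketch: warp the previous frame by the estimated ground-plane transform and difference against the current frame, so the planar road cancels while anything rising above the plane remains. This assumes the warp is a homography H (as the planar-surface model implies) and uses OpenCV for the warp.

```python
import cv2
import numpy as np

def obstacle_mask(prev, curr, H, thresh=25):
    """prev, curr: grayscale frames; H: estimated image-warping homography
    for the ground plane. Returns a boolean mask of likely obstacles."""
    h, w = curr.shape[:2]
    warped = cv2.warpPerspective(prev, H, (w, h))   # compensate ego-motion
    diff = cv2.absdiff(curr, warped)                # road cancels out
    return diff > thresh
```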

18.
Scanning Depth of Route Panorama Based on Stationary Blur
This work achieves efficient acquisition of scenes and their depths along long streets. A camera is mounted on a vehicle moving along a straight or mildly curved path, and a sampling line properly set in the camera frame continuously scans 1D images of the scene to form a 2D route panorama. This paper proposes a method to estimate depth from the camera path by analyzing a phenomenon called stationary blur in the route panorama. This temporal blur is a perspective effect in the parallel projection, arising from the physical width of the sampling slit. We analyze the behavior of the stationary blur with respect to scene depth, vehicle path, and camera properties. Based on this, we develop an adaptive filter to evaluate the degree of blur for depth estimation, which avoids error-prone feature matching or tracking when capturing complex street scenes and facilitates real-time sensing. The method also uses much less data than structure-from-motion approaches, so it can extend the sensing area significantly. The resulting route panorama with depth information is useful for urban visualization, monitoring, navigation, and modeling.
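A crude sketch of measuring the degree of blur, the quantity the paper's adaptive filter evaluates: horizontally blurred regions of the panorama have weak horizontal gradients, so an inverse mean-gradient serves as a blur proxy. The mapping from blur extent to depth depends on the paper's model of the slit and vehicle path, and is omitted here.

```python
import numpy as np

def stationary_blur_extent(pano, col, half=8):
    """Per-column blur proxy around column `col` of a grayscale route
    panorama: larger value = more stationary blur."""
    band = pano[:, max(0, col - half): col + half + 1].astype(float)
    g = np.abs(np.diff(band, axis=1)).mean()   # mean horizontal gradient
    return 1.0 / (g + 1e-6)
```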

19.
20.
Operation of an autonomous vehicle along a marked path in an obstacle-laden environment requires path detection, relative position detection and control, and obstacle detection and avoidance. The design solution of the team from the U.S. Military Academy is a tracked vehicle operating open-loop in response to position information from an omnidirectional mirror and to obstacle-detection input from the mirror and from a scanning laser. The use of a tracked rather than a wheeled vehicle is the team's open-loop solution to the problem of wheeled-vehicle slippage on wet and sandy surfaces. The vehicle responds to sensor information from (1) a parabolic omnidirectional mirror viewed by a digital camera for visual input and (2) a scanning laser for detecting obstacles in relief. Raw sensor data are converted synchronously into a global virtual context which places the vehicle's center at the origin of a 2D Cartesian coordinate system. A four-phase process converts the camera's input into the data structures needed to reason about the vehicle's position relative to the course. The path plan is developed incrementally, using a space-sweeping algorithm to identify safe paths along waypoints within the course boundaries (see the sketch below); translation errors are reduced by favoring paths with fewer sharp turns. Integration of Intel's OpenCV computer vision library and the Independent JPEG Group's JPEG library provides good encapsulation of the low-level functions needed for most of the image processing. Ada95 is the language of choice for the majority of the team-developed software, except where needed to interface to motors and sensors. Use of an object-oriented high-level language has been invaluable in leveraging previous years' development efforts and in maximizing the ability to log or otherwise respond to anomalous behavior. © 2004 Wiley Periodicals, Inc.  相似文献
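A schematic sketch of a space-sweeping waypoint search in the vehicle-centered frame described above: advance in distance bands, pick a lateral position clear of obstacles in each band, and penalize lateral jumps so the resulting path has fewer sharp turns. All parameters and the scoring are illustrative assumptions, not the team's algorithm.

```python
import numpy as np

def sweep_waypoints(obstacles, y_max, band=0.5, x_limits=(-4.0, 4.0),
                    turn_penalty=0.5):
    """obstacles: Nx2 array of (x, y) points in the vehicle frame (origin =
    vehicle center, y = forward). Returns a list of (x, y) waypoints."""
    xs = np.linspace(*x_limits, 81)
    waypoints, prev_x = [], 0.0
    for y0 in np.arange(band, y_max, band):
        in_band = obstacles[np.abs(obstacles[:, 1] - y0) < band]
        clearance = (np.abs(xs[:, None] - in_band[:, 0][None, :]).min(1)
                     if len(in_band) else np.full_like(xs, np.inf))
        # Reward clearance (capped), penalize lateral deviation (sharp turns).
        score = np.minimum(clearance, 3.0) - turn_penalty * np.abs(xs - prev_x)
        prev_x = xs[np.argmax(score)]
        waypoints.append((prev_x, y0))
    return waypoints
```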
