Similar documents (20 results)
1.
One method to detect obstacles from a vehicle moving on a planar road surface is the analysis of motion-compensated difference images. In this contribution, a motion compensation algorithm is presented, which computes the required image-warping parameters from an estimate of the relative motion between camera and ground plane. The proposed algorithm estimates the warping parameters from displacements at image corners and image edges. It exploits the estimated confidence of the displacements to cope robustly with outliers. Knowledge about camera calibration, measurements from odometry, and the previous estimate are used for motion prediction and to stabilize the estimation process when there is not enough information available in the measured image displacements. The motion compensation algorithm has been integrated with modules for obstacle detection and lane tracking. This system has been integrated in experimental vehicles and runs in real time with an overall cycle of 12.5 Hz on low-cost standard hardware. Received: 23 April 1998 / Accepted: 25 August 1999
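To make the detection step concrete, here is a minimal Python/OpenCV sketch (an illustration, not the authors' implementation): it assumes grayscale frames and that the estimated warping parameters have already been collapsed into a ground-plane homography H; the corner/edge-based estimation and confidence weighting described above are not reproduced.

```python
import cv2
import numpy as np

def obstacle_mask(prev_frame, cur_frame, H, diff_thresh=25):
    """Warp the previous frame with the ground-plane homography H and
    difference it against the current frame; residual differences mark
    regions that do not move like the road plane (obstacle candidates)."""
    h, w = cur_frame.shape[:2]
    warped = cv2.warpPerspective(prev_frame, H, (w, h))
    diff = cv2.absdiff(cur_frame, warped)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    # suppress isolated noise pixels before handing off to obstacle detection
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    return mask
```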

2.
A modified version of the CDWT optical flow algorithm developed by Magarey and Kingsbury is applied to the problem of moving-target detection in noisy infrared image sequences, in the case where the sensor is also moving. Frame differencing is used to detect pixel-size targets moving in strongly cluttered backgrounds. To compensate for sensor motion, prior to differencing, the background is registered spatially using the estimated motion field between the frames. Results of applying the method to three image sequences show that the target SNR is higher when the estimated motion field for the whole scene is explicitly regularized. A comparison with another optical flow algorithm is also presented.
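The compensate-then-difference scheme can be sketched as follows, substituting OpenCV's Farnebäck dense flow for the CDWT flow of the paper (illustrative only; the flow parameters are assumed values):

```python
import cv2
import numpy as np

def registered_difference(prev_gray, cur_gray):
    """Register the background of the previous frame onto the current one
    using a dense motion field, then difference the frames so that small
    moving targets stand out. Flow is computed from current to previous so
    that remap() pulls each previous-frame pixel to its current position."""
    flow = cv2.calcOpticalFlowFarneback(cur_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = cur_gray.shape
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (gx + flow[..., 0]).astype(np.float32)
    map_y = (gy + flow[..., 1]).astype(np.float32)
    registered = cv2.remap(prev_gray, map_x, map_y, cv2.INTER_LINEAR)
    return cv2.absdiff(cur_gray, registered)
```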

3.
The image sequence in a video taken by a moving camera may suffer from irregular perturbations because of irregularities in the motion of the person or vehicle carrying the camera. We show how to use information in the image sequence to correct the effects of these irregularities so that the sequence is smoothed, i.e., is approximately the same as the sequence that would have been obtained if the motion of the camera had been smooth. Our method is based on the fact that the irregular motion is almost entirely rotational, and that the rotational image motion can be detected and corrected if a distant object, such as the horizon, is visible. Received: 14 February 2001 / Accepted: 11 February 2002 Correspondence to: A. Rosenfeld
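A simplified sketch of rotational stabilization in this spirit, assuming the distant scenery occupies the top band of the frame (the paper detects a distant object such as the horizon explicitly); the band fraction and feature-tracking parameters are illustrative:

```python
import cv2
import numpy as np

def stabilizing_warp(prev_gray, cur_gray, horizon_band=0.3):
    """Estimate the inter-frame image motion from distant scenery only
    (top band of the frame, assumed near the horizon), where image motion
    is dominated by camera rotation, and undo it on the current frame."""
    h, w = prev_gray.shape
    band = int(h * horizon_band)
    pts = cv2.goodFeaturesToTrack(prev_gray[:band], maxCorners=200,
                                  qualityLevel=0.01, minDistance=8)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray[:band],
                                              cur_gray[:band], pts, None)
    good = status.ravel() == 1
    # rigid (rotation + translation) model for the rotational jitter
    M, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good],
                                       method=cv2.RANSAC)
    # invert the estimated motion to undo the jitter
    M_inv = cv2.invertAffineTransform(M)
    return cv2.warpAffine(cur_gray, M_inv, (w, h))
```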

4.
This paper proposes an effective approach to detecting and segmenting moving objects from two time-consecutive stereo frames, which leverages the uncertainties in camera motion estimation and in disparity computation. First, the relative camera motion and its uncertainty are computed by tracking and matching sparse features across the four images. Then, the motion likelihood at each pixel is estimated, taking into account the uncertainties of both the ego-motion and the disparity computation. Finally, the motion likelihood, color, and depth cues are combined in a graph-cut framework for moving object segmentation. The performance of the proposed method is evaluated on the KITTI benchmark datasets, and our experiments show that the proposed approach is robust against both global (camera motion) and local (optical flow) noise. Moreover, the approach is dense, as it applies to all pixels in an image, and even partially occluded moving objects can be detected successfully. Without a dedicated tracking strategy, our approach achieves high recall and comparable precision on the KITTI benchmark sequences.
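A deliberately simplified sketch of the motion-likelihood idea, collapsing the propagated ego-motion and disparity covariances into scalar, isotropic per-pixel uncertainties (the paper propagates full uncertainties through the estimation chain):

```python
import numpy as np

def motion_likelihood(flow_residual, sigma_ego, sigma_disp):
    """Per-pixel likelihood that a pixel moves independently of the camera.
    flow_residual: (H, W, 2) measured flow minus the flow predicted from
    the estimated ego-motion and disparity. The residual is normalized by
    the combined uncertainty (assumed scalar and isotropic here) so that a
    large residual relative to its uncertainty indicates independent motion."""
    var = sigma_ego ** 2 + sigma_disp ** 2
    mahalanobis_sq = (flow_residual ** 2).sum(axis=-1) / var
    return 1.0 - np.exp(-0.5 * mahalanobis_sq)
```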

5.
A geometric method is proposed that establishes the camera pose from four coplanar points and their imaging relations, under the radial constraint of lens distortion. Random sample consensus (RANSAC) and the consistency constraint on the camera intrinsic parameters across multiple viewpoints are applied to improve the stability and accuracy of the computation. It is pointed out that checking the correctness of a camera pose using only the forward- and back-projection relations of the camera is insufficient, and the motion transformation between viewpoints is proposed as an important criterion for evaluating the accuracy of the corresponding camera poses.
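For illustration, the pose-from-four-coplanar-points step maps naturally onto a standard PnP solver; the coordinates, intrinsics, and distortion coefficients below are hypothetical, and the paper's RANSAC and multi-view intrinsic-consistency checks are not reproduced:

```python
import cv2
import numpy as np

# Four coplanar reference points (on the z = 0 plane) with hypothetical
# coordinates, e.g. the corners of a calibration square of unit side.
object_pts = np.array([[0., 0., 0.], [1., 0., 0.],
                       [1., 1., 0.], [0., 1., 0.]])
image_pts = np.array([[320., 240.], [420., 238.],
                      [424., 338.], [318., 342.]])  # assumed projections
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])                # assumed intrinsics
dist = np.array([-0.10, 0.02, 0., 0., 0.])  # radial distortion k1, k2

# Pose of the camera w.r.t. the plane from the four point correspondences.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the recovered pose
```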

6.
This study investigates the problem of estimating camera calibration parameters from image motion fields induced by a rigidly moving camera with unknown parameters, where the image formation is modeled with a linear pinhole-camera model. The equations obtained show the flow to be separated into a component due to the translation and the calibration parameters and a component due to the rotation and the calibration parameters. A set of parameters encoding the latter component is linearly related to the flow, and from these parameters the calibration can be determined.

However, as for discrete motion, in general it is not possible to decouple image measurements obtained from only two frames into translational and rotational components. Geometrically, the ambiguity takes the form of a part of the rotational component being parallel to the translational component, and thus the scene can be reconstructed only up to a projective transformation. In general, for full calibration at least four successive image frames are necessary, with the 3D rotation changing between the measurements.

The geometric analysis gives rise to a direct self-calibration method that avoids computation of optical flow or point correspondences and uses only normal flow measurements. New constraints on the smoothness of the surfaces in view are formulated to relate structure and motion directly to image derivatives, and on the basis of these constraints the transformation of the viewing geometry between consecutive images is estimated. The calibration parameters are then estimated from the rotational components of several flow fields. As the proposed technique neither requires a special setup nor needs exact correspondences, it is potentially useful for the calibration of active vision systems which have to acquire knowledge about their intrinsic parameters while they perform other tasks, or as a tool for analyzing image sequences in large video databases.
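The rotation-dependent flow component mentioned above has a closed form in the pinhole model; the sketch below evaluates it under one common sign convention (for normalized image coordinates, set f = 1). Unlike the translational component, it does not depend on scene depth, which is what makes it usable for calibration:

```python
import numpy as np

def rotational_flow(x, y, omega, f):
    """Rotational component of the image motion field at pixel (x, y)
    (coordinates relative to the principal point) for angular velocity
    omega = (wx, wy, wz) and focal length f, using the classic
    pinhole-camera flow equations (one common sign convention)."""
    wx, wy, wz = omega
    u = (x * y / f) * wx - (f + x * x / f) * wy + y * wz
    v = (f + y * y / f) * wx - (x * y / f) * wy - x * wz
    return u, v
```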

7.
In this paper, we present a method called MODEEP (Motion-based Object DEtection and Estimation of Pose) to detect independently moving objects (IMOs) in forward-looking infrared (FLIR) image sequences taken from an airborne, moving platform. Ego-motion effects are removed through a robust multi-scale affine image registration process. Thereafter, areas with residual motion indicate potential object activity. These areas are detected, refined and selected using a Bayesian classifier. The resulting regions are clustered into pairs such that each pair represents one object's front and rear end. Using motion and scene knowledge, we estimate object pose and establish a region of interest (ROI) for each pair. Edge elements within each ROI are used to segment the convex cover containing the IMO. We show detailed results on real, complex, cluttered and noisy sequences. Moreover, we outline the integration of our fast and robust system into a comprehensive automatic target recognition (ATR) and action classification system.
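The ego-motion removal step can be sketched with a global affine registration; the sketch below uses OpenCV's single-scale ECC alignment in place of the paper's robust multi-scale registration (requires OpenCV >= 4.1; convergence parameters are assumed values):

```python
import cv2
import numpy as np

def residual_motion(prev_gray, cur_gray):
    """Register the previous frame to the current one with a global affine
    model (ECC criterion, single scale), then difference: residual motion
    marks candidate independently moving objects."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-5)
    _, warp = cv2.findTransformECC(cur_gray, prev_gray, warp,
                                   cv2.MOTION_AFFINE, criteria, None, 5)
    h, w = cur_gray.shape
    registered = cv2.warpAffine(prev_gray, warp, (w, h),
                                flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
    return cv2.absdiff(cur_gray, registered)
```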

8.
In this study, a new framework of vision-based estimation is developed using data fusion schemes to obtain previewed road curvatures and vehicular motion states based on the scene viewed from an in-vehicle camera. The previewed curvatures are necessary for the guidance of an automatically steering vehicle, and the desired vehicular motion variables, including lateral deviation, heading angle, yaw rate, and sideslip angle, are also required for proper control of the vehicular lateral motion via steering. In this framework, physical relationships of previewed curvatures among consecutive images, motion variables in terms of image features searched at various levels in the image plane, and dynamic correlation among vehicular motion variables are derived as bases of data fusion to enhance the accuracy of estimation. The vision-based measurement errors are analyzed to determine the fusion gains based on the technique of a Kalman filter, such that the measurements from the image plane and the predictions of physical models can be properly integrated to obtain reliable estimates. Off-line experiments using real road scenes are performed to verify the whole image-sensing framework.
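A minimal linear Kalman filter in this spirit is sketched below; the state layout, measurement model, and noise levels are assumptions for illustration, whereas the paper derives the fusion gains from an analysis of the vision measurement errors:

```python
import numpy as np

# Fuse image-based measurements of lateral deviation and heading angle
# with a constant-velocity prediction model (all parameters assumed).
dt = 0.05                      # frame period [s]
F = np.array([[1, dt, 0, 0],   # state: [lateral dev., lateral rate,
              [0, 1, 0, 0],    #         heading angle, yaw rate]
              [0, 0, 1, dt],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1., 0, 0, 0],   # vision measures deviation and heading
              [0, 0, 1., 0]])
Q = np.eye(4) * 1e-3           # process noise (assumed)
R = np.diag([0.05, 0.01])      # vision measurement noise (assumed)

def kalman_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q          # predict with the motion model
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # fusion gain
    x = x + K @ (z - H @ x)                # correct with image measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P
```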

9.
This paper addresses the problem of recovering both the intrinsic and extrinsic parameters of a camera from the silhouettes of an object in a turntable sequence. Previous silhouette-based approaches have exploited correspondences induced by epipolar tangents to estimate the image invariants under turntable motion and achieved a weak calibration of the cameras. It is known that the fundamental matrix relating any two views in a turntable sequence can be expressed explicitly in terms of the image invariants, the rotation angle, and a fixed scalar. It will be shown that the imaged circular points for the turntable plane can also be formulated in terms of the same image invariants and fixed scalar. This allows the imaged circular points to be recovered directly from the estimated image invariants, and provides constraints for the estimation of the imaged absolute conic. The camera calibration matrix can thus be recovered. A robust method for estimating the fixed scalar from image triplets is introduced, and a method for recovering the rotation angles using the estimated imaged circular points and epipoles is presented. Using the estimated camera intrinsics and extrinsics, a Euclidean reconstruction can be obtained. Experimental results on real data sequences are presented, which demonstrate the high precision achieved by the proposed method.

10.
We present a novel approach to tracking the position and orientation of a stereo camera using line features in the images. The method combines the strengths of trifocal tensors and Bayesian filtering. The trifocal tensor provides a geometric constraint that locks line features across every three frames. It eliminates explicit reconstruction of the scene even when the 3-D scene structure is not known. This trifocal constraint thus makes the algorithm fast and robust. The twist motion model is applied to further improve computational efficiency. Another major contribution is that our approach can obtain the 3-D camera motion from as few as 2 line correspondences, instead of the 13 required by traditional approaches. This makes the approach attractive for realistic applications. The performance of the proposed method has been evaluated using both synthetic and real data with encouraging results. Our algorithm estimates 3-D camera motion accurately in real scenarios, with little drift over image sequences longer than 1,000 frames.

11.
A terrain and landform reconstruction method based on UAV image sequences
Taking image sequences captured by an ordinary camera on a UAV platform as input, an automated processing method for 3D reconstruction of the terrain is proposed. First, a key-frame selection method based on parallax analysis is introduced, and feature points in the key frames are robustly extracted and matched. Second, a weighted RANSAC algorithm estimates the fundamental matrix while extracting the set of correctly matched inliers; given the calibrated camera intrinsic parameters, the relative motion is solved and refined. Finally, a method fusing geometric and homography constraints is proposed for fast and accurate matching of the target points to be reconstructed, and the 3D shape of the target is recovered by triangulation. Simulation results show that the algorithm processes image sequences with a good degree of automation and robustness.
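The fundamental-matrix and triangulation steps correspond to standard two-view geometry routines; below is a sketch with plain RANSAC standing in for the paper's weighted variant (the input arrays, threshold, and confidence values are assumptions):

```python
import cv2
import numpy as np

def relative_motion(pts1, pts2, K):
    """Estimate the epipolar geometry between two key frames with RANSAC,
    keep the inlier matches, recover the relative camera motion, and
    triangulate the inliers. pts1, pts2: (N, 2) float matched feature
    coordinates; K: calibrated intrinsic matrix."""
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC,
                                            ransacReprojThreshold=1.0,
                                            confidence=0.999)
    inl = inlier_mask.ravel() == 1
    E = K.T @ F @ K                        # essential matrix from F and K
    _, R, t, _ = cv2.recoverPose(E, pts1[inl], pts2[inl], K)
    # triangulate the inlier points to obtain the 3D structure
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
    return R, t, (X[:3] / X[3]).T          # Euclidean 3D points
```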

12.
Epipolar geometry from profiles under circular motion
Addresses the problem of motion estimation from profiles (apparent contours) of an object rotating on a turntable in front of a single camera. A practical and accurate technique for solving this problem from profiles alone is developed. It is precise enough to reconstruct the shape of the object. No correspondences between points or lines are necessary. Symmetry of the surface of revolution swept out by the rotating object is exploited to obtain the image of the rotation axis and the homography relating epipolar lines in two views robustly and elegantly. These, together with geometric constraints for images of rotating objects, are used to obtain first the image of the horizon, which is the projection of the plane that contains the camera centers, and then the epipoles, thus fully determining the epipolar geometry of the image sequence. The estimation of this geometry by this sequential approach avoids many of the problems found in other algorithms. The search for the epipoles, by far the most critical step, is carried out as a simple 1D optimization. Parameter initialization is trivial and completely automatic at all stages. After the estimation of the epipolar geometry, the Euclidean motion is recovered using the fixed intrinsic parameters of the camera, obtained either from a calibration grid or from self-calibration techniques. Finally, the spinning object is reconstructed from its profiles using the motion estimated in the previous stage. Results from real data are presented, demonstrating the efficiency and usefulness of the proposed methods.

13.
Structure from motion with wide circular field of view cameras
This paper presents a method for fully automatic and robust estimation of two-view geometry, autocalibration, and 3D metric reconstruction from point correspondences in images taken by cameras with a wide circular field of view. We focus on cameras which have more than a 180° field of view and for which the standard perspective camera model is not sufficient, e.g., cameras equipped with circular fish-eye lenses such as the Nikon FC-E8 (183°) or Sigma 8mm-f4-EX (180°), or with curved conical mirrors. We assume a circular field of view and axially symmetric image projection to autocalibrate the cameras. Many wide field of view cameras can still be modeled by a central projection followed by a nonlinear image mapping. Examples are the above-mentioned fish-eye lenses and properly assembled catadioptric cameras with conical mirrors. We show that the epipolar geometry of these cameras can be estimated from a small number of correspondences by solving a polynomial eigenvalue problem. This allows the use of efficient RANSAC robust estimation to find the image projection model, the epipolar geometry, and the selection of true point correspondences from tentative correspondences contaminated by mismatches. Real catadioptric cameras are often slightly noncentral. We show that the proposed autocalibration with approximate central models is usually good enough to get correct point correspondences, which can then be used with accurate noncentral models in a bundle adjustment to obtain an accurate 3D scene reconstruction. Noncentral camera models are dealt with, and results are shown for catadioptric cameras with parabolic and spherical mirrors.

14.
To obtain a wide-field-of-view representation of a scene, a video image mosaicking algorithm based on block matching is proposed. The algorithm first estimates the motion vector field between video frames using phase-correlation-based block matching and removes the outlier motion vectors caused by image noise or occlusion by moving objects. Corresponding point pairs between image blocks are then determined from the motion vector field, and from these pairs the parameters of the inter-image transformation model are solved iteratively to mosaic the video images automatically. Experiments on video sequences of real scenes yield good mosaicking results, demonstrating the effectiveness of the algorithm.
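The block-wise phase-correlation step can be sketched directly with OpenCV; the block size and the confidence gate below are assumed values, and the outlier rejection is reduced to a simple response threshold:

```python
import cv2
import numpy as np

def block_motion_field(prev_gray, cur_gray, block=64):
    """Estimate one translation per block via phase correlation, as a
    coarse motion vector field for mosaicking. Blocks with a weak
    correlation response are rejected as outliers (noise or moving
    objects). Returns a list of (x, y, dx, dy) vectors."""
    h, w = prev_gray.shape
    vectors = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            a = np.float32(prev_gray[y:y + block, x:x + block])
            b = np.float32(cur_gray[y:y + block, x:x + block])
            (dx, dy), response = cv2.phaseCorrelate(a, b)
            if response > 0.1:            # confidence gate (assumed value)
                vectors.append((x, y, dx, dy))
    return vectors
```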

15.
Silhouette-based occluded object recognition through curvature scale space
A complete and practical system for occluded object recognition has been developed which is very robust with respect to noise and local deformations of shape (due to weak perspective distortion, segmentation errors and non-rigid material) as well as scale, position and orientation changes of the objects. The system has been tested on a wide variety of free-form 3D objects. An industrial application is envisaged where a fixed camera and a light-box are utilized to obtain images. Within the constraints of the system, every rigid 3D object can be modeled by a limited number of classes of 2D contours corresponding to the object's resting positions on the light-box. The contours in each class are related to each other by a 2D similarity transformation. The Curvature Scale Space technique [26, 28] is then used to obtain a novel multi-scale segmentation of the image and the model contours. Object indexing [16, 32, 36] is used to narrow down the search space. An efficient local matching algorithm is utilized to select the best matching models. Received: 5 August 1996 / Accepted: 19 March 1997
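The curvature scale space representation itself is compact to compute; below is a sketch for a closed contour, using Gaussian smoothing of the coordinate functions and tracking curvature zero-crossings over scale (this is only the representation, not the paper's full matching and indexing system):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def css_zero_crossings(contour, sigmas):
    """Curvature scale space of a closed contour: smooth the contour with
    Gaussians of increasing sigma and record where the curvature changes
    sign. contour: (N, 2) array of x, y points along the closed curve."""
    x, y = contour[:, 0].astype(float), contour[:, 1].astype(float)
    css = []
    for s in sigmas:
        xs = gaussian_filter1d(x, s, mode='wrap')
        ys = gaussian_filter1d(y, s, mode='wrap')
        dx, dy = np.gradient(xs), np.gradient(ys)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        kappa = (dx * ddy - dy * ddx) / ((dx**2 + dy**2) ** 1.5 + 1e-12)
        zc = np.where(np.diff(np.sign(kappa)) != 0)[0]
        css.append((s, zc))               # inflection points at this scale
    return css
```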

16.
Real-time multiple vehicle detection and tracking from a moving vehicle
A real-time vision system has been developed that analyzes color videos taken from a forward-looking video camera in a car driving on a highway. The system uses a combination of color, edge, and motion information to recognize and track the road boundaries, lane markings and other vehicles on the road. Cars are recognized by matching templates that are cropped from the input data online and by detecting highway scene features and evaluating how they relate to each other. Cars are also detected by temporal differencing and by tracking motion parameters that are typical for cars. The system recognizes and tracks road boundaries and lane markings using a recursive least-squares filter. Experimental results demonstrate robust, real-time car detection and tracking over thousands of image frames. The data includes video taken under difficult visibility conditions. Received: 1 September 1998 / Accepted: 22 February 2000
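The online template-matching idea can be sketched as follows; the score threshold and the simple re-cropping policy are assumptions, not the paper's exact scheme:

```python
import cv2

def track_car(frame, template, min_score=0.6):
    """Locate a previously cropped car template in the current frame by
    normalized cross-correlation; the template is re-cropped online from
    the best match so it adapts to appearance changes."""
    res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(res)
    th, tw = template.shape[:2]
    x, y = top_left
    # refresh the template only when the match is confident
    new_template = frame[y:y + th, x:x + tw] if score > min_score else template
    return (x, y, tw, th), score, new_template
```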

17.
18.
19.
Independent motion detection in 3D scenes
This paper presents an algorithmic approach to the problem of detecting independently moving objects in 3D scenes that are viewed under camera motion. There are two fundamental constraints that can be exploited for the problem: 1) the two-/multiview camera motion constraint (for instance, the epipolar/trilinear constraint) and 2) the shape constancy constraint. Previous approaches to the problem either use only partial constraints or rely on dense correspondences or flow. We employ both fundamental constraints in an algorithm that does not demand a priori availability of correspondences or flow. Our approach uses the plane-plus-parallax decomposition to enforce the two constraints. It is also demonstrated how, for a class of scenes called sparse 3D scenes, in which genuine parallax and independent motions may be confounded, the plane-plus-parallax decomposition allows progressive introduction and verification of the fundamental constraints. Results of the algorithm on some difficult sparse 3D scenes are promising.
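The first step of the plane-plus-parallax decomposition reduces to fitting a dominant-plane homography and inspecting the residuals; a sketch of that step only (the subsequent epipolar and shape-constancy verification described above is not shown):

```python
import cv2
import numpy as np

def parallax_residual(pts1, pts2):
    """Plane-plus-parallax sketch: fit a homography to the dominant plane
    with RANSAC, map the first-view points through it, and measure the
    residual displacement. Points off the plane carry genuine parallax or
    independent motion; separating the two requires further constraints.
    pts1, pts2: (N, 2) float32 matched point coordinates."""
    H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    mapped = cv2.perspectiveTransform(pts1.reshape(-1, 1, 2), H)
    residual = np.linalg.norm(pts2 - mapped.reshape(-1, 2), axis=1)
    return residual, mask.ravel() == 1   # residual per point, plane inliers
```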

20.
Detecting moving objects using the rigidity constraint
A method for visually detecting moving objects from a moving camera using point correspondences in two orthographic views is described. The method applies a simple structure-from-motion analysis and then identifies those points inconsistent with the interpretation of the scene as a single rigid object. It is effective even when the actual motion parameters cannot be recovered. Demonstrations are presented using point correspondences automatically determined from real image sequences.
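One way to operationalize the rigidity constraint under orthography is as a rank bound on the stacked measurement matrix, in the spirit of factorization methods; a sketch under that reading, with an assumed residual threshold (not necessarily the paper's formulation):

```python
import numpy as np

def rigidity_outliers(pts1, pts2, thresh=2.0):
    """Flag points inconsistent with a single rigid interpretation under
    orthography. Stack the two views into a 4 x N measurement matrix,
    center it (removing translation), project onto its best rank-3
    approximation (the rank bound for a rigid scene), and flag points
    with a large residual as candidate moving objects."""
    W = np.vstack([pts1.T, pts2.T]).astype(float)   # 4 x N
    W = W - W.mean(axis=1, keepdims=True)           # remove translation
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    W3 = (U[:, :3] * s[:3]) @ Vt[:3]                # rank-3 projection
    residual = np.linalg.norm(W - W3, axis=0)
    return residual > thresh
```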
