Similar Documents
20 similar documents found (search time: 15 ms)
1.
We present an analysis of the spatial and temporal statistics of “natural” optical flow fields and a novel flow algorithm that exploits their spatial statistics. Training flow fields are constructed using range images of natural scenes and 3D camera motions recovered from hand-held and car-mounted video sequences. A detailed analysis of optical flow statistics in natural scenes is presented and machine learning methods are developed to learn a Markov random field model of optical flow. The prior probability of a flow field is formulated as a Fields-of-Experts model that captures the spatial statistics in overlapping patches and is trained using contrastive divergence. This new optical flow prior is compared with previous robust priors and is incorporated into a recent, accurate algorithm for dense optical flow computation. Experiments with natural and synthetic sequences illustrate how the learned optical flow prior quantitatively improves flow accuracy and how it captures the rich spatial structure found in natural scene motion.

2.
The apparent pixel motion in an image sequence, called optical flow, is a useful primitive for automatic scene analysis and various other applications of computer vision. In general, however, optical flow estimation suffers from two significant problems: illumination that varies with time, and motion discontinuities induced by objects moving with respect either to other objects or to the background. Various integrated approaches for solving these two problems simultaneously have been proposed. Of these, those that are based on the LMedS (least median of squares) appear to be the most robust. The goal of this paper is to carry out an error analysis of two different LMedS-based approaches, one based on the standard LMedS regression and the other using a modification thereof as proposed by us recently. While it is to be expected that the estimation accuracy of any approach would decrease with increasing levels of noise, for LMedS-like methods, it is not always clear how much of that decrease in performance can be attributed to the fact that only a small number of randomly selected samples is used for forming temporary solutions. To answer this question, our study here includes a baseline implementation in which all of the image data is used for forming motion estimates. We then compare the estimation errors of the two LMedS-based methods with the baseline implementation. Our error analysis demonstrates that, for the case of Gaussian noise, our modified LMedS approach yields better estimates at moderate levels of noise, but is outperformed by the standard LMedS method as the level of noise increases. For the case of salt-and-pepper noise, the modified LMedS method consistently performs better than the standard LMedS method.
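The LMedS idea in item 2 — fit candidates to small random samples and score each by its median squared residual over all the data — can be sketched with a minimal line-fitting example (this is an illustration of the estimator, not the paper's flow method; `lmeds_line_fit` and its parameters are hypothetical names):

```python
import random

def lmeds_line_fit(points, n_trials=200, seed=0):
    """Fit y = a*x + b by Least Median of Squares: repeatedly fit a line
    to a random minimal sample (2 points) and keep the candidate whose
    median squared residual over ALL points is smallest. Because the
    median ignores up to ~50% of the data, gross outliers do not pull
    the fit the way they would in ordinary least squares."""
    rng = random.Random(seed)
    best, best_med = None, float("inf")
    for _ in range(n_trials):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:                       # degenerate sample, skip
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = sorted((y - (a * x + b)) ** 2 for x, y in points)
        med = residuals[len(residuals) // 2]
        if med < best_med:
            best_med, best = med, (a, b)
    return best, best_med
```

With one gross outlier among points on y = 2x + 1, the median residual of the inlier-based candidate is essentially zero, so the outlier is ignored.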

3.
To improve the resolution of reconstructed images or video, a novel optical-flow-based image registration algorithm is applied within the iterative back-projection (IBP) super-resolution algorithm. In the proposed method, optical-flow-based registration is used to improve registration accuracy. First, to obtain pixel-level motion vectors, the optical-flow-based registration algorithm estimates the motion between images, yielding a more accurate motion-vector matrix. This matrix is then combined with the iterative back-projection algorithm to reconstruct a high-resolution image. Because optical-flow-based registration estimates inter-frame motion well, the proposed method is equally applicable to video super-resolution. Experimental results show that the method improves image and video super-resolution in both subjective quality and objective metrics.
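The reconstruction step of item 3 can be illustrated with a toy 1-D iterative back-projection loop. This is a sketch under an assumed blur-and-decimate forward model (`observe` is hypothetical); the paper additionally registers multiple frames with optical flow, which is omitted here to keep the sketch minimal:

```python
import numpy as np

def observe(hr):
    """Assumed forward model for this sketch: blur with [1,2,1]/4,
    then keep every other pixel (a stand-in for camera PSF + decimation)."""
    return np.convolve(hr, [0.25, 0.5, 0.25], mode="same")[::2]

def ibp_1d(lr, n_iter=60, step=1.0):
    """Toy 1-D iterative back-projection: refine a high-resolution
    estimate by simulating the low-resolution observation and
    up-projecting the residual back onto the HR grid."""
    hr = np.repeat(lr, 2)                          # initial guess: 2x nearest neighbour
    for _ in range(n_iter):
        residual = lr - observe(hr)                # LR-domain reconstruction error
        hr = hr + step * np.repeat(residual, 2)    # back-project the error
    return hr
```

After convergence, the simulated observation of the HR estimate matches the LR input, which is the fixed point IBP seeks.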

4.
To address the edge blurring and over-segmentation that affect variational optical flow computation for image sequences in complex scenes with illumination changes and large-displacement motion, this paper proposes a variational optical flow method based on motion-optimized semantic segmentation. First, an energy functional for variational flow computation is constructed from a mean-removed normalized matching model over local image regions. Then, the flow estimated with the mean-removed normalized cross-correlation term is used to extract motion-boundary information and refine the semantic segmentation, yielding a motion-constrained, segmentation-aware variational flow model. Finally, the flow fields of the differently labeled image regions are fused to obtain the final result. Experiments on the Middlebury and UCF101 databases show that the method achieves high flow accuracy and robustness, with especially good edge preservation in difficult scenes involving illumination change, weak texture, and large-displacement motion.
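The mean-removed normalized matching model mentioned above is, in essence, zero-mean normalized cross-correlation (ZNCC), which is invariant to affine illumination changes between patches; a minimal sketch (the function name is an assumption, not from the paper):

```python
import numpy as np

def zncc(p, q):
    """Zero-mean normalised cross-correlation between two patches.
    Subtracting each patch's mean and dividing by its norm makes the
    score invariant to gain and offset changes in brightness,
    which is why such data terms are robust to illumination change."""
    p = p - p.mean()
    q = q - q.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(q)
    return float((p * q).sum() / denom) if denom else 0.0
```

A patch and any positively scaled, offset copy of it score exactly 1.0; a contrast-inverted copy scores -1.0.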

5.
Advanced Robotics, 2013, 27(7-8): 791-816
This paper presents a new idea for an obstacle recognition method for mobile robots by analyzing optical flow information acquired from dynamic images. First, the optical flow field is detected in image sequences from a camera on a moving observer and moving object candidates are extracted by using a normalized square residual error [focus of expansion (FOE) residual error] value that is calculated in the process of estimating the FOE. Next, the optical flow directions and intensity values are stored for the pixels involved in each candidate region to calculate the distribution width values around the principal axes of inertia and the direction of the principal axes. Finally, each candidate is classified into an object category that is expected to appear in the scene by comparing the proportion and the direction values with standard data ranges for the objects which are determined by preliminary experiments. Experimental results of car/bicycle/pedestrian recognition in real outdoor scenes have shown the effectiveness of the proposed method.
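The FOE estimation step above can be sketched as a least-squares problem: for a purely translating camera, each flow vector (u, v) at pixel (x, y) should be parallel to (x - xf, y - yf), giving one linear constraint per pixel. A minimal version (the paper works with a *normalized* residual; that normalization is omitted here):

```python
import numpy as np

def estimate_foe(xs, ys, us, vs):
    """Least-squares focus of expansion under pure camera translation:
    parallelism of (u, v) and (x - xf, y - yf) gives, per pixel,
        v*xf - u*yf = v*x - u*y,
    an overdetermined linear system in (xf, yf)."""
    A = np.column_stack([vs, -us])
    b = vs * xs - us * ys
    foe, residual, _, _ = np.linalg.lstsq(A, b, rcond=None)
    return foe, residual   # large residual flags independently moving pixels
```

On a synthetic expansion field radiating from (3, 4), the estimate recovers the FOE exactly.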

6.
A Database and Evaluation Methodology for Optical Flow
The quantitative evaluation of optical flow algorithms by Barron et al. (1994) led to significant advances in performance. The challenges for optical flow algorithms today go beyond the datasets and evaluation methods proposed in that paper. Instead, they center on problems associated with complex natural scenes, including nonrigid motion, real sensor noise, and motion discontinuities. We propose a new set of benchmarks and evaluation methods for the next generation of optical flow algorithms. To that end, we contribute four types of data to test different aspects of optical flow algorithms: (1) sequences with nonrigid motion where the ground-truth flow is determined by tracking hidden fluorescent texture, (2) realistic synthetic sequences, (3) high frame-rate video used to study interpolation error, and (4) modified stereo sequences of static scenes. In addition to the average angular error used by Barron et al., we compute the absolute flow endpoint error, measures for frame interpolation error, improved statistics, and results at motion discontinuities and in textureless regions. In October 2007, we published the performance of several well-known methods on a preliminary version of our data to establish the current state of the art. We also made the data freely available on the web at . Subsequently a number of researchers have uploaded their results to our website and published papers using the data. A significant improvement in performance has already been achieved. In this paper we analyze the results obtained to date and draw a large number of conclusions from them.
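The two headline metrics of this benchmark, the absolute flow endpoint error and the Barron-style average angular error, are simple to compute per pixel; a sketch:

```python
import numpy as np

def endpoint_error(u1, v1, u2, v2):
    """Absolute flow endpoint error: Euclidean distance between the
    estimated and ground-truth flow vectors at each pixel."""
    return np.hypot(u1 - u2, v1 - v2)

def angular_error(u1, v1, u2, v2):
    """Barron-style angular error, in degrees: the angle between the
    space-time direction vectors (u, v, 1), which avoids divide-by-zero
    at stationary pixels."""
    num = u1 * u2 + v1 * v2 + 1.0
    den = np.sqrt(u1 ** 2 + v1 ** 2 + 1.0) * np.sqrt(u2 ** 2 + v2 ** 2 + 1.0)
    return np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
```

For example, an estimated flow of (1, 0) against a zero ground-truth flow has endpoint error 1 and angular error 45 degrees.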

7.
To address the locally erroneous estimates that can appear among Horn-Schunck optical flow motion vectors, an optimization algorithm based on Wiener linear prediction is proposed. First, the flow vectors are converted from Cartesian to polar coordinates. Next, according to a set of decision rules, suspect points are examined further, and points judged to be mis-estimated are re-estimated by Wiener linear prediction. Finally, the polar-coordinate flow vectors are converted back to Cartesian coordinates, completing the optimization. Compared with the raw Horn-Schunck flow, the optimized flow shows clearly reduced magnitude and angle errors at mis-estimated points, improving the accuracy of the flow field. Both the raw and the optimized Horn-Schunck flow were applied to motion compensation of images and video sequences; the results show that the Wiener-prediction-based optimization of Horn-Schunck flow vectors performs well.
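The polar-coordinate repair idea can be sketched as follows. Note the detection and replacement used here (median/MAD test, replacement by the median) is a deliberate simplification of the paper's scheme, which applies decision rules and re-estimates flagged vectors by Wiener linear prediction from their neighbours:

```python
import numpy as np

def polish_flow_polar(u, v, thresh=3.0):
    """Convert flow to polar coordinates, flag magnitudes far from the
    median (in MAD units), replace flagged magnitudes, and convert back.
    A simplified stand-in for the paper's rule-based detection and
    Wiener-prediction re-estimation."""
    mag = np.hypot(u, v)                   # Cartesian -> polar (magnitude)
    ang = np.arctan2(v, u)                 # Cartesian -> polar (angle)
    med = np.median(mag)
    mad = np.median(np.abs(mag - med)) + 1e-12
    bad = np.abs(mag - med) > thresh * mad
    mag = np.where(bad, med, mag)          # re-estimate suspect magnitudes
    return mag * np.cos(ang), mag * np.sin(ang), bad
```

A single wildly over-estimated vector in an otherwise uniform field is detected and pulled back to the field's magnitude.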

8.
Stereo Visual Odometry Based on the Least Median of Squares
A stereo visual odometry method based on the least median of squares (LMedS) estimator is proposed. Scale-invariant SIFT feature points serve as landmarks, and a KD-tree nearest-neighbor search algorithm matches features between the left and right images and tracks them between consecutive frames. After 3D reconstruction of the feature points, the robot's travelled distance and heading are estimated with the LMedS estimator. Experiments show the method is highly robust in inter-image matching, 3D landmark tracking, and robot motion estimation.

9.
In video post-production applications, camera motion analysis and alignment are important in order to ensure geometric correctness and temporal consistency. In this paper, we trade some generality in estimating and aligning camera motion for reduced computational complexity and a more image-based formulation. The main contribution is to use fundamental ratios to synchronize video sequences of distinct scenes captured by cameras undergoing similar motions. We also present a simple method to align 3D camera trajectories when the fundamental ratios are not able to match the noisy trajectories. Experimental results show that our method can accurately synchronize sequences even when the scenes are totally different and have dense depths. An application on 3D object transfer is also demonstrated.

10.
Two approaches are described that improve the efficiency of optical flow computation without incurring loss of accuracy. The first approach segments images into regions of moving objects. The method is based on a previously defined Galerkin finite element method on a triangular mesh combined with a multiresolution segmentation approach for object flow computation. Images are automatically segmented into subdomains of moving objects by an algorithm that employs a hierarchy of mesh coarseness for the flow computation, and these subdomains are reconstructed over a finer mesh on which to recompute flow more accurately. The second approach uses an adaptive mesh in which the resolution increases where motion is found to occur. Optical flow is computed over a reasonably coarse mesh, and this is used to construct an optimal adaptive mesh in a way that is different from the gradient methods reported in the literature. The finite element mesh facilitates a reduction in computational effort by enabling processing to focus on particular objects of interest in a scene (i.e. those areas where motion is detected). The proposed methods were tested on real and synthetic image sequences, and promising results are reported.

11.
Differential optical flow methods allow the estimation of optical flow fields based on the first-order and even higher-order spatio-temporal derivatives (gradients) of sequences of input images. If the input images are noisy, for instance because of the limited quality of the capturing devices or due to poor illumination conditions, the use of partial derivatives will amplify that noise and thus end up affecting the accuracy of the computed flow fields. The typical approach in order to reduce that noise consists of smoothing the required gradient images with Gaussian filters, for instance by applying structure tensors. However, that filtering is isotropic and tends to blur the discontinuities that may be present in the original images, thus likely leading to an undesired loss of accuracy in the resulting flow fields. This paper proposes the use of tensor voting as an alternative to Gaussian filtering, and shows that the discontinuity preserving capabilities of the former yield more robust and accurate results. In particular, a state-of-the-art variational optical flow method has been adapted in order to utilize a tensor voting filtering approach. The proposed technique has been tested upon different datasets of both synthetic and real image sequences, and compared to both well known and state-of-the-art differential optical flow methods.
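For contrast, the isotropic Gaussian filtering that the paper replaces with tensor voting is the classic structure-tensor computation: smooth the outer products of the image gradients componentwise. A numpy-only sketch (tensor voting itself is not shown):

```python
import numpy as np

def structure_tensor(img, sigma=1.0):
    """Classic structure tensor: form the outer products of the image
    gradients and smooth each component with a separable Gaussian.
    This isotropic smoothing is exactly what blurs discontinuities,
    motivating the paper's tensor-voting alternative."""
    gy, gx = np.gradient(img.astype(float))   # derivatives along rows, columns
    r = int(3 * sigma)
    t = np.arange(-r, r + 1)
    g = np.exp(-t ** 2 / (2 * sigma ** 2))
    g /= g.sum()

    def smooth(a):
        a = np.apply_along_axis(np.convolve, 0, a, g, mode="same")
        return np.apply_along_axis(np.convolve, 1, a, g, mode="same")

    return smooth(gx * gx), smooth(gx * gy), smooth(gy * gy)
```

On a constant image all tensor components vanish; on a horizontal ramp the interior of the smoothed gx*gx component equals the squared slope.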

12.
Optical flow estimation is a recurrent problem in several disciplines and assumes a primary importance in a number of applicative fields such as medical imaging [12], computer vision [6], productive process control [4], etc. In this paper, a differential method for optical flow evaluation is presented. It employs a new error formulation that ensures a more than satisfactory image reconstruction in those points which are free of motion discontinuity. A dynamic scheme of brightness-sample processing has been used to regularise the motion field. A technique based on the concurrent processing of sequences with multiple pairs of images has also been developed for improving detection and resolution of mobile objects on the scene, if they exist. This approach permits the detection of motions ranging from a fraction of a pixel to a few pixels per frame. Good results, even on noisy sequences and without the need of a filtering pre-processing stage, can be achieved. The intrinsic method structure can be exploited for favourable implementation on multi-processor systems with a scalable degree of parallelism. Several sequences, some with noise and presenting various types of motions, have been used for evaluating the performances and the effectiveness of the method. Carmelo Lodato received his Dr. Ing. Degree in Civil Engineering from the University of Palermo, Italy, in 1987. He is Researcher at the High Performance Computing and Networking Institute (ICAR) of the Italian National Research Council (CNR). His current research interests include computer vision, image processing, motion analysis, optimization and stochastic algorithms. Salvatore Lopes received his Dr. Ing. Degree (summa cum laude) in Nuclear Engineering from the University of Palermo, Italy, in 1988. He is Researcher at the High Performance Computing and Networking Institute (ICAR) of the Italian National Research Council (CNR). His current research interests include computer vision, image processing, motion analysis, optimization and stochastic algorithms.

13.
Dominant Motion Detection in High-Density Crowds Based on Particle Video
Particle video is used to obtain the motion trajectories of feature points in a video sequence. The extracted trajectories are then clustered using the longest common subsequence (LCS) measure to recover the dominant motion directions of the crowd. The algorithm effectively detects the dominant motion directions in real scenes.
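The LCS comparison used above is standard dynamic programming; a sketch over trajectories quantized into direction symbols (the symbol alphabet and the normalized similarity are assumptions for illustration, not specified in the abstract):

```python
def lcs_length(a, b):
    """Longest common subsequence length by dynamic programming,
    here applied to trajectories quantised into direction symbols
    (e.g. 'R' = right, 'U' = up)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def trajectory_similarity(t1, t2):
    """Normalised LCS similarity in [0, 1]; trajectories with similar
    direction sequences score high and cluster into a dominant motion."""
    return lcs_length(t1, t2) / max(len(t1), len(t2), 1)
```

Unlike pointwise distances, LCS tolerates trajectories of different lengths and small local deviations, which suits noisy particle tracks.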

14.
付豪, 徐和根, 张志明, 齐少华. Journal of Computer Applications (《计算机应用》), 2021, 41(11): 3337-3344
To address localization and static semantic map construction in dynamic scenes, a simultaneous localization and mapping (SLAM) algorithm for dynamic environments based on semantic and optical flow constraints is proposed, reducing the impact of dynamic objects on localization and mapping. First, for each input frame, object masks are obtained by semantic segmentation, and feature points that violate the epipolar constraint are filtered out by a geometric check. Next, a dynamic probability is computed for each object by combining its mask with optical flow; feature points are filtered by this probability to retain static points, which are then used for camera pose estimation. A static point cloud is then built from the RGB-D frames and the object dynamic probabilities, and a semantic octree map is constructed with the segmentation results. Finally, a sparse semantic map is created from the static point cloud and the semantic segmentation. Tests on the public TUM dataset show that in highly dynamic scenes the proposed algorithm improves on ORB-SLAM2 by more than 95% in absolute trajectory error and relative pose error, and reduces absolute trajectory error by 41% and 11% relative to DS-SLAM and DynaSLAM respectively, confirming good localization accuracy and robustness in highly dynamic scenes. Mapping experiments show that the algorithm produces a static semantic map, and the sparse semantic map requires 99% less storage than the point-cloud map.

15.
Optical Flow Estimation Based on Texture Constraints and Parametric Motion Models
A new optical flow estimation method based on locally planar motion is proposed, with the goal of obtaining accurate, dense flow estimates. Unlike previous algorithms that take brightness-consistent regions as the hypothesized planes, this algorithm exploits the texture information of the image sequence and performs motion estimation on texture-segmented regions. A coarse flow is first computed by a differential method to initialize the parametric flow model; a region-iteration algorithm then refines this initialization, yielding a fine planar segmentation and the corresponding parametric flow models. A texture-based partial fitting procedure is used at every step of the algorithm, keeping flow estimates accurate at texture edges. Experiments on standard image sequences show that finer flow estimates are obtained, especially for outdoor sequences rich in texture, with particularly clear improvement at motion boundaries.
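A parametric motion model of the kind fitted per region in item 15 can be estimated by least squares; a minimal sketch for the common affine case (the abstract does not fix the parametrization, so the six-parameter affine model here is an assumption):

```python
import numpy as np

def fit_affine_flow(x, y, u, v):
    """Least-squares affine motion model for one region:
        u = a0 + a1*x + a2*y,   v = b0 + b1*x + b2*y.
    Given pixel coordinates (x, y) and their flow samples (u, v),
    solve the two linear systems independently."""
    A = np.column_stack([np.ones_like(x), x, y])
    a, *_ = np.linalg.lstsq(A, u, rcond=None)
    b, *_ = np.linalg.lstsq(A, v, rcond=None)
    return a, b
```

On noise-free samples of an affine field the six parameters are recovered exactly, since the system is consistent.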

16.
Deformations of region properties such as contours and moments have been used previously to recover the optic flow field velocities for planar patches. In a sequence of images containing several moving objects, the flow field determination is further complicated by occlusion effects. This letter presents a method whereby the flow field can still be obtained even in the presence of occlusion. Adjacent regions experiencing occlusion are treated as a coupled system. The results of a number of experiments on laboratory and natural image sequences are reported.

17.
Optical flow methods are used to estimate pixelwise motion information based on consecutive frames in image sequences. The image sequences traditionally contain frames that are similarly exposed. However, many real-world scenes contain high dynamic range content that cannot be captured well with a single exposure setting. Such scenes result in certain image regions being over- or underexposed, which can negatively impact the quality of motion estimates in those regions. Motivated by this, we propose to capture high dynamic range scenes using different exposure settings every other frame. A framework for optical flow (OF) estimation on such image sequences is presented that can straightforwardly integrate techniques from the state of the art in conventional OF methods. Different aspects of robustness of OF methods are discussed, including estimation of large displacements and robustness to natural illumination changes that occur between the frames, and we demonstrate experimentally how to handle such challenging flow estimation scenarios. The flow estimation is formulated as an optimization problem whose solution is obtained using an efficient primal–dual method.

18.
We analyze the least-squares error for structure from motion with a single infinitesimal motion (structure from optical flow). We present asymptotic approximations to the noiseless error over two, complementary regions of motion estimates: roughly forward and non-forward translations. Our approximations are powerful tools for understanding the error. Experiments show that they capture its detailed behavior over the entire range of motions. We illustrate the use of our approximations by deriving new properties of the least-squares error. We generalize the earlier results of Jepson/Heeger/Maybank on the bas-relief ambiguity and of Oliensis on the reflected minimum. We explain the error's complexity and its multiple local minima for roughly forward translation estimates (epipoles within the field of view) and identify the factors that make this complexity likely. For planar scenes, we clarify the effects of the two-fold ambiguity, show the existence of a new, double bas-relief ambiguity, and analyze the error's local minima. For nonplanar scenes, we derive simplified error approximations for reasonable assumptions on the image and scene. For example, we show that the error tends to have a simpler form when many points are tracked. We show experimentally that our analysis for zero image noise gives a good model of the error for large noise. We show theoretically and experimentally that the error for projective structure from motion is simpler but flatter than the error for calibrated images.

19.
We present a method for capturing the skeletal motions of humans using a sparse set of potentially moving cameras in an uncontrolled environment. Our approach is able to track multiple people even in front of cluttered and non‐static backgrounds, and unsynchronized cameras with varying image quality and frame rate. We completely rely on optical information and do not make use of additional sensor information (e.g. depth images or inertial sensors). Our algorithm simultaneously reconstructs the skeletal pose parameters of multiple performers and the motion of each camera. This is facilitated by a new energy functional that captures the alignment of the model and the camera positions with the input videos in an analytic way. The approach can be adopted in many practical applications to replace the complex and expensive motion capture studios with few consumer‐grade cameras even in uncontrolled outdoor scenes. We demonstrate this based on challenging multi‐view video sequences that are captured with unsynchronized and moving (e.g. mobile‐phone or GoPro) cameras.

20.
A time-efficient technique for real-time tracking of high-speed objects in a video sequence is presented in this article. The technique is primarily based on the segmentation of the optical flow field computed between the successive image frames of a video sequence, followed by the tracking of a detected point of interest (POI) within the segmented flow field. In the initial phase of the technique, the optical flow field between the first two successive image frames acquired from a video sequence is computed. A fuzzy hostility index, indicative of the degree of coherence of the moving objects in the image frames, is used to segment the optical flow field. This yields different coherent regions of interest (ROIs) in the segmented flow field. A POI is then detected in the different ROIs obtained. Tracking of the moving object is then carried out by computing the flow fields between predefined ROIs in the neighborhood of the detected POI in the subsequent image frames. Since the selected ROIs are smaller than the image frames, a fair amount of reduction in the time required for the computation of the optical flow field is achieved, thereby facilitating real-time operation. An application of the proposed technique is demonstrated on three video sequences of high-speed flying fighter aircraft.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号