Similar Literature
10 similar documents found (search time: 203 ms)
1.
In recent years, local stereo matching algorithms have again become very popular in the stereo community. This is mainly due to the introduction of adaptive support weight algorithms that can, for the first time, produce results on par with global stereo methods. The crux of these adaptive support weight methods is to assign an individual weight to each pixel within the support window. Adaptive support weight algorithms differ mainly in the manner in which this weight computation is carried out. In this paper we present an extensive evaluation study. We evaluate the performance of various methods for computing adaptive support weights, including the original bilateral filter-based weights as well as more recent approaches based on geodesic distances or on the guided filter. To obtain reliable findings, we test these different weight functions on a large set of 35 ground truth disparity pairs. We have implemented all approaches on the GPU, which allows for a fair comparison of run time on modern hardware platforms. Apart from standard local matching using fronto-parallel windows, we also embed the competing weight functions into the recent PatchMatch Stereo approach, which uses slanted sub-pixel windows and represents a state-of-the-art local algorithm. In the final part of the paper, we aim at shedding light on general aspects of adaptive support weight matching, including a discussion of symmetric versus asymmetric support weight approaches.
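To make the weight computation concrete, here is a minimal NumPy sketch of bilateral-filter-style support weights for a single window; the bandwidths `gamma_c` and `gamma_s`, the window handling, and the Euclidean color distance are illustrative assumptions rather than the exact settings evaluated in the paper.

```python
import numpy as np

def bilateral_support_weights(patch, gamma_c=10.0, gamma_s=9.0):
    """Adaptive support weights for one square window (H x W x 3 color patch).

    Each pixel q receives a weight that decays with its color difference to
    the window center p and with its spatial distance to p, as in bilateral
    filtering. gamma_c / gamma_s are assumed bandwidths for illustration.
    """
    h, w = patch.shape[:2]
    cy, cx = h // 2, w // 2
    center = patch[cy, cx].astype(np.float64)

    # Color term: Euclidean distance in color space to the window center.
    d_color = np.linalg.norm(patch.astype(np.float64) - center, axis=2)

    # Spatial term: Euclidean distance to the window center in pixels.
    ys, xs = np.mgrid[0:h, 0:w]
    d_space = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)

    return np.exp(-d_color / gamma_c - d_space / gamma_s)
```

In an asymmetric setting only the reference window's weights are used when aggregating the per-pixel costs; symmetric variants multiply the weights of both windows, which is one of the design choices the paper's final discussion examines.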

2.
Objective: Stereo matching is a key problem in stereo vision research; the accuracy and speed of a matching algorithm directly affect the quality of 3D reconstruction. For traditional stereo matching algorithms, matching accuracy in weakly textured regions, regions of discontinuous disparity, and occluded regions remains unsatisfactory. We therefore focus on the semi-global stereo matching algorithm, which combines some of the advantages of global and local matching, lies between the two in performance, and is highly robust, and we propose an improvement that combines it with an adaptive window. Method: Starting from a semi-global stereo matching algorithm whose matching cost is computed with the AD (absolute difference) measure, we first change the way the matching cost is computed and study how window size affects performance, then add an adaptive window algorithm and study its effect, and finally evaluate and compare the improved algorithm. Results: The experiments show that the choice of matching window affects algorithm performance and widens the range of applicable scenes, and that adding the adaptive window improves matching accuracy, especially in regions of discontinuous depth, while effectively reducing run time. On the Cones test image set, the improved algorithm reduces the mismatch rate by 2.29% on average over the three evaluation regions compared with the original algorithm; over all test image sets, run time is reduced by 28.5% on average compared with the version without the adaptive window. Conclusion: The semi-global stereo matching algorithm with an adaptive window achieves better overall performance and allows matching accuracy and speed to be tuned to the application scenario.
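As a rough illustration of the adaptive-window idea discussed above, the sketch below picks a per-pixel aggregation window radius from local intensity variance; the radii, thresholds and the variance criterion are assumptions for illustration, not the exact rule of the improved algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_window_radius(gray, radii=(1, 2, 4), var_thresholds=(100.0, 400.0)):
    """Pick a per-pixel aggregation window radius from local intensity variance.

    Low-variance (weakly textured) pixels get a large window, high-variance
    pixels (likely near depth discontinuities or rich texture) get a small
    one. The radii and thresholds are illustrative assumptions.
    """
    g = gray.astype(np.float64)
    size = 2 * max(radii) + 1
    # Local variance via E[x^2] - E[x]^2 with box filters.
    local_var = uniform_filter(g ** 2, size) - uniform_filter(g, size) ** 2

    radius = np.full(gray.shape, radii[0], dtype=np.int32)   # textured: small
    radius[local_var < var_thresholds[1]] = radii[1]         # medium texture
    radius[local_var < var_thresholds[0]] = radii[2]         # flat: large
    return radius
```

The selected radius then determines the window over which the AD matching cost is aggregated before the semi-global path optimization.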

3.
A Study of Stereophonic Acoustic Echo Cancellation with Time-Varying Nonlinear Preprocessing
Two new nonlinear signal preprocessing methods are proposed for the stereophonic acoustic echo cancellation problem, together with the corresponding adaptive algorithms. The new preprocessing methods apply less nonlinear processing to the signals than the methods of Benesty (1997) and Joncour (1998), and therefore degrade speech quality less. Simulation results show that, when combined with their implementation algorithms and applied to stereophonic echo cancellation, the new preprocessing methods perform better than those proposed by Benesty (1997) and Joncour (1998).
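For context, here is a sketch of the half-wave-rectifier preprocessing commonly associated with the Benesty (1997) baseline that the abstract compares against; the exact form, and the paper's own time-varying preprocessing, are not reproduced here, and the value of `alpha` is an assumption.

```python
import numpy as np

def halfwave_preprocess(x1, x2, alpha=0.5):
    """Half-wave rectifier preprocessing often used as a baseline for
    stereophonic acoustic echo cancellation (cf. Benesty et al.).

    A small, channel-asymmetric nonlinearity is added to decorrelate the two
    channels so the stereo adaptive filter can converge to the true echo
    paths. alpha controls the amount of nonlinearity and thus the audible
    distortion; its value here is an illustrative assumption.
    """
    x1p = x1 + 0.5 * alpha * (x1 + np.abs(x1))   # adds the positive half-wave
    x2p = x2 + 0.5 * alpha * (x2 - np.abs(x2))   # adds the negative half-wave
    return x1p, x2p
```

The paper's methods apply less nonlinear processing than this kind of baseline; the preprocessing strength trades channel decorrelation (and thus canceller convergence) against audible distortion.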

4.
In this paper, the challenge of fast stereo matching for embedded systems is tackled. Limited resources, e.g. memory and processing power, and most importantly the real-time requirements of embedded systems for robotic applications do not permit the use of the most sophisticated stereo matching approaches. The strengths and weaknesses of different matching approaches have been analyzed, and a well-suited solution has been found in a Census-based stereo matching algorithm. The novelty of the algorithm used is the explicit adaptation and optimization of the well-known Census transform with respect to embedded real-time systems in software. The most important change in comparison with the classic Census transform is the use of a sparse Census mask, which halves the processing time with nearly unchanged matching quality. This is due to the fact that large sparse Census masks perform better than small dense masks with the same processing effort. Evidence for this assumption is given by the results of experiments with different mask sizes. Another contribution of this work is the presentation of a complete stereo matching system with its correlation-based core algorithm, the detailed analysis and evaluation of the results, and the optimized high-speed realization on different embedded and PC platforms. The algorithm handles areas that are difficult for stereo matching, such as areas with low texture, very well in comparison to state-of-the-art real-time methods. It can successfully eliminate false positives to provide reliable 3D data. The system is robust, easy to parameterize and offers high flexibility. It also achieves high performance on several, including resource-limited, systems without losing the good quality of stereo matching. A detailed performance analysis of the algorithm is given for optimized reference implementations on various commercial off-the-shelf (COTS) platforms, e.g. a PC, a DSP and a GPU, reaching a frame rate of up to 75 fps for 640 × 480 images and 50 disparities. The matching quality and processing time are compared to other algorithms on the Middlebury stereo evaluation website, reaching a mid-range quality rank and a top performance rank. Additional evaluation is done by comparing the results with a very fast and well-known sum-of-absolute-differences algorithm using several Middlebury datasets and real-world scenarios.
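A minimal sketch of a Census transform with a sparse sampling mask, as described above; the particular sparse pattern (every second offset of a square neighbourhood) and the mask size are illustrative assumptions, not the exact mask used in the paper.

```python
import numpy as np

def sparse_census(img, radius=4, step=2):
    """Sparse Census transform of a grayscale image.

    Each pixel is described by a bit string recording whether the sampled
    neighbours are brighter than the centre pixel. Sampling only every
    `step`-th offset (the sparse mask) keeps a large neighbourhood while
    cutting the work; the exact mask here is an assumption. Borders are
    handled crudely by wrapping (np.roll), and at most 64 offsets fit in
    the uint64 descriptor.
    """
    img = img.astype(np.int32)
    offsets = [(dy, dx)
               for dy in range(-radius, radius + 1, step)
               for dx in range(-radius, radius + 1, step)
               if not (dy == 0 and dx == 0)]
    bits = np.zeros(img.shape, dtype=np.uint64)
    for i, (dy, dx) in enumerate(offsets):
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        bits |= (shifted > img).astype(np.uint64) << np.uint64(i)
    return bits
```

The per-pixel matching cost between left and right codes is then their Hamming distance (popcount of the XOR), aggregated over a correlation window as in the paper's core algorithm.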

5.
A Database and Evaluation Methodology for Optical Flow
The quantitative evaluation of optical flow algorithms by Barron et al. (1994) led to significant advances in performance. The challenges for optical flow algorithms today go beyond the datasets and evaluation methods proposed in that paper. Instead, they center on problems associated with complex natural scenes, including nonrigid motion, real sensor noise, and motion discontinuities. We propose a new set of benchmarks and evaluation methods for the next generation of optical flow algorithms. To that end, we contribute four types of data to test different aspects of optical flow algorithms: (1) sequences with nonrigid motion where the ground-truth flow is determined by tracking hidden fluorescent texture, (2) realistic synthetic sequences, (3) high frame-rate video used to study interpolation error, and (4) modified stereo sequences of static scenes. In addition to the average angular error used by Barron et al., we compute the absolute flow endpoint error, measures of frame interpolation error, improved statistics, and results at motion discontinuities and in textureless regions. In October 2007, we published the performance of several well-known methods on a preliminary version of our data to establish the current state of the art. We also made the data freely available on the web. Subsequently, a number of researchers have uploaded their results to our website and published papers using the data. A significant improvement in performance has already been achieved. In this paper we analyze the results obtained to date and draw a large number of conclusions from them.
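A minimal sketch of the two headline error measures mentioned above, average angular error (AAE) and average endpoint error (AEE), computed per pixel from estimated and ground-truth flow fields.

```python
import numpy as np

def flow_errors(u, v, u_gt, v_gt):
    """Average angular error (degrees) and average endpoint error (pixels).

    The angular error follows the Barron et al. convention of measuring the
    angle between the 3-D vectors (u, v, 1) and (u_gt, v_gt, 1); the endpoint
    error is the Euclidean distance between the 2-D flow vectors.
    """
    num = u * u_gt + v * v_gt + 1.0
    den = np.sqrt(u ** 2 + v ** 2 + 1.0) * np.sqrt(u_gt ** 2 + v_gt ** 2 + 1.0)
    aae = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0))).mean()
    aee = np.sqrt((u - u_gt) ** 2 + (v - v_gt) ** 2).mean()
    return aae, aee
```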

6.
To improve the efficiency of stereo matching and overcome disparity jumps within processed regions, a belief-propagation stereo matching method based on pixel sets is proposed. The method first uses individual pixels as primitives and obtains fairly accurate initial disparities with a hierarchical belief propagation algorithm. It then segments the reference image, first by color and then by initial disparity, and fits planes to the resulting pixel sets with a split-and-merge strategy to remove the influence of color segmentation errors on matching. Finally, it obtains the solution in the space of fitted pixel sets with a standard belief propagation optimization algorithm. Experiments on standard benchmark images show that the method outperforms comparable methods in both matching efficiency and accuracy.
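A minimal sketch of the per-segment disparity plane fitting step described in the translation above: a least-squares fit of d = a·x + b·y + c to the initial disparities of one segment. The split-and-merge strategy and any outlier rejection are omitted, and the interface is an assumption for illustration.

```python
import numpy as np

def fit_disparity_plane(xs, ys, ds):
    """Least-squares fit of a disparity plane d = a*x + b*y + c.

    xs, ys are the pixel coordinates of one segment and ds the corresponding
    initial disparities from the hierarchical belief propagation stage.
    Returns the plane parameters (a, b, c); robust refinements such as
    iterative reweighting or RANSAC are omitted for brevity.
    """
    A = np.stack([xs, ys, np.ones_like(xs)], axis=1).astype(np.float64)
    params, *_ = np.linalg.lstsq(A, ds.astype(np.float64), rcond=None)
    return params  # a, b, c
```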

7.
Binocular stereo matching has developed rapidly in recent years, and the demand for high accuracy, high resolution and large disparity ranges places ever higher requirements on its computational efficiency. Because the inherent computational complexity of traditional stereo matching algorithms is proportional to the disparity range, they can no longer satisfy high-resolution, large-disparity application scenarios. Considering computational complexity, matching accuracy and matching principles together, a PatchMatch-based semi-global binocular stereo matching algorithm is therefore proposed: a spatial propagation mechanism is used during path cost computation to reduce the possible disparities from the entire disparity range to t candidate disparities (with t far smaller than the disparity range), which significantly reduces the number of candidates and greatly improves the computational efficiency of the semi-global algorithm. Evaluation on the KITTI 2015 dataset shows that the algorithm achieves a clear improvement in accuracy and run time, with a mismatch rate of 5.81% and a matching time of 20.2 s. As an improvement over traditional stereo matching algorithms, this design can therefore provide an efficient solution for large-disparity binocular stereo matching systems.
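A simplified, assumed sketch of the candidate-reduction idea described above: instead of evaluating every disparity at each pixel along an aggregation path, only t candidates are kept, seeded from the previous pixel's best disparities plus a few random samples. The mix of sources and the value of t are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def propagate_candidates(prev_best, t, d_max, rng):
    """Assemble t candidate disparities for the current pixel on a path.

    Candidates come from the previous pixel's best disparities (spatial
    propagation), their +/-1 neighbours (local refinement) and random
    samples over the full range.
    """
    prev_best = np.asarray(prev_best, dtype=np.int64)
    local = np.concatenate([prev_best, prev_best - 1, prev_best + 1])
    random_d = rng.integers(0, d_max + 1, size=t)
    pool = np.clip(np.concatenate([local, random_d]), 0, d_max)

    # Deduplicate while keeping propagated candidates ahead of random ones.
    _, first_idx = np.unique(pool, return_index=True)
    return pool[np.sort(first_idx)][:t]
```

In a full implementation these candidates replace the inner loop over all disparities in the semi-global path cost recursion, which is where the complexity reduction comes from.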

8.
In this paper we present a 3D-vision-based obstacle detection system for an autonomously operating train in open terrain environments. The system produces dense depth data in real time from a stereo camera system with a baseline of 1.4 m to fulfill the accuracy requirements for reliable obstacle detection 80 m ahead. Several modifications have been applied to an existing high-speed stereo engine to significantly improve the overall performance of the system. Hierarchical stereo matching and slanted correlation masks increased the quality of the depth data such that the obstacle detection rate rose from 89.4% to 97.75%, while the false positive detection rate could be kept as low as 0.25%. The evaluation results were obtained from extensive real-world test data. A further stereo matching speed-up by a factor of 2.15 was achieved, and the overall latency of obstacle detection is considerably below 300 ms.

9.
From Multiple Stereo Views to Multiple 3-D Surfaces

10.
Stereo matching is a fundamental and crucial problem in computer vision. Over the last decades, many researchers have worked on it and made great progress. Stereo algorithms can generally be classified into local methods and global methods. In this paper, the challenges of stereo matching are first introduced, and then we focus on local approaches, which have simpler structures and higher efficiency than global ones. Local algorithms generally perform four steps: cost computation, cost aggregation, disparity computation and disparity refinement. Each step is investigated in depth, and most work focuses on cost aggregation. We study most of the classical local methods and divide them into several classes. The classification illustrates the development history of local stereo correspondence and reveals the essence of local matching along with its key challenges. Finally, we outline future development trends for local methods.
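A minimal end-to-end sketch of the four-step local pipeline enumerated above (cost computation, cost aggregation, disparity computation, disparity refinement), using absolute differences, box aggregation, winner-takes-all selection and a median filter as deliberately simple placeholder choices; none of these choices is prescribed by the survey.

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def local_stereo(left, right, d_max, radius=3):
    """Four-step local stereo matching with simple placeholder components."""
    h, w = left.shape
    left = left.astype(np.float64)
    right = right.astype(np.float64)
    cost_volume = np.empty((d_max + 1, h, w))

    for d in range(d_max + 1):
        # 1. Cost computation: per-pixel absolute difference at disparity d.
        shifted = np.empty_like(right)
        shifted[:, d:] = right[:, :w - d]
        shifted[:, :d] = left[:, :d]      # crude border fill: zero cost there
        cost = np.abs(left - shifted)
        # 2. Cost aggregation: box filter over the support window.
        cost_volume[d] = uniform_filter(cost, size=2 * radius + 1)

    # 3. Disparity computation: winner-takes-all over the cost volume.
    disparity = cost_volume.argmin(axis=0).astype(np.float64)
    # 4. Disparity refinement: median filter removes isolated outliers.
    return median_filter(disparity, size=3)
```

The surveyed local methods mainly differ in step 2, i.e. how the support weights or windows for aggregation are chosen.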
