Similar Literature (20 results)
1.
Dense stereo algorithms are able to estimate disparities at all pixels, including untextured regions. Typically these disparities are evaluated at integer disparity steps, and a subsequent sub-pixel interpolation often fails to propagate smoothness constraints on a sub-pixel level. We propose to increase the sub-pixel accuracy in low-textured regions in four ways: First, we present an analysis that shows the benefit of evaluating the disparity space at fractional disparities. Second, we introduce a new disparity smoothing algorithm that preserves depth discontinuities and enforces smoothness on a sub-pixel level. Third, we present a novel stereo constraint (the gravitational constraint) that assumes sorted disparity values in the vertical direction and guides global algorithms to reduce false matches, especially in low-textured regions. Finally, we show how image sequence analysis improves stereo accuracy without explicitly performing tracking. Our goal in this work is to obtain an accurate 3D reconstruction; large-scale 3D reconstruction benefits heavily from these sub-pixel refinements. Results based on semi-global matching, obtained with the above-mentioned algorithmic extensions, are shown for the Middlebury stereo ground-truth data sets. The presented improvements, called ImproveSubPix, turn out to be among the top-performing algorithms when the set is evaluated on a sub-pixel level, while being computationally efficient. Additional results are presented for urban scenes. The four improvements are independent of the underlying type of stereo algorithm.
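The sub-pixel interpolation step this abstract improves on is conventionally a parabola fit through the cost at the integer minimum and its two neighbors; a minimal sketch (the function name and cost layout are illustrative, not from the paper):

```python
def subpixel_refine(costs, d):
    """Parabola fit through the cost at integer minimum d and its two neighbors.

    costs: per-disparity matching costs at one pixel; d must be an interior index.
    Returns the fractional disparity at the parabola's vertex.
    """
    c0, cm, cp = costs[d], costs[d - 1], costs[d + 1]
    denom = cm - 2.0 * c0 + cp  # parabola curvature
    if denom == 0:
        return float(d)  # degenerate (flat) cost; keep the integer estimate
    return d + (cm - cp) / (2.0 * denom)
```

Because the fit acts on a single pixel's cost curve, it cannot enforce any neighborhood smoothness, which is exactly the limitation the paper's fractional-disparity evaluation addresses.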

2.
Objective: Stereo matching is an important research direction in binocular computer vision, and algorithms fall into two broad classes, global and local. Traditional local stereo matching algorithms have low computational complexity and can meet real-time requirements, but they fail to fully exploit the edge and texture information of the image, so matching accuracy in non-occluded and disparity-discontinuous regions suffers. We therefore propose a stereo matching method that fuses edge preservation with improved cost aggregation. Method: First, a weight matrix is built from the spatial edge information of the image and fused with the absolute intensity difference and gradient costs to form a new cost measure; the weights of edge-region pixels are also combined with the regularization term of the guided filter, and cost aggregation is performed in a multi-resolution framework. Disparity computation on the result yields an initial disparity map, and disparity refinement steps such as left-right consistency checking and weighted median filtering produce the final disparity map. Results: Experiments on the Middlebury stereo benchmark show that fusing edge weight information discriminates the costs of edge pixels more effectively and improves matching accuracy in all regions. Without the disparity refinement step, the average mismatch rate over the 21 extended image pairs drops by 3.48% relative to the unimproved method, and PSNR improves by 3.57 dB; on venus among the four standard images, the mismatch rate in non-occluded regions after refinement is only 0.18%. Conclusion: The multi-scale stereo matching algorithm with edge preservation effectively improves matching accuracy at edges and textures and further reduces mismatch rates in non-occluded and disparity-discontinuous regions.

3.
To address the high mismatch rate of traditional local stereo matching algorithms in depth-discontinuous regions, an occlusion-aware stereo matching algorithm based on adaptive weights is proposed. First, a left-right consistency check detects the occluded regions of the reference and target images. Then, using this occlusion information, the weights of occluded pixels are reduced in the cost aggregation stage, and in the disparity refinement stage a scanline propagation scheme fills the disparity of occluded regions from the horizontally nearest valid point. Finally, mismatch rates are computed against the ground-truth disparity maps provided by the Middlebury dataset. Experimental results show that the proposed occlusion-aware adaptive-weight algorithm reduces the mismatch rate by 16% relative to the adaptive-weight algorithm, mitigating the high mismatch rate of local stereo matching in depth-discontinuous regions and improving matching accuracy.
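The left-right consistency check and the horizontal nearest-point filling described above can be sketched as follows (a simplified illustration with assumed function names and tolerance; the paper's occlusion-dependent weighting is not reproduced):

```python
import numpy as np

def lr_consistency_mask(disp_left, disp_right, tol=1):
    """True where the left disparity agrees with the right map; False marks likely occlusions."""
    h, w = disp_left.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # a left pixel (y, x) with disparity d should correspond to right pixel (y, x - d)
    xr = np.clip(xs - disp_left.astype(int), 0, w - 1)
    return np.abs(disp_left - disp_right[ys, xr]) <= tol

def fill_occlusions(disp, valid):
    """Scanline propagation: fill invalid pixels from the nearest valid disparity to the left
    (a leading invalid run takes the first valid value on the line)."""
    out = disp.astype(float).copy()
    for y in range(out.shape[0]):
        row, ok = out[y], valid[y]
        idx = np.flatnonzero(ok)
        if idx.size == 0:
            continue  # nothing valid on this scanline
        last = row[idx[0]]
        for x in range(len(row)):
            if ok[x]:
                last = row[x]
            else:
                row[x] = last
    return out
```

Real occlusion filling often prefers the smaller of the left and right neighboring disparities (occluded surfaces belong to the background); the single-direction fill above is kept minimal for clarity.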

4.
As a key step in binocular 3D reconstruction, binocular stereo matching converts planar vision into stereoscopic vision, but balancing the speed and accuracy of such algorithms remains a difficult problem. Targeting the low matching accuracy of existing local stereo matching algorithms in weakly textured and depth-discontinuous regions, while also considering real-time performance, an improved stereo matching algorithm based on cross-scale guided filtering is proposed. The AD and Census transform cost measures are first fused, and cross-scale guided filtering is then used for cost aggregation. During disparity computation, a decision criterion judges whether the disparity corresponding to the minimum aggregated cost at each pixel is reliable; when it is not, an adaptive window based on gradient similarity is constructed around the pixel and used to correct its disparity. A final disparity refinement yields the disparity map. Experiments on standard stereo pairs from the Middlebury benchmark show higher accuracy than the traditional guided-filter-based stereo matching algorithm.
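The AD and Census cost fusion in the first step is commonly implemented as a robust exponential blend; a minimal sketch (the 3×3 census window, the λ parameters, and the blend form are conventional AD-Census choices, not values taken from this paper, and wrap-around at image borders is ignored):

```python
import numpy as np

def census_transform(img, r=1):
    """Census transform: a bit string comparing each neighbor against the window center."""
    codes = np.zeros(img.shape, dtype=np.uint32)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            codes = (codes << 1) | (shifted < img)  # append one comparison bit
    return codes

def hamming(a, b):
    """Per-pixel Hamming distance between two census code maps."""
    return np.array([bin(int(x)).count("1") for x in (a ^ b).ravel()]).reshape(a.shape)

def ad_census_cost(left, right, d, lam_ad=10.0, lam_cen=30.0):
    """Fused cost at disparity d: each term is squashed to [0, 1) before summing,
    so neither measure can dominate the other."""
    right_sh = np.roll(right, d, axis=1)  # align right image to disparity d
    ad = np.abs(left.astype(float) - right_sh)
    cen = hamming(census_transform(left), census_transform(right_sh))
    return (1 - np.exp(-ad / lam_ad)) + (1 - np.exp(-cen / lam_cen))
```

The exponential squashing is what makes the fusion robust: an outlier in either term saturates near 1 instead of swamping the sum.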

5.
Objective: Starting from the observation that disparity maps reflect scene depth and are thus "homologous" with the range measurements of a LiDAR system, an automatic registration method for stereo aerial imagery and LiDAR point clouds based on disparity mutual information is proposed. Method: The method has three stages. First, semi-global matching (SGM) generates a dense disparity map for the stereo aerial pair. Second, using the interior orientation of the aerial images and initial registration parameters (exterior orientation elements), the LiDAR point cloud is rendered through a pinhole projection into a simulated gray-level image, a LiDAR depth image, whose spatial resolution, geometric deformation, and frame size are close to those of the stereo imagery to be registered; mutual information serves as the similarity measure for estimating the geometric mapping between the aerial disparity map and the LiDAR depth image, from which an approximate correlation of the LiDAR point cloud image is obtained. Third, taking the approximate corresponding points from that correlation as observations, weighted by disparity mutual information, photogrammetric space resection is performed to obtain refined exterior orientation elements; a new LiDAR depth image is then generated and the process repeated until a given iteration criterion is met. Results: A registration experiment on a stereo aerial pair with about 60% overlap, a 7,216 × 5,428 pixel frame, and about 0.5 m spatial resolution, together with a LiDAR point cloud with about 1.5 m average point spacing and about 25 cm horizontal accuracy, achieved registration accuracy close to one pixel. Conclusion: The experiments show that the method is highly automated with moderate registration accuracy, is in principle applicable to stereo aerial images of different scene types with known camera interior parameters, and has good practical value.

6.
In stereo vision, obtaining high-quality matches in disparity-discontinuous scenes, such as occluded regions and object boundaries, is an important research topic. An improved matching algorithm is therefore proposed: after preprocessing the left and right images with a LoG operator, conventional SAD window matching is combined with an improved sub-window technique. Its outstanding advantages are that it is simple and efficient and can supplement any existing matching algorithm to increase matching accuracy.
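Conventional SAD window matching, the baseline the sub-window technique extends, can be sketched as follows (function name, window radius, and the winner-take-all selection are illustrative; the LoG preprocessing and sub-window refinement are omitted):

```python
import numpy as np

def sad_disparity(left, right, max_d, r=1):
    """Winner-take-all disparity from SAD costs over a (2r+1) x (2r+1) window."""
    h, w = left.shape
    k = 2 * r + 1
    costs = np.full((max_d + 1, h, w), np.inf)  # inf where left[x] has no match at d
    for d in range(max_d + 1):
        # absolute difference between left pixels and their disparity-d counterparts
        diff = np.abs(left[:, d:].astype(float) - right[:, :w - d])
        p = np.pad(diff, r, mode="edge")
        # box-sum the differences: the SAD aggregation window
        sad = sum(p[dy:dy + diff.shape[0], dx:dx + diff.shape[1]]
                  for dy in range(k) for dx in range(k))
        costs[d, :, d:] = sad
    return costs.argmin(axis=0)  # winner take all per pixel
```

A single fixed window is exactly what fails at disparity discontinuities: the window straddles two depths, which is what sub-window and adaptive-window schemes are designed to avoid.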

7.
Stereo images acquired by a stereo camera setup provide depth estimation of a scene. Numerous machine vision applications deal with retrieval of 3D information. Disparity map recovery from a stereo image pair involves computationally complex algorithms. Previous methods of disparity map computation are mainly restricted to software-based techniques on general-purpose architectures, presenting relatively high execution times. In this paper, a new hardware-implemented real-time disparity map computation module is realized: a parallel-pipelined design built around a hardware-based fuzzy inference system, implemented on a single FPGA device with a typical operating frequency of 138 MHz. It provides accurate disparity map computation at a rate of nearly 440 frames per second for a stereo image pair with a disparity range of 80 pixels and 640 × 480 pixel spatial resolution. The proposed method allows a fast disparity map computation module to be built, suitable for real-time stereo vision applications.

8.
Stereo matching is one of the most used algorithms in real-time image processing applications such as positioning systems for mobile robots, three-dimensional building mapping and recognition, and detection and three-dimensional reconstruction of objects. To improve performance, stereo matching algorithms have often been implemented in dedicated hardware such as FPGA or GPU devices. In this paper an FPGA stereo matching unit based on fuzzy logic is described. The proposed algorithm consists of three stages. First, three similarity parameters inherent to each pixel contained in the input stereo pair are computed. Then, the similarity parameters are sent to a fuzzy inference system which determines a fuzzy-similarity value. Finally, the disparity value is defined as the index which maximizes the fuzzy-similarity values (zero up to dmax). Dense disparity maps are computed at a rate of 76 frames per second for input stereo pairs of 1280 × 1024 pixel resolution and a maximum expected disparity of 15. The developed FPGA architecture reduces hardware resource demand compared to other FPGA-based stereo matching algorithms, by nearly 72.35% for logic units and nearly 32.24% for bits of memory. In addition, it increases processing speed by nearly 34.90% in pixels per second and outperforms the accuracy of most real-time stereo matching algorithms in the state of the art.

9.
In stereo vision, disparity indirectly reflects object depth, and disparity computation is the basis of depth computation. Most research on disparity computation targets binocular stereo vision, whereas the disparity distribution of bifocal monocular stereo vision differs from binocular disparity in that it radiates along the epipolar lines. Exploiting this property, a disparity computation method for monocular stereo is proposed. In the initially computed disparity map, disparity points are classified into correctly matched points and mismatched points. Using the mean-shift algorithm, the disparities of mismatched points are estimated from the matched points and an image segmentation, finally yielding a dense and accurate disparity map. Experiments show that this method can efficiently obtain scene disparity maps from bifocal stereo image pairs.

10.
To address the low matching accuracy that arises because traditional stereo matching algorithms cannot simultaneously provide suitably sized aggregation windows for both image edges and low-texture regions, a stereo matching algorithm combining a Gaussian mixture model with a minimum spanning tree structure is proposed. The image is first divided into initial regions and candidate pixels awaiting segmentation using the initial disparity, pixel color, and distance information. The parameters of each region are then updated by parallel iteration under the Gaussian mixture model to obtain the final segmentation; a minimum spanning tree is built on each segment to compute aggregated costs and derive disparities. Finally, mismatched points are corrected from the valid disparities in their neighborhood to obtain a dense disparity map of high accuracy. Compared with other algorithms, the proposed method effectively reduces the mismatch rate, with especially marked improvement in depth-discontinuous regions.

11.
A stereo matching algorithm using dynamic programming and left-right consistency
Stereo matching is an important research topic in computer vision. To obtain accurate, dense disparity maps, a stereo matching algorithm using dynamic programming and left-right consistency is proposed. The algorithm first takes the left and right images in turn as the base image and computes a disparity-space image for each; dynamic programming on these disparity-space images produces a left disparity map and a right disparity map. The consistency relation between the two maps is then used to eliminate mismatched points, yielding a fairly accurate partial disparity map. Finally, using the ordering constraint of the disparity map, a method for computing the search space of the unmatched disparity points is given, and a simple, effective method computes the disparities of these points. Experiments on several standard stereo pairs show that the algorithm performs well.
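The dynamic programming pass over a disparity-space image can be sketched per scanline as follows (a single linear smoothness penalty is assumed for illustration; the paper's exact cost terms are not reproduced):

```python
import numpy as np

def dp_scanline(costs, p_smooth=1.0):
    """Optimal disparity path along one scanline.

    costs: (W, D) matching cost per pixel and disparity.
    Minimizes sum of matching costs plus p_smooth * |d - d'| for adjacent pixels.
    """
    W, D = costs.shape
    acc = costs.astype(float).copy()       # accumulated cost table
    back = np.zeros((W, D), dtype=int)     # backpointers for path recovery
    penalty = p_smooth * np.abs(np.arange(D)[:, None] - np.arange(D)[None, :])
    for x in range(1, W):
        trans = acc[x - 1][None, :] + penalty  # [d_current, d_previous]
        back[x] = trans.argmin(axis=1)
        acc[x] += trans.min(axis=1)
    # backtrack the minimum-cost path
    d = np.empty(W, dtype=int)
    d[-1] = acc[-1].argmin()
    for x in range(W - 2, -1, -1):
        d[x] = back[x + 1, d[x + 1]]
    return d
```

Unlike per-pixel winner-take-all, the transition penalty lets strong neighbors outvote an isolated noisy minimum, which is the property the left-right consistency step then exploits to isolate the remaining mismatches.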

12.
We present a new feature-based algorithm for stereo correspondence. Most previous feature-based methods match sparse features like edge pixels, producing only sparse disparity maps. Our algorithm detects and matches dense features between the left and right images of a stereo pair, producing a semi-dense disparity map. Our dense feature is defined with respect to both images of a stereo pair, and it is computed during the stereo matching process, not in a preprocessing step. In essence, a dense feature is a connected set of pixels in the left image and a corresponding set of pixels in the right image such that the intensity edges on the boundary of these sets are stronger than their matching error (which is the difference in intensities between corresponding boundary pixels). Our algorithm produces accurate semi-dense disparity maps, leaving featureless regions in the scene unmatched. It is robust, requires little parameter tuning, can handle brightness differences between images and nonlinear errors, and is fast (linear complexity).

13.
A fast inverse mapping algorithm for generating the destination image at the current viewpoint
郑新, 吴恩华. 《软件学报》, 2001, 12(11): 1667-1674
Exploiting the properties of epipolar lines and the boundary information implicit in depth images, a fast inverse mapping algorithm that can handle depth-discontinuous images is proposed for accurately synthesizing the destination image at the current viewpoint from multiple reference images. The algorithm has three steps. First, boundaries in each reference image are derived from its depth information. Then one reference image is selected as the main reference; using the global matching property of epipolar lines and the monotonic distribution of corresponding points along them, the destination epipolar lines are processed one by one to generate the destination image. Finally, holes in the destination image are filled from the other reference images. Because the second step only needs to process the boundary point pairs of the reference image to obtain the depths of all points on the corresponding destination epipolar line and their correspondences on the reference epipolar line, the new algorithm achieves a substantial speedup. Acceleration methods for hole filling that exploit the reference images' boundary information and the occlusion relations it implies are also proposed.

14.
A cooperative algorithm for stereo matching and occlusion detection
Presents a stereo algorithm for obtaining disparity maps with occlusion explicitly detected. To produce smooth and detailed disparity maps, two assumptions that were originally proposed by Marr and Poggio (1976, 1979) are adopted: uniqueness and continuity. That is, the disparity maps have a unique value per pixel and are continuous almost everywhere. These assumptions are enforced within a three-dimensional array of match values in disparity space. Each match value corresponds to a pixel in an image and a disparity relative to another image. An iterative algorithm updates the match values by diffusing support among neighboring values and inhibiting others along similar lines of sight. By applying the uniqueness assumption, occluded regions can be explicitly identified. To demonstrate the effectiveness of the algorithm, we present the processing results from synthetic and real image pairs, including ones with ground-truth values for quantitative comparison with other methods.
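The support-diffusion and inhibition update can be illustrated with a much-simplified iteration (a 3×3 support region within each disparity slice and power-law inhibition normalized over the disparity axis; the parameters and exact update are illustrative, not Zitnick and Kanade's scheme, and borders wrap for brevity):

```python
import numpy as np

def cooperative_update(match, iters=2, alpha=2.0):
    """Simplified cooperative iteration on a (H, W, D) array of match values:
    diffuse support among neighbors at the same disparity, then inhibit
    competing disparities along each line of sight by normalization."""
    m = match.astype(float).copy()
    for _ in range(iters):
        # support: 3x3 average within each disparity slice
        s = np.zeros_like(m)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                s += np.roll(np.roll(m, dy, axis=0), dx, axis=1)
        s /= 9.0
        # inhibition: amplify winners, then normalize over the disparity axis
        s = s ** alpha
        m = s / (s.sum(axis=2, keepdims=True) + 1e-12)
    return m
```

After a few iterations a disparity with consistent neighborhood support approaches 1 while competitors are suppressed; pixels where no disparity wins clearly are the candidates the uniqueness assumption flags as occluded.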

15.
In stereoscopic video coding, the inter-view correlation between the images of a stereo pair can be used for error concealment. A new spatial error concealment method for stereoscopic video coding based on pixel matching in the decoder is proposed in this paper. The lost macroblocks are recovered by utilizing disparity matching between the two views on a pixel-by-pixel basis. First, we get the candidate disparity vectors of the four neighboring pixels of the lost pixel by disparity matching in the decoder. Second, by calculating the boundary pixel difference, we determine an optimal replacing pixel in the reference image, and then we recover the lost pixel with that optimal pixel. Experimental results show that the proposed algorithm performs better than the previous technique.

16.
Building upon recent developments in optical flow and stereo matching estimation, we propose a variational framework for the estimation of stereoscopic scene flow, i.e., the motion of points in the three-dimensional world from stereo image sequences. The proposed algorithm takes into account image pairs from two consecutive times and computes both depth and a 3D motion vector associated with each point in the image. In contrast to previous works, we partially decouple the depth estimation from the motion estimation, which has many practical advantages. The variational formulation is quite flexible and can handle both sparse or dense disparity maps. The proposed method is very efficient; with the depth map being computed on an FPGA, and the scene flow computed on the GPU, the proposed algorithm runs at frame rates of 20 frames per second on QVGA images (320×240 pixels). Furthermore, we present solutions to two important problems in scene flow estimation: violations of intensity consistency between input images, and the uncertainty measures for the scene flow result.

17.
This work presents a novel approach for both stereo and optical flow that deals with large displacements, depth/motion discontinuities and occlusions. The proposed method comprises two main steps. First, a novel local stereo matching algorithm is presented, whose main novelty lies in the block-matching aggregation step. We adopt an adaptive support weights approach in which the weight distribution favors pixels that share the same displacement with the reference one. State-of-the-art methods make the weight function depend only on image features. On the contrary, the proposed weight function depends additionally on the tested shift, by giving more importance to those pixels in the block-matching with smaller cost, as these are supposed to have the tested displacement. Moreover, the method is embedded into a pyramidal procedure to locally limit the search range, which helps to reduce ambiguities in the matching process and saves computational time. Second, the non-dense local estimation is filtered and interpolated by means of a new variational formulation making use of intermediate scale estimates of the local procedure. This permits keeping the fine details estimated at full resolution while being robust to noise and untextured areas using estimates at coarser scales. The introduced variational formulation as well as the block-matching algorithm are robust to illumination changes. We test our algorithm on public datasets for both stereo and optical flow, showing competitive results.

18.
Stereo matching is a challenging problem, and highly accurate depth images are important in many applications. The main problem is to estimate the correspondence between two pixels in a stereo pair. To solve this problem, several cost aggregation methods aimed at improving the quality of stereo matching algorithms have been introduced over the last decade. We propose a new cost aggregation method based on the weighted guided image filter (WGIF) for local stereo matching. The proposed algorithm solves multi-label problems in three steps. First, the cost volume is constructed using pixel-wise matching cost computation functions. Then, each slice of the cost volume is independently filtered using the WGIF, which substitutes for the smoothness term in the energy function. Finally, the disparity of each pixel is simply computed. The WGIF uses local weights based on a variance window of pixels in a guidance image for cost volume filtering. Experimental results on the Middlebury stereo benchmark verify that the proposed method is effective due to its high-quality cost volume filter.
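The three steps (cost volume construction, per-slice filtering, per-pixel winner selection) can be sketched with a plain box filter standing in for the WGIF; all names, the window radius, and the neutral cost used where the views do not overlap are assumptions of this sketch:

```python
import numpy as np

def box_filter(x, r=1):
    """Simple box filter, a stand-in here for the weighted guided image filter."""
    k = 2 * r + 1
    p = np.pad(x, r, mode="edge")
    return sum(p[dy:dy + x.shape[0], dx:dx + x.shape[1]]
               for dy in range(k) for dx in range(k)) / k**2

def filtered_disparity(left, right, max_d, r=1):
    """Build the cost volume, filter each slice independently, take the per-pixel minimum."""
    h, w = left.shape
    vol = np.empty((max_d + 1, h, w))
    for d in range(max_d + 1):
        # pixel-wise matching cost (absolute difference) at disparity d
        ad = np.abs(left[:, d:].astype(float) - right[:, :w - d])
        slice_ = np.full((h, w), 1.0)  # neutral cost where the views do not overlap
        slice_[:, d:] = ad
        vol[d] = box_filter(slice_, r)  # smoothness via filtering, per slice
    return vol.argmin(axis=0)
```

The key design point survives the substitution: because each slice is filtered independently, the filter plays the role of the smoothness term without requiring a global energy minimization; an edge-aware filter such as the WGIF keeps that smoothing from bleeding across depth edges.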

19.
This paper discusses image segmentation for stereo pairs and proposes an object segmentation algorithm based on depth and color information. The algorithm first over-segments the target image moderately with the clustering-based mean-shift algorithm, while a binocular stereo algorithm obtains a dense depth map of the stereo pair. Seed points for a subsequent "fine" segmentation are then selected from the over-segmentation according to depth discontinuities; regions without seed labels are labeled by graph cuts, and adjacent regions that carry different labels but share no depth-discontinuous boundary are merged. Compared with traditional image segmentation algorithms, the method effectively overcomes over- and under-segmentation and produces segmentations with a degree of semantic meaning. Comparative experiments verify the effectiveness of the algorithm.

20.
This paper proposes a new local stereo matching algorithm. Pixels in the reference image are first classified as homogeneous or heterogeneous. For heterogeneous pixels, energy is aggregated along N directions, a winner-take-all (WTA) rule selects the optimal disparity in each direction, and the most frequent of these disparities is taken as the pixel's final disparity; homogeneous pixels are matched by aggregating more pixels with a movable rectangular window. Finally, a fast and effective post-processing step removes noise from the resulting disparity map. Experiments show that the algorithm achieves high disparity accuracy while remaining efficient, especially in disparity-discontinuous and textureless regions.
