Similar Literature
20 similar records found.
1.
Segment-based disparity estimation methods have been proposed in many different ways. Most of these studies build on the hypothesis that no large disparity jump exists within a segment. When this hypothesis does not hold, it is difficult for these methods to estimate disparities correctly. Therefore, these methods work well only when the images are initially over-segmented, but not in under-segmented cases. To solve this problem, we present a new segment-based stereo matching method which consists of two algorithms: a cost volume watershed (CVW) algorithm and a region merging (RM) algorithm. For incorrectly under-segmented regions, where pixels on different objects are grouped into one segment, the CVW algorithm regroups the pixels on different objects into different segments and estimates disparities for the pixels of each segment accordingly. Unreliable and occluded regions are merged into neighboring reliable segments for robust disparity estimation. The comparison between our method and current state-of-the-art methods shows that our method is very competitive and is particularly robust when the images are initially under-segmented.
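As a point of reference for the cost-volume terminology used above, here is a minimal sketch (not the authors' implementation) of building a per-pixel matching cost volume with an absolute-difference cost; the array names and disparity range are illustrative assumptions.

```python
import numpy as np

def build_cost_volume(left_gray, right_gray, max_disp):
    """Absolute-difference cost volume: cost[d, y, x] = |L(y, x) - R(y, x - d)|."""
    h, w = left_gray.shape
    cost = np.full((max_disp, h, w), 255.0, dtype=np.float32)
    for d in range(max_disp):
        # Shift the right image by d pixels and compare it with the left image.
        cost[d, :, d:] = np.abs(left_gray[:, d:].astype(np.float32) -
                                right_gray[:, :w - d].astype(np.float32))
    return cost

# A simple winner-take-all disparity map is then: disparity = np.argmin(cost, axis=0)
```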

2.
Depth segmentation faces the challenge of separating objects from their supporting surfaces in a noisy environment. To address this, a novel segmentation scheme based on disparity analysis is proposed. First, we transform a depth scene into the corresponding U-V disparity maps. Second, we apply a region-based detection method to divide the object region into several targets in the processed U-disparity map. Third, since horizontal plane regions map to slanted lines in the V-disparity map, the Random Sample Consensus (RANSAC) algorithm is improved to fit multiple such lines. Noise regions are reduced by image-processing strategies throughout these steps. We evaluate our approach on both real-world scenes and public data sets to verify its flexibility and generalization. Extensive experimental results indicate that the algorithm can efficiently segment and label a full-view scene into a group of valid regions while removing surrounding noise regions.
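For readers unfamiliar with U-V disparity maps, here is a minimal sketch, assuming an integer-valued disparity image, of how they are typically computed as column-wise and row-wise disparity histograms (illustrative only, not the authors' code).

```python
import numpy as np

def uv_disparity(disp, max_disp):
    """U-disparity: histogram of disparities per column; V-disparity: per row."""
    h, w = disp.shape
    u_disp = np.zeros((max_disp, w), dtype=np.uint32)
    v_disp = np.zeros((h, max_disp), dtype=np.uint32)
    valid = (disp >= 0) & (disp < max_disp)
    ys, xs = np.nonzero(valid)
    ds = disp[ys, xs].astype(int)
    # Accumulate one vote per valid pixel into the column (U) and row (V) histograms.
    np.add.at(u_disp, (ds, xs), 1)
    np.add.at(v_disp, (ys, ds), 1)
    return u_disp, v_disp
```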

3.
Conventional stereo matching algorithms based on global optimization are computationally expensive and match poorly in occluded regions and at disparity discontinuities. A global optimization algorithm based on the Tao stereo matching framework is proposed. First, an efficient local algorithm produces the initial matching disparities; then a confidence check is applied to these disparities, and a robust, low-complexity algorithm corrects the disparities of unreliable pixels using the reliable pixels and a disparity-plane assumption; finally, belief propagation is improved so that message passing can be stopped adaptively at converged nodes, and the corrected initial matches are refined to improve accuracy in weakly textured regions. Experimental results show that the algorithm effectively reduces the overall mismatch rate and improves matching accuracy at disparity discontinuities and in occluded regions, while also lowering the overall complexity and keeping the speed acceptable, which makes it reasonably practical.
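One common way to realize the confidence (reliability) check mentioned above is a left-right consistency test; the following is a minimal sketch under that assumption (not necessarily the exact check used in the paper).

```python
import numpy as np

def left_right_check(disp_left, disp_right, tol=1.0):
    """Mark a left-image pixel reliable if the right-image disparity it points to agrees."""
    h, w = disp_left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    target = np.clip(xs - np.round(disp_left).astype(int), 0, w - 1)
    reliable = np.abs(disp_left - disp_right[ys, target]) <= tol
    return reliable
```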

4.
This paper describes a dense stereo matching algorithm for epipolar rectified images. The method applies colour segmentation to the reference image. Our basic assumptions are that disparity varies smoothly inside a segment, while disparity boundaries coincide with the segment borders. These assumptions make the algorithm capable of handling large untextured regions, estimating precise depth boundaries and propagating disparity information to occluded regions, which are challenging tasks for conventional stereo methods. We model disparity inside a segment by a planar equation. Initial disparity segments are clustered to form a set of disparity layers, which are planar surfaces that are likely to occur in the scene. Assignments of segments to disparity layers are then derived by minimizing a global cost function. This cost function is based on the observation that occlusions cannot be handled in the domain of segments. Therefore, we propose a novel cost function defined on two levels, one representing the segments and the other corresponding to pixels. The basic idea is that a pixel has to be assigned to the same disparity layer as its segment, but can also be labelled as occluded. The cost function is then effectively minimized via graph cuts. In the experimental results, we show that our method produces good-quality results, especially in regions of low texture and close to disparity boundaries. Results obtained for the Middlebury test set indicate that the proposed method is able to compete with the best-performing state-of-the-art algorithms.
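The planar disparity model d(x, y) = a·x + b·y + c used inside each segment can be fitted by ordinary least squares; a minimal sketch, assuming lists of pixel coordinates and disparities for one segment (illustrative, not the paper's robust fitting procedure):

```python
import numpy as np

def fit_disparity_plane(xs, ys, disps):
    """Least-squares fit of d = a*x + b*y + c over one segment's reliable pixels."""
    A = np.column_stack([xs, ys, np.ones_like(xs, dtype=np.float64)])
    (a, b, c), *_ = np.linalg.lstsq(A, disps.astype(np.float64), rcond=None)
    return a, b, c
```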

5.
李小晗  陈璐  周翔 《红外与激光工程》2020,49(6):20200085-1-20200085-8
Noise is a major factor affecting image segmentation. This paper proposes a segmentation scheme that accurately extracts multiple object regions in noisy real-world scenes. A binocular structured-light system based on sinusoidal fringe projection is used to obtain a phase map and a disparity map containing the target objects. The disparity map is projected into a U-disparity map, where the different shapes of object and noise regions in that disparity space are exploited: a closed-region detection algorithm gives an initial segmentation of each object, and a fringe-modulation threshold analysis further removes noise in shadow regions, yielding the final, accurate segmentation. Quantitative evaluation shows that the proposed algorithm is robust to noise and effectively separates objects from the horizontal supporting surface; across different scenes it has low computational complexity and strong resistance to interference, with segmentation accuracy above 90% in all cases and up to 99.2%, and an average running time of 27 ms.
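The fringe-modulation threshold mentioned above is commonly computed from N-step phase-shifted fringe images; a minimal sketch of the standard modulation formula follows (the specific phase-shifting scheme is an assumption for illustration, not taken from the paper).

```python
import numpy as np

def fringe_modulation(images):
    """Modulation B for N-step phase shifting: B = (2/N) * sqrt(S^2 + C^2)."""
    imgs = np.stack([im.astype(np.float64) for im in images], axis=0)
    n = imgs.shape[0]
    phases = 2.0 * np.pi * np.arange(n) / n
    s = np.tensordot(np.sin(phases), imgs, axes=1)   # weighted sums over the N frames
    c = np.tensordot(np.cos(phases), imgs, axes=1)
    return 2.0 / n * np.sqrt(s * s + c * c)

# Pixels whose modulation falls below a threshold (e.g., in shadows) can be masked out.
```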

6.
董欣 《电视技术》2014,38(3):1-3,11
Stereo matching is a key technique for obtaining depth maps. The image segmentation used in conventional stereo matching tends to damage image boundaries to some extent. To improve the accuracy of depth-map edges, a stereo matching algorithm based on gradient enhancement is proposed. First, the reference and target images are preprocessed with a gradient operator to make the image boundaries more robust; then the mean-shift segmentation algorithm divides the images into regions, and adaptive block matching is performed on each segmented region; finally, mean filtering is applied as post-processing to obtain the optimal disparity planes. Experimental results show that the algorithm achieves satisfactory matching accuracy at image boundaries.
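Mean-shift segmentation of the kind mentioned above is available in OpenCV as pyramid mean-shift filtering; a minimal sketch (file name and parameter values are illustrative assumptions):

```python
import cv2

img = cv2.imread("left.png")                       # hypothetical reference image
# Spatial window radius sp and color window radius sr control region granularity.
filtered = cv2.pyrMeanShiftFiltering(img, sp=15, sr=30)
cv2.imwrite("left_meanshift.png", filtered)
```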

7.
Scene text segmentation combining color and MGD features with an MRF model
Scene text is difficult to segment effectively because of illumination, complex backgrounds and other factors. To address this, a scene text segmentation method is proposed that fuses color and maximum gradient difference (MGD) features with a Markov random field (MRF) model. First, MGD features, which effectively describe the texture of text, are extracted and combined with color features in a probabilistic framework to model the observed image; then the traditional potential function is improved by taking spatial relations and attribute differences between neighboring pixels into account; finally, an MRF model for scene text segmentation is built and solved quickly with the graph cut algorithm. Experimental results show that combining color and MGD features with the improved potential function considerably improves the segmentation, and the method performs better than other algorithms, especially under uneven illumination and with complex backgrounds.
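The maximum gradient difference (MGD) feature is usually defined as the difference between the largest and smallest horizontal gradient inside a sliding window; a minimal sketch under that common definition (the window size is an illustrative assumption):

```python
import numpy as np
from scipy.ndimage import maximum_filter1d, minimum_filter1d

def mgd_feature(gray, window=21):
    """MGD(x, y) = max - min of the horizontal gradient within a 1-D window on each row."""
    grad = np.gradient(gray.astype(np.float64), axis=1)
    return maximum_filter1d(grad, size=window, axis=1) - \
           minimum_filter1d(grad, size=window, axis=1)
```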

8.
An iterative split-and-merge framework for the segmentation of planar surfaces in disparity space is presented. The disparity of a scene is modeled by approximating the various surfaces in the scene as planar. In the split phase, the number of planar surfaces along with the underlying plane parameters is assumed known from the initialization or from the previous merge phase. Based on these parameters, planar surfaces in the disparity image are labeled so as to minimize the residuals between the actual disparity and the modeled disparity. The labeled planar surfaces are separated into spatially continuous regions, which are treated as candidates for the merging that follows. The regions are merged under a maximum-variance constraint while maximizing the merged area. A multistage branch-and-bound algorithm is proposed to carry out this optimization efficiently. Each stage of the branch-and-bound algorithm separates one planar surface from the set of spatially continuous regions. The multistage merging estimates the number of planar surfaces and their labeling. The splitting and multistage merging are repeated until convergence is reached or satisfactory results are achieved. Experimental results are presented for a variety of stereo image data.
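In the split phase, each pixel is assigned to the plane whose modeled disparity best matches the measured disparity; a minimal sketch of that labeling step, assuming a list of plane parameters (a, b, c) with d = a·x + b·y + c (illustrative only):

```python
import numpy as np

def label_by_plane_residual(disp, planes):
    """Assign each pixel to the plane (a, b, c) minimizing |disp - (a*x + b*y + c)|."""
    h, w = disp.shape
    ys, xs = np.mgrid[0:h, 0:w]
    residuals = np.stack([np.abs(disp - (a * xs + b * ys + c)) for a, b, c in planes])
    return np.argmin(residuals, axis=0)
```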

9.
To improve the semantic segmentation accuracy of traffic scenes, a segmentation method based on RGB-D images and a convolutional neural network is proposed. First, a disparity map is obtained with a semi-global stereo matching algorithm, and a sample library is built by fusing the disparity map D and the RGB image into a four-channel RGB-D image. Then, networks of two different structures are trained, each with its own learning-rate adjustment strategy. Finally, traffic scene semantic segmentation is carried out with the RGB-D image as input, and the results are compared with a segmentation method based on RGB images alone. The experimental results show that the proposed RGB-D traffic scene segmentation algorithm achieves higher semantic segmentation accuracy than its RGB-only counterpart.
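Fusing the disparity map with the RGB image into a four-channel input amounts to simple channel stacking; a minimal sketch (file names and normalization are illustrative assumptions):

```python
import numpy as np
import cv2

rgb = cv2.imread("frame.png")                              # hypothetical H x W x 3 image
disp = cv2.imread("disparity.png", cv2.IMREAD_GRAYSCALE)   # hypothetical H x W disparity map
rgbd = np.dstack([rgb, disp]).astype(np.float32) / 255.0   # H x W x 4 network input
```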

10.
Existing scene flow methods tend to produce blurred motion boundaries in complex scenes and under large displacements and motion occlusions. A binocular scene flow estimation method based on semantic segmentation is proposed. First, a convolutional neural network partitions the image into regions with semantic labels according to the semantic categories present; motion is then modeled separately for regions of different semantic classes, optical flow is computed with the help of this semantic knowledge, and disparity is computed with the semi-global matching method for binocular stereo. Next, the input image is over-segmented into superpixels, and optical flow and disparity are coupled by least squares to solve for the motion parameters of each superpixel. Finally, constraints from the semantic segmentation boundaries are added to the energy function, and the final scene flow estimate is obtained by updating the pixel-to-superpixel and superpixel-to-moving-plane assignments. The method is compared with representative scene flow algorithms on the KITTI 2015 benchmark sequences. Experimental results show that the method is accurate and robust, and in particular preserves motion boundaries well in complex scenes and under motion occlusion.
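Superpixel over-segmentation of the kind used above is commonly done with SLIC; a minimal sketch with scikit-image (SLIC itself and the parameter values are illustrative assumptions, not necessarily the paper's choice):

```python
from skimage import io, segmentation

img = io.imread("kitti_frame.png")                      # hypothetical input frame
labels = segmentation.slic(img, n_segments=1000, compactness=10, start_label=0)
# 'labels' gives each pixel a superpixel id; motion parameters are then solved per id.
```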

11.
任艳楠  刘琚  元辉  顾凌晨 《信号处理》2018,34(5):531-538
This paper proposes an algorithm for geometric segmentation and depth generation of outdoor scene images based on geometric complexity. The algorithm first classifies the geometric structure of an outdoor scene into one of four types from the statistical distribution of the angles of the main line segments in the image. Then the mean-shift algorithm segments the input image into many small regions, which are gradually merged into three large regions according to the scene's geometric structure, each with a consistent depth distribution, thereby achieving the geometric segmentation. Finally, a standard depth map is defined for each geometric type and combined with the geometric segmentation of the input image to obtain its depth map. Experimental results show that geometric segmentation can be achieved from a simple statistical distribution of line-segment angles and a depth map can then be derived; compared with existing algorithms, the proposed one better preserves depth-map detail and comes closer to the true depth of the scene.
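The statistics of line-segment angles mentioned above can be gathered with a probabilistic Hough transform; a minimal OpenCV sketch (file name, thresholds and bin count are illustrative assumptions):

```python
import numpy as np
import cv2

gray = cv2.imread("outdoor.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=40, maxLineGap=5)
angles = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        # Undirected line orientation in [0, 180) degrees.
        angles.append(np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0)
hist, _ = np.histogram(angles, bins=18, range=(0, 180))   # angle distribution, 10-degree bins
```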

12.
In this paper, we propose a fully automatic image segmentation and matting approach for RGB-Depth (RGB-D) data based on iterative transductive learning. The algorithm consists of two key elements: robust hard segmentation for trimap generation, and iterative transductive learning based image matting. The hard segmentation step is formulated as a maximum a posteriori (MAP) estimation problem, where we iteratively perform depth refinement and bi-layer classification to achieve optimal results. For image matting, we propose a transductive learning algorithm that iteratively adjusts the weights between the objective function and the constraints, overcoming common issues such as over-smoothness in existing methods. In addition, we present a new way to form the Laplacian matrix in transductive learning by ranking the similarities of neighboring pixels, which is essential to efficient and accurate matting. Extensive experimental results demonstrate the state-of-the-art performance of our method both subjectively and quantitatively.
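As background, a graph Laplacian over neighboring pixels is typically formed as L = D - W from pairwise affinities; a minimal sketch for a 4-connected grid with Gaussian color affinities (a generic construction for illustration, not the ranking-based one proposed in the paper):

```python
import numpy as np
from scipy import sparse

def grid_laplacian(img, sigma=10.0):
    """L = D - W with Gaussian affinities between 4-connected neighbors."""
    h, w = img.shape[:2]
    idx = np.arange(h * w).reshape(h, w)
    flat = img.reshape(h * w, -1).astype(np.float64)
    rows, cols, vals = [], [], []
    for di, dj in [(0, 1), (1, 0)]:                      # right and down neighbors
        a = idx[:h - di, :w - dj].ravel()
        b = idx[di:, dj:].ravel()
        wgt = np.exp(-np.sum((flat[a] - flat[b]) ** 2, axis=1) / (2 * sigma ** 2))
        rows += [a, b]; cols += [b, a]; vals += [wgt, wgt]   # symmetric affinity matrix
    W = sparse.coo_matrix((np.concatenate(vals),
                           (np.concatenate(rows), np.concatenate(cols))),
                          shape=(h * w, h * w)).tocsr()
    D = sparse.diags(np.asarray(W.sum(axis=1)).ravel())
    return D - W
```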

13.
Conventional local matching algorithms exhibit a staircase effect when matching scenes with slanted surfaces. A local stereo matching algorithm based on optimal slanted-plane parameter estimation is proposed. The algorithm first assigns a random set of plane parameters to every pixel; it then iterates neighborhood propagation and single-pixel refinement of the plane parameters, accepting new parameters whenever they reduce the matching cost of the current pixel over the support window defined by those parameters, so that the result converges to the optimal plane while a dense sub-pixel disparity map is estimated at the same time. Matching experiments on typical slanted-surface scenes and the Middlebury benchmark image pairs show that the algorithm keeps matching quality on ordinary scenes at the current state-of-the-art level while eliminating the staircase effect on slanted surfaces, with a matching rate representative of the state of the art for local matching.
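The random plane initialization described above (in the spirit of PatchMatch-style slanted-plane matching) can be written compactly; a minimal sketch of drawing a random disparity and surface normal per pixel and converting them to plane coefficients (the disparity range and parameterization are illustrative assumptions):

```python
import numpy as np

def random_planes(h, w, max_disp, seed=0):
    """Per-pixel plane d(x, y) = a*x + b*y + c from a random disparity and normal."""
    rng = np.random.default_rng(seed)
    z0 = rng.uniform(0, max_disp, (h, w))                 # random disparity at each pixel
    n = rng.normal(size=(h, w, 3))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    n[..., 2] = np.abs(n[..., 2]) + 1e-6                  # keep the normal facing the camera
    a = -n[..., 0] / n[..., 2]
    b = -n[..., 1] / n[..., 2]
    ys, xs = np.mgrid[0:h, 0:w]
    c = z0 - a * xs - b * ys                              # plane passes through (x, y, z0)
    return a, b, c
```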

14.
沈乐  刘琼 《电子学报》2000,48(10):1909-1914
Extracting regions of interest (RoIs) is challenging in road scenes because the scenes are complex, thermal images carry little texture, and image quality is unstable. Threshold-based RoI extraction focuses on local pedestrian details and relations between neighboring pixels, which easily leads to missed pedestrians, adhesion with the background and broken pedestrians, and the total number of RoIs is hard to control. Imitating human vision, which attends to salient regions together with their position and size, a probability-map RoI extraction method is proposed: a convex-concave mapping curve is designed to remap pixel gray values and enhance image contrast; a saliency map is obtained with the image signature method; the gray-intensity and saliency probability maps are fused and the image foreground is extracted from them; and an algorithm searches the probability-map regions constrained by a road-surface estimate to generate RoIs. Experiments show that, compared with threshold segmentation, the method improves RoI localization accuracy, controls the total number of RoIs and markedly reduces non-pedestrian RoIs; when extracting the same number of RoIs per frame, recall improves by at least 9%.
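The image signature saliency referenced above is, in its original grayscale form, the smoothed square of the inverse DCT of the sign of the DCT of the image; a minimal sketch of that published formulation (the smoothing width is an illustrative assumption):

```python
import numpy as np
from scipy.fftpack import dct, idct
from scipy.ndimage import gaussian_filter

def image_signature_saliency(gray, sigma=5.0):
    """Saliency = Gaussian-smoothed square of IDCT(sign(DCT(image)))."""
    img = gray.astype(np.float64)
    d = dct(dct(img, axis=0, norm='ortho'), axis=1, norm='ortho')
    recon = idct(idct(np.sign(d), axis=0, norm='ortho'), axis=1, norm='ortho')
    sal = gaussian_filter(recon * recon, sigma)
    return sal / (sal.max() + 1e-12)                     # normalize to [0, 1]
```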

15.
When existing multi-objective evolutionary clustering algorithms are applied to image segmentation, they usually cluster at the pixel level, which takes too long and ignores region information, so the segmentation results are unsatisfactory. To improve both the segmentation quality and the time efficiency of multi-objective evolutionary clustering, this paper introduces image region information and partial supervision into it and proposes a region-information-driven multi-objective evolutionary semi-supervised fuzzy clustering algorithm for image segmentation. The algorithm first obtains region information with a superpixel strategy; it then combines partial supervision to design fitness functions that fuse region and supervision information; next, a multi-objective evolutionary strategy optimizes these fitness functions to obtain a set of optimal solutions; finally, an evaluation index fusing region and supervision information is constructed to select one optimal solution from the set. Experimental results show that, compared with existing multi-objective evolutionary clustering algorithms, the algorithm improves both the segmentation quality and the running efficiency.

16.
A template matching algorithm based on image texture
To improve the real-time performance of image matching in practical engineering applications and make it more robust for tracking in complex scenes, a new texture-based template matching (TTM) algorithm is proposed. Binarized texture matrices in the horizontal and vertical directions are extracted from the trend of gray-level change between neighboring pixels; a defined similarity criterion then measures the correlation confidence between the binarized matrices in each of the two directions; finally, the target registration confidence is obtained by combining the matching confidences of the two directions. Experimental results show that the algorithm adapts well to illumination changes and partial occlusion of the target.
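A binarized texture matrix of the kind described above can be read as the sign of the gray-level change between adjacent pixels; a minimal sketch of one plausible construction and a Hamming-style similarity (an interpretation for illustration, not the paper's exact definition):

```python
import numpy as np

def binary_texture(gray):
    """1 where the gray level increases toward the right/bottom neighbor, else 0."""
    g = gray.astype(np.int16)
    horiz = (g[:, 1:] > g[:, :-1]).astype(np.uint8)
    vert = (g[1:, :] > g[:-1, :]).astype(np.uint8)
    return horiz, vert

def similarity(t1, t2):
    """Fraction of positions where two binary texture matrices agree."""
    return float(np.mean(t1 == t2))
```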

17.
We propose a novel algorithm for segmentation of video background models in time-variant scenarios. It is robust to gradual or abrupt illumination changes, diverse kinds of noise, and even scenario variation. The algorithm generates regions according to the scene composition while keeping the region segmentation coherent. The proposed method, based on a discrete-time cellular neural network, estimates the number of regions in the current background model, and then a modified k-means algorithm is used to achieve the segmentation. The findings demonstrate the robustness of the method and its superiority over two state-of-the-art scene segmentation algorithms.

18.
In this paper, we propose a novel framework to extract text regions from scene images with complex backgrounds and multiple text appearances. This framework consists of three main steps: boundary clustering (BC), stroke segmentation, and string fragment classification. In BC, we propose a new bigram-color-uniformity-based method to model both text and the attachment surface, and cluster edge pixels into boundary layers based on color pairs and spatial positions. Then, stroke segmentation is performed at each boundary layer by color assignment to extract character candidates. We propose two algorithms that combine structural analysis of text strokes with color assignment and filter out background interference. Further, we design a robust string fragment classification based on Gabor-based text features. The features are obtained from feature maps of gradient, stroke distribution, and stroke width. The proposed text localization framework is evaluated on scene images, born-digital images, broadcast video images, and images of handheld objects captured by blind persons. Experimental results on the respective datasets demonstrate that the framework outperforms state-of-the-art localization algorithms.
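Gabor-based texture features like those used for string fragment classification are commonly built from a small filter bank; a minimal OpenCV sketch (file name and kernel parameters are illustrative assumptions):

```python
import numpy as np
import cv2

gray = cv2.imread("patch.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # hypothetical input
responses = []
for theta in np.arange(0, np.pi, np.pi / 8):             # 8 orientations
    # Arguments: ksize, sigma, theta, lambda (wavelength), gamma (aspect ratio), psi (phase).
    kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0)
    responses.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
# A simple feature vector: mean and standard deviation of each orientation's response.
features = [r.mean() for r in responses] + [r.std() for r in responses]
```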

19.
Obtaining a good-quality image requires exposure to light for an appropriate amount of time. If there is camera or object motion during the exposure time, the image is blurred. To remove the blur, some recent image deblurring methods estimate a point spread function (PSF) effectively by additionally acquiring a noisy image, and restore a clear latent image with the PSF. Since the ground-truth PSF varies with location, a blockwise approach to PSF estimation has been proposed. However, the block used to estimate a PSF is a rigidly demarcated rectangle, which generally differs from the shape of the actual region over which the PSF can properly be assumed constant. We utilize the fact that a PSF is substantially related to the local disparity between two views. This paper presents a disparity-based method of space-variant image deblurring which employs disparity information in image segmentation, and estimates a PSF and restores a latent image for each region. The segmentation method first over-segments a blurred image into sufficiently many regions based on color, and then merges adjacent regions with similar disparities. Experimental results show the effectiveness of the proposed method.

20.
A fast stereo matching method for small base-to-height ratios
To improve the efficiency of stereo matching and obtain high-precision sub-pixel disparities, this paper proposes a fast stereo matching method for small base-to-height ratios. The method first uses integral images to accelerate the computation of adaptive windows and the normalized cross-correlation measure; it then rejects wrong matches with a reliability constraint, computes sub-pixel disparities for the reliable points with a sub-pixel matching method based on iterative two-fold resampling, and finally obtains a dense sub-pixel disparity map by fitting disparity planes with a graph-segmentation-based method. Experimental results show that the method not only yields high-precision sub-pixel disparities but also improves matching efficiency, meeting the needs of stereo reconstruction with small base-to-height ratios.
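The integral-image trick used above turns any window sum into four table lookups, which is what makes window-based correlation measures cheap; a minimal sketch (illustrative only):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row and column prepended."""
    return np.pad(img.astype(np.float64).cumsum(0).cumsum(1), ((1, 0), (1, 0)))

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] computed from the integral image in O(1)."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
```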
