Similar Documents
20 similar documents found.
1.
Depth from defocus is a widely used approach for recovering scene depth. Traditional depth-from-defocus algorithms usually require capturing several defocused images, which greatly constrains their practical use. This paper proposes a depth recovery algorithm for a single defocused image based on local blur estimation. Under the assumption of local blur consistency, a simple yet effective two-step method recovers the depth of the input image: 1) a sparse blur map at edges is obtained from the ratio of gradients between the input defocused image and a version of it re-blurred with a known Gaussian kernel; 2) the blur values at edge locations are propagated to the whole image, recovering the complete relative depth map. To obtain accurate scene depth, geometric constraints and a sky-region extraction strategy are added to eliminate the ambiguities caused by color, texture, and the focal plane. Comparative experiments on various types of images show that the algorithm recovers depth while effectively suppressing these ambiguities.
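A minimal sketch of step 1 above (the gradient-ratio cue), assuming a grayscale float image and a Gaussian re-blur; the edge-to-full-image propagation step is omitted, and all function and parameter names are illustrative rather than the paper's:

```python
import numpy as np
from scipy import ndimage

def sparse_blur_map(img, sigma0=1.0, edge_thresh=0.05):
    """Estimate defocus blur sigma at edge pixels via the gradient ratio.

    For an ideal step edge blurred by sigma, re-blurring with a known
    sigma0 scales the gradient magnitude so that the ratio r of the two
    magnitudes gives sigma = sigma0 / sqrt(r**2 - 1).
    """
    gy, gx = np.gradient(img)
    g1 = np.hypot(gx, gy)                          # input gradient magnitude

    reblur = ndimage.gaussian_filter(img, sigma0)  # re-blur with known kernel
    ry, rx = np.gradient(reblur)
    g2 = np.hypot(rx, ry)

    edges = g1 > edge_thresh                       # ratio is reliable only at edges
    r = np.where(edges, g1 / np.maximum(g2, 1e-8), 0.0)

    sigma = np.zeros_like(img)
    valid = edges & (r > 1.0)                      # re-blurring shrinks edge gradients, so r > 1
    sigma[valid] = sigma0 / np.sqrt(r[valid] ** 2 - 1.0)
    return sigma, valid                            # sparse blur map and its mask
```

The sparse map would then be propagated to all pixels, e.g. with an edge-aware interpolation, to obtain the full relative depth map described above.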

2.
Constructing a Multivalued Representation for View Synthesis
A fundamental problem in computer vision and graphics is that of arbitrary view synthesis for static 3-D scenes, whereby a user-specified viewpoint of the given scene may be created directly from a representation. We propose a novel compact representation for this purpose called the multivalued representation (MVR). Starting with an image sequence captured by a moving camera undergoing either unknown planar translation or orbital motion, an MVR is derived for each preselected reference frame, and may then be used to synthesize arbitrary views of the scene. The representation itself comprises multiple depth and intensity levels in which the k-th level consists of points occluded by exactly k surfaces. To build an MVR with respect to a particular reference frame, dense depth maps are first computed for all the neighboring frames of the reference frame. The depth maps are then combined together into a single map, where points are organized by occlusions rather than by coherent affine motions. This grouping facilitates an automatic process to determine the number of levels and helps to reduce the artifacts caused by occlusions in the scene. An iterative multiframe algorithm is presented for dense depth estimation that both handles low-contrast regions and produces piecewise smooth depth maps. Reconstructed views as well as arbitrary flyarounds of real scenes are presented to demonstrate the effectiveness of the approach.

3.
3D scene modeling and 3D multi-object detection and recognition require high-precision, high-resolution depth maps. To address the low resolution, missing depth values, and noise of the depth data provided by RGB-D sensors, a depth map inpainting algorithm based on depth confidence and hierarchical joint bilateral filtering is proposed. A depth degradation model is formulated for these acquisition problems; depth pixels are classified by a depth-confidence measure, the filter window weights are determined from the depth confidence, and the proposed hierarchical joint bilateral filter then repairs the regions to be restored. Qualitative comparisons and quantitative analysis on the Middlebury benchmark and a self-collected dataset show that the repaired depth maps have clearer and more plausible edges, are free of edge blur and texture artifacts, and that the algorithm effectively improves the accuracy of depth map inpainting.
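As a rough illustration of the filtering stage, a minimal confidence-weighted joint bilateral fill, assuming an RGB guide image, a depth map whose holes are marked with 0, and a per-pixel confidence in [0, 1]; the hierarchical scheme and the degradation model are omitted, and all names are illustrative:

```python
import numpy as np

def joint_bilateral_fill(depth, guide, conf, radius=5,
                         sigma_s=3.0, sigma_r=0.1):
    """Fill holes in `depth` (0 = missing) with a joint bilateral filter.

    Weights combine spatial distance, range distance in the RGB guide,
    and the confidence of each contributing depth pixel.
    """
    h, w = depth.shape
    out = depth.copy()
    ys, xs = np.where(depth == 0)                 # pixels to repair
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        d = depth[y0:y1, x0:x1]
        c = conf[y0:y1, x0:x1]
        g = guide[y0:y1, x0:x1]                   # guide patch, shape (H, W, 3)

        yy, xx = np.mgrid[y0:y1, x0:x1]
        w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
        w_r = np.exp(-np.sum((g - guide[y, x]) ** 2, axis=-1) / (2 * sigma_r ** 2))
        wgt = w_s * w_r * c * (d > 0)             # only valid, confident neighbors

        if wgt.sum() > 1e-8:
            out[y, x] = (wgt * d).sum() / wgt.sum()
    return out
```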

4.
Real-time focus range sensor
Structures of dynamic scenes can only be recovered using a real-time range sensor. Depth from defocus offers an effective solution to fast and dense range estimation. However, accurate depth estimation requires theoretical and practical solutions to a variety of problems including recovery of textureless surfaces, precise blur estimation, and magnification variations caused by defocusing. Both textured and textureless surfaces are recovered using an illumination pattern that is projected via the same optical path used to acquire images. The illumination pattern is optimized to maximize accuracy and spatial resolution in computed depth. The relative blurring in two images is computed using a narrow-band linear operator that is designed by considering all the optical, sensing, and computational elements of the depth from defocus system. Defocus invariant magnification is achieved by the use of an additional aperture in the imaging optics. A prototype focus range sensor has been developed that has a workspace of 1 cubic foot and produces up to 512×480 depth estimates at 30 Hz with an average RMS error of 0.2%. Several experimental results are included to demonstrate the performance of the sensor.

5.
Distance Measurement Based on Defocused Scene Images
In computer vision, the key to 3D scene reconstruction is computing the distance from scene objects to the camera from images of the scene. A new method for computing scene distance from defocused images is proposed. The method captures scene images with a telecentric lens; by changing the distance from the image detector to the lens, two images of the same scene with different degrees of defocus are obtained, and the grayscale images are converted into gradient images. Using the principle of moment invariants, the ratio Pe between the size of the edge region of the gradient image and the matched size of the whole image is computed, and the scene depth is calculated from the Pe values of the two images. Experimental results demonstrate the effectiveness of the method, and its error sources are analyzed.

6.
Jointly learning RGB image features and 3D geometric information from the RGB-D domain benefits indoor scene semantic segmentation, but traditional segmentation methods usually require an accurate depth map as input, which severely limits their applicability. A new network framework for indoor scene understanding is proposed: a joint learning model built on semantic-feature and depth-feature extraction networks extracts depth-aware features, and a geometry-guided depth feature propagation module together with a pyramid feature fusion module combines the learned depth features, multi-scale spatial information, and semantic features into a more expressive feature representation, yielding more accurate indoor scene semantic segmentation. Experimental results show that the joint learning model achieves mean segmentation accuracies of 69.5% and 68.4% on the NYU-Dv2 and SUN RGBD datasets respectively, offering better indoor scene semantic segmentation performance and wider applicability than traditional segmentation methods.

7.
盛斌  吴恩华 《软件学报》2008,19(7):1806-1816
This paper first derives and summarizes how the pixel depth field transforms under 3D image transformations, and proposes a pixel visibility determination method based on the depth field and the epipolar constraint. Building on this theory, an image-based modeling and rendering (IBMR) technique called virtual plane mapping is proposed, which can render the scene from an arbitrary viewpoint in image space. At rendering time, several virtual planes are constructed along the viewing direction; pixels of the source depth image are transformed onto the virtual planes, the virtual planes are converted into planar textures through an intermediate per-pixel transform, and the planes are then stitched together so that the view is synthesized by planar texture mapping. The method can also quickly build a panorama for the current viewpoint inside the depth image, enabling real-time viewpoint roaming. It offers a large viewpoint motion space and small storage requirements, exploits the texture mapping capability of graphics hardware, and reproduces 3D surface relief detail and parallax effects, overcoming the limitations of previous similar algorithms.

8.
An investigation of methods for determining depth from focus
The concept of depth from focus involves calculating distances to points in an observed scene by modeling the effect that the camera's focal parameters have on images acquired with a small depth of field. This technique is passive and requires only a single camera. The most difficult segment of calculating depth from focus is deconvolving the defocus operator from the scene and modeling it. Most current methods for determining the defocus operator employ inverse filtering. The authors reveal some fundamental problems with inverse filtering: inaccuracies in finding the frequency domain representation, windowing effects, and border effects. A general, matrix-based method using regularization is presented, which eliminates these problems. The new method is confirmed experimentally, with the results showing an RMS error of 1.3%.
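A minimal sketch of the matrix-based, regularized alternative to inverse filtering, in 1-D and with Tikhonov smoothing standing in for the paper's regularizer; all names are illustrative:

```python
import numpy as np

def blur_matrix(kernel, n):
    """Build the n-by-n convolution matrix for a 1-D blur kernel."""
    H = np.zeros((n, n))
    k = len(kernel) // 2
    for i in range(n):
        for j, kv in enumerate(kernel):
            col = i + j - k
            if 0 <= col < n:
                H[i, col] = kv
    return H

def regularized_deblur(blurred, kernel, lam=1e-2):
    """Estimate the sharp signal f from g = H f + noise.

    Solves min_f ||H f - g||^2 + lam ||D f||^2 with a first-difference
    smoothness operator D, avoiding the windowing and border problems
    of frequency-domain inverse filtering.
    """
    n = len(blurred)
    H = blur_matrix(kernel, n)
    D = np.eye(n) - np.eye(n, k=1)            # first-difference operator
    A = H.T @ H + lam * (D.T @ D)             # regularized normal equations
    return np.linalg.solve(A, H.T @ blurred)
```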

9.
The recovery of depth from defocused images involves calculating the depth of various points in a scene by modeling the effect that the focal parameters of the camera have on images acquired with a small depth of field. In existing methods on depth from defocus (DFD), two defocused images of a scene are obtained by capturing the scene with different sets of camera parameters. Although the DFD technique is computationally simple, its accuracy is somewhat limited compared to stereo algorithms. Further, an arbitrary selection of the camera settings can result in observed images whose relative blurring is insufficient to yield a good estimate of the depth. In this paper, we address the DFD problem as a maximum likelihood (ML) based blur identification problem. We carry out performance analysis of the ML estimator and study the effect of the degree of relative blurring on the accuracy of the estimate of the depth. We propose a criterion for optimal selection of camera parameters to obtain an improved estimate of the depth. The optimality criterion is based on the Cramer-Rao bound of the variance of the error in the estimate of blur. A number of simulations as well as experimental results on real images are presented to substantiate our claims.
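As an illustration of blur identification by maximum likelihood, a minimal grid search over the relative blur, assuming two registered defocused images and i.i.d. Gaussian noise (under which ML reduces to least squares); the sigma range and all names are illustrative, not the paper's estimator:

```python
import numpy as np
from scipy import ndimage

def estimate_relative_blur(img_near, img_far, sigmas=np.linspace(0.1, 5, 50)):
    """Pick the relative blur sigma that best maps img_near to img_far.

    Under i.i.d. Gaussian noise, maximizing the likelihood is equivalent
    to minimizing the sum of squared residuals over candidate sigmas.
    """
    best_sigma, best_err = None, np.inf
    for s in sigmas:
        pred = ndimage.gaussian_filter(img_near, s)   # model: extra blur s
        err = np.sum((pred - img_far) ** 2)           # negative log-likelihood
        if err < best_err:
            best_sigma, best_err = s, err
    return best_sigma
```

The paper's point is that the depth accuracy achievable this way depends on how much relative blur the chosen camera settings induce, which motivates its Cramer-Rao-based selection criterion.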

10.
Existing deep learning-based saliency detection algorithms are designed mainly for 2D RGB images and fail to exploit the 3D visual information of a scene, while current light field saliency detection methods are mostly hand-crafted and lack feature representation power, so neither performs well on challenging natural scene images. A convolutional neural network with multi-modal, multi-level feature refinement and fusion is proposed to exploit the rich visual information in light field images and achieve accurate saliency detection for 4D light fields. To fully mine the 3D visual information, two parallel sub-networks process the all-in-focus image and the depth map respectively. On this basis, a cross-modal feature aggregation module aggregates multi-level visual features across three modalities, the all-in-focus image, the focal stack, and the depth map, to highlight salient objects in the scene more effectively. Comparative experiments on the DUTLF-FS and HFUT-Lytro light field benchmark datasets show that the algorithm outperforms mainstream salient object detection algorithms such as MOLF, AFNet, and DMRA on five authoritative evaluation metrics.

11.
Objective: A light field camera samples a scene from multiple viewpoints in a single exposure, giving it a unique advantage in depth estimation. Eliminating the influence of occlusion is one of the difficulties of light field depth estimation. Existing methods detect the occlusion state of each view with a 2D scene model, but occlusion depends on the 3D structure of the sampled scene and cannot be detected accurately from a 2D model alone; inaccurate occlusion detection degrades subsequent depth estimation. To address this, a depth acquisition method for light field images guided by a 3D occlusion model is proposed. Method: Foreground-background relations and depth differences are added between objects in the 2D model to obtain a 3D model of the scene; the occlusion state of every view is then inferred from the ray paths in this model and recorded in an occlusion map. Guided by the occlusion map, different cost volumes are used for depth estimation in occluded and non-occluded regions. In occluded regions, occluded views are masked out by the occlusion map and depth is computed from the photo-consistency of the remaining views; in non-occluded regions, a new defocus grid-matching cost volume is designed for the depth continuity of such regions, which perceives color and texture over a wider range than traditional cost volumes and yields smoother depth maps. To further improve accuracy, a joint optimization framework based on expectation maximization (EM) is designed around the dependency between occlusion detection and depth estimation, in which the occlusion map and depth map alternately guide and refine each other. Result: Experiments show that in most test scenes the method achieves the best occlusion detection and depth estimation results for single-occluder, multi-occluder, and low-contrast occlusions, reducing the mean square error (MSE) by about 19.75% on average compared with the second-best results. Conclusion: Theoretical analysis and experiments show that the 3D occlusion model outperforms the traditional 2D occlusion model in occlusion detection, and the method is better suited to depth estimation in scenes with complex occlusions.

12.
We present the focal flow sensor. It is an unactuated, monocular camera that simultaneously exploits defocus and differential motion to measure a depth map and a 3D scene velocity field. It does this using an optical-flow-like, per-pixel linear constraint that relates image derivatives to depth and velocity. We derive this constraint, prove its invariance to scene texture, and prove that it is exactly satisfied only when the sensor's blur kernels are Gaussian. We analyze the inherent sensitivity of the focal flow cue, and we build and test a prototype. Experiments produce useful depth and velocity information for a broader set of aperture configurations, including a simple lens with a pillbox aperture.
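The paper's exact constraint is not reproduced here; as an illustration of solving an optical-flow-like, per-pixel linear constraint by windowed least squares, a Lucas-Kanade-style sketch follows, assuming two registered grayscale frames. The focal flow system has the same least-squares shape, with additional defocus-dependent terms whose coefficients encode depth:

```python
import numpy as np

def solve_linear_constraint(I1, I2, y, x, r=3):
    """Solve I_x*u + I_y*v + I_t = 0 by least squares over a window.

    This is the classic optical-flow instance of a per-pixel linear
    constraint on image derivatives; focal flow solves a system of the
    same shape whose unknowns also encode depth.
    """
    Iy, Ix = np.gradient(I1)                      # spatial derivatives
    It = I2 - I1                                  # temporal derivative
    win = slice(y - r, y + r + 1), slice(x - r, x + r + 1)
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)
    b = -It[win].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v                                   # per-window motion estimate
```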

13.
Projectors are increasingly being used as light sources in computer vision applications. In several applications, they are modeled as point light sources, thus ignoring the effects of illumination defocus. In addition, most active vision techniques assume that a scene point is illuminated only directly by the light source, thus ignoring global light transport effects. Since both defocus and global illumination co-occur in virtually all scenes illuminated by projectors, ignoring them can result in strong, systematic biases in the recovered scene properties. To make computer vision techniques work for general real-world scenes, it is thus important to account for both of these effects.

14.
Edge and Depth from Focus
This paper proposes a novel method to obtain reliable edge and depth information by integrating a set of multi-focus images, i.e., a sequence of images taken by systematically varying the camera's focus setting. In previous work on depth measurement using focusing or defocusing, the accuracy depends upon the size and location of the local windows in which the amount of blur is measured. In contrast, no windowing is needed in our method; the blur is evaluated from the intensity change along corresponding pixels in the multi-focus images. Such a blur analysis enables us not only to detect the edge points without using spatial differentiation but also to estimate the depth with high accuracy. In addition, the analysis result is stable because the proposed method involves integral computations such as summation and least-squares model fitting. This paper first discusses the fundamental properties of multi-focus images based on a step edge model. Then, two algorithms are presented: edge detection using an accumulated defocus image which represents the spatial distribution of blur, and depth estimation using a spatio-focal image which represents the intensity distribution along the focus axis. The experimental results demonstrate that highly precise measurement is achieved: 0.5-pixel position fluctuation in edge detection and 0.2% error at 2.4 m in depth estimation.
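A minimal depth-from-focus sketch over a registered focus stack, using a windowed Laplacian focus measure and a per-pixel argmax; this is a simpler stand-in for the paper's spatio-focal analysis, and all names are illustrative:

```python
import numpy as np
from scipy import ndimage

def depth_from_focus(stack, focus_dists):
    """Assign each pixel the focus distance of its sharpest frame.

    `stack` is a list of registered grayscale frames taken at the
    focus distances in `focus_dists`.
    """
    # Focus measure: local energy of the Laplacian (sharpness).
    measures = np.stack([
        ndimage.uniform_filter(ndimage.laplace(f) ** 2, size=9)
        for f in stack
    ])
    best = np.argmax(measures, axis=0)            # sharpest frame per pixel
    return np.asarray(focus_dists)[best]
```

A parabola fit around the per-pixel maximum of the focus measure is a common refinement for sub-frame depth resolution.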

15.
The classical approach to depth from defocus (DFD) uses lenses with circular apertures for image capturing. We show in this paper that the use of a circular aperture severely restricts the accuracy of DFD. We derive a criterion for evaluating a pair of apertures with respect to the precision of depth recovery. This criterion is optimized using a genetic algorithm and gradient descent search to arrive at a pair of high-resolution apertures. These two coded apertures are found to complement each other in the scene frequencies they preserve. This property enables them not only to recover depth with greater fidelity but also to obtain a high-quality all-focused image from the two captured images. Extensive simulations as well as experiments on a variety of real scenes demonstrate the benefits of using the coded apertures over conventional circular apertures.

16.
In this paper, we propose a learning-based test-time optimization approach for reconstructing geometrically consistent depth maps from a monocular video. Specifically, we optimize an existing single-image depth estimation network on the test example at hand. We do so by introducing pseudo reference depth maps which are computed based on the observation that the optical flow displacement for an image pair should be consistent with the displacement obtained by depth reprojection. Additionally, we discard inaccurate pseudo reference depth maps using a simple median strategy and propose a way to compute a confidence map for the reference depth. We use our pseudo reference depth and the confidence map to formulate a loss function for performing the test-time optimization in an efficient and effective manner. We compare our approach against the state-of-the-art methods on various scenes both visually and numerically. Our approach is on average 2.5× faster than the state of the art and produces depth maps with higher quality.
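A rough sketch of the consistency idea behind the pseudo reference depth, assuming known intrinsics K, a relative pose (R, t), a predicted depth map, and a precomputed optical flow field; it flags pixels where the flow displacement matches the depth-reprojection displacement. This illustrates the criterion, not the paper's implementation, and all names are assumptions:

```python
import numpy as np

def reprojection_flow(depth, K, R, t):
    """Displacement each pixel would undergo if `depth` were correct.

    Back-projects pixels with `depth`, transforms them by (R, t),
    projects into the second view, and returns the induced 2-D flow.
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    rays = np.linalg.inv(K) @ pix                  # unit-plane rays, (3, N)
    pts = rays * depth.reshape(1, -1)              # 3-D points in view 1
    proj = K @ (R @ pts + t.reshape(3, 1))         # project into view 2
    uv = (proj[:2] / proj[2]).T.reshape(h, w, 2)
    return uv - np.stack([xs, ys], axis=-1)

def flow_consistency(depth, flow, K, R, t, thresh=1.0):
    """Confidence mask: 1 where optical flow agrees with depth reprojection."""
    err = np.linalg.norm(reprojection_flow(depth, K, R, t) - flow, axis=-1)
    return (err < thresh).astype(np.float32)
```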

17.
张旭东  李成云  汪义志  熊伟 《控制与决策》2018,33(12):2122-2130
A light field camera captures 4D light field data of a 3D space in a single shot, and the multi-view property of the light field allows depth information to be extracted from the full light field image. However, existing depth estimation methods rarely consider occlusion, and when the scene contains occlusions the accuracy of the extracted depth drops markedly. A new multi-cue fusion method for light field depth extraction is proposed to obtain high-precision depth information. First, scene depth is estimated separately with an adaptive defocus algorithm and an adaptive correspondence algorithm; then the peak ratio is used as a confidence to fuse the two depth estimates by weighting; finally, the fused depth map is filtered with a structure-consistent mutual-structure joint filter to obtain a high-precision depth map. Experimental results on synthetic and real datasets show that, compared with other state-of-the-art algorithms, the proposed algorithm produces depth maps with higher accuracy, less noise, and better-preserved edges.
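A minimal sketch of peak-ratio confidence weighting for fusing the two cues, assuming each cue yields a per-pixel cost volume over candidate depths; the final mutual-structure joint filtering is omitted, and all names are illustrative:

```python
import numpy as np

def peak_ratio_confidence(cost):
    """Confidence from a cost volume of shape (depth_candidates, H, W).

    Ratio of the best (lowest) cost to the second best: values near 0
    mean an unambiguous minimum, values near 1 an ambiguous one.
    """
    sorted_cost = np.sort(cost, axis=0)
    ratio = sorted_cost[0] / np.maximum(sorted_cost[1], 1e-8)
    return 1.0 - ratio                            # high value = confident

def fuse_depths(depth_defocus, cost_defocus, depth_corr, cost_corr):
    """Confidence-weighted fusion of defocus and correspondence cues."""
    c1 = peak_ratio_confidence(cost_defocus)
    c2 = peak_ratio_confidence(cost_corr)
    w = c1 / np.maximum(c1 + c2, 1e-8)            # normalize the two confidences
    return w * depth_defocus + (1.0 - w) * depth_corr
```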

18.
Obtaining Scene Depth Information Using a Non-uniform Defocus Model
One key to 3D scene reconstruction is obtaining the distance from scene objects to the camera from scene images. An algorithm for computing scene distance from defocused images is studied. The method is based on a non-uniform defocus model and requires only two images with different degrees of defocus obtained by changing the aperture setting, so the image size-matching problem can be avoided. Given the form of the image's point spread function, depth can be obtained by optimization. Simulated and real experiments demonstrate the effectiveness of the algorithm.

19.
王伟  余淼  胡占义 《自动化学报》2014,40(12):2782-2796
A high-precision dense depth map estimation algorithm based on match propagation is proposed. The algorithm consists of two match-propagation stages, pixel-level and region-level. The former propagates sparse feature matches between views to obtain a relatively dense initial depth map; the latter, starting from multiple initial depth maps and under the assumption of piecewise-smooth scenes, infers depth in regions with matching ambiguity (such as weakly textured regions) using plane fitting and multi-directional plane sweeping within an energy minimization framework. Experiments on standard and real datasets show that the algorithm is robust to illumination changes and perspective distortion between views and can effectively infer depth in weakly textured regions, yielding high-precision dense depth maps.

20.
In this paper, we present a new structured-light 3D reconstruction method that can be applied to scenes in the presence of global illumination effects such as inter-reflection, subsurface scattering, and severe projector defocus. The proposed method takes advantage of important properties embedded in binary code patterns and determines the minimum stripe width that eliminates the effects of subsurface scattering and projector defocus at the capturing stage. Moreover, errors caused by inter-reflection are detected and corrected by an iterative approach. Our method can be applied to scenes where more than one global illumination effect appears at a scene point. The accuracy of our method is demonstrated by quantitative evaluation and its robustness by qualitative evaluation on real-world scenes with various characteristics; in both respects it outperforms two of the best currently known methods.
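A rough sketch of generating binary Gray code stripe patterns while enforcing a minimum stripe width, assuming vertical stripes and a projector width equal to 2**n_bits; the error detection and iterative correction stages are omitted, and all names are illustrative:

```python
import numpy as np

def gray_code_patterns(proj_w, proj_h, n_bits, min_stripe=8):
    """Binary Gray code stripe patterns, coarsest first.

    Pattern k has a minimum stripe width of proj_w / 2**(k + 1); patterns
    whose stripes would fall below `min_stripe` projector pixels are
    dropped, mitigating subsurface scattering and projector defocus.
    Assumes proj_w == 2**n_bits so the code covers every column.
    """
    cols = np.arange(proj_w)
    gray = cols ^ (cols >> 1)                     # binary-reflected Gray code
    patterns = []
    for k in range(n_bits):                       # k = 0 is the widest pattern
        if proj_w / 2 ** (k + 1) < min_stripe:
            break                                 # finer patterns violate the minimum
        bit = (gray >> (n_bits - 1 - k)) & 1      # MSB first
        patterns.append(np.tile(bit[None, :], (proj_h, 1)).astype(np.uint8))
    return patterns
```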
