18 similar documents found; search took 46 ms
1.
A new algorithm is proposed for calibrating the camera intrinsic parameters used in depth from defocus. The algorithm captures two images of the same scene with arbitrary, differing degrees of defocus by changing the lens aperture (f-number), extracts the difference in blur between the two images, and, combined with an analysis of the thin-lens imaging geometry, calibrates the corresponding intrinsic parameters. The algorithm removes the restriction, in the calibration method proposed by Park in 2006, that one of the images must be in focus, and it requires no complex magnification normalization of the images. Both simulated and real experiments verify the effectiveness and accuracy of the algorithm.
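The geometric relationship such a calibration exploits is that, for a fixed scene point, the blur-circle radius scales with the effective aperture diameter. A minimal thin-lens sketch (illustrative only, not the paper's algorithm; all names and values are hypothetical):

```python
def blur_radius(f, u, s, fnum):
    """Blur-circle radius on the sensor for a thin lens.
    f: focal length, u: object distance, s: lens-to-sensor distance,
    fnum: aperture f-number (all lengths in mm)."""
    v = 1.0 / (1.0 / f - 1.0 / u)    # in-focus image distance (thin-lens law)
    D = f / fnum                     # effective aperture diameter
    return 0.5 * D * abs(s - v) / v  # geometric blur-circle radius

# Halving the f-number (opening the aperture) doubles the blur radius,
# which is the signal a two-aperture calibration can exploit.
r_wide = blur_radius(f=50.0, u=2000.0, s=52.0, fnum=2.0)
r_narrow = blur_radius(f=50.0, u=2000.0, s=52.0, fnum=4.0)
```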
2.
Depth from defocus is an important topic in computer vision. The degree of blur at a point in a defocused image varies with the depth of the object, so defocused images can be used to estimate depth; unlike stereo or motion-based vision, the method involves no correspondence-matching problem and therefore has good application prospects. A depth-estimation algorithm based on a defocus image space is studied: defocused imaging is modeled as a heat-diffusion process, two defocused images are expanded into a defocus space via a deformation function, the deformation parameters are estimated, and the depth of the object is then recovered. Experiments verify the effectiveness of the algorithm.
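The heat-diffusion view of defocus rests on the fact that diffusing for time t is equivalent to Gaussian blurring with sigma proportional to the square root of t, so successive blurs compose with sigmas adding in quadrature. A small sketch of that semigroup property (illustrative only, not the paper's deformation-function algorithm):

```python
import numpy as np

def gauss_blur_1d(signal, sigma):
    """Blur a 1-D signal with a truncated, normalized Gaussian kernel."""
    x = np.arange(-int(4 * sigma) - 1, int(4 * sigma) + 2)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return np.convolve(signal, k / k.sum(), mode="same")

# Semigroup property of the heat equation: diffusing for t1 then t2 equals
# one diffusion for t1 + t2, i.e. blur sigmas add in quadrature.
step = np.zeros(256); step[128:] = 1.0
twice = gauss_blur_1d(gauss_blur_1d(step, 2.0), 3.0)
once = gauss_blur_1d(step, np.sqrt(2.0**2 + 3.0**2))
```

The two results agree away from the array borders, which is the property that lets a pair of defocused images be embedded in one diffusion (defocus) space.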
3.
4.
Objective: Most current methods for recovering 3D scene depth from a single defocused image model the point-spread function (PSF) as a Gaussian, obtain a sparse depth map from the correspondence between the defocus blur at image edges and scene depth, and then propagate it to a full depth map by various extension methods. Because the depth recovered by existing methods is not sufficiently accurate or robust to noise, a method is proposed that computes the defocus blur at object edges using a Cauchy-distribution PSF model. Method: The input defocused image is re-blurred with two Cauchy distributions; from the ratio of the gradients of the two re-blurred images at edges and the two Cauchy scale parameters, the defocus blur at each edge can be computed. A matting-based interpolation then propagates the edge blur to the whole image, recovering the full depth map of the scene. Results: The original Lenna image was rotated and corrupted with Gaussian noise to simulate image noise and edge-localization error, and the mean error of the Cauchy gradient ratio was compared with that of the Gaussian gradient ratio between the original and noisy images. The method was also compared with several existing single-image depth-from-defocus methods on a variety of real scenes. The mean error of the Cauchy gradient ratio is smaller than that of the Gaussian gradient ratio. The method recovers scene depth well from a single uncalibrated defocused image and is more robust to image noise, inaccurate edge locations, and nearby edges. Conclusion: The method produces depth maps superior to existing methods based on Gaussian and similar models, and demonstrates the feasibility and effectiveness of modeling the PSF with a non-Gaussian distribution.
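The gradient-ratio step at the core of such methods can be sketched in 1-D. For tractability this sketch uses a Gaussian PSF, for which the ratio has a closed form, rather than the paper's Cauchy model; all parameter names are hypothetical:

```python
import numpy as np

def gauss_blur(signal, sigma):
    """Blur a 1-D signal with a truncated, normalized Gaussian kernel."""
    x = np.arange(-int(4 * sigma) - 1, int(4 * sigma) + 2)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return np.convolve(signal, k / k.sum(), mode="same")

def estimate_edge_blur(img, s1=1.0, s2=2.0):
    """Recover the unknown blur sigma at a step edge from the gradient
    ratio of two re-blurred copies (s2 > s1, Gaussian semigroup)."""
    g1 = np.abs(np.gradient(gauss_blur(img, s1))[64:-64]).max()
    g2 = np.abs(np.gradient(gauss_blur(img, s2))[64:-64]).max()
    R = g1 / g2                       # ratio > 1 since s2 blurs more
    return float(np.sqrt((s2**2 - R**2 * s1**2) / (R**2 - 1.0)))

step = np.zeros(512); step[256:] = 1.0
blurred = gauss_blur(step, 3.0)       # unknown defocus blur to recover
sigma_hat = estimate_edge_blur(blurred)
```

The peak gradient of a Gaussian-blurred step scales as one over the total blur, so the ratio R of the two re-blurred gradients determines the unknown sigma in closed form; the Cauchy version replaces this relation with the corresponding Cauchy scale parameters.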
5.
Identifying the point-spread function of a defocus-blurred image with the Laplacian operator
For a defocus-blurred image, the defocus point-spread function is usually approximated by a pillbox (cylindrical) function. If the radius of the pillbox can be identified from the blurred image, the defocus PSF is effectively determined. Applying the Laplacian operator gives an isotropic second derivative of the blurred image; the autocorrelation of the differentiated image, displayed as a 3D surface, exhibits a ring-shaped groove. The groove is formed by a series of negative correlation peaks, whose bottoms trace an identification circle. This circle is centered on the zero-lag peak, and its diameter equals twice the diameter of the pillbox PSF. By interpolation and summation an identification curve is obtained from which the diameter can be read directly. The new identification method has low computational cost, high accuracy, and good robustness to noise.
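A 1-D analogue of the identification idea (illustrative only, not the paper's 2-D pillbox case): a step edge blurred by a boxcar of width w becomes a ramp whose second difference leaves two opposite spikes exactly w samples apart, so the autocorrelation shows a negative dip at lag w:

```python
import numpy as np

w = 9                                    # unknown boxcar (pillbox) width
step = np.zeros(128); step[64:] = 1.0
blurred = np.convolve(step, np.ones(w) / w, mode="same")
d2 = np.diff(blurred, n=2)               # 1-D analogue of the Laplacian
lags = range(1, 40)
ac = [float(np.dot(d2[:-L], d2[L:])) for L in lags]
w_hat = lags[int(np.argmin(ac))]         # deepest negative dip sits at lag w
```

In 2-D the same cancellation structure produces the negative ring groove, at a diameter twice that of the pillbox, rather than a single dip.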
6.
A new method for depth measurement from defocused images
A new method for depth measurement from defocused images is proposed. The method uses a CCD camera fitted with a telecentric lens, moves the camera along the optical axis to capture two images, and computes the distance from the camera to the target from the defocus radius and the size of the target in the images. Replacing an ordinary lens with a telecentric lens simplifies the relationship between image size and defocus radius. Because the method fuses two cues, image size and defocus radius, the depth computation is more accurate; and because it requires only a single CCD camera with fixed parameters, it avoids image registration and feature-point selection, which favors real-time implementation. Experimental results demonstrate the effectiveness of the method.
7.
8.
Recovering scene depth from a single image has long been a difficult problem in computer vision. A common approach approximates the point-spread function (PSF) with a Gaussian or Cauchy distribution and estimates depth from the relationship between the defocus blur at image edges and scene depth. In the real world, however, image blur arises from many different causes, so a Gaussian or Cauchy distribution is not necessarily the best approximation, and traditional methods recover depth inaccurately in regions with shadows, weak edges, or subtle depth variation. To extract more accurate depth information, a method is proposed that approximates the PSF with a Gaussian-Cauchy mixture model. The defocused image is re-blurred to obtain two images with different degrees of defocus; the defocus blur at the edges is estimated from the ratio of the gradients of the two images there, yielding a sparse depth map; finally a depth-extension step produces the full depth map of the scene. Tests on a large set of real images show that the new method recovers complete, reliable depth information from a single defocused image and that its results outperform the two methods in common use.
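Constructing the mixture PSF itself is straightforward. A hedged 1-D sketch (the weighting scheme and parameter names here are assumptions for illustration, not taken from the paper):

```python
import numpy as np

def mixture_psf(sigma, gamma, alpha, radius=25):
    """1-D Gaussian-Cauchy mixture PSF: alpha weights the Gaussian part,
    (1 - alpha) the Cauchy part; sigma and gamma are the two scales."""
    x = np.arange(-radius, radius + 1, dtype=float)
    gauss = np.exp(-x**2 / (2.0 * sigma**2))
    cauchy = 1.0 / (1.0 + (x / gamma)**2)
    k = alpha * gauss / gauss.sum() + (1.0 - alpha) * cauchy / cauchy.sum()
    return k / k.sum()                   # renormalize after truncation

k = mixture_psf(sigma=2.0, gamma=2.0, alpha=0.5)
```

The Cauchy component contributes the heavy tails that a pure Gaussian lacks, and the mixing weight trades those tails against the Gaussian core.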
9.
10.
11.
Images taken of 3D objects are generally defocused. Depth-from-focus techniques use the defocus information to determine range. However, quantitative measurement of focus is difficult and requires accurate modeling of the point-spread function (PSF). We describe a new method that determines depth using the symmetry and smoothness of focus gradient with respect to the focus position. The technique is passive and uses a monocular imaging system. The performance for estimating range is experimentally demonstrated.
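One common way to exploit the smoothness and symmetry of the focus-measure curve is to interpolate its peak. A minimal sketch using a three-point parabola fit (an assumption for illustration, not necessarily the paper's estimator):

```python
import numpy as np

def refine_focus_peak(positions, measures):
    """Sub-step best-focus estimate: fit a parabola through the three
    focus-measure samples around the maximum and return its vertex."""
    i = int(np.argmax(measures))
    i = min(max(i, 1), len(measures) - 2)           # 3-point neighborhood
    y0, y1, y2 = measures[i - 1], measures[i], measures[i + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)  # vertex offset in steps
    return positions[i] + delta * (positions[1] - positions[0])

z = np.arange(0.0, 7.0, 0.5)            # focus positions swept by the lens
fm = np.exp(-((z - 3.3) ** 2))          # symmetric focus-measure curve
z_hat = refine_focus_peak(z, fm)
```

Because the curve is smooth and symmetric about the true focus position, the fitted vertex recovers that position to well below the sampling step.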
12.
Edge and Depth from Focus
Asada Naoki, Fujiwara Hisanaga, Matsuyama Takashi. International Journal of Computer Vision, 1998, 26(2): 153-163
This paper proposes a novel method to obtain the reliable edge and depth information by integrating a set of multi-focus images, i.e., a sequence of images taken by systematically varying a camera parameter focus. In previous work on depth measurement using focusing or defocusing, the accuracy depends upon the size and location of local windows where the amount of blur is measured. In contrast, no windowing is needed in our method; the blur is evaluated from the intensity change along corresponding pixels in the multi-focus images. Such a blur analysis enables us not only to detect the edge points without using spatial differentiation but also to estimate the depth with high accuracy. In addition, the analysis result is stable because the proposed method involves integral computations such as summation and least-square model fitting. This paper first discusses the fundamental properties of multi-focus images based on a step edge model. Then, two algorithms are presented: edge detection using an accumulated defocus image which represents the spatial distribution of blur, and depth estimation using a spatio-focal image which represents the intensity distribution along focus axis. The experimental results demonstrate that the highly precise measurement has been achieved: 0.5 pixel position fluctuation in edge detection and 0.2% error at 2.4 m in depth estimation.
13.
Rational Filters for Passive Depth from Defocus
A fundamental problem in depth from defocus is the measurement of relative defocus between images. The performance of previously proposed focus operators is inevitably sensitive to the frequency spectra of local scene textures. As a result, focus operators such as the Laplacian of Gaussian result in poor depth estimates. An alternative is to use large filter banks that densely sample the frequency space. Though this approach can result in better depth accuracy, it sacrifices the computational efficiency that depth from defocus offers over stereo and structure from motion. We propose a class of broadband operators that, when used together, provide invariance to scene texture and produce accurate and dense depth maps. Since the operators are broadband, a small number of them are sufficient for depth estimation of scenes with complex textural properties. In addition, a depth confidence measure is derived that can be computed from the outputs of the operators. This confidence measure permits further refinement of computed depth maps. Experiments are conducted on both synthetic and real scenes to evaluate the performance of the proposed operators. The depth detection gain error is less than 1%, irrespective of texture frequency. Depth accuracy is found to be 0.5% to 1.2% of the distance of the object from the imaging optics.
14.
15.
To address the information loss caused by local motion blur, which is common in everyday photos and videos, a local motion-blur detection algorithm based on energy estimation is proposed. The algorithm first computes Harris corner points and selects candidate regions according to the distribution of corners in each region; then, exploiting the smooth gradient distribution of near-monochromatic regions, it computes the gradient distribution of the candidate regions and filters out most of the easily misclassified parts against a mean-magnitude threshold; finally, using the characteristic energy attenuation that motion blur imposes on an image, it estimates the blur direction of each candidate region and computes the energy along the blur direction and along the perpendicular direction, using the ratio of the two energies to further eliminate monochromatic and defocus-blurred regions. Experiments on an image library show that the proposed algorithm detects motion-blurred regions well in images containing interfering near-monochromatic and defocused regions, effectively improving the robustness and adaptability of local motion-blur detection.
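The final energy-ratio test rests on the fact that motion blur suppresses gradient energy along the blur direction but not across it, whereas defocus blur is isotropic. A toy version with horizontal blur (parameter choices are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))                    # textured candidate region

# Horizontal motion blur: a 1-D boxcar applied along each row.
k = np.ones(9) / 9.0
motion = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)

# Gradient energy along the blur direction vs. perpendicular to it.
e_along = float(np.sum(np.diff(motion, axis=1) ** 2))
e_across = float(np.sum(np.diff(motion, axis=0) ** 2))
ratio = e_across / e_along                    # >> 1 flags motion blur
```

For a defocus-blurred or monochromatic region the two energies are comparable, so the ratio stays near 1, which is what lets the test reject those regions.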
16.
A bin picking system based on depth from defocus
It is generally accepted that to develop versatile bin-picking systems capable of grasping and manipulation operations, accurate 3-D information is required. To accomplish this goal, we have developed a fast and precise range sensor based on active depth from defocus (DFD). This sensor is used in conjunction with a three-component vision system, which is able to recognize and evaluate the attitude of 3-D objects. The first component performs scene segmentation using an edge-based approach. Since edges are used to detect the object boundaries, a key issue consists of improving the quality of edge detection. The second component attempts to recognize the object placed on the top of the object pile using a model-driven approach in which the segmented surfaces are compared with those stored in the model database. Finally, the attitude of the recognized object is evaluated using an eigenimage approach augmented with range data analysis. The full bin-picking system will be outlined, and a number of experimental results will be examined.
Received: 2 December 2000 / Accepted: 9 September 2001
Correspondence to: O. Ghita
17.
The focal problems of projection include out-of-focus projection images from the projector caused by incomplete mechanical focus and screen-door effects produced by projection pixelation. To eliminate these defects and enhance the imaging quality and clarity of projectors, a novel adaptive projection defocus algorithm is proposed based on multi-scale convolution kernel templates. This algorithm applies the improved Sobel-Tenengrad focus evaluation function to calculate the sharpness degree of intensity equalization and then constructs multi-scale defocus convolution kernels to remap and render the defocus projection image. The resulting projection defocus corrected images can eliminate out-of-focus effects and improve the sharpness of uncorrected images. Experiments show that the algorithm works quickly and robustly and that it not only effectively eliminates visual artifacts and can run on a self-designed smart projection system in real time but also significantly improves the resolution and clarity of the observer's visual perception.
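A plain Tenengrad sharpness measure, of the kind the improved Sobel-Tenengrad evaluation function builds on, can be sketched as the mean squared Sobel gradient magnitude (a baseline variant; the paper's improved function is not reproduced here):

```python
import numpy as np

def tenengrad(img):
    """Plain Sobel-Tenengrad sharpness: mean squared gradient magnitude."""
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    sy = sx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2)); gy = np.zeros((h - 2, w - 2))
    for i in range(3):                 # explicit 3x3 correlation, no SciPy
        for j in range(3):
            patch = img[i:h - 2 + i, j:w - 2 + j]
            gx += sx[i, j] * patch
            gy += sy[i, j] * patch
    return float(np.mean(gx**2 + gy**2))

rng = np.random.default_rng(1)
sharp = rng.random((48, 48))
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
           + np.roll(np.roll(sharp, 1, 0), 1, 1)) / 4.0   # crude 2x2 box blur
```

The measure drops monotonically as blur grows, which is what lets a focus-evaluation loop rank candidate defocus kernels by the sharpness they would restore.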