Similar Documents
20 similar documents found (search time: 21 ms)
1.
The continuous advancement of imaging sensors necessitates the development of efficient image fusion techniques. Multi-focus image fusion extracts the in-focus information from the source images to construct a single composite image with increased depth of field. Traditionally, the information in multi-focus images is divided into two categories: in-focus and out-of-focus data. Instead of using a binary focus map, in this work we calculate the degree of focus for each source image using fuzzy logic; the fused image is then generated as a weighted sum of this information. An initial tri-state focus map is built for each input image using spatial frequency and a proposed focus measure named the alternate sum-modified Laplacian. Where these measures indicate different source images as containing the focused pixel, or have equal strength, another focus measure based on the sum of gradients is employed to calculate the degree of focus in a fuzzy inference system. Finally, the fused image is computed from the weights determined by the degree-of-focus map of each image. The proposed algorithm is designed to fuse two source images; multiple input images can be fused by combining each additional source image with the fusion output of the previous group. The proposed method is compared with several transform- and pixel-domain techniques in terms of both subjective visual assessment and objective quantitative evaluation. Experimental results demonstrate that our method is competitive with, and often outperforms, the methods in comparison.
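The modified-Laplacian idea behind this entry's focus measure can be sketched as follows. This is the standard sum-modified Laplacian, not the paper's "alternate" variant (which it does not specify), and the checkerboard test patch is made up for illustration:

```python
# Sketch of the standard sum-modified-Laplacian (SML) focus measure;
# the "alternate" variant proposed in the paper is not reproduced here.

def sum_modified_laplacian(img):
    """Mean modified-Laplacian response over the interior of a 2D grayscale image."""
    h, w = len(img), len(img[0])
    total, count = 0.0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ml = abs(2 * img[y][x] - img[y][x - 1] - img[y][x + 1]) \
               + abs(2 * img[y][x] - img[y - 1][x] - img[y + 1][x])
            total += ml
            count += 1
    return total / count

def box_blur(img):
    """3x3 box blur (edge pixels copied) to simulate defocus."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return out

# A sharp checkerboard patch scores higher than its defocused copy.
sharp = [[255.0 if (x + y) % 2 == 0 else 0.0 for x in range(8)] for y in range(8)]
blurred = box_blur(sharp)
print(sum_modified_laplacian(sharp) > sum_modified_laplacian(blurred))  # True
```

A focus map is obtained by evaluating such a measure per pixel or per window on each source image, which is the quantity the fuzzy inference stage then weighs.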

2.
Depth from defocus (DFD) is a technique that recovers scene depth from the amount of defocus blur in images. DFD usually captures two differently focused images, one near-focused and the other far-focused, and calculates the size of the defocus blur in these images. However, DFD with a regular circular aperture is not very sensitive to depth, since the point spread function (PSF) is symmetric and only its radius changes with depth. In recent years, the coded aperture technique, which uses a specially patterned aperture to engineer the PSF, has been used to improve the accuracy of DFD estimation; it is often used to restore an all-in-focus image and estimate depth in DFD applications. A coded aperture has a disadvantage for image deblurring, however, since deblurring requires a high signal-to-noise ratio (SNR) in the captured images: the aperture attenuates incoming light while shaping the PSF and, as a result, decreases the input image SNR. In this paper, we propose a new computational imaging approach to DFD estimation that engineers the PSF through focus changes during image integration. We can capture input images with a higher SNR because we control the PSF with a wide aperture setting, unlike with a coded aperture. We confirm the effectiveness of the method through experimental comparisons with conventional DFD and the coded aperture approach.

3.
Traditional dark-channel-prior dehazing algorithms fail to remove haze at abrupt depth changes in hazy images and tend to produce halo artifacts along boundaries. To address this, an adaptive superpixel dehazing algorithm based on the dark channel prior is proposed. First, during dark-channel extraction, an adaptive test determines whether the neighborhood of the current pixel contains objects at multiple depths: if only objects at the same depth are present, the dark channel of the pixel is computed directly; if multiple depths are present, a superpixel segmentation algorithm is introduced to separate objects at different depths, reducing the influence of depth changes on dark-channel extraction and yielding a more accurate dark channel. Next, a coarse transmission map is estimated and refined under contextual constraints. Finally, the dehazed image is recovered by inverting the image degradation model. Experimental results show that, compared with the dark channel prior (DCP) algorithm, the fast dehazing algorithm based on boundary-neighborhood maximum filtering (EMDCP), the adaptive dark channel dehazing algorithm (ADCP), and the efficient dehazing algorithm with boundary constraints and contextual regularization (BCCR), the proposed algorithm improves a comprehensive objective quality criterion by 10%, suppresses halo artifacts, and improves the visual quality of hazy images.
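The dark-channel and coarse-transmission computation this entry builds on (He et al.'s prior) can be sketched as follows. The window radius, the ω = 0.95 constant, and the toy 4×4 patches are the usual illustrative defaults, not the paper's adaptive superpixel refinement:

```python
# Minimal sketch of the dark channel prior and the coarse transmission
# estimate t(x) = 1 - omega * dark(x) / A; no superpixel adaptation here.

def dark_channel(img, radius=1):
    """Per-pixel min over RGB and a (2*radius+1)^2 window; img is HxWx3 nested lists."""
    h, w = len(img), len(img[0])
    dark = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = []
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    vals.append(min(img[yy][xx]))   # min over the RGB channels
            dark[y][x] = min(vals)                  # min over the window
    return dark

def transmission(dark, airlight, omega=0.95):
    """Coarse transmission map from the dark channel and the airlight A."""
    return [[1.0 - omega * d / airlight for d in row] for row in dark]

# A haze-free, colorful region has a dark channel near 0 (transmission near 1);
# a grey, hazy region has a high dark channel (low transmission).
hazefree = [[[0.9, 0.1, 0.0]] * 4 for _ in range(4)]
hazy = [[[0.8, 0.8, 0.8]] * 4 for _ in range(4)]
t_clear = transmission(dark_channel(hazefree), 1.0)[2][2]  # 1 - 0.95*0.0 = 1.0
t_hazy = transmission(dark_channel(hazy), 1.0)[2][2]       # 1 - 0.95*0.8 = 0.24
```

The entry's contribution sits inside `dark_channel`: near depth discontinuities, the fixed square window is replaced by a superpixel-shaped support so the minimum is not taken across objects at different depths.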

4.
To address the insufficient cues and limited accuracy of traditional single-image depth estimation, a single-image depth estimation method based on non-parametric sampling is proposed. Using non-parametric learning, the method transfers depth information from an existing RGB-D dataset to the input image. First, multi-scale, high-level image features are computed for the input image and for the RGB-D dataset. Then, based on these features, a kNN search finds the candidate images in the dataset that best match the input, and the candidate image pairs are warped to and aligned with the input image via SIFT flow. Finally, the candidate depth maps are interpolated, smoothed, and otherwise refined to produce the final depth map. Experimental results show that, compared with existing algorithms, the method estimates more accurate depth maps and better preserves the overall structure of the input image.
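The retrieval-and-fusion core of this kind of non-parametric pipeline can be sketched as below. The feature vectors and depth maps are toy stand-ins for the real multi-scale descriptors, and the SIFT-flow warping step is omitted; only the kNN retrieval and a per-pixel median fusion are shown:

```python
# Skeleton of non-parametric depth transfer: retrieve the k nearest RGB-D
# exemplars by feature distance, then fuse their depth maps pixel-wise.
import statistics

def knn_depth_transfer(query_feat, dataset, k=3):
    """dataset: list of (feature_vector, depth_map) pairs; returns a fused depth map."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(dataset, key=lambda fd: dist(query_feat, fd[0]))[:k]
    h, w = len(nearest[0][1]), len(nearest[0][1][0])
    return [[statistics.median(d[y][x] for _, d in nearest)
             for x in range(w)] for y in range(h)]

# Three similar scenes agree on the depth; a dissimilar scene is excluded by kNN.
data = [
    ([0.1, 0.2], [[2.0, 2.1]]),
    ([0.2, 0.1], [[1.9, 2.0]]),
    ([0.0, 0.3], [[2.1, 2.2]]),
    ([9.0, 9.0], [[50.0, 50.0]]),  # far away in feature space, never retrieved
]
fused = knn_depth_transfer([0.1, 0.1], data, k=3)
print(fused)  # [[2.0, 2.1]] -- per-pixel median of the three nearest candidates
```

In the full method, each retrieved depth map is first warped into alignment with the input via SIFT flow, so the per-pixel fusion compares corresponding scene points rather than raw pixel grids.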

5.
Image-based rendering of range data with estimated depth uncertainty
Image-based rendering (IBR) constructs an image from a new viewpoint using several input images from different viewpoints. Our approach is to acquire or estimate the depth of each pixel of each input image, then reconstruct the new view from the resulting collection of 3D points. When rendering images from photographs, data acquisition and registration are far from perfect, and accuracy fluctuates with the choice of geometry reconstruction technique. Our image-rendering approach involves three steps: depth extraction, uncertainty estimation, and rendering. We first compute a depth map for every input image, then calculate uncertainty information using the estimated depth maps as starting points, and finally perform the actual rendering, drawing the uncertainty estimated in the previous step as ellipsoidal Gaussian splats.

6.
Recovering scene depth from a single image has long been a difficult problem in computer vision. A common approach approximates the point spread function (PSF) with a Gaussian or Cauchy distribution and then estimates depth from the relationship between the amount of defocus blur at image edges and scene depth. In the real world, however, the causes of image blur vary widely, so Gaussian and Cauchy distributions are not necessarily the best approximations, and traditional methods recover depth inaccurately in regions with shadows, weak edges, or subtle depth variations. To extract more accurate depth, a method is proposed that approximates the PSF with a Gaussian-Cauchy mixture model. The defocused image is re-blurred to obtain two images with different amounts of defocus; the defocus blur at image edges is then estimated from the ratio of the gradients of the two images at the edges, yielding a sparse depth map. Finally, a depth propagation method produces a dense depth map of the scene. Tests on a large number of real images show that the new method recovers complete, reliable depth information from a single defocused image and outperforms two widely used methods.
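The re-blur step can be sketched with a pure Gaussian PSF (the entry's Gaussian-Cauchy mixture is not reproduced here). For a step edge blurred with unknown sigma, re-blurring with a known sigma0 gives a gradient ratio R = sqrt(sigma^2 + sigma0^2) / sigma, so sigma = sigma0 / sqrt(R^2 - 1); the 1D signal and all constants below are illustrative:

```python
# Re-blur based defocus estimation at an edge, assuming a Gaussian PSF.
import math

def gaussian_kernel(sigma, radius):
    k = [math.exp(-x * x / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """1D convolution with edge clamping."""
    r = len(kernel) // 2
    n = len(signal)
    return [sum(kernel[r + j] * signal[min(max(i + j, 0), n - 1)]
                for j in range(-r, r + 1)) for i in range(n)]

def max_gradient(signal):
    """Largest central-difference gradient magnitude (taken at the edge)."""
    return max(abs(signal[i + 1] - signal[i - 1]) / 2.0
               for i in range(1, len(signal) - 1))

sigma_true, sigma0 = 2.0, 1.0
step = [0.0] * 50 + [1.0] * 50                             # ideal step edge
blurred = convolve(step, gaussian_kernel(sigma_true, 10))  # "captured" image
reblurred = convolve(blurred, gaussian_kernel(sigma0, 5))  # known re-blur

R = max_gradient(blurred) / max_gradient(reblurred)
sigma_est = sigma0 / math.sqrt(R * R - 1)
print(sigma_est)  # close to sigma_true = 2.0 (discrete sampling biases it slightly)
```

The recovered sigma at each edge pixel is the defocus blur amount; mapping it through the thin-lens model gives the sparse depth values that the propagation step densifies.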

7.
To address the problem that captured micrographs are less sharp than the image the human eye observes through the eyepiece and objective, the concept of an extended depth-of-field image is introduced: multiple images are acquired at a chosen step size within the focus range of the object, and image fusion is applied to the sequence to compute a single image. Two image fusion algorithms for real-time depth-of-field extension of micrograph sequences are discussed; they effectively improve sharpness and enhance the sense of depth. The characteristics of the two algorithms are analyzed experimentally using image-information statistics.

8.
Time-of-flight cameras make it convenient to capture scene depth images, but hardware limitations keep the resolution of the captured depth maps very low, which cannot meet practical needs. This paper builds an optimization framework that incorporates a high-resolution color image of the same scene, turning depth-map super-resolution into an optimization problem. Specifically, the approximately linear relationship between the color and depth images within small local windows is fused into the regularization term of the objective function via a Laplacian matrix; a local structural parameter model of the color image is also fused into the regularization term to impose a further constraint on the local edge structure of the depth map. The optimization problem is then solved efficiently by steepest descent. Experiments show that the algorithm obtains better results than other algorithms in both visual quality and objective metrics.

9.
Techniques for 3D display have evolved from stereoscopic 3D systems to multiview 3D systems, which provide images corresponding to different viewpoints. New technology is now required for multiview display systems that use input-source formats such as 2D images to generate virtual-view images of multiple viewpoints. As the viewpoint changes, occluded regions of the original image become disoccluded, so the output image requires information that is not contained in the input image. In this paper, a method for generating multiview images through a two-step process is proposed: (1) depth-map refinement and (2) disoccluded-area estimation and restoration. The first step, depth-map processing, removes depth-map noise, compensates for mismatches between RGB and depth, and preserves boundaries and object shapes. The second step predicts the disoccluded area using disparity and restores it using information from the neighboring frames most similar to the occluded area. Finally, multiview rendering generates virtual-view images using a directional rendering algorithm with boundary blending.

10.
A simple, high-image-quality method for synthesizing viewpoint images from multi-camera images for a stereoscopic 3D display with head tracking is proposed. In this method, image slices for depth layers are made using approximate depth information, the slices are linearly blended at each layer according to the distance between the viewpoint and the cameras, and the layers are overlaid from the perspective of the viewpoint. Because the linear blending automatically compensates for depth error through the visual effects of depth-fused 3D (DFD), the resulting image appears natural to the observer. Smooth motion parallax over a wide depth range is achieved for left-right and front-back viewpoint movement using multi-camera images and approximate depth information. Because the calculation algorithm is very simple, it is suitable for real-time 3D display applications.
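The linear blending at the heart of this entry can be sketched as the simplest two-layer weighting rule; the layer depths and the clamping behavior below are illustrative assumptions, not the paper's exact formulation:

```python
# DFD-style linear blending: a pixel whose depth lies between two adjacent
# layers contributes to both, weighted by proximity, so the perceived depth
# varies continuously between the layers.

def layer_weights(depth, d_front, d_back):
    """Linear blending weights (w_front, w_back) for a depth between two layers."""
    t = (depth - d_front) / (d_back - d_front)
    t = min(max(t, 0.0), 1.0)   # clamp depths outside the layer interval
    return 1.0 - t, t

# A point midway between the 1.0 m and 2.0 m layers splits evenly;
# a point exactly at a layer's depth maps entirely to that layer.
print(layer_weights(1.5, 1.0, 2.0))  # (0.5, 0.5)
print(layer_weights(1.0, 1.0, 2.0))  # (1.0, 0.0)
```

Because the weights vary linearly with depth, small errors in the approximate depth map shift intensity smoothly between layers instead of producing visible jumps, which is why the rendered view degrades gracefully.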

11.
Pan Baiyu, Zhang Liming, Yin Hanxiong, Lan Jun, Cao Feilong. Multimedia Tools and Applications, 2021, 80(13): 19179-19201.

3D movies/videos have become increasingly popular in the market; however, they are usually produced by professionals. This paper presents a new technique for the automatic conversion of 2D to 3D video based on RGB-D sensors, which can be easily conducted by ordinary users. To generate a 3D image, one approach is to combine the original 2D color image and its corresponding depth map together to perform depth image-based rendering (DIBR). An RGB-D sensor is one of the inexpensive ways to capture an image and its corresponding depth map. The quality of the depth map and the DIBR algorithm are crucial to this process. Our approach is twofold. First, the depth maps captured directly by RGB-D sensors are generally of poor quality because there are many regions missing depth information, especially near the edges of objects. This paper proposes a new RGB-D sensor based depth map inpainting method that divides the regions with missing depths into interior holes and border holes. Different schemes are used to inpaint the different types of holes. Second, an improved hole filling approach for DIBR is proposed to synthesize the 3D images by using the corresponding color images and the inpainted depth maps. Extensive experiments were conducted on different evaluation datasets. The results show the effectiveness of our method.
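The hole taxonomy in this entry (interior holes vs. border holes, each inpainted by a different scheme) can be illustrated on a single scanline; the 1D rules below (linear interpolation for interior runs, extrapolation from the nearest valid pixel for border runs) are a toy stand-in for the paper's 2D schemes, with 0 marking a missing depth:

```python
# Toy 1D version of hole classification and filling for a depth scanline.

def fill_scanline(row, hole=0):
    out = list(row)
    n = len(out)
    i = 0
    while i < n:
        if out[i] != hole:
            i += 1
            continue
        j = i
        while j < n and out[j] == hole:   # extent of this run of missing depths
            j += 1
        left = out[i - 1] if i > 0 else None
        right = out[j] if j < n else None
        for k in range(i, j):
            if left is None:              # border hole at the left image edge
                out[k] = right
            elif right is None:           # border hole at the right image edge
                out[k] = left
            else:                         # interior hole: linear interpolation
                t = (k - i + 1) / (j - i + 1)
                out[k] = left + t * (right - left)
        i = j
    return out

print(fill_scanline([0, 0, 4.0, 4.0, 0, 6.0, 8.0, 0]))
# [4.0, 4.0, 4.0, 4.0, 5.0, 6.0, 8.0, 8.0]
```

The distinction matters because a border hole has valid depth on only one side, so interpolation is impossible there and a different (extrapolating) scheme is required, which is exactly the split the paper makes in 2D.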


12.
This paper addresses image segmentation for stereo image pairs and proposes an object segmentation algorithm based on depth and color information. The algorithm first applies the clustering-based mean-shift algorithm to moderately over-segment the target image, while a binocular stereo algorithm computes a dense depth map of the stereo pair. Seed regions for a subsequent "refined" segmentation are then selected from the over-segmentation result according to depth discontinuities; a graph-cut algorithm assigns labels to the regions without seed labels, and adjacent regions that carry different labels but share no depth-discontinuity boundary are merged. Compared with traditional segmentation algorithms, the algorithm effectively overcomes over- and under-segmentation and produces segmentations with a degree of semantic meaning. Comparative experiments verify its effectiveness.

13.
The depth map captured from a real scene by the Kinect motion sensor is always affected by noise and other environmental factors, so some depth information is missing from the map. This distortion of the depth map directly degrades the quality of the virtual viewpoints rendered in 3D video systems. We propose a depth-map inpainting algorithm based on a sparse distortion model. First, we train the sparse distortion model on the distorted and real depth maps to obtain two learned dictionaries: one for distorted and one for real depth maps. Second, the sparse coefficients of the distorted and real depth maps are calculated by orthogonal matching pursuit; the approximate features of the distortion are obtained from the relationship between the learned dictionary and the sparse coefficients of the distorted map. The noisy images are filtered by a joint spatial-structure filter, and an extraction factor is obtained from the resulting image by an extraction-factor judgment method. Finally, we combine the learned dictionary and sparse coefficients of the real depth map with the extraction factor to repair the distortion in the depth map. A quality evaluation method is also proposed for original real depth maps with missing pixels. The proposed method achieves better results than comparable methods in depth inpainting and in the subjective quality of the rendered virtual viewpoints.

14.
The quality of depth maps affects the quality of generated 3D content. In practice, depth maps often have lower resolution than the corresponding color images, so depth-map up-sampling (DU) is needed in various 3D applications. DU can introduce artifacts that degrade the quality of the depth maps as well as of the constructed stereoscopic (color-plus-depth) images. This paper investigates the effect of DU on 3D perception. Depth maps were up-sampled using seven approaches, and the quality of the stereoscopic images obtained from the up-sampled depth maps was estimated through subjective and objective tests. Objective quality prediction was performed with a depth-map quality assessment framework that predicts the quality of stereoscopic images by evaluating their up-sampled depth maps with 2D image quality metrics (IQMs). To improve the quality estimate, the framework selects the 2D IQMs with the highest correlation to the subjective test. Furthermore, motivated by previous research on combining multiple metrics, a new metric fusion method is proposed. Experimental results show that the combined metric delivers higher performance than single metrics in 3D quality prediction.
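The fusion idea can be sketched as the simplest correlation-weighted combination: each 2D IQM is weighted by its Pearson correlation with subjective scores. The paper's actual fusion rule is its own contribution and is not reproduced here, and all scores below are made up:

```python
# Correlation-weighted fusion of several metric score lists into one predictor.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def fuse(metric_scores, subjective):
    """metric_scores: {name: [score per image]}; returns a fused score per image."""
    weights = {m: max(pearson(s, subjective), 0.0)   # drop anti-correlated metrics
               for m, s in metric_scores.items()}
    total = sum(weights.values())
    return [sum(weights[m] * metric_scores[m][i] for m in metric_scores) / total
            for i in range(len(subjective))]

# A metric that tracks the subjective scores dominates one that is pure noise.
subjective = [1.0, 2.0, 3.0, 4.0]
scores = {"psnr_like": [1.1, 2.0, 2.9, 4.1],    # correlates well
          "noisy":     [3.0, 1.0, 4.0, 1.5]}    # barely correlates
fused = fuse(scores, subjective)
print(pearson(fused, subjective) > pearson(scores["noisy"], subjective))  # True
```

Weighting by observed correlation is what lets the combined metric track subjective judgments better than any single IQM, which is the behavior the experiments in this entry report.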

15.
Objective: More and more applications depend on accurate, fast observation and analysis of depth images of real scenes. Time-of-flight cameras capture scene depth images in real time, but hardware limitations keep the resolution of the captured depth images low, which cannot meet practical needs. A super-resolution reconstruction method is therefore proposed that constructs adaptive-weight filters guided by a color image of the same scene. Method: The non-local and local self-similarity priors of the depth image are fully exploited, and non-local and local adaptive-weight filtering algorithms are constructed with the high-resolution color image of the same scene to reconstruct the depth image at higher resolution. Specifically, non-local filtering is first applied to suppress ringing artifacts in the reconstruction, and local filtering then further improves the quality of the reconstructed depth map. Results: Experimental results show that the adaptive-weight super-resolution algorithm obtains better results than other algorithms in both objective metrics and visual quality; the advantage is particularly clear when the initial low-resolution depth image is of poor quality, where the peak signal-to-noise ratio improves by 1 dB. Conclusion: The adaptive-weight filtering algorithm, which combines non-local and local self-similarity priors with a high-resolution color image of the same scene, obtains better results than other algorithms.

16.
This paper presents a novel depth estimation method based on feature points. Two points are selected arbitrarily from an object, and their distance in space is assumed to be known. The proposed technique estimates their depths simultaneously from two images taken before and after a camera moves, even when the motion parameters of the camera are unknown. In addition, this paper analyzes ways to enhance the precision of the estimated depths and presents a feature-point image-coordinate search algorithm to increase the robustness of the proposed method. The search algorithm automatically finds more accurate image coordinates of the feature points based on their detected image coordinates. Experimental results demonstrate the efficiency of the presented method.

17.
李世航  胡茂林 《微机发展》2006,16(4):110-112
This paper proposes using projective invariants to recover 3D depth from image pairs. The basic idea is that, for stereo images, two projective invariants defined on intensity-segment elements are introduced to recover the depth of the intensity segments. From these two invariants, the relationship satisfied by matched intensity-segment pairs in the stereo images can be derived, and this relationship is used to match intensity segments to one another. The method obtains dense, accurate depth directly from the input images and is robust to image deformation.

18.
Objective: Depth images are an important kind of visual sensing data, and their quality is critical to 3D vision systems. Depth images acquired by traditional methods are mostly limited to particular usage scenarios and are easily affected by noise and the environment, so parts of the depth information are missing; depth-image inpainting therefore remains a problem worth studying. This paper proposes a dual-scale sequential filling framework for depth-image inpainting. Method: First, a filling-priority estimation algorithm based on a fast approximation of conditional entropy is proposed. Second, maximum-likelihood estimation is used to obtain optimal predictions of the missing depth values. Finally, the inpainting results are integrated at the pixel and superpixel scales, accurately filling the holes in the depth image. Results: On the mainstream Middlebury (MB) dataset, the method is compared with seven other methods; its average peak signal-to-noise ratio (PSNR) and average structural similarity index (SSIM) are 47.955 dB and 0.9982, respectively. On the manually filled dataset MB+, the method's average PSNR is 34.697 dB and its average SSIM is 0.9785, a clear advantage over the other algorithms. The method also performs well in time-efficiency comparisons. Ablation experiments evaluate the proposed filling-priority estimation, depth-value prediction, and dual-scale refinement separately, verifying the effectiveness of each contribution. Conclusion: Experimental results show that the method has clear advantages over existing methods in robustness, accuracy, and efficiency.

19.
To address the large holes in depth images captured by the Kinect sensor, a depth-image inpainting algorithm guided by fuzzy C-means clustering is proposed. The algorithm takes the synchronously captured color and depth images as input; the color image is clustered with the fuzzy C-means algorithm, and the clustering result serves as the guide image. For each large hole region in the depth image, an improved fast marching method then repairs the hole layer by layer, from the hole boundary inward. Finally, an improved bilateral filtering algorithm removes shot noise from the image. Experiments show that the algorithm effectively fills the holes in Kinect depth images, and the inpainted images are better than those of traditional algorithms in smoothness and edge strength.

20.
To address the inability of a single laser sensor or vision sensor to detect transparent 3D planes, a transparent-plane detection and depth-prediction algorithm based on the fusion of laser and vision sensors is proposed. First, a transparent-plane detection network segments the transparent planes in the 2D color image. Second, a single-image reflection-removal algorithm separates the background information in the segmented transparent-plane regions, and the MegaDepth algorithm predicts depth to obtain a relative depth map. Finally, combining the depth data from the laser sensor, a sampling-consensus algorithm computes a depth scale, depth values are assigned to the transparent plane, and the relative depth map is converted to an absolute depth map, completing depth prediction for the transparent plane. Experimental results show that the algorithm successfully detects and segments transparent planes and obtains correct absolute depth information for them.
