Similar Literature
 20 similar documents found (search time: 171 ms)
1.
To address global and local defocus blur in target images, a fast blind restoration method based on adaptive blur-map estimation is proposed. First, exploiting the continuity of image edges across scale space, a reference matrix for the second-blur amount is selected adaptively and the defocused target image is re-blurred. A sparse blur map is then computed from the edge difference ratio and interpolated with guided filtering into a dense blur map. Finally, a physical relation between the blur map and the degraded image is established from the optical defocus degradation model, enabling fast restoration of the defocused target image. Experimental results show that the method effectively restores defocused target images, enhances their edge features, runs far faster than iterative algorithms by avoiding their high time cost, and is suitable for practical industrial use.

2.
Based on the characteristics of defocus-blurred images, a new edge detection algorithm for defocused images is proposed. The algorithm defines a new edge detection operator, convolves it with every pixel of the image to obtain per-pixel gradient magnitude and direction, and thresholds these to produce the edge map of the defocused image. Experimental results show that, for defocus-blurred images, the new operator detects edges that have been weakened by the blur, with results that agree with human visual perception.
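The pipeline in the abstract (convolve an operator, form gradient magnitude and direction, threshold) can be sketched as below. The paper defines its own operator, which is not given here; the classic Sobel pair is used as a stand-in:

```python
# Hedged sketch: per-pixel convolution with a gradient operator, then
# magnitude thresholding. Sobel stands in for the paper's operator.
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def conv3x3(img, k):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(k[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

def edge_map(img, thresh):
    gx, gy = conv3x3(img, SOBEL_X), conv3x3(img, SOBEL_Y)
    h, w = len(img), len(img[0])
    # gradient magnitude per pixel; keep pixels above the threshold
    return [[1 if math.hypot(gx[y][x], gy[y][x]) > thresh else 0
             for x in range(w)] for y in range(h)]
```

A lower threshold than usual would be the natural adaptation for blur-weakened edges.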

3.
For indoor-scene depth images, detecting patch edges and determining complete surfaces in the scene is the basis of indoor scene analysis and understanding. An edge detection method for indoor depth images based on local binary pattern (LBP) features of the depth data is proposed. First, gradients of the depth map are computed in the X and Y directions and combined to locate basic edges; next, LBP features are computed near these edges, together with the normal at every point of the depth map; finally, the LBP features and normals are used to validate and correct the edges, yielding the patch edges of the depth image. Experiments show that the method detects edges in indoor depth images efficiently and accurately, avoiding both over- and under-detection.
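The texture feature computed around candidate depth edges is the classic 8-neighbour LBP; a minimal sketch (not necessarily the paper's exact variant) is:

```python
# 8-neighbour local binary pattern for one pixel: compare each neighbour
# against the centre and pack the comparisons into an 8-bit code.
# Clockwise from the top-left neighbour.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(img, y, x):
    c = img[y][x]
    code = 0
    for bit, (dy, dx) in enumerate(OFFSETS):
        if img[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code
```

A flat depth patch yields code 255, while a local depth spike yields 0, which is why the code separates smooth surfaces from depth discontinuities.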

4.

5.
Because the Kinect sensor's effective range is limited, its depth images suffer from black holes and noise, and traditional enhancement algorithms fill the holes using only the color image, leaving object edges in the enhanced depth image blurred. To address this, a depth-image enhancement algorithm based on edge-information-guided filtering is proposed. First, edge information is extracted from the color and depth images separately and fused into a guiding edge map; next, the edge information is combined with iterative non-local median filtering to fill the black holes; finally, adaptive median filtering smooths the remaining noise. Experiments show that the algorithm repairs depth images well and yields clearer object edges.
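The hole-filling idea can be sketched as follows, treating zero depth values as Kinect "black holes" and replacing each with the median of its valid neighbours. The paper additionally guides this step with the fused colour/depth edge map and iterates non-locally, which is omitted here:

```python
# Minimal sketch: fill zero-valued depth pixels with the median of
# valid (non-zero) 8-neighbours. Assumes 0 marks a missing measurement.
import statistics

def fill_holes(depth):
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for y in range(h):
        for x in range(w):
            if depth[y][x] != 0:
                continue
            neigh = [depth[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if 0 <= y + dy < h and 0 <= x + dx < w
                     and depth[y + dy][x + dx] != 0]
            if neigh:
                out[y][x] = statistics.median(neigh)
    return out
```

Median (rather than mean) filling is what preserves depth discontinuities at object edges.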

6.
Objective: Most current methods that recover 3D scene depth from a single defocused image model the point spread function (PSF) with a Gaussian distribution, obtain a sparse depth map from the correspondence between edge defocus blur and scene depth, and then propagate it into a full depth map. Since existing results are neither accurate enough nor robust enough to noise, a method is proposed that computes the defocus blur at object edges using a Cauchy-distributed PSF model. Method: The input defocused image is re-blurred with two Cauchy distributions; the defocus blur at each edge is computed from the gradient ratio between the two re-blurred images and the two Cauchy scale parameters. The edge blur is then propagated to the whole image by matting interpolation, recovering the full depth map. Results: The Lenna image was rotated and corrupted with Gaussian noise to simulate image noise and edge-location error, and the mean error of the Cauchy gradient ratio was compared with that of the Gaussian gradient ratio: the Cauchy ratio's mean error was smaller. On a variety of real scenes, the method was compared with several existing single-image depth recovery methods; it recovers scene depth well from an uncalibrated single defocused image and is more robust to image noise, inaccurate edge locations, and nearby edges. Conclusion: The method produces depth maps superior to existing Gaussian-model-based methods, demonstrating the feasibility and effectiveness of modelling the PSF with a non-Gaussian distribution.
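The contrast between the two PSF models can be made concrete by building both 1-D kernels at a comparable scale; the Cauchy kernel's heavier tails are what the abstract credits for the improved noise robustness. A sketch (illustrative scale parameters, not the paper's calibration):

```python
# Normalized discrete 1-D Gaussian and Cauchy (Lorentzian) kernels.
# The Cauchy profile 1 / (1 + (x / gamma)^2) decays polynomially,
# so it keeps far more weight in the tails than the Gaussian.
import math

def gaussian_kernel(sigma, radius):
    w = [math.exp(-k * k / (2 * sigma * sigma))
         for k in range(-radius, radius + 1)]
    s = sum(w)
    return [v / s for v in w]

def cauchy_kernel(gamma, radius):
    w = [1.0 / (1.0 + (k / gamma) ** 2)
         for k in range(-radius, radius + 1)]
    s = sum(w)
    return [v / s for v in w]
```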

7.
A blind restoration algorithm for defocus-blurred images   Cited: 1 (self-citations: 0, others: 1)
A blind restoration algorithm for defocus-blurred images is proposed. The algorithm first detects straight edges in the defocused image with the Hough transform, then locates step or near-step straight edges using the spatial statistics of the image and a modified Grubbs test; on this basis the line spread function is computed adaptively, the defocus blur radius is derived from it, and Wiener filtering completes the restoration. Experiments on real defocused images show that the algorithm accurately detects and locates step or near-step edges, improves the precision of the blur-radius estimate and the quality of the restoration, and has been applied with some success in real forensic work.
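The edge-screening step relies on a Grubbs outlier test. A plain two-sided Grubbs statistic is sketched below; the paper's modification and its exact critical values are not reproduced, and the threshold here is illustrative:

```python
# Grubbs test sketch: flag the sample farthest from the mean (in units
# of the sample standard deviation) if it exceeds a critical value.
import statistics

def grubbs_outlier(xs, g_crit):
    mean = statistics.fmean(xs)
    sd = statistics.stdev(xs)
    g, idx = max((abs(x - mean) / sd, i) for i, x in enumerate(xs))
    return idx if g > g_crit else None
```

In the restoration pipeline, such a test would reject edge-profile samples inconsistent with a clean step before fitting the line spread function.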

8.
To detect and localize the artificial edge blurring commonly used in digital image tampering, an artificial-blur edge detection algorithm based on fuzzy sets, local sharpness, and mathematical morphology is proposed. Fuzzy sets are used to extract image edges, producing enhanced artificially blurred edges and weakened non-artificial edges; local sharpness is then introduced to distinguish artificially blurred edge points from defocus-blurred ones; finally, morphological erosion thins away the weakened non-artificial edges while retaining the enhanced artificially blurred ones, localizing the artificial blur at the pixel level. Examples verify the effectiveness and correctness of the algorithm.

9.
Existing methods for inpainting faces occluded by arbitrarily shaped masks tend to produce blurred edges and distorted results. A face inpainting algorithm combining edge information with gated convolutions is proposed. First, prior facial knowledge is used to generate an edge map of the occluded region to constrain the inpainting process. Then, exploiting the ability of gated convolutions to describe local features precisely when some pixels are missing, a gated-convolution generative adversarial network (GAN) for image inpainting is designed. The model consists of two parts, an edge-completion GAN and an image-inpainting GAN: the edge network is trained on multi-source information (the binary mask, the damaged image, and its edge map) and automatically completes and connects the missing edges; the inpainting network then uses the completed edge map as guidance, together with the occluded image, to restore the missing region. Experiments show that the algorithm inpaints better than the alternatives, with evaluation metrics superior to current deep-learning-based inpainting algorithms.

10.
Recovering 3D scene depth from 2D defocused images is an important research direction in computer vision. However, acquiring images with different amounts of defocus normally requires changing camera parameters such as the focal length, image distance, or aperture size. In applications requiring high-magnification observation, the precision camera's depth of field is extremely small, and changing any camera parameter can have destructive consequences, which greatly limits the applicability of many existing depth-from-defocus algorithms. A new method is therefore proposed that recovers global scene depth by varying the object distance. First, two images with different amounts of defocus are captured by changing the object distance; then a blur imaging model is built from the relative blur and the heat diffusion equation; finally, the depth computation is cast as a dynamic optimization problem and solved to obtain global scene depth. The method requires no change of camera parameters and no computation of a sharp image, and is simple to operate. Simulations and error analysis show that it recovers depth with high accuracy and suits micro/nano manipulation, high-precision rapid inspection, and other settings sensitive to camera-parameter changes.
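Why moving the object (rather than the optics) changes the defocus can be seen from the thin-lens model: with focal length f, aperture diameter D, and sensor distance v0 all fixed, the blur-circle diameter on the sensor varies with object distance u. A sketch with illustrative numbers (millimetres), not the paper's setup:

```python
# Thin-lens blur circle: rays from an object at distance u converge at
# image distance v = 1 / (1/f - 1/u); at the fixed sensor plane v0 the
# cone of width D subtends a blur disc of diameter D * |v0 - v| / v.
def blur_diameter(u, f, D, v0):
    v = 1.0 / (1.0 / f - 1.0 / u)    # image distance for object at u
    return D * abs(v0 - v) / v       # similar triangles at the sensor

f, D = 50.0, 25.0                    # focal length, aperture diameter
u0 = 1000.0                          # camera focused at 1 m
v0 = 1.0 / (1.0 / f - 1.0 / u0)     # fixed sensor distance
```

At u = u0 the blur vanishes, and it grows monotonically as the object moves off the focused distance, which is the signal the method's two captures exploit.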

11.
In this paper, we address the challenging problem of recovering the defocus map from a single image. We present a simple yet effective approach to estimate the amount of spatially varying defocus blur at edge locations. The input defocused image is re-blurred using a Gaussian kernel and the defocus blur amount can be obtained from the ratio between the gradients of input and re-blurred images. By propagating the blur amount at edge locations to the entire image, a full defocus map can be obtained. Experimental results on synthetic and real images demonstrate the effectiveness of our method in providing a reliable estimation of the defocus map.
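The gradient-ratio estimator described above has a closed form for an ideal step edge: if the edge is blurred by an unknown Gaussian sigma and then re-blurred by a known sigma1, the ratio R of the edge-gradient magnitudes gives sigma = sigma1 / sqrt(R^2 - 1). A 1-D sketch under that step-edge assumption:

```python
# Simulate the estimator: blur a step with unknown sigma, re-blur with a
# known sigma1, and recover sigma from the peak gradient ratio.
import math

def gauss_kernel(sigma):
    r = int(math.ceil(4 * sigma))
    w = [math.exp(-k * k / (2 * sigma * sigma)) for k in range(-r, r + 1)]
    s = sum(w)
    return [v / s for v in w]

def convolve(signal, kernel):
    r = len(kernel) // 2
    n = len(signal)
    # replicate borders so the finite step has no spurious edges
    return [sum(kernel[r + k] * signal[min(max(i - k, 0), n - 1)]
                for k in range(-r, r + 1)) for i in range(n)]

def max_gradient(signal):
    return max(abs(b - a) for a, b in zip(signal, signal[1:]))

sigma_true, sigma1 = 2.0, 1.0
step = [0.0] * 32 + [1.0] * 32
blurred = convolve(step, gauss_kernel(sigma_true))   # observed edge
reblurred = convolve(blurred, gauss_kernel(sigma1))  # known re-blur
R = max_gradient(blurred) / max_gradient(reblurred)
sigma_est = sigma1 / math.sqrt(R * R - 1)
```

The recovered sigma_est matches sigma_true closely; the paper's contribution is propagating this per-edge estimate into a dense defocus map.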

12.
Deep-learning-based motion deblurring has attracted wide attention in recent years, while single-image defocus deblurring has rarely been studied. To address defocus blur in single images, a defocus deblurring algorithm based on a recurrent neural network is proposed. First, two residual networks are cascaded, handling defocus-map estimation and image deblurring respectively; then, so that deep features of the defocus map and the sharp image can propagate across stages and interact within a stage, LSTM (long short-term memory) recurrent layers are introduced into the residual networks; finally, the whole network is iterated several times with parameters shared across iterations. To train the network, a synthetic defocus dataset was built in which every defocused image is paired with its sharp image and defocus map. Experiments show that the algorithm significantly outperforms the baselines in both subjective and objective image quality, with sharper edges and clearer detail in the restored results. On DPD, a real dual-pixel defocus-blur dataset, the algorithm improves peak signal-to-noise ratio (PSNR) by 0.77 dB and structural similarity (SSIM) by 5.6% over DPDNet-Single, so the proposed method handles real-scene defocus blur effectively.

13.
Owing to recent advances in depth sensors and computer vision algorithms, depth images are often available with co-registered color images. In this paper, we propose a simple but effective method for obtaining an all-in-focus (AIF) color image from a database of color and depth image pairs. Since the defocus blur is inherently depth-dependent, the color pixels are first grouped according to their depth values. The defocus blur parameters are then estimated using the amount of the defocus blur of the grouped pixels. Given a defocused color image and its estimated blur parameters, the AIF image is produced by adopting the conventional pixel-wise mapping technique. In addition, the availability of the depth image disambiguates the objects located far or near from the in-focus object and thus facilitates image refocusing. We demonstrate the effectiveness of the proposed algorithm using both synthetic and real color and depth images.

14.
15.
In this paper, we propose a MAP-Markov random field (MRF) based scheme for recovering the depth and the focused image of a scene from two defocused images. The space-variant blur parameter and the focused image of the scene are both modeled as MRFs and their MAP estimates are obtained using simulated annealing. The scheme is amenable to the incorporation of smoothness constraints on the spatial variations of the blur parameter as well as the scene intensity. It also allows for inclusion of line fields to preserve discontinuities. The performance of the proposed scheme is tested on synthetic as well as real data and the estimates of the depth are found to be better than that of the existing window-based depth from defocus technique. The quality of the space-variant restored image of the scene is quite good even under severe space-varying blurring conditions.

16.
Depth from defocus (DFD) is a technique that restores scene depth based on the amount of defocus blur in the images. DFD usually captures two differently focused images, one near-focused and the other far-focused, and calculates the size of the defocus blur in these images. However, DFD using a regular circular aperture is not sensitive to depth, since the point spread function (PSF) is symmetric and only the radius changes with the depth. In recent years, the coded aperture technique, which uses a special pattern for the aperture to engineer the PSF, has been used to improve the accuracy of DFD estimation. The technique is often used to restore an all-in-focus image and estimate depth in DFD applications. Use of a coded aperture has a disadvantage in terms of image deblurring, since deblurring requires a higher signal-to-noise ratio (SNR) of the captured images. The aperture attenuates incoming light in controlling the PSF and, as a result, decreases the input image SNR. In this paper, we propose a new computational imaging approach for DFD estimation using focus changes during image integration to engineer the PSF. We capture input images with a higher SNR since we can control the PSF with a wide aperture setting unlike with a coded aperture. We confirm the effectiveness of the method through experimental comparisons with conventional DFD and the coded aperture approach.

17.
This paper proposes a novel method to synthesize shallow depth-of-field images from two input photographs taken with different aperture values. The basic approach is to estimate the depth map of a given scene using a DFD (depth-from-defocus) algorithm and blur an input image according to the estimated depth map. The depth information estimated by DFD contains much noise and error, while the estimation is rather accurate along the edges of the image. To overcome the limitation, we propose a depth map filling algorithm using a set of initial depth maps and a segmented image. After depth map filling, the depth map can be fine tuned by applying segment clustering and user interaction. Since our method blurs an input image according to the estimated depth information, it generates physically plausible result images with shallow depth-of-field. In addition to depth-of-field control, the proposed method can be utilized for digital refocusing and detail control in image stylization.
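The final rendering step, blurring the input according to the estimated depth map, can be sketched in 1-D: each pixel is blurred by an amount proportional to its distance from the in-focus depth. The linear depth-to-sigma mapping and the toy signal below are illustrative assumptions, not the paper's model:

```python
# Depth-dependent Gaussian blur: sigma_i = k * |depth_i - focus_depth|.
# In-focus pixels are copied unchanged; others get a local Gaussian
# average with replicated borders.
import math

def render_dof(img, depth, focus_depth, k):
    n = len(img)
    out = []
    for i in range(n):
        sigma = k * abs(depth[i] - focus_depth)
        if sigma < 1e-6:
            out.append(img[i])           # in-focus pixels stay sharp
            continue
        r = int(math.ceil(3 * sigma))
        w = [math.exp(-d * d / (2 * sigma * sigma))
             for d in range(-r, r + 1)]
        s = sum(w)
        out.append(sum(w[r + d] * img[min(max(i + d, 0), n - 1)]
                       for d in range(-r, r + 1)) / s)
    return out
```

Because the blur radius follows depth, regions far from the focal plane are smoothed strongly while the focused subject is untouched, which is what makes the synthesized shallow depth of field look physically plausible.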

18.
杨晓洁, 杨字红, 王慈. 《计算机工程》, 2011, 37(18): 211-213
A restoration algorithm for multi-focus defocused images is proposed. Based on the point-spread-function degradation model of imaging, a reasonably sharp image of a scene (including foreground and background) is obtained from any two blurred images of that scene focused differently. At the same time, the correspondence between the PSF blur radius and object depth is used to estimate the depth of objects in the scene. Experimental results demonstrate the effectiveness of the algorithm.

19.
We present the focal flow sensor. It is an unactuated, monocular camera that simultaneously exploits defocus and differential motion to measure a depth map and a 3D scene velocity field. It does this using an optical-flow-like, per-pixel linear constraint that relates image derivatives to depth and velocity. We derive this constraint, prove its invariance to scene texture, and prove that it is exactly satisfied only when the sensor’s blur kernels are Gaussian. We analyze the inherent sensitivity of the focal flow cue, and we build and test a prototype. Experiments produce useful depth and velocity information for a broader set of aperture configurations, including a simple lens with a pillbox aperture.

20.
Defocus Magnification   Cited: 1 (self-citations: 0, others: 1)
A blurry background due to shallow depth of field is often desired for photographs such as portraits, but, unfortunately, small point-and-shoot cameras do not permit enough defocus because of the small diameter of their lenses. We present an image-processing technique that increases the defocus in an image to simulate the shallow depth of field of a lens with a larger aperture. Our technique estimates the spatially-varying amount of blur over the image, and then uses a simple image-based technique to increase defocus. We first estimate the size of the blur kernel at edges and then propagate this defocus measure over the image. Using our defocus map, we magnify the existing blurriness, which means that we blur blurry regions and keep sharp regions sharp. In contrast to more difficult problems such as depth from defocus, we do not require precise depth estimation and do not need to disambiguate textureless regions.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号