Similar Articles
20 similar articles retrieved.
1.
Recently, the TV-Stokes model has been widely studied for various image processing tasks such as denoising and inpainting. In this paper, we introduce a new TV-Stokes model for image deconvolution and propose fast, efficient iterative algorithms based on the augmented Lagrangian method. The new TV-Stokes model is a two-step model: in the first step, a smoothed, divergence-free tangential field of the observed image is recovered based on total variation (TV) minimization and a new data fidelity term; in the second step, the image is reconstructed by minimizing the distance between the unit image gradient and the regularized unit normal direction. Numerical experiments demonstrate that the proposed model has the potential to outperform popular TV-based restoration methods in preserving texture details and fine structures, yielding an improvement in signal-to-noise ratio (SNR) for both deconvolution and denoising.
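A schematic of the two-step structure described in this abstract, with observed image f, blur kernel K, and tangential field \tau (the exact fidelity terms and constraints used in the paper may differ from this sketch):

    Step 1 (smoothed, divergence-free tangent field):
        \min_{\tau:\,\nabla\cdot\tau=0} \int |\nabla\tau|\,dx + \frac{\lambda}{2}\int |K*\tau - \nabla^{\perp} f|^{2}\,dx

    Step 2 (reconstruction along the regularized unit normals n = \tau^{\perp}/|\tau^{\perp}|):
        \min_{u} \int \big(|\nabla u| - \nabla u\cdot n\big)\,dx + \frac{\mu}{2}\int |K*u - f|^{2}\,dx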

2.
Image deblurring and denoising are fundamental problems in image processing with numerous applications. This paper presents a new nonlinear partial differential equation (PDE) model, based on curve evolution via level sets, for recovering images from blurry and noisy observations. The proposed method integrates an image deconvolution process and a curve-evolution-based regularizing process to form a reaction-diffusion PDE. The regularization term in the proposed PDE combines a diffusive image smoothing term with a reactive image enhancement term; together they suppress noise effectively while restoring image features sharply. We present several numerical results for image restoration with synthetic and real degradations and compare the method with other state-of-the-art image restoration techniques. The experiments confirm its favorable performance, both visually and in terms of improvement in signal-to-noise ratio (ISNR) and Pratt's figure of merit (FOM).
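A schematic form of such a reaction-diffusion deblurring PDE (the precise diffusive and reactive terms here are assumptions for illustration, not the paper's exact equation), with blur kernel K and observation f:

    u_t = \alpha\,|\nabla u|\,\nabla\!\cdot\!\Big(\frac{\nabla u}{|\nabla u|}\Big)
          \;-\; \beta\,\operatorname{sign}(G_{\sigma}*\Delta u)\,|\nabla u|
          \;-\; \lambda\,K^{*}*(K*u - f)

where the first term is a curve-evolution (curvature-driven) smoothing term, the second a shock-type enhancement term, and the third the deconvolution data term.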

3.
4.
Image blur refers to the loss of sharpness and detail during image capture or transmission caused by factors such as lens or camera motion and illumination conditions, which degrades image quality and usability. Image deblurring techniques were developed to remove these effects; their goal is to build computational models that characterize the blur and automatically predict the deblurred sharp image. Research on image deblurring not only facilitates other computer vision tasks but also supports everyday applications such as security surveillance. This survey 1) reviews the development of the image deblurring field and discusses and analyzes influential algorithms for both blind and non-blind image deblurring; 2) discusses common causes of image blur and quality assessment methods for deblurred images; 3) outlines the basic ideas of traditional methods and deep learning based methods and reviews the literature on non-blind and blind image deblurring, where the deep learning based methods include approaches built on convolutional neural networks, recurrent neural networks, generative adversarial networks, and Transformers; 4) briefly introduces the datasets commonly used in image deblurring and compares the performance of several representative deblurring algorithms; 5) discusses the challenges facing the field and offers an outlook on future research directions.

5.
Traditional image deblurring methods tend to produce artifacts such as ringing and edge blurring. To address this, a sparse regularization deblurring model is proposed that uses a non-smooth regularization term to enforce sparsity of the image's representation coefficients over a sparse dictionary and adds a non-negativity constraint. Furthermore, based on the alternating direction augmented Lagrangian (multiplier) method, a fast multi-variable splitting iterative algorithm is proposed for solving the model, which converts the original complex problem into the iterative solution of three simple subproblems and thereby reduces the complexity of solving the model. Experimental results show that the proposed model and its fast algorithm preserve image structure and smoothness relatively well while reducing computational complexity.
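The splitting described above can be sketched as follows (a schematic consistent with the abstract, with D a sparse dictionary, H the blur operator, and f the observed image; the paper's exact formulation may differ):

    \min_{a}\; \|a\|_{1} + \iota_{\ge 0}(Da) + \frac{\lambda}{2}\,\|H D a - f\|_{2}^{2}

Introducing auxiliary variables w = a and v = Da splits this into three simple subproblems, an l1 shrinkage step in w, a non-negative projection in v, and a quadratic solve in a, which are alternated together with Lagrange multiplier updates in the augmented Lagrangian framework.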

6.
Deep learning based methods for non-uniform motion deblurring have achieved good results, but existing methods often fail to restore sharp edges. This paper therefore proposes a strong-edge extraction network (SEEN) that extracts the strong edges of non-uniformly motion-blurred images to improve the quality of edge restoration. The network consists of two sub-networks, SEEN-1 and SEEN-2: SEEN-1 acts as a bilateral filter and extracts image edges after fine details have been filtered out; SEEN-2 acts as an L0 smoothing filter and extracts the strong edges of the blurred image. The strong-edge feature maps extracted at the corresponding network layers are also stacked with the blurred-image feature maps to further exploit the strong-edge features. Validation experiments on the GoPro dataset show that the proposed network extracts the strong edges of non-uniformly motion-blurred images well and that the restored images achieve good results both objectively and subjectively.

7.
Objective: Deep learning methods, represented by convolutional neural networks, have achieved rich results in single-image super-resolution, but most of these methods assume that the low-resolution image is free of blur. In real scenes, however, low-resolution images are usually accompanied by blur caused by camera shake, object motion, and similar factors. To address the super-resolution of blurred images, a novel Transformer fusion network is proposed. Method: A deblurring module and a detail-texture feature extraction module first extract sharp edge-contour features and detail-texture features, respectively. A multi-head self-attention mechanism then computes the response of any local region of the feature map to the global information, so that the Transformer fusion module fuses the edge and texture features at a global semantic level. Finally, a high-resolution reconstruction module recovers a high-resolution image from the fused features. Results: Experiments on two public datasets compare the method with nine recent methods. For 2x, 4x, and 8x super-resolution on the GOPRO dataset, the peak signal-to-noise ratio (PSNR) improves by 0.12 dB, 0.18 dB, and 0.07 dB, respectively, over the second-best model GFN (gated fusion network); for 2x, 4x, and 8x super-resolution on the Kohler dataset, compared with the second-best model GFN, the PSNR values respectively ...
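As an illustration of the kind of attention-based fusion described above, here is a minimal PyTorch sketch; the module name, feature dimensions, and residual connection are assumptions, not the paper's architecture:

import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    """Hypothetical sketch: fuse edge-contour features and detail-texture features
    with multi-head self-attention over flattened feature-map tokens."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)                       # merge the two feature streams
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, edge_feat, tex_feat):
        # edge_feat, tex_feat: (B, N, C) tokens from flattened H*W feature maps
        tokens = self.proj(torch.cat([edge_feat, tex_feat], dim=-1))
        fused, _ = self.attn(tokens, tokens, tokens)              # global response of each local token
        return fused + tokens                                     # residual connection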

8.
This paper proposes a probability formulation that unifies single-image deblurring and multi-image denoising using variational inference. The formulation is based on a theoretical analysis that compares denoising and deblurring in the same probabilistic framework, and it is supported by a practical approach that deals with general motion and creates HDR images in the presence of spatially varying motion. Based on this formulation, a new algorithm for deblurring a noisy and blurry image pair is presented. In addition, we provide an approach that combines existing optical flow and image denoising techniques for high dynamic range (HDR) imaging.

9.
纪野  戴亚平  廣田薰  邵帅 《控制与决策》2024,39(4):1305-1314
For image deblurring in dynamic scenes, a dual learning generative adversarial network (DLGAN) is proposed. Under a dual-learning training scheme, the network can perform deblurring using unpaired blurred and sharp images, so the training set no longer has to consist of blurred images paired with their corresponding sharp images. DLGAN exploits the duality between the deblurring task and the reblurring task to build a feedback signal, and uses this signal to constrain the two tasks so that they learn from and update each other from two different directions until convergence. Experimental results show that, in terms of structural similarity and visual assessment, DLGAN performs better than nine image deblurring methods trained on paired datasets.
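The duality-based feedback signal can be written schematically (my notation, not the paper's): with a deblurring generator D, a reblurring generator R, unpaired blurry images b and sharp images s, a dual (cycle) consistency term such as

    \mathcal{L}_{dual} = \| R(D(b)) - b \|_{1} + \| D(R(s)) - s \|_{1}

is combined with adversarial losses on D(b) and R(s), so that the two tasks constrain each other without paired supervision.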

10.
In this paper, we propose a general form of TV-Stokes models and provide an efficient, fast numerical algorithm based on the augmented Lagrangian method. The proposed model and numerical algorithm can be used for a number of applications, such as image inpainting, image decomposition, surface reconstruction from sparse gradients, direction denoising, and image denoising. Comparing the properties of different norms in the regularization and fidelity terms, we investigate various results across these applications. We show numerically that the proposed model recovers jump discontinuities of the data and of the data gradient while reducing the staircase effect.

11.
Full-image kernel estimation is usually susceptible to interference from smooth and fine-scale background regions and is time-consuming for large images. Since not all pixels in a blurred image are informative, and since it is usually the foreground objects of interest rather than the background that need restoring, we propose the concept of "SalientPatch" to denote informative regions for better blur kernel estimation without user guidance, computed from three cues (objectness probability, structure richness, and local contrast). Although these cues are not new, integrating them so that they complement each other is novel in motion blur restoration. Experiments demonstrate that our SalientPatch-based deblurring algorithm significantly speeds up kernel estimation while guaranteeing high-quality recovery for large blurry images.
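A minimal NumPy sketch of combining the three cues named above into a per-patch score; the normalization and equal weighting are assumptions, not the paper's actual scheme:

import numpy as np

def salient_patch_score(objectness, structure_richness, local_contrast, weights=(1.0, 1.0, 1.0)):
    # Hypothetical combination of the three cues into one saliency score per patch.
    def normalize(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min() + 1e-8)
    w1, w2, w3 = weights
    return (w1 * normalize(objectness)
            + w2 * normalize(structure_richness)
            + w3 * normalize(local_contrast))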

12.
Research progress on restoration techniques for blur-degraded images
This paper reviews recent progress in restoration techniques for blur-degraded images. It first analyzes the causes of image blur, then introduces the mathematical model of image blur and, on this basis, the classical image deblurring models. Several classical deblurring methods, including Wiener filtering, Richardson-Lucy (RL) deconvolution, and total variation, are then described in detail, with experimental results for each. Building on this review of classical algorithms, recent research on image deblurring is categorized and analyzed from three perspectives: natural image statistics, image edges, and dictionary learning, again with experimental results for several representative algorithms. Finally, on the basis of this review of classical methods and recent progress, prospects for the development of image deblurring techniques are discussed.
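Of the classical methods reviewed above, Richardson-Lucy deconvolution is easy to state compactly; a minimal NumPy/SciPy sketch of the standard iteration (illustrative only, not code from the surveyed papers) is:

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=30):
    # Classic Richardson-Lucy iteration: multiplicative update by the
    # correlation of the ratio image with the flipped PSF.
    blurred = np.asarray(blurred, dtype=float)
    estimate = np.full_like(blurred, 0.5)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        conv = fftconvolve(estimate, psf, mode='same') + 1e-12
        ratio = blurred / conv
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return estimate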

13.
Objective: Non-uniform blind motion deblurring is one of the fundamental problems in image processing and computer vision. Traditional deblurring algorithms have two long-standing drawbacks: they handle only a single type of blur and are time-consuming. Motivated by the strong performance of neural networks in image generation, this paper treats motion deblurring as a special image generation problem and proposes a fast neural-network-based deblurring method. Method: A densely connected convolutional network (DenseNet), which performs well in image classification and makes full use of information from intermediate layers, is applied to deblurring. For the loss function, a perceptual loss better suited to the deblurring goal is adopted to keep the content of the generated image consistent with the sharp image, and a generative adversarial network (GAN) is used to make the generated image perceptually closer to the sharp image. Results: Performance is evaluated by the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of the generated images relative to the sharp images, and by the restoration time. Compared with DeblurGAN (blind motion deblurring using conditional adversarial networks), the proposed algorithm improves the average PSNR on the GOPRO test set by 0.91 and shortens the restoration time by 0.32 s, successfully recovering details lost to motion blur. It also outperforms current mainstream algorithms on the Kohler dataset, handles different blur kernels, and is robust. Conclusion: The proposed network has a simple structure, restores images well, and generates images noticeably faster than other methods. It is also robust and suitable for various kinds of image degradation caused by motion blur.
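A common way to implement the perceptual loss mentioned above is to compare deep feature maps of the restored and sharp images; this PyTorch sketch uses VGG19 and an arbitrary layer cut-off, which are assumptions rather than the paper's stated choices:

import torch.nn as nn
from torchvision.models import vgg19

class PerceptualLoss(nn.Module):
    # Hypothetical sketch: MSE between frozen VGG19 feature maps of the restored
    # and sharp images; the backbone and cut-off layer are assumptions.
    def __init__(self, cutoff=35):
        super().__init__()
        features = vgg19(weights='DEFAULT').features[:cutoff].eval()
        for p in features.parameters():
            p.requires_grad = False
        self.features = features
        self.mse = nn.MSELoss()

    def forward(self, restored, sharp):
        return self.mse(self.features(restored), self.features(sharp))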

14.
The aim of this paper is to present a computational study of scaling techniques in gradient projection-type (GP-type) methods for deblurring astronomical images corrupted by Poisson noise. The imaging problem is formulated as a non-negatively constrained minimization problem in which the objective function is the sum of a fit-to-data term, the Kullback-Leibler divergence, and a Tikhonov regularization term. The considered GP-type methods share a common iteration formula in which the scaling matrix and the step-length parameter characterize the different algorithms. Within this formulation, both first-order and Newton-like methods are analysed, with particular attention to the implementation features and behaviours relevant to the image restoration problem. The numerical experiments show that suitable scaling strategies enable the GP methods to approximate accurate reconstructions quickly and are therefore useful for designing effective image deblurring algorithms.
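The common iteration formula referred to above is typically of the scaled gradient projection form (the scaling matrix D_k and step length alpha_k distinguish the variants; the exact objective below is my rendering of the abstract's description):

    x^{(k+1)} = P_{\ge 0}\!\left( x^{(k)} - \alpha_k\, D_k\, \nabla J(x^{(k)}) \right),
    \qquad J(x) = \mathrm{KL}\big(Hx;\, y\big) + \frac{\rho}{2}\,\|x\|_{2}^{2}

where H is the blur operator, y the observed counts, KL the Kullback-Leibler fit-to-data term, the second term the Tikhonov regularizer, and P_{\ge 0} the projection onto the non-negative orthant.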

15.
The restoration of images degraded by blur and multiplicative noise is a critical preprocessing step for medical ultrasound images, which exhibit clinically diagnostic features of interest. This paper proposes a novel non-smooth, non-convex variational model for ultrasound image denoising and deblurring, motivated by the success of sparse image representations and Fields of Experts (FoE) based approaches. Dictionaries are well adapted to textures and are extended to arbitrary image sizes by defining a global image prior, while the FoE image prior explicitly characterizes the statistical properties of natural images. Following these ideas, the new model is composed of a data-fidelity term, sparse and redundant representations over learned dictionaries, and the FoE image prior. The iPiano algorithm efficiently handles the resulting optimization problem. The proposed model is applied to several simulated images and real ultrasound images; the denoising and deblurring results show that it gives better visual quality, efficiently removing noise while preserving details, compared with two state-of-the-art methods.
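For reference, the iPiano iteration mentioned above (an inertial proximal algorithm for non-convex problems) has the general form

    x^{(k+1)} = \mathrm{prox}_{\alpha g}\!\left( x^{(k)} - \alpha\,\nabla f(x^{(k)}) + \beta\,(x^{(k)} - x^{(k-1)}) \right)

where f collects the smooth terms, g the non-smooth part, and beta is the inertial weight; how the data-fidelity, dictionary, and FoE terms of this particular model are split between f and g is an assumption left open here.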

16.
In this paper we propose a space-variant blur estimation and effective denoising/deconvolution method for combining a long-exposure blurry image with a short-exposure noisy one. The blur in the long-exposure shot is mainly caused by camera shake or object motion, and the noise in the underexposed image is introduced by the gain factor applied to the sensor when the ISO is set to a high value. Because the degradation is space-variant, the image pair is divided into overlapping patches for processing. The main idea of the deconvolution algorithm is to incorporate a combination of prior image models into a spatially varying deblurring/denoising framework applied to each patch. The method employs kernel and parameter estimation to choose between denoising and deblurring for each patch. Experiments on both synthetic and real images validate the proposed approach.

17.
In this paper, we propose a fast fixed-point algorithm and apply it to total variation (TV) deblurring and segmentation. The TV-based models can be written as a general minimization problem. The novel method is derived from the idea of establishing a relation between solutions of the general minimization problem and new variables, which can be obtained efficiently by a fixed-point algorithm. Under mild conditions, it provides a platform for developing efficient numerical algorithms for various image processing tasks. We then specialize this fixed-point methodology to TV-based image deblurring and segmentation models, and compare the resulting algorithms with the split Bregman method, a strong contender among state-of-the-art algorithms. Numerical experiments demonstrate that the proposed algorithm performs favorably.

18.
The intensity (grey value) consistency of image pixels in a sequence or stereo camera setup is of central importance to numerous computer vision applications. Most stereo matching and optical flow algorithms minimise an energy function composed of a data term and a regularity or smoothing term. To date, well-performing methods rely on the intensity consistency of image pixel values to model the data term. Such a simple model fails if the illumination differs (even slightly) between the input images. Amongst other situations, this may happen due to background illumination change over the sequence, different reflectivity of a surface, vignetting, or shading effects. In this paper, we investigate the removal of illumination artifacts and show that generalised residual images substantially improve the accuracy of correspondence algorithms. In particular, we motivate the concept of residual images and present two evaluation approaches, using either ground-truth correspondence fields (for stereo matching and optical flow algorithms) or errors based on a predicted view (for stereo matching algorithms).
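One simple way to form a residual image of the kind discussed above is to subtract a smoothed "structure" component from the input, keeping the illumination-robust high-frequency part; a minimal sketch (the filter choice and window size are assumptions, not necessarily the paper's):

import numpy as np
from scipy.ndimage import median_filter

def residual_image(img, size=5):
    # Hypothetical sketch: residual = image minus a smoothed structure component,
    # which suppresses slowly varying illumination differences between views.
    img = np.asarray(img, dtype=float)
    return img - median_filter(img, size=size)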

19.
余孝源  谢巍  陈定权  周延 《控制与决策》2020,35(7):1667-1673
The traditional dark channel prior has been successfully applied to single-image deblurring, but when the blurred image contains significant noise the dark channel prior no longer helps with blur kernel estimation. Since fractional-order calculus can effectively suppress noise in a signal while enhancing its low-frequency components, this paper combines fractional-order calculus with the dark channel prior of blurred images and proposes a motion blur kernel estimation method based on an improved dark channel prior. First, a kernel estimation model for motion-blurred images is built by combining maximum a posteriori (MAP) estimation with a fractional-order dark channel prior. Second, the non-convexity of the model is handled with half-quadratic splitting. Finally, following a coarse-to-fine strategy, the blur kernel is estimated within a multi-scale iterative framework, and the sharp image is then recovered by non-blind deblurring. Experimental results show that, for blurred images with or without significant noise, the proposed algorithm obtains relatively accurate blur kernels, reduces image noise and ringing artifacts, and improves the quality of the estimated sharp image, although it requires more computation time; it is also applicable to different types of blurred images.
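For context, the standard (integer-order) dark channel that the above method builds on is the per-pixel minimum over the color channels followed by a local minimum filter; a minimal NumPy/SciPy sketch (the fractional-order variant itself is not reproduced here):

import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    # Standard dark channel: per-pixel minimum over the color channels,
    # followed by a local minimum over a patch neighbourhood.
    img = np.asarray(img, dtype=float)
    per_pixel_min = img.min(axis=2) if img.ndim == 3 else img
    return minimum_filter(per_pixel_min, size=patch)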

20.
Because light is absorbed and scattered as it propagates underwater, underwater images often suffer from color cast, low contrast, blur, and uneven illumination. According to the underwater imaging model, images captured on the seabed are typically degraded; such degraded images cannot fully convey the scene and are inadequate for practical applications. This paper therefore proposes an underwater image enhancement method based on color correction and deblurring. The method effectively combines a color-correction stage and a deblurring stage to achieve progressively better enhancement. In the color-correction stage, the original image is first contrast-stretched; because stretching may over- or under-correct, gamma correction guided by the gray-world prior is then applied to adjust the contrast and color so that the sums of the gray values of the R, G, and B channels become approximately equal. In the deblurring stage, the color-corrected image is deblurred by incorporating the dark channel prior, yielding the final enhanced image. Experimental results show that the method restores image information effectively and performs well in both subjective and objective evaluations. Moreover, it can serve as a preprocessing step for computer vision tasks such as underwater image classification, improving classification accuracy on an underwater image set by about 16% in the experiments.
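A minimal sketch of the gray-world-guided gamma correction described above (the normalization, the choice of the global mean as the target gray level, and the per-channel gamma formula are assumptions for illustration):

import numpy as np

def gray_world_gamma(img):
    # Hypothetical sketch: choose a per-channel gamma so that each channel's mean
    # moves toward the global gray mean, following the gray-world prior.
    img = np.clip(np.asarray(img, dtype=float) / 255.0, 1e-6, 1.0 - 1e-6)
    target = img.mean()
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        gamma = np.log(target) / np.log(img[..., c].mean())
        out[..., c] = img[..., c] ** gamma
    return (out * 255).astype(np.uint8)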

