Similar Articles
20 similar articles found (search time: 0 ms)
1.
2.
Lossy compression at low bit rates often creates ringing and contouring artifacts in the output images and introduces blurring and distortion at block boundaries. To overcome these compression artifacts, various neural-network-based post-processing techniques have been explored in recent years. The traditional loop-filter methods in the HEVC framework support two post-processing operations: a de-blocking filter followed by a sample adaptive offset (SAO) filter. These operations usually introduce extra signaling bits and become overhead for high-resolution video processing. In this study, we propose a new deep-learning-based algorithm for the SAO filtering operation and substantiate its merits. We introduce a variable-filter-size sub-layered dense CNN (SDCNN) to improve the denoising operation and incorporate large-stride deconvolution layers to further reduce computation. We demonstrate that our deconvolution model can be trained effectively by leveraging the high-frequency edge features learned in a shallow network, using residual learning and data augmentation. Extensive experiments show that our approach outperforms other state-of-the-art approaches in SSIM, Bjøntegaard delta bit rate (BD-BR), and BD-PSNR on standard video test sets, achieving an average bit-rate saving of 8.73% over the HEVC baseline.

3.
General-purpose image compression techniques may be less effective for specific applications such as video surveillance. Since a stationary surveillance camera always targets a fixed scene, its captured images exhibit high consistency in content and structure. In this paper, we propose a surveillance image compression technique based on dictionary learning that fully exploits the constant characteristics of a target scene. The method transforms images over sparse, tailored, over-complete dictionaries learned directly from image samples rather than a fixed dictionary, and can therefore approximate an image with fewer coefficients. A set of dictionaries trained off-line is applied for sparse representation, and an adaptive image blocking method is developed so that the encoder can represent an image in a texture-aware way. Experimental results show that the proposed algorithm significantly outperforms JPEG and JPEG 2000 in both reconstructed image quality and compression ratio.
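The core encode step, sparse coding over a dictionary, can be sketched in a few lines. This is not the paper's encoder: the dictionary below is a random orthonormal basis rather than an over-complete one learned from surveillance frames, and the greedy coder is plain orthogonal matching pursuit. It only illustrates approximating a signal with few coefficients:

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Orthogonal matching pursuit: greedily pick the dictionary atom
    (column of D) most correlated with the residual, then least-squares
    refit the coefficients on the chosen support."""
    residual = x.astype(float).copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = x - D @ coef
    return coef

# stand-in "dictionary": an orthonormal basis (a learned one would be
# over-complete and trained on image-patch samples)
rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.standard_normal((8, 8)))
x = 2.0 * D[:, 1] - 0.5 * D[:, 5]   # exactly 2-sparse in this basis
c = omp(D, x, n_nonzero=2)
```

With an orthonormal stand-in dictionary, OMP recovers the two active atoms and their coefficients exactly; a trained over-complete dictionary trades that guarantee for fewer coefficients on real scene content.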

4.
In this work, the normalized dictionary distance (NDD) is presented and investigated. NDD is a similarity metric based on the dictionary of a sequence acquired from a data compressor; a dictionary carries significant information about the structure of the sequence it was extracted from. We examine the performance of this distance measure on color image retrieval tasks, focusing on three parameters: the transformation of the 2-D image into a 1-D string, the color-to-character correspondence, and the image size. We demonstrate that NDD can outperform standard (dis)similarity measures based on color histograms or color distributions.
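To make the idea concrete, here is a minimal sketch assuming an LZ78-style parse as the "dictionary of a sequence" and an NCD-style normalization; the paper's exact compressor and formula may differ, and image retrieval would first linearize the 2-D image into a string and map colors to characters, as the abstract's three parameters describe:

```python
def lz78_phrases(s):
    """Dictionary of phrases from an LZ78-style incremental parse of s."""
    phrases, current = set(), ""
    for ch in s:
        current += ch
        if current not in phrases:
            phrases.add(current)
            current = ""
    return phrases

def ndd(x, y):
    """Dictionary-size distance with an NCD-style normalization
    (assumed formulation, not verified against the paper)."""
    dx = len(lz78_phrases(x))
    dy = len(lz78_phrases(y))
    dxy = len(lz78_phrases(x + y))
    return (dxy - min(dx, dy)) / max(dx, dy)

a, b = "abab" * 20, "cdcd" * 20   # similar vs. unrelated sequences
```

Concatenating a sequence with itself adds few new phrases, while concatenating two unrelated sequences roughly doubles the dictionary, so `ndd(a, a) < ndd(a, b)`.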

5.
Compression noise reduction resembles the super-resolution problem in that both restore lost high-frequency information. Because learning-based approaches have proven successful for super-resolution, we focus on a learning-based technique for compressed image denoising. In this process, it is important to find an accurate prior in a training set. The proposed method utilizes two databases (a noisy and a denoised one) that work together in a complementary way, and the denoised images retrieved from the dual databases are combined into a final result. Additionally, the input noisy image is decomposed into structure and texture components, and only the latter is denoised, because most noise tends to reside in the texture component. Experimental results show that the proposed method reduces compression noise while reconstructing information lost in the compression process, especially in texture regions.
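The structure/texture split can be mimicked with a crude stand-in: treat a low-pass filtered image as structure, the residual as texture, and denoise only the texture. Soft-thresholding replaces the paper's dual-database search here; everything below is an illustrative assumption, not the authors' method:

```python
import numpy as np

def box_blur(img, k=5):
    """Separable k-by-k box filter; a crude structure extractor."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    ker = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, ker, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, ker, mode="valid"), 0, tmp)

def denoise_structure_texture(noisy, k=5, thresh=0.15):
    structure = box_blur(noisy, k)
    texture = noisy - structure          # most noise lives here
    # soft-threshold only the texture component
    texture = np.sign(texture) * np.maximum(np.abs(texture) - thresh, 0.0)
    return structure + texture

rng = np.random.default_rng(1)
clean = np.full((64, 64), 0.5)
noisy = clean + 0.05 * rng.standard_normal((64, 64))
out = denoise_structure_texture(noisy)
```

Because the structure component passes through untouched, only high-frequency residual content is shrunk, which is the design intuition the abstract describes.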

6.
An image deblurring method based on TwIST-TV constraints
Restored images from traditional frequency-domain and wavelet-domain deblurring algorithms always exhibit noticeable edge ringing and blurring, while the more effective spatial-domain iterative optimization deblurring algorithms are usually slow. To address these problems, an image deblurring algorithm combining two-step iterative shrinkage/thresholding (TwIST) with a total variation (TV) constraint, TwIST-TV, is proposed. First, a TV regularization constraint on the image is added to the deblurring objective function; second, before each two-step iteration on the image's wavelet coefficients, a TV denoising constraint is applied to the image; finally, the deblurred image is obtained by iteration. Experimental results show that, compared with frequency-domain and wavelet-domain deblurring algorithms, TwIST-TV effectively suppresses edge blurring and ringing artifacts: the signal-to-noise ratio (SNR) and peak signal-to-noise ratio (PSNR) of the restored images are 1-7 dB higher and the mean structural similarity index (MSSIM) can be 0.05 higher, while the method is more than six times faster than spatial-domain deconvolution algorithms at comparable accuracy.
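The TwIST recursion itself is compact. The sketch below applies it to a toy 1-D circulant deblurring problem, with plain soft-thresholding standing in for the TV denoising sub-step; the parameter values (alpha, beta, lam) are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def twist(y, A, lam=0.001, alpha=1.2, beta=1.0, iters=200):
    """Two-step iterative shrinkage/thresholding for y = A x + n.
    Soft-thresholding stands in for the TV denoising sub-step."""
    x_prev = x = A.T @ y
    for _ in range(iters):
        grad_step = x + A.T @ (y - A @ x)          # gradient step on the data term
        x_new = ((1 - alpha) * x_prev              # two-step (second-order) update
                 + (alpha - beta) * x
                 + beta * soft(grad_step, lam))
        x_prev, x = x, x_new
    return x

# toy problem: mild circulant blur of a sparse 1-D signal
N = 32
kernel = np.zeros(N)
kernel[[0, 1, N - 1]] = [0.8, 0.1, 0.1]            # well-conditioned blur
A = np.stack([np.roll(kernel, i) for i in range(N)])
x_true = np.zeros(N)
x_true[[5, 12, 25]] = [1.5, -2.0, 1.0]
x_hat = twist(A @ x_true, A)
```

The two-step update reuses the previous iterate, which is what gives TwIST its speed advantage over one-step shrinkage schemes on ill-conditioned blurs.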

7.
Based on the principle of the Gyrator transform, this paper first studies its self-imaging effect, i.e., the Talbot effect, and discusses the conditions for realizing it. It is pointed out that the Talbot effect of the Gyrator transform differs from the conventional Talbot effect: the Talbot angle is not fixed, the distribution of the different Talbot angles is not linear, and the fractional Talbot effect of the Gyrator transform cannot be obtained by fractionalizing the Talbot angle. Second, image denoising with the Gyrator transform is discussed; it is found that the Gyrator transform removes not only hyperbolic noise well, but also noise carrying hyperbolic signals. Finally, the limitations of this study and future work are noted. This work deepens the understanding of the Gyrator transform and advances image self-imaging and denoising techniques.

8.
9.
Wang Jiaye, Li Yixuan, Zhang Yuzhen. Infrared and Laser Engineering, 2022, 51(2): 20220006-1-20220006-10
Fringe-projection-based 3D shape measurement is widely used in industrial manufacturing, quality inspection, biomedicine, aerospace, and other fields. In high-speed measurement scenarios, however, the short exposure time during fringe image acquisition means the 3D reconstruction is usually corrupted by severe image noise. In recent years, deep learning has been widely applied in computer vision and achieved great success. Inspired by this, a learning-based fringe-image noise suppression method is proposed. First, a U-net-based convolutional neural network is constructed. During training, the network learns the mapping from noisy fringe images to the corresponding high-quality wrapped phase. Once properly trained, the network can accurately recover phase information from noisy fringe images. Experimental results show that, for offline 3D measurement of fast-moving scenes, the method recovers high-accuracy phase information from a single fringe image, with phase accuracy better than the traditional three-step phase-shifting method. The method offers a practical and reliable solution for improving the accuracy of 3D measurement in high-speed motion scenarios.

10.
A multi-sensor super-resolution reconstruction algorithm for infrared images is proposed. The algorithm has two key points: effectively exploiting the correlation between the two types of images, and constructing a regularization model from the infrared image's own information according to its characteristics. The phase congruency algorithm is used to extract edges from the visible-light image, and this edge information weights the regularization model so as to fully exploit the correlation between the visible and infrared images. A first-order gradient sharpening operator is introduced into the total generalized variation model to form a regularization model suited to infrared images. Finally, a first-order primal-dual optimization algorithm finds the optimal solution of the weighted model. Experiments show that the proposed algorithm produces reconstructions with sharp edges, effectively suppresses noise, and outperforms other algorithms in both subjective visual quality and objective metrics.

11.
Liu Yu. Video Engineering, 2013, 37(7): 114-116
Based on the method of moments, the functional equation is converted into an algebraic equation by solving the wave equation. A modified Born iterative algorithm handles the nonlinearity of the equation, and TTLS regularization ensures the stability of the solution. Experiments show that with the spatial-domain tomographic imaging method, the computational cost of the iteration is reduced, the applicability of the algorithm is broadened, and higher contrast can be reconstructed under relatively strong scattering.

12.
Multiplicative noise is often present in medical and biological imaging, such as magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET), single photon emission computed tomography (SPECT), and fluorescence microscopy. Noise reduction in medical images is a difficult task in which linear filtering algorithms usually fail. Bayesian algorithms have been used with success, but they are time-consuming and computationally demanding. In addition, the growing importance of 3-D and 4-D medical image analysis in diagnostic procedures increases the amount of data that must be processed efficiently. This paper presents a Bayesian denoising algorithm that copes with additive white Gaussian noise and with multiplicative noise described by Poisson and Rayleigh distributions. The algorithm is based on the maximum a posteriori (MAP) criterion and on edge-preserving priors that avoid distorting relevant anatomical details. The main contribution of the paper is the unification of a set of Bayesian denoising algorithms for additive and multiplicative noise under a well-known mathematical framework, the Sylvester-Lyapunov equation, developed in the context of control theory.
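For intuition, in the simplest case the abstract covers, additive white Gaussian noise with a Gaussian prior, the MAP estimate has a closed form: a precision-weighted average of the observation and the prior mean. The paper's edge-preserving priors and Poisson/Rayleigh likelihoods admit no such closed form and require iterative solvers; this toy only fixes ideas:

```python
import numpy as np

def map_gaussian(y, mu, sigma2_noise, tau2_prior):
    """Closed-form MAP estimate for y = x + n, n ~ N(0, sigma2_noise),
    with prior x ~ N(mu, tau2_prior): minimizing
    (y - x)^2 / (2 sigma2) + (x - mu)^2 / (2 tau2)
    gives a precision-weighted average of y and mu."""
    w = tau2_prior / (tau2_prior + sigma2_noise)
    return w * y + (1 - w) * mu

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 10_000)         # draws from the prior (mu=0, tau2=1)
y = x + rng.normal(0.0, 0.5, 10_000)     # noisy observations, sigma2 = 0.25
x_hat = map_gaussian(y, mu=0.0, sigma2_noise=0.25, tau2_prior=1.0)
```

Shrinking toward the prior mean trades a little bias for a large variance reduction, so the MAP estimate beats the raw observation in mean squared error.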

13.
Despite the tremendous success of wavelet-based image regularization, we still lack a comprehensive understanding of the exact factor that controls edge preservation and a principled method to determine the wavelet decomposition structure for dimensions greater than one. We address these issues from a machine learning perspective by using tree classifiers to underpin a new image regularizer that measures the complexity of an image based on the complexity of the dyadic-tree representations of its sublevel sets. By penalizing unbalanced dyadic trees less, the regularizer preserves sharp edges. The main contribution of this paper is the connection of concepts from structured dyadic-tree complexity measures, wavelet shrinkage, morphological wavelets, and smoothness regularization in Besov space into a single coherent image regularization framework. Using the new regularizer, we also provide a theoretical basis for the data-driven selection of an optimal dyadic wavelet decomposition structure. As a specific application example, we give a practical regularized image denoising algorithm that uses this regularizer and the optimal dyadic wavelet decomposition structure.

14.
Cheng Xiaodong, Jiang Zhunan. Electronic Test, 2020, (7): 29-30, 11
This paper presents a constant-current high-definition image signal processing system based on an electromagnetic noise reduction circuit. The system uses an APW7120 integrated chip as the central processing chip, an LCD or smart-terminal display as the display unit, and a signal filtering unit as an electronic filter. The system can thus filter out interference within the image signal to improve image quality. The integrated chip also simplifies the system structure and controls the electromagnetic noise the system itself generates, ensuring that image signal processing is unaffected by electromagnetic noise and improving stability and reliability. By appropriately conditioning the input current, the system avoids generating electromagnetic interference during normal operation, further improving stability.

15.
We develop a method for forming spotlight-mode synthetic aperture radar (SAR) images with enhanced features. The approach is based on a regularized reconstruction of the scattering field which combines a tomographic model of the SAR observation process with prior information regarding the nature of the features of interest. Compared to conventional SAR techniques, the method we propose produces images with increased resolution, reduced sidelobes, reduced speckle, and easier-to-segment regions. Our technique effectively deals with the complex-valued, random-phase nature of the underlying SAR reflectivities. An efficient and robust numerical solution is achieved through extensions of half-quadratic regularization methods to the complex-valued SAR problem. We demonstrate the performance of the method on synthetic and real SAR scenes.

16.
License plate image deblurring based on L0 regularization

17.
18.
Liu Peng, Xu Lei, Yang Yanni, Li Chao. Electronic Design Engineering, 2013, 21(16): 120-123
The traditional Fourier transform and the wavelet transform are compared theoretically. Because the wavelet transform adapts to the signal, it can remove the high-frequency components of real signal noise. Image denoising is implemented with the Wavelet toolbox in Matlab 6.51, achieving the desired noise reduction.

19.
Building on an overview of image denoising, this paper introduces the principles of traditional denoising and wavelet denoising, and proposes a hybrid algorithm based on threshold denoising. A noisy image is then denoised with the wavelet toolbox in MATLAB. Comparison of the experimental results shows that the new algorithm reduces noise more effectively while better preserving image detail.
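The threshold-denoising principle behind entries 18 and 19 can be sketched without a wavelet toolbox, using a one-level 2-D Haar transform; a real toolbox offers richer wavelets and multi-level decompositions, and the threshold below is an assumed value of roughly three times the detail-band noise level:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform (image sides must be even)."""
    a, d = (img[::2] + img[1::2]) / 2.0, (img[::2] - img[1::2]) / 2.0
    ll, lh = (a[:, ::2] + a[:, 1::2]) / 2.0, (a[:, ::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, ::2] + d[:, 1::2]) / 2.0, (d[:, ::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    H, W = ll.shape
    a = np.empty((H, 2 * W)); d = np.empty((H, 2 * W))
    a[:, ::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, ::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((2 * H, 2 * W))
    out[::2], out[1::2] = a + d, a - d
    return out

def wavelet_denoise(img, thresh):
    """Soft-threshold the detail bands, keep the approximation band."""
    ll, lh, hl, hh = haar2d(img)
    s = lambda c: np.sign(c) * np.maximum(np.abs(c) - thresh, 0.0)
    return ihaar2d(ll, s(lh), s(hl), s(hh))

rng = np.random.default_rng(3)
clean = np.full((64, 64), 0.5)
noisy = clean + 0.05 * rng.standard_normal((64, 64))
out = wavelet_denoise(noisy, thresh=0.075)
```

Noise spreads roughly evenly across the detail bands while image content concentrates in few large coefficients, which is why thresholding the small detail coefficients removes noise but keeps detail.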

20.
Modern Electronics Technique, 2017, (20): 98-100
Traditional image denoising techniques for multiplicative noise in laser remote sensing images have many shortcomings. Given the nonlinearity and time-varying nature of multiplicative noise, a comprehensive TV-model-based denoising method for multiplicative noise in laser remote sensing images is proposed, which denoises without damaging the basic structure of the image. The method applies an energy variation of the TV model to the nonlinear, time-varying multiplicative noise to obtain its energy limit, solves for the energy limit iteratively using a combination of regularization and boundary constraints, and smooths the multiplicative noise from the image edges inward, preserving edge information well. Experiments comparing different denoising methods show that the proposed method has good denoising performance.
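The TV machinery referenced here is easiest to see on the classic additive (ROF) model; the paper's multiplicative-noise energy and boundary-constrained iteration are more involved. Below is a sketch of Chambolle-style dual projection for ROF with periodic boundaries, an assumed stand-in rather than the paper's method:

```python
import numpy as np

def rof_denoise(f, lam=0.1, tau=0.125, iters=200):
    """Chambolle-style dual projection for the ROF model
    min_u ||u - f||^2 / 2 + lam * TV(u), periodic boundaries (sketch)."""
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(iters):
        div_p = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        g = div_p - f / lam
        gx = np.roll(g, -1, axis=1) - g      # forward differences
        gy = np.roll(g, -1, axis=0) - g
        denom = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / denom         # projected dual ascent step
        py = (py + tau * gy) / denom
    div_p = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return f - lam * div_p                   # primal solution from the dual variable

rng = np.random.default_rng(4)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0    # one sharp edge
noisy = clean + 0.1 * rng.standard_normal((64, 64))
out = rof_denoise(noisy)
```

TV regularization flattens noise in the constant regions while the bounded dual variable leaves the step edge essentially intact, the edge-preserving behavior the abstract relies on.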


Copyright©北京勤云科技发展有限公司  京ICP备09084417号