20 similar references found
1.
《Signal Processing: Image Communication》2007,22(1):86-101
This paper presents a novel scheme for simultaneous compression and denoising of images: WISDOW-Comp (Wavelet based Image and Signal Denoising via Overlapping Waves—Compression). It is based on the atomic representation of wavelet details employed in WISDOW for image denoising. However, atoms can also be used to achieve compression. In particular, the core of WISDOW-Comp consists of recovering wavelet details, i.e. atoms, by exploiting wavelet low-frequency information. Therefore, only the approximation band and the significance map of the atoms' absolute maxima have to be encoded and sent to the decoder to recover a cleaner as well as compressed version of the image under study. Experimental results show that WISDOW-Comp outperforms state-of-the-art compression-based denoisers in terms of both rate and distortion. Some technical devices are also investigated for further improving its performance.
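As a rough illustration only (not the authors' atom-recovery algorithm), the sketch below keeps the approximation band plus a small significance map of the largest-magnitude detail coefficients and reconstructs from those, using PyWavelets; the function name and the keep_ratio parameter are hypothetical choices.

```python
# Toy sketch of the "approximation band + sparse detail maxima" idea.
# Not the WISDOW-Comp atom-recovery algorithm; it only shows how an image
# survives when the decoder receives the approximation band and a small
# significance map of the largest detail coefficients.
import numpy as np
import pywt

def compress_sketch(img, wavelet="db4", levels=3, keep_ratio=0.02):
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    approx, details = coeffs[0], coeffs[1:]

    # Keep only the largest-magnitude detail coefficients; zero the rest.
    all_details = np.concatenate([np.abs(d).ravel() for lvl in details for d in lvl])
    thr = np.quantile(all_details, 1.0 - keep_ratio)
    kept = [tuple(np.where(np.abs(d) >= thr, d, 0.0) for d in lvl) for lvl in details]

    # Reconstruction acts as both the "compressed" and the "denoised" output.
    return pywt.waverec2([approx] + kept, wavelet)
```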
2.
Based on the Canny criteria and the design requirements of an optimal edge-detection filter, general rules are established for selecting the wavelet mother function used in edge detection. On this basis, a quadratic B-spline wavelet is constructed, and a new method for multiscale adaptive-threshold image edge detection based on the wavelet transform is proposed.
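A minimal illustrative sketch of multiscale adaptive-threshold edge detection with PyWavelets follows. The wavelet 'bior2.2' (a spline-family biorthogonal wavelet) merely stands in for the quadratic B-spline wavelet constructed in the paper, and the robust per-scale threshold rule is a generic assumption, not the authors' rule.

```python
# Multiscale edges from undecimated wavelet detail coefficients with a
# per-scale adaptive threshold (generic robust estimate, for illustration).
import numpy as np
import pywt

def multiscale_edges(img, wavelet="bior2.2", levels=3, k=3.0):
    # pywt.swt2 requires image side lengths divisible by 2**levels.
    edges = np.zeros_like(img, dtype=bool)
    coeffs = pywt.swt2(img, wavelet, level=levels)      # undecimated transform
    for cA, (cH, cV, cD) in coeffs:
        mag = np.sqrt(cH**2 + cV**2)                    # gradient-like modulus
        # Per-scale adaptive threshold from a robust spread estimate (MAD).
        sigma = np.median(np.abs(mag - np.median(mag))) / 0.6745
        edges |= mag > (np.median(mag) + k * sigma)
    return edges
```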
3.
《Signal Processing: Image Communication》2001,16(9):859-869
Filter bank design for wavelet compression is crucial; careful design enables superior quality for broad classes of images. The Bernstein basis for frequency-domain construction of biorthogonal nearly coiflet (BNC) wavelet bases forms a unified design framework for high-performance medium-length filters. A common filter bandwidth is characteristic of widely favoured BNC filter pairs: the classical CDF 9/7, the Villasenor 6/10, and the Villasenor 10/18. Based on this observation, we construct previously unknown BNC 17/11 and BNC 16/8 wavelet filters. Key filter-quality evaluation metrics, due to Villasenor, show that these filters are well suited to image compression. Also studied are the biorthogonal coiflet 17/11 (half-band), 18/10 and 10/6 filter pairs, which have not previously been formally evaluated for image coding. Simulation results confirm that the BNC 17/11 and BNC 16/8 wavelet bases are outstanding for compression of natural and medical images, and particularly for images with significant high-frequency detail, such as fingerprints. The BNC 17/11 pair recommends itself for international standardization for the compression of still images; the BNC 16/8 pair for high-quality compression of production-quality video. Experimental evidence suggests that biorthogonal filters achieve good compression when, subject to a filter-bandwidth constraint, the maximum number of vanishing moments is obtained for a given filter support.
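The BNC 17/11 and 16/8 filters are not bundled with common libraries, so the hedged sketch below only shows the kind of filter-pair comparison the paper performs, using PyWavelets' built-in biorthogonal pairs ('bior4.4' corresponds to the CDF 9/7 pair). Compression is emulated crudely by retaining a fixed fraction of the largest coefficients; the keep fraction and wavelet list are illustrative assumptions.

```python
# Compare biorthogonal filter pairs by emulated compression + PSNR.
import numpy as np
import pywt

def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

def emulate_compression(img, wavelet, keep=0.05, levels=4):
    arr, slices = pywt.coeffs_to_array(pywt.wavedec2(img, wavelet, level=levels))
    thr = np.quantile(np.abs(arr), 1.0 - keep)
    arr = np.where(np.abs(arr) >= thr, arr, 0.0)        # keep largest coefficients
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

# Illustrative usage (img is a grayscale array):
# for w in ("bior4.4", "bior6.8", "bior2.6"):
#     print(w, psnr(img, emulate_compression(img, w)))
```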
4.
Adaptive wavelet thresholding for image denoising and compression
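This title is widely associated with the BayesShrink adaptive subband threshold. As a hedged illustration (not a transcription of the paper), the sketch below applies the commonly cited rule T = sigma_noise^2 / sigma_signal per detail subband, with the noise level estimated from the finest diagonal subband by the median absolute deviation; the function name and parameter defaults are assumptions.

```python
# Hedged sketch of an adaptive per-subband soft threshold in the BayesShrink
# spirit: T = sigma_n**2 / sigma_x for each detail subband.
import numpy as np
import pywt

def bayes_shrink_sketch(img, wavelet="db8", levels=4):
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # finest HH subband
    out = [coeffs[0]]
    for level in coeffs[1:]:
        shrunk = []
        for d in level:
            sigma_y2 = np.mean(d**2)
            sigma_x = np.sqrt(max(sigma_y2 - sigma_n**2, 1e-12))
            t = sigma_n**2 / sigma_x                       # adaptive threshold
            shrunk.append(pywt.threshold(d, t, mode="soft"))
        out.append(tuple(shrunk))
    return pywt.waverec2(out, wavelet)
```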
5.
In this paper, we introduce a new transform for image processing based on wavelets and the lifting paradigm. The lifting steps of a one-dimensional wavelet are applied along a local orientation defined on a quincunx sampling grid. To maximize energy compaction, the orientation minimizing the prediction error is chosen adaptively. A fine-grained multiscale analysis is obtained by iterating the decomposition on the low-frequency band. In the context of image compression, the multiresolution orientation map is coded using a quadtree. The rate allocation between the orientation map and the wavelet coefficients is jointly optimized in a rate-distortion sense. For image denoising, a Markov model is used to extract the orientations from the noisy image. As long as the map is sufficiently homogeneous, interesting properties of the original wavelet, such as regularity and orthogonality, are preserved. Perfect reconstruction is ensured by the reversibility of the lifting scheme. The mutual information between the wavelet coefficients is studied and compared with that observed for a separable wavelet transform. The rate-distortion performance of this new transform is evaluated for image coding using state-of-the-art subband coders. Its performance in a denoising application is also assessed against that obtained with other transforms or denoising methods.
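The actual method lifts along locally chosen orientations on a quincunx grid, which is beyond a short snippet. The hedged 1-D sketch below only illustrates the lifting building blocks (split, predict, update) and a toy version of adaptive predictor selection; all names and the two candidate predictors are assumptions made for illustration.

```python
# 1-D illustration of lifting and of choosing the predictor that minimises
# the prediction (detail) energy, a stand-in for orientation selection.
import numpy as np

def lifting_53(x):
    """One level of the LeGall 5/3 lifting transform (1-D, even length, periodic)."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    # Predict step: estimate odd samples from neighbouring even samples.
    detail = odd - 0.5 * (even + np.roll(even, -1))
    # Update step: preserve the running average in the approximation band.
    approx = even + 0.25 * (detail + np.roll(detail, 1))
    return approx, detail

def adaptive_predict(x):
    """Pick the candidate predictor with the smaller detail energy."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    candidates = {
        "left": odd - even,
        "average": odd - 0.5 * (even + np.roll(even, -1)),
    }
    return min(candidates.items(), key=lambda kv: np.sum(kv[1] ** 2))
```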
6.
A novel encoding scheme for Daubechies wavelets is proposed. The technique eliminates the requirement to approximate the transformation matrix elements; rather, by using algebraic integers, it is possible to obtain exact representations for them. As a result, error-free calculations can be carried out up to the final reconstruction step, which provides a considerable improvement in image reconstruction accuracy.
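A hedged sketch of the underlying idea: the Daubechies-4 taps are multiples of (a + b·√3)/(4·√2), so the integer pairs (a, b) can be manipulated exactly and the irrational scale factor deferred to the final step. The class and function names below are illustrative, not from the paper.

```python
# Exact arithmetic in Z[sqrt(3)] for the unscaled Daubechies-4 taps.
from fractions import Fraction
import math

class Zsqrt3:
    """Numbers of the form a + b*sqrt(3), stored exactly."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)
    def __add__(self, o):
        return Zsqrt3(self.a + o.a, self.b + o.b)
    def __mul__(self, o):
        # (a + b*sqrt(3)) * (c + d*sqrt(3)) = (ac + 3bd) + (ad + bc)*sqrt(3)
        return Zsqrt3(self.a * o.a + 3 * self.b * o.b, self.a * o.b + self.b * o.a)
    def to_float(self):
        return float(self.a) + float(self.b) * math.sqrt(3)

# Daubechies-4 lowpass taps, exact up to the common factor 1/(4*sqrt(2)).
D4 = [Zsqrt3(1, 1), Zsqrt3(3, 1), Zsqrt3(3, -1), Zsqrt3(1, -1)]

def filter_exact(samples):
    """Convolve integer samples with the unscaled D4 taps, error-free."""
    out = []
    for n in range(len(samples) - len(D4) + 1):
        acc = Zsqrt3(0)
        for k, h in enumerate(D4):
            acc = acc + h * Zsqrt3(samples[n + k])
        out.append(acc)      # exact; apply the 1/(4*sqrt(2)) scaling only at the end
    return out
```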
7.
The aim of this paper is to examine a set of wavelet functions (wavelets) for implementation in a still-image compression system and to highlight the benefits of this transform relative to current methods. The paper discusses important features of the wavelet transform in the compression of still images, including the extent to which image quality is degraded by the process of wavelet compression and decompression. Image quality is measured objectively, using the peak signal-to-noise ratio or the picture quality scale, and subjectively, using perceived image quality. The effects of different wavelet functions, image contents and compression ratios are assessed. A comparison with a discrete-cosine-transform-based compression system is given. Our results provide a good reference for application developers choosing a wavelet compression system for their application.
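For the DCT side of such a comparison, a hedged sketch of a crude JPEG-like baseline is shown below: an 8x8 block DCT in which only low-frequency coefficients are retained. The block size, mask rule and function name are illustrative assumptions, and the result can be scored with the same PSNR metric used for the wavelet systems.

```python
# Crude block-DCT baseline: keep coefficients with index sum i + j < keep.
import numpy as np
from scipy.fft import dctn, idctn

def block_dct_compress(img, block=8, keep=3):
    h, w = (d - d % block for d in img.shape)            # crop to whole blocks
    out = np.zeros((h, w))
    mask = np.add.outer(np.arange(block), np.arange(block)) < keep
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(img[i:i+block, j:j+block].astype(float), norm="ortho")
            out[i:i+block, j:j+block] = idctn(c * mask, norm="ortho")
    return out
```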
8.
An image can be decomposed into a structural component and a geometric textural component. Based on this idea, an efficient two-layered compression algorithm using second-generation bandelets and wavelets is proposed. First, the original image is decomposed into its structural and textural components; these two components are then compressed using wavelets and second-generation bandelets, respectively. Numerical tests show that the proposed method outperforms bandelets and JPEG2000 on certain SAR scenes.
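Second-generation bandelets are well beyond a short snippet, so the toy sketch below only illustrates the two-layer split itself: a smooth structural layer plus a residual textural layer, each intended for a transform suited to it. The Gaussian low-pass split and the sigma value are stand-in assumptions, not the paper's decomposition.

```python
# Toy structure/texture split; each layer would then be coded separately
# (wavelets for structure, bandelets for texture in the paper).
import numpy as np
from scipy.ndimage import gaussian_filter

def split_structure_texture(img, sigma=3.0):
    structure = gaussian_filter(img.astype(float), sigma)   # cartoon-like part
    texture = img.astype(float) - structure                 # oscillatory residual
    return structure, texture
```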
9.
Gnutti Alessandro Guerrini Fabrizio Adami Nicola Migliorati Pierangelo Leonardi Riccardo 《Multidimensional Systems and Signal Processing》2021,32(2):791-820
In this paper, we explicitly analyze the performance effects of several orthogonal and bi-orthogonal wavelet families. For each family, we explore...
10.
An improved EMD-based image signal denoising algorithm
《现代电子技术》2016,(16):91-93
Current threshold-based algorithms tend to filter out useful components of the original image while denoising, damaging image integrity and leaving the processed image blurred. To address these problems, an EMD-SG algorithm is proposed for image denoising. The image signal is decomposed with the EMD algorithm, and a Savitzky-Golay (SG) filter is applied to the neighbourhood of each sample point; the least-squares method is used to fit the optimal value within each neighbourhood, and the image is reconstructed from the resulting IMFs. The algorithm balances denoising performance with the integrity of the image signal. Experimental results show that it offers better denoising capability than competing algorithms.
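A hedged 1-D sketch of the EMD + Savitzky-Golay idea follows, assuming the third-party PyEMD package for the decomposition and SciPy's Savitzky-Golay filter (a local least-squares polynomial fit). The paper works on image signals; a 1-D signal, and the choice of which IMFs to smooth, are illustrative assumptions.

```python
# Decompose into IMFs, smooth the noisiest (highest-frequency) IMFs with a
# Savitzky-Golay filter, and reconstruct by summing the IMFs.
import numpy as np
from scipy.signal import savgol_filter
from PyEMD import EMD          # pip install EMD-signal (assumed dependency)

def emd_sg_denoise(signal, n_noisy_imfs=2, window=11, polyorder=3):
    imfs = EMD().emd(signal)                   # rows are IMFs (plus residue)
    cleaned = imfs.copy()
    for k in range(min(n_noisy_imfs, len(imfs))):
        # SG filter = local least-squares polynomial fit in each neighbourhood.
        cleaned[k] = savgol_filter(imfs[k], window_length=window, polyorder=polyorder)
    return cleaned.sum(axis=0)                 # reconstruct from smoothed IMFs
```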
11.
Zhuoer Shi Wei G.W. Kouri D.J. Hoffman D.K. Zheng Bao 《IEEE Transactions on Image Processing》2001,10(10):1488-1508
This paper deals with the design of interpolating wavelets based on a variety of Lagrange functions, combined with novel signal processing techniques for digital imaging. Halfband Lagrange wavelets, B-spline Lagrange wavelets and Gaussian Lagrange (Lagrange distributed approximating functional (DAF)) wavelets are presented as specific examples of the generalized Lagrange wavelets. Our approach combines the perceptually dependent visual group normalization (VGN) technique and a softer logic masking (SLM) method. These are utilized to rescale the wavelet coefficients, remove perceptual redundancy and obtain good visual performance for digital image processing.
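As a hedged illustration of the halfband Lagrange (interpolating) idea only: odd samples are predicted by Lagrange interpolation of neighbouring even samples. The classic 4-point Deslauriers-Dubuc predictor [-1, 9, 9, -1]/16 is shown; the paper's VGN/SLM processing is not reproduced, and the function name is an assumption.

```python
# Detail signal = odd samples minus their Lagrange prediction from even samples.
import numpy as np

DD4 = np.array([-1.0, 9.0, 9.0, -1.0]) / 16.0    # 4-point halfband Lagrange predictor

def interpolating_detail(x):
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    pred = (DD4[0] * np.roll(even, 1) + DD4[1] * even
            + DD4[2] * np.roll(even, -1) + DD4[3] * np.roll(even, -2))
    return odd - pred        # small for smooth signals, large at singularities
```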
12.
Image compression performance of new multi-wavelets constructed using B-spline super functions is compared with that of existing multi-wavelets. First, orthogonal approximation-order-preserving pre-filters are designed, and then an extensive comparative performance analysis in image compression is carried out. The results confirm the usefulness of the super-function design criteria in image compression. The new multi-wavelets show excellent performance, outperforming most of the well-known multi-wavelets for a large number of still images at almost all compression ratios considered.
13.
This paper presents a very efficient algorithm for image denoising based on wavelets and multifractals for singularity detection. A challenge of image denoising is how to preserve the edges of an image when reducing noise. By modeling the intensity surface of a noisy image as statistically self-similar multifractal processes and taking advantage of the multiresolution analysis with wavelet transform to exploit the local statistical self-similarity at different scales, the pointwise singularity strength value characterizing the local singularity at each scale was calculated. By thresholding the singularity strength, wavelet coefficients at each scale were classified into two categories: the edge-related and regular wavelet coefficients and the irregular coefficients. The irregular coefficients were denoised using an approximate minimum mean-squared error (MMSE) estimation method, while the edge-related and regular wavelet coefficients were smoothed using the fuzzy weighted mean (FWM) filter aiming at preserving the edges and details when reducing noise. Furthermore, to make the FWM-based filtering more efficient for noise reduction at the lowest decomposition level, the MMSE-based filtering was performed as the first pass of denoising followed by performing the FWM-based filtering. Experimental results demonstrated that this algorithm could achieve both good visual quality and high PSNR for the denoised images.
14.
In this work, we propose a two-stage denoising approach, which includes generation and fusion stages. Specifically, in the generation stage, we first split the expanding path of the UNet backbone of the standard DIP (deep image prior) network into two branches, converting it into a Y-shaped network (YNet). Then we adopt the initial denoised images obtained with DAGL (dynamic attentive graph learning) and Restormer methods together with the given noisy image as the target images. Finally, we utilize the standard DIP on-line training routine to generate two complementary basic images, whose image quality is quite improved, with the help of a novel automatic iteration termination mechanism. In the fusion stage, we first split the contracting path of the standard UNet network into two branches for receiving the two basic images generated in the previous stage, and obtain a fused image as the final denoised image in a fully unsupervised manner. Extensive experimental results confirm that our method has a significant improvement over the standard DIP or other unsupervised methods, and outperforms recently proposed supervised denoising models. The noticeable performance improvement is attributed to the proposed hybrid strategy, i.e., we first adopt the supervised denoising methods to process the common content of images substantially, then utilize the unsupervised method to fine-tune the specific details. In other words, we take full advantage of the high performance of the supervised methods and the flexibility of the unsupervised methods.
15.
A novel compression algorithm for fingerprint images is introduced. Using wavelet packets and lattice vector quantization, a new vector quantization scheme based on an accurate model for the distribution of the wavelet coefficients is presented. The model is based on the generalized Gaussian distribution. We also discuss a new method for determining the largest radius of the lattice used and its scaling factor, for both uniform and piecewise-uniform pyramidal lattices. The proposed algorithms aim at achieving the best rate-distortion function by adapting to the characteristics of the subimages. In the proposed optimization algorithm, no assumptions about the lattice parameters are made, and no training and multi-quantizing are required. We also show that the wedge region problem encountered with sharply distributed random sources is resolved in the proposed algorithm. The proposed algorithms adapt to variability in input images and to specified bit rates. Compared to other available image compression algorithms, the proposed algorithms result in higher quality reconstructed images for identical bit rates.
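The lattice vector quantizer itself is beyond a short snippet, but the generalized Gaussian model can be fitted to a wavelet subband by standard moment matching: the ratio E|x| / sqrt(E[x^2]) equals Gamma(2/v) / sqrt(Gamma(1/v)·Gamma(3/v)) for shape v. The sketch below is a hedged illustration of that standard fit, not the authors' estimator; the bracketing interval is an assumption.

```python
# Fit the generalized Gaussian shape parameter of a coefficient subband.
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def ggd_shape(coeffs):
    c = np.asarray(coeffs, dtype=float).ravel()
    r = np.mean(np.abs(c)) / np.sqrt(np.mean(c**2))       # observed moment ratio

    def ratio_gap(v):
        return gamma(2.0 / v) / np.sqrt(gamma(1.0 / v) * gamma(3.0 / v)) - r

    # shape ~ 2 corresponds to a Gaussian, ~ 1 to a Laplacian.
    return brentq(ratio_gap, 0.05, 5.0)
```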
16.
Rebollo-Neira L. Constantinides A.G. Stathaki T. 《IEEE Transactions on Signal Processing》1998,46(3):587-597
A mathematical framework for data representation and noise reduction is presented in this paper. The basis of the approach lies in the use of wavelets derived from the general theory of frames to construct a subspace capable of representing the original signal while excluding the noise. The representation subspace is shown to be efficient for signal modeling and noise reduction, but it may be accompanied by an ill-conditioned inverse problem. This is examined further, and a more adequate orthonormal representation of the generated subspace is proposed, with an improvement in compression performance.
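A hedged toy sketch of the representation-subspace idea: build an overcomplete (frame-like) dictionary, inspect how ill-conditioned the associated inverse problem is, and project a noisy signal onto the span by least squares. The paper's specific frame construction and orthonormalisation are not reproduced; the dictionary layout and function name are assumptions.

```python
# Project a noisy signal onto the span of a (possibly ill-conditioned) frame.
import numpy as np

def project_onto_frame(dictionary, noisy):
    """dictionary: (n_samples, n_atoms) matrix whose columns span the subspace."""
    gram = dictionary.T @ dictionary
    cond = np.linalg.cond(gram)          # large value => ill-conditioned inverse
    # lstsq uses an SVD, the usual stabilisation when atoms are nearly dependent.
    coeffs, *_ = np.linalg.lstsq(dictionary, noisy, rcond=None)
    return dictionary @ coeffs, cond

# A better-conditioned orthonormal basis for the same span could be obtained,
# e.g., from np.linalg.qr(dictionary), in the spirit of the paper's proposal.
```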
17.
Experiments with wavelets for compression of SAR data
Werness S.A. Wei S.C. Carpinella R. 《IEEE Transactions on Geoscience and Remote Sensing》1994,32(1):197-201
Wavelet transform coding is shown to be an effective method for compression of both detected and complex synthetic aperture radar (SAR) imagery. Three different orthogonal wavelet transform filters are examined for use in SAR data compression; the performance of each filter is correlated with mathematical properties such as regularity and the number of vanishing moments. Excellent-quality reconstructions are obtained at data rates as low as 0.25 bpp for detected data and as low as 0.5 bits per element (or 2 bpp) for complex data.
18.
Fractal image denoising
Over the past decade, there has been significant interest in fractal coding for the purpose of image compression. However, applications of fractal-based coding to other aspects of image processing have received little attention. We propose a fractal-based method to enhance and restore a noisy image. If the noisy image is simply fractally coded, a significant amount of the noise is suppressed. However, one can go a step further and estimate the fractal code of the original noise-free image from that of the noisy image, based upon knowledge (or an estimate) of the variance of the noise, which is assumed to be zero-mean, stationary and Gaussian. The resulting fractal code yields a significantly enhanced and restored representation of the original noisy image. The enhancement is consistent with the human visual system: extra smoothing is performed in flat and low-activity regions, and a lower degree of smoothing is performed near high-frequency components, e.g. edges, of the image. We find that, for significant noise variance (σ ≥ 20), the fractal-based scheme yields results that are generally better than those obtained with the Lee filter, which uses a localized first-order filtering process similar to fractal schemes. We also show that the Lee filter and the fractal method are closely related.
19.
Lorenzo-Ginori J.V. Plataniotis K.N. Venetsanopoulos A.N. 《IEE Proceedings - Vision, Image and Signal Processing》2002,149(5):290-296
The problem of phase-image denoising through nonlinear (NL) filtering is addressed. There are various imaging systems in which phase information is used to generate useful imaging data; however, the presence of noise makes it difficult to obtain an adequate phase image. The authors apply NL vector filtering techniques to denoise the complex data from which the phase image is extracted. A study was carried out in which several NL filters were applied to a simulated complex image. The effects of filtering were determined through a Monte Carlo simulation in which the image was successively contaminated with six different noise models. The effectiveness of the filters was measured in terms of normalised mean square error, signal-to-noise ratio and the number of eliminated phase residues. Results indicate a significant noise reduction, especially when NL filters based on angular distances are applied to the noisy input.
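A hedged sketch of one nonlinear vector filter of the kind studied here, the vector median, applied to complex data (treated as 2-D vectors) in a sliding window before the phase is extracted. The window size and function name are illustrative assumptions; the paper's angular-distance filters are not reproduced.

```python
# Vector median filtering of complex data, then phase extraction.
import numpy as np

def vector_median_phase(z, win=3):
    """z: complex 2-D array; returns the phase of the vector-median-filtered data."""
    pad = win // 2
    zp = np.pad(z, pad, mode="edge")
    out = np.empty_like(z)
    for i in range(z.shape[0]):
        for j in range(z.shape[1]):
            w = zp[i:i + win, j:j + win].ravel()
            # Vector median: the sample minimising the total distance to all
            # other samples in the window.
            dists = np.abs(w[:, None] - w[None, :]).sum(axis=1)
            out[i, j] = w[np.argmin(dists)]
    return np.angle(out)
```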
20.
Complexity-regularized image denoising
We study a new approach to image denoising based on complexity regularization. This technique presents a flexible alternative to the more conventional l2, l1, and Besov regularization methods. Different complexity measures are considered, in particular those induced by state-of-the-art image coders. We focus on a Gaussian denoising problem and derive a connection between complexity-regularized denoising and operational rate-distortion optimization. This connection suggests the use of efficient algorithms for computing complexity-regularized estimates. Bounds on denoising performance are derived in terms of an index of resolvability that characterizes the compressibility of the true image. Comparisons with state-of-the-art denoising algorithms are given.
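A hedged toy sketch of the principle: take the number of significant quantized wavelet coefficients as a crude complexity surrogate and pick the quantization step that minimises the Lagrangian cost ||y - x||^2 + lambda * complexity. The paper's coder-induced complexity measures are far more refined; the step grid, lambda value and function name here are assumptions.

```python
# Complexity-regularised denoising with a crude codelength proxy.
import numpy as np
import pywt

def complexity_regularized(noisy, lam=50.0, wavelet="db4", levels=4):
    arr, sl = pywt.coeffs_to_array(pywt.wavedec2(noisy, wavelet, level=levels))
    best = None
    for step in (2, 4, 8, 16, 32, 64):
        q = np.round(arr / step) * step                     # uniform quantiser
        complexity = np.count_nonzero(q)                    # crude codelength proxy
        x = pywt.waverec2(pywt.array_to_coeffs(q, sl, output_format="wavedec2"), wavelet)
        x = x[:noisy.shape[0], :noisy.shape[1]]
        cost = np.sum((noisy.astype(float) - x) ** 2) + lam * complexity
        if best is None or cost < best[0]:
            best = (cost, x)
    return best[1]
```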