1.
In this paper we develop novel techniques for signal/image decomposition and reconstruction based on B-spline functions. The proposed B-spline multiscale/multiresolution representation is built from a perfect-reconstruction analysis/synthesis point of view, and the resulting B-spline analysis can be used in signal and imaging applications such as compression, prediction, and denoising. We also present a straightforward, computationally efficient approach to B-spline basis calculation that relies on matrix multiplication and avoids generating any extra basis functions. We then propose a technique for enhanced B-spline-based compression that preprocesses the image prior to the decomposition stage of any image coder; this reduces data correlation and allows for more compression, as quantified by our correlation metric. Extensive simulations on the well-known SPIHT image coder, with and without the proposed correlation-removal methodology, are presented. Finally, we apply the proposed B-spline basis to denoising and estimation, and give illustrative results that demonstrate the efficiency of the proposed approaches.
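As an illustration of the kind of basis calculation this abstract refers to, the sketch below evaluates cubic B-spline basis functions with the standard Cox-de Boor recursion on a uniform knot grid. It is a generic illustration, not the authors' matrix-multiplication scheme, and the knot grid and sampling are chosen only for the example.

```python
import numpy as np

def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value of the i-th B-spline of order k at point t."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    denom = knots[i + k - 1] - knots[i]
    if denom > 0:
        left = (t - knots[i]) / denom * bspline_basis(i, k - 1, t, knots)
    right = 0.0
    denom = knots[i + k] - knots[i + 1]
    if denom > 0:
        right = (knots[i + k] - t) / denom * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

# Cubic (order-4) B-spline basis functions sampled on a uniform knot grid.
knots = np.arange(8, dtype=float)                      # uniform knots 0..7
ts = np.linspace(0.0, 7.0, 141, endpoint=False)
B = np.array([[bspline_basis(i, 4, t, knots) for t in ts] for i in range(4)])
print(B.shape)   # (4, 141): each row samples one cubic B-spline basis function
```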
2.
A novel encoding scheme for Daubechies wavelets is proposed. The technique eliminates the requirement to approximate the transformation matrix elements; instead, by using algebraic integers, it is possible to obtain exact representations for them. As a result, error-free calculations up to the final reconstruction step can be achieved, which provides considerable improvement in image reconstruction accuracy.
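To make the algebraic-integer idea concrete, here is a toy sketch (not the paper's specific encoding scheme): the Daubechies D4 filter coefficients are, up to a common factor of 1/(4√2), elements of Z[√3], so intermediate filtering can be carried out exactly with integer pairs and the irrational scale factor applied only at the final step. The class and signal below are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Z3:
    """Exact element a + b*sqrt(3) of the ring Z[sqrt(3)] (algebraic integers)."""
    a: int
    b: int
    def __add__(self, o): return Z3(self.a + o.a, self.b + o.b)
    def __mul__(self, o): return Z3(self.a * o.a + 3 * self.b * o.b, self.a * o.b + self.b * o.a)
    def to_float(self): return self.a + self.b * 3 ** 0.5

# Daubechies D4 low-pass coefficients, exact up to the common factor 1/(4*sqrt(2)):
# (1+sqrt3, 3+sqrt3, 3-sqrt3, 1-sqrt3) / (4*sqrt(2))
H = [Z3(1, 1), Z3(3, 1), Z3(3, -1), Z3(1, -1)]

def filter_exact(x):
    """Sliding inner product of an integer signal with the unscaled D4 filter, exact in Z[sqrt(3)]."""
    out = []
    for n in range(len(x) - len(H) + 1):
        acc = Z3(0, 0)
        for k, h in enumerate(H):
            acc = acc + h * Z3(x[n + k], 0)
        out.append(acc)
    return out

y = filter_exact([1, 2, 3, 4, 5, 6])
scale = 1.0 / (4.0 * 2 ** 0.5)            # the irrational factor, applied only at the end
print([round(v.to_float() * scale, 6) for v in y])
```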
3.
The aim of this paper is to examine a set of wavelet functions (wavelets) for implementation in a still image compression system and to highlight the benefits of this transform relative to today's methods. The paper discusses important features of the wavelet transform in compression of still images, including the extent to which image quality is degraded by the process of wavelet compression and decompression. Image quality is measured objectively, using peak signal-to-noise ratio or the picture quality scale, and subjectively, using perceived image quality. The effects of different wavelet functions, image contents and compression ratios are assessed. A comparison with a discrete-cosine-transform-based compression system is given. Our results provide a good reference for application developers choosing a wavelet compression system for their application.
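For reference, the objective metric mentioned above (peak signal-to-noise ratio) is straightforward to reproduce; the sketch below assumes 8-bit images and synthetic data, and is not tied to the paper's test set.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: an 8-bit image degraded by mild quantisation noise.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
rec = np.clip(img + rng.integers(-2, 3, size=img.shape), 0, 255)
print(round(psnr(img, rec), 2), "dB")
```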
4.
Signal Processing: Image Communication, 2007, 22(1): 86-101
This paper presents a novel scheme for simultaneous compression and denoising of images: WISDOW-Comp (Wavelet based Image and Signal Denoising via Overlapping Waves - Compression). It is based on the atomic representation of wavelet details employed in WISDOW for image denoising. However, atoms can also be used to achieve compression. In particular, the core of WISDOW-Comp consists of recovering wavelet details, i.e. atoms, by exploiting wavelet low-frequency information. Therefore, only the approximation band and the significance map of the atoms' absolute maxima have to be encoded and sent to the decoder for recovering a cleaner as well as compressed version of the image under study. Experimental results show that WISDOW-Comp outperforms state-of-the-art compression-based denoisers in terms of both rate and distortion. Some technical devices for further improving its performance are also investigated.
5.
An image can be decomposed into a structural component and a geometric textural component. Based on this idea, an efficient two-layered compression algorithm is proposed, which uses second-generation bandelets and wavelets. First, the original image is decomposed into the structural component and the textural component; these two components are then compressed using wavelets and second-generation bandelets, respectively. Numerical tests show that the proposed method works better than bandelets and JPEG2000 in certain SAR scenes.
6.
Averbuch A.Z., Pevnyi A.B., Zheludev V.A. IEEE Transactions on Signal Processing, 2001, 49(11): 2682-2692
We present a new family of biorthogonal wavelet transforms and a related library of biorthogonal periodic symmetric waveforms. For the construction, we used interpolatory discrete splines, which enabled us to design a library of perfect-reconstruction filterbanks. These filterbanks are related to Butterworth filters. The construction is performed in a "lifting" manner. The difference from the conventional lifting scheme is that all the transforms are implemented in the frequency domain with the use of the fast Fourier transform (FFT). Two ways to choose the control filters are suggested. The proposed scheme is based on interpolation; as such, it involves only samples of the signal and does not require any use of quadrature formulas. The filters have a linear-phase property, and the basic waveforms are symmetric. In addition, these filters yield refined frequency resolution.
7.
Symmetric nearly shift-invariant tight frame wavelets
K-regular two-band orthogonal filterbanks have been applied to image processing. Such filters can be extended to the case of downsampling by two with more than two filters, provided they satisfy a set of conditions. This setup allows more degrees of freedom, but at the cost of higher redundancy, which depends directly on the number of wavelet filters involved. Tight frame filters allow the design of smooth scaling functions and wavelets with a limited number of coefficients. Moreover, such filters are nearly shift invariant, a desirable feature in many applications. We explore a family of symmetric tight frame finite impulse response (FIR) filters characterized by the relations H_3(z) = H_0(-z) and H_2(z) = H_1(-z). They are simple to design and exhibit a degree of near orthogonality, in addition to near shift invariance. Both properties are desirable for noise removal purposes.
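In the time domain, the two z-transform relations simply state that the third and fourth filters are (-1)^n-modulated copies of the first two. A minimal sketch, with made-up example coefficients rather than the paper's designed filters:

```python
import numpy as np

def modulate(h):
    """Coefficients of H(-z): multiply h[n] by (-1)^n."""
    return h * (-1.0) ** np.arange(len(h))

# Hypothetical example pair (h0, h1); these are not the paper's designed filters.
h0 = np.array([0.25, 0.75, 0.75, 0.25])        # crude symmetric low-pass
h1 = np.array([-0.125, -0.375, 0.375, 0.125])  # crude antisymmetric high-pass

# The structural relations H3(z) = H0(-z) and H2(z) = H1(-z) then fix the other two filters:
h3 = modulate(h0)
h2 = modulate(h1)
print(h2, h3)
```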
8.
Lixin Shen, Qiyu Sun. IEEE Transactions on Signal Processing, 2004, 52(7): 1997-2011
High-resolution images are often desired but made impossible by hardware limitations. For the high-resolution model proposed by Bose and Boo (see Int. J. Imaging Syst. Technol., vol. 9, pp. 294-304, 1998), the iterative wavelet-based algorithm has been shown to perform better than the traditional least-squares method when the resolution ratio M is two or four. In this paper, we discuss the minimally supported biorthogonal wavelet system that arises from the mathematical model of Bose and Boo and propose a wavelet-based algorithm for arbitrary resolution ratio M ≥ 2. The numerical results indicate that the algorithm based on our biorthogonal wavelet system performs better in high-resolution image reconstruction than the wavelet-based algorithm in the literature, as well as the commonly used least-squares method.
9.
Experiments with wavelets for compression of SAR data
Werness S.A., Wei S.C., Carpinella R. IEEE Transactions on Geoscience and Remote Sensing, 1994, 32(1): 197-201
Wavelet transform coding is shown to be an effective method for compression of both detected and complex synthetic aperture radar (SAR) imagery. Three different orthogonal wavelet transform filters are examined for use in SAR data compression; the performance of the filters is correlated with mathematical properties such as regularity and number of vanishing moments. Excellent-quality reconstructions are obtained at data rates as low as 0.25 bpp for detected data and as low as 0.5 bits per element (or 2 bpp) for complex data.
10.
A novel compression algorithm for fingerprint images is introduced. Using wavelet packets and lattice vector quantization, a new vector quantization scheme based on an accurate model for the distribution of the wavelet coefficients is presented. The model is based on the generalized Gaussian distribution. We also discuss a new method for determining the largest radius of the lattice used and its scaling factor, for both uniform and piecewise-uniform pyramidal lattices. The proposed algorithms aim at achieving the best rate-distortion function by adapting to the characteristics of the subimages. In the proposed optimization algorithm, no assumptions about the lattice parameters are made, and no training and multi-quantizing are required. We also show that the wedge region problem encountered with sharply distributed random sources is resolved in the proposed algorithm. The proposed algorithms adapt to variability in input images and to specified bit rates. Compared to other available image compression algorithms, the proposed algorithms result in higher quality reconstructed images for identical bit rates.
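The coefficient model referred to above is the generalized Gaussian density f(x) = β / (2αΓ(1/β)) · exp(-(|x|/α)^β). A small sketch with illustrative parameter values, not fitted to any fingerprint data:

```python
import numpy as np
from scipy.special import gamma

def ggd_pdf(x, alpha, beta):
    """Generalized Gaussian density with scale alpha and shape beta.
    beta = 2 gives a Gaussian, beta = 1 a Laplacian; small beta is sharply peaked,
    which is typical of wavelet detail coefficients."""
    coeff = beta / (2.0 * alpha * gamma(1.0 / beta))
    return coeff * np.exp(-(np.abs(x) / alpha) ** beta)

x = np.linspace(-5, 5, 11)
print(np.round(ggd_pdf(x, alpha=1.0, beta=0.8), 4))
```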
11.
Rebollo-Neira L., Constantinides A.G., Stathaki T. IEEE Transactions on Signal Processing, 1998, 46(3): 587-597
A mathematical framework for data representation and for noise reduction is presented in this paper. The basis of the approach lies in the use of wavelets derived from the general theory of frames to construct a subspace capable of representing the original signal excluding the noise. The representation subspace is shown to be efficient in signal modeling and noise reduction, but it may be accompanied by an ill-conditioned inverse problem. This is further examined, and a more adequate orthonormal representation for the generated subspace is proposed, with an improvement in compression performance.
12.
An approach to watermarking digital images using non-regular wavelets is advanced. Non-regular transforms spread the energy in the transform domain. The proposed method simultaneously increases image quality and robustness with respect to lossy compression. The approach provides robust watermarking by creating watermarked messages that exhibit energy compaction and frequency spreading. Our experimental results show that applying non-regular wavelets, instead of regular ones, yields a more robust watermarking scheme: the generated watermarked data is more immune to unintentional JPEG and JPEG2000 attacks.
13.
Group testing for image compression
This paper presents group testing for wavelets (GTW), a novel embedded-wavelet-based image compression algorithm based on the concept of group testing. We explain how group testing is a generalization of the zerotree coding technique for wavelet-transformed images. We also show that Golomb coding is equivalent to Hwang's group testing algorithm (Du and Hwang 1993). GTW is similar to SPIHT (Said and Pearlman 1996) but replaces SPIHT's significance pass with a new group-testing-based method. Although no arithmetic coding is implemented, GTW performs competitively with SPIHT's arithmetic coding variant in terms of rate-distortion performance.
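For readers unfamiliar with the coding side of the stated equivalence, a minimal Golomb encoder looks as follows. This is a generic sketch, not the GTW significance pass itself, and the parameter choice is illustrative.

```python
import math

def golomb_encode(n, m):
    """Golomb code of a non-negative integer n with parameter m >= 2:
    a unary-coded quotient followed by a truncated-binary remainder."""
    q, r = divmod(n, m)
    unary = "1" * q + "0"
    b = math.ceil(math.log2(m))
    cutoff = (1 << b) - m                 # remainders below cutoff need only b-1 bits
    if r < cutoff:
        return unary + format(r, "b").zfill(b - 1)
    return unary + format(r + cutoff, "b").zfill(b)

# Example with m = 3 (not a power of two, so the truncated-binary part matters):
for n in range(7):
    print(n, golomb_encode(n, 3))
```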
14.
Based on the Canny criteria and with reference to the design requirements of an optimal edge filter, general criteria for selecting the mother wavelet used for edge detection are established. On this basis, a quadratic B-spline wavelet is constructed, and a new wavelet-transform-based method for multiscale adaptive-threshold image edge detection is proposed.
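A rough sketch of the multiscale adaptive-threshold idea follows. It substitutes Gaussian derivative filters for the paper's quadratic B-spline wavelet and uses a simple mean-plus-k-standard-deviations threshold; both choices are assumptions made only for illustration.

```python
import numpy as np
from scipy import ndimage

def multiscale_edges(img, scales=(1.0, 2.0, 4.0), k=2.0):
    """Multiscale edge detection with a per-scale adaptive threshold.
    Gaussian smoothing stands in for the quadratic-B-spline wavelet of the paper;
    the k * std threshold rule is likewise an assumption for this sketch."""
    img = img.astype(np.float64)
    edge_maps = []
    for s in scales:
        gx = ndimage.gaussian_filter(img, s, order=(0, 1))  # d/dx of smoothed image
        gy = ndimage.gaussian_filter(img, s, order=(1, 0))  # d/dy of smoothed image
        mag = np.hypot(gx, gy)
        thresh = mag.mean() + k * mag.std()                 # adapts to each scale
        edge_maps.append(mag > thresh)
    return edge_maps

rng = np.random.default_rng(0)
test = np.zeros((64, 64))
test[:, 32:] = 255.0                                        # a vertical step edge
test += rng.normal(0, 5, test.shape)
maps = multiscale_edges(test)
print([int(m.sum()) for m in maps])
```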
15.
Chin-Chen Chang, Yu-Chiang Li, Chia-Hsuan Lin. AEUE - International Journal of Electronics and Communications, 2008, 62(2): 159-162
The techniques of progressive image transmission (PIT) divide image delivery into several phases. PIT's main objective is to efficiently and effectively provide an approximate reconstruction of the original image in each phase. This study therefore proposes the blocked wavelet progressive image transmission (BWPIT) method, based on the wavelet transform and the spatial similarity of pixels, to reduce the bit-rate and increase the image quality in the early phases of PIT. Experimental results show that the transmission bit-rate and image quality of BWPIT are significantly better than those of the bit-plane method (BPM), the improved bit-plane method (IBPM), and the wavelet-based progressive image transmission (WbPIT) method in each early phase.
16.
Optimal known pixel data for inpainting in compression codecs based on partial differential equations is real-valued and thereby expensive to store. Thus, quantisation is required for efficient encoding. In this paper, we interpret the quantisation step as a clustering problem. Due to the global impact of each known pixel and correlations between spatial and tonal data, we investigate the central question of which kind of feature vectors should be used for clustering with popular strategies such as k-means. Our findings show that the number of colours can be reduced significantly without impacting the reconstruction quality. Surprisingly, these benefits are negated by an increased coding cost in compression applications.
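A minimal sketch of the clustering step under discussion, assuming hypothetical known pixels described by (x, y, grey value) feature vectors; the joint spatial-tonal scaling shown is one arbitrary choice of the kind the paper investigates.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means (Lloyd's algorithm) on feature vectors X, one row per known pixel."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# Hypothetical known pixels: (x, y, grey value). Rescaling the tonal axis decides whether
# clustering is driven by spatial position, by tone, or by both -- the question the paper studies.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 64, size=(200, 2))
grey = rng.uniform(0, 255, size=(200, 1))
features = np.hstack([xy, grey / 255.0 * 64])   # one possible joint spatial-tonal scaling
centers, labels = kmeans(features, k=8)
print(centers.shape)   # 8 cluster centres -> 8 representative (position, grey-value) tuples
```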
17.
We propose a patch-based image compression framework inspired by inpainting techniques. The repeated patterns in an image are exploited for compression in a non-parametric manner, i.e., by directly sampling image patches and encoding the similarity between them. We show how this idea leads to an assisted inpainting method, and how the inpainting method can be integrated into a patch-based image compression framework in a rate-distortion (R-D) optimal fashion. Two specific techniques, assisted inpainting for decoding and R-D optimization for encoding by mode selection or image analysis, are presented in this paper. Experimental results show that compared with standard H.264 intra coding, our system (1) achieves up to 0.85 dB gain when optimized for objective quality and (2) saves as much as 25% bit-rate at similar subjective quality levels.
18.
Successive approximation (SA) quantization is part of many state-of-the-art image and video compression methods. We first review it, starting from the classical optimality considerations of Equitz and Cover (1991), and then proceed to the results of Mallat and Falzon (see IEEE Transactions on Signal Processing, vol. 46, no. 4, 1998) concerning low bit-rate transform coding. We then develop a general theory of SA quantization which we refer to as o-expansions. This theory explains the published results obtained by both scalar and vector SA quantization methods and indicates how further performance improvements can be obtained.
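A bare-bones successive-approximation quantizer, for orientation only (it is not the expansion framework of the paper): the threshold is halved on every pass and each significant coefficient receives one refinement step.

```python
import numpy as np

def sa_quantize(coeffs, passes=6):
    """Successive-approximation (bit-plane style) quantisation of a coefficient vector.
    Each pass halves the threshold and refines every coefficient that is still
    further than the threshold from its current approximation."""
    c = np.asarray(coeffs, dtype=np.float64)
    T = np.max(np.abs(c))                    # initial threshold
    approx = np.zeros_like(c)
    for _ in range(passes):
        T /= 2.0
        significant = np.abs(c - approx) >= T              # refinement decisions (the emitted bits)
        approx += np.where(significant, np.sign(c - approx) * T, 0.0)
    return approx

c = np.array([7.3, -2.1, 0.4, 3.9])
print(np.round(sa_quantize(c, passes=8), 3))   # converges toward the original values
```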
19.
The basic principles of stereo vision are first introduced, and stereo image compression methods are then surveyed in four categories. Two main approaches used for stereo image sequences, block-matching-based stereo image compression and object-based stereo image compression, are examined in depth. By summarizing and classifying existing results, the advantages and disadvantages of the two approaches are analyzed, and several problems that still require further research are identified, such as residual image coding, occlusion detection, and more accurate scene segmentation.
20.
Decoding algorithm for fractal image compression
A new iterative decoding method is proposed for fractal image compression. Convergence properties are provided. Experimental results show the superiority of the new method over the conventional decoding procedure.
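For context, conventional fractal decoding iterates the contractive range-domain map from an arbitrary starting signal until it settles near its fixed point. The toy 1-D sketch below illustrates that baseline only (the paper's new decoding method is not reproduced), with a made-up code book.

```python
import numpy as np

def fractal_decode(params, n=64, iters=12):
    """Conventional fractal decoding: iterate the contractive range-domain map
    from an arbitrary start until it (approximately) reaches its fixed point.
    params[j] = (d, s, o): range block j is built from downsampled domain block d,
    scaled by s (|s| < 1) and offset by o. Toy 1-D setting, range-block length 4."""
    x = np.zeros(n)
    rb, db = 4, 8                                  # range / domain block lengths
    for _ in range(iters):
        y = np.empty_like(x)
        for j, (d, s, o) in enumerate(params):
            dom = x[d * db:(d + 1) * db].reshape(rb, 2).mean(axis=1)  # downsample by 2
            y[j * rb:(j + 1) * rb] = s * dom + o
        x = y
    return x

# Hypothetical code book: 16 range blocks, each pointing at one of 8 domain blocks.
rng = np.random.default_rng(2)
params = [(int(rng.integers(0, 8)), 0.5, float(rng.uniform(0, 128))) for _ in range(16)]
print(np.round(fractal_decode(params)[:8], 2))
```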