Found 20 similar documents (search time: 0 ms)
1.
Pyramid vector quantization (PVQ) uses the lattice points of a pyramidal shape in multidimensional space as the quantizer codebook. It is a fixed-rate quantization technique that can be used for the compression of Laplacian-like sources arising from transform and subband image coding, where its performance approaches the optimal entropy-coded scalar quantizer without the necessity of variable length codes. In this paper, we investigate the use of PVQ for compressed image transmission over noisy channels, where the fixed-rate quantization reduces the susceptibility to bit-error corruption. We propose a new method of deriving the indices of the lattice points of the multidimensional pyramid and describe how these techniques can also improve the channel noise immunity of general symmetric lattice quantizers. Our new indexing scheme improves channel robustness by up to 3 dB over previous indexing methods, and can be performed with similar computational cost. The final fixed-rate coding algorithm surpasses the performance of typical Joint Photographic Experts Group (JPEG) implementations and exhibits much greater error resilience.
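As background on the fixed-rate property described above: a PVQ codebook is the set of integer lattice points on the L1-pyramid {x ∈ Z^L : |x1| + … + |xL| = K}, whose size N(L, K) obeys a standard recurrence (this is textbook PVQ background, not the paper's new indexing scheme; a minimal sketch):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def pvq_count(L, K):
    """Number of integer lattice points on the pyramid surface
    {x in Z^L : |x1| + ... + |xL| = K} (the standard N(L, K) recurrence)."""
    if K == 0:
        return 1   # only the all-zero vector
    if L == 0:
        return 0   # no coordinates left, but weight K > 0 remains
    return pvq_count(L - 1, K) + pvq_count(L, K - 1) + pvq_count(L - 1, K - 1)

# The fixed code rate is log2(N(L, K)) bits per L-dimensional vector:
rate_bits = math.log2(pvq_count(16, 10))
```

For example, pvq_count(2, 2) == 8, matching the eight points (±2, 0), (0, ±2), (±1, ±1); enumerating an index for each such point is exactly the indexing problem the paper addresses.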
2.
The authors present a very efficient minimum mean-squared error (MMSE) encoding method useful for vector quantization. Using this method results in a considerable reduction in the number of multiplications and additions. The increase in the number of comparisons is moderate, and therefore the overall saving in the number of operations is still considerable. Very little precomputation and extra storage are required.
3.
IEEE Transactions on Geoscience and Remote Sensing, 2004, 42(8): 1791-1798
A fast vector quantization algorithm for data compression of hyperspectral imagery is proposed in this paper. It makes use of the fact that in the full search of the generalized Lloyd algorithm (GLA) a training vector does not require a search to find the minimum distance partition if its distance to the partition is improved in the current iteration compared to that of the previous iteration. The proposed method has the advantage of being simple, producing a large computation time saving and yielding compression fidelity as good as the GLA. Four hyperspectral data cubes covering a wide variety of scene types were tested. The loss of spectral information due to compression was evaluated using the spectral angle mapper and a remote sensing application.
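The skip criterion stated in the abstract can be sketched as follows (a hypothetical reconstruction, not the paper's code: `fast_gla`, the variable names, and the initialization are illustrative assumptions):

```python
import numpy as np

def fast_gla(train, codebook, iters=5):
    """GLA variant sketched from the abstract: a training vector skips the
    full nearest-codeword search whenever its distance to its previously
    assigned codeword has not increased since the last iteration."""
    n = len(train)
    assign = np.zeros(n, dtype=int)
    prev_dist = np.full(n, -np.inf)          # forces a full search on pass 1
    for _ in range(iters):
        for i, x in enumerate(train):
            d_same = np.sum((x - codebook[assign[i]]) ** 2)
            if d_same <= prev_dist[i]:       # distance improved: keep partition
                prev_dist[i] = d_same
                continue
            d = np.sum((codebook - x) ** 2, axis=1)   # full-search fallback
            assign[i] = int(np.argmin(d))
            prev_dist[i] = d[assign[i]]
        for j in range(len(codebook)):       # centroid (Lloyd) update
            members = train[assign == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook, assign
```

As the abstract notes, this heuristic saves distance computations while yielding fidelity comparable to (though not bit-identical with) the full-search GLA.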
4.
Sitaram V.S., Chien-Min Huang, Israelsen P.D. IEEE Transactions on Communications, 1994, 42(11): 3027-3033
This paper discusses some algorithms to be used for the generation of an efficient and robust codebook for vector quantization (VQ). Some of the algorithms reduce the required codebook size by 4 or even 8 bits to achieve the same level of performance as some of the popular techniques. This helps in greatly reducing the complexity of codebook generation and encoding. We also present a new adaptive tree-search algorithm which improves the performance of any product VQ structure. Our results show an improvement of nearly 3 dB over the fixed-rate search algorithm at a bit rate of 0.75 b/pixel.
5.
AMR-WB is a wideband speech codec that achieves high speech quality at low bit rates, but when the AMR-WB coding algorithm is applied in domains with stringent hardware constraints, its coding complexity is comparatively high. Fixed-codebook search accounts for a large share of the computation in speech coding. This paper proposes a new fast fixed-codebook search method for the AMR-WB speech coding standard that reduces its computational load by 44.3%.
6.
Jyi-Chang Tsai, Chaur-Heh Hsieh, Te-Cheng Hsu. IEEE Transactions on Image Processing, 2000, 9(11): 1825-1836
The picture quality of conventional memory vector quantization techniques is limited by their supercodebooks. This paper presents a new dynamic finite-state vector quantization (DFSVQ) algorithm which provides better quality than the best quality that the supercodebook can offer. The new DFSVQ exploits the global interblock correlation of image blocks instead of the local correlation used in conventional DFSVQs. For an input block, we search for the closest block in the previously encoded data using the side-match technique. The closest block is then used as the prediction of the input block, or used to generate a dynamic codebook. The input block is encoded by the closest block, the dynamic codebook, or the supercodebook. Searching for the closest block in the previously encoded data is equivalent to expanding the codevector space; thus the picture quality achieved is not limited by the supercodebook. Experimental results reveal that the new DFSVQ reduces the bit rate significantly and provides better visual quality compared with the basic VQ and other DFSVQs.
7.
A new approach is presented to the digitization and compression of a class of voiceband modem signals. The approach, baseband residual vector quantization (BRVQ), relies heavily on the simple structure present in a modem signal. After the signal is converted to baseband, the magnitude sequence and the sequence of residuals obtained when the phase within each baud of the baseband signal is modeled by a straight line are separately vector quantized. Carrier frequency estimation and baud-rate classification schemes that were designed to carry out these operations are described. Experimental results show that the performance of the BRVQ system at and below 16 kb/s is better than that of a previously developed vector quantization scheme that has itself been shown to outperform traditional speech-compression techniques such as adaptive predictive coding, adaptive transform coding, and subband coding when these techniques are used to compress modem signals.
8.
A fast exhaustive search algorithm for rate-constrained motion estimation is presented. The motion vectors are selected from a search window based on a rate-distortion criterion by successively eliminating the search positions depending on the rate constraint. The estimation performance of the proposed algorithm is identical to the performance of the rate-constrained full search algorithm, with considerable reduction in computation. Simulation results indicate that the number of matching calculations decreases as the constraint on the rate increases.
9.
The fast codeword search algorithm based on the mean pyramids of codewords is currently used in image coding applications. A more efficient algorithm for fast codeword search, based on mean-variance pyramids of codewords, is presented. Experimental results confirm the effectiveness of the proposed method.
10.
Ruey-Feng Chang, Wen-Tsuen Chen, Jia-Shung Wang. IEEE Transactions on Signal Processing, 1992, 40(1): 221-225
The Linde-Buzo-Gray (LBG) algorithm is usually used to design a codebook for encoding images in vector quantization. In each iteration of this algorithm, one must search the full codebook in order to assign the training vectors to their corresponding codewords. Therefore, the LBG algorithm needs a large computation effort to obtain a good codebook from the training set. The authors propose a finite-state LBG (FSLBG) algorithm for reducing the computation time. Instead of searching the entire codebook, they search only those codewords that are close to the codeword assigned to a training vector in the previous iteration. In general, the number of these candidate codewords can be made very small without sacrificing performance. By searching only a small part of the codebook, the computation time is reduced. In experiments, the performance of the FSLBG algorithm in terms of signal-to-noise ratio is very close to that of the LBG algorithm. However, the computation time of the FSLBG algorithm is about 10% of the time required by the LBG algorithm.
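The restricted search described above can be sketched as follows (an illustrative reconstruction; `fslbg`, the neighbour count `m`, and the once-per-iteration neighbour table are our assumptions, not the paper's exact design):

```python
import numpy as np

def fslbg(train, codebook, iters=4, m=2):
    """FSLBG sketch: each training vector is matched only against the m
    codewords nearest to its previous winner, instead of the full codebook.
    The first pass is a full search to seed the assignments."""
    K = len(codebook)
    assign = np.zeros(len(train), dtype=int)
    for it in range(iters):
        # Neighbour table: for each codeword, its m closest codewords
        # (including itself), recomputed after every centroid update.
        cc = ((codebook[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        nbr = np.argsort(cc, axis=1)[:, :m]
        for i, x in enumerate(train):
            cand = nbr[assign[i]] if it > 0 else np.arange(K)
            d = ((codebook[cand] - x) ** 2).sum(-1)
            assign[i] = int(cand[np.argmin(d)])
        for j in range(K):                     # centroid update
            members = train[assign == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook, assign
```

Each inner search costs O(m) instead of O(K) distance evaluations, which is the source of the roughly 10x speedup the abstract reports.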
11.
Recently, C.D. Bei and R.M. Gray (1985) used a partial distance search algorithm that reduces the computational complexity of minimum-distortion encoding for vector quantization. The effect of ordering the codevectors on the computational complexity of the algorithm is studied. It is shown that the computational complexity of this algorithm can be reduced further by ordering the codevectors according to the sizes of their corresponding clusters.
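The partial distance search plus the cluster-size ordering studied here can be sketched as follows (`pds_encode` and its parameter names are illustrative; visiting large clusters first tends to find a strong candidate early, which tightens the abort threshold sooner):

```python
import numpy as np

def pds_encode(x, codebook, cluster_sizes):
    """Partial-distance search: accumulate the squared distance one
    dimension at a time and abort as soon as the partial sum reaches the
    best full distance found so far. Codevectors are visited in decreasing
    cluster-size order, as the abstract suggests."""
    order = np.argsort(-np.asarray(cluster_sizes))
    best, best_d = -1, float("inf")
    for j in order:
        d = 0.0
        for xi, ci in zip(x, codebook[j]):
            d += (xi - ci) ** 2
            if d >= best_d:        # partial sum already too large: abort
                break
        else:
            best, best_d = int(j), d
    return best
```

The result is identical to a full search; only the number of multiply-adds changes.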
12.
Side-match vector quantization is a finite-state technique for image coding. This research shows that side-match vector quantization is an error-propagating code, similar to a catastrophic convolutional code. Here, we propose a Viterbi-based algorithm to solve this problem. Various noise detection algorithms are integrated into the Viterbi algorithm (to yield the Viterbi-based algorithm) for much better performance. In simulations of a binary symmetric channel with random bit-error rates (BER) of 0.1%-0.01%, the Viterbi-based algorithm provides gains of 2.8-9.6 dB and 2.3-8.4 dB over the conventional side-match vector quantization decoder and the Viterbi decoder, respectively. In addition, the proposed algorithm requires far fewer computations than the Viterbi algorithm.
13.
A new diamond search algorithm for fast block-matching motion estimation
Shan Zhu, Kai-Kuang Ma. IEEE Transactions on Image Processing, 2000, 9(2): 287-290
Based on the study of motion vector distribution from several commonly used test image sequences, a new diamond search (DS) algorithm for fast block-matching motion estimation (BMME) is proposed in this paper. Simulation results demonstrate that the proposed DS algorithm greatly outperforms the well-known three-step search (TSS) algorithm. Compared with the new three-step search (NTSS) algorithm, the DS algorithm achieves close performance but requires less computation by up to 22% on average. Experimental results also show that the DS algorithm is better than the four-step search (4SS) and block-based gradient descent search (BBGDS), in terms of mean-square error performance and required number of search points.
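The DS search pattern is well documented: repeat a large diamond search pattern (LDSP) until its best point is the centre, then refine once with a small diamond (SDSP). A minimal sketch (coordinate conventions, `sad`, and the block/window sizes are our assumptions):

```python
import numpy as np

LDSP = [(0, 0), (0, -2), (0, 2), (-2, 0), (2, 0), (-1, -1), (-1, 1), (1, -1), (1, 1)]
SDSP = [(0, 0), (0, -1), (0, 1), (-1, 0), (1, 0)]

def sad(cur, ref, bx, by, dx, dy, B):
    """Sum of absolute differences for the BxB block at (bx, by) in `cur`
    against the displaced block at (bx+dx, by+dy) in `ref`."""
    x, y = bx + dx, by + dy
    if x < 0 or y < 0 or x + B > ref.shape[0] or y + B > ref.shape[1]:
        return float("inf")
    return int(np.abs(cur[bx:bx+B, by:by+B].astype(int)
                      - ref[x:x+B, y:y+B].astype(int)).sum())

def diamond_search(cur, ref, bx, by, B=8):
    """Repeat LDSP until the centre wins, then one SDSP refinement."""
    cx, cy = 0, 0
    while True:
        costs = [sad(cur, ref, bx, by, cx+dx, cy+dy, B) for dx, dy in LDSP]
        k = int(np.argmin(costs))
        if k == 0:                 # centre is best: switch to SDSP
            break
        cx, cy = cx + LDSP[k][0], cy + LDSP[k][1]
    costs = [sad(cur, ref, bx, by, cx+dx, cy+dy, B) for dx, dy in SDSP]
    k = int(np.argmin(costs))
    return cx + SDSP[k][0], cy + SDSP[k][1]
```

Because the centre is re-evaluated at every step, the matching cost never increases along the search path, which is why DS needs so few search points on typical motion fields.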
14.
A tree structure for the fast nearest-neighbour search algorithm for vector quantisation is presented. Using the new algorithm, a remarkable reduction in the number of multiplications, additions and comparisons is achieved.
15.
Hao Zhang, Yingling Peng. Electronics Letters, 1999, 35(20): 1704-1705
A new Lp-norm-based minimisation algorithm for signal parameter estimation is proposed. This method can be used to obtain accurate results without any prior knowledge of the key coefficient p or any adaptive process to find the optimum value of p. Simulation shows it to be an efficient and robust method.
16.
A novel compression algorithm for fingerprint images is introduced. Using wavelet packets and lattice vector quantization, a new vector quantization scheme based on an accurate model for the distribution of the wavelet coefficients is presented. The model is based on the generalized Gaussian distribution. We also discuss a new method for determining the largest radius of the lattice used and its scaling factor, for both uniform and piecewise-uniform pyramidal lattices. The proposed algorithms aim at achieving the best rate-distortion function by adapting to the characteristics of the subimages. In the proposed optimization algorithm, no assumptions about the lattice parameters are made, and no training and multi-quantizing are required. We also show that the wedge region problem encountered with sharply distributed random sources is resolved in the proposed algorithm. The proposed algorithms adapt to variability in input images and to specified bit rates. Compared to other available image compression algorithms, the proposed algorithms result in higher quality reconstructed images for identical bit rates.
17.
A fast algorithm for 1-norm vector median filtering
A major drawback with vector median filters is their high computational complexity. A fast algorithm is presented for the computation of the vector median operator based on 1-norm. The algorithm complexity is investigated both from a theoretical and an experimental point of view. Simulation results are shown proving the complexity reduction achieved by the novel algorithm.
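For reference, the operator being accelerated is the 1-norm vector median: the window sample whose summed L1 distance to all other samples is smallest. The brute-force O(n²) baseline below is the cost the paper's fast algorithm reduces (a sketch; `vector_median_l1` is our name):

```python
import numpy as np

def vector_median_l1(window):
    """1-norm vector median: return the sample in `window` (n vectors of
    dimension d) minimising the sum of L1 distances to all other samples.
    Brute force: all n*n pairwise distances are computed."""
    W = np.asarray(window, dtype=float)
    costs = np.abs(W[:, None, :] - W[None, :, :]).sum(axis=(1, 2))
    return W[int(np.argmin(costs))]
```

Unlike a componentwise median, the output is always one of the original input vectors, which is the property that makes vector median filters attractive for colour images.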
18.
Byung Cheol Song, Kang-Wook Chun, Jong Beom Ra. IEEE Transactions on Image Processing, 2005, 14(3): 308-311
This paper presents a fast full-search algorithm (FSA) for rate-constrained motion estimation. The proposed algorithm, which is based on the block sum pyramid frame structure, successively eliminates unnecessary search positions according to a rate-constrained criterion. This algorithm provides estimation performance identical to that of a conventional FSA with a rate constraint, while achieving a considerable reduction in computation.
19.
Chang-Hsing Lee, Ling-Hwei Chen. IEEE Transactions on Image Processing, 1997, 6(11): 1587-1591
In this correspondence, a fast approach to motion estimation is presented. The algorithm uses the block sum pyramid to eliminate unnecessary search positions. It first constructs the sum pyramid structure of a block. Successive elimination is then performed hierarchically from the top level to the bottom level of the pyramid. Many search positions can be eliminated from consideration as the best motion vector, and thus the search complexity can be reduced. The algorithm achieves the same estimation accuracy as the full-search block-matching algorithm with much less computation time.
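The elimination rests on the bound |sum(current block) − sum(candidate block)| ≤ SAD(current, candidate): a candidate whose block-sum difference already exceeds the best SAD so far cannot win and needs no SAD computation. A one-level sketch (the paper applies the same bound hierarchically over a pyramid of sub-block sums; `sea_full_search` and the search range `R` are our assumptions):

```python
import numpy as np

def sea_full_search(cur, ref, bx, by, B=8, R=4):
    """Full search over a (2R+1)^2 window with successive elimination by the
    block-sum bound. Returns the best displacement and its SAD; the winning
    SAD value is identical to an unpruned full search."""
    cblk = cur[bx:bx+B, by:by+B].astype(int)
    csum = int(cblk.sum())
    best, best_sad = (0, 0), None
    for dx in range(-R, R + 1):
        for dy in range(-R, R + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or x + B > ref.shape[0] or y + B > ref.shape[1]:
                continue
            rblk = ref[x:x+B, y:y+B].astype(int)
            # Sum bound: this candidate cannot beat the current best.
            if best_sad is not None and abs(csum - int(rblk.sum())) >= best_sad:
                continue
            s = int(np.abs(cblk - rblk).sum())
            if best_sad is None or s < best_sad:
                best, best_sad = (dx, dy), s
    return best, best_sad
```

Block sums can be shared across overlapping candidates (e.g. via an integral image), so the test itself is nearly free; the pyramid in the paper simply tightens the bound level by level before falling back to the full SAD.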
20.
As linearly constrained vector quantization (LCVQ) is efficient for block-based compression of images that require low-complexity decompression, it is a "de facto" standard for three-dimensional (3-D) graphics cards that use texture compression. Motivated by the lack of an efficient algorithm for designing LCVQ codebooks, the generalized Lloyd (1982) algorithm (GLA) for vector quantizer (VQ) codebook improvement and codebook design is extended to a new linearly constrained generalized Lloyd algorithm (LCGLA). This LCGLA improves VQ codebooks that are formed as linear combinations of a reduced set of base codewords. As such, it may find application wherever linearly constrained nearest neighbor (NN) techniques are used, that is, in a wide variety of signal compression and pattern recognition applications that require or assume distributions that are locally linearly constrained. In addition, several examples of linearly constrained codebooks that possess desirable properties such as good sphere packing, low-complexity implementation, fine resolution, and guaranteed convergence are presented. Fast NN search algorithms are discussed. A suggested initialization procedure halves the iterations to convergence when, to reduce encoding complexity, the encoder considers the improvement of only a single codebook for each block. Experimental results for image compression show that LCGLA iterations significantly improve the PSNR of standard high-quality lossy 6:1 LCVQ compressed images.