Similar Documents
20 similar documents found (search time: 15 ms)
1.
A vector quantization scheme based on the classified vector quantization (CVQ) concept, called predictive classified vector quantization (PCVQ), is presented. Unlike CVQ where the classification information has to be transmitted, PCVQ predicts it, thus saving valuable bit rate. Two classifiers, one operating in the Hadamard domain and the other in the spatial domain, were designed and tested. The classification information was predicted in the spatial domain. The PCVQ schemes achieved bit rate reductions over the CVQ ranging from 20 to 32% for two commonly used color test images while maintaining the same acceptable image quality. Bit rates of 0.70-0.93 bits per pixel (bpp) were obtained depending on the image and PCVQ scheme used.
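The key idea above, predicting the class instead of sending it, can be sketched as follows. The edge/smooth classifier, the majority-style neighbor predictor, and the tiny codebooks are all illustrative assumptions, not the paper's actual design; the point is only that the decoder can repeat the same prediction, so no classification bits need to be transmitted.

```python
# Sketch of predictive classified VQ (PCVQ). All names, the toy
# classifier, and the neighbour-based class predictor are assumptions
# for illustration only.

def classify(block):
    """Toy classifier: 'edge' if the block's dynamic range is large."""
    return "edge" if max(block) - min(block) > 64 else "smooth"

def predict_class(left_class, top_class):
    """Predict the current block's class from causal neighbours."""
    if left_class == top_class:
        return left_class
    return "smooth"  # tie-break toward the most common class

def nearest(codebook, block):
    """Full-search VQ: index of the minimum-MSE codevector."""
    def mse(c):
        return sum((a - b) ** 2 for a, b in zip(c, block))
    return min(range(len(codebook)), key=lambda i: mse(codebook[i]))

codebooks = {
    "smooth": [[10, 10, 10, 10], [40, 40, 40, 40]],
    "edge":   [[0, 0, 255, 255], [255, 255, 0, 0]],
}

block = [2, 1, 250, 251]                 # an edge-like 2x2 block
cls = predict_class("edge", "edge")      # neighbours were both edge blocks
index = nearest(codebooks[cls], block)   # only this index is transmitted
```

Because the decoder sees the same previously decoded neighbors, it reproduces `cls` on its own and looks up the same class codebook.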

2.
A predictive vector quantization (PVQ) structure is proposed, where the encoder uses a predictor based on an intrablock support region, followed by a modified vector quantizer stage. Simulation results show that a modification on a previously published PVQ system led to an improvement of 1 dB in PSNR for Lenna.
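The generic predict-then-quantize loop behind any PVQ encoder can be sketched in a few lines. The mean-of-previous-vector predictor and the four-entry residual codebook below are illustrative stand-ins, not the paper's intrablock support-region predictor.

```python
# Minimal PVQ sketch: predict the current vector from previously coded
# data, then vector-quantize only the prediction residual. Predictor and
# codebook here are toy assumptions.

def predict(prev_vector):
    """Toy predictor: the mean of the previously coded vector."""
    m = sum(prev_vector) / len(prev_vector)
    return [m] * len(prev_vector)

def quantize(codebook, residual):
    def mse(c):
        return sum((a - b) ** 2 for a, b in zip(c, residual))
    return min(range(len(codebook)), key=lambda i: mse(codebook[i]))

residual_codebook = [[0, 0], [5, -5], [-5, 5], [10, 10]]

prev = [100, 100]
current = [105, 95]
pred = predict(prev)                                  # [100.0, 100.0]
residual = [c - p for c, p in zip(current, pred)]     # [5.0, -5.0]
idx = quantize(residual_codebook, residual)           # index sent to decoder
reconstructed = [p + r for p, r in zip(pred, residual_codebook[idx])]
```

Only `idx` is transmitted; the decoder regenerates `pred` from its own copy of the previously decoded vector and adds the residual codevector back.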

3.
A new interband vector quantization of a human vision-based image representation is presented. The feature-specific vector quantizer (FVQ) is suited for data compression beyond second-order decorrelation. The scheme is derived from statistical investigations of natural images and the processing principles of biological vision systems. The initial stage of the coding algorithm is a hierarchical, orientation-selective, analytic bandpass decomposition, realized by even- and odd-symmetric filter pairs that are modeled after the simple cells of the visual cortex. The outputs of each even- and odd-symmetric filter pair are interpreted as real and imaginary parts of an analytic bandpass signal, which is transformed into a local amplitude and a local phase component according to the operation of cortical complex cells. Feature-specific multidimensional vector quantization is realized by combining the amplitude/phase samples of all orientation filters of one resolution layer. The resulting vectors are suited for a classification of the local image features with respect to their intrinsic dimensionality, and enable the exploitation of higher order statistical dependencies between the subbands. This final step is closely related to the operation of cortical hypercomplex or end-stopped cells. The codebook design is based on statistical as well as psychophysical and neurophysiological considerations, and avoids the common shortcomings of perceptually implausible mathematical error criteria. The resulting perceptual quality of compressed images is superior to that obtained with standard vector quantizers of comparable complexity.

4.
Interpolative vector quantization has been devised to alleviate both the visible block structure of coded images and the codebook-sensitivity problems produced by a simple vector quantizer. In addition, the problem of selecting color components for color picture vector quantization is discussed. Computer simulations demonstrate the success of this coding technique for color image compression at approximately 0.3 b/pel. Some background information on vector quantization is provided.
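The interpolative idea can be illustrated in one dimension: transmit a coarse, subsampled signal, interpolate it back up, and let the VQ code only the interpolation residual, which softens visible block boundaries. The linear upsampler and the signals below are toy assumptions, not the paper's scheme.

```python
# Toy 1-D illustration of the interpolative step: the residual after
# interpolation is small and smooth, so it is cheap to vector-quantize.

def upsample_2x(row):
    """Linear interpolation of a 1-D signal to roughly twice the length."""
    out = []
    for a, b in zip(row, row[1:]):
        out += [a, (a + b) / 2]
    out.append(row[-1])
    return out

coarse = [0.0, 10.0, 20.0]              # transmitted subsampled samples
fine = upsample_2x(coarse)              # decoder-side interpolation
original = [0.0, 6.0, 10.0, 14.0, 20.0]
residual = [o - f for o, f in zip(original, fine)]   # what the VQ codes
```

Note how `residual` has a much smaller dynamic range than `original`, which is what makes the second-stage VQ effective.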

5.
A novel two-dimensional subband coding technique is presented that can be applied to images as well as speech. A frequency-band decomposition of the image is carried out by means of 2D separable quadrature mirror filters, which split the image spectrum into 16 equal-rate subbands. These 16 parallel subband signals are regarded as a 16-dimensional vector source and coded as such using vector quantization. In the asymptotic case of high bit rates, a theoretical analysis yields a lower bound on the gain attainable by choosing this approach over scalar quantization of each subband with an optimal bit allocation. It is shown that vector quantization in this scheme has several advantages over coding the subbands separately. Experimental results are given, and it is shown that the scheme has a performance comparable to that of more complex coding techniques.
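The vector-formation step described above is simple to sketch: after the 16-band split, the co-located samples of all 16 subbands are stacked into one 16-dimensional vector, which is then quantized as a unit. The tiny 2x2 "subbands" below stand in for real QMF outputs and are purely illustrative.

```python
# Sketch of forming 16-dimensional vectors from 16 equal-rate subbands.
# The synthetic subband contents are assumptions for illustration.

subbands = [[[b * 10 + r * 2 + c for c in range(2)] for r in range(2)]
            for b in range(16)]        # 16 tiny 2x2 "subband images"

def form_vectors(bands):
    """One vector per spatial position, one component per subband."""
    h, w = len(bands[0]), len(bands[0][0])
    return [[band[r][c] for band in bands]
            for r in range(h) for c in range(w)]

vectors = form_vectors(subbands)       # 4 vectors, each 16-dimensional
```

Each vector captures, for one spatial location, the joint behavior of all frequency bands, which is exactly the cross-band dependency that separate scalar quantization of each subband cannot exploit.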

6.
Conventional vector quantization (VQ)-based techniques partition an image into nonoverlapping blocks that are then raster scanned and quantized. Image blocks that contain an edge result in high-frequency vectors. The coarse representation of such vectors leads to visually annoying degradations in the reconstructed image. The authors present a solution to the edge-degradation problem based on some earlier work on scan models. The approach reduces the number of vectors with abrupt intensity variations by using an appropriate scan to partition an image into vectors. They show how their techniques can be used to enhance the performance of VQ of multispectral data sets. Comparisons with standard techniques are presented, and substantial improvements are demonstrated.

7.
A vector quantization (VQ) scheme with finite memory called dynamic finite-state vector quantization (DFSVQ) is presented. The encoder consists of a large codebook, the so-called super-codebook, where for each input vector a fixed number of its codevectors are chosen to generate a much smaller codebook (sub-codebook). This sub-codebook represents the best matching codevectors that could be found in the super-codebook for encoding the current input vector. The choice of the codevectors in the sub-codebook is based on the information obtained from the previously encoded blocks, where directional conditional block probability (histogram) matrices are used in the selection of the codevectors. The index of the best matching codevector in the sub-codebook is transmitted to the receiver. An adaptive DFSVQ scheme is also proposed in which, when encoding an input vector, first the sub-codebook is searched for a matching codevector to satisfy a pre-specified waveform distortion. If such a codevector is not found in the current sub-codebook, then the whole super-codebook is checked for a better match. If a better match is found, then a signaling flag along with the corresponding index of the codevector is transmitted to the receiver. Both the DFSVQ encoder and its adaptive version are implemented. Experimental results for several monochrome images with a super-codebook size of 256 or 512 and different sub-codebook sizes are presented.
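The sub-codebook selection step can be sketched as ranking super-codebook entries by how likely the coding history makes them. The flat usage-count "history" below is a deliberate simplification standing in for the paper's directional conditional probability matrices.

```python
# Sketch of the DFSVQ sub-codebook idea: keep only the few super-codebook
# codevectors made most probable by previously coded blocks. The usage
# counts are an illustrative stand-in for conditional histograms.

def build_subcodebook(super_codebook, history_counts, size):
    """Return indices of the `size` most probable codevectors."""
    order = sorted(range(len(super_codebook)),
                   key=lambda i: history_counts[i], reverse=True)
    return order[:size]

counts = [3, 0, 7, 1]                         # hypothetical usage history
sub = build_subcodebook([[0], [1], [2], [3]], counts, size=2)
```

Searching only `sub` shrinks both the search cost and the index length (log2 of the sub-codebook size instead of the super-codebook size); the adaptive variant falls back to the full super-codebook, with a signaling flag, when no sub-codebook entry meets the distortion target.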

8.
The authors consider 2-D predictive vector quantization (PVQ) of images subject to an entropy constraint and demonstrate the substantial performance improvements over existing unconstrained approaches. They describe a simple adaptive buffer-instrumented implementation of this 2-D entropy-coded PVQ scheme which can accommodate the associated variable-length entropy coding while completely eliminating buffer overflow/underflow problems at the expense of only a slight degradation in performance. This scheme, called 2-D PVQ/AECQ (adaptive entropy-coded quantization), is shown to result in excellent rate-distortion performance and impressive quality reconstructions of real-world images. Indeed, the real-world coding results shown demonstrate little distortion at rates as low as 0.5 b/pixel.

9.
Feature predictive vector quantization of multispectral images   (total citations: 5, self-citations: 0, citations by others: 5)
A compression method for multispectral data sets is proposed where a small subset of image bands is initially vector quantized. The remaining bands are predicted from the quantized images. Two different types of predictors are examined, an affine predictor and a new nonlinear predictor. The residual (error) images are encoded at a second stage based on the magnitude of the errors. This scheme simultaneously exploits both spatial and spectral correlation inherent in multispectral images. Simulation results on an image set from the Thematic Mapper with seven spectral bands provide a comparison of the affine predictor with the nonlinear predictor. It is shown that the nonlinear predictor provides significantly improved performance compared to the affine predictor. Image compression ratios between 15 and 25 are achieved with remarkably good image quality.
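A toy version of the affine cross-band prediction makes the two-stage idea concrete: fit one band as an affine function of an already-coded band, then keep only the residual. The two-coefficient least-squares fit and the synthetic bands below are illustrative assumptions, not the paper's exact predictor.

```python
# Sketch of affine interband prediction: band2 ~= a * band1 + b, with
# only the residual left for the second encoding stage. Data and the
# scalar least-squares fit are assumptions for illustration.

def fit_affine(x, y):
    """Least-squares fit of y = a*x + b over paired pixel samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    slope = cov / var
    return slope, my - slope * mx

band1 = [10.0, 20.0, 30.0, 40.0]        # already-quantized anchor band
band2 = [25.0, 45.0, 65.0, 85.0]        # exactly 2*band1 + 5 here
a, b = fit_affine(band1, band2)
residual = [y - (a * x + b) for x, y in zip(band1, band2)]
```

When the bands are strongly correlated, as in this contrived example, the residual collapses to near zero, which is why most of the bit budget can go to the anchor bands.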

10.
This paper presents a new vector quantization technique called predictive residual vector quantization (PRVQ). It combines the concepts of predictive vector quantization (PVQ) and residual vector quantization (RVQ) to implement a high performance VQ scheme with low search complexity. The proposed PRVQ consists of a vector predictor, designed by a multilayer perceptron, and an RVQ that is designed by a multilayer competitive neural network. A major task in our proposed PRVQ design is the joint optimization of the vector predictor and the RVQ codebooks. In order to achieve this, a new design based on the neural network learning algorithm is introduced. This technique is basically a nonlinear constrained optimization where each constituent component of the PRVQ scheme is optimized by minimizing an appropriate stage error function with a constraint on the overall error. This technique makes use of a Lagrangian formulation and iteratively solves a Lagrangian error function to obtain a locally optimal solution. This approach is then compared to a jointly designed and a closed-loop design approach. In the jointly designed approach, the predictor and quantizers are jointly optimized by minimizing only the overall error. In the closed-loop design, however, a predictor is first implemented; then the stage quantizers are optimized for this predictor in a stage-by-stage fashion. Simulation results show that the proposed PRVQ scheme outperforms the equivalent RVQ (operating at the same bit rate) and the unconstrained VQ by 2 and 1.7 dB, respectively. Furthermore, the proposed PRVQ outperforms the PVQ in the rate-distortion sense with significantly lower codebook search complexity.

11.
The authors present a very efficient minimum mean-squared error (MMSE) encoding method useful for vector quantization. Using this method results in a considerable reduction in the number of multiplications and additions. The increase in the number of comparisons is moderate, and therefore the overall saving in the number of operations is still considerable. Very little precomputation and extra storage are required.
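The authors' exact method is not reproduced here, but the classic partial-distance search makes the same trade: a few extra comparisons in exchange for far fewer multiplications and additions. The running squared distance is checked against the best distance so far after every dimension, and a codevector is abandoned as soon as it can no longer win.

```python
# Partial-distance search: exact MMSE result, fewer arithmetic operations.
# Shown as an illustration of the comparisons-for-multiplications trade,
# not as the authors' specific algorithm.

def partial_distance_search(codebook, x):
    best_i, best_d = 0, float("inf")
    for i, c in enumerate(codebook):
        d = 0.0
        for a, b in zip(c, x):
            d += (a - b) ** 2
            if d >= best_d:          # cannot beat the best so far: stop
                break
        else:                        # ran to completion, so d < best_d
            best_i, best_d = i, d
    return best_i

codebook = [[0, 0, 0], [10, 10, 10], [100, 100, 100]]
winner = partial_distance_search(codebook, [9, 11, 10])
```

Because squared distances only grow as dimensions are accumulated, the early exit never changes the answer; it only skips arithmetic that a full search would have wasted.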

12.
Fast search algorithms are proposed and studied for vector quantization encoding using the K-dimensional (K-d) tree structure. Here, the emphasis is on the optimal design of the K-d tree for efficient nearest neighbor search in multidimensional space under a bucket-Voronoi intersection search framework. Efficient optimization criteria and procedures are proposed for designing the K-d tree, for the case when the test data distribution is available (as in the vector quantization application, in the form of training data) as well as for the case when the test data distribution is not available and only the Voronoi intersection information is to be used. The criteria and bucket-Voronoi intersection search procedure are studied in the context of vector quantization encoding of speech waveforms. They are empirically observed to achieve constant search complexity for O(log N) tree depths and are found to be more efficient in reducing the search complexity. A geometric interpretation is given for the maximum product criterion, explaining the reasons for its inefficiency with respect to the proposed optimization criteria.

13.
Neural networks for vector quantization of speech and images   (total citations: 6, self-citations: 0, citations by others: 6)
The use of neural networks for vector quantization (VQ) is described. The authors show how a collection of neural units can be used efficiently for VQ encoding, with the units performing the bulk of the computation in parallel, and describe two unsupervised neural network learning algorithms for training the vector quantizer. A powerful feature of the new training algorithms is that the VQ codewords are determined in an adaptive manner, compared to the popular LBG training algorithm, which requires that all the training data be processed in a batch mode. The neural network approach allows for the possibility of training the vector quantizer online, thus adapting to the changing statistics of the input data. The authors compare the neural network VQ algorithms to the LBG algorithm for encoding a large database of speech signals and for encoding images.
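The online-versus-batch contrast can be sketched with a single competitive-learning step: each incoming training vector nudges only its winning codeword toward itself, so the codebook tracks changing input statistics without the full batch passes LBG requires. The learning rate and data below are illustrative assumptions.

```python
# One competitive-learning (winner-take-all) update, the online
# alternative to LBG's batch reassign-and-recompute iteration.
# Learning rate and vectors are illustrative.

def online_update(codebook, x, lr=0.5):
    """Move the nearest codeword a fraction `lr` toward sample x."""
    def mse(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    w = min(range(len(codebook)), key=lambda i: mse(codebook[i]))
    codebook[w] = [c + lr * (a - c) for c, a in zip(codebook[w], x)]
    return w

codebook = [[0.0, 0.0], [10.0, 10.0]]
winner = online_update(codebook, [8.0, 8.0])   # second codeword wins
```

Repeated over a stream of samples (usually with a decaying `lr`), this is the basic unsupervised rule underlying neural VQ training; LBG instead freezes the codebook, classifies the entire training set, and recomputes all centroids at once.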

14.
The lossless compression of AVIRIS images by vector quantization   (total citations: 15, self-citations: 0, citations by others: 15)
The structure of hyperspectral images reveals spectral responses that would seem ideal candidates for compression by vector quantization. This paper outlines the results of an investigation of lossless vector quantization of 224-band Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images. Various vector formation techniques are identified and suitable quantization parameters are investigated. A new technique, mean-normalized vector quantization (M-NVQ), is proposed which produces compression performances approaching the theoretical minimum compressed image entropy of 5 bits/pixel. Images are compressed from original image entropies of between 8.28 and 10.89 bits/pixel to between 4.83 and 5.90 bits/pixel.
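The mean-normalization step suggested by the technique's name can be sketched as splitting each vector into its mean, coded separately, and a zero-mean shape vector for the VQ stage. The exact normalization and how the two parts are entropy-coded losslessly are details of the paper not reproduced here.

```python
# Sketch of mean-normalized vector formation (M-NVQ-style split).
# The precise normalization used in the paper is an assumption here.

def mean_normalize(v):
    """Split a vector into its mean and a zero-mean shape vector."""
    m = sum(v) / len(v)
    return m, [x - m for x in v]

mean, shape = mean_normalize([100, 102, 98, 100])   # one spectral vector
```

Removing the mean strips out broadband brightness, so the shape codebook only has to cover the much smaller variation in spectral shape.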

15.
A feature correction two-stage vector quantization (FC2VQ) algorithm was previously developed to compress gray-scale photo identification (ID) pictures. This algorithm is extended to color images in this work. Three options are compared, which apply the FC2VQ algorithm in RGB, YCbCr, and Karhunen-Loeve transform (KLT) color spaces, respectively. The RGB-FC2VQ algorithm is found to yield better image quality than KLT-FC2VQ or YCbCr-FC2VQ at similar bit rates. With the RGB-FC2VQ algorithm, a 128x128 24-b color ID image (49152 bytes) can be compressed down to about 500 bytes with satisfactory quality. When the codeword indices are further compressed losslessly using a first-order Huffman coder, this size is further reduced to about 450 bytes.

16.
This paper presents a new lossy coding scheme based on 3D wavelet transform and lattice vector quantization for volumetric medical images. The main contribution of this work is the design of a new codebook that encloses a multidimensional dead zone during the quantization step, which better accounts for correlations between neighboring voxels. Furthermore, we present an efficient rate–distortion model to simplify the bit allocation procedure for our intra-band scheme. Our algorithm has been evaluated on several CT- and MR-image volumes. At high compression ratios, we show that it can outperform the best existing methods in terms of the rate–distortion trade-off. In addition, our method better preserves details, producing reconstructed images less blurred than those of the well-known 3D SPIHT algorithm, which stands as a reference.

17.
In this paper, we propose a novel vector quantizer (VQ) in the wavelet domain for the compression of electrocardiogram (ECG) signals. A vector called a tree vector (TV) is formed first in a novel structure, where wavelet transformed (WT) coefficients in the vector are arranged in the order of a hierarchical tree. Then, the TVs extracted from various WT subbands are collected in one single codebook. This feature is an advantage over traditional WT-VQ methods, where multiple codebooks are needed and are usually designed separately because the numerical ranges of coefficient values in various WT subbands are quite different. Finally, a distortion-constrained codebook replenishment mechanism is incorporated into the VQ, where codevectors can be updated dynamically, to guarantee reliable quality of reconstructed ECG waveforms. With the proposed approach, both the visual quality and the objective quality in terms of the percent root-mean-square difference (PRD) are excellent even at very low bit rates. For the entire 48 records of Lead II ECG data in the MIT/BIH database, an average PRD of 7.3% at 146 b/s is obtained. For the same test data under consideration, the proposed method outperforms many recently published ones, including the best one known as set partitioning in hierarchical trees.

18.
A new neural network architecture is proposed for spatial domain image vector quantization (VQ). The proposed model has a multiple shell structure consisting of binary hypercube feature maps of various dimensions, which are extended forms of Kohonen's self-organizing feature maps (SOFMs). It is trained so that each shell can contain similar-feature vectors. A partial search scheme using the neighborhood relationship of hypercube feature maps can reduce the computational complexity drastically with marginal coding efficiency degradation. This feature is especially proper for vector quantization of a large block or high dimension. The proposed scheme can also provide edge preserving VQ by increasing the number of shells, because shells far from the origin are trained to contain edge block features.

19.
The authors use predictive pruned tree-structured vector quantization for the compression of medical images. Their goal is to obtain a high compression ratio without impairing the image quality, at least so far as diagnostic purposes are concerned. The authors use a priori knowledge of the class of images to be encoded to help them segment the images and thereby to reserve bits for diagnostically relevant areas. Moreover, the authors improve the quality of prediction and encoding in two additional ways: by increasing the memory of the predictor itself and by using ridge regression for prediction. The improved encoding scheme was tested via computer simulations on a set of mediastinal CT scans; results are compared with those obtained using a more conventional scheme proposed recently in the literature. There were remarkable improvements in both the prediction accuracy and the encoding quality, above and beyond what comes from the segmentation. Test images were encoded at 0.5 bit per pixel and less without any visible degradation for the diagnostically relevant region.

20.
Three fast search routines to be used in the encoding phase of vector quantization (VQ) image compression systems are presented. These routines, which are based on geometric considerations, provide the same results as an exhaustive (or full) search. Examples show that the proposed algorithms need only 3-20% of the number of mathematical operations required by a full search and fewer than 50% of the operations required by recently proposed alternatives.
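One well-known geometric pruning rule with exactly this full-search-equivalence property (shown as an illustration, not necessarily one of the authors' three routines) uses precomputed codeword norms: by the triangle inequality, | ||c|| - ||x|| | <= ||c - x||, so any codeword whose norm differs from the input's norm by at least the best distance found so far can be rejected without computing its full distance.

```python
import math

# Norm-based pruning for VQ encoding: returns the same index as an
# exhaustive search, but skips most full distance computations.
# An illustrative routine, not the paper's specific algorithms.

def fast_full_search(codebook, x):
    origin = [0.0] * len(x)
    norm_x = math.dist(x, origin)
    best_i, best_d = 0, math.dist(codebook[0], x)
    for i, c in enumerate(codebook[1:], start=1):
        # Triangle inequality: if the norm gap already exceeds the best
        # distance, c cannot be closer than the current best codeword.
        if abs(math.dist(c, origin) - norm_x) >= best_d:
            continue
        d = math.dist(c, x)
        if d < best_d:
            best_i, best_d = i, d
    return best_i

codebook = [[0.0, 0.0], [5.0, 5.0], [100.0, 100.0]]
best = fast_full_search(codebook, [4.0, 4.0])
```

In practice the codeword norms are computed once, offline, so each rejected codeword costs one subtraction and one comparison instead of a full K-dimensional distance evaluation.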


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号