Similar Documents
 20 similar documents retrieved (search time: 31 ms)
1.
Variable rate vector quantization for medical image compression   (total citations: 1; self-citations: 0; by others: 1)
Three techniques for variable-rate vector quantizer design are applied to medical images. The first two are extensions of an algorithm for optimal pruning in tree-structured classification and regression due to Breiman et al. The code design algorithms find subtrees of a given tree-structured vector quantizer (TSVQ), each one optimal in that it has the lowest average distortion of all subtrees of the TSVQ with the same or lesser average rate. Since the resulting subtrees have variable depth, natural variable-rate coders result. The third technique is a joint optimization of a vector quantizer and a noiseless variable-rate code. This technique is relatively complex but it has the potential to yield the highest performance of all three techniques.
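
The variable rate comes directly from the tree structure: once pruning leaves an unbalanced subtree, the bit path from the root to a leaf is the codeword, so deeper leaves simply cost more bits. The sketch below illustrates only that encoding step on a toy unbalanced binary tree (all node values are invented); the BFOS-style optimal pruning itself is not shown.

```python
# Minimal sketch (toy values, not the authors' design): encoding with a
# variable-depth binary TSVQ. The bit path to the chosen leaf is the channel
# codeword, so an unbalanced (pruned) tree gives a variable-rate code.
import numpy as np

class Node:
    def __init__(self, centroid, left=None, right=None):
        self.centroid = np.asarray(centroid, dtype=float)
        self.left, self.right = left, right      # left is None and right is None => leaf

def tsvq_encode(x, root):
    """Descend by the minimum-distortion rule; return (bit path, leaf reproduction vector)."""
    bits, node = [], root
    while node.left is not None:
        d_left = np.sum((x - node.left.centroid) ** 2)
        d_right = np.sum((x - node.right.centroid) ** 2)
        if d_left <= d_right:
            bits.append(0); node = node.left
        else:
            bits.append(1); node = node.right
    return bits, node.centroid

# Toy unbalanced tree: the left leaf costs 1 bit, the two right leaves cost 2 bits.
root = Node([0.0], Node([-1.0]), Node([1.0], Node([0.5]), Node([2.0])))
print(tsvq_encode(np.array([1.8]), root))        # -> ([1, 1], array([2.]))
```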

2.
3.
Gauss mixtures have gained popularity in statistics and statistical signal processing applications for a variety of reasons, including their ability to well approximate a large class of interesting densities and the availability of algorithms such as the Baum–Welch or expectation-maximization (EM) algorithm for constructing the models based on observed data. We here consider a quantization approach to Gauss mixture design based on the information theoretic view of Gaussian sources as a “worst case” for robust signal compression. Results in high-rate quantization theory suggest distortion measures suitable for Lloyd clustering of Gaussian components based on a training set of data. The approach provides a Gauss mixture model and an associated Gauss mixture vector quantizer which is locally robust. We describe the quantizer mismatch distortion and its relation to other distortion measures including the traditional squared error, the Kullback–Leibler (relative entropy) and minimum discrimination information, and the log-likelihood distortions. The resulting Lloyd clustering algorithm is demonstrated by applications to image vector quantization, texture classification, and North Atlantic pipeline image classification.
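
As a rough illustration of the clustering idea (a simplification, not the paper's quantizer-mismatch design), the sketch below runs a Lloyd-style loop in which each cell is summarized by a Gaussian component and vectors are reassigned with a log-likelihood-type distortion; the data, component count, and regularization are arbitrary choices.

```python
# Minimal sketch: Lloyd clustering of Gaussian components with a
# log-likelihood-type distortion (-log density, up to a constant).
import numpy as np

def lloyd_gauss_mixture(X, m, iters=20, reg=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    labels = rng.integers(0, m, size=n)
    for _ in range(iters):
        means, covs = [], []
        for k in range(m):
            Xk = X[labels == k]
            if len(Xk) < 2:                          # re-seed (nearly) empty cells
                Xk = X[rng.choice(n, size=5, replace=False)]
            means.append(Xk.mean(axis=0))
            covs.append(np.cov(Xk, rowvar=False).reshape(d, d) + reg * np.eye(d))
        dist = np.empty((n, m))
        for k in range(m):                           # distortion(x, k) = -log N(x; mean_k, cov_k) + const
            diff = X - means[k]
            maha = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(covs[k]), diff)
            dist[:, k] = 0.5 * (maha + np.log(np.linalg.det(covs[k])))
        labels = dist.argmin(axis=1)
    return np.array(means), np.array(covs), labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, size=(200, 2)) for c in (-2.0, 0.0, 2.0)])
means, covs, labels = lloyd_gauss_mixture(X, m=3)
print(np.round(means, 2))                            # roughly (-2,-2), (0,0), (2,2)
```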

4.
Constrained storage vector quantization (CSVQ), introduced by Chan and Gersho (1990, 1991), allows for the stagewise design of balanced tree-structured residual vector quantization codebooks with low encoding and storage complexities. On the other hand, it has been established by Makhoul et al. (1985), Riskin et al. (1991), and by Mahesh et al. (see IEEE Trans. Inform. Theory, vol.41, p.917-30, 1995) that a variable-length tree-structured vector quantizer (VLTSVQ) yields better coding performance than a balanced tree-structured vector quantizer and may even outperform a full-search vector quantizer due to the nonuniform distribution of rate among the subsets of its input space. The variable-length constrained storage tree-structured vector quantization (VLCS-TSVQ) algorithm presented in this paper uses the CSVQ concept of codebook sharing among multiple vector sources to greedily grow an unbalanced tree-structured residual vector quantizer with constrained storage. It is demonstrated by simulations on test sets from various synthetic one-dimensional (1-D) sources and real-world images that the performance of VLCS-TSVQ, whose codebook storage complexity varies linearly with rate, can come very close to the performance of the greedy growth VLTSVQ of Riskin et al. and Mahesh et al. The dramatically reduced size of the overall codebook allows the transmission of the code vector probabilities as side information for source-adaptive entropy coding.
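
A minimal sketch of greedy unbalanced tree growth under a node budget follows: at each step it splits the leaf whose two-way split buys the largest distortion decrease per added bit. The codebook sharing across multiple vector sources that defines VLCS-TSVQ, and the residual structure, are omitted; all data and parameters are illustrative.

```python
# Minimal sketch (assumptions, not the paper's VLCS-TSVQ): greedy growth of an
# unbalanced tree-structured VQ up to a leaf budget (the storage constraint).
import numpy as np

def two_means(X, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    c = X[rng.choice(len(X), size=2, replace=False)].astype(float)
    for _ in range(iters):
        lab = ((X[:, None, :] - c[None]) ** 2).sum(-1).argmin(1)
        for k in (0, 1):
            if np.any(lab == k):
                c[k] = X[lab == k].mean(0)
    return c, lab

def greedy_grow(X, max_leaves=8):
    leaves = [(X, X.mean(0))]                     # (training data, centroid) per leaf
    while len(leaves) < max_leaves:
        best, best_gain, best_split = None, -np.inf, None
        for i, (Xi, ci) in enumerate(leaves):
            if len(Xi) < 2:
                continue
            d0 = ((Xi - ci) ** 2).sum()
            c2, lab = two_means(Xi)
            d1 = sum(((Xi[lab == k] - c2[k]) ** 2).sum() for k in (0, 1))
            gain = (d0 - d1) / len(Xi)            # distortion decrease per added bit
            if gain > best_gain:
                best, best_gain, best_split = i, gain, (c2, lab)
        if best is None:
            break
        Xi, _ = leaves.pop(best)
        c2, lab = best_split
        leaves += [(Xi[lab == 0], c2[0]), (Xi[lab == 1], c2[1])]
    return [c for _, c in leaves]

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(m, 0.2, size=(300, 2)) for m in (-1.0, 0.0, 2.0)])
print(np.round(greedy_grow(X, max_leaves=4), 2))
```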

5.
Finite-state vector quantization for waveform coding   (total citations: 3; self-citations: 0; by others: 3)
A finite-state vector quantizer is a finite-state machine used for data compression: each successive source vector is encoded into a codeword using a minimum distortion rule applied to a codebook that depends on the encoder state. The current state and the selected codeword then determine the next encoder state. A finite-state vector quantizer is capable of making better use of the memory in a source than an ordinary memoryless vector quantizer of the same dimension or blocklength. Design techniques are introduced for finite-state vector quantizers that combine ad hoc algorithms with an algorithm for the design of memoryless vector quantizers. Finite-state vector quantizers are designed and simulated for Gauss-Markov sources and sampled speech data, and the resulting performance and storage requirements are compared with those of ordinary memoryless vector quantization.
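
The encode/decode loop of a finite-state VQ is compact enough to sketch. Below is a toy two-state example (codebooks and next-state rule invented for illustration, not taken from the paper) showing how the decoder tracks the encoder state from the transmitted indices alone.

```python
# Minimal sketch: finite-state VQ. The codebook used for each input vector
# depends on the current state, and the chosen index drives the shared
# next-state function, so no side information is needed to track the state.
import numpy as np

codebooks = {                       # one small codebook per state (assumed values)
    0: np.array([[-1.0], [0.0]]),   # state 0: "low" codebook
    1: np.array([[0.5], [1.5]]),    # state 1: "high" codebook
}

def next_state(state, index):       # toy next-state rule shared by encoder and decoder
    return index

def fsvq_encode(x_seq, initial_state=0):
    state, indices = initial_state, []
    for x in x_seq:
        cb = codebooks[state]
        i = int(np.argmin(np.sum((cb - x) ** 2, axis=1)))   # minimum-distortion rule
        indices.append(i)
        state = next_state(state, i)
    return indices

def fsvq_decode(indices, initial_state=0):
    state, out = initial_state, []
    for i in indices:
        out.append(codebooks[state][i])
        state = next_state(state, i)                        # decoder tracks the state
    return np.array(out)

x = np.array([[-0.9], [0.1], [1.4], [0.4]])
idx = fsvq_encode(x)
print(idx, fsvq_decode(idx).ravel())     # [0, 1, 1, 0] [-1.   0.   1.5  0.5]
```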

6.
Compressing a digital image can facilitate its transmission, storage, and processing. As radiology departments become increasingly digital, the quantities of their imaging data are forcing consideration of compression in picture archiving and communication systems (PACS) and evolving teleradiology systems. Significant compression is achievable only by lossy algorithms, which do not permit the exact recovery of the original image. This loss of information renders compression and other image processing algorithms controversial because of the potential loss of quality and consequent problems regarding liability, but the technology must be considered because the alternative is delay, damage, and loss in the communication and recall of the images. How does one decide if an image is good enough for a specific application, such as diagnosis, recall, archival, or educational use? The authors describe three approaches to the measurement of medical image quality: signal-to-noise ratio (SNR), subjective rating, and diagnostic accuracy. They compare and contrast these measures in a particular application, consider in some depth recently developed methods for determining the diagnostic accuracy of lossy compressed medical images, and examine how well easily obtainable distortion measures such as SNR predict the more expensive subjective and diagnostic ratings. The examples are of medical images compressed using predictive pruned tree-structured vector quantization, but the methods can be used for any digital image processing that produces images different from the original for evaluation.
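
Of the three quality measures, only SNR reduces to a formula; a minimal sketch of SNR and PSNR computation between an original and a degraded image follows (the test image here is random data, purely for illustration). Subjective rating and diagnostic accuracy require reader studies and cannot be computed this way.

```python
# Minimal sketch: the cheap, easily obtainable distortion measures.
import numpy as np

def snr_db(original, reconstructed):
    original = original.astype(float)
    noise = original - reconstructed.astype(float)
    return 10.0 * np.log10(np.sum(original ** 2) / np.sum(noise ** 2))

def psnr_db(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
degraded = np.clip(img + rng.normal(0, 5, size=img.shape), 0, 255)
print(round(snr_db(img, degraded), 1), round(psnr_db(img, degraded), 1))
```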

7.
This paper proposes a new algorithm for the design of a tree-structured vector quantizer that operates under storage and entropy constraints; hence, it is called the storage- and entropy-constrained tree-structured vector quantizer (SECTSVQ). The algorithm uses the tree-growing approach and designs the tree one stage/layer at a time. The constraints on the rate and storage size, i.e., the number of nodes or codewords, at each stage are specified prior to the design procedure. While growing the tree, at each stage the algorithm optimally allocates the rate and the number of nodes available for the current stage to the nodes of the previous stage using the dynamic programming technique. The nodes of the current stage are then determined based on the allocations. In addition to being useful as a tree-structured VQ, SECTSVQ is particularly suited for application in progressive transmission. Moreover, the optimal allocation technique used in the design can be effectively applied to other optimal resource allocation problems. The SECTSVQ algorithm is implemented for various sources and is shown to compare favorably with other VQs.

8.
Image segmentation using hidden Markov Gauss mixture models   (total citations: 2; self-citations: 0; by others: 2)
Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms and thereby simplify subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criterion and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameters and the hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMM provides better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMM is similar to that of the causal HMM.
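
A minimal sketch of the Gauss-mixture classification step is shown below: each feature vector is assigned to the class whose mixture gives the highest posterior log-density. The hidden Markov spatial context and the stochastic EM estimation of the paper are omitted, and the class names, mixture parameters, and priors are invented for illustration.

```python
# Minimal sketch: per-vector MAP classification with class-conditional Gauss mixtures.
import numpy as np

def log_gauss(x, mean, cov):
    d = len(mean)
    diff = x - mean
    return -0.5 * (d * np.log(2 * np.pi) + np.log(np.linalg.det(cov))
                   + diff @ np.linalg.inv(cov) @ diff)

def log_mixture(x, weights, means, covs):
    comps = [np.log(w) + log_gauss(x, m, c) for w, m, c in zip(weights, means, covs)]
    top = max(comps)
    return top + np.log(sum(np.exp(c - top) for c in comps))   # log-sum-exp

def classify(x, class_models, log_priors):
    scores = {k: lp + log_mixture(x, *class_models[k]) for k, lp in log_priors.items()}
    return max(scores, key=scores.get)

# Two hypothetical classes, each a 2-component mixture in 2-D (illustrative values).
models = {
    "man-made": ([0.5, 0.5], [np.zeros(2), np.array([2.0, 0.0])], [np.eye(2) * 0.2] * 2),
    "natural":  ([0.5, 0.5], [np.array([-2.0, 1.0]), np.array([-2.0, -1.0])], [np.eye(2) * 0.5] * 2),
}
priors = {"man-made": np.log(0.5), "natural": np.log(0.5)}
print(classify(np.array([1.8, 0.1]), models, priors))   # -> "man-made"
```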

9.
A joint source-channel hybrid digital-analog (HDA) vector quantization (VQ) system is presented. The main advantage of the new VQ-based HDA system is that it achieves excellent rate-distortion-capacity performance at the design signal-to-noise ratio (SNR) while maintaining a "graceful improvement" characteristic at higher SNRs. It is demonstrated that, within the HDA framework, the parameters of the system can be optimized using an iterative procedure similar to that of channel-optimized vector quantizer design. Comparisons are made with three purely digital systems and one purely analog system. It is found that, at high SNRs, the VQ-based HDA system is superior to the other investigated systems. At low SNRs, the performance of the new scheme can be improved using the optimization procedure and using soft decoding in the digital part of the system. These results demonstrate that the introduced scheme provides an attractive method for terrestrial broadcasting applications.
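
The hybrid split can be illustrated with a toy one-dimensional sketch, assuming an error-free digital channel for the VQ index and an AWGN channel for the analog residual; the codebook, power normalization, and receiver scaling below are simple illustrative choices, not the paper's optimized design.

```python
# Minimal sketch: the VQ index is sent digitally (assumed error-free here) and the
# quantization error is sent as a scaled analog signal, so reconstruction quality
# keeps improving as the channel SNR rises ("graceful improvement").
import numpy as np

rng = np.random.default_rng(0)
codebook = np.linspace(-3, 3, 16).reshape(-1, 1)          # crude 4-bit, 1-D VQ
x = rng.normal(size=(10000, 1))

idx = np.argmin((x - codebook.T) ** 2, axis=1)            # digital part: VQ encode
x_hat_digital = codebook[idx]
residual = x - x_hat_digital                              # analog part: send the error

for channel_snr_db in (5, 15, 25):
    gain = np.sqrt(np.mean(residual ** 2))                # power-normalize the analog signal
    tx = residual / gain
    noise_var = 10 ** (-channel_snr_db / 10)              # unit transmit power assumed
    rx = tx + rng.normal(scale=np.sqrt(noise_var), size=tx.shape)
    x_rec = x_hat_digital + gain * rx / (1 + noise_var)   # simple LMMSE-style receiver scaling
    mse = np.mean((x - x_rec) ** 2)
    print(channel_snr_db, "dB channel SNR ->", round(10 * np.log10(np.mean(x ** 2) / mse), 1), "dB SDR")
```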

10.
This paper describes the compression of grayscale medical ultrasound images using a recent compression technique, i.e., space-frequency segmentation (SFS). This method finds the rate-distortion optimal representation of an image from a large set of possible space-frequency partitions and quantizer combinations and is especially effective when the images to code are statistically inhomogeneous, which is the case for medical ultrasound images. We implemented a compression application based on this method and tested the algorithm on representative ultrasound images. The result is an effective technique that performs better than a leading wavelet-transform coding algorithm, i.e., set partitioning in hierarchical trees (SPIHT), using standard objective distortion measures. To determine the subjective qualitative performance, an expert viewer study was run by presenting ultrasound radiologists with images compressed using both SFS and SPIHT. The results confirmed the objective performance rankings. Finally, the performance sensitivity of the space-frequency codec is shown with respect to several parameters, and the characteristic space-frequency partitions found for ultrasound images are discussed.

11.
Multiresolution multiresource progressive image transmission   (total citations: 3; self-citations: 0; by others: 3)
This paper presents a new progressive image transmission (PIT) design algorithm in which the resolution and resources (rate or distortion and storage size) at each transmission stage are allowed to be prespecified. This algorithm uses the wavelet transform and tree-structured vector quantizer (TSVQ) techniques. The wavelet transform is used to obtain a pyramid-structure representation of an image. The vector quantizer technique is used to design a TSVQ for each subimage so that all the subimages that constitute the image at the current stage can be successively refined according to the resources available at that stage. The resources assigned to each subimage for the successive refinement are determined to optimize the performance at the current stage under the resource constraints. Normally, the resource constraints at each stage are determined by the specification of the transmission time or distortion for the image data and the storage complexity of the TSVQ. The resolution at each stage is determined according to the application or as part of the design process to optimize the visual effect.

12.
The authors apply a lossy compression algorithm to medical images, and quantify the quality of the images by the diagnostic performance of radiologists, as well as by traditional signal-to-noise ratios and subjective ratings. The authors' study is unlike previous studies of the effects of lossy compression in that they consider nonbinary detection tasks, simulate actual diagnostic practice instead of using paired tests or confidence rankings, use statistical methods that are more appropriate for nonbinary clinical data than are the popular receiver operating characteristic curves, and use low-complexity predictive tree-structured vector quantization for compression rather than DCT-based transform codes combined with entropy coding. The authors' diagnostic tasks are the identification of nodules (tumors) in the lungs and lymphadenopathy in the mediastinum from computerized tomography (CT) chest scans. Radiologists read both uncompressed and lossy compressed versions of images. For the image modality, compression algorithm, and diagnostic tasks the authors consider, the original 12 bit per pixel (bpp) CT image can be compressed to between 1 bpp and 2 bpp with no significant changes in diagnostic accuracy. The techniques presented here for evaluating image quality do not depend on the specific compression algorithm and are useful new methods for evaluating the benefits of any lossy image processing technique.

13.
The authors have developed a new classified vector quantizer (CVQ) using decomposition and prediction which does not need to store or transmit any side information. To obtain better quality in the compressed images, human visual perception characteristics are applied to the classification and bit allocation. This CVQ has been subjectively evaluated for a sequence of X-ray CT images and compared to a DCT coding method. Nine X-ray CT head images from three patients are compressed at 10:1 and 15:1 compression ratios and are evaluated by 13 radiologists. The evaluation data are analyzed statistically with analysis of variance and Tukey's multiple comparison. Even though there are large variations in judging image quality among readers, the proposed algorithm has shown significantly better quality than the DCT at a statistical significance level of 0.05. Only an interframe CVQ can reproduce the quality of the originals at 10:1 compression at the same significance level. While the CVQ can reproduce compressed images that are not statistically different from the originals in quality, the effect on diagnostic accuracy remains to be investigated.

14.
Vector quantization (VQ) is an efficient data compression technique for low bit rate applications. However, the major disadvantage of VQ is that its encoding complexity increases dramatically with bit rate and vector dimension. Even though one can use a modified VQ, such as the tree-structured VQ, to reduce the encoding complexity, it is practically infeasible to implement such a VQ at a high bit rate or for large vector dimensions because of the huge memory requirement for its codebook and the very large training sequence required. To overcome this difficulty, a structurally constrained VQ called the sample-adaptive product quantizer (SAPQ) has recently been proposed. We extensively study the SAPQ that is based on scalar quantizers in order to exploit the simplicity of scalar quantization. Through an asymptotic distortion result, we discuss the achievable performance and the relationship between distortion and encoding complexity. We illustrate that even when SAPQ is based on scalar quantizers, it can provide VQ-level performance. We also provide numerical results that show a 2-3 dB improvement over the Lloyd-Max (1982, 1960) quantizers for data rates above 4 b/point.
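
The scalar building block is easy to sketch: a Lloyd iteration that alternates nearest-neighbor boundaries and centroid updates on training samples. The SAPQ structure itself (several scalar quantizers with a per-vector selection) is not shown; the level count and training data below are arbitrary.

```python
# Minimal sketch: a Lloyd-style scalar quantizer designed on training samples.
import numpy as np

def lloyd_scalar(samples, levels, iters=50):
    # initialize reproduction points from the sample quantiles
    points = np.quantile(samples, (np.arange(levels) + 0.5) / levels)
    for _ in range(iters):
        edges = (points[:-1] + points[1:]) / 2          # nearest-neighbor cell boundaries
        cells = np.digitize(samples, edges)
        points = np.array([samples[cells == k].mean() if np.any(cells == k) else points[k]
                           for k in range(levels)])     # centroid condition
    return points

rng = np.random.default_rng(0)
data = rng.normal(size=50000)
print(np.round(lloyd_scalar(data, levels=4), 3))   # close to the classical 4-level points (+/-0.45, +/-1.51)
```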

15.
A finite-state vector quantizer (FSVQ) is a switched vector quantizer where the sequence of quantizers selected by the encoder can be tracked by the decoder. It can be viewed as an adaptive vector quantizer with backward estimation, a vector generalization of an AQB system. Recently a family of algorithms for the design of FSVQs for waveform coding applications has been introduced. These algorithms first design an initial set of vector quantizers together with a next-state function giving the rule by which the next quantizer is selected. The codebooks of this initial FSVQ are then iteratively improved by a natural extension of the usual memoryless vector quantizer design algorithm. The next-state function, however, is not modified from its initial form. In this paper we present two extensions of the FSVQ design algorithms. First, the algorithm for FSVQ design for waveform coders is extended to FSVQ design of linear predictive coded (LPC) speech parameter vectors using an Itakura-Saito distortion measure. Second, we introduce a new technique for the iterative improvement of the next-state function based on an algorithm from adaptive stochastic automata theory. The design algorithms are simulated for an LPC FSVQ and the results are compared with each other and with ordinary memoryless vector quantization. Several open problems suggested by the simulation results are presented.

16.
A new interband vector quantization of a human vision-based image representation is presented. The feature-specific vector quantizer (FVQ) is suited for data compression beyond second-order decorrelation. The scheme is derived from statistical investigations of natural images and the processing principles of biological vision systems. The initial stage of the coding algorithm is a hierarchical, orientation-selective, analytic bandpass decomposition, realized by even- and odd-symmetric filter pairs that are modeled after the simple cells of the visual cortex. The outputs of each even- and odd-symmetric filter pair are interpreted as the real and imaginary parts of an analytic bandpass signal, which is transformed into a local amplitude and a local phase component according to the operation of cortical complex cells. Feature-specific multidimensional vector quantization is realized by combining the amplitude/phase samples of all orientation filters of one resolution layer. The resulting vectors are suited for a classification of the local image features with respect to their intrinsic dimensionality, and enable the exploitation of higher order statistical dependencies between the subbands. This final step is closely related to the operation of cortical hypercomplex or end-stopped cells. The codebook design is based on statistical as well as psychophysical and neurophysiological considerations, and avoids the common shortcomings of perceptually implausible mathematical error criteria. The resulting perceptual quality of compressed images is superior to that obtained with standard vector quantizers of comparable complexity.
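
The amplitude/phase computation can be sketched in one dimension, assuming Gabor-like even/odd filters as stand-ins for the paper's oriented 2-D filter bank; the frequency, bandwidth, and test signal below are arbitrary.

```python
# Minimal sketch: an even/odd (cosine/sine) filter pair read as the real and
# imaginary parts of an analytic bandpass signal, giving local amplitude/phase.
import numpy as np

def gabor_pair(freq, sigma, half_width=32):
    t = np.arange(-half_width, half_width + 1)
    env = np.exp(-t ** 2 / (2 * sigma ** 2))
    even = env * np.cos(2 * np.pi * freq * t)    # even-symmetric ("simple cell") filter
    odd = env * np.sin(2 * np.pi * freq * t)     # odd-symmetric counterpart
    return even, odd

def local_amplitude_phase(signal, freq=0.1, sigma=8.0):
    even, odd = gabor_pair(freq, sigma)
    re = np.convolve(signal, even, mode="same")
    im = np.convolve(signal, odd, mode="same")
    return np.hypot(re, im), np.arctan2(im, re)  # local amplitude, local phase

n = np.arange(512)
signal = np.sin(2 * np.pi * 0.1 * n) * np.exp(-((n - 256) / 80.0) ** 2)
amp, phase = local_amplitude_phase(signal)
print(round(amp.max(), 2), round(phase[256], 2))
```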

17.
A differential index (DI) assignment scheme is proposed for an image encoding system in which a variable-length tree-structured vector quantizer (VLTSVQ) is adopted. Each source vector is quantized into a terminal node of the VLTSVQ, and each terminal node is represented as a unique binary vector. The proposed index assignment scheme exploits the correlation between neighboring blocks of the image to increase the compression ratio while maintaining image quality. Simulation results show that the proposed scheme achieves a much higher compression ratio than the conventional one and that the bit-rate reduction of the proposed scheme grows as the correlation within the image increases. The proposed encoding scheme can be effectively used to encode MR images, whose pixel values are, in general, highly correlated with those of neighboring pixels.
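
The idea can be illustrated with a toy sketch (fixed-length indices and an XOR difference are assumptions made here for illustration; the paper's mapping for variable-length node indices is more involved): when neighboring blocks fall on the same or nearby terminal nodes, the differential symbols concentrate near zero and a zero-order entropy estimate drops accordingly.

```python
# Minimal sketch: differential index coding of correlated block indices.
import numpy as np

def differential_indices(indices):
    prev, out = 0, []
    for i in indices:
        out.append(i ^ prev)     # difference w.r.t. the previously sent index
        prev = i
    return out

def zero_order_entropy(symbols):
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Highly correlated block indices (as in smooth regions of an MR image):
idx = [5, 5, 5, 5, 6, 6, 6, 6, 3, 3, 3, 3, 12, 12, 12, 12]
diff = differential_indices(idx)
print(diff)                      # mostly zeros between repeated indices
print(round(zero_order_entropy(idx), 2), "->", round(zero_order_entropy(diff), 2), "bits/symbol")
```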

18.
As linearly constrained vector quantization (LCVQ) is efficient for block-based compression of images that require low-complexity decompression, it is a “de facto” standard for three-dimensional (3-D) graphics cards that use texture compression. Motivated by the lack of an efficient algorithm for designing LCVQ codebooks, the generalized Lloyd (1982) algorithm (GLA) for vector quantizer (VQ) codebook improvement and codebook design is extended to a new linearly constrained generalized Lloyd algorithm (LCGLA). This LCGLA improves VQ codebooks that are formed as linear combinations of a reduced set of base codewords. As such, it may find application wherever linearly constrained nearest neighbor (NN) techniques are used, that is, in a wide variety of signal compression and pattern recognition applications that require or assume distributions that are locally linearly constrained. In addition, several examples of linearly constrained codebooks that possess desirable properties such as good sphere packing, low-complexity implementation, fine resolution, and guaranteed convergence are presented. Fast NN search algorithms are discussed. A suggested initialization procedure halves the number of iterations to convergence when, to reduce encoding complexity, the encoder considers the improvement of only a single codebook for each block. Experimental results for image compression show that LCGLA iterations significantly improve the PSNR of standard high-quality lossy 6:1 LCVQ compressed images.
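
For reference, a minimal sketch of the unconstrained GLA that LCGLA extends is given below; the linear constraint (codewords formed as combinations of a reduced set of base codewords) is not imposed, and the training data are synthetic.

```python
# Minimal sketch: the generalized Lloyd algorithm (GLA), i.e. alternating the
# nearest-neighbor and centroid conditions on a training set.
import numpy as np

def gla(training, codebook_size, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    codebook = training[rng.choice(len(training), codebook_size, replace=False)].astype(float)
    for _ in range(iters):
        # nearest-neighbor condition: assign each vector to its closest codeword
        d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        # centroid condition: each codeword becomes the mean of its cell
        for k in range(codebook_size):
            if np.any(assign == k):
                codebook[k] = training[assign == k].mean(0)
    return codebook

rng = np.random.default_rng(1)
blocks = np.concatenate([rng.normal(m, 0.1, size=(500, 4)) for m in (0.0, 0.5, 1.0)])
print(np.round(gla(blocks, codebook_size=3), 2))
```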

19.
Wavelet image decompositions generate a tree-structured set of coefficients, providing a hierarchical data structure for representing images. A new class of previously proposed image compression algorithms has focused on new ways of exploiting dependencies within this hierarchy of wavelet coefficients using “zero-tree” data structures. This paper presents a new framework for understanding the efficiency of one specific algorithm in this class, which we introduced previously and dubbed the space-frequency quantization (SFQ)-based coder. It describes, at a higher level, how the SFQ-based image coder of our earlier work can be construed as a simplified attempt to design a global entropy-constrained vector quantizer (ECVQ) with two noteworthy features: (i) it uses an image-sized codebook dimension (departing from conventional small-dimensional codebooks that are applied to small image blocks); and (ii) it uses an on-line image-adaptive application of constrained ECVQ (which typically uses off-line training data in its codebook design phase). The principal insight offered by the new framework is that improved performance is achieved by more accurately characterizing the joint probabilities of arbitrary sets of wavelet coefficients. We also present an empirical statistical study of the distribution of the wavelet coefficients of high-frequency bands, which are responsible for most of the performance gain of the new class of algorithms. This study verifies that the improved performance achieved by algorithms in the new class, like the SFQ-based coder, can be attributed to their being designed around one conveniently structured and efficient collection of such sets, namely, the zero-tree data structure. The results of this study further inspire the design of alternative, novel data structures based on nonlinear morphological operators.

20.
A real-time, low-power video encoder design for pyramid vector quantization (PVQ) is presented. The quantizer is estimated to dissipate only 2.1 mW for real-time video compression of images of 256 × 256 pixels at 30 frames per second in standard 0.8-micron CMOS technology with a 1.5 V supply. Applied to subband-decomposed images, the quantizer performs better than JPEG on average. We achieve this high level of power efficiency, with image quality exceeding that of variable-rate codes, through algorithmic and architectural reformulation. The PVQ encoder is well suited for wireless, portable communication applications.
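
A minimal sketch of the PVQ quantization step follows, assuming the usual pyramid codebook of integer vectors whose absolute values sum to K; the enumeration/indexing of codevectors and the low-power hardware design are not shown.

```python
# Minimal sketch: map a vector to an integer vector y with sum(|y_i|) == K
# by scaling onto the pyramid, rounding, and greedily fixing the L1 norm.
import numpy as np

def pvq_quantize(x, K):
    x = np.asarray(x, dtype=float)
    l1 = np.sum(np.abs(x))
    if l1 == 0:
        y = np.zeros(len(x), dtype=int)
        y[0] = K
        return y
    a = K * np.abs(x) / l1                     # magnitudes scaled onto the pyramid
    m = np.round(a).astype(int)
    while m.sum() > K:                         # rounded up too far: pull back the worst offender
        m[np.argmin(a - m)] -= 1
    while m.sum() < K:                         # rounded down too far: push up the best candidate
        m[np.argmax(a - m)] += 1
    sign = np.where(x < 0, -1, 1)
    return sign * m

y = pvq_quantize([0.9, -0.1, 0.35, 0.0], K=8)
print(y, np.abs(y).sum())                      # e.g. [ 5 -1  2  0] 8
```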
