Similar Articles
20 similar articles found (search time: 31 ms)
1.
At the present time, block-transform coding is probably the most popular approach to image compression. In this approach, the compressed images are decoded using only the transmitted transform data. We formulate image decoding as an image recovery problem: the decoded image is reconstructed using not only the transmitted data but also the prior knowledge that images before compression do not display between-block discontinuities. A spatially adaptive image recovery algorithm is proposed based on the theory of projections onto convex sets (POCS). Apart from the data constraint set, this algorithm uses a new constraint set that enforces between-block smoothness. The novelty of this set is that it captures both the local statistical properties of the image and human perceptual characteristics. A simplified spatially adaptive recovery algorithm is also proposed, together with an analysis of its computational complexity. Numerical experiments demonstrate that the proposed algorithms outperform both the JPEG deblocking recommendation and our previous projection-based image decoding approach.
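As a concrete illustration of the recovery loop, here is a minimal Python sketch of POCS-style decoding that alternates a projection onto a between-block-smoothness set with a projection back onto the quantization data set. The boundary-averaging smoothness projection, the helper names, and the `q_coeffs`/`qtable` layout are illustrative assumptions, not the paper's exact constraint sets.

```python
# Hedged sketch of POCS deblocking: alternate a smoothness projection with a
# projection back into the quantization cells of the transmitted data.
import numpy as np
from scipy.fft import dctn, idctn

def project_smooth(img, block=8):
    """Soften block boundaries by averaging the two boundary columns/rows."""
    out = img.copy()
    for j in range(block, img.shape[1], block):      # vertical boundaries
        out[:, j - 1] = out[:, j] = 0.5 * (out[:, j - 1] + out[:, j])
    for i in range(block, img.shape[0], block):      # horizontal boundaries
        out[i - 1, :] = out[i, :] = 0.5 * (out[i - 1, :] + out[i, :])
    return out

def project_data(img, q_coeffs, qtable, block=8):
    """Clip each block's DCT coefficients back into its quantization cell.

    q_coeffs: (H/8, W/8, 8, 8) quantized indices received by the decoder.
    """
    out = np.empty_like(img)
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            c = dctn(img[i:i+block, j:j+block], norm='ortho')
            lo = (q_coeffs[i//block, j//block] - 0.5) * qtable
            hi = (q_coeffs[i//block, j//block] + 0.5) * qtable
            out[i:i+block, j:j+block] = idctn(np.clip(c, lo, hi), norm='ortho')
    return out

def pocs_decode(decoded, q_coeffs, qtable, iters=10):
    x = decoded.astype(float)
    for _ in range(iters):
        x = project_data(project_smooth(x), q_coeffs, qtable)
    return x
```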

2.
Sequency subband feature coding based on the two-dimensional APDCSF
A new subband coding method is proposed. The method decomposes the image into subbands with a two-dimensional all-phase discrete cosine sequency filter (APDCSF). The low-frequency subband is coded by direct oblique multiple subsampling and recovered by multiple rotated interpolation based on the all-phase inverse discrete cosine sequency filter (APDICSF), while for the high-frequency subbands, automatic histogram thresholding extracts image primitives such as edges and lines. Each subband is compressed according to the features of its image primitives, and after decompression the original image is reconstructed by projections onto convex sets. The method eliminates the blocking artifacts of conventional discrete cosine transform (DCT) coding; compared with wavelet-based subband feature coding, it has lower computational complexity, a higher compression ratio, and better subjective visual quality, reaching 0.1-0.3 bpp for gray-scale images, which makes it especially suitable for low bit-rate image compression.
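The APDCSF/APDICSF filter pair is not reproduced here. As a stand-in under that assumption, the sketch below performs a simple Haar-style low/high split and then extracts edge/line primitives from the high band by automatic histogram (Otsu) thresholding, matching the abstract's feature-extraction step; `image` is assumed to be a 2-D numpy array with an even number of columns.

```python
# Illustrative subband split plus histogram-threshold feature extraction.
import numpy as np

def haar_split(img):
    lo = 0.5 * (img[:, 0::2] + img[:, 1::2])   # horizontal lowpass
    hi = 0.5 * (img[:, 0::2] - img[:, 1::2])   # horizontal highpass
    return lo, hi

def otsu_threshold(x, bins=256):
    """Automatic histogram threshold maximizing between-class variance."""
    hist, edges = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    omega = np.cumsum(p)                        # class-0 probability
    mu = np.cumsum(p * np.arange(bins))         # class-0 cumulative mean
    denom = omega * (1 - omega)
    denom[denom == 0] = np.nan
    sigma_b = (mu[-1] * omega - mu) ** 2 / denom
    return edges[np.nanargmax(sigma_b) + 1]

lo, hi = haar_split(image)
primitives = np.abs(hi) > otsu_threshold(np.abs(hi))  # edge/line map to encode
```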

3.
Image compression using binary space partitioning trees
For low bit-rate compression applications, segmentation-based coding methods generally provide higher compression ratios than traditional (e.g., transform and subband) coding approaches. In this paper, we present a new segmentation-based image coding method that divides the desired image using binary space partitioning (BSP). The BSP approach partitions the desired image recursively by arbitrarily oriented lines in a hierarchical manner; this recursive partitioning generates a binary tree, referred to as the BSP-tree representation of the image. The most critical aspect of the BSP-tree method is the criterion used to select the partitioning lines. In previous work, we developed novel methods for selecting the BSP-tree lines and showed that the BSP approach provides efficient segmentation of images. In this paper, we describe a hierarchical approach for coding the partitioning lines of the BSP-tree representation. We also show that the image signal within the different regions (resulting from the recursive partitioning) can be represented using low-order polynomials. Furthermore, we employ an optimal pruning algorithm that minimizes the distortion of the BSP-tree representation for a given bit budget. Simulation results and comparisons with other compression methods are also presented.
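A minimal sketch of the recursive partitioning follows, assuming a least-squares (region-variance) criterion over a coarse grid of candidate line angles and offsets; the paper's actual line-selection criterion is more sophisticated. `ys`, `xs`, and `vals` hold the coordinates and intensities of the pixels in the current region, and `img` is an assumed 2-D array.

```python
# BSP-tree partition sketch: pick the line that best splits the region into
# two halves each well-approximated by its mean, then recurse.
import numpy as np

def best_line(ys, xs, vals, n_angles=8, n_offsets=8):
    best, best_err = None, np.inf
    for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
        proj = xs * np.cos(theta) + ys * np.sin(theta)
        for rho in np.linspace(proj.min(), proj.max(), n_offsets + 2)[1:-1]:
            left = proj < rho
            if left.all() or not left.any():
                continue
            err = vals[left].var() * left.sum() + vals[~left].var() * (~left).sum()
            if err < best_err:
                best, best_err = (theta, rho, left), err
    return best

def bsp(ys, xs, vals, tol=25.0, max_depth=12):
    if vals.var() <= tol or max_depth == 0 or \
            (line := best_line(ys, xs, vals)) is None:
        return ('leaf', vals.mean())            # region coded by its mean
    theta, rho, left = line
    return ('node', theta, rho,
            bsp(ys[left], xs[left], vals[left], tol, max_depth - 1),
            bsp(ys[~left], xs[~left], vals[~left], tol, max_depth - 1))

ys, xs = np.indices(img.shape).reshape(2, -1)
tree = bsp(ys, xs, img.ravel().astype(float))
```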

4.
Applications of neural networks in image coding
Wang Wei, Cai Dejun. Acta Electronica Sinica, 1995, 23(7): 69-76
The resurgence of neural network research and its application to image coding have opened a new avenue for image compression. This paper reviews the neural network models and algorithms used for image coding, their performance in applications, and recent progress, and discusses several unresolved theoretical issues in the field, such as the mechanism by which neural networks achieve image data compression and the classification of neural-network-based image coding methods. Finally, directions requiring further research are outlined.

5.
A new algorithmic approach to segmentation-based image coding is proposed. It achieves a good compromise between segmentation by quadtree-based decomposition and by free region-growing in terms of time complexity and scene adaptability. Encoding recursively partitions an image into convex n-gons, 3 ≤ n ≤ 8, until the pixels in the current n-gon satisfy a uniformity criterion. The recursive partition generates a valid segmentation by aligning the polygon boundaries with image edges. This segmentation is embedded into a binary tree for compact encoding of its geometry. The compressed image is sent as a labeled, pointerless binary tree, and decoding is simply polygon filling. High compression ratios are obtained by balancing the accuracy and geometric complexity of the image segmentation, a key issue for segmentation-driven image coding that had not been addressed before. Due to its tree structure, the method is also suitable for progressive image coding.
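Since the abstract notes that decoding reduces to polygon filling, here is a minimal even-odd fill in Python. The constant gray value per region is an assumption (the paper allows low-order polynomial region models), and the vertex list is illustrative.

```python
# Decoding sketch: fill each decoded polygon with its region value.
import numpy as np

def fill_polygon(canvas, verts, value):
    """Fill a simple polygon given as a list of (x, y) vertices."""
    h, w = canvas.shape
    yy, xx = np.mgrid[0:h, 0:w]
    inside = np.zeros((h, w), dtype=bool)
    n = len(verts)
    for i in range(n):                       # even-odd ray-crossing test
        x0, y0 = verts[i]
        x1, y1 = verts[(i + 1) % n]
        crosses = (yy < y0) != (yy < y1)
        xcross = x0 + (yy - y0) * (x1 - x0) / (y1 - y0 + 1e-12)
        inside ^= crosses & (xx < xcross)
    canvas[inside] = value
    return canvas

canvas = np.zeros((64, 64))
fill_polygon(canvas, [(5, 5), (50, 10), (40, 55), (10, 40)], value=128.0)
```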

6.
Morphological operators for image and video compression
This paper deals with the use of some morphological tools for image and video coding. Mathematical morphology can be considered as a shape-oriented approach to signal processing, and some of its features make it very useful for compression. Rather than describing a coding algorithm, the purpose of this paper is to describe some morphological tools that have proved attractive for compression. Four sets of morphological transformations are presented: connected operators, the region-growing version of the watershed, the geodesic skeleton, and a morphological interpolation technique. The authors discuss their implementation, and show how they can be used for image and video segmentation, contour coding, and texture coding.
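One of the tools named above, geodesic dilation iterated to stability (morphological reconstruction), can be sketched in a few lines of Python; eroding the image first gives an opening by reconstruction, a basic connected operator that simplifies the image while preserving contours. The structuring-element size and the variable `img` are assumptions.

```python
# Geodesic dilation iterated to stability (reconstruction by dilation).
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def reconstruct(marker, mask, size=3):
    """Grow `marker` by repeated dilation without ever exceeding `mask`."""
    prev = None
    cur = np.minimum(marker, mask)
    while prev is None or not np.array_equal(cur, prev):
        prev = cur
        cur = np.minimum(grey_dilation(cur, size=(size, size)), mask)
    return cur

# Opening by reconstruction: a connected operator for image simplification.
simplified = reconstruct(grey_erosion(img, size=(5, 5)), img)
```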

7.
Wavelet image compression - the quadtree coding approach
Perfect reconstruction, quality scalability and region-of-interest coding are basic features needed for the image compression schemes used in telemedicine applications. This paper proposes a new wavelet-based embedded compression technique that efficiently exploits the intraband dependencies and uses a quadtree-based approach to encode the significance maps. The algorithm produces a losslessly compressed embedded data stream, supports quality scalability and permits region-of-interest coding. Moreover, experimental results obtained on various images show that the proposed algorithm provides competitive lossless/lossy compression results. The proposed technique is well-suited for telemedicine applications that require fast interactive handling of large image sets over networks with limited and/or variable bandwidth.
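The quadtree significance-map idea can be sketched as follows: a block emits 1 and splits if it contains a coefficient significant at the current threshold, and emits 0 otherwise. This reproduces only the spirit of the intraband quadtree coding; the actual codec's symbol set and context modelling are richer. `band` is assumed to be a square power-of-two-sized subband array.

```python
# Quadtree coding of a significance map at one threshold.
import numpy as np

def code_significance(band, y, x, size, thresh, bits):
    block = band[y:y+size, x:x+size]
    if np.abs(block).max() >= thresh:
        bits.append(1)                         # block contains significance
        if size == 1:
            bits.append(int(block[0, 0] < 0))  # sign of new significant coef
        else:
            half = size // 2
            for dy in (0, half):
                for dx in (0, half):
                    code_significance(band, y + dy, x + dx, half, thresh, bits)
    else:
        bits.append(0)                         # whole block insignificant

bits = []
code_significance(band, 0, 0, band.shape[0], thresh=32, bits=bits)
```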

8.
We consider optimal formulations of spread spectrum watermark embedding where the common requirements of watermarking, such as perceptual closeness of the watermarked image to the cover and detectability of the watermark in the presence of noise and compression, are posed as constraints while one metric pertaining to these requirements is optimized. We propose an algorithmic framework for solving these optimal embedding problems via a multistep feasibility approach that combines projections onto convex sets (POCS) based feasibility watermarking with a bisection parameter search for determining the optimum value of the objective function and the optimum watermarked image. The framework is general and can handle optimal watermark embedding problems with convex and quasi-convex formulations of watermark requirements with assured convergence to the global optimum. The proposed scheme is a natural extension of set-theoretic watermark design and provides a link between convex feasibility and optimization formulations for watermark embedding. We demonstrate a number of optimal watermark embeddings in the proposed framework corresponding to maximal robustness to additive noise, maximal robustness to compression, minimal frequency weighted perceptual distortion, and minimal watermark texture visibility. Experimental results demonstrate that the framework is effective in optimizing the desired characteristic while meeting the constraints. The results also highlight both anticipated and unanticipated competition between the common requirements for watermark embedding.
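The multistep feasibility idea can be sketched generically: bisect on the objective level t, and at each t test feasibility of the constraint sets augmented with the level set {x : f(x) ≤ t} by alternating projections. The projection operators below are placeholders standing in for the paper's watermarking constraint sets, and detecting infeasibility by non-convergence is a heuristic simplification.

```python
# Bisection over the objective combined with POCS feasibility checks.
import numpy as np

def alternating_projections(x, projections, iters=200, tol=1e-6):
    for _ in range(iters):
        x_prev = x
        for P in projections:            # cycle through the constraint sets
            x = P(x)
        if np.linalg.norm(x - x_prev) < tol:
            return x, True               # settled inside the intersection
    return x, False                      # treated as infeasible at this level

def bisection_optimize(x0, make_sets, t_lo, t_hi, tol=1e-3):
    """make_sets(t) returns the projections including the level set f(x)<=t."""
    best = None
    while t_hi - t_lo > tol:
        t = 0.5 * (t_lo + t_hi)
        x, feasible = alternating_projections(x0.copy(), make_sets(t))
        if feasible:
            best, t_hi = x, t            # tighten the objective level
        else:
            t_lo = t
    return best
```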

9.
Region-based coding schemes are among the most promising compression techniques for very low bit-rate applications. They consist of image segmentation, contour coding, and texture coding. This paper deals with the use of the geodesic skeleton as a morphological tool for contour coding of segmented image sequences. In the geodesic case, already coded and known regions are taken into account when coding the contours of unknown regions. A new technique is presented for entropy coding of the coordinates of the skeleton points that exploits their special spatial distribution. Furthermore, a fast algorithm for the reconstruction of the skeleton points is given, based on hierarchical queues. In the case of numerous isolated contour arcs (for example, error coding in a motion prediction loop), the geodesic skeleton proves more efficient than traditional methods. Results at very low bit rates are presented and compared with standard methods, confirming the validity of the chosen approach.

10.
Wavelet coding is an important class of image compression algorithms. When performing the wavelet transform, the choice of wavelet basis is critical, since it directly affects transform speed and coding efficiency. Based on a detailed analysis of the basic properties of wavelet bases and their relation to image coding, several representative wavelet bases are selected and compared experimentally, yielding the relationship between the wavelet basis and real-time image compression coding.
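A small experiment in the spirit of this comparison is easy to set up with PyWavelets: for each candidate basis, measure transform time and how much signal energy the largest 1% of coefficients capture (a common proxy for coding efficiency). The basis list, level, and the variable `image` (a 2-D numpy array) are assumptions.

```python
# Compare wavelet bases by transform speed and energy compaction.
import time
import numpy as np
import pywt

for name in ['haar', 'db4', 'sym8', 'coif3', 'bior4.4']:
    t0 = time.perf_counter()
    coeffs = pywt.wavedec2(image, name, level=4)
    dt = time.perf_counter() - t0
    arr, _ = pywt.coeffs_to_array(coeffs)
    e = np.sort(arr.ravel() ** 2)[::-1]
    frac = e[: max(1, e.size // 100)].sum() / e.sum()
    print(f'{name:8s} transform {dt * 1e3:6.1f} ms, top-1% energy {frac:.4f}')
```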

11.
Cascaded differential and wavelet compression of chromosome images
This paper proposes a new method for chromosome image compression based on an important characteristic of these images: the regions of interest (ROIs) used by cytogeneticists for evaluation and diagnosis are well determined and segmented. This information is exploited by our compression algorithm, which combines lossless compression of chromosome ROIs with lossy-to-lossless coding of the remaining image parts. This is accomplished by first performing a differential operation on chromosome ROIs for decorrelation, followed by critically sampled integer wavelet transforms on these regions and on the remaining image parts. The well-known set partitioning in hierarchical trees (SPIHT) algorithm (Said and Pearlman, 1996) is modified to generate separate embedded bit streams for the chromosome ROIs and the rest of the image, allowing continuous lossy-to-lossless compression of both (although lossless compression of the former is what is commonly used in practice). Experiments on two sets of sample chromosome spread and karyotype images indicate that the proposed approach significantly outperforms both the compression techniques used in current commercial karyotyping systems and JPEG-2000 compression, which does not provide the desired support for lossless compression of arbitrary ROIs.
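The key enabler of lossless ROI coding is an integer-to-integer wavelet. A minimal example is the Haar/S-transform lifting step below, which is exactly invertible in integer arithmetic; the paper's actual filters may differ, and an even-length input is assumed.

```python
# Integer Haar (S-transform) lifting: exactly invertible, hence lossless.
import numpy as np

def haar_int_forward(x):
    x = np.asarray(x, dtype=np.int64)
    a, b = x[0::2], x[1::2]
    d = b - a                        # integer detail
    s = a + (d >> 1)                 # integer approximation (floor mean)
    return s, d

def haar_int_inverse(s, d):
    a = s - (d >> 1)
    b = d + a
    x = np.empty(s.size + d.size, dtype=np.int64)
    x[0::2], x[1::2] = a, b
    return x

x = np.array([12, 15, 7, 7, 200, 3, 64, 65])
s, d = haar_int_forward(x)
assert np.array_equal(haar_int_inverse(s, d), x)   # perfect reconstruction
```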

12.
Due to its excellent rate–distortion performance, set partitioning in hierarchical trees (SPIHT) has become a state-of-the-art algorithm for image compression. However, the algorithm does not fully provide the desired features of progressive transmission, spatial scalability, and optimal visual quality at very low bit-rate coding. Furthermore, its use of three linked lists to record the coordinates of wavelet coefficients and tree sets during coding is the bottleneck in any fast implementation of SPIHT. In this paper, we propose a listless modified SPIHT (LMSPIHT) approach, a fast and low-memory image coding algorithm based on the lifting wavelet transform. LMSPIHT jointly considers the advantages of progressive transmission and spatial scalability, and incorporates human visual system (HVS) characteristics in the coding scheme; it therefore outperforms the traditional SPIHT algorithm at low bit-rate coding. Compared with SPIHT, LMSPIHT provides better compression performance and superior perceptual performance with low coding complexity. Its compression efficiency comes from three aspects. First, the lifting scheme lowers the number of arithmetic operations of the wavelet transform. Second, a significance reordering of the modified SPIHT codes more significant information belonging to the lower frequency bands earlier in the bit stream than SPIHT does, to better exploit the energy compaction of the wavelet coefficients. Third, HVS characteristics are employed to improve the perceptual quality of the compressed image by placing more coding artifacts in the less visually significant regions of the image. Finally, the listless implementation structure reduces the amount of memory and improves the speed of compression by more than 51% for a 512×512 image, compared with the SPIHT algorithm.
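The "listless" idea can be sketched by replacing SPIHT's three linked lists with a single fixed-size per-coefficient state array that is swept once per bitplane. This reproduces only the flavour of the memory layout, not LMSPIHT's tree partitioning, significance reordering, or HVS weighting; integer-valued coefficients whose magnitudes fit in `n_planes` bits are assumed.

```python
# Listless bitplane coder: one boolean state array, no dynamic lists.
import numpy as np

def listless_bitplane_encode(coeffs, n_planes=8):
    flat = coeffs.ravel()
    sig = np.zeros(flat.size, dtype=bool)         # fixed-size state array
    bits = []
    for p in range(n_planes - 1, -1, -1):
        was_sig = sig.copy()
        for i in range(flat.size):                # significance pass
            if not sig[i]:
                if abs(int(flat[i])) >= (1 << p):
                    bits += [1, int(flat[i] < 0)]
                    sig[i] = True
                else:
                    bits.append(0)
        for i in np.flatnonzero(was_sig):         # refinement pass
            bits.append((abs(int(flat[i])) >> p) & 1)
    return bits
```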

13.
A new fractal image compression coding method
Wang Zhou, Yu Yinglin. Journal on Communications, 1996, 17(3): 84-90
This paper proposes a new image coding method that combines bilinear interpolation with fractal image coding based on iterated function systems (IFS). Experimental results show that, compared with the basic automatic fractal image coding method, the computation time drops substantially and the compression ratio improves significantly while the reconstructed image quality is essentially preserved.
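One plausible reading of the combination is that blocks already well predicted by bilinear upsampling of a decimated image can skip the expensive fractal (IFS) domain-block search. The sketch below shows that bilinear half, with an assumed error threshold and an assumed 2-D array `image` whose sides are multiples of 16; the block-selection rule is illustrative, not the paper's exact scheme.

```python
# Bilinear prediction pass before fractal coding.
import numpy as np

def bilinear_upsample2(img):
    h, w = img.shape
    ys = np.linspace(0, h - 1, 2 * h)
    xs = np.linspace(0, w - 1, 2 * w)
    y0 = np.minimum(ys.astype(int), h - 2)
    x0 = np.minimum(xs.astype(int), w - 2)
    fy = (ys - y0)[:, None]
    fx = (xs - x0)[None, :]
    tl = img[np.ix_(y0, x0)]
    tr = img[np.ix_(y0, x0 + 1)]
    bl = img[np.ix_(y0 + 1, x0)]
    br = img[np.ix_(y0 + 1, x0 + 1)]
    return (1 - fy) * (1 - fx) * tl + (1 - fy) * fx * tr \
         + fy * (1 - fx) * bl + fy * fx * br

small = image[::2, ::2]                     # decimated image, cheap to send
pred = bilinear_upsample2(small.astype(float))
err = (image - pred) ** 2
h, w = image.shape
block_mse = err.reshape(h // 8, 8, w // 8, 8).mean(axis=(1, 3))
needs_fractal = block_mse > 50.0            # only these blocks get IFS search
```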

14.
Though most image coding techniques use a raster scan to order pixels prior to coding, Hilbert and other scans have been proposed as having better performance due to their superior locality preserving properties. However, a general understanding of the merits of various scans has been lacking. This paper develops an approach for quantitatively analyzing the effect of pixel scan order for context-based, predictive lossless image compression and uses it to compare raster, Hilbert, random and hierarchical scans. Specifically, for a quantized-Gaussian image model and a given scan order, it shows how the encoding rate can be estimated from the frequencies with which various pixel configurations are available as previously scanned contexts, and from the corresponding conditional differential entropies. Formulas are derived for such context frequencies and entropies. Assuming an isotropic image model and contexts consisting of previously scanned adjacent pixels, it is found that the raster scan is better than the Hilbert scan which is often used in compression applications due to its locality preserving properties. The hierarchical scan is better still, though it is based on nonadjacent contexts. The random scan is the worst of the four considered. Extensions and implications of the results to lossy coding are also discussed.
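An empirical companion to this analysis: generate raster and Hilbert scan orders and compare the first-order conditional entropy H(X_k | X_{k-1}) of the pixel sequence each produces (smaller means cheaper predictive coding). The Hilbert mapping is the standard iterative index-to-coordinate algorithm; `image` is assumed to be a square 2-D uint8 array with power-of-two side.

```python
# Conditional entropy of pixel sequences under raster vs Hilbert scans.
import numpy as np

def hilbert_xy(n, d):
    """Map index d to (x, y) on an n x n Hilbert curve (n a power of two)."""
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def cond_entropy(seq, bins=64):
    q = (seq.astype(int) // (256 // bins))
    joint = np.zeros((bins, bins))
    np.add.at(joint, (q[:-1], q[1:]), 1)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    ratio = np.divide(p, px, out=np.zeros_like(p), where=px > 0)
    nz = ratio > 0
    return -(p[nz] * np.log2(ratio[nz])).sum()

n = image.shape[0]
hil = np.array([image[y, x] for x, y in (hilbert_xy(n, d) for d in range(n * n))])
print('raster H:', cond_entropy(image.ravel()), ' hilbert H:', cond_entropy(hil))
```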

15.
The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-code modulation, and run-length coding is included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.
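A toy version of the modelling idea: treat total MSE as the sum of a quantization term with the classic exponential rate decay and a channel term that grows with bit-error rate. Only the functional form is illustrated; the constants below are assumed placeholders, where the paper fits its model statistically to the actual entropy-coded, differentially coded JPEG stream.

```python
# Joint quantization + channel distortion model (illustrative constants).
import numpy as np

def predicted_psnr(rate_bpp, ber, sigma2=2500.0, a=1.0, k=8e4):
    d_quant = a * sigma2 * 2.0 ** (-2.0 * rate_bpp)  # ~2^(-2R) quantizer MSE
    d_channel = k * ber                              # error-propagation term
    mse = d_quant + d_channel
    return 10.0 * np.log10(255.0 ** 2 / mse)

for ber in (0.0, 1e-5, 1e-4, 1e-3):
    print(f'BER {ber:g}: predicted PSNR {predicted_psnr(1.0, ber):.1f} dB')
```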

16.
Although subband transform coding is a useful approach to image compression and communication, the performance of this method has not been analyzed so far for color images, especially when the selection of color components is considered. Obviously, the RGB components are not suitable for such a compression method due to their high inter-color correlation. On the other hand, the common selection of YUV or YIQ is rather arbitrary and in most cases not optimal. In this work we introduce a rate–distortion model for color image compression and employ it to find the optimal color components and optimal bit allocation (optimal rates) for the compression. We show that the DCT (discrete cosine transform) can be used to transform the RGB components into an efficient set of color components suitable for subband coding. The optimal rates can also be used to design adaptive quantization tables in the coding stage, with results superior to fixed quantization tables. Based on the presented results, our conclusion is that the new approach can improve presently available methods for color image compression and communication.
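Since the abstract states that the DCT can decorrelate the RGB components, the transform amounts to a 3-point DCT across the channel axis of each pixel, which scipy's orthonormal DCT-II computes directly; the first output channel is the luminance-like average. The round trip below is a minimal sketch of that step only, not of the rate allocation.

```python
# 3-point DCT across R, G, B as the decorrelating colour transform.
import numpy as np
from scipy.fft import dct, idct

def rgb_to_dct_components(rgb):          # rgb: (H, W, 3) float array
    return dct(rgb, type=2, norm='ortho', axis=2)

def dct_components_to_rgb(comp):
    return idct(comp, type=2, norm='ortho', axis=2)

rgb = np.random.rand(4, 4, 3)
comp = rgb_to_dct_components(rgb)
assert np.allclose(dct_components_to_rgb(comp), rgb)   # perfectly invertible
```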

17.
Vector quantisation (VQ) has been extensively used as an effective image coding technique. One of the most important steps in the whole process is the design of the codebook. The codebook is generally designed using the LBG algorithm with a large training set of empirical data that is statistically representative of the images to be encoded. The LBG algorithm, although quite effective for practical applications, is computationally very expensive, and the resulting codebook has to be recalculated each time the type of image to be encoded changes. Stochastic vector quantisation (SVQ) provides an alternative way to generate the codebook: a model for the image is computed first, and the codewords are then generated according to this model rather than from some specific training sequence. The SVQ approach gives good coding performance for moderate compression ratios and different types of images. On the other hand, in the context of synthetic and natural hybrid coding (SNHC), there is a standing need for techniques that provide very high compression and high quality for homogeneous textures. A new stochastic vector quantisation approach using linear prediction is presented that provides very high compression ratios with graceful degradation for homogeneous textures. Owing to the specific construction of the method, there is no block effect in the synthesised image. Results, implementation details, generation of the bit stream, and comparisons with the MPEG-4 verification model are presented that prove the validity of the approach. The technique has been proposed as a still-image coding technique to the SNHC standardisation group of MPEG.
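The contrast with LBG can be made concrete: where LBG trains codewords on representative image blocks, an SVQ codebook is drawn from a fitted model. In the sketch below an AR(1) texture model with assumed parameters stands in for the paper's linear-prediction model.

```python
# Stochastic codebook: codewords drawn from a model, not a training set.
import numpy as np

def stochastic_codebook(n_words, dim, rho=0.9, sigma=1.0, seed=0):
    """Draw codewords from a stationary AR(1) process x_t = rho*x_{t-1} + e_t."""
    rng = np.random.default_rng(seed)
    e = rng.normal(0.0, sigma, size=(n_words, dim))
    cb = np.empty((n_words, dim))
    cb[:, 0] = e[:, 0] / np.sqrt(1 - rho ** 2)   # stationary starting sample
    for t in range(1, dim):
        cb[:, t] = rho * cb[:, t - 1] + e[:, t]
    return cb

codebook = stochastic_codebook(256, 16)          # 256 codewords, 4x4 blocks
# encoding a block b = nearest codeword: np.argmin(((codebook - b)**2).sum(1))
```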

18.
In this paper, a novel dynamic voltage–frequency scaling-aware (DVFS-aware), bandwidth-efficient motion estimation (ME) scheme is presented for mobile application processor (AP) systems. Under the volatile operating conditions created by the power management mechanism, we model the coding bandwidth (BW) and coding performance of the video processor as convex functions of the working frequency. We present a bandwidth–rate–distortion (B–R–D) optimized framework that guarantees the smallest possible rate–distortion cost under the coding BW constraints applied in video coding design. By formulating coding-bandwidth-constrained ME as an optimization problem, known convex optimization theory can be applied to yield optimal resource-constrained compression. Using various CIF (352×288)- and HD (1280×720)-sized video sequences with different motion activities, the proposed DVFS-aware video coding approach obtains excellent results in terms of coding performance and coding bandwidth savings. With negligible quality loss, the proposed scheme under coding BW constraints achieves a 45–65% reduction in coding BW usage for HD-sized 30 frame/s video coding.
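With both the bandwidth model BW(f) and the rate–distortion cost J(f) convex in the working frequency f, the constrained problem min J(f) s.t. BW(f) ≤ B has a feasible set that is an interval, and the optimum can be found by a simple ternary search. The model functions and constants below are assumed placeholders, not the paper's fitted models.

```python
# Convex BW-constrained frequency selection via ternary search.
import numpy as np

def minimize_convex(J, lo, hi, iters=60):
    """Ternary search for the minimizer of a convex function on [lo, hi]."""
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if J(m1) < J(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

BW = lambda f: 120.0 / f + 0.02 * f        # assumed convex BW model
J = lambda f: 40.0 / f + 0.001 * f ** 2    # assumed convex R-D cost model
B_MAX = 10.0                               # coding bandwidth budget

fs = np.linspace(100.0, 2000.0, 2001)      # candidate frequencies (MHz)
feasible = fs[BW(fs) <= B_MAX]             # convex BW => an interval
f_star = minimize_convex(J, feasible.min(), feasible.max())
print(f'optimal working frequency: {f_star:.0f} MHz')
```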

19.
Wavelet image coding is a form of subband coding, but unlike conventional subband coding, the wavelet transform decomposes an image not only into subbands of different frequency bands (i.e., resolutions) but also, within each resolution, into subbands of different spatial orientations. Subbands of the same orientation are similar to one another, and exploiting this similarity in coding can raise the compression ratio. Subbands of different orientations are not similar, but the features of the original image occupy invariant spatial positions across these subbands, and exploiting this positional invariance in coding can raise the compression ratio further. This paper provides theoretical analysis and experimental verification of both points.
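The claimed same-orientation similarity is easy to probe empirically: correlate the coefficient magnitudes of each level-1 detail subband with the upsampled magnitudes of the same-orientation level-2 subband. PyWavelets with the Haar basis keeps subband sizes exactly halved; `image` is an assumed 2-D numpy array with power-of-two sides.

```python
# Measure cross-scale similarity of same-orientation wavelet subbands.
import numpy as np
import pywt

cA2, (h2, v2, d2), (h1, v1, d1) = pywt.wavedec2(image, 'haar', level=2)

def upsample2(a):
    return np.repeat(np.repeat(a, 2, axis=0), 2, axis=1)

for name, fine, coarse in [('H', h1, h2), ('V', v1, v2), ('D', d1, d2)]:
    r = np.corrcoef(np.abs(fine).ravel(),
                    np.abs(upsample2(coarse)).ravel())[0, 1]
    print(f'{name}-orientation |coef| correlation across scales: {r:.3f}')
```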

20.
This paper first studies fractal image compression coding and fractal-interpolation image compression. Neither method yields satisfactory image quality at very high compression ratios, so we propose combining fractal interpolation with fractal image coding for efficient compression of image data. Experimental results verify the correctness and feasibility of this idea: at a reconstructed image SNR of 28.5 dB, the new method reaches a compression ratio of up to 76:1, and the speed is also greatly improved.
