Similar Literature
20 similar documents retrieved.
1.
A novel image compression scheme based on two-dimensional adaptive decimation is reported in this paper. In this approach, images are encoded with adaptive sampling along the horizontal and vertical directions, and decoded with an edge-prediction interpolation algorithm. The method is capable of maintaining reasonable coding fidelity at low bit rates with good visual quality. As only a small amount of computation is required in the encoding and decoding processes, the compression scheme can be implemented for real-time operation with simple hardware and a small amount of memory. The proposed scheme has been applied to encoding images at bit rates between 0.2 and 0.33 bpp, and the results are encouraging.

2.
A differential index (DI) assignment scheme is proposed for an image encoding system in which a variable-length tree-structured vector quantizer (VLTSVQ) is adopted. Each source vector is quantized into a terminal node of the VLTSVQ, and each terminal node is represented by a unique binary vector. The proposed index assignment scheme exploits the interblock correlation of the image to increase the compression ratio while maintaining image quality. Simulation results show that the proposed scheme achieves a much higher compression ratio than the conventional one, and that its bit-rate reduction grows as the correlation of the image grows. The proposed encoding scheme can be used effectively to encode MR images, whose pixel values are, in general, highly correlated with those of neighboring pixels.

3.
汪洋  卢焕章 《信号处理》2006,22(5):630-634
This paper proposes a compression method for infrared images containing small targets, based on a classified vector quantizer. The image is first partitioned into background regions and regions of interest using two block parameters: mean gray level and texture energy. Separate codebooks are then designed for the two classes of blocks, with relatively more codewords describing the regions of interest and relatively fewer describing the background. This achieves a high compression ratio while preserving the information in the regions of interest, and it greatly reduces the encoding computation. The paper gives a theoretical analysis of how the classified vector quantizer reduces the encoding computation. Experimental results show that, for the same codebook size, the proposed algorithm preserves the small-target information in infrared images better than direct vector quantization, and speeds up encoding.
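The two-parameter block classification described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the block size, the texture-energy definition (mean absolute deviation here), and both thresholds are assumptions chosen for the example.

```python
import numpy as np

def classify_blocks(image, block=8, mean_thresh=100.0, energy_thresh=50.0):
    """Label each block as region-of-interest (True) or background (False)
    using its mean gray level and a simple texture-energy measure.
    Thresholds and block size are illustrative assumptions."""
    h, w = image.shape
    labels = np.zeros((h // block, w // block), dtype=bool)
    for i in range(h // block):
        for j in range(w // block):
            b = image[i*block:(i+1)*block, j*block:(j+1)*block].astype(float)
            mean = b.mean()
            # texture energy here: mean absolute deviation from the block mean
            energy = np.abs(b - mean).mean()
            labels[i, j] = (mean > mean_thresh) or (energy > energy_thresh)
    return labels
```

A codebook with more codewords would then be used for the `True` blocks and a smaller one for the `False` blocks.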

4.
Po  L.-M. Tan  W.-T. 《Electronics letters》1994,30(2):120-121
To display a colour image decompressed by conventional DCT-based compression algorithms on a palette-based display system, the decoded image must be colour quantised. To avoid the computation-intensive colour quantisation process and provide fast decoding, a new block-address-predictive colour quantisation image compression scheme is proposed. A closest-pairs colour palette ordering technique is also proposed to effectively exploit the redundancy of the palettised image.

5.
SAR image compression is very important in reducing the costs of data storage and transmission in relatively slow channels. The authors propose a compression scheme driven by texture analysis, homogeneity mapping and speckle noise reduction within the wavelet framework. The image compressibility and interpretability are improved by incorporating speckle reduction into the compression scheme. The authors begin with the classical set partitioning in hierarchical trees (SPIHT) wavelet compression scheme, and modify it to control the amount of speckle reduction, applying different encoding schemes to homogeneous and nonhomogeneous areas of the scene. The results compare favorably with the conventional SPIHT wavelet and the JPEG compression methods.

6.
Speed-up fractal image compression with a fuzzy classifier
This paper presents a fractal image compression scheme incorporating a fuzzy classifier that is optimized by a genetic algorithm. Fractal image compression requires finding, among all possible divisions of an image into subblocks, range blocks that match domain blocks. With suitable classification of the subblocks by a fuzzy classifier, the search time for this matching process can be reduced, speeding up encoding. Implementation results show that by introducing three image classes and using a fuzzy classifier optimized by a genetic algorithm, the encoding process is sped up by about 40% compared with an unclassified encoding system.

7.
Po  L.M. Tan  W.T. Wong  W.B. 《Electronics letters》1995,31(23):1988-1990
A new colour quantisation and quadtree based image compression scheme is proposed. The features of the new scheme are that colour palette ordering and requantisation of the decoded image for palette-based monitor displays are not required. Thus, fast decoding and displaying can be achieved.

8.
The embedded zerotree wavelet (EZW) coding algorithm is a very effective technique for low bit-rate still image compression. In this paper, an improved EZW algorithm is proposed to achieve high compression performance in terms of PSNR and bit rate for lossy and lossless image compression, respectively. To reduce the number of zerotrees and the scanning and symbol redundancy of the existing EZW, the proposed method uses a new significance-symbol map that is represented in a more efficient way. Furthermore, we develop a new EZW-based scheme for scalable colour image coding that efficiently exploits the interdependency of the colour planes. Numerical results demonstrate a significant superiority of our scheme over the conventional EZW and other improved EZW schemes with respect to both objective and subjective criteria for lossy and lossless compression of greyscale and colour images.
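For readers unfamiliar with EZW, the core of each dominant pass is a significance test of wavelet coefficients against a threshold that is halved between passes. The sketch below shows only that test over a flat coefficient list; the zerotree grouping across subbands, which this paper's symbol map improves, is deliberately omitted.

```python
def dominant_pass(coeffs, threshold):
    """One EZW-style dominant pass over a flat list of wavelet coefficients:
    emit 'P' (significant positive), 'N' (significant negative), or 'Z'
    (insignificant at this threshold). Zerotree symbols are omitted here."""
    symbols = []
    for c in coeffs:
        if abs(c) >= threshold:
            symbols.append('P' if c > 0 else 'N')
        else:
            symbols.append('Z')
    return symbols
```

Successive passes with `threshold` halved each time yield the embedded (progressively refinable) bit stream that makes EZW attractive at low bit rates.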

9.
Most colour watermarking methods are realised by modifying the image luminance or by processing each component of a colour space separately. This paper presents a novel and robust colour watermarking approach for applications in copy protection and digital archives. The proposed scheme exploits chrominance information during embedding: the watermark is hidden in the DC components of the colour image directly in the spatial domain, followed by a saturation adjustment technique performed in RGB space. The merit of the proposed approach is that it not only provides promising watermarking performance but is also computationally efficient. Experimental results demonstrate that this scheme makes the watermark perceptually invisible and robust to common image processing operations (JPEG2000 and lossy JPEG compression, lowpass filtering, and median filtering), as well as to image scaling and image cropping.

10.
An image compression method based on wavelet packet trees
Research on image compression has grown steadily over the past decade, and the discrete wavelet transform is among the most effective and representative methods in this field. Image compression comprises transformation, quantization, and coding. This paper proposes transformation and quantization schemes: the algorithm uses wavelet packets for the transform, rebuilds the best basis tree on the basis of Shannon entropy, and adopts an adaptive threshold for quantization. Compared with wavelet-transform compression, it yields a good compression implementation. Experimental results demonstrate the advantages of the algorithm.
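The Shannon-entropy cost used for best-basis selection can be sketched as follows. This is a simplified illustration, not the paper's algorithm: the normalized-entropy form and the parent-versus-children pruning rule shown here are common textbook choices that the abstract does not spell out.

```python
import math

def shannon_cost(coeffs):
    """Shannon-entropy cost of a subband: -sum p*log(p), where p is each
    coefficient's share of the total energy. Lower cost = sparser subband."""
    energy = sum(c * c for c in coeffs)
    if energy == 0.0:
        return 0.0
    return -sum((c*c/energy) * math.log(c*c/energy) for c in coeffs if c)

def keep_parent(parent, children):
    """Simplified best-basis pruning rule: keep the parent node of the
    wavelet packet tree if its cost does not exceed the summed cost of
    its children subbands; otherwise split further."""
    return shannon_cost(parent) <= sum(shannon_cost(ch) for ch in children)
```

Applying `keep_parent` bottom-up over the full wavelet packet decomposition yields the "best tree" the abstract refers to.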

11.
This paper presents a CMOS image sensor with on-chip compression using an analog two-dimensional discrete cosine transform (2-D DCT) processor and a variable quantization level analog-to-digital converter (ADC). The analog 2-D DCT processor is inherently suitable for on-sensor image compression, since the analog image sensor signal can be processed directly. The small and low-power nature of the analog design allows low-power, low-cost, one-chip digital video cameras. The 8×8-point analog 2-D DCT processor is designed with fully differential switched-capacitor circuits to obtain sufficient precision for video compression purposes. The imager array has a dedicated eight-channel parallel readout scheme for direct encoding with the analog 2-D DCT processor. Variable-level quantization after the 2-D DCT is performed by the ADC at the same time. A prototype CMOS image sensor integrating these core compression circuits is implemented in a triple-metal double-polysilicon 0.35-μm CMOS technology. Image encoding of sensor-captured images using the implemented analog 2-D DCT processor is performed successfully. The maximum peak signal-to-noise ratio (PSNR) is 36.7 dB.
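As a reference point for the analog processor above, the digital counterpart of a separable 8×8 2-D DCT is a pair of matrix multiplications. The sketch below uses the standard orthonormal DCT-II; it illustrates the transform the switched-capacitor circuits approximate, not the circuit itself.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows = frequencies, cols = samples)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2*k[None, :] + 1) * k[:, None] / (2*n))
    C[0, :] *= 1.0 / np.sqrt(2.0)
    return C * np.sqrt(2.0 / n)

def dct2(block):
    """Separable 2-D DCT of a square block: transform rows, then columns."""
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T
```

A constant block transforms to a single DC coefficient with all AC terms zero, which is what makes the subsequent quantization so effective on smooth image regions.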

12.
Fractal coding is a very promising image compression technique, but its high time complexity has so far prevented it from gaining wide application. This paper proposes an improved fractal image compression algorithm aimed at reducing the complexity of fractal coding and shortening encoding time. The algorithm adopts a recursive quadtree partitioning structure, combines several block classification techniques, reduces computational complexity through precomputation and rotation/flip normalization, and employs an efficient storage scheme to improve compression, seeking a good trade-off among image quality, compression ratio, and encoding time so as to make fractal coding more practical. Experimental results show that, with this …

13.
An efficient encoding scheme for arbitrary curves, based on the chain code representation, has been proposed. The encoding scheme takes advantage of the property that a curve with gentle curvature is divided into somewhat long curve sections (segments), each of which is represented by a sequence of two adjacent-direction chain codes. The coding efficiency of the scheme becomes higher as the segments become longer, while a variable-length coding technique makes it possible to encode short segments without an extreme increase in code amount. An experiment with complicated curves obtained from geographic maps has shown a high data compression rate of the proposed scheme, whose code amount is about 50-60 percent of that required for the conventional chain encoding scheme.
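The chain code representation underlying this scheme is simple to sketch: each step between adjacent pixels of the curve maps to one of eight direction codes. The sketch below shows only the baseline Freeman chain code; the paper's segment grouping and variable-length coding are built on top of this and are not reproduced here.

```python
# Step (dx, dy) -> Freeman code 0..7, counter-clockwise from east,
# assuming a y-axis that grows upward (image coordinates may flip this).
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Freeman 8-connected chain code of a digital curve given as a
    sequence of adjacent pixel coordinates."""
    return [DIRS[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(points, points[1:])]
```

A gently curving segment produces long runs drawn from just two adjacent codes (e.g. `0` and `1`), which is exactly the redundancy the proposed scheme exploits.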

14.
An image compressor inside a wireless capsule endoscope should have low power consumption, small silicon area, a high compression rate and high reconstructed image quality. A simple and efficient image compression scheme, consisting of a reversible color space transformation, quantization, subsampling, differential pulse code modulation (DPCM) and Golomb–Rice encoding, is presented in this paper. To optimize these methods and combine them optimally, the unique properties of human gastrointestinal tract images are exploited. Computationally simple color spaces suitable for efficient compression of gastrointestinal tract images are proposed. Quantization and subsampling methods are optimally combined. A hardware-efficient, locally adaptive Golomb–Rice entropy encoder is employed. The proposed image compression scheme gives an average compression rate of 90.35% and a peak signal-to-noise ratio of 40.66 dB. An ASIC has been fabricated on a UMC 130 nm CMOS process using the Faraday high-speed standard cell library. The core of the chip occupies 0.018 mm² and consumes 35 μW of power. The experiment was performed at 2 frames per second on a 256×256 color image. The power consumption is further reduced from 35 to 9.66 μW by implementing the proposed image compression scheme using the Faraday low-leakage standard cell library on the UMC 130 nm process. Compared to existing DPCM-based implementations, our realization achieves a significantly higher compression rate for similar area and power consumption. We achieve almost as high a compression rate as existing DCT-based image compression methods, but with an order of magnitude less area and power consumption.
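The Golomb–Rice stage of such a pipeline is simple enough to sketch, which helps explain why it suits a tiny hardware budget: the code for a value is a unary quotient plus a fixed number of remainder bits, with no code tables. The zigzag mapping of signed DPCM residuals shown here is a common companion step, not necessarily the paper's exact choice.

```python
def rice_encode(value, k):
    """Golomb-Rice code of a non-negative integer with parameter k:
    unary quotient ('1'*q), a '0' terminator, then a k-bit remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    bits = '1' * q + '0'
    if k:
        bits += format(r, f'0{k}b')
    return bits

def zigzag(v):
    """Map a signed DPCM residual to a non-negative integer
    (0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...)."""
    return 2 * v if v >= 0 else -2 * v - 1
```

Because small residuals dominate in smooth tissue images, most codes are just a couple of bits long, giving high compression with essentially no arithmetic.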

15.
3-D matrix WDCT compression coding of colour images
桑爱军  陈贺新 《电子学报》2002,30(4):594-597
This paper studies three-dimensional matrices in depth and proposes, for the first time, a new wide discrete cosine transform based on the three-dimensional matrix (3-D WDCT). Its operation rules are defined, its forward and inverse transforms and related properties are proved, and it is applied to colour image compression coding. The three frames of a colour image are mappings of the same physical model, sharing the same texture, edges, and gray-level variation. Taking full account of this structural similarity among the three frames, the proposed method represents all components of a colour image in a unified mathematical model, so that the R, G, and B frames are compressed as a whole for the first time, removing interframe and intraframe redundancy simultaneously, instead of processing each component independently as other methods do. Experimental results show clear improvements in the compression ratio, signal-to-noise ratio, and visual quality of the reconstructed images. The method applies and extends the mature discrete cosine transform technique and is well compatible with JPEG and MPEG.

16.
Fast and memory efficient text image compression with JBIG2
We investigate ways to reduce encoding time, memory consumption and substitution errors for text image compression with JBIG2. We first look at page striping where the encoder splits the input image into horizontal stripes and processes one stripe at a time. We propose dynamic dictionary updating procedures for page striping to reduce the bit rate penalty it incurs. Experiments show that splitting the image into two stripes can save 30% of encoding time and 40% of physical memory with a small coding loss of about 1.5%. Using more stripes brings further savings in time and memory but the return diminishes. We also propose an adaptive way to update the dictionary only when it has become out-of-date. The adaptive updating scheme can resolve the time versus bit rate tradeoff and the memory versus bit rate tradeoff well simultaneously. We then propose three speedup techniques for pattern matching, the most time-consuming encoding activity in JBIG2. When combined together, these speedup techniques can save up to 75% of the total encoding time with at most 1.7% of bit rate penalty. Finally, we look at improving reconstructed image quality for lossy compression. We propose enhanced prescreening and feature monitored shape unifying to significantly reduce substitution errors in the reconstructed images.

17.
18.
Study on Huber Fractal Image Compression
In this paper, a new similarity measure for fractal image compression (FIC) is introduced. In the proposed Huber fractal image compression (HFIC), the linear Huber regression technique from robust statistics is embedded into the encoding procedure of fractal image compression. When the original image is corrupted by noise, we argue that the fractal image compression scheme should be insensitive to the noise present in the corrupted image. This leads to a new concept of robust fractal image compression. The proposed HFIC is one of our attempts toward the design of robust fractal image compression. The main disadvantage of HFIC is its high computational cost. To overcome this drawback, the particle swarm optimization (PSO) technique is utilized to reduce the searching time. Simulation results show that the proposed HFIC is robust against outliers in the image. Also, the PSO method can effectively reduce the encoding time while retaining the quality of the retrieved image.
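The robustness property claimed here comes directly from the shape of the Huber penalty: quadratic for small residuals, linear for large ones, so a few outlier pixels cannot dominate a domain-range match cost the way they do under squared error. A minimal sketch of that cost (the `delta` value is an illustrative assumption):

```python
def huber(r, delta=1.0):
    """Huber penalty: quadratic for |r| <= delta, linear beyond it."""
    a = abs(r)
    return 0.5 * a * a if a <= delta else delta * (a - 0.5 * delta)

def match_cost(range_block, mapped_domain, delta=1.0):
    """Domain-range matching cost under the Huber penalty, so isolated
    noisy pixels contribute only linearly to the cost."""
    return sum(huber(x - y, delta) for x, y in zip(range_block, mapped_domain))
```

Under squared error a single residual of 10 contributes 50 to the cost; under `huber(10, 1.0)` it contributes only 9.5, which is what keeps the block matching stable on noisy images.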

19.
A fast and efficient hybrid fractal-wavelet image coder
Despite the excellent visual quality and compression rate of fractal image coding, its applications have been limited by its exhaustive inherent encoding time. This paper presents a new fast and efficient image coder that applies the speed of the wavelet transform to the image quality of fractal compression. Fast fractal encoding using Fisher's domain classification is applied to the lowpass subband of the wavelet-transformed image, and a modified set partitioning in hierarchical trees (SPIHT) coding is applied to the remaining coefficients. Furthermore, image details and wavelet progressive transmission characteristics are maintained, no blocking effects from fractal techniques are introduced, and the encoding fidelity problem common in fractal-wavelet hybrid coders is solved. The proposed scheme achieves an average 94% reduction in encoding-decoding time compared with pure accelerated fractal coding. The simulations also compare the results to SPIHT wavelet coding. In both cases, the new scheme improves the subjective quality of pictures at high, medium and low bitrates.

20.
This paper presents a novel predictive coding scheme for image-data compression by vector quantization (VQ). On the basis of a prediction, further compression is achieved by using a dynamic codebook-reordering strategy that allows a more efficient Huffman encoding of vector addresses. The proposed method is lossless, for it increases the compression performance of a baseline vector quantization scheme without causing any further image degradation. Results are presented and a comparison with Cache-VQ is made.
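The abstract does not specify the reordering strategy, but move-to-front is one simple instance of dynamic codebook reordering and illustrates the idea: recently used (i.e. well-predicted) VQ indices are remapped to small ranks, which a subsequent Huffman or other entropy coder compresses well. A hedged sketch:

```python
def mtf_encode(indices, codebook_size):
    """Move-to-front reordering of VQ codebook indices (an illustrative
    stand-in for the paper's dynamic reordering, which is not specified
    in the abstract). Repeated indices become rank 0; the rank stream,
    not the raw index stream, is then entropy coded."""
    order = list(range(codebook_size))
    ranks = []
    for idx in indices:
        rank = order.index(idx)
        ranks.append(rank)
        order.pop(rank)       # promote the just-used index
        order.insert(0, idx)  # to the front of the ordering
    return ranks
```

Because the transform is a bijection at every step, the decoder can maintain the same ordering and invert it exactly, so the overall scheme stays lossless.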

