Similar Documents

20 similar documents retrieved.
1.
Vector quantization (VQ) is an effective data compression technique; because the algorithm is simple and achieves a high compression ratio, it is widely used in data compression coding. Based on a study of the grey-level characteristics of image blocks, and depending on whether a block is smooth or not, a hybrid mean/vector-quantization coding algorithm is proposed: smooth image blocks are coded by their mean value, while non-smooth blocks are coded by vector quantization. This not only saves storage for smooth codewords and improves codebook storage efficiency, but also greatly accelerates encoding. In addition, a codeword rotation-and-complement (2R) compression algorithm reduces codebook storage to 1/8, and the search is optimized with the extended-block nearest-neighbour search (EBNNS) algorithm. While preserving image quality, the overall system encodes images about 7.7 times faster on average than full-search vector quantization.
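A minimal sketch of the smooth/non-smooth decision described above. The variance threshold, block size, and plain full-search VQ fallback are hypothetical choices; the paper's exact smoothness test, the 2R codebook compression, and EBNNS are not reproduced.

```python
import numpy as np

def hybrid_encode(blocks, codebook, smooth_thresh=25.0):
    """Encode 4x4 blocks: mean coding for smooth blocks, VQ for the rest.

    blocks:        (N, 16) array of flattened image blocks
    codebook:      (K, 16) array of VQ codewords
    smooth_thresh: variance threshold (hypothetical value)
    """
    codes = []
    for b in blocks:
        if b.var() < smooth_thresh:             # smooth: store only the mean
            codes.append(('M', int(round(b.mean()))))
        else:                                    # non-smooth: full-search VQ
            idx = int(np.argmin(((codebook - b) ** 2).sum(axis=1)))
            codes.append(('V', idx))
    return codes

def hybrid_decode(codes, codebook, block_len=16):
    out = []
    for kind, val in codes:
        out.append(np.full(block_len, val, float) if kind == 'M'
                   else codebook[val].astype(float))
    return np.vstack(out)
```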

2.
Based on the grey model (GM), a simple and fast methodology is developed for lossy image compression. First, the image is decomposed into image windows of different sizes by judging the grey difference level; the GM(1,1) model of grey system theory is then used to fit the pixels of each window. The proposed algorithm contrasts with conventional compression techniques, such as the discrete cosine transform or vector quantization (VQ), in its dynamic modelling sequence and flexible block size. In particular, compression and decompression do not require an extra decoder: the image is reconstructed from the modelling parameters alone by reversing the operation of GM(1,1). Experiments with 512 x 512 images indicate that not only the average bit number per pixel and the peak signal-to-noise ratio but also the coding and decoding times of this GM(1,1)-based lossy image compression algorithm are better than those of block truncation coding with VQ.
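A sketch of the GM(1,1) fit-and-reconstruct cycle the method builds on, assuming the standard grey-system formulation (AGO accumulation and least-squares estimation of the development coefficients); the window decomposition by grey difference level is omitted.

```python
import numpy as np

def gm11_fit(x0):
    """Fit a GM(1,1) model to a 1-D pixel sequence x0 (NumPy array, len >= 4)."""
    x1 = np.cumsum(x0)                           # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    return a, b, x0[0]                           # these parameters are the "code"

def gm11_reconstruct(a, b, x0_first, n):
    """Rebuild n samples from the model parameters (the decoder side)."""
    k = np.arange(n)
    x1_hat = (x0_first - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0_first], np.diff(x1_hat)])   # inverse AGO
```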

3.
Image retrieval based on local projection and block LBP features
First, a projection method is applied to local image sub-blocks and combined with vector quantization to obtain a projection-vector index histogram feature, which effectively captures the colour distribution and spatial relationships of the image. Second, an LBP texture feature algorithm based on block primitives is proposed; it effectively extracts the structural patterns of block primitives, avoids the instability of the traditional single-pixel LBP template, and greatly reduces computation. Finally, a feature extraction scheme that separates salient from non-salient regions, based on a saliency map, is proposed, so that the features extracted from each region are more visually meaningful. Experimental results show that the proposed algorithm clearly outperforms the traditional index-histogram method, improving the average precision by 6.39% on average.
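A sketch of the block-primitive LBP idea: the standard 8-neighbour LBP thresholding is applied to a grid of block means rather than to single pixels. The cell size is a hypothetical choice, and the projection and saliency stages are not shown.

```python
import numpy as np

def block_lbp_histogram(img, cell=4):
    """LBP computed on block means rather than single pixels (a sketch of
    the 'block primitive' idea; cell size 4 is a hypothetical choice)."""
    h, w = img.shape
    # mean of each cell x cell block forms the primitive grid
    means = img[:h - h % cell, :w - w % cell] \
                .reshape(h // cell, cell, w // cell, cell).mean(axis=(1, 3))
    c = means[1:-1, 1:-1]                        # centre primitives
    neigh = [means[:-2, :-2], means[:-2, 1:-1], means[:-2, 2:],
             means[1:-1, 2:], means[2:, 2:],  means[2:, 1:-1],
             means[2:, :-2], means[1:-1, :-2]]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, n in enumerate(neigh):              # standard 8-bit LBP thresholding
        code |= ((n >= c).astype(np.int32) << bit)
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()
```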

4.
A two-phase fractal image-sequence compression system is proposed. In the classification phase, a candidate solid (volumetric) image block is assigned to its texture class according to its texture attribute, which is derived from a tomographic block-projection classification over a finite set of projection directions in three-dimensional (3D) space. In the adaptive coding phase, both the 3D projection classification algorithm and 3D variable-shape decomposition are incorporated into a variable-shape block transformation for image sequences. Applying this variable-shape block transformation to a fractal image-sequence coding scheme yields promising performance.

5.
An edge-preserving image compression algorithm based on an unsupervised competitive neural network, called the weighted centroid neural network (WCNN), is proposed. The WCNN exploits the characteristics of image blocks from edge areas, using the mean/residual vector quantization (M/RVQ) scheme as the framework of the algorithm. The edge strength of the image block data serves as the criterion for allocating code vectors in the WCNN: compared with conventional VQ-based neural networks, the WCNN allocates more code vectors to blocks from edge areas and fewer to blocks from shade or non-edge areas. As a result, a straightforward application of the WCNN to image compression gives better edge characteristics in reconstructed images than conventional VQ-based neural networks such as the self-organizing map (SOM) and the adaptive SOM.

6.
This paper discusses a video compression and decompression method based on vector quantization (VQ) for use on general-purpose computer systems without specialized hardware. After describing basic VQ coding, we survey common VQ variations and discuss their impediments in light of the target application. We discuss how the proposed video codec was designed to reduce computational complexity in every principal task of the video codec process. We propose a classified VQ scheme that satisfies the data-rate, image-quality, decoding-speed, and encoding-speed objectives for software-only video playback. The functional components of the proposed VQ method are covered in detail. The method employs a pseudo-YUV color space and criteria to detect temporal redundancy and low-spatial-frequency regions. A tree-structured codebook generation algorithm is proposed to reduce encoding execution time while preserving image quality. Two separate vector codebooks, each generated with the tree-structured search, are employed for detail blocks and low-spatial-frequency blocks. Codebook updating and sharing are proposed to further improve encoder speed and compression.

7.
A fast image coding algorithm based on IFS blocks
This paper first reviews the current state of research on fractal block coding and then proposes a new fast coding algorithm. The L1 distance replaces the L2 distance used in previous algorithms, turning the matching process into a search similar to that of vector quantization, so that many VQ acceleration techniques can be adopted. The handling of flat regions is also discussed and a new partitioning method is proposed, with good results. Compared with other fractal block coding methods, the algorithm greatly shortens encoding time and improves the quality of the compressed image; in particular, it largely removes blocking artifacts, and the compression ratio is further increased.
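A sketch of L1-distance matching with a partial-distortion early exit, one common VQ-style acceleration of the kind the abstract refers to; the paper's full set of speedups, its partitioning method, and its flat-region handling are not reproduced.

```python
import numpy as np

def l1_match(block, domain_pool):
    """Find the best domain block under the L1 distance, with a partial-
    distortion early exit (a common VQ-style acceleration sketch)."""
    best_idx, best_d = -1, np.inf
    for i, d in enumerate(domain_pool):
        dist = 0.0
        for x, y in zip(block, d):               # accumulate |x - y| termwise
            dist += abs(x - y)
            if dist >= best_d:                   # early exit: cannot improve
                break
        else:
            best_idx, best_d = i, dist
    return best_idx, best_d
```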

8.
《Parallel Computing》2002,28(7-8):1079-1093
Vector quantization (VQ) is a widely used algorithm in speech and image data compression. One problem with the VQ methodology is that it requires a large amount of computation time, especially for large codebook sizes. This paper addresses two issues. The first is the parallel construction of the VQ codebook, which can drastically reduce training time: a master/worker parallel implementation of a VQ algorithm is proposed and executed on the DM-MIMD Alex AVX-2 machine using a pipeline architecture. The second is the ability to accurately predict machine performance. Using communication and computation models, a comparison between expected and real performance is carried out. Results show that the two models can accurately predict the performance of the machine for image data compression. An analysis of metrics normally used in parallel realization is also conducted.
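A sketch of the master/worker idea using Python multiprocessing: workers assign training vectors to their nearest codewords in parallel, and the master recomputes the centroids. This only illustrates the decomposition; the paper targets the DM-MIMD Alex AVX-2 machine with a pipeline architecture, not Python.

```python
import numpy as np
from multiprocessing import Pool

def nearest_indices(args):
    """Worker: assign a chunk of training vectors to nearest codewords."""
    chunk, codebook = args
    d = ((chunk[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def parallel_lbg_step(train, codebook, workers=4):
    """One master/worker iteration of codebook training.

    Call from within an `if __name__ == "__main__":` guard on platforms
    that spawn worker processes (e.g. Windows).
    """
    chunks = np.array_split(train, workers)
    with Pool(workers) as pool:                  # workers: parallel assignment
        idx = np.concatenate(pool.map(nearest_indices,
                                      [(c, codebook) for c in chunks]))
    new_cb = codebook.copy()
    for k in range(len(codebook)):               # master: centroid update
        members = train[idx == k]
        if len(members):
            new_cb[k] = members.mean(axis=0)
    return new_cb
```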

9.
Search-order coding method with indicator-elimination property
Vector quantization (VQ) is a widely used technique for many applications, especially lossy image compression. Since VQ significantly reduces the size of a digital image, it can save storage space and image-delivery costs. Search-order coding (SOC) was proposed to improve the compression rate of VQ. However, SOC requires extra data (i.e., indicators) to indicate the source of codewords, which degrades the compression rate. To overcome this drawback, this paper proposes a search-order coding method with an indicator-elimination property based on reversible data hiding; it is the first method to use data hiding to improve the compression rate of SOC. Experimental results show that the proposed indicator-eliminated search-order coding successfully improves the compression rate of the SOC method and is also more flexible than some existing schemes.
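A sketch of plain SOC encoding for a row-major index map, assuming a simple neighbour search order and search length (both hypothetical); the paper's contribution, eliminating the indicator bits via reversible data hiding, is not shown.

```python
def soc_encode(indices, width, search_len=4):
    """Search-order coding sketch for a row-major list of VQ indices.

    For each block index, previously coded neighbours are scanned in a
    fixed order (left, above, above-left, above-right here -- one simple
    order, not necessarily the paper's); a hit is coded as a short search
    position, a miss as the raw index.  The '1'/'0' indicator bits are the
    extra data the paper's method then eliminates.
    """
    out = []
    for i, idx in enumerate(indices):
        r, c = divmod(i, width)
        cand = [(r, c - 1), (r - 1, c), (r - 1, c - 1), (r - 1, c + 1)]
        prev = [indices[x * width + y] for x, y in cand
                if x >= 0 and 0 <= y < width and x * width + y < i]
        hit = next((p for p, v in enumerate(prev[:search_len]) if v == idx),
                   None)
        out.append(('1', hit) if hit is not None else ('0', idx))
    return out
```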

10.
Image coding algorithms such as vector quantisation (VQ), JPEG, and MPEG have been widely used for encoding images and video. These compression systems utilise block-based coding techniques to achieve a higher compression ratio. However, a cell loss or a random bit error during network transmission permeates the whole block and produces several damaged blocks. An efficient error concealment (EC) scheme is therefore essential for diminishing the impact of damaged blocks in a compressed image. In this paper, a novel adaptive EC algorithm is proposed to conceal errors in block-based image coding systems using neural network techniques in the spatial domain. The proposed algorithm uses only intra-frame information to reconstruct an image with damaged blocks: the pixels surrounding a damaged block are used to recover the errors with the neural network models. Computer simulation results show that both the visual quality and the PSNR of a reconstructed image are significantly improved by the proposed EC algorithm.

11.
Recently, medical image compression has become essential for effectively handling large amounts of medical data for storage and communication. Vector quantization (VQ) is a popular image compression technique, and the commonly used VQ model is Linde-Buzo-Gray (LBG), which constructs a locally optimal codebook to compress images. Codebook construction is treated as an optimization problem, and a bio-inspired algorithm is employed to solve it. This article proposes a VQ codebook construction approach called the L2-LBG method, utilizing the Lion optimization algorithm (LOA) and the Lempel-Ziv-Markov chain algorithm (LZMA). Once LOA has constructed the codebook, LZMA is applied to compress the index table and further increase the compression performance. A set of experiments was carried out on benchmark medical images, with a comparative analysis against Cuckoo Search-based LBG (CS-LBG), Firefly-based LBG (FF-LBG), and JPEG2000. Compression efficiency was validated in terms of compression ratio (CR), compression factor (CF), bit rate, and peak signal-to-noise ratio (PSNR). The proposed L2-LBG method obtained a higher CR of 0.3425375 and a PSNR of 52.62459 compared to the CS-LBG, FF-LBG, and JPEG2000 methods. The experimental values reveal that L2-LBG yields effective compression performance with a better-quality reconstructed image.
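For reference, a sketch of the baseline LBG (splitting-based) codebook construction that L2-LBG replaces with Lion optimization; the LOA and LZMA steps are not reproduced.

```python
import numpy as np

def lbg(train, k, iters=20, eps=1e-3, seed=0):
    """Plain LBG codebook construction by codeword splitting.

    Assumes k is a power of two (each pass doubles the codebook size).
    """
    rng = np.random.default_rng(seed)
    codebook = train.mean(axis=0, keepdims=True)     # start from global centroid
    while len(codebook) < k:
        # split every codeword into a perturbed pair
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):                       # Lloyd refinement
            d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            idx = d.argmin(axis=1)
            for j in range(len(codebook)):
                m = train[idx == j]
                # reseed empty cells from a random training vector
                codebook[j] = m.mean(axis=0) if len(m) else \
                              train[rng.integers(len(train))]
    return codebook
```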

12.
Vector quantization (VQ), a lossy image compression technique, is widely used in many applications due to its simple architecture, fast decoding, and high compression rate. Traditionally, VQ applies the full-search algorithm to find the codeword that best matches each image vector in the encoding procedure. However, matching in this manner consumes a great deal of computation time and places a heavy burden on the VQ method. Torres and Huguet therefore proposed a double test algorithm to improve matching efficiency. However, their scheme does not include an initialization strategy for choosing the initially searched codeword for each image vector, so matching efficiency may be affected significantly. To overcome this drawback, we propose an improved double test scheme with a fine initialization as well as a suitable search order. Our experimental results indicate that the computation time of the double test algorithm can be significantly reduced by the proposed method, which is also more flexible than existing schemes.

13.
Based on a comprehensive analysis of current image compression algorithms, a new image compression scheme based on hierarchical variable-block-size decomposition is proposed. JPEG, JPEG2000, and fractal coding, the three most popular still-image compression algorithms, exhibit the same performance trend when compressing different images at the same ratio: the more visually complex the image, the lower the quality of the reconstructed image. Extensive experiments show that the compression performance of all three algorithms is clearly related to a single indicator, the image activity measure (IAM). Since different regions of an image differ in complexity, IAM and similarity are used as performance indicators, and particle swarm optimization (PSO) is used to find an optimal approximate image, realizing a stratified variable-block-size decomposition (SVBD) that groups blocks of similar complexity into the same class. This decomposition matches the way humans perceive image content and creates favourable conditions for improving compression performance.
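A sketch of one common definition of the image activity measure, the mean absolute difference between adjacent pixels; the paper's exact normalization, and its PSO-driven decomposition, are assumptions not reproduced here.

```python
import numpy as np

def iam(img):
    """Image activity measure: mean absolute difference between adjacent
    pixels, horizontally and vertically (one common definition of IAM)."""
    img = img.astype(float)
    h_act = np.abs(np.diff(img, axis=1)).sum()   # horizontal activity
    v_act = np.abs(np.diff(img, axis=0)).sum()   # vertical activity
    return (h_act + v_act) / img.size
```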

14.
Vector quantization (VQ) for image compression spends considerable time finding the closest codevector in the encoding process. In this paper, a fast search algorithm is proposed for projection pyramid vector quantization, using a lighter, modified distortion measure based on the Hadamard transform of the vector. The algorithm uses projection pyramids of the vectors and codevectors after applying the Hadamard transform, together with an elimination criterion based on deviation characteristic values in the Hadamard transform domain, to eliminate unlikely codevectors. Experimental results on image block data confirm the effectiveness of the proposed algorithm, which delivers the same image quality as the full-search algorithm.
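A generic sketch of transform-domain codeword elimination: because the orthonormal Hadamard transform preserves L2 distances, the first transform coefficient alone yields a lower bound on the full distortion. The paper's projection pyramids and deviation-characteristic criterion are not reproduced.

```python
import numpy as np
from scipy.linalg import hadamard

def ht_fast_search(vec, codebook_ht, H):
    """Nearest-codeword search in the Hadamard domain.

    vec:         input vector, length n (n a power of two)
    codebook_ht: codewords already transformed, (K, n)
    H:           Hadamard matrix from scipy.linalg.hadamard(n)
    """
    n = len(vec)
    v = H @ vec / np.sqrt(n)                     # orthonormal Hadamard transform
    best, best_d = -1, np.inf
    for i, c in enumerate(codebook_ht):
        lower = (v[0] - c[0]) ** 2               # lower bound from 1st coefficient
        if lower >= best_d:                      # eliminate without full distance
            continue
        d = ((v - c) ** 2).sum()
        if d < best_d:
            best, best_d = i, d
    return best

# precomputation: codebook_ht = (hadamard(n) @ cb.T / np.sqrt(n)).T
```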

15.
An image compression technique is proposed that aims to achieve both robustness to the transmission bit errors common in wireless image communication and sufficient visual quality in the reconstructed images. Error robustness is achieved with biorthogonal wavelet subband image coding and multistage gain-shape vector quantization (MS-GS VQ), which uses three stages of signal decomposition to reduce the effect of transmission bit errors by distributing image information among many blocks. Good visual quality of the reconstructed images is obtained by applying genetic algorithms (GAs) to codebook generation, producing reconstruction capabilities superior to conventional techniques; the proposed decomposition scheme also supports the use of GAs because decomposition reduces the problem size. Simulations evaluating the performance of the proposed coding scheme under transmission bit errors and reconstruction distortion show that MS-GS VQ with good codebooks designed by GAs provides not only better robustness to transmission bit errors but also a higher peak signal-to-noise ratio, even under high bit-error-rate conditions.

16.
The PDVQ image coding system first classifies the codebook by directionality, sorts the codewords in each directional sub-codebook in ascending order of their sums, and partitions the codebook into blocks according to the EBNNS algorithm. During encoding, PDVQ coding is first applied according to the correlation of the input image block; the directionality of the input block is then analysed to select the corresponding classified sub-codebook; within that sub-codebook the codeword search range is determined by the sum of the input block; and finally, the best-matching codeword is searched for within that range. Simulation results show that the system combines the advantages of three algorithms: dynamic image block partitioning (PDVQ), direction-classified coding, and equal-sum block extended nearest-neighbour search (EBNNS). While preserving reconstructed image quality, it shortens encoding time and increases the compression ratio.
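A sketch of the sum-ordered search stage: codewords are pre-sorted by their element sums, and the search starts at the entry whose sum is closest to the input block's. The fixed window is a simplification; PDVQ's directional classification and the exact EBNNS expansion rule are not shown.

```python
import numpy as np

def sum_ordered_search(vec, codebook, order, sums, window=8):
    """Equal-sum nearest-neighbour search sketch.

    order: codeword indices sorted by codeword sum
    sums:  the sorted sums themselves
    window=8 is a hypothetical setting.
    """
    s = vec.sum()
    pos = np.searchsorted(sums, s)               # closest-sum entry point
    lo, hi = max(0, pos - window), min(len(order), pos + window)
    best, best_d = -1, np.inf
    for j in range(lo, hi):                      # search only near the entry point
        d = ((codebook[order[j]] - vec) ** 2).sum()
        if d < best_d:
            best, best_d = order[j], d
    return best

# precomputation:
#   raw = codebook.sum(axis=1); order = np.argsort(raw); sums = raw[order]
```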

17.
This paper proposes a new digital watermarking algorithm based on vector quantization. Unlike traditional watermarking algorithms based on the DCT (Discrete Cosine Transform), DFT (Discrete Fourier Transform), or DWT (Discrete Wavelet Transform), the algorithm exploits codebook partitioning and the properties of VQ indices, embedding watermarks at different stages of vector quantization to protect the copyright of the original image; watermark detection does not require the original image. Experimental results show that the embedded watermark has good imperceptibility and is also robust to JPEG compression, VQ compression, rotation, cropping, and other spatial-domain operations.

18.
A fast vector quantization coding method based on wavelet tree structures
A vector quantization image coding method is proposed that exploits properties of the human visual system and applies wavelet tree structures for fast image coding, referred to as tree-structured fast vector quantization coding. After analysing the vector quantization characteristics of this method, a statistical method for codebook generation is designed, and a fast algorithm for vector quantization encoding is proposed.

19.
In this paper, we propose adaptive and flexible quantization and compression algorithms for 3-D point data using vector quantization (VQ) and rate-distortion (R-D) optimization. The point data comprise the position and sphere radius of the QSplat representation. The positions of child spheres are first transformed to a local coordinate system determined by the parent-children relationship; this local coordinate transform makes the positions more compactly distributed in 3-D space, facilitating an effective application of VQ. We also develop a constrained encoding method for the radius data that provides hole-free surface rendering at the decoder side. Furthermore, an R-D optimized compression algorithm is proposed to allocate an optimal bitrate to each sphere. Experimental results show that the proposed algorithm can effectively compress the original 3-D point geometry at various bitrates.

20.
Typical competitive learning algorithms are studied and analysed, and a probability-sensitive competitive learning (PSCL) algorithm based on the winning probability of each neuron is proposed. Unlike traditional competitive learning algorithms, in which only the single winning neuron is updated, the PSCL algorithm adjusts the distortion distance according to each neuron's winning probability so that every neuron receives a different degree of learning, effectively overcoming the neuron under-utilization problem.
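A sketch of a frequency-sensitive update in the spirit of PSCL, where each neuron's competition distance is scaled by its empirical winning frequency so that under-used neurons win more often; the paper's exact adjustment rule is an assumption here.

```python
import numpy as np

def pscl_train(train, k, lr=0.05, epochs=5, seed=0):
    """Frequency-sensitive competitive learning sketch (PSCL-style).

    The competition distance of each neuron is scaled by its normalized
    win count, so neurons that rarely win get an advantage and the
    codebook avoids dead (under-utilized) units.
    """
    rng = np.random.default_rng(seed)
    w = train[rng.choice(len(train), k, replace=False)].astype(float)
    wins = np.ones(k)                            # win counts (Laplace-style init)
    for _ in range(epochs):
        for x in train[rng.permutation(len(train))]:
            d = ((w - x) ** 2).sum(axis=1)
            j = np.argmin(d * wins / wins.sum()) # probability-weighted distance
            wins[j] += 1
            w[j] += lr * (x - w[j])              # move the winner toward x
    return w
```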
