Similar Documents
20 similar documents found (search time: 15 ms)
1.
Image Compression Based on Multistage Vector Quantization (cited by 1: 0 self-citations, 1 other)
This paper presents a new three-stage vector quantization system for image compression. It combines several simple schemes: an error-block classifier, search-order coding (SOC), and index-vector coding. The error-block classifier preserves edge blocks and discards psychovisually redundant texture blocks in the last stage. Index-vector coding encodes the combination of the quantization indexes of the last two stages, while SOC encodes the quantization index of the first stage. The proposed system achieves better compression performance than conventional multistage vector quantization systems.
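The staged-residual idea behind such systems can be sketched in a few lines; the toy codebooks and two stages below are illustrative stand-ins, not the paper's trained three-stage codebooks:

```python
# Illustrative two-stage vector quantization: the second stage quantizes the
# residual left by the first, so reconstruction error shrinks per stage.
# Codebooks here are tiny hand-picked examples, not trained ones.

def nearest(codebook, vec):
    """Index of the codeword closest to vec in squared Euclidean distance."""
    return min(range(len(codebook)),
               key=lambda i: sum((c - v) ** 2 for c, v in zip(codebook[i], vec)))

def msvq_encode(vec, stage1, stage2):
    """Return (index1, index2): stage 1 quantizes vec, stage 2 its residual."""
    i1 = nearest(stage1, vec)
    residual = [v - c for v, c in zip(vec, stage1[i1])]
    i2 = nearest(stage2, residual)
    return i1, i2

def msvq_decode(i1, i2, stage1, stage2):
    """Reconstruction is the sum of the selected codewords of both stages."""
    return [a + b for a, b in zip(stage1[i1], stage2[i2])]

stage1 = [[0.0, 0.0], [10.0, 10.0]]              # coarse codebook
stage2 = [[0.0, 0.0], [1.0, -1.0], [-1.0, 1.0]]  # residual codebook

i1, i2 = msvq_encode([11.0, 9.2], stage1, stage2)
approx = msvq_decode(i1, i2, stage1, stage2)
```

Only the two indexes need to be transmitted; the decoder reconstructs [11.0, 9.0] from them, much closer to the input than the stage-1 codeword alone.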

2.
Recently, deep learning has been introduced to the field of image compression. In this paper, we present a hybrid coding framework that combines entropy coding, deep learning, and a traditional coding framework. In the base layer, convolutional neural networks learn the latent representation and an importance map of the original image; the importance map then guides the bit allocation of the latent representation. A context model is also developed to assist entropy coding after the masked quantization, and another network produces a coarse reconstruction of the image in the base layer. The residual between the input and the coarse reconstruction is then encoded by the traditional BPG codec as the enhancement layer of the bit stream. Thanks to the use of the traditional codec, only one basic model needs to be trained, and the proposed scheme can compress images at different bit rates. Experimental results on the Kodak, Urban100, and BSD100 datasets show that the proposed scheme outperforms many deep-learning-based methods and traditional codecs, including BPG, in the MS-SSIM metric across a wide range of bit rates. It also exceeds some recent hybrid schemes in the RGB444 domain on the Kodak dataset in both PSNR and MS-SSIM.

3.
The growing importance of skeleton information in surveillance big-data feature analysis demands significant storage space, and developing an effective and efficient storage solution remains challenging. In this paper, we propose a new framework for the lossless compression of skeleton sequences that exploits both spatial and temporal prediction and coding redundancies. First, we propose a set of skeleton prediction modes: spatial-differential, motion-vector-based, relative-motion-vector-based, and trajectory-based skeleton prediction. These modes effectively handle the spatial and temporal redundancies present in skeleton sequences. Second, we further improve performance by introducing a novel approach to handling coding redundancy. Because the compression is lossless, the proposed scheme significantly reduces the size of skeleton data while keeping skeleton quality exactly the same. Experiments are conducted on standard surveillance and Posetrack action datasets containing challenging test skeleton sequences. Our method clearly outperforms traditional direct coding, providing average bit savings of 73% and 66% on the two datasets.
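A minimal sketch of the temporal-prediction idea, assuming the simplest of the listed modes (frame-to-frame joint deltas); the paper's other modes and its coding-redundancy stage are not reproduced:

```python
# Lossless skeleton coding sketch: the first frame is stored raw, later frames
# as joint-wise deltas from the previous frame. Decoding inverts the deltas
# exactly, so skeletons are bit-identical after the round trip.

def encode_skeleton(frames):
    """frames: list of frames, each a list of (x, y) joint coordinates."""
    out = [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        out.append([(x - px, y - py) for (x, y), (px, py) in zip(cur, prev)])
    return out

def decode_skeleton(coded):
    frames = [coded[0]]
    for deltas in coded[1:]:
        prev = frames[-1]
        frames.append([(px + dx, py + dy)
                       for (px, py), (dx, dy) in zip(prev, deltas)])
    return frames

frames = [[(10, 20), (12, 25)], [(11, 20), (13, 26)], [(11, 21), (13, 27)]]
coded = encode_skeleton(frames)
restored = decode_skeleton(coded)
```

The deltas are small integers clustered around zero, which is what makes the subsequent entropy coding effective.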

4.
A Fast CU Structure Selection Algorithm for HEVC Intra Prediction Coding (cited by 2: 2 self-citations, 0 others)
To improve the efficiency of intra prediction coding in High Efficiency Video Coding (HEVC), a fast coding unit (CU) structure selection algorithm for HEVC intra prediction is proposed. The algorithm optimizes the traversal of the coding tree unit (CTU) quadtree by designing two fast decision algorithms for the optimal CU structure, which start traversal from the maximum and minimum partition depths respectively and, before each traversal step, check whether the traversal can be terminated early. When solving each CTU, the better of the two algorithms is selected according to the CTU's texture complexity and the current coding state. The proposed fast selection algorithm was implemented on top of HM 15.0. Experimental results show that it reduces encoding time by 31.14% while preserving coding performance, improving the coding efficiency of HEVC.

5.
Growing test-data volume and excessive test application time are two serious concerns in scan-based testing for SoCs. This paper presents an efficient test-independent compression technique based on block merging and eight coding (BM-8C) to reduce both. Compression is achieved by merging consecutive compatible blocks and encoding the merged blocks with exactly eight codewords. The proposed scheme compresses pre-computed test data without requiring any structural information about the circuit under test, so it is applicable to IP cores in SoCs. Experimental results demonstrate that the BM-8C technique achieves an average compression ratio of up to 68.14% with significantly lower test application time.
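The block-merging step can be illustrated as follows, assuming test data with 'x' don't-care bits; the eight-codeword encoding itself is not reproduced:

```python
# Sketch of block merging: consecutive test-data blocks whose bits agree
# wherever both are specified ('x' = don't-care) can be merged into one block
# plus a merge count, which the codewords then encode.

def compatible(a, b):
    """Blocks merge when no position has conflicting specified bits."""
    return all(p == 'x' or q == 'x' or p == q for p, q in zip(a, b))

def merge(a, b):
    return ''.join(q if p == 'x' else p for p, q in zip(a, b))

def merge_run(blocks):
    """Greedily merge a prefix of compatible blocks; return (merged, count)."""
    merged, count = blocks[0], 1
    for blk in blocks[1:]:
        if not compatible(merged, blk):
            break
        merged = merge(merged, blk)
        count += 1
    return merged, count

blocks = ['1x0x', 'xx01', '1001', '0110']
merged, count = merge_run(blocks)
```

Here the first three blocks collapse into the single block '1001' with count 3; the fourth conflicts in its first bit and starts a new run.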

6.
To improve the performance of fractal video coding, we explore a novel fractal video codec with automatic region-based functionality. To increase decoded image quality, intra-frame coding, a deblocking loop filter, and sub-pixel block matching are applied in the codec, and an efficient search algorithm increases the compression ratio and encoding speed. Automatic region-based fractal coding greatly reduces the coded stream. Experimental results indicate that the proposed algorithm is more robust and requires much less encoding time and bitrate than the conventional CPM/NCIM method and other related work, while maintaining decompressed image quality. We compare the proposed algorithm with the three algorithms of Refs. [24], [25], [26], measuring all four against H.264. The bitrate of the proposed algorithm is 0.11% lower than H.264, whereas the other algorithms are 4.29%, 6.85%, and 11.62% higher, respectively; the average PSNR degradations of the four algorithms are 0.71 dB, 0.48 dB, 0.48 dB, and 0.75 dB. Meanwhile, compression time is reduced greatly, by about 79.19% on average. On average, the proposed automatic region-based fractal video coding system saves 48.97% compression time and 52.02% bitrate compared with H.264, with some image-quality degradation; since all results remain above 32 dB, the human eye is insensitive to the differences.

7.
Fractal Coding of Image Sequences (cited by 1: 1 self-citation, 0 others)
As an emerging image coding method, the fractal approach offers unique advantages such as a high compression ratio. However, it also has a critical drawback, namely very slow encoding, and research on fractal coding of image sequences is therefore thriving at present. This article introduces the two main classes of fractal coding methods for image sequences and discusses their prospects.

8.
A compression-decompression scheme based on Huffman codes, Modified Selective Huffman (MS-Huffman), is proposed in this paper. The scheme optimizes the parameters that influence test cost: compression ratio, on-chip decoder area overhead, and overall test application time. Theoretically, it is proved that the proposed scheme achieves better test data compression than recently proposed encoding schemes for any test set. A large number of experimental results clearly demonstrate that the scheme improves test data compression and reduces overall test application time and on-chip area overhead compared with other Huffman-code-based schemes.
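As background for the Huffman-based schemes in this list, a generic Huffman code construction (the standard algorithm, not the MS-Huffman selection rule) can be written with the heap from Python's standard library:

```python
# Standard Huffman code construction: repeatedly merge the two lightest
# subtrees; each merge prefixes '0'/'1' onto the codes in the two halves.
import heapq

def huffman_codes(freq):
    """Map each symbol to its prefix-free bit string, given {symbol: count}."""
    if len(freq) == 1:
        return {next(iter(freq)): '0'}
    # Heap entries: (weight, tiebreak, {symbol: partial code}).
    heap = [(w, i, {s: ''}) for i, (s, w) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in c1.items()}
        merged.update({s: '1' + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

freq = {'00': 8, '01': 4, '10': 2, '11': 1}
codes = huffman_codes(freq)
```

Frequent patterns get short codes (here the most frequent block gets a 1-bit code), and the code lengths satisfy the Kraft equality, so the code is prefix-free and complete.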

9.
In this article, a run-length-encoding-based test data compression technique is presented. The scheme performs Huffman coding on different parts of the test data file separately. Up to a 6% improvement in compression ratio and a 29% improvement in test application time can be achieved while sacrificing only about 6.5% of the decoder area. We have compared our results with other contemporary work reported in the literature; in most cases, our scheme produces a better compression ratio with much smaller area requirements.
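The run-length stage such schemes start from is simple to sketch; the per-part Huffman coding of the runs is omitted:

```python
# Run-length encoding sketch: test data becomes (bit, run-length) pairs,
# which the per-part Huffman coders would then compress.

def rle_encode(bits):
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return [(b, n) for b, n in runs]

def rle_decode(runs):
    return ''.join(b * n for b, n in runs)

bits = '0000011110000000'
runs = rle_encode(bits)
```

Test sets are dominated by long runs of identical (or don't-care-filled) bits, so the run representation is far shorter than the raw bit stream.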

10.
Based on the classical fractal video compression method, an improved object-based stereo video compression scheme with shape-adaptive DCT is proposed in this paper. First, we use a more effective macroblock partition scheme instead of the classical quadtree partition, thereby simplifying the block search. A stereo fractal video coding method is proposed that matches each macroblock against two reference frames, in the left and right views, increasing the compression ratio and reducing the bit rate of transmitted stereo data. The stereo codec combines motion compensation prediction (MCP) and disparity compensation prediction (DCP). Fractal coding is adopted, and each object is encoded independently using a prior video-segmentation alpha plane defined exactly as in MPEG-4. Test results on natural monocular and stereo video sequences show promising performance at low-bit-rate coding. We believe this will be a powerful and efficient technique for object-based monocular and stereo video coding.

11.
Deterministic Built-in Pattern Generation for Sequential Circuits (cited by 1: 0 self-citations, 1 other)
We present a new pattern generation approach for deterministic built-in self testing (BIST) of sequential circuits. Our approach is based on precomputed test sequences, and is especially suited to sequential circuits that contain a large number of flip-flops but relatively few controllable primary inputs. Such circuits, often encountered as embedded cores and as filters for digital signal processing, are difficult to test and require long test sequences. We show that statistical encoding of precomputed test sequences can be combined with low-cost pattern decoding to provide deterministic BIST with practical levels of overhead. Optimal Huffman codes and near-optimal Comma codes are especially useful for test set encoding. This approach exploits recent advances in automatic test pattern generation for sequential circuits and, unlike other BIST schemes, does not require access to a gate-level model of the circuit under test. It can be easily automated and integrated with design automation tools. Experimental results for the ISCAS 89 benchmark circuits show that the proposed method provides higher fault coverage than pseudorandom testing with shorter test application time and low to moderate hardware overhead.

12.
Wang Li, Liu Zengli. Telecommunication Engineering, 2020, 60(8): 871-875.
Fractal image compression exploits the self-similarity within an image and uses contractive affine transforms to reduce redundancy in the image data, offering a high compression ratio and simple decoding. However, fractal image coding also suffers from long encoding time and high computational complexity. To address these drawbacks, a fast fractal image coding algorithm based on a squared-weighted centroid feature is proposed. The squared-weighted centroid feature converts the global search of basic fractal image coding into a local search, limiting the search range and reducing the codebook size, which shortens encoding time to some extent when transmitting and storing large volumes of image data. The proposed algorithm is compared with the double cross-sum algorithm, the improved cross-trace algorithm, and the normalized five-point-sum algorithm. Simulation results show that, with acceptable reconstruction quality, the proposed algorithm has a large advantage in encoding time.
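One way to read the feature-guided local search is sketched below; the scalar feature used here (intensity-squared-weighted position) is an illustrative stand-in for the paper's squared-weighted centroid, not its exact formula:

```python
# Feature-guided local search sketch: each domain block gets a scalar feature;
# only the k domain blocks nearest to the range block in feature space are
# searched with the full distance, instead of the whole codebook.

def feature(block):
    """Illustrative stand-in feature: intensity-squared-weighted position."""
    w = [p * p for p in block]
    total = sum(w)
    return sum(i * wi for i, wi in enumerate(w)) / total if total else 0.0

def local_search(range_block, domain_blocks, k=2):
    """Full-distance match only among the k feature-nearest domain blocks."""
    f = feature(range_block)
    candidates = sorted(range(len(domain_blocks)),
                        key=lambda i: abs(feature(domain_blocks[i]) - f))[:k]
    return min(candidates,
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(domain_blocks[i], range_block)))

domains = [[9, 1, 1, 1], [1, 1, 1, 9], [5, 5, 5, 5], [2, 1, 1, 8]]
best = local_search([1, 2, 1, 9], domains, k=2)
```

With k much smaller than the codebook, only a small candidate set pays the cost of the full block distance, which is where the encoding-time saving comes from.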

13.
Zhan Wenfa, Liang Huaguo, Shi Feng, Huang Zhengfeng. Acta Electronica Sinica, 2009, 37(8): 1837-1841.
This paper proposes a test data compression scheme based on hybrid fixed/variable-length virtual-block run-length coding. The scheme concatenates the test vectors and divides them into blocks, first looks within each block for a one-bit (or maximal one-bit) representation, and then applies run-length coding to the remaining bits in the block that cannot be represented by one bit. This reduces the amount of data subjected to run-length coding, thereby overcoming the limitation that traditional run-length coding methods are bound by the original test data volume. Experimental results on part of the ISCAS 89 benchmark circuits show that the compression efficiency of the proposed scheme is clearly superior to similar compression methods such as the Golomb, FDR, VIHC, and v9C codes.
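The virtual-block idea can be sketched as follows: uniform blocks get a one-bit representation, and only mixed blocks are left for run-length coding (the block size and the tagging format are illustrative choices, not the paper's exact format):

```python
# Virtual-block classification sketch: the concatenated test vector is cut
# into fixed-size blocks; uniform blocks are emitted as a single bit, and only
# mixed blocks fall through to run-length coding, shrinking the RLE input.

BLOCK = 4  # illustrative block size

def classify_blocks(bits):
    coded = []
    for i in range(0, len(bits), BLOCK):
        blk = bits[i:i + BLOCK]
        if blk == blk[0] * len(blk):          # uniform: one-bit representation
            coded.append(('uniform', blk[0]))
        else:                                 # mixed: left for run-length coding
            coded.append(('mixed', blk))
    return coded

coded = classify_blocks('0000111101010000')
```

Three of the four blocks here collapse to a single bit each, so the run-length coder only ever sees the one mixed block.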

14.
In a prior work, a wavelet-based vector quantization (VQ) approach was proposed to perform lossy compression of electrocardiogram (ECG) signals. In this paper, we investigate and fix its coding inefficiency problem in lossless compression and extend it to allow both lossy and lossless compression in a unified coding framework. The well-known 9/7 filters and 5/3 integer filters are used to implement the wavelet transform (WT) for lossy and lossless compression, respectively. The codebook updating mechanism, originally designed for lossy compression, is modified to allow lossless compression as well. In addition, a new and cost-effective coding strategy is proposed to enhance the coding efficiency of set partitioning in hierarchical tree (SPIHT) at the less significant bit representation of a WT coefficient. ECG records from the MIT/BIH Arrhythmia and European ST-T Databases are selected as test data. In terms of the coding efficiency for lossless compression, experimental results show that the proposed codec improves the direct SPIHT approach and the prior work by about 33% and 26%, respectively.
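The reversible 5/3 integer filter mentioned above can be sketched as one level of 1-D lifting; the floor-based steps are integer-to-integer and exactly invertible, which is what makes the lossless path possible (the edge handling here is a simple clamp, a stand-in for proper symmetric extension):

```python
# One level of the 5/3 integer wavelet transform via lifting: a predict step
# on the odd samples, then an update step on the even samples. Both use floor
# division, so the inverse undoes them exactly on integers.

def fwd53(x):
    """Split x (even length) into (approximation s, detail d)."""
    even, odd = x[::2], x[1::2]
    n = len(odd)
    # Predict: detail = odd - floor((left + right) / 2), clamped at the edge.
    d = [odd[i] - (even[i] + even[min(i + 1, len(even) - 1)]) // 2
         for i in range(n)]
    # Update: approximation = even + floor((d_left + d_right + 2) / 4).
    s = [even[i] + (d[max(i - 1, 0)] + d[min(i, n - 1)] + 2) // 4
         for i in range(len(even))]
    return s, d

def inv53(s, d):
    """Exact inverse: undo the update, then the predict, then interleave."""
    n = len(d)
    even = [s[i] - (d[max(i - 1, 0)] + d[min(i, n - 1)] + 2) // 4
            for i in range(len(s))]
    odd = [d[i] + (even[i] + even[min(i + 1, len(even) - 1)]) // 2
           for i in range(n)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

sig = [3, 7, 1, 8, 2, 9, 4, 4]
s, d = fwd53(sig)
restored = inv53(s, d)
```

The detail coefficients are small for smooth signals, and the round trip is bit-exact, the two properties a lossless wavelet codec needs.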

15.
The embedded zerotree wavelet (EZW) coding algorithm is a very effective technique for low-bitrate still-image compression. In this paper, an improved EZW algorithm is proposed to achieve high compression performance in terms of PSNR and bitrate for lossy and lossless image compression, respectively. To reduce the number of zerotrees and the scanning and symbol redundancy of the existing EZW, the proposed method uses a new significant-symbol map that is represented in a more efficient way. Furthermore, we develop a new EZW-based scheme for scalable colour image coding that efficiently exploits the interdependency of the colour planes. Numerical results demonstrate a significant superiority of our scheme over the conventional EZW and other improved EZW schemes with respect to both objective and subjective criteria, for lossy and lossless compression of greyscale and colour images.

16.
Xu Mengxia. Acta Electronica Sinica, 1993, (7): 104-106, 109.
The predictor used in coding interlaced television images, x̂0 = (1/2)x1′ + (1/4)(x3′ + x4′), is equally applicable to information-preserving (lossless) compression of raw multispectral scanner (MSS) satellite remote-sensing image data (progressive scan). Using this predictor together with simple entropy coding (the A3 code), information-preserving encoding/decoding of typical MSS data was implemented in software on two types of computers, achieving a compression ratio slightly greater than 2. This article is a brief report on the project and its follow-up work; the proposed algorithm is similar to the relevant parts of the JPEG recommendation.
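A lossless DPCM round trip with this predictor can be sketched as follows; the neighbour layout assumed here (x1′ = left, x3′ = above, x4′ = above-right) is an illustration, since the abstract does not spell it out:

```python
# Lossless DPCM sketch with the predictor x0_hat = x1/2 + (x3 + x4)/4.
# Encoder and decoder use the same integer-rounded prediction, so
# transmitting the residuals reconstructs the samples exactly.

def predict(left, above, above_right):
    return (2 * left + above + above_right) // 4

def encode_row(row, prev_row):
    """Residuals for one scan line given the already-decoded previous line."""
    res = []
    for i, x in enumerate(row):
        left = row[i - 1] if i else prev_row[0]
        ar = prev_row[min(i + 1, len(prev_row) - 1)]
        res.append(x - predict(left, prev_row[i], ar))
    return res

def decode_row(res, prev_row):
    row = []
    for i, r in enumerate(res):
        left = row[i - 1] if i else prev_row[0]
        ar = prev_row[min(i + 1, len(prev_row) - 1)]
        row.append(r + predict(left, prev_row[i], ar))
    return row

prev_row = [100, 102, 104, 106]
row = [101, 103, 105, 106]
res = encode_row(row, prev_row)
restored = decode_row(res, prev_row)
```

The residuals cluster near zero, which is what the simple entropy coder then exploits; the round trip is exact, hence "information-preserving".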

17.
This paper proposes a class of test compression methods for IP (intellectual property) core testing. The proposed compression requires only test cubes for the IP cores and does not require structural information about them. It uses a reconfigurable network together with two classes of coding, namely fixing-flipping coding and fixing-shifting-flipping coding. The proposed compression is evaluated in terms of compression rate and hardware overhead. For three out of four large ISCAS89 benchmark circuits, the compression rates of the proposed method are better than those of four existing test compression techniques.

18.
Wireless multimedia sensor networks (WMSNs) are potentially applicable to several emerging applications. The resources, i.e., power and bandwidth, available to visual sensors in a WMSN are, however, very limited. Hence, it is important but challenging to achieve efficient resource allocation and optimal video data compression while maximizing the overall network lifetime. In this paper, a power-rate-distortion (PRD) optimized resource-scalable low-complexity multiview video encoding scheme is proposed. In our video encoder, both temporal and interview information can be exploited based on comparisons of extracted media hashes, without performing motion and disparity estimation, which are known to be time-consuming. We present a PRD model to characterize the relationship between the available resources and the RD performance of our encoder. More specifically, an RD function in terms of the percentages of blocks in different coding modes and the target bit rate under the available resource constraints is derived for optimal coding-mode decision. The major goal here is to design a PRD model to optimize a "motion estimation-free" low-complexity video encoder for applications with resource-limited devices, rather than a general-purpose video codec that competes in compression performance with current standards (e.g., H.264/AVC). Analytic results verify the accuracy of our PRD model, which can provide a theoretical guideline for performance optimization under limited resource constraints. Simulation results on joint RD performance and power consumption (measured in terms of encoding time) demonstrate the applicability of our video coding scheme to WMSNs.

19.
The new-generation High Efficiency Video Coding (HEVC) standard adopts coding tree unit (CTU) quadtree partitioning and as many as ten inter prediction unit (PU) modes, which effectively improve compression efficiency but also greatly increase encoding complexity. To reduce the number of coding unit (CU) partitions and candidate inter PU modes, a fast inter mode decision algorithm based on spatiotemporal correlation is proposed. First, the depth range of the current CTU is predicted effectively by exploiting the spatiotemporal correlation of depth information among the current CTU, the co-located CTU in the reference frame, and neighbouring CTUs in the current frame. Then, by analyzing the spatial correlation of the best PU mode between the current CU and its parent CU, and using the rate-distortion costs of the PU modes already evaluated for the current CU, redundant inter PU modes of the current CU are skipped. Experimental results show that, compared with the HEVC test model (HM), the proposed algorithm reduces encoding time by about 52% under different coding configurations while maintaining good rate-distortion performance; compared with HM with its fast-algorithm options enabled, it further reduces encoding time by about 30%.
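The depth-range prediction step can be sketched as follows; the plus/minus-one margin and the clamp are illustrative choices, not the paper's exact rule:

```python
# Depth-range prediction sketch: the depth search for the current CTU is
# clamped to a window derived from the co-located CTU in the reference frame
# and already-coded spatial neighbours, skipping unlikely depths entirely.

MAX_DEPTH = 3  # HEVC CTU quadtree depths 0..3

def predicted_depth_range(colocated_depth, neighbour_depths):
    """Window = [min observed depth - 1, max observed depth + 1], clamped."""
    depths = [colocated_depth] + neighbour_depths
    lo = max(0, min(depths) - 1)
    hi = min(MAX_DEPTH, max(depths) + 1)
    return lo, hi

lo, hi = predicted_depth_range(1, [2, 1, 1])
```

When the neighbourhood is uniformly deep, the window excludes the shallow depths (for example, co-located depth 3 with neighbours [3, 3, 2] yields the range (1, 3)), and those CU partitions are never evaluated, which is where the time saving comes from.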

20.
This paper evaluates the compression performance and characteristics of two wavelet-based coding schemes for electrocardiogram (ECG) signals suitable for real-time telemedical applications. The two proposed methods, the optimal zonal wavelet coding (OZWC) method and the wavelet transform higher-order-statistics-based coding (WHOSC) method, are used to assess ECG compression issues. The WHOSC method employs higher-order statistics (HOS) and uses multirate processing with the autoregressive HOS model technique to make the coding scheme increasingly robust. The OZWC algorithm is based on the optimal wavelet-based zonal coding method developed for the class of discrete "Lipschitzian" signals. Both methodologies were evaluated using the normalized rms error (NRMSE), the average compression ratio (CR), and bits-per-sample criteria, applied to abnormal clinical ECG data samples selected from the MIT-BIH database and the Creighton University Cardiac Center database. Simulation results illustrate that both methods can enhance medical-data compression performance in a hybrid mobile telemedical system that integrates these algorithmic approaches for real-time ECG transmission scenarios with high CRs and low NRMSE ratios, especially in low-bandwidth mobile systems.
