Similar Documents
20 similar documents found (search time: 843 ms)
1.
An SOC Test Data Compression Method Using Hybrid Run-Length Coding   Cited by: 10 (self-citations: 1, others: 9)
方建平, 郝跃, 刘红侠, 李康. 《电子学报》(Acta Electronica Sinica), 2005, 33(11): 1973-1977
This paper proposes an effective run-length-based test data compression/decompression algorithm, hybrid run-length coding, whose distinguishing features are a high compression ratio and a small hardware overhead for the corresponding decoding circuit. In addition, since the compression ratio of the coding algorithm depends strongly on how the don't-care bits in the test data are filled, an iterative sorting-and-filling algorithm for don't-care bits is also proposed to further improve the coding efficiency. Theoretical analysis and experimental results on a subset of the ISCAS'89 benchmark circuits demonstrate the effectiveness of the hybrid run-length coding and the iterative sorting-and-filling algorithm.

2.
An Efficient Test Data Compression Technique Based on Codes   Cited by: 1 (self-citations: 1, others: 0)
This paper proposes a new test data compression/decompression algorithm, called hybrid run-length coding, which jointly considers the compression ratio of the test data, the overhead of the corresponding hardware decoding circuit, and the overall test time. The algorithm uses a variable-to-variable coding style, mapping runs of different lengths to codewords of different lengths, which yields a good compression ratio. To improve compression further, a don't-care bit filling method and a test vector sorting algorithm are also proposed to preprocess the test data before encoding. Moreover, the design of the hardware decoding circuit was taken into account throughout the development of the hybrid run-length coding, so the hardware overhead is kept as small as possible and the overall test time is reduced. Finally, experimental results on the ISCAS'89 benchmark circuits demonstrate the effectiveness of the proposed algorithm.

3.
This paper proposes a new test data compression/decompression algorithm, called hybrid run-length coding, which jointly considers the compression ratio of the test data, the overhead of the corresponding hardware decoding circuit, and the overall test time. The algorithm uses a variable-to-variable coding style, mapping runs of different lengths to codewords of different lengths, which yields a good compression ratio. To improve compression further, a don't-care bit filling method and a test vector sorting algorithm are also proposed to preprocess the test data before encoding. Moreover, the design of the hardware decoding circuit was taken into account throughout the development of the hybrid run-length coding, so the hardware overhead is kept as small as possible and the overall test time is reduced. Finally, experimental results on the ISCAS'89 benchmark circuits demonstrate the effectiveness of the proposed algorithm.
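Both abstracts above describe the same variable-to-variable mapping of run lengths to codewords. As a rough illustration of that idea (not the paper's actual hybrid code), here is a minimal Python sketch in which runs of 0s terminated by a 1 are looked up in an invented prefix-free codeword table:

```python
# Illustrative variable-to-variable run-length encoder: runs of 0s
# terminated by a 1 are mapped to codewords whose length varies with
# the run length. The codeword table below is invented for the sketch;
# the paper's hybrid code assigns codewords differently.

def runs_of_zeros(bits):
    """Split a bit string into runs of 0s, each terminated by a '1'."""
    run = 0
    for b in bits:
        if b == '0':
            run += 1
        else:
            yield run
            run = 0
    if run:                 # trailing run with no terminating '1'
        yield run

def encode(bits, table):
    return ''.join(table[r] for r in runs_of_zeros(bits))

# Toy prefix-free table: shorter runs get shorter codewords.
TABLE = {0: '00', 1: '01', 2: '100', 3: '101', 4: '110', 5: '1110', 6: '1111'}

if __name__ == '__main__':
    stim = '0001001000001'
    print(encode(stim, TABLE))   # runs 3, 2, 5 -> '101' + '100' + '1110'
```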

4.
Test vector compression has become a key technique for reducing IC test time and cost as the test data volume of systems-on-chip (SoC) has exploded in recent years. To effectively reduce the bandwidth requirement between the automatic test equipment (ATE) and the circuit under test (CUT), a novel VSPTIDR (variable shifting prefix-tail identifier reverse) code for test stimulus data compression is designed. The encoding scheme is defined and analyzed in detail, and the decoder is presented and discussed. When the probability of 0 bits in the test set is greater than 0.92, the compression ratio of the VSPTIDR code is better than that of the frequency-directed run-length (FDR) code, as shown by both theoretical analysis and experiments. The on-chip area overhead of the VSPTIDR decoder is about 15.75% less than that of the FDR decoder.
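FDR coding, the baseline that VSPTIDR is compared against, is well documented in the literature. A minimal Python sketch of a Chandra/Touba-style FDR encoder (runs of 0s terminated by a 1; group k covers run lengths 2^k-2 through 2^(k+1)-3):

```python
# Frequency-directed run-length (FDR) codewords: a prefix of (k-1) ones
# and a 0, followed by a k-bit tail giving the position within group k.

def fdr_codeword(run_length):
    k = 1
    while run_length > 2 ** (k + 1) - 3:    # find the group of this run
        k += 1
    prefix = '1' * (k - 1) + '0'
    offset = run_length - (2 ** k - 2)      # position inside the group
    tail = format(offset, '0{}b'.format(k))
    return prefix + tail

if __name__ == '__main__':
    for r in (0, 1, 2, 5, 6, 13):
        print(r, fdr_codeword(r))
    # 0 -> 00, 1 -> 01, 2 -> 1000, 5 -> 1011, 6 -> 110000, 13 -> 110111
```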

5.
A Don't-Care Bit Filling Algorithm for Dual-Run-Length Coding   Cited by: 2 (self-citations: 2, others: 0)
Dual-run-length coding is an important method for compressing integrated circuit test data; it consists of two steps, don't-care bit filling and run-length encoding. The existing literature focuses mostly on the second step, proposing a variety of encoding algorithms, but pays insufficient attention to the don't-care filling of the first step, sacrificing some of the potential compression. This paper first analyzes the importance of don't-care bit filling for the test data compression ratio and then proposes a novel don't-care filling algorithm for dual-run-length coding that is applicable to different encoding methods and yields a higher compression ratio. The algorithm can be combined with a variety of dual-run-length coding schemes and has no impact on the decoder hardware structure or the chip implementation flow. Experiments on the ISCAS'89 benchmark circuits show that, for mainstream dual-run-length coding algorithms, adding the proposed filling algorithm improves the test data compression ratio by 6%-9%.
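As a toy illustration of the first step discussed above, here is a filling pass that assigns each don't-care bit the value of the preceding specified bit, so that the current run is extended rather than broken; the published algorithm is more elaborate than this:

```python
# Minimal don't-care ('X') filling sketch: each X inherits the value of
# the last specified bit, extending the current run of 0s or 1s.

def fill_dont_cares(cube, default='0'):
    out = []
    last = default              # value used for leading X bits
    for b in cube:
        if b == 'X':
            out.append(last)    # extend the current run
        else:
            out.append(b)
            last = b
    return ''.join(out)

if __name__ == '__main__':
    print(fill_dont_cares('XX0XX10XX1'))   # -> '0000010001'
```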

6.
A compression-decompression scheme based on Huffman coding, Modified Selective Huffman (MS-Huffman), is proposed in this paper. The scheme optimizes the parameters that influence test cost: the compression ratio, the on-chip decoder area overhead, and the overall test application time. It is proved theoretically that the proposed scheme achieves better test data compression than very recently proposed encoding schemes for any test set. A large number of experimental results clearly demonstrate that the proposed scheme improves test data compression and reduces overall test application time and on-chip area overhead compared to other Huffman-code-based schemes.
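The selective-Huffman family that MS-Huffman belongs to shares one mechanism: only the most frequent fixed-size blocks receive Huffman codewords, and every other block is sent raw behind an escape code. A generic Python sketch of that mechanism (not the exact MS-Huffman construction from the paper):

```python
# Generic selective Huffman: Huffman-code the n most frequent blocks
# plus an escape symbol; all other blocks follow the escape code raw.
import heapq
from collections import Counter

def huffman_codes(freqs):
    """Map symbols to prefix-free codewords from their frequencies."""
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    code = {s: '' for s in freqs}
    tie = len(heap)                      # tiebreaker avoids list compares
    while len(heap) > 1:
        f1, _, s1 = heapq.heappop(heap)
        f2, _, s2 = heapq.heappop(heap)
        for s in s1:
            code[s] = '0' + code[s]
        for s in s2:
            code[s] = '1' + code[s]
        heapq.heappush(heap, (f1 + f2, tie, s1 + s2))
        tie += 1
    return code

def selective_encode(blocks, n_coded=3):
    freq = Counter(blocks)
    top = dict(freq.most_common(n_coded))
    top['ESC'] = sum(f for b, f in freq.items() if b not in top)
    code = huffman_codes(top)
    return ''.join(code[b] if b in top else code['ESC'] + b
                   for b in blocks)

if __name__ == '__main__':
    blocks = ['0000', '0000', '1111', '0000', '1010', '1111', '0110']
    print(selective_encode(blocks))
```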

7.
A Test Resource Partitioning Method Using Variable-Tail Coding Compression   Cited by: 13 (self-citations: 6, others: 13)
Test resource partitioning is an effective way to reduce test cost. This paper proposes a new, effective code for compressing test data, the Variable-Tail code, and builds a test resource partitioning scheme around it. Theoretical analysis and experimental study show that Variable-Tail coding achieves a higher compression ratio than Golomb coding, compresses test vectors well under a variety of patterns, and allows an easy hardware implementation of the decoder. The paper also proposes a test vector sorting algorithm that integrates dynamic assignment of don't-care bits, which further improves the compression ratio. Experimental data verify the efficiency of the proposed code and sorting algorithm.
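As a rough sketch of what a test-vector sorting step can look like, here is a greedy nearest-neighbour ordering that minimizes conflicting specified bits between consecutive cubes; the paper's algorithm additionally assigns the don't-care bits dynamically, which is omitted here:

```python
# Greedy nearest-neighbour ordering of test cubes: always pick next the
# remaining cube with the fewest specified-bit conflicts against the
# last one, so consecutive vectors differ little and runs get longer.

def conflicts(a, b):
    """Specified-bit disagreements between two test cubes with 'X's."""
    return sum(1 for x, y in zip(a, b)
               if x != 'X' and y != 'X' and x != y)

def greedy_order(vectors):
    rest = list(vectors)
    order = [rest.pop(0)]
    while rest:
        nxt = min(rest, key=lambda v: conflicts(order[-1], v))
        rest.remove(nxt)
        order.append(nxt)
    return order

if __name__ == '__main__':
    cubes = ['0X01', '1101', '0001', 'X110']
    print(greedy_order(cubes))   # ['0X01', '0001', '1101', 'X110']
```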

8.
The emergence of nanometer-scale integration technology made it possible for system-on-a-chip (SoC) designs to contain many reusable cores from multiple sources, which makes SoC testing more complex than conventional VLSI testing. To address this increase in design complexity in terms of data volume and test time, several compression methods have been developed and proposed in the literature. In this paper, we present a new, efficient test vector compression scheme based on block entropy, in conjunction with our improved row-column reduction routine, to reduce test data significantly. Our results show that the proposed method produces a much higher compression ratio than previously published methods: on average, our scheme scores nearly 13% higher than the best reported results, and it outperforms all reported results on each of the tested circuits. The proposed scheme is very fast and has considerably low complexity.

9.
Recently, several efficient context-based arithmetic coding algorithms have been developed for lossless compression of error-diffused images. In this paper, we first present a novel block- and texture-based approach to train multiple templates according to the most representative texture features. Based on the trained multiple templates, we then present an efficient texture- and multiple-template-based (TM-based) algorithm for lossless compression of error-diffused images. In the proposed TM-based algorithm, the input image is divided into blocks, and for each block the best template is adaptively selected from the multiple templates based on the texture feature of that block. On 20 test error-diffused images and a personal computer with an Intel Celeron 2.8-GHz CPU, experimental results demonstrate that, at the cost of a small average encoding-time increase of 0.365 s (0.901 s), the compression improvement ratio of the proposed TM-based algorithm over the joint bi-level image group (JBIG) standard is 24% (over the earlier block arithmetic coding for image compression (BACIC) algorithm of Reavy and Boncelet, 19.4%). Under the same conditions, the compression improvement ratio over the previous algorithm by Lee and Park is 17.6%, again with only a small encoding-time increase (0.775 s on average). In addition, the earlier free-tree-based algorithm requires 109.131 s of encoding time on average, whereas the proposed algorithm takes 0.995 s; the average compression ratio of the proposed TM-based algorithm, 1.60, is quite competitive with that of the free-tree-based algorithm, 1.62.

10.
A Test Data Compression Algorithm Based on Variable-Run-Length Coding   Cited by: 13 (self-citations: 1, others: 12)
彭喜元, 俞洋. 《电子学报》(Acta Electronica Sinica), 2007, 35(2): 197-201
The IP-core-based design paradigm has advanced SOC design technology, but it has also made SOC test data volume grow geometrically. To address this problem, this paper proposes an effective test data compression algorithm, Variable-Run-Length coding, to reduce test data volume and test cost. The algorithm encodes both runs of 0s and runs of 1s, which greatly reduces the number of short runs in the test data and improves the coding efficiency. Theoretical analysis and experimental data show that variable-run-length coding achieves a higher compression ratio than comparable coding algorithms and can significantly reduce test time, test power, and test cost.

11.
A new test data compression scheme based on run-length coding, equal-run-length coding (ERLC), is presented. It considers both runs of 0s and runs of 1s and exploits the relationship between two consecutive runs: the second of two consecutive equal-length runs is represented by a shorter codeword. A scheme for filling the don't-care bits is proposed to maximize the number of consecutive equal-length runs. Compared with other known schemes, the proposed scheme achieves a higher compression ratio with low area overhead. The merits of the proposed algorithm are verified experimentally on the larger ISCAS'89 benchmark circuits.
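The core ERLC trick can be sketched in Python with symbolic tokens (real ERLC codewords are binary and prefix-free): whenever a run repeats the length of the previous run, it is replaced by a single short "same" marker:

```python
# Sketch of the equal-run-length idea: split the stream into alternating
# runs of 0s and 1s; a run equal in length to its predecessor is emitted
# as a 'SAME' marker (a short codeword in real ERLC).
from itertools import groupby

def erlc_tokens(bits):
    runs = [(b, len(list(g))) for b, g in groupby(bits)]
    tokens, prev_len = [], None
    for bit, length in runs:
        if length == prev_len:
            tokens.append('SAME')            # short codeword in real ERLC
        else:
            tokens.append((bit, length))     # full run-length codeword
        prev_len = length
    return tokens

if __name__ == '__main__':
    print(erlc_tokens('000111001100000'))
    # [('0', 3), 'SAME', ('0', 2), 'SAME', ('0', 5)]
```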

12.
A system-on-chip (SOC) usually consists of many memory cores with different sizes and functionality; they typically represent a significant portion of the SOC and therefore dominate its yield. Diagnostics for yield enhancement of the memory cores is therefore an important issue. In this paper we present two data compression techniques that can be used to speed up the transmission of diagnostic data from an embedded RAM built-in self-test (BIST) circuit with diagnostic support to the external tester. The proposed syndrome-accumulation approach compresses the faulty-cell address and March syndrome to about 28% of the original size on average under the March-17N diagnostic test algorithm. The key component of the compressor is a novel syndrome-accumulation circuit, which can be realized by a content-addressable memory. Experimental results show that the area overhead is about 0.9% for a 1-Mb SRAM with 164 faults. A tree-based compression technique for word-oriented memories is also presented: by using a simplified Huffman coding scheme and partitioning each 256-bit Hamming syndrome into fixed-size symbols, the average compression ratio (size of the original data to that of the compressed data) is about 10, assuming 16-bit symbols. The additional hardware to implement the tree-based compressor is also very small. The proposed compression techniques effectively reduce memory diagnosis time as well as tester storage requirements.

13.
To reduce test data volume and test time, this paper proposes a multi-scan-chain test data compression method based on mirror-symmetric reference slices. Two mutually mirror-symmetric reference slices are compared with each scan slice for compatibility, which raises the probability of a match. If a scan slice is compatible with a reference slice, only a few coded bits are needed to represent it, and it can be loaded into the multiple scan chains in parallel; if not, the reference slice is replaced by that scan slice. A longest-compatibility strategy is proposed to choose among compatibility relations when a scan slice satisfies several of them simultaneously. Codewords for the different compatibility cases are assigned according to Huffman coding, which further improves the compression ratio. Experimental results show that the proposed method achieves an average test data compression ratio of 69.13%.
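A sketch of the compatibility mechanism described above, with placeholder codewords rather than the paper's Huffman-assigned ones: a slice that agrees with the reference (or its mirror image) on every specified bit needs only a short code; otherwise the slice is sent raw and becomes the new reference:

```python
# Compatibility test behind a reference-slice scheme: two test cubes are
# compatible when every pair of specified bits agrees ('X' matches all).

def compatible(slice_, ref):
    return all(s == 'X' or r == 'X' or s == r
               for s, r in zip(slice_, ref))

def encode_slice(slice_, ref):
    if compatible(slice_, ref):
        return '00', ref                     # "same as reference"
    if compatible(slice_, ref[::-1]):
        return '01', ref                     # "mirror of reference"
    filled = slice_.replace('X', '0')        # raw slice, new reference
    return '1' + filled, filled

if __name__ == '__main__':
    ref = '0010'
    for s in ('0X10', 'X100', '1111'):
        code, ref = encode_slice(s, ref)
        print(s, '->', code)
```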

14.
The large size of hyperspectral images poses a significant obstacle to their practical use because of the abundant information stored in them. The use of deep learning for such data processing is visible in recent applications. In this work, we propose a lossy hyperspectral image compression algorithm based on autoencoders. It uses a combination of convolution and max-pooling layers to reduce the dimensions of the input image and generate a compressed image. The original image is reconstructed, with some loss of information, by transposed convolution layers that reverse the procedure used by the encoder. The compressed image is then entropy-coded with an adaptive arithmetic coder for transmission or storage. The method provides an improvement of 28% in PSNR with a 21-fold increase in compression ratio. The effect of compression on classification has also been evaluated using a state-of-the-art classification algorithm; a negligible difference in classification accuracy was observed, which confirms the effectiveness of the proposed algorithm.
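A minimal convolutional autoencoder of the kind described, sketched in PyTorch; the layer sizes and band count (32) are illustrative assumptions, and the paper's training loss and adaptive arithmetic coding stage are not reproduced:

```python
# Conv + max-pool encoder, transposed-conv decoder: a toy autoencoder
# that halves the spatial dimensions of a hyperspectral cube.
import torch
import torch.nn as nn

class HyperspectralAE(nn.Module):
    def __init__(self, bands=32):
        super().__init__()
        self.encoder = nn.Sequential(          # 2x spatial downsampling
            nn.Conv2d(bands, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                   # compressed representation
        )
        self.decoder = nn.Sequential(          # reverse of the encoder
            nn.ConvTranspose2d(16, bands, kernel_size=2, stride=2),
            nn.Sigmoid(),                      # assumes inputs in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

if __name__ == '__main__':
    x = torch.rand(1, 32, 64, 64)              # one 64x64 cube, 32 bands
    model = HyperspectralAE()
    print(model(x).shape)                      # torch.Size([1, 32, 64, 64])
```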

15.
We present a parallel algorithm, architecture, and implementation for efficient Lempel-Ziv (LZ)-based data compression. The parallel algorithm exhibits a scalable, parameterized, and regular structure and is well suited for VLSI array implementation. Based on our parallel algorithm and systematic design methodologies, two semisystolic array architectures have been developed which are low power and area efficient. The first architecture trades compression speed for area and has a low run-time overhead for multichannel compression. The second architecture achieves a high compression rate (one data symbol per clock) at the expense of area, due to a large clock load and global wiring. Compared to a recent state-of-the-art parallel architecture, our first array structure requires significantly less chip area (≈330 k transistors for the previous architecture versus ≈36 k for ours) and more than an order of magnitude less power (≈1.0 W versus ≈70 mW) while still providing the compression speed required for most data communication applications. Hence, data compression can be adopted in portable data communication as well as wireless local area networks. The second architecture has at least three times less area and power while providing the same constant compression rate. To demonstrate the correctness of our design, a prototype module for the first architecture has been implemented in 1.2 μm complementary metal-oxide-semiconductor (CMOS) technology. The compression module contains 32 simple, identical processors, has an average compression rate of 12.5 million bytes/s, and consumes 18.34 mW without the dictionary (≈70 mW with a 4.1k SRAM for the dictionary) while operating at a 100 MHz clock rate (simulated).

16.
We examine how exponential-Golomb codes and subexponential codes can be used for the compression of scan test data in core-based system-on-a-chip (SOC) designs. These codes are well known in the data compression domain, but their application to SOC testing has not been explored before. We show that these codes often provide slightly higher compression than alternative methods that have been proposed recently.
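Exponential-Golomb codes are standard, and an order-k encoder fits in a few lines. A sketch:

```python
# Order-k exponential-Golomb encoding: offset n by 2^k, write it in
# binary, and prepend one 0 for every bit after the leading 1.

def exp_golomb(n, k=0):
    value = n + (1 << k)          # shift so the leading 1 marks the width
    binary = format(value, 'b')
    return '0' * (len(binary) - k - 1) + binary

if __name__ == '__main__':
    for n in range(6):
        print(n, exp_golomb(n))   # order 0: 1, 010, 011, 00100, 00101, 00110
```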

17.
We present an analysis of test application time for test data compression techniques that are used to reduce test data volume and testing time in system-on-a-chip (SOC) designs. These techniques are based on data compression codes and on-chip decompression. The compression/decompression scheme decreases the test data volume and the amount of data that has to be transported from the tester to the SOC. We show, via analysis as well as experiments, that the proposed scheme reduces testing time and allows the use of a slower tester. Results on test application time for the ISCAS'89 circuits are obtained using an ATE testbench developed in VHDL to emulate ATE functionality.

18.
Compressed Data Transmission for a Large-Capacity, High-Speed Fiber Bragg Grating Demodulation System   Cited by: 4 (self-citations: 4, others: 0)
A data transmission method based on hardware differential compression is proposed to achieve efficient transmission of the large data volume in a large-capacity, high-speed fiber Bragg grating (FBG) demodulation system. The method applies differential processing according to the characteristics of the FBG peak data to reduce the data volume, and combines this with the Lempel-Ziv-Welch (LZW) dictionary compression algorithm to eliminate data redundancy, achieving real-time transmission of large-capacity, high-speed FBG demodulation data. The data processing and compression algorithms are implemented in hardware on an FPGA, giving a high data throughput. In experimental tests at a 4-kHz demodulation rate, the compression ratio reaches 16.56% for static FBG signals and 36.25% for dynamic signals. The test results show that the method compresses well in experiments on an aero-engine dynamic monitoring platform and can effectively solve the problem of transmitting huge data volumes under constrained hardware resources.
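A software sketch of the two-stage scheme (the paper implements it in FPGA hardware): consecutive FBG peak samples change slowly, so a delta pass produces a highly repetitive stream that a dictionary coder such as LZW compresses well. The sample values below are invented for illustration:

```python
# Differencing followed by a minimal LZW over a text rendering of the
# delta stream; real hardware would operate on fixed-width words.

def deltas(samples):
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def lzw_encode(data):
    """LZW over a string; returns a list of dictionary indices."""
    table = {chr(i): i for i in range(256)}
    word, out = '', []
    for ch in data:
        if word + ch in table:
            word += ch
        else:
            out.append(table[word])
            table[word + ch] = len(table)   # grow the dictionary
            word = ch
    if word:
        out.append(table[word])
    return out

if __name__ == '__main__':
    peaks = [1550130, 1550131, 1550131, 1550132, 1550131, 1550131]
    d = deltas(peaks)                       # [1550130, 1, 0, 1, -1, 0]
    print(d, lzw_encode(','.join(map(str, d))))
```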

19.
This paper proposes novel algorithms for computing test patterns for transition faults in combinational circuits and fully scanned sequential circuits. The algorithms are based on the principle that stuck-at vectors can be used effectively to construct good-quality transition test sets. Several algorithms are discussed. Experimental results obtained using the new algorithms show a 20% reduction in test set size, test data volume, and test application time compared to a state-of-the-art native transition-test ATPG tool, without any reduction in fault coverage. Other benefits of the approach, namely productivity improvement, constraint handling, and design data compression, are highlighted.

20.
This paper proposes an SoC test data compression method based on an improved grouping of the FDR code. Simple preprocessing of the don't-care bits in the original test set increases the frequency with which specified 0 bits occur within runs. Building on the FDR code, the grouping scheme is then modified, and it is proved theoretically that the resulting compression ratio is slightly higher than that of FDR coding, especially for short runs. Programs simulating the two coding methods were written in C, and the experimental results confirm the effectiveness and high compression of the improved-grouping FDR code.
