Similar Documents
20 similar documents found.
1.
Test data coding compression is an important, classic class of test resource partitioning (TRP) methods. This paper proposes a generalized alternating code of which both the FDR code and the alternating code are special cases. It also extends the two-step compression method: the original test set is partitioned into multiple groups, and each group is alternately encoded with a different ratio. The scheme combines the respective advantages of alternating coding and two-step coding, remedies the weaknesses of the FDR and alternating codes on certain circuit test sets, and obtains a good compression ratio. Experimental results show that, compared with coding compression methods of the same type, the scheme achieves a higher test data compression ratio and better overall test performance.
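The generalized alternating code itself is not spelled out in this abstract, so as a hedged illustration, here is a minimal Python sketch of the FDR (frequency-directed run-length) code that the paper treats as a special case; the group table is the standard FDR one, and the handling of a trailing run without a closing 1 is an assumption.

```python
def fdr_codeword(run_len):
    """FDR codeword for a run of `run_len` 0s ended by a 1.
    Group k covers lengths 2**k - 2 .. 2**(k+1) - 3 and uses a
    (k-1)-ones-plus-zero prefix and a k-bit tail (2k bits total)."""
    k = 1
    while run_len > 2 ** (k + 1) - 3:
        k += 1
    prefix = '1' * (k - 1) + '0'
    tail = format(run_len - (2 ** k - 2), '0{}b'.format(k))
    return prefix + tail

def fdr_encode(bits):
    """Encode a fully specified bit string as FDR codewords."""
    out, run = [], 0
    for b in bits:
        if b == '0':
            run += 1
        else:
            out.append(fdr_codeword(run))
            run = 0
    if run:  # trailing run with no closing 1 (assumption)
        out.append(fdr_codeword(run))
    return ''.join(out)

# '001' is a run of 2 -> '1000'; a lone '1' is a run of 0 -> '00'
assert fdr_encode('0011') == '1000' + '00'
```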

2.
Recent test data compression techniques raise concerns regarding power dissipation and compression efficiency. This letter proposes a new test data compression scheme, twin symbol encoding, that supports block-division techniques that can reduce hardware overhead. Our experimental results show that the proposed technique achieves both a high compression ratio and low power dissipation. Therefore, the proposed scheme is an attractive solution for efficient test data compression.

3.
An SOC test data compression method using hybrid run-length coding   Cited by 10 (self: 1, others: 9)
方建平  郝跃  刘红侠  李康 《电子学报》2005,33(11):1973-1977
This paper proposes an effective run-length-based test data compression/decompression algorithm, hybrid run-length coding, whose salient features are a high compression ratio and low hardware overhead for the corresponding decoding circuit. In addition, since the compression ratio of a coding algorithm depends strongly on the filling strategy for the don't-care bits in the test data, an iterative sorting-and-filling algorithm for don't-care bits is also proposed to further improve coding efficiency. Theoretical analysis and experimental results on several ISCAS 89 benchmark circuits demonstrate the effectiveness of the hybrid run-length coding and the iterative sorting-and-filling algorithm.
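As a hedged sketch of why don't-care filling matters for run-length codes (the paper's iterative sorting-and-filling algorithm is more elaborate and not reproduced here), the simple fill below copies the last specified bit into each X so that existing runs are extended rather than broken:

```python
def run_extend_fill(cube, default='0'):
    """Fill each 'X' with the most recent specified bit so existing runs
    grow longer and run-length codes compress better (a common heuristic;
    the paper's iterative sort-and-fill is more elaborate)."""
    out, last = [], default
    for b in cube:
        if b == 'X':
            out.append(last)
        else:
            out.append(b)
            last = b
    return ''.join(out)

# The X bits join the neighbouring runs:
assert run_extend_fill('0XX10XX') == '0001000'
```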

4.
A test set embedding approach based on twisted-ring counter with few seeds   Cited by 1 (self: 0, others: 1)
Test data storage, test application time, and test power dissipation for single stuck-at faults increase dramatically as tens of millions of gates are integrated into a System-on-a-Chip (SoC), which makes fault testing of embedded-core-based SoCs a challenging task. To further reduce test data storage, test application time, and test power dissipation, this paper presents a new test set embedding approach based on a twisted-ring counter (TRC) with few seeds. This approach includes two improvements. First, an efficient seed-selection algorithm exploits the high density of unspecified bits in the deterministic test set, minimizing the test data storage needed for complete coverage of single stuck-at faults. Second, a novel test-sequence-reduction scheme based on shifting seeds reduces test application time, which in turn reduces test power dissipation. Compared with the conventional approach, experiments on ISCAS'89 benchmark circuits show that the proposed approach requires 65% less test data storage, 68% shorter test application time, and 67% less test power dissipation. Moreover, its hardware overhead is very small.
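A minimal sketch, assuming a right-shifting twisted-ring (Johnson) counter: each state is a candidate test pattern, and a seed "embeds" every test cube that some reachable state covers. The paper's seed-selection algorithm is its contribution and is not reproduced; the shift direction below is an assumption.

```python
def trc_states(seed, steps):
    """States visited by a twisted-ring (Johnson) counter started at `seed`:
    each clock shifts right and feeds the complement of the old last bit
    into the first position."""
    states, s = [seed], seed
    for _ in range(steps - 1):
        s = ('0' if s[-1] == '1' else '1') + s[:-1]
        states.append(s)
    return states

def covers(state, cube):
    """A state matches a test cube if it agrees on every specified bit."""
    return all(c in ('X', s) for s, c in zip(state, cube))

states = trc_states('0000', 8)  # the classic 8-state Johnson cycle
assert states[:4] == ['0000', '1000', '1100', '1110']
assert any(covers(s, '1X0X') for s in states)
```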

5.
In this paper, we present two multistage compression techniques to reduce the test data volume in scan test applications. We propose two encoding schemes, namely alternating frequency-directed equal-run-length (AFDER) coding and run-length based Huffman coding (RLHC). These encoding schemes, together with the nine-coded compression technique, enhance the test data compression ratio. In the first stage, the pre-generated test cubes with unspecified bits are encoded using the nine-coded compression scheme. The proposed encoding schemes then exploit the properties of the compressed data to enhance compression further. This multistage compression is especially effective when the percentage of don't-cares in a test set is very high. We also present a simple decoder architecture to decode the original data. The experimental results obtained from ISCAS'89 benchmark circuits confirm average compression ratios of 74.2% and 77.5% with the proposed 9C-AFDER and 9C-RLHC schemes respectively.
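A hedged sketch of the RLHC idea: treat run lengths as symbols and Huffman-code them so that frequent run lengths get short codewords. The helpers below are illustrative, not the paper's exact codeword assignment.

```python
import heapq
from collections import Counter
from itertools import count

def run_lengths(bits):
    """Lengths of the maximal runs in a bit string."""
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append(j - i)
        i = j
    return runs

def huffman(freq):
    """Map symbols to prefix-free codewords, shorter for more frequent."""
    tie = count()
    heap = [(w, next(tie), {sym: ''}) for sym, w in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in c1.items()}
        merged.update({s: '1' + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, next(tie), merged))
    return heap[0][2]

stream = '000000' + '1' + '0000000' + '111' + '0000000' + '1'
code = huffman(Counter(run_lengths(stream)))
encoded = ''.join(code[r] for r in run_lengths(stream))
assert len(encoded) < len(stream)  # frequent run lengths get short codewords
```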

6.
A new scheme of test data compression based on run-length coding, namely equal-run-length coding (ERLC), is presented. It considers runs of both 0's and 1's and explores the relationship between two consecutive runs: a shorter codeword is used to represent the whole second run of two equal-length consecutive runs. A scheme for filling the don't-care bits is proposed to maximize the number of consecutive equal-length runs. Compared with other known schemes, the proposed scheme achieves a higher compression ratio with low area overhead. The merits of the proposed algorithm are experimentally verified on the larger ISCAS89 benchmark circuits.
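A toy version of the ERLC principle (the paper's actual codeword table is not reproduced): when a run repeats the previous run's length, emit one short "repeat" token instead of re-encoding the length.

```python
def erlc_like(run_lengths):
    """Replace the second of two equal-length consecutive runs by 'R'.
    'R' stands in for the short repeat codeword; the other entries would
    be encoded with an FDR-style code in the real scheme."""
    out, prev = [], None
    for r in run_lengths:
        out.append('R' if r == prev else r)
        prev = r
    return out

assert erlc_like([3, 3, 3, 5, 5, 2]) == [3, 'R', 'R', 5, 'R', 2]
```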

7.
Test data compression using alternating variable run-length code   Cited by 1 (self: 0, others: 1)
This paper presents a unified test data compression approach, which simultaneously reduces test data volume, scan power consumption, and test application time for a system-on-a-chip (SoC). The proposed approach is based on the use of alternating variable run-length (AVR) codes for test data compression. A formal analysis of scan power consumption and test application time is presented. The analysis shows that a careful mapping of the don't-cares in pre-computed test sets to 1s and 0s leads to significant savings in peak and average power consumption, without requiring slower scan clocks. The proposed technique also reduces testing time compared to a conventional scan-based scheme. Alternating variable run-length codes can efficiently compress data streams that are composed of runs of both 0s and 1s. The decompression architecture is also presented. Experimental results for ISCAS'89 benchmark circuits and a production circuit show that the proposed approach greatly reduces test data volume and scan power consumption in all cases.
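The invariant that alternating codes exploit can be shown in a few lines: since runs of 0s and 1s strictly alternate, only the first bit and the sequence of run lengths need to be stored. This round-trip sketch is an illustration, not the AVR codeword table itself.

```python
def to_alternating_runs(bits):
    """Decompose a bit string into (first_bit, run_lengths)."""
    lengths, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        lengths.append(j - i)
        i = j
    return bits[0], lengths

def from_alternating_runs(first, lengths):
    """Rebuild the bit string: run values alternate, so only lengths are stored."""
    out, b = [], first
    for n in lengths:
        out.append(b * n)
        b = '0' if b == '1' else '1'
    return ''.join(out)

s = '000110000011'
assert from_alternating_runs(*to_alternating_runs(s)) == s
```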

8.
The pattern run-length coding test data compression approach is extended by introducing a don't-care bit (X) propagation strategy. Multiple core test sets for a core-based System-on-Chip (SoC) are unified into a single set, which is compressed by the extended coding technique. A reconfigurable scan test application mechanism is presented, in which test data for multiple cores are scanned and captured jointly to make SoC test application more efficient with low added hardware overhead. The proposed union test technique is applied to an academic SoC embedding six large ISCAS'89 benchmarks and to an ITC'02 benchmark circuit. Experimental results show that, compared with existing schemes in which each core test set is compressed and applied independently of the other cores, the proposed scheme not only improves test data compression/decompression but also removes redundant shift and capture cycles during scan testing, effectively decreasing SoC test application time.

9.
A compression-decompression scheme based on Huffman coding, Modified Selective Huffman (MS-Huffman), is proposed in this paper. The scheme optimizes the parameters that influence test cost reduction: the compression ratio, the on-chip decoder area overhead, and the overall test application time. It is proved theoretically that the proposed scheme gives better test data compression than very recently proposed encoding schemes for any test set. A large number of experimental results clearly demonstrate that the proposed scheme improves test data compression and reduces overall test application time and on-chip area overhead compared to other Huffman-code-based schemes.
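A hedged sketch of the selective-Huffman idea that MS-Huffman builds on: only the most frequent fixed-size blocks get Huffman codewords behind a '0' flag bit, and every other block is shipped raw behind a '1' flag. The flag convention, block size, and `top` count are assumptions for illustration.

```python
import heapq
from collections import Counter
from itertools import count

def huffman(freq):
    """Prefix code over `freq` (symbol -> weight); needs >= 2 symbols."""
    tie = count()
    heap = [(w, next(tie), {s: ''}) for s, w in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, a = heapq.heappop(heap)
        w2, _, b = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in a.items()}
        merged.update({s: '1' + c for s, c in b.items()})
        heapq.heappush(heap, (w1 + w2, next(tie), merged))
    return heap[0][2]

def selective_encode(bits, block=4, top=2):
    """Huffman-code only the `top` most frequent blocks ('0' flag + codeword);
    ship every other block raw ('1' flag + block)."""
    blocks = [bits[i:i + block] for i in range(0, len(bits), block)]
    freq = Counter(blocks)
    code = huffman(dict(freq.most_common(top)))
    return ''.join('0' + code[b] if b in code else '1' + b for b in blocks)

bits = '0000' * 6 + '1111' * 3 + '0110'
assert len(selective_encode(bits)) < len(bits)
```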

10.
Test vector compression is a key technique for reducing IC test time and cost, given the explosion of system-on-chip (SoC) test data in recent years. To effectively reduce the bandwidth requirement between the automatic test equipment (ATE) and the circuit under test (CUT), a novel VSPTIDR (variable shifting prefix-tail identifier reverse) code for test stimulus data compression is designed. The encoding scheme is defined and analyzed in detail, and the decoder is presented and discussed. When the probability of 0 bits in the test set is greater than 0.92, the compression ratio of the VSPTIDR code is better than that of the frequency-directed run-length (FDR) code, as shown by theoretical analysis and experiments, and the on-chip area overhead of the VSPTIDR decoder is about 15.75% less than that of the FDR decoder.
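The VSPTIDR codeword table is not reproduced here, but the style of theoretical comparison the paper makes can be sketched: under an i.i.d. model where each bit is 0 with probability p, the expected FDR compression ratio follows from the 2k-bit cost of a length-l run of 0s. This is a sketch of the baseline side only, under that i.i.d. assumption.

```python
def fdr_group(l):
    """Group index k of FDR: group k holds runs of 2**k - 2 .. 2**(k+1) - 3 zeros."""
    k = 1
    while l > 2 ** (k + 1) - 3:
        k += 1
    return k

def fdr_expected_ratio(p, max_run=4000):
    """Expected FDR compression ratio for an i.i.d. source with P(0) = p.
    A run of l zeros plus its closing 1 spans l + 1 input bits and costs
    2k code bits; run lengths are geometric: P(l) = p**l * (1 - p)."""
    code = data = 0.0
    for l in range(max_run):
        pl = (p ** l) * (1 - p)
        code += pl * 2 * fdr_group(l)
        data += pl * (l + 1)
    return 1.0 - code / data

# FDR only pays off once 0s dominate heavily:
assert fdr_expected_ratio(0.95) > fdr_expected_ratio(0.7) > 0 > fdr_expected_ratio(0.5)
```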

11.
The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). Because of its computational complexity, this is not an efficient approach for the design of real-time systems. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model, rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-code modulation, and run-length coding is included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.
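The statistical distortion model is too involved for a short sketch, but the 2-dB figure refers to PSNR, which is computed from the mean squared error as follows (8-bit peak assumed):

```python
import math

def psnr(mse, peak=255.0):
    """Peak signal-to-noise ratio in dB for a given mean squared error."""
    return 10.0 * math.log10(peak * peak / mse)

# An MSE of 100 on 8-bit data is roughly 28 dB:
assert abs(psnr(100.0) - 28.13) < 0.01
```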

12.
A low-power dual test data compression scheme   Cited by 1 (self: 1, others: 0)
陈田  易鑫  王伟  刘军  梁华国  任福继 《电子学报》2017,45(6):1382-1388
With advances in integrated circuit manufacturing processes, VLSI (Very Large Scale Integrated) circuit testing faces the problems of large test data volume and excessive test power. This paper proposes a low-power test data compression scheme based on multi-level compression. The scheme first preprocesses the original test set with an input reduction technique to reduce the number of specified bits. It then performs the first-level compression: each test vector is partitioned into sub-vectors according to the multiple scan chains and compressed by compatibility merging, so the compressed vectors can be represented by shorter codewords. Next, the test data are filled for low power: capture-power filling is applied first, until capture power falls within a safety threshold, and the remaining don't-care bits are then filled to reduce shift power. Finally, the filled test data undergo the second-level compression, an improved run-length coding. Experimental results on ISCAS89 benchmark circuits show that the scheme achieves a higher compression ratio than the Golomb, FDR, EFDR, 9C, and BM codes, while jointly optimizing capture power and shift power during testing.
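Capture-power filling requires circuit simulation, but the shift-power half can be sketched: fill the remaining don't-cares with the adjacent specified value and estimate shift power with the common weighted transition count (WTC). The weighting convention below is one standard choice, not necessarily the paper's.

```python
def adjacent_fill(cube):
    """Fill X with the previous specified bit (0 before any specified bit),
    a standard low-shift-power fill."""
    out, last = [], '0'
    for b in cube:
        if b == 'X':
            out.append(last)
        else:
            out.append(b)
            last = b
    return ''.join(out)

def wtc(vector):
    """Weighted transition count: a transition shifted through more scan
    cells dissipates more, so earlier transitions get larger weights."""
    L = len(vector)
    return sum(L - 1 - i for i in range(L - 1) if vector[i] != vector[i + 1])

filled = adjacent_fill('1XX0XX01X')
assert wtc(filled) <= wtc('110011010')  # a hand-filled alternative of the same cube
```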

13.
This paper presents a code compression and on-the-fly decompression scheme suitable for coarse-grain reconfigurable technologies. These systems pose further challenges by having an order-of-magnitude higher memory requirement, due to much wider instruction words than typical VLIW/TTA architectures. Current compression schemes are evaluated. A highly efficient and novel dictionary-based lossless compression technique is implemented and compared against a previous implementation for a reconfigurable system. This paper examines several conflicting design parameters, such as the compression ratio, silicon area, latency, and power consumption. Compression ratios in the range of 0.32 to 0.44 are recorded with the proposed scheme for a given set of test programs. With these test programs, a 60% overall silicon area saving is achieved, even after the decompressor hardware overhead is taken into account. The proposed technique may be applied to any architecture which exhibits characteristics common to the example reconfigurable architecture targeted in this paper.
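A minimal sketch of dictionary-based code compression for wide instruction words: unique words go into a table and the instruction stream becomes fixed-width indices. Note the ratio here is compressed/original, matching the 0.32 to 0.44 convention above; the word width and toy program are assumptions.

```python
import math

def dict_compress(words, word_bits):
    """Replace each wide instruction word by an index into a table of
    unique words; return (table, indices, compression_ratio)."""
    table, indices = {}, []
    for w in words:
        indices.append(table.setdefault(w, len(table)))
    index_bits = max(1, math.ceil(math.log2(len(table))))
    original = len(words) * word_bits
    compressed = len(indices) * index_bits + len(table) * word_bits
    return list(table), indices, compressed / original

program = ['nop', 'add', 'nop', 'mul', 'nop', 'add', 'nop', 'nop']
table, idx, ratio = dict_compress(program, word_bits=256)
assert table == ['nop', 'add', 'mul'] and ratio < 0.5
```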

14.
The random-like filling strategy that today's popular test compression schemes use in pursuit of high compression introduces large test power. Achieving high compression together with reduced test power for multiple-scan-chain designs is even harder, and very few works have been dedicated to this problem. This paper proposes and demonstrates a multilayer data copy (MDC) scheme for test compression and test power reduction in multiple-scan-chain designs. The scheme utilizes a decoding buffer, which supports fast loading using previously loaded data, to achieve test data compression and test power reduction at the same time. The scheme can be applied independently of automatic test pattern generation (ATPG) or incorporated into an ATPG flow to generate highly compressible and power-efficient test sets. Experimental results on benchmark circuits show that test sets generated by the scheme achieve large compression and power savings with only a small area overhead.

15.
罗瑜  唐博 《电子学报》2018,46(4):969-974
To further improve the compression performance of lossless reference-frame compression, this paper proposes a frame-memory lossless compression algorithm based on texture-direction prediction and run-length Golomb coding. The algorithm first uses dual scanning and adaptive prediction to select, along the texture direction, the optimal reference pixel for each pixel and computes the prediction residual; the residuals are then entropy-coded with a hybrid Golomb/run-length code, improving the compression performance of lossless reference-frame compression. Implementation results show that, compared with an intra-prediction Golomb coding algorithm, the proposed algorithm not only improves the average compression ratio by 16% but also reduces the average coding time.
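A hedged sketch of the two halves of such a pipeline, under simplifying assumptions: choose the left or upper neighbour as predictor with a crude gradient-based texture test, then code the zigzag-mapped residual with a Rice (Golomb) code. The paper's dual-scan ordering and run mode are not reproduced.

```python
def zigzag(r):
    """Map a signed residual to a non-negative integer: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return 2 * r if r >= 0 else -2 * r - 1

def rice_encode(n, k):
    """Rice code (Golomb with m = 2**k): unary quotient, '0' stop, k-bit remainder."""
    q, rem = n >> k, n & ((1 << k) - 1)
    return '1' * q + '0' + format(rem, '0{}b'.format(k))

def predict(img, x, y):
    """Pick the left or upper neighbour as predictor: |up - upleft| estimates
    the horizontal gradient, so when it is small the row is smooth and the
    left pixel predicts well (a crude stand-in for the paper's texture test)."""
    left = img[y][x - 1] if x else 0
    up = img[y - 1][x] if y else 0
    upleft = img[y - 1][x - 1] if x and y else 0
    return left if abs(up - upleft) <= abs(left - upleft) else up

img = [[10, 12, 13], [11, 12, 14], [11, 13, 15]]
bits = ''.join(rice_encode(zigzag(img[y][x] - predict(img, x, y)), k=2)
               for y in range(3) for x in range(3))
assert set(bits) <= {'0', '1'}
```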

16.
To overcome the limitations of automatic test equipment (ATE), test data compression/decompression schemes have become an important issue in testing a system-on-chip (SoC). To alleviate the limitations of previous works, a new hybrid test data compression/decompression scheme for an SoC is developed. The new scheme is based on analyzing the factors that influence the test parameters: compression ratio and hardware overhead. To improve the compression ratio, the proposed scheme, called Modified Input reduction and CompRessing One block (MICRO), uses modified input reduction, one-block compression, a novel mapping, and reordering algorithms. Unlike previous approaches using a cyclic scan register architecture, the proposed scheme compresses the original test data and decompresses the compressed test data without the cyclic scan register architecture. Therefore, the proposed scheme achieves a high compression ratio with low hardware overhead. Experimental results on ISCAS'89 and ITC'99 benchmark circuits prove the efficiency of the new method.

17.
In many image sequence compression applications, Huffman coding is used to eliminate statistical redundancy resident in given data. The Huffman table is often pre-defined to reduce coding delay and table transmission overhead. Local symbol statistics, however, may be much different from the global ones manifested in the pre-defined table. In this paper, we propose three Huffman coding methods in which pre-defined codebooks are effectively manipulated according to local symbol statistics. The first proposed method dynamically modifies the symbol-codeword association without rebuilding the Huffman tree itself. The encoder and decoder maintain identical symbol-codeword association by performing the same modifications to the Huffman table, thus eliminating extra transmission overhead. The second method adaptively selects a codebook from a set of given ones, which produces the minimum number of bits. The transmission overhead in this method is the codebook selection information, which is observed to be negligible compared with the bit saving attained. Finally, we combine the two aforementioned methods to further improve compression efficiency. Experiments are carried out using five test image sequences to demonstrate the compression performance of the proposed methods.
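The first method can be sketched as a rank-based remapping: the codeword set stays fixed and only the symbol-codeword association changes, with the most frequent symbols taking the shortest codewords. Since encoder and decoder update from the same decoded symbols, no table is transmitted. The update rule below is a simplification of the paper's method.

```python
from collections import Counter

def remap(codewords, counts):
    """Give the shortest codewords to the currently most frequent symbols.
    Both encoder and decoder call this after every frame with identical
    counts, so the new table never has to be transmitted."""
    by_freq = sorted(counts, key=lambda s: (-counts[s], s))
    by_len = sorted(codewords, key=len)
    return dict(zip(by_freq, by_len))

# Pre-defined table assumed 'a' was common, but locally 'c' dominates:
table = {'a': '0', 'b': '10', 'c': '11'}
local = Counter('ccccbba')
table = remap(table.values(), local)
assert table['c'] == '0'  # 'c' now owns the 1-bit codeword
```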

18.
A new and effective test data compression code, the PTIDR code, is proposed. The coding method combines Huffman coding and prefix coding. Theoretical analysis and experimental results show that when the probability p of 0s in the test set satisfies p ≥ 0.7610, it achieves a higher compression ratio than the FDR code, thereby lowering chip test cost. Its decoder is also simpler and easier to implement than the FDR decoder, effectively saving hardware overhead and further saving chip area, thus reducing chip manufacturing cost.

19.
An Efficient Test Data Compression Technique Based on Codes   Cited by 1 (self: 1, others: 0)
A new test data compression/decompression algorithm called hybrid run-length coding is proposed; it jointly considers the test data compression ratio, the hardware overhead of the corresponding decoding circuit, and the total test time. The algorithm uses a variable-to-variable coding style, mapping runs of different lengths to codewords of different lengths, and thus achieves a good compression ratio. To improve compression further, a don't-care-bit filling method and a test vector reordering algorithm are also proposed to preprocess the test data before encoding. Moreover, the design of the hardware decoding circuit is fully considered throughout the development of hybrid run-length coding, keeping the hardware overhead as small as possible and reducing the total test time. Finally, experimental results on the ISCAS 89 benchmark circuits demonstrate the effectiveness of the proposed algorithm.

20.
方建平  郝跃  刘红侠  李康 《半导体学报》2005,26(11):2062-2068
A new test data compression/decompression algorithm called hybrid run-length coding is proposed; it jointly considers the test data compression ratio, the hardware overhead of the corresponding decoding circuit, and the total test time. The algorithm uses a variable-to-variable coding style, mapping runs of different lengths to codewords of different lengths, and thus achieves a good compression ratio. To improve compression further, a don't-care-bit filling method and a test vector reordering algorithm are also proposed to preprocess the test data before encoding. Moreover, the design of the hardware decoding circuit is fully considered throughout the development of hybrid run-length coding, keeping the hardware overhead as small as possible and reducing the total test time. Finally, experimental results on the ISCAS 89 benchmark circuits demonstrate the effectiveness of the proposed algorithm.
