Similar Literature (20 results found)
1.
Ever-increasing test data volume and excessive test power are two of the main concerns of VLSI testing. The "don't-care" bits (also known as X-bits) in a given test cube can be exploited for test data compression and/or test power reduction, but these techniques may conflict with each other because the very same X-bits are likely to be used for different optimization objectives. This paper proposes a capture-power-aware test compression scheme that keeps capture power under a safe limit with little loss in test compression ratio. Experimental results on benchmark circuits validate the effectiveness of the proposed solution.
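As a rough illustration of how X-bits can be spent on power rather than compression, the sketch below applies minimum-transition (adjacent) fill, a standard low-power X-filling heuristic; it is a generic stand-in, not the paper's capture-power-aware assignment:

```python
def mt_fill(cube: str) -> str:
    """Minimum-transition fill: copy the nearest specified bit into each X.

    Illustrative only: adjacent fill targets transition counts generically;
    the paper's scheme assigns X-bits under an explicit capture-power limit.
    """
    bits = list(cube)
    last = '0'                      # assumed default when a cube starts with X's
    for i, b in enumerate(bits):
        if b == 'X':
            bits[i] = last          # repeat the previous specified value
        else:
            last = b
    return ''.join(bits)

print(mt_fill("0XX1XX10X"))  # -> 000111100
```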

2.
The paper proposes a new test data compression scheme for testing embedded cores with multiple scan chains. The scheme broadcasts identical test data to several scan chains whenever the cells at the same depth are compatible for the test pattern currently being applied. It thus efficiently exploits the compatibility of scan cells among the scan chain segments, increases the amount of test data delivered in broadcast mode, and effectively reduces test data volume and test application time. It requires neither a complex compression algorithm nor costly hardware. Experimental results demonstrate the efficiency and versatility of the proposed method.
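The broadcast condition the abstract describes can be pictured as a per-depth compatibility check across chains: a slice can be broadcast when no chain demands a 0 while another demands a 1. A minimal sketch, with the helper name and data layout assumed for illustration:

```python
def broadcast_slice(slice_bits):
    """Check whether the scan cells at one depth can share broadcast data.

    slice_bits holds, per scan chain, the required bit at this depth:
    '0', '1' or 'X'.  The cells are compatible when no chain demands 0
    while another demands 1, so one broadcast bit can feed all chains.
    Hypothetical helper illustrating the abstract's idea, not the paper's code.
    """
    specified = {b for b in slice_bits if b != 'X'}
    if len(specified) <= 1:                            # no 0/1 conflict
        return specified.pop() if specified else '0'   # broadcast value
    return None                                        # conflict: serial mode needed

print(broadcast_slice(['1', 'X', '1', 'X']))  # '1'  -> broadcast mode
print(broadcast_slice(['0', '1', 'X']))       # None -> incompatible
```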

3.
Test vector compression has become a key technique for reducing IC test time and cost, driven by the explosion of system-on-chip (SoC) test data in recent years. To effectively reduce the bandwidth required between the automatic test equipment (ATE) and the circuit under test (CUT), a novel VSPTIDR (variable shifting prefix-tail identifier reverse) code for test stimulus data compression is designed. The encoding scheme is defined and analyzed in detail, and the decoder is presented and discussed. When the probability of 0 bits in the test set is greater than 0.92, the compression ratio of the VSPTIDR code is better than that of the frequency-directed run-length (FDR) code, as shown by both theoretical analysis and experiments. The on-chip area overhead of the VSPTIDR decoder is about 15.75% less than that of the FDR decoder.
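The abstract does not spell out the VSPTIDR codewords, but the FDR code used as the baseline is well documented: runs of 0s terminated by a 1 fall into groups of geometrically growing size, each coded as a unary-style prefix plus an equal-length tail. A sketch of that baseline encoder:

```python
def fdr_encode_run(run_len: int) -> str:
    """Encode one run of 0s (terminated by a 1) with the classic FDR code.

    Group k covers run lengths 2**k - 2 .. 2**(k+1) - 3; the codeword is
    a (k-1)-ones-plus-'0' prefix followed by a k-bit tail.  This is the
    baseline only; VSPTIDR itself is not specified in the abstract.
    """
    k = 1
    while run_len > 2 ** (k + 1) - 3:      # find the group containing run_len
        k += 1
    prefix = '1' * (k - 1) + '0'
    tail = format(run_len - (2 ** k - 2), f'0{k}b')
    return prefix + tail

def fdr_encode(test_data: str) -> str:
    """Encode a fully specified bit stream as runs of 0s ending in a 1."""
    out, run = [], 0
    for bit in test_data:
        if bit == '0':
            run += 1
        else:                              # a '1' terminates the current run
            out.append(fdr_encode_run(run))
            run = 0
    return ''.join(out)

print(fdr_encode("000000101"))   # run of 6 -> '110000', run of 1 -> '01'
```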

4.
Scan architectures, though widely used in modern designs for testing purposes, are expensive in test data volume and power consumption. To solve these problems, we propose in this paper to modify an existing test data compression technique (Wang Z, Chakrabarty K in Test data compression for IP embedded cores using selective encoding of scan slices. IEEE International Test Conference, paper 24.3, 2005) so that it simultaneously addresses test data volume and power consumption reduction for scan testing of embedded Intellectual Property (IP) cores. Compared to the initial solution, which fills don't-care bits with the sole aim of reducing test data volume, here the assignment is performed to also minimize power consumption. The proposed power-aware test data compression technique is applied to the ISCAS'89 and ITC'99 benchmark circuits and to a number of industrial circuits. Results show that up to 14× reduction in test data volume and 98% test power reduction can be obtained simultaneously.

5.
Chain codes are the most size-efficient representations of rasterised binary shapes and contours. This paper presents a new lossless chain code compression method based on a move-to-front transform and adaptive run-length encoding. The former reduces the information entropy of the chain code, whilst the latter compresses the entropy-reduced chain code by coding repetitions of chain code symbols and their combinations with a variable-length model. The entropy reduction is highly efficient, and in comparison to other state-of-the-art compression methods the newly proposed method yields, on average, better compression.
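A move-to-front transform keeps a table of symbols ordered by recency, so a chain code that repeats the same direction emits mostly zeros, which is exactly what a run-length coder likes. A minimal sketch of the standard transform (the adaptive run-length stage is the paper's own contribution and is not reproduced here):

```python
def move_to_front(symbols, alphabet):
    """Move-to-front transform: recently seen symbols get small indices.

    On chain codes, where the same direction symbol tends to repeat,
    the output is dominated by 0s and small values, lowering entropy
    before run-length coding.
    """
    table = list(alphabet)
    out = []
    for s in symbols:
        i = table.index(s)                # current rank of the symbol
        out.append(i)
        table.insert(0, table.pop(i))     # move it to the front
    return out

# 8-direction Freeman chain code; repeated directions map to 0
print(move_to_front([2, 2, 2, 3, 3, 2], range(8)))  # [2, 0, 0, 3, 0, 1]
```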

6.
Test data volume has increased enormously with the rising on-chip complexity of integrated circuits, inflating test data transportation time and tester memory requirements, while poorly correlated test bits aggravate the test power problem. This paper presents a two-stage block-merging-based test data minimization scheme that reduces test bits, test time and test power. The test data are partitioned into fixed-size blocks, which are compressed using a two-stage encoding technique. In stage one, successive compatible blocks are merged into a single representative block. In stage two, the retained pattern block is further encoded according to which of ten subcases holds between the two sub-blocks obtained by splitting it into halves; non-compatible blocks are also split into two sub-blocks and, where possible, encoded with fewer bits. A decompression architecture to retrieve the original test data is presented. Simulation results for various ISCAS'89 benchmark circuits confirm its effectiveness in achieving better compression.
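Stage one's merge test can be stated bit-wise: two blocks are compatible when no position is specified to conflicting values, and the representative block takes the specified value at each position. A small sketch under that reading of the abstract:

```python
def merge(a: str, b: str):
    """Merge two test-data blocks if they are bit-wise compatible.

    Positions must agree wherever both blocks are specified; X's adopt
    the other block's value.  Illustrates stage one of the abstract's
    scheme (successive compatible blocks collapse into a representative).
    """
    out = []
    for x, y in zip(a, b):
        if x != 'X' and y != 'X' and x != y:
            return None                     # conflict: blocks can't merge
        out.append(x if x != 'X' else y)
    return ''.join(out)

print(merge("1X0X", "10XX"))   # '100X' -> one representative block
print(merge("1X0X", "0XXX"))   # None   -> blocks kept separate
```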

7.
In this paper, we present two multistage compression techniques to reduce the test data volume in scan test applications. We propose two encoding schemes, namely alternating frequency-directed equal-run-length (AFDER) coding and run-length-based Huffman coding (RLHC). Together with the nine-coded compression technique, these schemes enhance the test data compression ratio. In the first stage, the pre-generated test cubes with unspecified bits are encoded using the nine-coded compression scheme. The proposed encoding schemes then exploit the properties of the compressed data to enhance compression further. This multistage compression is especially effective when the percentage of don't-cares in a test set is very high. We also present a simple decoder architecture to recover the original data. Experimental results on ISCAS'89 benchmark circuits confirm average compression ratios of 74.2% and 77.5% with the proposed 9C-AFDER and 9C-RLHC schemes respectively.
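RLHC's core idea, assigning the shortest codewords to the most frequent run lengths, is plain Huffman coding over a run-length alphabet. A generic sketch (the paper's exact alphabet and its nine-coded first stage are not reproduced):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code over symbol frequencies (here: run lengths).

    Frequent run lengths in the stage-one output get the shortest
    codewords.  Tie-breaking and the alphabet are assumptions, not the
    paper's construction.
    """
    heap = [(freq, i, {sym: ''}) for i, (sym, freq)
            in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)        # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in c1.items()}
        merged.update({s: '1' + c for s, c in c2.items()})
        count += 1                             # unique tie-break index
        heapq.heappush(heap, (f1 + f2, count, merged))
    return heap[0][2]

runs = [1, 1, 1, 2, 2, 5, 1, 2, 1]     # run lengths from a stage-one pass
print(huffman_code(runs))               # {1: '1', 2: '01', 5: '00'}
```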

8.
This article addresses a run-length-encoding-based test data compression technique. The scheme performs Huffman coding on different parts of the test data file separately. Up to a 6% improvement in compression ratio and a 29% improvement in test application time can be achieved at the cost of only about 6.5% in decoder area. We have compared our results with other contemporary works reported in the literature; in most cases, our scheme produces a better compression ratio with much lower area requirements.

9.
The pattern run-length coding approach to test data compression is extended by introducing a don't-care-bit (X) propagation strategy. The test sets of multiple cores in a core-based system-on-chip (SoC) are unified into a single set, which is compressed by the extended coding technique. A reconfigurable scan test application mechanism is presented in which test data for multiple cores are scanned in and captured jointly, making SoC test application more efficient with little added hardware overhead. The proposed union test technique is applied to an academic SoC embedding six large ISCAS'89 benchmarks, and to an ITC'02 benchmark circuit. Experimental results show that, compared with existing schemes in which each core's test set is compressed and applied independently of the other cores, the proposed scheme not only improves test data compression/decompression but also removes redundant shift and capture cycles during scan testing, effectively decreasing SoC test application time.

10.
An Efficient Test Data Compression Technique Based on Codes
This paper proposes a new test data compression/decompression algorithm called hybrid run-length coding, which jointly considers the test data compression ratio, the overhead of the corresponding hardware decoding circuit, and the total test time. The algorithm uses variable-to-variable-length coding, mapping runs of different lengths to codewords of different lengths, and thereby achieves a good compression ratio. To improve the compression ratio further, a don't-care-bit filling method and a test vector ordering algorithm are also proposed to preprocess the test data before encoding. Moreover, the design of the hardware decoding circuit was taken into account throughout the development of hybrid run-length coding, keeping the hardware overhead as small as possible and reducing the total test time. Finally, experimental results on ISCAS'89 benchmark circuits demonstrate the effectiveness of the proposed algorithm.

11.
An SOC Test Data Compression Method Using Hybrid Run-Length Coding
方建平  郝跃  刘红侠  李康 《电子学报》2005,33(11):1973-1977
This paper proposes an effective run-length-based test data compression/decompression algorithm, hybrid run-length coding, whose outstanding features are a high compression ratio and a low hardware overhead for the corresponding decoding circuit. Because the compression ratio of the coding algorithm depends strongly on how the don't-care bits in the test data are filled, an iterative ordering-and-filling algorithm for the don't-care bits is also proposed to further improve coding efficiency. Theoretical analysis and experimental results on several ISCAS'89 benchmark circuits demonstrate the effectiveness of hybrid run-length coding and the iterative ordering-filling algorithm.

12.
方建平  郝跃  刘红侠  李康 《半导体学报》2005,26(11):2062-2068
This paper proposes a new test data compression/decompression algorithm called hybrid run-length coding, which jointly considers the test data compression ratio, the overhead of the corresponding hardware decoding circuit, and the total test time. The algorithm uses variable-to-variable-length coding, mapping runs of different lengths to codewords of different lengths, and thereby achieves a good compression ratio. To improve the compression ratio further, a don't-care-bit filling method and a test vector ordering algorithm are also proposed to preprocess the test data before encoding. Moreover, the design of the hardware decoding circuit was taken into account throughout the development of hybrid run-length coding, keeping the hardware overhead as small as possible and reducing the total test time. Finally, experimental results on ISCAS'89 benchmark circuits demonstrate the effectiveness of the proposed algorithm.

13.
While integrated circuits of ever-increasing size and complexity necessitate larger test sets to ensure high test quality, the consequent test time and data volume translate into elevated test costs. Test data compression solutions address this problem by storing and delivering stimuli in a compressed format. The effectiveness of these techniques, however, relies strongly on the distribution of the specified bits in the test vectors. In this paper, we propose a scan cell partitioning technique that ensures specified bits are uniformly distributed across the scan slices, especially for test vectors with a higher density of specified bits. The partitioning process is driven by an integer linear programming (ILP) formulation, in which layout and routing constraints can also be accounted for. While the proposed technique can be applied to increase the effectiveness of any combinational decompression architecture, in this paper we present its application in conjunction with a fan-out based decompression architecture. The experimental results confirm the compression enhancement of the proposed methodology.
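The paper formulates the partitioning as an ILP; as a loose stand-in for the same objective, a greedy longest-processing-time assignment spreads the cells with the most specified bits across slices first. All names below are illustrative:

```python
def partition_cells(care_counts, n_slices):
    """Greedily spread scan cells across slices to balance specified bits.

    care_counts[c] = number of test vectors in which cell c is specified.
    The paper drives this with an ILP (optionally under layout and
    routing constraints); this greedy assignment is only a stand-in for
    the objective of a uniform specified-bit distribution.
    """
    slices = [[] for _ in range(n_slices)]
    load = [0] * n_slices
    # place the heaviest cells first, each into the lightest slice so far
    for cell in sorted(range(len(care_counts)), key=lambda c: -care_counts[c]):
        k = load.index(min(load))
        slices[k].append(cell)
        load[k] += care_counts[cell]
    return slices, load

slices, load = partition_cells([9, 7, 6, 4, 3, 1], n_slices=3)
print(load)     # balanced totals: [10, 10, 10]
```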

Ozgur Sinanoglu   received B.S. degrees in Computer Engineering and in Electrical and Electronics Engineering, both from Bogazici University, Turkey, in 1999. He earned his M.S. and Ph.D. degrees in Computer Science and Engineering from the University of California, San Diego, in 2001 and 2004, respectively. Between 2004 and 2006 he worked as a senior design-for-testability engineer at Qualcomm in San Diego, California. Since Fall 2006 he has been a faculty member in the Mathematics and Computer Science Department of Kuwait University. His research field is design for testability of VLSI circuits.

14.
A novel approach is presented for using an embedded processor to aid in deterministic testing of the other components of a system-on-a-chip (SOC). The tester loads a program along with compressed test data into the processor's on-chip memory. The processor executes the program, which decompresses the test data and applies it to the scan chains in the other components of the SOC to test them. The program itself is very simple and compact, and the decompression is done very rapidly, so this approach reduces both the amount of data that must be stored on the tester and the test time. Moreover, it enables at-speed scan shifting even with a slow tester (i.e., a tester whose maximum clock rate is slower than the SOC's normal operating clock rate). A procedure is described for converting a set of test cubes (i.e., test vectors in which the unspecified inputs are left as X's) into a compressed form, and a program that can be run on an embedded processor is given for decompressing the test cubes and applying them to the scan chains on the chip. Experimental results indicate that a significant amount of compression can be achieved, resulting in less data that must be stored on the tester (i.e., a smaller tester memory requirement) and less time to transfer the test data from the tester to the chip.
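The abstract leaves the encoding unspecified, but the overall shape of such a decompression program is a tight loop that expands compressed words and shifts bits into the scan chains through a memory-mapped port. A hypothetical sketch with run-length-coded input:

```python
def decompress_to_scan(words, apply_bit):
    """Decompression loop of the kind an embedded processor could run.

    words is a stream of (value, run_length) pairs standing in for the
    compressed test cubes; apply_bit shifts one bit into a scan chain
    (in hardware, a write to a memory-mapped scan port).  The paper's
    actual encoding and scan interface are not given in the abstract;
    both names here are hypothetical.
    """
    for value, run_length in words:
        for _ in range(run_length):
            apply_bit(value)

chain = []
decompress_to_scan([(0, 5), (1, 2), (0, 3)], chain.append)
print(''.join(map(str, chain)))   # 0000011000
```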

15.
To reduce test data volume and test time, this paper proposes a test data compression method for multiple scan chains based on mirror-symmetric reference slices. Each scan slice is compared for compatibility against two mutually mirror-symmetric reference slices, which raises the probability of a match. If a scan slice is compatible with a reference slice, only a few code bits are needed to represent it, and it can be loaded into the multiple scan chains in parallel; if it is not, the reference slice is replaced by that scan slice. A longest-compatibility strategy is proposed to choose among compatibility relations when a scan slice satisfies more than one at a time. Codewords for the different compatibility cases are assigned according to Huffman coding, which further improves the compression ratio. Experimental results show that the average test data compression ratio of the proposed method reaches 69.13%.

16.
A new run-length-based test data compression scheme, equal-run-length coding (ERLC), is presented. It considers runs of both 0s and 1s and exploits the relationship between two consecutive runs, using a shorter codeword to represent the entire second run of two consecutive equal-length runs. A scheme for filling the don't-care bits is proposed to maximize the number of consecutive equal-length runs. Compared with other known schemes, the proposed scheme achieves a higher compression ratio with low area overhead. The merits of the proposed algorithm are verified experimentally on the larger ISCAS'89 benchmark circuits.
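The distinctive ERLC step is replacing the second of two equal-length consecutive runs with one short codeword. The sketch below tokenises a bit stream accordingly; the token names are placeholders, since the abstract does not give the actual codewords:

```python
def erlc_runs(bits: str):
    """Tokenise a stream into runs of 0s and 1s, flagging equal-length pairs.

    The abstract's key idea: when two consecutive runs have the same
    length, the whole second run is replaced by one short codeword.
    'EQ' and ('RUN', n) are placeholder tokens, not the paper's codes.
    """
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1                           # extend the current run
        runs.append(j - i)
        i = j
    tokens = []
    for k, length in enumerate(runs):
        if k > 0 and length == runs[k - 1]:
            tokens.append('EQ')              # short codeword for a repeat
        else:
            tokens.append(('RUN', length))
    return tokens

print(erlc_runs("000111001100000"))
# [('RUN', 3), 'EQ', ('RUN', 2), 'EQ', ('RUN', 5)]
```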

17.
This paper proposes an adaptive EFDR (Extended Frequency-Directed Run-length) coding method for test data compression. Building on EFDR coding, it adds a parameter N that represents the difference between the suffix and prefix code lengths; for each test vector in the test set, the most suitable value of N is selected according to the vector's run-length distribution, which improves coding efficiency. For decoding, the run lengths of the original test data are recovered from the codewords by simple arithmetic, and codewords under different values of N can all be decoded by the same circuit, so the decoder has low hardware overhead. Experimental results on some of the ISCAS-89 benchmark circuits show an average compression ratio of 69.87%, a 4.07% improvement over the original EFDR code.

18.
We examine how exponential-Golomb codes and subexponential codes can be used for the compression of scan test data in core-based system-on-a-chip (SOC) designs. These codes are well known in the data compression domain, but their application to SOC testing had not been explored before. We show that these codes often provide slightly higher compression than alternative methods proposed recently.
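Exponential-Golomb codes are standard, so the encoder can be given concretely; applying it to scan-test run lengths, as the paper suggests, then makes it a drop-in replacement for FDR-style codes:

```python
def exp_golomb(n: int, k: int = 0) -> str:
    """Order-k exponential-Golomb codeword for a non-negative integer.

    Order 0 writes n+1 in binary, preceded by as many 0s as there are
    bits after the leading 1; order k first splits off the k low bits.
    """
    q = n >> k                        # high part: order-0 coded
    body = bin(q + 1)[2:]             # binary of q+1, leading '1' included
    prefix = '0' * (len(body) - 1)    # unary length marker
    tail = format(n & ((1 << k) - 1), f'0{k}b') if k else ''
    return prefix + body + tail

for n in range(4):
    print(n, exp_golomb(n))   # 0:'1'  1:'010'  2:'011'  3:'00100'
```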

19.
A new scan architecture for both low-power testing and test volume compression is proposed. For low-power testing, only a subset of scan cells is loaded with test stimulus and captured with test responses, by freezing the remaining scan cells according to the distribution of unspecified bits in the test cubes. To optimize this process, a novel graph-based heuristic is proposed to partition the scan chains into several segments. For test volume reduction, a new LFSR-reseeding-based test compression scheme is proposed that virtually reduces the maximum number of specified bits in the test cube set, s_max, on which the performance of conventional LFSR reseeding strongly depends. By using different clock phases between the LFSR and the scan chains, and by grouping the scan cells with a graph-based grouping heuristic, s_max can be virtually reduced. In addition, the reduced scan rippling in the proposed test compression scheme helps lower test power consumption, while the reuse of some test responses as subsequent test stimuli in the low-power testing scheme reduces the test volume. Experimental results on the largest ISCAS'89 benchmark circuits show that the proposed technique significantly reduces both the average and the peak switching activity, and aggressively reduces the test data volume, with little area overhead compared to previous methods.
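In LFSR reseeding, the tester stores only a seed per test cube; shifting the LFSR regenerates a full pattern whose specified bits the seed was solved for. A toy expansion loop (polynomial and width chosen arbitrarily for illustration):

```python
def lfsr_expand(seed, taps, n_bits):
    """Expand a compact seed into a scan-chain bit stream with an LFSR.

    Only the seed is stored on the tester; shifting the LFSR regenerates
    a test cube whose specified bits the seed was solved for.  The
    polynomial (taps) and width here are arbitrary examples.
    """
    state = seed
    out = []
    width = max(taps)
    for _ in range(n_bits):
        out.append(state & 1)             # bit shifted into the scan chain
        fb = 0
        for t in taps:                    # XOR of the tap positions
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (width - 1))
    return out

# x^4 + x + 1, a maximal-length 4-bit LFSR
print(lfsr_expand(seed=0b1001, taps=[4, 1], n_bits=8))  # [1, 0, 0, 1, 0, 0, 0, 1]
```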

20.
Anti-random testing has proved useful in a series of empirical evaluations. Its basic premise is to choose new test vectors that are as far away from the existing test inputs as possible, where the distance measure is either Hamming distance or Cartesian distance. Unfortunately, the method essentially requires enumeration of the input space and a computation against each existing input vector when used on an arbitrary set of existing test data, which prevents scale-up to large test sets and/or long input vectors. We present and empirically evaluate a technique for generating anti-random vectors that is computationally feasible for large input vectors and long sequences of tests. We also show how this fast anti-random test generation (FAR) can take retained state into account (i.e., the effects of subsequent inputs on each other). We evaluate the effectiveness of anti-random vectors for behavioral model verification using branch coverage as the testing criterion.
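The brute-force version of the anti-random premise, picking the candidate that maximizes the minimum Hamming distance to all existing vectors, makes the scalability problem concrete: it enumerates the whole input space. A small sketch:

```python
def next_antirandom(existing, width):
    """Pick the width-bit vector farthest (by minimum Hamming distance)
    from all existing test vectors: the basic anti-random premise.

    Exhaustive over 2**width candidates, so it only works for small
    widths; the paper's FAR method exists precisely because this brute
    force does not scale.
    """
    def hamming(a, b):
        return bin(a ^ b).count('1')
    best, best_d = None, -1
    for v in range(2 ** width):
        d = min(hamming(v, e) for e in existing)   # distance to nearest test
        if d > best_d:
            best, best_d = v, d
    return best

tests = [0b0000, 0b1111]
print(format(next_antirandom(tests, 4), '04b'))   # '0011', at distance 2
```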
