20 similar documents found; search took 15 ms
1.
Test data volume has grown enormously with the rising on-chip complexity of integrated circuits, increasing both test data transportation time and tester memory requirements, while non-correlated test bits aggravate the test power problem. This paper presents a two-stage block-merging test data minimization scheme that reduces test bits, test time and test power. The test data are partitioned into fixed-size blocks, which are compressed using a two-stage encoding technique. In stage one, successive compatible blocks are merged to retain a representative block. In stage two, the retained pattern block is further encoded based on which of ten different subcases holds between the two sub-blocks formed by splitting it into halves. Non-compatible blocks are also split into two sub-blocks, which are encoded with fewer bits where possible. A decompression architecture to retrieve the original test data is presented. Simulation results for different ISCAS'89 benchmark circuits reflect its effectiveness in achieving better compression.
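The block-merging idea shared by this and several later entries can be sketched generically: two blocks are compatible when they agree at every position where both are specified, so a run of consecutive compatible blocks can be replaced by one representative block plus a run count. This is an illustrative sketch, not the paper's exact two-stage encoding:

```python
def compatible(a: str, b: str) -> bool:
    """Blocks are compatible if, at every position, the bits agree
    or at least one of them is a don't-care ('x')."""
    return all(p == q or 'x' in (p, q) for p, q in zip(a, b))

def merge(a: str, b: str) -> str:
    """Representative block: specified bits win over don't-cares."""
    return ''.join(q if p == 'x' else p for p, q in zip(a, b))

def merge_runs(blocks):
    """Greedily merge maximal runs of consecutive compatible blocks,
    returning (representative, run_length) pairs."""
    runs = []
    for blk in blocks:
        if runs and compatible(runs[-1][0], blk):
            rep, n = runs[-1]
            runs[-1] = (merge(rep, blk), n + 1)
        else:
            runs.append((blk, 1))
    return runs
```

For example, `merge_runs(["01xx", "0x10", "1100"])` merges the first two blocks into the representative `"0110"` with run length 2, while the third, incompatible block starts a new run.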
2.
Test data compression is an effective methodology for reducing test data volume and testing time. A novel compatibility-based test data compression method is presented in this paper. Exploiting the high compression efficiency of an extended frequency-directed run-length coding algorithm, the proposed method groups the test vectors that have the fewest incompatible bits and amalgamates them into a single vector by assigning 1 or 0 to unspecified bits and c to incompatible bits. Runs of the three symbols 1, 0 and c can then be encoded simultaneously. In addition, a corresponding decoder architecture with low hardware overhead has been developed. To evaluate the effectiveness of the proposed approach, it is applied to the International Symposium on Circuits and Systems (ISCAS) benchmark circuits. The experimental results show that the proposed algorithm achieves a higher compression ratio than conventional algorithms.
3.
Test data volume has increased many-fold due to the need for quality assurance of the various parts of circuit designs at the deep-submicron level. Huge memory is required to store this enormous test data, which increases not only the cost of the ATE but also the test application time. This paper presents an optimal selective count compatible pattern run-length (OSCCPRL) encoding scheme for achieving maximum compression and reducing test cost. OSCCPRL is a hybrid technique that combines the benefits of two other techniques proposed here: 10-coded run-length (10C) and selective CCPRL (SCCPRL), which improve upon the 9C and CCPRL techniques respectively. In OSCCPRL, the entire data set is segmented into blocks and compressed using inter-block and intra-block merging techniques. SCCPRL encodes the compatible blocks, while 10C encodes at the sub-block (half-block) level. If no compatibility is found at the block or sub-block level, the unique pattern is kept as-is in the encoded data along with the necessary categorization bits. The decompression architecture is described, and it is shown how, with the addition of just a few FSM states, better test data compression can be achieved than with previous schemes. Simulation results for various ISCAS benchmark circuits show that the proposed OSCCPRL technique provides an average compression efficiency of around 80%.
4.
Don't-care bit filling algorithm for dual-run-length coding (Cited: 2; self-citations: 2; other citations: 0)
Dual-run-length coding is an important method for compressing integrated-circuit test data, and it can be divided into two steps: don't-care bit filling and run-length encoding. Most existing work focuses on the second step, proposing a variety of encoding and compression algorithms, while paying insufficient attention to the first-step don't-care filling algorithm, thereby losing some potential compression. This paper first analyzes the importance of don't-care filling for the test data compression ratio, and then proposes a novel don't-care filling algorithm for dual-run-length coding that is applicable to different coding methods and yields a higher test data compression ratio. The algorithm can be combined with many dual-run-length coding algorithms and has no impact on the decoder hardware structure or the chip implementation flow. Experiments on the ISCAS'89 benchmark circuits show that, for mainstream dual-run-length coding algorithms, combining them with this don't-care filling algorithm improves the test data compression ratio by 6%-9%.
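One common filling heuristic of the kind this step refers to is to give each don't-care the value of the nearest specified bit to its left, so that existing runs are extended rather than broken. A minimal sketch (the paper's actual algorithm is more elaborate):

```python
def fill_dont_cares(vector: str) -> str:
    """Fill 'x' bits to extend the current run: each don't-care takes
    the value of the nearest specified bit to its left; leading x's
    take the first specified bit (defaulting to '0')."""
    bits = list(vector)
    # Resolve leading don't-cares using the first specified bit.
    first = next((b for b in bits if b != 'x'), '0')
    prev = first
    for i, b in enumerate(bits):
        if b == 'x':
            bits[i] = prev
        else:
            prev = b
    return ''.join(bits)
```

For example, `"xx01xx10x"` becomes `"000111100"`: three runs survive instead of the many short runs a random fill could create, which is what makes the subsequent run-length encoding pay off.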
5.
Growing test data volume and excessive test application time are two serious concerns in scan-based testing for SoCs. This paper presents an efficient test-independent compression technique based on block merging and eight-codeword coding (BM-8C) to reduce test data volume and test application time. Compression is achieved by merging consecutive compatible blocks and encoding the merged blocks with exactly eight codewords. The proposed scheme compresses the pre-computed test data without requiring any structural information about the circuit under test, and is therefore applicable to IP cores in SoCs. Experimental results demonstrate that the BM-8C technique achieves an average compression ratio of up to 68.14% with significantly lower test application time.
6.
N. Badereddine Z. Wang P. Girard K. Chakrabarty A. Virazel S. Pravossoudovitch C. Landrault 《Journal of Electronic Testing》2008,24(4):353-364
Scan architectures, though widely used in modern designs for testing purposes, are expensive in test data volume and power consumption. To address these problems, we propose in this paper to modify an existing test data compression technique (Wang Z, Chakrabarty K, "Test data compression for IP embedded cores using selective encoding of scan slices," IEEE International Test Conference, paper 24.3, 2005) so that it simultaneously addresses test data volume and power consumption reduction for scan testing of embedded Intellectual Property (IP) cores. Compared to the initial solution, which fills don't-care bits with the aim of reducing only test data volume, here the assignment is performed also to minimize power consumption. The proposed power-aware test data compression technique is applied to the ISCAS'89 and ITC'99 benchmark circuits and to a number of industrial circuits. Results show that up to 14× reduction in test data volume and 98% test power reduction can be obtained simultaneously.
7.
S. Sivanantham M. Padmavathy Ganga Gopakumar P.S. Mallick J. Raja Paul Perinbam 《Integration, the VLSI Journal》2014
In this paper, we present two multistage compression techniques to reduce test data volume in scan test applications. We propose two encoding schemes, namely alternating frequency-directed equal-run-length (AFDER) coding and run-length-based Huffman coding (RLHC). Together with the nine-coded compression technique, these schemes enhance the test data compression ratio. In the first stage, the pre-generated test cubes with unspecified bits are encoded using the nine-coded compression scheme. The proposed encoding schemes then exploit the properties of the compressed data to enhance compression further. This multistage compression is especially effective when the percentage of don't-cares in a test set is very high. We also present a simple decoder architecture to restore the original data. Experimental results on ISCAS'89 benchmark circuits confirm average compression ratios of 74.2% and 77.5% with the proposed 9C-AFDER and 9C-RLHC schemes, respectively.
8.
Haiying Yuan Changshi Zhou Xun Sun Kai Zhang Tong Zheng Chang Liu Xiuyu Wang 《Journal of Electronic Testing》2018,34(6):685-695
Massive test data volume and excessive test power consumption have become two strict challenges for very-large-scale integrated circuit testing. In a BIST architecture, the unspecified bits are randomly filled by the LFSR reseeding-based test compression scheme, which produces enormous switching activity during circuit testing and thereby causes high test power consumption in scan designs. To solve this thorny problem, an LFSR reseeding-oriented low-power test compression architecture is developed, with an optimized encoding algorithm that can be used in conjunction with any LFSR reseeding scheme to effectively reduce test storage and power consumption; it comprises test-cube-based block processing, division into hold partition sets, and updating of the hold partition sets. The main contributions are decreasing logic transitions in the scan chains and reducing the number of specified bits in the test cubes generated via LFSR reseeding. Experimental results demonstrate that the proposed scheme achieves higher test compression efficiency than existing methods while significantly reducing test power consumption, with acceptable area overhead for most benchmark circuits.
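The LFSR reseeding idea referred to here (and in entries 9 and 20) can be illustrated with a minimal, hypothetical Fibonacci LFSR: a short seed is expanded into a long pseudo-random bit stream, and a seed encodes a test cube when the stream matches every specified bit of the cube. Computing such a seed in practice amounts to solving a linear system over GF(2), which is omitted from this sketch; the seed, tap positions, and cube below are illustrative only:

```python
def lfsr_expand(seed, taps, n):
    """Expand a seed into n output bits with a Fibonacci LFSR.
    seed: list of 0/1 (initial register state); taps: stage indices
    XORed together to form the feedback bit."""
    state = list(seed)
    out = []
    for _ in range(n):
        out.append(state[-1])          # output the last stage
        fb = 0
        for t in taps:
            fb ^= state[t]             # feedback = XOR of tap stages
        state = [fb] + state[:-1]      # shift right, insert feedback
    return out

def seed_covers_cube(seed, taps, cube):
    """A seed encodes a test cube if the expanded stream agrees with
    every specified (non-'x') bit of the cube."""
    stream = lfsr_expand(seed, taps, len(cube))
    return all(c == 'x' or int(c) == s for c, s in zip(cube, stream))
```

The compression win comes from the don't-cares: the fewer specified bits a cube has, the more seeds satisfy it, so a short seed can stand in for a long cube; this is also why the fill chosen for the remaining x's determines the switching activity during scan-in.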
9.
Hua-Guo Liang Sybille Hellebrand Hans-Joachim Wunderlich 《Journal of Electronic Testing》2002,18(2):159-170
In this paper a novel architecture for scan-based mixed mode BIST is presented. To reduce the storage requirements for the deterministic patterns it relies on a two-dimensional compression scheme, which combines the advantages of known vertical and horizontal compression techniques. To reduce both the number of patterns to be stored and the number of bits to be stored for each pattern, deterministic test cubes are encoded as seeds of an LFSR (horizontal compression), and the seeds are again compressed into seeds of a folding counter sequence (vertical compression). The proposed BIST architecture is fully compatible with standard scan design, simple and flexible, so that sharing between several logic cores is possible. Experimental results show that the proposed scheme requires less test data storage than previously published approaches providing the same flexibility and scan compatibility.
10.
Test data compression is an efficient methodology for reducing the large test data volume of system-on-a-chip designs. In this paper, a variable-to-variable-length compression method based on encoding runs of compatible patterns is presented. The test data in the test set are divided into a number of sequences, each constituted by a series of compatible patterns, for which information such as pattern length and number of pattern runs is encoded. Theoretical analyses of the proposed Multi-Dimensional Pattern Run-Length Compression (MD-PRC) are given for one-dimensional through three-dimensional PRC. To demonstrate the effectiveness of the proposed method, experiments are conducted on both the larger ISCAS'89 benchmarks and industrial circuits with large numbers of don't-cares. Results show that this method achieves significant compression of test data volume and adapts well to industrial-size circuits.
11.
《IEEE Transactions on Very Large Scale Integration (VLSI) Systems》2008,16(11):1429-1440
12.
Ozgur Sinanoglu 《Journal of Electronic Testing》2008,24(5):439-448
While integrated circuits of ever-increasing size and complexity necessitate larger test sets to ensure high test quality, the consequent test time and data volume translate into elevated test costs. Test data compression solutions have been proposed to address this problem by storing and delivering stimuli in a compressed format. The effectiveness of these techniques, however, relies strongly on the distribution of the specified bits of the test vectors. In this paper, we propose a scan cell partitioning technique that ensures the specified bits are uniformly distributed across the scan slices, especially for test vectors with a higher density of specified bits. The proposed scan cell partitioning process is driven by an integer linear programming (ILP) formulation, in which it is also possible to account for layout and routing constraints. While the proposed technique can be applied to increase the effectiveness of any combinational decompression architecture, in this paper we present its application in conjunction with a fan-out-based decompression architecture. The experimental results confirm the compression enhancement of the proposed methodology.
Ozgur Sinanoglu received a B.S. degree in Computer Engineering, and another B.S. degree in Electrical and Electronics Engineering, both from Bogazici University in Turkey in 1999. He earned his M.S. and Ph.D. degrees in the Computer Science and Engineering department of University of California, San Diego, in 2001 and 2004, respectively. Between 2004 and 2006, he worked as a senior design for testability engineer in Qualcomm, located in San Diego, California. Since Fall 2006, he has been a faculty member in the Mathematics and Computer Science Department of Kuwait University. His research field is the design for testability of VLSI circuits.
13.
Test data compression using alternating variable run-length code (Cited: 1; self-citations: 0; other citations: 1)
Bo Ye, Qian Zhao, Duo Zhou, Xiaohua Wang, Min Luo 《Integration, the VLSI Journal》2011,44(2):103-110
This paper presents a unified test data compression approach that simultaneously reduces test data volume, scan power consumption and test application time for a system-on-a-chip (SoC). The approach is based on alternating variable run-length (AVR) codes for test data compression. A formal analysis of scan power consumption and test application time is presented, showing that a careful mapping of the don't-cares in pre-computed test sets to 1s and 0s leads to significant savings in peak and average power consumption without requiring slower scan clocks. The proposed technique also reduces testing time compared to a conventional scan-based scheme. Alternating variable run-length codes can efficiently compress data streams composed of runs of both 0s and 1s. The decompression architecture is also presented. Experimental results for ISCAS'89 benchmark circuits and a production circuit show that the proposed approach greatly reduces test data volume and scan power consumption in all cases.
14.
Test vector compression has become a key technique for reducing IC test time and cost, given the explosion of test data in system-on-chip (SoC) designs in recent years. To effectively reduce the bandwidth requirement between the automatic test equipment (ATE) and the circuit under test (CUT), a novel variable shifting prefix-tail identifier reverse (VSPTIDR) code for test stimulus data compression is designed. The encoding scheme is defined and analyzed in detail, and the decoder is presented and discussed. When the probability of 0 bits in the test set is greater than 0.92, the compression ratio of the VSPTIDR code exceeds that of the frequency-directed run-length (FDR) code, as shown by theoretical analysis and experiments. The on-chip area overhead of the VSPTIDR decoder is about 15.75% less than that of the FDR decoder.
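The FDR code used as the baseline here (and modified in entry 19) encodes each run of 0s terminated by a 1: group k covers run lengths 2^k − 2 through 2^(k+1) − 3, with a k-bit prefix of (k−1) ones followed by a zero and a k-bit tail giving the offset within the group. A sketch of the baseline FDR encoder (illustrative; not the paper's VSPTIDR code):

```python
def fdr_encode_run(run_len: int) -> str:
    """Encode one run of `run_len` 0s (terminated by a 1) in the
    frequency-directed run-length (FDR) code."""
    k = 1
    while run_len > 2 ** (k + 1) - 3:    # find the group containing run_len
        k += 1
    prefix = '1' * (k - 1) + '0'         # k-bit group prefix
    offset = run_len - (2 ** k - 2)      # position within group k
    tail = format(offset, 'b').zfill(k)  # k-bit tail
    return prefix + tail

def fdr_encode(bits: str) -> str:
    """Encode a fully specified test stream as runs of 0s ending in 1."""
    out, run = [], 0
    for b in bits:
        if b == '0':
            run += 1
        else:
            out.append(fdr_encode_run(run))
            run = 0
    return ''.join(out)
```

So a run of 0 zeros maps to "00", a run of 5 to "1011", a run of 6 to "110000": short runs get short codewords, which is why the code rewards test sets dominated by 0s, and why the 0.92 zero-probability threshold quoted above matters.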
15.
《IEEE Transactions on Information Theory》1978,24(6):683-692
Distortion-rate theory is used to derive absolute performance bounds and encoding guidelines for direct fixed-rate minimum mean-square error data compression of the discrete Fourier transform (DFT) of a stationary real or circularly complex sequence. Both real-part-imaginary-part and magnitude-phase-angle encoding are treated. General source coding theorems are proved in order to justify using the optimal test channel transition probability distribution for allocating the information rate among the DFT coefficients and for calculating arbitrary performance measures on actual optimal codes. This technique has yielded a theoretical measure of the relative importance of phase angle over the magnitude in magnitude-phase-angle data compression. The result is that the phase angle must be encoded with 0.954 nats, or 1.37 bits, more rate than the magnitude for rates exceeding 3.0 nats per complex element. This result and the optimal error bounds are compared to empirical results for efficient quantization schemes.
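The quoted rate gap is a unit conversion from nats to bits (1 nat = 1/ln 2 ≈ 1.4427 bits):

```latex
\Delta R \;=\; R_{\text{phase}} - R_{\text{magnitude}}
         \;=\; 0.954\ \text{nats}
         \;=\; \frac{0.954}{\ln 2}\ \text{bits}
         \;\approx\; 1.37\ \text{bits}
```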
16.
17.
Compression of the layered depth image (Cited: 1; self-citations: 0; other citations: 1)
Jiangang Duan Jin Li 《IEEE transactions on image processing》2003,12(3):365-372
A layered depth image (LDI) is a popular new representation and rendering method for objects with complex geometries. Like a two-dimensional (2-D) image, the LDI consists of an array of pixels; unlike the 2-D image, however, an LDI pixel carries depth information, and there can be multiple layers at each pixel location. We develop a novel LDI compression algorithm that handles the multiple layers and the depth coding. The algorithm records the number of LDI layers at each pixel location and compresses the LDI color and depth components separately. For LDI layers with sparse pixels, the data are aggregated and then encoded. An empirical rate-distortion model is used to optimally allocate bits among the different components. Compared with benchmark compression tools such as JPEG-2000 and MPEG-4, our scheme improves compression performance significantly.
18.
Chen-Kuei Yang Ja-Chen Lin Wen-Hsiang Tsai 《IEEE Transactions on Communications》1997,45(12):1513-1516
A new color image compression technique based on moment preservation and block truncation coding is proposed. An input image is divided into non-overlapping blocks, and each block pixel is assigned one of two representative colors, which are computed with analytic formulas derived from preserving certain moments of the block. A bit map is then generated for each block to represent the pixels' colors. Different uniformity conditions in the representative colors are also identified and exploited to save code bits. Good average compression ratios, up to about 13, can be achieved, as shown by experimental results.
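The moment-preserving quantizer at the heart of classical block truncation coding can be sketched per block: threshold at the block mean, then choose two levels that exactly preserve the first two sample moments. This is a generic single-channel BTC sketch; the paper extends the idea to color with its own analytic two-color formulas:

```python
import math

def btc_block(pixels):
    """Two-level moment-preserving quantizer for one block.
    Returns (low, high, bitmap); the two levels preserve the block's
    mean and variance exactly."""
    m = len(pixels)
    mean = sum(pixels) / m
    var = sum((p - mean) ** 2 for p in pixels) / m
    sigma = math.sqrt(var)
    bitmap = [1 if p >= mean else 0 for p in pixels]
    q = sum(bitmap)                      # pixels at the high level
    if q in (0, m):                      # uniform block: one level suffices
        return mean, mean, bitmap
    low = mean - sigma * math.sqrt(q / (m - q))
    high = mean + sigma * math.sqrt((m - q) / q)
    return low, high, bitmap

def btc_reconstruct(low, high, bitmap):
    """Rebuild the block from its two levels and bit map."""
    return [high if b else low for b in bitmap]
```

Only the two levels and a one-bit-per-pixel map are stored, which is where the compression comes from; the uniformity conditions the abstract mentions correspond to the degenerate case where one level suffices.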
19.
This paper proposes a SoC test data compression method based on an improved grouping of the FDR code. Simple preprocessing of the don't-care bits in the original test set raises the frequency with which the specified bit 0 appears in runs. Building on the FDR code, its grouping scheme is then modified; theoretical analysis shows that the resulting compression ratio is slightly higher than that of FDR coding, especially for short runs. Programs simulating both coding methods were written in C, and the experimental results demonstrate the effectiveness and high compression of the improved-grouping FDR coding method.
20.
A new scan architecture for both low-power testing and test volume compression is proposed. To meet low-power test requirements, only a subset of scan cells is loaded with test stimulus and captured with test responses, by freezing the remaining scan cells according to the distribution of unspecified bits in the test cubes. To optimize this process, a novel graph-based heuristic is proposed to partition the scan chains into several segments. For test volume reduction, a new LFSR reseeding-based test compression scheme is proposed that virtually reduces the maximum number of specified bits in the test cube set, s_max, on which the performance of a conventional LFSR reseeding scheme strongly depends. By using different clock phases between the LFSR and the scan chains, and grouping the scan cells with a graph-based grouping heuristic, s_max can be virtually reduced. In addition, the reduced scan rippling in the proposed test compression scheme helps lower test power consumption, while the reuse of some test responses as subsequent test stimulus in the low-power testing scheme reduces the test volume. Experimental results on the largest ISCAS'89 benchmark circuits show that the proposed technique significantly reduces both the average and peak switching activity, and aggressively reduces test data volume, with little area overhead compared to previous methods.