20 similar documents found (search time: 0 ms)
1.
《Very Large Scale Integration (VLSI) Systems, IEEE Transactions on》2008,16(11):1429-1440
2.
Selective Data Pruning-Based Compression Using High-Order Edge-Directed Interpolation (Total citations: 1; self-citations: 0; by others: 1)
Vo, D. T.; Sole, J.; Yin, P.; Gomila, C.; Nguyen, T. Q. 《IEEE transactions on image processing》2010,19(2):399-409
3.
Test data compression is an efficient methodology for reducing the large test data volume of system-on-a-chip designs. In this paper, a variable-to-variable-length compression method based on encoding runs of compatible patterns is presented. The test set is divided into a number of sequences, each constituted by a series of compatible patterns, in which information such as the pattern length and the number of pattern runs is encoded. Theoretical analyses of the proposed Multi-Dimensional Pattern Run-Length Compression (MD-PRC) are made as it evolves from one-dimensional to three-dimensional PRC. To demonstrate the effectiveness of the proposed method, experiments are conducted on both the larger ISCAS'89 benchmarks and industrial circuits with large numbers of don't-cares. Results show that this method achieves significant compression of the test data volume and adapts well to industrial-size circuits.
4.
Test data volume has grown many-fold with the need to assure the quality of the various parts of a circuit design at the deep-submicron level. Huge memories are required to store this enormous test data, which increases not only the cost of the ATE but also the test application time. This paper presents an optimal selective count compatible run length (OSCCPRL) encoding scheme for achieving maximum compression and thereby reducing test cost. OSCCPRL is a hybrid technique that amalgamates the benefits of two other techniques proposed here: 10-coded run length (10C) and selective CCPRL (SCCPRL), which improve upon the 9C and CCPRL techniques respectively. In OSCCPRL, the entire data is segmented into blocks and further compressed using inter-block and intra-block merging techniques. SCCPRL is used for encoding the compatible blocks, while 10C encodes at the sub-block (half-block-length) level. If no compatibility is found at the block/sub-block level, the unique pattern is retained as-is in the encoded data along with the necessary categorization bits. The decompression architecture is described, and it is shown how, by adding just a few FSM states, better test data compression can be achieved than with previous schemes. Simulation results for various ISCAS benchmark circuits show that the proposed OSCCPRL technique provides an average compression efficiency of around 80%.
5.
Improving Linear Test Data Compression (Total citations: 1; self-citations: 0; by others: 1)
Balakrishnan K. J. Touba N. A. 《Very Large Scale Integration (VLSI) Systems, IEEE Transactions on》2006,14(11):1227-1237
The output space of a linear decompressor must be sufficiently large to contain all the test cubes in the test set. The ideas proposed in this paper transform the output space of a linear decompressor so as to reduce the number of inputs required, thereby increasing compression, while still keeping all the test cubes in the output space. Scan inversion is used to invert a subset of the scan cells, while reconfiguration modifies the linear decompressor. Any existing method for designing a linear decompressor (either combinational or sequential) can be used first to obtain the best linear decompressor that it can. Using that linear decompressor as a starting point, the proposed methods improve the compression further. The key property of scan inversion is that it is a linear transformation of the output space and, thus, the output space remains a linear subspace spanned by a Boolean matrix. Using this property, a systematic procedure based on linear algebra is described for selecting the set of inverting scan cells to maximize compression. A symbolic Gaussian elimination method for solving a constrained Boolean matrix is proposed and utilized for reconfiguring the linear decompressor. The proposed schemes can be utilized in various design flow scenarios and require little or no hardware overhead. Experiments indicate that significant improvements in compression can be achieved.
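The linear-algebra view underlying this abstract can be made concrete: the output space of a linear decompressor is the GF(2) span of a Boolean generator matrix, and a test cube is reachable only if it lies in that span. The toy check below is my own illustration, not the authors' symbolic elimination procedure; rows are represented as Python ints and reduced with XOR.

```python
# Toy GF(2) linear algebra: rows of a Boolean matrix are bit-vector
# ints; xor_basis performs Gaussian-elimination-style reduction so
# that every kept row has a distinct leading (highest set) bit.

def xor_basis(rows):
    """Build a reduced GF(2) basis from integer bit-vector rows."""
    basis = []
    for r in rows:
        for b in basis:
            r = min(r, r ^ b)          # clear b's leading bit from r if set
        if r:
            basis.append(r)
            basis.sort(reverse=True)   # keep leading bits in decreasing order
    return basis

def in_span(vec, basis):
    """True if vec is a GF(2) combination of the basis rows."""
    for b in basis:
        vec = min(vec, vec ^ b)
    return vec == 0
```

In this picture, maximizing compression corresponds to shrinking the number of free inputs while every test cube still passes an `in_span`-style reachability test.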
6.
Usha Sandeep Mehta; Kankar S. Dasgupta; Nirnjan M. Devashrayee 《Journal of Electronic Testing》2010,26(6):679-688
A compression-decompression scheme based on Huffman codes, the Modified Selective Huffman (MS-Huffman) scheme, is proposed in this paper. The scheme optimizes the parameters that influence test cost: the compression ratio, the on-chip decoder area overhead, and the overall test application time. It is proved theoretically that the proposed scheme gives better test data compression than recently proposed encoding schemes for any test set. A large number of experimental results clearly demonstrate that, compared to other Huffman-code-based schemes, the proposed scheme improves test data compression and reduces both the overall test application time and the on-chip area overhead.
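The selective-Huffman idea that MS-Huffman refines can be sketched as follows: only the k most frequent fixed-size blocks receive Huffman codewords (behind a '1' flag bit), while every other block is emitted raw behind a '0' flag, which keeps the on-chip decoder small. The block size, k, and framing bits here are my illustrative assumptions, not the paper's exact format.

```python
import heapq
from collections import Counter

def huffman_codes(freqs):
    """Symbol -> prefix-free codeword map built from a frequency dict."""
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)        # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, [f1 + f2, n, merged])
        n += 1
    return heap[0][2]

def selective_encode(blocks, k):
    """Huffman-code only the k most frequent blocks; pass the rest raw."""
    table = huffman_codes(dict(Counter(blocks).most_common(k)))
    bits = "".join("1" + table[b] if b in table else "0" + b for b in blocks)
    return bits, table
```

A frequent block then costs its codeword length plus one flag bit, while a rare block costs its raw width plus one, so the decoder only ever stores the small coded table.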
7.
On Using Exponential-Golomb Codes and Subexponential Codes for System-on-a-Chip Test Data Compression (Total citations: 1; self-citations: 0; by others: 1)
We examine how exponential-Golomb codes and subexponential codes can be used for the compression of scan test data in core-based system-on-a-chip (SOC) designs. These codes are well known in the data compression domain, but their application to SOC testing has not been explored before. We show that these codes often provide slightly higher compression than alternative methods that have been proposed recently.
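For reference, the order-k exponential-Golomb codeword for a non-negative integer n is the binary form of n + 2^k preceded by enough zeros to make the code prefix-free; in the scan-test setting the integers being coded would typically be run lengths. A minimal encoder:

```python
def exp_golomb(n, k=0):
    """Order-k exponential-Golomb codeword for a non-negative integer n."""
    v = n + (1 << k)                   # shift n into the k-th code group
    zeros = v.bit_length() - k - 1     # prefix zeros make the code prefix-free
    return "0" * zeros + format(v, "b")
```

At order 0 the integers 0, 1, 2, 3 code as 1, 010, 011, 00100; raising k shortens the codewords for large values at the cost of small ones, so the order can be tuned to the run-length distribution of the test set.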
8.
ECG Data Compression Using Fourier Descriptors (Total citations: 3; self-citations: 0; by others: 3)
9.
We present an analysis of test application time for test data compression techniques that are used for reducing test data volume and testing time in system-on-a-chip (SOC) designs. These techniques are based on data compression codes and on-chip decompression. The compression/decompression scheme decreases test data volume and the amount of data that has to be transported from the tester to the SOC. We show via analysis as well as through experiments that the proposed scheme reduces testing time and allows the use of a slower tester. Results on test application time for the ISCAS'89 circuits are obtained using an ATE testbench developed in VHDL to emulate ATE functionality.
10.
11.
A Data Compression Method Based on the Orthogonal Wavelet Packet Transform (Total citations: 5; self-citations: 0; by others: 5)
This paper applies the optimal wavelet packet basis transform to data compression. Scalar and vector quantization methods are designed to match the differing statistical properties of the wavelet packet coefficients in different frequency bands. The proposed method is applied to a set of real measurement data and achieves a high compression ratio at a small distortion rate.
12.
Hua-Guo Liang; Sybille Hellebrand; Hans-Joachim Wunderlich 《Journal of Electronic Testing》2002,18(2):159-170
In this paper a novel architecture for scan-based mixed mode BIST is presented. To reduce the storage requirements for the deterministic patterns it relies on a two-dimensional compression scheme, which combines the advantages of known vertical and horizontal compression techniques. To reduce both the number of patterns to be stored and the number of bits to be stored for each pattern, deterministic test cubes are encoded as seeds of an LFSR (horizontal compression), and the seeds are again compressed into seeds of a folding counter sequence (vertical compression). The proposed BIST architecture is fully compatible with standard scan design, simple and flexible, so that sharing between several logic cores is possible. Experimental results show that the proposed scheme requires less test data storage than previously published approaches providing the same flexibility and scan compatibility.
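The horizontal compression step described here treats a deterministic test cube as the seed of an LFSR, which regenerates the full pattern on chip. A minimal Fibonacci-style LFSR expander, where the tap positions and bit ordering are illustrative assumptions rather than the paper's configuration:

```python
def lfsr_expand(seed, taps, length):
    """Shift a Fibonacci LFSR `length` times, emitting one output bit per step."""
    state = list(seed)
    out = []
    for _ in range(length):
        out.append(state[-1])          # rightmost cell is the output
        fb = 0
        for t in taps:                 # feedback = XOR of the tapped cells
            fb ^= state[t]
        state = [fb] + state[:-1]      # shift right, insert feedback at the left
    return out
```

Seed computation then amounts to solving linear equations over GF(2) so that the expanded stream matches the specified bits of the test cube; the folding-counter stage compresses the resulting seeds a second time.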
13.
Based on NetFlow technology, this work implements the collection, compressed storage, and multi-dimensional aggregation of network traffic data. Collection uses sampling across all time periods to ensure accurate and efficient data. To cope with the massive data volume, two compression methods, one with a fixed threshold and one with a variable threshold, are proposed, greatly reducing the amount of data stored. In addition, to serve different statistical analysis needs, a multi-dimensional aggregation structure is proposed that covers the time, protocol, IP address, and port information in the data flows. Finally, the approach is applied to real traffic data for statistical analysis, with good results.
14.
15.
A new test data compression/decompression algorithm called hybrid run-length coding is proposed; it jointly considers the test data compression ratio, the hardware overhead of the corresponding decoding circuit, and the total test time. The algorithm uses variable-to-variable-length coding, mapping runs of different lengths to codewords of different lengths, which yields a good compression ratio. To further improve compression, a don't-care-bit filling method and a test vector reordering algorithm are also proposed to preprocess the test data before encoding. Moreover, the design of the hardware decoding circuit is fully taken into account, keeping the hardware overhead as small as possible and reducing the total test time. Finally, experimental results on the ISCAS'89 benchmark circuits demonstrate the effectiveness of the proposed algorithm.
16.
The recently developed technique of arithmetic coding, in conjunction with a Markov model of the source, is a powerful method of data compression in situations where a linear treatment is inappropriate. Adaptive coding allows the model to be constructed dynamically by both encoder and decoder during the course of the transmission, and has been shown to incur a smaller coding overhead than explicit transmission of the model's statistics. But there is a basic conflict between the desire to use high-order Markov models and the need to have them formed quickly as the initial part of the message is sent. This paper describes how the conflict can be resolved with partial string matching, and reports experimental results showing that mixed-case English text can be coded in as little as 2.2 bits/character with no prior knowledge of the source.
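A toy sketch of the partial-string-matching idea: counts are kept for contexts of every order up to a maximum, and prediction falls back ("escapes") from the longest matched context to shorter ones. The class below is my illustration of the modeling step only; real PPM couples these counts with arithmetic coding and explicit escape probabilities.

```python
from collections import defaultdict, Counter

class PPMModel:
    """Context counts for orders 0..max_order with longest-match fallback."""

    def __init__(self, max_order=2):
        self.max_order = max_order
        self.counts = defaultdict(Counter)  # context string -> next-char counts

    def update(self, text):
        for i, ch in enumerate(text):
            for k in range(self.max_order + 1):
                if i >= k:                  # record ch after its k-char context
                    self.counts[text[i - k:i]][ch] += 1

    def predict(self, context):
        """Return (context_used, next-char counts) for the longest known context."""
        for k in range(min(self.max_order, len(context)), -1, -1):
            ctx = context[len(context) - k:]
            if ctx in self.counts:
                return ctx, self.counts[ctx]
        return "", Counter()
```

The conflict the abstract describes shows up directly: high-order contexts give sharp predictions once seen, but early in the message most long contexts are unknown, so the model must escape to the shorter contexts that already have counts.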
17.
This paper presents a flexible runs-aware PRL coding method whose coding algorithm is simple and easy to implement. The internal 2^n-PRL coding iteratively encodes 2^n runs of compatible or inversely compatible patterns within a single segment, while the external N-PRL coding iteratively encodes flexible runs of compatible or inversely compatible segments across multiple segments. The decoder architecture is concise. Experiments on benchmark circuits verify the flexible runs-aware PRL coding method; the results show that it obtains a higher compression ratio and a shorter test application time.
18.
Test data compression is an effective methodology for reducing test data volume and testing time. A novel compatibility-based test data compression method is presented in this paper. Building on the high compression efficiency of the extended frequency-directed run length (FDR) coding algorithm, the proposed method groups the test vectors that have the fewest incompatible bits and amalgamates them into a single vector by assigning 1 or 0 to unspecified bits and c to incompatible bits. Runs of 1, 0, and c can be encoded simultaneously. In addition, a corresponding decoder architecture with low hardware overhead has been developed. To evaluate the effectiveness of the proposed approach, it is applied in experiments to the International Symposium on Circuits and Systems (ISCAS) benchmark circuits. The experimental results show that the proposed algorithm achieves a higher compression ratio than conventional algorithms.
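The standard FDR code that this extended scheme builds on maps a run of n zeros (terminated by a one) into group k, where group k covers runs 2^k - 2 through 2^(k+1) - 3 and spends a k-bit prefix plus a k-bit tail. A minimal encoder of the base code (the extension to three-symbol runs of 1, 0, and c is not reproduced here):

```python
def fdr_encode_run(run):
    """Standard FDR codeword for a run of `run` zeros ending in a one."""
    k = 1
    while run >= (1 << (k + 1)) - 2:     # find the group containing `run`
        k += 1
    offset = run - ((1 << k) - 2)        # position of `run` within group k
    prefix = "1" * (k - 1) + "0"         # unary-style group identifier, k bits
    return prefix + format(offset, "0{}b".format(k))
```

Runs 0 and 1 code as 00 and 01, and run 2 opens group 2 with 1000, so codeword length grows only logarithmically with run length, which is what makes the code effective on don't-care-filled test sets dominated by long runs.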
19.
20.
The emergence of nanometer-scale integration technology has made it possible for system-on-a-chip (SoC) designs to contain many reusable cores from multiple sources. This has made SoC testing more complex than conventional VLSI testing. To address this increase in design complexity in terms of data volume and test time, several compression methods have been developed, employed, and proposed in the literature. In this paper, we present a new, efficient test vector compression scheme based on block entropy in conjunction with our improved row-column reduction routine to reduce test data significantly. Our results show that the proposed method produces a much higher compression ratio than all previously published methods; on average, our scheme scores nearly 13% higher than the best reported results. In addition, our scheme outperformed all results for each of the tested circuits. The proposed scheme is very fast and has considerably low complexity.