20 similar documents found (search time: 0 ms)
1.
N. Badereddine, Z. Wang, P. Girard, K. Chakrabarty, A. Virazel, S. Pravossoudovitch, C. Landrault, Journal of Electronic Testing, 2008, 24(4): 353-364
Scan architectures, though widely used in modern designs for testing purposes, are expensive in terms of test data volume and power consumption. To address these problems, we propose in this paper to modify an existing test data compression technique (Wang Z, Chakrabarty K, "Test data compression for IP embedded cores using selective encoding of scan slices," IEEE International Test Conference, paper 24.3, 2005) so that it can simultaneously reduce test data volume and power consumption for scan testing of embedded Intellectual Property (IP) cores. Compared with the initial solution, which fills don't-care bits with the sole aim of reducing test data volume, here the assignment is performed to also minimize power consumption. The proposed power-aware test data compression technique is applied to the ISCAS'89 and ITC'99 benchmark circuits and to a number of industrial circuits. Results show that up to 14× reduction in test data volume and 98% test power reduction can be obtained simultaneously.
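The power-aware don't-care assignment described above can be illustrated with a minimal sketch (not the authors' selective-encoding algorithm): a minimum-transition fill that copies the most recent specified bit into each X, which lowers scan-shift switching activity. The default value for a leading X and the transition metric are assumptions for the sketch.

```python
def min_transition_fill(test_cube):
    """Fill don't-care bits ('X') with the most recent specified value.

    Minimal sketch of power-aware X-filling: repeating the previous bit
    adds no new transitions during scan shifting. Illustrative stand-in,
    not the selective-encoding technique from the paper.
    """
    filled = []
    last = '0'                      # assumed default when the cube starts with X
    for bit in test_cube:
        if bit == 'X':
            filled.append(last)     # repeat previous value -> no new transition
        else:
            filled.append(bit)
            last = bit
    return ''.join(filled)

def weighted_transition_count(vector):
    """Weighted-transition-style estimate of scan-shift power (assumed metric)."""
    n = len(vector)
    return sum((n - i - 1) for i in range(n - 1) if vector[i] != vector[i + 1])

if __name__ == "__main__":
    cube = "1XX0XX1XXX0"
    filled = min_transition_fill(cube)
    print(filled, weighted_transition_count(filled))
```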
2.
3.
Jia Li, Xiao Liu, Yubin Zhang, Yu Hu, Xiaowei Li, Qiang Xu, Integration, the VLSI Journal, 2011, 44(3): 205-216
Ever-increasing test data volume and excessive test power are two of the main concerns of VLSI testing. The "don't-care" bits (also known as X-bits) in a given test cube can be exploited for test data compression and/or test power reduction, but these techniques may conflict with each other because the very same X-bits are likely to be used for different optimization objectives. This paper proposes a capture-power-aware test compression scheme that is able to keep capture power under a safe limit with little loss in test compression ratio. Experimental results on benchmark circuits validate the effectiveness of the proposed solution.
4.
Ozgur Sinanoglu, Journal of Electronic Testing, 2008, 24(5): 439-448
While integrated circuits of ever-increasing size and complexity necessitate larger test sets for ensuring high test quality, the consequent test time and data volume translate into elevated test costs. Test data compression solutions have been proposed to address this problem by storing and delivering stimuli in a compressed format. The effectiveness of these techniques, however, strongly relies on the distribution of the specified bits of test vectors. In this paper, we propose a scan cell partitioning technique that ensures specified bits are uniformly distributed across the scan slices, especially for the test vectors with a higher density of specified bits. The proposed scan cell partitioning process is driven by an integer linear programming (ILP) formulation, in which it is also possible to account for layout and routing constraints. While the proposed technique can be applied to increase the effectiveness of any combinational decompression architecture, in this paper we present its application in conjunction with a fan-out-based decompression architecture. The experimental results confirm the compression enhancement of the proposed methodology.
Ozgur Sinanoglu received a B.S. degree in Computer Engineering and another B.S. degree in Electrical and Electronics Engineering, both from Bogazici University in Turkey, in 1999. He earned his M.S. and Ph.D. degrees from the Computer Science and Engineering Department of the University of California, San Diego, in 2001 and 2004, respectively. Between 2004 and 2006, he worked as a senior design-for-testability engineer at Qualcomm in San Diego, California. Since Fall 2006, he has been a faculty member in the Mathematics and Computer Science Department of Kuwait University. His research field is the design for testability of VLSI circuits.
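The partitioning objective can be illustrated with a simple greedy stand-in for the paper's ILP formulation: scan cells that are specified in many test vectors are spread across partitions so that no partition (and hence no scan slice) concentrates the specified bits. The load-balancing heuristic below is an illustrative assumption, not the ILP of the paper, and it ignores layout and routing constraints.

```python
def greedy_partition(specified_counts, num_partitions):
    """Greedy stand-in for ILP-based scan cell partitioning.

    specified_counts: dict scan_cell -> number of test vectors that specify it.
    Cells with the heaviest specified-bit load are placed first, each into the
    currently lightest partition (longest-processing-time heuristic), which
    roughly balances specified bits across partitions.
    """
    partitions = {p: [] for p in range(num_partitions)}
    load = [0] * num_partitions
    for cell, count in sorted(specified_counts.items(),
                              key=lambda kv: kv[1], reverse=True):
        p = min(range(num_partitions), key=lambda i: load[i])
        partitions[p].append(cell)
        load[p] += count
    return partitions

if __name__ == "__main__":
    # hypothetical per-cell specified-bit counts
    counts = {"ff0": 9, "ff1": 7, "ff2": 7, "ff3": 3, "ff4": 2, "ff5": 1}
    print(greedy_partition(counts, num_partitions=2))
```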
5.
This paper discusses the principles of source coding in detail, introduces methods for optimal quantization and the principles of adaptive quantization coding, and gives a worked example of adaptive quantization coding.
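As a minimal sketch of the adaptive quantization idea, the snippet below implements a one-word-memory (Jayant-style) quantizer whose step size expands after outer codewords and contracts after inner ones. The level count, initial step, and multiplier values are assumptions for illustration, not taken from the paper.

```python
import math

def adaptive_quantize(samples, levels=4, step=0.5,
                      multipliers=(1.6, 0.8, 0.8, 1.6)):
    """One-word-memory adaptive quantizer sketch (Jayant-style).

    Each sample is quantized with a uniform mid-rise quantizer of `levels`
    levels; the step size is then scaled by a multiplier selected by the
    codeword just emitted: inner levels shrink the step, outer levels grow it.
    All parameter values are illustrative assumptions.
    """
    codes, reconstructed = [], []
    for x in samples:
        idx = math.floor(x / step)                    # mid-rise index
        idx = max(-(levels // 2), min(levels // 2 - 1, idx))
        code = idx + levels // 2                      # map to 0 .. levels-1
        codes.append(code)
        reconstructed.append((idx + 0.5) * step)
        step *= multipliers[code]                     # adapt step for next sample
    return codes, reconstructed

if __name__ == "__main__":
    data = [0.1, 0.3, 1.2, 2.5, -0.4, -2.0, 0.05]
    c, r = adaptive_quantize(data)
    print(c)
    print([round(v, 3) for v in r])
```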
6.
Test data compression using alternating variable run-length code (Total citations: 1; self-citations: 0; citations by others: 1)
Bo Ye, Qian Zhao, Duo Zhou, Xiaohua Wang, Min Luo, Integration, the VLSI Journal, 2011, 44(2): 103-110
This paper presents a unified test data compression approach that simultaneously reduces test data volume, scan power consumption, and test application time for a system-on-a-chip (SoC). The proposed approach is based on the use of alternating variable run-length (AVR) codes for test data compression. A formal analysis of scan power consumption and test application time is presented. The analysis shows that a careful mapping of the don't-cares in pre-computed test sets to 1s and 0s leads to significant savings in peak and average power consumption, without requiring slower scan clocks. The proposed technique also reduces testing time compared to a conventional scan-based scheme. The alternating variable run-length codes can efficiently compress data streams composed of runs of both 0s and 1s. The decompression architecture is also presented. Experimental results for ISCAS'89 benchmark circuits and a production circuit show that the proposed approach greatly reduces test data volume and scan power consumption in all cases.
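The core transform behind such codes can be sketched as follows: because runs of 0s and 1s alternate, a fully specified vector can be represented by its run lengths alone. This is a simplified illustration of the alternating run-length idea, not the exact AVR codeword format from the paper.

```python
def alternating_run_lengths(bits):
    """Split a bit string into alternating runs, assumed to start with a 0-run.

    Since runs of 0s and 1s alternate, only the run lengths need be stored;
    a stream that starts with 1 is handled by emitting a leading 0-run of
    length zero. Simplified sketch, not the paper's AVR codeword format.
    """
    runs, current, count = [], '0', 0
    for b in bits:
        if b == current:
            count += 1
        else:
            runs.append(count)
            current, count = b, 1
    runs.append(count)
    return runs

def rebuild(runs):
    """Inverse transform: reconstruct the bit string from alternating run lengths."""
    out, symbol = [], '0'
    for r in runs:
        out.append(symbol * r)
        symbol = '1' if symbol == '0' else '0'
    return ''.join(out)

if __name__ == "__main__":
    stream = "0000111000001111110"
    runs = alternating_run_lengths(stream)
    assert rebuild(runs) == stream
    print(runs)   # [4, 3, 5, 6, 1]
```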
7.
An algorithm for test suite reduction in software testing (Total citations: 1; self-citations: 0; citations by others: 1)
Zhu Haiyan, Microelectronics & Computer, 2007, 24(1): 204-206
A test suite may contain redundant test cases. In regression testing, test suite reduction techniques can be used to lower the cost of maintaining the test suite and executing its test cases. This paper proposes a new test suite reduction algorithm and gives an application example.
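To illustrate the general idea of test suite reduction (not the specific algorithm of this paper), the sketch below applies the classic greedy set-cover heuristic: keep picking the test case that covers the most still-uncovered requirements until all requirements are covered.

```python
def reduce_test_suite(requirements_covered):
    """Greedy test-suite reduction sketch (classic set-cover heuristic).

    requirements_covered: dict test_case -> set of requirements it covers.
    Repeatedly selects the test case covering the most still-uncovered
    requirements. Illustrative only; not the paper's algorithm.
    """
    uncovered = set().union(*requirements_covered.values())
    selected = []
    while uncovered:
        best = max(requirements_covered,
                   key=lambda t: len(requirements_covered[t] & uncovered))
        gain = requirements_covered[best] & uncovered
        if not gain:                 # remaining tests add nothing new
            break
        selected.append(best)
        uncovered -= gain
    return selected

if __name__ == "__main__":
    suite = {"t1": {1, 2, 3}, "t2": {2, 4}, "t3": {3, 4, 5}, "t4": {5}}
    print(reduce_test_suite(suite))   # ['t1', 't3'] covers requirements 1-5
```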
8.
9.
With the rapid growth in the number of Internet of Things devices, the need for edge computing is steadily increasing. In such systems, data transmission and processing techniques determine the efficiency and effectiveness of the entire network. This paper analyzes several optimized transmission and processing techniques, including adaptive sampling and compression, and discusses their application potential in edge computing environments.
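One way adaptive sampling can work on an edge node is sketched below: sample densely while the signal is changing and back off exponentially while it is quiet. The threshold, backoff policy, and parameter values are assumptions for illustration, not techniques taken from this paper.

```python
def adaptive_sample(readings, base_interval=1, max_interval=8, threshold=0.5):
    """Sketch of adaptive sampling for an edge node (illustrative only).

    Keeps a reading whenever it differs from the last kept sample by more
    than `threshold`; otherwise the sampling interval is doubled (up to
    `max_interval`) so slowly changing data is transmitted less often.
    """
    kept, i, interval = [], 0, base_interval
    last_value = None
    while i < len(readings):
        value = readings[i]
        if last_value is None or abs(value - last_value) > threshold:
            kept.append((i, value))
            last_value = value
            interval = base_interval                    # change detected: sample densely
        else:
            interval = min(interval * 2, max_interval)  # quiet period: back off
        i += interval
    return kept

if __name__ == "__main__":
    data = [1.0, 1.1, 1.0, 1.2, 3.0, 3.1, 3.0, 3.1, 3.0, 6.0, 6.1]
    print(adaptive_sample(data))   # keeps only the samples around changes
```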
10.
This paper proposes an adaptive EFDR (Extended Frequency-Directed Run-length) coding method for test data compression. Building on EFDR coding, it adds a parameter N that represents the difference between the suffix and prefix code lengths; for each test vector in the test set, the most suitable value of N is chosen according to the vector's run-length distribution, which improves coding efficiency. For decoding, the run lengths of the original test data can be recovered from the codewords by simple arithmetic, and codewords produced under different values of N can all be decoded by the same decoding circuit, so the decoder incurs only a small hardware overhead. Experimental results on a subset of the ISCAS-89 benchmark circuits show an average compression ratio of 69.87%, a 4.07% improvement over the original EFDR coding method.
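The per-vector selection of N can be sketched with an assumed, simplified cost model: suppose group k of the code uses a k-bit prefix, a (k+N)-bit tail, and one run-type bit, so larger N reaches long runs with fewer groups at the cost of longer short-run codewords. The cost model and candidate range below are assumptions for illustration; the exact EFDR codeword structure of the paper is not reproduced.

```python
def run_lengths(bits):
    """Lengths of the alternating 0-runs and 1-runs in a fully specified vector."""
    runs, cur, cnt = [], bits[0], 1
    for b in bits[1:]:
        if b == cur:
            cnt += 1
        else:
            runs.append(cnt)
            cur, cnt = b, 1
    runs.append(cnt)
    return runs

def codeword_bits(length, n):
    """Assumed cost model: group k has a k-bit prefix and a (k+n)-bit tail,
    so it can represent 2**(k+n) run lengths; one extra bit marks the run
    type. This is a simplification, not the exact code from the paper."""
    k, capacity = 1, 0
    while capacity + 2 ** (k + n) < length:
        capacity += 2 ** (k + n)
        k += 1
    return k + (k + n) + 1

def best_n(vector, candidates=range(0, 4)):
    """Pick, per test vector, the N minimizing the total encoded size."""
    runs = run_lengths(vector)
    costs = {n: sum(codeword_bits(r, n) for r in runs) for n in candidates}
    return min(costs, key=costs.get), costs

if __name__ == "__main__":
    vec = "000000000000111000000000000000001111111100000000000000000000001"
    n, costs = best_n(vec)
    print("chosen N =", n, costs)
```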
11.
S. Sivanantham, M. Padmavathy, Ganga Gopakumar, P.S. Mallick, J. Raja Paul Perinbam, Integration, the VLSI Journal, 2014
In this paper, we present two multistage compression techniques to reduce the test data volume in scan test applications. We propose two encoding schemes, namely alternating frequency-directed equal-run-length (AFDER) coding and run-length-based Huffman coding (RLHC). These encoding schemes, together with the nine-coded compression technique, enhance the test data compression ratio. In the first stage, the pre-generated test cubes with unspecified bits are encoded using the nine-coded compression scheme. The proposed encoding schemes then exploit the properties of the compressed data to further enhance test data compression. This multistage compression is especially effective when the percentage of don't-cares in a test set is very high. We also present a simple decoder architecture to recover the original data. Experimental results obtained from ISCAS'89 benchmark circuits show average compression ratios of 74.2% and 77.5% with the proposed 9C-AFDER and 9C-RLHC schemes, respectively.
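The second-stage idea of Huffman-coding run lengths can be sketched as follows: extract run lengths from the first-stage output and treat each run length as a symbol for a standard Huffman code. The sketch uses Python's `heapq`; it illustrates the general run-length Huffman idea only, not the exact RLHC scheme of the paper.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code for a list of symbols (e.g. run lengths)."""
    freq = Counter(symbols)
    if len(freq) == 1:                          # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    # heap entries: (weight, tie_breaker, {symbol: code_so_far})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)         # two lightest subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

if __name__ == "__main__":
    run_lengths = [1, 1, 1, 2, 2, 3, 7, 1, 2, 1]   # hypothetical stage-1 output
    code = huffman_code(run_lengths)
    encoded = "".join(code[r] for r in run_lengths)
    print(code, encoded)
```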
12.
A test data compression and decompression method based on correlation statistics of variable-length data blocks (Total citations: 2; self-citations: 1; citations by others: 1)
To address the storage-capacity and bandwidth limitations that automatic test equipment (ATE) faces when testing a system-on-chip (SoC), this paper proposes a new test data compression and decompression method based on correlation statistics of variable-length data blocks. Working one test vector at a time, an algorithm first selects the data block with the best correlation as the reference block for that vector, and the vector is then compressed by exploiting the correlation between this reference block and the data blocks within the vector; the reference block length of each vector is chosen independently. The decompression structure requires only a finite state machine (FSM), a 5-bit register, and a cyclical scan register (CSR) of the same length as the reference block, so the hardware overhead is small. Compression results on the Mintest sets of the ISCAS-89 benchmark circuits show that the proposed scheme achieves higher compression efficiency than comparable coding methods.
13.
14.
To address the problem of excessively long test time and reduced test efficiency, an adaptive test-pattern reordering method based on the Gamma distribution is proposed, and a Gamma-distribution-based probability model is established for the probability of each test pattern detecting a fault. During testing, each circuit under test is added to the sample space, the parameters of the probability model are updated dynamically, and the test patterns are reordered accordingly. Experimental results show that the reordered test patterns provide higher test quality and can reduce the test time and test cost for faulty circuits. The algorithm is entirely software-based, requires no additional hardware overhead, and is directly compatible with the traditional integrated circuit test flow.
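A heavily simplified sketch of how such an adaptive flow might look is given below: each pattern keeps a history of fault hits on previously tested circuits, a Gamma model is fitted to that history by the method of moments, and patterns are reordered by the fitted mean after every new circuit. The scoring rule and data layout are assumptions for illustration, not the paper's exact probability model.

```python
def gamma_moments(samples):
    """Method-of-moments Gamma fit: returns (shape k, scale theta)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    if var == 0:                     # degenerate history: fall back to the mean
        return mean, 1.0
    theta = var / mean
    return mean / theta, theta

def reorder_patterns(detection_history):
    """Reorder test patterns by the mean of a Gamma model of their fault hits.

    detection_history: dict pattern_id -> list of fault-hit counts observed on
    previously tested circuits. After each new circuit is tested, its counts
    are appended and this function is called again, so the ordering adapts as
    the sample space grows.
    """
    scores = {}
    for pid, hits in detection_history.items():
        k, theta = gamma_moments(hits)
        scores[pid] = k * theta      # Gamma mean = shape * scale
    return sorted(scores, key=scores.get, reverse=True)

if __name__ == "__main__":
    history = {"p0": [3, 4, 2], "p1": [9, 7, 8], "p2": [1, 0, 2]}
    print(reorder_patterns(history))   # patterns most likely to hit faults first
```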
15.
Recent test data compression techniques raise concerns regarding power dissipation and compression efficiency. This letter proposes a new test data compression scheme, twin symbol encoding, which supports block division to reduce hardware overhead. Our experimental results show that the proposed technique achieves both a high compression ratio and low power dissipation. Therefore, the proposed scheme is an attractive solution for efficient test data compression.
16.
Shan Xiaohong, Bi Guangguo, Journal of Electronics (China), 2007, 24(1): 54-59
This paper explores the potential to use accurate but outdated channel estimates for adaptive modulation. The work is novel in that the research is conditioned on block-by-block adaptation. First, we define a new quantity, the Tolerable Average Use Delay (TAUD), which indicates the ability of an adaptation scheme to tolerate delay in the channel estimation results. We find that for variable-power schemes, TAUD is a constant that depends on the target Bit Error Rate (BER), average power, and Doppler frequency, while for constant-power schemes it also depends on the adaptation block length. Finally, we investigate the relation between the delay-tolerance performance and the spectral efficiency and give a system design criterion: the delay-tolerance performance is improved at the price of a lower data rate.
17.
A novel approach for using an embedded processor to aid in deterministic testing of the other components of a system-on-a-chip (SOC) is presented. The tester loads a program along with compressed test data into the processor's on-chip memory. The processor executes the program, which decompresses the test data and applies it to scan chains in the other components of the SOC to test them. The program itself is very simple and compact, and the decompression is done very rapidly, hence this approach reduces both the amount of data that must be stored on the tester and the test time. Moreover, it enables at-speed scan shifting even with a slow tester (i.e., a tester whose maximum clock rate is slower than the SOC's normal operating clock rate). A procedure is described for converting a set of test cubes (i.e., test vectors in which the unspecified inputs are left as X's) into a compressed form. A program that can be run on an embedded processor is then given for decompressing the test cubes and applying them to the scan chains on the chip. Experimental results indicate that a significant amount of compression can be achieved, resulting in less data that must be stored on the tester (i.e., a smaller tester memory requirement) and less time to transfer the test data from the tester to the chip.
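The decompression step such an on-chip program performs can be sketched in a few lines. The (bit, run-length) input format and the round-robin distribution over scan-chain buffers below are assumptions for illustration; they stand in for the paper's actual compressed format and scan-register writes.

```python
def decompress_and_scan(compressed, scan_chains, chain_length):
    """Sketch of the software decompression an embedded processor could run.

    `compressed` is assumed to be a list of (bit, run_length) pairs derived
    from the test cubes (with X's filled to lengthen the runs); the routine
    expands them and distributes the bits round-robin over per-chain buffers,
    standing in for writes to the scan-chain input registers.
    """
    chains = [[] for _ in range(scan_chains)]
    position = 0
    for bit, run in compressed:
        for _ in range(run):
            chains[position % scan_chains].append(bit)
            position += 1
    # each chain buffer now holds chain_length bits ready to be shifted in
    assert all(len(c) == chain_length for c in chains)
    return chains

if __name__ == "__main__":
    # 16 decompressed bits feeding 4 scan chains of length 4
    data = [("0", 6), ("1", 3), ("0", 5), ("1", 2)]
    for i, chain in enumerate(decompress_and_scan(data, 4, 4)):
        print(f"chain {i}:", "".join(chain))
```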
18.
19.
20.
This paper proposes a new high-capacity reversible data hiding scheme in encrypted images. The content owner first divides the cover image into blocks. Then, block permutation and bitwise stream cipher processes are applied to encrypt the image. Upon receiving the encrypted image, the data hider analyzes the image blocks and adaptively decides on an optimal block-type labeling strategy. Based on the adaptive block encoding, the image is compressed to vacate spare room, and the secret data are encrypted and embedded into the spare space. According to the granted authority, the receiver can restore the cover image, extract the secret data, or do both. Experimental results show that the embedding capacity of the proposed scheme outperforms state-of-the-art schemes. In addition, the security level and robustness of the proposed scheme are also investigated.
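The encryption stage (block permutation followed by a bitwise stream cipher) can be sketched as below. Python's `random` module stands in for a proper keyed cipher purely for illustration, and the embedding and recovery steps of the scheme are not shown.

```python
import random

def encrypt_image(pixels, width, height, block, key):
    """Sketch of the encryption stage: block permutation + bitwise stream cipher.

    `pixels` is a flat list of 8-bit grayscale values. Blocks of size
    block x block are shuffled with a key-seeded permutation, then every byte
    is XORed with a key-derived keystream. Illustrative stand-in only.
    """
    bw, bh = width // block, height // block
    rng = random.Random(key)

    # 1. block permutation: block order[old] moves to the new position
    order = list(range(bw * bh))
    rng.shuffle(order)
    permuted = [0] * len(pixels)
    for new_idx, old_idx in enumerate(order):
        oy, ox = divmod(old_idx, bw)
        ny, nx = divmod(new_idx, bw)
        for r in range(block):
            for c in range(block):
                src = (oy * block + r) * width + ox * block + c
                dst = (ny * block + r) * width + nx * block + c
                permuted[dst] = pixels[src]

    # 2. bitwise stream cipher: XOR every byte with the keystream
    keystream = [rng.randrange(256) for _ in permuted]
    return [p ^ k for p, k in zip(permuted, keystream)]

if __name__ == "__main__":
    w = h = 8
    image = [(x + y) % 256 for y in range(h) for x in range(w)]
    cipher = encrypt_image(image, w, h, block=4, key=2024)
    print(cipher[:8])
```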