Similar Documents
20 similar documents found.
1.
A new scan architecture for both low power testing and test volume compression is proposed. For low power testing, only a subset of scan cells is loaded with test stimulus and captured with test responses, by freezing the remaining scan cells according to the distribution of unspecified bits in the test cubes. To optimize this process, a novel graph-based heuristic is proposed to partition the scan chains into several segments. For test volume reduction, a new LFSR reseeding based test compression scheme is proposed that virtually reduces the maximum number of specified bits in the test cube set, s_max, on which the performance of a conventional LFSR reseeding scheme highly depends. By using different clock phases between the LFSR and the scan chains, and grouping the scan cells with a graph-based grouping heuristic, s_max can be virtually reduced. In addition, the reduced scan rippling in the proposed test compression scheme helps reduce test power consumption, while the reuse of some test responses as subsequent test stimuli in the low power testing scheme reduces test volume. Experimental results on the largest ISCAS89 benchmark circuits show that, compared to previous methods, the proposed technique significantly reduces both the average and peak switching activity and aggressively reduces test data volume, with little area overhead.
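The reseeding idea in the abstract above can be sketched in a few lines. The LFSR polynomial, the brute-force seed search, and the cube with s_max = 3 specified bits are all invented for the demo; a real reseeding scheme solves a GF(2) linear system instead of searching.

```python
# Sketch (not the paper's exact scheme): LFSR reseeding stores a seed per
# test cube instead of the full pattern. A seed "encodes" a cube when the
# LFSR's output stream matches every specified bit; don't-cares ('x') are
# free, so a seed of roughly s_max bits usually suffices.

def lfsr_stream(seed, taps, nbits, length):
    """Fibonacci LFSR: feedback is the XOR of the tapped state bits."""
    state = [(seed >> i) & 1 for i in range(nbits)]
    out = []
    for _ in range(length):
        out.append(state[0])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = state[1:] + [fb]
    return out

def find_seed(cube, taps, nbits):
    """Brute-force the smallest seed whose stream covers the cube's
    specified bits (a real scheme solves GF(2) linear equations instead)."""
    for seed in range(1, 1 << nbits):
        stream = lfsr_stream(seed, taps, nbits, len(cube))
        if all(c == 'x' or int(c) == s for c, s in zip(cube, stream)):
            return seed
    return None

cube = "1xx0xxxx0xxx"          # s_max = 3 specified bits
seed = find_seed(cube, taps=(0, 2, 3, 5), nbits=8)
print(seed)                    # an 8-bit seed reproduces all 3 specified bits
```

Fewer specified bits per cube means more seeds qualify, which is why virtually reducing s_max helps compression.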
Hong-Sik Kim

2.
A new scan partition architecture to reduce both the average and peak power dissipation during scan testing is proposed for low-power embedded systems. In scan-based testing, due to the extremely high switching activity during the scan shift operation, the power consumption increases considerably. In addition, the reduced correlation between consecutive test patterns may increase the power consumed during the capture cycle. In the proposed architecture, only a subset of scan cells is loaded with test stimulus and captured with test responses by freezing the remaining scan cells according to the spectrum of unspecified bits in the test cubes. To optimize the proposed process, a novel graph-based heuristic to partition the scan chain into several segments and a technique to increase the number of don't cares in the given test set have been developed. Experimental results on large ISCAS89 benchmark circuits show that the proposed technique, compared to the traditional full scan scheme, can reduce both the average switching activities and the average peak switching activities by 92.37% and 41.21%, respectively.
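The segment-freeze idea can be made concrete with a toy model. The 4-cell segment size and the cubes are invented, and the paper's graph-based partitioning heuristic is replaced here by a fixed equal split.

```python
# Sketch of the segment-freeze idea (simplified, not the paper's heuristic):
# scan cells are split into segments; for each test cube, a segment is
# clocked only if it holds at least one specified bit, otherwise it is
# frozen and contributes no shift activity.

def active_segments(cube, seg_size):
    """Indices of segments holding at least one specified (non-x) bit;
    all other segments can stay frozen for this pattern."""
    segs = [cube[i:i + seg_size] for i in range(0, len(cube), seg_size)]
    return [i for i, s in enumerate(segs) if any(c != 'x' for c in s)]

cubes = ["1xxxxxxxxx0x", "xxxx01xxxxxx"]
for cube in cubes:
    print(active_segments(cube, seg_size=4))   # prints [0, 2] then [1]
```

The more don't-cares a test set has, and the better the partition matches their distribution, the more segments stay frozen per pattern.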

3.
We present a new technique for uniquely identifying a single failing vector in an interval of test vectors. This technique is applicable to combinational circuits and for scan-BIST in sequential circuits with multiple scan chains. The proposed method relies on the linearity properties of the MISR and on the use of two test sequences, which are both applied to the circuit under test. The second test sequence is derived from the first in a straightforward manner and the same test pattern source is used for both test sequences. If an interval contains only a single failing vector, the algebraic analysis is guaranteed to identify it. We also show analytically that if an interval contains two failing vectors, the probability that this case is interpreted as one failing vector is very low. We present experimental results for the ISCAS benchmark circuits to demonstrate the use of the proposed method for identifying failing test vectors.
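The linearity the method relies on is easy to check with a toy signature register. The register width, taps, and streams below are arbitrary, and the serial-input structure is a simplification of a true multiple-input MISR.

```python
# Sketch of the MISR linearity the diagnosis method relies on: over GF(2)
# the register is a linear system, so the signature of the bitwise
# difference of two response streams equals the XOR of their signatures
# (assuming a zero initial state).

def misr(stream, taps, nbits, state=0):
    """Serial-input signature register: shift, feed back the parity of the
    tapped bits, XOR in the next input bit."""
    for bit in stream:
        fb = bin(state & taps).count("1") & 1        # parity of tapped bits
        state = ((state << 1) | (fb ^ bit)) & ((1 << nbits) - 1)
    return state

a = [1, 0, 1, 1, 0, 0, 1, 0]          # fault-free response (invented)
b = [1, 1, 1, 0, 0, 1, 1, 0]          # faulty response (invented)
diff = [x ^ y for x, y in zip(a, b)]
lin = misr(a, 0b1011, 8) ^ misr(b, 0b1011, 8) == misr(diff, 0b1011, 8)
print(lin)   # True: superposition holds
```

This superposition is what lets the algebraic analysis of two signatures isolate the contribution of a single failing vector.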

4.
Parallel test application helps reduce the otherwise considerable test times in SOCs; yet its applicability is limited by average and peak power considerations. The typical test vector loading techniques result in frequent transitions in the scan chain, which in turn reflect into significant levels of circuit switching unnecessarily. Judicious utilization of logic in the scan chain can help reduce transitions while loading the test vector needed. The transitions embedded in both test stimuli and the responses are handled through scan chain modifications consisting of logic gate insertion between scan cells as well as inversion of capture paths. No performance degradation ensues as these modifications have no impact on functional execution. To reduce average and peak power, we herein propose computationally efficient schemes that identify the location and the type of logic to be inserted. The experimental results confirm the significant reductions in test power possible under the proposed scheme.
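A minimal model of the gate-insertion idea, assuming inverters only and counting raw scan-in transitions (the paper's cost model and gate choices are richer): an inverter at boundary i complements every bit that passes it, so it changes only the transition count at that boundary, and each boundary can be optimized independently by a majority vote over the test set.

```python
def shift_transitions(vec, invert):
    """Transitions seen while shifting `vec` through a chain whose
    boundary i carries an inverter when invert[i] == 1."""
    cur, out = 0, []
    for b, inv in zip(vec, [0] + invert):
        cur ^= inv                      # cumulative inversion along the chain
        out.append(int(b) ^ cur)
    return sum(p != q for p, q in zip(out, out[1:]))

def choose_inverters(vectors):
    """Majority vote per boundary: invert where most vectors mismatch."""
    n = len(vectors[0])
    return [1 if sum(v[i] != v[i + 1] for v in vectors) > len(vectors) / 2 else 0
            for i in range(n - 1)]

vecs = ["0101100110", "0110101010", "0101010111"]   # invented test vectors
inv = choose_inverters(vecs)
base = sum(shift_transitions(v, [0] * (len(v) - 1)) for v in vecs)
opt = sum(shift_transitions(v, inv) for v in vecs)
print(base, opt)   # 21 5
```

Because complementing a suffix flips both neighbors of every later boundary, the per-boundary decisions really are independent under this cost model, which is what makes the greedy vote optimal here.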

5.
Single-channel wireless networks have limited bandwidth and throughput, and bandwidth utilization decreases as the number of users grows. To mitigate this problem, simultaneous transmission on multiple channels is considered. In this paper, we propose a distributed dynamic channel allocation scheme using adaptive learning automata for wireless networks whose nodes are equipped with single-radio interfaces. The proposed scheme, an adaptive pursuit learning automaton, runs periodically on the nodes and adaptively finds a suitable channel allocation to attain a desired performance. A novel performance index, which takes into account both throughput and energy consumption, is considered. The proposed learning scheme adapts the probability of selecting each channel as a function of the error in the performance index at each step. Extensive simulation results in static and mobile environments show that the proposed channel allocation scheme significantly improves throughput, drop rate, energy consumption per packet, and fairness index compared to single-channel 802.11 and to 802.11 with randomly allocated multiple channels. It is also demonstrated that the Adaptive Pursuit Reward-Only (PRO) scheme guarantees updating the channel selection probability for all links, even those whose current channel allocations do not provide satisfactory performance, thereby reducing frequent channel switching of links that cannot achieve the desired performance.
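A sketch of a pursuit-style reward-only update, as we read it (not the paper's exact equations): each link keeps channel probabilities, maintains a running reward estimate per channel, and moves probability mass toward the current best estimate. The three channels, their reward values, and both learning rates are invented for the demo.

```python
import random

def pursuit_step(probs, est, rewards, lr=0.1):
    """One step: sample a channel, refresh its reward estimate, then move
    all probabilities toward the channel with the best current estimate."""
    ch = random.choices(range(len(probs)), weights=probs)[0]
    est[ch] += 0.3 * (rewards[ch] - est[ch])     # running reward estimate
    best = max(range(len(est)), key=est.__getitem__)
    for i in range(len(probs)):
        target = 1.0 if i == best else 0.0
        probs[i] += lr * (target - probs[i])     # pursuit update
    return ch

random.seed(0)
probs, est = [1 / 3] * 3, [0.0] * 3
rewards = [0.2, 0.9, 0.5]     # hypothetical per-channel performance index
for _ in range(200):
    pursuit_step(probs, est, rewards)
print(round(sum(probs), 6))   # the update keeps a valid distribution: 1.0
```

Because every update blends each probability toward 0 or 1 rather than zeroing it, even poorly performing links keep a nonzero chance of re-exploring, which matches the PRO property described above.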

6.
This note continues a sequence of attempts to define efficient digital signature schemes based on low-degree polynomials, or to break such schemes. We consider a scheme proposed by Satoh and Araki [5], which generalizes the Ong-Schnorr-Shamir scheme to the noncommutative ring of quaternions. We give two different ways to break the scheme. Received 9 December 1998 and revised 14 December 1998
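The noncommutativity of the quaternion ring, which the setting turns on, is easy to exhibit; the two sample quaternions below are arbitrary.

```python
# Minimal integer-quaternion multiply (Hamilton's rules) showing that the
# ring the Satoh-Araki scheme works over is noncommutative.

def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,      # real part
            a1*b2 + b1*a2 + c1*d2 - d1*c2,      # i component
            a1*c2 - b1*d2 + c1*a2 + d1*b2,      # j component
            a1*d2 + b1*c2 - c1*b2 + d1*a2)      # k component

p, q = (1, 2, 0, 1), (0, 1, 3, 0)
print(qmul(p, q) != qmul(q, p))   # True: multiplication does not commute
```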

7.
Test data compression using alternating variable run-length code
This paper presents a unified test data compression approach, which simultaneously reduces test data volume, scan power consumption, and test application time for a system-on-a-chip (SoC). The proposed approach is based on the use of alternating variable run-length (AVR) codes for test data compression. A formal analysis of scan power consumption and test application time is presented. The analysis shows that a careful mapping of the don't-cares in pre-computed test sets to 1s and 0s leads to significant savings in peak and average power consumption, without requiring slower scan clocks. The proposed technique also reduces testing time compared to a conventional scan-based scheme. Alternating variable run-length codes can efficiently compress data streams composed of runs of both 0s and 1s. The decompression architecture is also presented. Experimental results for ISCAS'89 benchmark circuits and a production circuit show that the proposed approach greatly reduces test data volume and scan power consumption in all cases.
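The alternating-run structure that AVR-style codes exploit can be sketched as follows; we emit run lengths directly instead of the paper's actual variable-length codewords.

```python
# Sketch of the alternating-run idea behind AVR-style codes (simplified).
# Because runs of 0s and 1s strictly alternate, only the first bit and the
# run lengths need to be stored; mapping don't-cares to extend runs makes
# the lengths longer and the encoding cheaper.

def runs_encode(bits):
    lengths, first = [], bits[0]
    cur, n = bits[0], 0
    for b in bits:
        if b == cur:
            n += 1
        else:
            lengths.append(n)
            cur, n = b, 1
    lengths.append(n)
    return first, lengths

def runs_decode(first, lengths):
    out, cur = [], first
    for n in lengths:
        out += [cur] * n
        cur ^= 1
    return out

data = [0] * 9 + [1] * 5 + [0] * 12 + [1] * 6   # invented test data slice
first, lengths = runs_encode(data)
print(lengths)                                  # [9, 5, 12, 6]
print(runs_decode(first, lengths) == data)      # lossless
```

Long uniform runs also mean few scan transitions, which is why the same don't-care mapping serves both compression and power.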

8.
The objective of this paper is to propose a BIST scheme enabling the test of delay faults in all the Look-Up Tables (LUTs) of SRAM-based FPGAs in a manufacturing context. The BIST scheme does not consume any area overhead and can be removed from the device after the test, thus allowing the use of the whole circuit by the user. The structure we propose is composed of a simple test pattern generator, an error detector, and a chain of LUTs, formed by alternating LUTs and flip-flops. By using such a chain, the test of all delay faults in every LUT is enabled. In this paper, we develop an experiment based on the implementation of our BIST architecture in a Virtex FPGA from Xilinx. The purpose of this experiment is to show the feasibility of our solution. One important outcome of this solution is its ability to detect the "smallest" delay faults in the LUTs, i.e. the smallest delays that can be observed on a LUT output. Patrick Girard is presently a researcher at CNRS (French National Center for Scientific Research) and works in the Microelectronics Department of the LIRMM (Laboratory of Informatics, Robotics and Microelectronics of Montpellier, France). His research interests include the various aspects of digital testing, with special emphasis on DfT, logic BIST, delay fault testing, and low power testing. He has authored and co-authored more than 90 papers in these fields and has supervised several Ph.D. dissertations. He has also participated in several European research projects (Esprit III ATSEC, Eureka MEDEA, MEDEA+ ASSOCIATE, IST MARLOW). Patrick Girard holds a B.Sc. and an M.Sc. in Electrical Engineering, and obtained the Ph.D. degree in microelectronics from the University of Montpellier in 1992. Olivier Héron is presently a researcher at CEA (French Center for Technology Research) in the Laboratory of Reliability for Embedded Systems.
His research interests are logic BIST, on-line testing, delay fault testing of FPGAs, and fault modelling. He is a member of the program committee of the Field Programmable Logic Conference FPL2006. He received his Ph.D. from the University of Montpellier (France) in 2004 and worked in the Microelectronics Department of the LIRMM (Laboratory of Computer Science, Automation and Microelectronics of Montpellier, France). He received the B.Sc. degree in 1998 and the M.Sc. degree in 2001 in Electrical Engineering from the University of Montpellier. Serge Pravossoudovitch was born in 1957. He is currently a professor in the Electrical and Computer Engineering Department of the University of Montpellier, and his research activities are performed at LIRMM. He received the Master's degree in Electrical Engineering in 1979 from the University of Montpellier, and his Ph.D. degree in Electrical Engineering in 1983 on symbolic layout for IC design. Since 1984 he has been interested in the testing domain. He obtained the "doctorat d'état" degree in 1987 for his work on switch-level automatic test pattern generation. He is presently interested in delay fault testing, design for testability, and power consumption optimization. He has authored and co-authored numerous papers in these fields and has supervised several Ph.D. dissertations. He has also participated in several European projects (Microelectronic regulation, Esprit, MEDEA). Michel Renovell is presently a researcher at CNRS and works in the Microelectronics Department of the LIRMM. His research interests include fault modeling, analog testing, and FPGA testing. He is Vice-Chair of the IEEE TTTC (Test Technology Technical Committee) and Chair of the FPGA Testing Committee.
He is a member of the editorial boards of JETTA and IEEE Design & Test. Michel has been General Chair of several conferences: the International Mixed Signal Testing Workshop IMSTW2000, the Field Programmable Logic Conference FPL2002, and the European Test Symposium ETS2004. A preliminary version of this work was presented at the 1st European Test Symposium 2004, in Ajaccio.

9.
This paper applies the canonical piecewise-linear analysis method to analyze nonlinear DC fault circuits and solve for the values of test-port voltages selected beforehand. The method needs little memory storage, obtains results in a finite number of steps, and is computationally efficient. It can be applied to circuits containing multiport nonlinear elements, and it is a good method of pre-test analysis for fault circuits in the simulation-before-test approach to analogue circuit diagnosis.
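A one-dimensional instance of canonical piecewise-linear solving illustrates the finite-step property; the coefficients below are invented, and real circuit analysis works on vector-valued equations.

```python
# Sketch of solving a 1-D canonical piecewise-linear equation
#   f(v) = a + b*v + sum(c_i * |v - d_i|) = 0
# by enumerating the linear segments between breakpoints: on each segment
# every |v - d_i| has a fixed sign, so f is linear there and the candidate
# root is checked against the segment bounds. Finite steps, as in
# simulation-before-test analysis.

def solve_cpwl(a, b, c, d):
    bps = sorted(d)
    segments = [(-1e9, bps[0])] + list(zip(bps, bps[1:])) + [(bps[-1], 1e9)]
    for lo, hi in segments:
        mid = (lo + hi) / 2
        signs = [1 if mid >= di else -1 for di in d]   # sign of each |v - d_i|
        slope = b + sum(ci * s for ci, s in zip(c, signs))
        const = a + sum(-ci * s * di for ci, s, di in zip(c, signs, d))
        if slope != 0:
            v = -const / slope
            if lo <= v <= hi:
                return v
    return None

# f(v) = -2 + v + 0.5*|v - 1|; for v >= 1 this is 1.5*v - 2.5 = 0, so v = 5/3
v = solve_cpwl(a=-2.0, b=1.0, c=[0.5], d=[1.0])
print(round(v, 4))   # 1.6667
```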

10.
This work presents a failure analysis case study of a scan chain integrity issue. Using Laser Voltage Imaging and Probing (LVx) approaches, we were able to localize a defect that induced a transition failure detected during the scan chain integrity test flow. Laser Voltage Imaging (LVI) and Laser Voltage Probing (LVP) techniques were applied with the aim of both identifying and characterizing the first failing flip-flop of the chain, in order to understand the failure mechanism. The relatively simple Design For Test (DFT) structure of the device, made of a single scan chain addressing a large area of the digital logic, together with the nature of the electrical behaviour, did not allow the ATPG scan chain diagnostic to be accurate, leaving the failure identification task to the FA engineer with the help of classical optical techniques. In this work we focus in particular on the application of the two fault isolation techniques: first identifying the failing flip-flop through a dichotomy approach, then investigating the macro failure mode through a second-harmonic LVI analysis, and finally characterizing the failure using LVP.

11.
The complexity of breaking cryptosystems whose security is based on the discrete logarithm problem is explored. The cryptosystems mainly discussed are the Diffie-Hellman key exchange scheme (DH), the Bellare-Micali noninteractive oblivious transfer scheme (BM), the ElGamal public-key cryptosystem (EG), the Okamoto conference-key sharing scheme (CONF), and the Shamir 3-pass key-transmission scheme (3PASS). The relation obtained among these cryptosystems is expressed in terms of polynomial-time functionally many-to-one reducibility, i.e., a function version of many-one reducibility. We further give conditions under which these schemes have equivalent difficulty; one such condition suggests another advantage of the discrete logarithm associated with ordinary elliptic curves. Received 18 January 1996 and revised 7 September 1996

12.
The paper proposes a new test data compression scheme for testing embedded cores with multiple scan chains. The scheme broadcasts identical test data to several scan chains whenever the cells at the same depth are compatible for the currently applied test pattern. It thus efficiently exploits the compatibility of scan cells among the scan chain segments, increases the amount of test data delivered in broadcast mode, and effectively reduces test data volume and test application time. It requires neither a complex compression algorithm nor costly hardware. Experimental results demonstrate the efficiency and versatility of the proposed method.
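The depth-by-depth compatibility test behind broadcast mode can be sketched directly; the segment strings below are invented examples.

```python
# Sketch of the compatibility rule (simplified): two scan-chain segments can
# share one broadcast input for a pattern if, at every depth, their required
# bits agree or at least one of them is a don't-care.

def compatible(seg_a, seg_b):
    return all(a == b or 'x' in (a, b) for a, b in zip(seg_a, seg_b))

def merged(seg_a, seg_b):
    """The single stream that serves both compatible segments."""
    return ''.join(b if a == 'x' else a for a, b in zip(seg_a, seg_b))

print(compatible("1x0x", "1x01"))   # True:  one broadcast stream suffices
print(compatible("1x0x", "0x0x"))   # False: depth 0 conflicts (1 vs 0)
print(merged("1x0x", "1x01"))       # 1x01
```

Each pattern that finds compatible segments needs only one copy of the data, which is where both the volume and the application-time savings come from.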

13.
This paper applies the newly developed concept of 'sparsity' from signal processing to image compression. The first step of the scheme is to apply a sparsifying transform to the image; the resulting sparse set of coefficients is then encoded via Sparse PCA. The wavelet transform has been used profusely for image compression tasks, but it is not the ideal choice: the partial reconstruction error from wavelet coefficients is an order of magnitude higher than the ideal error rate. In this paper, image compression is carried out in the curvelet domain, a better choice compared to wavelets, at least theoretically, since the reconstruction error rate with curvelet coefficients is of the same asymptotic order as the ideal error rate. The compression scheme is tested on the Lena and Barbara images as well as the USPS and Yale Face databases.
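The keep-the-largest-coefficients principle can be demonstrated with an FFT standing in for the curvelet transform (an assumption of this sketch; the paper's point is precisely that the choice of transform matters). The synthetic image and the value k = 20 are invented for the demo.

```python
# Sketch of transform-domain compression via sparsity: transform, keep only
# the k largest-magnitude coefficients, inverse-transform, and measure the
# partial-reconstruction error.

import numpy as np

def topk_compress(img, k):
    coef = np.fft.fft2(img)
    thresh = np.sort(np.abs(coef).ravel())[-k]       # k-th largest magnitude
    sparse = np.where(np.abs(coef) >= thresh, coef, 0)
    return np.real(np.fft.ifft2(sparse))

rng = np.random.default_rng(0)
# smooth, periodic synthetic "image": its energy sits in a few coefficients
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
img = np.outer(np.sin(x), np.cos(2 * x)) + 0.01 * rng.standard_normal((64, 64))
err = np.linalg.norm(img - topk_compress(img, k=20)) / np.linalg.norm(img)
print(err < 0.1)   # 20 of 4096 coefficients capture almost all the energy
```

For a transform well matched to the image class, this relative error decays quickly in k; the abstract's asymptotic claim is about exactly that decay rate.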

14.
The OAEP encryption scheme was introduced by Bellare and Rogaway at Eurocrypt '94. It converts any trapdoor permutation scheme into a public key encryption scheme. OAEP is widely believed to provide resistance against adaptive chosen ciphertext attack, the main justification being a supposed proof of security in the random oracle model, assuming the underlying trapdoor permutation is one-way. This paper shows conclusively that this justification is invalid. First, it observes that there appears to be a non-trivial gap in the OAEP security proof. Second, it proves that this gap cannot be filled, in the sense that there can be no standard "black box" security reduction for OAEP; this is done by proving that there exists an oracle relative to which the general OAEP scheme is insecure. The paper also presents a new scheme, OAEP+, along with a complete proof of security in the random oracle model. OAEP+ is essentially just as efficient as OAEP. It should be stressed that these results do not imply that a particular instantiation of OAEP, such as RSA-OAEP, is insecure; they simply undermine the original justification for its security. In fact, it turns out, essentially by accident rather than by design, that RSA-OAEP is secure in the random oracle model; this fact, however, relies on special algebraic properties of the RSA function, not on the security of the general OAEP scheme.

15.
In this paper, a novel method to build a complementary metal oxide semiconductor (CMOS) exponential voltage-to-current converter is developed. The method is based on a modular design approach in which the dynamic range of the system is increased to any required value by simply adding more cells. It uses a pulse width modulation technique to generate a pulse signal that activates a number of cascaded integrators. Based on the presented method, a novel exponential voltage-to-current converter is introduced with an achieved dynamic range of 42 dB. The circuit operates from a single 3 V supply. Simulation results are based on Mietec 0.5 μm CMOS technology.
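The cascaded-cell principle can be sketched numerically. The (1 + x/N)^N model below is a generic pseudo-exponential stand-in, not the paper's PWM-and-integrator circuit, and the input value and cell counts are invented.

```python
# Sketch of why cascading identical cells approximates an exponential: a
# chain of N stages each multiplying by (1 + x/N) realizes (1 + x/N)^N,
# which approaches e^x as N grows, so adding cells extends the range over
# which the response is exponential (linear-in-dB).

import math

def cascade(x, cells):
    """Output of `cells` identical stages, each multiplying by 1 + x/cells."""
    y = 1.0
    for _ in range(cells):
        y *= 1.0 + x / cells
    return y

x = 1.2
errs = []
for n in (2, 8, 32):
    err_db = abs(20 * math.log10(cascade(x, n)) - 20 * math.log10(math.exp(x)))
    errs.append(err_db)
    print(n, round(err_db, 3))   # dB error shrinks as cells are added
```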

16.
This paper presents a new RF testing scheme based on a design-for-testability (DFT) method for measuring functional specifications of RF integrated circuits (IC). The proposed method provides the input impedance, gain, noise figure, voltage standing wave ratio (VSWR) and output signal-to-noise ratio (SNR) of a low noise amplifier (LNA). The RF test scheme is based on theoretical expressions that produce the actual RF device specifications by utilizing the output DC voltages from the DFT chip. This technique can save marginally failing chips in production testing as well as in the system, hence saving a tremendous amount of revenue from unnecessary device replacements.

17.
A test scheme based on the RAS architecture to optimize test time and test data volume
Large-scale, high-density integrated circuits suffer from large test data volumes and long test times. To address this, this paper proposes a complete test scheme with folding sets. The scheme uses a Random Access Scan (RAS) architecture to control input-reduced scan cells: it first generates several folding sets to detect most of the faults in the circuit, and then directly controls the scan cells to generate test vectors for the remaining faults. The folding sets generated by this scheme achieve high fault coverage and require little control data. Experimental results show that, compared with similar methods, the scheme effectively reduces both test data volume and test time.

18.
SOC test time minimization hinges on the attainment of core test parallelism; yet test power constraints hamper this parallelism, as excessive power dissipation may damage the SOC being tested. We propose a test power reduction methodology for SOC cores through scan chain modification. By inserting logic gates between scan cells, a given set of test vectors and captured responses is transformed into a new set of inserted stimuli and observed responses that yield fewer scan chain transitions. In identifying the best possible scan chain modification, we pursue a decoupled strategy wherein the test data are decomposed into blocks, which are optimized for power in a mutually independent manner. The decoupled handling of test data blocks not only ensures significantly high levels of overall power reduction but also delivers computational efficiency. The proposed methodology is applicable to both fully and partially specified test data; test data analysis in the latter case is performed on the basis of stimuli-directed controllability measures, which we introduce. To explore the tradeoff between the test power reduction attained by the proposed methodology and the computational cost, we carry out an analysis that establishes the relationship between block granularity and the number of scan chain modifications. Such an analysis enables the utilization of the proposed methodology in a computationally efficient manner, while delivering solutions that comply with the stringent area and layout constraints in SOCs.
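The decoupling argument can be illustrated with a toy cost model (intra-block transitions only, and at most one inverter per block; both are assumptions of this sketch, not the paper's model). Because the cost separates over blocks, optimizing each block alone yields the global optimum for this model.

```python
def transitions(bits):
    return sum(a != b for a, b in zip(bits, bits[1:]))

def best_single_inverter(block):
    """Try every position for one inverter, which complements the suffix of
    the block as the data shifts past it; return the minimal cost."""
    cands = [block] + [block[:i] + [b ^ 1 for b in block[i:]]
                       for i in range(1, len(block))]
    return min(transitions(c) for c in cands)

vec = [0, 1, 0, 1,  1, 1, 0, 0,  1, 0, 1, 0]   # invented test data
blocks = [vec[i:i + 4] for i in range(0, len(vec), 4)]
base = sum(transitions(b) for b in blocks)
total = sum(best_single_inverter(b) for b in blocks)
print(base, total)   # 7 4
```

Each block is searched independently, so the work grows linearly in the number of blocks; finer granularity means cheaper searches but fewer modification options per block, which is the tradeoff the analysis above quantifies.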

19.
In this paper we propose a BIST-based method to test the network-on-chip (NOC) communication infrastructure. The proposed method uses an IEEE 1149.1 architecture based on BIST for at-speed testing of crosstalk faults on inter-switch links, as well as an IEEE 1500-compliant wrapper to test the switches themselves. The former architecture includes enhanced cells intended for MAF-model test pattern generation and analysis of test responses; the latter includes (a) a March decoder that decodes and executes March commands, scanned in serially from the input system, on the First-In-First-Out (FIFO) buffers in the switch, and (b) a scan chain defined to test the routing logic block of the switch. For at-speed testing of inter-switch links, one new instruction is used to control the cells and the TPG controller, and two further instructions are applied to activate the March decoder and to control scan activities during the switch test session. These instructions are defined to fully comply with the conventional IEEE 1149.1 and IEEE 1500 standards.
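A software model of a March decoder makes the command structure concrete. MATS+ is used as the example sequence, and the 8-cell memory and stuck-at cell model are invented for the demo.

```python
# Sketch of a March-style decoder (simplified): each March element is a
# direction plus a list of read/write operations, executed over every cell
# of a memory model. A read checks the expected stored value.

def run_march(mem, elements):
    """Execute March elements on `mem`; return True iff all reads match."""
    ok = True
    for direction, ops in elements:
        cells = range(len(mem)) if direction == 'up' else range(len(mem) - 1, -1, -1)
        for i in cells:
            for op, val in ops:
                if op == 'w':
                    mem[i] = val
                else:                        # 'r': expect the stored value
                    ok &= (mem[i] == val)
    return ok

# MATS+ : {up (w0)} {up (r0, w1)} {down (r1, w0)}
mats_plus = [('up',   [('w', 0)]),
             ('up',   [('r', 0), ('w', 1)]),
             ('down', [('r', 1), ('w', 0)])]

print(run_march([0] * 8, mats_plus))            # fault-free memory passes

class StuckAt1(list):                           # hypothetical fault: cell 3 stuck at 1
    def __setitem__(self, i, v):
        super().__setitem__(i, 1 if i == 3 else v)

print(run_march(StuckAt1([0] * 8), mats_plus))  # stuck-at fault is detected
```

In the paper's architecture these elements would arrive as serially scanned commands and be applied to the switch FIFOs rather than a Python list.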

20.
While response compaction reduces the size of the expected vectors that need to be stored in tester memory, the consequent information loss inevitably reflects into a loss in test quality. Unknown x's further exacerbate the quality loss problem, as they mask out errors captured in other scan cells in the presence of response compactors. In this paper, we propose a technique that manipulates the x distribution in scan responses prior to their propagation into the response compactor. A block, which we refer to as x-align, inserted between the scan chains and the response compactor, aligns response x's within the same slices as much as possible in order to increase the number of scan cells that can be observed through the compactor. The alignment of x's is achieved by delaying the scan-out operations in the scan chains, wherein the proper delay values are computed judiciously. We present an Integer Linear Programming (ILP) formulation and a computationally efficient greedy heuristic for the computation of the delay values for the scan chains. The x-align hardware is generic yet reconfigurable: an analysis of the x distribution in a captured response helps compute the proper delay values, with which x-align is reconfigured to maximize the alignment of x's. The scan cell observability enhancement delivered by x-align paves the way for the utilization of simple response compactors, such as parity trees, while providing high levels of test quality even in the presence of a large density of response x's. X-align can also be utilized with any response compactor to manipulate the x distribution in favor of the compactor, thus improving the test quality attained.
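The greedy delay selection can be sketched on a toy response set; the chains, x positions, and maximum delay below are invented, and this sketch (one chain at a time, ties broken by the smallest delay) is a simplification of the paper's heuristic.

```python
# Sketch of x-align's greedy heuristic: delaying a scan chain by d cycles
# shifts its response stream right by d, so delays can pack the x's into
# fewer slices. A slice with no x in any chain is fully observable through
# a parity-tree compactor.

def shifted(resp, d, width):
    return ['x'] * d + resp + ['x'] * (width - len(resp) - d)

def xfree_slices(rows):
    width = len(rows[0])
    return sum(all(r[i] != 'x' for r in rows) for i in range(width))

def align(chains, max_delay):
    width = len(chains[0]) + max_delay
    rows = []
    for resp in chains:
        if not rows:
            best = 0                          # anchor the first chain
        else:                                 # greedy: best marginal gain
            best = max(range(max_delay + 1),
                       key=lambda d: xfree_slices(rows + [shifted(resp, d, width)]))
        rows.append(shifted(resp, best, width))
    return rows

chains = [list("10x1"), list("x011"), list("1x00")]
base = xfree_slices([shifted(c, 0, 6) for c in chains])
rows = align(chains, max_delay=2)
print(base, xfree_slices(rows))   # alignment recovers observable slices
```

The ILP formulation in the paper searches the same delay space exactly; the greedy version trades a little alignment quality for linear-time computation.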


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号