Similar Documents
20 similar documents found.
1.
Built-in self-test (BIST) on integrated circuits is one approach to maintaining fault coverage and device testability without increasing test time. As an additional benefit, fault simulation down to node level can be performed for the purpose of failure analysis. However, common FA techniques remain mandatory for defect localization. In this paper, we present BIST-assisted case studies on functionally failing integrated circuits. Starting from a fault simulation, defect localization is carried out using conventional failure analysis techniques. After the physical defect has been determined, we compare its effect on the affected nodes with the initial fault simulation.

2.
Analog and mixed-signal testing is becoming an important issue that affects both the time-to-market and the product cost of many SoCs. To provide an efficient test method for 865–870 MHz low noise amplifiers (LNAs), which are mixed-signal circuits, a novel BIST method is developed. The BIST can be implemented easily with an RF peak detector and two comparators. The test circuit and the LNA are designed in 0.35 μm CMOS technology. Simulation results show higher fault coverage than previous test methods: a total of twenty-eight short and open (catastrophic) faults and eleven parameter variations were injected into the LNA, giving 100% coverage of both the catastrophic faults and the parametric variations. The approach thus provides an efficient structural test that is suitable for production test in terms of area overhead, test accessibility, and test time.
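The pass/fail decision implied by a peak detector followed by two comparators is essentially a window comparison of the detected output amplitude. A minimal sketch of that decision logic, assuming illustrative threshold values not taken from the paper:

```python
def lna_bist_decision(peak_mv, lo_mv=180.0, hi_mv=260.0):
    """Window comparison of the RF peak detector output.

    peak_mv : detected output peak of the LNA for the test stimulus (mV)
    lo_mv, hi_mv : comparator thresholds bounding the fault-free range
                   (illustrative values; a real BIST derives them from the
                   fault-free response plus process guard-bands)
    Returns True if the device passes, False if it is flagged as faulty.
    """
    return lo_mv <= peak_mv <= hi_mv

# Example: a catastrophic fault that collapses the gain falls outside the window.
print(lna_bist_decision(220.0))  # True  (fault-free amplitude)
print(lna_bist_decision(40.0))   # False (catastrophic fault)
```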

3.
Device scaling has led to the blurring of the boundary between design and test: marginalities introduced by design-tool approximations can cause failures when aggressive designs are subjected to process variation. Larger die sizes are more vulnerable to intra-die variations, invalidating analyses based on a fixed set of process corners. These trends are eroding the predictability of test quality based on stuck-at fault coverage. Industry studies have shown that an at-speed functional test with poor stuck-at fault coverage can be a better DPM screen than a set of scan tests with very high stuck-at fault coverage. Contrary to conventional wisdom, we have observed that a high-coverage stuck-at test set is not necessarily good at detecting faults that model actual failure mechanisms. One approach to addressing the test quality crisis is to rethink the fault model that is at the core of these tests. Targeting realistic fault models is a challenge that spans the design, test, and manufacturing domains: the extraction of realistic faults has to analyze the design at the physical and circuit levels of abstraction while taking into account the failure modes observed during manufacture. Practical fault models need to be defined that adequately model failing behavior while remaining amenable to automatic test generation. The addition of these fault models places increasing performance and capacity demands on already stressed test generation and fault simulation tools. A new generation of analysis and test generation tools is needed to address the challenge of defect-based test. We provide a detailed discussion of the process technology trends that are responsible for next-generation test problems, and present a test automation infrastructure being developed at Intel to meet the challenge.

4.
The International Technology Roadmap for Semiconductors (ITRS) identifies two main challenges associated with the testing of manufactured ICs. First, the increasing complexity of the semiconductor manufacturing process, the physical properties of new materials, and the constraints imposed by the resolution of lithography techniques give rise to more complex failure mechanisms and hard-to-model defects that can no longer be abstracted using traditional fault models. The majority of defects in today's technology are resistive bridging and open defects with diverse electrical characteristics. Consequently, conventional fault models, and the tools based on them, are becoming inadequate for addressing defects resulting from new failure mechanisms. Second, the defect detection resolution of mainstream IDDQ testing is challenged by the significant elevation in off-state quiescent current and by process variability in newer technologies. Overcoming these challenges demands innovative test solutions based on realistic fault models capable of targeting real defects and thus providing high defect coverage. In prior work, power supply transient current (iDDT) testing has been shown to detect resistive bridging and open defects. The ability of transient currents to detect resistive opens and their virtual insensitivity to increases in static leakage current make iDDT testing all the more attractive. However, in order to integrate iDDT-based methods into production test flows, it is necessary to develop a fault simulation strategy to assess the defect detection capability of test patterns and facilitate the ATPG process. The analog nature of the test observable, i.e., the iDDT signal, entails compute-intensive transient simulations that are prohibitively expensive. In this work, we propose a practical fault simulation model that partitions the task of simulating the DUT (device under test) into linear and non-linear components, comprising the power/ground grid and the core logic, respectively. Using a divide-and-conquer strategy, this model replaces transient simulations of the power/ground grid with simple convolution operations that use its impulse response characteristics. We propose a path isolation strategy for the core logic as a means of reducing the computational complexity involved in deriving iDDT signals in the non-linear portion. The methodology, based on impulse response functions and isolated-path simulation, enables iDDT fault simulation without having to simulate the entire DUT. To our knowledge, no practical technique exists to perform fault simulation for iDDT-based methods. The proposed fault simulation model offers two main advantages: first, it allows fault injection at the geometric or layout level, thus providing a realistic representation of physical defects; second, the current/voltage profile of the power/ground grid, derived for iDDT fault simulation, can be used to perform accurate timing verification of the logic circuit, thus facilitating design verification. In summary, the proposed fault simulation framework not only enables the assessment of the defect detection capabilities of iDDT test methodologies, but also establishes a platform for performing defect-based testing on practical designs.
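The replacement of transient power/ground-grid simulation by convolution can be shown in a short numerical sketch: if the grid is treated as a linear system with a known impulse response, the observable iDDT waveform is the convolution of that response with the core-logic current drawn by the isolated switching path. Both the RC-like impulse response and the core current pulse below are assumed for illustration, not taken from the paper:

```python
import numpy as np

dt = 1e-12                      # 1 ps time step
t = np.arange(0, 2e-9, dt)      # 2 ns observation window

# Assumed impulse response of the (linear) power-grid path: RC-like decay.
tau = 50e-12
h = np.exp(-t / tau) / tau * dt            # discrete impulse response (unit area)

# Assumed core-logic current pulse produced by the isolated switching path.
i_core = np.where((t > 0.2e-9) & (t < 0.4e-9), 1e-3, 0.0)   # 1 mA for 200 ps

# iDDT seen through the grid = convolution of core current with grid response.
i_ddt = np.convolve(i_core, h)[: len(t)]

print(f"peak core current:  {i_core.max() * 1e3:.2f} mA")
print(f"peak observed iDDT: {i_ddt.max() * 1e3:.2f} mA (smeared by the grid)")
```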

5.
This paper presents a theoretical expression to evaluate the test quality of hierarchical defect-tolerant integrated circuits. The expression, developed for circuits with two levels of hierarchy, is based on a defect model that takes into account the relative importance (probability of occurrence) of each defect and, consequently, of each fault. Results obtained from this expression show that, for a given test coverage, the addition of defect-tolerance mechanisms decreases the test quality of integrated circuits. These results are important because they indicate that fault coverage can be a misleading measure of the test quality of defect-tolerant integrated circuits.
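The abstract does not reproduce the expression itself. As a reference point only, test quality (defect level DL) is classically related to process yield $Y$ and fault coverage $T$ by the Williams and Brown model, $\mathrm{DL} = 1 - Y^{\,1-T}$, which assumes equally likely, independent faults; the expression discussed above instead weights each fault by the probability of occurrence of the underlying defect and accounts for the two levels of the defect-tolerance hierarchy.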

6.
Testing integrated circuits (ICs) is understood as the task of filtering out defective ICs that violate data-sheet specifications. The cost of this filter comprises both the direct cost of testing a device and the indirect cost of test escapes and test-related yield loss. For analog and mixed-signal devices, such as data converters, traditional methods of estimating the defect and test escape levels require large sample sets of devices, because the defect level induced by manufacturing process variations is typically low. In this work, a model-based method of estimating defect and test escape levels is described. For this method, a small set of sample devices is sufficient: we first derive a manufacturing process model, which is then used to simulate the manufacturing of a large number of devices. These simulation results are subsequently used to estimate the defect and test escape levels, as well as the test-related yield loss when applying a given test. With these estimates, the quality and indirect costs of a test can be determined as a function of the test limits and guard-bands applied in production test.
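A minimal Monte Carlo sketch of the idea of simulating many devices from a fitted process model and then counting test escapes and yield loss for given specification and guard-banded test limits. The single Gaussian performance parameter, spec limit, tester noise, and guard-band below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed process model fitted from a small device sample:
# one performance parameter, e.g. INL of a data converter (LSB).
mu, sigma = 0.50, 0.15
n_devices = 1_000_000                                     # simulated production lot

true_inl = rng.normal(mu, sigma, n_devices)
meas_inl = true_inl + rng.normal(0.0, 0.03, n_devices)    # tester noise

spec_limit = 1.0            # data-sheet limit (LSB)
guard_band = 0.05           # tighten the test limit by this margin
test_limit = spec_limit - guard_band

good = true_inl <= spec_limit            # truly within spec
passed = meas_inl <= test_limit          # accepted by production test

escapes = np.mean(~good & passed)        # defective but shipped
yield_loss = np.mean(good & ~passed)     # good but rejected
print(f"test escape level: {escapes * 1e6:.1f} ppm")
print(f"test yield loss:   {yield_loss * 100:.3f} %")
```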

7.
8.
Large-scale integration components are subjected to testing based on stuck-fault modeling. Stuck-fault testing often does not provide patterns for all possible stuck conditions that can exist in a circuit. Because of this incompleteness of test coverage, a new quality measure is needed, one that is not based on sample inspection. Such an LSI quality measure is described in this paper. The LSI quality measure can be related to component yield and is based on the stuck-fault test coverage, the physical circuit design layout, and the rate of faults occurring on elemental circuit geometries. The concept is illustrated by an example: starting from a block diagram and an assumed stuck-fault coverage, some stuck faults are assumed to remain untested. For these untested faults, the elemental circuit geometries in a corresponding FET circuit layout are determined, and the quality measure is calculated. Common-sense rules are offered for optimizing the quality and lowering its cost impact on higher levels of assembly.
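The paper's exact formulation is not reproduced in the abstract; the sketch below only illustrates the general idea of weighting untested stuck faults by the defect rate of the layout geometries they map to. The geometry fault rates, areas, and the Poisson-style combination are assumptions made for illustration:

```python
import math

# Assumed defect rates per unit area for elemental layout geometries
# (defects per mm^2); purely illustrative numbers.
fault_rate = {"gate_poly": 0.8, "metal1": 0.5, "contact": 1.2}

# Untested stuck faults, each mapped to the geometry it occupies and
# the layout area (mm^2) associated with that fault site.
untested_faults = [
    ("gate_poly", 0.002),
    ("metal1",    0.004),
    ("contact",   0.001),
]

# Expected number of field-reaching defects on untested sites.
lam = sum(fault_rate[g] * area for g, area in untested_faults)

# Poisson-style quality measure: probability that no untested site is defective.
quality = math.exp(-lam)
print(f"expected undetected defects: {lam:.4f}")
print(f"quality measure (fraction of shipped parts defect-free): {quality:.4f}")
```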

9.
This work describes a technique for testing RF mixers with digital adaptive filters. RF circuits are widely used in data transmission applications, such as wireless communication, radio, and portable phone systems. However, traditional analog testing covers mainly linear circuits and is not well suited to non-linear hardware such as analog mixers. Here, an adaptive non-linear filter is trained so that it can mimic the behavior of an RF mixer. A test stimulus is then applied simultaneously to the filter and the mixer, and the outputs of both circuits are compared to check whether the circuit under test is faulty or fault-free. A prototype mixer was built to allow fault injection in the circuit under test, so that the detection capability of the proposed technique could be checked on a real-life circuit. The preliminary results point to a very promising test technique: the test is precise, low cost, and achieves complete fault coverage with a very short test time.
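A minimal sketch of the comparison idea: train an adaptive non-linear filter on a known-good device, then flag a device whose output deviates from the filter's prediction by more than a tolerance. The second-order polynomial feature model, LMS update, toy mixer, and tolerance below are assumptions, since the abstract does not specify the filter structure:

```python
import numpy as np

rng = np.random.default_rng(1)

def features(window):
    """Second-order polynomial features over a 3-tap window (assumed model)."""
    taps = np.array([window[-1], window[-2], window[-3]])
    quad = np.outer(taps, taps)[np.triu_indices(3)]
    return np.concatenate(([1.0], taps, quad))

def mixer(window, gain=1.0):
    """Toy non-linear 'mixer' standing in for the device under test."""
    return gain * window[-1] * window[-2] + 0.1 * window[-1]

# --- train the adaptive filter on a fault-free reference device ------------
w = np.zeros(10)                 # 1 bias + 3 linear + 6 quadratic weights
mu = 0.05                        # LMS step size
stim = rng.uniform(-1, 1, 5000)
for n in range(3, len(stim)):
    phi = features(stim[n - 2 : n + 1])
    err = mixer(stim[n - 2 : n + 1]) - w @ phi
    w += mu * err * phi          # LMS weight update

# --- test: compare the filter prediction against the circuit under test ----
def passes(gain, tol=0.1):
    x = rng.uniform(-1, 1, 500)
    resid = [abs(mixer(x[n - 2 : n + 1], gain) - w @ features(x[n - 2 : n + 1]))
             for n in range(3, len(x))]
    return max(resid) < tol      # True = device accepted as fault-free

print("fault-free device passes:", passes(gain=1.0))
print("faulty device (gain drop) passes:", passes(gain=0.6))
```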

10.
Functional broadside tests were defined to address the overtesting that may occur, due to high peak current demands, when tests for delay faults take the circuit through states that it cannot visit during functional operation (unreachable states). The fault coverage achievable by functional broadside tests is typically lower than the fault coverage achievable by (unrestricted) broadside tests. A solution to this loss in fault coverage, in the form of observation point insertion, is described. Observation points do not affect the state of the circuit; thus, functional broadside tests retain their property of testing the circuit using only reachable states, avoiding overtesting due to high peak current demands, while the extra observability allows additional faults to be detected. A procedure for observation point insertion to improve the coverage of transition faults is described. Experimental results demonstrate that significant improvements in the transition fault coverage of functional broadside tests are obtained.

11.
The growing dispersion of IC parameters introduces significant uncertainty in gate output conductances and logic thresholds, which play a major role in bridging fault detection. In this evolving context, the quality of fault simulation and test generation tools that use nominal parameters should be verified. To analyze this problem, we have studied bridging fault detection in combinational ICs in the presence of growing variations of IC parameters. Results show that a single test is not sufficient to ensure acceptable escape probabilities. Conversely, the minimal number of test vectors required to provide a null escape probability is upper bounded with respect to variations in the standard deviation of IC parameters. This result has been verified by means of Monte Carlo electrical-level simulation. We propose a method to derive these minimal test sets in the case of low-frequency tests. A fault simulator and a test generator have been developed to support the search for minimal test sets targeting a null escape probability. These tools have been applied to a set of combinational benchmarks.
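A minimal Monte Carlo sketch of the escape-probability question. The electrical model below, a resistive divider between the two bridged drivers compared against a varying logic threshold, with a per-vector drive modulation, is an illustrative simplification of the electrical-level simulations described above, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(2)

def escape_probability(n_vectors, n_mc=20_000):
    """Fraction of simulated dies on which none of n_vectors detects the
    bridging fault, under Gaussian variation of conductances and threshold."""
    escapes = 0
    for _ in range(n_mc):
        g0 = rng.normal(1.0, 0.10)     # conductance of the driver forcing 0
        g1 = rng.normal(1.0, 0.10)     # conductance of the driver forcing 1
        vth = rng.normal(0.5, 0.05)    # logic threshold of the reading gate
        v_bridge = g1 / (g0 + g1)      # bridged-node voltage (resistive divider)
        detected = False
        for _ in range(n_vectors):
            # Different vectors excite the bridge through different drive
            # strengths (illustrative per-vector modulation).
            strength = rng.uniform(0.7, 1.3)
            # Detection: the reading gate resolves the intermediate voltage
            # to the wrong logic value, so the fault effect becomes observable.
            detected |= (strength * v_bridge) < vth
        escapes += not detected
    return escapes / n_mc

for k in (1, 2, 4, 8):
    print(f"{k} test vector(s): escape probability = {escape_probability(k):.4f}")
```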

Michele Favalli received the Dr. Eng. degree in Electronic Engineering from the University of Bologna in 1987 and the Ph.D. in Electronic Engineering and Computer Science in 1993. From 1993 he worked at the Department of Electronics of the University of Bologna as a Researcher, and since 1998 he has been Associate Professor of Computer Science at the University of Ferrara. His research interests are in the area of digital IC design and testing, including fault modeling, fault simulation, test generation, on-line testing, and fault-tolerant circuits. Marcello Dalpasso received the B.S. degree (summa cum laude) in electrical engineering in 1990 and the Ph.D. degree in computer science in 1994, both from the University of Bologna, Italy. Since 2004, he has been an Associate Professor at the University of Padova, Italy, where he teaches computer science. His research interests include fault modeling and simulation for digital IC design, and CAD frameworks and tools for intellectual-property protection in EDA.

12.
This article emphasizes simulation-based sampling techniques for estimating fault coverage using small fault samples. Although random testing is considered the primary area of application of the technique, it is also suitable for estimating the fault coverage of non-random tests based on specific fault models. Especially for fault coverages exceeding 95%, it is shown that a precise estimate can be obtained using a fault sample of only 500 faults. The estimation is based on a binomial approximation of the probability density of the sample fault coverage. Using Bayesian statistics, an estimate is obtained whose accuracy is a linear function of the sample size as the fault coverage approaches 100%. The sample size is independent of the circuit size, making fault sampling particularly interesting for the fault simulation of ULSI designs because it reduces the time complexity of fault simulation from O(N²) to O(N). This work was performed while Dr. Daehn was with the Laboratorium fuer Informationstechnologie at the University of Hannover, Germany.
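A minimal sketch of the sampling estimate (binomial approximation only; the Bayesian refinement described above is not reproduced): fault-simulate a small random sample of the fault universe, take the sample detection rate as the coverage estimate, and attach a binomial standard error. The fault universe and the detection rule are synthetic placeholders:

```python
import math
import random

random.seed(0)

# Synthetic fault universe: N faults with a fixed (unknown to the estimator)
# fraction detected by the test set under evaluation.
N = 2_000_000
TRUE_COVERAGE = 0.97

def detected(fault_id):
    """Placeholder for 'fault-simulate this fault against the test set'."""
    return (fault_id % 100) < int(TRUE_COVERAGE * 100)

# Fault sampling: simulate only a small random sample instead of all N faults.
n_sample = 500
sample = random.sample(range(N), n_sample)
k = sum(detected(f) for f in sample)

fc_hat = k / n_sample
std_err = math.sqrt(fc_hat * (1 - fc_hat) / n_sample)  # binomial approximation
print(f"estimated fault coverage: {fc_hat:.3f} +/- {1.96 * std_err:.3f} (95% CI)")
print(f"true fault coverage:      {TRUE_COVERAGE:.3f}")
```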

13.
The combination of higher quality requirements and the sensitivity of high-performance circuits to delay defects has led to an increasing emphasis on delay testing of VLSI circuits. In this context, it has been proven that Single Input Change (SIC) test sequences are more effective than classical Multiple Input Change (MIC) test sequences when high robust delay fault coverage is targeted. In this paper, we show that random SIC (RSIC) test sequences achieve higher fault coverage than random MIC (RMIC) test sequences when both robust and non-robust tests are considered. The experimental results given in this paper are based on software-generated RSIC test sequences, which are easy to produce in this setting. For built-in self-test (BIST), hardware-generated RSIC sequences are required; this kind of generation is briefly discussed at the end of the paper.
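A minimal software sketch of RSIC generation as it is commonly defined, assuming that each new vector differs from its predecessor in exactly one randomly chosen input bit, starting from a random seed vector:

```python
import random

def rsic_sequence(n_inputs, length, seed=0):
    """Generate a random single-input-change (RSIC) test sequence:
    consecutive vectors differ in exactly one randomly chosen bit."""
    rng = random.Random(seed)
    vec = [rng.randint(0, 1) for _ in range(n_inputs)]
    seq = [tuple(vec)]
    for _ in range(length - 1):
        bit = rng.randrange(n_inputs)     # the single input that changes
        vec[bit] ^= 1
        seq.append(tuple(vec))
    return seq

for v in rsic_sequence(n_inputs=4, length=6):
    print("".join(map(str, v)))
```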

14.
Faults in resistive random access memory (RRAM) severely affect product reliability and yield, and accurate, efficient test methods can effectively shorten the process optimization cycle and reduce test cost. Based on the SMIC 28 nm process platform, a 1 Mbit RRAM module with a 1T1R structure was taped out. The fault responses observed during testing are analyzed in detail, and a fault identification expression is defined. Building on the March algorithm, an effective test algorithm targeting RRAM faults is proposed, and a built-in self-test (BIST) circuit capable of locating faults is designed. Simulation results show that the proposed test scheme has the advantages of requiring few pins, short test time, accurate fault localization, and high fault coverage.
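The specific March-based algorithm and fault-identification expression are not given in the abstract. The sketch below only illustrates how a classical March element sequence (March C-, used here as a stand-in) detects and localizes a faulty cell in a toy memory model with an injected stuck-at-0 cell:

```python
# March C-: up/down(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); up/down(r0)
MARCH_C_MINUS = [
    ("up",   [("w", 0)]),
    ("up",   [("r", 0), ("w", 1)]),
    ("up",   [("r", 1), ("w", 0)]),
    ("down", [("r", 0), ("w", 1)]),
    ("down", [("r", 1), ("w", 0)]),
    ("up",   [("r", 0)]),
]

def run_march(write, read, size):
    """Apply the March elements; return the addresses where a read miscompared."""
    failing = set()
    for order, ops in MARCH_C_MINUS:
        addrs = range(size) if order == "up" else range(size - 1, -1, -1)
        for addr in addrs:
            for op, val in ops:
                if op == "w":
                    write(addr, val)
                elif read(addr) != val:
                    failing.add(addr)   # expected-value mismatch: fault observed here
    return failing

# Toy 16-cell memory with a stuck-at-0 fault injected at address 5.
mem = [0] * 16
def write(addr, val):
    mem[addr] = 0 if addr == 5 else val   # cell 5 cannot store a 1
def read(addr):
    return mem[addr]

print("failing addresses:", run_march(write, read, 16))   # expected: {5}
```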

15.
In this work, based on the concept of test pattern broadcasting, we propose a new core-based testing method that gives core users the maximum level of test freedom. Instead of using only the test patterns delivered by core providers, core users are allowed to broadcast their own test patterns to the cores of an SoC (system-on-chip) design for parallel scan testing. The fault coverage of each core test, using test patterns developed by any core user, can be evaluated by an enhanced version of a traditional fault simulator. The netlist of each core is scrambled before it is delivered to core users, so the netlist is not revealed. The enhanced fault simulator of a core can decode the scrambled netlist and perform fault simulation for the test patterns provided by each core user. For each core, the random test patterns applied by a core user and the golden test patterns delivered by the core provider jointly achieve high and flexible fault coverage requirements. The enhanced logic simulator of each core can also decrypt the scrambled netlist and perform logic simulation to generate fault-free test responses, for example for signature analysis. The proposed method has the advantages of minimizing the number of scan pins, reducing the test application time, and giving core users the maximum level of test quality control. Simulation results demonstrate the feasibility of this method.

16.
Functional test sequences are often used in manufacturing testing to target defects that are not detected by structural tests. However, they suffer from low defect coverage since, in practice, they are mostly derived from existing design-verification test sequences. Therefore, there is a need to increase their effectiveness using design-for-testability (DFT) techniques. We present a DFT method that uses the register-transfer level (RTL) output deviations metric to select observation points for an RTL design and a given functional test sequence. Simulation results for six ITC'99 circuits show that the proposed method outperforms two baseline methods for several gate-level coverage metrics, including stuck-at, transition, bridging, and gate-equivalent fault coverage. Moreover, by inserting only a small subset of all possible observation points using the proposed method, a significant fault coverage increase is obtained for all benchmark circuits.

17.
A novel oscillation ring (OR) test scheme and architecture for testing interconnects in SoCs is proposed and demonstrated. In addition to stuck-at and open faults, the scheme can also detect delay faults and crosstalk glitches, which are otherwise very difficult to test with traditional test schemes. IEEE Std. 1500 wrapper cells are modified to accommodate the test scheme. An efficient algorithm based on a graph model is proposed to construct ORs for an SoC. Experimental results on MCNC benchmark circuits show the effectiveness of the algorithm; in all experiments, the scheme achieves 100% fault coverage with a small number of tests.

18.
The results of a simulation-based fault characterization study of BiCMOS logic circuits are given. Based on these results, the authors study different techniques for testing BiCMOS logic circuits and evaluate the effectiveness of stuck-at fault testing, stuck-open fault testing, delay fault testing, and current testing in achieving a high level of defect coverage. A novel BiCMOS circuit structure that improves the testability of BiCMOS digital circuits is also presented.

19.
The authors consider a protocol specification represented as a fully specified Mealy automaton and the problem of testing an implementation for conformance to such a specification. No single sequence-based test can be completely reliable if one allows for the possibility of an implementation with an unknown number of extra states. They define a hierarchy of test sequences, parameterized by the length of the behaviors under test. For the reset method of conformance testing, they prove that the hierarchy has the property that any fault detected by test i is also detected by test i+1, and they show that this sequence of tests converges to a reliable conformance test. For certain bridge-sequence methods of constructing test sequences, this result does not always hold. In experiments with several specifications, they observe that, given a small number of extra states in an implementation, the sequence of tests converges to total fault coverage for small values of i, for both the reset and the bridge-sequence methods. They also observe that the choice of characterizing sequence has less effect on fault coverage than the choice of behavior length or the number of extra states in the implementation.
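A minimal sketch of the reset-method idea for one level of the hierarchy, under the assumption that "test i" is realized by applying every input word of length up to i from the reset state and comparing the output words of the specification and the implementation:

```python
from itertools import product

class Mealy:
    """Deterministic, fully specified Mealy machine."""
    def __init__(self, trans, init):
        self.trans, self.init = trans, init   # trans[(state, input)] = (next_state, output)

    def run(self, word):
        state, outputs = self.init, []
        for symbol in word:
            state, out = self.trans[(state, symbol)]
            outputs.append(out)
        return tuple(outputs)

def conforms_up_to(spec, impl, alphabet, i):
    """Reset-method test of level i: every input word of length <= i,
    applied from the reset state, must yield identical output words."""
    for n in range(1, i + 1):
        for word in product(alphabet, repeat=n):
            if spec.run(word) != impl.run(word):
                return False, word
    return True, None

# Two-state specification and an implementation with one extra state whose
# misbehaviour only shows up on behaviours of length 3.
spec = Mealy({("A", 0): ("A", 0), ("A", 1): ("B", 1),
              ("B", 0): ("A", 1), ("B", 1): ("B", 0)}, "A")
impl = Mealy({("A", 0): ("A", 0), ("A", 1): ("B", 1),
              ("B", 0): ("C", 1), ("B", 1): ("B", 0),
              ("C", 0): ("A", 0), ("C", 1): ("B", 0)}, "A")

for i in (1, 2, 3):
    ok, witness = conforms_up_to(spec, impl, (0, 1), i)
    print(f"test level {i}: {'pass' if ok else 'fail on input ' + str(witness)}")
```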

20.
Research on a March Test Algorithm Based on Memory Fault Primitives
Studying efficient fault test algorithms and establishing effective embedded memory test methods are of great significance for improving chip yield and reducing chip production cost. Starting from the testing of basic memory fault primitives, and building on a study of the March LR algorithm, a new algorithm, March LSC, is proposed. The algorithm can test realistic linked faults and raises the coverage of single-cell and coupling faults in current memories to 100%. A memory built-in self-test (MBIST) circuit was implemented using the March LSC algorithm. Simulation experiments show that March LSC detects embedded memory faults well and meets the technical requirements. The results provide a valuable reference for practical applications.
