Similar Documents
 20 similar documents found (search time: 609 ms)
1.
The IDDQ test method measures the quiescent power supply current of CMOS ICs for select test vectors, or logic states, and provides a clear indication of defects, failure mechanisms, and many types of design errors. Underlying this type of test are design principles that inherently provide high defect coverage, as well as diagnosis capability and physical localization. The ability of IDDQ testing to rapidly gauge an IC's health is like a nurse taking a patient's temperature. The analogy is especially apt because it underscores that IDDQ testing is not a panacea: as with a patient, other "vital signs" are needed. The authors give a unique perspective into the design of a submicron-technology microprocessor (an approximately 1-million-transistor, 100-MHz-plus design). Stringent testability and quality goals drove their selection of IDDQ testing in addition to at-speed functional testing, boundary scan, internal scan, and built-in self-test. They demonstrate the many benefits of IDDQ testing, including substantially reduced power consumption and the ability to merge readily with other testability and high-performance goals.

2.
IDDQ testing has emerged from a company specific CMOS IC test technology in the 1960s and 1970s to become a worldwide accepted technique that is a requirement for low defective parts per million levels and failure rates. It is the single most sensitive test method to detect CMOS IC defects, and an abundance of studies have laid a solid foundation for why this is so. The IDDQ test uses the quiescent power supply current of logic states as an indication of defect presence. Its major requirement for maximum efficiency is that the design implement nanowatt power levels (nanoampere supply current) in the quiescent portion of the power supply current. No direct connections are allowed between VDD and VSS during the quiescent period. IDDQ testing has increased significantly since 1990, highlighting problems and driving solutions not addressed by the high reliability manufacturers of earlier technologies. Faster IDDQ instrumentation and better software tools to generate and grade IDDQ test patterns result from this increased interest. We address two major issues confronting IDDQ testing: yield loss and increased background current of deep submicron IC technologies projected by the Semiconductor Industry Association/Sematech road map. Both issues are points of controversy.
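The screen described above reduces to a simple comparison of each vector's quiescent current against a nanoampere-scale limit. A minimal sketch in Python follows; the measurement values and the 500 nA limit are hypothetical, not taken from the abstract.

```python
def iddq_screen(measurements_nA, limit_nA=500.0):
    """Flag a die as defective if any quiescent-current measurement
    exceeds the limit. The limit is a hypothetical placeholder;
    real limits are set from the design's expected quiescent current."""
    failing = [(i, m) for i, m in enumerate(measurements_nA) if m > limit_nA]
    return {"pass": not failing, "failing_vectors": failing}

# A defect-free CMOS die draws nanoampere-level quiescent current on
# every vector; a bridging defect shows up as a large current on the
# vectors that activate the defective node.
good = iddq_screen([12.0, 15.5, 9.8, 14.1])
bad = iddq_screen([13.0, 48000.0, 11.2, 52000.0])
```

The per-vector record in `failing_vectors` is what gives IDDQ its diagnostic value: the set of failing logic states narrows down which nodes the defect touches.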

3.
Deep-submicron technologies pose difficult challenges for IDDQ testing in the future. The low threshold voltages used by DSM devices decrease the defect resolution of IDDQ. However, because IDDQ is a valuable test method, researchers are working to augment it with other test parameters to prolong its effectiveness.

4.
Designers must target realistic faults if they desire high-quality test and diagnosis of CMOS circuits. The authors propose a strategy for generating high-quality IDDQ test patterns for bridging faults. They use a standard ATPG tool for stuck-at faults that adapts to target bridging faults via IDDQ testing. The authors discuss IDDQ test set diagnosis capability and specifically generated vectors that can improve diagnosability, and provide test and diagnosis results for benchmark circuits.

5.
This paper shows that IDD waveform analysis can detect defects that IDDQ testing cannot. An investigation of two IDD waveform analysis methods, one based on integrators and one on the fast Fourier transform, confirms that such analysis enables fault localization in static and dynamic CMOS circuits.

6.
A rapid failure analysis method for high-density CMOS static RAMs (SRAMs) that uses realistic defect modeling and the results of functional and IDDQ testing is presented. The key to the method is the development of a defect-to-signature vocabulary through inductive fault analysis. Results indicate that the method can efficiently debug the multimegabit-memory manufacturing process.
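A defect-to-signature vocabulary can be pictured as a lookup from combined functional and IDDQ outcomes to candidate defect classes. The table below is a hypothetical illustration of the idea, not the vocabulary derived in the paper.

```python
# Hypothetical defect-to-signature vocabulary: the keys pair a
# functional-test outcome with an IDDQ-test outcome, and the values
# list candidate defect classes consistent with that signature.
SIGNATURES = {
    ("fail", "high"):   ["bridging short", "gate-oxide short"],
    ("fail", "normal"): ["open defect"],
    ("pass", "high"):   ["latent bridging defect"],
    ("pass", "normal"): ["no defect indicated"],
}

def diagnose(functional, iddq):
    """Map a (functional, IDDQ) test signature to candidate defects."""
    return SIGNATURES.get((functional, iddq), ["unknown signature"])
```

Inductive fault analysis fills in such a table systematically, by injecting realistic layout-level defects and recording which signature each one produces.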

7.
An IDDQ technique is proposed based on an extension of a VDDT-based method called transient signal analysis. The method, called quiescent signal analysis, uses IDDQs measured at multiple supply pins as a means of localizing defects.

8.
Applying scan-based DFT, IDDQ testing, or both to sequential circuits does not ensure bridging-fault detection, which depends on the resistance of the fault and circuit-level parameters. With a “transparent” scan chain, however, the tester can use both methods to detect manufacturing process defects effectively, including difficult-to-detect shorts in the scan chain. The author presents a strategy for making the scan chain transparent. The test complexity of such a chain is very small, regardless of the number of flip-flops it contains.

9.
To screen defective dies, IDDQ tests require a reliable estimate of each die's defect-free measurement. The nearest-neighbor residual (NNR) method provides a straightforward, data-driven estimate of test measurements for improved identification of die outliers.
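The NNR idea can be sketched as follows: estimate each die's defect-free value from its spatial neighbors on the wafer (here, the median of adjacent dies) and flag dies whose residual is large. The wafer layout, neighborhood radius, and use of the median are illustrative assumptions, not the paper's exact formulation.

```python
import statistics

def nnr_residuals(wafer, radius=1):
    """Nearest-neighbor residual: estimate each die's expected IDDQ
    as the median of its spatial neighbors, then take the difference.
    wafer: dict mapping (x, y) die coordinates to an IDDQ measurement."""
    residuals = {}
    for (x, y), val in wafer.items():
        neighbors = [wafer[(x + dx, y + dy)]
                     for dx in range(-radius, radius + 1)
                     for dy in range(-radius, radius + 1)
                     if (dx, dy) != (0, 0) and (x + dx, y + dy) in wafer]
        if neighbors:
            residuals[(x, y)] = val - statistics.median(neighbors)
    return residuals

# A 3x3 neighborhood with one grossly defective die: the defective die
# stands out with a large residual, while its neighbors' estimates are
# protected by the median's robustness to the single outlier.
wafer = {(x, y): 10.0 for x in range(3) for y in range(3)}
wafer[(1, 1)] = 500.0
res = nnr_residuals(wafer)
```

Because the estimate is built from neighboring dies on the same wafer, it automatically tracks wafer-level process variation, which is what makes the residual a sharper outlier statistic than a fixed global limit.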

10.
In these two PLA configurations, adjacent precharge lines activate, and adjacent evaluation lines evaluate, to complementary logic levels. This design-for-test technique makes it possible to use IDDQ tests to detect all likely bridging faults, for the most part independently of the PLA's implemented function.

11.
This paper compares the fault-detecting ability of several software test data adequacy criteria. It has previously been shown that if C1 properly covers C2, then C1 is guaranteed to be better at detecting faults than C2, in the following sense: a test suite selected by independent random selection of one test case from each subdomain induced by C1 is at least as likely to detect a fault as a test suite similarly selected using C2. In contrast, if C1 subsumes but does not properly cover C2, this is not necessarily the case. These results are used to compare a number of criteria, including several that have been proposed as stronger alternatives to branch testing. We compare the relative fault-detecting ability of data flow testing, mutation testing, and the condition-coverage techniques, to branch testing, showing that most of the criteria examined are guaranteed to be better than branch testing according to two probabilistic measures. We also show that there are criteria that can sometimes be poorer at detecting faults than substantially less expensive criteria.
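The probabilistic measure used in such comparisons, the chance that a suite built by drawing one test uniformly at random from each subdomain exposes the fault, can be computed directly. A small sketch, with hypothetical subdomain sizes:

```python
from math import prod

def detect_probability(subdomains):
    """Probability that a suite formed by picking one test uniformly at
    random from each subdomain exposes the fault.
    subdomains: list of (failing_points, total_points) pairs, one pair
    per subdomain induced by the criterion."""
    return 1.0 - prod(1 - f / n for f, n in subdomains)

# One subdomain where half the inputs fail: detection probability 0.5.
# Two such subdomains: 1 - 0.5 * 0.5 = 0.75, so a criterion that
# induces more subdomains concentrating the failing inputs does better.
p1 = detect_probability([(1, 2)])
p2 = detect_probability([(1, 2), (1, 2)])
```

This is the sense in which "properly covers" is a stronger guarantee than "subsumes": it constrains how the failing points are distributed across subdomains, which is exactly what this probability depends on.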

12.
Two testing techniques for ultra-large-scale integrated (ULSI) memories containing on-chip voltage downconverters (VDCs) are described. The first is an on-chip VDC tuning technique that adjusts internal VCC to compensate for the monitored characteristics of the process parameters during repair analysis testing. The second is an operating-voltage margin test, performed at various internal VCC levels during the wafer sort test (WT) and the final shipping test (FT).

13.
李岳炀  钟麦英 《自动化学报》2015,41(9):1638-1648
This paper studies the design of finite-horizon fault detection filters (FDFs) for linear discrete time-varying systems subject to multiple packet dropouts. Under the condition that data packets carry time stamps, an observer-based FDF is designed as the residual generator, and two classes of FDFs are constructed. The first is an H-/H∞ or H∞/H∞ FDF: by defining generalized transfer-function operators from fault to residual and from unknown input to residual, this design problem is converted into an optimization of the H-/H∞ or H∞/H∞ performance index in the stochastic sense. The second is an H∞ FDF, whose design is converted into an H∞ filtering problem in the stochastic sense. Using an adjoint-operator-based H∞ optimization method and solving a recursive Riccati equation, analytical solutions to both FDF design problems are obtained. A numerical example verifies the effectiveness of the proposed method.

14.
The design of a feedforward compensator for robust ℋ∞ or ℋ2 performance under structured uncertainty is considered. For linear time-invariant uncertainty, a convex method based on linear matrix inequalities (LMIs) across the frequency variable is given. For nonlinear or time-varying perturbations and ℋ∞ performance, the design problem is reduced exactly to a state-space LMI, and extensions to ℋ2 performance are discussed. An example illustrates the application of these techniques to two-degree-of-freedom control design.

15.
Min-Yuan Ma  Jing-Song Huang   《Displays》2008,29(3):219-236
The index of cognitive information difficulty (Dinfo) is measured at the psychological level, for applications in learning and design, when people read and recognize Chinese sentences; how to effectively measure information difficulty thus becomes an essential issue for educational learning and design. An experiment on the psychological cognition of sentence difficulty was planned to test and verify which measure is better: Information Mass (Minfo) or Information Quantity. The results show that the concept of Minfo is more suitable than Information Quantity for meaningful sentences. The psychologically perceived difficulty of sentences can also be measured from subjects. The study finds that the concept of Minfo can be used to measure the Dinfo not only of Chinese characters but also of meaningful sentences, and that there is a significant linear relation between the Dinfo of sentences and the logarithm of Minfo; in other words, subjects' Dinfo can be inferred from Minfo. The authors establish the regression model "Dinfo = a + b*log2(Minfo)". The Minfo concept gives a reasonable explanation of the difficulty of Chinese characters and sentences, and the linear regression model serves as a reference for educational learning and design.
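The regression model Dinfo = a + b*log2(Minfo) can be fit by ordinary least squares on log-transformed Minfo. A minimal sketch with synthetic data (the numbers are illustrative, not the study's measurements):

```python
import math

def fit_dinfo_model(minfo, dinfo):
    """Ordinary least squares fit of Dinfo = a + b * log2(Minfo).
    Returns the intercept a and slope b."""
    x = [math.log2(m) for m in minfo]
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(dinfo) / n
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, dinfo))
         / sum((xi - xbar) ** 2 for xi in x))
    a = ybar - b * xbar
    return a, b

# Synthetic data generated from a = 1, b = 2, so the fit is exact:
# Minfo values 2, 4, 8 give log2(Minfo) = 1, 2, 3 and Dinfo = 3, 5, 7.
a, b = fit_dinfo_model([2, 4, 8], [3.0, 5.0, 7.0])
```

Taking log2 first turns the model into a straight line, which is why a significant linear relation between Dinfo and log2(Minfo) supports the model.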

16.
This paper presents a nonlinear control design for both the H2 and H∞ optimal control of current-fed induction motor drives. These controllers are derived using analytical stationary solutions that minimize a generalized convex energy cost function including the stored magnetic energy and the coil losses, while satisfying torque regulation control objectives. Explicit control expressions for both the H2 and H∞ optimal designs are given. Furthermore, the optimal attenuation factor, i.e., the optimal H∞ norm, and the corresponding worst-case disturbance are both computed explicitly.

17.
This study introduces a mixed H2/H∞ fuzzy output feedback control design method for nonlinear systems with guaranteed control performance. First, the Takagi-Sugeno fuzzy model is employed to approximate a nonlinear system. Next, based on the fuzzy model, a fuzzy observer-based mixed H2/H∞ controller is developed to achieve the suboptimal H2 control performance with a desired H∞ disturbance rejection constraint. A robust stabilization technique is also proposed to override the effect of approximation error in the fuzzy approximation procedure. By the proposed decoupling technique and two-stage procedure, the outcome of the fuzzy observer-based mixed H2/H∞ control problem is parametrized in terms of two eigenvalue problems (EVPs): one for the observer and the other for the controller. The EVPs can be solved very efficiently using linear matrix inequality (LMI) optimization techniques. A simulation example is given to illustrate the design procedures and performance of the proposed method.

18.
A formal analysis of the fault-detecting ability of testing methods
Several relationships between software testing criteria, each induced by a relation between the corresponding multisets of subdomains, are examined. The authors discuss whether for each relation R and each pair of criteria, C1 and C2, R(C1, C2) guarantees that C1 is better at detecting faults than C2 according to various probabilistic measures of fault-detecting ability. It is shown that the fact that C1 subsumes C2 does not guarantee that C1 is better at detecting faults. Relations that strengthen the subsumption relation and that have more bearing on fault-detecting ability are introduced.

19.
Process capability index Cpk has been widely used in the manufacturing industry as a process performance measure. In this paper, we investigate the natural estimator of the index Cpk, and show that under the assumption of normality its distribution can be expressed as a mixture of the chi-square and the normal distributions. We also implement the theory of hypothesis testing using the natural estimator of Cpk, and provide efficient Maple programs to calculate the p-values as well as the critical values for various values of α-risk, capability requirements, and sample sizes. The behavior of the p-values and critical values as functions of the distribution parameters is investigated to obtain tight critical values for reliable testing. Based on the test, we develop a simple and practical procedure for in-plant applications. Practitioners can use the proposed procedure to determine whether their process meets the preset capability requirement, and make reliable decisions.
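The natural estimator referred to above replaces the process mean and standard deviation in Cpk = min(USL - mu, mu - LSL) / (3 * sigma) with their sample counterparts. A minimal sketch, with made-up specification limits and data:

```python
import statistics

def cpk_estimate(samples, lsl, usl):
    """Natural estimator of Cpk: min(USL - xbar, xbar - lsl) / (3 s),
    where xbar is the sample mean and s the sample standard deviation.
    lsl and usl are the lower and upper specification limits."""
    xbar = statistics.mean(samples)
    s = statistics.stdev(samples)
    return min(usl - xbar, xbar - lsl) / (3 * s)

# Hypothetical data: mean 10, sample standard deviation 2, limits
# LSL = 4 and USL = 22, so the nearer limit (LSL, 3 sigma away)
# drives the estimate: min(12, 6) / 6 = 1.0.
cpk = cpk_estimate([8, 10, 12], lsl=4, usl=22)
```

Because s is random, the estimate itself is random, which is why the paper derives its sampling distribution and bases a hypothesis test on it rather than comparing the point estimate to the requirement directly.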

20.
In this technical note, the problem of designing fixed-order robust H∞ controllers is considered for linear systems affected by polytopic uncertainty. A polynomial method is employed to design a fixed-order controller that guarantees that all the closed-loop poles reside within a given region of the complex plane. In order to utilize the freedom of the controller design, an H∞ performance specification is also enforced by using the equivalence between robust stability and an H∞ norm constraint. The design problem is formulated as a linear matrix inequality (LMI) constraint whose decision variables are controller parameters. An illustrative example demonstrates the feasibility of the proposed design methods.
