Similar Documents
20 similar documents found.
1.
In cases where device numbers are limited, large statistical studies to verify reliability are impractical. Instead, an approach incorporating a solid base of modelling, simulation, and material science into a standard reliability methodology makes more sense and leads to a science-based reliability methodology. The basic reliability method is (a) design, model and fabricate, (b) test structures and devices, (c) identify failure modes and mechanisms, (d) develop predictive reliability models (accelerated aging), and (e) develop qualification methods. At various points in these steps, technical data are required on MEMS material properties (residual stress, fracture strength, fatigue, etc.), MEMS surface characterization (stiction, friction, adhesion, role of coatings, etc.), or MEMS modelling and simulation (finite element analysis, uncertainty analysis, etc.). This methodology is discussed as it relates to reliability testing of a micro-mirror array consisting of 144 piston mirrors. In this case, 140 mirrors were cycled full stroke (1.5 μm) 26 billion times with no failure. Using our technical science base, fatigue of the springs was eliminated as a mechanism of concern. Eliminating this wear-out mechanism allowed use of the exponential statistical model to predict lower-bound confidence levels for the failure rate in a "no-fail" condition.
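For the zero-failure exponential model, the one-sided upper confidence bound on a constant failure rate has a closed form, λ_U = -ln(1-C)/T. The sketch below is a minimal illustration; treating the exposure as accumulated device-cycles (140 mirrors × 26 billion cycles each) is our assumption, since the abstract does not state its exposure accounting.

```python
from math import log

def zero_failure_lambda_upper(total_exposure, confidence=0.90):
    """One-sided upper confidence bound on a constant failure rate when
    zero failures are observed in `total_exposure` accumulated units
    (exponential model): lambda_U = -ln(1 - C) / T = chi2(C; 2) / (2T)."""
    return -log(1.0 - confidence) / total_exposure

# Illustrative numbers from the abstract: 140 mirrors, 2.6e10 cycles each.
T = 140 * 26e9  # device-cycles (our assumed exposure unit)
for c in (0.60, 0.90, 0.95):
    print(f"{c:.0%} upper bound: {zero_failure_lambda_upper(T, c):.3e} failures/cycle")
```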

2.
Reliability estimation is usually performed on a part under a constant stress level. However, a part could experience several different stress levels, or profiled stress, during its lifetime. One such example is when the part is subject to step-stress accelerated life testing. Studying the reliability estimation & its confidence bounds for a part under varying stresses generalizes the existing estimation methods for accelerated life testing. In this paper, we derive the reliability function of a part under varying stresses based on a Weibull failure time distribution, and a cumulative damage model. The reliability confidence bounds, based on an s-normal approximation, are given explicitly, and their limiting properties are discussed. A step-stress accelerated life testing example is used to illustrate these properties, which provides insight into the limitations of the current test plan, and into how to design a better one.
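One standard way to write the reliability function under a step-stress profile is Nelson's cumulative-exposure (cumulative damage) model with a common Weibull shape. The sketch below assumes that form; the paper's exact formulation may differ, and the stress levels and parameters are illustrative.

```python
from math import exp

def step_stress_reliability(t, steps, beta):
    """Reliability under a step-stress profile via the cumulative-exposure
    model with a common Weibull shape `beta`. `steps` is a list of
    (duration, eta) pairs, eta being the Weibull scale at that stress;
    the last step is held until time t. R(t) = exp(-(sum dt_i/eta_i)**beta)."""
    exposure, elapsed = 0.0, 0.0
    for duration, eta in steps:
        dt = min(duration, max(t - elapsed, 0.0))
        exposure += dt / eta
        elapsed += duration
        if elapsed >= t:
            break
    return exp(-exposure ** beta)

# Example: 250 h at a mild stress (eta = 1000 h), then a harsher stress
# (eta = 200 h) held indefinitely; shape beta = 1.8.
print(step_stress_reliability(300.0, [(250.0, 1000.0), (1e9, 200.0)], 1.8))
```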

3.
This paper reviews the concepts and techniques that have been adopted by BT in order to understand network equipment reliability. The ideas presented are largely generic, in the sense that they can be applied irrespective of whether the equipment is intended for a core network or an access network application. In particular, the paper examines the relationship between out-turn reliability and the reliability that might be guaranteed by an equipment manufacturer. The relevance of confidence limits to reliability modelling and in-service reliability monitoring is also discussed, and ideas are presented on how reliability monitoring can be applied to resilient networks. It is shown that in competitive situations, there are good reasons why reliability modellers should use failure rate data whose upper confidence limit is 50%. In contrast, it is shown that for practical reasons, in-service monitoring schemes should not be based on specific confidence levels.
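The 50%-upper-confidence-limit recommendation can be made concrete with the standard chi-square construction for a time-terminated test. A minimal sketch, with illustrative failure counts and hours:

```python
from scipy.stats import chi2

def failure_rate_ucl(failures, total_hours, confidence=0.50):
    """Upper confidence limit on a constant failure rate from k failures
    in T accumulated unit-hours (time-terminated test):
    lambda_C = chi2.ppf(C, 2k + 2) / (2T)."""
    return chi2.ppf(confidence, 2 * failures + 2) / (2.0 * total_hours)

# 3 failures in 1e6 unit-hours: the 50% limit the paper argues modellers
# should use, versus a much more pessimistic 90% limit.
print(failure_rate_ucl(3, 1e6, 0.50))  # ~3.7e-6 /h
print(failure_rate_ucl(3, 1e6, 0.90))  # ~6.7e-6 /h
```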

4.
This paper describes a different approach to software reliability growth modeling which enables long-term predictions. Using relatively common assumptions, it is shown that the average value of the failure rate of the program, after a particular use-time t, is bounded by N/(e·t), where N is the initial number of faults. This is conservative, since it places a worst-case bound on the reliability rather than making a best estimate. The predictions might be relatively insensitive to assumption violations over the longer term. The theory offers the potential for making long-term software reliability growth predictions based solely on prior estimates of the number of residual faults. The predicted bound appears to agree with a wide range of industrial and experimental reliability data. Less pessimistic results can be obtained if additional assumptions are made about the failure rate distribution of faults.
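The bound has a short derivation worth recording: a residual fault with (unknown) per-fault rate λ still contributes an expected failure intensity λ·e^(-λt) after use-time t, because it survives detection with probability e^(-λt); that expression is maximized at λ = 1/t, where it equals 1/(e·t), so N faults contribute at most N/(e·t). A numeric check of this maximization step:

```python
import numpy as np

# Sweep per-fault rates and confirm the worst-case expected contribution
# lam * exp(-lam * t) never exceeds 1/(e*t), attained at lam = 1/t.
t = 1000.0
lams = np.logspace(-5, 0, 200)
worst = (lams * np.exp(-lams * t)).max()
print(worst, 1.0 / (np.e * t))  # both ~3.679e-4
```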

5.
Product reliability is one of the key factors for a successful product launch. However, electronic components can still fail at various stages of application due to certain failure mechanisms. A constant failure rate typically describes the majority of non-solder-joint-related package failures in accelerated testing or in the field. Historically, the failure rate for a constant-failure-rate phenomenon is estimated using the Chi-square value or the expected number of failures. This paper discusses the statistical characteristics of the number of failures observed in tests or applications and their confidence bounds. Several methods used to estimate the confidence bounds are described, and a new approach is proposed and validated through case studies. The estimation of the acceleration factor (AF) used in failure rate modeling is also discussed. The conclusions will help engineers understand the statistical meaning of the failures observed in stress tests or in field applications and, additionally, obtain a meaningful failure rate based on the expected failure data.
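A common way the chi-square estimate and the acceleration factor combine in practice is sketched below, converting an accelerated test with few or no failures into a field failure rate in FIT. All test parameters here are illustrative assumptions, not values from the paper.

```python
from scipy.stats import chi2

def fit_rate(failures, units, hours, af, confidence=0.60):
    """Failure rate in FIT (failures per 1e9 device-hours) from an
    accelerated test: chi-square upper bound on the rate, scaled by the
    acceleration factor `af` that maps test hours to field-equivalent hours."""
    equivalent_hours = units * hours * af
    lam = chi2.ppf(confidence, 2 * failures + 2) / (2.0 * equivalent_hours)
    return lam * 1e9

# 0 failures on 77 units for 1000 h at AF = 50 (illustrative numbers).
print(f"{fit_rate(0, 77, 1000.0, 50.0):.0f} FIT")
```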

6.
A zero-failure, time-terminated reliability test plan for household appliances
The zero-failure, time-terminated plan is a reliability qualification and acceptance test plan for household appliances, whose failure rates are comparatively low. The cumulative test time is determined from the required confidence level, and the lot is accepted if no failure, or at most one failure, occurs. The plan is simple to construct, easy to master, and straightforward to administer during testing. The test method is derived under the assumption of exponentially distributed lifetimes, and its applicability is then extended to two-parameter Weibull lifetimes with a known shape parameter.
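The two ingredients described here, cumulative test time from the confidence level and the Weibull extension with known shape, both have standard closed forms. A minimal sketch under those standard forms (the paper's derivation may differ in detail; all targets below are illustrative):

```python
from math import log
from scipy.stats import chi2

def total_test_time(mtbf_target, confidence, failures_allowed=0):
    """Cumulative test time for a time-terminated demonstration test that
    accepts on at most `failures_allowed` failures (exponential lifetimes):
    T = mtbf * chi2.ppf(C, 2c + 2) / 2."""
    return mtbf_target * chi2.ppf(confidence, 2 * failures_allowed + 2) / 2.0

def per_unit_time_weibull(eta_target, beta, n_units, confidence):
    """Weibull extension with known shape `beta`: solve
    n * t**beta >= -ln(1 - C) * eta**beta for the per-unit test time t."""
    return eta_target * (-log(1.0 - confidence) / n_units) ** (1.0 / beta)

print(total_test_time(5000.0, 0.90, 0))            # zero-failure plan
print(total_test_time(5000.0, 0.90, 1))            # accept on one failure
print(per_unit_time_weibull(5000.0, 2.0, 20, 0.90))  # 20 units, known beta = 2
```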

7.
Reliability evaluation techniques for GaAs microwave power FETs
To enable GaAs microwave power FETs to be used more dependably in critical microwave systems, CS0531 devices from a high-reliability production line were subjected to accelerated life tests on purpose-built test equipment. The device n-factor was observed to increase with test time, and the initial low-frequency noise level showed some correlation with sudden device burnout; this result suggests that low-frequency noise may become a method for evaluating the reliability of GaAs devices in the future. The activation energy of the failure mechanism is 2.45 eV; at a channel temperature of 110 °C, the 10-year average failure rate is 4 FIT and the mean lifetime is 7.5137×10^11 h.
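Accelerated life results like these are extrapolated to the use channel temperature through the Arrhenius model. The sketch below uses the abstract's 2.45 eV activation energy and 110 °C use temperature; the 240 °C stress temperature is purely our assumption for illustration.

```python
from math import exp

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor between use and stress channel
    temperatures: AF = exp(Ea/k * (1/T_use - 1/T_stress)), temps in K."""
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return exp(ea_ev / K_B * (1.0 / t_use - 1.0 / t_stress))

af = arrhenius_af(2.45, 110.0, 240.0)  # Ea from the abstract; 240 C assumed
print(f"AF = {af:.3e}")  # how far the stress test extrapolates to 110 C use
```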

8.
Random testing techniques have been extensively used in reliability assessment, as well as in debug testing. When used to assess software reliability, random testing selects test cases based on an operational profile; in the context of debug testing, random testing often uses a uniform distribution. However, neither an operational profile nor a uniform distribution is generally chosen from the perspective of maximizing the effectiveness of failure detection. Adaptive random testing has been proposed to enhance the failure detection capability of random testing by evenly spreading test cases over the whole input domain. In this paper, we propose a new test profile, which is different from both the uniform distribution and operational profiles. The aim of the new test profile is to maximize the effectiveness of failure detection. We integrate this new test profile with some existing adaptive random testing algorithms, and develop a family of new random testing algorithms. These new algorithms not only distribute test cases more evenly, but also have better failure detection capabilities than the corresponding original adaptive random testing algorithms. As a consequence, they perform better than pure random testing.
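For context, a classic adaptive random testing algorithm of the kind the paper builds on is fixed-size-candidate-set ART. The sketch below is plain ART with a uniform profile on a 1-D domain; the paper's contribution is a different, non-uniform test profile layered on algorithms like this one. The toy failure region is ours.

```python
import random

def fscs_art(is_failure, domain=(0.0, 1.0), k=10, budget=1000):
    """Fixed-size-candidate-set ART: each round draws k random candidates
    and executes the one farthest from all previously executed tests,
    spreading tests evenly over the input domain. Returns the failing
    input (or None) and the number of tests executed."""
    executed = [random.uniform(*domain)]
    if is_failure(executed[0]):
        return executed[0], 1
    for n in range(2, budget + 1):
        candidates = [random.uniform(*domain) for _ in range(k)]
        best = max(candidates, key=lambda c: min(abs(c - e) for e in executed))
        if is_failure(best):
            return best, n
        executed.append(best)
    return None, budget

# Toy failure region: inputs in [0.42, 0.45] fail.
point, n_tests = fscs_art(lambda x: 0.42 <= x <= 0.45)
print(point, n_tests)
```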

9.
Reliability Demonstration Through Degradation Bogey Testing
Bogey testing, also known as zero-failure testing, is used in industry to demonstrate reliability at a high confidence level. This test method is simple to apply; however, it requires excessive test time and/or a large sample size, and thus is often unaffordable. For some products, a failure is defined in terms of a performance characteristic exceeding a specified threshold. For these products, it is possible to measure the performance characteristic at different times during testing. The measurement data can be employed to predict whether or not a test unit will fail by the end of the test. When there are sufficient data to make such a prediction with a high degree of confidence, the test of the unit can be terminated. As a result, the test time is reduced. This paper develops a method for degradation bogey testing to reduce test time. In particular, the paper describes degradation modeling, and the calculation of the conditional failure probability of a test unit. Then we develop the optimum test plans, which choose the sample size, and the expected test time, by minimizing the total test cost, and simultaneously satisfying the constraints on the type II error and available sample size. Sensitivity analysis shows that the optimum test plans are robust against the preestimates of model parameters. The paper also presents decision rules for terminating the test of a unit. The proposed method is illustrated with an example. The application shows that the method is effective in reducing test time.
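The core idea, predicting a unit's end-of-test failure from its in-test measurements, can be sketched with a simple model. The version below assumes a linear degradation path with known measurement noise; the paper's degradation model and decision rules are more elaborate, and all numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

def conditional_fail_prob(times, measurements, t_bogey, threshold, sigma_eps):
    """Fit a linear degradation path to one unit's measurements, then
    return the probability that extrapolated degradation (plus measurement
    noise) exceeds the failure threshold at the bogey time."""
    X = np.column_stack([np.ones_like(times), times])
    beta, *_ = np.linalg.lstsq(X, measurements, rcond=None)
    cov = sigma_eps**2 * np.linalg.inv(X.T @ X)   # parameter covariance
    x0 = np.array([1.0, t_bogey])
    mean = x0 @ beta
    sd = np.sqrt(x0 @ cov @ x0 + sigma_eps**2)    # extrapolation + noise
    return 1.0 - norm.cdf(threshold, loc=mean, scale=sd)

t = np.array([0.0, 50.0, 100.0, 150.0])
y = np.array([0.0, 0.8, 1.7, 2.4])  # observed wear (illustrative)
p = conditional_fail_prob(t, y, t_bogey=500.0, threshold=10.0, sigma_eps=0.2)
print(p)  # decisively small: this unit's test could be terminated early
```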

10.
A large number of software reliability growth models have been proposed to analyse the reliability of a software application based on the failure data collected during the testing phase of the application. To ensure analytical tractability, most of these models are based on simplifying assumptions of instantaneous & perfect debugging. As a result, the estimates of the residual number of faults, failure rate, reliability, and optimal software release time obtained from these models tend to be optimistic. To obtain realistic estimates, it is desirable that the assumptions of instantaneous & perfect debugging be amended. In this paper we discuss the various policies according to which debugging may be conducted. We then describe a rate-based simulation framework to incorporate explicit debugging activities, which may be conducted according to the different debugging policies, into software reliability growth models. The simulation framework can also consider the possibility of imperfect debugging in conjunction with any of the debugging policies. Further, we also present a technique to compute the failure rate, and the reliability of the software, taking into consideration explicit debugging. An economic cost model to determine the optimal software release time in the presence of debugging activities is also described. We illustrate the potential of the simulation framework using two case studies.
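A rate-based simulation with explicit, imperfect debugging can be sketched with a thinning algorithm. The constant per-fault hazard, fixed debugging delay, and fix probability below are our illustrative assumptions, not the paper's specific policies.

```python
import random

def simulate_failures(a, b, p_fix, delay, t_max, seed=1):
    """Thinning simulation of a failure process with explicit debugging:
    `a` initial faults, constant per-fault hazard `b`; each detected
    failure schedules a debugging activity that completes `delay` time
    units later and removes a fault only with probability `p_fix`."""
    random.seed(seed)
    t, failures, pending, active = 0.0, [], [], a
    lam_max = a * b                       # dominating rate for thinning
    while t < t_max:
        t += random.expovariate(lam_max)  # candidate event time
        while pending and pending[0] <= t:  # apply completed debugging
            pending.pop(0)
            if random.random() < p_fix:     # imperfect debugging
                active = max(active - 1, 0)
        if t < t_max and random.random() < (active * b) / lam_max:
            failures.append(t)
            pending.append(t + delay)     # schedule the (delayed) fix
    return failures

fails = simulate_failures(a=100, b=0.02, p_fix=0.9, delay=5.0, t_max=200.0)
print(len(fails), fails[:3])
```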

11.
A method for the flexible and systematic design of the current stress level is proposed to assure reliability against electromigration open failures in the aluminum interconnects of LSI. The proposed design rule can satisfy reliability requirements without excessive restrictions on LSI performance. The required reliability is quantitatively defined using the failure rate and lifetime, and the requirements for interconnects are decided hierarchically from the LSI reliability design. The design equation is derived using the acceleration factor from the testing condition to the operating condition, considering the statistical difference between a test sample and an LSI, and consists of three terms corresponding to the reliability requirements, the circuit-design parameters, and the EM failure resistance of the interconnect technology used. The interconnects in an LSI are classified into those for power use and those for signal use. The solution for each use is represented by a map that shows the allowable regions of the design parameters. The map makes it easy to change the design parameters for higher performance while staying within the reliability requirement limit. The error of the reliability estimation is analyzed as a function of the parameters used in the equations. To avoid optimistic estimations, a safety factor is introduced in relation to the standard deviation of the error.
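The test-to-operation acceleration factor for electromigration is conventionally taken from Black's equation. The sketch below uses textbook values (n = 2, Ea = 0.7 eV for Al interconnects) and illustrative current densities and temperatures as assumptions; they are not the paper's numbers.

```python
from math import exp

K_B = 8.617e-5  # Boltzmann constant, eV/K

def em_acceleration_factor(j_test, j_use, t_test_c, t_use_c, n=2.0, ea=0.7):
    """Acceleration factor from Black's equation, MTTF ~ j**-n * exp(Ea/kT):
    ratio of use-condition lifetime to test-condition lifetime."""
    t_test, t_use = t_test_c + 273.15, t_use_c + 273.15
    return (j_test / j_use) ** n * exp(ea / K_B * (1.0 / t_use - 1.0 / t_test))

# Stress at 2 MA/cm^2 and 250 C versus operation at 0.5 MA/cm^2 and 105 C.
print(f"AF = {em_acceleration_factor(2.0, 0.5, 250.0, 105.0):.0f}")
```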

12.
The ISO 26262 functional safety standard for road vehicles centres on designing functional safety into the product, while also covering the manufacturing stages of the product safety lifecycle. Packaging and test is an important step in semiconductor manufacturing. This work focuses on the linkage, transfer, and execution of functional safety tasks from chip design through packaging and test, including how the packaging and test house provides package failure rate estimates at the chip-design front end, so that the hardware architectural metrics and the probabilistic metric for random hardware failures can be evaluated and the compliance of the functional safety design confirmed. The safety-related key parameters of the product design are placed under appropriate control during volume production, ensuring that the functional safety design is realised in the product. The work also addresses assessing the confidence level of the software tools used in package design and test-program development, and enhancing package reliability so as to reduce the failure rate.

13.
Failure correlation in software reliability models
Perhaps the most stringent restriction in most software reliability models is the assumption of statistical independence among successive software failures. The authors' research was motivated by the fact that, although there are practical situations in which this assumption could easily be violated, much of the published literature on software reliability modeling does not seriously address the issue. The research work in this paper is devoted to developing a software reliability modeling framework that can consider the phenomenon of failure correlation, and to studying its effects on software reliability measures. The important property of the developed Markov renewal modeling approach is its flexibility. It allows construction of the software reliability model in both discrete time and continuous time, and (depending on the goals) allows the analysis to be based either on Markov chain theory or on renewal process theory. Thus, this modeling approach is an important step toward more consistent and realistic modeling of software reliability. It can be related to existing software reliability growth models: many input-domain and time-domain models can be derived as special cases under the assumption of failure s-independence. This paper aims at showing that the classical software reliability theory can be extended to consider a sequence of possibly s-dependent software runs, viz., failure correlation. It does not deal with inference nor with predictions per se. For the model to be fully specified and applied to estimations and predictions in real software development projects, many research issues need to be addressed, e.g., the detailed assumptions about the nature of the overall reliability growth, and the way modeling parameters change as a result of fault-removal attempts.
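The simplest concrete picture of s-dependent runs is a two-state Markov chain in which the next run's failure probability depends on whether the current run failed. The sketch below is that toy picture, not the paper's Markov renewal framework; the probabilities are illustrative.

```python
import random

def run_sequence(n_runs, p_fail_after_ok, p_fail_after_fail, seed=7):
    """Simulate s-dependent software runs: the failure probability of the
    next run depends on the outcome of the current run. Setting both
    parameters equal recovers the classical s-independent case."""
    random.seed(seed)
    outcomes, failed = [], False
    for _ in range(n_runs):
        p = p_fail_after_fail if failed else p_fail_after_ok
        failed = random.random() < p
        outcomes.append(failed)
    return outcomes

runs = run_sequence(100_000, p_fail_after_ok=0.01, p_fail_after_fail=0.30)
# Failure clustering: the marginal rate sits well above p_fail_after_ok
# because failures breed follow-on failures.
print(sum(runs) / len(runs))
```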

14.
A general software reliability model based on the nonhomogeneous Poisson process (NHPP) is used to derive a model that integrates imperfect debugging with the learning phenomenon. Learning occurs if testing appears to improve dynamically in efficiency as one progresses through the testing phase; it usually manifests itself as a changing fault-detection rate. Published models and empirical data suggest that efficiency growth due to learning can follow many growth curves, from linear to that described by the logistic function. On the other hand, some recent work indicates that in a real, resource-constrained industrial environment, very little actual learning might occur, because the nonoperational profiles used to generate test and business models can prevent it. When that happens, the testing efficiency can still change when an explicit change in testing strategy occurs, or as a result of the structural profile of the code under test and the test-case ordering.
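One published member of the logistic learning-curve family is the inflection S-shaped NHPP model, whose effective fault-detection rate grows logistically. A minimal sketch (the paper's combined imperfect-debugging model is more general; parameters below are illustrative):

```python
from math import exp

def m_inflection_s(t, a, b, psi):
    """Mean cumulative failures for the inflection S-shaped NHPP model:
    m(t) = a * (1 - exp(-b*t)) / (1 + psi * exp(-b*t)). Larger psi delays
    the take-off, mimicking a slow-learning early test phase."""
    return a * (1.0 - exp(-b * t)) / (1.0 + psi * exp(-b * t))

for t in (0, 10, 50, 100, 200):
    print(t, round(m_inflection_s(t, a=120.0, b=0.05, psi=4.0), 1))
```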

15.
For the two Weibull zero-failure-data reliability assessment methods, one with the shape parameter known and one with it unknown, a comparative analysis is carried out through an example. It is shown that when nothing at all is known about the shape parameter, the resulting lower confidence limit on reliability is the most conservative. Obtaining a reasonably accurate estimate of the shape parameter from information on similar products and from engineering experience is feasible.
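The known-shape case reduces to the exponential zero-failure bound after the power transformation u = t^β. The sketch below shows that standard reduction and how the bound degrades as the assumed shape decreases, which is in the spirit of the abstract's comparison; the test data are illustrative.

```python
from math import exp, log

def reliability_lcl_known_shape(t, test_times, beta, confidence=0.90):
    """Lower confidence limit on reliability at mission time t from
    zero-failure Weibull data with KNOWN shape `beta`: transform each
    test time via u = time**beta (exponential in the u-scale), then
    apply the zero-failure exponential bound."""
    total_u = sum(tt ** beta for tt in test_times)
    lam_u = -log(1.0 - confidence) / total_u   # rate upper bound, u-scale
    return exp(-lam_u * t ** beta)

times = [500.0] * 30   # 30 units, 500 h each, zero failures (illustrative)
for beta in (2.5, 1.5, 1.0):
    print(beta, round(reliability_lcl_known_shape(300.0, times, beta), 4))
```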

16.
Fuzzy states as a basis for a theory of fuzzy reliability
Various engineering backgrounds have shown that the binary state assumption of probist (i.e., conventional) reliability theory, which defines a system as either fully failed or fully functioning, is not universally acceptable, and thus a fuzzy state assumption should be used to replace it. As a result, the concept of profust reliability is introduced, and a conceptual framework of profust reliability theory is developed on the basis of the fuzzy state assumption and the probability assumption. The profust reliability function, the profust lifetime function, the profust failure rate function, and the mathematically rigorous relationships among them lay a solid foundation for profust reliability theory. On the other hand, the concept of the virtual random lifetime builds a bridge linking profust reliability theory with probist reliability theory. In addition, typical systems, including the series system, parallel system, Markov model, mixture model, and coherent system, are briefly discussed within the conceptual framework of profust reliability theory.

17.
The nonhomogeneous error-detection-rate model has been extensively used in software reliability modelling. An important management responsibility is to make a decision about the release of software so as to achieve maximum cost effectiveness. It is well known that the effort required to correct an error increases, rather heavily, from the initial testing phase to the final testing phase and then to the operational phase. In this paper, a method is presented to systematically determine this optimum release instant. The fact that some faults can be regenerated during the process of correction has also been considered in the modelling; this has been ignored in the previous literature. Also, the partition of the testing phase into initial and final phases is considered desirable, as the effort per error correction differs significantly between these two phases. An example illustrates the entire procedure for various values of the total number of errors and the trends of cost and release time.
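The basic release-time trade-off can be made explicit with the classical cost model for a Goel-Okumoto NHPP; the paper's model additionally splits the test phase and allows error regeneration, so the sketch below is only the textbook backbone, with illustrative cost ratios.

```python
from math import log

def optimal_release_time(a, b, c_test, c_field, c_time):
    """Classical release-time optimum for a Goel-Okumoto model with
    total cost C(T) = c_test*m(T) + c_field*(a - m(T)) + c_time*T,
    where m(T) = a*(1 - exp(-b*T)). Setting dC/dT = 0 gives
    T* = (1/b) * ln(a*b*(c_field - c_test) / c_time) when the argument
    exceeds 1, else release immediately."""
    arg = a * b * (c_field - c_test) / c_time
    return log(arg) / b if arg > 1.0 else 0.0

# Correcting an error in the field is assumed 10x the in-test cost.
print(optimal_release_time(a=150.0, b=0.03, c_test=1.0, c_field=10.0, c_time=0.5))
```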

18.
To address the scarcity and high acquisition cost of lifetime data for high-reliability products, and based on the idea of fully exploiting reliability data such as degradation data and failure data gathered under different environments (development, accelerated testing, etc.), a product lifetime prediction method is proposed that fuses a nonlinear accelerated degradation model with a failure rate model. First, the nonlinear degradation process is analysed from the degradation data and its parameters are estimated. Next, the parameters of the accelerated degradation model are estimated from the accelerated degradation data and the corresponding model, yielding the relationship between the degradation parameters and the stress. Furthermore, a proportional hazards model is used to fuse the product's lifetime data with its right-censored (non-failed) data, and on this basis the reliability function is computed and the product lifetime predicted. A worked application verifies the effectiveness of the proposed method and illustrates its practical value.
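The first step, estimating the degradation-process parameters from degradation data, is often done with a Wiener process. The sketch below fits the linear case by maximum likelihood; a nonlinear model X(t) = μ·Λ(t) reduces to it after the time transformation. This is a generic sketch, not the paper's specific estimator.

```python
import numpy as np

def wiener_mle(times, paths):
    """MLE for a Wiener degradation process X(t) = mu*t + sigma*B(t) from
    several units' degradation paths measured at common `times`:
    mu_hat = (total increment)/(total time), and sigma^2 from the
    time-weighted squared residuals of the increments."""
    dts = np.diff(times)
    incs = [np.diff(p) for p in paths]
    mu = sum(inc.sum() for inc in incs) / (len(paths) * dts.sum())
    resid_sq = sum(((inc - mu * dts) ** 2 / dts).sum() for inc in incs)
    sigma2 = resid_sq / (len(paths) * len(dts))
    return mu, np.sqrt(sigma2)

t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
rng = np.random.default_rng(0)
paths = [np.cumsum(np.concatenate(
            [[0.0], 0.2 * 10 + 0.5 * np.sqrt(10) * rng.standard_normal(4)]))
         for _ in range(20)]
print(wiener_mle(t, paths))  # should land near the true (0.2, 0.5)
```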

19.
A reliability growth effort was undertaken for a quartz-resonator weighing sensor already in volume production. Environmental and life tests were first carried out, and the environmental reliability and the one-sided lower confidence limit of the MTBF were estimated using the classical, fiducial, and Bayesian methods. Failure analysis was then performed, the systematic (type B) failures were corrected, and a general reliability growth approach was adopted. After the growth effort, the MTBF was evaluated through a reliability demonstration test: at a confidence level of 0.9, the one-sided lower confidence limit of the MTBF reached 299,158 cycles, meeting the requirement of an MTBF lower limit of at least 100,000 cycles. A growth-trend test and a goodness-of-fit test were carried out with the AMSAA-BISE model.
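For reference, the point estimates and trend check of the standard AMSAA (Crow) NHPP model, which the AMSAA-BISE variant refines, have simple closed forms. A minimal sketch with illustrative failure times:

```python
from math import log

def amsaa_fit(failure_times, T):
    """Standard AMSAA/Crow model, intensity lambda(t) = lam*beta*t**(beta-1),
    fitted to failure times in (0, T]: beta_hat = n / sum(ln(T/t_i)),
    lam_hat = n / T**beta_hat; beta_hat < 1 indicates reliability growth.
    Returns (beta, lam, instantaneous MTBF at T)."""
    n = len(failure_times)
    beta = n / sum(log(T / t) for t in failure_times)
    lam = n / T ** beta
    mtbf_now = 1.0 / (lam * beta * T ** (beta - 1.0))
    return beta, lam, mtbf_now

# Illustrative failure times (cycles) recorded during a growth programme.
times = [1200, 3800, 7100, 15400, 31000, 62000]
print(amsaa_fit(times, T=100_000))
```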

20.
Seventy electronic manufacturers (with at least 100 employees) in the northwest USA were contacted in September 1990 with the intent of measuring their perception of reliability-task effectiveness. There were 17 competent respondents; they rated the effectiveness of 26 reliability tasks. The highest ratings (from first to third place) were for development testing, failure reporting and corrective action, durability analysis, and durability testing. Interestingly, some US Mil-Std-785 reliability tools, such as reliability qualification testing, sneak-circuit analysis, and reliability prediction, received the lowest ratings. Many respondents thought reliability prediction was ineffective for improving product reliability, although the majority of respondents do use Mil-Hdbk-217. Since the response rate was so low, it is difficult to draw firm conclusions; both a larger sample size and a virtually 100% response rate are needed for future studies. Other question areas, especially about corporate culture, are desirable.
