Similar Articles
20 similar articles found (search time: 0 ms)
1.
2.
In the laboratory, a designer may conveniently analyse microprocessor circuits and their peripherals by means of suitable logic analysers. This is possible because he will have a detailed knowledge of his system and the time to learn the optimum methods of using his analyser. Once this system is part of a product, it would be preferable if a less complicated means of verifying the processor's correct activity were available should a failure occur. An extremely convenient and inexpensive verification and diagnostic system may be designed into a product using two complementary techniques: self-test and signature analysis. The material cost of implementing such a system can be very small indeed, whereas the savings in troubleshooting time and test-equipment costs can be substantial.
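The signature-analysis half of that scheme can be sketched in a few lines: a linear-feedback shift register (LFSR) compresses the bit stream observed at a circuit node into a short signature that is compared against a known-good value. The 16-bit width and tap polynomial below are hypothetical choices for illustration, not taken from the paper.

```python
# Signature analysis sketch: compress the bit stream observed at a
# circuit node into a 16-bit signature with a linear-feedback shift
# register (LFSR). The width and tap polynomial are illustrative;
# a real signature analyser fixes its own polynomial.

def signature(bits, taps=(16, 12, 9, 7), width=16):
    reg = 0
    for b in bits:
        fb = b
        for t in taps:
            fb ^= (reg >> (t - 1)) & 1           # XOR in the tapped bits
        reg = ((reg << 1) | fb) & ((1 << width) - 1)
    return reg

good = signature([1, 0, 1, 1, 0, 0, 1, 0] * 8)   # known-good node activity
bad = signature([1, 0, 1, 1, 0, 1, 1, 0] * 8)    # same node with a faulty bit
assert good != bad                               # the fault changes the signature
```

In a product, the known-good signatures are recorded once from a working unit; during troubleshooting, any node whose signature differs pinpoints the faulty region.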

3.
The main objective of this study was to investigate performance models of automatic diagnostic systems, taking into consideration their imperfections such as incorrect isolation and false alarms. This was accomplished by developing an optimization model to assist the decision maker in determining the optimal values of testability parameters which maximize his/her utility function. This provides the decision maker with a tool to evaluate the parameters set forth by the designer and to assess the real capability of the diagnostic system. The tool also helps the decision maker check whether the correct detection and isolation capability of the system, as well as its imperfections, e.g., incorrect isolation and false alarms, are acceptable and satisfactory.

An interactive program was developed to help implement the optimization model. This program provides the decision maker with enough flexibility to evaluate different strategies and to repeat the decision process after changing one or more of the model's parameters. An example is presented to show the application of this optimization model.


4.
Sequential optimization and reliability assessment (SORA) is one of the most popular decoupled approaches to solving the reliability-based design optimization (RBDO) problem because of its efficiency and robustness. In SORA, the double-loop structure is decoupled through a series of cycles of deterministic optimization and reliability assessment. In each cycle, the deterministic optimization and reliability assessment are performed sequentially, and the boundaries of violated constraints are shifted in the feasible direction according to the reliability information obtained in the previous cycle. In this paper, based on the concept of SORA, an approximate most probable target point (MPTP) and an approximate probabilistic performance measure (PPM) are adopted in the reliability assessment. In each cycle, the approximate MPTP is reserved and used to obtain the new approximate MPTP in the next cycle. There is no need to evaluate the performance function in the deterministic optimization, since the approximate PPM and its sensitivity are used to formulate the linear Taylor expansion of the constraint function. One example is used to illustrate that the approximate MPTP approaches the accurate MPTP as the iterations proceed; the design variables and the approximate MPTP converge simultaneously. Numerical results for several examples indicate that the proposed method is robust and more efficient than SORA and other common RBDO methods.
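The cycle structure of SORA (deterministic optimization, then reliability assessment, then a constraint shift) can be seen in a toy problem; the limit state, distributions, and numbers below are invented for the sketch and do not reproduce the paper's approximate-MPTP variant.

```python
import math

# Toy SORA loop (structure only, not the paper's approximate-MPTP
# variant): minimize d subject to P[d**2 - X >= 0] >= Phi(beta) with
# X ~ N(mu, sigma). For this limit state, linear in X, the inverse MPTP
# in X-space is x* = mu + beta * sigma, so each cycle's deterministic
# constraint is simply d**2 >= x*.

mu, sigma, beta = 5.0, 0.5, 2.0

x_star = mu                       # cycle 0: start from the mean value
for cycle in range(10):
    d = math.sqrt(x_star)         # deterministic optimum: min d s.t. d**2 >= x*
    x_new = mu + beta * sigma     # reliability assessment: updated inverse MPTP
    if abs(x_new - x_star) < 1e-9:
        break                     # shift has stabilized; design has converged
    x_star = x_new

assert abs(d - math.sqrt(mu + beta * sigma)) < 1e-9
```

Because the toy limit state is linear in X, the shift stabilizes after one update; in real RBDO problems the MPTP moves with the design variables and several cycles are needed.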

5.
Agent-based systems are deployed to solve a wide range of problems in a wide range of domains. Before software is deployed, it is important to obtain assurance that it will function correctly. Traditionally, this assurance is obtained by testing. However, there is an intuition that agents exhibit more complex behaviour than traditional software, which raises the question: how testable are agent systems? We focus on BDI agent programs, and analyse their testability with respect to the all edges test adequacy criterion (also known as “branch coverage”). Our results augment earlier results that considered the all paths criterion to provide a richer and more nuanced understanding of the testability of BDI agents. We show that the number of tests required with respect to the all edges criterion is much lower than that required with respect to the all paths criterion. We also show that, as for the previous analysis, BDI programs are harder to test than equivalently-sized procedural programs, even if exception handling is introduced. Overall, our conclusions lend strength to the earlier work, and motivate the need for work on formal methods for agent systems.

6.
The integration of sensors and actuators with microelectronics, either in compact packages or on a single silicon die, is likely to be of major technological importance over the next decade. These systems are referred to as Microsystems or Micro-Electro-Mechanical Systems (MEMS). One obstacle to mass-market introduction is the difficulty of quality and reliability verification. This paper outlines the difficulties of testing microsystems, shows approaches to test generation and verification transferable from the mixed-signal Integrated-Circuit (IC) domain, and demonstrates an on-line test designed for bridge-type, micromachined accelerometer and pressure sensors [1]. Received: 31 October 1996 / Accepted: 14 November 1996

7.
A custom DFT strategy solved specific testability and manufacturing issues for this high-performance microprocessor. Hardware- and software-assisted self-test and self-repair features helped meet aggressive schedule, manufacturing-quality, and cost goals.

8.
《Ergonomics》2012,55(1-3):183-189
The aim of the study was to evaluate the reliability and temporal stability of various individual-difference and tolerance-to-shiftwork measures which were used in an extensive study on relationships between shiftworkers' features and tolerance to shiftwork. The study was conducted in a Croatian oil refinery where 604 male shift workers, working in a 2:2:3 continuous three-shift system, were examined by means of various questionnaires. A subsample of 61 shift workers were re-examined using the same questionnaires and the same administration procedure nine years later. The questionnaires were: Jenkins Activity Survey, Eysenck Personality Inventory, Morningness-Eveningness Questionnaire, Behavioural Arousal Questionnaire, Circadian Type Questionnaire and Inventory, Way of Life Questionnaire, Health Information Questionnaire, and Sleep Quality Scale. The questionnaires were scored for 14 individual-difference measures and eight tolerance-to-shiftwork measures. In addition to these scores, sleep durations on night, morning, and afternoon shifts, and on days off, were used as tolerance-to-shiftwork measures. The reliability of each measure was determined by means of Cronbach alpha coefficients computed on the total of 604 shiftworkers, and temporal stability by means of correlation coefficients between the scores obtained on the two occasions on the subsample of 61 shiftworkers. The internal-consistency coefficients for most of the measures ranged from moderate to high and were considered satisfactory. The correlations between the scores obtained twice over a period of nine years proved both the individual-difference and the tolerance-to-shiftwork measures to be temporally stable. The only measure with a nonsignificant correlation was sleep duration on days off.

9.
To address the qualitative evaluation of trusted-software reliability, this paper combines cloud model theory with the software reliability metrics of GB/T 16260 and proposes a cloud-model-based software reliability measurement model. The model extracts sample information according to the reliability metrics, generates cloud models with a cloud generator, merges multiple attribute clouds into one comprehensive cloud using a cloud-merging algorithm, and determines the reliability grade of the comprehensive cloud with a weighted Euclidean-distance cloud decision algorithm. Validation on a safety-critical software system shows that the model reflects software reliability more objectively and faithfully.
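The attribute clouds in such a model are typically produced by a forward normal cloud generator driven by three digital characteristics: expectation Ex, entropy En, and hyper-entropy He. A minimal sketch, with hypothetical parameter values:

```python
import math
import random

# Forward normal cloud generator sketch: each drop x gets a membership
# degree mu computed from the digital characteristics Ex (expectation),
# En (entropy), He (hyper-entropy). Parameter values are hypothetical.

def cloud_drops(ex, en, he, n, seed=3):
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        en_i = rng.gauss(en, he)                  # per-drop entropy
        x = rng.gauss(ex, abs(en_i))              # one cloud drop
        mu = math.exp(-((x - ex) ** 2) / (2 * en_i ** 2)) if en_i else 1.0
        drops.append((x, mu))
    return drops

drops = cloud_drops(5.0, 1.0, 0.1, 500)
assert all(0.0 < mu <= 1.0 for _, mu in drops)    # valid membership degrees
```

The merging and decision steps the abstract describes then operate on the (Ex, En, He) triples of several such attribute clouds rather than on the raw drops.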

10.
With a focus on a short time to volume production, the UltraSparc microprocessor design incorporated innovative features that optimize test, debug, and manufacture. The following areas are discussed: goals; cost-benefit analysis; scan design; decoded multiplexer; test generation flow; custom circuit blocks; boundary cell design; embedded array testing; and clock control features.

11.
Testability measures have been defined on flowgraphs modelling the control flow through a program. These measures attempt to quantify aspects of the structural complexity of code that might give useful information about the testing stage of software production. This paper shows how two such metrics, the Number of Trails metric and the Mask [k = 2] metric, can be calculated axiomatically.
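While the paper derives the metrics axiomatically, the Number of Trails idea itself, counting distinct entry-to-exit paths through an acyclic flowgraph, can be illustrated directly by dynamic programming over a made-up flowgraph:

```python
# Counting distinct entry-to-exit paths through an acyclic flowgraph by
# recursion over successors. The five-node flowgraph (an if/else joined
# before a final statement) is a hypothetical example; node 4 is the exit.

flowgraph = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}

def trails(node, exit_node=4):
    if node == exit_node:
        return 1
    return sum(trails(s, exit_node) for s in flowgraph[node])

assert trails(0) == 2   # two paths: 0-1-3-4 and 0-2-3-4
```

A program whose flowgraph admits many trails needs correspondingly many test cases for path-based adequacy criteria, which is what makes the count useful as a testability signal.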

12.
In estimating the effect of a change in a random-variable parameter on the (time-invariant) probability of structural failure estimated through Monte Carlo methods, the usual approach is to carry out a duplicate simulation run for each parameter being varied. The associated computational cost may become prohibitive when many random variables are involved. Herein a procedure is proposed in which the numerical results from a Monte Carlo reliability-estimation procedure are converted to a form that allows the basic ideas of the first-order reliability method to be employed. Using these allows sensitivity estimates of low computational cost to be made. Illustrative examples, with sensitivities computed both by conventional Monte Carlo and by the proposed procedure, show good agreement over a range of probability distributions for the input random variables and for various complexities of the limit state function.
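For contrast, the duplicate-run baseline that the abstract says becomes prohibitive looks like the sketch below; the limit state g = R - S and its distributions are invented for illustration.

```python
import random

# Duplicate-run sensitivity baseline: estimate d(pf)/d(mu_r) by re-running
# the Monte Carlo simulation with the parameter perturbed, using common
# random numbers (the same seed) to stabilize the finite difference.
# Limit state g = R - S with R ~ N(mu_r, 1), S ~ N(5, 1) is hypothetical.

def pf(mu_r, n=200_000, seed=1):
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        r = rng.gauss(mu_r, 1.0)     # resistance sample
        s = rng.gauss(5.0, 1.0)      # load sample
        if r - s < 0:                # failure: load exceeds resistance
            fails += 1
    return fails / n

h = 0.1
sens = (pf(7.0 + h) - pf(7.0 - h)) / (2 * h)   # central difference, same seed
assert sens < 0   # raising mean resistance lowers the failure probability
```

Each parameter of interest costs two extra full simulation runs here; the paper's procedure extracts comparable sensitivities from a single run by recasting its output in first-order reliability terms.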

13.
This paper proposes ABS, a sampling-based algorithm that derives the reliability metrics of a large complex network by analysing sample networks drawn from it, and also gives Bootstrap confidence intervals for these metrics. Extensive experiments show that the ABS algorithm is effective and feasible for estimating the reliability metrics of large complex networks.
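The Bootstrap interval construction mentioned above follows the standard percentile recipe: resample the observed sample with replacement, recompute the statistic, and read off quantiles. The data and statistic below are hypothetical, not from the paper.

```python
import random

# Percentile bootstrap confidence interval sketch. The "reliability
# parameter" here is simply the mean of a hypothetical sample of
# node-degree observations; real ABS statistics would replace `stat`.

def bootstrap_ci(sample, stat, n_boot=2000, alpha=0.05, seed=7):
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(sample) for _ in sample]) for _ in range(n_boot)
    )
    lo = reps[int(n_boot * alpha / 2)]            # 2.5th percentile
    hi = reps[int(n_boot * (1 - alpha / 2)) - 1]  # 97.5th percentile
    return lo, hi

degrees = [2, 3, 3, 4, 5, 5, 6, 8, 9, 12]
mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(degrees, mean)
assert lo <= mean(degrees) <= hi
```

The appeal for large networks is that the interval needs only the one drawn sample; no further sampling of the full network is required.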

14.
Exhaustive self-testing of combinational circuitry within the framework of the level-sensitive scan design (LSSD) discipline requires that every output node depend on a small number of input nodes. We present here efficient algorithms that take an arbitrary block of combinational logic and add to it the smallest number of bits of new LSSD registers necessary to: (1) partition the logic so that no output depends on more than k inputs, and (2) maintain timing within the block (so that all input-to-output paths encounter the same number of bits of register). Our partitioning algorithms conform to two different design constraints. We also show that the unconstrained partitioning problem is NP-complete. A portion of the research of the first and third authors was done while visiting Bell Communications Research. Sandeep Bhatt was also supported in part by NSF Grant DCR 84-05478 and ONR Grant N00014-82-K-0184, and Arnold Rosenberg by NSF Grants MCS-81-01213 and DMC-85-04308. A preliminary version of this paper was presented at the Fourth MIT VLSI Conference on Advanced Research in VLSI.

15.
16.
Semantic metrics for software testability
Software faults that infrequently affect output cause problems in most software and are dangerous in safety-critical systems. When a software fault causes frequent software failures, testing is likely to reveal the fault before the software is released; when the fault “hides” from testing, the hidden fault can cause disaster after the software is installed. During the design of safety-critical software, we can isolate certain subfunctions of the software that tend to hide faults. A simple metric, derivable from semantic information found in software specifications, indicates software subfunctions that tend to hide faults. The metric is the domain/range ratio (DRR): the ratio of the cardinality of the possible inputs to the cardinality of the possible outputs. By isolating modules that implement a high DRR function during design, we can produce programs that are less likely to hide faults during testing. The DRR is available early in the software lifecycle; when code has been produced, the potential for hidden faults can be further explored using empirical methods. Using the DRR during design and empirical methods during execution, we can better plan and implement strategies for enhancing testability. For certain specifications, testability considerations can help produce modules that require less additional testing when assumptions change about the distribution of inputs. Such modules are good candidates for software reuse.
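The DRR itself is trivial to compute once the specification gives the two cardinalities; a sketch with two hypothetical one-byte subfunctions:

```python
# Domain/range ratio (DRR) as defined in the abstract: the cardinality of
# the possible inputs divided by the cardinality of the possible outputs.
# The two one-byte subfunctions are hypothetical examples.

def drr(domain_size, range_size):
    return domain_size / range_size

# identity on one byte: every output is distinguishable -> DRR = 1
identity_drr = drr(256, 256)

# parity of one byte: 256 inputs collapse onto 2 outputs -> DRR = 128
parity_drr = drr(256, 2)

# The higher-DRR subfunction discards more information about its input,
# so internal faults are more likely to stay hidden from testing.
assert parity_drr > identity_drr
```

This is why the metric is available so early: it needs only the specification's input and output domains, not any code.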

17.
Exhaustive self-testing of combinational circuitry within the framework of the level-sensitive scan design (LSSD) discipline requires that every output node depend on a small number of input nodes. We present here efficient algorithms that take an arbitrary block of combinational logic and add to it the smallest number of bits of new LSSD registers necessary to: (1) partition the logic so that no output depends on more than k inputs, and (2) maintain timing within the block (so that all input-to-output paths encounter the same number of bits of register). Our partitioning algorithms conform to two different design constraints. We also show that the unconstrained partitioning problem is NP-complete.

18.
To study software testability, this paper investigates testability measurement from a new angle: information masking during data processing. A baseline model is proposed to describe the relationships among input, processing, and output, and on this basis a set of baseline metrics is defined to measure the degree of information masking during processing and thereby reveal software testability. Three baseline measurement theorems discovered within the model further reveal the quantitative relationship between the degree of information masking and the basic processing slices and temporary process elements. A comparison of the two measurement models demonstrates the soundness of the baseline measurement model.

19.
Research on methods for computing software testability
Software testability is an increasingly important factor to consider in software development and evaluation, and computing testability is the theoretical tool that guides effective testing. At present there are several different understandings of software testability, which lead to different computation methods. Starting from the relationships among these definitions, this paper surveys testability computation methods by category and analyses the differences and connections between the two classes of methods. This discussion helps deepen the understanding of software testability and allows an appropriate computation method to be chosen for different situations.

20.