1.
"Software test-coverage measures" quantify the degree of thoroughness of testing. Tools are now available that measure test-coverage in terms of blocks, branches, computation-uses, predicate-uses, etc. that are covered. This paper models the relations among testing time, coverage, and reliability. An LE (logarithmic-exponential) model is presented that relates testing effort to test coverage (block, branch, computation-use, or predicate-use). The model is based on the hypothesis that the enumerable elements (like branches or blocks) for any coverage measure have various probabilities of being exercised; just like defects have various probabilities of being encountered. This model allows relating a test-coverage measure directly with defect-coverage. The model is fitted to 4 data-sets for programs with real defects. In the model, defect coverage can predict the time to next failure. The LE model can eliminate variables like test-application strategy from consideration. It is suitable for high reliability applications where automatic (or manual) test generation is used to cover enumerables which have not yet been tested. The data-sets used suggest the potential of the proposed model. The model is simple and easily explained, and thus can be suitable for industrial use. The LE model is based on the time-based logarithmic software-reliability growth model. It considers that: at 100% coverage for a given enumerable, all defects might not yet have been found.  相似文献   
2.
Even with proper design, integrated circuits and systems can have timing problems caused by physical faults or parameter variation. The authors introduce a fault model that accounts for timing-related failures in both the combinational logic and the storage elements. Using this fault model and the system's requirements for proper operation, they propose ways to handle flipflop-to-flipflop delay, path selection, initialization, error propagation, race-around, and anomalous behavior. They discuss the advantages of scan designs such as LSSD and the effectiveness of random delay testing.
3.
The main considerations for built-in self-test (BIST) of complex circuits are fault coverage, test time, and hardware overhead. In the BIST technique, exhaustive or pseudo-exhaustive testing is used to test the combinational logic in a register sandwich. If register sandwiches can be identified in a complex digital system, several of them can be tested in parallel using the built-in logic block observation (BILBO) technique. The concurrent built-in logic block observation (CBILBO) technique can further improve the test time, but it requires significant hardware overhead. A systematic scheduling technique is suggested to optimize parallel tests of register sandwiches, and techniques are proposed to deal with registers shared among sandwiches. The proposed method attempts to reduce the test time further while only modestly increasing the hardware overhead.
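As a rough illustration of the scheduling problem described above, the sketch below greedily packs register sandwiches into parallel test sessions so that no two sandwiches sharing a register run concurrently. The conflict formulation and the greedy heuristic are simplifying assumptions, not the systematic scheduling technique proposed in the paper.

```python
# Greedy scheduling sketch: register sandwiches that share a BILBO register
# cannot be tested in the same session. This is an assumed simplification,
# not the paper's scheduling technique.
def schedule_sandwiches(sandwiches):
    """sandwiches: dict name -> set of register names it uses.
    Returns a list of sessions; each session is a list of sandwich names."""
    sessions = []  # each entry: (set of busy registers, [sandwich names])
    for name, regs in sandwiches.items():
        for busy, members in sessions:
            if not (busy & regs):          # no shared register -> run in parallel
                busy |= regs
                members.append(name)
                break
        else:
            sessions.append((set(regs), [name]))
    return [members for _, members in sessions]

example = {
    "S1": {"R1", "R2"},
    "S2": {"R2", "R3"},   # shares R2 with S1
    "S3": {"R4", "R5"},
    "S4": {"R3", "R5"},   # shares R3 with S2 and R5 with S3
}
print(schedule_sandwiches(example))  # -> [['S1', 'S3'], ['S2'], ['S4']]
```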
4.
Single-BJT BiCMOS devices exhibit sequential behavior under transistor stuck-open (s-OPEN) faults. In addition to the sequential behavior, delay faults are also present. Detecting s-OPEN faults that exhibit sequential behavior requires two-pattern or multipattern sequences, and delay faults are even more difficult to detect. A new design-for-testability scheme is presented that uses only two extra transistors to improve circuit testability regardless of timing skews/delays, glitches, or charge sharing among internal nodes. With this design, only a single vector is required to test for a fault instead of a two-pattern or multipattern sequence. The scheme also avoids the need to generate tests for delay faults.
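The sequential behaviour caused by stuck-open faults can be seen even in a simplified switch-level model. The sketch below simulates a two-input CMOS-style NAND gate whose pull-up transistor on one input is open, so the output sometimes floats and retains its previous value, which is why a two-pattern test is normally needed. This is a generic illustration, not the BiCMOS circuit or the testable design scheme of the paper.

```python
# Switch-level sketch: a 2-input NAND whose pull-up transistor on input A is
# stuck-open. When that transistor is the only pull-up path that should conduct,
# the output floats and retains its previous value (sequential behaviour).
def nand_with_open_pullup_A(a, b, prev_out):
    pull_down = a and b                    # series NMOS path conducts
    pull_up_A = (not a) and False          # this PMOS is stuck-open: never conducts
    pull_up_B = (not b)                    # the other PMOS is healthy
    if pull_down:
        return 0
    if pull_up_A or pull_up_B:
        return 1
    return prev_out                        # output node floats: memory effect

# A fault-free NAND gives 1 for (a=0, b=1); the faulty gate exposes the defect
# only after the output is first initialized to 0 with (1, 1).
out = 1
for a, b in [(1, 1), (0, 1)]:              # two-pattern test: initialize, then detect
    out = nand_with_open_pullup_A(a, b, out)
    print(f"inputs=({a},{b}) -> output={out}")
# The final output stays 0 instead of 1, revealing the stuck-open fault.
```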
5.
A number of security vulnerabilities have been reported in the Windows and Linux operating systems. Both the developers and the users of operating systems have to expend significant resources to evaluate and mitigate the risk posed by these vulnerabilities. Vulnerabilities are discovered throughout the life of a software system by both the developers and external testers. Vulnerability discovery models that describe the discovery process are needed for determining readiness for release, allocating resources for patch development, and evaluating the risk of vulnerability exploitation. Here, we analytically describe six recently proposed models and evaluate them using actual data for four major operating systems. The applicability of the models and the significance of the parameters involved are examined. The results show that some of the models capture the discovery process better than others.
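One model commonly used in this line of work is the Alhazmi-Malaiya Logistic (AML) model, whose cumulative form is often written as Omega(t) = B / (B*C*exp(-A*B*t) + 1). The sketch below fits that form to assumed cumulative vulnerability counts with scipy; the data and starting values are illustrative, not taken from the paper.

```python
# Fitting the AML logistic vulnerability discovery model (cumulative form).
# The monthly cumulative counts below are made-up illustrative data.
import numpy as np
from scipy.optimize import curve_fit

def aml(t, A, B, C):
    """Cumulative vulnerabilities: Omega(t) = B / (B*C*exp(-A*B*t) + 1)."""
    return B / (B * C * np.exp(-A * B * t) + 1.0)

months = np.arange(1, 25, dtype=float)
cumulative = np.array([ 2,  4,  7, 11, 16, 23, 31, 40, 50, 60, 69, 77,
                       84, 89, 93, 96, 98, 100, 101, 102, 103, 103, 104, 104],
                      dtype=float)

(A, B, C), _ = curve_fit(aml, months, cumulative, p0=[0.004, 110.0, 0.5], maxfev=20000)
print(f"A={A:.5f}  B={B:.1f} (saturation level)  C={C:.3f}")
print("predicted total after 36 months:", round(aml(36.0, A, B, C), 1))
```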
6.
Attacks on computer systems are now attracting increased attention. While current trends in software vulnerability discovery indicate that the number of newly discovered vulnerabilities remains significant, the time between the public disclosure of a vulnerability and the release of an automated exploit is shrinking. Assessing vulnerability exploitability risk is therefore critical, because it allows decision-makers to prioritize among vulnerabilities, allocate resources to patch and protect systems, and choose between alternatives. Common Vulnerability Scoring System (CVSS) metrics have become the de facto standard for assessing the severity of vulnerabilities. However, the CVSS exploitability measures assign subjective values based on the views of experts. Two of the factors in CVSS, Access Vector and Authentication, are the same for almost all vulnerabilities, and CVSS does not specify how the third factor, Access Complexity, is measured, so it is unknown whether software properties are considered at all. In this work, we introduce a novel measure, Structural Severity, which is based on software properties, namely attack entry points, vulnerability location, the presence of dangerous system calls, and reachability analysis. These properties represent metrics that can be objectively derived from attack surface analysis, vulnerability analysis, and exploitation analysis. To illustrate the proposed approach, 25 reported vulnerabilities of the Apache HTTP server and 86 reported vulnerabilities of the Linux kernel were examined at the source-code level. The results show that the proposed approach, which uses more detailed information, can objectively measure the risk of vulnerability exploitability, and that its results can differ from the CVSS base scores.
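As an illustration of the reachability-analysis ingredient of such a measure, the sketch below walks a toy call graph from assumed attack entry points, checks whether the vulnerable function is reachable, and flags whether any reachable function is a dangerous system call. The graph, the function names, and the checks are hypothetical; they are not the paper's definition of Structural Severity.

```python
# Toy reachability analysis over a call graph (all names are hypothetical).
from collections import deque

CALL_GRAPH = {                              # caller -> callees
    "http_request_handler": ["parse_uri", "log_request"],
    "parse_uri": ["normalize_path"],
    "normalize_path": ["strcpy_wrapper"],   # assumed vulnerable sink
    "log_request": ["write_log"],
    "write_log": ["system"],                # dangerous system call
}
ENTRY_POINTS    = ["http_request_handler"]  # assumed attack surface entry points
VULNERABLE      = "strcpy_wrapper"
DANGEROUS_CALLS = {"system", "exec", "popen"}

def reachable_from(start):
    """Return the set of functions reachable from `start` (simple BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        fn = queue.popleft()
        for callee in CALL_GRAPH.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

for entry in ENTRY_POINTS:
    reach = reachable_from(entry)
    print(f"{entry}: reaches vulnerability={VULNERABLE in reach}, "
          f"reaches dangerous call={bool(reach & DANGEROUS_CALLS)}")
```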
7.
Some faults in storage elements (SEs) do not manifest as stuck-at-0/1 faults. These include data-feedthrough faults, which cause an SE cell to exhibit combinational behaviour. The authors investigate the implications of such faults for the behaviour of circuits using unclocked SEs. It is shown that the effects of data-feedthrough faults at the behavioural level differ from those of stuck-at faults, and therefore tests generated for the latter may be inadequate.
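The behavioural difference the authors point to can be illustrated with a simple latch model: under a data-feedthrough fault the storage element passes its input straight to the output even when it should be holding, so its faulty output is not stuck at any fixed value. The sketch below is an assumed illustration, not the authors' fault model or circuits.

```python
# Illustrative level-sensitive latch with an optional data-feedthrough fault.
class Latch:
    def __init__(self, feedthrough_fault=False):
        self.q = 0
        self.feedthrough_fault = feedthrough_fault

    def step(self, enable, d):
        if enable or self.feedthrough_fault:   # faulty cell behaves combinationally
            self.q = d
        return self.q

good, faulty = Latch(), Latch(feedthrough_fault=True)
# While enable=0 the good latch holds its value; the faulty one follows D.
for enable, d in [(1, 1), (0, 0), (0, 1)]:
    print(f"en={enable} d={d}  good={good.step(enable, d)}  "
          f"faulty={faulty.step(enable, d)}")
# Neither output is stuck at 0 or 1, so stuck-at tests alone may miss the fault.
```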
8.
A vulnerability discovery model attempts to model the rate at which vulnerabilities are discovered in a software product. Recent studies have shown that the S-shaped Alhazmi–Malaiya Logistic (AML) vulnerability discovery model often fits better than other models and demonstrates superior prediction capabilities for several major software systems. However, the AML model is based on the logistic distribution, which assumes a symmetrical discovery process with a peak in the center. Hence, when the discovery process does not follow a symmetrical pattern, a discovery model based on an asymmetrical distribution can be expected to perform better. Here, the relationship between the performance of S-shaped vulnerability discovery models and the skewness of the target vulnerability datasets is examined. To study the possible dependence on skew, alternative S-shaped models based on the Weibull, Beta, Gamma, and Normal distributions are introduced and evaluated. The models are fitted to data from eight major software systems. The applicability of the models is examined using two separate approaches: goodness-of-fit tests to see how well the models track the data, and prediction capability using average error and average bias measures. It is observed that an excellent goodness of fit does not necessarily result in a superior prediction capability. The results show that, when prediction capability is considered, all the right-skewed datasets are represented better by the Gamma distribution-based model. The symmetrical models tend to predict better for left-skewed datasets; the AML model is found to be the best among them. Copyright © 2013 John Wiley & Sons, Ltd.
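A Gamma-distribution-based S-shaped discovery model of the kind discussed can be written, in one natural parameterization, as Omega(t) = B * F_Gamma(t; k, theta), where F_Gamma is the Gamma CDF and B is the eventual number of vulnerabilities. The sketch below fits this form to made-up right-skewed data and reports simplified average-error and average-bias figures; the parameterization, the data, and the exact error definitions are assumptions rather than the paper's.

```python
# Gamma-CDF-based vulnerability discovery model (illustrative parameterization).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import gamma

def gamma_vdm(t, B, k, theta):
    """Cumulative vulnerabilities: saturation level B times a Gamma CDF."""
    return B * gamma.cdf(t, a=k, scale=theta)

months = np.arange(1, 31, dtype=float)
# Made-up right-skewed cumulative counts (fast early growth, long tail).
observed = np.array([ 5, 14, 25, 36, 46, 54, 61, 67, 72, 76, 79, 82, 84, 86, 88,
                     89, 90, 91, 92, 93, 93, 94, 94, 95, 95, 95, 96, 96, 96, 96],
                    dtype=float)

(B, k, theta), _ = curve_fit(gamma_vdm, months, observed, p0=[100.0, 2.0, 4.0],
                             maxfev=20000)
pred = gamma_vdm(months, B, k, theta)
# Simplified average error / average bias, normalized by the observed counts;
# the paper's exact definitions may differ.
avg_error = np.mean(np.abs(pred - observed) / observed) * 100
avg_bias  = np.mean((pred - observed) / observed) * 100
print(f"B={B:.1f}  k={k:.2f}  theta={theta:.2f}  "
      f"AE={avg_error:.2f}%  AB={avg_bias:.2f}%")
```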
9.
The usefulness of connectionist models for software reliability growth prediction is illustrated. The applicability of the connectionist approach is explored using various network models, training regimes, and data representation methods. An empirical comparison is made between this approach and five well-known software reliability growth models using actual data sets from several different software projects. The results suggest that connectionist models adapt well across different data sets and exhibit better predictive accuracy. The analysis shows that the connectionist approach is capable of developing models of varying complexity.
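A minimal connectionist-style sketch is shown below: a small feed-forward network (scikit-learn's MLPRegressor) trained to map normalized testing time to cumulative failures on synthetic data. The architecture, scaling, and data are assumptions chosen only to illustrate the idea, not the network models, training regimes, or data representations compared in the paper.

```python
# Minimal feed-forward network for software reliability growth prediction.
# Data, scaling, and architecture are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
exec_time = np.arange(1, 41, dtype=float)             # testing time units
cum_failures = 120 * np.log1p(0.15 * exec_time)       # synthetic growth curve
cum_failures += rng.normal(0, 2, exec_time.size)      # measurement noise

X = (exec_time / exec_time.max()).reshape(-1, 1)      # scale inputs to [0, 1]
y_scale = cum_failures.max()
y = cum_failures / y_scale                            # scale targets as well

net = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(X[:30], y[:30])                               # train on the first 30 points

pred = net.predict(X[30:]) * y_scale                  # predict the held-out tail
for p, o in zip(pred, cum_failures[30:]):
    print(f"predicted {p:7.1f}   observed {o:7.1f}")
```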
10.
The fault exposure ratio, K, is an important factor that controls the per-fault hazard rate and hence the effectiveness of software testing. The authors examine the variation of K with fault density, which declines with testing time. Because faults become harder to find, K should decline if testing is strictly random. However, it is shown that at lower fault densities K tends to increase. This is explained using the hypothesis that real testing is more efficient than strictly random testing, especially at the end of the test phase. Data sets from several different projects (in the USA and Japan) are analyzed. When the two factors, namely the shift in the detectability profile and the nonrandomness of testing, are combined, the analysis leads to the logarithmic model, which is known to have superior predictive capability.
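In this line of work the per-fault hazard rate is often expressed as K times the linear execution frequency (equivalently K / T_L, with T_L the linear execution time), and the resulting analysis leads to the logarithmic growth model mu(t) = b0 ln(1 + b1 t). The sketch below fits the logarithmic model and computes an implied K from assumed measurements; all numbers are illustrative, not from the projects analysed.

```python
# Illustrative fit of the logarithmic growth model mu(t) = b0 * ln(1 + b1 * t)
# and a back-of-the-envelope fault exposure ratio K. All numbers are assumed.
import numpy as np
from scipy.optimize import curve_fit

def log_model(t, b0, b1):
    return b0 * np.log1p(b1 * t)

# Assumed cumulative failures vs. CPU hours of testing.
t_hours      = np.array([5, 10, 20, 40, 80, 160, 320], dtype=float)
cum_failures = np.array([12, 20, 31, 44, 58, 73, 89], dtype=float)

(b0, b1), _ = curve_fit(log_model, t_hours, cum_failures, p0=[10.0, 0.1])
print(f"b0={b0:.2f}, b1={b1:.3f}")

# Implied K from an assumed per-fault hazard rate and linear execution time T_L,
# using per-fault hazard rate ~= K / T_L (see lead-in; values are assumptions).
per_fault_hazard = 2.1e-7   # failures per fault per CPU second (assumed)
T_L = 2.0                   # linear execution time in CPU seconds (assumed)
K = per_fault_hazard * T_L
print(f"implied fault exposure ratio K = {K:.2e}")
```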