1.
Selective regression testing strategies attempt to choose an appropriate subset of test cases from among a previously run test suite for a software system, based on information about the changes made to the system to create new versions. Although there has been a significant amount of research in recent years on the design of such strategies, there has been very little investigation of their cost-effectiveness. The paper presents some computationally efficient predictors of the cost-effectiveness of the two main classes of selective regression testing approaches. These predictors are computed from data about the coverage relationship between the system under test and its test suite. The paper then describes case studies in which these predictors were used to predict the cost-effectiveness of applying two different regression testing strategies to two software systems. In one case study, the TESTTUBE method selected an average of 88.1 percent of the available test cases in each version, while the predictor estimated that 87.3 percent of the test cases would be selected on average.
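A minimal sketch of the kind of coverage-based predictor described above (not the paper's exact formulas; the coverage data and the uniform-change assumption are ours): estimate the fraction of the test suite a safe selection strategy would re-run by averaging, over all covered entities, the fraction of tests that cover each entity.

```python
# Minimal sketch (not the paper's exact predictor): estimate the expected
# fraction of test cases a safe selective strategy would re-run, assuming a
# change touches one covered entity chosen uniformly at random and that
# every test covering that entity must be re-executed.

def predicted_selection_fraction(coverage):
    """coverage: dict mapping entity -> set of test-case ids covering it."""
    total_tests = len({t for tests in coverage.values() for t in tests})
    if total_tests == 0 or not coverage:
        return 0.0
    per_entity = [len(tests) / total_tests for tests in coverage.values()]
    return sum(per_entity) / len(per_entity)

# Hypothetical coverage data: entities e1..e3 covered by tests t1..t4.
coverage = {
    "e1": {"t1", "t2", "t3"},
    "e2": {"t2", "t4"},
    "e3": {"t1", "t2", "t3", "t4"},
}
print(predicted_selection_fraction(coverage))  # 0.75
```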
2.
We compare the effectiveness of four modeling methods—negative binomial regression, recursive partitioning, random forests and Bayesian additive regression trees—for predicting the files likely to contain the most faults for 28 to 35 releases of three large industrial software systems. Predictor variables included lines of code, file age, faults in the previous release, changes in the previous two releases, and programming language. To compare the effectiveness of the different models, we use two metrics—the percent of faults contained in the top 20% of files identified by the model, and a new, more general metric, the fault-percentile-average. The negative binomial regression and random forests models performed significantly better than recursive partitioning and Bayesian additive regression trees, as assessed by either of the metrics. For each of the three systems, the negative binomial and random forests models identified 20% of the files in each release that contained an average of 76% to 94% of the faults.
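One plausible reading of a fault-percentile-average style metric, sketched with invented data (the exact definition in the paper may differ): rank the files by predicted fault count and average, over all actual faults, the percentile of the file that contains the fault, so that a perfect ranking drives the metric toward 1.0.

```python
# Minimal sketch of one plausible reading of a fault-percentile-average
# style metric (illustrative only, with invented data): each actual fault
# is credited with the percentile of its file in the predicted ranking,
# and the metric is the mean percentile over all actual faults.

def fault_percentile_average(predicted, actual):
    """predicted, actual: dicts mapping file name -> fault counts."""
    # Rank files from fewest to most predicted faults; the file at
    # position k (1-based) among K files gets percentile k / K.
    ranked = sorted(predicted, key=predicted.get)
    k_total = len(ranked)
    total_faults = sum(actual.values())
    if total_faults == 0:
        return 0.0
    weighted = sum(
        actual.get(f, 0) * (pos / k_total)
        for pos, f in enumerate(ranked, start=1)
    )
    return weighted / total_faults

# Hypothetical data: most faults sit in the files predicted to be faultiest.
predicted = {"a.c": 9.1, "b.c": 4.2, "c.c": 0.3, "d.c": 0.1}
actual = {"a.c": 7, "b.c": 2, "c.c": 1, "d.c": 0}
print(fault_percentile_average(predicted, actual))  # 0.9
```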
3.
Three automatic test case generation algorithms intended to test the resource allocation mechanisms of telecommunications software systems are introduced. Although these techniques were specifically designed for testing telecommunications software, they can be used to generate test cases for any software system that is modelable by a Markov chain, provided operational profile data can either be collected or estimated. These algorithms have been used successfully to perform load testing for several real industrial software systems. Experience generating test suites for five such systems is presented. Early experience with the algorithms indicates that they are highly effective at detecting subtle faults that would have been likely to be missed if load testing had been done in the more traditional way, using hand-crafted test cases. A domain-based reliability measure is applied to systems after the load testing algorithms have been used to generate test data. Data are presented for the same five industrial telecommunications systems in order to track the reliability as a function of the degree of system degradation experienced.
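A minimal sketch of load-test generation from a Markov-chain operational profile (the states, transition probabilities, and termination rule below are invented for illustration and are not the paper's specific algorithms):

```python
# Minimal sketch: draw test sequences by random walk over a Markov-chain
# operational profile. States and probabilities are invented.
import random

# transition[s] maps each possible next state to its probability; "END"
# terminates a test case.
transition = {
    "IDLE":     {"SETUP": 0.8, "END": 0.2},
    "SETUP":    {"ALLOCATE": 0.7, "IDLE": 0.3},
    "ALLOCATE": {"RELEASE": 0.9, "END": 0.1},
    "RELEASE":  {"IDLE": 0.6, "END": 0.4},
}

def generate_test_case(start="IDLE", max_steps=50, rng=random):
    """Return one state sequence sampled from the operational profile."""
    sequence, state = [start], start
    for _ in range(max_steps):
        nxt = rng.choices(list(transition[state]),
                          weights=list(transition[state].values()))[0]
        if nxt == "END":
            break
        sequence.append(nxt)
        state = nxt
    return sequence

# A small generated load-test suite.
suite = [generate_test_case() for _ in range(5)]
for case in suite:
    print(" -> ".join(case))
```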
4.
Reliability testing of rule-based systems
Rule-based software systems are becoming more common in industrial settings, particularly to monitor and control large, real-time systems. The authors describe an algorithm for reliability testing of rule-based systems and their experience using it to test an industrial network surveillance system.
5.
6.
A formal analysis of the fault-detecting ability of testing methods
Several relationships between software testing criteria, each induced by a relation between the corresponding multisets of subdomains, are examined. The authors discuss whether, for each relation R and each pair of criteria C1 and C2, R(C1, C2) guarantees that C1 is better at detecting faults than C2 according to various probabilistic measures of fault-detecting ability. It is shown that the fact that C1 subsumes C2 does not guarantee that C1 is better at detecting faults. Relations that strengthen the subsumption relation and that have more bearing on fault-detecting ability are introduced.
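As a sketch of the kind of probabilistic measure involved (our notation and assumptions, not necessarily the paper's): suppose criterion C induces subdomains D_1, ..., D_k of program P's input domain, subdomain D_i contains n_i inputs of which d_i are failure-causing, and a test suite is built by drawing one test uniformly at random from each subdomain. The probability of detecting at least one fault is then

```latex
\[
  M(C, P) \;=\; 1 - \prod_{i=1}^{k} \left( 1 - \frac{d_i}{n_i} \right).
\]
% Under a measure of this form, C_1 subsuming C_2 does not by itself force
% M(C_1, P) \ge M(C_2, P) for every program P, which is why relations
% stronger than subsumption are of interest.
```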
7.
8.
This paper compares the fault-detecting ability of several software test data adequacy criteria. It has previously been shown that if C1 properly covers C2, then C1 is guaranteed to be better at detecting faults than C2, in the following sense: a test suite selected by independent random selection of one test case from each subdomain induced by C1 is at least as likely to detect a fault as a test suite similarly selected using C2. In contrast, if C1 subsumes but does not properly cover C2, this is not necessarily the case. These results are used to compare a number of criteria, including several that have been proposed as stronger alternatives to branch testing. We compare the relative fault-detecting ability of data flow testing, mutation testing, and the condition-coverage techniques to branch testing, showing that most of the criteria examined are guaranteed to be better than branch testing according to two probabilistic measures. We also show that there are criteria that can sometimes be poorer at detecting faults than substantially less expensive criteria.
9.
10.
A family of test data adequacy criteria employing data-flow information was previously proposed, and a theoretical complexity analysis was performed. The author describes an empirical study to determine the actual cost of using these criteria. The aim is to establish the practical usefulness of these criteria in testing software and provide a basis for predicting the amount of testing needed for a given program. The first goal of the study is to confirm the belief that the family of software testing criteria considered is practical to use. An attempt is made to show that even as the program size increases, the amount of testing, expressed in terms of the number of test cases sufficient to satisfy a given criterion, remains modest. Several ways of evaluating this hypothesis are explored. The second goal is to provide the prospective user of these criteria with a way of predicting the number of test cases that will be needed to satisfy a given criterion for a given program. This provides testers with a basis for selecting the most comprehensive criterion that they can expect to satisfy. Several plausible bases for such a prediction are considered.
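A minimal sketch of one plausible prediction basis (the size measure, data, and fit are invented for illustration, not the study's measurements): fit a least-squares line from a program-size measure, such as the number of decision statements, to the number of test cases that sufficed to satisfy a criterion on past programs, then use the fit to forecast the effort for a new program.

```python
# Minimal sketch with invented data: relate a size measure to the number
# of test cases that satisfied a given adequacy criterion, then forecast.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Hypothetical history: (decision statements, test cases needed for the criterion).
history = [(12, 9), (25, 15), (40, 22), (61, 30), (88, 41)]
a, b = fit_line([d for d, _ in history], [t for _, t in history])
print(f"predicted test cases for a program with 50 decisions: {a + b * 50:.1f}")
```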