Similar Documents
A total of 20 similar documents were retrieved (search time: 31 ms).
1.
Context: Testing from finite state machines has been investigated due to its well-founded and sound theory as well as its practical application. There has been a recurrent interest in developing methods capable of generating test suites that detect all faults in a given fault domain. However, the proposal of new methods motivates the comparison with traditional methods. Objective: We compare the methods that generate complete test suites from finite state machines. The test suites produced by the W, HSI, H, SPY, and P methods are analyzed in different configurations. Method: Complete and partial machines were randomly generated, varying the numbers of states, inputs, outputs, and transitions. These different configurations were used to compare test suite characteristics (number of resets, test case length) and the test suite length (i.e., the sum of the lengths of its test cases). The fault detection ratio was evaluated using mutation testing to produce faulty implementations with an extra state. Results: On average, the recent methods (H, SPY, and P) produced longer test cases but smaller test suites than the traditional methods (W, HSI). The recent methods generated test suites of similar length, though P produced slightly smaller test suites. The SPY and P methods had the highest fault detection ratios and HSI had the lowest. For all methods, there was a positive correlation between the number of resets and the test suite length and between the test case length and the fault detection ratio. Conclusion: The recent methods rely on fewer and longer test cases to reduce the overall test suite length, while the traditional methods produce more and shorter test cases. Longer test cases are correlated with the fault detection ratio, which favored SPY, though all methods achieve a ratio of over 92%.
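As an illustrative sketch only (the data layout and the convention that every test case is executed from the initial state are assumptions, not taken from the paper), the two suite-level quantities compared above can be computed as follows:

    /* Hypothetical sketch of the test-suite metrics used in the comparison. */
    #include <stddef.h>

    typedef struct {
        size_t length;   /* number of inputs in the test case */
    } TestCase;

    void suite_metrics(const TestCase *cases, size_t n,
                       size_t *resets, size_t *suite_length)
    {
        *resets = n;          /* one reset per test case (assumed convention) */
        *suite_length = 0;
        for (size_t i = 0; i < n; i++)
            *suite_length += cases[i].length;   /* suite length = sum of case lengths */
    }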

2.
An experiment comparing the effectiveness of the all-uses and all-edges test data adequacy criteria is discussed. The experiment was designed to overcome some of the deficiencies of previous software testing experiments. A large number of test sets was randomly generated for each of nine subject programs with subtle errors. For each test set, the percentages of executable edges and definition-use associations covered were measured, and it was determined whether the test set exposed an error. Hypothesis testing was used to investigate whether all-uses adequate test sets are more likely to expose errors than all-edges adequate test sets. Logistic regression analysis was used to investigate whether the probability that a test set exposes an error increases as the percentage of definition-use associations or edges covered by it increases. Error-exposing ability was shown to be strongly positively correlated with the percentage of covered definition-use associations in only four of the nine subjects. Error-exposing ability was also shown to be positively correlated with the percentage of covered edges in four different subjects, but the relationship was weaker.
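As a hedged illustration of the analysis described above (the coefficient symbols are not taken from the paper), the logistic regression models the probability that a test set exposes an error as a function of its coverage percentage c:

\[ P(\text{error exposed} \mid c) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 c)}}, \]

where c is the percentage of definition-use associations (or edges) covered; a positive and statistically significant \(\beta_1\) corresponds to the positive relationship reported for four of the nine subjects.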

3.
4.
The effectiveness in discovering errors of symbolic evaluation, testing, and static program analysis is studied. The three techniques are applied to a diverse collection of programs and the results compared. Symbolic evaluation is used to carry out symbolic testing and to generate symbolic systems of path predicates. The use of the predicates for automated test data selection is analysed. Several conventional types of program testing strategies are evaluated. The strategies include branch testing, structured testing and testing on input values having special properties. The static source analysis techniques that are studied include anomaly analysis and interface analysis. Examples are included which describe typical situations in which one technique is reliable but another unreliable. The effectiveness of symbolic testing is compared with testing on actual data and with the use of an integrated methodology that includes both testing and static source analysis. Situations in which symbolic testing is difficult to apply or not effective are discussed. Different ways in which symbolic evaluation can be used for generating test data are described. Those ways for which it is most effective are isolated. The paper concludes with a discussion of the most effective uses to which symbolic evaluation can be put in an integrated system which contains all three of the validation techniques that are studied.

5.
6.
Drift is a common phenomenon in active sensors and, if left untreated, is generally the limiting factor in their performance. It is shown that drift and spread in sensor characteristics are tightly interwoven due to finite sensitivity to biasing parameters. Modern treatments of drift act dynamically under operating conditions, notably chopping, the sensitivity variation method, and the recently introduced van Putten method. These methods differ with regard to the dependence of drift on biasing. In their application to silicon flow sensors, the first two reduce but do not eliminate drift. The geometric van Putten method leaves the biasing invariant, which eliminates drift, yields uniform sensor characteristics, and provides drift-free operation.

7.
Three pairs of higher level computer communication protocols are compared to reveal design choices. The comparisons are between the host level protocols NCP and TCP used in the ARPANET, between the virtual terminal protocols Telnet and SMVT, and between the file transfer protocols FTP and BTF. The Telnet and FTP protocols are used in the ARPANET, while the SMVT and BTF are proposed for use in the EIN.

8.
Perturbation approximations should preserve symplecticity. This paper points out that the finite element displacement method is automatically symplecticity-preserving, and so is the finite element mixed-energy representation. The Taylor series expansion of the stiffness matrix in the perturbation method can be proved to preserve symplecticity; the Taylor series expansion perturbation of the mixed energy is also proved to preserve symplecticity. However, the Taylor series expansion perturbation of the transfer symplectic matrix does not preserve symplecticity. Symplectic matrices preserve symplecticity only under the multiplicative group, so a symplecticity-preserving perturbation of the transfer symplectic matrix must use the multiplication of canonical transformations. Although the additive perturbation of the stiffness matrix, the additive perturbation of the mixed-energy matrix, and the multiplicative canonical-transformation perturbation of the transfer symplectic matrix all preserve symplecticity, these three perturbation approximations are not identical. Finally, a comparison is given through numerical examples.
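A short sketch of the symplecticity condition behind this argument (not reproduced from the paper): a matrix S is symplectic when \(S^{T} J S = J\), with \(J = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}\) the standard skew-symmetric unit matrix. For an additive (truncated Taylor) perturbation \(S + \Delta S\),

\[ (S+\Delta S)^{T} J (S+\Delta S) = J + S^{T} J\,\Delta S + \Delta S^{T} J\,S + \Delta S^{T} J\,\Delta S \neq J \quad \text{in general}, \]

so truncating the Taylor series of the transfer symplectic matrix breaks the condition, whereas a product of symplectic matrices satisfies

\[ (S_1 S_2)^{T} J (S_1 S_2) = S_2^{T}\,(S_1^{T} J S_1)\,S_2 = S_2^{T} J S_2 = J, \]

which is why the symplecticity-preserving perturbation must be built from the multiplication of canonical transformations.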

9.
10.
The effectiveness of CCTV and improved street lighting has been studied extensively in terms of their potential for reducing the number of crimes in a certain area. However, this does not take into account the cost of the interventions or the savings due to crime reduction. This paper presents a model, in the form of a cellular automaton, to simulate the implementation of improved street lighting and CCTV cameras using a range of strategies. This permits an exploration of simulated options to find which is most cost effective and what the best strategy for implementation is. The results indicate that there are few situations where CCTV is more cost effective than improved street lighting as a way of reducing street crime. In addition, it is shown that the strategy of targeting locations with the highest crime rates, “hot spots”, has the greatest potential for maximising the cost effectiveness of interventions.

11.
Choice of a classification algorithm is generally based upon a number of factors, among which are availability of software, ease of use, and performance, measured here by overall classification accuracy. The maximum likelihood (ML) procedure is, for many users, the algorithm of choice because of its ready availability and the fact that it does not require an extended training process. Artificial neural networks (ANNs) are now widely used by researchers, but their operational applications are hindered by the need for the user to specify the configuration of the network architecture and to provide values for a number of parameters, both of which affect performance. The ANN also requires an extended training phase.

In the past few years, the use of decision trees (DTs) to classify remotely sensed data has increased. Proponents of the method claim that it has a number of advantages over the ML and ANN algorithms. The DT is computationally fast, makes no statistical assumptions, and can handle data that are represented on different measurement scales. Software to implement DTs is readily available over the Internet. Pruning of DTs can make them smaller and more easily interpretable, while the use of boosting techniques can improve performance.

In this study, separate test and training data sets from two different geographical areas and two different sensors—multispectral Landsat ETM+ and hyperspectral DAIS—are used to evaluate the performance of univariate and multivariate DTs for land cover classification. Factors considered are: the effects of variations in training data set size and of the dimensionality of the feature space, together with the impact of boosting, attribute selection measures, and pruning. The level of classification accuracy achieved by the DT is compared to results from the back-propagating ANN and ML classifiers. Our results indicate that the performance of the univariate DT is acceptably good in comparison with that of other classifiers, except with high-dimensional data. Classification accuracy increases linearly with training data set size to a limit of 300 pixels per class in this case. Multivariate DTs do not appear to perform better than univariate DTs. While boosting produces an increase in classification accuracy of between 3% and 6%, the use of attribute selection methods does not appear to be justified in terms of accuracy increases. However, neither the univariate DT nor the multivariate DT performed as well as the ANN or ML classifiers with high-dimensional data.

12.
For many applications two-dimensional hydraulic models are time intensive to run due to their computational requirements, which can adversely affect the progress of both research and industry modelling projects. Computational time can be reduced by running a model in parallel over multiple cores. However, there are many parallelisation methods and these differ in terms of difficulty of implementation, suitability for particular codes and parallel efficiency. This study compares three parallelisation methods based on OpenMP, message passing and specialised accelerator cards. The parallel implementations of the codes were required to produce near identical results to a serial version for two urban inundation test cases. OpenMP was considered the easiest method to develop and produced similar speedups (of ~3.9×) to the message passing code on up to four cores for a fully wet domain. The message passing code was more efficient than OpenMP, and remained over 90% efficient on up to 50 cores for a completely wet domain. All parallel codes were less efficient for a partially wet domain test case. The accelerator card code was faster and more power efficient than the standard code on a single core for a fully wet domain, but was subject to a longer development time (2 months compared to <2 weeks for the other methods).
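A minimal sketch of the first of the three approaches, assuming a simple explicit grid update (the grid size, variable names, and update rule are illustrative assumptions, not the authors' code):

    /* OpenMP sketch: distribute rows of a 2D flood-model grid across threads. */
    #include <omp.h>

    #define NX 1000
    #define NY 1000

    void update_depths(double *h_new, const double *h_old, const double *flux)
    {
        /* Rows are independent within one explicit time step, so the outer
         * loop can be split across threads. */
        #pragma omp parallel for schedule(static)
        for (int i = 1; i < NX - 1; i++) {
            for (int j = 1; j < NY - 1; j++) {
                int k = i * NY + j;
                if (h_old[k] <= 0.0)   /* dry cell: nothing to update */
                    continue;
                h_new[k] = h_old[k] + flux[k];
            }
        }
    }

Skipping dry cells is one plausible contributor to the lower efficiency reported for the partially wet domain: threads assigned mostly dry rows finish early and wait for the others.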

13.
Multi-label learning has received significant attention in the research community over the past few years: this has resulted in the development of a variety of multi-label learning methods. In this paper, we present an extensive experimental comparison of 12 multi-label learning methods using 16 evaluation measures over 11 benchmark datasets. We selected the competing methods based on their previous usage by the community, the representation of different groups of methods and the variety of basic underlying machine learning methods. Similarly, we selected the evaluation measures to be able to assess the behavior of the methods from a variety of viewpoints. In order to make the conclusions independent of the application domain, we use 11 datasets from different domains. Furthermore, we compare the methods by their efficiency in terms of the time needed to learn a classifier and the time needed to produce a prediction for an unseen example. We analyze the results from the experiments using the Friedman and Nemenyi tests to assess the statistical significance of differences in performance. The results of the analysis show that for multi-label classification the best performing methods overall are random forests of predictive clustering trees (RF-PCT) and the hierarchy of multi-label classifiers (HOMER), followed by binary relevance (BR) and classifier chains (CC). Furthermore, RF-PCT exhibited the best performance according to all measures for multi-label ranking. The recommendation from this study is that when new methods for multi-label learning are proposed, they should be compared to RF-PCT and HOMER using multiple evaluation measures.
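For reference, the standard forms of the two tests named above, as commonly stated in the literature (not reproduced from the paper): with k methods ranked on N datasets and \(R_j\) the average rank of method j,

\[ \chi^{2}_{F} = \frac{12N}{k(k+1)} \left[ \sum_{j=1}^{k} R_j^{2} - \frac{k(k+1)^{2}}{4} \right], \qquad \mathrm{CD} = q_{\alpha} \sqrt{\frac{k(k+1)}{6N}}, \]

where the Friedman statistic \(\chi^{2}_{F}\) tests whether the methods differ at all, and the Nemenyi test declares two methods significantly different when their average ranks differ by at least the critical difference CD.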

14.
Vertices Principal Component Analysis (V-PCA) and Centers Principal Component Analysis (C-PCA) generalize Principal Component Analysis (PCA) in order to summarize interval-valued data. Neural Network Principal Component Analysis (NN-PCA) represents an extension of PCA for fuzzy interval data. The first two methods can also be used for analyzing fuzzy interval data, but they then ignore the spread information. In the literature, the V-PCA method is usually considered computationally cumbersome because it requires the transformation of the interval-valued data matrix into a single-valued data matrix whose number of rows depends exponentially on the number of variables and linearly on the number of observation units. However, it has been shown that this problem can be overcome by considering the cross-products matrix, which is easy to compute. A review of C-PCA, V-PCA (which hence also includes the computational short-cut to V-PCA) and NN-PCA is provided. Furthermore, the three methods are compared by means of a simulation study and by an application to an empirical data set. In the simulation study, fuzzy interval data are generated according to various models, and it is reported under which conditions each method performs best.

15.
We present a case study that tracks usability problems predicted with six usability evaluation methods (claims analysis, cognitive walkthrough, GOMS, heuristic evaluation, user action notation, and simply reading the specification) through a development process. We assess the methods' predictive power by comparing the predictions to the results of user tests. We assess the methods' persuasive power by seeing how many problems led to changes in the implemented code. We assess design-change effectiveness by user testing the resulting new versions of the system. We conclude that predictive methods are not as effective as the HCI field would like and discuss directions for future research.

16.
A rig is a ring without negatives. We analyse the free rig on a generator x subject to the equivalence x = 1 + x + x², showing that in it the non-constant polynomials form a ring. This ring can be identified with the Gaussian integers, which thus acquire objective meaning.
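A heuristic sketch of why the identification is plausible (not the paper's argument): once the non-constant polynomials form a ring, additive cancellation becomes available, and the defining equivalence gives

\[ x = 1 + x + x^{2} \quad \Longrightarrow \quad x^{2} + 1 = 0, \]

so the generator x behaves like the imaginary unit and the ring maps onto the Gaussian integers \(\mathbb{Z}[i]\).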

17.

18.
The way ahead with the practical development and application of Ergonomic methods is through a better anticipation and appreciation of changes to system effectiveness and human work that will be incurred through the introduction of new technologies to the workplace. These improvements will involve an improved awareness by the system of the working context and environment. The argued future is with improvements in the handling and use of knowledge by systems. The development of suitable Ergonomics methods, or the careful adaptation of existing methods, should accompany any technological revolution. Moreover, future methods are needed that are specifically developed to be applicable to the real time study of work considering both work context and the amalgamation of results from the use of many diverse methods throughout the design and development life cycle of a system. Part of this process will be a necessary complementation of both quantitative and qualitative methods and guidelines. Another focus should be on creating improved Ergonomics participation within multidisciplinary system design and development environments throughout the system's life cycle. Only through this avenue can Ergonomics show a consistent and valued contribution to quality design and its development. In parallel to such a contribution will be an acceptance by other engineering disciplines, managers, and customers that such an application of Ergonomics is cost effective.

19.
Recently, many superpixel detection methods have been proposed and used in various applications. We are interested in which method is most suitable for the application of contour detection. In this paper, superpixels are evaluated on the BSDS500 dataset in two different aspects. On the one hand, contours are provided directly by the boundaries of superpixels, and experiments show that better results can be achieved by superpixels with irregular shapes than by those with regular shapes and similar sizes. On the other hand, contours are further detected from candidate positions which are confirmed by the boundaries of superpixels through the operation of dilation. In this situation, experiments show that competitive results can also be achieved by some superpixels with regular shapes and similar sizes. In addition, we propose a superpixel detection method called watershed-based graph (WG), by which superpixels with irregular shapes can be produced. First, a graph is constructed from an over-segmented map obtained through a watershed algorithm. Then, to get the desired superpixels, the graph is segmented by merging neighboring segments in order of decreasing similarity. Experiments show that WG achieves higher efficiency with only moderately worse contour quality than the original graph-based method.

20.
This paper addresses multicriteria combinatorial optimization problems involving one cost and several bottleneck objective functions. An algorithm is developed which generates the minimal complete set of Pareto-optimal solutions. This algorithm runs in polynomial time as long as the single-objective problem considering only the cost function can be solved polynomially. A reoptimization procedure is used to accelerate the convergence of the algorithm. Applications are given. Computational results on randomly generated instances and planar grid graphs concerning the minimum cost spanning tree and the shortest path problem are presented.
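For clarity, the dominance relation underlying the Pareto-optimal set in this setting (a standard definition, consistent with but not quoted from the paper): with a cost \(c(\cdot)\) and bottleneck objectives \(b_1(\cdot), \dots, b_p(\cdot)\), all to be minimized, a feasible solution x' dominates x when

\[ c(x') \le c(x) \quad \text{and} \quad b_i(x') \le b_i(x) \ \ \forall i, \]

with at least one inequality strict; the minimal complete set contains one solution for every non-dominated objective vector.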
