Similar documents: 20 results found.
1.
One approach to testing concurrent programs is called reachability testing, which derives test sequences automatically and on‐the‐fly, without constructing a static model. Existing reachability testing algorithms are exhaustive in that they are intended to exercise all possible synchronization sequences of a concurrent program with a given input. In this paper, we present a new testing strategy, called t‐way reachability testing, that adopts the dynamic framework of reachability testing but selectively exercises a subset of synchronization sequences. The selection of the synchronization sequences is based on a combinatorial testing strategy called t‐way testing. We present an algorithm that implements t‐way reachability testing, and report the results of several case studies that were conducted to evaluate its effectiveness. The results indicate that t‐way reachability testing can substantially reduce the number of synchronization sequences exercised during reachability testing while still effectively detecting faults. Copyright © 2007 John Wiley & Sons, Ltd.  相似文献   
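The t-way selection idea can be sketched with a small greedy routine. This is not the paper's on-the-fly reachability-testing algorithm; it is a hypothetical post-hoc selection which, given a pool of candidate synchronization sequences (each a tuple of per-event synchronization choices), keeps only enough of them to cover every t-way combination of (position, choice) pairs.

```python
from itertools import combinations

def select_t_way(sequences, t):
    """Greedy sketch: keep a subset of candidate synchronization sequences that
    covers every t-way combination of (position, choice) pairs in the pool."""
    def t_combos(seq):
        return set(combinations(enumerate(seq), t))

    required = set().union(*(t_combos(s) for s in sequences))
    selected, covered = [], set()
    while covered != required:
        # pick the candidate contributing the most still-uncovered combinations
        best = max(sequences, key=lambda s: len(t_combos(s) - covered))
        gain = t_combos(best) - covered
        if not gain:
            break
        selected.append(best)
        covered |= gain
    return selected

# hypothetical pool: each sequence records which send is paired with each receive
pool = [("r1<-s1", "r2<-s2"), ("r1<-s1", "r2<-s3"),
        ("r1<-s2", "r2<-s2"), ("r1<-s2", "r2<-s3")]
print(select_t_way(pool, 1))   # two sequences already cover every single choice
```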

2.
Existing specification‐based testing techniques often generate comprehensive test suites to cover diverse combinations of test‐relevant aspects. Such a test suite can be prohibitively expensive to execute exhaustively because of its large size. A pragmatic strategy often adopted in practice, called test‐once strategy, is to identify certain particular conditions from the specification and to test each such condition once only. This strategy is implicitly based on the uniformity assumption that the implementation will process a particular condition uniformly, regardless of other parameters or inputs. As the decision of adopting the test‐once strategy is often based on the specification, whether the uniformity assumption actually holds in the implementation needs to be critically assessed, or else the risk of inadequate testing could be non‐negligible. As viable alternatives to reduce such a risk, a family of test‐a‐few strategies for the testing of particular conditions is proposed in this paper. Two rounds of experiments that evaluate the effectiveness of the test‐a‐few strategies as compared with the test‐once strategy are further reported. Our experiments do the following: (1) provide clear evidence that the uniformity assumption often, but not always, holds and that the assumption usually fails to hold when the implementation is faulty; (2) demonstrate that all our proposed test‐a‐few strategies are statistically more reliable than the test‐once strategy in revealing faulty programs; (3) show that random sampling is already substantially more effective than the test‐once strategy; and (4) indicate that, compared with other test‐a‐few strategies under study, choice coverage seems to achieve a better trade‐off between test effort and effectiveness. Copyright © 2011 John Wiley & Sons, Ltd.  相似文献   
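The contrast between the two strategies can be made concrete with a toy example. Everything below (the discount rule, the fault, the sub-domain bounds) is hypothetical; it only illustrates why a test-a-few strategy such as random sampling is more reliable than test-once when the uniformity assumption fails.

```python
import random

REQUIRED_DISCOUNT = 0.2   # hypothetical spec: customers aged 60 or above get 20% off

def faulty_discount(age):
    # hypothetical faulty implementation: the condition is not handled uniformly,
    # because the discount is (wrongly) withdrawn again from age 80 onwards
    return REQUIRED_DISCOUNT if 60 <= age < 80 else 0.0

def test_once():
    # test-once: a single representative of the particular condition "age >= 60"
    return faulty_discount(65) == REQUIRED_DISCOUNT

def test_a_few_random(k=5):
    # test-a-few (random sampling): k inputs drawn from the same sub-domain
    ages = [random.randint(60, 100) for _ in range(k)]
    return all(faulty_discount(a) == REQUIRED_DISCOUNT for a in ages)

print(test_once())           # True  -> the uniformity assumption hides the fault
print(test_a_few_random())   # False with high probability -> fault revealed
```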

3.
This paper presents a generic formal framework to specify and test autonomous e‐commerce agents. First, the formalism to represent the behaviour of agents is introduced. The corresponding machinery to define how implementations can be tested follows. Two testing approaches are considered. The first of them, which can be called active, is based on stimulating the implementation under test (IUT) with a test. The peculiarity is that tests will be defined as a special case of autonomous e‐commerce agent. The second approach, which can be called passive, consists of observing the behaviour of the tested agent in an environment containing other agents. As a case study the framework is applied to the e‐commerce system Kasbah. Copyright © 2005 John Wiley & Sons, Ltd.  相似文献   

4.
In this paper, the focus is put on multi‐core branch‐and‐bound algorithms for solving large‐scale permutation‐based optimization problems. We investigate five work stealing (WS) strategies with a new data structure called integer–vector–matrix (IVM). In these strategies, each thread has a private IVM allowing the local management of a set of subproblems enumerated using a factorial system. The WS strategies differ in the way the victim thread is selected and the granularity of stolen work units (intervals of factoradics). To assess the efficiency of the private IVM‐based WS approach, the five WS strategies have been extensively experimented on the flowshop scheduling permutation problem and compared with their conventional linked‐list‐based counterparts. The obtained results demonstrate that the IVM‐based WS outperforms the linked‐list‐based one in terms of CPU time, memory usage and number of performed WS operations. Copyright © 2016 John Wiley & Sons, Ltd.  相似文献   
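The factorial number system underlying the IVM intervals can be illustrated in a few lines. This sketch is independent of the paper's actual IVM data structure; it only shows how a permutation's rank maps to factoradic digits (a Lehmer code) and back to the permutation, which is what makes it possible to describe work units as intervals of integers.

```python
def to_factoradic(rank, n):
    """Factorial-base (factoradic) digits of `rank`, most significant first,
    for permutations of n elements (0 <= rank < n!)."""
    digits = [0] * n
    for base in range(2, n + 1):       # the digit at position n-base has radix `base`
        digits[n - base] = rank % base
        rank //= base
    return digits

def factoradic_to_permutation(digits, items):
    """Decode factoradic digits (a Lehmer code) into the ranked permutation."""
    pool = list(items)
    return [pool.pop(d) for d in digits]

# the permutation of rank 3 (lexicographic order) over 0..3:
digits = to_factoradic(3, 4)                         # [0, 1, 1, 0]
print(factoradic_to_permutation(digits, range(4)))   # [0, 2, 3, 1]
```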

5.
Algorithmic construction of software interaction test suites has focussed on pairwise coverage; less is known about the efficient construction of test suites for t‐way interactions with t≥3. This study extends an efficient density‐based algorithm for pairwise coverage to generate t‐way interaction test suites and shows that it guarantees a logarithmic upper bound on the size of the test suites as a function of the number of factors. To complement this theoretical guarantee, an implementation is outlined and some practical improvements are made. Computational comparisons with other published methods are reported. Many of the results improve upon those in the literature. However, limitations on the ability of one‐test‐at‐a‐time algorithms are also identified. Copyright © 2008 John Wiley & Sons, Ltd.  相似文献   
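A plain greedy, one-test-at-a-time construction conveys what such algorithms do, although it is not the density-based algorithm studied in the paper and carries no logarithmic-size guarantee. The factor and value lists below are illustrative.

```python
from itertools import combinations, product

def greedy_covering_array(factors, t):
    """Greedy one-test-at-a-time sketch: `factors` is a list of value lists;
    returns tests that together cover every t-way combination of values."""
    k = len(factors)
    uncovered = {tuple(zip(cols, vals))
                 for cols in combinations(range(k), t)
                 for vals in product(*(factors[c] for c in cols))}
    tests = []
    while uncovered:
        # choose the next test to cover as many still-uncovered combinations as possible
        best, best_gain = None, -1
        for candidate in product(*factors):    # exhaustive scan; fine for small examples
            gain = sum(1 for combo in uncovered
                       if all(candidate[c] == v for c, v in combo))
            if gain > best_gain:
                best, best_gain = candidate, gain
        tests.append(best)
        uncovered = {combo for combo in uncovered
                     if not all(best[c] == v for c, v in combo)}
    return tests

print(greedy_covering_array([[0, 1]] * 4, 2))   # a handful of rows instead of all 16
```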

6.
In this work, the controllability of single-leader multi-agent systems with chain structures is studied. It is shown that the necessary and sufficient condition for the multi-chain system to be controllable is that there exist no two chain lengths of the form ℓ1 = i + k1(2i + 1) and ℓ2 = i + k2(2i + 1), where i is some natural number and k1 and k2 are nonnegative integers. Using this condition, the author derives an upper bound based on the length of the longest chain and proves that if the number of chains exceeds this bound, the multi-chain system must be uncontrollable. In addition, the author investigates an augmented system constructed by connecting some follower nodes of the multi-chain system and obtains a sufficient condition for the augmented system to be uncontrollable. Finally, the author shows how to select a minimum number of additional leaders to make an uncontrollable multi-chain system controllable. Numerical examples are provided to illustrate the results. Copyright © 2016 John Wiley & Sons, Ltd.
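The stated condition is easy to check mechanically. The sketch below brute-forces it under the assumption that the natural number i ranges over 1, 2, ... and that chain lengths are given as a list (two chains of equal length count as a pair).

```python
def uncontrollable_chain_pair(lengths):
    """Return (True, i, pair) if two of the given chain lengths can be written as
    i + k1*(2i + 1) and i + k2*(2i + 1) for a common i >= 1 and nonnegative k1, k2
    (the abstract's condition under which the multi-chain system is uncontrollable)."""
    for i in range(1, max(lengths) + 1):
        hits = [L for L in lengths if L >= i and (L - i) % (2 * i + 1) == 0]
        if len(hits) >= 2:
            return True, i, tuple(hits[:2])
    return False, None, None

print(uncontrollable_chain_pair([4, 7]))   # (True, 1, (4, 7)): 4 = 1 + 1*3, 7 = 1 + 2*3
print(uncontrollable_chain_pair([1, 2]))   # (False, None, None): no common i works
```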

7.
This paper mainly investigates event-triggered control for couple-group multi-agent systems with communication delay. Logarithmic quantization is considered in the communication channels. Event-triggered control laws are adopted to reduce the frequency of individual actuation updates for the discrete-time agent dynamics. The proposed protocol is effective as long as the quantization levels are dense enough; in particular, letting the density of quantization levels go to infinity is a sufficient condition for asymptotic consensus of the multi-agent systems. It turns out that bounded consensus depends not only on the density of quantization levels, but also on the event-updating strategy. Finally, a simulation example is given to demonstrate the effectiveness of the proposed methods.
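For reference, a logarithmic quantizer can be sketched as follows. This uses one common formulation (levels ±ρ^n·u0 with sector bounds giving a relative error of at most δ = (1 − ρ)/(1 + ρ)); the paper's exact quantizer and parameters may differ, and the values of rho and u0 below are illustrative.

```python
import math

def log_quantize(x, rho=0.8, u0=1.0):
    """Logarithmic quantizer sketch: maps x to a level sign(x) * rho**n * u0 so
    that the relative quantization error is at most delta = (1 - rho)/(1 + rho).
    A rho closer to 1 means denser levels and a smaller delta."""
    if x == 0.0:
        return 0.0
    sign = 1.0 if x > 0 else -1.0
    y = 2.0 * abs(x) / ((1.0 + rho) * u0)     # sector boundaries fall on powers of rho
    n = math.floor(math.log(y, rho)) + 1      # index of the sector containing |x|
    return sign * rho ** n * u0

x, rho = 0.37, 0.8
q = log_quantize(x, rho)
print(q, abs(q - x) / abs(x) <= (1 - rho) / (1 + rho))   # approx 0.41, True
```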

8.
9.
This article studies consensus problems of discrete‐time linear multi‐agent systems with stochastic noises and binary‐valued communications. Different from quantized consensus of first‐order systems with binary‐valued observations, the quantized consensus of linear multi‐agent systems requires that each agent observes its neighbors' states dynamically. Unlike the existing quantized consensus of linear multi‐agent systems, the information that each agent in this article gets from its neighbors is only binary‐valued. To estimate its neighbors' states dynamically by using the binary‐valued observations, we construct a two‐step estimation algorithm. Based on the estimates, a stochastic approximation‐based distributed control is proposed. The estimation and control are analyzed together in the closed‐loop system, since they are strongly coupled. Finally, it is proved that the estimates can converge to the true states in mean square sense and the states can achieve consensus at the same time by properly selecting the coefficient in the estimation algorithm. Moreover, the convergence rate of the estimation and the consensus speed are both given by O(1/t). The theoretical results are illustrated by simulations.  相似文献   

10.
The present paper describes a method for using more than three primaries in an additive-primary display. The method ensures that each tristimulus specification can be produced in no more than one way, even if a non-singular filter (i.e., one that does not reduce the dimensionality of color-matching space) is interposed between the screen and the viewer. Starting with N primaries, the method uses only three at a time, but these may be composites: fixed linear combinations of the original N. As further insurance against on-screen metamerism, a criterion on the primary spectra, based on the Binet-Cauchy theorem, ensures that a triad of primaries keeps its right/left-handed chromaticity ordering when a filter is interposed.

11.
This paper presents an approach to generate test cases from UML 2.0 sequence diagrams and subsequently prioritize those test cases using model information encapsulated in the sequence diagrams. The test cases generated according to the proposed approach satisfy the scenario coverage criterion and are suitable for system‐level testing. For prioritizing test cases, three different prioritization metrics are proposed. The values of these prioritization metrics can be analytically computed from the model information only. This paper also presents an approach to generate test data using a concept called rule‐based matrix. The prioritization metrics are used to control the number of test data without compromising the test adequacy. The effectiveness of the proposed approach has been verified using two industrial designs. Copyright © 2009 John Wiley & Sons, Ltd.  相似文献   

12.
13.
In this paper, the perturbed continuous-time large-scale system with time delays is represented by an equivalent Takagi-Sugeno type fuzzy model. First, two types of decentralized state feedback controllers are considered. Based on a Riccati-type inequality, the Razumikhin theorem, and the delay-dependent Lyapunov functional approach, several controller design approaches are proposed to stabilize the whole fuzzy time-delay system asymptotically. In these design methods, both delay-independent and delay-dependent stabilization criteria are derived. By the Schur complement, these sufficient conditions can be easily transformed into LMI problems. Moreover, for systems in which the time delays τ^l_ij(t) are the same for all rules (i.e., τ^l_ij(t) = τ^m_ij(t) = τ_ij for all l and m), the authors also propose simpler and less conservative stabilization criteria. A numerical example is given to illustrate the control design and its effectiveness.

14.
Modelling a software system is often a challenging prerequisite to automatic test case generation. Modelling the navigation structure of a dynamic web application is particularly challenging because of the presence of a large number of pages that are created dynamically and the difficulty of reaching a dynamic page unless a set of appropriate input values are provided for the parameters. To address the first challenge, some form of abstraction is required to enable scalable modelling. For the second challenge, techniques are required to select appropriate input values for parameters and systematically combine them to reach new pages. This paper presents a combinatorial approach in building a navigation graph for dynamic web applications. The navigation graph can then be used to automatically generate test sequences for testing web applications. The novelty of our approach is twofold. First, we use an abstraction scheme to control the page explosion problem, where pages that are likely to have the same navigation behaviour are grouped together and are represented as a single node in the navigation graph. Second, assuming that values of individual parameters are supplied manually or generated from other techniques, we combine parameter values such that well‐defined combinatorial coverage of input parameter values is achieved. Using combinatorial coverage can significantly reduce the number of requests that have to be submitted while still achieving effective coverage of the navigation structure. We implement our combinatorial approach in a tool, Tansuo, and apply the tool on seven open‐source web applications. We evaluate the effectiveness of Tansuo's exploration process guided by t‐way coverage, for t = 1,2,3, with respect to code coverage, and find that the navigation structure exploration by Tansuo, in general, results in high code coverage (more than 80% statement coverage for most of our subject applications when dead code is removed). We compare Tansuo's effectiveness with two other navigation graph tools and find that Tansuo is more effective. Our empirical results indicate that using pairwise coverage in Tansuo results in the efficient generation of navigation graphs and effective exploration of dynamic web applications. Copyright © 2016 John Wiley & Sons, Ltd.  相似文献   
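The abstraction step can be illustrated with a small, hypothetical helper (this is not Tansuo's actual scheme): pages whose URLs share a path and a set of parameter names, with the parameter values dropped, are assumed to have the same navigation behaviour and collapse into a single navigation-graph node.

```python
from urllib.parse import urlparse, parse_qs

def abstract_page(url):
    """Map a concrete URL to an abstract navigation-graph node:
    (path, sorted parameter names), discarding the parameter values."""
    parts = urlparse(url)
    return parts.path, tuple(sorted(parse_qs(parts.query)))

urls = [
    "http://shop.example/item.php?id=3&tab=reviews",   # hypothetical application URLs
    "http://shop.example/item.php?id=7&tab=specs",
    "http://shop.example/cart.php?user=9",
]
print({abstract_page(u) for u in urls})   # the three pages collapse to two nodes
```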

15.
Rescaling is possibly the most popular approach to cost-sensitive learning. This approach works by rebalancing the classes according to their costs, and it can be realized in different ways, for example, re-weighting or resampling the training examples in proportion to their costs, moving the decision boundaries of classifiers away from high-cost classes in proportion to costs, etc. This approach is very effective in dealing with two-class problems, yet some studies showed that it is often not so helpful on multi-class problems. In this article, we try to explore why the rescaling approach is often unhelpful on multi-class problems. Our analysis discloses that the rescaling approach works well when the costs are consistent, while directly applying it to multi-class problems with inconsistent costs may not be a good choice. Based on this recognition, we advocate that before applying the rescaling approach, the consistency of the costs must be examined first. If the costs are consistent, the rescaling approach can be conducted directly; otherwise it is better to apply rescaling after decomposing the multi-class problem into a series of two-class problems. An empirical study involving 20 multi-class data sets and seven types of cost-sensitive learners validates our proposal. Moreover, we show that the proposal is also helpful for class-imbalance learning.
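As a concrete, simplified illustration of rescaling by re-weighting, the sketch below assigns each training example a weight proportional to a per-class cost; taking the per-class cost as the row sum of a hypothetical cost matrix is only one possible choice, not necessarily the article's.

```python
import numpy as np

def rescaling_weights(y, cost_matrix):
    """Rescaling via re-weighting: weight each example by the cost associated
    with misclassifying its true class (here, the row sum of the cost matrix),
    normalized so that the average weight is 1."""
    per_class = {c: float(sum(row)) for c, row in cost_matrix.items()}
    w = np.array([per_class[label] for label in y])
    return w / w.mean()

y = np.array([0, 0, 0, 0, 1, 1, 2])                          # toy three-class labels
cost_matrix = {0: [0, 1, 1], 1: [5, 0, 5], 2: [10, 10, 0]}   # cost[i][j]: truth i, prediction j
weights = rescaling_weights(y, cost_matrix)
print(weights)   # high-cost classes receive proportionally larger weights
# the weights can then be fed to a learner that accepts per-example weights,
# e.g. estimator.fit(X, y, sample_weight=weights)
```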

16.
The configuration spaces of software systems are often too large to test exhaustively. Combinatorial interaction testing approaches, such as covering arrays, systematically sample the configuration space and test only the selected configurations. In an attempt to reduce the cost of testing, standard t-way covering arrays aim to cover all t-way combinations of option settings in a minimum number of configurations. By doing so, they simply assume that every configuration costs the same. When the cost varies from one configuration to another, however, minimizing the number of configurations is not necessarily the same as minimizing the cost. To overcome this issue, we have recently introduced cost-aware covering arrays. In a nutshell, a t-way cost-aware covering array is a standard t-way covering array that “minimizes” a given cost function modeling the actual cost of testing. In this work we develop a simulated annealing-based approach to compute cost-aware covering arrays, which takes as input a configuration space model enhanced with a cost function and computes a cost-aware covering array by using two alternating neighboring state generation strategies together with a fitness function expressed as a weighted sum of two objectives: covering all required t-way option setting combinations and minimizing the cost function. To the best of our knowledge, the proposed approach is the first approach that computes cost-aware covering arrays for general, non-additive linear cost functions with multiplicative interaction effects. We evaluate the approach both by conducting controlled experiments, in which we systematically vary the input models to study the sensitivity of the approach to various factors and by conducting experiments using real cost functions for real software systems. We also compare cost-aware covering arrays to standard covering arrays constructed by well-known algorithms and study how fast the construction costs are compensated by the cost reductions provided. Our empirical results suggest that the proposed approach is more effective and efficient than the existing approaches.  相似文献   
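A drastically simplified simulated-annealing sketch of the idea follows: it keeps a fixed number of rows, uses a single neighbour move (the paper alternates between two neighbour-generation strategies and does not fix the array size), and scores a state by a weighted sum of uncovered t-way combinations and total cost. The weights, cooling schedule, and cost function below are illustrative.

```python
import math
import random
from itertools import combinations, product

def anneal_cost_aware(factors, t, rows, cost_fn,
                      w_cover=100.0, w_cost=1.0, steps=20000, temp0=5.0, seed=0):
    """Simulated-annealing sketch for a cost-aware covering array: minimize a
    weighted sum of (i) uncovered t-way option-setting combinations and
    (ii) the total cost of the selected configurations."""
    rng = random.Random(seed)
    k = len(factors)
    required = {tuple(zip(cols, vals))
                for cols in combinations(range(k), t)
                for vals in product(*(factors[c] for c in cols))}

    def fitness(array):
        covered = {combo for combo in required
                   if any(all(row[c] == v for c, v in combo) for row in array)}
        return (w_cover * (len(required) - len(covered))
                + w_cost * sum(cost_fn(row) for row in array))

    current = [tuple(rng.choice(vals) for vals in factors) for _ in range(rows)]
    cur_f = fitness(current)
    best, best_f = list(current), cur_f
    for step in range(steps):
        temp = temp0 * (1.0 - step / steps) + 1e-9
        r, c = rng.randrange(rows), rng.randrange(k)   # neighbour: re-draw one setting
        neighbour = list(current)
        row = list(neighbour[r])
        row[c] = rng.choice(factors[c])
        neighbour[r] = tuple(row)
        f = fitness(neighbour)
        if f <= cur_f or rng.random() < math.exp((cur_f - f) / temp):
            current, cur_f = neighbour, f
            if f < best_f:
                best, best_f = list(neighbour), f
    return best, best_f

# e.g. three binary options where setting option 0 to 1 makes a configuration costly:
array, score = anneal_cost_aware([[0, 1]] * 3, 2, rows=5, cost_fn=lambda row: 1 + 4 * row[0])
```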

17.
Aspect‐oriented programming yields new types of programming faults due to the introduction of new constructs for dealing with crosscutting concerns. To reveal aspect faults, this paper presents a framework for testing whether or not aspect‐oriented programs conform to their state models. It supports two families of strategies (i.e. structure‐oriented and property‐oriented) for automated generation of aspect tests from aspect‐oriented state models. A structure‐oriented testing strategy derives tests and test code from an aspect‐oriented state model to meet a given structural coverage criterion, such as state coverage, transition coverage, or round trip. A property‐oriented testing strategy generates test code from the counterexamples of model checking. Two such strategies are checking an aspect‐oriented state model against trap properties and checking mutants of aspect models against system properties. Mutation analysis of aspect‐oriented programs is used to evaluate the effectiveness of these testing strategies. The experiments demonstrate that testing aspect‐oriented programs against their state models can detect many aspect faults. The comparative evaluations also reveal that the structure‐oriented and property‐oriented testing strategies complement each other—some aspect faults were detected by the structure‐oriented strategies, but not by the property‐oriented strategies and vice versa. Copyright © 2010 John Wiley & Sons, Ltd.  相似文献   
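One of the structure-oriented strategies, transition coverage, can be sketched independently of aspects: from a state model, derive one test sequence per transition by reaching its source state and then firing the transition. The state model below is hypothetical, and the sketch ignores the aspect-specific test-code generation.

```python
from collections import deque

def transition_coverage_tests(transitions, initial):
    """Derive abstract test sequences (event lists) so that every transition of
    the state model is exercised at least once; `transitions` maps
    (state, event) -> next_state."""
    # shortest event path from the initial state to every reachable state
    paths, queue = {initial: []}, deque([initial])
    while queue:
        s = queue.popleft()
        for (src, event), dst in transitions.items():
            if src == s and dst not in paths:
                paths[dst] = paths[s] + [event]
                queue.append(dst)
    # one test per transition: reach the source state, then fire the transition
    return [paths[src] + [event] for (src, event) in transitions if src in paths]

# hypothetical state model of a small buffer class
model = {("Empty", "put"): "Partial", ("Partial", "put"): "Full",
         ("Partial", "get"): "Empty", ("Full", "get"): "Partial"}
print(transition_coverage_tests(model, "Empty"))
```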

18.
We introduce three new families of stochastic algorithms to generate progressive 2D sample point sequences. This opens a general framework that researchers and practitioners may find useful when developing future sample sequences. Our best sequences have the same low sampling error as the best known sequence (a particular randomization of the Sobol’ (0,2) sequence). The sample points are generated using a simple, diagonally alternating strategy that progressively fills in holes in increasingly fine stratifications. The sequences are progressive (hierarchical): any prefix is well distributed, making them suitable for incremental rendering and adaptive sampling. The first sample family is only jittered in 2D; we call it progressive jittered. It is nearly identical to existing sample sequences. The second family is multi‐jittered: the samples are stratified in both 1D and 2D; we call it progressive multi‐jittered. The third family is stratified in all elementary intervals in base 2, hence we call it progressive multi‐jittered (0,2). We compare sampling error and convergence of our sequences with uniform random, best candidates, randomized quasi‐random sequences (Halton and Sobol'), Ahmed's ART sequences, and Perrier's LDBN sequences. We test the sequences on function integration and in two settings that are typical for computer graphics: pixel sampling and area light sampling. Within this new framework we present variations that generate visually pleasing samples with blue noise spectra, and well‐stratified interleaved multi‐class samples; we also suggest possible future variations.  相似文献   
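The diagonally alternating, hole-filling idea can be conveyed with a rough sketch. The code below is only a simplified progressive jittered generator, written under the assumption that quadrupling passes and filling the diagonally opposite sub-cell first suffice to illustrate the scheme; it is not the paper's multi-jittered or (0,2) construction and makes no blue-noise or elementary-interval claims.

```python
import random

def progressive_jittered(passes, seed=0):
    """Rough progressive jittered 2D sequence: each pass quadruples the sample
    count while keeping every power-of-four prefix jittered (one sample per
    grid cell); within a pass the diagonally opposite sub-cells are filled first."""
    rng = random.Random(seed)
    samples = [(rng.random(), rng.random())]
    n = 1                                    # current stratification: n x n grid
    for _ in range(passes):
        diagonal, remaining = [], []
        for x, y in samples:
            i, j = min(int(x * n), n - 1), min(int(y * n), n - 1)
            si, sj = int(x * 2 * n) - 2 * i, int(y * 2 * n) - 2 * j   # occupied sub-cell

            def jitter(qi, qj):              # new point inside sub-cell (qi, qj)
                return ((2 * i + qi + rng.random()) / (2 * n),
                        (2 * j + qj + rng.random()) / (2 * n))

            diagonal.append(jitter(1 - si, 1 - sj))
            rest = [(1 - si, sj), (si, 1 - sj)]
            rng.shuffle(rest)
            remaining.extend(jitter(*q) for q in rest)
        samples += diagonal + remaining      # diagonal sub-pass first keeps prefixes balanced
        n *= 2
    return samples

pts = progressive_jittered(3)                # 64 samples, one in each cell of an 8 x 8 grid
```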

19.
In object‐oriented terms, one of the goals of integration testing is to ensure that messages from objects in one class or component are sent and received in the proper order and have the intended effect on the state of the objects that receive the messages. This research extends an existing single‐class testing technique to integration testing of multiple classes. The single‐class technique models the behaviour of a single class as a finite state machine, transforms the representation into a data flow graph that explicitly identifies the definitions and uses of each state variable of the class, and then applies conventional data flow testing to produce test case specifications that can be used to test the class. This paper extends those ideas to inter‐class testing by developing flow graphs, finding paths between pairs of definitions and uses, detecting some infeasible paths and automatically generating tests for an arbitrary number of classes and components. It introduces flexible representations for message sending and receiving among objects and allows concurrency among any or all classes and components. Data flow graphs are stored in a relational database and database queries are used to gather def‐use information. This approach is conceptually simple, mathematically precise, quite powerful and general enough to be used for traditional data flow analysis. This testing approach relies on finite state machines, database modelling and processing techniques and algorithms for analysis and traversal of directed graphs. The paper presents empirical results of the approach applied to an automotive system. This work was prepared by U.S. Government employees as part of their official duties and is, therefore, a work of the U.S. Government and not subject to copyright. Published in 2006 by John Wiley & Sons, Ltd.  相似文献   
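The def-use pairing at the heart of the approach can be sketched with an in-memory graph search instead of the paper's relational-database queries. The flow graph below is hypothetical; a pair (d, u, v) is reported when a use of state variable v at node u is reachable from a definition at node d along a definition-clear path.

```python
def def_use_pairs(nodes, edges):
    """Enumerate def-use pairs: `nodes` maps node -> {'def': set, 'use': set} of
    state variables, `edges` maps node -> successor list. A pair (d, u, v) means
    node u uses v and is reachable from the definition at d without an
    intervening redefinition of v."""
    pairs = set()
    for d, info in nodes.items():
        for v in info.get('def', set()):
            stack, seen = list(edges.get(d, [])), set()
            while stack:                                  # DFS along definition-clear paths
                u = stack.pop()
                if u in seen:
                    continue
                seen.add(u)
                if v in nodes[u].get('use', set()):
                    pairs.add((d, u, v))
                if v not in nodes[u].get('def', set()):   # v not redefined: keep searching
                    stack.extend(edges.get(u, []))
    return pairs

# hypothetical flow graph for one state variable of an account class
nodes = {"open":    {"def": {"balance"}, "use": set()},
         "deposit": {"def": {"balance"}, "use": {"balance"}},
         "report":  {"def": set(),       "use": {"balance"}}}
edges = {"open": ["deposit", "report"], "deposit": ["report"], "report": []}
print(sorted(def_use_pairs(nodes, edges)))
```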

20.
Polyhedral modeling and re‐meshing algorithms use T‐junctions to add or remove feature lines in a quadrilateral mesh. In many ways this is akin to adaptive knot insertion in a tensor‐product spline, but differs in that the designer or meshing algorithm does not necessarily protect the consistent combinatorial structure that is required to interpret the resulting quad‐dominant mesh as the control net of a hierarchical spline – and so associate a smooth surface with the mesh as in the popular tensor‐product spline paradigm. While G‐splines for multi‐sided holes or generalized subdivision can, in principle, convert quad‐dominant meshes with T‐junctions into smooth surfaces, they do not preserve the two preferred directions and so cause visible shape artifacts. Only recently have n‐gons with T‐junctions (T‐gons) in unstructured quad‐dominant meshes been recognized as a distinct challenge for generalized splines. This paper makes precise the notion of locally quad‐dominant mesh as quad‐meshes including τ‐nets, i.e. T‐gons surrounded by quads; and presents the first high‐quality G‐spline construction that can use τ‐nets as control nets for spline surfaces suitable, e.g., for automobile outer surfaces. Remarkably, T‐gons can be neighbors, separated by only one quad, both of T‐gons and of points where many quads meet. A τ‐net surface cap consists of 16 polynomial pieces of degree (3,5) and is refinable in a way that is consistent with the surrounding surface. An alternative, everywhere bi‐3 cap is not formally smooth, but achieves the same high‐quality highlight line distribution.  相似文献   
