20 similar documents were found (search time: 15 ms).
1.
Wormhole networks have traditionally used deadlock avoidance strategies. More recently, deadlock recovery strategies have begun to gain acceptance. In particular, progressive deadlock recovery techniques allocate a few dedicated resources to quickly deliver deadlocked packets. Deadlock recovery is based on the assumption that deadlocks are rare; otherwise, recovery techniques are not efficient. Measurements of deadlock occurrence frequency show that deadlocks are highly unlikely when enough routing freedom is provided. However, networks are more prone to deadlocks when the network is close to or beyond saturation, causing some network performance degradation. Similar performance degradation behavior at saturation was also observed in networks using deadlock avoidance strategies. In this paper, we take a different approach to handling deadlocks and performance degradation. We propose the use of an injection limitation mechanism that prevents performance degradation near the saturation point and, at the same time, reduces the probability of deadlock to negligible values. We also propose an improved deadlock detection mechanism that uses only local information, detects all deadlocks, and considerably reduces the probability of false deadlock detection over previous proposals. In the rare case when impending deadlock is detected, our proposal consists of using a simple recovery technique that absorbs the deadlocked message at the current node and later reinjects it for continued routing toward its destination. Performance evaluation results show that our new approach to handling deadlock is more efficient than previously proposed techniques.
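A minimal sketch of the two ideas described above. The thresholds, the timeout heuristic, and the router-state fields are assumptions made for this illustration, not the mechanisms evaluated in the paper.

```python
# Illustrative sketch only, not the paper's router logic; the threshold and the
# timeout-based heuristic are assumptions made for this example.

def may_inject(free_virtual_channels: int, total_virtual_channels: int,
               busy_fraction_limit: float = 0.75) -> bool:
    """Injection limitation: hold new packets at the source whenever the local
    router looks congested, keeping the network below its saturation point."""
    busy_fraction = 1.0 - free_virtual_channels / total_virtual_channels
    return busy_fraction < busy_fraction_limit

def suspect_deadlock(cycles_without_progress: int, timeout_cycles: int = 1024) -> bool:
    """Crude local detection: a packet whose header has made no progress for a
    long time is treated as potentially deadlocked, absorbed at the current
    node, and later reinjected toward its destination."""
    return cycles_without_progress >= timeout_cycles
```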
2.
Requirements traceability offers many benefits to software projects, and it has been identified as critical for successful development. However, numerous challenges exist in the implementation of traceability in the software engineering industry. Some of these challenges can be overcome through organizational policy and procedure changes, but the lack of cost-effective traceability models and tools remains an open problem. A novel, cost-effective solution for the traceability tool problem is proposed, prototyped, and tested in a case study using an actual software project. Metrics from the case study are presented to demonstrate the viability of the proposed solution for the traceability tool problem. The results show that the proposed method offers significant advantages over implementing traceability manually or using existing commercial traceability approaches.
4.
GUI systems are becoming increasingly popular thanks to their ease of use when compared against traditional systems. However, GUI systems are often challenging to test due to their complexity and special features. Traditional testing methodologies are not designed to deal with the complexity of GUI systems; using these methodologies can result in increased time and expense. In our proposed strategy, a GUI system will be divided into two abstract tiers: the component tier and the system tier. On the component tier, a flow graph will be created for each GUI component. Each flow graph represents a set of relationships between the pre-conditions, event sequences and post-conditions for the corresponding component. On the system tier, the components are integrated to build up a viewpoint of the entire system. Tests on the system tier will interrogate the interactions between the components. This method for GUI testing is simple and practical; we will show the effectiveness of this approach by performing two empirical experiments and describing the results found.
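As an illustration of the component-tier idea, the sketch below models a flow graph as a set of (pre-conditions, event sequence, post-conditions) edges. The class names, fields, and the LoginDialog example are assumptions made for this sketch, not the authors' tooling.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class FlowEdge:
    pre_conditions: frozenset   # state the component must be in before the events
    event_sequence: tuple       # user events fired on the component
    post_conditions: frozenset  # state expected after the events

@dataclass
class ComponentFlowGraph:
    component: str
    edges: list = field(default_factory=list)

    def component_tier_tests(self):
        """Each edge yields one component-tier test: establish the pre-conditions,
        replay the event sequence, then check the post-conditions."""
        for e in self.edges:
            yield e.pre_conditions, e.event_sequence, e.post_conditions

# Hypothetical component used only to show the shape of the model.
login = ComponentFlowGraph("LoginDialog", [
    FlowEdge(frozenset({"dialog_open"}),
             ("type_user", "type_password", "click_ok"),
             frozenset({"dialog_closed", "user_authenticated"})),
])
```

On the system tier, these per-component graphs would be composed so that tests exercise the interactions between components.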
Ping Li
received her M.Sc. in Computer Engineering from the University of Alberta, Canada, in 2004. She is currently working for Waterloo Hydrogeologic Inc., a Schlumberger Company, as a Software Quality Analyst.
Toan Huynh
received a B.Sc. in Computer Engineering from the University of Alberta, Canada. He is currently a PhD candidate at the same institution. His research interests include: web systems, e-commerce, software testing, vulnerabilities and defect management, and software approaches to the production of secure systems.
Marek Reformat
received his M.Sc. degree from the Technical University of Poznan, Poland, and his Ph.D. from the University of Manitoba, Canada. His interests were related to simulation and modeling in the time domain, as well as evolutionary computing and its application to optimization problems. For three years he worked for the Manitoba HVDC Research Centre, Canada, where he was a member of a simulation software development team. Currently, Marek Reformat is with the Department of Electrical and Computer Engineering at the University of Alberta. His research interests lie in the application of Computational Intelligence techniques, such as neuro-fuzzy systems and evolutionary computing, as well as probabilistic and evidence theories, to intelligent data analysis leading to translating data into knowledge. He applies these methods to conduct research in the areas of Software Engineering, Software Quality in particular, and Knowledge Engineering. Dr. Reformat has been a member of the program committees of several conferences related to Computational Intelligence and evolutionary computing. He is a member of the IEEE Computer Society and ACM.
James Miller
received the B.Sc. and Ph.D. degrees in Computer Science from the University of Strathclyde, Scotland. During this period, he worked on the ESPRIT project GENEDIS on the production of a real-time stereovision system. Subsequently, he worked at the United Kingdom's National Electronic Research Initiative on Pattern Recognition as a Principal Scientist, before returning to the University of Strathclyde to accept a lectureship, and subsequently a senior lectureship, in Computer Science. Initially during this period his research interests were in Computer Vision, and he was a co-investigator on the ESPRIT 2 project VIDIMUS. Since 1993, his research interests have been in Software and Systems Engineering. In 2000, he joined the Department of Electrical and Computer Engineering at the University of Alberta as a full professor, and in 2003 he became an adjunct professor in the Department of Electrical and Computer Engineering at the University of Calgary. He is the principal investigator in a number of research projects that investigate software verification and validation issues across various domains, including embedded, web-based and ubiquitous environments. He has published over one hundred refereed journal and conference papers on Software and Systems Engineering (see www.steam.ualberta.ca for details on recent directions). He currently serves on the program committee for the IEEE International Symposium on Empirical Software Engineering and Measurement, and sits on the editorial board of the Journal of Empirical Software Engineering.
5.
There have been numerous advancements and rising competition in semiconductor technologies. In light of this, the wafer test plays an increasingly significant role in providing prompt yield feedback for quick process improvement. However, the wafer test shop floor is getting more complicated than ever before because of the increasing change-over rate, nonlinear wafer arrival, and preemption by urgent orders. Furthermore, the foundry wafer test is a heterogeneous production with different production cycle times and a large variety of nonidentical testers. Shop floor conditions, including the work in process (WIP) pool, tester status, and work order priority, continuously change. There is a need to operate this kind of production line so that it simultaneously fulfills multiple objectives: maximum confirmed line item performance (CLIP) for normal lots, 100% CLIP for urgent lots, minimum change-over rate, and shortest cycle time. Thus, a reactive dispatching approach is proposed that is expected to deliver a real-time solution no matter how the shop floor conditions change. The dynamic approach is mainly triggered by two kinds of major events: one is when an urgent lot comes in, and the other is when a tester is idle. In addition, a two-phase dispatching algorithm, with lot ranking and lot assignment methods, suggests prioritized WIP lots and an appropriate lot assignment. A better performance measure is obtained by considering the multiple objectives the wafer test operations seek to achieve.
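A compact sketch of the event-driven, two-phase idea described above. The lot attributes, tester fields, and ranking keys are assumptions made for this example, not the published dispatching rules.

```python
# Sketch only: hypothetical lot/tester dictionaries stand in for real shop-floor data.

def rank_lots(wip_lots):
    """Phase 1 (lot ranking): urgent lots first, then lots closest to their due time."""
    return sorted(wip_lots,
                  key=lambda lot: (0 if lot["urgent"] else 1, lot["due_in_hours"]))

def assign_lot(ranked_lots, idle_tester):
    """Phase 2 (lot assignment): prefer a compatible lot that needs no change-over
    on the idle tester; otherwise take the highest-ranked compatible lot."""
    compatible = [lot for lot in ranked_lots
                  if idle_tester["type"] in lot["tester_types"]]
    if not compatible:
        return None
    no_changeover = [lot for lot in compatible
                     if lot["test_program"] == idle_tester["loaded_program"]]
    return (no_changeover or compatible)[0]

# The dispatcher would call rank_lots/assign_lot whenever one of the two trigger
# events occurs: an urgent lot arrives, or a tester becomes idle.
```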
6.
An approach to testing the consistency of specifications is explored, which is applicable to the design validation of communication protocols and other cases of step-wise refinement. In this approach, a testing module compares a trace of interactions obtained from an execution of the refined specification (e.g., the protocol specification) with the reference specification (e.g., the communication service specification). Nondeterminism in reference specifications presents certain problems. Using an extended finite state transition model for the specifications, a strategy for limiting the amount of nondeterminacy is presented. An automated method for constructing a testing module for a given reference specification is discussed. Experience with the application of this testing approach to the design of a transport protocol and a distributed mutual exclusion algorithm is described.
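The core check can be pictured as follows: because the reference specification may be nondeterministic, the testing module has to track every state the reference could currently be in while it consumes the observed trace. The dictionary-based transition relation below is an assumption made for illustration, not the paper's extended finite state transition model.

```python
# Illustrative sketch: check an interaction trace against a nondeterministic
# reference specification by tracking the set of states it could be in.

def trace_conforms(trace, transitions, initial_states):
    """transitions: dict mapping (state, interaction) -> set of possible next states.
    The trace is rejected as soon as no execution of the reference explains it."""
    possible = set(initial_states)
    for interaction in trace:
        possible = {s2 for s in possible
                    for s2 in transitions.get((s, interaction), ())}
        if not possible:
            return False   # the refined spec produced an interaction the reference disallows
    return True

# Hypothetical reference specification for a tiny connection service.
spec = {("idle", "connect_req"): {"wait"},
        ("wait", "connect_conf"): {"open"},
        ("wait", "disconnect"): {"idle"}}
print(trace_conforms(["connect_req", "connect_conf"], spec, {"idle"}))   # True
```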
7.
On a cloud platform, user requests are managed through workload units called cloudlets, which are assigned to virtual machines through a cloudlet scheduling mechanism that mainly aims at minimizing the request processing time by producing effective small-length schedules. The efficient request processing, however, requires excessive utilization of high-performance resources, which incurs large overhead in terms of monetary cost and energy consumed by physical machines, thereby rendering cloud platforms inadequate for cost-effective green computing environments. This paper proposes a power-aware cloudlet scheduling (PACS) algorithm for mapping cloudlets to virtual machines. The algorithm aims at reducing the request processing time through small-length schedules while minimizing energy consumption and the cost incurred. For allocation of virtual machines to cloudlets, the algorithm iteratively arranges virtual machines (VMs) in groups using weights computed through optimization and rescaling of parameters including VM resources, cost of utilization of resources, and power consumption. The experiments performed with a diverse set of configurations of cloudlets and virtual machines show that the PACS algorithm achieves a significant overall performance improvement factor ranging from 3.80 to 23.82 over other well-known cloudlet scheduling algorithms.
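The weighting step can be illustrated as below; the chosen parameters, the min-max rescaling, and the coefficient values are assumptions for this sketch, not the published PACS formula.

```python
# Sketch of the weighting idea only; parameter names and weights are illustrative.

def rescale(values):
    """Min-max rescaling of one parameter across all VMs into [0, 1]."""
    lo, hi = min(values), max(values)
    return [0.0 if hi == lo else (v - lo) / (hi - lo) for v in values]

def vm_weights(vms, w_perf=0.5, w_cost=0.3, w_power=0.2):
    """Higher weight = more processing capacity for less cost and power."""
    perf  = rescale([vm["mips"] for vm in vms])
    cost  = rescale([vm["cost_per_hour"] for vm in vms])
    power = rescale([vm["watts"] for vm in vms])
    return [w_perf * p - w_cost * c - w_power * e
            for p, c, e in zip(perf, cost, power)]

# Cloudlets would then be mapped to VM groups ordered by these weights.
```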
8.
Correct functioning of object-oriented software depends upon the successful integration of classes. While individual classes may function correctly, several new faults can arise when these classes are integrated together. In this paper, we present a technique to enhance testing of interactions among modal classes. The technique combines UML collaboration diagrams and statecharts to automatically generate an intermediate test model, called SCOTEM (State COllaboration TEst Model). The SCOTEM is then used to generate valid test paths. We also define various coverage criteria to generate test paths from the SCOTEM model. In order to assess our technique, we have developed a tool and applied it to a case study to investigate its fault detection capability. The results show that the proposed technique effectively detects all the seeded integration faults when complying with the most demanding adequacy criterion and still achieves reasonably good results for less expensive adequacy criteria.
9.
Unique Input–Output sequences (UIOs) are quite commonly used in conformance testing. Unfortunately, finding UIOs of minimal length is an NP-hard problem. This study presents a hybrid approach to generate UIOs automatically on the basis of a finite state machine (FSM) specification. The proposed hybrid approach harnesses the benefits of hill climbing (a greedy search) and a heuristic algorithm. Hill climbing, which exploits domain knowledge, is capable of quickly generating good results; however, it may get stuck in a local minimum. To overcome this problem, we use a set of parameters called the seed, which allows the algorithm to generate different results for different seeds. The hill climbing generates solutions implied by the seed, while the Genetic Algorithm is used as the seed generator. We compared the hybrid approach with a Genetic Algorithm, Simulated Annealing, a Greedy Algorithm, and Random Search. The experimental evaluation shows that the proposed hybrid approach outperforms the other methods. More specifically, we showed that the Genetic Algorithm and Simulated Annealing exhibit similar performance, while both of them outperform the Greedy Algorithm. Finally, we generalize the proposed hybrid approach to seed-driven hybrid architectures and elaborate on how it can be adapted to a broad range of optimization problems.
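The seed-driven structure can be outlined as in the sketch below, where, as a simplification, the seed is taken to be the hill climber's starting point. The fitness function, neighbourhood, and genetic operators are placeholders supplied by the caller, not the paper's definitions (which encode UIO generation over an FSM).

```python
import random

def hill_climb(seed, fitness, neighbours, steps=100):
    """Greedy local search started from (and biased by) a seed; lower fitness is better."""
    best = seed
    for _ in range(steps):
        candidate = min(neighbours(best), key=fitness, default=best)
        if fitness(candidate) >= fitness(best):
            break                     # stuck in a local minimum
        best = candidate
    return best

def hybrid_search(fitness, neighbours, random_seed, mutate, generations=50, pop=20):
    """Outer genetic loop evolves seeds; each seed is scored by the quality of the
    solution the inner hill climber reaches from it."""
    seeds = [random_seed() for _ in range(pop)]
    best = None
    for _ in range(generations):
        scored = sorted(((hill_climb(s, fitness, neighbours), s) for s in seeds),
                        key=lambda pair: fitness(pair[0]))
        if best is None or fitness(scored[0][0]) < fitness(best):
            best = scored[0][0]
        parents = [s for _, s in scored[: pop // 2]]
        seeds = parents + [mutate(random.choice(parents))
                           for _ in range(pop - len(parents))]
    return best
```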
10.
An approach is presented that uses the following techniques: automatic test case generation, self-checking test cases, black box test cases, random test cases, sampling, a form of exhaustive testing, correctness measurements, and the correction of defects in the test cases instead of in the product (defect circumvention). The techniques are cost-effective and have been applied to very large products.
11.
The testing phase of the software development process consumes about one-half of the development time and resources. This paper addresses the automation of the analysis stage of testing. Dual programming is introduced as one approach to implement this automation. It uses a higher level language to duplicate the functionality of the software under test. We contend that a higher level language (HLL) uses fewer lines of code than a lower level language (LLL) to achieve the same functionality, so testing the HLL program will require less effort than testing the LLL equivalent. The HLL program becomes the oracle for the LLL version. This paper describes experiments carried out using different categories of applications, and it identifies those most likely to profit from this approach. A metric is used to quantify savings realized. The results of the research are: (a) that dual programming can be used to automate the analysis stage of software testing; (b) that substantial savings of the cost of this testing phase can be realized when the appropriate pairing of primal and dual languages is made, and (c) that it is now possible to build a totally automated testing system. Recommendations are made regarding the applicability of the method to specific classes of applications.
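A small sketch of dual programming used as an oracle. It assumes a hypothetical lower-level executable `./sut` that reads whitespace-separated integers on stdin and prints them in sorted order; the Python function `dual` re-implements the same functionality in far fewer lines and serves as the oracle during random testing.

```python
# Sketch under stated assumptions: `./sut` is a hypothetical program under test.
import random
import subprocess

def dual(numbers):
    """Higher-level re-implementation of the functionality under test (the oracle)."""
    return sorted(numbers)

def run_sut(numbers):
    """Run the lower-level program under test and parse its output."""
    out = subprocess.run(["./sut"], input=" ".join(map(str, numbers)),
                         capture_output=True, text=True, check=True)
    return [int(tok) for tok in out.stdout.split()]

def random_test(trials=1000):
    """Automated analysis stage: compare the SUT against its dual on random inputs."""
    for _ in range(trials):
        case = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
        if run_sut(case) != dual(case):
            return case               # a failing input to report
    return None
```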
12.
A partition testing strategy consists of two components: a partitioning scheme which determines the way in which the program's input domain is partitioned into subdomains; and an allocation of test cases which determines the exact number of test cases selected from each subdomain. This paper investigates the problem of determining the test allocation when a particular partitioning scheme has been chosen. We show that this problem can be formulated as a classic problem of decision-making under uncertainty, and analyze several well known criteria to resolve this kind of problem. We present algorithms that solve the test allocation problem based on these criteria, and evaluate these criteria by means of a simulation experiment. We also discuss the applicability and implications of applying these criteria in the context of partition testing.
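For concreteness, one simple allocation rule is sketched below: size-proportional allocation with at least one test per subdomain. This particular rule is only an illustration; the paper analyzes several decision-theoretic criteria rather than this specific formula.

```python
def proportional_allocation(subdomain_sizes, budget):
    """Split a test budget across subdomains in proportion to subdomain size,
    while guaranteeing at least one test case per subdomain."""
    k = len(subdomain_sizes)
    assert budget >= k, "need at least one test per subdomain"
    total = sum(subdomain_sizes)
    alloc = [1] * k                               # every subdomain gets sampled
    shares = [(budget - k) * s / total for s in subdomain_sizes]
    for i in range(k):
        alloc[i] += int(shares[i])
    # hand out the tests lost to integer truncation, largest fractional part first
    by_fraction = sorted(range(k), key=lambda i: shares[i] - int(shares[i]), reverse=True)
    for i in by_fraction[: budget - sum(alloc)]:
        alloc[i] += 1
    return alloc

print(proportional_allocation([100, 40, 10], budget=20))   # -> [12, 6, 2]
```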
13.
A formal framework for the analysis of execution traces collected from distributed systems at run-time is presented. We introduce the notions of event and message traces to capture the consistency of causal dependencies between the elements of a trace. We formulate an approach to property testing where a partially ordered execution trace is modeled by a collection of communicating automata. We prove that the model exactly characterizes the causality relation between the events/messages in the observed trace and discuss the implementation of this approach in SDL, where ObjectGEODE is used to verify properties using model-checking techniques. Finally, we illustrate the approach with industrial case studies.
Received May 2004, Revised February 2005, Accepted April 2005 by J. Derrick, M. Harman and R. M. Hierons
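A toy illustration of the kind of causal-consistency condition an observed trace must satisfy: every receive must be explained by an earlier matching send. The event encoding is an assumption made for this sketch; the paper itself works with communicating automata and ObjectGEODE model checking rather than this ad-hoc check.

```python
# Sketch only: events are ('send'|'recv', message_id) pairs in observation order.

def causally_consistent(trace):
    """Reject a trace in which some message is observed as received before being sent."""
    sent = set()
    for kind, msg in trace:
        if kind == "send":
            sent.add(msg)
        elif kind == "recv" and msg not in sent:
            return False
    return True

print(causally_consistent([("send", "m1"), ("recv", "m1")]))   # True
print(causally_consistent([("recv", "m2"), ("send", "m2")]))   # False
```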
14.
This paper deals with testing distributed software systems. In the past, two important problems have been determined for executing tests using a distributed test architecture: controllability and observability problems. A coordinated test method has subsequently been proposed to solve these two problems. In the present article: 1) we show that controllability and observability are indeed resolved if and only if the test system respects timing constraints, even when the system under test is non-real-time; 2) we determine these timing constraints; 3) we determine other timing constraints which optimize the duration of test execution; 4) we show that the communication medium used by the test system does not necessarily have to be FIFO; and 5) we show that the centralized test method can be considered just as a particular case of the proposed coordinated test method.
15.
Integration of reused, well-designed components and subsystems is a common practice in software development. Hence, testing integration interfaces is a key activity, and a whole range of technical challenges arise from the complexity and versatility of such components.
16.
Prompted by an application in congestion control of computer networks, this paper presents a methodology to certify with a certain degree of confidence whether a given controller satisfies a predefined set of specifications. The methodology is applied to testing AQM controllers for efficiency in congestion control, by repeating detailed simulations under a variety of network configurations.
17.
If the statistics available from various process steps involved in the fabrication of large-scale integrated logic circuit chips indicate that the computed probability of each circuit path operating properly is greater than 1/2, then a reliable screening test procedure can be devised. A reliable reference standard from untested chips, or modules, can be constructed, and such a standard reference can be used in all test procedures in which circuit testing is based on comparison.
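The intuition can be illustrated with a bit-wise majority vote: if each response bit from an untested chip is correct with probability greater than 1/2, the majority over an odd number of chips is correct with higher probability than any single chip, and can serve as the reference standard for comparison-based screening. The list-of-bits encoding of chip outputs is an assumption made for this sketch.

```python
from collections import Counter

def majority_reference(chip_outputs):
    """Bit-wise majority over the outputs of an odd number of untested chips."""
    assert len(chip_outputs) % 2 == 1
    return [Counter(bits).most_common(1)[0][0] for bits in zip(*chip_outputs)]

def screen(chip_output, reference):
    """Comparison-based screening against the constructed reference standard."""
    return chip_output == reference

ref = majority_reference([[1, 0, 1], [1, 1, 1], [1, 0, 0]])   # -> [1, 0, 1]
print(screen([1, 0, 1], ref))                                  # True
```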
18.
Conclusion. The notion of compatibility of automata was proposed in [1] for the formalization of requirements that must be met by interacting partial automata. Testing the compatibility of automata is of essential importance for the design of systems that interact with the environment, especially when we use a declarative specification of the system to be designed. Under the assumptions of this article for the automaton that models the environment, partiality of the specified automaton is a source of possible incompatibility with the environment. When declarative specification is used, we can never decide in advance if the specified automaton is partial or not. Moreover, even a specification that a priori describes a completely defined automaton may be altered by the actions of the designer in the process of design (especially if these actions are incorrect) so that the specified automaton becomes partial. Therefore the initial specification, and each successive specification produced by human intervention in the design process, must be tested for compatibility with the environment.

In the methodology of verification design of automata, compatibility testing is used to solve two problems: a) generating the specification of the class of all automata that satisfy the initial specification and are compatible with the specification of the environment; b) testing for correctness the designer's decisions that alter the current specification of the automaton being designed.

The results of this article have led to the development of an efficient resolution procedure for testing the compatibility of an automaton specification with the specification of the environment. This procedure has been implemented in the system for verification design of automata from their logical specifications. The efficiency of the developed procedure is based on the results of the compatibility analysis of automata from [1] and on the restricted resolution strategy whose completeness and correctness have been proved in [2].
Translated from Kibernetika i Sistemnyi Analiz, No. 6, pp. 36–50, November–December, 1994.
19.
Hypothesis testing using constrained null models can be used to compute the significance of data mining results given what is already known about the data. We study the novel problem of finding the smallest set of patterns that explains most about the data in terms of a global p value. The resulting set of patterns, such as frequent patterns or clusterings, is the smallest set that statistically explains the data. We show that the newly formulated problem is, in its general form, NP-hard and there exists no efficient algorithm with finite approximation ratio. However, we show that in a special case a solution can be computed efficiently with a provable approximation ratio. We find that a greedy algorithm gives good results on real data and that, using our approach, we can formulate and solve many known data-mining tasks. We demonstrate our method on several data mining tasks. We conclude that our framework is able to identify in various settings a small set of patterns that statistically explains the data and to formulate data mining problems in terms of statistical significance.
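The greedy step can be sketched as follows. Computing the global p value under a constrained null model is the paper-specific part; here a caller-supplied `explains` function (the set of data items a pattern accounts for) stands in for that score, so this is only an outline of the selection loop.

```python
def greedy_pattern_set(candidates, data_items, explains, target_fraction=0.95):
    """Repeatedly add the pattern explaining the most still-unexplained items,
    stopping once the requested fraction of the data is accounted for."""
    chosen, uncovered = [], set(data_items)
    allowed_unexplained = (1.0 - target_fraction) * len(data_items)
    while len(uncovered) > allowed_unexplained:
        best = max(candidates, key=lambda p: len(explains(p) & uncovered), default=None)
        if best is None or not (explains(best) & uncovered):
            break                     # no remaining pattern improves the explanation
        chosen.append(best)
        uncovered -= explains(best)
    return chosen
```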
20.
Based on the characteristics of object-oriented software, this article presents an object-oriented testing model. It discusses in detail the testing strategies for object-oriented unit testing, object-oriented integration testing, and object-oriented system testing, as well as the corresponding test case design methods.